On Iteration Speed
If you’ve been reading my shared thoughts, then you’ve already heard me talk about how I see much of the world through the lens of experiments. Being able to design, execute, and analyze an experiment is critical to uncovering the hidden realities around us. It is the essence of the scientific method, and it is responsible for much of the progress the world has seen over the past 400 years. And when I talk about experiments, I’m not just referring to cells-in-a-dish experiments. I’m talking about experiments in product development, organizational design, hiring approaches, and business models. There are many ways beyond “science” to use the scientific method: reason inductively, form a hypothesis, test it, analyze the results, and update your beliefs.
Something I’ve learned about experimentation that is critical to progress is iteration speed. I have witnessed organizations treat experimentation as a massive burden (or rather, their systems give them no way to treat it as anything else), such that each new experiment takes weeks or months to plan, execute, and analyze. In these places, experiments are infrequent events, and meaningful learning becomes an infrequent event as well.
There can be many reasons for this, some valid, some less so. It’s possible that the cost of running an experiment is too high for it to be a frequent occurrence. A great example is the system and process of randomized controlled clinical trials: the cost and complexity are simply too high for rapid iteration when testing new therapeutic candidates in humans. Another reason is fear of failure or of negative results from an experiment. Again, this is very valid in the case of human clinical studies - we absolutely want to minimize the risk of causing harm to other humans. But there are many types of experimentation where both of these concerns can be appropriately addressed, and doing so requires investment.
A key approach to improving iteration speed is to invest deeply in the background processes that account for much of the cost (monetary, labor, brain power, etc.) of designing, executing, and analyzing an experiment. This requires standardizing those processes, and in turn, a willingness to give up some flexibility. It’s a tradeoff that can dramatically unlock iteration speed, and thus discovery and learning. For physical experimentation, that might look like creating clear standard operating procedures (SOPs) and standardizing design components (e.g. deciding that an experiment class will always use 8-point half-log dose response curves, or will always use the same set of experimental controls). For product development, that might look like honing your CI/CD process so it takes less of your attention, or implementing a robust feature flagging system so that experimental features can be turned on or off easily (see the sketch below). It can be tempting to just focus on “running the next experiment”, but you do so at the opportunity cost of not increasing your iteration speed. Often that tradeoff is worth it, but if you defer investment in your platform for too long, the opportunity cost will compound significantly.
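To make the feature-flag idea concrete, here is a minimal sketch in Python. The flag registry, the is_enabled() helper, and the FEATURE_* environment-variable override are illustrative assumptions of mine, not a reference to any particular tool; production-grade systems (whether homegrown or off the shelf) layer per-user targeting, gradual rollouts, and audit trails on top of this basic shape.

```python
# Minimal, illustrative feature-flag helper (assumed names, not a real API).
import os

# Default state for each experimental feature.
FLAG_DEFAULTS = {
    "new_onboarding_flow": False,
    "streamlined_checkout": True,
}

def is_enabled(flag: str) -> bool:
    """Return the flag state, letting an environment variable override
    the default so an experiment can be toggled without a deploy."""
    override = os.getenv(f"FEATURE_{flag.upper()}")
    if override is not None:
        return override.lower() in ("1", "true", "on")
    return FLAG_DEFAULTS.get(flag, False)

# Usage: gate the experimental code path behind the flag.
if is_enabled("new_onboarding_flow"):
    pass  # experimental variant
else:
    pass  # current behavior
```

The point of the pattern is that starting or stopping an experiment becomes a configuration change rather than a deploy - exactly the kind of background investment that compounds into iteration speed.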
Another key approach is more of a mental or cultural one, and it pertains to your relationship with failure. Experiments fail all the time, and we sometimes internalize that as an indication of our own failure. I have seen teams paralyzed by fear, leading to a prolonged state of inaction. They do not run the experiment they feel they need to run because there is a non-negligible probability of failure. Momentum gets squandered and nothing gets learned. The best teams I’ve seen embrace the possibility of failure as a learning opportunity. Please don’t misunderstand me - these teams don’t go seeking failure. They don’t identify the riskiest experiment they could conduct and pursue it because it is “bold” or something like that. But they have a healthy relationship with failure. In the 21st century, our psyches overestimate the consequences of failure. For ages, failure (to hunt food, build an adequate shelter, find a water source) meant death, and social failure meant exile. Our brains are still wired with this failure-consequence relationship intact; they haven’t evolved to recognize that most of the ways we can fail today carry no lasting consequences and instead lead to learning. So check your relationship with failure, and ask yourself whether it is holding you back from rapid experimentation and significant learning.
Finding ways to lower the costs (in all forms) of experimentation and adjusting your relationship with failure can remove the barriers to increasing your experimental iteration speed, which may unlock a new wave of learning and improvement wherever you are.