On Biotech Interdependence
One thing I’m always interested in with each biotech I advise, as they get started, is how they handle metadata. Most of them tend to think of this as metadata entry or capture - something you do to describe what you’ve done. If you’re working with microtiter plates, you’re dealing with platemaps; if you’re working with samples, a sample sheet. These describe what went into each well or each sample - which compound(s) at which concentrations, at which time points, combined with which other biological entities, on which cell types/lines, etc. And that capture is often treated almost as an afterthought.
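For concreteness, here is a minimal sketch of what a structured platemap record might look like. The class and field names (Treatment, WellRecord, concentration_um, etc.) are hypothetical, purely to illustrate the kind of schema a team might agree on, not any particular company’s system:

```python
from dataclasses import dataclass, field

@dataclass
class Treatment:
    """One perturbation applied to a well (hypothetical fields, for illustration)."""
    compound: str             # e.g. a registered compound ID like "CMPD-123"
    concentration_um: float   # concentration in micromolar
    timepoint_h: float        # hours post-treatment

@dataclass
class WellRecord:
    """Metadata describing a single well on a microtiter plate."""
    plate_id: str
    well: str                 # e.g. "B07"
    cell_line: str            # e.g. "HEK293T"
    treatments: list[Treatment] = field(default_factory=list)

# A structured record like this is unambiguous and machine-readable,
# unlike a free-text note scribbled in a notebook or a spreadsheet cell.
example = WellRecord(
    plate_id="PLATE-0042",
    well="B07",
    cell_line="HEK293T",
    treatments=[Treatment(compound="CMPD-123", concentration_um=1.0, timepoint_h=24.0)],
)
```

The specific fields matter less than the fact that they are agreed upon and shared; that is what lets anyone else in the organization interpret and integrate the data later.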
It makes sense, too. The primary goal is to execute the experiment, right? At least, that’s often how the wet-lab team thinks about their primary responsibilities; since they are the ones in control of the pipettes, or the robots, they often end up being the ones responsible for recording what they did and what went where. Unfortunately, that task is often a second-class citizen among their mental responsibilities. But occasionally they also end up in charge of analyzing the experiment, which you might think is a good thing, because then they would have a good reason to be clear about what happened and to record the metadata in a structured way. In fact, the wet-lab team being responsible for the data analysis often creates a perverse incentive not to capture metadata in a well-structured way. While it might be better (for the organization, for clarity, for potential future use cases) to create a well-defined structure, it is often harder initially to agree upon a common language or schema, and slower to actually adhere to one than to just jot down notes that you are confident you’ll remember yourself. And so the metadata often gets captured in a non-systematic way that only the capturer knows how to interpret, preventing integration into broader systems or analysis by others (and creating a form of technical/scientific debt - more on that in another post).
Some folks advocate for a new generation of biotech employees - ones who are as comfortable at a keyboard in VSCode as they are with a pipette in hand, capable of designing, executing, and then analyzing their own experiments. These employees would indeed be quite capable, and in a very small organization or lab they would be extremely valuable. But as an organization grows and its complexity increases, these independent super-scientists (or at least the way they tend to be used in an organization) can become a liability. To be clear, it’s not the scientists themselves who are the liability, but rather the independence. That independence can create significant speed in isolation, but a serious collective slowdown, because it enables the siloing of information and knowledge.
So what am I arguing for? Dependence of wet-lab scientists on their dry-lab comrades, with the computational counterparts operating as a service organization for the scientists? No. That is akin to the prior norm of IT service organizations operating at the beck and call of the science organization (still found in a number of older pharma and biotech companies that are in the process of digital modernization). I do not advocate going back there, but rather moving towards an era of interdependence between the wet lab and the dry lab. Only when both are equally represented throughout an organization - on project teams, on strategic committees, and at the highest levels of leadership - will we see the kinds of relationships emerge that lead to behaviors and processes that create collective, organizational speed, and with it, significant scientific progress.