The Rise of Unit Operations: Delivering Operational Consistency, Quality & Reproducibility to Pharma

Commercial Director, Preclinical Pharma

As drug discovery outsourcing continues to grow over the next decade, rising to a predicted $43.7 billion industry by 2026, so too will competition among contract life sciences R&D firms. Internal services groups within the largest pharma are also part of this landscape, competing for the same headcount and sample volume as the CROs.

Central to the development of new biologics and NMEs is a supporting system of CROs, and central to the proposition of these firms is the speed at which they can deliver value for big pharma clients. Software has an enormous part to play here. Aspects of regulated drug development, such as process QC and auditing, can be dramatically accelerated by software. 

Yet enterprise software implementations are frequently dogged by lengthy implementations (slow “time-to-value”), technical barriers to end-user satisfaction, and a lack of scalability – experience shows that the time and skill required to realize the value of software licenses is a major component of the total cost of ownership.

External forces are at work too. Amid the shift towards outsourcing and the convergence of manufacturing and R&D, can data governance and quality be upheld? In other words, how can software introduce greater control as the delivery model becomes further subject to outside forces? And at CROs, where maintaining quality at high sample throughput is paramount, how can software help satisfy a variable base of sponsor preferences?

The answer lies in applying consistency, quality, and reproducibility – the same characteristics sought in scientific data – to the way scientific data systems are deployed. That is not consistently the case today. Many scientific software implementations are described (sometimes derisively) as “snowflakes.” Customized and configured to the maximum degree, often at significant effort, they accommodate local, technical, and scientific needs. They have succeeded at making COTS (commercial off-the-shelf) software fit for purpose, a laudable accomplishment. However, three unintended consequences frequently result. Are these installations portable? Are they upgradable? And, short of that, are they even maintainable?

Of course, that depends. Modern scientific software is typically configured through a set of graphical user interfaces, scripts, and server parameters. The most rigorous implementations are fully documented and versioned. Some software vendors, IDBS among them, offer tools to make templates and configurations more portable. More typically, however, configurations must be reproduced from instance to instance – from development, to test, to production. Failure to maintain parity among those instances can have a devastating impact on system validation and user acceptance testing. In other words, these disparities introduce risk.
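As an illustration of why parity matters, here is a minimal sketch of a configuration-parity check. It assumes each instance’s configuration (templates, scripts, server parameters) can be exported to a directory of files; the paths and environment names are hypothetical rather than drawn from any particular product.

```python
# Minimal sketch: fingerprint exported configuration artifacts per instance
# and report which files differ. Paths and environment names are illustrative.
import hashlib
from pathlib import Path


def fingerprint(export_dir: str) -> dict[str, str]:
    """Return a {relative_path: sha256} map for every exported config file."""
    root = Path(export_dir)
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*"))
        if p.is_file()
    }


def report_drift(envs: dict[str, str]) -> None:
    """Compare each environment's export against the first one listed."""
    names = list(envs)
    baseline = fingerprint(envs[names[0]])
    for name in names[1:]:
        other = fingerprint(envs[name])
        drift = {
            path
            for path in baseline.keys() | other.keys()
            if baseline.get(path) != other.get(path)
        }
        status = "in parity" if not drift else f"drift in {sorted(drift)}"
        print(f"{names[0]} vs {name}: {status}")


if __name__ == "__main__":
    # Hypothetical export locations for each instance.
    report_drift({
        "development": "exports/dev",
        "test": "exports/test",
        "production": "exports/prod",
    })
```

Run against matching exports from each environment, any difference in the fingerprints points directly at the files that have drifted, before validation or user acceptance testing is put at risk.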

Maintenance and upgradability are a different story. Reissuing a template as part of an upgrade can result in the loss of local enhancements. Some customizations to web-based tools are performed with blunt instruments – such as the manual revision of JavaScript pages (although no vendor would recommend it!). Such adjustments are easily lost when documentation is missing or continuity is broken. And when local configurations are performed inconsistently, extensive detective work may be required to understand the system design before technical issues can be addressed; system maintenance becomes extremely challenging.

Enter Unit Operations

One of the great strengths of modern configurable scientific data systems over legacy LIMS is their flexibility. Can we channel that flexibility in a way that minimizes the “snowflake” phenomenon? Experience with unit operations suggests this is possible. 

Unit operations offer a means by which system design can become predictable, reproducible, and consistent. There are at least two ways a unit operation can be defined: first, as a repeatable, reusable task – for example, querying the inventory for a container barcode; and second, as a generic, parameterized mechanism for defining a workflow. The latter is where the remainder of this article will focus.
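As a minimal, hypothetical sketch of the first definition, the snippet below wraps an inventory lookup as a named, reusable task; the lookup itself is a stand-in for whatever call a real data platform exposes, and every name is illustrative. (The second, parameterized-workflow definition is sketched in the next section.)

```python
# Minimal sketch: a unit operation as a repeatable, reusable task.
# The inventory lookup is a stand-in; all names are illustrative.
from dataclasses import dataclass
from typing import Callable


@dataclass
class UnitOperation:
    name: str
    run: Callable[..., dict]  # the reusable task itself


def lookup_container(barcode: str) -> dict:
    """Stand-in for a real inventory query."""
    return {"barcode": barcode, "location": "freezer-02", "status": "available"}


query_inventory = UnitOperation(name="Query inventory by barcode", run=lookup_container)
print(query_inventory.run("CNT-000123"))  # the same task, reused wherever a barcode is scanned
```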

Imagine defining a laboratory process within software. This might begin with a simple set of metadata describing the work to be performed and data to be captured. Electronic lab notebooks have functioned at this level for years. Then consider what other elements of an experiment have dimensions themselves: samples, procedure steps, instrument parameters at each step, appropriate personnel. Because each dimension may have any number of values (for example, a procedure may have three steps, or five), and a design may have any number of dimensions, a data model begins to take shape.
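As a minimal sketch of that model – assuming a workflow design is nothing more than descriptive metadata plus a set of named dimensions, each carrying any number of values – consider the following; field names and example values are hypothetical.

```python
# Minimal sketch: a workflow design as metadata plus named dimensions.
# Field names and example values are hypothetical.
from dataclasses import dataclass, field


@dataclass
class Dimension:
    name: str
    values: list  # e.g. three procedure steps, or five


@dataclass
class WorkflowDesign:
    metadata: dict  # what work is performed and what data is captured
    dimensions: list[Dimension] = field(default_factory=list)

    def add_dimension(self, name: str, values: list) -> None:
        self.dimensions.append(Dimension(name, values))


design = WorkflowDesign(metadata={"title": "Blend uniformity study", "data_captured": "assay results"})
design.add_dimension("samples", ["S-001", "S-002", "S-003"])
design.add_dimension("procedure steps", ["weigh", "blend", "sample"])
design.add_dimension("instrument parameters", [{"blender_rpm": 20}, {"blend_time_min": 15}])
design.add_dimension("personnel", ["analyst", "reviewer"])
```

Because new dimensions can be added without changing the underlying structure, the same template accommodates a three-step procedure as readily as a five-step one.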

This approach can be the core of a unit operations practice. The flexibility of an electronic system is preserved, with “guard rails” around key dimensions that keep the design extensible. It is an especially powerful tool in the areas of method management and method execution.

A Means of Operationalizing Data Governance

A scientific data strategy is now accepted as a sine qua non of mature research organizations. One component of this strategy is the establishment of data governance standards and practices. Even initiating this effort is a colossal task for organizations faced with a plethora of systems, strong personalities, and vigorously independent research groups. However, there are immediate organizational and operational benefits to undertaking the process. Forming and executing a data strategy requires understanding others’ work and their data. And treating data as an asset contemplates a future where data is findable, accessible, interoperable, and reusable (FAIR).

Unit operations can integrate data governance into template design. Because these digital units of work adhere to a pattern, template designers are guided into a standard practice for how data is stored and referenced. Scientist end users can then assemble their own reusable workflows from generic unit operation templates. For instance, a formulations group can use the same format to deploy their blending, milling, and compression steps, link those steps together under a campaign, and thread the progress and genealogy of samples of interest into a simple dashboard, as the sketch below illustrates. Unit operations reduce the effort required to design workflows such as these, decrease reliance on expert template designers, give users familiar workspaces in method execution from one process to the next, and support the highest-quality reporting.
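A minimal, hypothetical sketch of that formulations example follows: each process step is an instance of the same generic pattern, a campaign links the steps together, and the genealogy of a sample of interest can be walked back through the chain. All class names, step names, and lot identifiers are illustrative.

```python
# Minimal sketch: one generic pattern reused for blending, milling, and
# compression, linked under a campaign with sample genealogy. Names are illustrative.
from dataclasses import dataclass, field


@dataclass
class UnitOpExecution:
    template: str        # the shared, generic template
    step: str            # blending, milling, compression, ...
    inputs: list[str]    # materials consumed
    outputs: list[str]   # materials produced


@dataclass
class Campaign:
    name: str
    steps: list[UnitOpExecution] = field(default_factory=list)

    def genealogy(self, sample: str) -> list[str]:
        """Walk backwards from a sample of interest to its parent materials."""
        lineage, frontier = [], {sample}
        for op in reversed(self.steps):
            if frontier & set(op.outputs):
                lineage.append(f"{op.step}: {op.inputs} -> {op.outputs}")
                frontier |= set(op.inputs)
        return list(reversed(lineage))


campaign = Campaign("Tablet campaign 01", steps=[
    UnitOpExecution("process-step", "blending", ["API-LOT-7", "EXC-LOT-3"], ["BLEND-1"]),
    UnitOpExecution("process-step", "milling", ["BLEND-1"], ["MILL-1"]),
    UnitOpExecution("process-step", "compression", ["MILL-1"], ["TABLET-BATCH-1"]),
])
print(campaign.genealogy("TABLET-BATCH-1"))  # blending -> milling -> compression
```

Because every step is expressed in the same shape, the campaign view and the genealogy report fall out of the data model rather than requiring bespoke template work.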

There are few obstacles to implementing unit operations in scientific software, beyond using software that can support such an approach. More than anything, institutional leadership is required to align on the “best practices” that will guide the adoption of unit operations as a design principle. John F. Conway, industry expert, comments, “The only way to drive this type of change is through a culture change, and that means everybody changes. And many organizations allow a middle layer to have creative leeway, which is always a good thing. If you can build enough decision criteria around this, it can take effect from the top-down and the bottom up. They’ll see what’s in it for them.” Change management, he notes, is crucial for success. “That means a true R&D transformation – everyone in the organization must change.”

As web hosting moved off premises and the Cloud opened for business, the now-familiar notion of “as-a-service” took shape. The theme of as-a-service offerings is a reduction in the level of effort and expertise required of the customer. Hosting-as-a-service, now the backbone of the web, came first. Software-as-a-Service applied the same model to the benefit of software customers, vastly reducing technical overhead and improving scalability. Unit operations, which could be considered “Workflow-as-a-Service,” deliver this value to laboratory scientists. The technical burden of workflow design is almost entirely lifted, new workflows can be rapidly deployed, and the ultimate objective of shortened time-to-value is achieved – all while gathering model-quality data.