Bioassays 2011

During the last decade, tremendous strides have been made in proteomics and genomics. Bioassays have made similar progress, but receive much less attention. With intense and careful work, bioassays have earned a respected place in quality assurance programs for therapeutics. The assays can be reliable, with %CVs in the mid- to high single digits, which is quite remarkable given the idiosyncrasies of living cells. IBC’s 21st International Intensive Symposium on Biological Assay Development and Validation (Bioassays 2011) attracted over 100 scientists to San Francisco’s Hyatt Hotel at Fisherman’s Wharf, May 11–13, 2011, to talk shop.

Planning is the most important activity in developing an assay. Bioassays are important in the discovery, manufacture, and regulation of biotherapeutics. The specific purpose (potency, stability indicating, etc.) needs to be clearly written out, followed by the essential performance criteria. Risk is an important consideration, since single-point assays are usually much less expensive than multipoint assays, which should show the expected pattern in response. Single-point calibration is difficult because, in bioassays, absolute measures of accuracy are often not possible. Living systems involve many variables and operate within a very narrow window. Cross-reactivity, whether positive or compensating, should be expected. If the need is potentially long-term, a corresponding plan for continuing maintenance should be included.

To be useful, assays must produce results that can be duplicated; this is essential for confidence in the results. Furthermore, others need to be able to obtain similar results. Indeed, every variable may affect the outcome. This was discussed by Dr. Stan Deming of Statistical Designs (Houston, TX). Bioassays are problematic because absolute determination of accuracy is seldom possible. This makes it difficult to compare the results of two assays, since neither can be traced to primary standards. One can compare precision, based on replicates, but precision does not translate directly into accuracy, except to acknowledge that accuracy is unlikely from imprecise results. The problem is compounded by common misconceptions about the meaning of “correlation,” “equivalence,” and mean values. Even using the best available technology, two assays usually show differences in mean and precision. If one assay simply precedes the other in time, one cannot confidently choose between the two results. All of this is very relevant when transferring a method from the originator to another laboratory, such as a contract research organization (CRO) or QC laboratory. Dr. Deming and others recommend that the acceptance criteria for the assay in the receiving facility be set prior to transfer.
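To make the point concrete, the following is a minimal sketch (in Python, with invented numbers, not data from the lecture) of what can and cannot be compared between two assays: replicates give each assay a mean and a %CV, but without traceability to a primary standard neither mean can be called accurate, and a confidence interval on the difference of means does not by itself tell one which result to trust.

import numpy as np
from scipy import stats

# Hypothetical replicate results (% of label claim) from two assays of the same sample
assay_a = np.array([98.2, 101.5, 99.7, 102.1, 100.4, 97.9])
assay_b = np.array([103.8, 106.2, 104.9, 107.1, 105.3, 104.0])

for name, x in (("A", assay_a), ("B", assay_b)):
    mean = x.mean()
    cv = 100 * x.std(ddof=1) / mean            # precision (%CV) from replicates
    print(f"Assay {name}: mean = {mean:.1f}, %CV = {cv:.1f}")

# Approximate 95% confidence interval on the difference of the two means;
# a nonzero difference says the assays disagree, not which one is accurate.
diff = assay_a.mean() - assay_b.mean()
se = np.sqrt(assay_a.var(ddof=1) / len(assay_a) + assay_b.var(ddof=1) / len(assay_b))
half_width = stats.t.ppf(0.975, len(assay_a) + len(assay_b) - 2) * se
print(f"Difference of means: {diff:.1f} +/- {half_width:.1f}")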

The key role of statistics was amplified in a lecture by Robert Singer of Robert Singer Consulting (Union City, CA), which described the first major revision in 50 years of the U.S. Pharmacopeia (USP) chapter on bioassays (111). With time, many things have evolved, particularly in statistics. In 2000, an expert committee on statistics was formed to review the old 111 and plan for a new edition. In 2001 and in following years, three advisory panels were formed that reorganized the revision of 111 into three interrelated chapters. These are working through the USP review process, which seeks to apply a common ontology and philosophy consistently. The committees are also trying to provide a seamless flow through the chapters. Harmonization with the European Pharmacopoeia is less of a concern.

Five chapters are anticipated. “Design and Analysis of Biological Assays” will carry USP number 111. Since it is lower than 1000, this chapter will have regulatory authority. The next four chapters will all have numbers larger than 1000, which means that they are advisory. Titles include: “Design and Development of Biological Assays (1032),” “Biological Assay Validation (1033),” “Analysis of Biological Assays (1034),” and “Roadmap (1030).” “Roadmap” describes the philosophy and includes a glossary. The latter four passed through the public comment period in 2010 and are now in the final stages, including review by the Statistics Expert Committee. Adoption is expected in 2012 at the latest.

Much of the remaining program featured case studies, with reports of problems and their resolution. Dr. Lee Smith of GrayRigge Associates, Ltd. (Wokingham, U.K.) advocated the use of statistics coupled with charting to understand the data. He warned that regulators expect scientists to understand their data.

Control charts and statistical process control

The next lecture, by Dr. Thomas Millward of Novartis Biologics (Basel, Switzerland), presented a case in point in which a trending display of results from serial batches clearly showed one that was out of line, yet within specification. In another example, Dr. Millward showed that pipetting of viscous solutions should not be relied upon. Simple volumetric dilution of an active pharmaceutical ingredient (API) preparation and a reference solution indicated that the API was 7% low. However, when the API preparation (now much less viscous) was diluted with size exclusion chromatography (SEC) mobile phase to fall within the concentration range of a potency bioassay, the results showed that the API had 99% potency compared to the reference.

In the workshop on the first day of the meeting, Dr. Deming discussed the use of control charts, including the rule of 8, 9, or 10. Dr. Paul Caccamo of Emergent BioSolutions, Inc. (Rockville, MD) expanded upon this in a lecture on using statistical process control charts to monitor performance, including assay performance. Plotting data as a function of time is simply trending, but when statistics are added, one has a statistical process control (SPC) chart. SPC charts can be used to control a process or to monitor subprograms such as bioassays. Process control limits are an attribute of the process. It is essential that these be clearly related to the specification limits, which are ultimately customer driven.
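As a minimal sketch of how such a chart works (an assumed example, not material from the lectures), the snippet below derives control limits from historical results and applies a simple run rule of the kind Dr. Deming described: a point beyond the 3-sigma limits, or a run of eight or more consecutive points on the same side of the center line, signals that the process has shifted.

import numpy as np

# Hypothetical historical potency results used to establish the control limits
history = np.array([99.1, 100.4, 98.7, 101.2, 99.8, 100.9, 99.5, 100.1])
new_results = [100.6, 101.1, 100.9, 101.4, 100.8, 101.2, 101.0, 100.7, 101.3]

center = history.mean()
sigma = history.std(ddof=1)
ucl, lcl = center + 3 * sigma, center - 3 * sigma   # control limits are a property of the process

def run_rule(values, center, n=8):
    # Flag n consecutive points on the same side of the center line
    # (the "rule of 8, 9, or 10," depending on the convention chosen).
    run, last_side = 0, 0
    for v in values:
        side = 1 if v > center else -1 if v < center else 0
        run = run + 1 if (side == last_side and side != 0) else 1
        last_side = side
        if run >= n:
            return True
    return False

beyond_limits = [v for v in new_results if not (lcl <= v <= ucl)]
print(f"Center = {center:.2f}, control limits = ({lcl:.2f}, {ucl:.2f})")
print("Points beyond 3-sigma limits:", beyond_limits)
print("Run-rule signal (process shift):", run_rule(new_results, center))

In this invented example, no single point breaches the control limits, yet the run rule flags a sustained upward shift, the same kind of “out of line, yet within specification” pattern noted in Dr. Millward’s trending example.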

Debugging methods

“Everything can affect the results” was a common admonition. As an example of the unexpected, Dr. Sheri Klass of LigoCyte Pharmaceuticals, Inc. (Bozeman, MT) described an animal assay for vaccine potency that gave an annoying number of “nonresponses.” She investigated intraperitoneal injection using vegetable dye. Originally, the injection was made to the left of the mouse’s midline. She noted that in the particular strain of mice (C57BL/6), the cecum lies diagonally across the abdomen. The dye showed that some injections were going into the cecum and were rapidly cleared. Simply moving the injection point to the right of the midline improved the injection efficiency to better than 98%. In other strains of mice, the organs are arranged differently, and the arrangement may not even be consistent within a strain.

Only one reference was made to instrument design as a source of poor performance. It was a 96-well luminometer equipped with two reading heads to double the throughput. One head measured columns 1–6 of the plate and the second measured columns 7–12. Filling the plate with the same reagent in all wells showed a 12% range in response, with a 3–5% difference between the reading heads.
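A simple way to quantify such an effect is sketched below with simulated numbers (the plate layout and the split between heads are assumptions based on the description): fill the plate with one reagent, group the wells by reading head, and express the whole-plate range and the head-to-head difference as percentages of the plate mean.

import numpy as np

rng = np.random.default_rng(0)
plate = rng.normal(loc=1000.0, scale=20.0, size=(8, 12))  # simulated signal, identical reagent in every well
plate[:, 6:] *= 1.04                                      # simulated bias on the second reading head

head_1, head_2 = plate[:, :6], plate[:, 6:]               # columns 1-6 vs. columns 7-12

whole_plate_range = 100 * (plate.max() - plate.min()) / plate.mean()
head_difference = 100 * abs(head_1.mean() - head_2.mean()) / plate.mean()

print(f"Whole-plate range: {whole_plate_range:.1f}% of the mean")
print(f"Reading-head difference: {head_difference:.1f}% of the mean")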

Assay maintenance

Several lecturers touched on the problems associated with keeping an assay working over time. In the closing lecture of the meeting, Dr. Cynthia Woods of Genentech, Inc. (South San Francisco, CA) addressed long-term assay maintenance. It is probably no surprise that the first step is planning, specifically developing a maintenance and revalidation plan. The planning horizon may be as long as 20 years. The plan should include real-time assessments of reagent stability, performed ahead of forecast need. A key indicator is a higher than expected failure rate. Poor performance of a reagent may point to a stability problem or to the influence of an unknown variable that was missed during the initial validation. Even though only one batch of reagent was used in the original validation, differences (problems) should be anticipated when it becomes necessary to change batches. One should not assume that reagents are stable over time. Periodic revalidation should be part of the plan. Three case studies were presented.

Method transfer

Dr. Sally Seaver of Seaver Associates, LLC (Concord, MA) explained that method transfer is where one really learns about an assay, since it usually uncovers previously unknown, but essential, variables. Dr. Seaver observed, “You find the tribal knowledge only after you run someone else’s assay.” Method transfer between two firms, such as an innovator and a supporting CRO, can be even more problematic. Bioassays cannot be avoided: Dr. Michael Sadic of Aptuit (Greenwich, CT), a CRO, advised that a new drug application (NDA) for a biotherapeutic will require at least one bioassay. The bioassay can be developed by either the innovator or the CRO. Since transferring a method to another party can be difficult, the parties need to agree upon and document acceptance criteria before starting. The earlier the method is developed and qualified, the more experience and confidence one can have in the assay. Starting early is much better for everyone, since it avoids the heroic pressure that builds later in the program. Under extreme pressure, one may be forced to accept an assay that is failure prone.

This was the 21st edition of the Bioassays meeting, and clearly the technology has evolved over the years. The organizers have done an outstanding job of keeping the symposium content up-to-date with advances in technology and regulation. IBC also deserves recognition and gratitude for its logistical and attendee support, which enhanced the outcome immeasurably.

Dr. Stevenson is a Consultant and Editor for American Laboratory/Labcompare; e-mail: [email protected].