In April, Symyx Technologies launched a Contract Development and Manufacturing Organization (CDMO) with the goal of helping biopharma companies move new drug candidates from discovery to clinical trials faster and more reliably.
At BIO 2009, we spoke with Eric Carlson, Symyx’s vice president of life sciences, and Matthew Pino, the company’s director of business operations, for their views on why the discovery-to-clinical space is becoming more important in the evolution of pharma outsourcing, and how high-throughput screening must evolve to meet the needs of biopharma.
PhM: Why did you decide on launching a CDMO, rather than a contract R&D company? Is manufacturing know-how important to your plans?
E.C.: If you look at how most contract organizations are set up today, they’re typically either contract research or contract manufacturing firms. In the first case, they’re primarily focused on managing clinical trials, with some development and research work to support that effort; in the second, they’re mainly interested in the manufacturing side, commercialization, and the commercial product.
What we’re trying to do is focus on how best to take a product out of discovery and provide all the services needed to develop it and get it into the clinic efficiently, along with the data customers need to make the right decisions. Recently, big pharma companies have begun to change their outsourcing model: they focus on discovery and outsource more of their development work so they can get product to the clinic as soon as possible and make better, faster decisions.
Our services are set up around this newer outsourcing model. We do preliminary formulation screening, formulation development and optimization, and we also have GMP experts on staff who understand what’s needed to put a product into a commercially viable format and meet all regulatory requirements.
So our processes, quality systems, tech transfer and project management goals are to make the move from discovery to clinic as seamless as possible for customers. As we leverage Symyx technology and automation, that’s where we see benefit. Our end game is not commercial manufacturing but the clinic.
PhM: Is the concept of Quality by Design something that you incorporate into your work?
E.C.: QbD is really just a catch phrase for standardized methodologies: a process through which you “do things right” based on science and regulatory requirements, and then use statistical models to show that the data have validity.
M.P.: We feel that our software enables an integrated approach, because you can get data from a very early point in development all the way through to manufacturing, and be sure where those data are coming from. We aim to be more of an integrated partner with customers, and transparent data transfer is critical to eliminating the “black box” approach to contract research, and establishing a partnership.
PhM: Are pharma companies becoming better at leveraging IT to facilitate this degree of connection?
M.P.: Absolutely. We can point to several different examples where our technology allows us to run small-scale high-throughput experiments, each generating lots of data. Using our software, we’re then pushing our experimental data out to the IT structure of clients’ labs, so that we become a virtual extension of their labs and their scientists access and use the design data as it comes out.
PhM: What do you see as the biggest challenges in applying QbD to this portion of the value chain?
E.C.: The greatest challenge with QbD will always be applying commercial standards to development. Industry is struggling with how best to apply data, statistically. If you only have two lots, and you need to put both on stability, how much control can you have with two data points?
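The two-lot point can be made concrete with a minimal sketch (the potency values and sample sizes below are hypothetical, not from Symyx data): with only two data points, a two-sided 95% t-interval for the mean uses a critical value of 12.706 (one degree of freedom), so the interval is enormous compared with even a modest ten-lot study.

```python
import math
from statistics import mean, stdev

def ci_half_width(values, t_crit):
    """Half-width of a two-sided 95% t-interval for the mean."""
    return t_crit * stdev(values) / math.sqrt(len(values))

# Hypothetical potency results (% of label claim) for two stability lots.
two_lots = [98.1, 99.3]

# Two-sided 95% t critical values from standard tables:
# df = 1 -> 12.706, df = 9 -> 2.262
hw_two = ci_half_width(two_lots, 12.706)
print(f"n=2:  {mean(two_lots):.1f} +/- {hw_two:.1f}")   # n=2:  98.7 +/- 7.6

# The same spread (s ~ 0.85) across ten lots shrinks the interval dramatically.
hw_ten = 2.262 * 0.85 / math.sqrt(10)
print(f"n=10: 98.7 +/- {hw_ten:.1f}")                   # n=10: 98.7 +/- 0.6
```

A ±7.6% interval versus ±0.6% for the same underlying variability illustrates how little statistical control two lots provide.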
M.P.: The industry also needs to handle data in a rigorous way. Scientific informatics allows you to do that in a controlled environment.
Batch history automation is very important, because it will allow you to pull out information that you might not think you’ll need, instead of relying on an operator who might not have captured that particular piece of data.
PhM: How has high-throughput screening changed in the last few years, and how is it being made more relevant to biopharma R&D?
E.C.: If you look at its history, it is mostly thought of as a discovery tool, mainly for small molecule type research. The approach there, generally speaking, is that you’re doing very high throughput experiments. Each experiment is rather simple, with low information content, mainly yes/no screens, but you’re doing lots and lots of them.
With biologics, the focus is closer to what you’d see in materials science and development, where you can’t just do yes/no screens. In biopharma, you need much more information: multiple types of tests feeding multiple decisions, more complex automated sample preparation, and sample tracking throughout the process.
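The contrast can be sketched with a toy example (the assay names, thresholds, and data below are hypothetical, for illustration only): a classic small-molecule screen reduces each well to a single yes/no call, while a biologic screen combines several readouts per sample, and each sample keeps its identity for tracking.

```python
# Hypothetical screening sketch: names, readouts, and thresholds are illustrative.

def small_molecule_screen(activity):
    """Classic HTS: one low-information readout, one yes/no call per well."""
    return activity > 0.5  # hit / no hit

def biologic_screen(sample):
    """Biopharma-style screening: several readouts feed several decisions,
    and the sample ID travels with the result for process tracking."""
    decisions = {
        "stable":  sample["aggregation_pct"] < 2.0,
        "soluble": sample["solubility_mg_ml"] > 50.0,
        "active":  sample["binding_affinity_nM"] < 10.0,
    }
    decisions["advance"] = all(decisions.values())
    return sample["sample_id"], decisions

sample = {
    "sample_id": "mAb-0042",
    "aggregation_pct": 1.2,
    "solubility_mg_ml": 85.0,
    "binding_affinity_nM": 4.5,
}
sid, result = biologic_screen(sample)
print(sid, result)
```

The design difference is the point: the small-molecule function throws away everything but the call, while the biologic version preserves every decision input so later steps can revisit it.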
PhM: What ROI do you find?
E.C.: We generally look at productivity enhancement. We’ve seen 5- and 10-fold increases in productivity, because each individual scientist can run many more experiments. Depending on where you are and what your goal is, you may do more and more studies of the same scope or you may decide instead to broaden the scope.
The scientist can use automation and data management to look at a broader range of candidates, or focus on one specific issue and dive deep and create an extensive data set.