Reaching New Heights with High Throughput Experimentation

High throughput experimentation is back again.

It’s actually been with us for decades, but companies are paying more attention to this powerful approach to chemistry. They’ve solved some old problems, gained a better understanding of how to design HTE studies, and realized the data-science potential unlocked by high throughput chemistry.

Join us on a four-decade journey through HTE. From the 80s and early 90s, to what’s happening in pharmaceutical labs today, to what scientists hope for in the future, you’ll hear it all on Episode 5 of The Analytical Wavelength.

Read the full transcript

Reaching New Heights with High Throughput Experimentation

Neal Fazakerley  00:00

The key to the success of modern high throughput methods is the application of all the great design-first and experiment design principles and tools that allow you to put the right experiments and target compounds on a plate and make compounds as quickly as possible.

Jesse Harris  00:29

High throughput experimentation is a powerful tool for accelerating research and development. In recent years, it has become an important part of the pharmaceutical and chemical industry.

Charis Lam  00:40

The interest in HTE is exciting, but many people don’t realize it’s actually been around for decades. We wanted to have a conversation about the origins of HTE and why it might have been unsuccessful in the past. That way, we can understand how it can be successful in the future.

Jesse Harris  00:57

Hi, I’m Jesse.

Charis Lam  00:58

And I’m Charis. We’re the co-hosts of The Analytical Wavelength, a podcast about chemistry and chemical data. Today, we’re talking about HTE. And our first guest is Ralph Rivero, a consultant at 20/15 Visioneers. Ralph has decades of experience with HTE, and now he consults on research and development IT and informatics.

Charis Lam  01:22

Joining us here today, we have Ralph Rivero, Senior Principal Visioneer at 20/15 Visioneers. Ralph has years of experience in the pharmaceutical industry and with HTE, including at companies like GSK and Merck. He received his PhD in organic chemistry from the University of Pennsylvania. Welcome, Ralph.

Ralph Rivero  01:41

My pleasure.

Charis Lam  01:42

Yeah, it’s great to have you. Alright, let’s start with our usual icebreaker question. What is your favorite chemical?

Ralph Rivero  01:50

Wow, okay, that’s, that’s a tough one. But um, so if you say chemical, I will, I’ll take it to an element. And I guess I’m going to be slightly boring here because I’m an organic chemist. So I’m going to go with carbon as my favorite element. It’s what I went to school for five years to learn about, the chemistry of carbon. And it’s continuing to grow, the knowledge around carbon-based chemistry, organometallic chemistry, all the different types of chemistry that are growing. So it’s kind of a boring answer for an organic chemist to say carbon is their favorite element. Now, if you said chemical, that’s pretty broad, too. I’m just gonna go with that as my favorite element for now.

Jesse Harris  02:33

That’s, that’s great. There’s a lot of possibilities in there, you know?

Ralph Rivero  02:36

Yes, there certainly is.

Jesse Harris  02:37

Everything from coal to diamonds. It’s perfect.

Ralph Rivero  02:39

Yes, yes.

Jesse Harris  02:41

You have a lot of experience with high throughput experimentation. Can you kind of give us a little bit of your background so people get a sense of what your experience is?

Ralph Rivero  02:49

Sure. When I joined the pharmaceutical industry back in 1987, it was an era where computer-aided drug design was the new transformational technology that was going to change our industry. And like many other new technologies, it had a lot of irrational exuberance that didn’t necessarily pan out. It was a time where we were using computational and computer-aided drug design to design molecules, designing molecules that fit binding sites. Of course, it was not the full story.

Ralph Rivero  03:26

So soon after I started working at Merck, combinatorial chemistry was introduced to the industry. And I saw that as an opportunity. As a medicinal chemist, you learn that you are continually going through iterations of “design, make, test, and analyze.” Those iterations occur endlessly during a drug discovery program. I saw combinatorial chemistry as an opportunity to prepare multiple compounds, obviously, but not necessarily to screen them as mixtures, which is where combinatorial chemistry began: screening mixtures. I saw it as an opportunity to use the technology to prepare arrays, to use it for parallel synthesis to prepare discrete molecules, so that you can, in essence, ask multiple questions. You have a hypothesis; you have multiple hypotheses. And you use technologies, automation, liquid handlers, to prepare multiple compounds at the same time, which gives you multiple answers. Therefore, in theory, your iterations should be reduced, because you are getting more answers to more questions. So that happened in the late 80s and early 90s, and again, it was in its infancy. Combinatorial chemistry maybe got a little bit of a bad rap because it was primarily used to screen mixtures. But I see it as the initial lead-in to parallel synthesis and high throughput experimentation, and it’s been evolving ever since.
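[Editor’s note: As a rough illustration of the array idea Ralph describes, not any specific system he used, here is a minimal Python sketch. The building blocks and well layout are entirely hypothetical; the point is simply that every pairing becomes one discrete, addressable question on a plate.]

```python
# Toy enumeration of a parallel-synthesis array: every pairing of two
# hypothetical building-block sets becomes one discrete, addressable well,
# rather than a pooled mixture as in early combinatorial chemistry.
amines = ["amine_A", "amine_B", "amine_C", "amine_D"]
acids = ["acid_1", "acid_2", "acid_3"]

plate = []
for row, amine in enumerate(amines):
    for col, acid in enumerate(acids):
        well_id = f"{chr(ord('A') + row)}{col + 1}"  # plate coordinates, e.g. "A1", "B3"
        plate.append({"well": well_id, "amine": amine, "acid": acid})

# 4 amines x 3 acids = 12 discrete compounds, i.e. 12 questions asked at once.
for well in plate:
    print(well)
```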

Charis Lam  05:03

So you kind of touched upon this with your experiences starting in the 80s, and 90s. And there were some basic HTE capabilities available. But it seems that many scientists are still doing things much the same way 20 years later. So why do you think HTE has been relatively slow to be adopted?

Ralph Rivero  05:21

Yeah, so I think part of it was the initial hype. Of course, when the hype doesn’t necessarily pan out, I think it turns off some of the people who have to make the decisions on investment. But more importantly, I think the technology has continued to evolve. The early technology, whether instruments or software, or the combination of both, which is really what’s necessary for a successful high throughput experimentation department, has continued to evolve to the point now where you have instruments and automation that are very modular and very sophisticated, and you have the software that allows scientists to easily record what they’re doing.

Ralph Rivero  06:05

I think, early on, there was always a piece missing. And there was that piece, plus the culture of scientists, which is “I know how to do my job, I can do this, I don’t need to learn this new technology.” There’s a huge change-management component to introducing these types of technologies. So we started with, you know, 12-channel manual pipettors, then liquid handlers, then instruments that were designed to make peptides that we tried to augment in order to make small molecules. That evolution has continued over the last 20 or 30 years. And it hasn’t fully caught on because I think it’s a moving target. It’s very dynamic, in the skill sets that are required and also in the way we’ve tried to implement it. In industry, there has been a lot of debate about whether you centralize or decentralize; there are new opportunities now with cloud labs, where organizations are taking on the big investment in the automation and the software, and literally democratizing the technology by giving folks who don’t have access to it that access. So it’s still moving, even though it’s been, you know, two or three decades.

Jesse Harris  07:26

Great. Now, I wanted to pick up on some of these questions around, you know, culture and business practices. Building a successful HTE program is sometimes seen as just a technical problem of getting the right combination of equipment, software, and scientists all in the same building. You indicated that you think business processes are as important as the technical questions for a successful program. So why is that? And what are some examples of how you’ve seen that play out?

Ralph Rivero  07:53

So when I left Merck, I started a group in Johnson and Johnson. It was high throughput chemistry; I’m not sure exactly what we called it back then; it was in 1995. And the initial commitment was there to purchase the automation, which, again, was not necessarily optimized for small molecules. Once you make the initial investments, organizations want to see a return on their investment. And there’s a number of ways you can measure that return. But ultimately, drug discovery is something where you don’t see the real return unless, you know, you set up some metrics. And the metrics are often things like reactions run or leads per year, all these things. And when you don’t necessarily see those metrics, you assume that your investment hasn’t panned out. And, you know, that was certainly the case early on.

Ralph Rivero  08:47

So the organizational commitment that’s required is really important, because it’s not just the initial investment. It’s the continued investment as the technology advances. And J&J is an organization committed to investing in research, obviously, we know they are, as is Merck, as is GSK. You’re talking about commitment for years, and I haven’t seen that type of commitment. There’s a lot of excitement early on, but when you don’t see that return immediately, and you really need to be looking well into the future with respect to the data these technologies are going to provide you, I think that commitment just hasn’t been there.

Ralph Rivero  09:34

And if you have that commitment, then you can create the culture. And when you have the culture within an organization to take on these new technologies, then you can build the skill sets and the expertise within those organizations. And then they don’t question things like, you know, we need an ELN that makes it easy to capture experiments in a high throughput fashion. Parallel experiments were not easy to capture in the early ELNs. With the templates, it just took a long time to capture those experiments. And then after you did capture them, the data was somewhat buried. The ability to mine that data was not necessarily there.

Ralph Rivero  10:16

So all those things are obstacles. And management changes in organizations. We know in the pharmaceutical industry, we’re always rethinking, reshuffling the deck around how we do drug discovery, all with good intentions. But often we stopped doing something that we just should have had a bigger commitment to.

Charis Lam  10:38

That’s a great point. I think we often think of these problems as being scientific problems. But the business and the change management is just as important.

Ralph Rivero  10:45

Very much so.

Charis Lam  10:46

Sort of related to that, you talked about centralized and decentralized as two potential models for HTE labs. How do you think a company should decide which model to go with?

Ralph Rivero  10:56

That’s a really good question. And it’s a question we struggled with both at J&J and at GSK. And I’m sure that Merck has eventually faced it as well. It really comes down to commitment and the skill sets, and the commitment to have a group that will become experts at the technology. And it’s not just expertise in organic chemistry and medicinal chemistry.

Ralph Rivero  11:20

It’s experts, often, tying that chemistry to the software, to learning how to run the instruments, which are not trivial. I think, you know, early on, they were simple, but they actually didn’t work very well. They would make peptides, but they wouldn’t do very sophisticated chemistry. Now, we have instruments that can do all kinds of different organic chemical reactions and transformations. But that flexibility when it comes to instrumentation usually means complexity when it comes to programming, and to having the robot or the instrument do what you want it to do.

Ralph Rivero  12:01

So while I attempted very often to have experts within a group, we always had a bigger vision to have it be turnkey. Have the chemistry, the instruments, excuse me, be turnkey for anybody in the department, that is, any chemist to use. Turnkey is a nice aspiration, but it’s very difficult to do. So you know, chemists usually can walk up to an LC/MS and put their sample on and then walk away and then their spectrum shows up and they get to see what it is. That’s pretty turnkey. But to have turnkey to make an array of 96 compounds, I think that’s again, a nice aspiration, but not necessarily something that is easily achieved.

Ralph Rivero  12:45

So over time, I’ve evolved to feel like this is something where you need expert groups, you need to centralize it, and you need to have that group. And it’s almost like a dirty word in industry to be a service group. But everybody’s a service group in drug discovery: biology services chemistry; chemistry services biology. Drug discovery is the ultimate team sport, is what I say, because you can’t do it alone. There are many people involved. And that expertise, that high throughput experimentation expertise, is critical to generating data that is used to make decisions. And, you know, while we say it’s a service group, it’s a service group that should be usable by any other scientist that needs to run a high throughput experiment, whether it’s optimizing a set of reaction conditions or making a set of 96 uniquely different molecules.

Ralph Rivero  13:37

So over time, I’ve evolved how I think it should be implemented. And then as I said, I think there is this third option that is now being introduced by a couple of organizations that are cloud labs. And these organizations provide the opportunity for scientists anywhere in the world, even in their living rooms, or in their offices at home, to have access to instruments through the cloud, to run experiments. And the commitment by these companies is that they will keep their instruments up to date. They’re very modular. They have the software, they have the software packages to provide the scientists with the data so that they can make decisions, which all of drug discovery is about making decisions. And those decisions need to be driven by the data that’s generated.

Jesse Harris  14:28

That’s a great transition to the next topic we wanted to talk about, which was using data for applications like AI and machine learning. That’s obviously an area of a lot of excitement right now in the drug discovery, pharmaceuticals space. But there’s obviously a ton of data that comes out of these high throughput experimentation programs. So what’s the best way to just create a linkage between these AI and ML programs and the HTE programs?

Ralph Rivero  14:59

So I wish I had an answer, a very specific answer for you, Jesse. But what I will say is that, over the years, what I’ve noticed is that we, we’ve missed a lot of opportunities as we generated this data. So we’ve looked at high throughput experimentation as a means to solve an immediate problem, sometimes not realizing that the data that’s being generated in those experiments is incredibly valuable to inform future experiments.

Ralph Rivero  15:27

So that link that you’re describing is how well we create that intersection between the data generation and the relational databases that take that data that’s properly annotated and curated. So this is very much an IT question. And I’ve always said, I’m not an IT guy. But I can tell you when we need an IT solution. And this is an IT solution that many are working on. Again, the ELNs are sort of a gateway to pulling that data into these relational databases. Again, the annotation, the metadata that’s associated, the curation of these databases are the foundation of tomorrow’s artificial intelligence and machine learning.
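[Editor’s note: Ralph’s point about annotation and curation can be made concrete with a minimal sketch. The record below is purely illustrative; the field names and units are assumptions, not any particular vendor’s or company’s data model. It simply shows the kind of well-level metadata that keeps HTE results minable later.]

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class HTEWellRecord:
    """One well of a high throughput experiment, annotated for later mining.

    Illustrative only: field names, units, and identifiers are assumptions.
    """
    experiment_id: str                # links back to the ELN entry
    well: str                         # plate position, e.g. "B07"
    substrate_smiles: str             # structure captured as SMILES
    reagent: str
    catalyst: Optional[str]
    solvent: str
    temperature_c: float
    time_h: float
    yield_pct: Optional[float]        # e.g. from LC/MS or NMR quantitation
    analytical_files: List[str] = field(default_factory=list)  # raw data locations

record = HTEWellRecord(
    experiment_id="ELN-2021-0042",    # hypothetical ELN reference
    well="B07",
    substrate_smiles="c1ccccc1Br",
    reagent="phenylboronic acid",
    catalyst="Pd(PPh3)4",
    solvent="dioxane/water",
    temperature_c=80.0,
    time_h=2.0,
    yield_pct=87.5,
    analytical_files=["lcms/B07.raw"],
)
print(record)
```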

Ralph Rivero  16:16

And those are opportunities, so that scientists don’t have to rediscover the wheel over and over and over again. Even in Big Pharma, scientists often do experiments that, if they had access to the information, they might not do. They might do a different experiment. And in this day and age, I think, you know, there’s no excuse for scientists not to have access to information. And this information, which is coming in large volumes, has to be treated as a valuable asset by all organizations, because it is.

Ralph Rivero  16:49

I mean, AI is the future, machine learning is the future. I have a daughter who works at IBM, and she tells me so. You know, I tell her some of the stuff that goes on, and she says, “Dad, you know, computers could do that a lot better than humans.” I said, “I know, Steph. But we’re not there yet. We’re just not there yet.” And I know there are lots of organizations tackling this problem. But that will really lead to transformational change in how we do discovery, you know, eliminating redundant experiments, which are happening every day.

Charis Lam  17:20

Thanks for sharing such great insights with us, Ralph. Before we go, is there any last advice you would like to give to our audience about HTE?

Ralph Rivero  17:27

Um, no. I think, you know, I’ve been a proponent of high throughput experimentation for years. I’ve always thought, if you can run one experiment, why can’t you run 12, whatever, an array? It is all about asking the right questions and then getting those answers quickly. It gives you the ability to do things faster. I think the challenge has been that when you set up these experiments, you then have all this data to look at. And I think, if we tackle that problem, we will considerably affect the cycle time for drug discovery, because it is all about the decisions that are made. And those decisions are based on data. And if you have that data, and it’s accessible, you will make better decisions, and we will have more successes. So I think it’s an exciting time. But it’s continuing to evolve. And as I said, I think the options for high throughput experimentation are out there for anybody to use.

Jesse Harris  18:20

Wonderful. Thank you so much for your time, Ralph. We really appreciate it. Great insights. And yeah, we really appreciate having you.

Ralph Rivero  18:27

Thanks, Jesse. Thanks, Charis. Appreciate it. Take care.

Charis Lam  18:31

Ralph gave us a great overview of how HTE has developed over the years.

Jesse Harris  18:37

Yeah, he talked about some of the old challenges we overcame, but also mentioned some existing challenges that we need to be tackling right now.

Charis Lam  18:44

Right. Let’s see how scientists currently working in the lab are dealing with those issues. Our next guest is Neal Fazakerley, a scientific team leader at GSK. He and his group have been improving data handling and automation.

Charis Lam  18:58

For our guest here today, we have Neal Fazakerley, who is a scientific team lead in the discovery high throughput group at GSK, where he has worked for seven years. Neal earned his PhD in chemistry from the University of Manchester, and he’s here to talk to us today about high throughput experimentation in the pharmaceutical industry. Welcome, Neal.

Neal Fazakerley  19:19

Hi.

Jesse Harris  19:21

Hi. Great to have you. We always like to start out with a fun icebreaker question. What’s your favorite chemical?

Neal Fazakerley  19:28

So samarium diiodide. It’s a single-electron reductant that I spent about four years of my life working on during my PhD. And there’s a wealth of highly selective transformations that it’s useful for, to install really interesting molecular architectures.

Neal Fazakerley  19:48

I guess, for something people know about or have heard of: paracetamol, or Tylenol. Not just because no one likes a headache, or because it’s great for kids when they’ve got a fever. But right back at the beginning of my exposure to organic chemistry, I spent a week’s work experience during high school in a research lab. And along with other demonstrations, we made paracetamol and recrystallized it, got analytical data on it, and looked at it under the microscope. And I guess that inspired my love of building molecules. So that’s really why I love samarium diiodide, because it’s been used to construct some really tough natural products through some pretty neat reactions.

Charis Lam  20:30

That’s really cool. It’s definitely a unique one for this podcast, the first time we’ve heard that one.

Jesse Harris  20:34

Yeah, you know you’re speaking to a synthetic chemist when they’re talking about samarium diiodide, you know, as opposed to water or caffeine.

Charis Lam  20:42

Alright, so on to HTE. Over the last several years, we know that GSK has built up its HTE capabilities, including its data management features. What were the objectives behind this initiative?

Neal Fazakerley  20:55

So I guess, to drive from experiments to insights to decisions more quickly. The primary objective of this work with ACD/Labs has been the development of a software package for the design, standardized electronic lab notebook write-up, and analysis of our experiments in a single software tool. The key for me is the association of analytical results with all the experimental conditions for a well or an experiment. That gives us a dataset that can be used to mine the results of our experiments and, in the future, to train machine learning models, which inform how we design our reactions.

Neal Fazakerley  21:42

We’ve come a really long way in being able to write up some really complex reactions and experiments, and in having the software calculate the solutions we need to prepare and generate sequence files for our robotics or our analytical instruments. And then, beyond that, we have our analytical data processing with automated peak picking, annotation, or automated structure verification with NMR. The key is that all of that data can be plotted directly against the components and conditions in a specific well of the experiment.
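[Editor’s note: A minimal sketch of what that association can look like in practice, assuming a simple tabular layout rather than any particular vendor’s format, is just a join between a design table (what went into each well) and an analytical results table (what came out), so every measurement carries its full context.]

```python
import pandas as pd

# Hypothetical design table: conditions per well.
design = pd.DataFrame({
    "well":        ["A1", "A2", "A3", "A4"],
    "catalyst":    ["Pd(OAc)2", "Pd(OAc)2", "Pd2(dba)3", "Pd2(dba)3"],
    "ligand":      ["XPhos", "SPhos", "XPhos", "SPhos"],
    "temperature": [60, 60, 80, 80],
})

# Hypothetical analytical results: e.g. LC/MS product peak area per well.
results = pd.DataFrame({
    "well":             ["A1", "A2", "A3", "A4"],
    "product_area_pct": [12.4, 35.1, 78.9, 64.2],
})

# Join conditions to results so each measurement can be plotted directly
# against the components and conditions that produced it.
plate_data = design.merge(results, on="well")
print(plate_data.sort_values("product_area_pct", ascending=False))
```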

Jesse Harris  22:21

HTE and the related field of combinatorial chemistry have been around for decades. What new improvements make it a good choice for laboratories now? Because I think many people thought that combinatorial chemistry was kind of dead.

Neal Fazakerley  22:35

Yeah, I think it’s always been a good choice for labs. I think one of the challenges faced by combi chem was a belief that getting more data points or making more compounds removed or lowered the burden on design. And we know this is a numbers game you can’t win. The chemical space is too vast; there’s not enough carbon to make and test everything. The key to the success of modern high throughput methods is the application of all the great design-first and experiment-design principles and tools that allow you to put the right experiments and target compounds on a plate and make compounds as quickly as possible.

Neal Fazakerley  23:19

Ultimately, our philosophy boils down to this: if the results of experiments one and two don’t inform whether you should do experiments three and four, then do them all at the same time on the same plate and analyze the data all together. And that extends to whether you’re asking 24, 96, or 1536 questions. They should all be well thought out and designed.
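[Editor’s note: As a toy illustration of that “ask all the questions at once” philosophy, a full factorial set of conditions can be enumerated and laid onto a single plate in a few lines. The factors here are made up for the example, not GSK’s actual screens.]

```python
from itertools import product

# Hypothetical factors for a reaction-optimization plate.
catalysts = ["Pd(OAc)2", "Pd2(dba)3"]
ligands   = ["XPhos", "SPhos", "RuPhos"]
bases     = ["K2CO3", "Cs2CO3"]
solvents  = ["dioxane", "toluene", "DMF", "MeCN"]

# Full factorial design: 2 x 3 x 2 x 4 = 48 wells, all run on one plate
# and analyzed together rather than sequentially.
design = list(product(catalysts, ligands, bases, solvents))
print(f"{len(design)} wells")
for i, (cat, lig, base, solv) in enumerate(design[:3], start=1):
    print(f"well {i}: {cat}, {lig}, {base}, {solv}")
```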

Charis Lam  23:45

That makes a lot of sense. So what are some of the challenges involved in setting up HTE in a lab?

Neal Fazakerley  23:51

I guess the bottlenecks change. And sometimes it can be quite hard to predict what will break. As chemists, we’re quite used to being the bottleneck in the drug discovery process. Biologists knew they could test everything we would make, and we could easily get an analysis on all of our compounds. But when you shift from a team of chemists making a few hundred compounds each month to a world where an individual chemist can make thousands of well-designed compounds each week, then suddenly the analysis of those compounds becomes slower than the synthesis.

Neal Fazakerley  24:29

Interpretation of the data and design of the next round of compounds become really, really important. How can you use models to accelerate this? That’s something we’re really looking at. All of the matrixed teams need to increase their capacity to understand the biology of the compounds. Actually, even data transfer between servers can cause a lag that impacts timelines, and ultimately you break other people’s processes. But I guess if you don’t, you haven’t gone big enough.

Jesse Harris  25:01

Yeah, I think that that’s a really important point, particularly in any sort of innovative field, like, the thing about innovation is that you need to be pushing boundaries, you need to be doing something kind of special. So it’s great that you guys are kind of acknowledging that in your philosophy. But along that line, HTE generates a lot of data very quickly by design. Are there any best practices for handling that data that you guys have either come up with yourselves or through your interactions with the literature?

Neal Fazakerley  25:30

I guess you ideally have some really nice software that does it for you. And I guess that’s what we’ve been trying to do with ACD/Labs and build that package. I think it’s all about the automated tabulation of your analytical data, such that your data isn’t trapped in static tables and flat PDFs, or even worse, paper lab notebooks. You need a solution that associates your tabulated analytical data with the conditions, the reagents, the compounds in each well of your experiment. And once this is taken into a database or a reaction data model, you have the opportunity to make real use of it through the leaps forward in machine learning and neural network models, which you can use to really help more rapidly design your next round of experiments and drive you to a decision.
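[Editor’s note: As a hedged sketch of that last step, and nothing to do with GSK’s actual models or the ACD/Labs software, once conditions and outcomes sit in one table, even a small scikit-learn pipeline can be fit on past wells and used to score untested conditions for the next plate. The data below is synthetic.]

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestRegressor
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# Hypothetical tabulated HTE results: conditions per well plus a measured yield.
data = pd.DataFrame({
    "catalyst":  ["Pd(OAc)2", "Pd(OAc)2", "Pd2(dba)3", "Pd2(dba)3", "NiCl2", "NiCl2"],
    "solvent":   ["dioxane", "toluene", "dioxane", "toluene", "dioxane", "toluene"],
    "temp_c":    [60, 80, 60, 80, 60, 80],
    "yield_pct": [12.0, 18.5, 71.0, 64.5, 5.0, 9.5],
})

X, y = data[["catalyst", "solvent", "temp_c"]], data["yield_pct"]

# One-hot encode the categorical conditions, pass temperature through as-is,
# then fit a small random forest to predict yield from conditions.
model = Pipeline([
    ("encode", ColumnTransformer(
        [("cats", OneHotEncoder(handle_unknown="ignore"), ["catalyst", "solvent"])],
        remainder="passthrough",
    )),
    ("forest", RandomForestRegressor(n_estimators=100, random_state=0)),
])
model.fit(X, y)

# Score an untested combination to help prioritize the next plate.
candidate = pd.DataFrame({"catalyst": ["Pd2(dba)3"], "solvent": ["MeCN"], "temp_c": [70]})
print(model.predict(candidate))
```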

Charis Lam  26:21

It sounds like there’s a lot of room forward with what you can do with the data once you’ve collected it. What advice do you have for some of our listeners who might be interested in building up the HTE capabilities at their organization?

Neal Fazakerley  26:37

Really think very carefully about the questions you want to answer. It’s easy to get excited about automation solutions and robotics because they’re really cool. Experimental design, hypothesis, and synthetic method are important, even more so than in singleton chemistry. My personal view is that an automation solution that can be configured to do almost anything is almost always configured not to do the thing you want to do today. So investing in the specific tools that automate really well-defined processes, and then really working on the smooth handoffs between them, will drive the most efficiency. Beyond that, build great relationships with the teams you work with, be those the assay scientists who need to generate data on all your compounds, or the computational scientists who support you to derive insights from your experiments and your data, or the software team you’re working with on developing tools to allow you to design, write up, and analyze your experiments.

Jesse Harris  27:40

Well, thank you so much, Neal. I think that’s all our questions for today. But that was really lovely to hear about. A lot of insights there, I think.

Neal Fazakerley  27:49

Thank you very much for your time. It’s been great speaking to you.

Charis Lam  27:51

Thanks. It was great to hear how Neal’s team is working with high throughput experimentation, and how software is helping them handle their data.

Jesse Harris  27:59

Yes, we’ve had quite the perspective on HTE this episode. From the 80s and 90s to what’s happening right now inside pharmaceutical labs. Remember to check our show notes for additional resources, including a link to Ralph’s team at 20/15 Visioneers.

Charis Lam  28:13

Thanks for your support of The Analytical Wavelength. This is the last episode of Season 1. We’ll be returning in early 2022 for Season 2, which will be all about predictions.

Jesse Harris  28:23

Join us to hear about using AI in drug discovery, machine learning in chemistry labs, and computer-assisted structure elucidation, featuring guests from academia and industry.

Charis Lam  28:33

Remember to subscribe so you’ll know when we return. We may also drop a bonus episode into the feed in the meantime. We’re available on all major podcast feeds, including Spotify, Apple Podcasts and Google Podcasts.

Jesse Harris  28:47

Thank you so much for listening. Until next time.

Charis Lam  28:50

The Analytical Wavelength is brought to you by ACD/Labs. We create software to help scientists make the most of their analytical data by predicting molecular properties, and by organizing and analyzing their experimental results. To learn more, visit us at www.acdlabs.com.



Enjoying the show?

Subscribe to the podcast using your favorite service.