Pharma Focus Asia

Challenges and Advantages of Using Artificial Intelligence in Pharma

Fausto Artico, Global R&D Tech Head and Director of Innovation and Data Science, GSK

Kevin Harrigan, Director of Innovation and Engineering, GSK

The use of Artificial Intelligence (AI) in Pharma can greatly accelerate activities. However, Pharma companies today face many tough challenges in creating reliable AI models for the complex environments that characterise the sector. This is because, given the soon-to-be pervasive presence of AI models in our lives, you must consider many more factors besides the usual regulatory ones when you want to create and deploy AI models into Pharma workflows. In this article, we therefore discuss some of the most critical challenges you face when you need to create reliable AI models for Pharma. We also explain how you can solve, or at least ameliorate, some of these challenges. Finally, we provide advice on how you can deploy AI models at scale and explain the advantages that you will garner in so doing.

Introduction

Many advanced, large-scale AI initiatives have yet to prove successful in Pharma. This is because Pharma is a highly regulated sector, the discovery and approval processes for a drug can easily take 12+ years, and advanced AI is still a field in formation. It is important, then, to design future advanced AI systems around certain critical elements, because failing to integrate those elements can completely invalidate otherwise technically sound advanced AI solutions. We will therefore discuss three principal issues, namely: ethical concerns surrounding the objectives of advanced AI systems; the ability to explain AI models in layman’s terms; and the dangers of training models on data sources that contain biases.

Ethics

What is the final objective of the advanced AI system you are creating? The answer to this question is very important, because different final objectives imply the need to ponder different ethical matters during the creation of such systems. When you design an advanced AI system, you should at least ask yourself the following questions: Is the goal of the advanced AI system the optimisation of some processes? Is it to automate them? Is it to generate new income streams? Is it to create a better user experience? In addition, how can we trust the actions that the advanced AI models propose? Will they really be for the greatest good of everybody? How can we verify this? Could an advanced AI system be considered a form of intelligence? And if so, how should we treat AI? At what point should we start to worry that an advanced AI system is complex enough to generate unintended consequences?

Try to limit the scope and decision power of advanced AI systems. For example, you can create advanced AI systems to solve “narrow” problems that require the optimisation of a small number of equipment settings in a manufacturing train at a site. Doing so, you will feel more comfortable trying the recommendations the systems propose because the consequences of erroneous decisions will be very limited and not critical. This is especially true if you execute Proofs of Value on just one piece of equipment for a very limited time. Such a design and method of operating make it possible for you to start to build trust in the models’ decisions and later, when you have become confident enough, to scale them up into production on multiple manufacturing trains and sites. And make sure to have a monitoring system in place so that you can continuously assess whether the environmental conditions in which the models operate are the same as those used for their training and thus safe to continue to use.
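As a minimal sketch of what such a monitoring check could look like, the Python snippet below compares the live distribution of one monitored variable against the distribution recorded at training time and flags a statistically significant shift. The variable names, the temperature figures and the 0.01 threshold are illustrative assumptions, not a prescribed implementation.

import numpy as np
from scipy.stats import ks_2samp

def drift_detected(training_values, live_values, alpha=0.01):
    """Flag drift when a two-sample Kolmogorov-Smirnov test rejects the
    hypothesis that live data follows the training distribution."""
    statistic, p_value = ks_2samp(training_values, live_values)
    return p_value < alpha

# Hypothetical example: chamber temperature at training time vs. the
# readings collected over the last shift (note the shifted mean).
rng = np.random.default_rng(seed=0)
training_temps = rng.normal(loc=37.0, scale=0.5, size=5000)
live_temps = rng.normal(loc=37.8, scale=0.5, size=500)

if drift_detected(training_temps, live_temps):
    print("Conditions differ from training; review whether the model is still safe to use.")

A check like this only covers the variables you already log; the Biases section below returns to the harder problem of conditions you are not capturing at all.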

Advanced AI systems are there to help humans take decisions, not to supplant them. The more critical the consequences of the decisions an advanced AI system needs to make, the more important it is that you have humans assess them. In addition, the final approval for important decisions should always involve a series of steps and activities, and not just one decision maker or single point of failure. Furthermore, it is unlikely that future regulators will approve the use of completely automated advanced AI systems for Pharma processes. And complete digital twins will probably not wholly obviate, if at all, the need for tests on animals first and clinical trial phases on humans later. Therefore, advanced AI systems will accelerate some activities and increase the probability of success of some objectives (e.g., the discovery of new drugs), but they will only be enhancements to existing procedures and processes and not substitutes for them. It is also impossible to take an advanced AI system to court: regulators hold humans accountable for the decisions taken by the systems they design. So, it is more important than ever to simplify the models as much as possible so that humans can understand them and double-check the suggestions and actions the systems propose. This is especially true if we want to develop swarm systems composed of many models, each solving or improving human activities on very narrow tasks but needing to communicate with each other and to feed each other inputs and outputs.

Explainability

Many non-AI Pharma domain knowledge experts need to interact with advanced AI systems. It is, therefore, important that they understand why the models propose certain actions or why the models “think” they have discovered important insights that deserve to be flagged for human attention. This is especially true if the models’ actions and insights are counterintuitive. Explanations of why and how models work that are based purely on statistical principles will not be understood by many people. If people cannot understand why the models make certain choices, they will see them as black boxes, will not trust them, and will resist using and integrating them in their standard ways of working. We have seen many situations in which domain knowledge experts in non-tech domains (e.g., biologists) simply refused to believe the models’ choices were correct because they did not understand how such choices were generated. It is difficult for a person to accept such choices if their validity can only be proved after years of testing and long processes, as is the case in clinical trials. Compare this to other situations, such as manufacturing, where you can just execute a “quick,” albeit not always easier, Proof of Value using existing equipment and a lot of historical data related to mechanical engineering processes that we know very well, in contrast to our more limited knowledge of the biology and working mechanisms of the human body.
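One family of techniques that can help here reports which inputs drive a model's predictions, in terms a domain expert can inspect and challenge. The sketch below uses permutation importance from scikit-learn on synthetic data; the feature names and the relationship between them are illustrative assumptions, not a recommended method for any specific Pharma problem.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

# Synthetic process data: "yield" depends mainly on temperature and pH.
rng = np.random.default_rng(seed=1)
feature_names = ["temperature", "pressure", "stir_rate", "ph"]
X = rng.normal(size=(500, 4))
y = 2.0 * X[:, 0] - 1.0 * X[:, 3] + rng.normal(scale=0.1, size=500)

model = RandomForestRegressor(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# Report how much shuffling each feature degrades the predictions: a
# ranking a biologist or engineer can sanity-check against experience.
for name, importance in zip(feature_names, result.importances_mean):
    print(f"{name}: {importance:.3f}")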

Hybrid models will probably be the way of the future. Domain knowledge experts and data scientists need to work together to get the best of both worlds (i.e., the life science and data science domains). We cannot just use models that are purely statistical. While such models can be great at discovering correlations and therefore at telling us how to achieve a specific objective, they are very limited in their ability to explain why such phenomena happen from the scientific point of view. Combining science with more purely statistical methods will drive research and discovery in ways that can be understood and used by many domain knowledge experts and so advance science much more rapidly than today. This is critical considering that we finally have enough data and computational power to analyse processes and activities in ways that are impossible for humans but easy for machines. People can continue to develop science but leverage machines and algorithms to more efficiently verify hypotheses, search for patterns and, more generally, liberate themselves from tedious activities like sifting through enormous amounts and types of data.
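To make the idea concrete, one common hybrid pattern is to let an expert-supplied mechanistic equation carry the science and let a statistical model learn only the residual the equation does not explain. The sketch below assumes, purely for illustration, first-order decay kinetics plus two unmodelled factors.

import numpy as np
from sklearn.linear_model import Ridge

def mechanistic_prediction(time_h, c0=100.0, k=0.3):
    # Expert-supplied science: first-order decay, C(t) = C0 * exp(-k * t).
    return c0 * np.exp(-k * time_h)

# Synthetic observations: the mechanistic part plus an effect of other
# factors (e.g., temperature) that the equation does not capture.
rng = np.random.default_rng(seed=2)
time_h = rng.uniform(0, 10, size=300)
other_factors = rng.normal(size=(300, 2))
observed = mechanistic_prediction(time_h) + 1.5 * other_factors[:, 0]

# Statistical part: learn only the residual left over by the science.
residuals = observed - mechanistic_prediction(time_h)
correction = Ridge().fit(other_factors, residuals)

hybrid_prediction = mechanistic_prediction(time_h) + correction.predict(other_factors)

Because the mechanistic term remains explicit, a domain expert can still read and challenge the scientific core while the statistical term mops up what the science leaves out.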

Most data is and will remain unstructured. Data types such as images and long-form text, for example, contain many more bits of information than what is captured and stored in tables and/or other structured data formats. What this implies is that many models in future will have to be able to approximate cognitive capabilities to interact with humans. In fact, it is not far-fetched to imagine that more and more advanced AI solutions will be able to interact with humans in a way that is more humanlike than what is possible today. Our ability to ask questions of advanced AI systems, interact with them and, more generally, design them in ways that make this possible will only increase over time. The fact that you will not need to be a data scientist able to code to interact with such advanced AI systems will speed up our ways of working and make our lives easier.

Personalisation will be important too. It is not difficult to imagine that models will be able to interact with a variety of stakeholders who have a variety of interests and use a variety of lenses to interpret the world. Already today, we have Large Language Models (LLMs) that can generate text and be trained on various corpora. Training them on the corpora in which different professionals were educated will make it possible for such models to use the language of various professions and so to interface with diverse types of people (e.g., lawyers, biologists, HR people, engineers in manufacturing, etc.). Layering speech and voice capabilities on top of the LLMs, and adding personalisation features tuned to the personality and attitude of each individual they interact with, will open possibilities that are difficult to imagine today but that will greatly enhance our human-machine interactions. This is especially true if we also design empathic capabilities into such future advanced AI systems, as well as other soft skills that would enable them to take into account feelings/emotions, mental thought patterns and social dynamics.

Biases

Datasets used for training should be representative of the environmental conditions in which the models will operate. This is easier said than done. You need to be careful about this because it is easy to make the erroneous assumption that the models can generalise well enough. Typical examples are: recommendations on how to select a cohort of people for a clinical trial made without realising that minorities will not be sufficiently represented, because of a lack of data related to them in the datasets used to feed the model; and corpora or databases that are thought to contain high-quality, perfectly valid and curated data but that instead contain mistakes and errors.
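A first line of defence is a simple representativeness check before training: compare the share of each subgroup in the candidate cohort against a reference population and flag anything badly under-represented. In the sketch below, the group names, the figures and the 50 per cent threshold are illustrative assumptions.

from collections import Counter

# Hypothetical reference shares for the target population.
reference_population = {"group_a": 0.60, "group_b": 0.25, "group_c": 0.15}

# Hypothetical training cohort: group_c is clearly under-sampled.
cohort = ["group_a"] * 720 + ["group_b"] * 260 + ["group_c"] * 20

counts = Counter(cohort)
total = sum(counts.values())
for group, expected_share in reference_population.items():
    observed_share = counts.get(group, 0) / total
    if observed_share < 0.5 * expected_share:  # illustrative threshold
        print(f"{group} under-represented: {observed_share:.2%} "
              f"vs expected {expected_share:.2%}")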

Augment context to allow people to understand how activities were executed and what was done. Today, many people do not trust models because they cannot retrieve enough information on why the models were created in some ways and not others, or why some data sources were used or chosen and others not. Essentially, it is important that during model creation you enrich, manually or automatically, the datasets and the protocol with metadata that explains and helps to contextualise the choices the designers made to create the model. Therefore, you should design, implement and deploy as many automated checks as possible for the models and the datasets used for training them. The reason for doing so is that in this way you can at least log and track what was decided, why, the known problems that affect the datasets and models, and so on. Then, if in future somebody wants to verify the protocol that was used to create the models and the datasets, they will be able to do so quickly and to add further checks to verify whether something was omitted in the protocol or a new hypothesis on the datasets and/or model needs to be tested. This is important for reassessing the model (to invalidate and retrain it, or recreate it) if for any reason you become aware of new conditions that are critical to how it should have been created, or to the environmental conditions in which it is supposed to operate. Software makes it possible to automate many such activities. And since many choices could be questioned at a later time by other people, it is important to generate and save a history of all these choices. It will otherwise be impossible to execute any further assessment, validate or disprove the model, or add new checks at a later time in reliable ways that augment the analyses previously executed. The alternative of recreating everything from scratch would not only be prohibitively time consuming; it may not even be possible.
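As a minimal sketch of what such a record could look like in practice, the snippet below captures the decisions, rationales and known issues for one model as a structured, machine-readable file. Every field name and value is an illustrative assumption rather than a prescribed schema.

import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class ModelProvenanceRecord:
    model_name: str
    data_sources: list
    design_choices: dict          # choice -> rationale for it
    known_issues: list
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

record = ModelProvenanceRecord(
    model_name="yield_optimiser_v1",
    data_sources=["site_a_historian_2019_2022"],
    design_choices={"excluded 2018 data": "sensor recalibrated mid-year"},
    known_issues=["sparse coverage of winter operating conditions"],
)

# Persist an auditable history of the choices so a future reviewer can
# verify the protocol or test new hypotheses against it.
with open("yield_optimiser_v1_provenance.json", "w") as f:
    json.dump(asdict(record), f, indent=2)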

Also, beware of the fact that problems can arise because some environmental conditions were thought unimportant and are not captured by any sensor, or change very infrequently or slowly. They can unexpectedly generate strong nonlinear dynamics that have never been assessed or discovered because they never manifested before and so never caused any issues until now. You cannot always solve this problem (i.e., understand and discover what you do not know you do not know). However, with the right monitoring systems, you can at least verify whether the environmental baselines related to the things that you are tracking have changed, and if the answer is no, execute root cause analyses more quickly. That is to say, you know that it is better to focus your attention on something that you are not logging and monitoring yet, because all the other things you have always logged and monitored seem to be working as usual in ways that never generated any problem before. Conversely, the other issue that you could face is when things that you are logging and monitoring really are the root cause of the problems because, for the first time ever, they combined and interacted in ways they never have before (even if they still satisfy all the requirements). You could use clustering techniques in these situations to verify what is different and has never happened before, even if everything looks in spec. (Admittedly, this could be difficult to execute if the sampling frequency of your monitoring systems is not high enough.)
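One way to operationalise that clustering idea, sketched below under purely illustrative assumptions, is to fit clusters on historical in-spec readings and then flag any new reading that sits further from every cluster centre than anything seen historically, i.e., individually in-spec values combining in a way never observed before.

import numpy as np
from sklearn.cluster import KMeans

# Historical multivariate sensor readings, all in spec: the first two
# sensors have always moved together, plus an independent third sensor.
rng = np.random.default_rng(seed=3)
base = rng.normal(size=2000)
historical = np.column_stack([
    base + 0.1 * rng.normal(size=2000),
    base + 0.1 * rng.normal(size=2000),
    rng.normal(size=2000),
])

kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(historical)
threshold = kmeans.transform(historical).min(axis=1).max()  # novelty cut-off

# Each value is within its historical range, but sensor 1 high while
# sensor 2 low is a combination that has never occurred before.
new_reading = np.array([[2.0, -2.0, 0.0]])
if kmeans.transform(new_reading).min(axis=1)[0] > threshold:
    print("Novel combination of otherwise in-spec readings; investigate.")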

Summary

Advanced AI systems can greatly enhance existing Pharma activities. This is because, with the right computational power available and thanks to their ability to process enormous amounts of data, we can start to solve more global problems, connecting different verticals and business units in ways that are much more holistic than ever before. However, we need to be careful how we design, test and continuously monitor such systems. This is because they have a tendency to overfit the datasets used for their training and are usually not able to generalise strongly as conditions in the environments in which they operate change in ways that are not represented and captured in the datasets. In addition, there are important ethical components that are going to become more prominent as we start to build advanced AI systems able to mimic broader human cognitive capabilities. In Pharma, considering the importance of many decisions related to the preservation and betterment of human lives, we should use advanced AI systems to accelerate and enhance human activities and ways of working, but not think of them as replacements for our higher cognitive functions. Hybrid models have great advantages and will be the models most readily adopted by non-data-science domain knowledge experts: such experts will be reassured using them because they will understand that they, the experts, remain critical to the design and use of, and interaction with, such models. Finally, from the regulatory point of view, advanced AI systems will probably always be seen as just tools and not as fully sentient. Humans will therefore remain fundamental in using such systems, will have to continue to validate their choices, and will leverage them to accelerate and improve many important and complex activities impossible for humans to tackle alone.

--Issue 51--

Author Bio

Fausto Artico

Fausto has two PhDs (in Information and Computer Science, respectively), earning his second master's degree and PhD at the University of California, Irvine. He also holds multiple certifications from MIT. As a physicist, mathematician, engineer, computer scientist, and High-Performance Computing (HPC) and Data Science expert, Fausto has worked on key projects at European and American government institutions and with key individuals, such as Nobel Prize winner Michael J. Prather. After his time at NVIDIA Corporation in Silicon Valley, Fausto worked at the IBM T. J. Watson Research Center in New York on exascale supercomputing systems for the US government (e.g., the Livermore and Oak Ridge labs).

Kevin Harrigan

Kevin graduated with a BS in Aerospace Engineering from Pennsylvania State University. During his collegiate career he gained experience in a co-op position with Capital One Financial as a Data Analyst in Richmond, Virginia. It was this experience that afforded him the opportunity to discover a passion for data munging, applied statistics and programming. Following graduation, he accepted a full-time offer in Capital One's newly formed Digital Enterprise Organization, expanding his technical and analytical knowledge in areas such as distributed computing, clickstream analytics, multivariate testing, anomaly detection and propensity modelling.
