As the everyday world becomes immersed in technology and we grow increasingly dependent on computers for day-to-day tasks, the field of life sciences is following the same path of utilizing artificial intelligence (AI). Biological companies, especially those providing products for sale, such as medical equipment, or those in the service industry, are beginning to rely on AI to increase performance efficiency.
In fact, companies that participated in the McKinsey Analytics 2020 Annual State of AI Survey reported revenue increases of 60-80% in their marketing and sales and service operations departments after adopting AI practices. High-performing AI users stated that AI delivers more efficient and effective results when three conditions are in place: well-designed and well-executed strategies for adopting the technology, appropriately trained and educated personnel to oversee AI management, and regular updates and proper data fed into the AI so it can operate at optimal capacity.
One of the most significant advantages of adopting AI programs is the level of customization available for the individual needs of a company. The data and production needs of a medical research company are vastly different from those of an environmental systems laboratory. AI allows companies to tailor the technology to their specific requirements, whether on the digital front of data management and storage or in the physical realm of machines used for production and manufacturing. Most companies employing AI also find it preferable and feasible to train their own IT technicians to implement, manage, and troubleshoot the system, which proves more cost-effective than hiring external experts.
Despite all of the advantages of adopting AI practices, there are still risks in allowing computer-driven systems to take over vital aspects of a life science company's production and operations. Cybersecurity, regulatory compliance, and privacy protection remain the top risks of running an AI program. Weighing these risks tends to put biological companies into two camps: those willing to take the risks and work to mitigate them in order to employ AI, and those not comfortable with the level of risk, who choose not to use AI.
Companies choosing to implement AI systems generally find that the rewards outweigh the risks when time and resources are invested appropriately in understanding how the system works and what benefits it can provide.
Artificial intelligence in its most basic form can perform simple tasks and follow basic logical rules such as “If X is true, then Y should occur.” However, as technology advances, this simplistic application quickly becomes obsolete; studies have shown that the human brain can interpret far more complicated problems than a basic AI system can handle.
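The “If X is true, then Y should occur” style of logic can be sketched in a few lines of code. The function, threshold values, and sample names below are purely illustrative placeholders, not taken from any real laboratory system:

```python
# A minimal sketch of rule-based "AI": a single hard-coded if/then rule.
# The pH range and labels here are hypothetical examples.

def flag_sample(ph_reading: float) -> str:
    """If X is true (pH out of range), then Y should occur (flag the sample)."""
    if ph_reading < 6.5 or ph_reading > 7.5:
        return "flag for review"
    return "within range"

print(flag_sample(8.2))  # flag for review
print(flag_sample(7.0))  # within range
```

A system built entirely from rules like this can only ever respond to conditions its programmers anticipated, which is exactly the limitation machine learning addresses.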
Machine learning (ML) takes artificial intelligence to the next level. By building models based on empirical data, machine learning can tackle more complex, multi-dimensional problems. Thus, through the implementation of ML, we can achieve new technologies such as self-driving cars and voice-activated smart systems like Alexa and Siri.
Machine learning follows a four-step process. First, data is acquired for modeling and training purposes. Second, that data is normalized and prepared. Third, a model is built and trained on the data over multiple iterations. Finally, the results of those model runs are interpreted to reveal patterns and generate predictions.
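The four steps above can be sketched with a toy model. This is a minimal illustration, not a production pipeline: the dataset, normalization scheme, and learning rate are all assumed for the example, and the model is a simple linear fit trained by gradient descent:

```python
# A minimal sketch of the four-step ML process, using a toy linear model.

# Step 1: acquire data (here, a toy dataset roughly following y = 2x + 1).
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 2.9, 5.2, 6.8, 9.1]

# Step 2: normalize the inputs so training behaves well.
x_max = max(xs)
xn = [x / x_max for x in xs]

# Step 3: build the model and run multiple training iterations.
w, b, lr = 0.0, 0.0, 0.1
for _ in range(2000):
    # gradients of mean squared error with respect to w and b
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xn, ys)) / len(xs)
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xn, ys)) / len(xs)
    w -= lr * grad_w
    b -= lr * grad_b

# Step 4: interpret the trained model and make a prediction.
def predict(x: float) -> float:
    return w * (x / x_max) + b

print(round(predict(5.0), 1))  # extrapolates close to 11
```

Real systems swap the toy model for far more complex ones, but the acquire, normalize, iterate, interpret loop stays the same.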
With technology advancing at an exponential pace, advanced machine learning, otherwise known as deep learning, is not only becoming possible but is also starting to be applied in life science fields like crop research and pharmaceuticals. Deep learning (DL) is the process by which an AI goes beyond interpreting and analyzing datasets and begins to formulate novel solutions and new ways of approaching scientific problems. These DL programs have already proven foundationally integral to the life sciences. For example, AlphaFold, a program developed by DeepMind, solved the problem of predicting three-dimensional protein folding – a scientific question that had gone unanswered for over fifty years.
For life science fields, this kind of technology enables artificial intelligence-aided research and development, active machine learning while production processes are being implemented, and a level of efficiency that humans alone could not achieve. In addition to machine learning's computational ability to aid research, laboratory information management systems (LIMS) can work cohesively with artificial intelligence, especially toward the goal of efficient data management for complex workflows.
Overall, the life science companies choosing to implement AI practices are being rewarded by the results, especially in light of the COVID-19 pandemic.
Ever since the major shutdowns of countless companies during the 2020 COVID-19 pandemic, AI has been more readily accepted in production and development fields. The need to keep essential businesses running, especially in the medical, pharmaceutical, and energy sectors, became uncomfortably apparent when the pandemic forced workers to stay home and production slowed or even halted altogether at most companies.
With the advent of AI and automated production machines that can learn programs and execute production functions historically performed by human beings, the need for workers on the production floor has dramatically decreased. Additionally, data storage and management can be handled remotely through a cloud-linked LIMS, so onsite data management and bulky manual data storage have largely been eliminated. This allows companies to cut costs while still maintaining a high production rate despite emergencies that may require people to avoid physically going to work.
The COVID-19 pandemic has also brought to light the need for processes such as accurate disease identification, predictive forecasting of epidemic outbreaks both geographically and temporally, and rapid development and manufacturing of effective vaccines. All of these needs can be addressed more effectively with artificial intelligence than with current methods.
Not only are simple AI programs being implemented; machine learning – a higher level of artificial intelligence that can compute more complex algorithms and use predictive reasoning – has also become an up-and-coming technology with many promising uses.
Pharmaceutical research has never been more critical, especially in light of the recent COVID-19 pandemic and the diseases and conditions that still plague humankind without effective treatments. On average, it costs approximately $2 billion to develop a new drug, a process that takes ten to fifteen years from discovery to market. This statistic does not even account for the fact that only 1 in 5,000 to 10,000 compounds produced during the research and development phase moves on to clinical trials, and that 90% of drugs tested in clinical trials fail due to ineffectiveness or health hazards.
In addition to the hurdles that drug discovery must overcome, the pharmaceutical market is saturated with known compounds that are tried and true for common illnesses but ineffective against more complex and challenging diseases. However, the search for new and original drug compounds is not an area many are willing to invest in because of the high rate of failure.
This is where machine learning can help the pharmaceutical field grow by leaps and bounds. Human limitations mean that testing thousands of compounds, whether in hypothetical or empirical environments, can take years and cost billions of dollars; the same process, run by AI, can cut both dramatically. Traditionally, discovering an effective compound in the research and development stage takes about six years and $1 billion. With ML, that timeframe could be cut in half to roughly three years, at a cost of approximately $600 million.
AI programs would be able to sort through and discard unlikely candidates faster than humans and may even discover novel compounds or new approaches to a disease that the human brain has not yet been able to unlock. These programs would also be able to communicate directly with a laboratory information management system to catalog unlikely, potential, and successful drug candidates and run data analysis on them, further reducing errors in data management.
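The triage step described above might look something like the following sketch. The compound IDs, the "predicted efficacy" scores, the field names, and the thresholds are all hypothetical placeholders; a real system would pull scores from a trained model and write the resulting bins back to the LIMS:

```python
# A hedged sketch of ML-assisted candidate triage before LIMS cataloging.
# All names, scores, and thresholds here are illustrative assumptions.

candidates = [
    {"id": "CMP-001", "predicted_efficacy": 0.91},
    {"id": "CMP-002", "predicted_efficacy": 0.12},
    {"id": "CMP-003", "predicted_efficacy": 0.67},
]

def triage(compounds, promote_at=0.8, discard_below=0.3):
    """Sort candidates into successful / potential / unlikely bins."""
    bins = {"successful": [], "potential": [], "unlikely": []}
    for c in sorted(compounds, key=lambda c: c["predicted_efficacy"], reverse=True):
        if c["predicted_efficacy"] >= promote_at:
            bins["successful"].append(c["id"])
        elif c["predicted_efficacy"] < discard_below:
            bins["unlikely"].append(c["id"])
        else:
            bins["potential"].append(c["id"])
    return bins

print(triage(candidates))
# each bin could then be cataloged in the LIMS as a separate record set
```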
The LIMS acts as the central repository for the data produced in the lab, providing a direct mechanism that enables AI/ML to mine through quality data, allowing managers to make decisions based on insights. “For laboratories, especially those at biopharmaceutical companies, algorithms can speed up product development. This, in turn, improves the development of critical new medicines to patients”.
The automated data management of complex workflows in STARLIMS can provide the data analysis that Life Sciences companies need as they move into the new world of AI/ML-aided research.
1. The State of AI in 2020. McKinsey & Company. https://www.mckinsey.com/business-functions/mckinsey-analytics/our-insights/global-survey-the-state-of-ai-in-2020
2. Ahmad, A. S. and Sumari, A. D. W. (2017). Cognitive artificial intelligence: brain-inspired intelligent computation in artificial intelligence. In 2017 Computing Conference, pp. 135-141. IEEE.
3. The Increasing Use of AI in the Pharmaceutical Industry. Forbes.
4. Harnessing Data Science and AI for Drug Development Innovation. https://www.pharmaceuticalonline.com/doc/harnessing-data-science-and-ai-for-drug-development-innovation-0001
5. Ibid. (ref. 4)
6. Ibid. (ref. 4)
7. LIMS Trends 2020: Digital Transformation Drives Changes in Laboratories. https://solution4labs.com/en/blog/lims/lims-trends-2020