
Using Artificial Intelligence And High Performance Computing To Speed Up Scientific Discovery And Drug Development

by Bernard Marr

The human body is an incredibly complex machine, and no one has come close to creating a complete model of how it works. This is a big challenge when it comes to developing drugs and medicines that interact with the organism in order to fight diseases like cancer or Alzheimer’s.

The interactions between a drug and biological molecules within a cell are particularly difficult to model, making predictions about efficacy – the potential beneficial capacity of a chemical substance within an organism – a complex process.

Studying and analyzing these processes requires quantum chemistry – calculating the states that occur during chemical reactions at an atomic level – and has traditionally been hugely expensive in terms of the computing power required.

Today, leading institutions are harnessing the power of artificial intelligence (AI) – specifically machine learning and deep learning – along with the newest generation of high-performance computing (HPC) platforms.

This not only allows simulations to run much more quickly, generating a greater amount of valuable data, but also means a richer and more varied range of datasets can be analyzed. The result is models – simulations – that more accurately resemble the scientific reality we are attempting to understand.

I recently spoke with Dr. Markus Oppel of the University of Vienna about ways AI and HPC are revolutionizing scientific research and improving our fundamental understanding of how our organisms work.

A key focus is understanding photochemical reactions, such as how light influences and even activates the reactions between drugs and cellular molecules – for example, a cancer drug implanting parts of itself into a cell that can destroy the cancer.

“If you can understand how this is going on, you can improve it,” Dr. Oppel tells me. “First there’s understanding, and then you can hopefully learn from it and try to improve things … [but] we need some completely new techniques of modeling and scientific insight into this process.”

Dr. Oppel works on the theoretical side of science, which means he spends his time in front of a computer rather than in a laboratory. His work focuses on creating simulations that will later be applied by lab chemists in practical experiments and trials.

As in just about every industry and field of research, he and his colleagues are thinking about how to use neural networks, and especially deep learning, to replace the “heavy quantum chemistry.” This will mean getting data from a trained network in minutes or seconds, rather than having to run calculations on huge clusters that could traditionally take hours or days.

One project currently being undertaken by two Ph.D. students at the university involves trying to trace the action of drugs within a cell. The usual way of doing this essentially involves shining a light onto the cell and studying the reaction with microscopy. This requires time, as it involves working to the timescale of the interaction between the drug and cell molecules, and data collection is a slow process. Now, neural networks are being created to enable accurate simulation of the process, generating data far more quickly and potentially speeding up the processes of drug discovery and development.
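The surrogate idea described above – replacing an expensive physics calculation with a trained network that answers near-instantly – can be sketched in a few lines. The following is a minimal illustration, not the Vienna group’s actual code: the “quantum chemistry” here is just a toy harmonic bond-stretch potential E(x) = x², and all function names are hypothetical.

```python
import numpy as np

# Toy stand-in for an expensive quantum chemistry calculation: a harmonic
# bond-stretch potential E(x) = x^2. In a real workflow, each evaluation
# would take hours of ab initio computation on a cluster.
def reference_energy(x):
    return x ** 2

rng = np.random.default_rng(0)

# Step 1: generate training data with the "expensive" reference method.
X = rng.uniform(-2.0, 2.0, size=(200, 1))
y = reference_energy(X)

# Step 2: train a small feedforward network as a fast surrogate.
W1 = rng.normal(0.0, 1.0, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.1, (16, 1)); b2 = np.zeros(1)
lr = 0.05

for _ in range(10_000):
    h = np.tanh(X @ W1 + b1)      # hidden-layer activations
    pred = h @ W2 + b2            # predicted energies
    err = pred - y
    # Backpropagate the mean-squared-error gradient (full batch).
    gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)
    gh = (err @ W2.T) * (1.0 - h ** 2)
    gW1 = X.T @ gh / len(X); gb1 = gh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

def surrogate(x):
    """Near-instant energy prediction from the trained network."""
    x = np.atleast_2d(x)
    return np.tanh(x @ W1 + b1) @ W2 + b2

# The trained surrogate now approximates the reference potential.
mse = float(np.mean((surrogate(X) - y) ** 2))
```

Once trained, querying the network costs microseconds per geometry, while each reference calculation it replaces could take hours – which is the source of the speed-up the researchers describe.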

The group’s collaboration with tech leaders, including Hewlett Packard Enterprise (HPE) and NVIDIA, gives them access to some of the most powerful computing infrastructure available for modeling complex systems and analyzing the vast datasets that result.

Dr. Oppel says, “The great thing about these artificial intelligence and deep learning frameworks is, they simply run, almost out of the box.” For neural network-based simulations, the group uses HPE Apollo 6500 servers with NVIDIA GPUs, which accelerate the training of the networks in particular by several orders of magnitude. The traditional, heavy quantum chemistry calculations usually run on an HPE Apollo 2000 cluster, while memory-intensive jobs go to an HPE Superdome Flex.

The huge amount of data generated – just running a simulation for a couple of hours results in terabytes of information – creates challenges, too. Only a small proportion of this data will be immediately useful for the project that’s creating it, but it still has to be stored at least until it has been analyzed, and possibly for far longer if it’s potentially useful for future projects. Due to the publicly funded nature of much of the work, there are also pressures to make the data publicly available. This alone can require big infrastructure investments to publish and maintain openly accessible datasets of the volumes involved.

“Even if we had the storage space, who can profit from it?” asks Dr. Oppel.

“It’s a huge amount of data that we’re producing, and the only companies that at the end of the day would be able to use this amount of data would be big cloud providers – they have the infrastructure and can train their own neural networks, but I’m curious if they would then publish [their research] openly or if they would just try and make money out of it.”

Greater access to the kind of HPC and AI technology that the University of Vienna is working with across the global scientific and research communities would clearly enable other academic institutions – rather than just Silicon Valley giants – to benefit from the insights that can be pulled from these potentially immensely valuable datasets.

Interestingly, all of the required infrastructure is based on-premises, and the reasons for staying out of the cloud are technological rather than the more frequently cited legal or regulatory ones: “I get emails every 10 minutes from marketing people wanting to put me in the cloud – for our purposes, it’s not possible.

“Maybe we could do the computation in the cloud – it would be painful but not impossible. But then we have the huge amounts of data … it’s a matter of cost-efficiency. At the end of the day, you look at the bill and think, ‘I could have bought a big system of my own and put it on premises.’ We don’t do any cloud … not because of legal issues but for practical reasons.”

The complex interactions between chemicals and living organisms are still an area of great mystery – as fascinatingly unknown as the fundamental physics that form the structure of our universe.

There are clearly many great scientific discoveries yet to be made, in every field, before we can say with any certainty that we understand the reality that we live in. Dr. Oppel and his fellow researchers seem confident that AI, deep learning, and neural networks are an important stepping-stone on our road to achieving that understanding.

I’m certainly very excited about seeing the practical results that will come from this mix of theoretical chemistry and cutting-edge technology. And what about quantum computing? Surely it’s a natural partner for quantum chemistry?

“Quantum chemistry is always one of the first examples that people make when they talk about the potential of quantum computing,” Dr. Oppel says.

“I think we will see some real use cases in the near future. We never know for sure – no one can see the future. But if I had told people ten years ago that I would be running all of my calculations on graphics cards, they would have said I was crazy. So no one knows for sure what will happen in the next five or ten years, but if I have to bet, I would say yes – quantum computing is on the rise, and we will see applications of that.”
