AI Monthly digest #3 – artificial intelligence in science getting big
November brought a lot of significant AI-related breaking news. Machine learning and deep learning models were folding proteins, deriving the laws of physics from fictional universes and mimicking the human brain in a way never before seen.
This edition of AI Monthly digest looks at scientific advances made by AI with significant support from tech giants. Stories from November show that AI is clearly a great tool for improving countless lives – and the protein-folding model is the best example, solving a problem long considered practically unsolvable, or at least intractable within the lifetime of our universe.
1. DeepMind’s AlphaFold wins Protein-Folding Contest
It’s a bit ironic that the protein-based human brain designed silicon-based tools to better predict protein folding. That’s precisely the case with AlphaFold, the neural network designed by DeepMind to predict the shape of a protein from its genetic sequence.
The basic building blocks of all known life forms, proteins come in many forms, shapes and levels of complexity. Their shape and function are encoded in DNA. Yet DNA specifies only the sequence of amino acids that make up a protein; how that chain folds into a three-dimensional structure is not spelled out anywhere. It is therefore very hard to establish what a protein will look like. As Levinthal’s paradox observes, enumerating all the possible configurations of a typical protein before reaching the right structure would take longer than the age of the universe. Yet the shape of a protein is crucial in the treatment and diagnosis of numerous diseases, including Alzheimer’s, Parkinson’s and Huntington’s.
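For a sense of scale, here is the usual back-of-envelope version of the paradox. All the numbers are the standard illustrative assumptions (a 100-residue protein, two rotatable backbone bonds per residue, three stable states per bond, one conformation sampled per picosecond), not measured values:

```python
# Back-of-envelope Levinthal estimate; every constant here is the standard
# illustrative assumption, not a measured value.
conformations = 3 ** (2 * 100)            # 3 states/bond, 2 bonds/residue, 100 residues
search_seconds = conformations * 1e-12    # sampling one conformation per picosecond
age_of_universe = 4.3e17                  # seconds, roughly 13.8 billion years
print(f"{conformations:.2e} possible conformations")
print(f"exhaustive search: {search_seconds / age_of_universe:.2e} times the age of the universe")
```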
Thanks to the relatively large amount of gene-related data available, DeepMind was able to build a neural network that predicts the shape of a protein from its sequence alone. The network estimates the distances between pairs of amino acids and the angles between the chemical bonds that connect them, and these estimates are used to judge how plausible a candidate shape or fold is.
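To make that idea concrete, here is a toy sketch (not DeepMind’s code – the sizes and data are invented for illustration): given a predicted matrix of pairwise amino-acid distances, candidate 3D structures can be scored by how closely their actual pairwise distances match the prediction.

```python
import numpy as np

def pairwise_distances(coords: np.ndarray) -> np.ndarray:
    """Distance between every pair of residues in a candidate structure."""
    diff = coords[:, None, :] - coords[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

def score(candidate: np.ndarray, predicted: np.ndarray) -> float:
    """Lower is better: mean squared mismatch with the predicted distances."""
    return float(((pairwise_distances(candidate) - predicted) ** 2).mean())

rng = np.random.default_rng(42)
n_residues = 10                                   # toy size; real proteins are far larger
true_coords = rng.normal(size=(n_residues, 3))    # stand-in for the "real" fold
predicted = pairwise_distances(true_coords)       # stand-in for the network's output

# A random candidate scores far worse than a slightly perturbed version of the truth.
random_candidate = rng.normal(size=(n_residues, 3))
near_candidate = true_coords + rng.normal(scale=0.05, size=(n_residues, 3))
print(score(random_candidate, predicted))   # large mismatch
print(score(near_candidate, predicted))     # small mismatch
```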
The results are widely seen as one of the most significant breakthroughs in the field and as unprecedented progress on the protein-folding problem.
To read more about the model and how it works, see DeepMind’s recent blog post.
2. Towards building an AI Physicist
A physicist’s job is to create models of the universe that give us a better understanding of the reality surrounding us. Newton, Galileo, Archimedes and many others had to convert measurements and observations into the fundamental laws of the universe. The ability to distill important answers from data is the best proof of their genius.
Last month, MIT’s Tailin Wu and Max Tegmark presented work on an “AI Physicist” that derives the laws of physics in artificial worlds. To crack these mysterious environments, the AI agent uses four strategies embedded in its architecture: divide-and-conquer, Occam’s razor, unification and lifelong learning, which together allow it to discover and manipulate theories without supervision. The divide-and-conquer strategy is sketched below.
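As a rough illustration of the divide-and-conquer idea (a toy sketch with invented data, not the paper’s code): two simple candidate “theories” compete for data points drawn from a world with two different linear laws; each point is assigned to the theory that predicts it best, and each theory is then refit only on the points it wins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated world: two regimes governed by different linear laws.
x = rng.uniform(-1, 1, size=200)
y = np.where(x < 0, 2.0 * x + 1.0, -0.5 * x + 0.2)
y = y + rng.normal(scale=0.01, size=x.shape)

# Two candidate linear theories, y ≈ a*x + b, with random initial (a, b).
theories = rng.normal(size=(2, 2))

for _ in range(20):
    # Divide: assign each point to the theory that predicts it best.
    preds = theories[:, 0:1] * x + theories[:, 1:2]   # shape (2, n_points)
    owner = ((preds - y) ** 2).argmin(axis=0)
    # Conquer: refit each theory on the points it owns.
    for t in range(2):
        mask = owner == t
        if mask.sum() >= 2:
            theories[t] = np.polyfit(x[mask], y[mask], deg=1)

print(theories)   # each row is the fitted (slope, intercept) of one recovered law
```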
The system produced correct theories for various simulated worlds created specifically to test it. The success is significant from a scientific point of view, as it suggests that neural networks may be tools that speed up the physical sciences in more ways than we ever expected.
The arXiv paper can be read here.
3. SpiNNaker – Human brain supercomputer runs for the first time
Building neural networks is about mimicking the human brain’s activity and processing abilities. The most common artificial neural networks communicate constantly, exchanging information as a steady stream of values. A spiking neural network is more “biological”: its neurons stay silent until stimulated past a threshold, then communicate in short spikes, exchanging information in a “flash”. Instead of streaming information from point A to point B, the architecture supports sending many small packets of information in parallel. A minimal model of a single spiking neuron is sketched below.
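For illustration, here is a minimal leaky integrate-and-fire neuron, the classic textbook spiking-neuron model. All constants are illustrative, and SpiNNaker itself is dedicated hardware, not a script like this:

```python
import numpy as np

# Minimal leaky integrate-and-fire (LIF) neuron: the membrane potential
# leaks toward rest, integrates incoming current, and emits a discrete
# spike whenever it crosses the threshold.
dt = 1.0          # time step, ms
tau = 20.0        # membrane time constant, ms
v_rest = 0.0      # resting potential
v_thresh = 1.0    # firing threshold
v_reset = 0.0     # potential right after a spike

# Input: silence for 20 ms, then a constant driving current.
input_current = np.concatenate([np.zeros(20), 0.08 * np.ones(180)])

v = v_rest
spike_times = []
for t, i_in in enumerate(input_current):
    v += dt / tau * (v_rest - v) + i_in   # leak toward rest, integrate input
    if v >= v_thresh:                     # threshold crossed: fire ...
        spike_times.append(t)
        v = v_reset                       # ... and reset the membrane
print(f"spike times (ms): {spike_times}")
```

The neuron is silent while the input is off, then fires at regular intervals once the current pushes it past threshold – exactly the sparse, event-driven communication spiking hardware is built around.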
This design is realized in SpiNNaker (Spiking Neural Network Architecture), built at the University of Manchester’s School of Computer Science, backed by EPSRC and supported by the European Human Brain Project. SpiNNaker’s most significant promise is that it can run a working, small-scale model of the human brain, with the manifold scientific possibilities that brings.
More information can be found here.
4. New ImageNet state-of-the-art with GPipe
Since the field’s beginnings, the greatest challenge in computer science has been insufficient computing power. Multiplying the number of cores is one way to address the problem; optimizing the software is another. The challenge is especially acute for neural networks, as training a new one requires gargantuan computing power and no less gargantuan amounts of time.
Google Brain’s GPipe is thus a significant improvement, one that makes training neural networks more cost-effective. With GPipe, networks with significantly more parameters can be trained, and that leads to better results.
GPipe combines data parallelism and model parallelism, with a high level of automation and memory optimization: the model is split into sequential stages spread across accelerators, and each mini-batch is divided into micro-batches so that the stages can work concurrently (a toy version of the idea is sketched below). In the paper, the researchers scaled AmoebaNet from 155.3 million to 557 million parameters and fed it 480×480 ImageNet images as input. The result was an improvement in ImageNet Top-1 accuracy (84.3% vs 83.5%) and Top-5 accuracy (97.0% vs 96.5%), making the solution the new state of the art.
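Here is that pipeline idea as a toy sketch (plain Python; the stages, device labels and functions are invented stand-ins, and a real pipeline overlaps stage execution across accelerators instead of looping serially):

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Stage:
    device: str                        # hypothetical device label, e.g. "gpu:0"
    forward: Callable[[float], float]  # stand-in for one chunk of the network

def pipeline_forward(stages: List[Stage], batch: List[float], n_micro: int) -> List[float]:
    """Split a mini-batch into micro-batches and push each through all stages."""
    size = max(1, len(batch) // n_micro)
    micro_batches = [batch[i:i + size] for i in range(0, len(batch), size)]
    outputs = []
    # A real pipeline runs these iterations concurrently on different devices;
    # we run them serially here just to show the data flow.
    for mb in micro_batches:
        for stage in stages:
            mb = [stage.forward(x) for x in mb]
        outputs.extend(mb)
    return outputs

stages = [
    Stage("gpu:0", lambda x: x * 2.0),   # stand-in for the first half of the model
    Stage("gpu:1", lambda x: x + 1.0),   # stand-in for the second half
]
print(pipeline_forward(stages, [1.0, 2.0, 3.0, 4.0], n_micro=2))  # [3.0, 5.0, 7.0, 9.0]
```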
5. Enterprise machine learning done right – Uber shows its best practices
Machine learning is a recent development, so there are no proven and tested methodologies for building new AI-based products, nor established best practices with which to dive in.
Uber’s recent blog post shared an interesting vision of how to embed an AI-driven culture in a company. Instead of building one large ML project to perform one task at enterprise scale, Uber powers its teams with data scientists and looks for ways to automate their daily work. The company has now carried out a dozen projects across a variety of teams – from ranking Uber Eats menu items and marketplace forecasting to customer support.
On the back of this strategy, Uber has gone from a company that did not use machine learning at all to one heavily infused with AI-based techniques. More details about ML applications at Uber can be found in their blog post.
Summary
AI, machine learning and deep learning tend to be dismissed as buzzwords, applied only by tech giants to solve their own hermetic problems. Folding proteins, deriving the laws of physics and simulating the brain as it really works all show that AI is a rule-breaker and a disruptor in any field it is applied to – even fields long dominated by the boldest minds of all time.