AI Monthly Digest #10 – AI tackles climate change and deciphers long-forgotten languages
June brought record-breaking temperatures, perfectly highlighting the global challenge of climate change. Is that AI-related news? Check and see in the latest AI Monthly Digest.
A common misconception about machine learning projects is that they are by definition big. In reality, any number of AI-powered micro-tweaks and improvements are applied in everyday work. A good example of how both micro and macro tweaks can fix a major problem can be found in the paper described below.
AI tackling climate change
The world witnessed an extraordinarily hot June, with average temperatures in Europe 2 degrees Celsius above normal. According to the World Meteorological Organization, the heatwave is consistent with predictions based on greenhouse gas concentrations and human-induced climate change.
Tackling this challenge will not be easy: according to World Bank data, fossil fuels still account for 79% of total energy consumption. Furthermore, greenhouse gases, particularly methane, are emitted by cattle, with livestock responsible for 14.5% of total human-induced greenhouse gas emissions.
The most prominent figures in AI today, including DeepMind CEO Demis Hassabis, Turing Award winner Yoshua Bengio, and Google Brain co-founder Andrew Ng, have authored a comprehensive paper on ways that AI can tackle the changing climate.
Their call for collaboration is meant to inspire practitioners, engineers and investors to deliver short- and long-term solutions for measures within our reach. Those include producing low-carbon electricity through better forecasting, scheduling and control of variable energy sources; mitigating the damage produced by high-carbon economies through, for example, better predictive maintenance; and helping to minimize energy use in transportation, smart buildings and cities. The applications range from designing grid-wide control systems to optimizing scheduling with more accurate demand forecasting, as sketched below.
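To make that last point more concrete, here is a minimal, illustrative sketch of short-term electricity demand forecasting from lagged load values. The synthetic data and feature choices are assumptions made purely for illustration; they are not taken from the paper.

```python
# Toy short-term demand forecasting: predict the current hourly load from
# the load 1, 2 and 24 hours earlier. All data here is synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Synthetic hourly load with a daily cycle plus noise (stand-in for real meter data)
hours = np.arange(24 * 60)
load = 100 + 20 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 3, hours.size)

# Lag features: load at t-1, t-2 and t-24 predicts load at t
lags = [1, 2, 24]
X = np.column_stack([load[24 - lag:-lag] for lag in lags])
y = load[24:]

# Train on roughly the first 50 days, evaluate on the rest
split = 24 * 50
model = GradientBoostingRegressor().fit(X[:split], y[:split])
mae = np.mean(np.abs(model.predict(X[split:]) - y[split:]))
print(f"Hold-out mean absolute error: {mae:.2f}")
```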
Why does it matter
Climate change is one of the greatest challenges mankind faces today, with truly cataclysmic scenarios approaching. Further temperature increases may lead to a variety of disasters, from the flooding of coastal regions due to melting ice caps to agricultural crises and conflicts over access to water.
Green energy promises solutions, yet these are not without their challenges, many of which could be solved with machine learning, deep learning or reinforcement learning. Responsibility is among deepsense.ai’s most important AI trends, and being responsible for the planet would be an excellent example of just why we chose to focus on that trend.
We will provide more in-depth content on climate change and AI-powered ways of tackling it. So stay tuned!
Giants racing to produce the best image recognition
If machine learning is today’s equivalent of the steam engine revolution, data and hardware are the coal and engine that power the machines. Facebook and Google are like the coal mines of yesteryear, having access to large amounts of fuel and power to build new models and experiment.
It should come as no surprise that breakthroughs are usually powered by the tech giants. Google’s recent state of the art in image recognition, EfficientNet, is a giant step forward. The base architecture was found with an automated search procedure, then scaled up by uniformly increasing the network’s depth, width and input resolution with a single compound coefficient to find the best combination.
EfficientNet lives up to its name: it matches or beats previous architectures while using far fewer parameters and computations. The result is state-of-the-art image recognition, at least when it comes to combining efficiency and accuracy, though not when it comes to accuracy alone.
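As a minimal sketch of the compound-scaling idea: a single coefficient phi grows depth, width and resolution together. The constants below are those reported in the EfficientNet paper, but the code itself is only illustrative, not the official implementation.

```python
# Illustrative EfficientNet-style compound scaling: one coefficient (phi)
# scales depth, width and input resolution together, under the constraint
# alpha * beta**2 * gamma**2 ≈ 2 (constants as reported in the paper).
ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15

def compound_scale(phi: int, base_resolution: int = 224):
    """Return (depth multiplier, width multiplier, input resolution) for a given phi."""
    depth = ALPHA ** phi                                 # more layers
    width = BETA ** phi                                  # more channels per layer
    resolution = round(base_resolution * GAMMA ** phi)   # larger input images
    return depth, width, resolution

for phi in range(4):
    d, w, r = compound_scale(phi)
    print(f"phi={phi}: depth x{d:.2f}, width x{w:.2f}, input {r}px")
```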
Not even a month later, Facebook delivered a model that outperformed Google’s. The key lay in the enormous dataset it was trained on. The social media giant has access to Instagram’s database, which holds billions of user-tagged images, a dataset ready to be chewed over by a hungry deep learning model.
The neural network was released to the public via PyTorch Hub, a recently launched platform for sharing cutting-edge models.
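Fetching such a model typically takes a couple of lines. The sketch below assumes the weakly supervised ResNeXt entrypoint that Facebook published on PyTorch Hub; treat the exact identifiers as an assumption to verify against the Hub listing.

```python
# Minimal sketch of loading a pretrained model from PyTorch Hub.
# The repository/entrypoint names are assumptions to check against the Hub listing.
import torch

# Downloads the weights on first use and returns a ready-to-use nn.Module
model = torch.hub.load("facebookresearch/WSL-Images", "resnext101_32x8d_wsl")
model.eval()

# Dummy forward pass on a single 224x224 RGB image tensor
with torch.no_grad():
    logits = model(torch.randn(1, 3, 224, 224))
print(logits.shape)  # torch.Size([1, 1000]) -- ImageNet class scores
```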
Why does it matter
Both advances show how important machine learning is for the tech giants and how much effort they invest in pushing their research forward. Every advance in image recognition brings new breakthroughs closer. For example, models are becoming more accurate at detecting diabetic retinopathy from images of the eye. Every further development delivers new ways to solve problems that would be unsolvable without machine learning, with visual quality control in manufacturing among the best examples.
XLNet outperforms BERT
As we noted in a past AI Monthly Digest, Google has released Bidirectional Encoder Representations from Transformers (BERT). BERT was, until recently, the state of the art on natural language processing benchmarks. The newly announced XLNet is an autoregressive pretraining method (as opposed to the autoencoder-like BERT) that learns a language model by predicting each word in a sequence from the surrounding words, considered over permutations of their order. An intuitive explanation can be found here.
The XLNet model proved more effective than BERT, beating it on all 20 benchmark tasks.
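A toy sketch of the permutation idea (not the actual XLNet code) is shown below: for a sampled factorization order, each token is predicted from the tokens that precede it in that order, so across different permutations the model effectively sees context on both sides of a word.

```python
# Toy illustration of permutation language modelling (the idea behind XLNet).
import random

tokens = ["the", "heatwave", "is", "consistent", "with", "predictions"]
positions = list(range(len(tokens)))

random.seed(0)
order = random.sample(positions, k=len(positions))  # one sampled factorization order

for step, pos in enumerate(order):
    seen = sorted(order[:step])                      # positions already "visible"
    context = [tokens[p] for p in seen]
    print(f"predict '{tokens[pos]}' (position {pos}) given {context}")
```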
Why does it matter
Understanding natural language has long been considered a benchmark of intelligence, with the Turing test being the best-known example. Every push forward delivers new possibilities for building new products and solving problems, be they business ones or something more uncommon, like the example below.
AI-powered archeology? Bring it on!
Deep learning-based models are getting ever better at understanding natural language. But what about a language that is natural, yet has never been deciphered due to a lack of knowledge or a frustratingly small amount of extant text?
Recent research from MIT and Google shows that a machine learning approach can deliver major improvements in deciphering ancient texts. At the core of modern natural language processing techniques lies the assumption that all of the words in a given text are related to each other. The machine doesn’t “understand” the text in a human way, but rather forms its own representations based on the relations and connotations of each word in a sentence.
In this approach, the translation process is not built on understanding the world, but rather on finding similarly connotated words that convey the same message. This is entirely different from humans’ approach to language.
By making the algorithm less data-hungry, the researchers delivered a model that can translate texts from rare and long-lost languages. The approach is described in this paper.
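For intuition, here is a toy sketch of the embedding-matching idea behind ML-aided decipherment (not the MIT/Google model itself): symbols from an undeciphered script are matched to a known, related language by comparing vector representations built from the contexts in which they appear. The vectors and names below are made up for illustration.

```python
# Toy embedding matching: map undeciphered symbols to known words by cosine similarity.
import numpy as np

# Hypothetical context vectors for words in a known language...
known = {
    "king":  np.array([0.9, 0.1, 0.3]),
    "water": np.array([0.1, 0.8, 0.2]),
    "ship":  np.array([0.2, 0.7, 0.6]),
}
# ...and for undeciphered symbols with similar usage patterns
unknown = {
    "symbol_A": np.array([0.85, 0.15, 0.35]),
    "symbol_B": np.array([0.15, 0.75, 0.55]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Propose the closest known word for each undeciphered symbol
for sym, vec in unknown.items():
    best = max(known, key=lambda w: cosine(vec, known[w]))
    print(f"{sym} -> {best} (similarity {cosine(vec, known[best]):.2f})")
```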
Why does it matter
While there are countless examples of machine learning in business, there are also new horizons to discover in the humanities. Deciphering the secrets of the past is every bit as exciting as building defenses against the challenges of the future.
A more sophisticated approach to unknown languages, along with the possibility of brute-forcing them, provides a way to uncover more language-related secrets.
The Phaistos Disc? Or maybe the Voynich manuscript?