AI Monthly Digest #17 – a lovely chatbot
January jump-started the year with news of the latest achievements in NLP, a breakthrough in neural networks for solving math problems, and AI creating artificial life.
Assuming that the world will continue changing as quickly as it is now, 2020 looks likely to bring unprecedented development to AI. Let’s take a closer look at all this exciting news.
Meena – a chatbot you would like to talk to
Chatbots are undoubtedly one of the hottest AI trends of the last few years (apart from reinforcement learning, of course), having been widely embraced by businesses seeking to improve efficiency. Most applications are based on a trigger-and-response paradigm with little to no use of real natural language processing techniques. However, a real breakthrough may now have been made.
The GPT-2 saga was one of the most important and influential AI-related stories of 2019. That a model like GPT-2 produces convincing text, however, doesn’t necessarily spell the end of development in this area. In fact, Google joining the race in January proves the contrary.
Google Brain has created Meena, a 2.6-billion-parameter end-to-end neural conversational model trained on 341 GB of text from the Internet. Meena is 1.7x larger than OpenAI’s GPT-2 and was trained on 8.5x more data for 30 days on a TPUv3 Pod (2048 TPU cores). In a nutshell, it is larger and more efficient than any such model created to date, though for now, due to the potential risks that accompany releasing a demo, it remains unavailable to the public.
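Those ratios are easy to sanity-check. A minimal back-of-the-envelope sketch, assuming the commonly cited GPT-2 figures (roughly 1.5 billion parameters and about 40 GB of WebText, which come from OpenAI’s announcement, not this article):

```python
# Back-of-the-envelope check of the Meena vs. GPT-2 scale claims.
# The GPT-2 numbers (1.5B parameters, ~40 GB of WebText) are an
# assumption taken from OpenAI's announcement.

meena_params = 2.6e9   # Meena: 2.6 billion parameters
gpt2_params = 1.5e9    # GPT-2: 1.5 billion parameters

meena_data_gb = 341    # Meena: 341 GB of training text
gpt2_data_gb = 40      # GPT-2: ~40 GB of WebText

print(f"Parameter ratio: {meena_params / gpt2_params:.1f}x")  # 1.7x
print(f"Data ratio:      {meena_data_gb / gpt2_data_gb:.1f}x")  # 8.5x
```

Both ratios line up with the figures Google quotes.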
Why it matters
Meena is further proof that larger models, bigger datasets and superior computing power are driving progress in today’s mainstream AI research. This poses a new kind of challenge for developers: progress is no longer about devising new concepts and testing them, but about producing bigger datasets and training larger neural networks.
A scalable pipeline for designing reconfigurable organisms
The dichotomy between machines and living organisms has long been absolute. The key difference was in the material: machines are (were?) made of steel or plastic, with no living tissue, at least not on purpose.
Research presented in January, however, introduces new technology to the world: biological machines. They can be automatically designed and optimized in simulation for a desired locomotion behaviour and then manufactured and deployed in physical environments.
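The design step described above is an evolutionary search: candidate body plans are scored in simulation, and the best performers are kept and varied. A minimal sketch of that loop, with a bit-string standing in for a body plan and a stand-in fitness function replacing the physics simulator (both hypothetical; the real work evolves 3D voxel layouts scored for locomotion):

```python
import random

random.seed(0)

def simulate(plan):
    # Stand-in "simulator": rewards active (1) voxels in the front half
    # of the plan and lightly penalizes them in the back half. The real
    # pipeline scores locomotion distance in a physics engine.
    half = len(plan) // 2
    return sum(plan[:half]) - 0.25 * sum(plan[half:])

def mutate(plan, rate=0.1):
    # Flip each gene with a small probability.
    return [1 - g if random.random() < rate else g for g in plan]

def evolve(length=16, population=20, generations=50):
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(population)]
    for _ in range(generations):
        pop.sort(key=simulate, reverse=True)   # rank by simulated fitness
        survivors = pop[: population // 2]     # keep the better half
        pop = survivors + [mutate(p) for p in survivors]
    return max(pop, key=simulate)

best = evolve()
print(simulate(best))
```

Only the designs that score well in simulation would move on to manufacturing, which is what makes the pipeline scalable.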
Josh Bongard, the lead researcher, said: “These are novel living machines. They’re neither a traditional robot nor a known species of animal. It’s a new class of artifact: a living, programmable organism.”
Watch the 11-min video covering the topic:
Why it matters
First and foremost, it is fascinating to watch the fantasy of science-fiction writers become reality. The blending of organic matter and machines has long figured in fiction, both as conscious robots and living organisms made of metal (the Necrons from Warhammer 40k are a good example) and as living organisms designed to fulfill the role of a machine (the living spaceships of the Companions from Gene Roddenberry’s Earth: Final Conflict series).
Although this research shows only humble beginnings, it is fascinating indeed.
Solving math with neural networks
Math poses problems that are increasingly hard to tackle, and solving them isn’t always about computation. Math requires thought, and that’s what machines are not good at, so the skyrocketing amount of available computing power has made little difference.
Artificial intelligence, of course, aims to change this.
Scientists from Facebook AI Research took a neural network originally designed for language modeling and machine translation and trained it to solve advanced mathematics problems, where the task is to predict the symbolic integral of the input expression. It turned out that such neural translation was able to find correct answers more often than traditional software, including Maple, Mathematica, and Matlab.
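The trick that makes translation machinery applicable is treating an equation as a sequence of tokens: the expression tree is serialized, commonly in prefix (Polish) notation, so the input expression becomes the "source sentence" and its integral the "target sentence". A minimal sketch of that serialization step, using nested tuples as a hypothetical tree format:

```python
def to_prefix(tree):
    """Serialize an expression tree into a list of prefix-notation tokens."""
    if isinstance(tree, tuple):       # (operator, operand, ...)
        op, *args = tree
        tokens = [op]
        for a in args:
            tokens += to_prefix(a)    # recurse into each operand
        return tokens
    return [str(tree)]                # leaf: a variable or constant

# x * cos(x) becomes the token sequence fed to the model; its integral,
# x*sin(x) + cos(x), would be serialized the same way as the target.
expr = ("mul", "x", ("cos", "x"))
print(to_prefix(expr))   # ['mul', 'x', 'cos', 'x']
```

Prefix notation needs no parentheses, so a flat token list reconstructs the tree unambiguously, which is exactly what a sequence-to-sequence model needs.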
Solving equations requires symbolic reasoning, which is one of the hardest challenges for neural network-based systems. It has recently become an active area of research, and we can expect more in the upcoming months. For example, a new DeepMind paper, “MEMO: A Deep Network for Flexible Combination of Episodic Memories”, tackles the problem of reasoning over long distances.
Why it matters
Math problems are inherently abstract. But solving them can fuel progress, even if it isn’t the progress we anticipate. Solving the Seven Bridges of Königsberg riddle was the first step toward graph theory and modern topology.
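The Königsberg result itself is small enough to verify in a few lines: Euler showed that a walk crossing every bridge exactly once exists only if zero or two land masses touch an odd number of bridges. A quick check of the historical layout (four land masses, seven bridges):

```python
from collections import Counter

# Each pair is one bridge between two land masses: A is the central
# island, B and C the river banks, D the eastern land mass.
bridges = [("A", "B"), ("A", "B"), ("A", "C"), ("A", "C"),
           ("A", "D"), ("B", "D"), ("C", "D")]

degree = Counter()
for u, v in bridges:
    degree[u] += 1   # each bridge adds one to the degree
    degree[v] += 1   # of both land masses it connects

odd = [node for node, d in degree.items() if d % 2 == 1]
print(sorted(odd))   # ['A', 'B', 'C', 'D'] — all four are odd
```

All four land masses have odd degree, so no such walk exists, which is exactly what Euler proved in 1736.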
So automating the process is a first step toward speeding up the progress overall. And it is good news for us all.