The full year of AI Monthly Digest
The AI Monthly Digest is deepsense.ai’s attempt to filter out the buzz and chaos from the news ecosystem and provide readers with the news that matters. The team also strives to make the information understandable for lay readers, to make knowledge on developments in Artificial Intelligence more accessible and the domain itself less esoteric.
While keeping an eye on global developments in AI, the deepsense.ai team behind the digest gained not only fresh, up-to-date information on state-of-the-art technologies, but also significant insight into the ongoing changes in the industry.
So what has changed? What has happened since the first issue of AI Monthly Digest came out? In this text we will cover:
- Natural Language Processing improvements
- Image processing breakthroughs
- Reinforcement Learning transfer from research to business applications
- Societal impact of Artificial Intelligence (AI)
Natural language processing
Two major breakthroughs have brought about rapid progress in Natural Language Processing (NLP).
BERT, XLnet and the speed of change
The first breakthrough was Google delivering BERT, a model that improves pre-trained word embeddings (vector representations of words that enable computers to process text) and enables data scientists to further fine-tune their networks to fulfill specific roles, such as automated chatbots or document processing support tools. BERT has been around since October 2018.
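To make the idea of word embeddings concrete, here is a minimal sketch. The four-dimensional vectors below are toy values invented for illustration; real models such as BERT produce vectors with hundreds of dimensions, learned from text rather than written by hand. The key property is the same: related words end up pointing in similar directions.

```python
import math

# Toy word embeddings (illustrative, hand-picked values, NOT from a real model).
embeddings = {
    "king":  [0.80, 0.65, 0.10, 0.05],
    "queen": [0.75, 0.70, 0.12, 0.04],
    "apple": [0.05, 0.10, 0.90, 0.70],
}

def cosine_similarity(a, b):
    """Directional similarity of two vectors: ~1.0 = very similar, ~0.0 = unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Related words sit close together in the vector space...
print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # close to 1.0
# ...while unrelated words sit far apart.
print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # much lower
```

Fine-tuning a model like BERT essentially starts from such pre-trained representations instead of learning the meaning of every word from scratch.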
Less than a year later, XLNet outperformed BERT and set a new state of the art in NLP. Indeed, the cutting edge can get rusty in no time flat.
GPT-2 – a long story
The next breakthrough witnessed by AI Monthly Digest readers was the GPT-2 model, which excels in natural text generation. The model, introduced in February, delivered texts that were nearly indistinguishable from ones written by a human. To see just how indistinguishable, have a look at the example below.
Example:
Human-written prompt:
In a shocking finding, scientists discovered a herd of unicorns living in a remote, previously unexplored valley, in the Andes Mountains. Even more surprising to the researchers was the fact that the unicorns spoke perfect English.
Model completion (machine-written, 10 tries):
The scientist named the population, after their distinctive horn, Ovid’s Unicorn. These four-horned, silver-white unicorns were previously unknown to science.
Now, after almost two centuries, the mystery of what sparked this odd phenomenon is finally solved.
Dr. Jorge Pérez, an evolutionary biologist from the University of La Paz, and several companions, were exploring the Andes Mountains when they found a small valley, with no other animals or humans. Pérez noticed that the valley had what appeared to be a natural fountain, surrounded by two peaks of rock and silver snow.
Pérez and the others then ventured further into the valley. “By the time we reached the top of one peak, the water looked blue, with some crystals on top,” said Pérez.
Pérez and his friends were astonished to see the unicorn herd. These creatures could be seen from the air without having to move too much to see them – they were so close they could touch their horns.
While examining these bizarre creatures the scientists discovered that the creatures also spoke some fairly regular English. Pérez stated, “We can see, for example, that they have a common ‘language,’ something like a dialect or dialectic.”
The results were so impressive that OpenAI, the lab behind the model, decided NOT to release it publicly, breaking with established industry and research norms of openness, while arguing that withholding it was the responsible choice.
Over the past year we have also reported on research into the business efficiency of automated translations. A key finding is a 10.9% increase in trade when automated translations are used in e-commerce. That's fairly convincing evidence for the business value of AI-powered translations.
Image Recognition
In NLP, state-of-the-art technology is struggling to match human accuracy, whereas machines have already achieved superhuman performance in image recognition. And to say that advancement has been rapid would be an understatement.
In the second edition of AI Monthly Digest, news about delivering a convincing image of a hamburger, a cat or a dog was a thing indeed. Mere months later, neural networks were delivering highly realistic faces, and not just of fake people: researchers built a network that generates a video of a person speaking from just a single frame. As part of that research, the team delivered speaking paintings, including the Mona Lisa and a Van Gogh. Admittedly, the hamburger paled in comparison.
Elsewhere, a neural network was developed that can spot early signs of depression in humans. According to WHO data, depression is a leading cause of disability worldwide and, when left untreated, can lead to suicide. The software accurately spotted signs of depression in more than 80% of cases.
Thus neural networks and the data scientists behind them can boost healthcare not only by employing machine learning for drug discovery, but also to make early diagnoses. And in the case of spotting depression, they managed to do this using just the information they could gather from the cameras on mobile devices.
Reinforcement learning
One of the most important areas of research, reinforcement learning was a frequent focus during the first 12 months of AI Monthly Digest.
An important development was DeepMind's AlphaStar beating human champions in StarCraft 2. The game poses multiple challenges for human players and artificial intelligence alike. It is played in real time, requiring excellent reflexes, and every unit type has strategies it supports and counter-strategies that render it useless.
Also, due to the fog of war (the darkness covering the map beyond the line of sight of the player's units), the player has incomplete information on their opponent's position and actions. Compare that with the complete openness of chess and Go.
The DeepMind-designed AlphaStar model vanquished top StarCraft pro players TLO and MaNa of Team Liquid, though not without a catch: the model had an unfair advantage over a human player in being able to "see" the entire visible battlefield at once. Humans are limited to the frame shown on their screen and must switch between spots on the map. This advantage proved so crucial that when the model was stripped of it, MaNa won the match.
While gaining superiority over human players is one thing, being a worthy and entertaining opponent is something else entirely. Game AI is currently based on endless if-then rules that are prone to failure, especially in the more complicated, open worlds seen in The Witcher 3 or Grand Theft Auto.
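The brittleness of that classic approach is easy to see in a sketch. The state keys and rules below are hypothetical, purely for illustration; real game scripts are far larger, but share the same weakness: any situation the designer did not anticipate falls through to a default.

```python
# A minimal sketch of classic rule-based ("if-then") game AI.
# The state fields and rules are hypothetical, invented for illustration.

def scripted_enemy_action(state: dict) -> str:
    """Hand-written rules: readable, but brittle -- unanticipated
    situations fall through to the default behavior."""
    if state.get("player_visible") and state.get("health", 100) > 30:
        return "attack"
    if state.get("health", 100) <= 30:
        return "flee"
    if state.get("heard_noise"):
        return "investigate"
    return "patrol"  # default when no rule matches

print(scripted_enemy_action({"player_visible": True, "health": 80}))  # attack
print(scripted_enemy_action({"health": 10}))                          # flee
print(scripted_enemy_action({}))                                      # patrol
```

A reinforcement learning agent, by contrast, learns its behavior from a reward signal through trial and error, rather than having every branch spelled out by hand.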
The first game to feature a reinforcement learning-trained agent as an opponent is MotoGP 19, a racing game.
So, during the first 12 months of AI Monthly Digest, reinforcement learning moved beyond the realm of pure research and into real-life business applications.
Society
Like the steam engines of the past, Artificial Intelligence is poised to transform society in unprecedented ways. The first signs of change are being seen now – sometimes for the better and sometimes for the worse.
Amazon's AI-based recruitment solution turned out to be biased against women in the company's recruitment process. The main reason was that male coders were overrepresented in the dataset the recruitment model was trained on. Amazon shut the model down, but not before giving us a glimpse of the possible effects of deploying insufficiently supervised models.
To avoid such situations and make AI technology more comprehensible to non-specialists, the Finnish government launched an AI-popularization program. The grassroots movement of providing the uninitiated, barbers, bakers and car mechanics, for example, with basic AI training gained the attention of the Finnish government, which sought to make machine learning a commodity. AI-powered demand forecasting for a small family business? Why not?
A humorous but thought-provoking example of the social impact AI research can have is the panic StarCraft players exhibited when DeepMind revealed that AlphaStar was lurking on Battle.net, primed to do battle with random players.
Given the superiority AlphaStar had shown over MaNa and TLO, it comes as no surprise that players were at once thrown back on their heels and put on their toes: a killer AI had come to humiliate them and turn the Battle.net rankings on their head. Users duly exchanged tips on spotting the non-human opponent. So, while AlphaStar no longer wields an unfair advantage, the players remain on their toes.
Summary
In 12 short months, we've gone from being amazed by a convincing image of a hamburger to delivering images of entirely fake people. The world has come through major breakthroughs in natural language processing, and players have witnessed turmoil around reinforcement learning-powered agents.
The world of AI is changing fast. AI Monthly Digest will be keeping its A-eye trained on the ball.