AI Monthly Digest #15 – the end of two sagas
November brought the release of the latest and largest version of the GPT-2 Natural Language Processing (NLP) model, along with news that AlphaStar has reached the top level in StarCraft II on Battle.net. There is also Lee Sedol's bitterness to contemplate: the Go master recently retired.
This issue also chronicles a scam pulled off using a deep neural network. Researchers had warned us it would happen, and it finally has.
OpenAI releases its most recent iteration of GPT-2
The GPT-2 model was arguably the most significant Natural Language Processing news of 2019. The model's performance made headlines, of course, but so too did the decision by OpenAI, the organization behind the model, not to make it public, at least not initially. And that sparked debate.
Those who opposed making the model public pointed out that a technology that produces such high-quality texts could be dangerous for society, and all the more so given the explosion of fake news in recent years. A model that can create texts that are nearly indistinguishable from human-written ones could support the creation of fake news in unprecedented ways.
On the other hand, this model represents strong progress for both NLP and text generation, and its ability to serve the public is precisely why, according to the second group, it should be made public.
The researchers malice-proofed their model, if you will, by releasing subsequent iterations step by step. A note on the matter can be read on OpenAI's website.
Why it matters
GPT-2 has been one of the dominant AI-related topics in 2019. Making it public finally enabled researchers to further expand its capabilities and companies to use it for their own benefit. How? According to The Verge, GPT-2 can write all sorts of texts – poems, stories, news, fake news and even code. It is undoubtedly a powerful tool that can support various goals, be they benevolent or malevolent.
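With the full model now public, experimenting with it takes only a few lines of code. Here is a minimal sketch of generating text from the released weights, assuming the Hugging Face transformers library and its "gpt2" checkpoint (an illustrative choice on our part, not something prescribed by OpenAI's announcement):

```python
# A minimal sketch: generating text with the publicly released GPT-2
# weights via the Hugging Face transformers library (an assumption of
# this example, not part of OpenAI's release itself).
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "In November, researchers announced"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Top-k sampling keeps the continuation coherent while still varying
# between runs.
output = model.generate(
    input_ids,
    max_length=60,
    do_sample=True,
    top_k=50,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Swapping in a larger checkpoint (the full release goes up to 1.5 billion parameters) changes nothing in the code, only the download size and the quality of the output.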
Finally, the release brought the additional benefit of sparking an open, public debate on the ethics of publishing AI-related research.
Google applies BERT to its search engine… and then Microsoft too!
When thinking about NLP applications, the top-of-mind usage is in improving search engines, be it an e-commerce one installed within an online shop or a general-purpose one like Google or Bing.
According to a recent Google blog post, up to 15% of everyday Google queries have never been made before. What's more, search engine users have developed their own sub-language for building queries, usually based on keywords. Although effective to a degree, keyword-speak is far from a natural form of communication.
Search engines have struggled to process more complex sentence-length queries. So “good chocolate cake” is a better query than “could you please deliver me a recipe for an awesome chocolate cake, thank you in advance”. The latter could be completely misunderstood by the search engine.
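BERT narrows this gap by representing whole sentences in context rather than matching keywords. The toy sketch below, assuming the Hugging Face transformers library and its "bert-base-uncased" checkpoint, embeds that long natural-language query and ranks candidate texts by cosine similarity; production search pipelines are, of course, far more involved than this:

```python
# A toy illustration of why BERT helps with sentence-length queries:
# embed texts with BERT and rank candidates by cosine similarity.
# This sketches the underlying idea only, not Google's or Bing's
# actual ranking pipeline.
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

def embed(text: str) -> torch.Tensor:
    # Mean-pool the last hidden states into a single vector per text.
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state
    return hidden.mean(dim=1).squeeze(0)

query = "could you please deliver me a recipe for an awesome chocolate cake"
candidates = ["good chocolate cake recipe", "car repair manual"]

query_vec = embed(query)
for text in candidates:
    score = torch.cosine_similarity(query_vec, embed(text), dim=0).item()
    print(f"{score:.3f}  {text}")
```

Even this crude setup scores the cake recipe well above the repair manual, because the embedding captures what the rambling query is about rather than which exact words it contains.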
BERT was added to Google Search in October. And November saw BERT installed in Bing, Microsoft’s search engine. According to the latest data from Statista, Bing commands 5.26% market share, second only to Google. Microsoft itself claims that Bing’s share is widely underestimated and reaches up to 33% on desktop.
Whichever figure we take as true, both search engines now benefit from Google's model.
Why it matters
Applying BERT to search engines delivers yet more proof that AI and machine learning improve our daily lives in unprecedented ways, even if it is often entirely “behind the scenes”. From now on, millions of users will use artificial intelligence solutions every hour of every day without even realizing it.
Last but not least, BERT will improve the way search engines work, making them that much more useful in our daily lives.
The $243,000 deepfake scam
Experts have been warning us about deepfakes for a long time now. Indeed, such fakes have allowed us to hear Barack Obama saying things he never actually said. Elsewhere, a renowned professor wrote of hearing his own voice uttering things he had never said.
But this time things got real. Using deep neural networks, cybercriminals managed to fake the voice of a company's CEO and used it to order an urgent transfer of $243,000 to a Hungarian supplier.
Why it matters
This is the first big scam pulled off using neural networks, and proves all too prescient the warnings that neural networks may be employed for less than noble ends. Here AI hasn’t been used to make the world safer, but to attack.
AlphaStar reaches grandmaster level
The AlphaStar saga is arguably the second most impressive story in the ML world of 2019. Using reinforcement learning-trained agents, DeepMind achieved the level of master in the popular e-sport game StarCraft II. DeepMind's first set of agents beat the top players TLO and MaNa. But controversy ensued when it was discovered that the DeepMind agent had access to the entire StarCraft map at once, while the human players were limited to the frames seen on screen.
The new set of agents managed to reach the top league during duels with users from all around the world on the online player-matching platform Battle.net. The agents could play as any of the three races – Terran, Protoss or Zerg – against any other race. More impressive still, a single neural network handled all the matches, employing and executing different tactics suited to different races and opponents. While this may pale next to what a human player can do, the fact that neural networks traditionally handle single tasks makes the accomplishment stand out.
Why it matters
Reinforcement learning is a hot trend when it comes to delivering new, impressive results in machine learning. Training an agent that plays a popular computer game may sound like child’s play rather than serious scientific work. Yet the implications are serious – the agent acts in real time, with limited information, to solve complicated problems.
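Stripped of AlphaStar's scale, that observe-act-reward cycle is the standard reinforcement learning loop. A minimal sketch, assuming the OpenAI Gym library (classic pre-0.26 API) and a random policy standing in for a trained one:

```python
# The basic reinforcement learning loop underlying agents like
# AlphaStar, shown on a toy Gym environment. AlphaStar's actual
# training adds imitation learning, self-play and massive scale.
import gym

env = gym.make("CartPole-v1")
observation = env.reset()

total_reward = 0.0
done = False
while not done:
    # A trained agent would choose actions from a learned policy;
    # sampling randomly is enough to show the observe-act-reward cycle.
    action = env.action_space.sample()
    observation, reward, done, info = env.step(action)
    total_reward += reward

env.close()
print(f"Episode finished with total reward {total_reward}")
```

The agent never sees the environment's internals, only its own observations and rewards, which is exactly the limited-information setting AlphaStar operates in, scaled up enormously.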
In the larger scheme of things, this is the next step toward creating agents that can solve more complex (and more important) problems in real time. Building the next generation of autonomous cars is just one top-of-mind example.
The AlphaStar Saga also prompts us to contemplate both the future of AI and its present-day flaws.
After being defeated by AI in 2016, global Go champion Lee Sedol retired last month, claiming that nothing more can be accomplished in Go. “There is an entity that cannot be defeated,” he remarked in an interview with the Yonhap News Agency in Seoul.
"Entity" is an interesting word here. Human champions die or retire. AlphaGo will be there forever, an eternal, unbeaten champion.
On the other hand, Sedol pointed out that he had taken a game from AlphaGo by exploiting a bug: when he made a totally unexpected move, the neural network got confused. Players who have done battle with AlphaStar frequently point out the model's inflexibility: AlphaStar appears unable to change its strategy once a match has begun, allowing human players to plot strategies that exploit this shortcoming.
“The whole secret lies in confusing the enemy, so that he cannot fathom our real intent.”
– Sun Tzu