AI Monthly Digest #7 – machine mistakes, and the hard way to profit from non-profit

March saw some major events concerning top figures of the ML world, including OpenAI, Yann LeCun, Geoffrey Hinton and Yoshua Bengio.

The past month was also the backdrop for inspiring research on how machines think and how different they can be from humans when provided with the same conditions and problem to solve.

OpenAI goes from non-profit to for-profit

OpenAI was founded as a non-profit organization focused on pushing the boundaries of artificial intelligence in the same manner that open-source organizations deliver world-class software. The Mozilla Foundation and the Linux Foundation, the non-profit powerhouses behind popular products, are the best examples.

Yet unlike popular software, whose development is powered by human talent, AI requires not only brilliant minds but also a gargantuan amount of computing power. The cost of reproducing the GPT-2 model is estimated at around $50,000 – and that’s only one experiment. Getting the job done requires a small battalion of research scientists tuning hyperparameters, debugging, and testing approaches and ideas.


Staying on technology’s cutting edge pushed the organization toward the for-profit model to fund computing power and attract top talent, as the company notes on its website.

Why does it matter?

First of all, OpenAI was a significant player on the global AI map despite being a non-profit organization. Establishing a for-profit arm creates a new strong player that can develop commercial AI projects.

Moreover, the need for computing power marks a new age of development challenges. In the traditional software development world, a team of talented coders is everything one needs. When it comes to delivering AI models, that is apparently not enough.

The bitter lesson of ML’s development

OpenAI’s transition could be seen as a single event concerning only one organization. It could be, that is, if it weren’t being discussed by the godfathers of modern machine learning.


Richard Sutton is one of the most renowned and influential researchers of reinforcement learning. In a recent essay, “The Bitter Lesson”, he remarked that most advances in AI development are powered by access to computing power, while the importance of expert knowledge and creative input from human researchers is losing significance.

Moreover, numerous attempts have been made to enrich machine learning with expert knowledge. Usually, these efforts yielded short-term gains with no larger significance when seen in the broader context of AI’s development.

Why does it matter?

The opinion would seem to support the general observation that computing power is the gateway to pushing the boundaries of machine learning and artificial intelligence. That power combined with relatively simple machine learning techniques frequently challenges the established ways of solving problems. The RL-based agents playing Go, chess or StarCraft are only the most top-of-mind examples.

Yann LeCun, Geoffrey Hinton and Yoshua Bengio awarded the Turing Award

The Association for Computing Machinery, the world’s largest organization of computing professionals, announced that this year’s Turing Award went to three researchers for their work on advancing and popularizing neural networks. Currently, the researchers split their time between academia and the private sector, with Yann LeCun being employed by Facebook and New York University, Geoffrey Hinton working for Google and the University of Toronto, and Yoshua Bengio splitting his time between the University of Montreal and his company Element AI.

Why does it matter?

Named after Alan Turing, a giant of mathematics and the godfather of modern computer science, the Turing Award has been called IT’s Nobel Prize. There is no actual Nobel Prize for computing, so IT specialists get the Turing Award instead.

Nvidia creates a wonder brush – AI that turns a doodle into a landscape

Nvidia has shown how an AI-powered editor swiftly transforms simple, childlike drawings into near-photorealistic landscapes. While the technology isn’t exactly new, this time the form is interesting. It uses Generative Adversarial Networks and amazes with the detail it can muster – if the person drawing adds a lake near a tree, the water will reflect the tree.

Why does it matter?

Nvidia does a great job of spreading knowledge about machine learning. Further applications in image editing will no doubt be forthcoming, automating parts of the work of illustrators and graphic designers. But for now, it is amazing to behold.

So do you think like a computer?

While machine learning models are superhumanly effective at image recognition, when they do fail, their predictions are usually surprising. Until recently, it was believed that people are unable to predict how a computer will misinterpret an image. Moreover, the machine’s thoroughly inhuman way of recognizing images is prone to mistakes – it is possible to prepare an artificial image that effectively fools the AI behind the image recognition and, for example, convinces the model that a car is in fact a bush.


The confusion machines display when identifying objects usually comes from the fact that most AI models are narrow AI. These systems are designed to work in a closed environment and solve a narrow problem, such as identifying cars or animals. Consequently, the machine has only a narrow catalog of entities to choose from.
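This forced-choice setup can be sketched in a few lines of Python. The catalog, labels and scores below are hypothetical, a minimal illustration of how a narrow classifier must pick the best-scoring label from a fixed catalog even when no label truly fits the image:

```python
# Hypothetical sketch: a narrow classifier can only answer from a fixed
# catalog of labels, so even a nonsense or adversarial image receives
# the "least bad" label rather than "none of the above".
CATALOG = ["car", "bush", "dog", "traffic light"]

def classify(scores):
    """Return the catalog label with the highest model score.

    `scores` maps each known label to a confidence value; there is
    no way for the classifier to decline to answer.
    """
    return max(CATALOG, key=lambda label: scores[label])

# Made-up scores for an adversarial image that is actually a car:
adversarial_scores = {"car": 0.10, "bush": 0.62, "dog": 0.08, "traffic light": 0.20}
print(classify(adversarial_scores))  # -> bush
```

Given the same restricted catalog, a human asked to pick the best match faces the same constraint – which is exactly the condition the experiment below recreates.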

To check whether humans are able to understand how a machine makes its mistakes, researchers provided volunteers with images that had already fooled AI models, together with the names the machines were able to choose from for those images. Under those conditions, people gave the same answers as the machines 75% of the time.

Why does it matter?

A recent study from Johns Hopkins University shows that computers are becoming increasingly human even in their mistakes, and that surprising outcomes are a consequence of the extreme narrowness of the artificial mind. A typical preschooler has an incomparably larger vocabulary and store of collected experience than even the most powerful neural network, so the likelihood of a human finding a more accurate association for an image is many times greater.

Again, the versatility and flexibility of the human mind is the key to its superiority.
