AI Monthly Digest #14 – quantum supremacy
October brought big news from Google: the company announced that it had achieved quantum supremacy. Today we’re going to look at just what that means.
November was no slouch, either, bringing us as it did greater clarity on the development of the most prominent frameworks as well as an interesting initiative to impose good practices in machine learning research.
To top off all the news, this issue of AI Monthly Digest reports on a funny incident related to robots. Have fun reading!
Google gains quantum supremacy
Quantum computing is not a new concept. It has been around for some 30 years, even if it has spent most of that time as a theory with no practical applications, a situation comparable to that of neural networks.
At some point, however, big data and cloud-based computing power changed the game for neural networks and artificial intelligence. Today neural networks, the theoretical plaything of yesterday, stand behind demand forecasting systems, power natural language processing tools and support the daily operations of countless companies in multiple other ways, with reinforcement learning at the cutting edge.
Although a practical application of quantum machines has yet to come, Google’s recent experiment shows clear progress. Their quantum computer, built around a 54-qubit processor called “Sycamore”, performed in 200 seconds computations that would take 10,000 years on the fastest existing supercomputer.
While that 10,000-year estimate was later revised down to two days, the efficiency Google has managed to muscle into their system still boggles the mind.
Why does it matter?
Or rather, why does it matter for machine learning? The answer is a bit tricky, but the question was bandied about at the most recent AI World Conference.
Quantum computing will enable a machine learning model to reflect and process more complex conditions and scenarios. According to the Boston Consulting Group, the optimal model for the near future is a hybrid of traditional and quantum computing.
Keras or PyTorch? Or maybe TensorFlow?
One of the most popular deepsense.ai blog posts examined the differences between Keras, the most popular high-level API for TensorFlow, and PyTorch, one of the most popular deep learning frameworks.
According to the latest analysis done by The Gradient, the question of which is better is now irrelevant. Keras has recently been incorporated into TensorFlow as an official API to make the framework more convenient and easier to use. For those who love head-to-head battles for supremacy, the clash no longer pits Keras against PyTorch, but rather TensorFlow against PyTorch.
Even more interesting, each framework seems to push the other’s development: TensorFlow remains popular in the business community, while scientists favor PyTorch.
Why does it matter?
Knowledge of popular frameworks is crucial for data scientists as well as for business efficiency – choosing the right one can be essential to a project’s success.
In the end, the choice may end up being irrelevant as both deliver a comprehensive platform to deal with machine learning and facilitate the whole process. The frameworks are tools, after all, and should play a supporting role rather than a limiting one.
Robots meet reality
The film RoboCop offered an iconic portrayal of machines being used in law enforcement. The dystopian future of grim Detroit is nothing like the streets of today’s Los Angeles, where the first police patrol robots are being tested.
As CNBC reports, their performance is far from perfect. For example, one robot ignored a woman who was trying to inform it of a nearby fight, asking her only if… she would move.
Why does it matter?
In the larger scheme of things, it doesn’t. But it is a funny yet welcome reminder that the road toward fully autonomous machines is more tortuous than we might expect.
Sotabench – benchmarking models for everyone
The progress being made in machine learning is mind-boggling. AI Monthly Digest was designed to cut through the buzz and deliver reliable, trustworthy news, yet it is easy to fall into the trap of considering every new model “state-of-the-art” or “best-performing”.
There are multiple benchmarks for testing models and determining which is best suited to a given purpose. The obstacle is that the developer community rarely has the time to run all of them against every model.
To address this problem, the people behind Papers With Code launched the site sotabench.com, which aims to benchmark every open-source model against standard tests.
Why does it matter?
The initiative is the next step toward establishing a set of good practices among machine learning developers and researchers. With Papers With Code, the group promoted the idea of publishing not only research papers but also the code behind them, so that anyone who chose to could check and reproduce the results themselves. This step makes claims of delivering the next breakthrough more credible.
In fact, any effort to make the development of AI more transparent and easier to reproduce is more than welcome.