AI Monthly digest #4 – artificial intelligence and music, a new GAN standard and fighting depression
December brought both year-end summaries and impressive research work pushing the boundaries of AI.
Contrary to many hi-tech trends, artificial intelligence is a popular term in society and there are many (not always accurate) views on the topic. The December edition of AI Monthly digest both shares the news about the latest developments and addresses doubts about this area of technology.
Previous editions of AI Monthly digest:
- AI Monthly digest #3: Artificial Intelligence in science getting big
- AI Monthly digest #2 – the fakeburger, BERT for NLP and machine morality
- AI Monthly digest #1 – AI stock trading & Kaggle record
1. Tuning up artificial intelligence and music
Machines are getting better at image recognition and natural language processing. But several fields remain unpolished, leaving considerable room for improvement. One of these fields is music theory and analytics.
Music has different timescales: some patterns repeat at the scale of seconds, while others extend throughout an entire composition and sometimes beyond. Moreover, music composition employs a great deal of repetition.
Google’s Magenta-designed model leverages relative attention to track how far apart two tokens (motifs) are, and produces convincing, quite relaxing pieces of piano music. The music it generates generally evokes Bach more than Bowie, though.
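For readers wondering what “relative attention” means in practice, here is a rough, self-contained Python/NumPy sketch of the idea: a single attention head whose scores depend both on content and on how far apart two tokens are. This is only an illustration of the concept, not Magenta’s actual implementation (the Music Transformer uses a much more memory-efficient formulation).

```python
import numpy as np

def relative_attention(q, k, v, rel_emb):
    """Single-head self-attention with a relative-position term.

    q, k, v : (seq_len, d) query, key and value matrices
    rel_emb : (2 * seq_len - 1, d) embeddings for relative distances
              ranging from -(seq_len - 1) to +(seq_len - 1)
    """
    seq_len, d = q.shape
    content_scores = q @ k.T                                 # how similar query i is to key j
    idx = np.arange(seq_len)
    rel_index = idx[None, :] - idx[:, None] + seq_len - 1    # maps distance (j - i) to a row of rel_emb
    position_scores = np.einsum("id,ijd->ij", q, rel_emb[rel_index])
    scores = (content_scores + position_scores) / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)           # softmax over keys
    return weights @ v

# Toy usage with random matrices, just to show the expected shapes.
rng = np.random.default_rng(0)
seq_len, d = 8, 16
out = relative_attention(rng.normal(size=(seq_len, d)),
                         rng.normal(size=(seq_len, d)),
                         rng.normal(size=(seq_len, d)),
                         rng.normal(size=(2 * seq_len - 1, d)))
```

The point of the position term is that a motif repeated four bars later produces the same relative-distance signal wherever it occurs, which is exactly the kind of structure music is full of.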
Researchers provided samples of both great and flawed performances. Although the AI-composed samples mostly stick to classical music, there are also some jazz-styled improvisations among them.
Further information about studies on artificial music can be found on Magenta’s Music Transformer website.
Why does it matter?
Music and sound are another type of data machine learning can analyze. Pushing research on music further will deepen our knowledge of music and styles, much as AI made Kurt Vonnegut’s dream of analyzing literature a reality.
Furthermore, the music industry may be the next to leverage data and the knowledge and talents of computer scientists. Apart from tuning the recommendation engines for streaming services, they may contribute more to the creation of music. A keyboard is after all a musical instrument.
2. The machine style manipulator
Generating fake images from real ones is nothing new. Generative Adversarial Networks pit two models against each other: a generator produces fake images, while a discriminator trained on real ones learns to tell which images are genuine – and both get better as the contest goes on.
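To make the adversarial setup concrete, here is a minimal training step in PyTorch. The tiny fully connected generator and discriminator are placeholders invented for this sketch, not any published architecture:

```python
import torch
import torch.nn as nn

latent_dim = 64
# Placeholder networks: G maps noise to a flattened 28x28 image, D scores an image.
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_images):
    batch = real_images.size(0)

    # 1) Discriminator: score real images as real (1) and generated ones as fake (0).
    z = torch.randn(batch, latent_dim)
    fake_images = G(z).detach()
    d_loss = (bce(D(real_images), torch.ones(batch, 1)) +
              bce(D(fake_images), torch.zeros(batch, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Generator: produce images the discriminator mistakes for real ones.
    z = torch.randn(batch, latent_dim)
    g_loss = bce(D(G(z)), torch.ones(batch, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```

Repeating this step over a large image dataset is, in essence, all the adversarial training there is; the research progress lies in what the generator and discriminator look like inside.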
The challenge neural networks and those who design them face is producing convincing images of people, cars, or anything else the networks are meant to recognize. In a recent research paper, the group behind the “one hour of fake celebrity faces” project introduced a new neural network architecture that separates high-level attributes from stochastic variations. In the case of human images, a high-level attribute may be the pose, while freckles or the hairdo are stochastic variations.
In a recent video, researchers show the results of applying a style-based generator and manipulating styles later to produce different types of images.
The results are impressive – researchers were able to produce convincing images of people from various ethnic backgrounds. By controlling different levels of styles, they were able to tune everything in the image – from gender and ethnicity to the shape of the glasses worn.
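The “separate the styles, then recombine them” idea can be sketched in a few lines. The snippet below is a toy stand-in for the paper’s mapping and synthesis networks, not the real architecture: a latent code is first mapped to a style vector, each layer of the synthesis pass is modulated by a style, and per-layer noise stands in for stochastic detail such as freckles or hair placement. Mixing coarse-layer styles from one code with fine-layer styles from another is what lets researchers control pose and ethnicity separately from texture-level details.

```python
import numpy as np

rng = np.random.default_rng(0)

def mapping_network(z, W):
    """Map a latent code z to an intermediate style vector w (a single matrix here)."""
    return np.tanh(W @ z)

def synthesize(styles, noise_scale=0.1):
    """Toy synthesis pass: each layer is modulated by its own style vector,
    with per-layer noise standing in for stochastic variation (freckles, hair)."""
    x = np.ones(16)                                   # constant learned input in the real model
    for w in styles:
        x = np.tanh(x * w)                            # crude stand-in for style modulation
        x = x + noise_scale * rng.normal(size=x.size) # stochastic, per-layer detail
    return x

W = rng.normal(size=(16, 8))
w_a = mapping_network(rng.normal(size=8), W)          # "person A" style
w_b = mapping_network(rng.normal(size=8), W)          # "person B" style

# Style mixing: coarse layers (pose, face shape) from A, fine layers (texture) from B.
mixed_styles = [w_a] * 2 + [w_b] * 4
output = synthesize(mixed_styles)
```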
Why does it matter?
This is essentially the new state of the art for GANs, the best-performing image generation technology in use. Producing convincing fake images of faces and houses can significantly improve the performance of image recognition models by enriching their training data. Ultimately, however, this technology may be a life saver, especially when applied to medical diagnosis, for example in diabetic retinopathy.
3. AI failures – errare (not only) humanum est
2018 saw significant improvements in machine learning techniques and artificial intelligence proved even further how useful it is. However, using it in day-to-day human life will not be without its challenges.
In his 1896 novel “An Outcast of the Islands”, Joseph Conrad wrote “It’s only those who do nothing that make no mistakes”. This idea can also be applied to theoretically mistake-proof machines. Apart from inspiring successes, 2018 also witnessed some significant machine learning failures:
- Amazon’s gender-biased AI recruiter – the machine learning model designed to pre-process the resumes sent to the tech giant overlooked female engineers due to bias in the dataset. The reason was obvious – the tech industry is male-dominated, and as algorithms have neither common sense nor social skills, the model concluded that women were simply not a good match for the tech positions the company was trying to fill (a toy illustration of how such bias creeps into a model follows this list). Amazon ditched the flawed recruiting tool, yet the questions about hidden bias in datasets remain.
- Uber’s fatal autonomous car crash – the story of the fatal crash is a bitter lesson for all autonomous car manufacturers. Uber’s system not only detected the pedestrian it hit while driving, but also autonomously decided to proceed and ignore warnings, killing 49-year-old Elaine Herzberg.
- World Cup predictions gone wrong – The World Cup gave us another bitter lesson, this time for predictive analytics. While the model built to predict brackets may have been sophisticated, it failed entirely. According to its predictions, Germany should have met Brazil in the final. Instead, the German team didn’t manage to get out of its group, while Brazil bent the knee before Belgium in the quarter-finals. The final came down to France versus Croatia, a combination unthinkable both for machine learning models and for football enthusiasts around the world. The case was further described in our blogpost about failure in predictive analytics.
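To see how a dataset can quietly encode the kind of bias mentioned in the Amazon item above, consider a deliberately synthetic sketch: when historical hiring decisions favoured one group, even a simple model fitted to those decisions learns to weight group membership rather than skill alone.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Entirely synthetic "historical hiring" data: skill is what should matter,
# but past decisions also favoured one group, so the label is correlated with it.
rng = np.random.default_rng(0)
n = 5000
skill = rng.normal(size=n)
group = rng.integers(0, 2, size=n)       # e.g. a gender proxy hidden in the resume text
hired = (skill + 1.5 * group + rng.normal(scale=0.5, size=n)) > 1.0

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)
print(model.coef_)   # the model assigns real weight to the group feature,
                     # faithfully reproducing the historical bias instead of pure skill
```

Nothing in the fitting procedure is “broken” – the model simply learned what the data taught it, which is exactly why validating datasets is as important as validating models.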
More examples of AI failures can be found in the Synced Review Medium blogpost.
Why does it matter?
Nobody’s perfect – machines included. That’s why users and designers need to be conscious of the need to make machine learning models transparent. It is also another reminder to validate machine learning model results – a step early adopters are tempted to overlook.
4. Smartphone-embedded AI may detect the first signs of depression
A group of researchers from Stanford University has trained a model with pictures and videos of people who are depressed and people who are not. The model analyzed all the signals the subjects sent, including tone of voice, facial expressions and general behaviour. These were observed during interviews conducted by an avatar controlled by a real physician. The model proved effective in detecting depression more than 80% of the time. The machine was able to recognize slight differences between people suffering from depression and people who were not.
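To give a rough sense of how such a screening model might be put together – this is not the Stanford team’s code, and the feature names, sizes and random placeholder data below are made up for illustration – one could extract voice, facial-expression and behavioural features per interview and feed them to an ordinary classifier:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Placeholder features per interview: prosody (tone of voice), facial-expression
# statistics and simple behavioural measures. Real systems would extract these
# with dedicated audio and video models.
rng = np.random.default_rng(42)
n_subjects = 200
voice_features = rng.normal(size=(n_subjects, 20))
face_features = rng.normal(size=(n_subjects, 30))
behaviour_features = rng.normal(size=(n_subjects, 5))

X = np.hstack([voice_features, face_features, behaviour_features])
y = rng.integers(0, 2, size=n_subjects)   # 1 = screened positive (placeholder labels)

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)
# With random placeholder data this hovers around 0.5; the published study reports
# detection rates above 80% on real interview recordings.
print(f"cross-validated accuracy: {scores.mean():.2f}")
```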
Why does it matter?
According to the WHO, depression is the leading cause of disability worldwide. Left untreated, it can lead to suicide, the second most common cause of death among 15-29-year-olds.
One barrier to helping people suffering from depression is inaccurate assessment. There are regions in the world where fewer than 10% of people have access to proper treatment. What’s more, mental illness is often stigmatized, and treatment is both costly and hard to access. These factors, together with the fact that early symptoms are easily overlooked, lead many patients to avoid seeking care and medical support.
The experiment is a step toward building an automated and affordable system for spotting signs of depression early on, when the chance for a cure is highest.
5. So just how can AI hurt us?
Machine learning is one of the most exciting technologies of the twenty-first century. But science fiction and common belief have provided no lack of doomsday scenarios of AI harming people or even taking over the world. Dispelling the myths and disinformation and providing knowledge should be a mission for all AI-developing companies. If you’re new to the discussion, here’s an essay addressing the threat of AI.
Why does it matter?
Leaving the doubts unaddressed may result in bias and prejudice when making decisions, both business and private ones. The key to making the right decisions is to be informed on all aspects of the issue. Pretending that Stephen Hawking’s and Elon Musk’s warnings about the cataclysmic risks AI poses were pointless would indeed be unwise.
On the other hand, the essay addresses less radical fears about AI, like hidden bias in datasets leading to machine-powered discrimination or allowing AI to go unregulated.
That’s why the focus on machine morality and the transparency of machine learning models is so important and comes up so frequently in AI Monthly digest.
Summary
December is the time to go over the successes and failures of the past year, a fact that applies equally to the machine learning community. Facing both the failures and challenges provides an opportunity to address common issues and make the upcoming work more future-proof.