December brought both year-end summaries and impressive research work pushing the boundaries of AI.
Unlike many hi-tech trends, artificial intelligence is a term widely known in society, and there are many (not always accurate) views on the topic. The December edition of AI Monthly digest both shares news about the latest developments and addresses doubts about this area of technology.
Previous editions of AI Monthly digest:
- AI Monthly digest #3: Artificial Intelligence in science getting big
- AI Monthly digest #2 – the fakeburger, BERT for NLP and machine morality
- AI Monthly digest #1 – AI stock trading & Kaggle record
1. Tuning up artificial intelligence and music
Machines are getting better at image recognition and natural language processing, but several fields remain unpolished, leaving considerable room for improvement. One of them is music theory and analysis. Music operates on different timescales: some motifs repeat within seconds, while others recur throughout an entire composition and sometimes beyond it. Moreover, music relies heavily on repetition. The model designed by Google's Magenta team uses relative attention to track how far apart two tokens (motifs) are, and it has produced convincing, quite relaxing pieces of piano music. The music it generates generally evokes Bach more than Bowie, though. The researchers provided samples of both great and flawed performances, and although the AI-composed samples still lean toward classical music, there are also some jazz-styled improvisations. Further information about this research can be found on Magenta's Music Transformer website.

Why does it matter? Music and sound are yet another type of data machine learning can analyze. Pushing music research further will deepen our knowledge of music and styles, much as AI made Kurt Vonnegut's dream of analyzing literature a reality. Furthermore, the music industry may be the next to leverage data and the knowledge and talents of computer scientists. Apart from tuning the recommendation engines of streaming services, they may contribute more directly to the creation of music. A keyboard is, after all, a musical instrument.
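For readers curious about the mechanism, here is a minimal, illustrative sketch of relative self-attention, the idea the Music Transformer builds on: on top of the usual content-based attention scores, it adds a term that depends only on the distance between two positions, which helps the model notice that a motif heard many bars earlier is repeating. The names and sizes below (`seq_len`, `d_head`, `E_rel`) are assumptions made for the sketch, not Magenta's actual code.

```python
# Toy sketch of relative self-attention (not Magenta's implementation).
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def relative_self_attention(x, Wq, Wk, Wv, E_rel):
    """x: (seq_len, d_model); E_rel: (2*seq_len - 1, d_head) relative-position embeddings."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv          # project inputs to queries, keys, values
    seq_len, d_head = q.shape
    content_logits = q @ k.T                   # standard content-based similarity
    # Extra term that depends only on the distance i - j between two positions,
    # so a motif repeated far away still produces a strong attention signal.
    rel_logits = np.zeros((seq_len, seq_len))
    for i in range(seq_len):
        for j in range(seq_len):
            rel_logits[i, j] = q[i] @ E_rel[i - j + seq_len - 1]
    attn = softmax((content_logits + rel_logits) / np.sqrt(d_head))
    return attn @ v

rng = np.random.default_rng(0)
seq_len, d_model, d_head = 8, 16, 16
x = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_head)) * 0.1 for _ in range(3))
E_rel = rng.normal(size=(2 * seq_len - 1, d_head)) * 0.1
print(relative_self_attention(x, Wq, Wk, Wv, E_rel).shape)  # (8, 16)
```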
2. The machine style manipulator

Generating fake images from real ones is nothing new. Generative Adversarial Networks hone their abilities by analyzing real images, generating fake ones, and then training to determine as accurately as possible which is real and what is shown in the images. The challenge facing neural networks, and those who design them, lies in producing convincing images of people, cars, or anything else the networks are meant to recognize.

In a recent research paper, the group behind the "one hour of fake celebrity faces" project introduced a new neural network architecture that separates high-level attributes from stochastic variations. In images of people, the high-level attribute may be the pose, while freckles or the hairdo are stochastic variations. In a recent video, the researchers show the results of applying a style-based generator and then manipulating the styles to produce different types of images. The results are impressive: the researchers were able to produce convincing images of people from various ethnic backgrounds, and by controlling styles at different levels they could tune everything in the image, from gender and ethnicity to the shape of the glasses worn.

Why does it matter? This is essentially the new state of the art for GANs, the best-performing image generation technology in use. Producing convincing fake images of faces and houses can significantly improve the performance of image recognition models. Ultimately, this technology may even be a life saver, especially when applied in medical diagnosis, for example in diabetic retinopathy.
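To make the split between styles and stochastic variation more concrete, here is a toy sketch of the style-based generator idea (assumed names and toy dimensions, not NVIDIA's implementation): a mapping network turns a latent vector into a style vector, each synthesis layer is modulated by a style (high-level attributes such as pose), per-layer noise injects fine stochastic detail such as freckles, and "style mixing" feeds coarse layers one style and fine layers another.

```python
# Toy sketch of a style-based generator (assumed names, not NVIDIA's code).
import numpy as np

rng = np.random.default_rng(0)

def mapping_network(z, layers=3):
    # Map latent z to an intermediate style vector w.
    w = z
    for _ in range(layers):
        w = np.tanh(w @ rng.normal(scale=0.5, size=(w.shape[-1], w.shape[-1])))
    return w

def styled_layer(x, w, noise_strength=0.1):
    # An affine "style" derived from w scales and shifts normalized activations
    # (AdaIN-like), while random noise adds fine stochastic detail.
    scale, shift = 1.0 + w.mean(), w.std()
    x = (x - x.mean()) / (x.std() + 1e-8)
    return scale * x + shift + noise_strength * rng.normal(size=x.shape)

def generate(z_coarse, z_fine, resolution=8):
    # "Style mixing": coarse layers use one latent (pose, identity),
    # fine layers use another (colour scheme, micro-texture).
    w_coarse, w_fine = mapping_network(z_coarse), mapping_network(z_fine)
    img = rng.normal(size=(resolution, resolution))  # learned constant in the real model
    for layer in range(4):
        w = w_coarse if layer < 2 else w_fine
        img = styled_layer(img, w)
    return img

print(generate(rng.normal(size=16), rng.normal(size=16)).shape)  # (8, 8)
```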
3. AI failures – errare (not only) humanum est

2018 saw significant improvements in machine learning techniques, and artificial intelligence proved even further how useful it is. However, using it in day-to-day human life will not be without its challenges. In his 1896 novel "An Outcast of the Islands", Joseph Conrad wrote: "It's only those who do nothing that make no mistakes". This idea also applies to theoretically mistake-proof machines. Apart from inspiring successes, 2018 also witnessed some significant machine learning failures:

- Amazon's gender-biased AI recruiter – the machine learning model designed to pre-screen the resumes sent to the tech giant overlooked female engineers due to bias in the dataset. The reason was obvious: the tech industry is male-dominated, and as algorithms have neither common sense nor social skills, the model concluded that women were simply not a good match for the tech positions the company was trying to fill. Amazon ditched the flawed recruiting tool, yet the questions about hidden bias in datasets remain.
- Uber's fatal autonomous car crash – the story of the fatal crash is a bitter lesson for all autonomous car manufacturers. Uber's system not only detected the pedestrian it hit, but autonomously decided to proceed and ignore the warnings, killing 49-year-old Elaine Herzberg.
- World Cup predictions gone wrong – the World Cup offered another bitter lesson, this time for predictive analytics. While the model built to predict the brackets may have been sophisticated, it failed entirely. According to its predictions, Germany should have met Brazil in the final. Instead, the German team didn't manage to get out of its group, while Brazil bent the knee before Belgium. The final came down to France versus Croatia, a pairing unthinkable both for machine learning models and for football enthusiasts around the world. The case was further described in our blog post about failure in predictive analytics.