AIOps Network Traffic Analysis (NTA) – a business guide

March 27, 2020/in Big data & Spark, Machine learning /by Konrad Budek

Network Traffic Analysis (NTA) is a key component of modern corporate cybersecurity. With machine learning and artificial intelligence solutions, the sheer amount of data to analyze becomes an asset to be used rather than, as was once the case, a challenge to overcome.

This post looks at:

  • What is network traffic analysis
  • The benefits of network traffic analysis
  • How AI and machine learning can support network traffic analysis

According to Markets and Markets data, the global network traffic monitoring software and traffic analysis tool market is projected to grow from $1.9 billion in 2019 to $3.2 billion by 2024. The growth is driven mostly by the increasing demand for sophisticated network monitoring tools and advanced network management systems that can handle the growing traffic and increasing flow of information.

The growth in internal traffic is a direct reflection of global trends. According to Cisco data, nearly 66% of the global population will be online by 2023. The increase in traffic is driven not only by users but also by the myriad of connected devices that form the IoT cloud around us.

The share of Machine-to-Machine (M2M) connections is estimated to grow from 33% in 2018 to 50% in 2023, with the consumer segment accounting for 74% of those connections and the business segment for 26%.

What is network traffic analysis

In its most basic form, Network Traffic Analysis (NTA) is the process of recording and analyzing network traffic patterns in search of suspicious elements and security threats. The term was originally coined by Gartner to describe a growing industry in the computer security ecosystem.

The foundation of NTA is the assumption that there is a “normal” situation in the system that reflects daily operations. Due to seasonal or general trends, operations fluctuate naturally, but overall the system remains stable and thus internal network monitoring can be done with a traffic analyzer. Knowing the “normal” situation is the first step in spotting signs of malicious activities within the system.
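
To make the idea of a learned "normal" baseline concrete, here is a minimal sketch that flags unusual traffic volumes with a rolling z-score. It is an illustration only, not deepsense.ai's production approach, and the column names (`timestamp`, `bytes`) are assumptions.

```python
import pandas as pd

def flag_traffic_anomalies(df: pd.DataFrame, window: int = 288, threshold: float = 4.0) -> pd.DataFrame:
    """Flag intervals whose traffic volume deviates strongly from the recent baseline.

    df is assumed to have a 'timestamp' column and a 'bytes' column with
    per-interval traffic volume; 'window' is the number of past intervals
    used to estimate what "normal" looks like.
    """
    df = df.sort_values("timestamp").copy()
    baseline = df["bytes"].rolling(window, min_periods=window // 2)
    mean, std = baseline.mean(), baseline.std()
    # z-score of each interval against the trailing baseline
    df["zscore"] = (df["bytes"] - mean) / std
    df["anomaly"] = df["zscore"].abs() > threshold
    return df

# Example usage on traffic aggregated into 5-minute buckets:
# anomalies = flag_traffic_anomalies(traffic_df)[lambda d: d["anomaly"]]
```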

In addition to spotting security threats, NTA is also used to optimize the system, spotting inefficiencies as well as the system’s need for additional components when it arises.

Network Traffic Analysis software tools analyze a system’s communication flow, including:

  • TCP/UDP packets
  • “Virtual network traffic” done in virtual private networks
  • Traffic to and from cloud environments (storage, computing power, etc.)
  • API calls to cloud-based apps or SaaS solutions.

This means that nearly all traffic and information flow can be tracked and analyzed by smart network traffic analysis solutions. Modern solutions often use sophisticated techniques like reinforcement learning.

A key component of network analytics tools is the dashboard used to interface with the team, which receives clear information about the network. The dashboard enables easier network performance monitoring and diagnostics and is a convenient way to convey technical knowledge to those who lack it. By reducing complexity to simplicity, the dashboard can also play its part in convincing your financial director to spring for a powerful new server or another essential component.

NTA solutions are clearly sophisticated and powerful tools. But what are the direct benefits of network traffic analysis?

The benefits of network traffic analysis

Network traffic analysis delivers several benefits:

  • Avoiding bandwidth and server performance bottlenecks – armed with knowledge about how information flows in the system, one can analyze the network, define problems and start looking for solutions.
  • Discovering apps that gobble up bandwidth – tweaking the system can deliver significant savings when API calls are reduced or information is reused.
  • Proactively reacting to a changing environment – a key feature when it comes to delivering high-quality services for clients and customers. The company can react to increasing demand or spot signs of an approaching peak to harden the network against it. Advanced network traffic analysis tools are often armed with solutions designed to respond in real-time to network changes much faster than any administrator would.
  • Managing devices collectively – with modern network monitoring applications, companies can group devices and network components and manage them together, effectively making use of the network performance analytics done earlier.
  • Resource usage optimization – With all apps, devices, components, and traffic pinpointed with a dashboard, the company can make more informed decisions about the system’s resources and costs.

The key challenge in computer network management is processing and analyzing the gargantuan amounts of data networks produce. Looking for the proverbial needle in the haystack is an apt metaphor for searching for insights among the data mined from a network.

At this scale, ML tools are the only practical way to monitor network traffic effectively.

How machine learning can support traffic analysis

The key breakthrough that comes from using machine learning-powered tools in NTA is automation. The lion’s share of the dull and repetitive yet necessary work is done by machines. In real-time network analysis, speed is another factor that only machines can deliver. Machines and neural networks can spot and analyze hidden patterns in data to deliver a range of advantages for companies. To name just a few:

Intrusion detection

The first and sometimes the only sign of intrusion into a system is suspicious traffic that can be easily overlooked. Intrusions are often detected only after 14 days.

AI-based solutions are tireless, analyzing traffic in real-time. Armed with the knowledge of infrastructure operations, the system can spot any sign of malicious activity.

Reducing false positives

AI-based solutions are less prone to the false positives that can turn the life of a system administrator into a living hell. False-positive detection and reduction significantly enrich ML-supported NTA, enabling the team to focus on real challenges rather than verifying every alert.

Workload prediction

With data about ongoing system performance, the solution can deliver information about predicted traffic peaks or downs to optimize spending.

Thus the benefits are twofold. First, the company can manage the costs of infrastructure, be it cloud or on-prem, to handle the real traffic and avoid overpaying. Second, there is much more predictability in the estimated need for resources, so they can be booked in advance or the costs can be optimized in other ways.
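
One simple way to produce such workload forecasts is classical exponential smoothing. The sketch below is a minimal example using statsmodels and assumes an hourly load series with a daily cycle; real AIOps platforms typically use richer models.

```python
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# 'load' is assumed to be an hourly pandas Series (e.g. requests per hour)
# indexed by timestamp; 24 hours is used as the seasonal period.
def forecast_next_day(load: pd.Series) -> pd.Series:
    model = ExponentialSmoothing(
        load,
        trend="add",          # long-term growth or decline
        seasonal="add",       # repeating daily pattern
        seasonal_periods=24,  # hours per seasonal cycle
    ).fit()
    return model.forecast(24)  # predicted load for the next 24 hours

# Example: the predicted peak hour can drive scaling decisions in advance.
# peak_hour = forecast_next_day(load).idxmax()
```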

Spotting early signs of attack (DDoS)

A Distributed Denial of Service (DDoS) attack attempts to suddenly overload a company’s resources in an effort to take down its website or other online service. The losses are hard to predict – from the hit to the company’s reputation for being unable to defend itself against cybercrime, to the staggering and quickly accruing losses from being unavailable to customers.

With early information about an incoming attack, the company can set up defenses such as blocking certain traffic, ports or locations to maintain availability in other markets. Network traffic reports can also be used by the various agencies that fight cybercrime and hunt for those responsible for the attack.

Malicious packet detection

Sometimes it is not about an intrusion, and the malicious activity is not aimed directly at the company. A user could have downloaded malware onto a private device connected to the enterprise network via a VPN. From there, the infection can spread, or the software can leverage the company’s resources, such as computing power, for its own purposes, like mining cryptocurrency without the owner’s consent.

Summary

Network traffic monitoring and analysis is one of the key components of modern enterprise-focused cybersecurity. The gargantuan amounts of data to process also make it a perfect foundation for ML-based solutions, which thrive on data.

That’s why deepsense.ai delivers a comprehensive AIOps architecture-based platform for network data analytics.

If you have any questions about the AIOps solutions we provide, don’t hesitate to contact Andy Thurai, our Head of US operations via the contact form or aiops@deepsense.ai email address.


Five AI trends 2020 to keep an eye on

March 9, 2020/in Deep learning, Machine learning, Reinforcement learning /by Konrad Budek

While making predictions may be easy, delivering accurate ones is an altogether different story. That’s why in this column we won’t just be looking at the most important trends of 2020, but we’ll also look at how the ideas we highlighted last year have developed.

In summarizing the trends of 2020, one conclusion we’ve come to is that society is getting increasingly interested in AI technology, both in terms of the threats it poses and of the wider problems that need to be addressed.

AI trends 2019 in review – how accurate were our predictions?

In our AI Trends 2019 blogpost we chronicled last year’s most important trends and directions of development to watch. It was shortly after launching the AI Monthly Digest, a monthly summary of the most significant and exciting machine learning news. Here’s a short summary of what we were right and wrong about in our predictions.

  • Chatbots and virtual assistants – powered by a focus on the development of Natural Language Processing (NLP), growth in this market was robust, just as we predicted. The chatbot market was worth $2.6 billion in 2019 and is predicted to reach up to $9.4 billion by 2024.
  • Falling training times – the trend is reflected in larger neural networks being trained in a feasible time, with GPT-2 being the best example.
  • Autonomous vehicles are on the rise – the best proof is in our own contribution to the matter in a joint-venture with Volkswagen.
  • Machine learning and artificial intelligence are being democratized and productionized – According to Gartner, 37% of organizations have implemented AI in some form. That’s a 270% increase over the last four years.
  • AI and ML responsibility and transparency – the trend encompasses delivering unbiased models and tools. The story of Amazon using an AI-based recruiting tool that turned out to be biased against female applicants made enough waves to highlight the need for further human control and supervision of automated solutions.

Apparently, deepsense.ai’s data science team was up to date and well-informed on these matters.

“It is difficult to make predictions, especially about the future.”
-Niels Bohr

The world is far from slowing down and Artificial Intelligence (AI) appears to be one of the most dominant technologies at work today. The demand for AI talent has doubled in the last two years, with technology and the financial sector absorbing 60% of the talented employees on the market.

The Artificial Intelligence market itself is predicted to reach $390.9 billion by 2025, primarily by automating dull and repetitive tasks. It is predicted that AI will resolve around 20% of unmet healthcare demand.

Considering the impact of AI on people’s daily lives, spotting the right trends to follow is even more important. AI is arguably the most important technology trend of 2020, so enjoy our list!

Natural language processing (NLP) – further development

Whether the world was ready for it or not, GPT-2 was released last year, with the balance between safety and progress as a guiding motif. Initially, OpenAI refused to make the model and dataset public due to the risk of the technology being used for malicious ends.

The organization released versions of the model throughout 2019, with each confirmed to be “hardened against malicious usage”. The model was considered cutting edge, though like most things in tech, another force soon prevailed. At the end of January 2020, Google Brain took the wraps off of Meena, a 2.6-billion parameter end-to-end neural conversational model trained on 341 GB of online text.

The convenience of NLP solutions is enjoyed by users who have embraced virtual assistants like Google Assistant, Alexa or Siri. According to Adroit Market Research, the market of Intelligent Virtual Assistants is predicted to grow at 33% compound annual growth rate between now and 2025. The market was valued at $2.1 billion in 2019. The increasing use of smartphones and other wearable intelligent devices, among other trends, is predicted to be a driver of the growth.

Having started with a consumer-centric approach, virtual assistants are predicted to become more involved in business operations, further automating processes as well as tedious and repetitive tasks. According to Computerworld, approximately 40% of business representatives plan to implement voice technology within 24 months – that is, no later than 2021. NLP is shaping up to be a major trend not only this year, but well into the future.

Autonomous vehicles


It is 2020 and driverless cars have yet to hit the streets. In hindsight, the Guardian’s prediction that there would be 10 million self-driving cars on the road by 2020 is all too easy to scoff at now.

On the other hand, tremendous progress has been made and with every month the autonomous car gets closer to rolling out.

deepsense.ai has also contributed to the progress, cooperating with Volkswagen on building a reinforcement learning-based model that, when transferred from a simulated to a real environment, managed to safely drive a car.

But deepsense.ai is far from being the only company producing significant research on autonomous cars and developing technology in this field. Also, there is a great difference between seeing an autonomous car on busy city streets and in the less demanding highway environment, where we can expect the automation and semi-automation of driving to arrive first.

According to the US Department of Transportation, 63.3% of the $1,139 billion of goods shipped in 2017 were moved on roads. Had autonomous vehicles been enlisted to do the hauling, the transport could have been organized more efficiently, and the need for human effort vastly diminished. Machines can drive for hours without losing concentration. Road freight is globally the largest producer of emissions and consumes more than 70% of all energy used for freight. Every optimization made to fuel usage and routes will improve both energy and time management.

AI getting popular – beneath the surface

There is a lot of buzz around how AI-powered solutions impact our daily lives. While the most obvious change may be NLP powering virtual assistants like Google Assistant, Siri or Alexa, the impact on our daily lives runs much deeper, even if it’s not all that visible at first glance. Artificial intelligence-powered solutions have a strong influence on manufacturing, impacting prices and supply chains of goods.

Here are a few applications being used without batting an eye:

  • Demand forecasting – companies collect tremendous amounts of data on their customer relationships and transactional history. Also, with the e-commerce revolution humming along, retail companies have gained access to gargantuan amounts of data about customer service, products and services. deepsense.ai delivers demand forecasting tools that not only process such data but also combine it with external sources to deliver more accurate predictions than standard heuristics. Helping companies avoid overstocking while continuing to satisfy demand is one essential benefit demand forecasting promises.
  • Quality control – harnessing the power of image recognition enables companies to deliver more accurate and reliable quality control automation tools. Because machines are domain-agnostic, the tools can be applied in various businesses, from fashion to construction to manufacturing. Any product that can be controlled using human sight can also be placed under the supervision of computer vision-powered tools.
  • Manufacturing processes optimization – The big data revolution impacts all businesses, but with IoT and the building of intelligent solutions, companies get access to even more data to process. But it is not about gathering and endless processing in search of insights – the data is also the fuel for optimization, sometimes in surprising ways. Thanks solely to optimization, Google reduced its cooling bill by 40% without adding any new components to its system. Beyond cutting costs, companies also use process optimization to boost employee safety and reduce the number of accidents.
  • Office processes optimization – AI-powered tools can also be used to augment the daily tasks done by various specialists, including lawyers or journalists. Ernst & Young is using an NLP tool to review contracts, enabling their specialists to use their time more efficiently. Reuters, a global media corporation and press agency, uses AI-powered video transcription tools to deliver time-coded speech-to-text tools that are compatible with 11 languages.

Thanks to the versatility and flexibility of such AI-powered solutions, business applications are possible even in the most surprising industries and companies. So even if a person were to completely abandon technology (right…), the services and products delivered to them would still be produced or augmented with AI, be they clothing, food or furniture.

AI getting mainstream in culture and society

The motif of AI is prevalent in the arts, though usually not in a good way. Isaac Asimov was among the first writers to hold that autonomous robots would need to follow a moral code in order not to become dangerous to humans. Of course, popular culture has offered a number of memorable examples of AI run amok, including the Terminator and HAL 9000 from 2001: A Space Odyssey.

The question of moral principles may once have been elusive and abstract, but autonomous cars have necessitated a legal framework ascribing responsibility for accidents. Amazon learned about the need to control AI models the hard way, albeit in a less mobile environment: a recruiting tool the company was using had to be scrapped due to a bias against women.

The impact of AI applications on people’s daily lives, choices and careers is building pressure to deliver legal regulations on model transparency as well as information not only about outcomes, but also the reasons behind them. Delivering AI in a black-box mode is not the most suitable way to operate, especially as the number of decisions made automatically by AI-powered solutions increases.

Automating the development of AI

Making AI mainstream is not only about making AI systems more common, but also about widening the availability of AI tools and their accessibility to less-skilled individuals. The number of machine and deep learning models delivering solutions will only increase. It should therefore come as no surprise that the people responsible for automating others’ jobs are keen to support their own jobs with automation.

Google has entered the field with AutoML, a tool that simplifies the process of developing AI and makes it available to a wider audience – one that, presumably, is not going to use ML algorithms in especially non-standard ways. AutoML joins IBM’s AutoAI, which supports data preparation.

Also, there are targeted cloud offerings for companies seeking to harness ready-to-use components in their daily jobs with a view to augmenting their standard procedures with machine learning.

Summary

While the 2020 AI trends themselves are similar to those of 2019, the details have changed immensely, so refreshing our perspective seemed worth our while. The world is changing, ML is advancing, and AI is ever more ubiquitous in our daily lives.


What is AIOps – AI for IT operations explained

January 31, 2020/in Machine learning /by Andy Thurai

Every business now depends on IT. Efficient IT Operations is mandatory for all businesses, especially those operating in a hybrid mode – a mix of existing data centers and multi-cloud locations.

As with any business process, IT operations can be augmented with machine learning-based solutions. IT is particularly fertile ground for AI as it is mostly digital, has seemingly endless processes requiring automation and there are gigantic amounts of data to process.

IT Operations are expensive!

According to Research and Markets data, the global IT operations and service management (ITSM) market is predicted to reach $35.98 billion by 2025, growing at 7.5% annually.

As the importance of IT operations has ramped up, so has the pressure on ITOps teams. A range of issues puts pressure on teams: shrinking budgets for IT operations, multi-cloud-based applications, dynamic scaling of infrastructure, limited availability of experienced ITOps personnel, the constant threat from outsiders given the nature of cloud applications, and the extension of applications to edge locations with IoT and mobile devices.

AIOps is here to support the maintenance teams and provide AIOps tools to solve problems once thought unsolvable.

What is AIOps

AIOps supports infrastructure management and operations with AI-based solutions. It is employed mainly to automate tasks, improve process efficiency, quicken reactions — sometimes even to a real-time response rate — and deliver accurate predictions on upcoming events.

The big data revolution and machine learning technology have driven change, making it possible to process the vast amounts of information IT infrastructure generates. AI can solve the following challenges:

  • Anomaly detection – despite fluctuations and the dynamic nature of data, the internal infrastructure ecosystem is a stable environment. Thus, any anomaly can signal the existence of a problem. Also, early detection of an anomaly is usually a sign of a problem that has yet to be fully understood.
  • Event consolidation – An AI model can simplify huge amounts of data, dividing it into multiple layers and finding insights.
  • Service tickets analytics – when fed data on tickets submitted to a service desk, an ML-based model can predict seasonal spikes in requests. This can help the service desk owner deploy help desk personnel as needed.
  • Detecting seasonality and trends – when using an AI-powered solution, any time series can be divided into three components – seasonality, trend and residual (see the sketch after this list). That increases the predictability of long-term commitments and makes managing them more effective.
  • Frequent pattern mining – machine-powered analysis delivers insights that are beyond the reach of humans. Machines not only process more data but also, unlike humans, make unbiased decisions. They also find correlations that are impossible for humans to detect.
  • Time series forecasting – AI-based models can forecast future values such as memory load, network throughput, ticket count or other metrics. This enables AIOps solutions to deliver early alert predictions.
  • Noise reduction – AIOps solutions eliminate noise and concentrate on the real underlying problems.
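
The seasonality–trend–residual split mentioned above can be reproduced with a classical decomposition. This is a minimal sketch, assuming an hourly metric stored as a pandas Series; production AIOps pipelines usually combine it with more robust methods.

```python
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

# 'cpu_load' is an assumed hourly pandas Series of a monitored metric.
def decompose_metric(cpu_load: pd.Series):
    # period=24 treats each day as one seasonal cycle for hourly data
    result = seasonal_decompose(cpu_load, model="additive", period=24)
    return result.trend, result.seasonal, result.resid

# Large absolute residuals are candidates for anomaly alerts,
# while the trend component feeds capacity planning.
```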

AI helps ITOps run smoother

There are currently several major challenges for IT departments.

Fraud Detection/Security

According to IBM data quoted by CSO, the average time to identify a breach in 2019 was 209 days. Such a sizable delay is caused mainly by security teams being overwhelmed with work and the stealth operations of criminals. Cybercrime is a highly profitable venture, with profits reaching up to $1.5 trillion a year. Cybercriminals don’t play favorites, targeting victims of all stripes, from individuals to international corporations. In May 2019 authorities tracked down a group that had availed itself of an estimated $100 million.

Anomaly detecting AI and machine learning-based AIOps solutions can spot even the slightest signs that unexpected events are occurring in a system. AIOps can be trained to learn what “typical operations” look like and spot anything out of place. It can also send real-time notifications to the team.

Eliminate Downtime

According to an ITIC study, for 86% of companies surveyed, a single hour of downtime costs $300,000. For 34%, the cost comes in at a staggering $1 million.

AIOps comes with various tools to help keep the lights on and operations running smoothly. In addition to anomaly detection, time series prediction serves as a benchmark and a tool for designing maintenance flow. It also supports efficient resource management. Pattern mining spots inefficient components and bottlenecks to be optimized. It also enables the mapping of both seasonality and trends, so resources supporting operations can be assigned efficiently.

Capacity planning

Before the cloud, companies were forced to overpay for servers and computing power because they had to stay on top of fluctuations in their seasonal needs for computer power. Today, despite the access they have to endless power and storage in the cloud, IT teams around the world continue to struggle with capacity planning and delivering scalable infrastructure to meet the irregular demands for infrastructure.

Nearly all AIOps functionalities support the goal of delivering a stable and scalable environment. With capacity planning supported by time series forecasting and ticket analysis, IT teams can manage their infrastructure scaling and maintenance not only to avoid downtime but also to minimize costs and utilize their systems as efficiently as possible.

A great example comes from Google, whose AI-based system delivered new operational efficiency recommendations for data center cooling systems, effectively cutting costs by 40%.

Noise reduction and pattern mining deliver clear insights, while combing through the data in real time enables an AIOps platform to deliver insights faster and make them more actionable.

Summary

AIOps machine learning-powered solutions can significantly improve today’s data-heavy IT infrastructure management.

A good way to learn more about the AIOps strategy landscape is to meet our team during the upcoming AIOps conference in Fort Lauderdale, Florida. Our specialists will be more than happy to strategize, assist, answer your questions, and share their expertise from the field. If you’d like to meet us there, just drop us a line or stop by our booth (#1104). Contact us at AIOps@deepsense.AI to see how we can help!


Trends and fads in machine learning – topics on the rise and in decline in ICLR submissions

October 24, 2019/in Machine learning /by Michał Kustosz and Błażej Osiński

ICLR (the International Conference on Learning Representations) is one of the most important international machine learning conferences. Its popularity is growing fast, putting it on a par with conferences such as ICML, NeurIPS and CVPR.

The 2020 conference is slated for April 26th, but the submission deadline has already come and gone. 2,585 publicly available papers were submitted – about a thousand more than were featured at the 2019 conference.

The second law of paper-dynamics tells us that the number of submitted papers will reach 100,000 within 24 years. That’s some serious growth!

We analyzed abstracts and keywords of all the ICLR papers submitted within the last three years to see what’s trending and what’s dying out. Brace yourselves! This year, 28% of the papers used or claimed to introduce state-of-the-art algorithms, so be prepared for a great deal of solid machine learning work!

“Deep learning” – have you heard about it?

To say you use deep learning in computer vision or natural language processing is like saying fish live in water. Deep learning has revolutionized machine learning and become its underpinning. It’s present in almost all fields of ML, including less obvious ones like time series analysis or demand forecasting. This may be why the number of references to deep learning in keywords actually fell – from 19% in ‘18 to just 11% in ‘20. It’s just too obvious to acknowledge.


A revolution in network architecture?

One of the hottest topics this year turned out to be Graph Neural Networks. A GNN is a deep learning architecture for graph-structured data. These networks have proved tremendously helpful in applications in medicine, social network classification and modeling the behavior of dynamic interacting objects. The rise of GNNs is unprecedented – from 12 papers mentioning them in ‘18 to 111 in ‘20!


All Quiet on the GAN Front

The next topic has been extremely popular in recent years. But what has been called ‘the coolest idea in machine learning in the last twenty years’ has quickly become heavily exploited. Generative Adversarial Networks can learn to mimic any distribution of data, creating impressive, never-before-seen artificial images. Yet they are on the decline, despite being prevalent in the media (deep fakes).


Leave designing your machine learning to… machines

Finding the right architecture for your neural network can be a pain in the neck. Fear not, though: Neural Architecture Search (NAS) will save you. NAS is a method of building network architecture automatically rather than handcrafting it. It has been used in several state-of-the-art algorithms improving image classification, object detection or segmentation models. The number of papers on NAS increased from a mere five in ‘18 to 47 in ‘20!
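
Full-blown NAS systems are complex, but the underlying idea can be shown as a search over candidate architectures scored on validation data. The sketch below uses plain random search over layer depth and width, a common NAS baseline rather than a state-of-the-art method, with scikit-learn standing in for a real training pipeline.

```python
import random
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

def random_architecture_search(X, y, trials: int = 20, seed: int = 0):
    """Score randomly sampled architectures and keep the best one."""
    rng = random.Random(seed)
    X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=seed)
    best_score, best_arch = -1.0, None
    for _ in range(trials):
        # sample an architecture: number and width of hidden layers
        depth = rng.randint(1, 3)
        arch = tuple(rng.choice([32, 64, 128]) for _ in range(depth))
        model = MLPClassifier(hidden_layer_sizes=arch, max_iter=300, random_state=seed)
        score = model.fit(X_tr, y_tr).score(X_val, y_val)
        if score > best_score:
            best_score, best_arch = score, arch
    return best_arch, best_score
```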


Reinforcement learning – keeping stable

The percentage of papers on reinforcement learning has remained more or less constant. Interest in the topic remains significant – autonomous vehicles, AlphaStar’s success in playing StarCraft, and advances in robotics were all widely discussed this year. RL is a stable branch of machine learning, and for good reason: future progress is widely anticipated.


What’s next?

That was just a sample of machine learning trends. What will be on top next year? Even the deepest neural network cannot predict it. But interest in machine learning is still on the rise, and researchers are nothing if not creative. We shouldn’t be surprised to hear about groundbreaking discoveries next year, or to see a 180-degree change in the trends.

A full analysis of trends in papers submitted to the last three conferences is available on the deepsense.ai blog.


AI Monthly Digest #13 – an unexpected twist for the stock image market

October 7, 2019/in Machine learning, AI Monthly Digest /by Konrad Budek and Arkadiusz Nowaczynski

September brought us two interesting AI-related stories, both with a surprising social context.

Despite its enormous impact on our daily lives, Artificial Intelligence (AI) is often still regarded as too hermetic and obscure for ordinary people to understand. As a result, an increasing number of people use Natural Language Processing-powered personal assistants, yet only a tiny fraction try to understand how they work and how to use them effectively. This makes them somewhat of a black box.

Making the field more comprehensible and accessible is one aspect of  AI researchers’ mission. That’s why research recently done by OpenAI is so interesting.

Hide-and-Seek – the reinforcement learning way

Reinforcement learning has delivered inspiring and breathtaking results. The technique is used to train the models behind autonomous cars and to control sophisticated devices like robotic arms.

Unlike in supervised learning, a reinforcement learning model learns by interacting with the environment. The scientist can shape its behavior by applying a policy of rewards and punishments. The mechanism is close to the one humans use to learn.
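
The reward-and-punishment loop can be illustrated with the simplest possible reinforcement learning algorithm, tabular Q-learning. This is a minimal sketch; the environment interface below is an assumed stand-in, not OpenAI's hide-and-seek setup.

```python
import random
from collections import defaultdict

def q_learning(env, episodes=1000, alpha=0.1, gamma=0.99, epsilon=0.1):
    """Minimal tabular Q-learning loop.

    'env' is assumed to expose reset() -> state, step(action) -> (state, reward, done)
    and a list of discrete 'actions'; rewards play the role of the
    rewards and punishments that shape the agent's behavior.
    """
    q = defaultdict(float)  # (state, action) -> estimated return
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # epsilon-greedy: mostly exploit the best known action, sometimes explore
            if random.random() < epsilon:
                action = random.choice(env.actions)
            else:
                action = max(env.actions, key=lambda a: q[(state, a)])
            next_state, reward, done = env.step(action)
            best_next = max(q[(next_state, a)] for a in env.actions)
            # move the estimate toward reward + discounted future value
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = next_state
    return q
```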

Reinforcement learning has been used to create superhuman agents that go toe-to-toe against human masters in chess, Go and StarCraft. Now OpenAI, the company behind the GPT-2 model and several other breakthroughs in AI, has created agents that play a version of hide-and-seek, that most basic and ageless of children’s games.

OpenAI researchers divided the agents into two teams, hiders and seekers, and provided them with a closed environment containing walls and movable objects like boxes and ramps. Either team could “lock” these items to make them unmovable for the opposing team. The teams developed a set of strategies and counter-strategies in a bid to successfully hide from or seek out the other team. The strategies included:

  • Running – the first and least sophisticated ability, enabling one to avoid the seekers.
  • Blocking passages – the hiders could block passages with boxes in order to build a safe shelter.
  • Using a ramp – to get over a wall or a box, the seeker team learned to use a ramp to jump over an obstacle or climb onto a box and see the hider.
  • Blocking the ramp – to prevent the seekers from using the ramp to climb the box, the hiders could block access to the ramp. The process required a great deal of teamwork, which was not supported by the researchers in any way.
  • Box surfing – a strategy developed by seekers who were basically exploiting a bug in the system. The seekers not only jumped on a box using a ramp that had been blocked by the hiders, but also devised a way to move it while standing on it.
  • All-block – the ultimate hider-team teamwork strategy of blocking all the objects on the map and building a shelter.

The research delivered, among other benefits, a mesmerizing visual of little agents running around.

Why does it matter?

The research itself is neither groundbreaking nor breathtaking. From a scientific and developmental point of view, it looks like little more than elaborate fun. Yet it would be unwise to consider the project insignificant.

AI is still considered a hermetic and difficult field. Showing the results of training in the form of friendly, entertaining animations is a way to educate society on the significance of modern AI research.

Also, animation can be inspiring for journalists to write about and may lead youth to take an interest in AI-related career paths. So while the research has brought little if any new knowledge, it could well end up spreading knowledge on what we already know.

AI-generated stock photos available for free

Generative Adversarial Networks have proved to be insanely effective in delivering convincing images of not only hamburgers and dogs, but also human faces. One breakthrough is breathtaking indeed: not even a year ago, the eerie “first AI-generated portrait” was sold at auction for nearly half a million dollars.

Now, generating the face of a non-existent person is as easy as generating any other fake image – a cat, a hamburger or a landscape. To prove that the technology works, the team behind the 100K Faces project delivered a hundred thousand AI-generated faces to use for any stock purpose, from business brochures to flyers to presentations. Future use cases could include on-the-go image generators that, powered by a demand forecasting tool, provide the image that best suits demand.

More information on the project can be found on the team’s Medium page.

Why does it matter?

The images added to the free images bank are not perfect. With visible flaws in a model’s hair, teeth or eyes, some are indeed far from it. But that’s nothing a skilled graphic designer can’t handle. Also, there are multiple images that look nearly perfect – especially when there are no teeth visible in the smile.

Many photos are good enough to provide a stock photo as a “virtual assistant” image or to fulfill any need for a random face. This is an early sign that professional models and photographers will see the impact of AI in their daily work sooner than expected.


5 examples of the versatility of computer vision algorithms and applications

July 25, 2019/in Machine learning /by Konrad Budek

Computer vision enables machines to perform once-unimaginable tasks like diagnosing diabetic retinopathy as accurately as a trained physician or supporting engineers by automating their daily work. 

Recent advances in computer vision are providing data scientists with tools to automate an ever-wider range of tasks. Yet companies sometimes don’t know how best to employ machine learning in their particular niche. The most common problem is understanding how a machine learning model will perform its task differently than a human would.

What is computer vision?

Computer vision is an interdisciplinary field that enables computers to understand, process and analyze images. The algorithms it uses can process both videos and static images. Practitioners strive to deliver a computer version of human sight while reaping the benefits of automation and digitization. Sub-disciplines of computer vision include object recognition, anomaly detection, and image restoration. While modern computer vision systems rely first and foremost on machine learning, there are also trigger-based solutions for performing simple tasks.

The following case studies show computer vision in action.

Diagnosing diabetic retinopathy

Diagnosing diabetic retinopathy usually takes a skilled ophthalmologist. With obesity on the rise globally, so too is the threat of diabetes. As the World Bank indicates, obesity is a threat to world development – among Latin America’s countries only Haiti has an average adult Body Mass Index below 25 (the upper limit of the healthy weight range). With rising obesity comes a higher risk of diabetes – obesity is believed to account for 80-85% of the risk of developing type 2 diabetes. This results in a skyrocketing need for proper diagnostics.

What is the difference between these two images?


The one on the left has no signs of diabetic retinopathy, while the other one has severe signs of it.

By applying algorithms to analyze digital images of the retina, deepsense.ai delivered a system that diagnosed diabetic retinopathy with the accuracy of a trained human expert. The key was in training the model on a large dataset of healthy and non-healthy retinas.

AI movie restoration

The algorithms trained to find the difference between healthy and diseased retinas are equally capable of spotting blemishes on old movies and making the classics shine again.

Recorded on celluloid film, old movies are endangered by two factors – the fading technology needed to play the tapes and the nature of the tape itself, which degrades with age. Moreover, the process of digitizing a movie is no guarantee of flawlessness, as the digitized film often comes with new damage of its own.

However, when trained on two versions of a movie – one with digital noise and one that is perfect – the model learns to spot the disturbances and remove them during the AI movie restoration process.

Digitizing industrial installation documentation

Another example of the push towards digitization comes via industrial installation documentation. Like films, this documentation is riddled with inconsistencies in the symbols used, which can get lost in the myriad of lines and other writing that ends up in the documentation–and must be made sense of by humans. Digitizing industrial documentation that takes a skilled engineer up to ten hours of painstaking work can be reduced to a mere 30 minutes thanks to machine learning.

Building digital maps from satellite images

Despite their seeming similarities, satellite images and fully-functional maps that deliver actionable information are two different things. The differences are never clearer than during a natural disaster such as a flood or hurricane, which can quickly, if temporarily, render maps irrelevant.

deepsense.ai has also used image recognition technology to develop a solution that instantly turns satellite images into maps, replete with roads, buildings, trees and the countless obstacles that emerge during a crisis situation. The model architecture we used to create the maps is similar to those used to diagnose diabetic retinopathy or restore movies.


Aerial image recognition

Computer vision techniques can work as well on aerial images as they do on satellite images. deepsense.ai delivered a computer vision system that supports the US NOAA in recognizing individual North Atlantic Right whales from aerial images.

With only about 411 whales alive, the species is highly endangered, so it is crucial that each individual be recognizable so its well-being can be reliably tracked. Before deepsense.ai delivered its AI-based system, identification was handled manually using a catalog of the whales. Tracking whales from aircraft above the ocean is monumentally difficult as the whales dive and rise to the surface, the telltale patterns on their heads obscured by rough seas and other forces of nature.

Bounding box produced by the head localizer

These obstacles made the process both time-consuming and prone to error. deepsense.ai delivered an aerial image recognition solution that improves identification accuracy and takes a mere 2% of the time the NOAA once spent on manual tracking.

The deepsense.ai takeaway

As the above examples show, computer vision is today an essential component of numerous AI-based solutions. When combined with natural language processing, it can be used to read the ingredients from product labels and automatically sort them into categories. Alongside reinforcement learning, computer vision powers today’s groundbreaking autonomous vehicles. It can also support demand forecasting and function as a part of an end-to-end machine learning manufacturing support system.

The key difference between human vision and computer vision is the domain of knowledge behind data processing. Machines find no difference in the type of image data they process, be it images of retinas, satellite images or documentation – the key is in providing enough training data to allow the model to spot if a given case fits the pattern. The domain is usually irrelevant.


AI movie restoration – Scarlett O’Hara HD

July 18, 2019/in Machine learning /by Konrad Budek

With convolutional neural networks and state-of-the-art image recognition techniques it is possible to make old movie classics shine again. Neural networks polish the image, reduce the noise and apply colors to the aged images. 

The first movies were created in the late nineteenth century with celluloid photographic film used in conjunction with motion picture cameras.

Skip ahead to 2018, when the movie industry was worth $41.7 billion globally. Serving entertainment, cultural and social purposes, films are a hugely important heritage to protect. And that’s not always easy, especially considering that modern movies are produced and screened digitally, with the technology of celluloid tape fading into obsolescence.

Challenges in film preservation

The challenge and importance of preserving the cultural heritage of old movies has been underscored by numerous organizations, including the European Commission, which noted that a lack of proper devices to play aging formats could make it impossible to watch old films.

In deepsense.ai’s experience with restoring film, the first challenge is to remove distortions. Classics are usually recorded in low resolution while the original tapes are obviously aged and filled with noise and cracks. Also, the transition process from celluloid tape to digital format usually damages the material and results in the loss of quality.

By using AI-driven solutions, specifically supervised learning techniques, deepsense.ai’s team removed the cracks and black spots from the digitized version of a film. The model we produced uses deep neural networks trained on a movie with cracks and flaws added manually for training purposes. With films available in both their original and artificially damaged forms, the system learned to remove the flaws. An example of generated noise applied to the Polish classic “Rejs” and the neural network’s output is displayed below.

The example clearly shows that our neural network can process and restore even a thoroughly damaged source material and make it shine again. The networks start to produce low-quality predictions when the images are so darkened and blurred that the human eye can barely recognize people in the film.

How to convert really old movies into HD

A similar training technique was applied to deliver a neural network used to improve the quality of an old movie. The goal was to deliver missing details and “pump up” the resolution from antiquated to HD quality.

The key challenge lay in reproducing the details, which was nearly impossible. Due to technological development, it is difficult for people to watch video of lower quality than what they are used to.

The model was trained by downscaling an HD movie and then conducting supervised training to deliver the missing details.
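
Here is a minimal sketch of how such training pairs can be built, assuming the HD frames already sit in a tensor; the exact architecture and data pipeline used by deepsense.ai are not shown here.

```python
import torch
import torch.nn.functional as F

def make_training_pair(hd_frames: torch.Tensor, factor: int = 4):
    """Create (input, target) pairs for super-resolution training.

    hd_frames: a batch of HD frames shaped (N, C, H, W) with values in [0, 1].
    The low-resolution input is produced by downscaling and upscaling back,
    so the network's target is simply the original HD frame.
    """
    low = F.interpolate(hd_frames, scale_factor=1 / factor, mode="bicubic", align_corners=False)
    degraded = F.interpolate(low, size=hd_frames.shape[-2:], mode="bicubic", align_corners=False)
    return degraded, hd_frames  # the network learns degraded -> original

# Example training step (assuming 'model' is a super-resolution network):
# degraded, target = make_training_pair(batch)
# loss = F.l1_loss(model(degraded), target)
```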


The model performs well thanks to the wide availability of training data. The team could downscale the resolution of any movie, provide the model with the original version and let the neural network learn how to forge and inject the missing detail into the film.

A key misconception about delivering HD versions of old movies is that the neural network discovers missing details from the original. In fact, there is no way to reclaim lost details, because they were never on the originally registered material. The neural network produces them on the fly with the same techniques Thispersondoesnotexist and similar Generative Adversarial Networks use.

So, the source material is enriched with details that only resemble reality, but are in fact not real ones. This can be a challenge (or a problem) if the material is to be used for forensic purposes or detailed research. But when it comes to delivering the movies for entertainment or cultural ends, the technique is more than enough.

Coloring old movies

Another challenge comes with producing color versions of movie classics, technically reviving them for newer audiences. The process was long handled by artists applying color to every frame. The first film colored this way was the British silent movie “The Miracle” (1912).

Because there are countless color movies to draw on, providing a rich training set, a deep neural network can vastly reduce the time required to revive black and white classics. Yet the process is not fully automatic. In fact, putting color on the black and white movie is a titanic undertaking. Consider Disney’s “Tron,” which was shot in black and white and then colored by 200 inkers and painters from Taiwan-based Cuckoo’s Nest Studio.

When choosing colors, a neural network tends to play it safe. An example of how this can be problematic would be when the network misinterprets water as a field of grass. It would do that because it is likely more common for fields than for lakes to appear as a backdrop in a film. 

By manually applying colored pixels to single frames, an artist can suggest what colors the AI model should choose.

There is no way to determine the real color of a scarf or a shirt an actor or actress was wearing when a film rendered in black and white was shot. After all these years, does it even matter? In any case, neural networks employ the LAB color standard, leveraging lightness (L) to predict the two remaining channels (A and B respectively).
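
Here is a minimal sketch of that setup, using scikit-image to work in the LAB color space; `colorizer` is an assumed stand-in for whatever network predicts the A and B channels from L.

```python
import numpy as np
from skimage.color import rgb2lab, lab2rgb

def colorize_frame(gray_L: np.ndarray, colorizer) -> np.ndarray:
    """Colorize a single frame from its lightness channel.

    gray_L: the L channel of a frame, shape (H, W), values in [0, 100].
    colorizer: an assumed model mapping L to predicted A and B channels,
    each shaped (H, W) with values roughly in [-128, 127].
    """
    a_pred, b_pred = colorizer(gray_L)
    lab = np.stack([gray_L, a_pred, b_pred], axis=-1)
    return lab2rgb(lab)  # back to an RGB image in [0, 1]

# Training pairs come straight from existing color footage:
# lab = rgb2lab(rgb_frame); network_input, targets = lab[..., 0], lab[..., 1:]
```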

Transcription and face recognition

Last but not least, transcribing dialogue makes analysis and research much easier – be it for linguistic or cultural studies purposes. With facial recognition software, the solution can attribute all of the lines delivered to the proper characters.

The speech-to-text function processes the sound and transcribes the dialogue while the other network checks which of the people in the video moves his or her lips. When combined with image recognition, the model can both synchronize the subtitles and provide the name of a character or actor speaking.

While the content produced needs to be supervised, the approach still vastly reduces the time required for transcription. Done traditionally, transcription takes at least as long as the recording itself and then needs to be validated. The machine transcribes an hour-long movie in a few seconds.

Summary

Using machine learning-based techniques to restore movies takes less time and effort than other methods. It also makes efforts to preserve cultural heritage more successful and ensures films remain relevant. Machine learning in business gets huge recognition, but ML-based techniques remain a novel way to serve the needs of culture and art. deepsense.ai’s work has proven that AI in art can serve multiple purposes, including promotion and education. Maybe using it in art and culture will be one of 2020’s AI trends.

Reviving and digitizing classics improves the access to and availability of cultural goods and ensures that those works remain available, so future generations will, thanks to AI, enjoy the Academy Award-winning movies of the past as much as, if not more than, we do now.


Outsmarting failure. Predictive maintenance powered by machine learning

November 13, 2018/in Data science, Machine learning /by Konrad Budek

Since the days of the coal-powered industrial revolution, manufacturing has become machine-dependent. As the fourth industrial revolution approaches, factories can harness the power of machine learning to reduce maintenance costs.

The internet of things (IoT) is nothing new for industry. Worldwide, the number of cellular-enabled factory automation devices reached 270 000 in 2012; in 2018 it was expected to rise to a staggering 820 000. Machines are present in every stage of the production process, from assembly to shipment. Although automation makes industry more efficient, with rising complexity it also becomes more vulnerable to breakdowns, as servicing is both time-consuming and expensive.

Four levels of predictive maintenance

According to PricewaterhouseCoopers, there are four levels of predictive maintenance.

1. Visual inspection, where the output is entirely based on the inspector’s knowledge and intuition
2. Instrument inspection, where conclusions are a combination of the specialist’s experience and the instrument’s read-outs
3. Real-time condition monitoring that is based on constant monitoring with IoT and alerts triggered by predefined conditions
4. AI-based predictive analytics, where the analysis is performed by self-learning algorithms that continuously tweak themselves to the changing conditions

As the study indicates, a good number of the companies surveyed by PwC (36%) are now on level 2 while more than a quarter (27%) are on level 1. Only 22% had reached level 3 and 11% level 4, which is basically level 3 on machine learning steroids. The PwC report states that only 3% use no predictive maintenance at all.

Staying on track

According to the PwC data, the rail sector is the most advanced sector of those surveyed with 42% of companies at level 4, compared to 11% overall.

One of the most prominent examples is Infrabel, the state-owned Belgian company, which owns, builds, upgrades and operates a railway network which it makes available to privately-owned transportation companies. The company spends more than a billion euro annually to maintain and develop its infrastructure, which contains over 3 600 kilometers of railway and some 12 000 civil infrastructure works like crossings, bridges, and tunnels. The network is used by 4 200 trains every day, transporting both cargo and passengers.


The company faces both technical and structural challenges. Among them is its aging technical staff, which is shrinking.

At the same time, the density of railroad traffic is increasing – the number of daily passengers has increased by 50% since 2000, reaching 800 000. What’s more, the growing popularity of high-speed trains is exerting ever greater tension on the rails and other infrastructure.

To face these challenges, the company has implemented monitoring tools, such as sensors for monitoring overheating tracks, cameras which inspect the pantographs and meters to detect drifts in power consumption, which usually occur before mechanical failures in switches. All of the data is collected and analyzed by a single tool designed to apply predictive maintenance. Machine learning models are a component of that tool.

As sounding brass

Mueller Industries (Memphis, Tennessee) is a global manufacturer and distributor of copper, brass, aluminum and plastic products. The predictive maintenance solution the company uses is based on sound analysis. Every machine can be characterized by the sound it makes, and any change in that sound may be a sign of impending malfunction. The analysis of the machine’s sound and vibrations is done in real time with a cloud-based machine learning solution that seeks patterns in the gathered data.

Both the amount and the nature of the data collected make it impossible for a human to analyze, but a machine learning-powered AI solution handles it with ease. The devices gather data from ultrasonic and vibration sensors and analyze it in real time. In contrast to experience-based analytics, using the devices requires little to no training and can be done on the go.
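
The sketch below illustrates the general mechanism behind sound- and vibration-based monitoring: transform the signal into the frequency domain and watch for energy appearing where a healthy machine has none. The sampling rate, frequencies and alert threshold are invented for illustration and are not Mueller Industries’ actual values.

```python
# Minimal sketch: monitoring a machine's vibration spectrum for new high-frequency components.
# Sampling rate, frequencies and threshold are illustrative assumptions.
import numpy as np

FS = 10_000                      # sampling rate in Hz (assumed)
t = np.arange(0, 1.0, 1 / FS)    # one second of signal

def band_energy_ratio(signal: np.ndarray, fs: int, low_hz: float = 1000.0) -> float:
    """Share of vibration energy above low_hz; it rises when new high-frequency components appear."""
    power = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(signal.size, d=1 / fs)
    return power[freqs >= low_hz].sum() / power.sum()

# Healthy machine: a dominant 120 Hz hum plus mild noise
healthy = np.sin(2 * np.pi * 120 * t) + 0.1 * np.random.randn(t.size)

# Degrading machine: an extra 1.8 kHz component, e.g. a worn bearing
degraded = healthy + 0.4 * np.sin(2 * np.pi * 1800 * t)

ALERT_THRESHOLD = 0.05  # assumed; in practice tuned on labelled failure recordings
for name, signal in [("healthy", healthy), ("degraded", degraded)]:
    ratio = band_energy_ratio(signal, FS)
    status = "ALERT - schedule inspection" if ratio > ALERT_THRESHOLD else "OK"
    print(f"{name}: high-frequency energy share {ratio:.3f} -> {status}")
```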

Endless possibilities

With the power of machine learning enlisted, handling the tremendous amounts of data generated by the sensors in modern factories becomes a much easier task. It allows a company to detect failures before they paralyze production, saving time and money. What’s more, the gathered data can be used to further optimize performance, for example by searching for bottlenecks and managing workflows.

That’s why 98% of industrial companies expect to increase efficiency with digital technologies.

A comprehensive guide to demand forecasting

May 28, 2019/in Data science, Machine learning, Popular posts /by Konrad Budek and Piotr Tarasiewicz

Everything you need to know about demand forecasting – from the purpose and techniques to the goals and pitfalls to avoid.

Essential since the dawn of commerce and business, demand forecasting enters a new era of big-data rocket fuel.

What is demand forecasting?

The term couldn’t be clearer: demand forecasting forecasts demand. The process involves analyzing historical data to estimate the future demand for a product. An accurate forecast can bring significant improvements to supply chain management, profit margins, cash flow and risk assessment.

What is the purpose of demand forecasting?

Demand forecasting is done to optimize processes, reduce costs and avoid losses caused by freezing up cash in stock or being unable to process orders due to being out of stock. In an ideal world, the company would be able to satisfy demand without overstocking.

Demand forecasting techniques

Demand forecasting is an essential component of every form of commerce, be it retail, wholesale, online, offline or multichannel. It has been present since the very dawn of civilization when intuition and experience were used to forecast demand.

More recent techniques combine intuition with historical data. Modern merchants can dig into their data in search of trends and patterns. At the pinnacle of these techniques are machine learning models for demand forecasting, including gradient boosting and neural networks, which are currently the most popular and outperform classic statistics-based methods.

The basis of more recent demand forecasting techniques is historical transaction data. Sellers already collect and store this data for fiscal and legal reasons, and because it is also searchable, it is the easiest data to use.
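
As an illustration of what such a model can look like in practice, here is a hedged sketch that turns a synthetic table of daily sales into lag features and fits a gradient boosting model with scikit-learn. The column names, lags and hold-out window are arbitrary choices, not a recommended recipe.

```python
# Minimal sketch: demand forecasting from historical transactions with gradient boosting.
# The data, column names and lags are hypothetical.
import numpy as np
import pandas as pd
from sklearn.ensemble import HistGradientBoostingRegressor

# Hypothetical daily sales history for one product (weekly seasonality plus noise)
days = pd.date_range("2023-01-01", periods=400, freq="D")
rng = np.random.default_rng(42)
sales = 50 + 10 * np.sin(2 * np.pi * days.dayofweek / 7) + rng.normal(0, 3, len(days))
df = pd.DataFrame({"date": days, "units_sold": sales})

# Lag features: what was sold 1, 7 and 14 days ago, plus the day of the week
for lag in (1, 7, 14):
    df[f"lag_{lag}"] = df["units_sold"].shift(lag)
df["day_of_week"] = df["date"].dt.dayofweek
df = df.dropna()

features = ["lag_1", "lag_7", "lag_14", "day_of_week"]
train, test = df.iloc[:-28], df.iloc[-28:]          # keep the last 4 weeks for evaluation

model = HistGradientBoostingRegressor(random_state=0)
model.fit(train[features], train["units_sold"])
pred = model.predict(test[features])

mape = np.mean(np.abs(pred - test["units_sold"]) / test["units_sold"]) * 100
print(f"MAPE on the last 28 days: {mape:.1f}%")
```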

How to choose the right demand forecasting method – indicators

As always, selecting the right technique depends on various factors, including:

  • The scale of operations – the larger the scale, the more challenging processing the data becomes.
  • The organization’s readiness – even large companies can operate (efficiency aside) on fragmented and messy databases, so the technological and organizational readiness to apply more sophisticated demand forecasting techniques is another challenge.
  • The product – it is easier to forecast demand for an existing product than for a newly introduced one. When considering the latter, it is crucial to form a set of assumptions to work from. Gathering as much information about the product as possible is the first step, as it allows the company to spot similarities between particular goods and search for correlations in buying patterns. Spotting an accessory that is frequently bought along with the main product is one example.

How demand forecasting can help a business

Demand forecasting, and the sales forecasting that follows from it, is crucial to shaping a company’s logistics policy and preparing it for the immediate future. Among the main advantages of demand forecasting are:

  • Loss reduction – any demand that was not fulfilled should be considered a loss. Moreover, the company freezes its cash in stock, thus reducing liquidity.
  • Supply chain optimization – behind every shop there is an elaborate logistics chain that generates costs and needs to be managed. The bigger the organization, the more sophisticated and complicated its inventory management must be. When demand is forecast precisely, managing and estimating costs is easier.
  • Increased customer satisfaction – there is no bigger disappointment for consumers than going to the store to buy something only to return empty-handed. For a business, the worst-case scenario is for said consumers to swing over to the competition to make their purchase there. Companies reduce the risk of running out of stock–and losing customers–by making more accurate predictions.
  • Smarter workforce management – hiring temporary staff to support a demand peak is a smart way for a business to ensure it is delivering a proper level of service.
  • Better marketing and sales management – depending on the upcoming demand for particular goods, sales and marketing teams can shift their efforts to support cross- and upselling of complementary products.
  • Supporting expert knowledge – models can be designed to build predictions for every single product, regardless of how many there are. In small businesses, humans handle all predictions, but when the scale of the business and the number of goods rise, this becomes impossible. Machine learning models are proficient at processing big data and can take over where human capacity ends.

How to start demand forecasting – a short guide

Building a demand forecasting tool or solution requires, first and foremost, data to be gathered.

While the data will eventually need to be organized, simply procuring it is a good first step: it is easier to structure and organize existing data and make it actionable than to collect enough data quickly. The situation is much easier when the company already uses an ERP or CRM system, or some other form of automation, in its daily work. Such systems can significantly ease the data gathering process and automate the structuring.

Sybilla – deepsense.ai’s demand forecasting tool

The next step is building testing scenarios that allow the company to test various approaches and their impact on business efficiency. The first solution is usually a simple one and serves as a benchmark for the solutions to come. Each subsequent iteration should be tested to see if it performs better than the previous one.
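
In practice this often means keeping a naive benchmark around permanently. The snippet below, which reuses the hypothetical sales table and model from the earlier sketch, scores a “same as last week” rule next to the gradient boosting model, so every iteration has to prove it actually adds value.

```python
# Minimal sketch: every model iteration should beat a naive benchmark on the same hold-out period.
# Reuses the hypothetical `test` frame and `pred` array from the earlier sketch.
import numpy as np

def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    return np.mean(np.abs(forecast - actual) / actual) * 100

naive = test["lag_7"]  # "we will sell what we sold a week ago"
print(f"naive seasonal baseline : {mape(test['units_sold'], naive):.1f}% MAPE")
print(f"gradient boosting model : {mape(test['units_sold'], pred):.1f}% MAPE")
# A new iteration only counts as an improvement if it beats both the baseline and the previous model.
```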

Historical data is usually all one needs to launch a demand forecasting project, and data about the future is, naturally, far scarcer. But sometimes it is available, for example:

  • Short-term weather forecasts – the information about upcoming shifts in weather can be crucial in many businesses, including HoReCa and retail. It is quite intuitive to cross-sell sunglasses or ice cream on sunny days.
  • The calendar – Black Friday is a day like no other. The same goes for the upcoming holiday season or other events that are tied to a given date.

Sources of data that originate from outside the company make predictions even more accurate and provide better support for making business decisions.
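
Technically, plugging such external signals in is usually just a matter of joining them onto the feature table before training. The sketch below is illustrative only – the holiday dates and the weather frame are invented – and reuses the hypothetical sales table from the earlier example.

```python
# Minimal sketch: enriching the feature table with calendar and weather information.
# The holiday dates and the weather frame are invented; `df` is the sales table from the earlier sketch.
import pandas as pd

holidays = pd.to_datetime(["2023-11-24", "2023-12-25"])       # e.g. Black Friday, Christmas
weather_forecast = pd.DataFrame({
    "date": pd.date_range("2023-11-20", periods=14, freq="D"),
    "max_temp_c": 10.0,
    "rain_mm": 0.0,
})

enriched = df.merge(weather_forecast, on="date", how="left")  # weather known only for the forecast window
enriched["is_holiday"] = enriched["date"].isin(holidays).astype(int)
enriched["day_of_week"] = enriched["date"].dt.dayofweek
# These columns are then passed to the model alongside the lag features.
```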

Common pitfalls to avoid when building a demand forecasting solution

There are numerous pitfalls to avoid when building a demand forecasting solution. The most common of them include:

  • Data disconnected from marketing and ad history – a successful promotion results in a significant change in the data, so having information about why it was a success makes predictions more accurate. Without it, a machine learning model may misattribute the change and make false predictions based on wrong assumptions.
  • New products with no history – when new products are introduced, demand must still be estimated, but without the help of historical data. The good news here is that great strides have been made in this area, and techniques such as product DNA can help a company uncover similar products in its past or current portfolio. Having data on similar products can boost the accuracy of predictions for new ones.
  • The inability to predict the weather – weather drives demand in numerous contexts and product areas and can sometimes be even more important than the price of the product itself (yes, classical economists would be very upset). The good news is that even if you are unable to predict the weather, you can still use it in your model to explain historical variations in demand.
  • Lacking information about changes – in an effort to support both short- and long-term goals, companies constantly change their offering and websites. When information about those changes is not annotated in the data, the model encounters sudden dips and shifts in demand with no apparent reason. In reality, it is usually a minor issue like a change in the inventory or the removal of a section from the website.
  • Inconsistent portfolio information – predictions can only be made if the data set is consistent. If any of the goods in a portfolio have undergone a name or ID change, it must be noted in order not to confuse the system or miss out on a valuable insight.
  • Overfitting the model – a vicious problem in data science. A model becomes so good at fitting the training dataset that it loses flexibility and produces worse predictions when new data is delivered. Avoiding overfitting is down to the data scientists; see the sketch after this list.
  • Inflexible logistics chain – the more flexible the logistics process is, the more value the company can extract from accurate predictions. Even the best demand forecasting model is useless when the company’s logistics is a fixed process that allows no space for changes.
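
One standard guard against overfitting in forecasting, sketched below, is to validate on chronologically later data rather than a random sample, so the model is always judged on a “future” it has never seen. The snippet reuses the hypothetical sales table and feature columns from the earlier sketches.

```python
# Minimal sketch: time-ordered cross-validation to catch overfitting in a forecasting model.
# Random splits leak future information into training; TimeSeriesSplit keeps each validation fold strictly later.
# Reuses the hypothetical `df` and feature columns from the earlier sketches.
from sklearn.ensemble import HistGradientBoostingRegressor
from sklearn.model_selection import TimeSeriesSplit, cross_val_score

features = ["lag_1", "lag_7", "lag_14", "day_of_week"]
tscv = TimeSeriesSplit(n_splits=5)   # each fold trains on the past and validates on what follows

scores = cross_val_score(
    HistGradientBoostingRegressor(random_state=0),
    df[features], df["units_sold"],
    cv=tscv,
    scoring="neg_mean_absolute_percentage_error",
)
print("MAPE per fold:", (-scores * 100).round(1))
# A large gap between training error and these fold errors is the classic sign of overfitting.
```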

Summary

Demand and sales forecasting is a crucial part of any business. Traditionally it has been done by experts, based on know-how honed through experience. With the power of machine learning it is now possible to combine the astonishing scale of big data with the precision and cunning of a machine-learning model. While the business community must remain aware of the multiple pitfalls it will face when employing machine learning to predict demand, there is no doubt that it will endow demand forecasting with awesome power and flexibility.

AI Monthly Digest #8 – new AI applications for music and gaming

May 9, 2019/in Machine learning, AI Monthly Digest /by Konrad Budek and Arkadiusz Nowaczynski

The April edition of AI Monthly Digest looks at how AI is used in entertainment, for both research and commercial purposes.

After its recent shift from non-profit to for-profit, OpenAI continues to build a significant presence in the world of AI research. It is involved in two of five stories chosen as April’s most significant.

AI Music – spot the discord…

While machine learning algorithms are getting ever better at delivering convincing text or achieving superior accuracy in image recognition, machines still struggle to understand the complicated patterns behind music. In its most basic form, music is built upon repetitive motifs that recur across sections of various lengths – a motif may be a recurring part of one song or the leading theme of an entire movie, opera or computer game.

Machine learning-driven composition is comparable to natural language processing – the short passages come out well, but the computer gets lost when it has to keep longer ones coherent. April brought us two interesting stories about different approaches to ML-driven composition.

OpenAI developed MuseNet, a neural network that produces music in a few different styles. Machine learning algorithms were used to analyze the style of various classical composers, including Chopin, Bach, Beethoven and Rachmaninoff. The model was further fed rock songs by Queen, Green Day and Nine Inch Nails and pop music by Madonna, Adele and Ricky Martin, to name a few. The model learned to mimic the style of a particular artist and infuse it with twists. If the user wants to spice up the Moonlight Sonata with a drum, the road is open.

OpenAI has rolled out an early version of the model and it performs better when the user is trying to produce a consistent piece of music, rather than pair up a disparate coupling of Chopin and Nine Inch Nails-style synthesizers.

OpenAI claims that music is a great tool with which to evaluate a model’s ability to maintain long-term consistency, mainly thanks to how easy it is to spot discord.

…or embrace it

While OpenAI embraces harmony in music, Dadabots has taken the opposite tack. Developed by CJ Carr and Zack Zukowski, the Dadabots model imitates rock, and particularly metal, bands. The team has put their model on YouTube to deliver technical death metal as an endless live stream – the Relentless Doppelganger.

While it is increasingly common to find AI-generated music on Bandcamp, putting a 24/7 death metal stream on YouTube is undoubtedly something new.

Fans of the AI-composed death metal have given the music rave reviews. As The Verge notes, the creation is “Perfectly imperfect” thanks to its blending of various death metal styles, transforming vocals into a choir and delivering sudden style-switching.

It appears that bare-metal has ushered in a new era in technical death metal.

Why does it matter?

Researchers behind the Relentless Doppelganger remark that music-making AI has mainly been developed on classical music, which is heavily reliant on harmony, while death metal, among other genres, embraces the power of chaos. It stands to reason, then, that the generated music is not perfect when it comes to delivering harmony. The effect is actually more consistent with the genre’s overall sound. What’s more, Dadabots’ model delivers not only instrumentals but also vocals, which would be unthinkable with classical music. Of course, the special style of metal singing called growling makes most of the lyrics incomprehensible, so little to no sense is actually required here.

From a scientific point of view, OpenAI delivers much more significant work. But AI is working its way into all human activity, including politics, social problems, policy and art. From an artistic point of view, AI-produced technical death metal is interesting.

It appears that when it comes to music, AI likes it brutal.

AI in gaming goes mainstream

Game development has a long and uneasy tradition of delivering computer players to allow users to play in single-player mode. There are many forms of non-ML-based AI present in video games. They are usually based on a set of triggers that initiate a particular action the computer player takes. What’s more, modern, story-driven games rely heavily on scripted events like ambushes or sudden plot twists.

This type of AI delivers an enjoyable level of challenge but lacks the versatility and viciousness of human players, who come up with surprising strategies to deal with. Also, the goal of AI in single-player mode is not to dominate the human player in every way possible.

The real challenge comes from developing bots – computer-controlled players – that deliver a multiplayer-like experience in single-player mode. Usually, computer players differ significantly from their human counterparts, and any transfer from single-player to multiplayer ends in shock and an instant knock-out at the hands of experienced players.

To deliver bots that behave in a more human way yet provide a bigger challenge, Milestone, the company behind MotoGP 19, turned to reinforcement learning to build computer players to race against human counterparts. The artificial intelligence controlling opponents is codenamed A.N.N.A. (Artificial Neural Network Agent).

A.N.N.A. is a neural network-based AI that is not scripted directly but created through reinforcement learning. This means developers describe an agent’s desired behaviour and then train a neural network to achieve it. Agents created this way show more skilled and realistic behaviors, which are high on the wish list of MotoGP gamers.
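
The mechanics behind “describe the behaviour, then let the agent learn it” can be shown on a toy example. The sketch below uses tabular Q-learning on an invented one-dimensional “track” – far simpler than the neural-network approach A.N.N.A. is based on, and with made-up rewards – but the principle is the same: nothing is scripted, and the desired behaviour is only implied by the reward.

```python
# Minimal, self-contained sketch of the core reinforcement learning idea:
# the desired behaviour is encoded as a reward, and the agent learns a policy that maximises it.
# The toy "track", rewards and hyperparameters are invented; Milestone's actual setup is not public.
import random
from collections import defaultdict

TRACK_LENGTH, CORNER = 10, 5           # taking the corner at full speed means a crash
ACTIONS = [1, 2]                        # cruise or full throttle (positions advanced per step)

def step(position, action):
    """Toy environment dynamics: return (new_position, reward, done)."""
    if action == 2 and position < CORNER <= position + action:
        return position, -100.0, True                   # took the corner at full speed: crash
    position += action
    if position >= TRACK_LENGTH:
        return position, 50.0, True                     # finished the lap
    return position, float(action), False               # small reward for covering ground quickly

q = defaultdict(float)                                   # Q-table: (position, action) -> value
alpha, gamma, epsilon = 0.1, 0.99, 0.1

for _ in range(5000):                                    # training episodes
    pos, done = 0, False
    while not done:
        if random.random() < epsilon:
            action = random.choice(ACTIONS)                          # explore
        else:
            action = max(ACTIONS, key=lambda a: q[(pos, a)])         # exploit current knowledge
        new_pos, reward, done = step(pos, action)
        best_next = 0.0 if done else max(q[(new_pos, a)] for a in ACTIONS)
        q[(pos, action)] += alpha * (reward + gamma * best_next - q[(pos, action)])
        pos = new_pos

policy = {p: max(ACTIONS, key=lambda a: q[(p, a)]) for p in range(TRACK_LENGTH)}
print(policy)  # the agent typically learns to slow down for the corner without ever being told to
```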

Why does it matter?

Applying ML-based artificial intelligence in a mainstream game is the first step in delivering a more realistic and immersive game experience. Making computer players more human in their playing style makes them less exploitable and more flexible.

The game itself is an interesting example. In RL-related research it is common to apply this paradigm to strategy games, be it chess, Go or StarCraft II. In this case, the neural network controls a digital motorcycle. Racing provides a closed game environment with a limited number of variables to control, which makes the virtual racetrack a perfect place to deploy ML-based solutions.

In the end, it isn’t the technology but rather gamers’ experience that is key. Will reinforcement learning bring a new paradigm of embedding AI in games? We’ll see once gamers react.

Bittersweet lessons from OpenAI Five

Defense of the Ancients 2 (DOTA 2) is a highly popular multiplayer online battle arena game in which two teams, each consisting of five players, fight for control over a map. The game blends tactical, strategic and action elements and is one of the most popular esports titles.

OpenAI Five is the neural network that plays DOTA 2, developed by OpenAI.

The AI agent beat world champions from Team OG during the OpenAI Five Finals on April 13th. It was the first time an AI-controlled player had beaten a professional team during a live stream.

Why does it matter?

Although the project seems similar to Deepmind’s AlphaStar, there are several significant differences:

  • The model was trained continuously for almost a year instead of starting from zero knowledge for each new experiment – the common way of developing machine learning models is to design the entire training procedure upfront, launch it and observe the result. Every time a novel idea is proposed, the learning algorithm is modified accordingly and a new experiment is launched starting from scratch to get a fair comparison between various concepts. In this case, researchers decided not to run training from scratch, but to integrate ideas and changes into the already trained model, sometimes doing elaborate surgery on their artificial neural network. Moreover, the game received a number of updates during the training process. Thus, the model was forced at some points not to learn a new fact, but to update its knowledge. And it managed to do so. The approach enabled the team to massively reduce the computing power over the amount it had invested in training previous iterations of the model.
  • The model effectively cooperated with human players – the model was made publicly available as a player, so users could play with it. Despite being trained without human interaction, it was effective both as an ally and as a foe, clearly showing that AI is a potent tool to support humans in performing their tasks — even when that task is slaying an enemy champion.
  • The research was somewhat of a failure – the model performs well, even though building it was not the actual goal. The project was launched to break a previously unbroken game by testing and looking for new approaches. The best results were achieved by providing more computing power and scaling up the neural network. Despite delivering impressive results for OpenAI, the project did not lead to the expected breakthroughs, and the company has hinted that it could be discontinued in its present format. A bitter lesson indeed.

Blurred computer vision

Computer vision techniques deliver astonishing results. They have sped up the diagnosing of diabetic retinopathy, built maps from satellite images and recognized particular whales from aerial photography. Well-trained models often outperform human experts. Given that they don’t get tired and never lose their focus, why shouldn’t they?

But there remains room for improvement in machine vision, as researchers from KU Leuven in Belgium report. They produced an image that fooled an algorithm, rendering a person holding a card with the printed image virtually invisible to a machine learning-based detector.
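
The KU Leuven work used a printable adversarial patch against a person detector; the minimal sketch below illustrates the underlying mechanism with a simpler, well-known technique – the fast gradient sign method – applied to an arbitrary pretrained classifier. The random stand-in image and the torchvision ≥ 0.13 weights API are assumptions; this shows the idea, not the paper’s method.

```python
# Minimal sketch of an adversarial example (FGSM-style), not the KU Leuven patch attack.
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)    # stand-in for a real, preprocessed photo
with torch.no_grad():
    label = model(image).argmax(dim=1)                     # whatever the model currently "sees"

# The gradient of the loss w.r.t. the pixels tells us how to nudge the image to hurt the model
loss = torch.nn.functional.cross_entropy(model(image), label)
loss.backward()

epsilon = 0.03                                             # perturbation budget, visually negligible
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

with torch.no_grad():
    print("before:", model(image).argmax(dim=1).item(),
          "after:", model(adversarial).argmax(dim=1).item())
# The predicted class typically changes even though the two images look identical to a human.
```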

Why does it matter?

As readers of William Gibson’s novel Zero History will attest, images devised to fool AI are nothing new. Delivering a printable image that confounds an algorithm highlights the serious interest malicious actors have in interfering with AI.

Examples may include images produced to fool AI-powered medical diagnostic devices for fraudulent reasons or sabotaging road infrastructure to render it useless for autonomous vehicles.

AI should not be considered a black box and algorithms are not unbreakable. As always, reminders of that are welcome, especially as responsibility and transparency are among the most significant AI trends for 2019.
