An internal validation leaderboard in Neptune

January 19, 2017 / in Data science, Deep learning, Machine learning, Neptune / by Patryk Miziuła

Internal validation is a useful tool for comparing the results of experiments performed by team members in any business or research task. It can also be a valuable complement to the public leaderboards of machine learning competitions on platforms like Kaggle.

In this post, we present how to build an internal validation leaderboard using Python scripts and the Neptune environment. As an example use case, we take the well-known classification dataset CIFAR-10 and study it using the deep convolutional neural network provided in the TensorFlow tutorial.

Why an internal leaderboard?

Whenever we solve the same problem in many ways, we want to know which way is the best. Therefore we validate, compare and order the solutions. In this way, we naturally create the ranking of our solutions – the leaderboard.
We usually care about the privacy of our work. We want to keep the techniques used and the results of our experiments confidential. Hence, our validation should remain undisclosed as well – it should be internal.
If we keep improving the models and produce new solutions at a fast pace, at some point we are no longer able to manage the internal validation leaderboard manually. Then we need a tool which will do that for us automatically and will present the results to us in a readable form.

Business and research projects

In any business or research project you are probably interested in the productivity of team members. You would like to know who submitted a solution to the problem and when, what kind of model they used and how good the solution is.
A good internal leaderboard stores all that information. It also allows you to search for submissions sent by a specific user, within a given time window, or using a particular model. Finally, you can sort the submissions with respect to the accuracy metric to find the best one.

Machine learning competitions

The popular machine learning platform, Kaggle, offers a readable public leaderboard for every competition. Each contestant can follow their position in the ranking and submit improved solutions several times a day.
However, an internal validation would be very useful for every competing team. A good internal leaderboard has many advantages over a public one:

  • the results remain exclusive,
  • there is no limit on the number of daily submissions,
  • metrics other than those chosen by the competition organizers can be evaluated as well,
  • the submissions can be tagged, for example to indicate the used model.

Note that in official competitions the ground truth labels for the test data are not provided. Hence, to produce the internal validation we are forced to split the available public training data. One part is used to tune the model, the other is needed to evaluate it internally. This division can be a source of unexpected problems (e.g., data leaks), so perform it carefully!
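One straightforward way to make such a split (an illustration using scikit-learn, not part of the original competition tooling) is a stratified hold-out:

import numpy as np
from sklearn.model_selection import train_test_split

# X: images and y: labels of the public training set (dummy data here, just for shape).
X = np.random.rand(1000, 32, 32, 3)
y = np.random.randint(0, 10, size=1000)

# Hold out 20% of the labeled training data for internal validation.
# Stratifying on y keeps the class proportions identical in both parts,
# which helps avoid accidental distribution shifts between them.
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)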

Why Neptune?

Neptune was designed to manage multiple experiments. Among many features, it supports storing parameters, logs and metric values from various experiment executions. The results are accessible through an aesthetic Web UI.
In Neptune you can:

  • gather experiments from various projects in groups,
  • add tags to experiments and filter by them,
  • sort experiments by users, date of creation, or – most importantly for us – by metric values.

This makes Neptune a handy tool for creating an internal validation leaderboard for your team.

[Figure: Tracking a TensorFlow experiment in Neptune]

Let’s do it!

Let’s build an exemplary internal validation leaderboard in Neptune.

CIFAR-10 dataset

We use the well-known classification dataset CIFAR-10. Every image in this dataset is a member of one of 10 classes, labeled by numbers from 0 to 9. Using the training data we build a model which predicts the labels of the images from the test data. CIFAR-10 is designed for educational purposes, therefore the ground truth labels for the test data are provided.
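If you just want to inspect the data before diving into the tutorial code, one convenient way (an aside, independent of the TensorFlow pipeline used below) is the loader bundled with Keras:

from keras.datasets import cifar10

# Downloads the dataset on first use: 50,000 training and 10,000 test images,
# each a 32x32 RGB image with an integer label from 0 to 9.
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
print(x_train.shape, y_train.shape)  # (50000, 32, 32, 3) (50000, 1)
print(x_test.shape, y_test.shape)    # (10000, 32, 32, 3) (10000, 1)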

Evaluating functions

Let’s fix the notation:

  • \(N\) – number of images we have to classify.
  • \(c_i\) – class to which the \(i\)th image belongs; \(i\in\{0,\ldots,N-1\}\), \(c_i\in\{0,\ldots,9\}\).
  • \(p_{ij}\) – estimated probability that the \(i\)th image belongs to the class \(j\); \(i\in\{0,\ldots,N-1\}\), \(j\in\{0,\ldots,9\}\), \(p_{ij}\in[0,1]\).

We evaluate our submission with two metrics. The first metric is the classification accuracy, given by
\(\frac{1}{N}\sum_{i=0}^{N-1}\mathbb{1}\Big(\operatorname{argmax}_j\,p_{ij}=c_i\Big)\)
This is the fraction of labels that are predicted correctly. We would like to maximize it; the optimal value is 1. The second metric is the average cross entropy, given by
\(-\frac{1}{N}\sum_{i=0}^{N-1}\log p_{i\,c_i}\)
This formula is simpler than the general cross entropy because every image belongs to exactly one class. We would like to minimize it, preferably to 0.
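For reference, here is a minimal NumPy sketch of both metrics; submission is the \(N\times 10\) array of estimated probabilities and true_labels the \(N\times 1\) array of ground truth classes (the same shapes as in the implementation described below), and the eps clipping is our addition to keep the logarithm finite:

import numpy as np

def accuracy(submission, true_labels):
    # Fraction of images whose most probable predicted class matches the ground truth.
    predicted = np.argmax(submission, axis=1)
    return float(np.mean(predicted == true_labels.ravel()))

def cross_entropy(submission, true_labels, eps=1e-15):
    # Average negative log-probability assigned to the true class.
    n = submission.shape[0]
    true_class_probs = submission[np.arange(n), true_labels.ravel().astype(int)]
    return float(-np.mean(np.log(np.clip(true_class_probs, eps, 1.0))))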

Implementation details

Prerequisites

To run the code we provide you need the following software:

  • Neptune: apply for our Early Adopters Program or try it immediately with Neptune Go,
  • TensorFlow 1.0.

Repository

The code we use is based on that available in the TensorFlow convolutional neural networks tutorial. You can download our code from our GitHub repository. It consists of the following files:

  • main.py – the script to execute,
  • cifar10_submission.py – computes a submission for a CIFAR-10 model,
  • evaluation.py – contains the functions required to create the leaderboard in Neptune,
  • config.yaml – the Neptune configuration file.

Description

When you run main.py, you first train a neural network using the function cifar10_train provided by TensorFlow. We hard-coded the number of training steps; this could be made dynamic using a Neptune action, but for the sake of brevity we skip this topic in this post. Thanks to the TensorFlow integration you can track the tuning of the network in Neptune. Moreover, the parameters of the tuned network are stored in a file manageable by TensorFlow saver objects.
Then the function cifar10_submission is called. It restores the parameters of the network from the file created by cifar10_train. Next, it forward-propagates the images from the test set through the network to obtain a submission. The submission is stored as a NumPy array submission of shape \(N\times 10\); the \(i\)th row contains the estimated probabilities \(p_{i0},\ldots,p_{i9}\). The ground truth labels form a NumPy array true_labels of shape \(N\times 1\); the \(i\)th row contains the label \(c_i\).
Finally, for the given submission and true_labels arrays, the function evaluate_and_send_to_neptune from evaluation.py computes the metric values and sends them to Neptune.
The file config.yaml is a Neptune job configuration file, essential for running Neptune jobs. Please download all the files and place them in the same folder.

Step by step

We create a validation leaderboard in Neptune in 4 easy steps:

  1. Creating a Neptune group
  2. Creating an evaluation module
  3. Sending submissions to Neptune
  4. Customizing a view in Neptune’s Web UI

1. Creating a Neptune group

We create the Neptune group where all the submissions will be stored. We do this as follows:

  1. Enter the Neptune home screen.
  2. Click "+" in the lower left corner, enter the name “CIFAR-10 leaderboard”, click "+" again.
  3. Choose “project” “is” and type “CIFAR-10”, click “Apply”.

Our new group appears in the left column. We can edit or delete it by clicking the wrench icon next to the group name.


2. Creating an evaluation module

We created the module evaluation.py consisting of 5 functions:

  1. _evaluate_accuracy and _evaluate_cross_entropy compute the respective metrics,
  2. _prepare_neptune adds tags to the Neptune job (if specified – see Step 4) and creates Neptune channels for sending the evaluated metrics,
  3. _send_to_neptune sends metrics to channels,
  4. evaluate_and_send_to_neptune calls the above functions.

You can easily adapt this script to evaluate and send any other metrics.
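The actual implementation is in the repository linked above; the sketch below only illustrates how such a module can be structured, using Neptune's channel API (create_channel and send, as in the Sine-Cosine Generator example further down this page). The tag handling in _prepare_neptune is reduced to a comment, since its exact API is not covered here.

# evaluation.py -- a simplified sketch only; see the repository for the real module.
import numpy as np
from deepsense import neptune

def _evaluate_accuracy(submission, true_labels):
    return float(np.mean(np.argmax(submission, axis=1) == true_labels.ravel()))

def _evaluate_cross_entropy(submission, true_labels, eps=1e-15):
    probs = submission[np.arange(submission.shape[0]), true_labels.ravel().astype(int)]
    return float(-np.mean(np.log(np.clip(probs, eps, 1.0))))

def _prepare_neptune(ctx, tags):
    # Tag handling omitted in this sketch -- the real module attaches the given tags to the job.
    accuracy_channel = ctx.job.create_channel(
        name='accuracy', channel_type=neptune.ChannelType.NUMERIC)
    cross_entropy_channel = ctx.job.create_channel(
        name='cross entropy', channel_type=neptune.ChannelType.NUMERIC)
    return accuracy_channel, cross_entropy_channel

def _send_to_neptune(accuracy_channel, cross_entropy_channel, accuracy, cross_entropy):
    accuracy_channel.send(x=0, y=accuracy)
    cross_entropy_channel.send(x=0, y=cross_entropy)

def evaluate_and_send_to_neptune(submission, true_labels, ctx, tags=None):
    accuracy = _evaluate_accuracy(submission, true_labels)
    cross_entropy = _evaluate_cross_entropy(submission, true_labels)
    accuracy_channel, cross_entropy_channel = _prepare_neptune(ctx, tags)
    _send_to_neptune(accuracy_channel, cross_entropy_channel, accuracy, cross_entropy)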

3. Sending submissions to Neptune

To place our submissions in the Neptune group, we need to specify project: CIFAR-10 in the Neptune config file config.yaml. This is a three-line file; besides the project, it contains the job name and a description.
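The exact contents are in the repository; a plausible minimal version (the name and description values below are just placeholders) could look like this:

name: CIFAR-10 submission
project: CIFAR-10
description: Submission for the internal CIFAR-10 validation leaderboard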
Assume that the files from our repository are placed in a folder named leaderboard. The last preparation step is to clone the CIFAR-10 scripts from the TensorFlow repository. To do so, we go to the folder above the leaderboard folder and type:

git clone https://github.com/tensorflow/models/
export PYTHONPATH="$PWD/models/tutorials/image/cifar10:$PYTHONPATH"

Now we are ready to send our results to the leaderboard created in Neptune! We run the script main.py from the folder above the leaderboard folder by typing

neptune run leaderboard/main.py --config leaderboard/config.yaml --dump-dir-url leaderboard/dump --paths-to-dump leaderboard

using the Neptune CLI. The script executes for about half an hour on a modern laptop. Training would be significantly faster on a GPU.
There are only 5 lines related to Neptune in the main.py script. First we load the library:

from deepsense import neptune

Then we initialize a Neptune context:

ctx = neptune.Context()

Next, command

ctx.integrate_with_tensorflow()

automatically creates and manages Neptune channels related to TensorFlow SummaryWriter objects. Thereby, we can observe the progress of our network in the Neptune Dashboard. Finally, in lines

tags = ["tensorflow", "tutorial"]
evaluation.evaluate_and_send_to_neptune(submission, true_labels, ctx, tags)

we evaluate our submission and send the metric values to dedicated Neptune channels. tags is a list of tags added to the Neptune job; in this way we attach keywords to the job, and we can easily filter jobs by tags in the Neptune Web UI.

4. Customizing a view in Neptune’s Web UI

If the job has been successfully executed, we can see our submission in the Neptune group we created. One more thing worth doing is setting up the view of columns.

  1. Click “Show/hide columns” in the upper part of the Neptune Web UI.
  2. Check/uncheck the names. You should:
    • uncheck “Project” since all the submissions in this group come from the same project CIFAR-10,
    • check channel names “accuracy” and “cross entropy” because you want to sort with respect to them.

You can sort submissions by accuracy or cross entropy value by clicking the triangle over the respective column.

Summary

That’s all! Now your internal validation leaderboard in Neptune is all set up. You and your team members can compare the models you have tuned on the CIFAR-10 dataset. You can also filter your results by dates, users or custom tags.
Of course, CIFAR-10 is not the only possible application of the provided code. You can easily adapt it to other settings such as contests, research or business intelligence. Feel free to use an internal validation leaderboard in Neptune wherever and whenever you need it.

Neptune 1.3 with TensorFlow integration and experiments in Docker

December 30, 2016 / in Data science, Deep learning, Machine learning, Neptune / by Rafał Hryciuk

We’re happy to announce that a new version of Neptune became available this month. The latest 1.3 release of deepsense.ai’s machine learning platform introduces powerful new features and improvements. This release’s key added features are: integration with TensorFlow and running Neptune experiments in Docker containers (see complete release notes).

TensorFlow Integration

The first major feature introduced in Neptune 1.3 is TensorFlow integration. We think that TensorFlow will become a leading technology for deep learning problems. TensorFlow comes with its own monitoring tool, TensorBoard. We don’t want to compete with TensorBoard; instead, we want to incorporate TensorBoard’s well-known functionality into Neptune. Starting with Neptune 1.3, data scientists can see all available TensorBoard metrics and graphs in Neptune.
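As a quick illustration, enabling the integration from a Python script takes only a couple of lines (a minimal sketch based on the context API used in the CIFAR-10 leaderboard post above):

from deepsense import neptune

# Create the Neptune context and turn on the TensorFlow integration; metrics and graphs
# written through TensorFlow's summary mechanism then also show up in Neptune.
ctx = neptune.Context()
ctx.integrate_with_tensorflow()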

Running Neptune Experiments in Docker Containers

Neptune creates a snapshot of code for every experiment execution. Thanks to this, users can easily recreate the results of every experiment. The problem is that the technology world is changing very quickly and saving the source code is often not enough. We also need to save our execution environment, because the source code depends on specific versions of libraries. Neptune 1.3 gives users the option to run a Neptune experiment in a Docker container. A Docker container is an encapsulation of the execution environment. Thanks to this, a user can keep containers with different versions of the libraries on the same host and use them to recreate an experiment’s results.
Running Neptune experiments in Docker containers is also important for Windows users. The suggested way of running TensorFlow experiments on Windows is to run them in Docker containers. Now, a data scientist can use TensorFlow with Neptune on Windows.
Follow the link to read more about running Neptune experiments in Docker containers.

Future Plans

We are already working on the next version of Neptune which will be released at the end of January 2017. The next release will contain:

  • Client Library for R and Java; and
  • Support for hyperparameter optimization using the grid search method.

We hope you will enjoy working with our machine learning platform, which now features TensorFlow integration and enables running experiments in Docker containers. If you’d like to provide us with any feedback, feel free to use our forum at https://community.neptune.ml/.
Don’t have Neptune yet? Join our Early Adopters Program and get free access.

Neptune – Machine Learning Platform

September 26, 2016 / in Data science, Deep learning, Machine learning, Neptune / by Rafał Hryciuk

In January 2016, deepsense.ai won the Right Whale Recognition contest on Kaggle. The competition’s goal was to automate the right whale recognition process using a dataset of aerial photographs of individual whales. The terms and conditions for the competition stated that to collect the prize, the winning team had to provide source code and a description of how to recreate the winning solution. A fair request, but as it turned out, the winning solution’s authors spent about three weeks recreating all of the steps that led them to the winning machine learning model.

When data scientists work on a problem, they need to test many different approaches – various algorithms, neural network structures, numerous hyperparameter values that can be optimized etc. The process of validating one approach can be called an experiment. The inputs for every experiment include: source code, data sets, hyperparameter values and configuration files. The outputs of every experiment are: output model weights (definition of the model), metric values (used for comparing different experiments), generated data and execution logs. As we can see, that’s a lot of different artifacts for each experiment. It is crucial to save all of these artifacts to keep track of the project – comparing different models, determining which approaches were already tested, expanding research from some experiment from the past etc. Managing the process of experiment executions is a very hard task and it is easy to make a mistake and lose an important artifact.
To make the situation even more complicated, experiments can depend on each other. For example, we can have two different experiments training two different models and a third experiment that takes these two models and creates a hybrid to generate predictions. Recreating the best solution means finding the path from the original data set to the model that gives the best results.

[Figure: Recreating the path that led to the best model]

The deepsense.ai research team performed around 1000 experiments to find the competition-winning solution. Knowing all that, it becomes clear why recreating the solution was such a difficult and time-consuming task.
The problem of recreating a machine learning solution is present not only in an academic environment. Businesses struggle with the same problem. The common scenario is that the research team works to find the best machine learning model to solve a business problem, but then the software engineering team has to put the model into a production environment. The software engineering team needs a detailed description of how to recreate the model.
Our research team needed a platform that would help them with these common problems. They defined the properties of such a platform as:

  • Every experiment and the related artifacts are registered in the system and accessible for browsing and comparing;
  • Experiment execution can be monitored via real-time metrics;
  • Experiment execution can be aborted at any time;
  • Data scientists should not be concerned with the infrastructure for the experiment execution.

deepsense.ai decided to build Neptune – a brand new machine learning platform that organizes data science processes. This platform relieves data scientists of the manual tasks related to managing their experiments. It helps with monitoring long-running experiments and supports team collaboration. All these features are accessible through the powerful Neptune Web UI and a CLI useful for scripting.
Neptune is already used in all machine learning projects at deepsense.ai. Every week, our data scientists execute around 1000 experiments using this machine learning platform. Thanks to that, the machine learning team can focus on data science and stop worrying about process management.

Experiment Execution in Neptune

Main Concepts of the Machine Learning Platform

Job

A job is an experiment registered in Neptune. It can be registered for immediate execution or added to a queue. The job is the main concept in Neptune and contains a complete set of artifacts related to the experiment:

  • source code snapshot: Neptune creates a snapshot of the source code for every job. This allows a user to revert to any job from the past and get the exact version of the code that was executed;
  • metadata: name, description, project, owner, tags;
  • parameters: custom parameters defined by the user. Neptune supports boolean, numeric and string types of parameters;
  • data and logs generated by the job;
  • metric values represented as channels.

Neptune is library and framework agnostic. Users can leverage their favorite libraries and frameworks with Neptune. At deepsense.ai we currently execute Neptune jobs that use: TensorFlow, Theano, Caffe, Keras, Lasagne or scikit-learn.

Channel

A channel is a mechanism for real-time job monitoring. In the source code, a user can create channels, send values through them and then monitor these values live using the Neptune Web UI. During job execution, a user can see how his or her experiment is performing. The Neptune machine learning platform supports three types of channels:

  • Numeric: used for monitoring any custom-defined metric. Numeric channels can be displayed as charts. Neptune supports dynamic chart creation from the Neptune Web UI with multiple channels displayed in one chart. This is particularly useful for comparing various metrics;
  • Text: used for logs;
  • Image: used for sending images. A common use case for this type of channel is checking the behavior of an applied augmentation when working with images.
[Figure: Comparing two Neptune image channels]

Queue

A queue is a very simple mechanism that allows a user to execute his or her job on remote infrastructure. A common setup for many research teams is that data scientists develop their code on local machines (laptops), but due to hardware requirements (a powerful GPU, a large amount of RAM, etc.) the code has to be executed on a remote server or in a cloud. For every experiment, data scientists have to move the source code between the two machines and then log into the remote server to execute the code and monitor logs. Thanks to our machine learning platform, a user can enqueue a job from a local machine (the job is created in Neptune, all metadata and parameters are saved, and the source code is copied to the user’s shared storage). Then, on a remote host that meets the job requirements, the user can execute the job with a single command. Neptune takes care of copying the source code, setting parameters etc.
The queue mechanism can be used to write a simple script that queries Neptune for enqueued jobs and executes the first job from the queue. If we run this script on a remote server in an infinite loop, we never have to log in to the server again, because the script executes all the jobs from the queue and reports the results to the machine learning platform.
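Such a worker loop could look roughly like the sketch below. Note that neptune list-queued-jobs and neptune exec are hypothetical placeholder commands used only for illustration; substitute whatever your Neptune installation actually provides for querying the queue and executing a queued job.

import subprocess
import time

while True:
    # Hypothetical command: ask Neptune for the identifiers of enqueued jobs.
    queued = subprocess.check_output(['neptune', 'list-queued-jobs']).decode().split()
    if queued:
        # Hypothetical command: execute the first job from the queue on this host.
        # Neptune copies the source code snapshot and parameters, so nothing has to
        # be transferred manually.
        subprocess.call(['neptune', 'exec', queued[0]])
    else:
        time.sleep(30)  # nothing queued -- poll again in half a minute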

Creating a Job

Neptune is language and framework agnostic. A user can communicate with Neptune using a REST API and WebSockets from his or her source code written in any language. To make the communication easier, we provide a high-level client library for Python (other languages are going to be supported soon).
Let’s examine a simple job that, provided with amplitude and sampling_rate, generates sine and cosine as functions of time (in seconds).

import math
import time
from deepsense import neptune
ctx = neptune.Context()
amplitude = ctx.params.amplitude
sampling_rate = ctx.params.sampling_rate
sin_channel = ctx.job.create_channel(name='sin', channel_type=neptune.ChannelType.NUMERIC)
cos_channel = ctx.job.create_channel(name='cos', channel_type=neptune.ChannelType.NUMERIC)
logging_channel = ctx.job.create_channel(name='logging', channel_type=neptune.ChannelType.TEXT)
ctx.job.create_chart(name='sin & cos chart', series={'sin': sin_channel, 'cos': cos_channel})
ctx.job.finalize_preparation()
# The time interval between samples.
period = 1.0 / sampling_rate
# The initial timestamp, corresponding to x = 0 in the coordinate axis.
zero_x = time.time()
iteration = 0
while True:
    iteration += 1
    # Computes the values of sine and cosine.
    now = time.time()
    x = now - zero_x
    sin_y = amplitude * math.sin(x)
    cos_y = amplitude * math.cos(x)
    # Sends the computed values to the defined numeric channels.
    sin_channel.send(x=x, y=sin_y)
    cos_channel.send(x=x, y=cos_y)
    # Formats a logging entry.
    logging_entry = "sin({x})={sin_y}; cos({x})={cos_y}".format(x=x, sin_y=sin_y, cos_y=cos_y)
    # Sends a logging entry.
    logging_channel.send(x=iteration, y=logging_entry)
    time.sleep(period)

The first thing we can see is that we need to import the Neptune library and create a neptune.Context object. The Context object is the entry point for Neptune integration. Afterwards, using the context we obtain the values of the job parameters: amplitude and sampling_rate.
Then, using neptune.Context.job we create numeric channels for sending sine and cosine values and a text channel for sending logs. We want to display sin_channel and cos_channel on a chart, so we use neptune.Context.job.create_chart to define a chart with two series named sin and cos. After that, we need to tell Neptune that the preparation phase is over and we are starting the proper computation. That is what ctx.job.finalize_preparation() does.
In an infinite loop we calculate the sine and cosine values and send them to Neptune using the channel.send method. We also create a human-readable log and send it through logging_channel.
To run main.py as a Neptune job we need to create a configuration file – a descriptor file with basic metadata for the job.

name: Sine-Cosine Generator
project: Trigonometry
owner: Your Name
parameters:
  - name: amplitude
    type: double
    default: 1.0
    required: false
  - name: sampling_rate
    type: double
    default: 2
    required: false

config.yaml contains basic information about the job: name, project, owner and parameter definitions. For our simple Sine-Cosine Generator we need two parameters of double type: amplitude and sampling_rate (we already saw in main.py how to obtain the parameter values in the code).
To run the job we need to use the Neptune CLI command:
neptune run main.py --config config.yaml --dump-dir-url my_dump_dir -- --amplitude 5 --sampling_rate 2.5
For neptune run we specify: the script that we want to execute, the configuration for the job and a path to the directory where the snapshot of the code will be copied. We also pass values of the custom-defined parameters.

Job Monitoring

Every job executed in the machine learning platform can be monitored in the Neptune Web UI. A user can see all useful information related to the job:

  • metadata (name, description, project, owner);
  • job status (queued, running, failed, aborted, succeeded);
  • location of the job source code snapshot;
  • location of the job execution logs;
  • parameter schema and values.
[Figure: Parameters for Sine-Cosine Generator]

A data scientist can monitor custom metrics sent to Neptune through the channel mechanism. Values of the incoming channels are displayed in the Neptune Web UI in real time. If the metrics are not satisfactory, the user can decide to abort the job. Aborting the job can also be done from the Neptune Web UI.

[Figure: Channels for Sine-Cosine Generator]
[Figure: Comparing values of multiple metrics using Neptune channels]

Numeric channels can be displayed graphically as charts. A chart representation is very useful to compare various metrics and to track changes of metrics during job execution.

[Figure: Chart for Sine-Cosine Generator]
[Figure: Charts displaying custom metrics]

For every job a user can define a set of tags. Tags are useful for marking significant differences between jobs and milestones in the project (e.g., if we are doing an MNIST project, we can start our research by running a job with a well-known and publicly available algorithm and tag it ‘benchmark’).

Comparing Results and Collaboration

Every job executed in the Neptune machine learning platform is registered and available for browsing. Neptune’s main screen shows a list of all executed jobs. Users can filter jobs using job metadata, execution time and tags.

[Figure: Neptune jobs list]

A user can select custom-defined metrics to show as columns on the list. The job list can be sorted using values from every column. That way, a user can select which metric he or she wants to use for comparison, sort all jobs using this metric and then find the job with the best score.
Thanks to a complete history of job executions, data scientists can compare their jobs with jobs executed by their teammates. They can compare results, metrics values, charts and even get access to the snapshot of code of a job they’re interested in.
Thanks to Neptune, the machine learning team at deepsense.ai was able to:

  • get rid of spreadsheets for keeping a history of executed experiments and their metric values;
  • eliminate sharing source code across the team as email attachments or via other improvised tricks;
  • limit communication required to keep track of project progress and achieved milestones;
  • unify visualisation for metrics and generated data.

Join the Early Adopters Program

Apply for our Early Adopters Program and get early access to Neptune – Machine Learning Platform.
Benefits of joining the program include:

  • You will be one of the first to get full access to this innovative product designed especially for data scientists for FREE;
  • You will have direct impact on future product features;
  • You will get support from our team of engineers;
  • You can share your ideas with our experts and the community of the world’s leading data scientists.
