Scheduling Spark jobs in Seahorse

In the latest Seahorse release we introduced the scheduling of Spark jobs. We will show you how to use it to regularly collect data and send reports generated from that data via email.

Use case

Let’s say that we have a local weather station and the data from it is uploaded automatically to a Google Sheets spreadsheet. The spreadsheet always contains a single data row (row number 2) holding the current weather readings.

There is one header row and one row with data: temperature, humidity, atmospheric pressure, wind speed and time.

We would like to generate some graphs from this data every now and then and do it without any repeated effort.


Collecting data

The first workflow that we create collects a snapshot of live data and appends it to a historical data set. It will be run at regular intervals.

Let’s start by creating two data sources in Seahorse. The first data source represents the Google Spreadsheet with live-updated weather data:

Seahorse's Google Spreadsheet data source options.

For the historical weather data, we create a data source representing a file stored locally within Seahorse. Initially, it only contains a header and one row – current data.

Data source named "Historical weather data", which has its source in a library CSV file.

Now we can create our first Spark job which reads both data sources, concatenates them and writes back to the “Historical weather data” data source. In this way, there will be a new data row in “Historical weather data” every time the workflow is run.
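Outside Seahorse, the same append step can be sketched in plain Python with pandas. This is only an analogy of what the scheduled workflow does, not Seahorse's API; the file path and column names are assumptions:

```python
import pandas as pd

# Path is an assumption for illustration; in Seahorse this is the
# "Historical weather data" data source backed by a library CSV file.
HISTORICAL_CSV = "historical_weather.csv"

def append_snapshot(live_row: pd.DataFrame) -> pd.DataFrame:
    """Append the current live-data row to the historical data set."""
    historical = pd.read_csv(HISTORICAL_CSV)          # "Read DataFrame"
    combined = pd.concat([historical, live_row],      # "Union"
                         ignore_index=True)
    combined.to_csv(HISTORICAL_CSV, index=False)      # "Write DataFrame"
    return combined
```

Each scheduled run would add one new row to the historical file, exactly as the four-node workflow does.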

Screenshot shows a workflow with four nodes, which will run as a scheduled job. At the top, there are two "Read DataFrame" operations named "Historical weather data" and "Live weather data". Their output is connected to a "Union" operation named "Append live data snapshot". Its output is fed to a "Write DataFrame" operation, writing to "Historical weather data". This is one of the workflows to be scheduled using Spark job scheduling.

There is one more step left: this Spark job needs to be scheduled. We set it to run every half hour, and after every scheduled execution an email is sent.

Form for selecting scheduling options: "Select cluster preset: default Run workflow every hour at 05,35 minute(s) past the hour Send email reports to:"

The email contains a link to the executed workflow, where we can see node reports from that run.

Sending a weather report

The next workflow is very simple – it consists of a Read DataFrame operation and a Python Notebook. It is kept separate from the data-collecting workflow because we want to send reports much less frequently.

Workflow consisting of two operations: a "Read DataFrame" reading from "Historical weather data", connected to a "Python Notebook".

In the notebook, we first transform the data a little:
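The notebook code is not reproduced here; below is a minimal sketch of such a transformation, assuming the data has been pulled into a pandas DataFrame with the columns described above (the column names are assumptions based on the data source):

```python
import pandas as pd

def prepare(df: pd.DataFrame) -> pd.DataFrame:
    """Parse the 'time' column and index the data chronologically."""
    out = df.copy()
    out["time"] = pd.to_datetime(out["time"])
    out = out.sort_values("time").set_index("time")
    return out
```

With a datetime index in place, each weather characteristic becomes a time series that is straightforward to plot.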


Next, we prepare a function that will be used to line-plot a time series of weather characteristics like temperature, air pressure and humidity:
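A sketch of such a plotting function is shown below. In the actual notebook the DataFrame is presumably captured from the surrounding scope, so `plot` takes only the column name (as in the screenshots later on); here we pass the frame explicitly to keep the sketch self-contained:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, suitable for a scheduled job
import matplotlib.pyplot as plt
import pandas as pd

def plot(df: pd.DataFrame, column: str) -> plt.Axes:
    """Line-plot one weather characteristic over the datetime index."""
    ax = df[column].plot(kind="line", title=column)
    ax.set_xlabel("time")
    ax.set_ylabel(column)
    return ax
```

Calling it once per characteristic (temperature, atmospheric pressure, and so on) produces one line graph per notebook cell.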


After editing the notebook, we set up its parameters so that reports are sent to our email address after each run.

Options of Python Notebook. Execute notebook: true. Send E-mail report: true. Email Address:

Finally, we can set up a schedule for our report. It is similar to the previous one, only less frequent: we send a report every day in the evening. After each execution two emails are sent – one, as previously, with a link to the executed workflow, and a second one containing the notebook execution result.


After some time, we have enough data to plot. Here is a sample from the “Historical weather data” data source…

Data report for "Historical weather data" Read DataFrame collected using Spark job scheduling. There is a table: columns are "temperature", "humidity", "atmospheric pressure", "wind speed" and "time"; rows contain some sample data.

… and graphs that are in the executed notebook:

Two cells in Python Notebook, with inputs: "plot('temperature')" and "plot('atmospheric pressure')". Both outputs are images with line graphs.


We created a data processing and email reporting pipeline using two simple workflows and some Python code. Seahorse works well not only for big data, but also for scenarios where you need to integrate data from different sources and periodically generate reports – Seahorse data sources and Spark job scheduling are the right tools for the job.
