
Executing ML Jobs in Azure Databricks From StreamSets


In this post, you will learn how to execute machine learning jobs in Azure Databricks using StreamSets Databricks Executor.

Mar. 15, 2019 · AI Zone



In my previous post, I demonstrated how to achieve low-latency inference using Databricks ML models in StreamSets. Now let’s say you have a dataflow pipeline that is ingesting data, enriching it, and performing transformations, and, based on certain condition(s), you’d like to (re)train the Databricks ML model. For instance, you might want to use a different value for the hyperparameter n_estimators (the “number of trees” in the forest), one of the most important parameters of the random forest method.

In this post, you will learn how to execute such machine learning jobs in Azure Databricks using StreamSets Databricks Executor.

Important: I have used (re)training a model by passing in a hyperparameter value merely as an example; the overarching takeaway is that, by following the guidelines outlined in this post, you should be able to execute other types of jobs in Azure Databricks using StreamSets Databricks Executor.

Before we dive into details, let’s look at the different components involved.

Prerequisites

Training Databricks ML Model on Azure

We’ll use a Databricks Notebook I’ve created to train a RandomForestRegressor model (a minimal sketch of the training logic follows the list below).

Here are the details about the model and Databricks Notebook referenced above:

  • Code: It is written in Scala, but it can be easily ported to Python.
  • Hyperparameters: For simplicity, I did not tune any hyperparameters — fine tuning hyperparameters is highly recommended before deploying models in production environments.
  • Dataset: The model is trained on a classic dataset that contains advertising budgets for media channels — TV, radio, and newspapers — and their sales.
  • Inference: The model is trained to predict sales (number of units sold) based on the advertising budgets allocated to the TV, radio, and newspaper channels.
  • Model Export: After the model is trained, it’s exported using Databricks ML Model Export (You’ll need to fill in your AWS credentials and uncomment code to save the model in your AWS S3 bucket).
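
For reference, here is a minimal PySpark sketch of the kind of training the Notebook performs. This is an illustrative approximation, not the Notebook’s actual Scala code; the column names (TV, Radio, Newspaper, Sales) follow the classic Advertising dataset, and the file path assumes the upload location used in Step 1 below.

from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import RandomForestRegressor

# Load the Advertising dataset from DBFS. The spark session object is
# predefined in any Databricks notebook.
df = (spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("/FileStore/tables/Advertising.csv"))

# Assemble the three advertising budgets into a single feature vector.
assembler = VectorAssembler(inputCols=["TV", "Radio", "Newspaper"],
                            outputCol="features")

# numTrees is Spark ML's name for the "number of trees" hyperparameter
# (n_estimators in scikit-learn terms).
rf = RandomForestRegressor(featuresCol="features", labelCol="Sales",
                           numTrees=20)
model = rf.fit(assembler.transform(df))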

Getting Started in Azure Databricks

STEP 1. Follow the instructions outlined here to upload the Advertising dataset. (Note: You don’t need to create a table as long as the file is uploaded and can be accessed at /FileStore/tables/Advertising.csv)
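
A quick way to confirm the upload from a notebook cell (dbutils and display are available in any Databricks notebook):

# List the upload directory; Advertising.csv should appear here.
display(dbutils.fs.ls("/FileStore/tables/"))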


STEP 2. Follow the instructions outlined here to import the Databricks Notebook. (And make sure it is attached to a Spark cluster running in Azure Databricks.)


Before moving on, run all commands/cells in the Notebook to make sure everything checks out and that there are no errors.

STEP 3. Follow the instructions outlined here to create a job.
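
If you prefer to script this step, the same job can be created through the Jobs API. The sketch below is a hypothetical example using Python’s requests library; DOMAIN, TOKEN, CLUSTER_ID, and the notebook path are placeholders for your own values.

import requests

# Hypothetical example: create a job that runs the imported notebook
# on an existing cluster. Replace all placeholder values.
resp = requests.post(
    "https://DOMAIN.azuredatabricks.net/api/2.0/jobs/create",
    headers={"Authorization": "Bearer TOKEN"},
    json={
        "name": "retrain-advertising-model",
        "existing_cluster_id": "CLUSTER_ID",
        "notebook_task": {"notebook_path": "/Users/you@example.com/Notebook"},
    },
)
print(resp.json())  # e.g. {"job_id": 1}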


STEP 4. Follow the instructions outlined here to generate an authentication token. (The auth token will be required to execute the job from StreamSets.)


Execute Databricks ML Job in Azure

Before we look at how to execute this job using StreamSets Databricks Executor, let’s do a quick test using the curl command and the Azure Databricks Jobs API.

curl 'https://BASE_AZURE_URL/api/2.0/jobs/run-now' -X POST -H "Authorization: Bearer BEARER_TOKEN" -d '{"job_id": JOB_ID}'

Replace BASE_AZURE_URL, BEARER_TOKEN, and JOB_ID, then execute the command. In my case, it looked like this:

curl 'https://westus.azuredatabricks.net/api/2.0/jobs/run-now' -X POST -H "Authorization: Bearer dapi53916e671db41XXXXXXXXXXXXXXX" -d '{"job_id": 1}'

If all goes well, the JSON response will look something like this:

{"run_id":25,"number_in_job":25}

Execute Databricks ML Job in Azure Using StreamSets Databricks Executor

Now let’s see how to execute the same job using StreamSets Databricks Executor. Assume there’s a dataflow pipeline with a data source/origin, optional processors to perform transformations, a destination and some logic or condition(s) to trigger a task in response to events that occur in the pipeline. In our case, that task is to execute the Databricks ML job in Azure using StreamSets Databricks Executor. (For more information on dataflow triggers, refer to the documentation.)

For simplicity, let’s focus on the following fragments of the dataflow pipeline.

Job tab of StreamSets Databricks Executor:


Where:
  • Cluster Base URL: Your Azure Databricks service URL
  • Job Type: Notebook Job
  • Job ID: ID of the job created in Step 3
  • Parameters: key = NUM_OF_TREES; value = ${record:value('/tune_trees')}. Note: In this example, the value is dynamically set to the contents of the record field named ‘tune_trees’ (see the sketch below).
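
Conceptually, the executor resolves the expression against each incoming record and passes the result to the job as a notebook parameter. Something like the following run-now payload captures the idea (the value 40 is made up for illustration):

import requests

# The Parameters map on the Job tab is carried to the job as notebook
# parameters; NUM_OF_TREES holds the resolved value of /tune_trees.
requests.post(
    "https://westus.azuredatabricks.net/api/2.0/jobs/run-now",
    headers={"Authorization": "Bearer BEARER_TOKEN"},
    json={"job_id": 1, "notebook_params": {"NUM_OF_TREES": "40"}},
)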

Credentials tab of StreamSets Databricks Executor:


Where:
  • Credential Type: Token
  • Token: Auth token created in Step 4

Running the Pipeline

Assuming all goes well, with no errors, and the event(s) configured in the pipeline fire the executor, the job will start running in Azure Databricks. As a result, the associated Databricks Notebook in Azure will execute all of its commands, effectively (re)training the RandomForestRegressor model with NUM_OF_TREES, passed in from StreamSets Databricks Executor, as its number-of-trees (n_estimators) hyperparameter value. (This happens in Cmd 12 of the Notebook; a sketch of the idea follows.)
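
In Python terms, Cmd 12 boils down to something like this. This is a hedged rendering of the idea, not the Notebook’s actual code (which is Scala, where dbutils.widgets works the same way); the featuresCol and labelCol names are carried over from the earlier sketch.

from pyspark.ml.regression import RandomForestRegressor

# Read the NUM_OF_TREES parameter passed in by StreamSets and use it
# as the number of trees when (re)training the model.
num_trees = int(dbutils.widgets.get("NUM_OF_TREES"))
rf = RandomForestRegressor(featuresCol="features", labelCol="Sales",
                           numTrees=num_trees)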


You can view the job transitioning from the Pending to Running to Succeeded states in the Jobs interface.


Summary

In this post, you learned how to execute jobs in Azure Databricks using StreamSets Databricks Executor. In particular, we looked at automating the task of (re)training a Databricks ML model with different hyperparameter values so that model accuracies can be evaluated and compared. Note: It goes without saying that training models, evaluating them, model versioning, and serving different versions of a model are not trivial undertakings, and they are not the focus of this post.

If you’re interested in learning how to use trained models to achieve low-latency inference in StreamSets, check out the tech blogs, Low-Latency Inference Using Databricks ML In StreamSets and Real-Time Machine Learning With TensorFlow In Data Collector.

StreamSets Data Collector is open source, under the Apache v2 license.


Topics:
AI, artificial intelligence tutorial, machine learning, Databricks, Azure
