Amazon Machine Learning Developer Guide
Release 1.0
Amazon Web Services
April 17, 2015
CONTENTS

1 What is Amazon Machine Learning?
  1.1 Amazon Machine Learning Key Concepts
  1.2 Accessing Amazon Machine Learning
  1.3 Regions and Endpoints
  1.4 Resources
2 Setting Up Amazon Machine Learning
  2.1 Sign Up for AWS
3 Tutorial: Using Amazon ML to Predict Responses to a Marketing Offer
  3.1 Step 1: Download, Edit, and Upload Data
  3.2 Step 2: Create a Datasource
  3.3 Step 3: Create an ML Model
  3.4 Step 4: Review the ML Model Predictive Performance and Set a Cut-Off
  3.5 Step 5: Use the ML Model to Create Batch Predictions
  3.6 Step 6: Clean Up
4 Creating and Using Datasources
  4.1 Understanding the Data Format for Amazon ML
  4.2 Uploading Your Data to Amazon S3
  4.3 Creating a Data Schema for Amazon ML
  4.4 Data Insights
  4.5 Using Amazon S3 with Amazon ML
  4.6 Using Amazon Redshift with Amazon ML
  4.7 Using Amazon RDS with Amazon ML
5 Training ML Models
  5.1 Types of ML Models
  5.2 Training Process
  5.3 Training Parameters
6 Data Transformations for Machine Learning
  6.1 Importance of Feature Transformation
  6.2 Feature Transformations with Data Recipes
  6.3 Recipe Format Reference
  6.4 Suggested Recipes
  6.5 Data Transformations Reference
  6.6 Data Rearrangement
7 Evaluating ML Models
  7.1 ML Model Insights
  7.2 Binary Model Insights
  7.3 Multiclass Model Insights
  7.4 Regression Model Insights
  7.5 Evaluation Alerts
8 Generating and Interpreting Predictions
  8.1 Creating Batch Prediction Objects
  8.2 Working with Batch Predictions
  8.3 Reading the BatchPrediction Output Files
  8.4 Requesting Real-time Predictions
9 Managing Amazon Machine Learning Objects
  9.1 Listing Objects
  9.2 Retrieving Object Descriptions
  9.3 Updating Objects
  9.4 Deleting Objects
10 Monitoring Amazon ML with Amazon CloudWatch Metrics
11 Amazon Machine Learning Reference
  11.1 Granting Amazon ML Permissions to Read Your Data from Amazon S3
  11.2 Granting Amazon ML Permissions to Output Predictions to Amazon S3
  11.3 Controlling Access to Amazon ML Resources by Using IAM
  11.4 Dependency Management of Asynchronous Operations
  11.5 Operation Request Status
  11.6 System Limits
  11.7 Names and IDs for all Objects
  11.8 Object Lifetimes
12 About Amazon Web Services
Index
CHAPTER ONE
WHAT IS AMAZON MACHINE LEARNING?
Welcome to the Amazon Machine Learning Developer Guide. Amazon Machine Learning (Amazon ML) is
a robust, cloud-based AWS service that lets software developers train predictive models and use them
to build predictive applications.
The rest of this section introduces the key concepts and terms that will help you understand what you need
to do to create powerful machine learning models with Amazon ML.
Note
If you are new to machine learning, we recommend that you read Machine Learning Concepts
(http://docs.aws.amazon.com/machine-learning/latest/mlconcepts/) before you continue with
the Amazon Machine Learning Developer Guide. If you already are familiar with machine
learning, you can proceed directly to Amazon Machine Learning Key Concepts in this guide.
1.1 Amazon Machine Learning Key Concepts
This section of the developer guide summarizes the following key concepts and describes in greater detail
how they are used within Amazon ML:
• Datasources contain metadata associated with data inputs to Amazon ML
• ML models generate predictions using the patterns extracted from the input data
• Evaluations measure the quality of ML models
• Batch predictions asynchronously generate predictions for multiple input data observations
• Real-time predictions synchronously generate predictions for individual data observations
1.1.1 Datasources
A datasource is an object that contains metadata about your input data. Amazon ML reads your input data,
computes descriptive statistics on its attributes, and stores the statistics—along with a schema and other
information—as part of the datasource object. Next, Amazon ML uses the datasource to train and evaluate
an ML model and generate batch predictions.
Important
A datasource does not store a copy of your input data. Instead, it stores a reference to the
Amazon S3 location where your input data resides. If you move or change the Amazon S3 file,
Amazon ML cannot access or use it to create an ML model, generate evaluations, or generate
predictions.
The following table defines terms that are related to datasources.
Attribute
    A unique, named property within an observation. In tabular-formatted data such as spreadsheets
    or comma-separated values (CSV) files, the column headings represent the attributes, and the
    rows contain values for each attribute.
    Synonyms: variable, variable name, field, column
Datasource Name
    (Optional) Allows you to define a human-readable name for a datasource. These names enable you
    to find and manage your datasources in the Amazon ML console.
Input Data
    Collective name for all the observations that are referred to by a datasource.
Location
    Location of input data. Currently, Amazon ML can use data that is stored within Amazon S3
    buckets, Amazon Redshift databases, or MySQL databases in Amazon Relational Database Service
    (RDS).
Observation
    A single input data unit. For example, if you are creating an ML model to detect fraudulent
    transactions, your input data will consist of many observations, each representing an
    individual transaction.
    Synonyms: record, example, instance, row
Row ID
    (Optional) A flag that, if specified, identifies an attribute in the input data to be included
    in the prediction output. This attribute makes it easier to associate each prediction with its
    corresponding observation.
    Synonyms: row identifier
Schema
    The information needed to interpret the input data, including attribute names and their
    assigned data types, and names of special attributes.
Statistics
    Summary statistics for each attribute in the input data. These statistics serve two purposes:
    the Amazon ML console displays them in graphs to help you understand your data at a glance and
    identify irregularities or errors, and Amazon ML uses them during the training process to
    improve the quality of the resulting ML model.
Status
    Indicates the current state of the datasource, such as In Progress, Completed, or Failed.
Target Attribute
    In the context of training an ML model, the target attribute identifies the name of the
    attribute in the input data that contains the "correct" answers. Amazon ML uses this to
    discover patterns in the input data and generate an ML model. In the context of evaluating and
    generating predictions, the target attribute is the attribute whose value will be predicted by
    a trained ML model.
    Synonyms: target
1.1.2 ML Models
An ML model is a mathematical model that generates predictions by finding patterns in your data. Amazon
ML supports three types of ML models: binary classification, multiclass classification, and regression.
The following table defines terms that are related to ML models.
Regression
    The goal of training a regression ML model is to predict a numeric value.
Multiclass
    The goal of training a multiclass ML model is to predict values that belong to a limited,
    pre-defined set of permissible values.
Binary
    The goal of training a binary ML model is to predict values that can be either 0 or 1.
Model Size
    ML models capture and store patterns. The more patterns an ML model stores, the bigger it will
    be. ML model size is measured in megabytes (MB).
Number of Passes
    When you train an ML model, you use data from a datasource. It is sometimes beneficial to use
    each data record in the learning process more than once. The number of times that you let
    Amazon ML use the same data records is called the number of passes.
Regularization
    Regularization is a machine learning technique that you can use to obtain higher-quality
    models. Amazon ML offers a default setting that works well for most cases.
1.1.3 Evaluations
An evaluation measures the quality of your ML model and determines if it is performing well.
The following table defines terms that are related to evaluations.
Model Insights
    Amazon ML provides you with a metric and a number of insights that you can use to evaluate the
    predictive performance of your model.
AUC
    Area Under the ROC Curve (AUC) measures the ability of a binary ML model to predict a higher
    score for positive examples as compared to negative examples.
Macro-averaged F1-score
    The macro-averaged F1-score is used to evaluate the predictive performance of multiclass ML
    models.
RMSE
    The Root Mean Square Error (RMSE) is a metric used to evaluate the predictive performance of
    regression ML models.
Cut-off
    ML models work by generating numeric prediction scores. By applying a cut-off value, the system
    converts these scores into 0 and 1 labels.
Accuracy
    Accuracy measures the percentage of correct predictions.
Precision
    Precision measures the percentage of actual positives among those examples that are predicted
    as positive.
Recall
    Recall measures the percentage of actual positives that are predicted as positives.
1.1.4 Batch Predictions
Batch predictions generate predictions for a set of observations all at once. This approach is ideal
for predictive analyses that do not have a real-time requirement.
The following table defines terms that are related to batch predictions.
Output Location
    The results of a batch prediction are stored in an S3 bucket output location.
Manifest File
    This file relates each input data file with its associated batch prediction results. It is
    stored in the S3 bucket output location.
1.1.5 Real-time Predictions
Real-time predictions are for applications with a low latency requirement, such as interactive web, mobile,
or desktop applications. Any ML model can be queried for predictions by using the low latency real-time
prediction API.
The following table defines terms that are related to real-time predictions.
Real-time Prediction API
    The Real-time Prediction API accepts a single input observation in the request payload and
    returns the prediction in the response.
Real-time Prediction Endpoint
    To use an ML model with the real-time prediction API, you need to create a real-time prediction
    endpoint. Once created, the endpoint contains the URL that you can use to request real-time
    predictions.
1.2 Accessing Amazon Machine Learning
You can access Amazon ML by using any of the following:
• Amazon ML console
You can access the Amazon ML console by signing into the AWS Management Console,
and opening the Amazon ML console at
https://console.aws.amazon.com/machinelearning/.
• AWS CLI
For information about how to install and configure the AWS CLI, see Getting Set Up with
the AWS Command Line Interface in the AWS Command Line Interface User Guide.
• Amazon ML API
For more information about the Amazon ML API, see Amazon ML API Reference.
• AWS SDKs
For more information about the AWS SDKs, see Tools for Amazon Web Services.
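As a brief illustration of SDK access, the following boto3 sketch lists ML models in your account. It
assumes AWS credentials are already configured and that you use a region where Amazon ML is available.

import boto3

# Amazon ML client; assumes credentials are configured (for example,
# via `aws configure`) and a supported region such as us-east-1.
ml = boto3.client("machinelearning", region_name="us-east-1")

# List up to ten ML models and print their IDs, names, and statuses.
for model in ml.describe_ml_models(Limit=10)["Results"]:
    print(model["MLModelId"], model.get("Name"), model["Status"])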
1.3 Regions and Endpoints
For a list of supported AWS regions and URLs of Amazon ML console and API endpoints, see Regions
and Endpoints in the Amazon Web Services General Reference.
1.4 Resources
The following table lists resources you will find useful as you work with Amazon ML.
Amazon ML FAQs (http://aws.amazon.com/machine-learning/faqs/)
    Covers the top questions that developers have asked about this product.
Amazon ML Release Notes (http://aws.amazon.com/releasenotes/MachineLearning)
    Provides a high-level overview of the current release. The release notes also provide specific
    information about any new features, corrections, and known issues.
Amazon ML API Reference (http://docs.aws.amazon.com/machine-learning/latest/APIReference/)
    Describes all the API operations for Amazon ML in detail. It also provides sample requests and
    responses for supported web service protocols.
Machine Learning Concepts (http://docs.aws.amazon.com/machine-learning/latest/mlconcepts/)
    Provides an overview of the basic concepts in the field of machine learning.
AWS Developer Resource Center (http://aws.amazon.com/resources/)
    Provides a central starting point to find documentation, code samples, release notes, and other
    information to help you build innovative applications with AWS.
AWS Support (https://aws.amazon.com/premiumsupport/)
    Serves as a hub for creating and managing your AWS Support cases. It also includes links to
    other helpful resources, such as forums, technical FAQs, service health status, and AWS Trusted
    Advisor.
Amazon ML product information (http://aws.amazon.com/machine-learning/)
    Captures all the pertinent product information about Amazon ML in a central location.
Contact Us (https://aws.amazon.com/contact-us/)
    Provides a central contact point for inquiries concerning AWS billing, accounts, events, and
    more.
CHAPTER TWO
SETTING UP AMAZON MACHINE LEARNING
You need an AWS account before you can use Amazon Machine Learning for the first time. If you don’t
have an account, see Sign Up for AWS.
2.1 Sign Up for AWS
When you sign up for Amazon Web Services (AWS), your AWS account is automatically signed up for all
services in AWS, including Amazon ML. You are charged only for the services that you use. If you have an
AWS account already, skip this step. If you don’t have an AWS account, use the following procedure to
create one.
To sign up for an AWS account
1. Go to http://aws.amazon.com/ and choose Sign Up.
2. Follow the on-screen instructions.
Part of the sign-up procedure involves receiving a phone call and entering a PIN using the phone keypad.
CHAPTER THREE
TUTORIAL: USING AMAZON ML TO PREDICT RESPONSES TO A MARKETING OFFER
With Amazon Machine Learning (Amazon ML), you can build and train predictive applications and host
your applications in a scalable cloud solution. In this tutorial, we show you how to use Amazon ML to
create a datasource, build a machine learning (ML) model, and use the model to generate batch predictions.
Our sample exercise in the tutorial shows how to identify potential customers for targeted marketing
campaigns, but you can apply the same principles to create and use a variety of machine learning models.
To complete the sample exercise, you use the publicly available banking and marketing dataset from the
University of California at Irvine (UCI) repository (http://archive.ics.uci.edu/ml/datasets.html). This
dataset contains information about customers as well as descriptions of their behavior in response to
previous marketing contacts. You use this data to identify which customers are most likely to subscribe to
your new product. In the sample dataset, the product is a bank term deposit. A bank term deposit is a
deposit made into a bank with a fixed interest rate that cannot be withdrawn for a certain period of time,
also known as a certificate of deposit (CD).
To complete the tutorial, you download sample data and upload the data to Amazon S3 to create a
datasource—an Amazon ML object that contains information about your data. Next, you create an ML
model from the datasource. You evaluate and adjust the ML model’s performance, and then use it to
generate predictions.
Note
You need an AWS account for this tutorial. If you don’t have an AWS account, see Setting Up
Amazon Machine Learning.
Complete the following steps to get started using Amazon ML:
Step 1: Download, Edit, and Upload Data
Step 2: Create a Datasource
Step 3: Create an ML Model
Step 4: Review the ML Model’s Performance and Set a Score Threshold
Step 5: Use the ML Model to Generate Batch Predictions
Step 6: Clean Up
3.1 Step 1: Download, Edit, and Upload Data
To start, you download the data and check to see if you need to format it before you provide it to Amazon
ML. For Amazon ML formatting requirements, see Understanding the Data Format for Amazon ML. To
make the download step quick for you, we downloaded the banking and marketing dataset from the UCI
Machine Learning Repository (http://archive.ics.uci.edu/ml/), formatted it to conform to Amazon ML
guidelines, shuffled the records, and made it available at the location that is shown in the following
procedure.
To download and save the data
1. To open the datasets that we have placed in an Amazon S3 bucket for your use, click
https://s3.amazonaws.com/aml-sample-data/banking.csv and
https://s3.amazonaws.com/aml-sample-data/banking-batch.csv
2. Download the files by saving them as banking.csv and banking-batch.csv on your desktop.
If you open the banking.csv file, you should see rows and columns full of data. The header row
contains the attribute names for each column. An attribute is a unique, named property. Each row
represents a single observation.
You want your ML model to answer the following question: Will this customer subscribe to my new
product? In the dataset, the answer to this question is in attribute y, which is located in column U.
This column contains the values 1 (yes) or 0 (no). The attribute that you want Amazon ML to learn
to predict is known as the target attribute.
The y attribute that you are going to predict is a binary attribute. For binary classification, Amazon
ML understands only 1 or 0. To help Amazon ML learn how to predict which of your customers will
subscribe to the marketing campaign, we edited the original UCI dataset to make all values of y that
are yes equal 1 and all values that are no equal 0. In the dataset that you downloaded, we have
already edited the yes and no values to be 1 and 0.
The following two screenshots show the data before and after our edits.
The banking-batch.csv data does not contain the binary attribute, y. Once you have an ML model, you
will use the model to predict y for each row in the banking-batch.csv file.
Next, upload your banking.csv and banking-batch.csv files to an Amazon S3 bucket that you own. If you
have not created a bucket, see the Amazon S3 User Guide
(http://docs.aws.amazon.com/AmazonS3/latest/UG/CreatingaBucket.html) to learn how to create one.
To upload the file to an Amazon S3 bucket
1. Sign into the AWS Management Console and open the Amazon S3 console at
https://console.aws.amazon.com/s3.
2. In the buckets list, create or choose the bucket where you want to upload the file, and then choose
Upload.
3. Choose Add Files.
4. In the dialog box that appears, navigate to your desktop, choose banking.csv and banking-batch.csv,
and then choose Open.
Note
The datasource does not actually store your data. The datasource only references it. If you
move or change the S3 file, Amazon ML cannot access or use it to create an ML model,
generate evaluations, or generate predictions.
Now you are ready to create your datasource.
3.2 Step 2: Create a Datasource
After you upload banking.csv to your Amazon S3 bucket, you need to provide Amazon ML with the
following information:
• The Amazon S3 location of your data
• The names of the attributes in the data and the type of each attribute (numeric, text, categorical, or
binary type)
• The name of the attribute that holds the answer that you want Amazon ML to learn to predict
You provide this information to Amazon ML by creating a datasource. A datasource is an Amazon ML
object that holds the location of your input data, the attribute names and types, the name of the target
attribute, and descriptive statistics for each attribute. Operations like ML model training or ML model
evaluations use a datasource ID to reference your data.
In the next step, you reference banking.csv as the input data of your datasource, provide the schema using
the Amazon ML console to assign data types, and select a target attribute.
Input Data
Amazon ML uses input data to train ML models. Input data must be in CSV format. To create your
targeted marketing campaign, use the Banking dataset as input data. Input data for training contains
the correct answer for the attribute y that you want Amazon ML to predict. You must provide Amazon ML
with a dataset for which you know the correct answer so that Amazon ML can learn the patterns among
the input attributes. Learning these patterns helps Amazon ML predict which customers are more likely
to subscribe to the new product.
To reference input data for the training datasource
1. Open the Amazon Machine Learning console at https://console.aws.amazon.com/machinelearning/.
On the Amazon ML console, you can create datasources, ML models, evaluations, and batch
predictions. You can also view detail pages for these objects, which include information such as the
object’s creation status.
2. On the Entities page, choose Create new, Datasource.
3. On the Input Data page, for Where is your data located?, select S3.
For S3 Location, type the location of the banking.csv file: example-bucket/banking.csv
4. For Datasource name, type Banking Data 1.
5. Choose Verify.
6. In the S3 permissions dialog box, choose Yes.
Amazon ML validates the location of your data.
7. If your information is correct, a property page appears with a Validation success message. Review
the properties, and then choose Continue.
3.2.1 Schema
Next, you establish a schema. A schema is composed of attributes and their assigned data types. There are
two ways to provide Amazon ML with a schema:
• Provide a separate schema file when you upload your Amazon S3 data
• Allow Amazon ML to infer the attribute types and create a schema for you
In this tutorial, Amazon ML infers the schema for you.
For more information about creating a separate schema file, see Creating a Data Schema for Amazon ML.
To create a schema by using Amazon ML
1. On the Schema page, for Does the first line in your CSV contain the column names?, choose Yes.
Amazon ML infers the data type of each attribute from a sample of that attribute's values. It is
important that each attribute is assigned the correct data type, both to help Amazon ML ingest the
data correctly and to enable the correct feature processing on the attributes. This step influences
the predictive performance of the ML model that is trained on this datasource.
2. Review the data types identified by Amazon ML by checking the sample values for the attributes on
all three pages:
• Attributes that are numeric quantities for which the order is meaningful should be marked as numeric
• Attributes that are numbers or strings that are used to denote a category should be marked as
categorical
• Attributes that are expected to take only values 1 or 0 should be marked as binary
• Attributes that are strings that you would like to treat as words delimited by spaces should be marked
as text
3. In the preceding example, Amazon ML has correctly identified the data types for all the attributes, so
choose Continue.
Next, you select a target attribute.
3.2.2 Target Attribute
In this step, you select a target attribute. The target attribute is the attribute that the ML model must learn to
predict. Because you are trying to send the new marketing campaign to customers who are most likely to
subscribe, you should choose the binary attribute y as your target attribute. This binary attribute labels an
individual as having subscribed for a campaign in the past: 1 (yes) or 0 (no). When you select y as your
target attribute, Amazon ML identifies patterns in the datasource that was used for training to create a
mathematical model. The model can generate predictions about data for which you do not know the answer.
For example, if you want to predict your customers’ education levels, you would choose education as your
target attribute.
Note
Target attributes are required only if you use the datasource for training ML models and
evaluating ML models.
To select y as the target attribute
1. On the Target page, for Do you want to use this dataset to create and/or evaluate a ML model?,
choose Yes.
2. In the lower right of the table, choose the single arrow until the attribute y appears in the table.
3. In the Target column, choose the option next to y.
Amazon ML confirms that y is selected as your target.
4. Choose Continue.
5. On the Row ID page, for Do you want to select an identifier?, choose No.
6. Choose Review.
7. On the Review page, choose Finish.
Once you choose Finish, the request to create the datasource is submitted. The datasource moves into
Initialized status and takes a few minutes to reach Completed status. You do not need to wait for the
datasource to complete, so proceed to the next step.
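If you prefer to script this step, the console actions correspond roughly to the
CreateDataSourceFromS3 API operation. The following boto3 sketch is illustrative only: the datasource
ID and bucket are hypothetical placeholders, and it supplies a schema file because the API, unlike the
console, does not infer a schema for you (see Creating a Data Schema for Amazon ML).

import boto3

ml = boto3.client("machinelearning")

# Create a datasource that references banking.csv in S3 and compute
# the statistics needed for training. IDs and bucket are placeholders.
ml.create_data_source_from_s3(
    DataSourceId="ds-banking-data-1",
    DataSourceName="Banking Data 1",
    DataSpec={
        "DataLocationS3": "s3://example-bucket/banking.csv",
        "DataSchemaLocationS3": "s3://example-bucket/banking.csv.schema",
    },
    ComputeStatistics=True,
)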
3.3 Step 3: Create an ML Model
After the request to create the datasource has been submitted, you use it to train an ML model. The ML
model generates predictions by using your training datasource to identify patterns in the historical data.
To create an ML model
1. Choose Amazon Machine Learning, ML models.
On the ML models summary page, choose Create new ML model.
2. Because you’ve already created a datasource, choose I already created a datasource pointing to
my S3 data.
3. In the table, choose Banking Data 1, and then choose Continue.
4. On the ML model settings page, for ML model name, type Subscription propensity model.
Giving your ML model a human-readable name helps you identify and manage the ML model.
5. For Training and evaluation settings, choose Default.
6. For Name this evaluation, type Subscription propensity evaluation.
7. Choose Review.
8. Review your data, and then choose Finish.
Once you choose Finish, the following requests are submitted:
• Split the input datasource into 70% for training and 30% for evaluation
• Create the ML model to train on 70% of the input data
• Create an evaluation to evaluate the ML model on 30% of the input data
The split datasources, ML model, and evaluation move into Pending status and take a few minutes to reach
Completed status. You need to wait for the evaluation to complete before proceeding to step 4.
For more information, see Training ML Models and Evaluating ML Models.
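For readers scripting this step, here is a hedged boto3 sketch of what the Default setting does: split
the datasource with a DataRearrangement string, train a binary model on the first 70 percent, and
evaluate it on the remaining 30 percent. All IDs and S3 paths are hypothetical placeholders.

import boto3

ml = boto3.client("machinelearning")

data_uri = "s3://example-bucket/banking.csv"           # placeholder
schema_uri = "s3://example-bucket/banking.csv.schema"  # placeholder

# Two datasources over the same file: first 70% for training,
# the remaining 30% for evaluation.
for ds_id, begin, end in [("ds-train", 0, 70), ("ds-eval", 70, 100)]:
    ml.create_data_source_from_s3(
        DataSourceId=ds_id,
        DataSourceName=f"Banking Data 1 [{begin}-{end}%]",
        DataSpec={
            "DataLocationS3": data_uri,
            "DataSchemaLocationS3": schema_uri,
            "DataRearrangement": (
                '{"splitting": {"percentBegin": %d, "percentEnd": %d}}' % (begin, end)
            ),
        },
        ComputeStatistics=True,
    )

# Train a binary classification model on the 70% split.
ml.create_ml_model(
    MLModelId="ml-subscription-propensity",
    MLModelName="Subscription propensity model",
    MLModelType="BINARY",
    TrainingDataSourceId="ds-train",
)

# Evaluate the model against the held-out 30% split.
ml.create_evaluation(
    EvaluationId="ev-subscription-propensity",
    EvaluationName="Subscription propensity evaluation",
    MLModelId="ml-subscription-propensity",
    EvaluationDataSourceId="ds-eval",
)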
3.4 Step 4: Review the ML Model Predictive Performance and Set a Cut-Off
Now that the ML model is successfully created and evaluated, let's see if it is good enough to put to
use. Amazon ML has already computed an industry-standard quality metric, the Area Under the ROC Curve
(AUC), that expresses the predictive quality of your ML model. Start by reviewing and interpreting it.
3.4.1 Reviewing the AUC Metric
An evaluation describes whether or not your ML model is better than making random guesses. Amazon ML
interprets the AUC metric to tell you if the quality of the ML model is adequate for most machine learning
applications. Learn more about AUC in the Amazon Machine Learning Concepts
(http://docs.aws.amazon.com/machine-learning/latest/mlconcepts/).
Next, let’s look at the AUC metric of your ML model.
To view the AUC metric of your ML model
1. Choose Amazon Machine Learning, ML models.
2. In the ML models table, select Subscription propensity model.
3. On the ML model report page, choose Evaluations, Subscription propensity evaluation.
4. Choose Summary.
5. On the Evaluation summary page, review your information. This page includes a summary of your
evaluation, including the AUC performance metric of the ML model.
Next, you set a score threshold in order to change the ML model’s behavior when it makes a mistake.
3.4.2 Setting a Score Threshold
Our ML model works by generating numeric prediction scores, and then applying a threshold to convert
these scores into binary 0/1 labels. By changing the score threshold, you can adjust which records the
ML model predicts as 1 and which as 0.
To set a score threshold for your ML model
1. On the Evaluation summary page, choose Adjust Score Threshold.
Amazon ML displays the ML model performance results page. This page includes a chart that shows the
score distribution of your predictions. You use this page to view advanced metrics and the effect of
different score thresholds on the performance of your model. You can fine-tune your ML model
performance metrics by adjusting the score threshold value.
2. Let’s say you want to target the top 3% of the customers that are most likely to subscribe to the
product. Slide the vertical selector to set the score threshold to a value that corresponds to 3% of the
records predicted as “1”.
You can review the impact of this score threshold on the ML model’s performance. Now let’s say the false
positive rate of 0.007 is acceptable to your application.
3. Choose Save Score Threshold.
The score threshold is saved for this ML model.
Each time you use this ML model to make predictions, it will predict records with scores greater than
0.77 as "1", and the rest of the records as "0".
Remember, machine learning is an iterative process that requires you to discover what score threshold is
most appropriate for you. You can adjust the predictions by adjusting your score threshold based on your
use case.
To learn more about the score threshold, see the Amazon Machine Learning Concepts
(http://docs.aws.amazon.com/machine-learning/latest/mlconcepts/).
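If you script this step, the saved threshold corresponds to the ScoreThreshold parameter of the
UpdateMLModel API operation. A minimal boto3 sketch, with a placeholder model ID:

import boto3

ml = boto3.client("machinelearning")

# Persist the score threshold chosen above; scores above 0.77 will be
# labeled "1", the rest "0". The model ID is a placeholder.
ml.update_ml_model(
    MLModelId="ml-subscription-propensity",
    ScoreThreshold=0.77,
)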
3.5 Step 5: Use the ML Model to Create Batch Predictions
In Amazon ML, there are two ways to get predictions: batch and real-time. If your application requires
predictions to be generated in real time, you first need to enable the ML model for real-time
prediction, which makes it available to generate predictions on demand, at low latency. These
real-time predictions are usually used in interactive web, mobile, or desktop applications. For this
tutorial, you use the method that generates predictions for a large batch of input records at once,
without going through the real-time prediction interface.
A batch prediction is useful when you want to generate predictions for a set of observations all at once, and
you do not have a low latency requirement. For your targeted marketing campaign, you want a single file
with all of the answers included in it. In this sample problem, you are scoring your customers for whom
you have not yet marketed your new product as a batch, and you don’t need to predict who will subscribe to
the new product in real time.
3.5.1 Batch Predictions
When creating batch predictions, you select your banking data ML model as well as the prediction data
from which you want to generate predictions. When the request is complete, your batch predictions are
sent to an Amazon S3 bucket that you define. When Amazon ML makes the predictions, you will be able
to more effectively strategize and execute your targeted marketing campaign.
To create batch predictions
1. Choose Amazon Machine Learning, Batch predictions.
2. Choose Create new batch prediction.
3. On the ML Model for batch predictions page, choose Subscription propensity model from the
list.
The ML model name, ID, creation time, and the associated datasource ID appear.
4. Choose Continue.
To generate predictions, you need to provide Amazon ML with the data that you want predictions for.
This is called the input data.
5. For Locate the input data, choose My data is in S3, and I need to create a datasource.
6. For Datasource name, type Banking Data 2.
7. For S3 Location, enter the location of your banking-batch.csv.
8. For Does the first line in your CSV contain the column names?, choose Yes.
9. Choose Verify.
10. In the S3 permissions dialog box, choose Yes.
Amazon ML validates the location of your data.
11. Choose Continue.
12. For S3 destination, type an easily accessible Amazon S3 bucket for your prediction files.
13. For Batch prediction name, type Subscription propensity predictions.
14. In the S3 permissions dialog box, choose Yes.
15. Choose Review.
16. On the Review page, choose Finish.
The batch prediction request is sent to Amazon ML and entered into a queue. At first, the status of your
batch prediction is set as Pending. The time it takes for a batch prediction to complete depends on the size
of your datasource and the complexity of your ML model.
After the batch prediction has successfully completed, its status changes to Completed.
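For scripted workflows, the same steps map to two API calls: create a datasource for the unlabeled
input, then request the batch prediction. A hedged boto3 sketch, with placeholder IDs and bucket names:

import boto3

ml = boto3.client("machinelearning")

# Datasource for the unlabeled batch input file.
ml.create_data_source_from_s3(
    DataSourceId="ds-banking-data-2",
    DataSourceName="Banking Data 2",
    DataSpec={
        "DataLocationS3": "s3://example-bucket/banking-batch.csv",
        "DataSchemaLocationS3": "s3://example-bucket/banking-batch.csv.schema",
    },
    ComputeStatistics=False,  # statistics are only required for training
)

# Queue the batch prediction; results land under OutputUri.
ml.create_batch_prediction(
    BatchPredictionId="bp-subscription-propensity",
    BatchPredictionName="Subscription propensity predictions",
    MLModelId="ml-subscription-propensity",
    BatchPredictionDataSourceId="ds-banking-data-2",
    OutputUri="s3://example-bucket/batch-prediction-output/",
)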
To view the predictions
1. Choose Amazon Machine Learning, Batch predictions.
2. In the list of batch predictions, choose Subscription propensity predictions. The Batch prediction
info page appears.
3. Navigate to the Output S3 URL in your Amazon S3 console to view the batch prediction.
The prediction is stored in a compressed .gz file.
4. Download the file to your desktop, and uncompress and open the prediction file.
The file includes two columns: bestAnswer and score. The bestAnswer column is based on the score
threshold that you set in step 4.
3.5.2 Prediction Examples
The following examples show a positive and negative prediction based on the score threshold.
Positive prediction:
In the positive prediction example, the value for bestAnswer is 1, and the value of score is 0.88682. The
value for bestAnswer is 1 because the score value is above the score threshold of 0.77 that you saved.
Negative prediction:
The value of bestAnswer in the negative prediction example is 0 because the score value is 0.76525, which
is less than the score threshold of 0.77.
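To process the results programmatically rather than by hand, here is a small sketch that reads the
compressed output and applies the bestAnswer labels described above. The local file name is a
hypothetical placeholder for the file downloaded from S3.

import csv
import gzip

# Read the compressed batch prediction output and report each record's label.
with gzip.open("banking-batch-results.csv.gz", mode="rt", newline="") as f:
    for row in csv.DictReader(f):
        label = "likely subscriber" if row["bestAnswer"] == "1" else "unlikely"
        print(row["score"], label)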
3.6 Step 6: Clean Up
You have now successfully completed the tutorial. To prevent your account from accruing additional S3
charges, you should clean up the data stored in S3 for this tutorial.
To delete the input data used for training, evaluation, and batch prediction steps
1. Open the Amazon S3 console.
2. Navigate to the S3 bucket where you stored the banking.csv and banking-batch.csv.
3. Select the two files and the .writePermissionCheck.tmp file.
4. Choose Actions, Delete.
5. When prompted for confirmation, choose OK.
To delete the predictions generated from the batch prediction step
1. Open the Amazon S3 console.
2. Navigate to the bucket where you stored the output of the batch predictions.
3. Select the batch-prediction folder.
4. Choose Actions, Delete.
5. When prompted for confirmation, choose OK.
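If you prefer to clean up from a script rather than the console, the following boto3 sketch performs
the same deletions; the bucket name, keys, and output prefix are hypothetical placeholders.

import boto3

s3 = boto3.client("s3")

# Delete the tutorial input files and the temporary permission-check file.
s3.delete_objects(
    Bucket="example-bucket",
    Delete={"Objects": [
        {"Key": "banking.csv"},
        {"Key": "banking-batch.csv"},
        {"Key": ".writePermissionCheck.tmp"},
    ]},
)

# Delete every object under the batch prediction output prefix.
listing = s3.list_objects_v2(Bucket="example-bucket", Prefix="batch-prediction-output/")
for obj in listing.get("Contents", []):
    s3.delete_object(Bucket="example-bucket", Key=obj["Key"])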
To learn how to use the API, see the Amazon Machine Learning API Reference
(http://docs.aws.amazon.com/machine-learning/latest/APIReference/).
CHAPTER FOUR
CREATING AND USING DATASOURCES
You can use Amazon ML datasources to train an ML model, evaluate an ML model, and generate batch
predictions using an ML model. Datasource objects contain metadata about your input data. When you
create a datasource, Amazon ML reads your input data, computes descriptive statistics on its attributes, and
stores the statistics, a schema, and other information as part of the datasource object. Once you create a
datasource, you can use the Amazon ML data insights to explore statistical properties of your input data,
and you also can use the datasource to train an ML model.
Note
This section assumes you are familiar with Amazon Machine Learning concepts
(http://docs.aws.amazon.com/machine-learning/latest/mlconcepts/).
4.1 Understanding the Data Format for Amazon ML
You must save your input data in the comma-separated values (CSV) format. The input data is the data that
you use to create a datasource. Each row in the CSV corresponds to a single data record (observation).
Each column in the CSV file corresponds to an attribute. For example, the following .csv contents show
four observations, each on its own line. Each observation is divided into eight attribute values
separated by commas:
1,3,basic.4y,no,no,1,261,0
2,1,high.school,no,no,22,149,0
3,1,high.school,yes,no,65,226,1
4,2,basic.6y,no,no,1,151,0
4.1.1 Attributes
You can specify attribute names in one of two ways:
• Include the attribute names in the first line (also known as a header line) of the .csv file that you will
use as your input data
• Include the attribute names in a separate schema file that is located in the same S3 bucket as your
input data
For more information about using schema files, see Creating a Data Schema for Amazon ML.
The following example shows a .csv file that includes the names of the attributes in the first line.
customerId,jobId,education,housing,loan,campaign,duration,willRespondToCampaign
1,3,basic.4y,no,no,1,261,0
2,1,high.school,no,no,22,149,0
3,1,high.school,yes,no,65,226,1
4,2,basic.6y,no,no,1,151,0
4.1.2 CSV Format Requirements
The CSV format for Amazon ML must meet the following requirements:
• Plain text using a character set such as ASCII, Unicode, or EBCDIC.
• Consists of observations, one observation per line.
• Each observation is divided into attribute values separated by a comma delimiter.
• If an attribute value contains a comma (the delimiter), the entire attribute value must be enclosed in
double quotes.
• Each observation is terminated by an end-of-line character, which is a special character or sequence
of characters indicating the end of a line.
• Attribute values cannot include end-of-line characters, even if the attribute value is enclosed in
double quotes.
• Every observation must have the same number of attributes and sequence of attributes.
• Each observation must be no larger than 10 MB. Amazon ML rejects any observation greater than 10
MB during processing. If Amazon ML rejects more than 10,000 observations, then Amazon ML
rejects the entire .csv file.
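For example, the following observation (with an illustrative address attribute) is valid because the
value containing a comma is enclosed in double quotes, so it is read as a single attribute value:
1,"123 Main St, Anytown",0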
4.1.3 Using Multiple Files as Data Input to Amazon ML
You can provide your input to Amazon ML as a single file, or as a collection of several files. Collections
must satisfy these properties:
• All files must have the same data schema.
• All files must reside in a common Amazon S3 prefix, and the prefix must end with a forward slash
(‘/’) character.
For example, if your data files are named input1.csv, input2.csv, and input3.csv, and your Amazon S3
bucket name is s3://examplebucket, your file paths might look like this:
s3://examplebucket/path/to/data/input1.csv
s3://examplebucket/path/to/data/input2.csv
s3://examplebucket/path/to/data/input3.csv
In that case, you would provide the following S3 location as input to Amazon ML:
s3://examplebucket/path/to/data/
4.1.4 End-of-Line Characters in CSV Format
When you create your .csv file, each observation will be terminated by a special end-of-line character. This
character is not visible, but is automatically included at the end of each observation when you press your
Enter or Return key. The special character that represents the end-of-line varies depending on your
operating system. Unix-like systems such as Linux or OS X use a line feed character that is indicated by
“\n” (ASCII code 10 in decimal or 0x0a in hexadecimal). Microsoft Windows uses two characters called
carriage return and line feed that are indicated by “\r\n” (ASCII codes 13 and 10 in decimal or 0x0d and
0x0a in hexadecimal).
If you want to use OS X and Microsoft Excel to create your .csv file, perform the following procedure. Be
sure to choose the correct format.
To save a .csv file if you use OS X and Excel
1. When saving the .csv file, choose Format, and then choose Windows Comma Separated (.csv).
2. Choose Save.
Important
Do not save the .csv file by using the formats Comma Separated Values (.csv) or
MS-DOS Comma Separated (.csv) because Amazon ML will be unable to read them.
4.2 Uploading Your Data to Amazon S3
You must upload your input data to Amazon S3 because Amazon ML reads data from S3 locations. You
can upload your data directly to S3 (for example, from your computer), or Amazon ML can copy data that
you’ve stored in Amazon Redshift or Amazon Relational Database Service (RDS) into a .csv file and
upload it to S3.
For more information about copying your data from Amazon Redshift or Amazon RDS, see Using Amazon
Redshift with Amazon ML or Using Amazon RDS with Amazon ML.
The remainder of this section describes how to upload your input data directly from your computer to
Amazon S3. Before you begin the procedures in this section, you need to have your data in a .csv file. For
information about how to correctly format your .csv file so that Amazon ML can use it, see Understanding
the Data Format for Amazon ML.
To upload your data from your computer to Amazon S3
1. Sign in to the AWS Management Console and open the Amazon S3 console at
https://console.aws.amazon.com/s3.
2. Create a bucket or choose an existing bucket.
(a) To create a bucket, choose Create Bucket. Name your bucket, choose a region (you can choose
any available region), and then choose Create. For more information, see Create a Bucket
(http://docs.aws.amazon.com/AmazonS3/latest/gsg/CreatingABucket.html) in the Amazon
Simple Storage Service Getting Started Guide.
(b) To use an existing bucket, search for the bucket by choosing the bucket in the All Buckets list.
When the bucket name appears, select it, and then choose Upload.
3. In the Upload dialog box, choose Add Files.
4. Navigate to the folder that contains your input data .csv file, and then choose Open.
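You can also perform the upload from a script. A minimal boto3 sketch, assuming the file is in the
current directory and the bucket name is a placeholder:

import boto3

s3 = boto3.client("s3")

# Upload the local input file to the bucket where Amazon ML will read it.
s3.upload_file(Filename="banking.csv", Bucket="example-bucket", Key="banking.csv")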
4.3 Creating a Data Schema for Amazon ML
A schema is composed of all attributes in the input data and their corresponding data types. Amazon ML
uses the information in the schema to correctly read and interpret the input data, compute statistics, apply
the correct attribute transformations, and fine-tune its learning algorithms.
You can choose one of two ways to provide Amazon ML with a schema:
• Provide a separate schema file when you upload your Amazon S3 data
• Allow Amazon ML to infer the data types of each attribute in the input data file and automatically
create a schema for you
Four data types are available in Amazon ML:
• NUMERIC
• CATEGORICAL
• TEXT
• BINARY
For information about statistics associated with each data type, see Descriptive Statistics.
Each attribute must be assigned the correct data type so that Amazon ML can read the input data correctly
and produce accurate predictions. Let’s walk through an example to see how attributes are assigned to data
types, and how the attributes and data types are included in a schema. We’ll call our example “Customer
Campaign” because we want to predict which customers will respond to our email campaign. Our input file
consists of a .csv file with eight columns:
1,3,basic.4y,no,no,1,261,0
2,1,high.school,no,no,22,149,0
3,1,high.school,yes,no,65,226,1
4,2,basic.6y,no,no,1,151,0
{
    "version": "1.0",
    "rowId": "customerId",
    "targetAttributeName": "willRespondToCampaign",
    "dataFormat": "CSV",
    "dataFileContainsHeader": false,
    "attributes": [
        {
            "attributeName": "customerId",
            "attributeType": "NUMERIC"
        },
        {
            "attributeName": "jobId",
            "attributeType": "NUMERIC"
        },
        {
            "attributeName": "education",
            "attributeType": "CATEGORICAL"
        },
        {
            "attributeName": "housing",
            "attributeType": "BINARY"
        },
        {
            "attributeName": "loan",
            "attributeType": "BINARY"
        },
        {
            "attributeName": "campaign",
            "attributeType": "NUMERIC"
        },
        {
            "attributeName": "duration",
            "attributeType": "NUMERIC"
        },
        {
            "attributeName": "willRespondToCampaign",
            "attributeType": "BINARY"
        }
    ]
}
The customerId attribute and the NUMERIC data type are associated with the first column, the jobId
attribute and the NUMERIC data type are associated with the second column, the education attribute and
the CATEGORICAL data type are associated with the third column, and so on. The eighth column is
associated with the willRespondToCampaign attribute with a BINARY data type, and this attribute also is
defined as the target attribute.
4.3.1 Using the “targetAttributeName” Field
The target attribute is the name of the attribute that you want to predict. In the context of training an ML
model, the target attribute identifies the name of the attribute in the input data that contains the “correct”
answers for the target attribute. Amazon ML uses this input data, which includes the correct answers, to
discover patterns and generate an ML model. The ML model can be used to generate predictions on data
where the value of the target attribute is blank or missing.
You can define the target attribute in the Amazon Machine Learning console when you create a datasource,
or you can define it in a schema file. If you create your own schema file, use the following syntax to define
the target attribute:
"targetAttributeName": "attribute",
In the preceding example, attribute is the name of the attribute in your input file that will be defined as the
target attribute. In the schema for our Customer Campaign, the attribute willRespondToCampaign is
defined as the target attribute:
"targetAttributeName": "willRespondToCampaign",
4.3.2 Using the “rowID” Field
The row ID is an optional flag associated with an attribute in the input data. If specified, the attribute
marked as the row ID is included in the prediction output. This attribute makes it easier to associate which
prediction corresponds with which observation. An example of a good row ID is a customer ID or a
similar, unique attribute.
Note
The row ID is for your reference only and is not used when training an ML model; selecting an
attribute as the row ID excludes it from training.
You can define the row ID in the Amazon ML console when you create a datasource or by defining it in a
schema file. If you are creating your own schema file, use the following syntax to define the row ID:
"rowId": "attribute",
In the preceding example, attribute is the name of the attribute in your input file that will be defined as the
row ID.
In the schema file of our "Customer Campaign" example, the attribute customerId is defined as the row ID:
"rowId": "customerId",
The following is valid output we could obtain when generating batch predictions:
bestAnswer,score,Row ID
0,0.46317,55
1,0.89625,102
In the preceding example, Row ID represents the attribute customerId. For example, customerId 55 is
predicted with low confidence (0.46317) to respond to our email campaign, while customerId 102 is
predicted with high confidence (0.89625) to respond to our email campaign.
4.3.3 Providing Schema to Amazon ML
After you create your schema file, you need to make it available to Amazon ML. You can choose one of
two options:
1. Provide the schema by using the Amazon ML console
Use the console to create your datasource, and include the schema file by appending the
.schema extension to the file name of your input data. For example, if the S3 URI to your input
data is s3://my-bucket-name/data/input.csv, the file name of your schema will be
s3://my-bucket-name/data/input.csv.schema. Amazon ML will automatically locate the
schema file that you provided instead of attempting to infer the schema from your data.
If you want to use a directory of files as your data input to Amazon ML, append the .schema
extension to your directory path. For example, if your data files reside in the location
s3://examplebucket/path/to/data/, the file name of your schema will be
s3://examplebucket/path/to/data/.schema.
2. Provide the schema by using the Amazon Machine Learning API
If you plan to call the Amazon Machine Learning API to create your datasource, you can
upload the schema file into S3, and then provide the URI to that file in the
DataSchemaLocationS3 attribute of CreateDataSourceFromS3. For more information, see
CreateDataSourceFromS3
(http://docs.aws.amazon.com/machine-learning/latest/APIReference/).
You also can choose to provide the schema directly in the payload of CreateDataSource*
APIs instead of first saving it to S3. You do this by placing the full schema string in the
DataSchema attribute of CreateDataSourceFromS3, CreateDataSourceFromRDS or
CreateDataSourceFromRedshift APIs. For more information, see the Amazon Machine
Learning API Reference
(http://docs.aws.amazon.com/machine-learning/latest/APIReference/).
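A hedged boto3 sketch of the second option, passing the schema string inline in the DataSchema
attribute. The bucket and datasource ID are placeholders, and the schema below is trimmed to two
attributes for brevity; a real schema must list every column in the file.

import boto3
import json

ml = boto3.client("machinelearning")

# Trimmed version of the "Customer Campaign" schema above (illustrative only).
schema = json.dumps({
    "version": "1.0",
    "rowId": "customerId",
    "targetAttributeName": "willRespondToCampaign",
    "dataFormat": "CSV",
    "dataFileContainsHeader": False,
    "attributes": [
        {"attributeName": "customerId", "attributeType": "NUMERIC"},
        {"attributeName": "willRespondToCampaign", "attributeType": "BINARY"},
    ],
})

ml.create_data_source_from_s3(
    DataSourceId="ds-customer-campaign",
    DataSourceName="Customer Campaign",
    DataSpec={
        "DataLocationS3": "s3://my-bucket-name/data/input.csv",
        "DataSchema": schema,
    },
    ComputeStatistics=True,
)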
4.4 Data Insights
Amazon ML computes descriptive statistics on your input data that you can use to understand your data.
4.4.1 Descriptive Statistics
Amazon ML computes the following descriptive statistics for different attribute types:
Numeric:
• Distribution histograms
• Number of invalid values
• Minimum, median, mean, and maximum values
Binary and categorical:
• Count (of distinct values per category)
• Value distribution histogram
• Most frequent values
• Unique values counts
• Percentage of true value (binary only)
Text:
• Most prominent words
• Most frequent words
4.4.2 Accessing Data Insights on the Amazon ML console
On the Amazon ML console, you can choose the name or ID of any datasource to view its Data Insights
page. This page provides metrics and visualizations that enable you to learn about the input data associated
with the datasource, including the following information:
• Data summary
• Target distributions
• Missing values
• Invalid values
• Summary statistics of variables by data type
• Distributions of variables by data type
The following sections describe the metrics and visualizations in greater detail.
Data Summary
The data summary report of a datasource displays summary information, including the datasource ID,
name, when it was completed, current status, target attribute, input data information (S3 bucket
location, data format, number of records processed, and number of bad records encountered during
processing), as well as the number of variables by data type.
Target Distributions
The target distributions report shows the distribution of the target attribute of the datasource. In the
following example, there are 39,922 observations where the willRespondToCampaign target attribute
equals 0. This is the number of customers who did not respond to the email campaign. There are 5,289
observations where willRespondToCampaign equals 1. This is the number of customers who responded to
the email campaign.
Missing Values
The missing values report indicates the attributes in the input data that contain missing values. Only
attributes with numeric data types can have missing values. Because missing values can affect the quality
of training an ML model, all missing values should be corrected, if possible.
Note
If Amazon ML encounters an observation in the input data that contains missing values, it will
ignore or reject that observation. If more than 10,000 observations are rejected during the
processing of a datasource, the entire input data will be rejected, and the
datasource creation will fail.
Invalid Values
Invalid values can occur only with Numeric and Binary data types. You can find invalid values by viewing
the summary statistics of variables in the data type reports. In the following examples, there is one invalid
value in the duration Numeric attribute and two invalid values in the Binary data type (one in the housing
attribute and one in the loan attribute).
Variable-Target Correlation
After you create a datasource, Amazon ML can evaluate the datasource and identify the correlation, or
impact, between variables and the target. For example, the price of a product might have a significant
impact on whether or not it is a best seller, while the dimensions of the product might have little predictive
power.
It is generally a best practice to include as many variables in your training data as possible. However, the
noise introduced by including many variables with little predictive power might negatively affect the
quality and accuracy of your ML model.
You might be able to improve the predictive performance of your model by removing variables that have
little impact when you train your model. You can define which variables are made available to the machine
learning process in a recipe, which is a transformation mechanism of Amazon ML. To learn more about
recipes, see Data Transformation for Machine Learning.
Summary Statistics of Attributes by Data Type
In the data insights report, you can view attribute summary statistics by the following data types:
• Binary
• Categorical
• Numeric
• Text
Summary statistics for the Binary data type show all binary attributes. The Correlations to target column
shows the information shared between the target column and the attribute column. The Percent true
column shows the percentage of observations that have value 1. The Invalid values column shows the
number of invalid values as well as the percentage of invalid values for each attribute. The Preview column
provides a link to a graphical distribution for each attribute.
Summary statistics for the Categorical data type show all Categorical attributes with the number of unique
values, most frequent value, and least frequent value. It also provides a link to a graphical distribution for
each attribute.
You also can view summary statistics by Numeric data type. The statistics show all Numeric attributes with
the number of missing values, invalid values, range of values, mean, and median. It also provides a link to a
graphical distribution for each attribute.
Understanding the Distribution of Categorical and Binary Attributes
By clicking the Preview link associated with a categorical or binary attribute, you can view that attribute’s
distribution as well as the sample data from the input file for each categorical value of the attribute.
For example, the following screenshot shows the distribution for the categorical attribute jobId. The
distribution displays the top 10 categorical values, with all other values grouped as “other”. It ranks each of
the top 10 categorical values with the number of observations in the input file that contain that value, as
well as a link to view sample observations from the input data file.
Understanding the Distribution of Numeric Attributes
To view the distribution of a numeric attribute, click the Preview link of the attribute. When viewing the
distribution of a numeric attribute, you can choose bin sizes of 500, 200, 100, 50, or 20. The larger the bin
size, the fewer bars are displayed and the coarser the resolution of the distribution. In contrast, setting the
bin size to 20 increases the resolution of the displayed distribution.
The minimum, mean, and maximum values are also displayed, as shown in the following screenshot.
4.5 Using Amazon S3 with Amazon ML
Amazon Simple Storage Service (Amazon S3) is storage for the Internet. You can use Amazon S3 to store
and retrieve any amount of data at any time, from anywhere on the web. Amazon ML uses Amazon S3 as a
primary data repository.
Amazon ML can access your input files in Amazon S3 to create datasource objects for training and
evaluating your ML models. When you generate batch predictions by using your ML models, Amazon ML
outputs the prediction file to a bucket that you specify. To enable Amazon ML to perform these tasks, you
must grant permissions.
4.5.1 Permissions
To grant permissions for Amazon ML to access one of your S3 buckets, you must edit the bucket policy.
To grant Amazon ML permission to read data from your bucket in S3, see Granting Amazon ML
Permissions to Read Your Data from Amazon S3.
To grant Amazon ML permission to output the batch prediction results to your bucket in S3, see Granting
Amazon ML Permissions to Output Predictions to Amazon S3.
For more information about managing access permissions to Amazon S3 resources, see the Amazon S3
Developer Guide (http://docs.aws.amazon.com/AmazonS3/latest/dev/s3-access-control.html).
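As a rough illustration of the kind of bucket policy involved (the bucket name is a placeholder; see the pages referenced above for the authoritative policy text), the following boto3 sketch attaches a policy that lets the Amazon ML service principal list the bucket and read its objects:

import json
import boto3

# A hedged sketch; "examplebucket" is a placeholder bucket name.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # let Amazon ML list the bucket
            "Effect": "Allow",
            "Principal": {"Service": "machinelearning.amazonaws.com"},
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::examplebucket",
        },
        {   # let Amazon ML read the input objects
            "Effect": "Allow",
            "Principal": {"Service": "machinelearning.amazonaws.com"},
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::examplebucket/*",
        },
    ],
}

boto3.client("s3").put_bucket_policy(
    Bucket="examplebucket", Policy=json.dumps(policy)
)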
4.5.2 Amazon S3 Regions and Data Consistency
Amazon S3 provides eventual consistency for some operations, so it is possible that new data will not be
available immediately after the upload, which could result in an incomplete data load or loading stale data.
All COPY operations from buckets in the US Standard Region are eventually consistent. Any COPY
operations where the cluster and the bucket are in different regions are also eventually consistent. All other
regions provide read-after-write consistency for uploads of new objects with unique file names, also known
as object keys.
To ensure that your application loads the correct data, we recommend that you create new object keys. To
avoid slow read times, we also recommend that you store your data in the same region as the Amazon ML
endpoint.
Amazon S3 provides eventual consistency in all regions for overwrite operations. Creating new object keys
in Amazon S3 for each data load operation provides strong consistency in all regions except US Standard.
For more information about managing object keys and metadata, see Object Key and Metadata
(http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMetadata.html) in the Amazon S3 Developer
Guide.
For more information about data consistency, see Amazon S3 Data Consistency Model
(http://docs.aws.amazon.com/AmazonS3/latest/UG/Introduction.html#ConsistencyMode) in the Amazon
S3 Developer Guide.
4.6 Using Amazon Redshift with Amazon ML
Amazon ML allows you to create a datasource object from data residing in Amazon Redshift. When you
perform this action, Amazon ML executes the SQL query that you specify by invoking the Amazon Redshift
UNLOAD command on the Amazon Redshift cluster that you specify. Amazon ML stores the results at the
S3 location of your choice, and then creates the datasource object from that S3 data.
In order for Amazon ML to connect to your Amazon Redshift database and read data on your behalf, you
need to provide the following:
• Amazon Redshift cluster ID/name
• Amazon Redshift database name
• Database credentials (username and password)
• SQL query that specifies the data that you want to use to create the datasource
• S3 output location used to store the results of the UNLOAD command
• IAM role with the security context that is used to make the connection
• (Optional) Location of the data schema file
Additionally, you need to ensure that the IAM users or roles that create Amazon Redshift datasources
(whether through the console, or by using the CreateDataSourceFromRedshift API) have the iam:PassRole
permission. For more information, see Configuring IAM User or Role Permission to Enable Role Passing.
Note
Amazon ML does not support Amazon Redshift clusters in private VPCs.
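The following is a minimal sketch, with placeholder names throughout, of how the parameters listed above might map onto the CreateDataSourceFromRedshift API when called through the AWS SDK for Python (boto3):

import boto3

ml = boto3.client("machinelearning")

ml.create_data_source_from_redshift(
    DataSourceId="ds-redshift-001",                      # placeholder ID
    DataSourceName="Redshift training data",
    DataSpec={
        "DatabaseInformation": {
            "ClusterIdentifier": "my-redshift-cluster",  # cluster ID/name
            "DatabaseName": "mydb",
        },
        "DatabaseCredentials": {
            "Username": "ml_user",                       # placeholder credentials
            "Password": "example-password",
        },
        "SelectSqlQuery": "SELECT col1, col2 FROM training_table ORDER BY random()",
        "S3StagingLocation": "s3://mybucket/AmazonMLInput/",
        # "DataSchemaUri": "s3://mybucket/schema.json",  # optional schema file
    },
    RoleARN="arn:aws:iam::123456789012:role/AmazonMLRedshiftRole",
    ComputeStatistics=True,
)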
4.6.1 Amazon Redshift Cluster ID/Name
This parameter enables Amazon ML to find and connect to your cluster. You can obtain the cluster ID from
the Amazon Redshift console. This parameter is case sensitive. For more information about clusters, see
Amazon Redshift Clusters (http://docs.aws.amazon.com/redshift/latest/mgmt/working-with-clusters.html).
4.6.2 Amazon Redshift Database Name
This parameter tells Amazon ML which database within the Amazon Redshift cluster contains the data that
you want to access.
4.6.3 Amazon Redshift Database Credentials
These parameters specify the username and password of the Amazon Redshift database user in whose
security context the query will be executed.
Note
A username and password are required to allow Amazon ML to connect to your Amazon
Redshift database. Once the connection is established, your password is no longer used by
Amazon ML, and Amazon ML never stores your password.
4.6.4 Amazon Redshift SQL Query
This parameter specifies the SQL SELECT query to be executed on your Amazon Redshift database.
Amazon ML uses the Amazon Redshift UNLOAD
(http://docs.aws.amazon.com/redshift/latest/dg/t_Unloading_tables.html) command to securely copy the
results of your query to S3.
Note
Machine learning technology works best when input records are presented in random order
(shuffled). You can easily shuffle the results of your Amazon Redshift SQL query by using its
random() function. For example, let's say the original query is the following:
"SELECT col1, col2, ... FROM training_table"
You can embed random shuffling by updating the query:
"SELECT col1, col2, ... FROM training_table ORDER BY random()"
4.6.5 S3 Output Location
This parameter specifies the name of the “staging” S3 location where the results of the Amazon Redshift
SQL query will be stored.
Note
Because Amazon ML assumes the IAM role defined in the “Amazon ML Amazon Redshift
IAM role” section, it will have permissions to access any objects in the specified S3 staging
location. Because of this, we recommend that you do not store any sensitive or confidential
files in this S3 staging location. For example, if your root bucket is s3://mybucket/, we suggest
you create a location just to store files that you want Amazon ML to access, such as
s3://mybucket/AmazonMLInput/.
4.6.6 Amazon ML Amazon Redshift IAM role
This parameter specifies the name of the IAM role that will be used to automatically configure security
groups for the Amazon Redshift cluster and the S3 bucket policy for the S3 staging location.
Security Group Configuration
Amazon ML needs inbound access to your Amazon Redshift cluster to establish a connection. Amazon
Redshift cluster security groups or Amazon VPC security groups govern inbound access to your Amazon
Redshift cluster, as explained in Amazon Redshift Cluster Security Groups
(http://docs.aws.amazon.com/redshift/latest/mgmt/working-with-security-groups.html) in the Amazon
Redshift Cluster Management Guide. You need to create an IAM role with permissions to configure these
security groups and provide this role’s name to Amazon ML. Amazon ML uses this role to configure
access to your cluster from the list of IP addresses (CIDRs) associated with Amazon ML. The same role
can be used later to automatically update the list of CIDRs associated with Amazon ML, with no action
required on your part.
S3 Bucket Policy Configuration
When Amazon ML executes the Amazon Redshift query to retrieve your data, the results will be placed
into an intermediate S3 location. By configuring your IAM role with permissions to create and retrieve S3
objects and modify bucket policies, you can eliminate the work needed to configure and manage these
permissions ahead of time. Specifically, you need to grant the following permissions to Amazon ML:
• s3:PutObject: Grants Amazon ML the permissions to write unloaded results of your Amazon
Redshift query to Amazon S3
• s3:ListBucket and s3:GetObject: Grants Amazon ML the permissions to access and read these
results from Amazon S3 in order to create a datasource
• s3:PutObjectAcl: Enables Amazon ML to give the bucket owner full control of the unloaded S3
objects
Creating and Configuring the Amazon ML Redshift Role
An IAM role has two parts: a permissions policy (or policies) that states the permissions given to the role,
and a trust policy that states who can assume the role.
The following permission policy defines a role that gives permission to create and modify EC2 and
Amazon Redshift security groups, and configures read and write access to the S3 bucket where data
unloaded from Amazon Redshift will be stored:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "ec2:DescribeSecurityGroups",
        "ec2:AuthorizeSecurityGroupIngress",
        "ec2:CreateSecurityGroup",
        "ec2:DescribeInternetGateways",
        "ec2:RevokeSecurityGroupIngress",
        "s3:GetObject",
        "s3:GetBucketLocation",
        "s3:GetBucketPolicy",
        "s3:PutBucketPolicy",
        "s3:PutObject",
        "redshift:CreateClusterSecurityGroup",
        "redshift:AuthorizeClusterSecurityGroupIngress",
        "redshift:RevokeClusterSecurityGroupIngress",
        "redshift:DescribeClusterSecurityGroups",
        "redshift:DescribeClusters",
        "redshift:ModifyCluster"
      ],
      "Effect": "Allow",
      "Resource": ["*"]
    }
  ]
}
The following trust policy allows Amazon ML to assume the role that is defined in the preceding example:
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Service": "machinelearning.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
Configuring IAM User or Role Permission to Enable Role Passing
The IAM user or role that will create Amazon ML datasources from Amazon Redshift needs permission to
pass roles to other services—that is what enables it to pass the IAM role defined in the preceding example.
The permission needed to accomplish that is iam:PassRole. The following example shows how to
configure the iam:PassRole permission to pass any role belonging to the AWS account that is making the
request. This policy can be restricted to specific roles if needed:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["iam:PassRole"],
      "Resource": ["*"]
    }
  ]
}
4.6.7 Location of the Data Schema File
This parameter specifies the S3 path to the schema of the data that will be exported from Amazon Redshift.
If you do not specify a schema, Amazon ML will infer one based on the exported data.
4.6.8 Example
The following screenshot shows values for the parameters that you need to set to use Amazon Redshift
with Amazon ML. This screen appears in the first step of the Create Datasource wizard, when you click on
the Redshift radio button to select the data location.
4.7 Using Amazon RDS with Amazon ML
Amazon ML allows you to create a datasource object from data stored in a MySQL database in Amazon
Relational Database Service (Amazon RDS). When you perform this action, Amazon ML creates an AWS
Data Pipeline object that executes the SQL query that you specify, and places the output into an S3 bucket
of your choice. Amazon ML uses that data to create the datasource.
In order for Amazon ML to connect to your MySQL database in RDS and read data on your behalf, you
need to provide the following:
• RDS database instance identifier
• MySQL database name
• IAM role that is used to create, activate, and execute the data pipeline
• Database user credentials
– User name
– Password
• AWS Data Pipeline security information
– IAM resource role
– IAM service role
• RDS security information
– Subnet ID
– Security group IDs
• SQL query that specifies the data that you want to use to create the datasource
• S3 output location (bucket) used to store the results of the query execution
• (Optional) Location of the data schema file
Additionally, you need to ensure that the IAM users or roles that create Amazon RDS datasources
(whether through the console, or by using the CreateDataSourceFromRDS API) have the iam:PassRole
permission. For more information, see Configuring an IAM User or Role Permission to Enable Role
Passing.
Note
Amazon ML supports MySQL databases only in VPCs.
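The following is a minimal sketch, with placeholder identifiers throughout, of how the parameters listed above might map onto the CreateDataSourceFromRDS API when called through the AWS SDK for Python (boto3):

import boto3

ml = boto3.client("machinelearning")

ml.create_data_source_from_rds(
    DataSourceId="ds-rds-001",                              # placeholder ID
    DataSourceName="RDS MySQL training data",
    RDSData={
        "DatabaseInformation": {
            "InstanceIdentifier": "my-rds-instance",        # RDS instance identifier
            "DatabaseName": "mydb",
        },
        "DatabaseCredentials": {
            "Username": "ml_user",                          # placeholder credentials
            "Password": "example-password",
        },
        "SelectSqlQuery": "SELECT col1, col2 FROM training_table ORDER BY rand()",
        "S3StagingLocation": "s3://mybucket/AmazonMLInput/",
        "ResourceRole": "DataPipelineDefaultResourceRole",  # EC2 resource role
        "ServiceRole": "DataPipelineDefaultRole",           # Data Pipeline service role
        "SubnetId": "subnet-0ab1c2d3",                      # placeholder VPC subnet
        "SecurityGroupIds": ["sg-0ab1c2d3"],                # placeholder security group
    },
    RoleARN="arn:aws:iam::123456789012:role/AmazonMLRDSRole",
    ComputeStatistics=True,
)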
4.7.1 RDS Database Instance Identifier
The RDS database instance identifier is a customer-supplied name that uniquely identifies the database
instance when interacting with Amazon RDS. You can find the RDS database instance identifier in the
RDS console.
4.7.2 MySQL Database Name
This parameter specifies the name of the MySQL database within the RDS database instance.
4.7.3 IAM Role Used to Create, Activate, and Execute the Data Pipeline
This parameter specifies the IAM role that Amazon ML assumes on behalf of the user to create and activate
a data pipeline in the user’s account and copy data (the result of the SQL query) from Amazon RDS to
Amazon S3. Amazon ML then creates a datasource based on the data in Amazon S3.
An IAM role has two parts: a permissions policy (or policies) that states the permissions given to the role,
and a trust policy that states who can assume the role.
The following permissions policy defines a role that gives permission to describe the RDS database
instances; describe the database security name; describe, create, modify and activate the data pipelines; and
pass the resource and service roles to AWS Data Pipeline. Additionally, it allows Amazon ML to list the
contents of the S3 bucket where data will be output. This bucket name must be the same as the bucket
name that is passed as a part of the S3 output location parameter. Finally, the policy grants the GetObject
permission for the S3 path where the data will be output. For example, if you specify
“s3://examplebucket/output/path” as the output staging location, you will need to grant the s3:ListBucket
permission to the “arn:aws:s3:::examplebucket” resource, and the s3:GetObject permission to the
“arn:aws:s3:::examplebucket/output/path” resource.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "rds:DescribeDBInstances",
        "rds:DescribeDBSecurityGroups",
        "datapipeline:DescribeObjects",
        "datapipeline:CreatePipeline",
        "datapipeline:PutPipelineDefinition",
        "datapipeline:ActivatePipeline",
        "datapipeline:DescribePipelines",
        "datapipeline:QueryObjects",
        "iam:PassRole"
      ],
      "Resource": ["*"]
    },
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::examplebucket"]
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::examplebucket/output/path"]
    }
  ]
}
The following trust policy allows Amazon ML to assume the role defined in the preceding example:
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Service": "machinelearning.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
4.7.4 Database User Credentials
To connect to the Amazon RDS database instance, you must supply the user name and password of the
database user who has sufficient permissions to execute the query that you provide.
4.7.5 AWS Data Pipeline Security Information
The AWS Data Pipeline security information that you must provide consists of the names of two roles: the
resource role and the service role.
The resource role is assumed by an Amazon EC2 instance to carry out the copy task from Amazon RDS to
Amazon S3. The easiest way to create this resource role is by using the
DataPipelineDefaultResourceRole template, and listing machinelearning.amazonaws.com as a trusted
service. For more information about the template, see Setting Up IAM Roles
(http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-iam-roles.html) in the AWS Data
Pipeline Developer Guide.
The service role is assumed by AWS Data Pipeline to monitor the progress of the copy task from Amazon
RDS to Amazon S3. The easiest way to create this service role is by using the DataPipelineDefaultRole
template, and listing machinelearning.amazonaws.com as a trusted service. For more information about the
template, see Setting Up IAM Roles
(http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-iam-roles.html) in the AWS Data
Pipeline Developer Guide.
4.7.6 Amazon RDS Security Information
The Amazon RDS security information needed to connect to Amazon RDS is contained in two parameters:
VPC Subnet ID and RDS Security Group IDs. You need to set up appropriate ingress rules for the VPC
subnet identified by the Subnet ID parameter, and provide the ID of the security group that has this
permission.
4.7.7 MySQL SQL Query
This parameter specifies the SQL SELECT query to be executed on your MySQL database. Results of the
command execution will be copied to S3.
Note
Machine learning technology works best when input records are presented in random order
(shuffled). You can easily shuffle the results of your MySQL SQL query by using the rand()
function. For example, let's say the original query is the following:
"SELECT col1, col2, ... FROM training_table"
You can add random shuffling by updating the query:
"SELECT col1, col2, ... FROM training_table ORDER BY rand()"
4.7.8 S3 Output Location
This parameter specifies the name of the “staging” S3 location where the results of the MySQL SQL query
will be output.
Note
You need to ensure that Amazon ML has permissions to read data from this location once the
data export from RDS is completed. For information about setting these permissions, see
Granting Amazon ML Permissions to Read Your Data from Amazon S3.
4.7.9 Configuring an IAM User or Role Permission to Enable Role Passing
The IAM user or role that will create Amazon ML datasources from Amazon RDS needs permission to
pass roles to other services—that is what enables it to pass the IAM role defined in the preceding example.
The permission needed to accomplish that is iam:PassRole. The following example policy shows how to
configure the iam:PassRole permission to pass any role belonging to the AWS account that makes the
request. This policy can be restricted to specific roles if needed:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["iam:PassRole"],
      "Resource": ["*"]
    }
  ]
}
CHAPTER
FIVE
TRAINING ML MODELS
The process of training an ML model involves providing an ML algorithm (that is, the learning algorithm)
with training data to learn from. The term ML model refers to the model artifact that is created by the
training process.
The training data must contain the correct answer, which is known as a target or target attribute. The
learning algorithm finds patterns in the training data that map the input data attributes to the target (the
answer that you want to predict), and it outputs an ML model that captures these patterns.
You can use the ML model to get predictions on new data for which you do not know the target. For
example, let’s say that you want to train an ML model to predict if an email is spam or not spam. You
would provide Amazon ML with training data that contains emails for which you know the target (that is, a
label that tells whether an email is spam or not spam). Amazon ML would train an ML model by using this
data, resulting in a model that attempts to predict whether new email will be spam or not spam.
For general information about ML models and ML algorithms, see Machine Learning Concepts
(http://docs.aws.amazon.com/machine-learning/latest/mlconcepts).
5.1 Types of ML Models
Amazon ML supports three types of ML models: binary classification, multiclass classification, and
regression. The type of model you should choose depends on the type of target that you want to predict.
5.1.1 Binary Classification Model
ML models for binary classification problems predict a binary outcome (one of two possible classes). To
train binary classification models, Amazon ML uses the industry-standard learning algorithm known as
logistic regression.
Examples of Binary Classification Problems
• “Is this email spam or not spam?”
• “Will the customer buy this product?”
• “Is this product a book or a farm animal?”
• “Is this review written by a customer or a robot?”
5.1.2 Multiclass Classification Model
ML models for multiclass classification problems allow you to generate predictions for multiple classes
(predict one of more than two outcomes). For training multiclass models, Amazon ML uses the
industry-standard learning algorithm known as multinomial logistic regression.
Examples of Multiclass Problems
• “Is this product a book, movie, or clothing?”
• “Is this movie a romantic comedy, documentary, or thriller?”
• “Which category of products is most interesting to this customer?”
5.1.3 Regression Model
ML models for regression problems predict a numeric value. For training regression models, Amazon ML
uses the industry-standard learning algorithm known as linear regression.
Examples of Regression Problems
• “What will the temperature be in Seattle tomorrow?”
• “For this product, how many units will sell?”
• “What price will this house sell for?”
5.2 Training Process
To train an ML model, you need to specify the following:
• Input training datasource
• Name of the data attribute that contains the target to be predicted
• Required data transformation instructions
• Training parameters to control the learning algorithm
During the training process, Amazon ML automatically selects the correct learning algorithm for you,
based on the type of target that you specified in the training datasource.
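As an illustration, the following sketch (IDs and names are placeholders) shows how these inputs might map onto the CreateMLModel API using the AWS SDK for Python (boto3). Note that when calling the API directly, you also pass an MLModelType that must match your target type:

import boto3

ml = boto3.client("machinelearning")

ml.create_ml_model(
    MLModelId="ml-example-001",               # placeholder ID
    MLModelName="Campaign response model",
    MLModelType="BINARY",                     # chosen to match a binary target
    TrainingDataSourceId="ds-example-001",    # datasource created earlier
    Parameters={                              # training parameters (section 5.3)
        "sgd.maxPasses": "10",
        "sgd.l2RegularizationAmount": "1e-6",
    },
    # Recipe / RecipeUri omitted: Amazon ML falls back to the default recipe.
)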
5.3 Training Parameters
Typically, machine learning algorithms accept parameters that can be used to control certain properties of
the training process and of the resulting ML model. In Amazon ML, these are called training parameters.
You can set these parameters using the Amazon ML console, API, or command-line tools. If you do not set
any parameters, Amazon ML will use default values that are known to work well for a large range of
machine learning tasks.
You can specify values for the following training parameters:
• Maximum model size
• Maximum number of passes over training data
• Regularization type
• Regularization amount
In the Amazon ML console, the training parameters are set by default. The default settings are adequate for
most ML problems, but you can choose other values to fine-tune the performance. Certain other training
parameters, such as the learning rate, are configured for you based on your data.
The following sections provide more information about the training parameters.
5.3.1 Maximum Model Size
The maximum model size is the total size, in units of bytes, of patterns that Amazon ML creates during the
training of an ML model.
By default, Amazon ML creates a 100 MB model. You can instruct Amazon ML to create a smaller or
larger model by specifying a different size. For the range of available sizes, see Training Parameters: Types
and Default Values.
If Amazon ML cannot find enough patterns to fill the model size, it creates a smaller model. For example,
if you specify a maximum model size of 100 MB, but Amazon ML finds patterns that total only 50 MB, the
resulting model will be 50 MB. If Amazon ML finds more patterns than will fit into the specified size, it
enforces a maximum cut-off by trimming the patterns that least affect the quality of the learned model.
Choosing the model size allows you to control the trade-off between a model’s predictive quality and the
cost of use. Smaller model sizes potentially result in many patterns being removed to fit under the
maximum size limit, affecting the quality of predictions. Larger models, on the other hand, cost more to
query for real-time predictions.
Note
If you use an ML model to generate real-time predictions, you will incur a small capacity
reservation charge that is determined by the model’s size. For more information, see Amazon
ML pricing.
Larger input datasets do not necessarily result in larger models because models store patterns and not input
data—if the patterns are few and simple, the resulting model will be small. Input data that has a large
number of raw attributes (input columns) or derived features (outputs of the Amazon ML data
transformations) will likely have more patterns that will be found and stored during the training process.
Picking the correct model size for your data and problem is best approached with a few experiments. The
Amazon ML model training log (which you can download from the console or through the API) contains
messages about how much model trimming (if any) occurred during the training process, allowing you to
estimate the potential impact on prediction quality.
5.3.2 Maximum Number of Passes over the Data
For best results, Amazon ML may need to make multiple passes over your data to discover patterns. By
default, Amazon ML makes ten passes, but you can change the default by setting a number up to 100.
Amazon ML keeps track of the quality of patterns (model convergence) as it goes along, and automatically
stops the training when there are no more data points or patterns to discover. For example, if you set the
number of passes to 20, but Amazon ML discovers that no new patterns can be found by the end of 15
passes, then it will stop the training at 15 passes.
In general, datasets with only a few observations typically require more passes over the data to obtain
higher quality models. Larger datasets often contain many similar data points, which remove the need for a
large number of passes. The impact of choosing more data passes over your data is two-fold: model
training takes longer, and it costs more.
5.3.3 Regularization Type and Amount
Complex ML models (those having many input attributes) could suffer in predictive performance because
of the excessive number of patterns discovered in the data. As the number of patterns increases, so does the
likelihood that the model learns unintentional data artifacts rather than true data patterns. This results in the
model doing very well on the training data but being unable to generalize well on data that was not used
during training. This phenomenon is known as overfitting the training data.
Regularization helps prevent linear models from overfitting training data examples by penalizing extreme
weight values. L1 regularization has the effect of reducing the number of features used in the model by
pushing to zero the weight of features that would otherwise have very small weights. L1 regularization thus
results in sparse models and reduces the amount of noise in the model. L2 regularization results in smaller
overall weight values and has the effect of stabilizing the weights when there is high correlation between
the features. You can control the amount of L1 or L2 regularization by using the regularization parameter.
An extremely large regularization parameter could result in all features having zero weights.
Selecting and tuning the optimal regularization approach is an active subject in machine learning research.
You will probably benefit from selecting a moderate amount of L2 regularization, which is the default in
the Amazon ML console. Advanced users can choose between three types of regularization (none, L1, or
L2) and amount. For more information about regularization, go to Regularization (mathematics)
(http://en.wikipedia.org/wiki/Regularization_(mathematics)) on Wikipedia.
5.3.4 Training Parameters: Types and Default Values
The following table shows the available training parameters in Amazon ML, along with the default values
and the allowable range for each.
Training Parameter: maxMLModelSizeInBytes
  Type: Integer
  Default Value: 33,554,432 bytes (32 MiB)
  Description: Allowable range: 100,000 (100 KiB) to 2,147,483,648 (2 GiB). Depending on the input
  data, the model size might affect the performance.

Training Parameter: sgd.maxPasses
  Type: Integer
  Default Value: 10
  Description: Allowable range: 1-100.

Training Parameter: sgd.l1RegularizationAmount
  Type: Double
  Default Value: 0 (By default, L1 is not used)
  Description: Allowable range: 0 to MAX_DOUBLE. L1 values between 1E-4 and 1E-8 have been found
  to produce good results. Larger values are likely to produce models that are not very useful. You cannot
  set both L1 and L2. You must choose one or the other.

Training Parameter: sgd.l2RegularizationAmount
  Type: Double
  Default Value: 1E-6 (By default, L2 is used with this amount of regularization)
  Description: Allowable range: 0 to MAX_DOUBLE. L2 values between 1E-2 and 1E-6 have been found
  to produce good results. Larger values are likely to produce models that are not very useful. You cannot
  set both L1 and L2. You must choose one or the other.
CHAPTER
SIX
DATA TRANSFORMATIONS FOR MACHINE LEARNING
Machine learning models are only as good as the data that is used to train them. A key characteristic of
good training data is that it is provided in a way that is optimized for learning and generalization. The
process of putting together the data in this optimal format is known in the industry as feature
transformation.
6.1 Importance of Feature Transformation
Consider a machine learning model whose task is to decide whether a credit card transaction is fraudulent
or not. Based on your application background knowledge and data analysis, you might decide which data
fields (or features) are important to include in the input data. For example, transaction amount, merchant
name, address, and credit card owner’s address are important to provide to the learning process. On the
other hand, a randomly generated transaction ID carries no information (if we know that it really is
random), and is not useful.
Once you have decided on which fields to include, you transform these features to help the learning
process. Transformations add background experience to the input data, enabling the machine learning
model to benefit from this experience. For example, the following merchant address is represented as a
string:
“123 Main Street, Seattle, WA 98101”
By itself, the address has limited expressive power – it is useful only for learning patterns associated with
that exact address. Breaking it up into constituent parts, however, can create additional features like
“Address” (123 Main Street), “City” (Seattle), “State” (WA) and “Zip” (98101). Now, the learning
algorithm can group more disparate transactions together, and discover broader patterns – perhaps some
merchant zip codes experience more fraudulent activity than others.
For more information about the feature transformation approach and process, see Machine Learning
Concepts (http://docs.aws.amazon.com/machine-learning/latest/mlconcepts).
6.2 Feature Transformations with Data Recipes
There are two ways to transform features before creating ML models with Amazon ML: you can transform
your input data directly before showing it to Amazon ML, or you can use the built-in data transformations
of Amazon ML. You can use Amazon ML recipes, which are pre-formatted instructions for common
transformations. With recipes, you can do the following:
• Choose from a list of built-in common machine learning transformations, and apply these to
individual variables or groups of variables
• Select which of the input variables and transformations are made available to the machine learning
process
Using Amazon ML recipes offers several advantages. Amazon ML performs the data transformations for
you, so you do not need to implement them yourself. In addition, they are fast because Amazon ML applies
the transformations while reading input data, and provides results to the learning process without the
intermediate step of saving results to disk.
6.3 Recipe Format Reference
Amazon ML recipes are JSON files that have three sections:
• Groups enable grouping of multiple variables, for ease of applying transformations. For example,
you can create a group of all variables having to do with free-text parts of a web page (title, body),
and then perform a transformation on all these parts at once.
• Assignments enable the creation of intermediate named variables that can be reused in processing.
• Outputs define which variables will be used in the learning process, and what transformations (if
any) apply to these variables.
6.3.1 Groups
You can define groups of variables in order to collectively transform all variables within the groups, or to
use these variables for machine learning without transforming them. By default, the following groups are
created for you:
• ALL_INPUTS – Every variable defined in the datasource schema, regardless of type. However,
special variables such as target and row ID are not included in ALL_INPUTS, and cannot be used in
recipes.
• ALL_TEXT, ALL_NUMERIC, ALL_CATEGORICAL, ALL_BINARY – Type-specific groups
based on variables defined in the datasource schema.
These variables can be used in the outputs section of your recipe without being defined. You can also create
custom groups by adding to or subtracting variables from existing groups, or directly from a collection of
variables. In the following example, we demonstrate all three approaches, as well as the syntax for the
grouping assignment:
"groups": {
  "Custom_Group": "group(var1, var2)",
  "All_variables_except_one": "group_remove(ALL_INPUTS, var1)",
  "All_Categorical_plus_one_other": "group(ALL_CATEGORICAL, var2)"
}
Group names need to start with an alphabetical character and can be between 1 and 64 characters long. If
the group name does not start with an alphabetical character or if it contains special characters (, ' " \t \r \n
( ) \), then the name needs to be quoted to be included in the recipe.
6.3.2 Assignments
You can assign one or more transformations to an intermediate variable, for convenience and readability.
For example, if you have a text variable named email_subject, and you apply the lowercase transformation
to it, you can name the resulting variable email_subject_lowercase, making it easy to keep track of it
elsewhere in the recipe. Assignments can also be chained, enabling you to apply multiple transformations
in a specified order. The following example shows single and chained assignments in recipe syntax:
"assignments": {
  "email_subject_lowercase": "lowercase(email_subject)",
  "email_subject_lowercase_ngram": "ngram(lowercase(email_subject), 2)"
}
Intermediate variable names need to start with an alphabetical character and can be between 1 and 64
characters long. If the name does not start with an alphabetical character or if it contains special characters
(, ' " \t \r \n ( ) \), then the name needs to be quoted to be included in the recipe.
6.3.3 Outputs
The outputs section controls what input variables will be used for the learning process, and what
transformations apply to them. An empty or non-existent output section is an error, because no data will be
passed to the learning process.
The simplest outputs section simply includes the predefined ALL_INPUTS group, instructing Amazon
ML to use all the variables defined in the datasource for learning:
"outputs": [
  "ALL_INPUTS"
]
The output section can also refer to the other predefined groups by instructing Amazon ML to use all the
variables in these groups:
"outputs": [
  "ALL_NUMERIC",
  "ALL_CATEGORICAL"
]
The output section can also refer to custom groups. In the following example, only one of the custom
groups defined in the grouping assignments section in the preceding example will be used for machine
learning. All other variables will be dropped:
"outputs": [
  "All_Categorical_plus_one_other"
]
The outputs section can also refer to variable assignments defined in the assignments section:
"outputs": [
  "email_subject_lowercase"
]
And input variables or transformations can be defined directly in the outputs section:
"outputs": [
  "var1",
  "lowercase(var2)"
]
The outputs section needs to explicitly specify all variables and transformed variables that are expected to
be available to the learning process. Say, for example, that you include in the output a Cartesian product of
var1 and var2. If you would like to include both the raw variables var1 and var2 as well, then you need to
add the raw variables in the outputs section:
"outputs": [
  "cartesian(var1, var2)",
  "var1",
  "var2"
]
Outputs can include comments for readability by adding the comment text along with the variable:
"outputs": [
  "quantile_bin(age, 10) // quantile bin age",
  "age // explicitly include the original numeric variable along with the binned version"
]
You can mix and match all of these approaches within the outputs section.
6.3.4 Complete Recipe Example
The following example refers to several built-in data processors that were introduced in preceding
examples:
{
  "groups": {
    "LONGTEXT": "group_remove(ALL_TEXT, title, subject)",
    "SPECIALTEXT": "group(title, subject)",
    "BINCAT": "group(ALL_CATEGORICAL, ALL_BINARY)"
  },
  "assignments": {
    "binned_age": "quantile_bin(age, 30)",
    "country_gender_interaction": "cartesian(country, gender)"
  },
  "outputs": [
    "lowercase(nopunct(LONGTEXT))",
    "ngram(lowercase(nopunct(SPECIALTEXT)), 3)",
    "quantile_bin(hours-per-week, 10)",
    "hours-per-week // explicitly include the original numeric variable along with the binned version",
    "cartesian(binned_age, quantile_bin(hours-per-week, 10)) // this one is critical",
    "country_gender_interaction",
    "BINCAT"
  ]
}
6.4 Suggested Recipes
When you create a new datasource in Amazon ML and statistics are computed for that datasource, Amazon
ML will also create a suggested recipe that can be used to create a new ML model from the datasource.
The suggested recipe is based on the data and target attribute present in the data, and provides a useful
starting point for creating and fine-tuning your ML models.
To use the suggested recipe on the Amazon ML console, choose Datasource or Datasource and ML
model from the Create new dropdown list. For ML model settings, you will have a choice of Default or
Custom Training and Evaluation settings in the ML Model Settings step of the Create ML Model wizard.
If you pick the Default option, Amazon ML will automatically use the suggested recipe. If you pick the
Custom option, the recipe editor in the next step will display the suggested recipe, and you will be able to
verify or modify it as needed.
Note
Amazon ML allows you to create a datasource and then immediately use it to create an ML
model, before statistics computation is completed. In this case, you will not be able to see the
suggested recipe in the Custom option, but you will still be able to proceed past that step and
have Amazon ML use the default recipe for model training.
To use the suggested recipe with the Amazon ML API, you can pass an empty string in both Recipe and
RecipeUri API parameters. It is not possible to retrieve the suggested recipe using the Amazon ML API.
6.5 Data Transformations Reference
6.5.1 N-gram Transformation
The n-gram transformation takes a text variable as input and produces strings corresponding to sliding a
window of (user-configurable) n words, generating outputs in the process. For example, consider the text
string “I really enjoyed reading this book”.
Specifying the n-gram transformation with window size=1 simply gives you all the individual words in that
string:
{"I", "really", "enjoyed", "reading", "this", "book"}
Specifying the n-gram transformation with window size =2 gives you all the two-word combinations as
well as the one-word combinations:
{"I really", "really enjoyed", "enjoyed reading", "reading this", "this
book", "I", "really", "enjoyed", "reading", "this", "book"}
Specifying the n-gram transformation with window size = 3 will add the three-word combinations to this
list, yielding the following:
{"I really enjoyed", "really enjoyed reading", "enjoyed reading this",
"reading this book", "I really", "really enjoyed", "enjoyed reading",
"reading this", "this book", "I", "really", "enjoyed", "reading",
"this", "book"}
You can request n-grams with a size ranging from 2 to 10 words. N-grams with size 1 are generated
implicitly for all inputs whose type is marked as text in the data schema, so you do not have to ask for
them. Finally, keep in mind that n-grams are generated by breaking the input data on whitespace
characters. That means that punctuation characters will be considered a part of the word tokens:
generating n-grams with a window of 2 for the string "red, green, blue" will yield {"red,", "green,",
"blue,", "red, green", "green, blue"}. You can use the punctuation remover processor (described later in
this document) to remove the punctuation symbols if this is not what you want.
To compute n-grams of window size 3 for variable var1:
"ngram(var1, 3)"
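To make the sliding-window behavior concrete, here is an illustrative Python sketch of how such n-grams could be produced; this is not Amazon ML's implementation, only a demonstration of the output described above:

# Split on whitespace and emit all window sizes from 1 up to n.
def ngrams(text, n):
    tokens = text.split()
    out = []
    for size in range(1, n + 1):
        for i in range(len(tokens) - size + 1):
            out.append(" ".join(tokens[i:i + size]))
    return out

print(ngrams("I really enjoyed reading this book", 2))
# ['I', 'really', ..., 'I really', 'really enjoyed', ...]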
6.5.2 Orthogonal Sparse Bigram (OSB) Transformation
The OSB transformation is intended to aid in text string analysis and is an alternative to the bi-gram
transformation (n-gram with window size 2). OSBs are generated by sliding the window of size n over the
text, and outputting every pair of words that includes the first word in the window.
To build each OSB, its constituent words are joined by the “_” (underscore) character, and every skipped
token is indicated by adding another underscore into the OSB. Thus, the OSB encodes not just the tokens
seen within a window, but also an indication of number of tokens skipped within that same window.
To illustrate, consider the string "The quick brown fox jumps over the lazy dog", and OSBs of size 4. The
six four-word windows, and the last two shorter windows from the end of the string, are shown in the
following example, as well as the OSBs generated from each:
Window, {OSBs generated}
"The quick brown fox", {The_quick, The__brown, The___fox}
"quick brown fox jumps", {quick_brown, quick__fox, quick___jumps}
"brown fox jumps over", {brown_fox, brown__jumps, brown___over}
"fox jumps over the", {fox_jumps, fox__over, fox___the}
"jumps over the lazy", {jumps_over, jumps__the, jumps___lazy}
"over the lazy dog", {over_the, over__lazy, over___dog}
"the lazy dog", {the_lazy, the__dog}
"lazy dog", {lazy_dog}
Orthogonal sparse bigrams are a substitute for n-grams that might work better in some situations. If your
data has large text fields, OSBs might work better than n-grams – experiment to see what works best.
You can request a window size of 2 to 10 for OSB transformations on input text variables.
To compute OSBs with window size 5 for variable var1:
"osb(var1, 5)"
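The following illustrative Python sketch (not Amazon ML's implementation) reproduces the OSB output shown in the preceding example; skipped tokens inside the window add extra underscores:

# For each window, pair the first word with every later word in the window.
def osb(text, n):
    tokens = text.split()
    out = []
    for i in range(len(tokens) - 1):
        window = tokens[i:i + n]
        for dist, word in enumerate(window[1:], start=1):
            out.append(window[0] + "_" * dist + word)
    return out

print(osb("The quick brown fox jumps over the lazy dog", 4))
# ['The_quick', 'The__brown', 'The___fox', 'quick_brown', ...]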
6.5.3 Lowercase Transformation
The lowercase transformation processor converts text inputs to lowercase. For example, given the input
“The Quick Brown Fox Jumps Over the Lazy Dog”, the processor will output “the quick brown fox jumps
over the lazy dog”.
To apply the lowercase transformation to the variable var1:
"lowercase(var1)"
6.5.4 Remove Punctuation Transformation
Amazon ML implicitly splits inputs marked as text in the data schema on whitespace. Punctuation in the
string ends up either adjoining word tokens, or as separate tokens entirely, depending on the whitespace
surrounding it. If this is undesirable, the punctuation remover transformation may be used to remove
punctuation symbols from generated features. For example, given the string "Welcome to AML - please
fasten your seat-belts!", the following set of tokens is implicitly generated:
{"Welcome", "to", "AML", "-", "please", "fasten", "your", "seat-belts!"}
Applying the punctuation remover processor to this string results in this set:
{"Welcome", "to", "AML", "please", "fasten", "your", "seat-belts"}
Note that only the prefix and suffix punctuation marks are removed. Punctuation that appears in the middle
of a token, such as the hyphen in "seat-belts", is not removed.
To apply punctuation removal to the variable var1:
"no_punct(var1)"
6.5.5 Quantile Binning Transformation
The quantile binning processor takes two inputs, a numerical variable and a parameter called bin number,
and outputs a categorical variable. The purpose is to discover non-linearity in the variable’s distribution by
grouping observed values together.
In many cases, the relationship between a numeric variable and the target is not linear (the numeric variable
value does not increase or decrease monotonically with the target). In such cases, it might be useful to bin
the numeric feature into a categorical feature representing different ranges of the numeric feature. Each
categorical feature value (bin) can then be modeled as having its own linear relationship with the target.
For example, let’s say you know that the continuous numeric feature account_age is not linearly correlated
with likelihood to purchase a book. You can bin age into categorical features that might be able to capture
the relationship with the target more accurately.
The quantile binning processor can be used to instruct Amazon ML to establish n bins of equal size based
on the distribution of all input values of the age variable, and then to substitute each number with a text
token containing the bin. The optimum number of bins for a numeric variable is dependent on
characteristics of the variable and its relationship to the target, and this is best determined through
experimentation. Amazon ML suggests the optimal bin number for a numeric feature based on data
statistics in the Suggested Recipe.
You can request between 5 and 1000 quantile bins to be computed for any numeric input variable.
The following example shows how to compute and use 50 bins in place of the numeric variable var1:
"quantile_bin(var1, 50)"
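As an illustration of what quantile binning does (this is not Amazon ML's implementation), the following Python sketch uses pandas to replace numeric values with equal-population bin tokens; the sample values are made up:

import pandas as pd

ages = pd.Series([18, 22, 25, 31, 38, 42, 47, 55, 63, 71])
bins = pd.qcut(ages, q=5, labels=False)     # 5 bins of (roughly) equal population
tokens = ["bin_" + str(b) for b in bins]    # categorical tokens, e.g. "bin_0"
print(tokens)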
6.5.6 Normalization Transformation
The normalization transformer normalizes numeric variables to have a mean of zero and a variance of one.
Normalization of numeric variables can help the learning process if there are very large range differences
between numeric variables, because variables with the highest magnitude could dominate the ML model,
regardless of whether the feature is informative with respect to the target.
To apply this transformation to numeric variable var1, add this to the recipe:
normalize(var1)
This transformer can also take a user-defined group of numeric variables or the pre-defined group for all
numeric variables (ALL_NUMERIC) as input:
normalize(ALL_NUMERIC)
Note
It is not mandatory to use the normalization processor for numeric variables.
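For illustration, the following Python sketch (not Amazon ML's implementation) applies the same zero-mean, unit-variance rescaling to a made-up list of values:

# Rescale values to mean 0 and variance 1.
def normalize(values):
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    std = var ** 0.5
    return [(v - mean) / std for v in values]

print(normalize([10.0, 20.0, 30.0, 40.0]))  # result has mean 0 and variance 1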
6.5.7 Cartesian Product Transformation
The Cartesian transformation generates permutations of two or more text or categorical input variables.
This transformation is used when an interaction between variables is suspected. For example, consider the
bank marketing dataset that is used in Tutorial: Using Amazon ML to Predict Responses to a Marketing
Offer. Using this dataset, we would like to predict whether a person would respond positively to a bank
promotion, based on the economic and demographic information. We might suspect that the person’s job
type is somewhat important (perhaps there is a correlation between being employed in certain fields and
having the money available), and the highest level of education attained is also important. We might also
have a deeper intuition that there is a strong signal in the interaction of these two variables—for example,
that the promotion is particularly well-suited to customers who are entrepreneurs who earned a university
degree.
The Cartesian product transformation takes categorical variables or text as input, and produces new
features that capture the interaction between these input variables. Specifically, for each training example,
it will create a combination of features, and add them as a standalone feature. For example, let’s say our
simplified input rows look like this:
target, education, job
0, university.degree, technician
0, high.school, services
1, university.degree, admin
If we specify that the Cartesian transformation is to be applied to the categorical variables education and
job fields, the resultant feature education_job_interaction will look like this:
target, education_job_interaction
0, university.degree_technician
0, high.school_services
1, university.degree_admin
The Cartesian transformation is even more powerful when it comes to working on sequences of tokens, as
is the case when one of its arguments is a text variable that is implicitly or explicitly split into tokens. For
example, consider the task of classifying a book as being a textbook or not. Intuitively, we might think that
there is something about the book’s title that can tell us it is a textbook (certain words might occur more
frequently in textbooks’ titles), and we might also think that there is something about the book’s binding
that is predictive (textbooks are more likely to be hardcover), but it’s really the combination of some words
in the title and binding that is most predictive. For a real-world example, the following table shows the
results of applying the Cartesian processor to the input variables binding and title:
Textbook | Title | Binding | Cartesian product of nopunct(Title) and Binding
1 | Economics: Principles, Problems, Policies | Hardcover | {"Economics_Hardcover", "Principles_Hardcover", "Problems_Hardcover", "Policies_Hardcover"}
0 | The Invisible Heart: An Economics Romance | Softcover | {"The_Softcover", "Invisible_Softcover", "Heart_Softcover", "An_Softcover", "Economics_Softcover", "Romance_Softcover"}
0 | Fun With Problems | Softcover | {"Fun_Softcover", "With_Softcover", "Problems_Softcover"}
The following example shows how to apply the Cartesian transformer to var1 and var2:
cartesian(var1, var2)
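To make the token pairing concrete, here is an illustrative Python sketch (not Amazon ML's implementation) that reproduces the last row of the preceding table:

from itertools import product

# Every pairing of tokens from the two inputs becomes one combined feature.
def cartesian(tokens_a, tokens_b):
    return [a + "_" + b for a, b in product(tokens_a, tokens_b)]

title_tokens = "Fun With Problems".split()
binding_tokens = ["Softcover"]
print(cartesian(title_tokens, binding_tokens))
# ['Fun_Softcover', 'With_Softcover', 'Problems_Softcover']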
6.6 Data Rearrangement
The data rearrangement functionality enables you to create a Datasource that is based on only a portion of
the input data that it points to. For example, when you create an ML Model using the Create ML Model
wizard in the Amazon ML console, and choose the default evaluation option, Amazon ML automatically
reserves 30% of your data for ML model evaluation, and uses the other 70% for training. This functionality
is enabled by the Data Rearrangement feature of Amazon ML.
If you are using the Amazon ML API to create Datasources, you can specify which part of the input data a
new Datasource will be based on by passing in the rearrangement instructions within the
DataRearrangement parameter to the CreateDataSourceFromS3, CreateDataSourceFromRedshift or
CreateDataSourceFromRDS APIs. DataRearrangement contents are a JSON string containing the
beginning and end locations of your data, expressed as percentages. For example, the following
DataRearrangement string specifies that the first 70% of the data will be used to create the Datasource:
{
  "splitting": {
    "percentBegin": 0,
    "percentEnd": 70
  }
}
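For example, the following sketch (the datasource ID and S3 paths are placeholders) passes this rearrangement string to CreateDataSourceFromS3 through the AWS SDK for Python (boto3):

import json
import boto3

ml = boto3.client("machinelearning")
rearrangement = {"splitting": {"percentBegin": 0, "percentEnd": 70}}

ml.create_data_source_from_s3(
    DataSourceId="ds-train-001",                      # placeholder ID
    DataSourceName="Training split (first 70%)",
    DataSpec={
        "DataLocationS3": "s3://mybucket/input.csv",  # placeholder path
        "DataSchemaLocationS3": "s3://mybucket/input.csv.schema",
        "DataRearrangement": json.dumps(rearrangement),
    },
    ComputeStatistics=True,
)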
CHAPTER
SEVEN
EVALUATING ML MODELS
You should always evaluate a model to know if it will do a good job of predicting the target on new and
future data. Because future instances have unknown target values, you need to check the accuracy metric of
the ML model on data for which you already know the target answer, and use this accuracy as a proxy for
accuracy on future data.
To properly evaluate a model, you hold out a random sample of data that has been labeled with the target
(ground truth), and you do not use this to train the model. Evaluating the predictive accuracy of an ML
model with the same data that was used for training is not useful, because it rewards models that can
“remember” the training data, as opposed to generalizing from it. Once you have finished training the ML
model, you send the model the held-out observations for which you know the target values. You then
compare the predictions returned by the ML model against the known target value. Finally, you compute a
summary metric that tells you how well the predicted and true values match.
In Amazon ML, you evaluate an ML model by creating an evaluation. To create an evaluation for an ML
model, you need an ML model that you want to evaluate, and you need labeled data that was not used for
training. First, create a datasource for evaluation by creating an Amazon ML datasource on the held-out
data. The data used in the evaluation must have the same schema as the data used in training and include
actual values for the target variable. If all your data is in a single file or directory, you can use the Amazon
ML console to help you to randomly split the data into 70% for training and 30% for evaluation. You can
also specify other custom split ratios through the Amazon ML API. Once you have an evaluation data
source and an ML model, you can create an evaluation and review the results of the evaluation.
7.1 ML Model Insights
When you evaluate an ML model, Amazon ML provides an industry-standard metric and a number of
insights to review the predictive accuracy of your model. In Amazon ML, the outcome of an evaluation
contains the following:
• A prediction accuracy metric to report on the overall success of the model
• Visualizations to help explore the accuracy of your model beyond the prediction accuracy metric
• The ability to review the impact of setting a score threshold (only for binary classification)
• Alerts on criteria to check the validity of the evaluation
The choice of the metric and visualization depends on the type of ML model that you are evaluating. It is
important to review these visualizations to decide if your model is performing well enough to match your
business requirements.
7.2 Binary Model Insights
7.2.1 Interpreting the Predictions
The actual output of many binary classification algorithms is a prediction score. The score indicates the
system’s certainty that the given observation belongs to the positive class (the actual target value is 1).
Binary classification models in Amazon ML output a score that ranges from 0 to 1. To decide whether an
observation should be classified as 1 or 0, you interpret the score by picking a classification threshold, or
cut-off, and comparing the score against it. Observations with scores higher than the cut-off are predicted
as target = 1, and observations with scores lower than the cut-off are predicted as target = 0.
In Amazon ML, the default score cut-off is 0.5. You can choose to update this cut-off to match your
business needs. You can use the visualizations in the console to understand how the choice of cut-off will
affect your application.
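In code, applying a cut-off is a single comparison. The following sketch uses plain Python with hypothetical scores:

scores = [0.12, 0.55, 0.91, 0.48]    # hypothetical prediction scores from the model
cutoff = 0.5                         # Amazon ML's default score cut-off
labels = [1 if score > cutoff else 0 for score in scores]
print(labels)                        # [0, 1, 1, 0]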
Measuring ML Model Accuracy
Amazon ML provides an industry-standard accuracy metric for binary classification models called Area
Under the (Receiver Operating Characteristic) Curve (AUC). AUC measures the ability of the model to
predict a higher score for positive examples as compared to negative examples. Because it is independent
of the score cut-off, you can get a sense of the prediction accuracy of your model from the AUC metric
without picking a threshold.
The AUC metric returns a decimal value from 0 to 1. AUC values near 1 indicate an ML model that is
highly accurate. Values near 0.5 indicate an ML model that is no better than guessing at random. Values
near 0 are unusual to see, and typically indicate a problem with the data. Essentially, an AUC near 0 says
that the ML model has learned the correct patterns, but is using them to make predictions that are flipped
from reality (‘0’s are predicted as ‘1’s and vice versa). For more information about AUC, go to the
Receiver operating characteristic (http://en.wikipedia.org/wiki/Receiver_operating_characteristic) page on
Wikipedia.
The baseline AUC metric for a binary model is 0.5. It is the value for a hypothetical ML model that
randomly predicts a 1 or 0 answer. Your binary ML model should perform better than this value to begin to
be valuable.
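AUC can also be read as the probability that a randomly chosen positive example receives a higher score than a randomly chosen negative example. The following sketch computes that definition directly in plain Python (hypothetical scores; ties count as half):

def auc(positive_scores, negative_scores):
    # Fraction of positive/negative pairs that are ranked correctly; ties count 0.5
    pairs = [(p, n) for p in positive_scores for n in negative_scores]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p, n in pairs)
    return wins / len(pairs)

print(auc([0.9, 0.8, 0.6], [0.7, 0.3, 0.2]))   # 0.888...; closer to 1 is better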
Using the Performance Visualization
To explore the accuracy of the ML model, you can review the graphs on the Evaluation page on the
Amazon ML console. This page shows you two histograms: a) a histogram of the scores for the actual
positives (the target is 1) and b) a histogram of scores for the actual negatives (the target is 0) in the
evaluation data.
An ML model that has good predictive accuracy will assign higher scores to the actual 1s and lower scores
to the actual 0s. A perfect model will have the two histograms at opposite ends of the x-axis, showing that
actual positives all got high scores and actual negatives all got low scores. However, ML models make
mistakes, and a typical graph will show the two histograms overlapping at certain scores. An extremely
poorly performing model will be unable to distinguish between the positive and negative classes, and the
two histograms will mostly overlap.
Using the visualizations, you can identify the number of predictions that fall into the two types of correct
predictions and the two types of incorrect predictions.
Correct Predictions
• True positive (TP): Amazon ML predicted the value as 1, and the true value is 1.
• True negative (TN): Amazon ML predicted the value as 0, and the true value is 0.
Erroneous Predictions
• False positive (FP): Amazon ML predicted the value as 1, but the true value is 0.
• False negative (FN): Amazon ML predicted the value as 0, but the true value is 1.
Note
The number of TPs, TNs, FPs, and FNs depends on the selected score threshold, and optimizing for
any one of these numbers would mean making a tradeoff on the others. A high number of
TPs typically results in a high number of FPs and a low number of TNs.
Adjusting the Score Cut-off
ML models work by generating numeric prediction scores, and then applying a cut-off to convert these
scores into binary 0/1 labels. By changing the score cut-off, you can adjust the model’s behavior when it
makes a mistake. On the Evaluation page in the Amazon ML console, you can review the impact of
various score cut-offs, and you can save the score cut-off that you would like to use for your model.
When you adjust the score cut-off threshold, observe the trade-off between the two types of errors. Moving
the cut-off to the left captures more true positives, but the trade-off is an increase in the number of false
positive errors. Moving it to the right captures fewer false positive errors, but the trade-off is that it will
miss some true positives. For your predictive application, you decide which kind of error is more
tolerable by selecting an appropriate cut-off score.
Reviewing Advanced Metrics
Amazon ML provides the following additional metrics to measure the predictive accuracy of the ML
model: accuracy, precision, recall, and false positive rate.
Accuracy
Accuracy (ACC) measures the fraction of correct predictions. The range is 0 to 1. A larger value indicates
better predictive accuracy:

ACC = (TP + TN) / (TP + TN + FP + FN)

Precision
Precision measures the fraction of actual positives among the examples that are predicted as positive.
The range is 0 to 1. A larger value indicates better predictive accuracy:

Precision = TP / (TP + FP)

Recall
Recall measures the fraction of actual positives that are predicted as positive. The range is 0 to 1. A larger
value indicates better predictive accuracy:

Recall = TP / (TP + FN)

False Positive Rate
The false positive rate (FPR) measures the false alarm rate, or the fraction of actual negatives that are
predicted as positive. The range is 0 to 1. A smaller value indicates better predictive accuracy:

FPR = FP / (FP + TN)
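Given the TP, TN, FP, and FN counts for a chosen cut-off, all four metrics follow directly from the definitions above. A minimal sketch with hypothetical counts:

tp, tn, fp, fn = 80, 90, 10, 20                 # hypothetical confusion-matrix counts
accuracy = (tp + tn) / (tp + tn + fp + fn)      # 0.85
precision = tp / (tp + fp)                      # ~0.889
recall = tp / (tp + fn)                         # 0.8
false_positive_rate = fp / (fp + tn)            # 0.1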
Depending on your business problem, you might be more interested in a model that performs well for a
specific subset of these metrics. For example, two business applications might have very different
requirements for their ML model:
• One application might need to be extremely sure about the positive predictions actually being
positive (high precision), and be able to afford to misclassify some positive examples as negative
(moderate recall).
• Another application might need to correctly predict as many positive examples as possible (high
recall), and will accept some negative examples being misclassified as positive (moderate precision).
Amazon ML allows you to choose a score cut-off that corresponds to a particular value of any of the
preceding advanced metrics. It also shows the tradeoffs incurred by optimizing for any one metric. For
example, if you select a cut-off that corresponds to a high precision, you typically will have to trade that off
with a lower recall.
Note
You must save the score cut-off for it to take effect when classifying future predictions made by
your ML model.
7.3 Multiclass Model Insights
7.3.1 Interpreting the Predictions
The actual output of a multiclass classification algorithm is a set of prediction scores. The scores indicate
the model's certainty that the given observation belongs to each of the classes. Unlike with binary
classification problems, you do not need to choose a score cut-off to make predictions. The predicted
answer is the class (that is, the label) with the highest predicted score.
Measuring ML Model Accuracy
Typical metrics for multiclass classification are the same as the metrics used in the binary classification
case, averaged over all classes. In Amazon ML, the macro average F1 score is used to evaluate the
predictive accuracy of a multiclass model.
Macro Average F1 Score
The F1 score is a binary classification metric that considers both precision and recall. It is the
harmonic mean of precision and recall. The range is 0 to 1. A larger value indicates better predictive
accuracy:

F1 = 2 * (Precision * Recall) / (Precision + Recall)

The macro average F1 score is the unweighted average of the F1 scores over all the classes in the multiclass
case. It does not take into account the frequency of occurrence of the classes in the evaluation dataset. A
larger value indicates better predictive accuracy. For K classes in the evaluation datasource:

Macro Average F1 = (F1(class 1) + F1(class 2) + ... + F1(class K)) / K
Baseline Macro Average F1 Score
Amazon ML provides a baseline metric for multiclass models. It is the macro average F1 score for a
hypothetical multiclass model that would always predict the most frequent class as the answer. For
example, if you were predicting the genre of a movie and the most common genre in your training data was
Romance, then the baseline model would always predict the genre as Romance. You would compare your
ML model against this baseline to validate if your ML model is better than an ML model that predicts this
constant answer.
Using the Performance Visualization
Amazon ML provides a confusion matrix as a way to visualize the accuracy of multiclass classification
predictive models. The confusion matrix illustrates in a table the number or percentage of correct and
incorrect predictions for each class by comparing an observation’s predicted class and its true class.
For example, if you are trying to classify a movie into a genre, the predictive model might predict that its
genre (class) is Romance. However, its true genre actually might be Thriller. When you evaluate the
accuracy of a multiclass classification ML model, Amazon ML identifies these misclassifications and
displays the results in the confusion matrix, as shown in the following illustration.
The following information is displayed in a confusion matrix:
• Number of correct and incorrect predictions for each class: Each row in the confusion matrix
corresponds to the metrics for one of the true classes. For example, the first row shows that for
movies that are actually in the Romance genre, the multiclass ML model gets the predictions right
for over 80% of the cases. It incorrectly predicts the genre as Thriller for less than 20% of the cases,
and Adventure for less than 20% of the cases.
• Class-wise F1-score: The last column shows the F1-score for each of the classes.
• True class-frequencies in the evaluation data: The second-to-last column shows that in the
evaluation dataset, 57.92% of the observations are Romance, 21.23% are Thriller,
and 20.85% are Adventure.
• Predicted class-frequencies for the evaluation data: The last row shows the frequency of each
class in the predictions. 77.56% of the observations are predicted as Romance, 9.33% are predicted as
Thriller, and 13.12% are predicted as Adventure.
The Amazon ML console provides a visual display that accommodates up to 10 classes in the confusion
matrix, listed in order of most frequent to least frequent class in the evaluation data. If your evaluation data
has more than 10 classes, you will see the top 9 most frequently occurring classes in the confusion matrix,
and all other classes will be collapsed into a class called “others.” Amazon ML also provides the ability to
download the full confusion matrix through a link on the multiclass visualizations page.
7.4 Regression Model Insights
7.4.1 Interpreting the Predictions
The output of a regression ML model is a numeric value for the model’s prediction of the target. For
example, if you are predicting housing prices, the prediction of the model could be a value such as 254,013.
Note
The range of predictions can differ from the range of the target in the training data. For
example, let's say you are predicting housing prices, and the target in the training data had
values in a range from 0 to 450,000. The predicted target need not be in the same range, and
might take values greater than 450,000 or even negative values (less than zero). It is
important to plan how to address prediction values that fall outside a range that is acceptable
for your application.
Measuring ML Model Accuracy
For regression tasks, Amazon ML uses the industry-standard root mean square error (RMSE) metric. It is a
distance measure between the predicted numeric target and the actual numeric answer (ground truth). The
smaller the value of the RMSE, the better the predictive accuracy of the model. A model with perfectly
correct predictions would have an RMSE of 0. For evaluation data that contains N records:

RMSE = sqrt((1/N) * sum over i = 1..N of (actual_i - predicted_i)^2)
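As a quick sketch of the RMSE computation in plain Python (the actual and predicted values are hypothetical):

import math

actual = [250000, 310000, 180000]      # hypothetical ground-truth targets
predicted = [240000, 330000, 175000]   # hypothetical model predictions
rmse = math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))
print(rmse)                            # ~13228.76; smaller is better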
Baseline RMSE
Amazon ML provides a baseline metric for regression models. It is the RMSE for a hypothetical regression
model that would always predict the mean of the target as the answer. For example, if you were predicting
the age of a house buyer and the mean age for the observations in your training data was 35, the baseline
model would always predict the answer as 35. You would compare your ML model against this baseline to
validate if your ML model is better than a ML model that predicts this constant answer.
Using the Performance Visualization
It is common practice to review the residuals for regression problems. A residual for an observation in the
evaluation data is the difference between the true target and the predicted target. Residuals represent the
portion of the target that the model is unable to predict. A positive residual indicates that the model is
underestimating the target (the actual target is larger than the predicted target). A negative residual indicates
an overestimation (the actual target is smaller than the predicted target). If the histogram of the residuals on
the evaluation data is bell-shaped and centered at zero, the model makes mistakes in a random manner and
does not systematically over- or under-predict any particular range of
target values. If the residuals do not form a zero-centered bell shape, there is some structure in the model's
prediction error. Adding more variables to the model might help the model capture the pattern that is not
captured by the current model. The following illustration shows residuals that are not centered around zero.
7.5 Evaluation Alerts
Amazon ML provides insights to help you validate whether you evaluated the model correctly. If any of the
validity criteria are not met by the evaluation, the Amazon ML console alerts you. Amazon ML uses the
following criteria to decide whether an evaluation is valid:
• Evaluation of ML model is done on held-out data
Amazon ML alerts you for this criterion if you use the same datasource for training and evaluation.
If you use Amazon ML to split your data, you will meet this validity criterion. If you do not use
Amazon ML to split your data, make sure to evaluate your ML model with a datasource other than
the training datasource.
• Sufficient data was used for the evaluation of the predictive model
Amazon ML alerts you for this criterion if the number of observations/records in your evaluation
data is less than 10% of the number of observations in your training datasource. To properly
evaluate your model, it is important to provide a sufficiently large data sample. This criterion
provides a check to let you know if you are using too little data. The amount of data required to
evaluate your ML model is subjective. 10% is selected here as a stopgap in the absence of a better
measure.
• Schema matched
Amazon ML alerts you about this criterion if the schemas of the training and evaluation datasources
are not the same. If certain attributes are missing from the evaluation datasource, or if extra
attributes are present, Amazon ML lets you know through this alert.
• All records from evaluation files were used for predictive model performance evaluation
It is important to know if all the records provided for evaluation were actually used for evaluating the
model. Amazon ML alerts you about this criterion if some records in the evaluation datasource were
invalid and were not included in the accuracy metric computation. For example, if the target variable
is missing for some of the observations in the evaluation datasource, then Amazon ML is unable to
check if the ML model’s predictions for these observations are correct. In this case, the records with
missing target values will be considered invalid.
• Distribution of target variable
Amazon ML provides you with the distribution of the target from training and evaluation datasources
for you to review if the target is distributed similarly in both datasources. If the model was trained on
training data with a target distribution that differs from the distribution of the target on the evaluation
data, then the accuracy of the model could suffer because it is being evaluated on data with very
different statistics. It is best to have the data distributed similarly over training and evaluation data,
and have these datasets mimic as much as possible the data that the model will encounter when
making predictions.
• Accuracy metric on the training and evaluation data
Amazon ML provides an accuracy metric as a measure of predictive accuracy on both the training
and evaluation data. For example,
it is useful to know if a binary classification model has extremely good AUC on the training data but
very poor AUC on the evaluation data. This would be an indicator of the model overfitting to the
training data. It is important to review the accuracy metric of both training and evaluation data to
understand model fit issues. For more information about model fit, see Machine Learning Concepts
(http://docs.aws.amazon.com/machine-learning/latest/mlconcepts).
CHAPTER EIGHT
GENERATING AND INTERPRETING PREDICTIONS
Amazon ML provides two mechanisms for generating predictions: asynchronous (batch-based) and
synchronous (one-at-a-time).
Use asynchronous predictions, or batch predictions, when you have a number of observations and would
like to obtain predictions for the observations all at once. The process uses a datasource as input, and
outputs predictions into a .csv file stored in an S3 bucket of your choice. You need to wait until the batch
prediction process completes before you can access the prediction results.
Use synchronous, or real-time predictions, when you want to obtain predictions at low latency. The
real-time prediction API accepts a single input observation serialized as a JSON string, and synchronously
returns the prediction and associated metadata as part of the API response. You can simultaneously invoke
the API more than once to obtain synchronous predictions in parallel. For more information about the
throughput limits of the real-time prediction API, see the Amazon ML API Reference
(http://docs.aws.amazon.com/machine-learning/latest/APIReference/).
8.1 Creating Batch Prediction Objects
The BatchPrediction object describes a set of predictions that have been generated by using your ML
model and a set of input observations. When you create a BatchPrediction object, Amazon ML will start an
asynchronous workflow that computes the predictions. You must provide the following parameters to
create a BatchPrediction object, using either the Amazon ML service console or API:
• Datasource ID
• BatchPrediction ID
• ML Model ID
• Output Uri
• (Optional) BatchPrediction Name
8.1.1 DataSource ID
This parameter specifies the ID of the datasource that points to the observations for which you want
predictions. For example, if you want predictions for data in a file called s3://examplebucket/input.csv, you
would create a datasource object that points to the data file, and then pass in the ID of that datasource with
this parameter.
You must use the same schema for the datasource that you use to obtain batch predictions and the
datasource that you used to train the ML model that is being queried for predictions. The one exception is
the target attribute: You can omit the target attribute from the datasource for a batch prediction; if you
provide it, Amazon ML will ignore its value.
Note
If you use the Amazon ML console to create a batch prediction, you can enter the path to the
S3 bucket that contains the data with observations for which the predictions will be generated.
Amazon ML will create a datasource for this data and ensure that the schema matches the
schema of the ML model before it produces predictions.
8.1.2 BatchPrediction ID
This parameter contains the ID to assign to the batch prediction.
8.1.3 MLModel ID
This parameter contains the ID of the ML model to query for predictions.
8.1.4 OutputUri
This parameter contains the URI of the S3 bucket for the output of the predictions. Amazon ML must have
permissions to write data into this bucket. For more information about S3 permissions configuration, see
Granting Amazon ML Permissions to Output Predictions to Amazon S3
The OutputUri parameter must refer to an S3 path that ends with a forward slash (‘/’) character, as shown
in the following example:
s3://examplebucket/examplepath/
8.1.5 (Optional) BatchPrediction Name
This optional parameter contains a human-readable name that you can use to name your batch prediction.
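As a sketch with the AWS SDK for Python (boto3), creating a BatchPrediction object with the preceding parameters might look like the following; all IDs and the output path are hypothetical:

import boto3

ml = boto3.client("machinelearning")
ml.create_batch_prediction(
    BatchPredictionId="bp-example",                       # ID to assign to the batch prediction
    BatchPredictionName="Example batch prediction",       # optional human-readable name
    MLModelId="ml-example-model-id",                      # model to query for predictions
    BatchPredictionDataSourceId="ds-example-input-id",    # datasource with the input observations
    OutputUri="s3://examplebucket/output/")               # S3 path; must end with '/'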
8.2 Working with Batch Predictions
Amazon ML provides the following batch prediction operations in the Amazon ML API
(http://docs.aws.amazon.com/machine-learning/latest/APIReference/):
• CreateBatchPrediction
• UpdateBatchPrediction
• DeleteBatchPrediction
• GetBatchPrediction
• DescribeBatchPredictions
On the Amazon ML console, you can retrieve details of individual BatchPrediction objects, including their
status, output location, and processing log file. You can also update the names of existing BatchPrediction
objects or delete the objects.
8.3 Reading the BatchPrediction Output Files
Perform the following steps to retrieve the batch prediction output files:
1. Locate the batch prediction manifest file.
2. Read the manifest file to determine the locations of output files.
3. Retrieve the output files that contain the predictions.
4. Interpret the contents of the output files. Contents will vary based on the type of ML model that was
used to generate predictions.
The following sections describe the steps in greater detail.
8.3.1 Locating the Batch Prediction Manifest File
The manifest files of the batch prediction contain the information that maps your input files to the
prediction output files.
To locate the manifest file, start with the output location that you specified when you created the batch
prediction object. You can query a completed batch prediction object to retrieve the S3 location of this file
by using either the Amazon ML API or the Amazon ML console.
The manifest file is located in a known path under the output location. This path consists of the static string
"/batch-prediction/" appended to the output location. The name of the manifest file is the ID of the batch
prediction, with the extension ".manifest".
For example, if you create a batch prediction object with the ID bp-example, and you specify the S3
location s3://examplebucket/output/ as the output location, you would find your manifest file here:
s3://examplebucket/output/batch-prediction/bp-example.manifest
8.3.2 Reading the Manifest File
The contents of the .manifest file are encoded as a JSON map, where the key is a string of the name of an
S3 input data file, and the value is a string of the associated batch prediction result file. There is one
mapping line for each input/output file pair. Continuing with our example, if the input for the creation of
the BatchPrediction object consists of a single file called data.csv that is located in
s3://examplebucket/input/, you might see a mapping string that looks like this:
{"s3://examplebucket/input/data.csv":"
s3://examplebucket/output/batch-prediction/result/bp-example-data.csv.gz"}
If the input to the creation of the BatchPrediction object consists of three files called data1.csv, data2.csv,
and data3.csv, and they are all stored in the S3 location s3://examplebucket/input/, you might see a
mapping string that looks like this:
{"s3://examplebucket/input/data1.csv":"s3://examplebucket/output/batch-prediction/result/bp
"s3://examplebucket/input/data2.csv":"
s3://examplebucket/output/batch-prediction/result/bp-example-data2.csv.gz",
"s3://examplebucket/input/data3.csv":"
s3://examplebucket/output/batch-prediction/result/bp-example-data3.csv.gz"}
8.3.3 Retrieving the Batch Prediction Output Files
You can download each batch prediction file obtained from the manifest mapping and process it locally.
The file format is CSV, compressed with the gzip algorithm. Within that file, there is one line per input
observation in the corresponding input file.
To join the predictions with the input file of the batch prediction, you can perform a simple
record-by-record merge of the two files. The output file of the batch prediction always contains the same
number of records as the prediction input file, in the same order. If an input observation fails in processing,
and no prediction can be generated, the output file of the batch prediction will have a blank line in the
corresponding location.
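The following sketch walks through these steps with boto3 and the Python standard library; the bucket and manifest key are hypothetical, and the result files are assumed to live in the same bucket:

import gzip
import json
import boto3

s3 = boto3.client("s3")

# Steps 1-2: fetch and parse the manifest (input file -> output file mapping)
manifest = s3.get_object(
    Bucket="examplebucket",
    Key="output/batch-prediction/bp-example.manifest")
mapping = json.loads(manifest["Body"].read())

# Steps 3-4: download each gzipped result file; one line per input observation
for input_uri, result_uri in mapping.items():
    result_key = result_uri.split("examplebucket/", 1)[1]   # strip the scheme and bucket
    body = s3.get_object(Bucket="examplebucket", Key=result_key)["Body"].read()
    for line in gzip.decompress(body).decode("utf-8").splitlines():
        print(input_uri, line)   # a blank line marks an observation that failed processing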
8.3.4 Interpreting the Contents of Batch Prediction Files for a Binary Classification
ML model
The columns of the batch prediction file for a binary classification model are named bestAnswer and score.
The score column contains the raw prediction score assigned by the ML model for this prediction. Amazon
ML uses logistic regression models, so this score attempts to model the probability that the observation
corresponds to a true ("1") value.
The bestAnswer column contains the prediction label ("1" or "0") that is obtained by evaluating the
prediction score against the cut-off score. For more information about cut-off scores, see Binary Model
Insights. You set a cut-off score for the ML model by using either the Amazon ML
API or the model evaluation functionality on the Amazon ML console. If you don't set a cut-off score,
Amazon ML uses the default value of 0.5.
For example, if the cut-off score for the ML model is 0.75, the contents of the batch prediction output file
for a binary classification model might look like this:
bestAnswer,score
0,0.0087642
1,0.7899012
0,6.323061E-3
0,2.143189E-2
1,0.8944209
...
The second and fifth observations in the input file received prediction scores above 0.75, so the
bestAnswer column for these observations indicates the value "1", while the other observations have the value "0".
8.3.5 Interpreting the Contents of Batch Prediction Files for a Multiclass Classification ML Model
The batch prediction file for a multiclass model contains a variable number of columns: one for each of the
classes found in the training data, and an additional column named trueLabel. The names of all columns
can be read off the header line in the batch prediction file.
When you request predictions from a multiclass model, Amazon ML will compute several predictions for
each observation in the input file, one for each of the classes found in the input dataset. This strategy is
known as one against all (OAA). It is equivalent to asking the question “What is the prediction score for
this specific observation being of a given class, as opposed to any other class?" The class with the
highest overall prediction score is by default considered to be the most likely prediction and is reported
as the predicted answer. However, all prediction scores are available to you, and you can
choose to interpret them differently. Because prediction scores model underlying probabilities of the
observation belonging to one class or another, the sum of all the prediction scores across a row is 1, and
each individual score can be interpreted as a “probability that the observation belongs to this class.”
Consider the example of attempting to predict the number of stars that a customer will rate a product. In
this example, we use whole stars only, on a 1-5 scale. Our classes are named 1_star, 2_stars, 3_stars,
4_stars, and 5_stars. The multiclass prediction output file might look like this:
prediction output file might look like this:
trueLabel, 1_star, 2_stars, 3_stars, 4_stars, 5_stars
3_stars, 0.0087642, 0.27195, 0.477781, 0.175411, 0.066094
1_star, 0.559931, 0.000310, 0.000248, 0.199871, 0.239640
1_star, 0.719022, 0.007366, 0.195411, 0.000878, 0.077323
3_stars, 0.189813, 0.218956, 0.248910, 0.226103, 0.116218
2_stars, 0.003129, 0.8944209, 0.003902, 0.072191, 0.026357
...
In the preceding example, the first observation has the highest prediction score associated with the class
3_stars (prediction score = 0.477781), so 3_stars is the predicted answer for this observation; in this case,
it matches the actual class shown in the trueLabel column.
8.3.6 Interpreting the Contents of Batch Prediction Files for a Regression ML Model
The batch prediction file for a regression model contains a single column named score. This column
contains the raw numeric prediction for each observation in the input data. The following example shows
an output file for a batch prediction performed on a regression model:
score
-1.526385E1
-6.188034E0
-1.271108E1
-2.200578E1
8.359159E0
...
8.4 Requesting Real-time Predictions
You can query an ML model created with Amazon ML for predictions in real-time by using the
low-latency predict operation. This functionality is commonly used to enable predictive capabilities within
interactive web, mobile, or desktop applications. The Predict operation accepts a single input observation
in the request payload, and returns the prediction synchronously in the response. This sets it apart from the
batch prediction API, which is invoked with an Amazon S3 URI that points to files with input observations,
and asynchronously returns a URI to a file that contains predictions for all these observations.
To use the real-time prediction API, you must first create an endpoint for real-time prediction generation.
You can do this on the Amazon ML console or by using the CreateRealtimeEndpoint operation.
Note
Once you create a real-time endpoint for your model, you will start incurring a capacity
reservation charge that is based on the model’s size. For more information, see Pricing. To
stop incurring the charge, remove the real-time endpoint by using the console or the
DeleteRealtimeEndpoint operation when you no longer need to obtain real-time predictions
from that model.
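With boto3, creating an endpoint and removing it later might look like this sketch (the model ID is hypothetical):

import boto3

ml = boto3.client("machinelearning")

# Create the endpoint; the response includes the endpoint URL and status
response = ml.create_realtime_endpoint(MLModelId="ml-example-model-id")
print(response["RealtimeEndpointInfo"]["EndpointUrl"])

# ...later, remove the endpoint to stop the capacity reservation charge
ml.delete_realtime_endpoint(MLModelId="ml-example-model-id")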
8.4.1 Real-time Prediction Latencies
The Amazon ML system is designed to respond to most online prediction requests within 100 milliseconds.
8.4.2 Locating the Real-time Prediction Endpoint
Real-time endpoints are properties of ML models. When you create a real-time endpoint by using the
CreateRealtimeEndpoint operation, the URL and status of the endpoint are returned to you in the response.
If you created the real-time endpoint by using the console, or if you want to retrieve the URL and status of
an existing endpoint, you can call the GetMLModel operation with the ID of the model that you want to
query for real-time predictions. The endpoint information will be contained within the EndpointInfo
section of the response. For a model that has a real-time endpoint associated with it, the EndpointInfo
might look like this:
EndpointInfo":
{"CreatedAt": 1427864874.227,
"EndpointStatus": "READY",
"EndpointUrl": "https://endpointUrl",
"PeakRequestsPerSecond": 200}
A model without an associated real-time endpoint might return the following:

"EndpointInfo": {
    "EndpointStatus": "NONE",
    "PeakRequestsPerSecond": 0
}
8.4.3 Real-time Prediction Request Format
A sample Predict request payload might look like this:

{
    "MLModelId": "model-id",
    "Record": {
        "key1": "value1",
        "key2": "value2"
    },
    "PredictEndpoint": "https://endpointUrl"
}
The PredictEndpoint field must correspond to the EndpointUrl field of the EndpointInfo structure. This
field is used by Amazon ML to route the request to the appropriate servers in the real-time prediction fleet.
The MLModelId is the identifier of a previously trained and mounted model.
A Record is a map that contains the inputs to your Amazon ML model. It is analogous to a single row of
data in your training data set, without the target variable. Values in the Record map should have the same
type as the training data. For example, variables that are NUMERIC in the training data also should be
numbers in the Record map.
Note
You can omit variables for which you do not have a value, although be aware that this might
lower the accuracy of your prediction.
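A boto3 sketch of the same request (the model ID, endpoint URL, and record values are hypothetical; boto3 expects the Record values as strings):

import boto3

ml = boto3.client("machinelearning")
response = ml.predict(
    MLModelId="ml-example-model-id",
    Record={"key1": "value1", "key2": "123"},   # one observation, minus the target variable
    PredictEndpoint="https://endpointUrl")      # the EndpointUrl of the real-time endpoint
print(response["Prediction"])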
8.4.4 Real-time Prediction Response Format
The format of the response returned by Predict requests depends on the type of model that is being queried
for prediction. In all cases, the details field will contain information about the prediction request, notably
including the PredictiveModelType field with the model type.
The following example shows a response for a BINARY model:
{
    "Prediction": {
        "details": {
            "PredictiveModelType": "BINARY"
        },
        "predictedLabel": "0",
        "predictedScores": {
            "0": 0.47380468249320984
        }
    }
}
Notice the predictedLabel field that contains the predicted label, in this case 0. Amazon ML computes the
predicted label by comparing the prediction score against the classification cut-off:
• You can obtain the classification cut-off that is currently associated with an ML model by inspecting
the ScoreThreshold field in the response of the GetMLModel operation, or by viewing the model information
in the Amazon ML console. If you do not set a score threshold, Amazon ML uses the default value
of 0.5.
• You can obtain the exact prediction score for a binary classification model by inspecting the
predictedScores map. Within this map, the predicted label is paired with the exact prediction score.
The following example shows a response for a REGRESSION model. Notice that the predicted numeric
value is found in the predictedValue field:
{
    "Prediction": {
        "details": {
            "PredictiveModelType": "REGRESSION"
        },
        "predictedValue": 15.508452415466309
    }
}
The following example shows a response for a MULTICLASS model:

{
    "Prediction": {
        "details": {
            "PredictiveModelType": "MULTICLASS"
        },
        "predictedLabel": "orange",
        "predictedScores": {
            "red": 0.12923571467399597,
            "green": 0.08416014909744263,
            "orange": 0.22713537514209747,
            "blue": 0.1438363939523697,
            "pink": 0.184102863073349,
            "violet": 0.12816807627677917,
            "brown": 0.10336143523454666
        }
    }
}
Similar to binary classification models, the predicted label/class is found in the predictedLabel field. You
can further understand how strongly the prediction is related to each class by looking at the
predictedScores map. The higher the score of a class within this map, the more strongly the prediction is
related to the class, with the highest value ultimately being selected as the predictedLabel.
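For example, re-deriving the winning class from the predictedScores map of the preceding response takes one line of Python:

scores = {"red": 0.129, "green": 0.084, "orange": 0.227, "blue": 0.144,
          "pink": 0.184, "violet": 0.128, "brown": 0.103}   # abbreviated from the example above
print(max(scores, key=scores.get))                          # orange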
CHAPTER NINE
MANAGING AMAZON MACHINE LEARNING OBJECTS
Amazon ML provides four objects that you can manage through the Amazon ML console or the Amazon
ML API:
• Datasources
• ML models
• Evaluations
• Batch predictions
Each object serves a different purpose in the lifecycle of building a machine learning application, and each
object has specific attributes and functionality that apply only to that object. Despite these differences, you
manage the objects in similar ways. For example, you use almost identical processes for listing objects,
retrieving their descriptions, and updating or deleting them.
The following sections describe the management operations that are common to all four objects and notes
any differences.
9.1 Listing Objects
The Entities page of the Amazon ML console displays a dashboard view of the four object types, as shown
in the following screenshot.
This dashboard view shows columns that are common to all object types: their names, IDs, status codes,
and creation times. You can select individual items to display additional details, including details that are
specific to that object type. For example, you can expand a datasource to see the target field name and type
as well as the number of variables.
You can sort list views by any field by selecting the double-triangle icon next to a column header.
9.1.1 Console Display Limit
On the Amazon ML console, you can view up to 1,000 of your most recently created objects of any given
type. To find objects that are not displayed, simply enter the name or ID of the object in the search box.
9.1.2 Listing Objects Through the API
You can list objects by using the following operations in the Amazon ML API
(http://docs.aws.amazon.com/machine-learning/latest/APIReference/):
• DescribeDataSources
• DescribeMLModels
• DescribeEvaluations
• DescribeBatchPredictions
Each of these operations includes parameters for filtering, sorting, and paginating through a long list of
objects. There is no limit to the number of objects that you can access through the API.
The API response to a Describe* command includes a pagination token (nextPageToken) if appropriate,
and a list of lightweight descriptions for each object. The size of the list is limited by the Limit parameter
in the command, which can take a maximum value of 100. Note that the response might include fewer
objects than the specified limit and still include a nextPageToken indicating that more results are available.
API users should even be prepared to receive a response that has 0 items in the list but still contains a
nextPageToken. This is an unlikely but possible scenario.
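A boto3 sketch of such a pagination loop for datasources (the same pattern applies to the other Describe* operations):

import boto3

ml = boto3.client("machinelearning")

kwargs = {"Limit": 100}                          # maximum page size
while True:
    page = ml.describe_data_sources(**kwargs)
    for ds in page.get("Results", []):           # a page may legitimately be empty
        print(ds["DataSourceId"], ds["Status"])
    token = page.get("NextToken")
    if not token:                                # no token means no more results
        break
    kwargs["NextToken"] = token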
The object descriptions include the same general information for all object types that is displayed in the
console. Additional details are included that are specific to that object type. For more information, see
Amazon ML API Reference (http://docs.aws.amazon.com/machine-learning/latest/APIReference/).
9.2 Retrieving Object Descriptions
You can view detailed descriptions of any object through the console or through the API.
9.2.1 Detailed Descriptions in the Console
To see descriptions on the console, navigate to a list for a specific type of object (datasource, ML model,
evaluation, or batch prediction). Next, locate the row in the table that corresponds to the object, either by
browsing through the list or by searching for its name or ID.
9.2.2 Detailed Descriptions from the API
Each object type has an operation that retrieves the full details of an Amazon ML object:
• GetDataSource
• GetMLModel
• GetEvaluation
• GetBatchPrediction
Each operation takes exactly two parameters: the object ID and a Boolean flag called Verbose. Calls with
Verbose set to true will include extra details about the object, resulting in higher latencies and larger
responses. To learn which fields are included by setting the Verbose flag, see the Amazon ML API
Reference (http://docs.aws.amazon.com/machine-learning/latest/APIReference/).
9.3 Updating Objects
Each object type has an operation that updates the details of an Amazon ML object (See Amazon ML API
Reference (http://docs.aws.amazon.com/machine-learning/latest/APIReference/)):
• UpdateDataSource
• UpdateMLModel
• UpdateEvaluation
• UpdateBatchPrediction
Each operation requires the object's ID to specify which object is being updated. You can update the names
of all objects. For datasources, evaluations, and batch predictions, you cannot update any other properties.
For ML models, you can also update the ScoreThreshold field, as long as the ML model does not
have a real-time prediction endpoint associated with it.
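For example, updating a model's score threshold with boto3 might look like this sketch (the model ID is hypothetical):

import boto3

ml = boto3.client("machinelearning")
ml.update_ml_model(
    MLModelId="ml-example-model-id",
    ScoreThreshold=0.75)   # allowed only while the model has no real-time endpoint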
9.4 Deleting Objects
You can delete Amazon ML objects when you are done with them by using the following operations (See
Amazon ML API Reference (http://docs.aws.amazon.com/machine-learning/latest/APIReference/)):
• DeleteDataSource
• DeleteMLModel
• DeleteEvaluation
• DeleteBatchPrediction
The only input parameter to each of these methods is the ID of the object that is being deleted.
There is no additional cost to keep Amazon ML objects after you are done with them. The primary benefit
of deleting objects is for organizational purposes.
Important
When you delete objects through the Amazon ML API, the effect is immediate, permanent,
and irreversible.
CHAPTER TEN
MONITORING AMAZON ML WITH AMAZON CLOUDWATCH METRICS
Amazon ML automatically sends metrics to Amazon CloudWatch so that you can gather and analyze usage
statistics for your ML models. For example, to keep track of batch and real-time predictions, you can
monitor the PredictCount metric according to the RequestMode dimension. The metrics are automatically
collected and sent to Amazon CloudWatch every five minutes. You can monitor these metrics by using the
Amazon CloudWatch console, AWS CLI, or AWS SDKs.
There is no charge for the Amazon ML metrics that are reported through CloudWatch. If you set alarms on
the metrics, you will be billed at standard CloudWatch rates (http://aws.amazon.com/cloudwatch/pricing/).
For more information, see the Amazon ML list of metrics in Amazon CloudWatch Namespaces,
Dimensions, and Metrics Reference
(http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/CW_Support_For_AWS.html)
in the Amazon CloudWatch Developer Guide.
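As a sketch with boto3, assuming the AWS/ML namespace and the MLModelId and RequestMode dimensions listed in that reference (the model ID is hypothetical):

import datetime
import boto3

cloudwatch = boto3.client("cloudwatch")
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/ML",                       # Amazon ML's CloudWatch namespace (assumed)
    MetricName="PredictCount",
    Dimensions=[
        {"Name": "MLModelId", "Value": "ml-example-model-id"},
        {"Name": "RequestMode", "Value": "REALTIME"}],   # or "BATCH" (assumed values)
    StartTime=datetime.datetime.utcnow() - datetime.timedelta(hours=1),
    EndTime=datetime.datetime.utcnow(),
    Period=300,                               # metrics arrive every five minutes
    Statistics=["Sum"])
print(stats["Datapoints"])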
CHAPTER ELEVEN
AMAZON MACHINE LEARNING REFERENCE
11.1 Granting Amazon ML Permissions to Read Your Data from Amazon S3
To create a datasource object from your input data in Amazon S3, you must grant Amazon ML the
following permissions to the S3 location where your input data is stored:
• GetObject permission on the S3 bucket and prefix.
• ListBucket permission on the S3 bucket. Unlike other actions, ListBucket must be granted
bucket-wide permissions (rather than on the prefix). However, you can scope the permission to a
specific prefix by using a Condition clause.
If you use the Amazon ML console to create the datasource, these permissions can be added to the bucket
for you. You will be prompted to confirm whether you want to add them as you complete the steps in the
wizard. The following example policy shows how to grant permission for Amazon ML to read data from the
sample location s3://examplebucket/exampleprefix, while scoping the ListBucket permission to only the
exampleprefix input path:
{
    "Version": "2008-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": { "Service": "machinelearning.amazonaws.com" },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::examplebucket/exampleprefix/*"
        },
        {
            "Effect": "Allow",
            "Principal": { "Service": "machinelearning.amazonaws.com" },
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::examplebucket",
            "Condition": { "StringLike": { "s3:prefix": "exampleprefix/*" }}
        }
    ]
}
To apply this policy to your data, you must edit the policy statement associated with the S3 bucket where
your data is stored.
To edit the permissions policy for an S3 bucket
1. Sign in to the AWS Management Console and open the Amazon S3 console
(https://console.aws.amazon.com/s3).
2. Right-click the bucket name and choose Properties.
3. Click Edit bucket policy.
4. Enter the preceding policy, customizing it as needed.
11.2 Granting Amazon ML Permissions to Output Predictions to
Amazon S3
To output the results of the batch prediction operation to Amazon S3, you must grant Amazon ML the
following permissions to the output location, which is provided as input to the Create Batch Prediction
operation:
• GetObject permission on your S3 bucket and prefix.
• PutObject permission on your S3 bucket and prefix.
• PutObjectAcl on your S3 bucket and prefix.
– Amazon ML needs this permission to ensure that it can grant the canned ACL
(http://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl)
bucket-owner-full-control permission to your AWS account, after objects are created.
• ListBucket permission on the S3 bucket. Unlike other actions, ListBucket must be granted
bucket-wide permissions (rather than on the prefix). You can, however, scope the permission to a
specific prefix by using a Condition clause.
If you use the Amazon ML console to create the batch prediction request, these permissions can be added
to the bucket for you. You will be prompted to confirm whether you want to add them as you complete the
steps in the wizard.
The following example policy shows how to grant permission for Amazon ML to write data to the sample
location s3://examplebucket/exampleprefix, while scoping the ListBucket permission to only the
exampleprefix input path, and granting Amazon ML permission to set put-object ACLs on the
output prefix:
{
    "Version": "2008-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": { "Service": "machinelearning.amazonaws.com" },
            "Action": [
                "s3:GetObject",
                "s3:PutObject"
            ],
            "Resource": "arn:aws:s3:::examplebucket/exampleprefix/*"
        },
        {
            "Effect": "Allow",
            "Principal": { "Service": "machinelearning.amazonaws.com" },
            "Action": "s3:PutObjectAcl",
            "Resource": "arn:aws:s3:::examplebucket/exampleprefix/*",
            "Condition": { "StringEquals": { "s3:x-amz-acl": "bucket-owner-full-control" }}
        },
        {
            "Effect": "Allow",
            "Principal": { "Service": "machinelearning.amazonaws.com" },
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::examplebucket",
            "Condition": { "StringLike": { "s3:prefix": "exampleprefix/*" }}
        }
    ]
}
To apply this policy to your data, you must edit the policy statement on the S3 bucket where your data
resides. To edit the policy:
1. Sign in to the Amazon S3 console.
2. Click the bucket name.
3. Click Properties.
4. Click Edit bucket policy.
5. Enter the preceding policy, customizing it as needed.
11.3 Controlling Access to Amazon ML Resources by Using IAM
AWS Identity and Access Management (IAM) enables you to do the following:
• Create users and groups under your AWS account
• Assign unique security credentials to each user under your AWS account
• Control each user’s permissions to perform tasks using AWS resources
• Easily share your AWS resources between the users in your AWS account
• Create roles for your AWS account and define the users or services that can assume them
• Use existing identities for your enterprise to grant permissions to perform tasks using AWS resources
By using IAM with Amazon ML, you can control whether users in your organization can perform a task
using specific Amazon ML API actions, and whether they can use specific AWS resources.
For more information about IAM, see the following:
• Identity and Access Management (IAM)
• IAM Getting Started Guide
• Using IAM
11.3.1 Policy Syntax
An IAM policy is a JSON document that consists of one or more statements. Each statement is structured
as follows:
{
    "Statement": [{
        "Effect": "effect",
        "Action": "action",
        "Resource": "arn",
        "Condition": {
            "condition": {
                "key": "value"
            }
        }
    }]
}
There are various elements that make up a statement:
• Effect: The effect can be Allow or Deny. By default, IAM users don’t have permission to use
resources and API actions, so all requests are denied. An explicit allow overrides the default. An
explicit deny overrides any allows.
• Action: The action is the specific API action for which you are granting or denying permission.
• Resource: The resource that’s affected by the action. To specify a resource in the statement, you
need to use its Amazon Resource Name (ARN).
• Condition: Conditions are optional. They can be used to control when your policy will be in effect.
As you create and manage IAM policies, you might want to use the AWS Policy Generator and the IAM
Policy Simulator.
11.3.2 Actions for Amazon ML
In an IAM policy statement, you can specify any API action from any service that supports IAM. For
Amazon ML, use a machinelearning prefix with the name of the API action, as shown in the following
examples:
machinelearning:CreateDataSourceFromS3
machinelearning:DescribeDataSources
machinelearning:DeleteDataSource
machinelearning:GetDataSource
To specify multiple actions in a single statement, separate them with commas:
"Action": ["machinelearning:action1", "machinelearning:action2"]
You can also specify multiple actions using wildcards. For example, you
can specify all actions whose name begins with the word "Get":
"Action": "machinelearning:Get*"
To specify all Amazon ML operations, use the * wildcard:
"Action": "machinelearning:*"
For the complete list of Amazon ML API actions, see the Amazon Machine Learning API Reference
(http://docs.aws.amazon.com/machine-learning/latest/APIReference/).
11.3.3 Amazon Resource Names (ARNs) for Amazon ML
Each IAM policy statement applies to the resources that you specify by using their ARNs.
Use the following ARN resource format for Amazon ML resources:
"Resource": "arn:aws:machinelearning:region:account:resource-type/identifier"
Examples
Datasource ID: my-s3-datasource-id
"Resource":
"arn:aws:machinelearning:us-east-1:<your-account-id>:datasource/my-s3-datasource-id"
ML model ID: my-ml-model-id
"Resource":
"arn:aws:machinelearning:us-east-1:<your-account-id>:mlmodel/my-ml-model-id"
Batch prediction ID: my-batchprediction-id
"Resource":
"arn:aws:machinelearning:us-east-1:<your-account-id>:batchprediction/my-batchprediction-id"
Evaluation ID: my-evaluation-id
"Resource":
"arn:aws:machinelearning:us-east-1:<your-account-id>:evaluation/my-evaluation-id"
11.3.4 Example Policies for Amazon ML
Example 1: Allow users to read machine learning resources metadata
This policy allows a user or group to perform the DescribeDataSources, DescribeMLModels,
DescribeBatchPredictions, and DescribeEvaluations actions and the GetDataSource, GetMLModel,
GetBatchPrediction, and GetEvaluation actions on the specified entities; the Describe* operations'
permissions cannot be restricted to a particular resource:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "machinelearning:Get*"
            ],
            "Resource": [
                "arn:aws:machinelearning:us-east-1:<your-account-id>:datasource/S3-DS-ID1",
                "arn:aws:machinelearning:us-east-1:<your-account-id>:datasource/REDSHIFT-DS-ID1",
                "arn:aws:machinelearning:us-east-1:<your-account-id>:mlmodel/ML-MODEL-ID1",
                "arn:aws:machinelearning:us-east-1:<your-account-id>:batchprediction/BP-ID1",
                "arn:aws:machinelearning:us-east-1:<your-account-id>:evaluation/EV-ID1"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "machinelearning:Describe*"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}
Example 2: Allow users to create machine learning resources
This policy allows a user or group to perform CreateDataSourceFromS3, CreateDataSourceFromRedshift,
CreateDataSourceFromRDS, CreateMLModel, CreateBatchPrediction, and CreateEvaluation actions. The
permissions of these actions cannot be restricted to a particular resource:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "machinelearning:CreateDataSource*",
                "machinelearning:CreateMLModel",
                "machinelearning:CreateBatchPrediction",
                "machinelearning:CreateEvaluation"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}
Example 3: Allow users to manage (create/delete) real-time endpoints and perform real-time
predictions on a specific ML model
This policy allows users or groups to perform CreateRealtimeEndpoint, DeleteRealtimeEndpoint, and
Predict actions on a specific ML model.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "machinelearning:CreateRealtimeEndpoint",
                "machinelearning:DeleteRealtimeEndpoint",
                "machinelearning:Predict"
            ],
            "Resource": [
                "arn:aws:machinelearning:us-east-1:<your-account-id>:mlmodel/ML-MODEL"
            ]
        }
    ]
}
Example 4: Allow users to update and delete specific resources
This policy allows a user or group to perform UpdateDataSource, UpdateMLModel,
UpdateBatchPrediction, UpdateEvaluation, DeleteDataSource, DeleteMLModel, DeleteBatchPrediction,
and DeleteEvaluation actions on specific resources in your AWS account:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "machinelearning:Update*",
                "machinelearning:DeleteDataSource",
                "machinelearning:DeleteMLModel",
                "machinelearning:DeleteBatchPrediction",
                "machinelearning:DeleteEvaluation"
            ],
            "Resource": [
                "arn:aws:machinelearning:us-east-1:<your-account-id>:datasource/S3-DS-ID1",
                "arn:aws:machinelearning:us-east-1:<your-account-id>:datasource/REDSHIFT-DS-ID1",
                "arn:aws:machinelearning:us-east-1:<your-account-id>:mlmodel/ML-MODEL-ID1",
                "arn:aws:machinelearning:us-east-1:<your-account-id>:batchprediction/BP-ID1",
                "arn:aws:machinelearning:us-east-1:<your-account-id>:evaluation/EV-ID1"
            ]
        }
    ]
}
Example 5: Allow any Amazon ML action
This policy allows a user or group to use any Amazon ML action. Because this policy grants full access to
all your machine learning resources, you should restrict it to administrators only:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"machinelearning:*""
],
"Resource": [
"*"
]
}
]
}
11.4 Dependency Management of Asynchronous Operations
Batch operations in Amazon ML depend on other operations in order to complete successfully. To manage
these dependencies, Amazon ML identifies requests that have dependencies, and verifies that the operations
have completed. If the operations have not completed, Amazon ML sets the initial requests aside until the
operations that they depend on have completed.
There are some dependencies between batch operations. For example, before you can create an ML model,
you must have created a datasource with which you can train the ML model. Amazon ML cannot train an
ML model if there is no datasource available.
However, Amazon ML supports dependency management for asynchronous operations. For example, you
do not have to wait until data statistics have been computed before you can send a request to train an ML
model on the datasource. Instead, as soon as the datasource is created, you can send a request to train an
ML model using the datasource. Amazon ML does not actually start the training operation until the
datasource statistics have been computed. The createMLModel request is put into a queue until the
statistics have been computed; once that is done, Amazon ML immediately attempts to run the
createMLModel operation. Similarly, you can send batch prediction and evaluation requests for ML
models that have not finished training.
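For example, the following sketch (using boto3; the IDs, bucket, and schema location are
placeholders) issues the createMLModel request immediately after the createDataSource request,
without waiting for the datasource statistics to finish computing:

import boto3

ml = boto3.client("machinelearning")

# Create a datasource; ComputeStatistics must be True for a datasource used in training.
ml.create_data_source_from_s3(
    DataSourceId="ds-training-data",  # placeholder ID
    DataSpec={
        "DataLocationS3": "s3://your-bucket/banking.csv",                # placeholder
        "DataSchemaLocationS3": "s3://your-bucket/banking.csv.schema",   # placeholder
    },
    ComputeStatistics=True,
)

# No need to wait: Amazon ML queues this request and starts training
# only after the datasource statistics have been computed.
ml.create_ml_model(
    MLModelId="ml-my-model",  # placeholder ID
    MLModelType="BINARY",
    TrainingDataSourceId="ds-training-data",
)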
The following table shows the requirements for proceeding with different Amazon ML actions:

In order to…                                          You must have…
Create an ML model (createMLModel)                    Datasource with computed data statistics
Create a batch prediction (createBatchPrediction)     Datasource and ML model
Create an evaluation (createEvaluation)               Datasource and ML model
11.5 Operation Request Status
When you submit a request, you can check the status of the request through the Amazon ML API. For
example, if you submit a createMLModel request, you can check the status of the request by using the
describeMLModel call. Amazon ML will respond with one of the five statuses in the following table.
Status       Definition

PENDING      Amazon ML is validating the request.
             OR
             Amazon ML is waiting for computational resources to start running the request. This
             is likely because your account has hit the maximum number of concurrently running
             batch operation requests. If this is the case, the status will transition to
             INPROGRESS when other running requests have completed or been canceled.
             OR
             Amazon ML is waiting for a batch operation that your request depends on to complete.

INPROGRESS   Your request is currently running.

COMPLETED    The request has finished and the object is ready to be used (ML models and
             datasources) or viewed (batch predictions and evaluations).

FAILED       There is something wrong with the data that you provided, or you have canceled an
             operation. For example, if you try to compute data statistics on a datasource that
             failed to complete, you might receive an Invalid or Failed status message. The error
             message associated with the status explains more about why the operation was not
             completed successfully.

DELETED      The object has been previously deleted by the user.
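As an illustration, the following sketch (boto3; the model ID is a placeholder, and getMLModel is
used here to retrieve a single model's status) polls until a createMLModel request reaches a
terminal status:

import time
import boto3

ml = boto3.client("machinelearning")

MODEL_ID = "ml-my-model"  # placeholder

# Poll until the request leaves the PENDING and INPROGRESS states.
while True:
    status = ml.get_ml_model(MLModelId=MODEL_ID)["Status"]
    if status in ("COMPLETED", "FAILED", "DELETED"):
        break
    time.sleep(30)  # still PENDING or INPROGRESS: check again later

print("Final status:", status)

Recent versions of boto3 also expose waiters (for example, ml_model_available) that encapsulate
this polling loop.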
11.6 System Limits
In order to provide a robust, reliable service, Amazon ML imposes certain limits on the requests you make
to the system. Most ML problems fit easily within these constraints. However, if you do find that your use
of Amazon ML is being restricted by these limits, you can contact AWS customer service
(https://aws.amazon.com/contact-us/) and request to have a limit raised. For example, you might have a
limit of five for the number of jobs that you can run simultaneously. If you find that you often have jobs
queued that are waiting for resources because of this limit, then it probably makes sense to raise that limit
for your account.
The following table shows default per-account limits in Amazon ML. Not all of these limits can be raised
by AWS customer service.
Limit Type                                                    System Limit

Size of training data *                                       100 GB
Size of batch prediction input                                1 TB
Size of batch prediction input (number of records)            100 million
Number of variables in a data file (schema)                   1,000
Recipe complexity (number of processed output variables)      10,000
TPS for each real-time prediction endpoint                    200
Total TPS for all real-time prediction endpoints              10,000
Total RAM for all real-time prediction endpoints              10 GB
Number of simultaneous jobs                                   5
Longest run time for any job                                  7 days
Number of classes for multiclass ML models                    100
ML model size                                                 2 GB
* The size of your data files is limited to ensure that jobs finish in a timely manner. Jobs that
have been running for more than seven days will be automatically terminated, resulting in a
FAILED status.
11.7 Names and IDs for all Objects
Every object in Amazon ML must have an identifier, or ID. The Amazon ML console generates ID values
for you, but if you use the API you must generate your own. Each ID must be unique among all Amazon
ML objects of the same type in your AWS account. That is, you cannot have two evaluations with the same
ID. It is possible to have an evaluation and a datasource with the same ID, although it is not recommended.
We recommend that you use randomly generated identifiers for your objects, prefixed with a short string to
identify their type. For example, when the Amazon ML console generates a datasource, it assigns the
datasource a random, unique ID like “ds-zScWIuWiOxF”. This ID is sufficiently random to avoid
collisions for any single user, and it’s also compact and readable. The “ds-” prefix is for convenience and
clarity, but is not required. If you’re not sure what to use for your ID strings, we recommend using
hexadecimal UUID values (like 28b1e915-57e5-4e6c-a7bd-6fb4e729cb23), which are readily available in
any modern programming environment.
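A minimal sketch of this convention in Python (the prefixes are illustrative, matching the
console's style but, as noted above, not required):

import uuid

def make_id(prefix):
    # Random hexadecimal UUID, prefixed with a short type marker such as "ds" or "ml".
    return prefix + "-" + uuid.uuid4().hex

datasource_id = make_id("ds")   # e.g. "ds-28b1e91557e54e6ca7bd6fb4e729cb23"
model_id = make_id("ml")        # 35 characters, well within the 64-character limit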
ID strings can contain ASCII letters, numbers, hyphens, and underscores, and can be up to 64
characters long. It is possible, and perhaps convenient, to encode metadata into an ID string,
but this is not recommended because once an object has been created, its ID cannot be changed.
Object names provide an easy way for you to associate user-friendly metadata with each object. You can
update names after an object has been created. This makes it possible for the object’s name to reflect some
aspect of your ML workflow. For example, you might initially name an ML model “experiment #3”, and
then later rename the model “final production model”. Names can be any string you want, up to 1,024
characters.
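For example, renaming a model through the API might look like this sketch (boto3; the ID and
names are placeholders):

import boto3

ml = boto3.client("machinelearning")

# Rename an existing model; only the friendly name changes, never the ID.
ml.update_ml_model(MLModelId="ml-my-model", MLModelName="final production model")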
11.8 Object Lifetimes
Any datasource, ML model, evaluation, or batch prediction object that you create with Amazon ML will be
available for your use for at least two years after creation. Amazon ML might automatically remove objects
that have not been accessed or used for over two years.
CHAPTER
TWELVE
ABOUT AMAZON WEB SERVICES
Amazon Web Services (AWS) is a collection of digital infrastructure services that developers can leverage
when developing their applications. The services include computing, storage, database, and application
synchronization (messaging and queuing). AWS uses a pay-as-you-go service model: you are charged only
for the services that you—or your applications—use.
INDEX

B
batch prediction terms, 1

C
cloud watch metrics, 98
create datasources, 27

D
datasource terms, 1

E
evaluate ML Models, 73
evaluation terms, 1

F
feature transform, 61

I
interpret predictions, 83

K
key concepts, 1

M
manage Amazon Machine Learning Objects, 94
model terms, 1

R
real-time predictions, 83
recipe, 61
Reference, 99

S
S3 permission, 99
set up, 5

T
training ML models, 55
training parameters, 55
transforming data, 61
tutorial, 7
types of ML Models, 55

W
what is Amazon Machine Learning, 1