TensorFlow From CSV to API


This tutorial illustrates one way to train a feedforward neural network from a CSV file using TensorFlow. After training a model, we’ll set up a small REST API to serve requests that predict Iris species based on sepal length, sepal width, petal length and petal width.

The REST API and workflow are meant to be as legible as possible to illustrate how to use these features. I’ve avoided advanced topics in an attempt to keep the process clear and highlight what I feel are the important aspects of using TensorFlow.

NOTE: This code is not intended to be production quality.

You can try out the API by POSTing an example Iris with these features:

{
  "sepal_length": 4.9,
  "sepal_width": 3.1,
  "petal_length": 1.5,
  "petal_width": 0.1
}

to the /predictionrequest/ endpoint, then use the UUID in the response to GET the results of the prediction from the /predictionrequest/{uuid}/ endpoint.

Curl Commands

CREATE:
  curl -X POST --header 'Content-Type: application/json' --header 'Accept: application/json' -d '{
    "sepal_length": 4.9,
    "sepal_width": 3.1,
    "petal_length": 1.5,
    "petal_width": 0.1
  }' 'http://tensorflow-iris-api.cedarstreet.io/predictionrequest/'
  
GET IRIS:
    curl -X GET --header 'Accept: application/json' 'http://tensorflow-iris-api.cedarstreet.io/predictionrequest/<UUID RETURNED ABOVE>'
  

Setup

Why TensorFlow?

TensorFlow has a well-thought-out architecture, an active development team and a novice-friendly community.

TensorFlow includes useful features which save time on some machine learning tasks.

TensorFlow has focused on making complex machine learning tasks accessible to a wider audience. The team has been constantly improving the examples while extending the documentation. It's a solid foundation for a reliable project.

I’m going to skip the majority of the setup because those steps are covered in detail on TensorFlow’s website. All the code you’ll work on is in Python 3.

I recommend installing TensorFlow in a virtualenv.

Data

We’ll use this Iris dataset to power our example API. The dataset is a simple CSV which could be replaced with interesting data like personal health information, NBA stats or asteroid orbit information. This same process can be expanded on to create an end-to-end solution for training deep learning networks based on any CSV.

Our goal is to make a request of our API to find out which species of Iris we have based on its sepal length, sepal width, petal length and petal width. To do this, we’ll start by training a model on the Iris information we’ve found in a CSV.

Each row in our CSV looks like this:

5.1,3.5,1.4,0.2,Iris-setosa
7.0,3.2,4.7,1.4,Iris-versicolor

With the following layout:

Sepal Length | Sepal Width | Petal Length | Petal Width | Species
5.1          | 3.5         | 1.4          | 0.2         | Iris-setosa
7.0          | 3.2         | 4.7          | 1.4         | Iris-versicolor

We are going to use the first four columns from our training set as our features (x) to build our model. The last column, “Species”, we’ll use as our target class in training, which is often denoted “T” or “y_”. Our target class is what we want to predict: in this case, the species of Iris based on its attributes.

We can consider our training values as a rank 1 tensor. It’s important to make certain you understand what a tensor is. Familiarity with tensors, their dimensionality and their rank is important for what we’re building, and the technical vocabulary is used often.
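
For example, one Iris’s four feature values form a rank 1 tensor (a vector), while a batch of rows forms a rank 2 tensor; a quick sketch:

import tensorflow as tf

# One example's features: a rank 1 tensor with shape [4].
one_iris = tf.constant([5.1, 3.5, 1.4, 0.2])

# A batch of two examples: a rank 2 tensor with shape [2, 4].
iris_batch = tf.constant([[5.1, 3.5, 1.4, 0.2],
                          [7.0, 3.2, 4.7, 1.4]])

print(one_iris.get_shape())    # (4,)
print(iris_batch.get_shape())  # (2, 4)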

What’s a Tensor?

Load our First CSV

In order to load this information in TensorFlow we’ll use their CSV loading example.

Try it out; you can run this as a script or straight from a Jupyter Notebook.

import tensorflow as tf

directory = "./train/*.csv"
filename_queue = tf.train.string_input_producer(
    tf.train.match_filenames_once(directory),
    shuffle=True)

# Each file will have a header, we skip it and give defaults and type information
# for each column below.
line_reader = tf.TextLineReader(skip_header_lines=1)

_, csv_row = line_reader.read(filename_queue)

# Type information and column names based on the decoded CSV.
record_defaults = [[0.0], [0.0], [0.0], [0.0], [""]]
sepal_length, sepal_width, petal_length, petal_width, iris_species = \
    tf.decode_csv(csv_row, record_defaults=record_defaults)

# Turn the features back into a tensor.
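# (Note: in TensorFlow 1.0+, tf.pack was renamed tf.stack.)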
features = tf.pack([
    sepal_length,
    sepal_width,
    petal_length,
    petal_width])

with tf.Session() as sess:
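    # (Note: in TensorFlow 1.0+, use tf.global_variables_initializer instead.)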
    tf.initialize_all_variables().run()

    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(coord=coord)

    # We do 10 iterations (steps) where we grab an example from the CSV file. 
    for iteration in range(1, 11):
        # Our graph isn't evaluated until we use run unless we're in an interactive session.
        example, label = sess.run([features, iris_species])

        print(example, label)
    coord.request_stop()
    coord.join(threads)

That is all it takes to work with CSV information using TensorFlow. From here you could train any number of different types of networks.

One-hot Encoding


We’re about to change the iris_species values to a new format using a technique called “one-hot encoding”. The name alone sounds complex and cool but it is fairly trivial.

Currently our species (classes) are "Iris-setosa", "Iris-versicolor" and "Iris-virginica", which are all strings. Strings don’t work well with the type of training we’re about to do, so instead we need a unique number to represent each species.

We’ll encode our three species into a one-hot three-dimensional rank 1 tensor (a three-component vector). Instead of the three species names, we’ll now use their one-hot encodings.

Species Name (Class) | One-hot Encoded
Iris-setosa          | 001
Iris-versicolor      | 010
Iris-virginica       | 100

all_species = ["Setosa", "Versicolor", "Virginica"]
# The number of species types (target classes) is 3.
species_count = len(all_species)

# Print out the one-hot encoded string for each of the 3 species.
for i, species in enumerate(all_species):
    # %0*d zero-pads the number to the width given by the preceding argument (3).
    print("%s,%0*d" % (species, species_count, 10 ** i))

Training


I won’t go into much detail here; this training mechanism is common, and the network is known as a “feedforward neural network”. Both the technique and TensorFlow’s code examples are well documented.

If you don’t feel like you can explain why we’re training a neural network, then please watch this excellent video on the subject.

The training code used cross_entropy and a GradientDescentOptimizer, which are fairly common.
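
As a rough sketch of what that looks like (the single softmax layer and the 0.01 learning rate here are illustrative assumptions, not the exact values from the training script):

import tensorflow as tf

# Placeholders for the four features and the three one-hot target classes.
x = tf.placeholder(tf.float32, [None, 4])
y_ = tf.placeholder(tf.float32, [None, 3])

# A single softmax layer; the real network may add hidden layers.
W = tf.Variable(tf.zeros([4, 3]))
b = tf.Variable(tf.zeros([3]))
y = tf.nn.softmax(tf.matmul(x, W) + b)

# Cross-entropy between the predictions and the one-hot targets,
# minimized with plain gradient descent.
cross_entropy = tf.reduce_mean(
    -tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)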


Also, the creator of that video has an incredible book series I recommend, named Artificial Intelligence for Humans.

Testing

Now to see how our model performs. It’s important to separate the testing logic from your ongoing training, so we’ll create a new script to test. We separate this logic so we don’t impact the speed of training; often I’ll move a model checkpoint to a separate file system and then run the testing on a remote system.

Network testing logic can be seen here. It is a rudimentary form of testing because accuracy isn’t the full story. To get a better understanding we can use an F1 score and further research the Bias Variance Tradeoff.
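
For reference, a minimal accuracy check looks something like this (y, y_, x and sess are assumed to come from the training sketch above, and test_features/test_labels stand in for your held-out data):

# Fraction of test examples whose predicted class matches the target class.
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

print(sess.run(accuracy, feed_dict={x: test_features, y_: test_labels}))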

The Iris dataset is miniature so our testing won’t reveal much in this case except that we need more data.

API


We can train and test. Cool, but now we need to make an API so people can try our new model.

First things first, let’s not jump into making an API. APIs require forethought and planning because their purpose is to communicate your information to the wider world. Instead of coding up some “quick” API we’ll plan it out in advance using a Swagger 2.0 spec.

Thankfully this process is fun and can be accomplished using Swagger’s Online Editor. You’ll need to download this JSON file and then drag it into the editor, which will load up what we expect our API routes to be. It isn’t a fully fledged API but it will work for now.

The API has two routes: one to create a PredictionRequest and another to get the PredictionRequest. There is an important reason we used two separate routes.

In the example code, requesting a prediction responds immediately by running the prediction. This was done for clarity, but in a production environment I’d recommend using a queue to process these requests. That would allow you to have a web cluster serving predictions based on an interim volatile storage system placed between your web and application tiers.

Separation of these systems is important because using the model to predict a value can often take numerous milliseconds. When there aren’t many requests this won’t be trouble, but you’ll quickly hit resource exhaustion once concurrent users begin using your service.

While exhaustion of system resources is important, a critical secondary benefit is cost. GPU instances are extremely expensive to run, while t2.nano systems cost less than a coffee in SF per month. Allowing your web tier to run on t2.nano instances while only booting GPU instances periodically drastically reduces your costs.

TL;DR - Save Money and on to the API

Instead of returning the prediction immediately, we create a prediction request which the client is expected to poll until the status has changed. This process allows us to scale out our web/API tier separately from our machine learning tasks.
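
A sketch of that client-side flow, assuming the requests library is installed and assuming the response carries "uuid" and "status" fields (the exact field names and status values depend on the spec):

import time
import requests

API = "http://tensorflow-iris-api.cedarstreet.io/predictionrequest/"
iris = {"sepal_length": 4.9, "sepal_width": 3.1,
        "petal_length": 1.5, "petal_width": 0.1}

# Create the prediction request and keep its UUID.
uuid = requests.post(API, json=iris).json()["uuid"]

# Poll until the prediction request's status changes.
while True:
    result = requests.get(API + uuid + "/").json()
    if result.get("status") != "pending":  # status value is an assumption
        break
    time.sleep(1)

print(result)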

Next Steps

While an Iris-predicting API is not the next feline social network, the process used here is the backbone of many successful companies. TensorFlow is a project that puts building state-of-the-art neural networks within the grasp of novice software developers all over the world.

Please expand on this information and use it to build something cool. If I’ve made mistakes (I know I have), please highlight them on GitHub; all the code is open source.

Resources