Machine Learning with Tensorflow.js pt.2

A few weeks ago I got started with Tensorflow and covered Tensors and operations. This week I’m going to continue to cover the basic building blocks of Tensorflow and then go over an interactive example that incorporates these elements.

Tensors – These are essentially shaped collections of numbers. They can be multi-dimensional (arrays of arrays) or a single value. Tensors are immutable, which means they can't be changed once created, and they must be manually disposed of to avoid memory leaks in your application.

// 2x3 Tensor
const shape = [2, 3]; // 2 rows, 3 columns
const a = tf.tensor([1.0, 2.0, 3.0, 10.0, 20.0, 30.0], shape);
a.print(); // print Tensor values
// Output: [[1 , 2 , 3 ],
// [10, 20, 30]]

const c = tf.tensor2d([[1.0, 2.0, 3.0], [10.0, 20.0, 30.0]]);
c.print();
// Output: [[1 , 2 , 3 ],
// [10, 20, 30]]

Operations – An operation is just a mathematical function that can be used on a tensor. These include multiplication, addition, and subtraction.

const d = tf.tensor2d([[1.0, 2.0], [3.0, 4.0]]);
const d_squared = d.square();
d_squared.print();
// Output: [[1, 4 ],
// [9, 16]]


Models & Layers – A model is a function that performs some set of operations on tensors to produce a desired output. Models can be constructed using plain operations, but Tensorflow.js also ships with many built-in models and layers that rely on established learning and statistical methods.

// Define function
function predict(input) {
  // y = a * x ^ 2 + b * x + c
  // More on tf.tidy in the next section
  return tf.tidy(() => {
    const x = tf.scalar(input);

    const ax2 = a.mul(x.square());
    const bx = b.mul(x);
    const y = ax2.add(bx).add(c);

    return y;
  });
}

// Define constants: y = 2x^2 + 4x + 8
const a = tf.scalar(2);
const b = tf.scalar(4);
const c = tf.scalar(8);

// Predict output for input of 2
const result = predict(2);
result.print(); // Output: 24
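As a sanity check, the same polynomial evaluated in plain JavaScript (no tensors) gives the same answer:

```javascript
// The same y = 2x^2 + 4x + 8 computed without tensors, for comparison.
const a = 2, b = 4, c = 8;
const predict = (x) => a * x * x + b * x + c;

console.log(predict(2)); // 24, matching result.print() above
```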


Memory Management – Tensorflow.js uses your computer's GPU to handle most operations, which means typical JavaScript garbage collection isn't available. Tensorflow.js therefore includes the tidy and dispose methods, which let you release unused tensors from memory.

// tf.tidy takes a function to tidy up after
const average = tf.tidy(() => {
  // tf.tidy will clean up all the GPU memory used by tensors inside
  // this function, other than the tensor that is returned.
  // Even in a short sequence of operations like the one below, a number
  // of intermediate tensors get created. So it is a good practice to
  // put your math ops in a tidy!
  const y = tf.tensor1d([1.0, 2.0, 3.0, 4.0]);
  const z = tf.ones([4]);
  return y.sub(z).square().mean();
});

average.print(); // Output: 3.5
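tidy's sibling, dispose, isn't shown above; here is a quick sketch of manual disposal (assuming the Tensorflow.js script is loaded, as in a browser page):

```javascript
// dispose() is the manual counterpart to tidy: for tensors created
// outside a tidy, you release the GPU memory yourself when done.
const y = tf.tensor1d([1.0, 2.0, 3.0, 4.0]);
const sum = y.sum();

sum.print();
y.dispose();   // free y's GPU memory
sum.dispose(); // and the result's

// tf.memory() reports how many tensors are still allocated —
// handy for checking that nothing leaked.
console.log(tf.memory().numTensors);
```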


As with my last post much of this material comes from Tensorflow.js Getting Started and Dan Shiffman’s The Coding Train series.


How do we learn about these points?

We need:

  • A dataset
  • A loss function (how wrong are we?)
  • An optimizer (how do we improve?)
  • A predict function (how do we demonstrate what we’ve learned?)
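To make those four pieces concrete before bringing in tensors, here's a minimal plain-JavaScript sketch of the whole loop, fitting y = m*x + b to a made-up dataset (all names and values here are illustrative):

```javascript
// A minimal sketch of the four pieces: dataset, loss, optimizer, predict.
const xs = [0, 1, 2, 3];        // dataset: inputs
const ys = [1, 3, 5, 7];        // dataset: labels (drawn from y = 2x + 1)

let m = 0, b = 0;               // parameters we want to learn
const lr = 0.1;                 // learning rate

// predict: our current guess, y = m*x + b
const predict = (x) => m * x + b;

// loss: mean squared error — "how wrong are we?"
const loss = () =>
  xs.reduce((sum, x, i) => sum + (predict(x) - ys[i]) ** 2, 0) / xs.length;

// optimizer: one gradient-descent step — "how do we improve?"
function step() {
  let dm = 0, db = 0;
  for (let i = 0; i < xs.length; i++) {
    const err = predict(xs[i]) - ys[i];
    dm += (2 / xs.length) * err * xs[i];
    db += (2 / xs.length) * err;
  }
  m -= lr * dm;
  b -= lr * db;
}

for (let i = 0; i < 500; i++) step();
console.log(m.toFixed(2), b.toFixed(2)); // settles near 2 and 1
```

Tensorflow.js does the same thing, but with tensors instead of numbers and with the gradients computed for us.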

Simple Linear Regression:

In this case (linear regression) we need a bunch of X and Y values that we can plot. Let's say X represents the age of a building and Y represents its value. We set up arrays to hold these points.

let x_vals = [];
let y_vals = [];

Since we’re modeling a line, we need to find a slope and a Y-intercept to plug our x values into (y = m*x + b):

let m, b;

Now we select the sort of optimizer we want to use and set a learning rate (Tensorflow.js provides several optimizers under tf.train). In this case we'll use stochastic gradient descent (‘sgd’). The learning rate tells the optimizer how aggressively to adjust with each step. We don’t want our optimizer to react too much to any one data point, so this value is usually kept at 0.5 or below.

const learningRate = 0.5;
const optimizer = tf.train.sgd(learningRate);

Since we are trying to learn the slope (m) and Y-intercept (b) of our line, we need to hand the optimizer something to optimize. Here we initialize m and b to random values that will be continuously adjusted as our program learns. We're also going to create a canvas to help with visualization (this comes from the p5.js library).

function setup() {
  createCanvas(400, 400);
  m = tf.variable(tf.scalar(random(1)));
  b = tf.variable(tf.scalar(random(1)));
}

Now we want to define a function to figure out how wrong our current (random) slope and intercept values are compared to our dataset:

function loss(pred, labels) {
  return pred.sub(labels).square().mean();
}

This next function is where we apply what we've learned from our loss function and our optimizer. We need a best guess for the building value (Y) given any age (X).

function predict(x) {
  const xs = tf.tensor1d(x);
  // y = mx + b
  const ys = xs.mul(m).add(b);
  return ys;
}

We can now populate our learning dataset by using mouse clicks on the canvas (again, this is p5.js):

function mousePressed() {
  let x = map(mouseX, 0, width, 0, 1);
  let y = map(mouseY, 0, height, 1, 0);
  x_vals.push(x);
  y_vals.push(y);
}

This next function (called draw) is run in a loop by p5 to animate the project. You can see the predict function being called and tensors being cleaned up with the dispose and tidy functions. The only other Tensorflow functions we use here are a memory function, to make sure we have no leaks, and dataSync(), to pull the X and Y values out of the Tensorflow objects. Importantly, this could be set up using promises, but this example is simple and we should get values fast enough to render properly.

function draw() {
  // Run one optimization step on the data collected so far
  tf.tidy(() => {
    if (x_vals.length > 0) {
      const ys = tf.tensor1d(y_vals);
      optimizer.minimize(() => loss(predict(x_vals), ys));
    }
  });

  // Draw the data points
  for (let i = 0; i < x_vals.length; i++) {
    let px = map(x_vals[i], 0, 1, 0, width);
    let py = map(y_vals[i], 0, 1, height, 0);
    point(px, py);
  }

  // Draw the current best-fit line
  const lineX = [0, 1];
  const ys = tf.tidy(() => predict(lineX));
  let lineY = ys.dataSync();
  ys.dispose(); // done with this tensor, release its memory

  let x1 = map(lineX[0], 0, 1, 0, width);
  let x2 = map(lineX[1], 0, 1, 0, width);
  let y1 = map(lineY[0], 0, 1, height, 0);
  let y2 = map(lineY[1], 0, 1, height, 0);

  line(x1, y1, x2, y2);
}
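One non-Tensorflow helper doing a lot of work here is p5's map(), which linearly remaps a value from one range to another. In plain JavaScript it is roughly:

```javascript
// p5.js's map(value, inMin, inMax, outMin, outMax) is a linear remap.
function map(value, inMin, inMax, outMin, outMax) {
  return outMin + ((value - inMin) / (inMax - inMin)) * (outMax - outMin);
}

console.log(map(0.5, 0, 1, 0, 400));  // 200 — canvas midpoint
console.log(map(0.25, 0, 1, 400, 0)); // 300 — note the flipped output range
```

The flipped ranges (height, 0) are what convert between screen coordinates, where Y grows downward, and our data space, where Y grows upward.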


See the Pen wXzwdY by jesse (@SuperJesse) on CodePen.

Polynomial version:

See the Pen yEaBpB by jesse (@SuperJesse) on CodePen.


Machine learning in the browser with Tensorflow.js pt.1

In my previous blog post I discussed perceptrons, a very early example of machine learning. As a recap, perceptrons are simple learning algorithms that can solve linearly separable problems.

Two lines demonstrate the correct and predicted classification of each point on a grid
Perceptron solving a linearly separable problem (source: nature of code)

This is cool, but not very useful. As far as I can tell, most of the problems a perceptron can solve could be handled much more quickly by passing your data through a well-considered IF statement (e.g. if a coffee mug is in the photo, then it is a photo of coffee). These days we see machine learning applied to much more complicated problems. Self-driving cars are learning what a person looks like, can make assumptions about how they'll move, and can direct the car to respond based on this information. Much of this more advanced machine learning happens through multilayer perceptrons, neural nets, and other advanced methods.

Single layer perceptron (nature of code)
Multi-layer perceptron
Example of complex Non-linearly seperable data

One of the best ways to get started with these advanced machine learning algorithms is Google's Tensorflow library. It has been available as a Python library for some time and was recently updated to include a JavaScript library as well. In this post I'm going to cover how to get it running quickly, along with some basic concepts you need to understand as you get started. Much of this material is also covered in the getting started section of the tensorflow.js website.

The first thing to try is creating a basic HTML page that loads the Tensorflow library. This can be done very quickly.

Inserting the following script into your index.html file gives you access to the whole library.

<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs@latest/dist/tf.min.js"></script>

Any JavaScript or Tensorflow work can now be written inside additional script tags or linked from another file. Try out this very simple test that Tensorflow provides:
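A sketch of that getting-started test, roughly as it appears on the Tensorflow.js site (it fits a one-unit linear model to points drawn from y = 2x − 1, then predicts y for x = 5):

```javascript
// Roughly the Tensorflow.js getting-started example: learn y = 2x - 1
// from four example points, then predict y for x = 5.
const model = tf.sequential();
model.add(tf.layers.dense({units: 1, inputShape: [1]}));
model.compile({loss: 'meanSquaredError', optimizer: 'sgd'});

const xs = tf.tensor2d([1, 2, 3, 4], [4, 1]);
const ys = tf.tensor2d([1, 3, 5, 7], [4, 1]);

// Train briefly, then predict; with so few epochs the answer lands
// near 9 but varies a little on every page load.
model.fit(xs, ys, {epochs: 10}).then(() => {
  model.predict(tf.tensor2d([5], [1, 1])).print();
});
```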

When you open this page in your browser you should see the following:

We’re learning! You’ll notice that whenever we reload the page we get a slightly different value as the algorithm comes to a different approximation of the output for an unknown data point.

A few basic concepts:

Before you can program a self-driving car, you need to understand how Tensorflow works. As the name suggests, it mainly deals with "tensors". A tensor is basically a formation of numbers: "7" is a scalar, "[7, 8, 9]" is a vector, and a matrix is a 2D version of a vector. All of these are tensors. Tensors, as I understand them, are just shapes of numbers, and Tensorflow allows you to manipulate these numbers and have them interact with each other, typically through linear algebra. These interactions are called operations (ops) and include multiplication, addition, and subtraction of differently sized tensors.
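Since "shape" is carrying the weight in that definition, a tiny plain-JavaScript helper (illustrative, not part of Tensorflow) makes the idea concrete:

```javascript
// A tensor is numbers arranged in a shape. This helper reports the shape
// of a nested JavaScript array the way Tensorflow.js would describe it.
function shapeOf(t) {
  const shape = [];
  while (Array.isArray(t)) {
    shape.push(t.length);
    t = t[0];
  }
  return shape;
}

console.log(shapeOf(7));                      // [] — a scalar (rank 0)
console.log(shapeOf([7, 8, 9]));              // [3] — a vector (rank 1)
console.log(shapeOf([[1, 2, 3], [4, 5, 6]])); // [2, 3] — a matrix (rank 2)
```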

These tensor operations are how we adjust the weights that connect our various neurons. If you recall from the perceptron post, adjusting the weights that connect our neurons is how we "learn" associations between inputs and outputs. As we feed our ML application more data, we need larger and larger tensors, which is why a library that manages all that math and data is very useful.

Fun examples:

So what can we achieve with Tensorflow in the browser? This appears to be a common question so Tensorflow has a few great example projects on their site. Here are my favorites:


Posenet:

Woman using posenet

Teachable Machine:

Check out the Tensorflow.js website for more info and Dan Shiffman's YouTube series for some fun intro videos. Look for a follow-up to this post in 3 weeks!

Machine Learning, Perceptrons and Ruby

Machine learning (ML) refers to a collection of programming techniques that allow computers to make distinctions or recognize patterns without explicit commands. The field is based on statistical methods and emerged from artificial intelligence research in the late 1950s and early 1960s. Applications of ML include optical character recognition, sentiment analysis, computer vision, and prediction. People with ML experience are highly sought after in the job market, and learning-based algorithms are making more and more important decisions in our society. So as an emerging programmer, it's probably worthwhile to learn a bit about how machines learn.

Use conventional code if you can articulate a concrete series of actions that would produce the desired functionality.
Should I use Machine Learning? (Source: Learning Machines)


As an introduction to ML, this post will walk through how to build a single-layer perceptron in Ruby. The perceptron was one of the first functional ML algorithms. It was developed by Frank Rosenblatt in 1957 and was used to build a machine that could identify certain objects. At the time, Rosenblatt described the perceptron as "the embryo of an electronic computer that [the Navy] expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence."

I am far from an expert in this field, but luckily perceptrons are relatively straightforward models to build. I have seen them written in Python, Java, and JavaScript but had a hard time finding a Ruby version. Attempting to build one in Ruby seemed like a decent contribution that I could make.

Using a common biological analogy, a perceptron is like a single neuron in a larger brain. It is designed to take in inputs and based on those inputs generate an output for other neurons.


Diagram of a neuron
Neurons (Source: Nature of Code)


A diagram of a single layer perceptron
Perceptron (Source: Nature of Code)
A diagram that illustrates how a perceptron can be useful
Example use case (Source: Learning Machines)

Read More

Eat Your Feelings DB CLI

Eat Your Feelings DB (EYF) is a simple command line project that Edward N. and I developed for our Module One final project at the Flatiron School in 2018. EYF accesses restaurant reviews via the Google Places API and associates each review with an emotional analysis from Parallel Dots. We built an object-oriented model and an Active Record database to select restaurants and reviewers based on the emotional content of their respective reviews.

This app can be forked on GitHub and cloned to a local machine. To re-seed the database, drop the existing tables and re-migrate. Enter Parallel Dots and Google Places API keys in the appropriate spots in the db/seeds.rb file. If you'd like to analyze different restaurants, replace the strings in the "Places" array with alternative Google Place IDs. Once you have done this, run "bundle install" and then "rake db:seed" in the terminal, and it should populate your database with reviews and emotional analysis. You should then be able to enter "ruby bin/run.rb" in the terminal to launch the application. Enter "all restaurants" to confirm the data was properly entered.
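In terminal form, those steps look something like this (the clone URL is a placeholder for your own fork):

```shell
# Setup sketch for EYF — the repo path below is a placeholder for your fork.
git clone https://github.com/<your-username>/eat-your-feelings-db.git
cd eat-your-feelings-db

# Add your Parallel Dots and Google Places API keys to db/seeds.rb first!
bundle install   # install gem dependencies
rake db:seed     # fetch reviews + sentiment and populate the database
ruby bin/run.rb  # launch the CLI; type "all restaurants" to verify the seed
```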

We would value any contributions in areas of user interface, automatically selecting / adding new restaurants, and performing more analytical operations on the dataset.

We ran into a few interesting things while working on this project. First, we spent a while looking through different APIs and the values they return to find what would fit our concept best. Originally we envisioned using Yelp and IBM Watson, but both APIs proved too restrictive and too generalized. Parallel Dots and Google Places, on the other hand, worked very well with some minor restrictions; mainly, Google Places limits each query to 4 reviews, which was fine for our purposes but may limit more rigorous analysis.

Another API related challenge we are still working on is how to manage API keys while regularly pushing to GitHub. We ultimately had to reset our API keys after accidentally making them public. Attempts to undo this were frustrating and we still have a lot to learn about protecting credentials like this. [Update: We learned how to use gitignore!]

Lastly, building the interface and having a good framework for what our methods should return was a recurring issue. In some cases we wanted the object methods "puts-ing" text to the screen; other times we wanted a second helper method to format the output for the user. It was hard to figure out how to make this consistent and what our data types should be. There is probably a lot of room for improvement here too. Ultimately we are happy with our final product and think that EYF represents a great proof of concept for how machine learning and sentiment analysis can be applied.

Building simple Twitter bots with Ruby!

Build your own bot

If you're interested in building interactive web-based software, Twitter bots are a great place to start. Twitter is a fairly open platform that can be easily accessed through APIs and libraries, which are available for many different programming languages. Combining this access with other APIs, datasets, and original content allows for thousands of possibilities, and projects can be completed with only a couple of hours of work.

Some fun example bots:

Think Piece Bot – This bot generates headlines for imaginary opinion articles.

Census American Bot – Tweets anonymized details from real US census forms.

Stealth Mountain – This bot replies to anyone who writes about a “sneak peak” rather than a “sneak peek”, a major problem among media types.

New York Times First Said – Tweets each time the NYT uses a word for the first time in its history.

Fast Plants & News – This account tweets a picture of some hydroponics plants and associates that with news clips.

Read More