Integrating Google’s AutoDraw AI API with Angular

This post was written by Juan Delgadillo, an international software engineer, mentor, and traveler with more than a decade of experience helping startups and companies worldwide design, build, and deploy their web and mobile applications, with a strong emphasis on added value, user experience, and efficiency.

In this post I’ll explain how to integrate Google’s AutoDraw AI API with Angular through Canvas. AutoDraw is an AI experiment Google launched a few weeks ago: it lets you draw, or at least doodle, anything you’d like, and then, using pre-trained artificial neural networks, it recommends icons you might be looking for.


Installing Angular CLI

Angular CLI allows us to create Angular projects more quickly. To install it, run this command:

npm install -g @angular/cli

Once the previous command has finished, you can verify your Angular CLI version by typing:

ng --version

Creating our Angular app

Now that we already have Angular CLI installed, let’s create our project by running this command:

ng new angular-autodraw

We’ll have to wait a bit for the previous command to finish; it will create our Angular app structure, with the folders and files needed to run the project, and install its npm dependencies.

When it has finished, we can move to the angular-autodraw directory and launch our app by typing:

cd angular-autodraw
npm start

We’ll be able to see our app running at http://localhost:4200

AutoDraw’s UI

Now that we have our app up and running, let’s define how it will look from an HTML perspective. We’ll see the full code first and then discuss it:

<div class="canvas">
  <canvas #canvas width="350" height="350"></canvas>
  <button type="button" class="clear-canvas-button" (click)="eraseCanvas()">Clear canvas</button>
</div>
<div class="autodraw-results">
  <ng-template ngFor [ngForOf]="drawSuggestions" let-suggestion>
    <figure class="autodraw-image" *ngFor="let icon of suggestion.icons" (click)="pickSuggestion(icon)">
      <img src="{{ icon }}" width="90" height="90" alt="{{ suggestion.name }}" title="{{ suggestion.name }}">
    </figure>
  </ng-template>
</div>

As you can see, our AutoDraw component’s HTML template is very simple: we need a canvas element to draw on, and a container to put the results in. Notice the #canvas local variable; we’ll need it later in our component to reference the canvas element and subscribe to some mouse events, making decisions depending on them.
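To make the template’s nested loops concrete, here is a sketch of the data shape drawSuggestions is expected to hold (the names and URLs below are invented for illustration):

```typescript
// Hypothetical shape of the `drawSuggestions` array the template iterates over:
// each suggestion pairs a recognized name with the stencil icon URLs matched to it.
interface DrawSuggestion {
  name: string;    // label recognized by AutoDraw, e.g. 'cat'
  icons: string[]; // stencil image URLs for that label
}

const drawSuggestions: DrawSuggestion[] = [
  { name: 'cat', icons: ['assets/stencils/cat-1.svg', 'assets/stencils/cat-2.svg'] },
  { name: 'sun', icons: ['assets/stencils/sun-1.svg'] }
];

// The outer ngFor walks the suggestions and the inner *ngFor walks each
// suggestion's icons, so the number of rendered <figure> elements equals
// the total number of icons:
const renderedFigures = drawSuggestions
  .reduce((total, suggestion) => total + suggestion.icons.length, 0);
```
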

AutoDraw’s service

We’ll create a service that will allow us to make requests to Google’s AutoDraw API. It will also be in charge of loading the stencils that we’ll match against the draw suggestions once a request resolves.
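Before diving into the service code, a note on the stencils file it loads. The shape sketched here is my assumption, inferred from how drawSuggestions() indexes it: an object keyed by recognized name, each entry listing stencil variants with a src URL (file names are made up):

```typescript
// Assumed stencils.json shape: recognized name -> list of stencil variants.
const stencils: { [name: string]: { src: string }[] } = {
  cat: [
    { src: 'assets/stencils/cat-1.svg' },
    { src: 'assets/stencils/cat-2.svg' }
  ],
  sun: [
    { src: 'assets/stencils/sun-1.svg' }
  ]
};

// This mirrors how the service maps a recognized result name to icon URLs,
// falling back to an empty list when there is no stencil for that name:
const catIcons = (stencils['cat'] || []).map(collection => collection.src);
const unknownIcons = (stencils['unicorn'] || []).map(collection => collection.src);
```
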

import { Headers, RequestOptions, Http } from '@angular/http';
import { Injectable } from '@angular/core';
import 'rxjs/add/operator/map';

const API_ENDPOINT = 'https://inputtools.google.com/request?ime=handwriting&app=autodraw&dbg=1&cs=1&oe=UTF-8';
const STENCILS_ENDPOINT = 'src/data/stencils.json';

@Injectable()
export class AutoDrawService {
  stencils;

  constructor (
    private http: Http
  ) { }

  loadStencils () {
    this.http.get(STENCILS_ENDPOINT).subscribe(response => this.stencils = response.json());
  }

  drawSuggestions (
    shapes: Array<Array<number[]>>,
    drawOptions: {
      canvasWidth: number,
      canvasHeight: number
    }
  ) {
    let headers = new Headers({
      'Content-Type': 'application/json; charset=utf-8'
    });
    let options = new RequestOptions({ headers });

    return this.http.post(
      API_ENDPOINT,
      JSON.stringify({
        input_type: 0,
        requests: [{
          language: 'autodraw',
          writing_guide: {
            width: drawOptions.canvasWidth,
            height: drawOptions.canvasHeight
          },
          ink: shapes
        }]
      }),
      options
    ).map(response => {
      let data = response.json();
      let results = JSON.parse(data[1][0][3].debug_info.match(/SCORESINKS: (.*) Service_Recognize:/)[1])
        .map(result => {
          return {
            name: result[0],
            icons: (this.stencils[result[0]] || []).map(collection => collection.src)
          };
        });

      return results;
    });
  }
}

This service has two main methods: loadStencils(), which loads all the stencils we’ll match the request’s response against, and drawSuggestions(), which is in charge of making the request to Google’s AutoDraw API and returns the matching stencils that will become our draw suggestions.
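The ink format the API expects deserves a note: each shape is three parallel arrays — x coordinates, y coordinates, and the elapsed times in milliseconds since the stroke started. Here is a minimal sketch of the payload drawSuggestions() builds, with invented sample points:

```typescript
// Each shape is a triple of parallel arrays: [xs, ys, times].
type Shape = [number[], number[], number[]];

// A hypothetical three-point stroke along a short diagonal.
const stroke: Shape = [
  [10, 20, 30], // x coordinates
  [15, 25, 35], // y coordinates
  [0, 9, 18]    // ms since the stroke began, sampled roughly every 9ms
];

const shapes: Shape[] = [stroke];

// The request body built around the shapes, as the service does.
const payload = {
  input_type: 0,
  requests: [{
    language: 'autodraw',
    writing_guide: { width: 350, height: 350 }, // the canvas dimensions
    ink: shapes
  }]
};
```
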

Now we’ve got the service and the UI; the only thing missing is our main functionality, which will live in our component. So let’s go ahead and create it.

AutoDraw’s functionality

First, let’s see the code; then we’ll discuss the important parts:

import { Component, OnInit, OnDestroy, ViewChild } from '@angular/core';
import { Subscription } from 'rxjs/Subscription';
import { Observable } from 'rxjs/Observable';
import { AutoDrawService } from './services';
import 'rxjs/add/observable/fromEvent';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.scss']
})
export class AppComponent implements OnInit, OnDestroy {
  @ViewChild('canvas') canvas;

  drawSuggestions: Array<object>;
  canvasMouseEventSubscriptions: Subscription[];
  storeCoordinateInterval: number; // interval id, kept so mouseup can clear it

  previousXAxis: number = 0;
  previousYAxis: number = 0;
  currentXAxis:  number = 0;
  currentYAxis:  number = 0;

  context;
  pressedAt: number;
  pressing:  boolean = false;
  currentShape: Array<number[]>;
  shapes: Array<Array<number[]>> = [];
  intervalLastPosition: number[] = [-1, -1];

  constructor (
    private autoDrawService: AutoDrawService
  ) {}

  ngOnInit () {
    this.autoDrawService.loadStencils();
    this.context = this.canvas.nativeElement.getContext('2d');
    let mouseEvents = ['mousemove', 'mousedown', 'mouseup', 'mouseout'];

    this.canvasMouseEventSubscriptions = mouseEvents.map(
      (mouseEvent: string) => Observable
        .fromEvent(this.canvas.nativeElement, mouseEvent)
        .subscribe((event: MouseEvent) => this.draw(event))
    );
  }

  ngOnDestroy () {
    for (let mouseEventSubscription of this.canvasMouseEventSubscriptions) {
      mouseEventSubscription.unsubscribe();
    }
  }

  eraseCanvas () {
    this.shapes = [];
    this.context.clearRect(0, 0, this.canvas.nativeElement.width, this.canvas.nativeElement.height);
  }

  prepareNewShape () {
    this.currentShape = [
      [], // X coordinates
      [], // Y coordinates
      []  // Times
    ];
  }

  storeCoordinates () {
    if (this.intervalLastPosition[0] !== this.previousXAxis && this.intervalLastPosition[1] !== this.previousYAxis) {
      this.intervalLastPosition = [this.previousXAxis, this.previousYAxis];
      this.currentShape = [
        [...this.currentShape[0], this.previousXAxis],
        [...this.currentShape[1], this.previousYAxis],
        [...this.currentShape[2], Date.now() - this.pressedAt]
      ];
    }
  }

  onDrawingMouseDown (mouseEvent: MouseEvent) {
    let drawColorStartingPoint = 'black';

    this.previousXAxis = this.currentXAxis;
    this.previousYAxis = this.currentYAxis;
    this.currentXAxis = mouseEvent.clientX - this.canvas.nativeElement.offsetLeft;
    this.currentYAxis = mouseEvent.clientY - this.canvas.nativeElement.offsetTop;

    this.pressing = true;
    this.pressedAt = Date.now();

    this.prepareNewShape();

    // Highlight the starting point of the stroke
    this.context.beginPath();
    this.context.fillStyle = drawColorStartingPoint;
    this.context.fillRect(this.currentXAxis, this.currentYAxis, 2, 2);
    this.context.closePath();

    // Stores coordinates every 9ms
    return window.setInterval(() => this.storeCoordinates(), 9);
  }

  onDrawingMouseMove (mouseEvent: MouseEvent) {
    let drawStroke = 8, drawColor = 'black';

    this.previousXAxis = this.currentXAxis;
    this.previousYAxis = this.currentYAxis;
    this.currentXAxis = mouseEvent.clientX - this.canvas.nativeElement.offsetLeft;
    this.currentYAxis = mouseEvent.clientY - this.canvas.nativeElement.offsetTop;

    this.context.beginPath();
    this.context.moveTo(this.previousXAxis, this.previousYAxis);
    this.context.lineTo(this.currentXAxis, this.currentYAxis);
    this.context.strokeStyle = drawColor;
    this.context.fillStyle = drawColor;
    this.context.lineCap = 'round';
    this.context.lineJoin = 'round';
    this.context.lineWidth = drawStroke;
    this.context.stroke();
    this.context.closePath();
  }

  draw (mouseEvent: MouseEvent) {
    if (mouseEvent.type === 'mousedown') {
      // Keep the interval id on the instance so the mouseup branch can clear it
      this.storeCoordinateInterval = this.onDrawingMouseDown(mouseEvent);
    }

    if (mouseEvent.type === 'mouseup' || (this.pressing && mouseEvent.type === 'mouseout')) {
      this.pressing = false;
      clearInterval(this.storeCoordinateInterval);
      this.commitCurrentShape();
    }

    if (mouseEvent.type === 'mousemove' && this.pressing) {
      this.onDrawingMouseMove(mouseEvent);
    }
  }

  commitCurrentShape () {
    this.shapes.push(this.currentShape);
    let drawOptions = {
      canvasWidth: this.canvas.nativeElement.width,
      canvasHeight: this.canvas.nativeElement.height
    };

    this.autoDrawService.drawSuggestions(this.shapes, drawOptions)
      .subscribe(suggestions => this.drawSuggestions = suggestions);
  }

  pickSuggestion (source: string) {
    this.eraseCanvas();
    let image = new Image();
    image.onload = () => this.context.drawImage(image, 0, 0);
    image.src = source;
  }
}

This component injects the AutoDrawService we created before and creates a canvas instance variable pointing to the #canvas ViewChild we defined in our UI. In the ngOnInit lifecycle hook we load the AutoDrawService’s stencils and subscribe to the mouse events we’re interested in, passing each event to our main draw() method, which, depending on the MouseEvent type, calls other methods to perform specific logic.
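The heart of that dispatching is easy to isolate. Here is a small pure-function sketch of the decision draw() makes from the event type and the pressing flag (the Action names are my own):

```typescript
type Action = 'start' | 'commit' | 'draw' | 'none';

// Mirrors the branching in draw(): mousedown starts a shape, mouseup (or
// mouseout while pressing) commits it, and mousemove while pressing draws.
function dispatch(eventType: string, pressing: boolean): Action {
  if (eventType === 'mousedown') return 'start';
  if (eventType === 'mouseup' || (pressing && eventType === 'mouseout')) return 'commit';
  if (eventType === 'mousemove' && pressing) return 'draw';
  return 'none';
}
```

Note that mouseout only commits the shape while the button is down; simply hovering off the canvas does nothing.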

Conclusion

It’s been a pretty exciting experience integrating this Google API and seeing how AI is evolving every day. Nowadays, new tools and services are using deep neural networks and AI to perform tasks in many software disciplines, such as computer vision, speech and audio processing, natural language processing, bioinformatics, chemistry, search engines, and so forth.

It’s really easy to integrate and use any API with Angular, thanks to the powerful way it is designed and how naturally it integrates with reactive programming.

Currently I’m learning how to create deep neural networks and leverage their capabilities in the systems I’m working on; I’m reading the “Deep Learning” book by Ian Goodfellow, Yoshua Bengio, and Aaron Courville.

I hope by the end of this year to have implemented a deep neural network myself and to share the whole experience I went through, including how to integrate them using today’s frameworks and tools like Angular… stay tuned ;)

You can check out the full source code here, and see a live demo here.

Greetings and asynchronous hugs.