There is no shortage of skeptics of the claim that Artificial Intelligence is used behind The Grid, a tool for generating designs dictated by content.

In order to dispel some of the skepticism surrounding The Grid's AI, we must test its capacity to solve familiar types of artificial intelligence problems. In particular, we reapply the same technology behind The Grid, Grid Style Sheets (GSS), to perform the familiar task of classification, a well-known application of machine learning.

By testing the GSS engine's capacity to solve classification problems, we can develop an intuition for its predictive power. If GSS is able to solve the general binary classification problem, then it is applicable to the entire class of problems that classification has solved, including face detection, handwriting recognition, and pattern recognition. This highlights much of the potential behind The Grid's technology.

This article will feature a live demo of support vector machine training using GSS. We start from theory and work our way down to a practical implementation with simple HTML, CSS, and JavaScript.

We begin by introducing the problem of classification and how support vector machines are used to solve it. Second, we show how support vector machines can be trained with linear programming. Finally, we build a support vector machine with GSS.

Red-Blue Classification

We begin with a simple example. Suppose that we have a set of red and blue balls arranged on a table. An example of an arrangement can be seen below.

Training Data

We would like to compute the position and orientation of a separator that separates red balls from blue balls. We may also observe that there are many such feasible separators. Any of them is fine for now; we select one arbitrarily.

Feasible Boundaries

As above, selecting a linear separator to distinguish two classes of objects is an example of a classification problem.

Support vector machines (SVM) are models capable of learning the optimal position and orientation of the separator. Formally, the separator is known as the decision boundary.

In higher dimensions, SVMs compute hyperplanes.

In particular, SVMs have the property that the computed decision boundary maximizes the distance between its two closest training points of opposite class. These points are known as the support vectors. In our example, the separator is precisely at the midpoint between the two closest balls of opposite color. We can visualize this as below.

Support Vector Machine

To summarize the SVM learning process:

  1. Plot the training data in space
  2. Propose linear separators that separate the data by their class
  3. Select the linear separator that maximizes the distance between the two closest training points of opposite class
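
For one-dimensional data like our colored balls, these steps can be sketched directly in JavaScript. The brute-force search below is our own illustration of the process, not part of GSS; the `X` and `y` objects mirror the data encoding used later in this article:

```javascript
// Brute-force 1-D max-margin separator: for separable 1-D data, the optimal
// boundary lies at the midpoint between the rightmost negative point and
// the leftmost positive point (the two support vectors).
function maxMarginSeparator(X, y) {
  var maxNeg = -Infinity, minPos = Infinity;
  for (var key in X) {
    if (y[key] === -1 && X[key] > maxNeg) maxNeg = X[key];
    if (y[key] === 1 && X[key] < minPos) minPos = X[key];
  }
  // The midpoint maximizes the distance to the closest point of each class.
  return (maxNeg + minPos) / 2;
}

var X = {'a': 0, 'b': 70, 'c': 140, 'z': 200, 'y': 300};
var y = {'a': -1, 'b': -1, 'c': -1, 'z': 1, 'y': 1};
maxMarginSeparator(X, y); // → 170, the midpoint between c (140) and z (200)
```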

SVMs as Linear Programming

For those who are more mathematically inclined, we can formulate the learning process as a linear programming problem in which we seek the parameters of the position and orientation of the separator such that it correctly separates the red and blue balls. Formally, we can characterize linear separators by equations of the form $y = wx + b$, parameterized by $w$ and $b$. Given these parameters, we can formulate the problem as follows:

$$ \begin{equation*} \begin{array}{llll} &\min_{w,b} &w b \\ &\textrm{subject to} &wx_i + b &\leq -1 &\textrm{when } \operatorname{color}(x_i) = \textrm{red} \\ && wx_i + b &\geq 1 &\textrm{when } \operatorname{color}(x_i) = \textrm{blue} \end{array} \end{equation*} $$

As we will soon discover, Cassowary, the linear constraint solving algorithm used by the GSS engine, is able to solve linear programming problems of this form. We exploit this fact to build support vector machines.

Linear programming with GSS

GSS defines a domain-specific language, Constraint CSS (CCSS), for defining linear constraints on CSS properties. This language is beyond the scope of this article, but you can learn more about it in the CCSS documentation. What is important to know is that CCSS is made possible through the Cassowary linear constraint solver. Specifically, CCSS encodes the variables for Cassowary to solve.

Cassowary defines an objective function which is the minimization of the unsatisfied constraints:

\begin{equation*} \begin{array}{ll} &\min_{\epsilon} &||\epsilon||_1 \\ &\textrm{subject to} &Ax \geq y - \epsilon \end{array} \end{equation*}

We can interpret $\epsilon$ as a vector containing a component $\epsilon_j \geq 0$ for each constraint indexed at $j$. $\epsilon_j = 0$ means that the constraint was satisfied; $\epsilon_j > 0$ means that the constraint was unsatisfied in proportion to the value of $\epsilon_j$. In linear optimization, these terms are called slack variables, since they relax the constraints.

Slack variables are used to transform inequality constraints of the form $Ax \geq y$ into equality constraints of the form $Ax = y - \epsilon$.
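
As a concrete check (our own illustration, not part of GSS), the slack needed to turn one such inequality into an equality can be computed directly:

```javascript
// Slack needed to satisfy a constraint ax >= y as the equality ax = y - eps,
// with eps >= 0. If the inequality already holds, no slack is required.
function slackFor(ax, y) {
  return Math.max(0, y - ax);
}

slackFor(5, 3); // → 0: the constraint 5 >= 3 already holds
slackFor(1, 3); // → 2: the constraint 1 >= 3 fails, and 1 = 3 - 2 relaxes it
```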

Since Cassowary solves these systems in terms of slack variables, we must add slack variables to our original formulation of the SVM problem and transform it as necessary. After a bit of work, we come up with the new formulation:

$$ \begin{equation*} \begin{array}{llll} &\min_{\epsilon} &||\epsilon||_1 \\ &\textrm{subject to} &wx_i + b &= -1 - \epsilon_i &\textrm{when } \operatorname{color}(x_i) = \textrm{red} \\ && wx_i + b &= 1 - \epsilon_i &\textrm{when } \operatorname{color}(x_i) = \textrm{blue} \\ && w &= \epsilon_w \\ && b &= \epsilon_b \end{array} \end{equation*} $$

Now, we need to reconsider how this new formulation changes our interpretation of the SVM solution. However, little analysis is necessary.

Recall that slack variables simply relax the constraints such that some of the constraints may be left unsatisfied. This means that the solutions will be the same for problems in which all the constraints are satisfiable i.e. $\epsilon_j = 0$ for all $j$. Then, for $\epsilon_j > 0$, solutions exist when some of the constraints are unsatisfiable. This means that the new formulation is more robust than our original formulation! This type of SVM is known as a soft margin SVM.
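
In other words, the soft-margin objective $||\epsilon||_1$ is the total slack summed over the training points. The sketch below is our own illustration of that quantity, using the same data encoding as later in this article; the separator parameters are chosen by hand:

```javascript
// Total slack (the L1 norm of epsilon) for a candidate separator (w, b):
// each point contributes max(0, 1 - y_i * (w * x_i + b)).
function totalSlack(X, y, w, b) {
  var total = 0;
  for (var key in X) {
    total += Math.max(0, 1 - y[key] * (w * X[key] + b));
  }
  return total;
}

var X = {'a': 0, 'b': 70, 'c': 140, 'z': 200, 'y': 300};
var y = {'a': -1, 'b': -1, 'c': -1, 'z': 1, 'y': 1};
// A separator at x = 170 whose margin touches both support vectors:
// w = 2 / (200 - 140) = 1/30, and b = -170 * w.
var w = 1 / 30, b = -170 / 30;
totalSlack(X, y, w, b); // ≈ 0, since this data is separable
```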

What we have observed so far is that, through Cassowary, GSS is able to represent robust SVM formulations for classification. Now, we will use this knowledge to exploit GSS in order to train such a classifier.

Using GSS to Learn SVMs

In this section, we provide a live demo of SVM training using GSS, included in this article.

Training Data

In order to train an SVM, we must have training data. For this demonstration, we will use blue and red balls defined in HTML and CSS that are arranged in a horizontal line. The data we will train an SVM on is shown below.


The CSS is available on a gist and the HTML is shown below. Alternatively, you may also inspect the source of this page for reference.

<div id="svm-example1" class="svm-example">
  <div class="red a">a</div>
  <div class="red b">b</div>
  <div class="red c">c</div>
  <div class="blue z">z</div>
  <div class="blue y">y</div>
  <div class="decision-boundary"></div>
</div>

The x-position of each training point is provided in JavaScript as an object mapping each element's class to its x-position.

var X = {'a': 0, 'b': 70, 'c': 140, 'z': 200, 'y': 300};

Each training point is labeled as either red or blue, which determines its color. In JavaScript, we encode the labels in an object y that maps each element to its associated class:

var y = {'a': -1, 'b': -1, 'c': -1, 'z': 1, 'y': 1};

Note that the decision boundary is also an element which will be updated after GSS has processed the training data.

Solving Linear Programs with GSS

In order to utilize the GSS engine, we must include its script which is available for inclusion from Amazon Web Services.

<script type="text/javascript" src=""></script>

Note that we will not be using the CCSS syntax that GSS normally provides. Instead, we will directly encode the constraints of our data as abstract syntax trees (AST).

Solving with GSS

As a rough outline, we can give GSS an abstract syntax tree and it will try to solve the encoded constraints to produce a valid set of parameters:

// var ast = encoded abstract syntax tree
var svm_container = document.getElementById('svm-container');
var engine = new GSS(svm_container);
var results = engine.solve(ast);

Note that the GSS engine takes an HTML element as a parameter. This HTML element contains all of the training points as elements that GSS will operate on.

Since we have a linear programming solver, we need to develop two JavaScript functions: a preprocessing function that transforms the training data into constraints for the GSS engine, and a postprocessing function that interprets the solution returned by the GSS engine.


The preprocessing function that follows will return an abstract syntax tree for the GSS engine by parsing the x-positions in X and the classifications in y as linear constraints of the form $y = mx + b$.

function formulate_svm_problem(X, y) {
  var ast = [];
  for (var key in X) {
    // Pin the element's on-screen position to the datum's x-position.
    ast.push(['==', ['get', ['.', key], 'x'], X[key]]);
    // Encode the margin constraint y_i * (w * x_i + b) >= 1.
    var z = ['*', ['+', ['*', X[key], ['get', 'w']], ['get', 'b']], y[key]];
    ast.push(['>=', z, 1]);
  }
  return ast;
}

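To make the encoding concrete, here is the AST produced for a single training point. The function is repeated so the snippet is self-contained; the array shapes follow the constraint encoding used in this article:

```javascript
// Self-contained copy of the encoder, applied to a single point.
function formulate_svm_problem(X, y) {
  var ast = [];
  for (var key in X) {
    ast.push(['==', ['get', ['.', key], 'x'], X[key]]);
    var z = ['*', ['+', ['*', X[key], ['get', 'w']], ['get', 'b']], y[key]];
    ast.push(['>=', z, 1]);
  }
  return ast;
}

var ast = formulate_svm_problem({'a': 0}, {'a': -1});
// ast[0] pins the element's position: ['==', ['get', ['.', 'a'], 'x'], 0]
// ast[1] encodes -1 * (w * 0 + b) >= 1, i.e. b <= -1
```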
To make this more explicit, we establish two constraints per training point:

  • Make the HTML position of our point equal to the data's x-position
  • Make the x-position of our point lie to the left of the separator if its class is negative, or to the right of the separator if its class is positive

This is precisely an encoding of the original linear programming formulation that we have defined for SVMs with a slight variation:

$$ \begin{equation*} \begin{array}{llll} &\min_{w,b} &w b \\ &\textrm{subject to} &y_i\cdot(wx_i + b) &\geq 1 \end{array} \end{equation*} $$

The single constraint will encode both constraints from before. This is a nifty shorthand to reduce the number of constraints that we must solve for.
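
A quick sanity check of the shorthand (our own illustration): multiplying by the label flips the inequality for negative points exactly as intended.

```javascript
// y * (w*x + b) >= 1 with y = -1 is equivalent to w*x + b <= -1,
// and with y = +1 it is equivalent to w*x + b >= 1.
function satisfies(w, x, b, y) {
  return y * (w * x + b) >= 1;
}

satisfies(1, -3, 0, -1); // → true:  -3 <= -1
satisfies(1, 2, 0, 1);   // → true:   2 >= 1
satisfies(1, 0, 0, -1);  // → false:  0 is not <= -1
```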


The post-processing function that follows will update the position and orientation of the decision boundary to separate the two classes given the results from the GSS engine solver.

function update_decision_boundary(svm_container, results) {
  var decision_boundary = svm_container.getElementsByClassName('decision-boundary')[0];
  // The boundary lies where w*x + b = 0, i.e. at x = -b / w.
  var left = -results.b / results.w; = left + 19 + 'px';
}

Here, we simply solve for the x-position of the decision boundary, which is where $wx + b = 0$. Isolating $x$, we obtain the following equation:

$$ x = \frac{-b}{w} $$

Note that we also offset the x-position by a few pixels because our separator is inherently narrower than the training points.

Wrapping it up

Finally, we can take our training data X and y and put it into the GSS engine to solve the SVM and compute the parameters w and b, which determine the position and orientation of the decision boundary.

var X1 = {'a': 0, 'b': 70, 'c': 140, 'z': 200, 'y': 300};
var y1 = {'a': -1, 'b': -1, 'c': -1, 'z': 1, 'y': 1};

var example1 = document.getElementById("svm-example1");
var engine1 = new GSS(example1);
var results1 = engine1.solve(formulate_svm_problem(X1, y1));
update_decision_boundary(example1, results1);

After executing this code, we receive the result below.


Thus, we have created a one-dimensional SVM. We may also generalize this to two or three dimensions by solving for more variables in addition to w and b in our pre-processing and post-processing functions.
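
A two-dimensional version of the preprocessing function might look like the sketch below. This is our own extension, not part of GSS: it assumes the engine would accept the additional `w1` and `w2` variables, which we have not verified against a real GSS build.

```javascript
// Hypothetical 2-D encoder: each point (x1, x2) with label y contributes
// y * (w1*x1 + w2*x2 + b) >= 1, so the decision boundary becomes a line
// instead of a single x-position.
function formulate_svm_problem_2d(points) {
  var ast = [];
  points.forEach(function (p) {
    var activation = ['+', ['+', ['*', p.x1, ['get', 'w1']],
                                 ['*', p.x2, ['get', 'w2']]],
                          ['get', 'b']];
    ast.push(['>=', ['*', activation, p.y], 1]);
  });
  return ast;
}

var ast2d = formulate_svm_problem_2d([
  {x1: 0, x2: 0, y: -1},
  {x1: 100, x2: 100, y: 1}
]);
// Produces one margin constraint per training point.
```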

Soft margin SVM

As you may recall, the Cassowary linear constraint solver is robust to unsatisfiable constraints. Consequently, the code easily handles cases where the colored balls are not linearly separable. Specifically, it still finds the position and orientation of a linear separator that minimizes the misclassifications.

We may observe this with the training data below which includes a new element d with x-position at 240px.

var X2 = {'a': 0, 'b': 70, 'c': 140, 'z': 200, 'y': 300, 'd': 240};
var y2 = {'a': -1, 'b': -1, 'c': -1, 'z': 1, 'y': 1, 'd': -1};

var example2 = document.getElementById("svm-example2");
var engine2 = new GSS(example2);
var results2 = engine2.solve(formulate_svm_problem(X2, y2));
update_decision_boundary(example2, results2);

This data generates the following results:


We now see that GSS has the potential to adapt to new types of problems.


A significant amount of theoretical material has been presented in this article, and hopefully you are now able to establish at least an intuition for why GSS is capable of backing The Grid's claim that it is in fact driven by AI.

With a bit of creativity, you may begin to imagine how this technology can actually be used for applications beyond The Grid's claims. This is inherently the power behind algorithms and artificial intelligence as broad methods for achieving wide ranges of applications.

Here is one interesting application that The Grid has not claimed yet: reverse engineering designs. With this technology, it is possible to learn an underlying style common to a set of websites built by the same design firm. It isn't difficult to see that with automated reverse engineering, imitating that style is also automatable under The Grid's standard claim.

We can only speculate about how they use this technology, but now we can be much more confident that it is indeed driven by AI.

For some philosophical intuition and some mechanical notes, see Leigh Taylor's experience with it.