Here at Stitch Fix, we work on many fun and interesting areas of Data Science. One of the more unusual ones is drawing maps, specifically internal layouts of warehouses. These maps are extremely useful for simulating and optimising operational processes. In this post, we’ll explore how we are combining ideas from recommender systems and structural biology to automatically draw layouts and track when they change.
Distance matters
We have a number of warehouses across the country shipping orders to customers. Every item in every order (colored squares in the animations below, with items in the same order sharing a color) is collected by a warehouse worker (green square). Within the warehouses, multiple orders are picked together to form a batch. We can get significant efficiency gains by (i) optimising the routes our workers take in the warehouses (solving instances of the travelling salesman problem), and (ii) optimising which orders to pick together in a batch (an instance of the vehicle routing problem). Below are some hypothetical examples showing the improvements that can be made, comparing an un-optimised batch (top left) with a batch that has had its route optimised (top right), and finally one that has had both its route and order batching optimised (bottom). Compare the counters on the bottom right, which show how many steps a worker needs to take in each scenario.
It becomes apparent from the animations above that proper optimisation results in significant efficiency gains. A critical component to optimising our warehouses is having an understanding of the layout. Aside from being useful for routing and batching, we can also use the layouts to run simulations and test process changes virtually.
The problem at Stitch Fix is that we are constantly growing, and so layouts are constantly changing. The classical solution to this problem is to track layouts and changes to them manually, potentially aided by some image processing. This is labor intensive, prone to user error, and doesn’t scale well as the number of warehouses and the frequency of layout changes increase. Can we compute the layouts automatically?
Computing warehouse layouts
During the day, workers in the warehouse scan items successively, walking from one item location to the next. They repeat this process, gradually filling a batch (made of multiple orders and items), and then start over again with a new batch and a new set of items. Here is some made-up data that illustrates what logs of these events might look like:
| occurred_at | location | worker_id | batch_id |
|---|---|---|---|
| 2017-04-28 08:12:01 | AA1 | 001 | 1 |
| 2017-04-28 08:12:21 | DE3 | 001 | 1 |
| 2017-04-28 08:12:47 | BA7 | 001 | 1 |
| 2017-04-28 08:13:03 | GH5 | 001 | 1 |
These item events tell us that worker `001` was at location `AA1` at a particular time and then moved to location `DE3` a short time later (and then to `BA7` and so on) while picking batch `1`. At this point, we have no idea where any of the locations (`AA1` or `DE3`) are or how they are positioned relative to each other. Given enough data, can we get the warehouse layout?
We can start by building a very basic mental model of what is happening when a warehouse worker fulfills an order. If we consider the time difference between events (for a particular worker) going from location $i$ to location $j$, then this delta time, $\Delta t_{ij}$, is approximately:

$$\Delta t_{ij} \approx \alpha \, d_{ij} + T_{\mathrm{scan}}$$

where $d_{ij}$ is the distance the person walked, $\alpha$ is the inverse walking speed and $T_{\mathrm{scan}}$ is the time it takes to perform actions at the picking location, e.g. scanning the item (in reality there is going to be noise and these are going to be distributions, but we will ignore that for now and come back to it later). From all this timing data we can build up a symmetric matrix $M$ of pairwise times for each location in the warehouse, which is proportional to the pairwise distances between all the locations (after adjusting for the scan offset $T_{\mathrm{scan}}$).
An example matrix looks like:
Here the rows and columns are the locations (`AA1` etc.) and each value is the time taken to transit between the corresponding pair of locations.
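As a concrete sketch of how such a matrix could be assembled from scan logs, here is some illustrative Python. The event tuples, the fixed `T_SCAN` offset, and the `pairwise_times` helper are all hypothetical, not our production pipeline:

```python
from collections import defaultdict

# Hypothetical scan events: (timestamp_seconds, location, worker_id, batch_id)
events = [
    (0,  "AA1", "001", 1),
    (20, "DE3", "001", 1),
    (46, "BA7", "001", 1),
    (62, "GH5", "001", 1),
]

T_SCAN = 5  # assumed fixed per-scan overhead, in seconds

def pairwise_times(events, t_scan=T_SCAN):
    """Average transit time between consecutively scanned locations."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    # Group by (worker, batch) so we only pair consecutive scans on one walk.
    by_walk = defaultdict(list)
    for t, loc, worker, batch in sorted(events):
        by_walk[(worker, batch)].append((t, loc))
    for walk in by_walk.values():
        for (t0, a), (t1, b) in zip(walk, walk[1:]):
            key = tuple(sorted((a, b)))  # symmetric: (i, j) same as (j, i)
            sums[key] += max(t1 - t0 - t_scan, 0.0)
            counts[key] += 1
    return {k: sums[k] / counts[k] for k in sums}

M = pairwise_times(events)
print(M[("AA1", "DE3")])  # 15.0 (20 seconds elapsed, minus the scan offset)
```

In a real warehouse, each pair of locations would accumulate many observations, so the averaging over `counts` starts to smooth out the noise we are ignoring for now.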
How does it work?
So how do we go from our $M$ to the warehouse layout? Recovering our original warehouse layout now becomes an optimisation problem where we want to find the positions $X$ that match our measured $M$.

One way to do this is to minimise the following stress function:

$$\sigma(X) = \sum_{i < j} \left( M_{ij} - \hat{d}_{ij}(X) \right)^2$$

where $M_{ij}$ are the measured distances (or times) and $\hat{d}_{ij}(X)$ are the estimated distances from our best estimates for the positions, $X$. This is known as “Multi-dimensional scaling” (MDS)^{1}. MDS can be used to recover the original locations $X$, as well as being used as a dimensionality reduction technique (since the number of dimensions used to calculate $\hat{d}_{ij}$ need not be the same as the number underlying $M$).

The nature of the problem will determine how the elements $M_{ij}$ are calculated. In general we write $M_{ij} = f(x_i, x_j)$, where $f$ is some distance metric (or a similarity or affinity). For example, a common form is $f(x_i, x_j) = \lVert x_i - x_j \rVert_2$, where $\lVert \cdot \rVert_2$ is the Euclidean distance; another choice, $\lVert x_i - x_j \rVert_1$, is a Manhattan distance. Extensions of MDS (non-metric MDS) allow for a much wider variety of $f$ and can be used even when only the order of the distances/similarities is known. For the case where we have a noiseless Euclidean distance matrix, the positions can be recovered in a single step (“classical” MDS). However, in our warehouse we have something else, since we need to go around racks and use path-finding to determine the actual distances. In theory this is not a blocker, we can still try to solve the optimisation problem, but it makes it a bit harder. One way to tackle this is to replace the correct path-finding distance function $f$ with a function $g_w$, parameterized by weights $w$ (e.g. by using a neural network). This is nice because the parameterized function is differentiable and (hopefully) much quicker to calculate.
So from our $M$, we can now recover our warehouse layout using MDS, as shown in the animation below (with the original layout on the right).
What about missing values?
What happens if we have incomplete timing data? Imagine we have only half of our $M$, with the other half missing (e.g. because workers have not yet visited all pairs of locations). Here is an example of the (symmetrically) corrupted $M$:
We can use matrix completion methods to repair $M$ and fill in the missing values. There are many ways to solve this problem; one such method, “soft-impute”^{2,3}, aims to solve the following optimisation problem:

$$\underset{Z}{\mathrm{minimise}} \; \sum_{(i,j) \in \Omega} \left( M_{ij} - Z_{ij} \right)^2 + \lambda \lVert Z \rVert_*$$

where $Z$ is a low-rank approximation of our measured data, the comparison is made over our observed values ($(i,j) \in \Omega$), and $\lVert Z \rVert_*$ is the nuclear norm of $Z$ (with $\lambda$ controlling the strength of that penalty). The optimisation here is basically saying: find a low-rank $Z$ that agrees with our observed values. We can observe the evolution of the repaired $M$ during the soft-impute method.
We can then do the same reconstruction procedure to get back our layout.
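A compact sketch of the soft-impute iteration, assuming NumPy is available; the `lam` value and iteration count are illustrative:

```python
import numpy as np

def soft_impute(M, mask, lam=0.1, iters=200):
    """Soft-impute sketch: iterated soft-thresholded SVD.

    M    : observed matrix (values at unobserved entries are ignored)
    mask : boolean array, True where M is observed
    lam  : nuclear-norm penalty; larger values give a lower-rank solution
    """
    Z = np.zeros_like(M, dtype=float)
    for _ in range(iters):
        # Keep observed data, fill unobserved entries with current estimate.
        filled = np.where(mask, M, Z)
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        s = np.maximum(s - lam, 0.0)  # soft-threshold the singular values
        Z = (U * s) @ Vt
    return Z
```

Each pass shrinks the singular values (pushing $Z$ toward low rank) while repeatedly re-imposing agreement with the observed entries, which is exactly the trade-off the objective above describes.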
The thing to note here is that our matrix completion problem for repairing takes the exact same form as one of the classic recommendation problems (collaborative filtering^{4}). In collaborative filtering, instead of pairwise distances, we have item-person ratings and the aim is to fill in the missing values (i.e. predict what people will like). One of the most famous examples of this was the Netflix prize, a competition for the best algorithm to predict user ratings for films, based on previous ratings without any other information about the users or films.
Incorporating uncertainty
In reality, our measured times are going to be noisy and imperfect. We can incorporate the uncertainty in a number of ways. One way, which we will discuss here, is to go the full Bayesian route. This amounts to trying to find the posterior distribution $p(X \mid M)$ (our estimates and their uncertainty) of the latent variables $X$ (the positions) conditioned on our data $M$. This is given by Bayes’ rule as:

$$p(X \mid M) = \frac{p(M \mid X)\, p(X)}{p(M)} = \frac{p(M \mid X)\, p(X)}{\int p(M \mid X)\, p(X)\, \mathrm{d}X}$$
The problem here is that the integral in the denominator is often not available in closed form and must instead be approximated. One method to obtain the posteriors is Markov chain Monte Carlo sampling, which is asymptotically exact but can be computationally expensive. An alternative is variational inference (VI)^{5}, which speeds things up at the expense of accuracy (generally speaking). VI allows us to obtain estimates for the posteriors of our positions by turning the problem (of determining the posteriors) into an optimisation problem. The true posterior $p(X \mid M)$ is approximated with a family of distributions $q_\phi(X)$ (e.g. Normal distributions), with the goal now being to optimise the hyperparameters $\phi$ such that $q_\phi$ matches $p$ according to some metric or divergence (e.g. the Kullback-Leibler (KL) divergence), i.e.

$$\phi^{*} = \underset{\phi}{\arg\min} \; \mathrm{KL}\left( q_\phi(X) \,\Vert\, p(X \mid M) \right)$$
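To make the “VI as optimisation” idea concrete, here is a toy one-dimensional version: we fit an approximating Normal $q = \mathcal{N}(\mu, \sigma)$ to a known Normal target by gradient descent on the closed-form KL divergence. The target, learning rate, and step count are all made up for illustration; real variational MDS would optimise a bound on this objective over all the positions at once:

```python
import math

def kl_normal(mu_q, sd_q, mu_p, sd_p):
    """Closed-form KL( N(mu_q, sd_q) || N(mu_p, sd_p) )."""
    return (math.log(sd_p / sd_q)
            + (sd_q ** 2 + (mu_q - mu_p) ** 2) / (2 * sd_p ** 2) - 0.5)

def fit_q(mu_p=3.0, sd_p=0.5, lr=0.05, steps=500):
    """Gradient descent on (mu, log_sd) of q to minimise KL(q || p)."""
    mu, log_sd = 0.0, 0.0  # start from q = N(0, 1)
    for _ in range(steps):
        sd = math.exp(log_sd)
        dmu = (mu - mu_p) / sd_p ** 2        # dKL/dmu
        dlog_sd = sd ** 2 / sd_p ** 2 - 1.0  # dKL/d(log sd)
        mu -= lr * dmu
        log_sd -= lr * dlog_sd
    return mu, math.exp(log_sd)

mu, sd = fit_q()
# mu and sd converge toward the target (3.0, 0.5), and kl_normal shrinks to ~0
```

Here the KL divergence between two Normals is available in closed form, so the gradients are exact; in the real problem the divergence involves the intractable posterior, which is why VI works with a bound (the ELBO) instead.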
If we do this for our MDS (now variational MDS) then we can reconstruct the point estimates with their uncertainties.
Shown below is the reconstruction of the point estimates (left) of the positions and their respective uncertainty (right, where we have assumed normal distributions for the posteriors and used the KL divergence).
We have to cheat a little for this method to work, as we need a differentiable distance function. In the above example we use a Manhattan distance, but we could also approximate the correct path-finding distance function with a parameterized (and learnable) function $g_w$, as mentioned earlier. The above example was made by modifying Chris Moody’s pytorch variational t-sne (see also the Edward tutorials for some nice examples).
Final thoughts
One final interesting parallel is that the problem of reconstructing a warehouse layout from pairwise times (as explored in this post) is very similar to a problem encountered in structural biology. Data obtained via nuclear magnetic resonance spectroscopy is proportional to the pairwise distances between nuclei. It then becomes an inverse problem of reconstructing the locations of the nuclei in three-dimensional space from the (often noisy and incomplete) data. The steps mentioned above can be combined into a single algorithm to determine protein structure (an example of a reconstructed protein structure is shown below). This problem is known generally as the “molecular distance geometry problem”^{6}.
At Stitch Fix we’ve replaced the error-prone and unscalable process of manually updating warehouse layouts with an algorithm that automatically infers the layout from the data itself. We use that information to improve the efficiency of our operations, having confidence that we’re not surfacing bad recommendations to the business because of a typo in the document describing the warehouse layout. The algorithm itself recognizes if its assumptions are out of date and adjusts accordingly.
The mathematical methods involved in solving this problem exposed similarities to problems in structural biology, recommendation systems, and more. It is amazing and beautiful that problems that are so different on the surface end up having such similar underlying mathematical structure.