Moving from Data-Driven to AI-Driven: The Next Step in the Evolution of Business Workflows

Eric Colson
- San Francisco, CA

This post is an adaptation of the article that originally appeared in HBR.


Many companies have adopted a “data-driven” approach for operational decision-making. While data can improve decisions, it requires the right processor to fully leverage it. Many people assume that processor is human. The term “data-driven” implies that the data is to be curated by, and summarized for, humans to process. However, to fully leverage the value contained in the data, companies need to bring Artificial Intelligence (AI) into their workflows, and sometimes that means getting us humans out of the way and shifting our focus to where we can best contribute. We need to evolve from data-driven to AI-driven.

Discerning “data-driven” from “AI-driven” isn’t just semantics; it’s distinguishing between two different assets: data and processing ability. Data holds the insights that can enable better decisions; processing is the way to extract those insights and take action. Humans and AI are both processors, yet they have very different abilities. To understand how best to leverage each, it’s helpful to review our own biological evolution as well as how decision-making has evolved in industry.

Just fifty years ago, human judgment was the central processor of business decision-making. Professionals relied on their highly tuned intuitions, developed from years of experience in their domain, to pick the right creative for an ad campaign, determine the right inventory levels to stock, or approve the right financial investments. Experience and gut instinct were all that were available to discern good from bad, high from low, and risky from safe.


It was, perhaps, all too human. Our intuitions are far from ideal for use in decision-making. Our brains are afflicted with many cognitive biases that impair our judgment in predictable ways. This is the result of hundreds of thousands of years of evolution in which, as early hunter-gatherers, we developed a system of reasoning that relies on simple heuristics: shortcuts, or rules of thumb, that circumvent the high cost of processing a lot of information. These enabled quick, almost unconscious decisions to get us out of potentially perilous situations. However, ‘quick and almost unconscious’ didn’t always mean optimal or even accurate. Imagine a group of our hunter-gatherer ancestors huddled around a campfire when a nearby bush suddenly rustles. A decision of the ‘quick and almost unconscious’ type needs to be made: conclude that the rustling is a dangerous predator and flee, or gather more information to see whether it is potential prey, say a rabbit that could provide rich nutrients. Our more impulsive ancestors, those that decided to flee, survived at a higher rate than their more inquisitive peers. This is because the cost of wrongly concluding it was a predator when it was only a rabbit is relatively low: some forgone nutrition for the evening. The cost of stopping to gather more information when the rustling really was a predator, however, could be devastating: the loss of life. With such asymmetry in outcomes, evolution favors the trait that leads to less costly consequences, even at the sacrifice of accuracy [1]. The trait for more impulsive decision-making and less information processing therefore becomes prevalent in the descendant population.

The result of this selection process is the myriad of cognitive biases that come pre-loaded in our inherited brains. These biases influence our judgment and decision-making in ways that depart from rational objectivity. We give more weight than we should to vivid or recent events. We coarsely classify subjects into broad stereotypes that don’t sufficiently explain their differences. We anchor on prior experience even when it is completely irrelevant. We conjure up specious explanations for events that are really just random noise (see “You can’t make this stuff up … or can you?”). These are just a few of the dozens of ways cognitive bias plagues human judgment, the very thing we had once placed as the central processor of business decision-making. Relying solely on human intuition is inefficient, capricious, and fallible, and it limits the ability of the organization.

Data-Driven Workflows

Thank goodness for the digital revolution. Connected devices now capture unthinkable volumes of data: every transaction, every customer gesture, every micro- and macroeconomic indicator, all the information that can inform better decisions. In response to this new data-rich environment we’ve adapted our workflows. IT departments support the flow of information using machines (databases, distributed file systems, and the like) to reduce the unmanageable volumes of data down to digestible summaries for human consumption. The summaries are then further processed by humans using tools like spreadsheets, dashboards, and analytics applications. Eventually, the highly processed, and now manageably small, data is presented for decision-making. This is the “data-driven” workflow. Human judgment is still in the role of central processor, yet now with summarized data as a new input.
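
To make that summarization step concrete, here is a minimal sketch in pandas, using entirely hypothetical column names. The only point is that many rows of raw records get collapsed into a handful of region-level numbers before a human ever looks at them.

```python
import pandas as pd

# Hypothetical transaction-level records: one row per sale.
transactions = pd.DataFrame({
    "region":        ["east", "east", "west", "west", "west"],
    "units_sold":    [3, 1, 2, 5, 4],
    "selling_price": [25.00, 40.00, 30.00, 22.00, 28.00],
})

# The summarization step: collapse the raw records into a few
# human-digestible numbers per region.
summary = transactions.groupby("region").agg(
    total_units=("units_sold", "sum"),
    avg_selling_price=("selling_price", "mean"),
)
print(summary)
```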


While it’s undoubtedly better than relying solely on intuition, humans playing the role of central processor still creates several limitations.

  1. We don’t leverage all the data. Data summarization is necessary to accommodate the throughput of human processors. For as much as we are adept at digesting our surroundings, effortlessly processing vast amounts of ambient information, we are remarkably limited when it comes to processing structured data. Processing millions or billions of records of structured data is unfathomable; we can only process small summaries, say, total sales and average selling price rolled up to a region level. Yet summarized data can obscure many of the insights, relationships, and patterns contained in the original (big) data set. Aggregate statistics like sums and averages don’t provide the whole picture needed for decisions [2]. Often a decision requires understanding the full distribution of data values or important relationships between data elements. This information is lost when data is aggregated. In other cases, summarized data can be outright misleading. Confounding factors can give the appearance of a positive relationship when it is actually the opposite (see Simpson’s and other paradoxes; the small sketch after this list shows how an aggregate can even reverse the true relationship). Yet once data is aggregated, it may be impossible to recover the factors in order to properly control for them [3]. In short, by using humans as central processors of data, we are still trading off accuracy to circumvent the high cost of human data processing.

  2. Data is not enough to insulate us from cognitive bias. With humans in the role of central processors, the data summaries are directed by humans in a way that is prone to all the same biases mentioned earlier. We direct the summarization in a manner that is intuitive to us. We ask that the data be aggregated to segments that we feel are representative archetypes, yet we have that tendency to coarsely classify subjects into broad stereotypes that don’t sufficiently explain their differences. For example, we may roll up the data to attributes such as geography even when there is no discernible difference in behavior between regions. There’s also the matter of grain, the level to which the data is summarized. Since we can only handle so much data, we prefer a very coarse grain in order to make it digestible. For example, an attribute like geography needs to be kept at a region level, where there are relatively few values (e.g., “east” vs. “west”). Dropping down to a city or zip-code level just won’t work for us; it is too much data for our human brains to process. We also prefer simple relationships between elements; we’ll approximate just about everything as linear because it’s easier for us to process. The relationship between price and sales, market penetration and conversion rate, credit risk and income: all are assumed linear even when the data suggests otherwise.
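
As a concrete illustration of the first limitation above, here is a small, entirely made-up example of how a roll-up can reverse the relationship visible in the underlying records (Simpson’s paradox):

```python
import pandas as pd

# Entirely hypothetical campaign results, split by customer segment.
results = pd.DataFrame({
    "segment":     ["new", "new", "returning", "returning"],
    "offer":       ["A", "B", "A", "B"],
    "exposures":   [100, 200, 200, 100],
    "conversions": [10, 30, 120, 70],
})

# Within each segment, offer B converts better than offer A.
per_segment = results.assign(rate=results["conversions"] / results["exposures"])
print(per_segment[["segment", "offer", "rate"]])

# Rolled up, offer A appears to win, but only because A was shown
# disproportionately to returning customers, who convert more
# regardless of offer. The confounder vanishes in the aggregate.
rolled_up = results.groupby("offer")[["exposures", "conversions"]].sum()
rolled_up["rate"] = rolled_up["conversions"] / rolled_up["exposures"]
print(rolled_up)
```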

Alas, we are accommodating our biases when we drive the data.

AI-Driven Workflows

We need to evolve further to bring AI into the workflow. For routine decisions that rely only on structured data, we are better off delegating them to AI. AI does not suffer from cognitive bias [4]. AI can be trained to find the segments in the population that best explain variance, even at fine-grained levels, and even when those segments are unintuitive to our human perceptions or number in the thousands or even millions. And AI is more than comfortable working with nonlinear relationships, be they exponential, power laws, geometric series, binomial distributions, or otherwise.
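
As an illustration only, not a description of any particular production system, the sketch below fits a gradient-boosted model on hypothetical zip-code-level records. The point is simply that such a model is happy to work at a grain of thousands of groupings and to capture a nonlinear demand curve that a human-directed, region-level, linear summary would miss.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Hypothetical fine-grained records: one row per (zip code, price) observation,
# with a nonlinear demand curve whose steepness varies by zip code.
n = 5000
zip_code = rng.integers(0, 1000, size=n)       # thousands of groupings
price = rng.uniform(10, 100, size=n)
sensitivity = 0.5 + (zip_code % 7) / 10.0      # varies across zip codes
demand = 200 * np.exp(-sensitivity * price / 50) + rng.normal(0, 2, size=n)

X = np.column_stack([zip_code, price])
model = GradientBoostingRegressor().fit(X, demand)

# The fitted model can score any zip code at any candidate price,
# capturing curvature and segment differences that a region-level
# summary would have thrown away.
print(model.predict(np.array([[417, 35.0], [417, 70.0]])))
```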


This workflow better leverages the information contained in the data and is more consistent and objective in its decisions. It can better determine which ad creative is most effective, the optimal inventory levels to set, or which financial investments to make.

While humans are removed from this workflow, it’s important to note that mere automation is not the goal of an AI-driven workflow. Sure, it may reduce costs, but that’s only an incremental benefit. The value of AI is in making better decisions than humans alone can. This creates a step-change improvement in efficiency and enables new capabilities. This is evolution of the punctuated type.

Leveraging Both AI and Human Processors in the Workflow

Removing humans from workflows that involve only the processing of structured data does not mean that humans are obsolete. There are many business decisions that depend on more than structured data. Vision statements, company strategies, corporate values, market dynamics: all are examples of information that is available only in our minds and transmitted through culture and other forms of non-digital communication. This information is inaccessible to AI, yet can be extremely relevant to business decisions.

For example, AI may objectively determine the right inventory levels in order to maximize profits. However, in a competitive environment a company may opt for higher inventory levels in order to provide a better customer experience, even at the expense of profits. In other cases, AI may determine that investing more dollars in marketing will have the highest ROI among the options available to the company. However, a company may choose to temper growth in order to uphold quality standards. In other cases still, the selection of the best marketing creative for an ad may require considerations that AI can’t make (see “Hollywood vs. The Algorithm”). The additional information available to humans in the form of strategy, values, and market conditions can merit a departure from the objective rationality of AI. In such cases, AI can be used to generate possibilities from which humans can pick the best alternative given the additional information they have access to [5].
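
One way to picture that division of labor, as a hypothetical sketch rather than a prescription: the AI step proposes a short list of scored options, and the human step selects among them using information the model never sees.

```python
from dataclasses import dataclass

@dataclass
class Option:
    inventory_level: int
    expected_profit: float         # produced by the AI model
    expected_stockout_rate: float  # produced by the AI model

def ai_generate_options(candidates, k=3):
    """Stand-in for the AI step: return the top-k candidates by
    expected profit rather than a single 'answer'."""
    return sorted(candidates, key=lambda o: o.expected_profit, reverse=True)[:k]

# Hypothetical model output for a handful of candidate inventory levels.
candidates = [
    Option(800,  115_000.0, 0.25),
    Option(1000, 120_000.0, 0.12),
    Option(1200, 118_000.0, 0.06),
    Option(1400, 111_000.0, 0.02),
]

for option in ai_generate_options(candidates):
    print(option)

# A human, weighing customer experience and strategy against profit,
# might choose the 1,200- or 1,400-unit option despite its lower
# expected profit; that is a judgment the model has no data to make.
```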


The key is that humans are not interfacing directly with the data but rather with the possibilities produced by AI’s processing of the data. Values, strategy, and culture are our way of reconciling our decisions with objective rationality. This is best done explicitly and fully informed. By leveraging both AI and humans, we can make better decisions than by using either one alone.

The Next Phase in our Evolution

Moving from data-driven to AI-driven is the next phase in our evolution. Embracing AI in our workflows affords better processing of structured data and allows humans to contribute in ways that are complementary.

This evolution is unlikely to occur within an individual organization, just as evolution by natural selection does not take place within individuals. Rather, it’s a selection process that operates on a population: the more efficient organizations will survive at a higher rate. Since it’s hard for mature companies to adapt to changes in the environment, I suspect we’ll see the emergence of new companies that embrace both AI and human contributions from the beginning and build them natively into their workflows.


References and Footnotes

[1] Shermer, Michael. “Patternicity: Finding Meaningful Patterns in Meaningless Noise.” Scientific American, 1 Dec. 2008.

[2] This is not to suggest that data summaries are not useful. To be sure, they are invaluable in providing basic visibility into the business. But they will provide little value for use in decision-making. Too much is lost in the preparation for humans.

[3] The best practice is to use randomized controlled trials (i.e., A/B testing). Without this practice, even AI may not be able to properly control for confounding factors.

[4] It should be acknowledged that there is a very real risk of using biased data that may cause AI to find specious relationships that are unfair. Be sure to understand how the data is generated in addition to how it is used.

[5] The order of execution for such workflows is case-specific. Sometimes AI is first to reduce the workload on humans. In other cases, human judgment can be used as an input to AI processing. In other cases still, there may be iteration between AI and human processing.
