The views expressed here are solely those of the author and do not necessarily represent the views of FreightWaves or its affiliates.
Author’s Disclosure: I am not an investor in Optimal Dynamics, either personally or through REFASHIOND Ventures. I have no other financial relationship with Optimal Dynamics.
On July 7 I started a series on AI in Supply Chain (#AIinSupplyChain). The first article in the series profiled Optimal Dynamics, a startup that has launched a product to automatically optimize operations for large trucking fleets.
The second article in this series profiled Warren Powell, co-founder of Optimal Dynamics and a soon-to-be professor emeritus of operations research at Princeton University, after a 39-year tenure there. He is also the Founder and Manager of Princeton University’s Computational Stochastic Optimization and Learning Labs (CASTLE Labs).
Throughout his career, Powell has been at the forefront of researching and developing models and algorithms for stochastic optimization with practical applications in transportation and logistics. His research at CASTLE Labs has been supported by more than $50 million in funding. He has authored two books and an edited volume of articles, and 250+ papers on decision-making under uncertainty with applications to the problems encountered within industrial supply chains.
Powell is nearing completion of a new book, “Reinforcement Learning and Stochastic Optimization: A Unified Framework for Sequential Decisions.” His academic lineage includes 60 Ph.D. students and postdocs, 10 master’s students and 200+ undergraduate senior theses.
In this installment of the AI in Supply Chain series, he explains why solving real world supply chain problems is so hard for most AI systems, and hints at some of the breakthroughs he has made during the decades over which he has spent conducting applied research on stochastic optimization – a fancy way of saying optimization under uncertainty.
What are high-dimensional decision problems?
High-dimensional problems are problems in which the relevant data on which decisions must be made has hundreds, possibly thousands, of attributes, or dimensions – in plain English, the number of variables is unmanageably large. Each entity is described by an attribute vector, an ordered list of these values, and because the attributes are subject to randomness, such data must be analyzed using algorithms designed specifically for decision-making under uncertainty.
Modeling a truck driver, then modeling a truck fleet
To model a truck driver, you need a 15-dimensional attribute vector (location, domicile, hours of service, hazmat flags, citizenship, equipment characteristics, etc.). The spatial attribute alone quickly introduces tens of thousands of possible values (think of city pairs, for example), which is then magnified when one has to consider the attributes of drivers, equipment, the types of products being shipped, and other relevant data and information.
Computational complexity increases with high-dimensional data because the number of possible combinations scales exponentially. This is where Powell and the researchers he has been collaborating with at CASTLE Labs have made the breakthroughs that Optimal Dynamics is now bringing to market.
In the scenario with the truck driver above, Powell estimates that one gets 10²⁰ different combinations of attributes – think of a “1” with 20 zeros after it. Optimal Dynamics uses a technique called variable dimensional learning to get estimates of all of these values, and CORE.ai doesn’t need a ton of data to do this.
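To see how quickly attribute combinations explode, consider a back-of-the-envelope calculation. The attribute names and cardinalities below are purely illustrative assumptions, not Optimal Dynamics’ actual figures:

```python
from math import prod

# Hypothetical cardinalities for a handful of driver attributes.
# A real model would have ~15 attributes, some (like location) very large.
attribute_sizes = {
    "location": 10_000,          # e.g. cities/terminals the driver could be at
    "domicile": 1_000,           # home base
    "hours_of_service": 100,     # discretized remaining drive-time buckets
    "hazmat_certified": 2,       # yes/no flag
    "citizenship": 5,            # categories affecting cross-border loads
    "equipment_type": 20,        # tractor/trailer configurations
}

# The attribute space is the Cartesian product of all attribute values,
# so its size is the product of the individual cardinalities.
combinations = prod(attribute_sizes.values())
print(f"{combinations:.1e} combinations from just {len(attribute_sizes)} attributes")
```

Even these six made-up attributes yield roughly 2 × 10¹¹ combinations; extending the list toward 15 attributes is how one reaches numbers on the order of 10²⁰, which is why no system can simply tabulate a value for every possible driver.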
As Powell puts it, “This is what makes solving dynamic resource allocation problems so hard, high-dimensionality and uncertainty.” (See the July 7 and July 17 installments of #AIinSupplyChain for an explanation of dynamic resource allocation problems – the links are at the end of this article.)
During a conversation in October 2019, Paul Hofmann, who holds a Ph.D. in nonlinear quantum dynamics and chaos theory from Technische Universität Darmstadt in Germany, and is the chief innovation officer of the Alpega Group and INET Logistics, told me that a distinctive feature of Optimal Dynamics’ platform, CORE.ai, is that it allows truly dynamic optimization of logistics companies’ operations over different time horizons – from long-term through mid-term to real-time – always taking changes and uncertainties into account.
Hofmann has a unique perspective on this topic because he was the chief technology officer of two AI and data science startups: Saffron Tech, a cognitive computing company acquired by Intel; and SpaceTime Insight (now Nokia), an AI company in energy logistics.
He collaborated with Powell’s CASTLE Labs during his tenure as vice president of R&D at SAP Labs in Palo Alto, California. He sponsored CASTLE Labs to explore applications of dynamic resource allocation in the energy industry.
Cognitive systems promise to aid in decision-making, but the reality is muddled
In the April 23, 2019 commentary, Logistics network optimization – why this time is different, I raised the topic of cognitive systems and how such systems are starting to be applied in supply chain optimization.
According to the website for the Cognitive Research Lab at Ulster University in the U.K., “Data analytics has evolved over the years from descriptive (what has happened) to diagnostic (why did it happen) to predictive (what could happen) to prescriptive (what action could be taken). The next big paradigm shift will be towards cognitive analytics, which will exploit the massive advances in high performance computing by combining advanced AI and machine learning techniques with data analytics approaches.”
It goes on to say that, “Cognitive analytics applies human-like intelligence to certain tasks, and brings together a number of intelligent technologies, including semantics, artificial intelligence algorithms, deep learning and machine learning. Applying such techniques, a cognitive application can get smarter and more effective over time by learning from its interactions with data and with humans.”
Nevertheless, as the cofounder and organizer of The Worldwide Supply Chain Federation and The New York Supply Chain Meetup, I hear from members of the community that such advanced, automated decision-making systems often fail to deliver on their promise, encountering fatal setbacks when they are implemented to solve business problems in sectors like manufacturing, healthcare, fast-moving consumer goods, energy, automotive, agriculture, transportation and logistics, and others.
This is corroborated in the business press; online, in newspapers and magazines, and in books exploring the topic. For example, in the June 11, 2020 edition of its Technology Quarterly, the Economist published Businesses are finding AI hard to adopt.
While digital native companies like Facebook, Apple, Amazon, Netflix, Google, Microsoft, Baidu, JD.com, Alibaba, Tencent, Tesla and other technology upstarts use data analytics and AI to gain commercial strength, incumbent digital immigrant companies in legacy industries continue to lag behind and lose market share as they struggle and fail to unlock the potential and promise of data analytics and AI.
Some of the reasons commonly given for AI’s failure to proliferate within industrial settings are: it is difficult to integrate newer AI systems with existing, mature systems and processes; companies in mature industries lack sufficient expertise to implement such technologies or, if the expertise is available, it is too scarce and expensive; executives do not understand these technologies; there is a perception that the technology is too immature; there is insufficient data of the quality required to train AI systems; and in the past executives have been burned after buying into hype around AI.
These and other stumbling blocks to the adoption of AI in legacy industries like trucking are highlighted in Winning With AI, an October 2019 research report published by MIT Sloan Management Review, BCG Gamma, and BCG Henderson Institute.
According to that report, “Seven out of 10 companies surveyed report minimal or no impact from AI so far. Among the 90% of companies that have made at least some investment in AI, fewer than two of five report obtaining any business gains from AI in the past three years. This number improves to three of five when we include companies that have made significant investments in AI.”
The promise of AI is highlighted in Sizing The Prize, a 2017 report by PwC in which the authors estimate that AI will boost global GDP by up to 14%, or $15.7 trillion by 2030. However, in its 2020 AI Predictions, PwC finds that only 4% of executives plan enterprise-wide deployments of AI. That is down from 20% the prior year.
Optimal Dynamics is unique in the sense that, taken together, the members of its team have experience confronting and solving each of the problems that cause AI implementations to fail once such products are brought out of academic labs and into business settings.
How Optimal Dynamics’ CORE.ai is different from IBM Watson and Google DeepMind AlphaGo
While working on this article, this thought kept crossing my mind – in 1997 IBM’s Deep Blue beat Garry Kasparov, the chess grandmaster. In 2011, people watched IBM Watson, the Jeopardy-winning supercomputer, beat Ken Jennings and Brad Rutter – two of the best Jeopardy players. Then in 2017 people watched AlphaGo, the documentary about how a computer developed by DeepMind, now owned by Google, beat Lee Sedol, the 18-time world Go champion, in 2016.
To non-computer scientists the questions might arise – “Is Optimal Dynamics’ CORE.ai any different or better than IBM’s Watson or Google DeepMind’s AlphaGo? Can’t those systems do the same thing that CORE.ai can?”
According to Powell, “CORE.ai has almost nothing in common with Watson or AlphaGo.”
“Watson is a machine learning system that must be trained using a large dataset. Watson uses a mixture of tools, but deep learning (with neural networks) is at the heart of the system. You need very large datasets to train neural networks. So, to get it to play chess, it had to be trained with a lot of history, but also millions of simulations of playing against itself. You can only do this type of training with a game (such as chess, or Go).”
Deep learning is a subfield of machine learning in which algorithms are built using layers of artificial neural networks. Neural networks are mathematical structures which mimic the characteristics and function of neurons in the brain. Although neural networks have existed since the 1950s, they have recently regained popularity because of advances in algorithm design and computational power.
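The “layers of artificial neural networks” idea can be made concrete with a toy forward pass. The weights below are random and the network does nothing useful – it is a minimal sketch of the structure, not a production deep learning system:

```python
import math
import random

random.seed(0)  # make the random weights reproducible

def layer(inputs, weights, biases):
    """One fully connected layer: each 'neuron' computes a weighted sum
    of its inputs plus a bias, squashed through a sigmoid activation."""
    return [
        1.0 / (1.0 + math.exp(-(sum(w * x for w, x in zip(ws, inputs)) + b)))
        for ws, b in zip(weights, biases)
    ]

# A toy 3-input -> 4-hidden -> 1-output network with random weights.
w1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
b1 = [0.0] * 4
w2 = [[random.uniform(-1, 1) for _ in range(4)]]
b2 = [0.0]

hidden = layer([0.5, -0.2, 0.1], w1, b1)   # first layer of "neurons"
output = layer(hidden, w2, b2)             # second layer stacked on top
print(output)  # a single value between 0 and 1
```

“Deep” networks simply stack many such layers; training consists of adjusting the weights so the outputs match labeled examples, which is why large datasets are needed.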
As the Jeopardy and Go examples demonstrate, deep learning algorithms now equal or surpass human performance on certain tasks. AlphaGo combines deep learning and reinforcement learning. (See the July 17 installment of #AIinSupplyChain for an explanation of reinforcement learning.)
He continued, “Otherwise, you need an external dataset, such as a set of patients with decisions made by doctors, or a set of radiology images that are labeled if they exhibit cancer. But, these are all instances of what I call single-entity problems, comparable to optimizing a single truck. You could never use Watson to manage a fleet of trucks – it is simply the wrong tool for the problem.”
Elaborating further, he said “Our technology makes decisions – we need a ‘model’ that captures how the physical system evolves over time given the decisions we make (adding to inventory, dispatching trucks), and metrics that tell us how well we are doing. We then use algorithms to choose among a family of functions called ‘policies’ which are methods for making decisions.”
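Powell’s description – a model of how the system evolves under a decision rule, metrics that score performance, and a search over a family of policies – can be sketched in a few lines. The inventory setting, cost figures and demand distribution below are invented for illustration; they are not Optimal Dynamics’ model:

```python
import random

def simulate(order_up_to, days=365, seed=42):
    """Simulate one product under random daily demand and return the
    average daily cost of a base-stock ('order up to S') policy."""
    rng = random.Random(seed)  # fixed seed so policies are compared fairly
    inventory, total_cost = order_up_to, 0.0
    for _ in range(days):
        demand = rng.randint(0, 20)                  # uncertain demand
        sold = min(inventory, demand)
        lost = demand - sold                         # unmet demand
        inventory -= sold
        total_cost += 1.0 * inventory + 5.0 * lost   # holding + shortage cost
        inventory = order_up_to                      # the policy: restock to S
    return total_cost / days

# "Policy search": evaluate the whole family of policies (S = 0..40)
# in simulation and keep the parameter with the best metric.
best_S = min(range(41), key=simulate)
print(best_S, round(simulate(best_S), 2))
```

The point of the sketch is the workflow, not the numbers: no historical dataset of past decisions is needed, only a simulator of the system and a way to score candidate policies.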
CORE.ai does not need a huge historical dataset of past decisions. That is a big distinction. However, CORE.ai still needs to search for the best policy for making decisions.
Elaborating further on where AI systems like IBM’s Watson and Google DeepMind’s AlphaGo would fail within an industrial setting, Powell explains again that, “A major characteristic of logistics is dimensionality. Tools such as Watson (or DeepMind’s tool for playing Go) are solving one-dimensional problems – what move to make. For Go, there are typically up to about 300 possible moves (depending on how full the board is). If we are managing 100 trucks, where each truck has a truck driver described by a 15-dimensional attribute vector, we can think of this as comparable to playing 100 games of Go all at the same time, and where we cannot make the same move for any two games. Our dispatch problem has a decision with 1,000 dimensions (if we only consider assigning a driver to up to 10 loads). The number of possible decisions is around 2¹⁰⁰⁰ ≈ 10³⁰⁰ (think of a “1” with 300 zeros after it). And CORE.ai can handle thousands of trucks.”
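The combinatorics Powell describes are easy to verify. The tiny cost matrix below is invented for illustration; it shows brute-force driver-to-load assignment working for three drivers, and why the same enumeration is hopeless at fleet scale:

```python
from itertools import permutations
import math

def best_assignment(cost):
    """Brute-force the driver-to-load assignment minimizing total cost.
    Feasible only for tiny fleets: n drivers means n! candidate assignments."""
    n = len(cost)
    best_perm, best_cost = None, math.inf
    for perm in permutations(range(n)):          # every one-to-one assignment
        c = sum(cost[d][perm[d]] for d in range(n))
        if c < best_cost:
            best_perm, best_cost = perm, c
    return best_perm, best_cost

# Toy 3x3 matrix: cost[d][l] = cost (e.g. deadhead miles) of giving load l to driver d.
cost = [[4, 1, 3],
        [2, 0, 5],
        [3, 2, 2]]
print(best_assignment(cost))  # -> ((1, 0, 2), 5)

# Why enumeration cannot scale: 100 drivers already means 100! assignments,
# and Powell's richer decision space is on the order of 2**1000 ~ 10**300.
print(len(str(2**1000)))      # 302 digits
```

Real dispatch engines therefore rely on optimization algorithms rather than enumeration – and the harder part, per Powell, is accounting for how today’s assignment affects tomorrow’s.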
He concludes his explanation of the distinction between Optimal Dynamics’ CORE.ai and IBM’s Watson and Google DeepMind’s AlphaGo by saying, “Handling these big problems is not as hard as it seems – we could handle these problems decades ago, as long as we ignored the impact of decisions now on the future.”
“Imagine planning a path to a destination without looking into the future. The problem with managing fleets and inventories is that you have to plan into the future, when you cannot predict the future (or even pretend that you can). It is this combination of uncertainty over time, and the dimensionality of the decisions, that makes these problems so hard,” he said.
In short, IBM’s Watson and Google DeepMind’s AlphaGo (and its descendant AlphaZero) cannot be applied to these high-dimensional dynamic resource allocation problems, but CORE.ai can.
Watson, AlphaGo and AlphaZero excel in games characterized by perfect information. In the real world, in the industrial supply chains that matter, where high-dimensional and dynamic resource allocation problems are the rule, we never have perfect information about the future.
The Economist has all but declared AI dead as far as industrial applications are concerned. I prefer to reframe the question. In my opinion, AI is not dead for industrial supply chain applications. However, we need to rethink how we are asking the questions, and therefore reassess how we are going about solving this class of problems.
Looking to the future, I believe that artificial intelligence and machine learning will contribute to the biggest transformations in how decisions are made in real-world industrial supply chains. However, there is a need to get the research out of academic labs and into the real world.
This #AIinSupplyChain series will continue through the end of 2020, as I seek to shine a light on areas where data, artificial intelligence, and predictive analytics are living up to their promise and potential for solving high-dimensional decision problems by combining the best aspects of machine intelligence and human decision-making.
If you are a team working on innovations that you believe have the potential to significantly refashion global, industrial supply chains, we’d love to tell your story in FreightWaves. I am easy to reach on LinkedIn and Twitter. Alternatively, you can reach out to any member of the editorial team at FreightWaves at firstname.lastname@example.org.
The reference archive – dig deeper into the #AIinSupplyChain Series with FreightWaves