Computing power is decentralizing and moving to the edge

The interior of a Tesla Model S. (Photo: Shutterstock)

Most people are familiar with the concept of cloud computing—anyone who uses Gmail and Google Docs to store work off their own computer, or who shares pictures with friends through iPhoto, has used the cloud. Using the cloud to manage personal files takes the storage burden away from consumer devices, allowing them to be lighter and faster, while also letting people access their data from whichever device is at hand.

Decentralization may be all the rage now, but the first revolution in networked computing power was about centralization. Cloud computing services offered by Amazon, Microsoft, and Google proved to be more efficient, more powerful, and less expensive than corporate data centers. While consumers may store photographs and music on the cloud, even large data-centric companies like Salesforce are running their databases on these platforms (Salesforce moved to Amazon Web Services after a crippling 2016 outage at its NA14 site).

Ultimately, the cloud was about giving companies without dedicated IT staffs or hardware budgets access to high-end computing infrastructure. Centralizing server capacity actually made it cheaper and more reliable, paradoxically leveling the playing field and democratizing information services.

There are limits to what the cloud can achieve, though, and we’re starting to approach those limits with the more intensive application of robotics, artificial intelligence, onboard telematics, and autonomous vehicle technology. Driverless cars present the most obvious use case where cloud computing simply is not fast enough. An autonomous car driving down the highway at 70 mph cannot connect to a cloud-based server farm quickly enough to find out whether it should slam on its brakes for a cardboard box in the road. Data about the car’s surroundings must be taken in by a sensor array, fed into algorithms, processed, and acted upon by computing resources housed locally on the vehicle itself.

According to a new report by CB Insights, the global internet-of-things (IoT) market is expected to exceed $1.7T by 2019, more than tripling its size from $486B in 2013. By 2020, it is estimated that the average person will generate 1.5 gigabytes of data every day. That torrent of data from connected devices is already clogging network bandwidth in and out of the cloud, slowing down response times and making the cloud laggy for applications that need real-time data processing.

Apart from the sheer volume of data being sent back and forth over the network, another limitation of cloud computing is that, at the end of the day, someone still has to go through everything that’s been collected and figure out how to use it. Imagine that you manage a fleet of 500 trucks, and every truck and trailer has several telematics devices continuously uploading data on everything from engine performance and GPS location to the temperature and load balance of the trailer and the movement of the driver’s eyes. Your AWS bill might be going up, but the larger problem is how to operationalize the data—how to structure it, analyze it, and then make better business decisions. The cloud makes it easy to build a huge pool of data, but someone still has to figure out what it means.

So there are at least two primary limitations to what can be achieved on the cloud: network bandwidth and what we can call analytic bandwidth, or the problem of navigating and making sense of the data. 

In response to those limitations, we’re starting to see storage capacity and processing power decentralize again, moving away from vast server farms back into smaller devices on the ‘edge’ of the network. This is called edge computing. The premise of edge computing is that a telematics device on a truck will collect and analyze data as it comes in, only periodically alerting the fleet manager with actionable recommendations instead of uploading a steady stream of raw data to a server. The downside is that this processing power comes at a cost: edge computing telematics devices are larger, much more expensive, and consume much more electricity than smaller sensors with wireless uplinks to the cloud. Uber’s self-driving cars, for example, with all of their onboard processing power, often have their entire trunks filled with computers.
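The edge-computing premise described above—analyze locally, transmit only actionable results—can be sketched in a few lines of Python. This is purely illustrative: the device class, temperature threshold, and alert format below are assumptions, not any vendor’s actual telematics API.

```python
from collections import deque

# Hypothetical edge telematics sketch: keep a short rolling window of
# sensor readings on the device and send only actionable alerts,
# rather than streaming every raw reading to the cloud.

WINDOW = 10          # readings kept on-device (assumed)
TEMP_LIMIT_C = 8.0   # refrigerated-trailer temperature ceiling (assumed)

class EdgeMonitor:
    def __init__(self):
        self.readings = deque(maxlen=WINDOW)
        self.alerts = []  # stands in for a network uplink to the fleet manager

    def ingest(self, temp_c: float) -> None:
        """Process one reading locally; alert only on a sustained breach."""
        self.readings.append(temp_c)
        avg = sum(self.readings) / len(self.readings)
        if len(self.readings) == WINDOW and avg > TEMP_LIMIT_C:
            # One small, actionable message instead of WINDOW raw readings
            self.alerts.append(
                f"Trailer temp averaging {avg:.1f} C; check reefer unit"
            )

monitor = EdgeMonitor()
for t in [6.5, 6.8, 7.0, 7.4, 8.1, 8.6, 9.0, 9.2, 9.5, 9.7, 9.9]:
    monitor.ingest(t)

# Eleven raw readings produce only two uplink messages
print(len(monitor.alerts))
```

The trade-off the article describes shows up directly: the device needs enough memory and compute to hold and average the window locally, but the network carries two short alerts instead of eleven raw readings.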

So far, the edge computing market is small compared to the total market for IoT devices. According to CB Insights, the global edge computing market is estimated to reach $6.72B by 2022. The market for these larger, more complex devices is still nascent, but adoption should accelerate if the growth in data volume continues to outstrip bandwidth growth.

A relatively new term, ‘fog computing’, refers to the interface between the edge of the network and the cloud. “In other words,” wrote CB Insights, “fog computing extends the cloud closer to the edge of a network… Going back to our train scenario: sensors can gather data, but cannot immediately act upon it. For example, if a train engineer wants information on how a train’s wheels and brakes have been operating, he can use sensor data aggregated over time to anticipate whether parts need service or not. In this situation, data processing uses the edge, but it is not always immediate (unlike determining engine status). Using fog computing, short-term analytics can be assessed at a given point in time and do not require full travel back to a centralized cloud.”
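The fog pattern in CB Insights’ train example—aggregate sensor data over time near the edge, then forward a short-term analytic summary rather than every reading—can be sketched as follows. The function name, readings, and service threshold are hypothetical assumptions for illustration.

```python
import statistics

# Hypothetical fog-layer sketch: a node near the edge reduces a window of
# brake-temperature readings to a single anticipatory maintenance record,
# instead of shipping every reading back to a centralized cloud.

def summarize_brake_wear(readings: list, service_limit_c: float = 450.0) -> dict:
    """Collapse a window of brake-temperature samples into one summary."""
    peak = max(readings)
    return {
        "mean_temp_c": round(statistics.mean(readings), 1),
        "peak_temp_c": peak,
        # Anticipatory service flag, not an immediate safety decision
        "needs_service": peak > service_limit_c,
    }

# A shift's worth of samples collapses to one record for the cloud
samples = [310.0, 335.5, 402.0, 460.3, 418.7, 390.1]
summary = summarize_brake_wear(samples)
print(summary)
```

The distinction from the pure edge case is timing: nothing here has to happen in milliseconds, so the analysis can run on aggregated data at a convenient point in time, while still sparing the central cloud the raw stream.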

Microsoft has said it’s investing $5B in IoT over the next four years, and Hewlett Packard Enterprise will spend $4B over the next four years specifically on edge computing, building its Edgeline Converged Edge System for industrial partners who desire data center-level computing power in remote settings, like copper mines and offshore oil rigs. 

The transportation industry, which already has millions of devices on commercial vehicles, will be one of the natural habitats for edge computing. Trucking is already notorious as a laggard in technological adoption, and the industry’s relative lack of data professionals, software licenses, and computing hardware will make it difficult for any carrier but the largest and most sophisticated to process and operationalize large amounts of data. The more analysis and decision-making performed locally on the device, the better it will be for trucking. 

John Paul Hampstead

John Paul conducts research on multimodal freight markets and holds a Ph.D. in English literature from the University of Michigan. Prior to building a research team at FreightWaves, JP spent two years on the editorial side covering trucking markets, freight brokerage, and M&A.