The surge of edge computing has been building momentum from the deep waters of the internet since the late 1990s, when MIT researchers left academia to form Akamai, the first content delivery network (CDN). For the better part of 20 years, CDNs' main challenge was to deliver content to human-operated devices as quickly and efficiently as possible. CDNs are what make websites load faster and videos buffer less frequently.
What a fascinating time for infrastructure. Over the last couple of years, edge computing has emerged from relative obscurity to become one of the most talked-about trends. Many vigorous forces have combined beneath the surface to create today's giant wave of edge computing.
The disruptive potential of edge computing is fueled by the unprecedented growth of data, the imminent impact of 5G networks, the growing importance of latency and regulation in dealing with data, and the emergence of a distributed computing architecture that favors specialized hardware like GPUs and hardware offloads. Infrastructure is starting to evolve at what is called "software speed," iterating rapidly and attracting a wide array of contributors.
Like most new ecosystems in their early stages of development, edge computing has attracted a wide range of participants, each bringing its own set of definitions, which adds to the confusion around the topic.
Here are some takeaways from the report: (1) The edge is a location, not a thing; (2) There are lots of edges, but the edge we care about today is the edge of the last mile network; (3) This edge has two sides: an infrastructure edge and a device edge; and (4) Compute will exist on both sides, working in coordination with the centralized cloud.
Today’s internet is built around a centralized cloud architecture, which alone can’t feasibly support emerging application and business requirements. Edge computing has the potential to improve the performance, scalability, reliability, and regulatory compliance options for many critical applications.
Cloud computing was a game-changer, but it has already reached its limits. The temporary, elastic allocation of resources was a significant benefit for application operators who would previously have had to purchase the required compute and data storage resources themselves, and then decide what to do with them after their needs were met. But just as cloud computing physically centralized the compute and data storage resources used by applications, it also generated new problems: performance, locality, and sovereignty.
A new model was needed, one that merged the best of the cloud with the best of personal computing. By combining the density, flexible resource allocation, and pay-as-you-go pricing of cloud computing with the proximity of the personal computer, the next step in computing architecture could be achieved.
Edge computing places high-performance compute, storage, and network resources as close as possible to end users and devices. Doing so lowers the cost of data transport, decreases latency, and increases locality.
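The latency benefit of proximity follows directly from physics. As a back-of-the-envelope sketch (the distances and fiber-speed figure below are illustrative assumptions, not numbers from the report), light in fiber travels at roughly two-thirds the speed of light, so round-trip propagation delay alone puts a floor on latency that only moving the compute closer can lower:

```python
# Round-trip time due purely to propagation delay in fiber.
# Signals in fiber travel at roughly 2/3 of c, i.e. ~200,000 km/s,
# which works out to about 200 km per millisecond.

FIBER_SPEED_KM_PER_MS = 200.0  # approximate, ignores routing/queuing


def propagation_rtt_ms(distance_km: float) -> float:
    """Best-case round-trip delay in milliseconds for a given
    one-way distance, ignoring processing and queuing overhead."""
    return 2 * distance_km / FIBER_SPEED_KM_PER_MS


# Hypothetical scenarios: a distant cloud region vs. a metro edge site.
print(f"cloud region 1,500 km away: {propagation_rtt_ms(1500):.1f} ms")   # 15.0 ms
print(f"edge site 15 km away:       {propagation_rtt_ms(15):.2f} ms")     # 0.15 ms
```

Real-world latencies are higher once routing, queuing, and processing are added, but the propagation floor alone shows why a metro-area edge site can respond orders of magnitude faster than a distant region.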
Edge computing will take a big portion of today's centralized data centers and cloud and put it in everybody's backyard. In fact, that's exactly what companies like Vapor IO and Hangar are doing, as FreightWaves has previously covered.
The Kinetic Edge is a technical architecture for edge computing that can span a metropolitan region. Project Volutus, a partnership with Crown Castle, Packet, Intel and the Open19 Foundation, seeks to build Kinetic Edge cities across the U.S. by embedding thousands of carefully situated micro data centers into the wireless infrastructure, including co-locating at cell towers and aggregation hubs. This allows high-powered servers and storage devices to be placed one hop away from the wireless network.
Edge computing is the next step in ushering in developments like the autonomous revolution. Whether for buses, cars, or aircraft, edge infrastructure will enable the scalability, improve the safety, and increase the efficiency of autonomous vehicles. These vehicles rely on high-performance compute, storage, and network resources to process the massive volumes of data they produce, and this must be done with low latency to ensure that a vehicle can operate safely at efficient speeds.
Relying solely on on-device or centralized data center resources would be practical only if autonomous vehicles were speed-limited, making them inefficient for common use. To replace human operators, autonomous vehicles must operate safely at the same speed as human-operated vehicles, using data from many sources so a vehicle knows how to react to stimuli it may not be able to directly acquire, such as real-time data about other cars around a blind intersection.
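The speed-limit argument can be made concrete with simple arithmetic. As a hedged illustration (the speeds and latency figures below are assumptions chosen for the example, not values from the report), consider how far a vehicle travels while waiting on one network round trip:

```python
# How far a vehicle moves during one network round trip
# at a given speed. Purely illustrative numbers.

def distance_traveled_m(speed_kmh: float, latency_ms: float) -> float:
    """Meters covered during a round trip of `latency_ms` milliseconds
    at `speed_kmh` kilometers per hour."""
    speed_m_per_ms = speed_kmh * 1000 / 3_600_000  # km/h -> m/ms
    return speed_m_per_ms * latency_ms


# Hypothetical: 100 km/h highway speed.
print(f"distant cloud, ~100 ms RTT: {distance_traveled_m(100, 100):.2f} m")  # ~2.78 m
print(f"metro edge, ~5 ms RTT:      {distance_traveled_m(100, 5):.2f} m")    # ~0.14 m
```

At highway speed, a 100 ms round trip to a distant data center means nearly three meters of blind travel per decision; a single-digit-millisecond edge round trip shrinks that to centimeters, which is the gap that makes full-speed operation plausible.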
As the edge computing infrastructure ecosystem evolves, everyday users will look to take advantage of compute and network resources in hundreds of locations. Currently, only the largest CDNs in the world have this kind of footprint, and even they run only a single, fairly simplistic application, according to the report.
As we look at building and taking advantage of a robust edge computing ecosystem, defined by a much more diverse and distributed architecture, what are the implications? The wave promises to take us on a wild ride. From the software we use to the way we purchase computing resources, along with expectations for service levels, physical security, hardware lifecycle management, refresh cycles and more, the implications are dynamic and far-reaching.
Stay up-to-date with the latest commentary and insights on FreightTech and the impact to the markets by subscribing.