Edge computing has grown over the past several years to become one of the most important current trends in IT. It is increasingly viewed as a part of digital transformation, and linked with other trends such as the internet of things (IoT), analytics and cloud computing. But, as with those trends, there is no precise definition – and often much hype – about what edge computing is.
A simple definition of edge computing is that it involves some processing and decision-making taking place at the edge of the network, rather than everything being centralised in the datacentre or the cloud. This goes against the widely held notion that all IT functions will eventually be cloud-hosted, and some have even suggested that edge computing will replace the cloud.
Instead, both cloud and edge computing will co-exist, since they address different requirements. According to analyst firm IDC, the two approaches are complementary and “interact in a smart and intelligent way”. In a report called The technology impacts of edge computing in Europe, the firm forecast that in 2020, more than 50% of cloud deployments by European organisations will include edge computing and that 20% of endpoint devices and systems will execute artificial intelligence (AI) algorithms.
One of the factors driving edge computing is the IoT, which is bringing a whole host of new connected devices to networks. These devices need to be managed, but more importantly many of them are designed to generate streams of data to be analysed for operational reasons or for insights that could lead to more efficient ways of working.
Another technology driver is 5G, not only because it is expected to play a key role in many IoT deployments thanks to its ability to support a much greater number of connected devices per cell base station, but because the sheer processing power required to operate 5G networks means that cell base stations are becoming more like miniature datacentres.
Data at the edge
While centralising all processing in the cloud might seem like the more efficient thing to do, that idea runs into problems when it comes to latency – the delay in transmitting data across the network and in getting a response back.
This means data often needs to be processed and acted on at the point where it is being generated. In a smart factory, for example, sensors monitoring machinery might detect a serious fault condition that requires an instant remedial response.
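The smart factory case above can be sketched in a few lines. This is a minimal illustration of the edge pattern, not any specific product: the decision to stop a machine is taken locally, so it does not depend on a round trip to the cloud. The function name and threshold are illustrative assumptions.

```python
# Illustrative edge-side fault monitor: the remedial action happens locally,
# with no network round trip. The threshold value is an assumed example.
MACHINE_LIMIT_MM_S = 11.0  # example vibration velocity limit

def check_vibration(reading_mm_s: float) -> str:
    """Decide at the edge; readings can be batched to the cloud later."""
    if reading_mm_s > MACHINE_LIMIT_MM_S:
        return "STOP"  # instant remedial response taken on site
    return "OK"        # normal operation

print(check_vibration(4.2))   # normal reading
print(check_vibration(15.7))  # fault condition, machine stopped immediately
```

The point is that the latency-critical branch never leaves the site; only the record of what happened needs to travel to a central system.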
The volumes of data generated by some applications are also growing rapidly. For example, some test autonomous vehicles have been found to generate as much as 8-10TB (terabytes) of data per day. In many cases, transmitting everything to the cloud may not be a viable option, according to Seagate’s executive vice-president and head of operations, Jeff Nygaard.
“It’s not free to move data through a pipe from endpoint to edge to cloud; it costs money to send data through that pipeline. The idea that you should really only be moving data if you need to move the data – based on how you’ve architected, and how you get value out of the data – is something you should be thinking about,” said Nygaard, speaking in a panel discussion on edge computing.
For reasons such as these, it makes sense in many situations to analyse data as it is generated at the edge, and this has led to a requirement for more powerful hardware capable of running analytics on all that data. This means edge systems have expanded from relatively simple edge gateway devices managing a bunch of sensors, to include full-blown servers and even micro-datacentres.
This fits with analyst firm Ovum’s view, outlined in its report Defining the market for edge and edge cloud. Ovum sees a near edge based on traditional servers, storage or hyper-converged infrastructure (HCI) devices, with an outer edge made up of gateway devices, the latter either fully managed or immutable so that they are simply replaced if and when upgrades are required.
Micro-datacentres are enclosures containing one or more datacentre racks, which can be filled with servers, storage and network kit, plus power and cooling systems. In other words, they can house the kind of IT equipment that would normally sit in a rack in a conventional datacentre, but can be installed in a factory, on an oil rig, or anywhere a decent amount of compute power is required.
These are available from suppliers such as Schneider Electric and Rittal, but also from major IT suppliers such as HPE and Dell EMC, which are naturally keen to sell such enclosures ready-configured with their own servers, storage and networking.
But it is also important to recognise that whether data is processed at the edge or in the cloud will depend on the application, and the two are not mutually exclusive. Edge computing allows for data to be filtered and processed before being sent to the cloud, for example, while the cloud may also serve as a central site for collating data for further analysis from multiple edge sites.
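The filter-before-forwarding idea can be made concrete with a short sketch. This is a generic illustration under assumed names (the `summarise` function and its fields are not from any particular product): an edge gateway reduces a window of raw sensor samples to a compact digest, forwarding full detail only for anomalous readings.

```python
# Sketch of edge-side filtering: send the cloud a summary plus anomalies,
# not every raw reading. All names and thresholds are illustrative.
from statistics import mean

def summarise(samples: list[float], limit: float) -> dict:
    """Reduce a window of raw readings to what the cloud actually needs."""
    return {
        "count": len(samples),
        "mean": round(mean(samples), 2),
        "max": max(samples),
        "anomalies": [s for s in samples if s > limit],  # full detail only for outliers
    }

window = [20.1, 19.8, 20.4, 35.0, 20.0]  # raw readings at the edge
digest = summarise(window, limit=30.0)   # one small record leaves the site
```

Here five raw readings shrink to one small record, and only the outlier (35.0) travels in full, which is exactly the bandwidth saving Nygaard describes.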
AI at the edge
In addition to analytics, edge systems are increasingly going to be called on to carry out demanding tasks such as visual recognition, inspecting items on a factory production line for defects, for example. These tasks often rely on AI techniques such as machine learning or deep learning models to deliver results speedily, and this means hardware accelerators such as graphics processing units (GPUs) or field programmable gate arrays (FPGAs) may be required.
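The inspection pattern described above can be sketched as follows. This is a deliberately simplified illustration: the `classify` stub stands in for a real trained model that would run on a GPU or FPGA, and every name here is an assumption rather than a real framework API.

```python
# Sketch of edge inference on a production line: each frame is classified
# locally so defective items can be rejected in real time. The 'classify'
# stub is a toy stand-in for an accelerated deep-learning model.

def classify(frame: bytes) -> float:
    """Stand-in for a trained defect-detection model; returns a defect score."""
    # Toy heuristic only: fraction of saturated (0xff) bytes in the frame.
    return frame.count(b"\xff") / max(len(frame), 1)

def inspect(frame: bytes, threshold: float = 0.5) -> str:
    """Pass or reject an item based on the local model's score."""
    return "reject" if classify(frame) >= threshold else "pass"
```

In a real deployment the stub would be replaced by a model served from an accelerator, but the control flow stays the same: the accept/reject decision is made at the edge, at line speed.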
In fact, GPU maker Nvidia unveiled an edge platform last year that combines its GPU products with a software stack incorporating Kubernetes, a container runtime and containerised AI frameworks, designed to run on standard server hardware.
The EGX platform is described by Nvidia as bringing the power of accelerated AI computing to the edge with an easy-to-deploy cloud-native software stack. EGX partners include HPE, Dell EMC, Fujitsu, Cisco and Supermicro.
New applications and services are also driving the development of edge computing. Demand for high-bandwidth streamed video, for example, is leading service providers to cache content locally at datacentres located closer to customers.
Amazon Web Services (AWS) announced in December 2019 that it is planning to build a series of hyper-local datacentre hubs close to major cities for exactly this reason. These hubs, dubbed “Local Zones” by AWS, are intended to attract businesses with latency-sensitive workloads and will be housed in small-scale datacentres rather than the company’s large regional facilities.
Challenges and rewards
Of course, there are potential issues with edge computing. Having numerous sites collecting and analysing data means more sites that need to be configured and monitored, all of which adds complexity. And the distributed nature of edge computing means that technicians are not always likely to be available onsite if and when a failure occurs.
Edge computing also has implications for networks. As more computing happens at the edge, network bandwidth will have to adapt to this shift in emphasis. According to IDC, edge computing directly increases the importance of networks, especially delocalised networks. Edge computing will also require innovation in how networks are analysed, managed and orchestrated.
Security is an obvious issue for all IT infrastructure, but with edge systems potentially running unattended in remote sites, physical security of the hardware is as much a concern as the potential for a cyber attack. It also means that access control and protection of data both at rest and in transit become even more important.
Management issues may include the need to deliver secure application updates to the edge hardware, plus the ability to remotely diagnose and fix any issues that may develop.
According to Ovum, operational management is more likely to be an extension of the existing operational management market, rather than specific products for edge computing. Likewise, it expects orchestration at the edge to effectively form part of an expansion of the multicloud management market, which, according to Ovum’s forecasts, will be worth $11bn by 2022.
Key takeaways on edge computing are that it is not going to replace cloud, but in some instances can be thought of as bringing cloud computing closer to where data is being generated. It is intended to support new and emerging workloads that may be latency sensitive, require a lot of compute power, or involve such large data volumes that sending everything back to the cloud is impractical. All of this brings new challenges, but also potential rewards for organisations that can get it right.