The Light Path to a Coherent Cloud Edge

Improving Edge Computing with Coherent Optical Systems on Chip


Smaller data centers placed locally have the potential to minimize latency, overcome inconsistent connections, and store and process data closer to the end user. These benefits are causing the global market for edge data centers to explode, with PwC predicting that it will nearly triple from $4 billion in 2017 to $13.5 billion in 2024. Cloud-native applications are driving the construction of edge infrastructure and services. However, cloud providers cannot distribute their processing capabilities without considerable investments in real estate, infrastructure deployment, and management.

This situation leads hyperscalers to cooperate with telecom operators to install their servers in the existing carrier infrastructure. For example, Amazon Web Services (AWS) is implementing edge technology in carrier networks and company premises (e.g., AWS Wavelength, AWS Outposts). Google and Microsoft have very similar strategies and products. In this context, edge computing also poses challenges for telecom providers: they must manage hundreds or thousands of new nodes that will be hard to control and maintain.

At EFFECT Photonics, we believe that coherent pluggables with an optical System-on-Chip (SoC) can become vital in addressing these datacom and telecom sector needs and enabling a new generation of distributed data center architectures. Combining the optical SoCs with reconfigurable DSPs and modern network orchestration and automation software will be key to deploying edge data centers.

Edge data centers are a performance and sustainability imperative

Various trends are driving the rise of the edge cloud:

  • 5G technology and the Internet of Things (IoT): These mobile networks and sensor networks need low-cost computing resources closer to the user to reduce latency and better manage the higher density of connections and data.
  • Content delivery networks (CDNs): The popularity of CDN services continues to grow, and most web traffic today is served through CDNs, especially for major sites like Facebook, Netflix, and Amazon. By using content delivery servers that are more geographically distributed and closer to the edge and the end user, websites can reduce latency, load times, and bandwidth costs as well as increasing content availability and redundancy.
  • Software-defined networking (SDN) and network function virtualization (NFV): The increased use of SDN and NFV requires more cloud software processing.
  • Augmented and virtual reality applications (AR/VR): Edge data centers can reduce streaming latency and improve the performance of AR/VR applications.

Several of these applications require lower latencies than before, and centralized cloud computing cannot deliver those data packets quickly enough. As shown in Table 1, a data center at a town or suburb aggregation point could halve the latency compared to a centralized hyperscale data center. Enterprises with their own data center on-premises can reduce latency by 12 to 30 times compared to hyperscale data centers.

| Type of edge                | Data center        | Location       | Number of DCs per 10M people | Average latency | Size          |
|-----------------------------|--------------------|----------------|------------------------------|-----------------|---------------|
| On-premises edge            | Enterprise site    | Businesses     | NA                           | 2-5 ms          | 1 rack max    |
| Network (mobile) tower edge | Tower              | Nationwide     | 3000                         | 10 ms           | 2 racks max   |
| Outer edge                  | Aggregation points | Town           | 150                          | 30 ms           | 2-6 racks max |
| Inner edge                  | Core               | Major city     | 10                           | 40 ms           | 10+ racks max |
| Regional edge               | Regional           | Major city     | 100                          | 50 ms           | 100+ racks    |
| Not edge                    | Hyperscale         | State/national | 1                            | 60+ ms          | 5000+ racks   |
Table 1: Types of edge data centers and their characteristics. Source: STL Partners

Cisco estimates that 85 zettabytes of useful raw data were created in 2021, but only 21 zettabytes were stored and processed in data centers. Edge data centers can help close this gap. For example, industries or cities can use edge data centers to aggregate all the data from their sensors. Instead of sending all this raw sensor data to the core cloud, the edge cloud can process it locally and turn it into a handful of performance indicators. The edge cloud can then relay these indicators to the core, which requires a much lower bandwidth than sending the raw data.
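To make the bandwidth saving concrete, here is a minimal back-of-the-envelope sketch in Python. The sensor counts, sample rates, and KPI values are illustrative assumptions, not figures from the article:

```python
import json

# Hypothetical scenario: one hour of raw readings from 1,000 sensors
# sampling once per second, versus the handful of aggregate indicators
# an edge data center would relay to the core instead.
NUM_SENSORS = 1_000
SAMPLES_PER_HOUR = 3_600
BYTES_PER_READING = 16  # timestamp + sensor id + value

raw_bytes = NUM_SENSORS * SAMPLES_PER_HOUR * BYTES_PER_READING

# The edge cloud reduces the raw stream to a few performance indicators.
kpis = {"mean_temp": 21.4, "max_temp": 35.2, "min_temp": 18.0,
        "alerts": 3, "sensors_reporting": 998}
kpi_bytes = len(json.dumps(kpis).encode())

print(f"raw stream to core: {raw_bytes / 1e6:.1f} MB per hour")
print(f"KPIs to core:       {kpi_bytes} bytes per hour")
print(f"backhaul reduction: {raw_bytes / kpi_bytes:,.0f}x")
```

Even with these modest assumptions, summarizing at the edge shrinks the backhaul traffic by several orders of magnitude.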

Figure 1: Global data center traffic vs. usable data created per year. Graphic sourced from Cisco Global Cloud Index.

Edge data centers therefore allow more sensor data to be aggregated and processed, making systems worldwide smarter and more efficient. The ultimate goal is to create entire “smart cities” that use this sensor data to benefit their inhabitants, businesses, and the environment. Everything from transport networks to water supply and lighting could be improved if more sensor data were available in the cloud to optimize these processes.

Distributing data centers is also vital for future data center architectures. While centralizing processing in hyperscale data centers made them more energy-efficient, the power grid often limits where new hyperscale data centers can be located. Thus, the industry may have to take a few steps back and decentralize data processing capacity to cope with the strain of data center clusters on power grids. For example, data centers can be relocated to areas with spare power capacity, preferably from nearby renewable energy sources. EFFECT Photonics envisions a system of data centers with branches in different geographical areas, where data storage and processing are assigned based on the local and temporal availability of renewable (wind, solar) energy and the total energy demand in the area.
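The energy-aware placement idea above can be sketched as a simple scheduler: assign a workload to whichever branch currently has the largest surplus of renewable power. This is a toy model under assumed site names and power figures, not EFFECT Photonics' actual system:

```python
from dataclasses import dataclass

@dataclass
class Site:
    """A data center branch with its current power balance (made-up numbers)."""
    name: str
    renewable_supply_mw: float  # current wind/solar output
    local_demand_mw: float      # power needed elsewhere, e.g. EV charging

    @property
    def surplus_mw(self) -> float:
        return self.renewable_supply_mw - self.local_demand_mw

def place_workload(sites, required_mw):
    """Pick the site with the most spare green power that can host the job."""
    candidates = [s for s in sites if s.surplus_mw >= required_mw]
    return max(candidates, key=lambda s: s.surplus_mw, default=None)

sites = [
    Site("coastal-wind", renewable_supply_mw=40.0, local_demand_mw=25.0),
    Site("solar-valley", renewable_supply_mw=30.0, local_demand_mw=28.0),
    Site("city-core",    renewable_supply_mw=5.0,  local_demand_mw=12.0),
]

chosen = place_workload(sites, required_mw=10.0)
print(chosen.name if chosen else "no site has spare green capacity")
```

As local demand shifts (say, EV charging peaks at one site), rerunning the placement moves data processing to wherever excess green energy remains, which is exactly what the high-speed fiber interconnects make feasible.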

Figure 2: High-speed fiber-optic connections allow data processing and storage to move to locations where excess (green) energy is available. If power is needed for other purposes, such as charging electric vehicles, data can be moved elsewhere.

Coherent technology simplifies the scaling of edge data center interconnects

As edge data center interconnects became more common, the issue of how to interconnect them became more prominent. Direct detect technology had been the standard for short-reach data center interconnects. However, the distances greater than 50 km and bandwidths over 100 Gbps required for modern edge data center interconnects demanded external amplifiers and dispersion compensators that increased the complexity of network operations. At the same time, advances in electronic and photonic integration allowed longer-reach coherent technology to be miniaturized into QSFP-DD and OSFP form factors. This progress allowed the Optical Internetworking Forum (OIF) to create the 400ZR and ZR+ standards for 400G DWDM pluggable modules. With modules small enough to pack a router faceplate densely, the datacom sector could profit from a 400ZR solution for high-capacity data center interconnects of up to 80 km. If needed, extended-reach 400ZR+ pluggables can cover several hundred kilometers. Cignal AI forecasts that 400ZR shipments will dominate edge applications, as shown in Figure 3.

Figure 3: Forecast of 100G port equivalents shipped for edge applications. These shipments are overwhelmingly 400ZR standard technology. Source: Cignal AI Transport Applications Report Q4 2021.

Further improvements in integration can further boost the reach and efficiency of coherent transceivers. For example, by integrating all photonic functions on a single chip, including lasers and optical amplifiers, EFFECT Photonics’ optical System-On-Chip (SoC) technology can achieve higher transmit power levels and longer distances while keeping the smaller QSFP-DD form factor, power consumption, and cost.

Maximizing Edge Computing with Automation

With the rise of edge data centers, telecom providers must manage hundreds or thousands of new nodes that will be hard to control and maintain. Furthermore, providers also need a flexible network with pay-as-you-go scalability that can handle future capacity needs. Fortunately, several new technologies are enabling this scalable and automated network management. 

First of all, the rise of self-tuning algorithms has made the installation of new pluggables easier than ever. These algorithms eliminate additional installation tasks such as manual tuning and record verification. Self-tuning modules are host-agnostic, can plug into any third-party host equipment, and scale as you grow. Standardization also allows modules from different vendors to communicate with each other, avoiding compatibility issues and simplifying upgrade choices. The communication channels used for self-tuning algorithms can also be used for remote diagnostics and management, as is the case with EFFECT Photonics' NarroWave technology.
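As a rough illustration of what such algorithms automate, the sketch below scans a DWDM channel grid until the far end responds, replacing the manual tune-and-record step. It is a hypothetical simplification: `probe_link` stands in for the real optical handshake, and none of this represents the NarroWave implementation.

```python
# 48 channels on the 100 GHz ITU-T C-band grid (191.3-196.0 THz), in GHz.
ITU_C_BAND_CHANNELS_GHZ = [191_300 + 100 * n for n in range(48)]

def self_tune(probe_link, channels):
    """Try each candidate channel; return the first one on which the
    remote end responds. `probe_link` abstracts the optical handshake."""
    for freq_ghz in channels:
        if probe_link(freq_ghz):
            return freq_ghz
    return None  # no channel acknowledged; flag for diagnostics

# Simulate a mux/demux port that only passes one wavelength (193.1 THz).
PORT_FREQ_GHZ = 193_100
found = self_tune(lambda f: f == PORT_FREQ_GHZ, ITU_C_BAND_CHANNELS_GHZ)
print(found)
```

The point is that the module, not the technician, discovers which wavelength the mux port expects, which is why self-tuning pluggables can be installed by staff with no optical expertise.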

Automation potential improves further by combining artificial intelligence with the software-defined networking (SDN) framework that virtualizes and centralizes network functions. This creates an automated and centralized management layer that can allocate resources efficiently and dynamically. For example, AI in network management will become a significant factor in reducing the energy consumption of future telecom networks.

Figure 4: Comparing a traditional network approach (left) with an SDN/NFV approach (right).

Future smart transceivers with reconfigurable digital signal processors (DSPs) can give the AI-controlled management layer even more degrees of freedom to optimize the network. These smart transceivers will relay more device information for diagnosis, and depending on the management layer's instructions, they can change their coding schemes to adapt to different network requirements.
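The trade-off such a reconfigurable DSP navigates can be illustrated with a toy lookup: denser constellations carry more bits per symbol but tolerate less noise, so they reach shorter distances. The format table below uses illustrative numbers loosely based on typical coherent trade-offs, not the specifications of any real module:

```python
# (name, data rate in Gbps, max reach in km) -- illustrative values only,
# ordered from densest constellation to most robust.
FORMATS = [
    ("64QAM", 600, 100),
    ("16QAM", 400, 500),
    ("QPSK",  200, 2_000),
]

def pick_format(required_gbps, reach_km):
    """Return the densest modulation format that satisfies both the
    required data rate and the link reach, or None if nothing fits."""
    for name, gbps, max_km in FORMATS:
        if gbps >= required_gbps and max_km >= reach_km:
            return name
    return None

print(pick_format(400, 80))    # short data center interconnect hop
print(pick_format(400, 450))   # longer metro span
```

A management layer running logic like this per link could, for instance, step a short hop up to a denser format for extra capacity, or step a degraded link down to keep it running, without a technician touching the module.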

Takeaways

Cloud-native applications require edge data centers that deliver lower latency and fit better within the existing power grid. However, implementing them brings the challenges of more data center interconnects and a massive increase in nodes to manage. Fortunately, coherent pluggables with self-tuning can play a vital role in addressing these datacom and telecom sector challenges and enabling a new generation of distributed data center architectures. Combining these pluggables with modern network orchestration and automation software will boost the deployment of edge data centers. EFFECT Photonics believes that with these automation technologies (self-tuning, SDN, AI), we can reach the goal of a self-managed, zero-touch automated network that can handle the massive scale-up required for 5G networks and edge computing.

Corlia van Tonder
