Coherent transmission has become a fundamental component of optical networks to address situations where direct detect technology cannot provide the required capacity and reach.
While direct detect transmission only uses the amplitude of the light signal, coherent optical transmission manipulates three different light properties: amplitude, phase, and polarization. These additional degrees of modulation allow for faster optical signals without compromising the transmission distance. Furthermore, coherent technology enables capacity upgrades without replacing the expensive physical fiber infrastructure on the ground.
The digital signal processor (DSP) is the electronic heart of coherent transmission systems. The fundamental function of the DSP is encoding the electronic digital data into the amplitude, phase, and polarization of the light signal and decoding said data when the signal is received. The DSP does much more than that, though: it compensates for impairments in the fiber, performs analog-to-digital conversions (and vice versa), corrects errors, encrypts data, and monitors performance. And recently, DSPs are taking on more advanced functions such as probabilistic constellation shaping or dynamic bandwidth allocation, which enable improved reach and performance.
Given its vital role in coherent optical transmission, we at EFFECT Photonics want to provide an explainer of what goes on inside the DSP chip of our optical transceivers.
There’s More to a DSP Than You Think…
Even though we colloquially call the chip a “DSP”, it is an electronic engine that performs much more than just signal processing. Some of the different functions of this electronic engine (diagram below) are:
- Analog Processing: This engine segment focuses on converting signals between analog and digital formats. Digital data is composed of discrete values like 0s and 1s, but transmitting it through a coherent optical system requires converting it into an analog signal with continuous values. Meanwhile, a light signal received on the opposite end requires conversion from analog into digital format.
- Digital Signal Processing: This is the actual digital processing. As explained previously, this block encodes the digital data into the different properties of a light signal. It also decodes this data when the light signal is received.
- Forward Error Correction (FEC): FEC makes the coherent link much more tolerant to noise than a direct detect system and enables much longer reach and higher capacity. Thanks to FEC, coherent links can handle bit error rates that are literally a million times higher than a typical direct detect link. FEC algorithms allow the electronic engine to enhance the link performance without changing the hardware. This enhancement is analogous to imaging cameras: image processing algorithms allow the lenses inside your phone camera to produce a higher-quality image.
- Framer: While a typical electric signal sent through a network uses the Ethernet frame format, the optical signal uses the Optical Transport Network (OTN) format. The framer block performs this conversion. We should note that an increasingly popular solution in communication systems is to send Ethernet frames directly over the optical signal (a solution called optical Ethernet). However, many legacy optical communication systems still use the OTN format, so electronic engines should always have the option to convert between OTN and Ethernet frames.
- Glue Logic: This block consists of the electronic circuitry needed to interface all the different blocks of the electronic engine. This includes the microprocessor that drives the electronic engine and the serializer-deserializer (SERDES) circuit. Since a coherent system has only four high-speed lanes (the in-phase and quadrature components of the two polarizations), the SERDES circuit converts parallel data streams into a single serial stream that can be transmitted over one of these lanes. The opposite conversion (serial-to-parallel) occurs when the signal is received.
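To make the FEC idea concrete, here is a toy sketch: a rate-1/3 repetition code over a noisy binary channel. Real coherent modules use far more powerful codes (for example, LDPC-based FEC), so this only illustrates the principle that redundancy buys noise tolerance; all names and parameters here are our own illustration, not an actual transceiver algorithm.

```python
import numpy as np

rng = np.random.default_rng(42)

def bsc(bits, p):
    """Binary symmetric channel: flip each bit with probability p."""
    flips = rng.random(bits.shape) < p
    return bits ^ flips

def encode_rep3(bits):
    """Repeat every bit three times (rate-1/3 repetition code)."""
    return np.repeat(bits, 3)

def decode_rep3(coded):
    """Majority vote over each group of three received bits."""
    return (coded.reshape(-1, 3).sum(axis=1) >= 2).astype(coded.dtype)

n, p = 100_000, 0.05
data = rng.integers(0, 2, n, dtype=np.uint8)

# Uncoded error rate is ~p; coded error rate is ~3p^2, far lower.
raw_ber = np.mean(data != bsc(data, p))
coded_ber = np.mean(data != decode_rep3(bsc(encode_rep3(data), p)))
print(f"uncoded BER ~ {raw_ber:.4f}, coded BER ~ {coded_ber:.4f}")
```

Even this crude code turns a channel error rate of about 5% into a decoded error rate below 1%; the codes in real DSPs achieve vastly better trade-offs with less overhead.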
We must highlight that each of these blocks has its own specialized circuitry and algorithms, so each is a separate piece of intellectual property. Therefore, developing the entire electronic engine requires ownership or access to each of these intellectual properties.
So What’s Inside the Actual DSP Block?
Having first clarified all the different parts of a transceiver’s electronic engine, we can now talk more specifically about the actual DSP block that encodes/decodes the data and compensates for distortions and impairments in the optical fiber. We will describe some of the critical functions of the DSP in the order in which they happen during signal transmission. Receiving the signal would require these functions to occur in the opposite order, as shown in the diagram below.
- Signal Mapping: This is where the encoding/decoding magic happens. The DSP maps the data signal into the different phases of the light signal—the in-phase components and the quadrature components—and the two different polarizations (x- and y- polarizations). When receiving the signal, the DSP will perform the inverse process, taking the information from the phase and polarization and mapping it into a stream of bits. The whole process of encoding and decoding data into different phases of light is known as quadrature modulation. Explaining quadrature modulation in detail goes beyond the scope of this article, so if you want to know more about it, please read the following article.
- Pilot Signal Insertion: The pilot signal is transmitted over the communication systems to estimate the status of the transmission path. It makes it easier (and thus more energy-efficient) for the receiver end to decode data from the phase and polarization of the light signal.
- Adaptive Equalization: This function happens when receiving the signal. The fiber channel adds several distortions to the light signal (more on that later) that change the signal’s frequency spectrum from what was initially intended. Just as with an audio equalizer, the purpose of this equalizer is to change specific frequencies of the signal to compensate for the distortions and bring the signal spectrum back to what was initially intended.
- Dispersion and Nonlinear Compensation: This function happens when receiving the signal. The quality of the light signal degrades when traveling through an optical fiber by a process called dispersion. The same phenomenon happens when a prism splits white light into several colors. The fiber also adds other distortions due to nonlinear optical effects. These effects get worse as the input power of the light signal increases, leading to a trade-off. You might want more power to transmit over longer distances, but the nonlinear distortions also become larger, which defeats the point of using more power. The DSP performs several operations on the light signal that try to offset these dispersion and nonlinear distortions.
- Spectrum Shaping: Communication systems must be efficient in all senses, so they must transmit as much signal as possible within a limited number of frequencies. Spectrum shaping is a process that uses a digital filter to narrow down the signal to the smallest possible frequency bandwidth and achieve this efficiency.
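As a rough illustration of the signal-mapping step above, the sketch below Gray-maps a bit stream onto dual-polarization 16-QAM symbols and slices them back to bits with a nearest-point demapper. This is a simplified model written for illustration only; a real DSP does this in fixed-point hardware, together with the pilot insertion, equalization, and compensation steps described above.

```python
import numpy as np

# Gray-coded 2-bit -> amplitude level mapping for one quadrature axis.
GRAY_LEVELS = {(0, 0): -3, (0, 1): -1, (1, 1): 1, (1, 0): 3}
LEVEL_BITS = {v: k for k, v in GRAY_LEVELS.items()}

def map_16qam(bits):
    """Map bits (length a multiple of 8) onto dual-polarization 16-QAM:
    per symbol, 2 bits each go to x-pol I, x-pol Q, y-pol I, y-pol Q."""
    x, y = [], []
    for chunk in bits.reshape(-1, 8):
        xi, xq, yi, yq = (GRAY_LEVELS[tuple(chunk[i:i + 2])] for i in (0, 2, 4, 6))
        x.append(xi + 1j * xq)
        y.append(yi + 1j * yq)
    return np.array(x), np.array(y)

def demap_16qam(x, y):
    """Slice each received level to the nearest of {-3,-1,1,3} and
    recover the Gray-coded bit pairs in their original order."""
    bits = []
    for xs, ys in zip(x, y):
        for v in (xs.real, xs.imag, ys.real, ys.imag):
            nearest = min(LEVEL_BITS, key=lambda lv: abs(lv - v))
            bits.extend(LEVEL_BITS[nearest])
    return np.array(bits, dtype=np.uint8)

rng = np.random.default_rng(7)
bits = rng.integers(0, 2, 800, dtype=np.uint8)
x, y = map_16qam(bits)            # 800 bits -> 100 dual-pol symbols
recovered = demap_16qam(x, y)
print(np.array_equal(recovered, bits))   # True: noiseless round trip
```

Gray coding is used so that neighboring constellation points differ by only one bit, which limits the damage a single symbol error can do.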
When transmitting, the signal goes through the digital-to-analog conversion after this whole DSP sequence. When receiving, the signal goes through the inverse analog-to-digital conversion and then through the DSP sequence.
Recent Advances and Challenges in DSPs
This is an oversimplification, but we can broadly classify the critical areas of improvement for DSPs into two categories.
Transmission Reach and Efficiency
The entire field of communication technology can arguably be summarized with a single question: how can we fit more information into a single, frequency-limited signal and transmit it over the longest possible distance?
DSP developers have many tools in their kit to answer this question. For example, they can transmit more data using more states in their quadrature-amplitude modulation (QAM) process. The simplest kind of QAM (4-QAM) uses four different states (usually called constellation points), combining two amplitude levels on each of the two quadratures of the light.
By using more intensity levels and phases, more bits can be transmitted in one go. State-of-the-art commercially available 400ZR transceivers typically use 16-QAM, with sixteen different constellation points that arise from combining four amplitude levels on each of the two quadratures. However, this increased transmission capacity comes at a price: a signal with a higher modulation order is more susceptible to noise and distortions. That’s why these transceivers can transmit 400Gbps over 100km but not over 1000km.
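A quick back-of-envelope calculation shows why 16-QAM gets a coherent module to 400G. The ~60 Gbaud symbol rate below is an assumed, 400ZR-class figure used for illustration, not a number quoted in this article:

```python
import math

def line_rate_gbps(symbol_rate_gbaud, qam_order, polarizations=2):
    """Raw line rate = symbol rate x bits per symbol x polarizations."""
    bits_per_symbol = math.log2(qam_order)   # 16-QAM -> 4 bits/symbol
    return symbol_rate_gbaud * bits_per_symbol * polarizations

# Assumed 400ZR-class module: ~60 Gbaud, 16-QAM, dual polarization.
raw = line_rate_gbps(60, 16)
print(raw)   # 480.0 Gbps raw
```

The headroom above 400 Gbps pays for FEC overhead and framing, which is why the raw line rate must exceed the nominal client rate.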
One of the most remarkable recent advances in DSPs to increase the reach of light signals is probabilistic constellation shaping (PCS). In the typical 16-QAM modulation used in coherent transceivers, each constellation point has the same probability of being used. This is inefficient since the outer constellation points that require more power have the same probability as the inner constellation points that require lower power.
PCS uses the low-power inner constellation points more frequently, and the outer constellation points less frequently, as shown in Figure 5. This feature provides many benefits, including improved tolerance to distortions and easier system optimization to specific bit transmission requirements. If you want to know more about it, please read the explainers here and here.
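A minimal numeric sketch of the PCS idea: weight the 16-QAM constellation with a Maxwell-Boltzmann distribution so that the low-energy inner points are sent more often, lowering the average transmit power for the same constellation. The shaping parameter λ below is an arbitrary value chosen for illustration, not a value from any real transceiver:

```python
import numpy as np

# Standard 16-QAM constellation: 4 amplitude levels per quadrature.
levels = np.array([-3, -1, 1, 3])
points = np.array([i + 1j * q for i in levels for q in levels])
energy = np.abs(points) ** 2

def mb_probabilities(lam):
    """Maxwell-Boltzmann distribution: p(x) proportional to exp(-lam*|x|^2),
    so low-energy inner points are used more frequently."""
    w = np.exp(-lam * energy)
    return w / w.sum()

p_uniform = np.full(16, 1 / 16)
p_shaped = mb_probabilities(0.05)   # lambda chosen arbitrarily here

avg_uniform = np.sum(p_uniform * energy)   # 10.0 for standard 16-QAM
avg_shaped = np.sum(p_shaped * energy)     # lower average symbol energy
print(avg_uniform, avg_shaped)
```

The saved power can instead be spent on reach, or the shaping can be tuned continuously to hit a specific net bit rate, which is where the "easier system optimization" benefit comes from.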
Increases in transmission reach and efficiency must be balanced with power consumption and thermal management. Energy efficiency is the biggest obstacle in the roadmap to scale high-speed coherent transceivers into Terabit speeds.
Over the last two decades, power ratings for pluggable modules have increased as we moved from direct detection to more power-hungry coherent transmission: from 2W for SFP modules to 3.5W for QSFP modules and now to 14W for QSFP-DD and 21.1W for OSFP form factors. Rockley Photonics researchers estimate that a future electronic switch filled with 800G modules would draw around 1 kW of power just for the optical modules.
Around 50% of a coherent transceiver’s power consumption goes into the DSP chip. Scaling to higher bandwidths leads to even more losses and energy consumption from the DSP chip and its radiofrequency (RF) interconnects with the optical engine. DSP chips must therefore be adaptable and smart, using the least amount of energy to encode/decode information. You can learn more about this subject in one of our previous articles. The interconnects with the optical engine are another area that can see further optimization, and we discuss these improvements in our article about optoelectronic co-design.
In summary, DSPs are the heart of coherent communication systems. They not only encode/decode data into the three properties of a light signal (amplitude, phase, polarization) but also handle error correction, analog-to-digital conversion, Ethernet framing, and compensation of dispersion and nonlinear distortion. And with every passing generation, they are assigned more advanced functions such as probabilistic constellation shaping.
There are still many challenges ahead to improve DSPs and make them transmit even more bits in more energy-efficient ways. Now that EFFECT Photonics has incorporated talent and intellectual property from Viasat’s Coherent DSP team, we hope to contribute to this ongoing research and development and make transceivers faster and more sustainable than ever.
As part of the Welsh Government’s efforts to ensure all residents have access to fast and reliable digital infrastructure, EFFECT Photonics is assisting with the rollout of 5G Fixed Wireless Access (FWA) services in hard-to-reach areas in Wales. Led by Vodafone, the project seeks to deliver broadband connectivity to 422 households in Anglesey, an island off the northwest coast of Wales. This is the latest collaboration between EFFECT Photonics and Vodafone, with the two companies already working together to bring 5G connectivity to the Netherlands.
The project will be carried out over two phases during an 18-month period. It is supported by the government’s Local Broadband Fund (LBF) as well as the Isle of Anglesey County Council and North Wales Economic Ambition Board. In addition to EFFECT Photonics and Vodafone, Bangor University’s Digital Signal Processing (DSP) Centre for Excellence is a supplier along with others. EFFECT Photonics and Bangor University have also partnered on the DESTINI project, for the development of an algorithm which can be introduced into existing telecommunications networks to expedite 5G connectivity.
“Photonics technology is an ideal solution for fixed wireless access due to its ability to provide increased bandwidth density with less energy consumption over significant distances,” said Joost Verberk, Director of Product Management, EFFECT Photonics. “We look forward to collaborating with the Welsh government, along with Vodafone and Bangor University once again, to bring 5G broadband access to areas where it never seemed possible before.”
EFFECT Photonics SFP+ Modules and NarroWave Technology
Based on Indium Phosphide technology, the EFFECT Photonics 10 Gbps Narrow Tunable SFP+ Transceiver Module leverages the company’s unique Photonic Integrated Circuit (PIC) technology. It features EFFECT Photonics NarroWave technology, which allows operators to set up, monitor and control remote SFP+ modules from the central office, without making any hardware or software changes in the field.
With the increasing demand for cloud-based applications, datacom providers are pushing forward with expanding their distributed computing networks. Therefore, they and their telecom partners are looking for data center interconnect (DCI) solutions that are faster and more affordable than before to ensure that connectivity between metro and regional facilities does not become a bottleneck.
Energy usage, space, simplicity, and cost-effectiveness all impact the efficiency of DCI infrastructure. These solutions must consider watts per bit, rack space, and simplified provisioning and operating expenditure. Previously, direct detect technology could fulfill these requirements for short-reach DCIs inside data centers and campuses. However, achieving the reach and bandwidths required for edge and metro DCIs required external amplifiers and dispersion compensators that increased the cost and complexity of network operations.
At the same time, advances in electronic and photonic integration allowed longer reach coherent technology to be miniaturized into QSFP-DD and OSFP form factors. This enabled the transport of 100G and 400G connections over a single wavelength and several hundreds of kilometers, which is ideal for edge and metro DCI networks. Provider operations teams found the simplicity of coherent pluggables very attractive. There was no need to install and maintain additional amplifiers and compensators as in direct detect: a single coherent transceiver plugged into a router could fulfill the requirements.
In the coming decade, the shorter-reach DCI links will also require upgrades to 400G, 800G, and Terabit speeds, and at those speeds, coherent technology comes close to matching the energy consumption of direct detect. This would make it competitive even for shorter links.
Coherent Dominates in Metro DCIs
The advances in electronic and photonic integration allowed coherent technology for metro DCIs to be miniaturized into QSFP-DD and OSFP form factors. This progress allowed the Optical Internetworking Forum (OIF) to create the 400ZR and ZR+ standards for 400G DWDM pluggable modules. With small enough modules to pack a router faceplate densely, the datacom sector could profit from a 400ZR solution for high-capacity data center interconnects of up to 80km. If needed, extended reach 400ZR+ pluggables can cover several hundreds of kilometers. As an example of their success, Cignal AI forecasts that 400ZR shipments will dominate in the edge applications, as shown in Figure 3.
Further improvements in integration can further boost the reach and efficiency of coherent transceivers. For example, by integrating all photonic functions on a single chip, including lasers and optical amplifiers, EFFECT Photonics’ photonic System-On-Chip (SoC) technology can achieve higher transmit power levels and longer distances while keeping the smaller QSFP-DD form factor, power consumption, and cost.
Campus DCI Is The Battleground of Direct Detect and Coherent
The campus DCI segment, featuring distances below ten kilometers, was squarely the domain of direct detect products when the standard speed of these links was 100Gbps. No amplifiers or compensators were needed for these shorter distances, so direct detect transceivers were as simple to deploy and maintain as coherent ones. However, at 400Gbps speeds, the power consumption of coherent technology is much closer to that of direct detect PAM-4 solutions.
This gap in power consumption is expected to disappear at 800Gbps, as shown in the figure below. For Terabit speeds, the prediction is that coherent transceivers will be more efficient. Furthermore, as the volume production of coherent transceivers increases, their price will also become competitive with direct detect solutions. Overall, coherent transceivers are expected to scale up better in future upgrades.
Direct Detect Dominates Intra Data Center Interconnects (For Now…)
Below Terabit speeds, direct detect technology (both NRZ and PAM-4) will likely dominate the intra-DCI space (also called data center fabric) in the coming years. In this space, links span less than two kilometers, and for particularly short links (< 300 meters), affordable multimode fiber (MMF) is frequently used.
Nevertheless, moving to larger, more centralized data centers (such as hyperscale data centers) is lengthening intra-DCI links. Instead of transferring data directly from one data center building to another, new data centers first move data to a central hub. So even if the building you want to connect to might be 200 meters away, the fiber runs to a hub that might be one or two kilometers away. In other words, intra-DCI links are becoming campus DCI links, which means they require single-mode fiber solutions.
On top of these changes, the upgrades to Terabit speeds in the coming decade will also see coherent solutions challenge the power consumption of direct detect transceivers. PAM-4 direct detect transceivers that fulfill the speed requirements require digital signal processors (DSPs) and more complex lasers that will be less efficient and affordable than previous generations of direct detect technology. With coherent technology scaling up in volume and having greater flexibility and performance, one can make the argument that it will reach cost-competitiveness in this space, too.
Unsurprisingly, the decision to use coherent or direct detect technology for DCIs boils down to the reach and capacity needs. Coherent is already established as the solution for metro DCIs and is gaining ground in the campus DCI segment for 800G and Terabit speeds. With the move to Terabit speeds and scaling production volumes, it could become cost-competitive inside the data center too. Overall, the datacom sector is moving towards coherent technology, and it pays off to have this in mind when upgrading data center links.
Optical signals are moving deeper and deeper into access networks. Achieving the ambitious performance goals of 5G architectures requires more optics than ever between small cell sites. As stated in a recent report by Deloitte, “extending fiber optics deeper into remote communities is a critical economic driver, promoting competition, increasing connectivity for the rural and underserved, and supporting densification for wireless.”
However, there are cases in which fiber is not cost-effective to deploy. For example, a network carrier might need to quickly increase their access network capacity for a big festival, and there is no point in deploying extra fiber. In many remote areas, the customer base is so small that the costly deployment of fiber will not produce a return on investment. These situations must be addressed with some kind of wireless access solution. Carriers have used fixed microwave links for the longest time to handle these situations.
However, radio microwave frequencies might not be enough as the world demands greater internet speeds and simply changing over to higher carrier frequencies will limit the reach of microwave links. On top of that, the radio spectrum is quite crowded, and a carrier might not have the available licensed spectrum to deploy this wireless link. Besides, microwave point-to-point links produce plenty of heat while struggling to deliver capacity beyond a few Gbps. This is where free-space optics (FSO) comes into play.
FSO is a relatively straightforward technology to explain. A high-power laser source converts data into laser pulses and sends them through a lens system and into the atmosphere. The laser travels to the other side of the link and passes through a receiver lens system, where a high-sensitivity photodetector converts the laser pulses back into electronic data that can be processed. In other words, instead of using an optical fiber as a medium to transmit the laser pulses, FSO uses air as a medium. The laser typically operates at an infrared wavelength of 1550nm that is safer for the eye.
FSO has often been talked about as some futuristic technology to be used in space applications, but it can be used for much more than that, including ground-to-ground links in access networks. FSO can deliver a wireless access solution that can be deployed quickly and with more bandwidth capacity, security features, and less power consumption than traditional point-to-point microwave links. Furthermore, since it does not use the RF spectrum, there is no need to secure spectrum licenses.
Overcoming the challenges of alignment and atmospheric turbulence
FSO struggled to break through into practical applications despite these benefits because of certain technical challenges. Communications infrastructure, therefore, focused on more stable transmission alternatives such as optical fiber and RF signals. However, research and innovation over the last few decades are removing these technical barriers. One obstacle to achieving longer distances with FSO had to do with the quality of the laser signal.
Over time, FSO developers have found a solution to this issue in adaptive optics systems. These systems compensate for distortions in the beam by using an active optical element—such as a deformable mirror or liquid crystal—that dynamically changes its structure depending on the shape of the laser beam. Dutch startup Aircision uses this kind of technology in its FSO systems to increase their tolerance to atmospheric disruptions.
Another drawback of FSO is aligning the transmitter and receiver units. Laser beams are extremely narrow, and if the beam doesn’t hit the receiver lens at just the right angle, the information may be lost. The system requires almost perfect alignment, and it must maintain this alignment even when there are small changes in the beam trajectory due to wind or atmospheric disturbances.
FSO systems can handle these alignment issues with fast steering mirror (FSM) technology. These mirrors are driven with electrical signals and are fast, compact, and accurate enough to compensate for disturbances in the beam trajectory. However, even if the system can maintain the beam trajectory and shape, atmospheric turbulence can still degrade the message and cause interference in the data. Fortunately, FSO developers also use sophisticated digital signal processing (DSP) techniques to compensate for these impairments.
These DSP techniques allow for reliable, high-capacity, quick deployments even through thick clouds and fog. FSO links can now handle Gbps capacity over several kilometers thanks to all these technological advances. For example, a collaboration between Aircision and TNO demonstrated in 2021 that their FSO systems could reliably transmit 10 Gbps over 2.5 km. Aircision’s Scientific Director John Reid explained, “it’s an important milestone to show we can outperform microwave E-band antennas and provide a realistic solution for the upcoming 5G system.”
An alternative for safe, private networks
An understated benefit of FSO is that, from a physics perspective, it is arguably the most secure form of wireless communication available today. Point-to-point microwave links transmit a far more directional beam than mobile antennas or WiFi systems, which reduces the potential for security breaches. However, even these narrower microwave beams are still spread out enough to cover a wide footprint vulnerable to eavesdropping and jamming.
At a 1km distance, the beam can spread out enough to cover roughly the length of a building, and at 5km, it could cover an entire city block. Furthermore, microwave systems have side- and back lobes radiating away from the intended direction of transmission that can be intercepted too. Finally, if an attacker is close enough to the source, even the reflected energy from buildings can be used to intercept the signal.
Laser beams in FSO are so narrow and focused that they do not have to deal with these issues. At 1km, a typical laser beam only spreads out about 2 meters, and at 5km, only about 5 meters. There are no side and back lobes to worry about and no near-zone reflections. The beam is so narrow that intercepting the transmission becomes an enormous challenge. An intruder would have to get within inches of a terminal or the line of sight, making it easier to get discovered. To complicate things further, the intruder’s terminal would also need to be very well aligned to pick up enough of a signal.
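The footprint figures above follow from simple small-angle geometry: the beam diameter grows roughly linearly with distance. The divergence values below are assumptions consistent with the figures quoted in the text, not measured specifications of any particular system:

```python
import math

def footprint_m(distance_m, divergence_rad):
    """Approximate beam footprint diameter at a given distance:
    footprint = distance x full divergence angle (small-angle approx.)."""
    return distance_m * divergence_rad

# Milliradian-class laser beam (assumed 1 mrad full divergence):
print(footprint_m(1_000, 1e-3))   # 1.0 m at 1 km
print(footprint_m(5_000, 1e-3))   # 5.0 m at 5 km

# Microwave link with an assumed 2-degree beamwidth:
print(round(footprint_m(1_000, math.radians(2))))   # 35 m at 1 km
```

The roughly thousand-fold difference in footprint area is what makes intercepting an FSO beam so much harder than tapping a microwave link.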
Using Highly-Integrated Transceivers in Free Space Optics
Even though fiber optical communications drove the push for smaller and more efficient optical transceivers, this progress also has a beneficial impact on FSO. As we have explained in previous articles, optical transmission systems have been miniaturized from big, expensive line cards to small, affordable pluggables the size of a large USB stick. These compact transceivers with highly integrated optics and electronics have shorter interconnections, fewer losses, and more elements per chip area. These features all led to reduced power consumption over the last decade. At EFFECT Photonics, we achieve even further efficiency gains with an optical System-on-Chip (SoC) that integrates all photonic functions on a single chip, including lasers and amplifiers.
FSO systems can now take advantage of affordable, low-power transceivers to transmit and receive laser signals in the air. For example, a transceiver based on an optical SoC can output a higher power into the FSO system. By using this higher laser power, the FSO does not need to amplify the signal so much before transmitting it, improving its noise profile. Furthermore, this benefit happens with both direct detect and coherent transceivers. This is a key reason why Aircision has partnered up with EFFECT Photonics to create both direct detect and coherent free-space optical systems, since the startup ultimately aims to reach transmission speeds of 100 Gbps over the air.
FSO has moved from the domain of science fiction to a practical technology that now deserves a place in access networks. FSO can deliver a wireless access solution that can be deployed quickly and with more bandwidth capacity, security features, and less power consumption than traditional point-to-point microwave links. Furthermore, since it does not use the RF spectrum, it is unnecessary to secure spectrum licenses. Affordable direct detect and coherent transceivers based on SoC can further improve the quality and affordability of FSO transmission.
Smaller data centers placed locally have the potential to minimize latency, overcome inconsistent connections, and store and compute data closer to the end-user. These benefits are causing the global market for edge data centers to explode, with PwC predicting that it will nearly triple from $4 billion in 2017 to $13.5 billion in 2024. Cloud-native applications are driving the construction of edge infrastructure and services. However, their providers cannot distribute their processing capabilities without considerable investments in real estate, infrastructure deployment, and management.
This situation leads to hyperscalers cooperating with telecom operators to install their servers in the existing carrier infrastructure. For example, Amazon Web Services (AWS) is implementing edge technology in carrier networks and company premises (e.g., AWS Wavelength, AWS Outposts). Google and Microsoft have strategies and products that are very similar. In this context, edge computing poses a few problems for telecom providers too. They must manage hundreds or thousands of new nodes that will be hard to control and maintain.
At EFFECT Photonics, we believe that coherent pluggables with an optical System-on-Chip (SoC) can become vital in addressing these datacom and telecom sector needs and enabling a new generation of distributed data center architectures. Combining optical SoCs with reconfigurable DSPs and modern network orchestration and automation software will be key to deploying edge data centers.
Edge data centers are a performance and sustainability imperative
Various trends are driving the rise of the edge cloud:
- 5G technology and the Internet of Things (IoT): These mobile networks and sensor networks need low-cost computing resources closer to the user to reduce latency and better manage the higher density of connections and data.
- Content delivery networks (CDNs): The popularity of CDN services continues to grow, and most web traffic today is served through CDNs, especially for major sites like Facebook, Netflix, and Amazon. By using content delivery servers that are more geographically distributed and closer to the edge and the end user, websites can reduce latency, load times, and bandwidth costs while increasing content availability and redundancy.
- Software-defined networks (SDN) and network function virtualization (NFV): The increased use of SDNs and NFV requires more cloud software processing.
- Augmented and virtual reality applications (AR/VR): Edge data centers can reduce the streaming latency and improve the performance of AR/VR applications.
Several of these applications require lower latencies than before, and centralized cloud computing cannot deliver those data packets quickly enough. As shown in Table 1, a data center at a town or suburb aggregation point could halve the latency compared to a centralized hyperscale data center. Enterprises with their own data center on-premises can reduce latencies by 12 to 30 times compared to hyperscale data centers.
| Type of Edge | Data center | Location | Number per 10M people | Latency | Size |
|---|---|---|---|---|---|
| On-premises edge | Enterprise | Businesses | NA | 2-5 ms | 1 rack |
| Network (Mobile) | Tower edge | Tower (nationwide) | 3,000 | 10 ms | 2 racks |
| Network (Mobile) | Aggregation edge | Town | 150 | 30 ms | 2-6 racks |
| Inner edge | Core | Major city | 10 | 40 ms | 10+ racks |
| Regional edge | Regional | Major city | 100 | 50 ms | 100+ racks |
| Hyperscale (not edge) | Hyperscale | | 1 | 60+ ms | 5000+ racks |
Cisco estimates that 85 zettabytes of useful raw data were created in 2021, but only 21 zettabytes were stored and processed in data centers. Edge data centers can help close this gap. For example, industries or cities can use edge data centers to aggregate all the data from their sensors. Instead of sending all this raw sensor data to the core cloud, the edge cloud can process it locally and turn it into a handful of performance indicators. The edge cloud can then relay these indicators to the core, which requires a much lower bandwidth than sending the raw data.
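As a rough illustration of this aggregation principle, here is a minimal Python sketch (with made-up sensor data) that condenses raw readings into a handful of indicators and compares the payload sizes of what would be sent to the core cloud:

```python
import json
import random

def summarize(readings):
    """Aggregate raw sensor readings into a handful of indicators.

    This is what an edge data center would relay upstream instead of
    forwarding every raw sample to the core cloud."""
    values = [r["value"] for r in readings]
    return {
        "sensor_count": len({r["sensor_id"] for r in readings}),
        "sample_count": len(values),
        "mean": sum(values) / len(values),
        "max": max(values),
        "min": min(values),
    }

# Simulate one hour of raw data from 100 city sensors sampled every second.
random.seed(0)
raw = [{"sensor_id": i % 100, "value": random.gauss(20.0, 2.0)}
       for i in range(100 * 3600)]

indicators = summarize(raw)

# The payload sent upstream shrinks by several orders of magnitude.
raw_bytes = len(json.dumps(raw))
summary_bytes = len(json.dumps(indicators))
print(f"raw: {raw_bytes} B, summary: {summary_bytes} B, "
      f"reduction: {raw_bytes / summary_bytes:.0f}x")
```

The exact indicators are application-specific, of course; the point is that the edge-to-core bandwidth scales with the number of indicators, not with the number of raw samples.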
Edge data centers therefore allow more sensor data to be aggregated and processed to make systems worldwide smarter and more efficient. The ultimate goal is to create entire “smart cities” that use this sensor data to benefit their inhabitants, businesses, and the environment. Everything from transport networks to water supply and lighting could be improved if we have more sensor data available in the cloud to optimize these processes.

Distributing data centers is also vital for future data center architectures. While centralizing processing in hyperscale data centers made them more energy-efficient, the power grid often limits the potential location of new hyperscale data centers. Thus, the industry may have to take a few steps back and decentralize data processing capacity to cope with the strain of data center clusters on power grids. For example, data centers can be relocated to areas where spare power capacity is available, preferably from nearby renewable energy sources. EFFECT Photonics envisions a system of data centers with branches in different geographical areas, where data storage and processing are assigned based on the local and temporal availability of renewable (wind, solar) energy and the total energy demand in the area.
Coherent technology simplifies the scaling of edge data center interconnects
As edge data centers became more common, the issue of how to interconnect them became more prominent. Direct detect technology had been the standard for short-reach data center interconnects. However, the distances greater than 50 km and bandwidths over 100 Gbps required by modern edge data center interconnects meant adding external amplifiers and dispersion compensators, which increased the complexity of network operations. At the same time, advances in electronic and photonic integration allowed longer-reach coherent technology to be miniaturized into QSFP-DD and OSFP form factors. This progress allowed the Optical Internetworking Forum (OIF) to create the 400ZR and ZR+ standards for 400G DWDM pluggable modules. With modules small enough to pack a router faceplate densely, the datacom sector could profit from a 400ZR solution for high-capacity data center interconnects of up to 80 km. If needed, extended-reach 400ZR+ pluggables can cover several hundred kilometers. Cignal AI forecasts that 400ZR shipments will dominate edge applications, as shown in Figure 3.
Further improvements in integration can further boost the reach and efficiency of coherent transceivers. For example, by integrating all photonic functions on a single chip, including lasers and optical amplifiers, EFFECT Photonics’ optical System-On-Chip (SoC) technology can achieve higher transmit power levels and longer distances while keeping the smaller QSFP-DD form factor, power consumption, and cost.
Maximizing Edge Computing with Automation
With the rise of edge data centers, telecom providers must manage hundreds or thousands of new nodes that will be hard to control and maintain. Furthermore, providers also need a flexible network with pay-as-you-go scalability that can handle future capacity needs. Fortunately, several new technologies are enabling this scalable and automated network management.
First of all, the rise of self-tuning algorithms has made the installation of new pluggables easier than ever. They eliminate additional installation tasks such as manual tuning and record verification. They are host-agnostic, can plug into any third-party host equipment, and scale as you grow. Standardization also allows modules from different vendors to communicate with each other, avoiding compatibility issues and simplifying upgrade choices. The communication channels used for self-tuning algorithms can also be used for remote diagnostics and management, such as the case of EFFECT Photonics NarroWave technology.
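To illustrate the general idea behind self-tuning (the actual algorithms, including EFFECT Photonics' NarroWave, are vendor-specific and proprietary), here is a hypothetical Python sketch of a module sweeping the 50 GHz ITU grid until the far end responds, so no manual tuning is needed at installation:

```python
# Hypothetical sketch of a self-tuning handshake; a concept illustration
# only, not a real product algorithm.
C_BAND_START_THZ = 191.30
CHANNEL_SPACING_THZ = 0.05  # 50 GHz DWDM grid
NUM_CHANNELS = 96

def itu_channel_freq(ch):
    """Center frequency (THz) of channel index `ch` on the 50 GHz grid."""
    return round(C_BAND_START_THZ + ch * CHANNEL_SPACING_THZ, 2)

def self_tune(link_responds):
    """Sweep the grid and lock to the first channel the far end answers on.

    `link_responds` stands in for the real physical-layer handshake."""
    for ch in range(NUM_CHANNELS):
        freq = itu_channel_freq(ch)
        if link_responds(freq):
            return ch, freq  # lock the laser here
    raise RuntimeError("no matching channel found")

# Example: the mux port in front of the module only passes 193.10 THz,
# so the sweep locks onto that channel.
ch, freq = self_tune(lambda f: f == 193.10)
print(f"locked on channel {ch} at {freq} THz")
```

The same in-band communication channel used for this handshake can then carry remote diagnostics and management traffic, which is the role NarroWave plays in EFFECT Photonics modules.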
Automation potential improves further by combining artificial intelligence with the software-defined networks (SDNs) framework that virtualizes and centralizes network functions. This creates an automated and centralized management layer that can allocate resources efficiently and dynamically. For example, AI in network management will become a significant factor in reducing the energy consumption of future telecom networks.
Future smart transceivers with reconfigurable digital signal processors (DSPs) can give the AI-controlled management layer even more degrees of freedom to optimize the network. These smart transceivers will relay more device information for diagnosis, and depending on the management layer instructions, they can change their coding schemes to adapt to different network requirements.
Cloud-native applications require edge data centers with lower latency, and that better fit the existing power grid. However, their implementation came with the challenges of more data center interconnects and a massive increase in nodes to manage. Fortunately, coherent pluggables with self-tuning can play a vital role in addressing these datacom and telecom sector challenges and enabling a new generation of distributed data center architectures. Combining these pluggables with modern network orchestration and automation software will boost the deployment of edge data centers. EFFECT Photonics believes that with these automation technologies (self-tuning, SDNs, AI), we can reach the goal of a self-managed, zero-touch automated network that can handle the massive scale-up required for 5G networks and edge computing.
The world is moving towards tunability. The combination of tunable lasers and dense wavelength division multiplexing (DWDM) allows datacom and telecom industries to expand their network capacity without increasing their existing fiber infrastructure. Furthermore, the miniaturization of coherent technology into pluggable transceiver modules has finally enabled the widespread implementation of IP over DWDM solutions.
Self-tuning algorithms have also made DWDM solutions more widespread by simplifying their installation and maintenance. Hence, many application cases—metro transport, data center interconnects, and even future access networks—are moving towards coherent tunable pluggables. The market for coherent tunable transceivers will explode in the coming years, with LightCounting estimating that annual sales will double by 2026. Telecom carriers and especially data center providers will drive the market demand, upgrading their optical networks with 400G, 600G, and 800G pluggable transceiver modules that will become the new industry standards.
Same Laser Performance, Smaller Package
As the industry moves towards packing more and more transceivers on a single router faceplate, tunable lasers need to maintain performance and power while moving to smaller footprints and lower power consumption and cost. Due to the faceplate density requirements for data center applications, transceiver power consumption is arguably the most critical factor in this use case.
In fact, power consumption is the main obstacle preventing pluggables from becoming a viable solution for a future upgrade to Terabit speeds. Since lasers are the second biggest power consumers in the transceiver module, laser manufacturers faced a paradoxical task. They must manufacture laser units that are small and energy-efficient enough to fit QSFP-DD and OSFP pluggable form factors while maintaining the laser performance. Fortunately, these ambitious spec targets became possible thanks to improved photonic integration technology.
The original 2011 ITLA standard from the Optical Internetworking Forum (OIF) was 74mm long by 30.5mm wide. By 2015, most tunable lasers shipped in a micro-ITLA form factor that cut the original ITLA footprint in half. In 2021, the nano-ITLA form factor designed for QSFP-DD and OSFP modules has once again cut the micro-ITLA footprint almost in half. The QSFP-DD modules that house the full transceiver are smaller (78mm by 20mm) than the original ITLA form factor. Stunningly, tunable laser manufacturers achieved this size reduction without impacting laser purity and power.
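The footprint reduction is easy to verify from the dimensions above; a quick sanity check in Python:

```python
# Quick footprint arithmetic for the laser form factors mentioned above.
itla_mm2 = 74 * 30.5                 # original 2011 OIF ITLA: 74 x 30.5 mm
micro_itla_mm2 = itla_mm2 / 2        # micro-ITLA: roughly half the footprint
nano_itla_mm2 = micro_itla_mm2 / 2   # nano-ITLA: roughly half again
qsfp_dd_mm2 = 78 * 20                # the entire QSFP-DD transceiver module

print(f"ITLA: {itla_mm2:.0f} mm^2, nano-ITLA: ~{nano_itla_mm2:.0f} mm^2")
print(f"QSFP-DD module: {qsfp_dd_mm2} mm^2")
# The whole QSFP-DD module occupies less area than the original laser alone.
assert qsfp_dd_mm2 < itla_mm2
```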
Versatile Laser Developers for Different Use Cases
The different telecom and datacom applications will have different requirements for their tunable lasers. Premium coherent systems used for submarine and ultra-long-haul require best-in-class lasers with the highest power output and purity. On the other hand, metro transport and data center interconnect applications do not need the highest possible laser quality, but they need small devices with lower power consumption to fit router faceplates. Meanwhile, the access network space looks for lower-cost components that are also temperature hardened.
These varied use cases give laser developers ample opportunities and market niches for fit-for-purpose solutions. For example, a laser module can be set to run at a higher voltage to provide higher output power and reach for premium long-haul applications. On the other hand, tuning the laser to a lower voltage enables more energy-efficient operation that can serve more lenient, shorter-reach use cases (links < 250 km), such as data center interconnects.
An Independent Player in Times of Consolidation
With the increasing demand for coherent transceivers, many companies have performed acquisitions and mergers that allow them to develop transceiver components internally and thus secure their supply. LightCounting forecasts show that while this consolidation will decrease the sales of modulator and receiver components, the demand for tunable lasers will continue to grow. The forecast expects the tunable laser market for transceivers to reach a size of $400M in 2026.
We can dive deeper into the data to find the forces that drive the steady growth of the laser market. As shown in Figure 4, the next five years will likely see explosive growth in the demand for high-purity, high-power lasers. The forecast predicts that the shipments of such laser units will increase from roughly half a million in 2022 to 1.4 million in 2026 due to the growth of 400G and 800G transceiver upgrades. However, the industry consolidation will make it harder for component and equipment manufacturers to source lasers from independent vendors for their transceivers.
This data indicates that the market needs more independent vendors to provide high-performance ITLA components that adapt to different datacom or telecom provider needs. Following these trends, at EFFECT Photonics, we are not only developing the capabilities to provide a complete coherent transceiver solution but also the nano-ITLA units needed by other vendors.
The world is moving towards tunability. As telecom and datacom industries seek to expand their network capacity without increasing their fiber infrastructure, the sales of tunable transceivers will explode in the coming years. These transceivers need tunable lasers with smaller sizes and lower power consumption than ever. Fortunately, the advances in photonic integration are managing to fulfill these laser requirements, leading to the new nano-ITLA module standards. However, even though component and equipment vendors need these tunable lasers for their next-gen transceivers, the industry consolidation can affect their supply. This situation presents an opportunity for new independent vendors to supply nano-ITLA units to this growing market.
Co-Designing the Optimal DSP
Coherent DSPs are already application-specific integrated circuits (ASICs), but they could fit their respective optical engines and use cases even more tightly. Transceiver developers often source their DSP, laser, and optical engine from different suppliers, so all these chips are designed separately from each other. This setup reduces the time to market and simplifies the research and design processes, but it comes with trade-offs in performance and power consumption.
In such cases, the DSP is like a Swiss army knife: a jack of all trades designed for many kinds of PICs but a master of none. For example, many 400ZR+ transceivers used for telecom metro and long-haul applications use the same DSPs as 400ZR transceivers used for much shorter data center interconnects. Given the ever-increasing demand for capacity and the need for sustainability as both a financial and social responsibility, transceiver developers increasingly need a steak knife rather than a Swiss army knife.
Co-designing the DSP chip alongside the photonic integrated circuit (PIC) can lead to a much better fit between these components. A co-design approach helps identify in greater detail the trade-offs between various parameters in the DSP and PIC and thus improves system-level performance optimization. A DSP optimized for a specific optical engine and application could save up to a couple of watts of power compared to the usual transceiver and DSP designs.
Co-designing DSP Interfaces for Power Efficiency
Since the optical engine and DSP operate with signals of differing intensities, they need analog electronic components to “talk” to each other. On the transmit side, the electronic driver block takes signals from the DSP, converts them to a higher voltage, and drives the optical engine. On the receive side, a trans-impedance amplifier (TIA) block boosts the weak signal captured by the optical detector so that the DSP can process it more easily. This signal power conversion overhead constitutes roughly 10-15% of transceiver power consumption, as shown in Figure 1.
Co-designing the DSP and PIC could open up ways to decrease this power conversion overhead. For example, the modulator of the optical engine could be designed to run at a lower voltage that is more compatible with the signal output of the DSP. This way, the DSP could drive the optical engine directly without the analog electronic driver. Such a setup could save roughly two watts of power consumption!
Co-design is also vital to optimize the transceiver layout floorplan. This plan must consider the power dissipation of all transceiver building blocks to avoid hot spots and thermal interference from the DSP to the highly thermally sensitive PIC. The positioning of all bond pads and interfaces is also very important for signal and power integrity, requiring co-design with the package and substrate. During this floorplan development, the RF interconnections between the DSP and PIC can be made as short as possible. These optimized RF interconnects reduce the optical and thermal losses in the transceiver package and lower the power consumption of the analog electronic driver and amplifier.
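A quick back-of-the-envelope calculation puts these figures in context. The 15 W module power budget below is an assumed value for a typical 400G coherent pluggable, not a number quoted in this article:

```python
# Back-of-the-envelope check of the power figures discussed above.
module_power_w = 15.0  # assumed total budget for a 400G pluggable
overhead_low_w = 0.10 * module_power_w   # 10% conversion overhead
overhead_high_w = 0.15 * module_power_w  # 15% conversion overhead
driver_saving_w = 2.0  # direct drive removes the analog driver (~2 W)

print(f"conversion overhead: {overhead_low_w:.2f}-{overhead_high_w:.2f} W")
print(f"direct-drive saving: {driver_saving_w:.1f} W "
      f"({driver_saving_w / module_power_w:.0%} of the module budget)")
```

Under this assumption, the ~2 W driver saving sits inside the quoted 10-15% conversion overhead band, which is why eliminating the driver captures most of that overhead.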
Co-Designing Fit-For-Purpose DSPs and PICs
As shown in Figure 4, a DSP chip contains a sequence of processing blocks that compensate for different transmission impairments in the fiber and then recover, decode, and error-correct the data streams. Different applications might require slightly different layouts of the DSP or might not need some processing blocks. For example, full DSP compensation might be required for long links that span several hundreds of kilometers, but a shorter link might not require all the DSP functions. In these cases, a transceiver could turn off or scale down certain DSP functions—such as chromatic dispersion compensation—to save power. These power-saving features could be particularly useful for shorter data center interconnect (DCI) links. On the optical engine side, the laser might not require high power to transmit over such a short DCI link, so the amplifier functions could shut down. Co-designing the DSP and PIC allows a transceiver developer to mix and match these energy-saving features to achieve the lowest possible power for a specific application.
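The idea of enabling DSP blocks per application could look like the following hypothetical sketch; the block names and distance thresholds are illustrative, not an actual product configuration:

```python
# Hypothetical sketch: choose which DSP/optical-engine blocks to enable
# based on link length, mirroring the idea that short DCI links can
# switch off functions such as chromatic dispersion compensation.
# All thresholds below are illustrative assumptions.
def dsp_profile(link_km):
    return {
        "adc_dac": True,                    # always needed
        "fec": True,                        # always needed
        "cd_compensation": link_km > 40,    # off for very short links
        "full_equalization": link_km > 120, # reduced mode for short reach
        "tx_amplifier": link_km > 80,       # laser alone covers short DCI
    }

short_dci = dsp_profile(10)    # e.g. a 10 km data center interconnect
long_haul = dsp_profile(600)   # e.g. a 600 km metro/long-haul link
print("short DCI:", short_dci)
print("long haul:", long_haul)
```

The power advantage of co-design is that both halves of such a profile, the DSP blocks and the optical-engine functions, can be tuned against each other rather than provisioned independently for the worst case.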
Takeaways
Power consumption has become the big barrier preventing pluggable transceivers from scaling up to 800G and Terabit speeds. Overcoming this barrier requires a tighter fit between the optics and electronics of the transceiver, especially at the interface between the optical engine and the electronic DSP. By co-designing the optical engine and the electronic DSP, transceiver developers could avoid the need for an external electrical driver and reduce transceiver power consumption by 10-15%. A co-design approach can also make it easier to design fit-for-purpose transceivers that implement power-saving features tailored to specific application cases.
The benefits of this co-design approach led EFFECT Photonics to incorporate talent and intellectual property from Viasat’s Coherent DSP team. With this merger, EFFECT Photonics aims to co-design our Optical System-On-Chip with the DSP to develop fit-for-purpose transceivers that are more energy-efficient than ever before.
Future Automated Networks Must Also Work on the Physical Layer
Telecom and datacom providers who want to become market leaders must scale up while also learning to allocate their existing network resources most efficiently and dynamically. SDNs can help achieve this efficient, dynamic network management. In a nutshell, the SDN paradigm separates the switching hardware from the software, allowing operators to virtualize network functions in a single centralized controller unit. This centralized management and orchestration (MANO) layer can implement network functions that the switches do not, allowing network operators to allocate network resources more intelligently and dynamically. This added flexibility and optimization will improve network outcomes for operators.
However, the upcoming 5G networks will consist of a massive number of devices, software applications, and technologies. EFFECT Photonics believes that handling all these new devices and use cases will require self-managed, zero-touch automated networks. Realizing this full network automation requires two additional components alongside SDN and NFV:
- Artificial intelligence and machine learning algorithms for complete network automation: For example, AI in network management will become a significant factor in reducing the energy consumption of future telecom networks.
- Sensor and control data flow across all OSI model layers, including the physical layer: As networks get bigger and more complex, the management and orchestration (MANO) software needs more degrees of freedom and knobs to adjust. Next-generation MANO software needs to adjust and optimize both the physical and network layers to fit the network best.
The Importance of Standardized Error Correction
Forward error correction (FEC) implemented by DSPs has become a vital component of coherent communication systems. FEC makes a coherent link much more tolerant to noise than a direct detect system and enables much longer reach and higher capacity. Thanks to FEC, coherent links can handle bit error rates a million times higher than a typical direct detect link. In other words, FEC algorithms allow the DSP to enhance the link performance without changing the hardware. This enhancement is analogous to imaging cameras: image processing algorithms allow the lenses inside your phone camera to produce a higher-quality image.
When coherent transmission emerged, all FEC algorithms were proprietary. Equipment and component manufacturers closely guarded their FEC because it provided a critical competitive advantage. Therefore, coherent transceivers from different vendors could not operate with each other, and a single vendor had to be used for the entire network deployment. However, with data center providers pushing disaggregation deeper into communication networks, vendors had to adapt. Their coherent transceivers needed to become interoperable, so FEC algorithms needed standardization.
The OIF 400ZR standard for data center interconnects uses a public algorithm called concatenated FEC (CFEC). In contrast, some 400ZR+ MSA standards use open FEC (oFEC), which provides a more extended reach at the cost of slightly more bandwidth and energy consumption. For the longest links (500+ kilometers), proprietary FECs remain necessary for 400G transmission. Still, the public FEC standards have achieved interoperability for a large segment of the 400G transceiver market.
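The principle behind FEC can be shown with a toy example. The repetition code below is vastly simpler than CFEC or oFEC, but it demonstrates the same trade: extra transmitted bits buy tolerance to bit errors:

```python
# Toy FEC illustration: a 3x repetition code corrects any single bit
# flip per triplet. Real coherent FECs (CFEC, oFEC) are far more
# sophisticated block codes, but the principle is identical:
# redundancy lets the receiver recover the data despite channel noise.
def encode(bits):
    """Repeat every data bit three times (200% overhead)."""
    return [b for bit in bits for b in (bit, bit, bit)]

def decode(coded):
    """Majority-vote each triplet back to a single data bit."""
    out = []
    for i in range(0, len(coded), 3):
        triplet = coded[i:i + 3]
        out.append(1 if sum(triplet) >= 2 else 0)
    return out

data = [1, 0, 1, 1]
tx = encode(data)
tx[4] ^= 1  # channel noise flips one bit in the second triplet
assert decode(tx) == data  # the receiver still recovers the data
print("recovered:", decode(tx))
```

Practical FECs achieve far better efficiency than this 200% overhead, which is exactly what distinguishes CFEC, oFEC, and proprietary algorithms from one another: how much reach they buy per unit of overhead and power.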
A Smart DSP to Rule All Network Links
A smart pluggable transceiver that can adapt to all the applications we have mentioned before—data centers, carrier networks, SDNs—requires an equally smart and versatile DSP: one that can be reconfigured via software to adapt to different network conditions and use cases. For example, a smart DSP could switch among different FEC algorithms to match network performance and use cases. Consider upgrading a long metro link of 650 km running at 100 Gbps with open FEC. The operator needs to increase that link capacity to 400 Gbps, but open FEC could struggle to provide the necessary link performance at the higher rate. However, if the DSP can be reconfigured to use a proprietary FEC, the transceiver will be able to handle this upgraded link.
| | 400ZR | Open ZR+ | Proprietary Long Haul |
|---|---|---|---|
| Target application | Edge data center interconnect | Metro, regional data center interconnect | Long-haul carrier |
| Target reach @ 400G | 120 km | 500 km | 1000 km |
| Standards / MSA | OIF | OpenZR+ MSA | Proprietary |
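The FEC-switching logic described above can be sketched as a simple policy function. This is a hypothetical illustration rather than a real DSP API; the thresholds loosely follow the 400ZR/OpenZR+ reach targets and the 650 km upgrade example in this article:

```python
# Hypothetical reconfiguration policy: choose a FEC mode from the
# link's reach and data rate. Not an actual DSP interface; thresholds
# are illustrative assumptions based on the reach targets above.
def select_fec(reach_km, rate_gbps):
    # oFEC carries 100G much further than 400G, which is why the 650 km
    # link in the example works at 100G but not after the 400G upgrade.
    ofec_reach_km = 800 if rate_gbps <= 100 else 500
    if rate_gbps >= 400 and reach_km <= 120:
        return "CFEC"             # 400ZR-style edge DCI
    if reach_km <= ofec_reach_km:
        return "oFEC"             # OpenZR+-style metro/regional
    return "proprietary_fec"      # vendor-specific long haul

print(select_fec(650, 100))   # the original 100 Gbps link runs on oFEC
print(select_fec(650, 400))   # after the 400G upgrade, proprietary FEC
```

In a reconfigurable DSP, the MANO layer would evaluate a policy like this and push the chosen FEC mode down to the transceiver, rather than the mode being fixed at manufacturing time.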