Insights
Hot Topics, Cool Solutions: Thermal Management in Optical Transceivers
In a world of optical access networks, where data speeds soar and connectivity reigns supreme, the thermal management of optical transceivers is a crucial factor that is often under-discussed. As the demand for higher speeds grows, the heat generated by optical devices poses increasing challenges. Without proper thermal management, this excess heat can lead to performance degradation, reduced reliability, and a shorter lifespan, increasing the capital and operating expenditures of optical equipment.
By reducing footprints, co-designing optics and electronics for greater efficiency, and adhering to industry standards, operators can reduce the impact of heat-related issues.
Integration Reduces Heat Losses
The best way to manage heat is to produce less of it in the first place. Optical transceivers consist of various optical and electronic components, including lasers, photodiodes, modulators, electrical drivers and converters, and even digital signal processors. Each of these elements generates heat as a byproduct of their operation. However, photonic and electronic technology advances have enabled greater device integration, resulting in smaller form factors and reduced power consumption.
For example, over the last decade, coherent optical systems have been miniaturized from big, expensive line cards to small pluggables the size of a large USB stick. These compact transceivers with highly integrated optics and electronics have shorter interconnections, fewer losses, and more elements per chip area. Together, these features have steadily reduced power consumption, as shown in the figure below.
Co-design for Energy Efficiency
Co-designing the transceiver’s optics and electronics is a powerful way to achieve optimal thermal management. Designing the DSP chip alongside the photonic integrated circuit (PIC) can lead to a much better fit between these components. A co-design approach helps identify in greater detail the trade-offs between various parameters in the DSP and PIC, improving system-level performance and efficiency.
To illustrate the impact of co-designing PIC and DSP, let’s look at an example. A PIC and a standard platform-agnostic DSP typically operate with signals of differing intensities, so they need some RF analog electronic components to “talk” to each other. This signal power conversion overhead constitutes roughly 2-3 Watts or about 10-15% of transceiver power consumption.
However, the modulator of an InP PIC can run at a lower voltage than a silicon modulator. If this InP PIC and the DSP are designed and optimized together instead of using a standard DSP, the PIC could be designed to run at a voltage compatible with the DSP’s signal output. This way, the optimized DSP could drive the PIC directly without needing the RF analog driver, doing away with most of the power conversion overhead we discussed previously.
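The potential saving can be sketched with a quick back-of-envelope calculation. The numbers below (a 2.5 W RF driver overhead and a 20 W baseline module power) are rough assumptions consistent with the 2-3 W / 10-15% figures above, not measured values:

```python
# Illustrative estimate of the power saved when a co-designed DSP drives
# the InP modulator directly, removing the RF analog driver stage.
# base_w and rf_driver_w are assumed figures, not vendor data.

def transceiver_power(base_w: float, rf_driver_w: float, co_designed: bool) -> float:
    """Total module power; a co-designed DSP/PIC drops the RF driver stage."""
    return base_w if co_designed else base_w + rf_driver_w

standard = transceiver_power(base_w=20.0, rf_driver_w=2.5, co_designed=False)
codesign = transceiver_power(base_w=20.0, rf_driver_w=2.5, co_designed=True)

saving_pct = 100 * (standard - codesign) / standard
print(f"standard: {standard} W, co-designed: {codesign} W, saving: {saving_pct:.1f}%")
```

With these assumed numbers, removing the driver stage saves roughly 11% of the module's power budget, in line with the overhead fraction quoted above.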
Follow Best Practices and Standards
Effective thermal management also means following the industry’s best practices and standards. These standards ensure optical transceivers’ interoperability, reliability, and performance. Two common ratings that will condition the thermal design of optical transceivers are commercial (C-temp) and industrial (I-temp) ratings.
Commercial temperature (C-temp) transceivers are designed to operate from 0°C to 70°C. These transceivers suit the controlled environments of data center and network provider equipment rooms. These rooms have active temperature control, cooling systems, filters for dust and other particulates, airlocks, and humidity control. On the other hand, industrial temperature (I-temp) transceivers are designed to withstand more extreme temperature ranges, typically from -40°C to 85°C. These transceivers are essential for deployments in outdoor environments or locations with harsh operating conditions. It could be at the top of an antenna, on mountain ranges, inside traffic tunnels, or in the harsh winters of Northern Europe.
| Temperature Standard | Min (°C) | Max (°C) |
|---|---|---|
| Commercial (C-temp) | 0 | 70 |
| Extended (E-temp) | -20 | 85 |
| Industrial (I-temp) | -40 | 85 |
| Automotive / Full Military | -40 | 125 |
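The rating-selection logic can be expressed as a small lookup against these ranges. The helper below is a hypothetical illustration, not part of any standard or tool:

```python
# Hypothetical helper: pick the least demanding temperature rating that
# covers a deployment site's expected ambient range, using the standard
# C-temp / E-temp / I-temp / automotive ranges listed above.

RATINGS = [  # (name, min_C, max_C), ordered from least to most demanding
    ("C-temp", 0, 70),
    ("E-temp", -20, 85),
    ("I-temp", -40, 85),
    ("Automotive/Military", -40, 125),
]

def required_rating(site_min_c: float, site_max_c: float) -> str:
    for name, lo, hi in RATINGS:
        if lo <= site_min_c and site_max_c <= hi:
            return name
    raise ValueError("No standard rating covers this range")

print(required_rating(5, 40))    # climate-controlled equipment room
print(required_rating(-30, 60))  # outdoor cabinet in a cold climate
```

A controlled equipment room fits a C-temp part, while the outdoor cabinet example already forces an I-temp design.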
Operators can ensure the transceivers’ longevity and reliability by selecting the appropriate temperature rating based on the deployment environment and application. On the component manufacturer’s side, the temperature rating has a significant impact on the transceiver’s design and testing. For example, making an I-temp transceiver means that every internal component—the integrated circuits, lasers, photodetectors—must also be I-temp compliant.
Takeaways
Operators can overcome heat-related challenges and ensure optimal performance by reducing heat generation through device integration, co-designing optics and electronics, and adhering to industry standards. By addressing these thermal management issues, network operators can maintain efficient and reliable connectivity and contribute to the seamless functioning of optical networks in the digital age.
Manufacturing a Coherent Transceiver
Coherent transmission has become a fundamental component of optical networks to address situations where direct detect technology cannot provide the required capacity and reach.
While direct detect transmission only uses the amplitude of the light signal, coherent optical transmission manipulates three different light properties: amplitude, phase, and polarization. These additional degrees of modulation allow for faster optical signals without compromising on transmission distance. Furthermore, coherent technology enables capacity upgrades without replacing the expensive physical fiber infrastructure in the ground.
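The capacity gain from these extra degrees of modulation can be made concrete with a bits-per-symbol comparison. The formats below (PAM4 for direct detect, dual-polarization 16QAM for coherent) are common illustrative examples, not a claim about any specific product:

```python
# Bits per symbol: direct detect PAM4 encodes amplitude only
# (log2(4) = 2 bits/symbol), while coherent DP-16QAM encodes amplitude
# and phase on each of two polarizations (2 * log2(16) = 8 bits/symbol).

import math

def bits_per_symbol(constellation_points: int, polarizations: int = 1) -> int:
    return polarizations * int(math.log2(constellation_points))

pam4 = bits_per_symbol(4)                        # direct detect PAM4
dp_16qam = bits_per_symbol(16, polarizations=2)  # coherent DP-16QAM
print(f"PAM4: {pam4} bits/symbol, DP-16QAM: {dp_16qam} bits/symbol")
```

At the same symbol rate, the coherent format in this example carries four times the data of the direct detect one.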
Given the importance of coherent transmission, we will explain some key aspects of manufacturing and testing these devices in this article. In the previous article, we described critical aspects of the transceiver design process.
Into the Global Supply Chain
Almost every modern product results from global supply chains, with components and parts manufactured in facilities worldwide. A pluggable coherent transceiver is not an exception. Some transceiver developers will have fabrication facilities in-house, while others (like EFFECT Photonics) are fabless and outsource their manufacturing. We have discussed the pros and cons of these approaches in a previous article.
Writing a faithful, nuanced summary of all the manufacturing processes involved is beyond the scope of this article. Still, we will mention in very broad terms some of the key processes going on.
- Commercial Off The Shelf (COTS) Procurement: Many components in the transceiver are designed in-house and custom-ordered and manufactured, but other components are sourced off-the-shelf from various suppliers and manufacturers. This includes devices such as RF drivers, amplifiers, or even optical sub-assemblies.
- Integrated Circuit Fabrication: The electronic digital signal processor, optical engine, and laser chips are manufactured through semiconductor foundry processes. In the case of EFFECT Photonics, the laser and optical engine can be fabricated on the same chip. You can read some of our previous articles to learn more about what goes into the DSP and PIC manufacturing processes.
- Manufacturing Sub-Assemblies: When the chips have been manufactured and tested, the manufacturing of the different transceiver sub-assemblies (chiefly the transmitter and receiver) can proceed. Again, vertically-integrated transceiver developers can manufacture these in-house, but most transceiver makers outsource this, especially if they want large-scale production. This includes manufacturing printed circuit boards (PCB) that integrate and interconnect the electronic and optical components. Careful alignment and bonding of optical components, such as lasers and photodetectors, are critical to achieve optimum performance.
- Transceiver Housing: The transceiver subassemblies will be housed in a metal casing, usually made from an aluminum alloy. The design and manufacturing of these housings must consider power distribution and thermal management.
The collaboration in this global supply chain ensures the availability of specialized expertise and resources, leading to efficient and cost-effective production.
Testing the Transceiver
The avid reader may have noticed that we did not mention one of the most critical aspects of manufacturing in the previous section: testing. After all, what you do not test, you cannot manufacture reliably.
Testing and quality assurance must occur throughout the manufacturing process to verify the transceiver’s performance and compliance with industry standards. Semiconductor chips and PCBs must be tested before they are placed in sub-assemblies. The completed sub-assemblies must then be tested for optical and electrical performance. Once the transceiver module is completed, it must undergo several reliability and compatibility tests. Let’s discuss some of these testing processes.
- Chip Testing: Testing should happen not only on the transceiver sub-assemblies or the final package but also after chip fabrication, such as after wafer processing or after cutting the wafer into smaller dies. The earlier faults are found in the testing process, the less material and energy are wasted processing defective chips.
- Calibration and Performance Testing: This involves assessing and calibrating the key performance parameters of the transceiver: output power, extinction ratio, bit error rate, receiver sensitivity, peak wavelength and spectrum, and a few others. Various modulation formats and data rates should be tested to ensure reliable performance under different operating conditions, and performance at different temperatures is also measured (more on that later). These tests will determine whether the device complies with industry standards.
- Environmental and Reliability Testing: The transceiver should undergo environmental and reliability testing to assess its performance under different operating conditions. These tests ensure that the transceiver can withstand real-world deployment scenarios and maintain reliable operation over its intended lifespan. As shown in Table 1, this includes temperature cycling, humidity testing, vibration testing, and accelerated aging tests.
**Table 1: Mechanical Reliability & Temperature Testing**

| Shock & Vibration | High / Low Storage Temp | Temp Cycle |
|---|---|---|
| Damp Heat | Cycle Moisture Resistance | Hot Pluggable |
| Mating Durability | Accelerated Aging | Life Expectancy Calculation |
- Compatibility Testing: The modules are inserted into switches of various third-party brands to test their interoperability. This is particularly important for a device that wants to be certified by a specific Multi-Source Agreement (MSA) group. This certification adds credibility and ensures that the transceiver can seamlessly integrate into various network environments.
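The calibration and performance checks described above boil down to comparing measured parameters against spec limits. The sketch below illustrates that pass/fail logic; the parameter names and limit values are hypothetical placeholders, not figures from any actual standard:

```python
# Sketch of a pass/fail check used in calibration and performance
# testing. Each parameter has optional lower/upper spec limits;
# all limits here are illustrative, not from a real standard.

SPEC_LIMITS = {
    # parameter: (min, max); None means no limit on that side
    "tx_output_power_dbm": (-10.0, 1.0),
    "extinction_ratio_db": (8.0, None),
    "rx_sensitivity_dbm":  (None, -20.0),
    "pre_fec_ber":         (None, 1.2e-2),
}

def check_device(measurements: dict) -> list:
    """Return the list of parameters that violate their spec limits."""
    failures = []
    for param, (lo, hi) in SPEC_LIMITS.items():
        value = measurements[param]
        if (lo is not None and value < lo) or (hi is not None and value > hi):
            failures.append(param)
    return failures

measured = {
    "tx_output_power_dbm": -2.3,
    "extinction_ratio_db": 9.1,
    "rx_sensitivity_dbm": -23.5,
    "pre_fec_ber": 3.4e-3,
}
print("FAIL" if check_device(measured) else "PASS", check_device(measured))
```

In production, such checks run at multiple temperatures and modulation formats, and any failure feeds back into calibration or rework.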
Takeaways
Manufacturing pluggable coherent transceivers involves a global supply chain, enabling access to specialized expertise and resources for efficient production. Some critical processes in this manufacturing chain include procuring materials and off-the-shelf components, fabricating the integrated circuits, and manufacturing the sub-assemblies and the transceiver housing.
Testing and quality assurance are integral to reliable manufacturing. Rigorous testing occurs at various stages, including chip, calibration, performance, environmental, and compatibility testing with third-party brands. This ensures that the transceivers meet industry standards and perform optimally under various operating conditions.
Through meticulous manufacturing and rigorous testing processes, coherent transceivers remain at the forefront of advancing global connectivity.
Designing a Coherent Transceiver
Coherent transmission has become a fundamental component of optical networks to address situations where direct detect technology cannot provide the required capacity and reach.
While direct detect transmission only uses the amplitude of the light signal, coherent optical transmission manipulates three different light properties: amplitude, phase, and polarization. These additional degrees of modulation allow for faster optical signals without compromising on transmission distance. Furthermore, coherent technology enables capacity upgrades without replacing the expensive physical fiber infrastructure in the ground.
Given its importance, in this article, we want to explain some key aspects of the design processes going on in transceivers.
Key Components of An Optical Transceiver
An optical transceiver has both electrical and optical subsystems, each with its specific components. Each component plays a crucial role in enabling high-speed, long-haul data transmission. Here are the primary components of a coherent optical transceiver:
- Laser Source: The laser source generates the coherent light used for transmission. It determines many of the characteristics of the transmitted optical signal, from its wavelength and power to its noise tolerance. Read our previous article to know more about what goes inside laser design.
- Optical engine: The optical engine includes the modulator and receiver components as well as passive optical components required to guide, combine or split optical signals. The modulator encodes data onto the optical signal and the receiver converts the optical signal into an electrical signal. Depending on the material used (indium phosphide or silicon), the optical engine chip might also include the tunable laser.
- Digital/Analog Converters (DAC/ADC): The Digital-to-Analog Converter (DAC) turns digital signals from the digital processing circuitry into analog signals for modulation. The Analog-to-Digital Converter (ADC) does the reverse process.
- Digital Signal Processor (DSP): The DSP performs key signal processing functions, including dispersion compensation, polarization demultiplexing, carrier phase recovery, equalization, and error correction. Read this article to know more about what goes inside a DSP.
- Forward Error Correction (FEC) Encoder/Decoder: FEC is crucial for enhancing the reliability of data transmission by adding redundant bits that allow a receiver to check for errors without asking for retransmission of the data.
- Control and Monitoring Interfaces: Transceivers feature control and monitoring interfaces for managing and optimizing their operation.
- Power Management and Cooling Systems: These include heatsinks and thermoelectric coolers required to maintain the components within their specified temperature ranges and ensure reliable transceiver operation.
Key Design Processes in a Transceiver
Designing a transceiver is a process that should take an initial application concept into a functioning transceiver device that can be manufactured. It’s a complex, layered process that involves designing many individual components and subsystems separately but also predicting and simulating how all these components and subsystems will interact with each other.
Writing a faithful, nuanced summary of this design process is beyond the scope of this article, but we will mention in very broad terms some of the key processes going on.
- Defining Concept and Specifications: We must first define what goes into the transceiver and the expected performance. Transceiver architects will spend time with product management to understand the customer’s requirements and their impact on design choices. Some of these requirements and designs are already standardized, some of them (like EFFECT Photonics’ optical engine) are proprietary and will require deeper thinking in-house. After these conversations, the transceiver concept becomes a concrete set of specifications that are passed on to the different teams (some in-house, others from company partners).
- Optical Subsystem Design: The optical subsystem in the transceiver generates, manipulates, and receives the light signal. Optical designers develop a schematic circuit diagram that captures the function of the optical subsystem, which includes lasers, modulators, or light detectors. The designers will simulate the optical system to make sure it works, and then translate this functional design into an actual optical chip layout that can be manufactured at a semiconductor foundry.
- Electronic Subsystem Design: In parallel with the optical subsystem, the electronic subsystem is also being designed. The heart of the electronic subsystem is the DSP chip. The DSP design team also comes up with a functional model of the DSP and must simulate it and translate it into a layout that can be manufactured by a semiconductor foundry. However, there’s a lot more to the electronic system than just the DSP: there are analog-to-digital and digital-to-analog converters, amplifiers, drivers, and other electronic components required for signal conditioning. All of these components can be acquired from another vendor or designed in-house depending on the requirements and needs.
- Mechanical and Thermal Design: The mechanical and thermal design of the pluggable transceiver is essential to ensure its compatibility with industry-standard form factors and enable reliable operation. Mechanical considerations include connector types, physical dimensions, and mounting mechanisms. The thermal design focuses on heat dissipation and ensures the transceiver operates within acceptable temperature limits.
The Importance of a Co-Design Philosophy
Transceiver developers often source their DSP, laser, and optical engine from different suppliers, so all these chips are designed separately. This setup reduces the time to market and simplifies the research and design processes but comes with trade-offs in performance and power consumption.
A co-design approach that features strong interactions between the different teams that design these systems can lead to a much better fit and efficiency gains. You can learn more about the potential advantages of co-designing optical and electronic subsystems in this article.
Takeaways
In summary, designing a coherent optical pluggable transceiver involves carefully considering and balancing many different systems, standards, and requirements, from optical and electrical subsystems to mechanical and thermal design and component procurement. These design processes ensure the development of a reliable, high-performance optical transceiver that meets industry standards and fulfills the specific requirements of the target application.
Towards a Zero Touch Coherent Network
Telecommunication service providers face a critical challenge: how to incorporate affordable and compact coherent pluggables into their networks while ensuring optimal performance and coverage across most network links.
Automation will be pivotal in achieving affordable and sustainable networks. Software defined networks (SDNs) facilitate network function virtualization (NFV), empowering operators to implement various functions for network management and orchestration. By incorporating an artificial intelligence (AI) layer for management and orchestration with the SDN/NFV framework, operators can unlock even greater benefits, as depicted in the diagram below.
Nevertheless, achieving a fully automated network requires interfacing with the physical layer of the network. This requires intelligent, coherent pluggables capable of adapting to diverse network requirements.
Zero Touch Networks and the Physical Layer
Telecom and datacom providers aiming to achieve market leadership must scale their operations while efficiently and dynamically allocating existing network resources. SDNs offer a pathway to accomplish this by decoupling switching hardware from software, thereby enabling the virtualization of network functions through a centralized controller unit. This centralized management and orchestration (MANO) layer can implement network functions that switches alone cannot handle, enabling intelligent and dynamic allocation of network resources. This enhanced flexibility and optimization yield improved network outcomes for operators.
However, the forthcoming 5G networks will introduce a multitude of devices, software applications, and technologies. Managing these new devices and use cases necessitates self-managed, touchless automated networks. Realizing the full potential of network automation requires the flow of sensor and control data across all OSI model layers, including the physical layer.
As networks grow larger and more complex, MANO software necessitates greater degrees of freedom and adjustability. Next-generation MANO software must optimize both the physical and network layers to achieve the best network fit. Attaining this objective demands intelligent optical equipment and components that can be diagnosed and managed remotely from the MANO layer. This is where smart pluggable transceivers with reconfigurable DSPs come into play.
The Role of Forward Error Correction
Forward error correction (FEC) implemented by DSPs serves as a crucial component in coherent communication systems. FEC enhances the tolerance of coherent links to noise, enabling longer reach and higher capacity. Thanks to FEC, coherent links can handle bit error rates that are a million times higher than those of typical direct detect links. In simpler terms, FEC algorithms allow the DSP to enhance link performance without necessitating hardware changes. This enhancement can be compared to image processing algorithms improving the quality of images produced by phone cameras.
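The principle behind FEC can be demonstrated with a toy code. A 3x repetition code with majority voting is far weaker than the CFEC/oFEC codes in real coherent DSPs, but it shows how redundant bits let the receiver fix errors without retransmission (the channel model and error rate below are illustrative assumptions):

```python
# Toy FEC demo: a noisy channel flips bits with probability RAW_BER.
# Sending each bit three times and taking a majority vote corrects
# any single flip, pushing the post-correction error rate well below
# the raw channel error rate.

import random

random.seed(1)
RAW_BER = 0.02  # assumed raw channel bit error rate

def send(bit: int) -> int:
    """Transmit one bit over the noisy channel."""
    return bit ^ (random.random() < RAW_BER)

def send_repetition(bit: int) -> int:
    """Transmit with 3x repetition and decode by majority vote."""
    votes = [send(bit) for _ in range(3)]
    return 1 if sum(votes) >= 2 else 0

n = 100_000
bits = [random.randint(0, 1) for _ in range(n)]
uncoded_errors = sum(send(b) != b for b in bits)
coded_errors = sum(send_repetition(b) != b for b in bits)
print(f"uncoded BER ~ {uncoded_errors / n:.4f}, coded BER ~ {coded_errors / n:.5f}")
```

Real coherent FECs achieve far larger gains with much less overhead than this 3x repetition, which is why coherent links tolerate pre-FEC error rates that would be unusable for direct detect.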
When coherent transmission technology emerged, all FEC algorithms were proprietary, guarded closely by equipment and component manufacturers due to their competitive advantage. Consequently, coherent transceivers from different vendors were incompatible, and network deployment required reliance on a single vendor.
However, as data center providers pushed for deeper disaggregation in communication networks, the need for interoperability in coherent transceivers became evident, leading to the standardization of FEC algorithms. The OIF 400ZR standard for data center interconnects adopted a public algorithm called concatenated FEC (CFEC). In contrast, some 400ZR+ MSA standards employ open FEC (oFEC), which provides greater reach at the expense of additional bandwidth and energy consumption. For the longest link lengths (500+ kilometers), proprietary FECs are still necessary for 400G transmission. Nevertheless, public FEC standards have achieved interoperability for a significant portion of the 400G transceiver market.
The Promise of the Smart Transceiver
The realization of a smart coherent pluggable capable of addressing various applications—data centers, carrier networks, SDNs—relies on an equally intelligent and adaptable DSP. The DSP must be reconfigurable through software to adapt to diverse network conditions and use cases.
For instance, a smart DSP could switch between different FEC algorithms to match network performance and use case requirements. Consider the scenario of upgrading a 650-kilometer-long metro link running at 100 Gbps with open FEC to achieve a capacity of 400 Gbps. Open FEC might struggle to deliver the required link performance. However, if the DSP can be reconfigured to employ a proprietary FEC standard, the transceiver would be capable of handling this upgraded link.
| | 400ZR | Open ZR+ | Proprietary Long Haul |
|---|---|---|---|
| Target Application | Edge data center interconnect | Metro, regional data center interconnect | Long-haul carrier |
| Target Reach @ 400G | 120 km | 500 km | 1000 km |
| Form Factor | QSFP-DD/OSFP | QSFP-DD/OSFP | QSFP-DD/OSFP |
| FEC | CFEC | oFEC | Proprietary |
| Standards / MSA | OIF | OpenZR+ MSA | Proprietary |
Reconfigurable DSPs also prove beneficial in auto-configuring links to address specific network conditions, particularly in brownfield links. For example, if the link possesses high-quality fiber, the DSP could be reconfigured to transmit at a higher baud rate. Conversely, if the fiber quality is poor, the DSP could scale down the baud rate to mitigate bit errors. Furthermore, if the smart pluggable detects a relatively short fiber length, it could reduce laser transmitter power or DSP power consumption to conserve energy.
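The kind of auto-configuration a reconfigurable DSP could apply can be sketched as simple decision logic. The mode names follow the reaches in the table above, but the thresholds and power back-off are illustrative assumptions, not actual vendor behavior:

```python
# Hypothetical auto-configuration logic for a reconfigurable DSP:
# choose a FEC/profile from the link length, and back off laser power
# on short links to save energy. All thresholds are illustrative.

def select_mode(link_km: float) -> dict:
    if link_km <= 120:
        return {"fec": "CFEC", "profile": "400ZR"}
    if link_km <= 500:
        return {"fec": "oFEC", "profile": "OpenZR+"}
    return {"fec": "proprietary", "profile": "long-haul"}

def tune_tx_power(link_km: float, base_tx_dbm: float = 0.0) -> float:
    """Reduce transmitter power on short links (assumed 2 dB back-off)."""
    return base_tx_dbm - 2.0 if link_km < 40 else base_tx_dbm

print(select_mode(80))     # edge DCI link
print(select_mode(650))    # long metro link needing proprietary FEC
print(tune_tx_power(25))   # short link: power backed off
```

The 650 km case matches the upgrade scenario discussed earlier: only a switch to a proprietary FEC lets the pluggable serve that link.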
Takeaways
Smart coherent access pluggables would greatly simplify network upgrades. A DSP to match these pluggables should support different error correction levels to handle different reaches and future zero-touch network requirements.
The DSP can not only introduce software corrections but also make optical hardware adjustments (output power, amplifier control) to adapt to different noise scenarios. Through these adaptations, the next generation of pluggable transceivers will proficiently handle the telecom carrier and data center use cases presented to them.
Where is 100ZR Needed?
Simply relying on traditional direct detect technologies will not meet the growing bandwidth and service requirements of mobile, cable, and business access networks, particularly regarding long-distance transmission. In many instances, deploying 100G coherent dense wavelength division multiplexing (DWDM) technology becomes essential to transmit larger volumes of data over extended distances.
Several applications in the optical network edge could benefit from upgrading from 10G DWDM or 100G grey aggregation uplinks to 100G DWDM optics:
- Mobile Mid-haul: Seamless upgrade of existing uplinks from 10G to 100G DWDM.
- Mobile Backhaul: Upgrading links to 100G IPoDWDM.
- Cable Access: Upgrading uplinks of termination devices like optical line terminals (OLTs) and Converged Cable Access Platforms (CCAPs) from 10G to 100G DWDM.
- Business Services: Scaling enterprise bandwidth beyond single-channel 100G grey links.
However, network providers have often been reluctant to abandon their 10G DWDM or 100G grey links because existing 100G DWDM solutions did not fulfill all the requirements. Although “scaled-down” coherent 400ZR solutions offered the desired reach and tunability, they proved too expensive and power-intensive for many access network applications. Moreover, the ports in small to medium IP routers used in most edge deployments do not support the commonly used QSFP-DD form factor of 400ZR modules but rather the QSFP28 form factor.
How Coherent 100ZR Can Move into Mobile X-haul
The transition from 4G to 5G has transformed the radio access network (RAN) structure, evolving it from a two-level system (backhaul and fronthaul) in 4G to a three-level system (backhaul, midhaul, and fronthaul) in 5G:
- Fronthaul: The segment between the active antenna unit (AAU) and the distributed unit (DU).
- Midhaul: The segment from DU to the centralized unit (CU).
- Backhaul: The segment from CU to the core network.
Most developed countries have already initiated the rollout of 5G, with many operators upgrading their 1G SFP transceivers to 10G SFP+ devices. Some of these 10G solutions incorporated DWDM technology, but many were single-channel grey transceivers. However, to advance to the next phase of 5G deployments, mobile networks must install and aggregate a greater number of smaller base stations to accommodate the exponential increase in connected devices.
These advanced stages of 5G deployment will necessitate operators to cost-effectively scale fiber capacity using more prevalent 10G DWDM SFP+ solutions and 25G SFP28 transceivers. This upgrade will pressure the aggregation segments of mobile backhaul and midhaul, which typically rely on link aggregation of multiple 10G DWDM links into a higher bandwidth group (e.g., 4x10G).
However, this type of link aggregation involves splitting larger traffic streams and can be intricate to integrate within an access ring. Adopting a single 100G uplink removes the need for such link aggregation, simplifying network configuration and operations. For further insight into the potential market and reach of this link aggregation upgrade, consult the recent Cignal AI report on 100ZR technologies.
Coherent 100ZR Uplinks Driven by Cable Migration to 10G PON
Cignal AI’s 100ZR report also states that the primary catalyst for 100ZR adoption will be the multiplexing of fixed access network links transitioning from 1G to 10G. This trend will be evident in the long-awaited shift of cable networks from Gigabit Passive Optical Networks (GPON) to 10G PON, driven by the new DOCSIS 4.0 standard. This standard promises 10Gbps download speeds for customers and necessitates several hardware upgrades in cable networks.
To multiplex these larger 10Gbps customer links, cable providers and network operators must upgrade their optical line terminals (OLTs) and Converged Cable Access Platforms (CCAPs) with 100G DWDM uplinks. Additionally, many of these new optical hubs will support up to 40 or 80 optical distribution networks (ODNs), making the previous approach of aggregating multiple 10G DWDM uplinks insufficient for handling the increased capacity and higher number of channels.
Anticipating these needs, the non-profit R&D organization CableLabs has recently spearheaded the development of a 100G Coherent PON (C-PON) standard. This proposal offers 100 Gbps per wavelength with a maximum reach of 80 km and a split ratio of up to 1:512. CableLabs envisions that C-PON, with its 100G capabilities, will play a significant role not only in cable optical network aggregation but also in other scenarios such as mobile x-haul, fiber-to-the-building (FTTB), long-reach rural areas, and distributed access networks.
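The large split ratios in the C-PON proposal translate directly into optical power budget: an ideal 1:N splitter divides power N ways, costing 10·log10(N) dB. The quick calculation below assumes an extra 1.5 dB of excess loss for a real splitter, which is an illustrative figure rather than a spec value:

```python
# Back-of-envelope splitter loss for PON split ratios. An ideal 1:N
# splitter costs 10*log10(N) dB; real devices add excess loss, assumed
# here as 1.5 dB (illustrative, not a spec value).

import math

def splitter_loss_db(split_ratio: int, excess_db: float = 1.5) -> float:
    return 10 * math.log10(split_ratio) + excess_db

for n in (32, 128, 512):
    print(f"1:{n} split -> {splitter_loss_db(n):.1f} dB")
```

A 1:512 split alone consumes close to 29 dB under these assumptions, which is why coherent detection, with its much better receiver sensitivity, makes such split ratios feasible at 100G.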
Advancements in Business Services with 100ZR Coherent and QSFP28
Nearly every organization utilizes the cloud in some capacity, whether for resource development and testing or software-as-a-service applications. However, leveraging the cloud effectively requires fast, high-bandwidth wide-area connectivity to ensure optimal performance of cloud-based applications.
Like cable networks, enterprises will need to upgrade their existing 1G Ethernet private lines to 10G Ethernet to meet these requirements, consequently driving the demand for 100G coherent uplinks. Cable providers and operators will also seek to capitalize on their upgraded 10G PON networks by expanding the reach and capacity of their business services.
The business and enterprise services sector was an early adopter of 100G coherent uplinks, deploying “scaled-down” 400ZR transceivers in the QSFP-DD form factor when they were the only available solution. However, since QSFP-DD slots also support QSFP28 form factors, the emergence of QSFP28 100ZR solutions presents a more appealing upgrade for these enterprise applications, offering reduced cost and power consumption.
While QSFP28 solutions had struggled to gain widespread acceptance due to the requirement for new, low-power digital signal processors (DSPs), DSP developers and vendors are now actively involved in 100ZR development projects: Acacia, Coherent/ADVA, Marvell/InnoLight, and Marvell/OE Solutions. This is also why EFFECT Photonics has announced its plans to co-develop a 100G DSP with Credo Semiconductor that best fits 100ZR solutions in the QSFP28 form factor.
Takeaways
In the coming years, deploying and applying 100G coherent uplinks will witness increasing prevalence across the network edge. Specific use cases in mobile access networks will require transitioning from existing 10G DWDM link aggregation to a single coherent 100G DWDM uplink.
Simultaneously, the migration of cable networks and business services from 1Gbps to 10Gbps customer links will be the primary driver for the demand for coherent 100G uplinks. For carriers providing converged cable/mobile access, these uplink upgrades will create opportunities to integrate additional business services and mobile traffic into their existing cable networks.
As the ecosystem for QSFP28 100ZR solutions expands, production will scale up, making these solutions more widely accessible and affordable. This, in turn, will unlock new use cases within access networks.
Tags: 100G, 100G ZR, 100ZR, 10G, 10G PON, 5G, 5G deployment, aggregation, backhaul, bandwidth, business service, Business services, Cable access, cable networks, CCAP, Cloud connectivity, coherent, Coherent DWDM, Converged Cable Access Platforms (CCAPs), edge, Ethernet private lines, Fiber capacity, fronthaul, FTTH, IoT, Link aggregation, midhaul, mobile, mobile access network, mobile networks, Network providers, OLT, Optical line terminals (OLTs), PON, power, QSFP-DD, QSFP28, revenue, traffic, upgrade, uplink, Wide-area connectivity

The Evolution to 800G and Beyond
Article first published 26th October 2022, updated 5th July 2023.
The demand for data and other digital services is rising exponentially. From 2010 to 2020, the number of Internet users worldwide doubled, and global internet traffic increased 12-fold. From 2020 to 2026, internet traffic will likely increase 5-fold. To meet this demand, datacom and telecom operators need to constantly upgrade their transport networks.
400 Gbps links are becoming the standard for links all across telecom transport networks and data center interconnects, but providers are already thinking about the next steps. LightCounting forecasts significant growth in shipments of dense-wavelength division multiplexing (DWDM) ports with data rates of 600G, 800G, and beyond in the next five years.
The major obstacles in this roadmap remain the power consumption, thermal management, and affordability of transceivers. Over the last two decades, power ratings for pluggable modules have increased as we moved from direct detection to more power-hungry coherent transmission: from 2W for SFP modules to 3.5W for QSFP modules and now to 14W for QSFP-DD and 21.1W for OSFP form factors. Rockley Photonics researchers estimate that a future electronic switch filled with 800G modules would draw around 1 kW of power just for the optical modules.
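A rough back-of-the-envelope check on that 1 kW figure is to multiply the per-module wattages above by a plausible faceplate port count. The 48-port count below is an assumption for illustration, not a reference to any specific switch:

```python
# Per-module power ratings quoted in the text above.
form_factors_w = {
    "SFP (direct detect)": 2.0,
    "QSFP (direct detect)": 3.5,
    "QSFP-DD (coherent)": 14.0,
    "OSFP (coherent)": 21.1,
}

ports = 48  # assumed 1-RU faceplate port count, for illustration only
for name, watts in form_factors_w.items():
    print(f"{name}: {ports * watts:.0f} W for {ports} ports")

# A faceplate full of coherent OSFP modules already exceeds 1 kW
# for the optics alone, consistent with the estimate cited above.
```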
Thus, many incentives exist to continue improving the performance and power consumption of pluggable optical transceivers. By embracing increased photonic integration, co-designed PICs and DSPs, and multi-laser arrays, pluggables will be better able to scale in data rates while remaining affordable and at low power.
Direct Detect or Coherent for 800G and Beyond?
While coherent technology has become dominant at metro distances (80 km and beyond), the campus (< 10 km) and intra-data center (< 2 km) distances remain in contention between coherent and direct detect technologies such as PAM-4.
These links were originally the domain of direct detect products when the data rates were 100Gbps. However, as we move into Terabit speeds, the power consumption of coherent technology is much closer to that of direct detect PAM-4 solutions.
A major reason for this decreased gap is that direct detect technology will often require additional amplifiers and compensators at these data rates, while coherent pluggables do not. This also makes coherent technology simpler to deploy and maintain. Furthermore, as the volume production of coherent transceivers increases, their price will also become competitive with direct detect solutions.
Increased Integration and Co-Design are Key to Reduce Power Consumption
Lately, we have seen many efforts across the electronics industry to further increase integration at the component level. For example, moving towards greater integration of components in a single chip has yielded significant efficiency benefits in electronic processors. Apple’s M1 processor integrates all electronic functions in a single system-on-chip (SoC) and consumes a third of the power of the discrete-component processors used in previous generations of Apple computers. We can observe this progress in the table below.
Mac Mini Model | Idle Power (Watts) | Max Power (Watts)
---|---|---
2023, M2 | 7 | 50
2020, M1 | 7 | 39
2018, Core i7 | 20 | 122
2014, Core i5 | 6 | 85
2010, Core 2 Duo | 10 | 85
2006, Core Solo or Duo | 23 | 110
2005, PowerPC G4 | 32 | 85
Photonics can achieve greater efficiency gains by following a similar approach to integration. The interconnects required to couple discrete optical components result in electrical and optical losses that must be compensated with higher transmitter power and more energy consumption. In contrast, the more active and passive optical components (lasers, modulators, detectors, etc.) manufacturers can integrate on a single chip, the more energy they can save since they avoid coupling losses between discrete components.
Reducing Complexity with Multi-Laser Arrays
Earlier this year, Intel Labs demonstrated an eight-wavelength laser array fully integrated on a silicon wafer. Milestones like this will provide more cost-effective ways for pluggables to scale to higher data rates.
Let’s say we need a data center interconnect with 1.6 Terabits/s of capacity. There are three ways we could implement it:
- Four modules of 400G: This solution uses existing off-the-shelf modules but has the largest footprint. It requires four slots in the router faceplate and an external multiplexer to merge these into a single 1.6T channel.
- One module of 1.6T: This solution will not require the external multiplexer and occupies just one plug slot on the router faceplate. However, making a single-channel 1.6T device has the highest complexity and cost.
- One module with four internal channels of 400G: A module with an array of four lasers (and thus four different 400G channels) will only require one plug slot on the faceplate while avoiding the complexity and cost of the single-channel 1.6T approach.
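The trade-offs between these three options can be summarized in a small model. The slot counts and channel arithmetic follow the description above; the code itself is only an illustrative sketch:

```python
from dataclasses import dataclass

@dataclass
class Interconnect:
    name: str
    modules: int             # faceplate slots used
    channels_per_module: int
    gbps_per_channel: int
    external_mux: bool       # needs a separate multiplexer?

    @property
    def total_gbps(self) -> int:
        return self.modules * self.channels_per_module * self.gbps_per_channel

options = [
    Interconnect("4 x 400G modules", 4, 1, 400, external_mux=True),
    Interconnect("1 x single-channel 1.6T module", 1, 1, 1600, external_mux=False),
    Interconnect("1 x module with four 400G channels", 1, 4, 400, external_mux=False),
]

for o in options:
    assert o.total_gbps == 1600  # all three reach the same capacity
    print(f"{o.name}: {o.modules} slot(s), external mux: {o.external_mux}")
```

Only the third option combines a single faceplate slot, no external multiplexer, and per-channel speeds that stay at the mature 400G rate.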
Multi-laser array and multi-channel solutions will become increasingly necessary to increase link capacity in coherent systems. They will not need more slots in the router faceplate while simultaneously avoiding the higher cost and complexity of increasing the speed with just a single channel.
Takeaways
The pace of worldwide data demand is relentless, and with it the pace of link upgrades required by datacom and telecom networks. 400G transceivers are currently replacing previous 100G solutions, and in a few years, they will be replaced by transceivers with data rates of 800G or 1.6 Terabits.
The cost and power consumption of coherent technology remain barriers to more widespread capacity upgrades, but the industry is finding ways to overcome them. Tighter photonic integration can minimize the losses of optical systems and their power consumption. Finally, the onset of multi-laser arrays can avoid the higher cost and complexity of increasing capacity with just a single transceiver channel.
Tags: bandwidth, co-designing, coherent, DSP, full integration, integration, interface, line cards, optical engine, power consumption, RF Interconnections, Viasat

Coherent Lite and The Future Inside the Data Center
In the dynamic landscape of data centers, the demand for greater bandwidth and extended reach is rapidly increasing. As shown in the figure below, we can think of three categories of data center interconnects based on their reach:
- Intra-data center interconnects (< 2km)
- Campus data center interconnects (<10km)
- Metro data center interconnects (<100km)
Coherent optical technology has already established itself as the go-to solution for interconnecting data centers over long distances in metro areas.
However, within the confines of data centers themselves, intensity-modulated direct detect (IM-DD) technology remains dominant. Recognizing the limitations of IM-DD in meeting evolving requirements, the industry is exploring “Coherent Lite” solutions—a simplified implementation of coherent technology designed specifically for shorter-reach data center connections.
This article delves into the concept of coherent lite technology and its potential to address the escalating bandwidth demands within data centers.
Reducing Dispersion Compensation
The quality of a light signal degrades as it travels through an optical fiber due to a process called dispersion, the same phenomenon by which a prism splits white light into several colors. The fiber also adds other distortions due to nonlinear optical effects. These effects worsen as the input power of the light signal increases, leading to a trade-off: more power helps transmit over longer distances, but it also increases the nonlinear distortions, which defeats the point of using more power. The DSP performs several operations on the light signal to offset these dispersion and nonlinear distortions.
However, shorter-reach connections require less dispersion compensation, presenting an opportunity to streamline the implementation of coherent solutions. Coherent lite implementations can reduce the use of dispersion compensation blocks. This significantly lowers system power consumption.
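A quick calculation illustrates why. Accumulated chromatic dispersion grows linearly with fiber length; the 17 ps/(nm·km) coefficient below is the typical value for standard single-mode fiber near 1550 nm, assumed here for illustration rather than taken from any coherent lite specification:

```python
D_PS_PER_NM_KM = 17.0  # typical dispersion of standard SMF near 1550 nm (assumed)

for reach_km in (2, 10, 80):
    accumulated = D_PS_PER_NM_KM * reach_km  # ps/nm
    print(f"{reach_km:>3} km link: {accumulated:6.0f} ps/nm accumulated dispersion")

# An 80 km metro link accumulates 40x the dispersion of a 2 km
# intra-data-center link, so the compensation blocks in the DSP
# can be slimmed down accordingly for short reaches.
```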
The Trade-Offs Between Fixed and Tunable Lasers
Coherent lite solutions also aim to replace tunable lasers with fixed lasers to reduce costs. The use of fixed lasers eliminates the need for wavelength tuning and associated control circuitry and algorithms, simplifying the implementation and reducing operational complexities.
While fixed lasers offer significant advantages, tunable lasers will push to remain competitive. As we described in a previous article, advances in tunable laser technology aim to further reduce package footprints and leverage electronic ecosystems to reduce cost. Such developments will allow tunable lasers to keep pace with the demands of coherent lite solutions, ensuring a viable alternative for shorter-reach data center connections.
Scaling Data Center Links
The increasing length and bandwidth of links within data centers are driving the adoption of coherent technology. As bandwidth scales to 1.6 and 3.2 terabits, traditional direct detect technology faces challenges in keeping up with the growing distances. Intra data center links that were previously limited to 2 kilometers are now extending to 5 or even 10 kilometers, demanding more robust and efficient transmission technologies.
In this context, coherent lite technology provides an attractive middle ground for enabling extended-reach connections within data centers. By leveraging some aspects of coherent solutions, coherent lite technologies facilitate the reliable and efficient transport of data over longer distances.
Takeaways
As data centers evolve to accommodate escalating bandwidth demands, coherent lite technology emerges as a promising solution for communication links within these facilities. By reducing dispersion compensation, simplifying their laser setups, and enabling extended-reach transmission, coherent lite solutions address the limitations of traditional direct detect technology. These advancements pave the way for enhanced performance, and seamless scalability within data center environments.
Tags: 1.6T, 3.2T, bandwidth, coherent, Coherent Lite, Coherent Transmissions, Complexity, CWDM, Data center, datacom, dispersion compensation, Fixed Lasers, IM-DD (Intensity-Modulated Direct Detect), Interconnects, Intra DC, optical technology, power, reach, Scaling Data Center Links, Simplification, Single Wavelength

Japan Innovation Mission
Last week, Tim Koene (CTO) and Sophie De Maesschalck (CFO) of EFFECT Photonics, traveled to Japan on a semiconductor innovation mission with several other top Dutch businesses. The mission was jointly organized by the Netherlands Enterprise Agency (RVO), the Innovation Attaché Tokyo and the Dutch Embassy.
As the world’s third-largest economy, Japan has a long and established history in the semiconductor field. The purpose of the mission was to offer an opportunity for exploring and finding potential partners for joint research, development, and commercialization of innovation in this space, with a strong focus on integrated photonics. In addition, the aim was to further build on the strong relationship and develop bilateral agreements and programs between the two governments.
During the innovation mission, the two countries signed a Memorandum of Cooperation on semiconductor policies where both governments will work to facilitate both private and public sector collaboration on semiconductor and related technologies such as photonics.
The high-quality interactions, high turnout at every event during the week, and the media coverage show the importance Japan is placing on the partnership with the Netherlands in the field of Semiconductors. The personal involvement of Minister Nishimura doubly underlines this. It is clear that Integrated Photonics is a key pillar in the broader Semiconductor policy. The support of the Ministry of Economic Affairs and Climate to organize this Innovation Mission is greatly appreciated. We have done more in one week than we could have done in a dozen visits.

Tim Koene, CTO at EFFECT Photonics

Tags: EFFECT Photonics, japan, Semiconductors
Coherent Satellite Networks
The current state of the space industry is characterized by rapid growth, technological advancements, and increasing commercialization. Over the past decade, the space industry has undergone a significant transformation driven by both government and private sector initiatives.
One notable trend is the rise of commercial space companies. Companies like SpaceX, Blue Origin, and Virgin Galactic have made major strides in developing reusable rocket technology, drastically reducing the cost of accessing space. The miniaturization of satellites has also led to an increase in the number of satellites launched. This progress has boosted space applications such as Earth observation, global internet connectivity, and remote sensing.
On the technical side, the main issues in satellite communications include signal latency, limited bandwidth, and vulnerability to weather conditions. Signal latency refers to the delay in transmitting signals over long distances, which can impact real-time applications. Limited bandwidth can result in slower data transfer rates and congestion. Weather conditions like heavy rainfall or storms can cause signal degradation or interruptions.
This article will discuss how satellite networks and coherent optical communications can help address some of these issues.
A New Age of LEO Satellite Constellations
The most important distinction between satellite types is their orbital altitude, or distance from Earth’s surface as they orbit the planet. There are three main categories:
- Low Earth Orbit (LEO). Altitude 500 to 1,200km. LEO is densely populated with thousands of satellites in operation today, primarily addressing science, imaging, and low-bandwidth telecommunications needs.
- Medium Earth Orbit (MEO). Altitude 5,000 to 20,000km. MEO has historically been used for GPS and other navigation applications.
- Geostationary Earth Orbit (GEO). Altitude ≈ 36,000 km. GEO satellites match the rotation of the Earth as they travel, and so remain above the same point on the ground. Hundreds of GEO satellites are in orbit today, traditionally delivering services such as weather data, broadcast TV, and some low-speed data communication.
The telecom industry is particularly interested in using LEO satellites to provide enhanced connectivity. Compared to GEO satellites, they can provide higher speeds and significantly lower latencies. As the cost of launching LEO satellites has decreased, more can be launched to provide redundancy in case of satellite failures or outages. If a single satellite experiences a problem, such as a malfunctioning component or damage from space debris, it can be taken offline and replaced without interrupting the network’s overall operation.
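The latency advantage follows directly from geometry. The sketch below computes the minimum ground-satellite-ground propagation delay from altitude alone (a straight vertical path at the speed of light; real routing, processing, and queuing add more). The altitudes are representative values, not figures for any specific constellation:

```python
C_KM_PER_S = 299_792.458  # speed of light in vacuum

altitudes_km = {
    "LEO (e.g. ~550 km)": 550,
    "MEO (e.g. ~20,000 km)": 20_000,
    "GEO (~35,786 km)": 35_786,
}

for name, alt in altitudes_km.items():
    round_trip_ms = 2 * alt / C_KM_PER_S * 1000  # up and back down
    print(f"{name}: at least {round_trip_ms:.1f} ms ground-satellite-ground")

# LEO's few milliseconds versus GEO's roughly 240 ms floor is why the
# telecom industry favors LEO constellations for latency-sensitive services.
```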
Many companies are developing massive LEO constellations with hundreds or thousands of satellites to provide low latency and global coverage: SpaceX´s Starlink, Telesat’s Lightspeed, Amazon’s Kuiper, or OneWeb. These LEO satellite constellations can provide true universal coverage compared to terrestrial methods of communication. LEO satellites can connect people to high-speed internet where traditional ground infrastructure is hard to reach, making them an attractive solution to close the connectivity gaps across the world.
Coherent Technology is Vital for Future Satellite Links
Currently, most space missions use radio frequency communications to send data to and from spacecraft. While radio waves have a proven track record of success in space missions, generating and collecting more mission data requires enhanced communications capabilities.
Coherent optical communications can increase link capacities to spacecraft and satellites by 10 to 100 times that of radio frequency systems. Additionally, optical transceivers can lower the size, weight, and power (SWAP) specifications of satellite communication systems. Less weight and size means a less expensive launch or perhaps room for more scientific instruments. Less power consumption means less drain on the spacecraft’s energy sources.
Compared to traditional optical technology, coherent optical technology offers improved sensitivity and signal-to-noise ratios. This reduces error rates and the need for retransmission, which would otherwise significantly increase latency.
Leveraging Electronics Ecosystems for Space Certification and Standardization
While integrated photonics can boost space communications by lowering the payload, it must overcome the obstacles of a harsh space environment, which include radiation hardness, an extreme operational temperature range, and vacuum conditions. The values in Table 1 show the unmanaged environmental temperatures in different space environments.
Mission Type | Temperature Range
---|---
Pressurized Module | +18.3 °C to +26.7 °C
Low-Earth Orbit (LEO) | -65 °C to +125 °C
Geosynchronous Equatorial Orbit (GEO) | -196 °C to +128 °C
Trans-Atmospheric Vehicle | -200 °C to +260 °C
Lunar Surface | -171 °C to +111 °C
Martian Surface | -143 °C to +27 °C
Fortunately, a substantial body of knowledge exists to make integrated photonics compatible with space environments. After all, photonic integrated circuits (PICs) use similar materials to their electronic counterparts, which have already been space qualified in many implementations.
Much research has gone into overcoming the challenges of packaging PICs with electronics and optical fibers for these space environments, which must include hermetic seals and avoid epoxies. Commercial solutions, such as those offered by PHIX Photonics Assembly, Technobis IPS, and the PIXAPP Photonic Packaging Pilot Line, are now available.
Takeaways
The space industry is experiencing rapid growth and commercialization, driven by technological advancements and the emergence of commercial space companies. Using multiple satellites in a constellation can enhance reliability and coverage while reducing signal disruptions.
Coherent optical technology is crucial for satellite communication links because it enables higher data rates and improves sensitivity and signal-to-noise ratios. The integration of electronics and optics ecosystems is essential for space certification and standardization, ensuring compatibility with the harsh space environment. Overall, addressing these challenges will continue to drive innovation and advancements in satellite communication networks.
Tags: Amazon (Kuiper), Blue Origin, coherent, Coherent Satellite Networks, Coherent technology, communications, compatibility, cost, GEO, Global coverage, latency, LEO, LEO satellite constellations, life span, Limited bandwidth, Low latency, MEO, Networks, OneWeb, Optical communications, P2P, PHIX Photonics Assembly, PIXAPP Photonic Packaging Pilot Line, reliability, satellite, Satellite communication systems, sensitivity, Signal latency, SNR, space, Space certification, SpaceX, Technobis IPS, Telesat, Virgin Galactic

Data Centers in the Age of AI
Artificial intelligence (AI) is changing the technology landscape in various industries, and data centers are no exception. AI algorithms are computationally heavy and will increase data centers’ power consumption and cooling requirements. This aspect is arguably the one that will most deeply affect data center architecture. That being said, AI will also help automate some aspects of data center operation and maintenance.
This article will elaborate on how these AI aspects will change data center architecture.
Energy Efficiency
Data centers famously consume a significant amount of energy, and reducing their power consumption is an essential goal for data center operators.
The Uptime Institute estimates that the average power usage effectiveness (PUE) ratio for data centers in 2022 was 1.55. This implies that for every 1 kWh used to power data center equipment, an extra 0.55 kWh—about 35% of total power consumption—is needed to power auxiliary equipment like lighting and, more importantly, cooling. The Uptime Institute has observed only a marginal improvement of roughly 10% in the average data center PUE since 2014 (when it was 1.7).
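The PUE arithmetic above can be made explicit. This is simply the standard definition of the metric, sketched in a few lines:

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy over IT equipment energy."""
    return total_facility_kwh / it_equipment_kwh

# A PUE of 1.55 means 0.55 kWh of overhead (cooling, lighting, power
# distribution) for every 1 kWh delivered to IT equipment:
average_2022 = pue(1.0 + 0.55, 1.0)
overhead_share = (average_2022 - 1.0) / average_2022

print(average_2022)             # 1.55
print(f"{overhead_share:.0%}")  # about 35% of total consumption is overhead
```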
In some ways, AI can help data centers to become more energy efficient. By analyzing temperature data, heat generation, and other variables, AI can determine the optimal temperature and airflow for different areas of the data center. By looking at energy consumption and heat generation data, AI can determine the optimal placement of equipment to minimize energy consumption and reduce wasted heat. By studying historical data and forecasting future energy consumption, AI can help data center operators to identify areas where energy consumption can be reduced.
While these benefits have the potential to reduce data center power consumption in the future significantly, the current reality is that power-hungry AI algorithms require more computing resources and power and will lead to a net increase in data center power consumption. The world’s major data center providers are already gearing up for this increase.
For example, a recent Reuters report explains how Meta’s AI computing clusters needed 24 to 32 times the networking capacity of its earlier clusters. This increase required a redesign of the clusters and data centers to include new liquid cooling systems.
Intelligent Automation
AI can automate and optimize many monitoring and scheduling tasks currently performed manually in data centers. These optimization and automation processes could also reduce the amount of hardware to purchase, manage, and monitor, as explained by Pratik Gupta, CTO at IBM Automation.
For example, AI can be used to automate capacity planning. By analyzing historical data and forecasting future demand, AI allows data center operators to determine the optimal computing resources to support current and future workloads. Such work makes planning for growth easier and ensures that data centers have sufficient resources to support their customers’ needs.
AI for Self-Healing Data Centers
A VentureBeat report explains that there has been a trend in using AI for fault detection and prediction in data centers in recent years. This leads to “self-healing” mechanisms that help data center operators reduce downtime and improve the reliability of their infrastructure.
For example, AI can help monitor traffic throughout the data center. If traffic in specific nodes is slowing down, the AI can detect that trend and respond by restarting the node or rerouting traffic to other nodes. These trends and issues might not be immediately apparent to human operators.
With AI, data centers can look at historical data to predict equipment failures and schedule maintenance before a failure occurs. This can help prevent downtime and ensure that equipment always operates at peak performance.
Takeaways
AI can significantly improve the operation of data centers by automating and optimizing resource allocation, as well as performing predictive maintenance and fault detection. These processes can help data centers become more energy efficient in the long term, but in the short term, AI will lead to significant increases in data center power consumption. This is shown by how major data center providers have had to restructure their data centers to include enhanced cooling capabilities, such as liquid cooling.
Tags: Artificial intelligence (AI), Cooling requirements, Data center architecture, data centers, energy efficiency, Intelligent automation, Networking capacity, Optimal temperature and airflow, power consumption, Power usage effectiveness (PUE) ratio

What do Next-Gen Optical Subassemblies Need?
While packaging, assembly, and testing are only a small part of the cost of electronic systems, the reverse happens with photonic integrated circuits (PICs) and their subassemblies. Researchers at the Technical University of Eindhoven (TU/e) estimate that for most Indium Phosphide (InP) photonics devices, the cost of packaging, assembly, and testing can reach around 80% of the total module cost.
To trigger a revolution in the use of photonics worldwide, photonic devices need to become as easy to manufacture and use as electronics. In the words of EFFECT Photonics’ Chief Technology Officer, Tim Koene: “We need to buy photonics from a catalog as we do with electronics, have datasheets that work consistently, be able to solder it to a board and integrate it easily with the rest of the product design flow.”
This article will explore three key avenues to improve optical subassemblies and packaging for photonic devices.
Learning from Electronics Packaging
A key way to improve photonics manufacturing is to learn from electronics packaging, assembly, and testing methods that are already well-known and standardized. After all, building a new special production line is much more expensive than modifying an existing production flow.
One electronic technique essential to transfer into photonics is ball-grid array (BGA) packaging. BGA-style packaging has grown popular among electronics manufacturers over the last few decades. It places the chip connections under the chip package, allowing more efficient use of space in circuit boards, a smaller package size, and better soldering.
Another critical technique to move into photonics is flip-chip bonding. This process is where solder bumps are deposited on the chip in the final fabrication step. The chip is flipped over and aligned with a circuit board for easier soldering.
These might be novel technologies for photonics developers who have started implementing them in the last five or ten years. However, the electronics industry embraced these technologies 20 or 30 years ago. Making these techniques more widespread will make a massive difference in photonics’ ability to scale up and become as available as electronics.
Adopting BGA-style packaging and flip-chip bonding techniques will make it easier for PICs to survive this soldering process. There is ongoing research and development worldwide, including at EFFECT Photonics, to transfer more electronics packaging methods into photonics. PICs that can handle being soldered to circuit boards allow the industry to build optical subassemblies that are more accessible to the open market and can go into trains, cars, or airplanes.
The Benefits of Increasing Integration
Economies of scale are a crucial principle behind electronics manufacturing, and we must apply them to photonics too. The more components we can integrate into a single chip and the more chips we can fit on a single wafer, the more affordable the photonic device becomes. If production volumes increase from a few thousand chips per year to a few million, the price per optical chip can decrease from thousands of Euros to mere tens of Euros. This must be the goal for the photonics industry in general.
By integrating all optical components on a single chip, we also shift the complexity from the assembly process to the much more efficient and scalable semiconductor wafer process. Assembling and packaging a device by interconnecting multiple photonic chips increases assembly complexity and costs. On the other hand, combining and aligning optical components on a wafer at a high volume is much easier, which drives down the device’s cost.
Deepening photonics integration will also have a significant impact on power consumption. Integrating all the optical components (lasers, detectors, modulators, etc.) on a single chip can minimize the losses and make devices such as optical transceivers more efficient. This approach doesn’t just optimize the efficiency of the devices themselves but also of the resource-hungry chip manufacturing process.
The Importance of Laser Packaging
Over the last decade, technological progress in tunable laser packaging and integration has matched the need for smaller footprints. In 2011, tunable lasers followed the multi-source agreement (MSA) for integrable tunable laser assemblies (ITLAs). By 2015, tunable lasers were sold in the more compact Micro-ITLA form factor, which cut the original ITLA package size in half. And in 2019, laser developers (see examples here and here) announced a new Nano-ITLA form factor that reduced the size by almost half again.
Reducing the footprint of tunable lasers in the future will need even greater integration of their parts. For example, every tunable laser needs a wavelength locker component that can stabilize the laser’s output regardless of environmental conditions such as temperature. Integrating the wavelength locker component on the laser chip instead of attaching it externally would help reduce the laser package’s footprint and power consumption.
Another aspect of optimizing laser module footprint is allowing transceiver developers to mix and match their building blocks. For example, traditional ITLAs in transceivers contain the temperature control driver and power converter functions. However, the main transceiver board can usually provide these functions too. A setup in which the main board performs these driver and converter functions would avoid the need for redundant elements in both the main board and tunable laser.
Finally, the future of laser packaging will also involve packaging more multi-laser arrays. As explained in a previous article, multi-laser arrays will become increasingly necessary to increase link capacity in coherent systems. They add capacity without occupying more slots in the router faceplate, while avoiding the higher cost and complexity of scaling the speed of a single laser channel.
Takeaways
Improving subassemblies and packaging is vital for photonics to reach its potential. Photonics must learn from well-established, standardized electronics packaging techniques like BGA-style packaging and flip-chip bonding. By increasing integration, photonics can achieve economies of scale that make devices more affordable and energy efficient. In this context, improved integration and packaging of tunable lasers and arrays will be particularly important. Overall, these efforts will make photonics more accessible to the open market and make it as easy to manufacture and use as electronics.
Building a Sustainable Future with Fully Integrated PICs
Article first published 27 September 2021, updated 31st May 2023.
The demand for data and other digital services is rising exponentially. From 2010 to 2020, the number of Internet users worldwide doubled, and global internet traffic increased 12-fold. By 2022, internet traffic had doubled yet again. While 5G standards are more energy-efficient per bit than 4G, total power consumption will be much higher than that of 4G. Huawei expects that the maximum power consumption of one of its 5G base stations will be 68% higher than that of its 4G stations. These issues affect not just the environment but also the bottom lines of communications companies.
Keeping up with the increasing data demand of future networks sustainably will require operators to deploy more optical technologies, such as photonic integrated circuits (PICs), in their access and fronthaul networks.
Integration Impacts Energy Efficiency and Optical Losses
Lately, we have seen many efforts across the electronics industry to further increase integration at the component level. For example, moving towards greater integration of components in a single chip has yielded significant efficiency benefits in electronic processors. Apple’s recent M1 and M2 processors integrate all electronic functions in a single system-on-chip (SoC) and consume significantly less power than the discrete-component processors used in previous generations of its computers.
| Mac Mini Model | Idle Power (W) | Max Power (W) |
| --- | --- | --- |
| 2023, M2 | 7 | 50 |
| 2020, M1 | 7 | 39 |
| 2018, Core i7 | 20 | 122 |
| 2014, Core i5 | 6 | 85 |
| 2010, Core 2 Duo | 10 | 85 |
| 2006, Core Solo or Duo | 23 | 110 |
| 2005, PowerPC G4 | 32 | 85 |
Photonics is achieving similar efficiency gains by following the same approach to integration. The more active and passive optical components (lasers, modulators, detectors, etc.) manufacturers can integrate on a single chip, the more energy they can save, since they avoid coupling losses between discrete components and can optimize the components jointly.
Let’s start by discussing three different levels of device integration for an optical device like a transceiver:
- Discrete build – The transceiver components are manufactured through separate processes. The components are then assembled into a single package using different types of interconnections.
- Partial integration – Some components are manufactured and integrated on the same chip, but others are manufactured or sourced separately. For example, the transceiver laser can be manufactured separately on a different material and then interconnected to a chip with the other transceiver components.
- Full integration – All the components are manufactured on a single chip from a single material simultaneously.
While discrete builds and partial integration have advantages in managing the production yield of the individual components, full integration leads to fewer optical losses and more efficient packaging and testing processes, making it a much better fit in terms of sustainability.
The interconnects required to couple discrete components introduce electrical and optical losses that must be compensated with higher transmitter power and energy consumption. The more interconnects between components, the higher the losses: discrete builds have the most interconnect points and therefore the highest losses, while partial integration reduces their number. If the interconnected components are made from different optical materials, the interconnections suffer additional losses.
On the other hand, full integration uses a single chip of the same base material. It does not require lossy interconnections between chips, minimizing optical losses and significantly reducing the energy consumption and footprint of the transceiver device.
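A rough loss-budget sketch makes the comparison concrete. The per-interconnect loss figures below are assumptions for illustration, not measured values:

```python
def link_loss_db(n_interconnects, loss_per_interconnect_db, extra_material_db=0.0):
    """Total coupling loss grows with every chip-to-chip interconnect."""
    return n_interconnects * (loss_per_interconnect_db + extra_material_db)

def power_penalty(loss_db):
    """Linear factor by which transmit power must rise to compensate the loss."""
    return 10 ** (loss_db / 10)

# Illustrative values only
discrete = link_loss_db(6, 1.5)                          # many discrete parts
partial = link_loss_db(2, 1.5, extra_material_db=0.5)    # fewer, dissimilar materials
full = link_loss_db(0, 1.5)                              # single chip, no chip-to-chip coupling

for name, loss in [("discrete", discrete), ("partial", partial), ("full", full)]:
    print(f"{name}: {loss:.1f} dB loss -> {power_penalty(loss):.1f}x transmit power")
```

Even with these made-up numbers, the pattern holds: every interconnect removed reduces the loss budget and, with it, the power the transmitter must burn to compensate.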
More Integration Saves Scarce Resources
When it comes to energy consumption and sustainability, we shouldn’t just think about the energy the PIC consumes but also the energy and carbon footprint of fabricating the chip and assembling the transceiver. To give an example from the electronics sector, a Harvard and Facebook study estimated that for Apple, manufacturing accounts for 74% of their carbon emissions, with integrated circuit manufacturing comprising roughly 33% of Apple’s carbon output. That’s higher than the emissions from product use.
Early Testing Avoids Wastage
Testing is another aspect of the manufacturing process that impacts sustainability. The earlier faults are found in the testing process, the less material and energy is wasted on processing defective chips. Ideally, testing should happen not only on the final, packaged transceiver but also in the earlier stages of PIC fabrication, such as after wafer processing or after cutting the wafer into smaller dies.
Discrete and partial integration approaches do more of their optical testing on the finalized package, after all the different components have been connected. Should just one of the components fail the test, the complete packaged transceiver must be discarded, potentially wasting a great deal of material, as nothing can be "fixed" or reused at this stage of the manufacturing process.
Full integration enables earlier optical testing on the semiconductor wafer and dies. By testing the dies and wafers directly before packaging, manufacturers need only discard the bad dies rather than the whole package, which saves valuable energy and materials.
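A toy cost model (all numbers hypothetical) makes the waste argument concrete: with die-level testing, only the bad dies are discarded, while package-level testing throws away a full assembly for every bad die:

```python
def wasted_cost(n_dies, die_yield, die_cost, package_cost, test_at_die_level):
    """Expected cost of discarded material for a batch of dies.

    test_at_die_level=True: bad dies are caught before packaging,
    so only the bad dies themselves are thrown away.
    test_at_die_level=False: every die is packaged first, so each
    bad die drags a whole package assembly into the bin with it.
    """
    bad_dies = n_dies * (1 - die_yield)
    if test_at_die_level:
        return bad_dies * die_cost
    return bad_dies * (die_cost + package_cost)

# Illustrative numbers only
early = wasted_cost(10_000, 0.9, die_cost=10, package_cost=40, test_at_die_level=True)
late = wasted_cost(10_000, 0.9, die_cost=10, package_cost=40, test_at_die_level=False)
print(f"die-level testing wastes {early:.0f} EUR, package-level {late:.0f} EUR")
```

In this sketch, catching the same faulty dies before packaging cuts the wasted cost by a factor of five, and the gap grows as packaging becomes a larger share of the device cost.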
Full Integration Drives Sustainability
While communication networks have become more energy-efficient, further technological improvements must continue decreasing the cost of energy per bit to keep up with the exponential increase in Internet traffic. At the same time, a greater focus is being placed on sustainability and responsible manufacturing. All the photonic integration approaches we have touched on will play a role in reducing the energy consumption of future networks. However, only full integration is in a position to make a significant contribution to the goals of sustainable, environmentally friendly manufacturing. A fully integrated system-on-chip minimizes optical losses, transceiver energy consumption, and materials wastage while increasing the energy efficiency of the manufacturing, packaging, and testing processes.
The Fabrication Process Inside a Photonic Foundry
Photonics is one of the enabling technologies of the future. Light is the fastest information carrier in the universe and can transmit this information while dissipating less heat and energy than electrical signals. Thus, photonics can dramatically increase the speed, reach, and flexibility of communication networks and cope with the ever-growing demand for more data. And it will do so at a lower energy cost, decreasing the Internet’s carbon footprint. Meanwhile, fast and efficient photonic signals have massive potential for sensing and imaging applications in medical devices, automotive LIDAR, agricultural and food diagnostics, and more.
Given its importance, we should discuss the fabrication processes inside photonic semiconductor foundries.
Manufacturing semiconductor chips for photonics and electronics is one of the most complex procedures in the world. For example, back in his university days, EFFECT Photonics co-founder Boudewijn Docter described a fabrication process with 243 steps!
Yuqing Jiao, Associate Professor at the Eindhoven University of Technology (TU/e), explains the fabrication process in a few basic, simplified steps:
- Grow or deposit your chip material
- Print a pattern on the material
- Etch the printed pattern into your material
- Do some cleaning and extra surface preparation
- Go back to step 1 and repeat as needed
Real life is, of course, a lot more complicated and will require cycling through these steps tens of times, leading to processes with more than 200 total steps. Let’s go through these basic steps in a bit more detail.
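The cycle Jiao describes can be sketched as a simple loop. The layer recipes below are invented placeholders, not a real foundry flow:

```python
# A schematic of the wafer-processing cycle described above.
# Recipe names are illustrative, not an actual foundry recipe.
def process_wafer(layer_recipes):
    steps_done = []
    for recipe in layer_recipes:  # each cycle adds one patterned layer
        steps_done.append(f"deposit/grow {recipe['material']}")
        steps_done.append(f"lithography: print {recipe['pattern']}")
        steps_done.append(f"etch {recipe['pattern']} into {recipe['material']}")
        steps_done.append("clean and prepare surface")
    return steps_done

recipes = [
    {"material": "InP waveguide layer", "pattern": "waveguides"},
    {"material": "dielectric cladding", "pattern": "vias"},
    {"material": "metal film", "pattern": "contacts"},
]
steps = process_wafer(recipes)
print(len(steps))  # 4 steps per cycle x 3 cycles = 12; real flows exceed 200
```

Even this cartoon shows why step counts balloon: every additional patterned layer multiplies the whole deposit-print-etch-clean cycle.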
1. Layer Epitaxy and Deposition: Different chip elements require different semiconductor material layers. These layers can be grown on the semiconductor wafer via a process called epitaxy or deposited via other methods, such as physical or chemical vapor deposition.
2. Lithography (i.e., printing): There are a few lithography methods, but the one used for high-volume chip fabrication is projection optical lithography. The semiconductor wafer is coated with a photosensitive polymer film called a photoresist. Meanwhile, the design layout pattern is transferred to an opaque material called a mask. The optical lithography system projects the mask pattern onto the photoresist. The exposed photoresist is then developed (like photographic film) to complete the pattern printing.
3. Etching: Having “printed” the pattern on the photoresist, it is time to remove (or etch) parts of the semiconductor material to transfer the pattern from the resist into the wafer. Etching techniques can be broadly classified into two categories.
- Dry Etching: These processes remove material by bombarding it with ions. Typically, these ions come from a plasma of reactive gases like oxygen, boron, chlorine, etc. This approach is often used to etch a material anisotropically (i.e., in a specific direction).
- Wet Etching: These processes remove material using a liquid reactant. The material to be etched is immersed in a solution that dissolves the targeted material layers. This solution usually contains an acid, such as hydrofluoric acid (HF), which is used to etch silicon dioxide. Wet etching is typically used to etch a material isotropically (i.e., in all directions).
4. Cleaning and Surface Preparation: After etching, a series of steps will clean and prepare the surface before the next cycle.
- Passivation: Adding layers of dielectric material (such as silica) to “passivate” the chip and make it more tolerant to environmental effects.
- Planarization: Making the surface flat in preparation for future lithography and etching steps.
- Metallization: Depositing metal components and films on the wafer. This might be done for future lithography and etching steps or, in the end, to add electrical contacts to the chip.
Figure 5 summarizes how an InP photonic device looks after the steps of layer epitaxy, etching, dielectric deposition and planarization, and metallization.
After this fabrication process ends, the processed wafers are shipped worldwide to be tested and packaged into photonic devices. This is an expensive process we discussed in one of our previous articles.
Takeaways
The process of making photonic integrated circuits is incredibly long and complex, and the steps we described in this article are a mere simplification of the entire process. It requires tremendous knowledge in chip design, fabrication, and testing from experts in different fields worldwide. EFFECT Photonics was founded by people who fabricated these chips themselves, understood the process intimately and developed the connections and network to develop cutting-edge PICs at scale.
Data Center Interconnects: Coherent or Direct Detect?
Article first published 15 June 2022, updated 18 May 2023.
With the increasing demand for cloud-based applications, datacom providers are expanding their distributed computing networks. Therefore, they and telecom provider partners are looking for data center interconnect (DCI) solutions that are faster and more affordable than before to ensure that connectivity between metro and regional facilities does not become a bottleneck.
As shown in the figure below, we can think of three categories of data center interconnects based on their reach:
- Intra-data center interconnects (< 2km)
- Campus data center interconnects (<10km)
- Metro data center interconnects (<100km)
Coherent 400ZR now dominates the metro DCI space, but in the coming decade, coherent technology could also play a role in shorter ranges, such as campus and intra-data center interconnects. As interconnects upgrade to Terabit speeds, coherent technology might start coming closer to direct detect power consumption and cost.
Coherent Dominates in Metro DCIs
The advances in electronic and photonic integration allowed coherent technology for metro DCIs to be miniaturized into QSFP-DD and OSFP form factors. This progress allowed the Optical Internetworking Forum (OIF) to create a 400ZR multi-source agreement. With small enough modules to pack a router faceplate densely, the datacom sector could profit from a 400ZR solution for high-capacity data center interconnects of up to 80km. Operations teams found the simplicity of coherent pluggables very attractive. There was no need to install and maintain additional amplifiers and compensators as in direct detection: a single coherent transceiver plugged into a router could fulfill the requirements.
As an example of their success, Cignal AI forecasted that 400ZR shipments would dominate edge applications, as shown in Figure 2.
Campus Interconnects Are the Grey Area
The campus DCI segment, featuring distances below 10 kilometers, was squarely in the domain of direct detect products when the standard speed of these links was 100Gbps. No amplifiers or compensators were needed for these shorter distances, so direct detect transceivers were as simple to deploy and maintain as coherent ones.
However, as link bandwidths increase into the Terabit space, these direct detect links will need more amplifiers to reach 10 kilometers, and their power consumption will approach that of coherent solutions. The industry initially predicted that coherent solutions would match the power consumption of PAM4 direct detect solutions as early as the 800G generation. However, PAM4 developers have proven resourceful and have borrowed some aspects of coherent solutions without implementing a full coherent solution. For example, ahead of OFC 2023, semiconductor solutions provider Marvell announced a 1.6Tbps PAM4 platform that pushes the envelope on the cost and power per bit achievable in the 10 km range.
Following the coming years and how the PAM-4 industry evolves will be interesting. How many (power-hungry) features of coherent solutions will they have to borrow if they want to keep up in upcoming generations and speeds of 3.2 Tbps and beyond? Lumentum’s Chief Technology Officer, Brandon Collings, has some interesting thoughts on the subject in this interview with Gazettabyte.
Direct Detect Dominates Intra Data Center Interconnects (For Now…)
Below Terabit speeds, direct detect technology (both NRZ and PAM-4) will likely dominate the intra-DCI space (also called data center fabric) in the coming years. In this space, links span less than 2 kilometers, and for particularly short links (< 300 meters), affordable multimode fiber (MMF) is frequently used.
Nevertheless, moving to larger, more centralized data centers (such as hyperscale) is lengthening intra-DCI links. Instead of transferring data directly from one data center building to another, new data centers move data to a central hub. So even if the building you want to connect to might be 200 meters away, the fiber runs to a hub that might be one or two kilometers away. In other words, intra-DCI links are becoming campus DCI links that require their own single-mode fiber solutions.
On top of these changes, the upgrades to Terabit speeds in the coming decade will also see coherent solutions more closely challenge the power consumption of direct detect transceivers. PAM-4 direct detect transceivers that fulfill the speed requirements require digital signal processors (DSPs) and more complex lasers that will be less efficient and affordable than previous generations of direct detect technology. With coherent technology scaling up in volume and having greater flexibility and performance, one can argue that it will also reach cost-competitiveness in this space.
Takeaways
Unsurprisingly, the choice between coherent and direct detect technology for data center interconnects boils down to reach and capacity needs. 400ZR coherent is already established as the solution for metro DCIs. In campus interconnects of 10 km or less, PAM-4 products remain a robust solution up to 1.6 Tbps, but coherent technology is making a case for its use, and it will be interesting to see how the two compete at 3.2 Tbps and beyond.
Coherent solutions are also becoming more competitive as the intra-data center sector moves into higher Terabit speeds, like 3.2Tbps. Overall, the datacom sector is moving towards coherent technology, which is worth considering when upgrading data center links.
Shining a Light on Four Tunable Lasers
The world is moving towards tunability. Datacom and telecom companies may increase their network capacity without investing in new fiber infrastructure thanks to tunable lasers and dense wavelength division multiplexing (DWDM). Furthermore, the miniaturization of coherent technology into pluggable transceiver modules has enabled the widespread implementation of IP over DWDM solutions. Self-tuning algorithms have also contributed to the broad adoption of DWDM systems since they reduce the complexity of deployment and maintenance.
The tunable laser is a core component of all these tunable communication systems, both direct detection and coherent. The fundamental components of a laser are the following:
- An optical resonator (also called an optical cavity) that allows laser light to re-circulate and feed itself back. Resonators can be linear or ring-shaped. Linear resonators have a highly reflective mirror on one end and a partially-reflective mirror on the other, which acts as a coupler that lets the laser light out. On the other hand, ring resonators use a waveguide as an output coupler.
- An active medium (also called a gain medium) inside the resonator that, when pumped by an external energy source, will amplify the power of light by a process called stimulated emission.
- A pump source is the external energy source that powers the amplification process of the gain medium. The typical tunable laser used in communications will use an electrical pump, but some lasers can also use an optical pump (i.e., another light source).
As light circulates throughout the resonator, it passes multiple times through the pumped gain medium, amplifying itself and building up power to become the highly concentrated and coherent beam of light we know as a laser.
There are multiple ways to tune lasers, but let’s discuss three common tuning methods. These methods can and are often used together.
- Tuning the Gain Medium: By changing the pump intensity or environmental conditions such as its temperature, the gain medium can amplify different frequencies of light.
- Tuning the Resonator Length: The light inside a resonator goes back and forth at a frequency that depends on the length of the resonator. So making the resonator shorter or longer can change its frequency.
- Tuning by Filtering: Adding a filtering element inside or outside the resonator, such as a diffraction grating (i.e., a periodic mirror), allows the laser to “select” a specific frequency.
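The resonator-length method above can be illustrated with the standing-wave condition for a linear cavity, f = m·c/(2nL), where m is the mode index, n the refractive index, and L the cavity length. The cavity dimensions below are illustrative, not a specific product's:

```python
C = 299_792_458  # speed of light in vacuum, m/s

def mode_frequency_hz(m, cavity_length_m, refractive_index):
    """Frequency of the m-th longitudinal mode of a linear cavity: f = m*c/(2*n*L)."""
    return m * C / (2 * refractive_index * cavity_length_m)

def free_spectral_range_hz(cavity_length_m, refractive_index):
    """Spacing between adjacent resonator modes: FSR = c/(2*n*L)."""
    return C / (2 * refractive_index * cavity_length_m)

# Illustrative numbers: a 1 mm semiconductor cavity with index ~3.2
L, n = 1e-3, 3.2
print(f"Mode spacing (FSR): {free_spectral_range_hz(L, n) / 1e9:.1f} GHz")

# Shortening the cavity raises every mode's frequency -- the
# "tuning the resonator length" method described above.
m = 4_120  # mode index landing near ~193 THz (C-band) for this cavity
shift = mode_frequency_hz(m, 0.999e-3, n) - mode_frequency_hz(m, L, n)
print(f"A 0.1% length change shifts the mode by ~{shift / 1e9:.0f} GHz")
```

The takeaway: tiny changes in effective cavity length translate into large frequency shifts at optical frequencies, which is why resonator-length tuning is so effective.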
With this short intro on how lasers work and can be tuned, let’s dive into some of the different tunable lasers used in communication systems.
Distributed Feedback Lasers
Distributed Feedback (DFB) lasers are unique because they directly etch a grating onto the gain medium. This grating acts as a periodic mirror, forming the optical resonator needed to recirculate light and create a laser beam. These lasers are tunable by tuning the temperature of the gain medium and by filtering with the embedded grating.
Compared to their predecessors, DFB lasers could produce very pure, high-quality laser light with lower complexity in design and manufacturing that could be easily integrated into optical fiber systems. These characteristics benefited the telecommunications sector, which needed lasers with high purity and low noise that could be produced at scale. After all, the more pure (i.e., lower linewidth) a laser is, the more information it can encode. Thus, DFB lasers became the industry’s solution for many years.
The drawback of DFB lasers is that embedding the grating element in the gain medium makes them more sensitive and unstable. This sensitivity narrows their tuning range and makes them less reliable as they age.
Distributed Bragg Reflector (DBR) Lasers
A simple way to improve the reliability compared to a DFB laser is to etch the grating element outside the gain medium instead of inside. This grating element (which in this case is called a Bragg reflector) acts as a mirror that creates the optical resonator and amplifies the light inside. This setup is called a distributed Bragg reflector (DBR) laser.
While, in principle, a DBR laser does not have a wider tuning range than a DFB laser, its tuning behavior is more reliable over time. Since the grating is outside the gain medium, the DBR laser is less sensitive to environmental fluctuations and more reliable as it ages. However, as coherent and DWDM systems became increasingly important, the industry needed a greater tuning range that DFB and DBR lasers alone could not provide.
External Cavity Lasers (ECL)
Interestingly enough, one of the most straightforward ways to improve the quality and tunability of a semiconductor laser is to use it inside a second, somewhat larger resonator. This setup is called an external cavity laser (ECL) since this new resonator or cavity will use additional optical elements external to the original laser.
The main modification to the original semiconductor laser is that instead of having a partially reflective mirror as an output coupler, the coupler will use an anti-reflection coating to become transparent. This helps the original laser resonator capture more light from the external cavity.
The new external resonator provides more degrees of freedom for tuning the laser. If the resonator uses a mirror, then the laser can be tuned by moving the mirror a bit and changing the length of the resonator. If the resonator uses a grating, it has an additional element to tune the laser by filtering.
ECLs have become the state-of-the-art solution in the telecom industry: they use a DFB or DBR laser as the “base laser” and external gratings as their filtering element for additional tuning. These lasers can provide a high-quality laser beam with low noise, narrow linewidth, and a wide tuning range. However, they came with a cost: manufacturing complexity.
ECLs initially required free-space bulk optical elements, such as lenses and mirrors, for the external cavity. One of the hardest things to do in photonics is coupling between free-space optics and a chip. This alignment of the free-space external cavity with the original laser chip is extremely sensitive to environmental disturbances. Therefore, their coupling is often inefficient and complicates manufacturing and assembly processes, making them much harder to scale in volume.
Laser developers have tried to overcome this obstacle by manufacturing the external cavity on a separate chip coupled to the original laser chip. Coupling these two chips together is still a complex problem for manufacturing but more feasible and scalable than coupling from chip to free space optics. This is the direction many major tunable laser developers will take in their future products.
Integrated Tunable Ring Lasers
As we explained in the introductory section, linear resonators are those in which light bounces back and forth between two mirrors. However, ring resonators take a different approach to feedback: the light loops multiple times inside a ring that contains the active medium. The ring is coupled to the rest of the optical circuit via a waveguide.
The power of the ring resonator lies in its compactness, flexibility, and integrability. While a single ring resonator is not that impressive or tunable, using multiple rings and other optical elements allows them to achieve performance and tunability on par with the state-of-the-art tunable lasers that use linear resonators.
Most importantly, these widely tunable ring lasers can be entirely constructed on a single chip of Indium Phosphide (InP) material. As shown in this paper from the Eindhoven University of Technology, these lasers can even be built with the same basic building blocks and processes used to make other elements in the InP photonic integrated circuit (PIC).
This high integration of ring lasers has many positive effects. It can avoid inefficient couplings and make the laser more energy efficient. Furthermore, it enables the development of a monolithically integrated laser module where every element is included on the same chip. This includes integrating the wavelength locker component on the same chip, an element most state-of-the-art lasers attach separately.
As we have argued in previous articles, the more elements can be integrated into a single chip, the more scalable the manufacturing process can become.
Takeaways
Factors such as output power, noise, linewidth, tuning range, and manufacturability are vital when deciding which kind of laser to use. A DFB or DBR laser should do the job if wide tunability is not required. Greater tuning range will require an external cavity laser, but if the device must be manufactured at a large volume, an external cavity made on a chip instead of free-space optics will scale more easily. The latter is the tunable laser solution the telecom industry is gravitating towards.
That being said, ring lasers are a promising alternative because they can enable a widely tunable and monolithically integrated laser with all elements, including wavelength locker, on the same chip. This setup is ideal for scaling into high production volumes.
The Promise of Integrated Quantum Photonics
Today’s digital society depends heavily on securely transmitting and storing data. One of the oldest and most widely used methods to encrypt data is called RSA (Rivest-Shamir-Adleman – the surnames of the algorithm’s designers). However, in 1994 mathematician Peter Shor proved that an ideal quantum computer could find the prime factors of large numbers exponentially more quickly than a conventional computer and thus break RSA encryption within hours or days.
While practical quantum computers are likely decades away from implementing Shor’s algorithm with enough performance and scale to break RSA or similar encryption methods, the potential implications are terrifying for our digital society and our data safety.
Given these risks, arguably the most secure way to protect data and communications is by fighting quantum with quantum: protect your data from quantum computer hacking by using security protocols that harness the power of quantum physics laws. That’s what quantum key distribution (QKD) does.
The quantum bits (qubits) used by QKD systems can be photons, electrons, atoms, or any other system that can exist in a quantum state. However, using photons as qubits will likely dominate the quantum communications and QKD application space. We have decades of experience manipulating the properties of photons, such as polarization and phase, to encode qubits. Thanks to optical fiber, we also know how to send photons over long distances with relatively little loss. Besides, optical fiber is already a fundamental component of modern telecommunication networks, so future quantum networks can run on that existing fiber infrastructure. All these signs point towards a new era of quantum photonics.
Photonic QKD devices have been, in some shape or form, commercially available for over 15 years. Still, factors such as the high cost, large size, and the inability to operate over longer distances have slowed their widespread adoption. Many R&D efforts regarding quantum photonics aim to address the size, weight, and power (SWaP) limitations. One way to overcome these limitations and reduce the cost per device would be to integrate every QKD function—generating, manipulating, and detecting photonic qubits—into a single chip.
Integration is Key to Bring Lab Technology into the Market
Bringing quantum products from lab prototypes to fully realized products that can be sold on the market is a complex process that involves several key steps.
One of the biggest challenges in bringing quantum products to market is scaling up the technology from lab prototypes to large-scale production. This requires the development of reliable manufacturing processes and supply chains that can produce high-quality quantum products at scale. Quantum products must be highly performant and reliable to meet the demands of commercial applications. This requires extensive testing and optimization to ensure that the product meets or exceeds the desired specifications.
In addition, quantum products must comply with relevant industry standards and regulations to ensure safety, interoperability, and compatibility with existing infrastructure. This requires close collaboration with regulatory bodies and industry organizations to develop appropriate standards and guidelines.
Photonic integration is a process that makes these goals more attainable for quantum technologies. By taking advantage of existing semiconductor manufacturing systems, quantum technologies can scale up their production volumes more easily.
Smaller Footprints and Higher Efficiency
One of the most significant advantages of integrated photonics is its ability to miniaturize optical components and systems, making them much smaller, lighter, and more portable than traditional optical devices. This is achieved by leveraging micro- and nano-scale fabrication techniques to create optical components on a chip, which can then be integrated with other electronic and optical components to create a fully functional device.
The miniaturization of optical components and systems is essential for the development of practical quantum technologies, which require compact and portable devices that can be easily integrated into existing systems. For example, compact and portable quantum sensors can be used for medical imaging, geological exploration, and industrial process monitoring. Miniaturized quantum communication devices can be used to secure communication networks and enable secure communication between devices.
Integrated photonics also allows for the creation of complex optical circuits that can be easily integrated with other electronic components to create fully integrated opto-electronic quantum systems. This is essential for the development of practical quantum computers, which require the integration of a large number of qubits (quantum bits) with control and readout electronics.
Economics of Scale
Wafer-scale photonics manufacturing demands a higher upfront investment, but the resulting high-volume production line drives down the cost per device. This economy-of-scale principle is the same one behind electronics manufacturing, and it applies equally to photonics. The more optical components we can integrate into a single chip, the lower the price of each component becomes. The more optical System-on-Chip (SoC) devices that fit on a single wafer, the lower the price of each SoC becomes.
Researchers at the Technical University of Eindhoven and the JePPIX consortium have modelled how this economy-of-scale principle would apply to photonics. If production volumes can increase from a few thousand chips per year to a few million, the price per optical chip can decrease from thousands of Euros to mere tens of Euros. This must be the goal for the quantum photonics industry.
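This scaling argument can be illustrated with a toy cost model: a fixed annual cost for the production line, amortized over the yearly chip volume, plus a small variable cost per chip. The specific figures below are illustrative assumptions only, not JePPIX or TU/e data.

```python
# Toy economy-of-scale model: fixed production-line cost amortized over
# yearly volume, plus a variable cost per chip. All figures are
# illustrative assumptions, not actual JePPIX or TU/e numbers.

def cost_per_chip(yearly_volume, fixed_cost_eur=5_000_000, variable_cost_eur=8.0):
    """Return the cost per chip (EUR) at a given yearly production volume."""
    return fixed_cost_eur / yearly_volume + variable_cost_eur

low_volume = cost_per_chip(2_000)        # a few thousand chips per year
high_volume = cost_per_chip(2_000_000)   # a few million chips per year
print(f"{low_volume:.0f} EUR vs {high_volume:.1f} EUR")  # 2508 EUR vs 10.5 EUR
```

Even with these made-up numbers, the thousand-fold increase in volume collapses the amortized fixed cost, leaving mostly the per-chip variable cost.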
By integrating all optical components on a single chip, we also shift the complexity from the assembly process to the much more efficient and scalable semiconductor wafer process. Assembling and packaging a device by interconnecting multiple photonic chips increases assembly complexity and costs. On the other hand, combining and aligning optical components on a wafer at a high volume is much easier, which drives down the device’s cost.
Takeaways
Overall, bringing quantum products to market requires a multi-disciplinary approach that involves collaboration between scientists, engineers, designers, business professionals, and regulatory bodies to develop and commercialize a high-quality product that meets the needs of its target audience. Integrated photonics offers significant advantages in miniaturization and scale-up potential, which are essential in taking quantum technologies from the lab to the market.
Tags: Economy-of-scale, EFFECT Photonics, Integrated Photonics, miniaturization, Photonics, Photons, Quantum, Quantum products, Qubits, RSA encryption, Wafer Scale Photonics

The Future of Coherent Transceivers in the Access
The demand for data and other digital services is rising exponentially. From 2010 to 2020,…
The demand for data and other digital services is rising exponentially. From 2010 to 2020, Internet users worldwide doubled, and global internet traffic increased 12-fold. From 2020 to 2026, internet traffic will likely increase 5-fold. To meet this demand, datacom and telecom operators need to constantly upgrade their transport networks.
The major obstacles in this upgrade path remain the power consumption, thermal management, and affordability of transceivers. Over the last two decades, power ratings for pluggable modules have increased as we moved from direct detection to more power-hungry coherent transmission: from 2 W for SFP modules to 3.5 W for QSFP modules and now to 14 W for QSFP-DD and 21.1 W for OSFP form factors. This power consumption increase seems incompatible with the power constraints of the network edge.
This article will review trends in data rate, power consumption, and footprint for transceivers in the network edge that aim to address these challenges.
Downscaling Data Rates for the Access
Given the success of 400ZR pluggable coherent solutions in the market, discussions in the telecom sector about a future beyond 400G pluggables have often focused on 800G solutions and 800ZR. However, there is also increasing excitement about “downscaling” to 100G coherent products for applications in the network edge.
In the coming years, 100G coherent uplinks will become increasingly widespread in deployments and applications throughout the network edge. Some mobile access network use cases must upgrade their existing 10G DWDM link aggregation into a single coherent 100G DWDM uplink. Meanwhile, cable networks and business services are upgrading their customer links from 1 Gbps to 10 Gbps, and this migration will be the major factor driving demand for coherent 100G uplinks. For carriers who provide converged cable/mobile access, these upgrades to 100G uplinks will create opportunities to overlay more business services and mobile traffic onto their existing cable networks.
You can read more about these developments in our previous article, When Will the Network Edge Go Coherent?
Moving Towards Low Power
Data centers and 5G networks might be hot commodities, but the infrastructure that enables them runs even hotter. Electronic equipment generates plenty of heat; the more heat energy an electronic device dissipates, the more money and energy must be spent to cool it down. These power efficiency issues do not just affect the environment but also the bottom lines of communications companies.
As shown in the table below, the growth of data centers and wireless networks will continue to drive power consumption upwards.
These power constraints are even more pressing in the access network sector. Unlike data centers and the network core, access network equipment lives in uncontrolled environments with limited cooling capabilities. Therefore, every extra watt of pluggable power consumption will impact how vendors and operators design their cabinets and equipment.
These struggles are a major reason why QSFP28 form factor solutions are becoming increasingly attractive in the 100ZR domain. Their power consumption (up to 6 watts) is lower than that of QSFP-DD form factors (up to 14 Watts), which allows them to be stacked more densely in access network equipment rooms. Besides, QSFP28 modules are compatible with existing access network equipment, which often features QSFP28 slots.
Aside from the move to QSFP28 form factors for 100G coherent, EFFECT Photonics also believes in two other ways to reduce power consumption.
- Increased Integration: The interconnections among smaller, highly-integrated optical components consume less power than those among more discrete components. We will discuss this further in the next section.
- Co-Design: As we explained in a previous article about fit-for-platform DSPs, a transceiver optical engine designed on the indium phosphide platform could run at a voltage compatible with the DSP’s signal output. This way, the optimized DSP could drive the PIC directly without needing a separate analog driver, doing away with a significant power conversion overhead.
Can We Still Move Towards Smaller Footprints?
Moving toward smaller pluggable footprints is not necessarily a goal in itself but, as we mentioned in the previous section, a means toward lower power consumption. Decreasing the size of optical components and their interconnections means that the light inside the chip travels a shorter distance and accumulates fewer optical losses.
Let’s look at the example of lasers. In the last decade, technological progress in tunable laser packaging and integration has kept pace with the need for smaller footprints. In 2011, tunable lasers followed the multi-source agreement (MSA) for integrable tunable laser assemblies (ITLAs). The ITLA package measured around 30.5 mm in width and 74 mm in length. By 2015, tunable lasers were sold in the more compact Micro-ITLA form factor, which cut the original ITLA package size in half. And in 2019, laser developers (see examples here and here) announced a new Nano-ITLA form factor that reduced the size by almost half again.
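The footprint progression above can be tallied with quick arithmetic, assuming "half" refers to the package area. Only the ITLA dimensions come from the MSA; the halving factors are the rough figures quoted above, and exact Micro- and Nano-ITLA dimensions vary by vendor.

```python
# Approximate package areas across tunable laser generations, assuming each
# step roughly halves the area. Only the ITLA dimensions (30.5 x 74 mm) come
# from the MSA; the halving steps are the rough figures quoted in the text.
itla_mm2 = 30.5 * 74                 # ITLA (2011): 2257 mm^2
micro_itla_mm2 = itla_mm2 / 2        # Micro-ITLA (2015): ~1128 mm^2
nano_itla_mm2 = micro_itla_mm2 / 2   # Nano-ITLA (2019): ~564 mm^2
print(itla_mm2, micro_itla_mm2, nano_itla_mm2)  # 2257.0 1128.5 564.25
```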
Reducing the footprint of tunable lasers in the future will need even greater integration of their parts. For example, every tunable laser needs a wavelength locker component that can stabilize the laser’s output regardless of environmental conditions such as temperature. Integrating the wavelength locker component on the laser chip instead of attaching it externally would help reduce the laser package’s footprint and power consumption.
Another potential way to reduce the size of tunable laser packages relates to the control electronics. The current ITLA standards include the complete control electronics in the laser package, including power conversion and temperature control. However, if the transceiver’s main board handles some of these electronic functions instead of the laser package, the size of the laser package can be reduced.
This approach means the reduced laser package would only have full functionality if connected to the main transceiver board. However, some transceiver developers will appreciate the laser package reduction and the extra freedom to provide their own laser control electronics.
Takeaways
The ever-increasing bandwidth demands in access networks force coherent pluggables to face the complex problem of maintaining a good enough performance while moving to lower cost and power consumption.
The move towards 100G coherent solutions in QSFP28 form factors will play a major role in meeting the power requirements of the access network sector. Further gains can be achieved with greater integration of optical components and co-designing the optics and electronic engines of the transceiver to reduce inefficiencies. Further gains in footprint for transceivers can also be obtained by eliminating redundant laser control functions in both the laser package and the main transceiver board.
Tags: 100G Coherent Products, 400ZR Pluggable Coherent Solutions, 5G Networks, 800G Solutions, 800ZR, Affordability, Coherent Transceivers, datacom, Direct Detection, EFFECT Photonics, Internet Traffic, network edge, OSFP Form Factors, Photonics, Pluggable Modules, power consumption, Power Efficiency, QSFP Modules, QSFP-DD, Telecom Operators, Thermal Management

One Watt Matters
Data centers and 5G networks might be hot commodities, but the infrastructure that enables them…
Data centers and 5G networks might be hot commodities, but the infrastructure that enables them runs even hotter. Electronic equipment generates plenty of heat; the more heat energy an electronic device dissipates, the more money and energy must be spent to cool it down.
The Uptime Institute estimates that the average power usage effectiveness (PUE) ratio for data centers in 2022 is 1.55. This implies that for every 1 kWh used to power data center equipment, an extra 0.55 kWh—about 35% of total power consumption—is needed to power auxiliary equipment like lighting and, more importantly, cooling. While the advent of centralized hyperscale data centers will improve energy efficiency in the coming decade, that trend is offset by the construction of many smaller local data centers on the network edge to address the exponential growth of 5G services such as the Internet of Things (IoT).
These opposing trends are one of the reasons why the Uptime Institute has only observed a marginal improvement of 10% in the average data center PUE since 2014 (which was 1.7 back then). Such a slow improvement in average data center power efficiency cannot compensate for the fast growth of new edge data centers.
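The PUE arithmetic above works out as follows, straight from the definition of PUE (total facility energy divided by IT equipment energy):

```python
# PUE = total facility energy / IT equipment energy.
# With PUE = 1.55, every 1 kWh of IT load drags along 0.55 kWh of overhead
# for cooling, lighting, and other auxiliary equipment.
pue = 1.55
it_load_kwh = 1.0
overhead_kwh = it_load_kwh * (pue - 1)               # extra energy beyond the IT load
overhead_share = overhead_kwh / (it_load_kwh * pue)  # overhead as share of total draw

print(round(overhead_kwh, 2), round(overhead_share * 100))  # 0.55 and 35 (%)
```

The same formula gives roughly 41% overhead for the 2014 average PUE of 1.7, which is why the 1.7 to 1.55 improvement is described as marginal.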
For all the bad reputation data centers receive for their energy consumption, though, wireless transmission generates even more heat than wired links. While 5G standards are more energy-efficient per bit than 4G, Huawei expects that the maximum power consumption of one of their 5G base stations will be 68% higher than their 4G stations. To make things worse, the use of higher frequency spectrum bands and new IoT use cases require the deployment of more base stations too.
Prof. Earl McCune from TU Delft estimates that nine out of ten watts of electrical power in 5G systems turn into heat. This Huawei study also predicts that the energy consumption of wireless access networks will increase even more quickly than data centers in the next ten years—more than quadrupling between 2020 and 2030.
These power efficiency issues do not just affect the environment but also the bottom lines of communications companies. In such a scenario, saving even one watt of power per pluggable transceiver could quickly multiply and scale up into a massive improvement on the sustainability and profitability of telecom and datacom providers.
How One Watt of Savings Scales Up
Let’s discuss an example to show how a seemingly small improvement of one Watt in pluggable transceiver power consumption can quickly scale up into major energy savings.
A 2020 paper from Microsoft Research estimates that for a metropolitan region of 10 data centers with 16 fiber pairs each and 100 GHz DWDM per fiber, the regional interconnect network needs to host 12,800 transceivers. This count could increase by a third in the coming years since the 400ZR transceiver ecosystem supports a denser 75 GHz DWDM grid, bringing the total to roughly 17,000 transceivers. Therefore, saving a watt of power in each transceiver would yield a total of 17 kW in savings.
The power savings don’t end there, however. The transceiver is powered by the server, which is in turn powered by its power supply and, ultimately, the national electricity grid. On average, 2.5 Watts must be supplied from the national grid for every watt of power the transceiver uses. Applying that 2.5 factor, the 17 kW in savings discussed earlier becomes 42.5 kW. Over a year, this rate adds up to 372 MWh in power savings. According to the US Environmental Protection Agency (EPA), this level of savings in a single metro data center network avoids the equivalent of 264 metric tons of carbon dioxide emissions, the same as consuming 610 barrels of oil, and could power up to 33 American homes for a year.
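The savings chain above is easy to reproduce as a back-of-the-envelope calculation, using the article's figures and assuming 8,760 hours in a year:

```python
# How 1 W saved per transceiver scales up across a metro region.
# Figures from the article: ~12,800 transceivers on a 100 GHz grid,
# plus a third more on the denser 75 GHz grid; 2.5 grid watts are
# needed per watt consumed at the transceiver.
transceivers = 17_000
watts_saved_each = 1.0
grid_factor = 2.5
hours_per_year = 8_760

device_savings_kw = transceivers * watts_saved_each / 1_000    # 17 kW at the devices
grid_savings_kw = device_savings_kw * grid_factor              # 42.5 kW at the grid
annual_savings_mwh = grid_savings_kw * hours_per_year / 1_000  # ~372 MWh per year

print(device_savings_kw, grid_savings_kw, round(annual_savings_mwh))  # 17.0 42.5 372
```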
Saving Power through Integration and Co-Design
Before 2020, Apple made its computer processors with discrete components. In other words, electronic components were manufactured on separate chips, and these chips were then assembled into a single package. However, the interconnections between the chips produced losses and incompatibilities that made their devices less energy efficient. After 2020, starting with the M1 processor, Apple fully integrated all components on a single chip, avoiding those losses and incompatibilities. As shown in the table below, this electronic system-on-chip (SoC) consumes a third of the power of the discrete-component processors used in previous generations of computers.
| Mac Mini Model | Idle Power (W) | Max Power (W) |
| --- | --- | --- |
| 2023, M2 | 7 | 50 |
| 2020, M1 | 7 | 39 |
| 2018, Core i7 | 20 | 122 |
| 2014, Core i5 | 6 | 85 |
| 2010, Core 2 Duo | 10 | 85 |
| 2006, Core Solo or Duo | 23 | 110 |
| 2005, PowerPC G4 | 32 | 85 |
The photonics industry would benefit from a similar goal: implementing a photonic system-on-chip. Integrating all the optical components (lasers, detectors, modulators, etc.) on a single chip can minimize the losses and make devices such as optical transceivers more efficient. This approach doesn’t just optimize the efficiency of the devices themselves but also of the resource-hungry chip manufacturing process. For example, a system-on-chip approach enables earlier optical testing on the semiconductor wafer and dies. By testing the dies and wafers directly before packaging, manufacturers need only discard the bad dies rather than the whole package, which saves valuable energy and materials. You can read our previous article on the subject to know more about the energy efficiency benefits of system-on-chip integration.
Another way of improving power consumption in photonic devices is co-designing their optical and electronic systems. A co-design approach helps identify in greater detail the trade-offs between various parameters in the optics and electronics, optimizing their fit with each other and ultimately improving the overall power efficiency of the device. In the case of coherent optical transceivers, an electronic digital signal processor specifically optimized to drive an indium-phosphide optical engine directly could lead to power savings.
When Sustainability is Profitable
System-on-chip (SoC) approaches might reduce not only the footprint and energy consumption of photonic devices but also their cost. The economy-of-scale principles that rule the electronic semiconductor industry can also reduce the cost of photonic systems-on-chip. After all, SoCs minimize the footprint of photonic devices, allowing photonics developers to fit more of them on a single wafer, which decreases the price of each photonic system. As the graphic below shows, the more chips and wafers are produced, the lower the cost per chip becomes.
Integrating all optical components—including the laser—on a single chip shifts the complexity from the expensive assembly and packaging process to the more affordable and scalable semiconductor wafer process. For example, it’s much easier to combine optical components on a wafer at a high volume than to align components from different chips together in the assembly process. This shift to wafer processes also helps drive down the cost of the device.
Takeaways
With data and energy demands rising yearly, telecom and datacom providers are constantly finding ways to reduce their power and cost per transmitted bit. As we showed earlier in this article, even one watt of power saved in an optical transceiver can snowball into major savings that providers and the environment can profit from. These improvements in the power consumption of optical transceivers can be achieved by deepening the integration of optical components and co-designing them with electronics. Highly compact and integrated optical systems can also be manufactured at greater scale and efficiency, reducing their financial and environmental costs. These details help paint a bigger picture for providers: sustainability now goes hand-in-hand with profitability.
Tags: 5G, data centers, EFFECT Photonics, efficiency, energy consumption, Photonics, Sustainability, Transceivers

When will the Network Edge go Coherent?
Article first published 27 July 2022, updated 12 April 2023. Network carriers want to provide…
Article first published 27 July 2022, updated 12 April 2023.
Network carriers want to provide communications solutions in all areas: mobile access, cable networks, and fixed access for business customers. Beyond this extra capacity, they want to offer their customers innovative, personalized connectivity and entertainment services.
Deploying only legacy direct detect technologies will not be enough to cover these growing bandwidth and service demands of mobile, cable, and business access networks with the required reach. In several cases, networks must deploy more 100G coherent dense wavelength division multiplexing (DWDM) technology to transmit more information over long distances. Several applications in the optical network edge could benefit from upgrading from 10G DWDM or 100G grey aggregation uplinks to 100G DWDM optics:
- Mobile Mid-haul benefits from seamlessly upgrading existing uplinks from 10G to 100G DWDM.
- Mobile Backhaul benefits from upgrading their links to 100G IPoDWDM.
- Cable Access links could upgrade the uplinks of existing termination devices such as optical line terminals (OLTs) and Converged Cable Access Platforms (CCAPs) from 10G to 100G DWDM.
- Business Services could scale enterprise bandwidth beyond single-channel 100G grey links.
However, network providers have often stuck to their 10G DWDM or 100G grey links because the existing 100G DWDM solutions could not check all the required boxes. “Scaled-down” coherent 400ZR solutions had the required reach and tunability but were too expensive and power-hungry for many access network applications. Besides, ports in small to medium IP routers used in most edge deployments often do not support the QSFP-DD form factor commonly used in 400ZR modules but the QSFP28 form factor.
Fortunately, the rise of 100ZR solutions in the QSFP28 form factor is changing the landscape for access networks. “The access market needs a simple, pluggable, low-cost upgrade to the 10G DWDM optics that it has been using for years. 100ZR is that upgrade,” said Scott Wilkinson, Lead Analyst for Optical Components at market research firm Cignal AI. “As access networks migrate from 1G solutions to 10G solutions, 100ZR will be a critical enabling technology.”
In this article, we will discuss how the recent advances in 100ZR solutions will enable the evolution of different segments of the network edge: mobile midhaul and backhaul, business services, and cable.
How Coherent 100G Can Move into Mobile X-haul
The upgrade from 4G to 5G has shifted the radio access network (RAN) from a two-level structure with backhaul and fronthaul in 4G to a three-level structure with back-, mid-, and fronthaul:
- Fronthaul is the segment between the active antenna unit (AAU) and the distributed unit (DU)
- Midhaul is the segment from DU to the centralized unit (CU)
- Backhaul is the segment from CU to the core network.
The initial rollout of 5G has already happened in most developed countries, with many operators upgrading their 1G SFP transceivers to 10G SFP+ devices. Some of these 10G solutions had DWDM technology; many were single-channel grey transceivers. However, mobile networks must move to the next phase of 5G deployments, which requires installing and aggregating more and smaller base stations to exponentially increase the number of devices connected to the network.
These mature phases of 5G deployment will require operators to continue scaling fiber capacity cost-effectively with more widespread 10G DWDM SFP+ solutions and 25G SFP28 transceivers. These upgrades will put greater pressure on the aggregation segments of mobile backhaul and midhaul. These network segments commonly aggregate multiple 10G DWDM links into a higher-bandwidth group (such as 4x10G). However, this link aggregation requires splitting up larger traffic streams and can be complex to integrate across an access ring. A single 100G uplink would reduce the need for such link aggregation and simplify network setup and operations. If you want to know more about the potential market and reach of this link aggregation upgrade, we recommend the recent Cignal AI report on 100ZR technologies.
Cable Migration to 10G PON Will Drive the Use of Coherent 100G Uplinks
According to Cignal AI’s 100ZR report, the biggest driver of 100ZR use will come from multiplexing fixed access network links upgrading from 1G to 10G. This trend will be reflected in cable networks’ long-awaited migration from Gigabit Passive Optical Networks (GPON) to 10G PON. This evolution is primarily guided by the new DOCSIS 4.0 standard, which promises 10Gbps download speeds for customers and will require several hardware upgrades in cable networks.
To multiplex these new larger 10Gbps customer links, cable providers and network operators need to upgrade their optical line terminals (OLTs) and Converged Cable Access Platforms (CCAPs) from 10G to 100G DWDM uplinks. Many of these new optical hubs will support up to 40 or 80 optical distribution networks (ODNs), too, so the previous approach of aggregating multiple 10G DWDM uplinks will not be enough to handle this increased capacity and higher number of channels.
Anticipating such needs, the non-profit R&D organization CableLabs has recently pushed to develop a 100G Coherent PON (C-PON) standard. Their proposal offers 100 Gbps per wavelength at a maximum reach of 80 km and up to a 1:512 split ratio. CableLabs anticipates C-PON and its 100G capabilities will play a significant role not just in cable optical network aggregation but in other use cases such as mobile x-haul, fiber-to-the-building (FTTB), long-reach rural scenarios, and distributed access networks.
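To put the proposed split ratio into perspective, a quick calculation of the raw per-endpoint share of the wavelength is useful. This is a simplification that ignores framing and protocol overhead as well as statistical multiplexing, which in practice lets each subscriber burst well above this average.

```python
# Raw capacity share per endpoint for the proposed 100G C-PON standard at
# its maximum split ratio (ignores protocol overhead and oversubscription).
wavelength_gbps = 100
split_ratio = 512
per_endpoint_mbps = wavelength_gbps * 1_000 / split_ratio
print(round(per_endpoint_mbps, 1))  # 195.3 Mbps at a full 1:512 split
```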
Towards 100G Coherent and QSFP28 in Business Services
Almost every organization uses the cloud in some capacity, whether for development and test resources or software-as-a-service applications. While the cost and flexibility of the cloud are compelling, its use requires fast, high-bandwidth wide-area connectivity to make cloud-based applications work as they should.
Similarly to cable networks, these needs will require enterprises to upgrade their existing 1G Ethernet private lines to 10G Ethernet, which will also drive a greater need for 100G coherent uplinks. Cable providers and operators will also want to take advantage of their upgraded 10G PON networks and expand the reach and capacity of their business services.
The business and enterprise services sector was the earliest adopter of 100G coherent uplinks, deploying “scaled-down” 400ZR transceivers in the QSFP-DD form factor since they were the solution available at the time. However, these QSFP-DD slots also support QSFP28 form factors, so the rise of QSFP28 100ZR solutions will give these enterprise applications a more attractive upgrade with lower cost and power consumption. These QSFP28 solutions had previously struggled to become widespread because they required the development of new, low-power digital signal processors (DSPs). Now, however, DSP developers and vendors are keenly jumping on board the 100ZR train and have announced their development projects: Acacia, Coherent/ADVA, Marvell/InnoLight, and Marvell/OE Solutions. This is also why EFFECT Photonics has announced its plans to co-develop a 100G DSP with Credo Semiconductor that best fits 100ZR solutions in the QSFP28 form factor.
Takeaways
In the coming years, 100G coherent uplinks will become increasingly widespread in deployments and applications throughout the network edge. Some mobile access network use cases must upgrade their existing 10G DWDM link aggregation into a single coherent 100G DWDM uplink. Meanwhile, cable networks and business services are upgrading their customer links from 1 Gbps to 10 Gbps, and this migration will be the major factor driving demand for coherent 100G uplinks. For carriers who provide converged cable/mobile access, these upgrades to 100G uplinks will create opportunities to overlay more business services and mobile traffic onto their existing cable networks.
As the QSFP28 100ZR ecosystem expands, production will scale up, and these solutions will become more widespread and affordable, opening up even more use cases in access networks.
Tags: 5G, access, aggregation, backhaul, capacity, DWDM, fronthaul, Integrated Photonics, LightCounting, metro, midhaul, mobile, mobile access, network, optical networking, optical technology, photonic integrated chip, photonic integration, Photonics, PIC, PON, programmable photonic system-on-chip, solutions, technology

Enabling a 100ZR Ecosystem
Given the success of 400ZR pluggable coherent solutions in the market, discussions in the telecom…
Given the success of 400ZR pluggable coherent solutions in the market, discussions in the telecom sector about a future beyond 400G pluggables have often focused on 800G solutions and 800ZR. However, there is also increasing excitement about “downscaling” to 100G coherent products for applications in the network edge. The industry is labeling these pluggables as 100ZR.
A recently released Heavy Reading survey revealed that over 75% of operators surveyed believe that 100G coherent pluggable optics will be used extensively in their edge and access evolution strategy. In response to this interest from operators, several vendors are keenly jumping on board the 100ZR train by announcing their development projects: Acacia, Coherent/ADVA, Marvell/InnoLight, and Marvell/OE Solutions.
This growing interest and use cases for 100ZR are also changing how industry analysts view the potential of the 100ZR market. Last February, Cignal AI released a report on 100ZR which stated that the viability of new low-power solutions in the QSFP28 form factor enabled use cases in access networks, thus doubling the size of their 100ZR shipment forecasts.
“The access market needs a simple, pluggable, low-cost upgrade to the 10G DWDM optics that it has been using for years. 100ZR is that upgrade. As access networks migrate from 1G solutions to 10G solutions, 100ZR will be a critical enabling technology.”
Scott Wilkinson, Lead Analyst for Optical Components at Cignal AI.
The 100ZR market can expand even further, however. Access networks are heavily price-conscious, and the lower prices of 100ZR pluggables become, the more widely they will be adopted. Reaching such a goal requires a vibrant 100ZR ecosystem with multiple suppliers that can provide lasers, digital signal processors (DSPs), and full transceiver solutions that address the access market’s needs and price targets.
The Constraints of Power in the Access
Initially, 100G coherent solutions were focused on the QSFP-DD form factor that was popularized by 400ZR solutions. However, power consumption has prevented these QSFP-DD solutions from becoming more viable in the access network domain.
Unlike data centers and the network core, access network equipment lives in uncontrolled environments with limited cooling capabilities. Therefore, every extra watt of pluggable power consumption will impact how vendors and operators design their cabinets and equipment. QSFP-DD modules forced operators and equipment vendors to use larger cooling components (heatsinks and fans), meaning that each module would need more space to cool appropriately. The increased need for cabinet real estate makes these modules more costly to deploy in the access domain.
These struggles are a major reason why QSFP28 form factor solutions are becoming increasingly attractive in the 100ZR domain. Their power consumption (up to 6 watts) is lower than that of QSFP-DD form factors (up to 14 Watts), which allows them to be stacked more densely in access network equipment rooms. Besides, QSFP28 modules are compatible with existing access network equipment, which often features QSFP28 slots.
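Those maximum power ratings translate directly into stacking density. A minimal sketch, assuming a hypothetical 60 W thermal budget per shelf (the budget figure is purely illustrative, not from any standard):

```python
# Modules a fixed thermal budget can host, using the max power ratings
# quoted above. The 60 W per-shelf budget is a hypothetical example.
thermal_budget_w = 60
qsfp28_max_w = 6     # QSFP28: up to 6 W
qsfp_dd_max_w = 14   # QSFP-DD: up to 14 W

print(thermal_budget_w // qsfp28_max_w)   # 10 QSFP28 modules
print(thermal_budget_w // qsfp_dd_max_w)  # 4 QSFP-DD modules
```

Under the same thermal envelope, the QSFP28 shelf hosts more than twice as many coherent 100G ports, which is the density advantage the text describes.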
Ecosystems to Overcome the Laser and DSP Bottlenecks
Even though QSFP28 modules are better at addressing the power concerns of the access domain, some obstacles prevent their wider availability.
Since QSFP28 pluggables have a lower power consumption and slightly smaller footprint requirements, they also need new laser and DSP solutions. The industry cannot simply incorporate the same lasers and DSPs used for 400ZR devices. This is why EFFECT Photonics has announced its plans to develop a pico tunable laser assembly (pTLA) and co-develop a 100G DSP that will best fit 100ZR solutions in the QSFP28 form factor.
However, a 100ZR industry with only one or two laser and DSP suppliers will struggle to scale up and make these solutions more widely accessible. The 400ZR market provides a good example of the benefits of a vibrant ecosystem. Four vendors are currently shipping DSPs for 400ZR solutions, and even more companies have announced plans to develop DSPs. This larger vendor ecosystem will help 400ZR production scale up in volume and satisfy a rapidly growing market.
While the 100ZR market is smaller than the 400ZR one, its ecosystem must follow its example and expand to enable new use cases and further increase the market size.
Standards and Interoperability Make 100ZR More Widespread
Another reason 400ZR solutions became so widespread is their standardization and interoperability. Previously, the 400G space was more fragmented, and pluggables from different vendors could not operate with each other, forcing operators to use a single vendor for their entire network deployment.
Eventually, datacom and telecom providers approached their suppliers and the Optical Internetworking Forum (OIF) about the need to develop an interoperable 400G coherent solution that addressed their needs. These discussions and technology development led the OIF to publish the 400ZR implementation agreement in 2020. This standardization and interoperability effort enabled the explosive growth of the 400G market.
100ZR solutions must follow a similar path to reach a larger market. If telecom and datacom operators want more widespread and affordable 100ZR solutions, more of them will have to join the push for 100ZR standardization and interoperability. This includes standards not just for the power consumption and line interfaces but also for management and control interfaces, enabling more widespread use of remote provisioning and diagnostics. These efforts will make 100ZR devices easier to implement across access networks.
Takeaways
The demand from access network operators for 100ZR solutions is there, but it has yet to fully materialize in the industry forecasts because, right now, there is not enough supply of viable 100ZR solutions that can address their targets. So in a way, further growth of the 100ZR market is a self-fulfilling prophecy: the more suppliers and operators support 100ZR, the easier it is to scale up the supply and meet the price and power targets of access networks, expanding the potential market. Instead of one or two vendors fighting for control of a smaller 100ZR pie, having multiple vendors and standardization efforts will increase the supply, significantly increasing the size of the pie and benefiting everyone’s bottom line.
Therefore, EFFECT Photonics believes in the vision of a 100ZR ecosystem where multiple vendors can provide affordable laser, DSP, and complete transceiver solutions tailored to network edge use cases. Meanwhile, if network operators push towards greater standardization and interoperability, 100ZR solutions can become even more widespread and easy to use.
Tags: 100ZR, access networks, DSP, ecosystem, edge, laser, market, price, solutions

What DSPs Does the Cloud Edge Need?
By storing and processing data closer to the end user and reducing latency, smaller data…
By storing and processing data closer to the end user and reducing latency, smaller data centers on the network edge significantly impact how networks are designed and implemented. These benefits are causing the global market for edge data centers to explode, with PwC predicting that it will more than triple from $4 billion in 2017 to $13.5 billion in 2024. Various trends are driving the rise of the edge cloud: 5G networks and the Internet of Things (IoT), augmented and virtual reality applications, network function virtualization, and content delivery networks.
Several of these applications require lower latencies than before, and centralized cloud computing cannot deliver those data packets quickly enough. As shown in Table 1, a data center on a town or suburb aggregation point could halve the latency compared to a centralized hyperscale data center. Enterprises with their own data center on-premises can reduce latencies by 12 to 30 times compared to hyperscale data centers.
| Type of Edge | Datacenter | Location | Number of DCs per 10M people | Average Latency | Size |
|---|---|---|---|---|---|
| On-premises edge | Enterprise site | Businesses | NA | 2-5 ms | 1 rack max |
| Network edge (mobile) | Tower | Nationwide | 3000 | 10 ms | 2 racks max |
| Outer edge | Aggregation points | Town | 150 | 30 ms | 2-6 racks |
| Inner edge | Core | Major city | 10 | 40 ms | 10+ racks |
| Regional edge | Regional | Major city | 100 | 50 ms | 100+ racks |
| Not edge | Hyperscale | State/national | 1 | 60+ ms | 5000+ racks |
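The latency ratios quoted above follow directly from the average latencies in Table 1. As a quick sanity check (using the table's figures, with the lower bound of the hyperscale row):

```python
# Average latencies from Table 1, in milliseconds.
latency_ms = {
    "hyperscale": 60,        # "Not edge" row (60+ ms, lower bound)
    "outer_edge": 30,        # town/suburb aggregation point
    "on_premises": (2, 5),   # enterprise on-premises edge (2-5 ms)
}

# An aggregation-point data center halves the hyperscale latency:
print(latency_ms["hyperscale"] / latency_ms["outer_edge"])  # 2.0

# An on-premises data center cuts latency by a factor of 12 to 30:
low, high = latency_ms["on_premises"]
print(latency_ms["hyperscale"] / high, latency_ms["hyperscale"] / low)  # 12.0 30.0
```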
This situation leads to hyperscale data center providers cooperating with telecom operators to install their servers in the existing carrier infrastructure. For example, Amazon Web Services (AWS) is implementing edge technology in carrier networks and company premises (e.g., AWS Wavelength, AWS Outposts). Google and Microsoft have strategies and products that are very similar. In this context, edge computing poses a few problems for telecom providers. They must manage hundreds or thousands of new nodes that will be hard to control and maintain.
These conditions mean that optical transceivers for these networks, and thus their digital signal processors (DSPs), must have flexible, low power consumption and smart features that allow them to adapt to different network conditions.
Using Adaptable Power Settings
Reducing power consumption in the cloud edge is not just about reducing the maximum power consumption of transceivers. Transceivers and DSPs must also be smart and decide whether to operate on low- or high-power mode depending on the optical link budget and fiber length. For example, if the transceiver must operate at its maximum capacity, a programmable interface can be controlled remotely to set the amplifiers at maximum power. However, if the operator uses the transceiver for just half of the maximum capacity, the transceiver can operate with lower power on the amplifiers. The transceiver uses energy more efficiently and sustainably by adapting to these circumstances.
Fiber monitoring is also an essential variable in this equation. A smart DSP could change its modulation scheme or lower the power of its semiconductor optical amplifier (SOA) if telemetry data indicates a good quality fiber. Conversely, if the fiber quality is poor, the transceiver can transmit with a more limited modulation scheme or higher power to reduce bit errors. If the smart pluggable detects that the fiber length is relatively short, the laser transmitter power or the DSP power consumption could be scaled down to save energy.
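A minimal sketch of this kind of decision logic might look as follows. Note that the function name, telemetry fields, and thresholds below are illustrative assumptions for this article, not a real transceiver API; an actual DSP would use calibrated link-budget data.

```python
def choose_link_settings(required_gbps: int, max_gbps: int,
                         fiber_quality_db: float, fiber_length_km: float) -> dict:
    """Pick illustrative transceiver settings from link telemetry.

    All thresholds here are made-up placeholders for illustration only.
    """
    settings = {}

    # Run the amplifier (SOA) at full power only when the link needs full capacity.
    settings["soa_power"] = "high" if required_gbps >= max_gbps else "low"

    # Good fiber (high quality margin) allows a denser modulation scheme;
    # poor fiber falls back to a more robust one.
    settings["modulation"] = "16QAM" if fiber_quality_db >= 20.0 else "QPSK"

    # Short links need less transmit power, saving energy.
    settings["tx_power"] = "reduced" if fiber_length_km < 40.0 else "nominal"
    return settings

# A half-loaded transceiver on short, good-quality fiber runs in low-power mode:
print(choose_link_settings(200, 400, fiber_quality_db=23.0, fiber_length_km=10.0))
# {'soa_power': 'low', 'modulation': '16QAM', 'tx_power': 'reduced'}
```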
The Importance of a Co-Design Philosophy for DSPs
Transceiver developers often source their DSP, laser, and optical engine from different suppliers, so all these chips are designed separately. This setup reduces the time to market and simplifies the research and design processes, but it comes with performance and power consumption trade-offs.
In such cases, the DSP is like a Swiss army knife: a jack of all trades designed for different kinds of PICs but a master of none. Given the ever-increasing demand for capacity and the need for sustainability as both a financial and social responsibility, transceiver developers increasingly need a steak knife rather than a Swiss army knife.
As we explained in a previous article about fit-for-platform DSPs, a transceiver optical engine designed on the indium phosphide platform could be designed to run at a voltage compatible with the DSP’s signal output. This way, the optimized DSP could drive the PIC directly without needing a separate analog driver, doing away with a significant power conversion overhead compared to a silicon photonics setup, as shown in the figure below.
Scaling the Edge Cloud with Automation
With the rise of edge data centers, telecom providers must manage hundreds or thousands of new nodes that will be hard to control and maintain. Furthermore, providers also need a flexible network with pay-as-you-go scalability that can handle future capacity needs. Automation is vital to achieving such flexibility and scalability.
Automation potential improves further by combining artificial intelligence with the software-defined networks (SDNs) framework that virtualizes and centralizes network functions. This creates an automated and centralized management layer that can allocate resources efficiently and dynamically. For example, the AI network controller can take telemetry data from the whole network to decide where to route traffic and adjust power levels, reducing power consumption.
In this context, smart digital signal processors (DSPs) and transceivers can give the AI controller more degrees of freedom to optimize the network. They could provide more telemetry to the AI controller so that it makes better decisions. The AI management layer can then remotely control programmable interfaces in the transceiver and DSP so that the optical links can adjust the varying network conditions. If you want to know more about these topics, you can read last week’s article about transceivers in the age of AI.
Takeaways
Cloud-native applications require edge data centers that handle increased traffic and lower network latency. However, their implementation came with the challenges of more data center interconnects and a massive increase in nodes to manage. Scaling edge data center networks will require greater automation and more flexible power management, and smarter DSPs and transceivers will be vital to enable these goals.
Co-design approaches can optimize the interfacing of the DSP with the optical engine, making the transceiver more power efficient. Further power consumption gains can also be achieved with smarter DSPs and transceivers that provide telemetry data to centralized AI controllers. These smart network components can then adjust their power output based on the decisions and instructions of the AI controller.
Tags: 5G, AI, artificial intelligence, Augmented Reality, automation, cloud edge, Co-Design Philosophy, data centers, Delivery Networks, DSPs, Edge Cloud, Fiber Monitoring, Hyperscale Data Center, Internet of Things, IoT, latency, Network Function Virtualization, optical transceivers, power consumption, Software-Defined Networks, Telecom Operators, Virtual Reality

Transceivers in the Age of AI
Artificial intelligence (AI) will have a significant role in making optical networks more scalable, affordable,…
Artificial intelligence (AI) will have a significant role in making optical networks more scalable, affordable, and sustainable. It can gather information from devices across the optical network to identify patterns and make decisions independently without human input. By synergizing with other technologies, such as network function virtualization (NFV), AI can become a centralized management and orchestration network layer. Such a setup can fully automate network provisioning, diagnostics, and management, as shown in the diagram below.
However, artificial intelligence and machine learning algorithms are data-hungry. To work optimally, they need information from all network layers and ever-faster data centers to process it quickly. Pluggable optical transceivers thus need to become smarter, relaying more information back to the AI central unit, and faster, enabling increased AI processing.
Faster Transceivers for the Age of AI
Optical transceivers are crucial in developing better AI systems by facilitating the rapid, reliable data transmission these systems need to do their jobs. High-speed, high-bandwidth connections are essential to interconnect data centers and supercomputers that host AI systems and allow them to analyze a massive volume of data.
In addition, optical transceivers are essential for facilitating the development of artificial intelligence-based edge computing, which entails relocating compute resources to the network’s periphery. This is essential for facilitating the quick processing of data from Internet-of-Things (IoT) devices like sensors and cameras, which helps minimize latency and increase reaction times.
400 Gbps links are becoming the standard across data center interconnects, but providers are already considering the next steps. LightCounting forecasts significant growth in the shipments of dense-wavelength division multiplexing (DWDM) ports with data rates of 600G, 800G, and beyond in the next five years. We discuss these solutions in greater detail in our article about the roadmap to 800G and beyond.
Coherent Modules Need to Provide More Telemetry Data
Mobile networks now and in the future will consist of a massive number of devices, software applications, and technologies. Self-managed, zero-touch automated networks will be required to handle all these new devices and use cases. Realizing this full network automation requires two vital components.
- Artificial intelligence and machine learning algorithms for comprehensive network automation: For instance, AI in network management can drastically cut the energy usage of future telecom networks.
- Sensor and control data flow across all network model layers, including the physical layer: As networks grow in size and complexity, the management and orchestration (MANO) software needs more degrees of freedom and dials to turn.
These goals require smart optical equipment and components that provide comprehensive telemetry data about their status and the fiber they are connected to. The AI-controlled centralized management and orchestration layer can then use this data for remote management and diagnostics. We discuss this topic further in our previous article on remote provisioning, diagnostics, and management.
For example, a smart optical transceiver that fits this centralized AI-management model should relay data to the AI controller about fiber conditions. Such monitoring is not just limited to finding major faults or cuts in the fiber but also smaller degradations or delays in the fiber that stem from age, increased stress in the link due to increased traffic, and nonlinear optical effects. A transceiver that could relay all this data allows the AI controller to make better decisions about how to route traffic through the network.
A Smart Transceiver to Rule All Network Links
After relaying data to the AI management system, a smart pluggable transceiver must also switch parameters to adapt to different use cases and instructions given by the controller.
Let’s look at an example of forward error correction (FEC). FEC makes the coherent link much more tolerant to noise than a direct detect system and enables much longer reach and higher capacity. In other words, FEC algorithms allow the DSP to enhance the link performance without changing the hardware. This enhancement is analogous to imaging cameras: image processing algorithms allow the lenses inside your phone camera to produce a higher-quality image.
A smart transceiver and DSP could switch among different FEC algorithms to adapt to network performance and use cases. Let’s look at the case of upgrading a long metro link of 650km running at 100 Gbps with open FEC. The operator needs to increase that link capacity to 400 Gbps, but open FEC could struggle to provide the necessary link performance. However, if the transceiver can be remotely reconfigured to use a stronger proprietary FEC algorithm, it will be able to handle this upgraded link.
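The controller-side logic for such a reconfiguration could be sketched as below. The FEC mode names and coding-gain figures are rough placeholders chosen for illustration, not vendor or standard specifications:

```python
# Assumed net coding gains for two FEC modes (placeholder values, in dB).
FEC_GAIN_DB = {
    "open_fec": 11.1,      # interoperable open FEC
    "proprietary": 12.5,   # assumed stronger vendor-specific FEC
}

def select_fec(required_margin_db: float, interoperable_only: bool = True) -> str:
    """Return the first FEC mode whose coding gain covers the required margin."""
    modes = ["open_fec"] if interoperable_only else ["open_fec", "proprietary"]
    for mode in modes:
        if FEC_GAIN_DB[mode] >= required_margin_db:
            return mode
    raise ValueError("no available FEC mode closes the link budget")

# The upgraded 400G link needs more margin than open FEC provides, so the
# controller lets the transceiver fall back to the proprietary mode:
print(select_fec(12.0, interoperable_only=False))  # proprietary
```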
Reconfigurable transceivers can also be beneficial to auto-configure links to deal with specific network conditions, especially in brownfield links. Let’s return to the fiber monitoring subject we discussed in the previous section. A transceiver can change its modulation scheme or lower the power of its semiconductor optical amplifier (SOA) if telemetry data indicates a good quality fiber. Conversely, if the fiber quality is poor, the transceiver can transmit with a more limited modulation scheme or higher power to reduce bit errors. If the smart pluggable detects that the fiber length is relatively short, the laser transmitter power or the DSP power consumption could be scaled down to save energy.
Takeaways
Optical networks will need artificial intelligence and machine learning to scale more efficiently and affordably to handle the increased traffic and connected devices. Conversely, AI systems will also need faster pluggables than before to acquire data and make decisions more quickly. Pluggables that fit this new AI era must be fast, smart, and adapt to multiple use cases and conditions. They will need to scale up to speeds beyond 400G and relay monitoring data back to the AI management layer in the central office. The AI management layer can then program transceiver interfaces from this telemetry data to change parameters and optimize the network.
Tags: 800G, 800G and beyond, adaptation, affordable, AI, artificial intelligence, automation, CloudComputing, data, DataCenter, EFFECT Photonics, FEC, fiber quality, innovation, integration, laser arrays, machine learning, network conditions, network optimization, Networking, optical transceivers, photonic integration, Photonics, physical layer, programmable interface, scalable, sensor data flow, technology, Telecommunications, telemetry data, terabyte, upgrade, virtualization

Coherent Optics In Space
When it started, the space race was a competition between two superpowers, but now there…
When it started, the space race was a competition between two superpowers, but now there are 90 countries with missions in space.
The prices of space travel have gone down, making it possible for more than just governments to send rockets and satellites into space. Several private companies are now investing in space programs, looking for everything from scientific advances to business opportunities. Some reports estimate more than 10,000 companies in the space industry and around 5,000 investors.
According to The Space Foundation’s 2022 report, the space economy was worth $469 billion in 2021. The report says more spacecraft were launched in the first six months of 2021 than in the first 52 years of space exploration (1957-2009). This growing industry thus has a growing need for technology products across many disciplines, including telecommunications. The space sector will need lighter, more affordable telecommunication systems that also provide increased bandwidth.
This is why EFFECT Photonics sees future opportunities for coherent technology in the space industry. By translating the coherent transmission from fiber communication systems on the ground to free-space optical systems, the space sector can benefit from solutions with more bandwidth capacity and less power consumption than traditional point-to-point microwave links.
It’s all About SWaP
One of the major concerns of the space industry is the cost of sending anything into space. Even during the days of NASA’s Space Shuttle program (which featured a reusable shuttle unit), sending a kilogram into space cost tens of thousands of dollars. Over time, more rocket stages have become reusable due to the efforts of companies like SpaceX, reducing these costs to just a few thousand. The figure below shows how the cost of space flight has decreased significantly in the last two decades.
Even though space travel is more affordable than ever, size, weight, and power (SWaP) requirements are still vital in the space industry. After all, shaving off weight or size in the spacecraft means a less expensive launch or perhaps room for more scientific instruments. Meanwhile, less power consumption means less drain on the spacecraft’s energy sources.
Using Optics and Photonics to Minimize SWaP Requirements
Currently, most space missions use bulkier radio frequency communications to send data to and from spacecraft. While radio waves have a proven track record of success in space missions, generating and collecting more mission data requires enhanced communications capabilities. Besides, radiofrequency equipment can often generate a lot of heat, requiring more energy to cool the system.
Decreasing SWaP requirements can be achieved with more photonics and miniaturization. Transmitting data with light will usually dissipate less heat than transmission with electrical signals and radio waves. This leads to smaller, lighter communication systems that require less power to run.
These SWaP advantages come alongside the increased transmission speeds. After all, coherent optical communications can increase link capacities to spacecraft and satellites by 10 to 100 times that of radio frequency systems.
Leveraging Electronics Ecosystems for Space Certification and Standardization
While integrated photonics can boost space communications by lowering the payload, it must overcome the obstacles of a harsh space environment, which include radiation hardness, an extreme operational temperature range, and vacuum conditions.
| Mission Type | Temperature Range |
|---|---|
| Pressurized Module | +18.3 °C to +26.7 °C |
| Low-Earth Orbit (LEO) | -65 °C to +125 °C |
| Geosynchronous Equatorial Orbit (GEO) | -196 °C to +128 °C |
| Trans-Atmospheric Vehicle | -200 °C to +260 °C |
| Lunar Surface | -171 °C to +111 °C |
| Martian Surface | -143 °C to +27 °C |
The values in Table 1 show the unmanaged environmental temperatures in different space environments. In a temperature-managed area, these would decrease significantly for electronics and optics systems, perhaps by as much as half. Despite this management, the equipment would still need to deal with some extreme temperature values.
Fortunately, a substantial body of knowledge exists to make integrated photonics compatible with space environments. After all, photonic integrated circuits (PICs) use similar materials to their electronic counterparts, which have already been space qualified in many implementations.
Much research has gone into overcoming the challenges of packaging PICs with electronics and optical fibers for these space environments, which must include hermetic seals and avoid epoxies. Commercial solutions, such as those offered by PHIX Photonics Assembly, Technobis IPS, and the PIXAPP Photonic Packaging Pilot Line, are now available.
Takeaways
Whenever you want to send data from point A to B, photonics is usually the most efficient way of doing it, be it over fiber or free space.
Offering optical communication systems in a small integrated package that can resist the required environmental conditions will significantly benefit the space sector and its need to minimize SWaP requirements. These optical systems can increase their transmission capacity with the coherent optical transmission used in fiber optics. Furthermore, by leveraging the assembly and packaging structure of electronics for the space sector, photonics can also provide the systems with the ruggedness required to live in space.
Tags: certification, coherent, electronics, existing, fast, growing, heat dissipation, miniaturization, Optical Communication, Photonics, power consumption, size, space, space sector, speed, SWAP, temperature, weight

Designing in the Great Lakes
Last year, EFFECT Photonics announced the acquisition of the coherent optical digital signal processing (DSP)…
Last year, EFFECT Photonics announced the acquisition of the coherent optical digital signal processing (DSP) and forward error correction (FEC) business unit from the global communications company Viasat Inc. This also meant welcoming to the EFFECT Photonics family a new engineering team who will continue to work in the Cleveland area.
As EFFECT Photonics expands its influence into the American Midwest, it is interesting to dive deeper into Cleveland’s history with industry and technology. Cleveland has enjoyed a long history as a Midwest industrial hub, and as these traditional industries have declined, it is evolving into one of the high-tech hubs of the region.
Cleveland and the Industrial Revolution
Cleveland’s industrial sector expanded significantly in the 19th century because of the city’s proximity to several essential resources and transportation routes: coal and iron ore deposits, the Ohio and Erie Canal, and the Lake Erie railroad. For example, several steel mills, such as the Cleveland Rolling Mill Company and the Cleveland Iron and Steel Company, emerged because of the city’s proximity to Lake Erie, facilitating the transportation of raw materials and goods.
Building on the emerging iron and steel industries, heavy equipment production also found a home in Cleveland. Steam engines, railroad equipment, and other forms of heavy machinery were all manufactured in great quantities in the city.
Cleveland saw another massive boost to its industrial hub status with the birth of the Standard Oil Company in 1870. At the peak of its power, Standard Oil was the largest petroleum company in the world, and its success made its founder and head, John D. Rockefeller, one of the wealthiest men of all time. This history with petroleum also led to the emergence of Cleveland’s chemicals and materials industry.
Many immigrants moved to Cleveland, searching for work in these expanding industries, contributing to the city’s rapid population boom. This growth also prompted the development of new infrastructure like roads, railways and bridges to accommodate the influx of people.
Several important electrical and mechanical equipment manufacturers, including the Bendix Corporation, the White Motor Company, and the Western Electric Company (which supplied equipment to the US Bell System), also established their headquarters in or around Cleveland in the late 19th and early 20th century.
From Traditional Industry to Healthcare and High-Tech
In the second half of the 20th century, Cleveland’s traditional industries, such as steel and manufacturing, began to collapse. As was the case in many other great American manufacturing centers, automation, globalization, and other socioeconomic shifts all had a role in this decline. The demise of Cleveland’s core industries was a significant setback, but the city has made substantial efforts in recent years to diversify its economy and grow in new technology and healthcare areas.
For example, the Cleveland Clinic is one of the leading US academic medical centers, with pioneering medical breakthroughs such as the first coronary artery bypass surgery and the first face transplant in the United States. Institutions like theirs or the University Hospitals help establish Cleveland as a center for healthcare innovation.
Cleveland is also trying to evolve as a high-tech hub that attracts new workers and companies, especially in software development. Companies are attracted by the low office leasing and other operating costs, while the affordable living costs attract workers. As reported by the real estate firm CBRE, Cleveland’s tech workforce grew by 25 percent between 2016 and 2021, which was significantly above the national average of 12.8 percent.
A New Player in Cleveland’s High-Tech Industry
As Cleveland’s history as a tech hub continues, EFFECT Photonics is excited to join this emerging tech environment. Our new DSP team will find its new home in the Wagner Awning building in the Tremont neighborhood of Cleveland’s West Side.
This building was erected in 1895 and hosted a sewing factory that manufactured everything from tents and flotation devices for American soldiers and marines to awnings for Cleveland buildings. When the Ohio Awning company announced its relocation in 2015, this historic building began a redevelopment process to become a new office and apartment space.
EFFECT Photonics is proud to become a part of Cleveland’s rich and varied history with industry and technology. We hope our work can help develop this city further as a tech hub and attract more innovators and inventors to Cleveland.
Tags: digital signal processing (DSP), EFFECT Photonics, forward error correction (FEC), high-tech hub, industrial history, Integrated Photonics, Ohio Awning Company, Photonics, Tremont neighborhood, Viasat Inc., Wagner Awning building

What Tunable Lasers Does the Network Edge Need?
Several applications in the optical network edge would benefit from upgrading from 10G to 100G…
Several applications in the optical network edge would benefit from upgrading from 10G to 100G DWDM or from 100G grey to 100G DWDM optics:
- Business Services could scale enterprise bandwidth beyond single-channel 100G links.
- Fixed Access links could upgrade the uplinks of existing termination devices such as optical line terminals (OLTs) and Converged Cable Access Platforms (CCAPs) from 10G to 100G DWDM.
- Mobile Midhaul benefits from a seamless upgrade of existing links from 10G to 100G DWDM.
- Mobile Backhaul benefits from upgrading its links to 100G IPoDWDM.
The 100G coherent pluggables for these applications will have very low power consumption (less than 6 Watts) and be deployed in uncontrolled environments. To enable this next generation of coherent pluggables, the next generation of tunable lasers needs enhanced optical and electronic integration, more configurability so that users can optimize their pluggable footprint and power consumption, and the leveraging of electronic ecosystems.
The Past and Future Successes of Increased Integration
Over the last decade, technological progress in tunable laser packaging and integration has matched the need for smaller footprints. In 2011, tunable lasers followed the multi-source agreement (MSA) for integrable tunable laser assemblies (ITLAs). The ITLA package measured around 30.5 mm in width and 74 mm in length. By 2015, tunable lasers were sold in the more compact Micro-ITLA form factor, which cut the original ITLA package size in half. And in 2019, laser developers (see examples here and here) announced a new Nano-ITLA form factor that reduced the size by almost half again.
Integration also has a major impact on power consumption, since smaller, highly-integrated lasers usually consume less power than bulkier lasers with more discrete components. Making the laser smaller and more integrated means that the light inside the chip will travel a smaller distance and therefore accumulate fewer optical losses.
Reducing the footprint of tunable lasers in the future will need even greater integration of their component parts. For example, every tunable laser needs a wavelength locker component that can stabilize the laser’s output regardless of environmental conditions such as temperature. Integrating the wavelength locker component on the laser chip instead of attaching it externally would help reduce the laser package’s footprint and power consumption.
Configurability and Optimization
Another important aspect of optimizing pluggable module footprint and power consumption is allowing transceiver developers to mix and match their transceiver building blocks.
Let’s discuss an example of such configurability. The traditional ITLA in transceivers contains the temperature control driver and power converter functions. However, the main transceiver board can usually provide these functions too.
A setup in which the main board performs these driver and converter functions would avoid the need for redundant elements in both the main board and tunable laser. Furthermore, it would give the transceiver developer more freedom to choose the power converter and driver blocks that best fit their footprint and power consumption requirements.
Such configurability will be particularly useful in the context of the new generation of 100G coherent pluggables. After all, these 100G pluggables must fit tunable lasers, digital signal processors, and optical engines in a QSFP28 form factor that is slightly smaller than the QSFP-DD size used for 400G transceivers.
Looking Towards Electronics Style Packaging
The photonics production chain must be increasingly automated and standardized to save costs and increase accessibility. To achieve this goal, it is helpful to study established practices in the fields of electronics packaging, assembly, and testing.
By adopting techniques that are now common in electronics packaging, such as BGA-style packaging, flip-chip bonding, and passive optical fiber alignment, photonics packaging can also become more affordable and accessible. You can read more about these methods in our article about leveraging electronic ecosystems in photonics.
These kinds of packaging methods not only improve the scalability (and therefore cost) of laser production, but they can also further reduce the size of the laser.
Takeaways
Tunable lasers for coherent pluggable transceivers face the complex problem of maintaining a good enough performance while moving to smaller footprints, lower cost, and lower power consumption. Within a decade, the industry moved from the original integrable tunable laser assembly (ITLA) module to micro-ITLAs and then nano-ITLAs. Each generation had roughly half the footprint of the previous one.
However, the need for 100G coherent pluggables at the network edge imposes even tighter footprint and power consumption constraints on tunable lasers. Increased integration, more configurable laser and transceiver building blocks, and the leveraging of electronics ecosystems will help tunable lasers become smaller and more power-efficient, enabling these new use cases in edge and access networks.
Tags: automation, cost, efficiency, energy efficient, nano ITLA, optimization, power, size, smaller, testing, wavelength locker

What DSPs Does the Network Edge Need?
Operators are strongly interested in 100G pluggables that can house tunable coherent optics in compact, low-power form factors like QSFP28. A recently released Heavy Reading survey revealed that over 75% of operators surveyed believe that 100G coherent pluggable optics will be used extensively in their edge and access evolution strategy.
These new 100G coherent pluggables will have very low power consumption (less than six Watts) and will be deployed in uncontrolled environments, imposing new demands on coherent digital signal processors (DSPs). To enable this next generation of coherent pluggables in the network edge, the next generation of DSPs needs ultra-low power consumption, co-design with the optical engine, and industrial hardening.
The Power Requirements of the Network Edge
Several applications in the network edge can benefit from upgrading existing 10G DWDM or 100G grey links to 100G DWDM, such as the aggregation of fixed and mobile access networks and 100G data center interconnects for enterprises. However, network providers have often stuck with their 10G links because the existing 100G solutions do not check all the required boxes.
100G direct detect pluggables have a more limited reach and are not always compatible with DWDM systems. “Scaled-down” coherent 400ZR solutions have the required reach and tunability, but they are too expensive and power-hungry for edge applications. Besides, the ports in the small and medium IP routers used in edge deployments often support QSFP28 modules rather than the QSFP-DD modules commonly used in 400ZR.
The QSFP28 form factor imposes tighter footprint and power consumption constraints on coherent technologies compared to QSFP-DD modules. QSFP28 is slightly smaller, and most importantly, it can handle at most a 6-Watt power consumption, in contrast with the typical 15-Watt consumption of QSFP-DD modules in 400ZR links. Fortunately, the industry is moving towards a proper 100ZR solution in the QSFP28 form factor that balances performance, footprint, and power consumption requirements for the network edge.
These power requirements also impact DSP power consumption. DSPs constitute roughly 50% of coherent transceiver power consumption, so a DSP optimized for the network edge 100G use cases should aim to consume at most 2.5 to 3 Watts of power.
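The back-of-envelope budget behind that 2.5 to 3 Watt figure can be written out explicitly. The 6 W ceiling and the roughly 50% DSP share are the figures cited above; everything else is simple arithmetic.

```python
# Power budget implied by the figures above: a QSFP28 module may draw at most
# ~6 W, and the DSP typically accounts for roughly half of transceiver power.

MODULE_BUDGET_W = 6.0  # QSFP28 power ceiling
DSP_SHARE = 0.5        # DSP fraction of total transceiver power (~50%)

dsp_ceiling_w = MODULE_BUDGET_W * DSP_SHARE
print(f"A 100ZR DSP should stay under ~{dsp_ceiling_w:.1f} W")
```

Compare this with the roughly 7.5 W a DSP could consume in a 15 W 400ZR QSFP-DD module under the same 50% share: the edge DSP must do its job on less than half that power.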
Co-Designing and Adjusting for Power Efficiency
Achieving this ambitious power target will require scaling down performance in some areas and designing smartly in others. Let’s discuss a few examples below.
- Modulation: 400ZR transceivers use a more power-hungry 16QAM modulation. This modulation scheme uses sixteen different states that arise from combining four different intensity levels and four phases of light. The new generation of 100ZR transceivers might use some variant of a QPSK modulation, which only uses four states from four different phases of light.
- Forward Error Correction (FEC): DSPs in 400ZR transceivers use a more advanced concatenated FEC (CFEC) code, which combines inner and outer FEC codes to enhance the performance compared to a standard FEC code. The new 100ZR transceivers might use a more basic FEC type like GFEC. This is one of the earliest optical FEC algorithms and was adopted as part of the ITU G.709 specification.
- Co-Designing DSP and Optical Engine: As we explained in a previous article about fit-for-platform DSPs, a transceiver optical engine designed on the indium phosphide platform could be designed to run at a voltage compatible with the DSP’s signal output. This way, the optimized DSP could drive the PIC directly without needing a separate analog driver, doing away with a significant power conversion overhead compared to a silicon photonics setup, as shown in the figure below.
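The modulation trade-off in the first bullet can be made concrete with a small sketch. The constellations below are idealized textbook grids (unnormalized), not the exact shaped constellations a real transceiver uses.

```python
import math

# 16QAM: a 4x4 grid of in-phase/quadrature levels -> 16 states, 4 bits/symbol
qam16 = [complex(i, q) for i in (-3, -1, 1, 3) for q in (-3, -1, 1, 3)]

# QPSK: four phases at constant amplitude -> 4 states, 2 bits/symbol
qpsk = [complex(math.cos(p), math.sin(p))
        for p in (math.pi / 4, 3 * math.pi / 4, 5 * math.pi / 4, 7 * math.pi / 4)]

bits_16qam = int(math.log2(len(qam16)))  # 4 bits per symbol
bits_qpsk = int(math.log2(len(qpsk)))    # 2 bits per symbol

# Fewer states means larger spacing between them at the same transmit power,
# which relaxes the precision (and power) demands on the DSP's converters.
```

QPSK carries half the bits per symbol of 16QAM, but its four widely spaced states tolerate more noise, which is part of why it suits a lower-power 100G design.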
Industrial Hardening for DSPs
Traditionally, coherent devices have resided in the controlled settings of data center machine rooms or network provider equipment rooms. These rooms have active temperature control, cooling systems, dust and particle filters, airlocks, and humidity control. In such a setting, pluggable transceivers must operate within the so-called commercial temperature range (c-temp) from 0 to 70ºC.
On the other hand, the network edge often involves uncontrolled settings outdoors, at the whims of Mother Nature. A transceiver might sit at the top of an antenna, on a mountain range, inside a traffic tunnel, or in Northern Europe’s severe winters. For these outdoor settings, transceivers should operate in the industrial temperature range (I-temp) from -40 to 85ºC. Higher-altitude deployments pose additional challenges, too: because the air gets thinner, networking equipment cooling mechanisms become less effective, and the device cannot withstand casing temperatures as high as it could at sea level.
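The two operating ranges quoted above can be captured in a tiny helper; this is purely illustrative, not a qualification procedure.

```python
# Commercial (c-temp) and industrial (I-temp) operating ranges from the text.
C_TEMP = (0, 70)    # deg C: controlled machine/equipment rooms
I_TEMP = (-40, 85)  # deg C: uncontrolled outdoor deployments

def operates_in(case_temp_c: float, temp_range: tuple) -> bool:
    """Whether a given case temperature falls inside an operating range."""
    low, high = temp_range
    return low <= case_temp_c <= high

# A -25 degC winter deployment is out of spec for a c-temp module,
# but within spec for an industrially hardened I-temp module.
```

Running the helper for a -25ºC deployment returns False against the c-temp range and True against the I-temp range, which is exactly why edge transceivers need industrial hardening.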
Takeaways
Operators at the network edge could benefit from switching their existing direct detect or grey links to 100G DWDM coherent links. However, the industry needs more affordable and power-efficient transceivers and DSPs specifically designed for coherent 100G transmission in edge and access networks. By realizing DSPs that are co-designed with the optics, adjusted for reduced power consumption, and industrially hardened, the network edge will have coherent DSP and transceiver products adapted to its needs.
Tags: 100ZR, access network, co-design, controlled, edge, fit for platform DSP, InP, low power, pluggables, power consumption, power conversion, QSFP 28, QSFP-DD

A Story of Standards: From 400ZR and Open ROADM to OpenZR+
Coherent optical transmission has been crucial in addressing network operator problems for the last decade. In this time, coherent technology has expanded from a solution reserved only for premium long-distance links to one that impacts data center interconnects, metro, and access networks, as explained in the video below.
The development of the 400ZR standard by the Optical Internetworking Forum (OIF) proved to be a milestone in this regard. It was the result of several years of progress in electronic and photonic integration that enabled the miniaturization of 400G coherent systems into smaller pluggable form factors. With small enough modules to pack a router faceplate densely, the datacom sector could profit from an ideal solution for high-capacity data center interconnects. Telecom operators wanted to implement a similar solution in their metro links, so they combined the Open ROADM standardization initiative with the 400ZR initiative to develop the OpenZR+ agreement that better fits their use cases.
This article will elaborate on these standardization projects—400ZR, Open ROADM, and OpenZR+—and explain what use cases each was designed to tackle.
What Is 400G ZR?
To cope with growing bandwidth demands, providers wanted to implement the concept of IP over DWDM (IPoDWDM), in which tunable optics are integrated into the router. This integration eliminates the optical transponder shelf and the optics between the routers and DWDM systems, reducing the network capital expenditure (CAPEX). This is shown in the figure below.
However, widely deploying IPoDWDM with coherent optics forced providers to face a router faceplate trade-off. Since DWDM modules have traditionally been much larger than the client optics, plugging DWDM modules into the router required sacrificing roughly half of the costly router faceplate capacity. This was unacceptable for datacom and telecom providers, who approached their suppliers and the Optical Internetworking Forum (OIF) about the need to develop a compact and interoperable coherent solution that addressed this trade-off.
These discussions and technology development led the OIF to publish the 400ZR implementation agreement in 2020. 400ZR specifies a (relatively) low-cost and interoperable 400 Gigabit coherent interface for a link with a single optical wavelength, using a dual-polarization, 16-state quadrature amplitude modulation scheme (DP-16QAM). This modulation scheme uses sixteen different constellation points that arise from combining four different intensity levels and four phases. It doubles the usual 16QAM transmission capacity by encoding information in two different polarizations of light.
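As a rough numeric illustration of the capacity math above: 16 states give four bits per symbol, and two polarizations double that. The 60 GBd symbol rate below is an assumed round figure for a 400G-class coherent link, not an exact 400ZR parameter.

```python
import math

# DP-16QAM capacity per symbol: log2(states) bits per polarization,
# doubled by transmitting on two polarizations of light.
constellation_points = 16
polarizations = 2

bits_per_symbol = int(math.log2(constellation_points)) * polarizations  # 8

symbol_rate_gbd = 60  # assumed round figure for a 400G-class coherent link
raw_rate_gbps = symbol_rate_gbd * bits_per_symbol  # raw line rate before FEC/framing
```

Eight bits per symbol at roughly 60 GBd yields a raw line rate of 480 Gb/s, leaving headroom above 400G for FEC and framing overhead.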
The agreement specified a link reach of 80 kilometers without amplification and 120 kilometers with amplification. For forward error correction (FEC), the 400ZR standard supports a concatenated FEC (CFEC) method. CFEC combines inner and outer FEC codes to enhance the performance compared to a standard FEC code.
The 400ZR agreement does not specify a particular size or type of module, but its specifications targeted a footprint and power consumption that could fit in smaller modules such as the Quad Small Form-Factor Pluggable Double-Density (QSFP-DD) and Octal-Small Form-Factor Pluggable (OSFP). These form factors are small enough to provide the faceplate density that telecom and especially datacom operators need in their system architectures. So even if we often associate the 400ZR standard with QSFP-DD, other form factors, such as CFP2, can be used.
What Is Open ROADM?
In parallel with the 400ZR standardization efforts, telecom network operators had a different ongoing discussion.
Reconfigurable Optical Add-Drop Multiplexers (ROADMs) were a game-changer for optical communications when they entered the market in the 2000s. Before this technology, optical networks featured inefficient fixed routes and could not adapt to changes in traffic and demand. ROADMs allowed operators to remotely provision and manage their wavelength channels and bandwidth without redesigning the physical network infrastructure.
However, ROADMs were proprietary hardware with proprietary software. Changing the proprietary ROADM platform needed extensive testing and a lengthy integration process, so operators were usually reluctant to look for other platform alternatives. Besides, ROADMs still had several fixed, pre-defined elements that could have been configurable through open interfaces. This environment led to reduced competition and innovation in the ROADM space.
These trends drove the launch of the Open ROADM project in 2016 and the release of their first Multi-Source Agreement in 2017. The project aimed to disaggregate and open up these traditionally proprietary ROADM systems and make their provisioning and control more centralized through technologies such as software-defined networks (SDNs, explained in the diagram below).
The Open ROADM project defined three disaggregated functions (pluggable optics, transponder, and ROADM), all controlled through an open standards-based API that could be accessed through an SDN controller. It defined 100G-400G interfaces for both Ethernet and Optical Transport Networking (OTN) protocols with a link reach of up to 500km. It also defined a stronger FEC algorithm called open FEC (oFEC) to support this reach. oFEC provides a greater enhancement than CFEC at the cost of more overhead and energy.
What Is OpenZR+?
The 400ZR agreement was primarily focused on addressing the needs of large-scale data center operators and their suppliers.
While it had some usefulness for telecom network operators, their transport network links usually span several hundreds of kilometers, so the interface and module power consumption defined in the 400ZR agreement could not handle such an extended reach. Besides, network operators needed extra flexibility when defining the transmission rate and the modulation type of their links.
Therefore, soon after the publication of the 400ZR agreement, the OpenZR+ Multi-Source Agreement (MSA) was published in September 2020. As the diagram below explains, this agreement can be seen as a combination of the 400ZR and Open ROADM standardization efforts.
To better fit the telecom use cases of regional and long-haul transport links, OpenZR+ added a few changes to improve the link reach and flexibility over 400ZR:
- Using the more powerful oFEC defined by the Open ROADM standard.
- Multi-rate Ethernet that enables the multiplexing of 100G and 200G signals. This provides more options to optimize traffic in transport links.
- Support for 100G, 200G, 300G, or 400G transport links using different modulation types (QPSK, 8QAM, or 16QAM). This enables further reach and capacity optimization for fiber links.
- Higher dispersion compensation to make the fiber link more robust.
These changes allow QSFP-DD and OSFP modules to reach link lengths of up to 480 km (with optical amplifiers) at a 400G data rate. However, the FEC and dispersion compensation improvements that enable this extended reach come at the price of increased energy consumption. While the 400ZR standard targets a power consumption of 15 Watts, the OpenZR+ standard aims for a power consumption of up to 20 Watts.
If operators need more performance, distances above 500 km, and support for OTN traffic (400ZR and OpenZR+ only support Ethernet), they must still use proprietary solutions, which are informally called 400ZR+. These 400ZR+ solutions feature larger module sizes (CFP2), higher performance proprietary FEC, and higher launch powers to achieve longer reach. These more powerful features come at the cost of even more power consumption, which can go up to 25 Watts.
Takeaways
The following table summarizes the use cases and characteristics of the approaches discussed in the article: 400ZR, Open ROADM, OpenZR+, and 400ZR+.
| Technology | 400ZR | Open ROADM | OpenZR+ | 400ZR+ (Proprietary) |
|---|---|---|---|---|
| Target Application | Edge Data Center Interconnects | Carrier ROADM Mesh Networks | Metro/Regional Carrier and Data Center Interconnects | Long-Haul Carrier |
| Target Reach @ 400G | 120 km (amplified) | 500 km (amplified) | 480 km (amplified) | 1000 km (amplified) |
| Target Power Consumption | Up to 15 W | Up to 25 W | Up to 20 W | Up to 25 W |
| Typical Module Option | QSFP-DD/OSFP | CFP2 | QSFP-DD/OSFP | CFP2 |
| Client Interface | 400G Ethernet | 100-400G Ethernet and OTN | 100-400G Ethernet (multi-rate) | 100-400G Ethernet and OTN |
| Modulation Scheme | 16QAM | QPSK, 8QAM, 16QAM | QPSK, 8QAM, 16QAM | QPSK, 8QAM, 16QAM |
| Forward Error Correction | CFEC | oFEC | oFEC | Proprietary |
| Standards / MSA | OIF | Open ROADM MSA | OpenZR+ MSA | Proprietary |
400ZR is an agreement primarily focused on the needs of data center interconnects across distances of 80 to 120 kilometers. On the other hand, Open ROADM and OpenZR+ focused on addressing the needs of telecom carrier links, supporting link lengths of up to 500 km. These differences in reach are also reflected in the power consumption specs and the module form factors typically used. The 400ZR and OpenZR+ standards can only handle Ethernet traffic, while the Open ROADM and 400ZR+ solutions can handle both Ethernet and OTN traffic.
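To make these trade-offs queryable, the summary can be encoded as data. This hypothetical `candidates` helper is illustrative only: it checks just reach and OTN support and deliberately ignores cost, power, and form factor.

```python
# Encodes the comparison above as data. The selection logic is a sketch,
# not a real network planning tool.
STANDARDS = {
    "400ZR":      {"reach_km": 120,  "power_w": 15, "otn": False},
    "OpenZR+":    {"reach_km": 480,  "power_w": 20, "otn": False},
    "Open ROADM": {"reach_km": 500,  "power_w": 25, "otn": True},
    "400ZR+":     {"reach_km": 1000, "power_w": 25, "otn": True},
}

def candidates(reach_km: int, need_otn: bool) -> list:
    """Standards whose target reach covers the link and that carry the traffic type."""
    return [name for name, spec in STANDARDS.items()
            if spec["reach_km"] >= reach_km and (spec["otn"] or not need_otn)]

# A 450 km Ethernet-only link can use OpenZR+, Open ROADM, or 400ZR+;
# the same link carrying OTN traffic narrows to Open ROADM or 400ZR+.
```

This mirrors the takeaway: reach requirements narrow the field first, and OTN support then rules out 400ZR and OpenZR+ entirely.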
Tags: 400ZR, Data center, demand, disaggregation, extend, fiber, interoperable, Open ROADM, power, size, ZR+

The Impact of Optics and Photonics on the Agrifood Industry
Precision farming is essential in a world that will hold over 10 billion people by 2050 and whose food demand is expanding at an exponential pace. The 2019 World Resources Report from the World Resources Institute warns that at the current level of food production efficiency, feeding the world in 2050 would require “clearing most of the world’s remaining forests, wiping out thousands more species, and releasing enough greenhouse gas emissions to exceed the 1.5°C and 2°C warming targets enshrined in the Paris Agreement – even if emissions from all other human activities were entirely eliminated.”
Technology can help the agrifood industry improve efficiency and meet these demands by combining robotics, machine vision, and small sensors to precisely and automatically determine the care needed by plants and animals in our food supply chain. This approach helps control and optimize food production, resulting in more sustainable crops, higher yields, and safer food.
Sensors based on integrated photonics can enable many of these precision farming applications. Photonic chips are lighter and smaller than other solutions, so they can be deployed more easily in these agricultural use cases. The following article will provide examples of how integrated photonics and optical technology can add value to the agriculture and food industries.
How The World’s Tiny Agricultural Titan Minimizes Food Waste
The Netherlands is such a small country that if it were a US state, it would be among the ten smallest, with a land area between West Virginia’s and Maryland’s. Despite its size, the Food and Agriculture Organization of the United Nations (FAO) ranked the Netherlands as the second-largest food exporter in the world by revenue in 2020, behind only the United States and ahead of countries like Germany, China, and Brazil. These nations have tens or hundreds of times more arable land than the Dutch. Technology is a significant reason for this achievement, and the Dutch are arguably the most developed nation in the world regarding precision farming.
The hub of Dutch agrifood research and development is called the Food Valley, centered in the municipality of Wageningen in Gelderland province. In this area, many public and private R&D initiatives are carried out jointly with Wageningen University, a world-renowned leader in agricultural research.
When interviewed last year, Harrij Schmeitz, Director of the Fruit Tech Campus in Geldermalsen, mentioned the example of a local fruit supplier called Fruitmasters. They employ basic cameras to snap 140 photographs of each apple that travels through the sorting conveyor, all within a few milliseconds. These photographs are used to automatically create a 3D model that helps the conveyor line filter out rotten apples before they are packaged for customers. This process was done manually in the past, so the new 3D mapping technology significantly improves efficiency.
These techniques are not just constrained to Gelderland, of course. Jacob van den Borne is a potato farmer from Reusel in the province of North Brabant, roughly a half-hour drive from EFFECT Photonics’ Eindhoven headquarters. Van den Borne’s farm includes self-driving AVR harvesters (shown in the video below), and he has been using drones in his farms since 2011 to photograph his fields and study the soil quality and farming yield.
The drone pictures are used to create maps of the fields, which then inform farming decisions. Van den Borne can study the status of the soil before farming, but even after crops have sprouted, he can study which parts of the field are doing poorly and need more fertilization. These measures prevent food waste and the overuse of fertilizer and pesticides. For example, Van den Borne’s farms have eliminated pesticide chemicals in their greenhouses while boosting their yield. The global average yield of potatoes per acre is around nine tons, but his farms yield more than 20 tons per acre!
If you want to know more about Van den Borne and his use of technology and data, you can read this article.
Lighting to Reduce Power Consumption and Emissions
Artificial lighting is a frequent requirement of indoor plant production facilities to increase production and improve crop quality. Growers are turning to LED lighting because it is more efficient than traditional incandescent or fluorescent systems at converting electricity to light. LED lights are made through similar semiconductor manufacturing processes to photonics chips.
LED lighting also provides a greater variety of colors than the usual yellow/orange glow. Unlike high-pressure sodium or other traditional lighting systems, this technology allows growers to pick colors that match each plant’s demands from seedling through cultivation. Different colors of visible light drive photosynthesis and plant development differently, so LED lights can be tuned to the colors best suited to each development stage.
For example, suppose you roam around the Westland municipality of the Netherlands. You might occasionally catch a purple glow in the night skies, which has nothing to do with UFOs or aliens wanting to abduct you. As explained by Professor Leo Marcelis of Wageningen University (see the above video), researchers have found that red light is very good for plant growth, and mixing it with five to ten percent blue light gives even better results. Red and blue are also the most energy-efficient colors for LEDs, which helps reduce energy consumption even more. As a result, the farmers can save on light and energy use while the environment profits too.
Improving Communication Networks in the Era of Sensors
Modern farmers like Jacob van den Borne collect a large quantity of sensor data, which allows them to plan and learn how to provide plants with the perfect amount of water, light, and nutrients at the proper moment. Farmers can use these resources more efficiently and without waste thanks to this sensor information.
For example, Van den Borne uses wireless, Internet-of-Things sensors from companies like Sensoterra (video below) to gauge the soil’s water level. As we speak, researchers in the OnePlanet Research Center, a collaboration including the Imec R&D organization and Wageningen University, are developing nitrogen sensors that run on optical chips and can help keep nitrogen emissions in check.
These sensors will be connected to local servers and the internet for faster data transfer, so many of the issues and photonics solutions discussed in previous articles about the cloud edge and access networks are also relevant for agrifood sensors. Thus, improving optical communication networks will also impact the agrifood industry positively.
Takeaways
In a future of efficient high-tech and precision farming, optics and photonics will play an increasingly important role.
Optical sensors on a chip can be fast, accurate, small, and efficient. They will provide food producers with plenty of data to optimize their production processes and monitor the environmental impact of food production. Novel lighting methods can reduce the energy consumption of greenhouses and other indoor plant facilities. Meanwhile, photonics will also be vital to improving the capacity of the communications networks that these sensors run in.
With photonics-enabled precision processes, the agrifood industry can improve yields and supply, optimize resource use, reduce waste throughout the value chain, and minimize environmental impact.
Tags: atmosphere, demand, emissions, energy consumption, environment, future, high tech farming, high volume, Integrated Photonics, population growth, Precision agriculture, precision farming, process, resource, sensors, supply, waste

The Whats and Hows of Telcordia Standards
Before 1984, the world of telecom standards looked very different from how it does now. Such a world prominently featured closed systems like AT&T’s in the United States. They were stable and functional systems, but the lack of competition led to a sluggish pace of technological innovation. The breakup of the Bell System in the early 1980s, in which AT&T was forced to divest its local Bell operating and manufacturing units, caused a tectonic shift in the industry. As a result, new standards bodies rose to meet the demands of a reinvented telecom sector.
Bellcore, formerly Bell Communications Research, was one of the first organizations to answer this demand. Bellcore aided the Regional Bell Operating Companies by creating “generic requirements” (GR) documents that specified the design, operation, and purpose of telecom networks, equipment, and components. These GRs provided thorough criteria to help new suppliers design interoperable equipment, leading to the explosion of a new supplier ecosystem that made “GR conformant” equipment. An industry that relied on a few major suppliers thus became a more dynamic and competitive field, with carriers allowed to work with several suppliers almost overnight.
Bellcore is now Telcordia, and although the industry saw the emergence of other standards bodies, Telcordia still plays a major role in standardization by updating and producing new GR documents. Some of the most well-known documents are reliability prediction standards for commercial telecommunication products. Let’s discuss what these standards entail and why they matter in the industry.
What is the goal of Telcordia reliability standards?
Telecommunications carriers can use general requirements documents to select products that meet reliability and performance needs. The documents cover five sections:
- General Requirements, which discuss documentation, packaging, shipping, design features, product marking, safety and interoperability.
- Performance Requirements, which cover potential tests, as well as the performance criteria applied during testing.
- Service Life Tests, which mimic the stresses faced by the product in real-life use cases.
- Extended Service Life Tests, which verify long-term reliability.
- Reliability Assurance Program, which ensures satisfactory, long-term operation of products in a telecom plant.
Several of these specifications require environmental/thermal testing and often refer to other MIL-STD and EIA/TIA test specifications. Listed below are a few common Telcordia test specifications that require the use of environmental testing.
| Telcordia Generic Requirement | Description/Applicable Product |
|---|---|
| GR-49-CORE | Outdoor Telephone Network Interface Devices |
| GR-63-CORE | Network Equipment-Building System (NEBS) Requirements: Physical Protection |
| GR-326-CORE | Single-Mode Optical Connectors and Jumper Assemblies (Fiber Optics) |
| GR-468-CORE | Optoelectronic Devices Used in Telecommunications Equipment |
| GR-487-CORE | Electronic Equipment Cabinets (Enclosures) |
| GR-974-CORE | Telecommunications Line Protector Units (TLPUs) |
| GR-1209-CORE | Fiber Optic Branching Components |
| GR-1221-CORE | Passive Optical Components |
What are Telcordia tests like?
As an example, our optical transceivers at EFFECT Photonics comply with the Telcordia GR-468 qualification, which describes how to test optoelectronic devices for reliability under extreme conditions. Qualification depends upon maintaining optical integrity throughout an appropriate test regimen. Accelerated environmental tests are summarized below. The GR recommends that the chosen test regimen be constructed based on the expected conditions and stresses over the long-term life of a system and/or device.
Mechanical Reliability & Temperature Testing:

- Shock & Vibration
- High/Low Storage Temperature
- Temperature Cycling
- Damp Heat
- Cycled Moisture Resistance
- Hot Pluggability
- Mating Durability
- Accelerated Aging
- Life Expectancy Calculation
Our manufacturing facilities and partners include capabilities for the temperature cycling and reliability testing needed to match Telcordia standards, such as temperature cycling ovens and chambers with humidity control.
Why are Telcordia standards important?
Companies engage in telecom standards for several reasons:
- Strategic Advantage: Standards influence incumbents with well-established products differently than startups with “game changer” technologies. Following a technological standard helps incumbents win new business and safeguard their existing business: once your products are built around a standard, you have a vested stake in that standard, especially when a new vendor comes along with a box based on a new technology that offers identical functionality for a fraction of the price.
- Allocation of Resources: Standards are part of technology races. If a competitor doubles technical contributions to hasten the inclusion of their specialized technology into evolving standards, you need to know so you may react by committing additional resources or taking another action.
- Early Identification of Prospective Partners and Rivals: Standards help suppliers recognize competitors and potential partners to achieve business objectives. After all, the race is not necessarily won by the greatest technology, but by the player with the best business plans and the partners that can help realize the desired specification and design.
- Information Transfer: Most firms use standards to exchange information. Companies contribute technical contributions to standards groups to ensure that standards are as close as feasible to their business model and operations’ architecture and technology. Conversely, a company’s product and service developers must know about the current standards to guarantee that their goods and services support or adhere to industry standards, which clients expect.
Takeaways
One of our central company objectives is to bring the highest-performing optical technologies, such as coherent detection, all the way to the network edge. However, achieving this goal doesn’t just require us to focus on the optical or electronic side but also on meeting the mechanical and temperature reliability standards required to operate coherent devices outdoors. This is why it’s important for EFFECT Photonics to constantly follow and contribute to standards as it prepares its new product lines.
Tags: accelerated, AT&T, Bellcore, closed, coherent, innovation, monopoly, open, partners, reliability, resource allocation, service life, technology, Telcordia

Trends in Edge Networks
To see what is trending in the edge and access networks, we look at recent survey results from a poll by Omdia to see where current interests and expectations lie.
[Survey demographics: type of network operator surveyed; revenue of surveyed operators]
58% of the participants think that by the end of 2024, the relative use of 400G+ coherent optics in DWDM systems will lean towards Pluggable Optics versus 42% who think it will lean towards Embedded Optics.
54% of the participants think that by the end of 2024, 400G+ coherent optics integrated into routers/switches will mostly be pluggable optics with a -10 dBm launch power, while 46% think they will be pluggable optics with a 0 dBm launch power.
[Survey chart: most beneficial features for coherent tunable pluggables in network/operations]
[Survey chart: the level of importance of 100G coherent pluggable optics in the edge/access strategy]
Management options are a must-have, and EFFECT Photonics has much experience with NarroWave in Direct Detect.
75% of the respondents indicate that coherent pluggable optics are essential for their edge/access evolution strategy.
Reaching a 100ZR Future for Access Network Transport
In optical access networks, the 400ZR pluggables that have become mainstream in datacom applications are too expensive and power-hungry. Therefore, operators are strongly interested in 100G pluggables that can house coherent optics in compact form factors, just like 400ZR pluggables do. The industry is labeling these pluggables as 100ZR.
A recently released Heavy Reading survey revealed that over 75% of operators surveyed believe that 100G coherent pluggable optics will be used extensively in their edge and access evolution strategy. However, this interest had yet to materialize into a 100ZR market because no affordable or power-efficient products were available. The most the industry could offer was 400ZR pluggables that were “powered-down” for 100G capacity.
By embracing smaller and more customizable light sources, new optimized DSP designs, and high-volume manufacturing capabilities, we can develop native 100ZR solutions with lower costs that better fit edge and access networks.
Making Tunable Lasers Even Smaller?
Since the telecom and datacom industries want to pack more and more transceivers on a single router faceplate, integrable tunable laser assemblies (ITLAs) must maintain performance while moving to smaller footprints and lower power consumption and cost.
Fortunately, such ambitious specifications became possible thanks to improved photonic integration technology. The original 2011 ITLA standard from the Optical Internetworking Forum (OIF) was 74mm long by 30.5mm wide. By 2015, most tunable lasers shipped in a micro-ITLA form factor that cut the original ITLA footprint in half. In 2021, the nano-ITLA form factor designed for QSFP-DD and OSFP modules had once again cut the micro-ITLA footprint almost in half.
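As a rough sketch of this miniaturization, the footprints of the three ITLA generations mentioned above can be compared directly. This assumes the simple "half the footprint" scaling described in the text, not exact dimensions of any specific product.

```python
# Rough footprint arithmetic for the ITLA generations mentioned above.
# The original 2011 OIF ITLA measured 74 mm x 30.5 mm; per the text, the
# micro-ITLA cut that footprint roughly in half, and the nano-ITLA
# halved it again.
itla_area = 74 * 30.5                 # mm^2, original 2011 ITLA
micro_itla_area = itla_area / 2       # ~2015 micro-ITLA
nano_itla_area = micro_itla_area / 2  # ~2021 nano-ITLA

print(f"ITLA:       {itla_area:.0f} mm^2")
print(f"micro-ITLA: {micro_itla_area:.0f} mm^2")
print(f"nano-ITLA:  {nano_itla_area:.0f} mm^2")
# Roughly a 4x footprint reduction in a decade.
```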
There are still plenty of discussions over the future of ITLA packaging to fit the QSFP28 form factors of these new 100ZR transceivers. For example, every tunable laser needs a wavelength locker component that stabilizes the laser’s output regardless of environmental conditions such as temperature. Integrating that wavelength locker component with the laser chip would help reduce the laser package’s footprint.
Another potential way to reduce the size of tunable laser packages involves the control electronics. The current ITLA standards place the complete control electronics, including power conversion and temperature control, on the laser package. However, if the transceiver’s main board handles some of these electronic functions instead, the size of the laser package can be reduced.
This approach means that the reduced laser package would only have full functionality if connected to the main transceiver board. However, some transceiver developers will appreciate the laser package reduction and the extra freedom to provide their own laser control electronics.
Co-designing DSPs for Energy Efficiency
The 5-Watt power requirement of 100ZR in a QSFP28 form factor is a significant reduction compared to the 15-Watt specification of 400ZR transceivers in a QSFP-DD form factor. Achieving this reduction requires a digital signal processor (DSP) specifically optimized for the 100G transceiver.
Current DSPs are designed to be agnostic to the material platform of the photonic integrated circuit (PIC) they are connected to, which can be Indium Phosphide (InP) or Silicon. Thus, they do not exploit the intrinsic advantages of these material platforms. Co-designing the DSP chip alongside the PIC can lead to a much better fit between these components.
To illustrate the impact of co-designing PIC and DSP, let’s look at an example. A PIC and a standard platform-agnostic DSP typically operate with signals of differing intensities, so they need some RF analog electronic components to “talk” to each other. This signal power conversion overhead constitutes roughly 2-3 Watts or about 10-15% of transceiver power consumption.
However, the modulator of an InP PIC can run at a lower voltage than a silicon modulator. If this InP PIC and the DSP are designed and optimized together instead of using a standard DSP, the PIC could be designed to run at a voltage compatible with the DSP’s signal output. This way, the optimized DSP could drive the PIC directly without needing the RF analog driver, doing away with most of the power conversion overhead we discussed previously.
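The potential saving can be sketched with the figures quoted above: an RF interface overhead of roughly 2-3 W, which the text puts at about 10-15% of transceiver power. The total transceiver power used below is an illustrative assumption derived from those percentages, not a product specification.

```python
# A minimal sketch of the power-conversion overhead discussed above.
# If the 2-3 W RF analog interface is ~10-15% of the total, the total
# is roughly 20 W; both figures here are illustrative assumptions.
total_power_w = 20.0  # assumed total transceiver power
rf_overhead_w = 2.5   # mid-point of the 2-3 W overhead quoted above

codesigned_power_w = total_power_w - rf_overhead_w
savings_pct = 100 * rf_overhead_w / total_power_w
print(f"Platform-agnostic DSP: {total_power_w:.1f} W")
print(f"Co-designed DSP+PIC:   {codesigned_power_w:.1f} W "
      f"({savings_pct:.0f}% saved by dropping the RF driver)")
```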
Optical Subassemblies That Leverage Electronic Ecosystems
To become more accessible and affordable, the photonics manufacturing chain can learn from electronics packaging, assembly, and testing methods that are already well-known and standardized. After all, building a special production line is much more expensive than modifying an existing production flow.
There are several ways in which photonics packaging, assembly, and testing can be made more affordable and accessible: passive alignments of the optical fiber, BGA-style packaging, and flip-chip bonding. Making these techniques more widespread will make a massive difference in photonics’ ability to scale up and become as available as electronics. To read more about them, please read our previous article.
Takeaways
The interest in novel 100ZR coherent pluggable optics for edge and access applications is strong, but the market has struggled to provide “native” and specific 100ZR solutions to address this interest. Transceiver developers need to embrace several new technological approaches to develop these solutions. They will need smaller tunable laser packages that can fit the QSFP28 form factors of 100ZR solutions, optimized and co-designed DSPs that meet the reduced power consumption goals, and sub-assemblies that leverage electronic ecosystems for increased scale and reduced cost.
Tags: 100 ZR, 100G, 100ZR, 400ZR, C-band, C-PON, coherent, DSP, filters, future proof, Korea, laser sources, O-band, Packaging, pluggable, roadmap, S-band

Remote Provisioning and Management for Edge Networks
Smaller data centers near the end user can reduce latency, overcome inconsistencies in connectivity, and store and compute data closer to the end user. According to PricewaterhouseCoopers, these advantages will drive the worldwide market for edge data centers to more than triple from $4 billion in 2017 to $13.5 billion in 2024. With the increased use of edge computing, more high-speed transceivers are required to link edge data centers. According to Cignal AI, the number of 100G equivalent ports sold for edge applications will double between 2022 and 2025, as indicated in the graph below.
The increase in edge infrastructure comes with many network provisioning and management challenges. While typical data centers were built in centralized and controlled environments, edge deployments will live in remote and uncontrolled environments because they need to be close to where the data is generated. For example, edge infrastructure could be a server plugged into the middle of a busy factory floor to collect data more quickly from their equipment sensors.
This increase in edge infrastructure will provide plenty of headaches to network operators, who also need to scale up their networks to handle the increased bandwidth and number of connections. Updating more equipment requires more truck rolls, an approach that won’t scale cost-effectively, which is why many companies simply prefer not to upgrade and modernize their infrastructure.
Towards Zero-Touch Provisioning
A zero-touch provisioning model would represent a major shift in an operator’s ability to upgrade their network equipment. The network administrator could automate the configuration and provisioning of each unit from their central office, ship the units to each remote site, and the personnel in that site (who don’t need any technical experience!) just need to power up the unit. After turning them on, they could be further provisioned, managed, and monitored by experts anywhere in the world.
The optical transceivers potentially connected to some of these edge nodes already have the tools to be part of such a zero-touch provisioning paradigm. Many transceivers have a plug-and-play operation that does not require an expert on the remote site. For example, the central office can already program specific parameters of the optical link, such as temperature, wavelength drift, dispersion, or signal-to-noise ratio, or even which specific wavelength to use. The latter wavelength self-tuning application is shown in Figure 2.
Once plugged in, the transceiver will set the operational parameters as programmed and communicate with the central office for confirmation. These provisioning options make deployment much easier for network operators.
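The provisioning flow described above can be sketched in a few lines of code. Every class, field, and function name here is hypothetical, invented for illustration; no real transceiver management API is implied.

```python
# A hypothetical sketch of zero-touch provisioning: the central office
# pre-programs the link parameters named above (wavelength, drift,
# dispersion, signal-to-noise thresholds), and the remote unit applies
# them at power-up, then reports back. All names are illustrative.
from dataclasses import dataclass

@dataclass
class LinkProfile:
    wavelength_nm: float            # target DWDM wavelength
    max_wavelength_drift_nm: float  # allowed drift before alarming
    dispersion_ps_nm: float         # expected chromatic dispersion
    min_osnr_db: float              # minimum acceptable SNR

def provision(profile: LinkProfile) -> dict:
    """Simulate the unit applying its pre-programmed profile at power-up."""
    return {
        "wavelength_nm": profile.wavelength_nm,
        "status": "provisioned",  # confirmation sent to the central office
    }

# The central office prepares the profile before shipping the unit:
profile = LinkProfile(1550.12, 0.02, 800.0, 26.0)
print(provision(profile))
```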
Enabling Remote Diagnostics and Management
The same channel that establishes parameters remotely during the provisioning phase can also perform monitoring and diagnostics afterward. The headend module in the central office could remotely modify certain aspects of the tail-end module in the remote site, effectively enabling several remote management and diagnostics options. The figure below provides a visualization of such a scenario.
The central office can remotely measure metrics such as the transceiver temperature and power transmitted and received. These metrics can provide a quick and useful health check of the link. The headend module can also remotely read alarms for low/high values of these metrics.
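A health check built on these remotely readable metrics could look like the sketch below. The thresholds are made-up example values for illustration, not figures from any transceiver datasheet or standard.

```python
# An illustrative health check using the remotely readable metrics
# mentioned above (temperature, transmitted/received power).
# Threshold values are assumptions, not from any specification.
def link_health(temp_c, tx_power_dbm, rx_power_dbm,
                temp_limit_c=80.0, min_rx_dbm=-25.0):
    alarms = []
    if temp_c > temp_limit_c:
        alarms.append("high temperature")
    if rx_power_dbm < min_rx_dbm:
        alarms.append("low received power")
    return {"healthy": not alarms, "alarms": alarms}

print(link_health(45.0, 0.0, -12.0))  # nominal link, no alarms
print(link_health(85.0, 0.0, -28.0))  # raises both alarms
```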
These remote diagnostics and management features can eliminate certain truck rolls and save more operational expenses. They are especially convenient when dealing with very remote and hard-to-reach sites (e.g., an antenna tower) that require expensive truck rolls.
Remote Diagnostics and Control for Energy Sustainability
To talk about the impact of remote control on energy sustainability, we first must review the concept of performance margins. This number is a vital measure of received signal quality. It determines how much room there is for the signal to degrade without impacting the error-free operation of the optical link.
In the past, network designers played it safe, maintaining large margins to ensure a robust network operation in different conditions. However, these higher margins usually require higher transmitter power and power consumption. Network management software can use the remote diagnostics provided by this new generation of transceivers to develop tighter, more accurate optical link budgets in real time that require lower residual margins. This could lower the required transceiver powers and save valuable energy.
Another related sustainability feature is deciding whether to operate on low- or high-power mode depending on the optical link budget and fiber length. For example, if the transceiver needs to operate at its maximum capacity, a programmable interface can be controlled remotely to set the amplifiers at their maximum power. However, if the operator uses the transceiver for just half of the maximum capacity, the transceiver can operate with a smaller residual margin and use lower power on the amplifier. The transceiver uses energy more efficiently and sustainably by adapting to these circumstances.
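The low/high-power decision above can be sketched as a simple rule on capacity utilization. The 50% threshold mirrors the half-capacity example in the text; real implementations would derive the decision from the full optical link budget.

```python
# A sketch of margin-aware power adaptation as described above: when the
# transceiver runs well below maximum capacity, it can accept a smaller
# residual margin and lower the amplifier power. Threshold is illustrative.
def choose_amplifier_mode(used_capacity_gbps, max_capacity_gbps):
    utilization = used_capacity_gbps / max_capacity_gbps
    # Full load needs the full margin; half load or less tolerates less.
    return "high-power" if utilization > 0.5 else "low-power"

print(choose_amplifier_mode(400, 400))  # full capacity
print(choose_amplifier_mode(200, 400))  # half capacity
```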
If the industry wants interoperability between different transceiver vendors, these kinds of power parameters for remote management and control should also be standardized.
Takeaways
As edge networks get bigger and more complex, network operators and designers need more knobs and degrees of freedom to optimize network architecture and performance and thus scale networks cost-effectively.
The new generation of transceivers has the tools for remote provisioning, management, and control, which gives optical networks more degrees of freedom for optimization and reduces the need for expensive truck rolls. These benefits make edge networks simpler, more affordable, and more sustainable to build and operate.
Tags: access network, capacity, cost, distributed access networks, DWDM, inventory stock, Loss of signal, maintenance, monitor, optical networks, plug-and-play, remote, remote control, scale, scaling, self-tuning, time

What is 100ZR and Why Does it Matter?
In June 2022, transceiver developer II‐VI Incorporated (now Coherent Corp.) and optical networking solutions provider ADVA announced the launch of the industry’s first 100ZR pluggable coherent transceiver. Discussions in the telecom sector about a future beyond 400G coherent technology have usually focused on 800G products, but there is increasing excitement about “downscaling” to 100G coherent products for certain applications in the network edge and business services. This article will discuss the market and technology forces that drive this change in discourse.
The Need for 100G Transmission in Telecom Deployments
The 400ZR pluggables that have become mainstream in datacom applications are too expensive and power-hungry for the optical network edge. Therefore, operators are strongly interested in 100G pluggables that can house coherent optics in compact form factors, just like 400ZR pluggables do. The industry is labeling these pluggables as 100ZR.
A recently released Heavy Reading survey revealed that over 75% of operators surveyed believe that 100G coherent pluggable optics will be used extensively in their edge and access evolution strategy. However, this interest had not really materialized into a 100ZR market because no affordable or power-efficient products were available. The most the industry could offer was 400ZR pluggables that were “powered-down” for 100G capacity.
100ZR and its Enabling Technologies
With the recent II-VI Incorporated and ADVA announcement, the industry is showing its first attempts at a native 100ZR solution that can provide a true alternative to the powered-down 400ZR products. Some of the key specifications of this novel 100ZR solution include:
- A QSFP28 form factor, similar to but slightly smaller than a QSFP-DD
- 5 Watt power consumption
- C-temp and I-temp certifications to handle harsh environments
The 5-Watt power requirement is a major reduction compared to the 15-Watt specification of 400ZR transceivers in the QSFP-DD form factor. Achieving this spec requires a digital signal processor (DSP) that is specifically optimized for the 100G transceiver.
Transceiver developers often source their DSP, laser, and optical engine from different suppliers, so all these chips are designed separately from each other. This setup reduces the time to market and simplifies the research and design processes but comes with performance and power consumption trade-offs.
In such cases, the DSP is like a Swiss army knife: a jack of all trades designed for different kinds of optical engines but a master of none. DSPs co-designed and optimized for their specific optical engine and laser can significantly improve power efficiency. You can read more about co-design approaches in one of our previous articles.
Achieving 100ZR Cost-Efficiency through Scale
Making 100ZR coherent optical transceivers more affordable is also a matter of volume production. As discussed in a previous article, if PIC production volumes can increase from a few thousand chips per year to a few million, the price per optical chip can decrease from thousands of Euros to mere tens of Euros. Such manufacturing scale demands a higher upfront investment, but the result is a more accessible product that more customers can purchase.
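The economy-of-scale argument can be sketched as a toy amortization model: a fixed upfront manufacturing investment spread over the production volume plus a per-chip marginal cost. The cost figures below are illustrative assumptions chosen to match the thousands-to-tens-of-Euros trajectory described in the text, not industry data.

```python
# A toy amortization model of the economy-of-scale argument above.
# All Euro figures are illustrative assumptions, not industry data.
def price_per_chip(upfront_cost_eur, marginal_cost_eur, volume):
    return upfront_cost_eur / volume + marginal_cost_eur

low_volume = price_per_chip(5_000_000, 20, 5_000)        # thousands of chips/year
high_volume = price_per_chip(50_000_000, 20, 2_000_000)  # millions of chips/year
print(f"Low volume:  ~{low_volume:.0f} EUR per chip")
print(f"High volume: ~{high_volume:.0f} EUR per chip")
# Higher upfront investment, but the per-chip price falls from the
# thousands into the tens of Euros.
```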
Achieving this production goal requires photonics manufacturing chains to learn from electronics and leverage existing electronics manufacturing processes and ecosystems. Furthermore, transceiver developers must look for trusted large-scale manufacturing partners to guarantee a secure and high-volume supply of chips and packages.
If you want to know more about how photonics developers can leverage electronic ecosystems and methods, we recommend you read our in-depth piece on the subject.
Takeaways
As the Heavy Reading survey showed, the interest in 100G coherent pluggable optics for edge/access applications is strong, and operators have identified key use cases within their networks. In the past, there were no true 100ZR solutions to address this interest, but optimized DSPs and light sources, as well as high-volume manufacturing capabilities, can finally deliver a viable and affordable 100ZR product.
Tags: 100G coherent, 100ZR, DSP, DSPs, edge and access applications, EFFECT Photonics, Photonics

Fit for Platform DSPs
Over the last two decades, power ratings for pluggable modules have increased as we moved from direct detection to more power-hungry coherent transmission: from 2W for SFP modules to 3.5W for QSFP modules, and now to 14W for QSFP-DD and 21.1W for OSFP form factors. Rockley Photonics researchers estimate that a future electronic switch filled with 800G modules would draw around 1 kW of power just for the optical modules.
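A back-of-the-envelope check of that ~1 kW figure is straightforward. The per-module power for a future 800G pluggable is an assumption here (roughly 30 W, extrapolating upward from the 14-21 W of today's 400G-class modules), as is the port count.

```python
# Back-of-the-envelope check of the ~1 kW estimate above.
ports = 32                     # assumed faceplate count for a 1RU switch
power_per_800g_module_w = 30   # assumed figure, not a published spec

total_w = ports * power_per_800g_module_w
print(f"Optical modules alone: {total_w} W")  # close to 1 kW
```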
Around 50% of a coherent transceiver’s power consumption goes into the digital signal processing (DSP) chip that also performs the functions of clock data recovery (CDR), optical-electrical gear-boxing, and lane switching. Scaling to higher bandwidths leads to even more losses and energy consumption from the DSP chip and its radiofrequency (RF) interconnects with the optical engine.
One way to reduce transceiver power consumption requires designing DSPs that take advantage of the material platform of their optical engine. In this article, we will elaborate on what that means for the Indium Phosphide platform.
A Jack of All Trades but a Master of None
Transceiver developers often source their DSP, laser, and optical engine from different suppliers, so all these chips are designed separately from each other. This setup reduces the time to market and simplifies the research and design processes but comes with trade-offs in performance and power consumption.
In such cases, the DSP is like a Swiss army knife: a jack of all trades designed for different kinds of optical engines but a master of none. For example, current DSPs are designed to be agnostic to the material platform of the photonic integrated circuit (PIC) they are connected to, which can be Indium Phosphide (InP) or Silicon. Thus, they do not exploit the intrinsic advantages of these material platforms. Co-designing the DSP chip alongside the PIC can lead to a much better fit between these components.
Co-Designing with Indium Phosphide PICs for Power Efficiency
To illustrate the impact of co-designing PIC and DSP, let’s look at an example. A PIC and a standard platform-agnostic DSP typically operate with signals of differing intensities, so they need some RF analog electronic components to “talk” to each other. This signal power conversion overhead constitutes roughly 2-3 Watts or about 10-15% of transceiver power consumption.
However, the modulator of an InP PIC can run at a lower voltage than a silicon modulator. If this InP PIC and the DSP are designed and optimized together instead of using a standard DSP, the PIC could be designed to run at a voltage compatible with the DSP’s signal output. This way, the optimized DSP could drive the PIC directly without needing the RF analog driver, doing away with most of the power conversion overhead we discussed previously.
Additionally, the optimized DSP could also be programmed to do some additional signal conditioning that minimizes the nonlinear optical effects of the InP material, which can reduce noise and improve performance.
Taking Advantage of Active Components in the InP Platform
Russell Fuerst, EFFECT Photonics’ Vice-President of Digital Signal Processing, gave us an interesting insight about designing for the InP platform in a previous interview:
When we started doing coherent DSP designs for optical communication over a decade ago, we pulled many solutions from the RF wireless and satellite communications space into our initial designs. Still, we couldn’t bring all those solutions to the optical markets.
However, when you get more of the InP active components involved, some of those solutions can finally be brought over and utilized. They were not used before in our designs for silicon photonics because silicon is not an active medium and lacked the performance to exploit these advanced techniques.
For example, the fact that the DSP could control laser and modulator components on the InP can lead to some interesting manipulations of light signals. A DSP that can control these components directly could generate proprietary waveforms or use non-standard constellation and modulation schemes that can boost the performance of a coherent transceiver and increase the capacity of the link.
Takeaways
The biggest problem for DSP designers is still improving performance while reducing power use. This problem can be solved by finding ways to integrate the DSP more deeply with the InP platform, such as letting the DSP control the laser and modulator directly to develop new waveform shaping and modulation schemes. Because the InP platforms have active components, DSP designers can also import more solutions from the RF wireless space.
Tags: analog electronics, building blocks, coherent, dispersion compensation, DSP, energy efficiency, Intra DCI, Photonics, PON, power consumption, reach, simplified

The Power of Integrated Photonic LIDAR
Outside of communications applications, photonics can play a major role in sensing and imaging applications. The most well-known of these sensing applications is Light Detection and Ranging (LIDAR), which is the light-based cousin of RADAR systems that use radio waves.
To put it simply: LIDAR involves sending out a pulse of light, receiving it back, and using a computer to study how the environment changes that pulse. It’s a simple but quite powerful concept.
If we send pulses of light to a wall and listen to how long it takes for them to come back, we know how far that wall is. That is the basis of time-of-flight (TOF) LIDAR. If we send a pulse of light with multiple wavelengths to an object, we know where the object is and whether it is moving towards you or away from you. That is next-gen LIDAR, known as FMCW LIDAR. These technologies are already used in self-driving cars to figure out the location and distance of other cars. The following video provides a short explainer of how LIDAR works in self-driving cars.
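The time-of-flight principle described above is just distance = speed of light × round-trip time / 2, as in this minimal sketch:

```python
# The time-of-flight (TOF) LIDAR principle described above: distance is
# half the round-trip time multiplied by the speed of light.
C = 299_792_458.0  # speed of light in m/s

def tof_distance_m(round_trip_s: float) -> float:
    return C * round_trip_s / 2

# A pulse returning after 200 nanoseconds corresponds to an object
# roughly 30 meters away:
print(f"{tof_distance_m(200e-9):.1f} m")
```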
Despite their usefulness, the wider implementation of LIDAR systems is limited by their size, weight, and power (SWAP) requirements. Or, to put it bluntly, they are bulky and expensive. For example, maybe you have seen pictures and videos of self-driving cars with a large LIDAR sensor and scanner on the roof of the car, as in the image below.
Making LIDAR systems more affordable and lighter requires integrating the optical components more tightly and manufacturing them at a higher volume. Unsurprisingly, this sounds like a problem that could be solved by integrated photonics.
Replacing Bulk LIDAR with “LIDAR on Chip”
Back in 2019, Tesla CEO Elon Musk famously said that “Anyone relying on LIDAR is doomed”. And his scepticism had some substance to it. LIDAR sensors were clunky and expensive, and it wasn’t clear that they would be a better solution than just using regular cameras with huge amounts of visual analysis software. However, the incentive to dominate the future of the automotive sector was too big, and a technology arms race had already begun to miniaturize LIDAR systems into a single photonic chip.
Let’s provide a key example. A typical LIDAR system will require a mechanical system that moves the light source around to scan the environment. This could be as simple as a 360-rotating LIDAR scanner or using small scanning mirrors to steer the beam. However, an even better solution would be to create a LIDAR scanner with no moving parts that could be manufactured at a massive scale on a typical semiconductor process.
This is where optical phased array (OPA) systems come in. An OPA system splits the output of a tunable laser into multiple channels and applies a different time delay to each channel. The OPA then recombines the channels, and depending on the time delays assigned, the resulting light beam exits at a different angle. In other words, an OPA system can steer a beam of light from a semiconductor chip without any moving parts.
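The steering relation behind an OPA can be sketched with the standard phased-array formula: a constant phase step between adjacent emitters tilts the output beam, with sin(θ) = λ·Δφ / (2π·d) for emitter spacing d and per-channel phase step Δφ. The numeric values below (emitter pitch, phase step) are illustrative assumptions.

```python
# A sketch of the phased-array steering relation behind the OPA above:
# a constant phase step between adjacent emitters tilts the beam.
import math

def steering_angle_deg(wavelength_m, spacing_m, phase_step_rad):
    # sin(theta) = wavelength * phase_step / (2 * pi * spacing)
    return math.degrees(math.asin(
        wavelength_m * phase_step_rad / (2 * math.pi * spacing_m)))

# 1550 nm tunable laser, 2 um emitter pitch, quarter-cycle phase step
# (all illustrative values):
angle = steering_angle_deg(1550e-9, 2e-6, math.pi / 2)
print(f"Beam steered to {angle:.1f} degrees off-axis")
```

Sweeping the phase step electronically sweeps the beam angle, which is exactly what replaces the mechanical scanner.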
There is still plenty of development required to bring OPAs into maturity. Victor Dolores Calzadilla, a researcher from the Eindhoven University of Technology (TU/e) explains that “The OPA is the biggest bottleneck for achieving a truly solid-state, monolithic lidar. Many lidar building blocks, such as photodetectors and optical amplifiers, were developed years ago for other applications, like telecommunication. Even though they’re generally not yet optimized for lidar, they are available in principle. OPAs were not needed in telecom, so work on them started much later. This component is the least mature.”
Economics of Scale in LIDAR Systems
Wafer-scale photonics manufacturing demands a higher upfront investment, but the resulting high-volume production line drives down the cost per device. This economy-of-scale principle is the same one behind electronics manufacturing, and the same must be applied to photonics. The more optical components we can integrate into a single chip, the more the price of each component can decrease. The more optical System-on-Chip (SoC) devices that fit on a single wafer, the more the price of each SoC can decrease.
Researchers at the Technical University of Eindhoven and the JePPIX consortium have done some modelling to show how this economy-of-scale principle would apply to photonics. If production volumes can increase from a few thousand chips per year to a few million, the price per optical chip can decrease from thousands of Euros to mere tens of Euros. This must be the goal for the LIDAR and automotive industries.
By integrating all optical components on a single chip, we also shift the complexity from the assembly process to the much more efficient and scalable semiconductor wafer process. Assembling and packaging a device by interconnecting multiple photonic chips increases assembly complexity and costs. On the other hand, combining and aligning optical components on a wafer at a high volume is much easier, which drives down the device’s cost.
Using Proven Photonics Technologies for Automotive Standards
Another challenge for photonics technologies is that they must meet parameters and specifications in the automotive sector that are often harsher than those in the telecom/datacom sector. For example, a target temperature range of −40°C to 125°C is often required, which is much broader than the typical industrial temperature range used in the telecom sector. The packaging of the PIC and its coupling to fiber and free space are particularly sensitive to these temperature changes.
| Temperature Standard | Min (°C) | Max (°C) |
| --- | --- | --- |
| Commercial (C-temp) | 0 | 70 |
| Extended (E-temp) | -20 | 85 |
| Industrial (I-temp) | -40 | 85 |
| Automotive / Full Military | -40 | 125 |
Fortunately, a substantial body of knowledge already exists to make integrated photonics compatible with harsh environments like those of outer space. After all, photonic integrated circuits (PICs) use similar materials to their electronic counterparts, which have already been qualified for space and automotive applications. Commercial solutions, such as those offered by PHIX Photonics Assembly, Technobis IPS, and the PIXAPP Photonic Packaging Pilot Line, are now available.
Takeaways
Photonics technology must be built on a wafer-scale process that can produce millions of chips in a month. When we can show the market that photonics can be as easy to use as electronics, that will trigger a revolution in the use of photonics worldwide.
The broader availability of photonic devices will take photonics into new applications, such as those of LIDAR and the automotive sector. With a growing integrated photonics industry, LIDAR can become lighter, avoid moving parts, and be manufactured in much larger volumes that reduce the cost of LIDAR devices. Integrated photonics is the avenue for LIDAR to become more accessible to everyone.
Tags: accessible, affordable, automotive, automotive sector, beamforming, discrete, economics of scale, efficient, electronics, laser, LIDAR, phased arrays, photonic integration, power consumption, self-driving car, self-driving cars, space, wafer

How To Make a Photonic Integrated Circuit
Photonics is one of the enabling technologies of the future. Light is the fastest information carrier in the universe and can transmit this information while dissipating less heat and energy than electrical signals. Thus, photonics can dramatically increase the speed, reach, and flexibility of communication networks and cope with the ever-growing demand for more data. And it will do so at a lower energy cost, decreasing the Internet’s carbon footprint. Meanwhile, fast and efficient photonic signals have massive potential for sensing and imaging applications in medical devices, automotive LIDAR, agricultural and food diagnostics, and more.
Given its importance, we want to explain how photonic integrated circuits (PICs), the devices that enable all these applications, are made.
Designing a PIC
The process of designing a PIC should translate an initial application concept into a functioning photonics chip that can be manufactured. In a short course at the OFC 2018 conference, Wim Bogaerts from Ghent University summarized the typical PIC design process in the steps we will describe below.
- Concept and Specifications: We first have to define what goes into the chip. A chip architect normally spends time with the customer to understand what the customer wants to achieve with the chip and all the conditions and situations where the chip will be used. After these conversations, the chip application concept becomes a concrete set of specifications that are passed on to the team that will design the internals of the chip. These specs will set the performance targets of the PIC design.
- Design Function: Having defined the specs, the design team will develop a schematic circuit diagram that captures the function of the PIC. This diagram is separated into several functional blocks: some of them might already exist, and some of them might have to be built. These blocks include lasers, modulators, detectors, and other components that can manipulate light in one way or another.
- Design Simulation: Making a chip costs a lot of money and time. With such risks, a fundamental element of chip design is to accurately predict the chip’s behavior after it is manufactured. The functional blocks are placed together, and their behavior is simulated using various physical models and simulation tools. The design team often uses a few different simulation approaches to reduce the risk of failure after manufacturing.
- Design Layout: Now, the design team must translate the functional chip schematic into a proper design layout that can be manufactured. The layout consists of layers, component positions, and geometric shapes that represent the actual manufacturing steps. The team uses software that translates these functions into the geometric patterns to be manufactured, with human input required for the trickiest placement and geometry decisions.
- Check Design Rules: Every chip fabrication facility will have its own set of manufacturing rules. In this step, the design team verifies that the layout agrees with these rules.
- Verify Design Function: This is a final check to ensure that the layout actually performs as intended in the original circuit schematic. The layout process usually introduces new component placements and parasitic effects that were not considered in the schematic. These tests might require the design team to revisit the earlier functional or layout steps.
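The iterative nature of this flow can be sketched in code. The snippet below is a toy illustration, not a real EDA tool flow: the checks and fixes are hypothetical stand-ins for simulation, design-rule checking, and layout verification, and a failure at any stage sends the design back for rework, just as described above.

```python
# Hypothetical sketch of the iterative PIC design loop. Each check is a toy
# stand-in; a real flow would invoke simulation, DRC, and verification tools.

def run_design_flow(design, checks, max_iterations=10):
    """Run each check in order; on failure, apply its fix and restart,
    mirroring how a DRC or verification failure sends the team back a step."""
    for _ in range(max_iterations):
        for name, passes, fix in checks:
            if not passes(design):
                design = fix(design)
                break        # revisit the flow from the top after the fix
        else:
            return design    # all checks passed: ready for tape-out
    raise RuntimeError("design did not converge")

# Toy example: a "design" is just a dict of parameters the checks inspect.
checks = [
    ("simulation", lambda d: d["loss_db"] <= 3.0,
     lambda d: {**d, "loss_db": d["loss_db"] - 1.0}),
    ("design_rules", lambda d: d["min_width_nm"] >= 120,
     lambda d: {**d, "min_width_nm": 120}),
]

final = run_design_flow({"loss_db": 5.0, "min_width_nm": 100}, checks)
print(final)  # {'loss_db': 3.0, 'min_width_nm': 120}
```

The `for/else` construct returns only when every check passes in a single sweep, which captures the key point: a late-stage failure (like layout verification) can force the team all the way back to earlier design steps.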
The Many Steps of Fabricating a PIC
Manufacturing semiconductor chips for photonics and electronics is one of the most complex procedures in the world. For example, back in his university days, EFFECT Photonics President Boudewijn Docter described a fabrication process with a total of 243 steps!
Yuqing Jiao, Associate Professor at the Eindhoven University of Technology (TU/e), explains the fabrication process in a few basic, simplified steps:
- Grow or deposit your chip material
- Print a pattern on the material
- Etch the printed pattern into your material
- Do some cleaning and extra surface preparation
- Go back to step 1 and repeat as needed
Real life is, of course, a lot more complicated and will require cycling through these steps tens of times, leading to processes with more than 200 total steps. Let’s go through these basic steps in a bit more detail.
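To see how a handful of basic steps compound into a process with hundreds of steps, consider this back-of-the-envelope sketch. The cycle counts are illustrative assumptions, not a real process recipe:

```python
# Illustrative only: how ~5 basic steps, repeated tens of times, add up to a
# 200+ step recipe. Step names follow the simplified list above.

BASIC_CYCLE = ["epitaxy/deposition", "lithography", "etch", "clean/prepare"]
FINISHING = ["passivation", "planarization", "metallization"]

def process_recipe(n_cycles):
    """Build a flat list of steps: n_cycles of the basic cycle, then finishing."""
    steps = []
    for i in range(n_cycles):
        steps.extend(f"cycle {i + 1}: {s}" for s in BASIC_CYCLE)
    steps.extend(f"finishing: {s}" for s in FINISHING)
    return steps

recipe = process_recipe(50)  # assume ~50 passes through the basic cycle
print(len(recipe))           # 203 steps, in the ballpark of the 243 mentioned above
```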
- Layer Epitaxy and Deposition: Different chip elements require different semiconductor material layers. These layers can be grown on the semiconductor wafer via a process called epitaxy or deposited via other methods (which are summarized in this article).
- Lithography (i.e. printing): There are a few lithography methods, but the one used for high-volume chip fabrication is projection optical lithography. The semiconductor wafer is coated with a photosensitive polymer film called a photoresist. Meanwhile, the design layout pattern is transferred to an opaque material called a mask. The optical lithography system projects the mask pattern onto the photoresist. The exposed photoresist is then developed (like photographic film) to complete the pattern printing.
- Etching: Having “printed” the pattern on the photoresist, it is time to remove (or etch) parts of the semiconductor material to transfer the pattern from the resist into the wafer. There are several techniques for etching the material, which are summarized in this article.
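The lithography and etch steps above amount to a pattern transfer: the mask shields parts of the resist, the exposed resist is developed away, and the etch removes material wherever the resist no longer protects it. The toy sketch below illustrates this with boolean grids (a deliberate simplification of a positive-resist process, not a process simulation):

```python
# Toy sketch of pattern transfer with a positive photoresist.
# 1 = material present, 0 = material absent. Illustration only.

def expose_and_develop(mask_opaque):
    """Positive resist: regions NOT shielded by the opaque mask are exposed
    to light and then washed away during development."""
    return [[1 if opaque else 0 for opaque in row] for row in mask_opaque]

def etch(wafer_layer, resist):
    """The etch removes the layer wherever the resist no longer protects it."""
    return [[material if protected else 0
             for material, protected in zip(layer_row, resist_row)]
            for layer_row, resist_row in zip(wafer_layer, resist)]

# Opaque mask pattern: a simple waveguide-like stripe.
mask = [[0, 0, 0],
        [1, 1, 1],
        [0, 0, 0]]

layer = [[1, 1, 1] for _ in range(3)]  # uniformly deposited layer
resist = expose_and_develop(mask)
patterned = etch(layer, resist)
print(patterned)  # only the stripe under the opaque mask survives the etch
```

After the etch, the wafer carries the mask's stripe pattern; in a real process this cycle then repeats with new layers and new masks.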
- Cleaning and Surface Preparation: After etching, a series of steps will clean and prepare the surface before the next cycle.
- Passivation: Adding layers of dielectric material (such as silica) to “passivate” the chip and make it more tolerant to environmental effects.
- Planarization: Making the surface flat in preparation for future lithography and etching steps.
- Metallization: Depositing metal components and films on the wafer. This might be done for future lithography and etching steps, or at the end to add electrical contacts to the chip.
Figure 6 summarizes how an InP photonic device looks after the steps of layer epitaxy, etching, dielectric deposition and planarization, and metallization.
The Expensive Process of Testing and Packaging
Chip fabrication is a process with many sources of variability, and therefore much testing is required to make sure that the fabricated chip agrees with what was originally designed and simulated. Once that is certified and qualified, the process of packaging and assembling a device with the PIC follows.
While packaging, assembly, and testing are only a small part of the cost of electronic systems, the opposite is true of photonic systems. Researchers at the Technical University of Eindhoven (TU/e) estimate that for most Indium Phosphide (InP) photonics devices, the cost of packaging, assembly, and testing can reach around 80% of the total module cost. There are many research efforts in motion to reduce these costs, which you can learn more about in one of our previous articles.
Especially after the first fabrication run of a new chip, there will be a few rounds of characterization, validation, and revisions to make sure the chip performs up to spec. After this first round of characterization and validation, the chip must be made ready for mass production, which requires a series of reliability tests in several different environmental conditions. You can learn more about this process in our previous article on industrial hardening. For example, different applications require different certifications of the temperature range in which the chip must operate.
| Temperature Standard | Min (°C) | Max (°C) |
| --- | --- | --- |
| Commercial (C-temp) | 0 | 70 |
| Extended (E-temp) | -20 | 85 |
| Industrial (I-temp) | -40 | 85 |
| Automotive / Full Military | -40 | 125 |
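A small helper can make these grades concrete. The sketch below (grade names and ranges taken from the table above; the function itself is a hypothetical example, not part of any standard) checks which grades a device's measured operating range would satisfy:

```python
# Temperature grades from the table above: grade -> (min °C, max °C).
GRADES = {
    "Commercial (C-temp)": (0, 70),
    "Extended (E-temp)": (-20, 85),
    "Industrial (I-temp)": (-40, 85),
    "Automotive / Full Military": (-40, 125),
}

def qualifying_grades(device_min, device_max):
    """Return the grades whose full temperature range the device covers."""
    return [name for name, (lo, hi) in GRADES.items()
            if device_min <= lo and device_max >= hi]

# A device validated from -40 °C to +90 °C covers I-temp (and below),
# but falls short of the 125 °C automotive/military ceiling.
print(qualifying_grades(-40, 90))
# ['Commercial (C-temp)', 'Extended (E-temp)', 'Industrial (I-temp)']
```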
Takeaways
The process of making photonic integrated circuits is incredibly long and complex, and the steps we described in this article are a mere simplification of the entire process. It requires a tremendous amount of knowledge in chip design, fabrication, and testing from experts in different fields all around the world. EFFECT Photonics was founded by people who fabricated these chips themselves, understand the process intimately, and have developed the connections and network to develop cutting-edge PICs at scale.
What’s an ITLA and Why Do I Need One?
The tunable laser is a core component of every optical communication system, both direct detect and coherent. The laser generates the optical signal modulated and sent over the optical fiber. Thus, the purity and strength of this signal will have a massive impact on the bandwidth and reach of the communication system.
Depending on the material platform, system architecture, and requirements, optical system developers must balance laser parameters—tunability, purity, size, environmental resistance, and power—for the best system performance.
In this article, we will talk about one specific kind of laser—the integrable tunable laser assembly (ITLA)—and when it is needed.
When Do I Need an ITLA?
The promise of silicon photonics (SiP) is compatibility with existing electronic manufacturing ecosystems and infrastructure. Integrating silicon components on a single chip with electronics manufacturing processes can dramatically reduce the footprint and cost of optical systems and open avenues for closer integration with silicon electronics on the same chip. However, the one thing silicon photonics lacks is the laser component.
Silicon is not a material that can naturally emit laser light from electrical signals. Decades of research have produced silicon-based lasers using unconventional nonlinear optical techniques. Still, they cannot match the power, efficiency, tunability, and cost-at-scale of lasers made from indium phosphide (InP) and other III-V compound semiconductors.
Therefore, making a suitable laser for silicon photonics does not mean making an on-chip laser from silicon but an external laser from III-V materials such as InP. This light source will be coupled via optical fiber to the silicon components on the chip while maintaining a low enough footprint and cost for high-volume integration. The external laser typically comes in the form of an integrable tunable laser assembly (ITLA).