Insights
The Fabrication Process Inside a Photonic Foundry
Photonics is one of the enabling technologies of the future. Light is the fastest information carrier in the universe and can transmit this information while dissipating less heat and energy than electrical signals. Thus, photonics can dramatically increase the speed, reach, and flexibility of communication networks and cope with the ever-growing demand for more data. And it will do so at a lower energy cost, decreasing the Internet’s carbon footprint. Meanwhile, fast and efficient photonic signals have massive potential for sensing and imaging applications in medical devices, automotive LIDAR, agricultural and food diagnostics, and more.
Given its importance, we should discuss the fabrication processes inside photonic semiconductor foundries.
Manufacturing semiconductor chips for photonics and electronics is one of the most complex procedures in the world. For example, back in his university days, EFFECT Photonics co-founder Boudewijn Docter described a fabrication process with 243 steps!
Yuqing Jiao, Associate Professor at the Eindhoven University of Technology (TU/e), explains the fabrication process in a few basic, simplified steps:
- Grow or deposit your chip material
- Print a pattern on the material
- Etch the printed pattern into your material
- Do some cleaning and extra surface preparation
- Go back to step 1 and repeat as needed
Real life is, of course, a lot more complicated and requires cycling through these steps dozens of times, leading to processes with more than 200 steps in total. Let’s go through these basic steps in a bit more detail.
1. Layer Epitaxy and Deposition: Different chip elements require different semiconductor material layers. These layers can be grown on the semiconductor wafer via a process called epitaxy or deposited via other methods, such as physical or chemical vapor deposition.
2. Lithography (i.e., printing): There are a few lithography methods, but the one used for high-volume chip fabrication is projection optical lithography. The semiconductor wafer is coated with a photosensitive polymer film called a photoresist. Meanwhile, the design layout pattern is transferred to an opaque material called a mask. The optical lithography system projects the mask pattern onto the photoresist. The exposed photoresist is then developed (like photographic film) to complete the pattern printing.
3. Etching: Having “printed” the pattern on the photoresist, it is time to remove (or etch) parts of the semiconductor material to transfer the pattern from the resist into the wafer. Etching techniques can be broadly classified into two categories.
- Dry Etching: These processes remove material by bombarding it with ions. Typically, these ions come from a plasma of reactive gases such as oxygen, chlorine, or boron trichloride. This approach is often used to etch a material anisotropically (i.e., in a specific direction).
- Wet Etching: These processes remove material using a liquid reactant. The material to be etched is immersed in a solution that dissolves the targeted material layers. This solution usually consists of an acid, such as hydrofluoric acid (HF), which is commonly used to etch silicon dioxide. Wet etching is typically used to etch a material isotropically (i.e., in all directions).
4. Cleaning and Surface Preparation: After etching, a series of steps will clean and prepare the surface before the next cycle.
- Passivation: Adding layers of dielectric material (such as silica) to “passivate” the chip and make it more tolerant to environmental effects.
- Planarization: Making the surface flat in preparation for future lithography and etching steps.
- Metallization: Depositing metal components and films on the wafer. This might be done for future lithography and etching steps or, in the end, to add electrical contacts to the chip.
Figure 5 summarizes how an InP photonic device looks after the steps of layer epitaxy, etching, dielectric deposition and planarization, and metallization.
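As a rough illustration of how this short basic cycle compounds into a flow of hundreds of steps, here is a toy sketch in Python. The step names follow the basic cycle described above; the cycle count is a made-up placeholder, not an actual foundry recipe.

```python
# Toy model of a fabrication flow: a short basic cycle repeated many times.
# The cycle count below is illustrative only, not a real foundry recipe.

BASIC_CYCLE = [
    "layer epitaxy / deposition",
    "lithography",
    "etching",
    "cleaning and surface preparation",
]

def build_flow(num_cycles: int) -> list[str]:
    """Repeat the basic cycle, labeling each step with its cycle number."""
    return [
        f"cycle {cycle:02d}: {step}"
        for cycle in range(1, num_cycles + 1)
        for step in BASIC_CYCLE
    ]

flow = build_flow(num_cycles=55)
print(len(flow))  # 220 steps, in the same ballpark as the 243-step process quoted earlier
```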
After this fabrication process ends, the processed wafers are shipped worldwide to be tested and packaged into photonic devices. This is an expensive process we discussed in one of our previous articles.
Takeaways
The process of making photonic integrated circuits is incredibly long and complex, and the steps we described in this article are a mere simplification of the entire process. It requires tremendous knowledge in chip design, fabrication, and testing from experts in different fields worldwide. EFFECT Photonics was founded by people who fabricated these chips themselves, understood the process intimately and developed the connections and network to develop cutting-edge PICs at scale.
Tags: Agricultural, Carbon Footprint, Chip Material, Cleaning, Communication Networks, Deposition, Energy Cost, Epitaxy, Etching, Fabrication Process, Food Diagnostics, Integrated Photonics, LIDAR, Lithography, Manufacturing, Medical Devices, Metallization, Photonic Foundry, Photonics, Semiconductor, Sensing and Imaging, Surface Preparation
Data Center Interconnects: Coherent or Direct Detect?
Article first published 15 June 2022, updated 18 May 2023.
With the increasing demand for cloud-based applications, datacom providers are expanding their distributed computing networks. Therefore, they and their telecom provider partners are looking for data center interconnect (DCI) solutions that are faster and more affordable than before, to ensure that connectivity between metro and regional facilities does not become a bottleneck.
As shown in the figure below, we can think about three categories of data center interconnects based on their reach:
- Intra-data center interconnects (< 2 km)
- Campus data center interconnects (< 10 km)
- Metro data center interconnects (< 100 km)
Coherent 400ZR now dominates the metro DCI space, but in the coming decade, coherent technology could also play a role in shorter ranges, such as campus and intra-data center interconnects. As interconnects upgrade to Terabit speeds, coherent technology might start coming closer to direct detect power consumption and cost.
Coherent Dominates in Metro DCIs
The advances in electronic and photonic integration allowed coherent technology for metro DCIs to be miniaturized into QSFP-DD and OSFP form factors. This progress allowed the Optical Internetworking Forum (OIF) to create a 400ZR multi-source agreement. With small enough modules to pack a router faceplate densely, the datacom sector could profit from a 400ZR solution for high-capacity data center interconnects of up to 80km. Operations teams found the simplicity of coherent pluggables very attractive. There was no need to install and maintain additional amplifiers and compensators as in direct detection: a single coherent transceiver plugged into a router could fulfill the requirements.
As an example of their success, Cignal AI forecasted that 400ZR shipments would dominate edge applications, as shown in Figure 2.
Campus Interconnects Are the Grey Area
The campus DCI segment, featuring distances below 10 kilometers, was squarely in the domain of direct detect products when the standard speed of these links was 100Gbps. No amplifiers or compensators were needed at these shorter distances, so direct detect transceivers were as simple to deploy and maintain as coherent ones.
However, as link bandwidths increase into the Terabit space, these direct detect links will need more amplifiers to reach 10 kilometers, and their power consumption will approach that of coherent solutions. The industry initially predicted that coherent solutions would be able to match the power consumption of PAM4 direct detect solutions as early as the 800G generation. However, PAM4 developers have proven resourceful and have borrowed some aspects of coherent solutions without fully implementing a coherent solution. For example, ahead of OFC 2023, semiconductor solutions provider Marvell announced a 1.6Tbps PAM4 platform that pushes the envelope on the cost and power per bit they could offer in the 10 km range.
It will be interesting to follow how the PAM-4 industry evolves in the coming years. How many (power-hungry) features of coherent solutions will they have to borrow to keep up with upcoming generations and speeds of 3.2 Tbps and beyond? Lumentum’s Chief Technology Officer, Brandon Collings, has some interesting thoughts on the subject in this interview with Gazettabyte.
Direct Detect Dominates Intra Data Center Interconnects (For Now…)
Below Terabit speeds, direct detect technology (both NRZ and PAM-4) will likely dominate the intra-DCI space (also called data center fabric) in the coming years. In this space, links span less than 2 kilometers, and for particularly short links (< 300 meters), affordable multimode fiber (MMF) is frequently used.
Nevertheless, the move to larger, more centralized data centers (such as hyperscale) is lengthening intra-DCI links. Instead of transferring data directly from one data center building to another, new data centers move data to a central hub. So even if the building you want to connect to is only 200 meters away, the fiber may run to a hub one or two kilometers away. In other words, intra-DCI links are becoming campus DCI links that require their own single-mode fiber solutions.
On top of these changes, the upgrades to Terabit speeds in the coming decade will also see coherent solutions more closely challenge the power consumption of direct detect transceivers. PAM-4 direct detect transceivers that fulfill the speed requirements require digital signal processors (DSPs) and more complex lasers that will be less efficient and affordable than previous generations of direct detect technology. With coherent technology scaling up in volume and having greater flexibility and performance, one can argue that it will also reach cost-competitiveness in this space.
Takeaways
Unsurprisingly, the choice between coherent and direct detect technology for data center interconnects boils down to reach and capacity needs. 400ZR coherent is already established as the solution for metro DCIs. In campus interconnects of 10 km or less, PAM-4 products remain a robust solution up to 1.6 Tbps, but coherent technology is making a case for its use. It will be interesting to see how the two compete in future generations at 3.2 Tbps and beyond.
Coherent solutions are also becoming more competitive as the intra-data center sector moves into higher Terabit speeds, like 3.2Tbps. Overall, the datacom sector is moving towards coherent technology, which is worth considering when upgrading data center links.
Tags: 800G, access networks, coherent, cost, cost-effective, Data center, distributed computing, edge and metro DCIs, integration, Intra DCI, license, metro, miniaturized, photonic integration, Photonics, pluggable, power consumption, power consumption SFP, reach, Terabit
Shining a Light on Four Tunable Lasers
The world is moving towards tunability. Datacom and telecom companies may increase their network capacity without investing in new fiber infrastructure thanks to tunable lasers and dense wavelength division multiplexing (DWDM). Furthermore, the miniaturization of coherent technology into pluggable transceiver modules has enabled the widespread implementation of IP over DWDM solutions. Self-tuning algorithms have also contributed to the broad adoption of DWDM systems since they reduce the complexity of deployment and maintenance.
The tunable laser is a core component of all these tunable communication systems, both direct detection and coherent. The fundamental components of a laser are the following:
- An optical resonator (also called an optical cavity) that allows laser light to re-circulate and feed itself back. Resonators can be linear or ring-shaped. Linear resonators have a highly reflective mirror on one end and a partially-reflective mirror on the other, which acts as a coupler that lets the laser light out. On the other hand, ring resonators use a waveguide as an output coupler.
- An active medium (also called a gain medium) inside the resonator that, when pumped by an external energy source, will amplify the power of light by a process called stimulated emission.
- A pump source, the external energy source that powers the amplification process of the gain medium. The typical tunable laser used in communications will use an electrical pump, but some lasers can also use an optical pump (i.e., another light source).
As light circulates throughout the resonator, it passes multiple times through the pumped gain medium, amplifying itself and building up power to become the highly concentrated and coherent beam of light we know as a laser.
There are multiple ways to tune lasers, but let’s discuss three common tuning methods. These methods can be, and often are, used together.
- Tuning the Gain Medium: By changing the pump intensity or environmental conditions such as its temperature, the gain medium can amplify different frequencies of light.
- Tuning the Resonator Length: The light inside a resonator resonates at frequencies that depend on the length of the resonator, so making the resonator shorter or longer changes those frequencies (see the short calculation after this list).
- Tuning by Filtering: Adding a filtering element inside or outside the resonator, such as a diffraction grating (i.e., a periodic mirror), allows the laser to “select” a specific frequency.
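To put rough numbers on the resonator-length tuning above, here is a small Python sketch. The cavity length and group index are assumed, typical values for a semiconductor laser chip, not the parameters of any specific product.

```python
# Rough sketch: how resonator length sets the spacing of the longitudinal
# modes and how a small length change shifts a given mode. Values are
# illustrative assumptions, not data from a real device.

C = 3.0e8  # speed of light in vacuum, m/s

def mode_spacing_hz(cavity_length_m: float, group_index: float) -> float:
    """Free spectral range of a linear resonator: c / (2 * n * L)."""
    return C / (2.0 * group_index * cavity_length_m)

def frequency_shift_hz(frequency_hz: float, delta_length_m: float,
                       cavity_length_m: float) -> float:
    """A relative length change dL/L shifts a resonant mode by -dL/L in relative terms."""
    return -frequency_hz * (delta_length_m / cavity_length_m)

length = 400e-6      # 400 um cavity (assumed)
n_group = 3.5        # typical semiconductor group index (assumed)
f_laser = 193.4e12   # ~1550 nm optical carrier, Hz

print(f"Mode spacing: {mode_spacing_hz(length, n_group) / 1e9:.0f} GHz")
print(f"Shift for a +0.1 nm length change: "
      f"{frequency_shift_hz(f_laser, 0.1e-9, length) / 1e9:.3f} GHz")
```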
With this short intro on how lasers work and can be tuned, let’s dive into some of the different tunable lasers used in communication systems.
Distributed Feedback Lasers
Distributed Feedback (DFB) lasers are unique because they directly etch a grating onto the gain medium. This grating acts as a periodic mirror, forming the optical resonator needed to recirculate light and create a laser beam. These lasers are tunable by tuning the temperature of the gain medium and by filtering with the embedded grating.
Compared to their predecessors, DFB lasers produce very pure, high-quality laser light with lower design and manufacturing complexity, and they can be easily integrated into optical fiber systems. These characteristics benefited the telecommunications sector, which needed lasers with high purity and low noise that could be produced at scale. After all, the purer (i.e., the lower the linewidth) a laser is, the more information it can encode. Thus, DFB lasers became the industry’s solution for many years.
The drawback of DFB lasers is that embedding the grating element in the gain medium makes them more sensitive and unstable. This sensitivity narrows their tuning range and makes them less reliable as they age.
Distributed Bragg Reflector (DBR) Lasers
A simple way to improve reliability compared to a DFB laser is to etch the grating element outside the gain medium instead of inside it. This grating element (in this case called a Bragg reflector) acts as a mirror that forms the optical resonator in which the light is amplified. This setup is called a distributed Bragg reflector (DBR) laser.
While, in principle, a DBR laser does not have a wider tuning range than a DFB laser, its tuning behavior is more reliable over time. Since the grating is outside the gain medium, the DBR laser is less sensitive to environmental fluctuations and more reliable as it ages. However, as coherent and DWDM systems became increasingly important, the industry needed a greater tuning range that DFB and DBR lasers alone could not provide.
External Cavity Lasers (ECL)
Interestingly enough, one of the most straightforward ways to improve the quality and tunability of a semiconductor laser is to use it inside a second, somewhat larger resonator. This setup is called an external cavity laser (ECL) since this new resonator or cavity will use additional optical elements external to the original laser.
The main modification to the original semiconductor laser is that instead of having a partially reflective mirror as an output coupler, the coupler will use an anti-reflection coating to become transparent. This helps the original laser resonator capture more light from the external cavity.
The new external resonator provides more degrees of freedom for tuning the laser. If the resonator uses a mirror, then the laser can be tuned by moving the mirror a bit and changing the length of the resonator. If the resonator uses a grating, it has an additional element to tune the laser by filtering.
ECLs have become the state-of-the-art solution in the telecom industry: they use a DFB or DBR laser as the “base laser” and external gratings as their filtering element for additional tuning. These lasers can provide a high-quality laser beam with low noise, narrow linewidth, and a wide tuning range. However, they come at a cost: manufacturing complexity.
ECLs initially required free-space bulk optical elements, such as lenses and mirrors, for the external cavity. One of the hardest things to do in photonics is coupling between free-space optics and a chip. This alignment of the free-space external cavity with the original laser chip is extremely sensitive to environmental disturbances. Therefore, their coupling is often inefficient and complicates manufacturing and assembly processes, making them much harder to scale in volume.
Laser developers have tried to overcome this obstacle by manufacturing the external cavity on a separate chip coupled to the original laser chip. Coupling these two chips together is still a complex problem for manufacturing but more feasible and scalable than coupling from chip to free space optics. This is the direction many major tunable laser developers will take in their future products.
Integrated Tunable Ring Lasers
As we explained in the introductory section, linear resonators are those in which light bounces back and forth between two mirrors. However, ring resonators take a different approach to feedback: the light loops multiple times inside a ring that contains the active medium. The ring is coupled to the rest of the optical circuit via a waveguide.
The power of the ring resonator lies in its compactness, flexibility, and integrability. While a single ring resonator is not that impressive or tunable, using multiple rings and other optical elements allows them to achieve performance and tunability on par with the state-of-the-art tunable lasers that use linear resonators.
Most importantly, these widely tunable ring lasers can be entirely constructed on a single chip of Indium Phosphide (InP) material. As shown in this paper from the Eindhoven University of Technology, these lasers can even be built with the same basic building blocks and processes used to make other elements in the InP photonic integrated circuit (PIC).
This high integration of ring lasers has many positive effects. It can avoid inefficient couplings and make the laser more energy efficient. Furthermore, it enables the development of a monolithically integrated laser module where every element is included on the same chip. This includes integrating the wavelength locker component on the same chip, an element most state-of-the-art lasers attach separately.
As we have argued in previous articles, the more elements can be integrated into a single chip, the more scalable the manufacturing process can become.
Takeaways
Factors such as output power, noise, linewidth, tuning range, and manufacturability are vital when deciding which kind of laser to use. A DFB or DBR laser should do the job if wide tunability is not required. Greater tuning range will require an external cavity laser, but if the device must be manufactured at a large volume, an external cavity made on a chip instead of free-space optics will scale more easily. The latter is the tunable laser solution the telecom industry is gravitating towards.
That being said, ring lasers are a promising alternative because they can enable a widely tunable and monolithically integrated laser with all elements, including wavelength locker, on the same chip. This setup is ideal for scaling into high production volumes.
Tags: EFFECT Photonics, Photonics
The Promise of Integrated Quantum Photonics
Today’s digital society depends heavily on securely transmitting and storing data. One of the oldest and most widely used methods to encrypt data is called RSA (Rivest-Shamir-Adleman – the surnames of the algorithm’s designers). However, in 1994 mathematician Peter Shor proved that an ideal quantum computer could find the prime factors of large numbers exponentially more quickly than a conventional computer and thus break RSA encryption within hours or days.
While practical quantum computers are likely decades away from implementing Shor’s algorithm with enough performance and scale to break RSA or similar encryption methods, the potential implications are terrifying for our digital society and our data safety.
Given these risks, arguably the most secure way to protect data and communications is by fighting quantum with quantum: protect your data from quantum computer hacking by using security protocols that harness the power of quantum physics laws. That’s what quantum key distribution (QKD) does.
The quantum bits (qubits) used by QKD systems can be photons, electrons, atoms, or any other system that can exist in a quantum state. However, using photons as qubits will likely dominate the quantum communications and QKD application space. We have decades of experience manipulating the properties of photons, such as polarization and phase, to encode qubits. Thanks to optical fiber, we also know how to send photons over long distances with relatively little loss. Besides, optical fiber is already a fundamental component of modern telecommunication networks, so future quantum networks can run on that existing fiber infrastructure. All these signs point towards a new era of quantum photonics.
Photonic QKD devices have been, in some shape or form, commercially available for over 15 years. Still, factors such as the high cost, large size, and the inability to operate over longer distances have slowed their widespread adoption. Many R&D efforts regarding quantum photonics aim to address the size, weight, and power (SWaP) limitations. One way to overcome these limitations and reduce the cost per device would be to integrate every QKD function—generating, manipulating, and detecting photonic qubits—into a single chip.
Integration is Key to Bring Lab Technology into the Market
Bringing quantum products from lab prototypes to fully realized products that can be sold on the market is a complex process that involves several key steps.
One of the biggest challenges in bringing quantum products to market is scaling up the technology from lab prototypes to large-scale production. This requires the development of reliable manufacturing processes and supply chains that can produce high-quality quantum products at scale. Quantum products must be highly performant and reliable to meet the demands of commercial applications. This requires extensive testing and optimization to ensure that the product meets or exceeds the desired specifications.
In addition, quantum products must comply with relevant industry standards and regulations to ensure safety, interoperability, and compatibility with existing infrastructure. This requires close collaboration with regulatory bodies and industry organizations to develop appropriate standards and guidelines.
Photonic integration is a process that makes these goals more attainable for quantum technologies. By taking advantage of existing semiconductor manufacturing systems, quantum technologies can scale up their production volumes more easily.
Smaller Footprints and Higher Efficiency
One of the most significant advantages of integrated photonics is its ability to miniaturize optical components and systems, making them much smaller, lighter, and more portable than traditional optical devices. This is achieved by leveraging micro- and nano-scale fabrication techniques to create optical components on a chip, which can then be integrated with other electronic and optical components to create a fully functional device.
The miniaturization of optical components and systems is essential for the development of practical quantum technologies, which require compact and portable devices that can be easily integrated into existing systems. For example, compact and portable quantum sensors can be used for medical imaging, geological exploration, and industrial process monitoring. Miniaturized quantum communication devices can be used to secure communication networks and enable secure communication between devices.
Integrated photonics also allows for the creation of complex optical circuits that can be easily integrated with other electronic components to create fully integrated opto-electronic quantum systems. This is essential for the development of practical quantum computers, which require the integration of a large number of qubits with control and readout electronics.
Economics of Scale
Wafer-scale photonics manufacturing demands a higher upfront investment, but the resulting high-volume production line drives down the cost per device. This economy-of-scale principle is the same one behind electronics manufacturing, and it must also be applied to photonics. The more optical components we can integrate into a single chip, the lower the price of each component can become. And the more optical System-on-Chip (SoC) devices that fit on a single wafer, the lower the price of each SoC can become.
Researchers at the Technical University of Eindhoven and the JePPIX consortium have done some modelling to show how this economy-of-scale principle would apply to photonics. If production volumes can increase from a few thousand chips per year to a few million, the price per optical chip can decrease from thousands of Euros to mere tens of Euros. This must be the goal for the quantum photonics industry.
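A minimal sketch of this economy-of-scale argument is shown below. The fixed and variable costs are hypothetical placeholders chosen only to reproduce the “thousands of Euros down to tens of Euros” trend; they are not figures from the TU/e or JePPIX modelling.

```python
# Hypothetical cost model: amortize fixed costs (masks, line time, R&D) over
# annual volume and add a per-chip variable cost (wafer area, test, yield).
# Both cost figures below are illustrative assumptions.

def price_per_chip(annual_volume: int,
                   fixed_costs_eur: float = 2_000_000,
                   variable_cost_eur: float = 20.0) -> float:
    """Fixed costs spread over volume, plus the marginal cost of each chip."""
    return fixed_costs_eur / annual_volume + variable_cost_eur

for volume in (2_000, 20_000, 200_000, 2_000_000):
    print(f"{volume:>9,} chips/year -> ~{price_per_chip(volume):,.0f} EUR per chip")
```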
By integrating all optical components on a single chip, we also shift the complexity from the assembly process to the much more efficient and scalable semiconductor wafer process. Assembling and packaging a device by interconnecting multiple photonic chips increases assembly complexity and costs. On the other hand, combining and aligning optical components on a wafer at a high volume is much easier, which drives down the device’s cost.
Takeaways
Overall, bringing quantum products to market requires a multi-disciplinary approach that involves collaboration between scientists, engineers, designers, business professionals, and regulatory bodies to develop and commercialize a high-quality product that meets the needs of its target audience. Integrated photonics offers significant advantages in miniaturization and scale-up potential, which are essential in taking quantum technologies from the lab to the market.
Tags: Economy-of-scale, EFFECT Photonics, Integrated Photonics, miniaturization, Photonics, Photons, Quantum, Quantum products, Qubits, RSA encryption, Wafer Scale Photonics
The Future of Coherent Transceivers in the Access
The demand for data and other digital services is rising exponentially. From 2010 to 2020, Internet users worldwide doubled, and global internet traffic increased 12-fold. From 2020 to 2026, internet traffic will likely increase 5-fold. To meet this demand, datacom and telecom operators need to constantly upgrade their transport networks.
The major obstacles in this upgrade path remain the power consumption, thermal management, and affordability of transceivers. Over the last two decades, power ratings for pluggable modules have increased as we moved from direct detection to more power-hungry coherent transmission: from 2 W for SFP modules to 3.5 W for QSFP modules, and now to 14 W for QSFP-DD and 21.1 W for OSFP form factors. This power consumption increase seems incompatible with the power constraints of the network edge.
This article will review trends in data rate, power consumption, and footprint for transceivers in the network edge that aim to address these challenges.
Downscaling Data Rates for the Access
Given the success of 400ZR pluggable coherent solutions in the market, discussions in the telecom sector about a future beyond 400G pluggables have often focused on 800G solutions and 800ZR. However, there is also increasing excitement about “downscaling” to 100G coherent products for applications in the network edge.
In the coming years, 100G coherent uplinks will become increasingly widespread in deployments and applications throughout the network edge. Some mobile access networks use cases must upgrade their existing 10G DWDM link aggregation into a single coherent 100G DWDM uplink. Meanwhile, cable networks and business services are upgrading their customer links from 1Gbps to 10Gbps, and this migration will be the significant factor that will increase the demand for coherent 100G uplinks. For carriers who provide converged cable/mobile access, these upgrades to 100G uplinks will enable opportunities to overlay more business services and mobile traffic into their existing cable networks.
You can read more about these developments in our previous article, When Will the Network Edge Go Coherent?
Moving Towards Low Power
Data centers and 5G networks might be hot commodities, but the infrastructure that enables them runs even hotter. Electronic equipment generates plenty of heat; the more heat energy an electronic device dissipates, the more money and energy must be spent to cool it down. These power efficiency issues do not just affect the environment but also the bottom lines of communications companies.
As shown in the table below, the growth of data centers and wireless networks will continue to drive power consumption upwards.
These power constraints are even more pressing in the access network sector. Unlike data centers and the network core, access network equipment lives in uncontrolled environments with limited cooling capabilities. Therefore, every extra watt of pluggable power consumption will impact how vendors and operators design their cabinets and equipment.
These struggles are a major reason why QSFP28 form factor solutions are becoming increasingly attractive in the 100ZR domain. Their power consumption (up to 6 watts) is lower than that of QSFP-DD form factors (up to 14 Watts), which allows them to be stacked more densely in access network equipment rooms. Besides, QSFP28 modules are compatible with existing access network equipment, which often features QSFP28 slots.
Aside from the move to QSFP28 form factors for 100G coherent, EFFECT Photonics also believes in two other ways to reduce power consumption.
- Increased Integration: The interconnections among smaller, highly-integrated optical components consume less power than those among more discrete components. We will discuss this further in the next section.
- Co-Design: As we explained in a previous article about fit-for-platform DSPs, a transceiver optical engine designed on the indium phosphide platform could be designed to run at a voltage compatible with the DSP’s signal output. This way, the optimized DSP could drive the PIC directly without needing a separate analog driver, doing away with a significant power conversion overhead.
Can We Still Move Towards Smaller Footprints?
Moving toward smaller pluggable footprints should not necessarily be a goal, but as we mentioned in the previous section, it is a means toward the goal of lower power consumption. Decreasing the size of optical components and their interconnections means that the light inside the chip will travel a smaller distance and accumulate fewer optical losses.
Let’s look at an example of lasers. In the last decade, technological progress in tunable laser packaging and integration has matched the need for smaller footprints. In 2011, tunable lasers followed the multi-source agreement (MSA) for integrable tunable laser assemblies (ITLAs). The ITLA package measured around 30.5 mm in width and 74 mm in length. By 2015, tunable lasers were sold in the more compact Micro-ITLA form factor, which cut the original ITLA package size in half. And in 2019, laser developers (see examples here and here) announced a new Nano-ITLA form factor that reduced the size by almost half again.
Reducing the footprint of tunable lasers in the future will need even greater integration of their parts. For example, every tunable laser needs a wavelength locker component that can stabilize the laser’s output regardless of environmental conditions such as temperature. Integrating the wavelength locker component on the laser chip instead of attaching it externally would help reduce the laser package’s footprint and power consumption.
Another potential way to reduce the size of tunable laser packages is related to the control electronics. The current ITLA standards include the complete control electronics in the laser package, including power conversion and temperature control. However, if the transceiver’s main board handles some of these electronic functions instead of the laser package, the size of the laser package can be reduced.
This approach means the reduced laser package would only have full functionality if connected to the main transceiver board. However, some transceiver developers will appreciate the laser package reduction and the extra freedom to provide their own laser control electronics.
Takeaways
The ever-increasing bandwidth demands in access networks force coherent pluggables to face the complex problem of maintaining a good enough performance while moving to lower cost and power consumption.
The move towards 100G coherent solutions in QSFP28 form factors will play a major role in meeting the power requirements of the access network sector. Further gains can be achieved with greater integration of optical components and co-designing the optics and electronic engines of the transceiver to reduce inefficiencies. Further gains in footprint for transceivers can also be obtained by eliminating redundant laser control functions in both the laser package and the main transceiver board.
Tags: 100G Coherent Products, 400ZR Pluggable Coherent Solutions, 5G Networks, 800G Solutions, 800ZR, Affordability, Coherent Transceivers, datacom, Direct Detection, EFFECT Photonics, Internet Traffic, network edge, OSFP Form Factors, Photonics, Pluggable Modules, power consumption, Power Efficiency, QSFP Modules, QSFP-DD, Telecom Operators, Thermal Management
One Watt Matters
Data centers and 5G networks might be hot commodities, but the infrastructure that enables them runs even hotter. Electronic equipment generates plenty of heat; the more heat energy an electronic device dissipates, the more money and energy must be spent to cool it down.
The Uptime Institute estimates that the average power usage effectiveness (PUE) ratio for data centers in 2022 is 1.55. This implies that for every 1 kWh used to power data center equipment, an extra 0.55 kWh—about 35% of total power consumption—is needed to power auxiliary equipment like lighting and, more importantly, cooling. While the advent of centralized hyperscale data centers will improve energy efficiency in the coming decade, that trend is offset by the construction of many smaller local data centers on the network edge to address the exponential growth of 5G services such as the Internet of Things (IoT).
These opposing trends are one of the reasons why the Uptime Institute has only observed a marginal improvement of 10% in the average data center PUE since 2014 (which was 1.7 back then). Such a slow improvement in average data center power efficiency cannot compensate for the fast growth of new edge data centers.
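The arithmetic behind these PUE figures is easy to reproduce; the short sketch below uses only the numbers quoted above.

```python
# Quick check of the PUE figures quoted above (all inputs are from the article).

def overhead_per_it_kwh(pue: float) -> float:
    """Extra energy (cooling, lighting, power conversion) per 1 kWh of IT load."""
    return pue - 1.0

pue_2022, pue_2014 = 1.55, 1.7
print(overhead_per_it_kwh(pue_2022))             # 0.55 kWh of overhead per IT kWh
print(overhead_per_it_kwh(pue_2022) / pue_2022)  # ~0.35 -> about 35% of total facility energy
print(1 - pue_2022 / pue_2014)                   # ~0.09 -> the roughly 10% improvement since 2014
```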
For all the bad reputation data centers receive for their energy consumption, though, wireless transmission generates even more heat than wired links. While 5G standards are more energy-efficient per bit than 4G, Huawei expects that the maximum power consumption of one of their 5G base stations will be 68% higher than their 4G stations. To make things worse, the use of higher frequency spectrum bands and new IoT use cases require the deployment of more base stations too.
Prof. Earl McCune from TU Delft estimates that nine out of ten watts of electrical power in 5G systems turn into heat. This Huawei study also predicts that the energy consumption of wireless access networks will increase even more quickly than data centers in the next ten years—more than quadrupling between 2020 and 2030.
These power efficiency issues do not just affect the environment but also the bottom lines of communications companies. In such a scenario, saving even one watt of power per pluggable transceiver could quickly multiply and scale up into a massive improvement on the sustainability and profitability of telecom and datacom providers.
How One Watt of Savings Scales Up
Let’s discuss an example to show how a seemingly small improvement of one Watt in pluggable transceiver power consumption can quickly scale up into major energy savings.
A 2020 paper from Microsoft Research estimates that for a metropolitan region of 10 data centers with 16 fiber pairs each and 100-GHz DWDM per fiber, the regional interconnect network needs to host 12,800 transceivers. Since the 400ZR transceiver ecosystem supports a denser 75 GHz DWDM grid, this count could increase by a third in the coming years, to roughly 17,000 transceivers. Therefore, saving a watt of power in each transceiver would lead to a total of 17 kW in savings.
The power savings don’t end there, however. The transceiver is powered by the server, which is then powered by its power supply and, ultimately, the national electricity grid. On average, 2.5 Watts must be supplied from the national grid for every watt of power the transceiver uses. When applying that 2.5 factor, the 17 kW in savings we discussed earlier are, in reality, 42.5 kW. In a year of power consumption, this rate adds up to a total of 372 MWh in power consumption savings. According to the US Environmental Protection Agency (EPA), these amounts of power savings in a single metro data center network are equivalent to 264 metric tons of carbon dioxide emissions. These emissions are equivalent to consuming 610 barrels of oil and could power up to 33 American homes for a year.
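The sketch below reproduces this back-of-the-envelope calculation. All inputs are the figures quoted in this section rather than independent estimates.

```python
# Back-of-the-envelope savings from one watt per transceiver, using the
# figures quoted above for a 10-data-center metro region on a 75 GHz grid.

transceivers = 17_000             # ~12,800 on a 100 GHz grid, plus a third more at 75 GHz
watts_saved_per_module = 1.0
grid_watts_per_module_watt = 2.5  # grid power supplied per watt used by the transceiver
hours_per_year = 8_760

module_savings_kw = transceivers * watts_saved_per_module / 1_000   # 17 kW
grid_savings_kw = module_savings_kw * grid_watts_per_module_watt    # 42.5 kW
annual_savings_mwh = grid_savings_kw * hours_per_year / 1_000       # ~372 MWh per year

print(module_savings_kw, grid_savings_kw, round(annual_savings_mwh))
```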
Saving Power through Integration and Co-Design
Before 2020, Apple made its computer processors with discrete components. In other words, electronic components were manufactured on separate chips, and these chips were then assembled into a single package. However, the interconnections between the chips produced losses and incompatibilities that made their devices less energy efficient. After 2020, starting with Apple’s M1 processor, Apple fully integrated all components on a single chip, avoiding those losses and incompatibilities. As shown in the table below, this electronic system-on-chip (SoC) consumes a third of the power of the discrete-component processors used in previous generations of their computers.
| Mac Mini Model | Idle Power (W) | Max Power (W) |
| --- | --- | --- |
| 2020, M1 | 7 | 39 |
| 2018, Core i7 | 20 | 122 |
| 2014, Core i5 | 6 | 85 |
| 2010, Core 2 Duo | 10 | 85 |
| 2006, Core Solo or Duo | 23 | 110 |
| 2005, PowerPC G4 | 32 | 85 |
The photonics industry would benefit from a similar goal: implementing a photonic system-on-chip. Integrating all the optical components (lasers, detectors, modulators, etc.) on a single chip can minimize the losses and make devices such as optical transceivers more efficient. This approach doesn’t just optimize the efficiency of the devices themselves but also of the resource-hungry chip manufacturing process. For example, a system-on-chip approach enables earlier optical testing on the semiconductor wafer and dies. By testing the dies and wafers directly before packaging, manufacturers need only discard the bad dies rather than the whole package, which saves valuable energy and materials. You can read our previous article on the subject to know more about the energy efficiency benefits of system-on-chip integration.
Another way of improving power consumption in photonic devices is co-designing their optical and electronic systems. A co-design approach helps identify in greater detail the trade-offs between various parameters in the optics and electronics, optimizing their fit with each other and ultimately improving the overall power efficiency of the device. In the case of coherent optical transceivers, an electronic digital signal processor specifically optimized to drive an indium-phosphide optical engine directly could lead to power savings.
When Sustainability is Profitable
System-on-chip (SoC) approaches might reduce not only the footprint and energy consumption of photonic devices but also their cost. The economy-of-scale principles that rule the electronic semiconductor industry can also reduce the cost of photonic systems-on-chip. After all, SoCs minimize the footprint of photonic devices, allowing photonics developers to fit more of them on a single wafer, which decreases the price of each photonic system. As the graphic below shows, the more chips and wafers are produced, the lower the cost per chip becomes.
Integrating all optical components—including the laser—on a single chip shifts the complexity from the expensive assembly and packaging process to the more affordable and scalable semiconductor wafer process. For example, it’s much easier to combine optical components on a wafer at a high volume than to align components from different chips together in the assembly process. This shift to wafer processes also helps drive down the cost of the device.
Takeaways
With data and energy demands rising yearly, telecom and datacom providers are constantly finding ways to reduce their power and cost per transmitted bit. As we showed earlier in this article, even one watt of power saved in an optical transceiver can snowball into major savings that providers and the environment can profit from. These improvements in the power consumption of optical transceivers can be achieved by deepening the integration of optical components and co-designing them with electronics. Highly compact and integrated optical systems can also be manufactured at greater scale and efficiency, reducing their financial and environmental costs. These details help paint a bigger picture for providers: sustainability now goes hand-in-hand with profitability.
Tags: 5G, data centers, EFFECT Photonics, efficiency, energy consumption, Photonics, Sustainability, Transceivers
When will the Network Edge go Coherent?
Article first published 27 July 2022, updated 12 April 2023.
Network carriers want to provide communications solutions in all areas: mobile access, cable networks, and fixed access for business customers. They want to offer their customers this extra capacity alongside innovative, personalized connectivity and entertainment services.
Deploying only legacy direct detect technologies will not be enough to cover these growing bandwidth and service demands of mobile, cable, and business access networks with the required reach. In several cases, networks must deploy more 100G coherent dense wavelength division multiplexing (DWDM) technology to transmit more information over long distances. Several applications in the optical network edge could benefit from upgrading from 10G DWDM or 100G grey aggregation uplinks to 100G DWDM optics:
- Mobile Mid-haul benefits from seamlessly upgrading existing uplinks from 10G to 100G DWDM.
- Mobile Backhaul benefits from upgrading their links to 100G IPoDWDM.
- Cable Access links could upgrade the uplinks of existing termination devices such as optical line terminals (OLTs) and Converged Cable Access Platforms (CCAPs) from 10G to 100G DWDM.
- Business Services could scale enterprise bandwidth beyond single-channel 100G grey links.
However, network providers have often stuck to their 10G DWDM or 100G grey links because the existing 100G DWDM solutions could not check all the required boxes. “Scaled-down” coherent 400ZR solutions had the required reach and tunability but were too expensive and power-hungry for many access network applications. Besides, ports in small to medium IP routers used in most edge deployments often do not support the QSFP-DD form factor commonly used in 400ZR modules but the QSFP28 form factor.
Fortunately, the rise of 100ZR solutions in the QSFP28 form factor is changing the landscape for access networks. “The access market needs a simple, pluggable, low-cost upgrade to the 10G DWDM optics that it has been using for years. 100ZR is that upgrade,” said Scott Wilkinson, Lead Analyst for Optical Components at market research firm Cignal AI. “As access networks migrate from 1G solutions to 10G solutions, 100ZR will be a critical enabling technology.”
In this article, we will discuss how the recent advances in 100ZR solutions will enable the evolution of different segments of the network edge: mobile midhaul and backhaul, business services, and cable.
How Coherent 100G Can Move into Mobile X-haul
The upgrade from 4G to 5G has shifted the radio access network (RAN) from a two-level structure with backhaul and fronthaul in 4G to a three-level structure with back-, mid-, and fronthaul:
- Fronthaul is the segment between the active antenna unit (AAU) and the distributed unit (DU)
- Midhaul is the segment from DU to the centralized unit (CU)
- Backhaul is the segment from CU to the core network.
The initial rollout of 5G has already happened in most developed countries, with many operators upgrading their 1G SFP transceivers to 10G SFP+ devices. Some of these 10G solutions had DWDM technology; many were single-channel grey transceivers. However, mobile networks must move to the next phase of 5G deployments, which requires installing and aggregating more and smaller base stations to exponentially increase the number of devices connected to the network.
These mature phases of 5G deployment will require operators to continue scaling fiber capacity cost-effectively with more widespread 10G DWDM SFP+ solutions and 25G SFP28 transceivers. These upgrades will put greater pressure on the aggregation segments of mobile backhaul and midhaul. These network segments have commonly used link aggregation to combine multiple 10G DWDM links into a higher-bandwidth group (such as 4x10G). However, this link aggregation requires splitting up larger traffic streams and can be complex to integrate across an access ring. A single 100G uplink would reduce the need for such link aggregation and simplify the network setup and operations. If you want to know more about the potential market and reach of this link aggregation upgrade, we recommend reading the recent Cignal AI report on 100ZR technologies.
Cable Migration to 10G PON Will Drive the Use of Coherent 100G Uplinks
According to Cignal AI’s 100ZR report, the biggest driver of 100ZR use will come from multiplexing fixed access network links upgrading from 1G to 10G. This trend will be reflected in cable networks’ long-awaited migration from Gigabit Passive Optical Networks (GPON) to 10G PON. This evolution is primarily guided by the new DOCSIS 4.0 standard, which promises 10Gbps download speeds for customers and will require several hardware upgrades in cable networks.
To multiplex these new larger 10Gbps customer links, cable providers and network operators need to upgrade their optical line terminals (OLTs) and Converged Cable Access Platforms (CCAPs) from 10G to 100G DWDM uplinks. Many of these new optical hubs will support up to 40 or 80 optical distribution networks (ODNs), too, so the previous approach of aggregating multiple 10G DWDM uplinks will not be enough to handle this increased capacity and higher number of channels.
Anticipating such needs, the non-profit R&D organization CableLabs has recently pushed to develop a 100G Coherent PON (C-PON) standard. Their proposal offers 100 Gbps per wavelength at a maximum reach of 80 km and up to a 1:512 split ratio. CableLabs anticipates C-PON and its 100G capabilities will play a significant role not just in cable optical network aggregation but in other use cases such as mobile x-haul, fiber-to-the-building (FTTB), long-reach rural scenarios, and distributed access networks.
Towards 100G Coherent and QSFP28 in Business Services
Almost every organization uses the cloud in some capacity, whether for development and test resources or software-as-a-service applications. While the cost and flexibility of the cloud are compelling, its use requires fast, high-bandwidth wide-area connectivity to make cloud-based applications work as they should.
Similarly to cable networks, these needs will require enterprises to upgrade their existing 1G Ethernet private lines to 10G Ethernet, which will also drive a greater need for 100G coherent uplinks. Cable providers and operators will also want to take advantage of their upgraded 10G PON networks and expand the reach and capacity of their business services.
The business and enterprise services sector was the earliest adopter of 100G coherent uplinks, deploying “scaled-down” 400ZR transceivers in the QSFP-DD form factor since they were the solution available at the time. However, these QSFP-DD slots also support QSFP28 form factors, so the rise of QSFP 100ZR solutions will provide these enterprise applications with a more attractive upgrade with lower cost and power consumption. These QSFP28 solutions had struggled to become more widespread before because they required the development of new, low-power digital signal processors (DSPs), but DSP developers and vendors are keenly jumping on board the 100ZR train and have announced their development projects: Acacia, Coherent/ADVA, Marvell/InnoLight, and Marvell/OE Solutions. This is also why EFFECT Photonics has announced its plans to co-develop a 100G DSP with Credo Semiconductor that best fits 100ZR solutions in the QSFP28 form factor.
Takeaways
In the coming years, 100G coherent uplinks will become increasingly widespread in deployments and applications throughout the network edge. Some mobile access networks use cases must upgrade their existing 10G DWDM link aggregation into a single coherent 100G DWDM uplink. Meanwhile, cable networks and business services are upgrading their customer links from 1Gbps to 10Gbps, and this migration will be the major factor that will increase the demand for coherent 100G uplinks. For carriers who provide converged cable/mobile access, these upgrades to 100G uplinks will enable opportunities to overlay more business services and mobile traffic into their existing cable networks.
As the QSFP28 100ZR ecosystem expands, production will scale up, and these solutions will become more widespread and affordable, opening up even more use cases in access networks.
Tags: 5G, access, aggregation, backhaul, capacity, DWDM, fronthaul, Integrated Photonics, LightCounting, metro, midhaul, mobile, mobile access, network, optical networking, optical technology, photonic integrated chip, photonic integration, Photonics, PIC, PON, programmable photonic system-on-chip, solutions, technology
Enabling a 100ZR Ecosystem
Given the success of 400ZR pluggable coherent solutions in the market, discussions in the telecom sector about a future beyond 400G pluggables have often focused on 800G solutions and 800ZR. However, there is also increasing excitement about “downscaling” to 100G coherent products for applications in the network edge. The industry is labeling these pluggables as 100ZR.
A recently released Heavy Reading survey revealed that over 75% of operators surveyed believe that 100G coherent pluggable optics will be used extensively in their edge and access evolution strategy. In response to this interest from operators, several vendors are keenly jumping on board the 100ZR train by announcing their development projects: Acacia, Coherent/ADVA, Marvell/InnoLight, and Marvell/OE Solutions.
This growing interest and use cases for 100ZR are also changing how industry analysts view the potential of the 100ZR market. Last February, Cignal AI released a report on 100ZR which stated that the viability of new low-power solutions in the QSFP28 form factor enabled use cases in access networks, thus doubling the size of their 100ZR shipment forecasts.
“The access market needs a simple, pluggable, low-cost upgrade to the 10G DWDM optics that it has been using for years. 100ZR is that upgrade. As access networks migrate from 1G solutions to 10G solutions, 100ZR will be a critical enabling technology.”
Scott Wilkinson, Lead Analyst for Optical Components at Cignal AI.
The 100ZR market can expand even further, however. Access networks are heavily price-conscious, and the lower the prices of 100ZR pluggables become, the more widely they will be adopted. Reaching such a goal requires a vibrant 100ZR ecosystem with multiple suppliers that can provide lasers, digital signal processors (DSPs), and full transceiver solutions that address the access market’s needs and price targets.
The Constraints of Power in the Access
Initially, 100G coherent solutions were focused on the QSFP-DD form factor that was popularized by 400ZR solutions. However, power consumption has prevented these QSFP-DD solutions from becoming more viable in the access network domain.
Unlike data centers and the network core, access network equipment lives in uncontrolled environments with limited cooling capabilities. Therefore, every extra watt of pluggable power consumption will impact how vendors and operators design their cabinets and equipment. QSFP-DD modules forced operators and equipment vendors to use larger cooling components (heatsinks and fans), meaning that each module would need more space to cool appropriately. The increased need for cabinet real estate makes these modules more costly to deploy in the access domain.
These struggles are a major reason why QSFP28 form factor solutions are becoming increasingly attractive in the 100ZR domain. Their power consumption (up to 6 watts) is lower than that of QSFP-DD form factors (up to 14 Watts), which allows them to be stacked more densely in access network equipment rooms. Besides, QSFP28 modules are compatible with existing access network equipment, which often features QSFP28 slots.
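As a rough illustration of the density argument, the sketch below compares how many modules fit under a fixed shelf cooling budget. The 360 W budget is a hypothetical placeholder, not a figure from any vendor or standard; the module powers are the ones quoted above.

```python
# Illustrative only: per-module power limits faceplate density under a fixed
# thermal budget. The shelf budget below is a hypothetical assumption.

def modules_per_shelf(thermal_budget_w: float, module_power_w: float) -> int:
    """How many pluggables fit before the shelf's cooling budget is exhausted."""
    return int(thermal_budget_w // module_power_w)

SHELF_BUDGET_W = 360.0  # hypothetical cooling budget for one access shelf

print(modules_per_shelf(SHELF_BUDGET_W, 6.0))   # QSFP28 at up to 6 W  -> 60 modules
print(modules_per_shelf(SHELF_BUDGET_W, 14.0))  # QSFP-DD at up to 14 W -> 25 modules
```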
Ecosystems to Overcome the Laser and DSP Bottlenecks
Even though QSFP28 modules are better at addressing the power concerns of the access domain, some obstacles prevent their wider availability.
Since QSFP28 pluggables have a lower power consumption and slightly smaller footprint requirements, they also need new laser and DSP solutions. The industry cannot simply incorporate the same lasers and DSPs used for 400ZR devices. This is why EFFECT Photonics has announced its plans to develop a pico tunable laser assembly (pTLA) and co-develop a 100G DSP that will best fit 100ZR solutions in the QSFP28 form factor.
However, a 100ZR industry with only one or two laser and DSP suppliers will struggle to scale up and make these solutions more widely accessible. The 400ZR market provides a good example of the benefits of a vibrant ecosystem. Four vendors are currently shipping DSPs for 400ZR solutions, and even more companies have announced plans to develop DSPs. This larger vendor ecosystem will help 400ZR production scale up in volume and satisfy a rapidly growing market.
While the 100ZR market is smaller than the 400ZR one, its ecosystem must follow its example and expand to enable new use cases and further increase the market size.
Standards and Interoperability Make 100ZR More Widespread
Another reason 400ZR solutions became so widespread is their standardization and interoperability. Previously, the 400G space was more fragmented, and pluggables from different vendors could not operate with each other, forcing operators to use a single vendor for their entire network deployment.
Eventually, datacom and telecom providers approached their suppliers and the Optical Internetworking Forum (OIF) about the need to develop an interoperable 400G coherent solution that addressed their needs. These discussions and technology development led the OIF to publish the 400ZR implementation agreement in 2020. This standardization and interoperability effort enabled the explosive growth of the 400G market.
100ZR solutions must follow a similar path to reach a larger market. If telecom and datacom operators want more widespread and affordable 100ZR solutions, more of them will have to join the push for 100ZR standardization and interoperability. This includes standards not just for the power consumption and line interfaces but also for management and control interfaces, enabling more widespread use of remote provisioning and diagnostics. These efforts will make 100ZR devices easier to implement across access networks.
Takeaways
The demand from access network operators for 100ZR solutions is there, but it has yet to fully materialize in the industry forecasts because, right now, there is not enough supply of viable 100ZR solutions that can address their targets. So in a way, further growth of the 100ZR market is a self-fulfilling prophecy: the more suppliers and operators support 100ZR, the easier it is to scale up the supply and meet the price and power targets of access networks, expanding the potential market. Instead of one or two vendors fighting for control of a smaller 100ZR pie, having multiple vendors and standardization efforts will increase the supply, significantly increasing the size of the pie and benefiting everyone's bottom line.
Therefore, EFFECT Photonics believes in the vision of a 100ZR ecosystem where multiple vendors can provide affordable laser, DSP, and complete transceiver solutions tailored to network edge use cases. Meanwhile, if network operators push towards greater standardization and interoperability, 100ZR solutions can become even more widespread and easy to use.
Tags: 100ZR, access networks, DSP, ecosystem, edge, laser, market, price, solutions
What DSPs Does the Cloud Edge Need?
By storing and processing data closer to the end user and reducing latency, smaller data…
By storing and processing data closer to the end user and reducing latency, smaller data centers on the network edge significantly impact how networks are designed and implemented. These benefits are causing the global market for edge data centers to explode, with PwC predicting that it will more than triple from $4 billion in 2017 to $13.5 billion in 2024. Various trends are driving the rise of the edge cloud: 5G networks and the Internet of Things (IoT), augmented and virtual reality applications, network function virtualization, and content delivery networks.
Several of these applications require lower latencies than before, and centralized cloud computing cannot deliver those data packets quickly enough. As shown in Table 1, a data center on a town or suburb aggregation point could halve the latency compared to a centralized hyperscale data center. Enterprises with their own data center on-premises can reduce latencies by 12 to 30 times compared to hyperscale data centers.
Type of Edge | Datacenter | Location | Number of DCs per 10M people | Average Latency | Size |
---|---|---|---|---|---|
On-premises edge | Enterprise site | Businesses | NA | 2-5 ms | 1 rack max |
Network (Mobile): Tower edge | Tower | Nationwide | 3000 | 10 ms | 2 racks max |
Network (Mobile): Outer edge | Aggregation points | Town | 150 | 30 ms | 2-6 racks |
Network (Mobile): Inner edge | Core | Major city | 10 | 40 ms | 10+ racks |
Regional edge | Regional | Major city | 100 | 50 ms | 100+ racks |
Not edge | Hyperscale | State/national | 1 | 60+ ms | 5000+ racks |
This situation leads to hyperscale data center providers cooperating with telecom operators to install their servers in the existing carrier infrastructure. For example, Amazon Web Services (AWS) is implementing edge technology in carrier networks and company premises (e.g., AWS Wavelength, AWS Outposts). Google and Microsoft have strategies and products that are very similar. In this context, edge computing poses a few problems for telecom providers. They must manage hundreds or thousands of new nodes that will be hard to control and maintain.
These conditions mean that optical transceivers for these networks, and thus their digital signal processors (DSPs), must combine low and flexible power consumption with smart features that allow them to adapt to different network conditions.
Using Adaptable Power Settings
Reducing power consumption in the cloud edge is not just about reducing the maximum power consumption of transceivers. Transceivers and DSPs must also be smart and decide whether to operate on low- or high-power mode depending on the optical link budget and fiber length. For example, if the transceiver must operate at its maximum capacity, a programmable interface can be controlled remotely to set the amplifiers at maximum power. However, if the operator uses the transceiver for just half of the maximum capacity, the transceiver can operate with lower power on the amplifiers. The transceiver uses energy more efficiently and sustainably by adapting to these circumstances.
Fiber monitoring is also an essential variable in this equation. A smart DSP could change its modulation scheme or lower the power of its semiconductor optical amplifier (SOA) if telemetry data indicates a good quality fiber. Conversely, if the fiber quality is poor, the transceiver can transmit with a more limited modulation scheme or higher power to reduce bit errors. If the smart pluggable detects that the fiber length is relatively short, the laser transmitter power or the DSP power consumption could be scaled down to save energy.
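To illustrate the kind of decision logic described in the last two paragraphs, here is a minimal, hypothetical sketch. The telemetry fields, thresholds, and mode names are illustrative assumptions, not an actual transceiver API or any vendor's firmware.

```python
# A minimal sketch of the adaptive behaviour described above. The telemetry fields,
# thresholds, and mode names are hypothetical illustrations, not an actual
# transceiver API or any vendor's firmware.
from dataclasses import dataclass

@dataclass
class LinkTelemetry:
    fiber_length_km: float
    pre_fec_ber: float           # bit error rate measured before error correction
    required_capacity_gbps: int

def choose_settings(t: LinkTelemetry) -> dict:
    """Pick amplifier power and modulation from link telemetry (illustrative rules)."""
    settings = {"modulation": "DP-16QAM", "soa_power": "high"}
    if t.pre_fec_ber < 1e-3 and t.fiber_length_km < 40:
        # Clean fiber on a short link: back off the SOA to save energy.
        settings["soa_power"] = "low"
    if t.required_capacity_gbps <= 100:
        # Running well below maximum capacity: a simpler modulation scheme suffices.
        settings["modulation"] = "DP-QPSK"
    return settings

print(choose_settings(LinkTelemetry(fiber_length_km=25, pre_fec_ber=5e-4, required_capacity_gbps=100)))
# {'modulation': 'DP-QPSK', 'soa_power': 'low'}
```

In a real module, these decisions would be driven through the programmable interfaces mentioned above rather than a standalone script, but the principle is the same: telemetry in, adjusted settings out.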
The Importance of a Co-Design Philosophy for DSPs
Transceiver developers often source their DSP, laser, and optical engine from different suppliers, so all these chips are designed separately. This setup reduces the time to market and simplifies the research and design processes, but it comes with performance and power consumption trade-offs.
In such cases, the DSP is like a Swiss army knife: a jack of all trades designed for different kinds of PICs but a master of none. Given the ever-increasing demand for capacity and the need for sustainability as both a financial and a social responsibility, transceiver developers increasingly need a steak knife rather than a Swiss army knife.
As we explained in a previous article about fit-for-platform DSPs, a transceiver optical engine designed on the indium phosphide platform could be designed to run at a voltage compatible with the DSP’s signal output. This way, the optimized DSP could drive the PIC directly without needing a separate analog driver, doing away with a significant power conversion overhead compared to a silicon photonics setup, as shown in the figure below.
Scaling the Edge Cloud with Automation
With the rise of edge data centers, telecom providers must manage hundreds or thousands of new nodes that will be hard to control and maintain. Furthermore, providers also need a flexible network with pay-as-you-go scalability that can handle future capacity needs. Automation is vital to achieving such flexibility and scalability.
Automation potential improves further by combining artificial intelligence with the software-defined networks (SDNs) framework that virtualizes and centralizes network functions. This creates an automated and centralized management layer that can allocate resources efficiently and dynamically. For example, the AI network controller can take telemetry data from the whole network to decide where to route traffic and adjust power levels, reducing power consumption.
In this context, smart digital signal processors (DSPs) and transceivers can give the AI controller more degrees of freedom to optimize the network. They could provide more telemetry to the AI controller so that it makes better decisions. The AI management layer can then remotely control programmable interfaces in the transceiver and DSP so that the optical links can adjust to varying network conditions. If you want to know more about these topics, you can read last week's article about transceivers in the age of AI.
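As a rough illustration of this control loop, the sketch below polls telemetry from a set of simulated transceivers and pushes back adjusted settings. The class, telemetry fields, and adjustment rule are hypothetical stand-ins rather than a real SDN controller or module management API.

```python
# An illustrative sketch of the centralized control loop described above. The class,
# telemetry fields, and adjustment rule are hypothetical stand-ins, not a real SDN
# controller or module management API.
class SimulatedTransceiver:
    def __init__(self, name: str, pre_fec_ber: float):
        self.name = name
        self.pre_fec_ber = pre_fec_ber
        self.tx_power_dbm = 0.0

    def read_telemetry(self) -> dict:
        return {"pre_fec_ber": self.pre_fec_ber}

    def configure(self, tx_power_dbm: float) -> None:
        self.tx_power_dbm = tx_power_dbm

def control_loop(transceivers) -> None:
    """Poll telemetry from each pluggable and push back adjusted settings."""
    for xcvr in transceivers:
        telemetry = xcvr.read_telemetry()
        if telemetry["pre_fec_ber"] < 1e-3:
            xcvr.configure(tx_power_dbm=-2.0)  # clean link: back off launch power
        else:
            xcvr.configure(tx_power_dbm=1.0)   # noisy link: raise launch power

fleet = [SimulatedTransceiver("node-a", 5e-4), SimulatedTransceiver("node-b", 2e-3)]
control_loop(fleet)
print({x.name: x.tx_power_dbm for x in fleet})  # {'node-a': -2.0, 'node-b': 1.0}
```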
Takeaways
Cloud-native applications require edge data centers that handle increased traffic and lower network latency. However, their implementation comes with the challenges of more data center interconnects and a massive increase in nodes to manage. Scaling edge data center networks will require greater automation and more flexible power management, and smarter DSPs and transceivers will be vital to enable these goals.
Co-design approaches can optimize the interfacing of the DSP with the optical engine, making the transceiver more power efficient. Further power consumption gains can also be achieved with smarter DSPs and transceivers that provide telemetry data to centralized AI controllers. These smart network components can then adjust their power output based on the decisions and instructions of the AI controller.
Tags: 5G, AI, artificial intelligence, Augmented Reality, automation, cloud edge, Co-Design Philosophy, data centers, Delivery Networks, DSPs, Edge Cloud, Fiber Monitoring, Hyperscale Data Center, Internet of Things, IoT, latency, Network Function Virtualization, optical transceivers, power consumption, Software-Defined Networks, Telecom Operators, Virtual Reality
Transceivers in the Age of AI
Artificial intelligence (AI) will have a significant role in making optical networks more scalable, affordable,…
Artificial intelligence (AI) will have a significant role in making optical networks more scalable, affordable, and sustainable. It can gather information from devices across the optical network to identify patterns and make decisions independently without human input. By synergizing with other technologies, such as network function virtualization (NFV), AI can become a centralized management and orchestration network layer. Such a setup can fully automate network provisioning, diagnostics, and management, as shown in the diagram below.
However, artificial intelligence and machine learning algorithms are data-hungry. To work optimally, they need information from all network layers and ever-faster data centers to process it quickly. Pluggable optical transceivers thus need to become smarter, relaying more information back to the AI central unit, and faster, enabling increased AI processing.
Faster Transceivers for the Age of AI
Optical transceivers are crucial in developing better AI systems by facilitating the rapid, reliable data transmission these systems need to do their jobs. High-speed, high-bandwidth connections are essential to interconnect data centers and supercomputers that host AI systems and allow them to analyze a massive volume of data.
In addition, optical transceivers are essential for facilitating the development of artificial intelligence-based edge computing, which entails relocating compute resources to the network’s periphery. This is essential for facilitating the quick processing of data from Internet-of-Things (IoT) devices like sensors and cameras, which helps minimize latency and increase reaction times.
400 Gbps links are becoming the standard across data center interconnects, but providers are already considering the next steps. LightCounting forecasts significant growth in the shipments of dense-wavelength division multiplexing (DWDM) ports with data rates of 600G, 800G, and beyond in the next five years. We discuss these solutions in greater detail in our article about the roadmap to 800G and beyond.
Coherent Modules Need to Provide More Telemetry Data
Mobile networks now and in the future will consist of a massive number of devices, software applications, and technologies. Self-managed, zero-touch automated networks will be required to handle all these new devices and use cases. Realizing this full network automation requires two vital components.
- Artificial intelligence and machine learning algorithms for comprehensive network automation: For instance, AI in network management can drastically cut the energy usage of future telecom networks.
- Sensor and control data flow across all network model layers, including the physical layer: As networks grow in size and complexity, the management and orchestration (MANO) software needs more degrees of freedom and dials to turn.
These goals require smart optical equipment and components that provide comprehensive telemetry data about their status and the fiber they are connected to. The AI-controlled centralized management and orchestration layer can then use this data for remote management and diagnostics. We discuss this topic further in our previous article on remote provisioning, diagnostics, and management.
For example, a smart optical transceiver that fits this centralized AI-management model should relay data to the AI controller about fiber conditions. Such monitoring is not just limited to finding major faults or cuts in the fiber but also smaller degradations or delays in the fiber that stem from age, increased stress in the link due to increased traffic, and nonlinear optical effects. A transceiver that could relay all this data allows the AI controller to make better decisions about how to route traffic through the network.
A Smart Transceiver to Rule All Network Links
After relaying data to the AI management system, a smart pluggable transceiver must also switch parameters to adapt to different use cases and instructions given by the controller.
Let’s look at an example of forward error correction (FEC). FEC makes the coherent link much more tolerant to noise than a direct detect system and enables much longer reach and higher capacity. In other words, FEC algorithms allow the DSP to enhance the link performance without changing the hardware. This enhancement is analogous to imaging cameras: image processing algorithms allow the lenses inside your phone camera to produce a higher-quality image.
A smart transceiver and DSP could switch among different FEC algorithms to adapt to network performance and use cases. Let's look at the case of upgrading a long metro link of 650 km running at 100 Gbps with open FEC. The operator needs to increase that link capacity to 400 Gbps, but open FEC could struggle to provide the necessary link performance. However, if the transceiver can be remotely reconfigured to use a proprietary FEC, it could handle the upgraded link.
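The sketch below illustrates this kind of remote FEC selection. The reach figures are rough, illustrative values loosely based on the 400ZR/OpenZR+/400ZR+ comparison later in this document, and the selection rule is a simplification rather than how any specific DSP makes this choice.

```python
# A sketch of the remote FEC reconfiguration decision described above. The reach figures
# are rough illustrations loosely based on the 400ZR/OpenZR+/400ZR+ comparison later in
# this document; the selection rule itself is hypothetical.
FEC_OPTIONS = [
    # (name, rough reach at 400G in km, relative power cost -- illustrative values)
    ("CFEC", 120, 1.0),
    ("oFEC", 480, 1.3),
    ("proprietary FEC", 1000, 1.7),
]

def select_fec(link_length_km: float) -> str:
    """Pick the least power-hungry FEC whose reach still covers the link."""
    for name, reach_km, _power in FEC_OPTIONS:
        if reach_km >= link_length_km:
            return name
    raise ValueError("No available FEC option covers this link length")

print(select_fec(650))  # 'proprietary FEC': oFEC's ~480 km reach falls short of 650 km
```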
Reconfigurable transceivers are also useful for auto-configuring links to deal with specific network conditions, especially in brownfield links. Let's return to the fiber monitoring subject we discussed in the previous section. A transceiver can change its modulation scheme or lower the power of its semiconductor optical amplifier (SOA) if telemetry data indicates a good quality fiber. Conversely, if the fiber quality is poor, the transceiver can transmit with a more limited modulation scheme or higher power to reduce bit errors. If the smart pluggable detects that the fiber length is relatively short, the laser transmitter power or the DSP power consumption could be scaled down to save energy.
Takeaways
Optical networks will need artificial intelligence and machine learning to scale more efficiently and affordably to handle the increased traffic and connected devices. Conversely, AI systems will also need faster pluggables than before to acquire data and make decisions more quickly. Pluggables that fit this new AI era must be fast, smart, and adapt to multiple use cases and conditions. They will need to scale up to speeds beyond 400G and relay monitoring data back to the AI management layer in the central office. The AI management layer can then program transceiver interfaces from this telemetry data to change parameters and optimize the network.
Tags: 800G, 800G and beyond, adaptation, affordable, AI, artificial intelligence, automation, CloudComputing, data, DataCenter, EFFECT Photonics, FEC, fiber quality, innovation, integration, laser arrays, machine learning, network conditions, network optimization, Networking, optical transceivers, photonic integration, Photonics, physical layer, programmable interface, scalable, sensor data flow, technology, Telecommunications, telemetry data, terabyte, upgrade, virtualization
Coherent Optics In Space
When it started, the space race was a competition between two superpowers, but now there…
When it started, the space race was a competition between two superpowers, but now there are 90 countries with missions in space.
The prices of space travel have gone down, making it possible for more than just governments to send rockets and satellites into space. Several private companies are now investing in space programs, looking for everything from scientific advances to business opportunities. Some reports estimate more than 10,000 companies in the space industry and around 5,000 investors.
According to The Space Foundation's 2022 report, the space economy was worth $469 billion in 2021. The report says more spacecraft were launched in the first six months of 2021 than in the first 52 years of space exploration (1957-2009). This growing industry thus has a growing need for technology products across many disciplines, including telecommunications. The space sector will need lighter, more affordable telecommunication systems that also provide increased bandwidth.
This is why EFFECT Photonics sees future opportunities for coherent technology in the space industry. By translating the coherent transmission from fiber communication systems on the ground to free-space optical systems, the space sector can benefit from solutions with more bandwidth capacity and less power consumption than traditional point-to-point microwave links.
It’s all About SWaP
One of the major concerns of the space industry is the cost of sending anything into space. Even during the days of NASA’s Space Shuttle program (which featured a reusable shuttle unit), sending a kilogram into space cost tens of thousands of dollars. Over time, more rocket stages have become reusable due to the efforts of companies like SpaceX, reducing these costs to just a few thousand. The figure below shows how the cost of space flight has decreased significantly in the last two decades.
Even though space travel is more affordable than ever, size, weight, and power (SWaP) requirements are still vital in the space industry. After all, shaving off weight or size in the spacecraft means a less expensive launch or perhaps room for more scientific instruments. Meanwhile, less power consumption means less drain on the spacecraft’s energy sources.
Using Optics and Photonics to Minimize SWaP Requirements
Currently, most space missions use bulkier radio frequency communications to send data to and from spacecraft. While radio waves have a proven track record of success in space missions, generating and collecting more mission data requires enhanced communications capabilities. Besides, radiofrequency equipment can often generate a lot of heat, requiring more energy to cool the system.
Decreasing SWaP requirements can be achieved with more photonics and miniaturization. Transmitting data with light usually dissipates less heat than transmission with electrical signals and radio waves. This leads to smaller, lighter communication systems that require less power to run.
These SWaP advantages come alongside increased transmission speeds. After all, coherent optical communications can increase link capacities to spacecraft and satellites by 10 to 100 times compared to radio frequency systems.
Leveraging Electronics Ecosystems for Space Certification and Standardization
While integrated photonics can boost space communications by lowering the payload, it must overcome the obstacles of a harsh space environment, which include radiation hardness, an extreme operational temperature range, and vacuum conditions.
Mission Type | Temperature Range |
---|---|
Pressurized Module | +18.3 °C to +26.7 °C |
Low-Earth Orbit (LEO) | -65 °C to +125 °C |
Geosynchronous Equatorial Orbit (GEO) | -196 °C to +128 °C |
Trans-Atmospheric Vehicle | -200 °C to +260 °C |
Lunar Surface | -171 °C to +111 °C |
Martian Surface | -143 °C to +27 °C |
The values in Table 1 show the unmanaged environmental temperatures in different space environments. In a temperature-managed area, these would decrease significantly for electronics and optics systems, perhaps by as much as half. Despite this management, the equipment would still need to deal with some extreme temperature values.
Fortunately, a substantial body of knowledge exists to make integrated photonics compatible with space environments. After all, photonic integrated circuits (PICs) use similar materials to their electronic counterparts, which have already been space qualified in many implementations.
Much research has gone into overcoming the challenges of packaging PICs with electronics and optical fibers for these space environments, which must include hermetic seals and avoid epoxies. Commercial solutions, such as those offered by PHIX Photonics Assembly, Technobis IPS, and the PIXAPP Photonic Packaging Pilot Line, are now available.
Takeaways
Whenever you want to send data from point A to B, photonics is usually the most efficient way of doing it, be it over fiber or free space.
Offering optical communication systems in a small integrated package that can resist the required environmental conditions will significantly benefit the space sector and its need to minimize SWaP requirements. These optical systems can increase their transmission capacity with the coherent optical transmission used in fiber optics. Furthermore, by leveraging the assembly and packaging structure of electronics for the space sector, photonics can also provide the systems with the ruggedness required to live in space.
Tags: certification, coherent, electronics, existing, fast, growing, heat dissipation, miniaturization, Optical Communication, Photonics, power consumption, size, space, space sector, speed, SWAP, temperature, weight
Designing in the Great Lakes
Last year, EFFECT Photonics announced the acquisition of the coherent optical digital signal processing (DSP)…
Last year, EFFECT Photonics announced the acquisition of the coherent optical digital signal processing (DSP) and forward error correction (FEC) business unit from the global communications company Viasat Inc. This also meant welcoming to the EFFECT Photonics family a new engineering team who will continue to work in the Cleveland area.
As EFFECT Photonics expands its influence into the American Midwest, it is interesting to dive deeper into Cleveland's history with industry and technology. Cleveland has enjoyed a long history as a Midwest industrial hub, and as these traditional industries have declined, it is evolving into one of the high-tech hubs of the region.
Cleveland and the Industrial Revolution
Cleveland’s industrial sector expanded significantly in the 19th century because of the city’s proximity to several essential resources and transportation routes: coal and iron ore deposits, the Ohio and Erie Canal, and the Lake Erie railroad. For example, several steel mills, such as the Cleveland Rolling Mill Company and the Cleveland Iron and Steel Company, emerged because of the city’s proximity to Lake Erie, facilitating the transportation of raw materials and goods.
Building on the emerging iron and steel industries, heavy equipment production also found a home in Cleveland. Steam engines, railroad equipment, and other forms of heavy machinery were all manufactured in great quantities in the city.
Cleveland saw another massive boost to its industrial hub status with the birth of the Standard Oil Company in 1870. At the peak of its power, Standard Oil was the largest petroleum company in the world, and its success made its founder and head, John D. Rockefeller, one of the wealthiest men of all time. This history with petroleum also led to the emergence of Cleveland’s chemicals and materials industry.
Many immigrants moved to Cleveland, searching for work in these expanding industries, contributing to the city’s rapid population boom. This growth also prompted the development of new infrastructure like roads, railways and bridges to accommodate the influx of people.
Several important electrical and mechanical equipment manufacturers, including the Bendix Corporation, the White Motor Company, and the Western Electric Company (which supplied equipment to the US Bell System), also established their headquarters in or around Cleveland in the late 19th and early 20th century.
From Traditional Industry to Healthcare and High-Tech
In the second half of the 20th century, Cleveland's traditional industries, such as steel and manufacturing, began to collapse. As was the case in many other great American manufacturing centers, automation, globalization, and other socioeconomic shifts all had a role in this decline. The demise of Cleveland's core industries was a significant setback, but the city has made substantial efforts in recent years to diversify its economy and grow in new technology and healthcare areas.
For example, the Cleveland Clinic is one of the leading US academic medical centers, with pioneering medical breakthroughs such as the first coronary artery bypass surgery and the first face transplant in the United States. Institutions like theirs or the University Hospitals help establish Cleveland as a center for healthcare innovation.
Cleveland is also trying to evolve as a high-tech hub that attracts new workers and companies, especially in software development. Companies are attracted by the low office leasing and other operating costs, while the affordable living costs attract workers. As reported by the real estate firm CBRE, Cleveland’s tech workforce grew by 25 percent between 2016 and 2021, which was significantly above the national average of 12.8 percent.
A New Player in Cleveland’s High-Tech Industry
As Cleveland’s history as a tech hub continues, EFFECT Photonics is excited to join this emerging tech environment. Our new DSP team will find its new home in the Wagner Awning building in the Tremont neighborhood of Cleveland’s West Side.
This building was erected in 1895 and hosted a sewing factory that manufactured everything from tents and flotation devices for American soldiers and marines to awnings for Cleveland buildings. When the Ohio Awning company announced its relocation in 2015, this historic building began a redevelopment process to become a new office and apartment space.
EFFECT Photonics is proud to become a part of Cleveland’s rich and varied history with industry and technology. We hope our work can help develop this city further as a tech hub and attract more innovators and inventors to Cleveland.
Tags: digital signal processing (DSP), EFFECT Photonics, forward error correction (FEC), high-tech hub, industrial history, Integrated Photonics, Ohio Awning Company, Photonics, Tremont neighborhood, Viasat Inc., Wagner Awning building
What Tunable Lasers Does the Network Edge Need?
Several applications in the optical network edge would benefit from upgrading from 10G to 100G…
Several applications in the optical network edge would benefit from upgrading from 10G to 100G DWDM or from 100G grey to 100G DWDM optics:
- Business Services could scale enterprise bandwidth beyond single-channel 100G links.
- Fixed Access links could upgrade the uplinks of existing termination devices such as optical line terminals (OLTs) and Converged Cable Access Platforms (CCAPs) from 10G to 100G DWDM.
- Mobile Midhaul benefits from a seamless upgrade of existing links from 10G to 100G DWDM.
- Mobile Backhaul benefits from upgrading its links to 100G IPoDWDM.
The 100G coherent pluggables for these applications will have very low power consumption (less than 6 Watts) and be deployed in uncontrolled environments. To enable this next generation of coherent pluggables, the next generation of tunable lasers needs enhanced optical and electronic integration, more configurability so that users can optimize their pluggable footprint and power consumption, and the leveraging of electronic ecosystems.
The Past and Future Successes of Increased Integration
Over the last decade, technological progress in tunable laser packaging and integration has matched the need for smaller footprints. In 2011, tunable lasers followed the multi-source agreement (MSA) for integrable tunable laser assemblies (ITLAs). The ITLA package measured around 30.5 mm in width and 74 mm in length. By 2015, tunable lasers were sold in the more compact Micro-ITLA form factor, which cut the original ITLA package size in half. And in 2019, laser developers (see examples here and here) announced a new Nano-ITLA form factor that reduced the size by almost half again.
Integration also has a major impact on power consumption, since smaller, highly integrated lasers usually consume less power than bulkier lasers with more discrete components. Making the laser smaller and more integrated means that the light inside the chip travels a shorter distance and therefore accumulates fewer optical losses.
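A toy calculation makes the point: if propagation loss is roughly proportional to the on-chip path length, halving the path halves the accumulated loss. The loss-per-centimeter figure below is an arbitrary example value, not a measured number for any specific laser or waveguide platform.

```python
# A toy calculation of why a shorter on-chip light path accumulates less loss. The
# propagation-loss figure is an arbitrary example value, not a measured number for
# any specific laser or waveguide platform.
def path_loss_db(loss_db_per_cm: float, path_length_cm: float) -> float:
    """Total propagation loss along an on-chip optical path."""
    return loss_db_per_cm * path_length_cm

EXAMPLE_LOSS_DB_PER_CM = 2.0  # assumed value for illustration only
print(path_loss_db(EXAMPLE_LOSS_DB_PER_CM, 1.0))  # 2.0 dB over a 1.0 cm path
print(path_loss_db(EXAMPLE_LOSS_DB_PER_CM, 0.5))  # 1.0 dB after halving the path
```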
Reducing the footprint of tunable lasers in the future will need even greater integration of their component parts. For example, every tunable laser needs a wavelength locker component that can stabilize the laser’s output regardless of environmental conditions such as temperature. Integrating the wavelength locker component on the laser chip instead of attaching it externally would help reduce the laser package’s footprint and power consumption.
Configurability and Optimization
Another important aspect of optimizing pluggable module footprint and power consumption is allowing transceiver developers to mix and match their transceiver building blocks.
Let’s discuss an example of such configurability. The traditional ITLA in transceivers contains the temperature control driver and power converter functions. However, the main transceiver board can usually provide these functions too.
A setup in which the main board performs these driver and converter functions would avoid the need for redundant elements in both the main board and tunable laser. Furthermore, it would give the transceiver developer more freedom to choose the power converter and driver blocks that best fit their footprint and power consumption requirements.
Such configurability will be particularly useful in the context of the new generation of 100G coherent pluggables. After all, these 100G pluggables must fit tunable lasers, digital signal processors, and optical engines in a QSFP28 form factor that is slightly smaller than the QSFP-DD size used for 400G transceivers.
Looking Towards Electronics Style Packaging
The photonics production chain must be increasingly automated and standardized to save costs and increase accessibility. To achieve this goal, it is helpful to study established practices in the fields of electronics packaging, assembly, and testing.
By using BGA-style packaging and flip-chip bonding techniques that are now common in electronics, as well as passive optical fiber alignment, photonics packaging can also become more affordable and accessible. You can read more about these methods in our article about leveraging electronic ecosystems in photonics.
These kinds of packaging methods not only improve the scalability (and therefore cost) of laser production, but they can also further reduce the size of the laser.
Takeaways
Tunable lasers for coherent pluggable transceivers face the complex problem of maintaining a good enough performance while moving to smaller footprints, lower cost, and lower power consumption. Within a decade, the industry moved from the original integrable tunable laser assembly (ITLA) module to micro-ITLAs and then nano-ITLAs. Each generation had roughly half the footprint of the previous one.
However, the need for 100G coherent pluggables for the network edge imposes even tighter footprint and power consumption constraints on tunable lasers. Increased integration, more configurability of the laser and transceiver building blocks, and the leveraging of electronic ecosystems will help tunable lasers get smaller and more power-efficient to enable these new application cases in edge and access networks.
Tags: automation, cost, efficiency, energy efficient, nano ITLA, optimization, power, size, smaller, testing, wavelength locker
What DSPs Does the Network Edge Need?
Operators are strongly interested in 100G pluggables that can house tunable coherent optics in compact,…
Operators are strongly interested in 100G pluggables that can house tunable coherent optics in compact, low-power form factors like QSFP28. A recently released Heavy Reading survey revealed that over 75% of operators surveyed believe that 100G coherent pluggable optics will be used extensively in their edge and access evolution strategy.
These new 100G coherent pluggables will have very low power consumption (less than six Watts) and will be deployed in uncontrolled environments, imposing new demands on coherent digital signal processors (DSPs). To enable this next generation of coherent pluggables in the network edge, the next generation of DSPs needs ultra-low power consumption, co-designing with the optical engine, and industrial hardening.
The Power Requirements of the Network Edge
Several applications in the network edge can benefit from upgrading their existing 10G DWDM or 100G grey links to 100G DWDM, such as the aggregation of fixed and mobile access networks and 100G data center interconnects for enterprises. However, network providers have often chosen to stick to their 10G links because the existing 100G solutions do not check all the required boxes.
100G direct detect pluggables have a more limited reach and are not always compatible with DWDM systems. "Scaled-down" coherent 400ZR solutions have the required reach and tunability, but they are too expensive and power-hungry for edge applications. Besides, the ports in the small and medium IP routers used in edge deployments often do not support the QSFP-DD modules commonly used for 400ZR, only QSFP28 modules.
The QSFP28 form factor imposes tighter footprint and power consumption constraints on coherent technologies compared to QSFP-DD modules. QSFP28 is slightly smaller, and most importantly, it can handle at most a 6-Watt power consumption, in contrast with the typical 15-Watt consumption of QSFP-DD modules in 400ZR links. Fortunately, the industry is moving towards a proper 100ZR solution in the QSFP28 form factor that balances performance, footprint, and power consumption requirements for the network edge.
These power requirements also impact DSP power consumption. DSPs constitute roughly 50% of coherent transceiver power consumption, so a DSP optimized for the network edge 100G use cases should aim to consume at most 2.5 to 3 Watts of power.
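As a quick sanity check, the arithmetic behind that target can be written out in a couple of lines, using only the figures already quoted above: a 6 W module limit and a DSP share of roughly half the transceiver power.

```python
# The power-budget arithmetic behind the 2.5-3 W DSP target, using only figures quoted
# in the text above: a 6 W module limit and a DSP share of roughly half the total
# transceiver power consumption.
MODULE_POWER_LIMIT_W = 6.0  # QSFP28 limit for 100ZR-class pluggables
DSP_SHARE_OF_POWER = 0.5    # "roughly 50%" of coherent transceiver power

dsp_budget_w = MODULE_POWER_LIMIT_W * DSP_SHARE_OF_POWER
print(f"Upper bound for the DSP power budget: {dsp_budget_w:.1f} W")  # 3.0 W
```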
Co-Designing and Adjusting for Power Efficiency
Achieving this ambitious power target will require scaling down performance in some areas and designing smartly in others. Let’s discuss a few examples below.
- Modulation: 400ZR transceivers use a more power-hungry 16QAM modulation. This modulation scheme uses sixteen different states that arise from combining four different intensity levels and four phases of light. The new generation of 100ZR transceivers might use some variant of QPSK modulation, which only uses four states from four different phases of light (see the sketch after this list).
- Forward Error Correction (FEC): DSPs in 400ZR transceivers use a more advanced concatenated FEC (CFEC) code, which combines inner and outer FEC codes to enhance the performance compared to a standard FEC code. The new 100ZR transceivers might use a more basic FEC type like GFEC. This is one of the earliest optical FEC algorithms and was adopted as part of the ITU G.709 specification.
- Co-Designing DSP and Optical Engine: As we explained in a previous article about fit-for-platform DSPs, a transceiver optical engine designed on the indium phosphide platform could be designed to run at a voltage compatible with the DSP’s signal output. This way, the optimized DSP could drive the PIC directly without needing a separate analog driver, doing away with a significant power conversion overhead compared to a silicon photonics setup, as shown in the figure below.
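To make the modulation trade-off above more tangible, the short sketch below estimates the symbol rate each option needs. The bits per symbol follow directly from the constellation sizes, while the 15% allowance for FEC and framing overhead is an assumed round number rather than a figure from the 100ZR or 400ZR specifications.

```python
# Back-of-the-envelope comparison of the two modulation options above. Bits per
# symbol follow from the constellation sizes; the 15% allowance for FEC and framing
# overhead is an assumed round number, not a value from any ZR specification.
import math

def required_baud(net_rate_gbps: float, constellation_points: int,
                  polarizations: int = 2, overhead: float = 0.15) -> float:
    """Symbol rate (GBd) needed to carry a net data rate on a dual-polarization carrier."""
    bits_per_symbol = math.log2(constellation_points) * polarizations
    return net_rate_gbps * (1 + overhead) / bits_per_symbol

print(f"400G DP-16QAM: roughly {required_baud(400, 16):.0f} GBd")  # ~57 GBd
print(f"100G DP-QPSK:  roughly {required_baud(100, 4):.0f} GBd")   # ~29 GBd
```

The lower symbol rate and simpler constellation of DP-QPSK are a big part of why a 100ZR DSP can get away with far less power than its 400ZR counterpart.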
Industrial Hardening for DSPs
Traditionally, coherent devices have resided in the controlled settings of data center machine rooms or network provider equipment rooms. These rooms have active temperature control, cooling systems, dust and particle filters, airlocks, and humidity control. In such a setting, pluggable transceivers must operate within the so-called commercial temperature range (c-temp) from 0 to 70 °C.
On the other hand, the network edge often involves uncontrolled settings outdoors at the whims of Mother Nature. It might be at the top of an antenna, on mountain ranges, within traffic tunnels, or in Northern Europe's severe winters. For these outdoor settings, transceivers should operate in the industrial temperature range (I-temp) from -40 to 85 °C. Higher-altitude deployments provide additional challenges too. Because the air gets thinner, networking equipment cooling mechanisms become less effective, and the device cannot withstand casing temperatures as high as it could at sea level.
Takeaways
The network edge could benefit from switching their existing direct detect or grey links to 100G DWDM coherent. However, the industry needs more affordable and power-efficient transceivers and DSPs specifically designed for coherent 100G transmission in edge and access networks. By realizing DSPs co-designed with the optics, adjusted for reduced power consumption, and industrially hardened, the network edge will have coherent DSP and transceiver products adapted to their needs.
Tags: 100ZR, access network, co-design, controlled, edge, fit for platform DSP, InP, low power, pluggables, power consumption, power conversion, QSFP 28, QSFP-DD
A Story of Standards: From 400ZR and Open ROADM to OpenZR+
Coherent optical transmission has been crucial in addressing network operator problems for the last decade. In this time, coherent technology has expanded from a solution reserved only for premium long-distance links to one that impacts data center interconnects, metro, and access networks…
Coherent optical transmission has been crucial in addressing network operator problems for the last decade. In this time, coherent technology has expanded from a solution reserved only for premium long-distance links to one that impacts data center interconnects, metro, and access networks, as explained in the video below.
The development of the 400ZR standard by the Optical Internetworking Forum (OIF) proved to be a milestone in this regard. It was the result of several years of progress in electronic and photonic integration that enabled the miniaturization of 400G coherent systems into smaller pluggable form factors. With small enough modules to pack a router faceplate densely, the datacom sector could profit from an ideal solution for high-capacity data center interconnects. Telecom operators wanted to implement a similar solution in their metro links, so they combined the Open ROADM standardization initiative with the 400ZR initiative to develop the OpenZR+ agreement that better fits their use cases.
This article will elaborate on these standardization projects (400ZR, Open ROADM, and OpenZR+) and explain what use cases each was designed to tackle.
What Is 400G ZR?
To cope with growing bandwidth demands, providers wanted to implement the concept of IP over DWDM (IPoDWDM), in which tunable optics are integrated into the router. This integration eliminates the optical transponder shelf and the optics between the routers and DWDM systems, reducing the network capital expenditure (CAPEX). This is shown in the figure below.
However, widely deploying IPoDWDM with coherent optics forced providers to face a router faceplate trade-off. Since DWDM modules have traditionally been much larger than the client optics, plugging DWDM modules into the router required sacrificing roughly half of the costly router faceplate capacity. This was unacceptable for datacom and telecom providers, who approached their suppliers and the Optical Internetworking Forum (OIF) about the need to develop a compact and interoperable coherent solution that addressed this trade-off.
These discussions and technology development led the OIF to publish the 400ZR implementation agreement in 2020. 400ZR specifies a (relatively) low-cost and interoperable 400 Gigabit coherent interface for a link with a single optical wavelength, using a double-polarization, 16-state quadrature amplitude modulation scheme (DP-16QAM). This modulation scheme uses sixteen different constellation points that arise from combining four different intensity levels and four phases. It doubles the usual 16-QAM transmission capacity by encoding information in two different polarizations of light.
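For readers who prefer to see the constellation explicitly, the snippet below builds a conventional square 16QAM constellation from four amplitude levels on each quadrature axis, which is one common way of realizing the sixteen states described above, and counts the bits per symbol that dual polarization provides. It is purely illustrative and not part of the 400ZR specification.

```python
# A sketch of a conventional square 16QAM constellation: sixteen points formed by
# combining four amplitude levels on each quadrature axis. This is one common way to
# realize the sixteen states described above; DP-16QAM sends one such symbol per polarization.
import itertools
import math

LEVELS = (-3, -1, 1, 3)  # normalized amplitude levels on the I and Q axes
constellation = [complex(i, q) for i, q in itertools.product(LEVELS, LEVELS)]

bits_per_symbol = int(math.log2(len(constellation)))     # 4 bits per symbol per polarization
print(len(constellation), bits_per_symbol)               # 16 4
print("DP-16QAM bits per symbol:", bits_per_symbol * 2)  # 8, thanks to the two polarizations
```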
The agreement specified a link reach of 80 kilometers without amplification and 120 kilometers with amplification. For forward error correction (FEC), the 400ZR standard supports a concatenated FEC (CFEC) method. CFEC combines inner and outer FEC codes to enhance the performance compared to a standard FEC code.
The 400ZR agreement does not specify a particular size or type of module, but its specifications targeted a footprint and power consumption that could fit in smaller modules such as the Quad Small Form-Factor Pluggable Double-Density (QSFP-DD) and Octal-Small Form-Factor Pluggable (OSFP). These form factors are small enough to provide the faceplate density that telecom and especially datacom operators need in their system architectures. So even if we often associate the 400ZR standard with QSFP-DD, other form factors, such as CFP2, can be used.
What Is Open ROADM?
In parallel with the 400ZR standardization efforts, telecom network operators had a different ongoing discussion.
Reconfigurable Optical Add-Drop Multiplexers (ROADMs) were a game-changer for optical communications when they entered the market in the 2000s. Before this technology, optical networks featured inefficient fixed routes and could not adapt to changes in traffic and demand. ROADMs allowed operators to remotely provision and manage their wavelength channels and bandwidth without redesigning the physical network infrastructure.
However, ROADMs were proprietary hardware with proprietary software. Changing the proprietary ROADM platform needed extensive testing and a lengthy integration process, so operators were usually reluctant to look for other platform alternatives. Besides, ROADMs still had several fixed, pre-defined elements that could have been configurable through open interfaces. This environment led to reduced competition and innovation in the ROADM space.
These trends drove the launch of the Open ROADM project in 2016 and the release of their first Multi-Source Agreement in 2017. The project aimed to disaggregate and open up these traditionally proprietary ROADM systems and make their provisioning and control more centralized through technologies such as software-defined networks (SDNs, explained in the diagram below).
The Open ROADM project defined three disaggregated functions (pluggable optics, transponder, and ROADM), all controlled through an open standards-based API that could be accessed through an SDN controller. It defined 100G-400G interfaces for both Ethernet and Optical Transport Networking (OTN) protocols with a link reach of up to 500km. It also defined a stronger FEC algorithm called open FEC (oFEC) to support this reach. oFEC provides a greater enhancement than CFEC at the cost of more overhead and energy.
What Is OpenZR+?
The 400ZR agreement was primarily focused on addressing the needs of large-scale data center operators and their suppliers.
While it had some usefulness for telecom network operators, their transport network links usually span several hundreds of kilometers, so the interface and module power consumption defined in the 400ZR agreement could not handle such an extended reach. Besides, network operators needed extra flexibility when defining the transmission rate and the modulation type of their links.
Therefore, soon after the publication of the 400ZR agreement, the OpenZR+ Multi-Source Agreement (MSA) was published in September 2020. As the diagram below explains, this agreement can be seen as a combination of the 400ZR and Open ROADM standardization efforts.
To better fit the telecom use cases of regional and long-haul transport links, OpenZR+ added a few changes to improve the link reach and flexibility over 400ZR:
- Using the more powerful oFEC defined by the Open ROADM standard.
- Multi-rate Ethernet that enables the multiplexing of 100G and 200G signals. This provides more options to optimize traffic in transport links.
- Support for 100G, 200G, 300G, or 400G transport links using different modulation types (QPSK, 8QAM, or 16QAM). This enables further reach and capacity optimization for fiber links.
- Higher dispersion compensation to make the fiber link more robust.
These changes allow QSFP-DD and OSFP modules to reach link lengths of up to 480 km (with optical amplifiers) at a 400G data rate. However, the FEC and dispersion compensation improvements that enable this extended reach come at the price of increased energy consumption. While the 400ZR standard targets a power consumption of 15 Watts, OpenZR+ standards aim for a power consumption of up to 20 Watts.
If operators need more performance, distances above 500 km, and support for OTN traffic (400ZR and OpenZR+ only support Ethernet), they must still use proprietary solutions, which are informally called 400ZR+. These 400ZR+ solutions feature larger module sizes (CFP2), higher performance proprietary FEC, and higher launch powers to achieve longer reach. These more powerful features come at the cost of even more power consumption, which can go up to 25 Watts.
Takeaways
The following table summarizes the use cases and characteristics of the approaches discussed in the article: 400ZR, Open ROADM, OpenZR+, and 400ZR+.
Technology | 400ZR | Open ROADM | OpenZR+ | 400ZR+ Proprietary |
---|---|---|---|---|
Target Application | Edge Data Center Interconnects | Carrier ROADM Mesh Networks | Metro/Regional Carrier and Data Center Interconnects | Long-Haul Carrier |
Target Reach @ 400G | 120 km (amplified) | 500 km (amplified) | 480 km (amplified) | 1000 km (amplified) |
Target Power Consumption | Up to 15 W | Up to 25 W | Up to 20 W | Up to 25 W |
Typical Module Option | QSFP-DD/OSFP | CFP2 | QSFP-DD/OSFP | CFP2 |
Client Interface | 400G Ethernet | 100-400G Ethernet and OTN | 100-400G Ethernet (Multi-rate) | 100-400G Ethernet and OTN |
Modulation Scheme | 16QAM | QPSK, 8QAM, 16QAM | QPSK, 8QAM, 16QAM | QPSK, 8QAM, 16QAM |
Forward Error Correction | CFEC | oFEC | oFEC | Proprietary |
Standards / MSA | OIF | Open ROADM MSA | OpenZR+ MSA | Proprietary |
400ZR is an agreement primarily focused on the needs of data center interconnects across distances of 80 to 120 kilometers. On the other hand, Open ROADM and OpenZR+ focused on addressing the needs of telecom carrier links, supporting link lengths of up to 500 km. These differences in reach are also reflected in the power consumption specs and the module form factors typically used. The 400ZR and OpenZR+ standards can only handle Ethernet traffic, while the Open ROADM and 400ZR+ solutions can handle both Ethernet and OTN traffic.
Tags: 400ZR, Data center, demand, disaggregation, extend, fiber, interoperable, Open ROADM, power, size, ZR+
The Impact of Optics and Photonics on the Agrifood Industry
Precision farming is essential in a world with over 10 billion people by 2050 and…
Precision farming is essential in a world that will approach 10 billion people by 2050 and whose food demand is expanding at an exponential pace. The 2019 World Resources Report from the World Resources Institute warns that at the current level of food production efficiency, feeding the world in 2050 would require "clearing most of the world's remaining forests, wiping out thousands more species, and releasing enough greenhouse gas emissions to exceed the 1.5°C and 2°C warming targets enshrined in the Paris Agreement – even if emissions from all other human activities were entirely eliminated."
Technology can help the agrifood industry improve efficiency and meet these demands by combining robotics, machine vision, and small sensors to precisely and automatically determine the care needed by plants and animals in our food supply chain. This approach helps control and optimize food production, resulting in more sustainable crops, higher yields, and safer food.
Sensors based on integrated photonics can enable many of these precision farming applications. Photonic chips are lighter and smaller than other solutions, so they can be deployed more easily in these agricultural use cases. The following article will provide examples of how integrated photonics and optical technology can add value to the agriculture and food industries.
How The World’s Tiny Agricultural Titan Minimizes Food Waste
The Netherlands is such a small country that if it were a US state, it would be among the ten smallest states, with a land area between West Virginia and Maryland. Despite its size, the Food and Agriculture Organization of the United Nations (FAO) ranked the Netherlands as the second-largest exporter of food in the world by revenue in 2020, behind only the United States and ahead of countries like Germany, China, or Brazil. These nations have tens or hundreds of times more arable land than the Dutch. Technology is a significant reason for this achievement, and the Dutch are arguably the most developed nation in the world regarding precision farming.
The hub of Dutch agrifood research and development is called the Food Valley, centered in the municipality of Wageningen in Gelderland province. In this area, many public and private R&D initiatives are carried out jointly with Wageningen University, a world-renowned leader in agricultural research.
When interviewed last year, Harrij Schmeitz, Director of the Fruit Tech Campus in Geldermalsen, mentioned the example of a local fruit supplier called Fruitmasters. They employ basic cameras to snap 140 photographs of each apple that travels through the sorting conveyor, all within a few milliseconds. These photographs are used to automatically create a 3D model and help the conveyor line filter out the rotten apples before they are packaged for customers. This process was done manually in the past, so this new 3D mapping technology significantly improves efficiency.
These techniques are not just constrained to Gelderland, of course. Jacob van den Borne is a potato farmer from Reusel in the province of North Brabant, roughly a half-hour drive from EFFECT Photonics’ Eindhoven headquarters. Van den Borne’s farm includes self-driving AVR harvesters (shown in the video below), and he has been using drones in his farms since 2011 to photograph his fields and study the soil quality and farming yield.
The drone pictures are used to create maps of the fields, which then inform farming decisions. Van den Borne can study the status of the soil before farming, but even after crops have sprouted, he can study which parts of the field are doing poorly and need more fertilization. These measures prevent food waste and the overuse of fertilizer and pesticides. For example, Van den Borne’s farms have eliminated pesticide chemicals in their greenhouses while boosting their yield. The global average yield of potatoes per acre is around nine tons, but his farms yield more than 20 tons per acre!
If you want to know more about Van den Borne and his use of technology and data, you can read this article.
Lighting to Reduce Power Consumption and Emissions
Artificial lighting is a frequent requirement of indoor plant production facilities to increase production and improve crop quality. Growers are turning to LED lighting because it is more efficient than traditional incandescent or fluorescent systems at converting electricity to light. LED lights are made through similar semiconductor manufacturing processes to photonics chips.
LED lighting also provides a greater variety of colors than the usual yellow/orange glow. This technology allows gardeners to pick colors that match each plant’s demands from seedlings through cultivation, unlike high-pressure sodium or other traditional lighting systems. Different colors of visible light create different chlorophyll types in plants, so LED lights can be set to specific colors to provide the best chlorophyll for each development stage.
For example, suppose you roam around the Westland municipality of the Netherlands. You might occasionally catch a purple glow in the night skies, which has nothing to do with UFOs or aliens wanting to abduct you. As explained by Professor Leo Marcelis of Wageningen University (see the above video), researchers have found that red light is very good for plant growth, and mixing it with five to ten percent blue light gives even better results. Red and blue are also the most energy-efficient colors for LEDs, which helps reduce energy consumption even more. As a result, the farmers can save on light and energy use while the environment profits too.
Improving Communication Networks in the Era of Sensors
Modern farmers like Jacob van den Borne collect a large quantity of sensor data, which allows them to plan and learn how to provide plants with the perfect amount of water, light, and nutrients at the proper moment. Farmers can use these resources more efficiently and without waste thanks to this sensor information.
For example, Van den Borne uses wireless, Internet-of-Things sensors from companies like Sensoterra (video below) to gauge the soil’s water level. As we speak, researchers in the OnePlanet Research Center, a collaboration including the Imec R&D organization and Wageningen University, are developing nitrogen sensors that run on optical chips and can help keep nitrogen emissions in check.
These sensors will be connected to local servers and the internet for faster data transfer, so many of the issues and photonics solutions discussed in previous articles about the cloud edge and access networks are also relevant for agrifood sensors. Thus, improving optical communication networks will also impact the agrifood industry positively.
Takeaways
In a future of efficient high-tech and precision farming, optics and photonics will play an increasingly important role.
Optical sensors on a chip can be fast, accurate, small, and efficient. They will provide food producers with plenty of data to optimize their production processes and monitor the environmental impact of food production. Novel lighting methods can reduce the energy consumption of greenhouses and other indoor plant facilities. Meanwhile, photonics will also be vital to improving the capacity of the communications networks that these sensors run in.
With photonics-enabled precision processes, the agrifood industry can improve yields and supply, optimize resource use, reduce waste throughout the value chain, and minimize environmental impact.
Tags: atmosphere, demand, emissions, energy consumption, environment, future, high tech farming, high volume, Integrated Photonics, population growth, Precision agriculture, precision farming, process, resource, sensors, supply, waste
The Whats and Hows of Telcordia Standards
Before 1984, the world of telecom standards looked very different from how it does now.…
Before 1984, the world of telecom standards looked very different from how it does now. Such a world prominently featured closed systems like the one AT&T had in the United States. They were stable and functional systems but led to a sluggish pace of technology innovation due to the lack of competition. The breakup of the Bell System in the early 1980s, where AT&T was forced to divest from their local Bell operating and manufacturing units, caused a tectonic shift in the industry. As a result, new standards bodies rose to meet the demands of a reinvented telecom sector.
Bellcore (Bell Communications Research) was one of the first organizations to answer this demand. Bellcore aided the Regional Bell Operating Companies by creating "generic requirements" (GR) documents that specified the design, operation, and purpose of telecom networks, equipment, and components. These GRs provided thorough criteria to help new suppliers design interoperable equipment, leading to the explosion of a new supplier ecosystem that made "GR-conformant" equipment. An industry that had relied on a few major suppliers thus became a more dynamic and competitive field, with carriers able to work with several suppliers almost overnight.
Bellcore is now Telcordia, and although the industry saw the emergence of other standards bodies, Telcordia still plays a major role in standardization by updating and producing new GR documents. Some of the most well-known documents are reliability prediction standards for commercial telecommunication products. Let’s discuss what these standards entail and why they matter in the industry.
What is the goal of Telcordia reliability standards?
Telecommunications carriers can use general requirements documents to select products that meet reliability and performance needs. The documents cover five sections:
- General Requirements, which discuss documentation, packaging, shipping, design features, product marking, safety and interoperability.
- Performance Requirements, which cover potential tests, as well as the performance criteria applied during testing.
- Service Life Tests, which mimic the stresses faced by the product in real-life use cases.
- Extended Service Life Tests, which verify long-term reliability.
- Reliability Assurance Program, which ensures satisfactory, long-term operation of products in a telecom plant.
Several of these specifications require environmental/thermal testing and often refer to other MIL STD and EIA / TIA test specifications. Listed below are a few common Telcordia test specifications that require the use of environmental testing.
Telcordia Generic Requirement | Description/Applicable Product |
---|---|
GR-49-CORE | for Outdoor Telephone Network Interface Devices |
GR-63-CORE | for Network Equipment-Building System Requirements (NEBS): Physical Protection |
GR-326-CORE | for Single Mode Optical Connectors and Jumper Assemblies (Fiber Optics) |
GR-468-CORE | for Optoelectronic Devices Used in Telecommunications Equipment |
GR-487-CORE | for Electronic Equipment Cabinets (Enclosures) |
GR-974-CORE | for Telecommunications Line Protector Units (TLPUS) |
GR-1209-CORE | for Fiber Optic Branching Components |
GR-1221-CORE | for Passive Optical Components |
What are Telcordia tests like?
As an example, our optical transceivers at EFFECT Photonics comply with the Telcordia GR-468 qualification, which describes how to test optoelectronic devices for reliability under extreme conditions. Qualification depends on maintaining optical integrity throughout an appropriate test regimen. The accelerated environmental tests are summarized below. The GR recommends building the chosen test regimen around the conditions and stresses the system and/or device is expected to face over its long-term life.
Mechanical Reliability & Temperature Testing:
- Shock & Vibration
- High / Low Storage Temp
- Temp Cycle
- Damp Heat
- Cycle Moisture Resistance
- Hot Pluggable
- Mating Durability
- Accelerated Aging
- Life Expectancy Calculation
Our manufacturing facilities and partners include capabilities for the temperature cycling and reliability testing needed to match Telcordia standards, such as temperature cycling ovens and chambers with humidity control.
Why are Telcordia standards important?
Companies engage in telecom standards for several reasons:
- Strategic Advantage: Standards affect incumbents with well-established products differently than startups with “game changer” technologies. Following a technological standard helps incumbents win new business and safeguard their existing business: if a new vendor comes along with a box based on a new technology that delivers identical functionality at a fraction of the price, the incumbent’s vested stake in the established standard helps protect its position.
- Allocation of Resources: Standards are part of technology races. If a competitor doubles its technical contributions to hasten the inclusion of its specialized technology into evolving standards, you need to know so you can react, whether by committing additional resources or taking another course of action.
- Early Identification of Prospective Partners and Rivals: Standards help suppliers recognize competitors and potential partners to achieve business objectives. After all, the greatest technology does not necessarily “win the race”; the winner is often the one with the best business plan and the partners who can help realize the desired specification and design.
- Information Transfer: Most firms use standards to exchange information. Companies submit technical contributions to standards groups to ensure that standards remain as close as feasible to the architecture and technology of their own business model and operations. Conversely, a company’s product and service developers must know the current standards to guarantee that their goods and services support or adhere to industry standards, which clients expect.
Takeaways
One of our central company objectives is to bring the highest-performing optical technologies, such as coherent detection, all the way to the network edge. However, achieving this goal doesn’t just require us to focus on the optical or electronic side but also on meeting the mechanical and temperature reliability standards required to operate coherent devices outdoors. This is why it’s important for EFFECT Photonics to constantly follow and contribute to standards as it prepares its new product lines.
Tags: accelerated, AT&T, Bellcore, closed, coherent, innovation, monopoly, open, partners, reliability, resource allocation, service life, technology, Telcordia
Trends in Edge Networks
To see what is trending in the edge and access networks, we look at recent…
To see what is trending in the edge and access networks, we look at recent survey results from a poll by Omdia to see where current interests and expectations lie.
The survey charts cover the type of network operator surveyed and the revenue of the surveyed operators.

58% of the participants think that by the end of 2024, the relative use of 400G+ coherent optics in DWDM systems will lean towards Pluggable Optics, versus 42% who think it will lean towards Embedded Optics.

54% of the participants think that by the end of 2024, 400G+ coherent optics integrated into routers/switches will be Pluggable Optics with a -10dBm launch power, while 46% think it will be at 0dBm.

Further charts cover the most beneficial features of coherent tunable pluggables for network operations and the level of importance of 100G coherent pluggable optics in the edge/access strategy. Management options are a must-have, and EFFECT Photonics has much experience with NarroWave in Direct Detect. 75% of the respondents indicate that coherent pluggable optics are essential for their edge/access evolution strategy.
Reaching a 100ZR Future for Access Network Transport
In the optical access networks, the 400ZR pluggables that have become mainstream in datacom applications…
In the optical access networks, the 400ZR pluggables that have become mainstream in datacom applications are too expensive and power-hungry. Therefore, operators are strongly interested in 100G pluggables that can house coherent optics in compact form factors, just like 400ZR pluggables do. The industry is labeling these pluggables as 100ZR.
A recently released Heavy Reading survey revealed that over 75% of operators surveyed believe that 100G coherent pluggable optics will be used extensively in their edge and access evolution strategy. However, this interest had yet to materialize into a 100ZR market because no affordable or power-efficient products were available. The most the industry could offer was 400ZR pluggables that were “powered-down” for 100G capacity.
By embracing smaller and more customizable light sources, new optimized DSP designs, and high-volume manufacturing capabilities, we can develop native 100ZR solutions with lower costs that better fit edge and access networks.
Making Tunable Lasers Even Smaller?
Since the telecom and datacom industries want to pack more and more transceivers on a single router faceplate, integrable tunable laser assemblies (ITLAs) must maintain performance while moving to smaller footprints and lower power consumption and cost.
Fortunately, such ambitious specifications became possible thanks to improved photonic integration technology. The original 2011 ITLA standard from the Optical Internetworking Forum (OIF) was 74mm long by 30.5mm wide. By 2015, most tunable lasers shipped in a micro-ITLA form factor that cut the original ITLA footprint in half. In 2021, the nano-ITLA form factor designed for QSFP-DD and OSFP modules had once again cut the micro-ITLA footprint almost in half.
There are still plenty of discussions over the future of ITLA packaging to fit the QSFP28 form factors of these new 100ZR transceivers. For example, every tunable laser needs a wavelength locker component that stabilizes the laser’s output regardless of environmental conditions such as temperature. Integrating that wavelength locker component with the laser chip would help reduce the laser package’s footprint.
Another potential path to reducing the size of tunable laser packages involves the control electronics. The current ITLA standards include the complete control electronics on the laser package, including power conversion and temperature control. However, if the transceiver’s main board handles some of these electronic functions instead of the laser package, the size of the laser package can be reduced.
This approach means that the reduced laser package would only have full functionality if connected to the main transceiver board. However, some transceiver developers will appreciate the laser package reduction and the extra freedom to provide their own laser control electronics.
Co-designing DSPs for Energy Efficiency
The 5-Watt power requirement of 100ZR in a QSFP28 form factor is a significant reduction compared to the 15-Watt specification of 400ZR transceivers in a QSFP-DD form factor. Achieving this reduction requires a digital signal processor (DSP) specifically optimized for the 100G transceiver.
Current DSPs are designed to be agnostic to the material platform of the photonic integrated circuit (PIC) they are connected to, which can be Indium Phosphide (InP) or Silicon. Thus, they do not exploit the intrinsic advantages of these material platforms. Co-designing the DSP chip alongside the PIC can lead to a much better fit between these components.
To illustrate the impact of co-designing PIC and DSP, let’s look at an example. A PIC and a standard platform-agnostic DSP typically operate with signals of differing intensities, so they need some RF analog electronic components to “talk” to each other. This signal power conversion overhead constitutes roughly 2-3 Watts or about 10-15% of transceiver power consumption.
However, the modulator of an InP PIC can run at a lower voltage than a silicon modulator. If this InP PIC and the DSP are designed and optimized together instead of using a standard DSP, the PIC could be designed to run at a voltage compatible with the DSP’s signal output. This way, the optimized DSP could drive the PIC directly without needing the RF analog driver, doing away with most of the power conversion overhead we discussed previously.
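As a rough illustration of what removing that conversion overhead means for the power budget, consider the back-of-the-envelope sketch below. The total module power is an assumed figure chosen only for illustration; only the 2-3 Watt overhead comes from the discussion above.

```python
# Back-of-the-envelope sketch (not a measured power model) of removing the
# RF analog conversion overhead from a coherent module's power budget.

total_power_w = 20.0   # assumed total module power, chosen for illustration
rf_overhead_w = 2.5    # the ~2-3 W signal power conversion overhead quoted above

codesigned_power_w = total_power_w - rf_overhead_w
savings_pct = 100 * rf_overhead_w / total_power_w

print(f"platform-agnostic DSP + PIC: {total_power_w:.1f} W")
print(f"co-designed DSP + InP PIC:   {codesigned_power_w:.1f} W "
      f"(~{savings_pct:.0f}% saved)")
```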
Optical Subassemblies That Leverage Electronic Ecosystems
To become more accessible and affordable, the photonics manufacturing chain can learn from electronics packaging, assembly, and testing methods that are already well-known and standardized. After all, building a special production line is much more expensive than modifying an existing production flow.
There are several ways in which photonics packaging, assembly, and testing can be made more affordable and accessible: passive alignments of the optical fiber, BGA-style packaging, and flip-chip bonding. Making these techniques more widespread will make a massive difference in photonics’ ability to scale up and become as available as electronics. To read more about them, please read our previous article.
Takeaways
The interest in novel 100ZR coherent pluggable optics for edge and access applications is strong, but the market has struggled to provide “native” and specific 100ZR solutions to address this interest. Transceiver developers need to embrace several new technological approaches to develop these solutions. They will need smaller tunable laser packages that can fit the QSFP28 form factors of 100ZR solutions, optimized and co-designed DSPs that meet the reduced power consumption goals, and sub-assemblies that leverage electronic ecosystems for increased scale and reduced cost.
Tags: 100 ZR, 100G, 100ZR, 400ZR, C-band, C-PON, coherent, DSP, filters, future proof, Korea, laser sources, O-band, Packaging, pluggable, roadmap, S-band
Remote Provisioning and Management for Edge Networks
Smaller data centers near the end user can reduce latency, overcome inconsistencies in connectivity, and…
Smaller data centers near the end user can reduce latency, overcome inconsistencies in connectivity, and store and compute data closer to the end user. According to PricewaterhouseCoopers, these advantages will drive the worldwide market for edge data centers to more than triple from $4 billion in 2017 to $13.5 billion in 2024. With the increased use of edge computing, more high-speed transceivers are required to link edge data centers. According to Cignal AI, the number of 100G equivalent ports sold for edge applications will double between 2022 and 2025, as indicated in the graph below.
The increase in edge infrastructure comes with many network provisioning and management challenges. While typical data centers were built in centralized and controlled environments, edge deployments will live in remote and uncontrolled environments because they need to be close to where the data is generated. For example, edge infrastructure could be a server plugged into the middle of a busy factory floor to collect data more quickly from their equipment sensors.
This increase in edge infrastructure will provide plenty of headaches to network operators, who also need to scale up their networks to handle the increased bandwidth and number of connections. Updating more equipment means more truck rolls, an approach that won’t scale cost-effectively, which is why many companies simply prefer not to upgrade and modernize their infrastructure.
Towards Zero-Touch Provisioning
A zero-touch provisioning model would represent a major shift in an operator’s ability to upgrade their network equipment. The network administrator could automate the configuration and provisioning of each unit from their central office, ship the units to each remote site, and the personnel in that site (who don’t need any technical experience!) just need to power up the unit. After turning them on, they could be further provisioned, managed, and monitored by experts anywhere in the world.
The optical transceivers potentially connected to some of these edge nodes already have the tools to be part of such a zero-touch provisioning paradigm. Many transceivers offer plug-and-play operation that does not require an expert on the remote site. For example, the central office can already program specific parameters of the optical link, such as temperature, wavelength drift, dispersion, or signal-to-noise ratio, and even determine which specific wavelength to use. The latter wavelength self-tuning application is shown in Figure 2.
Once plugged in, the transceiver will set the operational parameters as programmed and communicate with the central office for confirmation. These provisioning options make deployment much easier for network operators.
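As a hypothetical sketch of this workflow, the snippet below shows what a provisioning record prepared at the central office might look like and how little the on-site staff would need to do. The parameter names, values, and helper functions are illustrative assumptions, not an actual transceiver management API.

```python
# Hypothetical sketch of a zero-touch provisioning flow. Parameter names,
# values, and helper functions are illustrative assumptions only.

remote_unit_profile = {
    "site_id": "edge-node-042",
    "wavelength_nm": 1550.12,      # channel assigned by the central office
    "tx_power_dbm": -10.0,
    "alarm_thresholds": {
        "temperature_c": (0, 70),
        "osnr_db_min": 20.0,
    },
}

def apply_settings(profile):
    # Stand-in for the vendor-specific management channel on the module.
    print(f"{profile['site_id']}: tuning to {profile['wavelength_nm']} nm, "
          f"launch power {profile['tx_power_dbm']} dBm")

def confirm_to_central_office(site_id):
    # Stand-in for the confirmation message sent back after power-up.
    print(f"{site_id}: provisioned, reporting back for confirmation")

def on_power_up(profile):
    # All the on-site staff have to do is plug in and power up the unit.
    apply_settings(profile)
    confirm_to_central_office(profile["site_id"])

on_power_up(remote_unit_profile)
```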
Enabling Remote Diagnostics and Management
The same channel that establishes parameters remotely during the provisioning phase can also perform monitoring and diagnostics afterward. The headend module in the central office could remotely modify certain aspects of the tail-end module in the remote site, effectively enabling several remote management and diagnostics options. The figure below provides a visualization of such a scenario.
The central office can remotely measure metrics such as the transceiver temperature and power transmitted and received. These metrics can provide a quick and useful health check of the link. The headend module can also remotely read alarms for low/high values of these metrics.
These remote diagnostics and management features can eliminate certain truck rolls and save more operational expenses. They are especially convenient when dealing with very remote and hard-to-reach sites (e.g., an antenna tower) that require expensive truck rolls.
Remote Diagnostics and Control for Energy Sustainability
To talk about the impact of remote control on energy sustainability, we first must review the concept of performance margins. This number is a vital measure of received signal quality. It determines how much room there is for the signal to degrade without impacting the error-free operation of the optical link.
In the past, network designers played it safe, maintaining large margins to ensure a robust network operation in different conditions. However, these higher margins usually require higher transmitter power and power consumption. Network management software can use the remote diagnostics provided by this new generation of transceivers to develop tighter, more accurate optical link budgets in real time that require lower residual margins. This could lower the required transceiver powers and save valuable energy.
Another related sustainability feature is deciding whether to operate on low- or high-power mode depending on the optical link budget and fiber length. For example, if the transceiver needs to operate at its maximum capacity, a programmable interface can be controlled remotely to set the amplifiers at their maximum power. However, if the operator uses the transceiver for just half of the maximum capacity, the transceiver can operate with a smaller residual margin and use lower power on the amplifier. The transceiver uses energy more efficiently and sustainably by adapting to these circumstances.
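The sketch below illustrates the underlying link-budget arithmetic with assumed, purely illustrative numbers: as the measured margin shrinks or grows, the operator can decide how much launch and amplifier power is really needed.

```python
# Simplified optical link budget with assumed, illustrative numbers.

tx_power_dbm = 0.0            # transmitter launch power
rx_sensitivity_dbm = -23.0    # minimum receive power for error-free operation
fiber_loss_db_per_km = 0.25   # assumed fiber loss figure
connector_losses_db = 2.0     # assumed total connector/splice losses

def residual_margin_db(link_km: float) -> float:
    received = tx_power_dbm - fiber_loss_db_per_km * link_km - connector_losses_db
    return received - rx_sensitivity_dbm

for km in (20, 40, 80):
    print(f"{km} km link -> residual margin {residual_margin_db(km):.1f} dB")
# With remote diagnostics, the margin can be measured instead of guessed, so
# the launch power (and amplifier power) can be set as low as safely possible.
```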
If the industry wants interoperability between different transceiver vendors, these kinds of power parameters for remote management and control should also be standardized.
Takeaways
As edge networks get bigger and more complex, network operators and designers need more knobs and degrees of freedom to optimize network architecture and performance and thus scale networks cost-effectively.
The new generation of transceivers has the tools for remote provisioning, management, and control, which gives optical networks more degrees of freedom for optimization and reduces the need for expensive truck rolls. These benefits make edge networks simpler, more affordable, and more sustainable to build and operate.
Tags: access network, capacity, cost, distributed access networks, DWDM, inventory stock, Loss of signal, maintenance, monitor, optical networks, plug-and-play, remote, remote control, scale, scaling, self-tuning, time
What is 100ZR and Why Does it Matter?
In June 2022, transceiver developer II‐VI Incorporated (now Coherent Corp.) and optical networking solutions provider…
In June 2022, transceiver developer II‐VI Incorporated (now Coherent Corp.) and optical networking solutions provider ADVA announced the launch of the industry’s first 100ZR pluggable coherent transceiver. Discussions in the telecom sector about a future beyond 400G coherent technology have usually focused on 800G products, but there is increasing excitement about “downscaling” to 100G coherent products for certain applications in the network edge and business services. This article will discuss the market and technology forces that drive this change in discourse.
The Need for 100G Transmission in Telecom Deployments
The 400ZR pluggables that have become mainstream in datacom applications are too expensive and power-hungry for the optical network edge. Therefore, operators are strongly interested in 100G pluggables that can house coherent optics in compact form factors, just like 400ZR pluggables do. The industry is labeling these pluggables as 100ZR.
A recently released Heavy Reading survey revealed that over 75% of operators surveyed believe that 100G coherent pluggable optics will be used extensively in their edge and access evolution strategy. However, this interest had not really materialized into a 100ZR market because no affordable or power-efficient products were available. The most the industry could offer was 400ZR pluggables that were “powered-down” for 100G capacity.
100ZR and its Enabling Technologies
With the recent II-VI Incorporated and ADVA announcement, the industry is showing its first attempts at a native 100ZR solution that can provide a true alternative to the powered-down 400ZR products. Some of the key specifications of this novel 100ZR solution include:
- A QSFP28 form factor, very similar to but slightly smaller than a QSFP-DD
- 5 Watt power consumption
- C-temp and I-temp certifications to handle harsh environments
The 5 Watt-power requirement is a major reduction compared to the 15-Watt specification of 400ZR transceivers in the QSFP-DD form factor. Achieving this spec requires a digital signal processor (DSP) that is specifically optimized for the 100G transceiver.
Transceiver developers often source their DSP, laser, and optical engine from different suppliers, so all these chips are designed separately from each other. This setup reduces the time to market and simplifies the research and design processes but comes with performance and power consumption trade-offs.
In such cases, the DSP is like a Swiss army knife: a jack of all trades designed for different kinds of optical engines but a master of none. DSPs co-designed and optimized for their specific optical engine and laser can significantly improve power efficiency. You can read more about co-design approaches in one of our previous articles.
Achieving 100ZR Cost-Efficiency through Scale
Making 100ZR coherent optical transceivers more affordable is also a matter of volume production. As discussed in a previous article, if PIC production volumes can increase from a few thousand chips per year to a few million, the price per optical chip can decrease from thousands of Euros to mere tens of Euros. Such manufacturing scale demands a higher upfront investment, but the result is a more accessible product that more customers can purchase.
Achieving this production goal requires photonics manufacturing chains to learn from electronics and leverage existing electronics manufacturing processes and ecosystems. Furthermore, transceiver developers must look for trusted large-scale manufacturing partners to guarantee a secure and high-volume supply of chips and packages.
If you want to know more about how photonics developers can leverage electronic ecosystems and methods, we recommend you read our in-depth piece on the subject.
Takeaways
As the Heavy Reading survey showed, the interest in 100G coherent pluggable optics for edge/access applications is strong, and operators have identified key use cases within their networks. In the past, there were no true 100ZR solutions that could address this interest, but the use of optimized DSPs and light sources, as well as high-volume manufacturing capabilities, can finally deliver a viable and affordable 100ZR product.
Tags: 100G coherent, 100ZR, DSP, DSPs, edge and access applications, EFFECT Photonics, Photonics
Fit for Platform DSPs
Over the last two decades, power ratings for pluggable modules have increased as we moved…
Over the last two decades, power ratings for pluggable modules have increased as we moved from direct detection to more power-hungry coherent transmission: from 2W for SFP modules to 3.5W for QSFP modules and now to 14W for QSFP-DD and 21.1W for OSFP form factors. Rockley Photonics researchers estimate that a future electronic switch filled with 800G modules would draw around 1 kW of power just for the optical modules.
Around 50% of a coherent transceiver’s power consumption goes into the digital signal processing (DSP) chip that also performs the functions of clock data recovery (CDR), optical-electrical gear-boxing, and lane switching. Scaling to higher bandwidths leads to even more losses and energy consumption from the DSP chip and its radiofrequency (RF) interconnects with the optical engine.
One way to reduce transceiver power consumption requires designing DSPs that take advantage of the material platform of their optical engine. In this article, we will elaborate on what that means for the Indium Phosphide platform.
A Jack of All Trades but a Master of None
Transceiver developers often source their DSP, laser, and optical engine from different suppliers, so all these chips are designed separately from each other. This setup reduces the time to market and simplifies the research and design processes but comes with trade-offs in performance and power consumption.
In such cases, the DSP is like a Swiss army knife: a jack of all trades designed for different kinds of optical engines but a master of none. For example, current DSPs are designed to be agnostic to the material platform of the photonic integrated circuit (PIC) they are connected to, which can be Indium Phosphide (InP) or Silicon. Thus, they do not exploit the intrinsic advantages of these material platforms. Co-designing the DSP chip alongside the PIC can lead to a much better fit between these components.
Co-Designing with Indium Phosphide PICs for Power Efficiency
To illustrate the impact of co-designing PIC and DSP, let’s look at an example. A PIC and a standard platform-agnostic DSP typically operate with signals of differing intensities, so they need some RF analog electronic components to “talk” to each other. This signal power conversion overhead constitutes roughly 2-3 Watts or about 10-15% of transceiver power consumption.
However, the modulator of an InP PIC can run at a lower voltage than a silicon modulator. If this InP PIC and the DSP are designed and optimized together instead of using a standard DSP, the PIC could be designed to run at a voltage compatible with the DSP’s signal output. This way, the optimized DSP could drive the PIC directly without needing the RF analog driver, doing away with most of the power conversion overhead we discussed previously.
Additionally, the optimized DSP could also be programmed to do some additional signal conditioning that minimizes the nonlinear optical effects of the InP material, which can reduce noise and improve performance.
Taking Advantage of Active Components in the InP Platform
Russell Fuerst, EFFECT Photonics’ Vice-President of Digital Signal Processing, gave us an interesting insight about designing for the InP platform in a previous interview:
When we started doing coherent DSP designs for optical communication over a decade ago, we pulled many solutions from the RF wireless and satellite communications space into our initial designs. Still, we couldn’t bring all those solutions to the optical markets.
However, when you get more of the InP active components involved, some of those solutions can finally be brought over and utilized. They were not used before in our designs for silicon photonics because silicon is not an active medium and lacked the performance to exploit these advanced techniques.
For example, the fact that the DSP could control laser and modulator components on the InP can lead to some interesting manipulations of light signals. A DSP that can control these components directly could generate proprietary waveforms or use non-standard constellation and modulation schemes that can boost the performance of a coherent transceiver and increase the capacity of the link.
Takeaways
The biggest problem for DSP designers is still improving performance while reducing power use. This problem can be solved by finding ways to integrate the DSP more deeply with the InP platform, such as letting the DSP control the laser and modulator directly to develop new waveform shaping and modulation schemes. Because the InP platforms have active components, DSP designers can also import more solutions from the RF wireless space.
Tags: analog electronics, building blocks, coherent, dispersion compensation, DSP, energy efficiency, Intra DCI, Photonics, PON, power consumption, reach, simplified
The Power of Integrated Photonic LIDAR
Outside of communications applications, photonics can play a major role in sensing and imaging applications.…
Outside of communications applications, photonics can play a major role in sensing and imaging applications. The most well-known of these sensing applications is Light Detection and Ranging (LIDAR), which is the light-based cousin of RADAR systems that use radio waves.
Put simply, LIDAR involves sending out a pulse of light, receiving it back, and using a computer to study how the environment changes that pulse. It’s a simple but quite powerful concept.
If we send pulses of light to a wall and measure how long it takes for them to come back, we know how far away that wall is. That is the basis of time-of-flight (TOF) LIDAR. If we send a pulse of light with multiple wavelengths to an object, we know where the object is and whether it is moving towards us or away from us. That is next-generation LIDAR, known as FMCW LIDAR. These technologies are already used in self-driving cars to figure out the location and distance of other cars. The following video provides a short explainer of how LIDAR works in self-driving cars.
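As a minimal worked example of the time-of-flight principle, the snippet below converts a round-trip time into a distance; the 200-nanosecond return time is just an assumed value.

```python
# Minimal time-of-flight LIDAR example: the distance follows directly from
# the round-trip time of the light pulse.

C = 299_792_458.0  # speed of light in m/s

def tof_distance_m(round_trip_time_s: float) -> float:
    # The pulse travels to the target and back, hence the factor of 2.
    return C * round_trip_time_s / 2.0

# A pulse that returns after 200 nanoseconds corresponds to ~30 m.
print(f"{tof_distance_m(200e-9):.1f} m")
```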
Despite their usefulness, the wider implementation of LIDAR systems is limited by their size, weight, and power (SWAP) requirements. Or, to put it bluntly, they are bulky and expensive. For example, maybe you have seen pictures and videos of self-driving cars with a large LIDAR sensor and scanner on the roof of the car, as in the image below.
Making LIDAR systems more affordable and lighter requires integrating the optical components more tightly and manufacturing them at a higher volume. Unsurprisingly, this sounds like a problem that could be solved by integrated photonics.
Replacing Bulk LIDAR with “LIDAR on Chip”
Back in 2019, Tesla CEO Elon Musk famously said that “Anyone relying on LIDAR is doomed”. And his scepticism had some substance to it. LIDAR sensors were clunky and expensive, and it wasn’t clear that they would be a better solution than just using regular cameras with huge amounts of visual analysis software. However, the incentive to dominate the future of the automotive sector was too big, and a technology arms race had already begun to miniaturize LIDAR systems into a single photonic chip.
Let’s provide a key example. A typical LIDAR system will require a mechanical system that moves the light source around to scan the environment. This could be as simple as a 360°-rotating LIDAR scanner or small scanning mirrors that steer the beam. However, an even better solution would be a LIDAR scanner with no moving parts that could be manufactured at massive scale on a typical semiconductor process.
This is where optical phased array (OPA) systems come in. An OPA system splits the output of a tunable laser into multiple channels and applies a different time delay to each channel. The OPA then recombines the channels, and depending on the assigned time delays, the resulting light beam exits at a different angle. In other words, an OPA system can steer a beam of light from a semiconductor chip without any moving parts.
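The sketch below shows the basic beam-steering relation behind an OPA: a uniform phase (time-delay) step between adjacent emitters tilts the output beam. The wavelength and emitter pitch are assumed values for illustration.

```python
# Simplified sketch of optical phased array (OPA) beam steering: a linear
# phase ramp across the emitters tilts the output beam.
import math

wavelength_um = 1.55   # typical telecom wavelength
pitch_um = 2.0         # assumed spacing between adjacent emitters

def steering_angle_deg(phase_step_rad: float) -> float:
    # Grating relation for a uniform phase step between adjacent emitters:
    # sin(theta) = (phase_step / (2*pi)) * (wavelength / pitch)
    return math.degrees(
        math.asin((phase_step_rad / (2 * math.pi)) * wavelength_um / pitch_um))

for step in (0.0, math.pi / 8, math.pi / 4, math.pi / 2):
    print(f"phase step {step:.2f} rad -> beam angle {steering_angle_deg(step):.1f} deg")
```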
There is still plenty of development required to bring OPAs into maturity. Victor Dolores Calzadilla, a researcher from the Eindhoven University of Technology (TU/e) explains that “The OPA is the biggest bottleneck for achieving a truly solid-state, monolithic lidar. Many lidar building blocks, such as photodetectors and optical amplifiers, were developed years ago for other applications, like telecommunication. Even though they’re generally not yet optimized for lidar, they are available in principle. OPAs were not needed in telecom, so work on them started much later. This component is the least mature.”
Economics of Scale in LIDAR Systems
Wafer-scale photonics manufacturing demands a higher upfront investment, but the resulting high-volume production line drives down the cost per device. This economy-of-scale principle is the same one behind electronics manufacturing, and the same must be applied to photonics. The more optical components we can integrate into a single chip, the more the price of each component can decrease. The more optical System-on-Chip (SoC) devices can go into a single wafer, the more the price of each SoC can decrease.
Researchers at the Technical University of Eindhoven and the JePPIX consortium have done some modelling to show how this economy-of-scale principle would apply to photonics. If production volumes can increase from a few thousand chips per year to a few million, the price per optical chip can decrease from thousands of Euros to mere tens of Euros. This must be the goal for the LIDAR and automotive industries.
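A toy cost model makes the argument concrete. The fixed costs, wafer cost, and dies per wafer below are assumptions chosen only to show how volume dilutes the fixed costs; they are not figures from the TU/e or JePPIX modelling.

```python
# Back-of-the-envelope economy-of-scale sketch with assumed cost figures.

fixed_costs_eur = 10_000_000   # assumed yearly R&D, masks, and line setup
cost_per_wafer_eur = 5_000     # assumed cost of one processed wafer
chips_per_wafer = 2_000        # assumed dies per wafer

def cost_per_chip(chips_per_year: int) -> float:
    wafers = chips_per_year / chips_per_wafer
    return (fixed_costs_eur + wafers * cost_per_wafer_eur) / chips_per_year

for volume in (5_000, 100_000, 5_000_000):
    print(f"{volume:>9} chips/year -> ~{cost_per_chip(volume):,.0f} EUR per chip")
# Higher volumes spread the fixed costs, taking the price per chip from
# thousands of Euros down to a few Euros in this toy model.
```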
By integrating all optical components on a single chip, we also shift the complexity from the assembly process to the much more efficient and scalable semiconductor wafer process. Assembling and packaging a device by interconnecting multiple photonic chips increases assembly complexity and costs. On the other hand, combining and aligning optical components on a wafer at a high volume is much easier, which drives down the device’s cost.
Using Proven Photonics Technologies for Automotive Standards
Another challenge is that photonics technologies must meet parameters and specifications in the automotive sector that are often harsher than those in the telecom/datacom sector. For example, a target temperature range of −40°C to 125°C is often required, which is much broader than the typical industrial temperature range used in the telecom sector. The packaging of the PIC and its coupling to fiber and free space are particularly sensitive to these temperature changes.
Temperature Standard | Min (°C) | Max (°C) |
---|---|---|
Commercial (C-temp) | 0 | 70 |
Extended (E-temp) | -20 | 85 |
Industrial (I-temp) | -40 | 85 |
Automotive / Full Military | -40 | 125 |
Fortunately, a substantial body of knowledge already exists to make integrated photonics compatible with harsh environments like those of outer space. After all, photonic integrated circuits (PICs) use similar materials to their electronic counterparts, which have already been qualified for space and automotive applications. Commercial solutions, such as those offered by PHIX Photonics Assembly, Technobis IPS, and the PIXAPP Photonic Packaging Pilot Line, are now available.
Takeaways
Photonics technology must be built on a wafer-scale process that can produce millions of chips in a month. When we can show the market that photonics can be as easy to use as electronics, that will trigger a revolution in the use of photonics worldwide.
The broader availability of photonic devices will take photonics into new applications, such as those of LIDAR and the automotive sector. With a growing integrated photonics industry, LIDAR can become lighter, avoid moving parts, and be manufactured in much larger volumes that reduce the cost of LIDAR devices. Integrated photonics is the avenue for LIDAR to become more accessible to everyone.
Tags: accessible, affordable, automotive, automotive sector, beamforming, discrete, economics of scale, efficient, electronics, laser, LIDAR, phased arrays, photonic integration, power consumption, self-driving car, self-driving cars, space, wafer
How To Make a Photonic Integrated Circuit
Photonics is one of the enabling technologies of the future. Light is the fastest information…
Photonics is one of the enabling technologies of the future. Light is the fastest information carrier in the universe and can transmit this information while dissipating less heat and energy than electrical signals. Thus, photonics can dramatically increase the speed, reach, and flexibility of communication networks and cope with the ever-growing demand for more data. And it will do so at a lower energy cost, decreasing the Internet’s carbon footprint. Meanwhile, fast and efficient photonic signals have massive potential for sensing and imaging applications in medical devices, automotive LIDAR, agricultural and food diagnostics, and more.
Given its importance, we want to explain how photonic integrated circuits (PICs), the devices that enable all these applications, are made.
Designing a PIC
The process of designing a PIC should translate an initial application concept into a functioning photonics chip that can be manufactured. In a short course at the OFC 2018 conference, Wim Bogaerts from Ghent University summarized the typical PIC design process in the steps we will describe below.
- Concept and Specifications: We first have to define what goes into the chip. A chip architect normally spends time with the customer to understand what the customer wants to achieve with the chip and all the conditions and situations where the chip will be used. After these conversations, the chip application concept becomes a concrete set of specifications that are passed on to the team that will design the internals of the chip. These specs will set the performance targets of the PIC design.
- Design Function: Having defined the specs, the design team will develop a schematic circuit diagram that captures the function of the PIC. This diagram is separated into several functional blocks: some of them might already exist, and some of them might have to be built. These blocks include lasers, modulators, detectors, and other components that can manipulate light in one way or another.
- Design Simulation: Making a chip costs a lot of money and time. With such risks, a fundamental element of chip design is to accurately predict the chip’s behavior after it is manufactured. The functional blocks are placed together, and their behavior is simulated using various physical models and simulation tools. The design team often uses a few different simulation approaches to reduce the risk of failure after manufacturing.
- Design Layout: Now, the design team must translate the functional chip schematic into a proper design layout that can be manufactured. The layout consists of layers, component positions, and geometric shapes that represent the actual manufacturing steps. The team uses software that translates these functions into the geometric patterns to be manufactured, with human input required for the trickiest placement and geometry decisions.
- Check Design Rules: Every chip fabrication facility will have its own set of manufacturing rules. In this step, the design team verifies that the layout agrees with these rules.
- Verify Design Function: This is a final check to ensure that the layout actually performs as intended in the original circuit schematic. The layout process usually leads to new component placements and parasitic effects that were not considered in the original schematic. These tests might require the design team to revisit previous functional or layout steps.
The Many Steps of Fabricating a PIC
Manufacturing semiconductor chips for photonics and electronics is one of the most complex procedures in the world. For example, back in his university days, EFFECT Photonics President Boudewijn Docter described a fabrication process with a total of 243 steps!
Yuqing Jiao, Associate Professor at the Eindhoven University of Technology (TU/e), explains the fabrication process in a few basic, simplified steps:
- Grow or deposit your chip material
- Print a pattern on the material
- Etch the printed pattern into your material
- Do some cleaning and extra surface preparation
- Go back to step 1 and repeat as needed
Real life is, of course, a lot more complicated and will require cycling through these steps tens of times, leading to processes with more than 200 total steps. Let’s go through these basic steps in a bit more detail.
- Layer Epitaxy and Deposition: Different chip elements require different semiconductor material layers. These layers can be grown on the semiconductor wafer via a process called epitaxy or deposited via other methods (which are summarized in this article).
- Lithography (i.e. printing): There are a few lithography methods, but the one used for high-volume chip fabrication is projection optical lithography. The semiconductor wafer is coated with a photosensitive polymer film called a photoresist. Meanwhile, the design layout pattern is transferred to an opaque material called a mask. The optical lithography system projects the mask pattern onto the photoresist. The exposed photoresist is then developed (like photographic film) to complete the pattern printing.
- Etching: Having “printed” the pattern on the photoresist, it is time to remove (or etch) parts of the semiconductor material to transfer the pattern from the resist into the wafer. There are several techniques that can be done to etch the material, which are summarized in this article.
- Cleaning and Surface Preparation: After etching, a series of steps will clean and prepare the surface before the next cycle.
- Passivation: Adding layers of dielectric material (such as silica) to “passivate” the chip and make it more tolerant to environmental effects.
- Planarization: Making the surface flat in preparation for future lithography and etching steps.
- Metallization: Depositing metal components and films on the wafer. This might be done for future lithography and etching steps, or at the end to add electrical contacts to the chip.
Figure 6 summarizes how an InP photonic device looks after the steps of layer epitaxy, etching, dielectric deposition and planarization, and metallization.
The Expensive Process of Testing and Packaging
Chip fabrication is a process with many sources of variability, and therefore much testing is required to make sure that the fabricated chip agrees with what was originally designed and simulated. Once that is certified and qualified, the process of packaging and assembling a device with the PIC follows.
While packaging, assembly, and testing are only a small part of the cost of electronic systems, the reverse happens with photonic systems. Researchers at the Technical University of Eindhoven (TU/e) estimate that for most Indium Phosphide (InP) photonics devices, the cost of packaging, assembly, and testing can reach around 80% of the total module cost. There are many research efforts in motion to reduce these costs, which you can learn more about in one of our previous articles.
Especially after the first fabrication run of a new chip, there will be a few rounds of characterization, validation, and revisions to make sure the chip performs up to spec. After this first round of characterization and validation, the chip must be made ready for mass production, which requires a series of reliability tests under several different environmental conditions. You can learn more about this process in our previous article on industrial hardening. For example, different applications require different certifications for the temperature ranges in which the chip must operate.
Temperature Standard | Min (°C) | Max (°C) |
---|---|---|
Commercial (C-temp) | 0 | 70 |
Extended (E-temp) | -20 | 85 |
Industrial (I-temp) | -40 | 85 |
Automotive / Full Military | -40 | 125 |
Takeaways
The process of making photonic integrated circuits is incredibly long and complex, and the steps we described in this article are a mere simplification of the entire process. It requires a tremendous amount of knowledge in chip design, fabrication, and testing from experts in different fields all around the world. EFFECT Photonics was founded by people who fabricated these chips themselves, understand the process intimately, and developed the connections and network to develop cutting-edge PICs at scale.
Tags: building blocks, c-temp, coherent, die testing, DSP, electron beam lithography, faults, I-temp, imprint lithography, InP, interfaces, optical lithography, reach, scale, wafer testing
What’s an ITLA and Why Do I Need One?
The tunable laser is a core component of every optical communication system, both direct detect…
The tunable laser is a core component of every optical communication system, both direct detect and coherent. The laser generates the optical signal modulated and sent over the optical fiber. Thus, the purity and strength of this signal will have a massive impact on the bandwidth and reach of the communication system.
Depending on the material platform, system architecture, and requirements, optical system developers must balance laser parameters—tunability, purity, size, environmental resistance, and power—for the best system performance.
In this article, we will talk about one specific kind of laser—the integrable tunable laser assembly (ITLA)—and when it is needed.
When Do I Need an ITLA?
The promise of silicon photonics (SiP) is compatibility with existing electronic manufacturing ecosystems and infrastructure. Integrating silicon components on a single chip with electronics manufacturing processes can dramatically reduce the footprint and the cost of optical systems and open avenues for closer integration with silicon electronics on the same chip. However, the one thing silicon photonics misses is the laser component.
Silicon is not a material that can naturally emit laser light from electrical signals. Decades of research have created silicon-based lasers with more unconventional nonlinear optical techniques. Still, they cannot match the power, efficiency, tunability, and cost-at-scale of lasers made from indium phosphide (InP) and other III-V compound semiconductors.
Therefore, making a suitable laser for silicon photonics does not mean making an on-chip laser from silicon but an external laser from III-V materials such as InP. This light source will be coupled via optical fiber to the silicon components on the chip while maintaining a low enough footprint and cost for high-volume integration. The external laser typically comes in the form of an integrable tunable laser assembly (ITLA).
Meanwhile, a photonic chip developer that uses the InP platform for its entire chip instead of silicon can use a laser integrated directly on the chip. Choosing between an external or an integrated laser depends on the transceiver developer’s device requirements, supply chain, and manufacturing facilities and processes. You can read more about the differences in this article.
What is an ITLA?
In summary, an integrable tunable laser assembly (ITLA) is a small external laser that can be coupled to an optical system (like a transceiver) via optical fiber. This ITLA must maintain a low enough footprint and cost for high-volume integration with the optical system.
Since the telecom and datacom industries want to pack more and more transceivers on a single router faceplate, ITLAs need to maintain performance while moving to smaller footprints and lower power consumption and cost.
Fortunately, such ambitious specifications became possible thanks to improved photonic integration technology. The original 2011 ITLA standard from the Optical Internetworking Forum (OIF) was 74mm long by 30.5mm wide. By 2015, most tunable lasers shipped in a micro-ITLA form factor that cut the original ITLA footprint in half. In 2021, the nano-ITLA form factor designed for QSFP-DD and OSFP modules had once again cut the micro-ITLA footprint almost in half. The QSFP-DD modules that house the full transceiver are smaller (78mm by 20mm) than the original ITLA form factor. Stunningly, tunable laser manufacturers achieved this size reduction without impacting laser purity and power.
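A quick calculation with the figures above shows how dramatic this shrink is. The micro- and nano-ITLA areas below are derived from the "cut in half" statements rather than their exact published dimensions, so they are rough estimates.

```python
# Rough footprint arithmetic based on the figures quoted above.

itla_mm2 = 74 * 30.5                # original 2011 ITLA footprint
micro_itla_mm2 = itla_mm2 / 2       # "cut the original footprint in half"
nano_itla_mm2 = micro_itla_mm2 / 2  # "cut the micro-ITLA almost in half"
qsfp_dd_mm2 = 78 * 20               # full QSFP-DD module footprint

print(f"ITLA:       {itla_mm2:.0f} mm^2")
print(f"micro-ITLA: ~{micro_itla_mm2:.0f} mm^2")
print(f"nano-ITLA:  ~{nano_itla_mm2:.0f} mm^2")
print(f"QSFP-DD:    {qsfp_dd_mm2:.0f} mm^2 (houses the entire transceiver)")
```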
The Exploding Market for ITLAs
With the increasing demand for coherent transceivers, many companies have performed acquisitions and mergers that allow them to develop transceiver components internally and thus secure their supply. LightCounting predicts that this consolidation will decrease the sales of modulator and receiver components but that the demand for tunable lasers (mainly in the form of ITLAs) will continue to grow. The forecast expects the tunable laser market for transceivers to reach a size of $400M in 2026. We talk more about these market forces in one of our previous articles.
However, the industry consolidation will make it harder for component and equipment manufacturers to source lasers from independent vendors for their transceivers. The market needs more independent vendors to provide high-performance ITLA components that adapt to different datacom or telecom provider needs. Following these trends, at EFFECT Photonics we are not only developing the capabilities to provide a complete, fully-integrated coherent transceiver solution but also the ITLA units needed by vendors who use external lasers.
Takeaways
The world is moving towards tunability. As telecom and datacom industries seek to expand their network capacity without increasing their fiber infrastructure, the sales of tunable transceivers will explode in the coming years. These transceivers need tunable lasers with smaller sizes and lower power consumption than ever.
Some transceivers will use lasers integrated directly on the same chip as the optical engine. Others will have an external laser coupled via fiber to the optical engine. The need for these external lasers led to the development of the ITLA form factors, which get smaller and smaller with every generation.
Tags: coherent, Density, discrete, DSP, full integration, high-performance, independent, InP, ITLA, micro ITLA, nano ITLA, power consumption, reach, SiP, size, tunable, tunable lasers, versatile
What are FEC and PCS, and Why do They Matter?
Coherent transmission has become a fundamental component of optical networks to address situations where direct…
Coherent transmission has become a fundamental component of optical networks to address situations where direct detect technology cannot provide the required capacity and reach.
While Direct Detect transmission only uses the amplitude of the light signal, Coherent optical transmission manipulates three different light properties: amplitude, phase, and polarization. These additional degrees of modulation allow for faster optical signals without compromising the transmission distance. Furthermore, coherent technology enables capacity upgrades without replacing the expensive physical fiber infrastructure on the ground.
However, the demand for data never ceases, and with it, developers of digital signal processors (DSPs) have had to figure out ways to improve the efficiency of coherent transmission. In this article, we will briefly describe the impact of two algorithms that DSP developers use to make coherent transmission more efficient: Forward Error Correction (FEC) and Probabilistic Constellation Shaping (PCS).
What is Forward Error Correction?
Forward Error Correction (FEC) implemented by DSPs has become a vital component of coherent communication systems. FEC makes the coherent link much more tolerant to noise than a direct detect system and enables much longer reach and higher capacity. Thanks to FEC, coherent links can handle bit error rates that are literally a million times higher than a typical direct detect link.
Let’s provide a high-level overview of how FEC works. An FEC encoder adds a series of redundant bits (called overhead) to the transmitted data stream. The receiver can use this overhead to check for errors without asking the transmitter to resend the data.
In other words, FEC algorithms allow the DSP to enhance the link performance without changing the hardware. This enhancement is analogous to imaging cameras: image processing algorithms allow the lenses inside your phone camera to produce a higher-quality image.
We must highlight that FEC is a block of an electronic DSP engine with its own specialized circuitry and algorithms, so it is a separate piece of intellectual property. Therefore, developing the entire DSP electronic engine (see Figure 2 for the critical component blocks of a DSP) requires ownership or access to specific FEC intellectual property.
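To make the encode-and-check idea concrete, here is a toy sketch using a simple 3x repetition code. Real coherent DSPs use far more sophisticated codes (such as CFEC or oFEC), so this is only an illustration of the principle.

```python
# Toy illustration of the FEC principle: the encoder adds redundant bits so
# the receiver can correct errors without retransmission. Real coherent DSPs
# use far more sophisticated codes than this 3x repetition code.

def encode(bits):
    # Repeat every bit three times (200% overhead, purely illustrative).
    return [b for b in bits for _ in range(3)]

def decode(received):
    # Majority vote over each group of three repeated bits.
    return [1 if sum(received[i:i + 3]) >= 2 else 0
            for i in range(0, len(received), 3)]

data = [1, 0, 1, 1]
coded = encode(data)
coded[4] ^= 1                  # flip one bit to simulate channel noise
assert decode(coded) == data   # the error is corrected at the receiver
print("corrected:", decode(coded))
```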
What is Probabilistic Constellation Shaping?
DSP developers can transmit more data by transmitting more states in their quadrature-amplitude modulation process. The simplest kind of QAM (4-QAM) uses four different states (usually called constellation points), combining two different intensity levels and two different phases of light.
By using more intensity levels and phases, more bits can be transmitted in one go. State-of-the-art commercially available 400ZR transceivers typically use 16-QAM, with sixteen different constellation points that arise from combining four different intensity levels and four phases. However, this increased transmission capacity comes at a price: a signal with more modulation orders is more susceptible to noise and distortions. That’s why these transceivers can transmit 400Gbps over 100km but not over 1000km.
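The capacity gain follows directly from the number of constellation points: each symbol carries log2(M) bits for M-QAM, as the short sketch below shows.

```python
# Bits carried per symbol grow with the QAM order (log2 of the number of
# constellation points).
import math

for qam_order in (4, 16, 64):
    bits = math.log2(qam_order)
    print(f"{qam_order}-QAM -> {bits:.0f} bits per symbol")
# 4-QAM -> 2 bits, 16-QAM -> 4 bits, 64-QAM -> 6 bits
```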
One of the most remarkable recent advances in DSPs to increase the reach of light signals is Probabilistic Constellation Shaping (PCS). In the typical 16-QAM modulation used in coherent transceivers, each constellation point has the same probability of being used. This is inefficient since the outer constellation points that require more power have the same probability as the inner constellation points that require lower power.
PCS uses the low-power inner constellation points more frequently and the outer constellation points less frequently, as shown in Figure 3. This feature provides many benefits, including improved tolerance to distortions and easier system optimization to specific bit transmission requirements. If you want to know more about it, please read the explainers here and here.
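A simplified numerical sketch shows why shaping helps: weighting the low-energy inner points more heavily lowers the average transmitted symbol energy compared with uniform 16-QAM. The shaping factor below is an arbitrary illustrative assumption, not a value from any standard.

```python
# Simplified sketch of the PCS idea: favoring low-energy constellation points
# reduces the average symbol energy relative to uniform 16-QAM.
import math

# Standard 16-QAM constellation (in-phase and quadrature amplitudes).
points = [complex(i, q) for i in (-3, -1, 1, 3) for q in (-3, -1, 1, 3)]
energies = [abs(p) ** 2 for p in points]

def avg_energy(weights):
    total = sum(weights)
    return sum(w * e for w, e in zip(weights, energies)) / total

uniform = [1.0] * len(points)
# Maxwell-Boltzmann-like shaping: probability falls off with symbol energy.
shaped = [math.exp(-0.2 * e) for e in energies]   # 0.2 is an assumed factor

print("uniform 16-QAM average energy:", round(avg_energy(uniform), 2))
print("shaped 16-QAM average energy: ", round(avg_energy(shaped), 2))
```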
The Importance of Standardization and Reconfigurability
Algorithms like FEC and PCS have usually been proprietary technologies. Equipment and component manufacturers closely guarded their algorithms because they provided a critical competitive advantage. However, this often meant that coherent transceivers from different vendors could not operate with each other, and a single vendor had to be used for the entire network deployment.
Over time, coherent transceivers have increasingly needed to become interoperable, leading to some standardization in these algorithms. For example, the 400ZR standard for data center interconnects uses a public algorithm called concatenated FEC (CFEC). In contrast, some 400ZR+ MSA standards use open FEC (oFEC), which provides a more extended reach at the cost of a bit more bandwidth and energy consumption. For the longest possible link lengths (500+ kilometers), proprietary FECs become necessary for 400G transmission. Still, at least the public FEC standards have achieved interoperability for a large segment of the 400G transceiver market. Perhaps in the future, this could happen with PCS methods.
Future DSPs could switch among different algorithms and methods to adapt to network performance and use cases. For example, let’s look at the case of upgrading a long metro link of 650km running at 100 Gbps with open FEC. The operator needs to increase that link capacity to 400 Gbps, but open FEC could struggle to provide the necessary link performance. However, if the DSP can be reconfigured to use a proprietary FEC standard, the transceiver will be able to handle this upgraded link. Similarly, longer reach could be achieved if the DSP activates its PCS feature.
 | 400ZR | OpenZR+ | Proprietary Long Haul |
---|---|---|---|
Target Application | Edge data center interconnect | Metro, regional data center interconnect | Long-haul carrier |
Target Reach @ 400G | 120 km | 500 km | 1000 km |
Form Factor | QSFP-DD/OSFP | QSFP-DD/OSFP | QSFP-DD/OSFP |
FEC | CFEC | oFEC | Proprietary |
Standards / MSA | OIF | OpenZR+ MSA | Proprietary |
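A minimal sketch of that reconfiguration logic, with thresholds loosely based on the table above (they are assumptions for illustration, not standardized switching rules), might look like this:

```python
# Illustrative sketch of a reconfigurable DSP choosing a FEC mode from the
# required capacity and reach. Thresholds are assumptions, not a standard.

def choose_fec(reach_km: float, capacity_gbps: int) -> str:
    if capacity_gbps <= 100:
        return "oFEC"                     # 100G has margin to spare
    if reach_km <= 120:
        return "CFEC"                     # 400ZR-style edge DCI link
    if reach_km <= 500:
        return "oFEC"                     # OpenZR+-style metro/regional link
    return "proprietary FEC (plus PCS if available)"

print(choose_fec(650, 100))   # the original 100 Gbps long metro link
print(choose_fec(650, 400))   # the upgraded 400 Gbps link needs proprietary FEC
```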
Takeaways
The entire field of communication technology can arguably be summarized with a single question: how can we transmit more information into a single frequency-limited signal over the longest possible distance?
DSP developers have many tools to answer this question, and two of them are FEC and PCS. Both technologies make coherent links much more tolerant of noise and can extend their reach. Future pluggables that handle different use cases must use different coding, error coding, and modulation schemes to adapt to different network requirements.
There are still many challenges ahead to improve DSPs and make them transmit even more bits in more energy-efficient ways. Now that EFFECT Photonics has incorporated talent and intellectual property from Viasat’s Coherent DSP team, we hope to contribute to this ongoing research and development and make transceivers faster and more sustainable than ever.
Tags: coherent, constellation shaping, DSP, DSPs, error compensation, FEC, PCS, power, Proprietary, reach, reconfigurable, standardized, standards
The Light Path to a Coherent Cloud Edge
Smaller data centers placed locally have the potential to minimize latency, overcome inconsistent connections, and…
Smaller data centers placed locally have the potential to minimize latency, overcome inconsistent connections, and store and compute data closer to the end user. These benefits are causing the global market for edge data centers to explode, with PWC predicting that it will more than triple from $4 billion in 2017 to $13.5 billion in 2024.
As edge data centers become more common, the issue of interconnecting them becomes more prominent. This situation motivated the Optical Internetworking Forum (OIF) to create the 400ZR standard for pluggable modules, with the OpenZR+ MSA extending it for longer reaches. With modules small enough to densely pack a router faceplate, the datacom sector can benefit from a 400ZR solution for high-capacity data center interconnects of up to 80 km. Cignal AI forecasts that 400ZR shipments will dominate the edge applications, as shown in the figure below.
The 400ZR standard has made coherent technology and dense wavelength division multiplexing (DWDM) the dominant solution in the metro data center interconnects (DCIs) space. Datacom provider operations teams found the simplicity of coherent pluggables very attractive. There was no need to install and maintain additional amplifiers and compensators as in direct detect technology. A single coherent transceiver plugged into a router could fulfill the requirements.
However, there are still obstacles that prevent coherent from becoming dominant in shorter-reach DCI links at the campus (< 10km distance) and intra-datacenter (< 2km distance) level. These spaces require more optical links and transceivers, and coherent technology is still considered too power-hungry and expensive to become the de-facto solution here.
Fortunately, there are avenues for coherent technology to overcome these barriers. By embracing multi-laser arrays, DSP co-design, and electronic ecosystems, coherent technology can mature and become a viable solution for every data center interconnect scenario.
The Promise of Multi-Laser Arrays
Earlier this year, Intel Labs demonstrated an eight-wavelength laser array fully integrated on a silicon wafer. These milestones are essential for optical transceivers because the laser arrays can allow for multi-channel transceivers that are more cost-effective when scaling up to higher speeds.
Let’s say we need an intra-DCI link with 1.6 Terabits/s of capacity. There are three ways we could implement it:
- Four modules of 400G: This solution uses existing off-the-shelf modules but has the largest footprint. It requires four slots in the router faceplate and an external multiplexer to merge these into a single 1.6T channel.
- One module of 1.6T: This solution will not require the external multiplexer and occupies just one plug slot on the router faceplate. However, making a single-channel 1.6T device has the highest complexity and cost.
- One module with four internal channels of 400G: A module with an array of four lasers (and thus four different 400G channels) will only require one plug slot on the faceplate while avoiding the complexity and cost of the single-channel 1.6T approach.
Multi-laser array and multi-channel solutions will become increasingly necessary to increase link capacity in coherent systems. They will not need more slots in the router faceplate while simultaneously avoiding the higher cost and complexity of increasing the speed with just a single channel.
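The little sketch below summarizes this trade-off. The complexity labels are qualitative judgments based on the discussion above rather than measured quantities.

```python
# Qualitative comparison of the three 1.6 Tb/s implementation options discussed
# above. "per_channel_complexity" labels are illustrative judgments, not measurements.

options = [
    {"option": "4 x 400G modules",          "faceplate_slots": 4, "external_mux": True,  "per_channel_complexity": "low"},
    {"option": "1 x 1.6T single channel",   "faceplate_slots": 1, "external_mux": False, "per_channel_complexity": "high"},
    {"option": "1 module, 4 x 400G lasers", "faceplate_slots": 1, "external_mux": False, "per_channel_complexity": "low"},
]

for o in options:
    print(f'{o["option"]:28s} slots={o["faceplate_slots"]} '
          f'external mux={"yes" if o["external_mux"] else "no"} '
          f'channel complexity={o["per_channel_complexity"]}')
```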
Co-designing DSP and Optical Engine for Efficiency and Performance
Transceiver developers often source their DSP, laser, and optical engine from different suppliers, so all these chips are designed separately from each other. This setup reduces the time to market and simplifies the research and design processes but comes with trade-offs in performance and power consumption.
In such cases, the DSP is like a Swiss army knife: a jack of all trades designed for different kinds of optical engines but a master of none. For example, current DSPs are designed to be agnostic to the material platform of the photonic integrated circuit (PIC) they are connected to, which can be Indium Phosphide (InP) or Silicon. Thus, they do not exploit the intrinsic advantages of these material platforms. Co-designing the DSP chip alongside the PIC can lead to a much better fit between these components.
To illustrate the impact of co-designing PIC and DSP, let’s look at an example. A PIC and a standard platform-agnostic DSP typically operate with signals of differing intensities, so they need some RF analog electronic components to “talk” to each other. This signal power conversion overhead constitutes roughly 2-3 Watts or about 10-15% of transceiver power consumption.
However, the modulator of an InP PIC can run at a lower voltage than a silicon modulator. If this InP PIC and the DSP are designed and optimized together instead of using a standard DSP, the PIC could be designed to run at a voltage compatible with the DSP’s signal output. This way, the optimized DSP could drive the PIC directly without needing the RF analog driver, doing away with most of the power conversion overhead we discussed previously.
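A back-of-the-envelope calculation, using the 2-3 W overhead figure mentioned above and an assumed total module budget of about 20 W, shows the size of this saving.

```python
# Back-of-the-envelope saving from driving the InP modulator directly from the DSP.
# The 20 W total and 2.5 W driver overhead are assumptions consistent with the
# "2-3 W, or 10-15% of transceiver power" figure quoted in the article.

module_power_w = 20.0       # assumed total pluggable power budget
rf_driver_overhead_w = 2.5  # assumed RF driver / signal conversion overhead

saving_fraction = rf_driver_overhead_w / module_power_w
print(f"Removing the driver saves roughly {saving_fraction:.0%} "
      f"({rf_driver_overhead_w:.1f} W of {module_power_w:.0f} W)")
# -> roughly 12% (2.5 W of 20 W)
```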
Additionally, the optimized DSP could also be programmed to do some additional signal conditioning that minimizes the nonlinear optical effects of the InP material, which can reduce noise and improve performance.
Driving Scale Through Existing Electronic Ecosystems
Making coherent optical transceivers more affordable is a matter of volume production. As discussed in a previous article, if PIC production volumes can increase from a few thousand chips per year to a few million, the price per optical chip can decrease from thousands of Euros to mere tens of Euros. Achieving this production goal requires photonics manufacturing chains to learn from electronics and leverage existing electronics manufacturing processes and ecosystems.
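A toy cost model helps illustrate the mechanism behind this claim. The fixed and marginal costs below are assumptions picked only to show how amortizing fixed costs over volume drives the per-chip price down; they are not actual EFFECT Photonics or foundry figures.

```python
# Toy cost model: amortizing fixed costs (fab time, masks, R&D) over production volume.
# All figures are assumptions chosen only to illustrate the scaling mechanism.

fixed_costs_eur = 10_000_000   # assumed annual fixed costs
marginal_cost_eur = 15         # assumed per-chip materials, test, and yield cost

for volume in (5_000, 5_000_000):
    cost_per_chip = fixed_costs_eur / volume + marginal_cost_eur
    print(f"{volume:>9,d} chips/year -> ~EUR {cost_per_chip:,.0f} per chip")
# 5,000 chips/year -> ~EUR 2,015 per chip; 5,000,000 chips/year -> ~EUR 17 per chip
```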
While vertically-integrated PIC development has its strengths, a fabless model in which developers outsource their PIC manufacturing to a large-scale foundry is the simplest way to scale to production volumes of millions of units. Fabless PIC developers can remain flexible and lean, relying on trusted large-scale manufacturing partners to guarantee a secure and high-volume supply of chips. Furthermore, the fabless model allows photonics developers to concentrate their R&D resources on their end market and designs instead of costly fabrication facilities.
Further progress must also be made in the packaging, assembly, and testing of photonic chips. While these processes are only a small part of the cost of electronic systems, the reverse happens with photonics. To become more accessible and affordable, the photonics manufacturing chain must become more automated and standardized. It must move towards proven and scalable packaging methods that are common in the electronics industry.
If you want to know more about how photonics developers can leverage electronic ecosystems and methods, we recommend you read our in-depth piece on the subject.
Takeaways
Coherent transceivers are already established as the solution for metro Data Center Interconnects (DCIs), but they need to become more affordable and less power-hungry to fit the intra- and campus DCI application cases. Fortunately, there are several avenues for coherent technology to overcome these cost and power consumption barriers.
Multi-laser arrays can avoid the higher cost and complexity of increasing capacity with just a single transceiver channel. Co-designing the optics and electronics can allow the electronic DSP to exploit the intrinsic advantages of specific photonics platforms such as indium phosphide. Finally, leveraging electronic ecosystems and processes is vital to increase the production volumes of coherent transceivers and make them more affordable.
By embracing these pathways to progress, coherent technology can mature and become a viable solution for every data center interconnect scenario.
Tags: campus, cloud, cloud edge, codesign, coherent, DCI, DSP, DSPs, DWDM, integration, intra, light sources, metro, modulator, multi laser arrays, photonic integration, PIC, power consumption, wafer testing

Leveraging Electronic Ecosystems in Photonics
Thanks to wafer-scale technology, electronics have driven down the cost per transistor for many decades.…
Thanks to wafer-scale technology, electronics have driven down the cost per transistor for many decades. This has allowed the world to enjoy chips that, with every generation, became smaller and provided exponentially more computing power for the same amount of money. This scale-up process is how everyone now has a computer processor in their pocket that is millions of times more powerful than the most advanced computers of the 1960s that landed men on the moon.
This progress in electronics integration is a key factor that brought down the size and cost of coherent transceivers, packing more bits than ever into smaller areas. However, photonics has struggled to keep up with electronics, with the photonic components dominating the cost of transceivers. If the transceiver cost curve does not continue to decrease, it will be challenging to achieve the goal of making them more accessible across the entire optical network.
To trigger a revolution in the use of photonics worldwide, it needs to be as easy to use as electronics. In the words of our Chief Technology Officer, Tim Koene: “We need to buy photonics from a catalog as we do with electronics, have datasheets that work consistently, be able to solder it to a board and integrate it easily with the rest of the product design flow.”
This goal requires photonics manufacturing to leverage existing electronics manufacturing processes and ecosystems. Photonics must embrace fabless models, chips that can survive soldering steps, and electronic packaging and assembly methods.
The Advantages of Moving to a Fabless Model
Increasing the volume of photonics manufacturing is a big challenge. Some photonic chip developers manufacture their chips in-house within their fabrication facilities. This approach has some substantial advantages, giving component manufacturers complete control over their production process.
However, this approach has its trade-offs when scaling up. If a vertically-integrated chip developer wants to scale up in volume, they must make a hefty capital expenditure (CAPEX) in more equipment, develop new fabrication processes, and hire and train personnel. Fabs are not only expensive to build but also to operate. Unless they can be kept at nearly full utilization, operating expenses (OPEX) also drain the facility owners’ finances.
Especially in the case of an optical transceiver market that is not as big as that of consumer electronics, it’s hard not to wonder whether that initial investment is cost-effective. For example, LightCounting estimates that 55 million optical transceivers were sold in 2021, while the International Data Corporation estimates that 1.4 billion smartphones were sold in 2021. The latter figure is 25 times larger than that of the transceiver market.
Electronics manufacturing experienced a similar problem during its boom in the 1970s and 1980s, with smaller chip start-ups facing almost insurmountable barriers to market entry because of the massive CAPEX required. Furthermore, the large-scale electronics manufacturing foundries had excess production capacity that drained their OPEX. The large-scale foundries ended up selling that excess capacity to the smaller chip developers, who became fabless. In this scenario, everyone ended up winning. The foundries serviced multiple companies and could run their facilities at full capacity, while the fabless companies could outsource manufacturing and reduce their expenditures.
This fabless model, with companies designing and selling the chips but outsourcing the manufacturing, should also be the way to go for photonics. Instead of going through a costly, time-consuming scale-up, photonics developers outsource those troubles, which (from the perspective of the fabless company) become as simple as placing a purchase order. Furthermore, the fabless model allows photonics developers to concentrate their R&D resources on their end market. This is the simplest way forward if photonics moves into million-scale volumes.
Adopting Electronics-Style Packaging
While packaging, assembly, and testing are only a small part of the cost of electronic systems, the reverse happens with photonic integrated circuits (PICs). Researchers at the Technical University of Eindhoven (TU/e) estimate that for most Indium Phosphide (InP) photonics devices, the cost of packaging, assembly, and testing can reach around 80% of the total module cost.
To become more accessible and affordable, the photonics manufacturing chain must become more automated and standardized. The lack of automation makes manufacturing slower and prevents data collection that can be used for process control, optimization, and standardization.
One of the best ways to reach these automation and standardization goals is to learn from electronics packaging, assembly, and testing methods that are already well-known and standardized. After all, building a special production line is much more expensive than modifying an existing production flow.
There are several ways in which photonics packaging, assembly, and testing can be made more affordable and accessible. Below are a few examples:
- Passive alignments: Connecting optical fiber to PICs is one of the most complicated packaging and assembly problems for optical devices. The best alignments are usually achieved via active alignment processes in which feedback from the PIC is used to align the fiber better. Passive alignment processes do not use such feedback. They cannot achieve the best possible alignment but are much more affordable.
- BGA-style packaging: Ball-grid array packaging has grown popular among electronics manufacturers. It places the chip connections under the chip package, allowing more efficient use of space in circuit boards, a smaller package size, and better soldering.
- Flip-chip bonding: A process where solder bumps are deposited on the chip in the final fabrication step. The chip is flipped over and aligned with a circuit board for easier soldering.
These might be novel technologies for photonics developers who have started implementing them in the last five or ten years. However, the electronics industry embraced these technologies 20 or 30 years ago. Making these techniques more widespread will make a massive difference in photonics’ ability to scale up and become as available as electronics.
Making Photonics Chips That Can Survive Soldering
Soldering remains another tricky step for photonics assembly and packaging. Photonics device developers usually custom order a PIC and then wire-bond and die-bond it to the electronics. However, some elements in the PIC cannot handle soldering temperatures, making it difficult to solder the chip onto an electronics board. Developers often must glue the chip onto the board with a non-standard process that needs additional verification for reliability.
This goes back to the issue of process standardization. Current PICs often use different materials and processes from electronics, such as optical fiber connections and metals for chip interconnects, that cannot survive a standard soldering process.
Adopting BGA-style packaging and flip-chip bonding techniques will make it easier for PICs to survive this soldering process. There is ongoing research and development worldwide, including at EFFECT Photonics, to make fiber coupling and other PIC aspects compatible with these electronic packaging methods.
PICs that can handle being soldered to circuit boards will allow the industry to build optical subassemblies that can be made more readily available in the open market and can go into trains, cars, or airplanes.
Takeaways
Photonics must leverage existing electronics ecosystems and processes to scale up and have a greater global impact. Our Chief Technology Officer, Tim Koene, explains what this means:
Photonics technology needs to integrate more electronic functionalities into the same package. It needs to build photonic integration and packaging support that plays by the rules of existing electronic manufacturing ecosystems. It needs to be built on a semiconductor manufacturing process that can produce millions of chips in a month.
As soon as photonics can achieve these larger production volumes, it can reach price points and improvements in quality and yield closer to those of electronics. When we show the market that photonics can be as easy to use as electronics, that will trigger a revolution in its worldwide use.
This vision is one of our guiding lights at EFFECT Photonics, where we aim to develop optical systems that can have an impact all over the world in many different applications.
Tags: automotive sector, BGA style packaging, compatible, computing power, cost per mm, efficient, electronic, electronic board, electronics, fabless, Photonics, risk, scale, soldering, transistor, wafer scale

The Evolution to 800G and Beyond
The demand for data and other digital services is rising exponentially. From 2010 to 2020,…
The demand for data and other digital services is rising exponentially. From 2010 to 2020, the number of Internet users worldwide doubled, and global internet traffic increased 12-fold. From 2020 to 2026, internet traffic will likely increase 5-fold. To meet this demand, datacom and telecom operators need to constantly upgrade their transport networks.
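To put those multiples in annual terms, a 12-fold increase over ten years and a 5-fold increase over six years both work out to roughly 30% compound growth per year, as this quick calculation shows.

```python
# Convert the "12-fold over 2010-2020" and "5-fold over 2020-2026" traffic
# multiples quoted above into compound annual growth rates.

def cagr(multiple: float, years: int) -> float:
    """Compound annual growth rate implied by a total growth multiple."""
    return multiple ** (1 / years) - 1

print(f"2010-2020: {cagr(12, 10):.0%} per year")  # ~28%
print(f"2020-2026: {cagr(5, 6):.0%} per year")    # ~31%
```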
400 Gbps links are becoming the standard across telecom transport networks and data center interconnects, but providers are already thinking about the next steps. LightCounting forecasts significant growth in shipments of dense-wavelength division multiplexing (DWDM) ports with data rates of 600G, 800G, and beyond in the next five years.
The major obstacles in this roadmap remain the power consumption, thermal management, and affordability of transceivers. Over the last two decades, power ratings for pluggable modules have increased as we moved from direct detection to more power-hungry coherent transmission: from 2 W for SFP modules to 3.5 W for QSFP modules, and now to 14 W for QSFP-DD and 21.1 W for OSFP form factors. Rockley Photonics researchers estimate that a future electronic switch filled with 800G modules would draw around 1 kW of power just for the optical modules.
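As a rough sanity check on that estimate, assume a fully loaded faceplate of 32 OSFP ports and a per-module draw somewhere between today's 21.1 W rating and an assumed 30 W for future 800G-class coherent modules; both numbers are assumptions for illustration only.

```python
# Rough sanity check of the ~1 kW estimate for a switch fully loaded with
# 800G optical modules. Port count and per-module power are assumptions.

ports = 32                       # assumed OSFP ports on a switch faceplate
power_per_module_w = (21.1, 30)  # today's OSFP rating vs. an assumed 800G-class draw

for p in power_per_module_w:
    print(f"{ports} ports x {p:.1f} W = {ports * p:.0f} W just for the optics")
# 32 x 21.1 W ~ 675 W; 32 x 30.0 W ~ 960 W, i.e. on the order of 1 kW
```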
Thus, many incentives exist to continue improving the performance and power consumption of pluggable optical transceivers. By embracing increased photonic integration, co-designed PICs and DSPs, and multi-laser arrays, pluggables will be better able to scale in data rates while remaining affordable and at low power.
Direct Detect or Coherent for 800G and Beyond?
While coherent technology has become the dominant one in metro distances (80 km upwards), the campus (< 10 km) and intra-data center (< 2 km) distances remain in contention between direct detect technologies such as PAM 4 and coherent.
These links were originally the domain of direct detect products when the data rates were 100Gbps. However, at 400Gbps speeds, the power consumption of coherent technology is much closer to that of direct detect PAM-4 solutions. This gap in power consumption is expected to disappear at 800Gbps, as shown in the figure below.
A major reason for this decreased gap is that direct detect technology will often require additional amplifiers and compensators at these data rates, while coherent pluggables do not. This also makes coherent technology simpler to deploy and maintain. Furthermore, as the volume production of coherent transceivers increases, their price will also become competitive with direct detect solutions.
Increased Integration and Co-Design are Key to Reduce Power Consumption
Lately, we have seen many efforts across the electronics industry to further increase integration at the component level. For example, moving towards greater integration of components in a single chip has yielded significant efficiency benefits in electronic processors. Apple’s M1 processor integrates all electronic functions in a single system-on-chip (SoC) and consumes a third of the power of the discrete-component processors used in previous generations of Apple computers. We can observe this progress in the table below.
Mac Mini Model | Idle Power (Watts) | Max Power (Watts)
2020, M1 | 7 | 39 |
2018, Core i7 | 20 | 122 |
2014, Core i5 | 6 | 85 |
2010, Core 2 Duo | 10 | 85 |
2006, Core Solo or Duo | 23 | 110 |
2005, PowerPC G4 | 32 | 85 |
Photonics can achieve greater efficiency gains by following a similar approach to integration. The interconnects required to couple discrete optical components result in electrical and optical losses that must be compensated with higher transmitter power and more energy consumption. In contrast, the more active and passive optical components (lasers, modulators, detectors, etc.) manufacturers can integrate on a single chip, the more energy they can save since they avoid coupling losses between discrete components.
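To illustrate why coupling losses matter, here is an indicative loss-budget sketch. Every dB value in it is an assumption chosen for illustration; real losses depend heavily on the coupling method and material platform.

```python
# Indicative comparison of optical losses for a transmitter built from discrete
# components versus a single integrated PIC. All dB values are illustrative
# assumptions, not measured figures.

discrete_interfaces = {          # chip/fiber coupling points in a discrete assembly
    "laser -> modulator":  1.5,  # dB, assumed per-interface coupling loss
    "modulator -> mux":    1.5,
    "mux -> output fiber": 1.5,
}
integrated_interfaces = {        # on-chip waveguide joints on a single PIC
    "laser -> modulator":  0.2,  # dB, assumed on-chip routing loss
    "modulator -> mux":    0.2,
    "mux -> output fiber": 1.5,  # one fiber coupling is still needed at the chip edge
}

for name, budget in [("discrete", discrete_interfaces), ("integrated", integrated_interfaces)]:
    total_db = sum(budget.values())
    print(f"{name:10s} total insertion loss ~ {total_db:.1f} dB "
          f"(x{10 ** (total_db / 10):.1f} more transmit power to compensate)")
```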
Further improvements in power consumption can be achieved by co-designing electronic digital signal processors (DSPs) with the photonic integrated circuit (PIC) that constitutes the transceiver’s optical engine. Standard DSPs are like a Swiss army knife: a jack of all trades designed for different kinds of optical engines but a master of none.
A DSP that is co-designed and optimized alongside the PIC can better exploit specific advantages of the PIC. For example, if an indium phosphide PIC and a DSP are co-designed together instead of using a standard DSP, the PIC could be designed to run at a voltage compatible with the DSP’s signal output. This way, the optimized DSP could drive the PIC directly without needing the RF analog driver, drastically reducing a power conversion overhead that is often 10-15% of transceiver power consumption. To learn more about the advantages of co-design, you can read our article on the topic.
Reducing Complexity with Multi-Laser Arrays
Earlier this year, Intel Labs demonstrated an eight-wavelength laser array fully integrated on a silicon wafer. These milestones will provide more cost-effective ways for pluggables to scale to higher data rates.
Let’s say we need a data center interconnect with 1.6 Terabits/s of capacity. There are three ways we could implement it:
- Four modules of 400G: This solution uses existing off-the-shelf modules but has the largest footprint. It requires four slots in the router faceplate and an external multiplexer to merge these into a single 1.6T channel.
- One module of 1.6T: This solution will not require the external multiplexer and occupies just one plug slot on the router faceplate. However, making a single-channel 1.6T device has the highest complexity and cost.
- One module with four internal channels of 400G: A module with an array of four lasers (and thus four different 400G channels) will only require one plug slot on the faceplate while avoiding the complexity and cost of the single-channel 1.6T approach.
Multi-laser array and multi-channel solutions will become increasingly necessary to increase link capacity in coherent systems. They will not need more slots in the router faceplate while simultaneously avoiding the higher cost and complexity of increasing the speed with just a single channel.
Takeaways
The pace of worldwide data demand is relentless, and with it the pace of link upgrades required by datacom and telecom networks. 400G transceivers are currently replacing previous 100G solutions, and in a few years, they will be replaced by transceivers with data rates of 800G or 1.6 Terabits per second.
The cost and power consumption of coherent technology remain barriers to more widespread capacity upgrades, but the industry is finding ways to overcome them. Tighter photonic integration can minimize the losses of optical systems and their power consumption. Further improvements in power efficiency can be achieved by co-designing DSPs alongside the optical engine. Finally, the onset of multi-laser arrays can avoid the higher cost and complexity of increasing capacity with just a single transceiver channel.
Tags: bandwidth, co-designing, coherent, DSP, full integration, integration, interface, line cards, optical engine, power consumption, RF Interconnections, Viasat

Day of Photonics 2022, Photonics FAQ
On October 21st, 1983, the General Conference of Weights and Measures adopted the current value…
On October 21st, 1983, the General Conference of Weights and Measures adopted the current value of the speed of light at 299,792.458 km/s. To commemorate this milestone, hundreds of optics and photonics companies, organizations, and institutes all over the world organize activities every year on this date to celebrate the Day of Photonics and how this technology is impacting our daily lives.
At EFFECT Photonics, we want to celebrate the Day of Photonics by answering some commonly asked questions about photonics and its impact on the world.
What is photonics?
Photonics is the study and application of photon (light) generation, manipulation, and detection, often aiming to create, control, and sense light signals.
The term photonics emerged in the 60s and 70s with the development of the first semiconductor lasers and optical fibers. Its goals and even the name “photonics” are born from its analogy with electronics: photonics aims to generate, control, and sense photons (the particles of light) in similar ways to how electronics does with electrons (the particles of electricity).
What is photonics used for?
Photonics can be applied in many ways. For the Day of Photonics, we will explore two categories:
Communications
Light is the fastest information carrier in the universe and can transmit this information while dissipating less heat and energy than electrical signals. Thus, photonics can dramatically increase the speed, reach, and flexibility of communication networks and cope with the ever-growing demand for more data. And it will do so at a lower energy cost, decreasing the Internet’s carbon footprint.
A classic example is optical fiber communications. The webpage you are reading was originally a stream of 0 and 1s that traveled through an optical fiber to reach you.
Outside of optical fibers, photonics can also deliver solutions beyond what traditional radio communications can offer. For example, optical transmission over the air could handle links between different sites of a mobile network, links between cars, or to a satellite out in space. At some point, we may even see the use of Li-Fi, a technology that replaces indoor Wi-Fi links with infrared light.
Sensing
There are multiple sensing application markets, but their core technology is the same. They need a small device that sends out a known pulse of light, accurately detects how the light comes back, and calculates the properties of the environment from that information. It’s a simple but quite powerful concept.
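In its simplest form, this is a time-of-flight measurement: the round-trip delay of the pulse gives the distance as d = c·t/2. Here is a minimal worked example, using an example delay value.

```python
# Minimal time-of-flight distance calculation, the principle behind the
# pulse-based sensing described above. The 200 ns round-trip delay is an example value.

C = 299_792_458.0          # speed of light in vacuum, m/s

def distance_m(round_trip_s: float) -> float:
    """Distance to the target from the pulse's round-trip time."""
    return C * round_trip_s / 2

print(f"{distance_m(200e-9):.1f} m")   # a 200 ns echo -> a target ~30 m away
```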
This concept is already being used to implement LIDAR systems that help self-driving cars determine the location and distance of people and objects. However, there is also potential to use this concept in medical and agri-food applications, such as looking for undesired growths in the human eye or knowing how ripe an apple is.
Will photonics replace electronics?
No, each technology has its strengths.
When transmitting information from point A to B, photonics can do it faster and more efficiently than electronics. For example, optical fiber can transmit information at the speed of light and dissipate less heat than electric wires.
On the other hand, since electricity can be manipulated at the nanometer level more easily than light, electronics are usually better for building computers. There are some specific areas where photonic computers could outperform traditional electronic ones, especially given the rise of quantum computers that can be made with photonic components. However, most computer products will remain electronic for the foreseeable future.
Thus, photonics is not expected to replace electronics but to collaborate and integrate strongly with it. Most future applications will involve photonic systems transmitting or sensing information that is then processed by electronic computers.
Tags: DayofPhotonics2022, Integrated Photonics, Photonics

Coherent Free Space Optics for Ground and Space Applications
In a previous article, we described how free-space optics (FSO) could impact mobile fronthaul and…
In a previous article, we described how free-space optics (FSO) could impact mobile fronthaul and enterprise links. They can deliver a wireless access solution that can be deployed quickly, with more bandwidth capacity, security features, and less power consumption than traditional point-to-point microwave links.
However, there’s potential to do even more. There are network applications on the ground that require very high bandwidths in the range of 100 Gbps and space applications that need powerful transceivers to deliver messages across vast distances. Microwaves are struggling to deliver all the requirements for these use cases.
By merging the coherent technology of fiber optic communications with FSO systems, these links can achieve greater reach and capacity than before, enabling new applications in space and in terrestrial networks.
Reaching 100G on the Ground for Access Networks
Thanks to advances in adaptive optics, fast steering mirrors, and digital signal processors, FSO links can now handle Gbps-capacity links over several kilometers. For example, a collaboration between Dutch FSO startup Aircision and research organization TNO demonstrated in 2021 that their FSO systems could reliably transmit 10 Gbps over 2.5 km.
However, new communication technologies emerge daily, and our digital society keeps evolving and demanding more data. This need for progress has motivated more research and development into increasing the capacity of FSO links to 100 Gbps, providing a new short-reach solution for access networks.
One such initiative came from the collaboration of Norwegian optical network solutions provider Smartoptics, Swedish research institute RISE Acreo, and Norwegian optical wireless link provider Polewall. In a trial set-up at Acreo’s research facilities, Smartoptics’ 100G transponder was used with CFP transceivers to create a 100 Gbps DWDM signal transmitted through the air using Polewall’s optical wireless technology. Their system is estimated to reach 250 meters in the worst possible weather conditions.
Fredrik Larsson, the Optical Transmission Specialist at Smartoptics, explains the importance of this trial:
“Smartoptics is generally recognized as offering a very flexible platform for optical networking, with applications for all types of scenarios. 100Gbps connectivity through the air has not been demonstrated before this trial, at least not with commercially available products. We are proud to be part of that milestone together with Acreo and Polewall.”
Meanwhile, Aircision aims to develop a 100 Gbps coherent FSO system capable of transmitting up to 10km. To achieve this, they have partnered up with EFFECT Photonics, who will take charge of developing coherent modules that can go into Aircision’s future 100G system.
In many ways, the basic technologies to build these coherent FSO systems have been available for some time. However, they included high-power 100G lasers and transceivers originally intended for premium long-reach applications. The high price, footprint, and power consumption of these devices prevented the development of more affordable and lighter FSO systems for the larger access network market.
However, the advances in integration and miniaturization of coherent technology have opened up new possibilities for FSO links. For example, 100ZR transceiver standards enable a new generation of low-cost, low-power coherent pluggables that can be easily integrated into FSO systems. Meanwhile, companies like Aircision are working hard to use technologies such as adaptive optics and fast-steering mirrors to extend the reach of these 100G FSO systems into the kilometer range.
Coherent Optical Technology in Space
Currently, most space missions use radio frequency communications to send data to and from spacecraft. While radio waves have a proven track record of success in space missions, generating and collecting more mission data requires enhanced communications capabilities.
Coherent optical communications can increase link capacities to spacecraft and satellites by 10 to 100 times that of radio frequency systems. Additionally, optical transceivers can lower the size, weight, and power (SWAP) specifications of satellite communication systems. Less weight and size means a less expensive launch or perhaps room for more scientific instruments. Less power consumption means less drain on the spacecraft’s energy sources.
For example, the Laser Communications Relay Demonstration (LCRD) from NASA, launched in December 2021, aims to showcase the unique capabilities of optical communications. Future missions in space will send data to the LCRD, which then relays the data down to ground stations on Earth. The LCRD will forward this data at rates of 1.2 Gigabits per second over optical links, allowing more high-resolution experiment data to be transmitted back to Earth. LCRD is a technology demonstration expected to pave the way for more widespread use of optical communications in space.
Making Coherent Technology Live in Space
Integrated photonics can boost space communications by lowering the payload. Still, it must overcome the obstacles of a harsh space environment, with radiation hardness, an extreme operational temperature range, and vacuum conditions.
Table 1 below summarizes the unmanaged environmental temperatures that different types of space missions must endure.
Mission Type | Temperature Range
Pressurized Module | +18.3 °C to +26.7 °C
Low-Earth Orbit (LEO) | -65 °C to +125 °C
Geosynchronous Equatorial Orbit (GEO) | -196 °C to +128 °C
Trans-Atmospheric Vehicle | -200 °C to +260 °C
Lunar Surface | -171 °C to +111 °C
Martian Surface | -143 °C to +27 °C
The values in Table 1 are unmanaged environmental temperatures and would decrease significantly for electronics and optics systems in a temperature-managed area, perhaps by as much as half.
A substantial body of knowledge exists to make integrated photonics compatible with space environments. After all, photonic integrated circuits (PICs) use similar materials to their electronic counterparts, which have already been space qualified in many implementations.
Much research has gone into overcoming the challenges of packaging PICs with electronics and optical fibers for these space environments, which must include hermetic seals and avoid epoxies. Commercial solutions, such as those offered by PHIX Photonics Assembly, Technobis IPS, and the PIXAPP Photonic Packaging Pilot Line, are now available.
Takeaways
Whenever you want to send data from point A to B, photonics is usually the most efficient way of doing it, be it over a fiber or free space.
This is why EFFECT Photonics sees future opportunities in the free-space optical (FSO) communications sectors. In mobile access networks or satellite link applications, FSO can provide solutions with more bandwidth capacity, security features, and less power consumption than traditional point-to-point microwave links.
These FSO systems can be further boosted by using coherent optical transmission similar to the one used in fiber optics. Offering these systems in a small package that can resist the required environmental conditions will significantly benefit the access network and space sectors.
Tags: 100G, access capacity, access network, capacity, certification, coherent, free space optics, FSO, GEO, ground, LEO, lunar, Photonics, reach, satellite, space, SWAP, temperature, Transceivers

What’s Inside a Tunable Laser for Coherent Systems?
The world is moving towards tunability. The combination of tunable lasers and dense wavelength division…
The world is moving towards tunability. The combination of tunable lasers and dense wavelength division multiplexing (DWDM) allows the datacom and telecom industries to expand their network capacity without increasing their existing fiber infrastructure. Furthermore, the miniaturization of coherent technology into pluggable transceiver modules has enabled the widespread implementation of IP over DWDM solutions. Self-tuning algorithms have also made DWDM solutions more widespread by simplifying installation and maintenance. Hence, many application cases, such as metro transport, data center interconnects, and access networks, are moving towards tunable pluggables.
The tunable laser is a core component of all these tunable communication systems, both direct detection and coherent. The laser generates the optical signal that is modulated and sent over the optical fiber. Thus, the purity and strength of this signal will have a massive impact on the bandwidth and reach of the communication system. This article will clarify some critical aspects of laser design for communication systems.
External and Integrated Lasers: What’s the Difference?
The promise of silicon photonics (SiP) is compatibility with existing electronic manufacturing ecosystems and infrastructure. Integrating silicon components on a single chip with electronics manufacturing processes can dramatically reduce the footprint and the cost of optical systems and open avenues for closer integration with silicon electronics on the same chip. However, the one thing silicon photonics misses is the laser component.
Silicon is not a material that can naturally emit laser light from electrical signals. Decades of research have created silicon-based lasers with more unconventional nonlinear optical techniques. Still, they cannot match the power, efficiency, tunability, and cost-at-scale of lasers made from indium phosphide (InP) and III-V compound semiconductors.
Therefore, making a suitable laser for silicon photonics does not mean making an on-chip laser from silicon but an external laser from III-V materials such as InP. This light source will be coupled via optical fiber to the silicon components on the chip while maintaining a low enough footprint and cost for high-volume integration. The external laser typically comes in the form of an integrable tunable laser assembly (ITLA).
In contrast, the InP platform can naturally emit light and provide high-quality light sources and amplifiers. This allows for photonic system-on-chip designs that include an integrated laser on the chip. The integrated laser carries the advantage of reduced footprint and power consumption compared to an external laser. These advantages become even more helpful for PICs that need multiple laser channels.
Finally, integrated lasers enable earlier optical testing on the semiconductor wafer and die. By testing the dies and wafers directly before packaging them into a transceiver, manufacturers need only discard the bad dies rather than the whole package, which saves valuable energy, materials, and cost.
Using an external or integrated laser depends on the transceiver developer’s device requirements, supply chain, and manufacturing facilities and processes. At EFFECT Photonics, we have the facilities and expertise to provide fully-integrated InP optical systems with an integrated laser and the external laser component that a silicon photonics developer might need for their optical system.
What are the key requirements for a laser in coherent systems?
In his recent talk at ECOC 2022, our Director of Product Management, Joost Verberk, outlined five critical parameters for laser performance.
- Tunability: With telecom providers needing to scale up their network capacity without adding more fiber infrastructure, combining tunable lasers with dense wavelength division multiplexing (DWDM) technology becomes necessary. These tunable optical systems have become more widespread thanks to self-tuning technology that removes the need for manual tuning. This makes their deployment and maintenance easier.
- Spectral Purity: Coherent systems encode information in the phase of the light, and the purer the light source is, the more information it can transmit. An ideal, perfectly pure light source can generate a single, exact color of light. However, real-life lasers are not pure and will generate light outside their intended color. The size of this deviation is what we call the laser linewidth. An impure laser with a large linewidth will have a more unstable phase that propagates errors in its transmitted data, as shown in the diagram below, which means it will transmit at a lower speed than desired. A short worked example of this relationship appears after this list.
- Dimensions: As the industry moves towards packing more and more transceivers on a single router faceplate, tunable lasers need to maintain performance and power while moving to smaller footprints. Laser manufacturers have achieved size reductions thanks to improved integration without impacting laser purity and power, moving from ITLA to micro-ITLA and nano-ITLA form factors in a decade.
- Environmental Resistance: Lasers used in edge and access networks will be subject to harsh environments with significant temperature and moisture changes. For these use cases, lasers should operate across the industrial temperature (I-temp) range of -40 °C to +85 °C.
- Transmit Power: The required laser output power will depend on the application and the system architecture. For example, a laser fully integrated into the chip can reach higher transmit powers more easily because it avoids the interconnection losses of an external laser. Still, shorter-reach applications might not necessarily need such powers.
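To put a rough number on the spectral purity point above (the worked example promised in the list), the sketch below uses the standard model for a laser with a Lorentzian linewidth, in which the phase drift accumulated over one symbol period has variance 2πΔν·Ts. The 100 kHz linewidth and 64 GBd symbol rate are example values, not specifications of any particular product.

```python
# Relation between laser linewidth and the phase noise seen by a coherent
# receiver, using the standard Wiener phase-noise model for a Lorentzian laser.
# The linewidth and symbol rate below are example values.

import math

linewidth_hz = 100e3        # example laser linewidth (100 kHz)
symbol_rate_baud = 64e9     # example symbol rate (64 GBd)

symbol_period_s = 1 / symbol_rate_baud
phase_variance = 2 * math.pi * linewidth_hz * symbol_period_s   # rad^2 per symbol
phase_std_deg = math.degrees(math.sqrt(phase_variance))

print(f"Phase drift per symbol: ~{phase_std_deg:.2f} degrees RMS")
# A purer (narrower-linewidth) laser drifts less per symbol, leaving more phase
# margin for dense constellations such as 16QAM or 64QAM.
```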
The Promise of Multi-Laser Arrays
Earlier this year, Intel Labs demonstrated an eight-wavelength laser array fully integrated on a silicon wafer. These milestones are essential for tunable DWDM because the laser arrays can allow for multi-channel transceivers that are more cost-effective when scaling up to higher speeds.
Let’s say we need a link with 1.6 Terabits/s of capacity. There are three ways we could implement it:
- Four modules of 400G: This solution uses existing off-the-shelf modules but has the largest footprint. It requires four slots in the router faceplate and an external multiplexer to merge these into a single 1.6T channel.
- One module of 1.6T: This solution will not require the external multiplexer and occupies just one plug slot on the router faceplate. However, making a single-channel 1.6T device has the highest complexity and cost.
- One module with four internal channels of 400G: A module with an array of four lasers (and thus four different 400G channels) will only require one plug slot on the faceplate while avoiding the complexity and cost of the single-channel 1.6T approach.
Multi-laser array and multi-channel solutions will become increasingly necessary to increase link capacity in coherent systems. They will not need more slots in the router faceplate while simultaneously avoiding the higher cost and complexity of increasing the speed with just a single channel.
Takeaways
The combination of tunable lasers and dense wavelength division multiplexing (DWDM) allows the datacom and telecom industries to expand their network capacity without increasing their existing fiber infrastructure. Thanks to the miniaturization of coherent technology and self-tuning algorithms, many application cases—metro transport, data center interconnects, and future access networks—will eventually move towards coherent tunable pluggables.
These new application cases will have to balance the laser parameters we described earlier (tunability, purity, size, environmental resistance, and power) depending on their material platforms, system architecture, and requirements. Some will need external lasers; some will want a fully-integrated laser. Some will need multi-laser arrays to increase capacity; others will need more stringent temperature certifications.
Following these trends, at EFFECT Photonics, we are not only developing the capabilities to provide a complete, coherent transceiver solution but also the external nano-ITLA units needed by other vendors.
Tags: coherent, DBR, DFB, ECL, full integration, InP, ITLA, micro ITLA, nano ITLA, SiP, tunable

The Growing Photonics Cluster of the Boston Area
As they lighted the candles in their ship, the Pilgrim families traveling on the Mayflower had no idea they would help build a nation that would become a major pioneer in light technology and many other fields.
The United States features many areas with a strong photonics background, including the many companies in California’s Silicon Valley and the regions close to the country’s leading optics universities, such as Colorado, New York, Arizona, and Florida.
However, the Greater Boston area and the state of Massachusetts in general are becoming an increasingly important photonics hub, with world-class universities and many successful optics and photonics initiatives and companies. Let’s talk a bit more about the region’s legacy with light-based technology and the town of Maynard’s history with the high-tech industry.
From World-Class Labs to the Real World
The Boston area features many world-class universities collaborating with the government and industry to develop new photonics technology. Harvard, the Massachusetts Institute of Technology (MIT), Boston University, Tufts University, and Northeastern University are major research institutions in the area that lead many photonics-related initiatives.
The state of Massachusetts, in general, has also been home to several prosperous photonics businesses, and initiatives are being made to capitalize on Boston’s extensive medical industry knowledge to boost biomedical optics and photonics. Raytheon, Polaroid, and IPG Photonics are examples of Massachusetts-based businesses that promoted optical technology.
The US federal government and Massachusetts state are committing resources to get these academic and industry partners to collaborate as much as possible. In 2015, the Lab for Education and Application Prototypes (LEAP) network was established as part of a federal drive to revive American manufacturing. The Massachusetts Manufacturing Innovation Initiative, a state grant program, and AIM Photonics, the national manufacturing institution, each contributed $11.3 million to constructing labs around Massachusetts universities and colleges.
The LEAP Network objectives are to teach integrated photonics manufacturing practice, offer companies technician training and certification, encourage company engagement in the tool, process, and application upgrades, and support AIM Photonics in their manufacturing and testing.
These partnerships form a statewide ecosystem to educate the manufacturing workforce throughout the photonics supply chain. The facilities’ strategic placement next to both universities and community colleges allows them to attract students from all areas and stages of their careers, from technicians to engineers to fundamental researchers.
From the Mill to High Tech: The Story of Maynard
A trip down Route 2 into Middlesex County, 25 miles northwest of Boston, will take one past apple orchards, vineyards, and some of Massachusetts’ most stunning nature preserves before arriving at a historic mill on the Assabet River. The community around this mill, Maynard, is a charming and surprisingly historical hub of economic innovation that houses an emerging tech ecosystem.
The renowned Assabet Woolen Mill was established for textile manufacturing in 1847 by Amory Maynard, who by the age of 16 was managing his own sawmill company. Initially a carpet manufacturing plant, Maynard’s enterprise produced blankets and uniforms for the Union Army during the Civil War. The company employed immigrants from Ireland, Finland, Poland, Russia, and Italy, many of them coming to the mill for jobs as soon as they arrived in the nation. By the 1930s, the town of Maynard was recognized as one of the most multi-ethnic places in the state.
The Assabet Woolen Mill continued to create textiles until 1950. The 11-acre former mill complex, currently named Mill and Main, is the contemporary expression of the town’s evolution and relationship with innovative industry.
The Digital Equipment Corporation (DEC) occupied the facility before the end of the 1950s, starting with just $70,000 in cash and three engineers. From the 1960s onward, DEC became a major global supplier of computer systems and enjoyed tremendous growth. It’s hard to overstate the company’s impact on Maynard, which became the “Mini Computer Capital of the World” in barely twenty years.
Following DEC’s departure, the mill complex was sold and rented out to a fresh group of young and ambitious computer startups, many of whom are still operating today. Since then, more people and companies have joined, noting the affordable real estate, the enjoyable commute and environs, and the obvious cluster of IT enterprises. For example, when Acacia Communications, Inc. was established in 2009 and needed a home, Maynard’s mill space was a natural fit.
The Future of Coherent DSP Design: Interview with Russell Fuerst
Digital signal processors (DSPs) are the heart of coherent communication systems. They not only encode/decode…
Digital signal processors (DSPs) are the heart of coherent communication systems. They not only encode/decode data into the three properties of a light signal (amplitude, phase, polarization) but also handle error correction, analog-to-digital conversion, Ethernet framing, and compensation of dispersion and nonlinear distortion. And with every passing generation, they are assigned more advanced functions such as probabilistic constellation shaping.
There are still many challenges ahead to improve DSPs and make them transmit even more bits in more energy-efficient ways. Now that EFFECT Photonics has incorporated talent and intellectual property from Viasat’s Coherent DSP team, we hope to contribute to this ongoing research and development and make transceivers faster and more sustainable than ever. We ask Russell Fuerst, our Vice-President of Digital Signal Processing, how we can achieve these goals.
What’s the most exciting thing about joining EFFECT Photonics?
Before being acquired by EFFECT Photonics, our DSP design team had been a design-for-hire house. We did designs for other companies that put those designs in their products. By joining EFFECT Photonics, we can now do a design and stamp our brand on it. That’s exciting.
The other exciting thing is to have all the technologies under one roof. Having everything from the DSP to the PIC to the packaging and module-level elements in one company will allow us to make our products that much better.
We also find the company culture to be very relaxed and very collaborative. Even though we’re geographically diverse, it’s been straightforward to understand what other people and groups in the company are doing. It’s easy to talk to others and find out whom you need to talk to. There’s not a whole lot of organizational structure that blocks communication, so it’s been excellent from that perspective.
People at EFFECT Photonics were welcoming from day one, making us that much more excited to join.
What key technology challenges must be solved by DSP designers to thrive in the next 5 to 10 years?
The key is to bring the power down while also increasing the performance.
In the markets where coherent has been the de-facto solution, I think it’s essential to understand how to drive cost and power down either through the DSP design itself or by integrating the DSP with other technologies within the module. That will be where the benefits come from in those markets.
Similarly, there are markets where direct detection is the current technology of choice. We must understand how to insert coherent technology into those markets while meeting the stringent requirements of those important high-volume markets. Again, this progress will be largely tied to performance within the power and cost requirements.
As DSP technology has matured, other aspects outside of performance are becoming key, and understanding how we can work that into our products will be the key to success.
How do you think the DSP can be more tightly integrated with the PIC?
This is an answer that will evolve over time. We will become more closely integrated with the team in Eindhoven and learn some of the nuances of their mature design process. And similarly, they’ll understand the nuances of our design process that have matured over the years. As we understand the PIC technology and our in-house capabilities better, that will bring additional improvements that are currently unknown.
Right now, we are primarily focused on the obvious improvements tied to the fully-integrated platform. For example, the fact that we can have the laser on the PIC because of the active InP material. We want to understand how we co-design aspects of the module and shift the complexity from one design piece or component to another, thanks to being vertically integrated.
Another closely-tied area for improvement is on the modulator side. We think that the substantially lower drive voltages required for the InP modulator give us the possibility to eliminate some components, such as RF drivers. We could potentially drive the modulator directly from that DSP without any intermediary electronics, which would reduce the cost and power consumption. That’s not only tied to the lower drive voltages but also some proprietary signal conditioning we can do to minimize some of the nonlinearities in the modulator and improve the performance.
What are the challenges and opportunities of designing DSPs for Indium phosphide instead of silicon?
So, we already mentioned two opportunities with the laser and the modulator.
I think the InP integration makes the design challenges smaller than those facing DSP design for silicon photonics. The fact is that InP can have more active integrated components and that DSPs are inherently active electronic devices, so getting the active functions tuned and matched over time will be a challenge. It motivates our EFFECT DSP team to quickly integrate with the experienced EFFECT PIC design team to understand the fundamental InP platform a bit better. Once we understand it, the DSP designs will get more manageable with improved performance, especially as we have control over the designs of both DSP and PIC. As we get to the point where co-packaging is realized, there will also be some thermal management issues to consider.
When we started doing coherent DSP designs for optical communication over a decade ago, we pulled many solutions from the RF wireless and satellite communications space into our initial designs. Still, we couldn’t bring all those solutions to the optical markets.
However, when you get more of the InP active components involved, some of those solutions can finally be brought over and utilized. They were not used before in our designs for silicon photonics because silicon is not an active medium and lacked the performance to exploit these advanced techniques.
For example, we have done proprietary waveforms tuned to specific satellite systems in the wireless space. Our DSP team was able to design non-standard constellations and modulation schemes that increased the capacity of the satellite link over the previous generation of satellites. Similarly, we could tune the DSP’s waveform and performance to the inherent advantages of the InP platform to improve cost, performance, bandwidth utilization, and efficiency. That’s something that we’re excited about.
Takeaways
As Russell explained, the big challenge for DSP designers continues to be increasing performance while keeping down the power consumption. Finding ways to integrate the DSP more deeply with the InP platform can overcome this challenge, such as direct control of the laser and modulator from the DSP to novel waveform shaping methods. The presence of active components in the InP platforms also gives DSP designers the opportunity to import more solutions from the RF wireless space.
We look forward to our new DSP team at EFFECT Photonics settling into the company and trying out all these solutions to make DSPs faster and more sustainable!
Tags: coherent, DSP, energy efficient, InP, integration, performance, power consumption, Sustainable, Viasat

The Future of Passive Optical Networks
Like every other telecom network, cable networks had to change to meet the growing demand…
Like every other telecom network, cable networks had to change to meet the growing demand for data. These demands led to the development of hybrid fiber-coaxial (HFC) networks in the 1990s and 2000s. In these networks, optical fibers travel from the cable company hub and terminate in optical nodes, while coaxial cable connects the last few hundred meters from the optical node to nearby houses. Most of these connections were asymmetrical, giving customers more capacity to download data than upload.
That being said, the way we use the Internet has evolved over the last ten years. Users now require more upstream bandwidth thanks to the growth of social media, online gaming, video calls, and independent content creation such as video blogging. The DOCSIS standards that govern data transmission over coaxial cables have advanced quickly because of these additional needs. For instance, full-duplex transmission with symmetrical upstream and downstream channels is permitted under the most current DOCSIS 4.0 specifications.
Fiber-to-the-home (FTTH) systems, which bring fiber right to the customer’s door, are also proliferating and enabling Gigabit connections quicker than HFC networks. Overall, extending optical fiber deeper into communities (see Figure 1 for a graphic example) is a critical economic driver, increasing connectivity for the rural and underserved. These investments also lead to more robust competition among cable companies and a denser, higher-performance wireless network.
Passive optical networks (PONs) are a vital technology to cost-effectively expand the use of optical fiber within access networks and make FTTH systems more viable. By creating networks using passive optical splitters, PONs avoid the power consumption and cost of active components in optical networks such as electronics and amplifiers. PONs can be deployed in mobile fronthaul and mid-haul for macro sites, metro networks, and enterprise scenarios.
Despite some success from PONs, the cost of laying more fiber and the optical modems for the end users continue to deter carriers from using FTTH more broadly across their networks. This cost problem will only grow as the industry moves into higher bandwidths, such as 50G and 100G, requiring coherent technology in the modems.
Therefore, new technology and manufacturing methods are required to make PON technology more affordable and accessible. For example, wavelength division multiplexing (WDM)-PON allows providers to make the most of their existing fiber infrastructure. Meanwhile, simplified designs for coherent digital signal processors (DSPs) manufactured at large volumes can help lower the cost of coherent PON technology for access networks.
The Advantages of WDM PONs
Previous PON solutions, such as Gigabit PON (GPON) and Ethernet PON (EPON), relied on time-division multiplexing (TDM). In these cases, multiple channels share the fiber sequentially. These technologies were initially meant for the residential services market, but they scale poorly to the higher capacity needs of business or carrier services. PON standardization for 25G and 50G capacities is ready, but sharing a limited bitrate among multiple users with TDM technology is an insufficient approach for future-proof access networks.
WDM-PON, by contrast, uses wavelength multiplexing/demultiplexing to divide data signals into individual outgoing signals connected to buildings or homes. This hardware-based traffic separation gives customers the benefits of a secure and scalable point-to-point wavelength link. Since many wavelength channels fit inside a single fiber, the carrier can retain very low fiber counts, yielding lower operating costs.
WDM-PON has the potential to become the unified access and backhaul technology of the future, carrying data from residential, business, and carrier wholesale services on a single platform. We discussed this converged access solution in one of our previous articles. Its long-reach capability and bandwidth scalability enable carriers to serve more customers from fewer active sites without compromising security and availability.
Migration to the WDM-PON access network does require a carrier to reassess how it views its network topology. It is not only a move away from operating parallel purpose-built platforms for different user groups to one converged access and backhaul infrastructure. It is also a change from today’s power-hungry and labor-intensive switch and router systems to a simplified, energy-efficient, and transport-centric environment with more passive optical components.
The Possibility of Coherent Access
As data demands continue to grow, the direct detect optical technology used in prior PON standards will not be enough. The roadmap for this upgrade remains a bit blurry, with different carriers taking different paths. For example, future expansions might require using 25G or 50G transceivers in the cable network, but the required number of channels might not fit within the conventional optical band (the C-band). Such a capacity expansion would therefore require using other bands (such as the O-band), which comes with additional challenges. An expansion to other optical bands would require changes in other optical networking equipment, such as multiplexers and filters, which increases the cost of the upgrade.
An alternative solution could be upgrading instead to coherent 100G technology. An upgrade to 100G could provide the necessary capacity in cable networks while remaining in the C-band and avoiding using other optical bands. This path has also been facilitated by the decreasing costs of coherent transceivers, which are becoming more integrated, sustainable, and affordable. You can read more about this subject in one of our previous articles.
For example, the renowned non-profit R&D center CableLabs announced a project to develop a symmetric 100G Coherent PON (C-PON). According to CableLabs, the scenarios for a C-PON are many: aggregation of 10G PON and DOCSIS 4.0 links, transport for macro-cell sites in some 5G network configurations, fiber-to-the-building (FTTB), long-reach rural scenarios, and high-density urban networks.
CableLabs anticipates that C-PON and its 100G capabilities will play a significant role in the future of access networks, starting with data aggregation on networks that implement a distributed access architecture (DAA) like Remote PHY. You can learn more about these networks here.
Combining Affordable Designs with Affordable Manufacturing
The main challenge of C-PON is the higher cost of coherent modulation and detection. Coherent technology requires more complex and expensive optics and digital signal processors (DSPs). Plenty of research is happening on simplifying these coherent designs for access networks. However, a first step towards making these optics more accessible is the 100ZR standard.
100ZR is currently a term for a short-reach (~80 km) coherent 100Gbps transceiver in a QSFP pluggable size. Targeted at the metro edge and enterprise applications that do not require 400ZR solutions, 100ZR provides a lower-cost, lower-power pluggable that also benefits from compatibility with the large installed base of 50 GHz and legacy 100 GHz multiplexer systems.
Another way to reduce the cost of PON technology is through economies of scale: manufacturing pluggable transceiver devices at a high volume to drive down the cost per device. And with greater photonic integration, even more devices can be produced on a single wafer. This is the same economy-of-scale principle behind electronics manufacturing, and it must now be applied to photonics.
Researchers at the Technical University of Eindhoven and the JePPIX consortium have modeled how this economy of scale principle would apply to photonics. If production volumes can increase from a few thousand chips per year to a few million, the price per optical chip can decrease from thousands of Euros to tens of Euros. This must be the goal of the optical transceiver industry.
Takeaways
Integrated photonics and volume manufacturing will be vital for developing future passive optical networks. PONs will use more WDM-PON solutions for increased capacity, secure channels, and easier management through self-tuning algorithms.
Meanwhile, PONs are also moving into incorporating coherent technology. These coherent transceivers have been traditionally too expensive for end-user modems. Fortunately, more affordable coherent transceiver designs and standards manufactured at larger volumes can change this situation and decrease the cost per device.
Tags: 100G, 5G, 6G, access, access networks, aggregation, backhaul, capacity, coherent, DWDM, fronthaul, Integrated Photonics, LightCounting, live events, metro, midhaul, mobile, mobile access, mobile networks, network, optical networking, optical technology, photonic integrated chip, photonic integration, Photonics, PIC, PON, programmable photonic system-on-chip, solutions, technology, VR, WDM

How Many DWDM Channels Do You Really Need?
Optical fiber and dense wavelength division multiplex (DWDM) technology are moving towards the edges of…
Optical fiber and dense wavelength division multiplex (DWDM) technology are moving towards the edges of networks. In the case of new 5G networks, operators will need more fiber capacity to interconnect the increased density of cell sites, often requiring replacing legacy time-division multiplexing transmission with higher-capacity DWDM links. In the case of cable and other fixed access networks, new distributed access architectures like Remote PHY free up ports in cable operator headends to serve more bandwidth to more customers.
A report by Deloitte summarizes the reasons to expand the reach and capacity of optical access networks: “Extending fiber deeper into communities is a critical economic driver, promoting competition, increasing connectivity for the rural and underserved, and supporting densification for wireless.”
To achieve such a deep fiber deployment, operators look to DWDM solutions to expand their fiber capacity without the expensive laying of new fiber. DWDM technology has become more affordable than ever due to the availability of low-cost filters and SFP transceiver modules with greater photonic integration and manufacturing volumes. Furthermore, self-tuning technology has made the installation and maintenance of transceivers easier and more affordable.
Despite the advantages of DWDM solutions, their price still causes operators to second-guess whether the upgrade is worth it. For example, mobile fronthaul applications don’t require all 40, 80, or 100 channels of many existing tunable modules. Fortunately, operators can now choose between narrow- or full-band tunable solutions that offer a greater variety of wavelength channels to fit different budgets and network requirements.
Example: Fullband Tunables in Cable Networks
Let’s look at what happens when a fixed access network needs to migrate to a distributed access architecture like Remote PHY.
A provider has a legacy access network with eight optical nodes, and each node services 500 customers. To give these customers more bandwidth, the provider wants to split each node into ten new nodes of fifty customers each, going from eight to eighty nodes in total. Each new node requires its own DWDM channel, occupying more and more of the optical C-band. This upgrade therefore calls for a fullband tunable module with coverage across the entire C-band, providing many DWDM channels with narrow (50 GHz) grid spacing.
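To make the channel math concrete, here is a rough back-of-the-envelope sketch in Python. The node counts follow the example above; the usable C-band width (~4.4 THz) and the 50 GHz grid are our illustrative assumptions, not figures from the network described.

```python
# Rough channel-count check for the node-split example above.
# The C-band width is an approximation used only for illustration.

LEGACY_NODES = 8           # nodes in the legacy access network
SPLIT_FACTOR = 10          # each legacy node becomes ten new nodes
C_BAND_WIDTH_GHZ = 4400    # assumed usable C-band width (~1530-1565 nm)
GRID_SPACING_GHZ = 50      # DWDM grid spacing of the multiplexer

channels_needed = LEGACY_NODES * SPLIT_FACTOR              # one channel per new node
channels_available = C_BAND_WIDTH_GHZ // GRID_SPACING_GHZ  # channels on a 50 GHz grid

print(f"Channels needed: {channels_needed}")                                 # 80
print(f"Channels available on a 50 GHz C-band grid: {channels_available}")   # ~88
print(f"Fits in the C-band: {channels_needed <= channels_available}")
```

Under these assumptions, the eighty new nodes fit on a 50 GHz C-band grid, but only a fullband tunable module can reach all of those channels.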
Furthermore, using a fullband tunable module means that a single part number can handle all the necessary wavelengths for the network. In the past, network operators used fixed wavelength DWDM modules that must go into specific ports. For example, an SFP+ module with a C16 wavelength could only work with the C16 wavelength port of a DWDM multiplexer. However, tunable SFP+ modules can connect to any port of a DWDM multiplexer. This advantage means technicians no longer have to navigate a confusing sea of fixed modules with specific wavelengths; a single tunable module and part number will do the job.
Overall, fullband tunable modules will fit applications that need a large number of wavelength channels to maximize the capacity of fiber infrastructure. Metro transport or data center interconnects (DCIs) are good examples of applications with such requirements.
Example: Narrowband Tunables in Mobile Fronthaul
The transition to 5G and beyond will require a significant restructuring of mobile network architecture. 5G networks will use higher frequency bands, which require more cell sites and antennas to cover the same geographical areas as 4G. Existing antennas must be upgraded to denser antenna arrays. These requirements will put more pressure on the existing fiber infrastructure, yet mobile network operators are expected to deliver on their 5G promises with relatively little expansion of that infrastructure.
DWDM solutions will be vital for mobile network operators to scale capacity without laying new fiber. However, operators often regard traditional fullband tunable modules as expensive for this application. Mobile fronthaul links don’t need anything close to the 40 or 80 DWDM channels of a fullband transceiver. It’s like having a cable subscription where you only watch 10 out of the 80 TV channels.
This issue led EFFECT Photonics to develop narrowband tunable modules with just nine channels. They offer a more affordable, moderate capacity expansion that better fits the needs of mobile fronthaul networks. These networks often feature nodes that aggregate two or three different cell sites, each with three antenna arrays (each antenna provides 120° of coverage at the tower), and each antenna array needs its own wavelength channel. These aggregation points therefore often need six or nine different wavelength channels, but not the 80-100 channels of a typical fullband module.
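As a quick illustration of that channel arithmetic, here is a minimal Python sketch. The sector and site counts come from the scenario above; the module channel counts are the figures quoted in this article.

```python
# Channel count at a fronthaul aggregation node: each cell site has three
# antenna sectors (120 degrees each), and each sector gets its own wavelength.

SECTORS_PER_SITE = 3
NARROWBAND_CHANNELS = 9   # channels in a narrowband tunable module
FULLBAND_CHANNELS = 80    # typical fullband module, for comparison

for sites in (2, 3):
    needed = sites * SECTORS_PER_SITE
    print(f"{sites} aggregated cell sites -> {needed} wavelength channels "
          f"(narrowband covers it: {needed <= NARROWBAND_CHANNELS}, "
          f"fullband channels left unused: {FULLBAND_CHANNELS - needed})")
```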
With the narrowband tunable option, operators can reduce their part number inventory compared to grey transceivers while avoiding the cost of a fullband transceiver.
Synergizing with Self-Tuning Algorithms
The number of channels in a tunable module (up to 100 in the case of EFFECT Photonics fullband modules) can quickly become overwhelming for technicians in the field. There will be more records to examine, more programming for tuning equipment, more trucks to load with tuning equipment, and more verifications to do in the field. These tasks can take a couple of hours just for a single node. If there are hundreds of nodes to install or repair, the required hours of labor will quickly rack up into the thousands and the associated costs into hundreds of thousands.
Self-tuning allows technicians to treat DWDM tunable modules the same way they treat grey transceivers. There is no need for additional training to install the tunable module, and no need to program tuning equipment or obsessively check wavelength records and tables to avoid deployment errors in the field. Technicians only need to follow the typical cleaning and handling procedures and plug in the transceiver; the device will automatically scan for and find the correct wavelength. This feature can save providers thousands of person-hours in network installation and maintenance and reduce the probability of human error, effectively reducing capital and operational expenditures.
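To illustrate the idea only (this is not EFFECT Photonics’ actual algorithm), a self-tuning module can be pictured as a loop that steps through its supported channels until the far end acknowledges a valid link. The Python sketch below uses hypothetical names such as `link_established` and a made-up “correct” channel.

```python
# Conceptual sketch of wavelength self-tuning: step through the supported
# channels, dwell briefly on each, and lock to the first one where the far
# end acknowledges a valid link. Purely illustrative, not a real algorithm.

import time
from typing import Optional

def link_established(channel: int) -> bool:
    """Placeholder for the handshake with the far-end receiver/multiplexer."""
    return channel == 42  # hypothetical: the mux port expects channel 42

def self_tune(channels: range, dwell_seconds: float = 0.1) -> Optional[int]:
    for channel in channels:
        # In a real module, the laser would be tuned to `channel` here.
        time.sleep(dwell_seconds)
        if link_established(channel):
            return channel
    return None

locked = self_tune(range(1, 101))  # e.g. a 100-channel fullband module
print(f"Locked to channel {locked}" if locked else "No valid channel found")
```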
Self-tuning algorithms make installing and maintaining narrowband and fullband tunable modules more straightforward and affordable for network deployment.
Takeaways
Fullband self-tuning modules will allow providers to deploy extensive fiber capacity upgrades more quickly than ever. However, in use cases such as mobile access networks where operators don’t need a wide array of DWDM channels, they can opt for narrowband solutions that are more affordable than their fullband alternatives. By combining fullband and narrowband solutions with self-tuning algorithms, operators can expand their networks in the most affordable and accessible ways for their budget and network requirements.
Tags: 100G, 5G, 6G, access, access networks, aggregation, backhaul, capacity, coherent, DWDM, fronthaul, Integrated Photonics, LightCounting, live events, metro, midhaul, mobile, mobile access, mobile networks, network, optical networking, optical technology, photonic integrated chip, photonic integration, Photonics, PIC, PON, programmable photonic system-on-chip, solutions, technology, VR, WDM

Scaling Up Live Event Coverage with 5G Technology
Everyone who has attended a major venue or event, such as a football stadium or…
Everyone who has attended a major venue or event, such as a football stadium or concert, knows the pains of getting good Internet access in such a packed venue. There are too many people and not enough bandwidth. Many initial use cases for 5G have been restricted to achieving much higher speeds, allowing users to enjoy seamless connectivity for live gaming, video conferencing, and live broadcasting. Within a few years, consumers will demand more immersive experiences to enjoy sporting events, concerts, and movies. These experiences will include virtual and augmented reality and improved payment methods.
These experiences will hopefully lead to a win-win scenario: the end user can improve their experiences at the venue, while the telecom service provider can increase their average income per user. Delivering this higher capacity and immersive experiences for live events is a challenge for telecom providers, as they struggle to scale up their networks cost-effectively for these one-off events or huge venues. Fortunately, 5G technology makes scaling up for these events easier thanks to the greater density of cell sites and the increased capacity of optical transport networks.
A Higher Bandwidth Experience
One of the biggest athletic events in the world, the Super Bowl, draws 60 to 100 thousand spectators to an American stadium once a year. Furthermore, hundreds of thousands, if not millions, of out-of-towners visit the Super Bowl host city to support their teams. The amount of data transported inside the Atlanta stadium for the 2019 Super Bowl alone reached a record 24 terabytes. The half-time show caused a 13.06 Gbps surge in data traffic on the network from more than 30,000 mobile devices. This massive traffic surge in mobile networks can even hamper the ability of security officers and first responders (i.e., law enforcement and medical workers) to react swiftly to crises.
Fortunately, 5G networks are designed to handle more connections than previous generations. They use higher frequency bands for increased bandwidth and a higher density of antennas and cell sites to provide more coverage. This infrastructure enables reliable data speeds of up to 10 Gbps per device and more channels that enable a stable and prioritized connection for critical medical and security services. Carriers are investing heavily in 5G infrastructure around sports stadiums and other public events to improve the safety and security of visitors.
For example, Sprint updated its cell towers with Massive multiple-input, multiple-output (MIMO) technology ahead of the 2019 Super Bowl in Atlanta. Meanwhile, AT&T implemented the first standards-based mobile 5G network in Atlanta with a $43 million network upgrade. These upgrades help handle the enormous traffic during live events like the Super Bowl, in addition to providing first responders with a quick, dependable communication network for other large events.
New Ways of Interaction
5G technology and its increased bandwidth capacity will promote new ways for live audiences to interact with these events. Joris Evers, Chief Communication Officer of La Liga, Spain’s top men’s football league, explains a potential application: “Inside a stadium, you could foresee 5G giving fans more capacities on a portable device to check game stats and replays in near real-time.” The gigabit speeds of 5G can replace the traditional jumbotrons and screens and allow spectators to replay games instantly from their cellphones. Venues are also investigating how 5G and AI might lessen lengthy queues at kiosks for events with tens of thousands of visitors. At all Major League Baseball stadiums, American food service company Aramark deployed AI-driven self-service checkout kiosks. Aramark reports that these kiosks have resulted in a 40% increase in transaction speed and a 25% increase in revenue.
Almost all live events have limited seats available, and ticket prices reflect demand, seating preferences, and supply. However, 5G might allow an endless number of virtual seats. Regarding the potential of 5G for sports, Evers notes that “away from the stadium, 5G may enable VR experiences to happen in a more live fashion.”
Strengthening the Transport Network
This increased bandwidth and new forms of interaction in live events will put more pressure on the existing fiber transport infrastructure. Mobile network operators are expected to deliver their 5G promises while avoiding costly expansions of their fiber infrastructure. The initial rollout of 5G has already happened in most developed countries, with operators upgrading their optical transceivers to 10G SFP+ and wavelength division multiplexing (WDM). Mobile networks must now move to the next phase of 5G deployments, exponentially increasing the number of devices connected to the network.
The roadmap for this update remains a bit blurry, with different carriers taking different paths. For example, South Korea’s service providers decided to future-proof their fiber infrastructure and invested in 10G and 25G WDM technology since the early stages of their 5G rollout. Carriers in Europe and the Americas have followed a different path, upgrading only to 10G in the early phase while thinking about the next step.
Some carriers might do a straightforward upgrade to 25G, but others are already thinking about future-proofing their networks ahead of a 6G standard. For example, future expansions might require using 25G or 50G transceivers in their networks, but the required number of channels might not fit within the conventional optical band (the C-band). Such a capacity expansion would therefore require using other bands (such as the O-band), which comes with additional challenges.
An expansion to other optical bands would require changes in other optical networking equipment, such as multiplexers and filters, which increases the cost of the upgrade. An alternative solution could be upgrading instead from 10G to coherent 100G technology. An upgrade to 100G could provide the necessary capacity in transport networks while remaining in the C-band and avoiding using other optical bands. This path has also been facilitated by the decreasing costs of coherent transceivers, which are becoming more integrated, sustainable, and affordable. You can read more about this subject in one of our previous articles. By deploying these higher-capacity links and DWDM solutions, providers will scale up their transport networks to enable these new services for live events.
Takeaways
Thanks to 5G technology, network providers can provide more than just higher bandwidth for live events and venues; they will also enable new possibilities in live events. For example, audiences can instantly replay what is happening in a football match or use VR to attend a match or concert virtually. This progress in how end users interact with live events must also be backed up by the transport network. The discussions of how to upgrade the transport network are still ongoing and imply that coherent technology could play a significant role in this upgrade.
Tags: 100G, 5G, 6G, access, access networks, aggregation, backhaul, capacity, coherent, DWDM, fronthaul, Integrated Photonics, LightCounting, live events, metro, midhaul, mobile, mobile access, mobile networks, network, optical networking, optical technology, photonic integrated chip, photonic integration, Photonics, PIC, PON, programmable photonic system-on-chip, solutions, technology, VR, WDM

What’s Inside a Coherent DSP?
Coherent transmission has become a fundamental component of optical networks to address situations where direct…
Coherent transmission has become a fundamental component of optical networks to address situations where direct detect technology cannot provide the required capacity and reach.
While direct detect transmission only uses the amplitude of the light signal, coherent optical transmission manipulates three different light properties: amplitude, phase, and polarization. These additional degrees of modulation allow for faster optical signals without compromising the transmission distance. Furthermore, coherent technology enables capacity upgrades without replacing the expensive physical fiber infrastructure on the ground.
The digital signal processor (DSP) is the electronic heart of coherent transmission systems. The fundamental function of the DSP is encoding the electronic digital data into the amplitude, phase, and polarization of the light signal and decoding said data when the signal is received. The DSP does much more than that, though: it compensates for impairments in the fiber, performs analog-to-digital conversions (and vice versa), corrects errors, encrypts data, and monitors performance. DSPs have also recently taken on more advanced functions, such as probabilistic constellation shaping and dynamic bandwidth allocation, which improve reach and performance.
Given its vital role in coherent optical transmission, we at EFFECT Photonics want to provide an explainer of what goes on inside the DSP chip of our optical transceivers.
There’s More to a DSP Than You Think…
Even though we colloquially call the chip a “DSP”, it is an electronic engine that performs much more than just signal processing. Some of the different functions of this electronic engine (diagram below) are:
- Analog Processing: This engine segment focuses on converting signals between analog and digital formats. Digital data is composed of discrete values like 0s and 1s, but transmitting it through a coherent optical system requires converting it into an analog signal with continuous values. Meanwhile, a light signal received on the opposite end requires conversion from analog into digital format.
- Digital Signal Processing: This is the actual digital processing. As explained previously, this block encodes the digital data into the different properties of a light signal. It also decodes this data when the light signal is received.
- Forward Error Correction (FEC): FEC makes the coherent link much more tolerant to noise than a direct detect system and enables much longer reach and higher capacity. Thanks to FEC, coherent links can handle bit error rates that are literally a million times higher than a typical direct detect link. FEC algorithms allow the electronic engine to enhance the link performance without changing the hardware. This enhancement is analogous to imaging cameras: image processing algorithms allow the lenses inside your phone camera to produce a higher-quality image.
- Framer: While a typical electric signal sent through a network uses the Ethernet frame format, the optical signal uses the Optical Transport Network (OTN) format. The framer block performs this conversion. We should note that an increasingly popular solution in communication systems is to send Ethernet frames directly over the optical signal (a solution called optical Ethernet). However, many legacy optical communication systems still use the OTN format, so electronic engines should always have the option to convert between OTN and Ethernet frames.
- Glue Logic: This block consists of the electronic circuitry needed to interface all the different blocks of the electronic engine. This includes the microprocessor that drives the electronic engine and the serializer-deserializer (SERDES) circuit. Since coherent systems only have four channels, the SERDES circuit converts parallel data streams into a single serial stream that can be transmitted over one of these channels. The opposite conversion (serial-to-parallel) occurs when the signal is received.
We must highlight that each of these blocks has its own specialized circuitry and algorithms, so each is a separate piece of intellectual property. Therefore, developing the entire electronic engine requires ownership or access to each of these intellectual properties.
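As a compact summary of these blocks, the Python sketch below lays them out as a simple data structure. The block names follow the list above; the one-line role descriptions are our paraphrase, and this is documentation-as-code rather than a model of the actual hardware or any real API.

```python
# Descriptive map of the transceiver's electronic engine blocks listed above.
# This is an illustrative summary, not a hardware model.

from dataclasses import dataclass

@dataclass
class EngineBlock:
    name: str
    role: str

ELECTRONIC_ENGINE = [
    EngineBlock("Analog Processing", "convert between digital data and analog signals (DAC/ADC)"),
    EngineBlock("Digital Signal Processing", "encode/decode data into amplitude, phase, and polarization"),
    EngineBlock("Forward Error Correction", "tolerate noise and extend reach by correcting bit errors"),
    EngineBlock("Framer", "convert between Ethernet frames and the OTN format"),
    EngineBlock("Glue Logic", "microprocessor, SERDES, and interfacing circuitry between blocks"),
]

for block in ELECTRONIC_ENGINE:
    print(f"{block.name}: {block.role}")
```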
So What’s Inside the Actual DSP Block?
Having first clarified all the different parts of a transceiver’s electronic engine, we can now talk more specifically about the actual DSP block that encodes/decodes the data and compensates for distortions and impairments in the optical fiber. We will describe some of the critical functions of the DSP in the order in which they happen during signal transmission. Receiving the signal requires these functions to occur in the opposite order, as shown in the diagram below.
- Signal Mapping: This is where the encoding/decoding magic happens. The DSP maps the data signal into the different phases of the light signal—the in-phase components and the quadrature components—and the two different polarizations (x- and y- polarizations). When receiving the signal, the DSP will perform the inverse process, taking the information from the phase and polarization and mapping it into a stream of bits. The whole process of encoding and decoding data into different phases of light is known as quadrature modulation. Explaining quadrature modulation in detail goes beyond the scope of this article, so if you want to know more about it, please read the following article.
- Pilot Signal Insertion: The pilot signal is transmitted over the communication systems to estimate the status of the transmission path. It makes it easier (and thus more energy-efficient) for the receiver end to decode data from the phase and polarization of the light signal.
- Adaptive Equalization: This function happens when receiving the signal. The fiber channel adds several distortions to the light signal (more on that later) that change the signal’s frequency spectrum from what was initially intended. Just as with an audio equalizer, the purpose of this equalizer is to change specific frequencies of the signal to compensate for the distortions and bring the signal spectrum back to what was initially intended.
- Dispersion and Nonlinear Compensation: This function happens when receiving the signal. The quality of the light signal degrades when traveling through an optical fiber by a process called dispersion. The same phenomenon happens when a prism splits white light into several colors. The fiber also adds other distortions due to nonlinear optical effects. These effects get worse as the input power of the light signal increases, leading to a trade-off: you might want more power to transmit over longer distances, but the nonlinear distortions also become larger, which defeats the purpose of using more power. The DSP performs several operations on the light signal that try to offset these dispersion and nonlinear distortions.
- Spectrum Shaping: Communication systems must be efficient in all senses, so they must transmit as much signal as possible within a limited number of frequencies. Spectrum shaping is a process that uses a digital filter to narrow down the signal to the smallest possible frequency bandwidth and achieve this efficiency.
When transmitting, the signal goes through the digital-to-analog conversion after this whole DSP sequence. When receiving, the signal goes through the inverse analog-to-digital conversion and then through the DSP sequence.
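A minimal sketch of that ordering, assuming placeholder stage names rather than a real DSP API, might look like this:

```python
# Order of the DSP operations described above. Stage names are placeholders;
# each stage would transform the signal in a real implementation.

TX_PIPELINE = [
    "signal_mapping",          # bits -> I/Q symbols on the x- and y-polarizations
    "pilot_signal_insertion",  # add known symbols so the receiver can estimate the path
    "spectrum_shaping",        # digital filter to narrow the signal bandwidth
    "digital_to_analog",       # hand off to the analog front end
]

RX_PIPELINE = [
    "analog_to_digital",
    "adaptive_equalization",      # undo spectral distortions added by the fiber
    "dispersion_nonlinear_comp",  # offset chromatic dispersion and nonlinear effects
    "signal_demapping",           # recovered symbols -> bits
]

def run(pipeline, signal):
    for stage in pipeline:
        print(f"stage: {stage}")  # a real DSP would apply the stage to `signal` here
    return signal

run(TX_PIPELINE, signal="example bits")
run(RX_PIPELINE, signal="example waveform")
```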
Recent Advances and Challenges in DSPs
This is an oversimplification, but we can broadly classify the critical areas of improvement for DSPs into two categories.
Transmission Reach and Efficiency
The entire field of communication technology can arguably be summarized with a single question: how can we transmit more information into a single frequency-limited signal over the longest possible distance?
DSP developers have many tools in their kit to answer this question. For example, they can transmit more data by using more states in their quadrature amplitude modulation (QAM) process. The simplest kind of QAM (4-QAM) uses four different states (usually called constellation points), obtained by combining two amplitude levels on each of the in-phase and quadrature components of the light.
By using more amplitude levels and phases, more bits can be transmitted in one go. State-of-the-art commercially available 400ZR transceivers typically use 16-QAM, with sixteen constellation points that arise from combining four amplitude levels on each of the in-phase and quadrature components. However, this increased transmission capacity comes at a price: a signal with a higher modulation order is more susceptible to noise and distortions. That’s why these transceivers can transmit 400Gbps over 100km but not over 1000km.
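To make the constellation idea concrete, here is a minimal Python sketch of a square 16-QAM grid: four amplitude levels on each of the I and Q axes give sixteen points, each carrying four bits. This is a generic illustration, not any particular transceiver’s mapping.

```python
# Build a square 16-QAM constellation: four amplitude levels per axis.

import itertools

LEVELS = (-3, -1, 1, 3)  # normalized amplitude levels on the I and Q axes

constellation = [complex(i, q) for i, q in itertools.product(LEVELS, LEVELS)]

bits_per_symbol = len(constellation).bit_length() - 1  # 16 points -> 4 bits
print(f"{len(constellation)} constellation points, {bits_per_symbol} bits per symbol")
print("Sample points:", constellation[:4])
```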
One of the most remarkable recent advances in DSPs to increase the reach of light signals is probabilistic constellation shaping (PCS). In the typical 16-QAM modulation used in coherent transceivers, each constellation point has the same probability of being used. This is inefficient since the outer constellation points that require more power have the same probability as the inner constellation points that require lower power.
PCS uses the low-power inner constellation points more frequently, and the outer constellation points less frequently, as shown in Figure 5. This feature provides many benefits, including improved tolerance to distortions and easier system optimization to specific bit transmission requirements. If you want to know more about it, please read the explainers here and here.
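As a rough numerical illustration of this idea (our own toy example, not the shaping used in any specific product), the sketch below weights each 16-QAM point with a Maxwell-Boltzmann-style probability that decreases with the point’s energy, so the inner points come out more likely than the outer ones.

```python
# Toy probabilistic constellation shaping: weight each 16-QAM point by
# exp(-nu * |point|^2) so low-power inner points are used more often.

import itertools
import math

LEVELS = (-3, -1, 1, 3)
NU = 0.05  # hypothetical shaping factor; larger values favor inner points more

points = [complex(i, q) for i, q in itertools.product(LEVELS, LEVELS)]
weights = [math.exp(-NU * abs(p) ** 2) for p in points]
total = sum(weights)
probs = {p: w / total for p, w in zip(points, weights)}

print(f"P(inner point 1+1j) = {probs[1 + 1j]:.3f}")  # highest probability
print(f"P(outer point 3+3j) = {probs[3 + 3j]:.3f}")  # lowest probability
```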
Energy Efficiency
Increases in transmission reach and efficiency must be balanced with power consumption and thermal management. Energy efficiency is the biggest obstacle in the roadmap to scale high-speed coherent transceivers into Terabit speeds.
Over the last two decades, power ratings for pluggable modules have increased as we moved from direct detection to more power-hungry coherent transmission: from 2 W for SFP modules to 3.5 W for QSFP modules, and now to 14 W for QSFP-DD and 21.1 W for OSFP form factors. Rockley Photonics researchers estimate that a future electronic switch filled with 800G modules would draw around 1 kW of power just for the optical modules.
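To put those per-module figures in perspective, here is a rough Python calculation of faceplate optics power for a hypothetical 32-port switch; the port count is our assumption, while the wattages are the figures quoted above.

```python
# Aggregate optics power for a hypothetical switch faceplate.
# Per-module wattages follow the figures quoted above; the 32-port count
# is an assumption for illustration only.

MODULE_POWER_W = {
    "SFP (direct detect)": 2.0,
    "QSFP": 3.5,
    "QSFP-DD": 14.0,
    "OSFP": 21.1,
}
PORTS = 32

for form_factor, watts in MODULE_POWER_W.items():
    print(f"{PORTS} x {form_factor}: {PORTS * watts:.0f} W just for the optical modules")
```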
Around 50% of a coherent transceiver’s power consumption goes into the DSP chip. Scaling to higher bandwidths leads to even more losses and energy consumption from the DSP chip and its radiofrequency (RF) interconnects with the optical engine. DSP chips must therefore be adaptable and smart, using the least amount of energy to encode/decode information. You can learn more about this subject in one of our previous articles. The interconnects with the optical engine are another area that can see further optimization, and we discuss these improvements in our article about optoelectronic co-design.
Takeaways
In summary, DSPs are the heart of coherent communication systems. They not only encode/decode data into the three properties of a light signal (amplitude, phase, polarization) but also handle error correction, analog-to-digital conversion, Ethernet framing, and compensation for dispersion and nonlinear distortion. And with every passing generation, they take on more advanced functions such as probabilistic constellation shaping.
There are still many challenges ahead to improve DSPs and make them transmit even more bits in more energy-efficient ways. Now that EFFECT Photonics has incorporated talent and intellectual property from Viasat’s Coherent DSP team, we hope to contribute to this ongoing research and development and make transceivers faster and more sustainable than ever.
Tags: 100G, 5G, 6G, access, access networks, aggregation, backhaul, capacity, coherent, DWDM, fronthaul, Integrated Photonics, live events, metro, midhaul, mobile, mobile access, mobile networks, network, optical networking, optical technology, photonic integrated chip, photonic integration, Photonics, PIC, PON, programmable photonic system-on-chip, solutions, technology, VR, WDM

The Next Bright Lights of Eindhoven
Paris may be the more well-known City of Light, but we may argue that Eindhoven…
Paris may be the more well-known City of Light, but we may argue that Eindhoven has had a closer association with light and light-based technology. The earliest Dutch match manufacturers, the Philips light bulb factory, and ASML’s enormous optical lithography systems were all located in Eindhoven during the course of the city’s 150-year history. And today, Eindhoven is one of the worldwide hubs of the emerging photonics industry. The heritage of Eindhoven’s light technology is one that EFFECT Photonics is honored to continue into the future.
From Matches to the Light Bulb Factory
Eindhoven’s nickname as the Lichtstad did not originate from Philips factories but from the city’s earlier involvement in producing lucifer friction matches. In 1870, banker Christiaan Mennen and his brother-in-law Everardus Keunen set up the first large-scale match factory in the Netherlands in Eindhoven’s Bergstraat. In the following decades, the Mennen & Keunen factory acquired other match factories, and promoted the merger of the four biggest factories in the country to form the Vereenigde Nederlandsche Lucifersfabriken (VNLF). After 1892, the other three factories shut down, and all the match production was focused on Eindhoven. Over the course of the next century, the Eindhoven match factory underwent a number of ownership and name changes until ceasing operations in 1979.
Two decades after the founding of the original match factory, businessman Gerard Philips bought a small plant at the Emmasingel in Eindhoven with the financial support of his father, Frederik, a banker. After a few years, Gerard’s brother Anton joined the company and helped it expand quickly. The company succeeded in its first three decades by focusing almost exclusively on a single product: metal-filament light bulbs.
Over time, Philips began manufacturing various electro-technical products, including vacuum tubes, TVs, radios, and electric shavers. Philips was also one of the key companies that helped develop the audio cassette tape. In the 1960s, Philips joined the electronic revolution that swept the globe and proposed early iterations of the VCR cassette tape.
From Philips to ASML and Photonics Research
In 1997, Philips relocated its corporate headquarters outside of Eindhoven, leaving a significant void in the city. Philips had been the primary factor in Eindhoven’s growth, attracting many people to the city for work.
Fortunately, Philips’ top-notch research and development led to several major spinoff companies, such as NXP and ASML. While ASML is already well-known across Eindhoven and is arguably the city’s largest employer, it might just be the most important tech company the world hasn’t heard of. In order to produce the world’s electronics, ASML builds enormous optical lithography systems that are shipped to the largest semiconductor facilities on earth. The scale of these systems requires engineers from all fields—electrical, optical, mechanical, and materials—to develop them, and that has attracted top talent from all over the world to Eindhoven. Thanks to their growth, Eindhoven has developed into a major center for expats in the Netherlands.