Discover Where Light Meets Digital at ECOC2023
Join EFFECT Photonics from October 2nd-4th, 2023 at the ECOC Exhibition at the SEC, Glasgow, Scotland. ECOC is the largest optical communications exhibition in Europe and a key meeting place for decision-makers. Come and discover firsthand how our technology is transforming where light meets digital. Visit booth #547 to learn how EFFECT Photonics’ full portfolio of optical building blocks is enabling 100G coherent to the network edge and next-generation applications.
Explore Our ECOC2023 Demos:
Build Your Own 100G ZR Coherent Module
At this year’s ECOC, see how easy and affordable it can be to upgrade existing 10G links to a more scalable 100G coherent solution! Try your hand at constructing a 100G ZR coherent module specifically designed for the network edge, using various optical building blocks, including tunable lasers, DSPs, and optical subassemblies.
Tune Your Own PIC (Photonic Integrated Circuit)
Be sure to stop by stand #547 to tune your own PIC with EFFECT Photonics technology. In this interactive and dynamic demonstration, participants can explore first-hand the power of EFFECT Photonics solutions utilizing various parameters and product configurations.
Our experts are also available to discuss customer needs and how EFFECT Photonics might be able to assist. To schedule a meeting, please email marketing@effectphotonics.com.
Tags: 100 ZR, 100G, 100gcoherent, access, access networks, bringing100Gtoedge, cloud, cloudedge, coherent, coherentoptics, datacenters, DSP, DSPs, ECOC, ECOC2023, EFFECT Photonics, Integrated Photonics, networkedge, opticcommunications, Optics, photonic integration, Photonics, PIC, tunablelasers, wherelightmeetsdigital

Beaming with Potential – Why Integrated Photonics is Worth It
In today’s rapidly evolving world, traditional technologies such as microelectronics are increasingly struggling to match the rising demands of sectors such as communication, healthcare, energy, and manufacturing. These struggles can result in slower data transmission, more invasive diagnostics, or excessive energy consumption. Amidst these challenges, there is a ray of hope: photonics.
Photonics is the study and application of light generation, manipulation, and detection, often aiming to transmit, control, and sense light signals. Its goals and even the name “photonics” are born from its analogy with electronics: photonics aims to transmit, control, and sense photons (the particles of light) in similar ways to how electronics do with electrons (the particles of electricity).
Photons can travel more quickly and efficiently than electrons, especially over long distances. Photonic devices can be manufactured on a semiconductor process similar to the one used by microelectronics, so they have the potential to be manufactured in small packages at high volumes. Due to these properties, photonics can drive change across multiple industries and technologies by enabling faster and more sustainable solutions manufactured at scale.
Integrated Photonics Enables New Networks and Sensors
Two of the biggest sectors photonics can impact are communications and sensing.
Light is the fastest information carrier in the universe and can transmit this information while dissipating less heat and energy than electrical signals. Thus, photonics can dramatically increase communication networks’ speed, reach, and flexibility and cope with the ever-growing demand for more data. And it will do so at a lower energy cost, decreasing the Internet’s carbon footprint.
The webpage you are reading was originally a stream of 0s and 1s that traveled through an optical fiber to reach you. Fiber networks need optical transceivers to transmit and receive the light signal through the fiber. These transceivers were initially bulky and inefficient, but advances in integrated photonics and electronics have miniaturized them to the size of a large USB stick.
Aside from fiber communications, photonics can also deliver solutions beyond traditional radio communications. For example, optical transmission over the air or space could handle links between different mobile network sites, cars, or satellites.
There are multiple sensing application markets, but their core technology is the same. They need a small device that sends out a known pulse of light, accurately detects how the light comes back, and calculates the properties of the environment from that information. It’s a simple but quite powerful concept.
This concept is already being used to implement LIDAR systems that help self-driving cars determine the location and distance of people and objects. However, there is also potential to use this concept in medical and agrifood applications, such as looking for undesired growths in the human eye or knowing how ripe an apple is.
Integrated Photonics Drives Down Power Consumption
Photonics can make many industries more energy efficient. One of photonics’ success stories is light-emitting diodes (LEDs), manufactured at scale through semiconductor processes. LED lighting sales have experienced explosive growth in the past decade, quickly replacing traditional incandescent and fluorescent light bulbs that are less energy efficient. The International Energy Agency (IEA) estimates that residential LED sales have risen from around 5% of the market in 2013 to about 50% in 2022.
Greater integration is also vital for energy efficiency. In many electronic and photonic devices, the interconnections between different components are often sources of losses and inefficiency. A more compact, integrated device will have shorter and more energy-efficient interconnections. For example, Apple’s system-on-chip processors fully integrate all electronic processing functions on a single chip. As shown in the table below, these processors are significantly more energy efficient than the previous generations of Apple processors.
| Mac Mini Model | Idle Power (W) | Max Power (W) |
| --- | --- | --- |
| 2023, M2 | 7 | 50 |
| 2020, M1 | 7 | 39 |
| 2018, Core i7 | 20 | 122 |
| 2014, Core i5 | 6 | 85 |
| 2010, Core 2 Duo | 10 | 85 |
| 2006, Core Solo or Duo | 23 | 110 |
| 2005, PowerPC G4 | 32 | 85 |
The photonics industry can set a goal similar to Apple’s system-on-chip approach. Integrating all the optical components (lasers, detectors, modulators, etc.) on a single chip can minimize losses and make devices such as optical transceivers more efficient.
There are other ways for photonics to aid energy efficiency goals. For example, photonics enables a more decentralized system of data centers with branches in different geographical areas connected through high-speed optical fiber links to cope with the strain of data center clusters on power grids. The Dutch government has already proposed this kind of decentralization as part of its spatial strategy for data centers.
More Investment is Needed for Photonics to Scale like Electronics
Photonics can have an even greater impact on the world if it becomes as readily available and easy to use as electronics.
“We need to buy photonics from a catalog as we do with electronics, have datasheets that work consistently, be able to solder it to a board, and integrate it easily with the rest of the product design flow.”
– Tim Koene, Chief Technology Officer, EFFECT Photonics
Today, photonics is still a ways off from achieving this goal. Photonics manufacturing chains are not at a point where they can quickly produce millions of integrated photonic devices per year. While packaging, assembly, and testing are only a small part of the cost of electronic systems, they are 80% of the total module cost in photonics, as shown in the figure below.
To scale and become more affordable, the photonics manufacturing chains must become more automated and leverage existing electronic packaging, assembly, and testing methods that are already well-known and standardized. Technologies like BGA-style packaging and flip-chip bonding might be novel for photonics developers who started implementing them in the last five or ten years, but electronics embraced these technologies 20 or 30 years ago. Making these techniques more widespread will make a massive difference in photonics’ ability to scale up and become as available as electronics.
The roadmap for scaling integrated photonics and making it more accessible is clear: it must leverage existing electronics manufacturing processes and ecosystems and tap into the same economy-of-scale principles as electronics. Implementing this roadmap, however, requires more investment in photonics. While high-volume photonics manufacturing demands a higher upfront investment, the resulting production line drives down the cost per device and opens it up to a much larger market. That’s the process by which electronics revolutionized the world.
Conclusion
By harnessing the power of light, integrated photonics can offer faster and more sustainable solutions to address the evolving challenges faced by various sectors, including communication, healthcare, energy, and manufacturing. However, for photonics to truly scale and become as accessible as electronics, more investment is necessary to scale production and adapt existing electronics processes to photonics. This scaling will drive down production costs, making integrated photonics more widely available and paving the way for its impactful integration into numerous technologies across the globe.
Tags: Accessibility, Advancements, Biosensors, Challenges, Communication, Cost reduction, Economic growth, Economies of scale, EFFECT Photonics, Evolving demands, Fiber-optic networks, future, Future returns, Healthcare, innovation, Investing in photonics, Investments, LEDs, Light-based technologies, Limitations, Manufacturing, Manufacturing capabilities, Medical imaging, New solutions, Optical diagnostics, Photonic devices, Photonics, Promising industry, Renewable energy technologies, Scalable communication systems, Solar cells, Traditional technologies

Hot Topics, Cool Solutions: Thermal Management in Optical Transceivers
In a world of optical access networks, where data speeds soar and connectivity reigns supreme, the thermal management of optical transceivers is a crucial but under-discussed factor. As the demand for higher speeds grows, the heat generated by optical devices poses increasing challenges. Without proper thermal management, this excess heat can degrade performance, reduce reliability and lifespan, and increase the capital and operating expenditures of optical equipment.
By reducing footprints, co-designing optics and electronics for greater efficiency, and adhering to industry standards, operators can reduce the impact of heat-related issues.
Integration Reduces Heat Losses
The best way to manage heat is to produce less of it in the first place. Optical transceivers consist of various optical and electronic components, including lasers, photodiodes, modulators, electrical drivers and converters, and even digital signal processors. Each of these elements generates heat as a byproduct of its operation. However, advances in photonic and electronic technology have enabled greater device integration, resulting in smaller form factors and reduced power consumption.
For example, over the last decade, coherent optical systems have been miniaturized from big, expensive line cards to small pluggables the size of a large USB stick. These compact transceivers, with highly integrated optics and electronics, have shorter interconnections, fewer losses, and more elements per chip area. These features have all reduced power consumption over the last decade, as shown in the figure below.
Co-design for Energy Efficiency
Co-designing the transceiver’s optics and electronics is a great tool for achieving optimal thermal management. Co-designing the DSP chip alongside the photonic integrated circuit (PIC) can lead to a much better fit between these components. A co-design approach helps identify in greater detail the trade-offs between various parameters in the DSP and PIC and thus improve system-level performance and efficiency.
To illustrate the impact of co-designing PIC and DSP, let’s look at an example. A PIC and a standard platform-agnostic DSP typically operate with signals of differing intensities, so they need some RF analog electronic components to “talk” to each other. This signal power conversion overhead constitutes roughly 2-3 Watts or about 10-15% of transceiver power consumption.
However, the modulator of an InP PIC can run at a lower voltage than a silicon modulator. If this InP PIC and the DSP are designed and optimized together instead of using a standard DSP, the PIC could be designed to run at a voltage compatible with the DSP’s signal output. This way, the optimized DSP could drive the PIC directly without needing the RF analog driver, doing away with most of the power conversion overhead we discussed previously.
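To put rough numbers on the overhead described above, here is a short back-of-the-envelope calculation. The 20 W total budget is a hypothetical round figure chosen only for illustration; the 2.5 W driver overhead is the midpoint of the 2-3 Watt range quoted in the text.

```python
# Rough illustration of the signal power conversion overhead discussed
# above. All figures are assumptions for illustration, not measurements.

total_power_w = 20.0        # assumed total transceiver power budget
driver_overhead_w = 2.5     # RF analog driver stage (midpoint of 2-3 W)

overhead_fraction = driver_overhead_w / total_power_w
print(f"Conversion overhead: {overhead_fraction:.1%} of total power")

# If co-design lets the optimized DSP drive the InP PIC directly,
# most of this overhead disappears from the power budget:
codesigned_power_w = total_power_w - driver_overhead_w
print(f"Co-designed budget: {codesigned_power_w:.1f} W")
```

With these assumed figures, the overhead works out to 12.5%, consistent with the 10-15% range cited above.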
Follow Best Practices and Standards
Effective thermal management also means following the industry’s best practices and standards. These standards ensure optical transceivers’ interoperability, reliability, and performance. Two common ratings that will condition the thermal design of optical transceivers are commercial (C-temp) and industrial (I-temp) ratings.
Commercial temperature (C-temp) transceivers are designed to operate from 0°C to 70°C. These transceivers suit the controlled environments of data center and network provider equipment rooms. These rooms have active temperature control, cooling systems, filters for dust and other particulates, airlocks, and humidity control. On the other hand, industrial temperature (I-temp) transceivers are designed to withstand more extreme temperature ranges, typically from -40°C to 85°C. These transceivers are essential for deployments in outdoor environments or locations with harsh operating conditions. It could be at the top of an antenna, on mountain ranges, inside traffic tunnels, or in the harsh winters of Northern Europe.
| Temperature Standard | Min (°C) | Max (°C) |
| --- | --- | --- |
| Commercial (C-temp) | 0 | 70 |
| Extended (E-temp) | -20 | 85 |
| Industrial (I-temp) | -40 | 85 |
| Automotive / Full Military | -40 | 125 |
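The rating-selection logic discussed above can be sketched as a small helper that checks a site's expected temperature span against the standard ranges. This is an illustrative sketch only; the site figures in the examples are made up.

```python
# Illustrative check of a deployment site's temperature span against
# the standard transceiver temperature ratings listed above.
# The example site figures are hypothetical.

RATINGS = {
    "C-temp": (0, 70),
    "E-temp": (-20, 85),
    "I-temp": (-40, 85),
}

def suitable_ratings(site_min_c, site_max_c):
    """Return the ratings whose range fully covers the site's span."""
    return [name for name, (lo, hi) in RATINGS.items()
            if lo <= site_min_c and site_max_c <= hi]

# An outdoor antenna site with harsh winters (hypothetical figures):
print(suitable_ratings(-30, 55))   # only I-temp covers this span

# A climate-controlled equipment room (hypothetical figures):
print(suitable_ratings(10, 35))    # all three ratings qualify
```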
Operators can ensure the transceivers’ longevity and reliability by selecting the appropriate temperature rating based on the deployment environment and application. On the component manufacturer’s side, the temperature rating has a significant impact on the transceiver’s design and testing. For example, making an I-temp transceiver means that every internal component (integrated circuits, lasers, photodetectors) must also be I-temp compliant.
Takeaways
Operators can overcome heat-related challenges and ensure optimal performance by reducing heat generation through device integration, co-designing optics and electronics, and adhering to industry standards. By addressing these thermal management issues, network operators can maintain efficient and reliable connectivity and contribute to the seamless functioning of optical networks in the digital age.
Manufacturing a Coherent Transceiver
Coherent transmission has become a fundamental component of optical networks to address situations where direct detect technology cannot provide the required capacity and reach.
While direct detect transmission only uses the amplitude of the light signal, coherent optical transmission manipulates three different light properties: amplitude, phase, and polarization. These additional degrees of modulation allow for faster optical signals without compromising on transmission distance. Furthermore, coherent technology enables capacity upgrades without replacing the expensive physical fiber infrastructure in the ground.
Given the importance of coherent transmission, we will explain some key aspects of manufacturing and testing these devices in this article. In the previous article, we described critical aspects of the transceiver design process.
Into the Global Supply Chain
Almost every modern product results from global supply chains, with components and parts manufactured in facilities worldwide. A pluggable coherent transceiver is not an exception. Some transceiver developers will have fabrication facilities in-house, while others (like EFFECT Photonics) are fabless and outsource their manufacturing. We have discussed the pros and cons of these approaches in a previous article.
Writing a faithful, nuanced summary of all the manufacturing processes involved is beyond the scope of this article. Still, we will mention in very broad terms some of the key processes going on.
- Commercial Off The Shelf (COTS) Procurement: Many components in the transceiver are designed in-house and custom-ordered and manufactured, but other components are sourced off-the-shelf from various suppliers and manufacturers. This includes devices such as RF drivers, amplifiers, or even optical sub-assemblies.
- Integrated Circuit Fabrication: The electronic digital signal processor, optical engine, and laser chips are manufactured through semiconductor foundry processes. In the case of EFFECT Photonics, the laser and optical engine can be fabricated on the same chip. You can read some of our previous articles to learn more about what goes into the DSP and PIC manufacturing processes.
- Manufacturing Sub-Assemblies: Once the chips have been manufactured and tested, the manufacturing of the different transceiver sub-assemblies (chiefly the transmitter and receiver) can proceed. Again, vertically integrated transceiver developers can manufacture these in-house, but most transceiver makers outsource this, especially if they want large-scale production. This includes manufacturing the printed circuit boards (PCBs) that integrate and interconnect the electronic and optical components. Careful alignment and bonding of optical components, such as lasers and photodetectors, are critical to achieving optimal performance.
- Transceiver Housing: The transceiver subassemblies will be housed in a metal casing, usually made from an aluminum alloy. The design and manufacturing of these housings must consider power distribution and thermal management.
The collaboration in this global supply chain ensures the availability of specialized expertise and resources, leading to efficient and cost-effective production.
Testing the Transceiver
The avid reader may have noticed that we did not mention one of the most critical aspects of manufacturing in the previous section: testing. After all, what you do not test, you cannot manufacture reliably.
Testing and quality assurance must occur throughout the manufacturing process to verify the transceiver’s performance and compliance with industry standards. Semiconductor chips and PCBs must be tested before they are placed in sub-assemblies. The completed sub-assemblies must then be tested for optical and electrical performance. Once the transceiver module is completed, it must undergo several reliability and compatibility tests. Let’s discuss some of these testing processes.
- Chip Testing: Testing should happen not only on the transceiver sub-assemblies or the final package but also after chip fabrication, such as measuring after wafer processing or after cutting the wafer into smaller dies. The earlier faults are found in the testing process, the less material and energy are wasted on processing defective chips.
- Calibration and Performance Testing: This involves assessing and calibrating the key performance parameters of the transceiver: output power, extinction ratio, bit error rate, receiver sensitivity, peak wavelength and spectrum, and a few others. Various modulation formats and data rates should be tested to ensure reliable performance under different operating conditions, and performance at different temperatures is also measured (more on that later). These tests will determine whether the device complies with industry standards.
- Environmental and Reliability Testing: The transceiver should undergo environmental and reliability testing to assess its performance under different operating conditions. These tests ensure that the transceiver can withstand real-world deployment scenarios and maintain reliable operation over its intended lifespan. As shown in Table 1, this includes temperature cycling, humidity testing, vibration testing, and accelerated aging tests.
Table 1: Mechanical Reliability & Temperature Testing

| Shock & Vibration | High / Low Storage Temp | Temp Cycle |
| --- | --- | --- |
| Damp Heat | Cycle Moisture Resistance | Hot Pluggable |
| Mating Durability | Accelerated Aging | Life Expectancy Calculation |
- Compatibility Testing: The modules are inserted into switches of various third-party brands to test their interoperability. This is particularly important for a device that wants to be certified by a specific Multi-Source Agreement (MSA) group. This certification adds credibility and ensures that the transceiver can seamlessly integrate into various network environments.
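As a small illustration of the calibration and performance testing step above, the bit error rate (BER) check can be sketched as a simple pass/fail computation. The 1e-3 threshold here is an assumed example value for illustration, not a figure taken from any specific standard.

```python
# Sketch of a bit-error-rate (BER) pass/fail check like those used in
# transceiver performance testing. Threshold and counts are assumed
# example values, not standards figures or real measurements.

def bit_error_rate(errored_bits: int, total_bits: int) -> float:
    """BER = errored bits / total bits transmitted."""
    return errored_bits / total_bits

BER_THRESHOLD = 1e-3  # assumed pre-FEC limit for this illustration

measured = bit_error_rate(errored_bits=420, total_bits=1_000_000)
print(f"BER = {measured:.1e}, pass = {measured <= BER_THRESHOLD}")
```

In practice, this measurement would be repeated across modulation formats, data rates, and operating temperatures, as the text describes.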
Takeaways
Manufacturing pluggable coherent transceivers involves a global supply chain, enabling access to specialized expertise and resources for efficient production. Some critical processes in this manufacturing chain include procuring materials and off-the-shelf components, fabricating the integrated circuits, and manufacturing the sub-assemblies and the transceiver housing.
Testing and quality assurance are integral to reliable manufacturing. Rigorous testing occurs at various stages, including chip, calibration, performance, environmental, and compatibility testing with third-party brands. This ensures that the transceivers meet industry standards and perform optimally under various operating conditions.
Through meticulous manufacturing and rigorous testing processes, coherent transceivers remain at the forefront of advancing global connectivity.
Tags: Amplitude, and environmental testing, Calibration, Capacity upgrades, coherent 100G ZR, coherent transmission, COSA, datacom, Direct detect technology, DSP, DSP COSA, EIA, global, Global supply chain, GR-CORE, integration, ITLA, ITU, knowledge, laser, OIF, optical networks, Packaging, performance, phase, PIC, Pluggable coherent transceiver, polarization, power, Semiconductor foundry processes, SFF, standards, supply chain, Telcordia, Telecom, Testing and quality assurance, Transceiver

Laser Focus World Innovators Awards 2023
– EFFECT Photonics Pico Tunable Laser Assembly Honored by 2023 Laser Focus World Innovators Awards
Eindhoven, The Netherlands
EFFECT Photonics, a leading developer of highly integrated optical solutions, announced today that its Pico Tunable Laser Assembly (pTLA) was recognized among the best by the 2023 Laser Focus World Innovators Awards. An esteemed and experienced panel of judges from the optics and photonics community recognized EFFECT Photonics as a Gold honoree.
“On behalf of the Laser Focus World Innovators Awards, I would like to congratulate EFFECT Photonics on their Gold level honoree status. This competitive program allows Laser Focus World to celebrate and recognize the most innovative products impacting the photonics community this year.”
– Peter Fretty, Laser Focus World Group Publisher
The EFFECT Photonics Pico Tunable Laser Assembly (pTLA) is a continuous wave, tunable laser source specially designed for coherent applications in the optical network edge. It supports both commercial- and industrial-temperature (C-temp and I-temp) operating ranges and offers an ideal combination of power efficiency, cost-effectiveness, and flexibility to enable a seamless upgrade to a more scalable 100 Gbps pluggable coherent solution in a QSFP28 form factor.
About Laser Focus World
Published since 1965, Laser Focus World has become the most trusted global resource for engineers, researchers, scientists, and technical professionals by providing comprehensive coverage of photonics technologies, applications, and markets. Laser Focus World reports on and analyzes the latest developments and significant trends in both the technology and business of photonics worldwide — and offers greater technical depth than any other publication in the field.
About EFFECT Photonics
Where Light Meets Digital – EFFECT Photonics is a highly vertically integrated, independent optical systems company addressing the need for high-performance, affordable optic solutions driven by the ever-increasing demand for bandwidth and faster data transfer capabilities. Using our company’s field-proven digital signal processing and forward error correction technology and ultra-pure light sources, we offer compact form factors with seamless integration, cost efficiency, low power, and security of supply. By leveraging established microelectronics ecosystems, we aim to make our products affordable and available in high volumes to address the challenges in 5G and beyond, access-ready coherent solutions, and cloud and cloud edge services. For more information, please visit: www.effectphotonics.com. Follow EFFECT Photonics on LinkedIn and Twitter.
# # #
Media Contact:
Colleen Cronin
EFFECT Photonics
+1 (978) 409-3531
colleencronin@effectphotonics.com
Designing a Coherent Transceiver
Coherent transmission has become a fundamental component of optical networks to address situations where direct detect technology cannot provide the required capacity and reach.
While direct detect transmission only uses the amplitude of the light signal, coherent optical transmission manipulates three different light properties: amplitude, phase, and polarization. These additional degrees of modulation allow for faster optical signals without compromising on transmission distance. Furthermore, coherent technology enables capacity upgrades without replacing the expensive physical fiber infrastructure in the ground.
Given its importance, this article explains some key aspects of the transceiver design process.
Key Components of An Optical Transceiver
An optical transceiver has both electrical and optical subsystems, each with its specific components. Each component plays a crucial role in enabling high-speed, long-haul data transmission. Here are the primary components of a coherent optical transceiver:
- Laser Source: The laser source generates the coherent light used for transmission. It determines many of the characteristics of the transmitted optical signal, from its wavelength and power to its noise tolerance. Read our previous article to learn more about what goes into laser design.
- Optical engine: The optical engine includes the modulator and receiver components as well as passive optical components required to guide, combine or split optical signals. The modulator encodes data onto the optical signal and the receiver converts the optical signal into an electrical signal. Depending on the material used (indium phosphide or silicon), the optical engine chip might also include the tunable laser.
- Digital/Analog Converters (DAC/ADC): The Digital-to-Analog Converter (DAC) turns digital signals from the digital processing circuitry into analog signals for modulation. The Analog-to-Digital Converter (ADC) does the reverse process.
- Digital Signal Processor (DSP): The DSP performs key signal processing functions, including dispersion compensation, polarization demultiplexing, carrier phase recovery, equalization, and error correction. Read this article to learn more about what goes inside a DSP.
- Forward Error Correction (FEC) Encoder/Decoder: FEC is crucial for enhancing the reliability of data transmission by adding redundant bits that allow a receiver to check for errors without asking for retransmission of the data.
- Control and Monitoring Interfaces: Transceivers feature control and monitoring interfaces for managing and optimizing their operation.
- Power Management and Cooling Systems: These include heatsinks and thermoelectric coolers required to maintain the components within their specified temperature ranges and ensure reliable transceiver operation.
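The FEC principle described in the list above (redundant bits let the receiver correct errors without retransmission) can be illustrated with a toy 3x repetition code. This is a minimal sketch of the concept only; real coherent transceivers use far stronger codes.

```python
# Toy forward error correction: a 3x repetition code. Each bit is sent
# three times, and the receiver takes a majority vote per group. The
# redundancy lets the receiver correct a single flipped bit per group
# without asking for retransmission. Real FEC codes are far stronger.

def fec_encode(bits):
    """Repeat every bit three times."""
    return [b for b in bits for _ in range(3)]

def fec_decode(coded):
    """Majority vote over each group of three received bits."""
    return [int(sum(coded[i:i + 3]) >= 2)
            for i in range(0, len(coded), 3)]

data = [1, 0, 1, 1]
sent = fec_encode(data)
sent[4] ^= 1                       # the channel flips one bit in transit
assert fec_decode(sent) == data    # the receiver still recovers the data
```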
Key Design Processes in a Transceiver
Designing a transceiver is a process that should take an initial application concept into a functioning transceiver device that can be manufactured. It’s a complex, layered process that involves designing many individual components and subsystems separately but also predicting and simulating how all these components and subsystems will interact with each other.
Writing a faithful, nuanced summary of this design process is beyond the scope of this article, but we will mention in very broad terms some of the key processes going on.
- Defining Concept and Specifications: We must first define what goes into the transceiver and its expected performance. Transceiver architects spend time with product management to understand the customer’s requirements and their impact on design choices. Some of these requirements and designs are already standardized; others (like EFFECT Photonics’ optical engine) are proprietary and require deeper in-house thinking. After these conversations, the transceiver concept becomes a concrete set of specifications that is passed on to the different teams (some in-house, others from company partners).
- Optical Subsystem Design: The optical subsystem in the transceiver generates, manipulates, and receives the light signal. Optical designers develop a schematic circuit diagram that captures the function of the optical subsystem, which includes lasers, modulators, or light detectors. The designers will simulate the optical system to make sure it works, and then translate this functional design into an actual optical chip layout that can be manufactured at a semiconductor foundry.
- Electronic Subsystem Design: In parallel with the optical subsystem, the electronic subsystem is also being designed. The heart of the electronic subsystem is the DSP chip. The DSP design team also comes up with a functional model of the DSP and must simulate it and translate it into a layout that can be manufactured by a semiconductor foundry. However, there’s a lot more to the electronic system than just the DSP: there are analog-to-digital and digital-to-analog converters, amplifiers, drivers, and other electronic components required for signal conditioning. All of these components can be acquired from another vendor or designed in-house depending on the requirements and needs.
- Mechanical and Thermal Design: The mechanical and thermal design of the pluggable transceiver is essential to ensure its compatibility with industry-standard form factors and enable reliable operation. Mechanical considerations include connector types, physical dimensions, and mounting mechanisms. The thermal design focuses on heat dissipation and ensures the transceiver operates within acceptable temperature limits.
The Importance of a Co-Design Philosophy
Transceiver developers often source their DSP, laser, and optical engine from different suppliers, so all these chips are designed separately. This setup reduces the time to market and simplifies the research and design processes but comes with trade-offs in performance and power consumption.
A co-design approach that features strong interactions between the different teams that design these systems can lead to a much better fit and efficiency gains. You can learn more about the potential advantages of co-designing optical and electronic subsystems in this article.
Takeaways
In summary, designing a coherent optical pluggable transceiver involves carefully considering and balancing many different systems, standards, and requirements, from optical and electrical subsystems to mechanical and thermal design and component procurement. These design processes ensure the development of a reliable, high-performance optical transceiver that meets industry standards and fulfills the specific requirements of the target application.
Tags: coherent 100G ZR, Coherent technology, coherent transmission, COSA, datacom, Digital Signal Processor (DSP), Direct detect technology, DSP, DSP COSA, EIA, Forward Error Correction (FEC) Encoder/Decoder, global, GR-CORE, integration, ITLA, ITU, knowledge, laser, Laser source, OIF, optical engine, optical networks, Optical transceiver, Optical transmission, Packaging, PIC, power, SFF, standards, supply chain, Telcordia, Telecom, Transceiver
Towards a Zero Touch Coherent Network
Telecommunication service providers face a critical challenge: how to incorporate affordable and compact coherent pluggables into their networks while ensuring optimal performance and coverage across most network links.
Automation will be pivotal in achieving affordable and sustainable networks. Software-defined networks (SDNs) facilitate network function virtualization (NFV), empowering operators to implement various functions for network management and orchestration. By incorporating an artificial intelligence (AI) layer for management and orchestration into the SDN/NFV framework, operators can unlock even greater benefits, as depicted in the diagram below.
Nevertheless, achieving a fully automated network requires interfacing with the physical layer of the network. This requires intelligent, coherent pluggables capable of adapting to diverse network requirements.
Zero Touch Networks and the Physical Layer
Telecom and datacom providers aiming to achieve market leadership must scale their operations while efficiently and dynamically allocating existing network resources. SDNs offer a pathway to accomplish this by decoupling switching hardware from software, thereby enabling the virtualization of network functions through a centralized controller unit. This centralized management and orchestration (MANO) layer can implement network functions that switches alone cannot handle, enabling intelligent and dynamic allocation of network resources. This enhanced flexibility and optimization yield improved network outcomes for operators.
However, the forthcoming 5G networks will introduce a multitude of devices, software applications, and technologies. Managing these new devices and use cases necessitates self-managed, touchless automated networks. Realizing the full potential of network automation requires the flow of sensor and control data across all OSI model layers, including the physical layer.
As networks grow larger and more complex, MANO software necessitates greater degrees of freedom and adjustability. Next-generation MANO software must optimize both the physical and network layers to achieve the best network fit. Attaining this objective demands intelligent optical equipment and components that can be diagnosed and managed remotely from the MANO layer. This is where smart pluggable transceivers with reconfigurable DSPs come into play.
The Role of Forward Error Correction
Forward error correction (FEC) implemented by DSPs serves as a crucial component in coherent communication systems. FEC enhances the tolerance of coherent links to noise, enabling longer reach and higher capacity. Thanks to FEC, coherent links can handle bit error rates that are a million times higher than those of typical direct detect links. In simpler terms, FEC algorithms allow the DSP to enhance link performance without necessitating hardware changes. This enhancement can be compared to image processing algorithms improving the quality of images produced by phone cameras.
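As a rough illustration of the principle (a toy repetition code, not the actual CFEC or oFEC algorithms), the sketch below shows how an encoder/decoder pair lets a link tolerate a much higher raw bit error rate purely in the digital domain:

```python
import random

def encode(bits, n=3):
    """Repeat each bit n times (a toy repetition code, not a real FEC)."""
    return [b for bit in bits for b in [bit] * n]

def decode(coded, n=3):
    """Majority-vote each group of n received bits."""
    return [int(sum(coded[i:i + n]) > n // 2) for i in range(0, len(coded), n)]

def channel(bits, ber, rng):
    """Flip each bit with probability `ber` (a crude noisy-channel model)."""
    return [b ^ (rng.random() < ber) for b in bits]

rng = random.Random(42)
data = [rng.randint(0, 1) for _ in range(10_000)]

# The raw channel at a pre-FEC bit error rate of 2e-2...
raw_errors = sum(a != b for a, b in zip(data, channel(data, 0.02, rng)))

# ...versus the same channel behind the repetition code.
decoded = decode(channel(encode(data), 0.02, rng))
fec_errors = sum(a != b for a, b in zip(data, decoded))

print(raw_errors, fec_errors)  # the decoded stream has far fewer errors
```

Real coherent FECs achieve vastly larger coding gains than this three-bit repetition scheme, but the mechanism is the same: redundancy added by the DSP converts a noisy channel into a reliable one without any hardware changes.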
When coherent transmission technology emerged, all FEC algorithms were proprietary, guarded closely by equipment and component manufacturers due to their competitive advantage. Consequently, coherent transceivers from different vendors were incompatible, and network deployment required reliance on a single vendor.
However, as data center providers pushed for deeper disaggregation in communication networks, the need for interoperability in coherent transceivers became evident, leading to the standardization of FEC algorithms. The OIF 400ZR standard for data center interconnects adopted a public algorithm called concatenated FEC (CFEC). In contrast, some 400ZR+ MSA standards employ open FEC (oFEC), which provides greater reach at the expense of additional bandwidth and energy consumption. For the longest link lengths (500+ kilometers), proprietary FECs are still necessary for 400G transmission. Nevertheless, public FEC standards have achieved interoperability for a significant portion of the 400G transceiver market.
The Promise of the Smart Transceiver
The realization of a smart coherent pluggable capable of addressing various applications—data centers, carrier networks, SDNs—relies on an equally intelligent and adaptable DSP. The DSP must be reconfigurable through software to adapt to diverse network conditions and use cases.
For instance, a smart DSP could switch between different FEC algorithms to match network performance and use case requirements. Consider the scenario of upgrading a 650-kilometer-long metro link running at 100 Gbps with open FEC to achieve a capacity of 400 Gbps. Open FEC might struggle to deliver the required link performance. However, if the DSP can be reconfigured to employ a proprietary FEC standard, the transceiver would be capable of handling this upgraded link.
 | 400ZR | Open ZR+ | Proprietary Long Haul
---|---|---|---
Target Application | Edge data center interconnect | Metro, Regional data center interconnect | Long-Haul Carrier
Target Reach @ 400G | 120 km | 500 km | 1,000 km
Form Factor | QSFP-DD/OSFP | QSFP-DD/OSFP | QSFP-DD/OSFP
FEC | CFEC | oFEC | Proprietary
Standards / MSA | OIF | OpenZR+ MSA | Proprietary
Reconfigurable DSPs also prove beneficial in auto-configuring links to address specific network conditions, particularly in brownfield links. For example, if the link possesses high-quality fiber, the DSP could be reconfigured to transmit at a higher baud rate. Conversely, if the fiber quality is poor, the DSP could scale down the baud rate to mitigate bit errors. Furthermore, if the smart pluggable detects a relatively short fiber length, it could reduce laser transmitter power or DSP power consumption to conserve energy.
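The reconfiguration logic described above can be sketched in a few lines. The mode names, reach thresholds, and power rules are illustrative assumptions, not an actual DSP API:

```python
from dataclasses import dataclass

# Illustrative 400G reach limits, loosely following the 400ZR / OpenZR+ /
# proprietary split discussed in this article. Thresholds are assumptions.
FEC_MODES = [
    ("CFEC", 120),          # edge data center interconnect
    ("oFEC", 500),          # metro / regional
    ("proprietary", 1000),  # long-haul
]

@dataclass
class LinkProfile:
    length_km: float
    fiber_quality: str  # "good" or "poor" — a stand-in for measured link quality

def configure_dsp(link: LinkProfile) -> dict:
    """Pick the least demanding FEC that still covers the link, and scale
    baud rate and transmit power with the link conditions."""
    fec = next((name for name, reach in FEC_MODES if link.length_km <= reach), None)
    if fec is None:
        raise ValueError("link exceeds all supported reaches")
    return {
        "fec": fec,
        "baud_rate": "high" if link.fiber_quality == "good" else "reduced",
        # short links can shave transmit power to save energy
        "tx_power": "low" if link.length_km < 40 else "nominal",
    }

print(configure_dsp(LinkProfile(650, "good")))  # falls through to proprietary FEC
print(configure_dsp(LinkProfile(80, "poor")))   # CFEC with a reduced baud rate
```

This mirrors the 650 km upgrade scenario above: a DSP locked to oFEC would fail, while a reconfigurable one simply selects the proprietary mode.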
Takeaways
Smart coherent access pluggables would greatly simplify network upgrades. The DSP to match this pluggable should be able to use different error correction levels to handle different reach and future zero-touch network requirements.
The DSP can not only introduce software corrections but also effectuate optical hardware adjustments (output power, amplifier control) to adapt to different noise scenarios. Through these adaptations, the next generation of pluggable transceivers will proficiently handle the telecom carrier and data center use cases presented to them.
Tags: 100G ZR, adaptable, AI, artificial intelligence, automation, Coherent pluggables, cost, data layer, DSPs, FEC, forward error correction, network layer, Network management and orchestration, network requirements, OPEX, physical layer, programmability, Proprietary, reach, reconfigurable, remote, SDNs, Service Providers, smart, Smart pluggable transceivers, software defined, Software-Defined Networks, standardized, standards, telecommunication, versatile, virtualization
Why So Many Coherent DSPs? – LightWave
Where is 100ZR Needed?
Simply relying on traditional direct detect technologies will not meet the growing bandwidth and service requirements of mobile, cable, and business access networks, particularly regarding long-distance transmission. In many instances, deploying 100G coherent dense wavelength division multiplexing (DWDM) technology becomes essential to transmit larger volumes of data over extended distances.
Several applications in the optical network edge could benefit from upgrading from 10G DWDM or 100G grey aggregation uplinks to 100G DWDM optics:
- Mobile Mid-haul: Seamless upgrade of existing uplinks from 10G to 100G DWDM.
- Mobile Backhaul: Upgrading links to 100G IPoDWDM.
- Cable Access: Upgrading uplinks of termination devices like optical line terminals (OLTs) and Converged Cable Access Platforms (CCAPs) from 10G to 100G DWDM.
- Business Services: Scaling enterprise bandwidth beyond single-channel 100G grey links.
However, network providers have often been reluctant to abandon their 10G DWDM or 100G grey links because existing 100G DWDM solutions did not fulfill all the requirements. Although “scaled-down” coherent 400ZR solutions offered the desired reach and tunability, they proved too expensive and power-intensive for many access network applications. Moreover, the ports in small to medium IP routers used in most edge deployments do not support the commonly used QSFP-DD form factor of 400ZR modules but rather the QSFP28 form factor.
How Coherent 100ZR Can Move into Mobile X-haul
The transition from 4G to 5G has transformed the radio access network (RAN) structure, evolving it from a two-level system (backhaul and fronthaul) in 4G to a three-level system (backhaul, midhaul, and fronthaul) in 5G:
- Fronthaul: The segment between the active antenna unit (AAU) and the distributed unit (DU).
- Midhaul: The segment from DU to the centralized unit (CU).
- Backhaul: The segment from CU to the core network.
Most developed countries have already initiated the rollout of 5G, with many operators upgrading their 1G SFP transceivers to 10G SFP+ devices. Some of these 10G solutions incorporated DWDM technology, but many were single-channel grey transceivers. However, to advance to the next phase of 5G deployments, mobile networks must install and aggregate a greater number of smaller base stations to accommodate the exponential increase in connected devices.
These advanced stages of 5G deployment will necessitate operators to cost-effectively scale fiber capacity using more prevalent 10G DWDM SFP+ solutions and 25G SFP28 transceivers. This upgrade will pressure the aggregation segments of mobile backhaul and midhaul, which typically rely on link aggregation of multiple 10G DWDM links into a higher bandwidth group (e.g., 4x10G).
However, this type of link aggregation involves splitting larger traffic streams and can be intricate to integrate within an access ring. Adopting a single 100G uplink diminishes the need for such link aggregation, simplifying network configuration and operations. To gain further insight into the potential market and reach of this link aggregation upgrade, it is recommended to consult the recent Cignal AI report on 100ZR technologies.
Coherent 100ZR Uplinks Driven by Cable Migration to 10G PON
Cignal AI’s 100ZR report also states that the primary catalyst for 100ZR adoption will be the multiplexing of fixed access network links transitioning from 1G to 10G. This trend will be evident in the long-awaited shift of cable networks from Gigabit Passive Optical Networks (GPON) to 10G PON, driven by the new DOCSIS 4.0 standard. This standard promises 10Gbps download speeds for customers and necessitates several hardware upgrades in cable networks.
To multiplex these larger 10Gbps customer links, cable providers and network operators must upgrade their optical line terminals (OLTs) and Converged Cable Access Platforms (CCAPs) with 100G DWDM uplinks. Additionally, many of these new optical hubs will support up to 40 or 80 optical distribution networks (ODNs), making the previous approach of aggregating multiple 10G DWDM uplinks insufficient for handling the increased capacity and higher number of channels.
Anticipating these needs, the non-profit R&D organization CableLabs has recently spearheaded the development of a 100G Coherent PON (C-PON) standard. This proposal offers 100 Gbps per wavelength with a maximum reach of 80 km and a split ratio of up to 1:512. CableLabs envisions that C-PON, with its 100G capabilities, will play a significant role not only in cable optical network aggregation but also in other scenarios such as mobile x-haul, fiber-to-the-building (FTTB), long-reach rural areas, and distributed access networks.
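A quick back-of-the-envelope calculation, using the C-PON figures quoted above, shows the bandwidth available per subscriber at different split ratios:

```python
# Bandwidth per subscriber on a single 100G C-PON wavelength at different
# split ratios (the standard proposal allows up to 1:512).
wavelength_gbps = 100
for split in (32, 128, 512):
    per_sub_mbps = wavelength_gbps * 1000 / split
    print(f"1:{split} split -> {per_sub_mbps:.0f} Mbps per subscriber")
```

Even at the maximum 1:512 split, each subscriber still sees roughly 195 Mbps of shared capacity, which is why a single 100G wavelength can aggregate so many 10G PON hubs.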
Advancements in Business Services with 100ZR Coherent and QSFP28
Nearly every organization utilizes the cloud in some capacity, whether for resource development and testing or software-as-a-service applications. However, leveraging the cloud effectively requires fast, high-bandwidth wide-area connectivity to ensure optimal performance of cloud-based applications.
Like cable networks, enterprises will need to upgrade their existing 1G Ethernet private lines to 10G Ethernet to meet these requirements, consequently driving the demand for 100G coherent uplinks. Cable providers and operators will also seek to capitalize on their upgraded 10G PON networks by expanding the reach and capacity of their business services.
The business and enterprise services sector was an early adopter of 100G coherent uplinks, deploying “scaled-down” 400ZR transceivers in the QSFP-DD form factor when they were the only available solution. However, since QSFP-DD slots also support QSFP28 form factors, the emergence of QSFP28 100ZR solutions presents a more appealing upgrade for these enterprise applications, offering reduced cost and power consumption.
While QSFP28 solutions had struggled to gain widespread acceptance due to the requirement for new, low-power digital signal processors (DSPs), DSP developers and vendors are now actively involved in 100ZR development projects: Acacia, Coherent/ADVA, Marvell/InnoLight, and Marvell/OE Solutions. This is also why EFFECT Photonics has announced its plans to co-develop a 100G DSP with Credo Semiconductor that best fits 100ZR solutions in the QSFP28 form factor.
Takeaways
In the coming years, deploying and applying 100G coherent uplinks will witness increasing prevalence across the network edge. Specific use cases in mobile access networks will require transitioning from existing 10G DWDM link aggregation to a single coherent 100G DWDM uplink.
Simultaneously, the migration of cable networks and business services from 1Gbps to 10Gbps customer links will be the primary driver for the demand for coherent 100G uplinks. For carriers providing converged cable/mobile access, these uplink upgrades will create opportunities to integrate additional business services and mobile traffic into their existing cable networks.
As the ecosystem for QSFP28 100ZR solutions expands, production will scale up, making these solutions more widely accessible and affordable. This, in turn, will unlock new use cases within access networks.
Tags: 100G, 100G ZR, 100ZR, 10G, 10G PON, 5G, 5G deployment, aggregation, backhaul, bandwidth, business service, Business services, Cable access, cable networks, CCAP, Cloud connectivity, coherent, Coherent DWDM, Converged Cable Access Platforms (CCAPs), edge, Ethernet private lines, Fiber capacity, fronthaul, FTTH, IoT, Link aggregation, midhaul, mobile, mobile access network, mobile networks, Network providers, OLT, Optical line terminals (OLTs), PON, power, QSFP-DD, QSFP28, revenue, traffic, upgrade, uplink, Wide-area connectivity
The Evolution to 800G and Beyond
Article first published 26th October 2022, updated 5th July 2023.
The demand for data and other digital services is rising exponentially. From 2010 to 2020, the number of Internet users worldwide doubled, and global internet traffic increased 12-fold. From 2020 to 2026, internet traffic will likely increase 5-fold. To meet this demand, datacom and telecom operators need to constantly upgrade their transport networks.
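Those growth figures imply steep compound annual growth rates; a short sketch makes the arithmetic explicit:

```python
# Compound annual growth rate implied by an N-fold increase over `years`:
# ratio = (1 + cagr) ** years, so cagr = ratio ** (1 / years) - 1.
def cagr(ratio: float, years: int) -> float:
    return ratio ** (1 / years) - 1

print(f"12x over 2010-2020: {cagr(12, 10):.1%} per year")  # roughly 28%
print(f"5x over 2020-2026:  {cagr(5, 6):.1%} per year")    # roughly 31%
```

In other words, traffic growth is not slowing: the forecast period implies an even faster annual rate than the previous decade.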
400 Gbps links are becoming the standard for links all across telecom transport networks and data center interconnects, but providers are already thinking about the next steps. LightCounting forecasts significant growth in shipments of dense-wavelength division multiplexing (DWDM) ports with data rates of 600G, 800G, and beyond in the next five years.
The major obstacles in this roadmap remain the power consumption, thermal management, and affordability of transceivers. Over the last two decades, power ratings for pluggable modules have increased as we moved from direct detection to more power-hungry coherent transmission: from 2 W for SFP modules to 3.5 W for QSFP modules, and now to 14 W for QSFP-DD and 21.1 W for OSFP form factors. Rockley Photonics researchers estimate that a future electronic switch filled with 800G modules would draw around 1 kW of power just for the optical modules.
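The ~1 kW estimate is easy to sanity-check with faceplate arithmetic; the 48-port switch below is an assumption for illustration, using the module wattages cited above:

```python
# Faceplate power if every port carries a coherent pluggable, using the
# per-module wattages quoted in the text. The 48-port count is an assumption.
module_power_w = {"SFP": 2.0, "QSFP": 3.5, "QSFP-DD": 14.0, "OSFP": 21.1}

ports = 48
for form_factor in ("QSFP-DD", "OSFP"):
    total_w = ports * module_power_w[form_factor]
    print(f"{ports} x {form_factor}: {total_w:.0f} W just for the optics")
```

A fully populated OSFP faceplate already lands at roughly 1 kW before the switch ASIC itself is counted, which is the pressure behind the integration efforts discussed next.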
Thus, many incentives exist to continue improving the performance and power consumption of pluggable optical transceivers. By embracing increased photonic integration, co-designed PICs and DSPs, and multi-laser arrays, pluggables will be better able to scale in data rates while remaining affordable and at low power.
Direct Detect or Coherent for 800G and Beyond?
While coherent technology has become the dominant one in metro distances (80 km upwards), the campus (< 10 km) and intra-data center (< 2 km) distances remain in contention between direct detect technologies such as PAM 4 and coherent.
These links were originally the domain of direct detect products when the data rates were 100Gbps. However, as we move into Terabit speeds, the power consumption of coherent technology is much closer to that of direct detect PAM-4 solutions.
A major reason for this decreased gap is that direct detect technology will often require additional amplifiers and compensators at these data rates, while coherent pluggables do not. This also makes coherent technology simpler to deploy and maintain. Furthermore, as the volume production of coherent transceivers increases, their price will also become competitive with direct detect solutions.
Increased Integration and Co-Design are Key to Reduce Power Consumption
Lately, we have seen many efforts across the electronics industry to further increase integration at the component level. For example, moving towards greater integration of components in a single chip has yielded significant efficiency benefits in electronic processors. Apple’s M1 processor integrates all electronic functions in a single system-on-chip (SoC) and consumes a third of the power compared to the discrete-component processors used in previous generations of their computers. We can observe this progress in the table below.
Mac Mini Model | Idle Power (W) | Max Power (W)
---|---|---
2023, M2 | 7 | 50
2020, M1 | 7 | 39
2018, Core i7 | 20 | 122
2014, Core i5 | 6 | 85
2010, Core 2 Duo | 10 | 85
2006, Core Solo or Duo | 23 | 110
2005, PowerPC G4 | 32 | 85
Photonics can achieve greater efficiency gains by following a similar approach to integration. The interconnects required to couple discrete optical components result in electrical and optical losses that must be compensated with higher transmitter power and more energy consumption. In contrast, the more active and passive optical components (lasers, modulators, detectors, etc.) manufacturers can integrate on a single chip, the more energy they can save since they avoid coupling losses between discrete components.
Reducing Complexity with Multi-Laser Arrays
Earlier this year, Intel Labs demonstrated an eight-wavelength laser array fully integrated on a silicon wafer. These milestones will provide more cost-effective ways for pluggables to scale to higher data rates.
Let’s say we need a data center interconnect with 1.6 Terabits/s of capacity. There are three ways we could implement it:
- Four modules of 400G: This solution uses existing off-the-shelf modules but has the largest footprint. It requires four slots in the router faceplate and an external multiplexer to merge these into a single 1.6T channel.
- One module of 1.6T: This solution will not require the external multiplexer and occupies just one plug slot on the router faceplate. However, making a single-channel 1.6T device has the highest complexity and cost.
- One module with four internal channels of 400G: A module with an array of four lasers (and thus four different 400G channels) will only require one plug slot on the faceplate while avoiding the complexity and cost of the single-channel 1.6T approach.
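The trade-offs in the list above can be tallied in a few lines; the slot counts and multiplexer requirements simply restate the three options:

```python
# The three ways to build a 1.6 Tb/s data center interconnect described above.
# (name, faceplate slots, channels, Gbps per channel, needs external mux)
options = [
    ("4 x 400G modules",         4, 4, 400,  True),
    ("1 x 1.6T single channel",  1, 1, 1600, False),
    ("1 x multi-laser 4 x 400G", 1, 4, 400,  False),
]

for name, slots, channels, gbps, mux in options:
    total_gbps = channels * gbps
    print(f"{name}: {total_gbps} Gbps, {slots} slot(s), external mux needed: {mux}")
```

All three reach the same 1.6 Tb/s, but only the multi-laser module combines a single faceplate slot with the lower per-channel complexity of 400G electronics.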
Multi-laser array and multi-channel solutions will become increasingly necessary to increase link capacity in coherent systems. They will not need more slots in the router faceplate while simultaneously avoiding the higher cost and complexity of increasing the speed with just a single channel.
Takeaways
The pace of worldwide data demand is relentless, and with it grows the pace of link upgrades required by datacom and telecom networks. 400G transceivers are currently replacing previous 100G solutions, and in a few years, they will be replaced by transceivers with data rates of 800G or 1.6 Terabits.
The cost and power consumption of coherent technology remain barriers to more widespread capacity upgrades, but the industry is finding ways to overcome them. Tighter photonic integration can minimize the losses of optical systems and their power consumption. Finally, the onset of multi-laser arrays can avoid the higher cost and complexity of increasing capacity with just a single transceiver channel.
Tags: bandwidth, co-designing, coherent, DSP, full integration, integration, interface, line cards, optical engine, power consumption, RF Interconnections, Viasat
Coherent Lite and The Future Inside the Data Center
In the dynamic landscape of data centers, the demand for greater bandwidth and extended reach is rapidly increasing. As shown in the figure below, we can think about three categories of data center interconnects based on their reach:
- Intra-data center interconnects (< 2km)
- Campus data center interconnects (<10km)
- Metro data center interconnects (<100km)
Coherent optical technology has already established itself as the go-to solution for interconnecting data centers over long distances in metro areas.
However, within the confines of data centers themselves, intensity-modulated direct detect (IM-DD) technology remains dominant. Recognizing the limitations of IM-DD in meeting evolving requirements, the industry is exploring “Coherent Lite” solutions—a simplified implementation of coherent technology designed specifically for shorter-reach data center connections.
This article delves into the concept of coherent lite technology and its potential to address the escalating bandwidth demands within data centers.
Reducing Dispersion Compensation
The quality of the light signal degrades as it travels through an optical fiber due to a process called dispersion, the same phenomenon by which a prism splits white light into several colors. The fiber also adds other distortions due to nonlinear optical effects. These effects get worse as the input power of the light signal increases, leading to a trade-off: more power lets you transmit over longer distances, but it also makes the nonlinear distortions larger, defeating the point of the added power. The DSP performs several operations on the light signal to offset these dispersion and nonlinear distortions.
However, shorter-reach connections require less dispersion compensation, presenting an opportunity to streamline the implementation of coherent solutions. Coherent lite implementations can reduce the use of dispersion compensation blocks. This significantly lowers system power consumption.
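The reach dependence is easy to quantify: accumulated chromatic dispersion grows linearly with fiber length, so the compensation burden on a 2 km intra-data-center link is a small fraction of that on an 80 km metro link. A minimal sketch, assuming a typical dispersion coefficient of 17 ps/nm/km for standard single-mode fiber:

```python
# Accumulated chromatic dispersion scales linearly with fiber length.
# D = 17 ps/nm/km is a typical value for standard single-mode fiber; the
# exact coefficient depends on the fiber type and operating wavelength.
D_PS_NM_KM = 17.0

for length_km in (2, 10, 80):
    accumulated_ps_nm = D_PS_NM_KM * length_km
    print(f"{length_km:>3} km link: {accumulated_ps_nm:.0f} ps/nm to compensate")
```

A 2 km link accumulates only 34 ps/nm versus 1,360 ps/nm at 80 km, which is why a coherent lite DSP can shrink or skip its dispersion compensation blocks.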
The Trade-Offs Between Fixed and Tunable Lasers
Coherent lite solutions also aim to replace tunable lasers with fixed lasers to reduce costs. The use of fixed lasers eliminates the need for wavelength tuning and associated control circuitry and algorithms, simplifying the implementation and reducing operational complexities.
While fixed lasers offer significant advantages, tunable lasers will push to remain competitive. As we described in a previous article, advances in tunable laser technology aim to further reduce package footprints and leverage electronic ecosystems to reduce cost. Such developments will allow tunable lasers to keep pace with the demands of coherent lite solutions, ensuring a viable alternative for shorter-reach data center connections.
Scaling Data Center Links
The growing length and bandwidth of links within data centers increasingly call for the use of coherent technology. As bandwidth scales to 1.6 and 3.2 terabits, traditional direct detect technology faces challenges in keeping up with the growing distances. Intra data center links that were previously limited to 2 kilometers are now extending to 5 or even 10 kilometers, demanding more robust and efficient transmission technologies.
In this context, coherent lite technology provides an attractive middle ground for enabling extended-reach connections within data centers. By leveraging some aspects of coherent solutions, coherent lite technologies facilitate the reliable and efficient transport of data over longer distances.
Takeaways
As data centers evolve to accommodate escalating bandwidth demands, coherent lite technology emerges as a promising solution for communication links within these facilities. By reducing dispersion compensation, simplifying their laser setups, and enabling extended-reach transmission, coherent lite solutions address the limitations of traditional direct detect technology. These advancements pave the way for enhanced performance, and seamless scalability within data center environments.
Tags: 1.6T, 3.2T, bandwidth, coherent, Coherent Lite, Coherent Transmissions, Complexity, CWDM, Data center, datacom, dispersion compensation, Fixed Lasers, IM-DD (Intensity-Modulated Direct Detect), Interconnects, Intra DC, optical technology, power, reach, Scaling Data Center Links, Simplification, Single Wavelength
Discover Where Light Meets Digital at COMNEXT
Join EFFECT Photonics from June 28th to June 30th, 2023 at COMNEXT, held at Tokyo Big Sight, Tokyo, Japan.
COMNEXT is an international exhibition where the next generation of communication technologies, including 6G, will come together. Visit the Sun Instruments stand #11-2 and discover firsthand how our technology is transforming where light meets digital.
Experience the future of optical communications through our highly integrated systems that combine DSP technologies and ultra-pure light sources. Our portfolio includes DWDM Transceivers offering exceptional capabilities in Direct Detect and Coherent technologies.
Unlock the potential of 5G & Beyond, Access-Ready Coherent, Cloud & Cloud Edge technologies and witness the extraordinary possibilities our technology presents to the world.
Register today
Japan Innovation Mission
Last week, Tim Koene (CTO) and Sophie De Maesschalck (CFO) of EFFECT Photonics, traveled to Japan on a semiconductor innovation mission with several other top Dutch businesses. The mission was jointly organized by the Netherlands Enterprise Agency (RVO), the Innovation Attaché Tokyo and the Dutch Embassy.
As the world’s third-largest economy, Japan has a long and established history in the semiconductor field. The purpose of the mission was to offer an opportunity for exploring and finding potential partners for joint research, development, and commercialization of innovation in this space, with a strong focus on integrated photonics. In addition, the aim was to further build on the strong relationship and develop bilateral agreements and programs between the two governments.
During the innovation mission, the two countries signed a Memorandum of Cooperation on semiconductor policies where both governments will work to facilitate both private and public sector collaboration on semiconductor and related technologies such as photonics.
Tags: EFFECT Photonics, japan, Semiconductors
The high-quality interactions, high turnout at every event during the week, and the media coverage show the importance Japan is placing on the partnership with the Netherlands in the field of Semiconductors. The personal involvement of Minister Nishimura doubly underlines this. It is clear that Integrated Photonics is a key pillar in the broader Semiconductor policy. The support of the Ministry of Economic Affairs and Climate to organize this Innovation Mission is greatly appreciated. We have done more in one week than we could have done in a dozen visits.
Tim Koene, CTO at EFFECT Photonics
Coherent Satellite Networks
The current state of the space industry is characterized by rapid growth, technological advancements, and increasing commercialization. Over the past decade, the space industry has undergone a significant transformation driven by both government and private sector initiatives.
One notable trend is the rise of commercial space companies. Companies like SpaceX, Blue Origin, and Virgin Galactic have made major strides in developing reusable rocket technology, drastically reducing the cost of accessing space. The miniaturization of satellites has also led to an increase in the number of satellites launched. This progress has boosted space applications such as Earth observation, global internet connectivity, and remote sensing.
On the technical side, the main issues in satellite communications include signal latency, limited bandwidth, and vulnerability to weather conditions. Signal latency refers to the delay in transmitting signals over long distances, which can impact real-time applications. Limited bandwidth can result in slower data transfer rates and congestion. Weather conditions like heavy rainfall or storms can cause signal degradation or interruptions.
This article will discuss how satellite networks and coherent optical communications can help address some of these issues.
A New Age of LEO Satellite Constellations
The most important distinction between satellite types is orbital altitude, the distance from Earth’s surface at which the satellite orbits the planet. There are three main categories:
- Low Earth Orbit (LEO). Altitude 500 to 1,200km. LEO is densely populated with thousands of satellites in operation today, primarily addressing science, imaging, and low-bandwidth telecommunications needs.
- Medium Earth Orbit (MEO). Altitude 5,000 to 20,000km. MEO has historically been used for GPS and other navigation applications.
- Geostationary Earth Orbit (GEO). Altitude > 36,000 km. GEO satellites match the rotation of the Earth as they travel, and so remain above the same point on the ground. Hundreds of GEO satellites are in orbit today, traditionally delivering services such as weather data, broadcast TV, and some low-speed data communication.
The telecom industry is particularly interested in using LEO satellites to provide enhanced connectivity. Compared to GEO satellites, they can provide higher speeds and significantly lower latencies. As the cost of launching LEO satellites has decreased, more can be launched to provide redundancy in case of satellite failures or outages. If a single satellite experiences a problem, such as a malfunctioning component or damage from space debris, it can be taken offline and replaced without interrupting the network’s overall operation.
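To put the latency advantage in numbers, the straight-down propagation delay for representative altitudes can be computed directly (real slant paths and ground segments add to this, so these are lower bounds):

```python
# One-way, straight-down propagation delay to each orbit, using representative
# altitudes from the ranges listed above.
C_KM_S = 299_792.458  # speed of light in vacuum, km/s

orbits = {"LEO": 550, "MEO": 20_000, "GEO": 36_000}  # km
for name, altitude_km in orbits.items():
    one_way_ms = altitude_km / C_KM_S * 1000
    print(f"{name} ({altitude_km} km): {one_way_ms:.1f} ms one-way")
```

A LEO satellite at 550 km sits under 2 ms away, while a GEO satellite adds roughly 120 ms each way, which is the core of the telecom industry's interest in LEO constellations.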
Many companies are developing massive LEO constellations with hundreds or thousands of satellites to provide low latency and global coverage: SpaceX’s Starlink, Telesat’s Lightspeed, Amazon’s Kuiper, and OneWeb. These LEO satellite constellations can provide true universal coverage compared to terrestrial methods of communication. LEO satellites can connect people to high-speed internet where traditional ground infrastructure is hard to reach, making them an attractive solution to close the connectivity gaps across the world.
Coherent Technology is Vital for Future Satellite Links
Currently, most space missions use radio frequency communications to send data to and from spacecraft. While radio waves have a proven track record of success in space missions, generating and collecting more mission data requires enhanced communications capabilities.
Coherent optical communications can increase link capacities to spacecraft and satellites by 10 to 100 times that of radio frequency systems. Additionally, optical transceivers can lower the size, weight, and power (SWAP) specifications of satellite communication systems. Less weight and size means a less expensive launch or perhaps room for more scientific instruments. Less power consumption means less drain on the spacecraft’s energy sources.
Compared to traditional optical technology, coherent optical technology offers improved sensitivity and signal-to-noise ratios. This reduces error rates and the need for retransmissions, which would otherwise significantly increase latency.
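One way to picture the capacity advantage: a coherent receiver recovers both the amplitude and phase of light in two polarizations, so each symbol carries several bits, while a direct-detect on-off-keyed symbol carries one. A minimal illustration (the formats below are typical examples, not a statement about any particular product):

```python
import math

def bits_per_symbol(order: int, polarizations: int = 1) -> int:
    """Bits carried per symbol by an M-ary format, optionally over two polarizations."""
    return polarizations * int(math.log2(order))

ook = bits_per_symbol(2)                         # direct detect on-off keying
dp_qpsk = bits_per_symbol(4, polarizations=2)    # coherent dual-pol QPSK (100ZR class)
dp_16qam = bits_per_symbol(16, polarizations=2)  # coherent dual-pol 16QAM (400ZR class)
print(ook, dp_qpsk, dp_16qam)  # 1, 4, and 8 bits per symbol at the same baud rate
```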
Leveraging Electronics Ecosystems for Space Certification and Standardization
While integrated photonics can boost space communications by lowering the payload, it must overcome the obstacles of a harsh space environment, which include radiation hardness, an extreme operational temperature range, and vacuum conditions. The values in Table 1 show the unmanaged environmental temperatures in different space environments.
| Mission Type | Temperature Range |
|---|---|
| Pressurized Module | +18.3 °C to +26.7 °C |
| Low-Earth Orbit (LEO) | -65 °C to +125 °C |
| Geosynchronous Equatorial Orbit (GEO) | -196 °C to +128 °C |
| Trans-Atmospheric Vehicle | -200 °C to +260 °C |
| Lunar Surface | -171 °C to +111 °C |
| Martian Surface | -143 °C to +27 °C |
Fortunately, a substantial body of knowledge exists to make integrated photonics compatible with space environments. After all, photonic integrated circuits (PICs) use similar materials to their electronic counterparts, which have already been space qualified in many implementations.
Much research has gone into overcoming the challenges of packaging PICs with electronics and optical fibers for these space environments, which must include hermetic seals and avoid epoxies. Commercial solutions, such as those offered by PHIX Photonics Assembly, Technobis IPS, and the PIXAPP Photonic Packaging Pilot Line, are now available.
Takeaways
The space industry is experiencing rapid growth and commercialization, driven by technological advancements and the emergence of commercial space companies. Using multiple satellites in a constellation can enhance reliability and coverage while reducing signal disruptions.
Coherent optical technology is crucial for satellite communication links because it enables higher data rates and improved sensitivity and signal-to-noise ratios. The integration of electronics and optics ecosystems is essential for space certification and standardization, ensuring compatibility with the harsh space environment. Overall, addressing these challenges will continue to drive innovation and advancements in satellite communication networks.
Tags: Amazon (Kuiper), Blue Origin, coherent, Coherent Satellite Networks, Coherent technology, communications, compatibility, cost, GEO, Global coverage, latency, LEO, LEO satellite constellations, life span, Limited bandwidth, Low latency, MEO, Networks, OneWeb, Optical communications, P2P, PHIX Photonics Assembly, PIXAPP Photonic Packaging Pilot Line, reliability, satellite, Satellite communication systems, sensitivity, Signal latency, SNR, space, Space certification, SpaceX, Technobis IPS, Telesat, Virgin Galactic
Lightwave Webinar: The Pluggable Transceiver Revolution
Discover the transformative power of coherent pluggable optics in the realm of optical communications networks. Traditionally, transceivers have played a vital role in these networks, but now we are witnessing a paradigm shift. Coherent pluggable optics are assuming new responsibilities, especially in router-to-router links for data center interconnect and service provider IP over DWDM, as well as IP/optical layer integration. The significance of pluggable optics in open line systems is increasingly evident, revolutionizing network infrastructure.
Moreover, transmission rates are soaring from 400GbE to 800GbE, with an astounding 1.6TbE on the horizon. This webinar will delve into these trends across various applications and explore the optimal utilization of current and emerging modules. Join us as we unravel the potential of cutting-edge technology in optical communications networks.
Topics will Include:
- Coherent pluggable optics in router-to-router links
- Data center interconnect and service provider IP over DWDM
- IP/optical layer integration
- Pluggable optics in open line systems
- Transmission rates: From 400GbE to 800GbE and beyond
Speakers:
- Jon Baldry (Metro Marketing Director, Infinera)
- Joost Verberk (Director of Product Management, EFFECT Photonics)
- Anabel Alarcon (Product Manager, EXFO)
- Tho Quang Nguyen (Lead Application Engineer – Thermal, Henkel Corporation)
- Stephen Hardy (Editorial Director, Lightwave)
Watch the Recording
Quantum Meets Friday June 16
EFFECT Photonics will be joining the highly anticipated Quantum Meets for their Day 5, which will be hosted by Eindhoven Technical University.
This week-long gathering will bring together quantum enthusiasts from around the world for a series of community meetups. Attendees look forward to a captivating program filled with inspiring talks, extraordinary experiences, and enjoyable activities, all centered around the field of quantum technology.
Quantum Meets offers a unique opportunity to form real connections with like-minded individuals who share a passion for quantum. The event aims to provide a comprehensive overview of quantum applications and their implications across various sectors.
Register today
Data Centers in the Age of AI
Artificial intelligence (AI) is changing the technology landscape in various industries, and data centers are no exception. AI algorithms are computationally heavy and will increase data centers’ power consumption and cooling requirements. This aspect is arguably the one that will most deeply affect data center architecture. That being said, AI will also help automate some aspects of data center operation and maintenance.
This article will elaborate on how these AI aspects will change data center architecture.
Energy Efficiency
Data centers famously consume a significant amount of energy, and reducing their power consumption is an essential goal for data center operators.
The Uptime Institute estimates that the average power usage effectiveness (PUE) ratio for data centers in 2022 is 1.55. This implies that for every 1 kWh used to power data center equipment, an extra 0.55 kWh (about 35% of total power consumption) is needed to power auxiliary equipment like lighting and, more importantly, cooling. The Uptime Institute has observed only a marginal improvement of about 10% in the average data center PUE since 2014, when the average was 1.7.
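The overhead share follows directly from the definition of PUE (total facility power divided by IT equipment power). A quick check of the figures above:

```python
def pue_overhead(pue: float) -> float:
    """Fraction of total facility power consumed by non-IT loads (cooling, lighting)."""
    return (pue - 1) / pue

print(f"PUE 1.55 -> {pue_overhead(1.55):.0%} overhead")  # ~35%, the 2022 average
print(f"PUE 1.70 -> {pue_overhead(1.70):.0%} overhead")  # ~41%, the 2014 average
```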
In some ways, AI can help data centers to become more energy efficient. By analyzing temperature data, heat generation, and other variables, AI can determine the optimal temperature and airflow for different areas of the data center. By looking at energy consumption and heat generation data, AI can determine the optimal placement of equipment to minimize energy consumption and reduce wasted heat. By studying historical data and forecasting future energy consumption, AI can help data center operators to identify areas where energy consumption can be reduced.
While these benefits have the potential to reduce data center power consumption in the future significantly, the current reality is that power-hungry AI algorithms require more computing resources and power and will lead to a net increase in data center power consumption. The world’s major data center providers are already gearing up for this increase.
For example, a recent Reuters report explains how Meta computing clusters needed 24 to 32 times the networking capacity. This increase required a redesign of the clusters and data centers to include new liquid cooling systems.
Intelligent Automation
AI can automate and optimize many monitoring and scheduling tasks currently performed manually in data centers. These optimization and automation processes could also reduce the amount of hardware to purchase, manage, and monitor, as explained by Pratik Gupta, CTO at IBM Automation.
For example, AI can be used to automate capacity planning. By analyzing historical data and forecasting future demand, AI allows data center operators to determine the optimal computing resources to support current and future workloads. Such work makes planning for growth easier and ensures that data centers have sufficient resources to support their customers’ needs.
AI for Self-Healing Data Centers
A VentureBeat report explains that there has been a trend in using AI for fault detection and prediction in data centers in recent years. This leads to “self-healing” mechanisms that help data center operators reduce downtime and improve the reliability of their infrastructure.
For example, AIs can help monitor traffic throughout the data center. If traffic in specific nodes is slowing down, the AI can detect that trend and find solutions to restart the node or reroute traffic to other nodes. These trends and issues might not be immediately apparent to human operators.
With AI, data centers can look at historical data to predict equipment failures and schedule maintenance before a failure occurs. This can help prevent downtime and ensure that equipment always operates at peak performance.
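As a toy illustration of this kind of traffic monitoring (a simple rolling-statistics detector, not a description of any vendor's actual system):

```python
from statistics import mean, stdev

def detect_slowdown(throughput: list[float], window: int = 5, k: float = 3.0) -> list[int]:
    """Flag samples falling more than k standard deviations below the
    rolling mean of the preceding window of samples."""
    alerts = []
    for i in range(window, len(throughput)):
        baseline = throughput[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and throughput[i] < mu - k * sigma:
            alerts.append(i)
    return alerts

# Steady ~100 Gbps node traffic with a sudden dip at sample 8
samples = [100, 101, 99, 100, 102, 101, 100, 99, 60, 100]
print(detect_slowdown(samples))  # [8]: the dip is flagged for rerouting or a restart
```

A production system would learn seasonal baselines and correlate many metrics, but the principle of flagging deviations from learned normal behavior is the same.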
Takeaways
AI can significantly improve the operation of data centers by automating and optimizing resource allocation, as well as doing predictive maintenance and fault detection. These processes can help data centers become more energy efficient in the long term, but in the short term, AI will lead to significant increases in data center consumption. This is shown by how major data center providers have had to restructure their data centers to include enhanced cooling capabilities, such as liquid cooling.
Tags: Artificial intelligence (AI), Cooling requirements, Data center architecture, data centers, energy efficiency, Intelligent automation, Networking capacity, Optimal temperature and airflow, power consumption, Power usage effectiveness (PUE) ratio
CEO Marcoccia: EFFECT Photonics Combines the Best of Two Worlds: Microelectronics from the US, Photonics from the Netherlands
– Link Magazine
What do Next-Gen Optical Subassemblies Need?
While packaging, assembly, and testing are only a small part of the cost of electronic systems, the reverse happens with photonic integrated circuits (PICs) and their subassemblies. Researchers at the Technical University of Eindhoven (TU/e) estimate that for most Indium Phosphide (InP) photonics devices, the cost of packaging, assembly, and testing can reach around 80% of the total module cost.
To trigger a revolution in the use of photonics worldwide, it needs to be as easy to manufacture and use as electronics. In the words of EFFECT Photonics’ Chief Technology Officer, Tim Koene: “We need to buy photonics from a catalog as we do with electronics, have datasheets that work consistently, be able to solder it to a board and integrate it easily with the rest of the product design flow.”
This article will explore three key avenues to improve optical subassemblies and packaging for photonic devices.
Learning from Electronics Packaging
A key way to improve photonics manufacturing is to learn from electronics packaging, assembly, and testing methods that are already well-known and standardized. After all, building a new special production line is much more expensive than modifying an existing production flow.
One electronic technique essential to transfer into photonics is ball-grid array (BGA) packaging. BGA-style packaging has grown popular among electronics manufacturers over the last few decades. It places the chip connections under the chip package, allowing more efficient use of space in circuit boards, a smaller package size, and better soldering.
Another critical technique to move into photonics is flip-chip bonding. This process is where solder bumps are deposited on the chip in the final fabrication step. The chip is flipped over and aligned with a circuit board for easier soldering.
These might be novel technologies for photonics developers who have started implementing them in the last five or ten years. However, the electronics industry embraced these technologies 20 or 30 years ago. Making these techniques more widespread will make a massive difference in photonics’ ability to scale up and become as available as electronics.
Adopting BGA-style packaging and flip-chip bonding techniques will make it easier for PICs to survive this soldering process. There is ongoing research and development worldwide, including at EFFECT Photonics, to transfer more electronics packaging methods into photonics. PICs that can handle being soldered to circuit boards allow the industry to build optical subassemblies that are more accessible to the open market and can go into trains, cars, or airplanes.
The Benefits of Increasing Integration
Economies of scale are a crucial principle behind electronics manufacturing, and we must apply them to photonics too. The more components we can integrate into a single chip and the more chips we can integrate into a single wafer, the more affordable the photonic device becomes. If production volumes increase from a few thousand chips per year to a few million, the price per optical chip can decrease from thousands of Euros to mere tens of Euros. This must be the goal for the photonics industry in general.
By integrating all optical components on a single chip, we also shift the complexity from the assembly process to the much more efficient and scalable semiconductor wafer process. Assembling and packaging a device by interconnecting multiple photonic chips increases assembly complexity and costs. On the other hand, combining and aligning optical components on a wafer at a high volume is much easier, which drives down the device’s cost.
Deepening photonics integration will also have a significant impact on power consumption. Integrating all the optical components (lasers, detectors, modulators, etc.) on a single chip can minimize the losses and make devices such as optical transceivers more efficient. This approach doesn’t just optimize the efficiency of the devices themselves but also of the resource-hungry chip manufacturing process.
The Importance of Laser Packaging
Over the last decade, technological progress in tunable laser packaging and integration has matched the need for smaller footprints. In 2011, tunable lasers followed the multi-source agreement (MSA) for integrable tunable laser assemblies (ITLAs). By 2015, tunable lasers were sold in the more compact Micro-ITLA form factor, which cut the original ITLA package size in half. And in 2019, laser developers announced a new Nano-ITLA form factor that reduced the size by almost half again.
Reducing the footprint of tunable lasers in the future will need even greater integration of their parts. For example, every tunable laser needs a wavelength locker component that can stabilize the laser’s output regardless of environmental conditions such as temperature. Integrating the wavelength locker component on the laser chip instead of attaching it externally would help reduce the laser package’s footprint and power consumption.
Another aspect of optimizing laser module footprint is allowing transceiver developers to mix and match their building blocks. For example, traditional ITLAs in transceivers contain the temperature control driver and power converter functions. However, the main transceiver board can usually provide these functions too. A setup in which the main board performs these driver and converter functions would avoid the need for redundant elements in both the main board and tunable laser.
Finally, the future of laser packaging will also involve packaging more multi-laser arrays. As explained in a previous article, multi-laser arrays will become increasingly necessary to increase link capacity in coherent systems. They will not need more slots in the router faceplate while avoiding the higher cost and complexity of increasing the speed with a single laser channel.
Takeaways
Improving subassemblies and packaging is vital for photonics to reach its potential. Photonics must learn from well-established, standardized electronics packaging techniques like BGA-style packaging and flip-chip bonding. By increasing integration, photonics can achieve economies of scale that make devices more affordable and energy efficient. In this context, improved integration and packaging of tunable lasers and arrays will be particularly important. Overall, these efforts will make photonics more accessible to the open market and make it as easy to manufacture and use as electronics.
Tags: Assembly, electronics, flip chip bonding, integration, Manufacturing, Packaging, Photonics, Subassemblies, testing
Building a Sustainable Future with Fully Integrated PICs
Article first published 27 September 2021, updated 31st May 2023.
The demand for data and other digital services is rising exponentially. From 2010 to 2020, the number of Internet users worldwide doubled, and global internet traffic increased 12-fold. By 2022, internet traffic had doubled yet again. While 5G standards are more energy-efficient per bit than 4G, the total power consumption will be much higher than 4G. Huawei expects that the maximum power consumption of one of their 5G base stations will be 68% higher than their 4G stations. These issues do not just affect the environment but also the bottom lines of communications companies.
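For scale, those traffic figures imply steep compound annual growth rates:

```python
def cagr(growth_factor: float, years: int) -> float:
    """Compound annual growth rate implied by a total growth factor over a period."""
    return growth_factor ** (1 / years) - 1

print(f"12x traffic growth, 2010-2020: {cagr(12, 10):.1%} per year")
print(f"2x traffic growth, 2020-2022:  {cagr(2, 2):.1%} per year")
```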
Keeping up with the increasing data demand of future networks sustainably will require operators to deploy more optical technologies, such as photonic integrated circuits (PICs), in their access and fronthaul networks.
Integration Impacts Energy Efficiency and Optical Losses
Lately, we have seen many efforts to increase further the integration on a component level across the electronics industry. For example, moving towards greater integration of components in a single chip has yielded significant efficiency benefits in electronics processors. Apple’s recent M1 and M2 processors integrate all electronic functions in a single system-on-chip (SoC) and consume significantly less power than the processors with discrete components used in their previous generations of computers.
| Mac Mini Model | Idle Power (W) | Max Power (W) |
|---|---|---|
| 2023, M2 | 7 | 50 |
| 2020, M1 | 7 | 39 |
| 2018, Core i7 | 20 | 122 |
| 2014, Core i5 | 6 | 85 |
| 2010, Core 2 Duo | 10 | 85 |
| 2006, Core Solo or Duo | 23 | 110 |
| 2005, PowerPC G4 | 32 | 85 |
Photonics is also achieving greater efficiency gains by following a similar approach to integration. The more active and passive optical components (lasers, modulators, detectors, etc.) manufacturers can integrate on a single chip, the more energy they can save since they avoid coupling losses between discrete components and allow for interactive optimization.
Let’s start by discussing three different levels of device integration for an optical device like a transceiver:
- Discrete build – The transceiver components are manufactured through separate processes. The components are then assembled into a single package using different types of interconnections.
- Partial integration – Some components are manufactured and integrated on the same chip, but others are manufactured or sourced separately. For example, the transceiver laser can be manufactured separately on a different material and then interconnected to a chip with the other transceiver components.
- Full integration – All the components are manufactured on a single chip from a single material simultaneously.
While discrete builds and partial integration have advantages in managing the production yield of the individual components, full integration leads to fewer optical losses and more efficient packaging and testing processes, making them a much better fit in terms of sustainability.
The interconnects required to couple discrete components result in electrical and optical losses that must be compensated with higher transmitter power and more energy consumption. The more interconnects between different components, the higher the losses become. Discrete builds will have the most interconnect points and highest losses. Partial integration reduces the number of interconnect points and losses compared to discrete builds. If these components are made from different optical materials, the interconnections will suffer additional losses.
On the other hand, full integration uses a single chip of the same base material. It does not require lossy interconnections between chips, minimizing optical losses and significantly reducing the energy consumption and footprint of the transceiver device.
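A back-of-the-envelope sketch shows how coupling losses compound. The interface counts and the 1.5 dB loss per interface below are illustrative assumptions, not measured figures for any real device:

```python
def link_loss_db(coupling_points: int, loss_per_point_db: float = 1.5) -> float:
    """Total optical loss accumulated across chip-to-chip coupling interfaces."""
    return coupling_points * loss_per_point_db

def extra_power_factor(loss_db: float) -> float:
    """Linear increase in transmit power needed to compensate a loss in dB."""
    return 10 ** (loss_db / 10)

for name, joints in [("Discrete build", 6), ("Partial integration", 2), ("Full integration", 0)]:
    loss = link_loss_db(joints)
    print(f"{name}: {loss:.1f} dB loss -> {extra_power_factor(loss):.1f}x transmit power")
```

Because losses in dB add linearly but power compensation grows exponentially, every interconnect removed pays off disproportionately.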
More Integration Saves Scarce Resources
When it comes to energy consumption and sustainability, we shouldn’t just think about the energy the PIC consumes but also the energy and carbon footprint of fabricating the chip and assembling the transceiver. To give an example from the electronics sector, a Harvard and Facebook study estimated that for Apple, manufacturing accounts for 74% of their carbon emissions, with integrated circuit manufacturing comprising roughly 33% of Apple’s carbon output. That’s higher than the emissions from product use.
Early Testing Avoids Wastage
Testing is another aspect of the manufacturing process that impacts sustainability. The earlier faults can be found in the testing process, the greater the impact on the use of materials and the energy used to process defective chips. Ideally, testing should happen not only on the final, packaged transceiver but in the earlier stages of PIC fabrication, such as measuring after wafer processing or cutting the wafer into smaller dies.
Discrete and partial integration approaches do more of their optical testing on the finalized package, after connecting all the different components together. Should just one of the components not pass the testing process, the complete packaged transceiver would need to be discarded, potentially leading to a massive waste of materials as nothing can be ”fixed” or reused at this stage of the manufacturing process.
Full integration enables earlier optical testing on the semiconductor wafer and dies. By testing the dies and wafers directly before packaging, manufacturers need only discard the bad dies rather than the whole package, which saves valuable energy and materials.
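Simple yield arithmetic shows why early, die-level testing matters. The 90% per-die yield and four co-packaged dies below are hypothetical numbers chosen for illustration:

```python
def module_yield(die_yield: float, dies_per_module: int) -> float:
    """Probability that a module assembled from untested dies contains only good ones."""
    return die_yield ** dies_per_module

good = module_yield(0.90, 4)
print(f"Without die-level testing: {good:.0%} of packaged modules work,")
print(f"and the other {1 - good:.0%} are scrapped whole, packaging included.")
# With known-good-die testing, only the ~10% of bad dies are discarded,
# and every module is assembled from tested-good parts.
```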
Full Integration Drives Sustainability
While communication networks have become more energy-efficient, further technological improvements must continue decreasing the cost of energy per bit and keeping up with the exponential increase in Internet traffic. At the same time, a greater focus is being placed on the importance of sustainability and responsible manufacturing. All the photonic integration approaches we have touched on will play a role in reducing the energy consumption of future networks. However, out of all of them, only full integration is in a position to make a significant contribution to the goals of sustainability and environmentally friendly manufacturing. A fully integrated system-on-chip minimizes optical losses, transceiver energy consumption, power usage, and materials wastage while at the same time ensuring increased energy efficiency of the manufacturing, packaging, and testing process.
Tags: ChipIntegration, Data demand, DataDemand, EFFECT Photonics, Energy consumption reduction, energy efficiency, EnergySavings, Environmental impact, Fully Integrated PICs, Green Future, GreenFuture, Integrated Photonics, Integration benefits, Manufacturing sustainability, Optical technologies, OpticalComponents, photonic integration, PIC, PICs, ResponsibleManufacturing, sustainability telecommunication, Sustainable, Sustainable future, SustainableNetworks, Transceiver optimization
Discover Where Light Meets Digital at the Dutch Technology Festival
Join EFFECT Photonics on June 8th and 9th, 2023 at the Dutch Technology Festival – Klokgebouw Eindhoven, the Netherlands.
Get ready to experience an exhilarating festival that will take you on a mind-blowing journey into the future of technology. Young individuals, students, and professionals will come together to witness the awe-inspiring world of possibilities in store for their careers and education. You’ll delve into the realms of security, sustainability, health, energy, food, and mobility. Prepare to be amazed as you explore the very forefront of innovation and discover the incredible potential that lies ahead!
Come and discover firsthand, at EFFECT Photonics, how our technology is transforming where light meets digital, and learn how EFFECT Photonics’ full portfolio of optical building blocks is enabling 100G coherent to the network edge and next-generation applications.
Explore Our Demos:
Build Your Own 100G ZR Coherent Module
See how easy and affordable it can be to upgrade existing 10G links to a more scalable 100G coherent solution! Try your hand at constructing a 100G ZR coherent module specifically designed for the network edge utilizing various optical building blocks including tunable lasers, DSPs and optical subassemblies.
Tune Your Own PIC (Photonic Integrated Circuit)
Be sure to stop by to tune your own PIC with EFFECT Photonics technology. In this interactive and dynamic demonstration, participants can explore first-hand the power of EFFECT Photonics solutions utilizing various parameters and product configurations.
Tags: Careers and education, EFFECT Photonics, Energy, Future of technology, Health, innovation, Light meets digital, Mobility, network edge, Optical building blocks, Photonic Integrated Circuit, PIC, security, Sustainability
The Fabrication Process Inside a Photonic Foundry
Photonics is one of the enabling technologies of the future. Light is the fastest information carrier in the universe and can transmit this information while dissipating less heat and energy than electrical signals. Thus, photonics can dramatically increase the speed, reach, and flexibility of communication networks and cope with the ever-growing demand for more data. And it will do so at a lower energy cost, decreasing the Internet’s carbon footprint. Meanwhile, fast and efficient photonic signals have massive potential for sensing and imaging applications in medical devices, automotive LIDAR, agricultural and food diagnostics, and more.
Given its importance, we should discuss the fabrication processes inside photonic semiconductor foundries.
Manufacturing semiconductor chips for photonics and electronics is one of the most complex procedures in the world. For example, back in his university days, EFFECT Photonics co-founder Boudewijn Docter described a fabrication process with 243 steps!
Yuqing Jiao, Associate Professor at the Eindhoven University of Technology (TU/e), explains the fabrication process in a few basic, simplified steps:
- Grow or deposit your chip material
- Print a pattern on the material
- Etch the printed pattern into your material
- Do some cleaning and extra surface preparation
- Go back to step 1 and repeat as needed
Real life is, of course, a lot more complicated and will require cycling through these steps tens of times, leading to processes with more than 200 total steps. Let’s go through these basic steps in a bit more detail.
1. Layer Epitaxy and Deposition: Different chip elements require different semiconductor material layers. These layers can be grown on the semiconductor wafer via a process called epitaxy or deposited via other methods, such as physical or chemical vapor deposition.
2. Lithography (i.e., printing): There are a few lithography methods, but the one used for high-volume chip fabrication is projection optical lithography. The semiconductor wafer is coated with a photosensitive polymer film called a photoresist. Meanwhile, the design layout pattern is transferred to an opaque material called a mask. The optical lithography system projects the mask pattern onto the photoresist. The exposed photoresist is then developed (like photographic film) to complete the pattern printing.
3. Etching: Having “printed” the pattern on the photoresist, it is time to remove (or etch) parts of the semiconductor material to transfer the pattern from the resist into the wafer. Etching techniques can be broadly classified into two categories.
- Dry Etching: These processes remove material by bombarding it with ions. Typically, these ions come from a plasma of reactive gases like oxygen, boron, chlorine, etc. This approach is often used to etch a material anisotropically (i.e., in a specific direction).
- Wet Etching: These processes involve the removal of material using a liquid reactant. The material to be etched is immersed in the solution, which will dissolve the targeted material layers. This solution usually consists of an acid, such as hydrofluoric acid (HF), which is commonly used to etch silicon dioxide. Wet etching is typically used for etching a material isotropically (i.e., in all directions).
4. Cleaning and Surface Preparation: After etching, a series of steps will clean and prepare the surface before the next cycle.
- Passivation: Adding layers of dielectric material (such as silica) to “passivate” the chip and make it more tolerant to environmental effects.
- Planarization: Making the surface flat in preparation for future lithography and etching steps.
- Metallization: Depositing metal components and films on the wafer. This might be done for future lithography and etching steps or, in the end, to add electrical contacts to the chip.
Figure 5 summarizes how an InP photonic device looks after the steps of layer epitaxy, etching, dielectric deposition and planarization, and metallization.
After this fabrication process ends, the processed wafers are shipped worldwide to be tested and packaged into photonic devices. This is an expensive process we discussed in one of our previous articles.
Takeaways
The process of making photonic integrated circuits is incredibly long and complex, and the steps we described in this article are a mere simplification of the entire process. It requires tremendous knowledge in chip design, fabrication, and testing from experts in different fields worldwide. EFFECT Photonics was founded by people who fabricated these chips themselves, understood the process intimately and developed the connections and network to develop cutting-edge PICs at scale.
Tags: Agricultural, Carbon Footprint, Chip Material, Cleaning, Communication Networks, Deposition, Energy Cost, Epitaxy, Etching, Fabrication Process, Food Diagnostics, Integrated Photonics, LIDAR, Lithography, Manufacturing, Medical Devices, Metallization, Photonic Foundry, Photonics, Semiconductor, Sensing and Imaging, Surface Preparation
Data Center Interconnects: Coherent or Direct Detect?
Article first published 15 June 2022, updated 18 May 2023.
With the increasing demand for cloud-based applications, datacom providers are expanding their distributed computing networks. Therefore, they and telecom provider partners are looking for data center interconnect (DCI) solutions that are faster and more affordable than before to ensure that connectivity between metro and regional facilities does not become a bottleneck.
As shown in the figure below, we can think of three categories of data center interconnects based on their reach:
- Intra-data center interconnects (< 2 km)
- Campus data center interconnects (< 10 km)
- Metro data center interconnects (< 100 km)
Coherent 400ZR now dominates the metro DCI space, but in the coming decade, coherent technology could also play a role in shorter ranges, such as campus and intra-data center interconnects. As interconnects upgrade to Terabit speeds, coherent technology might start coming closer to direct detect power consumption and cost.
Coherent Dominates in Metro DCIs
The advances in electronic and photonic integration allowed coherent technology for metro DCIs to be miniaturized into QSFP-DD and OSFP form factors. This progress allowed the Optical Internetworking Forum (OIF) to create a 400ZR multi-source agreement. With small enough modules to pack a router faceplate densely, the datacom sector could profit from a 400ZR solution for high-capacity data center interconnects of up to 80km. Operations teams found the simplicity of coherent pluggables very attractive. There was no need to install and maintain additional amplifiers and compensators as in direct detection: a single coherent transceiver plugged into a router could fulfill the requirements.
As an example of their success, Cignal AI forecasted that 400ZR shipments would dominate edge applications, as shown in Figure 2.
Campus Interconnects Are the Grey Area
The campus DCI segment, featuring distances below 10 kilometers, was squarely in the domain of direct detect products when the standard speed of these links was 100Gbps. No amplifiers or compensators were needed for these shorter distances, so direct detect transceivers were as simple to deploy and maintain as coherent ones.
However, as link bandwidths increase into the Terabit space, these direct detect links will need more amplifiers to reach 10 kilometers, and their power consumption will approach that of coherent solutions. The industry initially predicted that coherent solutions would match the power consumption of PAM4 direct detect solutions as early as the 800G generation. However, PAM4 developers have proven resourceful, borrowing some aspects of coherent technology without implementing a full coherent solution. For example, ahead of OFC 2023, semiconductor solutions provider Marvell announced a 1.6Tbps PAM4 platform that pushes the envelope on the cost and power per bit it can offer in the 10 km range.
It will be interesting to follow how the PAM-4 industry evolves in the coming years. How many (power-hungry) features of coherent solutions will it have to borrow to keep up with upcoming generations and speeds of 3.2 Tbps and beyond? Lumentum’s Chief Technology Officer, Brandon Collings, has some interesting thoughts on the subject in this interview with Gazettabyte.
Direct Detect Dominates Intra Data Center Interconnects (For Now…)
Below Terabit speeds, direct detect technology (both NRZ and PAM-4) will likely dominate the intra-DCI space (also called data center fabric) in the coming years. In this space, links span less than 2 kilometers, and for particularly short links (< 300 meters), affordable multimode fiber (MMF) is frequently used.
Nevertheless, the move to larger, more centralized data centers (such as hyperscale) is lengthening intra-DCI links. Instead of transferring data directly from one data center building to another, new data centers move data to a central hub. So even if the building you want to connect to is only 200 meters away, the fiber runs to a hub that might be one or two kilometers away. In other words, intra-DCI links are becoming campus DCI links that require single-mode fiber solutions.
On top of these changes, the upgrades to Terabit speeds in the coming decade will also see coherent solutions more closely challenge the power consumption of direct detect transceivers. PAM-4 direct detect transceivers that fulfill the speed requirements require digital signal processors (DSPs) and more complex lasers that will be less efficient and affordable than previous generations of direct detect technology. With coherent technology scaling up in volume and having greater flexibility and performance, one can argue that it will also reach cost-competitiveness in this space.
Takeaways
Unsurprisingly, the choice between coherent and direct detect technology for data center interconnects boils down to reach and capacity needs. 400ZR coherent is already established as the solution for metro DCIs. In campus interconnects of 10 km or less, PAM-4 products remain a robust solution up to 1.6 Tbps, but coherent technology is making a case for its use. Thus, it will be interesting to see how the two technologies compete at 3.2 Tbps and in future generations.
Coherent solutions are also becoming more competitive as the intra-data center sector moves into higher Terabit speeds, like 3.2Tbps. Overall, the datacom sector is moving towards coherent technology, which is worth considering when upgrading data center links.
Tags: 800G, access networks, coherent, cost, cost-effective, Data center, distributed computing, edge and metro DCIs, integration, Intra DCI, license, metro, miniaturized, photonic integration, Photonics, pluggable, power consumption, power consumption SFP, reach, Terabit

Tim Koene on the Dutch Tech Climate
BNR – Zakendoen 23:45
Shining a Light on Four Tunable Lasers
The world is moving towards tunability. Datacom and telecom companies may increase their network capacity without investing in new fiber infrastructure thanks to tunable lasers and dense wavelength division multiplexing (DWDM). Furthermore, the miniaturization of coherent technology into pluggable transceiver modules has enabled the widespread implementation of IP over DWDM solutions. Self-tuning algorithms have also contributed to the broad adoption of DWDM systems since they reduce the complexity of deployment and maintenance.
The tunable laser is a core component of all these tunable communication systems, both direct detection and coherent. The fundamental components of a laser are the following:
- An optical resonator (also called an optical cavity) that allows laser light to re-circulate and feed itself back. Resonators can be linear or ring-shaped. Linear resonators have a highly reflective mirror on one end and a partially-reflective mirror on the other, which acts as a coupler that lets the laser light out. On the other hand, ring resonators use a waveguide as an output coupler.
- An active medium (also called a gain medium) inside the resonator that, when pumped by an external energy source, will amplify the power of light by a process called stimulated emission.
- A pump source: the external energy source that powers the amplification process of the gain medium. The typical tunable laser used in communications uses an electrical pump, but some lasers can also use an optical pump (i.e., another light source).
As light circulates throughout the resonator, it passes multiple times through the pumped gain medium, amplifying itself and building up power to become the highly concentrated and coherent beam of light we know as a laser.
There are multiple ways to tune lasers, but let’s discuss three common tuning methods. These methods can be, and often are, used together.
- Tuning the Gain Medium: By changing the pump intensity or environmental conditions such as its temperature, the gain medium can amplify different frequencies of light.
- Tuning the Resonator Length: A resonator supports light at frequencies whose wavelengths fit exactly within its length. Making the resonator shorter or longer therefore shifts these resonant frequencies.
- Tuning by Filtering: Adding a filtering element inside or outside the resonator, such as a diffraction grating (i.e., a periodic mirror), allows the laser to “select” a specific frequency.
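The resonator-length method above can be made concrete with the standard relation for the frequency spacing of a linear cavity's longitudinal modes, the free spectral range FSR = c / (2nL). The sketch below is purely illustrative; the refractive index and cavity lengths are hypothetical example values, not figures from this article.

```python
# Illustrative sketch: mode spacing of a linear resonator, FSR = c / (2 * n * L).
# The index and length values below are hypothetical examples.
c = 299_792_458.0  # speed of light in vacuum, m/s

def free_spectral_range(n_index: float, length_m: float) -> float:
    """Frequency spacing (Hz) between adjacent longitudinal modes of a
    linear cavity with refractive index n_index and length length_m."""
    return c / (2 * n_index * length_m)

# A short on-chip InP cavity (n ~ 3.2, ~300 um) has widely spaced modes...
fsr_chip = free_spectral_range(3.2, 300e-6)   # ~156 GHz
# ...while a longer (here: 2 cm free-space) external cavity packs modes
# much closer together, allowing finer frequency selection.
fsr_external = free_spectral_range(1.0, 0.02)  # ~7.5 GHz

print(f"{fsr_chip / 1e9:.1f} GHz vs {fsr_external / 1e9:.1f} GHz")
```

The second value hints at why the external cavity lasers discussed later in this article offer more degrees of freedom for tuning: a longer cavity means a denser set of resonant frequencies to select from.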
With this short intro on how lasers work and can be tuned, let’s dive into some of the different tunable lasers used in communication systems.
Distributed Feedback Lasers
Distributed Feedback (DFB) lasers are unique because they directly etch a grating onto the gain medium. This grating acts as a periodic mirror, forming the optical resonator needed to recirculate light and create a laser beam. These lasers are tunable by tuning the temperature of the gain medium and by filtering with the embedded grating.
Compared to their predecessors, DFB lasers could produce very pure, high-quality laser light with lower complexity in design and manufacturing that could be easily integrated into optical fiber systems. These characteristics benefited the telecommunications sector, which needed lasers with high purity and low noise that could be produced at scale. After all, the more pure (i.e., lower linewidth) a laser is, the more information it can encode. Thus, DFB lasers became the industry’s solution for many years.
The drawback of DFB lasers is that embedding the grating element in the gain medium makes them more sensitive and unstable. This sensitivity narrows their tuning range and makes them less reliable as they age.
Distributed Bragg Reflector (DBR) Lasers
A simple way to improve the reliability compared to a DFB laser is to etch the grating element outside the gain medium instead of inside. This grating element (which in this case is called a Bragg reflector) acts as a mirror that creates the optical resonator and amplifies the light inside. This setup is called a distributed Bragg reflector (DBR) laser.
While, in principle, a DBR laser does not have a wider tuning range than a DFB laser, its tuning behavior is more reliable over time. Since the grating is outside the gain medium, the DBR laser is less sensitive to environmental fluctuations and more reliable as it ages. However, as coherent and DWDM systems became increasingly important, the industry needed a greater tuning range that DFB and DBR lasers alone could not provide.
External Cavity Lasers (ECL)
Interestingly enough, one of the most straightforward ways to improve the quality and tunability of a semiconductor laser is to use it inside a second, somewhat larger resonator. This setup is called an external cavity laser (ECL) since this new resonator or cavity will use additional optical elements external to the original laser.
The main modification to the original semiconductor laser is that instead of having a partially reflective mirror as an output coupler, the coupler will use an anti-reflection coating to become transparent. This helps the original laser resonator capture more light from the external cavity.
The new external resonator provides more degrees of freedom for tuning the laser. If the resonator uses a mirror, then the laser can be tuned by moving the mirror a bit and changing the length of the resonator. If the resonator uses a grating, it has an additional element to tune the laser by filtering.
ECLs have become the state-of-the-art solution in the telecom industry: they use a DFB or DBR laser as the “base laser” and external gratings as their filtering element for additional tuning. These lasers can provide a high-quality laser beam with low noise, narrow linewidth, and a wide tuning range. However, they come at a cost: manufacturing complexity.
ECLs initially required free-space bulk optical elements, such as lenses and mirrors, for the external cavity. One of the hardest things to do in photonics is coupling between free-space optics and a chip. This alignment of the free-space external cavity with the original laser chip is extremely sensitive to environmental disturbances. Therefore, their coupling is often inefficient and complicates manufacturing and assembly processes, making them much harder to scale in volume.
Laser developers have tried to overcome this obstacle by manufacturing the external cavity on a separate chip coupled to the original laser chip. Coupling these two chips together is still a complex problem for manufacturing but more feasible and scalable than coupling from chip to free space optics. This is the direction many major tunable laser developers will take in their future products.
Integrated Tunable Ring Lasers
As we explained in the introductory section, linear resonators are those in which light bounces back and forth between two mirrors. However, ring resonators take a different approach to feedback: the light loops multiple times inside a ring that contains the active medium. The ring is coupled to the rest of the optical circuit via a waveguide.
The power of the ring resonator lies in its compactness, flexibility, and integrability. While a single ring resonator is not that impressive or tunable, using multiple rings and other optical elements allows them to achieve performance and tunability on par with the state-of-the-art tunable lasers that use linear resonators.
Most importantly, these widely tunable ring lasers can be entirely constructed on a single chip of Indium Phosphide (InP) material. As shown in this paper from the Eindhoven University of Technology, these lasers can even be built with the same basic building blocks and processes used to make other elements in the InP photonic integrated circuit (PIC).
This high integration of ring lasers has many positive effects. It can avoid inefficient couplings and make the laser more energy efficient. Furthermore, it enables the development of a monolithically integrated laser module where every element is included on the same chip. This includes integrating the wavelength locker component on the same chip, an element most state-of-the-art lasers attach separately.
As we have argued in previous articles, the more elements can be integrated into a single chip, the more scalable the manufacturing process can become.
Takeaways
Factors such as output power, noise, linewidth, tuning range, and manufacturability are vital when deciding which kind of laser to use. A DFB or DBR laser should do the job if wide tunability is not required. Greater tuning range will require an external cavity laser, but if the device must be manufactured at a large volume, an external cavity made on a chip instead of free-space optics will scale more easily. The latter is the tunable laser solution the telecom industry is gravitating towards.
That being said, ring lasers are a promising alternative because they can enable a widely tunable and monolithically integrated laser with all elements, including wavelength locker, on the same chip. This setup is ideal for scaling into high production volumes.
Tags: EFFECT Photonics, Photonics

The Promise of Integrated Quantum Photonics
Today’s digital society depends heavily on securely transmitting and storing data. One of the oldest and most widely used methods to encrypt data is called RSA (Rivest-Shamir-Adleman, after the surnames of the algorithm’s designers). However, in 1994, mathematician Peter Shor proved that an ideal quantum computer could find the prime factors of large numbers exponentially faster than a conventional computer and thus break RSA encryption within hours or days.
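A toy example (with numbers far too small for real security) illustrates why factoring matters: anyone who can factor the public modulus n back into its prime factors can reconstruct the private key. The primes and exponents below are standard textbook demonstration values, not anything from a real deployment.

```python
# Toy RSA (illustrative only): security rests on the difficulty of
# factoring n = p * q. Shor's algorithm would recover p and q efficiently,
# and from them the private exponent d.
p, q = 61, 53                      # secret primes (tiny, for demonstration)
n = p * q                          # public modulus (3233)
phi = (p - 1) * (q - 1)            # Euler's totient of n
e = 17                             # public exponent, coprime with phi
d = pow(e, -1, phi)                # private exponent = modular inverse of e

message = 42
ciphertext = pow(message, e, n)    # encryption: m^e mod n
recovered = pow(ciphertext, d, n)  # decryption: c^d mod n
print(recovered)                   # 42
```

Factoring 3233 by hand is trivial; factoring a 2048-bit modulus is far beyond any classical computer, which is exactly the asymmetry Shor's algorithm would erase.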
While practical quantum computers are likely decades away from implementing Shor’s algorithm with enough performance and scale to break RSA or similar encryption methods, the potential implications are terrifying for our digital society and our data safety.
Given these risks, arguably the most secure way to protect data and communications is by fighting quantum with quantum: protect your data from quantum computer hacking by using security protocols that harness the power of quantum physics laws. That’s what quantum key distribution (QKD) does.
The quantum bits (qubits) used by QKD systems can be photons, electrons, atoms, or any other system that can exist in a quantum state. However, using photons as qubits will likely dominate the quantum communications and QKD application space. We have decades of experience manipulating the properties of photons, such as polarization and phase, to encode qubits. Thanks to optical fiber, we also know how to send photons over long distances with relatively little loss. Besides, optical fiber is already a fundamental component of modern telecommunication networks, so future quantum networks can run on that existing fiber infrastructure. All these signs point towards a new era of quantum photonics.
Photonic QKD devices have been, in some shape or form, commercially available for over 15 years. Still, factors such as the high cost, large size, and the inability to operate over longer distances have slowed their widespread adoption. Many R&D efforts regarding quantum photonics aim to address the size, weight, and power (SWaP) limitations. One way to overcome these limitations and reduce the cost per device would be to integrate every QKD function—generating, manipulating, and detecting photonic qubits—into a single chip.
Integration is Key to Bring Lab Technology into the Market
Bringing quantum products from lab prototypes to fully realized products that can be sold on the market is a complex process that involves several key steps.
One of the biggest challenges in bringing quantum products to market is scaling up the technology from lab prototypes to large-scale production. This requires the development of reliable manufacturing processes and supply chains that can produce high-quality quantum products at scale. Quantum products must be highly performant and reliable to meet the demands of commercial applications. This requires extensive testing and optimization to ensure that the product meets or exceeds the desired specifications.
In addition, quantum products must comply with relevant industry standards and regulations to ensure safety, interoperability, and compatibility with existing infrastructure. This requires close collaboration with regulatory bodies and industry organizations to develop appropriate standards and guidelines.
Photonic integration is a process that makes these goals more attainable for quantum technologies. By taking advantage of existing semiconductor manufacturing systems, quantum technologies can scale up their production volumes more easily.
Smaller Footprints and Higher Efficiency
One of the most significant advantages of integrated photonics is its ability to miniaturize optical components and systems, making them much smaller, lighter, and more portable than traditional optical devices. This is achieved by leveraging micro- and nano-scale fabrication techniques to create optical components on a chip, which can then be integrated with other electronic and optical components to create a fully functional device.
The miniaturization of optical components and systems is essential for the development of practical quantum technologies, which require compact and portable devices that can be easily integrated into existing systems. For example, compact and portable quantum sensors can be used for medical imaging, geological exploration, and industrial process monitoring. Miniaturized quantum communication devices can be used to secure communication networks and enable secure communication between devices.
Integrated photonics also allows for the creation of complex optical circuits that can be easily integrated with other electronic components to create fully integrated opto-electronic quantum systems. This is essential for the development of practical quantum computers, which require the integration of a large number of qubits with control and readout electronics.
Economics of Scale
Wafer-scale photonics manufacturing demands a higher upfront investment, but the resulting high-volume production line drives down the cost per device. This economy-of-scale principle is the same one behind electronics manufacturing, and the same must be applied to photonics. The more optical components we can integrate into a single chip, the more the price of each component can decrease. The more optical System-on-Chip (SoC) devices can go into a single wafer, the more the price of each SoC can decrease.
Researchers at the Technical University of Eindhoven and the JePPIX consortium have done some modelling to show how this economy-of-scale principle would apply to photonics. If production volumes can increase from a few thousand chips per year to a few million, the price per optical chip can decrease from thousands of Euros to mere tens of Euros. This must be the goal for the quantum photonics industry.
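The amortization logic behind this argument can be sketched with a toy model: a fixed upfront cost (mask sets, line setup) spread over annual volume, plus a small variable cost per chip. All numbers below are illustrative assumptions, not the figures from the TU/e and JePPIX model.

```python
# Hypothetical economy-of-scale sketch: fixed costs amortize away as
# production volume grows, so cost per chip falls toward the variable cost.
def cost_per_chip(fixed_cost_eur: float, variable_cost_eur: float,
                  annual_volume: int) -> float:
    """Cost per chip when a fixed cost is spread over annual_volume units."""
    return fixed_cost_eur / annual_volume + variable_cost_eur

fixed, variable = 5_000_000.0, 10.0  # assumed EUR values, for illustration
for volume in (5_000, 50_000, 500_000, 5_000_000):
    print(f"{volume:>9} chips/yr -> {cost_per_chip(fixed, variable, volume):8.2f} EUR/chip")
```

Under these assumed numbers, going from five thousand to five million chips per year drops the per-chip cost from about a thousand Euros to roughly eleven, mirroring the thousands-to-tens trajectory described above.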
By integrating all optical components on a single chip, we also shift the complexity from the assembly process to the much more efficient and scalable semiconductor wafer process. Assembling and packaging a device by interconnecting multiple photonic chips increases assembly complexity and costs. On the other hand, combining and aligning optical components on a wafer at a high volume is much easier, which drives down the device’s cost.
Takeaways
Overall, bringing quantum products to market requires a multi-disciplinary approach that involves collaboration between scientists, engineers, designers, business professionals, and regulatory bodies to develop and commercialize a high-quality product that meets the needs of its target audience. Integrated photonics offers significant advantages in miniaturization and scale-up potential, which are essential in taking quantum technologies from the lab to the market.
Tags: Economy-of-scale, EFFECT Photonics, Integrated Photonics, miniaturization, Photonics, Photons, Quantum, Quantum products, Qubits, RSA encryption, Wafer Scale Photonics

The Future of Coherent Transceivers in the Access
The demand for data and other digital services is rising exponentially. From 2010 to 2020, Internet users worldwide doubled, and global internet traffic increased 12-fold. From 2020 to 2026, internet traffic will likely increase 5-fold. To meet this demand, datacom and telecom operators need to constantly upgrade their transport networks.
The major obstacles in this upgrade path remain the power consumption, thermal management, and affordability of transceivers. Over the last two decades, power ratings for pluggable modules have increased as we moved from direct detection to more power-hungry coherent transmission: from 2 W for SFP modules to 3.5 W for QSFP modules and now to 14 W for QSFP-DD and 21.1 W for OSFP form factors. This power consumption increase seems incompatible with the power constraints of the network edge.
This article will review trends in data rate, power consumption, and footprint for transceivers in the network edge that aim to address these challenges.
Downscaling Data Rates for the Access
Given the success of 400ZR pluggable coherent solutions in the market, discussions in the telecom sector about a future beyond 400G pluggables have often focused on 800G solutions and 800ZR. However, there is also increasing excitement about “downscaling” to 100G coherent products for applications in the network edge.
In the coming years, 100G coherent uplinks will become increasingly widespread in deployments and applications throughout the network edge. Some mobile access network use cases must upgrade their existing 10G DWDM link aggregation into a single coherent 100G DWDM uplink. Meanwhile, cable networks and business services are upgrading their customer links from 1Gbps to 10Gbps, and this migration will be a significant factor in increasing the demand for coherent 100G uplinks. For carriers who provide converged cable/mobile access, these upgrades to 100G uplinks will enable opportunities to overlay more business services and mobile traffic onto their existing cable networks.
You can read more about these developments in our previous article, When Will the Network Edge Go Coherent?
Moving Towards Low Power
Data centers and 5G networks might be hot commodities, but the infrastructure that enables them runs even hotter. Electronic equipment generates plenty of heat; the more heat energy an electronic device dissipates, the more money and energy must be spent to cool it down. These power efficiency issues do not just affect the environment but also the bottom lines of communications companies.
As shown in the table below, the growth of data centers and wireless networks will continue to drive power consumption upwards.
These power constraints are even more pressing in the access network sector. Unlike data centers and the network core, access network equipment lives in uncontrolled environments with limited cooling capabilities. Therefore, every extra watt of pluggable power consumption will impact how vendors and operators design their cabinets and equipment.
These struggles are a major reason why QSFP28 form factor solutions are becoming increasingly attractive in the 100ZR domain. Their power consumption (up to 6 W) is lower than that of QSFP-DD form factors (up to 14 W), which allows them to be stacked more densely in access network equipment rooms. Besides, QSFP28 modules are compatible with existing access network equipment, which often features QSFP28 slots.
Aside from the move to QSFP28 form factors for 100G coherent, EFFECT Photonics also believes in two other ways to reduce power consumption.
- Increased Integration: The interconnections among smaller, highly-integrated optical components consume less power than those among more discrete components. We will discuss this further in the next section.
- Co-Design: As we explained in a previous article about fit-for-platform DSPs, a transceiver optical engine designed on the indium phosphide platform could run at a voltage compatible with the DSP’s signal output. This way, the optimized DSP could drive the PIC directly without needing a separate analog driver, doing away with a significant power conversion overhead.
Can We Still Move Towards Smaller Footprints?
Moving toward smaller pluggable footprints should not necessarily be a goal, but as we mentioned in the previous section, it is a means toward the goal of lower power consumption. Decreasing the size of optical components and their interconnections means that the light inside the chip will travel a smaller distance and accumulate fewer optical losses.
Let’s take tunable lasers as an example. In the last decade, technological progress in tunable laser packaging and integration has matched the need for smaller footprints. In 2011, tunable lasers followed the multi-source agreement (MSA) for integrable tunable laser assemblies (ITLAs). The ITLA package measured around 30.5 mm in width and 74 mm in length. By 2015, tunable lasers were sold in the more compact Micro-ITLA form factor, which cut the original ITLA package size in half. And in 2019, laser developers (see examples here and here) announced a new Nano-ITLA form factor that reduced the size by almost half again.
Reducing the footprint of tunable lasers in the future will need even greater integration of their parts. For example, every tunable laser needs a wavelength locker component that can stabilize the laser’s output regardless of environmental conditions such as temperature. Integrating the wavelength locker component on the laser chip instead of attaching it externally would help reduce the laser package’s footprint and power consumption.
Another potential way to reduce the size of tunable laser packages relates to the control electronics. The current ITLA standards include the complete control electronics on the laser package, including power conversion and temperature control. However, if the transceiver’s main board handles some of these electronic functions instead of the laser package, the size of the laser package can be reduced.
This approach means the reduced laser package would only have full functionality if connected to the main transceiver board. However, some transceiver developers will appreciate the laser package reduction and the extra freedom to provide their own laser control electronics.
Takeaways
The ever-increasing bandwidth demands in access networks force coherent pluggables to face the complex problem of maintaining a good enough performance while moving to lower cost and power consumption.
The move towards 100G coherent solutions in QSFP28 form factors will play a major role in meeting the power requirements of the access network sector. Further gains can be achieved with greater integration of optical components and co-designing the optics and electronic engines of the transceiver to reduce inefficiencies. Further gains in footprint for transceivers can also be obtained by eliminating redundant laser control functions in both the laser package and the main transceiver board.
Tags: 100G Coherent Products, 400ZR Pluggable Coherent Solutions, 5G Networks, 800G Solutions, 800ZR, Affordability, Coherent Transceivers, datacom, Direct Detection, EFFECT Photonics, Internet Traffic, network edge, OSFP Form Factors, Photonics, Pluggable Modules, power consumption, Power Efficiency, QSFP Modules, QSFP-DD, Telecom Operators, Thermal Management

King Willem Alexander gets a taste of the future in Eindhoven via a deep dive into the world of integrated photonics
– InnovationOrigins
Emerging Photonics Ecosystem at the Heart of King’s Visit to Holst Centre and PhotonDelta
Eindhoven, The Netherlands
During King Willem-Alexander’s visit to the High Tech Campus in Eindhoven on April 20, he gained insight into the value chain of integrated photonics in the Netherlands and Europe from various perspectives. This emerging chip technology is accelerating thanks to a €1.1 billion program led by PhotonDelta. With demonstrations at Holst Centre and roundtable discussions at PhotonDelta, the King learned about the essence of the technology, its applications, and its importance for the future of the Netherlands and Europe.
Ewit Roos, CEO of PhotonDelta, is very pleased with the day’s outcome.
The King’s visit is a tremendous honor for the ecosystem. It underlines once again the importance of integrated photonics for our country. We told him about the applications of integrated photonics for innovations in medical diagnostics, autonomous driving, the agricultural sector, and data communications. PhotonDelta and Holst Centre are central to the development of this new value chain. For the King, this was certainly a wonderful opportunity to get the most complete picture possible of the ecosystem’s unique position and all the opportunities that still lie ahead.
Ewit Roos, CEO at PhotonDelta
Photonics is all about the interaction between light (photons) and matter. Through photonics, data can be transmitted at the speed of light, reducing energy consumption and increasing efficiency. Integrated photonic chips, considered a key technology worldwide, enable the development of smaller, faster, and energy-efficient devices. This will lead to innovations that will help solve the major challenges of our time and contribute to a healthy and sustainable future.
National Growth Fund
The Netherlands has recognized the opportunities of integrated photonics through the National Growth Fund. PhotonDelta, the ecosystem of organizations in photonic chip technology, has mobilized public and private investment totaling 1.1 billion euros through this fund to transform the Netherlands into a leader in the next generation of semiconductors.
The investment consists of 470 million euros through the growth fund, with the rest coming from other partners and stakeholders. It is all part of the Dutch government’s plan to strengthen the country’s position as a world leader in integrated photonics.
Demonstrations and Roundtable Discussion
The King first visited Holst Centre, a collaboration between the research centers imec and TNO. Researchers from different disciplines develop new technology for photonic chips and prototypes that companies can use for their production process and new products.
The King was given a tour of Holst Centre’s laboratory and attended demonstrations of the application of photonics in various sectors. Representatives from Signify, EFFECT Photonics, Delta Diagnostics, Quix Quantum, and Lionix explained the different stages in the value chain. During this tour, the main focus was on the applications that help solve society’s critical challenges.
Integrated photonics is one of the pillars of Holst Centre’s strategy. By combining the expertise of imec and TNO, many aspects needed in the development and production process can be offered, such as design, prototyping, testing, and manufacturing. We combine the photonic microchip technology of imec in Belgium, the complementary photonic platforms in the Netherlands, the design expertise of imec in the Netherlands, and the optics and systems integration knowledge of TNO to help develop new sustainable solutions in different industries.
Kathleen Philips, director of imec at Holst Centre
The subsequent visit to PhotonDelta, also at the High Tech Campus, began with a presentation on the ecosystem. In recent years, PhotonDelta has grown this community from 12 to 60 stakeholders who together form a value chain that conceives, develops, and makes photonic chips and solutions based on them. The roundtable discussion covered topics such as industrialization, application technology, and European cooperation. Phix Photonics Assembly, Smart Photonics, and Trumpf Photonics Components addressed the challenges and possible solutions in scaling up the most crucial industrialization processes. Following this talk, Synopsys NL and NXP discussed the challenges around application strategies: creating a library of easy-to-use building blocks for new applications. Finally, the European playing field came into focus through PhotonDelta itself and the Ministry of Economic Affairs. This covered Europe’s ambitions around strategic autonomy and the impact of the European Chips Act. Photonic chips have been designated as a key technology by the European Commission. Security of supply for digital technology is the guiding principle in the Chips Act.
Strategic autonomy does not mean developing a copy of the global semiconductor value chain in Europe, but taking unique European positions in these chains where we can be ‘world-class’. This implies interdependence through quality and capacity rather than taking over one-to-one what is already happening elsewhere. Integrated photonics is one such position where Europe can excel.
Ton van Mol, Director TNO at Holst Centre
About PhotonDelta
PhotonDelta is an ecosystem of organizations that conceive, develop and make solutions based on photonic chips. PhotonDelta supports the system by stimulating collaboration between stakeholders, providing funding, and connecting them to the market.
About Holst Centre
Holst Centre, a collaboration of imec and TNO, brings together expertise in wireless sensor technologies and flexible electronics under one roof. The sharing of specific knowledge in an open structure enables the alignment of research and innovation with societal issues in health and vitality, energy and climate, and mobility and industry 5.0.
# # #
More information:
PhotonDelta
Jorn Smeets, Chief Marketing Officer
M: 06 – 1147 8812
E: jorn@photondelta.com
W: www.photondelta.com
Imec at Holst Centre
Carolien van der Leegte, Communications Manager, imec NL
M: 06 – 1760 4841
E: carolien.vanderleegte@imec.nl
W: www.holstcentre.com
W: www.imec-int.com
TNO at Holst Centre
Sara Joosten, Project Manager Marketing & Communications
M: 06 – 2915 6716
E: sara.joosten@tno.nl
W: www.holstcentre.com
W: www.tno.nl
One Watt Matters
Data centers and 5G networks might be hot commodities, but the infrastructure that enables them runs even hotter. Electronic equipment generates plenty of heat; the more heat energy an electronic device dissipates, the more money and energy must be spent to cool it down.
The Uptime Institute estimates that the average power usage effectiveness (PUE) ratio for data centers in 2022 is 1.55. This implies that for every 1 kWh used to power data center equipment, an extra 0.55 kWh—about 35% of total power consumption—is needed to power auxiliary equipment like lighting and, more importantly, cooling. While the advent of centralized hyperscale data centers will improve energy efficiency in the coming decade, that trend is offset by the construction of many smaller local data centers on the network edge to address the exponential growth of 5G services such as the Internet of Things (IoT).
These opposing trends are one of the reasons why the Uptime Institute has only observed a marginal improvement of 10% in the average data center PUE since 2014 (which was 1.7 back then). Such a slow improvement in average data center power efficiency cannot compensate for the fast growth of new edge data centers.
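A PUE figure translates directly into auxiliary power overhead. The short sketch below (the function is our own illustration of the arithmetic, not an Uptime Institute tool) shows how the 2022 average of 1.55 implies the roughly 35% overhead mentioned above:

```python
def auxiliary_energy_kwh(it_energy_kwh: float, pue: float) -> float:
    """Auxiliary energy (cooling, lighting, power distribution) implied by a
    PUE ratio. PUE = total facility energy / IT equipment energy, so the
    overhead is (PUE - 1) times the IT energy."""
    return it_energy_kwh * (pue - 1.0)

# Uptime Institute's 2022 average PUE of 1.55: each 1 kWh of IT load
# carries about 0.55 kWh of overhead, i.e. ~35% of total facility energy.
overhead_kwh = auxiliary_energy_kwh(1.0, 1.55)
overhead_share = overhead_kwh / (1.0 + overhead_kwh)
```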
For all the bad reputation data centers receive for their energy consumption, though, wireless transmission generates even more heat than wired links. While 5G standards are more energy-efficient per bit than 4G, Huawei expects that the maximum power consumption of one of their 5G base stations will be 68% higher than their 4G stations. To make things worse, the use of higher frequency spectrum bands and new IoT use cases require the deployment of more base stations too.
Prof. Earl McCune from TU Delft estimates that nine out of ten watts of electrical power in 5G systems turn into heat. This Huawei study also predicts that the energy consumption of wireless access networks will increase even more quickly than data centers in the next ten years—more than quadrupling between 2020 and 2030.
These power efficiency issues do not just affect the environment but also the bottom lines of communications companies. In such a scenario, saving even one watt of power per pluggable transceiver could quickly multiply and scale up into a massive improvement on the sustainability and profitability of telecom and datacom providers.
How One Watt of Savings Scales Up
Let’s discuss an example to show how a seemingly small improvement of one Watt in pluggable transceiver power consumption can quickly scale up into major energy savings.
A 2020 paper from Microsoft Research estimates that a metropolitan region of 10 data centers, with 16 fiber pairs each and 100 GHz DWDM channel spacing per fiber, needs to host 12,800 transceivers in its regional interconnect network. This figure could grow by roughly a third in the coming years, since the 400ZR transceiver ecosystem supports a denser 75 GHz DWDM grid, bringing the total to around 17,000 transceivers. Saving a watt of power in each transceiver would therefore yield 17 kW in savings.
The power savings don’t end there, however. The transceiver is powered by the server, which is in turn powered by its power supply and, ultimately, the national electricity grid. On average, 2.5 watts must be supplied from the grid for every watt the transceiver uses. Applying that 2.5 factor, the 17 kW saved at the transceivers becomes 42.5 kW drawn from the grid, which adds up to 372 MWh of energy savings over a year. According to the US Environmental Protection Agency (EPA), this level of savings in a single metro data center network is equivalent to 264 metric tons of carbon dioxide emissions, which in turn corresponds to consuming 610 barrels of oil and could power 33 American homes for a year.
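The scaling argument above can be reproduced as a quick back-of-the-envelope calculation. The function below is our own illustrative sketch of the article’s numbers; the 2.5 grid factor and the 17,000-transceiver count come from the example in the text:

```python
def regional_savings(transceivers: int,
                     watts_saved_per_module: float = 1.0,
                     grid_factor: float = 2.5) -> dict:
    """Scale a per-module power saving up to grid-level annual savings.

    grid_factor: watts drawn from the national grid per watt consumed
    by the transceiver (power supply and distribution losses)."""
    device_kw = transceivers * watts_saved_per_module / 1000.0
    grid_kw = device_kw * grid_factor
    annual_mwh = grid_kw * 8760 / 1000.0  # 8760 hours in a year
    return {"device_kw": device_kw, "grid_kw": grid_kw, "annual_mwh": annual_mwh}

# The article's metro example: 17,000 transceivers, 1 W saved in each.
result = regional_savings(17_000)
```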
Saving Power through Integration and Co-Design
Before 2020, Apple built its computer processors from discrete components. In other words, electronic components were manufactured on separate chips, which were then assembled into a single package. However, the interconnections between the chips produced losses and incompatibilities that made the devices less energy efficient. Starting with the M1 processor in 2020, Apple fully integrated all components on a single chip, avoiding these losses and incompatibilities. As shown in the table below, this electronic system-on-chip (SoC) consumes a third of the power of the discrete-component processors used in previous generations of Mac computers.
| Mac Mini Model | Idle Power (Watts) | Max Power (Watts) |
| --- | --- | --- |
| 2023, M2 | 7 | 50 |
| 2020, M1 | 7 | 39 |
| 2018, Core i7 | 20 | 122 |
| 2014, Core i5 | 6 | 85 |
| 2010, Core 2 Duo | 10 | 85 |
| 2006, Core Solo or Duo | 23 | 110 |
| 2005, PowerPC G4 | 32 | 85 |
The photonics industry would benefit from a similar goal: implementing a photonic system-on-chip. Integrating all the optical components (lasers, detectors, modulators, etc.) on a single chip can minimize the losses and make devices such as optical transceivers more efficient. This approach doesn’t just optimize the efficiency of the devices themselves but also that of the resource-hungry chip manufacturing process. For example, a system-on-chip approach enables earlier optical testing on the semiconductor wafer and dies. By testing the dies and wafers directly before packaging, manufacturers need only discard the bad dies rather than the whole package, which saves valuable energy and materials. You can read our previous article on the subject to learn more about the energy efficiency benefits of system-on-chip integration.
Another way of improving power consumption in photonic devices is co-designing their optical and electronic systems. A co-design approach helps identify in greater detail the trade-offs between various parameters in the optics and electronics, optimizing their fit with each other and ultimately improving the overall power efficiency of the device. In the case of coherent optical transceivers, an electronic digital signal processor specifically optimized to drive an indium-phosphide optical engine directly could lead to power savings.
When Sustainability is Profitable
System-on-chip (SoC) approaches might reduce not only the footprint and energy consumption of photonic devices but also their cost. The economics of scale principles that rule the electronic semiconductor industry can also reduce the cost of photonic systems-on-chip. After all, SoCs minimize the footprint of photonic devices, allowing photonics developers to fit more of them within a single wafer, which decreases the price of each photonic system. As the graphic below shows, the more chips and wafers are produced, the lower the cost per chip becomes.
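The economies-of-scale argument can be made concrete with a toy cost model. All the numbers below are hypothetical placeholders, not real wafer, yield, or pricing data; the point is only that shrinking the die multiplies the good dies per wafer and divides the cost per chip:

```python
def cost_per_good_die(wafer_cost: float, wafer_area_mm2: float,
                      die_area_mm2: float, yield_fraction: float) -> float:
    """Simplified wafer economics: smaller dies mean more dies per wafer,
    hence a lower cost per good die. Ignores edge losses and dicing waste."""
    dies_per_wafer = wafer_area_mm2 / die_area_mm2
    good_dies = dies_per_wafer * yield_fraction
    return wafer_cost / good_dies

# Hypothetical numbers: a 5,000-unit wafer cost, ~70,000 mm^2 usable area,
# 80% yield, comparing a 25 mm^2 die against a 100 mm^2 die.
small_die_cost = cost_per_good_die(5000, 70000, 25, 0.8)
large_die_cost = cost_per_good_die(5000, 70000, 100, 0.8)
```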
Integrating all optical components—including the laser—on a single chip shifts the complexity from the expensive assembly and packaging process to the more affordable and scalable semiconductor wafer process. For example, it’s much easier to combine optical components on a wafer at a high volume than to align components from different chips together in the assembly process. This shift to wafer processes also helps drive down the cost of the device.
Takeaways
With data and energy demands rising yearly, telecom and datacom providers are constantly finding ways to reduce their power and cost per transmitted bit. As we showed earlier in this article, even one watt of power saved in an optical transceiver can snowball into major savings that providers and the environment can profit from. These improvements in the power consumption of optical transceivers can be achieved by deepening the integration of optical components and co-designing them with electronics. Highly compact and integrated optical systems can also be manufactured at greater scale and efficiency, reducing their financial and environmental costs. These details help paint a bigger picture for providers: sustainability now goes hand-in-hand with profitability.
Tags: 5G, data centers, EFFECT Photonics, efficiency, energy consumption, Photonics, Sustainability, Transceivers
EFFECT Photonics Adds Executives
– Lightwave
When will the Network Edge go Coherent?
Article first published 27 July 2022, updated 12 April 2023.
Network carriers want to provide communications solutions in all areas: mobile access, cable networks, and fixed access to business customers. They want to provide extra capacity as well as innovative, personalized connectivity and entertainment services to their customers.
Deploying only legacy direct detect technologies will not be enough to cover these growing bandwidth and service demands of mobile, cable, and business access networks with the required reach. In several cases, networks must deploy more 100G coherent dense wavelength division multiplexing (DWDM) technology to transmit more information over long distances. Several applications in the optical network edge could benefit from upgrading from 10G DWDM or 100G grey aggregation uplinks to 100G DWDM optics:
- Mobile Mid-haul benefits from seamlessly upgrading existing uplinks from 10G to 100G DWDM.
- Mobile Backhaul benefits from upgrading their links to 100G IPoDWDM.
- Cable Access links could upgrade the uplinks of existing termination devices such as optical line terminals (OLTs) and Converged Cable Access Platforms (CCAPs) from 10G to 100G DWDM.
- Business Services could scale enterprise bandwidth beyond single-channel 100G grey links.
However, network providers have often stuck to their 10G DWDM or 100G grey links because the existing 100G DWDM solutions could not check all the required boxes. “Scaled-down” coherent 400ZR solutions had the required reach and tunability but were too expensive and power-hungry for many access network applications. Besides, ports in small to medium IP routers used in most edge deployments often do not support the QSFP-DD form factor commonly used in 400ZR modules but the QSFP28 form factor.
Fortunately, the rise of 100ZR solutions in the QSFP28 form factor is changing the landscape for access networks. “The access market needs a simple, pluggable, low-cost upgrade to the 10G DWDM optics that it has been using for years. 100ZR is that upgrade,” said Scott Wilkinson, Lead Analyst for Optical Components at market research firm Cignal AI. “As access networks migrate from 1G solutions to 10G solutions, 100ZR will be a critical enabling technology.”
In this article, we will discuss how the recent advances in 100ZR solutions will enable the evolution of different segments of the network edge: mobile midhaul and backhaul, business services, and cable.
How Coherent 100G Can Move into Mobile X-haul
The upgrade from 4G to 5G has shifted the radio access network (RAN) from a two-level structure with backhaul and fronthaul in 4G to a three-level structure with back-, mid-, and fronthaul:
- Fronthaul is the segment between the active antenna unit (AAU) and the distributed unit (DU)
- Midhaul is the segment from DU to the centralized unit (CU)
- Backhaul is the segment from CU to the core network.
The initial rollout of 5G has already happened in most developed countries, with many operators upgrading their 1G SFP transceivers to 10G SFP+ devices. Some of these 10G solutions had DWDM technology; many were single-channel grey transceivers. However, mobile networks must move to the next phase of 5G deployments, which requires installing and aggregating more and smaller base stations to exponentially increase the number of devices connected to the network.
These mature phases of 5G deployment will require operators to continue scaling fiber capacity cost-effectively with more widespread 10G DWDM SFP+ solutions and 25G SFP28 transceivers. These upgrades will put greater pressure on the aggregation segments of mobile backhaul and midhaul. These network segments commonly use link aggregation of multiple 10G DWDM links into a higher-bandwidth group (such as 4x10G). However, link aggregation requires splitting up larger traffic streams and can be complex to integrate across an access ring. A single 100G uplink would reduce the need for such link aggregation and simplify the network setup and operations. If you want to know more about the potential market and reach of this link aggregation upgrade, we recommend reading the recent Cignal AI report on 100ZR technologies.
Cable Migration to 10G PON Will Drive the Use of Coherent 100G Uplinks
According to Cignal AI’s 100ZR report, the biggest driver of 100ZR use will come from multiplexing fixed access network links upgrading from 1G to 10G. This trend will be reflected in cable networks’ long-awaited migration from Gigabit Passive Optical Networks (GPON) to 10G PON. This evolution is primarily guided by the new DOCSIS 4.0 standard, which promises 10Gbps download speeds for customers and will require several hardware upgrades in cable networks.
To multiplex these new larger 10Gbps customer links, cable providers and network operators need to upgrade their optical line terminals (OLTs) and Converged Cable Access Platforms (CCAPs) from 10G to 100G DWDM uplinks. Many of these new optical hubs will support up to 40 or 80 optical distribution networks (ODNs), too, so the previous approach of aggregating multiple 10G DWDM uplinks will not be enough to handle this increased capacity and higher number of channels.
Anticipating such needs, the non-profit R&D organization CableLabs has recently pushed to develop a 100G Coherent PON (C-PON) standard. Their proposal offers 100 Gbps per wavelength at a maximum reach of 80 km and up to a 1:512 split ratio. CableLabs anticipates C-PON and its 100G capabilities will play a significant role not just in cable optical network aggregation but in other use cases such as mobile x-haul, fiber-to-the-building (FTTB), long-reach rural scenarios, and distributed access networks.
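As a rough sanity check on what C-PON offers per subscriber, the sketch below divides the shared wavelength capacity by the split ratio. This is our own back-of-the-envelope calculation: it assumes an even share across endpoints and ignores protocol overhead:

```python
def cpon_bandwidth_per_endpoint(wavelength_gbps: float = 100.0,
                                split_ratio: int = 512) -> float:
    """Shared bandwidth per subscriber (in Mbps) if a single C-PON
    wavelength is split evenly across all endpoints."""
    return wavelength_gbps * 1000.0 / split_ratio

# CableLabs' proposed maximums: 100 Gbps per wavelength, 1:512 split.
per_endpoint_mbps = cpon_bandwidth_per_endpoint()
```

Even at the maximum 1:512 split, each endpoint still sees well over the 100 Mbps that many residential plans offer today, which is part of why a single coherent wavelength can aggregate so many subscribers.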
Towards 100G Coherent and QSFP28 in Business Services
Almost every organization uses the cloud in some capacity, whether for development and test resources or software-as-a-service applications. While the cost and flexibility of the cloud are compelling, its use requires fast, high-bandwidth wide-area connectivity to make cloud-based applications work as they should.
Similarly to cable networks, these needs will require enterprises to upgrade their existing 1G Ethernet private lines to 10G Ethernet, which will also drive a greater need for 100G coherent uplinks. Cable providers and operators will also want to take advantage of their upgraded 10G PON networks and expand the reach and capacity of their business services.
The business and enterprise services sector was the earliest adopter of 100G coherent uplinks, deploying “scaled-down” 400ZR transceivers in the QSFP-DD form factor since they were the only solution available at the time. However, these QSFP-DD slots also support the QSFP28 form factor, so the rise of QSFP28 100ZR solutions will offer these enterprise applications a more attractive upgrade with lower cost and power consumption. QSFP28 solutions had previously struggled to become widespread because they required the development of new, low-power digital signal processors (DSPs). DSP developers and vendors are now keenly jumping on board the 100ZR train, with Acacia, Coherent/ADVA, Marvell/InnoLight, and Marvell/OE Solutions all announcing development projects. This is also why EFFECT Photonics has announced its plans to co-develop a 100G DSP with Credo Semiconductor that best fits 100ZR solutions in the QSFP28 form factor.
Takeaways
In the coming years, 100G coherent uplinks will become increasingly widespread in deployments and applications throughout the network edge. Some mobile access networks use cases must upgrade their existing 10G DWDM link aggregation into a single coherent 100G DWDM uplink. Meanwhile, cable networks and business services are upgrading their customer links from 1Gbps to 10Gbps, and this migration will be the major factor that will increase the demand for coherent 100G uplinks. For carriers who provide converged cable/mobile access, these upgrades to 100G uplinks will enable opportunities to overlay more business services and mobile traffic into their existing cable networks.
As the QSFP28 100ZR ecosystem expands, production will scale up, and these solutions will become more widespread and affordable, opening up even more use cases in access networks.
Tags: 5G, access, aggregation, backhaul, capacity, DWDM, fronthaul, Integrated Photonics, LightCounting, metro, midhaul, mobile, mobile access, network, optical networking, optical technology, photonic integrated chip, photonic integration, Photonics, PIC, PON, programmable photonic system-on-chip, solutions, technology
EFFECT Photonics Extends Leadership Team
– Seasoned industry executives appointed as Senior Vice President of Product Development, Vice President of Operations and Global Head of Human Resources
Eindhoven, The Netherlands
EFFECT Photonics, a leading developer of highly integrated optical solutions, today announced the extension of its executive leadership team as it positions itself for continued rapid growth and long-term success.
Roberto Marcoccia, CEO, EFFECT Photonics“EFFECT Photonics is at a pivotal stage of growth. To capitalize on our unique market opportunity and to achieve our goal of leveraging our integrated optical technologies to deliver solutions that disrupt the status quo, it was time to expand our leadership team to achieve this mission. We are thrilled and excited to have these new leaders on board and the experience they bring in order to help us chart the course for the opportunity ahead.”
Leading EFFECT Photonics product development from concept to production, Dr. Ted Schmidt will serve as the company’s Senior Vice President of Product Development. An accomplished technical leader and technology pathfinder, Ted has over 20 years of fiber-optic transceiver development experience, delivering advanced optical technologies for industry leaders such as Lumentum, Juniper Networks, OpNext, and Stratalight Communications. With a PhD in Physics, he is an inventor on 45 US patents and has authored numerous books, papers, and articles in the field of optical communications.
To oversee EFFECT Photonics’ new operational structure including management of its supply chain and to work with its microelectronics ecosystem industry partners, Tony Englese joins the company as Vice President of Operations. Tony is an expert manufacturing and operations professional with a wide range of experience at companies such as Juniper Networks, Aurrion and Xsigo Systems (acquired by Oracle).
As EFFECT Photonics’ Global Head of Human Resources, Veronique Gremmen-de Groot is responsible for supporting the company’s growth and evolution and for managing all people-related activities. Veronique has an extensive background in Human Resources and organizational development across various industries, having worked for global companies such as KPN, Office Depot, TenneT and Conclusion.
About EFFECT Photonics
Where Light Meets Digital – EFFECT Photonics is a highly vertically integrated, independent optical systems company addressing the need for high-performance, affordable optic solutions driven by the ever-increasing demand for bandwidth and faster data transfer capabilities. Using our company’s field-proven digital signal processing and forward error correction technology and ultra-pure light sources, we offer compact form factors with seamless integration, cost efficiency, low power, and security of supply. By leveraging established microelectronics ecosystems, we aim to make our products affordable and available in high volumes to address the challenges in 5G and beyond, access-ready coherent solutions, and cloud and cloud edge services. For more information, please visit: www.effectphotonics.com. Follow EFFECT Photonics on LinkedIn and Twitter.
# # #
Media Contact:
Colleen Cronin
EFFECT Photonics
colleencronin@effectphotonics.com
Delta Dialogues – April 2023 – Looking back at OFC San Diego ’23
– PhotonDelta
Enabling a 100ZR Ecosystem
Given the success of 400ZR pluggable coherent solutions in the market, discussions in the telecom sector about a future beyond 400G pluggables have often focused on 800G solutions and 800ZR. However, there is also increasing excitement about “downscaling” to 100G coherent products for applications in the network edge. The industry is labeling these pluggables as 100ZR.
A recently released Heavy Reading survey revealed that over 75% of operators surveyed believe that 100G coherent pluggable optics will be used extensively in their edge and access evolution strategy. In response to this interest from operators, several vendors are keenly jumping on board the 100ZR train by announcing their development projects: Acacia, Coherent/ADVA, Marvell/InnoLight, and Marvell/OE Solutions.
This growing interest and use cases for 100ZR are also changing how industry analysts view the potential of the 100ZR market. Last February, Cignal AI released a report on 100ZR which stated that the viability of new low-power solutions in the QSFP28 form factor enabled use cases in access networks, thus doubling the size of their 100ZR shipment forecasts.
“The access market needs a simple, pluggable, low-cost upgrade to the 10G DWDM optics that it has been using for years. 100ZR is that upgrade. As access networks migrate from 1G solutions to 10G solutions, 100ZR will be a critical enabling technology.”
Scott Wilkinson, Lead Analyst for Optical Components at Cignal AI.
The 100ZR market can expand even further, however. Access networks are heavily price-conscious, and the lower prices of 100ZR pluggables become, the more widely they will be adopted. Reaching such a goal requires a vibrant 100ZR ecosystem with multiple suppliers that can provide lasers, digital signal processors (DSPs), and full transceiver solutions that address the access market’s needs and price targets.
The Constraints of Power in the Access
Initially, 100G coherent solutions were focused on the QSFP-DD form factor that was popularized by 400ZR solutions. However, power consumption has prevented these QSFP-DD solutions from becoming more viable in the access network domain.
Unlike data centers and the network core, access network equipment lives in uncontrolled environments with limited cooling capabilities. Therefore, every extra watt of pluggable power consumption will impact how vendors and operators design their cabinets and equipment. QSFP-DD modules forced operators and equipment vendors to use larger cooling components (heatsinks and fans), meaning that each module would need more space to cool appropriately. The increased need for cabinet real estate makes these modules more costly to deploy in the access domain.
These struggles are a major reason why QSFP28 form factor solutions are becoming increasingly attractive in the 100ZR domain. Their power consumption (up to 6 watts) is much lower than that of QSFP-DD modules (up to 14 watts), which allows them to be stacked more densely in access network equipment rooms. Besides, QSFP28 modules are compatible with existing access network equipment, which often features QSFP28 slots.
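The density argument can be illustrated with a toy calculation. The 100-watt cabinet budget below is a hypothetical number chosen for the example, not an industry figure; only the 6 W and 14 W per-module ceilings come from the text above:

```python
def modules_within_power_budget(budget_watts: float, module_watts: float) -> int:
    """How many pluggables fit inside a fixed thermal/power budget."""
    return int(budget_watts // module_watts)

# Hypothetical 100 W budget for a passively cooled access cabinet slot:
qsfp28_count = modules_within_power_budget(100, 6)    # up to 6 W each
qsfp_dd_count = modules_within_power_budget(100, 14)  # up to 14 W each
```

Under the same budget, the lower-power QSFP28 modules pack in more than twice as densely, which is the practical meaning of "stacked more densely" in uncontrolled access environments.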
Ecosystems to Overcome the Laser and DSP Bottlenecks
Even though QSFP28 modules are better at addressing the power concerns of the access domain, some obstacles prevent their wider availability.
Since QSFP28 pluggables have a lower power consumption and slightly smaller footprint requirements, they also need new laser and DSP solutions. The industry cannot simply incorporate the same lasers and DSPs used for 400ZR devices. This is why EFFECT Photonics has announced its plans to develop a pico tunable laser assembly (pTLA) and co-develop a 100G DSP that will best fit 100ZR solutions in the QSFP28 form factor.
However, a 100ZR industry with only one or two laser and DSP suppliers will struggle to scale up and make these solutions more widely accessible. The 400ZR market provides a good example of the benefits of a vibrant ecosystem. Four vendors are currently shipping DSPs for 400ZR solutions, and even more companies have announced plans to develop DSPs. This larger vendor ecosystem will help 400ZR production scale up in volume and satisfy a rapidly growing market.
While the 100ZR market is smaller than the 400ZR one, its ecosystem must follow its example and expand to enable new use cases and further increase the market size.
Standards and Interoperability Make 100ZR More Widespread
Another reason 400ZR solutions became so widespread is their standardization and interoperability. Previously, the 400G space was more fragmented, and pluggables from different vendors could not operate with each other, forcing operators to use a single vendor for their entire network deployment.
Eventually, datacom and telecom providers approached their suppliers and the Optical Internetworking Forum (OIF) about the need to develop an interoperable 400G coherent solution that addressed their needs. These discussions and technology development led the OIF to publish the 400ZR implementation agreement in 2020. This standardization and interoperability effort enabled the explosive growth of the 400G market.
100ZR solutions must follow a similar path to reach a larger market. If telecom and datacom operators want more widespread and affordable 100ZR solutions, more of them will have to join the push for 100ZR standardization and interoperability. This includes standards not just for the power consumption and line interfaces but also for management and control interfaces, enabling more widespread use of remote provisioning and diagnostics. These efforts will make 100ZR devices easier to implement across access networks.
Takeaways
The demand from access network operators for 100ZR solutions is there, but it has yet to fully materialize in the industry forecasts because, right now, there is not enough supply of viable 100ZR solutions that can address their targets. So in a way, further growth of the 100ZR market is a self-fulfilling prophecy: the more suppliers and operators support 100ZR, the easier it is to scale up the supply and meet the price and power targets of access networks, expanding the potential market. Instead of one or two vendors fighting for control of a smaller 100ZR pie, having multiple vendors and standardization efforts will increase the supply, significantly increasing the size of the pie and benefiting everyone’s bottom line.
Therefore, EFFECT Photonics believes in the vision of a 100ZR ecosystem where multiple vendors can provide affordable laser, DSP, and complete transceiver solutions tailored to network edge use cases. Meanwhile, if network operators push towards greater standardization and interoperability, 100ZR solutions can become even more widespread and easy to use.
Tags: 100ZR, access networks, DSP, ecosystem, edge, laser, market, price, solutions
What DSPs Does the Cloud Edge Need?
By storing and processing data closer to the end user and reducing latency, smaller data centers on the network edge significantly impact how networks are designed and implemented. These benefits are causing the global market for edge data centers to explode, with PwC predicting that it will more than triple from $4 billion in 2017 to $13.5 billion in 2024. Various trends are driving the rise of the edge cloud: 5G networks and the Internet of Things (IoT), augmented and virtual reality applications, network function virtualization, and content delivery networks.
Several of these applications require lower latencies than before, and centralized cloud computing cannot deliver those data packets quickly enough. As shown in Table 1, a data center on a town or suburb aggregation point could halve the latency compared to a centralized hyperscale data center. Enterprises with their own data center on-premises can reduce latencies by 12 to 30 times compared to hyperscale data centers.
| Type of Edge | Datacenter | Location | Number of DCs per 10M people | Average Latency | Size |
|---|---|---|---|---|---|
| On-premises edge | Enterprise site | Businesses | NA | 2-5 ms | 1 rack max |
| Network (Mobile) tower edge | Tower | Nationwide | 3000 | 10 ms | 2 racks max |
| Outer edge | Aggregation points | Town | 150 | 30 ms | 2-6 racks |
| Inner edge | Core | Major city | 10 | 40 ms | 10+ racks |
| Regional edge | Regional | Major city | 100 | 50 ms | 100+ racks |
| Not edge | Hyperscale | State/national | 1 | 60+ ms | 5000+ racks |
This situation leads to hyperscale data center providers cooperating with telecom operators to install their servers in the existing carrier infrastructure. For example, Amazon Web Services (AWS) is implementing edge technology in carrier networks and company premises (e.g., AWS Wavelength, AWS Outposts). Google and Microsoft have strategies and products that are very similar. In this context, edge computing poses a few problems for telecom providers. They must manage hundreds or thousands of new nodes that will be hard to control and maintain.
These conditions mean that optical transceivers for these networks, and thus their digital signal processors (DSPs), must have flexible, low power consumption and smart features that allow them to adapt to different network conditions.
Using Adaptable Power Settings
Reducing power consumption in the cloud edge is not just about reducing the maximum power consumption of transceivers. Transceivers and DSPs must also be smart and decide whether to operate on low- or high-power mode depending on the optical link budget and fiber length. For example, if the transceiver must operate at its maximum capacity, a programmable interface can be controlled remotely to set the amplifiers at maximum power. However, if the operator uses the transceiver for just half of the maximum capacity, the transceiver can operate with lower power on the amplifiers. The transceiver uses energy more efficiently and sustainably by adapting to these circumstances.
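As a loose illustration of this decision logic, the sketch below shows how a transceiver might pick an amplifier power mode from its utilization and link budget. All function names, thresholds, and mode labels here are hypothetical assumptions for illustration, not an EFFECT Photonics API:

```python
# Hypothetical sketch of adaptive power-mode selection in a smart transceiver.
# Thresholds and mode names are illustrative assumptions, not a real API.

def select_amplifier_mode(used_capacity_gbps: float,
                          max_capacity_gbps: float,
                          link_budget_db: float) -> str:
    """Pick an amplifier power mode from utilization and link budget."""
    utilization = used_capacity_gbps / max_capacity_gbps
    # Light load plus a generous link budget lets the amplifiers idle down.
    if utilization <= 0.5 and link_budget_db >= 6.0:
        return "low-power"
    if utilization <= 0.8:
        return "medium-power"
    return "max-power"

# Example: a 400G-capable module carrying only 200G over a healthy link
print(select_amplifier_mode(200, 400, 8.0))  # low-power
```

In practice this choice would be exposed through a programmable interface so a remote operator (or management layer) can set it, as the paragraph above describes.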
Fiber monitoring is also an essential variable in this equation. A smart DSP could change its modulation scheme or lower the power of its semiconductor optical amplifier (SOA) if telemetry data indicates a good quality fiber. Conversely, if the fiber quality is poor, the transceiver can transmit with a more limited modulation scheme or higher power to reduce bit errors. If the smart pluggable detects that the fiber length is relatively short, the laser transmitter power or the DSP power consumption could be scaled down to save energy.
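The same idea can be sketched in code: map fiber telemetry to modulation and power settings. The quality metric, thresholds, and setting names below are purely illustrative assumptions:

```python
# Hypothetical sketch: choosing modulation and SOA power from fiber telemetry.
# The quality metric, thresholds, and setting values are illustrative only.

def tune_link(fiber_quality: float, fiber_length_km: float) -> dict:
    """Map telemetry (quality in [0, 1], length in km) to link settings."""
    if fiber_quality >= 0.8:
        # Good fiber: denser modulation, and the SOA can back off.
        settings = {"modulation": "16QAM", "soa_power": "low"}
    else:
        # Poor fiber: simpler modulation and more power to limit bit errors.
        settings = {"modulation": "QPSK", "soa_power": "high"}
    if fiber_length_km < 20:
        # Short link: transmitter power can be scaled down to save energy.
        settings["tx_power"] = "reduced"
    return settings

print(tune_link(0.9, 10))
# {'modulation': '16QAM', 'soa_power': 'low', 'tx_power': 'reduced'}
```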
The Importance of a Co-Design Philosophy for DSPs
Transceiver developers often source their DSP, laser, and optical engine from different suppliers, so all these chips are designed separately. This setup reduces the time to market and simplifies the research and design processes, but it comes with performance and power consumption trade-offs.
In such cases, the DSP is like a Swiss army knife: a jack of all trades designed for different kinds of PICs but a master of none. Given the ever-increasing demand for capacity and the need for sustainability as both a financial and social responsibility, transceiver developers increasingly need a steak knife rather than a Swiss army knife.
As we explained in a previous article about fit-for-platform DSPs, a transceiver optical engine designed on the indium phosphide platform could be designed to run at a voltage compatible with the DSP’s signal output. This way, the optimized DSP could drive the PIC directly without needing a separate analog driver, doing away with a significant power conversion overhead compared to a silicon photonics setup, as shown in the figure below.
Scaling the Edge Cloud with Automation
With the rise of edge data centers, telecom providers must manage hundreds or thousands of new nodes that will be difficult to control and maintain. Furthermore, providers also need a flexible network with pay-as-you-go scalability that can handle future capacity needs. Automation is vital to achieving such flexibility and scalability.
Automation potential improves further by combining artificial intelligence with the software-defined networks (SDNs) framework that virtualizes and centralizes network functions. This creates an automated and centralized management layer that can allocate resources efficiently and dynamically. For example, the AI network controller can take telemetry data from the whole network to decide where to route traffic and adjust power levels, reducing power consumption.
In this context, smart digital signal processors (DSPs) and transceivers can give the AI controller more degrees of freedom to optimize the network. They could provide more telemetry to the AI controller so that it makes better decisions. The AI management layer can then remotely control programmable interfaces in the transceiver and DSP so that the optical links can adjust to varying network conditions. If you want to know more about these topics, you can read last week’s article about transceivers in the age of AI.
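A minimal sketch of such a centralized control loop might look like the following: the controller gathers telemetry from every node, then returns per-node commands. Node names, telemetry fields, thresholds, and command strings are all hypothetical:

```python
# Hypothetical sketch of a centralized AI/SDN control loop: gather telemetry
# from every node, then push adjusted settings back. All names and thresholds
# are illustrative assumptions, not a real controller API.

def control_cycle(nodes: dict) -> dict:
    """One pass: read telemetry from all nodes, return per-node commands."""
    commands = {}
    for name, telemetry in nodes.items():
        if telemetry["utilization"] < 0.4:
            # Lightly loaded link: save energy.
            commands[name] = "reduce-amplifier-power"
        elif telemetry["bit_error_rate"] > 1e-3:
            # Degraded link: trade capacity for robustness.
            commands[name] = "switch-to-robust-modulation"
        else:
            commands[name] = "no-change"
    return commands

network = {
    "edge-dc-1": {"utilization": 0.3, "bit_error_rate": 1e-6},
    "edge-dc-2": {"utilization": 0.9, "bit_error_rate": 5e-3},
}
print(control_cycle(network))
```

A real controller would of course use learned models rather than fixed thresholds, but the flow of telemetry in and commands out is the same.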
Takeaways
Cloud-native applications require edge data centers that handle increased traffic and lower network latency. However, their implementation came with the challenges of more data center interconnects and a massive increase in nodes to manage. Scaling edge data center networks will require greater automation and more flexible power management, and smarter DSPs and transceivers will be vital to enable these goals.
Co-design approaches can optimize the interfacing of the DSP with the optical engine, making the transceiver more power efficient. Further power consumption gains can also be achieved with smarter DSPs and transceivers that provide telemetry data to centralized AI controllers. These smart network components can then adjust their power output based on the decisions and instructions of the AI controller.
Tags: 5G, AI, artificial intelligence, Augmented Reality, automation, cloud edge, Co-Design Philosophy, data centers, Delivery Networks, DSPs, Edge Cloud, Fiber Monitoring, Hyperscale Data Center, Internet of Things, IoT, latency, Network Function Virtualization, optical transceivers, power consumption, Software-Defined Networks, Telecom Operators, Virtual Reality

Roberto Marcoccia – Interview at OFC 2023
– LightwaveOnline
Transceivers in the Age of AI
Artificial intelligence (AI) will have a significant role in making optical networks more scalable, affordable, and sustainable. It can gather information from devices across the optical network to identify patterns and make decisions independently without human input. By synergizing with other technologies, such as network function virtualization (NFV), AI can become a centralized management and orchestration network layer. Such a setup can fully automate network provisioning, diagnostics, and management, as shown in the diagram below.
However, artificial intelligence and machine learning algorithms are data-hungry. To work optimally, they need information from all network layers and ever-faster data centers to process it quickly. Pluggable optical transceivers thus need to become smarter, relaying more information back to the AI central unit, and faster, enabling increased AI processing.
Faster Transceivers for the Age of AI
Optical transceivers are crucial in developing better AI systems by facilitating the rapid, reliable data transmission these systems need to do their jobs. High-speed, high-bandwidth connections are essential to interconnect data centers and supercomputers that host AI systems and allow them to analyze a massive volume of data.
In addition, optical transceivers are essential for facilitating the development of artificial intelligence-based edge computing, which entails relocating compute resources to the network’s periphery. This is essential for facilitating the quick processing of data from Internet-of-Things (IoT) devices like sensors and cameras, which helps minimize latency and increase reaction times.
400 Gbps links are becoming the standard across data center interconnects, but providers are already considering the next steps. LightCounting forecasts significant growth in the shipments of dense-wavelength division multiplexing (DWDM) ports with data rates of 600G, 800G, and beyond in the next five years. We discuss these solutions in greater detail in our article about the roadmap to 800G and beyond.
Coherent Modules Need to Provide More Telemetry Data
Mobile networks now and in the future will consist of a massive number of devices, software applications, and technologies. Self-managed, zero-touch automated networks will be required to handle all these new devices and use cases. Realizing this full network automation requires two vital components.
- Artificial intelligence and machine learning algorithms for comprehensive network automation: For instance, AI in network management can drastically cut the energy usage of future telecom networks.
- Sensor and control data flow across all network model layers, including the physical layer: As networks grow in size and complexity, the management and orchestration (MANO) software needs more degrees of freedom and dials to turn.
These goals require smart optical equipment and components that provide comprehensive telemetry data about their status and the fiber they are connected to. The AI-controlled centralized management and orchestration layer can then use this data for remote management and diagnostics. We discuss this topic further in our previous article on remote provisioning, diagnostics, and management.
For example, a smart optical transceiver that fits this centralized AI-management model should relay data to the AI controller about fiber conditions. Such monitoring is not just limited to finding major faults or cuts in the fiber but also smaller degradations or delays in the fiber that stem from age, increased stress in the link due to increased traffic, and nonlinear optical effects. A transceiver that could relay all this data allows the AI controller to make better decisions about how to route traffic through the network.
A Smart Transceiver to Rule All Network Links
After relaying data to the AI management system, a smart pluggable transceiver must also switch parameters to adapt to different use cases and instructions given by the controller.
Let’s look at an example of forward error correction (FEC). FEC makes the coherent link much more tolerant to noise than a direct detect system and enables much longer reach and higher capacity. In other words, FEC algorithms allow the DSP to enhance the link performance without changing the hardware. This enhancement is analogous to imaging cameras: image processing algorithms allow the lenses inside your phone camera to produce a higher-quality image.
A smart transceiver and DSP could switch among different FEC algorithms to adapt to network performance and use cases. Let’s look at the case of upgrading a long metro link of 650km running at 100 Gbps with open FEC. The operator needs to increase that link capacity to 400 Gbps, but open FEC could struggle to provide the necessary link performance. However, if the transceiver can be remotely reconfigured to use a proprietary FEC standard, the transceiver will be able to handle this upgraded link.
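The FEC-switching decision in the upgrade scenario above can be sketched as a simple selection rule. The thresholds and mode names below are toy assumptions made for illustration, not real standard limits:

```python
# Hypothetical sketch of FEC-mode selection for a link upgrade. The thresholds
# are toy assumptions: open FEC is assumed to run out of margin when a long
# link is pushed to higher data rates.

def choose_fec(data_rate_gbps: int, reach_km: int) -> str:
    """Pick a FEC mode for the requested rate and reach (toy thresholds)."""
    # Assume the interoperable open FEC covers moderate rates and reaches...
    if data_rate_gbps <= 100 or reach_km <= 400:
        return "open-FEC"
    # ...but a long link at 400G needs the stronger proprietary FEC.
    return "proprietary-FEC"

print(choose_fec(100, 650))  # open-FEC: the original 100G metro link
print(choose_fec(400, 650))  # proprietary-FEC: the upgraded link
```

The point is that this switch happens in software via a remotely reconfigurable interface, with no hardware change to the deployed transceiver.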
Reconfigurable transceivers can also be beneficial to auto-configure links to deal with specific network conditions, especially in brownfield links. Let’s return to the fiber monitoring subject we discussed in the previous section. A transceiver can change its modulation scheme or lower the power of its semiconductor optical amplifier (SOA) if telemetry data indicates a good quality fiber. Conversely, if the fiber quality is poor, the transceiver can transmit with a more limited modulation scheme or higher power to reduce bit errors. If the smart pluggable detects that the fiber length is relatively short, the laser transmitter power or the DSP power consumption could be scaled down to save energy.
Takeaways
Optical networks will need artificial intelligence and machine learning to scale more efficiently and affordably to handle the increased traffic and connected devices. Conversely, AI systems will also need faster pluggables than before to acquire data and make decisions more quickly. Pluggables that fit this new AI era must be fast, smart, and adapt to multiple use cases and conditions. They will need to scale up to speeds beyond 400G and relay monitoring data back to the AI management layer in the central office. The AI management layer can then program transceiver interfaces from this telemetry data to change parameters and optimize the network.
Tags: 800G, 800G and beyond, adaptation, affordable, AI, artificial intelligence, automation, CloudComputing, data, DataCenter, EFFECT Photonics, FEC, fiber quality, innovation, integration, laser arrays, machine learning, network conditions, network optimization, Networking, optical transceivers, photonic integration, Photonics, physical layer, programmable interface, scalable, sensor data flow, technology, Telecommunications, telemetry data, terabyte, upgrade, virtualization

Enabling Digital-Signal Processing and Forward Error Correction
– LaserFocusWorld
Coherent Optics In Space
When it started, the space race was a competition between two superpowers, but now there are 90 countries with missions in space.
The prices of space travel have gone down, making it possible for more than just governments to send rockets and satellites into space. Several private companies are now investing in space programs, looking for everything from scientific advances to business opportunities. Some reports estimate more than 10,000 companies in the space industry and around 5,000 investors.
According to The Space Foundation’s 2022 report, the space economy was worth $469 billion in 2021. The report says more spacecraft were launched in the first six months of 2021 than in the first 52 years of space exploration (1957-2009). This growing industry thus has a growing need for technology products across many disciplines, including telecommunications. The space sector will need lighter, more affordable telecommunication systems that also provide increased bandwidth.
This is why EFFECT Photonics sees future opportunities for coherent technology in the space industry. By translating the coherent transmission from fiber communication systems on the ground to free-space optical systems, the space sector can benefit from solutions with more bandwidth capacity and less power consumption than traditional point-to-point microwave links.
It’s all About SWaP
One of the major concerns of the space industry is the cost of sending anything into space. Even during the days of NASA’s Space Shuttle program (which featured a reusable shuttle unit), sending a kilogram into space cost tens of thousands of dollars. Over time, more rocket stages have become reusable due to the efforts of companies like SpaceX, reducing these costs to just a few thousand. The figure below shows how the cost of space flight has decreased significantly in the last two decades.
Even though space travel is more affordable than ever, size, weight, and power (SWaP) requirements are still vital in the space industry. After all, shaving off weight or size in the spacecraft means a less expensive launch or perhaps room for more scientific instruments. Meanwhile, less power consumption means less drain on the spacecraft’s energy sources.
Using Optics and Photonics to Minimize SWaP Requirements
Currently, most space missions use bulkier radio frequency communications to send data to and from spacecraft. While radio waves have a proven track record of success in space missions, generating and collecting more mission data requires enhanced communications capabilities. Moreover, radio frequency equipment can often generate a lot of heat, requiring more energy to cool the system.
Decreasing SWaP requirements can be achieved with more photonics and miniaturization. Transmitting data with light will usually dissipate less heat than transmission with electrical signals and radio waves. This leads to smaller, lighter communication systems that require less power to run.
These SWaP advantages come alongside increased transmission speeds. After all, coherent optical communications can increase link capacities to spacecraft and satellites by 10 to 100 times that of radio frequency systems.
Leveraging Electronics Ecosystems for Space Certification and Standardization
While integrated photonics can boost space communications by lowering the payload, it must overcome the obstacles of a harsh space environment, which include radiation hardness, an extreme operational temperature range, and vacuum conditions.
| Mission Type | Temperature Range |
|---|---|
| Pressurized Module | +18.3 °C to +26.7 °C |
| Low-Earth Orbit (LEO) | -65 °C to +125 °C |
| Geosynchronous Equatorial Orbit (GEO) | -196 °C to +128 °C |
| Trans-Atmospheric Vehicle | -200 °C to +260 °C |
| Lunar Surface | -171 °C to +111 °C |
| Martian Surface | -143 °C to +27 °C |
The values in Table 1 show the unmanaged environmental temperatures in different space environments. In a temperature-managed area, these would decrease significantly for electronics and optics systems, perhaps by as much as half. Despite this management, the equipment would still need to deal with some extreme temperature values.
Fortunately, a substantial body of knowledge exists to make integrated photonics compatible with space environments. After all, photonic integrated circuits (PICs) use similar materials to their electronic counterparts, which have already been space qualified in many implementations.
Much research has gone into overcoming the challenges of packaging PICs with electronics and optical fibers for these space environments, which must include hermetic seals and avoid epoxies. Commercial solutions, such as those offered by PHIX Photonics Assembly, Technobis IPS, and the PIXAPP Photonic Packaging Pilot Line, are now available.
Takeaways
Whenever you want to send data from point A to B, photonics is usually the most efficient way of doing it, be it over fiber or free space.
Offering optical communication systems in a small integrated package that can resist the required environmental conditions will significantly benefit the space sector and its need to minimize SWaP requirements. These optical systems can increase their transmission capacity with the coherent optical transmission used in fiber optics. Furthermore, by leveraging the assembly and packaging structure of electronics for the space sector, photonics can also provide the systems with the ruggedness required to live in space.
Tags: certification, coherent, electronics, existing, fast, growing, heat dissipation, miniaturization, Optical Communication, Photonics, power consumption, size, space, space sector, speed, SWAP, temperature, weight

OFC 2023 Round-up
– 5G TechnologyWorld
Discover Where Light Meets Digital at OFC2023
Join EFFECT Photonics from March 7 to 9, 2023 at OFC in San Diego, California, the world’s largest event for optical networking and communications, to discover firsthand how our technology is transforming where light meets digital. Visit Booth #2423 to learn how EFFECT Photonics’ full portfolio of optical building blocks are enabling 100G coherent to the network edge and next-generation applications.
Explore Our OFC2023 Demos:
Build Your Own 100G ZR Coherent Module
At this year’s OFC, see how easy and affordable it can be to upgrade existing 10G links to a more scalable 100G coherent solution! Try your hand at constructing a 100G ZR coherent module specifically designed for the network edge utilizing various optical building blocks including tunable lasers, DSPs and optical subassemblies.
Tune Your Own PIC (Photonic Integrated Circuit)
Be sure to stop by Booth #2423 to tune your own PIC with EFFECT Photonics technology. In this interactive and dynamic demonstration, participants can explore first-hand the power of EFFECT Photonics solutions utilizing various parameters and product configurations.
Our experts are also available to discuss customer needs and how EFFECT Photonics might be able to assist. To schedule a meeting, please email marketing@effectphotonics.com
Tags: 100 ZR, 100G, 100gcoherent, access, access networks, bringing100Gtoedge, cloud, cloudedge, coherent, coherentoptics, datacenters, DSP, DSPs, EFFECT Photonics, Integrated Photonics, networkedge, ofc23, opticcommunications, Optics, photonic integration, Photonics, PIC, tunablelasers, wherelightmeetsdigital

Credo and EFFECT Photonics Announce Collaboration on High-Performance, Ultralow Power Coherent DSP Solutions
Industry Leaders Join Forces to Meet the Next Generation Needs of Service Providers, 5G Telecom and Fixed Access Operators
EINDHOVEN, The Netherlands and SAN JOSE, Calif.—
Credo (NASDAQ:CRDO) and EFFECT Photonics today announced plans to collaborate on the development of coherent Digital Signal Processor (DSP) merchant ICs, featuring industry-leading power dissipation and performance. The DSPs are expected to deliver the capacity and reach needed to meet the explosive demands for connectivity and bandwidth and enable cost-effective network upgrades over existing physical fiber infrastructure. These merchant silicon offerings have the added benefit of giving customers the flexibility to choose from a variety of transceiver suppliers.
The two companies will work together to develop new coherent DSP products featuring EFFECT Photonics’ coherent DSP technology and Forward Error Correction (FEC), combined with Credo’s high-speed SerDes, I/Os, Analog to Digital Converters (ADCs), and Digital to Analog Converters (DACs). Credo will manufacture and manage the sales and supply channels of the co-developed merchant ICs, and provide complete design services and customer support.
The first products will target 100G ZR and 100G ZR+ applications. The products will work within the popular QSFP28 optical module’s ultralow 5W power envelope and extend network reach well beyond 80km to support advanced application requirements. These products will target new buildouts and the imminent 10G upgrade cycle to 100G for service providers, 5G telecom and fixed-access operators.
Merchant solutions will help drive down the cost. With an installed base of more than 13M 10G ports that need to upgrade to 100G to support new applications and traffic patterns, there is a significant opportunity, plus potential new markets that coherent DSPs can open up in service provider, cloud, and enterprise edge networks.
Alan Weckel, Founder and Technology Analyst at 650 Group
Coherent technologies are gaining importance as they improve spectral efficiency and increase data throughput. Historically, coherent solutions have been mostly deployed for very long distances due to their high power and high cost. Our first collaboration with EFFECT Photonics will allow us to deliver capabilities that fit into the power envelopes of the most commonly used transceivers at a competitive price, providing our large volume customers with a refined solution that will seamlessly integrate into their existing access and edge infrastructure to address the dramatic bandwidth growth at the network edge.
Scott Feller, Vice President of Marketing at Credo
Our field-proven coherent technology and Credo’s high-performance, power-efficient connectivity solutions and semiconductor manufacturing expertise provide a potent combination that we believe could truly revolutionize the network edge and access interconnect market. The clear market need for this technology presents a truly exciting opportunity for both of our companies.
Harald Graber, Chief Commercial Officer at EFFECT Photonics
The joint solutions will offer numerous benefits, including high performance, small size, excellent energy efficiency and cost-efficient production.
DSP Feature Set/Advantages:
- Optimized ZR and ZR+ solutions cover 40 to 500km links.
- Energy-efficiency with ultralow power consumption.
- Devices will be rated for both commercial and industrial temperature operating ranges.
- Compliant with industry standards.
- Credo’s high-speed data converters, SerDes and I/Os, are optimized for performance and energy efficiency.
- EFFECT Photonics’ best-in-class, field-proven coherent DSP and FEC technology (gained through the company’s acquisition agreement with Viasat).
- Credo will be the sole supplier of the merchant ICs, working with its foundry partners to provide dependable supply chain fulfilment with competitive lead times.
- Credo’s DSP expertise leverages its N-1 process technology advantage to deliver the most cost-effective coherent DSPs in the market.
About Credo
Our mission is to deliver high-speed solutions to break bandwidth barriers on every wired connection in the data infrastructure market. Credo is an innovator in providing secure, high-speed connectivity solutions that deliver improved power and cost efficiency as data rates and corresponding bandwidth requirements increase exponentially throughout the data infrastructure market. Our innovations ease system bandwidth bottlenecks while simultaneously improving on power, security and reliability. Our connectivity solutions are optimized for optical and electrical Ethernet applications, including the emerging 100G (or Gigabits per second), 200G, 400G and 800G port markets. Our products are based on our proprietary Serializer/Deserializer (SerDes) and Digital Signal Processor (DSP) technologies. Our product families include integrated circuits (ICs), Active Electrical Cables (AECs) and SerDes Chiplets. Our intellectual property (IP) solutions consist primarily of SerDes IP licensing.
For more information, please visit: https://www.credosemi.com. Follow Credo on LinkedIn and Twitter.
About EFFECT Photonics
Where Light Meets Digital – EFFECT Photonics is a highly vertically integrated, independent optical systems company addressing the need for high-performance, affordable optic solutions driven by the ever-increasing demand for bandwidth and faster data transfer capabilities. Using our field-proven digital signal processing and forward error correction technology and ultra-pure light sources, we offer compact form factors with seamless integration, cost efficiency, low power, and security of supply. By leveraging established microelectronics ecosystems, we aim to make our products affordable and available in high volumes to address the challenges in 5G and beyond, access-ready coherent solutions, and cloud and cloud edge services.
For more information, please visit: https://www.effectphotonics.com. Follow EFFECT Photonics on LinkedIn and Twitter.
# # #
Media Contacts:
Diane Vanasse
Credo
Diane.vanasse@credosemi.com
Colleen Cronin
EFFECT Photonics
colleencronin@effectphotonics.com
Dutch-based EFFECT Photonics bags €37.6M to Develop Highly Integrated Optics Solutions
– Silicon Canals
Credo and EFFECT Photonics Announce Collaboration on High-Performance, Ultralow Power Coherent DSP Solutions
– Financial Times
EFFECT Photonics Secures $40M in Funding
– PhotonicsMarketplace
EFFECT Photonics Secures $40 Million in Additional Funding
Eindhoven, The Netherlands
EFFECT Photonics, a leading developer of highly integrated optical solutions, has secured an additional $40 million in funding from a group of investors led by Invest-NL and Innovation Industries, along with other existing investors.
The new investment enables the company to accelerate product development and fuel go-to-market initiatives, specifically those related to its integrated coherent optical product portfolio and solutions that meet the industry need for disaggregation of the key components for the growing needs of coherent optical interfaces.
We’re thankful to Invest-NL, Innovation Industries, and our other existing investors for their continued support and confidence in EFFECT Photonics’ mission and products. This investment positions us well to advance our portfolio of integrated optic solutions that will reshape the future of communications and positively disrupt the status quo.
Roberto Marcoccia, CEO, EFFECT Photonics
Invest-NL’s Deep Tech Fund was established to support companies with innovative, complex technologies focusing on future societal challenges. Our investment in EFFECT Photonics is within that goal. We are very happy to support our existing deep tech portfolio company EFFECT Photonics, together with the other shareholders, for further growth. Management is well-focused on future developments and has positioned EFFECT Photonics as a tier-one business partner.
Gert-Jan Vaessen, Fund Manager Deep Tech Fund at Invest-NL
Innovation Industries is excited to offer our continued support of EFFECT Photonics. We are impressed by the company’s plans for future growth and innovative product portfolio, which is forging new ground in offering the lowest power per bit.
Nard Sintenie, Partner, Innovation Industries
About EFFECT Photonics
Where Light Meets Digital – EFFECT Photonics is a highly vertically integrated, independent optical systems company addressing the need for high-performance, affordable optic solutions driven by the ever-increasing demand for bandwidth and faster data transfer capabilities. Using our field-proven digital signal processing and forward error correction technologies and ultra-pure light sources, we offer compact form factor solutions with seamless integration, cost efficiency, low power, and security of supply. By leveraging established microelectronics ecosystems, we aim to make our products affordable and available in high volumes to address the challenges in 5G and beyond, access-ready coherent solutions, and cloud and cloud edge services. For more information, please visit: www.effectphotonics.com. Follow EFFECT Photonics on LinkedIn and Twitter.
# # #
Media Contact:
Colleen Cronin
EFFECT Photonics
colleencronin@effectphotonics.com
Designing in the Great Lakes
Last year, EFFECT Photonics announced the acquisition of the coherent optical digital signal processing (DSP) and forward error correction (FEC) business unit from the global communications company Viasat Inc. This also meant welcoming to the EFFECT Photonics family a new engineering team who will continue to work in the Cleveland area.
As EFFECT Photonics expands its influence into the American Midwest, it is interesting to dive deeper into Cleveland’s history with industry and technology. Cleveland has enjoyed a long story as a Midwest industrial hub, and as these traditional industries have declined, it is evolving into one of the high-tech hubs of the region.
Cleveland and the Industrial Revolution
Cleveland’s industrial sector expanded significantly in the 19th century because of the city’s proximity to several essential resources and transportation routes: coal and iron ore deposits, the Ohio and Erie Canal, and the Lake Erie railroad. For example, several steel mills, such as the Cleveland Rolling Mill Company and the Cleveland Iron and Steel Company, emerged because of the city’s proximity to Lake Erie, facilitating the transportation of raw materials and goods.
Building on the emerging iron and steel industries, heavy equipment production also found a home in Cleveland. Steam engines, railroad equipment, and other forms of heavy machinery were all manufactured in great quantities in the city.
Cleveland saw another massive boost to its industrial hub status with the birth of the Standard Oil Company in 1870. At the peak of its power, Standard Oil was the largest petroleum company in the world, and its success made its founder and head, John D. Rockefeller, one of the wealthiest men of all time. This history with petroleum also led to the emergence of Cleveland’s chemicals and materials industry.
Many immigrants moved to Cleveland, searching for work in these expanding industries, contributing to the city’s rapid population boom. This growth also prompted the development of new infrastructure like roads, railways and bridges to accommodate the influx of people.
Several important electrical and mechanical equipment manufacturers, including the Bendix Corporation, the White Motor Company, and the Western Electric Company (which supplied equipment to the US Bell System), also established their headquarters in or around Cleveland in the late 19th and early 20th century.
From Traditional Industry to Healthcare and High-Tech
In the second half of the 20th century, Cleveland’s traditional industries, such as steel and heavy manufacturing, began to collapse. As was the case in many other great American manufacturing centers, automation, globalization, and other socioeconomic shifts all had a role in this decline. The demise of Cleveland’s core industries was a significant setback, but the city has made substantial efforts in recent years to diversify its economy and grow in new technology and healthcare areas.
For example, the Cleveland Clinic is one of the leading US academic medical centers, with pioneering medical breakthroughs such as the first coronary artery bypass surgery and the first face transplant in the United States. Institutions like theirs or the University Hospitals help establish Cleveland as a center for healthcare innovation.
Cleveland is also trying to evolve as a high-tech hub that attracts new workers and companies, especially in software development. Companies are attracted by the low office leasing and other operating costs, while the affordable living costs attract workers. As reported by the real estate firm CBRE, Cleveland’s tech workforce grew by 25 percent between 2016 and 2021, which was significantly above the national average of 12.8 percent.
A New Player in Cleveland’s High-Tech Industry
As Cleveland’s history as a tech hub continues, EFFECT Photonics is excited to join this emerging tech environment. Our new DSP team will find its new home in the Wagner Awning building in the Tremont neighborhood of Cleveland’s West Side.
This building was erected in 1895 and hosted a sewing factory that manufactured everything from tents and flotation devices for American soldiers and marines to awnings for Cleveland buildings. When the Ohio Awning company announced its relocation in 2015, this historic building began a redevelopment process to become a new office and apartment space.
EFFECT Photonics is proud to become a part of Cleveland’s rich and varied history with industry and technology. We hope our work can help develop this city further as a tech hub and attract more innovators and inventors to Cleveland.
Tags: digital signal processing (DSP), EFFECT Photonics, forward error correction (FEC), high-tech hub, industrial history, Integrated Photonics, Ohio Awning Company, Photonics, Tremont neighborhood, Viasat Inc., Wagner Awning building
What Tunable Lasers Does the Network Edge Need?
Several applications in the optical network edge would benefit from upgrading from 10G to 100G…
Several applications in the optical network edge would benefit from upgrading from 10G to 100G DWDM or from 100G grey to 100G DWDM optics:
- Business Services could scale enterprise bandwidth beyond single-channel 100G links.
- Fixed Access links could upgrade the uplinks of existing termination devices such as optical line terminals (OLTs) and Converged Cable Access Platforms (CCAPs) from 10G to 100G DWDM.
- Mobile Midhaul benefits from a seamless upgrade of existing links from 10G to 100G DWDM.
- Mobile Backhaul benefits from upgrading its links to 100G IPoDWDM.
The 100G coherent pluggables for these applications will have very low power consumption (less than 6 Watts) and be deployed in uncontrolled environments. To enable this next generation of coherent pluggables, the next generation of tunable lasers needs enhanced optical and electronic integration, more configurability so that users can optimize their pluggable footprint and power consumption, and the leveraging of electronic ecosystems.
The Past and Future Successes of Increased Integration
Over the last decade, technological progress in tunable laser packaging and integration has matched the need for smaller footprints. In 2011, tunable lasers followed the multi-source agreement (MSA) for integrable tunable laser assemblies (ITLAs). The ITLA package measured around 30.5 mm in width and 74 mm in length. By 2015, tunable lasers were sold in the more compact Micro-ITLA form factor, which cut the original ITLA package size in half. And in 2019, laser developers (see examples here and here) announced a new Nano-ITLA form factor that reduced the size by almost half again.
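The shrinking footprints described above can be put in rough numbers. The sketch below uses the ITLA dimensions quoted in the text (about 30.5 mm by 74 mm) and the stated rule of thumb that each later generation roughly halved the footprint; the resulting Micro-ITLA and Nano-ITLA areas are approximations derived from that rule, not published package specifications.

```python
# Footprint evolution of tunable laser assemblies, using the package
# dimensions quoted in the text. Micro-ITLA and Nano-ITLA areas are
# estimated from the "roughly half each generation" rule of thumb.
itla_area_mm2 = 30.5 * 74                      # original ITLA (2011)
micro_itla_area_mm2 = itla_area_mm2 / 2        # Micro-ITLA (2015), ~half
nano_itla_area_mm2 = micro_itla_area_mm2 / 2   # Nano-ITLA (2019), ~half again

print(f"ITLA:       {itla_area_mm2:.0f} mm^2")
print(f"Micro-ITLA: {micro_itla_area_mm2:.0f} mm^2 (approx.)")
print(f"Nano-ITLA:  {nano_itla_area_mm2:.0f} mm^2 (approx.)")
```

Even with these rough numbers, the trend is clear: two generations of packaging progress cut the laser footprint to roughly a quarter of the original ITLA.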
Integration also has a major impact on power consumption, since smaller, highly integrated lasers usually consume less power than bulkier lasers with more discrete components. Making the laser smaller and more integrated means that the light inside the chip travels a shorter distance and therefore accumulates fewer optical losses.
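The loss argument above scales linearly with path length, which a toy calculation makes concrete. The propagation-loss figure below is a hypothetical example value chosen for illustration, not an EFFECT Photonics specification.

```python
# Toy illustration: on-chip optical loss accumulates with path length,
# so a smaller, more integrated laser loses less light. The loss figure
# is an assumed example value, not a real device parameter.
loss_db_per_cm = 2.0             # hypothetical waveguide propagation loss
for path_cm in (2.0, 1.0, 0.5):  # shrinking the laser shortens the path
    print(f"{path_cm} cm path -> {loss_db_per_cm * path_cm:.1f} dB loss")
```

Halving the optical path halves the accumulated loss, which in turn relaxes how much optical power, and hence electrical power, the laser must produce.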
Reducing the footprint of tunable lasers in the future will need even greater integration of their component parts. For example, every tunable laser needs a wavelength locker component that can stabilize the laser’s output regardless of environmental conditions such as temperature. Integrating the wavelength locker component on the laser chip instead of attaching it externally would help reduce the laser package’s footprint and power consumption.
Configurability and Optimization
Another important aspect of optimizing pluggable module footprint and power consumption is allowing transceiver developers to mix and match their transceiver building blocks.
Let’s discuss an example of such configurability. The traditional ITLA in transceivers contains the temperature control driver and power converter functions. However, the main transceiver board can usually provide these functions too.
A setup in which the main board performs these driver and converter functions would avoid the need for redundant elements in both the main board and tunable laser. Furthermore, it would give the transceiver developer more freedom to choose the power converter and driver blocks that best fit their footprint and power consumption requirements.
Such configurability will be particularly useful in the context of the new generation of 100G coherent pluggables. After all, these 100G pluggables must fit tunable lasers, digital signal processors, and optical engines in a QSFP28 form factor that is slightly smaller than the QSFP-DD size used for 400G transceivers.
Looking Towards Electronics Style Packaging
The photonics production chain must be increasingly automated and standardized to save costs and increase accessibility. To achieve this goal, it is helpful to study established practices in the fields of electronics packaging, assembly, and testing.
By using BGA-style packaging or flip-chip bonding techniques that are common now in electronics packaging or passive optical fiber alignment, photonics packaging can also become more affordable and accessible. You can read more about these methods in our article about leveraging electronic ecosystems in photonics.
These kinds of packaging methods not only improve the scalability (and therefore cost) of laser production, but they can also further reduce the size of the laser.
Takeaways
Tunable lasers for coherent pluggable transceivers face the complex problem of maintaining good enough performance while moving to smaller footprints, lower cost, and lower power consumption. Within a decade, the industry moved from the original integrable tunable laser assembly (ITLA) module to micro-ITLAs and then nano-ITLAs. Each generation had roughly half the footprint of the previous one.
However, the need for 100G coherent pluggables for the network edge imposes even tighter footprint and power consumption constraints on tunable lasers. Increased integration, more configurability of the laser and transceiver building blocks, and the leveraging of electronic ecosystems will help tunable lasers get smaller and more power-efficient to enable these new application cases in edge and access networks.
Tags: automation, cost, efficiency, energy efficient, nano ITLA, optimization, power, size, smaller, testing, wavelength locker
EFFECT Photonics Unveils Development of Pico Tunable Laser Assembly Enabling QSFP28 100G ZR Coherent Pluggable Modules
-New pTLA to offer industry-best combination of size, affordability, and performance to meet the demand…
-New pTLA to offer industry-best combination of size, affordability, and performance to meet the demand for 100G coherent at the edge
EINDHOVEN, The Netherlands
EFFECT Photonics, a leading developer of highly integrated optical solutions, announced today the development of a new Pico Tunable Laser Assembly (pTLA) to address the growing demand for 100G coherent transceivers in access networks. Tunable lasers are a core component of optical systems enabling dense wavelength division multiplexing (DWDM), which allows network operators to expand their network capacity without expanding the existing fiber infrastructure. Purposely designed for the optical network edge, EFFECT Photonics’ new pTLA supports both commercial- and industrial-temperature (C-temp and I-temp) operating ranges and offers an ideal combination of power, cost, and size, enabling a transceiver form factor that upgrades the existing infrastructure to a scalable 100 Gbps coherent solution.
According to a recent Heavy Reading survey, 75% of operators believe that 100G coherent pluggable optics will be used extensively in their edge and access evolution strategy. However, market adoption has yet to materialize since affordable and power-efficient 100ZR-based products are currently not available due to stringent size and power consumption requirements that cannot be fulfilled by today’s tunable laser solutions. Designed specifically to address the 100G coherent network edge, EFFECT Photonics’ pTLA will allow coherent pluggables to be deployed more easily and cost effectively in the access domain and will feature optimal laser performance, size and power consumption for a standard QSFP28 form-factor. Furthermore, EFFECT Photonics’ new pTLA utilizes the existing microelectronics ecosystem to allow manufacturing at scale as well as complementary coherent products and services, such as DSPs for those providers in need of a complete transceiver solution.
Today’s operators need a network edge aggregation strategy that offers the best combination of capacity, cost-effectiveness, and performance to evolve network access effectively, and 100G coherent pluggable optics offer just that. EFFECT Photonics’ new Pico Tunable Laser Assembly will be the only purpose-designed tunable laser assembly today to serve this emerging market, helping operators easily scale up network edge aggregation capacity and benefit from coherent technology.
Roberto Marcoccia, CEO, EFFECT Photonics
About EFFECT PHOTONICS
Where Light Meets Digital – EFFECT Photonics is a highly vertically integrated, independent optical systems company addressing the need for high-performance, affordable optic solutions driven by the ever-increasing demand for bandwidth and faster data transfer capabilities.
Using our proprietary digital signal processing and forward error correction technology and ultra-pure light sources, we offer compact form factors with seamless integration, cost efficiency, low power, and security of supply. By leveraging established microelectronics ecosystems, we aim to make our products affordable and available in high volumes to address the challenges in 5G and beyond, access-ready coherent solutions, and cloud and cloud edge services.
For more information, please visit: www.effectphotonics.com. Follow EFFECT Photonics on LinkedIn and Twitter.
To read the Simplified Chinese version click here.
To read the Traditional Chinese version click here.
To read the Japanese version click here.
To read the Korean version click here.
# # #
Media Contact:
Colleen Cronin
EFFECT Photonics
colleencronin@effectphotonics.com
What DSPs Does the Network Edge Need?
Operators are strongly interested in 100G pluggables that can house tunable coherent optics in compact,…
Operators are strongly interested in 100G pluggables that can house tunable coherent optics in compact, low-power form factors like QSFP28. A recently released Heavy Reading survey revealed that over 75% of operators surveyed believe that 100G coherent pluggable optics will be used extensively in their edge and access evolution strategy.
These new 100G coherent pluggables will have very low power consumption (less than six Watts) and will be deployed in uncontrolled environments, imposing new demands on coherent digital signal processors (DSPs). To enable this next generation of coherent pluggables in the network edge, the next generation of DSPs needs ultra-low power consumption, co-design with the optical engine, and industrial hardening.
The Power Requirements of the Network Edge
Several applications in the network edge can benefit from upgrading their existing 10G DWDM or 100G grey links to 100G DWDM, such as the aggregation of fixed and mobile access networks and 100G data center interconnects for enterprises. However, network providers have often chosen to stick to their 10G links because the existing 100G solutions do not check all the required boxes.
100G direct detect pluggables have a more limited reach and are not always compatible with DWDM systems. “Scaled down” coherent 400ZR solutions have the required reach and tunability, but they are too expensive and power-hungry for edge applications. Besides, the ports in the small-to-medium IP routers used in edge deployments often support QSFP28 modules rather than the QSFP-DD modules commonly used in 400ZR.
The QSFP28 form factor imposes tighter footprint and power consumption constraints on coherent technologies compared to QSFP-DD modules. QSFP28 is slightly smaller, and most importantly, it can handle at most a 6-Watt power consumption, in contrast with the typical 15-Watt consumption of QSFP-DD modules in 400ZR links. Fortunately, the industry is moving towards a proper 100ZR solution in the QSFP28 form factor that balances performance, footprint, and power consumption requirements for the network edge.
These power requirements also impact DSP power consumption. DSPs constitute roughly 50% of coherent transceiver power consumption, so a DSP optimized for the network edge 100G use cases should aim to consume at most 2.5 to 3 Watts of power.
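The arithmetic behind that DSP target is straightforward, as the sketch below shows using the figures quoted in the text: a 6-Watt QSFP28 module ceiling and a DSP that accounts for roughly 50% of transceiver power. This is a back-of-the-envelope budget, not a product specification.

```python
# Rough power-budget sketch for a 100ZR QSFP28 module, using the
# figures quoted in the text (illustrative, not a product spec).
module_budget_w = 6.0   # QSFP28 power ceiling for 100ZR modules
dsp_share = 0.5         # DSP ~= 50% of coherent transceiver power

dsp_budget_w = module_budget_w * dsp_share
remainder_w = module_budget_w - dsp_budget_w

print(f"DSP power budget:  <= {dsp_budget_w:.1f} W")   # matches the 2.5-3 W target
print(f"Left for laser, optics, control: ~{remainder_w:.1f} W")
```

Aiming for 2.5 Watts rather than the full 3-Watt half-share leaves extra margin for the tunable laser and control electronics inside the same 6-Watt envelope.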
Co-Designing and Adjusting for Power Efficiency
Achieving this ambitious power target will require scaling down performance in some areas and designing smartly in others. Let’s discuss a few examples below.
- Modulation: 400ZR transceivers use a more power-hungry 16QAM modulation. This modulation scheme uses sixteen different states that arise from combining four different intensity levels and four phases of light. The new generation of 100ZR transceivers might use some variant of a QPSK modulation, which only uses four states from four different phases of light.
- Forward Error Correction (FEC): DSPs in 400ZR transceivers use a more advanced concatenated FEC (CFEC) code, which combines inner and outer FEC codes to enhance the performance compared to a standard FEC code. The new 100ZR transceivers might use a more basic FEC type like GFEC. This is one of the earliest optical FEC algorithms and was adopted as part of the ITU G.709 specification.
- Co-Designing DSP and Optical Engine: As we explained in a previous article about fit-for-platform DSPs, a transceiver optical engine designed on the indium phosphide platform could be designed to run at a voltage compatible with the DSP’s signal output. This way, the optimized DSP could drive the PIC directly without needing a separate analog driver, doing away with a significant power conversion overhead compared to a silicon photonics setup, as shown in the figure below.
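The first two trade-offs above can be put in rough numbers. The modulation figures follow the text (16QAM: sixteen states, QPSK: four states); the GFEC parameters are those of the Reed-Solomon RS(255, 239) code standardized in ITU-T G.709. This is a back-of-the-envelope sketch, not transceiver-specific data.

```python
import math

def bits_per_symbol(n_states: int) -> int:
    """Bits carried per symbol by a constellation with n_states points."""
    return int(math.log2(n_states))

# Modulation: 16QAM carries twice the bits per symbol of QPSK, at the
# cost of more complex, power-hungry DSP processing.
print(f"16QAM: {bits_per_symbol(16)} bits/symbol")  # 4
print(f"QPSK:  {bits_per_symbol(4)} bits/symbol")   # 2

# FEC: GFEC (ITU-T G.709) is RS(255, 239): 16 parity bytes are added to
# every 239 data bytes, and up to 8 byte errors per codeword are corrected.
n, k = 255, 239
print(f"GFEC overhead: {(n - k) / k:.1%}")                 # ~6.7%
print(f"Correctable bytes per codeword: {(n - k) // 2}")   # 8
```

The pattern is the same in both cases: a simpler scheme (QPSK, GFEC) gives up capacity or coding gain in exchange for the lower DSP complexity, and hence lower power, that the 100ZR budget demands.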