Insights

The Whats and Hows of Telcordia Standards
Before 1984, the world of telecom standards looked very different from how it does now. Such a world prominently featured closed systems like the one AT&T had in the United States. They were stable and functional systems but led to a sluggish pace of technology innovation due to the lack of competition. The breakup of the Bell System in the early 1980s, in which AT&T was forced to divest its local Bell operating companies, caused a tectonic shift in the industry. As a result, new standards bodies rose to meet the demands of a reinvented telecom sector.
Bellcore (Bell Communications Research) was one of the first organizations to answer this demand. Bellcore aided the Regional Bell Operating Companies by creating “generic requirements” (GR) documents that specified the design, operation, and purpose of telecom networks, equipment, and components. These GRs provided thorough criteria to help new suppliers design interoperable equipment, leading to the explosion of a new supplier ecosystem that made “GR conformant” equipment. An industry that had relied on a few major suppliers thus became a more dynamic and competitive field, with carriers able to work with several suppliers almost overnight.
Bellcore is now Telcordia, and although the industry saw the emergence of other standards bodies, Telcordia still plays a major role in standardization by updating and producing new GR documents. Some of the most well-known documents are reliability prediction standards for commercial telecommunication products. Let’s discuss what these standards entail and why they matter in the industry.
What is the goal of Telcordia reliability standards?
Telecommunications carriers can use general requirements documents to select products that meet reliability and performance needs. The documents cover five sections:
- General Requirements, which discuss documentation, packaging, shipping, design features, product marking, safety and interoperability.
- Performance Requirements, which cover potential tests, as well as the performance criteria applied during testing.
- Service Life Tests, which mimic the stresses faced by the product in real-life use cases.
- Extended Service Life Tests, which verify long-term reliability.
- Reliability Assurance Program, which ensures satisfactory, long-term operation of products in a telecom plant.
Several of these specifications require environmental/thermal testing and often refer to other MIL-STD and EIA/TIA test specifications. Listed below are a few common Telcordia test specifications that require the use of environmental testing.
Telcordia Generic Requirement | Description/Applicable Product |
---|---|
GR-49-CORE | for Outdoor Telephone Network Interface Devices |
GR-63-CORE | for Network Equipment-Building System Requirements (NEBS): Physical Protection |
GR-326-CORE | for Single Mode Optical Connectors and Jumper Assemblies (Fiber Optics) |
GR-468-CORE | for Optoelectronic Devices Used in Telecommunications Equipment |
GR-487-CORE | for Electronic Equipment Cabinets (Enclosures) |
GR-974-CORE | for Telecommunications Line Protector Units (TLPUs) |
GR-1209-CORE | for Fiber Optic Branching Components |
GR-1221-CORE | for Passive Optical Components |
What are Telcordia tests like?
Our optical transceivers at EFFECT Photonics, for example, comply with the Telcordia GR-468 qualification, which describes how to test optoelectronic devices for reliability under extreme conditions. Qualification depends upon maintaining optical integrity throughout an appropriate test regimen. The accelerated environmental tests are summarized in the table below. The GR recommends that the chosen test regimen be based on the conditions and stresses expected over the long-term life of the system and/or device.
| Mechanical Reliability & Temperature Testing | | |
| --- | --- | --- |
| Shock & Vibration | High / Low Storage Temp | Temp Cycle |
| Damp Heat | Cycle Moisture Resistance | Hot Pluggable |
| Mating Durability | Accelerated Aging | Life Expectancy Calculation |
Our manufacturing facilities and partners include capabilities for the temperature cycling and reliability testing needed to match Telcordia standards, such as temperature cycling ovens and chambers with humidity control.
Why are Telcordia standards important?
Companies engage with telecom standards for several reasons:
- Strategic Advantage: Standards influence incumbents with well-established products differently than startups with “game changer” technologies. Following a technological standard helps incumbents win new business and safeguard their existing business: if a new vendor comes along with a box based on a new technology that delivers identical functionality at a fraction of the price, the incumbent’s vested stake in the established standard helps protect its position.
- Allocation of Resources: Standards are part of technology races. If a competitor doubles technical contributions to hasten the inclusion of their specialized technology into evolving standards, you need to know so you may react by committing additional resources or taking another action.
- Early Identification of Prospective Partners and Rivals: Standards help suppliers recognize competitors and potential partners to achieve business objectives. After all, the greatest technology does not necessarily “win the race”; the winner is often the one with the best business plan and partners to help realize the desired specification and design.
- Information Transfer: Most firms use standards to exchange information. Companies submit technical contributions to standards groups to ensure that standards stay as close as feasible to their business model and their operations’ architecture and technology. Conversely, a company’s product and service developers must know the current standards to guarantee that their goods and services support or adhere to industry standards, which clients expect.
Takeaways
One of our central company objectives is to bring the highest-performing optical technologies, such as coherent detection, all the way to the network edge. However, achieving this goal doesn’t just require us to focus on the optical or electronic side but also on meeting the mechanical and temperature reliability standards required to operate coherent devices outdoors. This is why it’s important for EFFECT Photonics to constantly follow and contribute to standards as it prepares its new product lines.
Tags: accelerated, AT&T, Bellcore, closed, coherent, innovation, monopoly, open, partners, reliability, resource allocation, service life, technology, Telcordia
Reaching a 100ZR Future for Access Network Transport
In optical access networks, the 400ZR pluggables that have become mainstream in datacom applications are too expensive and power-hungry. Therefore, operators are strongly interested in 100G pluggables that can house coherent optics in compact form factors, just like 400ZR pluggables do. The industry is labeling these pluggables as 100ZR.
A recently released Heavy Reading survey revealed that over 75% of operators surveyed believe that 100G coherent pluggable optics will be used extensively in their edge and access evolution strategy. However, this interest had yet to materialize into a 100ZR market because no affordable or power-efficient products were available. The most the industry could offer was 400ZR pluggables that were “powered-down” for 100G capacity.
By embracing smaller and more customizable light sources, new optimized DSP designs, and high-volume manufacturing capabilities, we can develop native 100ZR solutions with lower costs that better fit edge and access networks.
Making Tunable Lasers Even Smaller?
Since the telecom and datacom industries want to pack more and more transceivers on a single router faceplate, integrable tunable laser assemblies (ITLAs) must maintain performance while moving to smaller footprints and lower power consumption and cost.
Fortunately, such ambitious specifications became possible thanks to improved photonic integration technology. The original 2011 ITLA standard from the Optical Internetworking Forum (OIF) was 74mm long by 30.5mm wide. By 2015, most tunable lasers shipped in a micro-ITLA form factor that cut the original ITLA footprint in half. In 2021, the nano-ITLA form factor designed for QSFP-DD and OSFP modules had once again cut the micro-ITLA footprint almost in half.

There are still plenty of discussions over the future of ITLA packaging to fit the QSFP28 form factors of these new 100ZR transceivers. For example, every tunable laser needs a wavelength locker component that stabilizes the laser’s output regardless of environmental conditions such as temperature. Integrating that wavelength locker component with the laser chip would help reduce the laser package’s footprint.
Another potential path to reducing the size of tunable laser packages is related to the control electronics. The current ITLA standards include the complete control electronics on the laser package, including power conversion and temperature control. However, if the transceiver’s main board handles some of these electronic functions instead of the laser package, the size of the laser package can be reduced.
This approach means that the reduced laser package would only have full functionality if connected to the main transceiver board. However, some transceiver developers will appreciate the laser package reduction and the extra freedom to provide their own laser control electronics.
Co-designing DSPs for Energy Efficiency
The 5-Watt power requirement of 100ZR in a QSFP28 form factor is a significant reduction compared to the 15-Watt specification of 400ZR transceivers in a QSFP-DD form factor. Achieving this reduction requires a digital signal processor (DSP) specifically optimized for the 100G transceiver.
Current DSPs are designed to be agnostic to the material platform of the photonic integrated circuit (PIC) they are connected to, which can be Indium Phosphide (InP) or Silicon. Thus, they do not exploit the intrinsic advantages of these material platforms. Co-designing the DSP chip alongside the PIC can lead to a much better fit between these components.
To illustrate the impact of co-designing PIC and DSP, let’s look at an example. A PIC and a standard platform-agnostic DSP typically operate with signals of differing intensities, so they need some RF analog electronic components to “talk” to each other. This signal power conversion overhead constitutes roughly 2-3 Watts or about 10-15% of transceiver power consumption.

However, the modulator of an InP PIC can run at a lower voltage than a silicon modulator. If this InP PIC and the DSP are designed and optimized together instead of using a standard DSP, the PIC could be designed to run at a voltage compatible with the DSP’s signal output. This way, the optimized DSP could drive the PIC directly without needing the RF analog driver, doing away with most of the power conversion overhead we discussed previously.

Optical Subassemblies That Leverage Electronic Ecosystems
To become more accessible and affordable, the photonics manufacturing chain can learn from electronics packaging, assembly, and testing methods that are already well-known and standardized. After all, building a special production line is much more expensive than modifying an existing production flow.
There are several ways in which photonics packaging, assembly, and testing can be made more affordable and accessible: passive alignment of the optical fiber, BGA-style packaging, and flip-chip bonding. Making these techniques more widespread will make a massive difference in photonics’ ability to scale up and become as available as electronics. To read more about them, please read our previous article.
Takeaways
The interest in novel 100ZR coherent pluggable optics for edge and access applications is strong, but the market has struggled to provide “native” and specific 100ZR solutions to address this interest. Transceiver developers need to embrace several new technological approaches to develop these solutions. They will need smaller tunable laser packages that can fit the QSFP28 form factors of 100ZR solutions, optimized and co-designed DSPs that meet the reduced power consumption goals, and sub-assemblies that leverage electronic ecosystems for increased scale and reduced cost.
Tags: 100 ZR, 100G, 100ZR, 400ZR, C-band, C-PON, coherent, DSP, filters, future proof, Korea, laser sources, O-band, Packaging, pluggable, roadmap, S-band
Remote Provisioning and Management for Edge Networks
Smaller data centers near the end user can reduce latency, overcome inconsistencies in connectivity, and store and compute data closer to the end user. According to PricewaterhouseCoopers, these advantages will drive the worldwide market for edge data centers to more than triple from $4 billion in 2017 to $13.5 billion in 2024. With the increased use of edge computing, more high-speed transceivers are required to link edge data centers. According to Cignal AI, the number of 100G equivalent ports sold for edge applications will double between 2022 and 2025, as indicated in the graph below.

The increase in edge infrastructure comes with many network provisioning and management challenges. While typical data centers were built in centralized and controlled environments, edge deployments will live in remote and uncontrolled environments because they need to be close to where the data is generated. For example, edge infrastructure could be a server plugged into the middle of a busy factory floor to collect data more quickly from their equipment sensors.
This increase in edge infrastructure will provide plenty of headaches to network operators, who also need to scale up their networks to handle the increased bandwidths and numbers of connections. More truck rolls are needed to update more equipment, and this approach won’t scale cost-effectively, which is why many companies simply prefer not to upgrade and modernize their infrastructure.
Towards Zero-Touch Provisioning
A zero-touch provisioning model would represent a major shift in an operator’s ability to upgrade their network equipment. The network administrator could automate the configuration and provisioning of each unit from the central office and ship the units to each remote site, where the personnel (who don’t need any technical experience!) just need to power up the unit. Once turned on, the units can be further provisioned, managed, and monitored by experts anywhere in the world.
The optical transceivers potentially connected to some of these edge nodes already have the tools to be part of such a zero-touch provisioning paradigm. Many transceivers have a plug-and-play operation that does not require an expert on the remote site. For example, the central office can already program specific parameters of the optical link, such as temperature, wavelength drift, dispersion, or signal-to-noise ratio, or even which specific wavelength to use. The latter wavelength self-tuning application is shown in Figure 2.
Once plugged in, the transceiver will set the operational parameters as programmed and communicate with the central office for confirmation. These provisioning options make deployment much easier for network operators.
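As a rough illustration of this workflow, the sketch below models a provisioning profile being prepared at the central office and applied by the module once powered up. The class, field, and method names are purely hypothetical assumptions; real modules expose such functions through vendor- or MSA-defined management interfaces rather than a Python API.

```python
# A minimal, hypothetical sketch of zero-touch provisioning logic.
# All names below are illustrative only, not a real transceiver API.

from dataclasses import dataclass

@dataclass
class ProvisioningProfile:
    wavelength_nm: float        # DWDM channel the module should tune to
    tx_power_dbm: float         # target transmit power
    alarm_temp_max_c: float     # temperature alarm threshold

class Transceiver:
    def __init__(self, serial: str):
        self.serial = serial
        self.profile = None

    def apply_profile(self, profile: ProvisioningProfile) -> str:
        """Tune the laser, set thresholds, and report back to the central office."""
        self.profile = profile
        # ...hardware tuning would happen here...
        return f"{self.serial}: tuned to {profile.wavelength_nm} nm, ready"

# The central office prepares the profile before shipping the unit;
# on-site staff only need to plug the module in and power it up.
profile = ProvisioningProfile(wavelength_nm=1550.12, tx_power_dbm=-1.0, alarm_temp_max_c=70.0)
module = Transceiver(serial="XCVR-0042")
print(module.apply_profile(profile))
```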

Enabling Remote Diagnostics and Management
The same channel that establishes parameters remotely during the provisioning phase can also perform monitoring and diagnostics afterward. The headend module in the central office could remotely modify certain aspects of the tail-end module in the remote site, effectively enabling several remote management and diagnostics options. The figure below provides a visualization of such a scenario.

The central office can remotely measure metrics such as the transceiver temperature and power transmitted and received. These metrics can provide a quick and useful health check of the link. The headend module can also remotely read alarms for low/high values of these metrics.
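The sketch below illustrates the idea of such a remote health check in a few lines of Python; the metric names and alarm limits are assumptions for illustration, not any specific transceiver's register map.

```python
# Illustrative only: compare remotely read metrics against alarm thresholds.

def health_check(metrics: dict, limits: dict) -> list:
    """Return a list of alarm strings for any metric outside its limits."""
    alarms = []
    for name, value in metrics.items():
        low, high = limits[name]
        if not (low <= value <= high):
            alarms.append(f"ALARM: {name}={value} outside [{low}, {high}]")
    return alarms

metrics = {"temperature_c": 58.0, "tx_power_dbm": -1.2, "rx_power_dbm": -19.5}
limits = {"temperature_c": (-5, 70), "tx_power_dbm": (-6, 3), "rx_power_dbm": (-18, 0)}
print(health_check(metrics, limits))   # rx_power_dbm would trigger an alarm here
```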
These remote diagnostics and management features can eliminate certain truck rolls and save more operational expenses. They are especially convenient when dealing with very remote and hard-to-reach sites (e.g., an antenna tower) that require expensive truck rolls.
Remote Diagnostics and Control for Energy Sustainability
To talk about the impact of remote control on energy sustainability, we first must review the concept of performance margins. This number is a vital measure of received signal quality. It determines how much room there is for the signal to degrade without impacting the error-free operation of the optical link.
In the past, network designers played it safe, maintaining large margins to ensure a robust network operation in different conditions. However, these higher margins usually require higher transmitter power and power consumption. Network management software can use the remote diagnostics provided by this new generation of transceivers to develop tighter, more accurate optical link budgets in real time that require lower residual margins. This could lower the required transceiver powers and save valuable energy.
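As a back-of-the-envelope illustration of how a link budget and margin are computed, consider the sketch below. The transmit power, fiber loss, connector loss, and receiver sensitivity are assumed example values, not figures from this article.

```python
# Toy link-budget calculation: margin = received power - receiver sensitivity,
# where received power is transmit power minus fiber and connector losses.

tx_power_dbm = 1.0          # assumed transmit power
fiber_loss_db_per_km = 0.25 # typical C-band fiber attenuation
fiber_length_km = 40
connector_loss_db = 1.0     # assumed total connector/splice loss
rx_sensitivity_dbm = -22.0  # assumed receiver sensitivity at the target rate

rx_power_dbm = tx_power_dbm - fiber_loss_db_per_km * fiber_length_km - connector_loss_db
margin_db = rx_power_dbm - rx_sensitivity_dbm
print(f"Received power: {rx_power_dbm:.1f} dBm, margin: {margin_db:.1f} dB")
# With real-time diagnostics, the required residual margin can be kept small,
# allowing a lower transmit power and lower energy consumption.
```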

Another related sustainability feature is deciding whether to operate on low- or high-power mode depending on the optical link budget and fiber length. For example, if the transceiver needs to operate at its maximum capacity, a programmable interface can be controlled remotely to set the amplifiers at their maximum power. However, if the operator uses the transceiver for just half of the maximum capacity, the transceiver can operate with a smaller residual margin and use lower power on the amplifier. The transceiver uses energy more efficiently and sustainably by adapting to these circumstances.
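A hypothetical version of that decision logic might look like the following sketch; the capacity and margin thresholds are illustrative assumptions only.

```python
# Hypothetical rule: pick a low- or high-power amplifier setting based on
# the capacity actually needed and the measured link margin.

def choose_amplifier_mode(required_gbps: int, max_gbps: int, margin_db: float) -> str:
    if required_gbps <= max_gbps / 2 and margin_db > 3.0:
        return "low-power"    # spare capacity and spare margin: save energy
    return "high-power"       # otherwise run the amplifiers at full output

print(choose_amplifier_mode(required_gbps=200, max_gbps=400, margin_db=12.0))  # low-power
```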
If the industry wants interoperability between different transceiver vendors, these kinds of power parameters for remote management and control should also be standardized.
Takeaways
As edge networks get bigger and more complex, network operators and designers need more knobs and degrees of freedom to optimize network architecture and performance and thus scale networks cost-effectively.
The new generation of transceivers has the tools for remote provisioning, management, and control, which gives optical networks more degrees of freedom for optimization and reduces the need for expensive truck rolls. These benefits make edge networks simpler, more affordable, and more sustainable to build and operate.
Tags: access network, capacity, cost, distributed access networks, DWDM, inventory stock, Loss of signal, maintenance, monitor, optical networks, plug-and-play, remote, remote control, scale, scaling, self-tuning, time
What is 100ZR and Why Does it Matter?
In June 2022, transceiver developer II‐VI Incorporated (now Coherent Corp.) and optical networking solutions provider ADVA announced the launch of the industry’s first 100ZR pluggable coherent transceiver. Discussions in the telecom sector about a future beyond 400G coherent technology have usually focused on 800G products, but there is increasing excitement about “downscaling” to 100G coherent products for certain applications in the network edge and business services. This article will discuss the market and technology forces that drive this change in discourse.
The Need for 100G Transmission in Telecom Deployments
The 400ZR pluggables that have become mainstream in datacom applications are too expensive and power-hungry for the optical network edge. Therefore, operators are strongly interested in 100G pluggables that can house coherent optics in compact form factors, just like 400ZR pluggables do. The industry is labeling these pluggables as 100ZR.
A recently released Heavy Reading survey revealed that over 75% of operators surveyed believe that 100G coherent pluggable optics will be used extensively in their edge and access evolution strategy. However, this interest had not really materialized into a 100ZR market because no affordable or power-efficient products were available. The most the industry could offer was 400ZR pluggables that were “powered-down” for 100G capacity.

100ZR and its Enabling Technologies
With the recent II-VI Incorporated and ADVA announcement, the industry is showing its first attempts at a native 100ZR solution that can provide a true alternative to the powered-down 400ZR products. Some of the key specifications of this novel 100ZR solution include:
- A QSFP28 form factor, very similar to but slightly smaller than a QSFP-DD
- 5 Watt power consumption
- C-temp and I-temp certifications to handle harsh environments
The 5-Watt power requirement is a major reduction compared to the 15-Watt specification of 400ZR transceivers in the QSFP-DD form factor. Achieving this spec requires a digital signal processor (DSP) that is specifically optimized for the 100G transceiver.
Transceiver developers often source their DSP, laser, and optical engine from different suppliers, so all these chips are designed separately from each other. This setup reduces the time to market and simplifies the research and design processes, but it comes with performance and power consumption trade-offs.
In such cases, the DSP is like a Swiss army knife: a jack of all trades designed for different kinds of optical engines but a master of none. DSPs co-designed and optimized for their specific optical engine and laser can significantly improve power efficiency. You can read more about co-design approaches in one of our previous articles.
Achieving 100ZR Cost-Efficiency through Scale
Making 100ZR coherent optical transceivers more affordable is also a matter of volume production. As discussed in a previous article, if PIC production volumes can increase from a few thousand chips per year to a few million, the price per optical chip can decrease from thousands of Euros to mere tens of Euros. Such manufacturing scale demands a higher upfront investment, but the result is a more accessible product that more customers can purchase.

Achieving this production goal requires photonics manufacturing chains to learn from electronics and leverage existing electronics manufacturing processes and ecosystems. Furthermore, transceiver developers must look for trusted large-scale manufacturing partners to guarantee a secure and high-volume supply of chips and packages.
If you want to know more about how photonics developers can leverage electronic ecosystems and methods, we recommend you read our in-depth piece on the subject.
Takeaways
As the Heavy Reading survey showed, the interest in 100G coherent pluggable optics for edge/access applications is strong, and operators have identified key use cases within their networks. In the past, there were no true 100ZR solutions that could address this interest, but the use of optimized DSPs and light sources, as well as high-volume manufacturing capabilities, can finally deliver a viable and affordable 100ZR product.
Tags: 100G coherent, 100ZR, DSP, DSPs, edge and access applications, EFFECT Photonics, Photonics
Fit for Platform DSPs
Over the last two decades, power ratings for pluggable modules have increased as we moved from direct detection to more power-hungry coherent transmission: from 2 W for SFP modules to 3.5 W for QSFP modules, and now to 14 W for QSFP-DD and 21.1 W for OSFP form factors. Rockley Photonics researchers estimate that a future electronic switch filled with 800G modules would draw around 1 kW of power just for the optical modules.
Around 50% of a coherent transceiver’s power consumption goes into the digital signal processing (DSP) chip that also performs the functions of clock data recovery (CDR), optical-electrical gear-boxing, and lane switching. Scaling to higher bandwidths leads to even more losses and energy consumption from the DSP chip and its radiofrequency (RF) interconnects with the optical engine.
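To get a feel for where the ~1 kW figure comes from, here is a rough sanity check with assumed numbers; the port count and per-module wattage below are illustrative guesses, not part of the Rockley estimate itself.

```python
# Rough sanity check: a switch faceplate with 32 ports of 800G pluggables
# at an assumed ~30 W each, with roughly half of each module's power in the DSP.
ports = 32
watts_per_module = 30          # assumed power of a future 800G pluggable
total_w = ports * watts_per_module
dsp_share_w = 0.5 * total_w    # ~50% of module power goes to the DSP
print(total_w, dsp_share_w)    # ~960 W for the optics, ~480 W just for DSPs
```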

One way to reduce transceiver power consumption is to design DSPs that take advantage of the material platform of their optical engine. In this article, we will elaborate on what that means for the Indium Phosphide platform.
A Jack of All Trades but a Master of None
Transceiver developers often source their DSP, laser, and optical engine from different suppliers, so all these chips are designed separately from each other. This setup reduces the time to market and simplifies the research and design processes but comes with trade-offs in performance and power consumption.
In such cases, the DSP is like a Swiss army knife: a jack of all trades designed for different kinds of optical engines but a master of none. For example, current DSPs are designed to be agnostic to the material platform of the photonic integrated circuit (PIC) they are connected to, which can be Indium Phosphide (InP) or Silicon. Thus, they do not exploit the intrinsic advantages of these material platforms. Co-designing the DSP chip alongside the PIC can lead to a much better fit between these components.
Co-Designing with Indium Phosphide PICs for Power Efficiency
To illustrate the impact of co-designing PIC and DSP, let’s look at an example. A PIC and a standard platform-agnostic DSP typically operate with signals of differing intensities, so they need some RF analog electronic components to “talk” to each other. This signal power conversion overhead constitutes roughly 2-3 Watts or about 10-15% of transceiver power consumption.
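As a quick arithmetic check of that share, using the 21.1 W OSFP module budget mentioned earlier in this article:

```python
# Interface overhead as a fraction of an assumed 21.1 W OSFP module budget.
module_w = 21.1
for overhead_w in (2.0, 3.0):
    print(f"{overhead_w} W of {module_w} W = {overhead_w / module_w:.0%}")
# -> roughly 9% and 14%, i.e. on the order of the 10-15% share quoted above.
```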

However, the modulator of an InP PIC can run at a lower voltage than a silicon modulator. If this InP PIC and the DSP are designed and optimized together instead of using a standard DSP, the PIC could be designed to run at a voltage compatible with the DSP’s signal output. This way, the optimized DSP could drive the PIC directly without needing the RF analog driver, doing away with most of the power conversion overhead we discussed previously.

Additionally, the optimized DSP could also be programmed to do some additional signal conditioning that minimizes the nonlinear optical effects of the InP material, which can reduce noise and improve performance.
Taking Advantage of Active Components in the InP Platform
Russell Fuerst, EFFECT Photonics’ Vice-President of Digital Signal Processing, gave us an interesting insight about designing for the InP platform in a previous interview:
“When we started doing coherent DSP designs for optical communication over a decade ago, we pulled many solutions from the RF wireless and satellite communications space into our initial designs. Still, we couldn’t bring all those solutions to the optical markets.
“However, when you get more of the InP active components involved, some of those solutions can finally be brought over and utilized. They were not used before in our designs for silicon photonics because silicon is not an active medium and lacked the performance to exploit these advanced techniques.”
For example, the fact that the DSP could control laser and modulator components on the InP can lead to some interesting manipulations of light signals. A DSP that can control these components directly could generate proprietary waveforms or use non-standard constellation and modulation schemes that can boost the performance of a coherent transceiver and increase the capacity of the link.
Takeaways
The biggest problem for DSP designers is still improving performance while reducing power use. This problem can be solved by finding ways to integrate the DSP more deeply with the InP platform, such as letting the DSP control the laser and modulator directly to develop new waveform shaping and modulation schemes. Because the InP platforms have active components, DSP designers can also import more solutions from the RF wireless space.
Tags: analog electronics, building blocks, coherent, dispersion compensation, DSP, energy efficiency, Intra DCI, Photonics, PON, power consumption, reach, simplified
The Power of Integrated Photonic LIDAR
Outside of communications applications, photonics can play a major role in sensing and imaging applications. The most well-known of these sensing applications is Light Detection and Ranging (LIDAR), which is the light-based cousin of RADAR systems that use radio waves.
To put it simply: LIDAR involves sending out a pulse of light, receiving its reflection, and using a computer to study how the environment changed that pulse. It’s a simple but quite powerful concept.
If we send pulses of light to a wall and measure how long they take to come back, we know how far away that wall is. That is the basis of time-of-flight (TOF) LIDAR. If we send a pulse of light with multiple wavelengths to an object, we know where the object is and whether it is moving towards or away from us. That is next-gen LIDAR, known as FMCW LIDAR. These technologies are already used in self-driving cars to figure out the location and distance of other cars. The following video provides a short explainer of how LIDAR works in self-driving cars.
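To make the time-of-flight idea concrete, here is a minimal sketch of the distance calculation; the round-trip time is just an example value.

```python
# Toy time-of-flight calculation: distance from the round-trip time of a pulse.
C = 299_792_458.0  # speed of light in m/s

def tof_distance_m(round_trip_s: float) -> float:
    """Distance to the target; divide by 2 because the pulse travels there and back."""
    return C * round_trip_s / 2

print(tof_distance_m(round_trip_s=667e-9))  # ~100 m for a ~667 ns round trip
```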
Despite their usefulness, the wider implementation of LIDAR systems is limited by their size, weight, and power (SWAP) requirements. Or, to put it bluntly, they are bulky and expensive. For example, maybe you have seen pictures and videos of self-driving cars with a large LIDAR sensor and scanner on the roof of the car, as in the image below.

Making LIDAR systems more affordable and lighter requires integrating the optical components more tightly and manufacturing them at a higher volume. Unsurprisingly, this sounds like a problem that could be solved by integrated photonics.
Replacing Bulk LIDAR with “LIDAR on Chip”
Back in 2019, Tesla CEO Elon Musk famously said that “Anyone relying on LIDAR is doomed”. And his scepticism had some substance to it. LIDAR sensors were clunky and expensive, and it wasn’t clear that they would be a better solution than just using regular cameras with huge amounts of visual analysis software. However, the incentive to dominate the future of the automotive sector was too big, and a technology arms race had already begun to miniaturize LIDAR systems into a single photonic chip.
Let’s provide a key example. A typical LIDAR system will require a mechanical system that moves the light source around to scan the environment. This could be as simple as a 360°-rotating LIDAR scanner or small scanning mirrors that steer the beam. However, an even better solution would be a LIDAR scanner with no moving parts that could be manufactured at a massive scale on a typical semiconductor process.
This is where optical phased array (OPA) systems come in. An OPA system splits the output of a tunable laser into multiple channels and puts a different time delay on each channel. The OPA then recombines the channels, and depending on the time delays assigned, the resulting light beam will come out at a different angle. In other words, an OPA system can steer a beam of light from a semiconductor chip without any moving parts.
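A toy calculation shows how a progressive phase delay between emitters translates into a steering angle; the wavelength, emitter pitch, and phase step below are illustrative assumptions, not figures from any particular OPA design.

```python
# Phased-array beam steering: a progressive phase delay between adjacent
# emitters tilts the output beam away from the chip's normal direction.
import math

def steering_angle_deg(wavelength_um: float, pitch_um: float, phase_step_rad: float) -> float:
    """Angle satisfying sin(theta) = wavelength * phase_step / (2 * pi * pitch)."""
    return math.degrees(math.asin(wavelength_um * phase_step_rad / (2 * math.pi * pitch_um)))

# 1.55 um light, 2 um emitter pitch, quarter-cycle phase step between channels:
print(f"{steering_angle_deg(1.55, 2.0, math.pi / 2):.1f} degrees off-axis")  # ~11 degrees
```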

There is still plenty of development required to bring OPAs into maturity. Victor Dolores Calzadilla, a researcher from the Eindhoven University of Technology (TU/e) explains that “The OPA is the biggest bottleneck for achieving a truly solid-state, monolithic lidar. Many lidar building blocks, such as photodetectors and optical amplifiers, were developed years ago for other applications, like telecommunication. Even though they’re generally not yet optimized for lidar, they are available in principle. OPAs were not needed in telecom, so work on them started much later. This component is the least mature.”
Economics of Scale in LIDAR Systems
Wafer-scale photonics manufacturing demands a higher upfront investment, but the resulting high-volume production line drives down the cost per device. This economy-of-scale principle is the same one behind electronics manufacturing, and the same must be applied to photonics. The more optical components we can integrate into a single chip, the more the price of each component can decrease. The more optical System-on-Chip (SoC) devices can go into a single wafer, the more the price of each SoC can decrease.
Researchers at the Technical University of Eindhoven and the JePPIX consortium have done some modelling to show how this economy-of-scale principle would apply to photonics. If production volumes can increase from a few thousand chips per year to a few million, the price per optical chip can decrease from thousands of Euros to mere tens of Euros. This must be the goal for the LIDAR and automotive industries.
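The sketch below shows the amortization logic behind this trend with purely assumed cost figures; the fixed engineering/mask costs, wafer cost, and good chips per wafer are illustrative guesses, not values taken from the TU/e or JePPIX models.

```python
# Simple amortization sketch (assumed numbers): fixed development and mask
# costs spread over the yearly chip volume, plus a per-chip wafer cost.

def cost_per_chip_eur(volume: int, fixed_costs_eur: float = 10_000_000,
                      wafer_cost_eur: float = 10_000, good_chips_per_wafer: int = 500) -> float:
    variable = wafer_cost_eur / good_chips_per_wafer
    return fixed_costs_eur / volume + variable

for volume in (5_000, 5_000_000):
    print(f"{volume:>9} chips/year -> ~{cost_per_chip_eur(volume):,.0f} EUR per chip")
# With these assumed inputs, the cost falls from a couple of thousand euros per
# chip at a few thousand units per year to a few tens of euros at millions per year.
```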

By integrating all optical components on a single chip, we also shift the complexity from the assembly process to the much more efficient and scalable semiconductor wafer process. Assembling and packaging a device by interconnecting multiple photonic chips increases assembly complexity and costs. On the other hand, combining and aligning optical components on a wafer at a high volume is much easier, which drives down the device’s cost.
Using Proven Photonics Technologies for Automotive Standards
Another challenge for photonics technologies is that they must meet parameters and specifications in the automotive sector that are often harsher than those in the telecom/datacom sector. For example, a target temperature range of −40°C to 125°C is often required, which is much broader than the typical industrial temperature range used in the telecom sector. The packaging of the PIC and its coupling to fiber and free space is particularly sensitive to these temperature changes.
| Temperature Standard | Min (°C) | Max (°C) |
| --- | --- | --- |
| Commercial (C-temp) | 0 | 70 |
| Extended (E-temp) | -20 | 85 |
| Industrial (I-temp) | -40 | 85 |
| Automotive / Full Military | -40 | 125 |
Fortunately, a substantial body of knowledge already exists to make integrated photonics compatible with harsh environments like those of outer space. After all, photonic integrated circuits (PICs) use similar materials to their electronic counterparts, which have already been qualified for space and automotive applications. Commercial solutions, such as those offered by PHIX Photonics Assembly, Technobis IPS, and the PIXAPP Photonic Packaging Pilot Line, are now available.
Takeaways
Photonics technology must be built on a wafer-scale process that can produce millions of chips in a month. When we can show the market that photonics can be as easy to use as electronics, that will trigger a revolution in the use of photonics worldwide.
The broader availability of photonic devices will take photonics into new applications, such as those of LIDAR and the automotive sector. With a growing integrated photonics industry, LIDAR can become lighter, avoid moving parts, and be manufactured in much larger volumes that reduce the cost of LIDAR devices. Integrated photonics is the avenue for LIDAR to become more accessible to everyone.
Tags: accessible, affordable, automotive, automotive sector, beamforming, discrete, economics of scale, efficient, electronics, laser, LIDAR, phased arrays, photonic integration, power consumption, self-driving car, self-driving cars, space, wafer
How To Make a Photonic Integrated Circuit
Photonics is one of the enabling technologies of the future. Light is the fastest information carrier in the universe and can transmit this information while dissipating less heat and energy than electrical signals. Thus, photonics can dramatically increase the speed, reach, and flexibility of communication networks and cope with the ever-growing demand for more data. And it will do so at a lower energy cost, decreasing the Internet’s carbon footprint. Meanwhile, fast and efficient photonic signals have massive potential for sensing and imaging applications in medical devices, automotive LIDAR, agricultural and food diagnostics, and more.
Given its importance, we want to explain how photonic integrated circuits (PICs), the devices that enable all these applications, are made.
Designing a PIC
The process of designing a PIC should translate an initial application concept into a functioning photonics chip that can be manufactured. In a short course at the OFC 2018 conference, Wim Bogaerts from Ghent University summarized the typical PIC design process in the steps we will describe below.
- Concept and Specifications: We first have to define what goes into the chip. A chip architect normally spends time with the customer to understand what the customer wants to achieve with the chip and all the conditions and situations where the chip will be used. After these conversations, the chip application concept becomes a concrete set of specifications that are passed on to the team that will design the internals of the chip. These specs will set the performance targets of the PIC design.
- Design Function: Having defined the specs, the design team will develop a schematic circuit diagram that captures the function of the PIC. This diagram is separated into several functional blocks: some of them might already exist, and some of them might have to be built. These blocks include lasers, modulators, detectors, and other components that can manipulate light in one way or another.
- Design Simulation: Making a chip costs a lot of money and time. With such risks, a fundamental element of chip design is to accurately predict the chip’s behavior after it is manufactured. The functional blocks are placed together, and their behavior is simulated using various physical models and simulation tools. The design team often uses a few different simulation approaches to reduce the risk of failure after manufacturing.
- Design Layout: Now, the design team must translate the functional chip schematic into a proper design layout that can be manufactured. The layout consists of layers, component positions, and geometric shapes that represent the actual manufacturing steps. The team uses software that translates these functions into the geometric patterns to be manufactured, with human input required for the trickiest placement and geometry decisions.
- Check Design Rules: Every chip fabrication facility will have its own set of manufacturing rules. In this step, the design team verifies that the layout agrees with these rules.
- Verify Design Function: This is a final check to ensure that the layout actually performs as intended in the original circuit schematic. The layout process usually leads to new component placements and parasitic effects that were not considered in the original circuit schematic. These tests might require the design team to revisit previous functional or layout steps.
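As a rough sketch of how iterative this flow is, the placeholder pipeline below loops through simulation, layout, rule checking, and verification until the design converges. The step functions are stand-ins for illustration only, not real photonic design automation tools.

```python
# Highly simplified sketch of the iterative PIC design loop described above.

def design_pic(specs: dict, max_iterations: int = 3) -> str:
    schematic = f"schematic for {specs['function']}"          # design function
    for i in range(max_iterations):
        simulation_ok = True                                  # design simulation (placeholder)
        layout = f"layout v{i + 1} of {schematic}"            # design layout
        drc_ok = True                                         # check foundry design rules
        lvs_ok = (i >= 1)                                     # verify layout vs. schematic;
                                                              # pretend the first pass fails
        if simulation_ok and drc_ok and lvs_ok:
            return f"{layout} ready for tape-out"
        # otherwise revisit the layout (or even the schematic) and iterate
    return "design did not converge within the allowed iterations"

print(design_pic({"function": "coherent receiver front-end"}))
```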
The Many Steps of Fabricating a PIC
Manufacturing semiconductor chips for photonics and electronics is one of the most complex procedures in the world. For example, back in his university days, EFFECT Photonics President Boudewijn Docter described a fabrication process with a total of 243 steps!
Yuqing Jiao, Associate Professor at the Eindhoven University of Technology (TU/e), explains the fabrication process in a few basic, simplified steps:
- Grow or deposit your chip material
- Print a pattern on the material
- Etch the printed pattern into your material
- Do some cleaning and extra surface preparation
- Go back to step 1 and repeat as needed
Real life is, of course, a lot more complicated and will require cycling through these steps tens of times, leading to processes with more than 200 total steps. Let’s go through these basic steps in a bit more detail.
- Layer Epitaxy and Deposition: Different chip elements require different semiconductor material layers. These layers can be grown on the semiconductor wafer via a process called epitaxy or deposited via other methods (which are summarized in this article).
- Lithography (i.e. printing): There are a few lithography methods, but the one used for high-volume chip fabrication is projection optical lithography. The semiconductor wafer is coated with a photosensitive polymer film called a photoresist. Meanwhile, the design layout pattern is transferred to an opaque material called a mask. The optical lithography system projects the mask pattern onto the photoresist. The exposed photoresist is then developed (like photographic film) to complete the pattern printing.
- Etching: Having “printed” the pattern on the photoresist, it is time to remove (or etch) parts of the semiconductor material to transfer the pattern from the resist into the wafer. There are several techniques for etching the material, which are summarized in this article.
- Cleaning and Surface Preparation: After etching, a series of steps will clean and prepare the surface before the next cycle.
- Passivation: Adding layers of dielectric material (such as silica) to “passivate” the chip and make it more tolerant to environmental effects.
- Planarization: Making the surface flat in preparation for future lithography and etching steps.
- Metallization: Depositing metal components and films on the wafer. This might be done for future lithography and etching steps, or at the end to add electrical contacts to the chip.
Figure 6 summarizes how an InP photonic device looks after the steps of layer epitaxy, etching, dielectric deposition and planarization, and metallization.
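As a back-of-the-envelope illustration of how those step counts add up, the numbers below are assumptions chosen only to show the order of magnitude, not an actual process recipe.

```python
# If each lithography cycle involves roughly eight process steps (deposition,
# coating, exposure, development, etching, cleaning, planarization,
# metallization/inspection) and a full InP process needs ~30 such cycles,
# the total lands in the same ballpark as the 243-step process mentioned above.
steps_per_cycle = 8
cycles = 30
print(steps_per_cycle * cycles)  # 240 steps
```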
The Expensive Process of Testing and Packaging
Chip fabrication is a process with many sources of variability, and therefore much testing is required to make sure that the fabricated chip agrees with what was originally designed and simulated. Once that is certified and qualified, the process of packaging and assembling a device with the PIC follows.
While packaging, assembly, and testing are only a small part of the cost of electronic systems, the reverse happens with photonic systems. Researchers at the Technical University of Eindhoven (TU/e) estimate that for most Indium Phosphide (InP) photonics devices, the cost of packaging, assembly, and testing can reach around 80% of the total module cost. There are many research efforts in motion to reduce these costs, which you can learn more about in one of our previous articles.

Especially after the first fabrication run of a new chip, there will be a few rounds of characterization, validation, and revisions to make sure the chip performs up to spec. After this first round of characterization and validation, the chip must be made ready for mass production, which requires a series of reliability tests under several different environmental conditions. You can learn more about this process in our previous article on industrial hardening. For example, different applications require different certifications for the temperature ranges in which the chip must operate.
| Temperature Standard | Min (°C) | Max (°C) |
| --- | --- | --- |
| Commercial (C-temp) | 0 | 70 |
| Extended (E-temp) | -20 | 85 |
| Industrial (I-temp) | -40 | 85 |
| Automotive / Full Military | -40 | 125 |
Takeaways
The process of making photonic integrated circuits is incredibly long and complex, and the steps we described in this article are a mere simplification of the entire process. It requires a tremendous amount of knowledge in chip design, fabrication, and testing from experts in different fields all around the world. EFFECT Photonics was founded by people who fabricated these chips themselves, understand the process intimately, and have developed the connections and network to develop cutting-edge PICs at scale.
Tags: building blocks, c-temp, coherent, die testing, DSP, electron beam lithography, faults, I-temp, imprint lithography, InP, interfaces, optical lithography, reach, scale, wafer testing
What’s an ITLA and Why Do I Need One?
The tunable laser is a core component of every optical communication system, both direct detect and coherent. The laser generates the optical signal that is then modulated and sent over the optical fiber. Thus, the purity and strength of this signal will have a massive impact on the bandwidth and reach of the communication system.
Depending on the material platform, system architecture, and requirements, optical system developers must balance laser parameters—tunability, purity, size, environmental resistance, and power—for the best system performance.
In this article, we will talk about one specific kind of laser—the integrable tunable laser assembly (ITLA)—and when it is needed.
When Do I Need an ITLA?
The promise of silicon photonics (SiP) is compatibility with existing electronic manufacturing ecosystems and infrastructure. Integrating silicon components on a single chip with electronics manufacturing processes can dramatically reduce the footprint and the cost of optical systems and open avenues for closer integration with silicon electronics on the same chip. However, the one thing silicon photonics misses is the laser component.
Silicon is not a material that can naturally emit laser light from electrical signals. Decades of research have created silicon-based lasers with more unconventional nonlinear optical techniques. Still, they cannot match the power, efficiency, tunability, and cost-at-scale of lasers made from indium phosphide (InP) and other III-V compound semiconductors.
Therefore, making a suitable laser for silicon photonics does not mean making an on-chip laser from silicon but an external laser from III-V materials such as InP. This light source will be coupled via optical fiber to the silicon components on the chip while maintaining a low enough footprint and cost for high-volume integration. The external laser typically comes in the form of an integrable tunable laser assembly (ITLA).

Meanwhile, a photonic chip developer that uses the InP platform for its entire chip instead of silicon can use a laser integrated directly on the chip. Choosing between an external or an integrated laser depends on the transceiver developer’s device requirements, supply chain, and manufacturing facilities and processes. You can read more about the differences in this article.
What is an ITLA?
In summary, an integrable tunable laser assembly (ITLA) is a small external laser that can be coupled to an optical system (like a transceiver) via optical fiber. This ITLA must maintain a low enough footprint and cost for high-volume integration with the optical system.
Since the telecom and datacom industries want to pack more and more transceivers on a single router faceplate, ITLAs need to maintain performance while moving to smaller footprints and lower power consumption and cost.
Fortunately, such ambitious specifications became possible thanks to improved photonic integration technology. The original 2011 ITLA standard from the Optical Internetworking Forum (OIF) was 74mm long by 30.5mm wide. By 2015, most tunable lasers shipped in a micro-ITLA form factor that cut the original ITLA footprint in half. In 2021, the nano-ITLA form factor designed for QSFP-DD and OSFP modules had once again cut the micro-ITLA footprint almost in half. The QSFP-DD modules that house the full transceiver are smaller (78mm by 20mm) than the original ITLA form factor. Stunningly, tunable laser manufacturers achieved this size reduction without impacting laser purity and power.
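A quick footprint calculation using the dimensions quoted above makes the progression concrete; the micro- and nano-ITLA areas are treated simply as successive halvings of the original footprint, as described, rather than exact package dimensions.

```python
# Footprint arithmetic for the ITLA generations mentioned above.
itla_mm2 = 74 * 30.5                 # original 2011 ITLA footprint
micro_itla_mm2 = itla_mm2 / 2        # ~2015 micro-ITLA: about half
nano_itla_mm2 = micro_itla_mm2 / 2   # ~2021 nano-ITLA: halved again
qsfp_dd_mm2 = 78 * 20                # footprint of the QSFP-DD module that hosts it

print(itla_mm2, micro_itla_mm2, nano_itla_mm2, qsfp_dd_mm2)
# ~2257, ~1128, ~564, and 1560 mm^2: the nano-ITLA occupies only about a third
# of the module footprint, leaving room for the rest of the transceiver.
```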

The Exploding Market for ITLAs
With the increasing demand for coherent transceivers, many companies have performed acquisitions and mergers that allow them to develop transceiver components internally and thus secure their supply. LightCounting predicts that this consolidation will decrease the sales of modulator and receiver components but that the demand for tunable lasers (mainly in the form of ITLAs) will continue to grow. The forecast expects the tunable laser market for transceivers to reach a size of $400M in 2026. We talk more about these market forces in one of our previous articles.

However, the industry consolidation will make it harder for component and equipment manufacturers to source lasers from independent vendors for their transceivers. The market needs more independent vendors to provide high-performance ITLA components that adapt to different datacom or telecom provider needs. Following these trends, at EFFECT Photonics we are not only developing the capabilities to provide a complete, fully-integrated coherent transceiver solution but also the ITLA units needed by vendors who use external lasers.
Takeaways
The world is moving towards tunability. As telecom and datacom industries seek to expand their network capacity without increasing their fiber infrastructure, the sales of tunable transceivers will explode in the coming years. These transceivers need tunable lasers with smaller sizes and lower power consumption than ever.
Some transceivers will use lasers integrated directly on the same chip as the optical engine. Others will have an external laser coupled via fiber to the optical engine. The need for these external lasers led to the development of the ITLA form factors, which get smaller and smaller with every generation.
Tags: coherent, Density, discrete, DSP, full integration, high-performance, independent, InP, ITLA, micro ITLA, nano ITLA, power consumption, reach, SiP, size, tunable, tunable lasers, versatile
What are FEC and PCS, and Why do They Matter?
Coherent transmission has become a fundamental component of optical networks to address situations where direct detect technology cannot provide the required capacity and reach.
While direct detect transmission only uses the amplitude of the light signal, coherent optical transmission manipulates three different light properties: amplitude, phase, and polarization. These additional degrees of modulation allow for faster optical signals without compromising the transmission distance. Furthermore, coherent technology enables capacity upgrades without replacing the expensive physical fiber infrastructure on the ground.
However, the demand for data never ceases, and with it, developers of digital signal processors (DSPs) have had to figure out ways to improve the efficiency of coherent transmission. In this article, we will briefly describe the impact of two algorithms that DSP developers use to make coherent transmission more efficient: Forward Error Correction (FEC) and Probabilistic Constellation Shaping (PCS).
What is Forward Error Correction?
Forward Error Correction (FEC) implemented by DSPs has become a vital component of coherent communication systems. FEC makes the coherent link much more tolerant to noise than a direct detect system and enables much longer reach and higher capacity. Thanks to FEC, coherent links can handle bit error rates that are literally a million times higher than a typical direct detect link.
Let’s provide a high-level overview of how FEC works. An FEC encoder adds a series of redundant bits (called overhead) to the transmitted data stream. The receiver can use this overhead to check for errors without asking the transmitter to resend the data.

In other words, FEC algorithms allow the DSP to enhance the link performance without changing the hardware. This enhancement is analogous to imaging cameras: image processing algorithms allow the lenses inside your phone camera to produce a higher-quality image.
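To make the idea of redundancy concrete, here is a deliberately naive toy code: each bit is repeated three times and the receiver takes a majority vote. Real coherent FECs such as CFEC or oFEC use far more sophisticated codes with much less overhead; this sketch only illustrates correcting an error without retransmission.

```python
# Toy FEC: triple-repetition encoding with majority-vote decoding.

def encode(bits):
    return [b for bit in bits for b in (bit, bit, bit)]    # 200% overhead

def decode(received):
    decoded = []
    for i in range(0, len(received), 3):
        triple = received[i:i + 3]
        decoded.append(1 if sum(triple) >= 2 else 0)       # majority vote
    return decoded

data = [1, 0, 1, 1]
tx = encode(data)
tx[4] ^= 1                     # flip one bit to simulate channel noise
print(decode(tx) == data)      # True: the error was corrected at the receiver
```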
We must highlight that FEC is a block of an electronic DSP engine with its own specialized circuitry and algorithms, so it is a separate piece of intellectual property. Therefore, developing the entire DSP electronic engine (see Figure 2 for the critical component blocks of a DSP) requires ownership or access to specific FEC intellectual property.

What is Probabilistic Constellation Shaping?
DSP developers can transmit more data by transmitting more states in their quadrature-amplitude modulation process. The simplest kind of QAM (4-QAM) uses four different states (usually called constellation points), combining two different intensity levels and two different phases of light.
By using more intensity levels and phases, more bits can be transmitted in one go. State-of-the-art commercially available 400ZR transceivers typically use 16-QAM, with sixteen different constellation points that arise from combining four different intensity levels and four phases. However, this increased transmission capacity comes at a price: a signal with more modulation orders is more susceptible to noise and distortions. That’s why these transceivers can transmit 400Gbps over 100km but not over 1000km.
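The trade-off follows from simple arithmetic: an M-QAM constellation carries log2(M) bits per symbol, so denser constellations pack more data into each symbol but leave less room between points for noise.

```python
# Bits carried per symbol grow logarithmically with the number of constellation points.
import math
for m in (4, 16, 64):
    print(f"{m}-QAM: {int(math.log2(m))} bits per symbol")
# 4-QAM carries 2 bits per symbol and 16-QAM carries 4: twice the data per
# symbol, at the cost of less tolerance to noise and distortion.
```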
One of the most remarkable recent advances in DSPs to increase the reach of light signals is Probabilistic Constellation Shaping (PCS). In the typical 16-QAM modulation used in coherent transceivers, each constellation point has the same probability of being used. This is inefficient since the outer constellation points that require more power have the same probability as the inner constellation points that require lower power.

PCS uses the low-power inner constellation points more frequently and the outer constellation points less frequently, as shown in Figure 3. This feature provides many benefits, including improved tolerance to distortions and easier system optimization to specific bit transmission requirements. If you want to know more about it, please read the explainers here and here.
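A minimal numerical sketch of the shaping idea: weight each 16-QAM point with a probability proportional to exp(-λ·energy), so low-energy inner points are used more often than high-energy outer ones. The λ value here is arbitrary and chosen only for illustration.

```python
# Probabilistic shaping of a 16-QAM grid: lower average power than uniform use.
import math
from itertools import product

points = list(product((-3, -1, 1, 3), repeat=2))           # 16-QAM grid
energies = [i * i + q * q for i, q in points]

uniform_power = sum(energies) / len(energies)

lam = 0.05
weights = [math.exp(-lam * e) for e in energies]
probs = [w / sum(weights) for w in weights]
shaped_power = sum(p * e for p, e in zip(probs, energies))

print(f"uniform: {uniform_power:.2f}, shaped: {shaped_power:.2f}")
# The shaped distribution has lower average power for the same grid, which
# can be traded for extra noise tolerance or reach.
```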
The Importance of Standardization and Reconfigurability
Algorithms like FEC and PCS have usually been proprietary technologies. Equipment and component manufacturers closely guarded their algorithms because they provided a critical competitive advantage. However, this often meant that coherent transceivers from different vendors could not operate with each other, and a single vendor had to be used for the entire network deployment.
Over time, coherent transceivers have increasingly needed to become interoperable, leading to some standardization in these algorithms. For example, the 400ZR standard for data center interconnects uses a public algorithm called concatenated FEC (CFEC). In contrast, some 400ZR+ MSA standards use open FEC (oFEC), which provides a more extended reach at the cost of a bit more bandwidth and energy consumption. For the longest possible link lengths (500+ kilometers), proprietary FECs become necessary for 400G transmission. Still, at least the public FEC standards have achieved interoperability for a large segment of the 400G transceiver market. Perhaps in the future, this could happen with PCS methods.
Future DSPs could switch among different algorithms and methods to adapt to network performance and use cases. For example, let’s look at the case of upgrading a long metro link of 650km running at 100 Gbps with open FEC. The operator needs to increase that link capacity to 400 Gbps, but open FEC could struggle to provide the necessary link performance. However, if the DSP can be reconfigured to use a proprietary FEC standard, the transceiver will be able to handle this upgraded link. Similarly, longer reach could be achieved if the DSP activates its PCS feature.
| | 400ZR | OpenZR+ | Proprietary Long Haul |
| --- | --- | --- | --- |
| Target Application | Edge data center interconnect | Metro, regional data center interconnect | Long-haul carrier |
| Target Reach @ 400G | 120 km | 500 km | 1000 km |
| Form Factor | QSFP-DD/OSFP | QSFP-DD/OSFP | QSFP-DD/OSFP |
| FEC | CFEC | oFEC | Proprietary |
| Standards / MSA | OIF | OpenZR+ MSA | Proprietary |
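A hypothetical selection rule mirroring this table might look like the sketch below; the reach thresholds follow the table, but the function itself is an illustration, not any standardized API.

```python
# Hypothetical profile selection for a 400G link based on target reach.

def select_profile(reach_km: float) -> str:
    if reach_km <= 120:
        return "400ZR (CFEC)"
    if reach_km <= 500:
        return "OpenZR+ (oFEC)"
    return "Proprietary long-haul FEC (possibly with PCS enabled)"

print(select_profile(650))   # the 650 km metro upgrade example discussed above
```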
Takeaways
The entire field of communication technology can arguably be summarized with a single question: how can we transmit more information into a single frequency-limited signal over the longest possible distance?
DSP developers have many tools to answer this question, and two of them are FEC and PCS. Both technologies make coherent links much more tolerant of noise and can extend their reach. Future pluggables that handle different use cases must use different modulation, coding, and error-correction schemes to adapt to different network requirements.
There are still many challenges ahead to improve DSPs and make them transmit even more bits in more energy-efficient ways. Now that EFFECT Photonics has incorporated talent and intellectual property from Viasat’s Coherent DSP team, we hope to contribute to this ongoing research and development and make transceivers faster and more sustainable than ever.
Tags: coherent, constellation shaping, DSP, DSPs, error compensation, FEC, PCS, power, Proprietary, reach, reconfigurable, standardized, standards
The Light Path to a Coherent Cloud Edge
Smaller data centers placed locally have the potential to minimize latency, overcome inconsistent connections, and…
Smaller data centers placed locally have the potential to minimize latency, overcome inconsistent connections, and store and compute data closer to the end user. These benefits are causing the global market for edge data centers to explode, with PWC predicting that it will more than triple from $4 billion in 2017 to $13.5 billion in 2024.
As edge data centers become more common, the issue of interconnecting them becomes more prominent. This situation motivated the Optical Internetworking Forum (OIF) to create the 400ZR and ZR+ standards for pluggable modules. With small enough modules to pack a router faceplate densely, the datacom sector could profit from a 400ZR solution for high-capacity data center interconnects of up to 80km. Cignal AI forecasts that 400ZR shipments will dominate the edge applications, as shown in the figure below.

The 400ZR standard has made coherent technology and dense wavelength division multiplexing (DWDM) the dominant solution in the metro data center interconnects (DCIs) space. Datacom provider operations teams found the simplicity of coherent pluggables very attractive. There was no need to install and maintain additional amplifiers and compensators as in direct detect technology. A single coherent transceiver plugged into a router could fulfill the requirements.
However, there are still obstacles that prevent coherent from becoming dominant in shorter-reach DCI links at the campus (< 10km distance) and intra-datacenter (< 2km distance) level. These spaces require more optical links and transceivers, and coherent technology is still considered too power-hungry and expensive to become the de-facto solution here.
Fortunately, there are avenues for coherent technology to overcome these barriers. By embracing multi-laser arrays, DSP co-design, and electronic ecosystems, coherent technology can mature and become a viable solution for every data center interconnect scenario.
The Promise of Multi-Laser Arrays
Earlier this year, Intel Labs demonstrated an eight-wavelength laser array fully integrated on a silicon wafer. These milestones are essential for optical transceivers because the laser arrays can allow for multi-channel transceivers that are more cost-effective when scaling up to higher speeds.
Let’s say we need an intra-DCI link with 1.6 Terabits/s of capacity. There are three ways we could implement it:
- Four modules of 400G: This solution uses existing off-the-shelf modules but has the largest footprint. It requires four slots in the router faceplate and an external multiplexer to merge these into a single 1.6T channel.
- One module of 1.6T: This solution will not require the external multiplexer and occupies just one plug slot on the router faceplate. However, making a single-channel 1.6T device has the highest complexity and cost.
- One module with four internal channels of 400G: A module with an array of four lasers (and thus four different 400G channels) will only require one plug slot on the faceplate while avoiding the complexity and cost of the single-channel 1.6T approach.

Multi-laser array and multi-channel solutions will become increasingly necessary to increase link capacity in coherent systems. They will not need more slots in the router faceplate while simultaneously avoiding the higher cost and complexity of increasing the speed with just a single channel.
Co-designing DSP and Optical Engine for Efficiency and Performance
Transceiver developers often source their DSP, laser, and optical engine from different suppliers, so all these chips are designed separately from each other. This setup reduces the time to market and simplifies the research and design processes but comes with trade-offs in performance and power consumption.
In such cases, the DSP is like a Swiss army knife: a jack of all trades designed for different kinds of optical engines but a master of none. For example, current DSPs are designed to be agnostic to the material platform of the photonic integrated circuit (PIC) they are connected to, which can be Indium Phosphide (InP) or Silicon. Thus, they do not exploit the intrinsic advantages of these material platforms. Co-designing the DSP chip alongside the PIC can lead to a much better fit between these components.
To illustrate the impact of co-designing PIC and DSP, let’s look at an example. A PIC and a standard platform-agnostic DSP typically operate with signals of differing intensities, so they need some RF analog electronic components to “talk” to each other. This signal power conversion overhead constitutes roughly 2-3 Watts or about 10-15% of transceiver power consumption.

However, the modulator of an InP PIC can run at a lower voltage than a silicon modulator. If this InP PIC and the DSP are designed and optimized together instead of using a standard DSP, the PIC could be designed to run at a voltage compatible with the DSP’s signal output. This way, the optimized DSP could drive the PIC directly without needing the RF analog driver, doing away with most of the power conversion overhead we discussed previously.

The optimized DSP could also be programmed to perform additional signal conditioning that minimizes the nonlinear optical effects of the InP material, which can reduce noise and improve performance.
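As a rough back-of-the-envelope check of the overhead figures mentioned above, the short sketch below compares a module that needs an RF analog driver with a co-designed one that does not. Both numbers are assumptions for illustration; the driver figure simply sits in the 2-3 W range quoted above.

```python
# Back-of-the-envelope estimate of the RF driver / power conversion overhead.
# Both figures below are assumptions for illustration, not measured values.
module_power_w = 20.0        # assumed total power of a coherent pluggable
rf_driver_overhead_w = 2.5   # assumed overhead, within the 2-3 W range above

fraction = rf_driver_overhead_w / module_power_w
codesigned_power_w = module_power_w - rf_driver_overhead_w

print(f"Driver overhead: {fraction:.1%} of module power")          # 12.5%
print(f"Co-designed module estimate: {codesigned_power_w:.1f} W")  # 17.5 W
```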
Driving Scale Through Existing Electronic Ecosystems
Making coherent optical transceivers more affordable is a matter of volume production. As discussed in a previous article, if PIC production volumes can increase from a few thousand chips per year to a few million, the price per optical chip can decrease from thousands of Euros to mere tens of Euros. Achieving this production goal requires photonics manufacturing chains to learn from electronics and leverage existing electronics manufacturing processes and ecosystems.
While vertically-integrated PIC development has its strengths, a fabless model in which developers outsource their PIC manufacturing to a large-scale foundry is the simplest way to scale to production volumes of millions of units. Fabless PIC developers can remain flexible and lean, relying on trusted large-scale manufacturing partners to guarantee a secure and high-volume supply of chips. Furthermore, the fabless model allows photonics developers to concentrate their R&D resources on their end market and designs instead of costly fabrication facilities.
Further progress must also be made in the packaging, assembly, and testing of photonic chips. While these processes are only a small part of the cost of electronic systems, the reverse happens with photonics. To become more accessible and affordable, the photonics manufacturing chain must become more automated and standardized. It must move towards proven and scalable packaging methods that are common in the electronics industry.
If you want to know more about how photonics developers can leverage electronic ecosystems and methods, we recommend you read our in-depth piece on the subject.
Takeaways
Coherent transceivers are already established as the solution for metro Data Center Interconnects (DCIs), but they need to become more affordable and less power-hungry to fit the intra- and campus DCI application cases. Fortunately, there are several avenues for coherent technology to overcome these cost and power consumption barriers.
Multi-laser arrays can avoid the higher cost and complexity of increasing capacity with just a single transceiver channel. Co-designing the optics and electronics can allow the electronic DSP to exploit the intrinsic advantages of specific photonics platforms such as indium phosphide. Finally, leveraging electronic ecosystems and processes is vital to increase the production volumes of coherent transceivers and make them more affordable.
By embracing these pathways to progress, coherent technology can mature and become a viable solution for every data center interconnect scenario.
Tags: campus, cloud, cloud edge, codesign, coherent, DCI, DSP, DSPs, DWDM, integration, intra, light sources, metro, modulator, multi laser arrays, photonic integration, PIC, power consumption, wafer testing
Leveraging Electronic Ecosystems in Photonics
Thanks to wafer-scale technology, electronics have driven down the cost per transistor for many decades.…
Thanks to wafer-scale technology, electronics have driven down the cost per transistor for many decades. This allowed the world to enjoy chips that, with every generation, became smaller and provided exponentially more computing power for the same amount of money. This scale-up process is how everyone now has a computer processor in their pocket that is millions of times more powerful than the most advanced computers of the 1960s that landed men on the moon.
This progress in electronics integration is a key factor that brought down the size and cost of coherent transceivers, packing more bits than ever into smaller areas. However, photonics has struggled to keep up with electronics, with the photonic components dominating the cost of transceivers. If the transceiver cost curve does not continue to decrease, it will be challenging to achieve the goal of making them more accessible across the entire optical network.
To trigger a revolution in the use of photonics worldwide, it needs to be as easy to use as electronics. In the words of our Chief Technology Officer, Tim Koene: “We need to buy photonics from a catalog as we do with electronics, have datasheets that work consistently, be able to solder it to a board and integrate it easily with the rest of the product design flow.”
This goal requires photonics manufacturing to leverage existing electronics manufacturing processes and ecosystems. Photonics must embrace fabless models, chips that can survive soldering steps, and electronic packaging and assembly methods.
The Advantages of Moving to a Fabless Model
Increasing the volume of photonics manufacturing is a big challenge. Some photonic chip developers manufacture their chips in-house within their fabrication facilities. This approach has some substantial advantages, giving component manufacturers complete control over their production process.

However, this approach has its trade-offs when scaling up. If a vertically-integrated chip developer wants to scale up in volume, they must make a hefty capital expenditure (CAPEX) in more equipment and personnel. They must develop new fabrication processes as well as hire and train personnel. Fabs are expensive not only to build but also to operate. Unless they can be kept at nearly full utilization, operating expenses (OPEX) also drain the facility owners' finances.
Especially in the case of an optical transceiver market that is not as big as that of consumer electronics, it’s hard not to wonder whether that initial investment is cost-effective. For example, LightCounting estimates that 55 million optical transceivers were sold in 2021, while the International Data Corporation estimates that 1.4 billion smartphones were sold in 2021. The latter figure is 25 times larger than that of the transceiver market.
Electronics manufacturing experienced a similar problem during its boom in the 1970s and 80s, with smaller chip start-ups facing almost insurmountable barriers to market entry because of the massive CAPEX required. Furthermore, the large-scale electronics manufacturing foundries had excess production capacity that drained their OPEX. The large-scale foundries ended up selling that excess capacity to the smaller chip developers, who became fabless. In this scenario, everyone ended up winning. The foundries serviced multiple companies and could run their facilities at full capacity, while the fabless companies could outsource manufacturing and reduce their expenditures.
This fabless model, with companies designing and selling the chips but outsourcing the manufacturing, should also be the way to go for photonics. Instead of going through a costly, time-consuming expansion of their own facilities, fabless photonics developers outsource the troubles of scaling up, which (from their perspective) become as simple as placing a purchase order. Furthermore, the fabless model allows photonics developers to concentrate their R&D resources on the end market. This is the simplest way forward if photonics moves into million-scale volumes.
Adopting Electronics-Style Packaging
While packaging, assembly, and testing are only a small part of the cost of electronic systems, the reverse happens with photonic integrated circuits (PICs). Researchers at the Technical University of Eindhoven (TU/e) estimate that for most Indium Phosphide (InP) photonics devices, the cost of packaging, assembly, and testing can reach around 80% of the total module cost.

To become more accessible and affordable, the photonics manufacturing chain must become more automated and standardized. The lack of automation makes manufacturing slower and prevents data collection that can be used for process control, optimization, and standardization.
One of the best ways to reach these automation and standardization goals is to learn from electronics packaging, assembly, and testing methods that are already well-known and standardized. After all, building a special production line is much more expensive than modifying an existing production flow.
There are several ways in which photonics packaging, assembly, and testing can be made more affordable and accessible. Below are a few examples:
- Passive alignments: Connecting optical fiber to PICs is one of the most complicated packaging and assembly problems for optical devices. The best alignments are usually achieved via active alignment processes in which feedback from the PIC is used to align the fiber better. Passive alignment processes do not use such feedback. They cannot achieve the best possible alignment but are much more affordable.
- BGA-style packaging: Ball-grid array packaging has grown popular among electronics manufacturers. It places the chip connections under the chip package, allowing more efficient use of space in circuit boards, a smaller package size, and better soldering.
- Flip-chip bonding: A process where solder bumps are deposited on the chip in the final fabrication step. The chip is flipped over and aligned with a circuit board for easier soldering.
These might be novel technologies for photonics developers who have started implementing them in the last five or ten years. However, the electronics industry embraced these technologies 20 or 30 years ago. Making these techniques more widespread will make a massive difference in photonics’ ability to scale up and become as available as electronics.
Making Photonics Chips That Can Survive Soldering
Soldering remains another tricky step for photonics assembly and packaging. Photonics device developers usually custom-order a PIC and then wire-bond and die-bond it to the electronics. However, some elements in the PIC cannot handle soldering temperatures, making it difficult to solder into an electronics board. Developers often must glue the chip onto the board with a non-standard process that needs additional verification for reliability.
This goes back to the issue of process standardization. Current PICs often use different materials and processes from electronics, such as optical fiber connections and metals for chip interconnects, that cannot survive a standard soldering process.
Adopting BGA-style packaging and flip-chip bonding techniques will make it easier for PICs to survive this soldering process. There is ongoing research and development worldwide, including at EFFECT Photonics, to make fiber coupling and other PIC aspects compatible with these electronic packaging methods.
PICs that can handle being soldered to circuit boards will allow the industry to build optical subassemblies that can be made more readily available in the open market and can go into trains, cars, or airplanes.
Takeaways
Photonics must leverage existing electronics ecosystems and processes to scale up and have a greater global impact. Our Chief Technology Officer, Tim Koene, explains what this means:
“Photonics technology needs to integrate more electronic functionalities into the same package. It needs to build photonic integration and packaging support that plays by the rules of existing electronic manufacturing ecosystems. It needs to be built on a semiconductor manufacturing process that can produce millions of chips in a month.”
As soon as photonics can achieve these larger production volumes, it can reach price points and improvements in quality and yield closer to those of electronics. When we show the market that photonics can be as easy to use as electronics, that will trigger a revolution in its worldwide use.
This vision is one of our guiding lights at EFFECT Photonics, where we aim to develop optical systems that can have an impact all over the world in many different applications.
Tags: automotive sector, BGA style packaging, compatible, computing power, cost per mm, efficient, electronic, electronic board, electronics, fabless, Photonics, risk, scale, soldering, transistor, wafer scale
The Evolution to 800G and Beyond
The demand for data and other digital services is rising exponentially. From 2010 to 2020,…
The demand for data and other digital services is rising exponentially. From 2010 to 2020, the number of Internet users worldwide doubled, and global internet traffic increased 12-fold. From 2020 to 2026, internet traffic will likely increase 5-fold. To meet this demand, datacom and telecom operators need to constantly upgrade their transport networks.
400 Gbps links are becoming the standard for links all across telecom transport networks and data center interconnects, but providers are already thinking about the next steps. LightCounting forecasts significant growth in shipments of dense-wavelength division multiplexing (DWDM) ports with data rates of 600G, 800G, and beyond in the next five years.

The major obstacles in this roadmap remain the power consumption, thermal management, and affordability of transceivers. Over the last two decades, power ratings for pluggable modules have increased as we moved from direct detection to more power-hungry coherent transmission: from 2 W for SFP modules to 3.5 W for QSFP modules and now to 14 W for QSFP-DD and 21.1 W for OSFP form factors. Rockley Photonics researchers estimate that a future electronic switch filled with 800G modules would draw around 1 kW of power just for the optical modules.
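To get a feel for the scale of that switch-level estimate, the sketch below multiplies an assumed front-panel port count by an assumed per-module power draw. Both numbers are illustrative guesses chosen to show how the optics alone approach a kilowatt; they are not figures from the Rockley Photonics study.

```python
# Rough illustration of the optics-only power budget of a fully loaded switch.
# Port count and per-module power are assumptions chosen only to show the scale.
ports = 32                     # assumed front-panel ports on the switch
power_per_800g_module_w = 30   # assumed draw of one 800G coherent pluggable

optics_power_w = ports * power_per_800g_module_w
print(f"Optics-only power budget: {optics_power_w} W")  # on the order of 1 kW
```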
Thus, many incentives exist to continue improving the performance and power consumption of pluggable optical transceivers. By embracing increased photonic integration, co-designed PICs and DSPs, and multi-laser arrays, pluggables will be better able to scale in data rates while remaining affordable and at low power.
Direct Detect or Coherent for 800G and Beyond?
While coherent technology has become the dominant one in metro distances (80 km upwards), the campus (< 10 km) and intra-data center (< 2 km) distances remain in contention between direct detect technologies such as PAM 4 and coherent.
These links were originally the domain of direct detect products when the data rates were 100Gbps. However, at 400Gbps speeds, the power consumption of coherent technology is much closer to that of direct detect PAM-4 solutions. This gap in power consumption is expected to disappear at 800Gbps, as shown in the figure below.

A major reason for this decreased gap is that direct detect technology will often require additional amplifiers and compensators at these data rates, while coherent pluggables do not. This also makes coherent technology simpler to deploy and maintain. Furthermore, as the volume production of coherent transceivers increases, their price will also become competitive with direct detect solutions.
Increased Integration and Co-Design are Key to Reduce Power Consumption
Lately, we have seen many efforts to further increase component-level integration across the electronics industry. For example, moving towards greater integration of components in a single chip has yielded significant efficiency benefits in electronics processors. Apple's M1 processor integrates all electronic functions in a single system-on-chip (SoC) and consumes a third of the power compared to the processors with discrete components used in their previous generations of computers. We can observe this progress in the table below.
Mac Mini Model | Idle Power (Watts) | Max Power (Watts) |
---|---|---|
2020, M1 | 7 | 39 |
2018, Core i7 | 20 | 122 |
2014, Core i5 | 6 | 85 |
2010, Core 2 Duo | 10 | 85 |
2006, Core Solo or Duo | 23 | 110 |
2005, PowerPC G4 | 32 | 85 |
Photonics can achieve greater efficiency gains by following a similar approach to integration. The interconnects required to couple discrete optical components result in electrical and optical losses that must be compensated with higher transmitter power and more energy consumption. In contrast, the more active and passive optical components (lasers, modulators, detectors, etc.) manufacturers can integrate on a single chip, the more energy they can save since they avoid coupling losses between discrete components.

Further improvements in power consumption can be achieved by co-designing electronic digital signal processors (DSPs) with the photonic integrated circuit (PIC) that constitutes the transceiver’s optical engine. Standard DSPs are like a Swiss army knife: a jack of all trades designed for different kinds of optical engines but a master of none.
A DSP that is co-designed and optimized alongside the PIC can better exploit specific advantages of the PIC. For example, if an indium phosphide PIC and a DSP are co-designed together instead of using a standard DSP, the PIC could be designed to run at a voltage compatible with the DSP’s signal output. This way, the optimized DSP could drive the PIC directly without needing the RF analog driver, drastically reducing a power conversion overhead that is often 10-15% of transceiver power consumption. To learn more about the advantages of co-design, you can read our article on the topic.
Reducing Complexity with Multi-Laser Arrays
Earlier this year, Intel Labs demonstrated an eight-wavelength laser array fully integrated on a silicon wafer. These milestones will provide more cost-effective ways for pluggables to scale to higher data rates.
Let’s say we need a data center interconnect with 1.6 Terabits/s of capacity. There are three ways we could implement it:
- Four modules of 400G: This solution uses existing off-the-shelf modules but has the largest footprint. It requires four slots in the router faceplate and an external multiplexer to merge these into a single 1.6T channel.
- One module of 1.6T: This solution will not require the external multiplexer and occupies just one plug slot on the router faceplate. However, making a single-channel 1.6T device has the highest complexity and cost.
- One module with four internal channels of 400G: A module with an array of four lasers (and thus four different 400G channels) will only require one plug slot on the faceplate while avoiding the complexity and cost of the single-channel 1.6T approach.

Multi-laser array and multi-channel solutions will become increasingly necessary to increase link capacity in coherent systems. They will not need more slots in the router faceplate while simultaneously avoiding the higher cost and complexity of increasing the speed with just a single channel.
Takeaways
The pace of worldwide data demand is relentless, and with it the pace of link upgrades required by datacom and telecom networks. 400G transceivers are currently replacing previous 100G solutions, and in a few years, they will be replaced by transceivers with data rates of 800G or 1.6 Terabits per second.
The cost and power consumption of coherent technology remain barriers to more widespread capacity upgrades, but the industry is finding ways to overcome them. Tighter photonic integration can minimize the losses of optical systems and their power consumption. Further improvements in power efficiency can be achieved by co-designing DSPs alongside the optical engine. Finally, the onset of multi-laser arrays can avoid the higher cost and complexity of increasing capacity with just a single transceiver channel.
Tags: bandwidth, co-designing, coherent, DSP, full integration, integration, interface, line cards, optical engine, power consumption, RF Interconnections, Viasat
Coherent Free Space Optics for Ground and Space Applications
In a previous article, we described how free-space optics (FSO) could impact mobile fronthaul and…
In a previous article, we described how free-space optics (FSO) could impact mobile fronthaul and enterprise links. They can deliver a wireless access solution that can be deployed quickly, with more bandwidth capacity, security features, and less power consumption than traditional point-to-point microwave links.

However, there’s potential to do even more. There are network applications on the ground that require very high bandwidths in the range of 100 Gbps and space applications that need powerful transceivers to deliver messages across vast distances. Microwaves are struggling to deliver all the requirements for these use cases.
By merging the coherent technology in fiber optical communications with FSO systems, they can achieve greater reach and capacity than before, enabling these new applications in space and terrestrial links.
Reaching 100G on the Ground for Access Networks
Thanks to advances in adaptive optics, fast steering mirrors, and digital signal processors, FSO links can now handle Gbps-capacity links over several kilometers. For example, a collaboration between Dutch FSO startup Aircision and research organization TNO demonstrated in 2021 that their FSO systems could reliably transmit 10 Gbps over 2.5 km.
However, new communication technologies emerge daily, and our digital society keeps evolving and demanding more data. This need for progress has motivated more research and development into increasing the capacity of FSO links to 100 Gbps, providing a new short-reach solution for access networks.
One such initiative came from the collaboration of Norwegian optical network solutions provider Smartoptics, Swedish research institute RISE Acreo, and Norwegian optical wireless link provider Polewall. In a trial set-up at Acreo's research facilities, Smartoptics' 100G transponder was used with CFP transceivers to create a 100 Gbps DWDM signal transmitted through the air using Polewall's optical wireless technology. Their system is estimated to reach 250 meters in the worst possible weather conditions.
Fredrik Larsson, the Optical Transmission Specialist at Smartoptics, explains the importance of this trial:
“Smartoptics is generally recognized as offering a very flexible platform for optical networking, with applications for all types of scenarios. 100Gbps connectivity through the air has not been demonstrated before this trial, at least not with commercially available products. We are proud to be part of that milestone together with Acreo and Polewall.”
Meanwhile, Aircision aims to develop a 100 Gbps coherent FSO system capable of transmitting up to 10km. To achieve this, they have partnered up with EFFECT Photonics, who will take charge of developing coherent modules that can go into Aircision’s future 100G system.
In many ways, the basic technologies to build these coherent FSO systems have been available for some time. However, they included high-power 100G lasers and transceivers originally intended for premium long-reach applications. The high price, footprint, and power consumption of these devices prevented the development of more affordable and lighter FSO systems for the larger access network market.
However, the advances in integration and miniaturization of coherent technology have opened up new possibilities for FSO links. For example, 100ZR transceiver standards enable a new generation of low-cost, low-power coherent pluggables that can be easily integrated into FSO systems. Meanwhile, companies like Aircision are working hard to use technologies such as adaptive optics and fast-steering mirrors to extend the reach of these 100G FSO systems into the kilometer range.
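A simple link-budget sketch helps explain why weather, rather than hardware, often sets the reach of an FSO link. The link margin and the attenuation values below (on the order of 0.5 dB/km in clear air and well over 100 dB/km in dense fog at 1550 nm) are rough, commonly quoted magnitudes used here only for illustration; they are not figures from the Aircision or Smartoptics trials, and the sketch ignores beam divergence and pointing losses.

```python
# Hedged free-space optics reach estimate from a simple attenuation-only budget.
# Ignores beam divergence and pointing losses; all numbers are illustrative.
margin_db = 30.0  # assumed margin between transmit power and receiver sensitivity

def max_reach_km(link_margin_db: float, attenuation_db_per_km: float) -> float:
    return link_margin_db / attenuation_db_per_km

for weather, alpha_db_per_km in [("clear air", 0.5), ("light rain", 5.0), ("dense fog", 150.0)]:
    print(f"{weather:10s}: ~{max_reach_km(margin_db, alpha_db_per_km):.2f} km")
```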
Coherent Optical Technology in Space
Currently, most space missions use radio frequency communications to send data to and from spacecraft. While radio waves have a proven track record of success in space missions, generating and collecting more mission data requires enhanced communications capabilities.
Coherent optical communications can increase link capacities to spacecraft and satellites by 10 to 100 times that of radio frequency systems. Additionally, optical transceivers can lower the size, weight, and power (SWAP) specifications of satellite communication systems. Less weight and size means a less expensive launch or perhaps room for more scientific instruments. Less power consumption means less drain on the spacecraft’s energy sources.

For example, the Laser Communications Relay Demonstration (LCRD) from NASA, launched in December 2021, aims to showcase the unique capabilities of optical communications. Future missions in space will send data to the LCRD, which then relays the data down to ground stations on Earth. The LCRD will forward this data at rates of 1.2 Gigabits per second over optical links, allowing more high-resolution experiment data to be transmitted back to Earth. LCRD is a technology demonstration expected to pave the way for more widespread use of optical communications in space.
Making Coherent Technology Live in Space
Integrated photonics can boost space communications by lowering the payload. Still, it must overcome the obstacles of a harsh space environment: radiation, an extreme operational temperature range, and vacuum conditions.

Mission Type | Environmental Temperature Range |
---|---|
Pressurized Module | +18.3 °C to +26.7 °C |
Low-Earth Orbit (LEO) | -65 °C to +125 °C |
Geosynchronous Equatorial Orbit (GEO) | -196 °C to +128 °C |
Trans-Atmospheric Vehicle | -200 °C to +260 °C |
Lunar Surface | -171 °C to +111 °C |
Martian Surface | -143 °C to +27 °C |
The values in Table 1 are unmanaged environmental temperatures and would decrease significantly for electronics and optics systems in a temperature-managed area, perhaps by as much as half.
A substantial body of knowledge exists to make integrated photonics compatible with space environments. After all, photonic integrated circuits (PICs) use similar materials to their electronic counterparts, which have already been space qualified in many implementations.
Much research has gone into overcoming the challenges of packaging PICs with electronics and optical fibers for these space environments, which must include hermetic seals and avoid epoxies. Commercial solutions, such as those offered by PHIX Photonics Assembly, Technobis IPS, and the PIXAPP Photonic Packaging Pilot Line, are now available.
Takeaways
Whenever you want to send data from point A to B, photonics is usually the most efficient way of doing it, be it over a fiber or free space.
This is why EFFECT Photonics sees future opportunities in the free-space optical (FSO) communications sectors. In mobile access networks or satellite link applications, FSO can provide solutions with more bandwidth capacity, security features, and less power consumption than traditional point-to-point microwave links.
These FSO systems can be further boosted by using coherent optical transmission similar to the one used in fiber optics. Offering these systems in a small package that can resist the required environmental conditions will significantly benefit the access network and space sectors.
Tags: 100G, access capacity, access network, capacity, certification, coherent, free space optics, FSO, GEO, ground, LEO, lunar, Photonics, reach, satellite, space, SWAP, temperature, Transceivers
What’s Inside a Tunable Laser for Coherent Systems?
The world is moving towards tunability. The combination of tunable lasers and dense wavelength division…
The world is moving towards tunability. The combination of tunable lasers and dense wavelength division multiplexing (DWDM) allows the datacom and telecom industries to expand their network capacity without increasing their existing fiber infrastructure. Furthermore, the miniaturization of coherent technology into pluggable transceiver modules has enabled the widespread implementation of IP over DWDM solutions. Self-tuning algorithms have also made DWDM solutions more widespread by simplifying installation and maintenance. Hence, many application cases, such as metro transport, data center interconnects, and access networks, are moving towards tunable pluggables.
The tunable laser is a core component of all these tunable communication systems, both direct detection and coherent. The laser generates the optical signal modulated and sent over the optical fiber. Thus, the purity and strength of this signal will have a massive impact on the bandwidth and reach of the communication system. This article will clarify some critical aspects of laser design for communication systems.
External and Integrated Lasers: What’s the Difference?
The promise of silicon photonics (SiP) is compatibility with existing electronic manufacturing ecosystems and infrastructure. Integrating silicon components on a single chip with electronics manufacturing processes can dramatically reduce the footprint and the cost of optical systems and open avenues for closer integration with silicon electronics on the same chip. However, the one thing silicon photonics lacks is the laser component.
Silicon is not a material that can naturally emit laser light from electrical signals. Decades of research have created silicon-based lasers with more unconventional nonlinear optical techniques. Still, they cannot match the power, efficiency, tunability, and cost-at-scale of lasers made from indium phosphide (InP) and III-V compound semiconductors.
Therefore, making a suitable laser for silicon photonics does not mean making an on-chip laser from silicon but an external laser from III-V materials such as InP. This light source will be coupled via optical fiber to the silicon components on the chip while maintaining a low enough footprint and cost for high-volume integration. The external laser typically comes in the form of an integrable tunable laser assembly (ITLA).
In contrast, the InP platform can naturally emit light and provide high-quality light sources and amplifiers. This allows for photonic system-on-chip designs that include an integrated laser on the chip. The integrated laser carries the advantage of reduced footprint and power consumption compared to an external laser. These advantages become even more helpful for PICs that need multiple laser channels.
Finally, integrated lasers enable earlier optical testing on the semiconductor wafer and die. By testing the dies and wafers directly before packaging them into a transceiver, manufacturers need only discard the bad dies rather than the whole package, which saves valuable energy, materials, and cost.

Using an external or integrated laser depends on the transceiver developer’s device requirements, supply chain, and manufacturing facilities and processes. At EFFECT Photonics, we have the facilities and expertise to provide fully-integrated InP optical systems with an integrated laser and the external laser component that a silicon photonics developer might need for their optical system.
What are the key requirements for a laser in coherent systems?
In his recent talk at ECOC 2022, our Director of Product Management, Joost Verberk, outlined five critical parameters for laser performance.
- Tunability: With telecom providers needing to scale up their network capacity without adding more fiber infrastructure, combining tunable lasers with dense wavelength division multiplexing (DWDM) technology becomes necessary. These tunable optical systems have become more widespread thanks to self-tuning technology that removes the need for manual tuning. This makes their deployment and maintenance easier.
- Spectral Purity: Coherent systems encode information in the phase of the light, and the purer the light source is, the more information it can transmit. An ideal, perfectly pure light source generates a single, exact color of light. Real-life lasers, however, are not pure and generate light slightly outside their intended color. The size of this deviation is what we call the laser linewidth. An impure laser with a large linewidth will have a more unstable phase that propagates errors in its transmitted data, as shown in the diagram below, so it will transmit at a lower speed than desired. A short numerical sketch of this relationship follows this list.

- Dimensions: As the industry moves towards packing more and more transceivers on a single router faceplate, tunable lasers need to maintain performance and power while moving to smaller footprints. Laser manufacturers have achieved size reductions thanks to improved integration without impacting laser purity and power, moving from ITLA to micro-ITLA and nano-ITLA form factors in a decade.
- Environmental Resistance: Lasers used in edge and access networks will be subject to harsh environments, like temperature and moisture changes. For these use cases, lasers should operate in the industrial temperature (I-temp) range of -40 to +85 °C.
- Transmit Power: The required laser output power will depend on the application and the system architecture. For example, a laser fully integrated into the chip can reach higher transmit powers more easily because it avoids the interconnection losses of an external laser. Still, shorter-reach applications might not necessarily need such powers.

The Promise of Multi-Laser Arrays
Earlier this year, Intel Labs demonstrated an eight-wavelength laser array fully integrated on a silicon wafer. These milestones are essential for tunable DWDM because the laser arrays can allow for multi-channel transceivers that are more cost-effective when scaling up to higher speeds.
Let’s say we need a link with 1.6 Terabits/s of capacity. There are three ways we could implement it:
- Four modules of 400G: This solution uses existing off-the-shelf modules but has the largest footprint. It requires four slots in the router faceplate and an external multiplexer to merge these into a single 1.6T channel.
- One module of 1.6T: This solution will not require the external multiplexer and occupies just one plug slot on the router faceplate. However, making a single-channel 1.6T device has the highest complexity and cost.
- One module with four internal channels of 400G: A module with an array of four lasers (and thus four different 400G channels) will only require one plug slot on the faceplate while avoiding the complexity and cost of the single-channel 1.6T approach.

Multi-laser array and multi-channel solutions will become increasingly necessary to increase link capacity in coherent systems. They will not need more slots in the router faceplate while simultaneously avoiding the higher cost and complexity of increasing the speed with just a single channel.
Takeaways
The combination of tunable lasers and dense wavelength division multiplexing (DWDM) allows the datacom and telecom industries to expand their network capacity without increasing their existing fiber infrastructure. Thanks to the miniaturization of coherent technology and self-tuning algorithms, many application cases—metro transport, data center interconnects, and future access networks—will eventually move towards coherent tunable pluggables.
These new application cases will have to balance the laser parameters we described earlier (tunability, purity, size, environmental resistance, and transmit power) depending on their material platforms, system architecture, and requirements. Some will need external lasers; some will want a fully-integrated laser. Some will need multi-laser arrays to increase capacity; others will need more stringent temperature certifications.
Following these trends, at EFFECT Photonics we are developing not only the capabilities to provide a complete coherent transceiver solution but also the external nano-ITLA units needed by other vendors.
Tags: coherent, DBR, DFB, ECL, full integration, InP, ITLA, micro ITLA, nano ITLA, SiP, tunable
The Growing Photonics Cluster of the Boston Area
As they lit the candles in their ship, the Pilgrim families traveling on the Mayflower had no idea they would help build a nation that would become a major pioneer in light technology and many other fields.
The United States features many areas with a strong photonics background, including the many companies in California’s Silicon Valley and the regions close to the country’s leading optics universities, such as Colorado, New York, Arizona, and Florida.
However, the Greater Boston area and the state of Massachusetts in general are becoming an increasingly important photonics hub, with world-class universities and many successful optics and photonics initiatives and companies. Let's talk a bit more about the region's legacy in light-based technology and the town of Maynard's history with the high-tech industry.
From World-Class Labs to the Real World
The Boston area features many world-class universities collaborating with the government and industry to develop new photonics technology. Harvard, the Massachusetts Institute of Technology (MIT), Boston University, Tufts University, and Northeastern University are major research institutions in the area that lead many photonics-related initiatives.

The state of Massachusetts, in general, has also been home to several prosperous photonics businesses, and initiatives are being made to capitalize on Boston’s extensive medical industry knowledge to boost biomedical optics and photonics. Raytheon, Polaroid, and IPG Photonics are examples of Massachusetts-based businesses that promoted optical technology.
The US federal government and Massachusetts state are committing resources to get these academic and industry partners to collaborate as much as possible. In 2015, the Lab for Education and Application Prototypes (LEAP) network was established as part of a federal drive to revive American manufacturing. The Massachusetts Manufacturing Innovation Initiative, a state grant program, and AIM Photonics, the national manufacturing institution, each contributed $11.3 million to constructing labs around Massachusetts universities and colleges.
The LEAP Network objectives are to teach integrated photonics manufacturing practice, offer companies technician training and certification, encourage company engagement in the tool, process, and application upgrades, and support AIM Photonics in their manufacturing and testing.
These partnerships form a statewide ecosystem to educate the manufacturing workforce throughout the photonics supply chain. The facilities’ strategic placement next to both universities and community colleges allows them to attract students from all areas and stages of their careers, from technicians to engineers to fundamental researchers.
From the Mill to High Tech: The Story of Maynard
A trip down Route 2 into Middlesex County, 25 miles northwest of Boston, will take one past apple orchards, vineyards, and some of Massachusetts’ most stunning nature preserves before arriving at a historic mill on the Assabet River. The community around this mill, Maynard, is a charming and surprisingly historical hub of economic innovation that houses an emerging tech ecosystem.

The renowned Assabet Woolen Mill was established for textile manufacturing in 1847 by Amory Maynard, who by the age of 16 was managing his own sawmill company. Initially a carpet manufacturing plant, Maynard’s enterprise produced blankets and uniforms for the Union Army during the Civil War. The company employed immigrants from Ireland, Finland, Poland, Russia, and Italy, many of them coming to the mill for jobs as soon as they arrived in the nation. By the 1930s, the town of Maynard was recognized as one of the most multi-ethnic places in the state.
The Assabet Woolen Mill continued to create textiles until 1950. The 11-acre former mill complex, currently named Mill and Main, is the contemporary expression of the town’s evolution and relationship with innovative industry.
The Digital Equipment Corporation (DEC) moved into the facility before the end of the 1950s with just $70,000 in cash and three engineers. From the 1960s onward, DEC became a major global supplier of computer systems and enjoyed tremendous growth. It's hard to overstate the company's impact on Maynard, which became the “Mini Computer Capital of the World” in barely twenty years.
Following DEC’s departure, the mill complex was sold and rented out to a fresh group of young and ambitious computer startups, many of whom are still operating today. Since then, more people and companies have joined, noting the affordable real estate, the enjoyable commute and environs, and the obvious cluster of IT enterprises. For example, when Acacia Communications, Inc. was established in 2009 and needed a home, Maynard’s mill space was a natural fit.


The Future of Coherent DSP Design: Interview with Russell Fuerst
Digital signal processors (DSPs) are the heart of coherent communication systems. They not only encode/decode…
Digital signal processors (DSPs) are the heart of coherent communication systems. They not only encode/decode data into the three properties of a light signal (amplitude, phase, polarization) but also handle error correction, analog-to-digital conversion, Ethernet framing, and compensation of dispersion and nonlinear distortion. And with every passing generation, they are assigned more advanced functions such as probabilistic constellation shaping.
There are still many challenges ahead to improve DSPs and make them transmit even more bits in more energy-efficient ways. Now that EFFECT Photonics has incorporated talent and intellectual property from Viasat’s Coherent DSP team, we hope to contribute to this ongoing research and development and make transceivers faster and more sustainable than ever. We ask Russell Fuerst, our Vice-President of Digital Signal Processing, how we can achieve these goals.
What’s the most exciting thing about joining EFFECT Photonics?
Before being acquired by EFFECT Photonics, our DSP design team had been a design-for-hire house. We had been doing designs for other companies that put those designs in their products. By joining EFFECT Photonics, we can now do a design and stamp our brand on it. That's exciting.
The other exciting thing is to have all the technologies under one roof. Having everything from the DSP to the PIC to the packaging and module-level elements in one company will allow us to make our products that much better.
We also find the company culture to be very relaxed and very collaborative. Even though we’re geographically diverse, it’s been straightforward to understand what other people and groups in the company are doing. It’s easy to talk to others and find out whom you need to talk to. There’s not a whole lot of organizational structure that blocks communication, so it’s been excellent from that perspective.
People at EFFECT Photonics were welcoming from day one, making us that much more excited to join.
What key technology challenges must be solved by DSP designers to thrive in the next 5 to 10 years?
The key is to bring the power down while also increasing the performance.
In the markets where coherent has been the de-facto solution, I think it’s essential to understand how to drive cost and power down either through the DSP design itself or by integrating the DSP with other technologies within the module. That will be where the benefits come from in those markets.
Similarly, there are markets where direct detection is the current technology of choice. We must understand how to insert coherent technology into those markets while meeting the stringent requirements of those important high-volume markets. Again, this progress will be largely tied to performance within the power and cost requirements.
As DSP technology has matured, other aspects outside of performance are becoming key, and understanding how we can work that into our products will be the key to success.
How do you think the DSP can be more tightly integrated with the PIC?
This is an answer that will evolve over time. We will become more closely integrated with the team in Eindhoven and learn some of the nuances of their mature design process. And similarly, they’ll understand the nuances of our design process that have matured over the years. As we understand the PIC technology and our in-house capabilities better, that will bring additional improvements that are currently unknown.
Right now, we are primarily focused on the obvious improvements tied to the fully-integrated platform. For example, the fact that we can have the laser on the PIC because of the active InP material. We want to understand how we co-design aspects of the module and shift the complexity from one design piece or component to another, thanks to being vertically integrated.
Another closely-tied area for improvement is on the modulator side. We think that the substantially lower drive voltages required for the InP modulator give us the possibility to eliminate some components, such as RF drivers. We could potentially drive the modulator directly from that DSP without any intermediary electronics, which would reduce the cost and power consumption. That’s not only tied to the lower drive voltages but also some proprietary signal conditioning we can do to minimize some of the nonlinearities in the modulator and improve the performance.
What are the challenges and opportunities of designing DSPs for Indium phosphide instead of silicon?
So, we already mentioned two opportunities with the laser and the modulator.
I think the InP integration makes the design challenges smaller than those facing DSP design for silicon photonics. The fact is that InP can have more active integrated components and that DSPs are inherently active electronic devices, so getting the active functions tuned and matched over time will be a challenge. It motivates our EFFECT DSP team to quickly integrate with the experienced EFFECT PIC design team to understand the fundamental InP platform a bit better. Once we understand it, the DSP designs will get more manageable with improved performance, especially as we have control over the designs of both DSP and PIC. As we get to the point where co-packaging is realized, there will also be some thermal management issues to consider.
When we started doing coherent DSP designs for optical communication over a decade ago, we pulled many solutions from the RF wireless and satellite communications space into our initial designs. Still, we couldn’t bring all those solutions to the optical markets.
However, when you get more of the InP active components involved, some of those solutions can finally be brought over and utilized. They were not used before in our designs for silicon photonics because silicon is not an active medium and lacks the performance to exploit these advanced techniques.
For example, we have done proprietary waveforms tuned to specific satellite systems in the wireless space. Our DSP team was able to design non-standard constellations and modulation schemes that increased the capacity of the satellite link over the previous generation of satellites. Similarly, we could tune the DSP’s waveform and performance to the inherent advantages of the InP platform to improve cost, performance, bandwidth utilization, and efficiency. That’s something that we’re excited about.
Takeaways
As Russell explained, the big challenge for DSP designers continues to be increasing performance while keeping power consumption down. Integrating the DSP more deeply with the InP platform can help overcome this challenge, from direct control of the laser and modulator by the DSP to novel waveform shaping methods. The presence of active components in the InP platform also gives DSP designers the opportunity to import more solutions from the RF wireless space.
We look forward to our new DSP team at EFFECT Photonics settling into the company and trying out all these solutions to make DSPs faster and more sustainable!
Tags: coherent, DSP, energy efficient, InP, integration, performance, power consumption, Sustainable, Viasat
The Future of Passive Optical Networks
Like every other telecom network, cable networks had to change to meet the growing demand…
Like every other telecom network, cable networks had to change to meet the growing demand for data. These demands led to the development of hybrid fiber-coaxial (HFC) networks in the 1990s and 2000s. In these networks, optical fibers travel from the cable company hub and terminate in optical nodes, while coaxial cable connects the last few hundred meters from the optical node to nearby houses. Most of these connections were asymmetrical, giving customers more capacity to download data than upload.
That being said, the way we use the Internet has evolved over the last ten years. Users now require more upstream bandwidth thanks to the growth of social media, online gaming, video calls, and independent content creation such as video blogging. The DOCSIS standards that govern data transmission over coaxial cables have advanced quickly because of these additional needs. For instance, full-duplex transmission with symmetrical upstream and downstream channels is permitted under the most current DOCSIS 4.0 specifications.
Fiber-to-the-home (FTTH) systems, which bring fiber right to the customer’s door, are also proliferating and enabling Gigabit connections quicker than HFC networks. Overall, extending optical fiber deeper into communities (see Figure 1 for a graphic example) is a critical economic driver, increasing connectivity for the rural and underserved. These investments also lead to more robust competition among cable companies and a denser, higher-performance wireless network.

Passive optical networks (PONs) are a vital technology to cost-effectively expand the use of optical fiber within access networks and make FTTH systems more viable. By creating networks using passive optical splitters, PONs avoid the power consumption and cost of active components in optical networks such as electronics and amplifiers. PONs can be deployed in mobile fronthaul and mid-haul for macro sites, metro networks, and enterprise scenarios.
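Part of what makes PONs cheap to operate is that the splitting is done passively, but the split itself still costs optical power. The short sketch below shows the well-known 10·log10(N) splitting loss of an ideal 1:N splitter; the excess-loss figure added on top is an assumption for illustration.

```python
# Power budget cost of passive optical splitters in a PON.
# An ideal 1:N splitter divides the light N ways, i.e. 10*log10(N) dB of loss;
# the excess loss added on top is an illustrative assumption.
import math

def splitter_loss_db(n_ways: int, excess_loss_db: float = 1.5) -> float:
    return 10 * math.log10(n_ways) + excess_loss_db

for n in (4, 16, 32, 64):
    print(f"1:{n:<2d} splitter -> about {splitter_loss_db(n):.1f} dB of loss")
```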
Despite some success from PONs, the cost of laying more fiber and the optical modems for the end users continue to deter carriers from using FTTH more broadly across their networks. This cost problem will only grow as the industry moves into higher bandwidths, such as 50G and 100G, requiring coherent technology in the modems.
Therefore, new technology and manufacturing methods are required to make PON technology more affordable and accessible. For example, wavelength division multiplexing (WDM)-PON allows providers to make the most of their existing fiber infrastructure. Meanwhile, simplified designs for coherent digital signal processors (DSPs) manufactured at large volumes can help lower the cost of coherent PON technology for access networks.
The Advantages of WDM PONs
Previous PON solutions, such as Gigabit PON (GPON) and Ethernet PON (EPON), used time-division multiplexing (TDM) solutions. In these cases, the fiber was shared sequentially by multiple channels. These technologies were initially meant for the residential services market, but they scale poorly for the higher capacity of business or carrier services. PON standardization for 25G and 50G capacities is ready but sharing a limited bitrate among multiple users with TDM technology is an insufficient approach for future-proof access networks.
WDM-PON, in contrast, uses wavelength multiplexing/demultiplexing technology to ensure that data signals can be divided into individual outgoing signals connected to buildings or homes. This hardware-based traffic separation gives customers the benefits of a secure and scalable point-to-point wavelength link. Since many wavelength channels fit inside a single fiber, the carrier can retain very low fiber counts, yielding lower operating costs.

WDM-PON has the potential to become the unified access and backhaul technology of the future, carrying data from residential, business, and carrier wholesale services on a single platform. We discussed this converged access solution in one of our previous articles. Its long-reach capability and bandwidth scalability enable carriers to serve more customers from fewer active sites without compromising security and availability.
Migration to the WDM-PON access network does require a carrier to reassess how it views its network topology. It is not only a move away from operating parallel purpose-built platforms for different user groups to one converged access and backhaul infrastructure. It is also a change from today’s power-hungry and labor-intensive switch and router systems to a simplified, energy-efficient, and transport-centric environment with more passive optical components.
The Possibility of Coherent Access
As data demands continue to grow, the direct detect optical technology used in prior PON standards will not be enough. The roadmap for this upgrade remains somewhat blurry, with different carriers taking different paths. For example, future expansions might require using 25G or 50G transceivers in the cable network, but the required number of channels might not fit within the conventional optical band (the C-band). Such a capacity expansion would therefore require using other bands (such as the O-band), which comes with additional challenges. An expansion to other optical bands would require changes in other optical networking equipment, such as multiplexers and filters, which increases the cost of the upgrade.
An alternative solution could be upgrading instead to coherent 100G technology. An upgrade to 100G could provide the necessary capacity in cable networks while remaining in the C-band and avoiding using other optical bands. This path has also been facilitated by the decreasing costs of coherent transceivers, which are becoming more integrated, sustainable, and affordable. You can read more about this subject in one of our previous articles.
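To make the trade-off concrete, here is a minimal sketch with assumed, illustrative figures rather than data from any specific carrier. It compares how many DWDM channels a target aggregate capacity would occupy at 25G versus 100G per channel, against the roughly 40 or 80 channels a C-band system offers at 100 GHz or 50 GHz spacing.

```python
import math

# Typical C-band channel counts; exact numbers depend on the line system.
C_BAND_CHANNELS = {"100 GHz grid": 40, "50 GHz grid": 80}

def channels_needed(target_gbps: float, rate_per_channel_gbps: float) -> int:
    """DWDM channels required to carry the target aggregate capacity."""
    return math.ceil(target_gbps / rate_per_channel_gbps)

target_gbps = 4000  # assumed aggregate capacity for one fiber, for illustration
for rate in (25, 100):
    n = channels_needed(target_gbps, rate)
    fits = {grid: n <= total for grid, total in C_BAND_CHANNELS.items()}
    print(f"{rate}G channels: {n} needed, fits in C-band? {fits}")
```

With these assumed numbers, 25G channels would overflow the C-band, while 100G coherent channels fit comfortably, which is the argument for staying in the C-band with coherent links.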
For example, the renowned non-profit R&D center CableLabs announced a project to develop a symmetric 100G Coherent PON (C-PON). According to CableLabs, the scenarios for a C-PON are many: aggregation of 10G PON and DOCSIS 4.0 links, transport for macro-cell sites in some 5G network configurations, fiber-to-the-building (FTTB), long-reach rural scenarios, and high-density urban networks.
CableLabs anticipates that C-PON and its 100G capabilities will play a significant role in the future of access networks, starting with data aggregation on networks that implement a distributed access architecture (DAA) like Remote PHY. You can learn more about these networks here.
Combining Affordable Designs with Affordable Manufacturing
The main challenge of C-PON is the higher cost of coherent modulation and detection. Coherent technology requires more complex and expensive optics and digital signal processors (DSPs). Plenty of research is happening on simplifying these coherent designs for access networks. However, a first step towards making these optics more accessible is the 100ZR standard.
100ZR is currently a term for a short-reach (~80 km) coherent 100Gbps transceiver in a QSFP pluggable size. Targeted at the metro edge and enterprise applications that do not require 400ZR solutions, 100ZR provides a lower-cost, lower-power pluggable that also benefits from compatibility with the large installed base of 50 GHz and legacy 100 GHz multiplexer systems.
Another way to reduce the cost of PON technology is through economies of scale: manufacturing pluggable transceiver devices at high volume to drive down the cost per device. With greater photonic integration, even more devices can be produced on a single wafer. This is the same economy-of-scale principle behind electronics manufacturing, and it must now be applied to photonics.
Researchers at the Technical University of Eindhoven and the JePPIX consortium have modeled how this economy of scale principle would apply to photonics. If production volumes can increase from a few thousand chips per year to a few million, the price per optical chip can decrease from thousands of Euros to tens of Euros. This must be the goal of the optical transceiver industry.
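As a rough illustration of this principle, the toy amortization model below uses assumed figures, not the researchers’ actual data. Spreading largely fixed wafer-fab, mask, and R&D costs over a much larger yearly volume is what drives the per-chip price from thousands of Euros toward tens of Euros.

```python
# Toy amortization model; the fixed and variable costs below are assumptions.
FIXED_COSTS_EUR = 10_000_000      # assumed yearly fab, mask, and R&D overhead
VARIABLE_COST_PER_CHIP_EUR = 10   # assumed per-chip materials, test, and packaging

def cost_per_chip(volume_per_year: int) -> float:
    """Per-chip cost when fixed costs are amortized over the yearly volume."""
    return FIXED_COSTS_EUR / volume_per_year + VARIABLE_COST_PER_CHIP_EUR

for volume in (5_000, 50_000, 500_000, 5_000_000):
    print(f"{volume:>9,} chips/year -> ~{cost_per_chip(volume):,.0f} EUR per chip")
```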

Takeaways
Integrated photonics and volume manufacturing will be vital for developing future passive optical networks. PONs will use more WDM-PON solutions for increased capacity, secure channels, and easier management through self-tuning algorithms.
Meanwhile, PONs are also beginning to incorporate coherent technology. Coherent transceivers have traditionally been too expensive for end-user modems. Fortunately, more affordable coherent transceiver designs and standards, manufactured at larger volumes, can change this situation and decrease the cost per device.
Tags: 100G, 5G, 6G, access, access networks, aggregation, backhaul, capacity, coherent, DWDM, fronthaul, Integrated Photonics, LightCounting, live events, metro, midhaul, mobile, mobile access, mobile networks, network, optical networking, optical technology, photonic integrated chip, photonic integration, Photonics, PIC, PON, programmable photonic system-on-chip, solutions, technology, VR, WDM
How Many DWDM Channels Do You Really Need?
Optical fiber and dense wavelength division multiplex (DWDM) technology are moving towards the edges of…
Optical fiber and dense wavelength division multiplex (DWDM) technology are moving towards the edges of networks. In the case of new 5G networks, operators will need more fiber capacity to interconnect the increased density of cell sites, often requiring replacing legacy time-division multiplexing transmission with higher-capacity DWDM links. In the case of cable and other fixed access networks, new distributed access architectures like Remote PHY free up ports in cable operator headends to serve more bandwidth to more customers.
A report by Deloitte summarizes the reasons to expand the reach and capacity of optical access networks: “Extending fiber deeper into communities is a critical economic driver, promoting competition, increasing connectivity for the rural and underserved, and supporting densification for wireless.”

To achieve such a deep fiber deployment, operators look to DWDM solutions to expand their fiber capacity without the expensive laying of new fiber. DWDM technology has become more affordable than ever due to the availability of low-cost filters and SFP transceiver modules with greater photonic integration and manufacturing volumes. Furthermore, self-tuning technology has made the installation and maintenance of transceivers easier and more affordable.
Despite the advantages of DWDM solutions, their price still causes operators to second-guess whether the upgrade is worth it. For example, mobile fronthaul applications don’t require all 40, 80, or 100 channels of many existing tunable modules. Fortunately, operators can now choose between narrow- or full-band tunable solutions that offer a greater variety of wavelength channels to fit different budgets and network requirements.
Example: Fullband Tunables in Cable Networks
Let’s look at what happens when a fixed access network needs to migrate to a distributed access architecture like Remote PHY.
A provider has a legacy access network with eight optical nodes, and each node services 500 customers. To give higher bandwidth capacity to these 500 customers, the provider wants to split each node into ten new nodes for fifty customers. Thus, the provider goes from having eight to eighty nodes. Each node requires the provider to assign a new DWDM channel, occupying more and more of the optical C-band. This network upgrade is an example that requires a fullband tunable module with coverage across the entire C-band to provide many DWDM channels with narrow (50 GHz) grid spacing.
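A quick back-of-the-envelope sketch of this node-splitting example (purely illustrative) shows how fast the channel count grows and why full C-band coverage with 50 GHz spacing becomes attractive:

```python
# Node-splitting example from the text: 8 legacy nodes, each split into 10 new ones.
legacy_nodes = 8
split_factor = 10
customers_per_legacy_node = 500

new_nodes = legacy_nodes * split_factor                              # 80 nodes
customers_per_new_node = customers_per_legacy_node // split_factor  # 50 customers

# Each new node gets its own DWDM channel; a 50 GHz grid offers ~80 C-band channels.
c_band_channels_50ghz = 80
print(f"{new_nodes} nodes -> {new_nodes} channels needed "
      f"({new_nodes / c_band_channels_50ghz:.0%} of a 50 GHz C-band grid), "
      f"{customers_per_new_node} customers per node")
```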
Furthermore, using a fullband tunable module means that a single part number can handle all the necessary wavelengths for the network. In the past, network operators used fixed wavelength DWDM modules that must go into specific ports. For example, an SFP+ module with a C16 wavelength could only work with the C16 wavelength port of a DWDM multiplexer. However, tunable SFP+ modules can connect to any port of a DWDM multiplexer. This advantage means technicians no longer have to navigate a confusing sea of fixed modules with specific wavelengths; a single tunable module and part number will do the job.

Overall, fullband tunable modules will fit applications that need a large number of wavelength channels to maximize the capacity of fiber infrastructure. Metro transport or data center interconnects (DCIs) are good examples of applications with such requirements.
Example: Narrowband Tunables in Mobile Fronthaul
The transition to 5G and beyond will require a significant restructuring of mobile network architecture. 5G networks will use higher frequency bands, which require more cell sites and antennas to cover the same geographical areas as 4G. Existing antennas must be upgraded to denser antenna arrays. These requirements will put more pressure on the existing fiber infrastructure, and mobile network operators are expected to deliver their 5G promises with relatively little expansion in their fiber infrastructure.

DWDM solutions will be vital for mobile network operators to scale capacity without laying new fiber. However, operators often regard traditional fullband tunable modules as expensive for this application. Mobile fronthaul links don’t need anything close to the 40 or 80 DWDM channels of a fullband transceiver. It’s like having a cable subscription where you only watch 10 out of the 80 TV channels.
This issue led EFFECT Photonics to develop narrowband tunable modules with just nine channels. They offer a more affordable and moderate capacity expansion that better fits the needs of mobile fronthaul networks. These networks often feature nodes that aggregate two or three different cell sites, each with three antenna arrays (each antenna provides 120° coverage at the tower), and each antenna has its own wavelength channel. Therefore, these aggregation points often need six or nine different wavelength channels, but not the 80-100 channels of a typical fullband module.
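The channel arithmetic behind this choice is simple; the short sketch below just restates the example from the text (two or three aggregated cell sites, each with three antenna sectors and one wavelength per sector):

```python
# Fronthaul aggregation example: three 120-degree antenna sectors per cell site,
# each sector carrying its own wavelength channel.
sectors_per_site = 3

for sites in (2, 3):
    channels = sites * sectors_per_site
    print(f"{sites} aggregated cell sites -> {channels} wavelength channels")
# A 9-channel narrowband module covers both cases; an 80-100 channel
# fullband module would be mostly unused here.
```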
With the narrowband tunable option, operators can reduce their part number inventory compared to grey transceivers while avoiding the cost of a fullband transceiver.
Synergizing with Self-Tuning Algorithms
The number of channels in a tunable module (up to 100 in the case of EFFECT Photonics fullband modules) can quickly become overwhelming for technicians in the field. There will be more records to examine, more programming for tuning equipment, more trucks to load with tuning equipment, and more verifications to do in the field. These tasks can take a couple of hours just for a single node. If there are hundreds of nodes to install or repair, the required hours of labor will quickly rack up into the thousands and the associated costs into hundreds of thousands.
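To see how those hours add up, here is a rough estimate consistent with the figures above; the node count, per-node time, and labor rate are assumptions chosen only to illustrate the order of magnitude:

```python
# Illustrative estimate of manual tuning effort; all inputs are assumed values.
nodes = 800                    # "hundreds of nodes to install or repair"
hours_per_node = 2.5           # records, tuning equipment, and field verification
labor_rate_eur_per_hour = 100

total_hours = nodes * hours_per_node
total_cost = total_hours * labor_rate_eur_per_hour
print(f"{total_hours:,.0f} labor hours, ~{total_cost:,.0f} EUR "
      "before truck rolls and tuning equipment are counted")
```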

Self-tuning allows technicians to treat DWDM tunable modules the same way they treat grey transceivers. There is no need for additional training for technicians to install the tunable module. There is no need to program tuning equipment or obsessively check the wavelength records and tables to avoid deployment errors in the field. Technicians only need to follow the typical cleaning and handling procedures and plug in the transceiver; the device will automatically scan for and find the correct wavelength. This feature can save providers thousands of person-hours in network installation and maintenance and reduce the probability of human error, effectively reducing capital and operational expenditures.
Self-tuning algorithms make installing and maintaining narrowband and fullband tunable modules more straightforward and affordable for network deployment.
Takeaways
Fullband self-tuning modules will allow providers to deploy extensive fiber capacity upgrades more quickly than ever. However, in use cases such as mobile access networks where operators don’t need a wide array of DWDM channels, they can opt for narrowband solutions that are more affordable than their fullband alternatives. By combining fullband and narrowband solutions with self-tuning algorithms, operators can expand their networks in the most affordable and accessible ways for their budget and network requirements.
Tags: 100G, 5G, 6G, access, access networks, aggregation, backhaul, capacity, coherent, DWDM, fronthaul, Integrated Photonics, LightCounting, live events, metro, midhaul, mobile, mobile access, mobile networks, network, optical networking, optical technology, photonic integrated chip, photonic integration, Photonics, PIC, PON, programmable photonic system-on-chip, solutions, technology, VR, WDM
Scaling Up Live Event Coverage with 5G Technology
Everyone who has attended a major venue or event, such as a football stadium or…
A Higher Bandwidth Experience
One of the biggest athletic events in the world, the Super Bowl, draws 60 to 100 thousand spectators to an American stadium once a year. Furthermore, hundreds of thousands, if not millions, of out-of-towners visit the Super Bowl host city to support their teams. The amount of data transported inside the Atlanta stadium for the 2019 Super Bowl alone reached a record 24 terabytes. The half-time show caused a 13.06 Gbps surge in data traffic on the network from more than 30,000 mobile devices. This massive traffic surge in mobile networks can even hamper the ability of security officers and first responders (i.e., law enforcement and medical workers) to react swiftly to crises.
New Ways of Interaction
5G technology and its increased bandwidth capacity will promote new ways for live audiences to interact with these events. Joris Evers, Chief Communication Officer of La Liga, Spain’s top men’s football league, explains a potential application: “Inside a stadium, you could foresee 5G giving fans more capacities on a portable device to check game stats and replays in near real-time.” The gigabit speeds of 5G can replace the traditional jumbotrons and screens and allow spectators to replay games instantly from their cellphones. Venues are also investigating how 5G and AI might lessen lengthy queues at kiosks for events with tens of thousands of visitors. At all Major League Baseball stadiums, American food service company Aramark deployed AI-driven self-service checkout kiosks. Aramark reports that these kiosks have resulted in a 40% increase in transaction speed and a 25% increase in revenue.
Strengthening the Transport Network
This increased bandwidth and new forms of interaction in live events will put more pressure on the existing fiber transport infrastructure. Mobile network operators are expected to deliver their 5G promises while avoiding costly expansions of their fiber infrastructure. The initial rollout of 5G has already happened in most developed countries, with operators upgrading their optical transceivers to 10G SFP+ and wavelength division multiplexing (WDM). Mobile networks must now move to the next phase of 5G deployments, exponentially increasing the number of devices connected to the network.
Takeaways
Thanks to 5G technology, network providers can provide more than just higher bandwidth for live events and venues; they will also enable new possibilities in live events. For example, audiences can instantly replay what is happening in a football match or use VR to attend a match or concert virtually. This progress in how end users interact with live events must also be backed up by the transport network. The discussions of how to upgrade the transport network are still ongoing and imply that coherent technology could play a significant role in this upgrade.
Tags: 100G, 5G, 6G, access, access networks, aggregation, backhaul, capacity, coherent, DWDM, fronthaul, Integrated Photonics, LightCounting, live events, metro, midhaul, mobile, mobile access, mobile networks, network, optical networking, optical technology, photonic integrated chip, photonic integration, Photonics, PIC, PON, programmable photonic system-on-chip, solutions, technology, VR, WDM
What’s Inside a Coherent DSP?
Coherent transmission has become a fundamental component of optical networks to address situations where direct…
Coherent transmission has become a fundamental component of optical networks to address situations where direct detect technology cannot provide the required capacity and reach.
While direct detect transmission only uses the amplitude of the light signal, coherent optical transmission manipulates three different light properties: amplitude, phase, and polarization. These additional degrees of modulation allow for faster optical signals without compromising the transmission distance. Furthermore, coherent technology enables capacity upgrades without replacing the expensive physical fiber infrastructure on the ground.
The digital signal processor (DSP) is the electronic heart of coherent transmission systems. The fundamental function of the DSP is encoding the electronic digital data into the amplitude, phase, and polarization of the light signal and decoding said data when the signal is received. The DSP does much more than that, though: it compensates for impairments in the fiber, performs analog-to-digital conversions (and vice versa), corrects errors, encrypts data, and monitors performance. More recently, DSPs have also taken on advanced functions such as probabilistic constellation shaping and dynamic bandwidth allocation, which enable improved reach and performance.
Given its vital role in coherent optical transmission, we at EFFECT Photonics want to provide an explainer of what goes on inside the DSP chip of our optical transceivers.
There’s More to a DSP Than You Think…
Even though we colloquially call the chip a “DSP”, it is an electronic engine that performs much more than just signal processing. Some of the different functions of this electronic engine (diagram below) are:

- Analog Processing: This engine segment focuses on converting signals between analog and digital formats. Digital data is composed of discrete values like 0s and 1s, but transmitting it through a coherent optical system requires converting it into an analog signal with continuous values. Meanwhile, a light signal received on the opposite end requires conversion from analog into digital format.
- Digital Signal Processing: This is the actual digital processing. As explained previously, this block encodes the digital data into the different properties of a light signal. It also decodes this data when the light signal is received.
- Forward Error Correction (FEC): FEC makes the coherent link much more tolerant to noise than a direct detect system and enables much longer reach and higher capacity. Thanks to FEC, coherent links can handle bit error rates that are a million times higher than a typical direct detect link (a minimal worked example of the FEC principle follows this list). FEC algorithms allow the electronic engine to enhance the link performance without changing the hardware. This enhancement is analogous to imaging cameras: image processing algorithms allow the lenses inside your phone camera to produce a higher-quality image.

- Framer: While a typical electric signal sent through a network uses the Ethernet frame format, the optical signal uses the Optical Transport Network (OTN) format. The framer block performs this conversion. We should note that an increasingly popular solution in communication systems is to send Ethernet frames directly over the optical signal (a solution called optical Ethernet). However, many legacy optical communication systems still use the OTN format, so electronic engines should always have the option to convert between OTN and Ethernet frames.
- Glue Logic: This block consists of the electronic circuitry needed to interface all the different blocks of the electronic engine. This includes the microprocessor that drives the electronic engine and the serializer-deserializer (SERDES) circuit. Since coherent systems only have four channels, the SERDES circuit converts parallel data streams into a single serial stream that can be transmitted over one of these channels. The opposite conversion (serial-to-parallel) occurs when the signal is received.
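As promised in the FEC bullet above, here is a minimal, self-contained illustration of the forward error correction principle. It uses a textbook Hamming(7,4) code rather than the far more powerful soft-decision codes inside real coherent DSPs, but the idea is the same: the transmitter adds redundant bits so the receiver can correct errors without retransmission.

```python
# Hamming(7,4): encode 4 data bits into 7, then correct any single flipped bit.
def encode(d):  # d = [d1, d2, d3, d4]
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]  # codeword positions 1..7

def decode(c):  # c = 7 received bits, possibly with one error
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]    # parity check over positions 1, 3, 5, 7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]    # parity check over positions 2, 3, 6, 7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]    # parity check over positions 4, 5, 6, 7
    error_pos = s1 + 2 * s2 + 4 * s3  # 0 means no single-bit error detected
    if error_pos:
        c = c.copy()
        c[error_pos - 1] ^= 1         # flip the corrupted bit back
    return [c[2], c[4], c[5], c[6]]   # recover d1..d4

data = [1, 0, 1, 1]
received = encode(data)
received[4] ^= 1                      # the channel flips one bit
assert decode(received) == data       # FEC recovers the original data
print("recovered:", decode(received))
```

Real coherent FEC codes operate on much longer blocks with soft-decision decoding, but they follow the same pattern of adding redundancy at the transmitter and correcting errors at the receiver.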
We must highlight that each of these blocks has its own specialized circuitry and algorithms, so each is a separate piece of intellectual property. Therefore, developing the entire electronic engine requires ownership or access to each of these intellectual properties.
So What’s Inside the Actual DSP Block?
Having clarified first all the different parts of a transceiver’s electronic engine, we can now talk more specifically about the actual DSP block that encodes/decodes the data and compensates for distortions and impairments in the optical fiber. We will describe some of the critical functions of the DSP in the order in which they happen during signal transmission. Receiving the signal would require these functions to occur in the opposite order, as shown in the diagram below.

- Signal Mapping: This is where the encoding/decoding magic happens. The DSP maps the data signal into the different phases of the light signal—the in-phase components and the quadrature components—and the two different polarizations (x- and y- polarizations). When receiving the signal, the DSP will perform the inverse process, taking the information from the phase and polarization and mapping it into a stream of bits. The whole process of encoding and decoding data into different phases of light is known as quadrature modulation. Explaining quadrature modulation in detail goes beyond the scope of this article, so if you want to know more about it, please read the following article.
- Pilot Signal Insertion: The pilot signal is transmitted over the communication systems to estimate the status of the transmission path. It makes it easier (and thus more energy-efficient) for the receiver end to decode data from the phase and polarization of the light signal.
- Adaptive Equalization: This function happens when receiving the signal. The fiber channel adds several distortions to the light signal (more on that later) that change the signal’s frequency spectrum from what was initially intended. Just as with an audio equalizer, the purpose of this equalizer is to change specific frequencies of the signal to compensate for the distortions and bring the signal spectrum back to what was initially intended.

- Dispersion and Nonlinear Compensation: This function happens when receiving the signal. The quality of the light signal degrades when traveling through an optical fiber due to a process called dispersion. The same phenomenon happens when a prism splits white light into several colors. The fiber also adds other distortions due to nonlinear optical effects. These effects get worse as the input power of the light signal increases, leading to a trade-off: you might want more power to transmit over longer distances, but the larger nonlinear distortions defeat the purpose of using more power. The DSP performs several operations on the light signal that try to offset these dispersion and nonlinear distortions.
- Spectrum Shaping: Communication systems must be efficient in all senses, so they must transmit as much signal as possible within a limited number of frequencies. Spectrum shaping is a process that uses a digital filter to narrow down the signal to the smallest possible frequency bandwidth and achieve this efficiency.
When transmitting, the signal goes through the digital-to-analog conversion after this whole DSP sequence. When receiving, the signal goes through the inverse analog-to-digital conversion and then through the DSP sequence.
Recent Advances and Challenges in DSPs
This is an oversimplification, but we can broadly classify the critical areas of improvement for DSPs into two categories.
Transmission Reach and Efficiency
The entire field of communication technology can arguably be summarized with a single question: how can we transmit more information into a single frequency-limited signal over the longest possible distance?
DSP developers have many tools in their kit to answer this question. For example, they can transmit more data by using more states in their quadrature-amplitude modulation (QAM) process. The simplest kind of QAM (4-QAM) uses four different states (usually called constellation points), combining two levels in each of the signal’s in-phase and quadrature components.
By using more levels and phases, more bits can be transmitted in one go. State-of-the-art commercially available 400ZR transceivers typically use 16-QAM, with sixteen constellation points that arise from combining four levels in each of the in-phase and quadrature components. However, this increased transmission capacity comes at a price: a signal with more modulation orders is more susceptible to noise and distortions. That’s why these transceivers can transmit 400Gbps over 100km but not over 1000km.
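A minimal sketch of the counting behind this (generic square QAM, not the exact constellation, coding, or symbol rate of any particular transceiver): with M constellation points, each symbol carries log2(M) bits, so going from 4-QAM to 16-QAM doubles the bits per symbol.

```python
import math
from itertools import product

def square_qam(m: int):
    """Generic square M-QAM: sqrt(M) evenly spaced levels on the I and Q axes."""
    per_axis = int(math.sqrt(m))
    levels = [2 * k - (per_axis - 1) for k in range(per_axis)]  # e.g. [-3, -1, 1, 3]
    return [complex(i, q) for i, q in product(levels, levels)]

for m in (4, 16):
    points = square_qam(m)
    print(f"{m}-QAM: {len(points)} constellation points, "
          f"{int(math.log2(m))} bits per symbol")
```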
One of the most remarkable recent advances in DSPs to increase the reach of light signals is probabilistic constellation shaping (PCS). In the typical 16-QAM modulation used in coherent transceivers, each constellation point has the same probability of being used. This is inefficient since the outer constellation points that require more power have the same probability as the inner constellation points that require lower power.

PCS uses the low-power inner constellation points more frequently, and the outer constellation points less frequently, as shown in Figure 5. This feature provides many benefits, including improved tolerance to distortions and easier system optimization to specific bit transmission requirements. If you want to know more about it, please read the explainers here and here.
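The sketch below illustrates the shaping idea with a Maxwell-Boltzmann-style weighting over a generic 16-QAM constellation; the shaping parameter values are arbitrary, not taken from any real DSP. As the weighting gets stronger, the low-energy inner points are used more often and the average transmitted symbol energy drops.

```python
import math
from itertools import product

# Generic 16-QAM constellation: four levels per axis.
levels = [-3, -1, 1, 3]
points = [complex(i, q) for i, q in product(levels, levels)]

def shaped_probabilities(points, lam):
    """Maxwell-Boltzmann-style weighting: probability falls with symbol energy."""
    weights = [math.exp(-lam * abs(p) ** 2) for p in points]
    total = sum(weights)
    return [w / total for w in weights]

for lam in (0.0, 0.05, 0.1):  # lam = 0 reproduces uniform (unshaped) 16-QAM
    probs = shaped_probabilities(points, lam)
    avg_energy = sum(p * abs(pt) ** 2 for p, pt in zip(probs, points))
    print(f"shaping parameter {lam}: average symbol energy = {avg_energy:.2f}")
```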
Energy Efficiency
Increases in transmission reach and efficiency must be balanced with power consumption and thermal management. Energy efficiency is the biggest obstacle in the roadmap to scale high-speed coherent transceivers into Terabit speeds.
Over the last two decades, power ratings for pluggable modules have increased as we moved from direct detection to more power-hungry coherent transmission: from 2W for SFP modules to 3.5W for QSFP modules and now to 14W for QSFP-DD and 21.1W for OSFP form factors. Rockley Photonics researchers estimate that a future electronic switch filled with 800G modules would draw around 1 kW of power just for the optical modules.
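As a rough sanity check on that 1 kW figure (the port count and per-module power below are assumptions for illustration, not Rockley’s numbers), the arithmetic is simply ports times watts per module:

```python
# Illustrative estimate only; both inputs are assumed values.
ports_per_switch = 32          # a common faceplate count for high-density switches
watts_per_800g_module = 30.0   # assumed power draw of a future 800G pluggable

optics_power_w = ports_per_switch * watts_per_800g_module
print(f"~{optics_power_w:.0f} W for the optical modules alone")  # close to 1 kW
```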

Around 50% of a coherent transceiver’s power consumption goes into the DSP chip. Scaling to higher bandwidths leads to even more losses and energy consumption from the DSP chip and its radiofrequency (RF) interconnects with the optical engine. DSP chips must therefore be adaptable and smart, using the least amount of energy to encode/decode information. You can learn more about this subject in one of our previous articles. The interconnects with the optical engine are another area that can see further optimization, and we discuss these improvements in our article about optoelectronic co-design.
Takeaways
In summary, DSPs are the heart of coherent communication systems. They not only encode/decode data into the three properties of a light signal (amplitude, phase, polarization) but also handle error correction, analog-digital conversion, Ethernet framing, and compensation of dispersion and nonlinear distortion. And with every passing generation, they are assigned more advanced functions such as probabilistic constellation shaping.
There are still many challenges ahead to improve DSPs and make them transmit even more bits in more energy-efficient ways. Now that EFFECT Photonics has incorporated talent and intellectual property from Viasat’s Coherent DSP team, we hope to contribute to this ongoing research and development and make transceivers faster and more sustainable than ever.
Tags: 100G, 5G, 6G, access, access networks, aggregation, backhaul, capacity, coherent, DWDM, fronthaul, Integrated Photonics, live events, metro, midhaul, mobile, mobile access, mobile networks, network, optical networking, optical technology, photonic integrated chip, photonic integration, Photonics, PIC, PON, programmable photonic system-on-chip, solutions, technology, VR, WDM
The Next Bright Lights of Eindhoven
Paris may be the more well-known City of Light, but we may argue that Eindhoven…
Paris may be the more well-known City of Light, but we may argue that Eindhoven has had a closer association with light and light-based technology. The earliest Dutch match manufacturers, the Philips light bulb factory, and ASML’s enormous optical lithography systems were all located in Eindhoven during the course of the city’s 150-year history. And today, Eindhoven is one of the worldwide hubs of the emerging photonics industry. The heritage of Eindhoven’s light technology is one that EFFECT Photonics is honored to continue into the future.
From Matches to the Light Bulb Factory
Eindhoven’s nickname as the Lichtstad did not originate from Philips factories but from the city’s earlier involvement in producing lucifer friction matches. In 1870, banker Christiaan Mennen and his brother-in-law Everardus Keunen set up the first large-scale match factory in the Netherlands in Eindhoven’s Bergstraat. In the following decades, the Mennen & Keunen factory acquired other match factories, and promoted the merger of the four biggest factories in the country to form the Vereenigde Nederlandsche Lucifersfabriken (VNLF). After 1892, the other three factories shut down, and all the match production was focused on Eindhoven. Over the course of the next century, the Eindhoven match factory underwent a number of ownership and name changes until ceasing operations in 1979.
Two decades after the founding of the original match factory, businessman Gerard Philips bought a small plant at the Emmasingel in Eindhoven with the financial support of his father, Frederik, a banker. After a few years, Gerard’s brother Anton joined the company and helped it expand quickly. The company succeeded in its first three decades by focusing almost exclusively on a single product: metal-filament light bulbs.
Over time, Philips began manufacturing various electro-technical products, including vacuum tubes, TVs, radios, and electric shavers. Philips was also one of the key companies that helped develop the audio cassette tape. In the 1960s, Philips joined the electronic revolution that swept the globe and proposed early iterations of the video cassette recorder (VCR).
From Philips to ASML and Photonics Research
In 1997, Philips relocated their corporate headquarters outside of Eindhoven, leaving a significant void in the city. Philips was the primary factor in Eindhoven’s growth, attracting many people to the city to work.
Fortunately, Philips’ top-notch research and development led to several major spinoff companies, such as NXP and ASML. While ASML is already well-known across Eindhoven and is arguably the city’s largest employer, it might just be the most important tech company the world hasn’t heard of. In order to produce the world’s electronics, ASML builds enormous optical lithography systems that are shipped to the largest semiconductor facilities on earth. The scale of these systems requires engineers from all fields—electrical, optical, mechanical, and materials—to develop them, and that has attracted top talent from all over the world to Eindhoven. Thanks to their growth, Eindhoven has developed into a major center for expats in the Netherlands.
As ASML grew into a global powerhouse, the Eindhoven University of Technology (TU/e) worked tirelessly over the last 30 years to develop the light technology of the future: photonics. Photonics is used to create chips like the electronics inside your computers and phones, but instead of using electricity, these chips use laser light. Replacing electricity with light dramatically increases the speed of data transmission while also decreasing its power consumption. These benefits would lead photonics to have a significant impact in several industries, especially telecommunications.
Bringing Photonics into the Real World from the Lab
The photonics discoveries occurring in Eindhoven have been making strides in the lab for the last 30 years, and now they are finally becoming businesses. The founders of EFFECT Photonics were once TU/e students who wanted to take their lab research out into the real world. Like us, many other companies in Eindhoven are trying to bring new and exciting technologies to market, such as SMART Photonics (semiconductor manufacturing), Lightyear (solar electric cars), or Aircision (free space optics). Many of these companies have gathered in the High Tech Campus in Eindhoven and the PhotonDelta cluster, which gathers photonics companies in the Netherlands. The figure below provides a comprehensive picture of the entire PhotonDelta Ecosystem.
The TU/e environment has also championed processes that allow integrated photonics to become more widespread and easier to develop for market applications. The JePPIX consortium has aimed at creating a common platform of indium-phosphide chip design and manufacturing blocks that can become a “language” every photonics developer in Europe can follow to make their devices. Meanwhile, photonics research and development continues on many fronts, including biomedical devices, next-generation telecommunications, and improving photonics manufacturing’s compatibility with electronics. Hopefully, additional companies will emerge in the coming years to bring these novel technologies to market.
As you can see, Eindhoven has a long history with light, from matches to light bulbs to TVs to optical lithography and photonics. The heritage of Eindhoven’s light technology is one that EFFECT Photonics is honored to carry into the future.
Tags: 5G, access, aggregation, backhaul, capacity, DWDM, fronthaul, Integrated Photonics, LightCounting, metro, midhaul, mobile, mobile access, network, optical networking, optical technology, photonic integrated chip, photonic integration, Photonics, PIC, PON, programmable photonic system-on-chip, solutions, technology
When Will Access Networks Go Coherent?
Access networks everywhere are scaling. 5G and IoT promise to interconnect exponentially more devices than…
Access networks everywhere are scaling. 5G and IoT promise to interconnect exponentially more devices than before, with higher speeds and lower latencies. This puts more pressure than ever on fixed and mobile access networks.
In fixed access, there will be a significant expansion in the deployment of fiber-to-the-home (FTTH) links and passive optical networks (PONs). Meanwhile, power-hungry business customers require capacity expansions to 100G and beyond. In the mobile sector, carriers must deliver on their 5G promises without the expensive deployment of new fiber infrastructure. The ever-increasing Internet traffic combined with the flat or declining revenue margins makes scaling up more difficult.
Coherent solutions can help cope with some of these requirements, but their size and cost have made them impractical to implement in access networks. In the last few years, the miniaturization of coherent transceivers has enabled cost and size reductions that make this technology more accessible.
Coherent solutions are still a decade away from becoming mainstream in mobile access networks, but at least they will soon have an impact in cable networks, business services, and edge data centers. Given the continuing advances in standardization and the focus on more affordable components for shorter links, a future with coherent optics in the access network domain is upon us.
Mobile x-haul is not moving into Coherent anytime soon
The initial rollout of 5G has already happened in most developed countries, with many operators upgrading their 1G SFP to 10G SFP+ devices and deploying more wavelength division multiplexing (WDM). Mobile networks must now move to the next phase of 5G deployments: this will require installing more and smaller base stations to increase the number of devices connected to the network exponentially.
These more mature phases of 5G deployment will require operators to scale fiber capacity cost-effectively. The 25G tunable transceivers used in this new deployment phase support a typical reach of 10 km, reaching up to 15 or even 20 km with extra amplification and compensation. For now, that capacity of 25G and 10-20km distances seems to be the sweet spot for radio access network transport.
As more and more 5G antennas are deployed and more 5G users are connected, traffic in radio access networks will keep growing. This increase in traffic demand will translate up through successive network layers to the core, passing through backhaul and sometimes mid-haul stages. This will require an increase in transport capacity beyond 25G, but for now 50G and 100G transceivers are being used in limited quantities by operators in support of massive MIMO and packet fronthaul. Both will remain niche products with volumes far smaller than the mainstream 25G devices.
Fixed Access is Moving into Coherent PON Networks
As data demands continue to grow in cable networks, direct detect optical technology used in prior passive optical networks (PON) will not be enough. PON technology needs to move into the domain of 50G and 100G link capacity, and such progress will require coherent technology. Operators want to upgrade their 10G PON networks, and the industry seems to be converging into a consensus pick of 50G over lower-capacity 25G PON, as reported by LightCounting research. The moves into 50G and a potential 100G later will benefit from the broader adoption of coherent technology. 50G PON already hits the limits of direct detect technology, and even at those speeds, the devices will require additional complexity compared to typical direct detect devices. For reasons like these, Nokia predicted in a recent white paper that 50G-PON would be “more of a quantum leap than an evolution”.
Anticipating such needs, the non-profit R&D organization CableLabs is pushing to develop a 100G Coherent PON (C-PON) standard. According to CableLabs, several applications justify the development of 100G PON standards and technology:
- Aggregation of 10G PON and DOCSIS 4.0
- 5G back- and mid-haul for some macro-cell sites
- Fiber-to-the-building
- Long-reach rural scenarios
- High density/high split ratio urban scenarios, such as distributed access networks (DAA)
Business Services are Moving to 100ZR
Almost every organization uses the cloud in some capacity, whether for development and test resources or software-as-a-service applications. While the cost and flexibility of the cloud are compelling, many IT executives overlook the importance of fast, high-bandwidth wide-area connectivity to make cloud-based applications work as they should. These needs might require businesses with huge traffic loads to upgrade to 25G, 100G, or even 400G speeds. These capacity needs would require coherent technology. Fortunately, advances in electronic and photonic integration have miniaturized coherent line card transponders into pluggable modules the size of a large USB stick.
Takeaways
Mobile access networks are still comfortable with direct detect technology, but coherent is already starting to impact cable networks and business services. Furthermore, coherent is already established as a solution to interconnect edge data centers. Favorable coherent component cost-reduction trends are expected to continue, technological advancements will enable higher performance, and simpler implementations will make coherent technology more pervasive in the access network to achieve exponential capacity growth.
Tags: 5G, access, aggregation, backhaul, capacity, DWDM, fronthaul, Integrated Photonics, LightCounting, metro, midhaul, mobile, mobile access, network, optical networking, optical technology, photonic integrated chip, photonic integration, Photonics, PIC, PON, programmable photonic system-on-chip, solutions, technology
Photonic System-on-Chip is the Future
Before 2020, Apple made its computer processors with discrete components. In other words, electronic components…
Before 2020, Apple made its computer processors with discrete components. In other words, electronic components were manufactured on separate chips, and then these chips were assembled into a single package. However, the interconnections between the different chips produced losses and incompatibilities that made the device less efficient. After 2020, starting with Apple’s M1 processor, they now fully integrate all components on a single chip, avoiding losses and incompatibilities.
Apple’s fully integrated processor consumes a third of the power and costs less than their older processors while still providing similar performance. EFFECT Photonics does something similar to what Apple did, but with optical components instead of electronic components. By integrating all the optical components (lasers, detectors, modulators, etc.) in a single system on a chip, we can minimize the losses and make the device more efficient. This approach is what we call a photonic System-on-Chip (SoC).
By integrating all optical components on a single chip, we also shift the complexity from the assembly process to the much more efficient and scalable semiconductor wafer process. Assembling and packaging a device by interconnecting multiple photonic chips increases assembly complexity and costs. On the other hand, combining and aligning optical components on a wafer at a high volume is much easier, which drives down the device’s cost. Testing is another aspect that becomes more efficient and scalable when manufacturing at the wafer level.
When faults are found earlier in the testing process, fewer resources and energy are spent processing defective chips. Ideally, testing should happen not only on the final, packaged transceiver but also in the earlier stages of photonic SoC fabrication, such as after wafer processing or after cutting the wafer into smaller dies. Full photonic integration enables earlier optical testing on the semiconductor wafers and dies. By testing the dies and wafers directly before packaging, manufacturers need only discard the bad dies rather than the whole package, which saves time and cost and is more energy-efficient and sustainable.
For example, EFFECT Photonics reaps these benefits in its production processes. 100% of electrical testing on the photonic SoCs happens at the wafer level, and our unique integration technology allows for 90% of optical testing on the wafer. The real-world applications of SoCs are practically limitless and priceless. Electronic SoCs are used in most, if not all, portable devices, such as smartphones, cameras, tablets, and other wireless technologies. SoCs are also frequently used in equipment involved in the Internet of Things, embedded systems, and, of course, photonics. Data center interconnects are an excellent example of an application that benefits from a photonic SoC approach. As DCIs demand higher performance and reach, it’s no longer sufficient to have a solution that integrates just some parts of a system.
That is why EFFECT Photonics’ business strategy aims to solve the interconnect challenges through a holistic photonic SoC approach that understands the interdependence of system elements. By combining the photonic SoC with highly optimized packaging with cost-effective electronics, we are building a high production volume platform that can meet the demands of the datacom sector.
Tags: DWDM, Integrated Photonics, network, optical networking, optical technology, photonic integrated chip, photonic integration, photonic system-on-chip, PIC, solutions, technology
The Growth of Business Ethernet Services
The increasing use of data and fiercely price-conscious and multimedia-hungry business subscribers can limit revenue…
The increasing use of data and fiercely price-conscious and multimedia-hungry business subscribers can limit revenue opportunities for the network provider industry. Providers must therefore look elsewhere to grow their customer base, open new revenue streams, and boost margins. So how can they adapt their business strategies and achieve growth objectives?
Just as operators scale up high-capacity data center interconnects to cope with these needs, providers can add high-capacity Ethernet services. These can offer a differentiated and competitive service to corporate customers, ranging from 1G to 100G and beyond. Not only will Ethernet services add a cost-effective alternative to existing services, but they also ensure business Ethernet offerings are set up to complement wide area networks and hybrid network services.
According to Ovum, the global enterprise Ethernet services market will grow at 10.7% CAGR, exceeding $70bn by 2020 (Ovum’s Ethernet Services Forecast, Sep 2015), and is now the de facto wide-area network data connectivity technology. Ethernet will be a significant portion of the data service market driven by enthusiasm for higher bandwidth services. This growth will continue as we adopt more cloud-based applications and enterprises embrace digital transformation. Business Ethernet solutions can be further boosted with tunable DWDM transceivers.
Towards Carrier Ethernet
With applications and data volumes exploding in organizations of all types and sizes, there is an increasing need for 1GbE+ connections, with 10GbE+ connectivity for company headquarters and even 100GbE+ connections for data center connectivity. Specifically, demand is being driven by the proliferation of bandwidth-hungry applications. An MRI scan, for example, can be a 300GB file, which would take around 7 hours to download over a 100Mbps connection. Over a 10GbE link, that time falls to around 4 minutes, and to roughly 28 seconds on a 100GbE link – this can be the difference between life and death when a consultant needs to make a time-critical decision on how to best treat their patient.
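The arithmetic behind that example is straightforward; this sketch reproduces it at nominal line rates, ignoring protocol overheads (which is why the 100GbE figure comes out slightly lower than the time quoted above):

```python
# Transfer time for a 300 GB file at nominal line rates (protocol overheads ignored).
file_gigabits = 300 * 8  # 300 GB = 2400 gigabits

for name, rate_gbps in (("100 Mbps", 0.1), ("10GbE", 10), ("100GbE", 100)):
    seconds = file_gigabits / rate_gbps
    print(f"{name:>8}: {seconds:>8.0f} s  (~{seconds / 60:.1f} min)")
# 100 Mbps -> ~6.7 hours, 10GbE -> ~4 minutes, 100GbE -> ~24 seconds
```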
Enterprises know all about cost-containment and budget constraints. Virtual private networks (VPNs) based on legacy MPLS protocols have their place when delivering wide-area connectivity. These VPNs can offer any-to-any connectivity and scale to thousands of sites – but, at the higher speeds needed for high-bandwidth applications, they can be significantly more expensive than business Ethernet to deploy and maintain.
In addition, the MPLS routers needed for IP VPN have acquired more and more protocols and complexity over the last few decades. The cost of implementing all of these protocols, and securing them against attack, has driven leading service providers to demand a radically more straightforward way of building networks.
With business Ethernet, the network infrastructure and management can be unified under Ethernet protocol, making the network easier to plan, deploy and manage at scale. This means fewer routers, more remotely programmable services, and fewer truck rolls resulting in a lower cost per bit than comparable VPN solutions. These savings scale across 1GbE, 10GbE, and 100GbE connections with less CAPEX investment, helping to increase the predictability of delivery costs over time. WDM solutions can further boost this capacity.
Towards a Coherent Upgrade with 100ZR
Almost every organization uses the cloud in some capacity, whether for development and test resources or software-as-a-service applications. While the cost and flexibility of the cloud are compelling, many IT executives overlook the importance of fast, high-bandwidth wide-area connectivity to make cloud-based applications work as they should.
These needs might require businesses with huge traffic loads to upgrade to 25G, 100G, or even 400G speeds. These capacity needs would require coherent technology. Fortunately, advances in electronic and photonic integration have miniaturized coherent line card transponders into pluggable modules the size of a large USB stick.
Many of these business applications will require links between 25 Gbps and 100 Gbps that span several tens of kilometers to connect to the network provider’s headend. For these sites, the 400ZR pluggables that have become mainstream in datacom applications are not cost-effective when utilization is so low. This is where 100ZR technology comes into play.
100ZR is currently a marketing term for a short-reach (~80 km) coherent 100Gbps in a QSFP pluggable. Targeted at the metro edge and enterprise applications that do not require 400Gbps, 100ZR provides a lower-cost, lower-power pluggable that also benefits from compatibility with the large installed base of 50 GHz and legacy 100 GHz DWDM/ROADM line systems.
Self-Tuning Makes Management Easier
Businesses that need to aggregate many sites and branches into their networks will likely require tunable transceiver solutions to interconnect them. The number of available channels in tunable modules can quickly become overwhelming for technicians in the field. There will be more records to examine, more programming for tuning equipment, more trucks to load with tuning equipment, and more verifications to do in the field. These tasks can take a couple of hours just for a single node. If there are hundreds of nodes to install or repair, the required hours of labor will quickly rack up into the thousands and the associated costs into hundreds of thousands. Self-tuning modules significantly overcome these issues and make network deployment and maintenance more straightforward and affordable.
Self-tuning allows technicians to treat DWDM tunable modules the same way they would grey transceivers. There is no need for additional training for technicians to install the tunable module. There is no need to program tuning equipment. There is no need to obsessively check the wavelength records and tables to avoid deployment errors in the field. Technicians only need to follow the typical cleaning and handling procedures and plug in the transceiver; the device will automatically scan for and find the correct wavelength. This feature can save providers thousands of person-hours in network installation and maintenance and reduce the probability of human errors, effectively reducing capital and operational expenditures (OPEX).
Takeaways
With business Ethernet, one can set up super-fast connections for customers and connect their locations and end-users with any cloud-based services they use. Business Ethernet solutions can be further boosted with tunable DWDM transceivers. If businesses future-proof their networks with upgrades like 100ZR transceivers, they can scale up connectivity seamlessly to ensure that applications always provide an excellent end-user experience and that connectivity is never a limiting factor for customers’ cloud strategies. As the business sector seeks to upgrade to greater capacity and easier management, tunable and coherent transceivers will be vital in addressing their needs.
Tags: 5G, access, aggregation, backhaul, capacity, DWDM, fronthaul, Integrated Photonics, LightCounting, metro, midhaul, mobile, mobile access, network, optical networking, optical technology, photonic integrated chip, photonic integration, PIC, PON, programmable photonic system-on-chip, solutions, technology
The Future of 5G Fronthaul
The 5G network revolution promises to fulfill the capacity needs that previous cellular generations could…
The 5G network revolution promises to fulfill the capacity needs that previous cellular generations could no longer provide to the ever-increasing customer demands. This network generation is expected to revolutionize the concept of telecommunication and bring the most anticipated services of larger bandwidth, higher speed, and reduced latency as part of the modern cellular network. The upgrade from 4G to 5G has shifted the radio access network (RAN) from a two-level structure with backhaul and fronthaul in 4G to a three-level structure with back-, mid-, and fronthaul:
- Fronthaul is the segment between the active antenna unit (AAU) and the distributed unit (DU)
- Midhaul is the segment from DU to the centralized unit (CU)
- Backhaul is the segment from CU to the core network.

5G promises to interconnect exponentially more devices than before, with higher speeds and lower latencies. As a result, 5G edge network bandwidth requirements can reach up to 100 times those of 4G. These requirements will put more pressure on the existing fiber infrastructure, and mobile network operators are expected to deliver their 5G promises with relatively little expansion in their fiber infrastructure.
The initial rollout of 5G has already happened in most developed countries, with operators switching from 1G SFP transceivers and grey transceivers to 10G SFP+ or wavelength division multiplexing (WDM). Mobile networks must move to the next phase of 5G deployments, which will exponentially increase the number of devices connected to the network. These more mature phases of 5G deployment will require operators to scale capacity cost-effectively. The move to 25G tunable optics in fronthaul networks will enable this expansion in capacity in an affordable way and help promote a long-awaited convergence between mobile and fixed access networks.
Enhancing Mobile Fronthaul Capacity
The move from 4G to 5G networks saw many operators upgrade their 10G grey transceivers to tunable 10G transceivers and 25G grey transceivers to make the most of their fiber infrastructure. However, as the 5G rollout moves into a more mature phase, the demands of fronthaul networks will often require even greater capacity from fiber infrastructure.
These future demands are a key reason why South Korea’s service providers decided to future-proof their fiber infrastructure and invested in 10G and 25G WDM technology since the early stages of their 5G rollout. Over time, providers in other countries will find themselves fiber-limited and turn towards tunable technologies. These trends are why LightCounting forecasts that the 25G DWDM market will provide the most significant revenue opportunity in the coming five years.
These 25G tunable transceivers support a typical reach of 10 km and can reach up to 15 or even 20 km with extra amplification and compensation. Maximizing the capacity of fronthaul fiber will be beneficial not only for mobile network providers but also for telecom business developers, system architects, equipment manufacturers, and product developers.

DWDM Solutions For Fronthaul Aggregation
The transitions to 3G and 4G relied heavily on more efficient use of broader RF spectrum blocks. For many cell sites, these transitions were as simple as changing the appropriate radio line card at a base station unit. The same cannot be said about the transition to 5G. This second phase of 5G deployment in particular will require a more profound restructuring of mobile network architecture. These mature 5G networks will use higher frequency bands, which require the deployment of more cell sites and antennas to cover the same geographical areas as 4G. In addition, existing antennas must be upgraded to denser antenna arrays.
The characteristics of this second 5G deployment mean that operators must deploy larger-bandwidth channels and more total channels due to the additional base stations. DWDM is an excellent fit for interconnecting these new smaller cell sites since it allows operators to quickly increase their number of channels without having to lay out new and expensive fiber infrastructure. Thanks to the 25G capacity, these additional channels can be easily aggregated into the fronthaul transport network without being limited by fiber infrastructure.

The Dream of Fixed-Mobile Access
Carriers that provide both fixed and mobile network services may often have to deal with situations in which their fixed and mobile access networks compete against each other. Since these networks often use different technology standards and even transmission media (e.g., legacy coaxial networks in fixed access), these carriers often have to build additional and arguably redundant optical infrastructure.
These carriers have long dreamed of merging their fixed and mobile network infrastructures under the same standards and transmission pipes. Such solutions reduce the need to build and manage redundant infrastructure. The expansion of fiber infrastructure and WDM technology might finally provide them with the opportunity to do so.

Passive optical networks (PON) have become a popular solution to implement fiber-to-the-home solutions. These networks’ bandwidth and latency requirements are leading to the standardization of 25G PONs and WDM-PON technology. Now that both mobile and fixed access are considering 25G WDM solutions, it might be a good time to revisit the subject of convergence: a converged network could offer services such as communication, entertainment, and data acquisition without terminal, application, network, or location limitations.
Takeaways
25G tunable optics will become an industry standard for mobile fronthaul in the coming years. They allow operators to make the most of their existing fiber infrastructure, maximizing bandwidth and increasing the network’s ability to aggregate more signals from smaller and more numerous 5G base stations. For certain carriers, they could also enable a future converged fixed and mobile access network that simplifies the installation and management of their infrastructure. As the example of South Korean network operators shows, it pays off to anticipate these future demands and invest in a future-proof network that can scale up quickly.
Tags: 5G, access, aggregation, backhaul, capacity, DWDM, fronthaul, Integrated Photonics, LightCounting, metro, midhaul, mobile, mobile access, network, optical networking, optical technology, photonic integrated chip, photonic integration, PIC, PON, programmable photonic system-on-chip, solutions, technology
What is DWDM and Why Should You Care?
Imagine a couple of small trucks moving along a country road in opposite directions, carrying…
Imagine a couple of small trucks moving along a country road in opposite directions, carrying goods between factories and consumers. As the population grows and demand increases, the trucks grow in number, and the diversity of goods and traffic increases. City planners must start adding lanes until, eventually, the tiny country road becomes a large multi-lane highway with 18-wheelers moving vast volumes of different types of merchandise every day. A similar rapid expansion in ‘cargo’ has happened in telecommunications.
The telecommunications industry and service providers, in particular, have faced a dramatic and very rapid increase in the volume and type of data their systems must handle. Networks built initially to transmit soundwaves as electrical signals from one phone to another were now faced with managing data and video in real-time from many devices. Within approximately 30 years, we have moved from the introduction of the Internet and the creation of the Worldwide Web to the rollout of 5G wireless technology and the Internet-of-Things (IoT), through which virtually all devices can theoretically be interconnected.
Handling this exponentially increasing data traffic has required massive contributions from fiber optics and optical communications systems. In these systems, laser light carries much higher data transmission rates over greater distances than electrical signals. To encode the data into light, transmit it, and decode it back into electrical signals upon receipt, optical communication systems rely on optical transceivers. Dense Wavelength Division Multiplexing (DWDM) is a transceiver technology developed around 20 years ago that dramatically increases the amount of data transmitted over existing fiber networks. Data from various signals are separated, encoded on different wavelengths, and put together (multiplexed) in a single optical fiber.
The wavelengths are separated again and reconverted into the original digital signals at the receiving end. In other words, DWDM allows different data streams to be sent simultaneously over a single optical fiber without requiring new cables to be laid. In a way, it’s like adding more lanes to the information highway without having to build new roads!

The tremendous expansion in data volume afforded by DWDM becomes clear when comparing it to other optical transmission methods. A standard transceiver, often called a grey transceiver, is a single-channel device: each fiber has a single laser source, and you can transmit 10 Gbps with grey optics. Coarse Wavelength Division Multiplexing (CWDM) offers multiple channels, although far fewer than DWDM. For example, with a 4-channel CWDM system, you can transmit 40 Gbps. DWDM can accommodate up to 100 channels.
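As a quick illustration, the short sketch below works out the per-fiber capacities implied by these channel counts, using the 10 Gbps per-channel figure from the text.

```python
# Sketch of the capacity comparison described above, using the article's figures:
# 10 Gbps per channel, 1 channel for grey optics, 4 for CWDM, up to 100 for DWDM.

RATE_PER_CHANNEL_GBPS = 10

systems = {"grey": 1, "CWDM": 4, "DWDM": 100}

for name, channels in systems.items():
    capacity = channels * RATE_PER_CHANNEL_GBPS
    print(f"{name}: {channels} channel(s) -> {capacity} Gbps per fiber")

# How much more data DWDM carries than the alternatives
print("DWDM vs grey:", systems["DWDM"] // systems["grey"], "x")
print("DWDM vs CWDM:", systems["DWDM"] // systems["CWDM"], "x")
```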
You can transmit 1 Tbps or one trillion bps at that capacity – 100 times more data than grey optics and 25 times more than CWDM. While the volume of data transmitted with DWDM is impressive, demand will continue to grow as we move toward IoT and 5G. Adding additional optical transceivers with different wavelengths to a fixed-wavelength DWDM system can significantly increase costs. Tunable DWDM transceivers allow you to control the wavelength (color) that the laser channel emits, adding flexibility and reducing cost. However, two obstacles prevented the broader deployment of DWDM technology.
First of all, installing and maintaining many new DWDM optical links was a time-consuming and costly process. Fortunately, the telecom industry developed a new weapon to face these challenges: self-tuning DWDM modules. Self-tuning DWDM modules minimize the network’s time-to-service by eliminating additional installation tasks such as manual tuning and record verification and reducing the potential for human error. They are host-agnostic and can plug into any third-party host equipment.
Furthermore, tunability standards allow modules from different vendors to communicate with each other, avoiding compatibility issues and simplifying upgrade choices. Self-tuning modules made the deployment and operation of DWDM links faster, simpler, and more affordable. The second issue had to do with size. DWDM modules were traditionally too large, so plugging them into a router required sacrificing roughly half of the expensive router faceplate capacity. Telecom operators could not accept such a trade-off. Advances in electronic and photonic integration overcame these trade-offs, miniaturizing coherent line card transponders into pluggable modules the size of a large USB stick.
Few companies worldwide supply DWDM technology with such compact sizes and self-tuning features. EFFECT Photonics is among them, and its tunable and cost-effective DWDM technologies will act as enablers of 5G and IoT, bringing the future to you today.
Tags: DWDM, Integrated Photonics, optical networking, optical technology, photonic integrated chip, photonic integration, PIC, programmable photonic system-on-chip
Data Center Interconnects: Coherent or Direct Detect?
With the increasing demand for cloud-based applications, datacom providers are pushing forward with expanding their…
With the increasing demand for cloud-based applications, datacom providers are pushing forward with expanding their distributed computing networks. Therefore, they and their telecom provider partners are looking for data center interconnect (DCI) solutions that are faster and more affordable than before to ensure that connectivity between metro and regional facilities does not become a bottleneck.
Energy usage, space, simplicity, and cost-effectiveness all impact the efficiency of DCI infrastructure. These solutions must consider watts per bit, rack space, and simplified provisioning and operating expenditure. Previously, direct detect technology could fulfill these requirements for short-reach DCIs inside data centers and campuses. However, achieving the reach and bandwidths required for edge and metro DCIs required external amplifiers and dispersion compensators that increased the cost and complexity of network operations.

At the same time, advances in electronic and photonic integration allowed longer reach coherent technology to be miniaturized into QSFP-DD and OSFP form factors. This enabled the transport of 100G and 400G connections over a single wavelength and several hundreds of kilometers, which is ideal for edge and metro DCI networks. Provider operations teams found the simplicity of coherent pluggables very attractive. There was no need to install and maintain additional amplifiers and compensators as in direct detect: a single coherent transceiver plugged into a router could fulfill the requirements.
In the coming decade, the shorter-reach DCI links will also require upgrades to 400G, 800G, and Terabit speeds, and at those speeds, coherent technology comes close to matching the energy consumption of direct detect. This would make it competitive even for shorter links.
Coherent Dominates in Metro DCIs
The advances in electronic and photonic integration allowed coherent technology for metro DCIs to be miniaturized into QSFP-DD and OSFP form factors. This progress allowed the Optical Internetworking Forum (OIF) to create the 400ZR and ZR+ standards for 400G DWDM pluggable modules. With small enough modules to pack a router faceplate densely, the datacom sector could profit from a 400ZR solution for high-capacity data center interconnects of up to 80km. If needed, extended reach 400ZR+ pluggables can cover several hundreds of kilometers. As an example of their success, Cignal AI forecasts that 400ZR shipments will dominate in the edge applications, as shown in Figure 3.

Further improvements in integration can further boost the reach and efficiency of coherent transceivers. For example, by integrating all photonic functions on a single chip, including lasers and optical amplifiers, EFFECT Photonics’ photonic System-On-Chip (SoC) technology can achieve higher transmit power levels and longer distances while keeping the smaller QSFP-DD form factor, power consumption, and cost.
Campus DCI Is The Battleground of Direct Detect and Coherent
The campus DCI segment, featuring distances below ten kilometers, was squarely the domain of direct detect products when the standard speed of these links was 100Gbps. No amplifiers or compensators are needed at these shorter distances, so direct detect transceivers are as simple to deploy and maintain as coherent ones. However, at 400Gbps speeds, the power consumption of coherent technology is much closer to that of direct detect PAM-4 solutions.
This gap in power consumption is expected to disappear at 800Gbps, as shown in the figure below. For Terabit speeds, the prediction is that coherent transceivers will be more efficient. Furthermore, as the volume production of coherent transceivers increases, their price will also become competitive with direct detect solutions. Overall, coherent transceivers are expected to scale up better in future upgrades.

Direct Detect Dominates Intra Data Center Interconnects (For Now…)
Below Terabit speeds, direct detect technology (both NRZ and PAM-4) will likely dominate the intra-DCI space (also called data center fabric) in the coming years. In this space, links span less than two kilometers, and for particularly short links (< 300 meters), affordable multimode fiber (MMF) is frequently used.
Nevertheless, the move to larger, more centralized data centers (such as hyperscale data centers) is lengthening intra-DCI links. Instead of transferring data directly from one data center building to another, new data centers first move data to a central hub. So even if the building you want to connect to is only 200 meters away, the fiber may run to a hub that is one or two kilometers away. In other words, intra-DCI links are becoming campus DCI links, which require single-mode fiber solutions.
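As a toy illustration of this shift, the sketch below applies the rule of thumb from the text: multimode fiber for very short runs, single-mode fiber once hub-and-spoke layouts stretch links to campus distances. The 300 m threshold follows the text as a rule of thumb, and the link lengths are illustrative assumptions.

```python
# Toy illustration of the fiber-choice rule of thumb described above:
# very short intra-DCI runs can use multimode fiber, while the longer
# hub-and-spoke runs need single-mode fiber. The threshold is a rule of
# thumb from the text, not a standardized limit.

def fiber_choice(link_length_m: float) -> str:
    return "multimode (MMF)" if link_length_m < 300 else "single-mode (SMF)"

for length in (150, 200, 1200, 2000):   # hypothetical link lengths in meters
    print(f"{length} m link -> {fiber_choice(length)}")
```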
On top of these changes, the upgrades to Terabit speeds in the coming decade will also see coherent solutions challenge the power consumption of direct detect transceivers. PAM-4 direct detect transceivers that fulfill the speed requirements require digital signal processors (DSPs) and more complex lasers that will be less efficient and affordable than previous generations of direct detect technology. With coherent technology scaling up in volume and having greater flexibility and performance, one can make the argument that it will reach cost-competitiveness in this space, too.
Takeaways
Unsurprisingly, the decision between coherent and direct detect technology for DCIs boils down to reach and capacity needs. Coherent is already established as the solution for metro DCIs and is gaining ground in the campus DCI segment for 800G and Terabit speeds. With the move to Terabit speeds and scaling production volumes, it could become cost-competitive inside the data center as well. Overall, the datacom sector is moving towards coherent technology, and it pays to keep this in mind when upgrading data center links.
Tags: 800G, access networks, coherent, cost, cost-effective, Data center, distributed computing, edge and metro DCIs, integration, Intra DCI, license, metro, miniaturized, photonic integration, Photonics, pluggable, power consumption, power consumption SFP, reach, Terabit
Free Space Optics for Access Networks
Optical signals are moving deeper and deeper into access networks. Achieving the ambitious performance goals…
Optical signals are moving deeper and deeper into access networks. Achieving the ambitious performance goals of 5G architectures requires more optics than ever between small cell sites. As stated in a recent report by Deloitte, “extending fiber optics deeper into remote communities is a critical economic driver, promoting competition, increasing connectivity for the rural and underserved, and supporting densification for wireless.”
However, there are cases in which fiber is not cost-effective to deploy. For example, a network carrier might need to quickly increase their access network capacity for a big festival, and there is no point in deploying extra fiber. In many remote areas, the customer base is so small that the costly deployment of fiber will not produce a return on investment. These situations must be addressed with some kind of wireless access solution. Carriers have used fixed microwave links for the longest time to handle these situations.
However, radio microwave frequencies might not be enough as the world demands greater internet speeds, and simply moving to higher carrier frequencies will limit the reach of microwave links. On top of that, the radio spectrum is quite crowded, and a carrier might not have the available licensed spectrum to deploy this wireless link. Besides, microwave point-to-point links produce plenty of heat while struggling to deliver capacity beyond a few Gbps. This is where free-space optics (FSO) comes into play.
FSO is a relatively straightforward technology to explain. A high-power laser source converts data into laser pulses and sends them through a lens system into the atmosphere. The laser travels to the other side of the link and passes through a receiver lens system, where a high-sensitivity photodetector converts the laser pulses back into electronic data that can be processed. In other words, instead of using an optical fiber as a medium to transmit the laser pulses, FSO uses air as the medium. The laser typically operates at an infrared wavelength of 1550nm that is safer for the eye.

FSO has often been talked about as a futuristic technology reserved for space applications, but it can be used for much more than that, including ground-to-ground links in access networks. FSO can deliver a wireless access solution that can be deployed quickly and with more bandwidth capacity, better security features, and less power consumption than traditional point-to-point microwave links. Furthermore, since it does not use the RF spectrum, there is no need to secure spectrum licenses.
Overcoming the challenges of alignment and atmospheric turbulence
Despite these benefits, FSO struggled to break through into practical applications because of certain technical challenges, so communications infrastructure focused on more stable transmission alternatives such as optical fiber and RF signals. However, research and innovation over the last few decades have been removing these technical barriers. One obstacle to achieving longer distances with FSO had to do with the quality of the laser signal, which gets distorted as the beam travels through the atmosphere.
Over time, FSO developers have found a solution to this issue in adaptive optics systems. These systems compensate for distortions in the beam by using an active optical element—such as a deformable mirror or liquid crystal—that dynamically changes its structure depending on the shape of the laser beam. Dutch startup Aircision uses this kind of technology in its FSO systems to increase their tolerance to atmospheric disruptions.

Another drawback of FSO is aligning the transmitter and receiver units. Laser beams are extremely narrow, and if the beam doesn’t hit the receiver lens at just the right angle, the information may be lost. The system requires almost perfect alignment, and it must maintain this alignment even when there are small changes in the beam trajectory due to wind or atmospheric disturbances.
FSO systems can handle these alignment issues with fast steering mirror (FSM) technology. These mirrors are driven by electrical signals and are fast, compact, and accurate enough to compensate for disturbances in the beam trajectory. However, even if the system can maintain the beam trajectory and shape, atmospheric turbulence can still degrade the message and cause interference in the data. Fortunately, FSO developers also use sophisticated digital signal processing (DSP) techniques to compensate for these impairments.
These DSP techniques allow for reliable, high-capacity, quick deployments even through thick clouds and fog. FSO links can now handle Gbps capacity over several kilometers thanks to all these technological advances. For example, a collaboration between Aircision and TNO demonstrated in 2021 that their FSO systems could reliably transmit 10 Gbps over 2.5 km. Aircision’s Scientific Director John Reid explained, “it’s an important milestone to show we can outperform microwave E-band antennas and provide a realistic solution for the upcoming 5G system.”
An alternative for safe, private networks
An understated benefit of FSO is that, from a physics perspective, it is arguably the most secure form of wireless communication available today. Point-to-point microwave links transmit a far more directional beam than mobile antennas or WiFi systems, which reduces the potential for security breaches. However, even these narrower microwave beams are still spread out enough to cover a wide footprint vulnerable to eavesdropping and jamming.
At a 1km distance, the beam can spread out enough to cover roughly the length of a building, and at 5km, it could cover an entire city block. Furthermore, microwave systems have side- and back lobes radiating away from the intended direction of transmission that can be intercepted too. Finally, if an attacker is close enough to the source, even the reflected energy from buildings can be used to intercept the signal.

Laser beams in FSO are so narrow and focused that they do not have to deal with these issues. At 1km, a typical laser beam only spreads out about 2 meters, and at 5km, only about 5 meters. There are no side and back lobes to worry about and no near-zone reflections. The beam is so narrow that intercepting the transmission becomes an enormous challenge. An intruder would have to get within inches of a terminal or the line of sight, making it easier to get discovered. To complicate things further, the intruder’s terminal would also need to be very well aligned to pick up enough of a signal.
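A rough way to see why the laser footprint stays so small is a simple divergence estimate. The sketch below assumes a linear spreading model and a roughly 1 mrad divergence; both are illustrative assumptions rather than specifications of any particular FSO terminal, and the footprint figures quoted above are likewise order-of-magnitude approximations.

```python
# Back-of-the-envelope beam footprint estimate for an FSO link.
# Assumes a simple linear divergence model: footprint ≈ d0 + theta * L.
# The divergence and initial aperture below are assumed, typical orders of
# magnitude, not measured figures for any particular FSO product.

def beam_footprint_m(distance_m, divergence_rad=1e-3, initial_diameter_m=0.05):
    """Approximate beam diameter after propagating `distance_m` meters."""
    return initial_diameter_m + divergence_rad * distance_m

for km in (1, 5):
    print(f"{km} km: beam roughly {beam_footprint_m(km * 1000):.1f} m wide")
```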
Using Highly-Integrated Transceivers in Free Space Optics
Even though fiber optical communications drove the push for smaller and more efficient optical transceivers, this progress also benefits FSO. As we have explained in previous articles, optical transmission systems have been miniaturized from big, expensive line cards to small, affordable pluggables the size of a large USB stick. These compact transceivers with highly integrated optics and electronics have shorter interconnections, fewer losses, and more elements per chip area. These features have all reduced power consumption over the last decade. At EFFECT Photonics, we achieve even further efficiency gains with an optical System-On-Chip (SoC) that integrates all photonic functions on a single chip, including lasers and amplifiers.

FSO systems can now take advantage of affordable, low-power transceivers to transmit and receive laser signals in the air. For example, a transceiver based on an optical SoC can output a higher power into the FSO system. With this higher laser power, the FSO system does not need to amplify the signal as much before transmitting it, improving its noise profile. This benefit applies to both direct detect and coherent transceivers, which is a key reason why Aircision has partnered with EFFECT Photonics to create both direct detect and coherent free-space optical systems; the startup ultimately aims to reach transmission speeds of 100 Gbps over the air.
Takeaways
FSO has moved from the domain of science fiction to a practical technology that now deserves a place in access networks. FSO can deliver a wireless access solution that can be deployed quickly and with more bandwidth capacity, security features, and less power consumption than traditional point-to-point microwave links. Furthermore, since it does not use the RF spectrum, it is unnecessary to secure spectrum licenses. Affordable direct detect and coherent transceivers based on SoC can further improve the quality and affordability of FSO transmission.
Tags: access networks, adaptive optics, affordable, capacity, coherent, cost-effective, deployments, free space optics, integration, license, miniaturized, photonic integration, Photonics, pluggable, power consumption, private network links, quick deployments, radio spectrum, remote communities, security, SFP, signal processing, turbulence
Improving Edge Computing with Coherent Optical Systems on Chip
Smaller data centers placed locally have the potential to minimize latency, overcome inconsistent connections, and…
Smaller data centers placed locally have the potential to minimize latency, overcome inconsistent connections, and store and compute data closer to the end-user. These benefits are causing the global market for edge data centers to explode, with PwC predicting that it will nearly triple from $4 billion in 2017 to $13.5 billion in 2024. Cloud-native applications are driving the construction of edge infrastructure and services. However, cloud providers cannot distribute their processing capabilities without considerable investments in real estate, infrastructure deployment, and management.
This situation leads to hyperscalers cooperating with telecom operators to install their servers in the existing carrier infrastructure. For example, Amazon Web Services (AWS) is implementing edge technology in carrier networks and company premises (e.g., AWS Wavelength, AWS Outposts). Google and Microsoft have strategies and products that are very similar. In this context, edge computing poses a few problems for telecom providers too. They must manage hundreds or thousands of new nodes that will be hard to control and maintain.
At EFFECT Photonics, we believe that coherent pluggables with an optical System-on-Chip (SoC) can become vital in addressing these datacom and telecom sector needs and enabling a new generation of distributed data center architectures. Combining the optical SoCs with reconfigurable DSPs and modern network orchestration and automation software will be a key to deploying edge data centers.
Edge data centers are a performance and sustainability imperative
Various trends are driving the rise of the edge cloud:
- 5G technology and the Internet of Things (IoT): These mobile networks and sensor networks need low-cost computing resources closer to the user to reduce latency and better manage the higher density of connections and data.
- Content delivery networks (CDNs): The popularity of CDN services continues to grow, and most web traffic today is served through CDNs, especially for major sites like Facebook, Netflix, and Amazon. By using content delivery servers that are more geographically distributed and closer to the edge and the end user, websites can reduce latency, load times, and bandwidth costs, as well as increase content availability and redundancy.
- Software-defined networks (SDN) and network function virtualization (NFV): The increased use of SDNs and NFV requires more cloud software processing.
- Augmented and virtual reality applications (AR/VR): Edge data centers can reduce streaming latency and improve the performance of AR/VR applications.
Several of these applications require lower latencies than before, and centralized cloud computing cannot deliver those data packets quickly enough. As shown in Table 1, a data center at a town or suburb aggregation point could halve the latency compared to a centralized hyperscale data center. Enterprises with their own on-premises data center can reduce latencies by 12 to 30 times compared to hyperscale data centers.
Type of edge | Data center | Location | Number of DCs per 10M people | Average latency | Size
---|---|---|---|---|---
On-premises edge | Enterprise site | Businesses | NA | 2-5 ms | 1 rack max
Network (mobile) edge: Tower edge | Tower | Nationwide | 3000 | 10 ms | 2 rack max
Network (mobile) edge: Outer edge | Aggregation points | Town | 150 | 30 ms | 2-6 rack max
Network (mobile) edge: Inner edge | Core | Major city | 10 | 40 ms | 10+ rack max
Regional edge | Regional | Major city | 100 | 50 ms | 100+ racks
Not edge | Hyperscale | State/national | 1 | 60+ ms | 5000+ racks
Cisco estimates that 85 zettabytes of useful raw data were created in 2021, but only 21 zettabytes were stored and processed in data centers. Edge data centers can help close this gap. For example, industries or cities can use edge data centers to aggregate all the data from their sensors. Instead of sending all this raw sensor data to the core cloud, the edge cloud can process it locally and turn it into a handful of performance indicators. The edge cloud can then relay these indicators to the core, which requires a much lower bandwidth than sending the raw data.
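Conceptually, that edge aggregation step might look like the following sketch, in which raw sensor samples are reduced to a few indicators before anything is sent upstream; the readings, indicator choices, and alert threshold are made up for illustration.

```python
# Conceptual sketch of the data reduction described above: an edge data center
# aggregates raw sensor readings into a few indicators and forwards only those
# to the core cloud. Values and thresholds are hypothetical.

import statistics

raw_readings = [21.3, 21.7, 22.1, 35.9, 21.5, 21.8]   # e.g., temperature samples (hypothetical)

indicators = {
    "mean": round(statistics.mean(raw_readings), 2),
    "max": max(raw_readings),
    "alerts": sum(1 for r in raw_readings if r > 30.0),   # assumed alert threshold
}

print(f"Raw samples kept and processed at the edge: {len(raw_readings)}")
print(f"Indicators relayed to the core cloud: {indicators}")
```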

Edge data centers therefore allow more sensor data to be aggregated and processed to make systems worldwide smarter and more efficient. The ultimate goal is to create entire “smart cities” that use this sensor data to benefit their inhabitants, businesses, and the environment. Everything from transport networks to water supply and lighting could be improved if more sensor data is available in the cloud to optimize these processes.

Distributing data centers is also vital for future data center architectures. While centralizing processing in hyperscale data centers made them more energy-efficient, the power grid often limits the potential location of new hyperscale data centers. Thus, the industry may have to take a few steps back and decentralize data processing capacity to cope with the strain of data center clusters on power grids. For example, data centers can be relocated to areas where spare power capacity is available, preferably from nearby renewable energy sources. EFFECT Photonics envisions a system of data centers with branches in different geographical areas, where data storage and processing are assigned based on the local and temporal availability of renewable (wind, solar) energy and the total energy demand in the area.

Coherent technology simplifies the scaling of edge data center interconnects
As edge data centers became more common, the issue of how to interconnect them became more prominent. Direct detect technology had been the standard for short-reach data center interconnects. However, reaching distances greater than 50km and bandwidths over 100Gbps, as required for modern edge data center interconnects, demanded external amplifiers and dispersion compensators that increased the complexity of network operations. At the same time, advances in electronic and photonic integration allowed longer reach coherent technology to be miniaturized into QSFP-DD and OSFP form factors. This progress allowed the Optical Internetworking Forum (OIF) to create the 400ZR and ZR+ standards for 400G DWDM pluggable modules. With small enough modules to pack a router faceplate densely, the datacom sector could profit from a 400ZR solution for high-capacity data center interconnects of up to 80km. If needed, extended reach 400ZR+ pluggables can cover several hundreds of kilometers. Cignal AI forecasts that 400ZR shipments will dominate in edge applications, as shown in Figure 3.

Further improvements in integration can further boost the reach and efficiency of coherent transceivers. For example, by integrating all photonic functions on a single chip, including lasers and optical amplifiers, EFFECT Photonics’ optical System-On-Chip (SoC) technology can achieve higher transmit power levels and longer distances while keeping the smaller QSFP-DD form factor, power consumption, and cost.
Maximizing Edge Computing with Automation
With the rise of edge data centers, telecom providers must manage hundreds or thousands of new nodes that will be hard to control and maintain. Furthermore, providers also need a flexible network with pay-as-you-go scalability that can handle future capacity needs. Fortunately, several new technologies are enabling this scalable and automated network management.
First of all, the rise of self-tuning algorithms has made the installation of new pluggables easier than ever. They eliminate additional installation tasks such as manual tuning and record verification. They are host-agnostic, can plug into any third-party host equipment, and scale as you grow. Standardization also allows modules from different vendors to communicate with each other, avoiding compatibility issues and simplifying upgrade choices. The communication channels used for self-tuning algorithms can also be used for remote diagnostics and management, as is the case with EFFECT Photonics’ NarroWave technology.
Automation potential improves further by combining artificial intelligence with the software-defined networks (SDNs) framework that virtualizes and centralizes network functions. This creates an automated and centralized management layer that can allocate resources efficiently and dynamically. For example, AI in network management will become a significant factor in reducing the energy consumption of future telecom networks.

Future smart transceivers with reconfigurable digital signal processors (DSPs) can give the AI-controlled management layer even more degrees of freedom to optimize the network. These smart transceivers will relay more device information for diagnosis and, depending on the management layer’s instructions, can change their coding schemes to adapt to different network requirements.
Takeaways
Cloud-native applications require edge data centers that offer lower latency and a better fit with the existing power grid. However, their implementation comes with the challenges of more data center interconnects and a massive increase in nodes to manage. Fortunately, coherent pluggables with self-tuning can play a vital role in addressing these datacom and telecom sector challenges and enabling a new generation of distributed data center architectures. Combining these pluggables with modern network orchestration and automation software will boost the deployment of edge data centers. EFFECT Photonics believes that with these automation technologies (self-tuning, SDNs, AI), we can reach the goal of a self-managed, zero-touch automated network that can handle the massive scale-up required for 5G networks and edge computing.
Tags: 400ZR, artificial intelligence, cloud, coherent, computing, data centers, DSP, edge, edge data centers, infrastructure, latency, network, network edge, operators, optical system-on-chip, pluggables, self-tuning, services
The Growing Market for Tunable Lasers
The world is moving towards tunability. The combination of tunable lasers and dense wavelength division…
The world is moving towards tunability. The combination of tunable lasers and dense wavelength division multiplexing (DWDM) allows datacom and telecom industries to expand their network capacity without increasing their existing fiber infrastructure. Furthermore, the miniaturization of coherent technology into pluggable transceiver modules has finally enabled the widespread implementation of IP over DWDM solutions.
Self-tuning algorithms have also made DWDM solutions more widespread by simplifying their installation and maintenance. Hence, many application cases—metro transport, data center interconnects, and even future access networks—are moving towards coherent tunable pluggables. The market for coherent tunable transceivers will explode in the coming years, with LightCounting estimating that annual sales will double by 2026. Telecom carriers and especially data center providers will drive the market demand, upgrading their optical networks with 400G, 600G, and 800G pluggable transceiver modules that will become the new industry standards.

Same Laser Performance, Smaller Package
As the industry moves towards packing more and more transceivers on a single router faceplate, tunable lasers need to maintain performance and power while moving to smaller footprints and lower power consumption and cost. Due to the faceplate density requirements for data center applications, transceiver power consumption is arguably the most critical factor in this use case.
In fact, power consumption is the main obstacle preventing pluggables from becoming a viable solution for a future upgrade to Terabit speeds. Since lasers are the second biggest power consumers in the transceiver module, laser manufacturers face a challenging task: they must manufacture laser units that are small and energy-efficient enough to fit QSFP-DD and OSFP pluggable form factors while maintaining laser performance. Fortunately, these ambitious spec targets became achievable thanks to improved photonic integration technology.
The original 2011 ITLA standard from the Optical Internetworking Forum (OIF) was 74mm long by 30.5mm wide. By 2015, most tunable lasers shipped in a micro-ITLA form factor that cut the original ITLA footprint in half. In 2021, the nano-ITLA form factor designed for QSFP-DD and OSFP modules has once again cut the micro-ITLA footprint almost in half. The QSFP-DD modules that house the full transceiver are smaller (78mm by 20mm) than the original ITLA form factor. Stunningly, tunable laser manufacturers achieved this size reduction without impacting laser purity and power.
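Using the dimensions and halving steps quoted above, a quick calculation shows how far the laser footprint has shrunk; the micro- and nano-ITLA areas below are approximated as successive halvings of the original footprint, as the text describes, rather than exact MSA dimensions.

```python
# Quick footprint comparison based on the figures in the text.
# Micro- and nano-ITLA areas are approximated as successive halvings of the
# original ITLA footprint; exact MSA dimensions differ slightly.

itla_area = 74 * 30.5                  # original 2011 ITLA footprint, mm^2
micro_itla_area = itla_area / 2        # roughly half the original footprint
nano_itla_area = micro_itla_area / 2   # roughly half again
qsfp_dd_area = 78 * 20                 # full QSFP-DD module footprint, mm^2

for name, area in [("ITLA", itla_area), ("micro-ITLA", micro_itla_area),
                   ("nano-ITLA", nano_itla_area), ("QSFP-DD module", qsfp_dd_area)]:
    print(f"{name}: ~{area:.0f} mm^2")
```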

Versatile Laser Developers for Different Use Cases
The different telecom and datacom applications will have different requirements for their tunable lasers. Premium coherent systems used for submarine and ultra-long-haul require best-in-class lasers with the highest power output and purity. On the other hand, metro transport and data center interconnect applications do not need the highest possible laser quality, but they need small devices with lower power consumption to fit router faceplates. Meanwhile, the access network space looks for lower-cost components that are also temperature hardened.

These varied use cases provide laser developers with ample opportunities and market niches for fit-for-purpose solutions. For example, a laser module can be set to run at a higher voltage to provide higher output power and reach for premium long-haul applications. On the other hand, tuning the laser to a lower voltage enables a more energy-efficient operation that could serve more lenient, shorter-reach use cases (links < 250km), such as data center interconnects.
An Independent Player in Times of Consolidation
With the increasing demand for coherent transceivers, many companies have performed acquisitions and mergers that allow them to develop transceiver components internally and thus secure their supply. LightCounting forecasts show that while this consolidation will decrease the sales of modulator and receiver components, the demand for tunable lasers will continue to grow. The forecast expects the tunable laser market for transceivers to reach $400M in 2026.

We can dive deeper into the data to find the forces that drive the steady growth of the laser market. As shown in Figure 4, the next five years will likely see explosive growth in the demand for high-purity, high-power lasers. The forecast predicts that the shipments of such laser units will increase from roughly half a million in 2022 to 1.4 million in 2026 due to the growth of 400G and 800G transceiver upgrades. However, the industry consolidation will make it harder for component and equipment manufacturers to source lasers from independent vendors for their transceivers.

This data indicates that the market needs more independent vendors to provide high-performance ITLA components that adapt to different datacom or telecom provider needs. Following these trends, at EFFECT Photonics, we are not only developing the capabilities to provide a complete coherent transceiver solution but also the nano-ITLA units needed by other vendors.
Takeaways
The world is moving towards tunability. As the telecom and datacom industries seek to expand their network capacity without increasing their fiber infrastructure, sales of tunable transceivers will explode in the coming years. These transceivers need tunable lasers with smaller sizes and lower power consumption than ever. Fortunately, advances in photonic integration are fulfilling these laser requirements, leading to the new nano-ITLA module standards. However, even though component and equipment vendors need these tunable lasers for their next-gen transceivers, industry consolidation can affect their supply. This situation presents an opportunity for new independent vendors to supply nano-ITLA units to this growing market.
Tags: acquisition, coherent, coherent communication systems, coherent optical module vendor, coherent technology stack, datacenters, datacom, DWDM, high-performance, hyperscalers, independent, Integrated Photonics, lasers, noise, OEM, optical engine, optical transceivers, performance, photonic integration, Photonics, pluggables, power consumption, reach, self-tuning, Telecom, telecom carriers, Transceivers, tunable, tunable laser, tuneability, VARs, versatile
Co-Designing Optics and Electronics for Versatile and Green Transceivers
Network and data center operators need fast and affordable pluggable transceivers that perform well enough…
Network and data center operators need fast and affordable pluggable transceivers that perform well enough to cover a wide range of link lengths. However, power consumption and thermal management are the big obstacles in the roadmap to scale high-speed transceivers into Terabit speeds.
Over the last two decades, power ratings for pluggable modules have increased as we moved from direct detection to more power-hungry coherent transmission: from 2W for SFP modules to 3.5W for QSFP modules and now to 14W for QSFP-DD and 21.1W for OSFP form factors. Rockley Photonics researchers estimate that a future electronic switch filled with 800G modules would draw around 1 kW of power just for the optical modules.
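A rough back-of-the-envelope calculation puts these numbers in context. The per-module power ratings below are the figures quoted above, while the faceplate port count is an assumption for illustration.

```python
# Rough estimate of faceplate optics power for a fully populated switch,
# in the spirit of the Rockley Photonics figure quoted above.
# The port count is an assumption; per-module powers follow the text.

module_power_w = {"SFP": 2.0, "QSFP": 3.5, "QSFP-DD": 14.0, "OSFP": 21.1}

ports = 36   # hypothetical 1RU switch faceplate

for form_factor in ("QSFP-DD", "OSFP"):
    total = ports * module_power_w[form_factor]
    print(f"{ports} x {form_factor} modules: ~{total:.0f} W just for the pluggable optics")
```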
Around 50% of a coherent transceiver’s power consumption goes into the digital signal processing (DSP) chip that also performs the functions of clock data recovery (CDR), optical-electrical gear-boxing, and lane switching. Scaling to higher bandwidths leads to even more losses and energy consumption from the DSP chip and its radiofrequency (RF) interconnects with the optical engine.

Thus, a great incentive exists to optimize the interface between the module’s DSP chip and the optical engine to make the transceiver more energy efficient. This need for optimization and efficiency makes co-designing the optical and electronic systems of the transceiver more important than ever.
Co-Designing the Optimal DSP
Coherent DSPs are already application-specific integrated circuits (ASICs), but they could fit their respective optical engines and use cases even more tightly. Transceiver developers often source their DSP, laser, and optical engine from different suppliers, so all these chips are designed separately from each other. This setup reduces the time to market and simplifies the research and design processes but comes with trade-offs in performance and power consumption.
In such cases, the DSP is like a Swiss army knife: a jack of all trades designed to work with different kinds of PICs but a master of none. For example, many 400ZR+ transceivers used for telecom metro and long-haul applications use the same DSPs as 400ZR transceivers used for much shorter data center interconnects. Given the ever-increasing demand for capacity and the need for sustainability, both as a financial and a social responsibility, transceiver developers increasingly need a steak knife rather than a Swiss army knife.
Co-designing the DSP chip alongside the photonic integrated circuit (PIC) can lead to a much better fit between these components. A co-design approach helps identify in greater detail the trade-offs between various parameters in the DSP and PIC and thus improve system-level performance optimization. A DSP optimized for a specific optical engine and application could save up to a couple of Watts of power compared to the usual transceiver and DSP designs.

Co-designing DSP Interfaces for Power Efficiency
Since the optical engine and DSP operate with signals of differing intensities, they need some analog electronic components to “talk” to each other. On the transmit side, the electronic driver block takes signals from the DSP, converts them to a higher voltage, and drives the optical engine. On the receive side, a trans-impedance amplifier (TIA) block will boost the weak signal captured by the optical detector so that the DSP can more easily process it. This signal power conversion overhead constitutes roughly 10-15% of transceiver power consumption, as shown in Figure 1.
Co-designing the DSP and PIC could enable ways to decrease this power conversion overhead. For example, the modulator of the optical engine could be designed to run at a lower voltage that is more compatible with the signal output of the DSP. This way, the DSP could drive the optical engine directly without the need for the analog electronic driver. Such a setup could save roughly two watts of power consumption!
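The sketch below illustrates this power budget reasoning. Only the proportions (roughly 50% for the DSP, 10-15% for the driver and TIA) follow the text; the absolute module power and the exact split are assumptions for illustration.

```python
# Illustrative power budget for a coherent pluggable, using the rough shares
# quoted in the text (DSP ~50%, driver/TIA conversion ~10-15%). The absolute
# module power and the exact split are assumptions.

module_power_w = 14.0   # e.g., a QSFP-DD-class module (assumed)

budget = {
    "DSP": 0.50 * module_power_w,
    "driver + TIA": 0.14 * module_power_w,       # within the 10-15% range
    "optics, laser, and misc": 0.36 * module_power_w,
}

codesigned_saving_w = 2.0   # driving the modulator directly, as described above

print("Baseline budget (W):", {k: round(v, 1) for k, v in budget.items()})
print(f"Co-designed module: ~{module_power_w - codesigned_saving_w:.1f} W "
      f"({codesigned_saving_w / module_power_w:.0%} saving)")
```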

Co-design is also vital to optimize the transceiver layout floorplan. This plan must consider the power dissipation of all transceiver building blocks to avoid hot spots and thermal interference from the DSP to the highly thermally sensitive PIC. The positioning of all bond pads and interfaces is also very important for signal and power integrity, requiring a co-design with the package and substrate.
During this floorplan development, the RF interconnections between the DSP and PIC can be made as short as possible. These optimized RF interconnects reduce the optical and thermal losses in the transceiver package and will reduce the power consumption of the analog electronic driver and amplifier.
Co-Designing Fit-For-Purpose DSPs and PICs
As shown in Figure 4, a DSP chip contains a sequence of processing blocks that compensate for different transmission issues in the fiber and then recover, decode, and error-correct the data streams. Different applications might require slightly different layouts of the DSP or might not need some processing blocks.

For example, full DSP compensation might be required for long links that span several hundreds of kilometers, but a shorter link might not require all the DSP functions. In these cases, a transceiver could turn off or scale down certain DSP functions, such as chromatic dispersion compensation, to save power. These power-saving features could be particularly useful for shorter data center interconnect (DCI) links. On the optical engine side, the laser might not require high power to transmit over such a short DCI link, so the amplifier functions could shut down. Co-designing the DSP and PIC allows a transceiver developer to mix and match these energy-saving features to achieve the lowest possible power for a specific application.
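A fit-for-purpose configuration could be expressed as something like the following sketch; the block names and distance thresholds are illustrative assumptions, not the feature set of any specific DSP.

```python
# Conceptual sketch of a fit-for-purpose DSP configuration: shorter links
# switch off or scale down processing blocks to save power. Block names and
# thresholds are illustrative; they do not describe a specific DSP product.

def dsp_profile(link_km: float) -> dict:
    return {
        "chromatic_dispersion_comp": link_km > 80,   # skip CD compensation on short DCI links
        "full_equalization": link_km > 400,          # reserve heavy equalization for long links
        "tx_amplifier_stage": link_km > 40,          # very short links may not need the booster
    }

for km in (10, 120, 650):
    print(f"{km} km link -> {dsp_profile(km)}")
```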
Takeaways
Power consumption has become the big barrier that prevents pluggable transceivers from scaling up to 800G and Terabit speeds. Overcoming this barrier requires a tighter fit between the optics and electronics of the transceiver, especially when it comes to the interface between the optical engine and the electronic DSP. By co-designing the optical engine and the electronic DSP, transceiver developers could avoid the need for an external electrical driver and reduce transceiver power consumption by 10-15%. A co-design approach can also make it easier to design fit-for-purpose transceivers that implement power-saving features tailored to specific application cases.
The benefits of this co-design approach led EFFECT Photonics to incorporate talent and intellectual property from Viasat’s Coherent DSP team. With this merger, EFFECT Photonics aims to co-design our Optical System-On-Chip with the DSP to develop fit-for-purpose transceivers that are more energy-efficient than ever before.
Tags: acquisition, coherent, coherent communication systems, coherent optical module vendor, DSP, FEC, forward error correction, green, green transceivers, high vertical integration, independent coherent optical module vendor, Integrated Photonics, optical digital signal processing, optical engine, optical transceivers, photonic integration, Photonics, pluggables, Transceivers, tunable laser, tuneability
Reconfigurable DSPs for Versatile Pluggables
Carriers must solve the dilemma of how to use small and affordable coherent pluggables while…

Future Automated Networks Must Also Work on the Physical Layer
Telecom and datacom providers who want to become market leaders must scale up while also learning to allocate their existing network resources most efficiently and dynamically. SDNs can help achieve this efficient, dynamic network management. In a nutshell, the SDN paradigm separates the switching hardware from the software, allowing operators to virtualize network functions in a single centralized controller unit. This centralized management and orchestration (MANO) layer can implement network functions that the switches do not, allowing network operators to allocate network resources more intelligently and dynamically. This added flexibility and optimization will improve network outcomes for operators. Building on this SDN foundation, future automated networks will also need:
- Artificial intelligence and machine learning algorithms for complete network automation: For example, AI in network management will become a significant factor in reducing the energy consumption of future telecom networks.
- Sensor and control data flow across all OSI model layers, including the physical layer: As networks get bigger and more complex, the management and orchestration (MANO) software needs more degrees of freedom and knobs to adjust. Next-generation MANO software needs to adjust and optimize both the physical and network layers to fit the network best.
The Importance of Standardized Error Correction
Forward error correction (FEC) implemented by DSPs has become a vital component of coherent communication systems. FEC makes the coherent link much more tolerant to noise than a direct detect system and enables much longer reach and higher capacity. Thanks to FEC, coherent links can handle bit error rates that are literally a million times higher than a typical direct detect link. In other words, FEC algorithms allow the DSP to enhance the link performance without changing the hardware. This enhancement is analogous to imaging cameras: image processing algorithms allow the lenses inside your phone camera to produce a higher-quality image.
A Smart DSP to Rule All Network Links
A smart pluggable transceiver that can adapt to all the applications we have mentioned before (data centers, carrier networks, SDNs) requires an equally smart and versatile DSP. It must be a DSP that can be reconfigured via software to adapt to different network conditions and use cases, for instance by switching among different FEC algorithms. Consider the case of upgrading a long metro link of 650km running at 100 Gbps with open FEC (oFEC). The operator needs to increase that link capacity to 400 Gbps, but oFEC could struggle to provide the necessary link performance. However, if the DSP can be reconfigured to use a proprietary FEC scheme, the transceiver will be able to handle this upgraded link.

| 400ZR | Open ZR+ | Proprietary Long Haul
---|---|---|---
Target application | Edge data center interconnect | Metro, regional data center interconnect | Long-haul carrier
Target reach @ 400G | 120 km | 500 km | 1000 km
Form factor | QSFP-DD/OSFP | QSFP-DD/OSFP | QSFP-DD/OSFP
FEC | cFEC | oFEC | Proprietary
Standards / MSA | OIF | OpenZR+ MSA | Proprietary
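A reconfigurable DSP could encode the table above as a simple mode-selection rule, as in the sketch below; the reach limits mirror the table, while the selection logic itself is only an illustration, not a standardized algorithm.

```python
# Sketch of how a reconfigurable DSP might pick an operating mode and FEC
# from the table above. Reach thresholds mirror the table; the selection
# logic is illustrative.

MODES = [
    # (mode name, FEC, approximate max reach at 400G in km)
    ("400ZR", "cFEC", 120),
    ("OpenZR+", "oFEC", 500),
    ("Proprietary long-haul", "proprietary FEC", 1000),
]

def select_mode(link_km: float):
    for name, fec, reach in MODES:
        if link_km <= reach:
            return name, fec
    raise ValueError("Link exceeds the reach of all available modes")

print(select_mode(80))    # short DCI -> ('400ZR', 'cFEC')
print(select_mode(650))   # the metro upgrade example above -> proprietary FEC
```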
Takeaways
A versatile pluggable that can handle different use cases – data center links, long metro links, and dynamic management and orchestration layers – must be able to use different coding and error correction schemes and adapt to different network requirements. The DSP must be equally versatile, switching among several operating modes – 400ZR, 400ZR+, proprietary – and error correction methods – cFEC, oFEC, and proprietary. Together with a programmable optical system on chip, the DSP can not just apply software corrections but also make optical hardware changes (output power, turning amplifiers on or off) to adapt to different noise scenarios. Through these adjustments, the next generation of pluggable transceivers will be able to handle all the telecom carrier and data center use cases we can throw at them.
Tags: acquisition, coherent, coherent communication systems, coherent optical module vendor, DSP, FEC, forward error correction, green, green transceivers, high vertical integration, independent coherent optical module vendor, Integrated Photonics, optical digital signal processing, optical engine, optical transceivers, photonic integration, Photonics, pluggables, Transceivers, tunable laser, tuneability
International growth opportunities for the photonics ecosystem, provided challenges around talent, scale-up and technology are solved
International growth opportunities for the photonics ecosystem, provided challenges around talent, scale-up and technology are…
Faster, lighter, more durable, and, at the end of the day, also much cheaper: the benefits of photonic circuits are considerable, for a wide range of applications. And the Netherlands plays an important role, globally, in the development and application of this key technology. In recent years, under the leadership of PhotonDelta, a solid foundation has been laid for the Dutch integrated photonics ecosystem. In the final episode of this series, we survey the playing field with Kathleen Philips (imec) and Boudewijn Docter (EFFECT Photonics). Read the whole series here.
Things are going well for the Dutch ecosystem around integrated photonics. Thanks to the perspectives shared by renowned representatives of this rapidly emerging industry in this series of articles, we learned that an international breakthrough is just around the corner. Obviously, this won’t be possible without a lot of investment. But in addition to that, says Kathleen Philips, general manager of imec at Holst Centre, three factors are all-important: choosing the right technology, growing into an ‘economy of scale’, and talent.
Imec’s headquarters are in Leuven; in the Netherlands, the renowned research institute is based in Eindhoven (as part of Holst Centre on the High Tech Campus) and Wageningen (with the OnePlanet Research Center). Although the Netherlands is primarily committed to Indium Phosphide (InP) and Silicon Nitride (SiN) production platforms, Kathleen Philips would like to make a case for internationalizing by joining CMOS-based work platforms such as Silicon Photonics (SiPh). “It offers the best opportunities for international support and that is essential for our growth ambitions.”
What is (integrated) photonics? Photonics is similar to electronics. However, instead of electrons, it uses photons (light) to transmit information. Photonic technology detects, generates, transports, and processes light. Current applications include solar cells, sensors, and fiber-optic networks. Photonic chips, officially called Photonic Integrated Circuits (PICs), integrate various photonic and often electronic functions into a microchip to make smaller, faster, and more energy-efficient devices. Because they are manufactured like traditional chips (with wafer-scale technology), mass production is also within reach – with a price drop as a result.
At imec, Kathleen Philips has an excellent overview of the status of photonics developments in the Netherlands and Belgium. She is thus able to combine the Dutch emphasis on Indium Phosphide and Silicon Nitride with the ‘Leuven’ expertise on Silicon Photonics. “We must be careful not to operate in ‘splendid isolation’. It is precisely in the hybrid combination of platforms that we find the desired connection to the world stage. Moreover, silicon photonics is largely compatible with classical and mainstream CMOS chip production lines, the value of which should never be underestimated. That said; if you need good lasers or low-loss waveguides, then InP and SiN platforms are an essential complement.”
Top-Notch
The next step is in creating an economy of scale, says Philips. “High volume is needed to lower the price of the end product. This automatically means you have to look across borders. Even a European scale is insufficient in that respect; we also have to focus on America and Asia. In photonics, you see the same development as in the semiconductor industry: the promise lies in the high volumes. We know that by scaling up the price goes down.”
The Netherlands has everything it needs to make that leap, Philips emphasizes. “You have to be top-notch to make an impact worldwide. And fortunately, we are. Our R&D is renowned, also historically. We are excellently positioned to connect with the big American flagships, for example. With Eindhoven, Twente and Delft we have academic gems. Their research, their publications, their professors, but also the rich ecosystem of start-ups around them and of course Photondelta: it’s all exactly how we would want to see it. Combine that with the presence of a solid high tech industry with major corporations such as ASML and NXP, and institutes like TNO and imec, and you know that a lot of good things are awaiting us.”
But, Philips warns, “to be successful we must be prepared to look beyond the important Dutch photonics industry and also strategically align ourselves internationally. In particular, the Dutch-Flemish axis offers wonderful opportunities and imec can play a connecting role. From Holst Centre in Eindhoven, we work closely with all the Dutch representatives of the ecosystem. Our colleagues in Leuven have strong international roots, with complementary technology and knowledge.” What helps, she adds, is that both at the Dutch and European level the realization has sunk in that governments can also help financially in this regard. Imec already makes use of Interreg subsidies, but the EU Chips Act is also full of promise in this regard. “And at a national level, there is a chance that the photonics sector can make use of the funding that is going to be distributed through the National Growth Fund. In short: there is much more awareness than before that public investment is important here.”
Talent
In a growth market, finding sufficient talent is always a challenge. In the photonics industry, it is no different. There is no shortage of good universities, says Philips. She mentions the three Dutch Universities of Technology, as well as those of Ghent, Leuven, and Brussels as important centers of expertise. “But you also need crown jewels: companies that capture the imagination so much that they manage to attract the best people, wherever they come from.” As an example, she points to EFFECT Photonics, founded in Eindhoven but grown – in a relatively short time – into a scale-up with some 250 people and offices around the world. “With that, EFFECT also shows how important scaling up is; not just for the company itself, but for our entire ecosystem.”
Indeed, the increasing awareness of EFFECT’s achievements has resulted in more talents knocking on their door. “But in addition to that, we also reach out to the talents ourselves,” adds founder Boudewijn Docter. “In fact, that’s one of the main reasons for our recent acquisition in the United States. We see that young people from all over the world have no trouble finding their way to Eindhoven. Recent graduates and PhDs, for example. They are very important, but we also need more experienced people and for them, it is often more difficult to leave hearth and home for a new workplace on the other side of the world.” And yet it is precisely those people who are desperately needed, Docter says. “The most important engineering skills can only be learned in practice. For the phase in which we are now, trial and error is no longer enough – we also need solid experience.”
This desire to hire more experienced people also leads to more remote work. “But even then, we would like people to come to Eindhoven from time to time, especially if they are working on multidisciplinary projects.” The best is a mix of young and experienced, in-house and remote. “With such a mix, young people find the best circumstances to grow, because they can take an example from their colleagues with a bit more experience.”
Volume
Docter is convinced that the choice to locate EFFECT’s business in places where the talent can be found ultimately also offers advantages for the Netherlands. “By growing all over the world, we become more visible as part of the national and European ecosystem. That in itself then attracts new talent, allowing the entire industry to grow.” This, in turn, supports the economy of scale that Kathleen Philips also calls for. “In the semiconductor industry you always need volume,” Docter says. “Because only then do you really start to notice the advantages. You have to know which markets you want to work for. For example, do you opt for a flexible design of your device, or a very specific one? Either way, you need to improve and stabilize your manufacturing process, which consists of hundreds of steps. Each step must deliver a 99.9999% yield, but it takes time to get there. Not only for us, by the way, but for all stakeholders in our industry, even the biggest ones. We have not yet built up sufficient experience for ‘First Time Right’, with the reliability that goes with such an ambition, but partly due to the focus on volume, we are already very well on our way to maturity.”
The imec model
Kathleen Philips is pleased that imec can play an important role in this global development. “The imec model, in which we set up R&D programs with various partners in a precompetitive setting, and our emphasis on the integration of different production platforms are essential. We are that neutral zone within which you can technically try out new ideas, and test a prototype in the value chain with limited costs. Sometimes this leads to the creation of new start-ups, or to collaboration with existing parties. But always it creates new or stronger ecosystems that the entire industry can benefit from.”
Tags: green, green transceivers, Integrated Photonics, optical transceivers, photonic integration, Photonics, pluggables, Transceivers, tuneability
Solving the Carrier’s Dilemma with Optical Systems-On-Chip
In the coming decade, the network providers and countries with the most extensive installed base…
Moving IP over DWDM from Datacom to Telecom
The viability of IP over DWDM (IPoDWDM) solutions was a major factor in the rise of coherent pluggables for datacenter interconnects. By integrating DWDM pluggable optics into the router, IPoDWDM eliminates the optical transponder shelf as well as the optics between the routers and DWDM systems, reducing the network capital expenditure (CAPEX).
A System-on-Chip Enables Greater Reach
The reach trade-off happens because the QSFP-DD form factor cannot fit optical amplifier components, limiting the transmit power and reach of the transceiver. Furthermore, lasers in most QSFP modules are still discrete: manufactured on a separate chip and then packaged with the photonic integrated circuit (PIC). A discrete laser carries a power penalty because of the losses incurred when connecting it to the PIC. On the other hand, big transponders could easily fit amplifiers to deliver best-in-class performance, reaching 1500 km links that cover all the different link lengths in the carrier network. Fortunately for the industry, further improvements in integration are overcoming the reach trade-off. For example, by integrating all photonic functions on a single chip, including lasers and optical amplifiers, EFFECT Photonics’ optical System-On-Chip (SoC) technology can achieve transmit power levels similar to those of transponders while keeping the smaller QSFP-DD form factor, power consumption, and cost.
One plug to rule all network links
On-chip amplification also adds versatility to QSFP modules because of the tunability of the semiconductor optical amplifiers (SOAs). By tuning the SOAs upwards, the QSFP transceiver can operate in a high-performance mode, with high transmit power and high receiver sensitivity. With the right forward error correction (FEC), this high-performance SoC module can handle borderline long-haul links (about 1000 km). On the other hand, tuning the SOAs down enables an energy-efficient mode that can serve more lenient, shorter-reach use cases (links < 250 km). To show how this versatility becomes useful, let’s look at a real-life example. British Telecom studied the links in their UK network that they would upgrade to 400G. They wanted to interconnect 106 sites (including ten hub sites) with links that contained ROADMs and typical G.652 fiber. BT found that open ZR+ transceivers could only cover 50% of the brownfield links containing ROADMs (link lengths less than 250 km), while X-ponder line cards can cover all brownfield links.
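To illustrate how this dual-mode operation might be used when planning links, here is a minimal sketch in Python. The mode names and the selection logic are illustrative assumptions, not a product API; only the 250 km and 1000 km reach figures come from the text above.

```python
# Hypothetical sketch: choosing an operating mode for a tunable-SOA coherent pluggable
# from the link length. The 250 km and 1000 km thresholds follow the figures quoted
# in the text; the mode names are illustrative only.

def select_soa_mode(link_length_km: float) -> str:
    """Return an illustrative operating mode for a tunable-SOA coherent pluggable."""
    if link_length_km <= 250:
        return "energy-efficient"   # SOAs tuned down for short, lenient links
    elif link_length_km <= 1000:
        return "high-performance"   # SOAs tuned up, strong FEC, borderline long-haul
    else:
        return "out-of-reach"       # beyond what the pluggable is described to cover

for length in (80, 250, 600, 1000, 1500):
    print(f"{length:>5} km -> {select_soa_mode(length)}")
```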
Takeaways
Telecom carriers need to scale up their transport networks affordably, and solutions such as IPoDWDM can help achieve this goal. Coherent pluggable modules with a fully integrated optical System-on-Chip (SoC) can help overcome the performance trade-offs that prevented the broader deployment of IPoDWDM solutions in carrier transport networks. SoC devices maximize the bandwidth density of optical transceivers, enabling transponder line card performance in a pluggable form factor. Thanks to their on-chip tunable amplifiers, the modules with an optical SoC can operate in high and low power modes that cover almost every link in an operator’s network. This way, a single versatile pluggable can take care of a carrier’s future network upgrades.
Tags: green, green transceivers, Integrated Photonics, optical transceivers, photonic integration, Photonics, pluggables, Transceivers, tuneability
Why Greener Transceivers are Profitable
Thanks to the incredible progress in energy-saving technologies (hyperscale datacenters, photonic and electronic integration), the…
Thanks to the incredible progress in energy-saving technologies (hyperscale datacenters, photonic and electronic integration), the exponential growth in data traffic for the next ten years will not lead to an exponential growth in ICT energy consumption. A 2020 study by Huawei Technologies estimates that from 2020 to 2030, global data traffic will grow 14 times while ICT energy consumption will just increase 1.5 times. Telecom operators, customers, employees, and investors are all paying more attention to sustainability.
A study commissioned by Vertiv surveyed 501 telecom enterprises worldwide, and 24% of them thought that energy efficiency should be their first priority when deploying 5G networks, while 16% saw it as their second priority. People are more likely to work for and buy products from companies with clear and ambitious sustainability goals. Investors and shareholders demand risk premiums from assets that underperform on climate goals, which often happens with fossil fuel companies. Such risk premiums could carry over to the telecom and datacom sectors. Sustainability is no longer just a matter of corporate social responsibility; it has real financial consequences.

However, there’s even more to the sustainability story. The telecom and datacom industries should become more sustainable not just because investors and customers like it, but also because it can lead to affordable ways to scale up capacity. After all, sustainable systems are efficient systems that are often smaller, more affordable, and require less energy spending. In this article, we will dive into one example of this trend by explaining how compact, fully-integrated optical transceivers can play an essential role in transitioning towards a greener and more affordable telecom infrastructure.
Telecom Equipment Dissipates Heat…and Money
Data centers and 5G networks might be hot commodities, but the infrastructure that enables them runs even hotter. Electronic equipment generates plenty of heat, and the more heat energy an electronic device dissipates, the more money and energy must be spent to cool it down. The Uptime Institute estimates that the average power usage effectiveness (PUE) ratio for data centers in 2020 was 1.58.
This number means that, on average, every 1 kWh required to power ICT equipment needs an additional 0.58 kWh for auxiliary equipment such as lighting and especially cooling. Datacenter PUE will decrease in the coming decade thanks to the emergence of hyperscale data centers, but the exponential increase of data traffic and 5G services also means that more data centers must be built, especially on the network edges. For all the bad reputation that datacenters receive for their energy consumption, wireless transmission generates even more heat than wired links. While 5G standards are more energy-efficient per bit than 4G, the total power consumption will be much higher than 4G. Huawei expects that the maximum power consumption of one of their 5G base stations will be 68% higher than their 4G stations.
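As a quick back-of-the-envelope check of what the 1.58 PUE figure means in practice, the short sketch below works through the arithmetic in Python. The 1 MW IT load is an illustrative assumption, not a figure from the text.

```python
# Back-of-the-envelope PUE arithmetic based on the 1.58 figure quoted above.
# The 1 MW IT load is an illustrative assumption.

pue = 1.58                            # average data center PUE reported for 2020
it_load_kw = 1000.0                   # assumed IT equipment load (1 MW)

total_kw = it_load_kw * pue           # total facility power
overhead_kw = total_kw - it_load_kw   # cooling, lighting, other auxiliary loads

print(f"IT load:        {it_load_kw:.0f} kW")
print(f"Facility total: {total_kw:.0f} kW")
print(f"Overhead:       {overhead_kw:.0f} kW "
      f"({overhead_kw / it_load_kw:.2f} kWh per kWh of IT load)")
```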
To make things worse, the use of higher frequency spectrum bands and new Internet-of-Things use cases requires the deployment of more base stations too. Prof. Earl McCune from TU Delft estimates that nine out of ten watts of electrical power in 5G systems turn into heat. This is why Huawei also expects that the energy consumption of wireless access networks will increase even more quickly than that of data centers in the next ten years—more than quadrupling between 2020 and 2030.

These issues do not just affect the environment but also the bottom lines of communications companies. McKinsey reports that by the end of 2018, energy costs already represented 5% of operating expenditures for telecom operators. These costs will increase even further with the exponential growth of traffic and the deployment of 5G networks.
Compactness Makes Integrated Photonics Cool
Decreasing energy consumption and costs requires more efficient equipment, and a key to achieving this goal is to increase the use of photonics and miniaturization. Photonics has several properties that improve energy efficiency. Light transmitted over an optical fiber can carry more data faster and over longer distances than electric signals over wires, while dissipating less heat. Due to their longer reach, optical signals also save power compared to electrical signals by reducing the number of times the signal needs regeneration.
Photonics can also play a key role in rethinking the architecture of data centers. To cope with the strain that data center clusters place on power grids, photonics enables a more decentralized system of data centers, with branches in different geographical areas connected through high-speed optical fiber links.
For example, data centers can relocate to areas where spare power capacity is available, preferably from nearby renewable energy sources. Efficiency can increase further by sending data to branches with spare capacity. The Dutch government has already proposed this kind of decentralization as part of their spatial strategy for data centers.

As we have explained in previous articles, miniaturization of telecom technology can also improve energy efficiency and affordability. For example, over the last decade coherent optical systems have been miniaturized from big, expensive line cards to small pluggables the size of a large USB stick. These compact transceivers with highly integrated optics and electronics have shorter interconnections, fewer losses, and more elements per chip area. These features all lead to a reduced power consumption over the last decade, as shown in the figure below.

Transceivers can decrease their energy consumption further by using an optical System-On-Chip (SoC). The SoC integrates all photonic functions on a single chip, including lasers and amplifiers. This full integration leads to simpler and more efficient interconnections between optical elements, which leads to lower losses and heat dissipation. Optical SoCs also allow coherent transceivers to have a similar reach to line card transponders for use cases up to 400G, so the industry does not have to choose between size and performance anymore.
Wafer Scale Processes Make Integrated Photonics Affordable
Previously, deploying coherent technology required investing in large and expensive transponder equipment on both sides of the optical link. The rise of integrated photonics has not only reduced the footprint and energy consumption of coherent transceivers but also their cost. The economics of scale principles that rule the semiconductor industry reduce the cost of optical SoCs and the transceivers that use them. SoCs minimize the footprint of the optics, allowing transceiver developers to fit more optics within a single wafer, which decreases the price of each individual optical system. As the graphic below shows, the more chips and wafers are produced, the lower the cost per chip becomes.

Integrating all optical components—including the laser—on a single chip shifts the complexity from the expensive assembly and packaging process to the more affordable and scalable semiconductor wafer process. For example, it’s much easier to combine optical components on a wafer at high volume than it is to align components from different chips together in the assembly process. This shift to wafer processes also helps drive down the cost of the device.
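To see the shape of this wafer-level economics argument, here is a minimal Python sketch: shrinking the optics so that more SoCs fit on a wafer divides a roughly fixed wafer cost over more devices. The wafer size, wafer cost, and die areas are purely illustrative assumptions.

```python
# Illustrative economics-of-scale sketch: cost per chip vs. chips per wafer.
# Wafer diameter, processed-wafer cost, and die areas are assumed values only.

import math

WAFER_DIAMETER_MM = 76.0     # assumed 3-inch wafer
WAFER_COST_EUR = 10_000.0    # assumed processed-wafer cost

def dies_per_wafer(die_area_mm2: float) -> int:
    """Crude estimate: usable wafer area divided by die area (ignores edge losses)."""
    wafer_area = math.pi * (WAFER_DIAMETER_MM / 2) ** 2
    return int(wafer_area / die_area_mm2)

for die_area in (50.0, 25.0, 10.0):   # smaller optics -> more dies per wafer
    n = dies_per_wafer(die_area)
    print(f"die {die_area:>4} mm2 -> {n:>4} dies/wafer -> ~{WAFER_COST_EUR / n:7.2f} EUR/die")
```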
Takeaways
Pluggable transceivers with compact, highly-integrated optics are more energy efficient and therefore save money in network operational expenditures such as cooling. They can even lead to datacenter architectures that make the most out of the existing electricity and processing resources, allowing cloud providers to make the most of their big infrastructure investments.
By integrating all the optical components in a single SoC, more of them can fit on a single wafer and scale up to higher production volumes. Thanks to the economics of scale, higher volume production leads to lower sales prices, which reduces operators’ capital expenditures too. Due to all the reasons described above, it should now be clear why these greener pluggable transceivers will become a key factor in the successful and profitable deployment of coherent technology in future access networks.
Tags: green, green transceivers, Integrated Photonics, optical transceivers, photonic integration, Photonics, pluggables, Transceivers, tuneability
The Power of Self-Tuning Access Networks
The transitions to 3G and 4G relied heavily on more efficient use of broader RF…
The transitions to 3G and 4G relied heavily on more efficient use of broader RF spectrum blocks. For many cell sites, these transitions were as simple as changing the appropriate radio line card at a base station unit. The same cannot be said about the transition to 5G, which will require deeper restructuring of mobile network architecture. 5G networks will use higher frequency bands, which require the deployment of more cell sites and antennas to cover the same geographical areas as 4G while existing antennas must upgrade to denser antenna arrays. Operators also need to deploy new fiber to connect all these smaller cell sites and access points since the legacy copper and wireless backhaul solutions cannot handle the capacity and latency needed to meet 5G standards.
On the side of fixed access networks, the rise of Remote PHY architectures and Dense Wavelength Division Multiplexing (DWDM) will lead to a similar increase in the density of optical network coverage. Previous networks could serve 500 customers with a single optical node, but future networks will serve that same service area with ten nodes. By deploying these new nodes, providers can vastly increase the bandwidth delivered to customers.
Installing and maintaining these new nodes in fixed networks and optical fronthaul links for wireless networks will require many new DWDM optical links. Even though tunable DWDM modules have made these deployments a bit easier to handle, this tremendous network upgrade still comes with several challenges. Typical tunable modules still require several time-consuming processes to install and maintain, and that time quickly turns into higher expenses.
In the coming decade, the winners in the battle for dominance of access networks will be the providers and countries with the most extensive installed fiber base. Therefore, providers and nations must scale up cost-effectively AND quickly. Every hour saved is essential to reach targets before the competition. Fortunately, the telecom industry has a new weapon in the fight to reduce time-to-service and costs of their future networks: self-tuning DWDM modules.
Plug-and-play operation reduces the time to service
Typical tunable modules involve several tasks—manual tuning, verification of wavelength channel records—that can easily take an extra hour just for a single installation. Repair work on the field can take even longer if the technicians visit two different sites (e.g., the node and the multiplexer) to verify that they connected the correct fibers. If there are hundreds of nodes to install or repair, the required hours of labor can quickly rack up into the thousands and the associated operational expenses into the hundreds of thousands.
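The arithmetic behind this claim is simple enough to spell out. The sketch below (Python) shows how an extra hour or so per node scales across a large rollout; the node count and the labor rate are illustrative assumptions, not figures from the text.

```python
# Rough labor arithmetic for manual tuning and record checks, following the
# text's claim of roughly an extra hour per installation (more for repair visits).
# The 1000-node rollout and 100-euro hourly rate are illustrative assumptions.

nodes = 1000                     # assumed number of nodes installed or repaired
extra_hours_per_node = 1.5       # extra time for manual tuning, records, site checks
hourly_rate_eur = 100.0          # assumed fully loaded field-technician rate

extra_hours = nodes * extra_hours_per_node
extra_cost = extra_hours * hourly_rate_eur

print(f"Extra labor: {extra_hours:.0f} hours")
print(f"Extra OPEX:  {extra_cost:,.0f} EUR")
```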
Self-tuning allows technicians to treat tunable modules the same way they do with grey transceivers. There is no need for additional training for technicians to install the tunable module. There is no need for technicians to program a separate tuning peripheral. There is no need to obsessively check the wavelength channel records and tables on the field to avoid deployment errors. Technicians only need to follow the typical cleaning and handling procedures, plug in the tunable module, and once plugged, the device will automatically scan and find the correct wavelength.
This plug-and-play operation of self-tuning modules eliminates the additional time and complexity of deploying new nodes and DWDM links in optical access networks. Self-tuning is a game-changing feature that makes DWDM networks simpler and more affordable to upgrade, manage, and maintain.
Host-agnostic and interoperable
Another way to save time when installing new tunable modules is to let specialized host equipment perform the tuning procedure instead. However, that would require the module and host to be compatible with each other and thus “speak the same language” when performing the tuning procedure. This situation leads to vendor lock-in: providers and integrators cannot use host equipment or modules from a third party. This lock-in adds an extra layer of complexity and gives providers less flexibility to upgrade and innovate in their networks.

Self-tuning modules do not carry this trade-off because they are “host-agnostic”: they can plug into any host device as long as it accepts third-party 10G grey optics. Just as technicians can treat a self-tuning module as grey, any third-party host equipment can do the same. This benefit is possible because the module takes care of the tuning independently without relying on the host.
Furthermore, self-tuning standards will allow modules from different vendors to communicate with and tune each other. For example, the International Telecommunication Union has aimed to promote self-tuning with its ITU G.698.4 standard—widely known by its working name of G.metro. Most optical component and equipment developers have incorporated G.metro self-tuning standards into their products. Meanwhile, the SmartTunable Multi-Source Agreement (MSA) will further build on G.metro standards and promote interoperability. This MSA seeks to develop a set of common specifications for self-tuning features that will allow for interoperability among the full C-band DWDM modules of different vendors. EFFECT Photonics is collaborating in this MSA with other market leaders in tunable wavelength transceivers—II-VI Incorporated and Lumentum—as well as network carrier AT&T.
Enabling simpler and remote network management
Self-tuning lies at the core of EFFECT Photonics’ NarroWave technology. To implement our NarroWave procedures, we add a small low-frequency modulation signal to the tunable module and specific software that performs wavelength scanning and locking. Since this is a process controlled via software and the added signal is very small, it has no impact on these transceivers’ optical design and performance. It is simply an additional feature that the user can activate. The figure below gives a simplified overview of how NarroWave self-tuning works.
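As a purely conceptual companion to that overview, the short sketch below (Python) mimics the scan-and-lock idea: step through a DWDM channel grid, probe for a response on the low-frequency communication channel, and lock on the first channel that answers. The channel plan and all function names are illustrative assumptions, not the NarroWave implementation.

```python
# Conceptual scan-and-lock sketch (not actual self-tuning firmware).
# The module steps through DWDM channels and locks where the head-end responds.

from typing import Optional

C_BAND_CHANNELS_GHZ = [191_300 + 100 * i for i in range(48)]  # illustrative 100 GHz grid

def headend_responds(channel_ghz: int) -> bool:
    """Placeholder for probing the low-frequency modulation channel on this wavelength."""
    return channel_ghz == 193_100  # pretend the mux port expects 193.1 THz

def scan_and_lock() -> Optional[int]:
    for channel in C_BAND_CHANNELS_GHZ:
        # on real hardware, the laser would be tuned to `channel` here
        if headend_responds(channel):
            return channel          # lock: stay on this wavelength
    return None                     # no response found on any channel

locked = scan_and_lock()
print(f"Locked on {locked / 1000:.1f} THz" if locked else "No channel found")
```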

Since self-tuning software requires exchanging commands between modules across the network, it can also enable remote management tasks. For example, our NarroWave communication channel can also allow the operator’s headend module to have read-write control over certain memory registers of the tail-end module. This means that the operator can modify several module variables such as the wavelength channel, power levels, behavior when turning on/off, all from the comfort of the central office.
In addition, the NarroWave channel also allows the headend module to read diagnostic information from the remote module, such as transmitter power levels, alarms, warnings, or status flags. NarroWave then allows the user to act upon this information and change control limits, initiate channel tuning, or clear flags. These remote diagnostics and management features avoid the need for additional truck rolls and save even more operational expenses. They are especially convenient when dealing with very remote and hard-to-reach sites (e.g., an underground installation) that require expensive truck rolls. Some vendors have made remote installation and management of these modules even more accessible through smartphone app interfaces.
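To make the read-write idea concrete, here is a minimal sketch (Python) of how a head-end module might expose remote management of a tail-end module. The register names, addresses, and values are invented for illustration and do not describe the actual NarroWave register layout.

```python
# Hypothetical register-map sketch for remote management of a tail-end module.
# Names and values are invented for illustration only.

tail_end_registers = {
    "wavelength_channel": 34,    # current DWDM channel number
    "tx_power_dbm": 1.0,         # transmitter output power
    "alarm_flags": 0b0000_0010,  # bit 1 set: an illustrative temperature warning
}

def remote_read(register: str):
    """Head-end reads a tail-end register over the communication channel."""
    return tail_end_registers[register]

def remote_write(register: str, value) -> None:
    """Head-end writes a tail-end register, e.g. to retune or clear a flag."""
    tail_end_registers[register] = value

print("Alarms:", bin(remote_read("alarm_flags")))
remote_write("wavelength_channel", 40)   # retune the remote module from the office
remote_write("alarm_flags", 0)           # clear flags after acting on the diagnostics
print("New channel:", remote_read("wavelength_channel"))
```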
Takeaways
With these advantages, self-tuning modules can help rethink how optical access networks are built and maintained. They minimize the network’s time-to-service by eliminating additional installation tasks such as manual tuning and record verification and reducing the potential for human error. They are host-agnostic and can plug into any third-party host equipment. Furthermore, tunability standards will allow modules from different vendors to communicate with each other, avoiding compatibility issues and simplifying upgrade choices. Finally, the communication channels used in self-tuning can also become channels for remote diagnostics and management, simplifying network operation even further.
Self-tuning modules are bound to make optical network deployment and operation faster, simpler, and more affordable. In our next article, we will elaborate on how to customize self-tuning modules to better fit the needs of specific networks.
Tags: access networks, DWDM, fixed access networks, flexible, G.metro, Integrated Photonics, OPEX, optical transceivers, photonic integration, Photonics, pluggables, remote diagnostics, remote management, self-tuning, Smart Tunable MSA, Transceivers, tuneability
Remote PHY, a new architecture for fixed access networks
Cable networks, just like any other telecom network in the world, had to adapt to…
Cable networks, just like any other telecom network in the world, had to adapt to the rising demand for data. During the 90s and 00s, these requirements led to the rise of hybrid fiber-coaxial (HFC) networks: optical fibers travel from the cable company hub and terminate into optical nodes, while coaxial cable connects the last few hundred meters from the optical node to nearby houses. These connections were mainly asymmetric, with customers having several times more bandwidth to download data than to upload.
In the past decade, the way we use the Internet has changed. With the rise of social media, online gaming, video calls, and independent content creation such as video blogging, users need more upstream bandwidth than ever. These new requirements have led to quick progress in the DOCSIS standards that regulate data transmission over coaxial cables. For example, the latest DOCSIS 4.0 standards allow full-duplex transmission with symmetrical upstream and downstream channels. Meanwhile, fiber-to-the-home (FTTH) systems—with fiber arriving directly to the customer premises—are also becoming widespread and allowing Gigabit connections that are faster than HFC networks.
Despite these upgrades in the transport mediums and standards, cable networks have experienced surprisingly few upgrades in their architectures. They still rely on centralized architectures in which the network operator’s headend performs almost all functionalities of both the physical layer (PHY) and medium access control layer (MAC). This means that the headend must modulate and demodulate data, convert between analog and digital, perform error corrections, provide cable modem termination system (CMTS) services, and do some resource allocation and flow control.
However, as traffic demands grow, cable providers need to deliver more and more bandwidth to their optical nodes and customer premises. The headend equipment is getting more congested, consuming more power, and running out of ports to handle more fiber connections. This headend-centralized approach is struggling to scale up with increased demand. As often happens in the telecom sector, operators need to figure out ways to deliver more bandwidth to more customers without spending significantly more money.
The multiple benefits of distributed access architectures
These issues are the reason why cable providers are moving into distributed access architectures (DAA) that can spread functionalities across access network nodes and reduce the port congestion and equipment required at the headend. Remote PHY has become increasingly popular among providers because it separates the PHY layer from the traditional cable headend and pushes its functions (such as modulation or digital-analog conversion) into the optical fiber access nodes of the network.
This shift can enhance the performance, capacity, and reliability of fixed access networks by using more digital transmission. It also reduces the complexity and power consumption of the headend, which previously translated into higher costs due to the required cooling.
Furthermore, separating PHY and MAC layers makes it easier to virtualize headends and their network functions, which significantly cuts expenses due to the use of commercial-off-the-shelf (COTS) equipment compared to more specialized equipment. Virtualization also allows deploying new services and applications more quickly to users and migrating workloads to optimize power consumption and reduce energy costs. On top of that, Remote PHY achieves all of these benefits while keeping the existing HFC infrastructure!

Distributing digital-analog conversion
One of the most significant upgrades provided by Remote PHY networks is digital transmission deeper into the access network. In Remote PHY, data and video signals are kept in a digital format beyond the core headend, all the way into the upgraded optical node, where the signal is then converted into analog RF format. The fiber links between the headend and the access node that were previously analog will become digital fiber connections over Ethernet.
Since digital signals are more noise-tolerant than analog signals, the network benefits from this increased digital transmission length. Analog and radiofrequency signals now travel smaller distances to reach customer premises, so the signal accumulates less noise and boosts its signal-to-noise ratio. This improvement potentially allows the delivery of higher bandwidth signals to customers, including an increase in upstream bandwidth. Furthermore, the reliability of the link between the headend and the new optical node increases due to the greater robustness of digital links. These advances in reliability and performance make digital optics more affordable to buy and maintain than analog optics, reducing the costs for the network operators.
Let’s provide a very simplified example of how it all comes together. A network operator wants to increase their bandwidth and serve more customers, but their traditional centralized headend is already crowded with eight analog optical fiber links of 1Gbps each. There is no room to upgrade.
By installing Remote PHY technology in both the headend and the node, those analog links can be replaced by higher-capacity 10G digital links. The increased capacity at the headend allows for more optical node splits, while the new digital-to-analog conversion capability of the nodes allows them to take care of more coaxial splits, all to serve new areas and customers.
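A rough capacity comparison for this example is sketched below in Python. The port counts follow the paragraph above (eight 1G analog links upgraded to eight 10G digital links); everything else is arithmetic.

```python
# Rough capacity arithmetic for the Remote PHY upgrade example above.
# Port counts follow the text: eight 1G analog links become eight 10G digital links.

legacy_ports, legacy_rate_gbps = 8, 1
upgraded_ports, upgraded_rate_gbps = 8, 10

legacy_capacity = legacy_ports * legacy_rate_gbps
upgraded_capacity = upgraded_ports * upgraded_rate_gbps

print(f"Legacy headend capacity:   {legacy_capacity} Gbps")
print(f"Upgraded headend capacity: {upgraded_capacity} Gbps "
      f"({upgraded_capacity // legacy_capacity}x more room for node splits)")
```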
Using DWDM in Remote PHY
The tremendous progress in electronic and photonic integration made Dense Wavelength Division Multiplexing (DWDM) technology affordable and available to access networks, and this technology is quickly becoming a workhorse in this network domain. The availability of affordable DWDM transceivers made the deployment of Remote PHY even more powerful.
With Remote PHY improving the capacity of the headend, cable access networks had more bandwidth to serve more customers. However, some ways of using that bandwidth are more efficient than others. Operators can do extra node splits for customers by using their dark fibers and more grey transceivers, but that solution doesn’t scale as cost-effectively because each new fiber link must be installed and maintained. Another option is time division multiplexing (TDM), which multiplexes the data of different node channels into specific time slots. This solution allows operators to carry different node channels in a single fiber but has speed, latency, and security trade-offs. A single time-multiplexed channel cannot transmit at the same speed and latency as a dedicated channel, and the data of all node channels are in the same multiplexed optical link, so the nodes and their customers can’t have fully secure channels to themselves.
DWDM solutions, on the other hand, can avoid the speed and security trade-offs by multiplexing extra channels into different wavelengths of light. Instead of several TDM channels “splitting” the 10G bandwidth among themselves, the DWDM channels can each transmit at 10G. And since each WDM channel has its own wavelength, the channels are transmitted independently from each other, allowing users to have secure channels.
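The difference is easy to quantify. The sketch below (Python) compares the per-node bandwidth when several nodes share one fiber via TDM against the same nodes each getting a DWDM wavelength on that fiber; the eight-node figure is an illustrative assumption, while the 10G rate follows the text.

```python
# Per-node bandwidth: TDM sharing vs. DWDM wavelengths on one fiber.
# The eight-node count is an illustrative assumption; the 10G rate follows the text.

link_rate_gbps = 10
nodes = 8

tdm_per_node = link_rate_gbps / nodes     # nodes split one 10G channel into time slots
dwdm_per_node = link_rate_gbps            # each node gets its own 10G wavelength

print(f"TDM:  {tdm_per_node:.2f} Gbps per node (shared channel)")
print(f"DWDM: {dwdm_per_node:.2f} Gbps per node (dedicated wavelength)")
```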
Without sharing an optical link as in TDM, DWDM channels can also provide bidirectional communication (upstream and downstream) with less electronic processing than TDM channels. This feature is particularly beneficial for the modern Internet consumption patterns described earlier in the article.

Let’s go back to our previous example of the upgraded headend with 10G digital fiber links. Thanks to DWDM technology, a single 10G port on this headend can support additional optical nodes in the network more cost-effectively than ever. Let’s say a new apartment complex was built, and the network operator needs to deploy a new node to service this new building. In the past, this deployment would have required lighting up a dark fiber and setting up an extra fiber link or using TDM technology with lower data rates, latency, and security. With DWDM, the new node can simply be carried through a different wavelength channel in the already existing fiber link. And as we will describe in our next article, autotuneability in DWDM transceivers makes their setup and maintenance even more affordable.
Takeaways
Cable networks need to serve more customers than ever with more symmetric upstream and downstream capacity, and they need to achieve this without changing their existing fiber and coaxial infrastructure. These goals become possible with the onset of Remote PHY and more accessible DWDM transceivers. By separating the MAC and PHY layer, Remote PHY reduces the load on the cable headend and allows for more virtualization of network functions, making it easier and more affordable to upgrade and manage the network. Meanwhile, DWDM enables connections from the headend to the Remote PHY nodes that serve tens of customers with a single fiber.
Tags: architecture, autotuneability, DWDM, fixed access networks, Integrated Photonics, optical transceivers, photonic integration, Photonics, pluggables, remote phy, Transceivers, tuneability
Leveraging the Power of External Foundries in Photonics
Working with a world-class high-volume foundry makes scaling up from low to high volume as…
Working with a world-class high-volume foundry makes scaling up from low to high volume as easy as putting a purchase order in place. Instead of having to buy equipment, develop processes, and train operators over many years, a fabless photonics developer can leverage foundries who already have these capabilities and knowledge.
Thanks to wafer scale technology, electronics has successfully driven down the cost per transistor for many decades. This allowed the world to enjoy chips that became smaller with every generation and provided exponentially more computing power for the same amount of money. This scale-up process is how everyone now has a computer processor in their pocket that is millions of times more powerful than the most advanced computers of the 1960s that landed men on the moon.
This progress in electronics integration is a key factor that brought down the size and cost of coherent transceivers, packing more bits than ever into smaller areas. However, photonics has struggled to keep up with electronics, and now the optics dominate the optical transceiver’s cost. If the transceiver cost curve does not continue to decrease, it will be difficult to achieve the goal of making coherent technology more accessible across the entire optical network. This will make it more difficult to provide the services needed by cloud providers and the growing 5G access networks.
As we mentioned in our previous article, photonics manufacturing must move into wafer-scale territory to provide faster, more affordable, and sustainable coherent transmission.
However, most photonic chip developers don’t have the human and financial resources to own and operate their own wafer-scale photonic foundries. Fortunately, electronic chip developers have shown a more viable and cost-effective alternative: the fabless model.
A Lower Upfront Investment
Increasing the volume of photonics manufacturing is a big challenge. Some photonic chip developers choose to manufacture their chips in-house within their own fabrication facilities. This approach has some strong advantages, as it gives component manufacturers full control over their production process. By vertically integrating the whole chip design, manufacturing, and testing process within the same company, it’s often easier to try out new changes and innovations to the product.

However, this approach has its trade-offs. If a vertically-integrated chip developer wants to scale up in volume, they must make a hefty investment in more equipment and personnel to do so. They must develop new fabrication processes, which require not only money but also time to develop and to train personnel. Especially in the case of an optical transceiver market that is not as big as that of consumer electronics, it’s hard not to wonder whether that initial investment is cost-effective.
Electronics manufacturing had a similar problem during their 1970s boom, with smaller chip start-ups facing almost insurmountable barriers to enter the market because of the massive capital expenditure (CapEx) required. Electronics solved this problem by moving into what we call a fabless model, with companies designing and selling the chips but outsourcing the manufacturing.
For example, transceiver DSP chip developers design the chip, but then outsource the actual fabrication to a large-volume manufacturing plant (usually called a foundry). This business model works by leveraging the design, research, development, and distribution networks of the fabless company, and the specialized manufacturing skill of the chip foundry.

This model reduces the capital expenditure burden on the DSP developers, because instead of spending all the time and energy in scaling up their own facilities, they can work with a foundry that already made that investment and has the required manufacturing volume. In other words, instead of going through a more costly, time-consuming process, the troubles of scaling up are outsourced and (from the perspective of the fabless company) become as simple as putting a purchase order in place. Furthermore, the fabless model also allows companies to concentrate their R&D resources on the end market. If photonics is to move into million-scale volumes, this is likely the way forward.
Economies of Scale and Operating Expenses
Even if an optical transceiver developer could move forward with the CapEx required for its own large-scale fab and a vertically-integrated model, market demand and operational expenses become the next pain point. Transceivers are a B2B market, and their demand is significantly smaller than that of B2C consumer electronics. For example, LightCounting estimates that 55 million optical transceivers will be sold in 2021, while the International Data Corporation estimates that 1.4 billion smartphones will be sold in 2021. The latter figure is 25 times larger than that of the transceiver market.
The smaller demand of transceivers means that even if a vertically-integrated transceiver developer upgrades to a larger-scale manufacturing facility, it will likely have more manufacturing capacity than what their customers need. In such a situation, the facility could run at a reduced capacity. However, fabs are not only expensive to build, but also to operate. Unless they can be kept at nearly full utilization, operating expenses (OpEx) will become a drain on the finances of the facility owners.
This issue was something the electronics industry faced in the past, during the 1980s. Integrated electronics manufacturers had excess production capacity, and this situation paved the way for the fabless model too. The large-scale manufacturers ended up selling that excess capacity to smaller, fabless chip developers. Ultimately, the entire electronics industry relied increasingly on the fabless model, to the point where pure play foundries like the Taiwan Semiconductor Manufacturing Corporation (TSMC) appeared and focused entirely on manufacturing for other fabless companies.
In this scenario, everyone ended up winning. The foundries serviced multiple companies and could run their facilities at full capacity, while the fabless companies could outsource manufacturing and reduce their expenditures.
Working with the Best in the Business
The other advantage of pure play foundries is that they not only have state-of-the-art equipment but also the best personnel and technical expertise. Even if a vertically-integrated transceiver developer can make the required CapEx to scale up their facilities, developing processes and training people inevitably takes years, delaying the return on investment even further.
By working with an established and experienced foundry, fabless companies take advantage of the highly trained and experienced personnel of these facilities. These operators, technicians, and engineers have worked day in, day out with their equipment for years and have already developed processes that are finely tuned to that equipment. Thanks to their work, fabless transceiver developers do not have to reinvent the wheel and come up with their own processes, saving valuable time and many, many headaches.
Takeaways
To make transceivers more accessible to the world and connect more people together, transceiver developers need to reach production scales in the millions. At EFFECT Photonics, we believe that the way to achieve this goal is by having photonics follow the blueprint laid out by the electronics industry. Using a fabless model, we can reduce the capital expenditure and scale up more quickly and with fewer risks.
Working with a world-class high-volume foundry makes scaling up from low to high volume as easy as putting a purchase order in place. Instead of having to buy equipment, develop processes, and train operators over many years, a fabless photonics developer can leverage foundries who already have these capabilities and knowledge.
Tags: coherent, coherent optics, external foundries, foundries, Integrated Photonics, LightCounting, optical transceivers, photonic integration, Photonics, photonicwafer, pluggables, Transceivers, wafer
Wafer Scale Photonics for a Coherent Future
The advances in electronic and optical integration have brought down the size and cost of…
The advances in electronic and optical integration have brought down the size and cost of the coherent transceivers, packing more bits than ever into smaller areas. However, progress in the cost and bandwidth density of transceivers might slow down soon. Electronics has achieved amazing breakthroughs in the last two decades to continue increasing transistor densities and keeping Moore’s Law alive, but these achievements have come at a price. With each new generation of electronic processors, development costs increase and the price per transistor has stagnated.

Due to these developments, electronic digital signal processor (DSP) chips will continue to improve in efficiency and footprint, but their price will stagnate and with it the price of optical transceivers. Without further improvements in the cost per bit, it will be difficult to achieve the goal of making coherent technology more accessible across the entire optical network. This will make it more difficult to provide the device volume and services needed by the growing 5G networks and cloud providers.
To make coherent transceivers more accessible, photonics has to step up now more than ever. With the cost of DSPs stagnating, photonic integration must take the lead in driving down the costs and size of optical transceivers. Integrating all optical components on a single chip makes it easier to scale up in volume, reach these size and cost targets, and ultimately provide faster, more affordable, and sustainable coherent transmission.
Size Matters
Full photonic integration allows us to combine active optical elements like the laser and the amplifier with passive elements, all on the same chip and enclosed in a simple, non-hermetic package. This process enables a much smaller device than combining several individually packaged elements. For example, by integrating all photonic functions on a single chip, including lasers and optical amplifiers, EFFECT Photonics’ pluggable transceiver modules can achieve transmit power levels similar to those of line card transponder modules while still keeping the smaller QSFP router pluggable form factor, power consumption, and cost.

Full integration technology increases the transmit power by minimizing optical losses, thanks to more efficient optical modulators, fewer material losses compared to silicon, and the integration of the laser on the same chip as the rest of the optical components. The semiconductor optical amplifiers (SOAs) used in fully integrated devices can also outperform micro-EDFAs for transmission distances of at least 80 km.
The Economics of Scale
As innovative as full photonic integration can be, it will have little impact if it cannot be manufactured at a high enough volume to satisfy the demands of mobile and cloud providers and drive down the cost per device. Wafer scale photonics manufacturing demands a higher upfront investment, but the resulting high-volume production line drives down the cost per device. This economy-of-scale principle is the same one behind electronics manufacturing, and the same must be applied to photonics.
The more optical components we can integrate into a single chip, the more the price of each component can decrease. The more optical System-on-Chip (SoC) devices that can go into a single wafer, the more the price of each SoC can decrease. Researchers at the Technical University of Eindhoven and the JePPIX consortium have done some modelling to show how this economy of scale principle would apply to photonics. If production volumes can increase from a few thousand chips per year to a few million, the price per optical chip can decrease from thousands of Euros to mere tens of Euros. This must be the goal for the optical transceiver industry.
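A simple amortization model reproduces the shape of this argument. In the sketch below (Python), a fixed yearly cost for development and fab capacity is divided over the annual chip volume; the fixed-cost and per-chip-cost numbers are illustrative assumptions, not the TU Eindhoven / JePPIX model parameters.

```python
# Illustrative unit-cost model: fixed yearly costs amortized over annual chip volume.
# The 10 M EUR fixed cost and 5 EUR marginal cost are assumptions for illustration.

fixed_costs_eur = 10_000_000      # R&D, masks, fab capacity per year (assumed)
marginal_cost_eur = 5             # wafer material, test, packaging per chip (assumed)

for annual_volume in (5_000, 50_000, 500_000, 5_000_000):
    unit_cost = fixed_costs_eur / annual_volume + marginal_cost_eur
    print(f"{annual_volume:>9,} chips/year -> ~{unit_cost:8.0f} EUR per chip")
```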

Full Integration Streamlines Production and Testing
Integrating all optical components on a single chip can make manufacturing and testing more efficient, sustainable, and easier to scale up. The price of photonic devices is not dominated by the manufacturing of the semiconductor chips, but by the device assembly and packaging. Assembling and packaging a device by interconnecting multiple photonic chips together leads to an increase in assembly complexity and therefore additional costs.
This situation happens frequently with the laser component, which is often manufactured on a separate chip and then interconnected to the other optical components which are on another chip. Integrating all components—including the laser—on a single chip shifts the complexity from the assembly process to the much more efficient and scalable semiconductor wafer process. For example, it’s much easier to combine optical components on a wafer at a high-volume than it is to align different chips together in the assembly process, and that drives down the cost of the device.
Testing is another aspect that becomes more efficient and scalable when manufacturing at the wafer level. When faults are found earlier in the testing process, fewer resources and less energy are spent on processing defective chips. Ideally, testing should happen not only on the final, packaged transceiver but also in the earlier stages of PIC fabrication, such as after wafer processing or after cutting the wafer into smaller dies.


Full photonic integration enables earlier optical testing on the semiconductor wafer and dies. By testing the dies and wafers directly before packaging, manufacturers need only discard the bad dies rather than the whole package, which saves time, cost, and is more energy efficient and sustainable. For example, EFFECT Photonics reaps these benefits in its production processes. 100% of electrical testing on the PICs happens at the wafer level, and our unique integration technology allows for 90% of optical testing to also happen on the wafer.
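A small yield-economics sketch (Python) illustrates why catching bad dies before packaging saves money. The yield, die cost, and packaging cost values are assumed for illustration only and are not EFFECT Photonics figures.

```python
# Illustrative yield economics: testing at the wafer/die level vs. after packaging.
# Yield, die cost, and packaging cost are assumptions for illustration only.

dies = 1000
die_yield = 0.90                 # fraction of dies that are good (assumed)
die_cost_eur = 20                # cost of fabricating one die (assumed)
package_cost_eur = 80            # cost of assembling and packaging one device (assumed)

# Test only after packaging: every die, good or bad, gets a package before screening.
cost_late_test = dies * (die_cost_eur + package_cost_eur)

# Test at wafer level: bad dies are discarded before the packaging step.
good_dies = int(dies * die_yield)
cost_wafer_test = dies * die_cost_eur + good_dies * package_cost_eur

print(f"Good devices produced:     {good_dies}")
print(f"Cost, test after package:  {cost_late_test:,} EUR")
print(f"Cost, test at wafer level: {cost_wafer_test:,} EUR "
      f"(saving {cost_late_test - cost_wafer_test:,} EUR)")
```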
Takeaways
Photonics is facing the next stage of its development. There have been many great breakthroughs that have allowed us to take photonic devices from the lab to real-world use cases. However, to have the biggest possible impact in society, we need to manufacture photonic devices at very high volumes to make them accessible to everyone. This requires us to think about production volumes in the scale of millions of units. At EFFECT Photonics, we believe that the way to achieve this goal is by following the blueprint laid out by our friends in the electronics industry. By integrating all optical components on a single chip, we can shift more complexity from the assembly to the wafer, allowing production to scale more efficiently and sustainably. In our next article, we will elaborate on another key factor of the electronics blueprint: the fabless development model.
Tags: coherent, coherent optics, Density, Integrated Photonics, LightCounting, network operators, optical transceivers, photonic integration, Photonics, pluggables, Transceivers, Wafer Scale Photonics
Miniaturization: size and performance matter
Photonic integration will be a disruptive technology that will simplify network design and operation and…
Photonic integration will be a disruptive technology that will simplify network design and operation and reduce network operators’ capital and operating expenses.
Photonic and electronic integration have squeezed more performance into a smaller area and at lower power consumption, making coherent devices more cost-effective.
Full photonic integration like the one used by @EFFECTPhotonics helps even further to bring the performance and functions of big, expensive line card systems into a more affordable and sustainable pluggable form factor.
About EFFECT Photonics
EFFECT Photonics delivers highly integrated optical communications products based on its Dense Wavelength Division Multiplexing (DWDM) optical System-on-Chip technology. The key enabling technology for DWDM systems is full monolithic integration of all photonic components within a single chip and being able to produce these in volume with high yield at low cost. With this capability, EFFECT Photonics is addressing the need for low cost DWDM solutions driven by the soaring demand for high bandwidth connections between datacentres and back from mobile cell towers. EFFECT Photonics is headquartered in Eindhoven, The Netherlands, with additional R&D and manufacturing in South West UK, and a facility opening soon in the US.
http://www.effectphotonics.com
Tags: coherent, Coherent Detection, Direct Detect, Direct Detection, Integrated Photonics, Optical Coherent Technology, Optical Communication, optical networking, optical technology
How to Increase Bandwidth Density in Photonics
In the past, optical transceivers were too bulky, too expensive, and not efficient enough, so…
In the past, optical transceivers were too bulky, too expensive, and not efficient enough, so they could only be used for long-haul telecom networks or data centers with massive bandwidth requirements. Electronic and photonic integration broke this paradigm, miniaturizing optical transceivers to the size of a large USB stick and reducing their cost. Through these advances, optical component manufacturers could pack exponentially more bandwidth into a smaller transceiver area within the last decade. Thanks to this exponential progress, component manufacturers managed to keep up with the exponentially-growing worldwide demand for data.

However, the next wave of innovative network services—autonomous vehicles, the Internet of Things, Industry 4.0—demands even more bandwidth and imposes stricter requirements. Optical links that connect 5G controller units to the rest of the network must upscale from 10G to 100G. The links from metro networks to datacenters must upscale from 100G to 400G or 800G, and the links within datacenters must operate at Terabit speeds. These services require not only more bandwidth but also lower latencies and more reliability. Further electronic integration cannot keep up by itself; photonic integration must also continue pushing the envelope on bandwidth density.
In our previous articles and videos, we have already discussed one way to increase the bandwidth density through coherent transmission. Coherent technology can pack more bits into a laser signal because it encodes data in the laser light’s amplitude, phase, and polarization. The use of digital signal processing in these devices improves the reach and bandwidth of the signal even further. Coherent transmission allows network operators to reach higher bandwidths without upgrading their existing optical fiber infrastructure. Conserving and reusing existing fiber infrastructure is also a sustainability measure because it avoids spending additional energy and resources on manufacturing more fiber and laying it down on the roads.
However, another way to improve the bandwidth density is by moving to full photonic integration. Let’s use an analogy from electronics to explain what this means.

Before 2020, Apple built its computers from discrete components: the processor, memory, and other electronic components were manufactured on separate chips and then assembled into a single package. However, the interconnections between the different chips produced losses and incompatibilities that made the device less efficient. Starting with Apple’s M1 processor in 2020, these components are fully integrated on a single chip, making the device more energy efficient and reducing its footprint.
Full photonic integration achieves something similar to Apple’s approach, but with optical components instead of electronic components. By integrating all optical elements required for optical transmission (lasers, detectors, modulators, etc.) into a single System-on-Chip (SoC), we can minimize the losses and reduce the chip’s footprint, transmitting more bits over the same chip area. Furthermore, as we discussed in a previous article, a fully integrated system-on-chip reduces materials wastage while at the same time ensuring increased energy efficiency of the manufacturing, packaging, and testing process.
Coherent transmission and full photonic integration must synergize to achieve the highest possible bandwidth density. For example, EFFECT Photonics taped out a fully-integrated coherent optical System-on-Chip (SoC) last year. This device can push hundreds of Gigabits per second through a chip that fits on your fingertip, and we want to turn this breakthrough into a world-class coherent product. We believe it is the next step in packing exponentially more data into optical chips, allowing the world to keep up with the exponential increase in data for years to come.
In the coming weeks, we will discuss more photonic integration and how to implement it in larger volumes to make coherent transmission more widespread around the world.
Tags: bandwidth, coherent, coherent optics, Density, fiber networks, increase bandwidth density, Integrated Photonics, LightCounting, network operators, optical transceivers, photonic integration, Photonics, pluggables, Transceivers
Direct Detection or Coherent? EFFECT Photonics explains
Direct Detection and Coherent: what is the difference between these technologies? What are their benefits…
Direct Detection and Coherent: what is the difference between these technologies? What are their benefits and limitations?
In the following video, we give a short explanation about these two technologies.
First and foremost, Direct Detection and Coherent transmission encode information in light differently.
Direct Detection works by changing the amplitude of the light to transmit information. In this case, the achievable transmission distance depends on the speed of the data signal: at lower data rates, the transmission distance is more than 100 km, but as the speed increases, the distance gets exponentially shorter.
Coherent Optical Transmission uses three different properties of light: amplitude, phase, and polarization. This way, it is possible to increase the speed of the data signal, without compromising the transmission distance. With Coherent, it is possible to transmit information across long distances with very high data rates enabling operators to upgrade their networks without replacing the physical fiber infrastructure in the ground.
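A rough way to see why coherent carries more data per symbol is to count the degrees of freedom. The sketch below (Python) compares an intensity-only format with a dual-polarization coherent format; PAM4 and DP-16QAM are chosen here as common illustrative examples, not as a statement about any particular product.

```python
# Bits per symbol: intensity-only direct detection vs. dual-polarization coherent.
# PAM4 and DP-16QAM are common formats used here purely for illustration.

import math

def bits_per_symbol(levels_per_dimension: int, dimensions: int) -> int:
    """Bits carried per symbol = dimensions x log2(levels per dimension)."""
    return dimensions * int(math.log2(levels_per_dimension))

# Direct detect PAM4: amplitude only, 4 intensity levels, single polarization.
direct_detect = bits_per_symbol(levels_per_dimension=4, dimensions=1)

# Coherent DP-16QAM: 16 constellation points (amplitude + phase) on each of 2 polarizations.
coherent = bits_per_symbol(levels_per_dimension=16, dimensions=2)

print(f"Direct detect (PAM4):  {direct_detect} bits/symbol")
print(f"Coherent (DP-16QAM):   {coherent} bits/symbol")
```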
About EFFECT Photonics
EFFECT Photonics delivers highly integrated optical communications products based on its Dense Wavelength Division Multiplexing (DWDM) optical System-on-Chip technology. The key enabling technology for DWDM systems is full monolithic integration of all photonic components within a single chip and being able to produce these in volume with high yield at low cost. With this capability, EFFECT Photonics is addressing the need for low cost DWDM solutions driven by the soaring demand for high bandwidth connections between datacentres and back from mobile cell towers. EFFECT Photonics is headquartered in Eindhoven, The Netherlands, with additional R&D and manufacturing in South West UK, and a facility opening soon in the US.
http://www.effectphotonics.com
Tags: coherent, Coherent Detection, Direct Detect, Direct Detection, Integrated Photonics, Optical Coherent Technology, Optical Communication, optical networking, optical technology
Industrial Hardening: Coherent Goes Outdoors
The global optical transceiver market is expected to double in size by 2026, and coherent…
The global optical transceiver market is expected to double in size by 2026, and coherent pluggables will play a significant role in this growth as they will constitute roughly a third of those sales. While coherent is now an established solution in data center interconnects and long haul networks, it is also expected to start gaining ground in the access networks that connect mobile base stations and their controllers to the rest of the Internet. LightCounting forecasts that by 2025, coherent modules will generate 19% of all sales revenue, in an estimated market of $827 million, for transceivers in back-, mid-, and front-haul network segments. This is an increase in market share from 6% in 2021, as operators are expected to replace some of their direct detect modules with coherent ones in the coming years.

The numbers for coherent sales will only increase in the coming decade for two main reasons. First, electronic and photonic integration are making coherent pluggables smaller and economically viable (see one of our previous articles on the subject). Second, increasing data demands require access networks to grow their capacity beyond what direct detect can deliver. However, for coherent devices to become established in access networks, they must learn to live outdoors.
Controlled vs. uncontrolled environments
Coherent devices have traditionally lived in the controlled environments of data center machine rooms or network provider equipment rooms. These rooms have active temperature control, cooling systems, filters for dust and other particulates, airlocks, and humidity control. In these rooms, pluggable transceivers operate at a relatively stable temperature of around 50ºC, and they only need to survive in ambient temperatures within the commercial temperature range (C-temp) of 0 to 70ºC.

Figure 2: Temperature ratings required across different segments of a 5G network.
On the other hand, access networks feature uncontrolled outdoor environments at Mother Nature’s mercy and whims. A transceiver could sit at the top of an antenna, on a mountain range, inside a traffic tunnel, or out in the harsh winters of Northern Europe. Deployments at higher altitudes present additional problems: the air is less dense, so the cooling mechanisms of networking equipment work less efficiently, and the device cannot tolerate case temperatures as high as it could at sea level. Transceivers for these environments should operate in the industrial temperature (I-temp) range of -40 to 85ºC. Optics are also available in an extended temperature (E-temp) range, which tolerates the same maximum as I-temp devices (85ºC) but only goes down to -20ºC.

Table 1: Comparing the temperature ranges of different temperature hardening standards, including industrial and automotive/full military applications
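For reference, the short sketch below encodes the three ranges quoted in this article (C-temp 0 to 70ºC, E-temp -20 to 85ºC, I-temp -40 to 85ºC) and checks whether a site's expected case temperatures fit a given rating; it is a simple illustration, and the other standards in the table are not included.

```python
# Minimal sketch: the temperature ranges quoted in this article, and a helper
# to check whether a deployment's expected case temperatures fit a rating.
RANGES_C = {
    "C-temp": (0, 70),     # commercial
    "E-temp": (-20, 85),   # extended
    "I-temp": (-40, 85),   # industrial
}

def fits(rating: str, min_site_temp: float, max_site_temp: float) -> bool:
    low, high = RANGES_C[rating]
    return low <= min_site_temp and max_site_temp <= high

# A rooftop cell site that swings between -30 and 80 degrees C:
print(fits("C-temp", -30, 80))  # False
print(fits("I-temp", -30, 80))  # True
```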
The initial investment has a longer-term payoff
Few things are more expensive for a network operator than a product that cannot perform reliably in the uncontrolled environments of 5G deployments. With more bandwidth and computing power moving towards the network edges, coherent transceivers must endure potentially extreme outdoor conditions. Since I-temp transceivers are more robust, they will survive longer, and operators will ultimately buy fewer of them than they would C-temp modules. The initial, somewhat more expensive investment in I-temp transceivers therefore pays off in the long run.
In addition, the growth of Internet-of-Things (IoT) applications makes reliability even more important. A network connection drop could be disastrous in many critical and business services, such as medical and self-driving car applications.
The importance of standards
Making an I-temp transceiver means that every internal component—the integrated circuits, lasers, photodetectors—must also be I-temp compliant. EFFECT Photonics has already developed I-temp pluggable transceivers with direct detection, so we understand what standards must be followed to develop temperature-hardened coherent devices.
For example, our optical transceivers comply with the Telcordia GR-468 qualification, which describes how to test optoelectronic devices for reliability under extreme conditions. Our manufacturing facilities include capabilities for the temperature cycling and reliability testing needed to match Telcordia standards, such as temperature cycling ovens and chambers with humidity control.

Figure 3: Examples of necessary verification tests for transceivers that will operate in harsh temperatures
EFFECT Photonics transceivers also comply with the SFF-8472 standard that describes the Digital Diagnostics Monitoring (DDM) required for temperature-hardened transceivers to compensate for temperature fluctuations. Our proprietary NarroWave technology even allows network operators to read such device diagnostics remotely, avoiding additional truck rolls to check the devices on the field. These remote diagnostics give operators a full view of the entire network’s health from the central office.
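As an illustration of what DDM looks like from the host side, here is a minimal sketch that decodes the module temperature from the SFF-8472 diagnostics page, assuming the commonly documented layout (a signed 16-bit value in units of 1/256 ºC at byte offsets 96 and 97 of the page at I2C address 0xA2); the I2C read itself is left out.

```python
# Minimal sketch: decoding the internally measured temperature exposed by
# SFF-8472 Digital Diagnostics Monitoring. Assumes the common layout where the
# diagnostics page stores temperature as a signed 16-bit value in 1/256 C units
# at byte offsets 96-97; reading the raw page over I2C is left abstract.
import struct

def ddm_temperature_celsius(diag_page: bytes) -> float:
    """diag_page: raw bytes read from the transceiver's diagnostics page (0xA2)."""
    raw, = struct.unpack_from(">h", diag_page, 96)  # big-endian signed 16-bit
    return raw / 256.0

# Example: raw value 0x1A80 corresponds to 26.5 C
page = bytearray(256)
page[96:98] = (0x1A, 0x80)
print(ddm_temperature_celsius(bytes(page)))  # 26.5
```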
Takeaways: going from Direct-Detect to Coherent I-temp
One of our central company objectives is to bring the highest-performing optical technologies, such as coherent detection, all the way to the network edge. However, achieving this goal doesn’t just require us to focus on the optical or electronic side but also on meeting the mechanical and temperature reliability standards required to operate coherent devices outdoors. Fortunately, EFFECT Photonics can take advantage of its previous experience and knowledge in I-temp qualification for direct-detect devices as it prepares its new coherent product line.
If you would like to download this article as a PDF, then please click here.
Tags: access networks, coherent, coherent optics, commercial temperature, fiber networks, I-temp, industrial temperature, Integrated Photonics, LightCounting, NarroWave, network operators, optical transceivers, photonic integration, Photonics, pluggables, Transceivers
Optical System-on-Chip: bringing scalable and affordable DWDM to the edges of the network
While electrical System-on-Chips have been around for some time, EFFECT Photonics is the first company…
While electrical System-on-Chips have been around for some time, EFFECT Photonics is the first company in the world to introduce a full optical System-on-Chip – combining all the optical elements needed for optical networking onto a single die.
EFFECT Photonics’ System-on-Chip technology focuses on dense wavelength division multiplexing (DWDM), which is regarded as an important innovation in optical networks. DWDM is scalable, transparent and enables provision of high-bandwidth services. It is the technology of choice for many networking applications today. Using many different wavelengths of light to route data makes these systems more efficient, flexible, and cost-effective to build, own, and operate compared to single-channel, point-to-point links. Thanks to our high-density electrical interconnect and packaging technology, the optical System-on-Chip can be assembled for volume manufacture at a low cost.
In this short animation, we show you how EFFECT Photonics takes a platform approach to designing optical System-on-Chips using our extensive library of experimentally verified optical building blocks. This library contains all the optical components needed to build a monolithically integrated optical engine. By combining different building blocks, our photonic integrated circuit (PIC) designers create a new optical System-on-Chip that can be used in the next-generation optical transceivers we are developing. This System-on-Chip is then combined with simple packaging to deliver highly integrated optical communication products.
About EFFECT Photonics
EFFECT Photonics delivers highly integrated optical communications products based on its Dense Wavelength Division Multiplexing (DWDM) optical System-on-Chip technology. The key enabling technology for DWDM systems is full monolithic integration of all photonic components within a single chip and being able to produce these in volume with high yield at low cost. With this capability, EFFECT Photonics is addressing the need for low cost DWDM solutions driven by the soaring demand for high bandwidth connections between datacentres and back from mobile cell towers. EFFECT Photonics is headquartered in Eindhoven, The Netherlands, with additional R&D and manufacturing in South West UK, and a facility opening soon in the US.
http://www.effectphotonics.com
Tags: DWDM, Integrated Photonics, optical networking, optical technology, photonic integrated chip, photonic integration, PIC, programmable optical system-on-chip
Integrating Line Card Performance and Functions into a Pluggable
The optical transceiver market is expected to double in size by 2025, and coherent optical…

Integration enables line card performance in a pluggable format
Advances in photonic integration change the game and can bring high performance and transmit power to the smallest pluggable transceiver form factors. By integrating all photonic functions on a single chip, including lasers and optical amplifiers, EFFECT Photonics’ pluggable transceiver modules can achieve transmit power levels similar to those of line card transponder modules while keeping the smaller QSFP router pluggable form factor and its lower power consumption and cost.
Modern ASICs can fit electronic functions in a pluggable form factor
As important as optical performance is, though, pluggable transceivers also needed improvements on the electronic side. Traditionally, line card systems not only had better optical performance but also broader and more advanced electronic functionalities, such as digital signal processing (DSP), advanced forward error correction (FEC), encryption, and advanced modulation schemes. These features are usually implemented on electronic application-specific integrated circuits (ASICs). ASICs benefit from the same CMOS process improvements that drive progress in consumer electronics. Each new CMOS process generation can fit more transistors into a single chip. Ten years ago, an ASIC for line cards had tens of millions of transistors, while the 7nm ASIC technology used in modern pluggables has more than five billion transistors. This progress in transistor density allows ASICs to integrate more electronic functions than ever into a single chip while still making the chip smaller. Previously, every function—signal processing, analog/digital conversion, error correction, multiplexing, encryption—required a separate ASIC, but now they can all be consolidated on a single chip that fits in a pluggable transceiver.
Electronic integration enables line card system management in a pluggable form factor
The advancements in CMOS technology also enable the integration of system-level functions into a pluggable transceiver. Previously, functions such as in-band network management and security, remote management, autotuneability, or topology awareness had to live on the shelf controller or in the line card interface, but that is no longer the case. Thanks to the advances in electronic integration, we are closer than ever to a full, open transponder on a pluggable that operates as part of the optical network.
These programmable, pluggable transceivers provide more flexibility than ever to manage access networks. For example, a pluggable transceiver could run in a mode that prioritizes high performance, or in one that prioritizes low power consumption by using simpler, less power-hungry signal processing and error correction features. Therefore, these pluggables can provide high-end performance in the smallest form factor, or low and mid-range performance at lower power consumption than embedded line card transponders.
EFFECT Photonics has already started implementing these kinds of system-management features in its products. For example, our direct-detect SFP+ transceiver modules feature NarroWave technology, which allows customers to monitor and control remote SFP+ modules from the central office without making any hardware or software changes in the field. NarroWave is agnostic of the vendor equipment, data rate, or protocol of the in-band traffic.
Pluggable transceivers also provide the flexibility of multi-vendor interoperability. High-performance line card transponders have often prioritized proprietary features to increase performance at the expense of interoperability. The new generations of pluggables don’t need to make this trade-off: they can operate in standards-compatible modes for interoperability or in high-performance modes that use proprietary features.
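To make the performance/power trade-off concrete, here is a purely illustrative sketch of how a host might describe and select such operating modes; the mode names, modulation formats, FEC labels, power figures, and API are all hypothetical and do not describe an EFFECT Photonics product or any standard interface.

```python
# Purely illustrative sketch: two hypothetical operating profiles for a
# programmable coherent pluggable. All names and numbers are invented for
# illustration; they do not describe a real product or standard API.
from dataclasses import dataclass

@dataclass
class OperatingMode:
    name: str
    modulation: str        # proprietary high-order vs. standards-compatible
    fec: str               # stronger FEC generally costs more power
    max_power_w: float

MODES = [
    OperatingMode("high-performance", "proprietary 64QAM", "strong SD-FEC", 21.0),
    OperatingMode("low-power",        "standard 16QAM",    "lighter FEC",   15.0),
]

def pick_mode(power_budget_w: float) -> OperatingMode:
    """Choose the highest-performance mode that fits the host's power budget."""
    feasible = [m for m in MODES if m.max_power_w <= power_budget_w]
    if not feasible:
        raise ValueError("no mode fits the available power budget")
    return max(feasible, key=lambda m: m.max_power_w)

print(pick_mode(power_budget_w=16).name)  # low-power
```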
Takeaways
Coherent technology was originally reserved for premium long-distance links where performance is everything. Edge and access networks could not use this higher-performance technology because it was too bulky and expensive. Photonic integration technology like the one used by EFFECT Photonics helps bring these big, proprietary, and expensive line card systems into a router pluggable form factor. This technology has squeezed more performance into a smaller area at lower power consumption, making the device more cost-effective. Combined with the advances in electronic integration for ASICs, the goal of a fully programmable transponder in a pluggable is practically a reality. Photonic integration will be a disruptive technology that simplifies network design and operation and reduces network operators’ capital and operating expenses.
The impact of this technological improvement in pluggable transceivers was summarized deftly by Kevin Wollenweber, VP of Product Management for Cisco’s Routing Portfolio: “Technology advancements have reached a point where coherent pluggables match the QSFP-DD form factor of grey optics, enabling a change in the way our customers build networks. 100G edge and access optimized coherent pluggables will not only provide operational simplicity, but also scalability, making access networks more future proof.”
If you would like to download this article as a PDF, then please click here.
Tags: 100G, access network, ASIC, CFP, coherent optics, CoherentPIC, DSP, edge network, electronic integration, fully integrated, Fully Integrated PICs, Integrated Photonics, line card, metro access, miniaturization, NarroWave, optical transceivers, photonic integration, PIC, pluggable, pluggable transceiver, QSFP, SFP+, small form factor, sustainability telecommunication
Building a Sustainable and Green Future with Fully Integrated PICs
The World Needs Greener Telecommunications The demand for data and other digital services is rising…
The World Needs Greener Telecommunications
The demand for data and other digital services is rising exponentially. From 2010 to 2020, the number of Internet users worldwide doubled, and global internet traffic increased 12-fold. By 2022, internet traffic will likely double yet again. This traffic growth drives network energy consumption upwards, and mobile wireless networks will be a significant contributor despite 5G being the most energy-aware mobile communication standard ever released. In a March 2020 report, Ericsson states that some communications service providers estimate that their energy consumption will double as they roll out 5G to meet the increasing traffic demands.
Keeping up with the increasing data demand of future networks in a sustainable way will require operators to deploy more optical technologies, such as photonic integrated circuits (PICs), in their access and fronthaul networks. By replacing the inefficient copper and coaxial links that use electrical signals, operators can provide their customers and mobile sites with more data while reducing the required power per bit.
Integration Impacts Energy Efficiency and Optical Losses
Lately, we have seen many efforts across the electronics industry to further increase integration at the component level. For example, moving towards greater integration of components in a single chip has yielded significant efficiency benefits in electronic processors. Apple’s M1 processor integrates all electronic functions in a single system-on-chip (SoC) and consumes a third of the power of the discrete-component processors used in previous generations of Apple computers.

Photonics is also achieving greater efficiency gains by following a similar approach to integration. The more active and passive optical components (lasers, modulators, detectors, etc.) manufacturers can integrate on a single chip, the more energy they can save since they avoid coupling losses between discrete components and allow for interactive optimization.
Transceiver manufacturers have three choices in terms of design:
- Discrete build – The transceiver components are manufactured through separate processes. The components are then assembled into a single package using different types of interconnections.
- Partial integration – Some components are manufactured and integrated on the same chip, but others are manufactured or sourced separately. For example, the transceiver laser can be manufactured separately on a different material and then interconnected to a chip with the other transceiver components.
- Full integration – All the components are manufactured on a single chip from a single material simultaneously.
While discrete builds and partial integration have advantages in managing the yield of the individual components, full integration has fewer losses and more efficient packaging and testing processes, making it a much better fit in terms of sustainability.
The interconnects required to couple discrete components result in electrical and optical losses that must be compensated with higher transmitter power and more energy consumption. The more interconnects between different components, the higher the losses become. Discrete builds will have the most interconnect points and highest losses. Partial integration reduces the number of interconnect points and losses compared to discrete builds. If these components are made from different optical materials, the interconnections will suffer additional losses.
On the other hand, full integration uses a single chip of the same base material. It does not require lossy interconnections between chips, minimizing optical losses and significantly reducing the energy consumption and footprint of the transceiver device.
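As a back-of-the-envelope illustration of why interconnects matter, the sketch below simply multiplies an assumed per-interface coupling loss by the number of chip-to-chip interfaces; the loss value and interface counts are placeholders, not measured figures.

```python
# Back-of-the-envelope sketch: optical coupling losses add up with every
# chip-to-chip interface, and every added dB must be compensated with extra
# transmit power. The numbers below are hypothetical placeholders.
LOSS_PER_INTERCONNECT_DB = 1.5   # assumed coupling loss per chip-to-chip interface

def extra_tx_power_db(num_interconnects: int) -> float:
    """Extra transmit power needed to compensate interconnect losses."""
    return num_interconnects * LOSS_PER_INTERCONNECT_DB

print(extra_tx_power_db(4))  # discrete build, e.g. 4 interfaces -> 6.0 dB
print(extra_tx_power_db(1))  # partial integration, e.g. 1 interface -> 1.5 dB
print(extra_tx_power_db(0))  # full integration, no interfaces -> 0.0 dB
```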

More Integration Saves Scarce Resources
When it comes to energy consumption and sustainability, we shouldn’t just think about the energy the PIC consumes but also the energy and carbon footprint of fabricating the chip and assembling the transceiver. To give an example from the electronics sector, a Harvard and Facebook study estimated that for Apple, manufacturing accounts for 74% of their carbon emissions, with integrated circuit manufacturing comprising roughly 33% of Apple’s carbon output. That’s higher than the emissions from product use.

Chip manufacturing processes consume an immense amount of resources. A typical electronics semiconductor fab uses between 2 and 4 million gallons of water per day (roughly 7 to 15 million liters). Chips often need rare metals such as gold, cobalt, and silver, or rare-earth elements such as erbium, yttrium, neodymium, and thulium. Mining and processing these materials are among the activities that produce the most waste and environmental damage. To make things worse, we only recycle a small fraction of these materials. For example, the Global Enabling Sustainability Initiative estimates that high-tech products use 320 tons of gold every year and that less than 15% of the gold in e-waste is recovered for reuse.
The choice of integration approach has implications for the complexity of fabricating and packaging the transceivers, which in turn affects their sustainability. For example, a discrete build requires a hermetically sealed gold box package to protect the interconnections from moisture. This packaging process consumes more energy to create an airtight seal and uses up scarce gold. Partial integration reduces the number of interconnects and requires a smaller gold box, which reduces the cost and complexity of the packaging. However, this approach still requires separate fabrication processes for its components and interconnects, which increases the energy consumption of the assembly process. Fully integrated transceivers, on the other hand, can do away with the gold box entirely and minimize energy consumption by avoiding the extra fabrication and packaging steps.
Early Testing Avoids Wastage
Testing is another aspect of the manufacturing process that impacts sustainability. The earlier faults are found in the testing process, the less material and energy is wasted on processing defective chips. Ideally, testing should happen not only on the final, packaged transceiver but also at earlier stages of PIC fabrication, such as after wafer processing or after the wafer is cut into smaller dies.

Discrete and partial integration approaches do more of their optical testing on the finalized package, after connecting all the different components together. Should just one of the components fail testing, the complete packaged transceiver must be discarded, potentially leading to a massive waste of materials, as nothing can be "fixed" or reused at this stage of the manufacturing process.
Full integration enables earlier optical testing on the semiconductor wafer and dies. By testing the dies and wafers directly before packaging, manufacturers need only discard the bad dies rather than the whole package, which saves valuable energy and materials.
For example, EFFECT Photonics reaps these benefits in its production processes. 100% of electrical testing on the PICs happens at the wafer level, and our unique integration technology allows for 90% of optical testing to also happen on the wafer.
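A simple way to see the material savings is to compare how much packaged hardware gets scrapped when a bad die is caught only after packaging versus at wafer level; the die yield and wafer size in the sketch below are assumed numbers for illustration only.

```python
# Illustrative sketch: scrap generated when faulty dies are caught only after
# packaging vs. at wafer level. Yield and wafer size are assumed examples.
DIE_YIELD = 0.90          # assumed fraction of good dies on a wafer
DIES_PER_WAFER = 1000     # assumed

bad_dies = int(DIES_PER_WAFER * (1 - DIE_YIELD))

# Package-level testing only: every bad die has already been built into a full
# transceiver package, so the whole assembly is scrapped.
scrapped_packages_late_test = bad_dies

# Wafer-level testing: bad dies are discarded before packaging, so no packaged
# assemblies are wasted on them.
scrapped_packages_wafer_test = 0

print(f"bad dies per wafer: {bad_dies}")                               # 100
print(f"packages scrapped (test after packaging): {scrapped_packages_late_test}")   # 100
print(f"packages scrapped (test at wafer level): {scrapped_packages_wafer_test}")   # 0
```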
Full Integration Drives Sustainability
While communication networks have become more energy-efficient, further technological improvements must continue decreasing the cost of energy per bit to keep up with the exponential increase in Internet traffic. At the same time, a greater focus is being placed on sustainability and responsible manufacturing. All the photonic integration approaches we have touched on will play a role in reducing the energy consumption of future networks. However, only full integration is in a position to make a significant contribution to the goals of sustainable, environmentally friendly manufacturing. A fully integrated system-on-chip minimizes optical losses, transceiver energy consumption, and materials wastage, while also improving the energy efficiency of the manufacturing, packaging, and testing processes.
If you would like to download this article as a PDF, then please click here.
Tags: energy efficency, Fully Integrated PICs, Green Future, Integrated Photonics, photonic integration, sustainability telecommunication, Sustainable
EFFECT Photonics to open facility in the US
The world is moving towards tunability. The combination of tunable lasers and dense wavelength division…
The world is moving towards tunability. The combination of tunable lasers and dense wavelength division multiplexing (DWDM) allows datacom and telecom industries to expand their network capacity without increasing their existing fiber infrastructure. Furthermore, the miniaturization of coherent technology into pluggable transceiver modules has finally enabled the widespread implementation of IP over DWDM solutions. Self-tuning algorithms have also made DWDM solutions more widespread by simplifying their installation and maintenance. Hence, many application cases—metro transport, data center interconnects, and even future access networks—are moving towards coherent tunable pluggables.
The market for coherent tunable transceivers will explode in the coming years, with LightCounting estimating that annual sales will double by 2026. Telecom carriers and especially data center providers will drive the market demand, upgrading their optical networks with 400G, 600G, and 800G pluggable transceiver modules that will become the new industry standards.

This increase in transceiver demand means that component and equipment vendors will also need more tunable lasers to build those transceivers. However, through several recent acquisitions and mergers, the transceiver market is consolidating into fewer companies that develop these high-performance tunable lasers, modulators, and receivers internally. This situation reduces the laser supply on the open market and makes it harder for independent component and equipment manufacturers to source lasers and other optical components for their pluggable and co-packaged systems. These trends point towards a market need for new independent providers of integrated tunable laser assemblies (ITLAs).
Same Laser Performance, Smaller Package
As the industry moves towards packing more and more transceivers on a single router faceplate, tunable lasers need to maintain performance and power while moving to smaller footprints and lower power consumption and cost. Due to the faceplate density requirements for data center applications, transceiver power consumption is arguably the most critical factor in this use case. In fact, power consumption is the main obstacle preventing pluggables from becoming a viable solution for a future upgrade to Terabit speeds. Since lasers are the second biggest power consumers in the transceiver module, laser manufacturers faced a paradoxical task. They must manufacture laser units that are small and energy-efficient enough to fit QSFP-DD and OSFP pluggable form factors while maintaining the laser performance.
Fortunately, these ambitious spec targets became possible thanks to improved photonic integration technology. The original 2011 ITLA standard from the Optical Internetworking Forum (OIF) was 74mm long by 30.5mm wide. By 2015, most tunable lasers shipped in a micro-ITLA form factor that cut the original ITLA footprint in half. In 2021, the nano-ITLA form factor designed for QSFP-DD and OSFP modules has once again cut the micro-ITLA footprint almost in half. The QSFP-DD modules that house the full transceiver are smaller (78mm by 20mm) than the original ITLA form factor. Stunningly, tunable laser manufacturers achieved this size reduction without impacting laser purity and power.
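Putting the quoted dimensions into rough numbers, the sketch below computes the footprints implied by the figures above; the micro- and nano-ITLA areas are taken as successive halvings, as described, rather than exact specification values.

```python
# Rough arithmetic on the laser form-factor shrink described above. The ITLA
# and QSFP-DD dimensions are the ones quoted in the article; the micro-/nano-
# ITLA areas are approximated as successive halvings, as the text describes.
itla_area_mm2 = 74 * 30.5            # original 2011 OIF ITLA
micro_itla_mm2 = itla_area_mm2 / 2   # roughly half the ITLA footprint
nano_itla_mm2 = micro_itla_mm2 / 2   # roughly half again
qsfp_dd_mm2 = 78 * 20                # the complete pluggable module

print(f"ITLA:       {itla_area_mm2:.0f} mm^2")   # 2257 mm^2
print(f"micro-ITLA: {micro_itla_mm2:.0f} mm^2")  # ~1129 mm^2
print(f"nano-ITLA:  {nano_itla_mm2:.0f} mm^2")   # ~564 mm^2
print(f"QSFP-DD:    {qsfp_dd_mm2:.0f} mm^2")     # 1560 mm^2 for the full module
```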

Introducing our new Coherent Product Line Manager – Vladi Veljanovski
Last year, EFFECT Photonics taped out the world’s first fully integrated coherent optical transceiver chip.…
Last year, EFFECT Photonics taped out the world’s first fully integrated coherent optical transceiver chip. We are now ready to turn this engineering breakthrough into a product. To lead this process, EFFECT Photonics has hired Vladimir Veljanovski as our first Coherent Product Line Manager. To give you more insight into our new colleague and what drives him, we asked him a few questions.
Tell us a little more about yourself and your background
I was born in Macedonia and moved to Germany for my university studies, graduating in 2006 with an engineering degree in Communication and Information Technology from the Technical University of Munich. I did my master’s thesis with the R&D department of Siemens Fixed Networks (later Coriant) and started working there after graduation. I began by simulating optical transmission systems and extracting system design rules. Soon after, I moved to the lab, which definitely attracted me more, as it was closer to reality and to work done in the field.
Around that time, we started testing our first 40G coherent product in the lab. Coherent was a major technology change, and customers were struggling to believe and buy into it. Hence, R&D needed to go to the customers and demonstrate the new technology. That’s when I discovered my preference for working closer to the customers. I remained in this customer-facing role until 2014, doing lots of introductory work for coherent technology in terrestrial and submarine networks. In 2014, I moved to Switzerland to work for Huawei in a technical sales role as the Sales Project Manager responsible for the Swisscom transport network. In four exciting years there, we renewed the network by introducing 200G and 400G coherent products into the network backbone and 100G coherent into the metro network.
After the Huawei experience, I wanted more of a network-level overview, which was hard to get when working with the network of a telecom carrier. Thus, I spent the next two years working in the enterprise environment. The company was smaller and more easily manageable. There I could see the whole network as an entity, not just the optics but also the switching, the firewalls, the network management, etc. And in June 2021, I joined EFFECT Photonics. I am thrilled with this transition, even though we have a lot of work to bring the product out as soon as possible.
What do you find exciting about coherent technology?
Coherent technology is not new; it has been around for a while. However, it is incredible that those complicated benchtop systems I built in the lab back in 2009, with all their discrete components that were cumbersome to connect, can now fit into something the size of a sugar cube thanks to photonic integration. And these systems perform even better than they did back then. You see that progress and think, “man, that’s awesome!”
Coherent technology was reserved for premium long-distance links where performance is everything. Metro and access networks could not use this higher-performance technology since on the one hand, it was too bulky and expensive, and on the other, the bandwidth demand was yet to grow.
Photonic integration technology like EFFECT Photonics’ helped bring these big, proprietary, and expensive systems into a router pluggable form factor. This tech has squeezed more performance into a smaller area at lower power consumption, making the device more cost-effective. Photonic integration will be a disruptive technology that simplifies network design and operation and reduces the capital and operating expenses of network operators.
What do you find exciting about working at EFFECT Photonics?
I love working with smart people with a good team spirit. I get to study and learn new things, and I continue to grow and challenge myself, which makes all the work even more fun.
I figured out that by working at EFFECT Photonics, I would be surrounded by great professionals who have worked on photonic integration for ten years or more and know and identify with this technology very well. It’s a fascinating and challenging environment for me to be in.
On that note, photonic integration technology was a big reason why I chose to work at EFFECT Photonics. I was amazed to see how a company of this size, with relatively few people, can work on such a potentially disruptive technology. I get to work on new and exciting technology, and at the same time, I can get to know almost everyone in the company. I clearly feel the “scale-up winds” blowing in my daily work.
Having worked in both R&D and sales of coherent products with network carriers and enterprise providers, Vladi possesses deep insight into coherent technology itself and how to sell it to customers. At EFFECT Photonics, we are excited to work with him, and we look forward to what he can do to turn our technology into a fantastic coherent product.
Tags: coherent, Integrated Photonics, photonic integration
EFFECT Photonics senior management team complete
EFFECT Photonics is pleased to announce the completion of its leadership team with the recent…
EFFECT Photonics is pleased to announce the completion of its leadership team with the recent hiring of Roberto Marcoccia in the position of Chief Development & Strategy Officer as part of the growth and expansion journey of the company. This follows the appointment of Dr. Sophie De Maesschalck as Chief Financial Officer, and Harald Graber as Chief Commercial Officer last year.
Based in California, Roberto joins EFFECT Photonics with over 20 years of leadership experience in optical communications product development. Most recently, he was Vice President of Engineering at Juniper Networks, responsible for leading and executing many of the company’s optical strategies. Prior to Juniper, Roberto was the founding leader at StrataLight Communications, responsible for all aspects of development, including StrataLight’s long-haul DWDM products deployed in networks around the world, and for the successful acquisition of StrataLight by Opnext. At EFFECT Photonics, Roberto will lead and grow the global development organization and drive the advancement and execution of the growth strategy.
“I’m absolutely delighted to have joined the EFFECT Photonics team. We are in a unique position to leverage the power of integrated photonics and our vertically integrated capabilities to meet the massive shifts happening in the industry with the evolution of 5G and large-scale computation moving to the edge of the network. It’s an exciting time for the industry and the company, and I look forward to working closely with James and the rest of the senior leadership team to capitalize on the many opportunities before us.” – Roberto Marcoccia
Sophie is an experienced technology scale-up CFO with over 15 years’ experience in technology incubation, international financing, and mergers and acquisitions. She has been Managing Partner at Incubaid, an incubator based in Belgium where she supported a portfolio of start-ups in the field of storage, virtualization and decentralization, several of which were exited to large, international corporations. She started her career at Bain & Company. With her PhD in optical telecommunications and her financial experience, Sophie is perfectly placed to help EFFECT Photonics scale up the finance infrastructure, drive the expansion strategy and manage the equity financing rounds.
“I am thrilled to be part of this amazing team at EFFECT Photonics. Over the last months, we have made tremendous progress and we have welcomed our new Series-C investors on board. Now we have strengthened our senior management team, and with the trust and support of our investors, I’m convinced EFFECT Photonics has a bright future ahead and we can truly make a difference with our ground-breaking technology.” – Sophie De Maesschalck
During his more than 20 years in optical networking, Harald has held several senior leadership roles, most recently as Vice President of Customer Innovation – Product & Solution Marketing at Coriant. Prior to that, he lived and worked in Europe as well as the US and China for Lucent Technologies, Alcatel-Lucent, and Nokia Siemens Networks. In his role as CCO at EFFECT Photonics, he is responsible for leading the global sales, product line management, and marketing communications teams, as well as building the underlying infrastructure to serve the growing global customer base.
“Joining EFFECT Photonics has been one of the most rewarding decisions so far. The team is driven to make a huge impact in the world of semiconductors and telecommunication. With the completion of the senior management team we also have formulated a clear path to excel in the market and further deepen our technological competence in the most critical fields. It is great to see our customers and partners joining us in discussions about the future and formulating breakthrough developments.” – Harald Graber
The senior management team now comprises:
- James Regan – Chief Executive Officer
- Dr. Boudewijn Docter – President & co-founder
- Tim Koene – Chief Technology Officer & co-founder
- Dr. Paul Rosser – Chief Operations Officer
- Harald Graber – Chief Commercial Officer
- Roberto Marcoccia – Chief Development & Strategy Officer
- Dr. Sophie De Maesschalck – Chief Financial Officer
“We now have the world-class leadership team we need to realize the ambition that we have for our existing and next gen product portfolios. These are exciting times for us as we further scale up our R&D and production efforts as well as expanding our geographic footprint.” – James Regan
For more information on the senior management team, please click here.
Tags: #expansion, #invest, #leadership, #photonics, Integrated Photonics
An Introduction to Quantum Key Distribution
While the word “quantum” has only started trending in the technology space during the last…
While the word “quantum” has only started trending in the technology space during the last decade, many past technologies already relied on our understanding of the quantum world, from lasers to MRI imaging, electronic transistors, and nuclear power. The reason quantum has become so popular lately is that researchers have become increasingly better at manipulating individual quantum particles (light photons, electrons, atoms) in ways that weren’t possible before. These advances allow us to harness more explicitly the unique and weird properties of the quantum world. They could launch yet another quantum technology revolution in areas like sensing, computation, and communication.
What’s a Quantum Computer?
The power of quantum computers comes chiefly from the superposition principle. A classical bit can only be in a 0 or 1 state, while a quantum bit (qubit) can exist in a combination, or superposition, of the 0 and 1 states. When one measures the qubit, it collapses into just one of these states, and each possible outcome has a specific probability of occurring.
While two classical bits can only be in one of four combinations at a time, two qubits can exist in all four combinations simultaneously before being observed. Therefore, qubits can hold more information than classical bits, and the amount of information they can hold grows exponentially with each additional qubit. Twenty qubits can already hold about a million values simultaneously (2^20), and 300 qubits can hold more values than there are particles in the observable universe (2^300).

However, to harness this potential processing power, we must understand that probabilities in quantum mechanics do not work like conventional probabilities. The probability we learned about in school allowed only for numbers between 0 and 1. On the other hand, probabilities in quantum mechanics behave as waves with amplitudes that can be positive or negative. And just like waves, quantum probabilities can interfere, reinforcing each other or cancelling each other out.

Quantum computers solve computational problems by harnessing such interference. The quantum algorithm choreographs a pattern of interference where the combinations leading to a wrong answer cancel each other out. In contrast, the combinations leading to the correct answer reinforce each other. This process gives the computer a massive speed boost. We only know how to create such interference patterns for particular computational problems, so for most problems, a quantum computer will only be as fast as a conventional computer. However, one problem where quantum computers are much faster than classical ones is finding the prime factors of very large numbers.
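To make the amplitude picture concrete, here is a minimal numpy illustration (not a full quantum simulator) of a single qubit: amplitudes can be negative, they interfere when operations are combined, and measurement probabilities are the squared magnitudes of the amplitudes.

```python
# Minimal illustration of quantum amplitudes and interference.
# A qubit state is a 2-vector of amplitudes; measurement probabilities are the
# squared magnitudes of those amplitudes.
import numpy as np

H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)   # Hadamard gate: creates an equal superposition

zero = np.array([1.0, 0.0])            # the |0> state

superposed = H @ zero                   # amplitudes (~0.707, ~0.707): a 50/50 outcome
back = H @ superposed                   # amplitudes interfere: the |1> paths cancel

print(np.round(superposed**2, 3))       # [0.5 0.5] -> measurement probabilities
print(np.round(back**2, 3))             # [1. 0.]  -> negative amplitude cancelled |1>
```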
How Quantum Computers Threaten Conventional Cryptography
Today’s digital society depends heavily on securely transmitting and storing data. One of the oldest and most widely used methods to encrypt data is called RSA (Rivest-Shamir-Adleman, after the surnames of the algorithm’s designers). RSA encrypts messages with a key derived from the product of two very large prime numbers, and only someone who knows those two primes can decode the message.
RSA security relies on a mathematical principle: multiplying two large numbers is computationally easy, but the opposite process—figuring out what large numbers were multiplied—is extremely hard, if not practically impossible, for a conventional computer. However, in 1994 mathematician Peter Shor proved that an ideal quantum computer could find the prime factors of large numbers exponentially more quickly than a conventional computer and thus break RSA encryption within hours or days.
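To see the principle at a toy scale, the sketch below builds an RSA key pair from two tiny primes; the numbers are deliberately small and insecure, chosen only to show that encryption and decryption hinge on knowing the prime factors of n.

```python
# Toy RSA sketch (tiny, insecure numbers, purely to show the principle):
# multiplying the primes is trivial, but an attacker only sees n and must
# factor it to recover the private key. Requires Python 3.8+ for pow(e, -1, m).
p, q = 61, 53                  # two (toy) primes kept secret
n = p * q                      # 3233, part of the public key
phi = (p - 1) * (q - 1)        # 3120
e = 17                         # public exponent, coprime with phi
d = pow(e, -1, phi)            # private exponent: modular inverse of e mod phi

message = 42
ciphertext = pow(message, e, n)        # encrypt with the public key (n, e)
recovered = pow(ciphertext, d, n)      # decrypt with the private key d
print(ciphertext, recovered)           # 2557 42

# Breaking it means factoring n: trivial for 3233, infeasible for the
# ~2000-bit n used in practice without something like Shor's algorithm.
```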
While practical quantum computers are likely decades away from implementing Shor’s algorithm with enough performance and scale to break RSA or similar encryption methods, the potential implications are terrifying for our digital society and our data safety.
In combination with private key systems like AES, RSA encrypts most of the traffic on the Internet. Breaking RSA means that emails, online purchases, medical records, company data, and military information, among many others, would all be more susceptible to attacks from malicious third parties. Quantum computers could also crack the digital signatures that ensure the integrity of updates to apps, browsers, operating systems, and other software, opening a path for malware.
This security threat has led to heavy investments in new quantum-resistant encryption. In addition, existing private-key systems used in the enterprise telecom sector, such as AES-256, are already considered quantum resistant. However, even if these methods are secure now, there is no guarantee that they will remain secure in the future. Someone might discover a way to crack them, just as happened with RSA.
Quantum Key Distribution and its Impact on the Telecom World
Given these risks, arguably the most secure way to protect data and communications is to fight quantum with quantum: protect your data from quantum computer hacking by using security protocols that harness the power of quantum physics. That is what quantum key distribution (QKD) does. QKD uses qubits to generate a secret cryptographic key protected by the phenomenon of quantum state collapse. If an attacker tries to eavesdrop and learn information about the key, they will irreversibly distort the qubits. The sender and receiver will see this distortion as errors in their qubit measurements and know that their key has been compromised.
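As a toy illustration of this idea, the sketch below simulates a simplified BB84-style exchange (not a production QKD implementation): random basis choices let the two parties distill a shared key, and an intercept-and-resend eavesdropper introduces roughly 25% errors in the sifted key, which reveals the attack.

```python
# Toy BB84-style sketch (illustrative only, not a real QKD implementation).
# Alice encodes random bits in random bases; Bob measures in random bases.
# An eavesdropper who measures and resends disturbs the qubits, which shows
# up as errors in the positions where Alice's and Bob's bases matched.
import random

def measure(bit, prep_basis, meas_basis):
    """If bases match, the bit is read correctly; otherwise the result is random."""
    return bit if prep_basis == meas_basis else random.randint(0, 1)

def bb84_error_rate(n=2000, eavesdrop=False):
    alice_bits  = [random.randint(0, 1) for _ in range(n)]
    alice_bases = [random.choice("+x") for _ in range(n)]
    bob_bases   = [random.choice("+x") for _ in range(n)]

    channel = []
    for bit, basis in zip(alice_bits, alice_bases):
        if eavesdrop:  # Eve measures in a random basis and resends in that basis
            eve_basis = random.choice("+x")
            bit, basis = measure(bit, basis, eve_basis), eve_basis
        channel.append((bit, basis))

    bob_bits = [measure(bit, basis, bb) for (bit, basis), bb in zip(channel, bob_bases)]

    # Sift: keep positions where Alice's and Bob's bases matched, then estimate errors.
    sifted = [(a, b) for a, b, ab, bb in zip(alice_bits, bob_bits, alice_bases, bob_bases) if ab == bb]
    return sum(a != b for a, b in sifted) / len(sifted)

print(f"error rate, no eavesdropper:   {bb84_error_rate():.2f}")                # ~0.00
print(f"error rate, with eavesdropper: {bb84_error_rate(eavesdrop=True):.2f}")  # ~0.25
```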
