Insights
Leveraging Electronic Ecosystems in Photonics
(First published on 2nd November 2022, updated 4th September 2024)
Thanks to wafer-scale technology, electronics manufacturing has driven down the cost per transistor for many decades. This allowed the world to enjoy chips that became smaller with every generation and provided exponentially more computing power for the same amount of money. This scaling is how everyone now carries a processor in their pocket that is millions of times more powerful than the most advanced computers of the 1960s that landed men on the moon.
This progress in electronics integration is a key factor that brought down the size and cost of coherent transceivers, packing more bits than ever into smaller areas. However, photonics has struggled to keep up with electronics, with the photonic components dominating the cost of transceivers. If the transceiver cost curve does not continue to decrease, it will be challenging to achieve the goal of making them more accessible across the entire optical network.
To trigger a revolution in the use of photonics worldwide, photonics needs to become as easy to use as electronics. In the words of our Chief Technology Officer, Tim Koene-Ong:
“We need to buy photonics products from a catalog as we do with electronics, have datasheets that work consistently, be able to solder it to a board and integrate it easily with the rest of the product design flow.”
Tim Koene-Ong, Chief Technology Officer.
This goal requires photonics manufacturing to leverage existing electronics manufacturing processes and ecosystems. Photonics must embrace fabless models, chips that can survive soldering steps, and electronic packaging and assembly methods.
The Advantages of a Fabless Model
Increasing the volume of photonics manufacturing is a big challenge. Some photonic chip developers manufacture their chips in-house within their fabrication facilities. This approach has substantial advantages, giving component manufacturers complete control over their production process.
However, this approach has its trade-offs when scaling up. If a vertically integrated chip developer wants to scale up in volume, they must make a hefty capital expenditure (CAPEX) in additional equipment, develop new fabrication processes, and hire and train more personnel. Fabs are expensive not only to build but also to operate. Unless they can be kept at nearly full utilization, operating expenses (OPEX) also drain the facility owner’s finances.
Especially for an optical transceiver market that is much smaller than consumer electronics, it’s hard not to wonder whether that initial investment is cost-effective. For example, LightCounting research estimates that 173 million optical Ethernet transceivers were sold in 2023, while the International Data Corporation estimates that 1.17 billion smartphones were sold in the same year. The latter figure is almost seven times the size of the entire optical transceiver market.
Electronics manufacturing faced a similar problem during its boom in the 1970s and 1980s, with smaller chip start-ups facing almost insurmountable barriers to market entry because of the massive CAPEX required. Meanwhile, the large-scale electronics foundries had excess production capacity that drained their OPEX. These foundries sold their excess capacity to the smaller chip developers, who became fabless. In this scenario, everyone ended up winning: the foundries serviced multiple companies and could run their facilities at full capacity, while the fabless companies could outsource manufacturing and reduce their expenditures.
This fabless model, with companies designing and selling the chips but outsourcing the manufacturing, should also be the way forward for photonics. Instead of going through a costly, time-consuming scale-up process in-house, a fabless photonics developer outsources those troubles, which (from its perspective) become as simple as placing a purchase order. Furthermore, the fabless model allows photonics developers to concentrate their R&D resources on the end market. This is the simplest way forward if photonics is to move into million-scale volumes.
Adopting Electronics-Style Packaging
While packaging, assembly, and testing are only a small part of the cost of electronic systems, the reverse happens with photonic integrated circuits (PICs). Researchers at the Technical University of Eindhoven (TU/e) estimate that for most Indium Phosphide (InP) photonics devices, the cost of packaging, assembly, and testing can reach around 80% of the total module cost.
To become more accessible and affordable, the photonics manufacturing chain must become more automated and standardized. The lack of automation makes manufacturing slower and prevents data collection that can be used for process control, optimization, and standardization.
One of the best ways to reach these automation and standardization goals is to learn from electronics packaging, assembly, and testing methods that are already well-known and standardized. After all, building a special production line is much more expensive than modifying an existing production flow.
There are several ways in which photonics packaging, assembly, and testing can be made more affordable and accessible. Below are a few examples:
- Passive alignment: Connecting optical fiber to PICs is one of the most complicated packaging and assembly problems for optical devices. The best alignments are usually achieved via active alignment processes, in which feedback from the PIC is used to fine-tune the fiber position. Passive alignment processes do not use such feedback; they cannot achieve the best possible alignment but are much more affordable.
- BGA-style packaging: Ball-grid array packaging has grown popular among electronics manufacturers. It places the chip connections under the chip package, allowing more efficient use of space in circuit boards, a smaller package size, and better soldering.
- Flip-chip bonding: A process where solder bumps are deposited on the chip in the final fabrication step. The chip is flipped over and aligned with a circuit board for easier soldering.
These might be novel technologies for photonics developers who have started implementing them in the last five or ten years. However, the electronics industry embraced these technologies 20 or 30 years ago. Making these techniques more widespread will make a massive difference in photonics’ ability to scale up and become as available as electronics.
Making Photonics Chips That Can Survive Soldering
Soldering remains another tricky step for photonics assembly and packaging. Photonics device developers usually custom order a PIC, then wire and die bond to the electronics. However, some elements in the PIC cannot handle soldering temperatures, making it difficult to solder into an electronics board. Developers often must glue the chip onto the board with a non-standard process that needs additional verification for reliability.
This goes back to the issue of process standardization. Current PICs often use different materials and processes from electronics, such as optical fiber connections and metals for chip interconnects, that cannot survive a standard soldering process.
Adopting BGA-style packaging and flip-chip bonding techniques will make it easier for PICs to survive this soldering process. There is ongoing research and development worldwide, including at EFFECT Photonics, to make fiber coupling and other PIC aspects compatible with these electronic packaging methods.
PICs that can handle being soldered to circuit boards will allow the industry to build optical subassemblies that can be made more readily available in the open market and can go into trains, cars, or airplanes.
Conclusion
Photonics must leverage existing electronics ecosystems and processes to scale up and have a greater global impact. Our Chief Technology Officer, Tim Koene-Ong, explains what this means:
“Photonics technology needs to integrate more electronic functionalities into the same package. It needs to build photonic integration and packaging support that plays by the rules of existing electronic manufacturing ecosystems. It needs to be built on a semiconductor manufacturing process that can produce millions of chips in a month.
As soon as photonics can achieve these larger production volumes, it can reach price points and improvements in quality and yield closer to those of electronics. When we show the market that photonics can be as easy to use as electronics, that will trigger a revolution in its worldwide use.”
Tim Koene-Ong, Chief Technology Officer.
This vision is one of our guiding lights at EFFECT Photonics, where we aim to develop optical systems that can impact the world in many different applications.
Tags: automotive sector, BGA style packaging, compatible, computing power, cost per mm, efficient, electronic, electronic board, electronics, fabless, Photonics, risk, scale, soldering, transistor, wafer scale
The Internet of Things: Enhanced Connectivity Through Photonics
The Internet of Things (IoT) is transforming industries by enabling devices to communicate, collect, and exchange data seamlessly. This interconnected ecosystem relies on robust, high-speed communication technologies to function effectively. Photonics, which involves the use of light to transmit data, plays a critical role in enhancing the connectivity of IoT devices. This article explores the different ways photonics enables IoT, focusing on high-speed data transmission, energy-efficient sensing, and the development of smart, self-powered devices.
High-Speed Data Transmission
The advent of 5G technology, with its promise of ultra-fast speeds and low latency, has enabled many IoT applications, turning mundane devices into smart, interconnected components of broader digital ecosystems. However, this means more devices contribute to the already vast data streams flowing through global networks.
Photonics significantly enhances data transmission speeds in IoT networks, making it possible to handle the massive data volumes generated by connected devices. Optical fibers, which use light to transmit data, offer higher bandwidth and lower latency compared to traditional copper wires. This is essential for applications that require real-time data processing and rapid communication, such as autonomous vehicles, smart grids, and industrial automation.
Energy-Efficient Sensing
Photonics also plays a vital role in the development of energy-efficient sensing technologies for IoT. Photonic sensors offer high sensitivity and accuracy with much lower power consumption. This is particularly important for applications in remote or hard-to-reach locations where replacing batteries is impractical.
Photonic integrated circuits (PICs) combine multiple optical components on a single chip, enabling faster and more efficient data transmission. For instance, advances in PIC technology have enabled high-performance LiDAR systems, which use laser pulses to create detailed 3D maps of environments. These systems are crucial for applications like autonomous driving, where precise and real-time data is necessary.
Smart, Self-Powered Devices
One of the most exciting developments in photonics for IoT is the creation of self-powered devices. These devices use ambient light to generate the energy needed for their operation, eliminating the need for batteries. This not only reduces maintenance costs but also minimizes environmental impact by decreasing the number of disposable batteries used.
For example, Ambient Photonics is a company that specializes in developing and manufacturing low-light energy harvesting solar cells. The company’s solar cells are thin, efficient, and capable of capturing energy from a wide range of light conditions, including dim indoor settings where traditional solar cells are less effective. Their technology is designed to generate power from low-light environments, such as indoor lighting, which can be used to power various electronic devices without the need for batteries or frequent recharging. This makes them suitable for powering IoT devices, remote sensors, and other small electronics that are often used indoors.
Conclusion
Photonics is enabling the connectivity and functionality of IoT networks. By enabling high-speed data transmission, energy-efficient sensing, and the development of smart, self-powered devices, photonics addresses many of the challenges faced by traditional electronic technologies.
Tags: ambient light, Autonomous Vehicles, connectivity, EFFECT Photonics, energy harvesting, energy-efficient sensing, high-speed data transmission, Internet of Things, IoT, LiDAR systems, OPDs, Optical fibers, OPVs, organic optoelectronics, organic photodetectors, organic photovoltaic cells, photonic integrated circuits, Photonics, PICs, self-powered devices, smart devices, sustainable technology
The Lasers Powering AI
Artificial Intelligence (AI) networks rely on vast amounts of data processed and transferred at incredible speeds to function effectively. This data-intensive nature requires robust infrastructure, with lasers playing a pivotal role. AI networks depend on two primary processes: AI training and AI inference. Training involves feeding large datasets into models to learn and make predictions, while inference uses these trained models to make real-time decisions.
Lasers are crucial in enhancing the efficiency and speed of these processes by enabling high-speed data transfer within data centers and across networks. This article explores the various ways lasers power AI networks, the specific requirements for data center connections, and their broader impact on AI infrastructure.
Requirements for Data Center Connections
The connectivity requirements for data centers supporting AI workloads are stringent. They must handle enormous volumes of data with minimal latency and high reliability. The primary requirements for lasers in these environments include:
- High Bandwidth: AI applications, especially those involving large language models and real-time data processing, require interconnects that can support high data rates.
- Low Latency: Minimizing latency is crucial for AI inference tasks that require real-time decision-making. Lasers enable faster data transfer compared to traditional electronic interconnects, significantly reducing the time it takes for data to travel between nodes.
- Energy Efficiency: AI data centers consume vast amounts of power. Integrated photonics combines optical components on a single chip, reducing power consumption while maintaining high performance.
- Scalability: As AI workloads grow, the infrastructure must scale accordingly. Lasers provide the scalability needed to expand data center capabilities without compromising performance.
Laser Arrays in Data Center Interconnects
In 2022, Intel Labs demonstrated an eight-wavelength laser array fully integrated on a silicon wafer. Milestones like this are essential for optical transceivers because laser arrays enable multi-channel transceivers that are more cost-effective when scaling up to higher speeds.
Let’s say we need an intra-DCI link with 1.6 Terabits/s of capacity. There are three ways we could implement it:
- Four modules of 400G: This solution uses existing off-the-shelf modules but has the largest footprint. It requires four slots in the router faceplate and an external multiplexer to merge these into a single 1.6T channel.
- One module of 1.6T: This solution will not require the external multiplexer and occupies just one plug slot on the router faceplate. However, making a single-channel 1.6T device has the highest complexity and cost.
- One module with four internal channels of 400G: A module with an array of four lasers (and thus four different 400G channels) will only require one plug slot on the faceplate while avoiding the complexity and cost of the single-channel 1.6T approach.
Multi-laser array and multi-channel solutions will become increasingly necessary to increase link capacity in coherent systems. They will not need more slots in the router faceplate while simultaneously avoiding the higher cost and complexity of increasing the speed with just a single channel.
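As a quick back-of-the-envelope illustration of these trade-offs, the sketch below compares the three 1.6 Tbps options described above. The slot and channel counts come from the text; the rest is purely illustrative, not a product specification.

```python
# Rough comparison of the three 1.6 Tbps implementation options described above.
options = [
    {"name": "4 x 400G modules",        "slots": 4, "channels": 4, "gbps_per_channel": 400,  "external_mux": True},
    {"name": "1 x 1.6T single channel", "slots": 1, "channels": 1, "gbps_per_channel": 1600, "external_mux": False},
    {"name": "1 module, 4-laser array", "slots": 1, "channels": 4, "gbps_per_channel": 400,  "external_mux": False},
]

for o in options:
    capacity = o["channels"] * o["gbps_per_channel"]
    print(f"{o['name']}: {capacity} Gbps total, {o['slots']} faceplate slot(s), "
          f"external mux needed: {o['external_mux']}")
```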
Broader Impact on AI Infrastructure
Beyond data centers, lasers are transforming the broader AI infrastructure by enabling advanced applications and enhancing network efficiency. In the context of edge computing, where data is processed closer to the source, lasers facilitate rapid data transfer and low-latency processing. This is essential for applications like autonomous vehicles, smart cities, and real-time analytics, where immediate data processing is critical.
Lasers also play a significant role in the integration of AI with 5G and future 6G networks. The high-frequency demands of these networks require precise and high-speed optical interconnects, which lasers provide.
Conclusion
Lasers are at the core of modern AI networks, providing the high-speed, low-latency, and energy-efficient interconnects needed to support data-intensive AI workloads. From enhancing data center connectivity to enabling advanced edge computing and network integration, lasers play a pivotal role in powering AI. As AI continues to evolve and expand into new applications, the reliance on laser technology will only grow, driving further innovation and efficiency in AI infrastructure.
Tags: 5G Networks, AI inference, AI networks, AI training, Autonomous Vehicles, bandwidth, data centers, data transfer, Edge computing, EFFECT Photonics, energy efficiency, high-speed connectivity, Indium Phosphide lasers, lasers, Low latency, optical interconnects, photonic processors, Scalability, silicon photonics, Smart Cities, VCSELs
AI at the Network Edge
Artificial Intelligence (AI) can impact several different industries by enhancing efficiency, automation, and data processing capabilities. The network edge is another area where AI can deliver such improvements. Edge computing, combined with AI, enables data processing closer to the source of data generation, leading to reduced latency, improved real-time data analytics, and enhanced security. This article delves into the potential of AI at the network edge, exploring its applications, training and inference processes, and future impact.
The Potential of AI at the Network Edge
According to market research, the global market for edge computing technologies is projected to grow from $46.3 billion in 2022 to $124.7 billion by 2027.
AI at the network edge involves deploying AI models and algorithms closer to where data is generated, such as in IoT devices, sensors, and local servers. This proximity allows for real-time data processing and decision-making, which is critical for applications that require immediate responses. Industries such as manufacturing, healthcare, retail, and smart cities are prime beneficiaries of edge AI. For instance, in manufacturing, edge AI can monitor machinery in real-time to predict and prevent failures, enhancing operational efficiency and reducing downtime. In healthcare, edge AI enables real-time patient monitoring, providing immediate alerts to medical staff about critical changes in patient conditions.
The integration of AI at the edge also addresses the growing need for data privacy and security. By processing data locally, sensitive information does not need to be transmitted to centralized cloud servers, reducing the risk of data breaches and ensuring compliance with data protection regulations. Moreover, edge AI reduces the bandwidth required for data transfer, as only the necessary information is sent to the cloud, optimizing network resources and reducing costs.
Training and Inference at the Edge
Training AI models involves feeding large datasets into algorithms to enable them to learn patterns and make predictions. Traditionally, this process requires significant computational power and is often performed in centralized data centers. However, advancements in edge computing and model optimization techniques have made it possible to train AI models at the edge.
One of the key techniques for enabling AI training at the edge is model optimization. This includes methods such as pruning, quantization, and low-rank adaptation, which reduce the size and complexity of AI models without compromising their performance. Pruning involves removing less important neurons or layers from a neural network, while quantization reduces the precision of the model’s weights, making it more efficient in terms of memory and computational requirements. Low-rank adaptation focuses on modifying only a subset of parameters, which is particularly useful for fine-tuning pre-trained models on specific tasks.
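As a minimal illustration of two of these optimization techniques, the sketch below prunes and dynamically quantizes a toy model. It assumes PyTorch as the framework; the model and the 30% pruning ratio are arbitrary choices for illustration, not recommendations from this article.

```python
# Minimal sketch of pruning and dynamic quantization, assuming PyTorch.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# A small stand-in for a model destined for an edge device.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

# Pruning: zero out the 30% smallest-magnitude weights of the first layer.
prune.l1_unstructured(model[0], name="weight", amount=0.3)
prune.remove(model[0], "weight")  # make the pruned weights permanent

# Quantization: store Linear-layer weights as int8, trading a little
# precision for lower memory and compute at inference time.
quantized_model = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)
print(quantized_model)
```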
Inference, the process of making predictions using a trained AI model, is especially critical at the edge. It requires lower computational power compared to training and can be optimized for low-latency and energy-efficient operations. Edge devices equipped with AI inference capabilities can analyze data in real-time and provide immediate feedback. For example, in retail, edge AI can facilitate frictionless checkout experiences by instantly recognizing and processing items, while in smart cities, it can manage traffic and enhance public safety by analyzing real-time data from surveillance cameras and sensors.
The Role of Pluggables in the Network Edge
Optical transceivers are crucial in developing better AI systems by facilitating the rapid, reliable data transmission these systems need to do their jobs. High-speed, high-bandwidth connections are essential to interconnect data centers and supercomputers that host AI systems and allow them to analyze a massive volume of data.
In addition, optical transceivers are essential for facilitating the development of artificial intelligence-based edge computing, which entails relocating compute resources to the network’s periphery. This is essential for facilitating the quick processing of data from Internet-of-Things (IoT) devices like sensors and cameras, which helps minimize latency and increase reaction times.
Pluggables that fit this new AI era must be fast, smart, and adapt to multiple use cases and conditions. They will relay monitoring data back to the AI management layer in the central office. The AI management layer can then program transceiver interfaces from this telemetry data to change parameters and optimize the network.
Takeaways
By bringing AI closer to the source of data generation, edge computing enables real-time analytics, reduces latency, enhances data privacy, and optimizes network resources. Edge AI can foster innovation in areas such as autonomous vehicles, where real-time data processing is crucial for safe navigation and decision-making. In the healthcare sector, edge AI will enable more sophisticated patient monitoring systems, capable of diagnosing and responding to medical emergencies instantly. Moreover, edge AI will play a role in mobile networks, providing the necessary infrastructure to handle the massive amounts of data generated by connected devices.
Tags: AI edge, AI models, AI network, bandwidth optimization, data generation, data privacy, Edge computing, EFFECT Photonics, future impact, Healthcare, inference, IoT devices, local servers, Manufacturing, model optimization, operational efficiency, real-time analytics, real-time data processing, reduced latency, security, sensors, Smart Cities, training, transformative power
How Photonics Enables AI Networks
Artificial Intelligence (AI) networks have revolutionized various industries by enabling tasks such as image recognition, natural language processing, and autonomous driving. Central to the functioning of AI networks are two processes: AI training and AI inference. AI training involves feeding large datasets into algorithms to learn patterns and make predictions, typically requiring significant computational resources. AI inference, on the other hand, is the process of using trained models to make predictions on new data, which requires efficient and fast computation. As the demand for AI capabilities grows, the need for robust, high-speed, and energy-efficient interconnects within data centers and between network nodes becomes critical. This is where photonics comes into play, offering significant advantages over traditional electronic methods.
Enhancing Data Center Interconnects
Data centers are the backbone of AI networks, housing the vast computational resources needed for both training and inference tasks. As AI models become more complex, the data traffic within and between data centers increases exponentially. Traditional electronic interconnects face limitations in terms of bandwidth and power efficiency. Photonics, using light to transmit data, offers a solution to these challenges.
Photonics enables the integration of optical components like lasers, modulators, and detectors on a single chip. This technology allows for high-speed data transfer with significantly lower power consumption compared to electronic interconnects. These advancements are crucial for handling the data-intensive nature of AI workloads.
Enabling High-Speed AI Training and Inference
AI training requires the processing of vast amounts of data, often necessitating the use of distributed computing resources across multiple data centers. Photonic interconnects facilitate this by providing ultra-high bandwidth connections, which are essential for the rapid movement of data between computational nodes. The high-speed data transfer capabilities of photonics reduce latency and improve the overall efficiency of AI training processes.
This high transfer speed and capacity also play a critical role in AI inference, particularly in scenarios where real-time processing and high throughput are essential. For example, in a network featuring autonomous vehicles, AI inference must process data from sensors and cameras in real-time to make immediate decisions. For other ways in which photonics plays a role in autonomous vehicles, please read our article on LIDAR and photonics.
Into Network Edge Applications
The network edge refers to the point where data is generated and collected, such as IoT devices, sensors, and local servers. Deploying AI capabilities at the network edge allows for real-time data processing and decision-making, reducing the need to send data back to centralized data centers. This approach not only reduces latency but also enhances data privacy and security by keeping sensitive information local.
Photonics enables edge AI by providing the high-speed, low-power interconnects required for efficient data processing at the edge. For some use cases, the network edge could benefit from upgrading its existing direct detect or grey links to 100G DWDM coherent. However, the industry needs more affordable and power-efficient transceivers and DSPs specifically designed for coherent 100G transmission in edge and access networks. By realizing DSPs that are co-designed with the optics, adjusted for reduced power consumption, and industrially hardened, the network edge will have coherent DSP and transceiver products adapted to its needs. This is a path EFFECT Photonics believes strongly in, and we talk more about it in one of our previous articles.
Conclusion
Photonics is transforming the landscape of AI networks by providing high-speed, energy-efficient interconnects that enhance data center performance, enable faster AI training and inference, and support real-time processing at the network edge. As AI continues to evolve and expand into new applications, the role of photonics will become increasingly critical in addressing the challenges of bandwidth, latency, and power consumption. By leveraging the unique properties of light, photonics offers a path to more efficient and scalable AI networks, driving innovation and enabling new possibilities across various industries.
Tags: AI inference, AI networks, AI training, autonomous driving, bandwidth, computational resources, Data center, detectors, edge AI, EFFECT Photonics, energy efficient, high-speed interconnects, lasers, latency, modulators, optical components, photonic processors, Photonics, power consumption, real-time processing, silicon photonics
What Do AI Networks Need From Optical Pluggables?
Artificial intelligence (AI) will have a significant role in making optical networks more scalable, affordable, and sustainable. It can gather information from devices across the optical network to identify patterns and make decisions independently without human input. By synergizing with other technologies, such as network function virtualization (NFV), AI can become a centralized management and orchestration network layer. Such a setup can fully automate network provisioning, diagnostics, and management, as shown in the diagram below.
However, artificial intelligence and machine learning algorithms are data-hungry. To work optimally, they need information from all network layers and ever-faster data centers to process it quickly. Pluggable optical transceivers thus need to become smarter, relaying more information back to the AI central unit, and faster, enabling increased AI processing.
The Need for Faster Transceivers
Optical transceivers are crucial in developing better AI systems by facilitating the rapid, reliable data transmission these systems need to do their jobs. High-speed, high-bandwidth connections are essential to interconnect data centers and supercomputers that host AI systems and allow them to analyze a massive volume of data.
In addition, optical transceivers are essential for facilitating the development of artificial intelligence-based edge computing, which entails relocating compute resources to the network’s periphery. This is essential for facilitating the quick processing of data from Internet-of-Things (IoT) devices like sensors and cameras, which helps minimize latency and increase reaction times.
400 Gbps links are becoming the standard across data center interconnects, but providers are already considering the next steps. LightCounting forecasts significant growth in the shipments of dense-wavelength division multiplexing (DWDM) ports with data rates of 600G, 800G, and beyond in the next five years. We discuss these solutions in greater detail in our article about the roadmap to 800G and beyond.
The Need for Telemetry Data
Mobile networks now and in the future will consist of a massive number of devices, software applications, and technologies. Self-managed, zero-touch automated networks will be required to handle all these new devices and use cases. Realizing this full network automation requires two vital components.
- Artificial intelligence and machine learning algorithms for comprehensive network automation: For instance, AI in network management can drastically cut the energy usage of future telecom networks.
- Sensor and control data flow across all network model layers, including the physical layer: As networks grow in size and complexity, the management and orchestration (MANO) software needs more degrees of freedom and dials to turn.
These goals require smart optical equipment and components that provide comprehensive telemetry data about their status and the fiber they are connected to. The AI-controlled centralized management and orchestration layer can then use this data for remote management and diagnostics. We discuss this topic further in our previous article on remote provisioning, diagnostics, and management.
For example, a smart optical transceiver that fits this centralized AI-management model should relay data to the AI controller about fiber conditions. Such monitoring is not limited to finding major faults or cuts in the fiber; it also covers smaller degradations or delays that stem from fiber age, increased stress on the link due to increased traffic, and nonlinear optical effects. A transceiver that relays all this data allows the AI controller to make better decisions about how to route traffic through the network.
A Smart Transceiver to Rule All Network Links
After relaying data to the AI management system, a smart pluggable transceiver must also switch parameters to adapt to different use cases and instructions given by the controller.
Let’s look at an example of forward error correction (FEC). FEC makes the coherent link much more tolerant to noise than a direct detect system and enables much longer reach and higher capacity. In other words, FEC algorithms allow the DSP to enhance the link performance without changing the hardware. This enhancement is analogous to imaging cameras: image processing algorithms allow the lenses inside your phone camera to produce a higher-quality image.
A smart transceiver and DSP could switch among different FEC algorithms to adapt to network performance and use cases. Let’s look at the case of upgrading a long metro link of 650km running at 100 Gbps with open FEC. The operator needs to increase that link capacity to 400 Gbps, but open FEC could struggle to provide the necessary link performance. However, if the transceiver can be remotely reconfigured to use a proprietary FEC standard, the transceiver will be able to handle this upgraded link.
Reconfigurable transceivers can also be beneficial to auto-configure links to deal with specific network conditions, especially in brownfield links. Let’s return to the fiber monitoring subject we discussed in the previous section. A transceiver can change its modulation scheme or lower the power of its semiconductor optical amplifier (SOA) if telemetry data indicates a good quality fiber. Conversely, if the fiber quality is poor, the transceiver can transmit with a more limited modulation scheme or higher power to reduce bit errors. If the smart pluggable detects that the fiber length is relatively short, the laser transmitter power or the DSP power consumption could be scaled down to save energy.
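To make this decision flow more concrete, here is a simplified sketch of the kind of rules an AI management layer could apply to a reconfigurable transceiver. All field names, thresholds, and settings below are hypothetical illustrations, not an actual transceiver API.

```python
# Hypothetical sketch of telemetry-driven transceiver reconfiguration.
# Field names, thresholds, and settings are illustrative, not a real API.
from dataclasses import dataclass

@dataclass
class FiberTelemetry:
    osnr_db: float    # signal quality estimated by the coherent DSP
    length_km: float  # estimated link length

def plan_transceiver_config(t: FiberTelemetry) -> dict:
    config = {}
    if t.osnr_db >= 30:                  # good-quality fiber
        config["modulation"] = "16QAM"   # denser modulation
        config["soa_power"] = "low"      # dial down the optical amplifier
    else:                                # degraded or noisy fiber
        config["modulation"] = "QPSK"    # more robust modulation
        config["soa_power"] = "high"     # more power to reduce bit errors
    if t.length_km < 40:                 # short link: save energy
        config["tx_power"] = "reduced"
        config["dsp_mode"] = "low_power"
    return config

print(plan_transceiver_config(FiberTelemetry(osnr_db=33, length_km=25)))
```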
Takeaways
Optical networks will need artificial intelligence and machine learning to scale more efficiently and affordably to handle the increased traffic and connected devices. Conversely, AI systems will also need faster pluggables than before to acquire data and make decisions more quickly. Pluggables that fit this new AI era must be fast, smart, and adapt to multiple use cases and conditions. They will need to scale up to speeds beyond 400G and relay monitoring data back to the AI management layer in the central office. The AI management layer can then program transceiver interfaces from this telemetry data to change parameters and optimize the network.
Tags: 400 Gbps links, affordable networks, AI networks, centralized management, data-hungry algorithms, Dense wavelength division multiplexing, digital signal processing, EFFECT Photonics, Fiber Monitoring, forward error correction, high-speed transceivers, mobile networks, Network automation, Network Function Virtualization, network orchestration, network scalability, optical pluggables, Reconfigurable transceivers, smart optical equipment, sustainable networks, telemetry data
What Do Coherent Access Pluggables Need?
Given the success of 400ZR pluggable coherent solutions in the market, discussions in the telecom sector about a future beyond 400G pluggables have often focused on 800G solutions and 800ZR. However, there is also increasing excitement about “downscaling” to 100G coherent products for applications in the network edge. The industry is labeling these pluggables as 100ZR.
A recently released Heavy Reading survey revealed that over 75% of operators surveyed believe that 100G coherent pluggable optics will be used extensively in their edge and access evolution strategy. In response to this interest from operators, several vendors are keenly jumping on board the 100ZR train by announcing their development projects: Acacia, Coherent/ADVA, Marvell/InnoLight, and Marvell/OE Solutions.
This growing interest and the new use cases for 100ZR are also changing how industry analysts view the potential of the 100ZR market. Last February, Cignal AI released a report on 100ZR stating that the viability of new low-power solutions in the QSFP28 form factor enabled use cases in access networks, which led the firm to double its 100ZR shipment forecasts.
“The access market needs a simple, pluggable, low-cost upgrade to the 10G DWDM optics that it has been using for years. 100ZR is that upgrade. As access networks migrate from 1G solutions to 10G solutions, 100ZR will be a critical enabling technology.”
– Scott Wilkinson, Lead Analyst for Optical Components at Cignal AI.
The 100ZR market can expand even further, however. Access networks are heavily price-conscious, and the lower the prices of 100ZR pluggables become, the more widely they will be adopted. Reaching such a goal requires a vibrant 100ZR ecosystem with multiple suppliers that can provide lasers, digital signal processors (DSPs), and full transceiver solutions that address the access market’s needs and price targets.
The Need for Lower Power
Unlike equipment in data centers and the network core, access network equipment lives in uncontrolled environments with limited cooling capabilities. Therefore, every extra watt of pluggable power consumption will impact how vendors and operators design their cabinets and equipment. QSFP-DD modules forced operators and equipment vendors to use larger cooling components (heatsinks and fans), meaning that each module needs more space to cool appropriately. The increased need for cabinet real estate makes these modules more costly to deploy in the access domain.
These struggles are a major reason why QSFP28 form factor solutions are becoming increasingly attractive in the 100ZR domain. Their power consumption (up to 6 watts) is lower than that of QSFP-DD form factors (up to 15 watts), which allows them to be stacked more densely in access network equipment rooms. Besides, QSFP28 modules are compatible with existing access network equipment, which often features QSFP28 slots.
The Need to Overcome Laser and DSP Bottlenecks
Even though QSFP28 modules are better at addressing the power concerns of the access domain, some obstacles prevent their wider availability.
Since QSFP28 pluggables require lower power consumption and slightly smaller footprints, they also need new laser and digital signal processor (DSP) solutions. The industry cannot simply incorporate the same lasers and DSPs used for 400ZR devices. This is why EFFECT Photonics is developing a pico tunable laser assembly (pTLA) and a 100G DSP that will best fit 100ZR solutions in the QSFP28 form factor.
However, a 100ZR industry with only one or two laser and DSP suppliers will struggle to scale up and make these solutions more widely accessible. The 400ZR market provides a good example of the benefits of a vibrant ecosystem: its larger vendor base helps 400ZR production scale up in volume and satisfy a rapidly growing market.
The Need for Standards and Interoperability
Another reason 400ZR solutions became so widespread is their standardization and interoperability. Previously, the 400G space was more fragmented, and pluggables from different vendors could not operate with each other, forcing operators to use a single vendor for their entire network deployment.
Eventually, datacom and telecom providers approached their suppliers and the Optical Internetworking Forum (OIF) about the need to develop an interoperable 400G coherent solution that addressed their needs. These discussions and technology development led the OIF to publish the 400ZR implementation agreement in 2020. This standardization and interoperability effort enabled the explosive growth of the 400G market.
100ZR solutions must follow a similar path to reach a larger market. If telecom and datacom operators want more widespread and affordable 100ZR solutions, more of them will have to join the push for 100ZR standardization and interoperability. This includes standards not just for power consumption and line interfaces but also for management and control interfaces, enabling more widespread use of remote provisioning and diagnostics. These efforts will make 100ZR devices easier to implement across access networks, whether in standards-compatible modes for interoperability or in high-performance modes that use proprietary features.
Takeaways
The demand from access network operators for 100ZR solutions is there, but it has yet to fully materialize in industry forecasts because, right now, there is not enough supply of viable 100ZR solutions that can meet their targets. So in a way, further growth of the 100ZR market is a self-fulfilling prophecy: the more suppliers and operators support 100ZR, the easier it is to scale up the supply and meet the price and power targets of access networks, expanding the potential market. Instead of one or two vendors fighting for control of a smaller 100ZR pie, having multiple vendors and standardization efforts will increase the supply, significantly increasing the size of the pie and benefiting everyone’s bottom line.
Therefore, EFFECT Photonics believes in the vision of a 100ZR ecosystem where multiple vendors can provide affordable laser, DSP, and complete transceiver solutions tailored to network edge use cases. Meanwhile, if network operators push towards greater standardization and interoperability, 100ZR solutions can become even more widespread and easy to use.
Tags: 100G Coherent Products, 100ZR, 400ZR, 800G Solutions, 800ZR, Acacia, access networks, Cignal AI report, Coherent/ADVA, DSPs, DWDM optics, Heavy Reading survey, lasers, Marvell/InnoLight, Marvell/OE Solutions, operators, power consumption, QSFP-DD form factors, QSFP28 form factor, Scott Wilkinson, vendors
Reducing the Cost per Bit with Coherent Technology
The cost per bit is a metric directly impacting network operators’ economic viability and competitive positioning. It represents the expense of transmitting a single bit of information across a network, encompassing infrastructure, operations, and maintenance costs. Lower costs per bit enable providers to offer more data-intensive services at competitive prices, attracting more customers and increasing revenue. Additionally, optimizing this metric helps achieve higher efficiency and sustainability in network operations.
Different segments of a telecommunication network—core, metro, and access—prioritize the cost per bit differently due to their distinct roles and technical requirements. The core network, which connects major cities and data centers, handles high data volumes over long distances. Economies of scale play a significant role here, as reducing the cost per bit is crucial for maintaining profitability on the vast amount of data transmitted. This segment typically invests in high-capacity, longer-haul technologies that, while expensive, reduce the cost per bit through enhanced efficiency and higher data throughput.
Conversely, edge networks face different challenges and priorities. They must prioritize flexibility and adaptability to handle varying traffic loads efficiently. Reducing the cost per bit involves deploying technologies that can scale quickly and cost-effectively. The access segment of edge networks, which brings connectivity directly to end-users, focuses on maximizing coverage and reliability. Here, the cost per bit needs to be managed against the need for extensive physical infrastructure that reaches individual customers.
Coherent technology is often seen as more expensive than direct detection (IM-DD), but in this article, we will explore some ways in which the initial investment in coherent technology can help networks reduce their cost per bit.
Coherent Increases Transmission Reach
The quality of the light signal degrades when traveling through an optical fiber by a process called dispersion. The same phenomenon happens when a prism splits white light into several colors. The fiber also adds other distortions due to nonlinear optical effects.
These effects get worse as the input power of the light signal increases, leading to a trade-off: you might want more power to transmit over longer distances, but the nonlinear distortions also become larger, which defeats the purpose of using more power.
Coherent systems use sophisticated digital signal processing (DSP) to automatically compensate for signal impairments, including chromatic and polarization mode dispersion. Coherent receivers are also highly sensitive, allowing them to detect signals over longer distances with higher fidelity than is possible with IM-DD systems. The dispersion compensation and increased sensitivity reduce the number of regenerative repeaters and other physical modules needed to boost the signal over longer distances.
Fewer repeaters mean lower energy consumption, reduced maintenance, and fewer operational disruptions, all contributing to a lower cost per bit. Additionally, the ability to transmit over extended distances without degradation in signal quality allows for more straightforward network architectures with longer point-to-point connections, simplifying the overall network design and further reducing costs.
Coherent Increases Transmission Efficiency
Coherent systems use complex modulation formats that encode data on a light wave’s amplitude, phase, and polarization rather than just its intensity. By encoding multiple bits per symbol, these systems can transmit more data over a single wavelength than IM-DD systems, which use intensity-only modulation, effectively increasing the data-carrying capacity of a single fiber.
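A rough line-rate calculation illustrates the gain in bits per symbol. The baud rates below are typical round numbers chosen for illustration, not figures from this article, and FEC and framing overhead are ignored.

```python
# Illustrative line-rate arithmetic, ignoring FEC and framing overhead.
def line_rate_gbps(symbol_rate_gbaud, bits_per_symbol):
    return symbol_rate_gbaud * bits_per_symbol

# IM-DD NRZ: 1 bit per symbol, intensity only, single polarization.
print(line_rate_gbps(25, 1))        # 25 Gbps

# Coherent DP-QPSK: 2 bits/symbol (phase) x 2 polarizations = 4 bits/symbol.
print(line_rate_gbps(28, 4))        # 112 Gbps raw, ~100G after overhead
```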
Such efficiency can allow a single coherent channel to replace the work of several IM-DD channels. In our previous article, we provided an example of a single 100Gbps coherent channel replacing the link aggregation of four 10Gbps IM-DD channels found in some access and aggregation network architectures.
This substitution would replace eight SFP+ transceivers with just two coherent 100G transceivers, simplifying network configuration and operation. It more than doubles the capacity of 4x10Gbps link aggregation, allowing the network to handle more data traffic while reducing the required physical infrastructure, effectively reducing the cost per bit.
You can consult the recent Cignal AI report on 100ZR technologies to gain further insight into this link aggregation upgrade’s potential market and reach.
The Synergy with WDM Technology
Dense Wavelength Division Multiplexing (DWDM) is an optical technology that dramatically increases the amount of data transmitted over existing fiber networks. Data from various signals are separated, encoded on different wavelengths, and put together (multiplexed) in a single optical fiber.
The wavelengths are separated again and reconverted into the original digital signals at the receiving end. In other words, DWDM allows different data streams to be sent simultaneously over a single optical fiber without requiring the expensive installation of new fiber cables. In a way, it’s like adding more lanes to the information highway without building new roads!
The tremendous expansion in data volume afforded by DWDM becomes clear when compared to other optical methods. A standard transceiver, often called a grey transceiver, is a single-channel device: each fiber has a single laser source, and you can transmit 10 Gbps with grey optics. Coarse Wavelength Division Multiplexing (CWDM) has multiple channels, although far fewer than DWDM. For example, with 4-channel CWDM, you can transmit 40 Gbps. DWDM can accommodate up to 100 channels. At that capacity, you can transmit 1 Tbps (one trillion bits per second) – 100 times more data than grey optics and 25 times more than CWDM.
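In numbers, assuming 10 Gbps per wavelength as in the example above:

```python
# Capacity scaling with channel count at 10 Gbps per wavelength.
gbps_per_channel = 10
for name, channels in [("Grey optics", 1), ("CWDM (4 channels)", 4), ("DWDM (100 channels)", 100)]:
    print(f"{name}: {channels * gbps_per_channel} Gbps per fiber")
# Grey optics: 10 Gbps per fiber
# CWDM (4 channels): 40 Gbps per fiber
# DWDM (100 channels): 1000 Gbps per fiber
```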
While the upgrade to DWDM requires some initial investment in new tunable transceivers, the technology ultimately reduces the cost per bit transmitted over the network. Demand in access networks will continue to grow as we move toward IoT and 5G, and DWDM will be vital to scaling cost-effectively. Self-tuning modules have also helped further reduce the expenses associated with tunable transceivers.
Takeaways
Coherent systems minimize the need for frequent signal regeneration by compensating signal dispersion and enhancing signal reach. This simplifies network architecture and contributes to a lower cost per bit. Additionally, by employing complex modulation techniques, coherent technology maximizes the data capacity per wavelength, potentially replacing multiple IM-DD systems with a single coherent channel. This can further streamline network operations and reduce expenses.
The synergy of coherent technology with Dense Wavelength Division Multiplexing (DWDM) can multiply the data throughput of existing fiber infrastructures without requiring new infrastructure installations. Overall, while coherent technology involves a higher upfront investment compared to IM-DD systems, it can lower the cost per bit by enhancing the efficiency, reach, and capacity of data transmission.
Tags: capacity, channels, Coherent technology, cost per bit, data, dispersion, DWDM, EFFECT Photonics, efficiency, fiber, IM-DD, infrastructure, modulation, multiplexing, network, operations, reach, signal, systems, technology, transmission
What Goes Into Power Per Bit
In the information and communication technology (ICT) sector, the exponential increase in data traffic makes it difficult to keep emissions down and contribute to the energy transition. A 2020 study by Huawei estimates that the power consumption of the data center sector will increase threefold in the next ten years. Meanwhile, wireless access networks are expected to increase their power consumption even faster, more than quadrupling between 2020 and 2030.
These issues affect both the environment and the bottom lines of communications companies, which must commit increasingly larger percentages of their operating expenditure to cooling solutions.
As we explained in our previous articles, photonics and transceiver integration will play a key role in addressing these issues and making the ICT sector greener. EFFECT Photonics also believes that the transition of optical access networks to coherent 100G technology can help reduce power consumption.
This insight might sound counterintuitive at first, since a coherent transceiver will normally consume more than twice the power of a direct detect one due to its digital signal processor (DSP). However, by replacing the aggregation of multiple direct detect links with a single coherent link, and by skipping the intermediate upgrade to 56 Gbps and going directly to 100 Gbps, optical networks can reduce energy consumption, material use, and operational expenditures such as truck rolls.
The Impact of Streamlining Link Aggregation
The advanced stages of 5G deployment will require operators to cost-effectively scale fiber capacity in their fronthaul networks using more 10G DWDM SFP+ solutions and 25G SFP28 transceivers. This upgrade will pressure the aggregation segments of mobile backhaul and midhaul, which typically rely on link aggregation of multiple 10G DWDM links into a higher bandwidth group (e.g., 4x10G).
On the side of cable optical networks, the long-awaited migration to 10G Passive Optical Networks (10G PON) is happening and will also require the aggregation of multiple 10G links in optical line terminals (OLTs) and Converged Cable Access Platforms (CCAPs).
This type of link aggregation involves splitting larger traffic streams and can be intricate to integrate within an access ring. Furthermore, it carries an environmental impact.
A single 100G coherent pluggable consumes a maximum of six watts of power, which is significantly more than the two watts of power of a 10G SFP+ pluggable. However, aggregating four 10G links would require a total of eight SFP+ pluggables (two on each end) for a total maximum power consumption of 16 watts. Substituting this link aggregation for a single 100G coherent link would replace the eight SFP+ transceivers with just two coherent transceivers with a total power consumption of 12 watts. And on top of that reduced total power consumption, a single 100G coherent link more than doubles the capacity of aggregating those four 10G links.
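Using the figures above, a quick power-per-bit calculation makes the difference explicit:

```python
# Power-per-bit comparison using the figures quoted above.
links = {
    "4x10G link aggregation": {"pluggables": 8, "watts_each": 2, "capacity_gbps": 40},
    "Single 100G coherent link": {"pluggables": 2, "watts_each": 6, "capacity_gbps": 100},
}
for name, link in links.items():
    total_watts = link["pluggables"] * link["watts_each"]
    print(f"{name}: {total_watts} W total, {total_watts / link['capacity_gbps']:.2f} W per Gbps")
# 4x10G link aggregation: 16 W total, 0.40 W per Gbps
# Single 100G coherent link: 12 W total, 0.12 W per Gbps
```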
Adopting a single 100G uplink also diminishes the need for such link aggregation, simplifying network configuration and operations. To gain further insight into the potential market and reach of this link aggregation upgrade, it is recommended to consult the recent Cignal AI report on 100ZR technologies.
The Environmental Advantage of Leaping to 100G
While conventional wisdom may suggest a step-by-step progression from 28G midhaul and backhaul network links to 56G and then to 100G, it’s important to remember that each round of network upgrade carries an environmental impact.
Let’s look at an example. As per the European 5G Observatory, a country like The Netherlands has deployed 12,858 5G base stations. Several thousand mid- and backhaul links connect groups of these base stations to the 5G core networks. Every time these networks require an upgrade to accommodate increasing capacity, tens of thousands of pluggable transceivers must be replaced nationwide. This upgrade entails a substantial capital investment as well as resources and materials.
A direct leap from 28G mid- and backhaul links directly to coherent 100G allows network operators to have their networks already future-proofed for the next ten years. From an environmental perspective, it saves the economic and environmental impact of buying, manufacturing, and installing tens of thousands of 56G plugs across mobile network deployments. It’s a strategic choice that avoids the redundancy and excess resource utilization associated with two consecutive upgrades, allowing for a more streamlined and sustainable deployment.
Streamlining Operations with 100G ZR
Beyond the environmental considerations and capital expenditure, the operational issues and expenses of new upgrades cannot be overlooked. Each successive generation of upgrades necessitates many truck rolls and other operational expenditures, which can be both costly and resource-intensive.
Each truck roll involves a number of costs:
- Staff time (labor cost)
- Staff safety (especially in poor weather conditions)
- Staff opportunity cost (what complicated work could have been done instead of driving?)
- Fuel consumption (gasoline/petrol)
- Truck wear and tear
By directly upgrading from 25G to 100G, telecom operators can bypass an entire cycle of logistical and operational complexities, resulting in substantial savings in both time and resources.
This streamlined approach not only accelerates the transition toward higher speeds but also frees up resources that can be redirected toward other critical aspects of network optimization and sustainability initiatives.
Conclusion
In the midst of the energy transition, the ICT sector must also contribute toward a more sustainable and environmentally responsible future. While it might initially seem counterintuitive, upgrading to 100G coherent pluggables can help streamline optical access network architectures, reducing the number of pluggables required and their associated power consumption. Furthermore, upgrading these access network mid- and backhaul links directly to 100G leads to future-proofed networks that will not require financially and environmentally costly upgrades for the next decade.
As the ecosystem for QSFP28 100ZR solutions expands, production will scale up, making these solutions more widely accessible and affordable. This, in turn, will unlock new use cases within access networks.
Tags: 100G coherent transceivers, 100ZR technology, 10G DWDM SFP+ solutions, 5G network power savings, coherent 100G technology, data center power usage, digital signal processor power, EFFECT Photonics, energy transition in ICT, energy-efficient technology, environmental impact of ICT, green ICT solutions, ICT power consumption, link aggregation optimization, Optical Access Networks, photonics in ICT, power per bit efficiency, reducing ICT emissions, sustainable technology upgrades, telecom power consumption, transceiver integration
Integrating Line Card Performance and Functions Into a Pluggable
Article first published 12th October 2021, updated 3rd July 2024.
The optical transceiver market is expected to double in size by 2025, and coherent optical technology has come a long way over the past decade to become more accessible and thus have a greater impact on this market. When Nortel (later Ciena) introduced the first commercial coherent transponder in 2008, the device was a bulky, expensive line card with discrete components distributed on multiple circuit boards.
As time went by, coherent devices got smaller and consumed less power. By 2018, most coherent line card transponder functions could be miniaturized into CFP2 transceiver modules that were the size of a pack of cards and could be plugged into systems with pluggable line sides. QSFP modules followed a couple of years later; they were essentially the size of a large USB stick and could be plugged directly into routers. They were a great fit for network operators who wanted to take advantage of the lower power consumption and cost, field replaceability, vendor interoperability, and pay-as-you-grow features.
Despite the onset of pluggables, the big, proprietary line card optical engines have still played a role in the market by focusing on delivering best-in-class optical performance. The low-noise, high-power signals they produce have the longest reach for optical links and wider compatibility with the ROADM multiplexers used in metro and long-haul networks. The smaller CFP2 modules produce, at best, roughly half the laser power of the line card modules, which limits their reach. Meanwhile, even smaller QSFP form factors cannot fit optical amplifier components, so their transmit power and reach are even more limited than a CFP2 module’s.
All in all, the trade-offs were clear: go for proprietary line card transponders if you want best-in-class performance and the longest reach, or go for CFP2 or QSFP transceivers if you want a smaller footprint and lower power consumption. This trade-off, however, limits the more widespread adoption of coherent technology. For example, mobile network operators need high performance in a smaller footprint and at lower power consumption so that their metro and access networks can meet the rising demands for 5G data.
So what if we told you the current paradigm of line card transponders versus pluggable transceivers is outdated? Recent improvements in electronic and photonic integration have squeezed more performance and functions into smaller form factors, allowing pluggable devices to almost catch up to line cards.
Integration Enables Line Card Performance in a Pluggable Form Factor
The advances in photonic integration change the game and can enable high performance and transmit power in the smallest pluggable transceiver form factors. By integrating all photonic functions on a single chip, including lasers and optical amplifiers, pluggable transceiver modules can achieve transmit power levels closer to those of line card transponder modules while still keeping the smaller QSFP router pluggable form factor, power consumption, and cost.
Full photonic integration further increases the transmit power by minimizing optical losses: it uses more efficient optical modulators, suffers fewer coupling losses than silicon, and integrates the laser device on the same chip as the rest of the optical components.
Modern ASICs Can Fit Electronics Functions in a Pluggable Form Factor
As important as optical performance is, though, pluggable transceivers also needed improvements on the electronic side. Traditionally, line card systems not only had better optical performance but also broader and more advanced electronic functionalities, such as digital signal processing (DSP), advanced forward error correction (FEC), encryption, and advanced modulation schemes. These features are usually implemented on electronic application-specific integrated circuits (ASICs).
ASICs benefit from the same CMOS process improvements that drive progress in consumer electronics. Each new CMOS process generation can fit more transistors into a single chip. Ten years ago, an ASIC for line cards had tens of millions of transistors, while the 7nm ASIC technology used in modern pluggables has more than five billion transistors. This progress in transistor density allows ASICs to integrate more electronic functions than ever into a single chip while still making the chip smaller. Previously, every function—signal processing, analog/digital conversion, error correction, multiplexing, encryption—required a separate ASIC, but now they can all be consolidated on a single chip that fits in a pluggable transceiver.
This increase in transistor density and integration also leads to major gains in power consumption and performance. For example, modern transceivers using 7nm ASICs have decreased their power consumption by 50% compared to the previous generation using 16nm ASICs while delivering roughly a 30% increase in bandwidth and baud rates. Newer pluggable ASICs are now moving to a 5nm CMOS process, enabling further improvements in transistor density, power consumption, and speed.
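Putting those two figures together gives a rough worked estimate of the relative energy per transmitted bit, assuming power scales down by 50% and throughput scales up by roughly 30%, exactly as stated above:

```latex
\[
\frac{E_{\text{bit},\,7\,\text{nm}}}{E_{\text{bit},\,16\,\text{nm}}}
= \frac{P_{7}/R_{7}}{P_{16}/R_{16}}
\approx \frac{0.5\,P_{16}}{1.3\,R_{16}}\cdot\frac{R_{16}}{P_{16}}
\approx 0.38
\]
```

In other words, under these assumptions the 7nm generation moves each bit with roughly 60% less energy than its 16nm predecessor.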
Electronic Integration Enables Line-Card System Management in a Pluggable Form Factor
The advancements in CMOS technology also enable the integration of system-level functions into a pluggable transceiver. Previously, functions such as in-band network management and security, remote management, autotuneability, or topology awareness had to live on the shelf controller or in the line card interface, but that’s not the case anymore. Thanks to the advances in electronic integration, we are closer than ever to achieving a full, open transponder on a pluggable that operates as part of the optical network. These programmable, pluggable transceivers provide more flexibility than ever to manage access networks.
For example, the pluggable transceiver could run in a mode that prioritizes high performance or in one that prioritizes low power consumption by using simpler and less power-hungry signal processing and error correction features. Therefore, these pluggables could provide high-end performance in the smallest form factor or low- and mid-range performance at lower power consumption than embedded line card transponders.
EFFECT Photonics has already started implementing these system-management features in its products. For example, our direct-detect SFP+ transceiver modules feature NarroWave technology, which allows customers to monitor and control remote SFP+ modules from the central office without making any hardware or software changes in the field. NarroWave is agnostic of vendor equipment, data rate, or protocol of the in-band traffic.
Pluggable transceivers also provide the flexibility of multi-vendor interoperability. High-performance line card transponders have often prioritized using proprietary features to increase performance while neglecting interoperability. The new generations of pluggables don’t need to make this trade-off: they can operate in standards-compatible modes for interoperability or in high-performance modes that use proprietary features.
Takeaways
Coherent technology was originally reserved for premium long-distance links where performance is everything. Edge and access networks could not use this higher-performance technology since it was too bulky and expensive.
Photonic integration technology like the one used by EFFECT Photonics helps bring these big, proprietary, and expensive line card systems into a router pluggable form factor. This tech has squeezed more performance into a smaller area and at lower power consumption, making the device more cost-effective. Combining the improvements in photonic integration with the advances in electronic integration for ASICs, the goal of having a fully programmable transponder in a pluggable is practically a reality. Photonic integration will be a disruptive technology that will simplify network design and operation and reduce network operators’ capital and operating expenses.
The impact of this technological improvement in pluggable transceivers was summarized deftly by Keven Wollenweber, VP of Product Management for Cisco’s Routing Portfolio:
“Technology advancements have reached a point where coherent pluggables match the QSFP-DD form factor of grey optics, enabling a change in the way our customers build networks. 100G edge and access optimized coherent pluggables will not only provide operational simplicity, but also scalability, making access networks more future proof.”
Tags: 100G, access network, ASIC, CFP, coherent optics, CoherentPIC, DSP, edge network, electronic integration, fully integrated, Fully Integrated PICs, Integrated Photonics, line card, metro access, miniaturization, NarroWave, optical transceivers, photonic integration, PIC, pluggable, pluggable transceiver, QSFP, SFP, small form factor, sustainability, telecommunication

AI and the New Drivers of Data Traffic
In this world, nothing can be said to be certain except death, taxes, and the growth of data traffic in communication networks. However, the causes of that growth vary over time depending on emerging technologies and shifting consumer behaviors.
The relationship between network capacity and data traffic closely mirrors the concept of induced demand in highway traffic management. Induced demand in the context of roadways refers to the phenomenon where increasing the number of lanes or expanding the road infrastructure to reduce congestion and accommodate more vehicles often leads to even higher traffic volumes. This is because the improved road capacity makes driving more appealing, thus encouraging more people to use their vehicles or to use them more often.
Similarly, as network capacity is increased—be it through the expansion of bandwidth or the introduction of more efficient data transmission—the network becomes capable of supporting higher loads and faster services. This improvement in network performance can encourage more data-intensive applications and services to be developed and used, such as high-definition video streaming, real-time gaming, and comprehensive Internet-of-things (IoT) solutions. As a result, the demand for data grows further, often at a pace that quickly meets or even exceeds the newly added capacity.
This article will tackle some recent key trends of the last couple of years that are driving the latest surge in data traffic.
5G and the Internet of Things
The Internet of Things (IoT) is a series of technologies interconnecting physical devices, allowing them to communicate, collect, and exchange data without human intervention. This connectivity enhances operational efficiency, improves safety, and reduces human labor in various environments—from industrial settings with automated production lines to everyday consumer use, such as smart home devices that enhance user convenience and energy efficiency.
By converting ordinary objects into smart, connected components, IoT enables real-time data collection and analysis. This leads to more informed decision-making and predictive maintenance, which can significantly cut costs and increase productivity across multiple sectors.
The advent of 5G technology, with its promise of ultra-fast speeds and low latency, has enabled many IoT applications, turning mundane devices into smart, interconnected components of broader digital ecosystems. However, this means more devices contribute to the already vast data streams flowing through global networks.
Cloud Computing and the Edge
Cloud computing offers scalable and flexible IT resources over the Internet, allowing businesses to avoid the upfront cost and complexity of owning and maintaining their own IT infrastructure. By leveraging cloud services, organizations can access a wide array of computing resources on demand, such as servers, storage, databases, and software applications.
Meanwhile, applications such as IoT, AR/VR, and content delivery networks have driven the growth of edge computing. Edge computing complements cloud computing by processing data near the source rather than relying on a central data center. This is important for applications requiring real-time processing and low latency, such as autonomous vehicles, industrial automation, and smart city technologies. By minimizing the distance data must travel, edge computing reduces latency, increases data processing speed, and improves the reliability and privacy of sensitive data.
As shown in Table 1, a data center at a town or suburb aggregation point could halve the latency compared to a centralized hyperscale data center. Enterprises with their own on-premises data center can reduce latencies by 12 to 30 times compared to hyperscale data centers.
Table 1: Types of Edge Data Centers

| Category | Type of edge | Data center | Location | Number of DCs per 10M people | Average latency | Size |
|---|---|---|---|---|---|---|
| On-premises edge | On-premises edge | Enterprise site | Businesses | NA | 2-5 ms | 1 rack max |
| Network (mobile) edge | Tower edge | Tower | Nationwide | 3000 | 10 ms | 2 racks max |
| Network (mobile) edge | Outer edge | Aggregation points | Town | 150 | 30 ms | 2-6 racks |
| Network (mobile) edge | Inner edge | Core | Major city | 10 | 40 ms | 10+ racks |
| Regional edge | Regional edge | Regional | Major city | 100 | 50 ms | 100+ racks |
| Not edge | Not edge | Hyperscale | State/national | 1 | 60+ ms | 5000+ racks |
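The ratios quoted above follow directly from the latency column of Table 1, taking roughly 60 ms for a hyperscale data center, 30 ms for a town aggregation point, and 2-5 ms on-premises:

```latex
\[
\frac{60\ \text{ms}}{30\ \text{ms}} = 2,
\qquad
\frac{60\ \text{ms}}{5\ \text{ms}} = 12,
\qquad
\frac{60\ \text{ms}}{2\ \text{ms}} = 30
\]
```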
As more people and organizations adopt cloud-based services and as these services become more data-intensive (e.g., high-definition video streaming, large-scale machine learning models), the volume of data traversing the internet continues to grow. While edge computing processes much of the data locally to reduce latency, it increases data traffic in the access networks connected to edge devices.
AI and Machine Learning
Artificial Intelligence (AI) and Machine Learning (ML) are deeply transforming many industries, enabling the increased automation of manual tasks. AI employs sophisticated algorithms to interpret data, automate decisions, and act upon those decisions. Machine learning, a branch of AI, focuses on algorithms that allow computers to learn from data to make predictions or decisions without explicit programming. This capability is essential for many applications, from spam detection in emails to more complex problems such as diagnosing diseases or managing traffic flows efficiently.
AI and ML significantly increase data traffic due to several factors:
- Data Collection: Training and operating AI/ML models require extensive data collection from varied sources. This data must be transmitted to where it can be processed, contributing to substantial network traffic.
- Connectivity Increase: Integrating AI in devices and services, such as IoT or smart devices, leads to more internet-connected devices and higher data volumes being transmitted to central servers for analysis.
- Complex Computations: AI and ML computations usually occur in cloud environments, necessitating high-capacity links to move data to the cloud for processing and to download the results afterward.
The increasing complexity of AI processing will impact not just the interconnections between data centers but also the architectures inside the data center. AI nodes inside data center racks are normally connected via electrical or RF signals, while the racks are connected via optical fiber interconnects. However, as AI systems do more parallelized processing, data center racks run into electrical memory and power consumption constraints. These electrical interconnects between AI nodes are increasingly becoming a bottleneck in the ability of data center architectures to scale and handle the demands of AI models sustainably.
Takeaways
Emerging technologies such as 5G, IoT, cloud computing, and AI are reshaping how data is generated, processed, and used across networks. Cloud computing continues democratizing access to technological resources, enabling businesses and individuals to leverage sophisticated tools without significant upfront investments. The Internet of Things (IoT) turns everyday objects into devices connected to 5G networks and the edge cloud. AI and machine learning represent perhaps the most significant drivers of increased data traffic, as they rely on massive data sets to train and operate.
These technologies will have a major impact on our society but will also need further innovations in network architecture to handle increased loads with minimal latency. Integrating AI across various devices and services not only multiplies the number of data-generating interactions but also complicates the data processing infrastructure, pushing the limits of current technologies and requiring new solutions to sustain growth. These are the challenges that drive the work of telecom and datacom companies all over the world.
Tags: 5G data surge, AI and 5G technology, AI and IoT integration, AI data traffic, AI impact on networks, AI in communication networks, AI machine learning traffic, AI-driven data increase, cloud computing data, data traffic growth, data-intensive applications, Edge computing, EFFECT Photonics, high-definition video streaming, induced demand in data networks, Internet of Things (IoT) traffic, network architecture innovations, network capacity and AI, new data traffic drivers, real-time data processing, smart devices and data traffic

Towards the 800ZR Future
The advances in electronic and photonic integration allowed coherent technology for metro DCIs to be miniaturized into QSFP-DD and OSFP form factors. This progress allowed the Optical Internetworking Forum (OIF) to create the 400ZR and ZR+ standards for 400G DWDM pluggable modules.
Following the success of 400ZR standardization, the industry quickly recognized the need for even higher-capacity solutions. In December 2020, the OIF announced the initiation of the 800G Coherent project. This project aimed to define interoperable 800 Gbps coherent line specifications for various applications, including amplified DWDM links up to 120 km and unamplified fixed-wavelength links of 2-10 km.
After the success of 400ZR standardization, the data center industry and the OIF are starting to promote an 800ZR standard to enable the next generation of interconnects. In OFC 2024, we started seeing some demos from several vendors and the OIF on this new standards initiative.
Coherent or Direct Detect for Data Centers
While coherent technology has become the dominant one for interconnecting data centers over long distances (80 km upwards), the campus sector and the inside of the data center continue to be dominated by direct detection technologies such as PAM-4.
However, as data rates rise to 800 Gbps and beyond, the power consumption of coherent technology is expected to come much closer to that of direct detect PAM-4 solutions, as shown in the figure below. This can make coherent technology competitive for campus interconnects while direct detect technology remains dominant inside the data center.
A major reason for this decreased gap is that direct detect technology often requires additional amplifiers and compensators at these data rates, while coherent pluggables do not. This also makes coherent technology simpler to deploy and maintain. Furthermore, as the volume of coherent transceivers produced increases, their prices can also go down.
While the 800ZR standard is focused on longer-distance metro interconnects, efforts have also been made to develop a coherent 800G short-reach (SR) standard. Even if these efforts are unsuccessful in this generation of transceivers, they can lay the groundwork for using coherent technology in short-reach applications once terabit links become the new standard.
The Challenge of Power
LightCounting forecasts significant growth in dense-wavelength division multiplexing (DWDM) port shipments with data rates of 600G, 800G, and beyond in the next five years.
The major obstacles in this roadmap remain transceivers’ power consumption, thermal management, and affordability. Over the last two decades, power ratings for pluggable modules have increased as we moved from direct detection to more power-hungry coherent transmission: from 2 W for SFP modules to 3.5 W for QSFP modules and now to 14 W for QSFP-DD and 21.1 W for OSFP form factors. Rockley Photonics researchers estimate that a future electronic switch filled with 800G modules would draw around 1 kW of power just for the optical modules.
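For a rough sense of scale, assume (purely for illustration) a 32-port switch faceplate and that 800G coherent modules land near 30 W each; neither figure comes from the sources above:

```latex
\[
32 \times 21.1\ \text{W} \approx 675\ \text{W}
\qquad\text{and}\qquad
32 \times 30\ \text{W} \approx 960\ \text{W} \approx 1\ \text{kW}
\]
```

Even this simple estimate lands in the same order of magnitude as the Rockley figure, which is why power per module matters so much for 800G and beyond.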
Thus, many incentives exist to improve pluggable optical transceivers’ performance and power consumption. By embracing increased photonic integration, co-designed PICs and DSPs, and more compact lasers, pluggables will be better able to scale in data rates while remaining affordable and low-power.
What Does This Mean for Lasers and DSPs?
The new generation of 800ZR pluggable transceivers can leverage 5nm process technology for their digital signal processors (DSPs), offering significant advancements over the 7nm technology used in 400ZR transceivers. A primary difference between these two processes lies in the size of the transistors used to build the chips: the 5nm process technology uses smaller transistors than the 7nm process. This size reduction allows more transistors to be packed into the same silicon area, enhancing the chip’s performance and energy efficiency.
For example, this enhanced transistor density facilitates higher baud rates, a critical factor for data transmission. The 800ZR modules will operate at baud rates of 120 GBaud, which doubles the 60 GBaud used in 400ZR transceivers. The increased power efficiency also makes these 800ZR transceivers more suitable for data center environments.
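As a rough check on those numbers, assume dual-polarization 16QAM, the modulation format typically associated with ZR-class coherent links (an assumption for illustration rather than a figure taken from the 800ZR specification itself):

```latex
\[
\underbrace{4\ \tfrac{\text{bits}}{\text{symbol}}}_{16\text{QAM}}
\times \underbrace{2}_{\text{polarizations}}
\times 120\ \text{GBaud}
= 960\ \tfrac{\text{Gb}}{\text{s}}\ \text{(raw line rate)}
\]
```

The gap between the 960 Gb/s raw line rate and the 800 Gb/s payload leaves room for forward error correction and framing overhead, mirroring how 400ZR pairs roughly 60 GBaud with a 400 Gb/s payload.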
Regarding lasers for 800ZR, the mandate continues to be the same as always: develop the most powerful laser possible with the smallest footprint. Since 800G transmission naturally leads to higher transmission losses than 400G transmission, higher laser power is necessary to compensate for these losses and achieve the required reach. Meanwhile, the smaller laser size helps with thermal management inside the 800ZR module, and high laser power provides a higher link budget.
Takeaways
Following the success of the 400ZR standard, the Optical Internetworking Forum (OIF) quickly moved on to developing 800ZR standards for data center interconnects. The deployment of 800ZR technology promises substantial bandwidth and system performance enhancements but also poses a sustainable solution to the escalating power and thermal challenges modern data centers face. The use of 5nm process technology for DSPs and small but high-performance lasers within these modules will be vital to achieve the required power efficiency and performance.
Tags: 5nm DSPs, 800Gbps, 800ZR technology, Coherent Transceivers, compact lasers, Data center interconnects, DWDM ports, EFFECT Photonics, high-speed transmission, Optical Internetworking Forum, photonic integration, power management, standards

Data Centers in the Age of AI
Article first published 14th June 2023, updated 29th May 2024.
Artificial intelligence (AI) is changing the technology landscape in various industries, and data centers are no exception. AI algorithms are computationally heavy and will increase data centers’ power consumption and cooling requirements, deeply affecting data center infrastructure.
The Constraints of the Power Grid
Data centers famously consume a significant amount of energy, and power-hungry AI algorithms will lead to a further increase in data center power consumption. The world’s major data center providers are already gearing up for this increase. For example, a recent Reuters report explains how Meta’s AI computing clusters needed 24 to 32 times the networking capacity. This increase required redesigning the clusters and data centers to include new liquid cooling systems.
Despite the best efforts of the world’s tech giants to rethink their architectures, it’s clear that data centers and their new AI workloads are hitting electrical power grid limitations. The capacity of the power grid is now increasingly regarded as the main chokepoint that prevents AI clusters from being more widely implemented in data centers.
Since changes in the power grid distribution would take decades to materialize, data center providers know they cannot continue to centralize their data center architectures. To adapt to the power grid constraints, providers are thinking about how to transfer data between decentralized data center locations instead.
For example, data centers can relocate to areas with available spare power, preferably from nearby renewable energy sources. Efficiency can increase further by sending data to branches with spare capacity. The Dutch government has already proposed this kind of decentralization as part of its spatial strategy for data centers.
Interconnecting Data Centers over Long Distances
Longer data center interconnects enable a more decentralized system of data centers with branches in different geographical areas connected through high-speed optical fiber links to cope with the strain of data center clusters on power grids.
These trends push the data center industry to look for interoperable solutions for longer interconnects over 80 to 120 km distances. The advances in electronic and photonic integration allowed coherent technology for metro DCIs to be miniaturized into QSFP-DD and OSFP form factors. This progress allowed the Optical Internetworking Forum (OIF) to create the 400ZR and ZR+ standards for 400G DWDM pluggable modules. With modules that are small enough to pack a router faceplate densely, the datacom sector could profit from a 400ZR solution for high-capacity data center interconnects of up to 80km.
After the success of 400ZR standardization, the data center industry and the OIF are starting to promote an 800ZR standard to enable the next generation of interconnects. In OFC 2024, we started seeing some demos from several vendors and the OIF on this new standards initiative.
Optical Interconnects Inside Data Centers
The increasing complexity of AI processing will impact not just the interconnections between data centers but also the architectures inside the data center. AI nodes inside data center racks are normally connected via electrical or RF signals, while the racks are connected via optical fiber interconnects. However, as AI systems do more parallelized processing, data center racks run into electrical memory and power consumption constraints. These electrical interconnects between AI nodes are increasingly becoming a bottleneck in the ability of data center architectures to scale and handle the demands of AI models sustainably.
Optics will play an increasingly larger role in this area. As explained by Andrew Alduino and Rob Stone of Meta in a talk at the 2022 OCP Global Summit, interconnecting AI nodes via optics will be vital to decreasing the power per bit transmitted inside data center racks. This means changing the traditional architecture inside these racks: instead of using an electro-optical switch that converts the electrical interconnects between AI nodes into optical interconnects with other racks, the switching inside the data center rack would be entirely optical.
Avoiding the power losses of electrical connections and electrical-optical conversions will improve the cost and power per bit of connections inside the data center. As more data center capacity is needed, these new optical interconnections might also need co-packaged coherent optics to scale effectively. This is the argument made recently by our very own Joost Verberk, EFFECT Photonics’ VP of Product Management, at the 2024 OCP Regional Summit in Lisbon.
Takeaways
As AI continues reshaping the technological infrastructure, data centers undergo significant transformations to meet the new demands. The shift towards AI-intensive operations has exacerbated the existing strain on power grids, pushing data center providers towards more decentralized solutions. This includes relocating to areas with spare power and transferring data with optical interconnects across geographically dispersed locations.
The adoption of advanced optical interconnections is happening also inside data centers, as data center racks might transition to all-optical switching to connect their AI nodes. These evolving strategies not only address the immediate challenges of AI workloads but also set the stage for more sustainable and scalable data center operations in the future.
Tags: Artificial intelligence (AI), Cooling requirements, Data center architecture, data centers, energy efficiency, Intelligent automation, Networking capacity, Optimal temperature and airflow, power consumption, Power usage effectiveness (PUE) ratio

Why Semiconductors are Vital to Optics and Photonics
Thanks to wafer-scale technology, electronics have driven down the cost per transistor for many decades. This allowed the world to enjoy chips that every generation became smaller and provided exponentially more computing power for the same amount of money. This scale-up process is how everyone now has a computer processor in their pocket that is millions of times more powerful than the most advanced computers of the 1960s that landed men on the moon.
For example, this progress in electronics integration is a key factor that brought down the size and cost of coherent transceivers, packing more bits than ever into smaller areas. However, photonics has struggled to keep up with electronics, with the photonic components dominating the cost of transceivers. Making transceivers more accessible across the entire optical network requires bringing down these costs.
In this article, we will explore a bit of the relationship between optics and semiconductors and explain what optics can learn from electronics when it comes to semiconductor processes.
At the Heart of Photonic Systems
Semiconductor materials are vital for photonics due to their electronic and optical properties. These materials have a bandgap that can be precisely manipulated to control the absorption and emission of light, and this is essential for creating photonic devices like lasers and photodetectors. The ability to engineer the electronic structure of semiconductors like silicon, gallium arsenide (GaAs), and indium phosphide (InP) allows for the design of devices that operate across various wavelengths of light. These capabilities allow us to develop efficient, compact, and versatile photonic components.
Moreover, semiconductor fabrication techniques, inherited from the microelectronics industry, enable the mass production of photonic devices, which supports scalability and integration of photonics with existing electronic systems. If photonics becomes as readily available and easy to use as electronics, it can become more widespread and have an even greater impact on the world.
“We need to buy photonics from a catalog as we do with electronics, have datasheets that work consistently, be able to solder it to a board and integrate it easily with the rest of the product design flow.”
Tim Koene – Chief Technology Officer, EFFECT Photonics
Some differences between electronics and photonics complicate this transition. Silicon, the dominant material in microelectronics, cannot naturally emit laser light from electrical signals. Therefore, making suitable components for integrated photonics often requires using III-V semiconductor materials such as InP and GaAs. The need for these non-silicon semiconductors has made the photonics manufacturing space harder to standardize and streamline than microelectronics.
The Need for a Fabless Model
Increasing the volume of photonics manufacturing is a big challenge. Some photonic chip developers manufacture their chips in-house within their fabrication facilities. This approach has some substantial advantages, giving component manufacturers complete control over their production process.
However, this approach has its trade-offs when scaling up. If a vertically-integrated chip developer wants to scale up in volume, they must make a hefty capital expenditure (CAPEX) in more equipment and personnel. They must develop new fabrication processes as well as develop and train personnel. Fabs are not only expensive to build but to operate. Unless they can be kept at nearly full utilization, operating expenses (OPEX) also drain the facility owners’ finances.
Especially in the case of an optical transceiver market that is not as big as that of consumer electronics, it’s hard not to wonder whether that initial investment is cost-effective. For example, LightCounting estimates that 55 million optical transceivers were sold in 2021, while the International Data Corporation estimates that 1.4 billion smartphones were sold in 2021. The latter figure is 25 times larger than that of the transceiver market.
Electronics manufacturing experienced a similar problem during their 70s and 80s boom, with smaller chip start-ups facing almost insurmountable barriers to market entry because of the massive CAPEX required. Furthermore, the large-scale electronics manufacturing foundries had excess production capacity that drained their OPEX. The large-scale foundries ended up selling that excess capacity to the smaller chip developers, who became fabless. In this scenario, everyone ended up winning. The foundries serviced multiple companies and could run their facilities at total capacity, while the fabless companies could outsource manufacturing and reduce their expenditures.
This fabless model, with companies designing and selling the chips but outsourcing the manufacturing, should also be the way to go for photonics. Instead of going through a more costly, time-consuming process, the troubles of scaling up for photonics developers are outsourced and (from the perspective of the fabless company) become as simple as putting a purchase order in place. Furthermore, the fabless model allows photonics developers to concentrate their R&D resources on the end market. This is the simplest way forward if photonics moves into million-scale volumes.
Investment is Needed for Photonics to Scale like Electronics
Today, photonics is still a long way from achieving the goal of becoming more like electronics in its manufacturing process. Photonics manufacturing chains are not at a point where they can quickly produce millions of integrated photonic devices per year. While packaging, assembly, and testing are only a small part of the cost of electronic systems, they are 80% of the total module cost in photonics, as shown in the figure below.
To scale and become more affordable, the photonics manufacturing chains must become more automated and leverage existing electronic packaging, assembly, and testing methods that are already well-known and standardized. Technologies like BGA-style packaging and flip-chip bonding might be novel for photonics developers who started implementing them in the last five or ten years, but electronics embraced these technologies 20 or 30 years ago. Making these techniques more widespread will make a massive difference in photonics’ ability to scale up and become as available as electronics.
The roadmap for scaling integrated photonics and making it more accessible is clear: it must leverage existing electronics manufacturing processes and ecosystems and tap into the same economy-of-scale principles as electronics. Implementing this roadmap, however, requires more investment in photonics. While such high-volume photonics manufacturing demands a higher upfront investment, the resulting high-volume production lines will drive down the cost per device and open these devices up to a much larger market. That’s the process by which electronics revolutionized the world.
Takeaways
A robust investment is needed to better adapt and integrate microelectronic semiconductor processes into the photonics manufacturing chain to harness the full potential of photonic technologies. Such advancements will not only refine the production scales but also enhance the accessibility and affordability of photonic solutions.
Tags: bandgap, BGA style packaging, Coherent Transceivers, computing power, economy-of-scale principles, EFFECT Photonics, electronic packaging, electronics, Fabless model, Flip-chip bonding, GaAs, high-volume production, III-V semiconductor materials, InP, Investment, lasers, manufacturing chains, microelectronics industry, optical network, Optics, Photodetectors, photonic components, Photonic devices, photonic solutions, Photonics, photonics developers, photonics manufacturing, Semiconductor, semiconductor fabrication techniques, silicon, transistor, wafer-scale technology

The Impact of Photonics on Renewable Energy Systems
The quest for sustainable and clean energy solutions has increasingly turned towards photonics innovations. This technology, centered around the science and engineering of light, can enhance certain renewable system technologies or enable other infrastructure (such as data centers) to get closer to renewable energy sources.
Transfer Data, Not Power
Photonics can play a key role in rethinking the architecture of data centers. Photonics enables a more decentralized system of data centers with branches in different geographical areas connected through high-speed optical fiber links to cope with the strain of data center clusters on power grids.
For example, data centers can relocate to areas with available spare power, preferably from nearby renewable energy sources. Efficiency can increase further by sending data to branches with spare capacity. The Dutch government has already proposed this kind of decentralization as part of its spatial strategy for data centers.
Figure 1: High-speed fiber-optic connections allow data processing and storage to be moved to locations where excess (green) energy is available. Data can be moved elsewhere if power is needed for other purposes, such as charging electric vehicles.
Photonics and Solar Energy
Solar power is gaining popularity as a clean energy source that promises energy independence and environmental benefits while becoming increasingly cost-effective. Although it currently meets only a small portion of global energy needs due to its high costs compared to other technologies, significant advancements have been made thanks to government support and private investment. These developments are steadily positioning solar power as a viable mainstream energy option.
The field of photovoltaics focuses on converting sunlight directly into electricity using materials that exhibit the photovoltaic effect, primarily through solar panels. This field overlaps with the electronics semiconductor industry as both utilize similar materials, such as silicon, and share similar manufacturing techniques.
Photonics and photovoltaics are closely related because both rely on manipulating light with engineered materials. Therefore, some techniques (such as optical waveguides on semiconductors) used for photonic communication systems could also be useful to the photovoltaic sector. Meanwhile, advances in photonic communications, such as new materials that interact efficiently with light, can directly enhance the efficiency and effectiveness of the photovoltaic cells that capture solar energy. For example, developing photonic crystals and other nanostructured materials can lead to solar panels that trap sunlight more effectively.
There are other creative ways to use photonics technology here. For example, Ambient Photonics is a startup that develops low-light solar cells designed to generate power efficiently under indoor lighting conditions. Their technology focuses on providing a clean, sustainable energy source for powering Internet of Things (IoT) devices and other electronics that traditionally rely on batteries. By using their photovoltaic cells, Ambient aims to reduce dependency on traditional battery power and enhance the sustainability of devices through renewable energy integration.
The Impact on Wind Energy Monitoring
PhotonFirst, a Dutch startup, develops advanced photonic sensors specifically designed to enhance the efficiency and maintenance of wind turbines. Their sensors use light to measure critical turbine parameters in real-time, informing turbine operators about the behavior of components such as blades, towers, gearboxes, and cabling. This precise data helps optimize turbine performance, predict maintenance needs, and reduce downtime, thereby improving energy output and extending the lifespan of the turbines.
Initially, fiber optic sensing in wind turbines was only used at the blade roots to monitor load and temperature, and expanding this technology throughout large turbines was costly. However, PhotonFirst is trying to broaden the applications of such sensor systems while also enhancing their performance at a more manageable cost.
Takeaways
The synergy between photonics and renewable energy can lead to some important advances in the pursuit of sustainable power. Through advancements in solar energy conversion, wind energy monitoring, and moving data centers closer to renewable energy sources, photonics can help change how we generate, distribute, and utilize energy.
Tags: advances, Ambient Photonics, clean energy, data centers, Decentralization, EFFECT Photonics, electricity, energy output, engineering, high-speed optical fiber, infrastructure, innovations, Internet of Things (IoT) devices, lifespan, light, low-light solar cells, maintenance, manufacturing techniques, materials, nanostructured materials, optimization, photonic communication systems, photonic crystals, Photonic Sensors, Photonics, photovoltaics, power grids, real-time data, renewable energy, renewable system technologies, science, Semiconductor Industry, solar panels, Solar power, spatial strategy, Sustainable, sustainable power, synergy, technology, turbine parameters, wind energy monitoring

The Evolution of Data Center Interconnects
The digital era’s rapid expansion requires advances in data center interconnects (DCIs) to support the burgeoning demands of cloud computing and data architecture.
For the sake of this article, let’s think broadly about three categories of data center interconnects based on their reach and location in relation to the data center:
- Intra-data center interconnects (< 2km)
- Campus data center interconnects (<10km)
- Metro data center interconnects (<100km)
As data centers become more complex and AI increases its demands on them, the intra-data center sector is increasing in complexity and variety, but that will be the subject of a different article.
Coherent optical technology has established itself as the go-to solution for interconnecting data centers over longer distances, while direct detect continues to dominate the intra data center sector.
The Increasing Importance of Decentralizing Data Centers
Smaller data centers placed locally have the potential to minimize latency, overcome inconsistent connections, and store and compute data closer to the end-user. These benefits are causing the global market for edge data centers to explode, with PwC predicting that it will more than triple from $4 billion in 2017 to $13.5 billion in 2024.
Meanwhile, the demands of new AI infrastructure are pushing data center power consumption to such a degree that the electrical power grid might be unable to sustain it. Longer data center interconnects enable a more decentralized system of data centers with branches in different geographical areas connected through high-speed optical fiber links to cope with the strain of data center clusters on power grids.
These trends push the data center industry to look for interoperable solutions for longer interconnects over distances of 80 to 120 km. The advances in electronic and photonic integration allowed coherent technology for metro DCIs to be miniaturized into QSFP-DD and OSFP form factors. This progress allowed the Optical Internetworking Forum (OIF) to create the 400ZR and ZR+ standards for 400G DWDM pluggable modules. With small enough modules to pack a router faceplate densely, the datacom sector could profit from a 400ZR solution for high-capacity data center interconnects of up to 80km.
After the success of 400ZR standardization, the data center industry and the OIF are starting to promote an 800ZR standard to enable the next generation of interconnects. In OFC 2024, we started seeing some demos from several vendors and the OIF on this new standards initiative.
Direct Detect and Coherent Technology in DCIs
For many years, there has been an expectation that the increasing capacity demands on data centers would reach a point where coherent technology would deliver a lower cost per bit and power per bit than direct detect technologies.
However, direct detect technology (both NRZ and PAM-4) continues to successfully overcome the challenges of coherent technology and will continue to dominate the intra-DCI space (also called data center fabric) in the coming years. In this space, links span less than two kilometers, and for particularly short links (< 300 meters), affordable multimode fiber (MMF) is frequently used.
The tendency towards data center decentralization will also impact the inside of data centers. Larger data centers can require longer interconnects from one building to another. So in a way, some links inside the data centers could become more like campus DCI links, which require single-mode fiber solutions, and in those spaces coherent might have a better chance of becoming competitive.
Takeaways
The decision to use coherent or direct detection technology for DCIs boils down to reach and capacity needs. Coherent is already established as the solution for metro DCIs, and efforts are underway towards an 800ZR standard to follow up on the highly successful 400ZR interconnect standard. With the move to Terabit speeds and scaling production volumes, coherent was also expected to become competitive inside the data center, but for now, direct detect technology continues to dominate that space.
From basic networking to sophisticated, AI-enhanced architectures, DCIs have become the backbone of the digital infrastructure, enabling the seamless operation of global cloud services and data centers.
Tags: 400ZR, 800ZR, AI, Campus interconnects, cloud computing, coherent optical technology, data architecture, Data center interconnects, DCIs, decentralizing data centers, digital era, Direct Detect, edge data centers, EFFECT Photonics, electrical power grid, global cloud services, intra-data center, metro DCIs, multimode fiber, NRZ, OFC 2024, PAM-4, photonic integration, power consumption, single-mode fiber, Terabit speeds

Coherent Optics Explained
In the always-evolving world of communications, coherent optics deeply improved our ability to transmit at high capacity over vast distances. Coherent optical fiber communications were studied extensively in the 1980s to improve optical transmission reach, but the high complexity of receivers made the technology too costly to deploy at the time. After 2005, a technological breakthrough made coherent systems more economically viable, and they have since become a big part of optical networks. Coherent technology has also slowly but surely spread out from the network core and become more widely available on the network edge, a transition that EFFECT Photonics believes in.
This article delves into the fundamental principles behind coherent optics and why it’s become indispensable in modern telecommunications infrastructure.
The Basics of Coherent Transmission
Let’s start by discussing some basic concepts.
An optical transceiver is a device that converts electrical signals into optical signals for fiber transmission and vice versa when the optical signal is received. It interfaces between fiber optical networks and electronic computing devices such as computers, routers, and switches.
There are a few ways to encode electrical data into light pulses. Perhaps the most basic way is called intensity modulation/direct detection (IM-DD). That’s a fancy way of saying that the same digital 0s and 1s of your electrical signal will be imposed directly on your light signal. This method is akin to turning a flashlight on and off to send a Morse code message.
The advantage of IM-DD transmission is that its simplicity makes the transceiver design simpler and more affordable. However, there are limitations to how much data and distance this approach can cover.
Coherent transmission improves the range and capacity of data transmission by encoding information in other properties of a light wave. To summarize the key light properties:
- the intensity is essentially the height of the light wave
- the phase is the position of the wave in its cycle
- the polarization is the orientation of the wave
While IM-DD transmission only encodes information in the intensity of a light wave, coherent transmission encodes information into all three properties, allowing coherent systems to send far more data bits in a single light wave. A receiver’s ability to read the phase and polarization also makes the optical signal more tolerant to noise, which expands the potential transmission distance. The following video from our YouTube channel explains briefly how this works in a more graphical way.
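One compact way to picture these degrees of freedom is to write the optical field on each polarization in textbook form (a simplified expression for illustration, not a formula from any particular standard):

```latex
\[
E_{x,y}(t) = A_{x,y}(t)\,\cos\!\big(\omega t + \phi_{x,y}(t)\big)
\]
```

IM-DD only modulates the amplitude A(t) of a single effective channel, while coherent transmission independently modulates the amplitude and the phase φ(t) on both the x and y polarizations, multiplying the number of bits that can ride on each transmitted symbol.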
The Role of a DSP and Laser in Coherent Systems
In a coherent system, a sophisticated digital signal processor (DSP) encodes electrical signals onto light and decodes them back upon reception; it is the electronic heart of the system. The DSP does much more than that: it compensates for transmission impairments in the fiber, performs analog-to-digital signal conversions (and vice versa), corrects errors, encrypts data, and monitors performance. Recently, DSPs have taken on more advanced functions, such as probabilistic constellation shaping or dynamic bandwidth allocation, which enable improved reach and performance.
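To make the encode/decode role of the DSP a little more concrete, here is a deliberately simplified sketch of one small piece of that job: mapping bits onto the phase of a carrier (QPSK), adding a stand-in for link impairments, and recovering the bits at the receiver. It is illustrative only and does not represent any vendor’s DSP implementation.

```python
# Illustrative only: encode bits in the phase of a carrier (QPSK) and recover
# them after a small phase offset and noise, mimicking one task of a coherent DSP.
import numpy as np

rng = np.random.default_rng(42)
bits = rng.integers(0, 2, size=20)                 # random payload bits

# Gray-coded QPSK: each pair of bits selects one of four carrier phases.
mapping = {(0, 0): np.exp(1j * np.pi / 4),
           (0, 1): np.exp(1j * 3 * np.pi / 4),
           (1, 1): np.exp(1j * 5 * np.pi / 4),
           (1, 0): np.exp(1j * 7 * np.pi / 4)}
symbols = np.array([mapping[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)])

# "Channel": a constant phase offset plus noise, a stand-in for laser phase
# error and other link impairments.
rx = symbols * np.exp(1j * 0.1) + 0.05 * (rng.standard_normal(symbols.size)
                                          + 1j * rng.standard_normal(symbols.size))

# Receiver DSP: estimate and remove the common phase offset (data-aided here,
# purely for illustration), then decide on the nearest constellation point.
phase_est = np.angle(np.mean(rx / symbols))
rx_corrected = rx * np.exp(-1j * phase_est)
constellation = np.array(list(mapping.values()))
bit_pairs = list(mapping.keys())
decisions = [bit_pairs[np.argmin(np.abs(point - constellation))] for point in rx_corrected]
recovered = np.array([b for pair in decisions for b in pair])
print("bit errors:", int(np.sum(recovered != bits)))
```

A real coherent DSP does the same kind of mapping and recovery on two polarizations at tens of gigabaud, alongside dispersion compensation, error correction, and the other functions listed above.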
The tunable laser is also a core component of all these optical communication systems, both IM-DD and coherent. The laser generates the optical signal encoded and sent over the optical fiber. Thus, the purity and strength of this signal will massively impact the bandwidth and reach of the communication system. For example, since coherent systems encode information in the phase of the light, the purer the light source is, the more information it can transmit.
The Miniaturization of Coherent Optics
In the past, coherent communications were the domain of complicated benchtop systems with many discrete components that were cumbersome to connect. When Nortel (later Ciena) introduced the first commercial coherent transponder in 2008, the device was a bulky, expensive line card with discrete components distributed on multiple circuit boards. Such technology was reserved for premium long-distance links where performance is everything.
As time went by, coherent devices got smaller and consumed less power. By 2018, most coherent line card transponder functions could be miniaturized into CFP2 transceiver modules that were the size of a pack of cards and could be plugged into systems with pluggable line sides. QSFP modules followed a couple of years later; they were essentially the size of a large USB stick and could be plugged directly into routers. This reduction in size, power, and cost, as well as the ever-rising data demands of telecom networks, has made coherent technology increasingly viable in metro and access networks.
Takeaways
Coherent optics transformed telecommunications, marrying complex theoretical foundations with practical engineering advancements to substantially enhance data transmission capacities and distances. The journey of coherent systems from benchtop experiments to the backbone of our digital infrastructure is an excellent example of how progress and evolution in optical communications work, always driven by the ever-increasing demands of our interconnected world.
EFFECT Photonics, with its focus on integrating advanced technologies like DSPs and tunable lasers into compact, efficient transceivers, strongly believes in making coherent optics more accessible and bringing them deeper into the network edge.
Tags: 1980s, 2005, always-evolving world of communications, breakthrough, coherent optics, cost-effective, deploy, economically viable, EFFECT Photonics, high capacity, high complexity, improve, improved, network core, network edge, optical fiber communications, optical networks, optical transmission reach, Photonics, receivers, spread out, studied extensively, systems, technology, transition, transmit, vast distances, widely availableTransceiver Customization for Flexible Access Networks
Let’s be honest: not every optical network problem can be solved by scaling up capacity.…
Let’s be honest: not every optical network problem can be solved by scaling up capacity. It’s not always cost-effective, and it’s not always sustainable. Providers and operators who want to become market leaders must scale up while also learning to allocate their existing network resources most efficiently and dynamically. For example, they must monitor their network performance frequently, providing more energy and capacity to high-traffic links while reducing power and capacity in areas with little traffic. They must find hardware that fits the rest of their network devices to a tee instead of equipment that is over- or under-specified.
Operators usually change settings in the higher network layers to adjust their networks dynamically. For example, artificial intelligence in network layer management will become a major factor in reducing the energy consumption of telecom networks. However, as networks get increasingly complex, operators need more degrees of freedom and knobs to adjust. They must customize the network and physical layers to fit the network best.
Fortunately, the new generation of pluggable transceivers gives operators more customization options than ever to change physical layer settings and adapt to these changing and growing network requirements and use cases. In this article, we will provide examples from our own pluggable transceivers and NarroWave technology.
Customization for Remote Diagnostics and Management
NarroWave sets up a separate low-frequency communication channel between two modules. This channel allows the headend module to remotely modify certain aspects of the tail-end module, effectively enabling several remote management and diagnostics options.
For example, the operator can remotely measure metrics such as the transceiver temperature and power transmitted and received. These metrics can provide a quick and useful health check of the link. The headend module can also remotely read alarms for low/high values of these metrics.
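As an illustration of what such a remote health check involves on the data side, the sketch below decodes temperature and optical power from a raw copy of an SFF-8472-style digital diagnostics page (A2h). The byte offsets and scale factors follow the SFF-8472 convention; how the page is actually retrieved from a remote tail-end module (NarroWave, vendor tooling, or host I2C access) is platform-specific and not shown here.

```python
# Illustrative only: decode a few digital-diagnostics values from a raw copy of
# an SFF-8472 style diagnostics page (A2h). Fetching the page from the module
# is platform-specific and outside the scope of this sketch.
import math
import struct

def decode_ddm(page_a2: bytes) -> dict:
    """Decode temperature and optical power from an A2h diagnostics page."""
    temp_c = struct.unpack_from(">h", page_a2, 96)[0] / 256.0   # signed, 1/256 degC per LSB
    tx_uw = struct.unpack_from(">H", page_a2, 102)[0] * 0.1     # Tx power, 0.1 uW per LSB
    rx_uw = struct.unpack_from(">H", page_a2, 104)[0] * 0.1     # Rx power, 0.1 uW per LSB
    to_dbm = lambda uw: 10 * math.log10(uw / 1000.0) if uw > 0 else float("-inf")
    return {"temperature_C": temp_c,
            "tx_power_dBm": to_dbm(tx_uw),
            "rx_power_dBm": to_dbm(rx_uw)}

# Example with a fabricated page: 35 degC, 0.5 mW transmitted, 0.2 mW received.
page = bytearray(256)
struct.pack_into(">h", page, 96, 35 * 256)
struct.pack_into(">H", page, 102, 5000)   # 5000 x 0.1 uW = 0.5 mW (~ -3 dBm)
struct.pack_into(">H", page, 104, 2000)   # 0.2 mW (~ -7 dBm)
print(decode_ddm(bytes(page)))
```

Comparing these decoded values against the module’s alarm thresholds is exactly the kind of quick, remote health check described above.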
Even after buying the module, our customers can customize several variables, such as the low/high transmitter power levels, NarroWave self-tuning variables, and memory behavior when turning on/off, as well as the temperature and power flags and alarms. Customers also have the freedom to rewrite administrative information such as the vendor name, organization unique identifier (OUI), serial and part numbers, and passwords to access certain memory registers. These features are useful for system integrators and OEM sellers.
Customization for Easier Installation
Some of our transceiver features are customizations that make installation tasks easier. For example, setting up a network sometimes requires fibers to be repatched, which creates a loss of signal (LOS). Our NarroWave procedures include a customizable Ignore LOS (in seconds) flag that holds up the self-tuning scan and allows operators to perform maintenance duties without causing error messages at the host equipment.
These remote diagnostics and management features can eliminate certain truck rolls and save more operational expenses. They are especially convenient when dealing with very remote and hard-to-reach sites (e.g., an underground installation) that require expensive truck rolls.
Customization for Energy Sustainability
To talk about the impact of transceiver customization on energy sustainability, we first must review the concept of performance margins. This number is a vital measure of received signal quality and determines how much room there is for the signal to degrade without impacting the error-free operation of the optical link.
In the past, network designers have played it safe, maintaining large margins to ensure robust network operation in different conditions. However, these higher margins usually require higher transmitter power and power consumption. Through the remote diagnostics provided by this new generation of pluggable transceivers, network management software can build tighter, more accurate optical link budgets in real time that require lower residual margins. This could lower the required transceiver powers and save valuable energy.
Another related sustainability feature is deciding whether to operate in low- or high-power mode depending on the optical link budget and fiber length. For example, if the transceiver needs to operate at its maximum 10G speed, it will likely need a higher performance margin and output power. However, if the operator uses the transceiver for just a 1G link, the transceiver can operate with a smaller residual margin and use a lower power setting. The transceiver uses energy more efficiently and sustainably by adapting to these circumstances.
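As a simplified illustration of this trade-off, the residual margin follows directly from the transmit power, the fiber loss, and the receiver sensitivity. All figures in the sketch below are made-up example values, not a real link budget:

```python
# Illustrative link-budget arithmetic with made-up numbers; real budgets
# also include connector losses, penalties, and aging allowances.

def residual_margin(tx_power_dbm, fiber_km, rx_sensitivity_dbm,
                    fiber_loss_db_per_km=0.25, extra_losses_db=2.0):
    rx_power = tx_power_dbm - fiber_loss_db_per_km * fiber_km - extra_losses_db
    return rx_power - rx_sensitivity_dbm  # dB of room left before errors appear

# A short or lower-speed link can tolerate a smaller margin, so a lower-power
# transmitter setting may already be sufficient.
print(residual_margin(tx_power_dbm=0.0, fiber_km=20, rx_sensitivity_dbm=-24))   # ~17 dB
print(residual_margin(tx_power_dbm=-4.0, fiber_km=20, rx_sensitivity_dbm=-24))  # ~13 dB
```

In this toy example, the lower-power setting still leaves a comfortable margin on the short link, which is exactly the situation where dialing the transmitter down saves energy.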
Takeaways
Thanks to the advances in photonics and electronic integration, the new generation of pluggable transceivers has packaged many knobs and variables that previously required additional, specialized hardware that increased network complexity and costs. These advances give designers and operators more degrees of freedom than ever.
There are customizations that enable simpler remote management and diagnostics or easier installations. There are also customizable power settings that help the transceiver operate more sustainably. These benefits make access networks simpler, more affordable, and more sustainable to build and operate.
Tags: access networks, alarms, capacity, channel, cost-effective, customization options, customize, dynamically, EFFECT Photonics, efficiently, electronic integration, energy sustainability, fiber length, hardware, health check, high-traffic links, installation tasks, low-traffic areas, management, metrics, NarroWave technology, network devices, network management software, network resources, operational expenses, operators, optical network, performance margins, Photonics, Pluggable Transceivers, power, power consumption, providers, remote diagnostics, remote sites, Scaling up, Signal degradation, Sustainable

Coherent Transceivers at a Low Latency
Latency, the time it takes for data to travel from its source to its destination, is a critical metric in modern networks. Reducing latency is paramount in the context of 5G and emerging technologies like edge computing.
Smaller data centers placed locally (also called edge data centers) have the potential to minimize latency, overcome inconsistent connections, and store and compute data closer to the end-user. Various trends are driving the rise of the edge cloud:
- 5G technology and the Internet of Things (IoT): These mobile networks and sensor networks need low-cost computing resources closer to the user to reduce latency and better manage the higher density of connections and data.
- Content delivery networks (CDNs): The popularity of CDN services continues to grow, and most web traffic today is served through CDNs, especially for major sites like Facebook, Netflix, and Amazon. By using content delivery servers that are more geographically distributed and closer to the edge and the end user, websites can reduce latency, load times, and bandwidth costs as well as increase content availability and redundancy.
- Software-defined networks (SDN) and Network function virtualization (NFV): The increased use of SDNs and NFV requires more cloud software processing.
- Augmented and virtual reality applications (AR/VR): Edge data centers can reduce streaming latency and improve the performance of AR/VR applications.

Cloud-native applications are driving the construction of edge infrastructure and services. However, they cannot distribute their processing capabilities without considerable investments in real estate, infrastructure deployment, and management.
Several of these applications require lower latencies than before, and centralized cloud computing cannot deliver those data packets quickly enough. Let’s explore what changes are happening in edge networks to meet these latency demands and what impact that will have on transceivers.
The Different Latency Demands of the Cloud Edge
As shown in Table 1, a data center at a town or suburb aggregation point could halve the latency compared to a centralized hyperscale data center. Enterprises with their own data center on-premises can reduce latencies by 12 to 30 times compared to hyperscale data centers.
Table 1: Types of Edge Data Centres

| Type of Edge | Data center | Location | Number of DCs per 10M people | Average Latency | Size |
|---|---|---|---|---|---|
| On-premises edge | Enterprise site | Businesses | NA | 2-5 ms | 1 rack max |
| Network edge (Mobile): Tower edge | Tower | Nationwide | 3000 | 10 ms | 2 racks max |
| Network edge (Mobile): Outer edge | Aggregation points | Town | 150 | 30 ms | 2-6 racks |
| Network edge (Mobile): Inner edge | Core | Major city | 10 | 40 ms | 10+ racks |
| Regional edge | Regional | Major city | 100 | 50 ms | 100+ racks |
| Not edge | Hyperscale | State/national | 1 | 60+ ms | 5000+ racks |
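For context, the latency figures above bundle processing, queueing, and round trips, but fiber propagation alone already sets a floor of roughly 5 microseconds per kilometre (light travels at about c/1.5 in glass). The quick estimate below, using assumed distances, shows why physically closer data centers win on latency before any other delay is even counted:

```python
# Rough propagation-delay floor: light in fiber travels at about c / 1.5,
# i.e. roughly 5 microseconds per kilometre one way. Distances are assumptions.
C_KM_PER_MS = 300_000 / 1.5 / 1000  # ~200 km of fiber per millisecond

for label, km in [("on-premises edge", 1), ("town aggregation point", 100),
                  ("regional data center", 500), ("hyperscale data center", 1500)]:
    one_way_ms = km / C_KM_PER_MS
    print(f"{label:>24}: ~{one_way_ms:.2f} ms one-way propagation delay")
```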
Cisco estimates that 85 zettabytes of useful raw data were created in 2021, but only 21 zettabytes were stored and processed in data centers. Edge data centers can help close this gap. For example, industries or cities can use edge data centers to aggregate all the data from their sensors. Instead of sending all this raw sensor data to the core cloud, the edge cloud can process it locally and turn it into a handful of performance indicators. The edge cloud can then relay these indicators to the core, which requires a much lower bandwidth than sending the raw data.
Using Coherent Technology in the Edge Cloud
As edge data centers became more common, the issue of interconnecting them became more prominent. Direct detect technology had been the standard in data center interconnects. However, the distances greater than 50km and bandwidths over 100Gbps required by modern edge data center interconnects called for external amplifiers and dispersion compensators that increased the complexity of network operations.
At the same time, advances in electronic and photonic integration allowed longer-reach coherent technology to be miniaturized into QSFP-DD and OSFP form factors. This progress allowed the Optical Internetworking Forum (OIF) to create the 400ZR and ZR+ standards for 400G DWDM pluggable modules. With modules that are small enough to pack a router faceplate densely, the datacom sector could profit from a 400ZR solution for high-capacity data center interconnects of up to 80km. The rise of 100G ZR technology takes this philosophy a step further, with a product aimed at spreading coherent technology further into edge and access networks.
How Does Latency in the Edge Affect DSP Requirements?
The traditional disadvantage of coherent technology vs direct detection is that coherent signal processing takes more time and computational resources and, therefore, introduces more latency in the network. Adapting to the latency requirements of the network edge, especially over shorter link distances, might require digital signal processors (DSPs) to adopt a “lighter” version of the signal processing normally used in coherent technology.
Let’s give an example of how DSPs could behave differently in these cases. The quality of the light signal degrades as it travels through an optical fiber due to a process called dispersion, the same phenomenon that makes a prism split white light into several colors. The fiber also adds other distortions due to nonlinear optical effects.
These effects get worse as the input power of the light signal increases, leading to a trade-off. You might want more power to transmit over longer distances, but the nonlinear distortions also become larger, which defeats the point of using more power. The DSP performs several operations on the light signal to try to offset these dispersion and nonlinear distortions.
However, shorter-reach connections require less dispersion compensation, presenting an opportunity to streamline the processing done by a DSP. A lighter coherent implementation could reduce the use of dispersion compensation blocks. This significantly lowers system power consumption and latency too.
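A commonly used rule of thumb illustrates how strongly this scales with distance: the pulse spreading caused by chromatic dispersion grows linearly with link length, and so does the number of equalizer taps the DSP needs to undo it. The sketch below uses illustrative parameter values (standard single-mode fiber, a roughly 100G-class symbol rate) and should be read as an order-of-magnitude estimate only:

```python
# Rule-of-thumb estimate of the chromatic-dispersion memory a coherent DSP
# must equalize; parameter values are illustrative assumptions.
D = 17e-6             # fiber dispersion, s/m^2 (i.e. 17 ps/nm/km)
wavelength = 1.55e-6  # m
c = 3e8               # m/s
baud = 32e9           # symbol rate for a ~100G-class coherent signal

def cd_equalizer_taps(link_km):
    # Pulse spread ~ D * L * (signal bandwidth in wavelength), with bandwidth ~ lambda^2 * baud / c
    spread = D * (link_km * 1e3) * wavelength**2 * baud / c
    return 2 * baud * spread  # taps at 2 samples per symbol

for km in (10, 40, 80):
    print(f"{km:3d} km link: ~{cd_equalizer_taps(km):.0f} CD equalizer taps")
```

Going from an 80 km metro link to a 10 km edge link cuts the estimated equalizer length by almost an order of magnitude, which is the kind of saving a “lighter” coherent DSP can exploit.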
Another way to reduce processing and latency in shorter-reach data center links is to use less powerful forward error correction (FEC) in DSPs. You can learn more about FEC in one of our previous articles.
Takeaways
The shift towards edge data centers tries to address the low latency requirements of modern telecom networks. By decentralizing data storage and processing closer to the point of use, edge computing not only reduces latency but also enhances the efficiency and reliability of network services across various applications, from IoT and 5G to content delivery and AR/VR experiences.
The use of coherent transceiver technology helps edge networks span longer reaches with higher capacity, but it also comes with the trade-off of increased latency due to more signal processing from the DSP. This scenario means that DSPs will have to reduce the use of certain processing blocks, such as dispersion compensation and FEC, to meet the specific latency requirements of edge computing.
Tags: 5G, applications, AR/VR, CDNs, cloud, data, data centers, demands, DSPs, Edge computing, EFFECT Photonics, infrastructure, latency, Networks, NFV, processing, SDN, services, technology, Transceivers, trends

Making Smaller Lasers at a Big Scale
The world is moving towards tunability. The combination of tunable lasers and dense wavelength division multiplexing (DWDM) allows datacom and telecom industries to expand their network capacity without increasing their existing fiber infrastructure. Furthermore, the miniaturization of coherent technology into pluggable transceiver modules has finally enabled the widespread implementation of IP over DWDM solutions.
With the increasing demand for coherent transceivers, many companies have performed acquisitions and mergers that allow them to develop transceiver components internally and thus secure their supply. LightCounting forecasts show that while this consolidation will decrease the sales of modulator and receiver components, the demand for tunable lasers will continue to grow. The forecast expects the tunable laser market for transceivers to reach a size of $400M in 2026.
East Asia is a huge driver of rising tunable laser sales. With initiatives like the “Broadband China” strategy and significant investments in 5G and beyond, the demand for advanced optical components, including tunable lasers, has surged in China. Japan and South Korea also have a long history of being early adopters of new optical telecom innovations. After all, Japanese companies are at the forefront of research and development in tunable laser technology, and South Korea has built highly future-proofed 5G networks that are ahead of the implementations in Europe and North America. To meet the demands of this region, the industry must get better at making highly integrated tunable lasers at scale.
Making New and Smaller Lasers
The impact of small and integrated lasers extends beyond mere size considerations; it crucially contributes to enhancing power efficiency. Smaller laser designs inherently operate at lower voltages and currents, offering improved heat dissipation and minimizing coupling losses. Photonic integration helps achieve these reductions, maximizing efficiency by consolidating multiple functions onto a single chip.
The journey towards 100G coherent technology in access networks requires compact and power-efficient coherent pluggables in the QSFP28 form factor and, with it, compact and power-efficient tunable lasers that fit this form factor.
Monolithic integration of all tunable laser functions allowed EFFECT Photonics to develop a novel pico-ITLA (pITLA) module that will become the world’s smallest ITLA for coherent applications. The pITLA is the next step in tunable laser integration, including all laser functions in a package with just 20% of the volume of a nano-ITLA module. The figure below shows that even a standard matchstick dwarfs the pITLA in size.
EFFECT Photonics’ laser solution is unique because it enables a widely tunable laser for which all its functions, including the wavelength locker, are monolithically integrated on a single chip. This setup is ideal for reducing power consumption and scaling into high production volumes.
The Economics of Scale
As innovative as these new, small lasers can be, they will have little impact if they cannot be manufactured at a high enough volume to satisfy the demands of mobile and cloud providers and drive down the cost per device.
This economy-of-scale principle is the same one behind electronics manufacturing, and the same must be applied to photonics. The more optical components we can integrate into a single chip, the more the price of each component can decrease. The more optical System-on-Chip (SoC) devices can go into a single wafer, the more the price of each SoC can decrease.
Researchers at the Technical University of Eindhoven and the JePPIX consortium have done some modelling to show how this economy of scale principle would apply to photonics. If production volumes can increase from a few thousand chips per year to a few million, the price per optical chip can decrease from thousands of Euros to mere tens of Euros. This must be the goal for the optical transceiver and tunable laser industry.
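As a back-of-the-envelope illustration of this principle, consider spreading fixed design and mask-set costs over growing wafer volumes. Every number below is a hypothetical assumption rather than actual foundry pricing, but the trend matches the modelling described above:

```python
# Back-of-the-envelope economics; every number here is a hypothetical assumption,
# not actual foundry pricing.
NRE_COST = 5_000_000       # one-off design and mask-set costs (EUR)
WAFER_COST = 10_000        # cost of one processed wafer (EUR)
CHIPS_PER_WAFER = 4_000    # small PICs on a large wafer
YIELD = 0.6                # fraction of good chips per wafer

def cost_per_good_chip(annual_volume):
    wafers = annual_volume / (CHIPS_PER_WAFER * YIELD)
    return (NRE_COST + wafers * WAFER_COST) / annual_volume

for volume in (5_000, 100_000, 2_000_000):
    print(f"{volume:>9,} chips/year -> ~{cost_per_good_chip(volume):,.0f} EUR per chip")
```

With these assumed inputs, the per-chip cost falls from roughly a thousand Euros at a few thousand units per year to single-digit Euros at millions of units, which is the shape of the curve the JePPIX modelling describes.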
Learning to Scale from Electronics
A key way to improve photonics manufacturing is to learn from electronics packaging, assembly, and testing methods that are already well-known and standardized. After all, building a new special production line is much more expensive than modifying an existing production flow.
One electronic technique essential to transfer into photonics is ball-grid array (BGA) packaging. BGA-style packaging has grown popular among electronics manufacturers over the last few decades. It places the chip connections under the chip package, allowing more efficient use of space in circuit boards, a smaller package size, and better soldering.
Another critical technique to move into photonics is flip-chip bonding. In this process, solder bumps are deposited on the chip in the final fabrication step; the chip is then flipped over and aligned with the circuit board for easier soldering.
Takeaways
As the demand for data and telecommunication services surges globally, the industry is moving towards more compact, power-efficient, and scalable laser solutions that integrate all necessary functions on a single chip. With East Asia driving the demand for these advanced components through their ambitious broadband and 5G initiatives, the challenge now lies in applying the economies of scale principle from electronics manufacturing to photonics. This approach would dramatically reduce costs and enable the mass adoption of these technologies.
Tags: 100G coherent technology, 5G Networks, access networks, acquisitions, ball-grid array, Broadband China strategy, Coherent technology, datacom, Dense wavelength division multiplexing, East Asia, economy-of-scale principle, EFFECT Photonics, electronics manufacturing, electronics packaging, Fiber Infrastructure, Flip-chip bonding, high production volumes, highly integrated tunable lasers, IP over DWDM solutions, JePPIX consortium, LightCounting forecasts, mergers, Network capacity, optical components, photonic integrated chip, photonic integration, Photonics, Pico-ITLA Module, Pluggable transceiver modules, power consumption, Power Efficiency, QSFP28 form factor, System-on-Chip devices, Technical University of Eindhoven, telecom industries, The world, transceiver components, tunable laser market, Tunable laser technology, tunable lasers, wafer, wavelength locker

What are Access Networks and Why Should We Care?
The evolution of telecommunications has led to the development of complex network architectures designed to meet data transmission needs across the globe. In a very simplified way, we could segment a typical network architecture into three primary sections:
- The core network: The backbone of the network, designed to provide high-speed, long-distance transmission across regions or countries.
- The metro network: This network binds the access network to the core network and is responsible for aggregating data from multiple access networks. It serves as an intermediary that enables the flow of data between local areas and the high-capacity backbone of the telecom network.
- The access network: This network is the link between end-users and the broader telecommunications network.
Each of these segments plays a role in the network, but they differ significantly in their functions, design, and requirements. This article focuses on the telecom access network, exploring its unique characteristics, how it stands apart from metro and core networks, and the specific requirements for transceivers that distinguish it from other segments of the telecom network.
The Access Network Explained
The telecom access network is the critical link between end-users and the broader telecommunications network. It is the segment of the network that extends from the telecom service provider’s central office to the individual subscribers, whether they be residential homes, businesses, or mobile users. The primary function of the access network is to provide a direct pathway for users to access telecommunications services such as the Internet, telephone, and television.
The access network is characterized by its proximity to end-users and its focus on reaching as many subscribers as possible. It encompasses various technologies, including copper wires (DSL), fiber optics (FTTH, FTTB), coaxial cables, and wireless connections (Wi-Fi, cellular networks), each tailored to different service requirements and deployment scenarios.
The Requirements of the Access vs Core and Metro Networks
The distinction between the access network and the other sections of a telecom network, namely the metro and core networks, lies primarily in their operational focus and scale. The core network handles the highest volume of data, employing advanced technologies and high-capacity infrastructure to ensure seamless data transmission across great distances. The metro network is designed to handle a higher capacity than the access network because it consolidates traffic from many users. The access network segment is characterized by its vast and dispersed nature, aimed at covering as much geographical area as possible to connect a large number of subscribers.
Metro and core networks are engineered for high capacity and long distance to manage the vast amounts of data traversing the global telecommunications infrastructure. They employ sophisticated routing, switching, and multiplexing technologies to optimize data flow and ensure reliability and quality of service over long distances. In contrast, the access network prioritizes accessibility, flexibility, and cost-effectiveness, aiming to deliver services to as many users as possible with varying bandwidth requirements.
What Do Transceivers in the Access Network Need?
Transceivers, devices that combine transmission and reception capabilities in a single unit, are critical components of all sections of a telecom network. However, the requirements for transceivers in the access network differ significantly from those in the metro and core networks.
- Range and Power Consumption: Access network transceivers often operate over shorter distances than their metro and core counterparts. They are designed to be power-efficient to support a dense deployment of endpoints with varying ranges.
- Flexibility and Scalability: Given the diverse technologies and deployment scenarios within the access network, transceivers must be highly flexible and scalable. This flexibility allows service providers to upgrade network capabilities or adapt to new standards without significant infrastructure overhaul.
- Cost Sensitivity: Cost is a critical factor in the access network due to the need to deploy a vast number of transceivers to connect individual subscribers.
- Environmental Robustness: Access network transceivers are often subjected to harsher environmental conditions than those deployed in controlled environments like data centers or network hubs. They will often have industrial temperature (I-temp) ratings.
Takeaways
The telecom access network connects end-users to the vast world of telecommunications services, distinguishing itself from the metro and core networks through its focus on accessibility and subscriber reach. The access network demands specific solutions for transceiver components, such as shorter ranges, lower power consumption and capacity, lower costs, and industrial hardening.
Tags: access network, businesses, central office, coaxial cables, copper wires, core network, Data transmission, EFFECT Photonics, end-users, fiber optics, high-speed, Internet, long-distance, metro network, mobile users, network architectures, residential homes, service requirements, telecom service provider, telecommunications services, telephone, television, The evolution of telecommunications, wireless connections

What’s New Inside a 100G ZR Module?
In optical access networks, the 400ZR pluggables that have become mainstream in datacom applications are too expensive and power-hungry. Therefore, operators are strongly interested in 100G pluggables that can house coherent optics in compact form factors, just like 400ZR pluggables do. The industry is labeling these pluggables as 100ZR.
A recently released Heavy Reading survey revealed that over 75% of operators surveyed believe that 100G coherent pluggable optics will be used extensively in their edge and access evolution strategy. However, this interest had yet to materialize into a 100ZR market because no affordable or power-efficient products were available. The most the industry could offer was 400ZR pluggables that were “powered-down” for 100G capacity.
By embracing smaller and more customizable light sources, new optimized DSP designs, and high-volume manufacturing capabilities, we can develop native 100ZR solutions with lower costs that better fit edge and access networks.
Smaller Tunable Lasers
Since the telecom and datacom industries want to pack more and more transceivers on a single router faceplate, integrable tunable laser assemblies (ITLAs) must maintain performance while moving to smaller footprints and lower power consumption and cost.
Fortunately, such ambitious specifications became possible thanks to improved photonic integration technology. The original 2011 ITLA standard from the Optical Internetworking Forum (OIF) was 74mm long by 30.5mm wide. By 2015, most tunable lasers shipped in the OIF’s micro-ITLA form factor, which cut the original ITLA footprint in half. In 2021, the nano-ITLA form factor designed for QSFP-DD and OSFP modules had once again cut the micro-ITLA footprint almost in half.
There are still plenty of discussions over the future of ITLA packaging to fit the QSFP28 form factors of these new 100ZR transceivers. EFFECT Photonics has developed a solution that monolithically integrates all tunable laser functions (including the wavelength locker) into a novel pico-ITLA (pITLA) module that will become the world’s smallest ITLA for coherent applications. The pITLA is the next step in tunable laser integration, including all laser functions in a package with just 20% of the volume of a nano-ITLA module. The figure below shows that even a standard matchstick dwarfs the pITLA in size.
More Efficient DSPs
The 5-Watt power requirement of 100ZR in a QSFP28 form factor is a significant reduction compared to the 15-Watt specification of 400ZR transceivers in a QSFP-DD form factor. Achieving this reduction requires a digital signal processor (DSP) specifically optimized for the 100G transceiver.
Current DSPs are designed to be agnostic to the material platform of the photonic integrated circuit (PIC) they are connected to, which can be Indium Phosphide (InP) or Silicon. Thus, they do not exploit the intrinsic advantages of these material platforms. Co-designing the DSP chip alongside the PIC can lead to a much better fit between these components.
To illustrate the impact of co-designing PIC and DSP, let’s look at an example. A PIC and a standard platform-agnostic DSP typically operate with signals of differing intensities, so they need some RF analog electronic components to “talk” to each other. This signal power conversion overhead constitutes roughly 2-3 Watts or about 10-15% of transceiver power consumption.
If this InP PIC and the DSP are designed and optimized together instead of using a standard DSP, the PIC could be designed to run at a voltage compatible with the DSP’s signal output. This way, the optimized DSP could drive the PIC directly without needing the RF analog driver, doing away with most of the power conversion overhead we discussed previously.
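To put those numbers in perspective, here is the rough arithmetic, using only the power figures quoted above:

```python
# Rough arithmetic based on the figures quoted in the text; illustrative only.
qsfp_dd_400zr_budget = 15.0   # W, 400ZR module in a QSFP-DD form factor
qsfp28_100zr_budget = 5.0     # W, 100ZR target in a QSFP28 form factor
conversion_overhead = 2.5     # W, mid-point of the 2-3 W RF driver overhead

print(f"Overhead as share of a 400ZR budget: {conversion_overhead / qsfp_dd_400zr_budget:.0%}")
print(f"Overhead as share of a 100ZR budget: {conversion_overhead / qsfp28_100zr_budget:.0%}")
# Removing most of this overhead through PIC/DSP co-design therefore matters far
# more inside a 5 W budget than inside a 15 W one.
```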
Industrial Temperature Ranges
Traditionally, coherent devices have resided in the controlled settings of data center machine rooms or network provider equipment rooms. These rooms have active temperature control, cooling systems, dust and particle filters, airlocks, and humidity control. In such a setting, pluggable transceivers must operate within the so-called commercial temperature range (c-temp) from 0 to 70ºC.
On the other hand, the network edge often involves uncontrolled settings outdoors at the whims of Mother Nature. It might be at the top of an antenna, on mountain ranges, within traffic tunnels, or in Northern Europe’s severe winters. For these outdoor settings, transceivers should operate in the industrial temperature range (I-temp) from -40 to 85ºC. Higher altitude deployments provide additional challenges too. Because the air gets thinner, networking equipment cooling mechanisms become less effective, and devices cannot withstand casing temperatures as high as they could at sea level.
Making an I-temp transceiver means that every internal component—lasers, optical engine, DSP—must also be I-temp compliant. Thus, it’s essential to specifically design laser sub-assemblies and DSPs that can reliably work within the I-temp range.
Takeaways
The advent of 100G ZR modules addresses the industry’s need for more affordable and energy-efficient alternatives to implement coherent technologies in access networks. The drive towards miniaturization, exemplified by EFFECT Photonics’ development of the world’s smallest ITLA module, alongside DSP optimizations that minimize power conversion overhead, will help develop pluggables that meet the efficiency and footprint requirements of access networks. Furthermore, it’s necessary for these modules to operate across an I-temp range that can handle a variety of challenging environments in which these networks must operate.
These developments will help operators lay the groundwork for a new generation of coherent optical access networks.
Tags: 100G ZR, access networks, coherent optics, Cost-effective solutions, Datacom applications, DSP optimization, Edge evolution, efficiency, energy consumption, Environmental challenges, Industrial temperature range, ITLA modules, miniaturization, network operators, Networking equipment, Optical Access Networks., Photonics, Photonics Integration, pluggables, Technology advancements, tunable lasers

Tunable Lasers and DSPs in the Age of AI
The use of generative artificial intelligence (AI) models is transforming several industries, and data centers are no exception. AI models are computationally heavy, and their increasing complexity will require faster and more efficient interconnections than ever between GPUs, nodes, server racks, and data center campuses. These interconnects will have a major impact on the ability of data center architectures to scale and handle the demands of AI models in a sustainable way.
As we discussed in a previous article, transceivers that fit this new AI era must be fast, smart, and adapt to multiple use cases and conditions. However, what impact will that have on the tunable lasers and digital signal processors (DSPs) inside these transceivers? This article will review a couple of trends in lasers and DSPs to adapt to this new era.
The Power of Laser Arrays
In 2022, Intel Labs demonstrated an eight-wavelength laser array fully integrated on a silicon wafer. Milestones like these are essential for optical transceivers because laser arrays enable multi-channel transceivers that are more cost-effective when scaling up to higher speeds.
Let’s say we need an intra-DCI link with 1.6 Terabits/s of capacity. There are three ways we could implement it:
- Four modules of 400G: This solution uses existing off-the-shelf modules but has the largest footprint. It requires four slots in the router faceplate and an external multiplexer to merge these into a single 1.6T channel.
- One module of 1.6T: This solution will not require the external multiplexer and occupies just one plug slot on the router faceplate. However, making a single-channel 1.6T device has the highest complexity and cost.
- One module with four internal channels of 400G: A module with an array of four lasers (and thus four different 400G channels) will only require one plug slot on the faceplate while avoiding the complexity and cost of the single-channel 1.6T approach.
Multi-laser array and multi-channel solutions will become increasingly necessary to increase link capacity in coherent systems. They will not need more slots in the router faceplate while simultaneously avoiding the higher cost and complexity of increasing the speed with just a single channel.
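For quick reference, the structural differences between the three options look like this (attributes only, taken from the list above; no cost figures, since those depend heavily on the implementation):

```python
# Summary of the three 1.6 Tb/s implementation options described above.
options = [
    # (description, faceplate slots, channels per module, external mux needed)
    ("4 x 400G modules", 4, 1, True),
    ("1 x single-channel 1.6T module", 1, 1, False),
    ("1 x module with 4 x 400G channels (laser array)", 1, 4, False),
]

for name, slots, channels, needs_mux in options:
    print(f"{name:<50} slots={slots}  channels={channels}  external mux={needs_mux}")
```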
Co-Designing DSP and Optical Engine
Transceiver developers often source their DSP, laser, and optical engine from different suppliers, so all these chips are designed separately from each other. In such cases, the DSP is like a Swiss army knife: a jack of all trades designed for different kinds of optical engines but a master of none.
For example, current DSPs are designed to be agnostic to the material platform of the photonic integrated circuit (PIC) they are connected to, which can be Indium Phosphide (InP) or Silicon. Thus, they do not exploit the intrinsic advantages of these material platforms. Co-designing the DSP chip alongside the PIC can lead to a much better fit between these components.
A PIC and a standard platform-agnostic DSP typically operate with signals of differing intensities, so they need some RF analog electronic components to “talk” to each other. This signal power conversion overhead can constitute up to 2 Watts or about 10-15% of transceiver power consumption.
However, the modulator of an InP PIC can run at a lower voltage than a silicon modulator. If this InP PIC and the DSP are designed and optimized together instead of using a standard DSP, the PIC could be designed to run at a voltage compatible with the DSP’s signal output. This way, the optimized DSP could drive the PIC directly without needing the RF analog driver, doing away with most of the power conversion overhead we discussed previously.
Smart Devices Built In Scale
Making coherent optical transceivers more affordable is a matter of volume production. As discussed in a previous article, if PIC production volumes can increase from a few thousand chips per year to a few million, the price per optical chip can decrease from thousands of Euros to mere tens of Euros. Achieving this production goal requires photonics manufacturing chains to learn from electronics and leverage existing electronics manufacturing processes and ecosystems.
While vertically-integrated PIC development has its strengths, a fabless model in which developers outsource their PIC manufacturing to a large-scale foundry is the simplest way to scale to production volumes of millions of units. Fabless PIC developers can remain flexible and lean, relying on trusted large-scale manufacturing partners to guarantee a secure and high-volume supply of chips. Furthermore, the fabless model allows photonics developers to concentrate their R&D resources on their end market and designs instead of costly fabrication facilities.
Further progress must also be made in the packaging, assembly, and testing of photonic chips. While these processes are only a small part of the cost of electronic systems, the reverse happens with photonics. To become more accessible and affordable, the photonics manufacturing chain must become more automated and standardized. It must move towards proven and scalable packaging methods that are common in the electronics industry.
If you want to know more about how photonics developers can leverage electronic ecosystems and methods, we recommend you read our in-depth piece on the subject.
Takeaways
In conclusion, tunable lasers and DSPs are adapting to meet the rising demands of AI-driven data center infrastructure. The integration of multi-wavelength laser arrays and the co-design of DSPs and optical engines are crucial steps towards creating more efficient, scalable, and cost-effective optical transceivers. These devices will be smart, providing telemetry data, and their manufacturing must shift towards volume production and the adoption of electronics industry methodologies. These innovations not only promise to enhance the capacity and efficiency of data center interconnects but also pave the way for a more sustainable growth trajectory in the face of AI’s computational demands.
Tags: 100ZR, access network, co-design, coherent, controlled, edge, fit for platform DSP, InP, Integrated Photonics, low power, photonic integration, Photonics, pluggables, power consumption, power conversion, QSFP 28, QSFP-DD

Transceivers in the Time of TowerCos
A recent report from the International Telecommunications Union (ITU) declared that 37% of the global population still lacks internet access due to infrastructure deficits. In this context, Tower Companies (TowerCos) will be crucial in expanding network coverage, particularly in underserved areas.
Tower Companies (TowerCos) are entities specializing in managing “passive” mobile infrastructure. In other words, they manage everything that is not active equipment that emits a mobile signal. The TowerCo’s primary role is to host telecommunications antennas for multiple operators, facilitating more efficient mobile deployments. This concept allows telecom operators to focus on active network management while TowerCos handle the maintenance, access, and security of passive infrastructures like towers and power supplies.
Historically, telecom companies managed every aspect of their service delivery, including the ownership of towers. However, increasing capital expenditure costs and the need for rapid expansion in network coverage have motivated operators to outsource this infrastructure to TowerCos. In this way, operators can reduce the required capital expenditure on infrastructure and move that into their operating costs.
The increasing bandwidth demands of 5G networks and data centers, prompted by new Internet-of-Things and artificial intelligence use cases, have further solidified the importance of TowerCos. A 2018 McKinsey study reported that the migration to 5G could double the total cost of ownership of a telecommunications company’s infrastructure between 2020 and 2025.
To adapt to this fast expansion of TowerCos worldwide, optical transceiver developers should consider the key requirements for products that will go into TowerCo infrastructure. In this article, EFFECT Photonics would like to highlight three of them: integration, remote diagnostics and management, and industrial hardening.
Integration for Compactness and Power Efficiency
Space and energy efficiency are critical for TowerCo infrastructure because they want to accommodate telecom equipment from multiple operators on the same structure. Greater photonics integration will be key to reducing the footprint of transceivers and other optical telecom equipment, as well as improving their power efficiency.
In many electronic and photonic devices, the interconnections between different components are often sources of losses and inefficiency. A more compact, integrated device will have shorter and more energy-efficient interconnections. Using an example from electronics, Apple’s system-on-chip processors fully integrate all electronic processing functions on a single chip. As shown in the table below, these processors are significantly more energy efficient than the previous generations of Apple processors.
| Mac Mini Model | Power Consumption: Idle (W) | Power Consumption: Max (W) |
|---|---|---|
| 2023, M2 | 7 | 50 |
| 2020, M1 | 7 | 39 |
| 2018, Core i7 | 20 | 122 |
| 2014, Core i5 | 6 | 85 |
| 2010, Core 2 Duo | 10 | 85 |
| 2006, Core Solo or Duo | 23 | 110 |
| 2005, PowerPC G4 | 32 | 85 |

Table 1: Comparing the power consumption of Mac Minis with M1 and M2 SoC chips to previous generations of Mac Minis. [Source: Apple’s website]
The photonics industry can set a similar goal to Apple’s system-on-chip. Integrating all the optical components (lasers, detectors, modulators, etc.) on a single chip can minimize losses and make devices such as optical transceivers more compact and efficient.
Remote Diagnostics and Management
Transceivers used in TowerCo infrastructures must also include advanced diagnostic and management features. These capabilities are essential for remote sites, enabling TowerCos and their telecom operator customers to monitor and manage their networks effectively.
For example, TowerCos and operators extensively use network function virtualization (NFV) capabilities. NFV allows operator customers to build their networks on the shared infrastructure and to define and distribute their own services. These technologies benefit greatly from smart transceivers that can be diagnosed and managed remotely from the NFV layer.
The concept of zero-touch provisioning becomes useful here. Transceivers can be pre-programmed by the central office for specific operational parameters, such as temperature, wavelength drift, dispersion, and signal-to-noise ratio. They can then be shipped to remote sites, where technicians just have to plug and play. This simplifies deployment for TowerCos.
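A hypothetical sketch of such a pre-programmed profile is shown below; the parameter names, values, and module interface are illustrative stand-ins rather than an actual product schema or API:

```python
# Hypothetical sketch of zero-touch provisioning; names, values, and the module
# interface are illustrative, not an actual product schema or API.

class StubModule:
    """Stand-in for a pluggable transceiver's management interface."""
    def tune(self, channel): print(f"tuned to {channel}")
    def set_tx_power(self, dbm): print(f"tx power set to {dbm} dBm")
    def set_alarms(self, thresholds): print(f"alarm thresholds: {thresholds}")

provisioning_profile = {
    "channel": "C34",            # DWDM channel pre-assigned by the central office
    "tx_power_dbm": -1.0,
    "alarm_thresholds": {"temperature_c": (-5.0, 75.0), "rx_power_dbm": (-22.0, 0.0)},
}

def on_plug_in(module, profile):
    """Apply the pre-programmed profile so the field technician only has to plug and play."""
    module.tune(profile["channel"])
    module.set_tx_power(profile["tx_power_dbm"])
    module.set_alarms(profile["alarm_thresholds"])

on_plug_in(StubModule(), provisioning_profile)
```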
Moreover, the same communication channels used for provisioning can also facilitate ongoing monitoring and diagnostics. This feature particularly benefits remote sites, where traditional maintenance methods like truck rolls are costly and inefficient. By remotely monitoring key metrics like transceiver temperature and power, TowerCos and operator customers can conduct health checks and manage their infrastructure more efficiently.
Industrial Hardening
Transceivers in TowerCo infrastructures must be designed to withstand harsh outdoor environments. The resilience of these components is critical for maintaining continuous network service and preventing downtime, especially in remote or challenging locations.
Commercial temperature (C-temp) transceivers are designed to operate from 0°C to 70°C. These transceivers suit the controlled environments of data center and network provider equipment rooms. These rooms have active temperature control, cooling systems, filters for dust and other particulates, airlocks, and humidity control. On the other hand, industrial temperature (I-temp) transceivers are designed to withstand more extreme temperature ranges, typically from -40°C to 85°C. These transceivers are essential for deployments in outdoor environments or locations with harsh operating conditions. It could be at the top of an antenna, on mountain ranges, inside traffic tunnels, or in the harsh winters of Northern Europe.
| Temperature Standard | Min (°C) | Max (°C) |
|---|---|---|
| Commercial (C-temp) | 0 | 70 |
| Extended (E-temp) | -20 | 85 |
| Industrial (I-temp) | -40 | 85 |
| Automotive / Full Military | -40 | 125 |
Table 2: Comparing the temperature ranges of different temperature hardening standards, including industrial and automotive/full military applications. |
Takeaways
TowerCos will be vital in expanding network coverage across the world and meeting the increasing demands of 5G networks. In this context, EFFECT Photonics believes that optical transceiver products that go into TowerCo infrastructure must meet the following key requirements:
- Integration for compactness and power efficiency
- Advanced remote diagnostics and management features
- Industrial hardening for durability in harsh environments.
These aspects will be crucial for efficient, reliable, and cost-effective network deployment and maintenance and will support TowerCos in making optical connectivity more accessible worldwide.
Tags: 5G Networks, artificial intelligence, capital expenditure costs, data centers, EFFECT Photonics, efficient mobile deployments, Industrial Hardening, infrastructure deficits, integration, internet access, Internet of Things, key requirements, network coverage, optical transceiver developers, passive mobile infrastructure, rapid expansion, remote diagnostics, telecommunications antennas, TowerCo infrastructure, TowerCos, Transceivers, underserved areas

Reducing the Cost per Bit in Access Networks
Every telecommunications provider has the same fundamental problem. Many decades ago, service providers addressed increased network demands by spending more money and buying more hardware. However, network operators cannot allow their infrastructure spending to increase exponentially with network traffic, because the number of customers and the prices they are willing to pay for mobile services will not increase so steeply. The chart below is one that everyone in the communications industry is familiar with in one way or another.
Given this context, reducing the cost per bit transmitted in a network is one of the fundamental mandates of telecommunication providers. As the global appetite for data grows exponentially, fueled by streaming services, cloud computing, and an ever-increasing number of connected devices, the pressure mounts on these providers to manage and reduce this cost.
In access networks, where the end users connect to the main network, this concept takes on an added layer of importance. These networks are the final link in the data delivery chain and are expensive to upgrade and maintain due to the sheer volume of equipment and devices required to reach each end user.
This is why one of EFFECT Photonics’ main missions is to use our optical solutions to reduce the cost per bit in access networks. In this article, we will briefly explain three key pillars that will allow us to achieve this goal.
Manufacturing at Scale
Previously, deploying optical technology required investing in large and expensive transponder equipment on both sides of the optical link. The rise of integrated photonics has not only reduced the footprint and energy consumption of coherent transceivers but also their cost. The economies of scale that rule the semiconductor industry reduce the cost of optical chips and the transceivers that use them.
The more optical components we can integrate into a single chip, the more the price of each component can decrease. The more optical System-on-Chip (SoC) devices can go into a single wafer, the more the price of each SoC can decrease. Researchers at the Technical University of Eindhoven and the JePPIX consortium have done some modelling to show how this economy of scale principle would apply to photonics. If production volumes can increase from a few thousand chips per year to a few million, the price per optical chip can decrease from thousands of Euros to mere tens of Euros.
By integrating all optical components on a single chip, we also shift the complexity from the assembly process to the much more efficient and scalable semiconductor wafer process. Assembling and packaging a device by interconnecting multiple photonic chips increases assembly complexity and costs. On the other hand, combining and aligning optical components on a wafer at a high volume is much easier, which drives down the device’s cost.
Integration Saves Power (and Energy)
Data centers and 5G networks might be hot commodities, but the infrastructure that enables them runs even hotter. Electronic equipment generates plenty of heat, and the more heat energy an electronic device dissipates, the more money and energy must be spent to cool it down.
These issues do not just affect the environment but also the bottom lines of communications companies. Cooling costs will increase even further with the exponential growth of traffic and the deployment of 5G networks. Integration is vital to reduce this heat dissipation and costs.
Photonics and optics are trying to follow a similar blueprint to the electronics industry and improve their integration to reduce power consumption and its associated costs. For example, over the last decade, coherent optical systems have been miniaturized from big, expensive line cards to small pluggables the size of a large USB stick. These compact transceivers with highly integrated optics and electronics have shorter interconnections, fewer losses, and more elements per chip area. These features all lead to a reduced power consumption over the last decade, as shown in the figure below.
DWDM Gives More Lanes to the Fiber Highway
Dense Wavelength Division Multiplexing (DWDM) is an optical technology that dramatically increases the amount of data transmitted over existing fiber networks. Data from various signals are separated, encoded on different wavelengths, and put together (multiplexed) in a single optical fiber.
The wavelengths are separated again and reconverted into the original digital signals at the receiving end. In other words, DWDM allows different data streams to be sent simultaneously over a single optical fiber without requiring the expensive installation of new fiber cables. In a way, it’s like adding more lanes to the information highway without building new roads!
The tremendous expansion in data volume afforded with DWDM can be seen compared to other optical methods. A standard transceiver, often called a grey transceiver, is a single-channel device – each fiber has a single laser source. You can transmit 10 Gbps with grey optics. Coarse Wavelength Division Multiplexing (CWDM) has multiple channels, although far fewer than possible with DWDM. For example, with a 4-channel CWDM, you can transmit 40 Gbps. DWDM can accommodate up to 100 channels. You can transmit 1 Tbps or one trillion bps at that capacity – 100 times more data than grey optics and 25 times more than CWDM.
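The capacity comparison above boils down to a couple of lines of arithmetic, assuming 10 Gbps per wavelength as in the example:

```python
# Capacity comparison from the paragraph above (10 Gbps per wavelength assumed).
per_channel_gbps = 10

grey = 1 * per_channel_gbps      # single-channel "grey" optics
cwdm = 4 * per_channel_gbps      # 4-channel CWDM
dwdm = 100 * per_channel_gbps    # up to 100 DWDM channels

print(f"Grey: {grey} Gbps, CWDM: {cwdm} Gbps, DWDM: {dwdm} Gbps (= {dwdm/1000:.0f} Tbps)")
print(f"DWDM vs grey: {dwdm // grey}x, DWDM vs CWDM: {dwdm // cwdm}x")
```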
While the upgrade to DWDM requires some initial investment in new tunable transceivers, the use of this technology ultimately reduces the cost per bit transmitted over the network. Demand in access networks will continue to grow as we move toward IoT and 5G, and DWDM will be vital to scaling cost-effectively. Self-tuning modules have also helped further reduce the expenses associated with tunable transceivers.
Takeaways
The escalating demand for data traffic requires reducing the cost per bit in access networks. EFFECT Photonics outlines three ways that can help achieve this goal:
- Manufacturing at scale to reduce the cost of optical chips and transceivers
- Photonic integration to lower power consumption and save on cooling cost
- Dense Wavelength Division Multiplexing (DWDM) to significantly increase data transmission capacity without deploying new fiber
EFFECT Photonics believes these technologies and strategies will ensure efficient, cost-effective, and scalable data transmission for the future.
Tags: 5G Networks, access networks, communications industry, cost per bit, data transmission capacity, Dense-wavelength division multiplexing (DWDM), EFFECT Photonics, fiber networks, heat dissipation, infrastructure spending, Integrated Photonics, manufacturing at scale, mobile services, network demands, network traffic, Optical Chips, Optical solutions, Photonics, reducing, Semiconductor Industry, System-on-Chip (SoC) devices, telecommunications provider

How Pluggable Transceivers Help Your Network Scale
Modern optical networks must be scalable to accommodate escalating bandwidth requirements driven by data-intensive applications and emerging technologies, from video streaming to cloud computing and artificial intelligence.
Beyond bandwidth, a scalable network must adapt to changes in connectivity, coverage, and the integration of new technologies incrementally and cost-effectively. This adaptability optimizes the use of resources, promotes efficient growth, and contributes to the future-proofing of network infrastructure.
Optical networks face several challenges that hinder their further scalability. Infrastructure cost constraints prevent new optical fiber cables or equipment from being deployed. This can be a significant barrier for organizations looking to expand their networks.
Pluggable optical transceivers play a crucial role in making optical fiber networks more scalable by offering flexibility, ease of deployment, interoperability, and the ability to adapt to evolving network requirements. This article will dive into a few of these benefits.
The Benefit of Modularity
Arguably, the critical benefit of pluggable transceivers is a modular approach to network design. As network requirements change, operators can easily replace or upgrade transceivers without disrupting the entire network. This modularity allows for a more flexible and scalable infrastructure, as organizations can scale their networks incrementally based on demand rather than making significant upfront investments.
Pluggable transceivers support various data rates, allowing network operators to mix and match transceivers with different speeds within the same network. This is particularly useful when migrating from lower to higher data rates. It enables a phased approach to network upgrades, where components can be replaced gradually, and existing infrastructure can be utilized until a complete upgrade is economically feasible.
The Benefit of Interoperability
Pluggable transceivers also provide the flexibility of multi-vendor interoperability.
In the past, high-performance line card transponders often prioritized using proprietary features to increase performance while neglecting interoperability. As time went by, however, transceivers got smaller and consumed less power.
For example, in 2018, most coherent line card transponder functions could be miniaturized into CFP2 transceiver modules that were the size of a pack of cards and could be plugged into modules with pluggable line sides. QSFP modules followed a couple of years later, and they were essentially the size of a large USB stick and could be plugged directly into routers.
The new generations of pluggable transceivers don’t even suffer from the trade-off of performance vs. interoperability: they can operate in standards-compatible modes for interoperability or in high-performance modes that use proprietary features. They are an excellent fit for network operators who want to take advantage of the lower power consumption and cost, field replaceability, vendor interoperability, and pay-as-you-grow features.
The adherence of pluggables to industry standard sizes, such as SFP and QSFP, ensures a high degree of compatibility and interoperability across different vendors’ equipment. As a result, network operators can seamlessly integrate pluggable transceivers from various manufacturers into their existing infrastructure, allowing organizations to easily add or replace transceivers as needed without disrupting the entire network.
The Benefit of Easier Maintenance
The pluggable nature of transceivers simplifies maintenance tasks and troubleshooting processes in optical networks.
Pluggable transceivers are usually designed to be hot-swappable, allowing them to be inserted or removed from network devices without powering down the entire system. In case of a failure or the need for an upgrade, technicians can easily replace or reconfigure transceivers without disrupting the entire network.
This feature facilitates a smoother installation process, reducing downtime and minimizing disruptions to the network. Instead of replacing entire network devices, operators can focus on replacing or upgrading specific transceivers. This approach also minimizes costs associated with maintenance and upgrades, allowing organizations to allocate resources more efficiently.
Many pluggable transceivers support digital diagnostics monitoring (DDM) or DOM, providing real-time information about the transceiver’s performance, temperature, and optical parameters. This data can be centrally monitored and managed, enhancing the overall visibility and control over the network.
Takeaways
Pluggable transceivers are integral in addressing the scalability challenges modern optical networks face. Pluggable transceivers provide a modular solution, allowing seamless replacement or upgrades without disrupting the entire network, thus facilitating a scalable infrastructure that can evolve incrementally based on demand. Their support for various data rates further enables phased network upgrades, optimizing resource utilization and promoting cost-effective growth.
Moreover, the interoperability benefits of pluggable transceivers contribute significantly to their role in scalability. Additionally, the pluggable nature simplifies installation and maintenance tasks, especially since most pluggables are hot-swappable to minimize downtime and disruptions. Features like digital diagnostics monitoring allow for more proactive and efficient management of pluggables in your network.
Tags: adaptability, artificial intelligence, bandwidth requirements, benefits, cloud computing, compatibility, connectivity, cost effectively, coverage, data rates, data-intensive applications, emerging technologies, equipment, hot-swappable, industry standard sizes, infrastructure, integration, interoperability, maintenance, modularity, network design, network infrastructure, Networks, operators, optical fiber cables, optical networks, optical parameters, organizations, performance, Pluggable Transceivers, QSFP, Real-time Information, Scalability, scalability challenges, SFP, temperature, troubleshooting, upgrades, vendors, video streaming

Why (Small) Laser Size Matters
Several applications in the optical network edge would benefit from upgrading from 10G to 100G DWDM or from 100G grey to 100G DWDM optics:
- Business Services could scale their enterprise bandwidth beyond single-channel 100G links.
- Fixed Access links could upgrade the uplinks of existing termination devices such as optical line terminals (OLTs) and Converged Cable Access Platforms (CCAPs) from 10G to 100G DWDM.
- Mobile Midhaul benefits from a seamless upgrade of existing links from 10G to 100G DWDM.
- Mobile Backhaul benefits from upgrading their links to 100G IPoDWDM.
The 100G coherent pluggables for these applications will have very low power consumption (less than 6 Watts) and QSFP28 form factors that are slightly smaller than a typical 400G QSFP-DD transceiver. To enable this next generation of coherent pluggables, the next generation of tunable lasers needs to reach another level of optical and electronic integration.
The Impact of Small and Integrated Lasers
Laser miniaturization and integration is not merely a matter of size; it’s also vital to enhance the power efficiency of these lasers. Below are some examples of the ways small lasers can improve energy efficiency.
- Lower Operating Voltage and Currents: Smaller, highly-integrated laser designs normally require lower threshold voltages and currents than larger lasers.
- Improved Heat Dissipation: Compact designs reduce the distances light must travel inside the laser chip. This leads to lower optical losses and less heat to dissipate.
- Fewer Coupling Losses: One of the hardest things to do in photonics is coupling between free-space optics and a chip. Highly integrated lasers combine multiple functions on a single chip and avoid this kind of coupling and its associated losses.
Photonic integration is vital to achieving these size and power consumption reductions. The more components are integrated on a single chip, the lower the losses and the more efficient the optical transceiver becomes.
The Past Successes and Future Challenges of Laser Integration
Over the last decade, technological progress in tunable laser integration has matched the need for smaller footprints. In 2011, tunable lasers followed the multi-source agreement (MSA) for integrable tunable laser assemblies (ITLAs). By 2015, tunable lasers were sold in the more compact micro-ITLA form factor, constituting a mere 22% of the original ITLA package volume. In 2019, the nano-ITLA form factor reduced ITLA volumes further, as the module was just 39% of the micro-ITLA volume.
Despite this progress, the industry will need further laser integration for the QSFP28 pluggables used in 100G ZR coherent access. Since QSFP28 pluggables have a lower power budget and a slightly smaller footprint than QSFP-DD modules, they cannot simply reuse the lasers designed for QSFP-DD modules; they need specialized laser solutions with a smaller footprint and lower power consumption.
Achieving these ambitious targets requires monolithic lasers that ideally include all key laser functions (gain, laser cavity, and wavelength locker) on the same chip.
Pushing Tunable Laser Sizes Further Down
Reducing the footprint of tunable lasers in the future will need even greater integration of their parts. For example, every tunable laser needs a wavelength locker component that can stabilize the laser’s output regardless of environmental conditions such as temperature. Integrating the wavelength locker component on the laser chip instead of attaching it externally would help reduce the laser package’s footprint and power consumption.
EFFECT Photonics’ laser solution is unique because it enables a widely tunable laser whose functions, including the wavelength locker, are all monolithically integrated on a single chip. This setup is ideal for reducing power consumption and scaling into high production volumes.
This monolithic integration of all tunable laser functions allowed EFFECT Photonics to develop a novel pico-ITLA (pITLA) module that will become the world’s smallest ITLA for coherent applications. The pITLA is the next step in tunable laser integration, including all laser functions in a package with just 20% of the volume of a nano-ITLA module. The figure below shows that even a standard matchstick dwarfs the pITLA in size.
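Chaining the volume ratios quoted above gives a sense of how far tunable laser packaging has come. The sketch below multiplies the generation-to-generation ratios from the text (micro-ITLA at 22% of the ITLA, nano-ITLA at 39% of the micro-ITLA, pITLA at 20% of the nano-ITLA); volumes are normalized to the original ITLA, so no real package dimensions are assumed.

```python
# Relative package volumes, normalized to the original ITLA (= 1.0),
# using the generation-to-generation ratios quoted in the text.
ratios = {
    "micro-ITLA": 0.22,   # 22% of the ITLA volume
    "nano-ITLA":  0.39,   # 39% of the micro-ITLA volume
    "pico-ITLA":  0.20,   # 20% of the nano-ITLA volume
}

volume = 1.0
print(f"ITLA       : {volume:.3f}")
for name, ratio in ratios.items():
    volume *= ratio
    print(f"{name:<11}: {volume:.3f}")
# Result: the pITLA occupies roughly 1.7% of the original ITLA volume.
```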
Takeaways
The impact of small and integrated lasers extends beyond mere size considerations; it crucially contributes to enhancing power efficiency. Smaller laser designs inherently operate at lower voltages and currents, offering improved heat dissipation and minimizing coupling losses. Photonic integration emerges as a pivotal factor in achieving these reductions, maximizing efficiency by consolidating multiple functions onto a single chip.
The journey towards 100G coherent technology in access networks requires compact and power-efficient coherent pluggables in the QSFP28 form factor and, with it, compact and power-efficient tunable lasers that fit this form factor. EFFECT Photonics is contributing a new step in this integration and miniaturization process with its pico-ITLA module. With a volume 20% that of a nano-ITLA module, the pITLA not only meets ambitious targets but also exemplifies the continuous push towards achieving compact, efficient, and scalable tunable lasers for the optical networking edge.
Tags: 100G Grey, 10G to 100G DWDM, Business services, Coherent pluggables, Converged Cable Access Platforms, EFFECT Photonics, Enterprise Bandwidth, Fixed Access Links, IPoDWDM, Laser Size, Mobile Backhaul, Mobile Midhaul, Optical Line Terminals, Optical Network Edge, Photonics Integration, Pico-ITLA Module, power consumption, QSFP28 Form Factors, Single-channel 100G Links, tunable lasers, Uplinks
Transceivers in Emergency Communications
Telecommunications are indispensable during emergencies and natural disasters for their pivotal role in coordinating emergency responses, disseminating public safety information, and facilitating access to critical services. In times of crisis, efficient communication is essential for first responders, emergency services, and affected communities to collaborate seamlessly, allocate resources effectively, and ensure the safety of individuals.
Telecommunications also play a crucial role in family reunification, logistical support, and the gathering and disseminating of real-time information, contributing to well-informed decision-making and adaptive responses. Moreover, these communication channels provide emotional support, maintain social connections, and foster community resilience in adversity. As we navigate an unpredictable world fraught with natural disasters and unforeseen emergencies, the reliability of communication infrastructure becomes a lifeline for affected communities.
Optical transceivers for these disaster and emergency communications require specific characteristics to make them resistant to harsh environmental conditions and easier to install, deploy, and maintain. This article will dive into some ways that tunable transceivers can meet such requirements.
Self-Tuning Reduces Time to Service in Emergencies
Simplified provisioning and installation processes are essential to facilitate swift deployment, allowing responders to establish critical communication links without the burden of complex configurations. As time is often a critical factor during crises, transceivers should ideally adapt to the network environment to expedite the setup.
Typical tunable modules involve several tasks—manual tuning and verification of wavelength channel records—that can easily take an extra hour just for a single module installation. Self-tuning allows technicians to treat tunable modules the same way they do with grey transceivers. The network administrator could automate the configuration and provisioning of each transceiver unit from their central office, ship the units to each remote site, and the personnel in that site (who don’t need any technical experience!) just need to power up the unit. After turning them on, they could be further provisioned, managed, and monitored by experts anywhere in the world.
Once plugged in, the transceiver will set the operational parameters as programmed and communicate with the central office for confirmation. These provisioning options make deployment much more accessible for network operators. This plug-and-play operation of self-tuning modules eliminates the additional time and complexity of deploying new nodes and DWDM links in optical access networks.
The Role of Remote Diagnostics
When disaster strikes, some areas may become isolated or pose safety risks, making on-site monitoring impractical. In these scenarios, remote diagnostics proves invaluable in maintaining communication links in hard-to-reach locations or those affected by adverse conditions. They enable real-time assessment of transceiver health, performance, and potential issues without direct physical intervention.
In EFFECT Photonics’ transceivers, the same channel that establishes parameters remotely during installation can also perform monitoring and diagnostics afterward. The headend module in the central office could remotely modify certain aspects of the tail-end module in the remote site, effectively enabling several remote management and diagnostics options. The figure below provides a visualization of such a scenario.
The central office can remotely measure metrics such as the transceiver temperature and power transmitted and received. These metrics can provide a quick and helpful health check of the link. The headend module can also remotely read alarms for low/high values of these metrics.
Industrially-Hardened Transceivers for Rough Environments
Typical transceivers reside in the controlled settings of data center machine rooms or network provider equipment rooms. These rooms have active temperature control, cooling systems, dust and particle filters, airlocks, and humidity control. In such a setting, pluggable transceivers must operate within the so-called commercial temperature range (c-temp) of 0 to 70°C.
However, optical transceivers for emergency and disaster scenarios must survive rough outdoor environments at the whims of Mother Nature. For these outdoor settings, transceivers should operate in the industrial temperature range (I-temp) of -40 to 85°C. Higher-altitude deployments add further challenges: because the air is thinner, the cooling mechanisms of networking equipment become less effective, and devices cannot tolerate casing temperatures as high as they could at sea level.
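A simple way to reason about these ratings is to check the expected temperature window of a deployment site against the c-temp and I-temp ranges quoted above. The sketch below does exactly that; the outdoor cabinet window used in the example is hypothetical.

```python
# Operating ranges quoted in the text (case temperature, degrees Celsius).
RANGES = {
    "c-temp": (0, 70),     # commercial: climate-controlled equipment rooms
    "i-temp": (-40, 85),   # industrial: outdoor cabinets, poles, rooftops
}

def suitable_grades(site_min: float, site_max: float) -> list[str]:
    """Return the temperature grades that cover the expected site window."""
    return [grade for grade, (lo, hi) in RANGES.items()
            if lo <= site_min and site_max <= hi]

# Hypothetical outdoor cabinet that can swing from -25 C to +75 C:
print(suitable_grades(-25, 75))   # ['i-temp'] -- a c-temp module would not survive
```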
Industrial hardening involves using robust materials, protective enclosures, and enhanced durability features, ensuring that the transceivers can endure the rigors of the outdoors. Making an I-temp transceiver means that every internal component must also be I-temp compliant. You can learn more about industrial hardening in the following article.
Takeaways
In times of crisis, the pivotal role of telecommunications in coordinating emergency responses and ensuring public safety cannot be overstated. The deployment of self-tuning transceivers reduces the time to service during emergencies by simplifying provisioning and installation processes. Their plug-and-play operation allows for swift deployment in remote locations, ensuring that critical communication links are established efficiently, even by personnel without technical expertise.
Remote diagnostics capabilities help maintain communication links in hard-to-reach or hazardous locations, enabling real-time assessment without physical intervention. In these cases, industrial hardening of transceivers also emerges as a critical necessity, ensuring their resilience in rough outdoor environments subjected to the unpredictable forces of nature. By meeting these requirements, optical transceivers become resilient components of emergency communication systems, contributing significantly to the reliability and effectiveness of communication networks when they are needed most.
Tags: Adaptive Responses, Communication Infrastructure, Coordinating Emergency Responses, Crisis Communication, Critical Services, Emergency Communications, Family Reunification, First Responders, Industrial Hardening, Logistical Support, Natural Disasters, Optical Access Networks, Public Safety Information, Real-time Information, remote diagnostics, self-tuning, Swift Deployment, Telecommunications, Transceivers, tunable transceivers
What is Laser Linewidth and Why Does it Matter
The world is moving towards tunability. The combination of tunable lasers and dense wavelength division multiplexing (DWDM) allows the datacom and telecom industries to expand their network capacity without expanding their existing fiber infrastructure. Furthermore, the miniaturization of coherent technology into pluggable transceiver modules has enabled the widespread implementation of IP over DWDM solutions. Self-tuning algorithms have also made DWDM solutions more widespread by simplifying installation and maintenance. Hence, many application cases, such as metro transport and data center interconnects, are moving towards tunable pluggables.
The tunable laser is a core component of all these tunable communication systems, both direct detection and coherent. The laser generates the optical signal modulated and sent over the optical fiber. Thus, the purity and strength of this signal will have a massive impact on the bandwidth and reach of the communication system.
What is laser linewidth?
Coherent systems encode information in the phase of the light, and the purer the light source is, the more information it can transmit. An ideal, perfectly pure light source can generate a single, exact color of light. However, real-life lasers are not pure and will generate light outside their intended color. The size of this deviation is what we call the laser linewidth. In other words, the linewidth describes the range of wavelengths present in the wavelength spectrum of the laser beam.
The linewidth of a laser can be defined in different ways depending on the specific criteria used. Here are a few examples:
- Full Width at Half Maximum (FWHM): This is a common and straightforward definition. It refers to the width of the laser spectrum at the point where the intensity is half its maximum.
- Gaussian Linewidth: In some cases, the linewidth can be characterized by the standard deviation of a Gaussian distribution that fits the spectral profile of the laser output.
- Schawlow-Townes Linewidth: This definition is associated with the quantum noise of the laser. You could consider this the fundamental, smallest possible linewidth an “ideal” laser could have.
- Lorentzian Linewidth: The Lorentzian linewidth is based on the Lorentzian distribution, often used to model the spectral lines of certain lasers.
- Frequency or Wavelength Range: Instead of using a specific criterion like FWHM, some applications may define linewidth by specifying the frequency or wavelength range within which a certain percentage (e.g., 95%) of the total power is contained.
These different definitions may be more suitable for specific contexts or applications, depending on the requirements and characteristics of the laser system in question.
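For a feel of the scales involved, the sketch below converts a linewidth expressed in frequency into the equivalent wavelength spread around 1550 nm, using the standard relation Δλ ≈ λ²·Δν/c. The 100 kHz example linewidth is an arbitrary illustrative value, not a specification from the text.

```python
# Convert a laser linewidth from frequency units to wavelength units
# around a given center wavelength: delta_lambda = lambda^2 * delta_nu / c.
C = 299_792_458.0  # speed of light, m/s

def linewidth_nm(delta_nu_hz: float, center_wavelength_m: float = 1550e-9) -> float:
    return (center_wavelength_m ** 2) * delta_nu_hz / C * 1e9  # meters -> nm

# Illustrative example: a 100 kHz linewidth at 1550 nm
print(f"{linewidth_nm(100e3):.3e} nm")   # ~8e-07 nm, i.e. less than a femtometer
```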
What Impact Does Linewidth Have on Coherent Transmission?
A laser is a very precise generator of light signals. Phase noise is like a tiny, random wobble or instability in the timing of these signals. It’s as if the laser can’t decide exactly when to start and stop its light output, creating a small amount of uncertainty in the timing. Precise timing is everything for communication applications.
An impure laser with a large linewidth will have a more unstable phase that propagates errors in its transmitted data, as shown in the diagram below. This means it will transmit at a lower speed than desired.
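One common way to quantify this is the Wiener phase-noise model, in which the phase drift accumulated over one symbol period has a variance of 2π·Δν·Ts, where Δν is the linewidth and Ts the symbol period. The sketch below compares the drift for two illustrative linewidths at a 64 GBd symbol rate; the numbers are examples, not figures from the text.

```python
import math

def phase_drift_std_deg(linewidth_hz: float, symbol_rate_baud: float) -> float:
    """RMS phase drift per symbol under the Wiener phase-noise model:
    variance = 2 * pi * linewidth * symbol_period."""
    symbol_period = 1.0 / symbol_rate_baud
    variance = 2 * math.pi * linewidth_hz * symbol_period
    return math.degrees(math.sqrt(variance))

for lw in (100e3, 10e6):   # 100 kHz vs 10 MHz linewidth (illustrative)
    print(f"{lw/1e3:>8.0f} kHz linewidth -> "
          f"{phase_drift_std_deg(lw, 64e9):.2f} deg RMS drift per symbol")
```

The wider linewidth produces roughly ten times more phase drift per symbol, which the receiver's phase tracking must then work much harder to undo.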
What are Some Ways to Reduce Laser Linewidth and Noise?
One of the most straightforward ways to improve the linewidth of a semiconductor laser is to use it inside a second, somewhat larger resonator. This setup is called an external cavity laser (ECL) since this new resonator or cavity will use additional optical elements external to the original laser.
The new external resonator also provides more degrees of freedom for tuning the laser. ECLs have become the state-of-the-art solution in the telecom industry: they use a DFB or DBR laser as the “base laser” and external gratings as their filtering element for additional tuning. These lasers can provide a high-quality laser beam with low noise, narrow linewidth, and a wide tuning range. However, they come at a cost: manufacturing complexity.
EFFECT Photonics takes a very different approach to building lasers. Most developers make their lasers using linear resonators in which the laser light bounces back and forth between two mirrors. However, EFFECT Photonics uses ring resonators, which take a different approach to feedback: the light loops multiple times inside a ring that contains the active medium. The ring is coupled to the rest of the optical circuit via a waveguide.
The power of the ring resonator lies in its compactness, flexibility, and integrability. While a single ring resonator is not that impressive or tunable, combining multiple rings and other optical elements allows these ring-based lasers to achieve linewidth and tunability on par with the state-of-the-art tunable lasers that use linear resonators.
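A rough way to see why multiple rings help is the Vernier effect: two rings with slightly different free spectral ranges only line up at widely spaced wavelengths, so the combined filter can be tuned across a much wider range than either ring alone. The sketch below computes the extended free spectral range FSR_eff = FSR1·FSR2 / |FSR1 − FSR2| for two illustrative ring FSRs; the values are not taken from any actual EFFECT Photonics design.

```python
def vernier_fsr(fsr1_ghz: float, fsr2_ghz: float) -> float:
    """Effective free spectral range of two cascaded ring filters (Vernier effect)."""
    return fsr1_ghz * fsr2_ghz / abs(fsr1_ghz - fsr2_ghz)

# Two illustrative rings with slightly mismatched FSRs:
print(f"{vernier_fsr(400.0, 420.0):.0f} GHz")   # 8400 GHz, ~21x either ring alone
```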
Takeaways
Laser linewidth, which describes the range of wavelengths in the laser beam, is paramount in coherent optical transmission systems. In such systems, where information is encoded in the phase of light, a purer light source allows for transmitting more information. A narrower laser linewidth corresponds to a more stable phase, reducing phase noise and enhancing the signal quality.
Techniques such as external cavity lasers (ECL) have been employed to improve linewidth, offering a high-quality laser beam with low noise and narrow linewidth. Alternatively, EFFECT Photonics employs ring resonators, providing an innovative approach to achieving linewidth and tunability comparable to state-of-the-art tunable lasers while emphasizing compactness and integrability.
Tags: Coherent technology, coherent transmission, Data center interconnects, Dense-wavelength division multiplexing (DWDM), EFFECT Photonics, External cavity laser (ECL), Frequency range, Full Width at Half Maximum (FWHM), Gaussian Linewidth, IP over DWDM, Laser linewidth, Lorentzian Linewidth, Metro transport, Phase noise, Pluggable transceiver modules, Ring resonators, Schawlow-Townes Linewidth, Self-tuning algorithms, tunable lasers, Wavelength range
The Power of Self-Tuning Access Networks
Article first published 16th February 2022, updated 31st January 2024.
5G networks will use higher frequency bands, which require the deployment of more cell sites and antennas to cover the same geographical areas as 4G while existing antennas must upgrade to denser antenna arrays. On the side of fixed access networks, the rise of Remote PHY architectures and Dense Wavelength Division Multiplexing (DWDM) will lead to a similar increase in the density of optical network coverage.
Installing and maintaining these new nodes in fixed networks and optical fronthaul links for wireless networks will require many new DWDM optical links. Even though tunable DWDM modules have made these deployments a bit easier to handle, this tremendous network upgrade still comes with several challenges. Typical tunable modules still require several time-consuming processes to install and maintain, and that time quickly turns into higher expenses.
In the coming decade, the winners in the battle for dominance of access networks will be the providers and countries with the most extensive installed fiber base. Therefore, providers and nations must scale up cost-effectively AND quickly. Every hour saved is essential to reach targets before the competition. Fortunately, the telecom industry has a new weapon in the fight to reduce time-to-service and costs of their future networks: self-tuning DWDM modules.
Plug-and-Play Operation Reduces Time to Service
Typical tunable modules involve several tasks—manual tuning, verification of wavelength channel records—that can easily take an extra hour just for a single installation. Repair work on the field can take even longer if the technicians visit two different sites (e.g., the node and the multiplexer) to verify that they connected the correct fibers. If there are hundreds of nodes to install or repair, the required hours of labor can quickly rack up into the thousands.
Self-tuning allows technicians to treat tunable modules the same way they do with grey transceivers. There is no need for additional training for technicians to install the tunable module. Technicians only need to follow the typical cleaning and handling procedures, plug in the tunable module, and once plugged, the device will automatically scan and find the correct wavelength.
This plug-and-play operation of self-tuning modules eliminates the additional time and complexity of deploying new nodes and DWDM links in optical access networks. Self-tuning is a game-changing feature that makes DWDM networks simpler and more affordable to upgrade, manage, and maintain.
Host-Agnostic and Interoperable
Another way to save time when installing new tunable modules is to let specialized host equipment perform the tuning procedure instead. However, that would require the module and host to be compatible with each other and thus “speak the same language” when performing the tuning procedure. This situation leads to vendor lock-in: providers and integrators could not use host equipment or modules from a third party. This lock-in adds an extra layer of complexity and gives providers less flexibility to upgrade and innovate in their networks.
Self-tuning modules do not carry this trade-off because they are “host-agnostic”: they can plug into any host device as long as it accepts third-party 10G grey optics. Just as technicians can treat a self-tuning module as grey, any third-party host equipment can do the same. This benefit is possible because the module takes care of the tuning independently without relying on the host.
Enabling Simpler Network Management
Self-tuning lies at the core of EFFECT Photonics’ NarroWave technology. To implement our NarroWave procedures, we add a small low-frequency modulation signal to the tunable module and specific software that performs wavelength scanning and locking. Since this is a process controlled via software and the added signal is very small, it has no impact on these transceivers’ optical design and performance. It is simply an additional feature that the user can activate. The figure below gives a simplified overview of how NarroWave self-tuning works.
Since self-tuning software requires exchanging commands between modules across the network, it can also enable remote management tasks. For example, our NarroWave communication channel can also allow the operator’s headend module to have read-write control over certain memory registers of the tail-end module. This means that the operator can modify several module variables such as the wavelength channel, power levels, behaviour when turning on/off, all from the comfort of the central office.
In addition, the NarroWave channel also allows the headend module to read diagnostic information from the remote module, such as transmitter power levels, alarms, warnings, or status flags. NarroWave then allows the user to act upon this information and change control limits, initiate channel tuning, or clear flags. These remote diagnostics and management features avoid the need for additional truck rolls and save even more operational expenses. They are especially convenient when dealing with very remote and hard-to-reach sites (e.g., an underground installation) that require expensive truck rolls. Some vendors have made remote installation and management of these modules even more accessible through smartphone app interfaces.
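To make the idea of headend read-write control more concrete, here is a purely hypothetical sketch of a register read/write exchange between a headend and a tail-end module. The class, register names, and addresses are invented for illustration; they do not describe the actual NarroWave protocol, which is proprietary and not detailed in this article.

```python
# Hypothetical illustration only: a toy register map and read/write exchange.
# None of these names or values describe the real NarroWave protocol.
from dataclasses import dataclass, field

@dataclass
class TailEndModule:
    registers: dict = field(default_factory=lambda: {
        "wavelength_channel": 21,      # illustrative channel index
        "tx_power_dbm": 1.0,
        "alarm_flags": 0b0000,
    })

    def read(self, name: str):
        return self.registers[name]

    def write(self, name: str, value):
        self.registers[name] = value

# The headend reads diagnostics and retunes the remote module from the central office.
remote = TailEndModule()
print("Current channel:", remote.read("wavelength_channel"))
remote.write("wavelength_channel", 34)    # remotely retune the tail-end module
remote.write("tx_power_dbm", 0.0)         # trim the launch power
print("Alarms:", bin(remote.read("alarm_flags")))
```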
Takeaways
With these advantages, self-tuning modules can help rethink how optical access networks are built and maintained. They minimize the network’s time-to-service by eliminating additional installation tasks such as manual tuning and record verification and reducing the potential for human error. They are host-agnostic and can plug into any third-party host equipment. Furthermore, tunability standards will allow modules from different vendors to communicate with each other, avoiding compatibility issues and simplifying upgrade choices. Finally, the communication channels used in self-tuning can also become channels for remote diagnostics and management, simplifying network operation even further.
Self-tuning modules are bound to make optical network deployment and operation faster, simpler, and more affordable. In our next article, we will elaborate on how to customize self-tuning modules to better fit the needs of specific networks.
Tags: access networks, DWDM, fixed access networks, flexible, G.metro, Integrated Photonics, OPEX, optical transceivers, photonic integration, Photonics, pluggables, remote diagnostics, remote management, self-tuning, Smart Tunable MSA, Transceivers, tuneability
Coherent Optics for AI
Artificial intelligence (AI) will have a significant role in making optical networks more scalable, affordable, and sustainable. It can gather information from devices across the optical network to identify patterns and make decisions independently without human input. By synergizing with other technologies, such as network function virtualization (NFV), AI can become a centralized management and orchestration network layer. Such a setup can fully automate network provisioning, diagnostics, and management, as shown in the diagram below.
However, artificial intelligence and machine learning algorithms are data-hungry. To work optimally, they need information from all network layers and ever-faster data centers to process it quickly. Pluggable optical transceivers thus need to become smarter, relaying more information back to the AI central unit, and faster, enabling increased AI processing.
The Need for Faster Transceivers
Optical transceivers are crucial in developing better AI systems by facilitating the rapid, reliable data transmission these systems need to do their jobs. High-speed, high-bandwidth connections are essential to interconnect data centers and supercomputers that host AI systems and allow them to analyze a massive volume of data.
In addition, optical transceivers are essential for facilitating the development of artificial intelligence-based edge computing, which entails relocating compute resources to the network’s periphery. This is essential for facilitating the quick processing of data from Internet-of-Things (IoT) devices like sensors and cameras, which helps minimize latency and increase reaction times.
400 Gbps links are becoming the standard across data center interconnects, but providers are already considering the next steps. LightCounting forecasts significant growth in the shipments of dense-wavelength division multiplexing (DWDM) ports with data rates of 600G, 800G, and beyond in the next five years. We discuss these solutions in greater detail in our article about the roadmap to 800G and beyond.
The Need for Telemetry Data
Mobile networks now and in the future will consist of a massive number of devices, software applications, and technologies. Self-managed, zero-touch automated networks will be required to handle all these new devices and use cases. Realizing this full network automation requires two vital components.
- Artificial intelligence and machine learning algorithms for comprehensive network automation: For instance, AI in network management can drastically cut the energy usage of future telecom networks.
- Sensor and control data flow across all network model layers, including the physical layer: As networks grow in size and complexity, the management and orchestration (MANO) software needs more degrees of freedom and dials to turn.
These goals require smart optical equipment and components that provide comprehensive telemetry data about their status and the fiber they are connected to. The AI-controlled centralized management and orchestration layer can then use this data for remote management and diagnostics. We discuss this topic further in our previous article on remote provisioning, diagnostics, and management.
For example, a smart optical transceiver that fits this centralized AI-management model should relay data to the AI controller about fiber conditions. Such monitoring is not just limited to finding major faults or cuts in the fiber but also smaller degradations or delays in the fiber that stem from age, increased stress in the link due to increased traffic, and nonlinear optical effects. A transceiver that could relay all this data allows the AI controller to make better decisions about how to route traffic through the network.
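As an illustration of the kind of telemetry payload such a smart transceiver might report upward, the sketch below defines a simple record with the metrics mentioned above. The field names and thresholds are assumptions for the example, not a standardized schema.

```python
from dataclasses import dataclass

@dataclass
class TransceiverTelemetry:
    module_id: str
    case_temperature_c: float
    tx_power_dbm: float
    rx_power_dbm: float
    pre_fec_ber: float                 # bit error ratio before error correction
    chromatic_dispersion_ps_nm: float

    def needs_attention(self) -> bool:
        """Crude health check with illustrative thresholds."""
        return self.pre_fec_ber > 1e-3 or self.rx_power_dbm < -20.0

sample = TransceiverTelemetry("edge-07", 48.5, 0.5, -12.3, 2.4e-4, 180.0)
print(sample.needs_attention())   # False -> the link looks healthy to the AI controller
```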
A Smart Transceiver to Rule All Network Links
After relaying data to the AI management system, a smart pluggable transceiver must also switch parameters to adapt to different use cases and instructions given by the controller.
Let’s look at an example of forward error correction (FEC). FEC makes the coherent link much more tolerant to noise than a direct detect system and enables much longer reach and higher capacity. In other words, FEC algorithms allow the DSP to enhance the link performance without changing the hardware. This enhancement is analogous to imaging cameras: image processing algorithms allow the lenses inside your phone camera to produce a higher-quality image.
A smart transceiver and DSP could switch among different FEC algorithms to adapt to network performance and use cases. Let’s look at the case of upgrading a long metro link of 650km running at 100 Gbps with open FEC. The operator needs to increase that link capacity to 400 Gbps, but open FEC could struggle to provide the necessary link performance. However, if the transceiver can be remotely reconfigured to use a proprietary FEC standard, the transceiver will be able to handle this upgraded link.
Reconfigurable transceivers can also be beneficial to auto-configure links to deal with specific network conditions, especially in brownfield links. Let’s return to the fiber monitoring subject we discussed in the previous section. A transceiver can change its modulation scheme or lower the power of its semiconductor optical amplifier (SOA) if telemetry data indicates a good quality fiber. Conversely, if the fiber quality is poor, the transceiver can transmit with a more limited modulation scheme or higher power to reduce bit errors. If the smart pluggable detects that the fiber length is relatively short, the laser transmitter power or the DSP power consumption could be scaled down to save energy.
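The sketch below captures this kind of policy as a simple decision function: given telemetry about the link, it picks a modulation format, FEC choice, and amplifier setting. The thresholds and option names are illustrative assumptions, not values from any standard or product.

```python
def choose_link_profile(pre_fec_ber: float, fiber_length_km: float) -> dict:
    """Toy policy for a reconfigurable transceiver (illustrative thresholds only)."""
    if pre_fec_ber < 1e-4:
        modulation = "16QAM"          # clean fiber: push more bits per symbol
    elif pre_fec_ber < 1e-3:
        modulation = "QPSK"           # noisier fiber: fall back to a robust format
    else:
        # very noisy link: robust format plus a stronger (e.g. proprietary) FEC
        return {"modulation": "QPSK", "fec": "stronger proprietary FEC",
                "amplifier": "high power"}
    # short links can save energy by dialing the transmitter and DSP down
    amplifier = "low power" if fiber_length_km < 80 else "nominal"
    return {"modulation": modulation, "fec": "open FEC", "amplifier": amplifier}

print(choose_link_profile(pre_fec_ber=5e-5, fiber_length_km=40))
print(choose_link_profile(pre_fec_ber=2e-3, fiber_length_km=650))
```

In a real network, the AI management layer would apply such a policy centrally and push the chosen profile to the transceiver over its management interface.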
Takeaways
Optical networks will need artificial intelligence and machine learning to scale more efficiently and affordably to handle the increased traffic and connected devices. Conversely, AI systems will also need faster pluggables than before to acquire data and make decisions more quickly. Pluggables that fit this new AI era must be fast, smart, and adapt to multiple use cases and conditions. They will need to scale up to speeds beyond 400G and relay monitoring data back to the AI management layer in the central office. The AI management layer can then program transceiver interfaces from this telemetry data to change parameters and optimize the network.
Tags: 400 Gbps links, artificial intelligence, coherent optics, data centers, Dense-wavelength division multiplexing (DWDM), diagnostics, Edge computing, EFFECT Photonics, energy efficiency, forward error correction (FEC), Internet of Things (IoT), Machine learning algorithms, Network automation, Network function virtualization (NFV), network optimization, optical networks, optical transceivers, Reconfigurable transceivers, remote management, SDN control, telemetry data
An Intro to Data Center Interconnects
Data center interconnects (DCIs) refer to the networking technologies and solutions that enable seamless communication and data exchange between geographically dispersed data centers. As organizations increasingly rely on distributed computing resources and adopt cloud services, the need for efficient and high-speed connections between data centers becomes crucial. DCIs facilitate the transfer of data, applications, and workloads across multiple data center locations, ensuring optimal performance, redundancy, and scalability.
The Impact of Data Center Interconnects
The impact of robust data center interconnects on the operation of data centers is profound. Firstly, DCIs enhance overall reliability and availability by creating a resilient network infrastructure. In the event of a hardware failure or unexpected outage in one data center, DCIs enable seamless failover to another data center, minimizing downtime and ensuring continuous operations. This redundancy is vital for mission-critical applications and services.
Secondly, DCIs contribute to improved performance and reduced latency. By connecting data centers with high-speed, low-latency links, organizations can efficiently distribute workloads and resources, optimizing response times for users and applications. This is particularly important for real-time applications, such as video streaming, online gaming, and financial transactions.
Furthermore, DCIs support efficient data replication and backup strategies. Data can be synchronized across geographically distributed data centers, ensuring data integrity and providing effective disaster recovery solutions. This capability is crucial for compliance with regulatory requirements and safeguarding against data loss.
Types of Data Center Interconnects
As shown in the figure below, we can think about three categories of data center interconnects based on their reach:
- Intra-data center interconnects (< 2km)
- Campus data center interconnects (<10km)
- Metro data center interconnects (<100km)
Intra-data center interconnects operate within a single data center facility. These interconnects are designed for short-distance communication within the same data center building or complex. Intra-DCIs are optimized for high-speed, low-latency connections between servers, storage systems, and networking devices within a single data center. They are crucial for supporting the internal communication and workload distribution required for efficient data center operations.
Campus DCIs connect multiple data centers but are typically limited to a campus area, which may include multiple buildings or locations in close proximity. This type of interconnect is suitable for organizations with distributed computing resources that need to be interconnected for redundancy, load balancing, and seamless failover within a campus environment.
Metro DCIs connect data centers that are located in different metropolitan areas or cities. They cover longer distances compared to intra-datacenter and campus interconnects, typically spanning tens of kilometers to a few hundred kilometers.
Metro DCIs are essential for creating a network of interconnected data centers across a metropolitan region. They facilitate data replication, disaster recovery, and business continuity by enabling seamless communication and resource sharing between data centers that may be geographically dispersed but still within a reasonable proximity.
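Expressed as a simple lookup, the three reach categories above reduce to comparing the link distance against their thresholds. The function below only restates those figures; the long-haul fallback label is an addition for completeness, not a category from the text.

```python
def classify_dci(reach_km: float) -> str:
    """Map a link reach to the DCI categories described above."""
    if reach_km < 2:
        return "intra-data center interconnect"
    if reach_km < 10:
        return "campus data center interconnect"
    if reach_km < 100:
        return "metro data center interconnect"
    return "long-haul (beyond the categories discussed here)"

for reach in (0.5, 8, 60, 400):
    print(f"{reach:>6} km -> {classify_dci(reach)}")
```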
The Rise of Edge Data Centers
Various trends are driving the rise of the edge cloud:
- 5G technology and the Internet of Things (IoT): These mobile networks and sensor networks need low-cost computing resources closer to the user to reduce latency and better manage the higher density of connections and data.
- Content delivery networks (CDNs): The popularity of CDN services continues to grow, and most web traffic today is served through CDNs, especially for major sites like Facebook, Netflix, and Amazon. By using content delivery servers that are more geographically distributed and closer to the edge and the end user, websites can reduce latency, load times, and bandwidth costs while increasing content availability and redundancy.
- Software-defined networks (SDN) and Network function virtualization (NFV). The increased use of SDNs and NFV requires more cloud software processing.
- Augmented and virtual reality applications (AR/VR): Edge data centers can reduce streaming latency and improve the performance of AR/VR applications.
Several of these applications require lower latencies than before, and centralized cloud computing cannot deliver those data packets quickly enough. A data center at a town or suburb aggregation point could halve the latency compared to a centralized hyperscale data center. Enterprises with their own on-premises data center can reduce latencies by 12 to 30 times compared to hyperscale data centers.
Cisco estimates that 85 zettabytes of useful raw data were created in 2021, but only 21 zettabytes were stored and processed in data centers. Edge data centers can help close this gap. For example, industries or cities can use edge data centers to aggregate all the data from their sensors. Instead of sending all this raw sensor data to the core cloud, the edge cloud can process it locally and turn it into a handful of performance indicators. The edge cloud can then relay these indicators to the core, which requires a much lower bandwidth than sending the raw data.
Takeaways
In conclusion, Data Center Interconnects (DCIs) play a pivotal role in fostering reliable, available, and high-performing data center operations, ensuring seamless communication and workload distribution. The categorization of DCIs into intra-data center, campus, and metro interconnects reflects their adaptability to various communication needs. The emergence of edge data centers, driven by 5G, IoT, CDNs, SDNs, NFV, and AR/VR applications, addresses the demand for low-latency computing resources.
Positioned strategically at aggregation points, edge data centers efficiently process and relay data, contributing to bandwidth optimization and closing the gap between raw data generation and traditional data center capacities. As organizations navigate a data-intensive landscape, the evolution of DCIs and the rise of edge data centers underscore their critical role in ensuring the seamless, efficient functioning of distributed computing ecosystems.
Tags: 5G technology, Backup strategies, Campus interconnects, Cloud services, Content delivery networks (CDNs), Data center interconnects, Data replication, DCIs, Distributed computing resources, edge data centers, EFFECT Photonics, High-speed connections, Internet of Things (IoT), Intra-data center interconnects, Latency reduction, Metro interconnects, Networking technologies, Performance optimization, Redundancy, Scalability, Software-defined networks (SDN)
From the Lab Boutique to High Volume Production: How to Scale Up Photonics Manufacturing
Photonics, the science and technology of generating, detecting, and manipulating light, has witnessed remarkable progress in recent years. From cutting-edge research in academic labs to breakthrough innovations in startups, photonics is poised to revolutionize various industries, from telecommunications to healthcare. However, despite its tremendous potential, the transition from boutique lab-scale production to high-volume manufacturing remains a significant challenge.
To overcome this hurdle, the photonics industry must draw lessons from the successful scaling of electronics manufacturing. By adopting key strategies and practices that have propelled the electronics industry into the realm of mass production, photonics can pave the way for widespread adoption and integration into our everyday lives.
Learning from Electronics Packaging
A key way to improve photonics manufacturing is to learn from electronics packaging, assembly, and testing methods that are already well-known and standardized. After all, building a new special production line is much more expensive than modifying an existing production flow.
One electronic technique essential to transfer into photonics is ball-grid array (BGA) packaging. BGA-style packaging has grown popular among electronics manufacturers over the last few decades. It places the chip connections under the chip package, allowing more efficient use of space in circuit boards, a smaller package size, and better soldering.
Another critical technique to move into photonics is flip-chip bonding. This process is where solder bumps are deposited on the chip in the final fabrication step. The chip is flipped over and aligned with a circuit board for easier soldering.
These might be novel technologies for photonics developers who have started implementing them in the last five or ten years. However, the electronics industry embraced these technologies 20 or 30 years ago. Making these techniques more widespread will make a massive difference in photonics’ ability to scale up and become as available as electronics.
Adopting BGA-style packaging and flip-chip bonding techniques will make it easier for PICs to survive this soldering process. There is ongoing research and development worldwide, including at EFFECT Photonics, to transfer more electronics packaging methods into photonics. PICs that can handle being soldered to circuit boards allow the industry to build optical subassemblies that are more accessible to the open market and can go into trains, cars, or airplanes.
Supply Chain Optimization
Electronics manufacturers have honed the art of supply chain management to achieve cost-effective and efficient production processes. This includes strategies like just-in-time inventory management, lean manufacturing principles, and global sourcing. In contrast, the photonics industry often faces challenges related to specialized materials and components, resulting in longer lead times and higher costs.
Photonics manufacturers can learn from electronics by implementing supply chain optimization strategies. This involves diversifying sources, streamlining production workflows, and leveraging economies of scale. By fostering strategic partnerships with suppliers and embracing advanced inventory management systems, the photonics industry can overcome the hurdles that have hindered its growth.
The Advantages of Moving to a Fabless Model
Increasing the volume of photonics manufacturing is a big challenge. Some photonic chip developers manufacture their chips in-house within their fabrication facilities. This approach has some substantial advantages, giving component manufacturers complete control over their production process.
However, if a vertically-integrated chip developer wants to scale up in volume, they must make a hefty capital expenditure (CAPEX) in more equipment and personnel. They must develop new fabrication processes as well as develop and train personnel. Fabs are not only expensive to build but to operate. Unless they can be kept at nearly full utilization, operating expenses (OPEX) also drain the facility owners’ finances.
Especially in the case of an optical transceiver market that is not as big as that of consumer electronics, it’s hard not to wonder whether that initial investment is cost-effective. For example, LightCounting estimates that 55 million optical transceivers were sold in 2021, while the International Data Corporation estimates that 1.4 billion smartphones were sold in 2021. The latter figure is 25 times larger than that of the transceiver market.
Electronics manufacturing experienced a similar problem during their 70s and 80s boom, with smaller chip start-ups facing almost insurmountable barriers to market entry because of the massive CAPEX required. Furthermore, the large-scale electronics manufacturing foundries had excess production capacity that drained their OPEX. The large-scale foundries ended up selling that excess capacity to the smaller chip developers, who became fabless. In this scenario, everyone ended up winning. The foundries serviced multiple companies and could run their facilities at total capacity, while the fabless companies could outsource manufacturing and reduce their expenditures.
This fabless model, with companies designing and selling the chips but outsourcing the manufacturing, could also be the way to go for photonics. The troubles of scaling up for photonics developers are outsourced and (from the perspective of the fabless company) become as simple as putting a purchase order in place. Furthermore, the fabless model allows photonics developers to concentrate their R&D resources on the end market. This might be the most straightforward way for photonics to move into million-scale volumes.
Takeaways
Scaling up photonics manufacturing from boutique labs to high-volume production is a pivotal step in realizing the full potential of this transformative technology. By taking a page from the electronics industry’s playbook, focusing on standardization, optimizing the supply chain, and embracing automation, the photonics industry can overcome the challenges that have impeded its growth. With concerted efforts and strategic investments, the future of photonics looks poised for a paradigm shift, bringing us closer to a world illuminated by the power of light.
Tags: automation, BGA packaging, EFFECT Photonics, electronics, Fabless model, Flip-chip bonding, Global sourcing, innovation, Integrated circuits, Integrated Photonics, Just-in-time inventory, Lean manufacturing, Manufacturing, Optical transceiver, Photonics, PICs (Photonic Integrated Circuits), R&D, Scaling up, Semiconductor, Standardization, supply chain
The Role of Photonics in Advancing Smart Cities and IoT Networks
In the era of rapid urbanization and technological advancement, the challenges faced by smart cities and IoT networks are more pressing than ever. With the increasing demand for efficient, interconnected systems, the need for a robust and reliable infrastructure is paramount. Enter photonics, a cutting-edge technology that harnesses the power of light to revolutionize communication and sensing systems. This article explores how photonics offers a promising solution to the complex problems faced by smart cities and IoT networks.
Enhanced Data Transmission and Bandwidth:
One of the foremost challenges in smart cities and IoT networks is the sheer volume of data that needs to be processed and transmitted in real-time. Traditional electronic systems often struggle to keep up with this demand, leading to bottlenecks and inefficiencies. Photonics, on the other hand, utilizes light to transmit data, enabling significantly higher bandwidths and faster transmission speeds.
Fiber optic networks, a prime example of photonics application, have already proven their mettle in providing high-speed internet to urban areas. By transmitting data in the form of light pulses through optical fibers, these networks can achieve gigabit speeds, ensuring seamless communication between devices, sensors, and systems. This enhanced data transmission capability is crucial for enabling the real-time monitoring and control required in smart cities and IoT networks.
Robust Sensing and Monitoring Systems
Smart cities rely heavily on an extensive network of sensors to monitor various parameters like air quality, traffic flow, temperature, and more. Photonics plays a pivotal role in enhancing the capabilities of these sensing systems. For instance, photonic sensors can provide highly accurate measurements using techniques such as interferometry and spectroscopy.
Furthermore, photonics enables the development of LiDAR (Light Detection and Ranging) systems, which use laser pulses to create detailed 3D maps of urban environments. These maps are invaluable for applications like autonomous vehicles, urban planning, and disaster response. The precision and reliability of photonics-based sensing technologies are indispensable for the effective functioning of smart cities.
Reduced Latency and Real-time Responsiveness:
In smart cities and IoT networks, milliseconds matter. Applications such as autonomous vehicles, healthcare monitoring, and smart grid management require near-instantaneous response times. Photonics plays a crucial role in minimizing latency.
By using light-based communication, photonics enables data to travel at nearly the speed of light, significantly reducing the time it takes for information to reach its destination. This real-time responsiveness is essential for applications that demand split-second decision-making. Whether it’s ensuring the safety of pedestrians on busy streets or optimizing energy distribution in a smart grid, the low latency provided by photonics is a game-changer.
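For a sense of scale: in standard single-mode fiber with a group index of roughly 1.47, light covers a kilometer in about five microseconds, so distance translates almost directly into latency. The short sketch below computes that one-way propagation delay for a few illustrative distances; the group index is a typical value, not a figure from the text.

```python
C_KM_PER_S = 299_792.458      # speed of light in vacuum, km/s
GROUP_INDEX = 1.47            # typical value for standard single-mode fiber

def fiber_delay_ms(distance_km: float) -> float:
    """One-way propagation delay through optical fiber."""
    return distance_km * GROUP_INDEX / C_KM_PER_S * 1e3

for d in (1, 50, 1000):       # illustrative distances in km
    print(f"{d:>5} km -> {fiber_delay_ms(d):.3f} ms one-way")
```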
Takeaways
In the face of the challenges posed by urbanization and the demand for interconnected systems, photonics emerges as a game-changing technology for smart cities and IoT networks. Its ability to facilitate high-speed data transmission, promote secure communication, and ensure scalability positions it as a key enabler of the smart cities of the future.
As we continue to advance towards a more connected and sustainable urban landscape, harnessing the potential of photonics will be instrumental in overcoming the hurdles that lie ahead. By integrating photonics into the fabric of smart cities and IoT networks, we pave the way for a more efficient, resilient, and environmentally conscious urban future.
Tags: Autonomous Vehicles, bandwidth, Communication Systems, Data transmission, EFFECT Photonics, Fiber-optic networks, Healthcare Monitoring, high-speed internet, IoT Networks, Latency reduction, LIDAR, Photonic Sensors, Photonics, Real-time Responsiveness, Scalability, Sensing Systems, Smart Cities, Smart Grid Management, Sustainable Urban Development, Technological Advancement, Urbanization
What is a Wavelength Locker: Ensuring Precision in Coherent Optical Communication Systems
In the dynamic landscape of modern communication systems, the demand for high precision and low noise lasers has become a critical factor in ensuring seamless data transmission. This requirement is particularly evident in the realm of dense wavelength division multiplexing (DWDM) systems, where the convergence of multiple data streams necessitates a level of precision that borders on the extraordinary.
In DWDM systems, data is transmitted over a single optical fiber using different wavelengths of light. Each wavelength serves as an independent channel, allowing for the simultaneous transmission of multiple streams of information. However, for this intricate dance of data to be successful, lasers must emit light at precisely defined wavelengths. Imagine a scenario where even a slight deviation in wavelength occurs – this seemingly minor discrepancy can lead to signal interference, resulting in a loss of data integrity and system efficiency.
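To make "precisely defined wavelengths" concrete: DWDM channels are typically laid out on the ITU-T G.694.1 frequency grid, anchored at 193.1 THz with fixed spacings such as 100 GHz or 50 GHz. The sketch below computes a few channel frequencies and their approximate wavelengths; the channel indices chosen are arbitrary.

```python
C = 299_792_458.0           # speed of light, m/s
ANCHOR_THZ = 193.1          # ITU-T G.694.1 grid anchor frequency

def channel_frequency_thz(n: int, spacing_ghz: float = 100.0) -> float:
    """Frequency of grid channel n (n can be negative) around the anchor."""
    return ANCHOR_THZ + n * spacing_ghz / 1000.0

def wavelength_nm(freq_thz: float) -> float:
    return C / (freq_thz * 1e12) * 1e9

for n in (-2, 0, 3):                      # arbitrary channel indices
    f = channel_frequency_thz(n)
    print(f"channel {n:+d}: {f:.2f} THz  (~{wavelength_nm(f):.2f} nm)")
```

With 100 GHz spacing, adjacent channels sit only about 0.8 nm apart, which is why a drift of even a fraction of a nanometer can push a laser into its neighbor's territory.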
This is where a crucial component steps into the spotlight: the wavelength locker. Its role in this complex interplay of photons cannot be overstated. By providing a mechanism to stabilize the wavelength emitted by semiconductor lasers, the wavelength locker ensures that each channel operates at its specified wavelength, thereby maintaining the integrity of the optical communication system.
Understanding the Wavelength Locker
A wavelength locker, in essence, serves as the guardian of precision in optical communication systems. Its operation hinges on a feedback loop that continuously monitors the emitted wavelength and makes adjustments as necessary. This dynamic process guarantees that the laser operates at its specified wavelength, irrespective of environmental conditions or operational variations.
In essence, the wavelength locker acts as a sentinel, steadfastly guarding against wavelength drift, temperature fluctuations, and external disturbances. This level of stability is paramount in DWDM systems, where even the slightest deviation from the target wavelength can have cascading effects on system performance.
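Conceptually, the locker's feedback loop is simple: measure the wavelength error against the target, then nudge a tuning actuator (such as temperature or current) in the opposite direction. Below is a deliberately simplified proportional-control sketch of that loop; the gain, actuator model, and numbers are invented for illustration and ignore real locker optics such as etalon-based discriminators.

```python
# Simplified wavelength-locker loop: proportional correction toward a target.
# The actuator model and gain are invented for illustration only.
TARGET_NM = 1550.12
GAIN = 0.5                  # fraction of the error corrected per iteration

def measure_wavelength(actual_nm: float) -> float:
    return actual_nm        # a real locker derives this from etalon/photodiode signals

actual_nm = 1550.16         # laser has drifted 40 pm high (e.g. after a temperature change)
for step in range(6):
    error = measure_wavelength(actual_nm) - TARGET_NM
    actual_nm -= GAIN * error          # nudge the tuning actuator
    print(f"step {step}: error = {error * 1e3:+.2f} pm")
```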
Qualities of an Effective Wavelength Locker
The effectiveness of a wavelength locker is contingent upon several key characteristics. First and foremost, it must seamlessly integrate into the broader system architecture. This ensures that the introduction of the locker does not introduce additional complexities or inefficiencies into the setup.
Moreover, a low loss characteristic is of paramount importance. The wavelength locker must have a minimal impact on the signal strength to avoid degrading the overall performance of the optical communication system. Any additional attenuation introduced by the locker can lead to signal degradation, which is simply unacceptable in high-speed data transmission environments.
Additionally, simplicity in manufacturing and packaging is a critical factor. A wavelength locker that is easy to produce and package not only reduces production costs but also paves the way for widespread adoption in the industry. This accessibility is pivotal in driving advancements in optical communication technology.
Discrete vs Integrated Wavelength Lockers
External wavelength lockers, as the name suggests, are separate entities from the laser chip itself. They function as standalone components within the optical communication system. This design provides a degree of flexibility in choosing the laser source, allowing compatibility with a wide range of lasers. However, the additional components and interfaces can introduce complexity and potential points of failure.
Conversely, internal wavelength lockers are directly integrated onto the laser chip. This integration offers advantages such as a reduced footprint, simplified assembly, and potentially lower overall costs. However, this integration means that the choice of wavelength locker is tied to the specific laser source, limiting flexibility in system design. This trade-off between flexibility and integration efficiency is a crucial consideration in designing high-performance optical communication systems.
Takeaways
In the realm of coherent optical communication systems, precision and stability are the linchpins of success. The wavelength locker emerges as a silent sentinel, ensuring that lasers emit light at their designated wavelengths and enabling the seamless transmission of data in DWDM systems.
An effective wavelength locker embodies qualities like easy integration, low loss, and simplicity in manufacturing and packaging. The choice between external and internal lockers depends on the specific requirements of the system, balancing factors like flexibility, footprint, and cost.
Tags: Photonics
The Basics of Laser Safety
Lasers have revolutionized our lives, bringing advancements in various fields, including industry, medicine, research, and entertainment. From laser pointers used for presentations to powerful laser-cutting machines, these devices have become integral to our modern world. However, it is crucial to acknowledge that lasers can pose potential risks and hazards if not handled carefully.
This article will summarize some of the basics of laser safety. We must emphasize that this article does not replace proper, comprehensive laser safety training. If you are going to work with lasers, please make sure to take the training provided by your company or educational institution.
Principles of Lasers
Before delving into laser safety, it is important to establish a foundation of knowledge regarding the basic principles of lasers and the various types available. The term laser stands for Light Amplification by Stimulated Emission of Radiation. It is an optical device that emits coherent, monochromatic, and intense light.
There are several types of lasers, each with unique characteristics and applications. Common types include gas lasers (e.g., helium-neon, carbon dioxide), solid-state lasers (e.g., Nd:YAG, ruby), semiconductor lasers (e.g., laser diodes), and dye lasers. Understanding the specific characteristics of the type of laser you will use enables a more comprehensive approach to laser safety.
Exploring potential hazards and safety measures
By their very nature, lasers possess inherent hazards that require careful attention and precautionary measures. Failure to adhere to laser safety guidelines can result in serious injuries. Some potential hazards associated with lasers include eye injuries, skin burns, fire hazards, and even electrical and chemical hazards in some laser systems. The table below summarizes many of the common laser hazards.
The eyes are particularly susceptible to laser hazards, as even brief exposure to high-powered lasers can cause permanent damage. Therefore, using appropriate protective eyewear is crucial when working with lasers. Laser safety eyewear should have optical density (OD) ratings that match the laser’s wavelength and power output, effectively attenuating the laser beam to a safe level.
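Optical density quantifies how strongly the eyewear attenuates light at a given wavelength: OD = log10(incident power / transmitted power), so each OD unit is another factor of ten. The short sketch below turns an incident beam power and an OD rating into transmitted power; the beam power used is an arbitrary example, and actual eyewear selection should always follow a proper laser safety assessment.

```python
def transmitted_power_mw(incident_mw: float, optical_density: float) -> float:
    """Power left after eyewear with the given OD: attenuation factor is 10**OD."""
    return incident_mw / (10 ** optical_density)

# Arbitrary example: a 500 mW beam viewed through OD 6 eyewear
print(f"{transmitted_power_mw(500.0, 6):.6f} mW")   # 0.000500 mW, i.e. 0.5 uW
```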
Controlling laser beams is another critical aspect of laser safety. Using beam shutters, beam dumps, and beam stops helps prevent accidental exposure to laser radiation. Proper alignment and focusing of lasers are essential to avoid unintended exposure. Moreover, establishing controlled environments, such as laser interlock systems and restricted access areas, ensures that laser operations are conducted safely and that individuals are protected from accidental exposure.
Regular maintenance and calibration of laser systems are crucial to ensure their safe operation. Adequate training and awareness programs should be implemented to educate personnel on laser safety practices and emergency procedures. Additionally, risk assessments should be performed to identify and mitigate potential hazards specific to each laser application.
Goldfinger and How NOT to Use Lasers
In our quest to unravel the importance of laser safety, let us take a lighthearted detour into the world of James Bond. In the iconic film “Goldfinger,” Bond finds himself in a precarious situation, strapped to a table with a laser slowly inching toward his noble parts. While the scene captivates us with its tension and suspense, it also serves as a comical reminder of the laser safety rules that Goldfinger blatantly disregarded.
First and foremost, Goldfinger’s choice of protective eyewear was utterly nonexistent. Goldfinger failed to equip himself and his minions with the necessary eye protection as the laser beam whirred closer to Bond. Moreover, Goldfinger’s complete lack of beam control left much to be desired. Unencumbered by beam shutters or safeguards, the laser roamed freely, putting both Bond and Goldfinger himself at risk. Proper beam control, including beam dumps and beam stops, would have ensured that the laser’s path remained precisely controlled and confined, avoiding unintended encounters.
Takeaways
As lasers continue to play an essential role in advancing technology, it is crucial to prioritize laser safety to protect the well-being of laser users and those around them. This safety knowledge includes:
- Understanding the basic principles of lasers and the different types available,
- Recognizing potential hazards and implementing essential safety measures, such as wearing protective eyewear, controlling laser beams, and establishing controlled environments, and
- Fostering a culture of laser safety through proper training, maintenance, and awareness.
Demystifying the Role of Photonics in 5G Networks
The dawn of the 5G era promises a transformative leap in wireless communication, offering unparalleled…
The dawn of the 5G era promises a transformative leap in wireless communication, offering unparalleled speeds, low latency, and vast device connectivity. However, to fully realize the potential of 5G, we must overcome many technological challenges. Traditional methods, reliant on electronic signals, face limitations in terms of speed and capacity. This is where photonics, the science of generating, detecting, and manipulating light, emerges as a game-changer.
Photonics offers a promising solution to the hurdles that traditional technologies encounter. By harnessing the unique properties of light, photonics enables us to propel 5G networks to new heights. This article will delve into the intricacies of photonics and its pivotal role in the 5G revolution.
Where to Find Photonics in Mobile Networks
In the intricate web of mobile wireless networks, photonics plays a critical role in the form of optical fibers and components. Optical fibers, slender threads of glass or plastic, are the backbone of modern communication. They transmit data over long distances through light pulses, ensuring minimal signal loss. This feature is particularly crucial in 5G networks, where signals must travel over extended distances with minimal degradation.
Moreover, photonics components find their place in various critical points of a mobile network. For instance, photodetectors are used to convert optical signals back into electrical signals at the receiving end. This process is vital in ensuring that the transmitted data reaches its intended destination accurately and efficiently.
One of the revolutionary applications of photonics in mobile networks is Radio over Fiber (RoF) technology. Traditionally, radio signals travel through coaxial cables, facing signal degradation over long distances.
RoF, in contrast, converts these radio signals into optical signals, which can be transmitted via optical fibers with minimal loss. This approach extends the reach of wireless signals and enables more efficient distribution of signals across a network. This means that even in rural areas or distant corners of a city, 5G signals can maintain their speed and strength.
Photonics’ Impact on Latency
Latency, the time it takes for data to travel from its source to its destination, is a critical metric in modern networks. Reducing latency is paramount in the context of 5G and emerging technologies like edge computing. Consider a self-driving car navigating city streets, relying on real-time data from various sensors. Any delay in data transmission could result in traffic accidents or missed turns. The table below shows some of the latency requirements for these edge computing cases.
Types of Edge Data Centres
| Types of Edge | Data center | Location | Number of DCs per 10M people | Average Latency | Size |
|---|---|---|---|---|---|
| On-premises edge | Enterprise site | Businesses | NA | 2-5 ms | 1 rack max |
| Network (Mobile) | Tower edge | Tower | Nationwide: 3000 | 10 ms | 2 racks max |
| | Outer edge | Aggregation points | Town: 150 | 30 ms | 2-6 racks |
| | Inner edge | Core | Major city: 10 | 40 ms | 10+ racks |
| Regional edge | Regional edge | Regional | Major city: 100 | 50 ms | 100+ racks |
| Not edge | Not edge | Hyperscale | State/national: 1 | 60+ ms | 5000+ racks |
Photonics holds the key to mitigating latency issues. Unlike electrons in traditional electronic systems, photons, being massless particles, travel at the speed of light. In the context of data transmission, this speed is unbeatable. This advantage is pivotal in a 5G network where data needs to travel quickly over long distances. Whether it’s in gaming applications, telemedicine, or smart infrastructure, the minimal latency provided by photonics ensures that information reaches its destination almost instantly.
Furthermore, photonics plays a significant role in the realm of edge computing. With edge computing, data processing occurs closer to the data source rather than relying on a centralized data center. Photonics allows for efficient and high-speed communication between these edge devices, facilitating real-time decision-making. This is indispensable for applications like smart cities, where traffic signals, surveillance cameras, and autonomous vehicles must communicate seamlessly, making cities safer and more efficient.
Photonics’ Impact on Capacity
The capacity of a network is defined by its ability to carry information. In this domain, light outshines electronic signals. The fundamental properties of light, specifically its high frequency and bandwidth, allow it to carry an immensely greater amount of information than electrical signals.
To put it into perspective, consider a network transportation system. Traditional electronic signals are akin to single-lane roads, limited in the amount of traffic they can accommodate. Through a technique called Wavelength Division Multiplexing (WDM), photonics transforms this network into a multi-lane highway.
WDM enables multiple streams of data, each encoded in a different wavelength of light, to travel concurrently through the same optical fiber. It’s akin to having several lanes on a highway where each lane carries a different type of vehicle – be it cars, trucks, or motorcycles. This massively increases the network’s capacity, allowing it to handle many users and devices simultaneously.
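To put a rough number on the analogy, the aggregate capacity of a fiber is simply the number of wavelength channels multiplied by the data rate each channel carries. The sketch below uses illustrative values, not the parameters of any particular DWDM system.

```python
def wdm_capacity_gbps(channels: int, rate_per_channel_gbps: float) -> float:
    """Aggregate capacity of one fiber carrying several WDM wavelength channels."""
    return channels * rate_per_channel_gbps

# Illustrative example: 40 DWDM wavelengths, each carrying a 100 Gbps signal,
# yield 4 Tbps over a single strand of fiber.
print(wdm_capacity_gbps(40, 100))  # 4000.0 Gbps
```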
Conclusion
As we stand on the cusp of the 5G revolution, understanding the pivotal role of photonics is imperative. By harnessing the unique properties of light, we can overcome the limitations that traditional technologies face. Through optical fibers and RoF technology, photonics extends the reach and efficiency of mobile networks, ensuring that even remote areas are connected with the full capabilities of 5G.
Moreover, the impact of photonics on latency and capacity is profound. The speed of light, harnessed by photonics, ensures lightning-fast data transmission, crucial for applications like remote surgery and autonomous vehicles. Additionally, the ability of light to carry a vast amount of information positions photonics as the linchpin of network capacity enhancement.
Tags: 5G Networks, 5G revolution, Data transmission, Device connectivity, Edge computing, Edge devices, Game-changer, Information transmission, Latency reduction, mobile networks, Network capacity, Optical fibers, Photodetectors, Photonics, Radio over Fiber, RoF technology, Signal degradation, Smart infrastructure, Wavelength Division Multiplexing, Wireless communication
How Optical Networks Illuminate Remote Work
In today’s fast-paced digital landscape, the concept of work has undergone a remarkable transformation. The…
In today’s fast-paced digital landscape, the concept of work has undergone a remarkable transformation. The rise of remote work and telecommuting has transcended traditional office settings, allowing individuals to contribute from the comfort of their homes or other remote locations. This shift has been expedited by technological advancements, with reliable and high-speed internet connectivity emerging as the lifeblood of this new work paradigm. As the limitations of traditional networks become increasingly evident in meeting the demands of remote work, the spotlight has turned to optical networks as a solution that not only addresses these challenges but also propels remote and hybrid work environments to new heights.
Traditional networks, predominantly reliant on copper-based infrastructure, were designed to cater to the needs of an era when remote work was more of an exception than a rule. As remote work gained momentum, these networks often struggled to cope with the demands of simultaneous video conferencing, data transfers, and cloud-based applications. Slow speeds, bandwidth limitations, and inconsistent connectivity resulted in frustrated workers and disrupted workflows. These limitations became glaringly evident during peak usage times, when the networks would buckle under the strain, impeding productivity and causing communication breakdowns.
Speed and Scalability Advantages of Optical Networks
Enter optical networks, leveraging the power of optical fiber to revolutionize remote work. As shown in the figure below, optical networks utilize light pulses to transmit data, offering unparalleled speed and scalability. Unlike traditional copper-based networks, which are constrained by the physical limitations of the medium, optical networks enable data to travel at the speed of light, allowing for seamless and rapid communication between remote workers and their colleagues, clients, or collaborators.
The incredible bandwidth capacity of optical fibers means that even data-intensive tasks like high-definition video streaming and large file transfers can be accomplished without a hint of lag. This inherent speed boosts productivity and enhances the quality of virtual interactions, fostering a sense of connectedness that bridges geographical divides. The robustness of optical networks is further exemplified by their ability to handle ever-increasing workloads. As remote workforces expand and data demands grow, optical networks can effortlessly accommodate these needs without compromising performance, making them an ideal companion for the modern remote work landscape.
Low Latency for Real-Time Communication
The latency between data transmission and reception has long been a thorn in the side of remote workers. Delays in video conferences, voice calls, and collaborative applications can hinder effective communication and teamwork. Optical networks come to the rescue with their remarkably low latency characteristics. The efficiency of transmitting data via light signals ensures that delays are minimized, enabling real-time interactions that simulate face-to-face communication.
Remote workers can engage in spontaneous discussions, contribute ideas during brainstorming sessions, and provide instant feedback without the frustrating lag that often plagues traditional networks. This low latency factor improves the remote work experience and lays the groundwork for a future where virtual reality and augmented reality applications become integral to remote collaboration. The near-instantaneous data transfer facilitated by optical networks facilitates a sense of presence, allowing remote workers to feel like active participants in the shared digital space.
Eco-Friendly and Energy-Efficient Networks
In an era marked by heightened environmental consciousness, the eco-friendliness of technology solutions is a significant consideration. Optical networks shine in this regard as well. The energy consumption of optical networks is notably lower than that of traditional networks.
When electricity moves through a wire or coaxial cable, it encounters resistance, which leads to energy loss in the form of heat. Conversely, light experiences much less resistance when traveling through optical fiber, resulting in significantly lower energy loss during data transmission. As shown in the figure below, this energy loss gets exponentially worse with faster (i.e., higher-frequency) signals that can carry more data. Networks based on electrical signals also require more signal boosters and repeaters at regular intervals to maintain data integrity over long distances.
These devices demand substantial energy inputs and contribute to a larger carbon footprint. In contrast, optical networks transmit data over longer distances without the need for frequent signal regeneration, resulting in reduced energy consumption and lower emissions. By adopting optical networks, companies can enhance their remote work capabilities and contribute to sustainable practices that benefit the planet.
Conclusion
As remote work and hybrid work models become the norm rather than the exception, the importance of robust and reliable internet connectivity cannot be overstated. With their limitations in speed, scalability, latency, and energy efficiency, traditional networks have struggled to meet the demands of this evolving landscape. Optical networks, powered by the prowess of optical fiber technology, illuminate the path forward for remote work.
With their speed and scalability, low latency attributes, and eco-friendly characteristics, optical networks have addressed the challenges that once hindered remote work’s potential. Optical networks have unlocked new possibilities, allowing remote workers to seamlessly collaborate, communicate, and contribute in real time, regardless of their location.
Tags: bandwidth limitations, cloud-based applications, connectivity, copper-based infrastructure, data transfers, eco-friendly networks, EFFECT Photonics, energy-efficient technology, fiber optic technology, high-speed internet, Low latency, optical networks, real-time communication, remote work, Speed of light, sustainable practices, telecommuting, traditional networks, video conferencing, virtual interactions, work paradigm
The Highways of Light: How Optical Fiber Works
Optical fibers revolutionized how we transmit data, enabling faster long-distance connections. These slender strands of…
Optical fibers revolutionized how we transmit data, enabling faster long-distance connections. These slender strands of glass or plastic carry light pulses and serve as the backbone of modern telecommunication networks. Optical fibers have found applications beyond communications, including imaging, sensing, and medicine, further showcasing their versatility and impact in various fields.
Early optical fibers suffered significant losses during transmission, limiting their practicality for long-haul communication. In 1966, Charles Kao and George Hockham proposed that impurities in the glass were responsible for these losses and that fibers made from high-purity silica glass could bring attenuation down to 20 dB per kilometer, a level practical for telecommunications. This breakthrough, for which Kao received the Nobel Prize in Physics in 2009, kickstarted an era of explosive progress and growth for optical fiber.
In 1970, Corning scientists Robert Maurer, Donald Keck, and Peter Schultz successfully fabricated a glass fiber with an attenuation of 16 dB per kilometer, exceeding the performance benchmark set by Kao and Hockham. Two years later, Corning pushed the envelope further and achieved a loss of 4 dB/km, an order of magnitude improvement over their first effort. By 1979, Nippon Telegraph and Telephone (NTT) had reached a loss of 0.2 dB/km, meaning that only 5% of the light signal was lost over one kilometer. Optical fibers were ready for the world stage and deployed worldwide throughout the 1980s. The first transatlantic optical fiber link, spanning 6000 km, was established in 1988.
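Those decibel figures translate directly into how much light survives a stretch of fiber: attenuation in dB adds linearly with distance while the surviving power falls off exponentially. The sketch below reproduces the milestones above; the 100 km span is simply an illustrative distance.

```python
def surviving_fraction(loss_db_per_km: float, distance_km: float) -> float:
    """Fraction of launched optical power remaining after a given fiber length."""
    total_loss_db = loss_db_per_km * distance_km
    return 10 ** (-total_loss_db / 10)

print(surviving_fraction(20.0, 1))   # ~0.01  -> the 1966 target loses 99% per km
print(surviving_fraction(0.2, 1))    # ~0.955 -> the 1979 fiber loses only ~5% per km
print(surviving_fraction(0.2, 100))  # ~0.01  -> even after 100 km, ~1% still arrives
```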
In this article, we will delve into the fascinating world of optical fibers, exploring how they work and what role optical transceivers play in fiber communications.
The Principle of Total Internal Reflection
Light bends when it passes from one material to another, such as from air to water. This bending occurs due to the change in the speed of light when it encounters a different material, causing the light rays to change direction. How much light changes direction depends on the angle at which it enters the new material and the factor by which the material slows down light. The latter factor is known as the refractive index of the material.
When light traveling in a material of higher refractive index strikes the boundary with a lower-index material at an angle beyond a specific critical angle, the light is entirely reflected back into the high-index material. This phenomenon is called total internal reflection and is the fundamental principle behind the operation of optical fibers.
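The critical angle follows from Snell’s law: sin(theta_c) = n_cladding / n_core, with the angle measured from the normal to the boundary. The sketch below uses refractive indices that are roughly representative of silica fiber; they are illustrative values, not the specification of any particular fiber.

```python
import math

def critical_angle_deg(n_core: float, n_cladding: float) -> float:
    """Critical angle (degrees, from the normal) for total internal reflection."""
    return math.degrees(math.asin(n_cladding / n_core))

# Illustrative indices roughly representative of a silica core and cladding.
print(round(critical_angle_deg(1.48, 1.46), 1))  # ~80.6 degrees
# Rays hitting the core-cladding boundary at angles larger than this are
# totally internally reflected and stay trapped in the core.
```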
Optical Fibers and Total Internal Reflection
Optical fibers consist of a high-refractive-index core surrounded by a low-refractive-index cladding layer. Light entering the fiber core through one end within the fiber’s acceptance angle strikes the core-cladding boundary beyond the critical angle and bounces back into the core every time it reaches that boundary. This behavior effectively traps light inside the core, allowing the light pulses to propagate through the fiber with minimal loss over long distances.
Optical fibers can achieve single-mode or multi-mode operation by carefully engineering the refractive indices of the core and cladding. Single-mode fibers have a small core diameter to transmit only a single optical mode or path. In contrast, multi-mode fibers have a larger core diameter, enabling the propagation of multiple modes simultaneously.
The quality of the light signal degrades as it travels through an optical fiber via a process called dispersion, in which different components of the signal spread out over time. The same phenomenon happens when a prism splits white light into several colors. The single-mode fiber design minimizes dispersion, making single-mode fibers ideal for long-distance communication. Multi-mode fibers, being more susceptible to dispersion, are better suited to short-distance applications such as local area networks (LANs).
The Role of Optical Transceivers in Fiber
Optical transceivers play a crucial role in fiber communications by converting electrical signals into optical signals for fiber transmission and vice versa when the optical signal is received. They act as the interface between fiber optical networks and electronic computing devices such as computers, routers, and switches.
As the name implies, an optical transceiver contains both a transmitter and a receiver within a single module. The transmitter has a laser or LED that generates light pulses representing the electronic data signal. On the receiving end, the receiver detects the optical signals and converts them back into electrical signals, which electronic devices can further process.
There are many different approaches to encode electrical data into light pulses. Two key approaches are intensity modulation/direct detection (IM-DD) and coherent transmission. IM-DD transmission only uses the amplitude of the light signal to encode data, while coherent transmission manipulates three different light properties to encode data: amplitude, phase, and polarization. These additional degrees of modulation allow for faster optical signals without compromising the transmission distance.
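As a rough illustration of why those extra degrees of freedom matter, the sketch below counts the bits carried per transmitted symbol. The formats chosen (NRZ and PAM4 for IM-DD, dual-polarization 16QAM for coherent) are common examples picked for comparison, not a description of any specific transceiver.

```python
import math

def bits_per_symbol(constellation_points: int, polarizations: int = 1) -> float:
    """Bits encoded per symbol for a given constellation size.

    Each symbol selects one of `constellation_points` distinguishable states,
    and using both polarizations of the light doubles the bit count.
    """
    return math.log2(constellation_points) * polarizations

print(bits_per_symbol(2))      # NRZ (intensity only):  1 bit per symbol
print(bits_per_symbol(4))      # PAM4 (intensity only): 2 bits per symbol
print(bits_per_symbol(16, 2))  # Dual-polarization 16QAM (coherent): 8 bits per symbol
```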
Takeaways
Optical fibers have transformed telecommunications in the last 50 years, enabling the rapid and efficient transmission of data over vast distances. By exploiting the principles of total internal reflection, these slender strands of glass or plastic carry pulses of light with minimal loss, ensuring high-speed communication worldwide. Optical transceivers act as the vital link between optical fiber and electronic networking devices, facilitating the conversion of electrical signals to optical signals and vice versa. Optical fibers and transceivers are at the forefront of our interconnected world, serving as the highways of light the digital age drives on.
Tags: amplifiers, cladding, coherent, conversion, core, Corning, detector, Direct Detect, dispersion, EFFECT Photonics, electrical, fiber, laser, Networks, Nobel Prize, noise, NTT, optical, refraction, refractive index, Telecommunications, total internal reflection, Transceiver
What DSPs Do Optics in Space Need?
The space sector is experiencing rapid growth, driven by numerous privately-owned initiatives aiming to reduce…
The space sector is experiencing rapid growth, driven by numerous privately-owned initiatives aiming to reduce the cost of space flight. As this sector expands, it presents new opportunities for developing optical components in space and satellite communications.
However, the challenges associated with the space environment require digital signal processors (DSPs) specifically optimized for optical communications in space. Unlike traditional fiber transmission systems, DSPs for space need to be optimized for receiver sensitivity and signal-to-noise ratio, adapt to signal recovery in the presence of frequent signal losses, and comply with stringent space certification standards.
Signal to Noise Ratio Over Dispersion
In traditional optical fiber communications, the presence of dispersion often leads to signal distortions that require extensive dispersion compensation techniques. Space, however, is essentially a vacuum, free of most of the dispersion-related challenges that fiber introduces. Consequently, DSPs developed for space-based optical communications do not require the same level of dispersion compensation as their fiber counterparts. Instead, the primary focus shifts toward optimizing receiver sensitivity and improving the signal-to-noise ratio (SNR) to ensure reliable data transmission in space.
Optical signals travel enormous distances in space and will be extremely weak when they reach a receiver. Focusing DSP performance on enhancing the receiver’s sensitivity enables more accurate detection of these weak optical signals.
To achieve this goal, DSPs for space-based optical communications need forward error correction (FEC) methods that are even more robust than FEC used for ground links. These methods employ sophisticated coding techniques that introduce redundancy in the transmitted data, allowing for efficient error detection and correction.
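The price of that robustness is overhead: the stronger the code, the larger the fraction of transmitted bits spent on parity rather than payload. The sketch below shows the trade-off with illustrative code rates; they are not the parameters of any specific space-qualified DSP.

```python
def fec_split(line_rate_gbps: float, code_rate: float) -> tuple[float, float]:
    """Split a transmitted line rate into payload and FEC redundancy.

    code_rate is the fraction of transmitted bits carrying payload; the rest
    is parity added so the receiver can detect and correct errors.
    """
    payload = line_rate_gbps * code_rate
    return payload, line_rate_gbps - payload

print(fec_split(100, 0.93))  # (93.0, 7.0)  -> lighter FEC for a clean link
print(fec_split(100, 0.80))  # (80.0, 20.0) -> stronger FEC for weak, noisy space links
```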
Handling Signal Losses in Space
In contrast to optical fiber communications, where signal losses are typically infrequent but often fatal, space-based optical communications experience more frequent but non-fatal signal losses. Therefore, DSPs designed for space must be optimized to handle these intermittent signal losses and ensure efficient signal recovery.
While DSPs for fiber optical links on the ground often rely solely on their error correction methods, DSPs for space-based optical communications may need to request retransmission of the signal more frequently. This retransmission can increase the link latency significantly, which can be compensated in part by using satellite networks to provide more redundancy. You can read more about these satellite networks in one of our previous articles.
Space Certification and Standards
While industrial or commercial temperature certifications are typically sufficient for optical fiber transceivers and their components, DSPs designed for space-based optical communications must adhere to more stringent space certification standards. Space certification ensures that DSPs can withstand the extreme temperatures, vacuum conditions, and radiation exposure prevalent in space environments.
Our testing facilities and partners include capabilities for the temperature cycling and reliability testing needed to match Telcordia standards, such as temperature cycling ovens and chambers with humidity control.
Takeaways
DSPs for space-based optical communications differ from their fiber counterparts in several ways. With little dispersion to compensate for, they prioritize receiver sensitivity and signal-to-noise ratio, backed by more robust forward error correction. They must also handle frequent but non-fatal signal losses, relying more often on retransmission and on the redundancy of satellite networks. Finally, they must meet space certification standards far more stringent than the industrial or commercial qualifications used for fiber transceivers.
Tags: 5G, certification, Coherent Optical Systems, Data center architecture, dispersion, DSP, DSPs, DVT, Electrical Power, free space optics, Networks, noise ratio, Optical System-On-Chip (SoC), optics in space, Photonic Integrated Circuit, Photonics, PICs, Pluggable Transceivers, signal, signal loss, space, standard
How to Test a Photonic Integrated Circuit
As photonic integrated circuits (PICs) continue to play an increasingly vital role in modern communication…
As photonic integrated circuits (PICs) continue to play an increasingly vital role in modern communication systems, understanding their testing process is crucial to ensure their reliability and performance. Chip fabrication is a process with many sources of variability, and therefore, much testing is required to ensure that the fabricated chip agrees with what was originally designed and simulated.
As with most hardware, PIC testing can follow the steps of the EVT/DVT/PVT validation framework to scale the device from a mere prototype to a stage of mass production.
- EVT (Engineering Validation Test): This is the initial phase of hardware testing, where the focus is on validating that the engineering design meets the specifications and requirements.
- DVT (Design Validation Test): This phase aims to ensure that the hardware design is mature and stable, ready for production.
- PVT (Production Validation Test): PVT is conducted using production-intent materials and processes to verify that the final product will meet quality and performance expectations in mass production.
This article aims to provide an overview of some testing processes for photonic integrated circuits, covering device-level testing, functional testing, and reliability testing.
Device Level Testing
Device-level testing involves evaluating individual components within the PIC and assessing their characteristics, performance, and reliability to ensure proper functionality and integration. This testing is typically performed at the chip level or wafer level.
Ideally, testing should happen not only on the final, packaged device but in the earlier stages of PIC fabrication, such as measuring after the wafer fabrication process is completed or after cutting the wafer into smaller dies.
Greater integration of the PICs enables earlier optical testing on the semiconductor wafer and dies. By testing the dies and wafers directly before packaging, manufacturers need only discard the bad die rather than the whole package, which saves valuable energy and materials.
Functional Testing
After the individual device testing, the next step of the EVT testing phase is functional testing. These tests evaluate the key functionalities of the PIC to ensure they meet design specifications and goals. Different applications will have different functionalities to be evaluated, of course. For example, some key functions evaluated in a PIC for telecommunications can be:
- Signal Transmission: Evaluating signal quality, bit error rate, and signal-to-noise ratio to ensure reliable transmission.
- Modulation: Assessing the modulators’ accuracy, bandwidth, and linearity to ensure accurate signal encoding and decoding.
- Switching: Evaluating the switch response time, crosstalk, and extinction ratio to ensure proper signal routing and minimal signal degradation.
Reliability Testing of the Packaged PIC
After the EVT round of characterization and validation of the chip and its package, the packaged chip must be made ready for production, requiring a series of reliability tests under several environmental conditions. For example, different applications require certification for different temperature ranges in which the chip must operate.
For example, the packaged PICs made by EFFECT Photonics for telecommunications must comply with the Telcordia GR-468 qualification, which describes how to test optoelectronic devices for reliability under extreme conditions. Qualification depends upon maintaining optical integrity throughout an appropriate test regimen. Accelerated environmental tests are described in the diagram below.
Our testing facilities and partners include capabilities for the temperature cycling and reliability testing needed to match Telcordia standards, such as temperature cycling ovens and chambers with humidity control.
Takeaways
The testing process for photonic integrated circuits ensures their reliability and performance. Device-level testing focuses on individual components, allowing for precise characterization and identification of faulty elements. Functional testing evaluates the overall performance of the PIC, ensuring adherence to design specifications. Reliability testing assesses the robustness and lifespan of the PIC under various operating conditions.
Tags: 5G, Coherent Optical Systems, Data center architecture, Design Validation Test, DVT, Electrical Power, Electronic Equipment, Engineering Validation Test, EVT, Networks, Optical System-On-Chip (SoC), Photonic Integrated Circuit, Photonics, PIC, PICs, Pluggable Transceivers, Production Validation Test, PVT, Semiconductor Industry, testing, Wafer Scale Processes, Wireless Transmission
100G Access Networks for the Energy Transition
The environmental consequences of fossil fuels like coal, oil, and natural gas have triggered a…
The environmental consequences of fossil fuels like coal, oil, and natural gas have triggered a crucial reassessment worldwide. The energy transition is a strategic pivot towards cleaner and more sustainable energy sources to reduce carbon emissions, and it requires a major collective effort from all industries.
In the information and communication technology (ICT) sector, the exponential increase in data traffic makes it difficult to keep emissions down and contribute to the energy transition. A 2020 study by Huawei estimates that the power consumption of the data center sector will increase threefold in the next ten years. Meanwhile, wireless access networks are expected to increase their power consumption even faster, more than quadrupling between 2020 and 2030.
These issues affect both the environment and the bottom lines of communications companies, which must commit increasingly larger percentages of their operating expenditure to cooling solutions.
As we explained in our previous articles, photonics and transceiver integration will play a key role in addressing these issues and making the ICT sector greener. EFFECT Photonics also believes that the transition of optical access networks to coherent 100G technology can help reduce power consumption.
This insight might sound counterintuitive at first, since a coherent transceiver will normally consume more than twice the power of a direct detect one due to its digital signal processor (DSP). However, by replacing the aggregation of multiple direct detect links with a single coherent link, and by skipping the upgrade to 56 Gbps to go directly to 100 Gbps, optical networks can reduce energy consumption, materials, and operational expenditures such as truck rolls.
The Impact of Streamlining Link Aggregation
The advanced stages of 5G deployment will require operators to cost-effectively scale fiber capacity in their fronthaul networks using more 10G DWDM SFP+ solutions and 25G SFP28 transceivers. This upgrade will pressure the aggregation segments of mobile backhaul and midhaul, which typically rely on link aggregation of multiple 10G DWDM links into a higher bandwidth group (e.g., 4x10G).
On the side of cable optical networks, the long-awaited migration to 10G Passive Optical Networks (10G PON) is happening and will also require the aggregation of multiple 10G links in optical line terminals (OLTs) and Converged Cable Access Platforms (CCAPs).
This type of link aggregation involves splitting larger traffic streams and can be intricate to integrate within an access ring. Furthermore, it carries an environmental impact.
A single 100G coherent pluggable consumes a maximum of six watts of power, significantly more than the two watts of a 10G SFP+ pluggable. However, aggregating four 10G links requires a total of eight SFP+ pluggables (two on each end) for a total maximum power consumption of 16 watts. Replacing this link aggregation with a single 100G coherent link swaps the eight SFP+ transceivers for just two coherent transceivers with a total power consumption of 12 watts. On top of that reduced total power consumption, a single 100G coherent link more than doubles the capacity of the four aggregated 10G links.
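The arithmetic behind that comparison can be written out explicitly. The per-module power figures below are the ones quoted in this article (roughly 2 W per 10G SFP+ and a 6 W maximum per 100G coherent pluggable).

```python
def total_link_power_w(links: int, modules_per_link: int, watts_per_module: float) -> float:
    """Total transceiver power for a set of point-to-point optical links."""
    return links * modules_per_link * watts_per_module

# Aggregating four 10G links: two SFP+ modules per link at ~2 W each.
aggregated_10g_w = total_link_power_w(links=4, modules_per_link=2, watts_per_module=2.0)
# One coherent 100G link: two pluggables at a ~6 W maximum each.
coherent_100g_w = total_link_power_w(links=1, modules_per_link=2, watts_per_module=6.0)

print(aggregated_10g_w)  # 16.0 W for 40 Gbps of aggregated capacity
print(coherent_100g_w)   # 12.0 W for 100 Gbps on a single link
```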
Adopting a single 100G uplink also diminishes the need for such link aggregation, simplifying network configuration and operations. To gain further insight into the potential market and reach of this link aggregation upgrade, it is recommended to consult the recent Cignal AI report on 100ZR technologies.
The Environmental Advantage of Leaping to 100G
While conventional wisdom may suggest a step-by-step progression from 28G midhaul and backhaul network links to 56G and then to 100G, it’s important to remember that each round of network upgrade carries an environmental impact.
Let’s look at an example. As per the European 5G Observatory, a country like the Netherlands has deployed 12,858 5G base stations. Several thousand mid- and backhaul links connect groups of these base stations to the 5G core networks. Every time these networks require an upgrade to accommodate increasing capacity, tens of thousands of pluggable transceivers must be replaced nationwide. Each such upgrade entails a substantial capital investment as well as significant resources and materials.
A direct leap from 28G mid- and backhaul links to coherent 100G allows network operators to future-proof their networks for the next ten years. From an environmental perspective, it avoids the economic and environmental impact of buying, manufacturing, and installing tens of thousands of 56G plugs across mobile network deployments. It’s a strategic choice that avoids the redundancy and excess resource utilization associated with two consecutive upgrades, allowing for a more streamlined and sustainable deployment.
Streamlining Operations with 100G ZR
Beyond the environmental considerations and capital expenditure, the operational issues and expenses of new upgrades cannot be overlooked. Each successive generation of upgrades necessitates many truck rolls and other operational expenditures, which can be both costly and resource-intensive.
Each truck roll involves a number of costs:
- Staff time (labor cost)
- Staff safety (especially in poor weather conditions)
- Staff opportunity cost (what complicated work could have been done instead of driving?)
- Fuel consumption (gasoline/petrol)
- Truck wear and tear
By directly upgrading from 25G to 100G, telecom operators can bypass an entire cycle of logistical and operational complexities, resulting in substantial savings in both time and resources.
This streamlined approach not only accelerates the transition toward higher speeds but also frees up resources that can be redirected toward other critical aspects of network optimization and sustainability initiatives.
Conclusion
In the midst of the energy transition, the ICT sector must also contribute toward a more sustainable and environmentally responsible future. While it might initially seem counterintuitive, upgrading to 100G coherent pluggables can help streamline optical access network architectures, reducing the number of pluggables required and their associated power consumption. Furthermore, upgrading these access network mid- and backhaul links directly to 100G leads to future-proofed networks that will not require financially and environmentally costly upgrades for the next decade.
As the ecosystem for QSFP28 100ZR solutions expands, production will scale up, making these solutions more widely accessible and affordable. This, in turn, will unlock new use cases within access networks.
Tags: 4G vs. 5G, 5G Networks, Base Stations, Coherent Optical Systems, Data center architecture, data centers, Decentralization, Electrical Power, Electronic Equipment, Energy Transition, heat dissipation, miniaturization, Optical Fiber, Optical System-On-Chip (SoC), Photonics, Pluggable Transceivers, Power Usage Effectiveness (PUE), Semiconductor Industry, Wafer Scale Processes, Wireless Transmission
A Day Without Photonics
On October 21, 1983, the General Conference of Weights and Measures adopted the current value…
On October 21, 1983, the General Conference of Weights and Measures adopted the current value of the speed of light at 299,792.458 km/s. To commemorate this milestone, hundreds of optics and photonics companies, organizations, and institutes worldwide organize activities every year on this date to celebrate the Day of Photonics and how this technology is impacting our daily lives.
In this digital and technological age, photonics is a silent hero that often goes unnoticed by people outside of that industry. This field of science and engineering, which deals with generating, manipulating, and detecting light, has quietly revolutionized how we live, work, and communicate. From the laser pointers in our presentations to the fiber-optic cables that power our internet, photonics permeates every aspect of our modern lives. So what if, for a moment, we imagined what a day without photonics would look like?
Communications in Slow Motion
The dawn of a photonics-less day would abruptly end our ability to communicate worldwide at the speed and scale we currently enjoy. After all, the fiber-optic networks that form the backbone of our global communication system are powered by photonic devices such as transceivers. A return to an age of slower copper-based communication would have major consequences for everything from business and financial transactions to medical emergencies.
Blurrier Medical Imaging
Without photonics, several medical diagnostic tools and treatments we take for granted would be diminished. For example, laser surgery revolutionized the treatment of eye conditions, and optical coherence tomography has become vital in retinal imaging and diagnosis.
More Power Consumption and Emissions
Solar power, a cornerstone of our sustainable energy future, relies on aspects of semiconductor science and photonics to harness the sun’s energy. LEDs (which we’ll discuss in the next section) have also significantly reduced power consumption. Photonics will also be critical to reducing power consumption and emissions in the information and communication technology sector, as we explained in one of our recent articles. Without photonics, our dependence on fossil fuels would increase, exacerbating environmental challenges.
A Darker World without LEDs
One of photonics’ great success stories is the light-emitting diode (LED), manufactured at scale through semiconductor processes. LED lighting sales have experienced explosive growth in the past decade, quickly replacing traditional incandescent and fluorescent light bulbs that are less energy efficient. The International Energy Agency (IEA) estimates that residential LED sales have risen from around 5% of the market in 2013 to about 50% in 2022. The efficiency and versatility of these light sources have transformed industries and living spaces.
No Lasers, No Precision Manufacturing
Laser-based manufacturing processes are vital in modern industry. From precision cutting to printing, photonics has significantly impacted how we produce goods. Without it, manufacturing processes would revert to slower, less precise methods, impacting efficiency, cost-effectiveness, and product quality.
Less Accurate Sensors for Safety and Security
Photonic sensors also play a crucial role in food safety, providing rapid and accurate detection of contaminants, pathogens, and allergens, ensuring the quality and safety of food products. Additionally, in environmental monitoring, photonic sensors facilitate real-time tracking of air and water quality, as well as the presence of pollutants, enabling timely responses to mitigate ecological risks. These sensors also play a role in LIDAR and the automotive industry. The accuracy of all these sensors would drop significantly without photonic systems and devices.
So, on this Day of Photonics, let us pause to acknowledge the immense contribution of photonics to our daily lives. It’s a field that deserves our attention, admiration, and continued investment, for a world without photonics is a world where many conveniences and capabilities we take for granted would disappear or be significantly hindered.
Tags: Communication technology, Day of Photonics, Environmental monitoring, Fiber-optic networks, Food safety, Laser surgery, LEDs, LIDAR, Light-emitting diodes, Medical imaging, Optical Communication, Photonics impact, Photonics industry, Photonics Technology, Precision manufacturing, Semiconductor science, sensors, Solar power, Speed of light, Sustainable energy
Transceiver Integration for the Energy Transition
The world relies heavily on traditional fossil fuels like coal, oil, and natural gas, but…
The world relies heavily on traditional fossil fuels like coal, oil, and natural gas, but their environmental impact has prompted a critical reevaluation. The energy transition is a strategic pivot towards cleaner and more sustainable energy sources to reduce carbon emissions.
In the information and communication technology (ICT) sector, the exponential increase in data traffic makes it difficult to keep emissions down and contribute to the energy transition. A 2020 study by Huawei estimates that the power consumption of the data center sector will increase threefold in the next ten years. Meanwhile, wireless access networks are expected to increase their power consumption even faster, more than quadrupling between 2020 and 2030.
These issues affect both the environment and the bottom lines of communications companies, which must commit increasingly larger percentages of their operating expenditure to cooling solutions.
As we explained in our previous article, photonics will play a key role in addressing these issues and making the ICT sector greener. However, contributing to a successful energy transition requires more than replacing specific electronic components with photonic ones. Existing photonic components, such as optical transceivers, must also be upgraded to more highly integrated and power-efficient ones. As we will show in this article, even small improvements in existing optical transceivers can snowball into more significant power savings and carbon emissions reduction.
How One Watt of Savings Scales Up
Let’s discuss an example to show how a seemingly small improvement of one watt in pluggable transceiver power consumption can quickly scale up into major energy savings.
A 2020 paper from Microsoft Research estimates that for a metropolitan region of 10 data centers with 16 fiber pairs each and 100-GHz DWDM per fiber, the regional interconnect network needs to host 12,800 transceivers.
This number could increase by a third in the coming years, since the 400ZR transceiver ecosystem supports a denser 75 GHz DWDM grid, bringing the total to roughly 17,000 transceivers. Therefore, saving a watt of power in each transceiver would lead to a total of 17 kW in savings.
The power savings don’t end there, however. The transceiver is powered by the server, which is in turn powered by its power supply and, ultimately, the national electricity grid. On average, 2.5 watts must be supplied from the national grid for every watt of power the transceiver uses. When applying that 2.5 factor, the 17 kW in savings we discussed earlier are, in reality, 42.5 kW.
Over a year, this rate adds up to a total of 372 MWh in power savings. According to the US Environmental Protection Agency (EPA), the savings in this single metro data center network avoid the equivalent of 264 metric tons of carbon dioxide emissions, comparable to the emissions from consuming 610 barrels of oil, and amount to roughly the annual electricity use of 33 American homes.
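The chain of multiplications in this example can be reproduced in a few lines, using the transceiver count and grid factor quoted above (the CO2 and household equivalences come from the EPA figures cited in the text).

```python
transceivers = 17_000        # metro region estimate after the move to a 75 GHz grid
savings_per_module_w = 1.0   # one watt saved per pluggable transceiver
grid_factor = 2.5            # grid watts supplied per watt consumed at the transceiver

grid_savings_kw = transceivers * savings_per_module_w * grid_factor / 1_000
annual_savings_mwh = grid_savings_kw * 24 * 365 / 1_000

print(grid_savings_kw)            # 42.5 kW drawn from the national grid
print(round(annual_savings_mwh))  # ~372 MWh per year (~264 metric tons of CO2 per the EPA)
```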
Saving Power through Integration
Having explained the potential impact of transceiver power savings, let’s delve into how to save this power.
Before 2020, Apple made its computer processors with discrete components. In other words, electronic components were manufactured on separate chips, and then these chips were assembled into a single package. However, the interconnections between the chips produced losses and incompatibilities that made their devices less energy efficient. After 2020, starting with Apple’s M1 processor, they fully integrated all components on a single chip, avoiding losses and incompatibilities. As shown in the table below, this electronic system-on-chip (SoC) consumes a third of the power compared to the processors with discrete components used in their previous generations of computers.
| Mac Mini Model | Power Consumption, Idle (W) | Power Consumption, Max (W) |
|---|---|---|
| 2023, M2 | 7 | 50 |
| 2020, M1 | 7 | 39 |
| 2018, Core i7 | 20 | 122 |
| 2014, Core i5 | 6 | 85 |
| 2010, Core 2 Duo | 10 | 85 |
| 2006, Core Solo or Duo | 23 | 110 |
| 2005, PowerPC G4 | 32 | 85 |

Table 1: Comparing the power consumption of Mac Minis with M1 and M2 SoC chips to previous generations of Mac Minis. [Source: Apple’s website]
The photonics industry would benefit from a similar goal: implementing a photonic system-on-chip. Integrating all the required optical functions on a single chip can minimize the losses and make devices such as optical transceivers more efficient.
For example, the monolithic integration of all tunable laser functions allows EFFECT Photonics to develop a novel pico-ITLA (pITLA) module that will become the world’s smallest ITLA for coherent optical transceivers. The pITLA is the next step in tunable laser integration, including all laser functions in a package with just 20% of the volume of a nano-ITLA module. This increased integration aims to reduce the power and cost per bit transmitted further.
Early Testing Avoids Wastage
Testing is another aspect of the manufacturing process that impacts sustainability. The earlier faults are found in the testing process, the greater the impact on the materials and energy to process defective chips. Ideally, testing should happen not only on the final, packaged transceiver but also in the earlier stages of PIC fabrication, such as measuring after wafer processing or cutting the wafer into smaller dies.
When optical testing is done just on the finalized transceiver package, the whole package must often be discarded, even if just one component does not pass the testing process. This action can lead to a massive waste of materials that cannot be ”fixed” or reused at this stage of the manufacturing process.
Full integration of optical devices enables earlier optical testing on the semiconductor wafer and dies. By testing the dies and wafers directly before packaging, manufacturers need only discard the bad dies rather than the whole package, which saves valuable energy and materials.
Takeaways
For photonics to enable an energy transition in the ICT sector, it must be as accessible and easy to use as electronics. Like electronics, it must be built on a wafer-scale process that can produce millions of chips monthly.
Increased integration of photonic devices such as optical transceivers does not just reduce their energy consumption; it also makes them easier to produce at high volumes. Integrating all functions of an optical device into a single chip makes it much easier to scale up the manufacturing of that device. This scaling will drive down production costs, making integrated photonics more widely available and paving the way for its impactful integration into numerous technologies across the globe.
Tags: 4G vs. 5G, 5G Networks, Base Stations, Coherent Optical Systems, Data center architecture, data centers, Decentralization, Electrical Power, Electronic Equipment, Energy Transition, heat dissipation, miniaturization, Optical Fiber, Optical System-On-Chip (SoC), Photonics, Pluggable Transceivers, Power Usage Effectiveness (PUE), Semiconductor Industry, Wafer Scale Processes, Wireless Transmission
Photonics For the Energy Transition
The world relies heavily on traditional fossil fuels like coal, oil, and natural gas, but…
The world relies heavily on traditional fossil fuels like coal, oil, and natural gas, but their environmental impact has prompted a critical reevaluation. The energy transition is a strategic pivot towards cleaner and more sustainable energy sources to reduce carbon emissions.
The energy transition has gained momentum over the last decade, with many countries setting ambitious targets for carbon neutrality and renewable energy adoption. Governments, industries, and communities worldwide are investing heavily in renewable infrastructure and implementing policies to reduce emissions.
In the information and communication technology (ICT) sector, the exponential increase in data traffic makes it difficult to keep emissions down and contribute to the energy transition. Data centers and 5G networks might be hot commodities, but the infrastructure that enables them runs even hotter. Electronic equipment generates plenty of heat; the more heat energy an electronic device dissipates, the more money and energy must be spent to cool it down.
A 2020 study by Huawei estimates that the power consumption of the data center sector will increase threefold in the next ten years. Meanwhile, wireless access networks are expected to increase their power consumption even faster, more than quadrupling between 2020 and 2030.
These issues affect the environment as well as the bottom lines of communications companies, which must commit increasingly larger percentages of their operating expenditure to cooling solutions.
Decreasing energy consumption and costs in the ICT sector requires more efficient equipment, and photonics technology will be vital in enabling such a goal. Photonics can transmit information more efficiently than electronics and ensure that the exponential increase in data traffic does not become an exponential increase in power consumption.
Photonics’ Power Advantages
Photonics and light have a few properties that improve the energy efficiency of data transmission compared to electronics and electric signals.
When electricity moves through a wire or coaxial cable, it encounters resistance, which leads to energy loss in the form of heat. Conversely, light experiences much less resistance when traveling through optical fiber, resulting in significantly lower energy loss during data transmission. As shown in the figure below, this energy loss gets exponentially worse with faster (i.e., higher-frequency) signals that can carry more data. Photonics scales much better with increasing frequency and data.
These losses and heat generation in electronic data transmission lead to higher power consumption, more cooling systems use, and reduced transmission distances compared to photonic transmission.
The low-loss properties of optical fibers enable light to be transmitted over vastly longer distances than electrical signals. Due to their longer reach, optical signals also save more power than electrical signals by reducing the number of times the signal needs regeneration.
With all these advantages, photonics entails a lower power per bit transmitted compared to electronic transmission, which often translates to a lower cost per bit.
Photonics’ Capacity Advantages
Aside from being more power efficient than electronics, another factor that decreases the power and cost per bit of photonic transmission is its data capacity and bandwidth.
Light waves have much higher frequencies than electrical signals carried over wires. Because a carrier can only be modulated at rates far below its frequency, these higher frequencies translate directly into a greater information-carrying capacity. In other words, light waves can encode more information than electrical signals.
Optical fibers have a much wider bandwidth than electrical wires or coaxial cables. This means they can carry a broader range of signals, allowing for higher data rates and more transmission of parallel data streams. Thanks to technologies such as dense wavelength division multiplexing (DWDM), multiple data channels can be sent and received simultaneously, significantly increasing the transmission capacity of an optical fiber.
Overall, the properties of light make it a superior medium for transmitting large volumes of data over long distances compared to electricity.
Transfer Data, Not Power
Photonics can also play a key role in rethinking the architecture of data centers. Photonics enables a more decentralized system of data centers with branches in different geographical areas connected through high-speed optical fiber links to cope with the strain of data center clusters on power grids.
For example, data centers can relocate to areas with available spare power, preferably from nearby renewable energy sources. Efficiency can increase further by sending data to branches with spare capacity. The Dutch government has already proposed this kind of decentralization as part of its spatial strategy for data centers.
Takeaways
Despite all these advantages of photonics, electronics retains one significant advantage: accessibility.
Electronic components can be easily manufactured at scale, ordered online from a catalog, soldered into a board, and integrated into a product. For photonics to enable an energy transition in the ICT sector, it must be as accessible and easy to use as electronics.
However, for photonics to truly scale and become as accessible as electronics, more investment is necessary to scale production and adapt existing electronics processes to photonics. This scaling will drive down production costs, making integrated photonics more widely available and paving the way for its impactful integration into numerous technologies across the globe.
Tags: 4G vs. 5G, 5G Networks, Base Stations, Coherent Optical Systems, Data center architecture, data centers, Decentralization, Electrical Power, Electronic Equipment, Energy Transition, heat dissipation, miniaturization, Optical Fiber, Optical System-On-Chip (SoC), Photonics, Pluggable Transceivers, Power Usage Effectiveness (PUE), Semiconductor Industry, Wafer Scale Processes, Wireless Transmission
The Power of Monolithic Lasers
Over the last decade, technological progress in tunable laser integration has matched the need for…
Over the last decade, technological progress in tunable laser integration has matched the need for smaller footprints. In 2011, tunable lasers followed the multi-source agreement (MSA) for integrable tunable laser assemblies (ITLAs). By 2015, tunable lasers were sold in the more compact micro-ITLA form factor, constituting a mere 22% of the original ITLA package volume. In 2019, the nano-ITLA form factor reduced ITLA volumes further, as the module was just 39% of the micro-ITLA volume.
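Chaining those ratios shows how quickly the package volume has shrunk; the sketch below also folds in the pico-ITLA figure of 20% of a nano-ITLA mentioned earlier in this collection. The normalized starting volume is arbitrary.

```python
itla = 1.0                      # original ITLA package volume, normalized to 1
micro_itla = 0.22 * itla        # 2015: micro-ITLA at 22% of the ITLA volume
nano_itla = 0.39 * micro_itla   # 2019: nano-ITLA at 39% of the micro-ITLA volume
pico_itla = 0.20 * nano_itla    # pITLA at 20% of the nano-ITLA volume

print(round(micro_itla, 3))  # 0.22
print(round(nano_itla, 3))   # ~0.086 -> under 9% of the original ITLA volume
print(round(pico_itla, 3))   # ~0.017 -> under 2% of the original ITLA volume
```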
Despite this progress, the industry will need further laser integration for the QSFP28 pluggables used in 100G ZR coherent access. Since QSFP28 pluggables have a lower power consumption and slightly smaller footprint than QSFP-DD modules, they should not use the