FS launches PicOS AI Switch System to power large-scale AI and HPC workloads

FS has released its PicOS AI Switch System, a comprehensive networking solution engineered to support large-scale AI training, inference, and HPC workloads. By integrating advanced Broadcom Tomahawk series chips, the PicOS network operating system, and the AmpCon-DC management platform, the portfolio delivers lossless RoCEv2 networking, ultra-low latency, and intelligent traffic optimization, ensuring high GPU efficiency and reliable cluster performance.

As artificial intelligence and high-performance computing (HPC) deployments continue to expand, enterprises and cloud providers require infrastructure capable of supporting massive, distributed workloads with consistent performance, scalability, and operational simplicity. FS PicOS AI Switch System addresses these needs by providing advanced congestion control and high-density connectivity, offering a future-proof foundation for next-generation data centers.

To address diverse deployment needs, the PicOS AI Switch System spans a complete range of 400G and 800G models. Powered by Broadcom Tomahawk 3, 4, and 5 series chips, these switches deliver high bandwidth, deterministic performance, and scalable connectivity across AI training, inference, and HPC environments. The following table outlines the FS AI data center switch portfolio:

Built with redundant architecture, deep buffers, and advanced congestion management, the PicOS AI Switch System ensures lossless performance and operational resilience. With the AmpCon-DC management platform, organizations can simplify deployment, configuration, and lifecycle management of large GPU-based networks—enabling faster scaling and lower operational overhead.

“At FS, we are committed to delivering end-to-end, high-performance networking solutions tailored to the AI era,” said Jaylnn He, Senior Manager of Product R&D at FS. “With PicOS AI switches, we equip our customers to build scalable, efficient, and future-ready GPU clusters that accelerate innovation across industries. By uniting hardware, software, and automation, we help data centers confidently scale AI training, inference, and interconnect workloads while capturing the full potential of accelerated computing.”
