A DNS Crash disrupted networks around the world on January 8, 2026, after a flaw in the DNS client service caused multiple Cisco Small Business Switches to reboot repeatedly and, in some cases, generate full core dumps. The outage affected organizations of all sizes, from small IT teams managing a handful of switches to administrators responsible for dozens of devices spread across multiple sites.
The problem began surfacing around 2:00 AM and quickly appeared to be global in scope. Network administrators reported that switches suddenly entered reboot loops every 10 to 30 minutes, rendering networks unstable or unusable until emergency changes were made. The most frequently cited affected models included the CBS250, C1200, CBS350, SG350, and SG550X series. In several cases, switches had been running reliably for more than a year before failing simultaneously.
DNS Crash Causes Reboot Loops Across Models
Logs collected from impacted devices consistently pointed to fatal errors in the DNS client process, identified as the DNSC task. One of the most common log entries was:
“%DNS_CLIENT-F-SRCADDRFAIL: Result is 2. Failed to identify address for specified name ‘www.cisco.com.’”
Other failures involved time synchronization domains, including NIST-hosted servers such as “time-c.timefreq.bldrdoc.gov.” These DNS resolution failures triggered fatal errors that forced the switches to generate core dumps and automatically reset. Stack traces showed the crashes occurring inside the DNS client code path, rather than in SNTP or other services directly.
Administrators observed the issue across multiple firmware versions, including 4.1.7.17 (dated May 26, 2025), 4.1.3.36 (dated May 19, 2024), and 4.1.7.24 (dated August 27, 2025). The breadth of versions affected suggested a long-standing defect that was only exposed when a specific external condition occurred.

Administrators Trace Impact to DNS Lookups and SNTP Defaults
On Cisco’s community forums, one administrator described the scope of the outage in stark terms. Posting under the title “Cisco CBS250 and C1200 DNS crash,” the user wrote on January 8, 2026:
“Today was a bad day for the Cisco CBS250 and C1200’s. I’ve been running these for 1 to 2 years now and haven’t had an issue until today. I think every single one crashed today and kept crashing until I removed the DNS configuration. I have about 50 of these.”
The same administrator shared detailed crash logs showing fatal DNSC errors when the switches attempted to resolve both “www.cisco.com” and “time-c.timefreq.bldrdoc.gov.” Similar reports appeared on Reddit, where SG550X owners confirmed that devices at different sites began failing at the same time, reinforcing the conclusion that the trigger was external rather than a localized configuration error.
A pattern emerged linking the crashes to DNS lookups for default services embedded in the firmware. Even switches without explicit NTP configurations attempted to resolve domains such as time-pnp.cisco.com or www.cisco.com. When those lookups failed or returned unexpected responses, the DNS client treated the condition as fatal rather than recoverable, leading directly to a reboot.
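Administrators wanting to confirm whether a switch still carried these factory defaults could inspect its configuration from the console. A minimal sketch, assuming a CBS/SG-style command line, is shown below; the second command in particular varies by model and firmware and should be treated as illustrative rather than definitive.

    ! Review the configuration for ip name-server and sntp server entries
    show running-config
    ! On many CBS/SG builds, the configured SNTP unicast servers,
    ! including the default time-pnp.cisco.com entry, can be listed directly
    show sntp configuration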
Workarounds Stabilize Networks as Root Cause Remains Unpatched
Several forum participants speculated that a resolver-side change played a role. Attention focused on Cloudflare’s 1.1.1.1 DNS service, which many affected switches were using either as a primary or secondary resolver. One administrator summarized the concern bluntly: “How terrible that Cisco’s DNS implementation can’t handle a bad query response without resetting the whole switch.”
While not definitively confirmed, multiple reports suggested that a degradation or behavioral change on 1.1.1.1 coincided with the synchronized onset of the DNS Crash. Administrators noted that switches using alternative resolvers, or those with DNS disabled entirely, were often unaffected. However, others reported crashes even when 1.1.1.1 was configured only as a backup, indicating that the DNS client could still be triggered by problematic responses.
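One interim measure some administrators reported was simply pointing the switches at a different resolver while leaving DNS enabled. A minimal sketch, assuming the CBS/SG-style “ip name-server” syntax and using the documentation address 192.0.2.53 as a placeholder for an internal resolver:

    configure
    ! Remove the Cloudflare resolver present in many affected configurations
    no ip name-server 1.1.1.1
    ! Point name resolution at an internal or alternative resolver instead (placeholder address)
    ip name-server 192.0.2.53
    exit

Because some crashes were reported even with 1.1.1.1 configured only as a backup, many administrators treated this as a stopgap rather than a fix.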
By midday on January 8, effective workarounds were circulating widely. The most reliable mitigation involved disabling DNS entirely using commands such as “no ip name-server” and “no ip domain-lookup.” Others removed the default SNTP servers with “no sntp server time-pnp.cisco.com” or blocked outbound internet access from the switches. In nearly all cases, once DNS queries stopped, the switches stabilized.
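Taken together, the reported commands amount to a short console session along the lines of the sketch below. This is an illustration rather than an official procedure: exact command forms (for example, “ip domain-lookup” versus “ip domain lookup”) differ between the CBS, SG, and Catalyst 1200/1300 command lines, and no Cisco-published remediation existed at the time of writing.

    configure
    ! Stop the switch from issuing DNS queries
    no ip domain-lookup
    no ip name-server
    ! Remove the default SNTP server so time sync no longer requires name resolution
    no sntp server time-pnp.cisco.com
    exit
    ! Save the change so it survives the next reboot
    copy running-config startup-config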
Cisco support acknowledged the issue privately to customers and confirmed that it affected CBS, SG, and Catalyst 1200 and 1300 lines, including the CBS250 and C1200 families. As of January 9, 2026, no public advisory, patch, or field notice had been released.
