
What stands out is how often old, well-understood issues still cause harm. A misconfigured web application firewall opened the door for Capital One's 2019 breach: more than 100 million customers were affected, followed by an $80 million regulatory penalty and a further $190 million settlement. Football Australia left live API keys exposed in its site's client-side code for close to two years, with no protection at all; as a result, 127 data stores became reachable. Toyota kept customer files in a publicly accessible cloud environment for roughly a decade, and around 260,000 accounts were exposed during that time.
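Exposed keys of the kind Football Australia shipped can often be caught before deployment with a simple pattern scan. A minimal sketch of the idea, where the patterns and the sample snippet are illustrative assumptions rather than a complete secret-scanning tool (real scanners such as gitleaks ship hundreds of rules):

```python
import re

# Illustrative patterns for common credential formats (an assumption,
# not an exhaustive rule set).
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_api_key": re.compile(
        r"(?i)api[_-]?key['\"]?\s*[:=]\s*['\"][A-Za-z0-9_\-]{20,}['\"]"
    ),
}

def scan_text(text: str) -> list[tuple[str, str]]:
    """Return (pattern_name, matched_string) pairs found in text."""
    findings = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append((name, match.group(0)))
    return findings

# Hypothetical client-side JS snippet with a hardcoded key
# (AWS's documented example key, not a real credential).
snippet = 'const cfg = { apiKey: "AKIAIOSFODNN7EXAMPLE" };'
print(scan_text(snippet))
```

Run in CI on every commit, a check like this turns a two-year exposure window into a failed build.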
A closer look at the numbers paints the broader picture:
- Roughly 80% of cloud misconfigurations stem from human error, not from software defects.
- Around one in three cloud storage buckets receives no monitoring or oversight at all.
- Nearly one in every two hundred Amazon S3 buckets is publicly open, according to a 2024 report from monitoring firm Datadog, a finding that highlights how common permissive settings remain in cloud storage.
- The median time to remediate an exposure is about 94 days, meaning nearly three months typically pass between discovery and fix.
That gap matters. Stolen credentials can cause harm within hours, yet here attackers effectively had a head start of over three months. The Snowflake incidents relied on credentials stolen years earlier, some dating back to 2020, that had never been invalidated: no password rotation, no multi-factor authentication, and no monitoring for anomalous activity. The same pattern of neglected basic hygiene keeps repeating.
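The Snowflake failure mode, credentials that remain valid for years, is detectable with a trivial age check. A sketch of that check, assuming a hypothetical inventory of credential records with last-rotation timestamps (in practice these would come from an IAM or secrets-manager API, not a hardcoded list):

```python
from datetime import datetime, timezone

# Hypothetical credential inventory (names and dates are invented
# for illustration).
credentials = [
    {"user": "svc-reporting",
     "last_rotated": datetime(2020, 6, 1, tzinfo=timezone.utc)},
    {"user": "svc-ingest",
     "last_rotated": datetime(2024, 11, 15, tzinfo=timezone.utc)},
]

def stale_credentials(records, max_age_days=90, now=None):
    """Return users whose credentials are older than max_age_days."""
    now = now or datetime.now(timezone.utc)
    return [
        r["user"]
        for r in records
        if (now - r["last_rotated"]).days > max_age_days
    ]

# With a fixed reference date, the credential untouched since 2020
# is flagged while the recently rotated one passes.
check_time = datetime(2025, 1, 1, tzinfo=timezone.utc)
print(stale_credentials(credentials, now=check_time))  # ['svc-reporting']
```

A scheduled job running this kind of audit, paired with enforced MFA, closes off exactly the window the Snowflake attackers exploited.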
