Why We Do What We Do

Data Clouds and the Illusion of Safety

In the last decade we have slowly handed over our memories, photos, work files and whole businesses to vast, unseen data centers run by a handful of faceless companies. The promise was simple: the cloud would take away the headaches of storage and backup. It would do for us what we could not do ourselves. Secure, redundant, worry‑free. We clicked Accept and moved on with our lives.

But the cloud is not a place; it is just someone else’s computers. And sometimes those people make mistakes, or hide them, or simply shrug when something goes wrong. In 2023–2025 a string of failures and breaches stripped away the illusion that storing everything in one big, outsourced platform is always safer. This is why our applications are local‑first, do not rely on large data centers, and allow you to control your own data.


The Gmail Data Scare of 2025

In the summer of 2025 a report surfaced that a Salesforce database linked to Google had been breached. The leaked data exposed information on more than 2.5 billion Gmail users. According to the analysis by Valley Techlogic, no passwords were confirmed to be stolen, but the account details were leveraged for targeted phishing and social‑engineering attacks. The breach of a third‑party integration meant that attackers could send convincing emails or phone calls purporting to be from Google, coaxing victims into account resets or revealing security codes.

The scale of the breach made headlines: two and a half billion people being told their account data might be floating around. For individuals, the news translated into calls to enable multi‑factor authentication, change passwords and watch for suspicious links. For businesses it was a stark reminder that a vendor’s data hygiene becomes your problem the minute you plug into their ecosystem. I personally know a business that had data on employees and business partners leaked, and it was very nearly scammed out of a not insignificant amount of money.

The most unsettling part was how quickly the narrative shifted. Early reports suggested the leak primarily affected business accounts, because the exposed data sat in Salesforce. That notion was quickly dispelled as regular Gmail users reported increased phishing attempts and even phone calls from scammers claiming to represent Google. Google assured the public that no passwords were compromised, yet the fact that attackers could verify which accounts existed and belonged to which people made their scams far more convincing.

For the average person the incident became one more piece of static in a year of breach fatigue. We have come to accept that our data will periodically leak and that we must change a password or two. But there is a deeper issue. When billions of people depend on a single provider to manage their email, a breach at that provider reverberates globally. The centralization of communications under one roof means a single leak becomes a planetary storm.


When AWS Turned Off the Lights for One Man

Not long after the Gmail scare, Amazon Web Services (AWS) deleted a decade’s worth of data belonging to a single software engineer named Abdelkader Boudih. Boudih described the event as a “complete digital annihilation”. For ten years he had paid AWS to host his code, tutorials and a book he was writing. One July afternoon in 2025 an automated verification email landed in his inbox. A few weeks later AWS told him that his account had been terminated for failing to submit proper identification. Every file was gone.

The developer sought explanations, and customer support agents sent canned responses. In the meantime he learned that the platform had not only closed his account but wiped all the data, despite AWS’s publicly stated policy of retaining closed accounts for 90 days. Only after he published a detailed blog post did an AWS insider contact him and suggest that his data had been wiped accidentally because of a scripting error during a routine audit. The insider alleged that misread command parameters caused a script meant to flag dormant accounts to delete live accounts instead.

What makes Boudih’s story surprising is not just the data loss but the power dynamic. Even though AWS is a paid service, he had no direct recourse. The only information he received came from an anonymous whistle‑blower. In his blog he wrote about pleading with support bots and receiving form letters while watching years of work disappear. Eventually he decided to build a tool to help others migrate off AWS entirely, and he claimed to be advising clients to move their workloads to competing providers like Oracle and Google Cloud.

AWS issued a statement saying the account was suspended under normal verification protocols and insisted there was no system error. That may be true. But to Boudih it felt like more: an unseen process flagged his account, terminated it, and erased everything without warning. His story reads like a parable for anyone who thinks renting space in someone else’s warehouse is the same as owning a home.

AWS US‑East‑1 Outage

On the evening of 20 October 2025 Amazon Web Services hiccuped, and websites and applications alike failed with it. Workers across the world found their apps unresponsive: financial platforms, chat apps and ride‑sharing services all froze. It was the biggest internet disruption since the CrowdStrike failure in 2024, showing in real time what happens when too much of the global digital economy leans on one set of servers.

AWS’s own post‑incident report later explained that the trouble started deep inside DynamoDB, the company’s cloud database. Automation scripts manage hundreds of thousands of DNS records for DynamoDB and its load balancers. On this day a race condition in that automation created an empty DNS record for the regional endpoint. Applications around the world tried to look up the address for dynamodb.us-east-1.amazonaws.com and got nothing. AWS said the outage was caused by “a latent defect within the service’s automated DNS management system” and that the root cause was “an empty DNS record for the Virginia‑based US‑East‑1 data center region”. Engineers had to intervene manually because the automation couldn’t fix itself.
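
To make the failure mode concrete, here is a minimal sketch of what an application effectively experienced: the endpoint name simply stopped resolving, so every call layered on top of it failed before a single packet reached a server. (This is an illustration in Python using only the standard library; the health‑check function is ours for the sake of the example, not AWS’s actual tooling.)

    import socket

    # The regional endpoint named in AWS's post-incident report.
    ENDPOINT = "dynamodb.us-east-1.amazonaws.com"

    def endpoint_resolves(hostname: str) -> bool:
        """Return True if DNS hands back at least one address for the hostname."""
        try:
            return len(socket.getaddrinfo(hostname, 443)) > 0
        except socket.gaierror:
            # An empty or missing DNS record surfaces here: the lookup fails,
            # so requests to the service die before a connection is even opened.
            return False

    if not endpoint_resolves(ENDPOINT):
        print(f"{ENDPOINT} did not resolve; nothing built on it can reach the service")

Everything above that lookup, the SDKs, the retries, the apps built on other apps, inherits the failure, which is why so many unrelated products went dark at the same moment.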

The consequences were immediate and bizarre. Reuters reported that the disruption knocked workers from London to Tokyo offline and prevented everyday tasks like paying a hairdresser or changing an airline ticket. Downdetector showed more than 4 million people reporting problems, and at least a thousand companies, from Snapchat and Coinbase to Signal, X and Lyft, suffered outages. Even Amazon’s own services, including its shopping site, Prime Video and Alexa, stuttered. In one anecdote relayed by The Guardian, customers of a smart bed company were unable to adjust the temperature or incline of their mattresses because the bed’s app could not reach AWS; the manufacturer rushed to add a Bluetooth mode so users could regain control.

Experts seized the moment to point out the fragility of our digital scaffolding. Jake Moore of cybersecurity firm ESET told Reuters, “This outage once again highlights the dependency we have on relatively fragile infrastructures”. Dr Suelette Dreyfus of the University of Melbourne noted that the problem isn’t just AWS; the cloud, as a market, is concentrated in three companies, and we’ve “lost some of [the internet’s] resilience” by outsourcing so much to them. It wasn’t the first time US‑East‑1 had contributed to a major outage; Reuters noted it was at least the third in five years. The pattern makes it clear that when one of the big providers hiccups, the rest of us choke.

Convenience is not the same as control. When even your smart bed can’t warm your toes because of a DNS bug thousands of miles away, it’s a sign that we’ve traded autonomy for ease. The remedy is not to forsake the cloud entirely, but to diversify, to build fail‑safes, and to remember that the internet was designed to route around failures. If we don’t build healthier habits, the next failure may be worse.
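
What a fail‑safe looks like will differ for every system, but the core idea is just a fallback chain: prefer the primary provider, and degrade gracefully to something you control when it disappears. The sketch below is illustrative only; the provider functions are placeholders, and a real version would call whatever SDKs or local stores you actually use.

    import os

    def fetch_from_primary_cloud(doc_id: str) -> bytes:
        # Placeholder: in real use this would call your main provider's SDK.
        raise IOError("primary provider unreachable")

    def fetch_from_secondary_cloud(doc_id: str) -> bytes:
        # Placeholder: a second provider, or a machine you administer yourself.
        raise IOError("secondary provider unreachable")

    def read_local_copy(doc_id: str) -> bytes:
        # The copy you actually own: a plain file on local disk.
        with open(os.path.join("local_archive", doc_id), "rb") as f:
            return f.read()

    def load_document(doc_id: str) -> bytes:
        """Try each source in order of preference; only give up when all of them fail."""
        for source in (fetch_from_primary_cloud, fetch_from_secondary_cloud, read_local_copy):
            try:
                return source(doc_id)
            except OSError:
                continue  # this source is down or missing the file; try the next one
        raise RuntimeError(f"no copy of {doc_id} is reachable")

None of this is sophisticated. The point is that the decision about what happens when a provider vanishes is made by you, in your own code, rather than by a status page.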

Google Drive’s Vanishing Act (2023)

The year 2023 brought a quieter but no less disturbing incident. Google Drive, the company’s cloud storage service, began losing months’ worth of files for some users. As reported by Tom’s Hardware and The Register, multiple customers posted on Google’s support forums that entire folders had disappeared from their Drive accounts. One user named Yeonjoong said six months of data were simply gone, while others reported losing years of documents. Google acknowledged that it was investigating but could not promise that the missing data could be recovered. The issue didn’t stop at Drive on the web. People who used Google’s desktop backup tool, which synchronizes local folders with the cloud, found that the application had also purged their local copies. Trusting a single provider to store both the master and the backup meant losing both at once. For individuals this meant losing photos, resumes and personal projects. For businesses it meant weeks of work suddenly evaporating. The problem underscores a harsh truth: “sync” is not the same as “backup,” and cloud providers sometimes treat your files as transient data rather than archives.
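
If “sync” is not a backup, what is? A backup is an independent copy that an upstream deletion cannot silently propagate into. As a rough illustration (the paths here are made up, and any real backup tool will do this better), the simplest form of the idea is a timestamped local snapshot in a folder no sync client manages:

    import shutil
    from datetime import datetime
    from pathlib import Path

    SOURCE = Path.home() / "Documents"   # the folder you care about (example path)
    ARCHIVE = Path.home() / "backups"    # a location no sync client touches (example path)

    def snapshot(source: Path, archive: Path) -> Path:
        """Copy the folder into a fresh timestamped directory; older snapshots are never modified."""
        stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
        destination = archive / f"{source.name}-{stamp}"
        shutil.copytree(source, destination)
        return destination

    if __name__ == "__main__":
        print(f"snapshot written to {snapshot(SOURCE, ARCHIVE)}")

The particular script doesn’t matter; the property does. A file deleted or corrupted upstream cannot reach into yesterday’s snapshot, which is exactly the guarantee the Drive users discovered their sync tool never made.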


The Human Cost of Outages and Breaches

It is tempting to treat these stories as isolated misfortunes, but they follow a pattern. The centralized cloud has become a single point of failure for billions. A misconfiguration in a data center in northern Virginia can take down a quarter of the internet; a coding error in an audit script can wipe out ten years of a person’s life. Even seemingly small issues, like a few lost months of Google Drive data, reveal how quickly trust can evaporate when we have no visibility into the systems we rely on. Consider the cascading effect of major outages. When AWS experienced a widespread outage in October 2025, thousands of companies found their services offline. Zoom calls dropped, smart devices went dumb, websites loaded blank pages. Each of those companies had to answer to its own customers.

When Microsoft’s Azure platform suffered a disruption around the same time, similar dominoes fell. The outages were resolved within hours or days, but they raised questions. If the infrastructure that powers so much of the internet can blink out, how resilient are the businesses built on top of it?

The human cost is not just about downtime. When someone’s wedding photos or a manuscript vanish from a cloud folder, they experience loss. When a small business loses invoices stored in a SaaS platform, the owner spends days reconstructing their books. Meanwhile, statements from the providers speak in abstractions like service degradation and incident response. There is a disconnect between how the providers talk and how the affected users feel. Hunter S. Thompson once said journalism is writing “what it is like to live in our time.” To live in our time is to hand your data and your work over to faceless corporations, watch it fall through other people’s fingers, and be told it was an edge case.

Why do these incidents keep happening? At a technical level the answer is complex, but at a conceptual level it boils down to concentration and opacity. Many of us rely on the same handful of providers for email, cloud storage, computing and authentication. This concentration means a single bug or breach can have global repercussions. Cloud services are multi‑tenant by design: your data sits on the same hardware as thousands of other customers’ data. This increases efficiency but also means that a misfire in one part of the system can ripple across others. When an automated system flags an account as risky, or when an engineer runs a script on a cluster, customers rarely know. Support channels are optimized for throughput, not for transparency. As a result, people often learn what happened to their data only when something goes wrong. Providers are rewarded for growth and uptime, not for helping customers maintain sovereignty over their data. Investing in tools that let you exit easily or maintain local control does not drive revenue; locking you in does. These structural issues turn occasional accidents into systemic risk. Because we treat digital services as utilities, we seldom question the arrangement until there is a crash.