Power Outage in SF Tuesday brought down major Web 2.0 sites.
Six back-to-back power outages hit San Francisco's SOMA neighborhood Tuesday afternoon, causing major havoc with popular web services. 365 Main is down, along with craigslist, Technorati, Yelp, AdBrite and Six Apart (including TypePad, LiveJournal and Vox). Many other popular sites, such as CNET, were unavailable too.
Interestingly enough, a “source close to the company” (365 Main) had this to say:
“Someone came in sh*tfaced drunk, got angry, went berserk, and f**ked up a lot of stuff. There’s an outage on 40 or so racks at minimum.”

ValleyWag had a good article on this with lots of interesting links.
This, however, was unlikely to be the cause, since the area had been having power outages and 365 Main's UPS system clearly did not function properly.
The San Francisco website Laughing Squid has a write-up of the power outage.
Here is another informative post from Radar.OReilly.com.
Six Apart, in a very 2.0 move, kept everyone updated via its Twitter stream.
But the real question is: what happened to their power backups? They should have been able to keep running regardless of any loss of utility power. This is a good post about what 365 Main had to say regarding its “Credibility Outage” (basically, they made a bunch of excuses).
So again, do you trust your Web 2.0 online providers? Clearly there is a gap between what “should” have happened and what actually did happen. The 365 Main datacenter released a self-congratulatory announcement celebrating two years of continuous uptime for client RedEnvelope mere hours before today's drunken blackout. [PR Newswire]
And without extensive testing and backout plans, it is hard to know exactly what would happen in the event of, say, a server being hacked or a major power outage. I would be more interested in the disaster recovery plans and testing they (or any major player) have done than in what they theoretically think might happen, based on the safeguards they believe they have in place.
I come from a big-business background where the only real commodity is data (in my case, insurance), and I have seen and been involved in a huge amount of disaster recovery testing and planning. I remember what that industry, and other businesses, went through in testing for the year-2000 rollover and for any number of other potential disasters. September 11th tested their disaster recovery plans, and many others'. The stock market, major banks, and other financial firms simply cannot go down or lose data, for any reason.
But as we move to a Web 2.0 world, companies like 365 Main are now also repositories of major amounts of data, and for many Web 2.0 companies their business is data, just like a financial institution's. It's not small potatoes anymore. Face it: they are big business now and need to act like big business. I'm sure they are bringing in big-business income. So who holds them accountable? I wonder whether many of these Web 2.0 companies grew from such small beginnings that they may not even be aware of what they need to ask of, and know about, their provider.
And unfortunately, I hear people with a business background dismissed as “Luddites” or “1.0” or “dinosaurs,” or just not with it, supposedly unable to comprehend the new 2.0 world. It reminds me of when PCs first came out and I started programming them after several years of mainframe work.
PC programming was wild and woolly. There were no standards, no one documented their code (so maintaining it was a nightmare), and people would see how many functions they could cram onto one line of code, more being better in their minds. An “elegant” piece of code would be completely undecipherable by anyone (which seemed almost to be the point) and would have no documentation. Which meant, of course, that the code for most programs was a mess, because no one could figure out what the last person did, so they hacked around it. But if you were from a mainframe background, you supposedly could not “understand” PCs and were basically a dinosaur. Well, I know that is nonsense, because I had no problem understanding PCs and PC coding. What I didn't understand was why projects and programs were allowed to be so sloppy and poorly run and written.
It was a real case of 1.0 technology meeting 2.0 technology; in that case, mainframes vs. PCs. Now it is happening again with Web 2.0. And regardless of what the current “New Thing”™ is, one thing they all have in common is the belief that they know more than the people who used the “old” technology. But what they don't realize is that they really haven't learned anything yet. They have a great direction, new ideas and concepts, and great plans, but if for no other reason than that the technology has not been around that long, they lack the practical experience and background to build on. I'm sorry, but while college gives you an in and a piece of paper to say you are somebody, the real learning starts when you apply that knowledge in real-world situations.
I remember taking a LOMA (life office management) test on data processing and thinking it would be a piece of cake. It turned out to be one of the hardest of the set, because I had to learn what they thought the right answers were, not what was actually correct. I had the same experience in higher education, where what was being taught was so outdated that it was really completely wrong and, in my opinion, harmful in many ways to learn, especially if you thought you knew something afterwards.
And this is where I think the 2.0 arrogance is showing. It is a wonderful new way of doing things, but there are many foundations, already figured out, that they could and should build on. They can take what has been done to new and exciting levels, but reinventing the wheel for every single thing is pointless and leaves the new technology without wheels for a while.