Netuality

Taming the big bad websites

Archive for April, 2010

Linkdump: Coop, HBase performance and a bit of Warcraft


Riptano is to Cassandra what Cloudera is to Hadoop or Percona to MySQL. Mmmkey?

A great, insightful post from Pingdom (as usual) lets us peek behind the doors of the largest web sites in the world, just by reading selected posts from their respective developer blogs.

Yahoo cut data-center cooling costs from 50 cents for every dollar spent on power down to just one cent per dollar. This was achieved at their newest Yahoo Computing Coop data center, built in Lockport, New York.

The data center operates with no chillers, and will require water for only a handful of days each year. Yahoo projects that the new facility will operate at a Power Usage Effectiveness (PUE) of 1.1, placing it among the most efficient in the industry. [...]

If it looks like a chicken coop, it’s because some of the design principles were adapted from … well, chicken coops. “Tyson Foods has done research involving facilities with the heat source in the center of the facility, looking at how to evacuate the hot air,” said Noteboom. “We applied a lot of similar thought to our data center.”

The Lockport site is ideal for fresh air cooling, with a climate that allows Yahoo to operate for nearly the entire year without using air conditioning for its servers.
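For reference, PUE is simply total facility power divided by the power that actually reaches the IT equipment, so 1.1 means roughly 10% overhead for cooling, power distribution and the like. A quick sketch of the arithmetic (the numbers below are made up for illustration; only the 1.1 ratio matches what Yahoo projects):

```python
# PUE = total facility power / IT equipment power.
# Figures are illustrative only, not Yahoo's actual load numbers.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: 1.0 would mean zero overhead."""
    return total_facility_kw / it_equipment_kw

it_load = 1000.0   # kW drawn by servers, storage and network gear
overhead = 100.0   # kW spent on cooling, power distribution, lighting
print(pue(it_load + overhead, it_load))   # -> 1.1
```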

High Scalability blog dissects a paper describing Dapper, Google’s tracing system used to instrument all the components of a software system in order to understand its behavior. Immensely interesting:

As you might expect, Google has produced an elegant and well-thought-out tracing system. In many ways it is similar to other tracing systems, but it has that unique Google twist. A tree structure, probabilistically unique keys, sampling, emphasising common infrastructure insertion points, technically minded data exploration tools, a global system perspective, MapReduce integration, sensitivity to index size, enforcement of system wide invariants, an open API—all seem very Googlish.
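To make the “tree structure, probabilistically unique keys, sampling” part concrete, here is a minimal sketch of what such a span tree could look like. This is my own illustration of the idea, not Dapper’s actual API; all names and the sample rate are made up:

```python
import random
import time
from dataclasses import dataclass, field
from typing import List, Optional

SAMPLE_RATE = 1.0 / 1024        # trace only a small fraction of requests

def new_id() -> int:
    """Probabilistically unique 64-bit identifier."""
    return random.getrandbits(64)

@dataclass
class Span:
    name: str
    trace_id: int                               # shared by every span of one request
    span_id: int = field(default_factory=new_id)
    parent_id: Optional[int] = None             # links spans into a tree
    children: List["Span"] = field(default_factory=list)
    start: float = field(default_factory=time.time)
    end: Optional[float] = None

    def child(self, name: str) -> "Span":
        """Open a child span; the parent/child link is what builds the tree."""
        c = Span(name, trace_id=self.trace_id, parent_id=self.span_id)
        self.children.append(c)
        return c

    def finish(self) -> None:
        self.end = time.time()

def maybe_trace(name: str) -> Optional[Span]:
    """The sampling decision is made once, at the root of the request tree."""
    if random.random() < SAMPLE_RATE:
        return Span(name, trace_id=new_id())
    return None
```

A real system would also ship finished spans out-of-band to a collector, which is where the MapReduce integration and the sensitivity to index size mentioned above come in.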

On my favorite blog :), HStack.org, Andrei wrote a great post about real-life performance testing of HBase:

The numbers are the tip of the iceberg; things become really interesting once we start looking under the hood, and interpreting the results.

When investigating performance issues you have to assume that “everybody lies”. It is crucial that you don’t stop at a simple capacity or latency result; you need to investigate every layer: the performance tool, your code, their code, third-party libraries, the OS and the hardware. Here’s how we went about it:

The first potential liar is your test, then your test tool – they could both have bugs, so you need to double-check.
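In the same “double-check everything” spirit, the cheapest sanity check is to time the calls yourself and compare your percentiles with what the benchmark tool reports. A hypothetical sketch with a generic `call`, not HStack’s actual harness:

```python
import time

def measure(call, n=10_000):
    """Time n invocations of `call` and return latency percentiles in ms.
    `call` stands in for whatever the benchmark exercises, e.g. a single get()."""
    latencies = []
    for _ in range(n):
        t0 = time.perf_counter()
        call()
        latencies.append((time.perf_counter() - t0) * 1000.0)
    latencies.sort()
    pct = lambda p: latencies[min(int(p / 100.0 * n), n - 1)]
    return {"p50": pct(50), "p95": pct(95), "p99": pct(99), "max": latencies[-1]}

# Compare these numbers with what the benchmark tool reports; a large gap means
# one of the layers -- tool, your code, their code, libraries, OS -- is "lying".
```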

But the most interesting distributed system of the week is World of Warcraft. Ars Technica describes a tour of the Blizzard campus, and here’s a peek at the best NOC screen ever:

For the hooorde!

Written by Adrian

April 27th, 2010 at 10:35 pm

Posted in Linkdump


Linkdump: Twitter, Twitter, CAP and … iPad


Well, not all of Twitter runs on Cassandra :) Alex Payne explains how they built Hawkwind, a distributed search system written in Scala. Take a look at slide 18, where you can clearly see that they use HBase as the backend:



Also from the great guys at Twitter: gizzard. An interesting and appropriate name for a database sharding framework. Gizzard uses range-based partitioning and a replication tree, and can sit on top of a wide range of data stores: RDBMSes, Lucene or Redis – you name it. But I wonder about the operational overhead when you have a really large gizzard cluster; see the toy sketch below.
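To picture what range-based partitioning with replicas means in practice, here is a toy lookup table. The ranges, store names and fan-out rule are all invented for illustration and are not Gizzard’s real API or configuration:

```python
import bisect

# Each entry maps the lower bound of a key range to a list of replica stores.
# In a Gizzard-style setup the leaves of the replication tree would be physical
# shards (MySQL tables, Lucene indexes, Redis instances, ...); here, just names.
RANGES = [0, 1_000_000, 2_000_000, 3_000_000]
REPLICAS = [
    ["sql-a1", "sql-a2"],
    ["sql-b1", "sql-b2"],
    ["redis-c1", "redis-c2"],
    ["lucene-d1", "lucene-d2"],
]

def shards_for(key: int) -> list:
    """Find the range a key falls into and return all replicas for it;
    writes fan out to every replica, reads can pick any healthy one."""
    idx = bisect.bisect_right(RANGES, key) - 1
    return REPLICAS[idx]

print(shards_for(1_500_000))   # -> ['sql-b1', 'sql-b2']
```

The operational overhead I worry about is exactly this table: on a really large cluster, someone has to keep the ranges balanced and the replica lists healthy.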

Michael Stonebraker has a short essay on CAP published in the ACM blogs. He identifies a series of use cases where the CAP theorem simply does not apply and cannot be appealed to for guidance:

Obviously, one should write software that can deal with load spikes without failing; for example, by shedding load or operating in a degraded mode. Also, good monitoring software will help identify such problems early, since the real solution is to add more capacity. Lastly, self-reconfiguring software that can absorb additional resources quickly is obviously a good idea.

In summary, one should not throw out the C so quickly, since there are real error scenarios where CAP does not apply and it seems like a bad tradeoff in many of the other situations.
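Stonebraker’s “shed load or operate in a degraded mode” advice needs surprisingly little code. A minimal sketch of the idea, with a made-up threshold and hypothetical `process`/`degraded` callbacks:

```python
import threading

MAX_IN_FLIGHT = 500   # made-up capacity limit
_in_flight = 0
_lock = threading.Lock()

def handle(request, process, degraded):
    """Shed load above a threshold instead of letting the whole system tip over.
    `process` is the normal path, `degraded` a cheaper fallback (e.g. cached data)."""
    global _in_flight
    with _lock:
        if _in_flight >= MAX_IN_FLIGHT:
            return degraded(request)   # degraded mode / load shedding
        _in_flight += 1
    try:
        return process(request)
    finally:
        with _lock:
            _in_flight -= 1
```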

Great nosqlEu coverage on Alex Popescu’s blog MyNoSQL. Read it to get all the presentations, tons of links and Twitter quotes.

Because every self-respecting blog should mention some info about the newly released iPad, here’s mine. According to O’Reilly Radar, the iPad is not ready for cloud integration:

I am hoping for a future where all I need to supply a device with is my identity, and everything else falls into place. This doesn’t even have to be me trusting in a third-party cloud: there’s no reason similar mechanisms couldn’t be used privately in a home network setting.

I think the iPad is an amazing piece of hardware, and the most pleasant web browsing experience available. It is still very much a 1.0 device though, and its best days certainly lie ahead of it. I hope part of that improvement is a simple story for synchronization and cloud access.

Guess I’ll be waiting for the release of iPad Pro.

Written by Adrian

April 21st, 2010 at 11:24 pm

Posted in Linkdump
