Netuality

Taming the big bad websites

Archive for the ‘Linkdump’ Category

Links of the week: moving away from the Oracles


Siddharth Anand from LinkedIn explains in an interview how to move away from the Oracles to the wonderful world of open source and NoSQL. Now I’m waiting for them to open-source Databus and Espresso. And “Oracle is not web scale” – sorry, but I had to quote this.

It seems that HAProxy development has restarted at a nice pace – no less than one release every 10 days recently. I love this software; I’ve used it and will use it again. And to quote the author: “Users should really upgrade, as I don’t want to waste time trying to spot stupid bugs in configs that are notoriously broken.” Amen, bro! Always Be Upgrading.

Not related to scalability or anything, but the news of the week is the Flame “Middle Eastern” malware:

These are the real pros at work. One can only wonder how many pieces of software like Flame are harbored by our innocent laptops.

In order to avoid ending on such a nasty note, here’s an autotune clip which I’m sure you’ll greatly enjoy:

Written by Adrian

June 8th, 2012 at 10:17 pm

Posted in Linkdump

Linkdump: nodevincing the boss, and probabilistic data structures


Great pragmatic considerations in Felix Geisendörfer’s Node.js: convincing the boss guide:

  • Not for CPU-heavy apps
  • There’s no Django for it (yet?) so don’t expect mindblowing productivity
  • Don’t do it for the nerdy buzzword bingo …
  • … do it for single-page JS apps that are fed JSON from the server
  • Do it if real-time is important for your app
Oh, and the convincing part? Just build a cool prototype and find a local community you could hire smart developers from. I could say this about any other technology, and bosses seem to be rather predictable in their reasoning…

Written by Adrian

May 4th, 2012 at 10:23 pm

Posted in Linkdump


Linkdump: leaner meaner MySQL, gulping from the data buffet and lessons learned at Reddit


The Percona guys are pleading for a MySQL strongly optimized for a single type of storage engine:

We could save a lot of CPU cycles by having storage format same as processing format. We could tune Optimizer to handle Innodb specifics well. We could get rid of SQL level table locks and using Innodb internal data dictionary instead of Innodb files. We would use Innodb transactional log for replication (which could be extended a bit for this purpose). Finally backup can be done in truly hot way without nasty “FLUSH TABLE WITH READLOCK” and hoping nobody is touching “mysql” database any more. Single Storage Engine server would be also a lot easier to test and operate.

This also would not mean one has to give up flexibility completely, for example one can imagine having Innodb tables which do not log the changes, hence being faster for update operations.

Looks like the Twitter data buffet is back in business. Some of the data is free. Enjoy it in moderation: too much data can make you slow.

Reddit‘s Steve Huffman gave a talk at Web Apps Miami 2010. Self-healing, separation of services, statelessness, caching like crazy, redundancy and yes, a little bit of Hadoop (Amazon’s Hadoop offering is Elastic MapReduce). Read the full transcript on Carsonified:

We’ve actually been using Hadoop, Amazon’s Hadoop implementation to compute awards. If we need to do a complicated query like that, we store the data, we dump our database, or at the right time we store it in a way that will make those joins possible down the road. That being said; we’ve tried to avoid doing joins as much as possible, and when the data comes in we store it in the way we’re going to need it. That’s worked much better than trying to do it at run time.

Written by Adrian

May 10th, 2010 at 8:58 pm

Posted in Linkdump


Linkdump: Coop, HBase performance and a bit of Warcraft


Riptano is to Cassandra what Cloudera is to Hadoop or Percona to MySQL. Mmmkey?

A great, insightful post from Pingdom (as usual) lets us take a peek behind the doors of the largest web sites in the world, just by reading selected stuff from their respective developer blogs.

Yahoo decreased data-center cooling costs from 50 cents for every dollar spent on power to only one cent per dollar, at their newest Yahoo Computing Coop data center, built in Lockport, New York.

The data center operates with no chillers, and will require water for only a handful of days each year. Yahoo projects that the new facility will operate at a Power Usage Effectiveness (PUE) of 1.1, placing it among the most efficient in the industry. [...]

If it looks like a chicken coop, it’s because some of the design principles were adapted from … well, chicken coops. “Tyson Foods has done research involving facilities with the heat source in the center of the facility, looking at how to evacuate the hot air,” said Noteboom. “We applied a lot of similar thought to our data center.”

The Lockport site is ideal for fresh air cooling, with a climate that allows Yahoo to operate for nearly the entire year without using air conditioning for its servers.

High Scalability blog dissects a paper describing Dapper, Google’s tracing system used to instrument all the components of a software system in order to understand its behavior. Immensely interesting:

As you might expect, Google has produced an elegant and well-thought-out tracing system. In many ways it is similar to other tracing systems, but it has that unique Google twist. A tree structure, probabilistically unique keys, sampling, emphasising common infrastructure insertion points, technically minded data exploration tools, a global system perspective, MapReduce integration, sensitivity to index size, enforcement of system wide invariants, an open API—all seem very Googlish.
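
The core data model from the paper is small enough to sketch. Here is a toy Python version of the tree-of-spans idea (my own naming, not Google’s code): probabilistically unique 64-bit ids, parent pointers that let a collector rebuild the tree, and a sampling knob:

```python
import random

SAMPLE_RATE = 0.01  # Dapper-style: trace only a small fraction of requests

class Span:
    """One node in a trace tree: a named unit of work within a request."""
    def __init__(self, name, trace_id=None, parent_id=None):
        # Probabilistically unique 64-bit ids, as in the paper
        self.span_id = random.getrandbits(64)
        self.trace_id = trace_id if trace_id is not None else random.getrandbits(64)
        self.parent_id = parent_id  # None marks the root span
        self.name = name

    def child(self, name):
        # Children share the trace id and point back to their parent,
        # which is what lets the collector rebuild the tree later
        return Span(name, trace_id=self.trace_id, parent_id=self.span_id)

def should_sample():
    """Decide once per request whether this trace is recorded at all."""
    return random.random() < SAMPLE_RATE

root = Span("frontend.request")
rpc = root.child("backend.query")
assert rpc.trace_id == root.trace_id
assert rpc.parent_id == root.span_id
```

The interesting property is that the ids need no central coordination: with 64 random bits, collisions are improbable enough to ignore at any realistic trace volume.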

On my favorite blog :), HStack.org, Andrei wrote a great post about real-life performance testing of HBase:

The numbers are the tip of the iceberg; things become really interesting once we start looking under the hood, and interpreting the results.

When investigating performance issues you have to assume that “everybody lies”. It is crucial that you don’t stop at a simple capacity or latency result; you need to investigate every layer: the performance tool, your code, their code, third-party libraries, the OS and the hardware. Here’s how we went about it:

The first potential liar is your test, then your test tool – they could both have bugs so you need to double-check.

But the most interesting distributed system of the week is World of Warcraft. Ars Technica describes a tour of the Blizzard campus and here’s a peek at the best NOC screen ever:

For the hooorde!

Written by Adrian

April 27th, 2010 at 10:35 pm

Posted in Linkdump


Linkdump: Twitter, Twitter, CAP and … iPad


Well, not all of Twitter runs on Cassandra :) Alex Payne explains how they built Hawkwind, a distributed search system written in Scala. Take a look at slide 18, where you can clearly see that they use HBase as a backend:



Also from the great guys at Twitter: gizzard – an interesting and appropriate name for a database sharding framework. Gizzard uses range-based partitioning and a replication tree, and can sit on top of a wide range of data stores: RDBMSes, Lucene or Redis – you name it. But I wonder about the operational overhead when you have a really large gizzard cluster.
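
The range-based partitioning part is simple enough to sketch in a few lines of Python (the boundaries and shard names below are made up, and the real Gizzard layers replication trees and migrations on top of this lookup):

```python
import bisect

class RangeShards:
    """Range-based partitioning: each shard owns a contiguous slice
    of the id space, so a lookup is just a sorted-boundary search."""
    def __init__(self, boundaries, shards):
        # boundaries[i] is the first id owned by shards[i + 1]
        assert len(shards) == len(boundaries) + 1
        self.boundaries = boundaries
        self.shards = shards

    def lookup(self, key):
        # bisect finds which range the key falls into in O(log n)
        return self.shards[bisect.bisect_right(self.boundaries, key)]

ring = RangeShards([1000, 2000], ["db-a", "db-b", "db-c"])
assert ring.lookup(5) == "db-a"       # below the first boundary
assert ring.lookup(1000) == "db-b"    # boundaries belong to the next shard
assert ring.lookup(999999) == "db-c"  # everything above the last boundary
```

Compared to hash-based schemes, ranges keep adjacent keys together (good for scans) but make it the operator’s job to split hot ranges – which is exactly the overhead I’m wondering about.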

Michael Stonebraker has a short essay on CAP published in the ACM blogs. He identifies a series of use cases where the CAP theorem simply does not apply and cannot be appealed to for guidance:

Obviously, one should write software that can deal with load spikes without failing; for example, by shedding load or operating in a degraded mode. Also, good monitoring software will help identify such problems early, since the real solution is to add more capacity. Lastly, self-reconfiguring software that can absorb additional resources quickly is obviously a good idea.

In summary, one should not throw out the C so quickly, since there are real error scenarios where CAP does not apply and it seems like a bad tradeoff in many of the other situations.

Great nosqlEu coverage on Alex Popescu’s blog MyNoSQL. Read it to get all the presentations, tons of links and Twitter quotes.

Because every self-respecting blog should mention some info about the newly released iPad, here’s mine. According to the O’Reilly Radar, the iPad is not ready for cloud integration:

I am hoping for a future where all I need to supply a device with is my identity, and everything else falls into place. This doesn’t even have to be me trusting in a third-party cloud: there’s no reason similar mechanisms couldn’t be used privately in a home network setting.

I think the iPad is an amazing piece of hardware, and the most pleasant web browsing experience available. It is still very much a 1.0 device though, and its best days certainly lie ahead of it. I hope part of that improvement is a simple story for synchronization and cloud access.

Guess I’ll be waiting for the release of iPad Pro:

Written by Adrian

April 21st, 2010 at 11:24 pm

Posted in Linkdump


Linkdump: Cassandra lovers, blowing the circuit breaker and Oracle clouds


Good points (as always) on Alexandru’s blog discussing the SQL scalability isn’t for everyone topic.

NoSQL and RDBMS are just tools for the job, and there is nothing inevitable about the death of one or the other. But as we’ve “learned” over the years, every new programming language is the death of all its precursors, every new programming paradigm is the death of everything that existed before, and so on. The part that some seem to be missing, or deliberately ignoring, is that in most of these cases this death has never really happened.

For large-scale performance testing of a production environment, check out how MySpace simulated 1 million concurrent users with a huge EC2 cluster, described on the High Scalability blog. While the article is a guest post from a company selling “cloud testing” solutions and has a bit of “sales juice” in it, it’s still a very good read:

Large-scale testing using EC2

Someone is in love with Cassandra after only 4 months. Hoping Cassandra doesn’t get too fat after the wedding:

Traditional sharding and replication with databases like MySQL and PostgreSQL have been shown to work even on the largest scale websites — but come at a large operational cost. Setting up replication for MySQL can be done quickly, but there are many issues you need to be aware of, such as slave replication lag. Sharding can be done once you reach write throughput limits, but you are almost always stuck writing your own sharding layer to fit how your data is created and operationally, it takes a lot of time to set everything up correctly. We skipped that step all together and added a couple hooks to make our data aggregation service siphon to both PostgreSQL and Cassandra for the initial integration.
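
The “couple of hooks” approach they describe can be sketched roughly like this (hypothetical names, stores modeled as plain dicts – the real aggregation service is far more involved):

```python
class DualWriter:
    """Migration-era dual-write hook: every write goes to the old
    system of record and, best effort, to the new store being warmed up."""
    def __init__(self, primary, secondary):
        self.primary = primary      # e.g. PostgreSQL, still the source of truth
        self.secondary = secondary  # e.g. Cassandra, receiving a live copy

    def write(self, key, value):
        self.primary[key] = value   # must succeed: reads still come from here
        try:
            self.secondary[key] = value  # best effort during the transition
        except Exception:
            pass  # a real system would queue the miss for later repair

pg, cass = {}, {}
writer = DualWriter(pg, cass)
writer.write("user:1", {"name": "ada"})
assert pg["user:1"] == cass["user:1"]
```

The appeal is that the new store accumulates real production data and traffic before anything depends on it; once it has proven itself, reads can be flipped over one query at a time.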

Distributed data war stories from Anders @ bandwidth.com, running HBase and Hadoop on commodity hardware:

As mentioned before, the commodity machines I used were very basic but I was able to insert conservatively about 500 records per second with this setup. I kept blowing the circuit breaker at the office as well forcing me to spread the machines across several power circuits but it proved that the system was at least fault tolerant!

SourceForge chooses Python, TurboGears and … MongoDB for a new version of their website. Looks like Mongo is becoming quite mainstream.

Don’t believe the rumors, Oracle is into cloud computing after all – at least according to Forrester. Well, as long as the clouds are private. And as long as you can live with “coming soon” tooling. And it’s not like they really have a clear long-term strategy for cloud computing:

I believe that cloud is a revolution for Oracle, IBM, SAP, and the other big vendors with direct sales forces (despite what they say). Cloud computing has the potential to undermine the account-management practices and pricing models these big companies are founded on. I think it will take years for each of the big vendors to adapt to cloud computing. Oracle is just beginning this journey; I think other vendors are further down the track.

The igvita blog hits NoSQL in the groin by showing a simple way of having a schema-free data store … in MySQL. It’s a sort of proxy that translates schemas into denormalized data placed in distinct tables:

Instead of defining columns on a table, each attribute has its own table (new tables are created on the fly), which means that we can add and remove attributes at will. In turn, performing a select simply means joining all of the tables on that individual key. To the client this is completely transparent, and while the proxy server does the actual work, this functionality could be easily extracted into a proper MySQL engine – I’m just surprised that no one has done so already.

While it’s an interesting idea, I’m not sure how effective it will be in practice, as joins are among the most time-consuming operations in the database world. I’m pretty sure that replacing a primary-key get on a 10-column table with a join across 10 tables will add significant overhead.
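
Here’s a toy version of the idea using Python’s sqlite3 (my own sketch, not igvita’s proxy; attribute names are assumed trusted since they go straight into SQL): one two-column table per attribute, created on the fly, with a “row” reassembled by joining every attribute table on the key:

```python
import sqlite3

db = sqlite3.connect(":memory:")

def set_attr(entity, attr, value):
    # Each attribute gets its own (id, val) table, created lazily -
    # this is what makes the "schema" free to grow at runtime
    db.execute(f"CREATE TABLE IF NOT EXISTS attr_{attr} (id TEXT PRIMARY KEY, val TEXT)")
    db.execute(f"INSERT OR REPLACE INTO attr_{attr} VALUES (?, ?)", (entity, value))

def get(entity, attrs):
    # The select the post describes: join all requested attribute tables
    joins = " ".join(f"JOIN attr_{a} USING (id)" for a in attrs[1:])
    cols = ", ".join(f"attr_{a}.val" for a in attrs)
    sql = f"SELECT {cols} FROM attr_{attrs[0]} {joins} WHERE id = ?"
    return db.execute(sql, (entity,)).fetchone()

set_attr("u1", "name", "ada")
set_attr("u1", "email", "ada@example.org")
assert get("u1", ["name", "email"]) == ("ada", "ada@example.org")
```

Even in this toy, a get over N attributes is an N-way join, which makes the overhead concern above concrete: the work grows with the number of attributes fetched, not with the number of rows.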

Written by Adrian

March 4th, 2010 at 9:31 pm

Posted in Linkdump


Linkdump: Cassandra @Twitter, Forrester not grokking NoSQL


Seven signs you need to accept NoSQL in your life, according to the High Scalability blog. I especially like sign #6: “Maintaining a completely separate object caching system on top of an already beefy table storage system”. There are companies making serious bucks from selling exactly this type of caching system. I find that a bit ironic, don’t you?

Twitter has just decided to adopt Cassandra as their main storage. I roughly estimated the status table at more than 9 billion rows – a good table size to start thinking about the benefits of NoSQL. I would have been interested in seeing a comparison with other existing solutions and a rationale for their choice. According to some sources, Ryan King rejected HBase because if a region server is down, writes to the affected data are blocked until the data is redistributed – unlike Cassandra’s “writes never fail” policy. According to other sources, this will be solved in a future version of HBase, but I think Twitter needed a solution sooner rather than later. I hope for two things:

  • That the Twitter dudes will blog about their migration experience
  • That I’ll be able to access and search through all my older tweets, fer’ God sake!

Forrester Research thinks that NoSQL and Elastic Caching Platforms are very similar. So similar that “NoSQL Wants To Be Elastic Caching When It Grows Up“. According to Forrester “Ultimately, the real difference between NoSQL and elastic caching now may be in-memory versus persistent storage on disk.” Yeah sure: transactions, durability, indexing, security model – who needs this crap anyway?

Oh, and let’s not forget about today’s GAE unscheduled downtime. Looking forward to the post mortem; for sure there will be a thing or two to learn…

Written by Adrian

February 24th, 2010 at 11:18 pm

Posted in Linkdump


January 30 linkdump: cloud, cloud, cloud


Yes, there is such a thing as cloud management services, and Cloudkick has a business model around them:

The San Francisco company’s existing features — including a dashboard with an overview of your cloud infrastructure, email alerts, and graphs that help you visualize data like bandwidth requirements — will always be free, said co-founder and chief executive Alex Polvi. But Cloudkick wants to charge for features on top of the basic service, such as SMS alerts when your app has problems and a change-log tool where sysadmins can communicate with each other, which Polvi described as “Twitter for servers.”

Great article on designing applications for the cloud from Gojko Adzic, who has spent the last two years on projects deployed on the Amazon cloud:

A very healthy way to look at this is that all your cloud applications will run on a bunch of cheap web servers. It’s healthy because planning for that in advance will help you keep your mental health when glitches occur, and it will also force you to design for machine failure upfront making the system more resilient.

Royans’ blog comments on James Hamilton’s critical post about private clouds not being the future:

Though I believe in most of his comments, I’m not convinced with the generalization of the conclusions. In particular, what is the maximum number of servers one need to own, beyond which outsourcing will become a liability. I suspect this is not a very high number today, but will grow over time.

And a good detailed article about Hive used at Facebook:

Facebook has a production Hive cluster which is primarily used for log summarization, including aggregation of impressions, click counts and statistics around user engagement. They have a separate cluster for “Ad hoc analysis” which is free for all/most Facebook employees to use. And over time they figured out how to use it for spam detection, ad optimization and a host of other undocumented stuff.

Written by Adrian

January 30th, 2010 at 11:44 pm

Posted in Linkdump


January 23 linkdump: grids, BuddyPoke and the state of the Internet


On Enterprise Storage a few experts look at grid computing and the future of cloud computing.

Can cloud computing succeed where grid failed and find widespread acceptance in enterprise data centers? And is there still room for grid computing in the brave new world of cloud computing? We asked some grid computing pioneers for their views on the issue.

[...]

And when it comes to IaaS [infrastructure as a service], I think in five years something like 80 to 90 percent of the computation we are doing could be cloud-based.

BuddyPoke co-founder Dave Westwood explains on the High Scalability blog how they achieved viral scale – Facebook viral scale, to be more specific. BuddyPoke is today hosted entirely on GAE (Google App Engine), and they share some great insights and lessons learned.

On the surface BuddyPoke seems simple, but under the hood there’s some intricate strategy going on. Minimizing costs while making it scale and perform is not obvious. Who does what, when, why and how takes some puzzling out. It’s certainly an approach a growing class of apps will find themselves using in the future.

Jinesh Varia from Amazon wrote a great Architecting for the Cloud: Best Practices [PDF] paper:

This paper is targeted towards cloud architects who are gearing up to move an enterprise-class application from a fixed physical environment to a virtualized cloud environment. The focus of this paper is to highlight concepts, principles and best practices in creating new cloud applications or migrating existing applications to the cloud.

The AWS cloud offers highly reliable pay-as-you-go infrastructure services. The AWS-specific tactics highlighted in the paper will help design cloud applications using these services. As a researcher, it is advised that you play with these commercial services, learn from the work of others, build on the top, enhance and further invent cloud computing.

The Pingdom guys have another fantastic post on their blog about the state of the Internet in 2009:

  • 90 trillion – The number of emails sent on the Internet in 2009.
  • 92% – Peak spam levels late in the year.
  • 13.9% – The growth of Apache websites in 2009.
  • -22.1% – The growth of IIS websites in 2009.

These and more interesting statistics in their blog post.

Written by Adrian

January 23rd, 2010 at 1:20 pm

Posted in Linkdump


January 13 linkdump: KDD, EC2 congested, Coherence, Zimbra


Call to arms for the annual ACM KDD Conference. KDD stands for Knowledge Discovery and Data Mining, so if you’re looking for some hardcore use cases and new algorithms to apply, this is definitely the place to be (Washington, July 25-28):

KDD-2010 will feature keynote presentations, oral paper presentations, poster sessions, workshops, tutorials, panels, exhibits, demonstrations, and the KDD Cup competition.

There’s a rumor on the street that Amazon EC2 is over-subscribed. Reports from the trenches suggest that their scalability is … well, duh … not infinite, and elasticity is a tiny bit rigid:

Anyone that uses virtualized computing, whether it is in the cloud or in their own private setup (VMWare for example) knows you take a performance hit. These performance hits can be considerable, but on the whole, are tolerable and can be built into an architecture from the start.

The problems that we are starting to see from Amazon, are more than just the overhead of a virtualized environment. They are deep rooted scalability problems at their end that need to be addressed sooner rather than later.

My Adobe colleague Ricky Ho has posted some notes on Oracle’s Coherence (formerly Tangosol), a distributed Java cache rich in features. A great read especially if you want a technical intro to the product (code snippets and everything).

The acquisition of the day: Zimbra is being bought by VMWare. Yahoo is selling Zimbra at a loss, it seems. Analysts wonder what exactly VMWare is planning to do; they’re probably going up the stack and working on providing their own cloud ecosystem and related services. “VMWare Applications”, soon?

Under the terms of the agreement, Yahoo can continue to use Zimbra technology in its communications services. VMWare’s interest in Zimbra is a bit of a mystery since VMWare focuses on selling virtualization technology; in the release, VMWare offers somewhat of an explanation saying that the purchase furthers its “mission of taking complexity out of the datacenter, desktop, application development and core IT services”

Written by Adrian

January 13th, 2010 at 8:23 pm

Posted in Linkdump
