Wurbe is an informal web developers' meeting group from Bucharest, Romania. Meeting #5 focused on automated testing (unit testing, TDD, BDD and related topics). This is my presentation:
Taming the big, bad, nasty websites
My previous article focused on Linux monitoring, but often you'll have at least a few Windows machines in your datacenter. SQL Server is one of the best excuses these days to get a Microsoft machine into your server room – and you know what, it's a decent database, at least for medium-sized companies like the one I'm working for right now.
It is less known, but you can have SNMP support out of the box with Windows 2000 and XP – it doesn't need to be the Server flavor (it obviously works the same on Windows Server 2003). The service is installed from Control Panel > Add or Remove Programs > Add/Remove Windows Components, under Management and Monitoring Tools.

After the service is installed, the SNMP agent has to be configured. In the SNMP Service properties (the Security tab), define a read-only community string and restrict the set of hosts allowed to query the agent; the Agent tab holds the contact and location details.
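Before wiring the machine into Cacti, it's worth checking that the agent actually answers from the monitoring box. A minimal sketch in Python (the host address and the 'public' community string below are placeholders for your own setup, and the net-snmp command line tools are assumed to be installed):

#quick sanity check: ask the Windows agent for its system description
import subprocess

host = '192.168.0.10'        #hypothetical Windows machine
community = 'public'         #the read-only community configured above

#sysDescr.0 should come back as 'Hardware: ... Software: Windows ...'
output = subprocess.check_output(
    ['snmpget', '-v', '1', '-c', community, host, '.1.3.6.1.2.1.1.1.0'])
print(output.decode().strip())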
Next step is Cacti integration. Unfortunately, there is no Windows-specific device profile in Cacti, so if you have lots of Windows machines you'll have to define your own – or take a Generic SNMP-enabled host and use it as a scaffold for each device configuration.
Out of the graphs and data sources already defined in Cacti [I am using 0.8.6c], only two work with Windows SNMP agents: the process count and the interface traffic values.

It's a good start, but if you are serious about monitoring, you need to dig a little deeper. Once again, the MIB browser comes to save the day. It's very simple: search the Windows machine for any .mib files you can find, copy them to your workstation, load them into the MIB browser and make some recursive walks (Get subtree on the root of the MIB). This way, I was able to find some interesting OIDs on the Windows machine. For instance, .1.3.6.1.2.1.25.3.3.1.2.1 through .1.3.6.1.2.1.25.3.3.1.2.4 turned out to be the OIDs for the CPU load on each of the 4 virtual CPUs [it's a dual Xeon with HT] – hrProcessorLoad in the standard HOST-RESOURCES-MIB.
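To feed those per-CPU values into Cacti, one option is a data input method script that prints them in the name:value format Cacti parses. A rough Python sketch (the host, the community string and the assumption of exactly 4 CPU rows are specific to this hypothetical box):

#cacti data input script: per-CPU load from hrProcessorLoad
import subprocess

host = '192.168.0.10'
community = 'public'

values = []
for cpu in range(1, 5):     #rows .1 to .4 – the 4 virtual CPUs
    oid = '.1.3.6.1.2.1.25.3.3.1.2.%d' % cpu
    out = subprocess.check_output(
        ['snmpget', '-v', '1', '-c', community, '-Oqv', host, oid])
    values.append('cpu%d:%s' % (cpu, out.decode().strip()))

#Cacti expects a single line such as: cpu1:12 cpu2:7 cpu3:30 cpu4:2
print(' '.join(values))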

Memory-related OIDs for my configuration are the hrStorage columns of the same HOST-RESOURCES-MIB – hrStorageAllocationUnits, hrStorageSize and hrStorageUsed (.1.3.6.1.2.1.25.2.3.1.4 through .6), read from the "Physical Memory" row of the storage table.
Here's a neat memory chart for a Windows machine. Notice that the values are in "blocks" (the hrStorageAllocationUnits value), which in my case means 64 KB. The total physical memory is 4 GB, so the chart tops out at 4 GB / 64 KB = 65,536 blocks.
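If you ever need these raw block counts outside Cacti, the conversion back to bytes is a single multiplication. A trivial sketch (the sampled used-block count is a made-up number):

#convert hrStorage block counts into human-readable sizes
allocation_unit = 64 * 1024     #hrStorageAllocationUnits: 64 KB blocks
total_blocks = 65536            #hrStorageSize: 4 GB of physical memory
used_blocks = 23117             #hrStorageUsed: hypothetical sample

used_gb = used_blocks * allocation_unit / 2.0 ** 30
total_gb = total_blocks * allocation_unit / 2.0 ** 30
print('%.2f GB of %.0f GB in use' % (used_gb, total_gb))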

Most hardware manufacturers do offer SNMP agents for their hardware, as well as the corresponding .mib files. In my case, I was able to install an agent to monitor an LSI MegaRAID controller. Here is a chart of the number of disk operations per second:

In one of my next articles, we'll take a look together at how you can export "non-standard" data over SNMP from Windows, the same way we did on Linux, using custom scripts. Till then, have an excellent week.
“More and more pressure is on Microsoft to rush Longhorn. Apparently, a number of Microsoft licensees will get some sort of massive refund if the product isn't delivered in 2006, and the word on the street is that the code keeps breaking. My guess is that at the last minute the company will kludge together a workable system missing a lot of features.”
John C. Dvorak, "Now What Do We Do?" Dept., 13 July 2004
“In order to make this date (of 2006), we've had to simplify some things, to stagger it. One of the things we're staggering is the Windows storage work,” Jim Allchin, Microsoft's vice president in charge of Windows development, said in an interview with CNET News.com.
CNET News.com, "Microsoft revamps its plans for Longhorn", August 27, 2004
… at least that's what Mr. David Gristwood says in this (otherwise excellent) entry ('21 Rules of Thumb – How Microsoft develops its Software') on his MSDN weblog. David thinks that:
Even discounting the added development burden, with the addition of each additional platform the job of QA increases substantially. While clever QA management can minimize the burden somewhat, the complexity of multi-platform support is beyond the reach of most development organizations. Place your bets. Demand multi-platform support from your system software vendor, then build your product on the absolute fewest number of platforms possible.
What kind of 'portability' are we talking about in the context of software development at Microsoft? He is probably alluding to software being developed simultaneously for desktop and Pocket PC Windows, which is in fact quite a challenge for QA and for the development team. But if it's a tongue-in-cheek reference to Java's WORA, I found this entry somewhat funny. Let's – for the sake of the argument – suppose that you develop for multiple platforms and your QA team is able to thoroughly test only one of them. Basically, this means that your product is going to work OK on the main platform and have some flaws (most probably in the GUI area) on the other platforms. How is this worse than having a product which purposely works on a single target platform? Hmm, is the JVM 'system software' after all?
At first, this might seem a mind-boggling combination. What do Jython and PHP have in common (except the fact that I am a Python fan and my current consulting task is in a PHP project)?
Well, internationalizing a PHP app is pretty much a trivial task. If you are a sensible PHP programmer who insists on using PEAR instead of randomly picking a script from the tons of snippets populating the "scripting websites", the PEAR I18N package is probably the safest choice.
Maybe – for you – application maintainability and performance are not exactly important concerns. For me, they are. This is why I chose to store internationalized texts in files rather than in the database. I'd rather keep the database for real data, which is created, modified, aggregated and so on. And I'd rather have an internationalized error message on the screen even if the database is down.
Now we know that we'll use I18N and that the texts will be kept in some PHP files. However, I am no professional translator, and I have no desire to translate or to manually maintain the correspondence between the translators' files and the PHP files (no, translators won't modify PHP code – stop this nonsense right away). Code generation immediately comes to mind.
Basically, my first idea was to investigate whether the files used by the translators could be quickly transformed into PHP, and whether I could generate their format from my own files (a "roundtrip internationalization process", if you wish). Unfortunately, this is not an easy task: the only clue was that the translators use Office tools such as Word or Excel, because they rely on some specialized translation software integrated with these products.
The easy choice is Excel, since it allows a better organization of the data than hunting for tables in a Word document. The hard choice is the tool I'd use for automatically reading and even generating the Excel files. The difficulty comes from the fact that I don't have Windows with Office installed on my desktop, just Gentoo Linux and OpenOffice. Thus, I am unable to write a simple Python script which could perform my generation tasks by driving Excel through COM automation.
Fortunately, this is not the first time I've been confronted with this issue. I happen to know a very nice Java library that I wholeheartedly recommend for your Excel processing needs: JExcelApi.
Still, Java is a heavyweight programming language – it would be a really bad idea to fire up the whole compile-and-deploy Java machinery for a throwaway internal tool. This is where Jython comes in: it runs Python scripts on the JVM, so JExcelApi can be driven from a few lines of Python. The automation scripts are already in cron, and there's also a nice text document explaining to the translators where to get their files and where to put them after modification. The resulting script is not exactly fast, but this is tooling, not production, so this should not be a problem after all.
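To give an idea of what the Jython side looks like, here is a minimal sketch that reads the translators' spreadsheet with JExcelApi and emits a PHP messages file (the file names and the two-column label/text layout are illustrative assumptions, not the exact production script):

#jython: turn a translators' Excel sheet into a PHP messages file
from java.io import File
from jxl import Workbook

workbook = Workbook.getWorkbook(File('translations.xls'))
sheet = workbook.getSheet(0)    #column 0: message label, column 1: text

out = open('localization/en/login.php', 'w')
out.write("$messages = array(\n")
for row in range(sheet.getRows()):
    label = sheet.getCell(0, row).getContents()
    #escape single quotes so the generated PHP stays valid
    text = sheet.getCell(1, row).getContents().replace("'", "\\'")
    out.write("    '%s'=>'%s',\n" % (label, text))
out.write(");\n$this->set($messages);\n")
out.close()
workbook.close()

The reverse direction – generating the .xls files handed to the translators – uses the same library, through jxl.Workbook.createWorkbook and the jxl.write classes.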
Whatever your project constraints are, give Jython a try and you'll be amazed… As they put it on the Useless Python site: "If it were any simpler, it would be illegal."
Finally, there's a trick not quite related to Jython, but interesting nevertheless. There is an easy way to solve the problem of translating phrases with real data inside them, with easy parameter swapping. We'll use the good old sprintf, but not directly: we'll go through a not-so-popular but extremely useful function, call_user_func_array. Suppose that our example needs the user name and the authorization profile description displayed inside a nice message. All you have to do is define placeholders in the I18N files that fit as the first argument of sprintf. The following example should make it clearer:
localization/en/login.php:

$messages = array(
    'loggedin'=>'You are authenticated successfully as user %1$s with profile %2$s.'
);
$this->set($messages);
localization/fr/login.php:

$messages = array(
    'loggedin'=>'Vous avez le profil %2$s en tant qu\'utilisateur %1$s.'
);
$this->set($messages);
Simple passing of multiple parameters to I18N in PHP – an example function, without error processing or data domain checking:

#the multiple parameter translation function
#(used below as Tools::complexTranslation, i.e. as a static method
#of a Tools helper class)
function complexTranslation($i18n, $label, $params)
{
    #fetch the translated format string, prepend it to the parameter
    #list and let sprintf handle the positional substitution
    return call_user_func_array('sprintf',
        array_merge(array($i18n->_($label)), $params));
}
Then, you have to initialize your I18N object. This can be done in a generic manner for all pages:

#specific I18N initialization stuff
require_once 'I18N/Messages/File.php';
$g_language_dir = dirname($_SERVER['PATH_TRANSLATED']).'/localization/';
$i18n =& new I18N_Messages_File($g_langCode, $script_name, $g_language_dir);
Finally, use the function:

#translate the successful login message
$loginbox = Tools::complexTranslation($i18n, 'loggedin',
    array($operator->name, $profile->description));
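With hypothetical values ('jdoe' for $operator->name and 'Administrator' for $profile->description), the English file renders "You are authenticated successfully as user jdoe with profile Administrator.", while the French file renders "Vous avez le profil Administrator en tant qu'utilisateur jdoe." – the positional %1$s/%2$s placeholders are exactly what lets each translation reorder its parameters freely.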