Send your logs to the cloud : Loggly vs. Papertrail

N.B. This post is from 2011 – the landscape has changed since then…

 

Centralised cloud-based logging.  It sounds tasty – and it is – but who should you go with?  Well, Loggly and Papertrail are essentially the only games in town when it comes to this kind of service; the only other contender in the space, Splunk Storm, is – well-pedigreed though it may be – strictly in private beta at this time, and therefore can’t really be considered a valid option yet.

The fact of the matter is that Loggly and Papertrail are, at a high level, functionally identical. They offer more or less the same bouquet of functionality, including alert triggers, aggregate visualisation, and even map-reduce tools for data mining and reporting. Loggly has been around longer and has a better track record for open-source involvement, meaning that the ecosystem around their service is more mature; however, that doesn’t mean they are necessarily superior to Papertrail in terms of the actual service.

My suggestion: If you’re in a hurry, flip a coin and go with one or the other. If you have the time, you should go ahead and try both out for a bit; Papertrail has a 7-day free trial programme, and Loggly is free (in perpetuity) for sufficiently small amounts of data and retention (which is no problem if you’re just poking around).

I’m very interested in hearing about actual user experiences with either or both, so please don’t hesitate to add a comment or drop me a line directly via the contact form.

Edit: From @pyr : « you  can also consider @datadoghq which has a different take on the issue but might fit the bill. »

Edit 2: From the comments, there’s also Logentries, which I don’t personally have any experience with, but which appears to be a reasonably comprehensive offering as well.

Heavyweight tilt : GitHub vs. Bitbucket

When it comes to code hosting on The Internets today, GitHub is absolutely the hottest, trendiest service going – but it’s not alone. Right now, the primary direct competitor to GitHub is Bitbucket, and choosing the best service for you or your company is not always an obvious call – so let’s break it down, shall we?

GitHub is generally considered to be the most popular code hosting and collaboration site out there today. They have an excellent track record for innovation and evolution of their service, and they put their money where their mouth is, notably by promoting and releasing their own internal tools into the open source community.  Their site offers a buffet of ever-improving facilities for collaborative activity, notably including an integrated issue tracker and excellent code comparison tools, among others. To be fair, not every feature has had the same level of care and attention paid to it, and as a result, some elements feel quite a bit more mature than others; however, again, they never stop trying to make things better.

Bitbucket looks a lot like GitHub.  That’s a fact.  I don’t honestly know which one came first, but it’s clear that today they’re bouncing off of each other in terms of design, features, and functionality.  You can more or less transpose your user experience between the two sites without missing too much of a beat, so for a casual user looking to contribute here and there, you get two learning curves for the price of one (nice).  Bitbucket’s pace of evolution is (perhaps) less blistering, but they too are capable of rolling out new and improved toys over time.

let’s get down to brass tacks

Both services offer the same basic functionality, which is the ability to create an account and associate that account with any number of publicly-accessible repositories; however, if you want a private repository, GitHub will make you pay for it, whereas Bitbucket offers it gratis.  There, as it is said, lies the rub.  More on this later.

One of the big differences between the two services lies in their respective origins: GitHub remains an independent start-up, whereas Bitbucket (although once independent) was acquired by – and is now strongly associated with – Atlassian (of JIRA fame). It is my opinion that this affects the cultural make-up of Bitbucket in subtle ways, leading to a more corporate take on development, deployment, and importantly, community relations and involvement.  Take a look at their respective blogs (go ahead, I’ll wait).

A quick scan of the past few months from each blog will reveal some important differences:

  • GitHub’s release schedule is more aggressive, with improvements and new features coming more regularly, whereas Bitbucket places greater emphasis on their tight integration with JIRA, Jenkins, and other industry tools.
  • Bitbucket advertises paid services and software on their blog, whereas GitHub advertises open source projects.
  • Bitbucket’s blog has one recent author, whereas GitHub’s blog has many recent authors.
  • GitHub hosts more community events (notably drinkups, heh) over a greater geographic area than Bitbucket (and their posts have more community response overall).

Also, check out GitHub’s “about us” page – brogrammers abound!  I’d compare the group to Bitbucket, but as it so happens, they don’t have an analogous page.

Previously I mentioned that GitHub would like you to pay for private repositories.  This is obviously part of their revenue scheme (and who can blame them for wanting to get that cheese?), but it also has the side-effect of nudging people toward hosting their projects publicly.  This has ended up creating a (very) large community of active participants representing a variety of languages and interests, which in turn results in more projects, and so on and so forth.  This feedback loop is interesting because it is self-reinforcing: the more people use it, the more people will use it.

These observations are, in no way, objective statements of the superiority of one platform over the other – they are, however, indicative of cultural differences between the two companies.  This is (or, at least, should be) a non-trivial element when deciding which service is right for you or your organisation.  For example, I’m a beer-drinking open source veteran who works in start-ups and small companies, so culturally my preferences are different from those of a suit-wearing system architect working for a thousand-person consulting firm.  One isn’t necessarily better than the other – they’re just not the same (and that’s OK).

but wait, there’s more

Alright, here comes the shocker: for paid services (i.e. private repositories), GitHub is much more expensive than Bitbucket.  As in nowhere near the same price.  At all.  How can this be?  Well, I’m not privy to the financials of either company (if I were, I doubt I’d have written this post), but hey, the money for all those great open source projects, drinkups, and (bluntly) salaries has to come from somewhere – and while Bitbucket has Atlassian’s pockets backing them, GitHub has to stand on their own successes, and live with their own failures.

The two services are not dissimilar technically speaking, so it’s really up to you to decide which culture is better suited for your project.  Do you just need a spot to put your private project, which you work on alone, isolated from the greater Internet?  Bitbucket.  Do you have a public project that you’d like other people to discover, hack on together, and build a community around?  GitHub.  As for paid services, well, I suppose that comes down to whether or not you want to pay extra to support what GitHub is doing.

Now, let’s be fair, for a lot of companies, “culture” is an irrelevant factor in their purchasing department – cost is the only concern.  Fair enough.  But let’s say you’ve got a team of developers, all of whom already have their own projects on GitHub, are familiar with the tools and processes, and have a network of fellow hackers built-in and ready to go.  In that case, perhaps culture is worth something after all.

Improvements in Cassandra 1.0, briefly stated

DataStax recently announced the availability of Cassandra 1.0 (stable), and along with that announcement, they made a series of blog posts (1, 2, 3, 4, 5) about many of the great new features and improvements that the current version brings to the table.

For those of you looking for an executive summary of those posts, you’re in luck, because I’ve got your back on this one.

  • New multi-layer approach to compression that provides improvements to both write and (especially) read operations (see the example after this list).
  • Said compression strategy also yields potentially significant disk space savings.
  • Leverages the JNA library to provide off-heap caching; this reduces garbage-collection pressure, resulting in more efficient collections and a smaller overall heap footprint.
  • A much improved compaction strategy (including a new, optional leveled compaction mode) results in less costly compaction runs, improving overall performance on each node.
  • Fewer requests are made over the network, and said requests are smaller in size, improving overall performance across the cluster.
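To give a flavour of the compression point above, compression in 1.0 is configured per column family; a minimal sketch using the cassandra-cli (the column family name and chunk size here are purely illustrative) looks something like this :

update column family users with compression_options = {sstable_compression: SnappyCompressor, chunk_length_kb: 64};

Existing data then picks up the new setting progressively, as SSTables are rewritten by normal compaction.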

In short, 1.0 is a very significant, very important upgrade over 0.8 (et al.), and one which will likely bring Cassandra to the forefront of the hardcore big data / NoSQL scene at large.

Nagios plugin to parse JSON from an HTTP response

Update 2015-10-07: This plugin has evolved – please check the latest README for up to date details.

 

Hello all !  I wrote a plugin for Nagios that will parse JSON from an HTTP response.  If that sounds interesting to you, feel free to check out my check_http_json repo on GitHub.  The plugin has been tested with Ruby 1.8.7 and 1.9.3.  Pull requests welcome !

Usage: ./check_http_json.rb -u <URI> -e <ELEMENT> -w <WARN> -c <CRIT>
-h, --help                       Help info.
-v, --verbose                    Additional human output.
-u, --uri URI                    Target URI. Incompatible with -f.
    --user USERNAME              HTTP basic authentication username.
    --pass PASSWORD              HTTP basic authentication password.
-f, --file PATH                  Target file. Incompatible with -u.
-e, --element ELEMENT            Desired element (ex. foo=>bar=>ish is foo.bar.ish).
-E, --element_regex REGEX        Desired element expressed as regular expression.
-d, --delimiter CHARACTER        Element delimiter (default is period).
-w, --warn VALUE                 Warning threshold (integer).
-c, --crit VALUE                 Critical threshold (integer).
-r, --result STRING              Expected string result. No need for -w or -c.
-R, --result_regex REGEX         Expected string result expressed as regular expression. No need for -w or -c.
-W, --result_warn STRING         Warning if element is [string]. -C is required.
-C, --result_crit STRING         Critical if element is [string]. -W is required.
-t, --timeout SECONDS            Wait before HTTP timeout.

The --warn and --crit arguments conform to the Nagios threshold format guidelines (for example, a threshold of 4: means alert if the value drops below 4).

If a simple result of either string or regular expression (-r or -R) is specified :

  • A match is OK and anything else is CRIT.
  • The warn / crit thresholds will be ignored.

If the warn and crit results (-W and -C) are specified :

  • A match is WARN or CRIT and anything else is OK.
  • The warn / crit thresholds will be ignored.

Note that (-r or -R) and (-W and -C) are mutually exclusive.

Note also that the response must be pure JSON. Bad things happen if this isn’t the case.
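Before wiring the plugin into Nagios, it’s easy to sanity-check it from the shell; for example, against a local Elasticsearch cluster health endpoint (the host and port here are illustrative), something like :

./check_http_json.rb -u 'http://localhost:9200/_cluster/health' -e 'status' -r 'green'
./check_http_json.rb -u 'http://localhost:9200/_cluster/health' -e 'number_of_nodes' -w '4:' -c '3:'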

How you choose to implement the plugin is, of course, up to you.  Here’s one suggestion:

# check json from http
define command{
 command_name    check_http_json-string
 command_line    /etc/nagios3/plugins/check_http_json.rb -u 'http://$HOSTNAME$:$ARG1$/$ARG2$' -e '$ARG3$' -r '$ARG4$'
}
define command{
 command_name    check_http_json-int
 command_line    /etc/nagios3/plugins/check_http_json.rb -u 'http://$HOSTNAME$:$ARG1$/$ARG2$' -e '$ARG3$' -w '$ARG4$' -c '$ARG5$'
}

# make use of http json check
define service{
 service_description     elasticsearch-cluster-status
 check_command           check_http_json-string!9200!_cluster/health!status!green
}
define service{
 service_description     elasticsearch-cluster-nodes
 check_command           check_http_json-int!9200!_cluster/health!number_of_nodes!4:!3:
}

RabbitMQ plugin for Collectd

Hello all,

I wrote a rudimentary RabbitMQ plugin for Collectd.  If that sounds interesting to you, feel free to take a look at my GitHub.  The plugin itself is written in Python and makes use of the Python plugin for Collectd.

It will accept four options from the Collectd plugin configuration :

Locations of binaries :

RmqcBin = /usr/sbin/rabbitmqctl
PmapBin = /usr/bin/pmap
PidofBin = /bin/pidof

Logging :

Verbose = false
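For reference, here is a rough sketch of what the corresponding Collectd configuration might look like; the module path and module name below are illustrative, so check the repo’s README for the exact values :

<LoadPlugin python>
    Globals true
</LoadPlugin>

<Plugin python>
    ModulePath "/usr/lib/collectd/plugins"
    Import "rabbitmq_info"

    <Module rabbitmq_info>
        RmqcBin  "/usr/sbin/rabbitmqctl"
        PmapBin  "/usr/bin/pmap"
        PidofBin "/bin/pidof"
        Verbose  false
    </Module>
</Plugin>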

It will attempt to gather the following information :

From « rabbitmqctl list_queues » :

messages
memory
consumers

From « pmap » of « beam.smp » :

memory mapped
memory writeable/private (used)
memory shared

Props to Garret Heaton for inspiration and conceptual guidance from his « redis-collectd-plugin ».

How to use the Distributed Numeric Assignment (DNA) plug-in in 389 Directory Server

Hello everybody !  Today’s post is about the Distributed Numeric Assignment (or « DNA ») plug-in for the 389 Directory Server (also known as the Fedora, Red Hat, and CentOS Directory Servers).  Although this plug-in has existed for quite some time, there isn’t a whole lot of documentation about how to implement it in a real-world scenario.  I recently submitted some documentation to the maintainer of the 389 wiki, but since I’m not sure how, when, or in what form that documentation will come to exist on their site, I thought I’d expand on it here as well.  If you’ve made it this far, I’m going to assume that you’re already familiar with the basics of LDAP, and already have an instance of Directory Server up and running – if not, I suggest you take a look through the official Red Hat documentation in order to get started.

By way of some background, it is worth noting that my basic requirement was simply to have a centralised back-end for authenticating SSH logins to the various machines in our fleet.  The actual numerical values for the UID and GID fields did not need to be the same, they simply needed to be both extant and unique for each user, with the further caveat that they should not collide with any existing values that might be defined locally on the machines.  This is a very basic set of requirements, so it is an excellent starting point for our example.  The first step is to activate the DNA plug-in via the console :

[TAB] Servers and Applications
Domain -> Server -> Server Group -> Directory Server
[SECTION] Configuration
Server -> Plug-ins -> Distributed Numeric Assignment
[X] Enable plug-in
Save

The Directory Server needs to be restarted in order for the activation to take effect.  This can either be done via the console, or via the command-line as normal.  The next step is to define how DNA will interact with new user data ; this is different from configuring the plug-in itself, in that we will be setting up a layer in between the plug-in and the user data that will allow certain values to be generated automatically (which is, of course, the end goal of this exercise).  Consider the following two LDIF snippets :

# uids
dn: cn=UID numbers,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config
objectClass: top
objectClass: extensibleObject
cn: UID numbers
dnatype: uidNumber
dnamagicregen: 99999
dnafilter: (objectclass=posixAccount)
dnascope: dc=example,dc=com
dnanextvalue: 1000

# gids
dn: cn=GID numbers,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config
objectClass: top
objectClass: extensibleObject
cn: GID numbers
dnatype: gidNumber
dnamagicregen: 99999
dnafilter: (|(objectclass=posixAccount)(objectclass=posixGroup))
dnascope: dc=example,dc=com
dnanextvalue: 1000

As you can see, they are nearly identical.  This configuration activates the DNA magic-number functionality for the UID and GID fields as shown in the Posix attributes section of the console, though the values used may require further explanation.  The only particular requirement for the magic number (specified by the « dnamagicregen » field) is that it be a value that cannot occur naturally, which is to say a value that would not be generated by the DNA plug-in, nor set manually at any time.  The default value is « 0 », but since this is clearly a number with meaning on the average Posix system, I would recommend a suitably large number that is unlikely to ever be used, such as « 99999 ».  Non-numerical values can technically be used too ; however, these will not be acceptable to the console, so unless you’re using a third-party interface (or doing everything from the commandline), a numerical value must be used.

The « dnanextvalue » field functionally indicates where the count will start from.  As noted previously, in order to avoid collisions with existing local entries on the various machines, I chose a start point of « 1000 », which was more than acceptable in my environment.  Once these two snippets are integrated via the commandline, simply restart the Directory Server (again), and you’re good to go.  From now on, any time that a new user is created with the value « 99999 » entered into either (or both) of the UID and GID Posix fields, DNA will automagically generate real values as appropriate.
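For reference, loading the two snippets from the command line and bouncing the server can be done along these lines (the bind DN, file name, and port are illustrative, so adjust for your environment) :

# load the two DNA configuration entries
ldapadd -x -h localhost -p 389 -D "cn=Directory Manager" -W -f dna-uid-gid.ldif
# restart the directory server so the changes take effect
service dirsrv restart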

Hope that helps – enjoy !

CPAN RPMs in RHEL / CentOS : generation, conflict, and solutions

Hello all !  Today we’re going to take a look at a somewhat obscure problem that – once encountered – can cause nothing but headaches for a system administrator.  The problem relates to conflicts in CPAN RPM packages, and what can be done to work around the issue.  If you’ve made it this far, I’m going to assume a couple of things : you’re comfortable with RPMs and repositories, have worked with a .spec file before, and you know what Perl modules are.  Good ?  Ok, let’s go.

Edit : About a week after I posted this article, the pastebin I uploaded the examples to disappeared.  Maybe it will come back – I don’t know – but if not, sorry for the broken links…

CPAN is an enormous collection of Perl modules.  If you’ve ever written a Perl script, there’s a good chance you’ve used a module that – at one point or another – came from this archive.  One of the really neat features of CPAN is the interactive manner in which modules can be downloaded and installed from the archive using Perl right from the command line (frankly, if you’re reading this post, there’s a good chance you’ve used this feature, too).  This is a fairly common way to install new modules and add functionality to your system, especially if you’re coding for local use (i.e. on your personal box).

It’s useful, but it’s not perfect, and one of the key areas where it starts to fail is scalability : if you’ve got a bunch of machines, and you need to SSH into each one to interactively install a CPAN module or two, it’s going to be a hassle.  Likewise, CPAN doesn’t often find its way into the hearts and minds of enterprise Red Hat or CentOS environments, where the official policy is often to install software via RPM only (for support, administration, and sanity reasons).

Luckily, some of the most commonly used CPAN modules exist as RPMs in the default repositories.  Some, but not all (and not even « many ») – for this, there are other repositories available.  Some examples :

That last one – Magnum – is particularly interesting given the subject of our post today.  From their info page :

At Magnum we have a firm rule that all CPAN modules on our machines are installed from RPMs. The Fedora and Centos projects build RPMs for many CPAN modules, but there are always ones missing and the ones that are available often lag behind the most up to date versions.  For that reason, we build a lot of RPMs of CPAN modules. And we don’t want to keep that work to ourselves, so on these pages we make them available for anyone to download.

Their RPMs are generated automagically using a great tool called « cpanspec », which does exactly what you think it does : given a CPAN tarball, it will generate a .spec file suitable for building an installable RPM.  It is available in the standard repositories, and can be installed easily via YUM as normal, so go ahead and do that now.  Ok, example time : say you needed HTML::Laundry, but after a quick peek through your repositories, it becomes readily apparent that an RPM is not available.  Thanks to cpanspec, all is not lost :

[build@host-119 ~]$ wget http://search.cpan.org/CPAN/authors/id/S/ST/STEVECOOK/HTML-Laundry-0.0103.tar.gz
[build@host-119 ~]$ cpanspec --packager "build <build@domain.ext>" HTML-Laundry-0.0103.tar.gz

We just downloaded the tarball right from the CPAN website, and ran cpanspec against it.  The « --packager » argument simply defines the person who’s generating the .spec, and doesn’t necessarily have to be anything accurate.  Go ahead and try it for yourself.  Now take a look at the resulting .spec file (or on a pastebin here).  As you can see, it fills in all the fields, including the critical (and often tricky-to-determine) « BuildRequires » and « Requires » items.  Frankly, it’s solid gold, and it has made the lives of CentOS / RHEL admins all over the world much easier.
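If you want to take it all the way to an installable package, the build itself is just a couple of commands.  Here is a sketch assuming the stock RHEL 5 build tree under /usr/src/redhat (adjust the paths, or set %_topdir, to taste); the .spec file name follows cpanspec’s perl-<distribution>.spec convention :

[build@host-119 ~]$ cp HTML-Laundry-0.0103.tar.gz /usr/src/redhat/SOURCES/
[build@host-119 ~]$ rpmbuild -ba perl-HTML-Laundry.spec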

That said, it’s not perfect, and there are times when you might run into problems – two in particular.  The first is a conflict over ownership, which arises when multiple RPMs claim to be responsible for the same file (or files, or directories, or features, or whatever).  The second is more nefarious : an RPM that writes files to the system without declaring ownership for them – a condition often referred to as « clobbering ».  The former is irritating, but at least it’s not destructive, unlike the latter, which can cause all manner of headaches.  To illustrate these two problems, let’s take a look at another example (this one being decidedly more real-world than that of Laundry above) : CGI.pm.

The .spec file that is generated from this tarball is functional and correct, and we can build an installable RPM out of it, so at first all appears well.  Again, go ahead and try for yourself – I’ll wait.  You may wish to capture the build output for review – otherwise, check the pastebin.  I’d like to draw your attention to the « Installing » lines.  By trimming the « Installing /var/tmp/perl-CGI.pm.3.49-1-root-root » element from each of those lines, we can see the actual paths and files that this RPM will install to.  Examples :

/usr/lib/perl5/vendor_perl/5.8.8/CGI.pm
/usr/lib/perl5/vendor_perl/5.8.8/CGI/Cookie.pm
/usr/lib/perl5/vendor_perl/5.8.8/CGI/Util.pm
/usr/share/man/man3/CGI.3pm
/usr/share/man/man3/CGI::Pretty.3pm
/usr/share/man/man3/CGI::Cookie.3pm

At first glance this looks perfectly acceptable.  But look what happens when we try to install the resulting RPM (clipped for brevity) :

[root@host-119 build]# rpm -iv /usr/src/redhat/RPMS/noarch/perl-CGI.pm-3.49-1.noarch.rpm
Preparing packages for installation...
file /usr/share/man/man3/CGI.3pm.gz from install of perl-CGI.pm-3.49-1.noarch conflicts with file from package perl-5.8.8-27.el5.x86_64
file /usr/share/man/man3/CGI::Cookie.3pm.gz from install of perl-CGI.pm-3.49-1.noarch conflicts with file from package perl-5.8.8-27.el5.x86_64
file /usr/share/man/man3/CGI::Pretty.3pm.gz from install of perl-CGI.pm-3.49-1.noarch conflicts with file from package perl-5.8.8-27.el5.x86_64

As it turns out, the Perl package that comes with RHEL / CentOS already contains CGI.pm.  This is normal, since it’s so popular, and is included as a convenience.  Thus, RPM – in an attempt to preserve the coherence of the package management system – refuses to install over top of the existing owned files.  This is a fine illustration of the first of the two problems previously noted : conflicts over ownership.  As I mentioned above, it’s aggravating, but it’s not a bug – it’s a feature, and it’s doing exactly what it’s designed to do.  Irritating, but not ultimately dire.

If you look carefully, though, it’s also an illustration of the second problem.  Note the list of files that are conflicting.  Look back to the list of files that the package contains – notice anything missing from the conflicts list ?  That’s right – the actual module files (*.pm) are not showing conflicts, which means they’d get overwritten without complaint by RPM.  You might be thinking « who cares ? that’s what I want » right now, but trust me, it’s not what you want.  Imagine this CGI package, with this version of CGI.pm, gets installed, and then later you upgrade the Perl package – your CGI.pm files will get overwritten by the Perl package, because as far as RPM is concerned, Perl owns those files.  All of a sudden, things break because you had scripts that relied on your particular version, but since you just upgraded Perl, you think (quite naturally) that the problem could be anywhere – where do you even start looking ?

Imagine the headache if there are multiple administrators, multiple servers, multiple data centres, and multiple clients paying multiple dollars.  No fun at all.

So how can we upgrade CGI.pm, using an RPM, without running into these problems ?  As is often the case, the answer is deceptively simple, but not immediately obvious.  Ultimately what we want to accomplish is twofold :

  • Avoid the man page conflicts.
  • Ensure that the existing owned module files are not clobbered by our new package.

Concerning the man pages – and I’m going to be perfectly blunt here – the solution is to simply not install them, since, of course, they’re already there.  As for avoiding a clobbering condition, this requires a little bit of investigation into how Perl modules and libraries are stored on an RHEL / CentOS machine.  Consider the following output :

[root@host-119 ~]# ls -d /usr/lib64/perl5/*
/usr/lib64/perl5/5.8.8  /usr/lib64/perl5/site_perl  /usr/lib64/perl5/vendor_perl

What’s it all mean ?  Well, the « 5.8.8 » directory is the default directory as defined by the Perl architecture, and is system and platform-agnostic, which is to say that it’s (supposed to be) the same on every system.  The « vendor_perl » directory contains everything that is specific to RHEL / CentOS (the « vendor » of the distribution).  As you may recall from the rpmbuild output above, this is where the RPM wants to install the modules (thus creating the clobbering condition).

There’s a third directory there, promisingly named « site_perl » ; as the name implies, this is where site-specific files are stored, which is to say items that are neither part of the default Perl architecture, nor part of the RHEL / CentOS distribution.  As you’ve no doubt guessed by now, site_perl is where we’re going to put our new modules.
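If you’d like to convince yourself that modules dropped into site_perl will actually win, a quick look at Perl’s search path is enough; on a stock RHEL / CentOS box, the site_perl directories should be listed ahead of their vendor_perl counterparts in @INC :

[root@host-119 ~]# perl -e 'print "$_\n" for @INC'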

Luckily for us, the only thing that needs to be changed is the .spec file – and we even get a head start, since cpanspec does most of the heavy lifting for us.  Examining the .spec file once more, we see the following lines of note (again, cut for brevity) :

%build
%{__perl} Makefile.PL INSTALLDIRS=vendor
%files
%{perl_vendorlib}/*

These indicate that the target installation directory is that of the vendor, which is normally the case, and thus the default setting.  Since we want to install to the site directory, we make the following changes :

%build
%{__perl} Makefile.PL INSTALLDIRS=site
%files
%{perl_sitelib}/*

That solves our clobbering problem quite nicely, but what about the man files ?  As I mentioned above, the idea is to simply avoid installing them altogether, but since they’re generated automatically during the build process, how can we exclude them ?  What I’m about to present is a bit of a hack, but it’s absolutely effective, and ultimately quite clean : we delete them after they’ve been generated, and then don’t declare them in the file list.  Some items are already being potentially deleted by default, so let’s go ahead and add our own line into the mix :

find $RPM_BUILD_ROOT -depth -type d -exec rmdir {} 2>/dev/null \;
# destroy manified man, man.
find $RPM_BUILD_ROOT -type f -name '*.3pm' -exec rm -f {} \;

This will look for all of the « manified » man files and remove them from the build tree.  All that’s left now is to remove them from the file list.  This is as simple as deleting (or commenting out) their sole declaration :

#%{_mandir}/man3/*

Another option is to simply use the « --excludedocs » argument when installing the RPM.  I opted to remove the docs altogether in order to ensure that the package can be installed without errors by anyone else without needing to know about the argument requirement ahead of time (and to facilitate automated rollouts).
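For completeness, that alternative is just a matter of passing the flag at install time, along the lines of :

[root@host-119 build]# rpm -ivh --excludedocs /usr/src/redhat/RPMS/noarch/perl-CGI.pm-3.49-1.noarch.rpm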

What you’ll end up with is a .spec file that looks like this.  Go ahead and build your RPM – it’ll install without conflicts and without danger.  This is a technique that can be used for other CPAN packages as well, so go ahead and install everything you’ve always wanted.