I keep my CFEngine policy (and some other similar things) in a Subversion repository. The progression from unit test to integration test to production is handled with branches and tags: the integration test policy is the trunk, unit tests are done by branching the trunk, and promotion to production is done by tagging a revision of the trunk with a release name (monthly_YYYY_MM.POINT). But this discussion doesn’t need to be just about that approach; my solution should work for pretty much anyone who needs a directory to match a portion of a Subversion repository structure.
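For the record, promoting to production is nothing fancier than a cheap server-side copy; roughly something like this (the repository URL and tag name here are made up for illustration):

# Tag the current trunk as this month's production release.
# URL and release name are examples, not my actual repository layout.
svn copy -m "promote trunk to production" \
  https://svn.example.com/repos/cfengine/trunk \
  https://svn.example.com/repos/cfengine/tags/monthly_2013_02.0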
Recently, I’ve been trying to speed up my Subversion post-commit hooks. I have several things which are run from the hook, and the number of separate commands leads to a bunch of fork() calls (or clone(), whatever). Several of the scripts are already Python, so I figured I’d just write the hooks themselves in Python, so that the Python interpreter only needs to start once and the separate methods can be pre-compiled Python. This should decrease overall execution time, making the end-user experience slightly better by cutting down the time they have to wait on the server. We’re talking about fractions of a second, but I have some operations which bulk-create directories in SVN or otherwise cause tens or hundreds of new revisions to be created at one time (which is necessary for the way some of my integration processes work), so it actually adds up.
This is also an excuse for me to learn Python, so bear with me if the code below is horrible. Actually, don’t bear with me – leave a comment letting me know how it should have been done. :)
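For the curious, the overall shape I’m aiming for is something like the sketch below; the two handler functions are just stand-ins for the real scripts that the old shell hook used to exec individually:

#!/usr/bin/env python
# post-commit: Subversion invokes this as "post-commit REPOS REV".
# Sketch only -- notify_committers and update_working_copies are
# placeholders for the separate scripts the old shell hook forked.
import sys

def notify_committers(repos, rev):
    # formerly a standalone notification script
    print("would send mail for r%s of %s" % (rev, repos))

def update_working_copies(repos, rev):
    # formerly a standalone script that refreshed server-side checkouts
    print("would refresh checkouts for r%s of %s" % (rev, repos))

def main(repos, rev):
    # one interpreter start per commit; each former script is now a call
    notify_committers(repos, rev)
    update_working_copies(repos, rev)

if __name__ == '__main__':
    main(sys.argv[1], sys.argv[2])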
I’m responsible for a pretty large CFEngine installation. CFEngine is designed to be pretty self-sufficient even when the network is unavailable, so it basically works by keeping its configuration local on each machine, and running from that local copy. This is mostly implemented using a file-based configuration structure. There’s a main configuration file (promises.cf) which includes several additional configuration files. In pretty much every situation, one of the promises (the name for an individual policy item) or bundles of promises will ensure that the local config files are in sync with the configuration files on the central master.
While it’s possible to use LDAP or define some variables on the central master, the main way configuration is done is by putting the policy into some files on the master and then allowing individual systems to copy those files down; the central master is basically just a fairly efficient file server.
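As a heavily simplified illustration (the hostname and paths are invented, and the copy bodies come from the CFEngine standard library), the “stay in sync with the master” part is basically just a copy promise like this:

bundle agent update_policy
{
files:
  # keep the local inputs directory identical to the master's policy tree
  "/var/cfengine/inputs"
    copy_from    => remote_cp("/var/cfengine/masterfiles", "policyhost.example.com"),
    depth_search => recurse("inf");
}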
So, we all know that Ruby’s memory management is sketchy at best, and that Puppet is generally slow. But how can we quantify that? One metric that matters for my usage is verifying the permissions on a large number of files. To that end, I wrote a simple script to compare the performance of ensuring that the contents of a large directory of files are owned by a specific group. Before each test, I remove a temp directory, create a set of sequentially-named files with the wrong group ownership, and then correct the ownership. I then run the same command again to see how quickly it can verify the permissions – which should be the common case.
For the baseline, I use “find | xargs chgrp”, which is slightly slower than “chgrp -R”, but not much slower (and, in my mind, slightly more fair). I then use a simple CFEngine policy and a simple Puppet policy to do the same thing. The summary? Puppet is dog slow at file recursion, while CFEngine is nearly as fast as pure find. CFEngine actually uses less memory than the shell when you get to many files (probably due to the pipe to xargs), and Puppet wastes memory like it’s been surfing the web for weeks using an old version of Firefox.
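The baseline half of the test is nothing clever; it’s roughly the script below (the path, group names, and file count are placeholders, and it needs to run as root so the “wrong” group can be set):

#!/bin/bash
# create a pile of files with the wrong group, fix them, then verify
rm -rf /tmp/grouptest
mkdir -p /tmp/grouptest
for i in $(seq 1 10000); do
  touch /tmp/grouptest/file$i
done
chgrp -R root /tmp/grouptest                    # deliberately wrong group
time find /tmp/grouptest | xargs chgrp users    # first pass: fix ownership
time find /tmp/grouptest | xargs chgrp users    # second pass: everything already correct (the common case)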
Get the highlighting code from https://github.com/neilhwatson/vim_cf3, and set it up in a location that will be loaded by default. I’m partial to making a directory under /usr/local/share and then linking the files in.
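Something along these lines works for me (assuming the repository’s usual ftdetect/ and syntax/ layout, and a Debian-ish system-wide vim runtime path; adjust to taste):

# fetch the plugin and link it into a runtime path vim loads by default
git clone https://github.com/neilhwatson/vim_cf3 /usr/local/share/vim_cf3
mkdir -p /usr/share/vim/vimfiles/ftdetect /usr/share/vim/vimfiles/syntax
ln -s /usr/local/share/vim_cf3/ftdetect/cf3.vim /usr/share/vim/vimfiles/ftdetect/
ln -s /usr/local/share/vim_cf3/syntax/cf3.vim /usr/share/vim/vimfiles/syntax/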
Ok, I found something I dislike about my Galaxy S3. Apparently, in order to trust a third-party (in this case, my own) SSL signing certificate, I need to change my authentication mechanism. If I have any non-default signing authorities installed on my phone, the options for face unlock and voice unlock are disabled; you can only install such certificates if you have a PIN code or passphrase lock. Further, you can’t change the auth mechanism back to a “less secure” option until after you’ve removed those signing certificates.
I guess that I have to choose between trusting my own signing authority, and using a convenient authentication mechanism to get in to my phone.
If anyone happens to know of a workaround that lets me use face unlock *and* trust a couple of SSL certificate authorities, I’d sure appreciate it. I’m willing to accept the risk of someone taking my phone, unlocking it with a picture of me, and installing an additional certificate signing authority. :/
After upgrading my backup server from the previous LTS release (Lucid) to the new one, the config which backs up /etc on localhost started failing, because pings to localhost were failing. This is no good – localhost should be pingable. :) Ultimately, this is because IPv6 is enabled by default now. I don’t use IPv6 on my internal network, mostly because it’s new and scary and I don’t like change. Or because I just don’t need it. So, here’s how to disable IPv6 on your Ubuntu 12.04 / Precise box:
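The short version: turn it off with sysctl and make it stick across reboots (these are the stock kernel knobs; nothing here is Ubuntu-specific beyond the file location):

# append to /etc/sysctl.conf (or drop a file under /etc/sysctl.d/)
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1

# then apply without rebooting
sudo sysctl -p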
So, since I have to rebuild my mp3 library anyway, I thought I’d do a comparison of storing my mp3s on reiserfs versus xfs. I already know that reiserfs is horrible for recovery, but hopefully I won’t need that. Reiser is supposed to be good for this kind of storage because of the tail-packing thing, though.
I’ve recovered about 18GB of songs now; the biggest file is about 50MB and the average is about 8MB. Somewhat surprisingly, the xfs filesystem (/mnt/a) is actually using less space to store the identical directory structure (artist/album/mp3).
sauer@humpy:~$ for D in /srv/nfs4/music /mnt/a; do find $D | wc -l; done
1589
1589
sauer@humpy:~$ du -ks /srv/nfs4/music /mnt/a
12380231   /srv/nfs4/music
12369836   /mnt/a
sauer@humpy:~$ df -k /srv/nfs4/music /mnt/a
Filesystem                 1K-blocks     Used Available Use% Mounted on
/dev/mapper/idevol-music    52427196 12414684  40012512  24% /srv/nfs4/music
/dev/mapper/idevol-music2   52403200 12403660  39999540  24% /mnt/a
sauer@humpy:~$ sed -n '/music/p' /proc/mounts
/dev/mapper/idevol-music /srv/nfs4/music reiserfs rw,noatime 0 0
/dev/mapper/idevol-music2 /mnt/a xfs rw,relatime,attr2,delaylog,logbsize=64k,sunit=128,swidth=384,noquota 0 0
So, since xfs also recovers faster and is more actively maintained, I’m switching to xfs.
So, I’ve got a handful of Ubuntu machines. I also have a bigger handful of DVDs. I’d like to convert the DVDs to easier-to-store videos which can be easily accessed by MythTV, XBMC, my mobile devices, and whatever else. The best broadly-supported format for that is h264-encoded mp4 files. And DVD::Rip does a nice job of letting me use all 20 or so CPUs I have lying around, rather than limiting me to just one workstation.
Unfortunately, DVD::Rip uses transcode, which uses ffmpeg to do the encoding. And Ubuntu’s ffmpeg, for whatever reason, lacks h264 support. There’s a guide to rebuilding it which has you pull down the latest source for all the utilities from CVS and make new packages which don’t work right and are a pain to maintain. I, on the other hand, want to just take the Ubuntu package and add one compile-time option, so it’ll still work like the vendor-provided package. After all, all I need to do is build the exact same thing with the "--enable-libx264" option. Here’s how.
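The general shape of it is below; the package and file names are from memory and may differ slightly between releases, so treat this as a sketch rather than a recipe:

# grab the build dependencies plus the Ubuntu source package
sudo apt-get build-dep ffmpeg
sudo apt-get install libx264-dev
apt-get source ffmpeg
cd ffmpeg-*

# add --enable-libx264 to the configure flags in debian/rules
# (older packages keep them in debian/confflags), then rebuild
dpkg-buildpackage -b -us -uc

# install the rebuilt packages from the parent directory
sudo dpkg -i ../*.deb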
So, every time I set up a new Windows system to be backed up with BackupPC, I forget what I need to change. Thus, a blog entry.
- Right-click on a folder somewhere in order to share it. Probably the C drive. There’ll be something indicating that file sharing is disabled. Click through the network wizard thingie to enable file sharing.
- Create a backup user. I prefer to call the user backuppc. Go to Administrative Tools and “Local Users and Groups” to create the user, and put the user in the Backup Operators group. Set a password, and set the password to never expire + can’t be changed. Use the same username and password in the machine-specific config inside BackupPC.
- In Administrative Tools, go to the Local Security Policy, and under User Rights Assignment, remove Backup Operators from the “Log On Locally” set (no reason for our remote backup operator to be on the log in screen). Also under Security Options, set “Network access: Sharing and security model for local accounts” to “Classic - local users authenticate as themselves”. The default is to access the machine as a guest after authentication, which is crazy to me (and breaks the ability for backup users to access all files in the C$ share).
- Other minor things – validate that the firewall is set to allow file sharing services in.
Finally, from the BackupPC server, check that the new account can actually get to the administrative share:

smbclient -U backuppc \\\\yourwindowsmachine\\C\$
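Once that works, I like to kick off a manual full from the BackupPC server to prove the whole path end to end (the script location varies by distro; this is where the Ubuntu/Debian package puts it):

sudo -u backuppc /usr/share/backuppc/bin/BackupPC_dump -v -f yourwindowsmachine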