Google Play Edition Android for HTC One (M8) for Verizon

Update October 8, 2015: Android 5.1 (“Lollipop”) OTAs

HTC One (M8) Running Google Play Edition Android

My old Galaxy Nexus died so I had to get a new phone. I would have loved to have replaced it with a new Nexus 5, but it’s a work phone, and work is paying for it, and work has a contract with Verizon Wireless. So my options for a phone with a relatively stock version of Android were pretty limited—really just the Moto X. But I’ve had enough of the power-hungry AMOLED display of the Galaxy Nexus, and didn’t want to deal with that again in a Moto X, so I picked the HTC One (M8), knowing people on xda-developers have already ported the stock Google Play Edition (GPE) build of Android to it.

When I got the phone, the first thing I did was install the so-called “DigitalHigh” GPE build from xda-developers, but what I found was anything other than a stock Android experience. There were so many tweaks, options, customizations, and glaring security issues (chmod 755 everything!) that it just put me off. Really, the whole xda-developers community puts me off. And since this is a blog, let me go on a little rant:

xda-developers, probably the largest Android modification community, is a place full of advertisements and would-be hackers who call themselves “developers” just because they can compile the Linux kernel and put it up on a slow, ad-ridden file host. No one uses their real names, no one hosts their own files, no one releases source code for their work, and no one documents what they do. And that’s to say nothing of the users, who bring the community down in other ways, but for whom I do feel some sympathy, because I know I would hate to be stuck with the bloatware and skins the carriers and manufacturers collude to lock onto Android phones.

Clearly I don’t think xda-developers is a very pleasant place. The problem is some people on there actually do really good, really interesting work. So it’s an inescapable, conflicting, sometimes great, but usually frustrating source of information for Android.

That frustration led me to port the Google Play Edition build of Android to my new HTC One (M8) for Verizon myself, hopefully demonstrating the way I think Android modification should be done in the process. Some points:

  1. All of the modification is done in an automated way. I chose my favorite automation tool, Puppet, for the job, but shell scripts or Makefiles would work just the same. The point is to download, modify, and build everything required in a hands-off manner. Automation has the added benefit of doubling as a sort of documentation.
  2. All of the automation code is publicly available and version controlled.
  3. All of the code is committed with my real name, James Lee.
  4. Everything is hosted by me without ads, or is otherwise freely accessible—no file hosts.
  5. All modification is done with a light hand, only changing what absolutely must be changed. (I do make a concession to enable root access and the flashlight, but even that is done in a clean and transparent way that can be trivially disabled.)

I’m not going to pretend that this is novel, or innovative, or that it took some huge effort—it’s just modifying some configuration files. If you want to give credit somewhere, look at CyanogenMod. They’re doing Android right. They build from source and have a working version for the M8. Sadly, they’ll always be playing catch-up to Google. Still, I have a lot of respect for that team of (real) developers, and I based a number of modifications to the GPE build on their code, so thank you!

Sorry this post was more ranty than usual, but as you can see, I have strong opinions on this subject. If you made it this far, here is the result of my automation:

m8_gpe-4.4.4-KTU84P.H1-r1.zip
SHA256: 6a907e0047ee20038d4ee2bcb29d980c83837fdd63ea4dd52e89f5695a5c7c14

I leave this file here as a convenience to those who know exactly what to do with it and who are capable of using and understanding my automation tools, but simply don’t want to. If that is not you, then you probably shouldn’t be modifying your phone.

I’m looking forward to seeing how well this works when Android 5.0 drops.

UPDATE #1

Android 5.0 (“Lollipop”) has arrived and with a few small tweaks to my automation tools, I am pleased to provide a flashable image for the Verizon M8:

m8_gpe-5.0.1-LRX22C.H5-r1.zip
SHA256: c6cdb3b5dae7c2645ac9e6c7ebbc5720c9f035afde8fa4d7407247faf265b4a5

Compared to the 4.4.4 release, this build deviates even less from the official upstream image. In fact, the only modifications to what Google and HTC distribute are:

  • Add the Verizon device ID to the Device Tree image for booting.
  • Enable CDMA with two line changes in build.prop (see the illustration after this list).
  • Set an override flag on boot to allow screen casting to work—a feature that is more prominent in Android 5.0.
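
For illustration only (the authoritative changes live in my automation code, and the exact properties used there may differ), enabling CDMA in build.prop typically amounts to a pair of properties like:

ro.telephony.default_network=10
telephony.lteOnCdmaDevice=1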

Compare that to the “DigitalHigh” release on XDA which disables important security mechanisms (SELinux, ADB security, file permissions), changes a whole lot of things that don’t need to be—and shouldn’t be—changed (like the I/O scheduler, data roaming, animation speeds, and various WiFi settings), and continues to deviate further as time passes, with inclusions like the HTC Sense camera. At this point, “DigitalHigh” can hardly be called the Google Play Edition, and many of the changes are downright harmful. I would strongly urge you not to use it.

With my automation tools, you can see exactly what modifications are required for Verizon support, and you can run them yourself so you know you’re getting a build that is as close as possible to the way Google intended it to be.

UPDATE #2

Android 5.1 was released for the GPE M8 a couple of days ago, and I already have a build of it ready for Verizon M8 devices.

m8_gpe-5.1-LMY47O.H4-r1.zip
SHA256: dbd8b541b812f36282b1f7af08b97aaf85f816288084a2635edc02f520e2a2ed

Judging by the state of the Verizon M8 XDA forum, I believe I’m the first to have 5.1 on a Verizon M8. Another score for automation!

UPDATE #3

As some of you have noticed, some new OTAs have been pushed out for the Google Play Edition HTC M8. I’ve been on vacation, so I didn’t have a chance to look into them until yesterday, but I’ve now been able to tweak my automation code to get these updates working on the Verizon M8. The nice thing about these changes is that I can now use the publicly available incremental updates directly from Google rather than having to rely on the community to produce dumps of their updated devices. Anyway, here are the updates, to be applied in succession on top of the LMY47O.H4 build from above:

m8_gpe-5.1-LMY47O.H4-to-LMY47O.H5-r1.zip
SHA256: cffdd511a0a06c5a9c8aad90803bd283f6be3a9e44fc5bd8df4d851a0a89a7c9

m8_gpe-5.1-LMY47O.H5-to-LMY47O.H6-r1.zip
SHA256: cf03655ca37dd969777095261d6cf761c76d620be812daf191e8871ed3d548c3

m8_gpe-5.1-LMY47O.H6-to-LMY47O.H9-r1.zip
SHA256: ed350a6c5cba9efa7d22ee10dfd04cf953e87820acc4d47999c4d0809f1fc905

m8_gpe-5.1-LMY47O.H9-to-LMY47O.H10-r1.zip
SHA256: 30a8aec044a3ceda77c7f7a78b0679454acefa1ccaa2b56baa9c9038bfc341a8

Again, these files are provided as a convenience to those who know what they’re doing.

Video Review: BlackVue DR650GW-2CH Dashcam

I get a lot out of video reviews. YouTube is often one of my first stops when I’m shopping around, even for cheap things. So I thought I’d try my hand at it for a product I recently bought, a car dashcam.

I’m reasonably happy with the way it turned out, but man was it hard work. Coming up with 12 minutes of things to say was easy, but having to pair it with 12 minutes of video is insane.

I am thankful to UMD for providing not only the full Adobe Creative Cloud for free, but also free access to Lynda.com, with great tutorials on how to use it.

Adventures in HPC: RDMA and Erlang

I recently attended the SC13 conference, where one of my goals was to learn about InfiniBand. I attended a full-day tutorial session on the subject, which did a good job of introducing most of the concepts but didn’t delve as deeply as I had hoped. That’s not really the fault of the class; InfiniBand, and the larger subject of remote direct memory access (RDMA), is incredibly complex. I wanted to learn more.

Now, I’ve been an Erlang enthusiast for a few years, and I’ve always wondered why it doesn’t have a larger following in the HPC community. I’ll grant you that Erlang doesn’t have the best reputation for performance, but in terms of concurrency, distribution, and fault tolerance, it is unmatched. And areas where performance is critical can be offloaded to other languages or, better yet, to GPGPUs and MICs with OpenCL.

But compared to its competition, there are areas where Erlang is lacking, for example, in distributed message passing, where it still uses TCP/IP. So in an effort to learn more about RDMA and in hopes of making Erlang a little more attractive to the HPC community, I set out to write an RDMA distribution driver for Erlang.

RDMA is a surprisingly tough nut to crack for its maturity. Documentation is scarce. Examples are even more so. Compared to TCP/IP, there is a lot more micro-management: you have to set up the connection; you have to decide how to allocate memory, queues, and buffers; you have to control how to send and receive; and you have to do your own flow control, among other complications. But for all that, you get the possibility of moving data between systems without invoking the kernel, and that promises significant performance gains over TCP/IP.

In addition, it would almost seem like RDMA was made for Erlang. RDMA is highly asynchronous and event-driven, which is a nearly perfect match for Erlang’s asynchronous message passing model. Once I got my head around some Erlang port driver idiosyncrasies, things sort of fell into place, and here is the result:

RDMA Ping Pong

pong. I’ve never been happier to see such a silly word.

Of course, the driver works for more than just pinging. It works for all distributed Erlang messages. In theory, you can drop it into any Erlang application and it should just work.
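
To give a sense of what “dropping it in” means: Erlang selects its distribution carrier at startup with the -proto_dist flag, so, assuming the driver provides a distribution module named rdma_dist on the code path (that module name is an assumption on my part), starting a node with it would look something like:

% erl -pa ebin -proto_dist rdma -sname node1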

The question is: how well does it work? Is it any better than the default TCP/IP distribution driver? For that, I devised a simple benchmark.

RDMA Benchmark Diagram

For each node in a given set, the program spawns a hundred processes that sit in a tight loop performing RPCs. The number of RPCs is counted and can be compared across the different network implementations.

The program was tested on four nodes of a cluster, each with:

  • 2 x Intel Xeon X5560 Quad Core @ 2.80 GHz
  • 48 GB memory
  • Mellanox ConnectX QDR PCI Gen2 Channel Adapter
  • Red Hat Enterprise Linux 5.9 64-bit
  • Erlang/OTP R16B03
  • Elixir 0.12.0
  • OFED 1.5.4

The results are summarized as follows:

RDMA Benchmark

The RDMA implementation offers around a 50% increase in messaging performance over the default TCP/IP driver in this test. I believe this is primarily explained by the reduction in context switching. Where the TCP implementation has to issue a system call for every send and receive operation, requiring a context switch to the kernel, the RDMA implementation only calls into the kernel to be notified of incoming packets. And if packets are coming in fast enough, as they are in this test, then the driver can process many packets per context switch. The RDMA driver stays completely in user-space for send operations.

You may be wondering why the TCP driver performed about the same over the Ethernet and InfiniBand interfaces. These RPC operations involve very small messages, on the order of tens of bytes being passed back and forth, so this test really highlights the overhead of the network stacks, which is what I intended. I would imagine increasing the message size would make the InfiniBand interfaces take off, but I’ll leave that for a future test. Indeed, there are many more benchmarks I should perform.

Also, for now I’m avoiding the obvious comparison between Erlang and MPI. MPI libraries tend to have very mature, sophisticated RDMA implementations that I know I can’t compete against yet. I’d rather focus on improving the driver. I’ve started a to-do list. Feel free to pitch in and send me some pull requests on GitHub!

One last thing: Thank you The Geek in the Corner for your basic RDMA examples, and thank you Erlang/OTP community and Ericsson for your awesome documentation. As for my goal of wanting to learn about InfiniBand, I’d say goal accomplished.

How I Do Encrypted, Mirrored ZFS Root on Linux

Update: No more keyfiles!

I am done with Solaris. A quick look through this blog should be enough to see how much I like Solaris. So when I say “I’m done,” I want to be perfectly clear as to why: Oracle. As long as Oracle continues to keep Solaris’ development and code under wraps, I cannot feel comfortable using it or advocating for it, and that includes at work, where we are paying customers. I stuck with it up until now, waiting for a better alternative to come about, and now that ZFS is stable on Linux, I’m out.

I’ve returned to my first love, Gentoo. Well, more specifically, I’ve landed in Funtoo, a variant of Gentoo. I learned almost everything I know about Linux on Gentoo, and being back in its ecosystem feels like coming back home. Funtoo, in particular, addresses a lot of the annoyances that made me leave Gentoo in the first place by offering more stable packages of core software like the kernel and GCC. Its Git-based Portage tree is also a very nice addition. But it was Funtoo’s ZFS documentation and community that really got my attention.

The Funtoo ZFS install guide is a nice starting point, but my requirements were a bit beyond the scope of the document. I wanted:

  • redundancy handled by ZFS (that is, not by another layer like md),
  • encryption using a passphrase, not a keyfile,
  • and to be prompted for the passphrase once, not for each encrypted device.

My solution is depicted below:

ZFS Root Diagram

A small block device is encrypted using a passphrase. The randomly initialized contents of that device are then in turn used as a keyfile for unlocking the devices that make up the mirrored ZFS rpool. Not pictured is Dracut, which builds the initramfs that takes care of assembling the md RAID devices, unlocking the encrypted devices, and mounting the ZFS root at boot time.

Here is a rough guide for doing it yourself:

  1. Partition the disks.
    Without going in to all the commands, use gdisk to make your first disk look something like this:

    # gdisk -l /dev/sda
    ...
    Number  Start (sector)    End (sector)  Size       Code  Name
       1            2048         1026047   500.0 MiB   FD00  Linux RAID
       2         1026048         1091583   32.0 MiB    EF02  BIOS boot partition
       3         1091584         1099775   4.0 MiB     FD00  Linux RAID
       4         1099776       781422734   372.1 GiB   8300  Linux filesystem
    

    Then copy the partition table to the second disk:

    # sgdisk --backup=/tmp/table /dev/sda
    # sgdisk --load-backup=/tmp/table /dev/sdb
    # sgdisk --randomize-guids /dev/sdb
    

    If your system uses EFI rather than BIOS, you won’t need a BIOS boot partition, so adjust your partition numbers accordingly.

  2. Create the md RAID devices for /boot and the keyfile.
    # mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
    # mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
    
  3. Set up the crypt devices.
    First, the keyfile:

    # cryptsetup -c aes-xts-plain64 luksFormat /dev/md1
    # cryptsetup luksOpen /dev/md1 keyfile
    # dd if=/dev/urandom of=/dev/mapper/keyfile
    

    Then, the ZFS vdevs:

    # cryptsetup -c aes-xts-plain64 luksFormat /dev/sda4 /dev/mapper/keyfile
    # cryptsetup -c aes-xts-plain64 luksFormat /dev/sdb4 /dev/mapper/keyfile
    # cryptsetup -d /dev/mapper/keyfile luksOpen /dev/sda4 rpool-crypt0
    # cryptsetup -d /dev/mapper/keyfile luksOpen /dev/sdb4 rpool-crypt1
    
  4. Format and mount everything up.
    # zpool create -O compression=on -m none -R /mnt/funtoo rpool mirror rpool-crypt0 rpool-crypt1
    # zfs create rpool/ROOT
    # zfs create -o mountpoint=/ rpool/ROOT/funtoo
    # zpool set bootfs=rpool/ROOT/funtoo rpool
    # zfs create -o mountpoint=/home rpool/home
    # zfs create -o volblocksize=4K -V 2G rpool/swap
    # mkswap -f /dev/zvol/rpool/swap
    # mkfs.ext2 /dev/md0
    # mkdir /mnt/funtoo/boot && mount /dev/md0 /mnt/funtoo/boot
    

    Now you can chroot and install Funtoo as you normally would.

When it comes time to finish the installation and set up the Dracut initramfs, there are a number of things that need to be in place. First, the ZFS package must be installed with the Dracut module. The current ebuild strips it out for some reason; I have a bug report open to fix that.

Second, /etc/mdadm.conf must be populated so that Dracut knows how to reassemble the md RAID devices. That can be done with the command mdadm --detail --scan > /etc/mdadm.conf.

Third, /etc/crypttab must be created so that Dracut knows how to unlock the encrypted devices:

keyfile /dev/md1 none luks
rpool-crypt0 /dev/sda4 /dev/mapper/keyfile luks
rpool-crypt1 /dev/sdb4 /dev/mapper/keyfile luks

Finally, you must tell Dracut about the encrypted devices required for boot. Create a file, /etc/dracut.conf.d/devices.conf containing:

add_device="/dev/md1 /dev/sda4 /dev/sdb4"

Once all that is done, you can build the initramfs using the command dracut --hostonly. To tell Dracut to use the ZFS root, add the kernel boot parameter root=zfs. The actual filesystem it chooses to mount is determined from the zpool’s bootfs property, which was set above.
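
Where exactly root=zfs goes depends on your boot loader; treat the following as a sketch rather than a drop-in configuration. If you manage GRUB 2 with grub-mkconfig (Funtoo’s boot-update tool has an equivalent setting in /etc/boot.conf), add the parameter to /etc/default/grub:

GRUB_CMDLINE_LINUX="root=zfs"

and regenerate the configuration:

# grub-mkconfig -o /boot/grub/grub.cfg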

And that’s it!

Now, I go a little further by creating a set of Puppet modules to do the whole thing for me. Actually, I can practically install a whole system from scratch with one command thanks to Puppet.

I also have a script that runs after boot to close the keyfile device. You’ve got to protect that thing.

# cat /etc/local.d/keyfile.start
#!/bin/sh
/sbin/cryptsetup luksClose keyfile

I think one criticism that could be leveled against this setup is that all the data on the ZFS pool gets encrypted and decrypted twice. That is because the redundancy comes in at a layer higher than the crypt layer. A way around that would be to set it up the other way around: encrypt an md RAID device and build the ZFS pool on top of that. Unfortunately, that comes at the cost of ZFS’s self-healing capabilities. Until encryption support comes to ZFS directly, that’s the trade-off we have to make. In practice, though, the double encryption of this setup doesn’t make a noticeable performance impact.

UPDATE

I should mention that I’ve learned Dracut is much smarter than I would have guessed: it lets you enter a passphrase once and tries it against all of the encrypted devices. This eliminates the need for the keyfile in my case, so I’ve updated all of my systems to simply use the same passphrase on all of the encrypted devices. I have found it to be a simpler and more reliable setup.

Customizing the OpenStack Keystone Authentication Backend

OpenStack Login

For those of you unfamiliar with OpenStack, it is a collection of many independent pieces of cloud software, and they all tie into Keystone for user authentication and authorization. Keystone uses a MySQL database backend by default, and has some support for LDAP out-of-the-box. But what if you want to have it authenticate against some other service? Fortunately, the Keystone developers have already created a way to do that fairly easily; however, they haven’t documented it yet. Here’s how I did it:

  1. Grab the Keystone source from GitHub and checkout a stable branch:
    % git clone git://github.com/openstack/keystone.git
    % cd keystone
    % git checkout stable/grizzly
    
  2. Since we still want to use the MySQL backend for user authorization, we will extend the default identity driver, keystone.identity.backends.sql.Identity, and simply override the password checking function. Create a new file called keystone/identity/backends/custom.py containing:
    from __future__ import absolute_import
    import pam
    from . import sql

    class Identity(sql.Identity):
        def _check_password(self, password, user_ref):
            username = user_ref.get('name')
           
            if (username in ['admin', 'nova', 'swift']):
                return super(Identity, self)._check_password(password, user_ref)
           
            return pam.authenticate(username, password)

    In this snippet, we check the username and password against PAM, but that can be anything you want (Kerberos, Active Directory, LDAP, a flat file, etc.). If the username is one of the OpenStack service accounts, then the code uses the normal Keystone logic and checks it against the MySQL database.

  3. Build and install the code:
    % python setup.py build
    % sudo python setup.py install
    
  4. Configure Keystone to use the custom identity driver. In /etc/keystone/keystone.conf add or change the following section:
    [identity]
    driver = keystone.identity.backends.custom.Identity
  5. Start Keystone (keystone-all) and test, then save the changes to the Keystone source:
    % git add keystone/identity/backends/custom.py
    % git commit -m "Created custom identity driver" -a
    

And that’s it. In reality, I would probably fork the Keystone repository on GitHub and create a new branch for this work (git checkout -b customauth stable/grizzly), but that’s not really necessary. Actually, you could probably even get away with not recompiling Keystone. Just put the custom class somewhere in Keystone’s PYTHONPATH. But I’m not a Python expert, so maybe that wouldn’t work. Either way, I like having everything together, and Git makes it brainless to maintain customizations to large projects.
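
As a quick end-to-end check, assuming a matching user record already exists in Keystone’s SQL backend for your PAM account (the names below are placeholders), requesting a token with the grizzly-era CLI should now succeed using the PAM password:

% keystone --os-username jdoe --os-password 'your-pam-password' \
           --os-tenant-name demo --os-auth-url http://localhost:5000/v2.0 token-get
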

Benchmarking Duplog

In my last post I introduced Duplog, a small tool that basically forms the glue between rsyslog, RabbitMQ, Redis, and Splunk to enable a highly available, redundant syslog service that deduplicates messages along the way. In order for Duplog to serve the needs of a production enterprise, it will need to perform well. I fully expect the deduplication process to take a toll, and in order to find out how much, I devised a simple benchmark.

Duplog Benchmark Diagram

On one side, fake syslog generators pipe pairs of duplicate messages into the system as fast as they can. On the other, a process reads the messages out as fast as it can. Both sides report the rate of message ingestion and extraction.

The Details

  • 3 KVM virtual machines on the same host each with the same specs:
    • 1 x Intel Core i5 2400 @ 3.10 GHz
    • 1 GB memory
    • Virtio paravirtualized NIC
  • OS: Ubuntu Server 12.04.1 LTS 64-bit
  • RabbitMQ: 2.7.1
  • Redis: 2.2.12
  • Java: OpenJDK 6b27

The Results

(My testing wasn’t very scientific so take all of this with a grain of salt.)

Duplog Benchmark

The initial results were a little underwhelming, pushing about 750 messages per second through the system. I originally expected that hashing or the communication with the Redis server would be the major bottleneck, but each of those processes was sitting comfortably on the CPU at about 50% and 20% usage, respectively. It turned out that the RabbitMQ message brokers were the source of the slow performance.

I began trying many different settings for RabbitMQ, starting by disabling the disk-backed queue, which made little difference. In fact, the developers have basically said as much: “In the case of a persistent message in a durable queue, yes, it will also go to disk, but that’s done in an asynchronous manner and is buffered heavily.”

So then I changed the prefetch setting. Rather than fetching and acknowledging one message at a time, going back and forth over the network each time, the message consumers can buffer a configurable number of messages at a time. It is possible to calculate the optimum prefetch count, but without detailed analytics handy, I just picked a prefetch size of 100.
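
As a rough rule of thumb, a sensible prefetch count is on the order of the network round-trip time divided by the client-side processing time per message. For example, a 2 ms round trip and roughly 20 µs of hashing and bookkeeping per message works out to a prefetch of about 100; those numbers are illustrative, not measurements from my setup.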

That setting made a huge difference, as you can see in the histogram. Without spending so much time talking to RabbitMQ, consumers were free to spend more time calculating hashes, and that left the RabbitMQ message brokers more free to consume messages from rsyslog.

Another suggestion on the internet was to batch message acknowledgements. That added another modest gain in performance.

Finally, I tried enabling an unlimited prefetch count. It is clear that caching as many messages as possible does improve performance, but it comes at the cost of fairness and adaptability. Luckily, neither of those characteristics is important for this application, so I’ve left it that way, along with re-enabling queue durability, whose performance hit, I think, is a fair trade-off for message persistence. I also reconfigured acknowledgements to fire every second rather than every 50 messages. Not only does that guarantee that successfully processed messages will get acknowledged sooner or later, it spaces out ACKs even more under normal operation, which boosted the performance yet again to around 6,000 messages per second.

So is 6,000 messages per second any good? Well, if you just throw a bunch of UDP datagrams at rsyslog (installed on the same servers as above), it turns out that it can take in about 25,000 messages per second without doing any processing. It is definitely reasonable to expect that the additional overhead of queueing and hashing in Duplog will have a significant impact. It is also important to note that these numbers are sustained while RabbitMQ is being written to and read from simultaneously. If you slow down or stop message production, Duplog is able to burst closer to 10,000 messages per second. The queueing component makes the whole system fairly tolerant of sudden spikes in logging.

For another perspective, suppose each syslog message averages 128 bytes (a reasonable, if small, estimate), then 6,000 messages per second works out to 66 GB per day. For comparison, I’ve calculated that all of the enterprise Unix systems in my group at UMD produce only 3 GB per day.

So as it stands now I think that Duplog performs well enough to work in many environments. I do expect to see better numbers on better hardware. I also think that there is plenty of optimization that can still be done. And in the worst case, I see no reason why this couldn’t be trivially scaled out: divide messages among multiple RabbitMQ queues to better take advantage of SMP. This testing definitely leaves me feeling optimistic.

Overengineering Syslog: Redundancy, High Availability, Deduplication, and Splunk

I am working on a new Splunk deployment at work, and as part of that project, I have to build a centralized syslog server. The server will collect logs from all of our systems and a forwarder will pass them along to Splunk to be indexed. That alone would be easy enough, but I think that logs are too important to leave to just one syslog server. Sending copies of the log data to two destinations may allow you to sustain outages in half of the log infrastructure while still getting up-to-the-minute logs in Splunk. I think duplicating log messages at the source is a fundamental aspect of a highly available, redundant syslog service when using the traditional UDP protocol.

That said, you don’t want to have Splunk index all of that data twice. That’ll cost you in licenses. But you also don’t want to just pick a copy of the logs to index—how would you know if the copy you pick is true and complete? Maybe the other copy is more complete. Or maybe both copies are incomplete in some way (for example, if routers were dropping some of those unreliable syslog datagrams). I think the best you can do is to take both copies of the log data, merge them together somehow, remove the duplicate messages, and hope that, between the two copies, you’re getting the complete picture.

I initially rejected the idea of syslog deduplication thinking it to be too complicated and fragile, but the more I looked into it, the more possible it seemed. When I came across Beetle, a highly available, deduplicating message queue, I knew it would be doable.

Beetle Architecture

Beetle itself wouldn’t work for what I had in mind (it will deduplicate redundant messages from single sources; I want to deduplicate messages across streams from multiple sources), but I could take its component pieces and build my own system. I started hacking on some code a couple of days ago to get messages from rsyslog to RabbitMQ and then from RabbitMQ to some other process which could handle deduplication. It quickly turned into a working prototype that I’ve been calling Duplog. Duplog looks like this:

Duplog Architecture

At its core, Duplog sits and reads messages out of redundant RabbitMQ queues, hashes them, and uses two constant-time Redis operations to deduplicate them. RabbitMQ makes the whole process fairly fault tolerant and was a great discovery for me (I can imagine many potential use cases for it besides this). Redis is a very flexible key-value store that I’ve configured to act as a least-recently-used cache. I can throw hashes at it all day and let it worry about expiring them.
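
Concretely, the least-recently-used behavior comes down to two lines of Redis configuration; the memory limit here is just an example value:

maxmemory 100mb
maxmemory-policy allkeys-lru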

One important design consideration for me was the ability to maintain duplicate messages within a single stream. Imagine you have a high-traffic web server. That server may be logging many identical HTTP requests at the same time. Those duplicates are important to capture in Splunk for reporting. My deduplication algorithm maintains them.
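
To make that concrete, here is a minimal Python sketch of one counting scheme with both properties: it needs exactly two constant-time Redis operations per message, and it forwards every other copy of a given hash, so duplicates within a single stream survive while the matching copies from the second stream are suppressed. This is an illustration of the idea only, not Duplog’s actual implementation, which is written in Java and may differ in detail:

import hashlib
import redis

r = redis.Redis()

def should_forward(message, ttl=60):
    """Return True if this copy of the message should be passed along."""
    key = hashlib.sha256(message.encode("utf-8")).hexdigest()
    count = r.incr(key)   # O(1): how many copies of this message seen recently
    r.expire(key, ttl)    # O(1): let the counter age out on its own
    # With two redundant streams, each genuine message arrives roughly twice,
    # so forwarding the 1st, 3rd, 5th, ... copy keeps real within-stream
    # duplicates while dropping the redundant cross-stream copies.
    return count % 2 == 1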

Looking at the architecture again, you will see that almost everything is redundant. You can take down almost any piece and still maintain seamless operation without dealing with failovers. The one exception is Redis. While it does have some high-availability capability, it relies on failover, which I don’t like. Instead, I’ve written Duplog to degrade gracefully: when it can’t connect to Redis, it will allow duplicate messages to pass through. A few duplicate messages aren’t the end of the world.

Feel free to play around with the code, but know that it is definitely a prototype. For now I’ve been working with the RabbitMQ and Redis default settings, but there is probably a lot that should be tuned, particularly timeouts, to make the whole system more solid. I also intend to do some benchmarking of the whole stack (that’ll probably be the next post), but initial tests on very modest hardware indicate that it will handle thousands of messages per second without breaking a sweat.

Protecting Puppet with Kerberos

Puppet uses bidirectional SSL to protect its client-server communication. All of the participants in a Puppet system must have valid, signed certificates and keys to talk to one another. This prevents agents from talking to rogue masters and it prevents nodes from spoofing one another. It also allows the master and agents to establish secure communication channels to prevent eavesdropping. Puppet comes with a built-in certificate authority (CA) to make the management of all the keys, certs, and signing requests fairly easy.

But what if you already have a large, established Kerberos infrastructure? You’re probably already generating and managing keys for all of your trusted hosts. Wouldn’t it be great to leverage your existing infrastructure and established processes instead of duplicating that effort with another authentication system?

Enter kx509. kx509 is a method for generating a short-lived X.509 (SSL) certificate from a valid Kerberos ticket. Effectively, a client can submit its Kerberos ticket to a trusted Kerberized CA (KCA), which then copies the principal name into the subject field of a new X.509 certificate and signs it with its own certificate. There’s really no trickery to it: if you trust the Kerberos ticket, and you trust the KCA, then you can trust the certificate generated by it. Sounds great, except that there is virtually no documentation on kx509, and even when you do get it running, there are a couple of issues that prevent it from working with Puppet out-of-the-box.

I wanted to figure out how to get it working with the fewest possible changes. To do this, I set up my own clean Kerberos and Puppet environment in a couple of VMs (a client and a server). I am documenting the whole process here for my own benefit, but maybe it will be useful to others.

The Setup

  • 2 virtual machines:
    • server.example.com (192.168.100.2)
    • client.example.com (192.168.100.3)
  • OS: Ubuntu Server 12.04.1 LTS 64-bit
  • Kerberos: Heimdal 1.5.2
  • Puppet: 2.7.11

I also set up a DNS server containing entries for the two hosts. Kerberos and Puppet are a lot easier to work with when they can use DNS, and it will be required for the kx509 stuff which we’ll see later.

I chose Heimdal because it has a built in KCA (and that’s what we run at work).


SuperSync

I wrote in the about me blurb on this blog that I like writing little programs for myself. One of the programs I’m most proud of is called SuperSync.

Back in college when I started developing an interest in music, I got in the habit of only acquiring losslessly encoded files. FLACs mostly. It wasn’t long before my collection outgrew what I could store on my iPod. So I hacked together a little script which I called “Sync” to encode my music files to something smaller, like Ogg Vorbis. I wrote it in Java because that’s what I knew best at the time, and for the most part, it just worked. It kept a flat database of files and timestamps to know what to sync to the iPod without reencoding everything every time.

But unfortunately, as my music collection grew, there were times, like when MusicBrainz pushed a minor update for all of my files, when Sync would think that everything needed to be resynced. It got to a point where some syncs would take a week, one file at a time.

It got me thinking: I have 10 CPU cores in my house. If I could get them all working together on the problem, I could get those long syncs down to a day or two. And thus SuperSync was born.

Still written in Java, SuperSync adds a distributed client/server architecture and nice GUI over top of Sync. The program takes the same flat database, and when it sees a new or updated file in the source directory, it copies it to the destination directory. If the file is a FLAC, it broadcasts a conversion request to the network. Any server can then respond if it has a free CPU. The server reads the file from my network share and sends the encoded file back to the client where it gets written out to the destination. The whole setup relies on having a consistent global namespace for the source collection. In my case, all of my systems can access my fileserver mounted at /nest in the same way. I can’t imagine many people have such a setup, so I don’t think a formal SuperSync release would be worthwhile.

In any case, the process looks something like this in action:

At the end of the sync, the program can optionally read a song log from Rockbox and scrobble it to Last.fm.

I’m also really proud of the way SuperSync is written. I spent a lot of time up front defining clean interfaces in good object-oriented style. Feel free to check out the source code if you’re into that sort of thing. Just ask me first if you want to use any of it.

Now the times are changing, and with Subsonic allowing me to stream music to my phone, I haven’t had to sync my music as much recently. But my iPod still has its purposes, so I’m glad I have SuperSync to let me take my whole collection with me.

Retirement: Defined Benefit or Defined Contribution?

Specifically, should employees of University System of Maryland institutions participate in the State Retirement and Pension System (SRPS) or the Optional Retirement Program (ORP)? That was one of the questions I had to answer for myself as I prepare to start a new job at the University of Maryland. The SRPS is a defined-benefit pension plan and the ORP is a defined-contribution 401(a)-like plan. The default is the SRPS, and it seems like they do everything they can to steer you to it (I suspect because your contributions are how the State funds current retirees), but is it a better deal?

First, some details: the SRPS requires employees to contribute 7% of their salary to the plan. In return, after 10 years of service, you can retire at age 65 and receive a monthly allowance following this formula:

monthly allowance = (0.015 × salary × years of service) / 12

By contrast, the ORP is simple: the University will contribute a flat 7.25% of your salary to your choice of Fidelity or TIAA-CREF, and you can invest it however you want. The money is immediately vested. Additionally, you can take the 7% that you would have had to contribute to the SRPS and invest it on your own in a supplemental retirement plan or IRA. That’s a total of 14.25% of your salary going towards your retirement every year…comfortably within the 10-15% that experts recommend.

For me, considering it doesn’t vest until 10 years of service, the SRPS was right out. But as an experiment, I wanted to know which would be the better option if I worked for Maryland for 10 years. (The SRPS does allow you to withdraw your contributions compounded annually at 5% interest if you terminate employment before 10 years, but then you wouldn’t get the benefit of the State’s contribution to the plan, and you can almost certainly do better than 5% annually in the long run by investing in a mix of stocks and bonds.)

I plugged all of my numbers into the SRPS formula and calculated an estimated withdrawal rate for the ORP, supposing a realistic inflation-adjusted growth rate. The results were clear: the ORP could provide me with about twice as much money during retirement. With results like those, I was curious whether the SRPS would be a good deal for anyone, and under what circumstances it would be.

So I whipped up a little program to do the calculations for me. It has sliders for each of the input variables so the results can easily be compared for a wide variety of circumstances. The program works with your current salary and inflation-adjusted rates of return to give you a picture of what sort of spending power in today’s dollars you would have during retirement.
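
For the curious, the heart of the comparison is simple enough to sketch in a few lines of Python. This is a deliberately stripped-down model, not the program itself: it works entirely in today’s dollars, assumes your salary merely keeps pace with inflation, and, to stay conservative, ignores investment returns during retirement. The actual program lets you vary the inputs with sliders and handles the details more carefully.

def srps_monthly(salary, years_of_service):
    # Defined benefit: 1.5% of salary per year of service, paid monthly.
    return 0.015 * salary * years_of_service / 12

def orp_monthly(salary, years_worked, years_until_retirement,
                retirement_years, real_return=0.05, contribution_rate=0.1425):
    # Defined contribution: 14.25% of salary (7.25% employer + your own 7%)
    # invested each year you work, then left to compound until retirement.
    balance = 0.0
    for year in range(years_until_retirement):
        if year < years_worked:
            balance += salary * contribution_rate
        balance *= 1 + real_return
    # Conservative drawdown: spread the balance evenly over retirement.
    return balance / (retirement_years * 12)

# The example from the text: $80,000 salary, 10 years of service,
# 5% real return, 30 years until retirement, 30 years in retirement.
print(srps_monthly(80000, 10))        # pension, per month
print(orp_monthly(80000, 10, 30, 30)) # ORP, per month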

Consider a 35-year-old who makes $80,000 per year. If they expect to work for Maryland for 10 years and think they can earn around 5% in the market after adjusting for inflation (that’s a nominal annual return of about 8% if you assume an average of 3% inflation), and intend to retire at 65 and expect to need income for 30 years in retirement, the ORP just barely comes out on top:

Indeed, that seems to be the turning point. Any older and you won’t have enough time to let those returns compound, and if you are any more risk-averse, then you won’t be able to generate the returns needed to outpace the pension system. In those cases, the SRPS would be a better choice for you, but only if you are in it for the long haul. You’d be wasting valuable investing time if you join the SRPS and leave before your contributions vest. Otherwise, read a book or two on investing, and do it yourself with the ORP.

But don’t take my word for it; do the math or put your numbers into my program and see what the better choice would be for you:

Run the Maryland Retirement Comparison Tool
Requires Java 5 or higher and Windows, MacOS X, or Linux

Of course, I make no guarantees that my program is accurate, but you can get the source code and check it out for yourself. Also consider investment risks and other factors such as plan benefits carefully.