Google Play Edition Android for HTC One (M8) for Verizon

Update October 8, 2015: Android 5.1 (“Lollipop”) OTAs

HTC One (M8) Running Google Play Edition Android

My old Galaxy Nexus died so I had to get a new phone. I would have loved to replace it with a new Nexus 5, but it’s a work phone, and work is paying for it, and work has a contract with Verizon Wireless. So my options for a phone with a relatively stock version of Android were pretty limited—really just the Moto X. But I’ve had enough of the power-hungry AMOLED display of the Galaxy Nexus, and didn’t want to deal with that again in a Moto X, so I picked the HTC One (M8), knowing people on xda-developers had already ported the stock Google Play Edition (GPE) build of Android to it.

When I got the phone, the first thing I did was install the so-called “DigitalHigh” GPE build from xda-developers, but what I found was anything but a stock Android experience. There were so many tweaks, options, customizations, and glaring security issues (chmod 755 everything!) that it just put me off. Really, the whole xda-developers community puts me off. And since this is a blog, let me go on a little rant:

xda-developers, probably the largest Android modification community, is a place full of advertisements and would-be hackers who call themselves “developers” just because they can compile the Linux kernel and put it up on a slow, ad-ridden file host. No one uses their real names, no one hosts their own files, no one releases source code for their work, and no one documents what they do. And that’s to say nothing of the users, who bring the community down in other ways, though I do feel for them, because I know I would hate to be stuck with the bloatware and skins the carriers and manufacturers collude to lock onto Android phones.

Clearly I don’t think xda-developers is a very pleasant place. The problem is some people on there actually do really good, really interesting work. So it’s an inescapable, conflicting, sometimes great, but usually frustrating source of information for Android.

That frustration led me to port the Google Play Edition build of Android to my new HTC One (M8) for Verizon myself, hopefully demonstrating the way I think Android modification should be done in the process. Some points:

  1. All of the modification is done in an automated way. I chose my favorite automation tool, Puppet, for the job, but shell scripts or Makefiles would work just the same. The point is to download, modify, and build everything required in a hands-off manner. Automation has the added benefit of doubling as a sort of documentation.
  2. All of the automation code is publicly available and version controlled.
  3. All of the code is committed with my real name, James Lee.
  4. Everything is hosted by me without ads, or is otherwise freely accessible—no file hosts.
  5. All modification is done with a light hand, only changing what absolutely must be changed. (Though I do make a concession to enable root access and the flashlight, but even that is done in a clean and transparent way that can be trivially disabled.)

I’m not going to pretend that this is novel, or innovative, or that it took some huge effort—it’s just modifying some configuration files. If you want to give credit somewhere, look at CyanogenMod. They’re doing Android right. They build from source and have a working version for the M8. Sadly, they’ll always be playing catch-up to Google. Still, I have a lot of respect for that team of (real) developers, and I based a number of modifications to the GPE build on their code, so thank you!

Sorry this post was more ranty than usual, but as you can see, I have strong opinions on this subject. If you made it this far, here is the result of my automation:
SHA256: 6a907e0047ee20038d4ee2bcb29d980c83837fdd63ea4dd52e89f5695a5c7c14

I leave this file here as a convenience to those who know exactly what to do with it and who are capable of using and understanding my automation tools, but simply don’t want to. If that is not you, then you probably shouldn’t be modifying your phone.

I’m looking forward to seeing how well this works when Android 5.0 drops.


Android 5.0 (“Lollipop”) has arrived and with a few small tweaks to my automation tools, I am pleased to provide a flashable image for the Verizon M8:
SHA256: c6cdb3b5dae7c2645ac9e6c7ebbc5720c9f035afde8fa4d7407247faf265b4a5

Compared to the 4.4.4 release, this build deviates even less from the official upstream image. In fact, the only modifications to what Google and HTC distribute are:

  • Add the Verizon device ID to the Device Tree image for booting.
  • Enable CDMA with two line changes in build.prop.
  • Set an override flag on boot to allow screen casting to work—a feature that is more prominent in Android 5.0.

Compare that to the “DigitalHigh” release on XDA which disables important security mechanisms (SELinux, ADB security, file permissions), changes a whole lot of things that don’t need to be—and shouldn’t be—changed (like the I/O scheduler, data roaming, animation speeds, and various WiFi settings), and continues to deviate further as time passes, with inclusions like the HTC Sense camera. At this point, “DigitalHigh” can hardly be called the Google Play Edition, and many of the changes are downright harmful. I would strongly urge you not to use it.

With my automation tools, you can see exactly what modifications are required for Verizon support, and you can run it yourself so you know you’re getting a build that is as close as possible to the way Google intended it to be.


Android 5.1 was released for the GPE M8 a couple of days ago, and I already have a build of it ready for Verizon M8 devices.
SHA256: dbd8b541b812f36282b1f7af08b97aaf85f816288084a2635edc02f520e2a2ed

Judging by the state of the Verizon M8 XDA forum, I believe I’m the first to have 5.1 on a Verizon M8. Another score for automation!


As some of you have noticed, some new OTAs have been pushed out for the Google Play Edition HTC M8. I’ve been on vacation, so I didn’t have a chance to look into them until yesterday, but I’ve now been able to tweak my automation code to get these updates working on the Verizon M8. The nice thing about these changes is that I can now use the publicly available incremental updates directly from Google rather than having to rely on the community to produce dumps of their updated devices. Anyway, here are the updates, to be applied in succession on top of the LMY47O.H4 build from above:
SHA256: cffdd511a0a06c5a9c8aad90803bd283f6be3a9e44fc5bd8df4d851a0a89a7c9
SHA256: cf03655ca37dd969777095261d6cf761c76d620be812daf191e8871ed3d548c3
SHA256: ed350a6c5cba9efa7d22ee10dfd04cf953e87820acc4d47999c4d0809f1fc905
SHA256: 30a8aec044a3ceda77c7f7a78b0679454acefa1ccaa2b56baa9c9038bfc341a8

Again, these files are provided as a convenience to those who know what they’re doing.

Adventures in HPC: RDMA and Erlang

I recently attended the SC13 conference, where one of my goals was to learn about InfiniBand. I attended a full-day tutorial session on the subject, which did a good job of introducing most of the concepts, but didn’t delve as deeply as I had hoped. That’s not really the fault of the class; InfiniBand, and the larger subject of remote direct memory access (RDMA), is incredibly complex. I wanted to learn more.

Now, I’ve been an Erlang enthusiast for a few years, and I’ve always wondered why it doesn’t have a larger following in the HPC community. I’ll grant you that Erlang doesn’t have the best reputation for performance, but in terms of concurrency, distribution, and fault tolerance, it is unmatched. And areas where performance is critical can be offloaded to other languages or, better yet, to GPGPUs and MICs with OpenCL.

But compared to its competition, there are areas where Erlang is lacking, for example, in distributed message passing, where it still uses TCP/IP. So in an effort to learn more about RDMA and in hopes of making Erlang a little more attractive to the HPC community, I set out to write an RDMA distribution driver for Erlang.

RDMA is a surprisingly tough nut to crack for its maturity. Documentation is scarce. Examples are even more so. Compared to TCP/IP, there is a lot more micro-management: you have to set up the connection; you have to decide how to allocate memory, queues, and buffers; you have to control how to send and receive; and you have to do your own flow control, among other complications. But for all that, you get the possibility of moving data between systems without invoking the kernel, and that promises significant performance gains over TCP/IP.

In addition, it would almost seem like RDMA was made for Erlang. RDMA is highly asynchronous and event-driven, which is a nearly perfect match for Erlang’s asynchronous message passing model. Once I got my head around some Erlang port driver idiosyncrasies, things sort of fell into place, and here is the result:

RDMA Ping Pong

pong. I’ve never been happier to see such a silly word.

Of course, the driver works for more than just pinging. It works for all distributed Erlang messages. In theory, you can drop it into any Erlang application and it should just work.

The question is: how well does it work? Is it any better than the default TCP/IP distribution driver? For that, I devised a simple benchmark.

RDMA Benchmark Diagram

For each node in a given set, the program spawns a hundred processes that sit in a tight loop performing RPCs. The number of RPCs is counted and can be compared across network implementations.

The program was tested on four nodes of a cluster, each with:

  • 2 x Intel Xeon X5560 Quad Core @ 2.80 GHz
  • 48 GB memory
  • Mellanox ConnectX QDR PCI Gen2 Channel Adapter
  • Red Hat Enterprise Linux 5.9 64-bit
  • Erlang/OTP R16B03
  • Elixir 0.12.0
  • OFED 1.5.4

The results are summarized as follows:

RDMA Benchmark

The RDMA implementation offers around a 50% increase in messaging performance over the default TCP/IP driver in this test. I believe this is primarily explained by the reduction in context switching. Where the TCP implementation has to issue a system call for every send and receive operation, requiring a context switch to the kernel, the RDMA implementation only calls into the kernel to be notified of incoming packets. And if packets are coming in fast enough, as they are in this test, then the driver can process many packets per context switch. The RDMA driver stays completely in user-space for send operations.

You may be wondering why the TCP driver performed about the same over the Ethernet and InfiniBand interfaces. These RPC operations involve very small messages, on the order of tens of bytes being passed back and forth, so this test really highlights the overhead of the network stacks, which is what I intended. I would imagine increasing the message size would make the InfiniBand interfaces take off, but I’ll leave that for a future test. Indeed, there are many more benchmarks I should perform.

Also, for now I’m avoiding the obvious comparison between Erlang and MPI. MPI libraries tend to have very mature, sophisticated RDMA implementations that I know I can’t compete against yet. I’d rather focus on improving the driver. I’ve started a to-do list. Feel free to pitch in and send me some pull requests on GitHub!

One last thing: Thank you The Geek in the Corner for your basic RDMA examples, and thank you Erlang/OTP community and Ericsson for your awesome documentation. As for my goal of wanting to learn about InfiniBand, I’d say goal accomplished.

Customizing the OpenStack Keystone Authentication Backend

OpenStack Login

For those of you unfamiliar with OpenStack, it is a collection of many independent pieces of cloud software, and they all tie into Keystone for user authentication and authorization. Keystone uses a MySQL database backend by default, and has some support for LDAP out-of-the-box. But what if you want to have it authenticate against some other service? Fortunately, the Keystone developers have already created a way to do that fairly easily; however, they haven’t documented it yet. Here’s how I did it:

  1. Grab the Keystone source from GitHub and check out a stable branch:
    % git clone git://github.com/openstack/keystone.git
    % cd keystone
    % git checkout stable/grizzly
  2. Since we still want to use the MySQL backend for user authorization, we will extend the default identity driver, keystone.identity.backends.sql.Identity, and simply override the password checking function. Create a new file called keystone/identity/backends/custom.py containing:
    from __future__ import absolute_import
    import pam
    from . import sql

    class Identity(sql.Identity):
        def _check_password(self, password, user_ref):
            username = user_ref.get('name')
            if (username in ['admin', 'nova', 'swift']):
                return super(Identity, self)._check_password(password, user_ref)
            return pam.authenticate(username, password)

    In this snippet, we check the username and password against PAM, but that can be anything you want (Kerberos, Active Directory, LDAP, a flat file, etc.). If the username is one of the OpenStack service accounts, then the code uses the normal Keystone logic and checks it against the MySQL database.

  3. Build and install the code:
    % python setup.py build
    % sudo python setup.py install
  4. Configure Keystone to use the custom identity driver. In /etc/keystone/keystone.conf add or change the following section:
    [identity]
    driver = keystone.identity.backends.custom.Identity
  5. Start Keystone (keystone-all) and test, then save the changes to the Keystone source:
    % git add keystone/identity/backends/custom.py
    % git commit -m "Created custom identity driver" -a

And that’s it. In reality, I would probably fork the Keystone repository on GitHub and create a new branch for this work (git checkout -b customauth stable/grizzly), but that’s not really necessary. Actually, you could probably even get away with not recompiling Keystone. Just put the custom class somewhere in Keystone’s PYTHONPATH. But I’m not a Python expert, so maybe that wouldn’t work. Either way, I like having everything together, and Git makes it brainless to maintain customizations to large projects.

Benchmarking Duplog

In my last post I introduced Duplog, a small tool that basically forms the glue between rsyslog, RabbitMQ, Redis, and Splunk to enable a highly available, redundant syslog service that deduplicates messages along the way. In order for Duplog to serve the needs of a production enterprise, it will need to perform well. I fully expect the deduplication process to take a toll, and in order to find out how much, I devised a simple benchmark.

Duplog Benchmark Diagram

On one side, fake syslog generators pipe pairs of duplicate messages into the system as fast as they can. On the other, a process reads the messages out as fast as it can. Both sides report the rate of message ingestion and extraction.

The Details

  • 3 KVM virtual machines on the same host each with the same specs:
    • 1 x Intel Core i5 2400 @ 3.10 GHz
    • 1 GB memory
    • Virtio paravirtualized NIC
  • OS: Ubuntu Server 12.04.1 LTS 64-bit
  • RabbitMQ: 2.7.1
  • Redis: 2.2.12
  • Java: OpenJDK 6b27

The Results

(My testing wasn’t very scientific so take all of this with a grain of salt.)

Duplog Benchmark

The initial results were a little underwhelming, pushing about 750 messages per second through the system. I originally expected that hashing would be the major bottleneck, or the communication with the Redis server, but each of those processes was sitting comfortably on the CPU at about 50% and 20% usage, respectively. It turned out that the RabbitMQ message brokers were the source of the slow performance.

I began trying many different settings for RabbitMQ, starting by disabling the disk-backed queue, which made little difference. In fact, the developers have basically said as much: “In the case of a persistent message in a durable queue, yes, it will also go to disk, but that’s done in an asynchronous manner and is buffered heavily.”

So then I changed the prefetch setting. Rather than fetching and acknowledging one message at a time, going back and forth over the network each time, the message consumers can buffer a configurable number of messages at a time. It is possible to calculate the optimum prefetch count, but without detailed analytics handy, I just picked a prefetch size of 100.
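I skipped that calculation, but the usual rule of thumb is that the prefetch window should cover at least one broker round trip’s worth of messages so the consumer never sits idle. A quick Python sketch (the function and numbers are mine for illustration, not part of Duplog):

```python
import math

def optimal_prefetch(round_trip_ms, rate_msgs_per_s):
    """Rule-of-thumb prefetch count: buffer enough messages to cover
    one network round trip to the broker at the expected message rate."""
    return max(1, math.ceil(round_trip_ms * rate_msgs_per_s / 1000))

# A 5 ms round trip at 6,000 messages/second suggests a prefetch of 30,
# so a guess of 100 leaves plenty of headroom.
print(optimal_prefetch(5, 6000))  # 30
```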

That setting made a huge difference, as you can see in the histogram. Without spending so much time talking to RabbitMQ, consumers were free to spend more time calculating hashes, and that left the RabbitMQ message brokers more free to consume messages from rsyslog.

Another suggestion on the internet was to batch message acknowledgements. That added another modest gain in performance.

Finally, I tried enabling an unlimited prefetch count size. It is clear that caching as many messages as possible does improve performance, but it comes at the cost of fairness and adaptability. Luckily, neither of those characteristics is important for this application, so I’ve left it that way, along with re-enabling queue durability, whose performance hit, I think, is a fair trade-off for message persistence. I also reconfigured acknowledgements to fire every second rather than every 50 messages. Not only does that guarantee that successfully processed messages will get acknowledged sooner or later, it spaces out ACKs even more under normal operation, which boosted the performance yet again to around 6,000 messages per second.
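Duplog itself is Java, but the time-based acknowledgement batching described above reduces to a small pattern. Here is a Python sketch (the class and names are mine, not Duplog’s; in the real driver the flush would be a single multi-message RabbitMQ acknowledgement):

```python
import time

class AckBatcher:
    """Accumulate acknowledgements and flush them in batches, at least
    once per interval, so every processed message is acknowledged within
    a bounded time even when traffic slows down."""

    def __init__(self, interval=1.0, clock=time.monotonic):
        self.interval = interval
        self.clock = clock
        self.pending = 0
        self.flushed = 0
        self.last_flush = clock()

    def ack(self):
        self.pending += 1
        if self.clock() - self.last_flush >= self.interval:
            self.flush()

    def flush(self):
        # Stand-in for acknowledging all pending messages in one call.
        self.flushed += self.pending
        self.pending = 0
        self.last_flush = self.clock()
```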

So is 6,000 messages per second any good? Well, if you just throw a bunch of UDP datagrams at rsyslog (installed on the same servers as above), it turns out that it can take in about 25,000 messages per second without doing any processing. It is definitely reasonable to expect that the additional overhead of queueing and hashing in Duplog will have a significant impact. It is also important to note that these numbers are sustained while RabbitMQ is being written to and read from simultaneously. If you slow down or stop message production, Duplog is able to burst closer to 10,000 messages per second. The queueing component makes the whole system fairly tolerant of sudden spikes in logging.

For another perspective, suppose each syslog message averages 128 bytes (a reasonable, if small, estimate), then 6,000 messages per second works out to 66 GB per day. For comparison, I’ve calculated that all of the enterprise Unix systems in my group at UMD produce only 3 GB per day.

So as it stands now I think that Duplog performs well enough to work in many environments. I do expect to see better numbers on better hardware. I also think that there is plenty of optimization that can still be done. And in the worst case, I see no reason why this couldn’t be trivially scaled out: divide messages among multiple RabbitMQ queues to better take advantage of SMP. This testing definitely leaves me feeling optimistic.

Overengineering Syslog: Redundancy, High Availability, Deduplication, and Splunk

I am working on a new Splunk deployment at work, and as part of that project, I have to build a centralized syslog server. The server will collect logs from all of our systems and a forwarder will pass them along to Splunk to be indexed. That alone would be easy enough, but I think that logs are too important to leave to just one syslog server. Sending copies of the log data to two destinations may allow you to sustain outages in half of the log infrastructure while still getting up-to-the-minute logs in Splunk. I think duplicating log messages at the source is a fundamental aspect of a highly available, redundant syslog service when using the traditional UDP protocol.

That said, you don’t want to have Splunk index all of that data twice. That’ll cost you in licenses. But you also don’t want to just pick a copy of the logs to index—how would you know if the copy you pick is true and complete? Maybe the other copy is more complete. Or maybe both copies are incomplete in some way (for example, if routers were dropping some of those unreliable syslog datagrams). I think the best you can do is to take both copies of the log data, merge them together somehow, remove the duplicate messages, and hope that, between the two copies, you’re getting the complete picture.

I initially rejected the idea of syslog deduplication thinking it to be too complicated and fragile, but the more I looked into it, the more possible it seemed. When I came across Beetle, a highly available, deduplicating message queue, I knew it would be doable.

Beetle Architecture

Beetle Architecture

Beetle itself wouldn’t work for what I had in mind (it will deduplicate redundant messages from single sources; I want to deduplicate messages across streams from multiple sources), but I could take its component pieces and build my own system. I started hacking on some code a couple of days ago to get messages from rsyslog to RabbitMQ and then from RabbitMQ to some other process which could handle deduplication. It quickly turned into a working prototype that I’ve been calling Duplog. Duplog looks like this:

Duplog Architecture

Duplog Architecture

At its core, Duplog sits and reads messages out of redundant RabbitMQ queues, hashes them, and uses two constant-time Redis operations to deduplicate them. RabbitMQ makes the whole process fairly fault tolerant and was a great discovery for me (I can imagine many potential use cases for it besides this). Redis is a very flexible key-value store that I’ve configured to act as a least-recently-used cache. I can throw hashes at it all day and let it worry about expiring them.

One important design consideration for me was the ability to maintain duplicate messages within a single stream. Imagine you have a high-traffic web server. That server may be logging many identical HTTP requests at the same time. Those duplicates are important to capture in Splunk for reporting. My deduplication algorithm maintains them.
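The post doesn’t spell out the two Redis operations, but the duplicate-preservation property can be sketched independently of Redis: since every message arrives exactly twice (once per redundant stream), keep a counter per message hash and emit only odd-numbered occurrences. A Python sketch with a plain dict standing in for Redis (the real Duplog uses constant-time Redis commands plus LRU expiry, and the Deduper name here is hypothetical):

```python
import hashlib
from collections import defaultdict

class Deduper:
    def __init__(self):
        # Dict stands in for Redis: message hash -> times seen.
        self.counts = defaultdict(int)

    def should_emit(self, message):
        digest = hashlib.sha256(message.encode()).hexdigest()
        self.counts[digest] += 1
        # Odd occurrences are "new" messages (including genuine
        # intra-stream duplicates); even occurrences are the redundant
        # copy arriving from the second stream.
        return self.counts[digest] % 2 == 1
```

With this scheme, a web server that logs the same HTTP request twice produces four copies across the two streams, and exactly two come out the other side.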

Looking at the architecture again, you will see that almost everything is redundant. You can take down almost any piece and still maintain seamless operation without dealing with failovers. The one exception is Redis. While it does have some high availability capability, it relies on failover, which I don’t like. Instead, I’ve written Duplog to degrade gracefully. When it can’t connect to Redis, it will allow duplicate messages to pass through. A few duplicate messages isn’t the end of the world.

Feel free to play around with the code, but know that it is definitely a prototype. For now I’ve been working with the RabbitMQ and Redis default settings, but there is probably a lot that should be tuned, particularly timeouts, to make the whole system more solid. I also intend to do some benchmarking of the whole stack (that’ll probably be the next post), but initial tests on very modest hardware indicate that it will handle thousands of messages per second without breaking a sweat.


I wrote in the about me blurb on this blog that I like writing little programs for myself. One of the programs I’m most proud of is called SuperSync.

Back in college when I started developing an interest in music, I got in the habit of only acquiring losslessly encoded files. FLACs mostly. It wasn’t long before my collection outgrew what I could store on my iPod. So I hacked together a little script which I called “Sync” to encode my music files to something smaller, like Ogg Vorbis. I wrote it in Java because that’s what I knew best at the time, and for the most part, it just worked. It kept a flat database of files and timestamps to know what to sync to the iPod without reencoding everything every time.

But unfortunately, as my music collection grew, there were times, such as when MusicBrainz pushed a minor update to all of my files, when Sync would think that everything needed to be resynced. It got to the point where some syncs would take a week, processing one file at a time.

It got me thinking: I have 10 CPU cores in my house. If I could get them all working together on the problem, I could get those long syncs down to a day or two. And thus SuperSync was born.

Still written in Java, SuperSync adds a distributed client/server architecture and a nice GUI on top of Sync. The program takes the same flat database, and when it sees a new or updated file in the source directory, it copies it to the destination directory. If the file is a FLAC, it broadcasts a conversion request to the network. Any server can then respond if it has a free CPU. The server reads the file from my network share and sends the encoded file back to the client, where it gets written out to the destination. The whole setup relies on having a consistent global namespace for the source collection. In my case, all of my systems can access my fileserver mounted at /nest in the same way. I can’t imagine many people have such a setup, so I don’t think a formal SuperSync release would be worthwhile.

In any case, the process looks something like this in action:

At the end of the sync, the program can optionally read a song log from Rockbox and scrobble it.

I’m also really proud of the way SuperSync is written. I spent a lot of time upfront to define clean interfaces using good object-oriented style. Feel free to check out the source code if you’re into that sort of thing. Just ask me first if you want to use any of it.

Now the times are changing, and with Subsonic allowing me to stream music to my phone, I haven’t had to sync my music as much recently. But my iPod still has its purposes, so I’m glad I have SuperSync to let me take my whole collection with me.

Retirement: Defined Benefit or Defined Contribution?

Specifically, should employees of University System of Maryland institutions participate in the State Retirement and Pension System (SRPS) or the Optional Retirement Program (ORP)? That was one of the questions I had to answer for myself as I prepare to start a new job at the University of Maryland. The SRPS is a defined-benefit pension plan and the ORP is a defined-contribution 401(a)-like plan. The default is the SRPS, and it seems like they do everything they can to steer you to it (I suspect because your contributions are how the State funds current retirees), but is it a better deal?

First, some details: the SRPS requires employees to contribute 7% of their salary to the plan. In return, after 10 years of service, you can retire at age 65 and receive a monthly allowance following this formula:

monthly allowance = (0.015 × salary × years of service) / 12

By contrast, the ORP is simple: the University will contribute a flat 7.25% of your salary to your choice of Fidelity or TIAA-CREF, and you can invest it however you want. The money is immediately vested. Additionally, you can take the 7% that you would have had to contribute to the SRPS and invest it on your own in a supplemental retirement plan or IRA. That’s a total of 14.25% of your salary going towards your retirement every year…comfortably within the 10-15% that experts recommend.

For me, considering it doesn’t vest until 10 years of service, the SRPS was right out. But as an experiment, I wanted to know which would be the better option if I worked for Maryland for 10 years. (The SRPS does allow you to withdraw your contributions compounded annually at 5% interest if you terminate employment before 10 years, but then you wouldn’t get the benefit of the State’s contribution to the plan, and you can almost certainly do better than 5% annually in the long-run by investing in a mix of stocks and bonds.)

I plugged in all of my numbers to the SRPS formula, and calculated an estimated withdrawal rate for the ORP supposing a realistic inflation-adjusted growth rate. The results were clear: the ORP could provide me with about twice as much money during retirement. With results like those, I was curious whether the SRPS would be a good deal for anyone and under what circumstances that would be.

So I whipped up a little program to do the calculations for me. It has sliders for each of the input variables so the results can easily be compared for a wide variety of circumstances. The program works with your current salary and inflation-adjusted rates of return to give you a picture of what sort of spending power in today’s dollars you would have during retirement.

Consider a 35-year-old who makes $80,000 per year. If they expect to work for Maryland for 10 years and think they can earn around 5% in the market after adjusting for inflation (that’s a nominal annual return of 8% if you assume an average of 3% inflation), and intend to retire at 65 and expect to need income for 30 years in retirement, the ORP just barely comes out on top:
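For the curious, the skeleton of the two calculations, under heavy simplifications (constant inflation-adjusted salary, level withdrawals, no cost-of-living adjustments or taxes, so these figures are rougher than what the tool reports), might look like:

```python
def srps_monthly(salary, years_of_service):
    """Pension formula from above: 1.5% x salary x years, paid monthly."""
    return 0.015 * salary * years_of_service / 12

def orp_monthly(salary, years_working, years_to_retirement,
                years_in_retirement, real_return):
    """Project the 14.25%-of-salary contributions forward, then spread
    the balance over retirement as a level annual withdrawal."""
    r = real_return
    contribution = 0.1425 * salary
    # Future value of the contribution stream at the end of employment...
    balance = contribution * ((1 + r) ** years_working - 1) / r
    # ...left to grow untouched until retirement age...
    balance *= (1 + r) ** (years_to_retirement - years_working)
    # ...then drawn down as an annuity over the retirement years.
    return balance * r / (1 - (1 + r) ** -years_in_retirement) / 12

# The scenario above: $80,000 salary, 10 years of service starting at 35,
# retirement at 65, 30 years of withdrawals, 5% real return.
print(round(srps_monthly(80000, 10)))  # 1000
print(round(orp_monthly(80000, 10, 30, 30, 0.05)))
```

Remember these are ballpark numbers; the tool’s sliders account for factors this sketch leaves out.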

Indeed, that seems to be the turning point. Any older and you won’t have enough time to let those returns compound, and if you are any more risk-averse, then you won’t be able to generate the returns needed to outpace the pension system. In those cases, the SRPS would be a better choice for you, but only if you are in it for the long haul. You’d be wasting valuable investing time if you join the SRPS and leave before your contributions vest. Otherwise, read a book or two on investing, and do it yourself with the ORP.

But don’t take my word for it; do the math or put your numbers into my program and see what the better choice would be for you:

Run the Maryland Retirement Comparison Tool
Requires Java 5 or higher and Windows, Mac OS X, or Linux

Of course, I make no guarantees that my program is accurate, but you can get the source code and check it out for yourself. Also consider investment risks and other factors such as plan benefits carefully.

My Homemade LOM

In one of the final classes of my CS master’s program, Embedded Computing, we were required to complete a semester project of our choosing involving embedded systems. Like in previous semester projects, I wanted to do something that I would actually be able to use after the class ended.

This time around I chose to build a lights-out manager for my Sun Ultra 24 server (which this blog is hosted on). With a LOM I can control the system’s power and access its serial console so I will be able to perform OS updates remotely, among other things.

Since I already owned one and I didn’t want to spend a lot of money, I chose to develop the project on top of the Arduino platform. I like the size of the Arduino, and the availability of different shields to minimize soldering. It’s also able to be powered by USB, which is perfect because the Ultra 24 has an internal USB port that always supplies power.

To make things a little more challenging for myself (because the Arduino is pretty easy on its own), I chose to implement a hardware UART to communicate with the Ultra 24’s serial port. Specifically, I chose to use the Maxim MAX3110E SPI UART and RS-232 transceiver. Great little chip.

For communication with the outside world, I bought an Ethernet shield from Freetronics. It’s compatible with the official Arduino Ethernet shield, but includes a fix to allow the network module to work with other SPI devices (such as my UART) on the same bus. I started to implement the network UI using Telnet, but after realizing I would have to translate the serial console data from VT100 to NVT, I switched to Rlogin, which is like Telnet, but assumes like-to-like terminal types.

Lastly, for controlling the system’s power, I figured out how to tap into the Ultra 24’s power LED and switch. Using the LED, I can check whether the system is on or off, and using the switch circuit and a transistor, I can power the system on and off. I managed to do this without affecting the operation of the front panel buttons/LEDs.

I’ll spare you all of the implementation details (if you’re interested, you can read my report). Suffice it to say, the thing works as well as I could have imagined. Here is a screenshot of me using the serial console on my workstation:

From my research, the serial and power motherboard headers are the same on most modern Intel systems, so this LOM should work on more than just an Ultra 24. If you want to build one of your own, my code is available on GitHub and the hardware schematic is in the report I linked to above.

Using Nitrogen as a Library Under Yaws


I’ve been working on a project off and on for the past year which uses the Spring Framework extensively. I love Spring for how easy it makes web development, from wiring up various persistence and validation libraries, to dependency injection, and brainless security and model-view-controller functionality. However, as the project has grown, I’ve become more and more frustrated with one aspect of Spring and Java web development in general: performance and resource usage. It’s so bad, I’ve pretty much stopped working on it altogether. Between Eclipse and Tomcat, you’ve already spent over 2 GB of memory, and every time you make a source code change, Tomcat has to reload the application which takes up to 30 seconds on my system, if it doesn’t crash first. This doesn’t suit my development style of making and testing lots of small, incremental changes.

So rather than buy a whole new computer, I’ve started to look for a new lightweight web framework to convert the project to. I really like Erlang and have wanted to write something big in it for a while, so when I found the Nitrogen Web Framework, I thought this might be my opportunity to do so. Erlang is designed for performance and fault-tolerance and has a great standard library in OTP, including a distributed database, mnesia, which should eliminate my need for an object-relational mapper (it stores Erlang terms directly) and enable me to make my application highly available in the future without much fuss. Nitrogen has the added benefit of simplifying some of the fancy things I wanted to do with AJAX but found too difficult with Spring MVC.

The thing I don’t like about Nitrogen is that it is designed to deliver a complete, stand-alone application with a built-in web server of your choosing and a copy of the entire Erlang runtime. This seems to be The Erlang/OTP Way of doing things, but it seems very foreign to me. I already have Erlang installed system-wide and a web server, Yaws, that I have a lot of time invested in. I’d rather use Nitrogen as a library in my application under Yaws just like I was using Spring as a library in my application under Tomcat.


I start my new project with Rebar:

$ mkdir test && cd test
$ wget && chmod +x rebar
$ ./rebar create-app appid=test
==> test (create-app)
Writing src/
Writing src/test_app.erl
Writing src/test_sup.erl
$ mkdir static include templates  # These directories will be used later

Now I define my project’s dependencies in rebar.config in the same directory:

{deps, [
    {nitrogen_core, "2.1.*", {git, "git://", "HEAD"}},
    {nprocreg, "0.2.*", {git, "git://", "HEAD"}},
    {simple_bridge, "1.2.*", {git, "git://", "HEAD"}},
    {sync, "0.1.*", {git, "git://", "HEAD"}}
]}.

These dependencies are taken from Nitrogen’s rebar.config. Next I write a Makefile to simplify common tasks:

default: compile static/nitrogen

get-deps:
	./rebar get-deps

include/basedir.hrl:
	echo '-define(BASEDIR, "$(PWD)").' > include/basedir.hrl

static/nitrogen:
	ln -sf ../deps/nitrogen_core/www static/nitrogen

compile: include/basedir.hrl get-deps
	./rebar compile

clean:
	-rm -f static/nitrogen include/basedir.hrl
	./rebar delete-deps
	./rebar clean

distclean: clean
	-rm -rf deps ebin

I expect I’ll be tweaking this Makefile some more in the future, but it demonstrates the absolute minimum needed to compile the application. When I run make for the first time, four things happen:

  1. BASEDIR is defined as the current directory in include/basedir.hrl. We’ll use this later.
  2. All of the Nitrogen dependencies are pulled from Git to the deps directory.
  3. All of the code is compiled.
  4. The static content from Nitrogen (mostly JavaScript files) is symlinked into our static content directory.

Next I prepare the code for running under Yaws. First I create the Nitrogen appmod in src/test_yaws.erl:

-module(test_yaws).
-export([out/1]).

out(Arg) ->
    RequestBridge = simple_bridge:make_request(yaws_request_bridge, Arg),
    ResponseBridge = simple_bridge:make_response(yaws_response_bridge, Arg),
    nitrogen:init_request(RequestBridge, ResponseBridge),
    nitrogen:run().

This is taken from the Nitrogen repository. I also modify the init/0 function in src/test_sup.erl to start the nprocreg application, similar to how it is done in Nitrogen proper:

init([]) ->
    application:start(nprocreg),
    {ok, { {one_for_one, 5, 10}, []} }.

Lastly, I add a function to src/test_app.erl which can be used by Yaws to start the application:


start() ->
    application:start(test).

One other thing that I do before loading the application up in Yaws is create a sample page, src/index.erl. This is downloaded from Nitrogen:

-module (index).
-include_lib("nitrogen_core/include/wf.hrl").
-include("basedir.hrl").
-compile(export_all).

main() -> #template { file=?BASEDIR ++ "/templates/bare.html" }.

title() -> "Welcome to Nitrogen".

body() ->
    #container_12 { body=[
        #grid_8 { alpha=true, prefix=2, suffix=2, omega=true, body=inner_body() }
    ]}.

inner_body() ->
    [
        #h1 { text="Welcome to Nitrogen" },
        #p{},
        "If you can see this page, then your Nitrogen server is up and
        running. Click the button below to test postbacks.",
        #p{},
        #button { id=button, text="Click me!", postback=click },
        #p{},
        "Run <b>./bin/dev help</b> to see some useful developer commands."
    ].

event(click) ->
    wf:replace(button, #panel {
        body="You clicked the button!",
        actions=#effect { effect=highlight }
    }).
I make sure to include basedir.hrl (generated by the Makefile, remember?) and modify the template path to start with ?BASEDIR. Since where Yaws is running is out of our control, we must reference files by absolute pathnames. Speaking of templates, I downloaded mine from the Nitrogen repository. Obviously, it can be modified however you want or you could create one from scratch.
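For reference, the include/basedir.hrl generated by the Makefile is just one macro (its contents depend on where you run make; /docs/test is the path from my setup), which any module can use to build absolute paths:

```
%% include/basedir.hrl -- generated by the Makefile from $(PWD)
-define(BASEDIR, "/docs/test").
```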

Before we continue, I recompile everything by typing make.

Now the fun begins: wiring it all up in Yaws. I use my package for OpenSolaris which puts the configuration file in /etc/yaws/yaws.conf. I add the following to it:

ebin_dir = /docs/test/deps/nitrogen_core/ebin
ebin_dir = /docs/test/deps/nprocreg/ebin
ebin_dir = /docs/test/deps/simple_bridge/ebin
ebin_dir = /docs/test/deps/sync/ebin
ebin_dir = /docs/test/ebin

runmod = test_app

<server localhost>
    port = 80
    listen =
    docroot = /docs/test/static
    appmods = </, test_yaws>
</server>

Obviously, your paths will probably be different. The point is to tell Yaws where all of the compiled code is, tell it to start your application (where the business logic will be contained), and tell it to use the Nitrogen appmod. Restart Yaws and it should all be working!

Now for some cool stuff. If you run the svc:/network/http:yaws service from my package, or you start Yaws like yaws --run_erl svc, you can run yaws --to_erl svc (easiest to do with root privileges) and get access to Yaws’s Erlang console. From here you can hot-reload code. For example, modify the title in index.erl and recompile by running make. In the Erlang console, you can run l(index). and it will pick up your changes. But there is something even cooler. From the Erlang console, type sync:go(). and now whenever you make a change to a loaded module’s source code, it will automatically be recompiled and loaded, almost instantly! It looks something like:

# yaws --to_erl svc
Attaching to /var//run/yaws/pipe/svc/erlang.pipe.1 (^D to exit)

1> sync:go().
Starting Sync (Automatic Code Reloader)
=INFO REPORT==== 17-Feb-2011::15:03:10 ===
/docs/test/src/index.erl:0: Recompiled. (Reason: Source modified.)

=INFO REPORT==== 17-Feb-2011::15:04:20 ===
/docs/test/src/index.erl:11: Error: syntax error before: body

=INFO REPORT==== 17-Feb-2011::15:04:26 ===
/docs/test/src/index.erl:0: Fixed!

2> sync:stop().

=INFO REPORT==== 17-Feb-2011::15:07:17 ===
    application: sync
    exited: stopped
    type: temporary

One gotcha that may or may not apply to you is that Yaws must have permission to write to your application’s ebin directory if you want to save the automatically compiled code. In my case, Yaws runs as a different user than I develop as, a practice that I would highly recommend. So I use a ZFS ACL to allow the web server user read and write access:

$ /usr/bin/chmod -R A+user:webservd:rw:f:allow /docs/test/ebin
$ /usr/bin/ls -dv /docs/test/ebin
drwxr-xr-x+  2 jlee     staff          8 Feb 17 15:04 /docs/test/ebin

ACLs are pretty scary to some people, but I love ’em 🙂

Other Thoughts

You would not be able to run multiple Nitrogen projects on separate virtual hosts using this scheme. Nitrogen maps request paths to module names (for example, requesting “/admin/login” would load a module admin_login) and module names must be unique in Erlang. I think it would be possible to work around this using a Yaws rewrite module, though I haven’t tested it. I imagine if one virtual host maps “/admin/login” to “/foo/admin/login” and another maps it to “/bar/admin/login”, then Nitrogen would search for foo_admin_login and bar_admin_login, respectively, eliminating the conflicting namespace problem.
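If I were to try it, the rewrite half of that workaround would only take a few lines per host. Here’s an untested sketch, where the module name and the “foo” prefix are made up for illustration:

```
%% Untested sketch: prefix every request path with a per-host token so
%% Nitrogen resolves (for example) foo_admin_login instead of admin_login.
-module(rewrite_foo).
-include("yaws/include/yaws_api.hrl").
-export([arg_rewrite/1]).

arg_rewrite(Arg) ->
    Req = Arg#arg.req,
    {abs_path, Path} = Req#http_request.path,
    Arg#arg{req = Req#http_request{path = {abs_path, "/foo" ++ Path}}}.
```

Each virtual host would get its own such module (or a shared, parameterized one) via its arg_rewrite_mod directive.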

Now that I’ve gone through all the trouble of setting up Nitrogen the way I like, I should start converting my application over. Hopefully I’ll like it. It would be a shame to have done all this work for naught. I’m sure there will be posts to follow.

Replacing Apache with Yaws

After some time running with the Apache worker model, I noticed it was not much better on memory than prefork. It would spawn new threads as load increased, but didn’t give up the resources when load decreased. I know it does this for performance, but I am very tight on memory. And I know Apache is very tunable, so I could probably change that behavior, but tuning is boring. Playing with new stuff is fun! This was the perfect opportunity for me to look at lightweight web servers.

I toyed around with lighttpd and read the documentation for nginx. Both seem to do what I need them to do, but I was more interested in the equally fast and light Yaws. I’ve been really into Erlang lately (more on that in another post), so this was a great opportunity to see how a real Erlang application works.


Naturally, Yaws isn’t included in OpenSolaris yet, but Erlang is! So it was fairly easy to whip together a spec file and SMF manifest for the program. Then it was as easy as:

$ pkgtool build-only --download --interactive yaws.spec
$ pfexec pkg install yaws

I’ve set the configuration to go to /etc/yaws/yaws.conf and the service can be started with svcadm enable yaws. When I’m completely satisfied with the package, I’ll upload it to SourceJuicer.


I run two virtual hosts. Both domains also have “.org” counterparts which I want to redirect to their “.com” equivalents. The yaws.conf syntax to handle this case is very simple:

pick_first_virthost_on_nomatch = true

<server localhost>
        port = 80
        listen =
                / =

        port = 80
        listen =
        docroot = /docs/
        dir_listings = true

        port = 80
        listen =
        docroot = /docs/
        dir_listings = true

        port = 80
        listen =
                / =

This should be pretty self-explanatory. The nice thing is the pick_first_virthost_on_nomatch directive combined with the localhost block: if anyone gets to this site by any other address, they’ll be redirected to the canonical domain. I did actually run into a bug with the redirection putting extra slashes in the URL, but squashed it pretty quickly with a bit of help from the Yaws mailing list. That whole problem is summarized in this thread.


Yaws handles PHP execution as a special case. All you have to do is add a couple lines to the configuration above:

php_exe_path = /usr/php/bin/php-cgi

        allowed_scripts = php

Reload the server (svcadm refresh yaws) and PHP, well, won’t work just yet. This actually took me an hour or two to figure out. OpenSolaris’ PHP is compiled with the --enable-force-cgi-redirect option which means PHP will refuse to execute unless it was invoked by an Apache Action directive. Fortunately, you can disable this security measure by setting cgi.force_redirect = 0 in your /etc/php/5.2/php.ini.


Trac needs a little work to get running in Yaws. It requires an environment variable, TRAC_ENV, to tell it where to find the project database. The easiest way to do that is to copy /usr/share/trac/cgi-bin/trac.cgi to the document root, modify it to set the variable, and enable CGI scripts in Yaws by setting allowed_scripts = cgi.

But I decided to set up an appmod, so that trac.cgi could be left where it was, unmodified. Appmods are Erlang modules which get run by Yaws whenever the configured URL is requested. Here’s the one I wrote for Trac:



-module(trac).
-include("yaws/include/yaws_api.hrl").
-export([out/1]).

-define(APPMOD, "/trac").
-define(TRAC_ENV, "/trac/iriverter").
-define(SCRIPT, "/usr/share/trac/cgi-bin/trac.cgi").

out(Arg) ->
        Pathinfo = Arg#arg.pathinfo,
        Env = [{"SCRIPT_NAME", ?APPMOD}, {"TRAC_ENV", ?TRAC_ENV}],
        yaws_cgi:call_cgi(Arg, undefined, ?SCRIPT, Pathinfo, Env).

All appmods must define the out/1 function, which takes an arg record containing information about the current request. At the end of the function, the Yaws API is used to execute the CGI script with the extra environment variables. This is compiled (erlc -o /var/yaws/ebin -I/usr/lib trac.erl) and enabled in the Yaws configuration by adding appmods = </trac, trac> to a server section. Then whenever someone requests /trac/foo/bar, Trac runs properly!
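Putting those pieces together, the relevant yaws.conf additions look something like this (the server name here is a placeholder for your own; /var/yaws/ebin comes from the erlc command above, and the rest of the server block is omitted):

```
ebin_dir = /var/yaws/ebin

<server localhost>
        ...
        appmods = </trac, trac>
</server>
```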

I also set up URL rewriting so that instead of requesting something like http://i.tsv.c/trac/wiki, all you see is http://i.tsv.c/wiki. This involves another Erlang module, a rewrite module. It looks like:




-module(rewrite_trac).
-include("yaws/include/yaws_api.hrl").
-export([arg_rewrite/1]).

arg_rewrite(Arg) ->
        Req = Arg#arg.req,
        {abs_path, Path} = Req#http_request.path,
        try yaws_api:url_decode_q_split(Path) of
                {DecPath, _Query} ->
                        case DecPath == "/" orelse not filelib:is_file(Arg#arg.docroot ++ DecPath) of
                                true ->
                                        Arg#arg{req = Req#http_request{path = {abs_path, "/trac" ++ Path}}};
                                false ->
                                        Arg
                        end
        catch
                exit:_ ->
                        Arg
        end.
This module runs as soon as a request comes into the server and allows you to modify many variables before the request is handled. This is a simple one which says: if the request is “/” or the requested file doesn’t exist, append the request to the Trac appmod, otherwise pass it through unaltered. It’s enabled in the Yaws server by adding arg_rewrite_mod = rewrite_trac to yaws.conf. The Trac appmod must also be modified to make sure SCRIPT_NAME is now / so the application generates links without containing /trac.
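Concretely, that modification to the Trac appmod is just the one macro (the appmod builds SCRIPT_NAME from ?APPMOD):

```
%% Links are now generated relative to the site root instead of /trac:
-define(APPMOD, "/").
```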


WordPress works perfectly once PHP is enabled in Yaws. WordPress permalinks, however, do not. A little background: WordPress normally relies on Apache’s mod_rewrite to execute index.php when it gets a request like /post/2009/07/27/suexec-on-opensolaris/. mod_rewrite sets up the environment variables such that WordPress is able to detect how it was called and can process the page accordingly.

Without mod_rewrite, the best it can do is rely on requests like /index.php/post/2009/07/27/suexec-on-opensolaris/ and use the PATH_INFO variable, which is the text after index.php. I think that looks ugly, having index.php in every URL. You would think that simply rewriting the URL, just as was done with Trac, would solve the problem, but WordPress is too smart, and always sends you to a canonical URL which it thinks must include index.php.

After more experimentation than I care to explain, I discovered that if I set the REQUEST_URI variable to the original request (the one not including index.php), WordPress was happy. This was a tricky exercise in trying to set an environment variable from the rewrite module. But, as we saw with the Trac example, environment variables can be set from appmods. And I found that data can be passed from the rewrite module to the appmod through the arg record! Here’s my solution:




-module(rewrite_blog).  %% (module name assumed)
-include("yaws/include/yaws_api.hrl").
-export([arg_rewrite/1]).

arg_rewrite(Arg) ->
        Req = Arg#arg.req,
        {abs_path, Path} = Req#http_request.path,
        try yaws_api:url_decode_q_split(Path) of
                {DecPath, _Query} ->
                        case DecPath == "/" orelse not filelib:is_file(Arg#arg.docroot ++ DecPath) of
                                true ->
                                        case string:str(Path, "/wsvn") == 1 of
                                                true ->
                                                        {ok, NewPath, _RepCount} = regexp:sub(Path, "^/wsvn(\.php)?", "/wsvn.php"),
                                                        Arg#arg{req = Req#http_request{path = {abs_path, NewPath}}};
                                                false ->
                                                        Arg#arg{opaque = [{"REQUEST_URI", Path}],
                                                                req = Req#http_request{path = {abs_path, "/blog" ++ Path}}}
                                        end;
                                false ->
                                        Arg
                        end
        catch
                exit:_ ->
                        Arg
        end.
Never mind the extra WebSVN rewriting code. Notice I set the opaque component of the arg. Then in the appmod:



-module(blog).  %% (module name assumed)
-include("yaws/include/yaws_api.hrl").
-include("yaws/include/yaws.hrl").
-export([out/1]).

-define(SCRIPT, "/docs/").

out(Arg) ->
        Pathinfo = Arg#arg.pathinfo,
        Env = Arg#arg.opaque,
        {ok, GC, _Groups} = yaws_api:getconf(),
        yaws_cgi:call_cgi(Arg, GC#gconf.phpexe, ?SCRIPT, Pathinfo, Env).

I pull the data from the arg record and pass it to the call_cgi/5 function. Also of note here is the special way to invoke the PHP CGI. The location of the php-cgi executable is pulled from the Yaws configuration and passed as the second argument to call_cgi/5 so Yaws knows what to do with the files. You can surely imagine this as a way to execute things other than PHP which do not have a #! at the top. Or emulating suEXEC with a custom wrapper 🙂
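For example, a hypothetical appmod that runs a script through a custom setuid wrapper would just swap in a different executable as that second argument. The module name, wrapper path, and script path below are all made up for illustration:

```
%% Hypothetical sketch: execute a CGI script through a custom wrapper by
%% passing the wrapper's path as the second argument to yaws_cgi:call_cgi/5.
-module(wrapped_cgi).
-include("yaws/include/yaws_api.hrl").
-export([out/1]).

out(Arg) ->
    yaws_cgi:call_cgi(Arg, "/usr/local/bin/suexec_wrapper",
                      "/docs/cgi-bin/script.cgi", Arg#arg.pathinfo, []).
```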

Overall, this probably seems like a lot of work to get things working that are trivial in other servers, but I’m finding that the appmods are really powerful, and the Yaws code itself is very easy to understand and modify. Plus you get the legendary performance and fault-tolerance of Erlang.