Comoyo is more than just a provider of TV and movies online. Behind the scenes, other exciting products are emerging. Together with my partner in crime, Jonas, I was recruited as a student intern for Comoyo Communications in late 2011. There we met a team of eager Comoyans working intensely on the future of communication technology.

Without going into detail, we are building a messaging service. This consists of a solid backend structure, and several frontends. Our task? Create a web frontend using technologies so fresh that computer hipsters worldwide would worship us!

Enter WebSockets.

Present out of the box in most modern desktop and mobile browsers, WebSockets is an independent protocol layered on top of TCP, providing a persistent connection between a web page and a server. The current exceptions are Opera (it’s almost here, though) and, of course, IE (soon up to speed as well).

Cue Netty

Netty is an asynchronous event-driven network application framework for rapid development of maintainable high performance protocol servers & clients.

So, the backend we were to communicate asynchronously with was, in its present form, a TCP socket served by Netty. Incoming packets were decoded into JSON, and sent further down the chain for processing. Simple.

A simplified overview of how Netty fits into our project is displayed here.


A web frontend for the messaging service did exist when we joined the team. However, the site used a Django backend that took care of the communication with Netty, and provided a REST API for the frontend. The goal was to remove the Django node, leaving one less component to maintain.

Problem: We couldn’t simply open a WebSocket connection to our existing Netty server.

How Netty works

A brief workflow description for those new to Netty.

When sending a packet to a Netty server it goes into a Channel in the form of a ChannelBuffer. The Channel consists of a Pipeline with one or more ChannelHandlers in a specified order. Incoming packets are sent up the pipeline. Outgoing messages are sent down.
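The flow can be sketched with a toy model in plain Java. Note that `Handler` and `ToyPipeline` here are illustrative stand-ins, not Netty's actual API; they only show the ordering: upstream traverses the handlers front to back, downstream back to front.

```java
import java.util.ArrayList;
import java.util.List;

// A toy model of Netty's pipeline concept (names are illustrative).
interface Handler {
    Object onUpstream(Object msg);   // transform an incoming message
    Object onDownstream(Object msg); // transform an outgoing message
}

class ToyPipeline {
    private final List<Handler> handlers = new ArrayList<>();

    void addLast(Handler h) { handlers.add(h); }

    // Incoming packets traverse the handlers in order.
    Object sendUpstream(Object msg) {
        for (Handler h : handlers) msg = h.onUpstream(msg);
        return msg;
    }

    // Outgoing messages traverse the handlers in reverse order.
    Object sendDownstream(Object msg) {
        for (int i = handlers.size() - 1; i >= 0; i--) {
            msg = handlers.get(i).onDownstream(msg);
        }
        return msg;
    }
}
```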

As seen below, the existing server first used the JsonFrameDecoder to decode the ChannelBuffer into a byte array containing JSON data. Then it passed the result down to CustomPacketHandler, an implementation of the ChannelUpstreamHandler interface.

ChannelPipeline pipeline = Channels.pipeline(
    new JsonFrameDecoder(),
    new CustomPacketHandler());

The last packet handler is extremely simple. It sends the JSON byte array to a service communicating with a server that processes the content of the packet.

public void messageReceived(
    ChannelHandlerContext ctx, MessageEvent e) throws IOException {
    byte[] bytes = (byte[]) e.getMessage();
    int connectionId = ctx.getChannel().getId();
    ExternalServerCommunicator.handlePacket(connectionId, bytes);
}

So what had to be done differently using WebSockets?

Opening the WebSocket connection

When opening a WebSocket connection, you do not start by sending WebSocket frames right away! The connection is initialized with an HTTP handshake request, and needs to be upgraded to a WebSocket connection on the server.
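What the handshake actually proves is small: the server hashes the client's Sec-WebSocket-Key header together with a GUID fixed by RFC 6455 and echoes the result in the Sec-WebSocket-Accept response header. Netty's handshaker does this for us, but the computation itself is a plain-JDK one-liner; `HandshakeAccept` below is a hypothetical helper for illustration, not part of our codebase.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Base64;

class HandshakeAccept {
    // GUID fixed by RFC 6455 for the opening handshake
    private static final String WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11";

    // Derive the Sec-WebSocket-Accept response header value from
    // the client's Sec-WebSocket-Key request header.
    static String accept(String secWebSocketKey) {
        try {
            MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
            byte[] digest = sha1.digest(
                    (secWebSocketKey + WS_GUID).getBytes(StandardCharsets.US_ASCII));
            return Base64.getEncoder().encodeToString(digest);
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // SHA-1 is always present in the JDK
        }
    }
}
```

For the key "dGhlIHNhbXBsZSBub25jZQ==" (RFC 6455's own example), this yields "s3pPLMBiTxaQ9kYGzzhZRbK+xOo=".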

Therefore we needed to create our own server instance that would open customized WebSocket channels. In other words, we would have to create our own pipeline.

Let’s open a connection from the JavaScript console in the browser, using example.com as a stand-in for the domain.

ws = new WebSocket('ws://example.com');

Below is the pipeline that the first WebSocket packet is sent through.

ChannelPipeline pipeline = Channels.pipeline(); 
pipeline.addLast("decoder", new HttpRequestDecoder()); 
pipeline.addLast("aggregator", new HttpChunkAggregator(65536)); 
pipeline.addLast("encoder", new HttpResponseEncoder());
pipeline.addLast("handler", new WebSocketPacketHandler()); 
return pipeline;

Why are we naming the handlers, you say? Because it makes replacing them later a breeze.

The first three handlers are standard HTTP handlers, and require no further explanation. Do note that encoders do nothing with incoming packets, and decoders leave outgoing packets alone. Now we’ve arrived at the WebSocketPacketHandler with our HTTP request.

As before, the incoming packet is passed to the messageReceived function, although this time we need to do a bit more work. Our first job is to check what kind of packet this is.

public void messageReceived(
    ChannelHandlerContext ctx, MessageEvent e) throws IOException {
    Object msg = e.getMessage();
    if (msg instanceof HttpRequest) {
        handleHttpRequest(ctx, (HttpRequest) msg);
    } else if (msg instanceof WebSocketFrame) {
        handleWebSocketFrame(ctx, (WebSocketFrame) msg);
    }
}
Since we sent a handshake request over HTTP, we are sent to handleHttpRequest, which is shown in snippets below. Here we need to perform the handshake. Thankfully, we do not need to do this ourselves; Netty provides it for us. We send an error back if the client’s handshake request yields an error.

WebSocketServerHandshakerFactory wsFactory = new WebSocketServerHandshakerFactory(
    "ws://" + req.getHeader(HttpHeaders.Names.HOST) + req.getUri(), null, false);
this.handshaker = wsFactory.newHandshaker(req);

if (this.handshaker == null) {
    wsFactory.sendUnsupportedWebSocketVersionResponse(ctx.getChannel());
} else {
    this.handshaker.handshake(ctx.getChannel(), req);
}
That’s it. We’ve successfully performed the handshake. However, the pipeline of the channel will still handle packets as if they are HTTP. Let’s fix that just below the handshake.

ChannelPipeline p = ctx.getChannel().getPipeline(); 
p.replace("decoder", "wsdecoder", new WebSocketFrameDecoder()); 
p.replace("encoder", "wsencoder", new WebSocketFrameEncoder());

Handling WebSocket frames

As mentioned, WebSockets uses a custom frame specification. It does not replace TCP; it runs on top of TCP, but it can still be considered to belong at the transport level. A transport protocol on top of a transport protocol.
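To get a feel for what that frame spec looks like on the wire, here is a simplified plain-JDK sketch that decodes a single short masked text frame; a real decoder such as Netty's WebSocketFrameDecoder also handles fragmentation, extended payload lengths and control frames. `TinyFrameDecoder` is an illustrative name, not part of our code.

```java
import java.nio.charset.StandardCharsets;

class TinyFrameDecoder {
    // Decode a single masked text frame with a payload under 126 bytes.
    static String decodeTextFrame(byte[] frame) {
        int opcode = frame[0] & 0x0F;             // 0x1 = text frame
        boolean masked = (frame[1] & 0x80) != 0;  // client frames must be masked
        int len = frame[1] & 0x7F;                // 7-bit payload length
        if (opcode != 0x1 || !masked || len >= 126) {
            throw new IllegalArgumentException("unsupported frame");
        }
        byte[] payload = new byte[len];
        for (int i = 0; i < len; i++) {
            // XOR each payload byte with the 4-byte masking key (bytes 2..5)
            payload[i] = (byte) (frame[6 + i] ^ frame[2 + (i % 4)]);
        }
        return new String(payload, StandardCharsets.UTF_8);
    }
}
```

Fed RFC 6455's example bytes for a masked "Hello" (0x81 0x85 0x37 0xfa 0x21 0x3d 0x7f 0x9f 0x4d 0x51 0x58), this returns the string "Hello".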

For us, this means we have to treat incoming and outgoing packets as WebSocket frames.

So now you have established a link between the browser and the server, and upgraded it to a WebSocket connection. How do you handle the incoming and outgoing packets? Let’s start by sending a message to the server from the JavaScript console.

ws.send('Hello server!');
So the packet arrives at the server, and this time it is passed to the WebSocketFrameDecoder. Bytes go in, magic happens, and a WebSocketFrame is passed on to the WebSocketPacketHandler. This time, finally, it is sent to the “correct” function.

private void handleWebSocketFrame(
    ChannelHandlerContext ctx, WebSocketFrame frame) throws IOException {
    if (frame instanceof CloseWebSocketFrame) {
        this.handshaker.close(ctx.getChannel(), (CloseWebSocketFrame) frame);
        return;
    } else if (!(frame instanceof TextWebSocketFrame)) {
        // Preferably do something to handle unsupported frames
        return;
    }

    String request = ((TextWebSocketFrame) frame).getText();
    byte[] bytes = request.getBytes();
    int connectionId = ctx.getChannel().getId();
    ExternalServerCommunicator.handlePacket(connectionId, bytes);
}

For simplicity I have left out a couple of things, such as handling a PingWebSocketFrame. When we are sure that the current frame is a TextWebSocketFrame, we extract the text and pass it to the same server communicator as we used before.

So that was the incoming packet. How do we deal with an outgoing packet? This is up to you. A Netty way of doing it is to implement a ChannelDownstreamHandler, call it WebSocketDownstreamHandler, and add it at the end of the pipeline. There you can do something like this.

public void handleDownstream(
    ChannelHandlerContext ctx, WebSocketEvent evt) throws Exception {

    final byte[] packetBytes = JsonOutgoingPacketConverter.convert(evt.getMessage());
    final Channel clientChannel = ctx.getChannel();

    if (clientChannel != null) {
        clientChannel.write(new TextWebSocketFrame(new String(packetBytes)));
    }
}
Here the WebSocketEvent is an implementation of the ChannelEvent interface, and is triggered when we receive a message from the business logic part of our service.

A JSON object is extracted from the message, converted to a byte array, and wrapped in a WebSocket frame. The WebSocketFrameEncoder encodes the message into a ChannelBuffer, and sends it out on the socket.
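The encoding direction is the mirror image of decoding. Since server-to-client frames are not masked, a short text frame is essentially a two-byte header followed by the payload; a simplified plain-JDK sketch (`TinyFrameEncoder` is again just an illustrative helper, not Netty's encoder):

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;

class TinyFrameEncoder {
    // Encode an unmasked server-to-client text frame with a payload
    // under 126 bytes; longer payloads need the extended length fields.
    static byte[] encodeTextFrame(String text) {
        byte[] payload = text.getBytes(StandardCharsets.UTF_8);
        if (payload.length >= 126) {
            throw new IllegalArgumentException("extended lengths not handled here");
        }
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        out.write(0x81);           // FIN = 1, opcode 0x1 (text)
        out.write(payload.length); // mask bit 0, 7-bit payload length
        out.write(payload, 0, payload.length);
        return out.toByteArray();
    }
}
```

Encoding "Hello" this way yields the header bytes 0x81 0x05 followed by the five UTF-8 payload bytes.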

At last, the packet arrives in our browser.



At work this Friday, we finally got our hands on the Raspberry Pi. I got to spend some hours with the device, and these are my thoughts.

First impression

The Pi is small. Very small, actually. The fact that this thing can do 1080p video, run Debian and Arch Linux, and perform many other cool tricks, for a little over 200 NOK, baffles my mind. Only four years ago I built my HTPC for 5000 NOK in a box the size of a stereo receiver, and this thing handles high-definition media a lot better, in a much, much smaller package.

Raspberry Pi

The magic is, of course, in the Broadcom BCM2835 and the excellent hardware design done by the Raspberry Pi folks. The CPU is an ARM1176JZF-S applications processor, and the GPU a VideoCore IV. The manufacturer claims it is capable of playing back 1080p H.264 at 40 Mb/s. However, the CPU runs at only 700 MHz and seems to be the real bottleneck of the system. This is especially visible when trying to run a browser on the recommended Linux install, Raspbian.

Installing Raspbian

To install or do anything with your new Pi, you first need an SD card. Luckily, I was prepared and had one handy. The first thing most Raspberry Pi owners will try is the official Debian build for the Raspberry Pi, called Raspbian. It is an optimized build, with over 35,000 packages tuned for the floating-point architecture of the Broadcom chip.

Installing the Raspbian image on our Pi using Mac OS X was not something I’d call mass-market user friendly just yet. This may be quite different on other operating systems, though. We followed a handy guide, and we will repeat the steps here.

First, we have to download and unzip the image itself. The latest image is always available from the official downloads page.

Now, we need to prepare our SD card.

Find the path of the SD card on your system.

$ df -h
Filesystem      Size   Used  Avail Capacity  Mounted on
/dev/disk9s1   7.4Gi  2.6Mi  7.4Gi     1%    /Volumes/NO NAME

Unmount the drive

$ diskutil unmount /dev/disk9s1
Volume NO NAME on disk9s1 unmounted

The device name for my raw disk is now /dev/rdisk9. From the directory where you extracted your image file, run the following command. This takes a while, so be patient; the only output comes at the end.

$ sudo dd bs=1m if=IMAGE_FILE_PATH.img of=/dev/rdisk9
1850+0 records in
1850+0 records out
1939865600 bytes transferred in 419.827777 secs (4620622 bytes/sec) 

Eject the disk

$ diskutil eject /dev/rdisk9

Booting it up

The hooked up Pi is by no means worthy of /r/cableporn, but I guess that isn’t the point.

Raspberry Pi hooked up

Booting itself is fairly quick, taking only 20 seconds or so. On first boot, we are presented with the usual blue Debian installer, which lets us configure the Pi to our liking. A quick reboot later and we’re inside a desktop manager unknown to me.

Raspbian desktop

The experience is surprisingly fast considering this is a 200 NOK device, but not something I would ever use for a regular desktop experience except in emergencies. The browser is very sluggish, killing my hopes for a custom TV web interface running on this thing. It does, however, come preinstalled with Scratch and Python, which I guess covers the educational part of the Raspberry team’s goal. I wanted to try XBMC on this thing, so I didn’t do much benchmarking on the Raspbian image.

Installing Raspbmc

Raspbmc is a simple and lightweight media center distribution for the Raspberry Pi, created by Sam Nazarko. It is basically XBMC compiled for the Raspberry Pi.

Installation using OS X is fairly simple.

curl -O
chmod +x
sudo python

And that’s it! Follow the on screen instructions, and make sure you select the correct disk!

Booting Raspbmc

First time boot requires the Pi to connect to an update server. This process takes about 15 minutes. After that, we are greeted with the usual XBMC splash screen.

Raspbmc splash

Initial impressions were fairly meager, however. The Pi was unresponsive for some time, and the UI felt sluggish. Installing add-ons was painful. A quick reboot fixed most of these problems, probably because XBMC was doing some initial checks of outdated add-ons in the background.

To test the performance of the device, we first tried The Dark Knight 1080p trailer. It played back flawlessly, with little buffering needed. Instant playback. Pretty great experience.

Raspberry Pi

We also tried some movie clips we had lying around, both from a USB drive and over the network, and we had no issues whatsoever with buffering or stuttering during playback.

Closing thoughts

The Pi, having just ramped up production and opened the floodgates to the masses, is a very exciting device. Already we see improvements and new products popping up, trying to mimic and improve on the idea put forth by the Pi. One such device on its way to our headquarters is the Gooseberry, and many others are being released in the next six months. We look forward to the future of computing, and we will continue to look into the possibilities these new devices give us.

Stay tuned!


Comoyo Jekyll Blog Flow Chart

When we set out to relaunch our company techblog, we had a few specific requirements:

  1. The blog should require no form of monitoring or on-call time.
  2. The blog must scale to meet (almost) any amount of traffic.
  3. Writing a post, submitting it for review, and getting it published should take as little time as possible away from the developer.
  4. All developers must be able to post, whenever they want to post.
  5. Developers must be familiar with the tools used to create the blog.

Of course, we could set up a dedicated Wordpress installation on one or several of our AWS instances, and make it scale with heavy caching and Varnish. However, that solution is prone to breakage, as we have seen with other Wordpress installations. Developers also tend to dislike WYSIWYG editors, arguing that they create ugly and unreadable HTML. Not to mention that we would have to create users for every developer before they could start to blog, taking more time away from actual programming.

Enter Jekyll

Then we discovered Jekyll. Jekyll is a simple, blog-aware, static site generator. It is the main component behind GitHub Pages, a free hosting solution from GitHub. GitHub takes your Jekyll application (and static files too!), runs it through its compiler, and hosts the result. They even have a handy guide on how to create and publish your Jekyll application. Developers can fork, make pull requests, comment inline, use whatever editor they want, and see diffs of content; best of all, it needs little to no monitoring from our on-call engineer.

So with this in mind, I was tasked with creating the company blog using Jekyll, including a complete bootstrap stylesheet for Comoyo. I used Initializr to create the basic structure, with HTML5 Boilerplate, Bootstrap 2.0.4, Modernizr, jQuery and LESS. Development was slow at first, mainly because I didn’t know about the excellent bootstrap solution provided by Jekyll Bootstrap. Once the initial folder structure was set up, the rest was pretty straightforward. Our folder structure now looked something like this:

|-- _config.yml
|-- _includes
|   |-- article_header.html
|   |-- author.html
|   |-- authors.html
|   `-- comments.html
|-- _layouts
|   |-- default.html
|   `-- post.html
|-- _posts
|   |--
|   `--
|-- _site
|-- assets
|   |-- css
|   |-- fonts
|   |-- img
|   |-- js
|   `-- less
|-- .gitignore
|-- 404.html
|-- favicon.ico
|-- Gemfile
|-- humans.txt
`-- index.html

Outsource everything

With GitHub Pages, we no longer have a database, which makes it hard to store comments, likes and the like ourselves. So, like almost everyone else these days, we decided on using Disqus. The main advantage of outsourcing everything like this is that we don’t have to spend much money keeping the blog up, making it easier to justify spending developer time on writing posts instead of on keeping everything backed up, monitored and live.

Of course, outsourcing has its drawbacks. Not controlling our own platform means that we risk unplanned downtime and data loss, a bit like setting sail in a ship whose direction you no longer control. But since we use git, its distributed nature means there are multiple copies of the repository, so the risk of data loss is minimal. The only real problem would be service outages at GitHub or Disqus, but then we would have bigger problems to deal with anyway.

Making Jekyll overcome its hurdles

As with any framework, Jekyll is not without its hurdles. The biggest issue we had was that GitHub Pages does not support Jekyll plugins. Since we wanted to customize our blog with multiple authors, a better frontpage view and more, this was now out of the question. Initially, we had used the excellent plugin only_first_paragraph to create a nice concatenated index of all posts. The solution we chose was to simply forget about it for now, and later add some JavaScript to do the same.

Another issue was that Jekyll is by default a single-author blogging platform. We needed support for multiple authors, preferably with author profiles and metadata. After some quick googling, we found a solution. We also created two simple includes, authors.html which lists a short bio for each author, and author.html that shows the author profile when reading a post.

# _config.yml
authors:             # keyed on an author id of your choosing
  dagingaa:
    name: Dag-Inge Aas
    position: Software Developer Intern
    gravatar: d4acca0bcb24dc67644055d4e44a6b29
    github: dagingaa
    twitter: daginge
    about: |
      Summer intern at Comoyo. Likes front-end development, new technology
      and cake. Currently active as Head of IT for ISFiT.

We could then get the authors array through site.authors and iterate through this as one would expect.

Getting Jekyll ready for production

By now, we have about 15 LESS files getting requested, parsed and compiled to CSS by less.js. For a production site, this is unacceptable. Jekyll does not have a built-in CSS or JS compressor, so we have to do this ourselves. I landed on using recess, the same tool that Twitter Bootstrap uses, to compile and compress the LESS files into a single CSS file. Anyone who checks the source can see that we still load several .js and .css files, but the situation is far better than it was. In time, we will create a build script, run before pushing changes to GitHub, that automatically combines all JavaScript and CSS.

In short, we wanted our blog to scale to demand, use the latest web technology, be insanely fast, and become a digital playground where developers can create, write and modify everything the way they prefer. We believe that this is the best way to motivate developers to write for a company techblog.

Hi, we are Comoyo.