The Nordic Collegiate Programming Contest (NCPC) is considered the national championship in programming in Norway. Teams participate from several locations: the University of Oslo and NTNU in Trondheim in Norway, and several universities in Sweden, Denmark, Finland and Iceland. Comoyo has been lucky enough to start a cooperation with MAPS (Mathematics, Algorithms and Programming for Students), the student association organizing the contest at the University of Oslo. Since several of our employees are recent graduates, we decided to enter a team in the professionals class to see if we could measure up against the young and aspiring students.

The Problems

Problems are designed to test algorithm and problem-solving skills, much like the challenges we face every day at Comoyo. You’ll find the problem set in PDF here. For a more humorous approach to the problems, head over to the blog of Oda Josefine Noven, a student who has written the essay “NCPC - Reflections from a literature student” (in Norwegian).

“The goal when designing the problems is that everyone should be able to solve at least one problem, no one should be able to solve all the problems, but all the problems should be solved by someone.” – Matias Holte, member of the NCPC problem committee

11:00 - Ready for action! Arriba! The team members are Vidar Klungre, Sverre Sundsdal and Tor Fredrik Eriksen.

Contest starts

11:16 - First problem solved, a pink balloon is the reward. Now thinking hard about the next one. “I mustache you a question?”

thinking about problems

11:48 - Second problem solved, now in third place of the Oslo teams. Time to switch mustaches for better luck! Switching mustaches Mustache selection

16:05 - Competition over. Spot the Comoyo! Panorama of room

We ended up in third place among all the teams competing in Oslo, and in 16th place among the Norwegian teams. The two winning teams from Oslo will go to the European regional finals in Delft in the Netherlands on November 23-25. Winners

The results list is available here.

Hope to see you all at next year’s NCPC or IDI Open in the spring!

Comments

As part of the ever-changing internet industry, we at Comoyo are totally dependent on fresh ideas and new thinking to solve the problems we’re facing. And we think there are few better ways to get that kind of input than to involve students in our daily work. We currently have 4 students working part-time alongside their studies, and we’re now hiring summer interns for 2013 - you can apply here

This summer we had 9 summer interns, and they worked on a variety of projects:

  • Building an internet messaging app using the Facebook integration in iOS 6
  • Creating a web interface in HTML5 and WebSockets (blog posts 1 & 2)
  • Prototyping an implementation of OAuth 2.0
  • Building a notifications service that can send SMS and e-mail to customers
  • Playing around with analytics data to see how it can be used to improve our products and customer service
  • Preparing our film service for A/B testing

Some highlights from the summer: Intro for summer students Comoyo summer interns working Comoyo summer interns socializing Meeting with all of Comoyo Lunch outside at Fornebu Summer interns final presentation

Why do a summer internship at Comoyo?

  • You get to work on real problems that one of our regular employees would otherwise have to solve
  • The stuff you build can actually go into production
  • We want you to promote your own ideas and challenge us on how we can improve our products
  • You get to work with some of the smartest people in the industry, who have extensive experience with building and running large-scale internet services
  • We work with cutting-edge technology - either we build it ourselves, or we use some of the latest things that have been released
  • You get to know our culture and can find out if this is the right place for you to work - if the answer is yes, a summer internship is the best way to get a full-time position.

Apply for a summer internship now!

Comments

JavaZone is one of the biggest conferences for software developers in Scandinavia, and took place in Oslo this September. Comoyo was naturally present both in the audience and on the stage, where we shared some of the experiences we have gathered after 1.5 years of delivering film & TV streaming services.

Below you will find videos of the presentations made by Comoyo employees. If you have any questions, feedback or any experience you want to share on these topics, please comment below.

Real-World Performance Testing in AWS

By Bjørn Remseth, Senior Software Engineer (Comoyo) and Kristian Klette, Solutions Engineer (Iterate)

Comoyo provides infrastructure for moving pictures, both movies and TV, as well as live television. On March 23, 2012, the Norwegian national soccer league started up, and we were in no way certain that we would be able to handle the load we expected all the football fans in Norway to generate. The movie delivery subsystem consists mostly, but not entirely, of components hosted in Amazon’s Elastic Compute Cloud (EC2), and is mostly (but not entirely) written in Java. In this talk we describe how we used multiple tools to generate what we believed to be realistic loads against our systems, and how we used this to tune the system to actually perform.

This may seem like a simple task, but it wasn’t: The system is made of components, but the component developers were not aware of the full system architecture.

Some components assumed to “just work” (such as load balancers) were discovered to be bottlenecks. Initially, we didn’t even think about load balancers, since they were effectively invisible to the developers. Writing test scripts for load testing is a skill separate from writing other types of tests, and the test tools themselves are far less standardized and mature, and much easier to use for some tasks than for others.

We did all of this, we learned how to harness the tools, we delivered the Tippeliga, and we did it all in just a few weeks. In this talk we share some of the hard-won lessons on how to write tests, how to interpret their results, and how to actually tune a system for high performance.
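The core mechanics behind such load generation can be sketched in a few lines. The snippet below is purely illustrative and not Comoyo’s actual test code; `makeRequest` is a hypothetical stand-in for a real HTTP call, and real tools add ramp-up, think time, and richer reporting:

```javascript
// Minimal concurrency-limited load generator (illustrative sketch only).
// Keeps `concurrency` requests in flight until `totalRequests` have finished,
// then hands the collected latencies to `done`.
function runLoad(makeRequest, totalRequests, concurrency, done) {
  var started = 0, finished = 0, latencies = [];

  function fireNext() {
    if (started >= totalRequests) return;
    started++;
    var t0 = Date.now();
    makeRequest(function () {
      latencies.push(Date.now() - t0);
      finished++;
      if (finished === totalRequests) return done(latencies);
      fireNext();  // keep the pipeline full
    });
  }

  // Prime the pipeline with `concurrency` in-flight requests
  for (var i = 0; i < concurrency; i++) fireNext();
}
```

From the latencies you can then compute percentiles, which is where the interesting tuning conversations usually start.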

Real-World Performance Testing in AWS from JavaZone on Vimeo.

Reliable scalability with MongoDB

By Markus Krüger, Senior Software Engineer

Many developers are discovering that traditional relational databases make it hard to scale to the large data volumes and user traffic required by Internet-scale applications. MongoDB is emerging as one of the leading contenders in the NoSQL space. We at Comoyo are using MongoDB on Amazon Elastic Compute Cloud (EC2) for our payment subsystems, with a target of some hundred million users regularly accessing the system. As we are handling financial transactions with high demands on reliability, we needed to make sure that MongoDB did not sacrifice our customers’ safety on the altar of performance. This talk presents how we use MongoDB to gain higher availability and scalability than traditional databases, with simpler development and administration, without losing the required reliability and durability.

The talk describes how we configured replication, and our approach to work around the lack of transactions in MongoDB. There will be pointers on tuning MongoDB for availability and reliability. Also, the talk will describe how we used MongoDB to implement once-and-only-once messaging semantics on top of Amazon Simple Queuing Service (SQS).
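As a rough illustration of the once-and-only-once idea: SQS delivers each message at least once, so duplicates must be filtered out by recording each message id behind a unique constraint. The sketch below is a conceptual stand-in only; it uses a plain in-memory object where the real system would use a MongoDB collection with a unique index on the message id:

```javascript
// Conceptual dedup sketch: turn at-least-once delivery into
// exactly-once processing. `seen` stands in for a MongoDB collection
// with a unique index on the message id; a duplicate "insert" is how
// a repeated delivery gets detected and dropped.
function makeDeduplicatingHandler(processMessage) {
  var seen = {};
  return function (message) {
    if (seen[message.id]) return false;  // duplicate delivery: drop it
    seen[message.id] = true;             // "insert" succeeds exactly once
    processMessage(message);
    return true;
  };
}
```

With a durable store instead of the in-memory object, the same pattern survives process restarts, which is the point of putting the ids in the database.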

Reliable Scalability with MongoDB from JavaZone on Vimeo.

Cloudname, a system for coordinating services

By Haakon Dybdahl, Senior Software Engineer

Cloudname is an open source Apache licensed system for running services in the cloud (available from github.com/cloudname). The system consists of two parts: Nodee, which is a very simple service for provisioning and managing software artifacts and running processes in a distributed system. The other part is the Cloudname library, which is used for coordination of services in a distributed system. (Usually when we refer to Cloudname, we mean the Cloudname library).

The Cloudname library aims to provide a minimal set of coordination and configuration services needed to run large distributed systems. It takes care of mapping and tracking services independently of their physical location. Services are assigned coordinates. When a service starts, it claims its coordinate. Once a coordinate has been claimed, the service can use it to publish endpoints. Clients can resolve endpoints by way of resolver expressions.
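To make the coordinate model concrete, here is a toy in-memory sketch of the claim/publish/resolve cycle. This is a conceptual stand-in only, not the actual Cloudname API (which is a Java library backed by ZooKeeper), and the coordinate string in the usage example is made up:

```javascript
// Toy in-memory registry illustrating the coordinate/endpoint model.
var registry = {};

// A service claims its coordinate exactly once; the claim grants it
// the right to publish named endpoints under that coordinate.
function claimCoordinate(coordinate) {
  if (registry[coordinate]) throw new Error('coordinate already claimed');
  registry[coordinate] = {};
  return {
    publishEndpoint: function (name, hostPort) {
      registry[coordinate][name] = hostPort;
    }
  };
}

// Clients look up endpoints by coordinate and endpoint name.
function resolveEndpoint(coordinate, name) {
  return registry[coordinate] && registry[coordinate][name];
}
```

Usage would look something like `claimCoordinate('1.payment.prod.oslo').publishEndpoint('http', 'host1:8080')`, after which clients can call `resolveEndpoint('1.payment.prod.oslo', 'http')`. The real library adds the parts that matter in production: resolver expressions, watches, and claims that disappear when the service dies.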

It is an important point that Cloudname is implemented as a library that uses Apache ZooKeeper to perform its heavy lifting.

Cloudname, a system for coordinating services from JavaZone on Vimeo.

We look forward to next year’s Javazone and other conferences, and hope to see you there!

Comments

As an ambitious internet company, Comoyo has grown from 25 people to about 80 in 1.5 years, mostly through hiring engineers, and we have painfully experienced that finding female programmers is HARD!

Not that they’re not good enough, there just aren’t very many around to choose from. When recruiting for more senior tech lead roles, I’ve had headhunters deliver long lists of up to 70 candidates, none of whom were women. And this is in Norway, reputedly the second most gender-equal country in the world. This year, we had a female summer intern, and we recently hired our first full-time female coder, but our ambitions are higher.

It’s an objective for us to have a good gender balance among our employees. First of all for the social atmosphere, but also to have development teams that are representative of our users. In addition, we think it’s important to have more women in the one industry that might impact our societies the most in the years to come - the world needs the ideas of women when it comes to how technology can be used to solve problems.

So why are there so few women in tech?

In our view, girls need to be made aware of the opportunities that lie in programming at an early stage, preferably in junior high/high school. In a recent article titled “Why don’t girls want to be geeks?” Carol Dawkins, an IT teacher, gives the following explanation:

Boys are more, 'Game on' - they don't mind if they make mistakes. They are more confident around the technology, whereas girls are a little bit shy, on the back foot before they start.

A lot of the misconceptions about programmers need to be corrected, and we in the industry need to provide role models who can show that programming is social and challenging, and requires creativity and vision.

This summer we stumbled across a cool project on Kickstarter: LadyCoders are three women with substantial tech careers who want to use their experiences to “put on a training seminar to show women how to bridge the gap between a computer science degree and programming skills, and a solid career in software and web development.” According to LadyCoders, “women lack mentorship at every single level of an information technology career.” The seminar will result in a DVD/downloadable movie and 20 free web videos.

So, we decided to pledge $250 to their campaign - and we encourage others to do the same! The campaign ends today, so hurry up.

kickstarter

Get your CV reviewed - or get an interview with us!

In return for our pledge, we get to submit the CV of one lucky girl to the three LadyCoders, and have them review it and give feedback to the candidate, with the aim of improving her chance for getting job interviews. So, we want to invite all coder girls who are interested in improving their CV to send us an e-mail and say something like “I want to have an awesome CV!” and you might be drawn as the lucky winner!

If you are also interested in an internship, summer job or full-time job at Comoyo, attach your CV and our recruiter will get back to you.

What next?

We at Comoyo aim to work closely with the relevant communities here in Norway to give aspiring female programmers a good impression of what it’s like to work as a programmer here, and to show what kind of opportunities lie in the world of code. We do a lot of fun stuff we think girls would be interested in. Stay tuned.

Comments

When we arrived at Comoyo for our summer internship, we were given a pretty exciting task: build a messaging web app, using any technologies at hand, but without using a backend. Well, not completely true – we already had a UNIX-socket backend that we were to hook into, so the problem soon became building a self-contained HTML app able to talk to it without any backend of its own.

The natural platform choice became WebSocket-heavy HTML5.

After implementing a WebSocket handler in our Netty backend, we started thinking about our client-side technology stack. Apart from using WebSockets for server communication, we decided to use localStorage for client-side storage and cache, and Backbone.js for the actual front-end. As Backbone.js depends on Underscore.js, we might as well use it throughout the application. To speed up development, we decided to go for CoffeeScript instead of pure JavaScript.

Other people have integrated Backbone with web sockets before us. However, they all seem to be using Node.js with Socket.io, so we thought we’d share our slightly different approach.

Heads up! This article focuses on integrating Backbone with an asynchronous web socket protocol, and assumes some level of comprehension of how Backbone works. Please check out the Backbone documentation and its examples for a quick introduction before checking back.

A pretty simple layered architecture emerged.

Our layered architecture

Before we take a stroll through the layers, let’s talk about the glue: the event dispatch.

The event dispatch

To let the components talk to each other in a nice and loose fashion, we needed some sort of event dispatch system. Luckily, it turns out Backbone supplies it for absolutely free through its Backbone.Events class:

window.dispatch = _.clone(Backbone.Events)

Using the event dispatch is easy: you subscribe to and trigger events with dispatch.on(eventName, callback) and dispatch.trigger(eventName, payload) respectively.
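For readers without Backbone at hand, a minimal stand-in with the same on/trigger surface looks roughly like this (the real dispatch is a clone of Backbone.Events, which also supports `off`, context arguments, and multiple trigger arguments):

```javascript
// Minimal pub/sub hub mimicking the slice of Backbone.Events we use
var dispatch = {
  _handlers: {},
  on: function (eventName, callback) {
    (this._handlers[eventName] = this._handlers[eventName] || []).push(callback);
  },
  trigger: function (eventName, payload) {
    (this._handlers[eventName] || []).forEach(function (cb) { cb(payload); });
  }
};

// Subscribing and triggering works just like in the rest of this article:
dispatch.on('WebSocketOpen', function () { console.log('socket is up'); });
```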

The communication layer

With event dispatch in place, we naturally started with the bottom layer: the communication layer.

A very simple Communicator class wound up looking like this:

class Communicator

  # The messages used in the protocol look like:
  #   {"com.comoyo.CommandName": {"key": value}}
  #
  # ...so we keep the namespace handy.
  commandNamespace: 'com.comoyo.'

  # Set up the web socket and listen for incoming messages
  constructor: (@server) ->
    @webSocket = new WebSocket(@server)
    @webSocket.onmessage = @handleMessage
    @webSocket.onopen = -> dispatch.trigger('WebSocketOpen')

  handleMessage: (message) =>

    # The message string is the message object's data property
    strMessage = message.data

    # Now, parse that string
    jsonMessage = JSON.parse(strMessage)

    # Grab the command name, i.e. the first root key (using Underscore.js)
    fullCommandName = _.keys(jsonMessage)[0]
    commandName = _.last(fullCommandName.split('.'))

    # Trigger the command in event dispatch, passing the payload
    dispatch.trigger(commandName, jsonMessage[fullCommandName])

  sendMessage: (commandName, messageData) =>

    # Full command name with namespace
    fullCommandName = @commandNamespace + commandName

    # Build the message
    jsonMessage = {}
    jsonMessage[fullCommandName] = messageData

    # Serialize the object into a JSON string...
    strMessage = JSON.stringify(jsonMessage)

    # ...and send it!
    @webSocket.send(strMessage)

The Communicator simply encapsulates the web sockets, wrapping them in a couple of simple messaging methods. This should make it easy as a breeze to swap our beloved WebSockets with SSE or other long polling techniques, in case we want to provide working fallbacks in old browsers, for instance.

The protocol controllers

With our communicator in place, we can set up controllers handling the various parts of communication with the backend protocol. The controllers communicate through the Communicator: they listen to incoming messages through the event dispatch and send messages with Communicator.sendMessage calls.

A large part of the protocol we implemented relies upon sequences of messages. The typical pattern consists of the following steps:

  1. We subscribe to a resource
  2. We’re notified of a changed resource
  3. We request the changed resource
  4. We receive the resource
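The four steps above can be wired up as a chain of event handlers. In plain JavaScript terms it looks roughly like the sketch below; the command names are illustrative, not the actual protocol, and `dispatch` and `communicator` are assumed to exist as described earlier:

```javascript
// Sketch of the subscribe/notify/request/receive chain.
// Each incoming event triggers the outgoing message for the next step.
function wireResourceSync(dispatch, communicator, resource) {
  // 1. Subscribe to the resource once the socket is up
  dispatch.on('WebSocketOpen', function () {
    communicator.sendMessage('SubscribeCommand', { resource: resource });
  });
  // 2. When notified of a change, 3. request the changed resource
  dispatch.on('ResourceChangedNotification', function (data) {
    communicator.sendMessage('GetResourceCommand', { id: data.id });
  });
  // 4. Receive the resource and hand it on (to the store, typically)
  dispatch.on('GetResourceResponse', function (data) {
    // e.g. store.addItems(resource, data.items)
  });
}
```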

Let’s take a look at our login controller for an example of how to implement this sequential protocol.

A simplified version of our login process should clarify the pattern. It consists of only two simple sequential steps:

  1. Client registration
  2. Account login

To tie these together, we could build a controller looking like this:

class LoginController

  # Let the initializer listen for relevant events,
  # delegating them to its respective event handlers.
  initialize: ->
    dispatch.on('WebSocketOpen', @sendClientRegistrationCommand, this)
    dispatch.on('ClientRegistrationResponse', @handleClientRegistrationResponse, this)
    dispatch.on('AccountLoginResponse', @handleAccountLoginResponse, this)

  # The client registration command is a simple message
  # with some metadata to initiate the connection
  sendClientRegistrationCommand: ->
    payload =
      clientInformation:
        clientMedium: 'web'
    communicator.sendMessage('ClientRegistrationCommand', payload)

  # Our client registration response handler should store
  # required metadata and send an account login command
  handleClientRegistrationResponse: (data) ->
    # Store client registration data somewhere handy

    # Send account login command
    payload =
      userInformation:
        username: "myrlund"
        password: "ihazpassword"

    communicator.sendMessage('AccountLoginCommand', payload)

  # Handle the response to our AccountLoginCommand
  handleAccountLoginResponse: (data) ->
    # Store session keys and user data somewhere handy

    # Check response data to see if login was successful
    if data.loggedIn
      beHappy()
    else
      tryAgain()

Simply put, the various commands listen to responses and fire the next step as soon as the response is handled. It’s easy to implement, and the message sequences are easily understood from the listener declarations in the controller initializer.

The storage layer

We want a persistence layer capable of two things: storing data received from the backend controllers, and talking to the front-end part of our app. Since we’ve already looked at setting up the controller layer, let’s start with the former: storing data from the backend controllers.

Storing data

In case you’re not familiar with localStorage, fear not: you don’t need to be. It’s as simple a key-value store as they come.

localStorage.setItem('ourKey', 'someValue')
localStorage.getItem('ourKey') // => 'someValue'

Next, we run into a small problem in that localStorage doesn’t support storing objects. It is, however, pretty good at strings. A simple solution is to serialize our objects into the store. To make it so, we encapsulate the localStorage in an event-driven storage object:

class Store
  
  # We keep an in-memory cache to speed up reading of data
  data: {}
  
  # Set this.store to localStorage and load cache
  constructor: (@schemas) ->
    @store = localStorage
    @loadCache()
  
  # We'll call addItems from the backend controllers.
  # 
  # items: object, indexed on unique id.
  #   ex. {"1": {content: "Foo"}, "2": {content: "Bar"}}
  addItems: (schema, items) ->
    
    # Add or overwrite existing items
    _.extend(@data[schema], items)
    
    # Write cache to store
    @save()
  
  # Iterates over keys in cache, serializing
  save: ->
    for key in _.keys(@data)
      @store.setItem(key, JSON.stringify(@data[key]))
  
  # Populates cache with stored data
  loadCache: ->
    for schema in @schemas
      @data[schema] = @fetch(schema) || {}
  
  # Fetches object from store
  fetch: (schema) ->
    JSON.parse(@store.getItem(schema))

Talking to Backbone

Although Backbone is designed for AJAX REST APIs out of the box, it supports any kind of backend through an extremely simple synchronization interface. One simply sets Backbone.sync to a function that in some way can handle the basic CRUD operations – create, read, update and delete.

Let’s add a sync method to our store, along with some helper methods.

Note: We’re using a read-only API, so we don’t really handle writing to the store. It should, however, be easy enough to implement by triggering a change event resulting in an appropriate backend call.

class Store
  
  # ...
  
  # Attaches to Backbone.sync
  # 
  # method:  either "create", "read", "update" or "delete"
  # model:   the model instance or model class in question
  # options: carries callback functions
  sync: (method, model, options) =>
    resp = false
    schemaName = @getSchemaName(model)
    
    # Switch over the possible methods
    switch method
    
      when "create"
        # In our case, we never create models directly
        console.log "This shouldn't happen."
        
      when "read"
        # Read one or all models, depending on whether id is set
        resp = if model.id
          @find(schemaName, model.id)
        else
          @findAll(schemaName)

        unless resp
          return options.error("Not found.")

      when "delete"
        # Perform a fake destroy
        resp = true
    
    # Fire the appropriate callback
    if resp
      options.success(resp)
    else
      options.error("Unknown error.")
  
  # Simple getters for one or all models in a schema
  find: (schema, id) ->
    @data[schema] && @data[schema][id]
  findAll: (schema) ->
    _.values(@data[schema]) || []
  
  # Models either have a schema name attached to themselves
  # or through their collections
  getSchemaName: (model) ->
    if model.schemaName
      model.schemaName
    else if model.collection && model.collection.schemaName
      model.collection.schemaName

# Export and attach to Backbone.sync
this.store = new Store(['contacts'])
Backbone.sync = this.store.sync

Overriding Backbone.sync allows Backbone to talk to our store, but web sockets are a two-way street, and we still don’t have any way of notifying our Backbone collections of incoming data.

So, on to the actual talking-to-Backbone part… A simple way to allow collections to subscribe to changes to schemas is to trigger an event when adding items from the backend. Let’s extend our addItems method.

addItems: (schema, items) ->
    
    # Add or overwrite existing items
    _.extend(@data[schema], items)
    
    # Write cache to store
    @save()
+   
+   # Fire a notification passing the changed ids
+   payload = {}
+   payload[schema] = _.keys(items)
+   dispatch.trigger("store:change:#{schema}", payload)

Here is an example of a Backbone collection integrating with this event mechanism:

class ContactCollection extends Backbone.Collection
  
  model: Contact
  
  # We don't use URLs in our protocol, but 
  # Backbone requires that we set it...
  url: ''
  
  # We'll need to define a schema name for use
  # in the Backbone.sync method of our store
  schemaName: "contacts"
  
  initialize: ->
    # Bind to the store's appropriate change event
    dispatch.on("store:change:#{@schemaName}", 
                @updateContacts, this)
  
  # Called whenever new data is inserted into 
  # the data store.
  updateContacts: (data) ->
    
    # The contacts property of the passed data is
    # an array of ids of the changed contacts
    contactIds = data[@schemaName]
    
    for contactId in contactIds
      # Check if the contact exists
      if contact = @get(contactId)
        # If it exists, simply _set_ its updated properties
        contact.set(store.find(@schemaName, contactId))
      else
        # Otherwise, create it and add it
        contactData = store.find(@schemaName, contactId)
        @add(new Contact(contactData))

That’s it!

That should cover integrating Backbone with an arbitrary web socket protocol.

Note that this approach is especially well suited to our particular use case, and there are probably both easier and better ways to integrate with other protocols. However, our approach should be generic enough to be fitted to any sensible situation.

Issues

On our journey, we ran into some issues that you might do well to keep in mind if you’re trying to do something similar to us.

LocalStorage is completely unencrypted

Without a backend rendering our HTML, we don’t have any safe place to store user credentials. We’re also handling sensitive data, which shouldn’t be left stored in plain text in any computer’s web cache.

We took some simple measures to secure our users’ data:

  1. When a user logs out, we clear the entire localStorage.
  2. In the login form, we present an option for whether the computer in use is a public computer. If so, persist as little as possible.
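A sketch of those two measures might look like this. The function names are our own, and the storage backend is passed in as a parameter, so the same code works with the browser’s localStorage or any object with the same `setItem`/`clear` surface:

```javascript
// Illustrative sketch of the two measures, not the actual app code.

// 1. On logout, drop everything we ever cached for this user
function logOut(storage) {
  storage.clear();
}

// 2. On a public computer, keep the data in memory only
function persistIfPrivate(storage, key, value, isPublicComputer) {
  if (isPublicComputer) return false;
  storage.setItem(key, JSON.stringify(value));
  return true;
}
```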

We’ve been looking into some recently matured client-side crypto libraries, and the possibilities of encrypting the store. However, there is no safe place to store the encryption key client-side, requiring us to get it from the server for every page reload, in turn requiring authentication. This stackoverflow thread pretty much sums it up.

Resources

  1. Introducing WebSockets by HTML5 Rocks – a great intro to using WebSockets in HTML5
  2. backbone-localstorage.js by documentcloud – a simple adapter for using Backbone with localStorage
  3. Understanding Backbone by Kim Joar Bekkelund – a great tutorial on Backbone-ifying a typical jQuery app

Edit: Fixed small error in the Communicator code example.

Comments