Looking back at 2016

A single post in the whole of 2016... It is not that I did not do anything that year, quite the opposite. The one thing I did not do is write about the things that kept me busy. So here it goes: my summary of 2016.

Qt & QML

In 2016 I worked as a freelancer on multiple projects involving "control system software for milling machines". I cannot say anything more about that, apart from the fact that it involves the development of an application in C++ using Qt/QML. The following description is from the Wikipedia article on QML:

QML is mainly used for [...] applications where touch input, fluid animations (60 FPS) and user experience are crucial. QML documents describe an object tree of elements [both] graphical (e.g., rectangle, image) and behavioral (e.g., state, transition, animation). These elements can be combined to build components ranging in complexity from simple buttons and sliders, to complete [...] [applications].

The UI of the application, the "view", is written in the JavaScript-like QML language, often in a declarative way. The "models" could be written in QML, but more often than not they are implemented in C++ and Qt. Hooking up view and model can be done both in QML and in C++.

My first Qt project was in 2003 and I have been using it ever since. I am selling Qt short here, but Qt, without the QML part, allows you to build the desktop applications you know from the late 90s and early 2000s. You have your main window with standard menus, buttons, dialogs, etc. You can style these standard UI elements, but that is not necessary to get a good-looking, recognizable application.

QML is different: standard QML provides you with a blank canvas to place your UI elements on, with no styling whatsoever. You do not (have to) place a predefined button on that canvas. You create a rectangle on the canvas, create a so-called touch area in that rectangle and add behavior to be executed when a touch is registered. That rectangle does not have any styling associated with it, so you have to style it yourself. Styling can be done by graphical designers - no programming knowledge required. It probably even should be done by a designer, to get a UI that is easy on the eye and consistent.

Apart from the great learning experience, QML opens up new areas for me to work in as it can be used to develop desktop applications, mobile applications and embedded applications.

ClearCase & Git

The aforementioned Qt/QML application had to be developed at my customer's site, where code is version controlled using ClearCase. Sigh... On the positive side, ClearCase can version control your code. I do not want to turn this overview into a rant about ClearCase, but the model behind it and the tools to use it "take some getting used to". To name some objective drawbacks:

  • files are version controlled, there are no change sets;
  • a file has to be explicitly checked out before modification;
  • it requires access to a "ClearCase server" for VC operations.

Maybe tooling exists that fixes these drawbacks, but no such tools were available at the site where I worked. There were also other ClearCase-related drawbacks that were specific to the site:

  • the working area that contains the source code is located on network drives, which tremendously slows down compilation times;
  • version control actions could not be executed on the development machine, as it did not have access to the ClearCase server: you had to go to another machine to check out & check in files on the network drives.

After a few weeks I started to use the clearcase.el package to check in and check out files from within Emacs. That way I did not have to leave my IDE (and visit the ClearCase GUI tooling) to be able to change a file. However, if checking in and checking out were all you needed, Git would probably never have been developed. There you have it: I wanted to use Git.

I could not use Git on the network drives as it would have tainted the ClearCase "view". Fortunately a bridge between ClearCase and Git exists that allows you to sync ClearCase repos with locally stored Git repos, namely git-cc. git-cc is a Python package that uses the command-line ClearCase and Git tools to do the work. It allowed me to store my code locally, version controlled by Git, and to sync from (and to) ClearCase when necessary. Working from local storage also sped up the build times enormously: a full rebuild went from 15 minutes to 1 minute and 30 seconds.

The local Git workflow spread to other developers as well, and we presented our results to management. It turned out the company had a Bitbucket license, and management allowed our project to migrate from a ClearCase workflow to a Bitbucket & Git one. That will not only improve our personal workflows but also our collaboration. For example, with ClearCase our code reviews were done using file-compare tools and copy-and-paste in emails. Bitbucket allows a web-based review process, with comments and replies.

Although our project uses Bitbucket, we have to keep using ClearCase as the final store of code. This means the sync between ClearCase and Bitbucket will remain necessary. I made some changes to git-cc to further improve the sync workflow and merged these changes upstream. This was a major moment for me: my first open-source contribution that was not paid for by a company and that is used outside the company.

Bitbucket & Jenkins

As mentioned, ClearCase does not have the concept of a changeset. This meant that as a developer, you had to manually kick off a Jenkins job that built the code and ran all tests. We were able to set up a Jenkins job that was triggered when changes in ClearCase occurred, but the result was not entirely satisfactory: it could happen that a build was triggered before all modifications were checked in. Such builds seldom succeeded.

As Bitbucket & Git were being introduced, we had another look at the automated builds. Using the right combination of Bitbucket and Jenkins plugins, each push to Bitbucket now automatically triggers a build. This sounds easy, but in practice it involved a lot of trial and error. Especially in the Jenkins plugin ecosystem it can be difficult to find the plugin you need and, when you think you have found it, to configure it correctly. Not all Jenkins plugins are properly documented, and sometimes it turns out that a plugin has been superseded by another one.

Another problem we faced was that the setup of our system test environment wasn't (and still isn't) a one-click affair. The system test environment that Jenkins used for the ClearCase builds was set up by another department. It took quite some communication, and unfortunately also some miscommunication, to reproduce that setup at another site. We documented each step needed to recreate this setup, but ideally that process would be fully automated.

Google Mock

The current code base has a lot of unit tests, although some of these "unit" tests are really integration tests. I do not mind that too much unless it makes testing difficult. What I often see is that an integration test cannot exercise specific code paths in the Class Under Test: the indirection at play makes it hard to inject certain behavior into the dependencies. This problem was acknowledged by the team, and some of my team members started using manually coded stubs for the dependencies of the Class Under Test.

In a previous project I used Google Mock, a C++ framework that allows you to assign expected behavior to C++ classes. So I asked for time to demonstrate its applicability to the current code base. That demonstration turned out really well, and since then Google Mock has been part of the tool chest of the developers on that project.
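To illustrate the idea - injecting behavior into a dependency that is hard to trigger for real - here is a minimal sketch using Python's stdlib unittest.mock as a stand-in for Google Mock. The names (MillingController, the spindle dependency) are hypothetical, not from the actual project:

```python
from unittest.mock import Mock


class MillingController:
    """Class Under Test: receives its spindle dependency via injection."""

    def __init__(self, spindle):
        self.spindle = spindle

    def start(self, rpm):
        # The code path we want to test: refuse to start when the
        # spindle reports an error.
        if self.spindle.has_error():
            return False
        self.spindle.set_speed(rpm)
        return True


# Inject the error behavior that would be hard to provoke in a real spindle.
spindle = Mock()
spindle.has_error.return_value = True

controller = MillingController(spindle)
assert controller.start(1200) is False
spindle.set_speed.assert_not_called()  # the error path skips the speed command
```

The same shape - create a mock, program its expected behavior, verify the interactions afterwards - is what Google Mock provides for C++ classes.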

Pharo & Smalltalk

Now for the non-work-related things. In my one post of 2016, I mentioned I was looking into Pharo, an open-source implementation of Smalltalk, which is both a programming language and a development environment. That post outlined the struggles I had with the language and environment, which made me doubt whether it was a wise decision to invest time in it. At the end of the post I stated that I had enrolled in a 7-week online course on Pharo to help me decide whether I should continue that investment.

I finished the course, with an "Attestation of Achievement" :) This does not mean I am fluent in Smalltalk. But it did show me the power of viewing your application as a live environment. To mention just one thing that has already spoiled me: Pharo allows you to pause your application, go back to an earlier stack frame, modify code and data, and proceed from that modified frame. That is very different from an environment that requires repeated (lengthy) builds and restarts of program runs to reproduce an error...

Is it wise to invest more time in Pharo? To be honest, I do not think I can put it to use in a business setting to the extent I was able to use Python when I learned it in the early 2000s, at least not in the short run. I found a comment on the Slashdot article "Can learning Smalltalk make you a better programmer?" that eloquently explains my current position on Smalltalk:

I tend to recommend Lisp, Smalltalk and Haskell as languages to train how you think about programming. A basic grasp of these three does wonders for how you think about programming, at least for high level stuff. There is a big difference between languages which help train your brain, and languages which help you get stuff done. There is considerable overlap, of course, but by sticking only to languages which get stuff done you limit your capacity to think about your programming.

And Smalltalk does show me new ways of doing things. To give a really simple example, I seldom use the debugger when I am developing in Python. Especially for my own code I mostly rely on print statements. The power of the Smalltalk debugger convinced me to keep the console-based Python debugger pudb within reach, and that did improve my workflow.
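Keeping a debugger "within reach" is easy in modern Python: since Python 3.7, the built-in breakpoint() consults the PYTHONBREAKPOINT environment variable on every call, so you can route it to pudb (PYTHONBREAKPOINT=pudb.set_trace) or disable it entirely without touching the code. A small sketch, with a hypothetical function:

```python
import os

# "0" disables breakpoint() entirely - handy for demonstrating the
# mechanism non-interactively; set PYTHONBREAKPOINT=pudb.set_trace
# in your shell to drop into pudb instead.
os.environ["PYTHONBREAKPOINT"] = "0"


def running_total(values):
    total = 0
    for v in values:
        breakpoint()  # no-op here; would open pudb per the env variable
        total += v
    return total


print(running_total([1, 2, 3]))  # prints 6
```

The nice part is that the breakpoint call can stay in the code while the environment decides whether, and with which debugger, it fires.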

So I am continuing my journey with Pharo. Currently I am working my way through the book Enterprise Pharo, which is geared towards web development using Pharo. That brings me to the next section.

Elm

Elm is a framework/language for front-end web development. As a developer I want to be well-rounded, and front-end web development is one of the things I feel is missing from my resume. I have worked as a (Django) web developer, but mostly on the back-end. And regardless of how good a job you do on the back-end, more often than not it is the front-end that makes people go "wow". In short, I have always been a bit jealous of front-end developers.

So the lure is there, but the technology stack of HTML, CSS, JavaScript, etc. never was that alluring to me. The Elm Architecture offers a much higher-level view that consists of (1) a model (state), (2) a view to display the model, and (3) an update function to update your model state. In a way, it is very similar to QML development, which makes it all come full circle.
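The model/view/update triple above can be sketched in a few lines of Python (Elm itself is a typed functional language compiling to JavaScript; this only mirrors the shape of the architecture, with a toy counter as the model):

```python
# Model: the entire application state in one place.
initial_model = {"count": 0}


# Update: a pure function from (message, model) to a new model.
def update(msg, model):
    if msg == "Increment":
        return {"count": model["count"] + 1}
    if msg == "Decrement":
        return {"count": model["count"] - 1}
    return model


# View: a pure function from model to its presentation
# (a string here; HTML in Elm).
def view(model):
    return f"count = {model['count']}"


# The runtime folds incoming messages over the model and re-renders.
model = initial_model
for msg in ["Increment", "Increment", "Decrement"]:
    model = update(msg, model)

print(view(model))  # prints: count = 1
```

Because update and view are pure functions of the model, the whole UI becomes predictable and easy to test, which is a large part of Elm's appeal.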

I read the docs, worked my way through some toy examples, and this whetted my appetite enough to enroll in a two-day online Elm workshop.

After that, I definitely wanted to spend more time on it, but it turned out that with all the other things that kept me busy, I was spreading myself thin. The teacher of the online workshop, Richard Feldman, is writing the book Elm in Action, which should be finished in the summer of 2017. I hope to revisit Elm by then, unless Pharo allows me to scratch my (front-end) web development itch.

New Year's resolution

This overview turned out a lot longer than I initially expected. And that is with leaving out most of the details and even skipping some things... As mentioned, I worked on a lot of things; I just did not blog about them. Better file this one in the "missed opportunities" category. Well, it does give me an idea for another New Year's resolution...

