Struggling with Smalltalk and Pharo

Posted 2016-05-11 23:35:00   |  reSt   |  More posts about smalltalk pharo

In April of 2015, I came across the release announcement of Pharo 4.0 on Hacker News. Pharo is an open-source implementation of Smalltalk, a programming language and development environment.

I first heard about Smalltalk in the early 1990s. Back then, I studied Computer Science and one professor mentioned how Smalltalk allowed him to model and simulate systems with tremendous ease. By the way, he was a professor of Mechanical Engineering :). Since then, I never worked with Smalltalk or even met someone who did. My languages of choice became C, C++ and later Python.

Every once in a while I came across an article or other online reference in which someone looked back on it favorably. Take for example this tweet from Kent Beck from December 20, 2012:

great joy today coding in smalltalk with an old friend. the design space is
HUGE compared to Java, PHP, and C++.

and the reply from Ron Jeffries:

@KentBeck i miss smalltalk a lot. nothing like it

References such as those kept my interest in Smalltalk alive. It had become clear to me that Smalltalk was, or had been, something special. This answer to the question what is so special about Smalltalk says it all:

the highly interactive style of programming you experience in Smalltalk is
simpler, more powerful and direct than anything you can do with Java or C# or
Ruby... and you can't really understand how well agile methods can work until
you've tried to do extreme programming in Smalltalk.

Go and read the full answer: it gives an impressive list of innovations that Smalltalk introduced.

So when I was looking for a small project for the summer holidays of 2015, I decided to spend some time with Pharo.

First impressions

It quickly became clear that Pharo is a development and execution environment in one. Compare this to Visual Studio (VS), an Integrated Development Environment (IDE) that allows you to build software that runs under one or more flavours of Microsoft Windows: VS is the development environment and Windows is the execution environment. With Pharo, these environments are one and the same.

Another way of looking at it is that of Pharo being a Virtual Machine (VM) on which the code runs. But in contrast to the Java Virtual Machine (JVM) which remains more-or-less hidden from the user, the Pharo virtual machine is very visible to the user. The following screenshot shows several applications running inside the Pharo environment:


This has several consequences. Pharo applications will always run inside their own top-level Pharo window. Furthermore, because the Pharo environment has its own distinct look-and-feel, Pharo applications will not look native to the host OS [1].

And what about command-line applications? Apparently it is possible to run Pharo without a visible environment, so-called "headless" operation. I have not spent the time to find out how to do that and what it means for the development and execution of command-line applications.

What also became clear is that all development had to be done inside the Pharo environment. Everything that is done in the Pharo environment is stored in two special files: an image file and a changes file. I realized I would not be able to use the tooling I have grown accustomed to. This was not the most pleasant realization for an ardent command-line user like me.

This meant no Emacs, grep, find, sed etc. What about git? Nope, Pharo uses its own distributed Version Control System (VCS) [2]. I did find that surprising. To me open-source software development is very much a "standing on the shoulders of others" kind of activity. And here I have something that really wants to do it (everything?) its own way... But maybe "its own way" is better than what I have been doing, so I plodded along.

Pharo 4.0 relies heavily on the mouse. But I am the kind of developer that really likes to keep his hands on the keyboard, at all times. Of course there are keyboard shortcuts, but some of them are context-dependent: whether they work depends on the item that has focus.

Switching windows is possible using the keyboard shortcut ALT-TAB. This shortcut is often reserved by the host OS to switch applications, and in Pharo I did not find an easy way to bind window switching to another shortcut. All in all, I have to use the mouse a lot more than I would like.

There are more things I found cumbersome or "rough around the edges". For example, tabbing to the next UI element is inconsistent, and the text editor is rather basic when you are used to Emacs. But enough about that.

Development in Pharo

To help me on my way with Pharo, I started out with two books, viz. Pharo by Example and Dynamic Web Development using Seaside. I worked my way through them, well mostly [3], but it did not become clear to me what Smalltalk provides that makes it such a productive environment. That might be due to the size of the examples used. For example, the books show that you can interact with live objects and modify them. That might be nice for small examples, but in my experience, if an application throws an exception that is not handled, the state is such that the best thing to do is to close the application. All in all, I am not (yet?) sold on the feature of working with live objects.

This is made worse by the fact that the Pharo environment keeps feeling alien. The tools remain cumbersome to use; it is as if I am developing with one hand tied behind my back.

Let's go back to 2002, when I was a C/C++ developer who started spending some time with Python. Almost immediately the benefits of Python became clear: a batteries-included language that did not require compilation and that allowed me to develop small applications and scripts in no time. Compare that to the time I spent on Smalltalk, after which I am still left wondering what benefits it brings.

I have to acknowledge that my self-taught Python knowledge got a big boost when I started to work for a company that used Python professionally. From that moment on I had colleagues to consult and to learn from. However, a Smalltalk contract will be much more difficult to find.


Doubts started to creep in on whether I should keep investigating Pharo: the environment required me to leave behind familiar tools, it kept feeling alien, my learning progress stalled, and I still did not understand why it is supposed to be so productive.

Thanks to the internet, if you are looking for confirmation, you will find it. Just as you can find articles that praise Smalltalk, you can also find ones that criticize it, for example the C2 Wiki page Why is Smalltalk dead [4], the blog post What killed Smalltalk [5] and the (in)famous 2009 Rails Conference keynote by Robert C. Martin, What Killed Smalltalk Could Kill Ruby. To some of the objections I could already relate, viz. the lack of integration with the OS and the outside world in general.

The Smalltalk community appears small. The volume on the Smalltalk reddit and the Pharo developers mailing list is low, but to be honest, I might be looking at the wrong online channels. And a small community can still be very much alive. But the fact that several Smalltalk-related websites were not up-to-date (or had nothing new to tell for several years) did not instill much confidence. An example is the Seaside website, whose homepage shows "latest news" from 2013 and whose Success Stories page has a lot of dead links. Another example is the PharoCasts website, which contains screencasts of Pharo; its last entry is from September 2012...

What's next?

In the beginning of this post, I mentioned that I was looking for a small project for the summer holidays of 2015. Well, after the summer holidays I concluded that Smalltalk was not for me, but I did so reluctantly.

The following quote is from the website of Object Arts, which developed a Smalltalk implementation specifically for Microsoft Windows:

Smalltalk is dangerous. It is a drug. My advice to you would be don't try it;
it could ruin your life. Once you take the time to learn it (to REALLY learn
it) you will see that there is still nothing out there that can quite touch it.

There must be something to Pharo and I feel that I just do not get it, yet.

A month ago I learned that a Massive Open Online Course (MOOC) on Pharo would start at the beginning of May 2016, Live Object programming in Pharo. It is a 7-week course developed and given by Smalltalk developers and Pharo contributors. Among them is Stéphane Ducasse, one of the driving forces behind Pharo. The course looked like the ideal way to finally determine whether Pharo can be productive for me, so I enrolled.

At the time of writing, I just finished the first week, which introduced Pharo, the Smalltalk language and which ended with a screencast of a small programming exercise. I especially liked how the screencast showed, almost casually, some minor usage tips. Let's see how it goes in the following weeks!

[1]Dolphin Smalltalk is a Smalltalk implementation for Windows that allows you to develop applications that look native. There may be others.
[2]I know one can use Git with Pharo, but the standard way to do so is to use Monticello.
[3]To be honest, I came only half-way with both of them.
[4]The resulting Hacker News discussion contains some interesting comments.
[5]The resulting Hacker News discussion contains some interesting comments.


What makes development "agile"?

Posted 2015-08-28 21:10:00   |  reSt   |  More posts about development

Edit: Originally this post was titled "Agile Development as I interpret it"

Currently the company I work for, FEI, has several software-related positions open and in the last few months we interviewed software architects, team leads and project leads. The job descriptions mention a preference for people with experience in the area of "agile development". So when a resume mentioned that the applicant has experience in that area, I asked that person what, according to him or her, makes software development "agile". This blog post is about the answer I would give.

To me, "agile software development" means that you deliver user-facing functionality in a continuous sequence of short iterations [1]. Because of the length of each iteration, you have to limit the scope of the functionality you promise to deliver. If the scope is too broad, it will not fit time-wise.

Limiting the scope of functionality is not enough; you also have to limit the scope of design and implementation. In part this is due to the limited duration of each iteration. But as it is uncertain what functionality will have to be supported in coming iterations, designing and implementing code in this iteration for iterations to come can turn out to be a waste of time.

Limiting the scope of design and implementation does not mean you deliver sub-standard code. Unless the project is scrapped, you will build upon that code in the coming iterations and refactor it to support new functionality. So the code had better be in good shape, and remain in good shape, for you to keep up your pace in the iterations to come.

So that would have been my answer, I hope. When I compare my answer to the Agile Manifesto, I realize it is woefully incomplete. However, I do think that the elements of software development my answer touches upon are required for development to be called agile.

[1]Personally I prefer iterations of two weeks, and three weeks at the most.


The Nature of Software Development

Posted 2015-03-04 20:38:00   |  reSt   |  More posts about development books

In this blog post I talk about the book The Nature of Software Development, written by Ron Jeffries and published in February of this year. To quote Wikipedia, Ron Jeffries is one of the 3 founders of the Extreme Programming (XP) software development methodology in 1996. My introduction to XP was about 4 years later. His website was one of the first websites I mined for information about XP. He literally has a lifetime of software development experience and whenever he writes, blogs or tweets, I take note.

What the book is about

Every software developer knows that building a product can be a painful experience. I am not so much talking about the technical aspects but more about the questions of what to build and when. There are a lot of things that can make a seemingly simple job difficult, e.g. requirements that are unclear or change, or deadlines that turn out to be unreachable. Mr. Jeffries wants to show us that there is a safe way through the field of lava that a software development project might resemble.

In the Introduction of his book, Mr. Jeffries states the following:

Come along with me, and explore how we can make software development simpler by focusing on frequent delivery of visible value.

The gist of the book is a familiar one, namely that we should deliver value to the customer feature by feature. The following quotes are from Chapter 2, "Value Is What We Want":

We need to build pieces that make sense to us, and to our users. These are often called minimal marketable features (MMFs).


we [...] benefit from providing business features at an even finer grain than the usual MMF.

The book discusses these notions and their impact on other aspects of software development, such as planning, design and quality.

My $0.02

A lot, if not most of the things in the book you will have heard before. But the value of The Nature of Software Development is in the way it presents their combination to the reader. When you read the book, it feels like Mr. Jeffries is sitting next to you, as if you are having a conversation. If you are open to the approach (to software development) he advocates, the book will invigorate you to tackle that difficult software project at work.

However, if you are skeptical about his approach, I can imagine that Mr. Jeffries alone will not convince you. That it sounds so deceptively simple does not help. He kind-of addresses that simplicity in Chapter 13, "Not that simple". There he acknowledges that "real business" will complicate things, but that "it's all about deciding what we want, guiding ourselves toward what we really want". True, but the "we" can be a large "we": stakeholders such as developers, architects, project managers and customers have to be on board also. You will have to overcome any skepticism by any of these parties, so prepare to encounter the "that will never work for our situation".

I have seen software that was late due to features that were "really necessary for the first release" but which needed a lot of rework afterwards. I have also seen software that was late end up in a drawer because apart from being late, it was also unusable. To avoid that I prefer to build an application feature by feature, slice by slice. The Nature of Software Development confirms my personal experience and beliefs with a clarity and completeness my own thoughts on this subject lack. I can thoroughly recommend it.


GitHub issues and Emacs Lisp unit tests

Posted 2014-12-12 22:30:00   |  reSt   |  More posts about emacs unittest

In my current project I spend a significant amount of time reviewing other people's code. Progress on those pull requests is not always continuous and it is easy to lose track of the ones that require my attention. To avoid that, I keep a list of these pull requests in a separate org-mode file in Emacs. Then, with the right key press(es), my Agenda shows me the ones I need to have a(nother) look at.

When I add a pull request to the org-mode file, I also copy the title and URL from the GitHub website. This proved to be cumbersome and error-prone, so I wrote a small Lisp package to retrieve that information automatically. Read on for details on its development and the use of automated tests in Emacs Lisp.

You can find the code of the aforementioned Lisp package in my github-query repo at Bitbucket.

General idea

I wanted a function that asks for an issue number and automatically retrieves the title and the URL of that issue [1]. This proved to be relatively easy to do as

  1. GitHub can be accessed through a Web API that is nicely documented.
  2. Emacs comes with the url package that you can use, among others, to post requests over http and https, and receive responses.
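The API route involved can be sketched in a few lines of Python. This is for illustration only: github-query itself is Emacs Lisp, and the endpoint below follows GitHub's documented `/repos/{owner}/{repo}/issues/{number}` route.

```python
import json
import urllib.request


def issue_url(owner, repo, issue_number):
    # Build the API route for a single issue.
    return "https://api.github.com/repos/%s/%s/issues/%d" % (
        owner, repo, issue_number)


def get_issue(owner, repo, issue_number):
    # Perform the actual HTTP request; this needs network access.
    with urllib.request.urlopen(issue_url(owner, repo, issue_number)) as f:
        return json.load(f)


print(issue_url("bbatsov", "projectile", 1))
# https://api.github.com/repos/bbatsov/projectile/issues/1
```

The response is a JSON object whose fields (number, title, html_url, ...) map naturally onto an Emacs Lisp association list.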

Of course, the devil is in the details and it took me some time to develop the following function:

(defun github-query-get-issue (owner repo issue-number)
  "Return issue ISSUE-NUMBER of GitHub repo OWNER/REPO.

This function returns the response as an association list; you
can also use `github-query-get-attribute' to extract a single
attribute from it."
  ;; (body omitted)
  )

The following snippet shows how you can use that function to retrieve the url of issue 1 from the GitHub repo bbatsov/projectile:

(let ((response (github-query-get-issue "bbatsov" "projectile" 1)))
  (let ((url (github-query-get-attribute 'html_url response)))
    (message "%s" url)))

The nicest thing was that I used automated unit tests during the development of the package, something that I had never done for Lisp code [2].

Automated tests in Emacs Lisp

Emacs comes standard with the library ert, "a tool for automated testing in Emacs Lisp". That library provides the macro "ert-deftest", which allows you to define a test as an ordinary function. It also provides several test assertions such as "should" and "should-not". The following test from github-query shows how it can be used:

(ert-deftest github-query--get-issue-retrieves-correct-response()
  (let ((response (github-query-get-issue "bbatsov" "projectile" 1)))
    (should (equal 1 (github-query-get-attribute 'number response)))
    (should (equal "Obey .gitignore, .bzrignore, etc."
                (github-query-get-attribute 'title response)))
    (should (equal ""
                (github-query-get-attribute 'html_url response)))))

Being able to run automated tests helped me enormously. Without them I find it easy to end up in a spot where I am thinking "but this was working before, or wasn't it?" [3], especially with the interactivity that the REPL provides.

I did have some minor issues with ert. There are two modes in which you can run tests, viz.

  • in interactive mode, where you execute the tests in the current Emacs process, and
  • in batch-mode, where you start a new Emacs that runs your tests and exits.

Working in interactive mode means that you always have to explicitly reload the code under test after you have changed it. Fail to do so and ert uses the code as it was during the previous run. To avoid that explicit reload, I use ert in batch-mode: then I know for sure that all code under test is (re)loaded. Unfortunately, in batch-mode it is more difficult to specify which tests to run, and some things can only be done in interactive mode, for example running only the tests that failed during the last run.
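For reference, a batch-mode run looks something like this (the file names are examples for the github-query package; adjust them to your own layout):

```shell
# Start a fresh Emacs, load ert, the code under test and its tests,
# then run all tests and exit with a non-zero status on failure.
emacs -batch -l ert \
      -l github-query.el -l github-query-test.el \
      -f ert-run-tests-batch-and-exit
```

Because every run starts a fresh Emacs, there is no stale state from previous runs.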


Well, ert is here to stay for me. I cannot imagine doing Emacs Lisp development without automated tests anymore (not that I develop a lot of Emacs Lisp code :).

I do find ert a bit cumbersome to use in batch-mode, but on the positive side, it forces me to have quick unit tests. There is another test runner for ert, ert-runner, that seems to make it easier to specify which tests to run in batch-mode. I will have a look at that one.

[1]In my current project, I am part of a team that works in a single (private) repo.
[2]Well, not really the first time. I once worked on a Lisp package to run all functions in an Emacs Lisp file whose names started with test_. I bootstrapped that functionality during its development. However, that code was neither completed nor published.
[3]For me this is not limited to Emacs Lisp, it holds for any programming language.


Python unittests from Emacs

Posted 2014-09-01 20:10:00   |  reSt   |  More posts about python emacs unittest

At the company I currently work for, most of my coworkers use PyCharm to develop the Python application we are working on. I tried PyCharm several times and although I can understand why it is so popular, I still prefer Emacs :) One of the nice PyCharm features is the functionality to run the unit test the cursor resides in. So I decided to support that functionality in Emacs and in this post I describe how. You can find the code I developed for that in my skempy repo at Bitbucket.

General idea

The general idea to be able to run the unit test "at point" was simple:

  1. Write a Python script that, given a Python source file and line number, returns the name of the unit test in that file at that line.
  2. In Emacs, call that Python script with the file name of the current buffer and the line number at point to retrieve the name of the unit test.
  3. In Emacs, use the compile-command to run the unit test and display the output in compilation mode.

It is easy to run a specific unit test using standard Python functionality, e.g. the command:

$> python -m unittest source_code.MyTestSuite.test_a

executes test method test_a of test class MyTestSuite in the module source_code.

I wanted to have all the complexity in the Python script, so the output of the Python script had to be something like:


which the Emacs Lisp code could then pick up to build the compile-command that Emacs should use. This idea resulted in the Python package skempy and the command-line utility skempy-find-test.


The following text, which is from the skempy README, explains how to use skempy-find-test:

$ skempy-find-test --help
usage: skempy-find-test [-h] [--version] file_path line_no

Retrieve the method in the given Python file and at the given line.

positional arguments:
  file_path   Python file including path
  line_no     line number

optional arguments:
  -h, --help  show this help message and exit
  --version   show program's version number and exit

Assume you have the following Python file under tests/:

import unittest

class TestMe(unittest.TestCase):

    def test_a(self):
        print "Hello World!"

The following snippet shows the output of skempy-find-test on that Python file at line 7, which is the line that contains the print statement:

$ skempy-find-test tests/ 7

Emacs integration

The root of the repo contains the Emacs Lisp file skempy.el, which provides a function to retrieve the test method at point and executes that test as a compile command:

(defun sks-execute-python-test ()
  (interactive)
  (let ((test-method (shell-command-to-string (format "skempy-find-test %s %d" (buffer-file-name) (line-number-at-pos)))))
    (compile (concat "python -m unittest " test-method))))

If you bind it to a key then running the test at point is a single keystroke away, e.g.:

(add-hook 'python-mode-hook
          '(lambda () (local-set-key [C-f7] 'sks-execute-python-test)))

Implementation details

Initially I wanted to parse the Python file that contains the unit test, reading the file line by line and using regular expressions to do some pattern matching. You might know the quote [1]:

Some people, when confronted with a problem, think "I know, I'll use regular expressions." Now they have two problems.

Indeed, before too long my spike in this direction was becoming overly complex.

I searched for another approach, which quickly led to the Python ast module. This module "helps Python applications to process trees of the Python abstract syntax grammar". In other words, it helps you parse Python files.

To parse a Python file, I used the following exports of the ast module:

  • function ast.parse to create a tree of syntax nodes of a given Python file;
  • class ast.NodeVisitor which implements the visitor pattern to inspect the tree of nodes.
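Before looking at the real code, here is a minimal toy example of these two building blocks (my own illustration, not skempy code):

```python
import ast


class FunctionLister(ast.NodeVisitor):
    """Collect the names of all function definitions in a module."""

    def __init__(self):
        self.names = []

    def visit_FunctionDef(self, node):
        # Called by visit() for every "def" statement in the tree.
        self.names.append(node.name)
        self.generic_visit(node)


source = "def f():\n    pass\n\ndef g():\n    pass\n"
lister = FunctionLister()
lister.visit(ast.parse(source))
print(lister.names)  # ['f', 'g']
```

The call to generic_visit makes the visitor descend into the node's children, so nested definitions are found as well.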

Roughly speaking, each syntax node represents a statement and contains additional information, such as the line number of that statement. When you call ast.NodeVisitor.visit and pass it the tree of nodes, visit calls the appropriate ast.NodeVisitor method for each node. If you want specific behavior for a certain node type, you override the method for that node type. This resulted in the following code:

class LineFinder(ast.NodeVisitor):

    def __init__(self, line_no):
        self.line_no = line_no
        self.class_name = ""
        self.path = ""

    def visit_ClassDef(self, node):
        self.class_name = node.name

        if node.lineno <= self.line_no:
            self.path = self.class_name

        self.generic_visit(node)

    def visit_FunctionDef(self, node):

        max_lineno = node.lineno
        for statement_node in node.body:
            max_lineno = max(max_lineno, statement_node.lineno)

        if node.lineno <= self.line_no <= max_lineno:
            self.path = "%s.%s" % (self.class_name, node.name)
            if not self.class_name:
                # module-level function: there is no class name to prefix
                self.path = node.name


def get_path_in_code(source_code, line_no):

    tree = ast.parse(source_code)
    line_finder = LineFinder(line_no)
    line_finder.visit(tree)

    return line_finder.path

This code does not support all possible edge cases but it supports the use cases I currently have, which is enough for me.

Making it complete

The ast code alone is not enough. For example, the previous code snippet only returns a class and method name. That is not enough for the Python unit test runner, which wants a dotted module path as well. So we had to go from the file path that is passed to skempy-find-test to the module path that python -m unittest expects.
It was easy to support this. You can find the complete code including unit tests, documentation, setup etc. in my skempy repo at Bitbucket. If you want to try it out, please have a look at the README, which explains how to install it.
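That mapping from file path to module path can be sketched as follows (my own illustration, not skempy's actual code):

```python
import os


def module_path(file_path):
    # Drop the .py extension and replace path separators by dots,
    # e.g. "tests/test_me.py" becomes "tests.test_me".
    root, _ = os.path.splitext(file_path)
    return root.replace(os.sep, ".")


print(module_path("tests/test_me.py"))  # tests.test_me
```

Combined with the class and method name from the ast code, this yields the dotted path that python -m unittest accepts.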

[1]This is the quote as you might know it, and I only use it in jest. The actual quote only warns against the overuse of regular expressions, as explained in this post on Coding Horror.


Emacs configuration with Cask

Posted 2014-05-26 22:16:00   |  reSt   |  More posts about emacs cask

This post is about how I use Emacs package management in combination with Cask to manage my personal Emacs configuration. If you just want to have a look at that configuration, you can find it in my emacs-config repo at Bitbucket. Read on if you want to know how I got there...

DIY package management

When you use Emacs, you will rely on a whole plethora of Emacs packages that do not belong to the default installation. Until October of last year, I used a Makefile to retrieve those external Emacs packages. The following snippet is from that Makefile and shows how I retrieved package ace-jump-mode:

externals/ace-jump-mode:
        - rm -rf externals/ace-jump-mode
        cd externals ; git clone git://

The Makefile contained a lot of awfully similar rules to install the other external packages I relied on. Ideally it would also have contained rules to update those packages, but I never got round to implementing that.

The actual configuration of the external packages was done in the Emacs startup file init.el. The next Lisp snippet from that file shows the configuration of ace-jump-mode:

;use ace-jump-mode to improve navigation
(add-to-list 'load-path "~/.emacs.d/externals/ace-jump-mode")
(autoload 'ace-jump-mode "ace-jump-mode"
  "Emacs quick move minor mode" t)

This home-grown solution to package management served me well over the years. It enabled me to get a new install of Emacs up-and-running rather quickly.

Using Emacs package management directly

Emacs has had a package management infrastructure since version 24.1 [1]. It consists of a set of online repositories and a library to interact with them. It has since become the standard way to install packages, and in October of last year I finally started using it. Indeed it is a breeze to use, the biggest benefits being

  • the use of a global list of available packages: no need to search through GitHub or other online sources,
  • the one-click install of an interesting package: no need to add an additional rule to my Makefile, and
  • automatic support for upgrading installed packages.

Initially I used the Emacs package package.el directly to access the package management infrastructure. The following snippet from my configuration file shows how that looked:

(require 'package)
(add-to-list 'package-archives '("melpa" . "") t)
(package-initialize)

;Bozhidar Batsov at
;Thank you!

(defvar sks-packages
  '(ace-jump-mode)    ;;to enable text-specific cursor jumps
  "A list of packages that should be installed at Emacs startup.")

(require 'cl)

(defun sks-packages-installed-p ()
  (loop for p in sks-packages
        when (not (package-installed-p p)) do (return nil)
        finally (return t)))

(unless (sks-packages-installed-p)
  ;; check for new packages (package versions)
  (message "%s" "Emacs is now refreshing its package database...")
  (package-refresh-contents)
  (message "%s" " done.")
  ;; install the missing packages
  (dolist (p sks-packages)
    (when (not (package-installed-p p))
      (package-install p))))

(eval-after-load "ace-jump-mode-autoloads"
  '(progn
     (autoload 'ace-jump-mode "ace-jump-mode"
       "Emacs quick move minor mode" t)
     (define-key global-map (kbd "C-0") 'ace-jump-mode)))

Please note that the value of variable sks-packages in this snippet specifies a single external package to keep this example concise. The original Lisp file specifies a dozen more packages.

This approach was a big improvement over my own solution. Installation and configuration were located in the same file and gone were the dependencies on an additional Makefile and VC clients.

Emacs package management via Cask

The code that makes sure all packages in sks-packages are installed at startup, in case they were not yet installed, is from Bozhidar Batsov. Since then, he has developed Cask, which is, and I quote from its documentation,

[...] a project management tool for Emacs Lisp to automate the package development cycle; development, dependencies, testing, building, packaging and more.

Cask can also be used to manage dependencies for your local Emacs configuration.

Because of both those design goals, I decided to use Cask for my Emacs package management purposes.

Cask uses a so-called Cask file where you specify the external packages you rely on. For example, to install ace-jump-mode, my Cask file would look like this:

(source melpa) ;;archive of VCS snapshots built automatically from upstream repositories

(depends-on "ace-jump-mode")


The first line specifies the online repository that should be searched and the third line specifies the external package itself [2]. To install this dependency, I execute the following command:

$ .emacs.d> cask

Cask installs the package in a subdirectory of ~/.emacs.d/.cask/ that is named after my current Emacs version.

To update any installed dependencies, I just have to do:

$ .emacs.d> cask update

To close it off

As mentioned, you can find my Emacs configuration in my emacs-config repo. Although the repo is hosted at Bitbucket, it is a Git repo and not a Mercurial one as one might expect. I am using Git almost full-time now as my main client relies on it and have become more proficient with Git than with Mercurial. Furthermore, there is not much that can beat the excellent Emacs mode for interacting with Git, magit.

To conclude: previously, my Emacs configuration was accessible from one of my public Launchpad repos. I hosted my configuration there because Launchpad uses the Bazaar version control system, which was the first distributed VCS I used. In the last few years Bazaar adoption has declined and its development has slowed down, so I do not gain much, if anything, from hosting my configuration there.

[1]Emacs version 24.1 was released on 2012-06-10.
[2]The Cask documentation lists the repos it supports here. The comments in this snippet are from that documentation.


Build Qt 5.2 from source (Ubuntu 13.10)

Posted 2014-01-07 21:30:35   |  reSt   |  More posts about qt cpp

Qt 5.2 was released on the 12th of December, 2013. I wanted to give it a spin, so I downloaded the source tarball to build it myself. This proved to be more difficult than expected, but I managed in the end.

The biggest hurdle was to get Qt Quick (2) working. Qt Quick uses OpenGL so you need the OpenGL development headers. If these are not installed, which was the case with my new laptop, the output of the Qt configure script mentions the lack of OpenGL support. Unfortunately it took me quite some time to connect that to the fact that my build did not contain Qt Quick.

The remainder of this blog post describes how to create out-of-tree builds for Qt 5.2 and Qt Creator 3.0. I have created these builds on Ubuntu 13.10 but the information should be applicable to other flavors of Linux also.


As mentioned, for Qt Quick you need to have the OpenGL development headers installed. Execute the following command to install them:

$> sudo apt-get install libgl1-mesa-dev

To have your Qt5 applications blend in with your GTK desktop, you need the GTK 2.0 development headers:

$> sudo apt-get install libgtk2.0-dev

This enables GTK theme support but even with that working, Qt5 applications use a different theme by default. To force the use of a specific style, use the -style parameter when you start the Qt5 application, for example:

$> standarddialogs -style gtk+

For Qt4 applications you can set a default style with the qtconfig-qt4 utility, but Qt5 applications ignore its settings.

Build Qt 5.2

Download the Qt 5.2 tarball and unpack it:

$> wget
$> tar xvzf qt-everywhere-opensource-src-5.2.0.tar.gz

To build Qt for the local platform, execute the following commands [1]:

$> cd qt-everywhere-opensource-src-5.2.0
$> mkdir -p builds/local && cd builds/local
$> export PATH=$PWD/qtbase/bin:$PATH
$> ../../configure -prefix $PWD/qtbase -opensource -qt-xcb -nomake tests
$> make -j 4

We use a so-called out-of-source build to make it easy to rebuild Qt without having to worry that previous build artifacts influence the new build.

With the above value of the -prefix parameter, you do not have to install Qt using the make install command.

Note the -qt-xcb parameter for the configure command. It is there to, and I quote,

[...] get rid of most xcb- dependencies. Only libxcb will still be linked dynamically, since it will be most likely be pulled in via other dependencies anyway. This should allow for binaries that are portable across most modern Linux distributions.

This is mentioned in $PWD/../qtbase/src/plugins/platforms/README.

The -j 4 parameter tells make to run 4 jobs simultaneously. My laptop has 4 processing cores, so theoretically this could speed up compilation by a factor of 4. I did notice one drawback of using multiple jobs: when one of the jobs fails, it can be difficult to determine which compilation step failed, as the messages from the failing job have already scrolled off the screen.
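The factor of 4 is the ideal case; any part of the build that cannot run in parallel (configure steps, the final link) caps the gain. A rough sketch of that effect using Amdahl's law, with made-up serial fractions:

```python
# Amdahl's law: achievable speedup on n cores when a fraction of the
# work is inherently serial. The serial fractions below are hypothetical.
def speedup(cores, serial_fraction):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

print(round(speedup(4, 0.0), 2))  # 4.0: a perfectly parallel build
print(round(speedup(4, 0.1), 2))  # 3.08: 10% serial work already costs most of a core
```

In practice a large C++ build such as Qt parallelizes well during compilation but serializes again at link time, so `-j 4` gets close to, but not quite, 4x.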

Build Qt Creator 3.0

Download the Qt Creator 3.0 source tarball and unpack it:

$> wget
$> tar xvzf qt-creator-opensource-src-3.0.0.tar.gz

To build Qt Creator, execute the following commands from the root of the extracted tarball:

$> cd qt-creator-opensource-src-3.0.0
$> mkdir -p builds/local && cd builds/local
$> qmake -r ../..
$> make -j 4

Again we create an out-of-source build.

Potentially dangerous tip

If you accidentally build Qt in its source directory, you can clean that directory using the following command:

$> find . -type f -mtime -1 -exec rm {} \;

This command deletes all files that are less than a day old. It does this silently except for:


This file is read-only and you have to explicitly acknowledge that you want to delete it. It is safe to do so, as the file will be regenerated during the next configure/build run. The following remarks are appropriate when you use this command:

- Be very, very careful where you execute that command. I once had it delete
  my new Qt build but much worse things can happen.
- This command only works if the original Qt files are more than a day old,
  which is the case for the version we are building here.
[1]These commands are inspired by this page of the MeeGo 1.2 Developer Documentation


Scrum, tasks & task estimates

Posted 2013-04-06 19:10:00   |  reSt   |  More posts about scrum development

At the start of an iteration our Scrum team determines the tasks involved to realize each story and estimates them. This is an activity which can take a lot of time and, if you're not careful, a lot of energy. When that happens, be prepared for whispers or even loud complaints about "micro management". In this post I explain why we need tasks and estimates.

In Scrum, the user stories to be realized in the next iteration have already been estimated at a high level. Teams often estimate them in so-called story points, which indicate a relative effort required to deliver that story: the higher the number (of story points), the more effort required. At my current employer, we use the values 1 (extra small), 2, 4, 8, 16 and 32 (extra large).

The team assigns story points by comparing the new stories to older, realized ones and the story points assigned to them. When the number of story points that the team has realized in previous iterations is (relatively) stable, we use it to predict the velocity of the team in the next iteration. The stories that are selected for that iteration should fit the velocity of the team.
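The prediction step amounts to simple arithmetic. A minimal sketch with hypothetical numbers: take the story points realized in recent iterations and use their average as the expected velocity for the next one.

```python
# Story points realized in the last few iterations (hypothetical data).
realized = [21, 19, 23]

# Predicted velocity for the next iteration: the recent average.
velocity = sum(realized) / len(realized)
print(velocity)  # 21.0
```

The stories selected for the next iteration should then add up to roughly this number of points.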

As mentioned, these story points are a high-level estimate and, unless you are a really experienced and gelled team, the stories contain too many unknowns to plan and monitor the current iteration. This is where tasks and estimates come in. They should provide us with

  1. a better understanding of the work that needs to be done,
  2. a shared understanding of the work that needs to be done, and
  3. a burndown chart to track our progress [1].

The better understanding should lead to better estimates that tell us whether the iteration is overloaded or underloaded. We use these estimates to track progress throughout the sprint so we can

  • keep stakeholders informed during the sprint,
  • re-allocate people and resources when necessary,
  • add, remove or modify stories.

It can be tempting to define the perfect breakdown and find the perfect estimates, whatever those may be. If that works for your team, good for you, but avoid drowning in a kind of mini-waterfall. For me the big a-ha moment was that we track progress on stories, not on tasks. The task breakdown should give you a better understanding of the work that has to be done and, hopefully, better estimates. That is its purpose. Keep in mind that the best understanding often comes from doing the actual work.

So what if the actual work deviates from the breakdown? In that case you should adapt the amount of work remaining according to your new insights. In this way the burndown reflects that slow-down or speed-up.
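As a sketch of what that looks like (all numbers hypothetical): the remaining-work series is simply re-estimated each day, so extra work discovered on day 3 shows up as a bump in the burndown instead of being hidden.

```python
# Hours of work remaining on each day of a one-week sprint (hypothetical).
# On day 3 the team discovered extra work, so the remaining total rises.
remaining = [40, 34, 30, 35, 28, 20, 12, 0]
ideal = [40 - day * 40 / 7 for day in range(8)]

for day, (actual, plan) in enumerate(zip(remaining, ideal)):
    print(f"day {day}: {actual:>2}h left (ideal {plan:.1f}h)")
```

The gap between the actual and the ideal line is what triggers the conversations about re-allocating people or dropping stories.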

[1]This list has been inspired by the article at


Scrum bastard

Posted 2013-02-09 21:12:00   |  reSt   |  More posts about scrum development

In my current job I am a Scrum Master ™ :), apart from being one of the developers. In Scrum, you work in iterations of, say, two to four weeks [1]. At the start of an iteration, it is clear which functionality or "user story" has to be realized in that iteration. These stories have already been prepared, or "groomed", by and with the product owners so the team understands what is needed and can estimate the work involved. This does not mean these stories are specified in the fullest detail. Uncertainties and obstacles often crop up during the iteration, but ideally nothing really serious. The team can always approach the product owner to answer any questions or to make a decision.

This week something happened which really made me doubt my easy-goingness as a Scrum Master, and it was not the first time. One of the stories, "Update the manual for the new functionality", was not groomed. Both product owners had been working on other things and planned grooming sessions were cancelled multiple times and, in the end, postponed indefinitely. So the product owners and the team decided to just go on with the story, because, what is so unclear about the update to a manual? After a day's work we presented the end result to one of the product owners. His reply was along the lines of "Nice, but I had imagined you would also write about A and B. Please change it."

I was really annoyed by that, at least internally, because I do not like doing things twice. This time we got lucky: the changes he proposed were really additions, so there was hardly any rework. However, it did impact our iteration planning. More importantly, this could have been avoided if I, as a Scrum Master or even just as a team member, had insisted on grooming the story. As mentioned, I am an easy-going person, the product owners were busy doing other things, yada yada yada...

What is the moral of this story? Never to work on new functionality whose scope you only think you know? That sounds extreme, and I can think of a lot of situations where it is extreme. You could even say that it is extreme for this small example. But if you work in an environment which has a tendency to "undergroom" its stories and, as a result, get outcomes like the one mentioned in this post, consider being a "scrum bastard" every once in a while. Insist that stories get the attention they deserve. Play it black and white: if stories are not worth being groomed, they are not worth being worked on.

[1]For more information, the official Scrum Guide forms a good introduction.


Awesome window manager

Posted 2012-10-04 00:32:00   |  reSt   |  More posts about linux awesome window-manager
  • updated 2013/10/01 to add missing bitmaps and a missing link: the original version was deployed by accident

I am writing this blog post on a six-year-old laptop. When I bought it in 2006, its specs were great, or at least I thought so: AMD Mobile Turion 64 processor, 2 GB of RAM, 80 GB HDD and an NVidia GeForce Go 6200 graphics card with 256 MB on board. However, time has passed and it is definitely not the fastest tool in the shed anymore. Its specs still seemed good enough for my purposes and I decided to spend some time on getting it up to speed again.

To cut a long story short, I brought this quest to a successful end but only because my free time does not cost money. I enjoyed the journey and it is the journey that counts.

The family laptop, which is a few years less ancient, is running Ubuntu 12.04 with Unity 3D and I like it very much. I also installed that combination on the old workhorse but that proved to be too much: Unity 3D did not even run and Unity 2D was sluggish. The previous time I had to use older hardware I settled for the more lightweight desktop environment (DE) XFCE. At the time I quite liked it so I decided to use it again.

The XFCE spin of Ubuntu, named Xubuntu, worked a lot better. But from time to time it still was a bit sluggish. A real annoyance was that applications I defined to run at startup sometimes did not run, or ran too early and their effects were discarded. For example, one of my startup applications was the command to make my Caps Lock a Ctrl key and to set my Compose key:

setxkbmap -option compose:rctrl -option ctrl:nocaps

More often than not I had to rerun that statement after login. The same was true for the xrandr statement that sets my second monitor correctly.

The biggest issue was the delay after I had logged in: after login, it could take a minute before the desktop was shown. By the way, it turned out that that delay was not caused by XFCE :) but more about that later.

The hardware was not the only reason to look for another desktop environment. At work I also use Xubuntu and in that setup I often have multiple terminal windows open. It can be a bit of a hassle to manually lay out these (and other) windows on my desktop. I knew that there were window managers that could do that automatically for me, and this brought me to Awesome.

Awesome is a so-called tiling window manager. A tiling window manager places and sizes the application windows in such a way on your desktop that optimal use is made of the available screen real estate. When you tile the application windows, they partition the visible desktop. Each time you open an additional window, the window manager automatically moves and resizes one or more of the already visible windows to make room for the new one. The following screenshot shows you an Awesome window placement scheme or layout, with a single master column for the most interesting window and a client column for the other windows:


The second screenshot shows a similar layout, but this time with a single master row and a client row:


There exist layouts in Awesome that do not tile. I often use the maximize layout, which lets the window that has the focus fill the desktop. This page on the Awesome wiki lists the available layouts.
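To make the master/client split concrete, here is an illustrative sketch (this is not Awesome's actual code, which is written in Lua): given n windows on a 1920x1080 desktop, compute one wide master column and stack the remaining windows in a client column.

```python
def master_layout(n_windows, width=1920, height=1080, master_frac=0.6):
    """Return (x, y, w, h) geometries: one master column, the rest stacked."""
    if n_windows == 1:
        return [(0, 0, width, height)]
    master_w = int(width * master_frac)
    client_h = height // (n_windows - 1)
    geoms = [(0, 0, master_w, height)]    # master window fills the left column
    for i in range(n_windows - 1):        # clients stacked in the right column
        geoms.append((master_w, i * client_h, width - master_w, client_h))
    return geoms

print(master_layout(3))
```

Opening a fourth window simply recomputes the geometries, which is exactly the automatic move-and-resize behavior described above.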

Awesome itself is configured using a file written in (the programming language) Lua. In this file you define key bindings, your color theme, applications that have to run at startup, virtual desktops (or "tags" as they are named in Awesome) etc. The configurations that come with Awesome are self-explanatory so you do not need in-depth knowledge of Lua to customize them.


The screenshot above shows my current desktop. It is based on the Zenburn theme which is part of the Ubuntu package 'awesome'. I have made some relatively minor changes to that configuration:

  • add the XFCE application finder to the Awesome menu;
  • bind Alt-F2 to launch the application runner gmrun;
  • rebind several keys with regard to window navigation and placement;
  • use a custom desktop background with the Awesome key and mouse bindings.

You can find the complete configuration in my awesome Bitbucket repo. If the background seems familiar, that is correct. It is based on an image that comes with Ubuntu Unity that shows its key and mouse bindings. As Awesome comes with a lot of key bindings, which I still have not memorized, I tracked down the SVG of the Ubuntu Unity image and adapted that to Awesome. You can find both the SVG and a 1920x1080 rendition of it in my repo.

Awesome is a window manager and I quickly found that a window manager is not the same as a desktop environment (DE). For example, XFCE implements the functionality to modify the appearance of your desktop, start the screensaver, automount USB devices, and shut down your workstation as an ordinary user. So the first time I ran Awesome, I was looking at large fonts and ugly widgets instead of the finely-tuned appearance I defined in XFCE. The easiest way to resolve that was by (automatically) starting xfsettingsd after login. The fonts of Qt applications, however, remained large and ugly. To get them to show the right fonts I had to set the Fontconfig configuration [1]. The README of the Bitbucket repo explains all this and more.

As already mentioned, my personal configuration makes use of XFCE. So if you want to reuse that configuration and your desktop environment is not XFCE, it will not work for you. Probably you just have to make a few minor changes to get it to work with another desktop environment or without any at all.

I want to close it off with the following. Earlier I mentioned the delay that occurred after I had logged in to XFCE. Unfortunately this delay occurred even when I used Awesome... It turned out that this is not a problem with XFCE but one with LightDM, the login manager Xubuntu uses, and 64-bit machines, see bug #996791 on Launchpad :).

Have fun with it!

[1]Visit for an excellent explanation.


Contents © 2016 Pieter Swinkels - Powered by Nikola