How do server-side (sieve) filters work again?

by Lioba Leickel in Kolab Now Blogs at 10:42, Monday, 19 September

Kolab Now offers a mail filtering feature. It makes it possible to apply a series of actions to specifically selected incoming emails, so that incoming mail meeting certain criteria can be automatically processed by the server and handled/organized according to the defined actions. For example, the server can move the message to a specified […]
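As a rough illustration of what such a server-side rule does, here is a minimal sketch in Python (not actual Sieve syntax; the conditions and folder names are invented for the example):

```python
# Illustrative sketch of rule-based mail filtering: each rule pairs a
# condition with a target folder; the first matching rule wins.

def apply_rules(message: dict, rules: list) -> str:
    """Return the folder a message should be filed into."""
    for condition, folder in rules:
        if condition(message):
            return folder
    return "INBOX"  # no rule matched: default delivery

rules = [
    (lambda m: "newsletter@" in m["from"], "Newsletters"),
    (lambda m: "invoice" in m["subject"].lower(), "Billing"),
]

msg = {"from": "newsletter@example.com", "subject": "Weekly digest"}
print(apply_rules(msg, rules))  # Newsletters
```

On Kolab Now the real filters are written in the Sieve language and run on the server before a message reaches your inbox; the sketch only mirrors the match-condition-then-act structure.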

Important: DNS settings @KolabNow

by Lioba Leickel in Kolab Now Blogs at 11:45, Wednesday, 13 July

This goes out to those users who joined our service back in the days when it was still called MyKolab. (Note: No emails were hurt during the writing of this blog post) In 2015 our service changed its name from MyKolab to Kolab Now – as we know it today. At the same time a […]

Incident report: db server stuck in large operation..

by Mads Petersen in Kolab Now Blogs at 06:23, Tuesday, 07 June

On Monday June 06 2022, from approximately 11:24 UTC a group of Kolab Now users observed an error stating: “Gateway time-out” when trying to login to the webclient, the dashboard, or connect with any desktop or mobile client. The Kolab Now main page and all other services (Support, blog and knowledge base) were available […]

Kolab Now: Our prices explained

by Lioba Leickel in Kolab Now Blogs at 09:18, Thursday, 19 May

For all the fellows out there who are interested in our product and service we would like to explain today again the difference of individual & group accounts and prices that apply. Kolab is the result of more than 20 years development of Open Source technologies for collaboration. It offers a hosted e-mail and groupware […]

Updates to Kolab Now..

by Mads Petersen in Kolab Now Blogs at 11:49, Friday, 22 April

This Friday the operations team is rolling over the Kolab Now system and deploying a set of new updates. However, thanks to the redundancy in the architecture, users will not see any downtime or service interruptions. The update contains a lot of code, preparing the system for future features, and one update to the Voice […]

The strengths of the Kolab calendar: part 2

by Lioba Leickel in Kolab Now Blogs at 13:51, Wednesday, 20 April

Hello again everyone. Last week's part 1 of the strengths of the Kolab calendar covered the calendar in general, colorized events, calendar lists, and scheduling & editing of events. Today the journey continues and we will dive deeper into the scheduling & editing of events and cover topics like event invitations & notifications – […]

Kolab Now @ Swiss Cyber Security Days

by Lioba Leickel in Kolab Now Blogs at 10:43, Thursday, 31 March

For a few years in a row, the Swiss Cyber Security Days have brought specialists and cyber security industry leaders together in the context of Swiss cyber security. This year the event falls on 06 & 07 April, and unlike last year, where the event was run as a ‘virtual get together’, it is […]

The strengths of the Kolab Calendar: Part 1

by Lioba Leickel in Kolab Now Blogs at 11:17, Wednesday, 23 March

At Kolab Now we love the feedback that we get from our users. It gives us a picture of what we do right, and where we need to improve. We hear which features users love and which they don’t. And that is how we learned that one of the most valued and most used Kolab […]

Is your password ‘123456’?

by Mads Petersen in Kolab Now Blogs at 07:50, Monday, 14 February

If so, you share your password with thousands – perhaps hundreds of thousands – of others. Password complexity is talked about a lot in the media these days. You have probably seen messages from your bank or your badminton club’s membership administrator about how your password should be complex to avoid password cracking. Last Friday, […]
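As a minimal sketch of the kind of check being described (the common-password list and length threshold here are illustrative, not what any particular provider uses):

```python
# Tiny illustrative password check: reject well-known passwords
# outright, then require a minimum length.
COMMON_PASSWORDS = {"123456", "password", "qwerty", "111111", "letmein"}

def password_ok(pw: str, min_length: int = 12) -> bool:
    if pw.lower() in COMMON_PASSWORDS:
        return False  # shared with thousands of others
    return len(pw) >= min_length

print(password_ok("123456"))                        # False
print(password_ok("correct horse battery staple"))  # True
```

Real services typically check against far larger breach corpora and add rate limiting; this only shows the basic idea.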

Our Statistics for 2021

by Mads Petersen in Kolab Now Blogs at 11:06, Friday, 04 February

Again this year we deliver the numbers according to our terms of service, which state that there is basically no way for anyone to get access to your data without us letting the world know that it happened. We have again counted the interactions, and here are our statistics for 2021. NOTE: Also this year […]

Data Migration Process with Audriga:

by Nanita Winniewski in Kolab Now Blogs at 10:05, Tuesday, 01 February

We often receive requests from users who want to move their data from their current email providers to Kolab Now using our partner Audriga’s migration service (e.g.: “I want to use your services and/or the services of Audriga to migrate my Google Apps accounts to Kolab. How do I proceed?”). The process is simple. First […]

Incident report: Failing front ends

by Mads Petersen in Kolab Now Blogs at 07:31, Wednesday, 26 January

On Friday January 21 at 15:48 UTC users observed that the main page became inaccessible and revealed an error: 503 – Server unavailable. The webclient was still available, but with seriously degraded performance – to the point where the service would give up and reveal the ‘Maintenance’ page. Also IMAP, SMTP, ActiveSync, and […]

Incident Report: WebDAV/CalDAV/CardDAV services unavailable..

by Mads Petersen in Kolab Now Blogs at 13:29, Monday, 17 January

On Monday morning at 03:22 CET the Kolab Now $DAV services stopped working. While resolving a performance issue for $DAV connections, we moved servers from an old infrastructure to a new infrastructure, and with that the rotation of logs. Unfortunately a tiny bit was missed while configuring the new servers, which caused the log […]

The Annual Certificate Refresh

by Mads Petersen in Kolab Now Blogs at 09:57, Tuesday, 28 December

Another year has passed and we again needed to refresh our certificates. As in previous years, e.g. 2018, 2019, and 2020, we rolled over certificates across all systems. Different from previous years, though, this year we made a mistake when creating the full-chain certificate file. This meant a disruption of the connection for many users. […]

Happy New Year 2022

by Lioba Leickel in Kolab Now Blogs at 11:37, Friday, 17 December

Ladies and Gentlemen, another turbulent and exciting year, 2021, soon comes to an end! We want to thank all y’all for your support, and for being or becoming a part of the Kolab Now family. We hope you enjoy and value our product and services the same way that we enjoy providing them. As always, […]

And… it’s around the corner again – Christmas! Every year it is a very special time for me and my family. The days get shorter, everyone slows down and reflects on the busy year at home with cups of tea and hot chocolate on the couch, covered in a warm blanket. At the same time […]

A few small improvements

by alec in Kolabian at 09:52, Sunday, 07 November

I’ll describe a few new features that will be included in the next major Roundcube version currently under development (1.6). Read more for details and some screenshots. Improvements in plain text wrapping: I added the possibility to disable automatic line-wrapping of the sent mail body. I also improved auto-wrapping of plain text messages on … Continue reading A few small improvements

Incident Report: Network Outage at Kolab Now

by Mads Petersen in Kolab Now Blogs at 16:05, Tuesday, 05 October

On Tuesday 2021-10-05, between 10:20 UTC and 10:40 UTC, a network issue kicked our firewalls off the grid, and Kolab Now was down. Our Operations team was on the case right away and could correct the issue instantly. As a result the downtime was very limited. When the dust had settled, it turned out that […]

Canned responses in HTML format

by alec in Kolabian at 08:44, Sunday, 29 August

A new feature just landed in Roundcube 1.6-git: support for HTML formatting in saved response snippets. This includes links, images and any HTML feature we usually allow in our rich text editor. When such a response is inserted into a plain text message it will be converted to plain text (and vice … Continue reading Canned responses in HTML format

Incident Report: Storage Outage at Kolab Now

by Jeroen van Meeuwen in Kolab Now Blogs at 12:12, Wednesday, 25 August

On Monday, August 23rd 2021 at around 04:00 UTC, Kolab Now suffered a catastrophic outage related to its storage. This post will outline what happened, when it happened, and what is to happen. First: the timeline, along with some of the analysis that has already occurred — all times are approximate. Sunday, 16:00 UTC Hypervisors start reporting […]

Announcing Service Window: Improving Authentication

by Michael Bohlender in Kolab Now Blogs at 13:42, Thursday, 19 August

After much work we are now ready to switch over to a new authentication system based on Laravel Passport. The switchover will happen on Saturday, August 21st starting at 09:00 UTC. Short periods of downtime might occur, but we will of course keep them to a minimum. Users can follow the status of the […]

(re-)Introducing Bank payments..

by Mads Petersen in Kolab Now Blogs at 08:38, Tuesday, 10 August

On Saturday, 31st of July 2021 we added a few features to Kolab Now during a service window. We already informed you about two of the features that were added: the user-controlled Sender Policy Framework (SPF) settings for group admin users, and the user-controlled opt-out of the greylist. What we failed […]

Contact form – private vs. business

by alec in Kolabian at 19:02, Sunday, 25 October

If you use a webmail application for work, you will more often need functionality related to business rather than private life. One example is an addressbook, where your contacts can have various properties. In the Roundcube addressbook you can select from many properties with either a “work” or “home” label, but the default is private life. Starting … Continue reading Contact form – private vs. business

Collected Recipients and Trusted Senders

by alec in Kolabian at 17:23, Saturday, 26 September

Roundcube has some features that make use of contacts users have collected in their addressbook(s). However, the process of adding new contacts was mostly manual and slow. Here come new ways to automatically collect contacts while using the webmail. Automatic collection of the email addresses of recipients used in a sent mail is quite a … Continue reading Collected Recipients and Trusted Senders

Elastic: Dark mode

by alec in Kolabian at 19:06, Tuesday, 07 July

Nowadays, when every major web browser supports dark mode and most people use mobile devices, every application has to have a dark mode. Roundcube 1.5 will have it too. A few people worked on this in the past, and there’s even a pull request. It looked to me that it might be a lot of … Continue reading Elastic: Dark mode

Last week in Kube

by cmollekopf in Finding New Ways… at 14:23, Sunday, 24 May

  • You can now view “flagged”/”starred” messages per account. This is a short-cut to getting that functionality eventually folded into the todo view (I think…), and allows you to quickly show a list of messages that you have marked as important so you can deal with them. The view works best if there is a small number of marked messages, and you unflag them once you have dealt with them.
  • The experimental flatpak now contains spellcheck-highlighting support based on Sonnet. There is no configuration (the language is autodetected per sentence), and there is also no UI to get to corrections, so it’s primarily useful for spotting typos (which is good enough for the time being).
  • Quotes in plaintext emails are now highlighted, which makes for a better reading experience.
  • Lots of enhancements to the todo view, which is now very functional as a day-to-day todo manager.
    • The todo view now has an Inbox that shows all tasks that are neither done nor in doing. This allows you to go through open tasks from all task lists, to pick them for the “Doing” list (which represents the set you’re currently working on).
  • Various fixes to the invitation-management code to work correctly with exceptions to recurrences.
  • Kube is now a single-instance application when run with the --lockfile option (as used in the flatpak). This was necessary to deal with the fact that we can’t start multiple instances of Kube in separate sandboxes, because storage and lockfiles rely on PID uniqueness. Starting Kube again will now simply hide and show the window of the current instance, which results in the window showing up on the workspace you are currently on (which also seems to be more consistent with how Gnome and macOS behave).
  • The IMAP synchronization got more efficient, primarily enhancing the case where nothing changed on folders.
  • Adapted to source changes in KCalendarCore (so the Applications/19.08 release is now required).
  • It’s now possible to create/remove calendars and tasklists in Kube, and sink can modify the associated color (but there’s no UI yet).
  • Recipients in the email composer can now be moved between To/CC/BCC using drag and drop.
  • Listviews now have subtle fade-in animations, which helps spotting new items.
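The todo-view Inbox described above can be sketched roughly like this (illustrative Python, not Kube's actual code; the status strings follow the iCalendar VTODO convention, which is an assumption on my part):

```python
# Sketch of the todo Inbox: show all tasks that are neither done
# ("completed") nor in doing ("in-process"), across all task lists.
tasks = [
    {"summary": "write report", "status": "needs-action"},
    {"summary": "review patch", "status": "in-process"},  # on the Doing list
    {"summary": "file expenses", "status": "completed"},  # already done
    {"summary": "book travel", "status": "needs-action"},
]

inbox = [t for t in tasks if t["status"] not in ("in-process", "completed")]
print([t["summary"] for t in inbox])  # ['write report', 'book travel']
```

From this Inbox you would then promote a handful of items to the Doing list by switching their status.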

Kube Commits, Sink Commits

Previous updates

More information on the Kolab Now blog!

“Kube is a modern communication and collaboration client built with QtQuick on top of a high performance, low resource usage core. It provides online and offline access to all your mail, contacts, calendars, notes, todo’s and more. With a strong focus on usability, the team works with designers and UX experts from the ground up, to build a product that is not only visually appealing but also a joy to use.” For more info, head over to:

Last week in Kube

by cmollekopf in Finding New Ways… at 13:26, Saturday, 21 March

  • Upgraded the flatpak to Qt 5.14 (hoping to get my hands on the new markdown support), which resulted in discovering a regression for QSet properties.
  • The flatpak now employs a patch so the pinentry tool just uses libsecret as cache, which means if you run gnome-keyring you get password-less logins (and if somebody is going to finish ksecretservice that would of course work too). I have also looked into getting access to the host gpg-agent (which seems like the better solution), but that effort is currently stuck due to a missing feature in bubblewrap and because it’s not entirely clear if this will really be the way to go forward. Feel free to weigh in.
  • Fixed a bunch of rendering issues in invitations and calendar.

Last months in Kube

by cmollekopf in Finding New Ways… at 15:57, Tuesday, 25 February

  • Kube got a basic but functional todo view.
  • HiDPI support. After a bit of massaging, Kube now scales correctly on HiDPI displays, so we no longer end up with tiny icons everywhere.
  • Fixed HTML view resizing. Resizing the WebengineView to fit the contents has been a constant source of problems and continues to break every now and then.
  • Improved keyboard navigation that allows for switching between views.
  • Improved IMAP sync which fixes a couple of corner cases and improves performance as well as responsiveness while a sync is executed.
  • Fixed free-page accumulation in the LMDB database, which results in ~10x smaller db files.
  • Experimented with zstd compression for things like mime messages, which reduced the database size, but doesn’t seem to affect performance much.
  • The account password is now protected using your GPG key as well, so if there is a GPG key set up, you will only have to unlock your keyring to unlock Kube. This effectively turns the GPG keyring into your Kube keyring as well (assuming you have a GPG key for every account).
  • Implemented fulltext indexing of encrypted emails as well as basic support for protected headers. Read more
  • Shaved off another 2000 SLOC from the messageparser library as I moved it over to sink for encrypted email indexing.

The todo view

The todo view’s goal is to have a small personal list of todos, acknowledging that you can only accomplish so much during a day, and there is no intention of turning this into a project management suite down the road.

The idea is that you have a couple of lists as backlogs, and that you then pick a reasonable amount of items (<10 probably?) from those lists as currently in progress (that’s also how it’s stored in iCal). This then gives you a nice little list of things during the day/week/whatever suits you, that you can tick off.

New items can quickly be entered using keyboard shortcuts (press “?”) and that’s about it for the time being.

I think sub-todos might find their way eventually in there, but the rest should rather be quality of life improvements and eventually taking other sources of “things you need to act on” into account, such as emails that you should probably be answering or events that need to be prepared.

The todo view was the last officially missing piece, so with that we are view-complete (feature complete may be a bit of a stretch still).

The keyring

Having to enter your account password for every account whenever you start Kube doesn’t make for a great user-experience, so this was fairly high on the nuisance list.

Naturally the first thought was that we would just use your platform’s keyring, but regrettably there still isn’t a solution that works across platforms (not even on Linux; libsecret was never implemented for KDE), so that would mean a lot of implementation and maintenance effort.

Fortunately, there is an alternative.

We already rely on GPG for end-to-end encryption, so why not use your GPG-key to also secure your account related secrets?

We already had an experimental feature that stored the account password encrypted with the key as a proof of concept, so the next step was to build this into the core and improve the experience somewhat.

The result is that you will no longer have to enter any account passwords after the initial entry, but will instead be prompted to unlock your GPG-keyring (if not already unlocked), and just like that we’ll gain a keyring and sidestep the keyring mess. With gpg-agent we can at least reuse something that we rely on anyways, and that we have available on all supported platforms.

The one fly in the ointment is that we currently have to start a gpg-agent inside the flatpak, so we can’t reuse an already unlocked keyring on the system.

When most of your email traffic is encrypted there are two things that really start to become a problem:

  • You can’t search for encrypted emails
  • If the sender is using a client that supports encrypted headers, such as Thunderbird, your emails will be completely unrecognisable because the subject no longer contains anything useful.

To fix this we’re going to start decrypting encrypted emails when syncing and indexing the encrypted content. That way we can make sure encrypted emails are just as usable as non-encrypted emails, at least as long as you’re using Kube.
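A toy sketch of the decrypt-then-index idea (illustrative Python only; `decrypt` here is a hypothetical stand-in for the real GPG step, and the index is a plain in-memory inverted index rather than Sink's actual full-text index):

```python
# Decrypt-then-index sketch: encrypted bodies are decrypted at sync
# time, so their words land in the search index like any other mail.

def decrypt(encrypted_body: str) -> str:
    # Stand-in only: a real client would invoke GPG here.
    return encrypted_body[len("ENC:"):]

def build_index(messages: dict) -> dict:
    """Map each word to the set of message ids containing it."""
    index = {}
    for msg_id, body in messages.items():
        text = decrypt(body) if body.startswith("ENC:") else body
        for word in text.lower().split():
            index.setdefault(word, set()).add(msg_id)
    return index

messages = {
    "1": "ENC:meeting notes for the launch",  # encrypted message
    "2": "lunch plans",                       # plain message
}
index = build_index(messages)
print(sorted(index["meeting"]))  # the encrypted message is searchable
```

The trade-off discussed below follows directly from this design: the indexed words exist on disk in plain form.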

This means that in the future you will not only be able to search through all your email, it also means you get a more useful subject displayed than “…” or some other nonsense.

Who doesn’t love a conversation list full of descriptive subjects like “…”

This is of course a trade-off in terms of security; you will have at least parts of your encrypted email in decrypted form on your disk, so an attacker with access to your system will have access to the encrypted contents in plain text. However, it’s a reasonable trade-off: an attacker who has access to your system, which contains your private GPG key as well, has a good chance of obtaining what they are looking for anyway. It’s also a necessary trade-off to make encrypted communication usable enough that it can be used across the board.

Of course we could attempt to protect the index, but this is best left to the same tools that protect the rest of your system, such as full-disk encryption.

Calendar and Tasks for non-Kolab users

by alec in Kolabian at 19:15, Wednesday, 23 October

The great Calendar and Tasklist plugins from Kolab have always contained additional drivers, so they could be used outside of a Kolab setup (using an SQL database as storage). That part was never a priority for Kolab developers, but with Roundcube 1.4 and the Elastic skin these plugins will again be available in an up-to-date … Continue reading Calendar and Tasks for non-Kolab users

Kube 0.8.0 is out!

by cmollekopf in Finding New Ways… at 23:28, Tuesday, 01 October

After a waaaaaay too long “break” I have finally tagged another release.

The largest change in this release is the addition of the calendar view, which is not only useful, but also marks an important milestone in our development roadmap: we finally have all the pieces together from a technology perspective.

The calendar’s week view

The calendar was a major undertaking due to a couple of challenges:

  • It’s synchronized over its own protocol (CalDAV)
  • It’s for once not a list, so it’s visually a completely different beast than everything else.
  • It has lots of fun special cases such as recurring events, timezones, overlapping events, multiday vs. single day events, …
  • We wanted to avoid loading your complete calendar (including the past 10 years) into memory, while making sure you get recurring events displayed even if they started 10 years ago.

The work done so far solves most of the important challenges, but there are also definitely a couple of holes in it still, such as no drag and drop support.

While the calendar is certainly the biggest new feature, there’s also a bunch of other improvements in there:

  • A new editor view, providing a much cleaner look than what we used to have.
  • Basic support for scheduling via iTip (both for sending an invitation to attendees, and for interacting with invitations you have received).
  • Autodiscovery support for CalDAV and CardDAV servers (so specifying just the base URL is enough to configure your account).
  • Builds and runs on macOS and Windows (it admittedly is not getting a lot of testing on those platforms, especially on Windows, but the baseline is there).
  • A Fastmail account configuration dialog.
  • It’s now possible to create and modify contacts.
  • We no longer default to displaying HTML email
  • … and a bunch of other stuff in a little over 226 commits to sink and 561 commits to kube.


Tarballs are available at the usual locations:

Get It!

Of course the release is already outdated, so you may want to try a flatpak or some distro provided package instead:

Last months in Kube

by cmollekopf in Finding New Ways… at 15:19, Saturday, 25 May

Kube is still alive! I got distracted for a while, both professionally and privately, and writing blog posts is unfortunately always the first thing that ends up on the chopping block. Anyways, lots of progress has been made (103 commits in sink, ~130 commits in the kube codebase):

  • For sink we had a variety of bugfixes and performance improvements, especially on the more recent CalDAV/CardDAV backends.
  • For CalDAV/CardDAV we now do basic autodiscovery using the .well-known URLs. The DNS part of the spec has not been implemented so far.
    This means that for a properly set-up server you only have to specify the base URL, and everything else will be discovered automatically from there.
  • On the more user-facing front we have:
    • Sent emails are now collapsed by default
    • Plain text is now the preferred method of viewing emails. You can still view the HTML variant, if available, by clicking a button.
    • The addressbook is no longer read-only and you can now create contacts as well.
    • A visually reworked composer that avoids becoming too wide and removes a lot of the visual clutter.
    • The calendar can now render recurring events.
    • It is now possible to create events as well.
    • Work on a tasks view has started
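The .well-known bootstrap mentioned above (specified in RFC 6764) amounts to deriving fixed paths from the base URL, roughly like this (illustrative Python; a real client then follows redirects from these URLs to the actual service endpoints):

```python
# Derive the well-known CalDAV/CardDAV bootstrap URLs from a base URL,
# as a client doing basic autodiscovery would probe them.
from urllib.parse import urljoin

def well_known_urls(base_url: str) -> dict:
    return {
        "caldav": urljoin(base_url, "/.well-known/caldav"),
        "carddav": urljoin(base_url, "/.well-known/carddav"),
    }

print(well_known_urls("https://example.com"))
# {'caldav': 'https://example.com/.well-known/caldav',
#  'carddav': 'https://example.com/.well-known/carddav'}
```

The DNS SRV part of the spec, which Kube skips, would instead look up records like `_caldavs._tcp` to find the host in the first place.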


You may have noticed that it’s been a while since the last release. This is not only because releases are additional work, but also because we already have a continuous delivery channel with the nightly flatpak.
It’s clear that releases do provide value, both as a communication tool indicating which version should be packaged, and as a signal of whether it will be maintained. With the current manpower we cannot maintain releases, though, which makes them significantly less interesting.

With that said, the 0.8 release with the calendar is now long overdue and should be coming out soonish.

Experimental flatpak

Just to put it out there: in addition to the usual “master” branch of the flatpak, there is also an “experimental” branch containing, surprise, various experimental bits and pieces.

This currently entails:

  • A plugin that stores the account’s password encrypted with the account’s GPG key (blindly assuming there is one with a matching email address).
  • A search view
  • The upcoming calendar view (which we’ll move over in the next release)
  • The above todo view (which will take a little longer to move to master)
  • A “File as expense” plugin (a showcase how we could do extensions in the mail view).
  • The Inbox crusher view (an experiment for a view to go through your inbox one-by-one).

It typically serves as a staging ground for new components, and is the version that I’m running day-to-day. flatpak makes it easy to switch back and forth between the branches on top of the same dataset, so you can try it and switch back if you don’t like what you see.

To give it a shot use the following command to install and switch to the experimental flatpak branch:

flatpak install -y --user --from
flatpak --user make-current com.kubeproject.kube experimental

To switch back simply issue:

flatpak --user make-current com.kubeproject.kube master

TinyMCE 5

by alec in Kolabian at 12:42, Thursday, 23 May

HTML editing functionality in Roundcube is provided by TinyMCE 4. Is the new TinyMCE 5 release ready for an update? Is it better from a Roundcube perspective? Is the built-in mobile mode good? When will Roundcube update to this version? I may not answer these (yet), but I can already provide some screenshots ;) It’s not a … Continue reading TinyMCE 5

A new skin for Kolab WebAdmin

by alec in Kolabian at 14:16, Friday, 10 May

In the past week I’ve been working on refreshing the look of the Kolab Administration Panel. The user interface hasn’t been touched since it was created, even though the webmail frontend has a few themes, including the fresh Elastic skin. A refresh was really needed. Using the Bootstrap framework, the Roboto font and FontAwesome icons (the technology used in … Continue reading A new skin for Kolab WebAdmin

Instant updates aka Push in Roundcube

by alec in Kolabian at 13:32, Wednesday, 23 January

Roundcube’s user interface refreshes every now and then at a configured time interval. This is what many IMAP clients do (or did historically) and what most users understand and accept. I’m not going into details about how it’s implemented internally right now, but it’s based on existing IMAP standards (and their limitations). The question is: can … Continue reading Instant updates aka Push in Roundcube

Kolab Single Sign On plugin

by alec in Kolabian at 10:07, Monday, 10 December

This new plugin (kolab_sso) adds the ability to authenticate users via external authentication services. For example, if your organization has such a service, users can access webmail with a single click (if already authenticated in the service). OpenIDC/OAuth2/SAML2 are technologies widely described and used on the Internet, so I’ll just write shortly about what we provide … Continue reading Kolab Single Sign On plugin

Markasjunk and Markasjunk2 plugins merged

by alec in Kolabian at 16:03, Tuesday, 13 November

Very recently Roundcube’s built-in markasjunk plugin was merged with the markasjunk2 plugin developed by Philip Weir. If you didn’t already know markasjunk2, read on to understand what that means. The most important change is the ability to mark messages as not junk. Depending on configuration this means moving messages back to the Inbox folder or … Continue reading Markasjunk and Markasjunk2 plugins merged

Password plugin improvements

by alec in Kolabian at 15:07, Friday, 09 November

Thanks to the work of Philip Weir (johndoh), Roundcube’s password plugin received a few new features. Philip is a Roundcube contributor and the author of some nice plugins, e.g. contextmenu, sauserprefs and swipe, but let’s see what’s new in Password. The plugin’s internal driver API has been extended with: the possibility to override default password comparisons, the possibility to override default … Continue reading Password plugin improvements

Calendar progress

by cmollekopf in Finding New Ways… at 08:50, Wednesday, 29 August

As we’re closing in on a simple but functional calendar for Kube, I’d like to share our progress with you.

We’ve decided to start with a week view, as that seems to be a good compromise between information density and enough information for day-to-day use.
We will eventually complement that with a month view, which is probably all we need for the time being.

An agenda view will probably become part of a separate view instead, showing you upcoming events, tasks that need to be taken care of, important emails, ….

Anyhow, here’s the week view:


This view is based on two models, one for full-day events (on top) and one for the rest below. The models each query for all events that overlap with the time-range shown, which we can do efficiently due to the work of Rémi on the calendar queries. That means we really only have to deal with a few events in memory, which we can easily do on the spot. The models then do their magic to calculate the position and sizes of all the necessary boxes, so all we have to do in QML is paint them. This now also includes recurring events, although we’re not dealing with exceptions just yet.
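The overlap test behind those time-range queries is simple; here is a minimal standalone sketch of the predicate (the types and names are hypothetical, not Sink’s actual API):

```cpp
// An event overlaps the displayed range iff it starts before the range ends
// and ends after the range starts. This only illustrates the predicate;
// Sink's real calendar queries are index-backed.
struct Range {
    long long start; // e.g. seconds since epoch
    long long end;
};

bool overlaps(const Range &event, const Range &view)
{
    return event.start < view.end && event.end > view.start;
}
```

With that predicate, the week view only ever needs to hold the handful of events whose ranges intersect the visible week.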

The colors of the events are taken from the calendar, which gets synchronized via CalDAV. This is a design tradeoff, because you can bet that those colors will not match Kube in any way. But we decided that Kube should work well with other devices and clients, and the color is a major factor in recognizing immediately what belongs where. Perhaps we can still make the colors a bit easier on the eye by desaturating them a bit; we’ll have to see.

Otherwise there is dimming of the past, and a blue line indicating the current time. Simple and without distractions.

The next steps are going to be adding a detail view, as well as a simple editor, and then we should already have the basics for your daily needs.

We’re still running a Kolab Now promotion for Kube! For more information head over to:

Optimizing Kube’s storage

by Rémi Nicole in Finding New Ways… at 14:31, Thursday, 23 August

Near the middle / end of my internship, I got to modify parts of the storage system in Sink, the library handling all the data of Kube.

The goal was both to speed up the storage and to reduce disk space. These two goals often go hand in hand in databases, since smaller data means faster disk lookups and more data fitting in memory, available for direct usage.

Reducing key sizes

The first important modification was for keys to be stored in a binary format instead of the displayable format. This made data retrieval faster, because the binary format is much smaller than the display format.

On the memory usage side, this is a bit more awkward to measure: on the one hand, the size of keys in memory is smaller, but on the other hand, LMDB (the database system we are using) will try to put more data in memory.

But on the whole, either we are using less memory, or Kube will be faster altogether. The reality is probably a mix of the two.
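To give a feel for the size difference: a UUID in its displayable form is 36 characters, while the same value packed as raw bytes is only 16. A rough sketch of such a conversion (illustrative only, not Sink’s actual code):

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Pack a displayable UUID ("123e4567-e89b-...") into 16 raw bytes.
std::vector<uint8_t> packUuid(const std::string &display)
{
    std::string hex;
    for (char c : display) {
        if (c != '-') {
            hex += c; // keep only the 32 hex digits
        }
    }
    std::vector<uint8_t> out;
    for (size_t i = 0; i + 1 < hex.size(); i += 2) {
        out.push_back(static_cast<uint8_t>(std::stoi(hex.substr(i, 2), nullptr, 16)));
    }
    return out; // 16 bytes instead of the 36-character string
}
```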

As for the numbers, these are the results of two different benchmarks:

                         Develop branch    This patch
Current RSS usage [kb]   40700             39112
On disk [kb]             10788             8836
Write amplification      12.0075           9.83485

                         Develop branch    This patch
Total pages              760               603
Used size                1425408           1191936
Total on disk            3293184           2650112
Write amplification      3.63268           3.51402

As we can see in both tables, we use less disk space after the patch, but memory usage has not gone down everywhere.

Overall we use approximately 20% less disk space.
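For reference, the write amplification reported in the tables above is simply the ratio of bytes physically written to bytes of useful payload; a trivial sketch:

```cpp
// Write amplification: how many bytes hit the disk per byte of payload.
double writeAmplification(double bytesWritten, double payloadBytes)
{
    return bytesWritten / payloadBytes;
}
```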

Separating UIDs and revisions

Another important modification on the storage system was separating UIDs and revisions. Before this patch, we used to store the UID and the revision as keys inside the database.

What’s more, the key was in the format “{this-is-a-uid}0000000000042”. The reason for padding the number with zeroes was that we need the data to be ordered, and since the keys had to be stored as strings (because of the UID part), the revision numbers had to sort correctly as strings.

However, every revision is unique, so we can store only the revision as the key. This also allows us to store the key as a number, use the “integer key” feature of LMDB so the sorting stays correct, and save a nice amount of space.
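The reason the zero padding was necessary in the first place (and what integer keys make unnecessary) is that lexicographic string comparison misorders unpadded numbers; a quick illustration:

```cpp
#include <string>

// Lexicographic comparison, as a database would apply it to plain string keys.
bool lexLess(const std::string &a, const std::string &b)
{
    return a < b;
}
```

With LMDB’s MDB_INTEGERKEY flag, keys are compared as native integers instead, so the padding (and its wasted bytes) can go.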

But, since we can keep old revisions, we need to track them. Therefore we need another table for mapping UIDs and revisions.

The rationale behind this patch is that since the “uid to revisions” database will be much smaller than the main database, it can be put into memory, leading to faster lookups.
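Conceptually, the two-table layout looks like this (an in-memory sketch with hypothetical names; the real tables are LMDB databases):

```cpp
#include <cstdint>
#include <map>
#include <string>
#include <vector>

// Main table keyed by revision alone, plus a small side table mapping
// each UID to its revisions.
std::map<uint64_t, std::string> mainTable;              // revision -> entity blob
std::map<std::string, std::vector<uint64_t>> uidToRevs; // uid -> revisions

void store(const std::string &uid, uint64_t revision, const std::string &blob)
{
    mainTable[revision] = blob;
    uidToRevs[uid].push_back(revision);
}

const std::string &latest(const std::string &uid)
{
    // The side table is small enough to stay in memory, so this lookup is cheap.
    return mainTable[uidToRevs[uid].back()];
}
```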

Even though the results were not as significant as those of the “reducing key sizes” patch, it seems we still got a small performance improvement. We will see more clearly how different the performance is in the near future, though.

In the meantime, we can use the benchmark graphs to see how performance, memory and disk usage were impacted (courtesy of our CI):

[Figures: Disk usage with dummy writes; Write amplification; Memory usage with dummy writes; Initial query time; Pipeline performance]

In the figures above, the blue circles show the impact of the first patch and the green circles show the impact of the second patch.

The important takeaways from these wonderful graphs are that:

  • Disk usage has gone down, especially the write amplification
  • Memory usage has gone down a bit.
  • In “Memory usage with dummy writes”, we can also see the difference between counting the memory mapping of LMDB or not when measuring memory usage. This was explained in another blog post by Christian.
  • The “Initial query time” graph is interesting: it shows that in the initial version of the first patch, some keys were converted between the binary and the displayable format back and forth, leading to a performance regression. This was fixed quickly afterwards and shows the true improvement of reducing key sizes.
  • Querying a mail from the database is now faster, but the whole pipeline is slower, so this still needs to be investigated.

Overall, I’m quite happy with both of these modifications, even if the second one has smaller benefits. That’s because in addition to the disk usage improvement and memory improvement, the code is a bit cleaner and more readable.

Instead, There Will Be No Kolab 18

by kanarip in kanarip at 12:06, Thursday, 23 August

We had previously planned for a Kolab 18 release, but given the additional work associated with slapping a new version number on an otherwise fully compatible series of enhancements, have decided against it. What enhancements do I speak of? They include the responsive skin for Roundcube, a development started by the upstream project, and finalized… Continue reading Instead, There Will Be No Kolab 18

Last week in Kube

by cmollekopf in Finding New Ways… at 10:33, Thursday, 23 August

  • Lots of progress on the calendar. We now have a nice little read-only calendar. There’s still important stuff missing, like recurrences, but we’re getting there and I’m already using it with my actual calendar data.
    • A date switcher to move between weeks.
    • We now show the date for each day and the week number for the week view.
    • The events get the proper colors from the calendar.
    • The view can be filtered by calendar using the checkbox.
    • The calendar checkbox now indicates the color, to save some precious space in that area.
    • A tooltip provides the full calendar name if there is not enough space anyway.
    • The calendar list is now scrollable with many calendars.
  • Fixed the name in the menu and dock as well as the mounted image name on Mac OS.
  • Fixed attachment dialogs on Mac OS.
  • The CI now checks the intended download paths for availability.
  • The website now has build status badges from the CI.
  • The “New Email” button (and similar green buttons) now has improved focus indication.
  • We’re now fixing the minimum font size to 11 if there is no QT_QPA_PLATFORMTHEME defined on linux. Qt hardcodes it to 9 otherwise, which is tiny.
  • Because flatpak uses PID namespaces and LMDB uses PIDs for locking, starting two flatpaks results in a crash (the db is shared but the PIDs are guaranteed to be the same). It doesn’t seem like there is a good solution to this short of communicating with the original flatpak and ensuring all related processes are running in the same container. For now this is circumvented by simply not starting a second instance, using a lockfile.
  • We removed the IMAP and similar protocol-specific accounts and replaced them with a single “Custom” account. The account concept is supposed to encapsulate multiple protocols, such as IMAP, SMTP, CalDAV and CardDAV, as one logical unit. It thus doesn’t make much sense to have an IMAP-specific account, and with the new “Custom” account you can still set up IMAP only, or combine it with CalDAV and CardDAV so you can use all aspects of Kube.
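The single-instance lockfile workaround mentioned in the list above can be sketched roughly like this, assuming a POSIX flock-based lock (the function name and lock path are hypothetical; Kube’s actual implementation may differ):

```cpp
#include <fcntl.h>
#include <sys/file.h>
#include <unistd.h>

// The first process takes an exclusive, non-blocking lock on a well-known
// file; a second instance fails to acquire it and can simply exit.
bool acquireInstanceLock(const char *path)
{
    int fd = open(path, O_CREAT | O_RDWR, 0600);
    if (fd < 0) {
        return false;
    }
    if (flock(fd, LOCK_EX | LOCK_NB) != 0) {
        close(fd);
        return false; // another instance already holds the lock
    }
    return true; // keep fd open for the lifetime of the process
}
```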


Kube Commits, Sink Commits

Previous updates

More information on the Kolab Now blog!

“Kube is a modern communication and collaboration client built with QtQuick on top of a high performance, low resource usage core. It provides online and offline access to all your mail, contacts, calendars, notes, todo’s and more. With a strong focus on usability, the team works with designers and UX experts from the ground up, to build a product that is not only visually appealing but also a joy to use.”

For more info, head over to:

Last week in Kube

by cmollekopf in Finding New Ways… at 16:52, Thursday, 16 August

  • We have a new filter overlay. We opted to avoid the inline searchbar entirely, and are using the overlay to display it instead.
  • We try harder to display all search results and not artificially limit them. This was especially problematic because we don’t sort the search results before applying the limit, meaning we could end up losing very relevant search results (such as recent mails).
  • The composer’s HTML switch is gone. Instead we simply always offer the buttons to apply formatting, together with a button to remove all formatting for plaintext.
  • Rémi landed his storage improvement patch which shaves off a cool ~15% of storage requirements. This also helps performance because a smaller db means less data to load into memory. This was followed up by a fix for a performance regression caught by the CI.
  • The Kube icon is now available in all necessary sizes to make it look good on Mac.
  • Important emails are now indicated in the maillist.
  • The default settings of the GMail account have been fixed.
  • There is finally a separate login and email address field in the IMAP account configuration.
  • We have a new website where we now also offer the nightlies. These are now real nightlies that are automatically updated if all CI checks pass.
  • Prevented multiple flatpak instances at the same time as this does not currently work properly.
  • Last but not least, there’s progress on the Calendar.

Never mind the colors, they are coming from the CalDAV backend.


Kube Commits, Sink Commits

Previous updates

More information on the Kolab Now blog!


For more info, head over to:

Kube: new website, new flatpak

by cmollekopf in Finding New Ways… at 18:41, Monday, 06 August

Kube has a new website:


It’s got a fresh, cleaner design, together with less content that is hopefully more to the point.

What comes with it though is that we’ll also be publishing the flatpak and Mac OS nightlies there from now on. The CI that is building those nightlies will be integrated eventually, but that job is not complete just yet.

So update your bookmarks now; going forward, the new site will be the first stop for anything Kube.

Kube 0.7.0 is out!

by cmollekopf in Finding New Ways… at 22:31, Thursday, 12 July

I’m pleased to announce the immediate availability of Kube 0.7.0

Over the past year or so we’ve done a lot of work building and maturing Kube and its underlying platform Sink.
Since the last publicly announced release, 0.3.0, there have been 413 commits to Sink and 851 to Kube. Since that diff is rather large I’ll spare you the changelog and do a quick recap of what we have instead:

  • A conversation view that allows you to read through conversations in chronological order.
  • A conversation list that bundles all messages of a conversation (thread) together.
  • A simple composer that supports drafts and has autocompletion (assisted by the addressbook) for all recipients.
  • GPG support for reading and writing messages (signing and encryption).
  • Automatic attachment of own public key.
  • Opening and saving of attachments.
  • Rendering of embedded messages.
  • A read-only addressbook via CardDAV.
  • Full keyboard navigation.
  • Fulltext search for all locally available messages.
  • An unintrusive new mail hint in the form of a highlighted folder.
  • Kube is completely configuration free apart from the account setup.
  • The account setup can be fully scripted through the sinksh commandline interface.
  • Available for Mac OS.
  • Builds on Windows (But sadly doesn’t completely work yet).
  • The dependency chain has been reduced to the necessary minimum.

While things still change rapidly and we have in no way reached the end of our ever-growing roadmap, Kube has already become my favorite email client to date. YMMV.


Turns out we’re not done yet. Among the next plans we have:

  • A calendar via CalDAV (A first iteration is already complete).
  • Creation of new addressbook entries.
  • A dedicated search view.

While we remain committed to building a first class email experience we’re starting to venture a little beyond that with calendaring, while keeping our eyes focused on the grander vision of a tool that isn’t just yet another email client, but an assistant that helps you manage communication, time and tasks.


Get It!

Of course the release is already outdated, so you may want to try a flatpak or some distro provided package instead:


For more info, head over to:

Notes on building C++ projects on Windows

by cmollekopf in Finding New Ways… at 12:26, Saturday, 07 July

Building C++ projects is bad enough, doing it on Windows is torture. The tooling sucks, the commandline sucks, the OS sucks. I might be biased.
I don’t think there’s a “good” way to build software for windows (Maybe it’s just using native tooling, I wouldn’t know), but here are some notes
on what I did, and perhaps it helps another poor soul out there trying to get something to work on windows.

Orchestrating the build

Unless your project is trivial, you will have to use something to orchestrate your builds. You will have to build all your dependencies (or cobble something together from installers on the internet…). Various solutions exist, none is any good.

Among the options are:

  • CMake with ExternalProject: Not much better than any other scripting language, but would work I guess.
  • Craft: Don’t expect everything to work out of the box, but it’s python and it’s fixable. It also covers everything from fetching the sources to building an installer, which is nice.

Build tools

For the individual parts that you build you’ll require some build tools. Whatever you think of CMake, when it comes to cross platform support there is just nothing better.

Wherever I could choose I chose CMake together with clang-cl which is the $MS style clang frontend.

That leaves you with the projects where you could not choose. In some cases it’s just easier to rewrite the buildsystem in cmake and get on with it.
In other cases you actually have to use an autotools buildsystem, so you have to resort to something like MSYS2.


Installers

There are few options for installers; NSIS still seems like the easiest of the lot, and it works.


CI

You will have to touch that windows system for quite some time, but you’ll want to stop as soon as possible, so set up some CI solution to do the dirty work for you. Personally I’ve used Buildbot; I suppose Jenkins would work as well.

Qt/C++ notes

  • By default everything is hidden (which is the opposite of what we have on linux). Export explicitly what you want to use from a shared library.
  • Shared libraries consist of a .dll containing the code and a .imp file containing the symbol table. Both are compiler specific.
  • C++ ABI is compiler specific. While it is possible to e.g. cross-compile a library from linux using mingw and then link against that on windows,
    it’s not straightforward because you have to generate a .imp file that uses a mangling scheme that the compiler on windows understands (by default it won’t work).
  • If you compile 64bit, all your libraries that you want to link against need to be 64bit.
  • If you use qt from the installer, add a qt.conf file to adjust the prefix. The compiled in paths for e.g. plugins won’t be available on the target system.
  • File paths passed to QML have to be converted with QUrl::fromLocalFile from a string. While absolute paths as strings work fine on unixes, it won’t work on windows.
  • Symlinks aren’t a thing on windows. For icon-themes (which make heavy use of symlinks), use a qrc file.
  • Qt is not deployed with SSL by default and tries to load the openssl libraries at run-time. Qt thus dictates which OpenSSL version you have to use. For Qt 5.9 OpenSSL 1.0.2o will work, for 5.10 you’ll need >= 1.1. Make sure you get 64/32bit depending on your Qt deployment. To deploy with the application I had to build from source, but in general an installer like will work as well (I did not manage to package the appropriate dlls from the installer though). QSslSocket provides functions to check whether the loading worked.
  • libcurl needs to be built with -DCMAKE_USE_OPENSSL=TRUE switch to have ssl support.
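The first note in the list above (symbols being hidden by default on Windows) is usually handled with an export macro along these lines; the macro and function names here are illustrative, not from any particular project:

```cpp
// On Windows, symbols must be exported when building the library and
// imported when consuming it; elsewhere we opt into default visibility.
#if defined(_WIN32)
#  if defined(MYLIB_BUILD)
#    define MYLIB_EXPORT __declspec(dllexport)
#  else
#    define MYLIB_EXPORT __declspec(dllimport)
#  endif
#else
#  define MYLIB_EXPORT __attribute__((visibility("default")))
#endif

// A symbol the library explicitly exports.
MYLIB_EXPORT int addNumbers(int a, int b) { return a + b; }
```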

Windows survival tools

  • Get a backtrace: windbg, make sure you get the 64bit version.
  • strace: procmon.exe
  • Get debug output: DebugView.exe
  • The only barely usable terminal on windows is Cmder
  • It’s possible to setup ssh access to windows, but it will still be a pain to use.
  • Wireshark works on windows too.


  • Stay away from anything autotools if you can. It is often even easier to rewrite the buildsystem with cmake (sad, but true).
    • AWK scripts don’t work on msys2
    • buildsystems abusing compilers to generate code (hint: it’s not going to work)
  • If ninja ends up rerunning the cmake configuration phase over and over in an endless loop, try checking out the source repository again. I think it has something to do with file timestamps…


Last but not least, here’s the code I’m using:

Last week in Kube

by cmollekopf in Finding New Ways… at 11:36, Saturday, 07 July

Last week in Kube

  • Kube builds on Windows and is largely functional. There is still an issue with Xapian not working and LMDB sparse files being broken on windows.
  • Date-range queries have been implemented.
  • The flatpak’s gpg-agent integration has been “fixed” (The proper fix will have to be done on the flatpak side of things).
  • A bunch of bugs in live-query updating have been fixed.
  • We now do some basic conflict resolution to avoid overwriting local changes with changes from the server.
  • Various visual glitches and keyboard navigation issues have been fixed.
  • The generated message-id for new messages no longer leaks the local hostname.
  • Added tooltips for various UI elements.
  • Various message parser fixes, especially for apple mail generated messages with attachments.
  • The Logview has been renamed to Notifications View, and now only shows up if it also contains something.

Kube Commits, Sink Commits

Previous updates

More information on the Kolab Now blog!


For more info, head over to:

Last week in Kube

by cmollekopf in Finding New Ways… at 10:51, Monday, 18 June

Perhaps if Windows wasn’t such a PITA there would be more progress 😉

  • The Conversation view received some vim-style keyboard bindings (because who uses a mouse anyways).
  • The INBOX is now automatically selected when Kube is started, so we show something useful immediately.
  • Progress on Kube for Windows. Everything builds, but there are still a couple of remaining issues to sort out.
  • Ported from QGpgME to plain old GpgME. This was a necessary measure to build Kube on Windows, but it also generally reduced complexity while removing the dependency on two large libraries that do nothing but wrap the C interface.
  • Ported away from readline to cpp-linenoise, which is a much simpler and much more portable replacement for readline.
  • Rémi implemented the first steps for range queries, which will allow us to retrieve only the events that we require to e.g. render a week in the calendar.
  • The storage layer got another round of fixes, fixing a race condition that could happen when initially creating the database for the first time (Blogpost on how to use LMDB).
  • The IMAP resource no longer repeatedly tries to upload messages that don’t conform to the protocol (Not that we should ever end up in that situation, but bugs…).
  • The CalDAV/CardDAV backends are now fully functional and support change-replay to the server (Rémi).
  • The CalDAV backend gained support for tasks.
  • Icons are now shipped and loaded from a resource file after running into too many problems otherwise on Windows.
  • A ton of other fixes for windows compatibility.
  • A bunch of mail rendering fixes (also related to autocrypt among others).
  • Work on date range queries for efficient retrieval of events has been started.

Kube Commits, Sink Commits

Previous updates

More information on the Kolab Now blog!


For more info, head over to:

Last week in Kube

by cmollekopf in Finding New Ways… at 14:34, Tuesday, 03 April

Kube by now is my daily driver, and we’ve managed to iron out a lot of the remaining kinks since the last update.

  • Rémi is now on board Blogpost
  • Xapian based search is alive and kicking Blogpost.
  • Search in conversationview via syntaxhighlighting.
  • Support for operations on aggregated values (such as threads). This allows us to e.g. mark an entire thread as read.
  • Fixed rendering of encrypted+signed messages.
  • Forwarding of encrypted mails (so they are properly re-encrypted to the recipient) (Rémi)
  • A revamp of the Addressbook (Michael)
  • Support for GPG key import and export (attaching the key to the mail) (Rémi)
  • We now highlight folders that contain new mails.
  • We’ve got an experimental but working build on mac (gpg notwithstanding) Blogpost
  • Michael and Rémi are spearheading calendaring in Kube! (we’ve already merged first versions of calendar view and CalDAV backend)

Kube Commits, Sink Commits

Previous updates

More information on the Kolab Now blog!


For more info, head over to:

Here comes a quick overview on recent updates to Kolab 16 packages. Please note: I am only using public information. I have no background knowledge about the releases. In the past days, the following packages have been updated: kolab-freebusy:, Repack of tagged version guam:, Allow empty lines in commands

Last week in Kube

by cmollekopf in Finding New Ways… at 09:47, Wednesday, 14 February

New year, new Kube =)

  • Setup a clearer structure for the application with “Views” as the highest level components (composer, conversation, addressbook, log, accounts).
  • Made sure that individual views are self contained and can be launched using qmlscene. This is not only good for modularity, but simplifies the workflow when working on the UI.
  • Improved the test and prototyping infrastructure: Blogpost
  • A little investigation into where all the memory goes: Blogpost
  • Added an extension mechanism that allows us to easily experiment with new views, without compromising the main application.
  • Support for unlocking kube from the commandline as a poor man’s keyring integration.
  • A rather large cleanup of encryption related code used in the message parser got rid of over 1k SLOC.
  • The encrypted/signed state of a mail is now properly visualized.
  • A storage upgrade mechanism was added (Although upgrading for now means removing all local data).
  • Large payloads are no longer stored externally but inline in the database. Tests have shown that this is not less performant, but improves the fault resiliency and simplifies the system.
  • A first version of Xapian based fulltext search for local content just landed (Blogpost will follow).
  • As always, a variety of bugfixes.

Kube Commits, Sink Commits

Previous updates

More information on the Kolab Now blog!


For more info, head over to:

Here comes a quick overview on recent updates to Kolab 16 packages. Please note: I am only using public information. I have no background knowledge about the releases. In the past days, the following packages have been updated: roundcubemail-plugins-kolab:, Update from version 3.3.4 to 3.3.5 kolab-syncroton:, Update from 2.3.7 to 2.3.8 Because of these […]

This is an updated version of the post from July 2015: For a description of phabricator, see the post from July 2015. Here are the updated instructions for using arcanist with CentOS7: yum install git vim php-cli python-pep8 # this does not work because of contradicting dependencies: # wget -O /etc/yum.repos.d/kolab-tools.repo # yum […]

Last week in Kube

by cmollekopf in Finding New Ways… at 13:50, Thursday, 07 December

We’re now running a Kube program with Kolab Now where you can get a basic Kolab Now account now and see its features grow together with Kube, while the price remains the same. To take advantage of that offer, install the Kolab Now flatpak and sign up through Kube.

  • Fixed a problem where emails with a header larger than 8kb would not be parsed correctly.
  • Added a Kolab Now signup capability to the Kolab Now settings.
  • Added merging of log entries of the same type so you don’t end up with multiple errors.
  • Added a dedicated error view for some classes of errors: Blogpost
  • Added support for PGP encryption: Blogpost
  • Fixed opening of attachments in the flatpak(s) (/tmp needed to be shared with the host system).
  • Added a dockercontainer to the kube repository that is used for CI and can be used for development.
  • Added ASAN and LSAN checkers (LLVM based memory/leak checkers) to sink and fixed various related issues.
  • Created a stresstest for sink which allows continuously running synchronizations and queries. This is used to trigger hard-to-find crashes.

Kube Commits, Sink Commits

Previous updates

More information on the Kolab Now blog!


For more info, head over to:

Today, one of our customers noticed that when composing an e-mail in Roundcube, there is a spell checker button at the top, but only for English. When there was text in the e-mail and he clicked on the spell check button, he got the message: “An error was encountered on the server. […]

Here comes a quick overview on recent updates to Kolab 16 packages. Please note: I am only using public information. I have no background knowledge about the releases. In the past days, the following packages have been updated: cyrus-imapd:, “Prevent unreadable/unwriteable /dev/null from getting in the way” kolab-autoconf:, “Check in version 1.3” kolab: […]

Here comes a quick overview on recent updates to Kolab 16 packages. Please note: I am only using public information. I have no background knowledge about the releases. Today, the roundcubemail package has been updated from version to 1.3.3. This fixes a security issue, described in Also see the release notes and […]

Last week in Kube

by cmollekopf in Finding New Ways… at 11:03, Tuesday, 24 October

Ooops, skipped a couple of weeks. Development did not stop though, but there was some infrastructure work to be done and therefore fewer user-visible changes.

Temporarily reverted the commit to demonstrate incremental query performance improvements.

Kube Commits, Sink Commits

Previous updates

More information on the Kolab Now blog!

“Kube is a modern communication and collaboration client built with QtQuick on top of a high performance, low resource usage core. It provides online and offline access to all your mail, contacts, calendars, notes, todo’s and more. With a strong focus on usability, the team works with designers and UX experts from the ground up, to build a product that is not only visually appealing but also a joy to use.”

For more info, head over to:

Here comes a quick overview on recent updates to Kolab 16 packages. Please note: I am only using public information. I have no background knowledge about the releases. In the past weeks, the kolab-autoconf package has been updated from version 0.1 to 0.2. This affects Debian/Ubuntu and RHEL/CentOS. For details see: kolab-autoconf in OBS and […]

Kube: Finding our Focus

by cmollekopf in Finding New Ways… at 14:44, Thursday, 05 October

Over the past two years we’ve been laying the ground work towards the first goal of a simple and beautiful mail client.
By now we have the first few releases out, and the closer we get to our goal, the less clear it becomes what the next goal on our roadmap is.

So here’s something that we’ll be focusing on: Kolab Now. An obvious reason why we picked Kolab Now is that it is what sustains the larger part of the Kube team, allowing us to work on this project in the first place. However, it’s also a prime example of a completely Open Source and standards-compliant service. Improving the Kolab experience means improving IMAP support, improving the CardDAV implementation, perhaps even adding CalDAV. It also means implementing proper GPG support, and pushing the user experience edge by edge to where we expect it to be. These are things that all standards-compliant services will benefit from. By being essentially the reference installation of Kolab, the Kolab Now service takes variables out of the equation and ensures we can focus on the relevant problems.

Now, this means that we’ll be putting a little more focus on the single-account experience; it does not mean we’ll be dropping support for multi-account setups though. The develop branch (which will lead to the next release) will continue to support multiple accounts and account types. What we will do, though, is acknowledge that very little testing is happening with services other than Kolab, and that we will probably not prioritize any features that are exclusive to other services (such as GMail’s non-standard IMAP behavior) in the near future. It’s about focus, not exclusion.

There are many other goals ahead of course, that’s not the problem. Various platforms to be conquered, CalDAV access to our calendaring data, perfecting the mail experience, a beautiful calendar view, working out the grand scheme of how we tie all these bits together and produce something unique… Lots of exciting stuff that we’re looking forward to working on!

However, it’s also easy to get lost in all those possibilities. It’d be easy to hack together some half-baked implementations for a variety of those ideas, and then revise those implementations or just pick the next bit. But that doesn’t lead to what we want. We want a product that is actually used and just works, and that requires focus. Especially since we’re a small team, it’s more important than ever that we maintain, if not increase, our focus. Kolab Now gives us something to focus on.

Kube for Kolab Now

With that said, I’d like to announce the Kolab Now edition of Kube, that we’ve made available as an early access release.

Kolab Now Configuration
Kube’s simplified account setup for Kolab Now.

This is a completely separate release-stream that supports Kolab Now exclusively, and does not replace general purpose Kube releases. But it is not a separate codebase (For simplicity there exists a kolabnow release branch with a two-line patch, but that’s all there ever will be).

We’ll regularly update this release to share our latest developments with you.

If you already are a Kolab Now user, or would like to become one, then you’re welcome to join us on our journey to bring the best possible Kube experience to your desktop. You’re not only going to benefit from a great service, but you’ll also help sustain the development of Kube.

For future updates, keep an eye on

A customer asked about notifications: you set up a filter in Roundcube so that you are notified whenever an email arrives, with the notification sent to an email address that you check more regularly. It is not like forwarding the message; the notification does not contain the message itself, to force […]

Optionsbleed: Don’t get your panties in a wad

by kanarip in kanarip at 21:30, Thursday, 21 September

You’re a paranoid schizophrenic if you think optionsbleed affects you in any meaningful way beyond what you should have already been aware of, unless you run systems with multiple tenants that upload their own crap to document roots and you’ll happily serve as-is, yet pretend to provide your customers with security; this is a use-after-free… Continue reading Optionsbleed: Don’t get your panties in a wad

Kolab Now: Disruptions this Weekend

by kanarip in kanarip at 20:22, Saturday, 09 September

Some of you, very few of you in fact, may have noticed short-lived disruptions to Kolab Now services over the course of this weekend. This impacts < 1% of our users, really. Symptoms may include your client having been disconnected, and maybe asking you to confirm your password. This is inconvenient, but it has… Continue reading Kolab Now: Disruptions this Weekend

Kolab Now Really Beta (DevOps Edition)

by kanarip in kanarip at 18:54, Friday, 08 September

This week, I accidentally made Kolab Now Beta really beta — though pre-alpha more than beta, strictly speaking — completely intentionally; Oops, I #devops'ed — Kolab Operations (@kolabops) September 5, 2017 I can now proudly announce it runs off of otherwise public GIT source repositories directly, and the developers working on the projects involved… Continue reading Kolab Now Really Beta (DevOps Edition)

Recently I was in the situation where I needed to manage users in Kolab from PHP. There is an API for the Kolab Webadmin, and it is documented here: There is also a PHP class, that I could have used: But for some reason, I am using CURL. It took me some time […]

Here comes a quick overview on recent updates to Kolab 16 packages. Please note: I am only using public information. I have no background knowledge about the releases. In the past days, the roundcubemail-plugin-contextmenu package has been updated from version 2.1.1 to 2.3. This affects Debian/Ubuntu and RHEL/CentOS. For details see: roundcubemail-plugin-contextmenu in OBS and […]

Performance Testing w/ Fedora Help(*2)

by kanarip in kanarip at 16:17, Sunday, 03 September

In the next couple of weeks or so, we’ll be executing performance testing of Kolab on OpenPower in one of the world’s largest testing facilities. How do we do this? With help of Fedora(^2). Part I: The Data Set A good performance test requires a good data set. In the particular set of tests, we… Continue reading Performance Testing w/ Fedora Help(*2)

Here comes a quick overview on recent updates to Kolab 16 packages. Please note: I am only using public information. I have no background knowledge about the releases. In the past days, the erlang package has been built for Debian Jessie for Plesk, so that version 18.3.4 will be available there. It only affects Debian Jessie […]

Kolab for Open Power

by kanarip in kanarip at 20:44, Wednesday, 30 August

Among a variety of deliberations concerning the security and transparency of a little Kolab thing running anywhere — at home, rented space or hybrid cloud — this post is about the transparency of the hardware layer, and our ongoing efforts to make that so. We have said what, why and how on LWN, at events… Continue reading Kolab for Open Power

Here comes a quick overview on recent updates to Kolab 16 packages. Please note: I am only using public information. I have no background knowledge about the releases. In the past days, the package roundcubemail has been updated from 1.2 to version 1.3. It affects both RHEL/CentOS and Debian/Ubuntu. Other packages have been rebuilt due […]

Here comes a quick overview on recent updates to Kolab 16 packages. Please note: I am only using public information. I have no background knowledge about the releases. I did not report on Updates for Kolab 16 for quite a while. But now that we are finally on a production server with Kolab 16 for […]

What Grey Listing Looks Like

by kanarip in kanarip at 16:40, Saturday, 05 August

In week 30, on a Friday morning, we applied something called Grey Listing. I told you that about a week’s worth of information was needed to analyse the underlying statistics on a per-domain, per-sender basis — but the least I can do is give you a sense of what the statistics are. This will consist of… Continue reading What Grey Listing Looks Like

The 3rd Pillar to Save Your Ass

by kanarip in kanarip at 20:07, Thursday, 03 August

A controversial topic, to say the least, is what happens when you double-click a message in a Roundcube messages listing, while also having enabled the preview pane. Two things to consider: A regular way to use Roundcube is with a preview pane, A regular way to give reading a message more vertical real estate is… Continue reading The 3rd Pillar to Save Your Ass

Kolab Now: Grey Listing Applied

by kanarip in kanarip at 19:02, Thursday, 03 August

Aside from other anti-spam measures, we have applied a concept known as grey listing. Here’s a summary of how grey listing works: When an email delivery attempt is made, we know the sending server’s IP address, the sender address, and the recipient address. If this is a previously unseen combination of facts, the delivery attempt… Continue reading Kolab Now: Grey Listing Applied
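The triplet logic summarized above can be sketched in a few lines. This is a minimal in-memory Python illustration, not Kolab Now's actual implementation (a real MTA policy daemon would persist the triplets in a database); the names `seen`, `check_delivery` and the delay value are hypothetical:

```python
import time

GREYLIST_DELAY = 300  # seconds a new triplet must wait before acceptance (assumed value)
seen = {}  # (client_ip, sender, recipient) -> first time the triplet was seen

def check_delivery(client_ip, sender, recipient):
    """Return an SMTP-style verdict for one delivery attempt."""
    triplet = (client_ip, sender, recipient)
    now = time.time()
    first_seen = seen.setdefault(triplet, now)
    if now - first_seen < GREYLIST_DELAY:
        # Temporary failure: a well-behaved sending server retries later,
        # while most spam bots give up and never come back.
        return "450 4.2.0 Greylisted, please try again later"
    return "250 2.0.0 OK"
```

A first attempt from an unseen triplet gets the temporary 450 rejection; once the same triplet retries after the delay, delivery is accepted.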

Make Kolab Now Beta (+ 3-Column Layouts)

by Paul Brown in Kolab Community - Kolab News at 14:10, Thursday, 03 August


Terrible puns aside, Kolab Now Beta is where you can test drive new features that we are considering for inclusion into Kolab Now.

If you would like to try out new layouts and bleeding edge services, you can access Kolab Now Beta simply by typing "beta." (don't forget the dot) before "" in the address bar of your browser.


Today we are trying out a new 3-column email layout for wide screens. The 3-column display shows you your email folders in the left-most column, the list of messages in the central column, and the right-most column displays the body of your selected message.


The 3-column layout makes better use of wide screens, allowing you to see more lines of your messages and more messages in the message list, making it easier to work with large volumes of mail.

You can try the 3-column layout now by accessing Kolab Now Beta and clicking on the gear icon to the left of the Subject column above your message list.

Check out this short video showing the whole process.

DISCLAIMER: The features we test run in Kolab Now Beta are by definition unstable. Although we are pretty sure nothing terrible will happen to your data (it is backed up and secured), we cannot take responsibility if by using Beta your productivity drops, your sanity wanes, or indeed, you accidentally erase something.

Also, Kolab Now Beta is there so you can help us improve Kolab Now. If you come across a bug or have a suggestion to improve a feature, visit us on the Kolab Hub and let us know.

To know more about the things we try out on a regular basis, follow @kolabops on Twitter and strap in, because the ride is pretty wild!

Kolab Now users can start using the Collabora Online suite directly within Kolab Now's web interface as of today. Kolab Now's online office is a webified version of LibreOffice and comes with a word processor, a spreadsheet application and a presentation editor. All are full-featured and, like their offline counterpart, support a very wide range of formats indeed -- ODF of course, but also DOCX, XLSX, PPTX, and literally dozens of other formats.

The documents you create within Kolab Now enjoy the same extreme privacy protection you get for your email, tasks, calendars and contacts. Your data is stored by us, a Swiss company; using open source, peer-reviewed and audited software; developed by some of the most privacy-conscious engineers in the world; and protected by Switzerland's strictest privacy laws.

Kolab Now's office suite allows you to create new documents or upload them from your hard disk and work on them online. You can edit your documents on your own or invite colleagues to work on them with you. One user can start a text document and invite others to a session, and they can all help to shape the final text. Several users can work at the same time on filling in cells of data on the same spreadsheet.

Write and edit ODF documents directly in your Kolab Now account.

And we're not done yet: among our next challenges is to provide a real time, embedded messaging service (read "chat") to make working with your colleagues easier and more fluid.

We have the full story on how this came to be here. We have also published a brief HOWTO so you can easily get started and find your way around the interface.

You can try Kolab Now’s online office apps by signing up for a 30-day money back trial subscription.

If you represent a news organisation and would like to learn more or want to try Kolab Now’s office apps for a review in your publication, please contact our press person and request a demo account.

Kolab Now: Another Round of Updates

by kanarip in kanarip at 01:07, Monday, 10 July

This weekend has seen a variety of systems being issued either of, or a combination of, the following commands: yum -y update; yum --enablerepo=kolab-16-updates-testing -y update; puppet agent -t --no-noop; reboot; rm -f /dev/null; mknod -m 666 /dev/null c 1 3. I don’t expect everyone to know and understand what these pieces mean, so I’ll divide… Continue reading Kolab Now: Another Round of Updates

Kolab Now: Disabled IMAP Proxy

by kanarip in kanarip at 10:08, Monday, 03 July

Since last weekend’s upgrade of Kolab Now to Kolab 16, some customers have reported IMAP connection and folder synchronization issues. I have therefore elected to bypass the IMAP proxy used to filter groupware folders, basically returning all IMAP client connections to the same behaviour and connection end-points you were accustomed to before. While our tests… Continue reading Kolab Now: Disabled IMAP Proxy

Re: Product Manager vs. Product Owner

by kanarip in kanarip at 00:08, Monday, 03 July

Dear Melissa Perri, I’ve read your article entitled “Product Manager vs. Product Owner” with interest,  and I recommend my readers also browse Melissa’s blog for other interesting articles. However, I would like to take this opportunity to respectfully disagree with some of your article’s implied rhetoric to be entertained and conclusions, and challenge others. In my… Continue reading Re: Product Manager vs. Product Owner

Upgrade of Kolab Now to Kolab 16

by kanarip in kanarip at 16:23, Thursday, 29 June

This weekend, from Sunday 00:00 UTC to Sunday 04:00 UTC, users and customers of Kolab Now may experience intermittent availability of various services. While our data center’s network provider will perform standard network maintenance, so will our staff upgrade our software and databases, and reconfigure infrastructure to get you to the next generation of collaboration… Continue reading Upgrade of Kolab Now to Kolab 16

Installing Kontact on CentOS7

by Timotheus in Kolab – Homepage of Timotheus Pokorra at 10:44, Thursday, 29 June

If you search for “install kontact centos7”, you find the first link: Unfortunately, that does not work. Even if you add the Kolab 16 repository (wget so that libkolab etc can be installed, you still have conflicts because kdepim-libs is installed from the base repository. It seems that the packages on OBS are […]

Kolab Now MX Migration Sorted

by kanarip in kanarip at 20:39, Friday, 23 June

When I switched over mail infrastructure earlier this week, I might have been under a few mistaken impressions — in hindsight, one might qualify at least some of them as embarrassing. Firstly, MX records never work the way you suspect — in part because you expect them to work as specified. Forget that. In a… Continue reading Kolab Now MX Migration Sorted

Purposefully Not the Center of Attention

by kanarip in kanarip at 15:29, Friday, 16 June

While it’s been argued before, over and over again, that the concepts behind a crypto-currency like BitCoin or Ethereum or Ripple or LiteCoin might mean the upheaval of the traditional economy, it’s also been argued that it is actually the underlying blockchain technology that implies the change of pace. I disagree. One can arguably not… Continue reading Purposefully Not the Center of Attention

Dear Lazyweb: No video for VLC on Fedora 26?

by kanarip in kanarip at 13:35, Sunday, 28 May

Dear Lazyweb, even though Fedora 26 is not yet released, I’ve upgraded — now, VideoLAN isn’t displaying video any longer. If I start it with --no-embedded-video I do get video, but it doesn’t seem to be able to run or switch to/from fullscreen. Resetting my preferences has not helped so far. I would appreciate to… Continue reading Dear Lazyweb: No video for VLC on Fedora 26?

Kolab 16 Available for Ubuntu Xenial: Testing

by kanarip in kanarip at 19:17, Tuesday, 09 May

I wrote before about a lack of support for PHP 7 in an essential utility used to generate language-specific bindings to our libraries, called SWIG. Well, on December 29th 2016, SWIG released a version that does support PHP 7. In the past few days, I’ve spent some time in patching both libkolabxml and libkolab packages… Continue reading Kolab 16 Available for Ubuntu Xenial: Testing

Here comes a quick overview on recent updates to Kolab 16 packages. Please note: I am only using public information. I have no background knowledge about the releases. I did not report on Updates for Kolab 16 while some courageous people (dhoffend, airhardt/sicherha, hede, kanarip, and probably more) were making Kolab 16 ready for Debian. […]

There is documentation about how to import Contacts into the Roundcube address books from CSV files: Unfortunately, that documentation does not come with a description of the columns supported. I had a look at the source: From that you can see, that the import from GMail and from Outlook is supported via […]

Kolab 16 for Fedora 25

by Timotheus in Kolab – Homepage of Timotheus Pokorra at 08:39, Saturday, 31 December

This is work in progress, but I just wanted to share the news: I have Kolab 16 packages for Fedora 25 (with PHP7), built on copr! The support for Fedora 24 is broken in OBS, ticket: Fedora 25 was added after that, but it is broken as well, see for example I was […]

Phabricator Packages for EPEL, Fedora

by kanarip in kanarip at 14:44, Thursday, 01 December

Using Pagure and COPR, Tim Flink and I have settled on using common infrastructure to further the inclusion of Phabricator in to the Fedora repositories (and EPEL). I’m hoping this will bear fruit and get more people on board. Our Pagure repository is at Phab Phour Phedora — a mix of Fabulous Four, Phabricator for… Continue reading Phabricator Packages for EPEL, Fedora

Kolab 16.1 for Jessie: Way Ahead of Schedule

by kanarip in kanarip at 15:54, Friday, 18 November

I reported before that I was a little ahead of schedule in making Kolab 16.1 available for Debian Jessie, and as it turns out I was wrong — I’m way ahead. In fact, installing and configuring Kolab 16.1 on Jessie passes without any major head-aches. I think the one little tidbit I have left relates… Continue reading Kolab 16.1 for Jessie: Way Ahead of Schedule

Kolab 16.1 for Jessie: Ahead of Schedule

by kanarip in kanarip at 10:24, Tuesday, 15 November

I told you earlier this week I would be working on providing Debian packages for Kolab 16 next week, but an appointment I had this week was cancelled, and I get to get started just ever so slightly earlier than expected. This lengthens the window of time I have to deal with all the necessities,… Continue reading Kolab 16.1 for Jessie: Ahead of Schedule

the case of the one byte packet

by Aaron Seigo in aseigo at 11:27, Tuesday, 20 September


Yesterday I pushed a change set for review that fixes an odd corner case for the Guam IMAP proxy/filter tool that was uncovered thanks to the Kolab Now Beta program which allows people to try out new exciting things before we inflict them upon the world at large. So first let me thank those who are using Kolab Now Beta and giving us great and valuable feedback before turning to the details of this neat little bug.

So the report was that IMAP proxying was breaking with iOS devices. But, and here's the intriguing bit, only when connecting using implicit TLS; connecting to the IMAP server normally and upgrading with STARTTLS worked fine. What gives?

In IMAP, commands sent to the server are expected to start with a string of characters which becomes the identifying tag for that transaction, usually taking the form of "TAG COMMAND ARGS". The tag can be whatever the client wants, though many clients just use a number that increases monotonically (1, 2, 3, 4, ...). The server will use that tag to prefix the success/failure response in the case of multi-line responses, or tag the response itself in the case of simpler one-line responses. This allows the client to match up the server response with the request and know when the server is indeed finished spewing bytes at it.
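The tag-and-response scheme described above can be illustrated with a tiny sketch. This is purely illustrative Python (Guam itself is written in Erlang), and the function name is hypothetical:

```python
def parse_command(line):
    """Split an IMAP client line of the form 'TAG COMMAND [ARGS]' into its parts."""
    tag, _, rest = line.partition(" ")
    command, _, args = rest.partition(" ")
    return tag, command, args

# A client using monotonically increasing numeric tags might send:
tag, command, args = parse_command("4 LOGIN alice secret")

# The server answers with a line prefixed by the same tag, which lets the
# client match the response back to its request:
response = "4 OK LOGIN completed"
assert response.split(" ", 1)[0] == tag
```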

We looked at the network traffic and in that specific case iOS devices fragment the IMAP client call into one packet with the tag and one packet with the command. No other client does this, and even iOS devices do not do this when using the STARTTLS upgrade mechanism. As a small performance hack, I had allowed the assumption that the "TAG COMMAND" part of client messages would never be fragmented on the network. This prevented the need for buffering and other bookkeeping in the application within a specific critical code path. It was an assumption that was indeed not guaranteed, but the world appeared to be friendly and cooperating. After all, what application would send "4" in one network packet, and then "XLIST" in a completely separate one? Would it (the application, the socket implementation, ..) not compose this nicely into one little buffer and send it all at once? If so, what network topology would ever fragment a tiny packet of a few dozen bytes into one byte packets? Seemed safe enough, what could go wrong .. oh, those horrible words.

So thanks to one client in one particular configuration being particularly silly, if technically still within its rights, I had to introduce a bit of buffering when and where necessary. So I took the opportunity to do a little performance enhancement that was on my TODO while I was mucking about in there: tag/command parsing which is necessary and useful for rules to determine whether they care about the current state of the connection, is now both centralized and cached. So instead of happening twice for each incoming fragment of a command (in the common case), it now happens at most once per client command, and that will hold no matter how many rules are added to a ruleset.
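The buffering-plus-caching approach described above can be sketched like this. Again this is Python rather than Guam's actual Erlang code, and the class and attribute names are made up for illustration:

```python
class ClientSession:
    """Accumulates client bytes until 'TAG COMMAND' is complete, then caches the parse."""

    def __init__(self):
        self.buffer = b""
        self.parsed = None  # cached (tag, command) for the current client command

    def feed(self, fragment):
        """Add one network fragment; parse tag/command at most once per command."""
        self.buffer += fragment
        if self.parsed is None:
            # The command word is only known to be complete once a second
            # space (arguments follow) or the line terminator has arrived.
            head, terminator, _ = self.buffer.partition(b"\r\n")
            parts = head.split(b" ")
            if len(parts) >= 3 or (terminator and len(parts) >= 2):
                self.parsed = (parts[0], parts[1])
        return self.parsed
```

Feeding the pathological one-byte-at-a-time input simply returns `None` until enough has arrived; once cached, rules can consult `parsed` without re-parsing each fragment.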

So, one bug squashed, and one little performance enhancement, thanks to user feedback and the Kolab Now Beta program. As soon as the patch gets through code review, it should get pushed through packaging and deployed on Kolab Now Beta. Huzzah.

Current status for mailrendering in kube & kmail

by Sandro Knauß in Decrypted mind at 09:15, Monday, 18 July

In my last entry I introduced libotp. But this name had a problem: people thought it was a library for one-time passwords, so we renamed it to libmimetreeparser.

Over the last months I cleaned up and refactored the whole mimetreeparser to turn it into a self-contained library.



As a general rule we wanted to make sure that we only have dependencies in mimetreeparser where we can easily tell why we need them. We ended up with:

  • KF5::Libkleo is the dependency we are not happy with, because it pulls in many widget-related dependencies that we want to avoid. But there is light at the end of the tunnel: we will hopefully switch to GpgME directly in the next weeks. GpgME is planning to have a Qt interface that fulfills our needs for decrypting and verifying mails. The source of GpgME's Qt interface is libkleo, which is why the patch will be quite small. At the KDEPIM sprint in Toulouse in spring this year, I already gave the Qt interface a try and made sure that our tests still pass.
  • KF5::Codecs to translate between the different encodings that can occur in a mail
  • KF5::I18n for translations of error messages. If we want consistent translations of error messages we need to handle them in libmimetreeparser.
  • KF5::Mime because the input mail is a mimetree.

Rendering in Kube

In Kube we have decided to use QML to render mails for the user, which made it easy to move all HTML-rendering-specific parts out of the way. So we end up just triggering the ObjectTreeParser and creating a model out of the resulting tree. The model is then the input for QML. QML loads different code for different parts of the mail: for plain text it just shows the plain text, for HTML it loads that part in a WebEngine.
As a matter of fact, the interface we use is quite new and currently still under development (T2308). There will certainly be changes until we are happy with it, and I will describe the interface in detail once we are. As a side note, we don't want separate interfaces for Kube and KDEPIM; the new interface should be suitable for all clients. To avoid constantly breaking the clients, we keep the current interface, develop the new interface from scratch, and then switch once we are happy with it.

Kube rendering

Rendering in Kmail

As before we use HTML as the rendering output, but with the rise of libmimetreeparser, KMail also uses the message part tree as input and translates it into HTML. So here, too, we have a clear separation between the parsing step (handled in libmimetreeparser) and the rendering step, which happens in the messageviewer. KMail has additional support for different mime types like iTip (invitations) and vCard. The particularity of these parts is that they need to interact directly with Akonadi to load information. That way we can detect whether an event is already known in Akonadi or whether we have already saved the vCard, which then changes the visible representation of that part. This all works because libmimetreeparser has an interface for adding additional mime-type handlers (called BodyPartFormatters).
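The BodyPartFormatter idea, pluggable handlers keyed by mime type, can be sketched as follows. This is a simplified Python illustration, not the real interface (which is C++ in libmimetreeparser); the function names and markup are hypothetical:

```python
# Registry of mime-type handlers, analogous to BodyPartFormatters.
formatters = {}

def register_formatter(mime_type, func):
    """Register a rendering handler for one mime type."""
    formatters[mime_type] = func

def render_part(mime_type, payload):
    """Render a single mime part, falling back to a plain-text presentation."""
    handler = formatters.get(mime_type, lambda body: f"<pre>{body}</pre>")
    return handler(payload)

# A specialized handler, e.g. one that could consult Akonadi for invitations:
register_formatter("text/calendar", lambda body: f"<div class='itip'>{body}</div>")
```

Registered types get their dedicated presentation while everything else falls through to the default, which mirrors how a client can extend rendering without touching the parser.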

Additionally, the messageviewer now uses Grantlee for creating the HTML, which is very handy and makes it easy to change the visual presentation of mails just by changing the theme files. That should help a lot if we want to change the look and feel of email presentation, and it additionally allows us to think about different themes for email presentation. We also thought about the implications of easily changeable themes and identified a problem: it shouldn't be that easy to change the theme files, because malicious users could fake good-looking crypto mails. That's why the theme files are shipped inside resource files.

While I was implementing this, Laurent Montel added JavaScript/jQuery support to the messageviewer. So I sat down and created an example that switches the alternative part (the HTML and text parts that can be toggled) with jQuery. We came to the conclusion that this is not a good idea (D1991), but maybe others will come up with good ideas for where we can use the power of jQuery inside the messageviewer.

Alternative switcher 1

Alternative Switcher 2

New IMAP filter/proxy release: guam 0.8, eimap 0.2

by Aaron Seigo in aseigo at 14:58, Friday, 10 June


Over the last few months I have been poking away at a refactoring of the IMAP library that Kolab's IMAP filter/proxy uses behind the scenes, called eimap. It consolidated quite a bit of duplicated code between the various IMAP commands that are supported, and fixed a few bugs along the way. This refactoring reduced the amount of code, makes implementing new commands even easier, and has allowed improvements that affect all commands (usually because they relate to the core IMAP protocol) to be made in one central place. This was rolled out as eimap 0.2 the other week and has made its way through the packaging process for Kolab. This is a significant milestone for eimap on the path to being considered "stable".

Guam 0.8 was tagged last week and takes full advantage of eimap 0.2. This has entered the packaging phase now, but you can grab guam 0.8 here:

Highlights of these two releases include:

  • eimap
    • several new IMAP commands supported
    • all core IMAP response handling is centralized, making the implementation for each command significantly simpler and more consistent
    • support for multi-line, single-line and binary response command types
    • support for literals continuation
    • improved TLS support
    • fixes for metadata fetching
    • support for automated interruption of passthrough state to send structured commands
    • commands receive server responses for commands they put into the queue
  • Guam
    • ported to eimap 0.2
    • limit processcommandqueue messages in the FSM's mailbox to one in the per-session state machine
    • be more expansive in what is supported in LIST commands for the groupware folder filter rule
    • init scripts for both sysv and systemd

One change that did not make it into 0.8 was the ability to define which port to bind guam listeners to by network interface. This is already merged for 0.9, however. I also received some interest in using Guam with other IMAP servers, so it looks likely that guam 0.8 will get testing with Dovecot in addition to Cyrus.

Caveats: If you are building by hand using the included rebar build, you may run into some issues with the lager dependencies, depending on what versions of lager and friends are installed globally (if any). If so, change the dependencies in rebar.config to match what is installed. This is largely down to rebar 2.x being a little limited in its ability to handle such things. We are moving to rebar3 for all the erlang packages, so eimap 0.3 and guam 0.9 will both use rebar3. I have guam already building with rebar3 in a 0.9 feature branch, and it was pretty painless and produces something even a little nicer already. As soon as I fix up the release generation, this will probably be the first feature branch to land in the develop branch of guam for 0.9!

It is also known that the test suite for Guam 0.8 is broken. I have this building and working again in the 0.9 branch, and will probably be doing some significant changes to how these tests are run for 0.9.


by Aaron Seigo in aseigo at 11:41, Friday, 10 June


I joined Kolab Systems just over 1.5 years ago, and during that time I have put a lot of my energy and time into working with the amazing team of people here to improve our processes, and our execution of those processes, around sales, communication, community engagement, professional services delivery, and product development. They have certainly kept me busy and moving at warp 9, but the results have been their own reward as we have moved together from strength to strength across the board.

One place that this has been visible is the strengthening of our relationship with Red Hat and IBM, which has culminated in two very significant achievements this year. First, Kolab is available on the Power 8 platform thanks to a fantastic collaboration with IBM. For enterprise customers and ISP/ASPs alike who need to be able to deliver Kolab at scale in minimum rack space, this is a big deal.

For those with existing Power 8 workloads, it also means that they can bring in a top-tier collaboration suite with quality services and support backing it up on their already provisioned hardware platform; put more simply: they won't have to support an additional x86-based pool of servers just for Kolab.

To help introduce this new set of possibilities, we have organized a series of open tech events called the Kolab Tasters in coordination with IBM and Red Hat.


Besides enjoying local beverages and street food with us at these events, attendees will be able to experience Kolab on Red Hat Enterprise Linux on Power 8 first-hand on the demo stations that will be available around the event site. Presentations from Kolab Systems, IBM and Red Hat form the main part of the agenda for each of these events, and will give attendees a deep understanding of how the open technologies from IBM (Power 8), Red Hat (Linux OS), and Kolab Systems (Kolab) deliver fantastic value and freedom, especially when used together.

The first events scheduled are:

  • Zürich, Switzerland on the 14th June, 2016
  • Vienna, Austria on the 22nd June, 2016
  • Bern, Switzerland on 28th June, 2016

There are some fantastic speakers lined up for these events, including Red Hat's Jan Wildeboer and Dr. Wolfgang Meier, who is director of hardware development at IBM. At the Vienna event, we will also be celebrating the official opening of Kolab Systems Austria, which has already begun to support the needs of partners, customers and government in the beautiful country of Austria from our office in Vienna.

Events in Germany, starting in Frankfurt, will be scheduled soon, and we will be doing a "mini-taster" at the Kolab Summit which is taking place in Nürnberg on the 24th and 25th of June. Additional events will be scheduled in accordance with interest over the next year. I expect this to become a semi-regular road-show, in fact.

And speaking of the Kolab Summit: it is also going to be a fantastic event. Co-hosted at the openSUSE Conference, we will be sharing the technical roadmap for Kolab for 2016-2017; unveiling our partner program for ISPs, ASPs and system integrators that we incrementally rolled out earlier this year and which is now ready for broad adoption; listening to guest speakers on timely topics such as Safe Harbor in the EU and taking Kolab into vertical markets; and, of course, having a busy "hallway session" where you can meet and talk with key developers, designers, management and sales people from the Kolabiverse.

You can still book your free tickets to these events from their respective websites:

Kolab Summit 2.0: Putting the freedom back into the cloud.

Join us this June 24-25 in Nürnberg, Germany for the second annual Kolab Summit. Like last year's summit, we've accepted the invitation of the openSUSE Community to co-locate the summit with the openSUSE Conference, which will be held June 22-26 in the same location. And because we have some special news to share and celebrate, we're also putting on a special edition Kolab Taster on Friday June 24th. The overarching theme for this year's summit will be how to put the freedom back into the cloud.

Using the US cloud is increasingly fraught with technical and legal insecurities. Cross-border transfer of data is becoming more complex, and sovereign control of your data seems increasingly hard to achieve. Kolab believes there needs to be a better answer than simply giving up the benefits of the cloud, renouncing its convenience and cost efficiency. That is why during this year's summit we will be discussing the impact of the Safe Harbor ruling, and how Kolab has been working with our partners at Collabora and others to provide a fully open, collaborative cloud technology platform.


Join us to learn how Kolab has been helping Application Service Providers (ASPs) withstand Office365's onslaught, and how the cloud of the future will be running Kolab. Get exclusive insights and previews into our thinking, road map and the state of development of Kube and other components.


In the business section we will be talking with partners about our new partner programme, business opportunities Kolab offers, and how to build a valuable proposition for your customers around Kolab.


Finally, join us on the evening of the 24th for an exclusive Kolab Taster where we will have some major news to celebrate. And while you're there, don't miss the opportunity to also be part of the exciting openSUSE Conference, which has finally come home to Nürnberg, the home city of SUSE, complete with castles, food, beer and much to see.

Tickets are FREE, so grab yours today.

libotp - email rendering in kube

by Sandro Knauß in Decrypted mind at 13:03, Friday, 05 February

The important part of a mailreader is rendering an email. Nobody likes to read the raw MIME message.
For Kube we looked around for what we should use for mail rendering, and came to the conclusion that we will use parts of kdepim for that task. But the current situation was not usable for us, because the rendering was tangled together with other parts of the mailviewer (like Akonadi, widget code, ...), so we would have ended up depending on nearly everything in kdepim, which was a no-go for us. After a week of untangling the rendering part out of messageviewer, we ended up with a nice library that does only mail rendering, called libotp (branch dev/libotp). For the moment that is just a working name. Maybe someone will come up with a better one?

Why a lib - isn't email rendering easy?

Well, if you look at it from the outside, it really looks like an easy task to solve. But in detail the task is quite complicated. We have encrypted and signed mail parts, HTML/non-HTML mail parts, alternative MIME structures, broken mail clients and so on. And then we also want user interaction: do we want to decrypt the mail by default? Do we want to verify mails by default? Do we allow external HTML links? Does the user prefer HTML over non-HTML? ...
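
Even Python's standard library is enough to see the shape of the problem: a harmless two-part message is already a tree the viewer has to walk, and real mail stacks signed and encrypted layers on top of it. This sketch is unrelated to libotp's actual code:

```python
from email.message import EmailMessage

# Build a minimal multipart/alternative mail: one plain-text and one HTML body.
msg = EmailMessage()
msg["Subject"] = "demo"
msg.set_content("plain body")
msg.add_alternative("<p>html body</p>", subtype="html")

# Walking the tree already forces a choice between the two branches.
for part in msg.walk():
    print(part.get_content_type())
# multipart/alternative, text/plain, text/html
```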

In total you have to keep many things in mind while rendering a mail. And we are not even talking yet about the pluggable system we also want, which would let us create our own rendering for special types of mails. All these things are already solved by the messageviewer of kdepim: it has highly integrated crypto support, supports many different MIME types and has been in use for years; additionally the test coverage is quite high. As you can see in the image above, we are already able to decrypt and verify mails.


libotp is a library that renders emails to HTML. Maybe you ask: why the hell HTML? I hate HTML mails, too :D But HTML is easy to display, and back in the day there was no alternative for showing something that dynamic. Nowadays we have other solutions like QML, but we still have HTML messages that we want to be able to display. Currently we have no way to try out QML rendering for mails, because the output of libotp is limited to HTML. I hope to solve this as well and give libotp the ability to render to different output formats, by splitting the monolithic task of rendering an email to HTML into a parse step, in which the structure of the email is translated into the visible parts (a signed part is followed by an encrypted one, which has an HTML part as a child, ...), and the pure rendering step.
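
The split could look roughly like this (purely illustrative names; libotp's real types differ): the parse step produces a neutral tree of part descriptors, and independent renderers - HTML today, QML later - walk that same tree.

```python
from dataclasses import dataclass, field

@dataclass
class Part:
    """Neutral part descriptor produced by the parse step."""
    kind: str                      # e.g. "signed", "encrypted", "html", "text"
    content: str = ""
    children: list = field(default_factory=list)

def render_html(part):
    """One possible rendering step; a QML renderer could walk the same tree."""
    inner = "".join(render_html(child) for child in part.children)
    if part.kind == "text":
        return f"<pre>{part.content}</pre>"
    if part.kind == "html":
        return part.content
    return f'<div class="{part.kind}">{inner}</div>'
```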

If you follow the link libotp branch dev/libotp, you may wonder whether a fork of messagelib is happening. No - the repo was created so we can use libotp in kube now, and I took many shortcuts and used ugly hacks to get it working. The plan is for libotp to be part of the messagelib repo, and I have already managed to push the first batch of (polished) patches upstream. If everything goes well, I will have everything upstreamed by next week.

How to use it?

At the moment it is still work in progress, so it may change - also if others step up and give input about the way they want to use it.
Let's have a look at how kube is using it: kube/framework/mail/maillistmodel.cpp

// mail -> kmime tree
const auto mailData = KMime::CRLFtoLF(file.readAll());
KMime::Message::Ptr msg(new KMime::Message);
msg->setContent(mailData);
msg->parse();

First step - load the mail into KMime to get a tree with the MIME parts. file is a mail in mbox format located in a local file.

// render the mail
StringHtmlWriter htmlWriter;  
QImage paintDevice;  
CSSHelper cssHelper(&paintDevice);  
MessageViewer::NodeHelper nodeHelper;  
ObjectTreeSource source(&htmlWriter, &cssHelper);  
MessageViewer::ObjectTreeParser otp(&source, &nodeHelper);  

Now initialize the ObjectTreeParser. For that we need:

  • HtmlWriter, which gets the HTML output while rendering and does whatever the user wants with it (in our case we just save it for later use - see htmlWriter.html()).

  • CSSHelper creates the header of the HTML with CSS; it gives you the possibility to set the color schemes and fonts used for HTML rendering.

  • NodeHelper is the place where information is stored that needs to be kept around longer (pointers to encrypted parts that are currently being decrypted asynchronously, or extra mail parts that are visible but not their own MIME part in the mail). NodeHelper also informs you when an async job has ended. At the moment we use neither the async mode nor working mail-internal links; that's why the NodeHelper is a local variable here.

  • ObjectTreeSource is the settings object; here you can store whether decryption is allowed, whether you prefer HTML output, whether you like emoticons, ...

  • And last but not least the ObjectTreeParser (Otp) itself. It does the real work of parsing and rendering the mail :)


return htmlWriter.html();  

After initializing the Otp we can render the mail. This is done with otp.parseObjectTree( Around that call we need to tell the htmlWriter that the HTML creation has begun, and afterwards that it has ended.

As you may have noticed, kube has its own overloads of all these objects except the ObjectTreeParser and the NodeHelper. This already makes libotp highly configurable for others' needs.

next steps

After the week of hacking, the current task is to push things upstream, so as not to create a fork and to focus on one solution for rendering mails for both KMail and Kube. After upstreaming I will start to extract the libotp parts out of messageviewer (currently it is only another cmake target and not really separated) and make messageviewer depend on libotp. With that, libotp becomes an independent library that is used by both projects, and I can focus again on polishing libotp and messageviewer.


Here you can see that kube now renders your spam nicely:
rendered spam

Like the spam says (and spam can't lie) - get the party started!

Kolab 16 at FOSDEM'16

by Paul Brown in Kolab Community - Kolab News at 10:54, Sunday, 31 January

The biggest European community-organised Open Source event is upon us and, this year, we at Kolab Systems have a very special reason to be there: we'll be presenting the new Kolab 16.1 [1] to the world during the meetup.

The development team has worked long and hard on this release, even longer and harder than usual. And that slog has led to several very interesting new features built into Kolab 16.1, features we are particularly proud of.

Take for example GUAM, our all-new, totally original, "IMAP-protocol firewall". Guam allows you to, for example, access Kolab from any client without having to see the special Kolab groupware folders, such as calendars, todos, contacts, and so on. As Guam is configured server-side, users do not have to do anything special on their clients.

Guam keeps users' inboxes clean, as it pipes the resource messages in the background only to the apps designated to deal with them. So a meeting scheduled by a project leader will only pop up in the calendar app, and a new employee recruited by HR will silently get added only to the contact roster, without the users ever accidentally seeing in their email the system-generated messages that prompt the changes.

As Guam is actually an IMAP proxy filter, something like a rule-based IMAP firewall (Aaron Seigo dixit), it is very flexible and allows you to do so much more. Come by our booth and find out how our developers have been using it and discuss your ideas with the people who actually built it.
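
A rule of this kind can be sketched in a few lines of Python. This is a toy illustration only - Guam itself is written in Erlang, its actual LIST handling is more capable, and the folder names below are just examples:

```python
# Toy sketch of a Guam-style rule: filter untagged LIST responses so that
# Kolab groupware folders never reach the client. Folder names are examples.
GROUPWARE_FOLDERS = {"Calendar", "Contacts", "Tasks", "Notes", "Configuration"}

def filter_list_responses(lines):
    """Drop '* LIST' responses that advertise a groupware folder."""
    kept = []
    for line in lines:
        if line.startswith("* LIST"):
            mailbox = line.rsplit(" ", 1)[-1].strip('"')
            if mailbox.split("/")[-1] in GROUPWARE_FOLDERS:
                continue  # hide this folder from the client
        kept.append(line)
    return kept
```

The principle - rewrite the IMAP stream between server and client according to configurable rules - is what makes the "IMAP firewall" description fit.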

Then there's MANTICORE. With Manticore we are taking our "collaborating in confidence" mantra to the next level. Manticore currently works only on documents, but ultimately it will bring collaborative and simultaneous editing to emails, notes and calendars as well, all without having to leave the Kolab web environment. Need to write an email with the input from a couple of colleagues? Instead of passing the text around, which is slow and error-prone (who hasn't ever sent the second to last version off by mistake?), just open it up with Manticore and edit the text all together, everybody at the same time. Interactive proofreading action!

Collaborative editing has arrived in Kolab 16.1

Finally, we have implemented an OTP (One-Time Password) authentication method to make logging into Kolab's web client even more secure. Every time a user goes to log in, you can have the system send a single-use code to her phone. The code is a random 6-digit number which is only valid for a few minutes and changes every time.
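
The mechanics behind such a code are simple enough to sketch (an illustration of the idea, not Kolab's actual implementation):

```python
import hmac
import secrets
import time

def generate_otp():
    """Return a random 6-digit single-use code plus its issue timestamp."""
    return f"{secrets.randbelow(1_000_000):06d}", time.time()

def verify_otp(submitted, code, issued_at, ttl=300):
    """Accept the code only within `ttl` seconds, comparing in constant time."""
    if time.time() - issued_at > ttl:
        return False
    return hmac.compare_digest(submitted, code)
```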

If you're an admin tired of the security nightmares that are passwords copied onto slips of paper stuck to monitors; sick of trying to convince users that "1234" is really not that clever a password; or fed up with hearing complaints every time you have to renew their credentials because they have been hacked yet again: this is a feature you'll definitely want to activate.

Of course there's much more to Kolab 16.1, including optimised code, a sleeker design, and usability improvements all round. You'll be able to see all these new features in action at our booth during the show and you'll also be able to meet the developers and ask them as many questions as you like on how you can use Kolab in your own environment.

What's even better is that you can also come over to our booth and copy a full, ready-to-go version of Kolab as a virtual machine onto your USB thumbdrive. Take it home with you and use it at your leisure, because, that's the other thing, we are fusing both the Enterprise and Community versions of Kolab together. This means the exact same code base from the Enterprise, our corporate-grade groupware, gets released to Community, with new versions, with all features, coming out regularly every year.

But that's not the only user news we have. We'll also be presenting our new Kolab Community portal during the conference. Kolab Community is designed to be a place for developers, admins and users to cooperate in the creation of the most reliable and open groupware collaboration suite out there. On Kolab Community you will be able to interact with the developers at Kolab Systems and other users on modern, feature-rich forums and mailing lists. The portal also hosts the front end to our bug tracker and our Open Build Service that lets you roll your own Kolab packages, tailored to your needs.

For example, if you need a pocket Kolab server you can take anywhere, why not build it to run on a Raspberry Pi? In fact, that's another thing we'll be demoing at our booth. Bring along an SD card (8 GB at least) and we'll flash Kolab 16.1 for the Pi onto it for you. You'll get to try the system right there, and then take it away with you, fully ready to be deployed at home or in your office.

Useful Links

[1] Download Kolab 16.1:

[2] FOSDEM expo area:

And just to prove how well all the Kolab-related tools work everywhere, we'll also be running Kontact, KDE's groupware client, on a Windows 10 box. Kontact will be accessing a Kolab 16.1 server (of course) for all its email, calendars, contacts, notes, and so on; proving how both frameworks combined have nothing to envy from other proprietary alternatives, regardless of the underlying platform. You'll also be able to get the first glimpses of Kube, our new generation email client.

Apart from all of the above, there'll also be Q&As, merch (including some dope tattoos), special one-time Kolab Now offers (which are also somehow related to tattoos). You know: the works.

Discounts 4 Tats

Update: Tattoos get you discounts! Take a photo with a tattoo proving your love for Free Software and get a 30% discount on your ultra-secure Kolab Now online mail, file & groupware account. Get your tat-discount here.

Come visit us. We'll be in Building K, Level 1, as you go in, booth 4 on the left [2]; try out Kolab 16.1's new features live; get cool and useful stuff; join our community.

Sound like a plan?

More FOSDEM news: Kolab Systems's CEO, Georg Greve, and Collabora's General Manager, Michael Meeks, will be signing an agreement during the event to integrate CloudSuite into Kolab.

(Collabora are the guys who offer enterprise support for LibreOffice, built the LibreOffice on Android app, and created LibreOffice Online.)

Collabora's CloudSuite supports word-processing, spreadsheets, presentations

CloudSuite is Collabora's version of LibreOffice in the cloud and comes with built-in collaborative editing. It allows editing and accessing online documents that can also be shared easily across users. Whole teams can collaboratively edit text documents, spreadsheets, and presentations. You can access and modify the shared documents through a web interface, directly from the LibreOffice desktop suite, or even from Collabora's own Android LibreOffice app.

Of course CloudSuite supports the full range of document formats LibreOffice does. That includes, among many others, the ISO-approved native Open Document formats, and most, if not all, Microsoft Office formats.

What with Kolab also gaining collaborative editing of text documents (already available), and emails, notes, calendars, and contacts over the next few months, you're probably seeing where this is going: by combining Kolab with Cloudsuite, the aim is to integrate a suite of office tools, each and all of which, from the email client to the presentation editor, can be used collaboratively and online by all the users of a team. And we'll be doing it using exclusively free and open sourced technologies and open standards.

You can already edit texts collaboratively from within Kolab

So, to summarize with a tongue twister: Kolab & Collabora collaborate on collaboration.

Kolab at FOSDEM 2016

by Aaron Seigo in aseigo at 20:04, Thursday, 28 January

Kolab is once again back at FOSDEM! Our booth is in the K hall, level 1, group A, #4. (What a mouthful!) We have LibreOffice as a neighbor this year, which is a happy coincidence, as we have some interesting news for all Kolab users that is related to office documents that we will be sharing at FOSDEM. That's not the only reason to visit us, though!

We'll be announcing a new release of Kolab with some exciting new features, and showing it running on multiple systems including a single-board Raspberry Pi. In fact, if you have a Pi of your own, bring an SD card and we'll be happy to flash it for you with Kolab goodness. Instructions and images will be made available online after FOSDEM as well, so don't worry about missing out if you don't make it by the booth.

We'll also be making a very special offer to all you Free software lovers: a special discount on Kolab Now! Drop by the booth to find out how you can take advantage of this limited time offer.

And last, but far from least, we'll be showing off the new Kolab Community website and forums, sharing the latest on Kolab devel, Roundcube Next and the new desktop client, Kube. Several members of the Kolab development team and community will be there. I'll be there as well, and really looking forward to it!

Oh, and yes, there is the matter of that announcement I hinted at in the first paragraph of this blog entry ... you really want to come by and visit us to find out more about it ... and if you can't, then definitely be watching the Kolab blogs and the software news over the next few days!

Kolab: Bringing All Of Us Together, Part 2

by Aaron Seigo in aseigo at 19:47, Thursday, 28 January

This is the second blog in a series about how we are working to improve the Kolab ecosystem. You can read the first installment, about the new release strategy, here. This time we are going to look at our online community infrastructure.

All the good things

There is quite a significant amount of infrastructure available to members of the Kolab community: a packaging system, online translation, a Phabricator instance where the source code and more is hosted, a comprehensive documentation site, mailing lists, IRC channels and blogs. We are building on that foundation, and one example of that is the introduction of Phabricator this past year.

Of course, it does not matter how good or numerous these tools are if people either do not find them or they are not the sorts of tools people need. We took a look at what we have on offer, how it is being used and how we could improve. The biggest answer we came up with was: revamp the website and drop the wiki.

Introducing a new community website!

Today we turned the taps on a brand new website at Unlike the previous website, this one does not aim to sell people on what makes Kolab great; we already have other websites that do that pretty well. Instead, the new design focuses on making the community resources discoverable by putting them front and center.

In addition to upgrading the blog roll and creating a blog just for announcements and community news, we have created a set of web forums we are calling the Hub. Our design team will be using it to collaborate with users and other designers, and we invite developers, Kolab admins and users alike to discuss everything Kolab at the Hub.

Of course, we also took the opportunity to modernize the look, give it a responsive design, and reflect the new Kolab brand guidelines. But that is all just icing on the cake compared to the improved focus and new communication possibilities.

From here forward

We will be paying a lot of attention to community engagement and development in 2016, and this website, unveiled in time for FOSDEM, is a great starting point. We will be adding more over time, such as real-time commit graphs and the like, as well as taking your feedback on what would make it more useful to you. We are all looking forward to hearing from you! :)

Feel the FS love at FOSDEM #ILoveFS

by Aaron Seigo in Kolab Community - Kolab News at 15:15, Tuesday, 26 January

Well, it’s that special time of year again. A time when people can show their appreciation for the ones they love, that’s right, free software!

Free Software drives a huge number of devices in our everyday life. It ensures our freedom, our security, civil rights, and privacy. These values have always been at the heart of Kolab and what better way to say #ILoveFS than with the gift of Kolab Now!

To celebrate ‘I love Free Software Day’ we are offering a 30% discount* on all new Kolab Now accounts until the 14th February 2016.

So, how does this work?

Here’s the fun bit. Simply show your free software love by posting a picture on social media of your Kolab tattoo using #ILoveFS (available exclusively at FOSDEM), or simply share your Free Software contributions with us by email or in person. You can do that bit later if you like; for now, just head on over and grab your new Kolab Now account. The offer ends 14th February 2016, so grab yours fast.

*The following Terms & Conditions apply
30% Discount is applicable for the first 6 months for new individual or group accounts only. 30% discount is applicable for the first month only on new hosting accounts. Discount will be applied to your new account within the first 30 days of signup. Offer ends 14 February 2016. Kolab Systems AG has the right to withdraw this offer at any time. Cash equivalent is not available.

Driving Akonadi Next from the command line

by Aaron Seigo in aseigo at 21:26, Monday, 28 December

Christian recently blogged about a small command line tool that added to the client demo application a bunch of useful functionality for interacting with Akonadi Next from the command line. This inspired me to reach into my hard drive and pull out a bit of code I'd written for a side project of mine last year and turn up the Akonadi Next command line to 11. Say hello to akonadish.

akonadish supports all the commands Christian wrote about, and adds:

  • piping and file redirect of commands for More Unix(tm)
  • able to be used in stand-alone scripts (#!/usr/bin/env akonadish style)
  • an interactive shell featuring command history, tab completion, configuration knobs, and more

Here's a quick demo of it I recorded this evening (please excuse my stuffy nose ... recovering from a Christmas cold):

We feel this will be a big help for developers, power users and system administrators alike; in fact, we could have used a tool exactly like this for Akonadi with a client just this month ... alas, this only exists for Akonadi Next.

I will continue to develop the tool in response to user need. That may include things like access to useful system information (user name, e.g.), new Akonadi Next commands, perhaps even the ability to define custom functions that combine multiple commands into one call ... it's rather flexible, all in all.

Adopt it for your own

Speaking of which, if you have a project that would benefit from something similar, this tool can easily be re-purposed. The Akonadi parts are all kept in their own files, while the functionality of the shell itself is entirely generic. You can add new custom syntax by adding new modules that register syntax which references functions to run in response. A simple command module looks like this:

namespace Example
{

bool hello(const QStringList &args, State &state)
{
    state.printLine("Hello to you, too!");
    return true;
}

Syntax::List syntax()
{
    return Syntax::List() << Syntax("hello", QObject::tr("Description"), &Example::hello);
}

} // namespace Example


Autocompletion is provided via a lambda assigned to the Syntax object's completer member:

sync.completer = &AkonadishUtils::resourceCompleter;

and sub-commands can be added by adding Syntax objects to the children member:

get.children << Syntax("debug", QObject::tr("The current debug level from 0 to 6"), &CoreSyntax::printDebugLevel);

Commands can be run in an event loop when async results are needed by adding the EventDriven flag:

Syntax sync("sync", QObject::tr("..."), &AkonadiSync::sync, Syntax::EventDriven);

and autocompleters can do similarly, using the State object passed in, which provides commandStarted/commandFinished methods.

All in all, pretty straightforward. If there is enough demand for it, I could even make it load commands from a plugin that matches the name of the binary (think: ln -s genericappsh myappsh), allowing it to be used entirely generically with little fuss. *shrug* I doubt it will come to that, but these are the possibilities that float through my head as I wait for compiles to finish. ;)

For the curious, the code can be found here.

Initial sync of akonadi from the command line

by Sandro Knauß in Decrypted mind at 10:55, Friday, 18 December

When you start with Kontact you have to wait until the first sync of your mails with the IMAP or Kolab server is done. This is very annoying, because the first impression is that Kontact is slow. So why not run this first sync from a script, so that the data is already available when the user starts Kontact for the first time?

1. Setup akonadi & kontact

We need to add the required config files to a new user home. This is simply copying config files to the new user home; we just need to replace the username, email address and password. Okay, that sounds quite easy, doesn't it? Oh wait - the password must be stored inside KWallet. KWallet can be accessed from the command line with kwalletcli. Unfortunately we can only use kwallet files not encrypted with a password, because there is no way to enter the password with kwalletcli. Maybe pam-kwallet would be a solution; for plasma5 there is an official part for this, kwallet-pam, but I haven't tested it yet.

As an alternative to copying files around, we could have used the kiosk system from KDE. With that you are able to preseed the configuration files for a user and additionally have the possibility to roll out changes, e.g. if the server address changes. But for a smaller setup this is kind of overkill.

2. Start needed services

For starting a sync, we first need Akonadi running, and Akonadi depends on a running DBus and kwalletd. KWallet refuses to start without a running X server and is not happy with just xvfb.

3. Triggering the sync via DBus

Akonadi has a great DBus interface, so it is quite easy to trigger a sync and detect the end of the sync:

import gobject
import dbus
from dbus.mainloop.glib import DBusGMainLoop

# DBus signals are delivered through the GLib main loop.
DBusGMainLoop(set_as_default=True)

def status(status, msg):
    if status == 0:  # 0 means the resource is idle again, i.e. the sync is done
        gobject.timeout_add(1, loop.quit)

session_bus = dbus.SessionBus()

proxy = session_bus.get_object('org.freedesktop.Akonadi.Resource.akonadi_kolab_resource_0', "/")
proxy.connect_to_signal("status", status, dbus_interface="org.freedesktop.Akonadi.Agent.Status")

# Trigger the sync and wait until the status signal reports that it is finished.
proxy.synchronize(dbus_interface="org.freedesktop.Akonadi.Resource")

loop = gobject.MainLoop()

The status function gets all status updates, and status == 0 indicates the end of a sync.
Other than that it is just getting the session bus, triggering the synchronize method and waiting till the loop ends.

4. Glue everything together

After having all parts in place, they can be glued together into a nice script. As the language I use Python; together with some syntactic sugar it is quite small:

config.setupConfigDirs(home, fullName, email, name, uid, password)

with DBusServer():"set kwallet password")
    kwalletbinding.kwallet_put("imap", "akonadi_kolab_resource_0rc", password)

    with akonadi.AkonadiServer(open("akonadi.log", "w"), open("akonadi.err", "w")):"trigger fullSync")
First we create the config files. Once they are in place we need a DBus server; if one is not available it is started (and stopped after leaving the with statement). Then the password is inserted into kwallet and the akonadiserver is started. Once akonadi is running, the fullSync is triggered.

You can find the whole thing at github:hefee/akonadi-initalsync

5. Testing

After having a nice script, the last bit is that we want to test it. To have a fully controlled environment we use docker images for that: one image for the server and one with this script. As a base we use Ubuntu 12.04 and our obs builds for kontact.

Because we had already started with docker images for other parts of the deployment of kontact, I added them to the known repository github:/cmollekopf/docker

ipython ./automatedupdate/ #build kolabclient/percise  
python start set1  #start the kolab server (set1)

start the sync:

% ipython automatedupdate/
developer:/work$ cd akonadi-initalsync/  
developer:/work/akonadi-initalsync$ ./  
+ export QT_GRAPHICSSYSTEM=native
+ export QT_X11_NO_MITSHM=1
+ sudo setfacl -m user:developer:rw /dev/dri/card0
+ export KDE_DEBUG=1
+ USER=doe
+ PASSWORD=Welcome2KolabSystems
+ sleep 2
+ sudo /usr/sbin/mysqld
151215 14:17:25 [Warning] Using unique option prefix key_buffer instead of key_buffer_size is deprecated and will be removed in a future release. Please use the full name instead.  
151215 14:17:25 [Note] /usr/sbin/mysqld (mysqld 5.5.46-0ubuntu0.12.04.2) starting as process 16 ...  
+ sudo mysql --defaults-extra-file=/etc/mysql/debian.cnf
+ ./ 'John Doe' doe Welcome2KolabSystems akonadi_kolab_resource_0
INFO:root:setup configs  
INFO:DBusServer:starting dbus...  
INFO:root:set kwallet password  
INFO:Akonadi:starting akonadi ...  
INFO:root:trigger fullSync  
INFO:AkonadiSync:fullSync for akonadi_kolab_resource_0 started  
INFO:AkonadiSync:fullSync for akonadi_kolab_resource_0 was successfull.  
INFO:Akonadi:stopping akonadi ...  
INFO:DBusServer:stopping dbus...  

To be honest we need some more quirks, because we need to set up the X11 forwarding into Docker. In this case we also want to run one MySQL server for all users rather than a MySQL server per user, which is why we also need to start MySQL by hand and add a database that Akonadi can use. The real syncing begins with the line:

./ 'John Doe' doe Welcome2KolabSystems akonadi_kolab_resource_0

Kolab @ Fosdem

by Aaron Seigo in aseigo at 23:21, Tuesday, 15 December

Kolab will once again have a booth at FOSDEM, that fantastic event held in Brussels at the end of January. Several Kolab developers and deployers (and generally fun people) will be there wandering the halls, talking about Kolab and looking to connect with and learn from all the other fantastic people and projects who are there. It's going to be a great event! Be sure to find us if you are attending ... and come prepared for awesome :)

Kolab: Bringing All Of Us Together, Part 1

by Aaron Seigo in aseigo at 18:09, Tuesday, 15 December

Kolab: Bringing All Of Us Together, Part 1

The new year is looming, so this seems like a good time to share some of what we are thinking about at Kolab Systems when it comes to the Kolab ecosystem. As a result of a careful examination of that ecosystem, we have put together some priority adjustments. These include:

  • Kolab release process improvements
  • reboot
  • partner enablement

Each of these is a big, exciting and fundamental improvement in its own right, so I will cover each in its own blog entry, in which I will attempt to explain the challenges we see and what we are doing to address them. First up is the Kolab release process.


Kolab Enterprise as a software product is merging with the Kolab community edition. There will be a single Kolab software product open to all, because everyone needs Kolab, not only "enterprise" customers! Both the version selection and development process around Kolab will be substantially simplified as a direct result. Read on for the details!

Kolab Three Ways

Kolab currently comes in a few different "flavors": what you find in the source repositories and build servers; the Kolab Community Edition releases; and Kolab Enterprise. What is the difference?

Well, the code and raw package repos are essentially "serve yourself": you download it, you put it together. Not so easy. The Community Edition releases are easy to install and were being released every six months, but support is left to community members and there is no long-term release strategy for them. By contrast, you can purchase commercial support and services for Kolab Enterprise, and those releases are supported for a minimum of 5 years. The operating system platforms supported by each of these variants also vary, and moving between the Community Edition and Enterprise could at times be a bit of effort.

Yet they all come from the same source, with features and fixes flowing between them. However, the flow of those fixes and where features landed was not standardized. Sometimes features would land in Enterprise first and then debut in the Community Edition. Often it was the other way around. Where fixes would appear was similarly decided on a case-by-case basis.

The complex relationship can be seen in the timeline below:

Kolab: Bringing All Of Us Together, Part 1

This has resulted in duplication of effort, confusion over which edition to use when and where, and not enough predictability. We've been thinking about this situation quite deeply over the past months and have devised a plan that we feel improves the situation across the board.

A Better Plan Emerges

Starting in 2016 we will focus our efforts on a single Kolab product release that combines the Q/A of Kolab Enterprise with the availability of the Kolab community edition. Professional services and support, optimized operating system / platform integration and long term updates will all remain available from Kolab Systems, but everyone will be able to install and use the same Kolab packages.

It also means that both fixes and features will land in a consistent fashion. Development will be focused on the master branches of the Kolab source repositories, which have been hosted for a while now with Phabricator sporting open access to sprints and work boards. With our eyes and hands firmly on the main branches of the repositories, we will focus on bringing continuous delivery to them for increased quality.

Fixes and features alike will all flow into Kolab releases, bringing long desired predictability to that process, and Kolab Systems will continue to provide a minimum of 5 years of support for Kolab customers. This will also have the nice effect of making it easier for us to bring Kolab improvements live to Kolab Now.

These universal Kolab releases will be made available for all supported operating systems, including ones that the broader community elects to build packages for. This opens the way for the "enterprise-grade" Kolab packages on all operating systems, rather than "just" the community editions.

You can see how much clarity and simplicity this will bring to Kolab releases by comparing the diagram below with the previous one:

Kolab: Bringing All Of Us Together, Part 1

You can read more about Kolab release and development at Jeroen van Meeuwen's blog: The Evolution of Kolab Development and Short- vs. Long-term Commitments.

eimap: because what the world needs is another IMAP client

by Aaron Seigo in aseigo at 12:23, Friday, 11 December

eimap: because what the world needs is another IMAP client

Erlang is a very nice fit for many of the requirements various components in Kolab have ... perhaps one of these days I'll write something more in detail about why that is. For now, suffice it to say that we've started using Erlang for some of the new server-side components in Kolab.

The most common application protocol spoken in Kolab is IMAP. Unfortunately there was no maintained, functional IMAP client written in Erlang that we could find which met our needs. So, apparently the world needed another IMAP client, this time written in Erlang. (Note: When I say "IMAP client" I do not mean a GUI for users, but rather something that implements the client-side of the IMAP protocol: connect to a server, authenticate, run commands, etc.)

So say hello to eimap.

Usage Overview

eimap is implemented as a finite state machine that is meant to run in its own Erlang process. Each instance of an eimap represents a single connection to an IMAP server which can be used by one or more other processes to connect, authenticate and run commands against the server.

The public API of eimap consists mostly of requests that queue commands to be sent to the server. These functions take the process ID (PID) to send the result of the command to, and an optional response token that will accompany the response. Commands in the queue are processed in sequence, and the server responses are parsed into nice normal Erlang terms, so one does not need to be concerned with the details of the IMAP message protocols. Details like selecting folders before accessing them or setting up TLS are handled automagically by eimap, which inserts the necessary commands into the queue for the user.

Here is a short example of using eimap:

ServerConfig = #eimap_server_config{ host = "", port = 143, tls = false },
{ ok, Conn } = eimap:start_link(ServerConfig),
eimap:login(Conn, self(), undefined, "doe", "doe"),
eimap:get_folder_metadata(Conn, self(), folder_metadata, "*", ["/shared/vendor/kolab/folder-type"]),
eimap:logout(Conn, self(), undefined),

It starts an eimap process, queues up a login, getmetadata and logout command, then connects. The connect call could have come first, but it doesn't matter. When the connection is established the command queue is processed. eimap exits automatically when the connection closes, making cleanup nice and easy. You can also see the response routing in each of the command functions, e.g. self(), folder_metadata, which means that the results of that GETMETADATA IMAP command will be sent to this process as { folder_metadata, ParsedResponse } once completed. This is typically handled in a handle_info/2 function for gen_server processes (and similar).

Internally, each IMAP command is implemented in its own module which contains at least a new and a parse function. The new function creates the string to send to the server for a given command, and parse does what it says, returning a tuple that tells eimap whether it has completed, needs to consume more data from the server, or has encountered an error. This allows simple commands to be implemented very quickly, e.g.:

-export([new/1, parse/2]).
new(_Args) -> <<"COMPRESS DEFLATE">>.
parse(Data, Tag) -> formulate_response(eimap_utils:check_response_for_failure(Data, Tag)).
formulate_response(ok) -> compression_active;
formulate_response({ _, Reason }) -> { error, Reason }.

There is also a "passthrough" mode which allows a user to use eimap as a pipe between it and the IMAP server directly, bypassing the whole command queueing mechanism. However, if commands are queued, eimap drops out of passthrough to run those commands and process their responses before returning to passthrough.

It is not a complicated design by any means, and that's a virtue. :)

Plans and more plans!

As we write more Erlang code for use with Kolab and IMAP in general, eimap will be increasingly used and useful. The audit trail system for groupware objects needs some very basic IMAP functionality; the Guam IMAP proxy/filter heavily relies on this; and future projects such as a scalable JMAP proxy will also be needing it. So we will have a number of consumers for eimap as time goes on.

While the core design is mostly in place, there are quite a few commands that need to be implemented which you can see on the eimap workboard. Writing commands is quite straightforward as each goes into its own module in the src/commands directory and is developed with a corresponding test in the test/ directory; you don't even need an IMAP server, just the lovely (haha) relevant IMAP RFC. Once complete add a function to eimap itself to queue the command, and eimap handles the rest for you from there. Easy, peasy.

I've personally been adding the commands that I have immediate use for, and will be generally adding the rest over time. Participation, feedback and patches are welcome!

guam: an IMAP session filter/proxy

by Aaron Seigo in aseigo at 21:59, Thursday, 10 December

guam: an IMAP session filter/proxy

These days, the bulk of my work at Kolab Systems does not involve writing code. I have been spending quite a bit of time on the business side of things (and we have some genuinely exciting things coming in 2016), customer and partner interactions, as well as on higher-level technical design and consideration. So I get to roll around Roundcube Next, Kube (an Akonadi2-based client for desktop and mobile ... but more on that another time), Kolab server hardware pre-installs .. and that's all good and fun. Still, I do manage to write a bit of code most weeks, and one of the projects I've been working on lately is an IMAP filter/proxy called Guam.

I've been wanting to blog about it for a while, and as we are about to roll version 0.4 I figured now is as good a time as any.

The Basics of Guam

Guam provides a simple framework to alter data being passed between an IMAP client and server in real time. This "interference" is done using sets of rules. Each port that Guam listens on has a set of rules with their own order and configuration. Rules start out passive and, based on the data flow, may elect to become active. Once active, a rule gets to peek at the data on the wire and may take whatever actions it wishes, including altering that data before it gets sent on. In this way rules may alter client messages as well as server responses; they may also record or perform other out-of-band tasks. The imagination is the limit, really.
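The rule-chain idea can be illustrated with a toy Python sketch. This is only a model of the concept; Guam's real rules are Erlang modules with a different API, and the rule names here are made up:

```python
# A toy rule chain: each rule inspects a piece of traffic and may rewrite
# it before it is handed to the next rule in the configured order.

def uppercase_tagged_command(direction, data):
    # Only touch client -> server traffic; leave server responses alone.
    if direction == "client":
        verb, sep, rest = data.partition(" ")
        return verb.upper() + sep + rest
    return data

def strip_trailing_spaces(direction, data):
    return data.rstrip(" ")

RULES = [uppercase_tagged_command, strip_trailing_spaces]

def apply_rules(direction, data):
    for rule in RULES:
        data = rule(direction, data)
    return data

result = apply_rules("client", "a1 login doe secret  ")
```

Because each rule receives the output of the previous one, the order in which rules are listed matters, just as it does for a Guam listener's rule set.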

Use Cases

The first practical use case Guam is fulfilling is selective hiding of folders from IMAP clients. Kolab stores groupware data such as calendars, notes, tags and more in plain old IMAP folders. Clients that connect over IMAP to a Kolab server and are not aware of this get shown all those folders. I've even heard of users who saw these folders and deleted them, thinking they were not supposed to be there, only to then wonder where the heck their calendars went. ;)

So there is a simple rule called filter_groupware_folders that tries to detect if the client is a Kolab-aware client by looking at the ID string it sends and if it does not look like a Kolab client it goes about filtering out those groupware folders. Kolab continues on as always, and IMAP clients do as well but simply do not see those other special folders. Problem solved.
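In pseudo-Python, the idea behind such a rule boils down to the following sketch. The folder names and the detection heuristic are assumptions for illustration; Guam's real implementation is in Erlang:

```python
# Sketch: hide known groupware folders from clients that do not identify
# themselves as Kolab-aware. All names here are illustrative assumptions.

GROUPWARE_FOLDERS = {"Calendar", "Contacts", "Notes", "Tasks", "Configuration"}

def is_kolab_client(id_string):
    # The real rule inspects the client string sent with the IMAP ID command.
    return "kontact" in id_string.lower() or "kolab" in id_string.lower()

def filter_list_responses(id_string, list_responses):
    if is_kolab_client(id_string):
        return list_responses  # Kolab-aware clients see every folder
    return [line for line in list_responses
            if line.split()[-1].strip('"') not in GROUPWARE_FOLDERS]

responses = ['* LIST () "/" "INBOX"', '* LIST () "/" "Calendar"']
visible = filter_list_responses("Mozilla Thunderbird", responses)
```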

But Guam can be used for much more than this simple, if rather practical, use case. Rules could be written that prevent downloading of attachments from mobile devices, or accessing messages marked as top-secret when being accessed from outside an organization's firewall. Or they could limit message listings to just the most recent or unread ones and provide access to that as a special service on a non-standard port. They could round-robin between IMAP servers, or direct different users to different IMAP servers transparently. And all of these can be chained in whichever order suits you.

The Essential Workings

The two most important things to configure in Guam are the IMAP servers to be accessed and the ports to accept client connections on.

guam: an IMAP session filter/proxy

Listener configuration includes the interface and port to listen on, TLS settings, which IMAP backend to use and, of course, the rule set to apply to traffic. IMAP server configuration includes the usual host/port and TLS preferences, and the listeners refer to them by name. It's really not very complicated. :)

Rules are implemented in Guam as Erlang modules which implement a simple behavior (Erlangish for "interface"): new/1, applies/3, apply_to_client_message/3, apply_to_server_message/3, and optionally imap_data/3. The name of the module defines the name of the rule in the config: a rule named foobar would be implemented in a module named kolab_guam_rule_foobar.
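The name-to-module convention is simple enough to mirror in a line of code; here is a hedged Python analogue (Guam of course resolves the module in Erlang):

```python
# Resolve a configured rule name to the module implementing it, following
# the kolab_guam_rule_<name> convention described above.
def rule_module_name(rule_name):
    return "kolab_guam_rule_%s" % rule_name

module = rule_module_name("filter_groupware_folders")
```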

... and for a quick view that's about all there is to it!

Under the hood

I chose to write it in Erlang because the use case is pretty much perfect for it: lots of simultaneous connections that must be kept separate from one another. Failure in any single connection (including a crash of some sort in the code) does not interfere with any other connection; everything is asynchronous while remaining simple (the core application is a bit under 500 lines of code); and Erlang's VM scales very well as you add cores. In other words: stability, efficiency, simplicity.

Behind the scenes, Guam uses an Erlang IMAP client library that I've been working on called eimap. I won't get any awards for creativity in the naming of it, certainly, but "erlang IMAP" does what it says on the box: it's IMAP in Erlang. That code base is rather larger than the Guam one, and is quite interesting in its own right: finite state machines! passthrough modes! commands as independent modules! async and multi-process! ooh! aaaaaah! sparkles! eimap is a very easy project to get your fingers dirty with (new commands can be implemented in well under 6 lines of code) and will be used by a number of applications in future. More in the next blog entry about that, however.

In the meantime, if you want to get involved, check out the git repo, read the docs in there and take a look at the Guam workboard.

This week in Kolab-Tech

by Mads Petersen in The Kolaborator at 12:41, Saturday, 03 October

It's always fun when your remote colleagues come to visit the office. It helps communication, putting a face to the name in the chat client - and to the voice on the phone.

Giles, our creative director, was visiting from London the first days of the week, which made a lot of the work switch context to be about design and usability. As Giles is fairly new in the company we also spent some time discussing a few of our internal processes and procedures with him. It is great to have him onboard to fill a previously not-so-well-investigated space with his broad experience.

The server development team kept themselves busy with a few Roundcube issues, and with a few issues that we had in the new KolabNow dashboard. Additionally work was being done on the Roundcube-Next POC. We hope soon to have something to show on that front.

On the desktop side, we finalized the sprint 201539 and delivered a new version of Kontact on Windows and on Linux. The Windows installer is named Kontact-E14-2015-10-02-12-35.exe, and as always it is available on our mirror.

This Sunday our datacenter is doing some maintenance. They do not expect any interruption, but be prepared for a bit of connection trouble on Sunday night.

On to the future..

Last week in Kolab-Tech

by Mads Petersen in The Kolaborator at 15:51, Monday, 28 September

As we had already started the week the previous Friday night (by shutting off the KolabNow cockpit and starting the big migration), it turned out to be a week all about (the bass) KolabNow.

Over the weekend we made a series of improvements to KolabNow that will improve the overall user experience with:

  • Better performance
  • More stable environment
  • Less downtime
  • Our ability to update the environment with a minimum of interruption for end users.

After the update, there were of course a few issues that needed to be tweaked, but details aside, the weekend was a big success. Thanks to the OPS staff for doing the hard work.

One thing we changed with this update was the way users get notified when their accounts are suspended. Before this weekend, users with suspended accounts would still be able to log in and receive mail on KolabNow. After this update, users with suspended accounts are not able to log in. This of course led to a small breeze of users with suspended accounts contacting support with requests for re-enabling their accounts.

On the development side we were making progress on two fronts:

  • We are getting close to the end of the list of urgent Kontact defects. The second week of this sprint should get rid of that list. Our Desktop people will then get time to look forward again, and look at the next generation of Kolab Desktop Client.
  • We started experimenting with one (of perhaps more to come) POC for Roundcube-Next. We now need to start talking about the technologies and ideas behind that new product. More to follow about that.

Thank you for your interest - if you are still reading. :-)

This week in Kolab Tech

by Mads Petersen in The Kolaborator at 15:48, Friday, 18 September

Another week passed by, super fast; as we know, time flies when you are having fun.

The client developers are on a roll. They have been hacking away on a defined bundle of issues in KOrganizer and Zanshin, which have been annoying for users and have prevented some organizations from adopting the desktop client. This work will proceed during the next sprint - and most probably the sprint after that.

One of our collaborative editing developers took part in the ODF plugfest. According to his report, a lot of good experiences were had, a lot of contacts were made, and there was good feedback for the plans of the deep Kolab/Manticore integration.

Our OPS people were busy most of the week with preparations for this weekend's big KolabNow update. This is a needed overhaul of our background systems and software. As we now have the new hardware in place, and it has been running its test circles around itself, we can finally start applying many of the improvements that we have prepared for some time. This weekend is very much a backend update; but an important one, which will make it easier for us to apply more changes in the future with a minimal amount of interruptions.

All y'all have a nice weekend now..

This week in Kolab tech..

by Mads Petersen in The Kolaborator at 11:01, Friday, 11 September

The week in development:

  • Our desktop people were spending time in Randa, a small town in the Swiss mountains, where they were discussing KDE related issues and hacking away together with similar minded people. Most probably they also got a chance or two for some social interaction.
  • Work was continued on the Copenhagen (MAPI integration) project. Whereas it was easy to spot progress in the beginning, the details around folder permissions and configuration objects that are being worked out now are not as visible.
  • The Guam project (the scalable IMAP session and payload filter) is moving along as planned. The filter handling engine is in place. It is now being implanted into the main body of the system, and then work on the actual filter formulation can be started.
  • A few defects in Kolab on UCS were discovered in the beginning of the week. Those were investigated and are getting fixed as I am writing this. Hopefully we will be able to push a new package for this product early next week.

In other news: The engineering people are working hard to prepare the backend systems for some interesting upcoming KolabNow changes. There will be more information about those changes in other more appropriate places.

Only thing left is, to wish everyone a very nice weekend.

Last week @ Kolab Tech

by Mads Petersen in The Kolaborator at 14:00, Monday, 07 September

After a summer with ins and outs of the super hot Zurich office, this week finally brought some rain and a little chill. I can't wait for the snow to start.

The week started early and at full speed, as we had our hardware vendor visiting on Monday to replace a defective hypervisor. I sleep better at night knowing that everything is in order again.

A few of us jumped on a bus to the fair city of Munich to meet the techies at IT@M for a Kontact workshop; 3 days of intense desktop client talks, discussions and experiments. It was inspiring to see the work groups get together to resolve issues, do packaging on the LiMux platform and prepare pre-deployment configurations. A big value of the workshop was the opportunity to collect and consolidate a lot of end user experience. Luckily we also got time for a bit of a foretaste of the special Wiesn bier.

Aside from discussing the desktop clients, creating packages and listening to use cases, Christian finally found and resolved the issue that for a while has prevented me from installing the latest Kontact on my fedora 22. Thanks Christian!

Kontact and GnuPG under Windows

by Sandro Knauß in Decrypted mind at 23:53, Wednesday, 02 September

Kontact has, in contrast to Thunderbird, integrated crypto support (OpenPGP and S/MIME) out of the box.
That means on Linux you can simply start Kontact and read encrypted mails (if you have already created keys).
After you select your crypto keys, you can immediately start writing encrypted mails. With that great user experience I never needed to dig further into the crypto stack.

select cryptokeys step1 select cryptokeys step2

But on Windows GnuPG is not installed by default, so I needed to dig into the whole world of crypto layers
that sit between Kontact and the part that actually does the de-/encryption.

Crypto Stack

Kontact uses a number of libraries that the team has written around GPGME.

The lowest level one is gpgmepp, an object-oriented wrapper for GPGME. This lets us avoid having to write C code in KMail. Then we have libkleo, a library built on top of gpgmepp that KMail uses to trigger de-/encryption in the lower levels. GPGME is the only required dependency to compile Kontact with crypto support.

But this is not enough to send and receive encrypted mail with Kontact on Windows, as I mentioned earlier. There are still runtime dependencies that we need to have in place. Fortunately the runtime crypto stack is already packaged by the GPG4Win team. Simply installing it is still not enough to have crypto support, though. With GPG4Win, it is possible to select OpenPGP keys and create and read encrypted mails, but unfortunately it doesn't work with S/MIME.

So I had to dig further into how GnuPG actually works.

OpenPGP is handled by the gpg binary and for S/MIME we have gpgsm. Both are called directly from GPGME, using libassuan. Both applications then talk to gpg-agent, which is actually the only program that interacts with the key data. Both applications can be used from the command line, so it was easy to verify that they were working and that there were no problems with the GnuPG setup.
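Since both binaries answer on the command line, such a sanity check can even be scripted. The helper below is a hypothetical illustration, not part of Kontact or GnuPG:

```python
import subprocess

def tool_version(command):
    """Return the first line of `command --version` output, or None when
    the tool is missing. A hypothetical helper to sanity-check that the
    gpg and gpgsm binaries are reachable from a script."""
    try:
        out = subprocess.check_output([command, "--version"])
    except (OSError, subprocess.CalledProcessError):
        return None
    return out.decode(errors="replace").splitlines()[0]

# Probe the command-line layer of the crypto stack.
for tool in ("gpg", "gpgsm"):
    print(tool, "->", tool_version(tool))
```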

So first we start by creating keys (gpg --gen-key and gpgsm --gen-key) and then further testing what works with GPG4Win and what does not. We found a bug in the GnuPG version used, but this one was closed in a newer version. Still Kontact didn't want to communicate with GPG4Win. The reason was a wrong standard path, preventing GPGME from finding gpgsm. With that fixed, we now have a working crypto stack under Windows.

But to be honest, there are more applications involved in a working crypto stack. First we need gpgconf and gpgme-w32-spawn to be available in the Kontact directory. gpgconf helps GPGME to find gpg and gpgsm and is responsible for modifying the content of .gnupg in the user's home directory. Additionally, it informs you about changes in config files. gpgme-w32-spawn is responsible for creating the other needed processes.

For having a UI where you can enter your password you need pinentry. S/MIME needs another agent that does the CRL/OCSP checks; this is done by dirmngr. In GnuPG 2.1, dirmngr is the only component that performs connections to the outside, so every request that requires the Internet goes via dirmngr.

This is, in short, the crypto stack that needs to work together to give you working encrypted mail support.

We are happy that we now have a fully working Kontact under Windows (again!). There are rumours that Kontact with crypto support also worked under Windows before, but unfortunately when we started, the encryption part was not working.

This work has been done in the kolabsys branch, which is based on KDE Libraries 4. The next steps are to merge the changes over to make sure that the current master branch of Kontact, which uses KDE Frameworks 5, also works.


Coming up next week is the yearly Randa meeting, where we will have the chance to sit together for a week and work on the future of Kontact. These meetings help tremendously in injecting momentum into the project, and we have a variety of topics to cover to direct the development for the time to come (and of course a lot of stuff to actively hack on). If you'd like to contribute to that you can help us with some funding. Much appreciated!

randa meeting

Kolab Now was first launched in January 2013, and we were anxious to find out what would happen if someone offered a public cloud service for people that put their privacy and security first. A service that would not just re-sell someone else's platform with some added marketing, but did things right. Would there be demand for it? Would people choose to pay with money instead of their privacy and data? These past two and a half years have provided a very clear answer. Demand for a secure and private collaboration platform has grown in ways we could have only hoped for.

To stay ahead of demand we have undertaken a significant upgrade to our hosted solution that will allow us to provide reliable service to our community of users both today and in the years to come. This is the most significant set of changes we’ve ever made to the service, which have been months in the making. We are very excited to unveil these improvements to the world as we complete the roll-out in the coming weeks.

From a revamped and simplified sign-up process to a more robust directory
service design, the improvements will be visible to new and existing users
alike. Everyone can look forward to a significantly more robust and
reliable service, along with faster turnaround times on technical issues. We
have even managed to add some long-sought improvements many of you have been
asking for.

The road travelled

Assumptions are the root of all evil. Yet in the absence of knowledge of the future, sometimes informed assumptions need to be made. And sometimes the world just changes. It was February 2013 when MyKolab was launched into public beta.

Our expectation was that a public cloud service oriented towards full business collaboration, focusing on privacy and security, would primarily attract small and medium enterprises between 10 and 200 users. Others would largely elect to use the available standard domains. So we expected most domains to be in the 30-user realm, plus a handful of very large ones.

That had implications for the way the directory service was set up.

In order to provide the strongest possible insulation between tenants, each domain would exist in its own zone within the directory service. You can think of this as dedicated installations on shared infrastructure, instead of the single-domain public clouds that are the default in most cases. Or, to use a slightly less technical analogy, the difference between terraced houses and apartments in a large apartment block.

So we expected some moderate growth, for which we planned to deploy some older hardware to provide adequate redundancy and resources, so there would be a steady showcase for how to deploy Kolab to meet the needs of Application and Internet Service Providers (ASP/ISP).

Literally on the very day we carried that hardware into the data centre, Edward Snowden and his revelations became visible to the world. It is a common quip that assumptions and strategies usually do not outlive their contact with reality. Ours did not even make it that far.

After nice, steady growth during the early months, demand took us on a wild ride.

Our operations team managed to work miracles with the old hardware, in ways that often made me think this would be interesting learning material for future administrators. But efficiency only gets you so far.

Within a couple of months, however, we ended up replacing it in its entirety. To the largest extent all of this happened without disruption to the production systems. New hardware was installed, services were switched over, old hardware was removed, and our team also managed to add a couple of urgently sought features to Kolab and deploy them as well.

What we did not manage to make time for was re-working the directory service in order to adjust some of the underlying assumptions to reality. Especially the number of domains in relation to the number of users ended up dramatically different from what we initially expected. The result is a situation where the directory service has become the bottleneck for the entire installation – with a complete restart easily taking in the realm of 45 minutes.

In addition, that degree of separation translated into more restrictions on sharing data with other users, sometimes to an extent that users felt it was the lack of a feature, not a feature in and of itself.

Re-designing the directory service however carries implications for the entire service structure, including also the user self-administration software and much more. And you want to be able to deploy this within a reasonable time interval and ensure the service comes back up better than before for all users.

On the highway to future improvements

So there is the re-design, the adaptation of all components, the testing, the migration planning, the migration testing and ultimately also the actual roll-out of the changes. That’s a lot of work. Most of which has been done by this point in time.

The last remaining piece of the puzzle was to increase hardware capacity in order to ensure there is enough reserve to build up an entire new installation next to existing production systems, and then switch over, confirm successful switching, and then ultimately retire the old setup.

That hardware was installed last week.

So now the roll-out process will go through the stages and likely complete some time in September. That’s also the time when we can finally start adding some features we’ve been holding back to ensure we can re-adjust our assumptions to the realities we encountered.

For all users of Kolab Now that means you can look forward to a much improved service resilience and robustness, along with even faster turnaround times on technical issues, and an autumn of added features, including some long-sought improvements many of you have been asking for.

Stay tuned.

Akonadi with a remote database

by Aaron Seigo in aseigo at 13:02, Friday, 14 August

Akonadi with a remote database

The Kontact groupware client from the KDE community, which also happens to be the premier desktop client for Kolab, is "just" a user interface (though that seriously undersells its capabilities, as it still does a lot in that UI), and it uses a system service to actually manage the groupware data. In fact, that same service is used by applications such as KDE Plasma to access data; this is how calendar events end up being shown in the desktop clock's calendar for instance. That service (as you might already know) is called Akonadi.

In its current design, Akonadi uses an external1 database server to store much of its data2. The default configuration is a locally-running MySQL server that Akonadi itself starts and manages. This can be undesirable in some cases, such as multi-user systems where running a separate MySQL instance for each and every user may be more overhead than desired, or when you already have a MySQL instance running on the system for other applications.

While looking into some improvements for a corporate installation of Kontact where the systems all have user directories hosted on a server and mounted using NFS, I tried out a few different Akonadi tricks. One of those tricks was using a remote MySQL server. This would allow this particular installation to move Akonadi's database-related I/O load off the NFS server and share the MySQL instance between all their users. For a larger number of users this could be a pretty significant win.

How to accomplish this isn't well documented, unfortunately, at least not that I could readily find. Thankfully I can read the source code and work with some of the best Akonadi and Kontact developers that currently work on it. I will be improving the documentation around this in the coming weeks, though.3 Until then, here is how I went about it.

Configuring Akonadi

Note: as Dan points out in the comments below, this is only safe to do with a "fresh" Akonadi that has no data thus far. You'll want to first clean out (and possibly backup) all the data in $XDG_DATA_HOME/akonadi as well as be prepared to do some cleaning in the Kontact application configs that reference Akonadi entities by id. (Another practice we aim to light on fire and burn in Akonadi Next.)

First, you want Akonadi to not be running. Close Kontact if it is running and then run akonadictl stop. This can take a little while, even though that command returns immediately. To ensure Akonadi actually is stopped run akonadictl status and make sure it says that it is, indeed, stopped.

Next, start the Akonadi control panel. The command line approach is kcmshell4 kcm_akonadi_resources, but you can also open the command runner in Plasma (Alt+F2 or Alt+Space, depending) and type in akonadi to get something like this:

Akonadi with a remote database

It's the first item listed, at least on my laptop: Akonadi Configuration. You can also go the "slower" route and open System Settings and either search for akonadi or go right into the Personal Information panel. No matter how you go about it, you'll see something like this:

Akonadi with a remote database

Switch to the Akonadi Server Configuration tab and disable the Use internal MySQL server option. Then you can go about entering a hostname. This would be localhost for MySQL7 running on the same machine, or an IP address or domain name that is reachable from the system. You will also need to supply a database name4 (which defaults to akonadi), username5 and password. Clear the Options line of text, and hit the ol' OK button. That's it.

Akonadi with a remote database

Assuming your MySQL is up and running and the username and password you supplied are correct, Akonadi will now be using a remote MySQL database. Yes, it is that easy.
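Under the hood, the dialog writes these settings into Akonadi's server config file, $XDG_CONFIG_HOME/akonadi/akonadiserverrc. As a rough sketch of what you should end up with (the host name, database name and credentials below are placeholders, and the exact set of keys may vary between Akonadi versions):

```ini
[%General]
Driver=QMYSQL

[QMYSQL]
# Don't let Akonadi spawn and manage its own local mysqld
StartServer=false
Host=db.example.com
Name=akonadi
User=akonadi_user
Password=changeme
Options=
```

Editing this file by hand while Akonadi is stopped should be equivalent to going through the dialog.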


In this configuration, the limitations are twofold:

  • network quality
  • local configuration is now tied to that database

Network quality is the biggest factor. Akonadi can send a lot of database queries and each of those results in a network roundtrip. If your network latency for a roundtrip is 20ms, for instance, then you are pretty well hard-limited to 50 queries per second. Given that Akonadi can issue several queries for an item during initial sync, this can result in quite slow initial synchronization performance on networks with high latency.6
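The roundtrip arithmetic above can be sketched in a few lines; the item count and queries-per-item figures in the example are illustrative, not measurements:

```python
# Latency-bound throughput: one query costs one network roundtrip.
def max_queries_per_second(latency_ms: float) -> float:
    """With 20 ms roundtrips you are hard-limited to 50 queries/s."""
    return 1000.0 / latency_ms

def initial_sync_seconds(items: int, queries_per_item: int,
                         latency_ms: float) -> float:
    """Time to issue all initial-sync queries serially over the network."""
    return items * queries_per_item / max_queries_per_second(latency_ms)

# e.g. 10,000 mails at 3 queries each over a 20 ms link:
# 30,000 queries at 50 queries/s -> 600 seconds, i.e. ten minutes.
```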

Past latency, bandwidth is the other important factor. If you have lots of users or just tons of big mails, consider the network traffic incurred in sending that data around the network.

For a typical, even semi-modern, network in an office environment, however, neither latency nor bandwidth should be a big issue.

The other item to pay attention to is that the local configuration and file data kept outside the database by Akonadi will now be tied to the contents of that database, and vice versa. So you can not simply set up a single database on a remote database server and then connect to it simultaneously from multiple Akonadi instances. In fact, I will guarantee you that this will eventually screw up your data in unpleasant ways. So don't do it. ;)

In an office environment where people don't move between machines and/or where the user data is stored centrally as well, this isn't an issue. Otherwise, create one database for each device you expect to connect. Yes, this means multiple copies of the data, but it will work without trashing your data, and that's the more important thing.

How well does it work?

Now for the Big Question: Is this actually practical and safe enough for daily use? I've been using this with my Kolab Now account since last week. To really stretch the realm of reality, I put the MySQL instance on a VM hosted in Germany. In spite of forcing Akonadi to trudge across the public internet (and over wifi), so far, so good. Once through a pretty slow initial synchronization, Kontact generally "feels" on par with and often even a bit snappier than most webmail services that I've used, though certainly slower than a local database. In an office environment, however, I would hope that the desktop systems have better network than "my laptop on wifi accessing a VM in Germany".

As for server load, for one power user with a ton of email (my life seems to revolve around email much of the time) it is entirely negligible. MySQL never budged much above 1% CPU usage during my monitoring of it, and after sync was usually just idling.

I won't be using this configuration for daily use. I still have my default-configured Akonadi as well, and that is not only faster but travels with my laptop wherever it is, network or not. Score one for offline access.


1: If you are thinking something along the lines of "the real issue is that it uses a database server at all", I would partially agree with you. For offline usage, good performance, and feature consistency between accounts, a local cache of some sort is absolutely required. So some local storage makes sense. A full RDBMS carries more overhead than truly necessary and SQL is not a 100% perfect fit for the dataset in question. Compared to today, there were far fewer options available to the Akonadi developers a decade ago when the Akonadi core was being written. When the choice is between "not perfect, but good enough" and "nothing", you usually don't get to pick "nothing". ;) In the Akonadi Next development, we've swapped out the external database process and the use of SQL for an embedded key/value store. Interestingly, the advancements in this area in the decade since Akonadi's beginning were driven by a combination of mobile and web application requirements. That last sentence could easily be unpacked into a whole other blog entry.

2: There is a (configurable) limit to the size of payload content (e.g. email body and attachments) that Akonadi will store in the database which defaults to 4KiB. Anything over that limit will get stored as a regular file on the file system with a reference to that file stored in the database.
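The payload rule is simple enough to sketch; the function and constant names here are invented for illustration, only the 4 KiB default comes from the text:

```python
# Illustrative sketch of Akonadi's external-payload rule described above.
SIZE_THRESHOLD = 4 * 1024  # configurable; defaults to 4 KiB

def payload_location(payload: bytes) -> str:
    """Small payloads live in the database row itself; anything larger
    is written to a regular file, with only a reference to that file
    kept in the database."""
    return "database" if len(payload) <= SIZE_THRESHOLD else "file"
```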

3: This blog entry is, in part, a way to collect my thoughts for that documentation.

4: If the user is not allowed to create new databases, then you will need to pre-create the database in MySQL.

5: The user account is a MySQL account, not your regular system user account ... unless MySQL is configured to authenticate against the same user account information that system account login is, e.g. PAM / LDAP.

6: Akonadi appears to batch these queries into transactions that exist per folder being sync'd, or every 100 emails, whichever comes first, so if you are watching the database during sync you will see data appear in batches. This can be done pretty easily with an SQL statement like select count(*) from PartTable;. Divide this number by 3 to get the number of actual items, time how long it takes for a new batch to arrive, and you'll quickly have your performance numbers for synchronization.
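A hypothetical helper for turning two such counts into a sync rate; the 3-parts-per-item ratio is the one observed in the footnote:

```python
PARTS_PER_ITEM = 3  # rows in PartTable per groupware item, as observed

def items_from_parts(part_count: int) -> int:
    """Convert a 'select count(*) from PartTable;' result to whole items."""
    return part_count // PARTS_PER_ITEM

def sync_rate(count_before: int, count_after: int, elapsed_s: float) -> float:
    """Items synchronized per second between two count samples."""
    return (items_from_parts(count_after)
            - items_from_parts(count_before)) / elapsed_s

# Sampling 300 and then 900 parts ten seconds apart -> 20 items/s.
```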

7: That same dialog also offers options for things other than MySQL. There are pros and cons to each of the options in terms of stability and performance. Perhaps I'll write about those in the future, but this blog entry with its stupid number of footnotes is already too long. ;)

Bringing together an alliance that will liberate our future web and mobile collaboration was the most important motive behind our launching the Roundcube Next campaign at the 2015 Kolab Summit. This goal we reached fully.

There is now a group of some of the leading experts for messaging and collaboration in combination with service providers around the world that has embarked with us on this unique journey:


The second objective for the campaign was to gain enough acceleration to allow two or three people to focus on Roundcube Next over the coming year. That goal we reached partially. There is enough to get us started and go through the groundwork, but not enough for all the bells and whistles we would have loved to go for. To a large extent that's because we have quite a lot of fantasy when it comes to bells and whistles.

Roundcube Next - The Bells and Whistles

But perhaps it is a good thing that the campaign did not complete all the way into the stretch goals.

Since numbers are part of my responsibility, allow me to share some with you to give you a first-hand perspective of being inside an Indiegogo Campaign:


Roundcube Next Campaign Amount
Indiegogo Cost
PayPal Cost
Remaining Amount

So by the time the money was in our PayPal account, we were down 8.15%.

The reason for that is simple: Instead of transferring the complete amount in one transaction, which would have incurred only a single transaction fee, they transferred it individually per contribution. Which means PayPal gets to extract the per transaction fee. I assume the rationale behind this is that PayPal may have acted as the escrow service and would have credited users back in case the campaign goal had not been reached. Given our transactions were larger than average for crowd funding campaigns, the percentage for other campaigns is likely going to be higher. It would seem this can even go easily beyond the 5% that you see quoted on a variety of sites about crowd funding.

But it does not stop there. Indiegogo did not allow us to run the campaign in Swiss Francs, and PayPal forces transfers into our primary currency, resulting in another fee for conversion. On the day the Roundcube Next Campaign funds were transferred to PayPal, the exchange rate was listed as 0.9464749579 CHF per USD.



                                                 % of total
Roundcube Next Campaign Amount   SFr. 97,998.96     100.00%
Remaining at PayPal              SFr. 90,008.06      91.85%
Final at bank in CHF             SFr. 87,817.00      89.61%


So now we’re at 10.39% in fees, of which 4% go to Indiegogo for their services. A total of 6.39% went to PayPal. Not to mention this is before any t-shirt is printed or shipped, and there is of course also cost involved in creating and running a campaign.
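The percentages can be double-checked from the CHF figures quoted above with a few lines of arithmetic:

```python
# Fee arithmetic, using the CHF amounts quoted in the post.
raised = 97_998.96     # campaign total, CHF equivalent
at_paypal = 90_008.06  # after Indiegogo's cut and PayPal transaction fees
at_bank = 87_817.00    # after PayPal's currency conversion

def loss_pct(final: float, initial: float) -> float:
    """Percentage lost between two stages, rounded to two decimals."""
    return round((1 - final / initial) * 100, 2)

print(loss_pct(at_paypal, raised))  # 8.15  -> "down 8.15%" at PayPal
print(loss_pct(at_bank, raised))    # 10.39 -> 10.39% in total fees
```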

The $4,141.64 we paid to Indiegogo are not too bad, I guess. Although their service was shaky and their support non-existent. I don’t think we ever got a response to our repeated support inquiries over a couple of weeks. And we experienced multiple downtimes of several hours which were particularly annoying during the critical final week of the campaign where we can be sure to have lost contributions.

PayPal’s overhead was $6,616.27 – the equivalent of another Advisor to the Roundcube Next Campaign. That’s almost 60% more than the cost for Indiegogo. Which seems excessive and is reminding me of one of Bertolt Brecht’s more famous quotes.

But of course you also need to add the effort for the campaign itself, including preparation, running and perks. Considering that, I am no longer surprised that many of the campaigns I see appear to be marketing instruments to sell existing products that are about to be released, and less focused on innovation.

In any case, Roundcube Next is going to be all about innovation. And Kolab Systems will continue to contribute plenty of its own resources, as we have been doing for Roundcube and Roundcube Next, including a world class Creative Director and UI/UX expert who is going to join us a month from now.

We also remain open to others to come aboard.

The advisory group is starting to constitute itself now, and will be taking some decisions about requirements and underlying architecture. Development will then begin and continue up until well into next year. So there is time to engage even in the future. But many decisions will be made in the first months, and you can still be part of that as Advisor to Roundcube Next.

It’s not too late to be part of the Next. Just drop a message to

"... if nothing changes"

by Aaron Seigo in aseigo at 17:41, Friday, 17 July

I try to keep memory of how various aspects of development were for me in past years. I do this by keeping specific projects I've been involved with fresh in my memory, revisiting them every so often and reflecting on how my methods and experiences have changed in the time since. This allows me to wander backwards 5, 10, 15, 20 years in the past and reflect.

Today I was presenting the "final" code-level design for a project I've been tasked with: an IMAP payload filter for use with Kolab. The best way I can think of to describe it is as a protocol-level firewall (of sorts) for IMAP. The first concrete use case we have for it is to allow non-Kolab-aware clients (e.g. Thunderbird) to connect to a Kolab server and see only the mail folders, implying that the groupware folders are filtered out of the IMAP session. There are a large number of other use case ideas floating about, however, and we wanted to make sure that we could accommodate those in future by extending the codebase. While drawing out on the whiteboard how I planned for this to come together, along with a break-out of the work into two-week sprints, I commented in passing that it was actually a nicely simple program.

In particular, I'm quite pleased with how the "filter groupware folders" will actually be implemented quite late in the project as a very simple, and very isolated, module that sits on top of a general use scaffolding for real-time manipulation of an IMAP stream.

When I arrived back at my desk, I took a moment to reflect on how I would have perceived the same project earlier in my career. One thing that sprung out at me was that the shape of the program was very clear in my head. Roll back a decade and the details would have been much more fuzzy. Roll back 15 years and it probably would have been quite hand-wavy at the early stages. Today, I can envision a completed codebase.

If someone had presented that vision to me 10 or 15 years ago, I would have accepted it quite happily ("Yes! A plan!"). Today, I know that plan is a lie in much the same way as a 14-day weather report is: it is the best we can say about the near future from our knowledge of today. If nothing changes, that's what it will be. Things always change, however. This is one of life's few constants.

So what point is there to being able to see an end point? That's a good question, and I have to say that I never attempted to develop the ability to see a codebase in this amount of detail before writing it. It just sort of happened with time and experience, one of the few bonuses of getting older. ;) As such, one might think that since the final codebase will almost certainly not look exactly like what is floating about in my head, this is not actually a good thing to have at all. Could it perhaps lock one mentally into a path which can be realized, but which, when complete, will not match what is actually needed?

A lot of modern development practice revolves around the idea of flexibility. This shows up in various forms: iteration, approaching design in a "fractal" fashion, implementing only what you need now, etc. A challenge inherent in many of these approaches is that one grows short-sighted. So often I see projects switch data storage systems, for instance, as they run into completely predictable scalability, performance or durability requirements over time. It's amazing how much developer time is thrown away simply by misjudging at the start what an appropriate storage system would be.

This is where having a long view is really helpful. It should inform the developer(s) about realistic possible futures which can eliminate many classes of "false starts" right at the beginning. It also means that code can be written with purpose and confidence right from the start, because you know where you are headed.

The trick comes in treating this guidance as the lie it is. One must be ready and able to modify that vision continuously to reflect changes in knowledge and requirement. In this way one is not stuck in an inflexible mission while still having enough direction to usefully steer by. My experience has been that this saves a hell of a lot of work in the long run and forces one to consider "flexible enough" designs from the start.

Over the years I've gotten much better at "flexible enough" design, and being able to "dance" the design through the changing sea of time and realities. I expect I will look back in 5, 10, 15 and 20 years and remark on how much I've learned since now, as well.

I am reminded of steering a boat at sea. You point the vessel to where you want to go, along a path you have in your mind that will take you around rocks and currents and weather. You commit to that path. And when the ocean or the weather changes, something you can count on happening, you update your plans and continue steering. Eventually you get there.

Roundcube Next: The Next Steps

by Aaron Seigo in aseigo at 15:17, Friday, 03 July

Roundcube Next: The Next Steps

The crowdfunding campaign to provide funding and greater community engagement around the refactoring of Roundcube's core to give it a secure future has just wrapped up. We managed to raise $103,531 from 870 people. This obviously surpassed our goal of $80,000, so we're pretty ecstatic. This is not the end, however: now we begin the journey to delivering a first release of Roundcube Next. This blog entry outlines some of that path forward.

Roundcube Next: The Next Steps


The most obvious thing on our list is to get people's t-shirts and stickers out to them. We have a few hundred of them to print and ship, and it looks like we may be missing a few shipping addresses, so I'll be following up with those people next week. Below is a sneak peek of what the shirts might look like. We're still working out the details, so they may look a bit different than this once they come off the presses, but this should give you an idea. We'll be in touch with people for shirt sizes, color options, etc. in the coming week.

Roundcube Next: The Next Steps

Those who elected for the Kolaborator perk will be notified by email about how to redeem their free months on Kolab Now. Of course, everyone who elected for the in-application-credits mention will get that in due time as well. We've got you all covered! :)

Note that it takes a couple of weeks for Indiegogo to get the funds to us, and we need to wait on that before confirming our orders and shipping for the physical perk items.

Roundcube Backstage

We'll be opening the Roundcube Backstage area in the two weeks or so after wrap-up is complete next week. This will give us enough time to create the Backstage user accounts and get the first set of content in place. We will be using the Discourse platform for discussions and for posting our weekly Backstage updates. I'm really looking forward to reading your feedback there, answering questions, contemplating the amazing future that lies ahead of us, ...

The usual channels of Roundcube blogging, forums and mailing lists will of course remain in use, but the Backstage will see all sorts of extras and closer direct interaction with the developers. If you picked up the Backstage perk, you will get an email next week with information on when and where you can activate your account.

Advisory Committee

The advisory committee members will also be getting an email next week with a welcome note. You'll be asked to confirm who the contact person should be, and they'll get a welcome package with further information. We'll also want some information for use in the credits badge: a logo we can use, a short description you'd like to see with that logo describing your group/company, and the web address we should point people to.

The Actual Project!

The funds we raised will cover getting the new core in place with basic email, contacts and settings apps. We will be able to adopt JMAP into this and build the foundations we so desperately need. The responsive UI that works on phones, tablets and desktop/laptop systems will come as a result of this work as well, something we are all really looking forward to.

Today we had an all-hands meeting to take our current requirements, mock-ups and design docs and reflect on how the feedback we received during the campaign should influence them. We are now putting all this together in a clear and concise form that we can share with everyone, particularly our Advisory Committee members as well as in the Backstage area. This will form the basis for our first round of stakeholder feedback, which I am really looking forward to.

We are committed to building the most productive and collaborative community around any webmail system out there, and these are just our first steps. That we have the opportunity here to work with the likes of Fastmail and Mailpile, two entities that one may have thought of as competitors rather than possible collaborators, really shows our direction in terms of inclusivity and looking for opportunities to collaborate.

Though we are at the end of this crowdfunding phase, this is really just the beginning, and the entire team here isn't waiting a moment to get rolling! Mostly because we're too excited to do anything else ;)

Roundcube Next: The Next Steps

Software is a social endeavour. The most important advantage of Free Software is its community, because the best Open Source is built by a community of contributors. Contribution is the single most important currency and the differentiator between users and community. You want to be part of that community, at least by proxy, because like any community, its members spend time together, exchange ideas, and create cohesion that translates into innovation, features and best practices.

We create nothing less than a common vision of the future.

By the rules of our community, anyone can take our software and use it, extend it, distribute it. A lot of value can be created this way and not everyone has the capabilities to contribute. Others choose not to contribute in order to maximise their personal profits. Short of actively harming others, egoism, even in its most extreme forms, is to be accepted. That is not to say it is necessarily a good idea for you to put the safeguarding of your own personal interests into the hands of an extreme egoist. Or that you should trust in their going the extra mile for you in all the places that you cannot verify.

That is why the most important lesson for non-developers is this: Choose providers based on community participation. Not only are they more likely to know early about problems, putting them in a much better position to provide you with the security you require. They will also ensure you will have a future you like.

Developers know all this already, of course, and typically apply it at least subconsciously.

Growing that kind of community has been one of the key motives to launch Roundcube Next, which is now coming close to closing its phase of bringing together its key contributors. Naturally everyone had good reasons to get involved, as recently covered on Venturebeat.

Last night became the single greatest contributor to the campaign in order to build that better future together, for everyone. Over the past weeks, many other companies, some big, some small, have done the same.

Together, we will be that community that will build the future.

Roundcube Next crowdfunding success and community

by Aaron Seigo in aseigo at 21:36, Monday, 29 June

Roundcube Next crowdfunding success and community

A couple days ago, the Roundcube Next crowdfunding campaign reached our initial funding goal. We even got a piece on Venture Beat, among other places. This was a fantastic result and a nice reward for quite a bit of effort on the entire team's part.

Reaching our funding goal was great, but for me personally the money is secondary to something even more important: community.

You see, Roundcube has been an Internet success for a decade now, but when I sat down to talk with the developers about who their community was and who was participating in it, there wasn't as much to say as one might hope for such a significant project used by that many people.

Unlike the free software projects born in the 90s, many projects these days are not very community focused. They are often much more pragmatic, but also far less idealistic. This is not a bad thing, and I have to say that the focus many of them have on quality (of various sorts) is excellent. There is also a greater tendency to have a company founded around them, and a greater tendency to be hosted on the mostly-proprietary Github system with little in the way of community connection other than pull requests. Unlike the Free software projects I have spent most of my time with, these projects hardly try at all to engage with people outside their core team.

This lack of engagement is troubling. Community is one of the open source1 methodology's greatest assets. It is what allows for mutual interests to create a self-reinforcing cycle of creation and support. Without it, you might get a lot of software (though you just as well might not), but you are quite unlikely to get the buy-in, participation and thereby amplifiers and sustainability of the open source of the pre-Github era.

So when we designed the Roundcube Next campaign, we positioned no less than 4 of the perks to be participatory. There are two perks aimed at individual backers (at $75 and $100) which get those people access to what we're calling the Backstage Pass forums. These forums will be directed by the Roundcube core team, and will focus on interaction with the end users and people who host their own instance of Roundcube. Then we have two aimed at larger companies (at $5,000 and $10,000) who use Roundcube as part of their services. Those perks gain them access to Roundcube's new Advisory Committee.

So while these backers are helping us make Roundcube Next a reality, they are also paving a way to participation for themselves. The feedback from them has been extremely good so far, and we will build on that to create the community Roundcube deserves and needs. One that can feed Roundcube with all the forms of support a high profile Free software product requires.

So this crowdfunding campaign is really just the beginning. After this success, we'll surely be doing more fund raising drives in future, and we'd still love to hit our first stretch goal of $120,000 ... but even more vitally this campaign is allowing us to draw closer to our users and deployers, and them with us until, one hopes, there is only an "us": the people who make Roundcube happen together.

That we'll also be delivering the most kick-ass-ever version of Roundcube is pretty damn exciting, too. ;)

p.s. You all have 3 more days to get in on the fun!

1 I differentiate between "Free software" as a philosophy, and "open source" as a methodology; they are not mutually exclusive, but they are different beasts in almost every way, most notably how one is an ideology and the other is a practice.

Riak KV, Basho and Kolab

by Aaron Seigo in aseigo at 10:22, Saturday, 27 June

Riak KV, Basho and Kolab

As I have mentioned in earlier blog entries, Kolab Enterprise has gained data loss prevention (DLP) functionality this year that goes above and beyond what one tends to find in other groupware products. Kolab's DLP is not just a back-up system that copies mails and other objects to disk for later restore; it actually creates a history of every groupware object in real-time that can later be examined and restored from. This will eventually lead to some very interesting business intelligence features.

The storage system for the Kolab DLP system is Basho's industry-leading distributed NoSQL database, Riak KV. (The "KV" stands for key/value.) We chose Riak KV because it scales naturally (it is designed to be run as a cluster of nodes by default), is robust by design (CAP Theorem ftw), and is dead simple to deploy on development and production machines alike. A further key factor for us is that Basho provides proven enterprise-grade support for its line of Riak products. This was a requirement for us as we need to provide enterprise-grade support for the entire Kolab Enterprise stack.

(It was a nice coincidence that both Riak and core parts of Kolab's DLP system are written in Erlang. ;)

I sat down with Manu Marchel, Managing Director for EMEA at Basho Technologies Inc., recently for a mutual interview. You can read my interview on the Basho blog (I'll update this entry with a link to their blog when it is published); here is a transcript of my conversation with Manu:

NoSQL is quite a new technology in the Big Data space. Many people might have heard about things like Hadoop, but how does NoSQL fit in? Could you give everyone the quick cheatsheet on what NoSQL databases are, and specifically on Riak KV, your key-value NoSQL database?

NoSQL databases are the new generation of databases, designed to address the needs of enterprises to store, manage, analyse and serve the ever-increasing amounts of unstructured data that make up over 80% of all data being created nowadays in public clouds or private infrastructures. Apache Hadoop has done a great job of handling batch analytics use cases at massive scale for unstructured data, what I would call exploratory or discovery analytics. What NoSQL databases like Riak do, in comparison, is help organisations manage their active data workloads as well, providing near-real-time operational analytics at scale. Most importantly, most businesses need scalability, availability and fault tolerance as core requirements of their current and future application architecture, and these are deciding factors for NoSQL against traditional relational databases. NoSQL databases started as one of four types: Key-Value, Column store, Document and Graph. Nowadays they are becoming multi-model, whereby, for example, Riak can efficiently handle key-value data, but also documents as well as log/time-series data, as demonstrated by our wide range of customers, including Kolab Systems.

Riak KV is the most widely adopted NoSQL Key-Value database, with scalability, high availability, fault tolerance and operational simplicity as its key properties. It is used most often for mission-critical use cases and works great for handling user data, session data, profile data, social data, real-time data and logging data. It provides near-real-time analytics with its secondary indexes, search through Solr, in-situ Map/Reduce and soon-to-come Apache Spark support. Finally, its multi-data-center replication capability makes it easy to ensure business continuity, to geo-locate data for low-latency access across continents, or to segregate workloads to ensure very reliable low latency.

Riak KV is known for its durability, which is part of the reason we chose it for Kolab's DLP system. Could you give us some insight into how Riak KV achieves this?

Hardware does fail, and when it does your IT infrastructure needs to be able to cope: your systems must continue to operate while the resources are brought back online as soon as possible. Riak KV was designed to eliminate the impact of the unexpected. Even if network partitions or hardware failures cause unanticipated outages, Riak KV can still read and write your data. This means you never lose data, even when part of the system goes down. For Kolab customers, it means they have the security of knowing that the data loss prevention and auditing they are paying for is backed by the best system available to deliver on this promise.

Availability seems to be a very important thing for databases in today’s digital age. How is Riak providing this key feature to Kolab and how does this enhance the Kolab offering?

Simply, Riak KV is designed to intelligently replicate and retrieve data making sure that applications based on the database are always available. Scalability also comes into play here as well. Unlike traditional databases, Riak is designed to respond to the billions of data points and terabytes of data that are being produced -- often in real-time -- as it is able to scale in a near linear fashion to give Kolab the best possible performance. Ultimately this means that Kolab’s application is always available so as an end-user you don’t experience any system outages no matter how busy or active your users are.

We integrate Riak KV with Kolab’s data loss prevention system to store groupware object histories in realtime for auditing and roll-back if needed. Is this unique?

Yes! This is a great example of two great technologies working together to provide an excellent customer experience. Combining the power of Riak KV’s high availability, fault tolerance, and scalability with Kolab’s data loss prevention system means that you have an incredibly strong and powerful system.

Basho is a really unique name for a technology company - is there any history or background to it?

Thank you, we really like our name too. Basho’s name was inspired by the real-life Matsuo Basho (1644 – 1694), who is considered by many to be Japan's most renowned and revered writer of haiku. Haiku are known for their balance of lines and syllables, where the simplicity of the structure is important. This is a founding, guiding principle that Riak KV is based on: operational simplicity is core to our architecture, eliminating the need for mindless manual operations, as data can be automatically and uniformly distributed.

To see the partnership of Basho's Riak and Kolab Enterprise in action together, come see us in Munich at the TDWI European Conference 22-24th June. We'll be in a booth showing both Riak KV and Kolab Enterprise, and will be happy to answer your questions!

If you are a user of Roundcube, you want to contribute to the Roundcube Next crowdfunding campaign. If you are a provider of services, you definitely want to get engaged and join the advisory group. Here is why.

Free Software has won. Or has it? Linux is certainly dominant on the internet. Every activated Android device is another Linux kernel running. At the same time we see a shift towards “dumber” devices which are in many ways more like thin clients of the past. Only they are not connected to your own infrastructure.

Alerted by the success of Google Apps, Microsoft has launched Office 365 to drive its own transformation from a software vendor into a cloud provider. Amazon and others have also joined the race to provide your collaboration platform. The pull of these providers is already enormous. Thanks to networking effects, economies of scale, and ability to leverage deliberate technical incompatibilities to their advantage, the drawing power of these providers is only going to increase.

Open Source has managed to catch up to the large providers in most functions, bypassing them in some, lagging slightly behind in others. Kolab has been essential in providing this alternative, especially where cloud-based services are concerned. Its web application is on par with Office 365 and Google Apps in usability, attractiveness and most functions. It is also the only fully Open Source alternative that scales to millions of users and allows sharing of all data types in ways that are superior to what the proprietary competition has to offer.

Collaborative editing, chat, voice, video – all the forms of synchronous collaboration – are next and will be added incrementally. Meanwhile, Kolab Systems will keep driving the commercial ecosystem around the solution, allowing application service providers (ASPs), institutions and users to run their own services with full professional support. And all parts of Kolab will remain Free and Open, as well as committed to the upstream, according to best Free Software principles. If you want to know what that means, please take a look at Thomas Brüderli's account of how Kolab Systems contributes to Roundcube.

TL;DR: Around 2009, Roundcube founder Thomas Brüderli was contacted by Kolab at a time when his day job left him so little time to work on Roundcube that he had toyed with the thought of simply stepping back. Kolab Systems hired the primary developers of Roundcube to finish the project, contributing around 95% of all code in all releases since 0.6, driving it to its 1.0 release and beyond. At the same time, Kolab Systems carefully avoided imposing itself on the Roundcube project itself.

From a Kolab perspective, Roundcube is the web mail component of its web application.

The way we pursued its development made sure that it could be used by any other service provider or ISV. And it was. Roundcube has an enormous adoption rate, with millions of downloads, hundreds of thousands of sites, and a user count beyond the tens of millions. According to cPanel, 62% of their users choose Roundcube as their web mail application. It’s been used in a wide number of other applications, including several service providers that offer mail services more robust against commercial and governmental spying. Everyone at Kolab considers this a great success, and finds it rewarding to see our technology contribute essential value to society in so many different ways.

But while adoption sky-rocketed, contribution did not grow in the same way. It’s still Kolab Systems driving the vast majority of all code development in Roundcube, along with a small number of occasional contributors. And as a direct result of the Snowden revelations, the development of web collaboration solutions fragmented further. There are a number of proprietary approaches, which should be self-evidently disqualified from being taken seriously based on what we have learned about how solutions get compromised. But there are also Open Source solutions.

The Free Software community has largely responded in one of two ways. Many people felt re-enforced in their opinion that people just “should not use the cloud.” Many others declared self-hosting the universal answer to everything, and started to focus on developing solutions for the crypto-hermit.

The problem with that is that it takes an all-or-nothing approach to privacy and security. It also requires users to become more technical than most of them ever wanted to be, and to give up features, convenience and ease of use as the price for privacy and security. In my view that ignores the most fundamental lesson we have learned about security throughout the past decades: people will work around security when they consider it necessary in order to get the job done. So the adoption rate of such technologies will necessarily remain limited to a very small group of users whose concerns are unusually strong.

These groups are often more exposed, more endangered, and more in need of protection, and they contribute to society in an unusually large way. So developing technology they can use is clearly a good thing.

It just won’t solve the problem at scale.

To do that we would need a generic web application geared towards all of tomorrow’s form factors and devices. It should be collaboration-centric and allow deployment in environments from a single user to hundreds of millions of users. It should enable meshed collaboration between sites, be fun to use, elegant, beautiful, and provide security in a way that does not get in the user's face.

Fully Free Software, that solution should be the generic collaboration application that could become in parts or as a whole the basis for solutions such as mailpile, which focus on local machine installations using extensive cryptography, intermediate solutions such as Mail-in-a-Box, all the way to generic cloud services by providers such as cPanel or Tucows. It should integrate all forms of on-line collaboration, make use of all the advances in usability for encryption, and be able to grow as technology advances further.

That, in short, is the goal Kolab Systems has set out to achieve with its plans for Roundcube Next.

While we can, and of course will, pursue that goal independently in incremental steps, we believe that would mean missing two rather major opportunities. The first is the opportunity to tackle this together, as a community. We have a lot of experience, a great UI/UX designer excited about the project, and many good ideas.

But we are not omniscient and we also want to use this opportunity to achieve what Roundcube 1.0 has not quite managed to accomplish: To build an active, multi-vendor community around a base technology that will be fully Open Source/Free Software and will address the collaborative web application need so well that it puts Google Apps and Office 365 to shame and provides that solution to everyone. And secondly, while incremental improvements are immensely powerful, sometimes leapfrogging innovation is what you really want.

All of that is what Roundcube Next really represents: The invitation to leapfrog all existing applications, as a community.

So if you are a user that has appreciated Roundcube in the past, or a user who would like to be able to choose fully featured services that leave nothing to be desired but do not compromise your privacy and security, please contribute to pushing the fast forward button on Roundcube Next.

And if you are an Application Service Provider, but your name is not Google, Microsoft, Amazon or Apple, Roundcube Next represents the small, strategic investment that might just put you in a position to remain competitive in the future. Become part of the advisory group and join the ongoing discussion about where to take that application, and how to make it reality, together.


Key Update

by Georg Greve in freedom bits at 20:58, Monday, 18 May

I’m a fossil, apparently. My oldest PGP key dates back to 1997, around the time when GnuPG was just getting started – and I switched to it early. Over the years I’ve been working a lot with GnuPG, which perhaps isn’t surprising. Werner Koch was one of the co-founders of the Free Software Foundation Europe (FSFE), and so we share quite a bit of long and interesting history together. I was always proud of the work he did, and together with Bernhard Reiter and others I did what I could to support GnuPG when most people did not seem to understand how essential it truly was – and even many security experts declared proprietary encryption technology acceptable. Bernhard was also crucial in starting the more than 10-year track record of Kolab development supporting GnuPG. And the usability of GnuPG in particular has always been something I’ve advocated for. As the now famous video by Edward Snowden demonstrated, this unfortunately continues to be an unsolved problem, but hopefully will be solved “real soon now.”
In any case. I’ve been happy with my GnuPG setup for a long time. Which is why the key I’ve been using for the past 16 years looked like this:
sec# 1024D/86574ACA 1999-02-20
uid                  Georg C. F. Greve <>
uid                  Georg C. F. Greve <>
uid                  Georg C. F. Greve <>
uid                  Brave GNU World <>
uid                  Georg C. F. Greve <>
uid                  Georg C. F. Greve <>
uid                  Georg C. F. Greve (Kolab Systems AG, CEO) <>
uid                  Georg C. F. Greve (Kolab Systems AG, CEO) <>
ssb>  1024R/B7DB041C 2005-05-02
ssb>  1024R/7DF16B24 2005-05-02
ssb>  1024R/5378AB47 2005-05-02
You’ll see that I kept the actual primary key off my work machines (look for the ‘#’) and also moved the actual sub keys onto a hardware token – naturally an FSFE Fellowship Smart Card from the first batch ever produced.
That smart card is battered and bruised, but its chip is still intact, with 58470 signatures and counting. The key itself is likely still intact as well, and hasn’t been compromised, for lack of ever having been on a networked machine. But unfortunately there is no way to extend the length of a key. And while 1024 bits is probably still okay today, it’s not going to last much longer. So I finally went through the motions of generating a new key:
sec#  4096R/B358917A 2015-01-11 [expires: 2020-01-10]
uid                  Georg C. F. Greve (Kolab Systems AG, CEO) <>
uid                  Georg C. F. Greve (Kolab Systems AG, CEO) <>
uid                  Georg C. F. Greve (Kolab Systems AG, CEO) <>
uid                  Georg C. F. Greve (Kolab Community) <>
uid                  Georg C. F. Greve (Free Software Foundation Europe, Founding President) <>
uid                  Georg C. F. Greve (Free Software Foundation Europe, Founding President) <>
uid                  Georg C. F. Greve ( Board) <>
uid                  Georg C. F. Greve <>
uid                  Georg C. F. Greve (GNU Project) <>
ssb>  4096R/AD394E01 2015-01-11
ssb>  4096R/B0EE38D8 2015-01-11
ssb>  4096R/1B249D9E 2015-01-11

My basic setup is still the same, and the key has been uploaded to the key servers, signed by my old key, which I have meanwhile revoked and which you should stop using. From now on please use the key
pub   4096R/B358917A 2015-01-11 [expires: 2020-01-10]
      Key fingerprint = E39A C3F5 D81C 7069 B755  4466 CD08 3CE6 B358 917A
exclusively and feel free to verify the fingerprint with me through side channels.

Not that this key has any chance to ever again make it among the top 50… but then that is a good sign in so far as it means a lot more people are using GnuPG these days. And that is definitely good news.

And in case you haven’t done so already, go and support GnuPG right now.



event and data logging

by Aaron Seigo in aseigo at 08:58, Wednesday, 01 April

Working with Kolab has kept me busy on numerous fronts since I joined near the end of last year. There is the upcoming Kolab Summit, refreshing Kolab Systems' messaging, helping with progress around Kolab Now, collaborating on development process improvement, working on the design and implementation of Akonadi Next, the occasional sales engineering call ... so I've been kept busy, and been able to work with a number of excellent people in the process, both in Kolab Systems and the Kolab community at large.

While much of that list of topics doesn't immediately bring "writing code" to mind, I have had the opportunity to work on a few "hands on keyboard, writing code" projects. Thankfully. ;)

One of the more interesting ones, at least to me, has been work on an emerging data loss prevention and audit trail system for Kolab called Bonnie. It's one of those things that companies and governmental users tend to really want, but which is fairly non-trivial to achieve.

There are, in broad strokes, three main steps in such a system:

  1. Capturing and recording events
  2. Storing data payloads associated with those events
  3. Recreating histories which can be reviewed and even restored from

I've been primarily working on the first two items, while a colleague has been focusing on the third point. Since each of these points is a relatively large topic on their own, I'll be covering each individually in subsequent blog entries.
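The first two steps above can be sketched in a few lines. This is a minimal, self-contained illustration under stated assumptions, not the actual Bonnie/Egara code: all class and method names here are hypothetical, and the in-memory structures stand in for the real storage backend (Bonnie stores its histories in Riak KV).

```python
import json
import time

class HistoryStore:
    """Toy stand-in for a DLP event/history store (hypothetical API)."""

    def __init__(self):
        self._events = []     # step 1: append-only log of captured events
        self._versions = {}   # step 2: object ID -> immutable payload versions

    def record(self, object_id, event_type, payload):
        # Step 1: capture the event as it happens, with a timestamp.
        event = {
            "object": object_id,
            "type": event_type,
            "timestamp": time.time(),
        }
        self._events.append(event)
        # Step 2: store the full payload as a *new* version;
        # earlier versions are never overwritten, so history is preserved.
        self._versions.setdefault(object_id, []).append(json.dumps(payload))
        return event

    def history(self, object_id):
        # Step 3 (simplified): recreate the sequence of states of an object,
        # which can then be reviewed or restored from.
        return [json.loads(v) for v in self._versions.get(object_id, [])]

store = HistoryStore()
store.record("event-42", "create", {"summary": "Kickoff"})
store.record("event-42", "update", {"summary": "Kickoff", "location": "The Hague"})
print(len(store.history("event-42")))  # both versions of the object survive
```

The essential property, mirrored from the description above, is that the store is append-only: an update produces a new version rather than replacing the old one, which is what makes later audit and roll-back possible.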

We'll start in the next blog entry by looking at event capture and storage, why it is necessary (as opposed to, say, simply combing through system logs) and what we gain from it. I'll also introduce one of the Bonnie components, Egara, which is responsible for this set of functionality.

eGovernment in the Netherlands

by Aaron Seigo in aseigo at 18:47, Friday, 27 March

Today "everyone" is online in one form or another, and it has transformed how many people connect, communicate, share and collaborate with others. To think that the Internet really only hit the mainstream some 20 years ago. It has been an amazingly swift and far-reaching shift that has touched people's personal and professional lives.

So it is no surprise that the concept of eGovernment is a hot one and much talked about. However, the reality on the ground is that governments tend not to be the swiftest sort of organizations when it comes to adopting change. (Which is not a bad thing; but that's a topic for another blog perhaps.) Figuring out how to modernize the communication and interaction of government with their constituencies seems to largely still be in the future. Even in countries where everyone is posting pictures taken on their smartphones of their lunch to all their friends (or the world ...), governments seem to still be trying to figure out how to use the Internet as an effective tool for democratic discourse.

The Netherlands is a few steps ahead of most, however. They have an active social media presence which is used by numerous government offices to collaborate with each other as well as to interact with the populace. Best of all, they aren't using a proprietary, lock-in platform hosted by a private company overseas somewhere. No, they use a free software social media framework that was designed specifically for this: Pleio.

They have somewhere around 100,000 users of the system and it is both actively used and developed to further the aims of the eGovernment initiative. It is, in fact, an initiative of the Programme Office 2.0 with the Treasury department, making it a purposeful program rather than simply a happy accident.

In their own words:

The complexity of society and the needs of citizens call for an integrated service platform where officials can easily collaborate with each other and engage citizens.

In addition, hundreds of government organizations all have the same sort of functionality needed in their operations and services. At this time, each organization is still largely trying to reinvent the wheel and independently purchase technical solutions.

That could be done better. And cheaper. Happily, new resources are nowadays available to work together government-wide in a smart way and to exchange knowledge. Pleio is the platform for this.

Just a few days ago it was announced publicly that not only is the Pleio community hard at work on improving the platform to raise the bar yet again, but that Kolab will be a part of that effort. A joint development project has been agreed and is now underway as part of a new Pleio pilot project. You can read more about the collaboration here.

Kolab Summit 2015

by Aaron Seigo in aseigo at 11:52, Monday, 16 March

Kolab Summit 2015

We just announced that registration and presentation proposal submission is now open for the Kolab Summit 2015 which is being held in The Hague on May 2-3.

Just as Kolab itself is made up of many technologies, many technologies will be present at the summit. In addition to topics on Kolab, there will be presentations covering Roundcube, KDE Kontact and Akonadi, Cyrus IMAP, and OpenChange, among others. We have some pretty nifty announcements and reveals already lined up for the event, which will be keynoted by Georg Greve (CEO of Kolab Systems AG) and Jeroen van Meeuwen (lead Kolab architect). Along with the usual BoFs and hacking rooms, this should be quite an enjoyable event.

As an additional and fun twist, the Kolab Summit will be co-located with the openSUSE conference which is going on at the same time. So we'll have lots of opportunity for "hallway talks" with Geekos as well. In fact, I'll be giving a keynote presentation at the openSUSE conference about freedom as innovation. A sort of "get the engines started" presentation that I hope provokes some thought and gets some energy flowing.

SFD Call to Action: Let the STEED run!

by Georg Greve in freedom bits at 09:00, Friday, 20 September

Information Technology is a hype-driven industry, a fact that has largely contributed to the current situation in which the NSA and GCHQ have unprecedented access to global communication and information, including for a very Realpolitik-based approach to how that information may be used. Economic and political manipulation may not be how these measures are advertised, but it may very well be the actual motivation. It’s the economy, stupid!

Ever since all of this started, many people have asked how to protect their privacy. Despite some attempts, there is still a lack of comprehensive answers to this question. Yet there is an obvious answer that most mainstream media seem to have largely missed: software freedom advocates had it right all along. You cannot trust proprietary cryptography, or proprietary software. If a company has a connection to the legal nexus of the United States, it is subject to US law and must comply with demands of the NSA and other authorities. But if that company also provides proprietary software, it is virtually impossible for you to know what kind of agreements it has with the NSA, as most of their management prefer not to go to jail. And one would have to be very naive to think the United States is the only country where secret agreements exist.

Security unfortunately is a realm full of quacks and it is just as easy to be fooled as it is to fool yourself. In fact many of the discussions I’ve had over the past weeks painfully reminded me of what Cory Doctorow called “Schneier’s Law” although Bruce Schneier himself points out the principle has been around for much longer. He has dated it back to Charles Babbage in 1864:

One of the most singular characteristics of the art of deciphering is the strong conviction possessed by every person, even moderately acquainted with it, that he is able to construct a cipher which nobody else can decipher.

So in my experience it makes good sense to listen to what Bruce Schneier and a few others have to say, which is why I think his guide to staying secure on the internet is probably something everyone should have read. In that list of recommendations there are some points that ought to sound familiar:

4) Be suspicious of commercial encryption software, especially from large vendors. My guess is that most encryption products from large US companies have NSA-friendly back doors, and many foreign ones probably do as well. It’s prudent to assume that foreign products also have foreign-installed backdoors. Closed-source software is easier for the NSA to backdoor than open-source software. Systems relying on master secrets are vulnerable to the NSA, through either legal or more clandestine means.

5) Try to use public-domain encryption that has to be compatible with other implementations. For example, it’s harder for the NSA to backdoor TLS than BitLocker, because any vendor’s TLS has to be compatible with every other vendor’s TLS, while BitLocker only has to be compatible with itself, giving the NSA a lot more freedom to make changes. And because BitLocker is proprietary, it’s far less likely those changes will be discovered. Prefer symmetric cryptography over public-key cryptography. Prefer conventional discrete-log-based systems over elliptic-curve systems; the latter have constants that the NSA influences when they can.

“So you were right, good for you,” I hear you think. But the point I am trying to make is a different one. It has been unbelievably difficult in the past to consistently do the right thing – the thing that would now give us the answers to the questions posed by the NSA and others. Both the Free Software Foundation Europe (FSFE) as an organisation and Kolab as a technology have a very long history to that extent. In fact, if you’ve read the background of, you’ll hopefully see the same kind of approach there, as well. Having been involved with both has given me a unique perspective.

So when Bruce Schneier lists GnuPG as the first of several applications he uses and recommends to stay secure, I can’t help but find this rather ironic and rewarding at the same time, because I know what was necessary for this crucial piece of software to come so far. Especially Werner Koch, but also Marcus Brinkmann, are two people all of us are indebted to, even though most people don’t realize it. Excellent software developers, but entrepreneurs with much room for improvement, and (I’m sorry, guys) horrible at marketing and fundraising. So they pretty much exploited themselves over many years in order to keep the development going, because they knew their work was essential. Over the course of the past 12 years the entire Kolab team, and especially individuals such as Bernhard Reiter at Intevation, have always done what they could to involve them in development projects and push the technology forward.

And we will continue to do that, both through and some other development projects we are pushing with Kolab Systems for customers that have an interest in these technologies. But they have a whole lot more in mind than we could make possible immediately, such as dramatically increasing the usability of end-to-end cryptography. The concept they have developed is based on over a decade of working on the obstacles to end-user adoption. It’s called STEED — Usable End-to-End Encryption, and has been available for two years now. I think it’s time for it to be finalized and implemented.

That’s why I am using tomorrow’s Software Freedom Day to ask for volunteers to help them run a crowdfunding campaign so they can finally put it into practice, in the open, to everyone’s benefit. Because that’s going to contribute more than just a little bit towards a world where privacy will once more be the default. So please help spread the word and let the STEED run!

Groklaw shutting down.

by Georg Greve in freedom bits at 12:31, Tuesday, 20 August

Today is a sad day for the world of Information Technology and the cause of software freedom. PJ just announced she’ll be shutting down Groklaw.

It’s hard to overestimate the role that Groklaw has played in past years. Many of us, myself included, have worked with Groklaw over the years. I still take pride that my article about the dangers of OOXML for Free Software and Open Standards might have been the first of many calls to arms on this topic. Or consider how Groklaw followed the Microsoft antitrust case that FSFE fought for, and with, the Samba team, and won for all of software freedom; Groklaw was essential in helping us counter some of the Microsoft spin-doctoring. Or the Sean Daly interview with Volker Lendecke, Jeremy Allison, Carlo Piana and myself for Groklaw after the landslide victory against Microsoft in court.

I remember very well how giddy I still was during the interview for having realized that Microsoft would not be able to take down FSFE, because that would have been the consequence had they gotten their way. We bet our life’s work at the time. And won. The relief was incredible.

So there is a good deal of personal sadness to hear about this, as well as a general concern which Paul Adams just summarized rather well on the #kolab IRC channel:

the world of IT is just that little bit less safe without groklaw

And it’s true. Groklaw has been the most important platform for countering corporate spin-doctoring, practiced an important form of whistleblowing long before Wikileaks, and has been giving alternative and background perspectives on some of the most important things going on inside and outside the media limelight. Without Groklaw, all of us will lack that essential information.

So firstly, I’d like to thank PJ for all the hard years of work on Groklaw. Never having had the pleasure of meeting her in real life, I still feel that I know her from the conversations we had over email over so many years. And I know how she got weary of the pressure, the death threats and the attempts at intimidating her into silence. Thank you for putting up with it for so long, and for doing what you felt was right and necessary despite the personal cost to yourself! The world needs more people like you.

But with email having been the only channel of communication she was comfortable using for reasons of personal safety, when Edward Snowden revealed the PRISM program, when Lavabit and Silent Circle shut down, when the boyfriends of journalists get detained at Heathrow, she apparently drew the conclusion this was no longer good enough to protect her own safety and the safety of the people she was in communication with.

That she chose as the service to confide in with her remaining communication lines at least to me confirms that we did the right thing when we launched and also that we did the right thing in the way we did it. But it cannot mitigate the feeling of loss for seeing Groklaw fall victim to the totalitarian tendencies our societies are exhibiting and apparently willingly embracing over the past years.

While we’re happy to provide a privacy asylum in a safe legislation, society should not need them. Privacy should be the default, not the exception.

In January this year we started the MyKolab beta phase, and last week we finally moved it to its production environment, just in time for the Swiss national day. This seemed oddly fitting, since the Swiss national day celebrates independence and self-determination, won as the Swiss liberated themselves from the feudal system. So when Bruce Schneier wrote about how the Internet right now resembles a feudal system, it was too tempting an opportunity to miss. And of course PRISM and Tempora played their part in the timing as well, although we obviously had no idea this leak was coming when we started the beta in January.

Anyhow. So MyKolab.com now has its new home.

Step 1: Hardware & Legislation

It should be highlighted that we actually run this on our own hardware, in a trustworthy, secure data centre, in a rack which we physically control. Because that is where security starts, really. Also, we run this in Switzerland, with a Swiss company, and for a reason. Most people do not seem to realize the level of protection data enjoys in Switzerland. We tried to explain it in the FAQ, and the privacy information. But it seems that too many people still don’t get it.

Put frankly, in these matters, legislation trumps technology and even cryptography.

Because when push comes to shove, people would rather not go to jail. So no matter what snake oil someone may be trying to sell you about your data being secure because “it is encrypted on our server with your passphrase, so even we don’t have access” – choice of country and legislation trumps it all.

As long as server-side cryptography is involved, a provider can of course access your data even when it is encrypted – especially when the secret is as simple as your password, which all your devices submit to the server every time you check mail. Worse yet, when you have push activated, your devices even keep the connection open. And if the provider happens to be subject to a requirement to cooperate and hand over your data, of course it will. Quite often providers do not even necessarily know this is going on, if they do not control the physical machines.
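To make the point concrete, here is a minimal Python sketch – purely hypothetical, not any provider's actual code, with a toy stand-in cipher – of why "encrypted with your passphrase on our server" protects nothing from the provider: every login hands the server the password, from which it can re-derive the key.

```python
import hashlib

def derive_key(password: str, salt: bytes) -> bytes:
    # The server derives the mailbox key from the very password that
    # every client submits on every single login.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def xor_stream(key: bytes, data: bytes) -> bytes:
    # Toy keystream cipher (SHA-256 in counter mode), standing in for
    # whatever real cipher a provider might use. XORing twice with the
    # same keystream decrypts.
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ s for b, s in zip(data, stream))

# "It is encrypted on our server with your passphrase":
salt = b"per-user-salt"
ciphertext = xor_stream(derive_key("hunter2", salt), b"confidential mail")

# But the provider sees the password at every login and can therefore
# re-derive the key and decrypt at will -- or when compelled to:
key_seen_at_login = derive_key("hunter2", salt)
print(xor_stream(key_seen_at_login, ciphertext))  # b'confidential mail'
```

The point is not the cipher; it is the key flow. Whoever receives the password, however briefly, can hold on to the derived key.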

XKCD 538: Security

So whenever someone tries to serve you that kind of snake oil, you should avoid that service at all cost, because you cannot know which other lies you are failing to catch them in. And yes, it is a true example, unfortunately. The romantic picture of the internet as a third place above nation states has never had much evidence on its side. Whoever was harbouring these notions and missed XKCD's take on the matter should definitely have received their wakeup call from Lavabit and Silent Circle.

The reality of the matter is:

  1. There is no digital security without physical security, and
  2. Country and applicable legislation always win.

Step 2: Terms of Service & Pricing

So legislation, hardware. What else? Terms of Service come to mind. Too often they are deliberately written to obfuscate or frankly turn you into the product. Because writing software, buying hardware, physical security, maintaining systems, staffing help desks, electricity: All these things cost money. If you do not pay for it, make sure you know who does. Because otherwise it’s like this old poker adage: If you cannot tell who is the weakest player at the table, it’s you. Likewise for any on-line service: if you cannot tell who is paying for this, it’s probably you.

Sometimes this may just be in ways you did not expect, or may not have been aware of. So while most people only look for the lowest price, the question you should actually be asking yourself is: am I paying enough for this service that it can be profitable even when it does everything right and pays all its employees fairly, even if they have families and perhaps even mortgages?

The alternative is services that are run by enthusiasts for the common good, or subsidized by third parties – sometimes for marketing purposes. If a service is run by an enthusiast, the question is how long they can afford to run it well, and what will happen if their priorities or interests change. Plus, few enthusiasts are willing to dish out the kind of cash that comes with a physically controlled, secure system in a data centre. So more often than not, this is either a box in someone's basement to which pretty much anyone has access while they are out for a pizza or the cinema, or – at least as problematic – a cheap VM at some provider with unknown physical, legislative and technical security.

If it is a subsidized service, it’s worse. Just like subsidies on food in Europe destroy the farming economy in Africa, making almost a whole continent dependent on charity, subsidized services cannibalize those that are run for professional interest.

In this case that means they damage the professional development community around Open Source, leading to less Free Software being developed. Why? Because such subsidized services typically do not bother to contribute upstream – contribution is a pure cost factor, and since the service is already charity, no one feels bad about not supporting the upstream – and at the same time they destroy the value proposition of those services that do contribute upstream. So the developers of the upstream technologies need to find other ways to support their work on Open Source, which typically means they get to spend less time on Free Software development.

This is the well-meaning counterpart to providers who simply take the software, do not bother to contribute upstream, and use it to provide a commercial service that near-automatically comes in below the price you would have to charge if you priced it sustainably, factoring in the upstream contribution and ongoing development. The road to hell and all that.

None of this is anything we wanted to contribute to with MyKolab.com.

So we made sure to write Terms of Service that were as good, honest and clear as we could make them, discussed them with the people behind the important Terms of Service; Didn’t Read project, and even link to that project from our own Terms of Service so people have a fair chance to compare them without being lawyers or even reading them.

Step 3: Contributing to the Commons

Roundcube++ - The Kolab Web Client

We were also careful not to choose a price point that would cannibalize anything but proprietary software. Because we pay the developers – all of whom write Open Source exclusively. This has made us the largest contributor to the Roundcube web mailer by some margin, for instance. In doing so, we deliberately made sure to keep the project independent and did not interfere with its internal management. Feel free to read the account of Thomas Brüderli on that front.

So while hundreds of thousands of sites use Roundcube worldwide, and it is popular with millions of users, only a handful of companies bother to contribute to its development, and none as much as Kolab Systems AG, the largest contributor by orders of magnitude. Don't get me wrong. That's all fine. We are happy about everyone who makes use of the software we develop, and we firmly believe there is a greater good achieved through Free Software.

But the economics are nonetheless the same: The developers working on Roundcube have lives, families even, monthly bills to pay, and we pay them every month to continue working on the technology for everyone’s good. Within our company group, similar things can probably be said for more than 60 people. And of course there are other parts of our stack that we do not contribute as much to, in some cases we are primarily the beneficiary of others doing the same.

It’s a give and take among companies who operate in this way that works extremely well. But there are some who almost never contribute. And if, as a customer, you choose them over those that are part of the developer community, you are choosing to have less Open Source Software developed.

So I would certainly suggest you look at contribution to Free Software as one essential criterion for whether the company you are about to choose is acting sustainably, or working towards a tragedy of the commons.

This now brings us to an almost complete list of items you want to check:

  • Physical control, including hardware
  • Legal control, choice of applicable legislation
  • Terms of Service that are honest and fair
  • Contribution to Open Source / Free Software

and you want to make sure you pay enough for all of these to meet the criteria you expect.

Bringing it all together

On all these counts simultaneously, we made sure to put MyKolab.com into the top 10%. Perhaps even the top 5%, because we develop, maintain and publish the entire stack, as a solution, fully Open Source and more radically Open Standards based than any other solution in this area. So in fact you never need to rely upon MyKolab.com continuing to provide the service you want.

You can always continue to use the exact same solution, on your own server, in your own hands.

That is a claim which is unique, as far as I am aware. And you know that whatever you pay for the service never contributes to the development of proprietary software, but to the state of the art in Free Software, available for everyone to take control of their own computing needs, as well as improving the service itself.

For me, it's this part that truly makes MyKolab.com special. Because if you ever need to break out of MyKolab.com, your path to self-reliance and control is already built into the system, delivered with and supported by the service itself: It's called Kolab.


Following the disclosures about how the United States and other countries are monitoring the world, a long-overdue global discussion about this subject has begun. In previous articles I tried to put together what has actually been proven thus far, what that means for society, and what the implications are for businesses around the world.

Now I'd like to take a look at governments. Firstly, governments of course have a functional aspect not entirely unlike that of a business, and of course governments should be conscious of the society and values they promote. Purely on these grounds it would likely be possible to say quite a few things about the post-PRISM society.

Secondly, there is of course also the question of the extent to which governments have known about this previously, and may even have colluded with what has been going on – in some cases possibly without democratic support for doing so. It has been pointed out by quite a few journalists that saying "I had no idea" amounts to admitting you have not been following technical progress since the typewriter was invented, and there is some truth to that. Although typewriters, too, have been known to be bugged, of course.

In fact, having spent quite some time at the United Nations, one of the typical sights for me was a diplomat talking on their mobile phone while covering their mouth with one hand to ward off the possibility of lip readers. So there is clearly an understanding that knowing more about anyone you share common or opposing interests with will give you an advantage, and that everyone is trying to gain that advantage to the best of their ability.

What I think is really at play here are two different things: Having been blind-sided by the actual threat, and having been found naïve.

Defending against bunnies, turning your back on lions

Smart politicians will by now have understood that their threat model has been totally off. It's much easier to intercept that mobile phone call (and get both sides of the conversation) than it is to learn lip reading, ensure you speak the same language, and make sure you have line of sight. In other words: they were spending lots of effort protecting the wrong things while ignoring the right ones. So there is no way for them to know how vulnerable they have been, what damage arose from that, and what follows from it for their future work.

So intelligent people should now be very worried indeed. Because either they did not know better, or they perhaps let a sense of herd safety drag them into behaviour that has compromised their professional integrity, in so far as it may have exposed their internal thoughts to people they did not want to share them with. If you have ever seen how international treaties are negotiated, it does not take a whole lot of fantasy to imagine how this might be a problem. And given the levels of secrecy and the apparent lack of supervision – if even the highest-level politicians truly had no idea – there is also a major concern about possible abuse of the system by those in government to influence the political balance within a country.

Politicians are also romantic

The other part of the surprise seems to stem from a certain romantic notion of friendship among nations harboured by many politicians and deliberately nurtured by nations that do not share such romantic perspectives, most importantly in this context the United States.

The allies of the United States, in particular the European Union, know that the US has these capabilities and is not afraid to use them to gain an advantage for the home team. But for some reason they thought they were part of that home team, because the United States had been telling them they were best friends forever. It does not lack a certain irony that Germany fell for this, not realizing that the United States were simply following their default approach abroad, which in the US is commonly referred to as Realpolitik.

So when European politicians suddenly realize that it may be problematic to negotiate free trade agreements with someone who is reading your internal briefings and mails and listening to your phone calls, it is not so much out of shock that the US is doing this in general. They know the US is not shy to use force at any level to obtain results. It is about the fact that these methods are being used universally, no matter who you are. That they were willing to use them against Switzerland, a country in the heart of Europe, should have demonstrated that aptly. Only that in this particular case, EU politicians were hoping to ride on the coat-tails of the US cavalry.

International Organizations

Of course that surprise also betrays the level of collaboration that has been present for a long time. The reason they thought they were part of the home team is that in some cases, they were. So when they were the beneficiaries of this system, working side by side with the United States at the Intergovernmental Organizations to put in place the global power structures that rule the world, this sort of advantage might have seemed very handy and very welcome. Not too many questions were asked, I presume.

But if you are one of the countries in transition, a country from the developing world, or simply a country that got on the wrong side of the United States and their power block, you now have to wonder: how much worse off are you for having been pushed much further back in negotiations than you would have been had the "Northern" power block not known all your internal assessments, plans and contingencies? And how can Intergovernmental Organizations truly function if all your communications with them are unilaterally transparent to this power block?

It's time to understand that imbalance, and to address it. I know that several countries are aware of this, and that some of them are actively seeking ways to address that strategic disadvantage, since parts of our company group have been involved in such efforts. But too many countries do not yet have the appropriate measures in place, nor are they addressing the issue with sufficient resources and urgency – perhaps out of an underestimation of the analytic capabilities involved.

The PRISM leaks should have been the wakeup call for these countries. But I would also expect them to raise their concerns at the Intergovernmental Organizations, asking the secretariats how the IT and communications strategy of these organizations adequately protects their mandate, for they can only function if a certain minimum level of confidence can be placed in them and in the integrity of their workflow.

Global Power Structures

But on a more abstract level, all of this once more establishes a trend of the United States acting as a nexus of global destabilisation, subject only to national interest. Because it is for the US government to decide which countries to bless with access to that information, and whose information to access. Cooperate and be rewarded; be defiant and be punished – for example by ensuring your national business champion does not get that deal, because the US might just employ its information to make sure a competing US business will. This establishes a gravitation towards pleasing the interests of the United States that I find problematic – as I would find a similar imbalance towards any other nation.

But in this case it is the United States that has moved to "economic policy by spook", as a good friend recently called it. Other countries may of course be doing the same; right now it seems more or less confirmed that this is at least in part collusion at NATO level. Be that as it may, countries need to understand that their sovereignty and economic well-being depend heavily upon the ability to protect their own information and that of their economies.

Which is why Brazil and India probably feel confirmed in their focus on strategic independence. With virtually every economic sector highly dependent on it, Information Technology has become as fundamental as electricity, roads or water. Perhaps it is time to re-assess to what level governments want to ensure an independent, stable supply that holds up to the demands of their nation.

Estonia's president recently suggested establishing European cloud providers; other areas of the world may want to pay close attention to this.

The Opportunity Exists, Does The Will?

Let’s say a nation wanted to address these issues. Imagine they had to engineer the entire stack of software. The prospects would be daunting.

Fortunately they don't have to. Nothing runs your data centres and infrastructures better, or with higher security, than Free Software. Our community has been building these tools for a long time now, and they have the potential to serve as the big equalizer in the global IT power struggle. The UNCTAD Information Economy Reports provide some excellent fact-based, neutral analysis of the impact of software freedom on economies around the world.

Countries stand to gain or lose a lot in this central question. Open Source may have been the answer all along, but PRISM highlighted that the need is both real and urgent.

Any government should be able to answer the following question: What is your policy on a sovereign software supply and digital infrastructure?

If that question cannot be answered, it’s time to get to work. And soon.


After a primer on the realities of today’s world, and the totalitarian tendencies that follow from this environment and our behaviour in it, let’s take a look at what this means for professional use of information technology.

Firstly, it should be obvious that when you use the cloud services of a company, you have no secrets from that company other than the ones it guarantees to keep. That guarantee is only as good as the legal environment the company operates in, and is of course subject to the level of trust you can place in the people running and owning the company.

So when using Google Apps for your business, you have no secrets from Google. Same for Office 365 and Microsoft. iCloud and Apple. Also, these companies are known for having very good internal data analytics. Google for instance has been using algorithms to identify employees that are about to leave in order to make them a better offer to keep them on board. Naturally that same algorithm could be used to identify which of your better employees might be susceptible to being head hunted.

Of course no-one will ever know whether that actually took place or whether it contributed to your company losing that important employee to Google. But the simple truth is: In some ways, Google/Microsoft/Apple is likely to know a lot more about your business than you do yourself. That knowledge has value, and it may be tempting to turn that value into shareholder value for either of these businesses.

If you are a US business, or a small, local business elsewhere, that may not be an issue.

But if you are into software, or have more than regional reach, it may become a major concern. Because thanks to what we now know about PRISM, using these services means the US intelligence services also have real-time access to your business and its development. And since FISA explicitly empowers these services to make use of those capabilities for the general interests of the United States – including foreign policy and economic goals – the conclusion is simple: you might just be handing your internal business information to people who champion your competition.

Your only protection is your own lack of success. And you might be right, you might be too small for them to use too many resources, because while their input bandwidth is almost unlimited, their output bandwidth for acting upon it of course has limits. But that’s about it.

The US has a long tradition of putting their public services at the disposal of industry, trying to promote their “tax paying home team.” It’s a cultural misunderstanding to assume they would be pulling their punches just because you like to watch Hollywood movies and sympathise with the American Dream.

Which is why the US has been active in promoting GM crops in Europe, and in upholding the interests of its pharmaceutical industry. Is anyone at Roche reading this? Are no shareholders concerned? To me it seems a good example of the risks unwittingly taken when you let the CFO manage the IT strategy. Those two professions rarely mix, in my experience.

The United States are not the only nation in the world doing this, of course. Almost every nation has at least a small agency trying to support its own industry in building international business, and the German chancellor typically has a whole entourage of industry representatives when she visits countries that are markets of interest. I guess it is a tribute to their honesty that the United States made it explicit that their intelligence services feed into this system in this way.

Several other countries are likely to do the same, but probably not as successfully or aggressively.

Old school on site IT as the solution?

Some people may now feel fairly smart for not having jumped on the Public Cloud bandwagon. Only that not all of them are as secure as they think they are. Because we have also learned that access to data does not only happen through the public clouds. Some software vendors, most importantly Microsoft, are also supplying the NSA with priority access to vulnerabilities in their software. Likely they will do their best to manage the pipeline of disclosure and resolution so that there is always at least one way for the NSA to get into your locally installed system in an automated fashion that is not currently publicly known.

This would also explain the ongoing debate about the "NSA back door in Windows", which was always denied – but the denial could have been carefully concealing this alternative way of achieving the same effect. So running your local installation of Windows is likely a little better for your business secrets than using public cloud services run by US businesses, but not by as much as you might want to believe. And it is not just Windows, of course: Lotus was called out on the same practice a long time ago, and one may wonder whether the other software vendors avoided doing it, or simply avoided being caught.

Given the discussions among three-letter agencies about wanting that level of access to any software and service originating in the United States, and given the evident lack of public disclosure in this area, a rather large question mark remains. So on-site IT is not necessarily the solution either, unless it is done to certain standards. In all honesty, most installations probably do not meet those at the moment. And the cost of doing it properly may be excessive for your situation.

So it’s not as simple and not a black and white decision between “all on-site self-run” and “all in public cloud by US business”. There is a whole range of options in between that provide different advantages, disadvantages, costs and risks.

Weighing the risks

So whatever you do: There is always a risk analysis involved.

All businesses take risks based on educated guesses and sometimes even instinct. And they need to weigh cost against benefit. The perfect solution is rarely obtained, typically because it is excessively costly, so often businesses stick with “what works.” And their IT is no different in that regard.

It is a valid decision to say you are not concerned about business secrets leaking, or to consider the likely damage smaller than the risk of running poorly secured IT under your own control, whether directly or through a third party, when the additional cost of running that kind of installation well does not seem justified by what you gain. So you go to a more trustworthy local provider that runs your installation on Open Source and Open Standards. Or you use the services of a large US public cloud vendor. It's your call to make.

But I would argue this call should always be made consciously, in full knowledge of all risks and implications. And the truth is that in too many cases people did not take this into account; it was more convenient to ignore it and dismiss it as unproven speculation. Only that it is only speculation as long as it has not been proven. So virtually every business right now should be re-evaluating its IT strategy to see what risks and benefits are associated with its current strategy, and whether another strategy might provide a more adequate approach.

And when that evaluation is done, I would suggest looking at the Total Cost of Operations (TCO). But not in an overly simplistic way, because most often the calculation is skewed in favour of proprietary lock-in. So always make sure to factor in the cost of decommissioning the solution you are about to introduce. And TCO is not everything.

IT is not just a cost; there is a benefit. All too often two alternatives are compared purely on the grounds of their cost. So more often than not the slightly cheaper solution will be chosen despite offering dramatically fewer benefits and a poor strategic outlook. And a year later you find out that it actually was not cheaper at all, because of hidden costs – and that you would have needed the benefits of the other solution. And that you are in a strategic dead-end.
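As a toy illustration of how exit costs can flip such a comparison, here is a sketch with purely hypothetical numbers – none of them taken from any real offering:

```python
# Hypothetical figures, purely for illustration: a "cheaper" locked-in
# solution versus an open one, once decommissioning is factored in.
def tco(licences: int, operations_per_year: int, years: int, exit_cost: int) -> int:
    # Total Cost of Operations over the full life cycle,
    # including the often-forgotten cost of decommissioning.
    return licences + operations_per_year * years + exit_cost

# Proprietary lock-in: low visible running cost, expensive migration out.
proprietary = tco(licences=20_000, operations_per_year=10_000, years=5, exit_cost=60_000)

# Open Source / Open Standards: higher visible service cost, cheap exit.
open_solution = tco(licences=0, operations_per_year=14_000, years=5, exit_cost=5_000)

print(proprietary, open_solution)  # 130000 75000
```

On sticker price alone the first option looks cheaper per year; over the full life cycle the picture reverses.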

So I would always advocate to also take into account the benefits, both in things you require right now, and in things that you might be able to achieve in the future. For lack of a common terminology, let’s call this the Option Value Generated (OVG) for your business, both in gained productivity, as well as innovative potential. And then there is what I now conveniently name the Customer Confidence Impact (CCI) of both your ability to devise an efficient IT strategy, as well as how you handle their business, data and trust.

After all is said and done, you might still want to run US technology. And you might still want to use a public cloud service. If you do, be transparent about it, so your customers can choose whether or not they agree to that usage by being in business with you. Because some people are likely to take offence due to the social implications and ownership of their own data. In other words: Make sure those who communicate with you and use your services know where that data ends up.

This may not be a problem for your business and your customers. They may consider this entirely acceptable, and that is fine. Being able to make that call is part of what it means to have freedom to try out business approaches and strategies.

But if you do not communicate your usage of this service, be aware of the risks you might be incurring. The potential impact for customer confidence and public image for having misled your business associates and customers is dramatic. Just look at the level of coverage PRISM is getting and you’ll get an idea.

The door is wide open

When reviewing your strategy, keep in mind that you may require some ability to adapt to a changed world in the future. Nothing guarantees that better than Open Source and Open Standards. So if you have ignored this debate throughout the past years, now would be the time to take a look at the strategic reasons for the adoption of Free Software – most importantly transparency, security, control, and the ability to innovate.

While for the past ten years most of the debate has been about how Open Source can provide more efficient IT at a better price, PRISM has demonstrated that the strategic values of Free Software were spot on, and that they provide benefits for the professional use of IT that proprietary software cannot hope to match.

Simultaneously, the past 20 years have seen dramatic growth in professional services in this area. Because benefits are nice in theory, but if they cannot be made use of because the support network is missing, they will not reach the average business.

In fact, in the spirit of full disclosure, I speak from personal experience in this regard. Since 2009 I have dedicated myself to building up such a business: Kolab Systems is an Open Source ISV for the Kolab Groupware Solution. We built this company because Kolab had a typical Open Source problem: excellent concepts and technology, but a gap in professional support and services to allow wide adoption and use of that technology. That has been fixed. We now provide support for on-site installations as well as Kolab as a service through MyKolab.com. We even structured our corporate group to be able to take care of high security requirements in a verifiable way.

But we are of course not the only business that has built its business around combining the advantages of software freedom with professional services for its customers. There are so many businesses working on this that it would be impossible to list them all. And they provide services for customers of all sizes – up to the very largest businesses and governments of this world.

So the concerns are real, as are the advantages. And there is a plethora of professional services at your disposal to make use of the advantages and address the concerns.

The only question is whether you will make use of them.
