Email recipient input widget

by alec in Kolabian at 16:44, Thursday, 20 April

It has already been announced that we are working on a new theme for Roundcube. It is planned for version 1.4 and is in a very early state. However, we have already started implementing some neat features that are nice to have, especially when we’re talking about mobile device support. Here I will show you one […]

Import tasks from .ics file

by alec in Kolabian at 12:37, Thursday, 06 April

I recently needed to import a set of tasks from an .ics file, and I was surprised that we can only import calendar events, but not tasks. So, after some quick copy-pasting we have the new feature. Note that this should be improved in the future to export/import Kolab tag assignments.

Release of Kube 0.1.0

by cmollekopf in Finding New Ways… at 12:50, Friday, 03 March

It’s finally done! Kube 0.1.0 is out the door.

First off, this really is a tech preview and not meant for production use.

However, this marks a very important step for us, as it lifts us out of a rather long stretch of doing the groundwork to get regular development up and running. With that out of the way we can now move in a steadier fashion, milestone by milestone.

That said, it’s also the perfect time to get involved!
We’re planning our milestones on phabricator, at least the ones within reach, so that’s the place to follow development along and where you can contribute, be it with ideas, feedback, packaging, builds on new platforms or, last but not least, code.

So what is there so far?

You can set up an IMAP account, you can read your mail (even encrypted), you can move messages around or delete them, and you can even write some mails.

[Screenshot: the Kube main view]

BUT there are of course a lot of missing bits:

  • GMail support is not great (it needs some extra treatment because GMail IMAP doesn’t really behave like IMAP), so you’ll see some duplicated messages.
  • We don’t offer an upgrade path between versions yet. You’ll have to nuke your local cache from time to time and resync.
  • User feedback in the UI is limited.
  • A lot of commonly expected functions don’t exist yet.
  • ….

As you see… tech preview =)

What’s next?

We’ll focus on getting a solid mail client together first, so that’s what the next few milestones are all about.

The next milestone will focus on getting an addressbook ready, and after that we’ll focus on search for a bit.

I hope we can scope the milestones to roughly one month each, but we’ll have to see how well that works. In any case releases will be done only once the milestone is reached, and if that takes a little longer, so be it.

Packaging

This also marks the point where it starts to make sense to package Kube.
I’ve built some packages on copr already which might help packagers as a start. I’ll also maintain a .spec file in the dist/ subdirectory of the kube and sink repositories (that you are welcome to use).

Please note that the codebase is not yet prepared for translations, so please hold off on any translation efforts (of course patches to get translation going are very welcome).

In order to release Kube a couple of other dependencies are released with it (see also their separate release announcements):

  • sink-0.1.0: Being the heart of Kube, it will also see regular releases in the near future.
  • kimap2-0.1.0: The brushed up imap library that we use in sink.
  • kasync-0.1.0: Heavily used in sink for writing asynchronous code.

Tarballs


Release of KAsync 0.1.0

by cmollekopf in Finding New Ways… at 12:32, Friday, 03 March

I’m pleased to announce KAsync 0.1.0.

KAsync is a library to write composable asynchronous code using lambda-based continuations.

In a nutshell:

Instead of:

class Test : public QObject {
    Q_OBJECT
public:
    void start() {
        step1();
    }

signals:
    void complete();

private:
    void step1() {
        // ... create and start the job object for step one (job1) ...
        QObject::connect(job1, &Step1::result, this, &Test::step2);
    }

    void step2() {
        // ... create and start the job object for step two (job2) ...
        QObject::connect(job2, &Step2::result, this, &Test::done);
    }

    void done() {
        emit complete();
    }
};

you write:

KAsync::Job<void> step1() {
    return KAsync::start([] {
        //execute step one
    });
}

KAsync::Job<void> step2() {
    return KAsync::start([] {
        //execute step two
    });
}

KAsync::Job<void> test() {
    return step1().then(step2());
}
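
For completeness: composing test() only describes the work; nothing runs until the job is executed. A minimal usage sketch, assuming the exec()-returns-a-future pattern (verify the exact Future API against your KAsync version):

auto job = test();          // composition only, nothing has run yet
auto future = job.exec();   // start the asynchronous execution
future.waitForFinished();   // blocking wait; fine for a test or a simple tool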

The first approach is the typical “job” object (e.g. KJob), using the object for encapsulation but otherwise just chaining various slots together.

This is however very verbose (because what typically would be a function now has to be a class), resulting in huge functional components implemented in a single class, which is really the same as having a huge function.

The problem gets even worse with asynchronous for-loops and other constructs, because at that point member variables have to be used as the “stack” of your “function”, and the chaining of slots resembles a function with lots and lots of goto statements (it becomes really hard to decipher what’s going on).

KAsync allows you to write such code in a much more compact fashion and also brings the necessary tools to write things like asynchronous for loops.

There’s a multitude of benefits with this:

  • The individual steps become composable functions again (that can also have input and output). Just like with regular functions.
  • The full assembly of steps (the test() function), becomes composable as well. Just like with regular functions.
  • Your function’s stack doesn’t leak to class member variables.

Additional features:

  • The job execution handles error propagation. Errors just bubble up through all error handlers unless a job reconciles the error (in which case the normal execution continues).
  • Each job has a “context” that can be used to manage the lifetime of objects that need to be available for the whole duration of the execution. (A rough sketch of both features follows right after this list.)
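
To make these two features concrete, here is a minimal, hedged sketch. The Connection type, the openConnection/fetchData helpers, and the onError/addToContext method names are assumptions for illustration, not verified KAsync API; check the KAsync headers for the real names.

// Sketch only: the helper and method names below are illustrative assumptions.
// #include <Async/Async>   // KAsync header, include path assumed
#include <QSharedPointer>
#include <QDebug>

struct Connection { /* hypothetical connection handle */ };

KAsync::Job<void> openConnection(const QSharedPointer<Connection> &conn)
{
    return KAsync::start([conn] { /* open the connection */ });
}

KAsync::Job<void> fetchData(const QSharedPointer<Connection> &conn)
{
    return KAsync::start([conn] { /* fetch using the connection */ });
}

KAsync::Job<void> fetchJob()
{
    auto connection = QSharedPointer<Connection>::create();
    return openConnection(connection)
        .then(fetchData(connection))
        // Context: keeps 'connection' alive for the whole execution.
        .addToContext(connection)
        // Error propagation: a failure in either step bubbles up to this
        // handler unless an earlier handler reconciles it.
        .onError([](const KAsync::Error &) {
            qWarning() << "fetching failed";
        });
}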

Please note that for the time being KAsync doesn’t offer any API/ABI guarantees.
The implementation relies heavily on templates and is thus mostly implemented in the headers, so most changes will require recompilation of your source code.

If you’d like to help out with KAsync or have feedback or comments, please use the comment section, or the phabricator page.

If you’d like to see KAsync in action, please see Sink.

Thanks go to Daniel Vrátil who did most of the initial implementation.

Tarball


Release of KIMAP2 0.1.0

by cmollekopf in Finding New Ways… at 12:10, Friday, 03 March

I’m pleased to announce the release of KIMAP2 0.1.0.

KIMAP2 is a KJob based IMAP protocol implementation.

KIMAP2 received a variety of improvements since its KIMAP days, among others:

  • A vastly reduced dependency chain as we no longer depend on KIO. KIMAP2 depends only on Qt, KF5CoreAddons, KF5Codecs, KF5Mime and the cyrus SASL library.
  • A completely overhauled parser. The old parser performed poorly in situations where data arrived at the socket faster than we could process it, resulting in a lot of time wasted memcpying buffers around and unbounded memory usage. To fix this, a dedicated thread was used for every socket, resulting in a lot of additional complexity and threading issues. The new parser uses a ringbuffer with fixed memory usage and doesn’t require any extra threads; it simply lets the socket fill up if the application can’t process the data fast enough, which eventually keeps the server from sending more (see the sketch after this list). It also minimizes memcopies and other parsing overhead, so the cost of parsing is roughly that of reading the data once.
  • Minor API improvements for the fetch and list jobs.
  • Various fixes for the login process.
  • No more KI18N dependencies. KIMAP2 is a protocol implementation and doesn’t require translations.
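
The ringbuffer idea boils down to backpressure with a fixed amount of memory: the parser consumes at its own pace, and once the buffer is full the socket simply stops being read. A minimal illustrative sketch (not the KIMAP2 code; the names are made up):

#include <array>
#include <cstddef>

class RingBuffer {
public:
    // Accepts at most as many bytes as fit; the caller stops reading from
    // the socket when not everything was accepted, so the kernel buffers
    // fill up and the server eventually backs off.
    std::size_t tryWrite(const char *data, std::size_t len) {
        std::size_t written = 0;
        while (written < len && size() < Capacity) {
            buffer[writePos % Capacity] = data[written++];
            ++writePos;
        }
        return written;
    }

    // The parser consumes at its own pace, independently of the socket.
    std::size_t read(char *out, std::size_t len) {
        std::size_t consumed = 0;
        while (consumed < len && readPos < writePos) {
            out[consumed++] = buffer[readPos % Capacity];
            ++readPos;
        }
        return consumed;
    }

    std::size_t size() const { return writePos - readPos; }

private:
    static constexpr std::size_t Capacity = 64 * 1024; // fixed memory usage
    std::array<char, Capacity> buffer{};
    std::size_t writePos = 0;
    std::size_t readPos = 0;
};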

For more information please refer to the README.

While KIMAP2 is the successor of KIMAP, KIMAP will stick around for the time being for kdepim.
KIMAP and KIMAP2 are completely co-installable: library names, namespaces, environment variables etc. have all been adjusted accordingly.

KIMAP2 is actively used in sink and is fully functional, but we do not yet guarantee API or ABI stability (that will come with the 1.0 release).
If you need API stability rather sooner than later, please do get in touch with me.

If you’d like to help out with KIMAP2 or have feedback or comments, please use the comment section, the kde-pim mailing list or the phabricator page.

Tarball


Kube at FOSDEM 2017

by cmollekopf in Finding New Ways… at 01:27, Friday, 03 February

I haven’t talked about it much, but the last few months we’ve been busy working on Kube and we’re slowly closing in on a first tech preview.

I’ll be at FOSDEM over this weekend, and I’ll give a talk on Sunday, 16:20 in room K.4.401, so if you want to hear more be there!

…or just find me somewhere around the Kolab or KDE booth.

See you at FOSDEM!


Here comes a quick overview on recent updates to Kolab 16 packages. Please note: I am only using public information. I have no background knowledge about the releases. I did not report on Updates for Kolab 16 while some courageous people (dhoffend, airhardt/sicherha, hede, kanarip, and probably more) were making Kolab 16 ready for Debian. […]

There is documentation about how to import Contacts into the Roundcube address books from CSV files: https://docs.roundcube.net/doc/help/1.1/en_US/addressbook/importexport.html Unfortunately, that documentation does not come with a description of the columns supported. I had a look at the source: https://github.com/roundcube/roundcubemail/blob/master/program/steps/addressbook/import.inc https://github.com/roundcube/roundcubemail/blob/master/program/lib/Roundcube/rcube_csv2vcard.php From that you can see that the import from GMail and from Outlook is supported via […]

Kolab 16 for Fedora 25

by Timotheus in Kolab – Homepage of Timotheus Pokorra at 08:39, Saturday, 31 December

This is work in progress, but I just wanted to share the news: I have Kolab 16 packages for Fedora 25 (with PHP7), built on copr! The support for Fedora 24 is broken in OBS, ticket: https://git.kolab.org/T1564. Fedora 25 was added after that, but it is broken as well, see for example https://obs.kolabsys.com/package/show/Kolab:Winterfell/libcalendaring I was […]

Sieve raw editor

by alec in Kolabian at 18:44, Tuesday, 27 December

Thanks to the great CodeMirror editor and its built-in syntax highlighter we now have a nice editor widget for Sieve scripts. So, now you can edit your scripts not only using the user-friendly interface, but also by directly changing the script code, which is especially useful if you need to use a Sieve feature not supported […]

Sending contacts as vCards

by alec in Kolabian at 19:12, Friday, 16 December

There’s a lot about vCards in Roundcube. No surprise, as it is a simple and established open standard. Contacts import and export are based on it, as is Roundcube’s internal format for contacts. There’s also the vcard_attachments plugin that adds some more functionality around vCard use. As the plugin name suggests, it’s about handling of vCards […]

I recently read about https://dply.co and wanted to give it a go. The idea is that you can use a machine for free for 2 hours, and you can extend it by paying for additional time. You can use a script provided by someone else that configures the machine. This can be ideal for a […]

vCard in QR code

by alec in Kolabian at 18:12, Sunday, 11 December

Another small feature that will be available in Roundcube 1.3: an option to display a QR code image that contains (limited) contact information. So, you can quickly share a contact with any mobile device. It looks like there are a few standards for this, but I chose the well-known vCard format as the data encapsulated by the […]

Contact identicons

by alec in Kolabian at 18:48, Friday, 02 December

Identicons (avatars, sigils) are meant to give a recognizable image that identifies a specific entity (e.g. a contact person), like photos do. So, when a contact photo is not available we can still have a nice image that is unique for every contact (email address). I just committed the identicon plugin for Roundcube that implements this. […]

Phabricator Packages for EPEL, Fedora

by kanarip in kanarip at 14:44, Thursday, 01 December

Using Pagure and COPR, Tim Flink and I have settled on using common infrastructure to further the inclusion of Phabricator into the Fedora repositories (and EPEL). I’m hoping this will bear fruit and get more people on board. Our Pagure repository is at Phab Phour Phedora — a mix of Fabulous Four, Phabricator for… Continue reading Phabricator Packages for EPEL, Fedora

Kolab 16.1 for Jessie: Way Ahead of Schedule

by kanarip in kanarip at 15:54, Friday, 18 November

I reported before that I was a little ahead of schedule in making Kolab 16.1 available for Debian Jessie, and as it turns out I was wrong — I’m way ahead. In fact, installing and configuring Kolab 16.1 on Jessie passes without any major headaches. I think the one little tidbit I have left relates… Continue reading Kolab 16.1 for Jessie: Way Ahead of Schedule

A quick overview on recent updates to Kolab 16 packages. Please note: I am only using public information. I have no background knowledge about the releases. The big news is: there are now packages of Kolab 16 for Debian Jessie! My first tests of setup-kolab still show some quirks. Please test and report bugs on Phabricator, […]

Collaborative editing with Collabora Online in Kolab

by alec in Kolabian at 11:49, Wednesday, 16 November

About a year ago I blogged about document editing features in Kolab. This year we went a big step forward. Thanks to Collabora Online you can now use LibreOffice via the Kolab web client. Every part of this system is Free Software, which is normal in the Kolab world. Collabora Online (CODE) implements the WOPI REST API. WOPI is […]

A quick overview on yesterday’s updates to Kolab 16 packages. Please note: I am only using public information. I have no background knowledge about the releases. The package cyrus-imapd has been upgraded, with these changes: https://obs.kolabsys.com/request/show/1819 and  https://obs.kolabsys.com/request/show/1821 and https://obs.kolabsys.com/request/show/1823 Here is the comment by Jeroen: Decode the subject line before re-encoding it in automated responses. The package pykolab has been […]

Kolab 16.1 for Jessie: Ahead of Schedule

by kanarip in kanarip at 10:24, Tuesday, 15 November

I told you earlier this week I would be working on providing Debian packages for Kolab 16 next week, but an appointment I had this week was cancelled, and I get to get started just ever so slightly earlier than expected. This lengthens the window of time I have to deal with all the necessities,… Continue reading Kolab 16.1 for Jessie: Ahead of Schedule

Kolab 16.1: What’s on the Horizon

by kanarip in kanarip at 13:59, Monday, 14 November

This is an announcement I just know you’re going to want to read: Kolab 16.1 will become available for the following platforms, with more details to come soon; Red Hat Enterprise Linux 7 and CentOS 7, Debian Jessie (8) This week, I’ll be working to make Kolab:16 for Red Hat Enterprise Linux 7 (Maipo) the… Continue reading Kolab 16.1: What’s on the Horizon

Updates to Kolab 16: Cyrus IMAP

by Timotheus in Kolab – Homepage of Timotheus Pokorra at 11:40, Monday, 07 November

A quick overview on Saturday’s updates to Kolab 16 packages. Please note: I am only using public information. I have no background knowledge about the releases. The package cyrus-imapd has been upgraded to version 2.5.10, with these changes: https://obs.kolabsys.com/request/show/1815 and https://obs.kolabsys.com/request/show/1817 Here are the comments by Jeroen: Preserve the folder uniqueid on rename. Transfer a […]

Updates to Kolab 16: Erlang

by Timotheus in Kolab – Homepage of Timotheus Pokorra at 08:48, Saturday, 05 November

A quick overview on yesterday’s updates to Kolab 16 packages. Please note: I am only using public information. I have no background knowledge about the releases. The package erlang has been updated to version 18.3.4.4: https://obs.kolabsys.com/request/show/1806 and https://obs.kolabsys.com/request/show/1808 The following packages have been rebuilt because of this: guam and all the other packages depending on […]

This Election Is Hillaryous

by kanarip in kanarip at 20:45, Friday, 04 November

Outside of the United States, we consume quite a lot of the news about the misery you seem to be suffering under. I don’t want to pretend I have all the right connotations in all the right places, but I have to admit I think we usually do consume these revelations with a thought about… Continue reading This Election Is Hillaryous

Heads-up on NSS 3.27, Guam

by kanarip in kanarip at 16:36, Friday, 04 November

Many distributions, among which Fedora in 23 & 24, and Arch Linux, have recently shipped NSS 3.27, sometimes packaged as 3.27.0, or even 3.27.1. This release may just have triggered some confusion about disabling, enabling, and defaulting to or not, the NSS implementation of TLS version 1.3 (currently in draft). Fun! We’ve received reports from… Continue reading Heads-up on NSS 3.27, Guam

Updates to Kolab 16

by Timotheus in Kolab – Homepage of Timotheus Pokorra at 09:44, Friday, 14 October

A quick overview on yesterday’s updates to Kolab 16 packages. Please note: I am only using public information. I have no background knowledge about the releases. The package kolab has been rebuilt, with these changes: https://obs.kolabsys.com/request/show/1798 The main difference is that freshclam is now a required dependency when installing the package kolab-mta with a default […]

Improvement in (free-busy) availability finder

by alec in Kolabian at 20:05, Wednesday, 12 October

When planning a meeting with other people in your organization you use the Availability Finder widget. Its granularity from the beginning was one hour, no less, no more. On the other hand, in the Calendar view you could configure that granularity, i.e. the number of slots in an hour. My changeset that awaits a review will fix this […]

“Mark all as read” option

by alec in Kolabian at 15:28, Monday, 10 October

It was already possible to mark selected messages as read, but it required a few clicks. Looks like many people prefer single-click action for this particular case. Probably because other mail clients have it or maybe it’s just a very common action. So, I implemented it today as a core feature. You can find “Mark […]

Kube: Accounts

by cmollekopf in Finding New Ways… at 15:16, Monday, 10 October

Kube is a next generation communication and collaboration client, built with QtQuick on top of a high performance, low resource usage core called Sink.
It provides online and offline access to all your mail, contacts, calendars, notes, todos etc.
Kube has a strong focus on usability, and the team works with designers and UX experts from the ground up to build a product that is not only visually appealing but also a joy to use.

To learn more about Kube, please see here.

Kube’s Account System

Data ownership

Kube is a network application at its core. That doesn’t mean you can’t use it without a network connection (even permanently), but you’d severely limit its capabilities given that it’s meant to be a communication and collaboration tool.

Since network communication typically happens over a variety of services where you have a personal account, an account provides a good starting point for our domain model. If you have a system with large amounts of data that are constantly changing it’s vital to have a clear understanding of data ownership within the system. In Kube, this is always an account.

By putting the account front and center we ensure that we don’t have any data that just belongs to the system as a whole. This is important because it becomes very complex to work with data that “belongs to everyone” once we try to synchronize that data with various backends. If we modify a dataset should that replicate to all copies of it? What if one backend already deleted that record? Would that mean we also have to remove it from the other services?
And what if we have a second client that has a different set of accounts connected?
If we ensure that we always only have a single owner, we can avoid all those issues and build a more reliable and predictable system.

The various views can of course still correlate data across accounts where useful, e.g. to show a single person entry instead of one contact per addressbook, but they then also have to make sure that it is clear what happens if you go and modify e.g. the address of that person (do we modify all copies in all accounts? What happens if one copy goes out of sync again because you used the webinterface?).

Last but not least, we ensure this way that we have a clear path to eventually synchronize all data to a backend, even if we can’t do so immediately, e.g. because the backend in use does not support that data type yet.

The only bit of data that is stored outside of the account is data specific to the device in use, such as configuration data for the application itself. Such data isn’t hard to recreate, is easy to migrate and back up, and is very little data in the first place.

Account backends

Most services provide you with a variety of data for an individual account. Whether you use Kolabnow, Google or a set of local Maildirs and iCal files, you typically have access to contacts, mails, events, todos and many more. Fortunately most services provide access to most data through open protocols, but unfortunately we often end up in a situation where we need a variety of protocols to get to all the data.

Within Sink we call each backend a “Resource”. A resource typically has a process to synchronize data to an offline cache, and then makes that data accessible through a standardized interface. This ensures that even if one resource synchronizes email over IMAP and another just gathers it from a local Maildir, the data is accessible to the application through the same interface.

Because various accounts use various combinations of protocols, accounts can mix and match resources to provide access to all the data they have. A Kolab account, for instance, could combine an IMAP resource for email, a CALDAV resource for calendars and a CARDDAV resource for contacts, plus additional resources for instant messaging, notes, … you get the idea. Alternatively we could decide to get to all data over JMAP (a potential IMAP successor with support for more datatypes than just email) and thus implement a JMAP resource instead (which again could be reused by other accounts with the same requirements).
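
The shape of that arrangement can be sketched roughly as follows. This is illustrative only and not the actual Sink API: the point is simply that every resource synchronizes into a local cache and exposes it through one standardized interface, and an account is little more than a named bundle of such resources.

#include <memory>
#include <string>
#include <vector>

struct Mail { std::string subject; std::string from; };

// Illustrative stand-in for a resource: synchronize into an offline cache,
// then answer queries through a uniform interface.
class Resource {
public:
    virtual ~Resource() = default;
    virtual void synchronize() = 0;                   // fill the offline cache
    virtual std::vector<Mail> queryMails() const = 0; // uniform read access
};

class ImapResource : public Resource {
public:
    void synchronize() override { /* fetch via IMAP into the cache */ }
    std::vector<Mail> queryMails() const override { return {}; }
};

class MaildirResource : public Resource {
public:
    void synchronize() override { /* scan a local Maildir into the cache */ }
    std::vector<Mail> queryMails() const override { return {}; }
};

// An account mixes and matches whatever resources its service needs.
struct Account {
    std::string name;
    std::vector<std::unique_ptr<Resource>> resources;
};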

[Diagram: an account assembled from multiple resources]

Specialized accounts

While accounts within Sink are mostly an assembly of some resources with some extra configuration, on the Kube side a QML plugin is used (we’re using KPackage for that) to define the configuration UI for the account. Because accounts are ideally just an assembly of a couple of existing Sink resources with a QML file to define the configuration UI, it becomes very cheap to create account plugins specific to a service. So while a generic IMAP account settings page could look like this:

[Screenshot: a generic IMAP account settings page]

… a Kolabnow setup page could look like this (and this already includes the setup of all resources including IMAP, CALDAV, CARDDAV, etc.):

[Screenshot: a Kolabnow account setup page]

Because we can build everything we know about the service directly into that UI, the user is optimally supported, and ideally all that is left to enter are the credentials.

Conclusion

In the end the aim of this setup is that a user first starting Kube selects the service(s) he uses, enters his credentials and he’s good to go.
In a corporate setup, login and service can of course be preconfigured, so all that is left is whatever is used for authentication (such as a password).

By ensuring all data lives under the account we ensure no data ends up in limbo with unclear ownership, so all your devices have the same dataset available, and connecting a new device is a matter of entering credentials.

This also helps simplify backup, migration and various deployment scenarios.


I recently did some investigations into Roundcube. One task was to disable the files plugin for certain users. The other task was to disable the option to export the full addressbook. I had a look at the kolab_files plugin. In https://git.kolab.org/diffusion/RPK/browse/master/plugins/kolab_files/kolab_files.php you find these lines: // the files module can be enabled/disabled by the kolab_auth plugin if ($this->rc->config->get('kolab_files_disabled') || […]

Updates to Kolab 16

by Timotheus in Kolab – Homepage of Timotheus Pokorra at 11:13, Tuesday, 04 October

A quick overview on last week’s updates to Kolab 16 packages. Please note: I am only using public information. I have no background knowledge about the releases. cyrus-imapd: Upgrade from 2.5.8.12 to 2.5.9-31-g959d458 Comment by Jeroen: Check in 31 revisions ahead of upstream 2.5.9 release details: https://obs.kolabsys.com/request/show/1791 pykolab: Upgrade from 0.8.3 to 0.8.4 details: https://obs.kolabsys.com/request/show/1793 for a fresh run of […]

There has been a new release of Roundcube: https://roundcube.net/news/2016/09/28/updates-1.2.2-and-1.1.6-published I have updated the OBS packages with this version, and you can run yum update or apt-get update && apt-get upgrade depending on your Linux OS. For more details about how to create such an update, or how to work around the issue when Roundcube has been […]

Re: 503s on git.kolab.org

by kanarip in kanarip at 13:41, Tuesday, 27 September

I’ve told you yesterday, that “503 Service Unavailable” errors on git.kolab.org were caused by the system running out of memory. I’ve meanwhile taken the following actions; I’ve nailed down the particular task at hand causing the OOM to 2999348, It is related to T1351, I suspect the so-called “Transaction Publish Worker” task executed for it… Continue reading Re: 503s on git.kolab.org

503s on git.kolab.org

by kanarip in kanarip at 20:04, Monday, 26 September

We regularly experience “503: Service Unavailable” on git.kolab.org as of late, even though the system is more up to date, and both fatter and leaner. Fatter, because more resources are allocated to it. More CPU, more memory, larger APC, opcache, and database services got separated out from the system, and put on to a MariaDB… Continue reading 503s on git.kolab.org

Service Window: git.kolab.org migration

by kanarip in kanarip at 14:29, Tuesday, 20 September

It’s been a while since we last updated Phabricator, and it’s been a while since the overall system that runs git.kolab.org was paid attention to. Suffice it to say that it’s time to upgrade the system. We’ve meanwhile packaged Phabricator and its dependencies, stripped off some superfluous external dependencies that it shipped, and fixed some… Continue reading Service Window: git.kolab.org migration

the case of the one byte packet

by Aaron Seigo in aseigo at 11:27, Tuesday, 20 September


Yesterday I pushed a change set for review that fixes an odd corner case for the Guam IMAP proxy/filter tool that was uncovered thanks to the Kolab Now Beta program which allows people to try out new exciting things before we inflict them upon the world at large. So first let me thank those who are using Kolab Now Beta and giving us great and valuable feedback before turning to the details of this neat little bug.

So the report was that IMAP proxying was breaking with iOS devices. But, and here's the intriguing bit, only when connecting using implicit TLS; connecting to the IMAP server normally and upgrading with STARTTLS worked fine. What gives?

In IMAP, commands sent to the server are expected to start with a string of characters which becomes the identifying tag for that transaction, usually taking the form of "TAG COMMAND ARGS". The tag can be whatever the client wants, though many clients just use a number that increases monotonically (1, 2, 3, 4, ...). The server will use that tag to prefix the success/failure response in the case of multi-line responses, or tag the response itself in the case of simpler one-line responses. This allows the client to match up the server response with the request and know when the server is indeed finished spewing bytes at it.
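
For example, a made-up exchange might look like this, with the client’s tags 4 and 5 echoed back by the server (a multi-line response followed by a single-line one):

4 CAPABILITY
* CAPABILITY IMAP4rev1 STARTTLS AUTH=PLAIN
4 OK CAPABILITY completed
5 NOOP
5 OK NOOP completed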

We looked at the network traffic and in that specific case iOS devices fragment the IMAP client call into one packet with the tag and one packet with the command. No other client does this, and even iOS devices do not do this when using the STARTTLS upgrade mechanism. As a small performance hack, I had allowed the assumption that the "TAG COMMAND" part of client messages would never be fragmented on the network. This prevented the need for buffering and other bookkeeping in the application within a specific critical code path. It was an assumption that was indeed not guaranteed, but the world appeared to be friendly and cooperating. After all, what application would send "4" in one network packet, and then "XLIST" in a completely separate one? Would it (the application, the socket implementation, ..) not compose this nicely into one little buffer and send it all at once? If so, what network topology would ever fragment a tiny packet of a few dozen bytes into one byte packets? Seemed safe enough, what could go wrong .. oh, those horrible words.

So thanks to one client in one particular configuration being particularly silly, if technically still within its rights, I had to introduce a bit of buffering when and where necessary. I took the opportunity to do a little performance enhancement that was on my TODO while I was mucking about in there: tag/command parsing, which is necessary and useful for rules to determine whether they care about the current state of the connection, is now both centralized and cached. So instead of happening twice for each incoming fragment of a command (in the common case), it now happens at most once per client command, and that will hold no matter how many rules are added to a ruleset.
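
The gist of the buffering fix, sketched here in C++ for illustration (Guam itself is written in Erlang, so this is not the actual code): accumulate fragments until both the tag and the command have arrived, and parse the pair exactly once.

#include <optional>
#include <string>
#include <utility>

// Returns the (tag, command) pair once both have fully arrived in the buffer.
// Resetting the buffer per command and keeping the remaining arguments is
// omitted for brevity.
std::optional<std::pair<std::string, std::string>>
parseTagAndCommand(const std::string &buffered)
{
    const auto tagEnd = buffered.find(' ');
    if (tagEnd == std::string::npos)
        return std::nullopt;                      // e.g. only "4" so far
    const auto commandEnd = buffered.find_first_of(" \r\n", tagEnd + 1);
    if (commandEnd == std::string::npos)
        return std::nullopt;                      // tag seen, command incomplete
    return std::make_pair(buffered.substr(0, tagEnd),
                          buffered.substr(tagEnd + 1, commandEnd - tagEnd - 1));
}

// Usage: keep appending network fragments until the parse succeeds, then
// cache the result so rules don't re-parse it for later fragments:
//   pending += fragment;               // "4", then "XLIST\r\n", ...
//   if (auto parsed = parseTagAndCommand(pending)) { /* cache it */ }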

So, one bug squashed, and one little performance enhancement, thanks to user feedback and the Kolab Now Beta program. As soon as the patch gets through code review, it should get pushed through packaging and deployed on Kolab Now Beta. Huzzah.

Widescreen layout aka three column view

by alec in Kolabian at 19:11, Saturday, 17 September

Most modern mail clients already use a three-column layout. For Roundcube there is a plugin that implements this feature, but it’s not actively developed and its quality is low. It’s about time to implement this as a core feature of Roundcube. So, my idea was to get rid of the preview pane setting (and switching) […]

I noticed that several pages are broken on https://docs.kolab.org a couple of examples: https://docs.kolab.org/architecture-and-design/ldap-intro.html https://docs.kolab.org/client-configuration/outlook.html I once did file a patch to fix the client configuration page (https://git.kolab.org/rDdc928121f87e22602e64669bd640fd63482751d5), but I guess that was not really solving the problem. I have now setup a Fedora 24 machine, and installed the following packages: dnf install python-setuptools make […]

Updates to Kolab 16

by Timotheus in Kolab – Homepage of Timotheus Pokorra at 17:21, Friday, 09 September

A quick overview on today’s updates to Kolab 16 packages. Please note: I am only using public information. I have no background knowledge about the releases. kolab-utils: Upgrade from 3.1.2 to 3.1.4 Notes: Format checking capabilities in kolab-formatupgrade Added --shared option to format checker details: https://obs.kolabsys.com/request/show/1775 libcalendaring: Upgrade from 4.9.2 Beta to 4.9.2 upstream release details: https://obs.kolabsys.com/package/rdiff/Kolab:16/libcalendaring?linkrev=base&rev=2 […]

Creating Kolab users from commandline

by Timotheus in Kolab – Homepage of Timotheus Pokorra at 17:10, Wednesday, 07 September

I posted a suggestion for this solution a while back: https://lists.kolab.org/pipermail/users/2015-September/019990.html Now I needed to do it myself, in a multi-domain scenario: creating many user accounts from a list of names. I used https://cgit.kolab.org/pykolab/tree/pykolab/cli/cmd_add_user.py for inspiration… This is the code I came up with: A sample call looks like this: python createusers.py example.org de_DE "von und zu Berlin" "Heinrich" […]

Apple on Guam: The Dots on The i

by kanarip in kanarip at 16:59, Thursday, 01 September

I’ve said before we had gotten reports from people that voluntarily participate in the Kolab Now Beta Programme, that the iPhone iOS and Mac OS X Mail.app were not really happy using IMAP over implicit SSL (you know, the one on port 993). Here’s a summary of the things that have happened thus far; Supply… Continue reading Apple on Guam: The Dots on The i

Emergency Service Window for Kolab Now

by kanarip in kanarip at 19:09, Tuesday, 30 August

We’re going to need to free up a hypervisor and put its load on other hypervisors, in order to pull out the one hypervisor and have some of its faulty hardware replaced — but there are two problems; The hypervisor to free up has asserted required CPU capabilities most of the eligible targets do not have… Continue reading Emergency Service Window for Kolab Now

New options for compose attachments

by alec in Kolabian at 12:34, Saturday, 30 July

I just added a set of improvements to the mail compose window. The attachments list was extended with the possibility to open and download uploaded files. The new menu also contains an option to rename an already attached file. This makes the attachments list in mail preview and in compose more unified. Previewing and downloading files is already handled […]

Kolab Now Beta Program: An Update

by kanarip in kanarip at 09:42, Wednesday, 27 July

Now that we’ve launched our Kolab Now Beta Program, I get to tell you more about what it is we learn from it. Before we start, the features that are a part of this program are not supported. You’re welcome to provide us with feedback on the Kolab Hub Beta Program forum, but feedback does… Continue reading Kolab Now Beta Program: An Update

Introducing the Kolab Now Beta Program

by kanarip in kanarip at 09:23, Wednesday, 27 July

I’m pleased to be able and allowed to introduce you to the Kolab Now Beta Program, which my colleagues and myself can use to deploy newer versions of software sooner, faster and with less strict guarantees of availability and stability (i.e. the concept of a “service window” does not apply). The first feature to be… Continue reading Introducing the Kolab Now Beta Program

WebP and MathML support

by alec in Kolabian at 09:41, Sunday, 24 July

Some web browsers support these technologies. Recently I added support for them in Roundcube. Here’s a short description of what you can do with these right now and what the limitations are. WebP images This image format is supported by Google Chrome, Opera and Safari. So if you open a message with such images attached you’ll […]

Current status for mailrendering in kube & kmail

by Sandro Knauß in Decrypted mind at 09:15, Monday, 18 July

In my last entry I introduced libotp. But that name had a problem: people thought it was a library for one-time passwords, so we renamed it to libmimetreeparser.

Over the last months I cleaned up and refactored the whole mimetreeparser to turn it into a self-contained library.

Usage

Dependencies

As a general rule we wanted to make sure that we only have dependencies in mimetreeparser where we can easily tell why we need them. We ended up with:

KF5::Libkleo
KF5::Codecs
KF5::I18n
KF5::Mime
  • KF5::Libkleo is the dependency we are not happy with, because it pulls in many widget-related dependencies that we want to avoid. But there is light at the end of the tunnel: we will hopefully switch to GpgME directly in the next weeks. GpgME is planning to have a Qt interface that fulfills our needs for decrypting and verifying mails. The source of that Qt interface is libkleo, which is why the patch will be quite small. At the KDEPIM sprint in Toulouse in spring this year, I already gave the Qt interface a try and made sure that our tests still pass.
  • KF5::Codecs to translate between the different encodings that can occur in a mail.
  • KF5::I18n for translations of error messages. If we want consistent translations of error messages we need to handle them in libmimetreeparser.
  • KF5::Mime because the input mail is a MIME tree.

Rendering in Kube

In Kube we have decided to use QML to render mails for the user, which made it easy to drop all the HTML-rendering-specific parts. We end up just triggering the ObjectTreeParser and creating a model out of the resulting tree. That model is then the input for QML, which loads different code for the different parts of a mail: for plain text it just shows the text, for HTML it loads the part in a WebEngine view.
As a matter of fact, the interface we use is quite new and currently still under development (T2308). There will certainly be changes until we are happy with it, and I will describe it in detail once we are. As a side note, we don’t want separate interfaces for Kube and kdepim; the new interface should be suitable for all clients. To avoid constantly breaking the clients, we keep the current interface, develop the new interface from scratch, and switch once we are happy with it.
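
As a rough sketch of what “create a model out of the resulting tree” can mean (this is illustrative only, not Kube’s actual interface, which as noted is still under development): expose each message part with a type role, and let QML pick a delegate per type.

#include <QAbstractListModel>
#include <QVector>
#include <utility>

struct MessagePart {
    QString type;    // e.g. "plaintext", "html", "encrypted"
    QString content; // the decoded content of the part
};

class PartModel : public QAbstractListModel {
    Q_OBJECT
public:
    enum Roles { TypeRole = Qt::UserRole + 1, ContentRole };

    explicit PartModel(QVector<MessagePart> parts, QObject *parent = nullptr)
        : QAbstractListModel(parent), mParts(std::move(parts)) {}

    int rowCount(const QModelIndex &parent = QModelIndex()) const override {
        return parent.isValid() ? 0 : mParts.size();
    }

    QVariant data(const QModelIndex &index, int role) const override {
        if (!index.isValid() || index.row() >= mParts.size())
            return {};
        const auto &part = mParts.at(index.row());
        if (role == TypeRole)
            return part.type;
        if (role == ContentRole)
            return part.content;
        return {};
    }

    // QML delegates choose their visual representation based on "type".
    QHash<int, QByteArray> roleNames() const override {
        return {{TypeRole, "type"}, {ContentRole, "content"}};
    }

private:
    QVector<MessagePart> mParts;
};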

[Screenshot: mail rendering in Kube]

Rendering in Kmail

As before, we use HTML as the rendering output, but with the rise of libmimetreeparser, KMail also uses the message part tree as input and translates it into HTML. So here too we have a clear separation between the parsing step (handled in libmimetreeparser) and the rendering step (which happens in messageviewer). KMail additionally supports different MIME types like iTip (invitations) and vCard. The problem with these parts is that they need to interact directly with Akonadi to load information. That way we can detect whether an event is already known in Akonadi or whether we already have the vCard saved, which then changes the visible representation of that part. This all works because libmimetreeparser has an interface for adding additional MIME type handlers (called BodyPartFormatters).

Additionally, messageviewer now uses Grantlee for creating the HTML, which is very handy and makes it easy to change the visual presentation of mails just by changing the theme files. That should help a lot if we want to change the look and feel of email presentation, and it also allows us to think about different themes for email presentation. We also thought about the implications of easily changeable themes and concluded that it shouldn’t be too easy to change the theme files, because malicious users could fake good-looking crypto mails. That’s why the theme files are shipped inside resource files.

While I was implementing this, Laurent Montel added JavaScript/jQuery support to the messageviewer. So I sat down and created an example that switches between the alternative parts (the HTML part and the text part) with jQuery. In the end we came to the conclusion that this is not a good idea (D1991), but maybe others will come up with good ideas for where we can use the power of jQuery inside the messageviewer.

[Screenshot: alternative part switcher, variant 1]

[Screenshot: alternative part switcher, variant 2]

HPKP or DANE?

by kanarip in kanarip at 10:53, Thursday, 14 July

Close your eyes and imagine a world in which a user visits a website. This website may be encrypted, and when it is, the SSL certificate used must somehow be validated. Traditionally, SSL certificates are issued by third parties that have their certificate authorities included in browsers and operating system’s bundles. This way, whichever SSL… Continue reading HPKP or DANE?

Kolab’s Authentication Cache

by kanarip in kanarip at 14:44, Friday, 08 July

Soon after the release of Kolab 3 — the original, 2012 edition– the SASL Authentication daemon that Kolab provides to facilitate distributing LDAP hierarchies over multiple root DNs or even separate LDAP servers has assisted the speed of authentication with a cache. Not the kind of cache you may be thinking about, though. The information… Continue reading Kolab’s Authentication Cache

rcode0: the absolute best of the bunch

by kanarip in kanarip at 19:33, Friday, 01 July

We’ve run several DNS zones for Kolab Now, using an in-house hidden nameserver topology — in order to protect the keys used to sign DNSSEC zones, both the hidden topology as well as keeping as much of it in-house as possible, have been absolute musts. More recently, we received a certain type of threat, causing… Continue reading rcode0: the absolute best of the bunch

Kolab:Development is #bristory

by kanarip in kanarip at 17:07, Monday, 27 June

Just under 5 months ago, I announced the inception of Kolab:Winterfell. Now seems as appropriate a time as any to not leave Kolab:Development where it is and walk away, so I’ve removed it from our build system. I’ve provided a superfluous and biased graphic with this blog post that Pretty Accurately reflects my thoughts on… Continue reading Kolab:Development is #bristory

Last weekend the Kolab Summit 2.0 took place in Nürnberg: see https://summit.kolab.org/ and https://kolab.org/group-blog/2016/06/02/kolab-summit-2-0-putting-the-freedom-back-into-the-cloud/ Unfortunately I was not able to go there myself. Fortunately, all the sessions have been published on Youtube! So here are the links, and some minutes for some of the sessions. Obviously my summary is not complete, so you had better watch the videos […]

Kube going cross-platform in Randa

by cmollekopf in Finding New Ways… at 20:03, Saturday, 18 June

I’m on my way back home from the cross-platform sprint in Randa. The four days of hacking, discussing and hiking that I spent there, allowed me to get a much clearer picture of how the cross-platform story for Kube can work, and what effort we will require to get there.

We intend to ship Kube eventually on not only Linux, Windows and Mac, but also on mobile platforms like Android, so it is vital that we figure out blockers as early as possible, and keep the whole stack portable.

Flatpak

The first experiment was a new distribution mechanism instead of a full new platform. Fortunately Aleix Pol had already learned the ropes and quickly whipped up a Flatpak definition file that resulted in a self-contained Kube distribution that actually worked.

Given that we already use homegrown docker containers to achieve similar results, we will likely switch to Flatpak to build git snapshots for early adopters and people participating in the development process (such as designers).

Android

Andreas Cord-Landwehr prepared a Docker image that brings the complete cross-compiler toolchain with it, which makes for a much smoother setup process than doing everything manually. I mostly just followed the KDE Android documentation.

After resolving some initial issues with the Qt installer together with Andreas (Qt has to be installed using the GUI installer even from the console, using some arcane configuration script that changes variables with every release… WTF Qt), I quickly got set up to compile the first dependencies.

Thanks to the work of Andreas, most frameworks already compile flawlessly, some other dependencies like LMDB, flatbuffers and KIMAP (which currently still depends on KIO), will require some more work though. However, I have a pretty good idea by now what will be required to get everything to build on Android, which was the point of the exercise.

Windows

I postponed the actual building of anything on Windows until I get back to my workstation, but I’ve had some good discussions about the various possibilities that we have to build for Windows.

While I was initially intrigued by using MXE to cross-compile the whole stack (I’d love not having to have a Windows VM to build packages), the simplicity of the external CMake project that Kåre Särs set up for Kate is tempting as well. The downside of MXE would of course be that we don’t get to use the native compiler, which may or may not have a performance impact, but definitely doesn’t allow developers on Windows to work with MSVC (should we get any at some point…).

I guess some experimentation will be in order to see what works best.

Mac

Lacking an OS X machine this is also still in the theoretical realm, but we also discussed the different building techniques and how the aim must be to produce end-user installable application bundles.


As you can see there is still plenty to do, so if you feel like trying to build the stack on your favorite platform, that help would be greatly appreciated! Feel free to contact me directly, grab a task on Phabricator, or join our weekly meetings on meet.jit.si/kube (currently every Wednesday 12:00).

The time I spent in Randa showed once more how tremendously useful these sprints are for exchanging knowledge. I would have had a much harder time figuring out all the various processes and issues without the help of the various experts at the sprint, so this was a nice kick-start of the cross-platform effort for Kube. So thank you, Mario and team, for organizing this excellent event once more, and if you can, please help keep these sprints happening.


One of Them Tasters…

by kanarip in kanarip at 17:47, Friday, 17 June

One of them taster workstations at our beer-and-food-and-freedom taster in Vienna is just simply going to be the greatest experience you’ll have ever had. The reason is not the Fedora LiveCD I composed from a tweaked and tuned,  running workstation. The reason is not that we run these off of an up-to-date “Install to hard-drive”.… Continue reading One of Them Tasters…

Oh SNAP, and there’s the Devil

by kanarip in kanarip at 17:23, Thursday, 16 June

I don’t know how else to put it. I’m sorry. It’s bad. It’s bad in my opinion, not fact. My opinion, is my expectation, will only turn fact by the time it is too late to do anything about it. It’s like, “why back-up anything?” — well, you’ll know when you’ve lost everything. In other… Continue reading Oh SNAP, and there’s the Devil

Dear WordPress, … really?

by kanarip in kanarip at 16:00, Thursday, 16 June

Dear WordPress, I look at my statistics rather frequently, perhaps even more so than is necessarily healthy. Here, I’m shown 4 visits from the “European Union”, while another 8+4+3+3+3 visits from countries in the European Union. What is anyone to think of this, in your view? I’d like to believe that I’m some sort of… Continue reading Dear WordPress, … really?

How I made Crypt_GPG 100 times faster

by alec in Kolabian at 13:50, Wednesday, 15 June

Here’s the short story of a performance investigation I did to make the Crypt_GPG library (used by Roundcube’s Enigma plugin) 100 times faster. My fixes improved encryption/signing performance as well as peak memory usage. Once I was testing Enigma for messages with attachments. I noticed it is really slow when encrypting or even just signing the message […]

New IMAP filter/proxy release: guam 0.8, eimap 0.2

by Aaron Seigo in aseigo at 14:58, Friday, 10 June


Over the last few months I have been poking away at a refactoring of the IMAP library that Kolab's IMAP filter/proxy uses behind the scenes, called eimap. It consolidated quite a bit of duplicated code between the various IMAP commands that are supported, and fixed a few bugs along the way. This refactoring dropped the code count, makes implementing new commands even easier, and has allowed for improvements that affect all commands (usually because they are related to the core IMAP protocol) to be made in one central place. This was rolled as eimap 0.2 the other week and has made its way through the packaging process for Kolab. This is a significant milestone for eimap on the path to being able to be considered "stable".

Guam 0.8 was tagged last week and takes full advantage of eimap 0.2. This has entered the packaging phase now, but you can grab guam 0.8 here:

Highlights of these two releases include:

  • EIMAP
    • several new IMAP commands supported
    • all core IMAP response handling is centralized, making the implementation for each command significantly simpler and more consistent
    • support for multi-line, single-line and binary response command types
    • support for literals continuation
    • improved TLS support
    • fixes for metadata fetching
    • support for automated interruption of passthrough state to send structured commands
    • commands receive server responses for commands they put into the queue
  • Guam
    • ported to eimap 0.2
    • limit processcommandqueue messages in the FSM's mailbox to one in the per-session state machine
    • be more expansive in what is supported in LIST commands for the groupware folder filter rule
    • init scripts for both sysv and systemd

One change that did not make it into 0.8 was the ability to define which port to bind guam listeners to by network interface. This is already merged for 0.9, however. I also received some interest in using Guam with other IMAP servers, so it looks likely that guam 0.8 will get testing with Dovecot in addition to Cyrus.

Caveats: If you are building by hand using the included rebar build, you may run into some issues with the lager dependencies, depending on what versions of lager and friends are installed globally (if any). If so, change the dependencies in rebar.config to match what is installed. This is largely down to rebar 2.x being a little limited in its ability to handle such things. We are moving to rebar3 for all the erlang packages, so eimap 0.3 and guam 0.9 will both use rebar3. I have guam already building with rebar3 in a 0.9 feature branch, and it was pretty painless and produces something even a little nicer already. As soon as I fix up the release generation, this will probably be the first feature branch to land in the develop branch of guam for 0.9!

It is also known that the test suite for Guam 0.8 is broken. I have this building and working again in the 0.9 branch, and will probably be doing some significant changes to how these tests are run for 0.9.

events?(Kolab)

by Aaron Seigo in aseigo at 11:41, Friday, 10 June


I joined Kolab Systems just over 1.5 years ago, and during that time I have put a lot of my energy and time into working with the amazing team of people here to improve our processes and execution of those processes around sales, communication, community engagement, professional services delivery, and product development. They have certainly kept me busy and moving at warp 9, but the results have certainly been their own reward as we have moved together from strength to strength across the board.

One place that this has been visible is the strengthening of our relationship with Red Hat and IBM, which has culminated in two very significant achievements this year. First, Kolab is available on the Power 8 platform thanks to a fantastic collaboration with IBM. For enterprise customers and ISP/ASPs alike who need to be able to deliver Kolab at scale in minimum rack space, this is a big deal.

For those with existing Power 8 workloads, it also means that they can bring in a top-tier collaboration suite with quality services and support backing it up on their already provisioned hardware platform; put more simply: they won't have to support an additional x86-based pool of servers just for Kolab.

To help introduce this new set of possibilities, we have organized a series of open tech events called the Kolab Tasters in coordination with IBM and Red Hat.


Besides enjoying local beverages and street food with us at these events, attendees will be able to experience Kolab on Red Hat Enterprise Linux on Power 8 first-hand on the demo stations that will be available around the event site. Presentations from Kolab Systems, IBM and Red Hat form the main part of the agenda for each of these events, and will give attendees a deep understanding of how the open technologies from IBM (Power 8), Red Hat (Linux OS), and Kolab Systems (Kolab) deliver fantastic value and freedom, especially when used together.

The first events scheduled are:

  • Zürich, Switzerland on the 14th June, 2016
  • Vienna, Austria on the 22nd June, 2016
  • Bern, Switzerland on 28th June, 2016

There are some fantastic speakers lined up for these events, including Red Hat’s Jan Wildeboer and Dr. Wolfgang Meier, who is director of hardware development at IBM. At the Vienna event, we will also be celebrating the official opening of Kolab Systems Austria, which has already begun to support the needs of partners, customers and government in the beautiful country of Austria from our office in Vienna.

Events in Germany, starting in Frankfurt, will be scheduled soon, and we will be doing a "mini-taster" at the Kolab Summit which is taking place in Nürnberg on the 24th and 25th of June. Additional events will be scheduled in accordance with interest over the next year. I expect this to become a semi-regular road-show, in fact.

And speaking of the Kolab Summit: it is also going to be a fantastic event. Co-hosted at the openSUSE Conference, we will be sharing the technical roadmap for Kolab for 2016-2017; unveiling our partner program for ISPs, ASPs and system integrators that we incrementally rolled out earlier this year and which is now ready for broad adoption; listening to guest speakers on timely topics such as Safe Harbor in the EU and taking Kolab into vertical markets; and, of course, having a busy “hallway session” where you can meet and talk with key developers, designers, management and sales people from the Kolabiverse.

You can still book your free tickets to these events from their respective websites:

Kolab Summit 2.0: Putting the freedom back into the cloud.

Join us this June 24-25 in Nürnberg, Germany for the second annual Kolab Summit. Like last year's summit, we've accepted the invitation of the openSUSE Community to co-locate the summit with the openSUSE Conference, which will be held June 22-26 in the same location. And because we have some special news to share and celebrate, we're also putting on a special edition Kolab Taster on Friday June 24th. The overarching theme for this year's summit will be how to put the freedom back into the cloud.

Using the US cloud is increasingly fraught with technical and legal insecurities. Cross-border transfer of data is becoming more complex, and sovereign control of your data seems increasingly hard to achieve. Kolab believes there needs to be a better answer than simply giving up the benefits of the cloud, renouncing its convenience and cost efficiency. That is why during this year's summit we will be discussing the impact of the Safe Harbor ruling, and how Kolab has been working with our partners at Collabora and others to provide a fully open, collaborative cloud technology platform.

Technology

Join us to learn how Kolab has been helping Application Service Providers (ASPs) withstand Office365's onslaught, and how the cloud of the future will be running Kolab. Get exclusive insights and previews into our thinking, road map and the state of development of Kube and other components.

Business

In the business section we will be talking with partners about our new partner programme, business opportunities Kolab offers, and how to build a valuable proposition for your customers around Kolab.

Celebration

Finally, join us on the evening of the 24th for an exclusive Kolab Taster where we will have some major news to celebrate. And while you're there, don't miss the opportunity to also be part of the exciting openSUSE Conference, which has finally come home to Nürnberg, the home city of SUSE, complete with castles, food, beer and much to see.

Tickets are FREE, so grab yours today.

Tasks export

by alec in Kolabian at 15:52, Tuesday, 31 May

As with calendar events, tasks are objects that provide calendaring information. Both are internally based on the iCalendar format. So, why couldn’t we export tasks in iCal format as we can with calendar events? Well, now we can. As you can see on the screenshot, we now have an Export button in the Tasks toolbar. It allows you […]

An XSS vulnerability has been reported, and fixed in Roundcube; see http://seclists.org/oss-sec/2016/q2/414 and https://github.com/roundcube/roundcubemail/issues/5240 I have applied this fix to Kolab 3.4 Updates: https://obs.kolabsys.com/package/show/Kolab:3.4:Updates/roundcubemail I also prepared an update for Kolab 16: https://obs.kolabsys.com/request/show/1646 (I had to do the branch and submit request from the command line, because today the SSL certificate for obs.kolabsys.com expired, which […]

There has been a new release of Roundcube: https://roundcube.net/news/2016/04/20/updates-1.1.5-and-1.0.9-published I have updated the OBS package with this version, and you can run yum update or apt-get update && apt-get upgrade depending on your Linux OS. For more details about how to create such an update, or how to work around the issue when Roundcube has […]

At TBits.net we have a setup for a customer, where Kolab 3.4 needs to cooperate with Exchange, and with an older mail system during the transition. This post describes how to configure Postfix to do this job. The scenario: Exchange is used for a smaller number of power users. They need to work with Outlook, and that is […]

Sieve ‘duplicate’ extension

by alec in Kolabian at 09:44, Thursday, 05 May

The ‘duplicate’ extension (RFC 7352) adds a new test command called duplicate to the Sieve language. This test adds the ability to detect duplications. It is supported by dovecot’s Pigeonhole project. It’s now supported also by Roundcube’s managesieve plugin. The main application for this new test is handling duplicate deliveries commonly caused by mailing list […]

I had the problem that I used a password containing the character "<". This works fine for the 389 directory server, but you cannot log in with that password through the Kolab Webadmin interface, because PHP does not handle the character "<" in POST data: if the password is "test<1234", it only assumes "test". Therefore the login will fail. […]

(Better?) HTML to Text conversion

by alec in Kolabian at 08:40, Wednesday, 27 April

Roundcube has a class that handles HTML to text conversion based on the old html2text class. It improved over years, but it has its issues, e.g. tables support is really poor. Can we do better? First, let’s see where such conversion is used in Roundcube: Displaying HTML-only message when HTML preview is disabled, Creating HTML […]

Copying calendar events

by alec in Kolabian at 14:04, Sunday, 17 April

Thanks to Christoph Schwarzenberg, copying calendar events is now possible. Not a big change, but it looks like some users needed it. Clicking the new Options menu element will open the event creation dialog with the event details copied from the original event. As simple as that.

Message/rfc822 attachment preview

by alec in Kolabian at 18:15, Monday, 28 March

When you forward a message as an attachment, or just attach a saved message, it will be sent as a file of type message/rfc822. Roundcube could display the text content from inside it, as well as its attachments, but that was all. Now you can do more with it. An attachment of type message/rfc822 can now be previewed the […]

So what is Kube? (and who is Sink?)

by cmollekopf in Finding New Ways… at 14:03, Wednesday, 02 March

Michael first blogged about Kube, but we apparently neglected to properly introduce the project. Let me fix that for you 😉

Kube is a modern groupware client, built to be effective and efficient on a variety of platforms and form-factors. It is built on top of a high-performance data access layer and Qt Quick to provide an exceptional user experience with minimal resource usage. Kube is based on the lessons learned from KDE Kontact and Akonadi, building on the strengths and replacing the weak points.

Kube is further developed in coordination with Roundcube Next, to achieve a consistent user experience across the two interfaces and to ensure that we can collaborate while building the UX.

A roadmap has been available for some time for the first release here, but in the long run we of course want to go beyond a simple email application. The central aspects of the problem space that we want to address are communication and collaboration as well as organization. I know this is still a bit fuzzy, but there is a lot of work to be done before we can specify this clearly.

To ensure that we can move fast once the basic framework is ready, the architecture is very modular to enable component reuse and make it as easy as possible to create new components. This way we can shift our focus over time from building the technology stack to evolving the UX.

Sink

Sink is a high-performance data access layer that provides a plugin mechanism for various backends (remote servers such as IMAP, local maildir, …), an editable offline cache that can replay changes to the server, a query system for efficient data access, and a unified API for groupware types such as events, mails, todos, etc.

It is built on top of LMDB (a key-value store) and Qt to be fast and efficient.

Sink is built for reliability, speed and maintainability.
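
To make that a bit more concrete, here is a minimal conceptual sketch in C++. All names are invented for illustration and this is not the real Sink API; it only shows the shape of the idea: a query describes what you want, and the answer is served from a local cache that the resource plugins keep in sync with the server.

// Conceptual sketch only -- hypothetical names, not the actual Sink API.
#include <algorithm>
#include <iostream>
#include <iterator>
#include <string>
#include <vector>

// A unified groupware type, independent of the backend it came from.
struct Mail {
    std::string folder;
    std::string subject;
    bool unread;
};

// A query describes *what* to fetch; the data access layer decides *how*,
// answering from the local cache and updating once the backend has synced.
struct Query {
    std::string folder;   // optional folder filter
    bool unreadOnly;
};

std::vector<Mail> execute(const Query &q, const std::vector<Mail> &cache)
{
    std::vector<Mail> result;
    std::copy_if(cache.begin(), cache.end(), std::back_inserter(result),
                 [&](const Mail &m) {
                     return (q.folder.empty() || m.folder == q.folder)
                         && (!q.unreadOnly || m.unread);
                 });
    return result;
}

int main()
{
    // The "offline cache": in Sink this lives in LMDB and is filled by resource plugins.
    std::vector<Mail> cache = {
        {"INBOX", "Kube 0.1 plans", true},
        {"INBOX", "Meeting notes", false},
        {"Archive", "Old thread", false},
    };

    for (const Mail &m : execute({"INBOX", true}, cache))
        std::cout << m.subject << "\n";
}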

What Kube & Sink aren’t

It is not a rename of Kontact and Akonadi.
Kontact and Akonadi will continue to be maintained by the KDEPIM team, and Kube is a separate project (although we share bits and pieces under the hood).
It is not a rewrite of Kontact.
There is no intention of replicating Kontact. We're not interested in providing every feature that Kontact has, but rather want to focus on a set that is useful for the use cases we try to solve (which is still work in progress).

Development

Development planning happens on phabricator, and the kdepim mailinglist. Our next sprint is in Toulouse together with the rest of the KDEPIM team.

We also have a weekly meeting on Wednesday, 16:00 CET with notes sent to the ML. If you would like to participate in those meetings just let me know, you’re more than welcome.

Current state

Kube is under heavy development and in an early stage, but we're making good progress and starting to see the first results (you can read mail from maildir and even reply to mails). However, it is not yet ready for general consumption (though installable).

If you want to follow the development closely it is also possible to build Kube inside a docker container, or just use the container that contains a built version of Kube (it’s not yet updated automatically, so let me know if you want further information on that).

I hope that makes it a bit clearer what Kube and Sink are and aren't, and where we're going with them. If something is still unclear, please let me know in the comments section, and if you want to participate, by all means, join us =)


Kube Architecture – A Primer

by cmollekopf in Finding New Ways… at 08:18, Wednesday, 02 March

Kube’s architecture is starting to emerge, so it is time that I give an overview on the current plans.

But to understand why we’re going where we’re going it is useful to consider the assumptions we work with, so let’s start there:

Kube is a networked application.
While Kube can certainly be used on a machine that has never seen a network connection, that is not where it shines. Kube is built to interact with various services and to work well with multiple devices. This is the reality we live in and that we’re building for.
Kube is scalable.
Kube not only scales from small datasets that are quick to synchronize to large datasets that we can't simply load into memory all at once; it also scales to different form factors. Kube is usable on devices with small and large screens, with touch or mouse input, etc.
Kube is cross platform.
Kube should run just as well on your laptop (be it Linux, OS X or Windows) as it does on your mobile (be it Plasma Mobile or Android).
Kube is a platform for rapid development.
We’re not interested in rebuilding mail and calendar and stopping there. Groupware needs to evolve and we want to facilitate communication and collaboration, not email and events. This requires that the user experience can continue to evolve and that we can experiment with new ideas quickly, without having to do large-scale changes to the codebase.
Groupware types are overlapping.
Traditionally PIM/groupware applications are split up by formats and protocols, such as IMAP, MIME and iCal, but that's not how typical workflows work. Just because the transport chosen by iTip for an invitation happens to be a MIME message transported over IMAP to my machine, doesn't mean that's necessarily how I want to view it. I may want to start a communication with a person from my addressbook, calendar or email composer. A note may turn into a set of todos eventually. …

A lot of pondering over these points has led to a set of concepts that I’d like to quickly introduce:

Components

Kube is built from different components. Each component is a KPackage that provides a QML UI backed by various C++ elements from the Kube framework. By building reusable components we ensure that e.g. the email application can show the very same contact view as the addressbook, with all the actions you'd expect available. This not only allows us to mix various UI elements freely while building the user experience, it also ensures consistency across the board with little effort. The components load their data themselves by instantiating the appropriate models and are thus fully self-contained.

Components will come in various granularities, from simple widgets suitable for popup display to e.g. a full email application.

The components concept will also be interesting for integration. A Plasma clock plasmoid could for instance detect that the Kube calendar package is available, and show this instead of its native one. That way the integration is little effort, the user experience is well integrated (you get the exact same UX as in the regular application), and the full set of functionality is directly available (unlike when only the data was shared).

Models

Kube is reactive. Models provide the data that the UI is built upon, so the UI only has to render whatever the model provides. This avoids complex stateful UIs and ensures a proper separation of business logic and UI. The UI directly instantiates and configures the models it requires.
The models feed on the data they get from Sink or other sources, and are as such often thin wrappers around other APIs. The dynamic nature of models allows more data to be loaded dynamically as required, keeping the system efficient.
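
As a rough illustration of that pattern, the following is a minimal sketch of a reactive list model with dynamic loading. It is not Kube's actual model code (the real models wrap Sink queries); it only shows how a model can feed a QML view and pull in more data on demand.

// Minimal sketch of a reactive model with dynamic loading -- not Kube's actual code.
#include <QAbstractListModel>
#include <QStringList>
#include <QVariant>

class MailListModel : public QAbstractListModel
{
    Q_OBJECT
public:
    enum Roles { SubjectRole = Qt::UserRole + 1 };

    int rowCount(const QModelIndex &parent = QModelIndex()) const override
    {
        return parent.isValid() ? 0 : m_subjects.size();
    }

    QVariant data(const QModelIndex &index, int role) const override
    {
        if (!index.isValid() || role != SubjectRole)
            return QVariant();
        return m_subjects.at(index.row());
    }

    QHash<int, QByteArray> roleNames() const override
    {
        return { { SubjectRole, "subject" } };
    }

    // Dynamic loading: the view pulls in more data only when it needs it.
    bool canFetchMore(const QModelIndex &) const override { return m_fetched < 100; }

    void fetchMore(const QModelIndex &) override
    {
        // In Kube this is where further results from Sink would be appended.
        beginInsertRows(QModelIndex(), m_subjects.size(), m_subjects.size() + 9);
        for (int i = 0; i < 10; ++i)
            m_subjects << QStringLiteral("Mail %1").arg(++m_fetched);
        endInsertRows();
    }

private:
    QStringList m_subjects;
    int m_fetched = 0;
};

A QML ListView can then bind directly to the subject role; when new results arrive the model emits the usual insert signals and the view updates itself.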

Actions

In the other direction, "Actions" provide the interaction with the rest of the system. An action can be "mark as read", or "send mail", or any other interaction with the system that is suitable for reuse. The action system is a publisher-subscriber system where various parts can execute actions that are handled by one of the registered action handlers.

This loose coupling between action and handler allows actions to be dynamically handled by different parts of the system, e.g. based on the currently active account when sending an email. It also ensures that action handlers are nice and small functional components that can be invoked from various parts of the system that require similar functionality.

Pre-handlers allow preparatory steps to be injected into the action execution, such as retrieving configuration, requesting authentication, or resolving some identifier over a remote service: anything, really, that is required to have all input data available before the action handler can execute.
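
In rough pseudo-API terms the action system looks like the following sketch. The names are invented and this is not Kube's actual API; it only illustrates the publisher-subscriber shape, with pre-handlers completing the context before the registered handlers run.

// Hypothetical sketch of the action system described above -- not Kube's actual API.
#include <functional>
#include <iostream>
#include <map>
#include <string>
#include <vector>

using Context = std::map<std::string, std::string>;
using Handler = std::function<void(Context &)>;

class ActionBroker
{
public:
    // Pre-handlers run first and can complete the context (config, auth, ...).
    void registerPreHandler(const std::string &action, Handler handler)
    {
        m_preHandlers[action].push_back(std::move(handler));
    }

    void registerHandler(const std::string &action, Handler handler)
    {
        m_handlers[action].push_back(std::move(handler));
    }

    // Any part of the UI can publish an action; it doesn't know who handles it.
    void execute(const std::string &action, Context context)
    {
        for (auto &pre : m_preHandlers[action])
            pre(context);
        for (auto &handler : m_handlers[action])
            handler(context);
    }

private:
    std::map<std::string, std::vector<Handler>> m_preHandlers;
    std::map<std::string, std::vector<Handler>> m_handlers;
};

int main()
{
    ActionBroker broker;
    // A pre-handler resolves the currently active account before the mail is sent.
    broker.registerPreHandler("sendMail", [](Context &context) {
        context["account"] = "work";
    });
    broker.registerHandler("sendMail", [](Context &context) {
        std::cout << "sending to " << context["to"] << " via " << context["account"] << "\n";
    });
    broker.execute("sendMail", {{"to", "jane@example.com"}});
}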

Controllers

Controllers are C++ components that expose properties for a QML UI. These are useful to prepare data for the UI where a simple model is not sufficient, and can include additional UI-helpers such as validators or autocompletion for input fields.
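
A controller in this sense could look roughly like the following sketch (illustrative only, not an actual Kube class): a QObject that exposes properties for a QML form to bind to, plus a small validity helper.

// Sketch of a controller as described above -- illustrative, not Kube code.
#include <QObject>
#include <QString>

class ComposerController : public QObject
{
    Q_OBJECT
    Q_PROPERTY(QString recipient READ recipient WRITE setRecipient NOTIFY recipientChanged)
    Q_PROPERTY(bool valid READ isValid NOTIFY recipientChanged)
public:
    QString recipient() const { return m_recipient; }

    void setRecipient(const QString &recipient)
    {
        if (m_recipient == recipient)
            return;
        m_recipient = recipient;
        emit recipientChanged();
    }

    // UI helper: the QML form can bind its "send" button to this.
    bool isValid() const { return m_recipient.contains(QLatin1Char('@')); }

signals:
    void recipientChanged();

private:
    QString m_recipient;
};

In QML the composer form would bind its input field to recipient and enable the send button based on valid.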

Accounts

Accounts is the attempt to account for (pun intended) the networked nature of the environment we’re working in. Most information we’re working with in Kube is or should be synchronized over one or the other account and there remains very little that is specific to the local machine (besides application state). This means most data and configuration is always tied to an account to ensure clear ownership.

However, accounts not only manifest in where data is put, they also manifest as "plugins" for various backends. They tie together a QML configuration UI, an underlying configuration controller (for validation, autocompletion, etc.), a Sink resource to access data e.g. over IMAP, a set of action handlers e.g. to send mail over SMTP, and potentially various defaults for identity and so on.
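
Conceptually, such an account "plugin" is little more than a descriptor bundling those pieces. The following is a purely hypothetical sketch (not Kube's actual plugin API) of what gets tied together:

// Purely illustrative account-plugin descriptor -- hypothetical, not Kube's API.
#include <QString>
#include <QStringList>
#include <map>

struct AccountPlugin {
    QString type;                 // e.g. "imap"
    QString configUi;             // QML file providing the configuration UI
    QString configController;     // controller used for validation/autocompletion
    QString sinkResource;         // resource used to access the data
    QStringList actionHandlers;   // e.g. sending mail over SMTP
};

// A registry keyed by account type; each backend registers its descriptor here.
std::map<QString, AccountPlugin> &accountPlugins()
{
    static std::map<QString, AccountPlugin> plugins;
    return plugins;
}

int main()
{
    accountPlugins()["imap"] = { QStringLiteral("imap"),
                                 QStringLiteral("ImapAccountSettings.qml"),
                                 QStringLiteral("ImapSettingsController"),
                                 QStringLiteral("sink.imap"),
                                 { QStringLiteral("sendMailViaSmtp") } };
}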

In case you’re internally already shouting “KAccounts!, KAccounts!”; We’re aware of the overlap, but I don’t see how we can solve all our problems using it, and there is definitely an argument for an integrated solution with regards to portability to other platforms. However, I do think there are opportunities in terms of platform integration.

An that’s it!

Further information can be found in the Kube Documentation.


Kolab webclient caches

by alec in Kolabian at 10:36, Thursday, 25 February

Kolab webclient (Roundcube together with set of Kolab plugins) does cache some data for better performance. There are a few types of cache. I’ll try to explain what we have there and how you can configure the cache for your needs. IMAP indexes and folders metadata In this cache we store folders lists, message counts, […]

Document editing sessions overview

by alec in Kolabian at 11:17, Monday, 08 February

Here‘s an addition to the collaborative editing functionality implemented in Kolab 16. As I described here editing sessions are managed by Chwala. You can notice existence of a session on the files list. The new feature allows you to see all ongoing sessions in one place. We added a new entry (Sessions) in the folders […]

libotp - email rendering in kube

by Sandro Knauß in Decrypted mind at 13:03, Friday, 05 February

An important part of a mail reader is rendering emails. Nobody likes to read the raw MIME message.
For Kube we looked around for what we should use for mail rendering, and came to the conclusion that we will use parts of kdepim for that task. But the current code was not usable for us as it was, because it was tangled together with other parts of the mailviewer (like Akonadi, widget code, ...), so we would have ended up depending on nearly everything in kdepim, which was a no-go for us. But after a week of untangling the rendering part out of messageviewer, we ended up with a nice library that does only mail rendering called libotp branch dev/libotp. For the moment that is just a working name. Maybe someone can come up with a better one?

Why a lib? Isn't email rendering easy?

encrypted mail

Well, if you look at it from the outside, it really looks like an easy task to solve. But in detail the task is quite complicated. We have encrypted and signed mail parts, HTML and non-HTML mail parts, alternative MIME structures, broken mail clients and so on. And then we also want user interaction: do we want to decrypt the mail by default? Do we want to verify mails by default? Do we allow external HTML links? Does the user prefer HTML over non-HTML? ...

In total you have to keep many things in mind while rendering a mail. And we are not even talking about the fact that we also want a pluggable system, where we are able to create our own rendering for special types of mails. All these things are already solved by the messageviewer of kdepim: we have highly integrated crypto support and support for many different MIME types, it has already been used for years, and the test coverage is quite high. As you can see in the image above, we are already able to decrypt and verify mails.

libotp

libotp is a library that renders emails to HTML. Maybe you ask: why the hell HTML? I hate HTML mails, too :D But HTML is easy to display, and back in the day there was no alternative for showing something that dynamic. Nowadays we have other solutions like QML, but we still have HTML messages that we want to be able to display. Currently we have no way to try out QML rendering for mails, because the output of libotp is limited to HTML. I hope to solve this as well and give libotp the ability to render to different output formats, by splitting the monolithic task of rendering an email to HTML into a parse step, in which the structure of the email is translated into the visible parts (a signed part is followed by an encrypted one, which has an HTML part as a child, ...), and the pure rendering step.
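
To illustrate the split I have in mind, here is a rough sketch with hypothetical types (not libotp's actual API): the parse step produces a tree of visible parts, and the output format is decided purely by which renderer walks that tree.

// Sketch of the parse/render split discussed above -- hypothetical types, not libotp's API.
#include <iostream>
#include <memory>
#include <string>
#include <vector>

// Step 1: parsing turns the MIME tree into a tree of *visible* parts.
struct Part {
    std::string type;                     // e.g. "encrypted", "signed", "html", "text"
    std::string content;
    std::vector<std::shared_ptr<Part>> children;
};

// Step 2: rendering walks that tree; swapping the renderer swaps the output format.
struct Renderer {
    virtual ~Renderer() = default;
    virtual std::string render(const Part &part) const = 0;
};

struct HtmlRenderer : Renderer {
    std::string render(const Part &part) const override
    {
        std::string out = "<div class=\"" + part.type + "\">" + part.content;
        for (const auto &child : part.children)
            out += render(*child);
        return out + "</div>";
    }
};

int main()
{
    Part mail{"signed", "", { std::make_shared<Part>(Part{"html", "<p>Hi!</p>", {}}) }};
    std::cout << HtmlRenderer().render(mail) << "\n";
}

A QML renderer would then simply be another implementation of the same interface walking the same tree.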

If you follow the link libotp branch dev/libotp, you may wonder whether a fork of messagelib is happening. No, the repo was only created so we can use libotp in Kube right now, and I took many shortcuts and used ugly hacks to get it working. The plan is that libotp becomes part of the messagelib repo, and I have already managed to push the first part of (polished) patches upstream. If everything goes well, I will have everything upstreamed by next week.

How to use it?

At the moment it is still work in progress, so it may change, especially if others step up and give input about the way they want to use it.
Let's have a look at how Kube is using it, in kube/framework/mail/maillistmodel.cpp:

// mail -> kmime tree
const auto mailData = KMime::CRLFtoLF(file.readAll());  
KMime::Message::Ptr msg(new KMime::Message);  
msg->setContent(mailData);  
msg->parse();  

First step: load the mail into KMime to get a tree with the MIME parts. file is a mail in mbox format located in a local file.

// render the mail
StringHtmlWriter htmlWriter;  
QImage paintDevice;  
CSSHelper cssHelper(&paintDevice);  
MessageViewer::NodeHelper nodeHelper;  
ObjectTreeSource source(&htmlWriter, &cssHelper);  
MessageViewer::ObjectTreeParser otp(&source, &nodeHelper);  

Now initialize the ObjectTreeParser. For that we need:

  • HtmlWriter, which receives the HTML output while rendering and does whatever the user wants to do with it (in our case we just save it for later use; see htmlWriter.html()).

  • CSSHelper creates the header of the HTML with CSS; it gives you the possibility to set the color schemes and fonts that are used for HTML rendering.

  • NodeHelper is the place where information is stored that needs to be kept around for longer (pointers to encrypted parts that are currently being decrypted asynchronously, or extra mail parts that are visible but are not a MIME part in the mail). NodeHelper also informs you when any async job has ended. At the moment we use neither the async mode nor mail-internal links, which is why the NodeHelper is a local variable here.

  • ObjectTreeSource is the settings object; here you can store whether decryption is allowed, whether you prefer HTML output, whether you like emoticons, ...

  • And last but not least the ObjectTreeParser (OTP) itself. It does the real work of parsing and rendering the mail :)

htmlWriter.begin(QString());  
otp.parseObjectTree(msg.data());  
htmlWriter.end();

return htmlWriter.html();  

After initializing the OTP we can render the mail. This is done with otp.parseObjectTree(msg.data());. Around that we need to tell htmlWriter that HTML creation has begun and, afterwards, that it has ended.

As you may have noticed, except for the ObjectTreeParser and the NodeHelper, Kube has its own overloads of these objects. This already makes libotp highly configurable for other needs.

next steps

After the week of hacking, the current task is now to push things upstream, so as not to create a fork, and to focus on one solution for rendering mails for KMail and Kube together. After upstreaming I will start to extract the libotp parts out of messageviewer (currently it is only another CMake target and not really separated) and make messageviewer depend on libotp. With that, libotp becomes an independent library that is used by both projects, and I can focus again on polishing libotp and messageviewer.

tl;dr;

Here you can see that Kube now renders your spam nicely:
rendered spam

Like the spam says (and spam can't lie) - get the party started!

Kolab 16 at FOSDEM'16

by Paul Brown in Kolab Community - Kolab News at 10:54, Sunday, 31 January

The biggest European community-organised Open Source event is upon us and, this year, we at Kolab Systems have a very special reason to be there: we'll be presenting the new Kolab 16.1 [1] to the world during the meetup.

The development team has worked long and hard on this release, even longer and harder than usual. And that slog has led to several very interesting new features built into Kolab 16.1, features we are particularly proud of.

Take for example GUAM, our all-new, totally original, "IMAP-protocol firewall". Guam allows you to, for example, access Kolab from any client without having to see the special Kolab groupware folders, such as calendars, todos, contacts, and so on. As Guam is configured server-side, users do not have to do anything special on their clients.

Guam keeps users' inboxes clean, as it pipes the resource messages in the background only to the apps designated to deal with them. So a meeting scheduled by a project leader will only pop up in the calendar app, and a new employee recruited by HR will silently get added only to the contact roster, without the users ever accidentally seeing in their email the system-generated messages that prompt the changes.

As Guam is actually an IMAP proxy filter, something like a rule-based IMAP firewall (Aaron Seigo dixit), it is very flexible and allows you to do so much more. Come by our booth and find out how our developers have been using it and discuss your ideas with the people who actually built it.

Then there's MANTICORE. With Manticore we are taking our "collaborating in confidence" mantra to the next level. Manticore currently works only on documents, but ultimately it will bring collaborative and simultaneous editing to emails, notes and calendars as well, all without having to leave the Kolab web environment. Need to write an email with the input from a couple of colleagues? Instead of passing the text around, which is slow and error-prone (who hasn't ever sent the second to last version off by mistake?), just open it up with Manticore and edit the text all together, everybody at the same time. Interactive proofreading action!

Collaborative editing has arrived in Kolab 16.1

Finally we have implemented an OTP (One Time Password) authentication method to make logging into Kolab's web client even more secure. Every time a user goes to log in, you can have the system send a single-use code to her phone. The code is a random 6-digit number, which is only valid for a few minutes and changes every time.

If you're an admin and tired of the security nightmares that are passwords copied onto slips of paper stuck to monitors; sick of trying to convince users that "1234" is really not that clever a password; or fed up with hearing complaints every time you have to renew their credentials because they have been hacked yet again; this is a feature that you'll definitely want to activate.

Of course there's much more to Kolab 16.1, including optimised code, a sleeker design, and usability improvements all round. You'll be able to see all these new features in action at our booth during the show and you'll also be able to meet the developers and ask them as many questions as you like on how you can use Kolab in your own environment.

What's even better is that you can also come over to our booth and copy a full, ready-to-go version of Kolab as a virtual machine onto your USB thumbdrive. Take it home with you and use it at your leisure, because, that's the other thing, we are fusing both the Enterprise and Community versions of Kolab together. This means the exact same code base from the Enterprise, our corporate-grade groupware, gets released to Community, with new versions, with all features, coming out regularly every year.

But that's not the only user news we have. We'll also be presenting our new Kolab Community portal during the conference. Kolab Community is designed to be a place for developers, admins and users to cooperate in the creation of the most reliable and open groupware collaboration suite out there. On Kolab Community you will be able to interact with the developers at Kolab Systems and other users on modern, feature-rich forums and mailing lists. The portal also hosts the front end to our bug tracker and our Open Build Service that lets you roll your own Kolab packages, tailored to your needs.

For example, if you need a pocket Kolab server you can take anywhere, why not build it to run on a Raspberry Pi? In fact that's another thing we'll be demoing at our booth. Bring along an SD card (8 GB at least) and we'll flash Kolab 16.1 for the Pi on to it for you. You'll get to try the system right there and then take it away with you, fully ready to be deployed at home or in your office.

Useful Links

[1] Download Kolab 16.1: https://docs.kolab.org/installation-guide/index.html

[2] FOSDEM expo area: https://fosdem.org/2016/stands/

And just to prove how well all the Kolab-related tools work everywhere, we'll also be running Kontact, KDE's groupware client, on a Windows 10 box. Kontact will be accessing a Kolab 16.1 server (of course) for all its email, calendars, contacts, notes, and so on; proving how both frameworks combined have nothing to envy from other proprietary alternatives, regardless of the underlying platform. You'll also be able to get the first glimpses of Kube, our new generation email client.

Apart from all of the above, there'll also be Q&As, merch (including some dope tattoos), special one-time Kolab Now offers (which are also somehow related to tattoos). You know: the works.

Discounts 4 Tats

Update: Tattoos get you discounts! Take a photo with a tattoo proving your love for Free Software and get a 30% discount on your ultra-secure Kolab Now online mail, file & groupware account. Get your tat-discount here.

Come visit us. We'll be in Building K, Level 1, as you go in, booth 4 on the left [2]; try out Kolab 16.1's new features live; get cool and useful stuff; join our community.

Sound like a plan?

More FOSDEM news: Kolab Systems' CEO, Georg Greve, and Collabora's General Manager, Michael Meeks, will be signing an agreement during the event to integrate CloudSuite into Kolab.

(Collabora are the guys who offer enterprise support for LibreOffice, built the LibreOffice on Android app, and created LibreOffice Online.)

Collabora's CloudSuite supports word-processing, spreadsheets, presentations

CloudSuite is Collabora's version of LibreOffice in the cloud and comes with built-in collaborative editing. It allows editing and accessing online documents that can also be shared easily across users. Whole teams can collaboratively edit text documents, spreadsheets, and presentations. You can access and modify the shared documents through a web interface, directly from the LibreOffice desktop suite, or even from Collabora's own Android LibreOffice app.

Of course CloudSuite supports the full range of document formats LibreOffice does. That includes, among many others, the ISO-approved native Open Document formats, and most, if not all, Microsoft Office formats.

What with Kolab also gaining collaborative editing of text documents (already available), and emails, notes, calendars, and contacts over the next few months, you're probably seeing where this is going: by combining Kolab with CloudSuite, the aim is to integrate a suite of office tools, each and all of which, from the email client to the presentation editor, can be used collaboratively and online by all the users of a team. And we'll be doing it using exclusively free and open source technologies and open standards.

You can already edit texts collaboratively from within Kolab

So, to summarize with a bit of a tongue twister: Kolab & Collabora collaborate on collaboration.

Kolab at FOSDEM 2016

by Aaron Seigo in aseigo at 20:04, Thursday, 28 January

Kolab at FOSDEM 2016

Kolab is once again back at FOSDEM! Our booth is in the K hall, level 1, group A, #4. (What a mouthful!) We have LibreOffice as a neighbor this year, which is a happy coincidence, as we have some interesting news related to office documents that we will be sharing with all Kolab users at FOSDEM. That's not the only reason to visit us, though!

We'll be announcing a new release of Kolab with some exciting new features, and showing it running on multiple systems including a single-board Raspberry Pi. In fact, if you have a Pi of your own, bring an SD card and we'll be happy to flash it for you with Kolab goodness. Instructions and images will be made available online after FOSDEM as well, so don't worry about missing out if you don't make it by the booth.

We'll also be making a very special offer to all you Free software lovers: a special discount on Kolab Now! Drop by the booth to find out how you can take advantage of this limited time offer.

And last, but far from least, we'll be showing off the new Kolab Community website and forums, sharing the latest on Kolab devel, Roundcube Next and the new desktop client, Kube. Several members of the Kolab development team and community will be there. I'll be there as well, and really looking forward to it!

Oh, and yes, there is the matter of that announcement I hinted at in the first paragraph of this blog entry ... you really want to come by and visit us to find out more about it ... and if you can't, then definitely be watching the Kolab blogs and the software news over the next few days!

Kolab: Bringing All Of Us Together, Part 2

by Aaron Seigo in aseigo at 19:47, Thursday, 28 January

Kolab: Bringing All Of Us Together, Part 2

This is the second blog in a series about how we are working to improve the Kolab ecosystem. You can read the first installment about the new release strategy here. This time, however, we are going to look at our online community infrastructure.

All the good things

There is quite a significant amount of infrastructure available to members of the Kolab community. There is a packaging system, online translation, a Phabricator instance where the source code and more is hosted, a comprehensive documentation site, mailing lists, IRC channels and blogs. We are building on that foundation, and one example of that is the introduction of Phabricator this past year.

Of course, it does not matter how good or numerous these tools are if people either do not find them or they are not the right sorts of tools people need. We took a look at what we have on offer, how it is being used and how we could improve. The biggest answer we came up with was: revamp the kolab.org website and drop the wiki.

Introducing a new community website!

Kolab: Bringing All Of Us Together, Part 2

Today we turned the taps on a brand new website at kolab.org. Unlike the previous website, this one does not aim to sell people on what makes Kolab great; we already have other websites that do that pretty well. Instead, the new design focuses on making the community resources discoverable by putting them front and center.

In addition to upgrading the blog roll and creating a blog just for announcements and community news, we have created a set of web forums we are calling the Hub. Our design team will be using it to collaborate with users and other designers, and we invite developers, Kolab admins and users alike to discuss everything Kolab at the Hub.

Of course, we also took the opportunity to modernize the look, give it a responsive design, and reflect the new Kolab brand guidelines. But that is all just icing on the cake compared to the improved focus and new communication possibilities.

From here forward

We will be paying a lot of attention to community engagement and development in 2016, and this website, unveiled in time for FOSDEM, is a great starting point. We will be adding more over time, such as real time commit graphs and the like, as well as taking your feedback for what would make it more useful to you. We are all looking forward to hearing from you! :)

Feel the FS love at FOSDEM #ILoveFS

by Aaron Seigo in Kolab Community - Kolab News at 15:15, Tuesday, 26 January

Well, it’s that special time of year again. A time when people can show their appreciation for the ones they love, that’s right, free software!

Free Software drives a huge number of devices in our everyday life. It ensures our freedom, our security, civil rights, and privacy. These values have always been at the heart of Kolab and what better way to say #ILoveFS than with the gift of Kolab Now!

To celebrate ‘I love Free Software Day’ we are offering a 30% discount* on all new Kolab Now accounts until the 14th February 2016.

So, how does this work?

Here’s the fun bit. Simply show your free software love by posting a picture on social media of your Kolab tattoo using #ILoveFS (available exclusively at FOSDEM) or simply share your Free Software contributions with us by email or in person. You can do that bit later if you like, for now, just head on over to kolabnow.com/ILoveFS and grab your new Kolab Now account. Offer must end 14th February 2016, so grab them fast.

*The following Terms & Conditions apply
30% Discount is applicable for the first 6 months for new individual or group accounts only. 30% discount is applicable for the first month only on new hosting accounts. Discount will be applied to your new account within the first 30 days of signup. Offer ends 14 February 2016. Kolab Systems AG has the right to withdraw this offer at any time. Cash equivalent is not available.

The year of Kube

by cmollekopf in Finding New Ways… at 10:00, Saturday, 23 January

After having reached the first milestone of a read-only prototype of Kube, it’s time to provide a lookout of what we plan to achieve in 2016.
I have put together a roadmap of what I think are realistic goals that we can achieve in 2016. Obviously this will evolve over time and we'll keep adjusting it as we advance faster/slower or simply move in other directions.

Since we’re building a completely new technology stack, a lot of the roadmap revolves around ensuring that we can create what we envision technology wise,
and that we have the necessary infrastructure to move fast while having confidence in the quality. It’s important that we do this before growing the codebase too much so we can still make the necessary adjustments without having too much code to adjust.

On the UX side we'll want to work on concepts and prototypes, although we'll probably keep the first implemented UIs to something fairly simple and standard.
Over time we have to build a vision of where we want to go in the long run, so it can steer the development. This will be a long and ongoing process involving not only wire-frames and mockups, but hopefully also user research and analysis of our problem space (how do we communicate rather than how does GMail work).

However, since we can’t just stomp that grander vision out of the ground, the primary goal for us this year, is a simple email client that doesn’t do much, but does what it does well. Hopefully we can go beyond that with some other components available (calendar, addressbook, …), or perhaps something simple available on mobile already, but we’ll have to see how fast it goes first. Overall we’ll want to focus on quality rather than quantity to prove what quality level we’re able to reach and to ensure we’re well lined up to move fast in the following year(s).

The Roadmap

I split the roadmap into four quarters, each having its own focus. Note that Akonadi Next has been renamed to Sink to avoid confusion (now that Akonadi 5 is released and we planned for Akonadi2…).

1. Quarter

Milestones:
– Read-only Kube Mail prototype.
– Fully functional Kube Mail prototype but with very limited functionality set (read and compose mail).
– Testenvironment that is also usable by designers.
– Logging mechanism in Sink and potentially Kube so we can produce comprehensive logs.
– Automatic gathering of performance statistics so we can benchmark and prove progress over time.
– The code inventory [1] is completed and we know what features we used to have in Kontact.
– Sink Maildir resource.
– Start of gathering of requirements for Kube Mail (features, ….).
– Start of UX design work.

We focus on pushing forward functionality wise, and refactoring the codebase every now and then to get a feeling how we can build applications with the new framework.
The UI is not a major focus, but we may start doing some preparatory work on how things eventually should be. Not much attention is paid to usability etc.
Once we have the Kube Mail prototype ready, with a minimum set of features, but a reasonable codebase and stability (so it becomes somewhat useful for the ones that want to give it a try), we start communicating about it more with regular blogposts etc.

2. Quarter

Milestones:
– Build on Windows.
– Build on Mac.
– Comprehensive automated testing of the full application.
– First prototype on Android.
– First prototype on Plasma Mobile?
– Sink IMAP resource.
– Sink Kolab resource.
– Sink ICal resource.
– Start of gathering of performance requirements for Kube Mail (responsiveness, disk-usage, ….)
– Define target feature set to reach by the end of the year.

We ensure the codebase builds on all major platforms and ensure it keeps building and working everywhere. We ensure we can test everything we need, and work out what we want to test (i.e. including UI or not). Kube is extended with further functionality and we develop the means to access a Kolab/IMAP Server (perhaps with mail only).

3. Quarter

Milestones:
– Prototype for Kube Shell.
– Prototype for Kube Calendar.
– Potentially prototype for other Kube applications.
– Rough UX Design for most applications that are part of Kube.
– Implementation of further features in Kube Mail according to the defined feature set.

We start working on prototypes with other datatypes, which includes data access as well as UI. The implemented UIs are not final, but we end up with a usable calendar. We keep working on the concepts and designs, and we approximately know what we want to end up with.

4. Quarter

Milestones:
– Implementation of the final UI for the Kube Mail release.
– Potentially also implementation of a final UI for other components already.
– UX Design for all applications “completed” (it’s never complete but we have a version that we want to implement).
– Tests with users.

We polish Kube Mail, ensure it’s easy to install and setup on all platforms and that all the implemented features work flawlessly.

Progress so far

Currently we have a prototype that has:
– A read-only maildir resource.
– HTML rendering of emails.
– Basic actions such as deleting a mail.

My plan is to hook the Maildir resource up with offlineimap, so I can start reading my mail in Kube within the next weeks 😉

Next to this we’re working on infrastructure, documentation, planning, UI Design…
Current progress can be followed in our Phabricator projects [2][3], and the documentation, while still lagging behind, is starting to take shape in the "docs/" subdirectory of the respective repositories [4][5].

There’s meanwhile also a prototype of a docker container to experiment with available 6, and the Sink documentation explains how we currently build Sink and Kube inside a docker container with kdesrcbuild.

Join the Fun

We have weekly hangouts that you are welcome to join (just contact me directly or write to the kde-pim mailing list). The notes are on notes.kde.org and are regularly sent to the kdepim mailing list as well.
As you can guess the project is in a very early state, so we're still mostly trying to get the whole framework into shape, and not so much writing the actual application. However, if you're interested in trying to build the system on other platforms, working on UI concepts or generally tinkering around with the codebase we have and helping to shape what it should become, you're more than welcome to join =)


  1. git://anongit.kde.org/scratch/aseigo/KontactCodebaseInventory.git 
  2. https://phabricator.kde.org/project/profile/5/ 
  3. https://phabricator.kde.org/project/profile/43/ 
  4. git://anongit.kde.org/akonadi-next 
  5. git://anongit.kde.org/kontact-quick 
  6. https://github.com/cmollekopf/docker/blob/master/kubestandalone/run.sh 

Driving Akonadi Next from the command line

by Aaron Seigo in aseigo at 21:26, Monday, 28 December

Christian recently blogged about a small command line tool that added to the client demo application a bunch of useful functionality for interacting with Akonadi Next from the command line. This inspired me to reach into my hard drive and pull out a bit of code I'd written for a side project of mine last year and turn up the Akonadi Next command line to 11. Say hello to akonadish.

akonadish supports all the commands Christian wrote about, and adds:

  • piping and file redirect of commands for More Unix(tm)
  • able to be used in stand-alone scripts (#!/usr/bin/env akonadish style)
  • an interactive shell featuring command history, tab completion, configuration knobs, and more

Here's a quick demo of it I recorded this evening (please excuse my stuffy nose ... recovering from a Christmas cold):

We feel this will be a big help for developers, power users and system administrators alike; in fact, we could have used a tool exactly like this for Akonadi with a client just this month ... alas, this only exists for Akonadi Next.

I will continue to develop the tool in response to user need. That may include things like access to useful system information (user name, e.g.?), new Akonadi Next commands, perhaps even the ability to define custom functions that combine multiple commands into one call... it's rather flexible, all-in-all.

Adopt it for your own

Speaking of which, if you have a project that would benefit from something similar, this tool can easily be re-purposed. The Akonadi parts are all kept in their own files, while the functionality of the shell itself is entirely generic. You can add new custom syntax by adding new modules that register syntax which references functions to run in response. A simple command module looks like this:

namespace Example
{

bool hello(const QStringList &args, State &state)
{
    state.printLine("Hello to you, too!");
    return true;
}

Syntax::List syntax()
{
    return Syntax::List() << Syntax("hello", QObject::tr("Description"), &Example::hello);
}

REGISTER_SYNTAX(Example)

}

Autocompletion is provided via a lambda assigned to the Syntax object's completer member:

sync.completer = &AkonadishUtils::resourceCompleter;

and sub-commands can be added by adding a Syntax object to the children member:

get.children << Syntax("debug", QObject::tr("The current debug level from 0 to 6"), &CoreSyntax::printDebugLevel);

Commands can be run in an event loop when async results are needed by adding the EventDriven flag:

Syntax sync("sync", QObject::tr("..."), &AkonadiSync::sync, Syntax::EventDriven);

and autocompleters can do similarly using the State object passed in, which provides commandStarted/commandFinished methods.
... all in all, pretty straightforward. If there is enough demand for it, I could even make it load commands from a plugin that matches the name of the binary (think: ln -s genericappsh myappsh), allowing it to be used entirely generically with little fuss. shrug I doubt it will come to that, but these are the possibilities that float through my head as I wait for compiles to finish. ;)

For the curious, the code can be found here.

Akonadi Next Cmd

by cmollekopf in Finding New Ways… at 11:08, Wednesday, 23 December

For Akonadi Next I built a little utility that I intend to call “akonadi_cmd” and it’s slowly becoming useful.

It started as the first Akonadi Next client, for me to experiment a bit with the API, but it recently gained a bunch of commands and can now be used for various tasks.

The syntax is the following:
akonadi_cmd COMMAND TYPE ...

The Akonadi Next API always works on a single type, so you can e.g. query for folders, or for mails, but not for folders and mails. Instead you query for the mails with a folder filter, if that's what you're looking for. akonadi_cmd's syntax reflects that.

Commands

list
The list command allows you to execute queries and retrieve the results in the form of lists.
Eventually you will be able to specify which properties should be retrieved; for now it's a hardcoded list for each type. It's generally useful for checking what the database contains and whether queries work.
count
Like list, but only outputs the result count.
stat
Some statistics on how large the database is, how the size is distributed across indexes, etc.
create/modify/delete
Allows you to create/modify/delete entities. Currently this is only of limited use, but it already works nicely with resources. Eventually it will allow you to create/modify/delete all kinds of entities such as events/mails/folders/….
clear
Drops all caches of a resource but leaves the config intact. This is useful while developing because it e.g. allows retrying a sync without having to configure the resource again.
synchronize
Allows you to synchronize a resource. For an IMAP resource that means the remote server is contacted and the local dataset is brought up to date;
for a maildir resource it simply means all data is indexed and becomes queryable by Akonadi.

Eventually this will also allow specifying a query, e.g. to only synchronize a specific folder.

show
Provides the same contents as "list" but in a graphical tree view. This was really just a way for me to test whether I can actually get data into a view, so I'm not sure if it will survive as a command. For the time being it's nice to compare its performance to the QML counterpart.

Setting up a new resource instance

akonadi_cmd is already the primary way how you create resource instances:

akonadi_cmd create resource org.kde.maildir path /home/developer/maildir1

This creates a resource of type "org.kde.maildir" and a configuration of "path" with the value "/home/developer/maildir1". Resources are stored in configuration files, so all this does is write to some config files.

akonadi_cmd list resource

By listing all available resources we can find the identifier of the resource that was automatically assigned.

akonadi_cmd synchronize org.kde.maildir.instance1

This triggers the actual synchronization in the resource, and from there on the data is available.

akonadi_cmd list folder org.kde.maildir.instance1

This will get you all folders that are in the resource.

akonadi_cmd remove resource org.kde.maildir.instance1

And this will finally remove all traces of the resource instance.

Implementation

What’s perhaps interesting from the implementation side is that the command line tool uses exactly the same models that we also use in Kube.

    Akonadi2::Query query;
    query.resources << res.toLatin1();

    auto model = loadModel(type, query);
    QObject::connect(model.data(), &QAbstractItemModel::rowsInserted, [model](const QModelIndex &index, int start, int end) {
        for (int i = start; i <= end; i++) {
            std::cout << "\tRow " << model->rowCount() << ":\t ";
            std::cout << "\t" << model->data(model->index(i, 0, index), Akonadi2::Store::DomainObjectBaseRole).value<Akonadi2::ApplicationDomain::ApplicationDomainType::Ptr>()->identifier().toStdString() << "\t";
            for (int col = 0; col < model->columnCount(QModelIndex()); col++) {
                std::cout << "\t|" << model->data(model->index(i, col, index)).toString().toStdString();
            }
            std::cout << std::endl;
        }
    });
    QObject::connect(model.data(), &QAbstractItemModel::dataChanged, [model, &app](const QModelIndex &, const QModelIndex &, const QVector<int> &roles) {
        if (roles.contains(Akonadi2::Store::ChildrenFetchedRole)) {
            app.quit();
        }
    });
    if (!model->data(QModelIndex(), Akonadi2::Store::ChildrenFetchedRole).toBool()) {
        return app.exec();
    }

This is possible because we’re using QAbstractItemModel as an asynchronous result set. While one could argue whether that is the best API for an application that is essentially synchronous, it still shows that the API is useful for a variety of applications.

And last but not least, since I figured out how to record animated gifs, the above procedure in a live demo 😉

akonadicmd


Initial sync of Akonadi from the command line

by Sandro Knauß in Decrypted mind at 10:55, Friday, 18 December

When you start with Kontact you have to wait until the first sync of your mails with the IMAP or Kolab server is done. This is very annoying, because the first impression is that Kontact is slow. So why not start this first sync with a script, so that the data is already available when the user starts Kontact for the first time?

1. Setup akonadi & kontact

We need to add the required config files to a new user home. This is simply copying config files to the new user home; we just need to replace the username, email address and password. Okay, that sounds quite easy, doesn't it? Oh wait - the password must be stored inside KWallet. KWallet can be accessed from the command line with kwalletcli. Unfortunately we can only use KWallet files that are not encrypted with a password, because there is no way to enter the password with kwalletcli. Maybe pam-kwallet would be a solution; for Plasma 5 there is an official component for this, kwallet-pam, but I haven't tested it yet.

As an alternative to copying files around, we could have used the Kiosk system from KDE. With that you are able to preseed the configuration files for a user and additionally have the possibility to roll out changes, e.g. if the server address changes. But for a smaller setup this is kind of overkill.

2. Start needed services

For starting a sync, we first need Akonadi running, and Akonadi depends on a running DBus and kwalletd. KWallet refuses to start without a running X server and is not happy with just Xvfb.

3. Triggering the sync via DBus

Akonadi has a great DBus interface, so it is quite easy to trigger a sync and track when the sync ends:

import gobject  
import dbus  
from dbus.mainloop.glib import DBusGMainLoop

def status(status, msg):  
    if status == 0:
        gobject.timeout_add(1, loop.quit)

DBusGMainLoop(set_as_default=True)  
session_bus = dbus.SessionBus()

proxy = session_bus.get_object('org.freedesktop.Akonadi.Resource.akonadi_kolab_resource_0', "/")  
proxy.connect_to_signal("status", status, dbus_interface="org.freedesktop.Akonadi.Agent.Status")  
proxy.synchronize(dbus_interface='org.freedesktop.Akonadi.Resource')

loop = gobject.MainLoop()  
loop.run()  

The status function gets all status updates, and status == 0 indicates the end of a sync.
Other than that it is just getting the SessionBus, triggering the synchronize method and waiting until the loop ends.

4. Glue everything together

With all parts in place, it can be glued together into a nice script. As the language I use Python; together with some syntactic sugar it is quite small:

config.setupConfigDirs(home, fullName, email, name, uid, password)

with DBusServer():  
        logging.info("set kwallet password")
        kwalletbinding.kwallet_put("imap", "akonadi_kolab_resource_0rc", password)

        with akonadi.AkonadiServer(open("akonadi.log", "w"), open("akonadi.err", "w")):
            logging.info("trigger fullSync")
            akonadi.fullSync(akonadi_resource_name)

First we create the config files. Once they are in place we need a DBus server; if it is not available it is started (and stopped after leaving the with statement). Then the password is inserted into KWallet and the Akonadi server is started. Once Akonadi is running, the fullSync is triggered.

You can find the whole script at github:hefee/akonadi-initalsync

5. Testing

After having a nice script, the last bit is that we want to test it. To have a fully controlled environment we use Docker images for that: one image for the server and one with this script. As a base we use Ubuntu 12.04 and our OBS builds of Kontact.

Because we already started with Docker images for other parts of the deployment of Kontact, I added them to the known repository github:/cmollekopf/docker

ipython ./automatedupdate/build.py #build kolabclient/percise  
python testenv.py start set1  #start the kolab server (set1)

start the sync:

% ipython automatedupdate/run.py
developer:/work$ cd akonadi-initalsync/  
developer:/work/akonadi-initalsync$ ./test.sh  
+ export QT_GRAPHICSSYSTEM=native
+ QT_GRAPHICSSYSTEM=native
+ export QT_X11_NO_MITSHM=1
+ QT_X11_NO_MITSHM=1
+ sudo setfacl -m user:developer:rw /dev/dri/card0
+ export KDE_DEBUG=1
+ KDE_DEBUG=1
+ USER=doe
+ PASSWORD=Welcome2KolabSystems
+ sleep 2
+ sudo /usr/sbin/mysqld
151215 14:17:25 [Warning] Using unique option prefix key_buffer instead of key_buffer_size is deprecated and will be removed in a future release. Please use the full name instead.  
151215 14:17:25 [Note] /usr/sbin/mysqld (mysqld 5.5.46-0ubuntu0.12.04.2) starting as process 16 ...  
+ sudo mysql --defaults-extra-file=/etc/mysql/debian.cnf
+ ./initalsync.py 'John Doe' doe@example.com doe Welcome2KolabSystems akonadi_kolab_resource_0
INFO:root:setup configs  
INFO:DBusServer:starting dbus...  
INFO:root:set kwallet password  
INFO:Akonadi:starting akonadi ...  
INFO:root:trigger fullSync  
INFO:AkonadiSync:fullSync for akonadi_kolab_resource_0 started  
INFO:AkonadiSync:fullSync for akonadi_kolab_resource_0 was successfull.  
INFO:Akonadi:stopping akonadi ...  
INFO:DBusServer:stopping dbus...  

To be honest we need some more quirks, because we need to set up the X11 forwarding into Docker. And in this case we also want to run one MySQL server for all users and not a MySQL server per user; that's why we also need to start MySQL by hand and add a database that can be used by Akonadi. The real syncing begins with the line:

./initalsync.py 'John Doe' doe@example.com doe Welcome2KolabSystems akonadi_kolab_resource_0

Kolab @ Fosdem

by Aaron Seigo in aseigo at 23:21, Tuesday, 15 December

Kolab will once again have a booth at FOSDEM, that fantastic event held in Brussels at the end of January. Several Kolab developers and deployers (and generally fun people) will be there wandering the halls, talking about Kolab and looking to connect with and learn from all the other fantastic people and projects who are there. It's going to be a great event! Be sure to find us if you are attending ... and come prepared for awesome :)

Kolab: Bringing All Of Us Together, Part 1

by Aaron Seigo in aseigo at 18:09, Tuesday, 15 December

Kolab: Bringing All Of Us Together, Part 1

The new year is looming, so this seems like a good time to share some of what we are thinking about at Kolab Systems when it comes to the Kolab ecosystem. As a result of careful examination of the Kolab ecosystem, we put together some priority adjustments to make. These include:

  • Kolab release process improvements
  • kolab.org reboot
  • partner enablement

Each of these is a big, exciting and fundamental improvement in its own right, so I will cover each in its own blog entry in which I will attempt to explain the challenges we see and what we are doing to address them. First up is the Kolab release process.

tl;dr

Kolab Enterprise as a software product is merging with the Kolab community edition. There will be a single Kolab software product open to all, because everyone needs Kolab, not only "enterprise" customers! Both the version selection and development process around Kolab will be substantially simplified as a direct result. Read on for the details!

Kolab Three Ways

Kolab currently comes in a few different "flavors": what you find in the source repositories and build servers; the Kolab Community Edition releases; and Kolab Enterprise. What is the difference?

Well, the code and raw package repos are essentially "serve yourself": you download it, you put it together. Not so easy. The Community Edition releases are easy to install and were being released every six months, but support is left to community members and there is no long-term release strategy for them. By contrast, you can purchase commercial support and services for Kolab Enterprise, and those releases are supported for a minimum of 5 years. The operating system platforms supported by each of these variants also vary, and moving between the Community Edition and Enterprise could at times be a bit of effort.

Yet they all come from the same source, with features and fixes flowing between them. However, the flow of those fixes and where features landed was not standardized. Sometimes features would land in Enterprise first and then debut in the Community Edition. Often it was the other way around. Where fixes would appear was similarly decided on a case-by-case basis.

The complex relationship can be seen in the timeline below:

Kolab: Bringing All Of Us Together, Part 1

This has resulted in duplication of effort, confusion over which edition to use when and where, and not enough predictability. We've been thinking about this situation quite deeply over the past months and have devised a plan that we feel improves the situation across the board.

A Better Plan Emerges

Starting in 2016 we will focus our efforts on a single Kolab product release that combines the Q/A of Kolab Enterprise with the availability of the Kolab community edition. Professional services and support, optimized operating system / platform integration and long term updates will all remain available from Kolab Systems, but everyone will be able to install and use the same Kolab packages.

It also means that both fixes and features will land in a consistent fashion. Development will be focused on the master branches of the Kolab source repositories, which have been hosted for a while now with Phabricator sporting open access to sprints and work boards. With our eyes and hands firmly on the main branches of the repositories, we will focus on bringing continuous delivery to them for increased quality.

Fixes and features alike will all flow into Kolab releases, bringing long desired predictability to that process, and Kolab Systems will continue to provide a minimum of 5 years of support for Kolab customers. This will also have the nice effect of making it easier for us to bring Kolab improvements live to Kolab Now.

These universal Kolab releases will be made available for all supported operating systems, including ones that the broader community elects to build packages for. This opens the way for the "enterprise-grade" Kolab packages on all operating systems, rather than "just" the community editions.

You can see how much clarity and simplicity this will bring to Kolab releases by comparing the diagram below with the previous one:

[diagram: the simplified Kolab release process]

You can read more about Kolab release and development at Jeroen van Meeuwen's blog: The Evolution of Kolab Development and Short- vs. Long-term Commitments.

eimap: because what the world needs is another IMAP client

by Aaron Seigo in aseigo at 12:23, Friday, 11 December


Erlang is a very nice fit for many of the requirements various components in Kolab have ... perhaps one of these days I'll write in more detail about why that is. For now, suffice it to say that we've started using Erlang for some of the new server-side components in Kolab.

The most common application protocol spoken in Kolab is IMAP. Unfortunately there was no maintained, functional IMAP client written in Erlang that we could find which met our needs. So, apparently the world needed another IMAP client, this time written in Erlang. (Note: When I say "IMAP client" I do not mean a GUI for users, but rather something that implements the client-side of the IMAP protocol: connect to a server, authenticate, run commands, etc.)

So say hello to eimap.

Usage Overview

eimap is implemented as a finite state machine that is meant to run in its own Erlang process. Each instance of an eimap represents a single connection to an IMAP server which can be used by one or more other processes to connect, authenticate and run commands against the server.

The public API of eimap consists mostly of requests that queue commands to be sent to the server. These functions take the process ID (PID) to send the result of the command to, and an optional response token that will accompany the response. Commands in the queue are processed in sequence, and the server responses are parsed into nice normal Erlang terms, so you do not need to concern yourself with the details of the IMAP message protocols. Details like selecting folders before accessing them or setting up TLS are handled automagically by eimap by inserting the necessary commands into the queue for the user.

Here is a short example of using eimap:

ServerConfig = #eimap_server_config{ host = "192.168.56.101", port = 143, tls = false },
{ ok, Conn } = eimap:start_link(ServerConfig),
eimap:login(Conn, self(), undefined, "doe", "doe"),
eimap:get_folder_metadata(Conn, self(), folder_metadata, "*", ["/shared/vendor/kolab/folder-type"]),
eimap:logout(Conn, self(), undefined),
eimap:connect(Conn).

It starts an eimap process, queues up a login, getmetadata and logout command, then connects. The connect call could have come first, but it doesn't matter. When the connection is established the command queue is processed. eimap exits automatically when the connection closes, making cleanup nice and easy. You can also see the response routing in each of the command functions, e.g. self(), folder_metadata, which means that the results of that GETMETADATA IMAP command will be sent to this process as { folder_metadata, ParsedResponse } once completed. This is typically handled in a handle_info callback for gen_server processes (and similar).
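
For illustration, here is a minimal sketch of how a calling gen_server might receive that routed response. The module fragment is hypothetical (it is not part of eimap) and only shows the handle_info clauses such a caller would need:

handle_info({ folder_metadata, ParsedResponse }, State) ->
    %% folder_metadata is the response token passed to the get_folder_metadata call above;
    %% ParsedResponse is already a plain Erlang term, so no IMAP parsing is needed here
    io:format("Folder metadata arrived: ~p~n", [ParsedResponse]),
    { noreply, State };
handle_info(_Other, State) ->
    %% ignore any other messages routed to this process
    { noreply, State }.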

Internally, each IMAP command is implemented in its own module which contains at least a new and a parse function. The new function creates the string to send to the server for a given command, and parse does what it says, returning a tuple that tells eimap whether it is completed, needs to consume more data from the server, or has encountered an error. This allows simple commands to be implemented very quickly, e.g.:

-module(eimap_command_compress).
-behavior(eimap_command).
-export([new/1, parse/2]).
new(_Args) -> <<"COMPRESS DEFLATE">>.
parse(Data, Tag) -> formulate_response(eimap_utils:check_response_for_failure(Data, Tag)).
formulate_response(ok) -> compression_active;
formulate_response({ _, Reason }) -> { error, Reason }.

There is also a "passthrough" mode which allows a user to treat eimap as a direct pipe to the IMAP server, bypassing the whole command queueing mechanism. However, if commands are queued, eimap drops out of passthrough to run those commands and process their responses before returning to passthrough.

It is not a complicated design by any means, and that's a virtue. :)

Plans and more plans!

As we write more Erlang code for use with Kolab and IMAP in general, eimap will be increasingly used and useful. The audit trail system for groupware objects needs some very basic IMAP functionality; the Guam IMAP proxy/filter heavily relies on this; and future projects such as a scalable JMAP proxy will also be needing it. So we will have a number of consumers for eimap as time goes on.

While the core design is mostly in place, there are quite a few commands that still need to be implemented, which you can see on the eimap workboard. Writing commands is quite straightforward as each goes into its own module in the src/commands directory and is developed with a corresponding test in the test/ directory; you don't even need an IMAP server, just the lovely (haha) relevant IMAP RFC. Once complete, add a function to eimap itself to queue the command, and eimap handles the rest for you from there. Easy, peasy.
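
To give a feel for what such a test might look like, here is a hypothetical eunit sketch for the COMPRESS module shown earlier. The module name, tag and server response line are made up for illustration, and it assumes eimap_utils:check_response_for_failure/2 treats a tagged OK line as success:

-module(eimap_command_compress_tests).
-include_lib("eunit/include/eunit.hrl").

new_test() ->
    %% the command string sent to the server does not depend on the arguments
    ?assertEqual(<<"COMPRESS DEFLATE">>, eimap_command_compress:new(none)).

parse_ok_test() ->
    %% a fabricated tagged OK response; no running IMAP server required
    Tag = <<"EG0001">>,
    Response = <<"EG0001 OK DEFLATE active\r\n">>,
    ?assertEqual(compression_active, eimap_command_compress:parse(Response, Tag)).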

I've personally been adding the commands that I have immediate use for, and will be generally adding the rest over time. Participation, feedback and patches are welcome!

guam: an IMAP session filter/proxy

by Aaron Seigo in aseigo at 21:59, Thursday, 10 December


These days, the bulk of my work at Kolab Systems does not involve writing code. I have been spending quite a bit of time on the business side of things (and we have some genuinely exciting things coming in 2016), customer and partner interactions, as well as on higher-level technical design and consideration. So I get to roll around Roundcube Next, Kube (an Akonadi2-based client for desktop and mobile ... but more on that another time), Kolab server hardware pre-installs .. and that's all good and fun. Still, I do manage to write a bit of code most weeks, and one of the projects I've been working on lately is an IMAP filter/proxy called Guam.

I've been wanting to blog about it for a while, and as we are about to roll version 0.4 I figured now is as good a time as any.

The Basics of Guam

Guam provides a simple framework to alter data being passed between an IMAP client and server in real time. This "interference" is done using sets of rules. Each port that Guam listens on has a set of rules with their own order and configuration. Rules start out passive and, based on the data flow, may elect to become active. Once active, a rule gets to peek at the data on the wire and may take whatever actions it wishes, including altering that data before it gets sent on. In this way rules may alter client messages as well as server responses; they may also record or perform other out-of-band tasks. The imagination is the limit, really.

Use Cases

The first practical use case Guam is fulfilling is selective hiding of folders from IMAP clients. Kolab stores groupware data such as calendars, notes, tags and more in plain old IMAP folders. Clients that are not aware of this and connect to a Kolab server over IMAP get shown all those folders. I've even heard of users who saw these folders and deleted them thinking they were not supposed to be there, only to then wonder where the heck their calendars went. ;)

So there is a simple rule called filter_groupware_folders that tries to detect whether the client is Kolab-aware by looking at the ID string it sends; if it does not look like a Kolab client, the rule goes about filtering out those groupware folders. Kolab continues on as always, and IMAP clients do as well but simply do not see those other special folders. Problem solved.

But Guam can be used for much more than this simple, if rather practical, use case. Rules could be written that prevent downloading of attachments from mobile devices, or accessing messages marked as top-secret when being accessed from outside an organization's firewall. Or they could limit message listings to just the most recent or unread ones and provide access to that as a special service on a non-standard port. They could round-robin between IMAP servers, or direct different users to different IMAP servers transparently. And all of these can be chained in whichever order suits you.

The Essential Workings

The two most important things to configure in Guam are the IMAP servers to be accessed and the ports to accept client connections on.


Listener configuration includes the interface and port to listen on, TLS settings, which IMAP backend to use and, of course, the rule set to apply to traffic. IMAP server configuration includes the usual host/port and TLS preferences, and the listeners refer to them by name. It's really not very complicated. :)

Rules are implemented in Guam as Erlang modules which implement a simple behavior (Erlangish for "interface"): new/1, applies/3, apply_to_client_message/3, apply_to_server_message/3, and optionally imap_data/3. The name of the module defines the name of the rule in the config: a rule named foobar would be implemented in a module named kolab_guam_rule_foobar.
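
As a rough sketch of what that looks like, here is a do-nothing rule following those callback names. This is not the actual behaviour contract: the argument names, the behaviour module name and the pass-through return shapes are assumptions for illustration only.

-module(kolab_guam_rule_noop).
-behavior(kolab_guam_rule).  %% behaviour name assumed from the module naming convention
-export([new/1, applies/3, apply_to_client_message/3, apply_to_server_message/3]).

%% called once per connection with the rule's configuration from the listener
new(_Config) -> #{}.

%% stay passive: never elect to become active for this session
applies(_ClientData, _ServerData, State) -> { false, State }.

%% once active, these get to peek at, and possibly rewrite, the traffic;
%% here they simply pass the data through untouched
apply_to_client_message(Data, _Token, State) -> { Data, State }.
apply_to_server_message(Data, _Token, State) -> { Data, State }.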

... and for a quick view that's about all there is to it!

Under the hood

I chose to write it in Erlang because the use case is pretty much perfect for it: lots of simultaneous connections that must be kept separate from one another. Failure in any single connection (including a crash of some sort in the code) does not interfere with any other connection; everything is asynchronous while remaining simple (the core application is a bit under 500 lines of code); and Erlang's VM scales very well as you add cores. In other words: stability, efficiency, simplicity.

Behind the scenes, Guam uses an Erlang IMAP client library that I've been working on called eimap. I won't get any awards for creativity in the naming of it, certainly, but "erlang IMAP" does what it says on the box: it's IMAP in Erlang. That code base is rather larger than the Guam one, and is quite interesting in its own right: finite state machines! passthrough modes! commands as independent modules! async and multi-process! ooh! aaaaaah! sparkles! eimap is a very easy project to get your fingers dirty with (new commands can be implemented in well under 6 lines of code) and will be used by a number of applications in future. More in the next blog entry about that, however.

In the meantime, if you want to get involved, check out the git repo, read the docs in there and take a look at the Guam workboard.

This week in Kolab-Tech

by Mads Petersen in The Kolaborator at 12:41, Saturday, 03 October

It's always fun when your remote colleagues come to visit the office. It helps communication to put a face to the name in the chat client - and to the voice on the phone.

Giles, our creative director, was visiting from London the first days of the week, which made a lot of the work switch context to be about design and usability. As Giles is fairly new in the company we also spent some time discussing a few of our internal processes and procedures with him. It is great to have him on board, bringing his broad experience to an area we had not explored much before.

The server development team kept themselves busy with a few Roundcube issues, and with a few issues that we had in the new KolabNow dashboard. Additionally work was being done on the Roundcube-Next POC. We hope soon to have something to show on that front.

On the desktop side, we finalized the sprint 201539 and delivered a new version of Kontact on Windows and on Linux. The Windows installer is named Kontact-E14-2015-10-02-12-35.exe, and as always it is available on our mirror.

This Sunday our datacenter is doing some maintenance. They do not expect any interruption, but be prepared for a bit of connection trouble on Sunday night.

On to the future..

Last week in Kolab-Tech

by Mads Petersen in The Kolaborator at 15:51, Monday, 28 September

As we started the week already the previous Friday night (by shutting off the KolabNow cockpit and starting the big migration) it turned out to be a week all about (the bass) KolabNow.

Over the weekend we made a series of improvements to KolabNow that will improve the overall user experience with:

  • Better performance
  • More stable environment
  • Less downtime
  • Our ability to update the environment with a minimum of interruption for end users.

After the update, there were of course a few issues that needed to be tweaked, but details aside, the weekend was a big success. Thanks to the OPS staff for doing the hard work.

One thing we changed with this update was the way users get notified when their accounts are suspended. Before this weekend, users with suspended accounts would still be able to log in and receive mail on KolabNow. After this update, users with suspended accounts will not be able to log in. This of course led to a small breeze of users with suspended accounts contacting support with requests for re-enabling their accounts.

On the development side we were making progress on two fronts:

  • We are getting close to the end of the list of urgent Kontact defects. The second week of this sprint should get rid of that list. Our Desktop people will then get time to look forward again, and look at the next generation of Kolab Desktop Client.
  • We started experimenting with one (of perhaps more to come) POC for Roundcube-Next. We now need to start talking about the technologies and ideas behind that new product. More to follow about that.

Thank you for your interest - if you are still reading. :-)

This week in Kolab Tech

by Mads Petersen in The Kolaborator at 15:48, Friday, 18 September

Another week passed by, super fast; as we know, time flies when you are having fun.

The client developers are on a roll. They have been hacking away on a defined bundle of issues in KOrganizer and Zanshin which have been annoying for users and have prevented some organizations from adopting the desktop client. This work will proceed during the next sprint - and most probably the sprint after that.

One of our collaborative editing developers took part in the ODF plugfest. According to his report, a lot of good experiences were had, a lot of contacts were made, and there was good feedback on the plans for the deep Kolab/Manticore integration.

Our OPS people were busy most of the week with preparations for this weekend's big KolabNow update. This is a needed overhaul of our background systems and software. As we now have the new hardware in place, and it has been running its test circles around itself, we can finally start applying many of the improvements that we have prepared for some time. This weekend is very much a backend update; but an important one, which will make it easier for us to apply more changes in the future with a minimum of interruption.

All y'all have a nice weekend now..

This week in Kolab tech..

by Mads Petersen in The Kolaborator at 11:01, Friday, 11 September

The week in development:

  • Our desktop people spent time in Randa, a small town in the Swiss mountains, where they discussed KDE-related issues and hacked away together with like-minded people. Most probably they also got a chance or two for some social interaction.
  • Work continued on the Copenhagen (MAPI integration) project. Whereas it was easy to spot progress in the beginning, the details around folder permissions and configuration objects that are being worked out now are not as visible.
  • The Guam project (the scalable IMAP session and payload filter) is moving along as planned. The filter handling engine is in place. It is now being implanted into the main body of the system, and then work on the actual filter formulation can be started.
  • A few defects in Kolab on UCS were discovered in the beginning of the week. Those were investigated and are getting fixed as I am writing this. Hopefully we will be able to push a new package for this product early next week.

In other news: The engineering people are working hard to prepare the backend systems for some interesting upcoming KolabNow changes. There will be more information about those changes in other more appropriate places.

The only thing left is to wish everyone a very nice weekend.

Last week @ Kolab Tech

by Mads Petersen in The Kolaborator at 14:00, Monday, 07 September

After a summer with ins and outs of the super hot Zurich office, this week finally brought some rain and a little chill. I can't wait for the snow to start.

The week started early and at full speed, as we had our hardware vendor visiting on Monday to replace a defective hypervisor. I sleep better at night knowing that everything is in order again.

A few of us jumped on a bus to the fair city of Munich to meet the techies at IT@M for a Kontact workshop; 3 days of intense desktop client talks, discussions and experiments. It was inspiring to see the work groups get together to resolve issues, do packaging on the LiMux platform and prepare pre-deployment configurations. A big value of the workshop was the opportunity to collect and consolidate a lot of end user experience. Luckily we also got time for a little foretaste of the special Wiesn bier.

Aside from discussing the desktop clients, creating packages and listening to use cases, Christian finally found and resolved the issue that for a while has prevented me from installing the latest Kontact on my fedora 22. Thanks Christian!

Kontact and GnuPG under Windows

by Sandro Knauß in Decrypted mind at 23:53, Wednesday, 02 September

Kontact has, in contrast to Thunderbird, integrated crypto support (OpenPGP and S/MIME) out of the box. That means on Linux you can simply start Kontact and read encrypted mails (if you have already created keys). After you select your crypto keys, you can immediately start writing encrypted mails. With that great user experience I never needed to dig further into the crypto stack.

[screenshots: selecting crypto keys, steps 1 and 2]

But on Windows GnuPG is not installed by default, so I needed to dig into the whole world of crypto layers that sit between Kontact and the part that actually does the de-/encryption.


Crypto Stack

Kontact uses a number of libraries that the team has written around GPGME.

The lowest-level one is gpgmepp, which is an object-oriented wrapper for gpgme. This lets us avoid having to write C code in KMail. Then we have libkleo, a library built on top of gpgmepp that KMail uses to trigger de-/encryption in the lower levels. GPGME is the only required dependency to compile Kontact with crypto support.

But this is not enough to send and receive encrypted mail with Kontact on Windows, as I mentioned earlier. There are still runtime dependencies that we need to have in place. Fortunately the runtime crypto stack is already packaged by the GPG4Win team. Simply installing it is still not enough to have crypto support, though. With GPG4Win it is possible to select OpenPGP keys and to create and read encrypted mails, but unfortunately it doesn't work with S/MIME.

So I had to dig further into how GnuPG actually works.

OpenPGP is handled by the gpg binary and for S/MIME we have gpgsm. Both are called directly from GPGME, using libassuan. Both applications then talk to gpg-agent, which is actually the only program that interacts with the key data. Both applications can also be used from the command line, so it was easy to verify that they were working and that there were no problems with the GnuPG setup.

So first we started by creating keys (gpg --gen-key and gpgsm --gen-key) and then tested further what works with GPG4Win and what does not. We found a bug in the GnuPG version in use, but that one was already fixed in a newer version. Still Kontact didn't want to communicate with GPG4Win. The reason was a wrong default path that prevented gpgme from finding gpgsm. With that fixed, we now have a working crypto stack under Windows.

But to be honest, there are more applications involved in a working crypto stack. First of all we need gpgconf and gpgme-w32-spawn to be available in the Kontact directory. gpgconf helps gpgme find gpg and gpgsm and is responsible for modifying the content of .gnupg in the user's home directory. Additionally, it informs you about changes in config files. gpgme-w32-spawn is responsible for creating the other needed processes.

To have a UI where you can enter your password you need pinentry. S/MIME needs another agent that does the CRL / OCSP checks; this is done by dirmngr. In GnuPG 2.1, dirmngr is the only component that makes connections to the outside, so every request that requires the Internet goes through dirmngr.

This is, in short, the crypto stack that needs to work together to give you working encrypted mail support.

We are happy that we now have a fully working Kontact under Windows (again!). There are rumours that Kontact also worked under Windows with crypto support before, but unfortunately when we started, the encryption part was not working.

This work has been done in the kolabsys branch, which is based on KDE Libraries 4. The next steps are to merge the changes over to make sure that the current master branch of Kontact, which uses KDE Frameworks 5, works as well.

Randa

Coming up next week is the yearly Randa meeting where we will have the chance to sit together for a week and work on the future of Kontact. These meetings help tremendously in injecting momentum into the project, and we have a variety of topics to cover to direct the development for the time to come (and of course a lot of stuff to actively hack on). If you’d like to contribute to that you can help us with some funding. Much appreciated!


Bringing Akonadi Next up to speed

by cmollekopf in Finding New Ways… at 12:10, Saturday, 29 August

It’s been a while since the last progress report on Akonadi Next. I’ve since spent a lot of time refactoring the existing codebase, pushing it a little further, and refactoring it again, to make sure the codebase remains as clean as possible. The result is that implementing a simple resource now only takes a couple of template instantiations, apart from the code that interacts with the datasource (e.g. your IMAP server), which obviously can’t be provided for the resource.

Once I was happy with that, I looked a bit into performance, to ensure the goals are actually reachable. For write speed, operations need to be batched into database transactions; this is what allows the db to write up to 50’000 values per second on my system (a 4-year-old laptop with an SSD and an i7). After implementing the batch processing, and without looking into any other bottlenecks, it can now process ~4’000 values per second, including updating ten secondary indexes. This is not yet ideal given what we should be able to reach, but it does mean that a sync of 40’000 emails would be done within 10s, which is not bad already. Because commands first enter a persistent command queue, pulling the data offline actually completes even faster, but that command queue afterwards needs to be processed for the data to become available to clients, and all of that together makes up the actual write speed.

On the reading side we’re at around 50’000 values per second, with the read time growing linearly with the number of messages read. Again far from ideal, which would be around 400’000 values per second for a single db (excluding index lookups), but still good enough to load large email folders in a matter of a second.

I implemented the benchmarks to get these numbers, so thanks to HAWD we should be able to track progress over time, once I set up a system to run the benchmarks regularly.

With performance in an acceptable state, I will shift my focus to the revisioned store, which is a prerequisite for the resource writeback to the source. After all, performance is supposed to be a desirable side-effect, and simplicity and ease of use the goal.

Randa

Coming up next week is the yearly Randa meeting where we will have the chance to sit together for a week and work on the future of Kontact. These meetings help tremendously in injecting momentum into the project, and we have a variety of topics to cover to direct the development for the time to come (and of course a lot of stuff to actively hack on). If you’d like to contribute to that you can help us with some funding. Much appreciated!


Kolab Now was first launched in January 2013, and we were anxious to find out: if someone offered a public cloud service for people that put their privacy and security first, a service that would not just re-sell someone else’s platform with some added marketing but did things right, would there be a demand for it? Would people choose to pay with money instead of their privacy and data? These past two and a half years have provided a very clear answer. Demand for a secure and private collaboration platform has grown in ways we could have only hoped for.

To stay ahead of demand we have undertaken a significant upgrade to our hosted solution that will allow us to provide reliable service to our community of users, both today and in the years to come. This is the most significant set of changes we’ve ever made to the service, and it has been months in the making. We are very excited to unveil these improvements to the world as we complete the roll-out in the coming weeks.

From a revamped and simplified sign-up process to a more robust directory service design, the improvements will be visible to new and existing users alike. Everyone can look forward to a significantly more robust and reliable service, along with faster turnaround times on technical issues. We have even managed to add some long-sought improvements many of you have been asking for.

The road travelled

Assumptions are the root of all evil. Yet in the absence of knowledge of the future, sometimes informed assumptions need to be made. And sometimes the world just changes. It was February 2013 when MyKolab was launched into public beta.

Our expectation was that a public cloud service oriented towards full business collaboration with a focus on privacy and security would primarily attract small and medium enterprises of between 10 and 200 users. Others would largely elect to use the available standard domains. So we expected most domains to be in the 30-user realm, plus a handful of very large ones.

That had implications for the way the directory service was set up.

In order to provide the strongest possible insulation between tenants, each domain would exist in its own zone within the directory service. You can think of this as dedicated installations on shared infrastructure, instead of the single-domain public clouds that are the default in most cases. Or, to use a slightly less technical analogy, as the difference between terraced houses and apartments in a large apartment block.

So we expected some moderate growth, for which we planned to deploy some older hardware to provide adequate redundancy and resources, and to serve as a steady showcase for how to deploy Kolab for the needs of Application and Internet Service Providers (ASP/ISP).

Literally on the very day when we carried that hardware into the data centre did Edward Snowden and his revelations become visible to the world. It is a common quip that assumptions and strategies usually do not outlive their contact with reality. Ours did not even make it that far.

After nice, steady growth during the early months, MyKolab.com took us on a wild ride.

Our operations team managed to work miracles with the old hardware, in ways that often made me think this would be interesting learning material for future administrators. But efficiency only gets you so far.

Within a couple of months however we ended up replacing it in its entirety. And to the largest extent all of this was happening without disruption to the production systems. New hardware was installed, services switched over, old hardware removed, and our team also managed to add a couple of urgently sought features to Kolab and deploy them onto MyKolab.com as well.

What we did not manage to make time for was re-working the directory service in order to adjust some of the underlying assumptions to reality. Especially the number of domains in relation to the number of users ended up dramatically different from what we initially expected. The result is a situation where the directory service has become the bottleneck for the entire installation – with a complete restart easily taking in the realm of 45 minutes.

In addition, that degree of separation translated into more restrictions on sharing data with other users, sometimes to the extent that users felt this was the lack of a feature, not a feature in and of itself.

Re-designing the directory service however carries implications for the entire service structure, including also the user self-administration software and much more. And you want to be able to deploy this within a reasonable time interval and ensure the service comes back up better than before for all users.

On the highway to future improvements

So there is the re-design, the adaptation of all components, the testing, the migration planning, the migration testing and ultimately also the actual roll-out of the changes. That’s a lot of work. Most of which has been done by this point in time.

The last remaining piece of the puzzle was to increase hardware capacity in order to ensure there is enough reserve to build up an entire new installation next to existing production systems, and then switch over, confirm successful switching, and then ultimately retire the old setup.

That hardware was installed last week.

So now the roll-out process will go through the stages and likely complete some time in September. That’s also the time when we can finally start adding some features we’ve been holding back to ensure we can re-adjust our assumptions to the realities we encountered.

For all users of Kolab Now that means you can look forward to a much improved service resilience and robustness, along with even faster turnaround times on technical issues, and an autumn of added features, including some long-sought improvements many of you have been asking for.

Stay tuned.

Akonadi with a remote database

by Aaron Seigo in aseigo at 13:02, Friday, 14 August


The Kontact groupware client from the KDE community, which also happens to be the premier desktop client for Kolab, is "just" a user interface (though that seriously undersells its capabilities, as it still does a lot in that UI), and it uses a system service to actually manage the groupware data. In fact, that same service is used by applications such as KDE Plasma to access data; this is how calendar events end up being shown in the desktop clock's calendar for instance. That service (as you might already know) is called Akonadi.

In its current design, Akonadi uses an external[1] database server to store much of its data[2]. The default configuration is a locally-running MySQL server that Akonadi itself starts and manages. This can be undesirable in some cases, such as multi-user systems where running a separate MySQL instance for each and every user may be more overhead than desired, or when you already have a MySQL instance running on the system for other applications.

While looking into some improvements for a corporate installation of Kontact where the systems all have user directories hosted on a server and mounted using NFS, I tried out a few different Akonadi tricks. One of those tricks was using a remote MySQL server. This would allow this particular installation to move Akonadi's database-related I/O load off the NFS server and share the MySQL instance between all their users. For a larger number of users this could be a pretty significant win.

How to accomplish this isn't well documented, unfortunately, at least not that I could readily find. Thankfully I can read the source code and work with some of the best Akonadi and Kontact developers currently working on it. I will be improving the documentation around this in the coming weeks, though.[3] Until then, here is how I went about it.

Configuring Akonadi

Note: as Dan points out in the comments below, this is only safe to do with a "fresh" Akonadi that has no data thus far. You'll want to first clean out (and possibly backup) all the data in $XDG_DATA_HOME/akonadi as well as be prepared to do some cleaning in the Kontact application configs that reference Akonadi entities by id. (Another practice we aim to light on fire and burn in Akonadi Next.)

First, you want Akonadi to not be running. Close Kontact if it is running and then run akonadictl stop. This can take a little while, even though that command returns immediately. To ensure Akonadi actually is stopped run akonadictl status and make sure it says that it is, indeed, stopped.

Next, start the Akonadi control panel. The command line approach is kcmshell4 kcm_akonadi_resources, but you can also open the command runner in Plasma (Alt+F2 or Alt+Space, depending) and type in akonadi to get something like this:

[screenshot: the command runner listing Akonadi Configuration]

It's the first item listed, at least on my laptop: Akonadi Configuration. You can also go the "slower" route and open System Settings and either search for akonadi or go right into the Personal Information panel. No matter how you go about it, you'll see something like this:

[screenshot: the Akonadi configuration panel]

Switch to the Akonadi Server Configuration tab and disable the Use internal MySQL server option. Then you can go about entering a hostname. This would be localhost for MySQL[7] running on the same machine, or an IP address or domain name that is reachable from the system. You will also need to supply a database name[4] (which defaults to akonadi), username[5] and password. Clear the Options line of text, and hit the ol' OK button. That's it.

[screenshot: the Akonadi Server Configuration tab]

Assuming your MySQL is up and running and the username and password you supplied are correct, Akonadi will now be using a remote MySQL database. Yes, it is that easy.

Caveats

In this configuration, the limitations are twofold:

  • network quality
  • local configuration is now tied to that database

Network quality is the biggest factor. Akonadi can send a lot of database queries and each of those results in a network roundtrip. If your network latency for a roundtrip is 20ms, for instance, then you are pretty well hard-limited to 50 queries per second. Given that Akonadi can issue several queries for an item during initial sync, this can result in quite slow initial synchronization performance on networks with high latency.[6]

Past latency, bandwidth is the other important factor. If you have lots of users or just tons of big mails, consider the network traffic incurred in sending that data around the network.

For a typical, even semi-modern, network in an office environment, however, the network should not be a big issue in terms of either latency or bandwidth.

The other item to pay attention to is that the local configuration and file data kept outside the database by Akonadi will now be tied to the contents of that database, and vice versa. So you cannot simply set up a single database on a remote database server and then connect to it simultaneously from multiple Akonadi instances. In fact, I will guarantee you that this will eventually screw up your data in unpleasant ways. So don't do it. ;)

In an office environment where people don't move between machines and/or when the user data is stored centrally as well, this isn't an issue. Otherwise, create one database for each device you expect to connect to it. Yes, this means multiple copies of the data, but it will work without trashing your data, and that's the more important thing.

How well does it work?

Now for the Big Question: Is this actually practical and safe enough for daily use? I've been using this with my Kolab Now account since last week. To really stretch the realm of reality, I put the MySQL instance on a VM hosted in Germany. In spite of forcing Akonadi to trudge across the public internet (and over wifi), so far, so good. Once through a pretty slow initial synchronization, Kontact generally "feels" on par with and often even a bit snappier than most webmail services that I've used, though certainly slower than a local database. In an office environment, however, I would hope that the desktop systems have better network than "my laptop on wifi accessing a VM in Germany".

As for server load, for one power user with a ton of email (my life seems to revolve around email much of the time) it is entirely negligible. MySQL never budged much above 1% CPU usage during my monitoring of it, and after sync was usually just idling.

I won't be using this configuration for daily use. I still have my default-configured Akonadi as well, and that is not only faster but travels with my laptop wherever it is, network or not. Score one for offline access.

Footnotes

1: If you are thinking something along the lines of "the real issue is that it uses a database server at all", I would partially agree with you. For offline usage, good performance, and feature consistency between accounts, a local cache of some sort is absolutely required. So some local storage makes sense. A full RDBMS carries more overhead than truly necessary and SQL is not a 100% perfect fit for the dataset in question. Compared to today, there were far fewer options available to the Akonadi developers a decade ago when the Akonadi core was being written. When the choice is between "not perfect, but good enough" and "nothing", you usually don't get to pick "nothing". ;) In the Akonadi Next development, we've swapped out the external database process and the use of SQL for an embedded key/value store. Interestingly, the advancements in this area in the decade since Akonadi's beginning were driven by a combination of mobile and web application requirements. That last sentence could easily be unpacked into a whole other blog entry.

2: There is a (configurable) limit to the size of payload content (e.g. email body and attachments) that Akonadi will store in the database which defaults to 4KiB. Anything over that limit will get stored as a regular file on the file system with a reference to that file stored in the database.

3: This blog entry is, in part, a way to collect my thoughts for that documentation.

4: If the user is not allowed to create new databases, then you will need to pre-create the database in MySQL.

5: The user account is a MySQL account, not your regular system user account ... unless MySQL is configured to authenticate against the same user account information that system account login is, e.g. PAM / LDAP.

6: Akonadi appears to batch these queries into transactions that exist per folder being sync'd or every 100 emails, whichever comes first, so if you are watching the database during sync you will see data appear in batches. This can be done pretty easily with an SQL statement like select count(*) from PartTable; Divide this number by 3 to get the number of actual items, time how long it takes for a new batch to arrive, and you'll quickly have your performance numbers for synchronization.

7: That same dialog also offers options for things other than MySQL. There are pros and cons to each of the options in terms of stability and performance. Perhaps I'll write about those in the future, but this blog entry with its stupid number of footnotes is already too long. ;)

Bringing together an alliance that will liberate our future web and mobile collaboration was the most important motive behind our launching the Roundcube Next campaign at the 2015 Kolab Summit. This goal we reached fully.

There is now a group of some of the leading experts for messaging and collaboration in combination with service providers around the world that has embarked with us on this unique journey:

  • bHosted
  • Contargo
  • cPanel
  • Fastmail
  • Sandstorm.io
  • sys4
  • Tucows
  • TransIP
  • XS4ALL

The second objective for the campaign was to get enough acceleration to be able to allow two or three people to focus on Roundcube Next over the coming year. That goal we reached partially. There is enough to get us started and to go through the groundwork, but not enough for all the bells and whistles we would have loved to go for. To a large extent that’s because we have a lot of fantasy when it comes to bells and whistles.


But perhaps it is a good thing that the campaign did not complete all the way into the stretch goals.

Since numbers are part of my responsibility, allow me to share some with you to give you a first-hand perspective of being inside an Indiegogo Campaign:

 

Roundcube Next Campaign Amount     $103,541.00    100.00%
Indiegogo Cost                      -$4,141.64      4.00%
PayPal Cost                         -$4,301.17      4.15%
Remaining Amount                    $95,098.19     91.85%

So by the time the money was in our PayPal account, we were down 8.15%.

The reason for that is simple: instead of transferring the complete amount in one transaction, which would have incurred only a single transaction fee, Indiegogo transferred it individually per contribution, which means PayPal gets to extract the per-transaction fee each time. I assume the rationale behind this is that PayPal may have acted as the escrow service and would have credited users back in case the campaign goal had not been reached. Given that our transactions were larger than average for crowdfunding campaigns, the percentage for other campaigns is likely going to be higher. It would seem this can easily go even beyond the 5% that you see quoted on a variety of sites about crowdfunding.

But it does not stop there. Indiegogo did not allow us to run the campaign in Swiss Francs, and PayPal forces transfers into our primary currency, resulting in another fee for conversion. On the day the Roundcube Next Campaign funds were transferred to PayPal, XE.com listed the exchange rate as 0.9464749579 CHF per USD.

                                    USD             CHF               % of total
Roundcube Next Campaign Amount      $103,541.00     SFr. 97,998.96    100.00%
Remaining at PayPal                 $95,098.19      SFr. 90,008.06     91.85%
Final at bank in CHF                $92,783.23      SFr. 87,817.00     89.61%

So now we’re at 10.39% in fees, of which 4% go to Indiegogo for their services. A total of 6.39% went to PayPal. Not to mention this is before any t-shirt is printed or shipped, and there is of course also cost involved in creating and running a campaign.

The $4,141.64 we paid to Indiegogo is not too bad, I guess, although their service was shaky and their support non-existent. I don’t think we ever got a response to our repeated support inquiries over a couple of weeks. And we experienced multiple downtimes of several hours, which were particularly annoying during the critical final week of the campaign, where we can be sure to have lost contributions.

PayPal’s overhead was $6,616.27 – the equivalent of another Advisor to the Roundcube Next Campaign. That’s almost 60% more than the cost for Indiegogo, which seems excessive and reminds me of one of Bertolt Brecht’s more famous quotes.

But of course you also need to add the effort for the campaign itself, including preparation, running and perks. Considering that, I am no longer surprised that many of the campaigns I see appear to be marketing instruments to sell existing products that are about to be released, and less focused on innovation.

In any case, Roundcube Next is going to be all about innovation. And Kolab Systems will continue to contribute plenty of its own resources, as we have been doing for Roundcube and Roundcube Next, including a world-class Creative Director and UI/UX expert who is going to join us a month from now.

We also remain open to others to come aboard.

The advisory group is starting to constitute itself now, and will be taking some decisions about requirements and underlying architecture. Development will then begin and continue up until well into next year. So there is time to engage even in the future. But many decisions will be made in the first months, and you can still be part of that as Advisor to Roundcube Next.

It’s not too late to be part of the Next. Just drop a message to contact@kolabsystems.com.

"... if nothing changes"

by Aaron Seigo in aseigo at 17:41, Friday, 17 July

I try to keep memory of how various aspects of development were for me in past years. I do this by keeping specific projects I've been involved with fresh in my memory, revisiting them every so often and reflecting on how my methods and experiences have changed in the time since. This allows me to wander backwards 5, 10, 15, 20 years in the past and reflect.

Today I was presenting the "final" code-level design for a project I've been tasked with: an IMAP payload filter for use with Kolab. The best way I can think to describe it is as a protocol-level firewall (of sorts) for IMAP. The first concrete use case we have for it is to allow non-Kolab-aware clients (e.g. Thunderbird) to connect to a Kolab server and see only the mail folders, implying that the groupware folders are filtered out of the IMAP session. There are a large number of other use case ideas floating about, however, and we wanted to make sure that we could accommodate those in future by extending the codebase. While drawing out on the whiteboard how I planned for this to come together, along with a break-out of the work into two-week sprints, I commented in passing that it was actually a nicely simple program.

In particular, I'm quite pleased with how the "filter groupware folders" feature will actually be implemented quite late in the project as a very simple, and very isolated, module that sits on top of general-use scaffolding for real-time manipulation of an IMAP stream.

When I arrived back at my desk, I took a moment to reflect on how I would have perceived the same project earlier in my career. One thing that sprung out at me was that the shape of the program was very clear in my head. Roll back a decade and the details would have been much more fuzzy. Roll back 15 years and it probably would have been quite hand-wavy at the early stages. Today, I can envision a completed codebase.

If someone had presented that vision to me 10 or 15 years ago, I would have accepted it quite happily ("Yes! A plan!"). Today, I know that plan is a lie in much the same way as a 14-day weather report is: it is the best we can say about the near future from our knowledge of today. If nothing changes, that's what it will be. Things always change, however. This is one of life's few constants.

So what point is there to being able to see an end point? That's a good question and I have to say that I never attempted to develop the ability to see a codebase in this amount of detail before writing it. It just sort of happened with time and experience, one of the few bonuses of getting older. ;) As such, one might think that since the final codebase will almost certainly not look exactly like what is floating about in my head, this is not actually a good thing to have at all. Could it perhaps lock one mentally into a path which can be realized, but which when complete will not match what is there?

A lot of modern development practice revolves around the idea of flexibility. This shows up in various forms: iteration, approaching design in a "fractal" fashion, implementing only what you need now, etc. A challenge inherent in many of these approaches is growing short-sighted. So often I see projects switch data storage systems, for instance, as they run into completely predictable scalability, performance or durability requirements over time. It's amazing how much developer time is thrown away simply by misjudging at the start what an appropriate storage system would be.

This is where having a long view is really helpful. It should inform the developer(s) about realistic possible futures which can eliminate many classes of "false starts" right at the beginning. It also means that code can be written with purpose and confidence right from the start, because you know where you are headed.

The trick comes in treating this guidance as the lie it is. One must be ready and able to modify that vision continuously to reflect changes in knowledge and requirement. In this way one is not stuck in an inflexible mission while still having enough direction to usefully steer by. My experience has been that this saves a hell of a lot of work in the long run and forces one to consider "flexible enough" designs from the start.

Over the years I've gotten much better at "flexible enough" design, and being able to "dance" the design through the changing sea of time and realities. I expect I will look back in 5, 10, 15 and 20 years and remark on how much I've learned since now, as well.

I am reminded of steering a boat at sea. You point the vessel to where you want to go, along a path you have in your mind that will take you around rocks and currents and weather. You commit to that path. And when the ocean or the weather changes, something you can count on happening, you update your plans and continue steering. Eventually you get there.

Kontact on Windows

by cmollekopf in Finding New Ways… at 14:49, Friday, 10 July

I recently had the dubious pleasure of getting Kontact to work on windows, and after two weeks of agony it also yielded some results =)

Not only did I get Kontact to build on Windows (sadly still something to be proud of), it is also largely functional. Even timezones now work in a way that lets you collaborate with non-Windows users, although that required a patch or two to kdelibs.

To make the whole exercise as reproducible as possible I collected my complete setup in a git repository [0]. Note that these builds are from the kolab stable branches, and not all the windows-specific fixes have made it back upstream yet. That will follow as soon as the waters calm a bit.

If you want to try it yourself you can download an installer here [1], and if you don’t (I won’t judge you for not using Windows) you can look at the pretty pictures.

[0] https://github.com/cmollekopf/kdepimwindows
[1] http://mirror.kolabsys.com/pub/upload/windows/Kontact-E5-2015-06-30-19-41.exe

[screenshot: the Kontact account wizard on Windows]


Roundcube Next: The Next Steps

by Aaron Seigo in aseigo at 15:17, Friday, 03 July


The crowdfunding campaign to provide funding and greater community engagement around the refactoring of Roundcube's core to give it a secure future has just wrapped up. We managed to raise $103,541 from 870 people. This obviously surpassed our goal of $80,000, so we're pretty ecstatic. This is not the end, however: now we begin the journey to delivering a first release of Roundcube Next. This blog entry outlines some of that path forward.


Perks

The most obvious thing on our list is to get people's t-shirts and stickers out to them. We have a few hundred of them to print and ship, and it looks like we may be missing a few shipping addresses, so I'll be following up with those people next week. Below is a sneak peek of what the shirts might look like. We're still working out the details, so they may look a bit different than this once they come off the presses, but this should give you an idea. We'll be in touch with people for shirt sizes, color options, etc. in the coming week.

[image: draft t-shirt design]

Those who elected for the Kolaborator perk will be notified by email about how to redeem their free months on Kolab Now. Of course, everyone who elected for the in-application-credits mention will get that in due time as well. We've got you all covered! :)

Note that it takes a couple of weeks for Indiegogo to get the funds to us, and we need to wait on that before confirming our orders and shipping for the physical perk items.

Roundcube Backstage

We'll be opening the Roundcube Backstage area in the ~2 weeks after wrap-up is complete next week. This will give us enough time to create the Backstage user accounts and get the first set of content in place. We will be using the Discourse platform for discussions and for posting our weekly Backstage updates. I'm really looking forward to reading your feedback there, answering questions, contemplating the amazing future that lies ahead of us, ...

The usual channels of Roundcube blogging, forums and mailing lists will of course remain in use, but the Backstage will see all sorts of extras and closer direct interaction with the developers. If you picked up the Backstage perk, you will get an email next week with information on when and where you can activate your account.

Advisory Committee

The advisory committee members will also be getting an email next week with a welcome note. You'll be asked to confirm who the contact person should be, and they'll get a welcome package with further information. We'll also want some information for use in the credits badge: a logo we can use, a short description you'd like to see with that logo describing your group/company, and the web address we should point people to.

The Actual Project!

The funds we raised will cover getting the new core in place with basic email, contacts and settings apps. We will be able to adopt JMAP into this and build the foundations we so desperately need. The responsive UI that works on phones, tablets and desktop/laptop systems will come as a result of this work as well, something we are all really looking forward to.

Today we had an all-hands meeting to take our current requirements, mock-ups and design docs and reflect on how the feedback we received during the campaign should influence them. We are now putting all this together in a clear and concise form that we can share with everyone, particularly our Advisory Committee members as well as in the Backstage area. This will form the basis for our first round of stakeholder feedback, which I am really looking forward to.

We are committed to building the most productive and collaborative community around any webmail system out there, and these are just our first steps. That we have the opportunity here to work with the likes of Fastmail and Mailpile, two entities that one may have thought of as competitors rather than possible collaborators, really shows our direction in terms of inclusivity and looking for opportunities to collaborate.

Though we are at the end of this crowdfunding phase, this is really just the beginning, and the entire team here isn't waiting a moment to get rolling! Mostly because we're too excited to do anything else ;)


Software is a social endeavour. The most important advantage of Free Software is its community, because the best Open Source is built by a community of contributors, contribution being the single most important currency and the differentiation between users and community. You want to be part of that community, at least by proxy, because like any community, its members spend time together, exchange ideas, and create cohesion that translates into innovation, features and best practices.

We create nothing less than a common vision of the future.

By the rules of our community, anyone can take our software and use it, extend it, distribute it. A lot of value can be created this way and not everyone has the capabilities to contribute. Others choose not to contribute in order to maximise their personal profits. Short of actively harming others, egoism, even in its most extreme forms, is to be accepted. That is not to say it is necessarily a good idea for you to put the safeguarding of your own personal interests into the hands of an extreme egoist. Or that you should trust in their going the extra mile for you in all the places that you cannot verify.

That is why the most important lesson for non-developers is this: choose providers based on community participation. Not only are they more likely to know about problems early, putting them in a much better position to provide you with the security you require; they will also ensure you have a future you like.

Developers know all this already, of course, and typically apply it at least subconsciously.

Growing that kind of community has been one of the key motives to launch Roundcube Next, which is now coming close to closing its phase of bringing together its key contributors. Naturally everyone had good reasons to get involved, as recently covered on VentureBeat.

Last night Sandstorm.io became the single greatest contributor to the campaign in order to build that better future together, for everyone. Over the past weeks, many other companies, some big, some small, have done the same.

Together, we will be that community that will build the future.

Reproducible testing with docker

by cmollekopf in Finding New Ways… at 17:22, Wednesday, 01 July

Reproducible testing is hard, and doing it without automated tests is even harder. With Kontact we’re unfortunately not yet in a position where we can cover all of the functionality with automated tests.

If manual testing is required, being able to bring the test system into a “clean” state after every test is key to reproducibility.

Fortunately we now have a lightweight virtualization technology available in Linux containers, and Docker makes them fairly trivial to use.

Docker

Docker allows us to create, start and stop containers very easily based on images. Every image contains the current file system state, and each running container is essentially a chroot containing that image content plus a process running in it. Let that process be bash and you have pretty much a fully functional Linux system.

The nice thing about this is that it is possible to run an Ubuntu 12.04 container on a Fedora 22 host (or whatever suits your fancy), and whatever I’m doing in the container is not affected by what happens on the host system. Upgrading the host system, for example, does not affect the container.

Also, starting a container is a matter of a second.

Reproducible builds

There is a large variety of distributions out there, and every distribution has its own unique set of dependency versions, so if a colleague is facing a build issue, it is by no means guaranteed that I can reproduce the same problem on my system.

As an additional annoyance, any system upgrade can break my local build setup, meaning I have to be very careful with upgrading my system if I don’t have the time to rebuild it from scratch.

Moving the build system into a docker container therefore has a variety of advantages:
* Builds are reproducible across different machines
* Build dependencies can be centrally managed
* The build system is no longer affected by changes in the host system
* Building for different distributions is a matter of having a couple of docker containers

For building I chose to use kdesrc-build, so building all the necessary repositories takes the least amount of effort.

Because I’m still editing the code from outside of the docker container (where my editor runs), I’m simply mounting the source code directory into the container. That way I don’t have to work inside the container, but my builds are still isolated.

Further I’m also mounting the install and build directories, meaning my containers don’t have to store anything and can be completely non-persistent (the less customized, the more reproducible), while I keep my builds fast and incremental. This is not about packaging after all.

Reproducible testing

Now we have a set of binaries that we compiled in a docker container using certain dependencies, so all we need to run the binaries is a docker container that has the necessary runtime dependencies installed.

After a bit of hackery to reuse the host’s X11 socket, it’s possible to run graphical applications inside a properly set up container.

The binaries are directly mounted from the install directory, and the prepared docker image contains everything from the necessary configurations to a seeded Kontact configuration for what I need to test. That way it is guaranteed that every time I start the container, Kontact starts up in exactly the same state, zero clicks required. Issues discovered that way can very reliably be reproduced across different machines, as the only thing that differs between two setups is the used hardware (which is largely irrelevant for Kontact).

..with a server

Because I’m typically testing Kontact against a Kolab server, I of course also have a docker container running Kolab. I can again seed the image with various settings (I have, for instance, a John Doe account set up, with the account and credentials already configured in the client container), and the server is completely fresh on every start.

Wrapping it all up

Because a bunch of commands is involved, it’s worthwhile writing a couple of scripts to make the usage as easy as possible.

I went for a python wrapper which allows me to:
* build and install kdepim: “devenv srcbuild install kdepim”
* get a shell in the kdepim dir: “devenv srcbuild shell kdepim”
* start the test environment: “devenv start set1 john”

When starting the environment, the first parameter defines the dataset used by the server, and the second one specifies which client to start, so I can have two Kontact instances with different users for testing invitation handling and the like.

Of course you can issue any arbitrary command inside the container, so this can be extended however necessary.

While that would of course have been possible with VMs for a long time, there is a fundamental difference in performance. Executing the build has no noticeable delay compared to simply issuing make, and that includes creating a container from an image, starting the container, and cleaning it up afterwards. Starting the test server + client also takes all of 3 seconds. This kind of efficiency is really what enables us to use this in a lather, rinse, repeat approach.

The development environment

I’m still using the development environment on the host system, so all file-editing and git handling etc. happens as usual so far. I still require the build dependencies on the host system, so clang can compile my files (using YouCompleteMe) and hint if I made a typo, but at least these dependencies are decoupled from what I’m using to build Kontact itself.

I also did a little bit of integration in Vim, so my Make command now actually executes the docker command. This way I get seamless integration and I don’t even notice that I’m no longer building on the host system. Sweet.

While I’m using Vim, there’s no reason why that shouldn’t work with KDevelop (or whatever really..).

I might dockerize my development environment as well (vim + tmux + zsh + git), but more on that in another post.

Overall I’m very happy with the results of investing in a couple of docker containers, and I doubt we could have done the work we did without that setup. At least not without a bunch of dedicated machines just for that. I’m likely to invest more in that setup.

In any case, sources can be found here:
https://github.com/cmollekopf/docker.git


Roundcube Next crowdfunding success and community

by Aaron Seigo in aseigo at 21:36, Monday, 29 June


A couple days ago, the Roundcube Next crowdfunding campaign reached our initial funding goal. We even got a piece on Venture Beat, among other places. This was a fantastic result and a nice reward for quite a bit of effort on the entire team's part.

Reaching our funding goal was great, but for me personally the money is secondary to something even more important: community.

You see, Roundcube has been an Internet success for a decade now, but when I sat down to talk with the developers about who their community was and who was participating in it, there wasn't as much to say as one might hope for such a significant project used by that many people.

Unlike the free software projects born in the 90s, many projects these days are not very community focused. They are often much more pragmatic, but also far less idealistic. This is not a bad thing, and I have to say that the focus many of them have on quality (of various sorts) is excellent. There is also a greater tendency to have a company founded around them, and a greater tendency to be hosted on the mostly-proprietary GitHub system with little in the way of community connection other than pull requests. Unlike the Free software projects I have spent most of my time with, these projects hardly try at all to engage with people outside their core team.

This lack of engagement is troubling. Community is one of the open source1 methodology's greatest assets. It is what allows mutual interests to create a self-reinforcing cycle of creation and support. Without it, you might get a lot of software (though you just as well might not), but you are quite unlikely to get the buy-in, participation, and thereby the amplification and sustainability, of the open source of the pre-GitHub era.

So when we designed the Roundcube Next campaign, we positioned no less than 4 of the perks to be participatory. There are two perks aimed at individual backers (at $75 and $100) which get those people access to what we're calling the Backstage Pass forums. These forums will be directed by the Roundcube core team, and will focus on interaction with the end users and people who host their own instance of Roundcube. Then we have two aimed at larger companies (at $5,000 and $10,000) who use Roundcube as part of their services. Those perks gain them access to Roundcube's new Advisory Committee.

So while these backers are helping us make Roundcube Next a reality, they are also paving a way to participation for themselves. The feedback from them has been extremely good so far, and we will build on that to create the community Roundcube deserves and needs. One that can feed Roundcube with all the forms of support a high profile Free software product requires.

So this crowdfunding campaign is really just the beginning. After this success, we'll surely be doing more fundraising drives in the future, and we'd still love to hit our first stretch goal of $120,000 ... but even more vitally this campaign is allowing us to draw closer to our users and deployers, and them with us, until, one hopes, there is only an "us": the people who make Roundcube happen together.

That we'll also be delivering the most kick-ass-ever version of Roundcube is pretty damn exciting, too. ;)

p.s. You all have 3 more days to get in on the fun!


1 I differentiate between "Free software" as a philosophy, and "open source" as a methodology; they are not mutually exclusive, but they are different beasts in almost every way, most notably how one is an ideology and the other is a practice.

Riak KV, Basho and Kolab

by Aaron Seigo in aseigo at 10:22, Saturday, 27 June


As I have mentioned in earlier blog entries, Kolab Enterprise has gained data loss prevention (DLP) functionality this year that goes above and beyond what one tends to find in other groupware products. Kolab's DLP is not just a back-up system that copies mails and other objects to disk for later restore; it actually creates a history of every groupware object in real-time that can later be examined and restored from. This will eventually lead to some very interesting business intelligence features.

The storage system for the Kolab DLP system is Basho's industry-leading distributed NoSQL database, Riak KV. (The "KV" stands for key/value.) We chose Riak KV because it scales naturally (it is designed to be run as a cluster of nodes by default), is robust by design (CAP Theorem ftw), and is dead simple to deploy on development and production machines alike. A further key factor for us is that Basho provides proven enterprise-grade support for its line of Riak products. This was a requirement for us as we need to provide enterprise-grade support for the entire Kolab Enterprise stack.

(It was a nice coincidence that both Riak and core parts of Kolab's DLP system are written in Erlang. ;)

I sat down with Manu Marchel, Managing Director for EMEA at Basho Technologies Inc., recently for a mutual interview. You can read my interview on the Basho blog (I'll update this entry with a link when it is published); here is a transcript of my conversation with Manu:

NoSQL is quite a new technology in the Big Data space. Many people might have heard about things like Hadoop, but how does NoSQL fit in? Could you give everyone the quick cheatsheet on what NoSQL databases are, and specifically on Riak KV, your key-value NoSQL database?

NoSQL databases are the new generation of databases that were designed to address the needs of enterprises to store, manage, analyse and serve the ever increasing amounts of unstructured data that make up over 80% of all data being created nowadays in public clouds or private infrastructures. Apache Hadoop has done a great job of handling batch analytics use cases at massive scale for unstructured data, what I would call exploratory or discovery analytics. What NoSQL databases like Riak do in comparison is help organisations manage their active data workloads as well, providing near real-time operational analytics at scale. Most importantly, most businesses need scalability, availability and fault tolerance as core requirements for their current and future application architectures, and these are deciding factors for NoSQL against traditional relational databases. NoSQL databases started out as one of four types: key-value, column store, document and graph, but nowadays they are becoming multi-model, whereby for example Riak can efficiently handle key-value data, but also documents as well as log/time-series data, as demonstrated by our wide range of customers including Kolab Systems.

Riak KV is the most widely adopted NoSQL key-value database, with scalability, high availability, fault tolerance and operational simplicity as its key properties. It is used most often for mission-critical use cases and works great for handling user data, session data, profile data, social data, real-time data and logging data use cases. It provides near real-time analytics with its secondary indexes, search through Solr, in-situ Map/Reduce and soon-to-come Apache Spark support. Finally, its multi-datacenter replication capability makes it easy to ensure business continuity, to geo-locate data for low-latency access across continents, or to segregate workloads to ensure very reliable low latency.

Riak KV is known for its durability; it's part of the reason we chose it for Kolab's DLP system. Could you give us some insight into how Riak KV achieves this?

Hardware does fail, and when it does your IT infrastructure needs to be able to cope and your systems must continue to operate while getting the resources back online as soon as possible. Riak KV was designed to eliminate the impact of the unexpected. Even if network partitions or hardware failures cause unanticipated outages, Riak KV can still read and write your data. This means you never lose data even when part of the system goes down. For Kolab customers, it means that they have the security of knowing that the data loss prevention and auditing they are paying for is backed up by the best system available to deliver on this promise.

Availability seems to be a very important thing for databases in today’s digital age. How is Riak providing this key feature to Kolab and how does this enhance the Kolab offering?

Simply, Riak KV is designed to intelligently replicate and retrieve data making sure that applications based on the database are always available. Scalability also comes into play here as well. Unlike traditional databases, Riak is designed to respond to the billions of data points and terabytes of data that are being produced -- often in real-time -- as it is able to scale in a near linear fashion to give Kolab the best possible performance. Ultimately this means that Kolab’s application is always available so as an end-user you don’t experience any system outages no matter how busy or active your users are.

We integrate RiakKV with Kolab’s data loss prevention system to store groupware object histories in realtime for auditing and roll-back if needed. Is this unique?

Yes! This is a great example of two great technologies working together to provide an excellent customer experience. Combining the power of Riak KV’s high availability, fault tolerance, and scalability with Kolab’s data loss prevention system means that you have an incredibly strong and powerful system.

Basho is a really unique name for a technology company - is there any history or background to it?

Thank you, we really like our name too. Basho’s name was inspired by the real life Matsuo Basho (1644 – 1694) who is considered by many to be Japan's most renowned and revered writer of haiku. Haiku are known for their balance of lines and syllables where the simplicity of the structure is important. This is a founding, guiding principle that Riak KV is based on, as our operational simplicity is core to our architecture eliminating the need for mindless manual operations as data can be automatically and uniformly distributed.

To see the partnership of Basho's Riak and Kolab Enterprise in action together, come see us in Munich at the TDWI European Conference 22-24th June. We'll be in a booth showing both Riak KV and Kolab Enterprise, and will be happy to answer your questions!

If you are a user of Roundcube, you want to contribute to roundcu.be/next. If you are a provider of services, you definitely want to get engaged and join the advisory group. Here is why.

Free Software has won. Or has it? Linux is certainly dominant on the internet. Every activated Android device is another Linux kernel running. At the same time we see a shift towards “dumber” devices which are in many ways more like thin clients of the past. Only they are not connected to your own infrastructure.

Alerted by the success of Google Apps, Microsoft has launched Office 365 to drive its own transformation from a software vendor into a cloud provider. Amazon and others have also joined the race to provide your collaboration platform. The pull of these providers is already enormous. Thanks to network effects, economies of scale, and the ability to leverage deliberate technical incompatibilities to their advantage, the drawing power of these providers is only going to increase.

Open Source has managed to catch up to the large providers in most functions, surpassing them in some and trailing slightly in others. Kolab has been essential in providing this alternative, especially where cloud-based services are concerned. Its web application is on par with Office 365 and Google Apps in usability, attractiveness and most functions. It is the only fully Open Source alternative that offers scalability to millions of users and allows sharing of all data types in ways that are superior to what the proprietary competition has to offer.

Collaborative editing, chat, voice, video – all the forms of synchronous collaboration – are next and will be added incrementally, while Kolab Systems keeps driving the commercial ecosystem around the solution, allowing application service providers (ASPs), institutions and users to run their own services with full professional support. And all parts of Kolab will remain Free and Open, as well as committed to the upstream, according to best Free Software principles. If you want to know what that means, please take a look at Thomas Brüderli's account of how Kolab Systems contributes to Roundcube.

TL;DR: Around 2009, Roundcube founder Thomas Brüderli got contacted by Kolab at a time when his day job left him so little time to work on Roundcube that he had played with the thought of just stepping back. Kolab Systems hired the primary developers of Roundcube to finish the project, contributing in the area of 95% of all code in all releases since 0.6, driving it to its 1.0 release and beyond. At the same time, Kolab Systems carefully avoided imposing itself on the Roundcube project.

From a Kolab perspective, Roundcube is the web mail component of its web application.

The way we pursued its development made sure that it could be used by any other service provider or ISV. And it was. Roundcube has seen enormous adoption, with millions of downloads, hundreds of thousands of sites, and an uncounted number of users beyond the tens of millions. According to cPanel, 62% of their users choose Roundcube as their web mail application. It’s been used in a wide range of other applications, including several service providers that offer mail services that are more robust against commercial and governmental spying. Everyone at Kolab considers this a great success, and finds it rewarding to see our technology contribute essential value to society in so many different ways.

But while adoption sky-rocketed, contribution did not grow in the same way. It’s still Kolab Systems driving the vast majority of all code development in Roundcube, along with a small number of occasional contributors. And as a direct result of the Snowden revelations, the development of web collaboration solutions fragmented further. There are a number of proprietary approaches, which should be self-evidently disqualified from being taken seriously based on what we have learned about how solutions get compromised. But there are also Open Source solutions.

The Free Software community has largely responded in one of two ways. Many people felt reinforced in their opinion that people just “should not use the cloud.” Many others declared self-hosting the universal answer to everything, and started to focus on developing solutions for the crypto-hermit.

The problem with that is that it takes an all-or-nothing approach to privacy and security. It also requires users to become more technical than most of them ever wanted to be, and to give up features, convenience and ease of use as the price for privacy and security. In my view that ignores the most fundamental lesson we have learned about security throughout the past decades: people will work around security when they consider it necessary in order to get the job done. So the adoption rate of such technologies will necessarily remain limited to a very small group of users whose concerns are unusually strong.

These groups are often more exposed, more endangered, and more in need of protection, and they contribute to society in an unusually large way. So developing technology they can use is clearly a good thing.

It just won’t solve the problem at scale.

To do that we would need a generic web application geared towards all of tomorrow’s form factors and devices. It should be collaboration centric and allow deployment in environments from a single to hundreds of millions of users. It should enable meshed collaboration between sites, be fun to use, elegant, beautiful and provide security in a way that does not get into the users face.

Fully Free Software, that solution should be the generic collaboration application that could become in parts or as a whole the basis for solutions such as Mailpile, which focus on local machine installations using extensive cryptography, intermediate solutions such as Mail-in-a-Box, all the way to generic cloud services by providers such as cPanel or Tucows. It should integrate all forms of on-line collaboration, make use of all the advances in usability for encryption, and be able to grow as technology advances further.

That, in short, is the goal Kolab Systems has set out to achieve with its plans for Roundcube Next.

While we can and of course will pursue that goal independently in incremental steps, we believe that would mean missing two rather major opportunities. One is the opportunity to tackle this together, as a community. We have a lot of experience, a great UI/UX designer excited about the project, and many good ideas.

But we are not omniscient, and we also want to use this opportunity to achieve what Roundcube 1.0 has not quite managed to accomplish: to build an active, multi-vendor community around a base technology that will be fully Open Source/Free Software and will address the collaborative web application need so well that it puts Google Apps and Office 365 to shame, and provides that solution to everyone. And secondly, while incremental improvements are immensely powerful, sometimes leapfrogging innovation is what you really want.

All of that is what Roundcube Next really represents: The invitation to leapfrog all existing applications, as a community.

So if you are a user that has appreciated Roundcube in the past, or a user who would like to be able to choose fully featured services that leave nothing to be desired but do not compromise your privacy and security, please contribute to pushing the fast forward button on Roundcube Next.

And if you are an Application Service Provider, but your name is not Google, Microsoft, Amazon or Apple, Roundcube Next represents the small, strategic investment that might just put you in a position to remain competitive in the future. Become part of the advisory group and join the ongoing discussion about where to take that application, and how to make it reality, together.

 

Key Update

by Georg Greve in freedom bits at 20:58, Monday, 18 May

I’m a fossil, apparently. My oldest PGP key dates back to 1997, so around the time when GnuPG just got started – and I switched to it early. Over the years I’ve been working a lot with GnuPG, which perhaps isn’t surprising. Werner Koch was one of the co-founders of the Free Software Foundation Europe (FSFE), and so we share quite a bit of a long and interesting history together. I was always proud of the work he did, and together with Bernhard Reiter and others I was doing what I could to try and support GnuPG when most people did not seem to understand how essential it truly was – and even many security experts declared proprietary encryption technology acceptable. Bernhard was also crucial in starting the more than 10-year track record of Kolab development supporting GnuPG. And especially the usability of GnuPG has always been something I’ve advocated for. As the now famous video by Edward Snowden demonstrated, this unfortunately continued to be an unsolved problem, but hopefully it will be solved “real soon now.”
In any case, I’ve been happy with my GnuPG setup for a long time. This is why the key I’ve been using for the past 16 years looked like this:
sec# 1024D/86574ACA 1999-02-20
uid                  Georg C. F. Greve <greve@gnu.org>
uid                  Georg C. F. Greve <greve@fsfeurope.org>
uid                  Georg C. F. Greve <greve@brave-gnu-world.org>
uid                  Brave GNU World <column@gnu.org>
uid                  Georg C. F. Greve <greve@fsfe.org>
uid                  Georg C. F. Greve <greve@gnuhh.org>
uid                  Georg C. F. Greve (Kolab Systems AG, CEO) <georg.greve@kolabsys.com>
uid                  Georg C. F. Greve (Kolab Systems AG, CEO) <greve@kolabsys.com>
ssb>  1024R/B7DB041C 2005-05-02
ssb>  1024R/7DF16B24 2005-05-02
ssb>  1024R/5378AB47 2005-05-02
You’ll see that I kept the actual primary key off my work machines (look for the ‘#’) and I also moved the actual sub keys onto a hardware token. Naturally a FSFE Fellowship Smart Card from the first batch ever produced.
That smart card is battered and bruised, but its chip is still intact with 58470 signatures and counting, and the key itself is likely still safe: it hasn’t been compromised, not least because it has never been on a networked machine. But unfortunately there is no way to extend the length of a key. And while 1024 bits is probably still okay today, it’s not going to last much longer. So I finally went through the motions of generating a new key:
sec#  4096R/B358917A 2015-01-11 [expires: 2020-01-10]
uid                  Georg C. F. Greve (Kolab Systems AG, CEO) <greve@kolabsystems.com>
uid                  Georg C. F. Greve (Kolab Systems AG, CEO) <greve@kolabsystems.ch>
uid                  Georg C. F. Greve (Kolab Systems AG, CEO) <greve@kolabsys.com>
uid                  Georg C. F. Greve (Kolab Community) <georg@kolab.org>
uid                  Georg C. F. Greve (Free Software Foundation Europe, Founding President) <greve@fsfeurope.org>
uid                  Georg C. F. Greve (Free Software Foundation Europe, Founding President) <greve@fsfe.org>
uid                  Georg C. F. Greve (digitalSTROM.org Board) <georg.greve@digitalSTROM.org>
uid                  Georg C. F. Greve <mail@georggreve.net>
uid                  Georg C. F. Greve (GNU Project) <greve@gnu.org>
ssb>  4096R/AD394E01 2015-01-11
ssb>  4096R/B0EE38D8 2015-01-11
ssb>  4096R/1B249D9E 2015-01-11

My basic setup is still the same, and the key has been uploaded to the key servers, signed by my old key, which I have meanwhile revoked and which you should stop using. From now on please use the key
pub   4096R/B358917A 2015-01-11 [expires: 2020-01-10]
      Key fingerprint = E39A C3F5 D81C 7069 B755  4466 CD08 3CE6 B358 917A
exclusively and feel free to verify the fingerprint with me through side channels.

Not that this key has any chance to ever again make it among the top 50… but then that is a good sign insofar as it means a lot more people are using GnuPG these days. And that is definitely good news.

And in case you haven’t done so already, go and support GnuPG right now.

 

 

event and data logging

by Aaron Seigo in aseigo at 08:58, Wednesday, 01 April

Working with Kolab has kept me busy on numerous fronts since I joined near the end of last year. There is the upcoming Kolab Summit, refreshing Kolab Systems' messaging, helping with progress around Kolab Now, collaborating on development process improvement, working on the design and implementation of Akonadi Next, the occasional sales engineering call ... so I've been kept busy, and been able to work with a number of excellent people in the process, both in Kolab Systems and the Kolab community at large.

While much of that list of topics doesn't immediately bring "writing code" to mind, I have had the opportunity to work on a few "hands on keyboard, writing code" projects. Thankfully. ;)

One of the more interesting ones, at least to me, has been work on an emerging data loss prevention and audit trail system for Kolab called Bonnie. It's one of those things that companies and governmental users tend to really want, but which is fairly non-trivial to achieve.

There are, in broad strokes, three main steps in such a system:

  1. Capturing and recording events
  2. Storing data payloads associated with those events
  3. Recreating histories which can be reviewed and even restored from

I've been primarily working on the first two items, while a colleague has been focusing on the third point. Since each of these points is a relatively large topic on their own, I'll be covering each individually in subsequent blog entries.

We'll start in the next blog entry by looking at event capture and storage, why it is necessary (as opposed to simply combing through system logs, e.g.) and what we gain from it. I'll also introduce one of the Bonnie components, Egara, which is responsible for this set of functionality.

eGovernment in the Netherlands

by Aaron Seigo in aseigo at 18:47, Friday, 27 March

Today "everyone" is online in one form or another, and it has transformed how many people connect, communicate, share and collaborate with others. To think that the Internet really only hit the mainstream some 20 years ago. It has been an amazingly swift and far-reaching shift that has touched people's personal and professional lives.

So it is no surprise that the concept of eGovernment is a hot one and much talked about. However, the reality on the ground is that governments tend not to be the swiftest sort of organizations when it comes to adopting change. (Which is not a bad thing; but that's a topic for another blog perhaps.) Figuring out how to modernize the communication and interaction of government with their constituencies seems to largely still be in the future. Even in countries where everyone is posting pictures taken on their smartphones of their lunch to all their friends (or the world ...), governments seem to still be trying to figure out how to use the Internet as an effective tool for democratic discourse.

The Netherlands is a few steps ahead of most, however. They have an active social media presence which is used by numerous government offices to collaborate with each other as well as to interact with the populace. Best of all, they aren't using a proprietary, lock-in platform hosted by a private company overseas somewhere. No, they use a free software social media framework that was designed specifically for this: Pleio.

They have somewhere around 100,000 users of the system and it is both actively used and developed to further the aims of the eGovernment initiative. It is, in fact, an initiative of the Programme Office 2.0 with the Treasury department, making it a purposeful program rather than simply a happy accident.

In their own words:

The complexity of society and the needs of citizens call for an integrated service platform where officials can easily collaborate with each other and engage citizens.

In addition, hundreds of government organizations all have the same sort of functionality needed in their operations and services. At this time, each organization is still largely trying to reinvent the wheel and independently purchase technical solutions.

That could be done better. And cheaper. Gladly nowadays new resources are available to work together government-wide in a smart way and to exchange knowledge. Pleio is the platform for this.

Just a few days ago it was announced publicly that not only is the Pleio community hard at work on improving the platform to raise the bar yet again, but that Kolab will be a part of that. A joint development project has been agreed to and is now underway as part of a new Pleio pilot project. You can read more about the collaboration here.

Kolab Summit 2015

by Aaron Seigo in aseigo at 11:52, Monday, 16 March


We just announced that registration and presentation proposal submission are now open for the Kolab Summit 2015, which is being held in The Hague on May 2-3.

Just as Kolab itself is made up of many technologies, many technologies will be present at the summit. In addition to topics on Kolab, there will be presentations covering Roundcube, KDE Kontact and Akonadi, Cyrus IMAP, and OpenChange among others. We have some pretty nifty announcements and reveals already lined up for the event, which will be keynoted by Georg Greve (CEO of Kolab Systems AG) and Jeroen van Meeuwen (lead Kolab architect). Along with the usual BoFs and hacking rooms, this should be quite an enjoyable event.

As an additional and fun twist, the Kolab Summit will be co-located with the openSUSE conference which is going on at the same time. So we'll have lots of opportunity for "hallway talks" with Geekos as well. In fact, I'll be giving a keynote presentation at the openSUSE conference about freedom as innovation. A sort of "get the engines started" presentation that I hope provokes some thought and gets some energy flowing.

Ever since we introduced our ideas for the next version of Akonadi, we’ve been working on a proof of concept implementation, but we haven’t talked a lot about it. I’d therefore like to give a short progress report.

By choosing decentralized storage and a key-value store as the underlying technology, we first need to prove that this approach can deliver the desired performance with all pieces of the infrastructure in place. I think we have mostly reached that milestone by now. The new architecture is very flexible and looks promising so far. IMO we managed quite well to keep the levels of abstraction to a necessary minimum, which results in a system that is easily adjusted as new problems need to be solved and feels very controllable from a developer perspective.

We’ve started off with implementing the full stack for a single resource and a single domain type. For this we developed a simple dummy-resource that currently has an in-memory hash map as backend and can only store events. This is a sufficient first step, as turning that into the full solution is a matter of adding further flatbuffer schemas for other types and defining the relevant indexes necessary to query what we want to query. By only working on a single type we can first carve out the necessary interfaces and make sure that the effort required to add new types is minimal, thus maximizing code reuse.

The design we’re pursuing, as presented during the PIM sprint, consists of:

  • A set of resource processes
  • A store per resource, maintained by the individual resources (there is no central store)
  • A set of indexes maintained by the individual resources
  • A client API that knows how to access the store and how to talk to the resources through a plugin provided by the resource implementation.

By now we can write to the dummyresource through the client API; the resource internally queues the new entity, updates its indexes and writes the entity to storage. On the reading side we can execute simple queries against the indexes and retrieve the found entities. The synchronizer process can meanwhile also generate new entities, so client and synchronizer can write concurrently to the store. We can therefore do the full write/read roundtrip, meaning we have the most fundamental requirements covered. Still missing are operations other than creating new entities (removal and modification), and the write-back to the source by the synchronizer. But that’s just a matter of completing the implementation (we have the design).

To the numbers: writing from the client is currently implemented in a very inefficient way and it’s trivial to drastically improve this, but in my latest test I could already write ~240 (small) entities per second. Reading runs at around 40k entities per second (in a single query), including the lookup on the secondary index. The upper limit of what the storage itself can achieve (on my laptop) is 30k entities per second for writing and 250k entities per second for reading, so there is room for improvement =)

Given that design and performance look promising so far, the next milestone will be to refactor the codebase sufficiently to ensure new resources can be added with sufficient ease, and making sure all the necessary facilities (such as a proper logging system), or at least stubs thereof, are available.

I’m writing this on a plane to Singapore, which we’re using as a gateway to Indonesia to chase after waves and volcanoes for the next few weeks, but after that I’m looking forward to going full steam ahead with what we started here. I think it’s going to be something cool =)


On Domain Models and Layers in kdepim

by cmollekopf in Finding New Ways… at 10:28, Friday, 19 December

In our current kdepim code we use some classes throughout the codebase. I’m going to outline the problems with that and propose how we can do better.

The Application Domain

Each application has a “domain” it was created for. KOrganizer, for instance, has the calendar domain and KMail the email domain, and each of those domains can be described with domain objects, which make up the domain model. The domain model of an application is essential, because it is what defines how we can represent the problems of that domain. If KOrganizer didn’t have a domain model with attendees of events, we wouldn’t have any way to represent attendees internally, and thus couldn’t develop a feature based on that.

The logic implementing the functionality on top of those domain objects is the domain logic. It implements for instance what has to happen if we remove an event from a calendar, or how we can calculate all occurrences of a recurring event.

In the calendaring domain we use KCalCore to provide many of those domain objects and a large part of the domain logic. KCalCore::Event for instance, represents an event, can hold all necessary data of that event, and has the domain logic directly built-in to calculate recurrences.
Since it is a public library, it provides domain-objects and the domain-logic for all calendaring applications, which is awesome, right? Only if you use it right.

KCalCore

KCalCore provides, in addition to the containers and the calendaring logic, serialization to the iCalendar format, which is also why it more or less tries to adhere to the iCalendar RFC, for both representation and interpretation of calendaring data. This is of course very useful for applications that deal with that, and there’s nothing particularly wrong with it. One could argue that serialization and interpretation of calendaring data should be split up, but since both are described by the same RFC I think it makes a lot of sense to keep the implementations together.

Coupling

A problem arises when classes like KCalCore::Event are used as domain objects, as the interface for the storage layer, and as the actual storage format, which is precisely what we do in kdepim.

The problem is that we introduce very high coupling between those components/layers, and by choosing a library that adheres to an RFC the whole system is even locked down by a fully grown specification. I suppose that would be fine if only one application were using the storage layer, and that application’s sole purpose were to implement exactly that RFC and nothing else, ever. In all other cases I think it is a mistake.

Domain Logic

The domain logic of an application has to evolve with the application by definition. The domain objects used for that are supposed to model the problem at hand precisely, in a way that allows domain logic to be built that is easy to understand and to evolve as requirements change. Properties that are not used by an application only hide the important bits of a domain object, and if a new feature is added it must be possible to adjust the domain object to reflect that. By using a class like KCalCore::Event as the domain object, these adjustments become largely impossible.

The consequence is that we employ workarounds everywhere. KCalCore doesn’t provide what you need? Simply store it as a “custom property”. We don’t have a class for calendars? Let’s use Akonadi::Collection with some custom attributes. Mechanisms have been designed to extend these rigid structures so we can at least work with them, but that only leads to more complex code that is ever harder to understand.

Instead we could write domain logic that expresses precisely what we need, and is easier to understand and maintain.

Zanshin, for instance, took the calendaring domain and applied the Getting Things Done (GTD) methodology to it. It takes a rather simple approach to todos and initially only required a description, a due date and a state. However, it introduced the notion that only “projects” can have sub-todos. This restriction needs to be reflected in the domain model and implemented in the domain logic.
Because there are no projects in KCalCore, it was simply defined that todos with a magic property “X-Project” are projects. There’s nothing wrong with that in itself, but you don’t want to litter your code with “if (todo->hasProperty(X-Project))”. So what do you do? You create a wrapper. And that wrapper is now already your new domain object with a nice interface. Kevin fortunately realized that we can do better, and rewrote Zanshin with its own custom domain objects, which interface with the KCalCore containers in a thin translation layer to Akonadi. This made the code much clearer, and keeps those “x-property” workarounds in one place only.
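
To make that a bit more concrete, here is a minimal sketch of what such a wrapper can look like; the property accessors are schematic (mirroring the hasProperty() call above) rather than the exact KCalCore API:

// Sketch of a thin GTD domain object wrapping a KCalCore todo.
// hasProperty() is a schematic stand-in for KCalCore's custom-property mechanism.
class GtdTask
{
public:
    explicit GtdTask(const KCalCore::Todo::Ptr &todo) : mTodo(todo) {}

    QString title() const { return mTodo->summary(); }
    QDate dueDate() const { return mTodo->dtDue().date(); }
    bool isDone() const { return mTodo->isCompleted(); }

    // The notion of a "project" exists only in the domain model; the storage
    // format knows nothing about it beyond the magic property.
    bool isProject() const { return mTodo->hasProperty("X-Project"); }

private:
    KCalCore::Todo::Ptr mTodo;
};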

Layering

A useful approach to thinking about application architecture is, IMO, layers. It’s not a silver bullet, and shouldn’t be taken to excess I think, but in some cases layers do make a lot of sense. I suggest thinking about the following layers:

  • The presentation layer: Displays stuff and nothing else. This is where you expose your domain model to the UI, and where your QML sits.
  • The domain layer: The core of the application. This is where all the useful magic happens.
  • The data access layer: A thin translation layer between domain and storage. It makes it possible to use the same storage layer from multiple domains and to replace the storage layer without replacing all the rest.
  • The storage layer: The layer that persists the domain model. Akonadi.

By keeping these layers in mind we can do a better job of keeping the coupling at a reasonable level, allowing individual components to change as required.

The presentation layer is required in any case if we want to move to QML. With QML we can no longer have half of the domain logic in the UI code, and most of the domain model should probably be exposed as a model that is directly accessible by QML.
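
As a rough illustration of that last point (a sketch only, with a made-up TaskItem type standing in for a real domain object), exposing the domain model to QML essentially means providing a model along these lines:

#include <QAbstractListModel>

// Illustrative only: TaskItem stands in for whatever domain object the application defines.
struct TaskItem
{
    QString title;
    bool done;
};

class TaskModel : public QAbstractListModel
{
    Q_OBJECT
public:
    enum Roles { TitleRole = Qt::UserRole + 1, DoneRole };

    explicit TaskModel(QObject *parent = 0) : QAbstractListModel(parent) {}

    int rowCount(const QModelIndex &parent = QModelIndex()) const
    {
        return parent.isValid() ? 0 : mTasks.size();
    }

    QVariant data(const QModelIndex &index, int role = Qt::DisplayRole) const
    {
        if (!index.isValid() || index.row() >= mTasks.size())
            return QVariant();
        const TaskItem &task = mTasks.at(index.row());
        if (role == TitleRole)
            return task.title;
        if (role == DoneRole)
            return task.done;
        return QVariant();
    }

    // These role names are what a QML delegate refers to (model.title, model.done).
    QHash<int, QByteArray> roleNames() const
    {
        QHash<int, QByteArray> roles;
        roles[TitleRole] = "title";
        roles[DoneRole] = "done";
        return roles;
    }

private:
    QList<TaskItem> mTasks;
};

The QML side then only ever sees the roles of the domain model, never the storage types.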

The data access layer is where Akonadi provides a standardized interface for all data, so multiple applications can share the same storage layer. This is currently made up of e.g. KCalCore for calendars, the Akonadi client API, and a couple of Akonadi objects, such as Akonadi::Item and Akonadi::Collection. As this layer defines what data can be accessed by all applications, it needs to be flexible and will likely have to be evolved frequently.

The way forward

For Akonadi’s client API, aka the data access layer, I plan on defining a set of interfaces for things like calendars, events, mail folders, emails, etc. This should eventually replace KCalCore, KContacts and friends as the canonical interface to the data.
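
None of this is designed yet, so purely to illustrate the direction, such an interface could look roughly like the following sketch (all names are invented for the example and are not the planned API):

// Hypothetical data access interface for calendars; the names are invented for
// illustration, not the actual (or planned) Akonadi client API.
struct DomainEvent
{
    QString uid;
    QString summary;
    QDateTime start;
    QDateTime end;
};

class CalendarAccess
{
public:
    virtual ~CalendarAccess() {}

    // Enumerate the calendars (folders) available in storage.
    virtual QStringList calendarIds() const = 0;

    // Fetch the events of one calendar within a date range, already translated
    // into domain-level objects.
    virtual QList<DomainEvent> events(const QString &calendarId,
                                      const QDate &from, const QDate &to) const = 0;

    // Persist a domain-level event; the implementation decides how this maps
    // onto the storage layer.
    virtual void store(const QString &calendarId, const DomainEvent &event) = 0;
};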

Applications should eventually move to their own domain logic implementations. For reading and structuring data, models are IMO a suitable tool, and if we design them right this will also pave the way for QML interfaces. Of course e.g. KCalCore still has its uses for its calendaring routines, or as a serialization library to create iTip messages, but we should IMO stop using it for everything. The same of course applies to KContacts.

What we still could do, IMO, is share some domain logic between applications, including some domain objects. A KDEPIM::Domain::Contact could be used across applications, just like KContacts::Addressee was. This keeps different applications from implementing the same logic, but of course also introduces coupling between them again.

What IMO has to stay separate is the data access layer, which implements an interface to the storage layer, and which doesn’t necessarily conform to the domain layer (you could e.g. store “blog posts” as notes in storage). This separation is IMO useful, as I expect the application domain to evolve separately from what actual storage backends provide (see Zanshin).

This is of course quite a chunk of work that won’t happen at once. But we need to know now where we want to end up in a couple of years, if we intend to ever get there.


Putting the code where it belongs

by cmollekopf in Finding New Ways… at 01:21, Friday, 03 October

I have been working on better ways to write asynchronous code. In this post I’m going to analyze one of our current tools, KJob, in terms of how it helps us write asynchronous code and what is missing. I’m then going to present my prototype solution to address these problems.

KJob

In KDE we have the KJob class to wrap asynchronous operations. KJob gives us a framework for progress and error reporting, a uniform start method, and by subclassing it we can easily write our own reusable asynchronous operations. Such an asynchronous operation typically takes a couple of arguments and returns a result.

A KJob, in its simplest form, is the asynchronous equivalent of a function call:

int doSomething(int argument) {
    return getNumber(argument);
}
struct DoSomething : public KJob {
    DoSomething(int argument) : mArgument(argument) {}

    void start() {
        KJob *job = getNumberAsync(mArgument);
        connect(job, SIGNAL(result(KJob*)), this, SLOT(onJobDone(KJob*)));
        job->start();
    }

    int mResult;
    int mArgument;

private slots:
    void onJobDone(KJob *job) {
        mResult = static_cast<GetNumberJob*>(job)->result(); // GetNumberJob: hypothetical job type returned by getNumberAsync()
        emitResult();
    }
};

What you’ll notice immediately is that this involves a lot of boilerplate code. It also introduces a lot of complexity into a seemingly trivial task. This is partially because we have to create a class when we actually wanted a function, and partially because we have to use class members to replace the variables on the stack that we don’t have available during an asynchronous operation.

So while KJob gives us a tool to wrap asynchronous operations in a way that they become reusable, it comes at the cost of quite a bit of boilerplate code. It also means that what can be written synchronously in a simple function, requires a class when writing the same code asynchronously.

Inversion of Control

A typical operation is of course slightly more complex than doSomething, and often consists of several (asynchronous) operations itself.

What in imperative code looks like this:

int doSomethingComplex(int argument) {
    return operation2(operation1(argument));
}

…results in an asynchronous operation that is scattered over multiple result handlers somewhat like this:

...
void start() {
    KJob *job = operation1(mArgument);
    connect(job, SIGNAL(result(KJob*)), this, SLOT(onOperation1Done(KJob*)));
    job->start();
}

void onOperation1Done(KJob *operation1Job) {
    KJob *job = operation2(operation1Job->result());
    connect(job, SIGNAL(result(KJob*)), this, SLOT(onOperation2Done(KJob*)));
    job->start();
}

void onOperation2Done(KJob *operation2Job) {
    mResult = operation2Job->result();
    emitResult();
}
...

We are forced to split the code over several functions due to the inversion of control introduced by handler-based asynchronous programming. Unfortunately these additional functions (the handlers) that we are now forced to use are not helping the program structure in any way. This manifests itself also in the rather useless function names that typically follow a pattern such as on”$Operation”Done() or similar. Further, because the code is scattered over functions, values that are available on the stack in a synchronous function have to be stored explicitly as class members, so they are available in the handler where they are required for a further step.

The traditional way to make code easy to comprehend is to split it up into functions that are then called by a higher-level function. This kind of function composition is no longer possible with asynchronous programs using our current tools. All we can do is chain handler after handler. Due to the lack of this higher-level function that composes the functionality, a reader is also forced to read every single line of the code, instead of simply skimming the function names and only drilling deeper if more detailed information about the inner workings is required.
Since we are no longer able to structure the code in a useful way using functions, only classes, and in our case KJobs, are left to structure the code. However, creating sub-jobs is a lot of work when all you need is a function, and while it helps the structure, it scatters the code even more, making it potentially harder to read and understand. Due to this we also often end up with large and complex job classes.

Last but not least, we lose all the usual control structures through the inversion of control. If you write asynchronous code you don’t have the if’s, for’s and while’s available that are fundamental to writing code. Well, obviously they are still there, but you can’t use them as usual because you can’t plug a complete asynchronous operation inside an if{}-block. The best you can do is initiate the operation inside the imperative control structures and deal with the results later on in handlers. Because we need control structures to build useful programs, these are usually emulated by building complex statemachines where each function depends on the current class state. A typical (anti)pattern of that kind is a for loop creating jobs, with a decreasing counter in the handler to check if all jobs have been executed. These statemachines greatly increase the complexity of the code, are highly error-prone, and make larger classes incomprehensible without drawing complex state diagrams (or simply staring at the screen long enough while tearing your hair out).
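
Just to make that pattern explicit, here is a schematic sketch (not taken from any real class; fetchFolderAsync() is a hypothetical factory returning an unstarted KJob):

// The counter-based statemachine anti-pattern: start one sub-job per item,
// then count completions in the handler.
class FetchAllFoldersJob : public KJob
{
    Q_OBJECT
public:
    void start();
private slots:
    void onFetchDone(KJob *job);
private:
    QStringList mFolders;   // what to fetch
    QStringList mErrors;    // accumulated error messages
    int mPendingJobs;       // the hand-rolled statemachine
};

void FetchAllFoldersJob::start()
{
    mPendingJobs = 0;
    foreach (const QString &folder, mFolders) {
        KJob *job = fetchFolderAsync(folder);
        connect(job, SIGNAL(result(KJob*)), this, SLOT(onFetchDone(KJob*)));
        job->start();
        ++mPendingJobs;
    }
    // If mFolders happens to be empty, nothing ever completes and emitResult()
    // is never called -- exactly the kind of bug this pattern invites.
}

void FetchAllFoldersJob::onFetchDone(KJob *job)
{
    if (job->error()) {
        mErrors << job->errorString(); // state that has to live in class members
    }
    if (--mPendingJobs == 0) {         // only when the counter reaches zero can we continue
        emitResult();
    }
}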

Oh, and before I forget, of course we also no longer get any useful backtraces from gdb as pretty much every backtrace comes straight from the eventloop and we have no clue what was happening before.

As a summary, inversion of control causes:

  • code is scattered over functions that are not helpful to the structure
  • composing functions is no longer possible, since what would normally be written in a function is written as a class.
  • control structures are not usable, a statemachine is required to emulate this.
  • backtraces become mostly useless

As an analogy, your typical asynchronous class is the functional equivalent of a single synchronous function (often over 1000 lines of code!) that uses gotos and some local variables to build control structures. I think it’s obvious that this is a pretty bad way to write code, to say the least.

JobComposer

Fortunately we received a new tool with C++11: lambda functions.
Lambda functions allow us to write functions inline with minimal syntactical overhead.

Armed with this I set out to find a better way to write asynchronous code.

A first obvious solution is to simply write the result handler of a slot as lambda function, which would allow us to write code like this:

make_async(operation1(), [] (KJob *job) {
    //Do something after operation1()
    make_async(operation2(job->result()), [] (KJob *job) {
        //Do something after operation2()
        ...
    });
});

It’s a simple and concise solution; however, you can’t really build reusable building blocks (like functions) with it. You’ll get one nested tree of lambdas that depend on each other by accessing the results of the previous jobs. The problem that makes this solution non-composable is that the lambda function which we pass to make_async starts the asynchronous task, but also extracts results from the previous job. Therefore you couldn’t, for instance, return an async task containing operation2 from a function (because in the same line we extract the result of the previous job).

What we require instead is a way of chaining asynchronous operations together, while keeping the glue code separated from the reusable bits.

JobComposer is my proof of concept to help with this:

class JobComposer : public KJob
{
    Q_OBJECT
public:
    //KJob start function
    void start();

    //This adds a new continuation to the queue
    void add(const std::function<void(JobComposer&, KJob*)> &jobContinuation);

    //This starts the job, and connects to the result signal. Call from continuation.
    void run(KJob*);

    //This starts the job, and connects to the result signal. Call from continuation.
    //Additionally an error case continuation can be provided that is called in case of error, and that can be used to determine whether further continuations should be executed or not.
    void run(KJob*, const std::function<bool(JobComposer&, KJob*)> &errorHandler);

    //...
};

The basic idea is to wrap each step using a lambda-function to issue the asynchronous operation. Each such continuation (the lambda function) receives a pointer to the previous job to extract results.

Here’s an example how this could be used:

auto task = new JobComposer;
task->add([](JobComposer &t, KJob*){
    KJob *op1Job = operation1();
    t.run(op1Job, [](JobComposer &t, KJob *job) {
        kWarning() << "An error occurred: " << job->errorString();
        return false; // abort further continuations
    });
});
task->add([](JobComposer &t, KJob *job){
    KJob *op2Job = operation2(static_cast<Operation1*>(job)->result());
    t.run(op2Job, [](JobComposer &t, KJob *job) {
        kWarning() << "An error occurred: " << job->errorString();
        return false; // abort further continuations
    });
});
task->add([](JobComposer &t, KJob *job){
    kDebug() << "Result: " << static_cast<Operation2*>(job)->result();
});
task->start();

What you see here is the equivalent of:

int tmp = operation1();
int res = operation2(tmp);
kDebug() << res;

There are several important advantages to using this over writing traditional asynchronous code using only KJob:

  • The code above, which would normally be spread over several functions, can be written within a single function.
  • Since we can write all code within a single function we can compose functions again. The JobComposer above could be returned from another function and integrated into another JobComposer.
  • Values that are required for a certain step can either be extracted from the previous job, or simply captured in the lambda functions (no more passing of values as members).
  • You only have to read the start() function of a job that is written this way to get an idea of what is going on, not the complete class.
  • A “backtrace” functionality could be built into JobComposer that would allow getting useful information about the state of the program even though we’re in the event loop.

This is of course only a rough prototype, and I’m sure we can craft something better. But at least in my experiments it proved to work very nicely.
What I think would be useful as well are a couple of helper jobs that replace the missing control structures, such as a ForeachJob that triggers a continuation for each result, or a job that executes tasks in parallel (instead of serially, as JobComposer does).
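
To give an idea of what such a helper could look like, here is a minimal sketch of a parallel composite job in the spirit of the ParallelCompositeJob used in the showcase below. It is my own sketch, not the code from that showcase: it simply starts all added jobs at once, remembers the first error, and finishes once the last subjob has delivered its result.

#include <KJob>
#include <QList>

class ParallelCompositeJob : public KJob
{
    Q_OBJECT
public:
    void addSubjob(KJob *job)
    {
        mPending << job;
    }

    void start()
    {
        if (mPending.isEmpty()) {
            emitResult();
            return;
        }
        //Start all subjobs right away instead of chaining them serially.
        foreach (KJob *job, mPending) {
            connect(job, SIGNAL(result(KJob*)), this, SLOT(onResult(KJob*)));
            job->start();
        }
    }

private slots:
    void onResult(KJob *job)
    {
        if (job->error() && !error()) {
            //Remember the first error only (assumed policy).
            setError(job->error());
            setErrorText(job->errorString());
        }
        mPending.removeOne(job);
        if (mPending.isEmpty()) {
            //The last subjob has finished, so the composite job is done.
            emitResult();
        }
    }

private:
    QList<KJob*> mPending;
};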

As a little showcase I rewrote a job of the imap resource.
You’ll see a bit of function composition, a ParallelCompositeJob that executes jobs in parallel, and you’ll notice that only relevant functions are left and all class members are gone. I find the result a lot better than the original, and the refactoring was trivial and quick.

I’m quite certain that if we build these tools, we can vastly improve our asynchronous code, making it easier to write, read, and debug.
And I think it’s past time that we built proper tools.


A new folder subscription system

by cmollekopf in Finding New Ways… at 12:40, Wednesday, 21 May

Wouldn’t it be great if Kontact would allow you to select a set of folders you’re interested in, that setting would automatically be respected by all your devices and you’d still be able to control for each individual folder whether it should be visible and available offline?

I’ll line out a system that allows you to achieve just that in a groupware environment. I’ll take Kolab and calendar folders as example, but the concept applies to all groupware systems and is just as well applicable to email or other groupware content.

User Scenarios

  •  Anna has access to hundreds of shared calendars, but she usually only uses a few selected ones. She therefore has only a subset of the available calendars enabled; these are shown to her in the calendar selection dialog, are available for offline usage, and also get synchronized to her mobile phone. If she realizes she no longer requires a calendar, she simply disables it and it disappears from Kontact, the web client and her phone.
  • Joe works with a small team that shares their calendars with him. Usually he only uses the shared team calendar, but sometimes he wants to quickly check if his colleagues are in the office before calling them, and he’s often doing this on the train with an unreliable internet connection. He therefore disables the team members’ calendars but still enables synchronization for them. This hides the calendars from all his devices, but he can still quickly enable them on his laptop while being offline.
  • Fred has a mailing list folder that he always reads on his mobile, but never on his laptop. He keeps the folder enabled, but hides it on his laptop so his folder list isn’t unnecessarily cluttered.

What these scenarios tell us is that we need a flexible mechanism to specify the folders we want to see and the folders we want synchronized. Additionally, in today’s world of multiple devices, we want to synchronize the selection of folders that are important to us: the calendar I have just enabled in Kontact is one I’d likely also want to see on my phone. However, we always want to keep the possibility to alter that default setting on specific devices.

Current State

If you’re using a Kolab Server, you can use IMAP subscriptions to control what folders you want to see on your devices. Kontact currently respects that setting in that it makes all subscribed folders visible and available for offline usage. Additionally you have local subscriptions to disable certain folders (so they are not downloaded or displayed) on a specific device. That is not very flexible though, and personally I ended up having pretty much every folder I ever used enabled, leading to cluttered folder selections and lots of bandwidth and storage space used to keep everything available offline.

To change the subscription state, KMail offers to open the IMAP subscription dialog, which allows toggling the subscription state of individual folders. This works, but is not well integrated (it’s a separate dialog), and it is also hard to integrate elsewhere since it’s IMAP-specific.

Because the solution is not well integrated, it tends to be rather static in my experience. I tend to subscribe to all folders that I ever use, which results in a very long and cluttered folder-list.

A new integrated subscription system

What would be much better is if the back-end could provide a default setting that is synchronized to the server, and we could quickly enable or disable folders as we require them. Additionally we could override the default settings for each individual folder to optimize our setup as required.

To make the system more flexible without making it unnecessarily complex, we need a per-folder setting that allows overriding a backend-provided default value. Additionally we need an interface for applications to alter the subscription state through Akonadi (instead of bypassing it). This allows for a well-integrated solution that doesn’t rely on a separate, IMAP-specific dialog.

Each folder requires the following settings:

  • An enabled/disabled state that provides the default value for synchronizing and displaying a folder.
  • An explicit preference to synchronize a folder.
  • An explicit preference to make a folder visible.

A folder is visible if:

  • there is an explicit preference that the folder is visible, or
  • there is no explicit preference on visibility and the folder is enabled.

A folder is synchronized if:

  • there is an explicit preference that the folder is synchronized, or
  • there is no explicit preference on synchronization and the folder is enabled.

The resource backend can synchronize the enabled/disabled state, which should give a default experience as expected. Additionally, it is possible to override that default state using the explicit preferences on a per-folder level.
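
Expressed as code, the resolution of these settings boils down to a few lines. The following is only a sketch of the rules above with hypothetical types and names; Akonadi's actual attribute API will look different.

//Hypothetical per-folder state: a synchronized default plus two local-only overrides.
enum Preference {
    NoPreference,   //fall back to the enabled/disabled default
    PreferEnabled,  //explicit local override: on
    PreferDisabled  //explicit local override: off
};

struct FolderState {
    bool enabled;                   //default state, synchronized by the resource backend
    Preference displayPreference;   //explicit preference to make the folder visible
    Preference syncPreference;      //explicit preference to synchronize the folder
};

bool isVisible(const FolderState &folder)
{
    if (folder.displayPreference != NoPreference) {
        return folder.displayPreference == PreferEnabled;
    }
    return folder.enabled;
}

bool isSynchronized(const FolderState &folder)
{
    if (folder.syncPreference != NoPreference) {
        return folder.syncPreference == PreferEnabled;
    }
    return folder.enabled;
}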

User Interaction

By default you would be working with the enabled/disabled state, which is synchronized by the resource backend. If you enable a folder it becomes visible and synchronized; if you disable it, it becomes invisible and is no longer synchronized. Since the enabled/disabled state is a single boolean, we can build a very simple user interface for it and integrate it into the primary UI.

Because the enabled/disabled state is synchronized, an enabled calendar will automatically appear on your MyKolab.com web interface and your mobile. One click, and you’re all set.

Example mockup of folder sync properties

In the advanced settings, you can then override visibility and synchronization preference at will as a local-only setting, giving you full flexibility. This can be hidden in a properties dialog, so it doesn’t clutter the primary UI.

This makes the default use case very simple (you either want a folder or you don’t), while we keep full flexibility in overriding the default behaviour.

IMAP Synchronization

The IMAP resource will synchronize the enabled/disabled state with IMAP subscriptions if you have subscriptions enabled in the resource. This way we can use the enabled/disabled state as an interface to change the subscriptions, and we don’t have to use a separate dialog to toggle that state.
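
A hypothetical sketch of that mapping is below; the two helper functions merely stand in for the actual IMAP subscription jobs the resource would issue.

#include <QDebug>
#include <QString>

//Stand-ins for the real subscription jobs (hypothetical).
static void subscribeToMailbox(const QString &mailbox)
{
    qDebug() << "IMAP SUBSCRIBE" << mailbox;
}

static void unsubscribeFromMailbox(const QString &mailbox)
{
    qDebug() << "IMAP UNSUBSCRIBE" << mailbox;
}

//Called when the enabled state of a folder changes and the resource is
//configured to use IMAP subscriptions.
void onFolderEnabledChanged(const QString &mailbox, bool enabled)
{
    if (enabled) {
        subscribeToMailbox(mailbox);
    } else {
        unsubscribeFromMailbox(mailbox);
    }
}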

Interaction with existing mechanisms

This mechanism can probably replace local subscriptions eventually. However, in order not to break existing setups I plan to leave local subscriptions working as they currently are.

Conclusion

By implementing this proposal we get the required flexibility to make sure the resources of our machine are optimally used, while different clients still interact with each other as expected. Additionally we gain a uniform interface to enable/disable a collection that can be synchronized by backends (e.g. using the IMAP subscription state). This will allow applications to nicely integrate this setting, and should therefore make this feature a lot easier to use and overall more agile.

New doors are opened as this will enable us to do on-demand loading of folders. By having the complete folder list available locally (but disabled by default and thus hidden), we can use the collections to load their content temporarily and on demand. Want to quickly look at that shared calendar you don’t have enabled? Simply search for it and have a quick look; the data is synchronized on demand and the folder is gone as quickly as you found it, once it is no longer required. This will further diminish the need to have folders constantly cluttering your folder list.

So, what do you think?


SFD Call to Action: Let the STEED run!

by Georg Greve in freedom bits at 09:00, Friday, 20 September

Information Technology is a hype-driven industry, a fact that has largely contributed to the current situation in which the NSA and GCHQ have unprecedented access to global communication and information, including for a very Realpolitik-based approach to how that information may be used. Economic and political manipulation may not be how these measures are advertised, but it may very well be the actual motivation. It’s the economy, stupid!

Ever since all of this started, many people have asked the question how to protect their privacy. Despite some attempts, there is still a lack of comprehensive answers to this question. There is an obvious answer that most mainstream media seem to have largely missed: software freedom advocates had it right all along. You cannot trust proprietary cryptography, or proprietary software. If a company has a connection to the legal nexus of the United States, it is subject to US law and must comply with the demands of the NSA and other authorities. And if that company also provides proprietary software, it is virtually impossible for you to know what kind of agreements it has with the NSA, as most of its management would prefer not to go to jail. But one would have to be very naive to think the United States is the only country where secret agreements exist.

Security unfortunately is a realm full of quacks and it is just as easy to be fooled as it is to fool yourself. In fact many of the discussions I’ve had over the past weeks painfully reminded me of what Cory Doctorow called “Schneier’s Law” although Bruce Schneier himself points out the principle has been around for much longer. He has dated it back to Charles Babbage in 1864:

One of the most singular characteristics of the art of deciphering is the strong conviction possessed by every person, even moderately acquainted with it, that he is able to construct a cipher which nobody else can decipher.

So in my experience it makes good sense to listen to what Bruce Schneier and a few others have to say, which is why I think his guide to staying secure on the internet is probably something everyone should have read. In that list of recommendations there are some points that ought to read familiar:

4) Be suspicious of commercial encryption software, especially from large vendors. My guess is that most encryption products from large US companies have NSA-friendly back doors, and many foreign ones probably do as well. It’s prudent to assume that foreign products also have foreign-installed backdoors. Closed-source software is easier for the NSA to backdoor than open-source software. Systems relying on master secrets are vulnerable to the NSA, through either legal or more clandestine means.

5) Try to use public-domain encryption that has to be compatible with other implementations. For example, it’s harder for the NSA to backdoor TLS than BitLocker, because any vendor’s TLS has to be compatible with every other vendor’s TLS, while BitLocker only has to be compatible with itself, giving the NSA a lot more freedom to make changes. And because BitLocker is proprietary, it’s far less likely those changes will be discovered. Prefer symmetric cryptography over public-key cryptography. Prefer conventional discrete-log-based systems over elliptic-curve systems; the latter have constants that the NSA influences when they can.

“So you were right, good for you” I hear you think. The point I am trying to make is a different one. It has been unbelievably difficult in the past to consistently do the right thing, the thing that would now give us the answers to the questions posed by the NSA and others. Both the Free Software Foundation Europe (FSFE) as an organisation and Kolab as a technology have a very long history in that regard. In fact, if you’ve read the background of MyKolab.com, you’ll hopefully see the same kind of approach there as well. Having been involved with both has given me a unique perspective.

So when Bruce Schneier is listing GnuPG as the first of several applications he is using and recommending to stay secure, I can’t help but find this rather ironic and rewarding at the same time. Because I know what has been necessary for this crucial piece of software to come so far. Especially Werner Koch, but also Markus Brinkmann are two people all of us are indebted to, even though most people don’t realize it. Excellent software developers, but entrepreneurs with much room for improvement and (I’m sorry, guys) horrible at marketing and fundraising. So they pretty much exploited themselves over many years in order to be able to keep the development going because they knew their work was essential. Over the course of the past 12 years the entire Kolab team and especially individuals such as Bernhard Reiter at Intevation have always done what they could to involve them in development projects and push forward the technology.

And we will continue to do that, both through MyKolab.com and some other development projects we are pushing with Kolab Systems for customers that have an interest in these technologies. But they have a whole lot more in mind than we could make possible immediately, such as dramatically increasing the usability of end-to-end cryptography. The concept they have developed is based on over a decade of working on the obstacles to end-user adoption. It’s called STEED — Usable End-to-End Encryption and has been available for two years now. I think it’s time for it to be finalized and implemented.

That’s why I am using tomorrow’s Software Freedom Day to ask for volunteers to help them run a crowdfunding campaign so they can finally put it into practice, in the open, to everyone’s benefit. Because that’s going to contribute more than just a little bit towards a world where privacy will once more be the default. So please help spread the word and let the STEED run!

Groklaw shutting down.

by Georg Greve in freedom bits at 12:31, Tuesday, 20 August

Today is a sad day for the world of Information Technology and the cause of software freedom. PJ just announced she’ll be shutting down Groklaw.

It’s hard to overestimate the role that Groklaw has played in the past years. Many of us, myself included, have worked with Groklaw over the years. I still take pride in the fact that my article about the dangers of OOXML for Free Software and Open Standards might have been the first of many calls to arms on this topic. Or how Groklaw followed the Microsoft antitrust case that FSFE fought for and with the Samba team, and won for all of software freedom; Groklaw was essential in helping us counter some of the Microsoft spin-doctoring. Or the Sean Daly interview with Volker Lendecke, Jeremy Allison, Carlo Piana and myself for Groklaw after the landslide victory against Microsoft in court.

I remember very well how giddy I still was during the interview for having realized that Microsoft would not be able to take down FSFE, because that would have been the consequence had they gotten their way. We bet our life’s work at the time. And won. The relief was incredible.

So there is a good deal of personal sadness to hear about this, as well as a general concern which Paul Adams just summarized rather well on the #kolab IRC channel:

the world of IT is just that little bit less safe without groklaw

And it’s true. Groklaw has been the most important platform to counter corporate spin doctoring, has practiced an important form of whistleblowing long before Wikileaks, and has been giving alternative and background perspectives on some of the most important things going on inside and outside the media limelight. Without Groklaw, all of us will lack that essential information.

So firstly, I’d like to thank PJ for all the hard years of work on Groklaw. Never having had the pleasure of meeting her in real life, I still feel that I know her from the conversations we had over email over so many years. And I know how she got weary of the pressure, the death threats and the attempts at intimidating her into silence. Thank you for putting up with it for so long, and for doing what you felt was right and necessary despite the personal cost to yourself! The world needs more people like you.

But email was the only channel of communication she was comfortable using, for reasons of personal safety. So when Edward Snowden revealed the PRISM program, when Lavabit and Silent Circle shut down, and when the boyfriends of journalists got detained at Heathrow, she apparently drew the conclusion that this was no longer good enough to protect her own safety and the safety of the people she was in communication with.

That she chose MyKolab.com as the service to confide in with her remaining communication lines at least to me confirms that we did the right thing when we launched MyKolab.com and also that we did the right thing in the way we did it. But it cannot mitigate the feeling of loss for seeing Groklaw fall victim to the totalitarian tendencies our societies are exhibiting and apparently willingly embracing over the past years.

While we’re happy to provide a privacy asylum in a safe legislation, society should not need them. Privacy should be the default, not the exception.

In January this year we started the MyKolab beta phase, and last week we finally moved it to its production environment, just in time for the Swiss national day. This seemed oddly fitting, since the Swiss national day celebrates the country’s independence and self-determination, won as the Swiss liberated themselves from the feudal system. So when Bruce Schneier wrote about how the Internet right now resembles a feudal system, it was too tempting an opportunity to miss. And of course PRISM and Tempora played their part in the timing as well, although we obviously had no idea this leak was coming when we started the beta in January.

Anyhow. So now MyKolab.com has its new home.

Step 1: Hardware & Legislation

It should be highlighted that we actually run this on our own hardware, in a trustworthy, secure data centre, in a rack which we physically control. Because that is where security starts, really. Also, we run this in Switzerland, with a Swiss company, and for a reason. Most people do not seem to realize the level of protection data enjoys in Switzerland. We tried to explain it in the FAQ, and the privacy information. But it seems that too many people still don’t get it.

Put frankly, in these matters, legislation trumps technology and even cryptography.

Because when push comes to shove, people would rather not go to jail. So no matter what snake oil someone may be trying to sell you about your data being secure because “it is encrypted on our server with your passphrase, so even we don’t have access” – choice of country and legislation trumps it all.

As long as server-side cryptography is involved a provider can of course access your data even when it is encrypted. Especially when the secret is as simple as your password which all your devices submit to the server every time you check mail. Better yet, when you have push activated, your devices even keep the connection open. And if the provider happens to be subject to a requirement to cooperate and hand over your data, of course they will. Quite often they don’t even necessarily know that this is going on if they do not control the physical machines.

XKCD 538: Security

So whenever someone tries to serve you that kind of snake oil, you should avoid that service at all cost, because you do not know which other lies you are not catching them in. And yes, it is a true example, unfortunately. The romantic picture of the internet as a third place above nation states has never had much evidence on its side. Whoever was harbouring these notions and missed XKCD’s take on the matter should definitely have received their wakeup call from Lavabit and Silent Circle.

The reality of the matter is:

  1. There is no digital security without physical security, and
  2. Country and applicable legislation always win.

Step 2: Terms of Service & Pricing

So: legislation, hardware. What else? Terms of Service come to mind. Too often they are deliberately written to obfuscate, or to frankly turn you into the product. Because writing software, buying hardware, physical security, maintaining systems, staffing help desks, electricity: all these things cost money. If you do not pay for it, make sure you know who does. Because otherwise it’s like the old poker adage: if you cannot tell who is the weakest player at the table, it’s you. Likewise for any on-line service: if you cannot tell who is paying for this, it’s probably you.

Sometimes this may just be in ways you did not expect, or may not have been aware of. So while most people only look for the lowest price, the question you actually should be asking yourself is: am I paying enough for this service that it can be profitable even when it does everything right and pays all its employees fairly, even if they have families and perhaps even mortgages?

The alternatives are services that are run by enthusiasts for the common good, or subsidized by third parties, sometimes for marketing purposes. If a service is run by an enthusiast, the question is how long they can afford to run it well, and what will happen if their priorities or interests change. Plus, few enthusiasts are willing to dish out the kind of cash that comes with a physically controlled, secure system in a data centre. So more often than not, this is either a box in someone’s basement that pretty much anyone has access to while they are out for a pizza or a movie, or, at least as problematic, a cheap VM at some provider with unknown physical, legislative and technical security.

If it is a subsidized service, it’s worse. Just like subsidies on food in Europe destroy the farming economy in Africa, making almost a whole continent dependent on charity, subsidized services cannibalize those that are run for professional interest.

In this case that means they damage the professional development community around Open Source, leading to less Free Software being developed. Why is that? Because such subsidized services typically do not bother with contributing upstream, which is a pure cost factor, and since the service is already charity, no-one feels there is a problem in not supporting the upstream. At the same time they destroy the value proposition of those services that do contribute upstream. So the developers of the upstream technologies need to find other ways to support their work on Open Source, which typically means they get to spend less time on Free Software development.

This is the well-meaning counterpart to providers who simply take the software, do not bother to contribute upstream, and use it to provide a commercial service that near-automatically comes in below the price of a service that factors in the upstream contribution and ongoing development and prices itself sustainably. The road to hell and all that.

None of this is anything we wanted to contribute to with MyKolab.com.

So we made sure to write Terms of Service that were as good, honest and clear as we could make them, discussed them with the people behind the important Terms of Service; Didn’t Read project, and even link to that project from our own Terms of Service so people have a fair chance to compare them without being lawyers or even reading them.

Step 3: Contributing to the Commons

Roundcube++ - The Kolab Web Client

We also were careful not to choose a pricing point that would cannibalize anything but proprietary software. Because we pay the developers, all of whom write Open Source exclusively. This has made sure that we have been the largest contributor to the Roundcube web mailer by some margin, for instance. In doing so, we deliberately made sure to keep the project independent and did not interfere with its internal management. Feel free to read the account of Thomas Brüderli on that front.

So while hundreds of thousands of sites use Roundcube world wide, and it is popular with millions of users, only a handful of companies bother to contribute to its development, and none as much as Kolab Systems AG, which is the largest contributor by orders of magnitude. Don’t get me wrong. That’s all fine. We are happy about everyone who makes use of the software we develop, and we firmly believe there is a greater good achieved through Free Software.

But the economics are nonetheless the same: The developers working on Roundcube have lives, families even, monthly bills to pay, and we pay them every month to continue working on the technology for everyone’s good. Within our company group, similar things can probably be said for more than 60 people. And of course there are other parts of our stack that we do not contribute as much to, in some cases we are primarily the beneficiary of others doing the same.

It’s a give and take among companies who operate in this way that works extremely well. But there are some who almost never contribute. And if, as a customer, you choose them over those that are part of the developer community, you are choosing to have less Open Source Software developed.

So looking at contribution to Free Software as one essential criterion for whether the company you are about to choose is acting sustainably or heading towards a tragedy of the commons is something I would certainly suggest you do.

This now brings us to an almost complete list of items you want to check:

  • Physical control, including hardware
  • Legal control, choice of applicable legislation
  • Terms of Service that are honest and fair
  • Contribution to Open Source / Free Software

and you want to make sure you pay enough for all of these to meet the criteria you expect.

Bringing it all together

On all these counts simultaneously, we made sure to put MyKolab.com into the top 10%. Perhaps even the top 5%, because we develop, maintain and publish the entire stack, as a solution, fully Open Source and more radically Open Standards based than any other solution in this area. So in fact you never need to rely upon MyKolab.com continuing to provide the service you want.

You can always continue to use the exact same solution, on your own server, in your own hands.

That is a claim that is unique, as far as I am aware. And you know that whatever you pay for the service never contributes to the development of proprietary software, but contributes to the state of the art in Free Software, available for everyone to take control of their own computing needs, as well as also improving the service itself.

For me, it’s this part that truly makes MyKolab.com special. Because if you ever need to break out of MyKolab.com, your path to self-reliance and control is already built into the system, delivered with and supported by the service itself: It’s called Kolab.

 

Following the disclosures of details on how the United States and other countries are monitoring the world, there has been a global discussion about this subject that has been long overdue. In previous articles I tried to put together what has actually been proven thus far, what that means for society, and what the implications are for businesses around the world.

Now I’d like to take a look at governments. Firstly, of course governments have a functional aspect not entirely unlike business, and of course governments should be conscious about the society and values they promote. Purely on these grounds it would likely be possible to say quite a few things about the post PRISM society.

Secondly, there is of course also the question to what extent governments have known about this previously and may even have colluded with what has been going on, in some cases possibly without democratic support for doing so. It has been pointed out by quite a few journalists that saying “I had no idea” amounts to saying you have not been following technical progress since the typewriter was invented, and there is some truth to that. Although typewriters have also been known to be bugged, of course.

In fact when spending so much time at the United Nations, one of the typical sights would be a diplomat talking on their mobile phone while covering their mouth with one hand in order to ward off the possibility of lip readers. So there is clearly an understanding that trying to know more about anyone you may have common or opposing interests with will give you an advantage, and that everyone is trying to gain that advantage to the best of their ability.

What I think is really at play here are two different things: Having been blind-sided by the actual threat, and having been found naïve.

Defending against bunnies, turning your back on lions

Smart politicians will now have understood that their threat model has been totally off. It’s much easier to intercept that mobile phone call (and get both sides of the conversation) than it is to learn to lip-read, guarantee to speak the same language, and make sure you have line of sight. In other words: they were spending lots of effort protecting the wrong things while ignoring the right things. So there is no way for them to know how vulnerable they have been, what damage arose from that, and what will follow from it for their future work.

So intelligent people should now be very worried indeed. Because either they did not know better, or perhaps they even let a sense of herd safety drag them along into behaviour that has compromised their professional integrity, in so far as it may have exposed their internal thoughts to people they did not want to share them with. If you’ve ever seen how international treaties are being negotiated, it does not take a whole lot of imagination to see how this might be a problem. Given the levels of secrecy and the apparent lack of supervision, if the highest-level politicians truly had no idea, there is also a major concern about possible abuse of the system by those in government to influence the political balance within a country.

Politicians are also romantic

The other part of the surprise seems to stem from a certain romantic notion of friendship among nations harboured by many politicians and deliberately nurtured by nations that do not share such romantic perspectives, most importantly in this context the United States.

The allies of the United States, in particular the European Union, know that the US has these capabilities and is not afraid to use them to gain an advantage for the home team. But for some reason they thought they were part of that home team, because the United States have been telling them they’re best friends forever. It does not lack a certain irony that Germany fell for this, not realizing that the United States are following their default approach abroad, which is commonly referred to as Realpolitik in the US.

So when European politicians suddenly realize that it may be problematic to negotiate free trade agreements with someone who is reading your internal briefings and mails and is listening to your phone calls, it is not so much out of a shock that the US is doing this in general. They know the US is not shy to use force at any level to obtain results. It’s about the fact they’re using these methods universally, no matter who you are. That they were willing to do so against Switzerland, a country in the heart of Europe, should have demonstrated that aptly. Only that in this particular case, EU politicians were hoping to ride on the coat-tails of the US cavalry.

International Organizations

Of course that surprise also betrays the level of collaboration that has been present for a long time. The reason they thought they were part of the home team is that in some cases, they were. So as they were the beneficiaries of this system, working side by side with the United States at the Intergovernmental Organizations to put in place the global power structures that rule the world, this sort of advantage might have seemed very handy and very welcome. Not too many questions were asked, I presume.

But if you’re one of the countries in transition, a country from the developing world, or simply a country that got on the wrong side of the United States and their power block, you now have to wonder: how much worse off are you for having been pushed back in negotiations much further than if the “Northern” power block had not known all your internal assessments, plans and contingencies? And how can Intergovernmental Organizations truly function if all your communications with these organizations are unilaterally transparent to this power block?

It’s time to understand that imbalance, and address it. I know that several countries are aware of this, of course, and some of them are actively seeking ways to address that strategic disadvantage, since parts of our company group have been involved in that. But too many countries do not yet seem to have the appropriate measures in place, nor are they addressing it with sufficient resources and urgency, perhaps out of an underestimation of the analytic capabilities.

The PRISM leaks should have been the wakeup call for these countries. But I’d also expect them to raise their concerns at the Intergovernmental Organizations, asking the secretariats how the IT and communications strategy of these organizations adequately protects the mandate of the organizations, for they can only function if a certain minimum level of confidence can be placed into them and the integrity of their work flow.

Global Power Structures

But on a more abstract level, all of this once more establishes a trend of the United States acting as a nexus of global destabilisation, subject only to national interest. Because it is for the US government to decide which countries to bless with access to that information, and whose information to access. Cooperate and be rewarded. Be defiant and be punished. For example by ensuring your national business champion does not get that deal, since they might just employ their information to ensure a competing US business will. This establishes a gravitation towards pleasing the interests of the United States that I find problematic. As I would find a similar imbalance towards any other nation.

But in this case it is the United States that has moved to “economic policy by spook”, as a good friend recently called it. Although of course there may be other countries doing the same; right now it seems more or less confirmed that this is at least in part collusion at NATO level. Be that as it may, countries need to understand that their sovereignty and economic well-being are highly dependent upon the ability to protect their own information and that of their economy.

Which is why Brazil and India probably feel confirmed in their focus on strategic independence. With virtually every economic sector highly dependent on it, Information Technology has become as fundamental as electricity, roads or water. Perhaps it is time to re-assess to what level governments want to ensure an independent, stable supply that holds up to the demands of their nation.

Estonia’s president recently suggested establishing European cloud providers; other areas of the world may want to pay close attention to this.

The Opportunity Exists, Does The Will?

Let’s say a nation wanted to address these issues. Imagine they had to engineer the entire stack of software. The prospects would be daunting.

Fortunately they don’t have to. Nothing runs your data centres and infrastructures better, and with higher security than Free Software does. Our community has been building these tools for a long time now, and they have the potential to serve as the big equalizer in the global IT power struggle. The UNCTAD Information Economy Reports provide some excellent fact based, neutral analysis of the impact of software freedom on economies around the world.

Countries stand to gain or lose a lot on this central question. Open Source may have been the answer all along, but PRISM has highlighted that the need is both real and urgent.

Any government should be able to answer the following question: What is your policy on a sovereign software supply and digital infrastructure?

If that question cannot be answered, it’s time to get to work. And soon.



After a primer on the realities of today’s world, and the totalitarian tendencies that follow from this environment and our behaviour in it, let’s take a look at what this means for professional use of information technology.

Firstly, it should be obvious that when you use the cloud services of a company, you have no secrets from that company other than the ones it guarantees to keep for you. That promise is only as good as the level of guarantee such a company can make given the legal environment it is situated in, and it is of course subject to the level of trust you can place in the people running and owning the company.

So when using Google Apps for your business, you have no secrets from Google. Same for Office 365 and Microsoft. iCloud and Apple. Also, these companies are known for having very good internal data analytics. Google for instance has been using algorithms to identify employees that are about to leave in order to make them a better offer to keep them on board. Naturally that same algorithm could be used to identify which of your better employees might be susceptible to being head hunted.

Of course no-one will ever know whether that actually took place or whether it contributed to your company losing that important employee to Google. But the simple truth is: in some ways, Google/Microsoft/Apple is likely to know a lot more about your business than you do yourself. That knowledge has value, and it may be tempting to turn that value into shareholder value for any of these businesses.

If you are a US business, or a small, local business elsewhere, that may not be an issue.

But if you are into software, or have more than regional reach, it may become a major concern. Because thanks to what we now know about PRISM, using these services means the US intelligence services also have real-time access to your business and its development. And since FISA explicitly empowers these services to make use of those capabilities for the general interests of the United States, including foreign policy and economic goals, the conclusion is simple: you might just be handing your internal business information to people who are champions for your competition.

Your only protection is your own lack of success. And you might be right, you might be too small for them to use too many resources, because while their input bandwidth is almost unlimited, their output bandwidth for acting upon it of course has limits. But that’s about it.

The US has a long tradition of putting their public services at the disposal of industry, trying to promote their “tax paying home team.” It’s a cultural misunderstanding to assume they would be pulling their punches just because you like to watch Hollywood movies and sympathise with the American Dream.

Which is why the US has been active to promote GM crops in Europe, or uphold the interests of their pharmaceutical industry. Is anyone at Roche reading this? No shareholder is concerned about this? To me it would seem like a good example of what risks are unwittingly taken when you let the CFO manage the IT strategy. Those two professions rarely mix, in my experience.

The United States are not the only nation in the world doing this, of course. Almost every nation has at least a small agency trying to support its own industry in building international business, and the German chancellor typically has a whole entourage of industry representatives when she’s visiting countries that are markets of interests. I guess it’s a tribute to their honesty that the United States made it explicit for its intelligence services to feed into this system in this way.

Several other countries are likely to do the same, but probably not as successfully or aggressively.

Old school on site IT as the solution?

Some people may now feel fairly smart because they did not jump on the public cloud bandwagon. Only that not all of them are as secure as they think they are, because we also learned that access to data does not only happen through the public clouds. Some software vendors, most importantly Microsoft, are also supplying the NSA with priority access to vulnerabilities in their software. Likely they will do their best to manage the pipeline of disclosure and resolution in such a way that there is always at least one way for the NSA to get into your locally installed system in an automated fashion that is not currently publicly known.

This would also explain the ongoing debate about the “NSA back door in Windows”, which was always denied; but the denial could have been carefully concealing this alternative way of achieving the same effect. So running your local installation of Windows is likely a little better for your business secrets than using public cloud services by US businesses, but not by as much as you might want to believe. And it’s not just Windows, of course: Lotus has been called out on the same practice a long time ago, and one may wonder whether the other software vendors avoided doing it, or simply avoided being caught.

Given the discussions among three-letter agencies about wanting that level of access into any software and service originating in the United States, and given the evident lack of public disclosure in this area, a rather large question mark remains. So on-site IT is not necessarily the solution either, unless it is done to certain standards. In all honesty, most installations probably do not meet those at the moment, and the cost associated with doing it properly may be considered excessive for your situation.

So it’s not simple, and not a black-and-white decision between “all on-site and self-run” and “all in a public cloud run by a US business”. There is a whole range of options in between that provide different advantages, disadvantages, costs and risks.

Weighing the risks

So whatever you do: There is always a risk analysis involved.

All businesses take risks based on educated guesses and sometimes even instinct. And they need to weigh cost against benefit. The perfect solution is rarely obtained, typically because it is excessively costly, so often businesses stick with “what works.” And their IT is no different in that regard.

It is a valid decision to say you’re not concerned about business secrets leaking, or to consider the likely damage smaller than the risk of running poorly secured IT under your own control, either directly or through a third party. And perhaps the additional cost of running that kind of installation well does not seem justified in comparison to what you gain. So you go to a more trustworthy local provider that runs your installation on Open Source and Open Standards. Or you use the services of a large US public cloud vendor. It’s your call to make.

But I would argue this call should always be made consciously, in full knowledge of all risks and implications. And the truth is that in too many cases people did not take this into account; it was more convenient to ignore it and dismiss it as unproven speculation. Only that it is only speculation as long as it hasn’t been proven. So virtually every business right now should be re-evaluating its IT strategy to see what risks and benefits are associated with its current strategy, and whether another strategy might provide a more adequate approach.

And when that evaluation is done, I would suggest looking at the Total Cost of Operations (TCO). But not in an overly simplistic way, because most often the calculation is skewed in favour of proprietary lock-in. So always make sure to factor in the cost of decommissioning the solution you are about to introduce. And the TCO isn’t everything.

IT is not just a cost, there is a benefit. All too often two alternatives are compared purely on the grounds of their cost. So more often than not the slightly cheaper solution will be chosen despite offering dramatically fewer benefits and a poor strategic outlook. And a year later you find out that it actually wasn’t cheaper, at all, because of hidden costs. And that you would have needed the benefits of the other solution. And that you’re in a strategic dead-end.

So I would always advocate to also take into account the benefits, both in things you require right now, and in things that you might be able to achieve in the future. For lack of a common terminology, let’s call this the Option Value Generated (OVG) for your business, both in gained productivity, as well as innovative potential. And then there is what I now conveniently name the Customer Confidence Impact (CCI) of both your ability to devise an efficient IT strategy, as well as how you handle their business, data and trust.

After all is said and done, you might still want to run US technology. And you might still want to use a public cloud service. If you do, be transparent about it, so your customers can choose whether or not they agree to that usage by being in business with you. Because some people are likely to take offence due to the social implications and ownership of their own data. In other words: Make sure those who communicate with you and use your services know where that data ends up.

This may not be a problem for your business and your customers. They may consider this entirely acceptable, and that is fine. Being able to make that call is part of what it means to have freedom to try out business approaches and strategies.

But if you do not communicate your usage of this service, be aware of the risks you might be incurring. The potential impact for customer confidence and public image for having misled your business associates and customers is dramatic. Just look at the level of coverage PRISM is getting and you’ll get an idea.

The door is wide open

When reviewing your strategy, keep in mind that you may require some level of ability to adapt to a changed world in the future. Nothing guarantees that better than Open Source and Open Standards. So if you have ignored this debate throughout the past years, now would be the time to take a look at the strategic reasons for the adoption of Free Software. Most importantly transparency, security, control, ability to innovate.

While for the past ten years most of the debate has been about how Open Source can provide more efficient IT at a better price for many people, PRISM has demonstrated that the strategic values of Free Software were spot on and provide benefits for professional use of IT that proprietary software cannot hope to match.

Simultaneously the past 20 years have seen a dramatic growth of professional services in the area. Because benefits are nice in theory, but if they cannot be made use of because the support network is missing, they won’t reach the average business.

In fact, in the spirit of full disclosure, I speak from personal experience in this regard. Since 2009 I have dedicated myself to building up such a business: Kolab Systems is an Open Source ISV for the Kolab Groupware Solution. We built this company because Kolab had a typical Open Source problem: excellent concepts and technology, but a gap in professional support and services that would allow wide adoption and use of that technology. That’s been fixed. We now provide support for on-site hosting as well as Kolab as a service through MyKolab.com. We even structured our corporate group to be able to take care of high security requirements in a verifiable way.

But we are of course not the only business that has built its business around combining the advantages of software freedom with professional services for its customers. There are so many businesses working on this that it would be impossible to list them all. And they provide services for customers of all sizes – up to the very largest businesses and governments of this world.

So the concerns are real, as are the advantages. And there is a plethora of professional services at your disposal to make use of the advantages and address the concerns.

The only question is whether you will make use of them.



So Akonadi is already a “cache” for your PIM-data, and now we’re trying hard to feed all that data into a second “cache” called Nepomuk, just for some searching? We clearly must be crazy.

The process of keeping these two caches in sync is not entirely trivial, storing the data in Nepomuk is rather expensive, and obviously we’re duplicating all data. Rest assured we have our reasons, though.

  • Akonadi handles the payload of items stored in it transparently, meaning it has no idea what it is actually caching (apart from some hints such as mimetypes). While that is a very good design decision (great flexibility), it has the drawback that we can’t really search for anything inside the payload (because we don’t know what we’re searching through, where to look, etc)
  • The solution to the searching problem is of course building an index, which is a cache of all data optimized for searching. It essentially structures the data in a way that content->item lookups become fast (while normal usage does this the other way round); a minimal sketch of that idea follows right after this list. That already means duplicating all your data (more or less), because we’re trading disk space and memory for searching speed. And Nepomuk is what we’re using as the index for that.
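
To illustrate what “content -> item lookups” means, here is a minimal, Nepomuk-free sketch of such an index: a plain mapping from a term to the Akonadi item ids whose payload contains it. Nepomuk stores far richer structure than this, but the underlying trade-off (duplicate the data, gain fast lookups) is the same.

#include <QHash>
#include <QSet>
#include <QString>
#include <QStringList>

typedef qint64 ItemId;

class SimpleIndex
{
public:
    //Index the searchable text of one item (e.g. the plaintext body of an email).
    void index(ItemId id, const QString &text)
    {
        foreach (const QString &term, text.toLower().split(QLatin1Char(' '), QString::SkipEmptyParts)) {
            mIndex[term].insert(id);
        }
    }

    //A lookup is now a single hash access instead of scanning through every item's payload.
    QSet<ItemId> lookup(const QString &term) const
    {
        return mIndex.value(term.toLower());
    }

private:
    QHash<QString, QSet<ItemId> > mIndex;
};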

Now there would of course be simpler ways to build an index for searching than using Nepomuk, but Nepomuk provides far more opportunities than just a simple, text-based index, allowing us to build awesome features on top of it, while a plain text index would essentially be a dead end.

To build that cache we’re doing the following:

  • analyze all items in Akonadi
  • split them up into individual parts such as (for an email example): subject, plaintext content, email addresses, flags
  • store that separated data in Nepomuk in a structured way

This results in networks of data stored in Nepomuk:

PersonA [hasEMailAddress] addressA
PersonA [hasEMailAddress] addressB
emailA [hasSender] addressA
emailB [hasSender] addressB

So this “network” relates emails to email addresses, email addresses to contacts, and contacts to actual persons, and suddenly you can ask the system for all emails from a person, no matter which of the person’s email addresses have been used in the mails. Of course we can add to that IM conversations with the same person, or documents you exchanged during that conversation; the possibilities are almost endless.
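
To make that concrete without touching the actual Nepomuk API, here is a small self-contained illustration of the lookup: the relations are stored as plain multi-maps, and “all emails from PersonA” becomes a simple join over the person’s addresses, no matter which one was used.

#include <QDebug>
#include <QMultiHash>
#include <QString>
#include <QStringList>

int main()
{
    QMultiHash<QString, QString> hasEMailAddress; //person -> email address
    QMultiHash<QString, QString> sentFrom;        //email address -> email (reverse lookup of hasSender)

    hasEMailAddress.insert("PersonA", "addressA");
    hasEMailAddress.insert("PersonA", "addressB");
    sentFrom.insert("addressA", "emailA");
    sentFrom.insert("addressB", "emailB");

    //All emails sent by PersonA, regardless of which of the addresses was used.
    QStringList mailsFromPersonA;
    foreach (const QString &address, hasEMailAddress.values("PersonA")) {
        mailsFromPersonA << sentFrom.values(address);
    }
    qDebug() << mailsFromPersonA; //contains "emailA" and "emailB", order is not significant

    return 0;
}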

Based on that information much more powerful interfaces can be written. For instance one could write a communication tool which doesn’t really care anymore which communication channel you’re using, and which dynamically mixes IM and email depending on whether/where the other person is currently available for a chat or would rather receive a mail that can be read later on, and which does so without splitting the conversation across various mail/chat interfaces.
This is of course just one example of many (nor am I claiming the idea; it’s just a nice example of what is possible).

So that’s basically why we took the difficult route for searching (At least that is why I am working on this).

Now, we’re not quite there yet, but we already start to get the first fruits of our labor:

  • KMail can now automatically complete addresses from all emails you have ever received
  • Filtering in KMail does fulltext searching, making it a lot easier to find old conversations
  • The kpeoples library already uses this data for contacts merging, which will result in a much nicer addressbook
  • And of course having the data available in Nepomuk enables other developers to start working with it

I’ll follow up on this post with some more technical background on how the feeders work, and possibly some information on the problematic areas from a client perspective (such as the address auto-completion in KMail).