mollekopf

Bringing Akonadi Next up to speed

It’s been a while since the last progress report on Akonadi Next. I’ve since spent a lot of time refactoring the existing codebase, pushing it a little further,
and refactoring it again, to keep the codebase as clean as possible. The result is that implementing a simple resource now takes only a couple of template instantiations, apart from the code that interacts with the data source (e.g. your IMAP server), which obviously can’t be written generically.

Once I was happy with that, I looked into performance, to ensure the goals are actually reachable. For write speed, operations need to be batched into database transactions; this is what allows the database to write up to 50’000 values per second on my system (a four-year-old laptop with an SSD and an i7). After implementing batch processing, and without looking into any other bottlenecks, it can now process ~4’000 values per second, including updating ten secondary indexes. This is not yet ideal given what we should be able to reach, but it does mean that a sync of 40’000 emails would be done within 10 seconds, which is not bad already. Because commands first enter a persistent command queue, pulling the data offline actually completes even faster; that command queue then needs to be processed before the data becomes available to clients, and all of that together makes up the effective write speed.
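The effect of batching is easy to demonstrate outside of Akonadi Next. Here is a rough sketch using the sqlite3 command-line tool as a stand-in for the actual store (so the table, names and numbers are illustrative only): wrapping all inserts in a single transaction means the database only has to flush to disk once, instead of once per value.

```shell
db=$(mktemp)
sqlite3 "$db" 'CREATE TABLE kv (key TEXT, value TEXT);'

# batched: all inserts share one transaction, so there is a single
# commit (and a single fsync) instead of a thousand of them
{
  echo 'BEGIN;'
  for i in $(seq 1 1000); do
    echo "INSERT INTO kv VALUES ('key$i', 'value$i');"
  done
  echo 'COMMIT;'
} | sqlite3 "$db"

count=$(sqlite3 "$db" 'SELECT COUNT(*) FROM kv;')
echo "$count rows written in one transaction"
```

Running the same inserts without the BEGIN/COMMIT wrapper turns every statement into its own transaction, which is typically orders of magnitude slower.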

On the reading side we’re at around 50’000 values per second, with the read time growing linearly with the number of messages read. Again far from ideal, which would be around 400’000 values per second for a single db (excluding index lookups), but still good enough to load large email folders in about a second.

I implemented benchmarks to get these numbers, so thanks to HAWD we should be able to track progress over time, once I set up a system to run the benchmarks regularly.

With performance in an acceptable state, I will shift my focus to the revisioned data store, which is a prerequisite for the resource’s writeback to the source. After all, performance is supposed to be a desirable side effect; simplicity and ease of use are the goal.

greve

Kolab Now: Learn, live, adapt in production

Kolab Now was first launched in January 2013, and we were anxious to find out: if someone offered a public cloud service for people that put their privacy and security first, a service that would not just re-sell someone else’s platform with some added marketing but did things right, would there be demand for it? Would people choose to pay with money instead of with their privacy and data? These past two and a half years have provided a very clear answer: demand for a secure and private collaboration platform has grown in ways we could only have hoped for.

To stay ahead of demand we have undertaken a significant upgrade of our hosted solution that will allow us to provide reliable service to our community of users, both today and in the years to come. This is the most significant set of changes we’ve ever made to the service, and they have been months in the making. We are very excited to unveil these improvements to the world as we complete the roll-out in the coming weeks.

From a revamped and simplified sign-up process to a more robust directory
service design, the improvements will be visible to new and existing users
alike. Everyone can look forward to a significantly more robust and
reliable service, along with faster turnaround times on technical issues. We
have even managed to add some long-sought improvements many of you have been
asking for.

The road travelled

Assumptions are the root of all evil. Yet in the absence of knowledge of the future, sometimes informed assumptions need to be made. And sometimes the world just changes. It was February 2013 when MyKolab was launched into public beta.

Our expectation was that a public cloud service oriented toward full business collaboration, with a focus on privacy and security, would primarily attract small and medium enterprises of between 10 and 200 users. Others would largely elect to use the available standard domains. So we expected most domains to be in the 30-user realm, plus a handful of very large ones.

That had implications for the way the directory service was set up.

Aaron Seigo

Akonadi with a remote database

The Kontact groupware client from the KDE community, which also happens to be the premier desktop client for Kolab, is "just" a user interface (though that seriously undersells its capabilities, as a lot still happens in that UI); it uses a system service to actually manage the groupware data. In fact, that same service is used by applications such as KDE Plasma to access data; this is how calendar events end up being shown in the desktop clock's calendar, for instance. That service, as you might already know, is called Akonadi.

In its current design, Akonadi uses an external database server to store much of its data. The default configuration is a locally-running MySQL server that Akonadi itself starts and manages. This can be undesirable in some cases, such as on multi-user systems where running a separate MySQL instance for each and every user may be more overhead than desired, or when you already have a MySQL instance running on the system for other applications.

While looking into some improvements for a corporate installation of Kontact, where the systems all have user directories hosted on a server and mounted using NFS, I tried out a few different Akonadi tricks. One of those tricks was using a remote MySQL server. This would allow this particular installation to move Akonadi's database-related I/O load off the NFS server and share the MySQL instance between all their users. For a larger number of users this could be a pretty significant win.

How to accomplish this isn't well documented, unfortunately, at least not anywhere I could readily find. Thankfully, I can read the source code and work with some of the best Akonadi and Kontact developers currently working on it. I will be improving the documentation around this in the coming weeks; until then, here is how I went about it.
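The gist of it is pointing Akonadi's server configuration at the remote database instead of the self-managed local instance. A sketch of what that can look like follows; the host name and credentials are placeholders, and the exact file location and keys may vary between Akonadi versions, so check your own setup first.

```shell
# write an Akonadi server configuration that connects to a remote MySQL
# server; StartServer=false tells Akonadi NOT to spawn its own private
# mysqld but to use the given host instead
mkdir -p ~/.config/akonadi
cat > ~/.config/akonadi/akonadiserverrc <<'EOF'
[%General]
Driver=QMYSQL

[QMYSQL]
Host=mysql.example.com
Name=akonadi_alice
User=akonadi
Password=secret
StartServer=false
EOF
```

Each user needs their own database (the `Name` value) on the shared server, plus credentials with rights on it; after that, restarting Akonadi should make it connect to the remote instance.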

greve

Your opportunity for a front row seat: The economics of the Roundcube Next Indiegogo Campaign

Bringing together an alliance that will liberate our future web and mobile collaboration was the most important motive behind launching the Roundcube Next campaign at the 2015 Kolab Summit. That goal we reached fully.

There is now a group of some of the leading experts in messaging and collaboration, together with service providers from around the world, that has embarked with us on this unique journey.

The second objective for the campaign was to build enough momentum to allow two or three people to focus on Roundcube Next over the coming year. That goal we reached partially. There is enough to get us started and work through the groundwork, but not enough for all the bells and whistles we would have loved to go for. To a large extent that’s because we can imagine a great many bells and whistles.

Roundcube Next - The Bells and Whistles

But perhaps it is a good thing that the campaign did not complete all the way into the stretch goals.

Since numbers are part of my responsibility, allow me to share some with you, to give you a first-hand perspective of what it is like inside an Indiegogo campaign:


Aaron Seigo

"...if nothing changes"

I try to keep a memory of what various aspects of development were like for me in years past. I do this by keeping specific projects I've been involved with fresh in my mind, revisiting them every so often and reflecting on how my methods and experiences have changed since. This allows me to wander 5, 10, 15, even 20 years back into the past and reflect.

Today I was presenting the "final" code-level design for a project I've been tasked with: an IMAP payload filter for use with Kolab. The best way I can think of to describe it is as a protocol-level firewall (of sorts) for IMAP. The first concrete use case we have for it is to allow non-Kolab-aware clients (e.g. Thunderbird) to connect to a Kolab server and see only the mail folders, meaning the groupware folders are filtered out of the IMAP session. There are a large number of other use case ideas floating about, however, and we wanted to make sure we could accommodate those in the future by extending the codebase. While drawing out on the whiteboard how I planned for this to come together, along with a break-out of the work into two-week sprints, I commented in passing that it was actually a nicely simple program.

In particular, I'm quite pleased that the "filter groupware folders" feature will actually be implemented quite late in the project, as a very simple and very isolated module that sits on top of general-purpose scaffolding for real-time manipulation of an IMAP stream.

When I arrived back at my desk, I took a moment to reflect on how I would have perceived the same project earlier in my career. One thing that sprang out at me was that the shape of the program was very clear in my head. Roll back a decade and the details would have been much fuzzier. Roll back 15 years and it probably would have been quite hand-wavy in the early stages. Today, I can envision a completed codebase.

If someone had presented that vision to me 10 or 15 years ago, I would have accepted it quite happily ("Yes! A plan!"). Today, I know that such a plan is a lie in much the same way a 14-day weather report is: it is the best we can say about the near future from our knowledge of today. If nothing changes, that's what it will be. Things always change, however. This is one of life's few constants.

mollekopf

Kontact on Windows

I recently had the dubious pleasure of getting Kontact to work on Windows, and after two weeks of agony it also yielded some results =)

Not only did I get Kontact to build on Windows (sadly still something to be proud of), it is also largely functional. Even timezones now work well enough that you can collaborate with non-Windows users, although that required one or another patch to kdelibs.

To make the whole exercise as reproducible as possible I collected my complete setup in a git repository [0]. Note that these builds are from the Kolab stable branches, and not all the Windows-specific fixes have made it back upstream yet. That will follow as soon as the waters calm a bit.

If you want to try it yourself you can download an installer here [1], and if you don’t (I won’t judge you for not using Windows) you can look at the pretty pictures.



Aaron Seigo

Roundcube Next: The Next Steps

The crowdfunding campaign to provide funding and greater community engagement around the refactoring of Roundcube's core, to give it a secure future, has just wrapped up. We managed to raise $103,531 from 870 people, which surpassed our goal of $80,000, so we're pretty ecstatic. This is not the end, however: now begins the journey to delivering a first release of Roundcube Next. This blog entry outlines some of that path forward.


The most obvious thing on our list is to get people's t-shirts and stickers out to them. We have a few hundred of them to print and ship, and it looks like we may be missing a few shipping addresses, so I'll be following up with those people next week. Below is a sneak peek of what the shirts might look like. We're still working out the details, so they may look a bit different once they come off the presses, but this should give you an idea. We'll be in touch with people about shirt sizes, color options, etc. in the coming week.

Those who elected for the Kolaborator perk will be notified by email about how to redeem their free months on Kolab Now. Of course, everyone who elected for the in-application credits mention will get that in due time as well. We've got you all covered! :)

Note that it takes a couple of weeks for Indiegogo to get the funds to us, and we need to wait for that before confirming our orders and shipping the physical perk items.

Roundcube Backstage

We'll be opening the Roundcube Backstage area within about two weeks of the wrap-up completing next week. This will give us enough time to create the Backstage user accounts and get the first set of content in place. We will be using the Discourse platform for discussions and for posting our weekly Backstage updates. I'm really looking forward to reading your feedback there, answering questions, and contemplating the amazing future that lies ahead of us.

greve

Applying the most important lesson for non-developers in Free Software through Roundcube Next

Software is a social endeavour. The most important advantage of Free Software is its community, because the best Open Source is built by a community of contributors. Contribution is the single most important currency, and the differentiation between users and community. You want to be part of that community, at least by proxy, because like any community, ours spends time together, exchanges ideas, and creates the cohesion that translates into innovation, features, and best practices.

We create nothing less than a common vision of the future.

By the rules of our community, anyone can take our software and use it, extend it, and distribute it. A lot of value can be created this way, and not everyone has the capability to contribute. Others choose not to contribute in order to maximise their personal profits. Short of actively harming others, egoism, even in its most extreme forms, is to be accepted. That is not to say it is necessarily a good idea to put the safeguarding of your own interests into the hands of an extreme egoist, or to trust them to go the extra mile for you in all the places you cannot verify.

That is why the most important lesson for non-developers is this: choose providers based on community participation. Not only are they more likely to learn about problems early, putting them in a much better position to provide the security you require; they will also help ensure you have a future you like.

Developers know all this already, of course, and typically apply it at least subconsciously.

Growing that kind of community has been one of the key motives for launching Roundcube Next, which is now coming close to closing its phase of bringing together its key contributors. Naturally everyone had good reasons to get involved, as recently covered on VentureBeat.

mollekopf

Reproducible testing with docker

Reproducible testing is hard, and doing it without automated tests is even harder. With Kontact we’re unfortunately not yet in a position where we can cover all of the functionality with automated tests.

If manual testing is required, being able to bring the test system into a “clean” state after every test is key to reproducibility.

Fortunately, we now have a lightweight virtualization technology available in Linux containers, and docker makes them fairly trivial to use.


Docker allows us to create, start and stop containers very easily, based on images. Every image contains a file-system state, and each running container is essentially a chroot containing that image's content, plus a process running in it. Let that process be bash and you have a pretty much fully functional Linux system.

The nice thing about this is that it is possible to run an Ubuntu 12.04 container on a Fedora 22 host (or whatever suits your fancy), and whatever I do in the container is not affected by what happens on the host system. Upgrading the host system, for example, does not affect the container.

Also, starting a container takes only about a second.
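A minimal illustration of that workflow (the image name is just an example, and this assumes docker is installed and its daemon is running):

```shell
# fetch an Ubuntu 12.04 image and start a throwaway container from it;
# -ti attaches an interactive terminal, and --rm discards the container's
# file system again when the shell exits, giving a clean state every time
docker run -ti --rm ubuntu:12.04 /bin/bash
```

Because `--rm` throws the container away on exit, every test run starts from the exact same image state, which is precisely the reproducibility property described above.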

Reproducible builds

There is a large variety of distributions out there, and every distribution has its own unique set of dependency versions, so if a colleague is facing a build issue, it is by no means guaranteed that I can reproduce the same problem on my system.

As an additional annoyance, any system upgrade can break my local build setup, meaning I have to be very careful with upgrading my system if I don’t have the time to rebuild it from scratch.

Moving the build system into a docker container therefore has a variety of advantages:
* Builds are reproducible across different machines
* Build dependencies can be centrally managed
* The build system is no longer affected by changes in the host system
* Building for different distributions is a matter of having a couple of docker containers
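A hypothetical build image for this could start out as simply as the sketch below (the package list is illustrative, not the actual dependency set):

```dockerfile
# pin the distribution and the build dependencies in one place
FROM fedora:22
RUN dnf install -y gcc-c++ cmake git make perl

# keep sources and build artifacts outside the image so they survive
# container restarts and can be shared with the host
VOLUME ["/src", "/build"]
```

Rebuilding this image is then all it takes to give every machine, and every colleague, an identical build environment.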

For building I chose kdesrc-build, so that building all the necessary repositories takes the least amount of effort.

Timotheus Pokorra

Submitting patches to Kolab via Phabricator

Just before the Kolab Summit, at the end of April 2015, the Phabricator instance for Kolab went online! Thanks to Jeroen and the team from Kolab Systems who made that happen!

I have to admit it took me a while to understand Phabricator and how to use it. I am still learning, but I now know enough to write an initial post about it.

Phabricator describes itself as an “open source, software engineering platform”. It aims to provide all the tools you need to engineer software. In their words, it is “a collection of open source web applications that help software companies build better software”.

To some degree it replaces solutions like GitHub or GitLab, but it offers much more than code repository, bug tracking and wiki functionality. It also has tools for code review, notifications, continuous builds, project and task management, and much more. For a full list, see

In this post, I want to focus on how you work with the code and how to submit patches. I am quite used to the idea of pull requests as GitHub does them. Things are a little different with Phabricator, but once you get used to them, they are probably more powerful.

Starting with browsing the code: there is the Diffusion application, where you can see all the Kolab projects. It also shows the “git clone” command at the top of each project page. Admittedly, that view is quite crowded, and if you still want the simple cgit interface, you can get it here:

Now imagine you have fixed a bug, or want to submit a change to the Kolab documentation (project docs). You clone the repository, edit the files locally, and commit them locally.
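From there, the usual Phabricator flow is to send the local commit up for review with Arcanist, Phabricator's command-line client. A sketch of the steps (the clone URL and commit message are placeholders; copy the real URL from the project's Diffusion page):

```shell
# clone the project you want to change (placeholder URL, see Diffusion)
git clone https://git.kolab.org/docs.git
cd docs

# ... edit files, then record the change locally ...
git commit -a -m "docs: fix typo in the installation chapter"

# upload the commit to Phabricator as a revision for review
arc diff
```

`arc diff` creates a Differential revision from your local commit; reviewers can then comment on it in the web interface, and you re-run `arc diff` after each round of fixes to update the same revision.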