Tue, 2015-10-13 09:23

In the previous article I described how we implemented client-side encryption in Roundcube using Mailvelope. There is another approach to encryption: the Enigma plugin. It implements all the functionality using the server-side GnuPG software. The big difference between the two is that Mailvelope keeps your keys in the browser, while Enigma stores them on the server. In its current state, however, Enigma has a lot more features.

Installation and settings

To use Enigma just enable it like any other plugin. Then, in Settings > Preferences > Encryption, you'll see a set of options that let you enable or disable the encryption-related features.

NOTE: As keys are stored on the server, make sure the directory used as storage has proper permissions, and it's a good idea to move it somewhere outside the location accessible from the web (even if it is secured by .htaccess rules).
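For reference, enabling the plugin and pointing it at a key store happens in Roundcube's config. This is only a sketch of my setup; the homedir option name comes from the plugin's sample config and may change between versions, so check the config.inc.php.dist shipped with Enigma.

    // config/config.inc.php
    $config['plugins'] = array('enigma');   // add 'enigma' to your existing plugins list

    // keep the GnuPG key store outside the web-accessible document root
    $config['enigma_pgp_homedir'] = '/var/lib/roundcube-enigma';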

Figure 1. Encryption preferences section.

Keys management

To manage your keys go to Settings > PGP Keys. There you can generate a new key pair or import existing keys. See the following screenshots for more details.

Figure 2. Key generation form.

Figure 3. Key information frame.

Composing messages

In the message compose screen a new toolbar button is added, with a popup where you can decide whether the message should be signed and/or encrypted. The behaviour and the icon are slightly different from the ones used for the Mailvelope functionality. Also, note that we did not change the compose screen in any way, so all standard features like responses and spellchecking still work.

Figure 4. Encryption options in compose.


You can find the Enigma plugin code in Roundcube 1.0 and 1.1, but only the version in Roundcube 1.2 (current git-master) is usable. I put a lot of work into this plugin and I hope it will find its users. Whether this solution gets extended with S/MIME or other features in future versions depends on you. The current state is described in the plugin README file.

Sat, 2015-10-10 14:25

The most valuable feature of the upcoming Roundcube 1.2 release is PGP encryption support. There are two independent solutions for this: the Enigma plugin and Mailvelope. In this article I'll describe what we achieved with Mailvelope. The integration code was mostly written by Thomas Brüderli and only slightly improved/fixed by me.

It looks like Mailvelope is the best (if not the only) solution for encryption in a web browser. It's based on OpenPGP.js, an implementation of PGP encryption in JavaScript. Mailvelope is distributed as a Chrome and a Firefox extension. It supports some email services like Gmail, and it also provides an API for programmers, which is the way we decided to integrate it with Roundcube.

Mailvelope installation

For more info go to the Mailvelope documentation. To have it working with Roundcube you have to install the extension in your browser, then go to your Roundcube webmail and, using Mailvelope's "Add" button, add your page to the list of mail providers. One last required step is to enable API use on the provider edit page.

Compose an encrypted message

If Roundcube detects an enabled Mailvelope, a new button will appear in the compose toolbar. It may be disabled in HTML mode, so switch to plain text. If you click it, the Mailvelope frame will appear. There you can write your message and add attachments. As you can see on the screenshot, some features are disabled. Unfortunately, at the moment we cannot do much more with the Mailvelope textarea. Note: to send an encrypted message you first have to import or generate a private key in the Mailvelope settings.

Figure 1. Message compose screen with enabled encryption frame.
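To give an idea of what the integration does behind the scenes, here is a rough sketch of how a page detects Mailvelope and embeds the compose frame through its client API. The selector and keyring identifier are made up for the example, and the exact API surface may differ between Mailvelope versions:

    // wait for the extension to announce itself, then embed the editor
    function initMailvelope(mailvelope) {
        mailvelope.getKeyring('roundcube')                  // keyring id is just an example
            .then(function (keyring) {
                // replaces the #mailvelope-editor placeholder with the compose frame
                return mailvelope.createEditorContainer('#mailvelope-editor', keyring);
            })
            .then(function (editor) {
                // later, on "Send", the frame is asked for the encrypted message, e.g.
                // editor.encrypt(['recipient@example.org']).then(sendArmoredMessage);
            });
    }

    if (window.mailvelope) {
        initMailvelope(window.mailvelope);
    } else {
        window.addEventListener('mailvelope', function () {
            initMailvelope(window.mailvelope);
        });
    }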

When you try to send a mail to an address for which no public key was found in the Mailvelope database (keyring), you will be offered the possibility to search public key servers and import the keys.

Figure 2. Key search result in compose.

Preview an encrypted message

Also, in the message preview Mailvelope will add its frame containing the decrypted text and attachments. You'll be prompted for the key passphrase when needed.

Figure 3. Encrypted message preview.
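The preview side works in a similar way: the armored message text is handed over to Mailvelope, which renders the decrypted content in its own frame. Again only a hedged sketch, with a placeholder selector and keyring id:

    // armoredText is the PGP payload extracted from the message source
    mailvelope.getKeyring('roundcube').then(function (keyring) {
        return mailvelope.createDisplayContainer('#mailvelope-display', armoredText, keyring);
    });
    // Mailvelope itself prompts for the key passphrase when it needs one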


Unfortunately, this is far from complete. The Mailvelope API is very limited at the moment. It does not support signing and signature verification. Access to the encryption frame is limited, and there are also some bugs. Currently you can only send and receive simple encrypted messages (attachments are supported).

You can track progress and read about the issues in this ticket.

Fri, 2015-10-09 09:28

Drag'n'drop is a nice feature on the desktop, so it is in desktop-like web applications too. The average user likes this feature and uses it. That said, we already have some drag'n'drop capabilities in Roundcube:

  1. Dragging messages to folders (in messages list view) – copy/move action.
  2. Dragging folders to folders (in Preferences > Folders) – move action.
  3. Dragging contacts to groups/sources (in Addressbook) – copy/move/assign to group action.
  4. Dropping files from desktop to compose attachments.
  5. As of yesterday it is also possible to drag’n’drop attachments from mail preview to compose window.
  6. If you use Kolab, you can actually also drop files from desktop into Calendar, Tasks and Files.
  7. You can also re-arrange messages list columns using the drag'n'drop technique.

These should work in most web browsers (in their recent versions). The question is: can we have more of this? E.g. wouldn't it be nice to drag attachments or messages from the browser to the desktop? Or messages into compose attachments? Well, it would, but it's not so simple…

I recently investigated what recent web standards and browsers provide in this regard, and it does not look good. The standard way of doing drag'n'drop is the DataTransfer object and some defined events, plus the HTML 'draggable' attribute. Unfortunately there's no standard way of dropping a file onto the desktop. So, what options do we have:

  1. The Chrome browser supports its own DownloadURL parameter of DataTransfer (see the sketch just after this list), but even Chromium does not use it.
  2. In Firefox you can drag a link, which on Windows will create a file link (so not what we want), but under Linux (KDE) it actually can download a real file. Unfortunately, this works only with public files. It does not work with session-based apps (like Roundcube). We'd need to implement something like one-time public URIs for attachments.
  3. I didn’t find any information about other browsers.
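For completeness, this is roughly how Chrome's non-standard DownloadURL works. The attachment URL and file name below are placeholders, and as noted above this only helps in Chrome:

    // dragstart handler on an attachment element (Chrome only)
    attachmentElement.addEventListener('dragstart', function (event) {
        var url = 'https://mail.example.org/attachment.pdf';   // placeholder URL
        event.dataTransfer.setData('DownloadURL',
            'application/pdf:attachment.pdf:' + url);          // format: mimetype:filename:url
    });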

So, as you can see, there is not much we can do today. There's another issue: this will not work anyway with Roundcube widgets that implement their own "internal" drag'n'drop, e.g. the messages list. Also, there's no standard way to drag many resources at a time, so we cannot replace our "internal" implementation.

Thu, 2015-10-08 11:46

I just committed a small Roundcube feature that adds a date interval selector to the search options popup. An IMAP search query using the BEFORE and SINCE keywords is generated from the selected interval option.
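For example, selecting a one-week interval would translate into an IMAP search along these lines (the dates are just an illustration; SINCE includes the given date, while BEFORE excludes it):

    a1 SEARCH SINCE 1-Oct-2015 BEFORE 8-Oct-2015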

See it on the screenshot.


On the screenshot you can also notice another small feature that will be part of Roundcube 1.2: a messages list page selector, which you can use to jump directly to a specified page.

Tue, 2015-10-06 12:55

Kolab's file storage component, named Chwala, from the beginning listed all folders of type 'file' available to the user. The subscription state of a folder was ignored. This changed today. From now on the Chwala API returns only subscribed folders, the same as other components.

Of course, the user is able to subscribe or unsubscribe a folder. Together with this, folder filtering and searching have been implemented. So you can now quickly filter the displayed folders (which is useful when you have many of them). You can also search for unsubscribed folders and subscribe to them (i.e. add them to the permanent folders list). The search is smart enough to match both folder name and user name, so searching in other users' namespace also works.

Key-press filtering is implemented in the browser, so it works fast. To search for unsubscribed folders (server-side) you have to press the Enter key. Exactly the same as folder search in other components.

Figure 1. Folders list with the new header and search icon.

Figure 2. Searching for unsubscribed folders.

Figure 3. Subscribed folder added to the list.

For the moment this only works with Kolab storage. The other storage drivers supported by Chwala, i.e. WebDAV and SeaFile, do not support subscriptions. We may implement this for them in the future.

Sat, 2015-10-03 12:41

It's always fun when your remote colleagues come to visit the office. Putting a face to the name in the chat client, and to the voice on the phone, helps communication.

Giles, our creative director, was visiting from London during the first days of the week, which made a lot of the work switch context to design and usability. As Giles is fairly new in the company, we also spent some time discussing a few of our internal processes and procedures with him. It is great to have him on board to fill a previously not-so-investigated space with his broad experience.

The server development team kept themselves busy with a few Roundcube issues, and with a few issues we had in the new KolabNow dashboard. Additionally, work was done on the Roundcube-Next POC. We hope to have something to show on that front soon.

On the desktop side, we finalized the sprint 201539 and delivered a new version of Kontact on Windows and on Linux. The Windows installer is named Kontact-E14-2015-10-02-12-35.exe, and as always it is available on our mirror.

This Sunday our datacenter is doing some maintenance. They do not expect any interruption, but be prepared for a bit of connection troubles on Sunday night.

On to the future..

Mon, 2015-09-28 15:51

As we started the week already the previous Friday night (by shutting off the KolabNow cockpit and starting the big migration) it turned out to be a week all about (the bass) KolabNow.

Over the weekend we made a series of improvements to KolabNow that will improve the overall user experience with:

  • Better performance
  • More stable environment
  • Less downtime
  • Our ability to update the environment with a minimum of interruption for end users.

After the update there were of course a few issues that needed to be tweaked, but details aside, the weekend was a big success. Thanks to the OPS staff for doing the hard work.

One thing we changed with this update was the way users get notified when their accounts are suspended. Before this weekend, users with suspended accounts would still be able to log in and receive mail on KolabNow. After this update, users with suspended accounts will not be able to log in. This of course led to a small breeze of users with suspended accounts contacting support with requests for re-enabling their accounts.

On the development side we were making progress on two fronts:

  • We are getting close to the end of the list of urgent Kontact defects. The second week of this sprint should get rid of that list. Our desktop people will then get time to look forward again, at the next generation of the Kolab Desktop Client.
  • We started experimenting with one (of perhaps more to come) POC for Roundcube-Next. We now need to start talking about the technologies and ideas behind that new product. More to follow about that.

Thank you for your interest - if you are still reading. :-)

Fri, 2015-09-18 15:48

Another week passed by, super fast, as we know: time runs fast when you are having fun.

The client developers are on a roll. They have been hacking away at a defined bundle of issues in KOrganizer and Zanshin which have been annoying for users and have prevented some organizations from adopting the desktop client. This work will proceed during the next sprint, and most probably the sprint after that.

One of our collaborative editing developers took part in the ODF plugfest. According to his report, a lot of good experiences were had, a lot of contacts were made, and there was good feedback on the plans for the deep Kolab/Manticore integration.

Our OPS people were busy most of the week with preparations for this weekend's big KolabNow update. This is a needed overhaul of our background systems and software. As we now have the new hardware in place, and it has been running its test circles around itself, we can finally start applying many of the improvements that we have prepared for some time. This weekend is very much a backend update, but an important one, which will make it easier for us to apply more changes in the future with a minimal amount of interruption.

All y'all have a nice weekend now..

roundcube
Mon, 2015-09-14 02:00

We just published updates to both stable versions 1.0 and 1.1 after fixing many minor bugs and ensuring compatibility with upstream versions of 3rd-party libraries used in Roundcube. Version 1.0.7 comes with cherry-picked fixes from the more recent version to ensure proper long-term support.

See the full changelog here.

Both versions are considered stable and we recommend updating all productive installations of Roundcube to either of these versions. Download them from

As usual, don’t forget to backup your data before updating!

Fri, 2015-09-11 11:01

The week in development:

  • Our desktop people spent time in Randa, a small town in the Swiss mountains, where they discussed KDE-related issues and hacked away together with like-minded people. Most probably they also got a chance or two for some social interaction.
  • Work continued on the Copenhagen (MAPI integration) project. Whereas it was easy to spot progress in the beginning, the details around folder permissions and configuration objects that are being worked out now are not as visible.
  • The Guam project (the scalable IMAP session and payload filter) is moving along as planned. The filter handling engine is in place. It is now being integrated into the main body of the system, and then work on the actual filter formulation can start.
  • A few defects in Kolab on UCS were discovered at the beginning of the week. They were investigated and are being fixed as I write this. Hopefully we will be able to push a new package for this product early next week.

In other news: The engineering people are working hard to prepare the backend systems for some interesting upcoming KolabNow changes. There will be more information about those changes in other more appropriate places.

The only thing left is to wish everyone a very nice weekend.

Mon, 2015-09-07 14:00

After a summer with the ins and outs of the super hot Zurich office, this week finally brought some rain and a little chill. I can't wait for the snow to start.

The week started early and at full speed, as we had our hardware vendor visiting on Monday to replace a defective hypervisor. I sleep better at night knowing that everything is in order again.

A few of us jumped on a bus to the fair city of Munich to meet the techies at IT@M for a Kontact workshop: 3 days of intense desktop client talks, discussions and experiments. It was inspiring to see the work groups get together to resolve issues, do packaging on the LiMux platform and prepare pre-deployment configurations. A big value of the workshop was the opportunity to collect and consolidate a lot of end user experience. Luckily we also got time for a bit of a foretaste of the special Wiesn bier.

Aside from discussing the desktop clients, creating packages and listening to use cases, Christian finally found and resolved the issue that had for a while prevented me from installing the latest Kontact on my Fedora 22. Thanks Christian!

Wed, 2015-09-02 23:53

Kontact has, in contrast to Thunderbird, integrated crypto support (OpenPGP and S/MIME) out-of-the-box.

That means on Linux you can simply start Kontact and read encrypted mails (if you have already created keys).

After you select your crypto keys, you can immediately start writing encrypted mails. With that great user experience I never needed to dig further into the crypto stack.

select cryptokeys step1
select cryptokeys step2

But on Windows there is no GnuPG installed by default, so I needed to dig into the whole world of crypto layers that sit between Kontact and the part that actually does the de-/encryption.

Crypto Stack

Kontact uses a number of libraries that the team has written around GPGME.

The lowest-level one is gpgmepp, an object-oriented wrapper for GPGME. This lets us avoid having to write C code in KMail. Then we have libkleo, a library built on top of gpgmepp that KMail uses to trigger de-/encryption in the lower levels. GPGME is the only required dependency to compile Kontact with crypto support.

But this is not enough to send and receive encrypted mail with Kontact on Windows, as I mentioned earlier. There are still runtime dependencies that we need to have in place. Fortunately, the runtime crypto stack is already packaged by the Gpg4win team. Simply installing it is still not enough to have crypto support, though. With Gpg4win it is possible to select OpenPGP keys and to create and read encrypted mails, but unfortunately it does not work with S/MIME.

So I had to dig further into how GnuPG actually works.

OpenPGP is handled by the gpg binary, and for S/MIME we have gpgsm. Both are called directly from GPGME, using libassuan. Both applications then talk to gpg-agent, which is actually the only program that interacts with the key data. Both applications can be used from the command line, so it was easy to verify that they were working and that we had no problems with the GnuPG setup.

So we first started by creating keys (gpg --gen-key and gpgsm --gen-key) and then tested further what works with Gpg4win and what does not. We found a bug in GnuPG in the version used, but it was already fixed in a newer version. Still, Kontact didn't want to communicate with Gpg4win. The reason was a wrong default path, preventing GPGME from finding gpgsm. With that fixed, we now have a working crypto stack under Windows.
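If you want to check such a setup yourself, the command-line tools make it easy to verify each layer independently. Something along these lines worked for us; the exact commands are only meant as a rough guide:

    gpg --gen-key                 # create a test OpenPGP key (interactive)
    gpgsm --gen-key               # create a test X.509/S-MIME key (interactive)
    gpgconf --list-components     # shows which gpg, gpgsm, gpg-agent etc. GPGME will use
    echo test | gpg --clearsign   # round trip through gpg, gpg-agent and pinentry
    gpgsm --list-keys             # verify gpgsm can read the certificate store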

But to be honest, there are more applications involved in a working crypto stack. First of all, we need gpgconf and gpgme-w32-spawn to be available in the Kontact directory. gpgconf helps GPGME find gpg and gpgsm and is responsible for modifying the contents of .gnupg in the user's home directory. Additionally, it informs you about changes in config files. gpgme-w32-spawn is responsible for creating the other needed processes.

To have a UI where you can enter your password you need pinentry. S/MIME needs another agent that does the CRL/OCSP checks; this is done by dirmngr. In GnuPG 2.1, dirmngr is the only component that makes connections to the outside, so every request that requires the Internet goes through dirmngr.

This is, in short, the crypto stack that needs to work together to give you working encrypted mail support.

We are happy that we now have a fully working Kontact under Windows (again!). There are rumours that Kontact with crypto support was also working under Windows before, but unfortunately when we started, the encryption part was not working.

This work has been done in the kolabsys branch, which is based on KDE Libraries 4. The next steps are to merge the changes over to make sure that the current master branch of Kontact, which uses KDE Frameworks 5, works as well.


Coming up next week is the yearly Randa meeting where we will have the chance to sit together for a week and work on the future of Kontact. These meetings help tremendously in injecting momentum into the project, and we have a variety of topics to cover to direct the development for the time to come (and of course a lot of stuff to actively hack on). If you’d like to contribute to that you can help us with some funding. Much appreciated!

randa meeting

Tue, 2015-09-01 10:55

You're live! Nice. We've put together a little post to introduce you to the Ghost editor and get you started. You can manage your content by signing in to the admin area at <your blog URL>/ghost/. When you arrive, you can select this post from a list on the left and see a preview of it on the right. Click the little pencil icon at the top of the preview to edit this post and read the next section!

Getting Started

Ghost uses something called Markdown for writing. Essentially, it's a shorthand way to manage your post formatting as you write!

Writing in Markdown is really easy. In the left hand panel of Ghost, you simply write as you normally would. Where appropriate, you can use shortcuts to style your content. For example, a list:

  • Item number one
  • Item number two
    • A nested item
  • A final item

or with numbers!

  1. Remember to buy some milk
  2. Drink the milk
  3. Tweet that I remembered to buy the milk, and drank it

Want to link to a source? No problem. If you paste in a URL, like - it'll automatically be linked up. But if you want to customise your anchor text, you can do that too! Here's a link to the Ghost website. Neat.

What about Images?

Images work too! Already know the URL of the image you want to include in your article? Simply paste it in like this to make it show up:

The Ghost Logo

Not sure which image you want to use yet? That's ok too. Leave yourself a descriptive placeholder and keep writing. Come back later and drag and drop the image in to upload:


Sometimes a link isn't enough, you want to quote someone on what they've said. Perhaps you've started using a new blogging platform and feel the sudden urge to share their slogan? A quote might be just the way to do it!

Ghost - Just a blogging platform

Working with Code

Got a streak of geek? We've got you covered there, too. You can write inline <code> blocks really easily with back ticks. Want to show off something more comprehensive? 4 spaces of indentation gets you there.

.awesome-thing {
    display: block;
    width: 100%;
}
Ready for a Break?

Throw 3 or more dashes down on any new line and you've got yourself a fancy new divider. Aw yeah.

Advanced Usage

There's one fantastic secret about Markdown. If you want, you can write plain old HTML and it'll still work! Very flexible.

That should be enough to get you started. Have fun - and let us know what you think :)

mollekopf
Sat, 2015-08-29 12:10

It’s been a while since the last progress report on Akonadi Next. I’ve since spent a lot of time refactoring the existing codebase, pushing it a little further, and refactoring it again, to make sure the codebase remains as clean as possible. The result is that an implementation of a simple resource now only takes a couple of template instantiations, apart from the code that interacts with the data source (e.g. your IMAP server), which I obviously can’t write for the resource.

Once I was happy with that, I looked a bit into performance, to ensure the goals are actually reachable. For write speed, operations need to be batched into database transactions; this is what allows the db to write up to 50’000 values per second on my system (a 4-year-old laptop with an SSD and an i7). After implementing the batch processing, and without looking into any other bottlenecks, it can now process ~4’000 values per second, including updating ten secondary indexes. This is not yet ideal given what we should be able to reach, but it does mean that a sync of 40’000 emails would be done within 10 seconds, which is not bad already. Because commands first enter a persistent command queue, pulling the data offline actually completes even faster, but that command queue afterwards needs to be processed for the data to become available to the clients, and all of that together determines the actual write speed.

On the reading side we’re at around 50’000 values per second, with the read time growing linearly with the number of messages read. Again far from the ideal, which is around 400’000 values per second for a single db (excluding index lookups), but still good enough to load large email folders in a matter of a second.

I implemented benchmarks to get these numbers, so thanks to HAWD we should be able to track progress over time, once I set up a system to run the benchmarks regularly.

With performance in an acceptable state, I will shift my focus to the revisioned store, which is a prerequisite for the resource writeback to the source. After all, performance is supposed to be a desirable side effect; simplicity and ease of use are the goal.



greve
Mon, 2015-08-24 10:13

Kolab Now was first launched in January 2013 and we were anxious to find out: if someone offered a public cloud service for people who put their privacy and security first, a service that would not just re-sell someone else’s platform with some added marketing but did things right, would there be demand for it? Would people choose to pay with money instead of with their privacy and data? These past two and a half years have provided a very clear answer. Demand for a secure and private collaboration platform has grown in ways we could only have hoped for.

To stay ahead of demand we have undertaken a significant upgrade to our hosted solution that will allow us to provide reliable service to our community of users both today and in the years to come. This is the most significant set of changes we’ve ever made to the service, which have been months in the making. We are very excited to unveil these improvements to the world as we complete the roll-out in the coming weeks.

From a revamped and simplified sign-up process to a more robust directory service design, the improvements will be visible to new and existing users alike. Everyone can look forward to a significantly more robust and reliable service, along with faster turnaround times on technical issues. We have even managed to add some long-sought improvements many of you have been asking for.

The road travelled

Assumptions are the root of all evil. Yet in the absence of knowledge of the future, sometimes informed assumptions need to be made. And sometimes the world just changes. It was February 2013 when MyKolab was launched into public beta.

Our expectation was that a public cloud service oriented towards full business collaboration and focused on privacy and security would primarily attract small and medium enterprises of between 10 and 200 users. Others would largely elect to use the available standard domains. So we expected most domains to be in the 30-user realm, plus a handful of very large ones.

That had implications for the way the directory service was set up.

In order to provide the strongest possible insulation between tenants, each domain would exist in its own zone within the directory service. You can think of this as dedicated installations on shared infrastructure instead of the single-domain public clouds that are the default in most cases. Or, to use a slightly less technical analogy, as the difference between terraced houses and apartments in a large apartment block.

So we expected some moderate growth, for which we planned to deploy some older hardware to provide adequate redundancy and resources, so there would be a steady show-case for how to deploy Kolab for the needs of Application and Internet Service Providers (ASP/ISP).

Literally on the very day we carried that hardware into the data centre, Edward Snowden and his revelations became visible to the world. It is a common quip that assumptions and strategies usually do not outlive their contact with reality. Ours did not even make it that far.

After nice, steady growth during the early months, demand took us on a wild ride.

Our operations team managed to work miracles with the old hardware, in ways that often made me think this would be interesting learning material for future administrators. But efficiency only gets you so far.

Within a couple of months, however, we ended up replacing it in its entirety. And to the largest extent all of this happened without disruption to the production systems. New hardware was installed, services were switched over, old hardware was removed, and our team also managed to add a couple of urgently sought features to Kolab and deploy them onto the service as well.

What we did not manage to make time for was re-working the directory service in order to adjust some of the underlying assumptions to reality. Especially the number of domains in relation to the number of users ended up dramatically different from what we had initially expected. The result is a situation where the directory service has become the bottleneck for the entire installation, with a complete restart easily taking in the realm of 45 minutes.

In addition, that degree of separation translated into more restrictions on sharing data with other users, sometimes to an extent that users felt this was the lack of a feature, not a feature in and of itself.

Re-designing the directory service, however, carries implications for the entire service structure, including the user self-administration software and much more. And you want to be able to deploy this within a reasonable time interval and ensure the service comes back up better than before for all users.

On the highway to future improvements

So there is the re-design, the adaptation of all components, the testing, the migration planning, the migration testing and ultimately also the actual roll-out of the changes. That’s a lot of work, most of which has been done by this point in time.

The last remaining piece of the puzzle was to increase hardware capacity in order to ensure there is enough reserve to build up an entire new installation next to existing production systems, and then switch over, confirm successful switching, and then ultimately retire the old setup.

That hardware was installed last week.

So now the roll-out process will go through its stages and likely complete some time in September. That’s also the time when we can finally start adding some features we’ve been holding back to ensure we can re-adjust our assumptions to the realities we encountered.

For all users of Kolab Now that means you can look forward to a much improved service resilience and robustness, along with even faster turnaround times on technical issues, and an autumn of added features, including some long-sought improvements many of you have been asking for.

Stay tuned.

Aaron Seigo
Fri, 2015-08-14 13:02

Akonadi with a remote database

The Kontact groupware client from the KDE community, which also happens to be the premier desktop client for Kolab, is "just" a user interface (though that seriously undersells its capabilities, as it still does a lot in that UI), and it uses a system service to actually manage the groupware data. In fact, that same service is used by applications such as KDE Plasma to access data; this is how calendar events end up being shown in the desktop clock's calendar for instance. That service (as you might already know) is called Akonadi.

In its current design, Akonadi uses an external[1] database server to store much of its data[2]. The default configuration is a locally-running MySQL server that Akonadi itself starts and manages. This can be undesirable in some cases, such as multi-user systems where running a separate MySQL instance for each and every user may be more overhead than desired, or when you already have a MySQL instance running on the system for other applications.

While looking into some improvements for a corporate installation of Kontact, where the systems all have user directories hosted on a server and mounted using NFS, I tried out a few different Akonadi tricks. One of those tricks was using a remote MySQL server. This would allow this particular installation to move Akonadi's database-related I/O load off the NFS server and share the MySQL instance between all their users. For a larger number of users this could be a pretty significant win.

How to accomplish this isn't well documented, unfortunately, at least not anywhere I could readily find. Thankfully I can read the source code and work with some of the best Akonadi and Kontact developers currently working on it. I will be improving the documentation around this in the coming weeks, though.[3] Until then, here is how I went about it.

Configuring Akonadi

Note: as Dan points out in the comments below, this is only safe to do with a "fresh" Akonadi that has no data thus far. You'll want to first clean out (and possibly backup) all the data in $XDG_DATA_HOME/akonadi as well as be prepared to do some cleaning in the Kontact application configs that reference Akonadi entities by id. (Another practice we aim to light on fire and burn in Akonadi Next.)

First, you want Akonadi not to be running. Close Kontact if it is running and then run akonadictl stop. This can take a little while, even though that command returns immediately. To ensure Akonadi actually is stopped, run akonadictl status and make sure it says that it is, indeed, stopped.

Next, start the Akonadi control panel. The command-line approach is kcmshell4 kcm_akonadi_resources, but you can also open the command runner in Plasma (Alt+F2 or Alt+Space, depending) and type in akonadi to get something like this:

Akonadi with a remote database

It's the first item listed, at least on my laptop: Akonadi Configuration. You can also go the "slower" route and open System Settings and either search for akonadi or go right into the Personal Information panel. No matter how you go about it, you'll see something like this:

Akonadi with a remote database

Switch to the Akonadi Server Configuration tab and disable the Use internal MySQL server option. Then you can go about entering a hostname. This would be localhost for MySQL[7] running on the same machine, or an IP address or domain name that is reachable from the system. You will also need to supply a database name[4] (which defaults to akonadi), a username[5] and a password. Clear the Options line of text, and hit the ol' OK button. That's it.

Akonadi with a remote database

Assuming your MySQL is up and running and the username and password you supplied are correct, Akonadi will now be using a remote MySQL database. Yes, it is that easy.
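Under the hood, that dialog just writes the connection settings into Akonadi's server configuration file (akonadiserverrc in your XDG config directory). What mine ended up looking like is roughly the following; the host, database name and credentials are examples, and the exact key names may vary between Akonadi versions:

    [General]
    Driver=QMYSQL

    [QMYSQL]
    Host=db.example.com
    Name=akonadi
    User=akonadi
    Password=secret
    StartServer=false
    Options=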


In this configuration, the limitations are twofold:

  • network quality
  • local configuration is now tied to that database

Network quality is the biggest factor. Akonadi can send a lot of database queries and each of those results in a network roundtrip. If your network latency for a roundtrip is 20ms, for instance, then you are pretty well hard-limited to 50 queries per second. Given that Akonadi can issue several queries for an item during initial sync, this can result in quite slow initial synchronization performance on networks with high latency.[6]

Past latency, bandwidth is the other important factor. If you have lots of users or just tons of big mails, consider the network traffic incurred in sending that data around the network.

For typical even semi-modern network in an office environment, however, the network should not be a big issue in terms of either latency or bandwidth.

The other item to pay attention to is that the local configuration and file data kept outside the database by Akonadi will now be tied to the contents of that database, and vice versa. So you cannot simply set up a single database in a remote database server and then connect simultaneously to it from multiple Akonadi instances. In fact, I will guarantee you that this will eventually screw up your data in unpleasant ways. So don't do it. ;)

In an office environment where people don't move between machines, and/or when the user data is stored centrally as well, this isn't an issue. Otherwise, create one database for each device you expect to connect to it. Yes, this means multiple copies of the data, but it will work without trashing your data, and that's the more important thing.

How well does it work?

Now for the Big Question: Is this actually practical and safe enough for daily use? I've been using this with my Kolab Now account since last week. To really stretch the realm of reality, I put the MySQL instance on a VM hosted in Germany. In spite of forcing Akonadi to trudge across the public internet (and over wifi), so far, so good. Once through a pretty slow initial synchronization, Kontact generally "feels" on par with and often even a bit snappier than most webmail services that I've used, though certainly slower than a local database. In an office environment, however, I would hope that the desktop systems have better network than "my laptop on wifi accessing a VM in Germany".

As for server load, for one power user with a ton of email (my life seems to revolve around email much of the time) it is entirely negligible. MySQL never budged much above 1% CPU usage during my monitoring of it, and after sync was usually just idling.

I won't be using this configuration for daily use. I still have my default-configured Akonadi as well, and that is not only faster but travels with my laptop wherever it is, network or not. Score one for offline access.


1: If you are thinking something along the lines of "the real issue is that it uses a database server at all", I would partially agree with you. For offline usage, good performance, and feature consistency between accounts, a local cache of some sort is absolutely required. So some local storage makes sense. A full RDBMS carries more overhead than truly necessary and SQL is not a 100% perfect fit for the dataset in question. Compared to today, there were far fewer options available to the Akonadi developers a decade ago when the Akonadi core was being written. When the choice is between "not perfect, but good enough" and "nothing", you usually don't get to pick "nothing". ;) In the Akonadi Next development, we've swapped out the external database process and the use of SQL for an embedded key/value store. Interestingly, the advancements in this area in the decade since Akonadi's beginning were driven by a combination of mobile and web application requirements. That last sentence could easily be unpacked into a whole other blog entry.

2: There is a (configurable) limit to the size of payload content (e.g. email body and attachments) that Akonadi will store in the database which defaults to 4KiB. Anything over that limit will get stored as a regular file on the file system with a reference to that file stored in the database.

3: This blog entry is, in part, a way to collect my thoughts for that documentation.

4: If the user is not allowed to create new databases, then you will need to pre-create the database in MySQL.

5: The user account is a MySQL account, not your regular system user account ... unless MySQL is configured to authenticate against the same user account information that system account login is, e.g. PAM / LDAP.

6: Akonadi appears to batch these queries into transactions that exist per folder being sync'd or every 100 emails, whichever comes first, so if you are watching the database during sync you will see data appear in batches. This can be done pretty easily with an SQL statement like select count(*) from PartTable; Divide this number by 3 to get the number of actual items, time how long it takes for a new batch to arrive, and you'll quickly have your performance numbers for synchronization.

7: That same dialog also offers options for things other than MySQL. There are pros and cons to each of the options in terms of stability and performance. Perhaps I'll write about those in the future, but this blog entry with its stupid number of footnotes is already too long. ;)

greve
Wed, 2015-08-05 10:51

Bringing together an alliance that will liberate our future web and mobile collaboration was the most important motive behind our launching the Roundcube Next campaign at the 2015 Kolab Summit. This goal we reached fully.

There is now a group of some of the leading experts in messaging and collaboration, together with service providers from around the world, that has embarked with us on this unique journey:









The second objective of the campaign was to gain enough acceleration to be able to allow two or three people to focus on Roundcube Next over the coming year. That goal we reached partially. There is enough to get us started and go through the groundwork, but not enough for all the bells and whistles we would have loved to go for. To a large extent that’s because we have a lot of fantasy for bells and whistles.

Roundcube Next - The Bells and Whistles

But perhaps it is a good thing that the campaign did not complete all the way into the stretch goals.

Since numbers are part of my responsibility, allow me to share some with you to give you a first-hand perspective of being inside an Indiegogo Campaign:


(Table: the Roundcube Next campaign amount, the Indiegogo cost, the PayPal cost and the remaining amount, in USD.)
So by the time the money was in our PayPal account, we are down 8.15%.

The reason for that is simple: instead of transferring the complete amount in one transaction, which would have incurred only a single transaction fee, they transferred it individually per contribution, which means PayPal gets to extract the per-transaction fee each time. I assume the rationale behind this is that PayPal may have acted as the escrow service and would have credited users back in case the campaign goal had not been reached. Given that our transactions were larger than the average for crowdfunding campaigns, the percentage for other campaigns is likely going to be higher. It would seem this can easily go beyond the 5% that you see quoted on a variety of sites about crowdfunding.

But it does not stop there. Indiegogo did not allow us to run the campaign in Swiss Francs, and PayPal forces transfers into our primary currency, resulting in another fee for conversion. On the day the Roundcube Next campaign funds were transferred to PayPal, the exchange rate was listed as 0.9464749579 CHF per USD.



                                    Amount            % of total
Roundcube Next Campaign Amount      SFr. 97,998.96    100%
Remaining at PayPal                 SFr. 90,008.06    91.85%
Final at bank in CHF                SFr. 87,817.00    89.61%


So now we’re at 10.39% in fees, of which 4% went to Indiegogo for their services. A total of 6.39% went to PayPal. Not to mention that this is before any t-shirt is printed or shipped, and there is of course also cost involved in creating and running a campaign.

The $4,141.64 we paid to Indiegogo is not too bad, I guess, although their service was shaky and their support non-existent. I don’t think we ever got a response to our repeated support inquiries over a couple of weeks. And we experienced multiple downtimes of several hours, which were particularly annoying during the critical final week of the campaign, where we can be sure to have lost contributions.

PayPal’s overhead was $6,616.27 – the equivalent of another Advisor to the Roundcube Next campaign. That’s almost 60% more than the cost of Indiegogo, which seems excessive and reminds me of one of Bertolt Brecht’s more famous quotes.

But of course you also need to add the effort for the campaign itself, including preparation, running it, and the perks. Considering that, I am no longer surprised that many of the campaigns I see appear to be marketing instruments to sell existing products that are about to be released, rather than being focused on innovation.

In any case, Roundcube Next is going to be all about innovation. And Kolab Systems will continue to contribute plenty of its own resources, as we have been doing for Roundcube and Roundcube Next, including a world-class Creative Director and UI/UX expert who is going to join us a month from now.

We also remain open to others to come aboard.

The advisory group is starting to constitute itself now, and will be taking some decisions about requirements and underlying architecture. Development will then begin and continue well into next year. So there is time to engage even in the future. But many decisions will be made in the first months, and you can still be part of that as an Advisor to Roundcube Next.

It’s not too late to be part of the Next. Just drop a message to

Aaron Seigo
Fri, 2015-07-17 17:41

I try to keep memory of how various aspects of development were for me in past years. I do this by keeping specific projects I've been involved with fresh in my memory, revisiting them every so often and reflecting on how my methods and experiences have changed in the time since. This allows me to wander backwards 5, 10, 15, 20 years in the past and reflect.

Today I was presenting the "final" code-level design for a project I've been tasked with: an IMAP payload filter for use with Kolab. The best way I can think to describe it is as a protocol-level firewall (of sorts) for IMAP. The first concrete use case we have for it is to allow non-Kolab-aware clients (e.g. Thunderbird) to connect to a Kolab server and see only the mail folders, implying that the groupware folders are filtered out of the IMAP session. There are a large number of other use case ideas floating about, however, and we wanted to make sure that we could accommodate those in future by extending the codebase. While drawing out on the whiteboard how I planned for this to come together, along with a break-out of the work into two-week sprints, I commented in passing that it was actually a nicely simple program.

In particular, I'm quite pleased with how the "filter groupware folders" will actually be implemented quite late in the project as a very simple, and very isolated, module that sits on top of a general use scaffolding for real-time manipulation of an IMAP stream.

When I arrived back at my desk, I took a moment to reflect on how I would have perceived the same project earlier in my career. One thing that sprung out at me was that the shape of the program was very clear in my head. Roll back a decade and the details would have been much more fuzzy. Roll back 15 years and it probably would have been quite hand-wavy at the early stages. Today, I can envision a completed codebase.

If someone had presented that vision to me 10 or 15 years ago, I would have accepted it quite happily ("Yes! A plan!"). Today, I know that plan is a lie in much the same way as a 14-day weather report is: it is the best we can say about the near future from our knowledge of today. If nothing changes, that's what it will be. Things always change, however. This is one of life's few constants.

So what point is there to being able to see an end point? That's a good question and I have to say that I never attempted to develop the ability to see a codebase in this amount of detail before writing it. It just sort of happened with time and experience, one of the few bonuses of getting older. ;) As such, one might think that since the final codebase will almost certainly not look exactly like what is floating about in my head, this is not actually a good thing to have at all. Could it perhaps lock one mentally into a path which can be realized, but which when complete will not match what is there?

A lot of modern development practice revolves around the idea of flexibility. This shows up in various forms: iteration, approaching design in a "fractal" fashion, implementing only what you need now, etc. A challenge inherent in many of these approaches is growing short-sighted. So often I see projects switch data storage systems, for instance, as they run into completely predictable scalability, performance or durability requirements over time. It's amazing how much developer time is thrown away simply by misjudging at the start what an appropriate storage system would be.

This is where having a long view is really helpful. It should inform the developer(s) about realistic possible futures which can eliminate many classes of "false starts" right at the beginning. It also means that code can be written with purpose and confidence right from the start, because you know where you are headed.

The trick comes in treating this guidance as the lie it is. One must be ready and able to modify that vision continuously to reflect changes in knowledge and requirement. In this way one is not stuck in an inflexible mission while still having enough direction to usefully steer by. My experience has been that this saves a hell of a lot of work in the long run and forces one to consider "flexible enough" designs from the start.

Over the years I've gotten much better at "flexible enough" design, and being able to "dance" the design through the changing sea of time and realities. I expect I will look back in 5, 10, 15 and 20 years and remark on how much I've learned since now, as well.

I am reminded of steering a boat at sea. You point the vessel to where you want to go, along a path you have in your mind that will take around rocks and currents and weather. You commit to that path. And when the ocean or the weather changes, something you can count on happening, you update your plans and continue steering. Eventually you get there.

mollekopf
Fri, 2015-07-10 14:49

I recently had the dubious pleasure of getting Kontact to work on Windows, and after two weeks of agony it also yielded some results =)

Not only did I get Kontact to build on Windows (sadly still something to be proud of), it is also largely functional. Even timezones are now working in a way that lets you collaborate with non-Windows users, although that required one or the other patch to kdelibs.

To make the whole exercise as reproducible as possible I collected my complete setup in a git repository [0]. Note that these builds are from the Kolab stable branches, and not all of the Windows-specific fixes have made it back upstream yet. That will follow as soon as the waters calm a bit.

If you want to try it yourself you can download an installer here [1], and if you don’t (I won’t judge you for not using Windows) you can look at the pretty pictures.



Aaron Seigo
Fri, 2015-07-03 15:17

Roundcube Next: The Next Steps

The crowdfunding campaign to provide funding and greater community engagement around the refactoring of Roundcube's core, to give it a secure future, has just wrapped up. We managed to raise $103,531 from 870 people. This obviously surpassed our goal of $80,000, so we're pretty ecstatic. This is not the end, however: now we begin the journey to delivering a first release of Roundcube Next. This blog entry outlines some of that path forward.

Roundcube Next: The Next Steps


The most obvious thing on our list is to get people's t-shirts and stickers out to them. We have a few hundred of them to print and ship, and it looks like we may be missing a few shipping addresses, so I'll be following up with those people next week. Below is a sneak peek of what the shirts might look like. We're still working out the details, so they may look a bit different than this once they come off the presses, but this should give you an idea. We'll be in touch with people about shirt sizes, color options, etc. in the coming week.

Roundcube Next: The Next Steps

Those who elected for the Kolaborator perk will be notified by email how to redeem their free months on Kolab Now. Of course, everyone who elected for the in-application-credits mention will get that in due time as well. We've got you all covered! :)

Note that it takes a couple of weeks for Indiegogo to get the funds to us, and we need to wait on that before confirming our orders and shipping the physical perk items.

Roundcube Backstage

We'll be opening the Roundcube Backstage area in the ~2 weeks after wrap-up is complete next week. This will give us enough time to create the Backstage user accounts and get the first set of content in place. We will be using the Discourse platform for discussions and for posting our weekly Backstage updates. I'm really looking forward to reading your feedback there, answering questions, and contemplating the amazing future that lies ahead of us.

The usual channels of Roundcube blogging, forums and mailing lists will of course remain in use, but the Backstage will see all sorts of extras and closer direct interaction with the developers. If you picked up the Backstage perk, you will get an email next week with information on when and where you can activate your account.

Advisory Committee

The advisory committee members will also be getting an email next week with a welcome note. You'll be asked to confirm who the contact person should be, and they'll get a welcome package with further information. We'll also want some information for use in the credits badge: a logo we can use, a short description you'd like to see with that logo describing your group/company, and the web address we should point people to.

The Actual Project!

The funds we raised will cover getting the new core in place with basic email, contacts and settings apps. We will be able to adopt JMAP into this and build the foundations we so desperately need. The responsive UI that works on phones, tablets and desktop/laptop systems will come as a result of this work as well, something we are all really looking forward to.

Today we had an all-hands meeting to take our current requirements, mock-ups and design docs and reflect on how the feedback we received during the campaign should influence them. We are now putting all this together in a clear and concise form that we can share with everyone, particularly our Advisory Committee members, as well as in the Backstage area. This will form the basis for our first round of stakeholder feedback, which I am really looking forward to.

We are committed to building the most productive and collaborative community around any webmail system out there, and these are just our first steps. That we have the opportunity here to work with the likes of Fastmail and Mailpile, two entities that one may have thought of as competitors rather than possible collaborators, really shows our direction in terms of inclusivity and looking for opportunities to collaborate.

Though we are at the end of this crowdfunding phase, this is really just the beginning, and the entire team here isn't waiting a moment to get rolling! Mostly because we're too excited to do anything else ;)

Roundcube Next: The Next Steps

greve
Thu, 2015-07-02 10:01

Software is a social endeavour. The most important advantage of Free Software is its community, because the best Open Source is built by a community of contributors. Contribution is the single most important currency and the differentiation between users and community. You want to be part of that community, at least by proxy, because like any community, members of our community spend time together, exchange ideas, and create cohesion that translates into innovation, features and best practices.

We create nothing less than a common vision of the future.

By the rules of our community, anyone can take our software and use it, extend it, distribute it. A lot of value can be created this way and not everyone has the capabilities to contribute. Others choose not to contribute in order to maximise their personal profits. Short of actively harming others, egoism, even in its most extreme forms, is to be accepted. That is not to say it is necessarily a good idea for you to put the safeguarding of your own personal interests into the hands of an extreme egoist. Or that you should trust in their going the extra mile for you in all the places that you cannot verify.

That is why the most important lesson for non-developers is this: choose providers based on community participation. Not only are they more likely to know about problems early, putting them in a much better position to provide you with the security you require, they will also ensure you will have a future you like.

Developers know all this already, of course, and typically apply it at least subconsciously.

Growing that kind of community has been one of the key motives for launching Roundcube Next, which is now coming close to closing its phase of bringing together its key contributors. Naturally everyone had good reasons to get involved, as recently covered on VentureBeat.

Last night became the single greatest contributor to the campaign in order to build that better future together, for everyone. Over the past weeks, many other companies, some big, some small, have done the same.

Together, we will be that community that will build the future.

mollekopf
Wed, 2015-07-01 17:22

Reproducible testing is hard, and doing it without automated tests is even harder. With Kontact we’re unfortunately not yet in a position where we can cover all of the functionality by automated tests.

If manual testing is required, being able to bring the test system into a “clean” state after every test is key to reproducibility.

Fortunately we now have a lightweight virtualization technology available in the form of Linux containers, and Docker makes them fairly trivial to use.


Docker allows us to create, start and stop containers very easily based on images. Every image contains a file system state, and each running container is essentially a chroot containing that image's content and a process running in it. Let that process be bash and you have pretty much a fully functional Linux system.

The nice thing about this is that it is possible to run an Ubuntu 12.04 container on a Fedora 22 host (or whatever suits your fancy), and whatever I’m doing in the container is not affected by what happens on the host system. So, for example, upgrading the host system does not affect the container.

Also, starting a container is a matter of a second.
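For illustration, the whole round trip can look like this, using stock docker commands (the Ubuntu tag is just the example from above):

# fetch the image once
docker pull ubuntu:12.04
# start a throw-away container with an interactive bash in it;
# --rm removes the container again when the shell exits,
# -t/-i give us an interactive terminal inside the container
docker run --rm -ti ubuntu:12.04 /bin/bash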

Reproducible builds

There is a large variety of distributions out there, and every distribution has its own unique set of dependency versions, so if a colleague is facing a build issue, it is by no means guaranteed that I can reproduce the same problem on my system.

As an additional annoyance, any system upgrade can break my local build setup, meaning I have to be very careful with upgrading my system if I don’t have the time to rebuild it from scratch.

Moving the build system into a docker container therefore has a variety of advantages:
* Builds are reproducible across different machines
* Build dependencies can be centrally managed
* The build system is no longer affected by changes in the host system
* Building for different distributions is a matter of having a couple of docker containers

For building I chose to use kdesrc-build, so building all the necessary repositories takes the least amount of effort.

Because I’m still editing the code from outside of the docker container (where my editor runs), I’m simply mounting the source code directory into the container. That way I don’t have to work inside the container, but my builds are still isolated.

Further I’m also mounting the install and build directories, meaning my containers don’t have to store anything and can be completely non-persistent (the less customized, the more reproducible), while I keep my builds fast and incremental. This is not about packaging after all.
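A rough sketch of what such a build invocation can look like; the image name "kdesrcbuild" and the host-side paths are made up for this example, and the image is assumed to contain the build dependencies plus a kdesrc-build configuration pointing at /src, /build and /install:

docker run --rm -ti \
  -v "$HOME/kde/src:/src" \
  -v "$HOME/kde/build:/build" \
  -v "$HOME/kde/install:/install" \
  kdesrcbuild kdesrc-build kdepim
# the sources, build artefacts and installed files live on the host,
# so the container itself stays completely disposable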

Reproducible testing

Now we have a set of binaries that we compiled in a docker container using certain dependencies, so all we need to run the binaries is a docker container that has the necessary runtime dependencies installed.

After a bit of hackery to reuse the host's X11 socket, it’s possible to run graphical applications inside a properly set up container.

The binaries are directly mounted from the install directory, and the prepared docker image contains everything from the necessary configurations to a seeded Kontact configuration for what I need to test. That way it is guaranteed that every time I start the container, Kontact starts up in exactly the same state, zero clicks required. Issues discovered that way can very reliably be reproduced across different machines, as the only thing that differs between two setups is the used hardware (which is largely irrelevant for Kontact).
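The X11 part boils down to handing the host's socket and DISPLAY into the container. A sketch of starting such a client container; the image name "kontact-testenv" and the paths are invented here, and depending on the host you may additionally need to relax X access control (e.g. xhost +local:) or mount an Xauthority file:

docker run --rm -ti \
  -e DISPLAY="$DISPLAY" \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  -v "$HOME/kde/install:/install:ro" \
  kontact-testenv /install/bin/kontact
# the install directory with the freshly built binaries is mounted read-only,
# the seeded Kontact configuration is baked into the image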

..with a server

Because I’m typically testing Kontact against a Kolab server, I of course also have a docker container running Kolab. I can again seed the image with various settings (I have for instance a John Doe account set up, for which the account and credentials are already configured in the client container), and the server is completely fresh on every start.
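Starting that server container is a one-liner as well; the image name is again made up, and the published ports are just the usual IMAP ones:

docker run -d --name testserver -p 143:143 -p 993:993 kolab-testserver
# -d runs it in the background; nothing inside is persisted, so
# "docker rm -f testserver" followed by a fresh run gives a clean server every time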

Wrapping it all up

Because a bunch of commands is involved, it’s worthwhile writing a couple of scripts to make the usage as easy as possible.

I went for a python wrapper which allows me to:
* build and install kdepim: “devenv srcbuild install kdepim”
* get a shell in the kdepim dir: “devenv srcbuild shell kdepim”
* start the test environment: “devenv start set1 john”

When starting the environment the first parameter defines the dataset used by the server, and the second one specifies which client to start, so I can have two Kontact instances with different users for invitation handling testing and such.

Of course you can issue any arbitrary command inside the container, so this can be extended however necessary.
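The actual wrapper is written in Python; purely to illustrate the dispatch idea, a toy shell equivalent (with all image names and paths invented) could look roughly like this:

#!/bin/sh
# devenv: toy dispatcher around the docker invocations sketched above
case "$1" in
  srcbuild)
    # e.g. "devenv srcbuild install kdepim" or "devenv srcbuild shell kdepim"
    shift
    exec docker run --rm -ti \
      -v "$HOME/kde/src:/src" -v "$HOME/kde/build:/build" \
      -v "$HOME/kde/install:/install" \
      kdesrcbuild "$@"
    ;;
  start)
    # e.g. "devenv start set1 john": fresh server with dataset $2, client for user $3
    docker run -d --name testserver kolab-testserver "$2"
    exec docker run --rm -ti -e DISPLAY="$DISPLAY" \
      -v /tmp/.X11-unix:/tmp/.X11-unix \
      -v "$HOME/kde/install:/install:ro" \
      kontact-testenv "$3"
    ;;
  *)
    echo "usage: $0 srcbuild <cmd> <module> | start <dataset> <user>" >&2
    exit 1
    ;;
esac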

While that would of course have been possible with VMs for a long time, there is a fundamental difference in performance. Executing the build has no noticeable delay compared to simply issuing make, and that includes creating a container from an image, starting the container, and cleaning it up afterwards. Starting the test server + client also takes all of 3 seconds. This kind of efficiency is really what enables us to use this in a lather, rinse, repeat approach.

The development environment

I’m still using the development environment on the host system, so all file-editing and git handling etc. happens as usual so far. I still require the build dependencies on the host system, so clang can compile my files (using YouCompleteMe) and hint if I made a typo, but at least these dependencies are decoupled from what I’m using to build Kontact itself.

I also did a little bit of integration in Vim, so my Make command now actually executes the docker command. This way I get seamless integration and I don’t even notice that I’m no longer building on the host system. Sweet.

While I’m using Vim, there’s no reason why that shouldn’t work with KDevelop (or whatever really..).

I might dockerize my development environment as well (vim + tmux + zsh + git), but more on that in another post.

Overall I’m very happy with the results of investing in a couple of docker containers, and I doubt we could have done the work we did without that setup, at least not without a bunch of dedicated machines just for that. I’m likely to invest more in that setup, and as mentioned, I’m contemplating dockerizing my development environment as well.

In any case, sources can be found here:

Timotheus Pokorra
Wed, 2015-07-01 12:52

Just before the Kolab Summit, at the end of April 2015, the Phabricator instance for Kolab went online! Thanks to Jeroen and the team from Kolab Systems who made that happen!

I have to admit it took me a while to get to understand Phabricator, and how to use it. I am still learning, but I now know enough to write an initial post about it.

Phabricator describes itself as an “open source, software engineering platform”. It aims to provide all the tools you need to engineer software. In their words: “a collection of open source web applications that help software companies build better software”.

To some degree, it replaces solutions like Github or Gitlab, but it offers much more than Code Repository, Bug Tracking and Wiki functionality. It also has tools for Code Review, Notifications and Continuous Builds, Project and Task Management, and more. For a full list, see

In this post, I want to focus on how you work with the code, and how to submit patches. I am quite used to the idea of Pull Requests as Github does them. Things are a little bit different with Phabricator, but once you get used to the workflow, it is arguably more powerful.

Starting with browsing the code: there is the Diffusion application. You can see all the Kolab projects there.
It also shows the “git clone” command at the top for each project.
Admittedly, that is quite crowded, and if you still want the simple cgit interface, you get it here:

Now imagine you have fixed a bug or want to submit a change for the Kolab documentation (project docs). You clone the repository, edit the files, and commit them locally.

You can submit patches online with the Differential application: go to Differential, and at the top right you find the link “Create Diff“. There you can paste your patch or upload it from a file, and specify which project/repository it is for. All the developers of that repository will be notified of your patch. They will review it, and if they accept it, the patch is ready to land. I will explain that below.

Alternatively, you can submit a patch from the command line as well!
Let me introduce you to Arcanist: this is a command line application, part of Phabricator, that helps integrate your git working directory with Phabricator. There is a good manual for Arcanist: Arcanist User Guide
Arcanist is not part of Fedora yet (I have not checked other distributions), but you can install it from the Kolab Development repository like this, e.g. on Fedora 21:

# import the Kolab key
rpm --import ";search=0x830C2BCF446D5A45"
curl -o /etc/yum.repos.d/KolabDevelopment.repo
yum install arcanist
# configure arcanist (file: ~/.arcrc)
arc set-config default
arc install-certificate
# go to and copy the token and paste it to arc

Now you can create a clone of the repository, in this example the Kolab Documentation:

git clone
# if you have already an account on, and uploaded your SSH key to your configuration:
# git clone ssh://
cd docs
# do your changes
# vi source/installation-guide/centos.rst
git commit -a
arc diff            # Creates a new revision out of ALL unpushed commits on
                    # this branch.

This will also create a code review item on Differential!

For more options of arc diff, see the Arcanist User Guide on arc diff
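One step not shown above: if the reviewers request changes, my understanding of the Arcanist workflow (see the guide linked above) is that you commit the follow-up work and send an updated diff to the same revision, with D23 standing in for your revision id here:

# address the review comments
vi source/installation-guide/centos.rst
git commit -a
arc diff --update D23   # updates the existing Differential revision
                        # instead of creating a new one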

By the way, have a look at this example:

Now, after your code change has been reviewed and accepted, it is “ready to land”.
What happens next depends on whether you have write permissions on the repository. If you don’t, ask on IRC (freenode #kolab) or on the Kolab developers’ mailing list for someone to merge your change.

If you have push permissions, this is what you do (if D23 is your Differential id):

# assuming you have Arcanist configured as described above...
arc patch --nobranch D23
# if we are dealing with a branch:
# arc land D23
git push origin master

I hope this helps to get started with using Phabricator, and it encourages you to keep or start submitting patches to make Kolab even better!

Aaron Seigo
Mon, 2015-06-29 21:36

Roundcube Next crowdfunding success and community

A couple days ago, the Roundcube Next crowdfunding campaign reached our initial funding goal. We even got a piece on Venture Beat, among other places. This was a fantastic result and a nice reward for quite a bit of effort on the entire team's part.

Reaching our funding goal was great, but for me personally the money is secondary to something even more important: community.

You see, Roundcube has been an Internet success for a decade now, but when I sat down to talk with the developers about who their community was and who was participating in it, there wasn't as much to say as one might hope for such a significant project used by that many people.

Unlike the free software projects born in the 90s, many projects these days are not very community focused. They are often much more pragmatic, but also far less idealistic. This is not a bad thing, and I have to say that the focus many of them have on quality (of various sorts) is excellent. There is also a greater tendency to have a company founded around them, and a greater tendency to be hosted on the mostly-proprietary Github system with little in the way of community connection other than pull requests. Unlike the Free software projects I have spent most of my time with, these projects hardly try at all to engage with people outside their core team.

This lack of engagement is troubling. Community is one of the open source1 methodology's greatest assets. It is what allows mutual interests to create a self-reinforcing cycle of creation and support. Without it, you might get a lot of software (though you just as well might not), but you are quite unlikely to get the buy-in and participation, and thereby the amplification and sustainability, that open source of the pre-Github era enjoyed.

So when we designed the Roundcube Next campaign, we positioned no fewer than four of the perks to be participatory. There are two perks aimed at individual backers (at $75 and $100) which give those backers access to what we're calling the Backstage Pass forums. These forums will be directed by the Roundcube core team, and will focus on interaction with the end users and people who host their own instance of Roundcube. Then we have two aimed at larger companies (at $5,000 and $10,000) that use Roundcube as part of their services. Those perks gain them access to Roundcube's new Advisory Committee.

So while these backers are helping us make Roundcube Next a reality, they are also paving a way to participation for themselves. The feedback from them has been extremely good so far, and we will build on that to create the community Roundcube deserves and needs. One that can feed Roundcube with all the forms of support a high profile Free software product requires.

So this crowdfunding campaign is really just the beginning. After this success, we'll surely be doing more fund raising drives in future, and we'd still love to hit our first stretch goal of $120,000 ... but even more vitally this campaign is allowing us to draw closer to our users and deployers, and them with us until, one hopes, there is only an "us": the people who make Roundcube happen together.

That we'll also be delivering the most kick-ass-ever version of Roundcube is pretty damn exciting, too. ;)

p.s. You all have 3 more days to get in on the fun!

1 I differentiate between "Free software" as a philosophy, and "open source" as a methodology; they are not mutually exclusive, but they are different beasts in almost every way, most notably how one is an ideology and the other is a practice.

Aaron Seigo
Sat, 2015-06-27 10:22

Riak KV, Basho and Kolab

As I have mentioned in earlier blog entries, Kolab Enterprise has gained data loss prevention (DLP) functionality this year that goes above and beyond what one tends to find in other groupware products. Kolab's DLP is not just a back-up system that copies mails and other objects to disk for later restore; it actually creates a history of every groupware object in real time that can later be examined and restored from. This will eventually lead to some very interesting business intelligence features.

The storage system for the Kolab DLP system is Basho's industry-leading distributed NoSQL database, Riak KV. (The "KV" stands for key/value.) We chose Riak KV because it scales naturally (it is designed to be run as a cluster of nodes by default), is robust by design (CAP Theorem ftw), and is dead simple to deploy on development and production machines alike. A further key factor for us is that Basho provides proven enterprise-grade support for its line of Riak products. This was a requirement for us as we need to provide enterprise-grade support for the entire Kolab Enterprise stack.

(It was a nice coincidence that both Riak and core parts of Kolab's DLP system are written in Erlang. ;)

I recently sat down with Manu Marchel, Managing Director for EMEA at Basho Technologies Inc., for a mutual interview. You can read my interview on the Basho blog (I'll update this entry with a link when it is published); here is a transcript of my conversation with Manu:

NoSQL is quite a new technology in the Big Data space. Many people might have heard about things like Hadoop, but how does NoSQL fit in? Could you give everyone the quick cheatsheet on what NoSQL databases are and, specifically, what Riak KV, your key-value NoSQL database, is?

NoSQL databases are the new generation of databases that were designed to address the needs of enterprises to store, manage, analyse and serve the ever increasing amounts of unstructured data that make up over 80% of all data being created nowadays in public clouds or private infrastructures. Apache Hadoop has done a great job of handling batch analytics use cases at massive scale for unstructured data, what I would call exploratory or discovery analytics. What NoSQL databases like Riak do in comparison is help organisations manage their active data workloads as well, providing near real-time operational analytics at scale. Most importantly, most businesses need scalability, availability and fault tolerance as core requirements of their current and future application architectures, and these are deciding factors for NoSQL against traditional relational databases. NoSQL databases started out as one of four types: key-value, column store, document and graph, but nowadays they are becoming multi-model, whereby for example Riak can efficiently handle key-value data, but also documents as well as log/time-series data, as demonstrated by our wide range of customers, including Kolab Systems.

Riak KV is the most widely adopted NoSQL key-value database, with scalability, high availability, fault tolerance and operational simplicity as its key properties. It is used most often for mission critical use cases and works great for handling User Data, Session Data, Profile Data, Social Data, Real-time Data and Logging Data use cases. It provides near real-time analytics with its secondary indexes, search through Solr, in-situ Map/Reduce and soon-to-come Apache Spark support. Finally, its multi-data-center replication capability makes it easy to ensure business continuity, to geo-locate data for low-latency access across continents, or to segregate workloads to ensure very reliable low latency.

Riak KV is known for its durability; it’s part of the reason we chose it for Kolab's DLP system. Could you give us some insight into how Riak KV achieves this?

Hardware does fail, and when it does your IT infrastructure needs to be able to cope and your systems must continue to operate while the resources are brought back online as soon as possible. Riak KV was designed to eliminate the impact of the unexpected. Even if network partitions or hardware failures cause unanticipated outages, Riak KV can still read and write your data. This means you never lose data even when part of the system goes down. For Kolab customers, it means that they have the security of knowing that the data loss prevention and auditing that they are paying for is backed by the best system available to deliver on this promise.

Availability seems to be a very important thing for databases in today’s digital age. How is Riak providing this key feature to Kolab and how does this enhance the Kolab offering?

Simply, Riak KV is designed to intelligently replicate and retrieve data, making sure that applications based on the database are always available. Scalability also comes into play here as well. Unlike traditional databases, Riak is designed to respond to the billions of data points and terabytes of data that are being produced -- often in real time -- as it is able to scale in a near linear fashion to give Kolab the best possible performance. Ultimately this means that Kolab’s application is always available, so as an end-user you don’t experience any system outages no matter how busy or active your users are.

We integrate Riak KV with Kolab’s data loss prevention system to store groupware object histories in real time for auditing and roll-back if needed. Is this unique?

Yes! This is a great example of two great technologies working together to provide an excellent customer experience. Combining the power of Riak KV’s high availability, fault tolerance, and scalability with Kolab’s data loss prevention system means that you have an incredibly strong and powerful system.

Basho is a really unique name for a technology company - is there any history or background to it?

Thank you, we really like our name too. Basho’s name was inspired by the real life Matsuo Basho (1644 – 1694), who is considered by many to be Japan's most renowned and revered writer of haiku. Haiku are known for their balance of lines and syllables, where the simplicity of the structure is important. This is a founding, guiding principle that Riak KV is based on, as operational simplicity is core to our architecture, eliminating the need for mindless manual operations since data can be automatically and uniformly distributed.

To see the partnership of Basho's Riak and Kolab Enterprise in action together, come see us in Munich at the TDWI European Conference 22-24th June. We'll be in a booth showing both Riak KV and Kolab Enterprise, and will be happy to answer your questions!