Aaron Seigo

Protocol Plugfest: opening closed doors to interoperability together


The "world wide web" has been such an amazing success in large part because it was based on open protocols and formats that anyone can implement and use on a level playing field. This opened the way for interoperability on a grand and global scale, and is why HTTP and HTML succeeded where many others failed previously.

Unfortunately, not all areas of computing are as blessed with open protocols and formats. Some are quite significant, too, with hundreds of millions of people using and relying on them every single day. Thankfully, some brave souls have stepped up to implement these proprietary protocols and formats using open technology, specifically as free software. The poster child for this is Samba and the ubiquitous file and print server protocols from Microsoft.

Such formats abound and are a key component of everyday business (and personal) computer usage, and since the protocols are often not as open as we'd like, it can be tricky to provide free, open and interoperable implementations of them. Again, just ask the Samba team. Yet bringing the option of freedom to technologies used by businesses and governments around the world relies on these efforts.

The free software community is moving rapidly on all fronts of this landscape, and to help ensure that our efforts actually do work as expected and that we are sharing useful information with each other between projects, a fantastic conference has been set up: the Protocol Plugfest, which will be held in Zaragoza, Spain on May 12-14. This is much like the ODF Plugfest, which focuses on office file formats, but with a stronger focus on the protocols found in products such as Active Directory, Exchange, SharePoint, CIFS and LDAP.

vanmeeuwen

Benchmarking Storage Pods, Part II

Welcome back to the ongoing series of blog posts on benchmarking storage pods! Today is another beautiful Thursday and we have some extra information for you.

In my first blog post in this series, I had just barely gotten my hands on a Storage Pod -- and I was out to set a baseline for the storage performance. I mentioned that our intention had been to use SSDs for really fast access, and bulk SATA drives for the massive amounts of storage. I may also have mentioned that the controllers were seemingly unfairly balanced. More details of course are in part I.

First of all, I have to send a big thank you to our vendor, who almost immediately responded with some quick tips, clearly showing that the Storage Pod crowd is paying attention and wants you to get the loudest bang for your buck. Much appreciated, and nothing but kudos!

vanmeeuwen

Benchmarking Storage Pods, Part I

I have recently obtained access to a so-called Storage Pod -- an open hardware design for stuffing up to 45 SATA III drives into a chassis no higher than 4U -- a.k.a. the "Storinator".

How does one benchmark such goodness? Well, most importantly you first need to set your baseline. This I will do in this Part I of what hopefully becomes a series of posts on the subject.

The pod comes with 2 HighPoint Rocket 750 controller cards connected to 3 SATA back-planes, each of which is capable of transferring up to 5 GB/s. This seems unfairly balanced, since the slots these controller cards sit in each have their own maximum transfer rate. Let's keep this in mind while we dive in deeper.
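To illustrate the imbalance, here is a back-of-the-envelope calculation. The 5 GB/s per back-plane figure comes from above; the PCIe numbers (a 2.0 x8 slot at roughly 500 MB/s usable per lane) are illustrative assumptions, not vendor specifications.

```python
# Back-of-the-envelope check of the controller/back-plane balance.
backplanes_per_card = 3
backplane_gbps = 5.0        # GB/s per SATA back-plane (from the post)
pcie_lanes = 8              # assumed: PCIe 2.0 x8 slot
pcie_gbps_per_lane = 0.5    # assumed: ~500 MB/s usable per PCIe 2.0 lane

backplane_side = backplanes_per_card * backplane_gbps  # what the back-planes could feed
slot_side = pcie_lanes * pcie_gbps_per_lane            # what the slot can carry

print(f"back-planes can feed ~{backplane_side:.0f} GB/s per card")
print(f"slot can carry      ~{slot_side:.0f} GB/s per card")
print("bottleneck:", "PCIe slot" if slot_side < backplane_side else "back-planes")
```

Under these assumptions the slot, not the back-planes, is the ceiling, which is exactly the kind of imbalance worth keeping in mind before interpreting any benchmark numbers.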

It's stuffed with 39 × 4 TB Western Digital "Red" drives, striking a balance between overall power consumption, capacity and Mean Time Between Failures (MTBF). The remaining 6 slots hold 1 TB Samsung SSDs.

vanmeeuwen

Kolab 3.4 Release Planning

We're meeting on Friday, February 13th at 15:00 CET in the #kolab IRC channel on the FreeNode network, to discuss the release of Kolab 3.4.

Aaron Seigo

tracking the lifetime of objects on a kolab server

An interesting problem came up during a recent bit of Kolab server-side work: tracking the lifetime of objects in the IMAP store. How hard can it be, right? In my experience, it is the deep, dark holes which start with that question. ;)

In our case, the IMAP server keeps objects (e.g. emails, events, whatever) in directories that appear as folders to the client. It keeps each object in a separate file, so each object gets a unique filename in that folder. That file name plus the folder path is unique at that point in time, something enforced by the filesystem. Let's call this the "object id" or OID for short. Of course, when you move an object from one folder to another that id changes; in fact, the file name may also change and so the OID is not constant over time. We can track those changes, however.
Inside the file is the message content of the object. There is an ID there as well, and generally those are unique to a given user. Let's call this the "message id", or MID for short. However, it is possible to share messages with others, such as when sending a calendar invitation around between users. This can lead to situations quite easily where multiple users have a message with the same MID. They of course have different OIDs .. right?
Well, yes, but only at a given point in time. With "clever" moving of objects between folders, particularly once we add in the concept of shared folders (which is the same as saying shared calendars, notes, etc.), at different points in time there can be objects with the same OID and the same MID which are actually physically different things.
Now if we want to get the history for a given object in the IMAP storage and list all the things relevant to it in an entirely deterministic way, how can we guarantee proper results? In a simple logging model one might want to simply key change events with the OID, MID or some combination, but since those are not guaranteed to be unique over time in any combination this leaves a small challenge on the table to make it deterministic.
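One possible way out (a sketch of the general idea, not Kolab's actual solution) is to assign every physical object an immutable "history id" that is never reused, the first time it is seen, and key the event log on that instead of on OID or MID. All names below are invented for illustration.

```python
import itertools

class ObjectHistory:
    """Sketch: deterministic per-object history via a never-reused history id (HID)."""

    def __init__(self):
        self._hids = itertools.count(1)   # HIDs are never reused, even after deletion
        self._live = {}                   # current OID -> HID, for live objects only
        self._log = []                    # append-only log: (hid, event, detail)

    def create(self, oid, mid):
        hid = next(self._hids)
        self._live[oid] = hid
        self._log.append((hid, "create", (oid, mid)))
        return hid

    def move(self, old_oid, new_oid):
        hid = self._live.pop(old_oid)     # the OID changes, the HID stays the same
        self._live[new_oid] = hid
        self._log.append((hid, "move", (old_oid, new_oid)))

    def delete(self, oid):
        hid = self._live.pop(oid)
        self._log.append((hid, "delete", oid))

    def history(self, hid):
        return [e for e in self._log if e[0] == hid]

# Two physically different objects can end up with the same OID and MID over
# time, but their histories stay separate because their HIDs differ.
h = ObjectHistory()
a = h.create("INBOX/1", "mid-42")
h.move("INBOX/1", "Trash/1")
b = h.create("INBOX/1", "mid-42")   # same OID and MID as 'a' once had
assert a != b
assert len(h.history(a)) == 2 and len(h.history(b)) == 1
```

The point is that uniqueness is enforced by the logging layer itself rather than borrowed from identifiers whose uniqueness only holds at a single point in time.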
roundcube

New stable version 1.1.0 released

We’re proud to announce the arrival of the next major version 1.1.0 of
Roundcube webmail, which is now available for download. With this
milestone we introduce new features added since version 1.0 as well as
some clean-up of the 3rd-party libraries:

  • Allow searching across multiple folders
  • Improved support for screen readers and assistive technology using
    WCAG 2.0 and WAI ARIA standards
  • Update to TinyMCE 4.1 to support images in HTML signatures (copy & paste)
  • Added namespace filter and folder searching in folder manager
  • New config option to disable UI elements/actions
  • Stronger password encryption using OpenSSL
  • Support for the IMAP SPECIAL-USE extension
  • Support for Oracle as database backend
  • Manage 3rd party libs with Composer

In addition to that, we added some new features to improve protection
against possible but as yet unknown CSRF attacks, thanks to the help of
Kolab Systems, who supplied the concept
and development resources for this.

Although the new security features are still experimental and disabled by default,
our wiki describes how to enable the Secure URLs
so you can give them a try.

And of course, this new version also includes all patches for reported
CSRF and XSS vulnerabilities previously released in the 1.0.x series.

IMPORTANT: with the 1.1.x series, we drop support for PHP < 5.3.7
and Internet Explorer < 9. IE7/IE8 support can be restored by
enabling the ‘legacy_browser’ plugin.

See the complete Changelog at
and download the new packages from

mollekopf

Progress on the prototype for a possible next version of akonadi

Ever since we introduced our ideas for the next version of Akonadi, we’ve been working on a proof-of-concept implementation, but we haven’t talked a lot about it. I’d therefore like to give a short progress report.

By choosing decentralized storage and a key-value store as the underlying technology, we first need to prove that this approach can deliver the desired performance with all pieces of the infrastructure in place. I think we have mostly reached that milestone by now. The new architecture is very flexible and looks promising so far. IMO we have managed quite well to keep the levels of abstraction to the necessary minimum, which results in a system that is easily adjusted as new problems need to be solved and feels very controllable from a developer's perspective.

We started off by implementing the full stack for a single resource and a single domain type. For this we developed a simple dummy resource that currently has an in-memory hash map as its backend and can only store events. This is a sufficient first step, as turning it into the full solution is a matter of adding further flatbuffer schemas for other types and defining the relevant indexes necessary to query what we want to query. By working on only a single type first, we can carve out the necessary interfaces, keep the effort required to add new types minimal, and thus maximize code reuse.
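Conceptually, the dummy resource boils down to something like the following sketch. Note that this is Python rather than the C++/flatbuffers of the real prototype, and all names here are invented for illustration.

```python
import itertools

class DummyEventResource:
    """Conceptual sketch of a dummy resource: an in-memory hash map as the
    store, plus a secondary index so queries need not scan every object."""

    def __init__(self):
        self._ids = itertools.count(1)
        self._store = {}     # id -> event dict (the "hash map backend")
        self._by_date = {}   # date -> set of ids (one query index)

    def add(self, summary, date):
        eid = next(self._ids)
        self._store[eid] = {"summary": summary, "date": date}
        self._by_date.setdefault(date, set()).add(eid)
        return eid

    def query_by_date(self, date):
        # Resolve the index hit set back to full objects, in insertion order.
        return [self._store[i] for i in sorted(self._by_date.get(date, ()))]

r = DummyEventResource()
r.add("sprint review", "2015-02-13")
r.add("kolab 3.4 meeting", "2015-02-13")
r.add("release", "2015-02-20")
assert len(r.query_by_date("2015-02-13")) == 2
```

Supporting a further domain type would then mean adding another schema plus the indexes needed for its queries, which is the shape of the "adding further flatbuffer schemas" step described above.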

The design we’re pursuing, as presented during the pim sprint, consists of:

Timotheus Pokorra

Nightly builds and tests for CentOS and Debian

is glad to announce that nightly tests now run every night against the nightly development packages of Kolab, for both CentOS 6 and Debian Wheezy.

At the moment, the nightly tests also run for the Kolab 3.3 Updates for CentOS 6.

Through these tests, we can spot problems and obvious bugs in the code early, when they are still easy to fix. This is our contribution to making Kolab more stable and reliable.

As soon as Kolab 3.4 is out (expected for the middle of February 2015), we will enable nightly tests for CentOS 6 and Debian Wheezy for Kolab 3.4.

You can see the tests and the results here:

I use the TBits KolabScripts to install the server unattended.

The TBits KolabScripts also contain some Selenium tests written in Python. These tests check whether a user can change their own password, and test creating new domains and new users, sending emails, catch-all addresses, etc.

The TBits scripts for ISPs are also tested: these scripts add domain-admin functionality, giving customers the option to manage their own domains, limited by a domain quota and a maximum number of accounts.

We use the LightBuildServer for running the tests. This software is written in Python by Timotheus Pokorra. It is very lightweight, does not come with much code, and is easy to extend and configure. It uses LXC to create a container for each test or build run. You can see it live in action for Kolab

Andreas Cordes

Migration to "Cubietruck" finished


Due to a new job since the beginning of the year and a move to a new country, I have been a bit stressed. In addition to this, my provider switched me from IPv4 to IPv6 on a DS-Lite line.

So my port forwarding no longer works, and I had to reorganize all the stuff to get access to my mails again :-). Well, that's a disadvantage of my solution, but I hope this will be solved somehow.

So here is my script to install Kolab on my truck:

export RELEASE=3.3
export RELEASEDIR=kolab_3.3_src
mkdir -p /mnt/kolab/$RELEASEDIR
mkdir -p /mnt/kolab/kolab_${RELEASE}_deb
echo "deb-src$RELEASE/Debian_7.0/ ./" > /etc/apt/sources.list.d/kolab.list
echo "deb-src$RELEASE:/Updates/Debian_7.0/ ./" >> /etc/apt/sources.list.d/kolab.list
#echo "deb /" >> /etc/apt/sources.list.d/kolab.list
echo "Package: *" >  /etc/apt/preferences.d/kolab
echo "Pin: origin" >>  /etc/apt/preferences.d/kolab
echo "Pin-Priority: 501" >>  /etc/apt/preferences.d/kolab
wget -qO -$RELEASE/Debian_7.0/Release.key | apt-key add -
wget -qO -$RELEASE:/Updates/Debian_7.0/Release.key | apt-key add -
aptitude update
cd /mnt/kolab/$RELEASEDIR
echo Get debian sources
roundcube

Security update 1.0.5 released

We just published a security update to the stable version 1.0.
Besides a recently reported cross-site scripting (XSS) vulnerability,
this release also contains some bug fixes and improvements we
found important for the long-term support branch of Roundcube.

It’s considered stable and we recommend updating all production installations
of Roundcube to this version. Download it from,
see the full changelog here.

Please make a backup before updating!