roundcube
Mon, 2015-11-23 17:30

We’re proud to announce that the beta release of the next major version 1.2 of
Roundcube webmail is out now for download and testing. With this milestone
we introduce new features primarily focusing on security and PGP encryption:

  • PHP7 compatibility
  • PGP encryption
  • Drag-n-drop attachments from mail preview to compose window
  • Searching mail messages within a predefined date interval
  • Improved security measures to protect from brute-force attacks

And of course plenty of small improvements and bug fixes.

The PGP encryption support in Roundcube comes with two options:


Mailvelope

The integration of this browser plugin for Firefox and Chrome comes out of the box in Roundcube 1.2 and is enabled if the Mailvelope API is detected in a user’s browser. See the Mailvelope documentation for how to enable it for your site.

Read more about the Mailvelope integration and what it looks like.

Enigma plugin

This Roundcube plugin adds server-side PGP encryption features to Roundcube. Enabling this
means that users need to fully trust the webmail server as encryption is done on the server
(using GnuPG) and private keys are also stored there.

In order to activate server-side PGP encryption for all your users, the ‘enigma’
plugin, which is shipped with this package, has to be enabled in the Roundcube config.
See the plugin’s README for details.

Also read this blogpost
about the Enigma plugin and how it works.

IMPORTANT: with this version, we finally deprecate some old Roundcube library functions.
Please test your plugins thoroughly and look for deprecation warnings in the logs.

See the complete Changelog at
and download the new packages from

Please note that this is a beta release and we recommend testing it in a separate environment. And don’t forget to back up your data before installing it.

Sun, 2015-11-22 17:59

Today, I’ve turned off our last two CentOS systems, and now we run Red Hat Enterprise Linux 100% (minus the Fedora workstations), because, you know, amateur-hour be gone!

Regrettably, they were firewalls, meaning at some point a unicorn somewhere may have felt the network failing over. Twice.

Thu, 2015-11-19 15:28

Somebody mentions to me that “struggles” may be a big word for such a small problem, and that mentioning Nulecule in the same breath may not be fair at all.

That person, Joe Brockmeier, is correct, and I hereby pledge to buy him a beer — as I know Joe loves beer — does FOSDEM work for you Joe?

My earlier blog post did not give you any insight into what, how or why I struggled with Nulecule, nor whether the struggle is with Nulecule specifically or with orchestrating too many containers and a few too many micro-services as a whole.

My Nulecule application is Kolab — the full application suite. It depends on a number of other applications which are principally valid Nulecule applications in their own right. Examples include MongoDB, MariaDB, PostgreSQL, HAProxy, 389 Directory Server, FreeIPA, and so forth.

Some of these have existing Docker images. Nulecule applications are available for some of them, or they are easily made into Nulecule applications. Some are slightly harder to encapsulate, however, and I’ll take one example to illustrate the point: 389 Directory Server.

Complex and Custom: 389 DS

A default 389 Directory Server installation falls just short of what Kolab requires or desires to be fully and properly functional:

  • Schema extensions provide additional functionality,
  • Anonymous binds should not be allowed,
  • ACLs should be more restrictive,
  • Stronger passwords than 7-bit-only ones should be allowed,
  • Access logging is I/O intensive and less important than Audit logging,
  • Additional indexes on attributes included in the schema extensions need to be created,
  • Service- and Administration accounts and roles need to be created.

To facilitate this particular form of setting up 389 Directory Server, we currently ship our own version of /usr/share/data/dirsrv/template.ldif, in which we substitute some values during initial startup.
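That substitution step can be sketched with stdlib string.Template; the placeholder names and the LDIF fragment below are made up for illustration, not the actual template.ldif contents:

```python
from string import Template

# Hypothetical fragment of a directory server template; the real
# template.ldif uses its own placeholder names and many more entries.
TEMPLATE = """\
dn: $suffix
objectClass: top
objectClass: domain
dc: $dc
aci: (targetattr = "*")(version 3.0; acl "No anonymous access"; deny (all) userdn = "ldap:///anyone";)
"""

def render(suffix: str, dc: str) -> str:
    # Substitute the deployment-specific values during initial startup.
    return Template(TEMPLATE).substitute(suffix=suffix, dc=dc)

print(render("dc=example,dc=org", "example"))
```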

With these specifics, how would I first create a generic Nulecule application for 389 Directory Server and then extend it? This is a philosophical question I think deserves answering but probably requires a lot of deliberation.

Multiple Instances of a Nulecule Application

In another area of my application suite, 6 of my micro-services require a database — MariaDB. The challenge here is threefold:

Atomicapp collects the configuration data required for the Nulecule applications by application graph name only, and the position of the application in the graph is discarded. This is to say that application A and B both requiring application C would have one combined configuration section for the application C, which is then to be utilized for both A and B.
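A toy model of that collapse (not atomicapp’s actual code): when answers are keyed by application name alone, two consumers of the same child application end up sharing, and overwriting, one configuration section:

```python
# Toy model: answers are collected per application *name*, so the
# position of "mariadb" in the graph (under A or under B) is lost.
answers = {}

def collect(app_name, params):
    # Later answers for the same name simply merge into one section.
    answers.setdefault(app_name, {}).update(params)

# Application A wants its own database name...
collect("mariadb", {"db_name": "app_a"})
# ...and application B wants another, overwriting A's answer.
collect("mariadb", {"db_name": "app_b"})

assert answers == {"mariadb": {"db_name": "app_b"}}  # A's setting is gone
```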

Furthermore, the Nulecule application for MariaDB (the case in point) creates pods and services all named “mariadb” — but there can be only one (per namespace). The creation of the second pod or service will fail. I would use ‘generateName’ for the pod and service name, but that is not currently supported, so this restriction applies to all Nulecule applications and not just MariaDB. My way to work around this restriction for now is to fork off mariadb-centos7-atomicapp and substitute its id and name parameters.

The third part of the problem is the level of customization that may be recommended for a MariaDB service; the maximum size of allowed packets, the use of one file per table in InnoDB, buffer sizes, cache sizes, etc., etc. An external Nulecule application should come with the best run-time defaults, and allow for a level of customization by the consuming application. Frankly, the same goes for the 389 Directory Server application I talked about earlier — it would just function differently.

Focus on Priorities

I’m relatively new to both the Atomic and Nulecule development communities, despite spending time with a select group of people in Red Hat’s Westford offices last year, so I can only say so much.

A number of the conversations on mailing lists, on IRC and in meetings today revolve around integrating atomicapp’s and atomic’s command-line interfaces, and whether or not Nulecule applications should or have to ship atomicapp code themselves. These are not unimportant topics and may very well need to be addressed sooner rather than later, lest they become a sinkhole we later need to climb out of. Fair enough.

However, these topics dominate the conversation. A disproportionate amount of resources seems to be spent on them, whereas we lack persistent volume configuration for Nulecule applications, suffer duplicated specifications, settings and configuration items, and lack proper support to deploy most things more complex than a web service with a database — which is not to say it is completely impossible, it’s just needlessly difficult.

I believe the best way to make developing and deploying Nulecule applications easier is to walk people through clearly articulated, documented showcases of example applications, each slightly more complex than the last. Part of the ramp-up cost is learning how to set up the necessary systems and how to do anything that is not already an example. I will probably get more involved in these topics to support my team of developers when the time is right.

Mon, 2015-11-16 22:18

I’m working on enabling continuous deployment with help of Nulecule, so that I can have my developers’ edges as well as a series of central environments entertain deployment based on the same triggers.

For those of you outside the inner circle, Nulecule is supposed to abstract deployment to different cloud orchestration providers, whether vanilla Docker, something assisted by Kubernetes, or something as fancy as OpenShift.

In the Kolab Groupware world, orchestration of micro-services is no new challenge. We operate many, many environments that deploy “micro-services” based on (fat) VMs, but the way this works based on (thin) containers is different — there’s no running Puppet inside a container, for instance, or if there is, you’re doing it wrong.

So, aside from the fact we know what our micro-services are across the application suite, the trick is in the orchestration. I have some 25-30 micro-services that require access to a service account’s credentials, because we do not normally allow anonymous binds against LDAP (any exploit anywhere would open you up to reconnaissance otherwise). Currently, setting one container to hold and configure those credentials in a canonical authentication and authorization database and distributing those credentials to the other 24-29 containers doesn’t seem possible without the user or administrator installing Kolab using containers specifying the credentials 25-30 times over.

On a similar note, it is currently not possible to specify what volumes for a Docker container are actually supposed to be persistent storage.

The point to take from this is to appreciate that Kolab is often at the forefront of technologies that have little to do with bouncing back and forth some emails among people that just so happen to share a calendar — in this case specifically, we’re abusing our use-case’s requirements with regards to orchestration to direct the conversation about a unified format to describe such requirements.

Sun, 2015-11-08 12:18

Recently I implemented a couple of improvements related to file storage: better access-rights support and storage integration with more components.

Add attachments “from cloud”

It was already possible to attach files from file storage on the email compose page. Now you can also get files “from cloud” in event and task creation/editing forms.

Figure 1. “From cloud” button in event dialog.


Read-only folders

Until now, the parts of the user interface that use folders from file storage were not aware of folder access rights. That has changed: the Chwala API now provides information about access rights, so the folders list displays a lock icon for read-only folders. Also, when you use the “Save to cloud” feature, the list will contain only writable folders. Additionally, any write operations (like file upload, delete or move) are prevented by the UI, which means e.g. some buttons are inactive in read-only folders.

Figure 2. Read-only folders marked with a lock icon in the folders list.


Sun, 2015-10-18 18:42

Kolab Groupware is a collaboration suite integrating various applications you may know already; most prominently, these include 389 Directory Server, Postfix, Cyrus IMAP and Roundcube. On their own, these applications would comprise a simple mail system that, in terms of functionality, would fall short of “groupware” and “collaboration”.

Side-note: Cyrus IMAP has added CalDAV, CardDAV and WebDAV capabilities, encroaching on territory Kolab otherwise occupied rather exclusively, as far as the world of Free Software is concerned. As such, Cyrus IMAP provides more of the groupware functionality than I initially stated, but it does not provide ActiveSync capabilities, a web-mail client interface, resource management, and many other facilities included with Kolab Groupware. Inversely, however, Kolab is going to need to show itself to be the most adaptable, and to applaud, welcome and embrace these developments in Cyrus IMAP, rather than attempt to compete with them somehow.

This blog post is about Single Sign-On and second factor authentication, however, and where and how these fit in with Kolab.

First, we need to clarify the terminology, because SSO and 2-factor authentication both mean different things to different people.

Single Sign-On

Single Sign-On is the functionality in which a complete infrastructure of services allows you to authenticate precisely once, and only ever once, from which point forward you are trusted to have identified yourself. This is contrary to setups where you can use the same credentials over and over against different services, but have to fill out and submit those credentials again and again to create valid sessions.

The ability to use the same credentials everywhere is usually achieved by making individual applications use the same authentication and authorization database in very much the same way — usually LDAP in some form or the other, or a SQL database. In principle your credentials fly over the wire time and time again in exactly the same way they would otherwise, they just happen to be the same credentials every time.

That is not to say that in a Single Sign-On scenario, no credentials go over the wire. They just so happen not to be your username and password, but separate ones issued to you, with the level of trust negotiated between multiple parties — you, the issuer and the service. The clearest, cleanest and best example of this is Kerberos.

Two-Factor Authentication

Two-factor authentication — or multi-factor authentication — involves the functionality to supplement what you know (your password) with something you have. This translates to a token of sorts, usually presented in a physical form or encasing (an application’s configuration on a smart phone), that you are required to have on you and is expected to never change ownership. This can be a YubiKey, a smartcard, a phone, and such.

Since these devices can contain a form of processing power, the most common implementations of the second factor today are one-time passwords — supplemental tokens that are valid precisely once, or for a limited window in time, which is understood to be sufficient.

In the realm of Kolab especially, your username is often widely known — after all, an email address is rather limited in functionality unless your mailbox is to remain empty. More broadly speaking, your username tends to be known regardless — Twitter handles, Google email addresses, etc.

Back to Kolab

As I’ve mentioned before, Kolab has multiple access points — IMAP, SMTP, LDAP, the various web applications such as Roundcube, ActiveSync, CalDAV, CardDAV and WebDAV. The difficulty in applying Single Sign-On and Two-Factor Authentication needs to be viewed in the context of some of these separate applications being required to authenticate to other applications as you. How, though, does a web-mail client interface authenticate as you when it needs access to IMAP, LDAP or SMTP?

Proxy Authorization?

The first option is to consider the fact that IMAP, LDAP and SMTP have a concept of “proxy authorization”, where a set of credentials other than your account’s is used to authenticate, and the connection is subsequently authorized as if it had been your account logging in to begin with. More specifically, IMAP and SMTP provide this functionality as part of their use of SASL, and LDAP provides these capabilities separately.

However, this means these credentials would need to be available to the client application. Using these credentials with additional privileges (proxy authorization) over the user’s credentials constitutes privilege escalation, hopefully followed by de-escalation, and is therefore generally considered bad practice, especially in the context of services exposed to the Internet.

Central Authentication Service?

Another option is the Central Authentication Service (CAS). With such a service, you typically authenticate via a web application, which submits your credentials to CAS for verification. The web application is then issued tokens representing the validity of your session. When the web application in question needs to authenticate to another service as you, it submits the temporary credentials associated with the specific session, and therefore allows the third party application to validate the tokens in question as being a part of an existing session (again via CAS), operating under conditions to be expected — i.e. Roundcube connecting to Cyrus IMAP. Contrary to popular belief, this isn’t necessarily limited to just web services doing so, and can perfectly well be applied to non-web service endpoints as well.

However, a full desktop client for IMAP, such as Kontact, would not necessarily understand the need to consult a web service to obtain tokens that are then valid against IMAP or SMTP. Neither IMAP nor SMTP (or, actually, SASL) itself provides an authentication mechanism through which a full desktop client can be told to facilitate such a type of external authentication.

OTP for Two-Factor Authentication

Therein lies the problem with OTP as well. There is currently no mechanism by which IMAP or SMTP (again, actually SASL) can be told to authenticate against a website first. While an OTP can be required, no server-side SASL implementation I’m aware of allows either of these to occur through a centralized authentication service. Feel free to leave your recommendations in the comments.

“But, …”, you might say, “… does not Google implement an XOAUTH2 mechanism?” or “Is there not RFC 7628?” — to which the answer is “Yes, but …”.

Cyrus SASL does not currently support OAUTHBEARER nor XOAUTH2 mechanisms. Alternatives like gSASL do not do so either. No full desktop client software that I’m aware of supports it — feel free to leave your recommendations in the comments below. Kolab does not currently support it, in that it is not an OAuth or OAuth2 provider.

Several other mechanisms achieve a similar relaying of the authentication process outside of the IMAP or SMTP connection — SAML20 and OPENID20 come to mind, both of which are implemented in GNU’s SASL. For what it’s worth, Kolab is not a SAML or OpenID provider currently, either.

I could, if I wanted, break these down into work units — enhancement tickets if you will. I have, in fact, opted to include an OAuth provider in my new PACK project for this reason among others. Either, though, is slightly beyond the scope of this post (albeit not this blog).

Identity Management

A third option is viable only for fully integrated Identity Management solutions, where FreeIPA comes to mind. With FreeIPA, we may consider the Kerberos ticket issued valid, optionally only after a valid second factor has also been submitted to obtain the ticket. Relaying the authorization is a bit of a pain, but we’ve managed in a Proof-of-Concept implementation for which I did the work, and the patches against Roundcube and the Kolab-specific authentication plugins have been accepted. The deployment scenario under which this is considered a viable option, however, is rather limiting. You do not typically deploy a Kerberos environment on the Internet in the first place (while work is ongoing to make that a more reasonable approach), and even then, a single client operating system (on to which the user logs in and from which applications get launched) is also typically associated with a sole provider environment. Long story short, I associate this typically with a single environment within a single organization, rather than a hosted or even colocated Kolab Groupware deployment.


While no one particular conclusion is set in stone, several options result in various combinations of units of work. We could protect access using OTP, and subsequently allow only the web client to be used by those who feel all access must be protected with OTP.

We could use CAS to further reduce the number of points where copies of credentials need to be saved off as those points require access to other services as you, while protecting privilege escalation vector-based attack surfaces. We could use an OAuth2 provider to negotiate the authorizations for the various applications that Kolab includes.

Comments and ideas welcome, as I feel we’re (I am) breaking ground on new territory.

Sun, 2015-10-18 14:56

I now have a bit more information on what it is I will be achieving, and I wanted to share the roadmap and horizon of the project I’m undertaking.

The software development project is called PACK — a portal or panel for administration of Kolab Groupware. For our current most important deployment, Kolab Now, this involves customers and accounting features. This would include, for example, a Web Administration Panel and a Customer Control Panel.

For the purpose of this intermezzo, we’ll simply state that some parts of the overall application suite address different needs for different audiences in different deployments. To illustrate, an on-premises installation likely has little need for third parties to be able to autonomously register themselves as customers. However, depending on the internal accounting practices of such organization, a department may need to fork over part of its budget to the IT department for providing them with collaboration services.

That being said, I’m in for quite a lengthy and complex project. On the horizon may be a rather complete encapsulation of quite a few too many things to include in a first few milestones.

As I’ve mentioned before, I’m following my own learning curve. There’s some troublesome areas in Flask and its extensions, where modelling of the applications to include multiple features based on multiple extensions is tricky. One such example is including localization (l10n), internationalization (i18n), themes, assets and caching in to one application.

Furthermore, the extent to which we wish to include features makes it very difficult to come up with a data model and database design that encapsulates everything.

So, where do we get started? Well, the use-cases that are top of mind are limited to those we have for Kolab Now and Kolab Enterprise. This means we let individual users register or sign up, and subscribe to entitlements on product offerings. This includes a nice little range of offerings, such as:

  • Knowledge base articles, some of which may be open to the public, some of which may be open for logged in users with no further entitlements on any product, some of which may be available only to users with particular entitlements. These articles form an extra on top of any amount of entitlements.
  • Subscription Management, for those on-premises installations of Kolab Groupware with long-term support and add-on software channel entitlements.
  • Individual User Accounts, which are the equivalent of the current singular groupware account.
  • Group Manager Accounts, where you register with a domain name you own, and get to create however many accounts you wish.
  • Colocated Kolab deployments, where you register with a domain name you own, and in return you get your own, managed installation of Kolab Groupware — with the intention to allow for a lot more customization and integration compared to the two former hosted account types.

Compared to how you sign up for a hosted Kolab account today, these individual entitlements should be separate from your base account. By this I mean that you should be able to create an account that holds no entitlements and requires no payment, and then subscribe to entitlements, some of which may require payment right away or at some point in the future. This departs from the current registration process, where you subscribe directly to one entitlement and one entitlement only, and each entitlement you register for is a separate account.

To illustrate why we part with that process, we only have to look at a family situation. A couple of parents may wish to use Kolab Groupware, but would logically like to avoid two different administration accounts being issued two different invoices for the two different accounts.

Another reason to part with the current process of registration directly to individual entitlements is to encourage users to try different Kolab offerings before they commit to buying either of the forms that Kolab comes in. This immediately draws the need to be able to clearly distinguish between an account on the access portal and the accounts registered once access to the portal is obtained.

The easiest way to let people “sign up” and explore is to allow them to identify themselves with one of their existing identities — a Twitter, Facebook or Google account is what we have currently included. However, I have since learned that that is not secure enough. I have added on top of that four second-factor authentication mechanisms: passwords, TOTP, HOTP and TAN via SMS. I have since also added registration with email and password, where we would not require you to confirm ownership of or access to the mailbox of the email address you enter before we consider it a valid login name for the portal — remember, we are not issuing any entitlements to this initial registration.

We intend to then let you restrict access to your account further by at least adding the second factor, and once you subscribe to product entitlements, switch over to that new account as the canonical account for access to the portal. This scenario would be applicable should you opt for, say, an individual account, to which you could switch over the primary login, and void the third-party access. For support subscription entitlements however, this would not apply equally as much — since you would not have another account with Kolab Now to switch over to.

Suffice it to say I have learned too much to continue the Flask Mega-Tutorial in the same way I originally envisioned I would.

I’m going to have to scratch my head for a little while, to figure out what it is I want, need and require, and perhaps start over with the mega-tutorial, taking into account the various things I’m going to be doing, and the seemingly iterative process of restructuring the example code layouts.

Timotheus Pokorra
Thu, 2015-10-15 10:12

There has been a new release of Roundcube:

I noticed that because EPEL already has version 1.1.3, but Kolab 3.4 Updates still has 1.1.2. Now there is an installation conflict, because yum wants to use the EPEL version, but that leads to other conflicts.

A temporary solution is to exclude all roundcubemail* packages in the EPEL repo file:

sed -i "s#enabled=1#enabled=1\nexclude=roundcubemail*#g" /etc/yum.repos.d/epel.repo

The proper solution is to upgrade the roundcubemail package for Kolab 3.4 Updates on OBS.

I was slightly confused about which tarball to use, and Daniel Hoffend aka dhoffend helped me out:

  1. Go to
  2. Get the commit id for release 1.1.3: 357cd5103d1c27f8416ef316c4a4c31588db45b8
  3. git clone
    cd roundcubemail
    git checkout -b newrelease 357cd5103d1c27f8416ef316c4a4c31588db45b8
    git archive --prefix=roundcubemail-1.1.3/ HEAD | gzip -c > ../roundcubemail-1.1.3.tar.gz

To test the new package, download this repo file:

yum install yum-utils
yum-config-manager --add-repo
yum update

The updated package will hopefully arrive in Kolab 3.4 Updates within the next few days.

Tue, 2015-10-13 09:23

In a previous article I described how we implemented client-side encryption in Roundcube using Mailvelope. There’s another approach to encryption: the Enigma plugin. It implements all the functionality using server-side GnuPG software. So, the big difference between the two is that Mailvelope keeps your keys in the browser, while Enigma stores them on the server. In its current state, however, Enigma has a lot more features.

Installation and settings

To use Enigma, just enable it as any other plugin. Then, in Preferences > Settings > Encryption you’ll see a set of options that let you enable/disable encryption-related features.

NOTE: As keys are stored on the server, make sure the directory used as storage has proper permissions; it is also a good idea to move it out of any location accessible from the web (even if secured by .htaccess rules).

Figure 1. Encryption preferences section.

Key management

To manage your keys, go to Settings > PGP Keys. There you can generate a new key pair or import keys. See the following screenshots for more details.

Figure 2. Key generation form.

Figure 3. Key information frame.

Composing messages

On the message compose screen a new toolbar button is added, with a popup where you can decide whether the message has to be signed and/or encrypted. The behaviour and the icon are slightly different from the ones used for the Mailvelope functionality. Also, note that we did not change the compose screen in any way, so all standard features like responses and spellchecking work as usual.

Figure 4. Encryption options in compose.


You can find the Enigma plugin code in Roundcube 1.0 and 1.1, but only the version in Roundcube 1.2 (current git-master) is usable. I put a lot of work into this plugin and I hope it will find its users. Whether this solution will be extended with S/MIME or other features in future versions depends on you. The current state is described in the plugin README file.

Sat, 2015-10-10 14:25

The most valuable feature of the upcoming Roundcube 1.2 release is PGP encryption support. There are two independent solutions for this: the Enigma plugin and Mailvelope. In this article I’ll describe what we achieved with Mailvelope. The integration code was mostly written by Thomas Brüderli and only slightly improved/fixed by me.

It looks like Mailvelope is the best (if not the only) solution for encryption in a web browser. It’s based on OpenPGP.js, an implementation of PGP encryption in JavaScript. Mailvelope is distributed as Chrome and Firefox extensions. It supports some email services like Gmail, and it also provides an API for developers, and this is the way we decided to integrate it with Roundcube.

Mailvelope installation

For more info, go to the Mailvelope documentation. To have it working with Roundcube, install the extension in your browser, then go to your Roundcube webmail and, using Mailvelope’s “Add” button, add your page to the list of mail providers. One last required step is to enable API use on the provider edit page.

Compose an encrypted message

If Roundcube detects an enabled Mailvelope, a new button will appear in the compose toolbar. It may be disabled in HTML mode, so switch to plain text. If you click it, the Mailvelope frame will appear. There you can write your message and add attachments. As you will notice on the screenshot, some features are disabled; unfortunately, at the moment we cannot do much more with the Mailvelope textarea. Note: to send an encrypted message you first have to import/generate a private key in Mailvelope settings.

Figure 1. Message compose screen with enabled encryption frame.

When you try to send a mail to an address for which no public key was found in the Mailvelope database (keyring), you will be given the possibility to search public key servers and import the keys.

Figure 2. Key search result in compose.

Preview an encrypted message

Also, in message preview, Mailvelope will add its frame containing the decrypted text and attachments. You’ll be prompted for the key passphrase when needed.

Figure 3. Encrypted message preview.


Unfortunately, this is far from complete. The Mailvelope API is very limited at the moment: it does not support signing and signature verification, and access to the encryption frame is limited. There are also some bugs. Currently you can only send and receive simple encrypted messages (attachments are supported).

You can track progress and read about the issues in this ticket.

Fri, 2015-10-09 14:30

Welcome back to my mega-tutorial on Flask. If you’re following along with Part I and Part II, you should already have a minimal Flask application that doesn’t really do anything meaningful. While I had said before that this part would be all about testing, you may think “Why? It doesn’t do anything!”. That’s correct.

It’s time to appreciate the requirements on the application. You may or may not have an extensive comprehension of some of the things it may need to do — I did when I got started, but what precisely the application needed to do was not yet clear.

What the application is going to need to be able to do, and how the application will go about achieving exactly that is subject to specification. Clear specification of the requirements, as well as clear specification of how to achieve those targets.

This part revolves around testing, but I know I’m going to need to be more specific — it is about unit testing, and arguably also functional testing, with the help of fixtures.

We’ll be using Python’s unittest in combination with Flask-Fixtures. We’ll also use the fixtures to provide your application with some content, so it is not as dull outside of testing — the tests first wipe the database, then populate the database, but tear down and drop all tables from the database when they are done.

First things first, I propose you put your tests in the directory named, well, err… tests/. A simple naming convention to ensure tests are ordered correctly is to name the individual files in a pattern starting with test_, such as for example tests/

This is the first give-away: test your schema to ensure it functions as you expect it to. This means that when you expect an ON DELETE CASCADE to function such that the associated items are removed when you remove the referred item, you test this.

So let’s start a database model. Again, I cannot emphasize enough how important it is to get your requirements sorted. In our example, we’re going to have items and one owner per item.

The following is an example of ppppp/db/ Note that I would normally put different classes into different files inside a model/ subdirectory, which is especially useful when database models grow larger.

This enables us to import the database and model in ./, so let’s do that:

If you were to run ./ now, you’ll find tracebacks about the database not yet having been created, which is correct:

$ python -c 'from ppppp.db import db; db.create_all()'

This should resolve that problem. However your database is empty. This is where fixtures come in.

Create tests/fixtures/owners.json, and put in it:

Create tests/fixtures/items.json, and put in it:
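The contents of the two fixture files were lost from this post. With Flask-Fixtures, owners.json might look like the following (the table and column names are assumptions; the records match the output shown further down):

```json
[
  {
    "table": "owner",
    "records": [
      {"id": 1, "name": "John"},
      {"id": 2, "name": "Jane"}
    ]
  }
]
```

and items.json, referencing the owners by their primary keys:

```json
[
  {
    "table": "item",
    "records": [
      {"id": 1, "name": "Watch", "owner_id": 1},
      {"id": 2, "name": "Clock", "owner_id": 2}
    ]
  }
]
```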

Now, we’re going to switch over ./ to use the Flask-Script extension:

And we can now run: ./ load_fixtures. This will drop the database tables and re-create them, and add our fixtures.

You should now also use ./ runserver to run the actual webserver, and when you do, you should see:

{
  "Clock": "Jane",
  "Watch": "John"
}

Now we can start testing.

Again, a nice little naming convention to ensure tests are executed in order may be to name them test_$x_$subject, as follows:
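The test file itself was not preserved in this post. The following is a stdlib-only approximation of tests/ the original used Flask-Fixtures to load the JSON files, whereas this sketch creates and populates an in-memory SQLite database directly in setUp(); the names and data are assumptions.

```python
# Stdlib-only approximation of tests/ -- the original
# used Flask-Fixtures for loading the fixtures; plain sqlite3 stands in
# here so the ordering and cascade checks can be shown end to end.
import sqlite3
import unittest

OWNERS = [(1, 'John'), (2, 'Jane')]
ITEMS = [(1, 'Watch', 1), (2, 'Clock', 2)]


class TestSchema(unittest.TestCase):
    def setUp(self):
        # a fresh in-memory database per test: wipe, create, populate
        self.db = sqlite3.connect(':memory:')
        self.db.execute('PRAGMA foreign_keys = ON')
        self.db.execute(
            'CREATE TABLE owner (id INTEGER PRIMARY KEY, name TEXT NOT NULL)')
        self.db.execute(
            'CREATE TABLE item (id INTEGER PRIMARY KEY, name TEXT NOT NULL, '
            'owner_id INTEGER NOT NULL REFERENCES owner(id) ON DELETE CASCADE)')
        self.db.executemany('INSERT INTO owner VALUES (?, ?)', OWNERS)
        self.db.executemany('INSERT INTO item VALUES (?, ?, ?)', ITEMS)

    def tearDown(self):
        # drop everything when done
        self.db.close()

    def test_001_owners(self):
        count = self.db.execute('SELECT COUNT(*) FROM owner').fetchone()[0]
        self.assertEqual(count, len(OWNERS))

    def test_002_items(self):
        count = self.db.execute('SELECT COUNT(*) FROM item').fetchone()[0]
        self.assertEqual(count, len(ITEMS))

    def test_003_owner_delete_cascade(self):
        # deleting Jane (id 2) must also delete Jane's item, the Clock
        self.db.execute('DELETE FROM owner WHERE id = 2')
        left = self.db.execute(
            'SELECT COUNT(*) FROM item WHERE owner_id = 2').fetchone()[0]
        self.assertEqual(left, 0)


if __name__ == '__main__':
    unittest.main(verbosity=2, exit=False)
```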

Let’s review what it is we’re verifying:

In test_001_owners(), we verify the tests/fixtures/owners.json file was loaded successfully, and completely. Boring, but necessary.

In test_002_items(), we verify the tests/fixtures/items.json file was loaded successfully and completely. Boring, but even more necessary than the test for tests/fixtures/owners.json, because of the foreign key restrictions.

In test_003_owner_delete_cascade(), however, we test the real deal: is the item deleted when the owner is? It is important to realize that writing just this one test would fail to pass the mark: an owner Jane could simply have no items, because the schema, the fixtures or even the tests could have been written wrong.

Running the tests would result in:

$ venv/bin/nosetests -v tests
test_001_owners (test_000_schema.TestSchema) ... ok
test_002_items (test_000_schema.TestSchema) ... ok
test_003_owner_delete_cascade (test_000_schema.TestSchema) ... ok

----------------------------------------------------------------------
Ran 3 tests in 2.018s

OK


Hence verifying we have specified the ON DELETE CASCADE part of the schema correctly.

At this point, I would suggest inserting more tests: create a duplicate entry John or Jane, set a unique constraint, and so on.

However, we cannot test the web application in full. Adding a test such as:

will fail with a 404 Not Found error. Why? Because the app we are using does not use the routes we have in ./

Here’s where that ppppp/web/ directory comes in. In effect, ./ only needs a very minimal configuration of the application, so that it can load fixtures and do other such stuff, but more importantly, so that it can run what is otherwise the app. It could import that app from some location, so that other code can also import it from that same location. We’re going to be using ppppp/web/ for this:

And we make it so it can be imported sanely (i.e. without having to expose everything), through ppppp/web/

Now we can dumb down ./ quite a bit:

The tests we can now write include interaction with the application, without having to copy routes and other such application logic across all tests.

Fri, 2015-10-09 09:28

Drag’n’drop is a nice feature on the desktop, and so it is in desktop-like web applications. The average user likes this feature and uses it. That said, we already have some drag’n’drop capabilities in Roundcube:

  1. Dragging messages to folders (in messages list view) – copy/move action.
  2. Dragging folders to folders (in Preferences > Folders) – move action.
  3. Dragging contacts to groups/sources (in Addressbook) – copy/move/assign to group action.
  4. Dropping files from desktop to compose attachments.
  5. As of yesterday it is also possible to drag’n’drop attachments from mail preview to compose window.
  6. If you use Kolab, you can actually also drop files from desktop into Calendar, Tasks and Files.
  7. You can also re-arrange the columns of the messages list using the drag’n’drop technique.

These should work in most web browsers (in their recent versions). The question is: can we have more of this? E.g. wouldn’t it be nice to drag attachments or messages from the browser to the desktop? Or messages to compose attachments? Well, it would, but it’s not so simple…

I recently investigated what recent web standards and browsers provide in this regard. It does not look good. The standard way of doing drag’n’drop is the DataTransfer object and a set of defined events, plus the HTML attribute ‘draggable’. Unfortunately, there’s no standard way of dropping a file onto the desktop. So, what options do we have:

  1. Chrome browser supports its own DownloadURL parameter of DataTransfer, but even Chromium does not use it.
  2. In Firefox you can drag a link, which on Windows will create a file link (not what we want), but under Linux (KDE) it can actually download a real file. Unfortunately, this works only with public files; it does not work with session-based apps (such as Roundcube). We’d need to implement something like one-time public URIs to attachments.
  3. I didn’t find any information about other browsers.

So, as you can see, there’s not much we can do today. There’s another issue: this would not work anyway with Roundcube widgets that implement their own “internal” drag’n’drop, e.g. the messages list. Also, there’s no standard for dragging many resources at a time, so we cannot replace our “internal” implementation.

Thu, 2015-10-08 12:11

JMAP is a JSON-based API for synchronizing a mail client with a mail server.

As you may be aware, Kolab is a lot more than just a mail server, and in our endeavours to bring you the next-generation experience for collaboration, JMAP is a very interesting candidate for our web client — you may know it as Roundcube Next.

As with so many things on the leading edge of software development, figuring out how to run an environment suitable for development can be cumbersome. The particular case in point is that no IMAP server currently supports the JMAP protocol natively, although work is ongoing to change that.

Our friends over at FastMail have developed a JMAP proxy in Perl, currently deemed the most complete implementation, but it is a bit of a hassle to get it up and running.

So, in order to save us significant chunks of time and not have us run in circles as much, I have created a Docker image. This first iteration is large, non-optimized and uses Perl’s CPAN quite a bit too much for comfort, but I can show that it works.

First, you need to pull in the image:

$ docker pull kolab/jmap-proxy

Second, you need to run it:

$ docker run -d -p 80:80 kolab/jmap-proxy

Under the working assumption that you can now reach, you should therefore also have a JMAP proxy up and running.


Thu, 2015-10-08 11:46

I just committed a small Roundcube feature that adds a date interval selector to the search options popup. An IMAP search query using the BEFORE and SINCE keywords will be generated from the selected interval option.
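For example, a “last week” style interval might translate into an IMAP SEARCH command along these lines (the dates here are purely illustrative):

```
a1 SEARCH SINCE 16-Nov-2015 BEFORE 24-Nov-2015
```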

See it on the screenshot.


On the screenshot you can also notice another small feature that will be part of Roundcube 1.2: a messages list page selector, which you can use to jump directly to a specified page.

Tue, 2015-10-06 12:55

Kolab’s file storage component, named Chwala, has from the beginning listed all folders of type ‘file’ available to the user; the subscription state of a folder was ignored. This changed today: from now on, the Chwala API returns only subscribed folders, the same as the other components.

Of course, the user is able to subscribe or unsubscribe a folder. Together with this, folder filtering and searching have been implemented. So, you can now quickly filter the displayed folders (which is useful when you have many of them). You can also search for unsubscribed folders and subscribe to them (i.e. add them to the permanent folders list). The search is smart enough to search by folder name and user name, so searching in other users’ namespaces also works.

Key-press filtering is implemented “in the browser”, so it works fast. To search for unsubscribed folders (server-side) you have to press the Enter key, exactly the same as the folder search in other components.

Figure 1. Folders list with the new header and search icon.
search folders - step 1
Figure 2. Searching for unsubscribed folders.
search folders - step 2
Figure 3. Subscribed folder added to the list.
search folders - step 3

For the moment this only works with Kolab storage. The other storage drivers supported by Chwala, i.e. WebDAV and SeaFile, do not support subscriptions. In the future we may implement this as well.

Mon, 2015-10-05 19:01

Everybody who knows me knows I hate to run in circles. Suffering from a repetition syndrome is what awaits everyone who’s ignoring those nifty tricks that make their lives more convenient.

Over the course of the years, I’ve met many people that use GIT too frequently not to have their setup include the convenience of a GIT prompt for their shell - and yet it doesn’t. To put things in perspective, this includes system administrators, but also software developers. Amazing, really. Astonishing, frankly.

Rather than convincing people individually through show-and-tell, and walking those people using the holding-hands-spelling-it-out “teaching” mechanism, and partly also because I promised in Part I of A Flask Mega-Tutorial, here’s how I set up mine.

Some might call it boring, but I’m a bash shell user, so you’ll need some software package or other for bash completion, which in the world’s best distribution is called bash-completion.

In your ~/.bashrc, include the following snippet somewhere near the end (note this modifies your prompt only minimally compared to the default, and does precisely that on purpose):
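The snippet itself was lost from this post; here is a guess at what it would look like, based on the prompt behaviour described in the steps below (the source path is the Fedora/RHEL location for, which ships with git itself; adjust for your distribution):

```shell
# -- ships with git; on Fedora/RHEL it lives here:
source /usr/share/git-core/contrib/completion/

export GIT_PS1_SHOWDIRTYSTATE=1        # '*' unstaged, '+' staged, '#' before the initial commit
export GIT_PS1_SHOWUNTRACKEDFILES=1    # '%' untracked files
export GIT_PS1_SHOWSTASHSTATE=1        # '$' stashed changes
export GIT_PS1_SHOWUPSTREAM="verbose"  # 'u=' in sync, 'u+2-1' ahead 2 / behind 1

# the default prompt, with the GIT status appended inside the brackets
PS1='[\u@\h \W$(__git_ps1 " (%s)")]\$ '
```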

Reload your ~/.bashrc by typing:

$ . ~/.bashrc

Yes, WordPress’s and/or this theme’s markup for preformatted text is the worst.

Now, here’s what I figured we would do:

  1. Create a GIT repository some place:

    Your prompt now shows a # character to indicate there’s a GIT repository, but nothing’s there yet.

  2. Let’s create an initial commit:

    Your prompt now shows a clean repository.

  3. Touch a file, add it, and commit it:

    Nothing too fancy there, but a % for untracked files, a + for newly tracked files not yet committed, and back to just the branch when it’s all good again.

  4. Add a remote, track it and fetch it:

    Having no commits in common is expected in this case, I just used one of the gists. I’m lazy like that ;-)

    Note though how it indicates I’m ahead two commits, and behind one.

  5. Rebase on top of the tracking remote:

    And here we see we are +1 commit ahead of the remote branch we’re tracking. I’m not going to push this to keep my gist intact, but what I can do:

  6. Reset. Reset. Reset.

    And what I have = up-to-date with my tracking remote.
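The command listings for the steps above were lost from this post; here is a sketch of what the first three steps look like with such a prompt (output abridged, repository name made up):

```shell
[kanarip@kanarip ~]$ mkdir somerepo.git && cd somerepo.git
[kanarip@kanarip somerepo.git]$ git init
Initialized empty Git repository in /home/kanarip/somerepo.git/.git/
[kanarip@kanarip somerepo.git (master #)]$ git commit --allow-empty -m "Initial commit"
[kanarip@kanarip somerepo.git (master)]$ touch somefile
[kanarip@kanarip somerepo.git (master %)]$ git add somefile
[kanarip@kanarip somerepo.git (master +)]$ git commit -m "Add somefile"
[kanarip@kanarip somerepo.git (master)]$
```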

Sun, 2015-10-04 15:19

TL;DR: Do not use OAuth to “Sign in with…” without a second factor.

OAuth is the mechanism with which a third party (a “client” or “App”) can be delegated a level of authority on an account (the “first party”, most commonly you) with an OAuth provider (the “second party”). This usually includes allowing an app access to some of your user information, and sometimes, to post on your behalf.

With a web application that uses OAuth to authenticate the user with the help of a third party, you — that is, the user being authenticated — have to take the following into account:

  1. The third party is the party that controls the authentication and authorization process for the user,
  2. The third party issues the second party, that application you are authenticating against, with the necessary tokens, both the public and the private one,
  3. The third party sends the second party the yay or nay on the authentication and authorization for the user accounts.

As such, it is an exploit path. Not an attack surface, not an attack vector, but right-out an exploit path.

Given the kind of knowledge stored with a third party, and their ability to just fake it, there’s literally no way to stop those third parties from abusing that power and gaining access to the second party account you have just been “authorized” to use.

I’ll back up for a second, because it isn’t all that bad — it’s bad, just not that bad. The title of this blog post could therefore be considered misleading, because “Thou Shalt Not Use OAuth … to sign in with third party websites without a second factor” would have been more accurate. Also, by no means is the website itself absolved from all responsibility, but there’s billions of them and only one of you — so it’s easier to start with you.

Like I said, OAuth is supposed to give third party applications a level of access to the information you have given the OAuth provider and/or permission to act, with the OAuth provider, on your behalf. This includes, for example, this blog post getting spread via Twitter, Facebook and Google+. It is I who has authorized and authorizes WordPress to post such messages — on my behalf — and for this WordPress uses OAuth and becomes an “app” in my respective accounts. So far, so good.

The real problem is “Sign in with…” functionality on websites. It lowers the barrier for a website significantly, allowing a visitor to sign in with an existing identity, and thus without that visitor needing to go through a separate registration process. In that, it is genius. However, it is also a sole authentication factor originating and ending with a single source. Not so bad when you want to comment on a blog post, but very bad if your level of privileges on the website using OAuth includes entitlements to services and information.

To further illustrate, your OAuth providers (Facebook, Google, Twitter, Dropbox, Github, Weibo, Douban, QQ, Linkedin) can already act on your behalf if a website only requires you to “prove” your identity using OAuth. For places where the level of privileges directly associated with it is very, very limited — like leaving a comment on a blog post, perhaps — this isn’t too bad. But as a login to a password vault, you can see it’s very, very bad.

In short: we are considering adding OAuth-based authentication to Kolab Now, in a new software development project you’ll hear more about later. One intention is to lower the barrier for sign-up as far as we reasonably can. We’re working under the assumption that everyone has a Twitter, Facebook, Google, Github, Linkedin, Weibo, QQ, Dropbox or Douban account they could sign in with initially, to get to exploring products and services. And in case you do not, don’t worry: a more traditional email address / username and password sign-up should also be available, as would an SMS-based solution we have yet to design.

It is now clear (to us) that no further privileges or entitlements can be assigned to an account that authenticates solely through a third party OAuth provider. We will either require a second factor authentication token, or suggest you switch your login account to the first product or service you purchase or subscribe to when you log in.

I hope it is also clear (to you) what exactly could be the results of using “Sign in with…” on websites, when used without a second factor.

PS: The fallacy of using one-time passwords is to be addressed later.

Sat, 2015-10-03 12:41

It's always fun when your remote colleagues come to visit the office. It helps communication to put a face to the name in the chat client - and to the voice on the phone.

Giles, our creative director, was visiting from London during the first days of the week, which made a lot of the work switch context to design and usability. As Giles is fairly new in the company, we also spent some time discussing a few of our internal processes and procedures with him. It is great to have him onboard to fill a previously not-so-investigated space with his broad experience.

The server development team kept themselves busy with a few Roundcube issues, and with a few issues that we had in the new KolabNow dashboard. Additionally, work was done on the Roundcube-Next POC. We hope to have something to show on that front soon.

On the desktop side, we finalized the sprint 201539 and delivered a new version of Kontact on Windows and on Linux. The Windows installer is named Kontact-E14-2015-10-02-12-35.exe, and as always it is available on our mirror.

This Sunday our datacenter is doing some maintenance. They do not expect any interruption, but be prepared for a bit of connection trouble on Sunday night.

On to the future..

Thu, 2015-10-01 19:35

I’ve been planning to tell you more about Flask. I’ve mentioned this is an application in development, and that I was following my own learning curve. I have to admit I’ve slightly outpaced the mega-tutorial series with that learning curve. I was planning on posting one part of the mega-tutorial about every week or so, but I’m falling behind, and so I’m speeding up. I can only hope to keep up the pace in posting new parts.

A weekend of hacking has taught me much more than I had imagined, and a couple of hours at work yet some more; a number of sessions with colleagues has outlined and coloured in some of the scope and functionality of the application.

I meanwhile have an actual application with Twitter, Facebook and Google OAuth authentication, which, next to the more traditional email address and password, we might include as an option for the initial signup and login.

I also have localized and internationalized content, in such fashion that function results are cached, and entire pages are cached, but you won’t notice when you change your language preferences — or when I fake the country of origin to test the associated currency exchange rate routines, for that matter. I would appreciate opinions on how annoying, taking your preferred locale into account, a website displaying the date and time in the incorrect format is for people. I, for one, read most of the web in United States English, but I am European. The US notation putting the month in front of the day (i.e. today is 10/01/2015) makes for some easy mistakes.

While I had Part II in draft even before I published Part I, this is not the case for Part III — all about testing. I have my work cut out for me! I will want more of your input along the way, and I’ll call these intermezzos.

Thu, 2015-10-01 19:10

In Part I, we set up a virtual environment and installed a few too many modules in it. In this part, we’ll look over the directory hierarchy and start creating an application that can load configuration.

Remember to navigate to that ppppp.git repository of ours, and source venv/bin/activate.

A Directory Structure

The following is a suggested directory structure for the application. While I’m fairly certain this works, I’m open to suggestions on whether it is the best possible layout. Note that in my own application, I’ve not started using Blueprints or Views yet, but I may end up doing so, and I will definitely have an API exposed (possibly RESTful), which is also not included here.

In summary, this includes:

docs/
This directory will hold the Sphinx documentation for your project. Another part of this mega-tutorial may be dedicated to just that.

ppppp/
The application root directory for your application. This is where Flask will search for templates, static contents, themes, translations, and anything you’ll want to address with from ppppp.module import foo.

migrations/
Database migration scripts.

tests/
Your unit and functional tests, as well as your fixtures for those tests.

You will serve and use all of this from a file in the root of the GIT repository, named Let’s get started!

Hello World!

For a first application, we’re going to spit out the infamous “Hello World!” phrase. Note that the shebang is important, and allows the virtual environment to be used as opposed to the system-wide Python installation.

Save this off, make it executable, commit it to GIT, and run it with ./ Then visit the address it says it runs on in the console. You should see your message appear.

Note that syntax errors and other such errors notwithstanding, the application should automatically reload (and do so successfully) when you change any of the already loaded files, as it monitors the filesystem underneath the application for changed files.

Loading Configuration

Now, we’re going to want to load this application with some configuration. We can do this with the app.config object already available. Create an empty file ppppp/ and another one, ppppp/, containing the following:
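The settings file’s contents were not preserved here; a guess at what it might contain, namely plain module-level UPPERCASE names, which is what Flask’s app.config picks up. The values are made up; THEMES_PATH reappears a little further down in this post.

```python
# Hypothetical contents of the settings module -- only UPPERCASE names
# are picked up by Flask's app.config; the values here are made up.
DEBUG = True

# where Flask-Themes will be told to look for themes
THEMES_PATH = 'themes'
```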

On a general note, I tend to use single quotes for strings of which I do not make up the value willy nilly, and double quotes for those that I do.

Note that none of these settings have any effect as of yet.

Now, you can modify such that it loads this configuration file:

from flask import Flask

app = Flask('ppppp')

# Load default configuration (the settings module path is assumed here)
app.config.from_object('ppppp.settings')

This should allow you to substitute your return “Hello World!” for any of those configured items, such as:

@app.route('/')
def root():
    return app.config.get('THEMES_PATH', None)

Your should now look as follows:

We can now make this a multi-faceted, white-label site by also loading configuration from a file contained in an environment variable:


import os

if 'PPPPP_SETTINGS' in os.environ:
    if os.path.isfile(os.environ['PPPPP_SETTINGS']):
        # assumed: load overrides from the file the variable points at
        app.config.from_pyfile(os.environ['PPPPP_SETTINGS'])

Making look as follows:

This is the end of Part II. I mean to address the following topics in future parts:

  • testing, testing, testing
  • rendering pages from templates
  • creating the base layout and extending it
  • error handling
  • the database model
  • rendering pages from themed templates
  • base and themed asset management
  • localization
  • localization of the database contents
  • language and theme selection

UPDATE: When I said I had not yet even drafted Part III on testing, I could not resist getting started. In writing it, though, I discovered some typos. Yey for testing! The typos have obviously been corrected.

Tue, 2015-09-29 10:00

Welcome to what may be the first of many in a series on Flask, which I’ve sunk my teeth into just last weekend. I may note that in operations, I have worked with Python before, and more specifically with some Flask as well. This means I’m familiar with many of the concepts, and yet it is a learning curve.

Inspired by the existing The Flask Mega-Tutorial by Miguel Grinberg, I hope that by the end of this blog post, I have sparked enough interest in following through on this roll, and you can see why I’m doing a mega-tutorial specifically along the lines of my own learning curve as well.

This particular mega-tutorial, deliberately “a” mega-tutorial rather than “the” mega-tutorial, is based on my own learning curve. I have developed a sense of specific needs for my application — which might hypothetically make it to production some day — and so far I’m just spending weekends and evenings on it, partly out of curiosity, and because I have an interest in hacking.

That said, this episode is all about the basics. Get yourself a GIT repository for an application we shall call ppppp (why? ’cause that name doesn’t conflict):

$ mkdir ppppp.git
$ cd ppppp.git
$ git init

I recommend you consider configuring GIT so you get prompt indications similar to mine, but much more on that in another post. Suffice it to illustrate that this is my regular prompt:

[kanarip@kanarip ~]$

And this is my prompt when I navigate in to the project I’m currently working on (with Flask, no surprise there, I suppose):

[kanarip@kanarip someapp.git (master *+%)]$

For the sake of preserving that little horizontal space we have available, though, I shall just continue to refer to the prompt with $.

Add a virtual environment. This creates a directory layout with Python and Pip installed, and some convenience scripts for you to load that environment.

$ virtualenv venv
$ source venv/bin/activate

Note that you should NOT add this virtual environment to your GIT repository. I do keep it inside the GIT repository directory though, because I can add the directory venv/ to .gitignore:

$ echo venv/ >> .gitignore
$ git add .gitignore
$ git commit .gitignore -m "Add .gitignore"

You are now ready to pull in the following packages:

(venv)$ pip install Flask Flask-Assets Flask-Babel \
    Flask-Cache Flask-Fixtures Flask-Migrate \
    Flask-RESTful Flask-Script Flask-SQLAlchemy \
    Flask-SQLAlchemy-Cache Flask-Themes Flask-WTF \
    cssmin jsmin celery country-currencies coverage \
    openexchangerates pycountry pygeoip Sphinx \
    SQLAlchemy-i18n nose

Do I have your attention now?

UPDATE: Part II is now available.

UPDATE^2: In writing Part III on testing, I discovered some typos. Yey for testing!

Tue, 2015-09-29 02:57

Hi there,

This is a first blog post on WordPress, the one blogging platform I could find that does not have an all bad dysfunctional editor. I suppose I could’ve known, usually things are popular for a reason. With the exception of those Jabber-on-Steroids self-help programs, I reckon.

I’ve been using different blogging platforms on-and-off again, including LiveJournal, BlogSpot and Phame, and recently my attention had been drawn to Ghost — as hypothetically the next big thing in blogging. I found it not as good as I had hoped… My first WUT? moment was the size of the installation — I ended up with over 500 MB.

In any case, after quite a pause in blogging, I figured I might pick it up again. I think my last post on BlogSpot was from 2008, I recall having done a few posts from Sydney in 2009 on LiveJournal. I’ve used the site for a few things related to Kolab, but nothing personal. It’s been too long.

So, many of my current projects are software development projects in PHP, Python and Sphinx; there’s some Ruby, Erlang, C, C++ and others, and I want to share more about what’s going on with those — limited to just the interesting and insightful parts of it all, hopefully.

Mon, 2015-09-28 15:51

As we had already started the week the previous Friday night (by shutting off the KolabNow cockpit and starting the big migration), it turned out to be a week all about (the bass) KolabNow.

Over the weekend we made a series of improvements to KolabNow that will improve the overall user experience with:

  • Better performance
  • More stable environment
  • Less downtime
  • Our ability to update the environment with a minimum of interruption for end users.

After the update there were of course a few issues that needed to be tweaked, but details aside, the weekend was a big success. Thanks to the OPS staff for doing the hard work.

One thing we changed with this update was the way users get notified when their accounts are suspended. Before this weekend, users with suspended accounts would still be able to log in and receive mail on KolabNow. After this update, users with suspended accounts will not be able to log in. This of course led to a small breeze of users with suspended accounts contacting support with requests for re-enabling their accounts.

On the development side we were making progress on two fronts:

  • We are getting close to the end of the list of urgent Kontact defects. The second week of this sprint should get rid of that list. Our Desktop people will then get time to look forward again, and look at the next generation of Kolab Desktop Client.
  • We started experimenting with one (of perhaps more to come) POCs for Roundcube-Next. We now need to start talking about the technologies and ideas behind that new product. More to follow on that.

Thank you for your interest - if you are still reading. :-)

Fri, 2015-09-18 15:48

Another week passed by; super fast, as we all know: time runs fast when you’re having fun.

The client developers are on a roll. They have been hacking away at a defined bundle of issues in Korganizer and Zanshin, which have been annoying users and have prevented some organizations from adopting the desktop client. This work will proceed during the next sprint - and most probably the sprint after that.

One of our collaborative editing developers took part in the ODF plugfest. According to his report, a lot of good experiences were had, a lot of contacts were made, and there was good feedback on the plans for the deep Kolab/Manticore integration.

Our OPS people were busy most of the week with preparations for this weekend's big KolabNow update. This is a needed overhaul of our background systems and software. As we now have the new hardware in place, and it has been running its test circles around itself, we can finally start applying many of the improvements that we have prepared for some time. This weekend is very much a backend update, but an important one, which will make it easier for us to apply more changes in the future with a minimal amount of interruption.

All y'all have a nice weekend now..

Mon, 2015-09-14 02:00

We just published updates to both stable versions 1.0 and 1.1
after fixing many minor bugs and ensuring compatibility with upstream
versions of 3rd party libraries used in Roundcube. Version 1.0.7 comes
with cherry-picked fixes from the more recent version to ensure proper
long term support.

See the full changelog here.

Both versions are considered stable and we recommend updating all
production installations of Roundcube to either of these versions.
Download them from

As usual, don’t forget to backup your data before updating!