Today, one of our customers at TBits.net noticed that when composing an e-mail in Roundcube, there is a spell-checker button at the top, but only for English. When there was text in the e-mail and he clicked the spell-check button, he got the message: “An error was encountered on the server. […]

Elastic: Core functionality covered

by alec in Kolabian at 15:20, Thursday, 23 November

Still in need of a graphic designer! I’m finishing work on core functionality for the new Roundcube theme. Almost all of the current functionality is now implemented in Elastic, including core plugins. In this post I’m providing information about some recent updates (with some screenshots). Yesterday I finished the quota widget. And […]

Here comes a quick overview of recent updates to Kolab 16 packages. Please note: I am only using public information. I have no background knowledge about the releases. In the past days, the following packages have been updated: cyrus-imapd: https://obs.kolabsys.com/request/show/2144, “Prevent unreadable/unwriteable /dev/null from getting in the way” kolab-autoconf: https://obs.kolabsys.com/request/show/2143, “Check in version 1.3” kolab: […]

Here comes a quick overview of recent updates to Kolab 16 packages. Please note: I am only using public information. I have no background knowledge about the releases. Today, the roundcubemail package has been updated from version 1.3.0.41 to 1.3.3. This fixes a security issue, described in https://roundcube.net/news/2017/11/08/security-updates-1.3.3-1.2.7-and-1.1.10. Also see the release notes https://roundcube.net/news/2017/09/04/update-1.3.1-released and https://roundcube.net/news/2017/10/31/update-1.3.2-released […]

Error feedback in Kube

by Christian Mollekopf in Kolab Now at 11:21, Wednesday, 08 November

One of the most frustrating user experiences is when you get an error message that you can’t do anything about, or even worse, that you don’t even understand. While it may very well be that the error message is entirely justified, wouldn’t it be great if the system didn’t just tell you that something is wrong, […]

iOS IMAP Connection Issues Resolved

by Jeroen van Meeuwen in Kolab Now at 10:14, Tuesday, 07 November

In the last few days, I’ve spent hours and hours configuring accounts on my private iPhone 6 running iOS 11 in order to nail down where its connection issues originate. I’m happy to be able to tell you I seem to have nailed it down, and there’s a quick fix for it that […]

Last week in Kube

by cmollekopf in Finding New Ways… at 11:03, Tuesday, 24 October

Ooops, skipped a couple of weeks. Development did not stop, though; there was some infrastructure work to be done, and therefore fewer user-visible changes.

Temporarily reverted the commit to demonstrate incremental query performance improvements.

Kube Commits, Sink Commits

Previous updates

More information on the Kolab Now blog!

“Kube is a modern communication and collaboration client built with QtQuick on top of a high performance, low resource usage core. It provides online and offline access to all your mail, contacts, calendars, notes, todo’s and more. With a strong focus on usability, the team works with designers and UX experts from the ground up, to build a product that is not only visually appealing but also a joy to use.”

For more info, head over to: kube.kde.org


The beginning of an HTML editor

by Christian Mollekopf in Kolab Now at 11:00, Tuesday, 24 October

Kube recently gained a very basic HTML editor. It only does bold and italic so far, but it’s a start towards a more fully featured editor. Of course you can also just continue to write plain-text mails if you prefer. Enjoy =) For more info about Kube, please head over to About Kube.

Elastic: Contact data presentation

by alec in Kolabian at 15:37, Monday, 23 October

Contact data presentation, in read-only as well as in editable form, is the most complicated core feature of Roundcube. There are elements like photos, titles, fieldsets, groups and “composite” inputs. So, it took a while to implement for the new skin. Here’s the result. The flow is similar to Larry’s, but I (and Bootstrap) added […]

Support Kube through Kolab Now for fun and profit!

by Christian Mollekopf in Kolab Now at 14:21, Monday, 23 October

If you ever wanted to try Kube, a great way to do so is with a Kolab Now account. You will not only reap the benefits of using a secure, Switzerland-hosted open source service, but you will also help sustain the development of Kube. Kube now sports a sign-up link, allowing you to sign […]

Assessing the state of Kube with Buildbot

by Christian Mollekopf in Kolab Now at 13:42, Monday, 23 October

Over the past weeks I’ve been busy improving our continuous integration (CI) system for Kube. In particular, I’ve set up a Buildbot instance to build, test, benchmark and deliver Kube. Buildbot is a nifty CI “framework”. It’s not so much a CI “system” as it really isn’t much more than a framework to build […]

Kube flatpak, now slightly slimmer

by Christian Mollekopf in Kolab Now at 12:21, Friday, 20 October

While working on our flatpak build infrastructure I noticed that the flatpak is much larger than it should be. After digging into the flatpak, one of the culprits was quickly found: a bunch of unnecessary files, such as include files, had been left in. After making sure those are removed, the flatpak is now roughly 60MB […]

Incident Report: Backend Down

by Jeroen van Meeuwen in Kolab Now at 11:24, Tuesday, 17 October

Earlier this morning, at 04:38 UTC, one out of the twenty-two IMAP backends in production stopped serving its mail spool, showing Input/Output errors on its disk. Our Standard Operating Procedure is to examine log files, flush vm caches, stop the virtual machine, and start it back up again. This occurred at 05:48 UTC. The IMAP […]

Junk Email Filter.com is Junk

by Jeroen van Meeuwen in Kolab Now at 08:59, Monday, 16 October

We’re dropping our use of junkemailfilter.com “Spam DNS Lists”, because we have few positive experiences with it. Frankly, it is Junk. A service such as Junk Email Filter is supposed to use response values to regular DNS queries that allow services such as Kolab Now to gain some insight into the reputation of sender […]

Update on the Kube flatpak nvidia driver issue

by Christian Mollekopf in Kolab Now at 11:00, Thursday, 12 October

After some investigation into the nvidia driver issue, the flatpak has now been rebuilt based on the org.freedesktop.Platform runtime. A limitation of flatpak currently prevents it from making the appropriate driver available from any other runtime. As a nice side effect, the overall size of the flatpak has been reduced by removing some unnecessary parts. If […]

Announcing Service Windows: Implementing 2FA

by Mads Petersen in Kolab Now at 16:11, Wednesday, 11 October

As we recently announced, we have pursued an opt-in second-factor authentication feature on Kolab Now. As described, the implementation limits users to the web client, and this requires some reconfiguration of various servers and services. To be able to configure our external-facing services (IMAP, POP, Managesieve, Submission, CalDAV, CardDAV, WebDAV and ActiveSync) […]

Kube flatpak not working with nvidia drivers

by Christian Mollekopf in Kolab Now at 12:36, Wednesday, 11 October

It was recently pointed out that the Kube flatpak for Kolab Now does not work with nvidia drivers, leaving you with error messages like this: libGL error: No matching fbConfigs or visuals found libGL error: failed to load driver: swrast Unrecognized OpenGL version Unrecognized OpenGL version As it turns out this is a problem with the […]

Here comes a quick overview of recent updates to Kolab 16 packages. Please note: I am only using public information. I have no background knowledge about the releases. In the past weeks, the kolab-autoconf package has been updated from version 0.1 to 0.2. This affects Debian/Ubuntu and RHEL/CentOS. For details see: kolab-autoconf in OBS and […]

Pushing Support to our Enterprise Customer Support Platform

by Jeroen van Meeuwen in Kolab Now at 16:05, Tuesday, 10 October

We are preparing a transition from our current platform underpinning the support@kolabnow.com email address — based on OTRS — to a more up-to-date, feature-rich environment based on Phabricator. For various reasons, we very much anticipate this change, not least because we’ll have real-time notifications about new tickets and user responses, and a few chat rooms […]

TOTP-based Two Factor Authentication Passed QA

by Jeroen van Meeuwen in Kolab Now at 12:30, Tuesday, 10 October

In a previous blog post, I told you about our experiments with TOTP-based two-factor authentication. It has proven functional in the Cockpit and in the Web Client, so we’re preparing the promotion to production. The promotion to production will require some reconfiguration of various pieces of infrastructure, not least the cockpit and […]

A Stricter DMARC Policy, Part II

by Jeroen van Meeuwen in Kolab Now at 10:29, Monday, 09 October

Last month, we let you know a stricter DMARC policy was being applied to Kolab Now infrastructure. With a primary aim to increase our reputation and decrease phishing attempts from clearly false senders, we’ve since learned about some secondary effects. A concept known as “alignment” is taken very strictly. For some reason, the specification for […]

I Apologize for the Delay

by Jeroen van Meeuwen in Kolab Now at 12:30, Saturday, 07 October

If you’ve noticed our responses to support tickets or monitoring alerts are a little slower than usual, that’s because this is now the view from our office: In reality though, our responses are better and faster — despite the view. Our Knowledge Base (specific to Kolab Now) helps a great deal, but we never have […]

Journey to the Center of Kube: Sink

by Christian Mollekopf in Kolab Now at 14:50, Thursday, 05 October

Kube is a client that allows you to work offline, so you can work no matter whether your train just entered a tunnel, you’re on board a plane, or you’re just too lazy to get up and ask for the free wifi password. One implication of this is that we have to deal with […]

Kube: Finding our Focus

by cmollekopf in Finding New Ways… at 14:44, Thursday, 05 October

Over the past two years we’ve been laying the groundwork towards the first goal of a simple and beautiful mail client.
By now we have the first few releases out, and the closer we get to our goal, the less clear it becomes what the next goal on our roadmap is.

So here’s something that we’ll be focusing on: Kolab Now. An obvious reason why we picked Kolab Now is that it is what sustains the larger part of the Kube team, allowing us to work on this project in the first place. However, it’s also a prime example of a completely open source and standards-compliant service. Improving the Kolab experience means improving IMAP support, improving the CardDAV implementation, perhaps even adding CalDAV. It also means implementing proper GPG support, and pushing the user experience edge by edge to where we expect it to be. These are all things that standards-compliant services in general will benefit from. Being essentially the reference installation of Kolab, the Kolab Now service ensures we can focus on the relevant problems by taking variables out of the equation.

Now, this means that we’ll be putting a little more focus on the single-account experience; it does not mean we’ll be dropping support for multi-account setups. The develop branch (which will lead to the next release) will continue to support multiple accounts and account types. What we will do, though, is acknowledge that very little testing happens with services other than Kolab, and that we will probably not prioritize features that are exclusive to other services (such as GMail’s non-standard IMAP behavior) in the near future. It’s about focus, not exclusion.

There are many other goals ahead, of course; that’s not the problem. Various platforms to be conquered, CalDAV access to our calendaring data, perfecting the mail experience, a beautiful calendar view, working out the grand scheme of how we tie all these bits together and produce something unique… Lots of exciting stuff that we’re looking forward to working on!

However, it’s also easy to get lost in all those possibilities. It’d be easy to hack together half-baked implementations of a variety of those ideas, and then either revise those implementations or just pick the next bit. But that doesn’t lead to what we want. We want a product that is actually used and just works, and that requires focus. Especially since we’re a small team, it’s more important than ever that we maintain, if not increase, our focus. Kolab Now gives us something to focus on.

Kube for Kolab Now

With that said, I’d like to announce the Kolab Now edition of Kube, that we’ve made available as an early access release.

Kube’s simplified account setup for Kolab Now.

This is a completely separate release stream that supports Kolab Now exclusively, and does not replace the general-purpose Kube releases. It is not a separate codebase, though (for simplicity there exists a kolabnow release branch with a two-line patch, but that’s all there ever will be).

We’ll regularly update this release to share our latest developments with you.

If you already are a Kolab Now user, or would like to become one, then you’re welcome to join us on our journey to bring the best possible Kube experience to your desktop. You’re not only going to profit from a great service, but you’ll also help sustain the development of Kube.

For future updates, keep an eye on blogs.kolabnow.com


Experimenting with TOTP Two Factor Authentication

by Jeroen van Meeuwen in Kolab Now at 11:15, Tuesday, 03 October

We’re currently experimenting with an implementation of TOTP-based two-factor authentication, allowing our customers to use a second factor. Until now, Kolab Now required its users to supply a username and a password. This is considered only a single factor, since the username is your email address and thus known to third parties. User accounts […]

A customer of TBits.net asked about notifications: you set up a filter in Roundcube, and you will be notified whenever an email arrives, with the notification sent to an email address that you check more regularly. It is somewhat like forwarding the message, except that the notification does not contain the message itself, to force […]

Last week in Kube

by cmollekopf in Finding New Ways… at 09:16, Tuesday, 26 September

“Kube is a modern communication and collaboration client built with QtQuick on top of a high performance, low resource usage core. It provides online and offline access to all your mail, contacts, calendars, notes, todo’s and more. With a strong focus on usability, the team works with designers and UX experts from the ground up, to build a product that is not only visually appealing but also a joy to use.”

For more info, head over to: kube.kde.org

  • Added support for secret distribution to resources in sink. This will be the base for avoiding passwords in configuration files.
  • Added a simple in-memory keyring in Kube. Doesn’t persist the password yet as we have no secure storage.
  • Simplified the configuration dialog to only require name + email address
  • Moved the password entry into a separate view so we can request the password on startup if not available.
  • Fixed keyboard focus in configuration and password view.
  • Fixed indefinitely growing webviews. This was a problem with mails that would always grow one pixel larger than the available space, which led to a resize cycle.

Kube Commits, Sink Commits

Previous updates


A Stricter DMARC Policy

by Jeroen van Meeuwen in Kolab Now at 02:43, Tuesday, 26 September

Sometimes, we receive reports that our general reputation has declined to the point that certain receiving parties will block some of the email sent through our infrastructure, and that bothers us — because it bothers our customers. This usually involves just a limited number of messages, but is annoying nonetheless. Other times we receive […]

Improving Collaborative Editing

by Jeroen van Meeuwen in Kolab Now at 17:42, Sunday, 24 September

While many of our customers have used collaborative editing, there’s certainly one aspect that could be improved: we’re seeing a lot of pending invitations to editing sessions that apparently do not reach the intended recipient. When a session owner invites somebody else to the collaborative editing session, it isn’t really clearly indicated to the invitee […]

Elastic

by alec in Kolabian at 16:07, Saturday, 23 September

It’s not a secret that I’m working on a new Roundcube theme. It’s supposed to provide a responsive interface for desktops, tablets and phones. With the help of Thomas Bruederli, who provided some early mockups, John Jackson, who provided some design ideas, and a few other helping hands, after nine months of work it’s starting […]

Introducing the Kolab Now Knowledge Base, Blogs

by Jeroen van Meeuwen in Kolab Now at 16:52, Friday, 22 September

I’m pleased to announce the Kolab Now Knowledge Base, a library of FAQ articles, documentation relevant for users and customers, and “Learn More” walk-through articles showing you Kolab Now from the inside out. Furthermore, our staff will be able to tell you more about what’s going on within the greater Kolab Now ecosystem through this […]

Optionsbleed: Don’t get your panties in a wad

by kanarip in kanarip at 21:30, Thursday, 21 September

You’re a paranoid schizophrenic if you think optionsbleed affects you in any meaningful way beyond what you should have already been aware of, unless you run systems with multiple tenants that upload their own crap to document roots and you’ll happily serve as-is, yet pretend to provide your customers with security; this is a use-after-free… Continue reading Optionsbleed: Don’t get your panties in a wad

Kube in Randa

by cmollekopf in Finding New Ways… at 00:15, Wednesday, 20 September

I’ve spent the last few days with fellow KDE hackers in beautiful Randa in the Swiss mountains.
It’s an annual event that focuses on a specific topic every year, and this time accessibility was up, so Michael and I made our way up here to improve Kube in that direction (and to enjoy the scenic surroundings, of course).

I have to admit we didn’t spend all our time on accessibility, since we’ve recently done a big push in that direction already with keyboard navigation and translation preparations. Anyhow, here’s an overview of what we’ve worked on:

Query performance

While it might not be a showstopper, there is an annoyance in Kube: if you initially sync a lot of mail (my INBOX alone contains 40k messages), this not only uses a lot of CPU in the synchronizer process (which is expected), but also in the Kube client process, which seems odd at first.

The reason for this is the live queries, which update themselves whenever the resource signals that a new revision is available (a new revision could be a new mail or a modification, for instance). Usually, when we process an update, we do so incrementally. If you just received a new mail or two, or something has been marked as read, that is also the sensible thing to do. It’s a comparatively tiny amount of work, and we really want incremental changes in the result anyway, otherwise we’d mess up the client’s UI state (selection, scroll position, …).

However, when we fetch 40k mails that also means we get 40k updates that we have to process and it’s surprisingly hard to avoid that:

  • We don’t know in advance that there will be 40k updates, so we have to process the updates initially.
  • We can’t really skip any updates, because any update might affect what we have in the view. We just don’t know until we’ve already processed it.
  • We can’t just fall back to redoing the full query because we need incremental changes, otherwise you’d see a blinking UI until the sync is done.
  • We don’t want to just stop updating until the sync is done, because in most cases you already have useful data to work with, so this is really a background task, and we don’t want to block all updates.

If you end up processing the larger part of the data set as incremental updates, it naturally becomes really expensive, because you can’t leverage the indexes that made queries efficient in the first place. So this is not something that can just be optimized away.

This means some middle ground has to be struck that allows for updates, but doesn’t max the application out because it’s updating so fast.

A possible approach to fix this would be the following (a small sketch of the diffing step follows the list):

  • Set some threshold when it becomes cheaper to redo the query instead of processing the updates.
  • If the threshold is reached, redo the query from scratch.
  • Diff the current results with the redone results so we can still update the UI incrementally.
  • Apply some event compression to the revision updates, so we land in the optimized path quicker.
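
To make the diffing step concrete, here is a minimal sketch in QML-flavored JavaScript (the real logic would live in Sink’s C++ query layer; all names here are invented for illustration):

    import QtQml 2.2

    QtObject {
        // Compare the old result list with the redone one and emit
        // incremental changes, so the client UI keeps its selection
        // and scroll position.
        function diffResults(oldIds, newIds) {
            var changes = { added: [], removed: [] };
            var inNew = {};
            newIds.forEach(function (id) { inNew[id] = true; });
            var inOld = {};
            oldIds.forEach(function (id) {
                inOld[id] = true;
                if (!inNew[id])
                    changes.removed.push(id); // entry dropped from the new result
            });
            newIds.forEach(function (id) {
                if (!inOld[id])
                    changes.added.push(id); // entry newly appearing in the result
            });
            return changes;
        }
    }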

Given that this really isn’t a huge problem right now, since the application does remain responsive (querying is done in a separate thread), and this will only ever happen when you sync a large folder for the first time, I won’t spend a lot of time on it for now. The most important part was figuring out why this problem appeared in the first place, and understanding its reasons and implications. This will allow us to attack it properly once we get to it.

HTML composer

Kube’s composer is currently a simple text field where you can type plain text. This is obviously not the end goal.
To explore our options for providing a more advanced composer, we tried prototyping a simple HTML editor.
Unfortunately our experiments showed that QML isn’t really up to that yet. The official example falls flat on its face when you try it (it simply fails except in a few specific cases), and the “API” to do any sort of formatting is more a hack than anything else.

Granted, the API docs of the TextArea also state that you can’t modify the internal document. In other words: no HTML composer for us.
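
For illustration, here is roughly the kind of experiment that falls over; a minimal sketch (not our actual prototype), where inserting markup at the cursor is about the only “formatting API” available:

    import QtQuick 2.7
    import QtQuick.Controls 2.2

    Column {
        Button {
            text: "Bold"
            // There is no supported QML API to apply formatting to the
            // current selection; hacks like this one only work in a few
            // specific cases.
            onClicked: editor.insert(editor.cursorPosition, "<b>bold</b>")
        }
        TextArea {
            id: editor
            width: 400
            textFormat: TextEdit.RichText // renders HTML, but editing it is another story
        }
    }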

The options we have left are:

  • Some custom text-editing component (sounds like a lot of work, probably is).
  • Try to hack around TextArea’s deficiencies until we find some magic combination that works.
  • Use a WebView and implement everything in JavaScript (now that’s just sad).

For now the item is postponed.

GPG Support

We’ve scoped out what we want to reach for initial GPG support in Kube. While we already have read-only GPG support (we can decrypt mails and verify signatures), there is some more work to be done before we can also sign and encrypt messages we’re sending. The focus will be on keeping things simple to get the job done, not on building the most advanced GPG interface ever.

Things that will happen are:

  • Key selection for the account; the selected key will then be used for encryption.
  • Key setup for new accounts that don’t have a key yet (either by importing an existing key or creating a new one).
  • An encryption option in the composer that depends on the availability of keys for all recipients.
  • Key selection for users in the addressbook.

All of it can be found on Phabricator.

Cross platform builds

Since we had some cross platform building experts on site, I explored some of our options:

  • Cross compiling for Windows with MXE.
  • Building on Windows and OSX using craft.
  • Building with some custom scripts on Windows.
  • Building with macports on OSX.
  • Building with brew on OSX.

Although I had mixed feelings about craft from back in the days when I used emerge (the craft predecessor) to build Kontact on Windows, it still looks like the most sensible option.

Cross-compiling using MXE seems OK-ish for Windows; I had some success with parts of the stack, and the rest will just be the grunt work of making tarballs and writing build definition files. At least that way you don’t have to work on a Windows system. However, I’ve heard that the compiled end result will just not be as good as with native compilers, so perhaps it’s not worth trading some build convenience for a sub-par end result.
Also, writing the definition files is pretty much exactly the same as for craft, and MXE would be for Windows only.

So, looks like I’ll be doing another round with craft sometime soonish.

Plasma phone

It kinda works.

Bhushan Shah quickly whipped up a package for the Plasma Phone and installed it, which allowed us to see Kube running on a mobile device for the first time, which felt surprisingly cool 😉

There clearly are some rough edges, such as the UI being completely unsuitable for that form factor (who would have thought…), and a bug in one of the QML components that breaks folder selection on a touch screen. Still, it was nice to see =)

Input validation

To improve the UX of forms in Kube, or more specifically the configuration dialog, we wanted to give the user some feedback on which fields need to be filled out. Michael implemented a neat little visualization that marks invalid input fields and gently fades away once the input becomes valid.

So far we only validate that fields are not empty, but that can be changed by assigning new validators that e.g. check that an email address field indeed contains a valid email address.
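
As a sketch of how such a pluggable check could look in QML (illustrative only, not Kube’s actual component):

    import QtQuick 2.7
    import QtQuick.Controls 2.2

    TextField {
        id: field

        // Pluggable check; swap in e.g. a proper email-address validator.
        property var isValid: function (text) { return text.length > 0 }

        // Marks the field while its input is invalid and gently fades
        // away once the input becomes valid.
        Rectangle {
            anchors.fill: parent
            color: "transparent"
            border.color: "red"
            border.width: 2
            opacity: field.isValid(field.text) ? 0 : 1
            Behavior on opacity { NumberAnimation { duration: 300 } }
        }
    }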

Recap

It was once again a very productive week, with lots of good food and a couple of fun hikes in between to clear the head.
Besides the usual work going on, it’s invaluable to have an exchange with other people who just might have already solved the problem you’re going to spend the next few weeks on. So in many ways this is not only about a concentrated work effort, but also about professional training and exchange, and that is only possible when you’re sitting together with those people.

I’d like to thank the whole team that organized the sprint, took care of us with great food and lots of chocolate, and generally just made our stay in Randa great once more.

If you made it all the way down here and skipped the fundraiser at the top, please consider donating something; it’s the only way we can keep doing meetings like this, and they are invaluable for us as a community.


Last week in Kube

by cmollekopf in Finding New Ways… at 00:20, Monday, 18 September

“Kube is a modern communication and collaboration client built with QtQuick on top of a high performance, low resource usage core. It provides online and offline access to all your mail, contacts, calendars, notes, todo’s and more. With a strong focus on usability, the team works with designers and UX experts from the ground up, to build a product that is not only visually appealing but also a joy to use.”

For more info, head over to: kube.kde.org

  • We now synchronize both contacts and drafts when opening the composer. The former are required for autocompletion, the latter to show your drafts.
  • A maillist filter bar to replace the currently defunct search.
  • We now synchronize the currently selected folder every 5 minutes.
  • Bumped the Qt requirement to 5.9 so we can finally use the focus-stealing prevention of the webengineview.
  • Automatically launch into the account configuration if no accounts are set up.
  • A single-account mode for deployments that support only a single account. This can currently only be activated with a switch in the QML code.
  • Prototyped a simple HTML editor; unfortunately the QML TextArea API is just not there yet, and the end result ended up buggy. Postponed for now.
  • Worked out a plan for GPG-based encryption that is now available as Phabricator tickets for the next milestone.
  • Improved input form validation and feedback.
  • Did some work towards improving a performance bottleneck in live queries when a lot (thousands) of updates are coming in (such as during the initial sync). Not quite there yet though.
  • Did some cross-compilation experiments with MXE for Windows and without MXE for Android. WIP.
  • Witnessed Kube on the Plasma Phone. This will require an adapted UI but generally seems to actually work.
  • Fixed some layout issues where some TextAreas wouldn’t resize with the rest of the UI.

Kube Commits, Sink Commits

Previous updates


Last week in Kube

by cmollekopf in Finding New Ways… at 21:08, Monday, 11 September

“Kube is a modern communication and collaboration client built with QtQuick on top of a high performance, low resource usage core. It provides online and offline access to all your mail, contacts, calendars, notes, todo’s and more. With a strong focus on usability, the team works with designers and UX experts from the ground up, to build a product that is not only visually appealing but also a joy to use.”

For more info, head over to: kube.kde.org

  • Fixed scrolling behaviour with a mouse wheel (it used to be very slow).
  • Prepared the 0.4 release.
  • Fixed RE: prefixing for replies.
  • Fixed an issue where searching for removed entities would result in a query going through all uids instead of only the ones of a specific type. As this touched storage, a “sinksh upgrade” will be required to clean up.
  • Added some missing icons that were symlink targets.
  • Inverted colors on the folderview scrollbar to match that area of the UI.
  • Fixed layout issues with long attachment lists in the composer.

Kube Commits, Sink Commits

Previous updates


Kolab Now: Disruptions this Weekend

by kanarip in kanarip at 20:22, Saturday, 09 September

Some of you, very few of you in fact, may have noticed short-lived disruptions to Kolab Now services over the course of this weekend. This impacts < 1% of our users, really. Symptoms may include your client having been disconnected, and maybe asking you to confirm your password. This is inconvenient, but it has… Continue reading Kolab Now: Disruptions this Weekend

Kolab Now Really Beta (DevOps Edition)

by kanarip in kanarip at 18:54, Friday, 08 September

This week, I accidentally made Kolab Now Beta really beta — though pre-alpha more than beta, strictly speaking — completely intentionally; Oops, I #devops'ed https://t.co/zp9ncO0w1k — Kolab Operations (@kolabops) September 5, 2017 I can now proudly announce it runs off of otherwise public GIT source repositories directly, and the developers working on the projects involved… Continue reading Kolab Now Really Beta (DevOps Edition)

Recently I was in a situation where I needed to manage users in Kolab from PHP. There is an API for the Kolab Webadmin, and it is documented here: https://docs.kolab.org/architecture-and-design/kolab-wap-api.html There is also a PHP class that I could have used: https://cgit.kolab.org/webadmin/tree/lib/kolab_client_api.php. But for some reason, I am using CURL. It took me some time […]

Here comes a quick overview of recent updates to Kolab 16 packages. Please note: I am only using public information. I have no background knowledge about the releases. In the past days, the roundcubemail-plugin-contextmenu package has been updated from version 2.1.1 to 2.3. This affects Debian/Ubuntu and RHEL/CentOS. For details see: roundcubemail-plugin-contextmenu in OBS and […]

Event start date not in the recurrence pattern?

by alec in Kolabian at 13:56, Wednesday, 06 September

It came from Outlook users, but we’ve found out it’s indeed a real issue. For example, assume it is Thursday and you create a new event that recurs weekly on Fridays. Did you ever want such an event to occur on the day it was created? I think if you want Fridays that’s […]

Last week in Kube

by cmollekopf in Finding New Ways… at 20:46, Sunday, 03 September

“Kube is a modern communication and collaboration client built with QtQuick on top of a high performance, low resource usage core. It provides online and offline access to all your mail, contacts, calendars, notes, todo’s and more. With a strong focus on usability, the team works with designers and UX experts from the ground up, to build a product that is not only visually appealing but also a joy to use.”

For more info, head over to: kube.kde.org

  • Improved connected status tracking in IMAP resource. The IMAP resource now correctly goes into offline status if an operation times out.
  • Introduced a ConnectionLost error when a connection times out that Kube can display.
  • Fixed a couple of threading issues in Sink. One could lead to a deadlock on shutdown of the synchronizer process, which resulted in synchronizer processes not dying.
  • Fixed account status monitoring when creating a new account. Previously the account status would not include newly created resources, which broke account status monitoring when creating a new account without restarting the application afterwards.
  • The scrollbars are now properly hidden if the content is smaller than the container (nothing to scroll).
  • Colored the colorbar indicating the signature state according to that state.
  • Added a tooltip to the signature state bar providing some basic info. This is a stub until we have a proper UI element for that.

Kube Commits, Sink Commits

Previous updates


Performance Testing w/ Fedora Help(*2)

by kanarip in kanarip at 16:17, Sunday, 03 September

In the next couple of weeks or so, we’ll be executing performance testing of Kolab on OpenPower in one of the world’s largest testing facilities. How do we do this? With help of Fedora(^2). Part I: The Data Set A good performance test requires a good data set. In the particular set of tests, we… Continue reading Performance Testing w/ Fedora Help(*2)

Here comes a quick overview of recent updates to Kolab 16 packages. Please note: I am only using public information. I have no background knowledge about the releases. In the past days, the erlang package has been built for Debian Jessie for Plesk, so that version 18.3.4 will be available there. It only affects Debian Jessie […]

Kolab for Open Power

by kanarip in kanarip at 20:44, Wednesday, 30 August

Among a variety of deliberations concerning the security and transparency of a little Kolab thing running anywhere — at home, rented space or hybrid cloud — this post is about the transparency of the hardware layer, and our ongoing efforts to make that so. We have said what, why and how on LWN, at events… Continue reading Kolab for Open Power

Here comes a quick overview of recent updates to Kolab 16 packages. Please note: I am only using public information. I have no background knowledge about the releases. In the past days, the roundcubemail package has been updated from version 1.2 to 1.3. It affects both RHEL/CentOS and Debian/Ubuntu. Other packages have been rebuilt due […]

Last week in Kube

by cmollekopf in Finding New Ways… at 21:14, Saturday, 26 August

“Kube is a modern communication and collaboration client built with QtQuick on top of a high performance, low resource usage core. It provides online and offline access to all your mail, contacts, calendars, notes, todo’s and more. With a strong focus on usability, the team works with designers and UX experts from the ground up, to build a product that is not only visually appealing but also a joy to use.”

For more info, head over to: kube.kde.org

  • WebEngineProfile no longer blocks Kube on exit. It looks like some webengine code ends up in a deadlock when installing a QWebEngineUrlRequestInterceptor on the default profile from the main thread. It was solved by creating a custom WebEngineProfile as a custom QML element.
  • Fixed scrolling issue where a slow loading mail would result in the positioning code interfering with user scrolling. This fixes email positioning in the conversation view.
  • Made scrollbars always visible. This is to help people that use the mouse to grab the scrollbar handle for scrolling.
  • Fixed some account config corner cases and improved user feedback when saving changes.
  • Large CMake cleanup to remove duplication and to clarify what settings we set.
  • Made sure all tests pass again, cleaned up testsuite.
  • Fixed encoding issue when replying to mail that would mangle some utf-8 chars.
  • Fixed font sizes so the same size is applied throughout the application. This resolves some scaling issues we had on some devices.
  • Changed connected/disconnected detection so resources that have no known status yet turn up as disconnected.

Kube Commits, Sink Commits

Previous updates

The 0.4 release seems within reach now =)


Here comes a quick overview of recent updates to Kolab 16 packages. Please note: I am only using public information. I have no background knowledge about the releases. I did not report on Updates for Kolab 16 for quite a while. But now that we are finally on a production server with Kolab 16 for […]

Progress on Kube

by cmollekopf in Finding New Ways… at 07:46, Sunday, 20 August

A lot has happened since the last release, so let me bring you up to speed on what is cooking for the 0.4 release.
We’ve been mostly focusing on ironing out UX problems all over the place. It turns out that when writing desktop applications using QtQuick, you end up with a lot of details to figure out for yourself.

Kube Components

We noticed that we end up modifying most components (buttons, listviews, treeviews, …), so we ended up “subclassing” (as far as that exists in QML) most of them. In some cases this is just to consistently set some default options that we would otherwise have to duplicate; in some cases it’s about styling, where we have to replace the default styling either for purely visual reasons (to make it pretty) or for additional functionality (proper focus indicators).
In some cases it’s even behavioral, as in the scrolling case you’ll see later on.

In any case, it’s very well worth it to create your own components as soon as you realize you can’t live with the defaults (or rely on the defaults of a framework like Kirigami), because you’ll have a much easier time maintaining consistency and improving existing components, and you’ll generally just end up with much cleaner code.
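
As a minimal sketch of what such a “subclassed” component can look like (names and styling are invented; Kube’s actual components differ), here is a project-wide button that bakes in a consistent default and a keyboard-focus indicator:

    import QtQuick 2.7
    import QtQuick.Controls 2.2

    // Hypothetical file: KubeButton.qml
    Button {
        id: root

        // A consistent default we would otherwise repeat at every usage site.
        focusPolicy: Qt.StrongFocus

        // Additional functionality: a visible keyboard-focus indicator.
        background: Rectangle {
            color: root.down ? "#bdc3c7" : "#ecf0f1"
            border.color: root.activeFocus ? "#3daee9" : "transparent"
            border.width: 2
        }
    }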

Scrolling

One of the first issues to tackle was the scrolling behavior. Scrolling is mostly implemented in Flickable, though e.g. QtQuick.Controls.ScrollView overrides its behavior to provide a more desktop-like scroll feeling. The problem is that Flickable’s flicking behavior is absolutely unusable on a desktop system. It depends a lot on your input devices; with some high-precision trackpads it apparently does alright, but in general it’s just designed for touch interaction.

Problems include:

  • Way too fast scrolling speed.
  • The flicking goes on way too long and is only stoppable by scrolling in the opposite direction (at least with my trackpad and mouse).
  • Difficulties in fine positioning, e.g. in a listview: scrolling is generally already too fast, and sometimes the view just dashes off.

These problems are unfortunately not solvable by somehow configuring the Flickable (believe me, I’ve tried), so what we ended up doing is overriding its behavior. This is done using a MouseArea that we overlay with the flickable (ScrollHelper.qml), in which we manually control the scrolling position.

This is a very similar approach to what QtQuick.Controls.ScrollView does, and what Kirigami does as well for some of its components.

It’s not perfect and apparently doesn’t yet play nicely with some mice, as fine-tuning is difficult across input devices. There is a variety of high- and low-precision devices: some give pixel deltas (absolute positioning), some give angle deltas (a sort of tick), and some of course give both and don’t tell you which to use. What seems to work best is to convert both into absolute pixel deltas and then just use either of the values (preferably the pixel delta, it seems). That gives you roughly the behavior you get in e.g. a browser, which works nicely IMO.
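
A heavily simplified sketch of the idea (the real ScrollHelper.qml handles more cases; the names and the conversion factor here are illustrative):

    import QtQuick 2.7

    MouseArea {
        // The Flickable this helper is overlaid on, assumed to be set by
        // the surrounding component.
        property Flickable flickable

        anchors.fill: flickable
        acceptedButtons: Qt.NoButton // only intercept wheel events, let clicks through

        onWheel: {
            // Prefer precise pixel deltas where the device provides them,
            // otherwise roughly convert angle deltas (eighths of a degree,
            // one wheel tick being 120 units).
            var delta = wheel.pixelDelta.y !== 0
                    ? wheel.pixelDelta.y
                    : wheel.angleDelta.y / 120 * 20; // illustrative: 20px per tick
            var newY = flickable.contentY - delta;
            // Clamp to the scrollable range instead of flicking past it.
            flickable.contentY = Math.max(0, Math.min(newY,
                flickable.contentHeight - flickable.height));
        }
    }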

For most components this was fortunately easy to add, since we already had custom components for them, so we could just add the ScrollHelper there.
For others, like the TreeView, it was a bit more involved. The reason is that the TreeView itself is already a ScrollView, which not only implements a different scrolling behavior, but also brings its own scrollbars, which look different from what we’re using everywhere else. The solution ended up being to wrap it in another Flickable so we can use our usual approach. Not pretty, but the fewer components we have that implement the same thing in yet another way, the better.

Focus visualization

As I started to look into keyboard navigation, the first thing I noticed was that the focus visualization was severely lacking. If you move around the UI by keyboard only, you always need to be able to follow the currently focused item, but many of our components didn’t differentiate between having keyboard focus and being selected, and sometimes lacked a focus visualization altogether. The result was that the focus would randomly vanish, for instance when you focused an already selected element in a listview, or you couldn’t tell whether you had moved the focus to another list item or already selected it.

The result of it all is a highlighting scheme that we have now applied fairly consistently:

  • We have a highlight for selected items.
  • We have a lighter highlight for focus.
  • …and we draw a border around items that have focus but are somehow not suitable for the light highlight. This is typically either because the item is text content (where a highlight would be distracting), or because it is already selected (highlight over highlight doesn’t really work).

Once again we were only able to implement this because we had the necessary components in place.

Keyboard navigation

Next up came keyboard navigation. I had already taken a couple of stabs at this, so I was determined to solve it for good this time. Alas, it wasn’t exactly trivial. The most important thing to remember is that you will need a lot of FocusScopes. FocusScopes are used to componentize the UI into focusable areas that can then have focusable subareas, and so on. This allows your custom-built component, which typically consists of a couple of items, to deal with focus in its own little domain, without worrying about the rest of the application. It’s quite a bit of manual work with a lot of experimenting, so it’s best done early in the development process.

The rest is then about juggling the focus and activeFocusOnTab properties to direct the focus to the correct places.
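
A minimal sketch of the pattern (the structure is invented for illustration, not Kube’s actual code):

    import QtQuick 2.7
    import QtQuick.Controls 2.2

    // The scope deals with focus in its own little domain: it forwards
    // focus to one child, and Tab moves between the focusable children.
    FocusScope {
        width: 300; height: 80

        Column {
            anchors.fill: parent
            TextField {
                focus: true            // receives focus when the scope itself is focused
                activeFocusOnTab: true // reachable via Tab
                placeholderText: "Name"
            }
            TextField {
                activeFocusOnTab: true
                placeholderText: "Email address"
            }
        }
    }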

Of course arrow-key navigation still needs to be implemented separately, which is done for all list- and treeviews.

The result of this is that it’s now possible to completely (I think?) navigate Kube by keyboard.

There are some rough edges, like the webview stealing the focus every time it loads something (which we can only fix with Qt 5.9, which is taking its sweet time to become available on my distro), and there is work to be done on shortcuts, but the basics are in place now.

Translations

While working on accessibility stuff, we figured it’s about time we prepared translations as well. We’ll be using Qt-based translations, because they seem good enough and the QML integration of ki18n comes with some unwelcome dependencies. Nothing unsolvable, of course, but the mantra is definitely not to have dependencies that we don’t know what we need them for.

Anyways, Michael went through the codebase and converted all strings to be translatable, and we have a Messages.sh script, so that should be pretty much good to go now. I don’t think we’ll have translations for 0.4 already, but it’s good to have the infrastructure in place.
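
For the curious, Qt-based translation in QML boils down to wrapping user-visible strings (a tiny sketch; the extraction into .ts catalogs is then driven by a Messages.sh-style script around lupdate):

    import QtQuick 2.7
    import QtQuick.Controls 2.2

    Button {
        // qsTr() marks the string for extraction and looks up the
        // installed translation at runtime.
        text: qsTr("Send")
    }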

Copyable labels

Another interesting little challenge arose when we noticed that it’s sometimes convenient to copy some text you see on your screen. It’s actually pretty annoying if you have to manually type off the address you just looked up in the address book. However, if you’re just using QtQuick.Controls2.Label, that’s exactly what you’re going to have to do.

Cursor based selection, as we’re used to from most desktop applications, has a couple of challenges.

  • If you have to implement the cursor/selection handling yourself, it’s actually rather complicated.
  • The text you want to copy is more often than not distributed over a couple of labels that are somehow positioned relative to each other, which makes implementing cursor-based selection even more complicated.
  • Because you’re copying visually distributed labels into a single blob of text, it’s not trivial to turn the result into usable plaintext. We all know the moment you paste something from a website into a text document and it just ends up being an unrecognizable mess.
  • Cursor-based selection is not going to be great with touch interaction (which we’ll want eventually).

The solution we settled on instead is that of selectable items. In essence, a selectable item is a manual grouping of a couple of labels that can be copied at once using a shortcut or a context menu action. This allows the programmer to prepare useful chunks of copyable information (say, an address in an addressbook), and to make sure it ends up with sane formatting, no matter how it’s displayed in the view itself.

The downside of this is of course that you can no longer copy just random bits of the text you see; it’s all or nothing. But since you’re going to paste it into a text editor anyway, that shouldn’t be a big deal. The benefit, and I think this is a genuine improvement, is that you can quickly copy something and always get the same result, and you don’t have to deal with finicky cursor positioning that just missed that one letter again.
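
A minimal sketch of the idea (all names are invented, Kube’s actual component differs, and the clipboard helper is an assumption):

    import QtQuick 2.7
    import QtQuick.Controls 2.2

    Item {
        id: addressItem
        width: content.width
        height: content.height

        // The programmer decides what the copied plaintext looks like,
        // independent of how the labels are laid out visually.
        property string copyableText: nameLabel.text + "\n" + streetLabel.text

        Column {
            id: content
            Label { id: nameLabel; text: "Jane Doe" }
            Label { id: streetLabel; text: "Somestreet 1" }
        }

        MouseArea {
            anchors.fill: parent
            acceptedButtons: Qt.RightButton
            onClicked: contextMenu.open()
        }

        Menu {
            id: contextMenu
            MenuItem {
                text: "Copy"
                // QML has no built-in clipboard API; 'clipboard' is assumed
                // to be a small C++ helper exposed as a context property.
                onClicked: clipboard.setText(addressItem.copyableText)
            }
        }
    }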

Flatpak

The flatpak now actually works! Still not perfect (you need to use --devel), but try it for yourself: Instructions
Thanks to Aleix Pol we should have nightly builds available as well =)

Other changes include:

  • The threading index now merges subthreads once all messages become available. This is necessary to correctly build the threading index if messages are not delivered in order (so there is a missing link between messages). Because we build a persistent threading index when receiving the messages (so we can avoid doing that in memory on every load), we have to detect that case and merge the two subthreads that exist before the missing link becomes available.
  • The conversation view was ported away from QtQuick’s ListView. The ListView was only usable with non-uniformly sized items through a couple of hacks, and it never played well with positioning at the last mail in the conversation. We’re now using a custom solution based on a Flickable + Column + Repeater, which works much better. This means we’re always rendering all mails in a thread, but we had to do that before anyway (otherwise scrolling became impossible), and we could further improve the new solution by only rendering the currently visible mails (at the cost of losing an accurate scrollbar).
  • The email parsing was moved into its own threads. Gpgme is dead slow, so (email) threads containing signatures would visibly stutter (without a signature the parsing takes ~1ms, with one ~150ms, and with encryption we can easily go up to ~1s). With the new code this no longer blocks the view, and multiple mails are parsed in parallel, which makes it nice and snappy.
  • Lots of cleanup and porting to QtQuick.Controls2.
  • Lots of fixes, big and small.

It’s been a busy couple of weeks.

Randa

The annual Randa meeting is coming up and it needs your support! Randa will give us another week of concentrated effort to work on Kube’s keyboard navigation, translation and other interaction issues we still have. These sprints are very valuable for us, and we are in dire need of support to finance the whole endeavour, so any help would be more than welcome: https://www.kde.org/fundraisers/randameetings2017/

Thanks!


What Grey Listing Looks Like

by kanarip in kanarip at 16:40, Saturday, 05 August

In week 30, on a Friday morning, we applied something called Grey Listing. I told you that about a week’s worth of information was needed to analyse the underlying statistics on a per-domain, per-sender basis — but the least I can do is give you a sense of what the statistics are. This will consist of… Continue reading What Grey Listing Looks Like

The 3rd Pillar to Save Your Ass

by kanarip in kanarip at 20:07, Thursday, 03 August

A controversial topic, to say the least, is what happens when you double-click a message in a Roundcube messages listing, while also having enabled the preview pane. Two things to consider: A regular way to use Roundcube is with a preview pane, A regular way to give reading a message more vertical real estate is… Continue reading The 3rd Pillar to Save Your Ass

Kolab Now: Grey Listing Applied

by kanarip in kanarip at 19:02, Thursday, 03 August

Aside from other anti-spam measures, we have applied a concept known as grey listing. Here’s a summary of how grey listing works: When an email delivery attempt is made, we know the sending server’s IP address, the sender address, and the recipient address. If this is a previously unseen combination of facts, the delivery attempt… Continue reading Kolab Now: Grey Listing Applied

Make Kolab Now Beta (+ 3-Column Layouts)

by Paul Brown in Kolab Community - Kolab News at 14:10, Thursday, 03 August

Terrible puns aside, Kolab Now Beta is where you can test drive new features that we are considering for inclusion into Kolab Now.

If you would like to try out new layouts and bleeding edge services, you can access Kolab Now Beta simply by typing "beta." (don't forget the dot) before "kolabnow.com" in the address bar of your browser.

Today we are trying out a new 3-column email layout for wide screens. The 3-column display shows your email folders in the left-most column, the list of messages in the central column, and the body of your selected message in the right-most column.

The 3-column layout makes better use of wide screens, allowing you to see more lines of your messages and more messages in the message list, making it easier to work with large volumes of mail.

You can try the 3-column layout now by accessing Kolab Now Beta and clicking on the gear icon to the left of the Subject column above your message list.

Check out this short video showing the whole process.

DISCLAIMER: The features we test run in Kolab Now Beta are by definition unstable. Although we are pretty sure nothing terrible will happen to your data (it is backed up and secured), we cannot take responsibility if by using Beta your productivity drops, your sanity wanes, or indeed, you accidentally erase something.

Also, Kolab Now Beta is there so you can help us improve Kolab Now. If you come across a bug or have a suggestion to improve a feature, visit us on the Kolab Hub and let us know.

To know more about the things we try out on a regular basis, follow @kolabops on Twitter and strap in, because the ride is pretty wild!

Kolab Now users can start using the Collabora Online suite directly within Kolab Now's web interface as of today. Kolab Now's online office is a webified version of LibreOffice and comes with a word processor, a spreadsheet application and a presentation editor. All are full-featured and, like their offline counterparts, support a very wide range of formats indeed -- ODF of course, but also DOCX, XLSX, PPTX, and literally dozens of other formats.

The documents you create within Kolab Now enjoy the same extreme privacy protection you get for your email, tasks, calendars and contacts. Your data is stored by us, a Swiss company; using open source, peer-reviewed and audited software; developed by some of the most privacy-conscious engineers in the world; and protected by Switzerland's strictest privacy laws.

Kolab Now's office suite allows you to create new documents or upload them from your hard disk and work on them online. You can edit your documents on your own or invite colleagues to work on them with you. One user can start a text document and invite others to a session, and they can all help shape the final text. Several users can work at the same time filling in cells of data on the same spreadsheet.

Write and edit ODF documents directly in your Kolab Now account.

And we're not done yet: among our next challenges is to provide a real-time, embedded messaging service (read "chat") to make working with your colleagues easier and more fluid.


We have the full story on how this came to be here. We have also published a brief HOWTO so you can easily get started and find your way around the interface.

You can try Kolab Now’s online office apps by signing up for a 30-day money back trial subscription.

If you represent a news organisation and would like to learn more or want to try Kolab Now’s office apps for a review in your publication, please contact our press person and request a demo account.

Email Resend (Bounce) feature

by alec in Kolabian at 12:18, Wednesday, 12 July

That’s a feature for power users, I suppose. It allows you to resend an existing email message untouched. So it’s not the same as mail forwarding, but it is similar. Because it’s similar to forwarding, the Resend option is placed in the Forward menu. The bounce does not require a complete mail composing page, so we use a […]

Kolab Now: Another Round of Updates

by kanarip in kanarip at 01:07, Monday, 10 July

This weekend has seen a variety of systems being issued either of, or a combination of, the following commands: yum -y update; yum --enablerepo=kolab-16-updates-testing -y update; puppet agent -t --no-noop; reboot; rm -f /dev/null; mknod -m 666 /dev/null c 1 3. I don’t expect everyone to know and understand what these pieces mean, so I’ll divide… Continue reading Kolab Now: Another Round of Updates

Release of Kube 0.3.1

by cmollekopf in Finding New Ways… at 22:48, Tuesday, 04 July

“Kube is a modern communication and collaboration client built with QtQuick on top of a high performance, low resource usage core. It provides online and offline access to all your mail, contacts, calendars, notes, todo’s and more. With a strong focus on usability, the team works with designers and UX experts from the ground up, to build a product that is not only visually appealing but also a joy to use.”

For more info, head over to: kube.kde.org

Kube 0.3.1 is out the door.

Over the past months we’ve been working hard on turning Kube into something more useful than a pure tech preview, and while Kube 0.3.1 still isn’t anywhere near production-ready, I can finally say that I can use it for most of my email needs.

First, let’s get out of the way what doesn’t work just yet so you know what you can expect.

  • We don’t provide any upgrade path, so with every release it is necessary to run “sinksh upgrade” manually, which currently simply nukes all your local caches (but not the config, so you’ll just have to resync).
  • Email sending is limited to plaintext (we convert HTML mails to plaintext on reply).
  • Passwords are stored in plaintext in configuration files, which may not be acceptable to you.
  • It’s of course also possible that you will experience unpleasant surprises if you use it for any relevant emailing.
  • The scrolling experience depends a lot on your input devices. This seems to be a Qt internal defect that we hope will resolve itself with Qt 5.9.

What we do have though is:

  • A working composer that is right now very basic but has a lot of potential (more about that in a separate blogpost).
  • Support for reading encrypted and signed mails (although the UI is still very basic regarding details on validity etc.).
  • A basic CardDAV based Addressbook.
  • A fairly pleasant reading experience (IMO anyways).

Lots of work also went into the application architecture, UI feedback for user interactions (e.g. progress during synchronization), cleanup of our dependency tree, and the concept of how we will continue to evolve Kube.

The UI team did excellent work on the UI front (surprise!) and has cleaned up the look considerably. Due to the realization that our look depends a lot on the icon theme, we are now shipping an icon theme with Kube (based on Breeze). This will also further our goal of working equally well on a variety of platforms where we cannot assume a specific icon theme to be available.

We have also mostly transitioned to QtQuickControls2; the remaining bits will require Qt 5.9. We then plan on staying on that LTS release for the time being.

Overall, while not quite ready for prime time, this release marks a very important milestone on our road to a production-ready release, and I’m very happy to finally have a lot of issues resolved that have been bothering us for a long time.
I also think that while we’ll still have to make the occasional large change, the codebase can now start to settle, as we have resolved our largest problems.

If you’d like to read more or give it a shot, please head over to kube.kde.org for installation instructions for experimental builds for Fedora 25 and Archlinux.

Tarballs

https://download.kde.org/unstable/sink/0.3.0/src/sink-0.3.0.tar.xz.mirrorlist
https://download.kde.org/unstable/kube/0.3.1/src/kube-0.3.1.tar.xz.mirrorlist

Randa

Unfortunately I can’t attend Akademy this year, but if you’d like to work on Kube, talk to us, have a beer or just go for a hike, make sure to come to Randa for the annual Randa Meetings from 10.09.2017 to 16.09.2017. To register, click here.


Release of KDav2 0.1.0

by cmollekopf in Finding New Ways… at 22:46, Tuesday, 04 July

I’m pleased to announce the release of KDav2 0.1.0.

KDav2 is a KJob based DAV protocol implementation.

KDav2’s major improvement is a vastly reduced dependency chain as we no longer depend on KIO. KDav2 depends only on Qt and KF5CoreAddons.
For more information please refer to the README.
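To give a feel for what “KJob based” means in practice, here is a minimal usage sketch. It assumes the KDav2 API mirrors the original KDav job classes; the header paths, DavCollectionsFetchJob and KDAV2::CalDav are assumptions on my part, not confirmed against the 0.1.0 headers.

// Hedged sketch: fetch the collection list from a CalDAV endpoint.
#include <QCoreApplication>
#include <QUrl>
#include <KJob>
#include <KDAV2/DavUrl>
#include <KDAV2/DavCollectionsFetchJob>

int main(int argc, char **argv)
{
    QCoreApplication app(argc, argv);

    // Point a DavUrl at a (hypothetical) CalDAV endpoint.
    KDAV2::DavUrl davUrl(
        QUrl(QStringLiteral("https://user:pass@example.com/dav/calendars")),
        KDAV2::CalDav);

    // KJob-based: create the job, connect to result(), start it.
    auto job = new KDAV2::DavCollectionsFetchJob(davUrl);
    QObject::connect(job, &KJob::result, [&app](KJob *j) {
        if (j->error()) {
            // handle the error here
        }
        app.quit();
    });
    job->start();

    return app.exec();
}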

While KDav2 is the successor of KDav, KDav will stick around for the time being for Kontact.
KDav and KDav2 are completely coinstallable: library names, namespaces, environment variables etc. have all been adjusted accordingly.

KDav2 is actively used in sink and is fully functional, but we do not yet guarantee API or ABI stability (this will mark the 1.0 release).
If you need API stability rather sooner than later, please do get in touch with me.

If you’d like to help out with KDav2 or have feedback or comments, please use the comment section, the kde-pim mailinglist or the phabricator page.

Tarball

https://download.kde.org/unstable/kdav2/0.1.0/src/kdav2-0.1.0.tar.xz.mirrorlist


Kolab Now: Disabled IMAP Proxy

by kanarip in kanarip at 10:08, Monday, 03 July

Since last weekend’s upgrade of Kolab Now to Kolab 16, some customers have reported IMAP connection and folder synchronization issues. I have therefore elected to bypass the IMAP proxy used to filter groupware folders, basically returning all IMAP client connections to the same behaviour and connection end-points you were accustomed to before. While our tests… Continue reading Kolab Now: Disabled IMAP Proxy

Re: Product Manager vs. Product Owner

by kanarip in kanarip at 00:08, Monday, 03 July

Dear Melissa Perri, I’ve read your article entitled “Product Manager vs. Product Owner” with interest, and I recommend my readers also browse Melissa’s blog for other interesting articles. However, I would like to take this opportunity to respectfully disagree with some of your article’s implied rhetoric and conclusions, and challenge others. In my… Continue reading Re: Product Manager vs. Product Owner

Upgrade of Kolab Now to Kolab 16

by kanarip in kanarip at 16:23, Thursday, 29 June

This weekend, from Sunday 00:00 UTC to Sunday 04:00 UTC, users and customers of Kolab Now may experience intermittent availability of various services. While our data center’s network provider performs standard network maintenance, our staff will upgrade our software and databases, and reconfigure infrastructure to get you to the next generation of collaboration… Continue reading Upgrade of Kolab Now to Kolab 16

Kube’s dependency situation is finally resolved

by cmollekopf in Finding New Ways… at 14:45, Thursday, 29 June

We’ve worked on this for the past two years and have finally reached an acceptable state where we don’t end up pulling in hundreds of packages.

Initial implementations of things like the email message parser and the DAV and IMAP libraries brought in huge dependencies like KIO, DBus and even Akonadi (that would arguably have been resolvable on a packaging level, but… not a great situation anyways). With all that removed, Kube now depends on ~75 fewer packages (the exact number will depend on factors such as your distro’s packaging), and its runtime entanglement is drastically reduced (we no longer need dbus, klauncher, external KIO processes, …).

The new dependency situation looks approximately like this:

  • Qt: (We’ll bump that soon to 5.9 for some Qt Quick Controls2 improvements, and then plan on staying on this LTS release)
  • KIMAP2: Used for IMAP access, depending on KMime and Qt.
  • KDav2: Used for DAV access, depending on Qt.
  • KAsync: A pure Qt library that is extensively used in Sink.
  • KMime: Used for mail parsing, depending on Qt.
  • KContacts: Used for parsing vCard, depending on Qt.
  • lmdb: Our key-value store; a simple C library.
  • flatbuffers: Also part of our storage and a simple self-contained C++ library.
  • QGpgme: Part of gpgme, used for crypto stuff.
  • KCodecs: A tier1 framework used for some parsing tasks.
  • KPackage: A tier2 framework for packaging qml.
  • KCoreAddons: A tier1 framework that we need for KJob.
  • libcurl: Currently used for its SMTP implementation.

And that’s about it. This means we’re now in a situation where each and every dependency that we have is justified and there for a reason. We’re also in a position where most dependencies can be individually replaced should there be a need to do so.
This not only makes me much more confident that we can maintain this system in the long run and makes porting to other platforms feasible, it’s also just a much, much healthier situation to be in for a software project.

In case you are wondering, here’s an incomplete list of packages that we used to depend on and no longer do (based on Fedora 25 packaging, check out rpmreaper to recursively query dependencies):

 grantlee-qt5
 kdepim-apps-libs
 kf5-akonadi-contacts
 kf5-akonadi-mime
 kf5-akonadi-search
 kf5-akonadi-server
 kf5-akonadi-server-mysql
 kf5-grantleetheme
 kf5-kauth
 kf5-kbookmarks
 kf5-kcalendarcore
 kf5-kcompletion
 kf5-kconfig-gui
 kf5-kconfigwidgets
 kf5-kdbusaddons
 kf5-kded
 kf5-kdelibs4support
 kf5-kdelibs4support-libs
 kf5-kdoctools
 kf5-kemoticons
 kf5-kiconthemes
 kf5-kidentitymanagement
 kf5-kimap
 kf5-kinit
 kf5-kio-file-widgets
 kf5-kitemmodels
 kf5-kldap
 kf5-kmailtransport
 kf5-kmbox
 kf5-knewstuff
 kf5-knotifications
 kf5-kparts
 kf5-kpimtextedit
 kf5-krunner
 kf5-kservice
 kf5-ktextwidgets
 kf5-kunitconversion
 kf5-kwallet-libs
 kf5-kwidgetsaddons
 kf5-kwindowsystem
 kf5-kxmlgui
 kf5-libgravatar
 kf5-libkdepim
 kf5-libkleo
 kf5-messagelib
 kf5-pimcommon
 kf5-solid-libs
 kf5-sonnet-ui
 kf5-syntax-highlighting
 kf5-threadweaver
 libaio
 libical
 libsphinxclient
 lsof
 m4
 mariadb
 mariadb-common
 mariadb-config
 mariadb-errmsg
 mariadb-libs
 mariadb-server
 mariadb-server-utils
 net-tools
 perl-DBD-MySQL
 perl-DBI
 perl-Math-BigInt
 perl-Math-Complex
 perl-Storable
 postgresql-libs
 qt5-qtbase-mysql
 rsync
 sphinx

Installing Kontact on CentOS7

by Timotheus in Kolab – Homepage of Timotheus Pokorra at 10:44, Thursday, 29 June

If you search for “install kontact centos7”, the first link you find is: https://www.kolabsys.com/installation-guide/kontact-centos.html Unfortunately, that does not work. Even if you add the Kolab 16 repository (wget https://obs.kolabsys.com/repositories/Kolab:/16/CentOS_7/Kolab:16.repo) so that libkolab etc. can be installed, you still have conflicts, because kdepim-libs is installed from the base repository. It seems that the packages on OBS are […]

Kolab Now MX Migration Sorted

by kanarip in kanarip at 20:39, Friday, 23 June

When I switched over mail infrastructure earlier this week, I might have been under a few mistaken impressions — in hindsight, one might qualify at least some of them as embarrassing. Firstly, MX records never work the way you suspect — in part because you expect them to work as specified. Forget that. In a… Continue reading Kolab Now MX Migration Sorted

Kube: Views and Workflows

by cmollekopf in Finding New Ways… at 12:21, Saturday, 17 June

“Kube is a modern communication and collaboration client built with QtQuick on top of a high performance, low resource usage core. It provides online and offline access to all your mail, contacts, calendars, notes, todo’s and more. With a strong focus on usability, the team works with designers and UX experts from the ground up, to build a product that is not only visually appealing but also a joy to use.”

For more info, head over to: kube.kde.org

Ever since we started working on Kube we faced the conundrum of how to allow Kube to innovate and to provide actual additional value to whatever is already existing out there, while not ending up being a completely theoretical exercise of what could be done, but doesn’t really work in practice. In other words, we want to solve actual problems, but do so in “better” ways than what’s already out there, because otherwise, why bother?

I put “better” into quotes because this is of course subjective, but let me elaborate a bit on what I mean by that.

Traditionally, communication and organization have been dealt with using fairly disjoint tools that we as users then combine in whatever arbitrary fashion is useful to us.

For instance:

  • EMail
  • Chat
  • Voice/Video Chat
  • Calendaring
  • Task management
  • Note-taking

However, these are just tools that may help us work towards a goal, but often don’t support us directly in what we’re actually trying to accomplish.

My go-to example: Jane wants to have a meeting with Jim and Bob:

  • Jane tries to find a timeslot that works for all of them (by mail, phone, personally…)
  • She then creates an event in her calendar and invites Jim and Bob, who can in turn accept or decline (scheduling)
  • The meeting will probably have an agenda that is perhaps distributed over email, or within the event description
  • A meeting room might need to be booked, or an online service might need to be decided on.
  • Once the meeting takes place the agenda needs to be followed and notes need to be taken.
  • Meeting minutes and some actionable items come out of the meeting, that then, depending on the type of meeting, may need to be approved by all participants.
  • So finally the approved meeting minutes are distributed and the actionable items are assigned, and perhaps the whole thing is archived somewhere for posterity.

As you can see, a seemingly simple task can actually become a fairly complex workflow, and while we do have a toolbox that helps with some of those steps, nothing really ties the whole thing together.

And that’s precisely where I think we can improve.

Instead of trying to do yet another IMAP client, or yet another calendaring application, the far more interesting question is how we can improve our workflows, whatever tools that might involve.
Will that involve some email, some calendaring and some note-taking? Probably, but those are just a means to an end and not an end in themselves.

So if we think about the meeting scheduling workflow there is a variety of ways how we can support Jane:

  • The scheduling can be supported by:
    • Traditional iCal based scheduling
    • An email message to invite someone by text.
    • Some external service like doodle.com
  • The agenda can be structured as a todo-list that you can check off during the meeting (perhaps with time limits assigned for each agenda item)
  • An online meeting space can be integrated, directly offering the agenda and collaborative note-taking.
  • The distribution and approval of meeting minutes can be automated, resulting in a timeline of past meetings, including meeting minutes and actionable items (tasks) that fell out of it.

That means, however, that rather than building disjoint views for email, calendar and chat, perhaps we would help Jane more if we built an actual UI for that specific purpose, and other UIs for other purposes.

Views

So in an ideal world we’d have an ideal tool for every task the user ever has to execute, which would mean we fully understood each individual user and all of their responsibilities and favorite workflows…
Probably not going to happen anytime soon.

While there are absolutely reachable goals, like Jane’s meeting workflow above, they all come at significant implementation cost, and we can’t hope to implement enough of them right off the bat in adequate quality.
What we can do, however, is keep that mindset of building workflows rather than IMAP/iCal/… clients, set a target somewhere far off on the horizon, and try to build useful stuff along the way.

For us that means that we’ve now set up the basic structure of Kube as a set of “Views” that are used as containers for those workflows.

[Screenshot: the Kube view switcher]

This is a purposefully loose concept that will allow us to transition gradually from fairly traditional and generic views to more purposeful and specific ones, because we can introduce new views and phase out old ones once their purpose is served better by a more specific view.

Some of our initial view ideas are drafted up here: https://phabricator.kde.org/T6029

What we’re starting out with is this:

  • A Conversations view:
    While this initially is a fairly standard email view, it will eventually become a conversation-centric view where you can follow and pick up on ongoing conversations no matter the medium (email, chat, …).

[Screenshot: the conversations (mail) view]

  • A People view:
    While also serving the purpose of an addressbook, it is first and foremost a people-centric way to interact with Kube.
    Perhaps you just want to start a conversation that way, or perhaps you want to look up past interactions you had with the person.

[Screenshot: the people view]

  • A composer view:
    Initially a fairly standard email composer but eventually this will be much more about content creation and less about email specifically.
    The idea is that what you actually want to do if you’re opening the composer is to write some content. Perhaps this content will end up as email,
    or perhaps it will just end up as a note that will eventually be turned into a blog post; chances are you don’t even know before you’re done writing.
    This is why the composer implements a workflow that starts with the starting point (your drafts) then goes over to the actual composer to create the content, and finally allows you to do something with the content, i.e. publish or store it somewhere (for now that only supports sending it by email or saving it as draft, but a note/blog/… could be equally viable goals).

The idea of all this is that we can initially build fairly standard and battle-tested layouts and over time work our way towards more specialized, better solutions. It also allows us to offload some perhaps necessary, but not central, features to a secondary view, keeping us from having to stuff all available features into the single “email” view, and allowing us to specialize the views for the use cases they’re built for.


Purposefully Not the Center of Attention

by kanarip in kanarip at 15:29, Friday, 16 June

While it’s been argued before, over and over again, that the concepts behind a crypto-currency like BitCoin or Ethereum or Ripple or LiteCoin might mean the upheaval of the traditional economy, it’s also been argued that it is actually the underlying blockchain technology that implies the change of pace. I disagree. One can arguably not… Continue reading Purposefully Not the Center of Attention

Kube::Fabric

by cmollekopf in Finding New Ways… at 17:23, Tuesday, 06 June

“Kube is a modern communication and collaboration client built with QtQuick on top of a high performance, low resource usage core. It provides online and offline access to all your mail, contacts, calendars, notes, todo’s and more. With a strong focus on usability, the team works with designers and UX experts from the ground up, to build a product that is not only visually appealing but also a joy to use.”

For more info, head over to: kube.kde.org

One of the key features of QML is to be able to describe the UI in a declarative fashion, resulting in code that closely represents the visual hierarchy.
This is nice for writing UI’s because you can think very visually and then just turn that into code.
A couple of trial and error cycles later you typically end up with a result that reflects approximately what you sketched out on paper.

However, interactions with the UI are also handled from QML, and this is where this becomes problematic. User interactions with QML result in some signals or API calls on some component, typically some signal handler, and that’s where your application code takes over and tries to do something useful with that interaction. Perhaps we trigger some visual change, or we call a function on a controller object that does further work. Because we try to encapsulate and build interfaces we might build an API for the complete component that can then be used by the users of the component to further interact with the system.
This can quickly get complex with a growing number of interactions and components.

The straightforward implementation is to just forward the necessary API through your components, but that creates a lot of friction in the development process. A simple move of a button may drastically change where that button sits within the visual hierarchy, and if we have to forward the API accordingly, we end up writing a lot of boilerplate code for what ought to be a trivial change.

Item {
    id: root
    signal reply()
    Button {
        onClicked: root.reply()
    }
}

One alternative approach is to abuse the QML Context to avoid forwarding the API (so we just call to some magic component id), but this results in components that are not self-contained and that just call to some magic “singleton” objects to communicate with the rest of the application.
Not great in my opinion.

Item {
    id: root
    Button {
        onClicked: applicationController.reply() //Let's hope applicationController is defined somewhere
    }
}

The approach we’ve chosen in Kube is to introduce a mechanism that is orthogonal to the visual hierarchy. Kube::Fabric is a simple messagebus where messages can be published and subscribed to. A clicked button may for instance post a message to the bus that is called “Kube.Messages.reply”. This message may then be handled by any component listening on the messagebus. Case in point; the main component has a handler that will open a composer and fill it with the quoted message so you can fill in your reply.

So the “Fabric” creates a separate communication plane that “weaves” the various UI components together into something functional. It allows us to concentrate on the user experience when working on the UI without having to worry how we can eventually wire up the component to the other bits required (and it gives us an explicit point where we interconnect parts over the fabric), and thus allows us to write cleaner code while moving faster.

Item {
    id: root
    Button {
        onClicked: Kube.Fabric.postMessage(Kube.Messages.reply, {"mail": model.mail, "isDraft": model.draft})
    }
}

API vs. Protocols

The API approach tightly couples components together and eventually leads to situations where we have, for example, multiple versions of the same API call because we’re dealing with new requirements but have to maintain compatibility for existing users of the API. This problem is then further amplified by duplicating APIs throughout the visual hierarchy.

The Fabric instead takes a protocol approach: the fabric becomes the transport layer, and the messages posted form the protocol. While the messages form the contract, and thus require that no parameter is removed or changed, they can be expanded without affecting existing users of the message.
New parameters don’t bother any existing users and thus allow for much more flexibility with little (development) overhead.
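As a small illustration of that point, here is a hedged sketch using the C++ posting style shown in the Code section below; the requestReply() helper and the “inlineReply” key are made up for this example, and the Fabric include is omitted since I don’t know its header path:

#include <QVariant>
#include <QVariantMap>

// Hypothetical helper: post a reply message. Because the payload is just a
// QVariantMap, a new key like "inlineReply" can be added later without
// breaking listeners that only read the keys they already know.
void requestReply(const QVariant &mail)
{
    QVariantMap message;
    message.insert("mail", mail);
    message.insert("isDraft", false);
    message.insert("inlineReply", true); // hypothetical later addition

    Fabric::Fabric{}.postMessage("reply", message);
}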

Other applications

Another part where we started to use the fabric extensively is notifications. A component that listens for notifications from Sink (Error/Progress/Status/…) simply feeds those notifications into the fabric, allowing various UI components to react to them as appropriate. This approach allows us to feed notifications from a variety of sources into the system, so they can be dealt with in a uniform way.

Code

In actual code it is used like this.

Posting a message from QML:

 Kube.Fabric.postMessage(Kube.Messages.reply, {"mail": model.mail, "isDraft": model.draft})

Posting a message from C++ (message being a QVariantMap):

Fabric::Fabric{}.postMessage("errorNotification", message);

Listening for messages in QML:

Kube.Listener {
    filter: Kube.Messages.reply
    onMessageReceived: kubeViews.openComposerWithMail(message.mail, false)
}
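To make the mechanism more concrete, here is a minimal, generic sketch of such a publish/subscribe bus. This is not Kube’s actual Fabric implementation, just an illustration of the idea in plain Qt C++:

#include <QString>
#include <QVariantMap>
#include <QHash>
#include <functional>

// Generic message bus sketch: subscribers register a callback for a message
// id, postMessage fans the payload out to every matching subscriber.
class MessageBus
{
public:
    using Handler = std::function<void(const QVariantMap &)>;

    void subscribe(const QString &messageId, Handler handler)
    {
        mHandlers.insert(messageId, std::move(handler));
    }

    void postMessage(const QString &messageId, const QVariantMap &payload)
    {
        // Deliver to all handlers registered for this message id.
        const auto handlers = mHandlers.values(messageId);
        for (const auto &handler : handlers) {
            handler(payload);
        }
    }

private:
    QMultiHash<QString, Handler> mHandlers;
};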

Lookout

The fabric is right now rather simplistic and we’ll keep it that way until we see actual requirements for more. However, I think there is interesting potential in separating message-buses by assigning different fabrics to different components, with a parent fabric to hold everything together. This would enable components to first handle a message locally before resorting to a parent fabric if necessary. For the reply button that could mean spawning an inline-reply editor instead of switching the whole application state into compose mode.
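A purely hypothetical sketch of that idea, extending the generic bus from above (again, not Kube’s actual Fabric code): a child fabric tries its local handlers first and only escalates to its parent when nothing local cares about the message.

#include <QString>
#include <QVariantMap>
#include <QHash>
#include <functional>

// Hypothetical hierarchical bus: messages are handled locally when possible
// and escalated to the parent fabric otherwise.
class HierarchicalBus
{
public:
    using Handler = std::function<void(const QVariantMap &)>;

    explicit HierarchicalBus(HierarchicalBus *parent = nullptr)
        : mParent(parent) {}

    void subscribe(const QString &id, Handler handler)
    {
        mHandlers.insert(id, std::move(handler));
    }

    void postMessage(const QString &id, const QVariantMap &payload)
    {
        const auto handlers = mHandlers.values(id);
        if (handlers.isEmpty() && mParent) {
            // Nobody here cares, e.g. no inline reply editor available:
            // let the parent fabric switch the application into compose mode.
            mParent->postMessage(id, payload);
            return;
        }
        for (const auto &handler : handlers) {
            handler(payload);
        }
    }

private:
    HierarchicalBus *mParent = nullptr;
    QMultiHash<QString, Handler> mHandlers;
};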


Dear Lazyweb: No video for VLC on Fedora 26?

by kanarip in kanarip at 13:35, Sunday, 28 May

Dear Lazyweb, even though Fedora 26 is not yet released, I’ve upgraded — now, VideoLAN isn’t displaying video any longer. If I start it with --no-embedded-video I do get video, but it doesn’t seem to be able to run or switch to/from fullscreen. Resetting my preferences has not helped so far. I would appreciate to… Continue reading Dear Lazyweb: No video for VLC on Fedora 26?

Kolab 16 Available for Ubuntu Xenial: Testing

by kanarip in kanarip at 19:17, Tuesday, 09 May

I wrote before about a lack of support for PHP 7 in an essential utility used to generate language-specific bindings to our libraries, called SWIG. Well, on December 29th 2016, SWIG released a version that does support PHP 7. In the past few days, I’ve spent some time in patching both libkolabxml and libkolab packages… Continue reading Kolab 16 Available for Ubuntu Xenial: Testing

Email recipient input widget

by alec in Kolabian at 16:44, Thursday, 20 April

It has already been announced that we are working on a new theme for Roundcube. This is planned for version 1.4 and is in a very early state. However, we already started implementing some neat features that are nice to have, especially when we’re talking about mobile device support. Here I will show you one […]

Import tasks from .ics file

by alec in Kolabian at 12:37, Thursday, 06 April

I recently needed to import a set of tasks from an .ics file, and I was surprised that we can only import calendar events, not tasks. So, after some quick copy-pasting we have the new feature. Note that this should be improved in the future to export/import Kolab tag assignments.

Release of Kube 0.1.0

by cmollekopf in Finding New Ways… at 12:50, Friday, 03 March

It’s finally done! Kube 0.1.0 is out the door.

First off, this is a tech preview really and not meant for production use.

However, this marks a very important step for us, as it lifts us out of a rather long stretch of doing the ground work to get regular development up and running. With that out of the way we can now move in a steadier fashion, milestone by milestone.

That said, it’s also the perfect time to get involved!
We’re planning our milestones on phabricator, at least the ones within reach, so that’s the place to follow development along and where you can contribute, be it with ideas, feedback, packaging, builds on new platforms or, last but not least, code.

So what do we have so far?

You can set up an IMAP account, you can read your mail (even encrypted), you can move messages around or delete them, and you can even write some mails.

[Screenshot: Kube’s main window]

BUT there are of course a lot of missing bits:

  • GMail support is not great (it needs some extra treatment because GMail IMAP doesn’t really behave like IMAP), so you’ll see some duplicated messages.
  • We don’t offer an upgrade path between versions yet. You’ll have to nuke your local cache from time to time and resync.
  • User feedback in the UI is limited.
  • A lot of commonly expected functions are not existing yet.
  • ….

As you see… tech preview =)

What’s next?

We’ll focus on getting a solid mail client together first, so that’s what the next few milestones are all about.

The next milestone will focus on getting an addressbook ready, and after that we’ll focus on search for a bit.

I hope we can scope the milestones to approximately one month each, but we’ll have to see how well that works. In any case releases will be done only once the milestone is reached, and if that takes a little longer, so be it.

Packaging

This also marks the point where it starts to make sense to package Kube.
I’ve built some packages on copr already, which might help packagers as a start. I’ll also maintain a .spec file in the dist/ subdirectory of the kube and sink repositories (which you are welcome to use).

Please note that the codebase is not yet prepared for translations, so please wait with any translation efforts (of course patches to get translation going are very welcome).

In order to release Kube a couple of other dependencies are released with it (see also their separate release announcements):

  • sink-0.1.0: Being the heart of Kube, it will also see regular releases in the near future.
  • kimap2-0.1.0: The brushed-up IMAP library that we use in sink.
  • kasync-0.1.0: Heavily used in sink for writing asynchronous code.

Tarballs


Release of KAsync 0.1.0

by cmollekopf in Finding New Ways… at 12:32, Friday, 03 March

I’m pleased to announce KAsync 0.1.0.

KAsync is a library to write composable asynchronous code using lambda-based continuations.

In a nutshell:

Instead of:

class Test : public QObject {
    Q_OBJECT
public:
    void start() {
        step1();
    }
signals:
    void complete();
private:

    void step1() {
        // ... execute step one, then wire its result up to step two
        // (step1Job stands in for whatever job object performs the step)
        QObject::connect(step1Job, &Step1::result, this, &Test::step2);
    }

    void step2() {
        // ... execute step two, then wire its result up to done()
        QObject::connect(step2Job, &Step2::result, this, &Test::done);
    }

    void done() {
        emit complete();
    }

};

you write:

KAsync::Job<void> step1() {
    return KAsync::start<void>([] {
        //execute step one
    });
}

KAsync::Job<void> step2() {
    return KAsync::start<void>([] {
        //execute step two
    });
}

KAsync::Job<void> test() {
    return step1().then(step2());
}
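As a usage note, here is how I’d expect such a composed job to be driven; exec() returning a future and the future’s waitForFinished() are my assumptions about the calls involved, not documented 0.1.0 API:

// Hypothetical usage sketch: drive the composed job from test().
auto job = test();
auto future = job.exec();    // start the asynchronous execution
future.waitForFinished();    // blocking wait, for illustration only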

The first approach is the typical “job” object (e.g. KJob), using the object for encapsulation but otherwise just chaining various slots together.

This is however very verbose (because what typically would be a function now has to be a class), resulting in huge functional components implemented in a single class, which is really the same as having a huge function.

The problem gets even worse with asynchronous for-loops and other constructs, because at that point member variables have to be used as the “stack” of your “function”, and the chaining of slots resembles a function with lots and lots of goto statements (it becomes really hard to decipher what’s going on).

KAsync allows you to write such code in a much more compact fashion and also brings the necessary tools to write things like asynchronous for loops.

There’s a multitude of benefits with this:

  • The individual steps become composable functions again (that can also have input and output). Just like with regular functions.
  • The full assembly of steps (the test() function), becomes composable as well. Just like with regular functions.
  • Your function’s stack doesn’t leak to class member variables.

Additional features:

  • The job execution handles error propagation. Errors just bubble up through all error handlers unless a job reconciles the error (in which case the normal execution continues).
  • Each job has a “context” that can be used to manage the lifetime of objects that need to be available for the whole duration of the execution.

Please note that for the time being KAsync doesn’t offer any API/ABI guarantees.
The implementation relies heavily on templates and is thus mostly implemented in headers, so most changes will require recompiling your source code.

If you’d like to help out with KAsync or have feedback or comments, please use the comment section, or the phabricator page.

If you’d like to see KAsync in action, please see Sink.

Thanks go to Daniel Vrátil who did most of the initial implementation.

Tarball


Release of KIMAP2 0.1.0

by cmollekopf in Finding New Ways… at 12:10, Friday, 03 March

I’m pleased to announce the release of KIMAP2 0.1.0.

KIMAP2 is a KJob based IMAP protocol implementation.

KIMAP2 received a variety of improvements since its KIMAP days, among others:

  • A vastly reduced dependency chain as we no longer depend on KIO. KIMAP2 depends only on Qt, KF5CoreAddons, KF5Codecs, KF5Mime and the cyrus SASL library.
  • A completely overhauled parser. The old parser performed poorly in situations where data arrived at the socket faster than we could process it, resulting in a lot of time wasted memcpying buffers around and unbounded memory usage. To fix this, a dedicated thread was used for every socket, resulting in a lot of additional complexity and threading issues. The new parser uses a ringbuffer with fixed memory usage and doesn’t require any extra threads, allowing the socket to fill up if the application can’t process the data fast enough, and thus eventually keeping the server from sending more data (see the sketch after this list).
    It also minimizes memcopies and other parsing overhead, so the cost of parsing is roughly that of reading the data once.
  • Minor API improvements for the fetch and list jobs.
  • Various fixes for the login process.
  • No more KI18N dependencies. KIMAP2 is a protocol implementation and doesn’t require translations.
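To illustrate the ringbuffer idea mentioned above (a generic sketch in C++, not KIMAP2’s actual parser code): when the buffer is full, write() accepts fewer bytes than offered, the reader stops draining the socket, and TCP backpressure eventually makes the server back off.

#include <cstddef>
#include <vector>

// Generic fixed-capacity ring buffer sketch with bounded memory usage.
class RingBuffer
{
public:
    explicit RingBuffer(std::size_t capacity)
        : mData(capacity) {}

    std::size_t write(const char *src, std::size_t len)
    {
        std::size_t written = 0;
        while (written < len && mSize < mData.size()) {
            mData[(mHead + mSize) % mData.size()] = src[written++];
            ++mSize;
        }
        return written; // may be < len: buffer full, stop reading the socket
    }

    std::size_t read(char *dst, std::size_t len)
    {
        std::size_t readBytes = 0;
        while (readBytes < len && mSize > 0) {
            dst[readBytes++] = mData[mHead];
            mHead = (mHead + 1) % mData.size();
            --mSize;
        }
        return readBytes;
    }

private:
    std::vector<char> mData;
    std::size_t mHead = 0;
    std::size_t mSize = 0;
};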

For more information please refer to the README.

While KIMAP2 is the successor of KIMAP, KIMAP will stick around for the time being for kdepim.
KIMAP and KIMAP2 are completely coinstallable: library names, namespaces, environment variables etc. have all been adjusted accordingly.

KIMAP2 is actively used in sink and is fully functional, but we do not yet guarantee API or ABI stability (this will mark the 1.0 release).
If you need API stability rather sooner than later, please do get in touch with me.

If you’d like to help out with KIMAP2 or have feedback or comments, please use the comment section, the kde-pim mailinglist or the phabricator page.

Tarball


Kube at FOSDEM 2017

by cmollekopf in Finding New Ways… at 01:27, Friday, 03 February

I haven’t talked about it much, but the last few months we’ve been busy working on Kube and we’re slowly closing in on a first tech preview.

I’ll be at FOSDEM over this weekend, and I’ll give a talk on Sunday, 16:20 in room K.4.401, so if you want to hear more be there!

…or just find me somewhere around the Kolab or KDE booth.

See you at FOSDEM!


Here comes a quick overview on recent updates to Kolab 16 packages. Please note: I am only using public information. I have no background knowledge about the releases. I did not report on Updates for Kolab 16 while some courageous people (dhoffend, airhardt/sicherha, hede, kanarip, and probably more) were making Kolab 16 ready for Debian. […]

There is documentation about how to import Contacts into the Roundcube address books from CSV files: https://docs.roundcube.net/doc/help/1.1/en_US/addressbook/importexport.html Unfortunately, that documentation does not come with a description of the supported columns. I had a look at the source: https://github.com/roundcube/roundcubemail/blob/master/program/steps/addressbook/import.inc https://github.com/roundcube/roundcubemail/blob/master/program/lib/Roundcube/rcube_csv2vcard.php From that you can see that the import from GMail and from Outlook is supported via […]

Kolab 16 for Fedora 25

by Timotheus in Kolab – Homepage of Timotheus Pokorra at 08:39, Saturday, 31 December

This is work in progress, but I just wanted to share the news: I have Kolab 16 packages for Fedora 25 (with PHP7), built on copr! The support for Fedora 24 is broken in OBS, ticket: https://git.kolab.org/T1564. Fedora 25 was added after that, but it is broken as well, see for example https://obs.kolabsys.com/package/show/Kolab:Winterfell/libcalendaring I was […]

Sieve raw editor

by alec in Kolabian at 18:44, Tuesday, 27 December

Thanks to the great CodeMirror editor and its built-in syntax highlighter, we now have a nice editor widget for Sieve scripts. So, now you can edit your scripts not only using the user-friendly interface, but also by directly changing the script code, which is especially useful if you need to use a Sieve feature not supported […]

Sending contacts as vCards

by alec in Kolabian at 19:12, Friday, 16 December

There’s a lot about vCards in Roundcube. No surprise: it is a simple and established open standard. Contacts import and export are based on it, as is Roundcube’s internal format for contacts. There’s also the vcard_attachments plugin that adds some more functionality around vCard use. As the plugin name suggests, it’s about handling of vCards […]

vCard in QR code

by alec in Kolabian at 18:12, Sunday, 11 December

Another small feature that will be available in Roundcube 1.3: an option to display a QR code image that contains (limited) contact information. So, you can quickly share a contact with any mobile device. It looks like there are a few standards for this, but I chose the well-known vCard format as the data encapsulated by the […]

Contact identicons

by alec in Kolabian at 18:48, Friday, 02 December

Identicons (avatars, sigils) are meant to provide a recognizable image that identifies a specified entity (e.g. a contact person), much like photos do. So, when a contact photo is not available we can still have a nice image that is unique for every contact (email address). I just committed the identicon plugin for Roundcube that implements this. […]

Phabricator Packages for EPEL, Fedora

by kanarip in kanarip at 14:44, Thursday, 01 December

Using Pagure and COPR, Tim Flink and I have settled on using common infrastructure to further the inclusion of Phabricator into the Fedora repositories (and EPEL). I’m hoping this will bear fruit and get more people on board. Our Pagure repository is at Phab Phour Phedora — a mix of Fabulous Four, Phabricator for… Continue reading Phabricator Packages for EPEL, Fedora

Kolab 16.1 for Jessie: Way Ahead of Schedule

by kanarip in kanarip at 15:54, Friday, 18 November

I reported before that I was a little ahead of schedule in making Kolab 16.1 available for Debian Jessie, and as it turns out I was wrong — I’m way ahead. In fact, installing and configuring Kolab 16.1 on Jessie passes without any major headaches. I think the one little tidbit I have left relates… Continue reading Kolab 16.1 for Jessie: Way Ahead of Schedule

Collaborative editing with Collabora Online in Kolab

by alec in Kolabian at 11:49, Wednesday, 16 November

About a year ago I blogged about document editing features in Kolab. This year we took a big step forward. Thanks to Collabora Online you can now use LibreOffice via the Kolab web client. Every part of this system is Free Software, which is normal in the Kolab world. Collabora Online (CODE) implements the WOPI REST API. WOPI is […]

Kolab 16.1 for Jessie: Ahead of Schedule

by kanarip in kanarip at 10:24, Tuesday, 15 November

I told you earlier this week I would be working on providing Debian packages for Kolab 16 next week, but an appointment I had this week was cancelled, and I get to get started just ever so slightly earlier than expected. This lengthens the window of time I have to deal with all the necessities,… Continue reading Kolab 16.1 for Jessie: Ahead of Schedule

Kolab 16.1: What’s on the Horizon

by kanarip in kanarip at 13:59, Monday, 14 November

This is an announcement I just know you’re going to want to read: Kolab 16.1 will become available for the following platforms, with more details to come soon; Red Hat Enterprise Linux 7 and CentOS 7, Debian Jessie (8) This week, I’ll be working to make Kolab:16 for Red Hat Enterprise Linux 7 (Maipo) the… Continue reading Kolab 16.1: What’s on the Horizon

Improvement in (free-busy) availability finder

by alec in Kolabian at 20:05, Wednesday, 12 October

When planning a meeting with other people in your organization, you use the Availability Finder widget. Its granularity from the beginning was one hour, no less, no more. On the other hand, in the Calendar view you could configure that granularity, i.e. the number of slots in an hour. My changeset that awaits review will fix this […]

“Mark all as read” option

by alec in Kolabian at 15:28, Monday, 10 October

It was already possible to mark selected messages as read, but it required a few clicks. It looks like many people prefer a single-click action for this particular case, probably because other mail clients have it, or maybe it’s just a very common action. So, I implemented it today as a core feature. You can find “Mark […]

Kube: Accounts

by cmollekopf in Finding New Ways… at 15:16, Monday, 10 October

Kube is a next generation communication and collaboration client, built with QtQuick on top of a high performance, low resource usage core called Sink.
It provides online and offline access to all your mail, contacts, calendars, notes, todo’s etc.
Kube has a strong focus on usability, and the team works with designers and UX experts from the ground up, to build a product that is not only visually appealing but also a joy to use.

To learn more about Kube, please see here.

Kube’s Account System

Data ownership

Kube is a network application at its core. That doesn’t mean you can’t use it without a network connection (even permanently), but you’d severely limit its capabilities, given that it’s meant to be a communication and collaboration tool.

Since network communication typically happens over a variety of services where you have a personal account, an account provides a good starting point for our domain model. If you have a system with large amounts of data that are constantly changing it’s vital to have a clear understanding of data ownership within the system. In Kube, this is always an account.

By putting the account front and center we ensure that we don’t have any data that just belongs to the system as a whole. This is important because it becomes very complex to work with data that “belongs to everyone” once we try to synchronize that data with various backends. If we modify a dataset should that replicate to all copies of it? What if one backend already deleted that record? Would that mean we also have to remove it from the other services?
And what if we have a second client that has a different set of accounts connected?
If we ensure that we always only have a single owner, we can avoid all those issues and build a more reliable and predictable system.

The various views can of course still correlate data across accounts where useful, e.g. to show a single person entry instead of one contact per addressbook, but they then also have to make sure that it is clear what happens if you go and modify e.g. the address of that person (Do we modify all copies in all accounts? What happens if one copy goes out of sync again because you used the web interface?).

Last but not least, we ensure this way that we have a clear path to synchronize all data to a backend eventually, even if we can’t do so immediately, e.g. because the backend in use does not support that data type yet.

The only bit of data that is stored outside of the account is data specific to the device in use, such as configuration data for the application itself. Such data isn’t hard to recreate, is easy to migrate and back up, and is very little data in the first place.

Account backends

Most services provide you with a variety of data for an individual account. Whether you use Kolabnow, Google or a set of local Maildirs and iCal files, you typically have access to contacts, mail, events, todos and more. Fortunately most services provide access to most data through open protocols, but unfortunately we often need a variety of protocols to get to all of that data.

Within Sink we call each backend a “Resource”. A resource typically has a process to synchronize data to an offline cache, and then makes that data accessible through a standardized interface. This ensures that even if one resource synchronizes email over IMAP and another just gathers it from a local Maildir,
the data is accessible to the application through the same interface.

Because various accounts use various combinations of protocols, accounts can mix and match various resources to provide access to all data they have.
A Kolab account for instance, could combine an IMAP resource for email, a CALDAV resource for calendars and CARDDAV resource for contacts, plus any additional resources for instant messaging, notes, … you get the idea. Alternatively we could decide to get to all data over JMAP (a potential IMAP successor with support for more datatypes than just email) and thus implement a JMAP resource instead (which again could be reused by other accounts with the same requirements).
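To make that concrete, here is a purely illustrative sketch of an account as an assembly of resources; this is not Sink’s actual API, and all names are made up for the example.

#include <string>
#include <vector>

// Each resource handles one protocol/data type and exposes its data
// through the same standardized interface.
struct ResourceDescriptor {
    std::string type;       // e.g. "imap", "caldav", "carddav"
    std::string identifier; // instance id of the resource
};

struct Account {
    std::string name;
    std::vector<ResourceDescriptor> resources;
};

Account kolabAccount()
{
    // A Kolab account combining several resources, as described above.
    return Account{"Kolab", {
        {"imap",    "imap.instance1"},
        {"caldav",  "caldav.instance1"},
        {"carddav", "carddav.instance1"},
    }};
}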

[Diagram: an account assembling multiple resources]

Specialized accounts

While accounts within Sink are mostly an assembly of some resources with some extra configuration, on the Kube side a QML plugin is used (we’re using KPackage for that) to define the configuration UI for the account. Because accounts are ideally just an assembly of a couple of existing Sink resources with a QML file to define the configuration UI, it becomes very cheap to create account plugins specific to a service. So while a generic IMAP account settings page could look like this:

[Screenshot: a generic IMAP account settings page]

… a Kolabnow setup page could look like this (and this already includes the setup of all resources including IMAP, CALDAV, CARDDAV, etc.):

[Screenshot: the Kolabnow account setup page]

Because we can build all we know about the service directly into that UI, the user is optimally supported, and ideally all that is left are the credentials.

Conclusion

In the end the aim of this setup is that a user first starting Kube selects the service(s) he uses, enters his credentials and he’s good to go.
In a corporate setup, login and service can of course be preconfigured, so all that is left is whatever is used for authentication (such as a password).

By ensuring all data lives under the account, we ensure no data ends up in limbo with unclear ownership, so all your devices have the same dataset available, and connecting a new device is a matter of entering credentials.

This also helps simplify backup, migration and various deployment scenarios.


the case of the one byte packet

by Aaron Seigo in aseigo at 11:27, Tuesday, 20 September

the case of the one byte packet

Yesterday I pushed a change set for review that fixes an odd corner case for the Guam IMAP proxy/filter tool that was uncovered thanks to the Kolab Now Beta program which allows people to try out new exciting things before we inflict them upon the world at large. So first let me thank those who are using Kolab Now Beta and giving us great and valuable feedback before turning to the details of this neat little bug.

So the report was that IMAP proxying was breaking with iOS devices. But, and here's the intriguing bit, only when connecting using implicit TLS; connecting to the IMAP server normally and upgrading with STARTTLS worked fine. What gives?

In IMAP, commands sent to the server are expected to start with a string of characters which becomes the identifying tag for that transaction, usually taking the form of "TAG COMMAND ARGS". The tag can be whatever the client wants, though many clients just use a number that increases monotonically (1, 2, 3, 4, ...). The server will use that tag to prefix the success/failure response in the case of multi-line responses, or tag the response itself in the case of simpler one-line responses. This allows the client to match up the server response with the request and know when the server is indeed finished spewing bytes at it.

We looked at the network traffic and in that specific case iOS devices fragment the IMAP client call into one packet with the tag and one packet with the command. No other client does this, and even iOS devices do not do this when using the STARTTLS upgrade mechanism. As a small performance hack, I had allowed the assumption that the "TAG COMMAND" part of client messages would never be fragmented on the network. This prevented the need for buffering and other bookkeeping in the application within a specific critical code path. It was an assumption that was indeed not guaranteed, but the world appeared to be friendly and cooperating. After all, what application would send "4" in one network packet, and then "XLIST" in a completely separate one? Would it (the application, the socket implementation, ..) not compose this nicely into one little buffer and send it all at once? If so, what network topology would ever fragment a tiny packet of a few dozen bytes into one byte packets? Seemed safe enough, what could go wrong .. oh, those horrible words.

So thanks to one client in one particular configuration being particularly silly, if technically still within its rights, I had to introduce a bit of buffering when and where necessary. I also took the opportunity to do a little performance enhancement that was on my TODO while I was mucking about in there: tag/command parsing, which is necessary and useful for rules to determine whether they care about the current state of the connection, is now both centralized and cached. So instead of happening twice for each incoming fragment of a command (in the common case), it now happens at most once per client command, and that will hold no matter how many rules are added to a ruleset.
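Guam itself is written in Erlang, so the following is only a language-neutral illustration in C++ of the buffering idea: accumulate fragments until the “TAG COMMAND” prefix is complete, parse it once, and cache the result for the remainder of the command.

#include <algorithm>
#include <optional>
#include <string>
#include <utility>

// Sketch of buffering fragmented IMAP input (illustrative, not Guam's code).
// Fragments like "4" and then "XLIST" arriving separately are appended until
// the space-separated "TAG COMMAND" prefix can be parsed in one go.
class CommandBuffer
{
public:
    // Returns tag and command once enough data has arrived, otherwise nothing.
    std::optional<std::pair<std::string, std::string>>
    feed(const std::string &fragment)
    {
        mBuffer += fragment;

        const auto firstSpace = mBuffer.find(' ');
        if (firstSpace == std::string::npos) {
            return std::nullopt; // tag not even complete yet ("4" alone)
        }
        const auto secondSpace = mBuffer.find(' ', firstSpace + 1);
        const auto lineEnd = mBuffer.find("\r\n");
        if (secondSpace == std::string::npos && lineEnd == std::string::npos) {
            return std::nullopt; // command word still incomplete
        }
        const auto commandEnd = std::min(secondSpace, lineEnd);
        return std::make_pair(
            mBuffer.substr(0, firstSpace),
            mBuffer.substr(firstSpace + 1, commandEnd - firstSpace - 1));
    }

private:
    std::string mBuffer;
};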

So, one bug squashed, and one little performance enhancement, thanks to user feedback and the Kolab Now Beta program. As soon as the patch gets through code review, it should get pushed through packaging and be deployed on Kolab Now Beta. Huzzah.

Widescreen layout aka three column view

by alec in Kolabian at 19:11, Saturday, 17 September

Most modern mail clients already use a three-column layout. A plugin exists for Roundcube that implements this feature, but it’s not actively developed and its quality is low. It’s about time to implement this as a core feature of Roundcube. So, my idea was to get rid of the preview pane setting (and switching) […]

New options for compose attachments

by alec in Kolabian at 12:34, Saturday, 30 July

I just added a set of improvements to the mail compose window. The attachments list was extended with the possibility to open and download uploaded files. The new menu also contains an option to rename an already attached file. This makes the attachments lists in mail preview and in compose more unified. Previewing and downloading files is already handled […]

WebP and MathML support

by alec in Kolabian at 09:41, Sunday, 24 July

Some web browsers support these technologies. Recently I added support for them in Roundcube. Here’s a short description of what you can do with these right now and what the limitations are. WebP images: this image format is supported by Google Chrome, Opera and Safari. So if you open a message with such images attached you’ll […]

Current status for mailrendering in kube & kmail

by Sandro Knauß in Decrypted mind at 09:15, Monday, 18 July

In my last entry I introduced libotp. But this name had a problem: people thought it was a library for one-time passwords, so we renamed it to libmimetreeparser.

Over the last months I cleaned up and refactored the whole mimetreeparser to turn it into a self-contained library.

Usage

Dependencies

As a general rule we wanted to make sure that we only have dependencies in mimetreeparser where we can easily tell why we need them. We ended up with:

KF5::Libkleo
KF5::Codecs
KF5::I18n
KF5::Mime
  • KF5::Libkleo is the dependency we are not happy with, because it pulls in many widget-related dependencies that we want to avoid. But there is light at the end of the tunnel: we will hopefully switch to GpgME directly in the next weeks. GpgME is planning to have a Qt interface that fulfills our needs for decrypting and verifying mails. The source of GpgME’s Qt interface is libkleo, which is why the patch will be quite small. At the KDEPIM sprint in Toulouse in spring this year, I already gave the Qt interface a try and made sure that our tests still pass.
  • KF5::Codecs to translate between the different encodings that can occur in a mail.
  • KF5::I18n for translations of error messages. If we want consistent translations of error messages, we need to handle them in libmimetreeparser.
  • KF5::Mime because the input mail is a MIME tree.

Rendering in Kube

In Kube we have decided to use QML to render mails for the user, which made it easy to strip away all the HTML-rendering-specific parts. We end up just triggering the ObjectTreeParser and creating a model out of the resulting tree. The model is then the input for QML, which loads different code for the different parts of the mail: for plain text it just shows the text, for HTML it loads the part in a WebEngine.
That said, the interface we use is quite new and still under development (T2308). There will certainly be changes until we are happy with it, and I will describe the interface in detail once we are. As a side note, we don’t want separate interfaces for Kube and kdepim; the new interface should be suitable for all clients. To avoid constantly breaking the clients, we keep the current interface and develop the new one from scratch, and will switch once we are happy with it.

[Screenshot: mail rendering in Kube]

Rendering in Kmail

As before, we use HTML as the rendering output, but with the rise of libmimetreeparser, KMail now also uses the message-part tree as input and translates it into HTML. So here, too, we have a clear separation between the parsing step (handled in libmimetreeparser) and the rendering step, which happens in the messageviewer. KMail has additional support for mime types like iTip (invitations) and vCard. The problem with these parts is that they need to interact directly with Akonadi to load information. That way we can detect whether an event is already known to Akonadi, or whether we have the vCard already saved, which then changes the visible representation of that part. This all works because libmimetreeparser has an interface for adding additional mime-type handlers (called BodyPartFormatters).
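As a purely hypothetical illustration of that plugin idea (the real BodyPartFormatter interface in libmimetreeparser differs in names and details; all types here are placeholders):

class BodyPart;     // raw MIME part (hypothetical placeholder type)
class MessagePart;  // node in the resulting message-part tree (placeholder)

// Hypothetical shape of a mime-type handler; NOT the actual
// libmimetreeparser interface, just an illustration of the concept.
class BodyPartFormatter
{
public:
    virtual ~BodyPartFormatter() = default;

    // The mime type this formatter handles, e.g. "text/calendar" for iTip
    // invitations or "text/vcard" for contacts.
    virtual const char *mimeType() const = 0;

    // Turn the raw body part into a message part for the tree; here the
    // formatter may consult external state (e.g. Akonadi in KMail's case)
    // to adapt the visible representation.
    virtual MessagePart *format(const BodyPart &part) const = 0;
};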

Additionally, the messageviewer now uses Grantlee for creating the HTML, which is very handy and makes it easy to change the visual presentation of mails just by changing the theme files. That should help a lot if we want to change the look and feel of email presentation, and it additionally allows us to think about different themes for email presentation. We also thought about the implications of easily changeable themes and realized that it shouldn’t be that easy to change the theme files, because malicious users could use them to fake good crypto mails. That’s why the theme files are shipped inside resource files.

While I was implementing this, Laurent Montel added JavaScript/jQuery support to the messageviewer. So I sat down and created an example that switches the alternative part (the HTML and text parts that can be switched between) with jQuery. We came to the conclusion that this is not a good idea (D1991), but maybe others will come up with good ideas for where we can use the power of jQuery inside the messageviewer.

[Screenshots: the alternative-part switcher, variants 1 and 2]

Kube going cross-platform in Randa

by cmollekopf in Finding New Ways… at 20:03, Saturday, 18 June

I’m on my way back home from the cross-platform sprint in Randa. The four days of hacking, discussing and hiking that I spent there allowed me to get a much clearer picture of how the cross-platform story for Kube can work, and what effort it will require to get there.

We intend to eventually ship Kube not only on Linux, Windows and Mac, but also on mobile platforms like Android, so it is vital that we figure out blockers as early as possible and keep the whole stack portable.

Flatpak

The first experiment was with a new distribution mechanism rather than a full new platform. Fortunately Aleix Pol had already learned the ropes and quickly whipped up a Flatpak definition file that resulted in a self-contained Kube distribution that actually worked.

Given that we already use homegrown docker containers to achieve similar results, we will likely switch to Flatpak to build git snapshots for early adopters and people participating in the development process (such as designers).

Android

Andreas Cord-Landwehr prepared a Docker image that brings the complete cross-compiler toolchain with it, which makes for a much smoother setup process than doing everything manually. I mostly just followed the KDE-Android documentation.

After resolving some initial issues with the Qt installer together with Andreas (Qt has to be installed using the GUI installer even from the console, using some arcane configuration script that changes variables with every release… WTF, Qt), I quickly got set up to compile the first dependencies.

Thanks to the work of Andreas, most frameworks already compile flawlessly; some other dependencies like LMDB, flatbuffers and KIMAP (which currently still depends on KIO) will require some more work, though. However, I have a pretty good idea by now of what will be required to get everything to build on Android, which was the point of the exercise.

Windows

I postponed the actual building of anything on Windows until I get back to my workstation, but I’ve had some good discussions about the various possibilities that we have to build for Windows.

While I was initially intrigued by using MXE to cross-compile the whole stack (I’d love not having to keep a Windows VM around to build packages), the simplicity of the external CMake project that Kåre Särs set up for Kate is tempting as well. The downside of MXE would of course be that we don’t get to use the native compiler, which may or may not have a performance impact, but definitely doesn’t allow developers on Windows to work with MSVC (should we get any at some point…).

I guess some experimentation will be in order to see what works best.

Mac

Lacking an OS X machine this is also still in the theoretical realm, but we also discussed the different building techniques and how the aim must be to produce end-user installable application bundles.


As you can see there is still plenty to do, so if you feel like trying to build the stack on your favorite platform, that help would be greatly appreciated! Feel free to contact me directly, grab a task on Phabricator, or join our weekly meetings on meet.jit.si/kube (currently every Wednesday 12:00).

The time I spent in Randa showed once more how tremendously useful these sprints are for exchanging knowledge. I would have had a much harder time figuring out all the various processes and issues without the help of the various experts at the sprint, so this was a nice kick-start of the cross-platform effort for Kube. So thank you, Mario and team, for organizing this excellent event once more, and if you can, please help keep these sprints happening.


How I made Crypt_GPG 100 times faster

by alec in Kolabian at 13:50, Wednesday, 15 June

Here’s the short story of a performance investigation I did to make the Crypt_GPG library (used by Roundcube’s Enigma plugin) 100 times faster. My fixes improved encryption/signing performance as well as peak memory usage. While testing Enigma with messages that have attachments, I noticed it is really slow when encrypting or even just signing the message […]

New IMAP filter/proxy release: guam 0.8, eimap 0.2

by Aaron Seigo in aseigo at 14:58, Friday, 10 June

New IMAP filter/proxy release: guam 0.8, eimap 0.2

Over the last few months I have been poking away at a refactoring of the IMAP library that Kolab's IMAP filter/proxy uses behind the scenes, called eimap. It consolidated quite a bit of duplicated code between the various IMAP commands that are supported, and fixed a few bugs along the way. This refactoring dropped the code count, made implementing new commands even easier, and allowed for improvements that affect all commands (usually because they are related to the core IMAP protocol) to be made in one central place. This was rolled out as eimap 0.2 the other week and has made its way through the packaging process for Kolab. This is a significant milestone for eimap on the path to being considered "stable".

Guam 0.8 was tagged last week and takes full advantage of eimap 0.2. This has entered the packaging phase now, but you can grab guam 0.8 here:

Highlights of these two releases include:

  • EIMAP
    • several new IMAP commands supported
    • all core IMAP response handling is centralized, making the implementation for each command significantly simpler and more consistent
    • support for multi-line, single-line and binary response command types
    • support for literals continuation
    • improved TLS support
    • fixes for metadata fetching
    • support for automated interruption of passthrough state to send structured commands
    • commands receive server responses for commands they put into the queue
  • Guam
    • ported to eimap 0.2
    • limit processcommandqueue messages in the FSM's mailbox to one in the per-session state machine
    • be more expansive in what is supported in LIST commands for the groupware folder filter rule
    • init scripts for both sysv and systemd

One change that did not make it into 0.8 was the ability to define which port to bind Guam listeners to by network interface. This is already merged for 0.9, however. I also received some interest in using Guam with other IMAP servers, so it looks likely that Guam 0.8 will get testing with Dovecot in addition to Cyrus.

Caveats: If you are building by hand using the included rebar build, you may run into some issues with the lager dependencies, depending on which versions of lager and friends are installed globally (if any). If so, change the dependencies in rebar.config to match what is installed. This is largely down to rebar 2.x being a little limited in its ability to handle such things. We are moving to rebar3 for all the Erlang packages, so eimap 0.3 and guam 0.9 will both use rebar3. I already have Guam building with rebar3 in a 0.9 feature branch; it was pretty painless and already produces something even a little nicer. As soon as I fix up the release generation, this will probably be the first feature branch to land in the develop branch of guam for 0.9!
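
For illustration, this is the kind of edit meant here: a rebar 2.x dependency entry in rebar.config, adjusted to whatever lager version is actually installed locally (the version pattern, git URL and tag below are placeholders, not the project's real requirements):

{deps, [
    {lager, "2.*", {git, "https://github.com/basho/lager.git", {tag, "2.1.1"}}}
]}.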

It is also known that the test suite for Guam 0.8 is broken. I have this building and working again in the 0.9 branch, and will probably be doing some significant changes to how these tests are run for 0.9.

events?(Kolab)

by Aaron Seigo in aseigo at 11:41, Friday, 10 June

I joined Kolab Systems just over 1.5 years ago, and during that time I have put a lot of my energy into working with the amazing team of people here to improve our processes, and the execution of those processes, around sales, communication, community engagement, professional services delivery, and product development. They have certainly kept me busy and moving at warp 9, but the results have been their own reward as we have moved together from strength to strength across the board.

One place that this has been visible is the strengthening of our relationship with Red Hat and IBM, which has culminated in two very significant achievements this year. First, Kolab is available on the Power 8 platform thanks to a fantastic collaboration with IBM. For enterprise customers and ISP/ASPs alike who need to be able to deliver Kolab at scale in minimum rack space, this is a big deal.

For those with existing Power 8 workloads, it also means that they can bring in a top-tier collaboration suite with quality services and support backing it up on their already provisioned hardware platform; put more simply: they won't have to support an additional x86-based pool of servers just for Kolab.

To help introduce this new set of possibilities, we have organized a series of open tech events called the Kolab Tasters in coordination with IBM and Red Hat.

Besides enjoying local beverages and street food with us at these events, attendees will be able to experience Kolab on Red Hat Enterprise Linux on Power 8 first-hand on the demo stations that will be available around the event site. Presentations from Kolab Systems, IBM and Red Hat form the main part of the agenda for each of these events, and will give attendees a deep understanding of how the open technologies from IBM (Power 8), Red Hat (Linux OS), and Kolab Systems (Kolab) deliver fantastic value and freedom, especially when used together.

The first events scheduled are:

  • Zürich, Switzerland on the 14th June, 2016
  • Vienna, Austria on the 22nd June, 2016
  • Bern, Switzerland on 28th June, 2016

There are some fantastic speakers lined up for these events, including Red Hat's Jan Wildeboer and Dr. Wolfgang Meier, who is director of hardware development at IBM. At the Vienna event, we will also be celebrating the official opening of Kolab Systems Austria, which has already begun to support the needs of partners, customers and government in the beautiful country of Austria from our office in Vienna.

Events in Germany, starting in Frankfurt, will be scheduled soon, and we will be doing a "mini-taster" at the Kolab Summit which is taking place in Nürnberg on the 24th and 25th of June. Additional events will be scheduled in accordance with interest over the next year. I expect this to become a semi-regular road-show, in fact.

And speaking of the Kolab Summit: it is also going to be a fantastic event. Co-hosted with the openSUSE Conference, we will be sharing the technical roadmap for Kolab for 2016-2017; unveiling our partner program for ISPs, ASPs and system integrators, which we incrementally rolled out earlier this year and which is now ready for broad adoption; listening to guest speakers on timely topics such as Safe Harbor in the EU and taking Kolab into vertical markets; and, of course, having a busy "hallway session" where you can meet and talk with key developers, designers, management and sales people from the Kolabiverse.

You can still book your free tickets to these events from their respective websites:

Kolab Summit 2.0: Putting the freedom back into the cloud.

Join us this June 24-25 in Nürnberg, Germany for the second annual Kolab Summit. Like last year's summit, we've accepted the invitation of the openSUSE Community to co-locate the summit with the openSUSE Conference, which will be held June 22-26 in the same location. And because we have some special news to share and celebrate, we're also putting on a special edition Kolab Taster on Friday June 24th. The overarching theme for this year's summit will be how to put the freedom back into the cloud.

Using the US cloud is increasingly fraught with technical and legal insecurities. Cross-border transfer of data is becoming more complex, and sovereign control of your data seems increasingly hard to achieve. Kolab believes there needs to be a better answer than simply giving up the benefits of the cloud, renouncing its convenience and cost efficiency. That is why during this year's summit we will be discussing the impact of the Safe Harbor ruling, and how Kolab has been working with our partners at Collabora and others to provide a fully open, collaborative cloud technology platform.

Technology

Join us to learn how Kolab has been helping Application Service Providers (ASPs) withstand Office365's onslaught, and how the cloud of the future will be running Kolab. Get exclusive insights and previews into our thinking, road map and the state of development of Kube and other components.

Business

In the business section we will be talking with partners about our new partner programme, business opportunities Kolab offers, and how to build a valuable proposition for your customers around Kolab.

Celebration

Finally, join us on the evening of the 24th for an exclusive Kolab Taster where we will have some major news to celebrate. And while you're there, don't miss the opportunity to also be part of the exciting openSUSE Conference, which has finally come home to Nürnberg, the home city of SUSE, complete with castles, food, beer and much to see.

Tickets are FREE, so grab yours today.

Tasks export

by alec in Kolabian at 15:52, Tuesday, 31 May

As with calendar events, tasks are objects that provide calendaring information; both are internally based on the iCalendar format. So why couldn’t we export tasks in iCal format as we can with calendar events? Well, now we can. As you can see in the screenshot, there is now an Export button in the Tasks toolbar. It allows you […]

Sieve ‘duplicate’ extension

by alec in Kolabian at 09:44, Thursday, 05 May

The ‘duplicate’ extension (RFC 7352) adds a new test command called duplicate to the Sieve language. This test adds the ability to detect duplicate deliveries. It is supported by Dovecot’s Pigeonhole project, and is now also supported by Roundcube’s managesieve plugin. The main application for this new test is handling duplicate deliveries commonly caused by mailing list […]
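
For illustration, here is a minimal Sieve script in the style of the RFC 7352 examples, filing any message whose Message-ID has been seen before into a separate folder (the folder name is just an example):

require ["duplicate", "fileinto", "mailbox"];

if duplicate {
    fileinto :create "Trash/Duplicates";
}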

libotp - email rendering in kube

by Sandro Knauß in Decrypted mind at 13:03, Friday, 05 February

An important part of a mail reader is rendering an email; nobody likes to read the raw MIME message.
For Kube we looked around for what we should use for mail rendering, and came to the conclusion that we would use parts of kdepim for that task. The situation as it was, however, was not usable for us, because the rendering code was tangled together with other parts of the message viewer (Akonadi, widget code, ...), so we would have ended up depending on nearly everything in kdepim, which was a no-go for us. But after a week of untangling the rendering part from messageviewer, we ended up with a nice library that does only mail rendering, called libotp (branch dev/libotp). For the moment that is just a working name; maybe someone will come up with a better one?

Why a lib - isn't email rendering easy?

Seen from the outside, it really looks like an easy task to solve. But in detail it is quite complicated. There are encrypted and signed mail parts, HTML and non-HTML mail parts, alternative MIME structures, broken mail clients, and so on. Then we also want user interaction: do we want to decrypt mails by default? Do we want to verify mails by default? Do we allow external HTML links? Does the user prefer HTML over plain text? ...

In total you have to keep many things in mind while rendering a mail. And that is before we even mention that we also want a pluggable system, where we can create our own rendering for special types of mails. All of these things are already solved by the messageviewer of kdepim: it has highly integrated crypto support and support for many different MIME types, it has been in use for years, and additionally the test coverage is quite high. As you can see in the image above, we are already able to decrypt and verify mails.

libotp

libotp is a library that renders emails to HTML. You may ask: why the hell HTML? I hate HTML mails, too :D But HTML is easy to display, and back in the day there was no alternative for showing something that dynamic. Nowadays we have other options like QML, but we still have HTML messages that we want to be able to display. Currently there is no way to try out QML rendering for mails, because the output of libotp is limited to HTML. I hope to solve this too and give libotp the ability to render to different output formats, by splitting the monolithic task of rendering an email to HTML into a parse step, in which the structure of the email is translated into its visible parts (a signed part is followed by an encrypted one, which has an HTML part as a child, ...), and a pure rendering step.

If you follow the libotp branch dev/libotp link, you may wonder whether a fork of messagelib is happening. No: the repo was created so that libotp can be used in Kube now, and I took many shortcuts and used ugly hacks to get it working. The plan is for libotp to be part of the messagelib repo, and I have already managed to push the first batch of (polished) patches upstream. If everything goes well, I will have everything upstreamed by next week.

How to use it?

At the moment it is still work in progress, so it may change - especially if others step up and give input about the way they want to use it.
Let's have a look at how Kube uses it in kube/framework/mail/maillistmodel.cpp:

// mail -> kmime tree
const auto mailData = KMime::CRLFtoLF(file.readAll());  
KMime::Message::Ptr msg(new KMime::Message);  
msg->setContent(mailData);  
msg->parse();  

First step: load the mail into KMime to get a tree of the MIME parts. Here file is a local file containing a mail in mbox format.

// render the mail
StringHtmlWriter htmlWriter;  
QImage paintDevice;  
CSSHelper cssHelper(&paintDevice);  
MessageViewer::NodeHelper nodeHelper;  
ObjectTreeSource source(&htmlWriter, &cssHelper);  
MessageViewer::ObjectTreeParser otp(&source, &nodeHelper);  

Now we initialize the ObjectTreeParser. For that we need:

  • HtmlWriter, which receives the HTML output during rendering and does whatever the user wants with it (in our case it just saves it for later use - see htmlWriter.html()).

  • CSSHelper, which creates the header of the HTML including the CSS; it gives you the possibility to set the color schemes and fonts used for HTML rendering.

  • NodeHelper, which is where information is stored that needs to live longer (pointers to encrypted parts that are currently being decrypted asynchronously, or extra mail parts that are visible but not their own MIME part in the mail). NodeHelper also informs you when an async job has ended. At the moment we use neither the async mode nor mail-internal links, which is why the NodeHelper is a local variable here.

  • ObjectTreeSource, the settings object; here you can store whether decryption is allowed, whether you prefer HTML output, whether you like emoticons, ...

  • And last but not least the ObjectTreeParser (OTP) itself. It does the real work of parsing and rendering the mail :)

htmlWriter.begin(QString());  
otp.parseObjectTree(msg.data());  
htmlWriter.end();

return htmlWriter.html();  

After initializing the OTP we can render the mail. This is done with otp.parseObjectTree(msg.data());. Around that we need to tell the htmlWriter that HTML creation has begun, and that it has ended afterwards.

As you may have noticed, Kube has its own implementations of all of these objects except the ObjectTreeParser and the NodeHelper. This already makes libotp highly configurable for others' needs.

next steps

After the week of hacking, the current task is to push things upstream, so as not to create a fork, and to focus on one solution for rendering mails for both KMail and Kube. After upstreaming I will start to extract the libotp parts out of messageviewer (currently libotp is only another CMake target and not really separated) and make messageviewer depend on libotp. With that, libotp becomes an independent library used by both projects, and I can focus again on polishing libotp and messageviewer.

tl;dr;

Here you can see that Kube now renders your spam nicely:
[Screenshot: a rendered spam mail]

Like the spam says (and spam can't lie) - get the party started!

Kolab 16 at FOSDEM'16

by Paul Brown in Kolab Community - Kolab News at 10:54, Sunday, 31 January

The biggest European community-organised Open Source event is upon us and, this year, we at Kolab Systems have a very special reason to be there: we'll be presenting the new Kolab 16.1 [1] to the world during the meetup.

The development team has worked long and hard on this release, even longer and harder than usual. And that slog has led to several very interesting new features built into Kolab 16.1, features we are particularly proud of.

Take for example GUAM, our all-new, totally original, "IMAP-protocol firewall". Guam allows you to, for example, access Kolab from any client without having to see the special Kolab groupware folders, such as calendars, todos, contacts, and so on. As Guam is configured server-side, users do not have to do anything special on their clients.

Guam keeps users' inboxes clean, as it pipes the resource messages in the background only to the apps designated to deal with them. So a meeting scheduled by a project leader will only pop up in the calendar app, and a new employee recruited by HR will silently get added only to the contact roster, without the users ever accidentally seeing in their email the system-generated messages that prompt the changes.

As Guam is actually an IMAP proxy filter, something like a rule-based IMAP firewall (Aaron Seigo dixit), it is very flexible and allows you to do so much more. Come by our booth and find out how our developers have been using it and discuss your ideas with the people who actually built it.

Then there's MANTICORE. With Manticore we are taking our "collaborating in confidence" mantra to the next level. Manticore currently works only on documents, but ultimately it will bring collaborative and simultaneous editing to emails, notes and calendars as well, all without having to leave the Kolab web environment. Need to write an email with the input from a couple of colleagues? Instead of passing the text around, which is slow and error-prone (who hasn't ever sent the second to last version off by mistake?), just open it up with Manticore and edit the text all together, everybody at the same time. Interactive proofreading action!

Collaborative editing has arrived in Kolab 16.1

Finally we have implemented an OTP (One Time Password) authentication method to make logging into Kolab's web client even more secure. Every time a user goes to log in, you can have the system send a single-use code to her phone. The code is a random 6-digit number, which is only valid for a few minutes and changes every time.

If you're an admin who is tired of the security nightmare of passwords copied onto slips of paper stuck to monitors; sick of trying to convince users that "1234" is really not that clever a password; or fed up with hearing complaints every time you have to renew credentials because they have been hacked yet again; this is a feature you'll definitely want to activate.

Of course there's much more to Kolab 16.1, including optimised code, a sleeker design, and usability improvements all round. You'll be able to see all these new features in action at our booth during the show and you'll also be able to meet the developers and ask them as many questions as you like on how you can use Kolab in your own environment.

What's even better is that you can also come over to our booth and copy a full, ready-to-go version of Kolab as a virtual machine onto your USB thumbdrive. Take it home with you and use it at your leisure, because, that's the other thing, we are fusing both the Enterprise and Community versions of Kolab together. This means the exact same code base from the Enterprise, our corporate-grade groupware, gets released to Community, with new versions, with all features, coming out regularly every year.

But that's not the only user news we have. We'll also be presenting our new Kolab Community portal during the conference. Kolab Community is designed to be a place for developers, admins and users to cooperate in the creation of the most reliable and open groupware collaboration suite out there. On Kolab Community you will be able to interact with the developers at Kolab Systems and other users on modern, feature-rich forums and mailing lists. The portal also hosts the front end to our bug tracker and our Open Build Service that lets you roll your own Kolab packages, tailored to your needs.

For example, if you need a pocket Kolab server you can take anywhere, why not build it to run on a Raspberry Pi? In fact, that's another thing we'll be demoing at our booth. Bring along an SD card (8 GB at least) and we'll flash Kolab 16.1 for the Pi onto it for you. You'll get to try the system right there, and then take it away with you, fully ready to be deployed at home or in your office.

Useful Links

[1] Download Kolab 16.1: https://docs.kolab.org/installation-guide/index.html

[2] FOSDEM expo area: https://fosdem.org/2016/stands/

And just to prove how well all the Kolab-related tools work everywhere, we'll also be running Kontact, KDE's groupware client, on a Windows 10 box. Kontact will be accessing a Kolab 16.1 server (of course) for all its email, calendars, contacts, notes, and so on, proving that the two combined have nothing to envy in proprietary alternatives, regardless of the underlying platform. You'll also be able to get a first glimpse of Kube, our new generation email client.

Apart from all of the above, there'll also be Q&As, merch (including some dope tattoos), and special one-time Kolab Now offers (which are also somehow related to tattoos). You know: the works.

Discounts 4 Tats

Update: Tattoos get you discounts! Take a photo with a tattoo proving your love for Free Software and get a 30% discount on your ultra-secure Kolab Now online mail, file & groupware account. Get your tat-discount here.

Come visit us. We'll be in Building K, Level 1, as you go in, booth 4 on the left [2]; try out Kolab 16.1's new features live; get cool and useful stuff; join our community.

Sound like a plan?

More FOSDEM news: Kolab Systems' CEO, Georg Greve, and Collabora's General Manager, Michael Meeks, will be signing an agreement during the event to integrate CloudSuite into Kolab.

(Collabora are the guys who offer enterprise support for LibreOffice, built the LibreOffice on Android app, and created LibreOffice Online.)

Collabora's CloudSuite supports word-processing, spreadsheets, presentations

CloudSuite is Collabora's version of LibreOffice in the cloud and comes with built-in collaborative editing. It allows editing and accessing online documents that can also be shared easily across users. Whole teams can collaboratively edit text documents, spreadsheets, and presentations. You can access and modify the shared documents through a web interface, directly from the LibreOffice desktop suite, or even from Collabora's own Android LibreOffice app.

Of course CloudSuite supports the full range of document formats LibreOffice does. That includes, among many others, the ISO-approved native Open Document formats, and most, if not all, Microsoft Office formats.

What with Kolab also gaining collaborative editing of text documents (already available), and of emails, notes, calendars, and contacts over the next few months, you're probably seeing where this is going: by combining Kolab with CloudSuite, the aim is to integrate a suite of office tools, each and all of which, from the email client to the presentation editor, can be used collaboratively and online by all the users of a team. And we'll be doing it using exclusively free and open-source technologies and open standards.

You can already edit texts collaboratively from within Kolab

So, to summarize with a tongue twister: Kolab & Collabora collaborate on collaboration.

Kolab at FOSDEM 2016

by Aaron Seigo in aseigo at 20:04, Thursday, 28 January

Kolab is once again back at FOSDEM! Our booth is in the K hall, level 1, group A, #4. (What a mouthful!) We have LibreOffice as a neighbor this year, which is a happy coincidence, as we have some interesting news related to office documents to share with all Kolab users at FOSDEM. That's not the only reason to visit us, though!

We'll be announcing a new release of Kolab with some exciting new features, and showing it running on multiple systems, including a single-board Raspberry Pi. In fact, if you have a Pi of your own, bring an SD card and we'll be happy to flash it for you with Kolab goodness. Instructions and images will be made available online after FOSDEM as well, so don't worry about missing out if you can't make it to the booth.

We'll also be making a very special offer to all you Free software lovers: a special discount on Kolab Now! Drop by the booth to find out how you can take advantage of this limited time offer.

And last, but far from least, we'll be showing off the new Kolab Community website and forums, sharing the latest on Kolab devel, Roundcube Next and the new desktop client, Kube. Several members of the Kolab development team and community will be there. I'll be there as well, and really looking forward to it!

Oh, and yes, there is the matter of that announcement I hinted at in the first paragraph of this blog entry ... you really want to come by and visit us to find out more about it ... and if you can't, then definitely be watching the Kolab blogs and the software news over the next few days!

Kolab: Bringing All Of Us Together, Part 2

by Aaron Seigo in aseigo at 19:47, Thursday, 28 January

This is the second blog in a series about how we are working to improve the Kolab ecosystem. You can read the first installment about the new release strategy here. This time we are going to look at our online community infrastructure.

All the good things

There is quite a significant amount of infrastructure available to members of the Kolab community: a packaging system, online translation, a Phabricator instance where the source code and more is hosted, a comprehensive documentation site, mailing lists, IRC channels and blogs. We are building on that foundation, and one example of that is the introduction of Phabricator this past year.

Of course, it does not matter how good or numerous these tools are if people either do not find them or they are not the right sorts of tools people need. We took a look at what we have on offer, how it is being used, and how we could improve. The biggest answer we came up with was: revamp the kolab.org website and drop the wiki.

Introducing a new community website!

Today we turned the taps on a brand new website at kolab.org. Unlike the previous website, this one does not aim to sell people on what makes Kolab great; we already have other websites that do that pretty well. Instead, the new design focuses on making the community resources discoverable by putting them front and center.

In addition to upgrading the blog roll and creating a blog just for announcements and community news, we have created a set of web forums we are calling the Hub. Our design team will be using it to collaborate with users and other designers, and we invite developers, Kolab admins and users alike to discuss everything Kolab at the Hub.

Of course, we also took the opportunity to modernize the look, give it a responsive design, and reflect the new Kolab brand guidelines. But that is all just icing on the cake compared to the improved focus and new communication possibilities.

From here forward

We will be paying a lot of attention to community engagement and development in 2016, and this website, unveiled in time for FOSDEM, is a great starting point. We will be adding more over time, such as real-time commit graphs and the like, as well as taking your feedback on what would make it more useful to you. We are all looking forward to hearing from you! :)

Feel the FS love at FOSDEM #ILoveFS

by Aaron Seigo in Kolab Community - Kolab News at 15:15, Tuesday, 26 January

Well, it’s that special time of year again. A time when people can show their appreciation for the ones they love, that’s right, free software!

Free Software drives a huge number of devices in our everyday life. It ensures our freedom, our security, civil rights, and privacy. These values have always been at the heart of Kolab and what better way to say #ILoveFS than with the gift of Kolab Now!

To celebrate ‘I love Free Software Day’ we are offering a 30% discount* on all new Kolab Now accounts until the 14th February 2016.

So, how does this work?

Here’s the fun bit. Simply show your free software love by posting a picture on social media of your Kolab tattoo using #ILoveFS (available exclusively at FOSDEM), or simply share your Free Software contributions with us by email or in person. You can do that bit later if you like; for now, just head on over to kolabnow.com/ILoveFS and grab your new Kolab Now account. The offer ends 14th February 2016, so grab yours fast.

*The following Terms & Conditions apply
30% Discount is applicable for the first 6 months for new individual or group accounts only. 30% discount is applicable for the first month only on new hosting accounts. Discount will be applied to your new account within the first 30 days of signup. Offer ends 14 February 2016. Kolab Systems AG has the right to withdraw this offer at any time. Cash equivalent is not available.

Driving Akonadi Next from the command line

by Aaron Seigo in aseigo at 21:26, Monday, 28 December

Christian recently blogged about a small command line tool that added to the client demo application a bunch of useful functionality for interacting with Akonadi Next from the command line. This inspired me to reach into my hard drive and pull out a bit of code I'd written for a side project of mine last year and turn up the Akonadi Next command line to 11. Say hello to akonadish.

akonadish supports all the commands Christian wrote about, and adds:

  • piping and file redirect of commands for More Unix(tm)
  • able to be used in stand-alone scripts (#!/usr/bin/env akonadish style)
  • an interactive shell featuring command history, tab completion, configuration knobs, and more

Here's a quick demo of it I recorded this evening (please excuse my stuffy nose ... recovering from a Christmas cold):

We feel this will be a big help for developers, power users and system administrators alike; in fact, we could have used a tool exactly like this for Akonadi with a client just this month ... alas, this only exists for Akonadi Next.

I will continue to develop the tool in response to user need. That may include things like access to useful system information (user name, e.g.?), new Akonadi Next commands, perhaps even the ability to define custom functions that combine multiple commands into one call... it's rather flexible, all-in-all.

Adopt it for your own

Speaking of which, if you have a project that would benefit from something similar, this tool can easily be re-purposed. The Akonadi parts are all kept in their own files, while the functionality of the shell itself is entirely generic. You can add new custom syntax by adding new modules that register syntax which references functions to run in response. A simple command module looks like this:

namespace Example
{

bool hello(const QStringList &args, State &state)
{
    state.printLine("Hello to you, too!");
    return true; // report success
}

Syntax::List syntax()
{
    return Syntax::List() << Syntax("hello", QObject::tr("Description"), &Example::hello);
}

REGISTER_SYNTAX(Example)

}

Autocompletion is provided via a lambda assigned to the Syntax object's completer member:

sync.completer = &AkonadishUtils::resourceCompleter;

and sub-commands can be added by adding a Syntax object to the children member:

get.children << Syntax("debug", QObject::tr("The current debug level from 0 to 6"), &CoreSyntax::printDebugLevel);

Commands can be run in an event loop when async results are needed by adding the EventDriven flag:

Syntax sync("sync", QObject::tr("..."), &AkonadiSync::sync, Syntax::EventDriven);

and autocompleters can do similarly using the State object passed in, which provides commandStarted/commandFinished methods.
All in all, pretty straightforward. If there is enough demand for it, I could even make it load commands from a plugin that matches the name of the binary (think: ln -s genericappsh myappsh), allowing it to be used entirely generically with little fuss. *shrug* I doubt it will come to that, but these are the possibilities that float through my head as I wait for compiles to finish. ;)

For the curious, the code can be found here.

Initial sync of Akonadi from the command line

by Sandro Knauß in Decrypted mind at 10:55, Friday, 18 December

When you start with Kontact, you have to wait until the first sync of your mails with the IMAP or Kolab server is done. This is very annoying, because the first impression is that Kontact is slow. So why not run this first sync from a script, so that the data is already available when the user starts Kontact for the first time?

1. Set up Akonadi & Kontact

We need to add the required config files to a new user home. This is simply a matter of copying config files into the new user home; we just need to substitute the username, email address and password. Okay, that sounds quite easy, doesn't it? Oh wait - the password must be stored inside KWallet. KWallet can be accessed from the command line with kwalletcli. Unfortunately, we can only use KWallet files that are not encrypted with a password, because there is no way to enter the password with kwalletcli. Maybe pam-kwallet would be a solution; for Plasma 5 there is an official component for this, kwallet-pam, but I haven't tested it yet.

As an alternative to copying files around, we could have used KDE's Kiosk system. With that you are able to preseed the configuration files for a user, and you additionally get the possibility to roll out changes later, e.g. if the server address changes. But for a smaller setup this is kind of overkill.
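
In essence, that first step is plain text templating. A minimal sketch of the idea in Python (the template directory, placeholder tokens and target paths are made up for illustration; the real code lives in the repository linked at the end of this post):

import os

def setup_config_dirs(home, full_name, email, uid):
    # Copy each config template into the new user home, substituting
    # the user-specific values. Placeholder names are illustrative only.
    for name in os.listdir("templates"):
        with open(os.path.join("templates", name)) as src:
            content = (src.read()
                       .replace("@FULLNAME@", full_name)
                       .replace("@EMAIL@", email)
                       .replace("@UID@", uid))
        target = os.path.join(home, ".kde", "share", "config", name)
        target_dir = os.path.dirname(target)
        if not os.path.isdir(target_dir):
            os.makedirs(target_dir)
        with open(target, "w") as dest:
            dest.write(content)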

2. Start needed services

To start a sync, we first need Akonadi running, and Akonadi depends on a running D-Bus and kwalletd. KWallet refuses to start without a running X server and is not happy with just Xvfb.

3. Triggering the sync via DBus

Akonadi has a great D-Bus interface, so it is quite easy to trigger a sync and detect the end of it:

import gobject
import dbus
from dbus.mainloop.glib import DBusGMainLoop

# Called for every status update of the resource;
# status == 0 indicates that the sync has finished.
def status(status, msg):
    if status == 0:
        gobject.timeout_add(1, loop.quit)

DBusGMainLoop(set_as_default=True)
session_bus = dbus.SessionBus()

# Connect to the Kolab resource agent, subscribe to its status signal
# and trigger a synchronization.
proxy = session_bus.get_object('org.freedesktop.Akonadi.Resource.akonadi_kolab_resource_0', "/")
proxy.connect_to_signal("status", status, dbus_interface="org.freedesktop.Akonadi.Agent.Status")
proxy.synchronize(dbus_interface='org.freedesktop.Akonadi.Resource')

loop = gobject.MainLoop()
loop.run()

The status function receives all updates, and status == 0 indicates the end of a sync.
Other than that, it is just a matter of getting the session bus, triggering the synchronize method, and waiting until the loop ends.

4. Glue everything together

With all the parts in place, they can be glued together into a nice script. As the language I used Python; together with some syntactic sugar it is quite small:

config.setupConfigDirs(home, fullName, email, name, uid, password)

with DBusServer():
    logging.info("set kwallet password")
    kwalletbinding.kwallet_put("imap", "akonadi_kolab_resource_0rc", password)

    with akonadi.AkonadiServer(open("akonadi.log", "w"), open("akonadi.err", "w")):
        logging.info("trigger fullSync")
        akonadi.fullSync(akonadi_resource_name)

First we create the config files. Once they are in place, we need a D-Bus server; if it is not available it is started (and stopped after leaving the with statement). Then the password is inserted into KWallet and the Akonadi server is started. With Akonadi running, the fullSync is triggered.

You can find the whole thing at github:hefee/akonadi-initalsync

5. Testing

Now that we have a nice script, the last bit is that we want to test it. To have a fully controlled environment, we use Docker images for that: one image for the server and one with this script. As a base we use Ubuntu 12.04 and our OBS builds of Kontact.

Because we had already started with Docker images for other parts of the deployment of Kontact, I added them to the known repository github:/cmollekopf/docker

ipython ./automatedupdate/build.py #build kolabclient/percise  
python testenv.py start set1  #start the kolab server (set1)

start the sync:

% ipython automatedupdate/run.py
developer:/work$ cd akonadi-initalsync/  
developer:/work/akonadi-initalsync$ ./test.sh  
+ export QT_GRAPHICSSYSTEM=native
+ QT_GRAPHICSSYSTEM=native
+ export QT_X11_NO_MITSHM=1
+ QT_X11_NO_MITSHM=1
+ sudo setfacl -m user:developer:rw /dev/dri/card0
+ export KDE_DEBUG=1
+ KDE_DEBUG=1
+ USER=doe
+ PASSWORD=Welcome2KolabSystems
+ sleep 2
+ sudo /usr/sbin/mysqld
151215 14:17:25 [Warning] Using unique option prefix key_buffer instead of key_buffer_size is deprecated and will be removed in a future release. Please use the full name instead.  
151215 14:17:25 [Note] /usr/sbin/mysqld (mysqld 5.5.46-0ubuntu0.12.04.2) starting as process 16 ...  
+ sudo mysql --defaults-extra-file=/etc/mysql/debian.cnf
+ ./initalsync.py 'John Doe' doe@example.com doe Welcome2KolabSystems akonadi_kolab_resource_0
INFO:root:setup configs  
INFO:DBusServer:starting dbus...  
INFO:root:set kwallet password  
INFO:Akonadi:starting akonadi ...  
INFO:root:trigger fullSync  
INFO:AkonadiSync:fullSync for akonadi_kolab_resource_0 started  
INFO:AkonadiSync:fullSync for akonadi_kolab_resource_0 was successfull.  
INFO:Akonadi:stopping akonadi ...  
INFO:DBusServer:stopping dbus...  

To be honest, we need a few more quirks, because we need to set up the X11 forwarding into Docker. In this case we also want to run one MySQL server for all users rather than a MySQL server per user, which is why we need to start MySQL by hand and add a database that Akonadi can use. The real syncing begins with the line:

./initalsync.py 'John Doe' doe@example.com doe Welcome2KolabSystems akonadi_kolab_resource_0

Kolab @ Fosdem

by Aaron Seigo in aseigo at 23:21, Tuesday, 15 December

Kolab will once again have a booth at FOSDEM, that fantastic event held in Brussels at the end of January. Several Kolab developers and deployers (and generally fun people) will be there wandering the halls, talking about Kolab and looking to connect with and learn from all the other fantastic people and projects who are there. It's going to be a great event! Be sure to find us if you are attending ... and come prepared for awesome :)

Kolab: Bringing All Of Us Together, Part 1

by Aaron Seigo in aseigo at 18:09, Tuesday, 15 December

The new year is looming, so this seems like a good time to share some of what we are thinking about at Kolab Systems when it comes to the Kolab ecosystem. After careful examination of that ecosystem, we have put together some priority adjustments. These include:

  • Kolab release process improvements
  • kolab.org reboot
  • partner enablement

Each of these is a big, exciting and fundamental improvement in its own right, so I will cover each in its own blog entry, in which I will attempt to explain the challenges we see and what we are doing to address them. First up is the Kolab release process.

tl;dr

Kolab Enterprise as a software product is merging with the Kolab community edition. There will be a single Kolab software product open to all, because everyone needs Kolab, not only "enterprise" customers! Both the version selection and development process around Kolab will be substantially simplified as a direct result. Read on for the details!

Kolab Three Ways

Kolab currently comes in a few different "flavors": what you find in the source repositories and build servers; the Kolab Community Edition releases; and Kolab Enterprise. What is the difference?

Well, the code and raw package repos are essentially "serve yourself": you download it, you put it together. Not so easy. The Community Edition releases are easy to install and were being released every six months, but support is left to community members and there is no long-term release strategy for them. By contrast, you can purchase commercial support and services for Kolab Enterprise, and those releases are supported for a minimum of 5 years. The operating system platforms supported by each of these variants also vary, and moving between the Community Edition and Enterprise could at times be a bit of effort.

Yet they all come from the same source, with features and fixes flowing between them. However, the flow of those fixes, and where features landed first, was not standardized. Sometimes features would land in Enterprise first and then debut in the Community Edition; often it was the other way around. Where fixes would appear was similarly decided on a case-by-case basis.

The complex relationship can be seen in the timeline below:

[Timeline: how features and fixes flowed between the Kolab editions]

This has resulted in duplication of effort, confusion over which edition to use when and where, and not enough predictability. We've been thinking about this situation quite deeply over the past months and have devised a plan that we feel improves the situation across the board.

A Better Plan Emerges

Starting in 2016 we will focus our efforts on a single Kolab product release that combines the Q/A of Kolab Enterprise with the availability of the Kolab community edition. Professional services and support, optimized operating system / platform integration and long term updates will all remain available from Kolab Systems, but everyone will be able to install and use the same Kolab packages.

It also means that both fixes and features will land in a consistent fashion. Development will be focused on the master branches of the Kolab source repositories, which have been hosted for a while now on Phabricator, with open access to sprints and work boards. With our eyes and hands firmly on the main branches of the repositories, we will focus on bringing continuous delivery to them for increased quality.

Fixes and features alike will all flow into Kolab releases, bringing long desired predictability to that process, and Kolab Systems will continue to provide a minimum of 5 years of support for Kolab customers. This will also have the nice effect of making it easier for us to bring Kolab improvements live to Kolab Now.

These universal Kolab releases will be made available for all supported operating systems, including ones that the broader community elects to build packages for. This opens the way for the "enterprise-grade" Kolab packages on all operating systems, rather than "just" the community editions.

You can see how much clarity and simplicity this will bring to Kolab releases by comparing the diagram below with the previous one:

[Diagram: the unified Kolab release process]

You can read more about Kolab release and development at Jeroen van Meeuwen's blog: The Evolution of Kolab Development and Short- vs. Long-term Commitments.

eimap: because what the world needs is another IMAP client

by Aaron Seigo in aseigo at 12:23, Friday, 11 December

Erlang is a very nice fit for many of the requirements various components in Kolab have ... perhaps one of these days I'll write something more in detail about why that is. For now, suffice it to say that we've started using Erlang for some of the new server-side components in Kolab.

The most common application protocol spoken in Kolab is IMAP. Unfortunately there was no maintained, functional IMAP client written in Erlang that we could find which met our needs. So, apparently the world needed another IMAP client, this time written in Erlang. (Note: When I say "IMAP client" I do not mean a GUI for users, but rather something that implements the client-side of the IMAP protocol: connect to a server, authenticate, run commands, etc.)

So say hello to eimap.

Usage Overview

eimap is implemented as a finite state machine that is meant to run in its own Erlang process. Each instance of an eimap represents a single connection to an IMAP server which can be used by one or more other processes to connect, authenticate and run commands against the server.

The public API of eimap consists mostly of requests that queue commands to be sent to the server. These functions take the process ID (PID) to send the result of the command to, and an optional response token that will accompany the response. Commands in the queue are processed in sequence, and the server responses are parsed into nice normal Erlang terms, so one does not need to be concerned with the details of the IMAP message protocol. Details like selecting folders before accessing them or setting up TLS are handled automagically by eimap, which inserts the necessary commands into the queue for the user.

Here is a short example of using eimap:

ServerConfig = #eimap_server_config{ host = "192.168.56.101", port = 143, tls = false },
{ ok, Conn } = eimap:start_link(ServerConfig),
eimap:login(Conn, self(), undefined, "doe", "doe"),
eimap:get_folder_metadata(Conn, self(), folder_metadata, "*", ["/shared/vendor/kolab/folder-type"]),
eimap:logout(Conn, self(), undefined),
eimap:connect(Conn).

It starts an eimap process; queues up a login, a getmetadata and a logout command; then connects. The connect call could have come first, but it doesn't matter: when the connection is established, the command queue is processed. eimap exits automatically when the connection closes, making cleanup nice and easy. You can also see the response routing in each of the command functions, e.g. self(), folder_metadata, which means that the result of that GETMETADATA IMAP command will be sent to this process as { folder_metadata, ParsedResponse } once completed. This is typically handled in a handle_info callback in gen_server processes (and similar).
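
For illustration, receiving that response in a gen_server could look roughly like this (a sketch; the message shape follows the { folder_metadata, ParsedResponse } form described above):

%% The response token given when queueing the command arrives as a
%% plain Erlang message once the command completes.
handle_info({ folder_metadata, ParsedResponse }, State) ->
    io:format("folder metadata: ~p~n", [ParsedResponse]),
    { noreply, State };
handle_info(_Other, State) ->
    { noreply, State }.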

Internally, each IMAP command is implemented in its own module which contains at least a new and a parse function. The new function creates the string to send to the server for a given command, and parse does what it says, returning a tuple that tells eimap whether the command has completed, needs to consume more data from the server, or has encountered an error. This allows simple commands to be implemented very quickly, e.g.:

-module(eimap_command_compress).
-behavior(eimap_command).
-export([new/1, parse/2]).
new(_Args) -> <<"COMPRESS DEFLATE">>.
parse(Data, Tag) -> formulate_response(eimap_utils:check_response_for_failure(Data, Tag)).
formulate_response(ok) -> compression_active;
formulate_response({ _, Reason }) -> { error, Reason }.

There is also a "passthrough" mode which allows a user to use eimap as a pipe between it and the IMAP server directly, bypassing the whole command queueing mechanism. However, if commands are queued, eimap drops out of passthrough to run those commands and process their responses before returning to passthrough.

It is not a complicated design by any means, and that's a virtue. :)

Plans and more plans!

As we write more Erlang code for use with Kolab and IMAP in general, eimap will be increasingly used and useful. The audit trail system for groupware objects needs some very basic IMAP functionality; the Guam IMAP proxy/filter heavily relies on this; and future projects such as a scalable JMAP proxy will also be needing it. So we will have a number of consumers for eimap as time goes on.

While the core design is mostly in place, there are quite a few commands that still need to be implemented, which you can see on the eimap workboard. Writing commands is quite straightforward, as each goes into its own module in the src/commands directory and is developed with a corresponding test in the test/ directory; you don't even need an IMAP server, just the lovely (haha) relevant IMAP RFC. Once complete, add a function to eimap itself to queue the command, and eimap handles the rest for you from there. Easy, peasy.

I've personally been adding the commands that I have immediate use for, and will be generally adding the rest over time. Participation, feedback and patches are welcome!

guam: an IMAP session filter/proxy

by Aaron Seigo in aseigo at 21:59, Thursday, 10 December

These days, the bulk of my work at Kolab Systems does not involve writing code. I have been spending quite a bit of time on the business side of things (and we have some genuinely exciting things coming in 2016), on customer and partner interactions, as well as on higher-level technical design and consideration. So I get to roll around Roundcube Next, Kube (an Akonadi2-based client for desktop and mobile ... but more on that another time), Kolab server hardware pre-installs ... and that's all good and fun. Still, I do manage to write a bit of code most weeks, and one of the projects I've been working on lately is an IMAP filter/proxy called Guam.

I've been wanting to blog about it for a while, and as we are about to roll out version 0.4, I figured now is as good a time as any.

The Basics of Guam

Guam provides a simple framework to alter data being passed between an IMAP client and server in real time. This "interference" is done using sets of rules. Each port that Guam listens on has a set of rules with their own order and configuration. Rules start out passive and, based on the data flow, may elect to become active. Once active, a rule gets to peek at the data on the wire and may take whatever actions it wishes, including altering that data before it gets sent on. In this way rules may alter client messages as well as server responses; they may also record or perform other out-of-band tasks. The imagination is the limit, really.

Use Cases

The first practical use case Guam is fulfilling is the selective hiding of folders from IMAP clients. Kolab stores groupware data such as calendars, notes, tags and more in plain old IMAP folders. Clients that connect over IMAP to a Kolab server and are not aware of this get shown all those folders. I've even heard of users who saw these folders and deleted them, thinking they were not supposed to be there, only to then wonder where the heck their calendars went. ;)

So there is a simple rule called filter_groupware_folders that tries to detect whether the client is Kolab-aware by looking at the ID string it sends; if it does not look like a Kolab client, the rule goes about filtering out those groupware folders. Kolab continues on as always, and other IMAP clients do as well but simply do not see those special folders. Problem solved.

But Guam can be used for much more than this simple, if rather practical, use case. Rules could be written that prevent downloading of attachments from mobile devices, or accessing messages marked as top-secret when being accessed from outside an organization's firewall. Or they could limit message listings to just the most recent or unread ones and provide access to that as a special service on a non-standard port. They could round-robin between IMAP servers, or direct different users to different IMAP servers transparently. And all of these can be chained in whichever order suits you.

The Essential Workings

The two most important things to configure in Guam are the IMAP servers to be accessed and the ports to accept client connections on.

Listener configuration includes the interface and port to listen on, TLS settings, which IMAP backend to use and, of course, the rule set to apply to traffic. IMAP server configuration includes the usual host/port and TLS preferences, and the listeners refer to them by name. It's really not very complicated. :)

Rules are implemented in Guam as Erlang modules which implement a simple behavior (Erlangish for "interface"): new/1, applies/3, apply_to_client_message/3, apply_to_server_message/3, and optionally imap_data/3. The name of the module defines the name of the rule in the config: a rule named foobar would be implemented in a module named kolab_guam_rule_foobar.
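
To make that concrete, a do-nothing rule module might be shaped roughly like this. Note that this is a sketch only: the exported callback names and arities come from the list above, but the argument names and return conventions are assumptions for illustration, not taken from the Guam sources.

-module(kolab_guam_rule_noop).
-export([new/1, applies/3, apply_to_client_message/3, apply_to_server_message/3]).

%% NOTE: argument names and return shapes below are assumptions.
new(_Config) ->
    no_state.                         %% per-connection rule state

applies(_ClientData, _ServerData, State) ->
    { false, State }.                 %% stay passive for this connection

apply_to_client_message(Data, _Token, State) ->
    { Data, State }.                  %% pass client traffic through unchanged

apply_to_server_message(Data, _Token, State) ->
    { Data, State }.                  %% pass server traffic through unchanged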

... and for a quick view that's about all there is to it!

Under the hood

I chose to write it in Erlang because the use case is pretty much perfect for it: lots of simultaneous connections that must be kept separate from one another. Failure in any single connection (including a crash of some sort in the code) does not interfere with any other connection; everything is asynchronous while remaining simple (the core application is a bit under 500 lines of code); and Erlang's VM scales very well as you add cores. In other words: stability, efficiency, simplicity.

Behind the scenes, Guam uses an Erlang IMAP client library that I've been working on called eimap. I won't get any awards for creativity in the naming of it, certainly, but "erlang IMAP" does what it says on the box: it's IMAP in Erlang. That code base is rather larger than the Guam one, and is quite interesting in its own right: finite state machines! passthrough modes! commands as independent modules! async and multi-process! ooh! aaaaaah! sparkles! eimap is a very easy project to get your fingers dirty with (new commands can be implemented in well under 6 lines of code) and will be used by a number of applications in future. More in the next blog entry about that, however.

In the meantime, if you want to get involved, check out the git repo, read the docs in there and take a look at the Guam workboard.

This week in Kolab-Tech

by Mads Petersen in The Kolaborator at 12:41, Saturday, 03 October

It's always fun when your remote colleagues come to visit the office. It helps communication to be able to put a face to the name in the chat client - and to the voice on the phone.

Giles, our creative director, was visiting from London for the first days of the week, which made a lot of the work switch context to design and usability. As Giles is fairly new to the company, we also spent some time discussing a few of our internal processes and procedures with him. It is great to have him on board to fill a previously not-so-well-explored space with his broad experience.

The server development team kept themselves busy with a few Roundcube issues, and with a few issues we had in the new KolabNow dashboard. Additionally, work was done on the Roundcube-Next POC; we hope to have something to show on that front soon.

On the desktop side, we finalized sprint 201539 and delivered a new version of Kontact on Windows and on Linux. The Windows installer is named Kontact-E14-2015-10-02-12-35.exe, and as always it is available on our mirror.

This Sunday our datacenter is doing some maintenance. They do not expect any interruption, but be prepared for a bit of connection trouble on Sunday night.

On to the future..

Last week in Kolab-Tech

by Mads Petersen in The Kolaborator at 15:51, Monday, 28 September

As we had already started the week the previous Friday night (by shutting off the KolabNow cockpit and starting the big migration), it turned out to be a week all about (the bass) KolabNow.

Over the weekend we made a series of improvements to KolabNow that will improve the overall user experience with:

  • Better performance
  • More stable environment
  • Less downtime
  • Our ability to update the environment with a minimum of interruption for end users.

After the update there were, of course, a few issues that needed to be tweaked, but details aside, the weekend was a big success. Thanks to the OPS staff for doing the hard work.

One thing we changed with this update was the way users get notified when their accounts are suspended. Before this weekend, users with suspended accounts would still be able to log in and receive mail on KolabNow. After this update, users with suspended accounts are not able to log in. This of course led to a small breeze of users with suspended accounts contacting support with requests for the re-enabling of their accounts.

On the development side we were making progress on two fronts:

  • We are getting close to the end of the list of urgent Kontact defects. The second week of this sprint should get rid of that list. Our Desktop people will then get time to look forward again, and look at the next generation of Kolab Desktop Client.
  • We started experimenting with one (of perhaps more to come) POC for Roundcube-Next. We now need to start talking about the technologies and ideas behind that new product. More to follow on that.

Thank you for your interest - if you are still reading. :-)

This week in Kolab Tech

by Mads Petersen in The Kolaborator at 15:48, Friday, 18 September

Another week passed by, super fast - as we all know, time runs fast when you are having fun.

The client developers are on a roll. They have been hacking away at a defined bundle of issues in KOrganizer and Zanshin which have been annoying for users and have prevented some organizations from adopting the desktop client. This work will proceed during the next sprint - and most probably the sprint after that.

One of our collaborative editing developers took part in the ODF plugfest. According to his report, a lot of good experiences were had, a lot of contacts were made, and there was good feedback on the plans for the deep Kolab/Manticore integration.

Our OPS people were busy most of the week with preparations for this weekend's big KolabNow update. This is a needed overhaul of our background systems and software. As we now have the new hardware in place, and it has been running its test circles around itself, we can finally start applying many of the improvements that we have prepared for some time. This weekend is very much a backend update, but an important one, which will make it easier for us to apply more changes in the future with a minimal amount of interruption.

All y'all have a nice weekend now..

This week in Kolab tech..

by Mads Petersen in The Kolaborator at 11:01, Friday, 11 September

The week in development:

  • Our desktop people were spending time in Randa, a small town in the Swiss mountains, where they were discussing KDE related issues and hacking away together with like-minded people. Most probably they also got a chance or two for some social interaction.
  • Work continued on the Copenhagen (MAPI integration) project. Whereas it was easy to spot progress in the beginning, the details around folder permissions and configuration objects that are being worked out now are not as visible.
  • The Guam project (the scalable IMAP session and payload filter) is moving along as planned. The filter handling engine is in place. It is now being integrated into the main body of the system, and then work on the actual filter formulation can begin.
  • A few defects in Kolab on UCS were discovered at the beginning of the week. Those were investigated and are being fixed as I write this. Hopefully we will be able to push a new package for this product early next week.

In other news: The engineering people are working hard to prepare the backend systems for some interesting upcoming KolabNow changes. There will be more information about those changes in other more appropriate places.

The only thing left is to wish everyone a very nice weekend.

Last week @ Kolab Tech

by Mads Petersen in The Kolaborator at 14:00, Monday, 07 September

After a summer of going in and out of the super-hot Zurich office, this week finally brought some rain and a little chill. I can't wait for the snow to start.

The week started early and at full speed, as we had our hardware vendor visiting on Monday to replace a defective hypervisor. I sleep better at night knowing that everything is in order again.

A few of us jumped on a bus to the fair city of Munich to meet the techies at IT@M for a Kontact workshop; three days of intense desktop client talks, discussions and experiments. It was inspiring to see the work groups get together to resolve issues, do packaging on the LiMux platform and prepare pre-deployment configurations. A big value of the workshop was the opportunity to collect and consolidate a lot of end user experience. Luckily we also got time for a bit of a foretaste of the special Wiesn bier.

Aside from discussing the desktop clients, creating packages and listening to use cases, Christian finally found and resolved the issue that for a while had prevented me from installing the latest Kontact on my Fedora 22. Thanks Christian!

Kontact and GnuPG under Windows

by Sandro Knauß in Decrypted mind at 23:53, Wednesday, 02 September

Kontact has, in contrast to Thunderbird, integrated crypto support (OpenPGP and S/MIME) out of the box.
That means on Linux you can simply start Kontact and read encrypted mails (if you have already created keys).
After you select your crypto keys, you can immediately start writing encrypted mails. With that great user experience I never needed to dig further into the crypto stack.

[Screenshots: selecting crypto keys, steps 1 and 2]

But on Windows GnuPG is not installed by default, so I needed to dig into the whole world of crypto layers
that sit between Kontact and the part that actually does the de-/encryption.


Crypto Stack

Kontact uses a number of libraries that the team has written around GPGME.

The lowest-level one is gpgmepp, an object-oriented wrapper for GPGME, which lets us avoid having to write C code for KMail. Then we have libkleo, a library built on top of gpgmepp that KMail uses to trigger de-/encryption in the lower levels. GPGME is the only required dependency to compile Kontact with crypto support.

But this is not enough to send and receive encrypted mail with Kontact on Windows, as I mentioned earlier. There are still runtime dependencies that we need to have in place. Fortunately the runtime crypto stack is already packaged by the GPG4Win team. Simply installing it is still not enough to have crypto support, though. With GPG4Win, it is possible to select OpenPGP keys and to create and read encrypted mails, but unfortunately it doesn't work with S/MIME.

So I had to dig further into how GnuPG actually works.

OpenPGP is handled by the gpg binary, and for S/MIME we have gpgsm. Both are called directly from GPGME, using libassuan. Both applications then talk to gpg-agent, which is actually the only program that interacts with the key data. Both applications can be used from the command line, so it was easy to verify that they were working and that there were no problems with the GnuPG setup.

So first we started by creating keys (gpg --gen-key and gpgsm --gen-key) and then further tested what works with GPG4Win and what does not. We found a bug in GnuPG in the version we used, but it had already been closed in a newer version. Still Kontact didn't want to communicate with GPG4Win. The reason was a wrong standard path, preventing GPGME from finding gpgsm. With that fixed, we now have a working crypto stack under Windows.
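As a sketch of what that command-line verification can look like (the file names are arbitrary examples, and the key-creation steps are interactive):

# OpenPGP side: create a key, then sign and verify a test file
gpg --gen-key
echo "test message" > msg.txt
gpg --clearsign msg.txt        # writes msg.txt.asc
gpg --verify msg.txt.asc       # should report a good signature

# S/MIME side: create a certificate request and confirm that
# gpgsm can talk to gpg-agent at all
gpgsm --gen-key > request.p10  # interactive; emits a CSR
gpgsm --list-keys              # lists the certificates gpgsm knows

If both sides complete without complaint, the problem is not in the GnuPG setup itself but in the layers above it.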

But to be honest, there are more applications involved in a working crypto stack. First we need gpgconf and gpgme-w32-spawn to be available in the Kontact directory. gpgconf helps GPGME find gpg and gpgsm and is responsible for modifying the content of .gnupg in the user's home directory. Additionally, it informs you about changes in config files. gpgme-w32-spawn is responsible for creating the other needed processes.

For a UI where you can enter your password you need pinentry. S/MIME needs another agent that does the CRL/OCSP checks; this is done by dirmngr. In GnuPG 2.1, dirmngr is the only component that makes connections to the outside, so every request that requires the Internet goes via dirmngr.
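If you want to poke at this plumbing yourself, gpgconf can show which components are wired in and where it expects to find things; a quick sketch (output varies by installation):

gpgconf --list-components   # gpg, gpgsm, gpg-agent, dirmngr, pinentry, ...
gpgconf --list-dirs         # directories and config files gpgconf manages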

This is, in short, the crypto stack that needs to work together to give you working encrypted mail support.

We are happy that we now have a fully working Kontact under Windows (again!). There are rumours that Kontact worked under Windows with crypto support before, but unfortunately when we started, the crypto part was not working.

This work was done in the kolabsys branch, which is based on KDE Libraries 4. The next steps are to merge the changes over to make sure that the current master branch of Kontact, which uses KDE Frameworks 5, also works.

Randa

Coming up next week is the yearly Randa meeting, where we will have the chance to sit together for a week and work on the future of Kontact. These meetings help tremendously in injecting momentum into the project, and we have a variety of topics to cover to direct the development for the time to come (and of course a lot of stuff to actively hack on). If you’d like to contribute to that, you can help us with some funding. Much appreciated!

[Image: Randa meeting]

Kolab Now was first launched in January 2013, and we were anxious to find out: if someone offered a public cloud service for people who put their privacy and security first, a service that would not just re-sell someone else’s platform with some added marketing but did things right, would there be a demand for it? Would people choose to pay with money instead of their privacy and data? These past two and a half years have provided a very clear answer. Demand for a secure and private collaboration platform has grown in ways we could have only hoped for.

To stay ahead of demand we have undertaken a significant upgrade to our hosted solution that will allow us to provide reliable service to our community of users both today and in the years to come. This is the most significant set of changes we’ve ever made to the service, which have been months in the making. We are very excited to unveil these improvements to the world as we complete the roll-out in the coming weeks.

From a revamped and simplified sign-up process to a more robust directory
service design, the improvements will be visible to new and existing users
alike. Everyone can look forward to a significantly more robust and
reliable service, along with faster turnaround times on technical issues. We
have even managed to add some long-sought improvements many of you have been
asking for.

The road travelled

Assumptions are the root of all evil. Yet in the absence of knowledge of the future, sometimes informed assumptions need to be made. And sometimes the world just changes. It was February 2013 when MyKolab was launched into public beta.

Our expectation was that a public cloud service oriented towards full business collaboration, focusing on privacy and security, would primarily attract small and medium enterprises between 10 and 200 users. Others would largely elect to use the available standard domains. So we expected most domains to be in the 30-user realm, and a handful of very large ones.

That had implications for the way the directory service was set up.

In order to provide the strongest possible insulation between tenants, each domain would exist in its own zone within the directory service. You can think of this as dedicated installations on shared infrastructure instead of the single-domain public clouds that are the default in most cases. Or, to use a slightly less technical analogy, as the difference between terraced houses and apartments in a large apartment block.

So we expected some moderate growth, for which we planned to deploy some older hardware to provide adequate redundancy and resources, so there would be a steady showcase for how to deploy Kolab for the needs of Application and Internet Service Providers (ASP/ISP).

Literally on the very day we carried that hardware into the data centre, Edward Snowden and his revelations became visible to the world. It is a common quip that assumptions and strategies usually do not outlive their contact with reality. Ours did not even make it that far.

After nice, steady growth during the early months, MyKolab.com took us on a wild ride.

Our operations team managed to work miracles with the old hardware, in ways that often made me think this would be interesting learning material for future administrators. But efficiency only gets you so far.

Within a couple of months, however, we ended up replacing it in its entirety. To the largest extent, all of this happened without disruption to the production systems. New hardware was installed, services were switched over, old hardware was removed, and our team also managed to add a couple of urgently sought features to Kolab and deploy them onto MyKolab.com as well.

What we did not manage to make time for was re-working the directory service in order to adjust some of the underlying assumptions to reality. Especially the number of domains in relation to the number of users ended up dramatically different from what we initially expected. The result is a situation where the directory service has become the bottleneck for the entire installation – with a complete restart easily taking in the realm of 45 minutes.

In addition, that degree of separation translated into more restrictions on sharing data with other users, sometimes to an extent that users felt this was the lack of a feature, not a feature in and of itself.

Re-designing the directory service, however, carries implications for the entire service structure, including the user self-administration software and much more. And you want to be able to deploy this within a reasonable time interval and ensure the service comes back up better than before for all users.

On the highway to future improvements

So there is the re-design, the adaptation of all components, the testing, the migration planning, the migration testing and ultimately also the actual roll-out of the changes. That’s a lot of work. Most of which has been done by this point in time.

The last remaining piece of the puzzle was to increase hardware capacity in order to ensure there is enough reserve to build up an entire new installation next to existing production systems, and then switch over, confirm successful switching, and then ultimately retire the old setup.

That hardware was installed last week.

So now the roll-out process will go through the stages and likely complete some time in September. That’s also the time when we can finally start adding some features we’ve been holding back to ensure we can re-adjust our assumptions to the realities we encountered.

For all users of Kolab Now that means you can look forward to a much improved service resilience and robustness, along with even faster turnaround times on technical issues, and an autumn of added features, including some long-sought improvements many of you have been asking for.

Stay tuned.

Akonadi with a remote database

by Aaron Seigo in aseigo at 13:02, Friday, 14 August


The Kontact groupware client from the KDE community, which also happens to be the premier desktop client for Kolab, is "just" a user interface (though that seriously undersells its capabilities, as it still does a lot in that UI), and it uses a system service to actually manage the groupware data. In fact, that same service is used by applications such as KDE Plasma to access data; this is how calendar events end up being shown in the desktop clock's calendar for instance. That service (as you might already know) is called Akonadi.

In its current design, Akonadi uses an external[1] database server to store much of its data[2]. The default configuration is a locally-running MySQL server that Akonadi itself starts and manages. This can be undesirable in some cases, such as multi-user systems where running a separate MySQL instance for each and every user may be more overhead than desired, or when you already have a MySQL instance running on the system for other applications.

While looking into some improvements for a corporate installation of Kontact where the systems all have user directories hosted on a server and mounted using NFS, I tried out a few different Akonadi tricks. One of those tricks was using a remote MySQL server. This would allow this particular installation to move Akonadi's database-related I/O load off the NFS server and share the MySQL instance between all their users. For a larger number of users this could be a pretty significant win.

How to accomplish this isn't well documented, unfortunately, at least not that I could readily find. Thankfully I can read the source code and work with some of the best Akonadi and Kontact developers that currently work on it. I will be improving the documentation around this in the coming weeks, though.[3] Until then, here is how I went about it.

Configuring Akonadi

Note: as Dan points out in the comments below, this is only safe to do with a "fresh" Akonadi that has no data thus far. You'll want to first clean out (and possibly back up) all the data in $XDG_DATA_HOME/akonadi, as well as be prepared to do some cleaning in the Kontact application configs that reference Akonadi entities by id. (Another practice we aim to light on fire and burn in Akonadi Next.)

First, you want Akonadi to not be running. Close Kontact if it is running and then run akonadictl stop. This can take a little while, even though that command returns immediately. To ensure Akonadi actually is stopped, run akonadictl status and make sure it says that it is, indeed, stopped.
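In shell terms, the stop-and-verify sequence (plus the optional clean-out from the note above) looks roughly like this; the path assumes the usual $XDG_DATA_HOME default and is an example, not a prescription:

# Stop the Akonadi server; the command returns immediately,
# but the actual shutdown can take a while
akonadictl stop

# Re-run this until it reports the server as stopped
akonadictl status

# Optional, for a "fresh" Akonadi as per the note above:
# back up and clear the local Akonadi data
mv ~/.local/share/akonadi ~/.local/share/akonadi.backup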

Next, start the Akonadi control panel. The command line approach is kcmshell4 kcm_akonadi_resources, but you can also open the command runner in Plasma (Alt+F2 or Alt+Space, depending) and type in akonadi to get something like this:

[Screenshot: the Plasma command runner listing Akonadi Configuration]

It's the first item listed, at least on my laptop: Akonadi Configuration. You can also go the "slower" route and open System Settings and either search for akonadi or go right into the Personal Information panel. No matter how you go about it, you'll see something like this:

[Screenshot: the Akonadi Configuration module]

Switch to the Akonadi Server Configuration tab and disable the Use internal MySQL server option. Then you can go about entering a hostname. This would be localhost for MySQL[7] running on the same machine, or an IP address or domain name that is reachable from the system. You will also need to supply a database name[4] (which defaults to akonadi), username[5] and password. Clear the Options line of text, and hit the ol' OK button. That's it.

[Screenshot: the Akonadi Server Configuration tab with remote database settings]

Assuming your MySQL is up and running and the username and password you supplied are correct, Akonadi will now be using a remote MySQL database. Yes, it is that easy.
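Under the hood, those settings land in Akonadi's server config file, which you can also inspect or (with Akonadi stopped) edit directly. The snippet below is a sketch from memory, so the exact key names may differ between Akonadi versions, and the host, database name and credentials are placeholders:

# Akonadi's server settings live in an INI-style file:
cat ~/.config/akonadi/akonadiserverrc
# [%General]
# Driver=QMYSQL
#
# [QMYSQL]
# Host=db.example.com    <- the remote MySQL server
# Name=akonadi           <- the database name
# User=akonadi           <- a MySQL account, not a system account
# Password=secret
# StartServer=false      <- "Use internal MySQL server" disabled
# Options=               <- cleared, as described above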

Caveats

In this configuration, the limitations are twofold:

  • network quality
  • local configuration is now tied to that database

Network quality is the biggest factor. Akonadi can send a lot of database queries, and each of those results in a network roundtrip. If your network latency for a roundtrip is 20ms, for instance, then you are pretty well hard-limited to 50 queries per second (1000ms ÷ 20ms). Given that Akonadi can issue several queries for an item during initial sync, this can result in quite slow initial synchronization performance on networks with high latency.[6]

Past latency, bandwidth is the other important factor. If you have lots of users or just tons of big mails, consider the network traffic incurred in sending that data around the network.

For a typical, even semi-modern network in an office environment, however, the network should not be a big issue in terms of either latency or bandwidth.

The other item to pay attention to is that the local configuration and file data kept outside the database by Akonadi will now be tied to the contents of that database, and vice versa. So you cannot simply set up a single database in a remote database server and then connect to it simultaneously from multiple Akonadi instances. In fact, I will guarantee you that this will eventually screw up your data in unpleasant ways. So don't do it. ;)

In an office environment where people don't move between machines, and/or when the user data is stored centrally as well, this isn't an issue. Otherwise, create one database for each device you expect to connect. Yes, this means multiple copies of the data, but it will work without trashing your data, and that's the more important thing.

How well does it work?

Now for the Big Question: Is this actually practical and safe enough for daily use? I've been using this with my Kolab Now account since last week. To really stretch the realm of reality, I put the MySQL instance on a VM hosted in Germany. In spite of forcing Akonadi to trudge across the public internet (and over wifi), so far, so good. Once through a pretty slow initial synchronization, Kontact generally "feels" on par with and often even a bit snappier than most webmail services that I've used, though certainly slower than a local database. In an office environment, however, I would hope that the desktop systems have better network than "my laptop on wifi accessing a VM in Germany".

As for server load, for one power user with a ton of email (my life seems to revolve around email much of the time) it is entirely negligible. MySQL never budged much above 1% CPU usage during my monitoring of it, and after sync was usually just idling.

I won't be using this configuration for daily use. I still have my default-configured Akonadi as well, and that is not only faster but travels with my laptop wherever it is, network or not. Score one for offline access.

Footnotes

1: If you are thinking something along the lines of "the real issue is that it uses a database server at all", I would partially agree with you. For offline usage, good performance, and feature consistency between accounts, a local cache of some sort is absolutely required. So some local storage makes sense. A full RDBMS carries more overhead than truly necessary and SQL is not a 100% perfect fit for the dataset in question. Compared to today, there were far fewer options available to the Akonadi developers a decade ago when the Akonadi core was being written. When the choice is between "not perfect, but good enough" and "nothing", you usually don't get to pick "nothing". ;) In the Akonadi Next development, we've swapped out the external database process and the use of SQL for an embedded key/value store. Interestingly, the advancements in this area in the decade since Akonadi's beginning were driven by a combination of mobile and web application requirements. That last sentence could easily be unpacked into a whole other blog entry.

2: There is a (configurable) limit to the size of payload content (e.g. email body and attachments) that Akonadi will store in the database which defaults to 4KiB. Anything over that limit will get stored as a regular file on the file system with a reference to that file stored in the database.

3: This blog entry is, in part, a way to collect my thoughts for that documentation.

4: If the user is not allowed to create new databases, then you will need to pre-create the database in MySQL.

5: The user account is a MySQL account, not your regular system user account ... unless MySQL is configured to authenticate against the same user account information that system account login uses, e.g. PAM / LDAP.

6: Akonadi appears to batch these queries into transactions, per folder being sync'd or every 100 emails, whichever comes first, so if you are watching the database during sync you will see data appear in batches. This can be done pretty easily with an SQL statement like select count(*) from PartTable; Divide this number by 3 to get the number of actual items, time how long it takes for a new batch to arrive, and you'll quickly have your performance numbers for synchronization. (A shell sketch of this follows after the footnotes.)

7: That same dialog also offers options for things other than MySQL. There are pros and cons to each of the options in terms of stability and performance. Perhaps I'll write about those in the future, but this blog entry with its stupid number of footnotes is already too long. ;)
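As a practical aside to footnote 6: watching those batches arrive can be done with something along these lines, where the host and credentials are placeholders:

# Poll the item-part count every 10 seconds during a sync;
# the count should jump in batches rather than grow smoothly
watch -n 10 'mysql -h db.example.com -u akonadi -psecret akonadi -e "select count(*) from PartTable;"'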

Bringing together an alliance that will liberate our future web and mobile collaboration was the most important motive behind our launching the Roundcube Next campaign at the 2015 Kolab Summit. This goal we reached fully.

There is now a group of some of the leading experts for messaging and collaboration in combination with service providers around the world that has embarked with us on this unique journey:

bHosted

Contargo

cPanel

Fastmail

Sandstorm.io

sys4

Tucows

TransIP

XS4ALL

The second objective for the campaign was to build enough acceleration to allow two or three people to focus on Roundcube Next over the coming year. That goal we reached partially. There is enough to get us started and go through the groundwork, but not enough for all the bells and whistles we would have loved to go for. To a large extent that’s because we have a lot of imagination when it comes to bells and whistles.

Roundcube Next - The Bells and Whistles

But perhaps it is a good thing that the campaign did not complete all the way into the stretch goals.

Since numbers are part of my responsibility, allow me to share some with you to give you a first-hand perspective of being inside an Indiegogo Campaign:


Roundcube Next Campaign Amount    $103,541.00    100.00%
Indiegogo Cost                     -$4,141.64      4.00%
PayPal Cost                        -$4,301.17      4.15%
Remaining Amount                   $95,098.19     91.85%

So by the time the money was in our PayPal account, we were down 8.15%.

The reason for that is simple: instead of transferring the complete amount in one transaction, which would have incurred only a single transaction fee, Indiegogo transferred it individually per contribution, which means PayPal gets to extract its per-transaction fee. I assume the rationale behind this is that PayPal may have acted as the escrow service and would have credited users back in case the campaign goal had not been reached. Given our transactions were larger than average for crowdfunding campaigns, the percentage for other campaigns is likely going to be higher. It would seem this can easily go even beyond the 5% that you see quoted on a variety of sites about crowdfunding.

But it does not stop there. Indiegogo did not allow us to run the campaign in Swiss Francs, and PayPal forces transfers into our primary currency, resulting in another fee for conversion. On the day the Roundcube Next Campaign funds were transferred to PayPal, XE.com listed the exchange rate as 0.9464749579 CHF per USD.

                                   USD            CHF              % of total
Roundcube Next Campaign Amount     $103,541.00    SFr. 97,998.96   100.00%
Remaining at PayPal                $95,098.19     SFr. 90,008.06    91.85%
Final at bank in CHF               $92,783.23     SFr. 87,817.00    89.61%

So now we’re at 10.39% in fees, of which 4% go to Indiegogo for their services. A total of 6.39% went to PayPal. Not to mention this is before any t-shirt is printed or shipped, and there is of course also cost involved in creating and running a campaign.

The $4,141.64 we paid to Indiegogo is not too bad, I guess, although their service was shaky and their support non-existent. I don’t think we ever got a response to our repeated support inquiries over a couple of weeks. And we experienced multiple downtimes of several hours, which were particularly annoying during the critical final week of the campaign, where we can be sure to have lost contributions.

PayPal’s overhead was $6,616.27 – the equivalent of another Advisor to the Roundcube Next Campaign. That’s almost 60% more than the cost for Indiegogo, which seems excessive and reminds me of one of Bertolt Brecht’s more famous quotes.

But of course you also need to add the effort for the campaign itself, including preparation, running and perks. Considering that, I am no longer surprised that many of the campaigns I see appear to be marketing instruments to sell existing products that are about to be released, and less focused on innovation.

In any case, Roundcube Next is going to be all about innovation. And Kolab Systems will continue to contribute plenty of its own resources, as we have been doing for Roundcube and Roundcube Next, including a world-class Creative Director and UI/UX expert who is going to join us a month from now.

We also remain open to others to come aboard.

The advisory group is starting to constitute itself now, and will be taking some decisions about requirements and underlying architecture. Development will then begin and continue well into next year. So there is time to engage even in the future. But many decisions will be made in the first months, and you can still be part of that as Advisor to Roundcube Next.

It’s not too late to be part of the Next. Just drop a message to contact@kolabsystems.com.

"... if nothing changes"

by Aaron Seigo in aseigo at 17:41, Friday, 17 July

I try to keep memory of how various aspects of development were for me in past years. I do this by keeping specific projects I've been involved with fresh in my memory, revisiting them every so often and reflecting on how my methods and experiences have changed in the time since. This allows me to wander backwards 5, 10, 15, 20 years in the past and reflect.

Today I was presenting the "final" code-level design for a project I've been tasked with: an IMAP payload filter for use with Kolab. The best way I can think to describe it is as a protocol-level firewall (of sorts) for IMAP. The first concrete use case we have for it is to allow non-Kolab-aware clients (e.g. Thunderbird) to connect to a Kolab server and see only the mail folders, implying that the groupware folders are filtered out of the IMAP session. There are a large number of other use case ideas floating about, however, and we wanted to make sure that we could accommodate those in future by extending the codebase. While drawing out on the whiteboard how I planned for this to come together, along with a break-out of the work into two-week sprints, I commented in passing that it was actually a nicely simple program.

In particular, I'm quite pleased that the "filter groupware folders" feature will actually be implemented quite late in the project, as a very simple and very isolated module that sits on top of general-use scaffolding for real-time manipulation of an IMAP stream.

When I arrived back at my desk, I took a moment to reflect on how I would have perceived the same project earlier in my career. One thing that sprang out at me was that the shape of the program was very clear in my head. Roll back a decade and the details would have been much more fuzzy. Roll back 15 years and it probably would have been quite hand-wavy at the early stages. Today, I can envision a completed codebase.

If someone had presented that vision to me 10 or 15 years ago, I would have accepted it quite happily ("Yes! A plan!"). Today, I know that plan is a lie in much the same way as a 14-day weather report is: it is the best we can say about the near future from our knowledge of today. If nothing changes, that's what it will be. Things always change, however. This is one of life's few constants.

So what point is there to being able to see an end point? That's a good question, and I have to say that I never attempted to develop the ability to see a codebase in this amount of detail before writing it. It just sort of happened with time and experience, one of the few bonuses of getting older. ;) As such, one might think that since the final codebase will almost certainly not look exactly like what is floating about in my head, this is not actually a good thing to have at all. Could it perhaps lock one mentally into a path which can be realized, but which when complete will not match what is there?

A lot of modern development practice revolves around the idea of flexibility. This shows up in various forms: iteration, approaching design in a "fractal" fashion, implementing only what you need now, etc. A challenge inherent in many of these approaches is growing short-sighted. So often I see projects switch data storage systems, for instance, as they run into completely predictable scalability, performance or durability requirements over time. It's amazing how much developer time is thrown away simply by misjudging at the start what an appropriate storage system would be.

This is where having a long view is really helpful. It should inform the developer(s) about realistic possible futures which can eliminate many classes of "false starts" right at the beginning. It also means that code can be written with purpose and confidence right from the start, because you know where you are headed.

The trick comes in treating this guidance as the lie it is. One must be ready and able to modify that vision continuously to reflect changes in knowledge and requirement. In this way one is not stuck in an inflexible mission while still having enough direction to usefully steer by. My experience has been that this saves a hell of a lot of work in the long run and forces one to consider "flexible enough" designs from the start.

Over the years I've gotten much better at "flexible enough" design, and being able to "dance" the design through the changing sea of time and realities. I expect I will look back in 5, 10, 15 and 20 years and remark on how much I've learned since now, as well.

I am reminded of steering a boat at sea. You point the vessel to where you want to go, along a path you have in your mind that will take you around rocks and currents and weather. You commit to that path. And when the ocean or the weather changes, something you can count on happening, you update your plans and continue steering. Eventually you get there.

Roundcube Next: The Next Steps

by Aaron Seigo in aseigo at 15:17, Friday, 03 July


The crowdfunding campaign to provide funding and greater community engagement around the refactoring of Roundcube's core to give it a secure future has just wrapped up. We managed to raise $103,531 from 870 people. This obviously surpassed our goal of $80,000, so we're pretty ecstatic. This is not the end, however: now we begin the journey to delivering a first release of Roundcube Next. This blog entry outlines some of that path forward.


Perks

The most obvious thing on our list is to get people's t-shirts and stickers out to them. We have a few hundred of them to print and ship, and it looks like we may be missing a few shipping addresses, so I'll be following up with those people next week. Below is a sneak peek of what the shirts might look like. We're still working out the details, so they may look a bit different than this once they come off the presses, but this should give you an idea. We'll be in touch with people for shirt sizes, color options, etc. in the coming week.

[Image: sneak peek of the Roundcube Next t-shirt design]

Those who elected for the Kolaborator perk will be notified by email about how to redeem their free months on Kolab Now. Of course, everyone who elected for the in-application-credits mention will get that in due time as well. We've got you all covered! :)

Note that it takes a couple of weeks for Indiegogo to get the funds to us, and we need to wait on that before confirming our orders and shipping the physical perk items.

Roundcube Backstage

We'll be opening the Roundcube Backstage area in the ~2 weeks after wrap-up completes next week. This will give us enough time to create the Backstage user accounts and get the first set of content in place. We will be using the Discourse platform for discussions and for posting our weekly Backstage updates. I'm really looking forward to reading your feedback there, answering questions, contemplating the amazing future that lies ahead of us, ...

The usual channels of Roundcube blogging, forums and mailing lists will of course remain in use, but the Backstage will see all sorts of extras and closer direct interaction with the developers. If you picked up the Backstage perk, you will get an email next week with information on when and where you can activate your account.

Advisory Committee

The advisory committee members will also be getting an email next week with a welcome note. You'll be asked to confirm who the contact person should be, and they'll get a welcome package with further information. We'll also want some information for use in the credits badge: a logo we can use, a short description you'd like to see with that logo describing your group/company, and the web address we should point people to.

The Actual Project!

The funds we raised will cover getting the new core in place with basic email, contacts and settings apps. We will be able to adopt JMAP into this and build the foundations we so desperately need. The responsive UI that works on phones, tablets and desktop/laptop systems will come as a result of this work as well, something we are all really looking forward to.

Today we had an all-hands meeting to take our current requirements, mock-ups and design docs and reflect on how the feedback we received during the campaign should influence those. We are now putting all this together in a clear and concise form that we can share with everyone, particularly our Advisory Committee members as well as the Backstage area. This will form the basis for our first round of stakeholder feedback, which I am really looking forward to.

We are committed to building the most productive and collaborative community around any webmail system out there, and these are just our first steps. That we have the opportunity here to work with the likes of Fastmail and Mailpile, two entities that one may have thought of as competitors rather than possible collaborators, really shows our direction in terms of inclusivity and looking for opportunities to collaborate.

Though we are at the end of this crowdfunding phase, this is really just the beginning, and the entire team here isn't waiting a moment to get rolling! Mostly because we're too excited to do anything else ;)


Software is a social endeavour. The most important advantage of Free Software is its community, because the best Open Source is built by a community of contributors. Contribution is the single most important currency and the differentiation between users and community. You want to be part of that community, at least by proxy, because like any community, its members spend time together, exchange ideas, and create cohesion that translates into innovation, features and best practices.

We create nothing less than a common vision of the future.

By the rules of our community, anyone can take our software and use it, extend it, distribute it. A lot of value can be created this way and not everyone has the capabilities to contribute. Others choose not to contribute in order to maximise their personal profits. Short of actively harming others, egoism, even in its most extreme forms, is to be accepted. That is not to say it is necessarily a good idea for you to put the safeguarding of your own personal interests into the hands of an extreme egoist. Or that you should trust in their going the extra mile for you in all the places that you cannot verify.

That is why the most important lesson for non-developers is this: Choose providers based on community participation. Not only are they more likely to know early about problems, putting them in a much better position to provide you with the security you require. They will also ensure you will have a future you like.

Developers know all this already, of course, and typically apply it at least subconsciously.

Growing that kind of community has been one of the key motives for launching Roundcube Next, which is now coming close to closing its phase of bringing together its key contributors. Naturally everyone had good reasons to get involved, as recently covered on Venturebeat.

Last night Sandstorm.io became the single greatest contributor to the campaign in order to build that better future together, for everyone. Over the past weeks, many other companies, some big, some small, have done the same.

Together, we will be that community that will build the future.

Roundcube Next crowdfunding success and community

by Aaron Seigo in aseigo at 21:36, Monday, 29 June


A couple days ago, the Roundcube Next crowdfunding campaign reached our initial funding goal. We even got a piece on Venture Beat, among other places. This was a fantastic result and a nice reward for quite a bit of effort on the entire team's part.

Reaching our funding goal was great, but for me personally the money is secondary to something even more important: community.

You see, Roundcube has been an Internet success for a decade now, but when I sat down to talk with the developers about who their community was and who was participating in it, there wasn't as much to say as one might hope for such a significant project used by that many people.

Unlike the free software projects born in the 90s, many projects these days are not very community focused. They are often much more pragmatic, but also far less idealistic. This is not a bad thing, and I have to say that the focus many of them have on quality (of various sorts) is excellent. There is also a greater tendency to have a company founded around them, and a greater tendency to be hosted on the mostly-proprietary Github system with little in the way of community connection other than pull requests. Unlike the Free software projects I have spent most of my time with, these projects hardly try at all to engage with people outside their core team.

This lack of engagement is troubling. Community is one of the open source[1] methodology's greatest assets. It is what allows mutual interests to create a self-reinforcing cycle of creation and support. Without it, you might get a lot of software (though you just as well might not), but you are quite unlikely to get the buy-in, participation, and thereby the amplifiers and sustainability of the open source of the pre-Github era.

So when we designed the Roundcube Next campaign, we positioned no less than 4 of the perks to be participatory. There are two perks aimed at individual backers (at $75 and $100) which get those people access to what we're calling the Backstage Pass forums. These forums will be directed by the Roundcube core team, and will focus on interaction with the end users and people who host their own instance of Roundcube. Then we have two aimed at larger companies (at $5,000 and $10,000) who use Roundcube as part of their services. Those perks gain them access to Roundcube's new Advisory Committee.

So while these backers are helping us make Roundcube Next a reality, they are also paving a way to participation for themselves. The feedback from them has been extremely good so far, and we will build on that to create the community Roundcube deserves and needs. One that can feed Roundcube with all the forms of support a high profile Free software product requires.

So this crowdfunding campaign is really just the beginning. After this success, we'll surely be doing more fundraising drives in the future, and we'd still love to hit our first stretch goal of $120,000 ... but even more vitally, this campaign is allowing us to draw closer to our users and deployers, and them to us, until, one hopes, there is only an "us": the people who make Roundcube happen together.

That we'll also be delivering the most kick-ass-ever version of Roundcube is pretty damn exciting, too. ;)

p.s. You all have 3 more days to get in on the fun!


[1] I differentiate between "Free software" as a philosophy, and "open source" as a methodology; they are not mutually exclusive, but they are different beasts in almost every way, most notably how one is an ideology and the other is a practice.

Riak KV, Basho and Kolab

by Aaron Seigo in aseigo at 10:22, Saturday, 27 June


As I have mentioned in earlier blog entries, Kolab Enterprise has gained data loss prevention (DLP) functionality this year that goes above and beyond what one tends to find in other groupware products. Kolab's DLP is not just a back-up system that copies mails and other objects to disk for later restore; it actually creates a history of every groupware object in real-time that can later be examined and restored from. This will eventually lead to some very interesting business intelligence features.

The storage system for the Kolab DLP system is Basho's industry-leading distributed NoSQL database, Riak KV. (The "KV" stands for key/value.) We chose Riak KV because it scales naturally (it is designed to be run as a cluster of nodes by default), is robust by design (CAP Theorem ftw), and is dead simple to deploy on development and production machines alike. A further key factor for us is that Basho provides proven enterprise-grade support for its line of Riak products. This was a requirement for us as we need to provide enterprise-grade support for the entire Kolab Enterprise stack.

(It was a nice coincidence that both Riak and core parts of Kolab's DLP system are written in Erlang. ;)

I sat down with Manu Marchel, Managing Director for EMEA at Basho Technologies Inc., recently for a mutual interview. You can read my interview on the Basho blog (I'll update this entry with a link when it is published); here is a transcript of my conversation with Manu:

NoSQL is quite a new technology in the Big Data space. Many people might have heard about things like Hadoop, but how does NoSQL fit in? Could you give everyone the quick cheat sheet on what NoSQL databases are and, specifically, what Riak KV, your key-value NoSQL database, is?

NoSQL databases are the new generation of databases, designed to address the needs of enterprises to store, manage, analyse and serve the ever-increasing amounts of unstructured data that make up over 80% of all data being created nowadays in public clouds and private infrastructures. Apache Hadoop has done a great job of handling batch analytics use cases at massive scale for unstructured data, what I would call exploratory or discovery analytics. What NoSQL databases like Riak do in comparison is help organisations manage their active data workloads as well, providing near real-time operational analytics at scale. Most importantly, most businesses need scalability, availability and fault tolerance as core requirements for their current and future application architectures, and these are deciding factors for NoSQL over traditional relational databases. NoSQL databases started out as one of four types: key-value, column store, document and graph, but nowadays they are becoming multi-model, whereby for example Riak can efficiently handle key-value data but also documents as well as log/time-series data, as demonstrated by our wide range of customers, including Kolab Systems.

Riak KV is the most widely adopted NoSQL key-value database, with scalability, high availability, fault tolerance and operational simplicity as its key properties. It is used most often for mission-critical use cases and works great for handling user data, session data, profile data, social data, real-time data and logging data. It provides near real-time analytics with its secondary indexes, search through Solr, in-situ Map/Reduce and soon-to-come Apache Spark support. Finally, its multi-data-center replication capability makes it easy to ensure business continuity, geo-locate data for low-latency access across continents, or segregate workloads to ensure very reliable low latency.

Riak KV is known for its durability; it's part of the reason we chose it for Kolab's DLP system. Could you give us some insight into how Riak KV achieves this?

Hardware does fail, and when it does your IT infrastructure needs to be able to cope, and your systems must continue to operate while getting the resources back online as soon as possible. Riak KV was designed to eliminate the impact of the unexpected. Even if network partitions or hardware failures cause unanticipated outages, Riak KV can still read and write your data. This means you never lose data even when part of the system goes down. For Kolab customers, it means they have the security of knowing that the data loss prevention and auditing they are paying for is backed by the best system available to deliver on this promise.

Availability seems to be a very important thing for databases in today’s digital age. How is Riak providing this key feature to Kolab and how does this enhance the Kolab offering?

Simply, Riak KV is designed to intelligently replicate and retrieve data making sure that applications based on the database are always available. Scalability also comes into play here as well. Unlike traditional databases, Riak is designed to respond to the billions of data points and terabytes of data that are being produced -- often in real-time -- as it is able to scale in a near linear fashion to give Kolab the best possible performance. Ultimately this means that Kolab’s application is always available so as an end-user you don’t experience any system outages no matter how busy or active your users are.

We integrate Riak KV with Kolab’s data loss prevention system to store groupware object histories in real time for auditing and roll-back if needed. Is this unique?

Yes! This is a great example of two great technologies working together to provide an excellent customer experience. Combining the power of Riak KV’s high availability, fault tolerance, and scalability with Kolab’s data loss prevention system means that you have an incredibly strong and powerful system.

Basho is a really unique name for a technology company - is there any history or background to it?

Thank you, we really like our name too. Basho’s name was inspired by the real-life Matsuo Basho (1644–1694), who is considered by many to be Japan's most renowned and revered writer of haiku. Haiku are known for their balance of lines and syllables, where the simplicity of the structure is important. This is a founding, guiding principle that Riak KV is based on, as operational simplicity is core to our architecture, eliminating the need for mindless manual operations, as data can be automatically and uniformly distributed.

To see the partnership of Basho's Riak and Kolab Enterprise in action together, come see us in Munich at the TDWI European Conference 22-24th June. We'll be in a booth showing both Riak KV and Kolab Enterprise, and will be happy to answer your questions!

If you are a user of Roundcube, you want to contribute to roundcu.be/next. If you are a provider of services, you definitely want to get engaged and join the advisory group. Here is why.

Free Software has won. Or has it? Linux is certainly dominant on the internet. Every activated Android device is another Linux kernel running. At the same time we see a shift towards “dumber” devices which are in many ways more like thin clients of the past. Only they are not connected to your own infrastructure.

Alerted by the success of Google Apps, Microsoft has launched Office 365 to drive its own transformation from a software vendor into a cloud provider. Amazon and others have also joined the race to provide your collaboration platform. The pull of these providers is already enormous. Thanks to networking effects, economies of scale, and ability to leverage deliberate technical incompatibilities to their advantage, the drawing power of these providers is only going to increase.

Open Source has managed to catch up to the large providers in most functions, bypassing them in some, being slightly behind in others. Kolab has been essential in providing this alternative especially where cloud based services are concerned. Its web application is on par with Office 365 and Google Apps in usability, attractiveness and most functions. Its web application is the only fully Open Source alternative that offers scalability to millions of users and allows sharing of all data types in ways that are superior to what the proprietary competition has to offer.

Collaborative editing, chat, voice, video – all the forms of synchronous collaboration – are next and will be added incrementally, just as Kolab Systems will keep driving the commercial ecosystem around the solution, allowing application service providers (ASPs), institutions and users to run their own services with full professional support. And all parts of Kolab will remain Free and Open, as well as committed to the upstream, according to best Free Software principles. If you want to know what that means, please take a look at Thomas Brüderli’s account of how Kolab Systems contributes to Roundcube.

TL;DR: Around 2009, Roundcube founder Thomas Brüderli was contacted by Kolab at a time when his day job left him so little time to work on Roundcube that he had toyed with the thought of just stepping back. Kolab Systems hired the primary developers of Roundcube to finish the project, contributing in the area of 95% of all code in all releases since 0.6, driving it to its 1.0 release and beyond. At the same time, Kolab Systems carefully avoided imposing itself on the Roundcube project.

From a Kolab perspective, Roundcube is the web mail component of its web application.

The way we pursued its development made sure that it could be used by any other service provider or ISV. And it was. Roundcube has an enormous adoption rate, with millions of downloads, hundreds of thousands of sites, and an uncounted number of users beyond the tens of millions. According to cPanel, 62% of their users choose Roundcube as their web mail application. It’s been used in a wide number of other applications, including several service providers that offer mail services that are more robust against commercial and governmental spying. Everyone at Kolab considers this a great success, and finds it rewarding to see our technology contribute essential value to society in so many different ways.

But while adoption sky-rocketed, contribution did not grow in the same way. It’s still Kolab Systems driving the vast majority of all code development in Roundcube, along with a small number of occasional contributors. And as a direct result of the Snowden revelations, the development of web collaboration solutions fragmented further. There are a number of proprietary approaches, which should be self-evidently disqualified from being taken seriously based on what we have learned about how solutions get compromised. But there are also Open Source solutions.

The Free Software community has largely responded in one of two ways. Many people felt re-enforced in their opinion that people just “should not use the cloud.” Many others declared self-hosting the universal answer to everything, and started to focus on developing solutions for the crypto-hermit.

The problem with that is that it takes an all or nothing approach to privacy and security. It also requires users to become more technical than most of them ever wanted to be, and give up features, convenience and ease of use as a price for privacy and security. In my view that ignores the most fundamental lesson we have learned about security throughout the past decades. People will work around security when they consider it necessary in order to get the job done. So the adoption rate of such technologies will necessarily remain limited to a very small group of users whose concerns are unusually strong.

These groups are often more exposed, more endangered, and more in need of protection and contribute to society in an unusually large way. So developing technology they can use is clearly a good thing.

It just won’t solve the problem at scale.

To do that we would need a generic web application geared towards all of tomorrow’s form factors and devices. It should be collaboration centric and allow deployment in environments from a single to hundreds of millions of users. It should enable meshed collaboration between sites, be fun to use, elegant, beautiful and provide security in a way that does not get into the users face.

Fully Free Software, that solution should be the generic collaboration application that could become in parts or as a whole the basis for solutions such as mailpile, which focus on local machine installations using extensive cryptography, intermediate solutions such as Mail-in-a-Box, all the way to generic cloud services by providers such as cPanel or Tucows. It should integrate all forms of on-line collaboration, make use of all the advances in usability for encryption, and be able to grow as technology advances further.

That, in short, is the goal Kolab Systems has set out to achieve with its plans for Roundcube Next.

While we can and of course will pursue that goal independently in incremental steps, we believe that would be missing two rather major opportunities. Such as the opportunity to tackle this together, as a community. We have a lot of experience, a great UI/UX designer excited about the project, and many good ideas.

But we are not omniscient and we also want to use this opportunity to achieve what Roundcube 1.0 has not quite managed to accomplish: To build an active, multi-vendor community around a base technology that will be fully Open Source/Free Software and will address the collaborative web application need so well that it puts Google Apps and Office 365 to shame and provides that solution to everyone. And secondly, while incremental improvements are immensely powerful, sometimes leapfrogging innovation is what you really want.

All of that is what Roundcube Next really represents: The invitation to leapfrog all existing applications, as a community.

So if you are a user that has appreciated Roundcube in the past, or a user who would like to be able to choose fully featured services that leave nothing to be desired but do not compromise your privacy and security, please contribute to pushing the fast forward button on Roundcube Next.

And if you are an Application Service Provider, but your name is not Google, Microsoft, Amazon or Apple, Roundcube Next represents the small, strategic investment that might just put you in a position to remain competitive in the future. Become part of the advisory group and join the ongoing discussion about where to take that application, and how to make it reality, together.


Key Update

by Georg Greve in freedom bits at 20:58, Monday, 18 May

I’m a fossil, apparently. My oldest PGP key dates back to 1997, so around the time when GnuPG just got started – and I switched to it early. Over the years I’ve been working a lot with GnuPG, which perhaps isn’t surprising. Werner Koch has been one of the co-founders of the Free Software Foundation Europe (FSFE) and so we share quite a bit of a long and interesting history together. I was always proud of the work he did – and together with Bernhard Reiter and others was doing what I could to try and support GnuPG when most people did not seem to understand how essential it truly was – and even many security experts declared proprietary encryption technology acceptable. Bernhard was also crucial to start the more than 10 year track record of Kolab development supporting GnuPG over the years. And especially the usability of GnuPG has always been something I’ve advocated for. As the now famous video by Edward Snowden demonstrated, this unfortunately continued to be an unsolved problem but hopefully will be solved “real soon now.”
In any case: I’ve been happy with my GnuPG setup for a long time, which is why the key I’ve been using for the past 16 years looked like this:
sec# 1024D/86574ACA 1999-02-20
uid                  Georg C. F. Greve <greve@gnu.org>
uid                  Georg C. F. Greve <greve@fsfeurope.org>
uid                  Georg C. F. Greve <greve@brave-gnu-world.org>
uid                  Brave GNU World <column@gnu.org>
uid                  Georg C. F. Greve <greve@fsfe.org>
uid                  Georg C. F. Greve <greve@gnuhh.org>
uid                  Georg C. F. Greve (Kolab Systems AG, CEO) <georg.greve@kolabsys.com>
uid                  Georg C. F. Greve (Kolab Systems AG, CEO) <greve@kolabsys.com>
ssb>  1024R/B7DB041C 2005-05-02
ssb>  1024R/7DF16B24 2005-05-02
ssb>  1024R/5378AB47 2005-05-02
You’ll see that I kept the actual primary key off my work machines (look for the ‘#’) and that I also moved the actual sub keys onto a hardware token – naturally an FSFE Fellowship Smart Card from the first batch ever produced.
That smart card is battered and bruised, but its chip is still intact, with 58470 signatures and counting. The key itself is also likely still uncompromised, since it has never been on a networked machine. But unfortunately there is no way to extend the length of an existing key. And while 1024 bit is probably still okay today, it’s not going to last much longer. So I finally went through the motions of generating a new key:
sec#  4096R/B358917A 2015-01-11 [expires: 2020-01-10]
uid                  Georg C. F. Greve (Kolab Systems AG, CEO) <greve@kolabsystems.com>
uid                  Georg C. F. Greve (Kolab Systems AG, CEO) <greve@kolabsystems.ch>
uid                  Georg C. F. Greve (Kolab Systems AG, CEO) <greve@kolabsys.com>
uid                  Georg C. F. Greve (Kolab Community) <georg@kolab.org>
uid                  Georg C. F. Greve (Free Software Foundation Europe, Founding President) <greve@fsfeurope.org>
uid                  Georg C. F. Greve (Free Software Foundation Europe, Founding President) <greve@fsfe.org>
uid                  Georg C. F. Greve (digitalSTROM.org Board) <georg.greve@digitalSTROM.org>
uid                  Georg C. F. Greve <mail@georggreve.net>
uid                  Georg C. F. Greve (GNU Project) <greve@gnu.org>
ssb>  4096R/AD394E01 2015-01-11
ssb>  4096R/B0EE38D8 2015-01-11
ssb>  4096R/1B249D9E 2015-01-11

My basic setup is still the same, and the key has been uploaded to the key servers, signed by my old key, which I have meanwhile revoked and which you should stop using. From now on please use the key
pub   4096R/B358917A 2015-01-11 [expires: 2020-01-10]
      Key fingerprint = E39A C3F5 D81C 7069 B755  4466 CD08 3CE6 B358 917A
exclusively and feel free to verify the fingerprint with me through side channels.
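If you would rather script that check than eyeball it, here is a minimal sketch of how one might do it in Python (assuming gpg is installed and a default key server is configured; the expected value is simply the fingerprint printed above):

# Minimal sketch: fetch the key and compare its fingerprint against the
# published value. Assumes gpg is on the PATH and a key server is reachable.
import subprocess

EXPECTED = "E39A C3F5 D81C 7069 B755  4466 CD08 3CE6 B358 917A".replace(" ", "")

# Fetch the key from the configured key server (0xB358917A as given above).
subprocess.run(["gpg", "--recv-keys", "0xB358917A"], check=True)

# Ask gpg for machine-readable output; 'fpr' records carry the fingerprint
# in the tenth colon-separated field.
out = subprocess.run(
    ["gpg", "--with-colons", "--fingerprint", "0xB358917A"],
    capture_output=True, text=True, check=True,
).stdout
fingerprints = [l.split(":")[9] for l in out.splitlines() if l.startswith("fpr")]

if EXPECTED in fingerprints:
    print("Fingerprint matches the published value.")
else:
    print("MISMATCH - verify through a side channel before using this key!")

Even then, of course, a match only tells you the key you fetched is the one from this blog post; the side-channel verification is what ties it to me.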

Not that this key has any chance to ever again make it among the top 50… but then that is a good sign, insofar as it means a lot more people are using GnuPG these days. And that is definitely good news.

And in case you haven’t done so already, go and support GnuPG right now.

 

 

event and data logging

by Aaron Seigo in aseigo at 08:58, Wednesday, 01 April

Working with Kolab has kept me busy on numerous fronts since I joined near the end of last year. There is the upcoming Kolab Summit, refreshing Kolab Systems' messaging, helping with progress around Kolab Now, collaborating on development process improvement, working on the design and implementation of Akonadi Next, the occasional sales engineering call ... so I've been kept busy, and I've been able to work with a number of excellent people in the process, both at Kolab Systems and in the Kolab community at large.

While much of that list of topics doesn't immediately bring "writing code" to mind, I have had the opportunity to work on a few "hands on keyboard, writing code" projects. Thankfully. ;)

One of the more interesting ones, at least to me, has been work on an emerging data loss prevention and audit trail system for Kolab called Bonnie. It's one of those things that companies and governmental users tend to really want, but which is fairly non-trivial to achieve.

There are, in broad strokes, three main steps in such a system:

  1. Capturing and recording events
  2. Storing data payloads associated with those events
  3. Recreating histories which can be reviewed and even restored from

I've been primarily working on the first two items, while a colleague has been focusing on the third. Since each of these points is a relatively large topic on its own, I'll cover each individually in subsequent blog entries.

We'll start in the next blog entry by looking at event capture and storage, why it is necessary (as opposed to, say, simply combing through system logs) and what we gain from it. I'll also introduce one of the Bonnie components, Egara, which is responsible for this set of functionality.
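Until those entries arrive, here is a purely illustrative sketch – invented for this post, not Bonnie's actual schema – of the kind of record step 1 might produce for a single mailbox event:

# Purely illustrative: a hypothetical event record for step 1 (capture and
# recording). All field names are invented for this sketch.
import json
from datetime import datetime, timezone

event = {
    "event": "MessageAppend",                  # what happened
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "user": "john.doe@example.org",            # who triggered it
    "folder": "user/john.doe/Sent",            # where it happened
    "payload_ref": "sha256:...",               # step 2: pointer to the stored payload
}
print(json.dumps(event, indent=2))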

eGovernment in the Netherlands

by Aaron Seigo in aseigo at 18:47, Friday, 27 March

Today "everyone" is online in one form or another, and it has transformed how many people connect, communicate, share and collaborate with others. To think that the Internet really only hit the mainstream some 20 years ago. It has been an amazingly swift and far-reaching shift that has touched people's personal and professional lives.

So it is no surprise that the concept of eGovernment is a hot one and much talked about. However, the reality on the ground is that governments tend not to be the swiftest sort of organizations when it comes to adopting change. (Which is not a bad thing; but that's a topic for another blog perhaps.) Figuring out how to modernize the communication and interaction of government with their constituencies seems to largely still be in the future. Even in countries where everyone is posting pictures taken on their smartphones of their lunch to all their friends (or the world ...), governments seem to still be trying to figure out how to use the Internet as an effective tool for democratic discourse.

The Netherlands is a few steps ahead of most, however. They have an active social media presence which is used by numerous government offices to collaborate with each other as well as to interact with the populace. Best of all, they aren't using a proprietary, lock-in platform hosted by a private company overseas somewhere. No, they use a free software social media framework that was designed specifically for this: Pleio.

They have somewhere around 100,000 users of the system and it is both actively used and developed to further the aims of the eGovernment initiative. It is, in fact, an initiative of the Programme Office 2.0 with the Treasury department, making it a purposeful program rather than simply a happy accident.

In their own words:

The complexity of society and the needs of citizens call for an integrated service platform where officials can easily collaborate with each other and engage citizens.

In addition, hundreds of government organizations all have the same sort of functionality needed in their operations and services. At this time, each organization is still largely trying to reinvent the wheel and independently purchase technical solutions.

That could be done better. And cheaper. Happily, new resources are nowadays available to work together government-wide in a smart way and to exchange knowledge. Pleio is the platform for this.

Just a few days ago it was announced publicly that not only is the Pleio community hard at work improving the platform to raise the bar yet again, but that Kolab will be a part of that. A joint development project has been agreed to and is now underway as part of a new Pleio pilot project. You can read more about the collaboration here.

Kolab Summit 2015

by Aaron Seigo in aseigo at 11:52, Monday, 16 March

We just announced that registration and presentation proposal submission are now open for the Kolab Summit 2015, which is being held in The Hague on May 2-3.

Just as Kolab itself is made up of many technologies, many technologies will be present at the summit. In addition to topics on Kolab, there will be presentations covering Roundcube, KDE Kontact and Akonadi, Cyrus IMAP, and OpenChange among others. We have some pretty nifty announcements and reveals already lined up for the event, which will be keynoted by Georg Greve (CEO of Kolab Systems AG) and Jeroen van Meeuwen (lead Kolab architect). Along with the usual BoFs and hacking rooms, this should be quite an enjoyable event.

As an additional and fun twist, the Kolab Summit will be co-located with the openSUSE conference which is going on at the same time. So we'll have lots of opportunity for "hallway talks" with Geekos as well. In fact, I'll be giving a keynote presentation at the openSUSE conference about freedom as innovation. A sort of "get the engines started" presentation that I hope provokes some thought and gets some energy flowing.

SFD Call to Action: Let the STEED run!

by Georg Greve in freedom bits at 09:00, Friday, 20 September

Information Technology is a hype-driven industry, a fact that has contributed largely to the current situation in which the NSA and GCHQ have unprecedented access to global communication and information – including for a very Realpolitik-based approach to how that information may be used. Economic and political manipulation may not be how these measures are advertised, but it may very well be the actual motivation. It’s the economy, stupid!

Ever since all of this started, many people have asked how to protect their privacy. Despite some attempts, there is still a lack of comprehensive answers to this question. But there is an obvious answer that most mainstream media seem to have largely missed: software freedom advocates had it right all along. You cannot trust proprietary cryptography, or proprietary software. If a company has a connection to the legal nexus of the United States, it is subject to US law and must comply with demands of the NSA and other authorities. And if that company also provides proprietary software, it is virtually impossible for you to know what kind of agreements it has with the NSA, as most of its management would prefer not to go to jail. But one would have to be very naive to think the United States is the only country where secret agreements exist.

Security unfortunately is a realm full of quacks, and it is just as easy to be fooled as it is to fool yourself. In fact many of the discussions I’ve had over the past weeks painfully reminded me of what Cory Doctorow called “Schneier’s Law”, although Bruce Schneier himself points out that the principle has been around for much longer. He has dated it back to Charles Babbage in 1864:

One of the most singular characteristics of the art of deciphering is the strong conviction possessed by every person, even moderately acquainted with it, that he is able to construct a cipher which nobody else can decipher.

So in my experience it makes good sense to listen to what Bruce Schneier and a few others have to say, which is why I think his guide to staying secure on the internet is probably something everyone should have read. In that list of recommendations there are some points that ought to sound familiar:

4) Be suspicious of commercial encryption software, especially from large vendors. My guess is that most encryption products from large US companies have NSA-friendly back doors, and many foreign ones probably do as well. It’s prudent to assume that foreign products also have foreign-installed backdoors. Closed-source software is easier for the NSA to backdoor than open-source software. Systems relying on master secrets are vulnerable to the NSA, through either legal or more clandestine means.

5) Try to use public-domain encryption that has to be compatible with other implementations. For example, it’s harder for the NSA to backdoor TLS than BitLocker, because any vendor’s TLS has to be compatible with every other vendor’s TLS, while BitLocker only has to be compatible with itself, giving the NSA a lot more freedom to make changes. And because BitLocker is proprietary, it’s far less likely those changes will be discovered. Prefer symmetric cryptography over public-key cryptography. Prefer conventional discrete-log-based systems over elliptic-curve systems; the latter have constants that the NSA influences when they can.

“So you were right, good for you,” I hear you think. But the point I am trying to make is a different one. It has been unbelievably difficult in the past to consistently do the right thing – the thing that would now give us the answers to the questions posed by the NSA and others. Both the Free Software Foundation Europe (FSFE) as an organisation and Kolab as a technology have a very long history in that regard. In fact if you’ve read the background of MyKolab.com, you’ll hopefully see the same kind of approach there as well. Having been involved with both has given me a unique perspective.

So when Bruce Schneier lists GnuPG as the first of several applications he uses and recommends to stay secure, I can’t help but find this rather ironic and rewarding at the same time. Because I know what was necessary for this crucial piece of software to come so far. Especially Werner Koch, but also Marcus Brinkmann, are two people all of us are indebted to, even though most people don’t realize it. Excellent software developers, but entrepreneurs with much room for improvement and (I’m sorry, guys) horrible at marketing and fundraising. So they pretty much exploited themselves over many years in order to keep the development going, because they knew their work was essential. Over the course of the past 12 years the entire Kolab team, and especially individuals such as Bernhard Reiter at Intevation, have always done what they could to involve them in development projects and push the technology forward.

And we will continue to do that, both through MyKolab.com and through other development projects we are pursuing with Kolab Systems for customers that have an interest in these technologies. But they have a whole lot more in mind than we could make possible immediately, such as dramatically increasing the usability of end-to-end cryptography. The concept they have developed is based on over a decade of working on the obstacles to end-user adoption. It’s called STEED — Usable End-to-End Encryption, and it has been available for two years now. I think it’s time for it to be finalized and implemented.

That’s why I am using tomorrow’s Software Freedom Day to ask for volunteers to help them run a crowdfunding campaign so they can finally put it into practice, in the open, to everyone’s benefit. Because that’s going to contribute more than just a little bit towards a world where privacy will once more be the default. So please help spread the word and let the STEED run!

Groklaw shutting down.

by Georg Greve in freedom bits at 12:31, Tuesday, 20 August

Today is a sad day for the world of Information Technology and the cause of software freedom. PJ just announced she’ll be shutting down Groklaw.

It’s hard to overestimate the role that Groklaw has played in the past years. Many of us, myself included, have worked with Groklaw over the years. I still take pride in the thought that my article about the dangers of OOXML for Free Software and Open Standards might have been the first of many calls to arms on this topic. Or consider how Groklaw followed the Microsoft antitrust case that FSFE fought for, and with, the Samba team – and won for all of software freedom; Groklaw was essential in helping us counter some of the Microsoft spin-doctoring. Or the Sean Daly interview with Volker Lendecke, Jeremy Allison, Carlo Piana and myself for Groklaw after the landslide victory against Microsoft in court.

I remember very well how giddy I still was during the interview for having realized that Microsoft would not be able to take down FSFE, because that would have been the consequence had they gotten their way. We bet our life’s work at the time. And won. The relief was incredible.

So there is a good deal of personal sadness to hear about this, as well as a general concern which Paul Adams just summarized rather well on the #kolab IRC channel:

the world of IT is just that little bit less safe without groklaw

And it’s true. Groklaw has been the most important platform to counter corporate spin-doctoring, has practiced an important form of whistleblowing long before Wikileaks, and has been giving alternative and background perspectives on some of the most important things going on inside and outside the media limelight. Without Groklaw, all of us will lack that essential information.

So firstly, I’d like to thank PJ for all the hard years of work on Groklaw. Never having had the pleasure of meeting her in real life, I still feel that I know her from the conversations we had over email over so many years. And I know how she got weary of the pressure, the death threats and the attempts at intimidating her into silence. Thank you for putting up with it for so long, and for doing what you felt was right and necessary despite the personal cost to yourself! The world needs more people like you.

But email was the only channel of communication she was comfortable using, for reasons of personal safety. So when Edward Snowden revealed the PRISM program, when Lavabit and Silent Circle shut down, and when the boyfriends of journalists got detained at Heathrow, she apparently drew the conclusion that this was no longer good enough to protect her own safety and the safety of the people she was in communication with.

That she chose MyKolab.com as the service to entrust with her remaining lines of communication confirms, at least to me, that we did the right thing when we launched MyKolab.com, and that we did it the right way. But it cannot mitigate the feeling of loss at seeing Groklaw fall victim to the totalitarian tendencies our societies have been exhibiting, and apparently willingly embracing, over the past years.

While we’re happy to provide a privacy asylum in a safe legislation, society should not need such asylums. Privacy should be the default, not the exception.

In January this year we started the MyKolab beta phase, and last week we finally moved it to its production environment, just in time for the Swiss national day. This seemed oddly fitting, since the Swiss national day celebrates independence and self-determination: the liberation of the Swiss from the feudal system. So when Bruce Schneier wrote about how the Internet right now resembles a feudal system, it was too tempting an opportunity to miss. And of course PRISM and Tempora played their part in the timing as well, although we obviously had no idea this leak was coming when we started the beta in January.

Anyhow. So now MyKolab.com has its new home.

Step 1: Hardware & Legislation

It should be highlighted that we actually run this on our own hardware, in a trustworthy, secure data centre, in a rack we physically control. Because that is where security starts, really. Also, we run this in Switzerland, with a Swiss company, and for a reason: most people do not seem to realize the level of protection data enjoys in Switzerland. We tried to explain it in the FAQ and the privacy information, but it seems that too many people still don’t get it.

Put frankly, in these matters, legislation trumps technology and even cryptography.

Because when push comes to shove, people would rather not go to jail. So no matter what snake oil someone may be trying to sell you about your data being secure because “it is encrypted on our server with your passphrase, so even we don’t have access” – choice of country and legislation trumps it all.

As long as server-side cryptography is involved, a provider can of course access your data even when it is encrypted – especially when the secret is as simple as your password, which all your devices submit to the server every time you check mail. Better yet, when you have push activated, your devices even keep the connection open. And if the provider happens to be subject to a requirement to cooperate and hand over your data, of course it will. Quite often providers don’t even necessarily know that this is going on, if they do not control the physical machines.
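To make that point concrete, here is a minimal sketch – an illustrative scheme, not any particular provider’s, and it assumes the Python ‘cryptography’ package – of why a server that sees your password can always derive the key that supposedly protects your data:

# Minimal sketch: if the client derives its encryption key from the account
# password, the server can run the exact same derivation the moment your
# mail client logs in. Illustrative only; assumes the 'cryptography' package.
import hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def derive_key(password: str, salt: bytes) -> bytes:
    # The same derivation the client would use; the server just saw the password.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)

def server_reads_your_data(password: str, salt: bytes,
                           nonce: bytes, ciphertext: bytes) -> bytes:
    key = derive_key(password, salt)   # 32 bytes -> an AES-256 key
    return AESGCM(key).decrypt(nonce, ciphertext, None)

Every single login hands the server everything this function needs.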

XKCD 538: Security

So whenever someone tries to sell you that kind of snake oil, you should avoid that service at all costs, because you do not know which other lies you simply have not caught them in yet. And yes, this is a real example, unfortunately. The romantic picture of the internet as a third place above nation states has never had much evidence on its side. Whoever was harbouring these notions and missed XKCD’s take on the matter should definitely have received their wake-up call from Lavabit and Silent Circle.

The reality of the matter is:

  1. There is no digital security without physical security, and
  2. Country and applicable legislation always win.

Step 2: Terms of Service & Pricing

So legislation, hardware. What else? Terms of Service come to mind. Too often they are deliberately written to obfuscate or frankly turn you into the product. Because writing software, buying hardware, physical security, maintaining systems, staffing help desks, electricity: All these things cost money. If you do not pay for it, make sure you know who does. Because otherwise it’s like this old poker adage: If you cannot tell who is the weakest player at the table, it’s you. Likewise for any on-line service: if you cannot tell who is paying for this, it’s probably you.

Sometimes this may just be in ways you did not expect, or may not have been aware of. So while most people only look for the lowest price, the question you should actually be asking yourself is: am I paying enough for this service that it can be profitable even when it does everything right and pays all its employees fairly – even if they have families and perhaps even mortgages?

The alternatives are services that are run by enthusiasts for the common good, or subsidized by third parties – sometimes for marketing purposes. If it is run by an enthusiast, the question is how long they can afford to run this service well, and what will happen if their priorities or interests change. Plus, few enthusiasts are willing to dish out the kind of cash that comes with a physically controlled, secure system in a data centre. So more often than not, this is either a box in someone’s basement to which pretty much anyone has access while they are out for a pizza or a movie, or – at least as problematic – a cheap VM at some provider with unknown physical, legislative and technical security.

If it is a subsidized service, it’s worse. Just like subsidies on food in Europe destroy the farming economy in Africa, making almost a whole continent dependent on charity, subsidized services cannibalize those that are run for professional interest.

In this case that means they damage the professional development community around Open Source, leading to less Free Software being developed. Why is that? Because such subsidized services typically do not bother with contributing upstream – contribution is a pure cost factor, and since the service is already charity, no-one feels there is a problem in not supporting the upstream – and they destroy the value proposition of those services that do contribute upstream. So the developers of the upstream technologies need to find other ways to support their work on Open Source, which typically means they get to spend less time on Free Software development.

This is the well-meaning counterpart to providers who simply take the software, do not bother to contribute upstream, and use it to provide a commercial service that near-automatically comes in below the price you would have to charge if you priced the service sustainably, factoring in the upstream contribution and ongoing development. The road to hell, and all that.

None of this is anything we wanted to contribute to with MyKolab.com.

So we made sure to write Terms of Service that are as good, honest and clear as we could make them, discussed them with the people behind the important “Terms of Service; Didn’t Read” project, and even link to that project from our own Terms of Service, so people have a fair chance to compare them without being lawyers or even reading them.

Step 3: Contributing to the Commons

Roundcube++ – The Kolab Web Client

We were also careful not to choose a pricing point that would cannibalize anything but proprietary software. Because we pay the developers – all of whom write Open Source exclusively. This has made sure that we have been the largest contributor to the Roundcube web mailer by some margin, for instance. In doing so, we deliberately made sure to keep the project independent and did not interfere with its internal management. Feel free to read the account of Thomas Brüderli on that front.

So while hundreds of thousands of sites use Roundcube world-wide, and it is popular with millions of users, only a handful of companies bother to contribute to its development, and none as much as Kolab Systems AG, the largest contributor by orders of magnitude. Don’t get me wrong: that’s all fine. We are happy about everyone who makes use of the software we develop, and we firmly believe there is a greater good achieved through Free Software.

But the economics are nonetheless the same: The developers working on Roundcube have lives, families even, monthly bills to pay, and we pay them every month to continue working on the technology for everyone’s good. Within our company group, similar things can probably be said for more than 60 people. And of course there are other parts of our stack that we do not contribute as much to, in some cases we are primarily the beneficiary of others doing the same.

It’s a give and take among companies who operate in this way that works extremely well. But there are some who almost never contribute. And if, as a customer, you choose them over those that are part of the developer community, you are choosing to have less Open Source Software developed.

So looking at contributions to Free Software as one essential criterion for whether the company you are about to choose is acting sustainably, or working towards a tragedy of the commons, is something I would certainly suggest you do.

This now brings us to an almost complete list of items you want to check:

  • Physical control, including hardware
  • Legal control, choice of applicable legislation
  • Terms of Service that are honest and fair
  • Contribution to Open Source / Free Software

and you want to make sure you pay enough for all of these to meet the criteria you expect.

Bringing it all together

On all these counts simultaneously, we made sure to put MyKolab.com into the top 10%. Perhaps even the top 5%, because we develop, maintain and publish the entire stack, as a solution, fully Open Source and more radically Open Standards based than any other solution in this area. So in fact you never need to rely upon MyKolab.com continuing to provide the service you want.

You can always continue to use the exact same solution, on your own server, in your own hands.

That is a claim that is unique, as far as I am aware. And you know that whatever you pay for the service never contributes to the development of proprietary software, but contributes to the state of the art in Free Software, available for everyone to take control of their own computing needs, as well as improving the service itself.

For me, it’s this part that truly makes MyKolab.com special. Because if you ever need to break out of MyKolab.com, your path to self-reliance and control is already built into the system, delivered with and supported by the service itself: It’s called Kolab.

 

Following the disclosures about details on how the United States and other countries are monitoring the world there has been a global discussion about this subject that’s been long overdue. In previous articles I tried to put together what actually has been proven thus far, what that means for society, and what are the implications for businesses around the world.

Now I’d like to take a look at governments. Firstly, of course governments have a functional aspect not entirely unlike business, and of course governments should be conscious of the society and the values they promote. Purely on these grounds it would likely be possible to say quite a few things about the post-PRISM society.

Secondly, there is of course also the question to what extent governments have known about this previously, and may even have colluded with what has been going on – in some cases possibly without democratic support for doing so. It has been pointed out by quite a few journalists that “I had no idea” amounts to saying you have not been following technical progress since the typewriter was invented, and there is some truth to that. Although typewriters have also been known to be bugged, of course.

In fact, from spending so much time at the United Nations, one of the typical sights I remember is a diplomat talking on their mobile phone while covering their mouth with one hand to ward off the possibility of lip readers. So there is clearly an understanding that trying to know more about anyone you may have common or opposing interests with will give you an advantage, and that everyone is trying to gain that advantage to the best of their ability.

What I think is really at play here are two different things: Having been blind-sided by the actual threat, and having been found naïve.

Defending against bunnies, turning your back on lions

Smart politicians will now have understood that their threat model has been totally off. It’s much easier to intercept a mobile phone call (and get both sides of the conversation) than it is to learn to lip-read, guarantee to speak the same language, and make sure you have line of sight. In other words: they were spending lots of effort protecting the wrong things while ignoring the right ones. So there is no way for them to know how vulnerable they have been, what damage arose from that, and what will follow from it for their future work.

So intelligent people should now be very worried indeed. Because either they did not know better, or they perhaps let a sense of herd safety drag them along into behaviour that has compromised their professional integrity, insofar as it may have exposed their internal thoughts to people they did not want to share them with. If you’ve ever seen how international treaties are negotiated, it does not take a whole lot of fantasy to imagine how this might be a problem. Given the levels of secrecy and the apparent lack of supervision – if highest-level politicians truly had no idea – there is also a major concern about possible abuse of the system by those in government to influence the political balance within a country.

Politicians are also romantic

The other part of the surprise seems to stem from a certain romantic notion of friendship among nations harboured by many politicians and deliberately nurtured by nations that do not share such romantic perspectives, most importantly in this context the United States.

The allies of the United States, in particular the European Union, know that the US has these capabilities and is not afraid to use them to gain an advantage for the home team. But for some reason they thought they were part of that home team, because the United States have been telling them they’re best friends forever. It does not lack a certain irony that Germany fell for this, not realizing that the United States are simply following their default approach abroad, commonly referred to as Realpolitik in the US.

So when European politicians suddenly realize that it may be problematic to negotiate free trade agreements with someone who is reading your internal briefings and mails and listening to your phone calls, it is not so much out of shock that the US is doing this in general. They know the US is not shy to use force at any level to obtain results. It’s about the fact that these methods are being used universally, no matter who you are. That the US was willing to use them against Switzerland, a country in the heart of Europe, should have demonstrated that aptly. Only that in this particular case, EU politicians were hoping to ride on the coat-tails of the US cavalry.

International Organizations

Of course that surprise also betrays the level of collaboration that has been present for a long time. The reason they thought they were part of the home team is that, in some cases, they were. So while they were the beneficiaries of this system, working side by side with the United States at the Intergovernmental Organizations to put in place the global power structures that rule the world, this sort of advantage might have seemed very handy and very welcome. Not too many questions were asked, I presume.

But if you’re one of the countries in transition, a country from the developing world, or simply a country that got on the wrong side of the United States and their power bloc, you now have to wonder: how much worse off are you for having been pushed back much further in negotiations than if the “Northern” power bloc had not known all your internal assessments, plans and contingencies? And how can Intergovernmental Organizations truly function if all your communications with these organizations are unilaterally transparent to this power bloc?

It’s time to understand that imbalance, and address it. I know that several countries are aware of this, of course, and some of them are actively seeking ways to address that strategic disadvantage, since parts of our company group have been involved in that. But too many countries do not yet seem to have the appropriate measures in place, nor are they addressing the issue with sufficient resources and urgency – perhaps out of an underestimation of the analytic capabilities involved.

The PRISM leaks should have been the wake-up call for these countries. But I’d also expect them to raise their concerns at the Intergovernmental Organizations, asking the secretariats how the IT and communications strategies of these organizations adequately protect the mandate of the organizations, for they can only function if a certain minimum level of confidence can be placed in them and in the integrity of their work flow.

Global Power Structures

But on a more abstract level, all of this once more establishes a trend of the United States acting as a nexus of global destabilisation, subject only to national interest. Because it is for the US government to decide which countries to bless with access to that information, and whose information to access. Cooperate and be rewarded. Be defiant and be punished – for example, by ensuring your national business champion does not get that deal, because the US might just employ its information to ensure a competing US business will. This establishes a gravitation towards pleasing the interests of the United States that I find problematic. As I would find a similar imbalance towards any other nation.

But in this case it is the United States that has moved to “economic policy by spook,” as a good friend recently called it. Although of course there may be other countries doing the same; right now it seems more or less confirmed that this is at least in part collusion at NATO level. Be that as it may, countries need to understand that their sovereignty and economic well-being are highly dependent upon the ability to protect their own information and that of their economies.

Which is why Brazil and India probably feel confirmed in their focus on strategic independence. With virtually every economic sector highly dependent on it, Information Technology has become as fundamental as electricity, roads or water. Perhaps it is time to re-assess to which level governments want to ensure an independent, stable supply that holds up to the demands of their nation.

Estonia’s president recently suggested establishing European cloud providers; other areas of the world may want to pay close attention to this.

The Opportunity Exists, Does The Will?

Let’s say a nation wanted to address these issues. Imagine they had to engineer the entire stack of software. The prospects would be daunting.

Fortunately they don’t have to. Nothing runs your data centres and infrastructures better, and with higher security, than Free Software does. Our community has been building these tools for a long time now, and they have the potential to serve as the big equalizer in the global IT power struggle. The UNCTAD Information Economy Reports provide some excellent fact-based, neutral analysis of the impact of software freedom on economies around the world.

Countries stand to gain or lose a lot in this central question. Open Source may have been the answer all along, but PRISM highlighted that the need is both real and urgent.

Any government should be able to answer the following question: What is your policy on a sovereign software supply and digital infrastructure?

If that question cannot be answered, it’s time to get to work. And soon.



After a primer on the realities of today’s world, and the totalitarian tendencies that follow from this environment and our behaviour in it, let’s take a look at what this means for professional use of information technology.

Firstly, it should be obvious that when you use the cloud services of a company, you have no secrets from that company other than the ones it guarantees to keep. That promise is only as good as the guarantees such a company can make given the legal environment it is situated in, and it is of course subject to the level of trust you can place in the people running and owning the company.

So when using Google Apps for your business, you have no secrets from Google. The same goes for Office 365 and Microsoft, or iCloud and Apple. These companies are also known for having very good internal data analytics. Google, for instance, has been using algorithms to identify employees who are about to leave, in order to make them a better offer to keep them on board. Naturally the same algorithm could be used to identify which of your better employees might be susceptible to being head-hunted.

Of course no-one will ever know whether that actually took place, or whether it contributed to your company losing an important employee to Google. But the simple truth is: in some ways, Google, Microsoft or Apple is likely to know a lot more about your business than you do yourself. That knowledge has value, and it may be tempting to turn that value into shareholder value for any of these businesses.

If you are a US business, or a small, local business elsewhere, that may not be an issue.

But if you are into software, or have more than regional reach, it may become a major concern. Because thanks to what we now know about PRISM, your use of these services means the US intelligence services also have real-time access to your business and its development. And since FISA explicitly empowers these services to make use of those capabilities for the general interests of the United States – including foreign policy and economic goals – the conclusion is simple: you might just be handing your internal business information to people who are champions for your competition.

Your only protection is your own lack of success. And you might be right: you might be too small for them to spend many resources on, because while their input bandwidth is almost unlimited, their output bandwidth for acting upon it of course has limits. But that’s about it.

The US has a long tradition of putting their public services at the disposal of industry, trying to promote their “tax paying home team.” It’s a cultural misunderstanding to assume they would be pulling their punches just because you like to watch Hollywood movies and sympathise with the American Dream.

Which is why the US has been active in promoting GM crops in Europe, and in upholding the interests of its pharmaceutical industry. Is anyone at Roche reading this? No shareholder is concerned about this? To me it seems like a good example of the risks that are unwittingly taken when you let the CFO manage the IT strategy. Those two professions rarely mix, in my experience.

The United States are not the only nation in the world doing this, of course. Almost every nation has at least a small agency trying to support its own industry in building international business, and the German chancellor typically has a whole entourage of industry representatives when she is visiting countries that are markets of interest. I guess it’s a tribute to their honesty that the United States made it explicit that their intelligence services feed into this system in this way.

Several other countries are likely to do the same, but probably not as successfully or aggressively.

Old-school on-site IT as the solution?

Some people may now feel fairly smart for not having jumped on the Public Cloud bandwagon. Only that not all of them are as secure as they think they are. Because we also learned that access to data does not only happen through the public clouds. Some software vendors, most importantly Microsoft, are also supplying the NSA with priority access to vulnerabilities in their software. Likely they will do their best to manage the pipeline of disclosure and resolution so that there is always at least one way for the NSA to get into your locally installed system in an automated fashion that is not currently publicly known.

This would also explain the ongoing debate about the “NSA back door in Windows”, which was always denied – but the denial could have been carefully concealing this alternative way of achieving the same effect. So running your local installation of Windows is likely a little better for your business secrets than using public cloud services by US businesses, but not by as much as you might want to believe. And it’s not just Windows, of course: Lotus was called out on the same practice a long time ago, and one may wonder whether the other software vendors avoided doing it, or simply avoided being caught.

Given the discussions among three-letter agencies about wanting that level of access into any software and service originating in the United States, and given the evident lack of public disclosure in this area, a rather large question mark remains. So on-site IT is not necessarily the solution either, unless it is done to certain standards. In all honesty, most installations probably do not meet those at the moment. And the cost associated with doing it properly may be considered excessive for your situation.

So it’s not that simple, and not a black-and-white decision between “all on-site and self-run” and “all in a public cloud run by a US business”. There is a whole range of options in between that provide different advantages, disadvantages, costs and risks.

Weighing the risks

So whatever you do: There is always a risk analysis involved.

All businesses take risks based on educated guesses and sometimes even instinct. And they need to weigh cost against benefit. The perfect solution is rarely obtained, typically because it is excessively costly, so often businesses stick with “what works.” And their IT is no different in that regard.

It is a valid decision to say you’re not concerned about business secrets leaking, or to consider the likely damage smaller than the risk of running poorly secured IT under your own control, either directly or through a third party. Perhaps the additional cost of running that kind of installation well does not seem justified in comparison to what you gain. So you go to a more trustworthy local provider that runs your installation on Open Source and Open Standards. Or you use the services of a large US public cloud vendor. It’s your call to make.

But I would argue this call should always be made consciously, in full knowledge of all risks and implications. And the truth is that in too many cases people did not take this into account; it was more convenient to ignore the issue and dismiss it as unproven speculation. Only that it is only speculation as long as it has not been proven. So virtually any business right now should be re-evaluating its IT strategy to see what risks and benefits are associated with its current strategy, and whether another strategy might provide a more adequate approach.

And when that evaluation is done, I would suggest looking at the Total Cost of Operations (TCO). But not in an overly simplistic way, because most often the calculation is skewed in favour of proprietary lock-in. So always make sure to factor in the cost of decommissioning the solution you are about to introduce.
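As a toy illustration of how that decommissioning term can flip a comparison – every number below is invented purely for the sake of the arithmetic:

# Toy TCO comparison over a five-year horizon; all numbers are invented.
def tco(acquisition, yearly_operation, decommissioning, years=5):
    return acquisition + yearly_operation * years + decommissioning

# A lock-in offer often looks cheaper until you price the exit.
locked_in = tco(acquisition=10_000, yearly_operation=4_000, decommissioning=25_000)
open_std = tco(acquisition=14_000, yearly_operation=4_500, decommissioning=2_000)
print(locked_in, open_std)  # 55000 vs 38500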

And the TCO isn’t everything: IT is not just a cost, there is also a benefit. All too often two alternatives are compared purely on the grounds of their cost. So more often than not the slightly cheaper solution will be chosen despite offering dramatically fewer benefits and a poor strategic outlook. And a year later you find out that it actually wasn’t cheaper at all, because of hidden costs – and that you would have needed the benefits of the other solution. And that you’re in a strategic dead-end.

So I would always advocate also taking into account the benefits, both in things you require right now and in things that you might be able to achieve in the future. For lack of a common terminology, let’s call this the Option Value Generated (OVG) for your business, in gained productivity as well as innovative potential. And then there is what I now conveniently name the Customer Confidence Impact (CCI), covering both your ability to devise an efficient IT strategy and how you handle your customers’ business, data and trust.

After all is said and done, you might still want to run US technology. And you might still want to use a public cloud service. If you do, be transparent about it, so your customers can choose whether or not they agree to that usage by being in business with you. Because some people are likely to take offence due to the social implications and ownership of their own data. In other words: Make sure those who communicate with you and use your services know where that data ends up.

This may not be a problem for your business and your customers. They may consider this entirely acceptable, and that is fine. Being able to make that call is part of what it means to have freedom to try out business approaches and strategies.

But if you do not communicate your usage of such a service, be aware of the risks you might be incurring. The potential impact on customer confidence and public image for having misled your business associates and customers is dramatic. Just look at the level of coverage PRISM is getting and you’ll get an idea.

The door is wide open

When reviewing your strategy, keep in mind that you may require some level of ability to adapt to a changed world in the future. Nothing guarantees that better than Open Source and Open Standards. So if you have ignored this debate throughout the past years, now would be the time to take a look at the strategic reasons for the adoption of Free Software – most importantly transparency, security, control, and the ability to innovate.

While for the past ten years most of the debate has been about how Open Source can provide more efficient IT at a better price, PRISM has demonstrated that the strategic values of Free Software were spot-on and provide benefits for professional use of IT that proprietary software cannot hope to match.

Simultaneously the past 20 years have seen a dramatic growth of professional services in the area. Because benefits are nice in theory, but if they cannot be made use of because the support network is missing, they won’t reach the average business.

In fact, in the spirit of full disclosure, I speak from personal experience in this regard. Since 2009 I have dedicated myself to building up such a business: Kolab Systems is an Open Source ISV for the Kolab Groupware Solution. We built this company because Kolab had a typical Open Source problem: excellent concepts and technology, but a gap in professional support and services that would allow wide adoption and use of that technology. That’s been fixed. We now provide support for on-site hosting as well as Kolab as a service through MyKolab.com. We even structured our corporate group to be able to take care of high-security requirements in a verifiable way.

But we are of course not the only company that has built its business around combining the advantages of software freedom with professional services for its customers. There are so many businesses working on this that it would be impossible to list them all. And they provide services for customers of all sizes – up to the very largest businesses and governments of this world.

So the concerns are real, as are the advantages. And there is a plethora of professional services at your disposal to make use of the advantages and address the concerns.

The only question is whether you will make use of them.

