Planet

Aaron Seigo
Sat, 2015-06-27 10:22

Riak KV, Basho and Kolab

As I have mentioned in earlier blog entries, Kolab Enterprise has gained data loss prevention (DLP) functionality this year that goes above and beyond what one tends to find in other groupware products. Kolab's DLP is not just a back-up system that copies mails and other objects to disk for later restore: it creates a history of every groupware object in real time, which can later be examined and restored from. This will eventually lead to some very interesting business intelligence features.
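
The idea behind such an object history can be illustrated with a small Python sketch; a plain dict stands in for the real store, and all names here are illustrative rather than Kolab's actual API:

```python
import copy

class ObjectHistory:
    """Toy model of a DLP history store: every change to a groupware
    object is appended as a new immutable version."""

    def __init__(self):
        self._versions = {}  # uid -> list of object snapshots

    def record(self, uid, obj):
        # Append a deep copy so later mutations don't rewrite history.
        self._versions.setdefault(uid, []).append(copy.deepcopy(obj))

    def history(self, uid):
        return self._versions.get(uid, [])

    def restore(self, uid, version):
        # version is an index into the object's history (0 = oldest)
        return copy.deepcopy(self._versions[uid][version])

store = ObjectHistory()
store.record("event-1", {"summary": "Standup", "time": "09:00"})
store.record("event-1", {"summary": "Standup", "time": "10:00"})
print(store.restore("event-1", 0))  # the original version survives the edit
```

The point of recording full snapshots rather than overwriting in place is exactly what distinguishes this from a plain backup: every intermediate state remains available for auditing and roll-back.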

The storage system for the Kolab DLP system is Basho's industry-leading distributed NoSQL database, Riak KV. (The "KV" stands for key/value.) We chose Riak KV because it scales naturally (it is designed to be run as a cluster of nodes by default), is robust by design (CAP Theorem ftw), and is dead simple to deploy on development and production machines alike. A further key factor for us is that Basho provides proven enterprise-grade support for its line of Riak products. This was a requirement for us as we need to provide enterprise-grade support for the entire Kolab Enterprise stack.

(It was a nice coincidence that both Riak and core parts of Kolab's DLP system are written in Erlang. ;)

I sat down with Manu Marchel, Managing Director for EMEA at Basho Technologies Inc., recently for a mutual interview. You can read my interview on the Basho blog (I'll update this entry with a link to their blog when it is published); here is a transcript of my conversation with Manu:

NoSQL is quite a new technology in the Big Data space. Many people might have heard about things like Hadoop, but how does NoSQL fit in? Could you give everyone the quick cheat sheet on what NoSQL databases are, and specifically what Riak KV, your key-value NoSQL database, is?

NoSQL databases are the new generation of databases, designed to address the needs of enterprises to store, manage, analyse and serve the ever-increasing amounts of unstructured data that make up over 80% of all data being created nowadays in public clouds and private infrastructures. Apache Hadoop has done a great job of handling batch analytics use cases at massive scale for unstructured data, what I would call exploratory or discovery analytics. What NoSQL databases like Riak do in comparison is help organisations manage their active data workloads as well, providing near-real-time operational analytics at scale. Most importantly, most businesses need scalability, availability and fault tolerance as core requirements of their current and future application architectures, and these are deciding factors for NoSQL over traditional relational databases. NoSQL databases started out as one of four types: key-value, column store, document and graph, but nowadays they are becoming multi-model, whereby, for example, Riak can efficiently handle key-value data, but also documents as well as log/time-series data, as demonstrated by our wide range of customers, including Kolab Systems.

Riak KV is the most widely adopted NoSQL key-value database, with scalability, high availability, fault tolerance and operational simplicity as its key properties. It is used most often for mission-critical use cases and works great for handling User Data, Session Data, Profile Data, Social Data, Real-time Data and Logging Data. It provides near-real-time analytics with its secondary indexes, search through Solr, in-situ Map/Reduce and soon-to-come Apache Spark support. Finally, its multi-data-center replication capability makes it easy to ensure business continuity, geo-locate data for low-latency access across continents, or segregate workloads to ensure very reliable low latency.

Riak KV is known for its durability; that's part of the reason we chose it for Kolab's DLP system. Could you give us some insight into how Riak KV achieves this?

Hardware does fail, and when it does your IT infrastructure needs to be able to cope, and your systems must continue to operate while getting the resources back online as soon as possible. Riak KV was designed to eliminate the impact of the unexpected. Even if network partitions or hardware failures cause unanticipated outages, Riak KV can still read and write your data. This means you never lose data, even when part of the system goes down. For Kolab customers, it means they have the security of knowing that the data loss prevention and auditing they are paying for is backed by the best system available to deliver on this promise.
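
The arithmetic behind this guarantee is simple to illustrate. Riak replicates each key to N nodes and requires only W write acknowledgements and R read responses; the sketch below is a toy, not Basho's implementation, but n_val, w and r are Riak's standard tunables (defaults n_val=3, w=2, r=2):

```python
def write_succeeds(n, w, acked):
    """A write to n replicas succeeds once w of them acknowledge it."""
    return acked >= w

def read_is_consistent(n, w, r):
    """If w + r > n, every read quorum overlaps every write quorum,
    so a successful read always sees the latest successful write."""
    return w + r > n

# Riak's defaults: n_val=3, w=2, r=2
print(read_is_consistent(3, 2, 2))   # True: quorums overlap
print(write_succeeds(3, 2, acked=2)) # True even with one replica down
```

With these defaults the cluster keeps accepting reads and writes while any single replica of a key is unreachable, which is what "eliminate the impact of the unexpected" boils down to in practice.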

Availability seems to be a very important thing for databases in today’s digital age. How is Riak providing this key feature to Kolab and how does this enhance the Kolab offering?

Simply, Riak KV is designed to intelligently replicate and retrieve data making sure that applications based on the database are always available. Scalability also comes into play here as well. Unlike traditional databases, Riak is designed to respond to the billions of data points and terabytes of data that are being produced -- often in real-time -- as it is able to scale in a near linear fashion to give Kolab the best possible performance. Ultimately this means that Kolab’s application is always available so as an end-user you don’t experience any system outages no matter how busy or active your users are.
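
The near-linear scaling mentioned here rests on consistent hashing: every key maps onto a fixed ring of partitions, and partitions are claimed by whatever nodes are present. A simplified Python sketch (Riak's real ring has 2^160 positions split into vnodes, and its claim algorithm is more sophisticated than this toy):

```python
import hashlib

RING_SIZE = 64  # Riak's real ring has 2**160 positions, split into vnodes

def partition_for(key):
    """Hash the key onto a fixed position on the ring."""
    digest = hashlib.sha1(key.encode()).digest()
    return int.from_bytes(digest, "big") % RING_SIZE

def preference_list(key, nodes, n_val=3):
    """Pick the n_val replica nodes for a key by starting at its ring
    position and walking clockwise over the nodes claiming the ring."""
    start = partition_for(key)
    return [nodes[(start + i) % len(nodes)] for i in range(n_val)]

nodes = ["node-a", "node-b", "node-c", "node-d"]
print(preference_list("user-42/mailbox", nodes))  # three distinct replicas
```

Because placement is computed from the key's hash rather than looked up in a central table, any node can route any request, and adding nodes grows capacity without a coordinating bottleneck.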

We integrate Riak KV with Kolab's data loss prevention system to store groupware object histories in real time for auditing and roll-back if needed. Is this unique?

Yes! This is a great example of two great technologies working together to provide an excellent customer experience. Combining the power of Riak KV’s high availability, fault tolerance, and scalability with Kolab’s data loss prevention system means that you have an incredibly strong and powerful system.

Basho is a really unique name for a technology company - is there any history or background to it?

Thank you, we really like our name too. Basho's name was inspired by the real-life Matsuo Basho (1644–1694), who is considered by many to be Japan's most renowned and revered writer of haiku. Haiku are known for their balance of lines and syllables, where the simplicity of the structure is important. This is a founding, guiding principle that Riak KV is based on, as operational simplicity is core to our architecture, eliminating the need for mindless manual operations since data can be automatically and uniformly distributed.

To see the partnership of Basho's Riak and Kolab Enterprise in action, come see us in Munich at the TDWI European Conference, 22-24 June. We'll be at a booth showing both Riak KV and Kolab Enterprise, and will be happy to answer your questions!


Timotheus Pokorra
Sat, 2015-06-13 22:35

This describes how to install a docker image of Kolab.

Please note: this is not meant to be for production use. The main purpose is to provide an easy way for demonstration of features and for product validation.

This installation has not been tested much and could still use some fine-tuning. It is just a demonstration of what can be done with Docker for Kolab.

Preparing for Docker
I am using a Jiffybox provided by DomainFactory for downloading a Docker container for Kolab 3.4 running on CentOS 7.

I have installed Fedora 21 on a Jiffybox.

Now install docker:

sudo yum install docker-io
sudo systemctl start docker
sudo systemctl enable docker

Install container
The image for the container is available here:
https://registry.hub.docker.com/u/tpokorra/kolab34_centos7/
If you want to know how this image was created, read my other blog post http://www.pokorra.de/2015/06/building-a-docker-container-for-kolab-3-4-on-jiffybox/.

To install this image, you need to type in this command:

docker pull tpokorra/kolab34_centos7

You can create a container from this image and run it:

MYAPP=$(sudo docker run --name centos7_kolab34 -v /sys/fs/cgroup:/sys/fs/cgroup:ro -p 443:443 -h kolab34.test.example.org -d -t -i tpokorra/kolab34_centos7 /bin/bash)

You can see all your containers:

docker ps -a

You should attach to the container, and inside the container change the root password:

docker attach $MYAPP
  # you might need to press Enter to see the login screen
  # login with user root and password root
  # enter a secure password:
  passwd root

To stop the container:

docker stop $MYAPP

To delete the container:

docker rm $MYAPP

You can reach the Kolab Webadmin on this URL (replace localhost with the IP address of the Jiffybox):
https://localhost/kolab-webadmin. Login with user: cn=Directory Manager, password: test

The Webmail interface is available here:
https://localhost/roundcubemail.


Timotheus Pokorra
Sat, 2015-06-13 22:31

This article is an update of the previous post that built a Docker container for Kolab 3.3 from September 2014.

Preparation
I am using a Jiffybox provided by DomainFactory for building the Docker container.

I have installed Fedora 21 on a Jiffybox.

Now install docker:

sudo yum install docker-io
sudo systemctl start docker
sudo systemctl enable docker

Create a Docker image

To learn more about Dockerfiles, see the Dockerfile Reference

My Dockerfile is available on Github: https://github.com/TBits/KolabScripts/blob/Kolab3.4/kolab/Dockerfile. You should store it with filename Dockerfile in your current directory.

This command will build a container with the instructions from the Dockerfile in the current directory. When the instructions have been successful, an image with the name tpokorra/kolab34_centos7 will be created, and the container will be deleted:

sudo docker build -t tpokorra/kolab34_centos7 .

You can see all your local images with this command:

sudo docker images

To finish the container, we need to run setup-kolab, this time we define a hostname as a parameter:

MYAPP=$(sudo docker run --name centos7_kolab34 -v /sys/fs/cgroup:/sys/fs/cgroup:ro -p 443:443 -h kolab34.test.example.org -d -t -i tpokorra/kolab34_centos7 /bin/bash)
docker attach $MYAPP
# you might need to press the Enter key to see the login prompt...
# login with user root and password root
# run inside the container:
echo 2 | setup-kolab --default --timezone=Europe/Brussels --directory-manager-pwd=test
cd /root/KolabScripts-Kolab3.4/kolab
./initHttpTunnel.sh
./initSSL.sh test.example.org
shutdown -h now

Now you commit this last manual change:

docker commit $MYAPP tpokorra/kolab34_centos7
# delete the container
docker rm $MYAPP

You can push this image to https://registry.hub.docker.com:

#create a new account, or login with existing account:
sudo docker login
# there is currently an issue with the Fedora 21 rpm package (docker-io-1.6.0-4.git350a636.fc21.x86_64)
# see also https://forums.docker.com/t/docker-push-error-fata-0001-respository-does-not-exist/1309/18
# solution: yum install --enablerepo=updates-testing docker-io
sudo docker push tpokorra/kolab34_centos7

You can now see the image available here: https://registry.hub.docker.com/u/tpokorra/kolab34_centos7/

See the post Installing Demo Version of Kolab 3.4 with Docker for how to install this image on the same or a different machine, for demo and validation purposes.

Current status: there are still some things not working well, and I have not tested everything.
But this should be a good starting point for other people as well, to help build a good demo installation of Kolab on Docker.


Timotheus Pokorra
Thu, 2015-06-11 08:30

This post originates in the idea from Stephen Gallagher, who is working on rolekit: “rolekit is a daemon for Linux systems providing a stable D-BUS interface to manage the deployment of [Fedora] Server Roles”.
The code of Rolekit is available here: https://github.com/sgallagher/rolekit

On his blog, Stephen stated in this post:

A few that I’d love to see (but don’t have time to start on yet):

  • A fileserver role that manages Samba and NFS file-shares (maybe [s]ftp as well).
  • A mail and/or groupware server role built atop something like Kolab
  • A backup server

This made me wonder: what would it be like if Kolab became a Server Role for Fedora, and could be installed from the Fedora repositories? Through my work on OpenPetra and Mono I got involved with Fedora, and noticed that the Fedora community tries out new technology, proves whether it works, and then the technology eventually ends up in other distributions as well.

First steps

On IRC, we agreed that the first step would be to create a Copr repo, that contains the Kolab packages, and to write this blog post describing how to install and configure Kolab.

Creating the Copr Repo

So, here is the Copr repository for Fedora 22: https://copr.fedoraproject.org/coprs/tpokorra/kolab/

I created it by getting the src rpm packages from the Kolab repository, from 3.4 and 3.4 updates, in this order:

  • kolab-utils
  • roundcubemail-plugins-kolab
  • kolab-webadmin
  • kolab
  • pykolab
  • chwala
  • iRony
  • kolab-freebusy
  • roundcubemail-skin-chameleon
  • php-Net-LDAP3
  • roundcubemail
  • kolab-syncroton
  • roundcubemail-plugin-contextmenu
  • kolab-schema
  • kolab-autodiscover
  • python-sievelib
  • php-pear-Net-LDAP2
  • cyrus-imapd

The packages libkolab, libkolabxml and kdepim are already in Fedora, and I did not update them.

Cyrus Imapd is also in Fedora, https://admin.fedoraproject.org/pkgdb/package/cyrus-imapd/, but not at the latest version. So I used version 2.5 from Kolab.

Roundcubemail is up to date in Fedora, https://admin.fedoraproject.org/pkgdb/package/roundcubemail, but somehow does not provide roundcubemail(core) >= 1.1 as required by some Kolab packages. So I also used the package from Kolab.

I have patched the pykolab package, and backported some features to extend the setup-kolab command so that it can be used non-interactively, which is probably required to be integrated into rolekit. In Kolab 3.5 (release planned for August 2015), those features will be included.

Installing Kolab from the Copr Repo

I have tested this with Fedora 22.

Please disable SELinux, since there isn't an SELinux policy available for Kolab yet.
Jeroen van Meeuwen worked on one a while ago, but it probably needs updating and testing: https://github.com/kanarip/kolab-selinux

Another thing: the server should have an FQDN, e.g. kolab.example.org. See the installation instructions for details.

dnf install dnf-plugins-core
dnf copr enable tpokorra/kolab
dnf install kolab
mytz=Europe/Brussels
pwd=test
setup-kolab --default --mysqlserver=new --timezone=$mytz --directory-manager-pwd=$pwd

On my setup, I need to add this line to /etc/kolab/kolab.conf, in section [kolab-wap], because I am running it inside an LXC container with an iptables tunnel for port 80, and the Kolab webadmin does not calculate the URL for the API properly:

api_url = http://localhost/kolab-webadmin/api

You also need to add these lines to /etc/roundcubemail/config.inc.php (this will be fixed in Kolab 3.5):

    # required for php 5.6, see https://bbs.archlinux.org/viewtopic.php?id=193012 and http://php.net/manual/de/context.ssl.php
    # production environment requires real security settings!!!
    $config['imap_conn_options'] = array(
        'ssl' => array(
            'verify_peer_name' => false,
            'verify_peer' => false,
            'allow_self_signed' => true));
    $config['smtp_conn_options'] = array(
        'ssl' => array(
            'verify_peer_name' => false,
            'verify_peer' => false,
            'allow_self_signed' => true));

After this, the Kolab server should run, and you can go to http://localhost/kolab-webadmin and log in with the user "cn=Directory Manager" (without the quotes) and the password that you specified as a parameter for setup-kolab.

The webmail runs at http://localhost/roundcubemail

Conclusion

I hope this shows the possibilities, and what amount of work still needs to be done.

I guess the existing packages in Fedora should be kept up to date, and missing Kolab packages need to be added to Fedora as well.

Work on SELinux policy is also required (see above).

The other thing: with a Kolab server role, how much should the role define about how the server is configured securely? In Kolab upstream, we documented how to secure the server, but left it to the sysadmin to actually enforce security, because the Kolab community cannot take responsibility for the server.

I have a number of scripts that might be useful for rolekit: https://github.com/TBits/KolabScripts. There is, for example, a script for setting up a self-signed SSL certificate.


Timotheus Pokorra
Wed, 2015-06-10 09:28

Some weeks ago, I did significant work on getting Kolab 3.4 running on Debian Jessie.

I did this work in my own time, because at TBits.net we focus on CentOS7.
Still, the work was beneficial to CentOS as well, because I had to do some fixes for PHP 5.6, which will eventually be part of CentOS.

For your interest, here are the bugs I have worked on:

For several weeks, my nightly tests succeed now for Debian Jessie as well, on LBS: see https://lbs.solidcharity.com/package/tbits.net/kolab-test/kolab-test#Kolab3.4_debian/jessie/amd64

I just updated the installation instructions: https://docs.kolab.org/installation-guide/debian.html

I am not using Debian Jessie in production, and that means two things:

  • I cannot say if it actually works in production. I only can say that it passes my nightly tests.
  • In the future, I think I might need to focus on CentOS more, and cannot invest so much of my own free time into the Debian packaging. I am open for suggestions or sponsorship :) (perhaps https://www.bountysource.com/?)

Timotheus Pokorra
Wed, 2015-06-10 08:50

I realized it would be good to blog here about updates for the Kolab 3.4 Community Edition.

Although it is a community release, and therefore does not come with any guarantee (that is up to the Enterprise version), some people are using the community edition in production, and we as the community are contributing fixes and maintaining the release.

Thanks to Daniel Hoffend, we now have the latest Roundcube 1.1.2 in Kolab 3.4 Updates. Just run yum update (CentOS) or apt-get update && apt-get upgrade (Debian)…

A week ago, Daniel also backported a fix for the installation of Kolab: the Roundcube configuration for the address book was not correct.
More details can be seen in the Change Request on OBS.
You might want to manually update your existing installation in the same way…

And another fix from 10 days ago: Daniel backported a fix for the Roundcube Context menu.
See details in the Change Request on OBS.

In the future, I will aim to post here as soon as we accept updates into Kolab 3.4 Updates.

If you want to contribute to make the community edition of Kolab more stable and secure, please make suggestions on the mailing list about fixes that you know of, and if you enjoy creating a change request on OBS yourself, then go for it, you would be very welcome!


roundcube
Fri, 2015-06-05 02:00

We just published updates to both stable versions 1.0 and 1.1 after fixing many minor bugs and adding some security improvements to the 1.1 release branch. Version 1.0.6 comes with cherry-picked fixes from the more recent version to ensure proper long-term support, especially with regard to security and compatibility.

The security-related fixes in particular are:

  • XSS vulnerability in _mbox argument
  • security improvement in contact photo handling
  • potential info disclosure from temp directory

See the full changelog here.

Both versions are considered stable and we recommend updating all production installations of Roundcube to either of these versions. Download them from roundcube.net/download.

As usual, don’t forget to backup your data before updating!

And there's one more thing: please support our crowdfunding campaign for Roundcube Next, either directly or by spreading the word about it. Your help is much appreciated!


Thu, 2015-06-04 00:00

Some weeks after the official Kolab 3.4 release we finally released the Gentoo packages for Kolab 3.3 including the usual benefits like the CalDAV/iCAL ready calendar plugin and the Getmail plugin which allows fetching mails from any external email account right into your Kolab Groupware.

During this release, some things required much more work than we'd expected. To speed things up next time, we plan to cooperate more closely with the Kolab developers and the community. For example, we finally requested that the multi-driver support for the calendar plugin be pushed upstream. The required patch is currently pending and waiting for approval. Further, we had some great release-planning meetings with the Kolab guys, where they announced that they will also keep a focus on quality assurance and upgrade paths for the community version. As a first result, a detailed migration guide for Kolab 3.3 can be found here.

In the meantime we keep working on the upcoming Gentoo packages for Kolab 3.4. Included are the brand new chameleon skin and a lot of bugfixes which make the Kolab 3.4 release "probably the best quality assured stable release Kolab.org has yet performed".

Find detailed installation instructions in our wiki: https://wiki.awesome-it.de/howtos/kolab

Report bugs or patches to our Gitlab: https://gitlab.awesome-it.de/overlays/kolab/issues



roundcube
Wed, 2015-05-27 19:43

We all know the annoyance of (web) applications not doing what we expect them to do, and staring at the tumbling "Loading…" icons has become part of our daily routine. The more digital tools we use, the more sensitive we become to good user experience. UX is the big buzzword, and Roundcube Next is not only about faster development but also very much dedicated to significantly improving the way we interact with our webmail application of choice.

By using top-notch open source technologies which have proven to work for the biggest web applications out there, Roundcube Next will be the responsive, reactive and simply gorgeous email application you will want to use more than Gmail or Outlook. The core and the essentials are only the start: they build a solid email client that can connect to any mailbox and will run everywhere, from your desktop browser to the device in your pocket. But our plans go beyond email, and more perfectly integrated "apps" like calendar, chat, notes or cloud file access will follow.

A first draft – Roundcube Next on iPad

And we didn’t even mention the best part: Roundcube Next will be, just as its predecessor, free software and give you the freedom of choosing the email provider you trust and not the one who reads your mail.

Help us make Roundcube Next the webmail application every serious internet service provider simply has to install for their users. Join the move and talk to your ISP about backing our crowdfunding project and finally get that new shiny thing installed for you and everybody else!


Aaron Seigo
Wed, 2015-05-27 15:28

transactional b-trees and what-not

Over the last few months I've been reading more than the usual number of papers on a selection of software development topics that are of recent interest to me. The topics have been fairly far flung as there are a few projects I have been poking at in my free time.

By way of example, I took a couple of weeks reading about transitory trust algorithms that are resistant to manipulation, which is a pretty interesting problem with some rather elegant (partial) solutions that are actually implementable at the individual-agent level, though computationally impractical if you wish to simulate a whole network, which thankfully was not what I was interested in. (So they are reasonable for implementing real-world systems, though not for simulations or for finding definitive solutions to specific problems.)

This past week I've been reading up on a variety of B-tree algorithms. These have been around since the early 1970s and are extremely common in all sorts of software, so one might expect that after 40+ years of continuous use of such a simple concept there'd be very little left to talk about, but it's quite a vast territory. In fact, each year for the last two decades Donald Knuth has held a public lecture around Christmas-time about trees. (Yes, they are Christmas Tree Lectures. ;) Some of the papers I've been reading were published in just the last few years, with quite a bit of interesting research having gone on in this area over the last decade.

The motivation for reading up on the topic is I've been looking for a tree that is well suited to storing the sorts of indexes that Akonadi Next is calling for. They need to be representable in a form that multiple processes can access simultaneously without problems with multiple readers and (at least) one writer; they also need to be able to support transactions, and in particular read transactions so that once a query is started the data being queried will remain consistent at least until the query is complete even if an update is happening concurrently. Preferably without blocking, or at least as little blocking as possible. Bonus points for being able to roll-back transactions and keeping representations of multiple historic versions of the data in certain cases.

In the few dozen papers I downloaded onto the tablet for evening reading, I came across Transactions on the Multiversion B+-Tree which looks like it should do the trick nicely and is also (thankfully) nice and elegant. Worth a read if you're into such things.
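While the paper's actual algorithm is far more involved, the core multiversion idea can be sketched in a few lines. This is a deliberately simplified, hypothetical stand-in (plain Python, not a B+-tree): every write creates a new version, and a reader pins the version that was current when its query began, so concurrent updates never disturb a running query.

```python
class MultiversionStore:
    """Toy multiversion key-value store: readers see a frozen snapshot."""

    def __init__(self):
        self.version = 0
        # key -> list of (version, value) pairs, newest last
        self.history = {}

    def put(self, key, value):
        self.version += 1
        self.history.setdefault(key, []).append((self.version, value))

    def snapshot(self):
        # A reader remembers the current version; later writes are invisible.
        return _Snapshot(self, self.version)


class _Snapshot:
    def __init__(self, store, version):
        self.store = store
        self.version = version

    def get(self, key):
        # Return the newest value written at or before the snapshot version.
        for version, value in reversed(self.store.history.get(key, [])):
            if version <= self.version:
                return value
        return None


store = MultiversionStore()
store.put("subject", "draft")
snap = store.snapshot()
store.put("subject", "final")           # concurrent update after the read began

print(snap.get("subject"))              # the snapshot still sees "draft"
print(store.snapshot().get("subject"))  # a new reader sees "final"
```

Keeping the old versions around is also what makes roll-back and historic views cheap: they are just snapshots at earlier version numbers.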

As those who have been following Akonadi Next development know, we are using LMDB for storage and it does a very nice job of that but, unfortunately, does not provide "secondary" indexes on data, which Akonadi Next needs. Of course one can "fake" this by inserting the values to be indexed (say, the dates associated with an email or calendar event) as keys, with the value being the key of the actual entry, but this is not particularly beautiful for various reasons, including:

  • this requires manually cleaning up all indexes rather than having a way to efficiently note that a given indexed key/value pair has been removed and have the indexes cleaned up for you
  • some data sets have a rather low cardinality which would be better represented with approaches such as bitmap indexes that point to buckets (themselves perhaps trees) of matching values
  • being able to index multiple boolean flags simultaneously (and efficiently) is desirable for our use cases (think: "unread mails with attachments")
  • date range queries of the sort common in calendars ("show this month", "show this week", etc.) could also benefit from specialized indexes
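For illustration, here is roughly what that hand-rolled secondary index looks like in practice. This is a hedged sketch using plain Python dicts in place of LMDB databases, with invented names: the indexed value becomes part of a composite key, and every write and delete must maintain the index by hand.

```python
# Two plain dicts stand in for an LMDB main database and an index database.
mail_db = {}    # primary key -> mail record
date_idx = {}   # "date|primary-key" composite key -> primary key

def put_mail(key, record):
    mail_db[key] = record
    # Manually maintain the "secondary index": the date becomes part of a key.
    date_idx[f"{record['date']}|{key}"] = key

def mails_on(date):
    # A range scan over the composite keys emulates the index lookup.
    prefix = f"{date}|"
    return [mail_db[pk] for k, pk in sorted(date_idx.items())
            if k.startswith(prefix)]

def delete_mail(key):
    record = mail_db.pop(key)
    # The pain point: every index entry must be cleaned up by hand.
    del date_idx[f"{record['date']}|{key}"]

put_mail("m1", {"date": "2015-04-01", "subject": "hello"})
put_mail("m2", {"date": "2015-04-01", "subject": "again"})
put_mail("m3", {"date": "2015-04-02", "subject": "later"})

print([m["subject"] for m in mails_on("2015-04-01")])  # ['hello', 'again']
delete_mail("m1")
print([m["subject"] for m in mails_on("2015-04-01")])  # ['again']
```

It works, but every new index multiplies the bookkeeping on each write and delete, which is exactly the first bullet point above.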

I could go on. It's true that these are the sorts of features that your typical SQL database server provides "for free", but in our case it ends up being anything but "free" due to overhead and constraints on design due to schema enforcement. So I have been looking at what we might be able to use to augment LMDB with the desired features, and so the hunt for a nice B+-tree design was on. :) I have no idea what this will all lead to, if anything at all even, as it is purely an evening research project for me at the moment.

The application-facing query system in Akonadi Next is slowly making its way toward something nice, but that's another topic for another day.


bruederli's picture
Wed, 2015-05-20 11:22

While Roundcube One originates from a private fun project with email – and only email – in mind, we have learned our lessons and are committed to doing Roundcube Next right from the ground up. In the year 2015, communication combines a variety of tools we need to connect with each other. And that’s exactly what we aim to cover with the architectural design of Roundcube Next. It shall become a solid and open foundation for building communication apps on top of it. Email will certainly remain a key component as it still is the most important means of communication today. But there’s more, and therefore we want to make Roundcube Next the WordPress of communication, if you will.

After we opened Roundcube up for plugins in version 0.3, we witnessed an amazing creativity in what people start building around an open source email application. From a car dealer system to mailing list archives, many custom solutions were built on top of Roundcube. This definitely inspired us to support and facilitate this aspect in the very core of the new system.

The plugin infrastructure of Roundcube Next will be your new best friend for building web apps for your specific communication needs. The new core will provide an easy-to-use framework with lots of reusable components for both building the UI of your application as well as for synchronizing the data to the server and the underlying storage backend of your choice.

So if you’re a developer who got annoyed with the limitations of closed systems from the big vendors and you don’t want to build a complex web application from scratch, Roundcube Next deserves your attention and support. Go to https://roundcu.be/next and get yourself a backstage pass for the Roundcube Next forums or even a seat in the advisory committee. And don’t forget to spread the word about this new opportunity for the free software world.




greve's picture
Tue, 2015-05-19 09:02

If you are a user of Roundcube, you want to contribute to roundcu.be/next. If you are a provider of services, you definitely want to get engaged and join the advisory group. Here is why.

Free Software has won. Or has it? Linux is certainly dominant on the internet. Every activated Android device is another Linux kernel running. At the same time we see a shift towards “dumber” devices which are in many ways more like thin clients of the past. Only they are not connected to your own infrastructure.

Alerted by the success of Google Apps, Microsoft has launched Office 365 to drive its own transformation from a software vendor into a cloud provider. Amazon and others have also joined the race to provide your collaboration platform. The pull of these providers is already enormous. Thanks to networking effects, economies of scale, and ability to leverage deliberate technical incompatibilities to their advantage, the drawing power of these providers is only going to increase.

Open Source has managed to catch up to the large providers in most functions, bypassing them in some, being slightly behind in others. Kolab has been essential in providing this alternative especially where cloud based services are concerned. Its web application is on par with Office 365 and Google Apps in usability, attractiveness and most functions. Its web application is the only fully Open Source alternative that offers scalability to millions of users and allows sharing of all data types in ways that are superior to what the proprietary competition has to offer.

Collaborative editing, chat, voice, video – all the forms of synchronous collaboration – are next and will be added incrementally. Just as importantly, Kolab Systems will keep driving the commercial ecosystem around the solution, allowing application service providers (ASPs), institutions and users to run their own services with full professional support. And all parts of Kolab will remain Free and Open, as well as committed to the upstream, according to best Free Software principles. If you want to know what that means, please take a look at Thomas Brüderli’s account of how Kolab Systems contributes to Roundcube.

TL;DR: Around 2009, Roundcube founder Thomas Brüderli was contacted by Kolab at a time when his day job left him so little time to work on Roundcube that he had toyed with the thought of just stepping back. Kolab Systems hired the primary developers of Roundcube to finish the project, contributing in the area of 95% of all code in all releases since 0.6, driving it to its 1.0 release and beyond. At the same time, Kolab Systems carefully avoided imposing itself on the Roundcube project itself.

From a Kolab perspective, Roundcube is the web mail component of its web application.

The way we pursued its development made sure that it could be used by any other service provider or ISV. And it was. Roundcube has seen enormous adoption, with millions of downloads, hundreds of thousands of sites, and an uncounted number of users reaching into the tens of millions. According to cPanel, 62% of their users choose Roundcube as their web mail application. It’s been used in a wide range of other applications, including several service providers that offer mail services that are more robust against commercial and governmental spying. Everyone at Kolab considers this a great success, and finds it rewarding to see our technology contribute essential value to society in so many different ways.

But while adoption sky-rocketed, contribution did not grow in the same way. It’s still Kolab Systems driving the vast majority of all code development in Roundcube, along with a small number of occasional contributors. And as a direct result of the Snowden revelations, the development of web collaboration solutions fragmented further. There are a number of proprietary approaches, which should be self-evidently disqualified from being taken seriously based on what we have learned about how solutions get compromised. But there are also Open Source solutions.

The Free Software community has largely responded in one of two ways. Many people felt re-enforced in their opinion that people just “should not use the cloud.” Many others declared self-hosting the universal answer to everything, and started to focus on developing solutions for the crypto-hermit.

The problem with that is that it takes an all or nothing approach to privacy and security. It also requires users to become more technical than most of them ever wanted to be, and give up features, convenience and ease of use as a price for privacy and security. In my view that ignores the most fundamental lesson we have learned about security throughout the past decades. People will work around security when they consider it necessary in order to get the job done. So the adoption rate of such technologies will necessarily remain limited to a very small group of users whose concerns are unusually strong.

These groups are often more exposed, more endangered, and more in need of protection and contribute to society in an unusually large way. So developing technology they can use is clearly a good thing.

It just won’t solve the problem at scale.

To do that we would need a generic web application geared towards all of tomorrow’s form factors and devices. It should be collaboration centric and allow deployment in environments from a single user to hundreds of millions of users. It should enable meshed collaboration between sites, be fun to use, elegant, beautiful, and provide security in a way that does not get into the user’s face.

Fully Free Software, that solution should be the generic collaboration application that could become in parts or as a whole the basis for solutions such as mailpile, which focus on local machine installations using extensive cryptography, intermediate solutions such as Mail-in-a-Box, all the way to generic cloud services by providers such as cPanel or Tucows. It should integrate all forms of on-line collaboration, make use of all the advances in usability for encryption, and be able to grow as technology advances further.

That, in short, is the goal Kolab Systems has set out to achieve with its plans for Roundcube Next.

While we can and of course will pursue that goal independently in incremental steps, we believe that would be missing two rather major opportunities. The first is the opportunity to tackle this together, as a community. We have a lot of experience, a great UI/UX designer excited about the project, and many good ideas.

But we are not omniscient and we also want to use this opportunity to achieve what Roundcube 1.0 has not quite managed to accomplish: To build an active, multi-vendor community around a base technology that will be fully Open Source/Free Software and will address the collaborative web application need so well that it puts Google Apps and Office 365 to shame and provides that solution to everyone. And secondly, while incremental improvements are immensely powerful, sometimes leapfrogging innovation is what you really want.

All of that is what Roundcube Next really represents: The invitation to leapfrog all existing applications, as a community.

So if you are a user that has appreciated Roundcube in the past, or a user who would like to be able to choose fully featured services that leave nothing to be desired but do not compromise your privacy and security, please contribute to pushing the fast forward button on Roundcube Next.

And if you are an Application Service Provider, but your name is not Google, Microsoft, Amazon or Apple, Roundcube Next represents the small, strategic investment that might just put you in a position to remain competitive in the future. Become part of the advisory group and join the ongoing discussion about where to take that application, and how to make it reality, together.



Aaron Seigo's picture
Wed, 2015-05-06 11:37

Today at 13:00 UTC I will be hosting a Google+ Hangout with Roundcube founder and lead developer, Thomas Brüderli. I will link the video below once we are done, but everyone is welcome to join us live and provide feedback and questions in IRC while we're chatting.

So, what are we going to talk about? Well, Roundcube, of course! :) I'll be asking Thomas why he decided that now was the appropriate time for a refactor of Roundcube, what it means for Roundcube 1.x (the current stable release), and if we have time we'll start tucking into the current feature and design thinking.

So come join us on the Roundcube G+ page / Youtube channel as well as the #roundcube channel on irc.freenode.net today at 13:00 UTC!

Hope to see you all there!

Update: The video is up on Youtube, with some blank airtime (and a fun moment of feedback) edited out .. you can watch it below:


roundcube's picture
Mon, 2015-05-04 13:38

It all started with this hypothetical question: how would we implement Roundcube if we could start over again? And now this idea has already grown into a concrete plan for how to create the responsive, fast and beautiful successor of Roundcube.

The architectural changes necessary for this are clearly too big to be applied to the current Roundcube codebase without breaking the compatibility for most plugins and extensions. So we won’t take that risky path but rather define Roundcube One as feature complete and focus on a new core engine for the future Roundcube webmail application. This will enable everybody to participate in the process of reshaping the architecture and to adapt the existing plugins to the new API as we go along.

There’s no doubt that such a major refactoring is a huge endeavor and requires a substantial effort in concepts, development and testing. It is nothing to be done over a weekend, but we also don’t want to spend another 10 years to make this become reality. Luckily we have strong partners and supporters to push this forward. Kolab Systems has offered to drive this project by contributing its well established software development capabilities, from project management and developer power to QA and testing. In addition to that, the folks at Kolab Digital can’t wait to share their expertise on the UX and design part. However, such a level of professionalism also comes with a price.

Getting help from the crowd to back this

In order to enable both Kolab Systems and Kolab Digital to actually assign the necessary resources to the “Roundcube Next” project, we sat together and decided that it would make sense to reach out to the entire Roundcube community to help make this happen. Yesterday, we proudly announced the crowd funding campaign at the end of the Kolab Summit in The Hague.

The Funding StepsTogether, we can make this a great success! Please help spread the word, back the campaign with a pledge, and join us for what is going to be a fantastic journey. Regular updates will be posted to the crowd funding page, and we are excited to make the run to our initial goal and beyond with you!




roundcube's picture
Sun, 2015-05-03 19:00

Roundcube proudly announces the crowd funding campaign to bring our vision of a better email experience to reality.

The web has evolved a lot in the last decade, and we want Roundcube to take full advantage of the best web technologies available today. Therefore it’s time for a dramatic change to the Roundcube architecture and also to rethink email in general: how it’s used today and how we could use the new technologies to give the best user experience to everyday communication.

Applying what we’ve learned from our first 10 years of experience developing Roundcube, we have been working on a development plan for how to achieve our new goals. And in order to finally make this happen, we also need your support to drive the professional software development process behind this plan.

Please join the fun at roundcu.be/next and support our crowd funding campaign either directly or by simply spreading the word about it.

Roundcube Next Campaign Video


Aaron Seigo's picture
Sun, 2015-05-03 17:36

Today we closed out the first (and quite successful) Kolab Summit in front of both the Kolab and openSUSE attendees with some really big news: the Roundcube team has launched a significant new development project to give Roundcube, the world's most popular free software webmail system, a modern fluid "single-page" user interface. The UI will be rendered entirely in the browser, and the server will only do minimal business logic in support of that.

The focus is on modularity (to make it easier to extend Roundcube's core features), scalability, and deployability. At the same time, the Roundcube team needs to maintain the current version (we have commitments to clients and users that stretch years into the future) as well as build a migration strategy to the new version when it becomes available. Thomas, the founder and project lead for Roundcube, gave a great presentation explaining the whole thing.

As you might imagine, achieving these goals will involve refactoring nearly the entire codebase. We plan to commit three developers along with a UI designer to the project with support of the Kolab Systems project management infrastructure and staff.

So this is a pretty big project, but quite achievable. While discussing how best to make this all happen, the Roundcube team decided that it would make sense to reach out to the entire Roundcube user community to help make this happen, and therefore launched a crowd funding campaign today at Indiegogo.

Quite a way to close out the conference!

http://igg.me/at/roundcubenext

Together, we can make this a great success! Please help spread the word, back the campaign with a pledge, and join us for what is going to be a fantastic journey. Regular updates will be posted to the crowdfunding page, and we are excited to make the run to our initial goal of $80,000 with you!


Aaron Seigo's picture
Sun, 2015-05-03 14:17

On the first day of the Kolab Summit we announced that Kolab is getting full extended MAPI support. That was in itself a pretty fantastic announcement, but it was accompanied by announcements of instant messaging, WebRTC and collaborative editing.

Here is a picture which I think captures what the LibreOffice and WebODF people think about this direction, captured over lunch today:


Aaron Seigo's picture
Sat, 2015-05-02 11:04

Kolab Summit - Day 1

Yesterday I delivered a keynote at the openSUSE conference about the best feature of Free software: freedom. This is a message that is easy to lose sight of in the maker/creator community around free software given the understandable focus on business goal metrics such as market penetration, developer adoption, innovation rates, etc. You can see my slides here, and the video of the presentation will be uploaded later by the conference team. (I'll link to it when it appears.) The questions after the presentation were excellent as well and the conversations continued out into the hallways afterwards.

Kolab Summit - Day 1

That was yesterday. Today, the Kolab Summit began. Georg Greve kicked things off by sharing the vision for Kolab this year (slides here).

He covered three areas of focus for Kolab this year:

  1. Real-time collaboration: IM, WebRTC, document editing. This will allow us to complement the existing asynchronous communication Kolab excels at (email, calendaring, notes, files, etc.) with synchronous, collaborative editing.
  2. User experience refactor: major work is being done with the Kolab clients, in particular the Roundcube client. The goal here is to surpass what is available elsewhere in the market to keep free software as a leader in this area.
  3. Full extended MAPI support. Yes, Kolab will be able to support Outlook out of the box. Fully. The lead OpenChange developer is here to discuss this further later in the summit.

There are many other projects we are digging into significantly, and Kolab's system architect, Jeroen van Meeuwen, followed Georg with a technical roadmap overview. He not only filled in the details behind the three focus areas Georg highlighted, but shared our road map for data loss prevention, multi-factor authentication, in-web-browser encryption, Akonadi Next for the desktop client ... in short we're very, very busy.

Everything we are doing is driven by very clear use cases from people who need these tools so that they can choose to also use free software for their collaboration needs.


Aaron Seigo's picture
Wed, 2015-04-22 14:16

In my last blog entry, I mentioned that we have been working on a comprehensive data loss prevention (DLP) and audit trail system for use with Kolab, with the end goal being not only DLP but also a platform for business intelligence. In that entry I listed the three parts of the system, noting that I'd be writing about one at a time. I had hoped to jump on the first of those a day or two after writing the entry, but life and work intervened and then I was off on a short family vacation ... but now I'm back. So let's talk about the capture side of the system.

Kolab can be viewed as a set of cooperative microservices: smtp, imap, LDAP, spam/virus protection, invitation auto-processing, web UI, etc. etc. There are a couple dozen of these and up until now they have all done the traditional, and correct, thing of logging events to a system log.

This has numerous drawbacks, however. First, on a distributed system where different services are running on different hosts (physical or VMs), the result is data spread over many systems, which is not great for subsequent reporting. Second, at the time of logging, the events are in a "raw" state: each service likely does not know about the rest of the Kolab services and thus how its events relate to the whole system. Third, with logs going through the host systems, it is difficult to ensure that they are not easily tampered with; this can be somewhat alleviated by setting up remote logging, but that only goes so far. Finally, logging tends to be a firehose of data, and for our specific interests here we want a very specific sub-stream of that total flow.

So we have written yet another service whose entire job is to collect events as they are generated. This service is itself distributed, allowing collection agents to be run across a cluster running a Kolab instance, and it stores its data in a dedicated key-value store which can be housed on an isolated (and specially secured, if desired) system. The program running this service is called Egara, which is Sumerian for "storehouse", and it is written in Erlang due to its robustness (this service must simply never go down), scalability and distributed communication features. The source repository can be found here. Egara itself is part of the overall DLP/auditing system we have named Bonnie.

The high-level purpose of Egara is to create a consistent and complete history of what happens to objects within the groupware system over time. An "object" might be an email, a user account, a calendar event, a tag, a note, a todo item, etc. An event (or "what happens") includes things such as new objects, deletions, setting of flags or tags, changing state (e.g. from unread to read), starting or tearing down an authenticated session, etc. In other words, its job is to create, in real-time, a complete history of who did what when. As such I've come to view it as an automated historian for your world of groupware.
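To make "who did what when" concrete, a single normalized history entry might look something like this. The field names here are invented for illustration, not Egara's actual schema; the point is that the normalizer has already resolved implicit references (a login, a folder name) into explicit, stable identifiers.

```python
import json

# A hypothetical normalized history entry: actor, object, change, and time
# are all explicit, so the record stands on its own years later.
event = {
    "type": "message.flags.changed",
    "timestamp": "2015-04-22T09:14:03Z",
    "actor": {"user_id": "u-1428", "login": "jane@example.org"},
    "object": {"message_id": "msg-99af", "folder_id": "f-inbox-7"},
    "change": {"flags_added": ["\\Seen"], "flags_removed": []},
}

print(json.dumps(event, indent=2))
```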

Egara itself is divided into three core parts:

  • incoming handlers: these components implement a standard behavior and are responsible for collecting events from a specific service (e.g. cyrus-imap) and relaying them to the core application once received
  • event normalizers: these workers process events from the new event queue and are tasked with normalizing and augmenting the data within them, creating complete point-in-time additions to the history. Many events come in with simple references to other objects, such as a mail folder; the event normalization workers need to turn those implicit bits of information into explicit links that can be reliably followed over time
  • middleware: these are mainly the bits that provide process supervision, populate and manage the shared queues of events as information arrives from incoming handlers and is processed by normalizers.
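The three parts cooperate as a simple pipeline: handlers enqueue raw service events, normalizers drain the queue and enrich each event, and the middleware owns the queue between them. A minimal single-process sketch of that flow (in Python rather than Erlang, with invented event formats and names):

```python
from queue import Queue

event_queue = Queue()             # stands in for the Mnesia-backed shared queue
FOLDER_IDS = {"INBOX": "f-001"}   # invented lookup table used by the normalizer

def imap_handler(raw_line):
    """Incoming handler: translate one service-specific event and enqueue it."""
    service, user, action, folder = raw_line.split(";")
    event_queue.put({"service": service, "user": user,
                     "action": action, "folder": folder})

def normalizer():
    """Event normalizer: drain the queue, turning implicit refs into stable ids."""
    history = []
    while not event_queue.empty():
        ev = event_queue.get()
        # Resolve the implicit folder name into an explicit identifier.
        ev["folder_id"] = FOLDER_IDS[ev.pop("folder")]
        history.append(ev)
    return history

# In Egara the middleware supervises these pieces; here we just call them.
imap_handler("cyrus-imap;jane;MessageRead;INBOX")
imap_handler("cyrus-imap;jane;MessageDeleted;INBOX")
history = normalizer()
print([e["action"] for e in history])  # ['MessageRead', 'MessageDeleted']
```

Because the handler and normalizer only meet at the queue, either side can crash and restart without the other noticing, which is the property the supervision middleware exists to guarantee.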

This all happens asynchronously and provides guarantees at each step of correct handling (inasmuch as each reporting service allows for that). This means that individual normalizers can fail in even spectacular fashion and not disrupt the system, that an admin can halt and restart the system at will without fear of losing events (save those that are generated during downtime periods, assuming a full Egara take-down), etc.

Final storage is done in a Riak database, with queues managed by the Mnesia database built into Erlang's OTP system itself. Mnesia can best be thought of as a built-in Redis: entirely in-memory (fast) with disk backing (robust); just add built-in clustering and native, first-class API for storage and retrieval (e.g. we are able to use Erlang functions to do perform updates and filtering over all or part of a queue's dataset). Data in Mnesia is stored as native Erlang records, while data in Riak is stored as JSON documents.

Incoming events may be any format and any delivery mechanism. They can be parallelized, spread across a cluster of machines ... it doesn't matter. The incoming handler is tasked with translating the stream of events into an Erlang term that can be passed on to the normalizer for processing. This allows us to extend Egara in a very easy way with new service-specific handlers to virtually any dataset we wish to keep track of within Kolab or its surroundings.

Normalizers will eventually also join this level of abstraction, though right now the sole worker implementation is specific to groupware data objects. Future releases of Egara will add support for different workers for different classes of events, giving a nice symmetry with the incoming event handlers.

The middleware is designed to be used without modification as the system grows in capability while being scalable. Multiple instances can be run across different systems and the results should (eventually) be the same. I say "eventually" since in such a system one can not guarantee the exact order of events, only the exact results after some period of time. Or, in more familiar terms, it is eventually consistent.

The whole system is quite flexible at runtime, as well. One can configure which kinds of events one cares to track; which data payloads (if any) to archive; which incoming handlers to run on a given node, etc. This will expand over time as well to allow normalizers and their helpers to be quarantined to specific systems within a cluster.

Egara works nicely with Kolab 3.4 and Kolab Enterprise 14, though Bonnie is not officially a part of either. I expect the entire system will be folded into a future Kolab release to ease usage. It will almost certainly remain an optional component, however: not everyone needs these features, and if you don't, there's no reason to pay the runtime overhead and maintenance cost.

That's a "50,000 foot" view of the historian component of Bonnie. The next installments in this blog series will look a bit closer at the storage model, history querying and replayability and, finally, what this means for end-users and organizations running Kolab with the Bonnie suite.


Aaron Seigo
Wed, 2015-04-01 08:58

Working with Kolab has kept me busy on numerous fronts since I joined near the end of last year. There is the upcoming Kolab Summit, refreshing Kolab Systems' messaging, helping with progress around Kolab Now, collaborating on development process improvement, working on the design and implementation of Akonadi Next, the occasional sales engineering call ... so I've been kept busy, and I've been able to work with a number of excellent people in the process, both at Kolab Systems and in the Kolab community at large.

While much of that list of topics doesn't immediately bring "writing code" to mind, I have had the opportunity to work on a few "hands on keyboard, writing code" projects. Thankfully. ;)

One of the more interesting ones, at least to me, has been work on an emerging data loss prevention and audit trail system for Kolab called Bonnie. It's one of those things that companies and governmental users tend to really want, but which is fairly non-trivial to achieve.

There are, in broad strokes, three main steps in such a system:

  1. Capturing and recording events
  2. Storing data payloads associated with those events
  3. Recreating histories which can be reviewed and even restored from

I've been primarily working on the first two items, while a colleague has been focusing on the third. Since each of these points is a relatively large topic in its own right, I'll cover each individually in subsequent blog entries.

We'll start in the next blog entry by looking at event capture and storage, why it is necessary (as opposed to simply combing through system logs, e.g.) and what we gain from it. I'll also introduce one of the Bonnie components, Egara, which is responsible for this set of functionality.


Aaron Seigo
Fri, 2015-03-27 18:47

Today "everyone" is online in one form or another, and it has transformed how many people connect, communicate, share and collaborate with others. To think that the Internet really only hit the mainstream some 20 years ago. It has been an amazingly swift and far-reaching shift that has touched people's personal and professional lives.

So it is no surprise that the concept of eGovernment is a hot one and much talked about. However, the reality on the ground is that governments tend not to be the swiftest sort of organizations when it comes to adopting change. (Which is not a bad thing; but that's a topic for another blog perhaps.) Figuring out how to modernize the communication and interaction of government with their constituencies seems to largely still be in the future. Even in countries where everyone is posting pictures taken on their smartphones of their lunch to all their friends (or the world ...), governments seem to still be trying to figure out how to use the Internet as an effective tool for democratic discourse.

The Netherlands is a few steps ahead of most, however. They have an active social media presence which is used by numerous government offices to collaborate with each other as well as to interact with the populace. Best of all, they aren't using a proprietary, lock-in platform hosted by a private company overseas somewhere. No, they use a free software social media framework that was designed specifically for this: Pleio.

They have somewhere around 100,000 users of the system and it is both actively used and developed to further the aims of the eGovernment initiative. It is, in fact, an initiative of the Programme Office 2.0 with the Treasury department, making it a purposeful program rather than simply a happy accident.

In their own words:

The complexity of society and the needs of citizens call for an integrated service platform where officials can easily collaborate with each other and engage citizens.

In addition, hundreds of government organizations all have the same sort of functionality needed in their operations and services. At this time, each organization is still largely trying to reinvent the wheel and independently purchase technical solutions.

That could be done better. And cheaper. Happily, new resources are now available for working together government-wide in a smart way and for exchanging knowledge. Pleio is the platform for this.

Just a few days ago it was announced publicly that not only is the Pleio community hard at work on improving the platform to raise the bar yet again, but that Kolab will be a part of that effort. A joint development project has been agreed to and is now underway as part of a new Pleio pilot project. You can read more about the collaboration here.