Blogs

Posted by roundcube

Our Wish List for Encryption Browser Extensions

PGP encryption is one of the most frequently requested features for Roundcube, and for good reason: more and more people are starting to care about end-to-end encryption in their everyday communication. But unfortunately, webmail applications currently can’t fully participate in this game, and doing PGP encryption right in web-based applications isn’t a simple task. Although there are ways and even some basic implementations, all of them have their pros and cons. And yet the ultimate solution is still missing.

Browser extensions to the rescue

In our opinion, the way to go is with a browser extension that does the important work and guards the keys. A crucial point is to keep the encryption component under the user’s full control, which in the browser and HTTP world can only be provided by a native browser plugin. And the good news is, there are working extensions available today. The most prominent one is probably Mailvelope, which detects encrypted message bodies in various webmail applications and also hooks into message composition to send signed and encrypted email messages with your favorite webmail app. Another very promising tool for end-to-end encryption is coming our way: p≡p. A browser extension for it is at least planned in the longer term. And even Google just started its own project with the recently announced End-to-End Chrome extension.

That’s a good start indeed. However, the encryption capabilities of those extensions only cover the message body and leave out attachments or even PGP/MIME messages. This is mostly because the extension has limited knowledge about the webmail app and there’s no interaction between the web app and the extension. On the other side, the webmail app isn’t aware of the encryption features available in the user’s browser and therefore suppresses certain parts of a message, like signatures. Direct interaction between the webmail and the encryption extension could help add the missing pieces, like encrypted attachment upload and message signing. All we need to do is introduce the two components to each other.

Posted by Timotheus Pokorra

Proxy for Kolab yum and apt-get repositories

On the Kolab IRC we have had some issues with apt-get reporting errors like “connection failed” etc.

So I updated the blogpost from last year: http://www.pokorra.de/2013/10/downloading-from-obs-repo-via-php-proxy-file/

The Kolab Systems OBS now runs on port 80, so there is not really a need for a proxy anymore. But perhaps it still helps for debugging the apt-get commands.

I have extended the scripts to work for apt-get on Debian/Ubuntu as well; the original script apparently worked for yum only.

I have set up a small PHP script on a server somewhere on the Internet.

In my sample configuration, I use a Debian server with Lighttpd and PHP.

Install:

apt-get install lighttpd spawn-fcgi php5-curl php5-cgi

Changes to /etc/lighttpd/lighttpd.conf:

server.modules = (
        [...]
        "mod_fastcgi",
        "mod_rewrite",
)
 
fastcgi.server = ( ".php" => ((
                     "bin-path" => "/usr/bin/php5-cgi",
                     "socket" => "/tmp/php.socket",
                     "max-procs" => 2,
                     "bin-environment" => (
                       "PHP_FCGI_CHILDREN" => "16",
                       "PHP_FCGI_MAX_REQUESTS" => "10000"
                     ),
                     "bin-copy-environment" => (
                       "PATH", "SHELL", "USER"
                     ),
                     "broken-scriptfilename" => "enable"
                 )))
 
url.rewrite-once = (
    "^/obs\.kolabsys\.com/index.php" => "$0",
    "^/obs\.kolabsys\.com/(.*)" => "/obs.kolabsys.com/index.php?page=$1"
)

And in /var/www/obs.kolabsys.com/index.php:
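
A minimal version of such a curl-based pass-through proxy could look like the following sketch (the actual script from the linked blog post may differ; error handling is kept deliberately simple, and php5-curl must be installed):

<?php
// Pass-through proxy for obs.kolabsys.com.
// The rewrite rule above hands the requested path over as ?page=...
$page = isset($_GET['page']) ? $_GET['page'] : '';

// refuse directory traversal attempts
if (strpos($page, '..') !== false) {
    header('HTTP/1.1 400 Bad Request');
    exit;
}

$url = 'http://obs.kolabsys.com/' . $page;

$ch = curl_init($url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
$data = curl_exec($ch);
$code = curl_getinfo($ch, CURLINFO_HTTP_CODE);
$type = curl_getinfo($ch, CURLINFO_CONTENT_TYPE);
curl_close($ch);

// relay status code, content type and body to the apt-get/yum client
header('HTTP/1.1 ' . $code);
if ($type) {
    header('Content-Type: ' . $type);
}
echo $data;

apt-get and yum can then be pointed at http://yourserver/obs.kolabsys.com/… instead of http://obs.kolabsys.com/…, and the script simply passes the requests through, which makes it easy to log and debug what the package manager actually asks for.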

How to monitor Kolab processes

I have been using self-hosted Kolab Groupware every day for quite a while now.
Therefore the need arose to monitor process activity and system resources using the Monit utility.


A couple of words about Monit

Monit is a simple and robust utility for monitoring and automatic maintenance, supported on Linux, BSD, and OS X.

Software installation

Debian Wheezy currently provides Monit 5.4.

To install it, execute the command:

$ sudo apt-get install monit

The Monit daemon will be started at boot time. Alternatively, you can use the standard System V init scripts to manage the service.
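
To give an idea of what such a configuration looks like, here is a minimal sketch for two typical Kolab processes. The pidfile paths and init scripts are assumptions taken from a Debian system and need to be adjusted to your installation:

# /etc/monit/conf.d/kolab
# watch the Cyrus IMAP server used by Kolab
check process cyrus-imapd with pidfile /var/run/cyrus-master.pid
    start program = "/etc/init.d/cyrus-imapd start"
    stop program = "/etc/init.d/cyrus-imapd stop"
    if failed host localhost port 143 protocol imap then restart

# watch Postfix
check process postfix with pidfile /var/spool/postfix/pid/master.pid
    start program = "/etc/init.d/postfix start"
    stop program = "/etc/init.d/postfix stop"
    if failed host localhost port 25 protocol smtp then restart

After changing the configuration, monit -t checks the syntax and sudo service monit restart activates it.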

Posted by mollekopf

Putting the code where it belongs

I have been working on better ways to write asynchronous code. In this post I’m going to analyze one of our current tools, KJob, looking at how it helps us write asynchronous code and what is missing. I’m then going to present my prototype solution to address these problems.

KJob

In KDE we have the KJob class to wrap asynchronous operations. KJob gives us a framework for progress and error reporting and a uniform start method, and by subclassing it we can easily write our own reusable asynchronous operations. Such an asynchronous operation typically takes a couple of arguments and returns a result.

A KJob, in its simplest form, is the asynchronous equivalent of a function call:

int doSomething(int argument) {
    return getNumber(argument);
}
struct DoSomething : public KJob {
    DoSomething(int argument): mArgument(argument){}

    void start() {
        KJob *job = getNumberAsync(mArgument);
        connect(job, SIGNAL(result(KJob*)), this, SLOT(onJobDone(KJob*)));
        job->start();
    }

    int mResult;
    int mArgument;

private slots:
    void onJobDone(KJob *job) {
        mResult = job->result;
        emitResult();
    }
};

What you’ll notice immediately is that this involves a lot of boilerplate code. It also introduces a lot of complexity into a seemingly trivial task. This is partially because we have to create a class when we actually wanted a function, and partially because we have to use class members to replace the stack variables that we don’t have available during an asynchronous operation.

So while KJob gives us a tool to wrap asynchronous operations in a way that makes them reusable, it comes at the cost of quite a bit of boilerplate code. It also means that what can be written synchronously in a simple function requires a class when the same code is written asynchronously.

Inversion of Control

A typical operation is of course slightly more complex than doSomething, and often consists of several (asynchronous) operations itself.

What in imperative code looks like this:

int doSomethingComplex(int argument) {
    return operation2(operation1(argument));
}

…results in an asynchronous operation that is scattered over multiple result handlers somewhat like this:
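
The following is only a sketch in the same illustrative pseudocode style as the example above; operation1Async and operation2Async are assumed asynchronous counterparts of operation1 and operation2, so the listing from the original post may differ:

struct DoSomethingComplex : public KJob {
    DoSomethingComplex(int argument): mArgument(argument){}

    void start() {
        // start the first operation and continue in its result handler
        KJob *job = operation1Async(mArgument);
        connect(job, SIGNAL(result(KJob*)), this, SLOT(onOperation1Done(KJob*)));
        job->start();
    }

    int mResult;
    int mArgument;

private slots:
    void onOperation1Done(KJob *job) {
        // feed the intermediate result into the second operation
        KJob *job2 = operation2Async(job->result);
        connect(job2, SIGNAL(result(KJob*)), this, SLOT(onOperation2Done(KJob*)));
        job2->start();
    }

    void onOperation2Done(KJob *job) {
        mResult = job->result;
        emitResult();
    }
};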

Posted by roundcube

Update 1.0.3 released

We’re proud to announce the next service release of the stable version 1.0.
It contains some bug fixes and improvements we considered important for the
long-term support branch of Roundcube.

It’s considered stable and we recommend updating all production installations
of Roundcube to this version. Download it from roundcube.net/download,
and see the full changelog here.

Please make a backup before updating!
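
For reference, an in-place update with the bundled installto.sh script typically looks something like the following sketch; the paths are just examples, so adjust them to your installation:

# after backing up your existing installation and database
tar xzf roundcubemail-1.0.3.tar.gz
cd roundcubemail-1.0.3
bin/installto.sh /var/www/roundcube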

Posted by tobru

CASino with Kolab LDAP backend



“CASino is an easy to use Single Sign On (SSO) web application written in Ruby”

It supports different authentication backends, one of them being LDAP. It works very well with the
LDAP backend of Kolab. Just put the following configuration snippet into
your config/cas.yml:

production:
  authenticators:
    kolab:
      authenticator: 'LDAP'
      options:
        host: 'localhost'
        port: 389
        base: 'ou=People,dc=mydomain,dc=tld'
        username_attribute: 'uid'
        admin_user: 'uid=kolab-service,ou=Special Users,dc=mydomain,dc=tld'
        admin_password: 'mykolabservicepassword'
        extra_attributes:
          email: 'mail'
          fullname: 'uid'

You are now able to sign in using your Kolab uid and manage SSO users with the nice
Kolab Webadmin LDAP frontend.

Posted by Timotheus Pokorra

Installing Demo Version of Kolab 3.3 with Docker

This describes how to install a docker image of Kolab.

Please note: this is not meant to be for production use. The main purpose is to provide an easy way for demonstration of features and for product validation.

This installation has not been tested a lot, and could still use some fine tuning. This is just a demonstration of what could be done with Docker for Kolab.

Preparing for Docker
I am using a Jiffybox provided by DomainFactory for downloading a Docker container for Kolab 3.3 running on CentOS 6.

I have installed Ubuntu 12.04 LTS on a Jiffybox.
I am therefore following the Docker installation instructions for Ubuntu:

Install a kernel that is required by Docker:

sudo apt-get update
sudo apt-get install linux-image-generic-lts-raring linux-headers-generic-lts-raring

After that, in the JiffyBox admin website, select the custom kernel Bootmanager 64 Bit (pvgrub64); see also the German JiffyBox FAQ. Then restart your JiffyBox.

After the restart, uname -a should show something like:

Linux j89610.servers.jiffybox.net 3.8.0-37-generic #53~precise1-Ubuntu SMP Wed Feb 19 21:37:54 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

Now install Docker:
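
At the time, the Docker documentation for Ubuntu pointed to the curl-based installer script. A sketch of that step (check the current Docker documentation for the up-to-date procedure):

curl -sSL https://get.docker.io/ubuntu/ | sudo sh

# verify that the Docker daemon is running
sudo docker version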

Posted by Timotheus Pokorra

Building a Docker container for Kolab 3.3 on Jiffybox

This article is an update of the previous post that built a Docker container for Kolab 3.1: Building a Docker container for Kolab on Jiffybox (March 2014)

Preparation
I am using a Jiffybox provided by DomainFactory for building a Docker container for Kolab 3.3 running on CentOS 6.

I have installed Ubuntu 12.04 LTS on a Jiffybox.
I am therefore following the Docker installation instructions for Ubuntu:

Install a kernel that is required by Docker:

sudo apt-get update
sudo apt-get install linux-image-generic-lts-raring linux-headers-generic-lts-raring

After that, in the JiffyBox admin website, select the custom kernel Bootmanager 64 Bit (pvgrub64); see also the German JiffyBox FAQ. Then restart your JiffyBox.

After the restart, uname -a should show something like:

Linux j89610.servers.jiffybox.net 3.8.0-37-generic #53~precise1-Ubuntu SMP Wed Feb 19 21:37:54 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

Now install Docker:

Posted by roundcube

Roundcube Next – If we would start over again

Roundcube indeed became a huge success story, with tens of thousands of installations worldwide. Something I never expected back in 2005 when I started the project as a fresh alternative to the well established but already aged free webmail packages like SquirrelMail or Horde IMP. And now, some 9 years later, we find ourselves in a similar position to the ones we previously wanted to replace. Although we managed to adapt the Roundcube codebase to the ongoing technological innovations, the core architecture is still ruled by the concepts which seemed right back when we started. And we’re talking about building a web app for IE 5 and Netscape 6, when browsers weren’t as capable and performant as they are today, when the term AJAX was not yet known, and when we didn’t have nifty libraries such as jQuery or Backbone.js at hand.

It happens more and more often that, when discussing the implementation of new features for Roundcube, we find ourselves saying “Oh man, that’s going to be an expensive endeavor to squeeze this into our current architecture! If we could just…”. This doesn’t mean that the entire codebase is crap, not at all! But sometimes you just silently wish you could give the core a fresh touch which respects the increased requirements and expectations. And that’s the challenge of every software product that has been around for a while and is still under intensive development.

When looking around, I see inspiring new webmail projects slowly emerging which don’t carry the legacy of a software product designed almost a decade ago. I’m truly happy about this development and I appreciate the efforts of honest coders to create the next generation of free webmail software. On the other hand it also makes me a bit jealous to see others starting from scratch and building fast and responsive webmail clients like Mailpile or RainLoop which make Roundcube look like the old dinosaur. Although they’re not yet as feature rich as Roundcube, the core concepts are very convincing and perfectly fit the technological environment we find ourselves in today.

AMaViS, SpamAssassin, roundcube and mySQL – a wedding story

Some time ago I blogged about fighting spam with amavis for the Kolab community. Now the story continues with the Roundcube integration with amavis.

As mentioned earlier, SpamAssassin is able to store recipient-based preferences in a MySQL table, given some settings in its local.cf (see the SpamAssassin wiki):

# Spamassassin for Roundcubemail
# http://www.tehinterweb.co.uk/roundcube/#pisauserprefs
user_scores_dsn DBI:mysql:ROUNDCUBEMAILDBNAME:localhost:3306
user_scores_sql_password ROUNDCUBEMAILPASSWORD
user_scores_sql_username ROUNDCUBEMAILDBUSERNAME
user_scores_sql_custom_query SELECT preference, value FROM _TABLE_ WHERE username = _USERNAME_ OR username = '$GLOBAL' OR username = CONCAT('%',_DOMAIN_) ORDER BY username ASC
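
The _TABLE_ placeholder refers to the userpref table. The schema shipped with SpamAssassin’s SQL documentation looks roughly like this (column sizes may vary between versions):

CREATE TABLE userpref (
  username   varchar(100) NOT NULL default '',
  preference varchar(50)  NOT NULL default '',
  value      varchar(100) NOT NULL default '',
  prefid     int(11)      NOT NULL auto_increment,
  PRIMARY KEY (prefid),
  INDEX (username)
);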

However, accessing this with amavis is a really big problem for many users. Amavis has its own user-based configuration policies, but email plugins such as the Roundcube plugin sauserprefs often only use SpamAssassin and not amavis. Originally, SA was only called once per message by amavis, and therefore recipient-based preferences were not possible at all. This has changed. Now you can use the options @sa_userconf_maps and @sa_username_maps to perform such lookups. Unfortunately these options are still poorly documented. We use them anyway.

The values in @sa_userconf_maps define where amavis has to look for the user preferences. I use mySQL lookups for all recipient addresses.
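
A sketch of what this can look like in amavisd.conf follows; the exact regular expression and the way SpamAssassin resolves the SQL connection are assumptions to adapt, the amavisd-new release notes have the authoritative description:

# let SpamAssassin load per-user preferences via SQL (user_scores_dsn in local.cf) for all recipients
@sa_userconf_maps = ({
  '.' => 'sql:',
});

# pass the full recipient address to SpamAssassin as the username used for the lookup
@sa_username_maps = new_RE(
  [ qr'^([^@]+@.*)'i => '${1}' ]
);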