Planet

Wed, 2014-10-29 00:00

Recently, I noticed a "Failed to save changes" error when trying to move events between distinct calendars.
After a short investigation I found that this bug is already fixed upstream, but not packaged for an easy upgrade, so I will briefly describe how to apply the fix on Debian Wheezy with Kolab 3.3.

The above-mentioned bug is already fixed in the roundcubemail-plugins-kolab repository.
You can jump directly to the a3d5f717 commit and read the details.

Quick Remedy

We need to replace the calendar and libkolab plugins.

Download the code from the already fixed roundcubemail-plugins-kolab repository to the /tmp directory.

# cd /tmp
# wget http://git.kolab.org/roundcubemail-plugins-kolab/snapshot/roundcubemail-plugins-kolab-a3d5f717a2250cfbd7a5652a445adcd6a0845545.tar.gz

Rename the roundcubemail plugin directories that will be replaced in the next step.

# mv /usr/share/roundcubemail/plugins/{calendar,calendar.before_fix}
# mv /usr/share/roundcubemail/plugins/{libkolab,libkolab.before_fix}
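The {old,new} part of the mv commands is Bash brace expansion, not mv syntax; the shell expands it into the two full paths before mv runs. Prefixing the command with echo shows the expansion:

```shell
# In bash, brace expansion turns the pattern into two arguments,
# so mv receives the full source and destination paths
echo mv /usr/share/roundcubemail/plugins/{calendar,calendar.before_fix}
```

In Bash this prints `mv /usr/share/roundcubemail/plugins/calendar /usr/share/roundcubemail/plugins/calendar.before_fix`.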

Extract both plugins from the downloaded archive.

# tar xvfz roundcubemail-plugins-kolab-a3d5f717a2250cfbd7a5652a445adcd6a0845545.tar.gz roundcubemail-plugins-kolab-a3d5f717a2250cfbd7a5652a445adcd6a0845545/plugins/{calendar,libkolab} --strip-components 2
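The --strip-components 2 option drops the first two path elements (the snapshot directory and plugins/) so the plugin directories land directly in /tmp. A minimal reproduction with stand-in names:

```shell
# Recreate the nested layout of the snapshot archive with dummy names,
# then extract with --strip-components 2 as above
mkdir -p /tmp/stripdemo/src/snapshot-abc/plugins/calendar
touch /tmp/stripdemo/src/snapshot-abc/plugins/calendar/calendar.php
cd /tmp/stripdemo/src
tar czf ../snapshot.tar.gz snapshot-abc
cd /tmp/stripdemo
tar xzf snapshot.tar.gz snapshot-abc/plugins/calendar --strip-components 2
ls calendar/   # calendar.php now sits in ./calendar, not snapshot-abc/plugins/calendar
```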

Install and configure the plugins.

# mv {calendar,libkolab} /usr/share/roundcubemail/plugins/
# ln -s /etc/roundcubemail/calendar.inc.php /usr/share/roundcubemail/plugins/calendar/config.inc.php
# ln -s /etc/roundcubemail/libkolab.inc.php /usr/share/roundcubemail/plugins/libkolab/config.inc.php

Remove the downloaded archive.

# rm roundcubemail-plugins-kolab-a3d5f717a2250cfbd7a5652a445adcd6a0845545.tar.gz

Simple and easy.




Roundcube
Fri, 2014-10-10 23:25

PGP encryption is one of the most frequently requested features for Roundcube, and for good reason: more and more people are starting to care about end-to-end encryption in their everyday communication. But unfortunately, webmail applications currently can't fully participate in this game, and doing PGP encryption right in web-based applications isn't a simple task. Although there are ways, and even some basic implementations, all of them have their pros and cons. The ultimate solution is still missing.

Browser extensions to the rescue

In our opinion, the way to go is a browser extension that does the important work and guards the keys. A crucial point is to keep the encryption component under the user's full control, which in the browser and HTTP world can only be provided by a native browser extension. And the good news is, there are working extensions available today. The most prominent one is probably Mailvelope, which detects encrypted message bodies in various webmail applications and also hooks into message composition to send signed and encrypted email messages from your favorite webmail app. Another very promising tool for end-to-end encryption is coming our way: p≡p; a browser extension is at least planned for the longer term. And even Google has just started its own project with the recently announced End-to-End Chrome extension.

That’s a good start indeed. However, the encryption capabilities of those extensions only cover the message body, leaving out attachments or even PGP/MIME messages, mostly because the extension has limited knowledge about the webmail app and there’s no interaction between the web app and the extension. On the other side, the webmail app isn’t aware of the encryption features available in the user’s browser and therefore suppresses certain parts of a message, like signatures. Direct interaction between the webmail and the encryption extension could help add the missing pieces, like encrypted attachment upload and message signing. All we need to do is introduce the two components to each other.

From the webmail developer’s perspective

So here’s a loose list of functionality we’d like to see exposed by an encryption browser extension, which we believe would contribute to an integrated solution for secure emailing.

A global (window.encryption-style) object providing functions to:

  • List the supported encryption technologies (PGP, S/MIME)
  • Switch to manual mode (i.e. disable automatic detection of webmail containers)

For message display:

  • Register message content area (jQuery-like selector)
  • Setters for message headers (e.g. sender, recipient)
  • Decrypt message content (String) directly
  • Validate signature (pass signature as argument)
  • Download and decrypt attachment from a given URL and
    • a) prompt for saving file
    • b) return a FileReader object for inline display
  • Bonus points: support for PGP/MIME; implies full support for MIME message structures

For message composition:

  • Setters for message recipients (or recipient text fields)
  • Register message compose text area (jQuery-like selector)
  • … or functions to encrypt and/or sign message contents (String) directly
  • Query the existence of a public key/certificate for a given recipient address
  • File selector/upload with transparent encryption
  • … or an API to encrypt binary data (from a FileReader object into a new FileReader object)

Regarding file upload for attachments to an encrypted message, some extra challenges exist in an asynchronous client-server web application: attachment encryption requires the final recipients to be known before the (encrypted) file is uploaded to the server. If the list of recipients or the encryption settings change, already-uploaded attachments become void and need to be re-encrypted and uploaded again.

And presumably that’s just one example of the possible pitfalls in this endeavor to add full-featured PGP encryption to webmail applications. Thus, dear developers of Mailvelope, p≡p, WebPG and Google, please take the above list as a source of inspiration for your further development. We’d gladly cooperate to add the missing pieces.


Timotheus Pokorra
Tue, 2014-10-07 19:02

On the Kolab IRC channel we have had some issues with apt-get reporting connection failures and similar errors.

So I updated the blogpost from last year: http://www.pokorra.de/2013/10/downloading-from-obs-repo-via-php-proxy-file/

The Kolab Systems OBS is now served on port 80, so there is not really a need for a proxy anymore. But perhaps it still helps for debugging apt-get commands.

I have extended the scripts to work with apt-get on Debian/Ubuntu as well; the original script was for yum only, it seems.

I have set up a small PHP script on a server somewhere on the Internet.

In my sample configuration, I use a Debian server with Lighttpd and PHP.

Install:

apt-get install lighttpd spawn-fcgi php5-curl php5-cgi

changes to /etc/lighttpd/lighttpd.conf:

server.modules = (
        [...]
        "mod_fastcgi",
        "mod_rewrite",
)
 
fastcgi.server = ( ".php" => ((
                     "bin-path" => "/usr/bin/php5-cgi",
                     "socket" => "/tmp/php.socket",
                     "max-procs" => 2,
                     "bin-environment" => (
                       "PHP_FCGI_CHILDREN" => "16",
                       "PHP_FCGI_MAX_REQUESTS" => "10000"
                     ),
                     "bin-copy-environment" => (
                       "PATH", "SHELL", "USER"
                     ),
                     "broken-scriptfilename" => "enable"
                 )))
 
url.rewrite-once = (
    "^/obs\.kolabsys\.com/index.php" => "$0",
    "^/obs\.kolabsys\.com/(.*)" => "/obs.kolabsys.com/index.php?page=$1"
)

and in /var/www/obs.kolabsys.com/index.php:

<?php 
 
$proxyurl="http://kolabproxy2.pokorra.de";
$obsurl="http://obs.kolabsys.com";
 
// it seems file_get_contents does not return the full page
function curl_get_file_contents($URL)
{
    $c = curl_init();
    curl_setopt($c, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($c, CURLOPT_URL, str_replace('&amp;', '&', $URL)); // decode HTML-encoded ampersands
    $contents = curl_exec($c);
    curl_close($c);
    if ($contents) return $contents;
    else return FALSE;
}
 
$page = $_GET['page'];
$filename = basename($page);
debug($page . "   ".$filename);
$content = curl_get_file_contents($obsurl."/".$page);
if (strpos($content, "Error 404") !== false) {
	header("HTTP/1.0 404 Not Found");
	die();
}
if (substr($page, -strlen("/")) === "/")
{
        # print directory listing
        $content = str_replace($obsurl."/", $proxyurl."/obs.kolabsys.com/", $content);
        $content = str_replace('href="/', 'href="'.$proxyurl.'/obs.kolabsys.com/', $content);
        echo $content;
}
else if (substr($filename, -strlen(".repo")) === ".repo")
{
        header("Content-Type: text/plain");
        echo str_replace($obsurl."/", $proxyurl."/obs.kolabsys.com/", $content);
}
else
{
#die($filename);
        header("Content-Type: application/octet-stream");
        header('Content-Disposition: attachment; filename="'.$filename.'"');
        header("Content-Transfer-Encoding: binary\n");
        echo curl_get_file_contents($obsurl."/".$page);
}
 
function debug($msg){
 if(is_writeable("/tmp/mylog.log")){
    $fh = fopen("/tmp/mylog.log",'a+');
    fputs($fh,"[Log] ".date("d.m.Y H:i:s")." $msg\n");
    fclose($fh);
  }
} 
?>
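At its core the proxy just rewrites absolute obs.kolabsys.com URLs so they point back at the proxy host. The same transformation, sketched with sed using the hostnames from the script above:

```shell
# Rewrite an OBS base URL to go through the proxy, mirroring what the
# str_replace calls in index.php do for listings and .repo files
echo 'baseurl=http://obs.kolabsys.com/repositories/Kolab:/3.3/CentOS_6/' \
  | sed 's|http://obs.kolabsys.com/|http://kolabproxy2.pokorra.de/obs.kolabsys.com/|'
```

This prints the same line with the host replaced by kolabproxy2.pokorra.de/obs.kolabsys.com.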

Now it is possible to download the repo files like this:

cd /etc/yum.repos.d/
wget http://kolabproxy2.pokorra.de/obs.kolabsys.com/repositories/Kolab:/3.3/CentOS_6/Kolab:3.3.repo
wget http://kolabproxy2.pokorra.de/obs.kolabsys.com/repositories/Kolab:/3.3:/Updates/CentOS_6/Kolab:3.3:Updates.repo
yum install kolab

For Ubuntu 14.04:

echo "deb http://kolabproxy2.pokorra.de/obs.kolabsys.com/repositories/Kolab:/3.3/Ubuntu_14.04/ ./" > /etc/apt/sources.list.d/kolab.list
echo "deb http://kolabproxy2.pokorra.de/obs.kolabsys.com/repositories/Kolab:/3.3:/Updates/Ubuntu_14.04/ ./" >> /etc/apt/sources.list.d/kolab.list
apt-get install kolab

This works for all other projects and distributions on obs.kolabsys.com too.


Tue, 2014-10-07 00:00

I have been using self-hosted Kolab Groupware every day for quite a while now,
so the need arose to monitor process activity and system resources using the Monit utility.


A couple of words about Monit

Monit is a simple and robust utility for monitoring and automatic maintenance, supported on Linux, BSD, and OS X.

Software installation

Debian Wheezy currently provides Monit 5.4.

To install it, execute:

$ sudo apt-get install monit

The Monit daemon will be started at boot time. Alternatively, you can use the standard System V init scripts to manage the service.

Initial configuration

Configuration files are located under the /etc/monit/ directory. Default settings are stored in the /etc/monit/monitrc file, which I strongly suggest reading.
Custom configuration will be stored in the /etc/monit/conf.d/ directory.

I will override several important settings using a local.conf file.

Modified settings

  • Set the email address to root@example.org
  • Slightly change the default template
  • Define the mail server as localhost
  • Set the default interval to 120 seconds with an initial delay of 180 seconds
  • Enable the local web server to take advantage of the additional functionality
    (currently commented out)

$ sudo cat /etc/monit/conf.d/local.conf
# define e-mail recipient
set alert root@example.org

# define e-mail template
set mail-format {
from: monit@$HOST
subject: monit alert -- $EVENT $SERVICE
message: $EVENT Service $SERVICE
Date:        $DATE
Action:      $ACTION
Host:        $HOST
Description: $DESCRIPTION
}

# define server
set mailserver localhost

# define interval and initial delay
set daemon 120 with start delay 180

# set web server for local management
# set httpd port 2812 and use the address localhost allow localhost

Please note that enabling the built-in web server in the way shown above would allow every local user to access and perform Monit operations. It should essentially stay disabled or be secured with a username and password combination.
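If you do want the web interface, a safer variant binds it to localhost and requires credentials; the username and password below are placeholders:

```
set httpd port 2812
    use address localhost  # listen on the loopback interface only
    allow localhost        # accept connections from localhost
    allow admin:monit      # and require this user/password pair
```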

Command-line operations

Verify configuration syntax

To check the configuration syntax, execute the following command.

$ sudo monit -t
Control file syntax OK

Start, Stop, Restart actions

Start all services and enable monitoring for them.

$ sudo monit start all

Start all services in resources group and enable monitoring for them.

$ sudo monit -g resources start 

Start rootfs service and enable monitoring for it.

$ sudo monit start rootfs

You can initiate the stop action in the same way as above, which will stop the service and disable monitoring, or just execute the restart action to stop and start the corresponding services.

Monitor and unmonitor actions

Monitor all services.

$ sudo monit monitor all

Monitor all services in resources group.

$ sudo monit -g resources monitor

Monitor rootfs service.

$ sudo monit monitor rootfs

Use the unmonitor action to disable monitoring for the corresponding services.

Status action

Print service status.

$ sudo monit status
The Monit daemon 5.6 uptime: 27d 0h 47m 

System 'server'
  status                            Running
  monitoring status                 Monitored
  load average                      [0.26] [0.43] [0.48]
  cpu                               12.8%us 2.6%sy 0.0%wa
  memory usage                      2934772 kB [36.4%]
  swap usage                        2897376 kB [35.0%]
  data collected                    Mon, 29 Sep 2014 22:47:49

Filesystem 'rootfs'
  status                            Accessible
  monitoring status                 Monitored
  permission                        660
  uid                               0
  gid                               6
  filesystem flags                  0x1000
  block size                        4096 B
  blocks total                      17161862 [67038.5 MB]
  blocks free for non superuser     7327797 [28624.2 MB] [42.7%]
  blocks free total                 8205352 [32052.2 MB] [47.8%]
  inodes total                      4374528
  inodes free                       4151728 [94.9%]
  data collected                    Mon, 29 Sep 2014 22:47:49

Summary action

Print short service summary.

$ sudo monit summary
The Monit daemon 5.6 uptime: 27d 0h 48m 

System 'server'                     Running
Filesystem 'rootfs'                 Accessible

Reload action

Reload configuration and reinitialize Monit daemon.

$ sudo monit reload

Quit action

Terminate Monit daemon.

$ sudo monit quit
monit daemon with pid [5248] killed

Monitor filesystems

The configuration syntax is very consistent and easy to grasp. I will start with a simple example and then proceed to slightly more complex ideas. Just remember to check one thing at a time.

I am using a VPS service due to its easy backup/restore process, so I have only one filesystem, on the /dev/root device, which I will monitor as a service named rootfs.

The Monit daemon will generate an alert and send an email if space or inode usage on the rootfs filesystem [stored on the /dev/root device] exceeds 80 percent of the available capacity.

$ sudo cat /etc/monit/conf.d/filesystems.conf 
check filesystem rootfs with path /dev/root
  group resources

  if space usage > 80% then alert
  if inode usage > 80% then alert

The above service is placed in resources group for easier management.

Monitor system resources

The following configuration will be stored as a service named server, as it describes resource usage for the whole mail server.

The Monit daemon will check memory usage, and if it exceeds 80% of the available capacity for three subsequent cycles, it will send an alert email.
A recovery message will be sent after two subsequent cycles to limit the number of sent messages. The same rules apply to the remaining system resources.

The system I am using has four available processors, so an alert will be generated once the five-minute load average exceeds five.

$ sudo cat /etc/monit/conf.d/resources.conf 
check system server
  group resources

  if memory usage > 80% for 3 cycles then alert
  else if succeeded for 2 cycles then alert

  if swap usage > 50% for 3 cycles then alert
  else if succeeded for 2 cycles then alert

  if cpu(wait) > 30% for 3 cycles then alert
  else if succeeded for 2 cycles then alert

  if cpu(system) > 60% for 3 cycles then alert
  else if succeeded for 2 cycles then alert

  if cpu(user) > 60% for 3 cycles then alert
  else if succeeded for 2 cycles then alert

  if loadavg(5min) > 5 then alert
  else if succeeded for 2 cycles then alert

The above service is placed in resources group for easier management.

Monitor system services

cron

cron is a daemon used to execute user-specified tasks at scheduled time.

The Monit daemon will use the specified pid file [/var/run/crond.pid] to monitor the [cron] service and restart it if it stops for any reason.
A configuration change will generate an alert message; a permission issue will generate an alert message and disable further monitoring.

A GID of 102 translates to the crontab group.
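Numeric IDs like this are easy to confirm with getent; GID 102 maps to crontab on the Debian system described here, but the mapping varies between installations:

```shell
# Resolve a numeric GID to its group name (the output depends on the
# system; on the author's Debian system GID 102 is the crontab group)
getent group 102 | cut -d: -f1
```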

$ sudo cat /etc/monit/conf.d/cron.conf 
check process cron with pidfile /var/run/crond.pid
  group system
  group scheduled-tasks

  start program = "/usr/sbin/service cron start"
  stop  program = "/usr/sbin/service cron stop"

  if 3 restarts within 5 cycles then timeout

  depends on cron_bin
  depends on cron_rc
  depends on cron_rc.d
  depends on cron_rc.daily
  depends on cron_rc.hourly
  depends on cron_rc.monthly
  depends on cron_rc.weekly
  depends on cron_rc.spool

  check file cron_bin with path /usr/sbin/cron
    group scheduled-tasks
    if failed checksum       then unmonitor
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check file cron_rc with path /etc/crontab
    group scheduled-tasks
    if failed checksum       then alert
    if failed permission 644 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check directory cron_rc.d with path /etc/cron.d
    group scheduled-tasks
    if changed timestamp     then alert
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check directory cron_rc.daily with path /etc/cron.daily
    group scheduled-tasks
    if changed timestamp     then alert
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check directory cron_rc.hourly with path /etc/cron.hourly
    group scheduled-tasks
    if changed timestamp     then alert
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check directory cron_rc.monthly with path /etc/cron.monthly
    group scheduled-tasks
    if changed timestamp     then alert
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check directory cron_rc.weekly with path /etc/cron.weekly
    group scheduled-tasks
    if changed timestamp     then alert
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check directory cron_rc.spool with path /var/spool/cron/crontabs
    group scheduled-tasks
    if changed timestamp      then alert
    if failed permission 1730 then unmonitor
    if failed uid root        then unmonitor
    if failed gid 102         then unmonitor

The above service is placed in system and scheduled-tasks groups for easier management.

rsyslogd

rsyslogd is a message logging service.

$ sudo cat /etc/monit/conf.d/rsyslogd.conf 
check process rsyslog with pidfile /var/run/rsyslogd.pid
  group system
  group logging

  start program = "/usr/sbin/service rsyslog start"
  stop  program = "/usr/sbin/service rsyslog stop"

  if 3 restarts within 5 cycles then timeout

  depends on rsyslog_bin
  depends on rsyslog_rc
  depends on rsyslog_rc.d

  check file rsyslog_bin with path /usr/sbin/rsyslogd
    group logging
    if failed checksum       then unmonitor
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check file rsyslog_rc with path /etc/rsyslog.conf
    group logging
    if failed checksum       then alert
    if failed permission 644 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check directory rsyslog_rc.d with path /etc/rsyslog.d
    group logging
    if changed timestamp     then alert	
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

The above service is placed in system and logging groups for easier management.

ntpd

The Network Time Protocol daemon check will be extended with port monitoring.

$ sudo cat /etc/monit/conf.d/ntpd.conf 
check process ntp with pidfile /var/run/ntpd.pid
  group system
  group time

  start program = "/usr/sbin/service ntp start"
  stop  program = "/usr/sbin/service ntp stop"

  if failed port 123 type udp then restart

  if 3 restarts within 5 cycles then timeout

  depends on ntp_bin 
  depends on ntp_rc 

  check file ntp_bin with path /usr/sbin/ntpd
    group time
    if failed checksum       then unmonitor
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check file ntp_rc with path /etc/ntp.conf
    group time
    if failed checksum       then alert
    if failed permission 644 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

The above service is placed in system and time groups for easier management.

OpenSSH

The OpenSSH service check will be extended with a match statement to test the contents of the configuration file. I assume it is self-explanatory.

$ sudo cat /etc/monit/conf.d/openssh-server.conf 
check process openssh with pidfile /var/run/sshd.pid
  group system
  group sshd

  start program = "/usr/sbin/service ssh start"
  stop  program = "/usr/sbin/service ssh stop"

  if failed port 22 with proto ssh then restart

  if 3 restarts within 5 cycles then timeout

  depends on openssh_bin
  depends on openssh_sftp_bin
  depends on openssh_rsa_key
  depends on openssh_dsa_key
  depends on openssh_rc

  check file openssh_bin with path /usr/sbin/sshd
    group sshd
    if failed checksum       then unmonitor
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check file openssh_sftp_bin with path /usr/lib/openssh/sftp-server
    group sshd
    if failed checksum       then unmonitor
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check file openssh_rsa_key with path /etc/ssh/ssh_host_rsa_key
    group sshd
    if failed checksum       then unmonitor
    if failed permission 600 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check file openssh_dsa_key with path /etc/ssh/ssh_host_dsa_key
    group sshd
    if failed checksum       then unmonitor
    if failed permission 600 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check file openssh_rc with path /etc/ssh/sshd_config
    group sshd
    if failed checksum       then alert
    if failed permission 644 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

    if not match "^PasswordAuthentication no" then alert
    if not match "^PubkeyAuthentication yes"  then alert
    if not match "^PermitRootLogin no"        then alert

The above service is placed in system and sshd groups for easier management.
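The three match rules amount to checking that specific lines exist in sshd_config. The same verification can be scripted by hand; a sample config file is created here so the snippet is self-contained:

```shell
# Create a sample sshd_config and verify the same three directives the
# monit match statements assert
cat > /tmp/sshd_config.sample <<'EOF'
PasswordAuthentication no
PubkeyAuthentication yes
PermitRootLogin no
EOF
for directive in 'PasswordAuthentication no' 'PubkeyAuthentication yes' 'PermitRootLogin no'; do
  grep -q "^$directive" /tmp/sshd_config.sample && echo "ok: $directive"
done
```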

Monitor Kolab services

MySQL

MySQL is an open-source database server used by a wide range of Kolab services.

A UID of 106 translates to the mysql user; a GID of 110 translates to the mysql group.

This is the first time I have used the unixsocket statement.

$ sudo cat /etc/monit/conf.d/mysql.conf 
check process mysql with pidfile /var/run/mysqld/mysqld.pid
  group kolab
  group database

  start program = "/usr/sbin/service mysql start"
  stop  program = "/usr/sbin/service mysql stop"

  if failed port 3306 protocol mysql then restart
  if failed unixsocket /var/run/mysqld/mysqld.sock protocol mysql then restart

  if 3 restarts within 5 cycles then timeout

  depends on mysql_bin
  depends on mysql_rc
  depends on mysql_sys_maint
  depends on mysql_data

  check file mysql_bin with path /usr/sbin/mysqld
    group database
    if failed checksum       then unmonitor
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check file mysql_rc with path /etc/mysql/my.cnf
    group database
    if failed checksum       then alert
    if failed permission 644 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check file mysql_sys_maint with path /etc/mysql/debian.cnf
    group database
    if failed checksum       then unmonitor
    if failed permission 600 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check directory mysql_data with path /var/lib/mysql
    group database
    if failed permission 700 then unmonitor
    if failed uid 106        then unmonitor
    if failed gid 110        then unmonitor

The above service is placed in kolab and database groups for easier management.

Apache

Apache is an open-source HTTP server, used here to serve the user/admin web interface.

Please note that I am checking the HTTPS port.

$ sudo cat /etc/monit/conf.d/apache.conf 
check process apache with pidfile  /var/run/apache2.pid
  group kolab
  group web-server

  start program = "/usr/sbin/service apache2 start"
  stop  program = "/usr/sbin/service apache2 stop"

  if failed port 443 then restart

  if 3 restarts within 5 cycles then timeout

  depends on apache2_bin
  depends on apache2_rc
  depends on apache2_rc_mods
  depends on apache2_rc_sites

  check file apache2_bin with path /usr/sbin/apache2.prefork
    group web-server
    if failed checksum       then unmonitor
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check directory apache2_rc with path /etc/apache2
    group web-server
    if changed timestamp     then alert
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor	

  check directory apache2_rc_mods with path /etc/apache2/mods-enabled
    group web-server
    if changed timestamp     then alert	
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check directory apache2_rc_sites with path /etc/apache2/sites-enabled
    group web-server
    if changed timestamp     then alert	
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

The above service is placed in kolab and web-server groups for easier management.

Kolab daemon

This is the heart of the whole Kolab unified communication and collaboration system, as it is responsible for data synchronization between the different services.

A UID of 413 translates to the kolab-n user; a GID of 412 translates to the kolab group.

$ sudo cat /etc/monit/conf.d/kolab-server.conf 
check process kolab-server with pidfile /var/run/kolabd/kolabd.pid
  group kolab
  group kolab-daemon

  start program = "/usr/sbin/service kolab-server start"
  stop  program = "/usr/sbin/service kolab-server stop"

  if 3 restarts within 5 cycles then timeout

  depends on kolab-daemon_bin
  depends on kolab-daemon_rc

  check file kolab-daemon_bin with path /usr/sbin/kolabd
    group kolab-daemon
    if failed checksum       then unmonitor
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check file kolab-daemon_rc with path /etc/kolab/kolab.conf
    group kolab-daemon
    if failed checksum       then alert
    if failed permission 640 then unmonitor
    if failed uid 413        then unmonitor
    if failed gid 412        then unmonitor

The above service is placed in kolab and kolab-daemon groups for easier management.

Kolab saslauthd

Kolab saslauthd is the SASL authentication daemon for multi-domain Kolab deployments.

$ sudo cat /etc/monit/conf.d/kolab-saslauthd.conf 
check process kolab-saslauthd with pidfile /var/run/kolab-saslauthd/kolab-saslauthd.pid
  group kolab
  group kolab-saslauthd

  start program = "/usr/sbin/service kolab-saslauthd start"
  stop  program = "/usr/sbin/service kolab-saslauthd stop"

  if 3 restarts within 5 cycles then timeout

  depends on kolab-saslauthd_bin

  check file kolab-saslauthd_bin with path /usr/sbin/kolab-saslauthd
    group kolab-saslauthd
    if failed checksum       then unmonitor
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

The above service is placed in kolab and kolab-saslauthd groups for easier management.

It can be tempting to monitor the /var/run/saslauthd/mux socket, but just leave it alone for now.

Wallace

Wallace is a content filtering daemon.

$ sudo cat /etc/monit/conf.d/wallace.conf 
check process wallace with pidfile /var/run/wallaced/wallaced.pid
  group kolab
  group wallace

  start program = "/usr/sbin/service wallace start"
  stop  program = "/usr/sbin/service wallace stop"

  #if failed port 10026 then restart

  if 3 restarts within 5 cycles then timeout

  depends on wallace_bin 

  check file wallace_bin with path /usr/sbin/wallaced
    group wallace
    if failed checksum       then unmonitor
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

The above service is placed in kolab and wallace groups for easier management.

ClamAV

The ClamAV daemon is an open-source, cross-platform antivirus engine.

$ sudo cat /etc/monit/conf.d/clamav.conf 
check process clamav with pidfile /var/run/clamav/clamd.pid
  group system
  group antivirus

  start program = "/usr/sbin/service clamav-daemon start"
  stop  program = "/usr/sbin/service clamav-daemon stop"

  if 3 restarts within 5 cycles then timeout

  #if failed unixsocket /var/run/clamav/clamd.ctl type udp then alert

  depends on clamav_bin 
  depends on clamav_rc 

  check file clamav_bin with path /usr/sbin/clamd
    group antivirus
    if failed checksum       then unmonitor
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check file clamav_rc with path /etc/clamav/clamd.conf
    group antivirus
    if failed permission 644 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

The above service is placed in system and antivirus groups for easier management.

Freshclam

Freshclam is a tool used to periodically update the ClamAV virus databases.

$ sudo cat /etc/monit/conf.d/freshclam.conf 
check process freshclam with pidfile /var/run/clamav/freshclam.pid
  group system
  group antivirus-updater

  start program = "/usr/sbin/service clamav-freshclam start"
  stop  program = "/usr/sbin/service clamav-freshclam stop"

  if 3 restarts within 5 cycles then timeout

  depends on freshclam_bin 
  depends on freshclam_rc 

  check file freshclam_bin with path /usr/bin/freshclam
    group antivirus-updater
    if failed checksum       then unmonitor
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check file freshclam_rc with path /etc/clamav/freshclam.conf 
    group antivirus-updater
    if failed permission 444 then unmonitor
    if failed uid 110        then unmonitor
    if failed gid 4          then unmonitor

The above service is placed in system and antivirus-updater groups for easier management.

amavisd-new

Amavis is a high-performance interface between the Postfix mail server and content-filtering services: SpamAssassin as a spam classifier, and ClamAV for antivirus protection.

$ sudo cat /etc/monit/conf.d/amavisd-new.conf 
check process amavisd-new with pidfile /var/run/amavis/amavisd.pid
  group kolab
  group content-filter

  start program = "/usr/sbin/service amavis start"
  stop  program = "/usr/sbin/service amavis stop"

  if 3 restarts within 5 cycles then timeout

  #if failed port 10024 type tcp then restart
  #if failed unixsocket /var/lib/amavis/amavisd.sock type udp then alert

  depends on amavisd-new_bin 
  depends on amavisd-new_rc 

  check file amavisd-new_bin with path /usr/sbin/amavisd-new
    group content-filter
    if failed checksum       then unmonitor
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check directory amavisd-new_rc with path /etc/amavis/
    group content-filter
    if changed timestamp     then alert
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor	

The above service is placed in kolab and content-filter groups for easier management.

The main Directory Server daemon

The main Directory Server daemon is the 389 Directory Server LDAP daemon.

$ sudo cat /etc/monit/conf.d/dirsrv.conf 
check process dirsrv with pidfile /var/run/dirsrv/slapd-xmail.pid
  group kolab
  group dirsrv

  start program = "/usr/sbin/service dirsrv start"
  stop  program = "/usr/sbin/service dirsrv stop"

  if 3 restarts within 5 cycles then timeout

  if failed port 389 type tcp then restart

  depends on dirsrv_bin 
  depends on dirsrv_rc 

  check file dirsrv_bin with path /usr/sbin/ns-slapd
    group dirsrv
    if failed checksum       then unmonitor
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check directory dirsrv_rc with path /etc/dirsrv/
    group dirsrv
    if changed timestamp     then alert	

The above service is placed in kolab and dirsrv groups for easier management.

SpamAssassin

SpamAssassin is a mail content filter used to identify spam.

$ sudo cat /etc/monit/conf.d/spamd.conf 
check process spamd with pidfile /var/run/spamd.pid
  group system
  group spamd

  start program = "/usr/sbin/service spamassassin start"
  stop  program = "/usr/sbin/service spamassassin stop"

  if 3 restarts within 5 cycles then timeout

  #if failed port 783 type tcp then restart

  depends on spamd_bin 
  depends on spamd_rc 

  check file spamd_bin with path /usr/sbin/spamd
    group spamd
    if failed checksum       then unmonitor
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check directory spamd_rc with path /etc/spamassassin/
    group spamd
    if changed timestamp     then alert	
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

The above service is placed in system and spamd groups for easier management.

Cyrus IMAP/POP3 daemons

The cyrus-imapd daemon is responsible for IMAP/POP3 communication.

$ sudo cat /etc/monit/conf.d/cyrus-imapd.conf 
check process cyrus-imapd with pidfile  /var/run/cyrus-master.pid
  group kolab
  group cyrus-imapd

  start program = "/usr/sbin/service cyrus-imapd start"
  stop  program = "/usr/sbin/service cyrus-imapd stop"

  if 3 restarts within 5 cycles then timeout

  if failed port 143 type tcp then restart
  if failed port 4190 type tcp then restart
  if failed port 993 type tcp then restart

  depends on cyrus-imapd_bin 
  depends on cyrus-imapd_rc 

  check file cyrus-imapd_bin with path /usr/lib/cyrus-imapd/cyrus-master
    group cyrus-imapd
    if failed checksum       then unmonitor
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check file cyrus-imapd_rc with path /etc/cyrus.conf
    group cyrus-imapd
    if failed checksum       then alert
    if failed permission 644 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

The above service is placed in kolab and cyrus-imapd groups for easier management.

Postfix

Postfix is an open-source mail transfer agent used to route and deliver electronic mail.

$ sudo cat /etc/monit/conf.d/postfix.conf 
check process postfix with pidfile /var/spool/postfix/pid/master.pid
  group kolab
  group mta

  start program = "/usr/sbin/service postfix start"
  stop program = "/usr/sbin/service postfix stop"

  if 3 restarts within 5 cycles then timeout

  if failed port 25 type tcp then restart
  #if failed port 10025 type tcp then restart
  #if failed port 10027 type tcp then restart
  if failed port 587 type tcp then restart

  depends on postfix_bin 
  depends on postfix_rc 

  check file postfix_bin with path /usr/lib/postfix/master 
    group mta
    if failed checksum       then unmonitor
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check directory postfix_rc with path /etc/postfix/
    group mta
    if changed timestamp     then alert
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor	

The above service is placed in kolab and mta groups for easier management.

Ending notes

This blog post is definitely too long, so I will just mention that a similar configuration can be used to monitor other integrated solutions, like ISPConfig, or custom specialized setups.

In my opinion, Monit is a great utility that simplifies system and service monitoring. Additionally, it provides useful proactive features, such as restarting a service or executing an arbitrary program when a selected test fails.
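For illustration, such proactive rules look like this (a hypothetical snippet, not part of the configuration above; the service name, port and script path are invented):

```
check process myapp with pidfile /var/run/myapp.pid
  start program = "/usr/sbin/service myapp start"
  stop  program = "/usr/sbin/service myapp stop"

  # proactive action: restart the service when a test fails
  if failed port 8080 type tcp then restart

  # proactive action: execute an arbitrary program on a selected test
  if totalmem > 512 MB then exec "/usr/local/bin/notify-admin.sh"
```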

Everything is described in the manual page.

$ man monit

mollekopf
Fri, 2014-10-03 01:21

I have been working on better ways to write asynchronous code. In this post I’m going to analyze one of our current tools, KJob, looking at how it helps us write asynchronous code and what is missing. I’m then going to present my prototype solution to address these problems.

KJob

In KDE we have the KJob class to wrap asynchronous operations. KJob gives us a framework for progress and error reporting and a uniform start method, and by subclassing it we can easily write our own reusable asynchronous operations. Such an asynchronous operation typically takes a couple of arguments and returns a result.

A KJob, in its simplest form, is the asynchronous equivalent of a function call:

int doSomething(int argument) {
    return getNumber(argument);
}
struct DoSomething : public KJob {
    DoSomething(int argument): mArgument(argument){}

    void start() {
        KJob *job = getNumberAsync(mArgument);
        connect(job, SIGNAL(result(KJob*)), this, SLOT(onJobDone(KJob*)));
        job->start();
    }

    int mResult;
    int mArgument;

private slots:
    void onJobDone(KJob *job) {
        mResult = job->result();
        emitResult();
    }
};

What you’ll notice immediately is that this involves a lot of boilerplate code. It also introduces a lot of complexity into a seemingly trivial task. This is partially because we have to create a class when we actually wanted a function, and partially because we have to use class members to replace variables on the stack, which are not available across an asynchronous operation.

So while KJob gives us a tool to wrap asynchronous operations in a way that they become reusable, it comes at the cost of quite a bit of boilerplate code. It also means that what can be written synchronously in a simple function, requires a class when writing the same code asynchronously.

Inversion of Control

A typical operation is of course slightly more complex than doSomething, and often consists of several (asynchronous) operations itself.

What in imperative code looks like this:

int doSomethingComplex(int argument) {
    return operation2(operation1(argument));
}

…results in an asynchronous operation that is scattered over multiple result handlers somewhat like this:

...
void start() {
    KJob *job = operation1(mArgument);
    connect(job, SIGNAL(result(KJob*)), this, SLOT(onOperation1Done(KJob*)));
    job->start();
}

void onOperation1Done(KJob *operation1Job) {
    KJob *job = operation2(operation1Job->result());
    connect(job, SIGNAL(result(KJob*)), this, SLOT(onOperation2Done(KJob*)));
    job->start();
}

void onOperation2Done(KJob *operation2Job) {
    mResult = operation2Job->result();
    emitResult();
}
...

We are forced to split the code over several functions due to the inversion of control introduced by handler-based asynchronous programming. Unfortunately these additional functions (the handlers), which we are now forced to use, do not help the program structure in any way. This also manifests itself in the rather useless function names that typically follow a pattern such as on$OperationDone() or similar. Further, because the code is scattered over functions, values that would be available on the stack in a synchronous function have to be stored explicitly as class members, so they are available in the handler where they are required for a further step.

The traditional way to make code easy to comprehend is to split it up into functions that are then called by a higher-level function. This kind of function composition is no longer possible with asynchronous programs using our current tools. All we can do is chain handler after handler. Due to the lack of a higher-level function that composes the functionality, a reader is also forced to read every single line of the code, instead of simply skimming the function names, only drilling deeper if more detailed information about the inner workings is required.
Since we are no longer able to structure the code in a useful way using functions, only classes, in our case KJobs, are left to structure the code. However, creating subjobs is a lot of work when all you need is a function, and while it helps the structure, it scatters the code even more, making it potentially harder to read and understand. Due to this we also often end up with large and complex job classes.

Last but not least, we lose all the usual control structures to the inversion of control. If you write asynchronous code you don’t have the if’s, for’s and while’s available that are fundamental to writing code. Well, obviously they are still there, but you can’t use them as usual, because you can’t plug a complete asynchronous operation inside an if{}-block. The best you can do is initiate the operation inside the imperative control structures and deal with the results later on in handlers. Because we need control structures to build useful programs, they are usually emulated by building complex state machines where each function depends on the current class state. A typical (anti)pattern of this kind is a for loop creating jobs, with a decreasing counter in the handler to check whether all jobs have been executed. These state machines greatly increase the complexity of the code, are highly error-prone, and make larger classes incomprehensible without drawing complex state diagrams (or simply staring at the screen long enough while tearing your hair out).

Oh, and before I forget, of course we also no longer get any useful backtraces from gdb, as pretty much every backtrace comes straight from the event loop and we have no clue what was happening before.

As a summary, inversion of control causes:

  • code is scattered over functions that do not help the structure
  • composing functions is no longer possible, since what would normally be written as a function must be written as a class
  • control structures are not usable; a state machine is required to emulate them
  • backtraces become mostly useless

As an analogy, your typical asynchronous class is the functional equivalent of a single synchronous function (often over 1000 loc!) that uses gotos and some local variables to build control structures. I think it’s obvious that this is a pretty bad way to write code, to say the least.

JobComposer

Fortunately, C++11 gave us a new tool: lambda functions.
Lambdas allow us to write functions inline with minimal syntactical overhead.

Armed with this I set out to find a better way to write asynchronous code.

A first obvious solution is to simply write the result handler of a slot as lambda function, which would allow us to write code like this:

make_async(operation1(), [] (KJob *job) {
    //Do something after operation1()
    make_async(operation2(job->result()), [] (KJob *job) {
        //Do something after operation2()
        ...
    });
});

It’s a simple and concise solution; however, you can’t really build reusable building blocks (like functions) with it. You’ll get one nested tree of lambdas that depend on each other by accessing the results of the previous jobs. What makes this solution non-composable is that the lambda function we pass to make_async starts the asynchronous task, but also extracts results from the previous job. Therefore you couldn’t, for instance, return an async task containing operation2 from a function (because in the same line we extract the result of the previous job).

What we require instead is a way of chaining asynchronous operations together, while keeping the glue code separated from the reusable bits.

JobComposer is my proof of concept to help with this:

class JobComposer : public KJob
{
    Q_OBJECT
public:
    //KJob start function
    void start();

    //This adds a new continuation to the queue
    void add(const std::function<void(JobComposer&, KJob*)> &jobContinuation);

    //This starts the job, and connects to the result signal. Call from continuation.
    void run(KJob*);

    //This starts the job, and connects to the result signal. Call from continuation.
    //Additionally an error-case continuation can be provided that is called in case of error, and that can be used to determine whether further continuations should be executed or not.
    void run(KJob*, const std::function<bool(JobComposer&, KJob*)> &errorHandler);

    //...
};

The basic idea is to wrap each step using a lambda-function to issue the asynchronous operation. Each such continuation (the lambda function) receives a pointer to the previous job to extract results.

Here’s an example how this could be used:

auto task = new JobComposer;
task->add([](JobComposer &t, KJob*){
    KJob *op1Job = operation1();
    t.run(op1Job, [](JobComposer &t, KJob *job) {
        kWarning() << "An error occurred: " << job->errorString();
    });
});
task->add([](JobComposer &t, KJob *job){
    KJob *op2Job = operation2(static_cast<Operation1*>(job)->result());
    t.run(op2Job, [](JobComposer &t, KJob *job) {
        kWarning() << "An error occurred: " << job->errorString();
    });
});
task->add([](JobComposer &t, KJob *job){
    kDebug() << "Result: " << static_cast<Operation2*>(job)->result();
});
task->start();

What you see here is the equivalent of:

int tmp = operation1();
int res = operation2(tmp);
kDebug() << res;

There are several important advantages of using this over writing traditional asynchronous code using only KJob:

  • The code above, which would normally be spread over several functions, can be written within a single function.
  • Since we can write all code within a single function we can compose functions again. The JobComposer above could be returned from another function and integrated into another JobComposer.
  • Values that are required for a certain step can either be extracted from the previous job, or simply captured in the lambda functions (no more passing of values as members).
  • You only have to read the start() function of a job written this way to get an idea of what is going on, not the complete class.
  • A “backtrace” functionality could be built into JobComposer that would allow getting useful information about the state of the program even though we’re in the event loop.

This is of course only a rough prototype, and I’m sure we can craft something better. But at least in my experiments it proved to work very nicely.
What I think would also be useful is a couple of helper jobs that replace the missing control structures, such as a ForeachJob which triggers a continuation for each result, or a job that executes tasks in parallel (instead of serially, as JobComposer does).

As a little showcase I rewrote a job of the imap resource.
You’ll see a bit of function composition, a ParallelCompositeJob that executes jobs in parallel, and you’ll notice that only relevant functions are left and all class members are gone. I find the result a lot better than the original, and the refactoring was trivial and quick.

I’m quite certain that if we build these tools, we can vastly improve our asynchronous code, making it easier to write, read, and debug.
And I think it’s past time we built proper tools.


roundcube
Mon, 2014-09-29 02:00

We’re proud to announce the next service release to the stable version 1.0.
It contains some bug fixes and improvements we considered important for the
long-term support branch of Roundcube.

It’s considered stable and we recommend updating all production installations
of Roundcube to this version. Download it from roundcube.net/download;
see the full changelog here.

Please do backup before updating!


tobru
Sat, 2014-09-27 00:00


“CASino is an easy-to-use Single Sign-On (SSO) web application written in Ruby.”

It supports different authentication backends, one of them being LDAP. It works very well with the
LDAP backend of Kolab. Just put the following configuration snippet into
your config/cas.yml:

production:
  authenticators:
    kolab:
      authenticator: 'LDAP'
      options:
        host: 'localhost'
        port: 389
        base: 'ou=People,dc=mydomain,dc=tld'
        username_attribute: 'uid'
        admin_user: 'uid=kolab-service,ou=Special Users,dc=mydomain,dc=tld'
        admin_password: 'mykolabservicepassword'
        extra_attributes:
          email: 'mail'
          fullname: 'uid'

You are now able to sign in using your Kolab uid and manage SSO users with the nice
Kolab Webadmin LDAP frontend.

CASino with Kolab LDAP backend was originally published by Tobias Brunner at tobrunet.ch Techblog on September 27, 2014.


Timotheus Pokorra
Wed, 2014-09-17 12:33

This describes how to install a docker image of Kolab.

Please note: this is not meant for production use. The main purpose is to provide an easy way to demonstrate features and validate the product.

This installation has not been tested much and could still use some fine-tuning. It is just a demonstration of what can be done with Docker for Kolab.

Preparing for Docker
I am using a Jiffybox provided by DomainFactory for downloading a Docker container for Kolab 3.3 running on CentOS 6.

I have installed Ubuntu 12.04 LTS on a Jiffybox.
I therefore follow the Docker installation instructions for Ubuntu:

Install a kernel that is required by Docker:

sudo apt-get update
sudo apt-get install linux-image-generic-lts-raring linux-headers-generic-lts-raring

After that, in the admin website of JiffyBox, select the custom kernel Bootmanager 64 Bit (pvgrub64); see also the German JiffyBox FAQ. Then restart your JiffyBox.

After the restart, uname -a should show something like:

Linux j89610.servers.jiffybox.net 3.8.0-37-generic #53~precise1-Ubuntu SMP Wed Feb 19 21:37:54 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

Now install docker:

sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9
 
sudo sh -c "echo deb http://get.docker.io/ubuntu docker main > /etc/apt/sources.list.d/docker.list"
 
sudo apt-get update
sudo apt-get install lxc-docker

Install container
The image for the container is available here:
https://index.docker.io/u/tpokorra/kolab33_centos6/
If you want to know how this image was created, read my other blog post http://www.pokorra.de/2014/09/building-a-docker-container-for-kolab-3-3-on-jiffybox/.

To install this image, you need to type in this command:

docker pull  tpokorra/kolab33_centos6

You can create a container from this image and run it:

MYAPP=$(sudo docker run --name centos6_kolab33 -P -h kolab33.test.example.org -d -t -i tpokorra/kolab33_centos6)

You can see all your containers:

docker ps -a

You now have to attach to the container, and inside the container start the services:

docker attach $MYAPP
  /root/start.sh

It should be possible to start the services automatically at startup, but I did not get it to work with CMD or ENTRYPOINT.
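For reference, a hedged sketch of what an automatic start might look like in the Dockerfile (untested here; the log path is only an example). The usual pitfall is that Docker stops a container as soon as its main process exits, so the start script must not return after spawning the services:

```dockerfile
# Untested sketch: run the start script, then keep a foreground process
# alive, because the container stops when its main process exits.
# /var/log/maillog is only an example of a file worth tailing.
ENTRYPOINT ["/bin/bash", "-c", "/root/start.sh && tail -f /var/log/maillog"]
```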

To stop the container, type exit on the container’s console, or run from outside:

docker stop $MYAPP

To delete the container:

docker rm $MYAPP

You can reach the Kolab Webadmin on this URL:
https://localhost/kolab-webadmin. Login with user: cn=Directory Manager, password: test

The Webmail interface is available here:
https://localhost/roundcubemail.


Timotheus Pokorra
Wed, 2014-09-17 12:31

This article is an update of the previous post that built a Docker container for Kolab 3.1: Building a Docker container for Kolab on Jiffybox (March 2014)

Preparation
I am using a Jiffybox provided by DomainFactory for building a Docker container for Kolab 3.3 running on CentOS 6.

I have installed Ubuntu 12.04 LTS on a Jiffybox.
I therefore follow the Docker installation instructions for Ubuntu:

Install a kernel that is required by Docker:

sudo apt-get update
sudo apt-get install linux-image-generic-lts-raring linux-headers-generic-lts-raring

After that, in the admin website of JiffyBox, select the custom kernel Bootmanager 64 Bit (pvgrub64); see also the German JiffyBox FAQ. Then restart your JiffyBox.

After the restart, uname -a should show something like:

Linux j89610.servers.jiffybox.net 3.8.0-37-generic #53~precise1-Ubuntu SMP Wed Feb 19 21:37:54 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

Now install docker:

sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9
 
sudo sh -c "echo deb http://get.docker.io/ubuntu docker main > /etc/apt/sources.list.d/docker.list"
 
sudo apt-get update
sudo apt-get install lxc-docker

Create a Docker image
I realised that if I installed Kolab in one go, the image would become too big to upload to https://index.docker.io.
Therefore I have created a Dockerfile with several steps for downloading and installing the various packages. For a detailed description of a Dockerfile, see the Dockerfile Reference.

My Dockerfile is available on Github: https://github.com/TBits/KolabScripts/blob/Kolab3.3/kolab/Dockerfile. You should store it with filename Dockerfile in your current directory.

This command builds a container following the instructions from the Dockerfile in the current directory. When the instructions complete successfully, an image with the name tpokorra/kolab33_centos6 is created and the container is deleted:

sudo docker build -t tpokorra/kolab33_centos6 .

You can see all your local images with this command:

sudo docker images

To finish the container, we need to run setup-kolab; this time we define a hostname as a parameter:

MYAPP=$(sudo docker run --name centos6_kolab33  --privileged=true -h kolab33.test.example.org -d -t -i tpokorra/kolab33_centos6 /bin/bash)
docker attach $MYAPP
# run inside the container:
  echo `hostname -f` > /proc/sys/kernel/hostname
  echo 2 | setup-kolab --default --timezone=Europe/Brussels --directory-manager-pwd=test
  ./initHttpTunnel.sh
  ./initSSL.sh test.example.org
  /root/stop.sh
  exit

Typing exit inside the container will stop the container.

Now you commit this last manual change:

docker commit $MYAPP tpokorra/kolab33_centos6
# delete the container
docker rm $MYAPP

You can push this image to https://index.docker.io:

#create a new account, or login with existing account:
sudo docker login
sudo docker push tpokorra/kolab33_centos6

You can now see the image available here: https://index.docker.io/u/tpokorra/kolab33_centos6/

See the post Installing Demo Version of Kolab 3.3 with Docker for how to install this image on the same or a different machine, for demo and validation purposes.

Current status: there are still some things that do not work properly, and I have not tested everything.
But this should be a good starting point for other people to help build a good demo installation of Kolab on Docker.


roundcube
Fri, 2014-09-12 12:40

Roundcube indeed became a huge success story with tens of thousands of installations worldwide. Something I never expected back in 2005 when I started the project as a fresh alternative to the well-established but already aged free webmail packages like SquirrelMail or Horde IMP. And now, some 9 years later, we find ourselves in a similar position to the ones we previously wanted to replace. Although we managed to adapt the Roundcube codebase to ongoing technological innovations, the core architecture is still ruled by the concepts which seemed right back when we started. And we’re talking about building a web app for IE 5 and Netscape 6, when browsers weren’t as capable and performant as they are today, when the term AJAX had not yet been coined, and when we didn’t have nifty libraries such as jQuery or Backbone.js at hand.

It happens more and more often that, when discussing the implementation of new features for Roundcube, we find ourselves saying “Oh man, that’s going to be an expensive endeavor to squeeze this into our current architecture! If we could just…”. This doesn’t mean that the entire codebase is crap, not at all! But sometimes you just silently wish to give the core a fresh touch which respects the increased requirements and expectations. And that’s the challenge of every software product that has been around for a while and is still intensively developed.

When looking around, I see inspiring new webmail projects slowly emerging which don’t carry the legacy of a software product designed almost a decade ago. I’m truly happy about this development and I appreciate the efforts of honest coders to create the next generation of free webmail software. On the other hand it also makes me a bit jealous to see others starting from scratch and building fast and responsive webmail clients like Mailpile or RainLoop which make Roundcube look like the old dinosaur. Although they’re not yet as feature rich as Roundcube, the core concepts are very convincing and perfectly fit the technological environment we find ourselves in today.

So what if we could start over and build Roundcube from scratch?

Here are some ideas for how I could imagine building a brand-new webmail app with today’s tools and 9 years of experience in developing web(mail) applications:

  • Do more stuff client side: the entire rendering of the UI should be done in Javascript and no more PHP composing HTML pages loaded in iframes.
  • The server should only become a thin wrapper for talking to backend services like IMAP, LDAP, etc.
  • Maybe even use a common API for client-server communication like the one suggested by Inbox.
  • Design a proper data model which is used by both the server and the client.
  • Separate the data model from the view and use Backbone.js for rendering.
  • Widget-based UI composition using simple HTML structures with small template snippets.
  • Keep mobile, touch and hi-res devices in mind when building the UI.
  • Do skinning solely through CSS and maybe allow single template snippets to be overridden.
  • More abstraction for storage and caching layers to allow alternative backends like MongoDB or Redis.
  • Separate user auth from IMAP. This would allow other sources or accounts to be pulled into one session.
  • Use more 3rd party libraries like require.js, moment.js, jQuery or PHPMailer, Monolog or Doctrine ORM.
  • Contribute to the 3rd party modules rather than re-inventing the wheel.

While this may now sound like buzzword bingo from a web developers’ conference (and the list is certainly not complete), I do believe in these very useful and well-developed modules that are out there at our service. This is what free software development is all about: share, use and contribute.

But finally, not every part of the current Roundcube codebase is badly outdated and should be replaced. I’d definitely keep our current IMAP, LDAP and HTML-sanitizing libraries, as well as the plugin system, which turned out to be a stable and important component and a major contributor to Roundcube’s success.

And what keeps us from re-building Roundcube from the ground up? Primarily time and the fear of jeopardizing the Roundcube microcosmos with a somewhat incompatible new version that would require every single plugin to be re-written.

But give us funding for 6 months of intense work and let’s see what happens…


Thu, 2014-09-11 15:06

Some time ago I blogged about fighting spam with amavis for the Kolab community. Now the story continues with the Roundcube integration with amavis.

As mentioned earlier, SpamAssassin is able to store recipient-based preferences in a MySQL table, with some settings in its local.cf (see the SpamAssassin wiki):

# Spamassassin for Roundcubemail
# http://www.tehinterweb.co.uk/roundcube/#pisauserprefs
user_scores_dsn DBI:mysql:ROUNDCUBEMAILDBNAME:localhost:3306
user_scores_sql_password ROUNDCUBEMAILPASSWORD
user_scores_sql_username ROUNDCUBEMAILDBUSERNAME
user_scores_sql_custom_query SELECT preference, value FROM _TABLE_ WHERE username = _USERNAME_ OR username = '$GLOBAL' OR username = CONCAT('%',_DOMAIN_) ORDER BY username ASC

However, accessing this with amavis is a real big problem for many users. Amavis has its own user-based configuration policies, but email plugins such as the Roundcube plugin sauserprefs often only use SpamAssassin and not amavis. Originally, SA was only called once per message by amavis, and therefore recipient-based preferences were not possible at all. This has changed. Now you can use the options @sa_userconf_maps and @sa_username_maps to perform such lookups. Unfortunately, these options are still poorly documented. We use them anyway.

The values in @sa_userconf_maps define where amavis has to look for the user preferences. I use MySQL lookups for all recipient addresses.

# use userpref SQL connection from SA local.cf for ALL recipients
@sa_userconf_maps = ({
  '.' => 'sql:'
});

The variable @sa_username_maps tells amavis what to pass to spamassassin as _USERNAME_ (see above) for the MySQL lookup. By default, the amavis system user is used. In my setup with Kolab and sauserprefs I use a regexp that matches the recipient email address:

# use recipient email address as _USERNAME_ in userpref mySQL table (_TABLE_)
@sa_username_maps = new_RE (
  [ qr'^([^@]+@.*)'i => '${1}' ]
);

With these additional bits sauserprefs should work. However, it seems to me that the string “*** Spam ***”, which should be added to the subject, is not applied (maybe it is in the most recent version). The thresholds do work, though, but better check everything carefully.

Did you succeed? Comments are appreciated!

Filed under: Technik Tagged: amavis, Kolab, Roundcubemail, Spamassassin


Andreas Cordes's picture
Thu, 2014-09-04 21:41

Hi,

now I have finished compiling all the +Kolab.org packages for the +Raspberry Pi. Just a short note that you can update the groupware on your Pi to the most recent version of +Kolab.org.

Greetz


Thu, 2014-08-21 14:30

Just in time for the official Kolab 3.3 release, our Gentoo packages for Kolab 3.2 became stable and ready to use. This will clear the way for the upcoming release of Kolab 3.3 for Gentoo. Although this release won't bring any major changes, it prepares the ground for upcoming developments and new features in Kolab 3.3. Further, with Kolab 3.2 we introduced an upgrade path between Kolab releases for Gentoo, and we will try our best to keep updates as consistent and comfortable as possible.
Read more ...


grote's picture
Wed, 2014-08-20 12:19

After extensive beta testing, we are very proud to announce the immediate availability of Kolab.org 3.3 today. This release brings more new features than any release before it.

In a trilogy of articles, we already presented the most exciting new features. All of these would be more than enough for one release, but we still have some new functionality up our sleeves that we have not talked about yet.

These features include cross-folder search and a Birthday Calendar that automatically shows the birthdays of all the contacts from your address book in one neat calendar. There are now dedicated Out-of-Office settings where you can specify when you are on vacation and which message should be sent under which circumstances. Never again forget to enable or disable your vacation response. Also new in the settings is the Delegation feature, with which you can delegate parts of your account to somebody else. This is especially useful for people with a huge workload who need help with coordinating appointments, for example.

Thanks to extensive testing, issue reporting and fixing by the community, this release is both on time and on par. We would especially like to thank the following people for their outstanding contributions: Daniel Hoffend, Aeneas Jaißle and Timotheus Pokorra. If you would like to participate as well, there are plenty of possibilities for you.

We will highlight most of the new features below. If you are interested in more details, you are invited to check out the articles from our earlier feature trilogy:

Before you read on for the new features, you might already want to head over to the installation guide, change your repository locations and run an upgrade following the upgrade notes from our documentation.

Email Tags

It is now possible to add tags to email messages. They are shown prominently in the message list before the subject of your mails. In the bottom left corner, there is now a tag cloud where you can select tags, so only emails with those tags are shown to you.

Of course you can also assign colors to your tags and add new tags easily. Our new cross-folder-search feature also works for tags and allows you to show all emails from all folders that have a certain tag.

In the future, we plan on using the new tagging system for all Kolab modules, so the same tags can be used for emails, contacts, events, tasks, etc.

Notes

With Kolab.org 3.3 you will be able to work with notes right in the webclient. As with all things Kolab, you can also have multiple notebooks and share them with people. The notes are automatically synchronized with the Kolab Desktop Client and you will also be able to synchronize them to your mobile devices via the ActiveSync protocol.

Notes can be tagged just like tasks and they can have rich-text content including graphics. They can be printed right from the webclient and also sent via email. In the email view, you can add notes to emails. They are listed at the top of the email preview.

Resource Management

Resources are things like cars, projectors or meeting rooms that can only be used by one group of people at the same time. Kolab.org 3.3 makes it easier to manage your resources.

We added a dedicated resource selection dialog which allows you to search and browse through all the available resources. It displays additional information and attributes for the individual resources as well as an availability calendar based on the free-busy data published for the given resource.

Multiple resources of the same kind can be organized in resource collections (e.g. company cars). If someone wants to book "a car", she books the resource collection for her appointment.

Folder Management

Internally, all address books, calendars, task lists, etc. are folders. So far, we did not hide that fact well from users. Kolab.org 3.3 introduces a new folder navigation view that allows you to search and subscribe to shared calendars, address books, task lists etc. directly from within the respective view.

Searches are also expanded to LDAP, so that search results show folders grouped by matching users. When selecting a "folder" from the search results, your selection can be temporary, affecting only the current session, or permanent if you always want to see that calendar, for example.

Calendar Quickview

The calendar got a quickview mode which allows you to open an undistorted view on a single calendar without unchecking all other calendars from the main view.

When opening the quickview for the new "virtual user calendar", the calendar view displays events from all calendars of that user you have access to. Additionally, it also shows time blocks from anonymized free/busy information where you only know that the user is unavailable during these times.

 

Accessibility Improvements

The entire web client of Kolab.org 3.3 received plenty of improvements that will benefit people who require assistive technologies. The user interface can now be fully operated with the keyboard and has support for screen readers as well as voice output as suggested by the WCAG 2.0 Guidelines and WAI ARIA standards.

In the email view for example you are now able to tab through all the button elements, operate the message list and the popup menus. Once the message list gains focus, the arrow keys move the cursor while <space> selects the row and <enter> opens the message. A descriptive block explaining the list navigation was added to the page, so screen readers can pick it up.

Improvements in Kolab Webadmin

We enhanced the Kolab Webadmin to make it even easier to manage the most frequent administrative tasks from a pretty web interface without the need for the command line. You will of course always be able to use the command line if you prefer.

When creating a new Shared Folder, you can directly edit its ACLs giving read rights to certain users or groups for example. Creation is now also done with sane defaults, so the folder can be used immediately.

Organizational Units from LDAP can now be managed right from the Kolab Web Admin as well. Editing their LDAP access rights (ACIs) directly is also possible.


tobru's picture
Sun, 2014-08-10 00:00


Contents

Kolab has released its first beta of the upcoming version 3.3.
To test it on Debian I’ve created a Vagrantfile and a small Puppet module which provisions Kolab into a Debian VM. It’s available
on Github.

How to use it

Make sure you have the latest Vagrant version installed. Please see the official documentation.
Clone the git repository with git clone https://github.com/tobru/kolab3-vagrant.git and change into this directory.
Then run vagrant up and wait a while until Vagrant and Puppet have done their jobs. When it’s finished you’re good to enter the VM with vagrant ssh.
To have a working Kolab installation, setup-kolab needs to be called as root (hint: sudo su) once. It configures the Kolab components.
The Kolab Web Admin Panel is now reachable under http://localhost:8080/kolab-webadmin and Roundcube under
http://localhost:8080/roundcubemail.

For more information about how Vagrant works, have a look at the official Getting Started guide.

Choose the Kolab version

By default Kolab will be installed from the development repository where all the latest (and maybe broken) packages are located. To install
a different version, just change the version parameter in manifests/default.pp to the desired version.

Some notes

  • The VM hostname is server.kolab3.dev and is based on chef/debian-7.6 (at this time)
  • Port 8080 on localhost is mapped to port 80 in the VM. No other ports are mapped.
  • MySQL has no password, so while running setup-kolab choose 2: New MySQL server (needs to be initialized) when asked.

PS: Pull requests are always welcome!

Kolab 3 Vagrant box with Puppet provisioning was originally published by Tobias Brunner at tobrunet.ch Techblog on August 10, 2014.


Andreas Cordes's picture
Fri, 2014-08-08 01:06

Hello,

I just finished compiling all modules and performed an upgrade to 3.3 beta1 on the +Raspberry Pi.

For the impatient:

deb http://kolab.zion-control.org /

Changes I applied to my installation:

/etc/kolab/kolab.conf

[wallace]
modules = resources, invitationpolicy, footer 
kolab_invitation_policy = ACT_ACCEPT_IF_NO_CONFLICT:zion-control.org, ACT_MANUAL

/etc/kolab-freebusy/config.ini
[httpauth]
type = ldap
host = ldap://localhost:389
bind_dn = "uid=kolab-service,ou=Special Users,dc=zion-control,dc=org"
bind_pw = "IwontTellYou"


[directory "local-cache"]
type = static
fbsource = file:/var/cache/kolab-freebusy/%s.ifb
expires = 10m
[directory "kolab-resources"]
type = ldap
host = ldap://localhost:389
bind_dn = "uid=kolab-service,ou=Special Users,dc=zion-control,dc=org"
bind_pw = "IwontTellYou"
base_dn = "ou=Resources,dc=zion-control,dc=org"
filter = "(&(objectClass=kolabsharedfolder)(mail=%s))"
attributes = mail, kolabtargetfolder
fbsource = "imap://cyrus-admin:IwontTellYou@localhost/%kolabtargetfolder?acl=lrs"
cacheto = /var/cache/kolab-freebusy/%mail.ifb
expires = 10m
loglevel = 100  ; Debug

So far, ActiveSync is still working :-) and there are no major issues apart from the ones already known.
More in the next days, once I have performed some tests.
Greetz Andreas

Andreas Cordes's picture
Wed, 2014-08-06 14:55

Hi there,

+Kolab just released the 3.3 beta1 version of Kolab.
My +Raspberry Pi is currently downloading and compiling all the packages.
Because of all the dependencies I already solved during the first compile phase, I do not expect many errors during this installation.
Hope to tell you more tomorrow or even on Friday.
Greets Andreas

grote's picture
Wed, 2014-08-06 11:15

After we have revealed the new features for Kolab.org 3.3 over the past few weeks, you might have already expected that a beta version is not far away. Today is the day where you can finally get your hands on the brand new 3.3 packages and try out all those new features for yourself.


As our project lead explained on the development list, the release of the final version is aimed for August 20, so you have two weeks to test this beta version thoroughly and help make sure that no unresolved issues make their way into the final release. We would also appreciate it if people with test environments could test upgrades, especially with existing IMAP spools.


So please warm up your virtual machines and head over to our installation guide. We have provided initial packages for CentOS and Debian. If you are using another distribution, please help to get the packages for those ready in time.


When following the installation guide for CentOS, in order to get the beta packages, please change the repository configuration to:



# cd /etc/yum.repos.d/
# wget http://obs.kolabsys.com:82/Kolab:/Development/CentOS_6/Kolab:Development.repo


When following the installation guide for Debian Wheezy, please change the repository location to



http://obs.kolabsys.com:82/Kolab:/Development/Debian_7.0/


and leave out the updates repository. Currently, there are three known issues for Kolab.org 3.3 beta1 on Debian, but work-arounds exist in the tickets. Please consider helping us resolve those tickets. Here's a helpful guide on how to do this with our Open Build System.


If you find anything, come talk to us in IRC or the development mailing list. When you are sure you found an issue, please report it directly to our issue tracker or just fix it yourself ;)


grote's picture
Tue, 2014-07-22 11:35

With development still in full swing, we are getting closer to a feature freeze and a first beta version of Kolab.org 3.3. In the last weeks, we already shared some details about the new features that will be part of this upcoming Kolab.org version. There will be improved Folder Management and a Calendar Quickview as well as Notes and Accessibility improvements. Now it is time to present two more exciting features that will be part of Kolab.org 3.3.


Please keep in mind that work is not yet done and that this is only a sneak preview. We hope to have something packaged and ready for you to try out soon!


Email Tags


Tags are little labels that you can attach to objects to categorize them or to find them quicker. We introduced tags with our task module and are now expanding the concept to emails.


You can add tags to email messages and remove them again. The tags can have different colors and are shown prominently in the message list. In the bottom left corner, below the folder list, there is now a tag cloud where you can select tags, so only emails with those tags are shown.


For those interested in the technical details of the tag implementation, there is a discussion on our format mailing list. In short: a tag is a Kolab Configuration Object of the type 'relation' that stores all tag information and the relation to certain messages. We also considered using IMAP flags, but decided against it for now.


In the future, the new 'kolab_tags' plugin might provide tag handling capabilities to all other plugins that can make use of tags such as calendar, tasklist, notes, etc. The format was already designed in a way that allows for storing relations between any object type.


Resource Management


With Kolab you can also manage resources like cars, projectors or meeting rooms in your organization. People can book resources themselves if they are available. This ensures that no two groups end up using a meeting room at the same time.


To make this easier, we added a dedicated resource selection dialog as you can see on the right. The new dialog allows you to search and browse through the available resources. It displays additional information and attributes for the individual resources as well as an availability calendar based on the free-busy data published for the given resource.


The automated processing of iTip messages (invitations) to resources was refactored and now supports fully automated resource booking and updating through the iTip protocol. It will also be possible to define a booking policy for the resources that, for example, automatically accepts or refuses bookings based on certain criteria.


Multiple resources of the same kind can be organized in resource collections (e.g. company cars). If someone wants to book "a car", she books the resource collection for her appointment. The Wallace module then allocates a concrete resource from that collection and delegates the booking to the next available resource. The delegation is reflected in the iTip replies and the updated user calendar.


 


There will be many small improvements to the webclient and under the hood, but we will leave it to you to discover those on your own. This means that this will be the last feature presentation. We hope you enjoyed it!


roundcube's picture
Sun, 2014-07-20 02:00

This is the second service release to update the stable version 1.0. It contains
some bug fixes and improvements we considered important for the long term support
branch of Roundcube.

It’s considered stable and we recommend updating all production installations
of Roundcube to this version. Download it from roundcube.net/download,
see the full changelog here.

Please do a backup before updating!


grote's picture
Tue, 2014-07-15 16:28

Last time, we already talked about some new features that are coming with Kolab.org 3.3. Now, we will show you more features we are currently working on. Like last time, this is still work in progress. It is not ready and not packaged for you to try out just yet.


Still, there's more features to come and another post will follow soon. Stay tuned and monitor this channel for more updates.


Notes


Often you just want to note something down real quick. Today, we often use computers for that, so our notes are available on all our devices and are searchable.


Currently, it is only possible to work with notes using the Kolab Desktop Client. With Kolab.org 3.3 you will also be able to work with these notes in the webclient. As with all things Kolab, you can also have multiple notebooks and share them with people. The screenshot on the right shows the current state of development and as you can see it already uses the new folder management in the bottom left corner.


The notes are automatically synchronized with the Kolab Desktop Client and you will also be able to synchronize them to your mobile devices via the ActiveSync protocol.


Notes can be tagged just like tasks and they can have rich-text content including graphics. They can be printed right from the webclient and also sent via email.


In the email view, you can add little notes to emails. It will be possible to view and edit your notes before appending them to email messages. If you have notes linked with an email, they are listed at the top of the email preview, like you can see here:



Accessibility improvements


We reviewed and improved our entire web client regarding accessibility for people who require assistive technologies. The user interface can now be fully operated with the keyboard and has support for screen readers as well as voice output as suggested by the WCAG 2.0 Guidelines and WAI ARIA standards.


As an example, for the email view you are now able to tab through all the button elements, operate the message list and the popup menus. Once the message list gains focus, the arrow keys move the cursor while <space> selects the row and <enter> opens the message. A descriptive block explaining the list navigation was added to the page, so screen readers can pick it up.


All these improvements will make it a lot easier for people that require assistive technologies to use the Kolab webclient. They will also benefit the millions of Roundcube users out there as we strictly bring all our modifications back to the upstream communities.


We hope you enjoyed this quick tour through some of the new Kolab features and are excited about what we will reveal next time.


grote's picture
Wed, 2014-07-09 16:04

The last Kolab.org release was on Valentine's Day. So according to our release schedule, the next version should be ready in August. In the past months, we have not been very good at keeping you updated about what we are working on. This is going to change now, because we have made good progress and finally have something to show.


Please keep in mind that what you are going to see is still work in progress. It is not ready and not packaged for you to try out just yet. Think of it as a first sneak preview of what is going to be Kolab.org 3.3. More will follow. Stay tuned and monitor this channel for more updates.


Folder Management


Internally, all address books, calendars, task lists, etc. are folders. So far, we did not hide that fact well from users. When you wanted to show a folder that you were not subscribed to, you had to go through the folder management that listed all folders equally.


Say hi to the new folder management mechanism we came up with! With Kolab's new folder navigation, shared calendars, address books and task lists can be searched and subscribed to directly from within the respective view. The search box allows you to find "folders" that are shared with you. Searches are also expanded to LDAP, so that search results show folders grouped by matching users. When selecting a "folder" from the search results, your selection can be temporary, affecting only the current session, or permanent if you always want to see that calendar, for example.


Some users spread their events over many calendars (one per team for example) and you might not be interested in the distinction they made. So you have the option to show a "virtual user calendar" that represents an aggregated view on "a user's calendar" and that can be subscribed to as well.



Calendar Quickview


The calendar got a quickview mode which allows you to open an undistorted view on a single calendar without unchecking all other calendars from the main view.


When opening the quickview for a "virtual user calendar" (see above), the calendar view displays events from all calendars of that user you have access to. Additionally, it also shows time blocks from anonymized free/busy information where you only know that the user is unavailable during these times.


Together with the new folder navigation, this new feature makes it even easier to quickly check whether a certain workmate is available for a phone call or lunch.


We hope you enjoyed this quick tour through two of Kolab's new features and are excited for what more is going to come.


Tue, 2014-07-08 00:00

I have spent some time this weekend investigating SSL certificate-based authentication and implementing it in the Kolab web-based user interface.

This topic is very interesting, but definitely too broad to be covered in a single blog post, so do not look at this as a complete solution; treat it only as a proof of concept.

Table of contents

Certification Authority

Apache

Kolab - Web-based user interface

Notes

Prepare Certification Authority

At first you need to create a Certification Authority on an offline, secured system.

I have already created the required shell scripts (miniature-octo-ca) to ease the whole operation, so just clone the following repository and move it to the CA system.

$ git clone https://github.com/milosz/miniature-octo-ca.git
Cloning into 'miniature-octo-ca'...
remote: Counting objects: 10, done.
remote: Compressing objects: 100% (7/7), done.
remote: Total 10 (delta 2), reused 10 (delta 2)
Unpacking objects: 100% (10/10), done.

Please remember to change the working directory before executing any of the available shell scripts.

$ cd miniature-octo-ca

Configure Certification Authority

The next step is to configure CA by using common-ca-settings.sh configuration file.

$ vi common-ca-settings.sh 
#!/bin/sh
# common CA settings 

# simple protection - every script can be executed only from current directory
if [ "$(pwd)" != "$(dirname $(readlink -f  $0))" ]; then
  echo "Do not run CA scripts from outside of $(dirname $(readlink -f  $0)) directory"
  exit
fi

# ensure proper permissions by setting umask
umask 077

# kolab secret
# use 'openssl rand -hex 16' command to generate it
kolab_secret="d2d97d097eedb397edea79f52b56ea74"

# key length
key_length=4096

# certificates directory
cert_directory="root-ca"

# number of days to certify the certificate
cert_validfor=3650        # root   certificate
client_cert_validfor=365  # client certificate
server_cert_validfor=365  # server certificate

# default certificate settings
cert_country="PL"
cert_organization="example.org"
cert_state="state"
cert_city="city"
cert_name="example.org CA"
cert_unit="Certificate Authority"
cert_email=""

# certificate number
if [ -f "${cert_directory}/serial" ]; then
  serial=$(cat ${cert_directory}/serial)
fi

You need to modify the kolab_secret variable, as it will be used as a key to encrypt/decrypt user passwords, and adjust the common certificate settings to match your setup.
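As the comment in the script says, a fresh kolab_secret can be generated with openssl; a quick check that the result has the expected shape (32 hex characters, i.e. 16 random bytes):

```shell
# Generate a new kolab_secret value, per the script's own comment
kolab_secret=$(openssl rand -hex 16)
echo "$kolab_secret"
```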

Initialize Certification Authority

Execute prepare_ca.sh shell script to build initial configuration and directory layout.

$ sh prepare_ca.sh 

You can inspect generated OpenSSL configuration (openssl.cnf file) and tune it a bit.

Create root certificate

Execute create_ca.sh shell script to create root certificate and private key.

$ sh create_ca.sh 
Root certificate (private key) password: Generating a 4096 bit RSA private key
............................................................................++
.........................++
writing new private key to 'root-ca/ca/root-key.pem'
-----
No value provided for Subject Attribute emailAddress, skipped

The root certificate and private key will be stored inside the root-ca/ca/ directory.

$ ls root-ca/ca/
root-cert.pem  root-key.pem

Create server certificate

Execute add_server.sh shell script to create new server certificate.

$ sh add_server.sh 
Server name (eg. mail.example.com): mail.example.org
Email: admin@example.org
Root certificate (private key) password: 
Server certificate (private key) password: 
Generating a 4096 bit RSA private key
.........++++++
...................++++++
writing new private key to 'root-ca/private/01.pem'
-----
Using configuration from openssl.cnf
Check that the request matches the signature
Signature ok
Certificate Details:
        Serial Number: 1 (0x1)
        Validity
            Not Before: Jul  6 12:51:12 2014 GMT
            Not After : Jul  6 12:51:12 2015 GMT
        Subject:
            countryName               = PL
            stateOrProvinceName       = state
            organizationName          = example.org
            organizationalUnitName    = Certificate Authority
            commonName                = mail.example.org
            emailAddress              = admin@example.org
        X509v3 extensions:
            X509v3 Basic Constraints: 
                CA:FALSE
            Netscape Comment: 
                OpenSSL Generated Certificate
            X509v3 Subject Key Identifier: 
                EA:3E:05:51:EE:C2:90:53:58:91:E8:D5:56:47:15:7D:5A:26:E8:C4
            X509v3 Authority Key Identifier: 
                keyid:A1:41:B0:72:60:29:1A:9B:B1:63:77:53:E7:93:71:1D:02:14:A4:7C

Certificate is to be certified until Jul  6 12:51:12 2015 GMT (365 days)

Write out database with 1 new entries
[..]
Data Base Updated
writing RSA key

The server certificate and private key (with the password removed) will be stored inside the root-ca/server_certs/ directory.

$ ls root-ca/server_certs/
01.crt  01.pem

Create client certificate

Execute add_client.sh shell script to create new client certificate.

$ sh add_client.sh 
User name (eg. John Doe): Milosz
Email: milosz@example.org
Export password: 
Kolab password: 
Root certificate (private key) password: 
Client certificate (private key) password: Generating a 4096 bit RSA private key
...++++++
.......++++++
writing new private key to 'root-ca/private/02.pem'
-----
Using configuration from openssl.cnf
Check that the request matches the signature
Signature ok
Certificate Details:
        Serial Number: 2 (0x2)
        Validity
            Not Before: Jul  6 12:57:39 2014 GMT
            Not After : Jul  6 12:57:39 2015 GMT
        Subject:
            countryName               = PL
            stateOrProvinceName       = state
            organizationName          = example.org
            organizationalUnitName    = Certificate Authority
            commonName                = Milosz
            emailAddress              = milosz@example.org
            kolabPasswordEnc          = RX3f071sOYKxwDBhNpDVHA==
            kolabPasswordIV           = 72a1e2086a765204122109382f8d4f5d
        X509v3 extensions:
            X509v3 Basic Constraints: 
                CA:FALSE
            Netscape Comment: 
                OpenSSL Generated Certificate
            X509v3 Subject Key Identifier: 
                3B:7A:BF:A5:B8:F4:C9:E0:0D:81:41:0D:EE:27:F4:B5:C3:B0:40:67
            X509v3 Authority Key Identifier: 
                keyid:A1:41:B0:72:60:29:1A:9B:B1:63:77:53:E7:93:71:1D:02:14:A4:7C

Certificate is to be certified until Jul  6 12:57:39 2015 GMT (365 days)

Write out database with 1 new entries
[..]
Data Base Updated

The Email field will be used to identify the user. The Kolab password field is a password that will be encrypted using the kolab_secret key and stored inside the certificate file (alongside the initialization vector).
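The exact cipher is defined by the CA scripts; purely as an illustration, the following sketch assumes AES-128-CBC (an assumption suggested by the 16-byte kolab_secret and kolabPasswordIV values above) and shows how a hypothetical password could be encrypted to a base64 string like kolabPasswordEnc and decrypted again with openssl:

```shell
# Assumed scheme: AES-128-CBC with kolab_secret as a hex key and a random IV.
# The password 'MySecretPassword' is hypothetical.
kolab_secret="d2d97d097eedb397edea79f52b56ea74"
iv=$(openssl rand -hex 16)
# encrypt -> base64 (analogous to kolabPasswordEnc)
enc=$(printf '%s' 'MySecretPassword' | openssl enc -aes-128-cbc -K "$kolab_secret" -iv "$iv" -base64)
# decrypting with the same key and IV restores the password
dec=$(echo "$enc" | openssl enc -d -aes-128-cbc -K "$kolab_secret" -iv "$iv" -base64)
echo "$dec"
```

Whoever holds the kolab_secret (here: the webmail server) can recover the plain-text password from the certificate; that is the point of the proof of concept, but also its main security trade-off.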

The certificate will be stored inside the root-ca/client_certs/ directory and protected using the specified export password (so it can be easily imported into a browser).

$ ls root-ca/client_certs/
02.p12

Apache - Enable HTTPS protocol

Enable SSL module.

# a2enmod ssl
Enabling module ssl.

Enable default SSL virtual host.

# a2ensite default-ssl 
Enabling site default-ssl.

Disable the default (non-SSL) virtual host.

# a2dissite default
Site default disabled.

Create a simple virtual host listening on port 80 to redirect traffic to the HTTPS protocol.

cat << EOF > /etc/apache2/sites-available/default-rewrite
<VirtualHost *:80>
  ServerName mail.example.org
  Redirect / https://mail.example.org/
</VirtualHost>
EOF

Enable the site created above.

# a2ensite default-rewrite 
Enabling site default-rewrite.

Change protocol used by Kolab Files module.

# sed -i -e "s/http:/https:/" /etc/roundcubemail/kolab_files.inc.php
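To preview what this substitution does without touching the real configuration, you can run the same sed expression over a hypothetical line from kolab_files.inc.php:

```shell
# Hypothetical config line; only the http: -> https: rewrite matters here
line="\$config['kolab_files_url'] = 'http://mail.example.org/kolab-files/';"
rewritten=$(printf '%s\n' "$line" | sed -e "s/http:/https:/")
echo "$rewritten"
```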

Restart Apache and test applied modifications.

# service apache2 restart

Apache - Switch to own Certification Authority

Create /etc/apache2/ssl/ directory.

# mkdir /etc/apache2/ssl

Copy the root certificate root-cert.pem, the server certificate server.crt, and the server private key server.pem to the directory created in the previous step.

Edit the Apache configuration to use the uploaded server certificate and private key.

# sed -i -e "/SSLCACertificateFile/ s/#//;s/ssl.crt\/ca-bundle.crt/ssl\/root-cert.pem/" /etc/apache2/sites-available/default-ssl  
# sed -i -e "/SSLCertificateFile/ s/\/etc\/ssl\/certs\/ssl-cert-snakeoil.pem/\/etc\/apache2\/ssl\/server.crt/" /etc/apache2/sites-available/default-ssl
# sed -i -e "/SSLCertificateKeyFile/ s/\/etc\/ssl\/private\/ssl-cert-snakeoil.key/\/etc\/apache2\/ssl\/server.pem/" /etc/apache2/sites-available/default-ssl
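After these three substitutions, the relevant directives in default-ssl should read roughly as follows (reconstructed from the sed expressions above):

```apache
SSLCertificateFile    /etc/apache2/ssl/server.crt
SSLCertificateKeyFile /etc/apache2/ssl/server.pem
SSLCACertificateFile  /etc/apache2/ssl/root-cert.pem
```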

Restart web-server and test applied changes.

# service apache2 restart

Import the root certificate root-cert.pem into the browser as a Certification Authority, then import the client certificate.

Alter the web-server configuration to require a valid client certificate, but allow direct API calls from the mail server (to avoid an internal error when using kolab-admin).

# sed -i -e "/\/VirtualHost/i <Location />\nSSLRequireSSL\nSSLVerifyClient require\nSSLVerifyDepth 1\nOrder allow,deny\nallow from all\n</Location>\n\n<Location /kolab-webadmin/api/>\nSSLVerifyClient none\norder deny,allow\ndeny from all\nallow from mail.example.org\n</Location>" /etc/apache2/sites-available/default-ssl  
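The sed one-liner above is hard to read; the configuration it injects before </VirtualHost> is equivalent to:

```apache
<Location />
SSLRequireSSL
SSLVerifyClient require
SSLVerifyDepth 1
Order allow,deny
allow from all
</Location>

<Location /kolab-webadmin/api/>
SSLVerifyClient none
order deny,allow
deny from all
allow from mail.example.org
</Location>
```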

Restart web-server and test client certificate.

# service apache2 restart

Kolab - Use client certificate to fill username field

You can use the client certificate to fill in the username inside the login form.

To achieve this simple task, you need to edit the login_form function found in the /usr/share/roundcubemail/program/include/rcmail_output_html.php file.

--- /usr/share/roundcubemail/program/include/rcmail_output_html.php.orig	2014-07-06 16:24:08.005325038 +0200
+++ /usr/share/roundcubemail/program/include/rcmail_output_html.php	2014-07-06 16:40:54.429360653 +0200
@@ -1551,40 +1551,47 @@
     protected function login_form($attrib)
     {
         $default_host = $this->config->get('default_host');
         $autocomplete = (int) $this->config->get('login_autocomplete');
 
         $_SESSION['temp'] = true;
 
         // save original url
         $url = rcube_utils::get_input_value('_url', rcube_utils::INPUT_POST);
         if (empty($url) && !preg_match('/_(task|action)=logout/', $_SERVER['QUERY_STRING']))
             $url = $_SERVER['QUERY_STRING'];
 
         // Disable autocapitalization on iPad/iPhone (#1488609)
         $attrib['autocapitalize'] = 'off';
 
+        $email="";
+        if ($_SERVER["HTTPS"] == "on" &&  $_SERVER["SSL_CLIENT_VERIFY"] == "SUCCESS") {
+          if (preg_match('/\/emailAddress=([^\/]*)\//',$_SERVER['SSL_CLIENT_S_DN'],$matches)) {
+            $email=$matches[1];
+          }
+        }
+
         // set atocomplete attribute
         $user_attrib = $autocomplete > 0 ? array() : array('autocomplete' => 'off');
         $host_attrib = $autocomplete > 0 ? array() : array('autocomplete' => 'off');
         $pass_attrib = $autocomplete > 1 ? array() : array('autocomplete' => 'off');
 
         $input_task   = new html_hiddenfield(array('name' => '_task', 'value' => 'login'));
         $input_action = new html_hiddenfield(array('name' => '_action', 'value' => 'login'));
         $input_tzone  = new html_hiddenfield(array('name' => '_timezone', 'id' => 'rcmlogintz', 'value' => '_default_'));
         $input_url    = new html_hiddenfield(array('name' => '_url', 'id' => 'rcmloginurl', 'value' => $url));
-        $input_user   = new html_inputfield(array('name' => '_user', 'id' => 'rcmloginuser', 'required' => 'required')
+        $input_user   = new html_inputfield(array('name' => '_user', 'id' => 'rcmloginuser', 'required' => 'required', 'value' => $email)
             + $attrib + $user_attrib);
         $input_pass   = new html_passwordfield(array('name' => '_pass', 'id' => 'rcmloginpwd', 'required' => 'required')
             + $attrib + $pass_attrib);
         $input_host   = null;
 
         if (is_array($default_host) && count($default_host) > 1) {
             $input_host = new html_select(array('name' => '_host', 'id' => 'rcmloginhost'));
 
             foreach ($default_host as $key => $value) {
                 if (!is_array($value)) {
                     $input_host->add($value, (is_numeric($key) ? $value : $key));
                 }
                 else {
                     $input_host = null;
                     break;

Use client certificate to login user

Generated client certificate already contains encrypted password (using kolab_secret key) and initialization vector, so you can use them to automatically login user using /usr/share/roundcubemail/index.php file.

--- /usr/share/roundcubemail/index.php.orig	2014-07-06 18:32:40.830414058 +0200
+++ /usr/share/roundcubemail/index.php	2014-07-06 18:37:07.462423513 +0200
@@ -88,17 +88,26 @@
 $RCMAIL->action = $startup['action'];
 
 // try to log in
-if ($RCMAIL->task == 'login' && $RCMAIL->action == 'login') {
-    $request_valid = $_SESSION['temp'] && $RCMAIL->check_request(rcube_utils::INPUT_POST, 'login');
+if ($RCMAIL->task == 'login' && $_SERVER["HTTPS"] == "on" &&  $_SERVER["SSL_CLIENT_VERIFY"] == "SUCCESS") {
+    $request_valid = 1; 
+    if (preg_match('/\/emailAddress=([^\/]*)\//',$_SERVER['SSL_CLIENT_S_DN'],$matches)) {
+      $email=$matches[1];
+    }
+    if (preg_match('/\/1.2.3.4.5.6.7.1=([^\/]*)/',$_SERVER['SSL_CLIENT_S_DN'],$matches)) {
+      $pass=$matches[1];
+    }
+    if (preg_match('/\/1.2.3.4.5.6.7.2=([^\/]*)/',$_SERVER['SSL_CLIENT_S_DN'],$matches)) {
+      $iv=$matches[1];
+    }
+    $pass=rtrim(openssl_decrypt(base64_decode($pass),'aes-128-cbc', hex2bin("d2d97d097eedb397edea79f52b56ea74"), true,hex2bin($iv)));
 
     // purge the session in case of new login when a session already exists 
     $RCMAIL->kill_session();
 
     $auth = $RCMAIL->plugins->exec_hook('authenticate', array(
         'host' => $RCMAIL->autoselect_host(),
-        'user' => trim(rcube_utils::get_input_value('_user', rcube_utils::INPUT_POST)),
-        'pass' => rcube_utils::get_input_value('_pass', rcube_utils::INPUT_POST, true,
-            $RCMAIL->config->get('password_charset', 'ISO-8859-1')),
+        'user' => $email,
+        'pass' => $pass,
         'cookiecheck' => true,
         'valid'       => $request_valid,
     ));

Future improvements

kolab_secret can be stored using Roundcube configuration file, and login form can be modified further to remove input fields, and include more information, .

There should be no problem to add shell script to generate CRL.

PHP code could be simplified a bit.

Please inspect shell scripts to get the idea of additional certificate parameters.


Tue, 2014-07-08 00:00

I have spent some time this weekend investigating SSL certificate-based authentication and implementing it in the Kolab web-based user interface.

This topic is very interesting, but definitely too broad to describe in a single blog post, so do not look at this as a complete solution; treat it only as a proof of concept.

Table of contents

Certification Authority

Apache

Kolab - Web-based user interface

Notes

Prepare Certification Authority

First, you need to create the Certification Authority on an offline, secured system.

I have already created the required shell scripts (miniature-octo-ca) to ease the whole operation, so just clone the following repository and move it to the CA system.

$ git clone https://github.com/milosz/miniature-octo-ca.git
Cloning into 'miniature-octo-ca'...
remote: Counting objects: 10, done.
remote: Compressing objects: 100% (7/7), done.
remote: Total 10 (delta 2), reused 10 (delta 2)
Unpacking objects: 100% (10/10), done.

Please remember to change the working directory before executing any of the shell scripts.

$ cd miniature-octo-ca

Configure Certification Authority

The next step is to configure the CA by editing the common-ca-settings.sh configuration file.

$ vi common-ca-settings.sh 
#!/bin/sh
# common CA settings 

# simple protection - every script can be executed only from current directory
if [ "$(pwd)" != "$(dirname $(readlink -f  $0))" ]; then
  echo "Do not run CA scripts from outside of $(dirname $(readlink -f  $0)) directory"
  exit
fi

# ensure proper permissions by setting umask
umask 077

# kolab secret
# use 'openssl rand -hex 16' command to generate it
kolab_secret="d2d97d097eedb397edea79f52b56ea74"

# key length
key_length=4096

# certificates directory
cert_directory="root-ca"

# number of days to certify the certificate
cert_validfor=3650        # root   certificate
client_cert_validfor=365  # client certificate
server_cert_validfor=365  # server certificate

# default certificate settings
cert_country="PL"
cert_organization="example.org"
cert_state="state"
cert_city="city"
cert_name="example.org CA"
cert_unit="Certificate Authority"
cert_email=""

# certificate number
if [ -f "${cert_directory}/serial" ]; then
  serial=$(cat ${cert_directory}/serial)
fi

You need to modify the kolab_secret variable, as it will be used as the key to encrypt and decrypt user passwords, and adjust the common certificate settings to match your setup.
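
To make the scheme concrete, here is a minimal sketch (not part of miniature-octo-ca; the password and values are illustrative) of how a password can be encrypted with the kolab_secret key and a random IV, matching the aes-128-cbc decryption performed by the Roundcube patch later in this post:

```shell
# Sketch only: encrypt a Kolab password with the kolab_secret key and a
# random IV, the way the kolabPasswordEnc and kolabPasswordIV certificate
# fields are expected to be produced. Password value is illustrative.
kolab_secret="d2d97d097eedb397edea79f52b56ea74"   # 16 bytes, hex-encoded
iv=$(openssl rand -hex 16)                        # random initialization vector
enc=$(printf '%s' "secret-password" | \
  openssl enc -aes-128-cbc -K "$kolab_secret" -iv "$iv" -a)
# Ciphertext is base64 (kolabPasswordEnc), the IV stays hex (kolabPasswordIV).
echo "kolabPasswordEnc=$enc"
echo "kolabPasswordIV=$iv"
```

Decryption reverses it with `openssl enc -d -aes-128-cbc -K "$kolab_secret" -iv "$iv" -a`, which is what the patched index.php does with openssl_decrypt().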

Initialize Certification Authority

Execute prepare_ca.sh shell script to build initial configuration and directory layout.

$ sh prepare_ca.sh 

You can inspect generated OpenSSL configuration (openssl.cnf file) and tune it a bit.

Create root certificate

Execute create_ca.sh shell script to create root certificate and private key.

$ sh create_ca.sh 
Root certificate (private key) password: Generating a 4096 bit RSA private key
............................................................................++
.........................++
writing new private key to 'root-ca/ca/root-key.pem'
-----
No value provided for Subject Attribute emailAddress, skipped

Root certificate and private key will be stored inside root-ca/ca/ directory.

$ ls root-ca/ca/
root-cert.pem  root-key.pem

Create server certificate

Execute add_server.sh shell script to create new server certificate.

$ sh add_server.sh 
Server name (eg. mail.example.com): mail.example.org
Email: admin@example.org
Root certificate (private key) password: 
Server certificate (private key) password: 
Generating a 4096 bit RSA private key
.........++++++
...................++++++
writing new private key to 'root-ca/private/01.pem'
-----
Using configuration from openssl.cnf
Check that the request matches the signature
Signature ok
Certificate Details:
        Serial Number: 1 (0x1)
        Validity
            Not Before: Jul  6 12:51:12 2014 GMT
            Not After : Jul  6 12:51:12 2015 GMT
        Subject:
            countryName               = PL
            stateOrProvinceName       = state
            organizationName          = example.org
            organizationalUnitName    = Certificate Authority
            commonName                = mail.example.org
            emailAddress              = admin@example.org
        X509v3 extensions:
            X509v3 Basic Constraints: 
                CA:FALSE
            Netscape Comment: 
                OpenSSL Generated Certificate
            X509v3 Subject Key Identifier: 
                EA:3E:05:51:EE:C2:90:53:58:91:E8:D5:56:47:15:7D:5A:26:E8:C4
            X509v3 Authority Key Identifier: 
                keyid:A1:41:B0:72:60:29:1A:9B:B1:63:77:53:E7:93:71:1D:02:14:A4:7C

Certificate is to be certified until Jul  6 12:51:12 2015 GMT (365 days)

Write out database with 1 new entries
[..]
Data Base Updated
writing RSA key

Server certificate and private key (with password removed) will be stored inside root-ca/server_certs/ directory.

$ ls root-ca/server_certs/
01.crt  01.pem

Create client certificate

Execute add_client.sh shell script to create new client certificate.

$ sh add_client.sh 
User name (eg. John Doe): Milosz
Email: milosz@example.org
Export password: 
Kolab password: 
Root certificate (private key) password: 
Client certificate (private key) password: Generating a 4096 bit RSA private key
...++++++
.......++++++
writing new private key to 'root-ca/private/02.pem'
-----
Using configuration from openssl.cnf
Check that the request matches the signature
Signature ok
Certificate Details:
        Serial Number: 2 (0x2)
        Validity
            Not Before: Jul  6 12:57:39 2014 GMT
            Not After : Jul  6 12:57:39 2015 GMT
        Subject:
            countryName               = PL
            stateOrProvinceName       = state
            organizationName          = example.org
            organizationalUnitName    = Certificate Authority
            commonName                = Milosz
            emailAddress              = milosz@example.org
            kolabPasswordEnc          = RX3f071sOYKxwDBhNpDVHA==
            kolabPasswordIV           = 72a1e2086a765204122109382f8d4f5d
        X509v3 extensions:
            X509v3 Basic Constraints: 
                CA:FALSE
            Netscape Comment: 
                OpenSSL Generated Certificate
            X509v3 Subject Key Identifier: 
                3B:7A:BF:A5:B8:F4:C9:E0:0D:81:41:0D:EE:27:F4:B5:C3:B0:40:67
            X509v3 Authority Key Identifier: 
                keyid:A1:41:B0:72:60:29:1A:9B:B1:63:77:53:E7:93:71:1D:02:14:A4:7C

Certificate is to be certified until Jul  6 12:57:39 2015 GMT (365 days)

Write out database with 1 new entries
[..]
Data Base Updated

The Email field will be used to identify the user. The Kolab password field is the password that will be encrypted using the kolab_secret key and stored inside the certificate file (alongside the initialization vector).

The certificate will be stored inside the root-ca/client_certs/ directory and protected with the specified export password, so it can be easily imported into a browser.

$ ls root-ca/client_certs/
02.p12

Apache - Enable HTTPS protocol

Enable SSL module.

# a2enmod ssl
Enabling module ssl.

Enable default SSL virtual host.

# a2ensite default-ssl 
Enabling site default-ssl.

Disable the default (non-SSL) virtual host.

# a2dissite default
Site default disabled.

Create a simple virtual host listening on port 80 that redirects traffic to HTTPS.

# cat << EOF > /etc/apache2/sites-available/default-rewrite
<VirtualHost *:80>
  ServerName mail.example.org
  Redirect / https://mail.example.org/
</VirtualHost>
EOF

Enable the site created above.

# a2ensite default-rewrite 
Enabling site default-rewrite.

Change protocol used by Kolab Files module.

# sed -i -e "s/http:/https:/" /etc/roundcubemail/kolab_files.inc.php

Restart Apache and test applied modifications.

# service apache2 restart

Apache - Switch to own Certification Authority

Create /etc/apache2/ssl/ directory.

# mkdir /etc/apache2/ssl

Copy the root certificate root-cert.pem, the server certificate server.crt, and the server private key server.pem to the directory created in the previous step.

Edit the Apache configuration to use the copied server certificate and private key.

# sed -i -e "/SSLCACertificateFile/ s/#//;s/ssl.crt\/ca-bundle.crt/ssl\/root-cert.pem/" /etc/apache2/sites-available/default-ssl  
# sed -i -e "/SSLCertificateFile/ s/\/etc\/ssl\/certs\/ssl-cert-snakeoil.pem/\/etc\/apache2\/ssl\/server.crt/" /etc/apache2/sites-available/default-ssl
# sed -i -e "/SSLCertificateKeyFile/ s/\/etc\/ssl\/private\/ssl-cert-snakeoil.key/\/etc\/apache2\/ssl\/server.pem/" /etc/apache2/sites-available/default-ssl
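
Before restarting Apache, it is worth verifying that the copied certificate and private key actually belong together. A sketch using a throwaway self-signed pair (on a real system, point the -in arguments at /etc/apache2/ssl/server.crt and server.pem instead):

```shell
# Sketch: a certificate and its private key match when their RSA moduli
# are identical. A throwaway pair is generated here for illustration only.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=mail.example.org" \
  -keyout "$dir/server.pem" -out "$dir/server.crt" 2>/dev/null
cert_mod=$(openssl x509 -noout -modulus -in "$dir/server.crt")
key_mod=$(openssl rsa -noout -modulus -in "$dir/server.pem")
[ "$cert_mod" = "$key_mod" ] && echo "certificate and key match"
```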

Restart web-server and test applied changes.

# service apache2 restart

Import the root certificate root-cert.pem into the browser as a Certification Authority, then import the client certificate.

Alter the web server configuration to require a valid client certificate, but allow direct API calls from the mail server (this avoids an internal error when using the Kolab web administration panel).

# sed -i -e "/\/VirtualHost/i <Location />\nSSLRequireSSL\nSSLVerifyClient require\nSSLVerifyDepth 1\nOrder allow,deny\nallow from all\n</Location>\n\n<Location /kolab-webadmin/api/>\nSSLVerifyClient none\norder deny,allow\ndeny from all\nallow from mail.example.org\n</Location>" /etc/apache2/sites-available/default-ssl  
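
For readability, the sed one-liner above injects the following block before the closing </VirtualHost> tag:

```apache
<Location />
SSLRequireSSL
SSLVerifyClient require
SSLVerifyDepth 1
Order allow,deny
allow from all
</Location>

<Location /kolab-webadmin/api/>
SSLVerifyClient none
order deny,allow
deny from all
allow from mail.example.org
</Location>
```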

Restart web-server and test client certificate.

# service apache2 restart

Kolab - Use client certificate to fill the username field

You can use the client certificate to pre-fill the username in the login form.

To achieve this, edit the login_form function found in the /usr/share/roundcubemail/program/include/rcmail_output_html.php file.

--- /usr/share/roundcubemail/program/include/rcmail_output_html.php.orig	2014-07-06 16:24:08.005325038 +0200
+++ /usr/share/roundcubemail/program/include/rcmail_output_html.php	2014-07-06 16:40:54.429360653 +0200
@@ -1551,40 +1551,47 @@
     protected function login_form($attrib)
     {
         $default_host = $this->config->get('default_host');
         $autocomplete = (int) $this->config->get('login_autocomplete');
 
         $_SESSION['temp'] = true;
 
         // save original url
         $url = rcube_utils::get_input_value('_url', rcube_utils::INPUT_POST);
         if (empty($url) && !preg_match('/_(task|action)=logout/', $_SERVER['QUERY_STRING']))
             $url = $_SERVER['QUERY_STRING'];
 
         // Disable autocapitalization on iPad/iPhone (#1488609)
         $attrib['autocapitalize'] = 'off';
 
+        $email="";
+        if ($_SERVER["HTTPS"] == "on" &&  $_SERVER["SSL_CLIENT_VERIFY"] == "SUCCESS") {
+          if (preg_match('/\/emailAddress=([^\/]*)\//',$_SERVER['SSL_CLIENT_S_DN'],$matches)) {
+            $email=$matches[1];
+          }
+        }
+
         // set atocomplete attribute
         $user_attrib = $autocomplete > 0 ? array() : array('autocomplete' => 'off');
         $host_attrib = $autocomplete > 0 ? array() : array('autocomplete' => 'off');
         $pass_attrib = $autocomplete > 1 ? array() : array('autocomplete' => 'off');
 
         $input_task   = new html_hiddenfield(array('name' => '_task', 'value' => 'login'));
         $input_action = new html_hiddenfield(array('name' => '_action', 'value' => 'login'));
         $input_tzone  = new html_hiddenfield(array('name' => '_timezone', 'id' => 'rcmlogintz', 'value' => '_default_'));
         $input_url    = new html_hiddenfield(array('name' => '_url', 'id' => 'rcmloginurl', 'value' => $url));
-        $input_user   = new html_inputfield(array('name' => '_user', 'id' => 'rcmloginuser', 'required' => 'required')
+        $input_user   = new html_inputfield(array('name' => '_user', 'id' => 'rcmloginuser', 'required' => 'required', 'value' => $email)
             + $attrib + $user_attrib);
         $input_pass   = new html_passwordfield(array('name' => '_pass', 'id' => 'rcmloginpwd', 'required' => 'required')
             + $attrib + $pass_attrib);
         $input_host   = null;
 
         if (is_array($default_host) && count($default_host) > 1) {
             $input_host = new html_select(array('name' => '_host', 'id' => 'rcmloginhost'));
 
             foreach ($default_host as $key => $value) {
                 if (!is_array($value)) {
                     $input_host->add($value, (is_numeric($key) ? $value : $key));
                 }
                 else {
                     $input_host = null;
                     break;

Use client certificate to log in the user

The generated client certificate already contains the encrypted password (encrypted using the kolab_secret key) and the initialization vector, so you can use them to automatically log the user in by editing the /usr/share/roundcubemail/index.php file.

--- /usr/share/roundcubemail/index.php.orig	2014-07-06 18:32:40.830414058 +0200
+++ /usr/share/roundcubemail/index.php	2014-07-06 18:37:07.462423513 +0200
@@ -88,17 +88,26 @@
 $RCMAIL->action = $startup['action'];
 
 // try to log in
-if ($RCMAIL->task == 'login' && $RCMAIL->action == 'login') {
-    $request_valid = $_SESSION['temp'] && $RCMAIL->check_request(rcube_utils::INPUT_POST, 'login');
+if ($RCMAIL->task == 'login' && $_SERVER["HTTPS"] == "on" &&  $_SERVER["SSL_CLIENT_VERIFY"] == "SUCCESS") {
+    $request_valid = 1; 
+    if (preg_match('/\/emailAddress=([^\/]*)\//',$_SERVER['SSL_CLIENT_S_DN'],$matches)) {
+      $email=$matches[1];
+    }
+    if (preg_match('/\/1.2.3.4.5.6.7.1=([^\/]*)/',$_SERVER['SSL_CLIENT_S_DN'],$matches)) {
+      $pass=$matches[1];
+    }
+    if (preg_match('/\/1.2.3.4.5.6.7.2=([^\/]*)/',$_SERVER['SSL_CLIENT_S_DN'],$matches)) {
+      $iv=$matches[1];
+    }
+    $pass=rtrim(openssl_decrypt(base64_decode($pass),'aes-128-cbc', hex2bin("d2d97d097eedb397edea79f52b56ea74"), true,hex2bin($iv)));
 
     // purge the session in case of new login when a session already exists 
     $RCMAIL->kill_session();
 
     $auth = $RCMAIL->plugins->exec_hook('authenticate', array(
         'host' => $RCMAIL->autoselect_host(),
-        'user' => trim(rcube_utils::get_input_value('_user', rcube_utils::INPUT_POST)),
-        'pass' => rcube_utils::get_input_value('_pass', rcube_utils::INPUT_POST, true,
-            $RCMAIL->config->get('password_charset', 'ISO-8859-1')),
+        'user' => $email,
+        'pass' => $pass,
         'cookiecheck' => true,
         'valid'       => $request_valid,
     ));
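
The patches above extract fields from Apache's SSL_CLIENT_S_DN variable, a slash-separated subject string. The extraction can be exercised in isolation; this sketch uses sed instead of PHP's preg_match, with an example DN mirroring the client certificate created earlier:

```shell
# Sketch: parse the slash-separated subject DN the way the Roundcube
# patches do. The DN below mirrors the example client certificate;
# all values are illustrative.
dn="/C=PL/ST=state/O=example.org/OU=Certificate Authority/CN=Milosz/emailAddress=milosz@example.org/1.2.3.4.5.6.7.1=RX3f071sOYKxwDBhNpDVHA==/1.2.3.4.5.6.7.2=72a1e2086a765204122109382f8d4f5d"
email=$(printf '%s' "$dn" | sed -n 's|.*/emailAddress=\([^/]*\)/.*|\1|p')
pass=$(printf '%s' "$dn" | sed -n 's|.*/1.2.3.4.5.6.7.1=\([^/]*\).*|\1|p')
iv=$(printf '%s' "$dn" | sed -n 's|.*/1.2.3.4.5.6.7.2=\([^/]*\).*|\1|p')
echo "$email $pass $iv"
```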

Future improvements

The kolab_secret key could be stored in the Roundcube configuration file, and the login form could be modified further to remove the input fields and include more information.

It should be straightforward to add a shell script that generates a CRL.

The PHP code could be simplified a bit.

Please inspect the shell scripts to get an idea of the additional certificate parameters.