Planet

roundcube
Thu, 2014-12-18 01:00

We’re proud to announce the next service release of the stable version 1.0.
It contains a security fix along with some bug fixes and improvements for
the long-term support branch of Roundcube. The most important ones are:

  • Security: Fix possible CSRF attacks to some address book operations
    as well as to the ACL and Managesieve plugins.
  • Fix attachments encoded in TNEF containers (from Outlook)
  • Fix compatibility with PHP 5.2

It’s considered stable and we recommend updating all production installations
of Roundcube to this version. Download it from roundcube.net/download
and see the full changelog here.

Please do a backup before updating!


Timotheus Pokorra
Mon, 2014-12-15 11:08

In the last article, Kolab/Roundcube with Squirrelmail’s IMAPProxy on CentOS6, I showed how to easily configure an IMAPProxy for Roundcube, and explained the reasons for using an IMAP proxy as well.

Because I also investigated the Nginx IMAP Proxy, and got it to work after some workarounds, I want to share that setup here as well.

stunnel
With Nginx I had this problem: I was not able to connect to the Cyrus IMAP server if /etc/imapd.conf had the line allowplaintext: no. The error you get in /var/log/nginx/error.log is: "Login only available under a layer".
I did not want to change it to allowplaintext: yes.

See also this discussion on ServerFault: Can nginx be a mail proxy for a backend server that does not accept cleartext logins?

The solution is to use stunnel.

On CentOS6, you can run yum install stunnel. Unfortunately, no init script is installed, so you cannot run it as a service out of the box.

I have taken the script from the source tar.gz file from stunnel, and saved it as /etc/init.d/stunnel:

#!/bin/sh
# stunnel SysV startup file
# Copyright by Michal Trojnara 2002,2007,2008
 
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
DAEMON=/usr/bin/stunnel
PIDFILE=/var/run/stunnel/stunnel.pid
 
# Source function library.
. /etc/rc.d/init.d/functions
 
test -f $DAEMON || exit 0
 
case "$1" in
    start)
        echo -n "Starting universal SSL tunnel: stunnel"
        daemon $DAEMON || echo -n " failed"
        echo "."
        ;;
    stop)
        echo -n "Stopping universal SSL tunnel: stunnel"
        if test -r $PIDFILE; then
            kill `cat $PIDFILE` 2> /dev/null || echo -n " failed"
        else
            echo -n " no PID file"
        fi
        echo "."
        ;;
     restart|force-reload)
        echo "Restarting universal SSL tunnel"
        $0 stop
        sleep 1
        $0 start
        echo "done."
        ;;
    *)
        N=${0##*/}
        N=${N#[SK]??}
        echo "Usage: $N {start|stop|restart|force-reload}" >&2
        exit 1
        ;;
esac
 
exit 0

I have created this configuration file /etc/stunnel/stunnel.conf:

; Protocol version (all, SSLv2, SSLv3, TLSv1)
sslVersion = TLSv1
 
; Some security enhancements for UNIX systems - comment them out on Win32
chroot = /var/run/stunnel/
setuid = nobody
setgid = nobody
pid = /stunnel.pid
 
; Some performance tunings
socket = l:TCP_NODELAY=1
socket = r:TCP_NODELAY=1
 
; Use it for client mode
client = yes
; foreground = yes
 
; Service-level configuration
 
[imaps]
accept  = 8993
connect = 993

Some commands you need to run for configuring stunnel:

chmod a+x /etc/init.d/stunnel
service stunnel start
chkconfig stunnel on
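
Since stunnel runs in client mode here, the local port 8993 speaks plain IMAP and forwards over SSL to port 993. A quick way to verify the tunnel (assuming telnet is installed):

telnet localhost 8993
# you should be greeted with the Cyrus IMAP banner, e.g. "* OK ..."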

Nginx IMAP Proxy
Install with yum install nginx.

You have to provide a service for authentication. In my case, I let Cyrus decide whether the password is correct, so I just return the IP and port of the Cyrus server. I point to port 8993, which is the stunnel endpoint forwarding to port 993 of Cyrus.

This is my file /etc/nginx/nginx.conf

worker_processes  1;
 
events {
  worker_connections  1024;
}
 
error_log  /var/log/nginx/error.log info;
 
mail {
  auth_http  localhost:81/auth;
 
  proxy on;
  imap_capabilities  "IMAP4rev1"  "UIDPLUS"; ## default
  server {
    listen     8143;
    protocol   imap;
  }
}
 
http {
  server {
    listen localhost:81;
    location = /auth {
      add_header Auth-Status OK;
      add_header Auth-Server 127.0.0.1;  # backend ip
      add_header Auth-Port   8993;       # backend port
      return 200;
    }
  }
}

And the usual:

service nginx start
chkconfig nginx on
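
To check that the pieces respond as expected, you can query the auth service and the IMAP listener manually (a simple sanity check):

curl -i http://localhost:81/auth
# expect an HTTP 200 response carrying the Auth-Status, Auth-Server and Auth-Port headers
telnet localhost 8143
# expect an IMAP greeting from the proxy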

Roundcube configuration
You need to change the port that Roundcube connects to: instead of port 143, now use 8143, where your Nginx IMAP Proxy is running.

In file /etc/roundcubemail/config.inc.php:

$config['default_port'] = 8143;

I have added the initIMAPProxy.sh script to my TBits scripts: initIMAPProxy.sh
Just change the line at the top from up-imapproxy to nginx.


Timotheus Pokorra
Mon, 2014-12-15 11:06

There is the suggestion on the page http://trac.roundcube.net/wiki/Howto_Config/Performance to use a caching IMAP proxy. This will cache the connections to the IMAP server for each user, so that not every click on a message in Roundcube leads to the creation of a new connection.

I found a few alternatives and had a closer look at two of them: the Nginx IMAP Proxy and Squirrelmail’s IMAP Proxy.

Squirrelmail’s IMAPProxy
This is actually the easiest solution, at least compared to Nginx IMAP Proxy.

Install from EPEL with: yum install up-imapproxy

I have changed the following values in /etc/imapproxy.conf:

server_hostname localhost
listen_port 8143
listen_address 127.0.0.1
server_port 143
force_tls yes

To start and enable the service:

service imapproxy start
chkconfig imapproxy on
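
To confirm that the proxy is listening on the configured address and port, a quick check:

netstat -ltnp | grep 8143
# up-imapproxy should show up as LISTEN on 127.0.0.1:8143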

Roundcube configuration
You need to change the port that Roundcube connects to: instead of port 143, now use 8143, where Squirrelmail’s IMAP Proxy is running.

In file /etc/roundcubemail/config.inc.php:

$config['default_port'] = 8143;

I have added the initIMAPProxy.sh script to my TBits scripts: initIMAPProxy.sh

Nginx
Since the configuration of the Nginx IMAP Proxy is more complicated, I have created a separate post for that: Kolab/Roundcube with Nginx IMAP Proxy on CentOS6


Timotheus Pokorra
Wed, 2014-12-03 16:28

For testing, it is useful to run setup-kolab in unattended mode.

There might be other reasons too, e.g. as part of a Docker setup.

One option is to use Puppet: GitHub: puppet-module-kolab. I don’t know enough about Puppet myself, and I have not tried it yet.

My way of doing an unattended setup is to patch setup-kolab as follows (see initSetupKolabPatches.sh for the full context):

wget https://raw.githubusercontent.com/TBits/KolabScripts/Kolab3.3/kolab/patches/setupkolab_yes_quietBug2598.patch
wget https://raw.githubusercontent.com/TBits/KolabScripts/Kolab3.3/kolab/patches/setupkolab_directory_manager_pwdBug2645.patch
 
# different paths in debian and centOS
# Debian
pythonDistPackages=/usr/lib/python2.7/dist-packages
if [ ! -d $pythonDistPackages ]; then
  # centOS6
  pythonDistPackages=/usr/lib/python2.6/site-packages
  if [ ! -d $pythonDistPackages ]; then
    # centOS7
    pythonDistPackages=/usr/lib/python2.7/site-packages
  fi
fi
 
patch -p1 -i setupkolab_yes_quietBug2598.patch -d $pythonDistPackages/pykolab
patch -p1 -i setupkolab_directory_manager_pwdBug2645.patch -d $pythonDistPackages

Now you can call setup-kolab this way:

echo 2 | setup-kolab --default --timezone=Europe/Brussels --directory-manager-pwd=test

I need the echo 2 to answer the MySQL server question (option 2: new MySQL server); that is a quick solution for the moment.


Aaron Seigo
Tue, 2014-12-02 09:38

One of the things that came out of the Winter 2014 KDE PIM sprint in Munich is that people felt we, as a team, needed to coordinate more effectively and more often. Bringing together people from Kolab, upstream KDE PIM, downstream packagers and even users of PIM was fantastically productive, and everyone wanted to keep that ball rolling.

One suggestion was to do regular team meetings on IRC to formulate plans and keep up with each other's progress. While the mailing list is great for ongoing developer discussion and review board is fantastic for pushing forward technical steps, coordinating ourselves as a team can really be helped with an hour or two of real time discussion every so often.  So we lined up the first meeting for yesterday, and I have to say that I was very impressed at the turn-out. In all, 12 people signed up on the meeting's scheduling Doodle and I think there were even more in attendance, some just listening in but many participating.

Aleix Pol was kind enough to take notes and sent a good summary to the mailing list. The big topics we covered were the Qt5 / KDE Frameworks 5 (KF5) ports of the KDE PIM libraries, a new revision of Akonadi and a release roadmap for a Qt5/KF5 based Kontact. These are all quite important topics for both Kolab, which relies on Kontact for its desktop client, and KDE itself, so it was good to focus on them and make some progress. And make progress we did!

There will be releases of the libraries for Qt5 as frameworks coming soon. The porting effort, led largely by Laurent Montel, has done a great job to get the code to the point that such a release can be made. The kdepimutils library is nearly gone, with the useful parts finding their way to more appropriate frameworks that already exist, and kcalcore needs a bit more love ... but otherwise we're "there" and just the repository creation remains. Aleix Pol and Dan Vratil will be heading up this set of tasks, and once they are done we will be left with just the small core of libraries that rely on Akonadi. Which brings us to the next topic.

A possible major revision of Akonadi is currently being prototyped. This early development is happening in a separate repository until everyone is confident that the ideas are solid and workable in practice. The goals of this effort include producing a leaner, more robust foundation for applications that would need access to PIM data (such as Kontact), one which is also easier to develop with and for. It is still early days but we hope to have enough of an implementation in place by the end of December that we can not only start talking about it  publicly in more detail, but figure out a realistic and responsible release schedule for Kontact 5.

... and that is where we ended up with the release schedule discussion: we need more information, which  we won't have until January, before we can form a realistic schedule. So that topic has been tabled until the next IRC meeting in January.

The PIMsters won't be waiting until January for our next IRC meeting, however. There will be another one on the 15th of December. Dan Vratil will be organizing, so look for the announcement on the kde-pim at kde.org mailing list if you are interested in joining us.


Fri, 2014-11-28 18:00

ProFTPD is a versatile FTP server. I recently integrated it into my Kolab 3.3 server environment, so that user access can be easily organized with the standard kolab-webadmin. The design looks as follows:

Kolab users are able to log in to ProFTPD, but every user gets jailed in his own separate (physical) home directory. According to his group memberships, additional shared folders can be displayed and accessed within this home directory.

You will need proftpd with support for LDAP and virtual root environments. In Debian and Ubuntu, this is achieved via module packages:

  • proftpd-mod-ldap, proftpd-mod-vroot

On other platforms you may need to compile your own proftpd.

Via kolab-webadmin I created a new organizational unit FTPGroups within parent unit Groups. Within this unit, you can now add groups of type (Pure) POSIX Group. These groups are later used to restrict or permit access to certain directories or apply other custom settings per group by using the IfGroup directive of ProFTPD.

Note that you should stick to sub-units of ou=Groups here, so that the unit will be recognized by kolab-webadmin. The LDAP record of such a group may look like this:

dn: cn=ftp_test_group,ou=FTPGroups,ou=Groups,dc=domain,dc=com
cn: ftp_test_group
gidnumber: 1234
objectclass: top
objectclass: groupofuniquenames
objectclass: posixgroup
uniquemember: uid=testuser,ou=People,dc=domain,dc=com
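
To verify that the group is stored as expected, you can query the directory server, for example with ldapsearch (the bind DN and base DN here match the example above; adjust them to your setup):

ldapsearch -x -D "cn=Directory Manager" -W \
  -b "ou=FTPGroups,ou=Groups,dc=domain,dc=com" "(cn=ftp_test_group)"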

To make sure that our Kolab users and groups within the sub-unit get mapped correctly to their equivalents in the FTP server, we have to edit the directives for mod_ldap. Just start with my working sample configuration ldap.conf on pastebin, which should be included in your main proftpd configuration.

Because we use the standard Kolab LDAP schema, the users possess neither a user ID nor a group ID. Therefore, ProFTPD will fall back to the LDAPDefaultUID (example: the ID of “nobody”) and LDAPDefaultGID (example: 10000). From the system side, a user with this combination of UID and GID should be allowed to read from (and maybe write to) your physical FTP directory tree. You can either add the user or group to your system and set the permissions accordingly, or use access control lists (ACLs). Since I use the ACL approach, the group with ID 10000 does not have to exist in /etc/group. You may install acl by executing

~# apt-get install acl

and mount your FTP storage device with the acl option (to make this persistent, add it in /etc/fstab) by executing

~# mount -o remount,defaults,noexec,rw,acl /dev/sda1 /var/ftp

To allow access for users in our default group 10000 (for both existing and newly created files), we have to use the setfacl command. Think carefully about this: we do not want users to be able to remove one of the shared folders accidentally!

~# setfacl     -m g:10000:rx  /var/ftp/
~# setfacl -d  -m g:10000:rx  /var/ftp/
~# setfacl -d -Rm g:10000:rwx /var/ftp/*
~# setfacl    -Rm g:10000:rwx /var/ftp/*
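
You can verify the resulting entries with getfacl:

~# getfacl /var/ftp
# the output should contain group:10000:r-x plus the matching default: entries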

We want all users to have their own home directory, which resides in /var/ftp/home/, so make sure this directory exists. To jail each user to their own home directory, change the DefaultRoot directive in your main configuration file /etc/proftpd.conf to look like

DefaultRoot  /var/ftp/home/%u

Nonexistent home directories /var/ftp/home/username will be created as requested by ldap.conf (see above). At this point, LDAP users should be able to log in and will be dropped into their empty home directory. Now we have to set up the directory permissions and link shared directories into the home directory. To achieve this we will make extensive use of the IfGroup directive. It’s very important that the module mod_ifsession.c is the last module loaded in /etc/proftpd/modules.conf! Additionally, you should have lines that load mod_vroot.c and mod_ldap.c.

Linking is very simple and works as follows:

<IfGroup ftp_test_group>
   VRootAlias /var/ftp/share /share
</IfGroup>

Very useful in terms of security is to limit the use of particular FTP commands to the admin group:

# limit login to users with valid ftp_* groups
<Limit LOGIN>
   AllowGroup ftp_admin_group,ftp_test_group
</Limit>
# in general allow ftp-commands for all users
<Directory />
   <Limit ALL ALL FTP>
      AllowGroup ftp_admin_group,ftp_test_group
   </Limit>
</Directory>
# deny deletion of files (does not cover overwriting)
<Directory />
   <Limit DELE RMD>
      DenyGroup !ftp_admin_group
   </Limit>
</Directory>

I think we are done here now. Restart your FTP server by executing

~# service proftpd restart

Here you go! For testing purposes, set the log level to debug and monitor the login process. Also force SSL/TLS (mod_tls.c), because otherwise everything, even passwords, will be transferred in cleartext! If you run into trouble somewhere, just let me know.
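
For reference, a minimal mod_tls snippet could look like the following sketch (the certificate paths are placeholders; adjust them to your setup):

<IfModule mod_tls.c>
   TLSEngine                on
   TLSRequired              on
   TLSRSACertificateFile    /etc/ssl/certs/proftpd.pem
   TLSRSACertificateKeyFile /etc/ssl/private/proftpd.key
   TLSLog                   /var/log/proftpd/tls.log
</IfModule>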

Filed under: Linux, Technik Tagged: Kolab, ProFTPd


Aaron Seigo
Fri, 2014-11-28 13:38

The last month has been a wonderful romp through the land of Kolab for me, getting better acquainted with both the server and client(s) side of things. I had expected to learn quite a bit in that time, of course, but had little idea just what it would end up being. That is half the fun of exploration. Rather than keeping it all to myself, I thought I'd share some of the more interesting bits with you here.

First up: chwala. Ch-what-a? Those of you who know Polish will recognize that word immediately; it means "glory". So, first thing learned: there are fun names buried in the guts of Kolab. Collect them all! Ok, so what does it do?

It's the file storage backend for the Kolab web client. You can find the code here. Like Roundcube, the web-based groupware application that ships with Kolab, it is written in PHP, and it is there to glue file storage to the groupware application. It is responsible for the "save to cloud" and "attach from cloud" features in the webmail client, for instance, which allow you to keep your files on the server side between recipients on the same host. The files are also available over WebDAV, making browsing and working with files from almost any modern file manager easy.

The default storage system behind the API is exactly what you'd expect from Kolab: IMAP. This makes the file store essentially zero-configuration when setting up stock Kolab, and it gives the file store the same performance and access mechanisms as the other groupware data Kolab stores for you. Quite an elegant solution.

However, Chwala is not limited to IMAP storage. Let's say you want comprehensive offline file synchronization or you wish to integrate it with some network attached storage system you have. No problem: Chwala has a backend API with which you can implement integration with whatever storage system you wish.

In addition to the default IMAP store, Chwala also comes with a backend for Seafile, which is a free software storage cloud system with a cross-platform synchronization client (which happens to be written with Qt, by the way). The Seafile code can be found here.

I think that's pretty spiffy, and it is certainly the sort of thing that makes Kolab attractive in professional settings as a "full solution". File storage is a requirement for such environments, and making it a part of the "bigger picture" infrastructure can help lift the mundane task of file management up into where your daily workflow already is.

Chwala!

p.s. A start to a file storage access system was begun in Kontact by the ever moving, ever typing, ever coding Laurent Montel. It would be fantastic to see this mature over time into a full-featured bridge to functionality such as that provided by Chwala. I've used it to access files on MyKolab, but it isn't deeply integrated with Kontact yet (or at least not that I've been able to find), nor does it have support for the "to/from cloud" features.

p.p.s. I haven't yet tried Seafile myself, but have read good things about it online. If you have used it, I'd love to hear about your experiences in the comments below.


Cornelius Hald
Tue, 2014-11-11 12:51

Today we’re showing how to extend the single-domain setup done earlier to get a truly multi-domain Kolab install. You should probably reserve a couple of hours, as there are quite a few changes to make and not everything is totally trivial. Also, if you’ve not read the blog post about the single-domain setup, now is a good time :)

First of all, you can find the official documentation here. It’s probably a good idea to read it as well. We start with the easy parts and end with postfix, which needs the most changes. At the very end there are a couple of things that may or may not be issues you should be aware of.

Change Amavisd

We tell amavisd to accept all domains.

vi /etc/amavisd/amavisd.conf
# Replace that line
@local_domains_maps = ( [".$mydomain"] );
# With this line
$local_domains_re = new_RE( qr'.*' );

Change Cyrus IMAPD

Tell the IMAP server how to find our other domains. Add the following to the bottom of /etc/imapd.conf

ldap_domain_base_dn: cn=kolab,cn=config
ldap_domain_filter: (&(objectclass=domainrelatedobject)(associateddomain=%s))
ldap_domain_name_attribute: associatedDomain
ldap_domain_scope: sub
ldap_domain_result_attribute: inetdomainbasedn
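
Afterwards, restart Cyrus so the new settings take effect (the service name may vary with your packaging; on CentOS 7 it is typically cyrus-imapd):

systemctl restart cyrus-imapd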

Change Roundcube (webmail)

Basically you need to change the base_dn in several places. The placeholder ‘%dc’ is replaced at run-time with the real domain the user belongs to.

To save me some typing I’m pasting the diff output produced by git here. So it looks like more than it actually is…

diff --git a/roundcubemail/password.inc.php b/roundcubemail/password.inc.php
index c3d449c..eafc8e5 100644
--- a/roundcubemail/password.inc.php
+++ b/roundcubemail/password.inc.php
@@ -45,7 +45,7 @@

     // LDAP base name (root directory)
     // Exemple: 'dc=exemple,dc=com'
-    $config['password_ldap_basedn'] = 'ou=People,dc=skolar,dc=de';
+    $config['password_ldap_basedn'] = 'ou=People,%dc';

     // LDAP connection method
     // There is two connection method for changing a user's LDAP password.
@@ -99,7 +99,7 @@
     // If password_ldap_searchDN is set, the base to search in using the filter below.
     // Note that you should comment out the default password_ldap_userDN_mask setting
     // for this to take effect.
-    $config['password_ldap_search_base'] = 'ou=People,dc=skolar,dc=de';
+    $config['password_ldap_search_base'] = 'ou=People,%dc';

     // LDAP search filter
     // If password_ldap_searchDN is set, the filter to use when
diff --git a/roundcubemail/calendar.inc.php b/roundcubemail/calendar.inc.php
index 98be7b9..8f98f8a 100644
--- a/roundcubemail/calendar.inc.php
+++ b/roundcubemail/calendar.inc.php
@@ -22,11 +22,11 @@
             'hosts'                 => 'localhost',
             'port'                  => 389,
             'use_tls'               => false,
-            'base_dn'               => 'ou=Resources,dc=skolar,dc=de',
+            'base_dn'               => 'ou=Resources,%dc',
             'user_specific'         => true,
             'bind_dn'               => '%dn',
             'bind_pass'             => '',
-            'search_base_dn'        => 'ou=People,dc=skolar,dc=de',
+            'search_base_dn'        => 'ou=People,%dc',
             'search_bind_dn'        => 'uid=kolab-service,ou=Special Users,dc=skolar,dc=de',
             'search_bind_pw'        => 'xUlA7PzBZnRaYV4',
             'search_filter'         => '(&(objectClass=inetOrgPerson)(mail=%fu))',
diff --git a/roundcubemail/config.inc.php b/roundcubemail/config.inc.php
index bfbfba3..60dc0b2 100644
--- a/roundcubemail/config.inc.php
+++ b/roundcubemail/config.inc.php
@@ -6,7 +6,7 @@

     $config['session_domain'] = '';
     $config['des_key'] = "FMlzG7LeqiUSOSK2T8xKQTHR";
     $config['use_secure_urls'] = true;
     $config['assets_path'] = 'assets/';

@@ -154,11 +154,11 @@
                     'hosts'                     => Array('localhost'),
                     'port'                      => 389,
                     'use_tls'                   => false,
-                    'base_dn'                   => 'ou=People,dc=skolar,dc=de',
+                    'base_dn'                   => 'ou=People,%dc',
                     'user_specific'             => true,
                     'bind_dn'                   => '%dn',
                     'bind_pass'                 => '',
-                    'search_base_dn'            => 'ou=People,dc=skolar,dc=de',
+                    'search_base_dn'            => 'ou=People,%dc',
                     'search_bind_dn'            => 'uid=kolab-service,ou=Special Users,dc=skolar,
                     'search_bind_pw'            => 'xUlA7PzBZnRaYV4',
                     'search_filter'             => '(&(objectClass=inetOrgPerson)(mail=%fu))',
@@ -196,7 +196,7 @@
                             'photo'             => 'jpegphoto'
                         ),
                     'groups'                    => Array(
-                            'base_dn'           => 'ou=Groups,dc=skolar,dc=de',
+                            'base_dn'           => 'ou=Groups,%dc',
                             'filter'            => '(&' . '(|(objectclass=groupofuniquenames)(obj
                             'object_classes'    => Array("top", "groupOfUniqueNames"),
                             'member_attr'       => 'uniqueMember',
diff --git a/roundcubemail/kolab_auth.inc.php b/roundcubemail/kolab_auth.inc.php
index 9fb5335..8eff518 100644
--- a/roundcubemail/kolab_auth.inc.php
+++ b/roundcubemail/kolab_auth.inc.php
@@ -8,7 +8,7 @@
         'port'                      => 389,
         'use_tls'                   => false,
         'user_specific'             => false,
-        'base_dn'                   => 'ou=People,dc=skolar,dc=de',
+        'base_dn'                   => 'ou=People,%dc',
         'bind_dn'                   => 'uid=kolab-service,ou=Special Users,dc=skolar,dc=de',
         'bind_pass'                 => 'xUlA7PzBZnRaYV4',
         'writable'                  => false,
@@ -26,11 +26,14 @@
         'sizelimit'                 => '0',
         'timelimit'                 => '0',
         'groups'                    => Array(
-                'base_dn'           => 'ou=Groups,dc=skolar,dc=de',
+                'base_dn'           => 'ou=Groups,%dc',
                 'filter'            => '(|(objectclass=groupofuniquenames)(objectclass=groupofurl
                 'object_classes'    => Array('top', 'groupOfUniqueNames'),
                 'member_attr'       => 'uniqueMember',
             ),
+        'domain_base_dn'           => 'cn=kolab,cn=config',
+        'domain_filter'            => '(&(objectclass=domainrelatedobject)(associateddomain=%s))'
+        'domain_name_attr'         => 'associateddomain',
     );

Change Postfix

Now this is actually the hardest part that requires the most changes. Initially I thought there would be a way around that, but it looks like it is currently really needed.

First we apply a couple of changes that allow us to have multiple domains besides our management domain (the domain we used to install Kolab). However, those changes will not support domains having aliases, e.g. the domain kodira.de with an alias of tourschall.com. To get domains with working aliases, we need to do even more.

Postfix Part 1 (basics)

Please follow the instructions given in the official documentation here. I don’t really see how I could write that part better or more compactly. Do all the changes for: mydestination, local_recipient_maps, virtual_alias_maps and transport_maps.

Now, if you don’t need aliases, you’re basically done and you can skip the next section.

Postfix Part 2 (alias domains)

For each domain that should support aliases, we need to add 4 files. We’re doing this based on the following example.

  • Domain: kodira.de
  • Alias: tourschall.com

First create the directory /etc/postfix/ldap/kodira.de (name of the real domain)

In that directory create the following 4 files, but do not just copy & paste them. You have to adjust them to your setup.

# local_recipient_maps.cf
# Adjust domain, bind_dn, bind_pw
server_host = localhost
server_port = 389
version = 3
search_base = cn=kolab,cn=config
scope = sub
domain = ldap:/etc/postfix/ldap/kodira.de/mydestination.cf
bind_dn = uid=kolab-service,ou=Special Users,dc=skolar,dc=de
bind_pw = XXX
query_filter = (&(|(mail=%s)(alias=%s))(|(objectclass=kolabinetorgperson)(|(objectclass=kolabgroupofuniquenames)(objectclass=kolabgroupofurls))(|(|(objectclass=groupofuniquenames)(objectclass=groupofurls))(objectclass=kolabsharedfolder))(objectclass=kolabsharedfolder)))
result_attribute = mail
# mydestination.cf
# Adjust bind_dn, bind_pw, query_filter
server_host = localhost
server_port = 389
version = 3
search_base = cn=kolab,cn=config
scope = sub
bind_dn = uid=kolab-service,ou=Special Users,dc=skolar,dc=de
bind_pw = XXX
query_filter = (&(associatedDomain=%s)(associatedDomain=kodira.de))
result_attribute = associateddomain
# transport_maps.cf
# Adjust domain, bind_dn, bind_pw
server_host = localhost
server_port = 389
version = 3
search_base = cn=kolab,cn=config
scope = sub
domain = ldap:/etc/postfix/ldap/kodira.de/mydestination.cf
bind_dn = uid=kolab-service,ou=Special Users,dc=skolar,dc=de
bind_pw = XXX
query_filter = (&(|(mailAlternateAddress=%s)(alias=%s)(mail=%s))(objectclass=kolabinetorgperson))
result_attribute = mail
result_format = lmtp:unix:/var/lib/imap/socket/lmtp
# virtual_alias_maps.cf
# Adjust search_base, domain, bind_dn, bind_pw
server_host = localhost
server_port = 389
version = 3
search_base = dc=kodira,dc=de
scope = sub
domain = ldap:/etc/postfix/ldap/kodira.de/mydestination.cf
bind_dn = uid=kolab-service,ou=Special Users,dc=skolar,dc=de
bind_pw = XXX
query_filter = (&(|(mail=%s)(alias=%s))(objectclass=kolabinetorgperson))
result_attribute = mail

Almost done, but don’t forget to reference those files from /etc/postfix/main.cf.
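
For illustration, the references could look roughly like the following sketch; the domain-independent map files are assumptions based on a standard setup as per the official documentation, so keep whatever your main.cf already lists and just append the per-domain maps:

# /etc/postfix/main.cf (excerpt)
mydestination = ldap:/etc/postfix/ldap/mydestination.cf,
    ldap:/etc/postfix/ldap/kodira.de/mydestination.cf
local_recipient_maps = $alias_maps,
    ldap:/etc/postfix/ldap/local_recipient_maps.cf,
    ldap:/etc/postfix/ldap/kodira.de/local_recipient_maps.cf
virtual_alias_maps = ldap:/etc/postfix/ldap/virtual_alias_maps.cf,
    ldap:/etc/postfix/ldap/kodira.de/virtual_alias_maps.cf
transport_maps = ldap:/etc/postfix/ldap/transport_maps.cf,
    ldap:/etc/postfix/ldap/kodira.de/transport_maps.cf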

The bad news is: you have to add and adjust those 4 files for each domain which should support aliases. But the good news is: once configured you can use as many aliases for that domain as you want. No need to change config files for that.

Postfix Part 3 (finishing up)

Restart all services or just reboot the machine. Most things should work now, but there are a couple of points you still might need to take care of.

  1. In our main.cf there were references to some catchall maps that we do not use and that do not exist on the file system. Therefore Postfix stopped looking at the rest of those maps. We simply deleted the catchall references from main.cf and got rid of that problem.
  2. In our setup we had an issue with a domain having an alias with more than two parts, e.g. mail.kodira.de. As we don't need addresses of the form user@host.domain.tld, we removed this alias and thus solved the problem.

Create domains and users using WAP

Now you should be able to use the ‘Kolab Web Administration Panel’ (WAP) to create domains and users.

  1. Go to http://<yourserver>/kolab-webadmin
  2. Login as ‘cn=Directory Manager’
  3. Go to ‘Domains’ and add a domain (simply giving it a name is enough)
  4. If you want add an alias to this domain by clicking the ‘+’ sign
  5. Logout
  6. Login again as ‘cn=Directory Manager’
  7. In the top right corner you should be able to select your newly created domain. Select it.
  8. Go to ‘Users’ and add a user to your new domain
  9. If you want, give the user the role ‘kolab-admin’. If you do, that user is able to log into WAP and administer that domain. For that login you should not use LDAP notation, but simply user@domain.tld.

Now maybe create a couple of test users on various domains and try to send some mails back and forth. It should work. If not, have a look at these log files:

  • /var/log/maillog
  • /var/log/dirsrv/slapd-mail/access

Also do a grep for ‘kodira’, ‘tourschall’, ‘example’ in /etc/ to make sure you didn’t accidentally forget to change some example configuration. Last but not least, think about putting /etc/ into a git repository – that will help you review and restore changes you’ve made.
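
A quick way to do that grep, listing any files that still contain the example domains:

grep -ril 'kodira\|tourschall\|example' /etc/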

Good luck and have fun :)

The post Kolab 3.3 Multi-Domain Setup on CentOS 7 appeared first on Kodira.


Cornelius Hald
Tue, 2014-11-11 10:54

After a lot of reading and some trial-and-error, I’ve figured out a way to reproducibly install Kolab 3.3 on CentOS 7 with multi-domain support. Most of the information can be found in the official documentation; however, some parts are not that easy to understand for a Kolab noob.

I won’t go into too much detail here, so it will be mostly a step-by-step thing without a lot of explanation. You really should not use this document as your only source of information. At least read through the official documentation as well. Also you should feel confident with Linux admin stuff – otherwise Kolab might not be the best choice, as it is not an off-the-shelf solution.

In this document we will use the following hosts and domains. Replace them with your own.

  • Hostname: mail.skolar.de
  • Management domain: skolar.de
  • Primary hosted domain: kodira.de
  • Alias for primary hosted domain: tourschall.com
  • We could go on with a secondary hosted domain, but it works exactly like the primary hosted domain, so we won’t go there…

Let’s start with a fresh minimal CentOS 7 install where you are root.

First we disable SE-Linux and the firewall. You should re-enable them later, but for now we don’t want them to get in our way:

# Check status of SE-Linux
sestatus
# Temporarily disable it
setenforce 0
# Stop firewall
systemctl stop firewalld
# Disable firewall (don't start on next boot)
systemctl disable firewalld

To permanently disable SE-Linux, edit /etc/selinux/config (I recommend you do this now).

Set a valid host name (needs to be resolvable via DNS)

echo "mail.skolar.de" > /etc/hostname
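
On CentOS 7 the same can also be done with systemd's hostnamectl, which additionally updates the running hostname:

hostnamectl set-hostname mail.skolar.de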

Add Kolab repositories and GPG keys

rpm -Uhv http://ftp.uma.es/mirror/epel/beta/7/x86_64/epel-release-7-1.noarch.rpm
cd /etc/yum.repos.d/
wget http://obs.kolabsys.com/repositories/Kolab:/3.3/CentOS_7/Kolab:3.3.repo
wget http://obs.kolabsys.com/repositories/Kolab:/3.3:/Updates/CentOS_7/Kolab:3.3:Updates.repo
gpg --search devel@lists.kolab.org
# Do it again, for me it always just worked the second time
gpg --search devel@lists.kolab.org
gpg --export --armor devel@lists.kolab.org > devel.asc
rpm --import devel.asc
rm devel.asc

Install Kolab (including dependencies)

yum install kolab

The Kolab setup process (run in the next step) will start the various services like postfix and apache and should also start MariaDB (which is a replacement for MySQL). Due to bug #3877 it won’t, so we have to enable and start MariaDB manually.

systemctl enable mariadb
systemctl start mariadb

Now start the Kolab setup process. It will ask you for many passwords; most of them you can just leave as they are, but pay attention to the password for “Directory Manager”. Either remember the one setup-kolab generated or type in your own. You’ll need that password quite often later on.

setup-kolab

Besides the passwords, the right answers for me were:

Domain -> skolar.de
What MySQL server are we setting up? -> 2: New MySql server
Timezone -> Europe/Berlin

Now might be a good time to reboot and see if all the services are starting up successfully. If you do, please make sure SE-Linux is turned off permanently.

Great, you should now be able to log in to the web admin. The URL and the credentials are as follows:

http://<yourserver>/kolab-webadmin
User: cn=Directory Manager
Password: The password used in setup-kolab

You should be able to create a new Kolab user and log into webmail using that new user. But because of bug #3565 your incoming mail is not properly scanned. Do the following to resolve that issue:

vi /etc/amavisd/amavisd.conf
# Change that line
\&ask_daemon, ["CONTSCAN {}\n", "/var/spool/amavisd/clamd.sock"],
# To look like this
\&ask_daemon, ["CONTSCAN {}\n", "/var/run/clamd.amavisd/clamd.sock"],
# Save, close and restart amavisd
systemctl restart amavisd

This should be all for a single-domain install. You should be able to send and receive mail using the web frontend or dedicated IMAP clients.

That’s all for part 1. Have a look at part 2 where we’re extending this setup to support multiple domains.

The post Kolab 3.3 Single-Domain Setup on CentOS 7 appeared first on Kodira.


roundcube
Mon, 2014-11-10 21:13

We’re proud to announce that the beta release of the next major version 1.1 of
Roundcube webmail is now available for download and testing. With this
milestone we introduce a bunch of new features and some clean-up with the 3rd
party libraries Roundcube uses:

  • Allow searching across multiple folders
  • Improved support for screen readers and assistive technology using
    WCAG 2.0 and WAI ARIA standards
  • Support images in HTML signatures (copy & paste)
  • Added namespace filter and folder searching in folder manager
  • New config option to disable UI elements/actions
  • Stronger password encryption using OpenSSL
  • Support for the IMAP SPECIAL-USE extension
  • Support for Oracle databases
  • Moved 3rd party libs to vendor directory, managed by Composer

And of course plenty of small improvements and bug fixes.

IMPORTANT: with this version, we dropped support for PHP < 5.3.7 and
Internet Explorer < 9. IE7/IE8 support can be restored by enabling the
legacy_browser plugin.

See the complete Changelog at trac.roundcube.net/wiki/Changelog
and download the new packages from roundcube.net/download. Please note that this
is a beta release and we recommend testing it in a separate environment. And
don’t forget to back up your data before installing it.


Wed, 2014-10-29 00:00

Recently, I noticed a "Failed to save changes" error when I tried to move events between distinct calendars.
After a short investigation I found that this bug is already fixed, but not packaged for an easy upgrade, so I will briefly describe how to apply the fix for Debian Wheezy and Kolab 3.3.

The above-mentioned bug is already fixed in the roundcubemail-plugins-kolab repository.
You can jump directly to the a3d5f717 commit and read the details.

Quick Remedy

We need to replace the calendar and libkolab plugins.

Download the code from the already fixed roundcubemail-plugins-kolab repository to the /tmp temporary directory.

# cd /tmp
# wget http://git.kolab.org/roundcubemail-plugins-kolab/snapshot/roundcubemail-plugins-kolab-a3d5f717a2250cfbd7a5652a445adcd6a0845545.tar.gz

Rename the roundcubemail plugin directories that will be replaced in the next step.

# mv /usr/share/roundcubemail/plugins/{calendar,calendar.before_fix}
# mv /usr/share/roundcubemail/plugins/{libkolab,libkolab.before_fix}

Extract both plugins from downloaded archive.

# tar xvfz roundcubemail-plugins-kolab-a3d5f717a2250cfbd7a5652a445adcd6a0845545.tar.gz roundcubemail-plugins-kolab-a3d5f717a2250cfbd7a5652a445adcd6a0845545/plugins/{calendar,libkolab} --strip-components 2

Install and configure plugins.

# mv {calendar,libkolab} /usr/share/roundcubemail/plugins/
# ln -s /etc/roundcubemail/calendar.inc.php /usr/share/roundcubemail/plugins/calendar/config.inc.php
# ln -s /etc/roundcubemail/libkolab.inc.php /usr/share/roundcubemail/plugins/libkolab/config.inc.php

Remove downloaded archive.

# rm roundcubemail-plugins-kolab-a3d5f717a2250cfbd7a5652a445adcd6a0845545.tar.gz

Simple and easy.


Fri, 2014-10-24 00:00

Just in time for the official Kolab 3.3 release, our Gentoo packages for Kolab 3.2 became stable and ready to use. This clears the way for the upcoming release of Kolab 3.3 for Gentoo. Although this release won't bring any major changes, it prepares the ground for upcoming developments and new features in Kolab 3.3. Further, with Kolab 3.2 we introduced an upgrade path between Kolab releases for Gentoo, and we will try our best to keep updates as consistent and comfortable as possible.


Aaron Seigo
Tue, 2014-10-21 13:45

I've been a long time fan of Kolab, the free software collaboration and groupware system. I have recommended it, and even helped deploy it a few times, since it launched some ten years ago. I used it back then with KDE's Kontact, and still do to this day.

Kolab interested me because it had the opportunity to join such key free software products as LibreOffice (then Open Office) and Firefox in terms of importance and usage. Think about it: in a professional setting (business, government or educational) what key software tools are universally required? Certainly among them are tools to read and edit office documents; a world-class web browser; and collaboration software (email, calendaring, contacts, resource booking, notes, task lists, file sharing ...). The first two were increasingly well covered, but that last one? Not so much.

And then Kolab walked on to the stage and held out the promise of completing the trifecta.
However, there were years in between then and now when it was less obvious to me that Kolab had a glowing future. It was an amazing early-stage product that filled a huge gap in the free software stack, but development seemed to slow up and promotion was extremely limited. This felt like a small tragedy.

So when I heard that Kolab Systems was launching back in 2010 as a company centered around Kolab, I was excited: Could this be a vehicle which tows Kolab forward towards success? Could this new company propel Kolab effectively into the market which is currently the domain of proprietary products? Only time would tell ... I knew the founders personally, and figured that if anyone could pull this off it would be them. I also knew that they would work with freedom and upstream communities as priorities.

Four years later and Kolab Systems has indeed been successful in bringing Kolab significantly forward technologically and in adoption. Today Kolab is more reliable and has a spectacular set of features, thanks to the solid engineering team that has come together with the help and support of Kolab Systems.

Their efforts have also resulted in Kolab being used more: Fortune 100 companies are using Kolab, the city of Munich is currently migrating to it, there are educational systems using it and, of course, there is My Kolab, which is a hosted instance of Kolab that is being used by an ever-growing number of people.

Kolab Systems has also helped the free software it promotes and relies on flourish by investing in it: developers are paid to work on upstream free software such as Roundcube and Kontact in addition to the Kolab server; community facilitation and public promotion are in focus ... there's a rather nice balance between company and community at play.

There is still a lot to do, however. This is not the end of a success story, perhaps only the end of the beginning. So when the opportunity arose to join Kolab Systems I didn't have to think twice. Starting this month I am joining the Kolab Systems team where I will be engaged in technical efforts (more so in the near term) as well as business and community development. I'm really excited to be joining what is a pretty stellar team of people working on technology I believe in.

Before wrapping up, I'd like to share something that helped convince me about Kolab Systems. I've known Georg Greve, Kolab Systems' CEO and Free Software Foundation Europe founder, for a good number of years. One afternoon during a friendly walk-and-chat in the countryside near his house, he noted that we should not be satisfied with just making software that is free-as-in-freedom; it should also be awesome software, presented as something worth wanting. It is unrealistic to expect everyone to use free software solely because it is ethically the right thing to do (which it is), but we might expect people to choose free software because it is the most desirable option they know of. To phrase it as an aspiration:
 

Through excellence we can spread freedom.

I'll probably write more about this philosophy another time, as there are a number of interesting facets to it. I'll also write from time to time about the the interesting things going on in the Kolab world .. but that's all for another time. Right now I need to get back to making notes-on-emails-sync'd-with-a-kolab-server work well. :)


roundcube
Fri, 2014-10-10 23:25

PGP encryption is one of the most frequently requested features for Roundcube, and for good reason: more and more people are starting to care about end-to-end encryption in their everyday communication. But unfortunately, webmail applications currently can’t fully participate in this game, and doing PGP encryption right in web-based applications isn’t a simple task. Although there are ways and even some basic implementations, all of them have their pros and cons. And yet the ultimate solution is still missing.

Browser extensions to the rescue

In our opinion, the way to go is with a browser extension to do the important work and guard the keys. A crucial point is to keep the encryption component under the user’s full control, which in the browser and http world can only be provided with a native browser plugin. And the good news is, there are working extensions available today. The most prominent one is probably Mailvelope, which detects encrypted message bodies in various webmail applications and also hooks into the message composition to send signed and encrypted email messages with your favorite webmail app. Plus, another very promising tool for end-to-end encryption is coming our way: p≡p; a browser extension is at least planned in the longer term. And even Google just started their own project with the recently announced end-to-end Chrome extension.

That’s a good start indeed. However, the encryption capabilities of those extensions only cover the message body but leave out attachments or even PGP/MIME messages, mostly because the extension has limited knowledge about the webmail app and there’s no interaction between the web app and the extension. On the other hand, the webmail app isn’t aware of the encryption features available in the user’s browser and therefore suppresses certain parts of a message, like signatures. A direct interaction between the webmail and the encryption extension could help add the missing pieces, like encrypted attachment upload and message signing. All we need to do is introduce the two components to each other.

From the webmail developer’s perspective

So here’s a loose list of functionality we’d like to see exposed by an encryption browser extension and which we believe would contribute to an integrated solution for secure emailing.

A global (window.encryption-style) object providing functions to:

  • List the supported encryption technologies (pgp, s/mime)
  • Switch to manual mode (i.e. disabling automatic detection of webmail containers)

For message display:

  • Register message content area (jQuery-like selector)
  • Setters for message headers (e.g. sender, recipient)
  • Decrypt message content (String) directly
  • Validate signature (pass signature as argument)
  • Download and decrypt attachment from a given URL and
    • a) prompt for saving file
    • b) return a FileReader object for inline display
  • Bonus points: support for pgp/mime; implies full support for MIME message structures

For message composition:

  • Setters for message recipients (or recipient text fields)
  • Register message compose text area (jQuery-like selector)
  • … or functions to encrypt and/or sign message contents (String) directly
  • Query the existence of a public key/certificate for a given recipient address
  • File selector/upload with transparent encryption
  • … or an API to encrypt binary data (from a FileReader object into a new FileReader object)

Regarding file upload for attachments to an encrypted message, some extra challenges exist in an asynchronous client-server web application: attachment encryption requires the final recipients to be known before the (encrypted) file is uploaded to the server. If the list of recipients or the encryption settings change, already uploaded attachments are void and need to be re-encrypted and uploaded again.

And presumably that’s just one example of possible pitfalls in this endeavor to add full-featured PGP encryption to webmail applications. Thus, dear developers of Mailvelope, p≡p, WebPG and Google, please take the above list as a source of inspiration for your further development. We’d gladly cooperate to add the missing pieces.


Timotheus Pokorra
Tue, 2014-10-07 19:02

On the Kolab IRC channel we have had some issues with apt-get reporting connection failures etc.

So I updated the blogpost from last year: http://www.pokorra.de/2013/10/downloading-from-obs-repo-via-php-proxy-file/

The port of the Kolab Systems OBS is now port 80, so there is not really a need for a proxy anymore. But perhaps it helps for debugging the apt-get commands.

I have extended the scripts to work for apt-get on Debian/Ubuntu as well; the original script was for yum only, it seems.

I have set up a small PHP script on a server somewhere on the Internet.

In my sample configuration, I use a Debian server with Lighttpd and PHP.

Install:

apt-get install lighttpd spawn-fcgi php5-curl php5-cgi

changes to /etc/lighttpd/lighttpd.conf:

server.modules = (
        [...]
        "mod_fastcgi",
        "mod_rewrite",
)
 
fastcgi.server = ( ".php" => ((
                     "bin-path" => "/usr/bin/php5-cgi",
                     "socket" => "/tmp/php.socket",
                     "max-procs" => 2,
                     "bin-environment" => (
                       "PHP_FCGI_CHILDREN" => "16",
                       "PHP_FCGI_MAX_REQUESTS" => "10000"
                     ),
                     "bin-copy-environment" => (
                       "PATH", "SHELL", "USER"
                     ),
                     "broken-scriptfilename" => "enable"
                 )))
 
url.rewrite-once = (
    "^/obs\.kolabsys\.com/index.php" => "$0",
    "^/obs\.kolabsys\.com/(.*)" => "/obs.kolabsys.com/index.php?page=$1"
)

and in /var/www/obs.kolabsys.com/index.php:

<?php 
 
$proxyurl="http://kolabproxy2.pokorra.de";
$obsurl="http://obs.kolabsys.com";
 
// it seems file_get_contents does not return the full page
function curl_get_file_contents($URL)
{
    $c = curl_init();
    curl_setopt($c, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($c, CURLOPT_URL, str_replace('&amp;', '&', $URL)); // decode HTML-encoded ampersands
    $contents = curl_exec($c);
    curl_close($c);
    if ($contents) return $contents;
    else return FALSE;
}
 
$page = $_GET['page'];
$filename = basename($page);
debug($page . "   ".$filename);
$content = curl_get_file_contents($obsurl."/".$page);
if (strpos($content, "Error 404") !== false) {
	header("HTTP/1.0 404 Not Found");
	die();
}
if (substr($page, -strlen("/")) === "/")
{
        # print directory listing
        $content = str_replace($obsurl."/", $proxyurl."/obs.kolabsys.com/", $content);
        $content = str_replace('href="/', 'href="'.$proxyurl.'/obs.kolabsys.com/', $content);
        echo $content;
}
else if (substr($filename, -strlen(".repo")) === ".repo")
{
        header("Content-Type: plain/text");
        echo str_replace($obsurl."/", $proxyurl."/obs.kolabsys.com/", $content);
}
else
{
#die($filename);
        header("Content-Type: application/octet-stream");
        header('Content-Disposition: attachment; filename="'.$filename.'"');
        header("Content-Transfer-Encoding: binary\n");
        echo curl_get_file_contents($obsurl."/".$page);
}
 
function debug($msg){
 if(is_writeable("/tmp/mylog.log")){
    $fh = fopen("/tmp/mylog.log",'a+');
    fputs($fh,"[Log] ".date("d.m.Y H:i:s")." $msg\n");
    fclose($fh);
  }
} 
?>

Now it is possible to download the repo files like this:

cd /etc/yum.repos.d/
wget http://kolabproxy2.pokorra.de/obs.kolabsys.com/repositories/Kolab:/3.3/CentOS_6/Kolab:3.3.repo
wget http://kolabproxy2.pokorra.de/obs.kolabsys.com/repositories/Kolab:/3.3:/Updates/CentOS_6/Kolab:3.3:Updates.repo
yum install kolab

For Ubuntu 14.04:

echo "deb http://kolabproxy2.pokorra.de/obs.kolabsys.com/repositories/Kolab:/3.3/Ubuntu_14.04/ ./" > /etc/apt/sources.list.d/kolab.list
echo "deb http://kolabproxy2.pokorra.de/obs.kolabsys.com/repositories/Kolab:/3.3:/Updates/Ubuntu_14.04/ ./" >> /etc/apt/sources.list.d/kolab.list
apt-get install kolab

This works for all other projects and distributions on obs.kolabsys.com too.


Tue, 2014-10-07 00:00

I have been using self-hosted Kolab Groupware every day for quite a while now.
Therefore the need arose to monitor process activity and system resources using the Monit utility.

Couple of words about monit

monit is a simple and robust utility for monitoring and automatic maintenance, which is supported on Linux, BSD and OS X.

Software installation

Debian Wheezy currently provides Monit 5.4.

To install it, execute the following command:

$ sudo apt-get install monit

The Monit daemon will be started at boot time. Alternatively, you can use the standard System V init script to manage the service.

Initial configuration

Configuration files are located under the /etc/monit/ directory. Default settings are stored in the /etc/monit/monitrc file, which I strongly suggest reading.
Custom configuration will be stored in the /etc/monit/conf.d/ directory.

I will override several important settings using a local.conf file.

Modified settings

  • Set email address to root@example.org
  • Slightly change default template
  • Define mail server as localhost
  • Set default interval to 120 seconds with initial delay of 180 seconds
  • Enable local web server to take advantage of the additional functionality
    (currently commented out)

$ sudo cat /etc/monit/conf.d/local.conf
# define e-mail recipient
set alert root@example.org

# define e-mail template
set mail-format {
from: monit@$HOST
subject: monit alert -- $EVENT $SERVICE
message: $EVENT Service $SERVICE
Date:        $DATE
Action:      $ACTION
Host:        $HOST
Description: $DESCRIPTION
}

# define server
set mailserver localhost

# define interval and initial delay
set daemon 120 with start delay 180

# set web server for local management
# set httpd port 2812 and use address localhost allow localhost

Please note that enabling the built-in web server in the way shown above will allow every local user to access and perform monit operations. Essentially, it should be disabled or secured using a username and password combination.
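
If you do enable it, HTTP basic authentication can be added with an additional allow rule, for example (admin:monit is a placeholder credential pair):

set httpd port 2812 and
    use address localhost
    allow localhost
    allow admin:monit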

Command-line operations

Verify configuration syntax

To check the configuration syntax, execute the following command.

$ sudo monit -t
Control file syntax OK

Start, Stop, Restart actions

Start all services and enable monitoring for them.

$ sudo monit start all

Start all services in resources group and enable monitoring for them.

$ sudo monit -g resources start

Start rootfs service and enable monitoring for it.

$ sudo monit start rootfs

You can initiate the stop action in the same way as the above, which will stop the service and disable monitoring, or just execute the restart action to stop and start the corresponding services.

Monitor and unmonitor actions

Monitor all services.

$ sudo monit monitor all

Monitor all services in resources group.

$ sudo monit -g resources monitor

Monitor rootfs service.

$ sudo monit monitor rootfs

Use the unmonitor action to disable monitoring for the corresponding services.

Status action

Print service status.

$ sudo monit status
The Monit daemon 5.6 uptime: 27d 0h 47m

System 'server'
  status                            Running
  monitoring status                 Monitored
  load average                      [0.26] [0.43] [0.48]
  cpu                               12.8%us 2.6%sy 0.0%wa
  memory usage                      2934772 kB [36.4%]
  swap usage                        2897376 kB [35.0%]
  data collected                    Mon, 29 Sep 2014 22:47:49

Filesystem 'rootfs'
  status                            Accessible
  monitoring status                 Monitored
  permission                        660
  uid                               0
  gid                               6
  filesystem flags                  0x1000
  block size                        4096 B
  blocks total                      17161862 [67038.5 MB]
  blocks free for non superuser     7327797 [28624.2 MB] [42.7%]
  blocks free total                 8205352 [32052.2 MB] [47.8%]
  inodes total                      4374528
  inodes free                       4151728 [94.9%]
  data collected                    Mon, 29 Sep 2014 22:47:49

Summary action

Print short service summary.

$ sudo monit summary
The Monit daemon 5.6 uptime: 27d 0h 48m

System 'server'                     Running
Filesystem 'rootfs'                 Accessible

Reload action

Reload configuration and reinitialize Monit daemon.

$ sudo monit reload

Quit action

Terminate Monit daemon.

$ sudo monit quit
monit daemon with pid [5248] killed

Monitor filesystems

The configuration syntax is very consistent and easy to grasp. I will start with a simple example and then proceed to slightly more complex ideas. Just remember to check one thing at a time.

I am using a VPS service due to the easy backup/restore process, so I have only one filesystem on the /dev/root device, which I will monitor as a service named rootfs.

The Monit daemon will generate an alert and send an email if space or inode usage on the rootfs filesystem [stored on the /dev/root device] exceeds 80 percent of the available capacity.

$ sudo cat /etc/monit/conf.d/filesystems.conf
check filesystem rootfs with path /dev/root
  group resources

  if space usage > 80% then alert
  if inode usage > 80% then alert

The above service is placed in resources group for easier management.

Monitor system resources

The following configuration will be stored as a service named server, as it describes resource usage for the whole mail server.

The Monit daemon will check memory usage; if it exceeds 80% of the available capacity for three subsequent cycles, it will send an alert email.
A recovery message will be sent after two subsequent successful cycles to limit the number of sent messages. The same rules apply to the remaining system resources.

The system I am using has four available processors, so an alert will be generated when the five-minute load average exceeds five.

$ sudo cat /etc/monit/conf.d/resources.conf
check system server
  group resources

  if memory usage > 80% for 3 cycles then alert
  else if succeeded for 2 cycles then alert

  if swap usage > 50% for 3 cycles then alert
  else if succeeded for 2 cycles then alert

  if cpu(wait) > 30% for 3 cycles then alert
  else if succeeded for 2 cycles then alert

  if cpu(system) > 60% for 3 cycles then alert
  else if succeeded for 2 cycles then alert

  if cpu(user) > 60% for 3 cycles then alert
  else if succeeded for 2 cycles then alert

  if loadavg(5min) > 5 then alert
  else if succeeded for 2 cycles then alert

The above service is placed in resources group for easier management.

Monitor system services

cron

cron is a daemon used to execute user-specified tasks at scheduled time.

The Monit daemon will use the specified pid file [/var/run/crond.pid] to monitor the [cron] service and restart it if it stops for any reason.
A configuration change will generate an alert message, while a permission or ownership issue will generate an alert and disable further monitoring.

GID of 102 translates to crontab group.

$ sudo cat /etc/monit/conf.d/cron.conf
check process cron with pidfile /var/run/crond.pid
  group system
  group scheduled-tasks

  start program = "/usr/sbin/service cron start"
  stop  program = "/usr/sbin/service cron stop"

  if 3 restarts within 5 cycles then timeout

  depends on cron_bin
  depends on cron_rc
  depends on cron_rc.d
  depends on cron_rc.daily
  depends on cron_rc.hourly
  depends on cron_rc.monthly
  depends on cron_rc.weekly
  depends on cron_rc.spool

  check file cron_bin with path /usr/sbin/cron
    group scheduled-tasks
    if failed checksum       then unmonitor
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check file cron_rc with path /etc/crontab
    group scheduled-tasks
    if failed checksum       then alert
    if failed permission 644 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check directory cron_rc.d with path /etc/cron.d
    group scheduled-tasks
    if changed timestamp     then alert
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check directory cron_rc.daily with path /etc/cron.daily
    group scheduled-tasks
    if changed timestamp     then alert
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check directory cron_rc.hourly with path /etc/cron.hourly
    group scheduled-tasks
    if changed timestamp     then alert
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check directory cron_rc.monthly with path /etc/cron.monthly
    group scheduled-tasks
    if changed timestamp     then alert
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check directory cron_rc.weekly with path /etc/cron.weekly
    group scheduled-tasks
    if changed timestamp     then alert
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check directory cron_rc.spool with path /var/spool/cron/crontabs
    group scheduled-tasks
    if changed timestamp      then alert
    if failed permission 1730 then unmonitor
    if failed uid root        then unmonitor
    if failed gid 102         then unmonitor

The above service is placed in system and scheduled-tasks groups for easier management.

rsyslogd

rsyslogd is a message logging service.

$ sudo cat /etc/monit/conf.d/rsyslogd.conf
check process rsyslog with pidfile /var/run/rsyslogd.pid
  group system
  group logging

  start program = "/usr/sbin/service rsyslog start"
  stop  program = "/usr/sbin/service rsyslog stop"

  if 3 restarts within 5 cycles then timeout

  depends on rsyslog_bin
  depends on rsyslog_rc
  depends on rsyslog_rc.d

  check file rsyslog_bin with path /usr/sbin/rsyslogd
    group logging
    if failed checksum       then unmonitor
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check file rsyslog_rc with path /etc/rsyslog.conf
    group logging
    if failed checksum       then alert
    if failed permission 644 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check directory rsyslog_rc.d with path /etc/rsyslog.d
    group logging
    if changed timestamp     then alert
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

The above service is placed in system and logging groups for easier management.

ntpd

Network Time Protocol daemon will be extended by the use of port monitoring.

$ sudo cat /etc/monit/conf.d/ntpd.conf
check process ntp with pidfile /var/run/ntpd.pid
  group system
  group time

  start program = "/usr/sbin/service ntp start"
  stop  program = "/usr/sbin/service ntp stop"

  if failed port 123 type udp then restart

  if 3 restarts within 5 cycles then timeout

  depends on ntp_bin
  depends on ntp_rc

  check file ntp_bin with path /usr/sbin/ntpd
    group time
    if failed checksum       then unmonitor
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check file ntp_rc with path /etc/ntp.conf
    group time
    if failed checksum       then alert
    if failed permission 644 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

The above service is placed in system and time groups for easier management.

OpenSSH

OpenSSH service will be extended by the use of match statement to test content of the configuration file. I assume it is self explanatory.

$ sudo cat /etc/monit/conf.d/openssh-server.conf
check process openssh with pidfile /var/run/sshd.pid
  group system
  group sshd

  start program = "/usr/sbin/service ssh start"
  stop  program = "/usr/sbin/service ssh stop"

  if failed port 22 with proto ssh then restart

  if 3 restarts within 5 cycles then timeout

  depends on openssh_bin
  depends on openssh_sftp_bin
  depends on openssh_rsa_key
  depends on openssh_dsa_key
  depends on openssh_rc

  check file openssh_bin with path /usr/sbin/sshd
    group sshd
    if failed checksum       then unmonitor
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check file openssh_sftp_bin with path /usr/lib/openssh/sftp-server
    group sshd
    if failed checksum       then unmonitor
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check file openssh_rsa_key with path /etc/ssh/ssh_host_rsa_key
    group sshd
    if failed checksum       then unmonitor
    if failed permission 600 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check file openssh_dsa_key with path /etc/ssh/ssh_host_dsa_key
    group sshd
    if failed checksum       then unmonitor
    if failed permission 600 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check file openssh_rc with path /etc/ssh/sshd_config
    group sshd
    if failed checksum       then alert
    if failed permission 644 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

    if not match "^PasswordAuthentication no" then alert
    if not match "^PubkeyAuthentication yes"  then alert
    if not match "^PermitRootLogin no"        then alert

The above service is placed in system and sshd groups for easier management.

Monitor Kolab services

MySQL

MySQL is an open-source database server used by a wide range of Kolab services.

UID of 106 translates to mysql user. GID of 110 translates to mysql group.

This is the first time I have used the unixsocket statement here.

$ sudo cat /etc/monit/conf.d/mysql.conf
check process mysql with pidfile /var/run/mysqld/mysqld.pid
  group kolab
  group database

  start program = "/usr/sbin/service mysql start"
  stop  program = "/usr/sbin/service mysql stop"

  if failed port 3306 protocol mysql then restart
  if failed unixsocket /var/run/mysqld/mysqld.sock protocol mysql then restart

  if 3 restarts within 5 cycles then timeout

  depends on mysql_bin
  depends on mysql_rc
  depends on mysql_sys_maint
  depends on mysql_data

  check file mysql_bin with path /usr/sbin/mysqld
    group database
    if failed checksum       then unmonitor
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check file mysql_rc with path /etc/mysql/my.cnf
    group database
    if failed checksum       then alert
    if failed permission 644 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check file mysql_sys_maint with path /etc/mysql/debian.cnf
    group database
    if failed checksum       then unmonitor
    if failed permission 600 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check directory mysql_data with path /var/lib/mysql
    group database
    if failed permission 700 then unmonitor
    if failed uid 106        then unmonitor
    if failed gid 110        then unmonitor

The above service is placed in kolab and database groups for easier management.

Apache

Apache is an open-source HTTP server used to serve user/admin web-interface.

Please note that I am checking the HTTPS port.

$ sudo cat /etc/monit/conf.d/apache.conf
check process apache with pidfile  /var/run/apache2.pid
  group kolab
  group web-server

  start program = "/usr/sbin/service apache2 start"
  stop  program = "/usr/sbin/service apache2 stop"

  if failed port 443 then restart

  if 3 restarts within 5 cycles then timeout

  depends on apache2_bin
  depends on apache2_rc
  depends on apache2_rc_mods
  depends on apache2_rc_sites

  check file apache2_bin with path /usr/sbin/apache2.prefork
    group web-server
    if failed checksum       then unmonitor
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check directory apache2_rc with path /etc/apache2
    group web-server
    if changed timestamp     then alert
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check directory apache2_rc_mods with path /etc/apache2/mods-enabled
    group web-server
    if changed timestamp     then alert
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check directory apache2_rc_sites with path /etc/apache2/sites-enabled
    group web-server
    if changed timestamp     then alert
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

The above service is placed in kolab and web-server groups for easier management.

Kolab daemon

This is the heart of the whole Kolab unified communication and collaboration system as it is responsible for data synchronization between different services.

UID of 413 translates to kolab-n user. GID of 412 translates to kolab group.

$ sudo cat /etc/monit/conf.d/kolab-server.conf
check process kolab-server with pidfile /var/run/kolabd/kolabd.pid
  group kolab
  group kolab-daemon

  start program = "/usr/sbin/service kolab-server start"
  stop  program = "/usr/sbin/service kolab-server stop"

  if 3 restarts within 5 cycles then timeout

  depends on kolab-daemon_bin
  depends on kolab-daemon_rc

  check file kolab-daemon_bin with path /usr/sbin/kolabd
    group kolab-daemon
    if failed checksum       then unmonitor
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check file kolab-daemon_rc with path /etc/kolab/kolab.conf
    group kolab-daemon
    if failed checksum       then alert
    if failed permission 640 then unmonitor
    if failed uid 413        then unmonitor
    if failed gid 412        then unmonitor

The above service is placed in kolab and kolab-daemon groups for easier management.

Kolab saslauthd

Kolab saslauthd is the SASL authentication daemon for multi-domain Kolab deployments.

$ sudo cat /etc/monit/conf.d/kolab-saslauthd.conf
check process kolab-saslauthd with pidfile /var/run/kolab-saslauthd/kolab-saslauthd.pid
  group kolab
  group kolab-saslauthd

  start program = "/usr/sbin/service kolab-saslauthd start"
  stop  program = "/usr/sbin/service kolab-saslauthd stop"

  if 3 restarts within 5 cycles then timeout

  depends on kolab-saslauthd_bin

  check file kolab-saslauthd_bin with path /usr/sbin/kolab-saslauthd
    group kolab-saslauthd
    if failed checksum       then unmonitor
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

The above service is placed in kolab and kolab-saslauthd groups for easier management.

It can be tempting to monitor the /var/run/saslauthd/mux socket, but just leave it alone for now.

Wallace

Wallace is a content-filtering daemon.

$ sudo cat /etc/monit/conf.d/wallace.conf
check process wallace with pidfile /var/run/wallaced/wallaced.pid
  group kolab
  group wallace

  start program = "/usr/sbin/service wallace start"
  stop  program = "/usr/sbin/service wallace stop"

  #if failed port 10026 then restart

  if 3 restarts within 5 cycles then timeout

  depends on wallace_bin

  check file wallace_bin with path /usr/sbin/wallaced
    group wallace
    if failed checksum       then unmonitor
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

The above service is placed in kolab and wallace groups for easier management.

ClamAV

The ClamAV daemon is open-source, cross-platform antivirus software.

$ sudo cat /etc/monit/conf.d/clamav.conf
check process clamav with pidfile /var/run/clamav/clamd.pid
  group system
  group antivirus

  start program = "/usr/sbin/service clamav-daemon start"
  stop  program = "/usr/sbin/service clamav-daemon stop"

  if 3 restarts within 5 cycles then timeout

  #if failed unixsocket /var/run/clamav/clamd.ctl type udp then alert

  depends on clamav_bin
  depends on clamav_rc

  check file clamav_bin with path /usr/sbin/clamd
    group antivirus
    if failed checksum       then unmonitor
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check file clamav_rc with path /etc/clamav/clamd.conf
    group antivirus
    if failed permission 644 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

The above service is placed in system and antivirus groups for easier management.

Freshclam

Freshclam is used to periodically update the ClamAV virus databases.

$ sudo cat /etc/monit/conf.d/freshclam.conf
check process freshclam with pidfile /var/run/clamav/freshclam.pid
  group system
  group antivirus-updater

  start program = "/usr/sbin/service clamav-freshclam start"
  stop  program = "/usr/sbin/service clamav-freshclam stop"

  if 3 restarts within 5 cycles then timeout

  depends on freshclam_bin
  depends on freshclam_rc

  check file freshclam_bin with path /usr/bin/freshclam
    group antivirus-updater
    if failed checksum       then unmonitor
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check file freshclam_rc with path /etc/clamav/freshclam.conf
    group antivirus-updater
    if failed permission 444 then unmonitor
    if failed uid 110        then unmonitor
    if failed gid 4          then unmonitor

The above service is placed in system and antivirus-updater groups for easier management.

amavisd-new

Amavis is a high-performance interface between the Postfix mail server and content-filtering services: SpamAssassin as a spam classifier and ClamAV as antivirus protection.

$ sudo cat /etc/monit/conf.d/amavisd-new.conf
check process amavisd-new with pidfile /var/run/amavis/amavisd.pid
  group kolab
  group content-filter

  start program = "/usr/sbin/service amavis start"
  stop  program = "/usr/sbin/service amavis stop"

  if 3 restarts within 5 cycles then timeout

  #if failed port 10024 type tcp then restart
  #if failed unixsocket /var/lib/amavis/amavisd.sock type udp then alert

  depends on amavisd-new_bin
  depends on amavisd-new_rc

  check file amavisd-new_bin with path /usr/sbin/amavisd-new
    group content-filter
    if failed checksum       then unmonitor
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check directory amavisd-new_rc with path /etc/amavis/
    group content-filter
    if changed timestamp     then alert
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

The above service is placed in kolab and content-filter groups for easier management.

The main Directory Server daemon

The main Directory Server daemon is the 389 LDAP Directory Server.

$ sudo cat /etc/monit/conf.d/dirsrv.conf
check process dirsrv with pidfile  /var/run/dirsrv/slapd-xmail.stats
  group kolab
  group dirsrv

  start program = "/usr/sbin/service dirsrv start"
  stop  program = "/usr/sbin/service dirsrv stop"

  if 3 restarts within 5 cycles then timeout

  if failed port 389 type tcp then restart

  depends on dirsrv_bin
  depends on dirsrv_rc

  check file dirsrv_bin with path /usr/sbin/ns-slapd
    group dirsrv
    if failed checksum       then unmonitor
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check directory dirsrv_rc with path /etc/dirsrv/
    group dirsrv
    if changed timestamp     then alert

The above service is placed in kolab and dirsrv groups for easier management.

SpamAssassin

SpamAssassin is a content filter used for spam filtering.

$ sudo cat /etc/monit/conf.d/spamd.conf
check process spamd with pidfile /var/run/spamd.pid
  group system
  group spamd

  start program = "/usr/sbin/service spamassassin start"
  stop  program = "/usr/sbin/service spamassassin stop"

  if 3 restarts within 5 cycles then timeout

  #if failed port 783 type tcp then restart

  depends on spamd_bin
  depends on spamd_rc

  check file spamd_bin with path /usr/sbin/spamd
    group spamd
    if failed checksum       then unmonitor
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check directory spamd_rc with path /etc/spamassassin/
    group spamd
    if changed timestamp     then alert
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

The above service is placed in system and spamd groups for easier management.

Cyrus IMAP/POP3 daemons

The cyrus-imapd daemon is responsible for IMAP/POP3 communication.

$ sudo cat /etc/monit/conf.d/cyrus-imapd.conf
check process cyrus-imapd with pidfile  /var/run/cyrus-master.pid
  group kolab
  group cyrus-imapd

  start program = "/usr/sbin/service cyrus-imapd start"
  stop  program = "/usr/sbin/service cyrus-imapd stop"

  if 3 restarts within 5 cycles then timeout

  if failed port 143 type tcp then restart
  if failed port 4190 type tcp then restart
  if failed port 993 type tcp then restart

  depends on cyrus-imapd_bin
  depends on cyrus-imapd_rc

  check file cyrus-imapd_bin with path /usr/lib/cyrus-imapd/cyrus-master
    group cyrus-imapd
    if failed checksum       then unmonitor
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check file cyrus-imapd_rc with path /etc/cyrus.conf
    group cyrus-imapd
    if failed checksum       then alert
    if failed permission 644 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

The above service is placed in kolab and cyrus-imapd groups for easier management.

Postfix

Postfix is an open-source mail transfer agent used to route and deliver electronic mail.

$ sudo cat /etc/monit/conf.d/postfix.conf
check process postfix with pidfile /var/spool/postfix/pid/master.pid
  group kolab
  group mta

  start program = "/usr/sbin/service postfix start"
  stop program = "/usr/sbin/service postfix stop"

  if 3 restarts within 5 cycles then timeout

  if failed port 25 type tcp then restart
  #if failed port 10025 type tcp then restart
  #if failed port 10027 type tcp then restart
  if failed port 587 type tcp then restart

  depends on postfix_bin
  depends on postfix_rc

  check file postfix_bin with path /usr/lib/postfix/master
    group mta
    if failed checksum       then unmonitor
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check directory postfix_rc with path /etc/postfix/
    group mta
    if changed timestamp     then alert
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

The above service is placed in kolab and mta groups for easier management.

Ending notes

This blog post is definitely too long, so I will just mention that a similar configuration can be used to monitor other integrated solutions like ISPConfig, or custom specialized setups.

In my opinion, Monit is a great utility which simplifies system and service monitoring. Additionally, it provides interesting proactive features, like service restarts or arbitrary program execution on selected tests.
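
For example, the exec action can run an arbitrary program when a test matches. A minimal sketch, where /usr/local/bin/cleanup-tmp.sh is a hypothetical script of yours:

# hypothetical example: run a cleanup script when disk usage becomes critical
check filesystem rootfs with path /dev/root
  if space usage > 90% then exec "/usr/local/bin/cleanup-tmp.sh"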

Everything is described in the manual page.

$ man monit

mollekopf's picture
Fri, 2014-10-03 01:21

I have been working on better ways to write asynchronous code. In this post I’m going to analyze one of our current tools, KJob, looking at how it helps us write asynchronous code and what is missing. I’m then going to present my prototype solution to address these problems.

KJob

In KDE we have the KJob class to wrap asynchronous operations. KJob gives us a framework for progress and error reporting and a uniform start method, and by subclassing it we can easily write our own reusable asynchronous operations. Such an asynchronous operation typically takes a couple of arguments and returns a result.

A KJob, in its simplest form, is the asynchronous equivalent of a function call:

int doSomething(int argument) {
    return getNumber(argument);
}
struct DoSomething : public KJob {
    DoSomething(int argument): mArgument(argument){}

    void start() {
        KJob *job = getNumberAsync(mArgument);
        connect(job, SIGNAL(result(KJob*)), this, SLOT(onJobDone(KJob*)));
        job->start();
    }

    int mResult;
    int mArgument;

private slots:
    void onJobDone(KJob *job) {
        mResult = job->result();
        emitResult();
    }
};

What you’ll notice immediately is that this involves a lot of boilerplate code. It also introduces a lot of complexity into a seemingly trivial task. This is partially because we have to create a class when we actually wanted a function, and partially because we have to use class members to replace variables on the stack, which are not available during an asynchronous operation.

So while KJob gives us a tool to wrap asynchronous operations in a way that makes them reusable, it comes at the cost of quite a bit of boilerplate code. It also means that what can be written synchronously in a simple function requires a class when the same code is written asynchronously.

Inversion of Control

A typical operation is of course slightly more complex than doSomething, and often consists of several (asynchronous) operations itself.

What in imperative code looks like this:

int doSomethingComplex(int argument) {
    return operation2(operation1(argument));
}

…results in an asynchronous operation that is scattered over multiple result handlers somewhat like this:

...
void start() {
    KJob *job = operation1(mArgument);
    connect(job, SIGNAL(result(KJob*)), this, SLOT(onOperation1Done(KJob*)));
    job->start();
}

void onOperation1Done(KJob *operation1Job) {
    KJob *job = operation2(operation1Job->result());
    connect(job, SIGNAL(result(KJob*)), this, SLOT(onOperation2Done(KJob*)));
    job->start();
}

void onOperation2Done(KJob *operation2Job) {
    mResult = operation2Job->result();
    emitResult();
}
...

We are forced to split the code over several functions due to the inversion of control introduced by handler-based asynchronous programming. Unfortunately these additional functions (the handlers), which we are now forced to use, do not help the program structure in any way. This manifests itself in the rather useless function names that typically follow a pattern such as on”$Operation”Done() or similar. Further, because the code is scattered over functions, values that are available on the stack in a synchronous function have to be stored explicitly as class members, so they are available in the handler where they are required for a further step.

The traditional way to make code easy to comprehend is to split it up into functions that are then called by a higher-level function. This kind of function composition is no longer possible with asynchronous programs using our current tools. All we can do is chain handler after handler. Due to the lack of this higher-level function that composes the functionality, a reader is also forced to read every single line of the code, instead of simply skimming the function names and only drilling deeper if more detailed information about the inner workings is required.
Since we are no longer able to structure the code in a useful way using functions, only classes, and in our case KJobs, are left to structure the code. However, creating subjobs is a lot of work when all you need is a function, and while it helps the structure, it scatters the code even more, making it potentially harder to read and understand. Due to this we also often end up with large and complex job classes.

Last but not least, we lose all available control structures to the inversion of control. If you write asynchronous code you don’t have the if’s, for’s and while’s available that are fundamental to writing code. Well, obviously they are still there, but you can’t use them as usual because you can’t plug a complete asynchronous operation inside an if{}-block. The best you can do is initiate the operation inside the imperative control structures and deal with the results later on in handlers. Because we need control structures to build useful programs, these are usually emulated by building complex state machines where each function depends on the current class state. A typical (anti)pattern of that kind is a for loop creating jobs, with a decreasing counter in the handler to check if all jobs have been executed, as sketched below. These state machines greatly increase the complexity of the code, are highly error-prone, and make larger classes incomprehensible without drawing complex state diagrams (or simply staring at the screen long enough while tearing your hair out).
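
To make that concrete, here is a minimal sketch of such a counter-based pseudo-loop; processItemAsync() stands in for any hypothetical asynchronous operation:

...
void startAll() {
    mPending = mItems.size();
    for (const auto &item : mItems) {
        KJob *job = processItemAsync(item);
        connect(job, SIGNAL(result(KJob*)), this, SLOT(onItemDone(KJob*)));
        job->start();
    }
}

void onItemDone(KJob*) {
    //the class member mPending emulates the loop counter
    if (--mPending == 0) {
        emitResult();
    }
}
...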

Oh, and before I forget, of course we also no longer get any useful backtraces from gdb, as pretty much every backtrace comes straight from the eventloop and we have no clue what was happening before.

As a summary, inversion of control causes:

  • code is scattered over functions that are not helpful to the structure
  • composing functions is no longer possible, since what would normally be written in a function is written as a class.
  • control structures are not usable; a state machine is required to emulate them.
  • backtraces become mostly useless

As an analogy, your typical asynchronous class is the functional equivalent of a single synchronous function (often over 1000 lines of code!) that uses goto’s and some local variables to build control structures. I think it’s obvious that this is a pretty bad way to write code, to say the least.

JobComposer

Fortunately we received a new tool with C++11: lambda functions.
Lambda functions allow us to write functions inline with minimal syntactical overhead.
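
A minimal illustration of the syntax (the names are made up for this example):

int offset = 40;
//the lambda captures offset by value and can be called like a regular function
auto addOffset = [offset](int value) { return value + offset; };
int result = addOffset(2); //42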

Armed with this I set out to find a better way to write asynchronous code.

A first obvious solution is to simply write the result handler as a lambda function, which would allow us to write code like this:

make_async(operation1(), [] (KJob *job) {
    //Do something after operation1()
    make_async(operation2(job->result()), [] (KJob *job) {
        //Do something after operation2()
        ...
    });
});

It’s a simple and concise solution; however, you can’t really build reusable building blocks (like functions) with it. You’ll get one nested tree of lambdas that depend on each other by accessing the results of the previous jobs. What makes this solution non-composable is that the lambda function which we pass to make_async starts the asynchronous task, but also extracts results from the previous job. Therefore you couldn’t, for instance, return an async task containing operation2 from a function (because in the same line we extract the result of the previous job).

What we require instead is a way of chaining asynchronous operations together, while keeping the glue code separated from the reusable bits.

JobComposer is my proof of concept to help with this:

class JobComposer : public KJob
{
    Q_OBJECT
public:
    //KJob start function
    void start();

    //This adds a new continuation to the queue
    void add(const std::function<void(JobComposer&, KJob*)> &jobContinuation);

    //This starts the job, and connects to the result signal. Call from continuation.
    void run(KJob*);

    //This starts the job, and connects to the result signal. Call from continuation.
    //Additionally an error case continuation can be provided that is called in case of error, and that can be used to determine whether further continuations should be executed or not.
    void run(KJob*, const std::function<bool(JobComposer&, KJob*)> &errorHandler);

    //...
};

The basic idea is to wrap each step using a lambda-function to issue the asynchronous operation. Each such continuation (the lambda function) receives a pointer to the previous job to extract results.

Here’s an example how this could be used:

auto task = new JobComposer;
task->add([](JobComposer &t, KJob*){
    KJob *op1Job = operation1();
    t.run(op1Job, [](JobComposer &t, KJob *job) {
        kWarning() << "An error occurred: " << job->errorString();
    });
});
task->add([](JobComposer &t, KJob *job){
    KJob *op2Job = operation2(static_cast<Operation1*>(job)->result());
    t.run(op2Job, [](JobComposer &t, KJob *job) {
        kWarning() << "An error occurred: " << job->errorString();
    });
});
task->add([](JobComposer &t, KJob *job){
    kDebug() << "Result: " << static_cast<Operation2*>(job)->result();
});
task->start();

What you see here is the equivalent of:

int tmp = operation1();
int res = operation2(tmp);
kDebug() << res;

There are several important advantages to using this over writing traditional asynchronous code using only KJob:

  • The code above, which would normally be spread over several functions, can be written within a single function.
  • Since we can write all code within a single function we can compose functions again. The JobComposer above could be returned from another function and integrated into another JobComposer.
  • Values that are required for a certain step can either be extracted from the previous job, or simply captured in the lambda functions (no more passing of values as members).
  • You only have to read the start() function of a job that is written this way to get an idea of what is going on, not the complete class.
  • A “backtrace” functionality could be built into JobComposer that would allow us to get useful information about the state of the program even though we’re in the eventloop.

This is of course only a rough prototype, and I’m sure we can craft something better, but at least in my experiments it proved to work very nicely.
What I think would be useful as well are a couple of helper jobs that replace the missing control structures, such as a ForeachJob which triggers a continuation for each result, or a job to execute tasks in parallel (instead of serially, as JobComposer does).

As a little showcase I rewrote a job of the imap resource.
You’ll see a bit of function composition, a ParallelCompositeJob that executes jobs in parallel, and you’ll notice that only relevant functions are left and all class members are gone. I find the result a lot better than the original, and the refactoring was trivial and quick.

I’m quite certain that if we build these tools, we can vastly improve our asynchronous code, making it easier to write, read, and debug.
And I think it’s past time that we built proper tools.


roundcube's picture
Mon, 2014-09-29 02:00

We’re proud to announce the next service release to the stable version 1.0.
It contains some bug fixes and improvements we considered important for the
long term support branch of Roundcube.

It’s considered stable and we recommend updating all production installations
of Roundcube with this version. Download it from roundcube.net/download,
see the full changelog here.

Please do backup before updating!


tobru's picture
Sat, 2014-09-27 00:00


“CASino is an easy to use Single Sign On (SSO) web application written in Ruby.”

It supports different authentication backends, one of which is LDAP. It works very well with the
LDAP backend of Kolab. Just put the following configuration snippet into
your config/cas.yml:

production:
  authenticators:
    kolab:
      authenticator: 'LDAP'
      options:
        host: 'localhost'
        port: 389
        base: 'ou=People,dc=mydomain,dc=tld'
        username_attribute: 'uid'
        admin_user: 'uid=kolab-service,ou=Special Users,dc=mydomain,dc=tld'
        admin_password: 'mykolabservicepassword'
        extra_attributes:
          email: 'mail'
          fullname: 'uid'

You are now able to sign in using your Kolab uid and manage SSO users with the nice
Kolab Webadmin LDAP frontend.
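
As a sanity check, you can verify the admin bind and the uid lookup from the shell with OpenLDAP's ldapsearch; a sketch, where jdoe is a hypothetical user:

ldapsearch -x -h localhost -p 389 \
  -D 'uid=kolab-service,ou=Special Users,dc=mydomain,dc=tld' \
  -w 'mykolabservicepassword' \
  -b 'ou=People,dc=mydomain,dc=tld' '(uid=jdoe)' mail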

CASino with Kolab LDAP backend was originally published by Tobias Brunner at tobrunet.ch Techblog on September 27, 2014.


Timotheus Pokorra's picture
Wed, 2014-09-17 12:33

This describes how to install a docker image of Kolab.

Please note: this is not meant to be for production use. The main purpose is to provide an easy way for demonstration of features and for product validation.

This installation has not been tested a lot, and could still use some fine tuning. This is just a demonstration of what could be done with Docker for Kolab.

Preparing for Docker
I am using a Jiffybox provided by DomainFactory for downloading a Docker container for Kolab 3.3 running on CentOS 6.

I have installed Ubuntu 12.04 LTS on a Jiffybox.
I am therefore following the Docker installation instructions for Ubuntu:

Install a kernel that is required by Docker:

sudo apt-get update
sudo apt-get install linux-image-generic-lts-raring linux-headers-generic-lts-raring

After that, in the admin website of JiffyBox, select the custom kernel Bootmanager 64 Bit (pvgrub64); see also the German JiffyBox FAQ. Then restart your JiffyBox.

After the restart, uname -a should show something like:

Linux j89610.servers.jiffybox.net 3.8.0-37-generic #53~precise1-Ubuntu SMP Wed Feb 19 21:37:54 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

Now install docker:

sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9
 
sudo sh -c "echo deb http://get.docker.io/ubuntu docker main > /etc/apt/sources.list.d/docker.list"
 
sudo apt-get update
sudo apt-get install lxc-docker

Install container
The image for the container is available here:
https://index.docker.io/u/tpokorra/kolab33_centos6/
If you want to know how this image was created, read my other blog post http://www.pokorra.de/2014/09/building-a-docker-container-for-kolab-3-3-on-jiffybox/.

To install this image, you need to type in this command:

docker pull tpokorra/kolab33_centos6

You can create a container from this image and run it:

MYAPP=$(sudo docker run --name centos6_kolab33 -P -h kolab33.test.example.org -d -t -i tpokorra/kolab33_centos6)

You can see all your containers:

docker ps -a

You now have to attach to the container, and inside the container start the services:

docker attach $MYAPP
  /root/start.sh

It should be possible to start the services automatically at container startup, but I did not get it to work with CMD or ENTRYPOINT.
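
For reference, the usual Dockerfile approach would be a line like the one below; consider it an untested sketch, since I could not get it to work with this image:

# hypothetical Dockerfile ending: run the start script as the container's main process
CMD ["/root/start.sh"]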

To stop the container, type exit on the container’s console, or run from outside:

docker stop $MYAPP

To delete the container:

docker rm $MYAPP

You can reach the Kolab Webadmin on this URL:
https://localhost/kolab-webadmin. Login with user: cn=Directory Manager, password: test

The Webmail interface is available here:
https://localhost/roundcubemail.
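
Note that the container was started with -P, which publishes the exposed ports on random host ports. If the URLs above are not reachable directly, you can look up the actual mapping, for example for the HTTPS port:

docker port centos6_kolab33 443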


Timotheus Pokorra's picture
Wed, 2014-09-17 12:31

This article is an update of the previous post that built a Docker container for Kolab 3.1: Building a Docker container for Kolab on Jiffybox (March 2014)

Preparation
I am using a Jiffybox provided by DomainFactory for building a Docker container for Kolab 3.3 running on CentOS 6.

I have installed Ubuntu 12.04 LTS on a Jiffybox.
I am therefore following the Docker installation instructions for Ubuntu:

Install a kernel that is required by Docker:

sudo apt-get update
sudo apt-get install linux-image-generic-lts-raring linux-headers-generic-lts-raring

After that, in the admin website of JiffyBox, select the custom kernel Bootmanager 64 Bit (pvgrub64); see also the German JiffyBox FAQ. Then restart your JiffyBox.

After the restart, uname -a should show something like:

Linux j89610.servers.jiffybox.net 3.8.0-37-generic #53~precise1-Ubuntu SMP Wed Feb 19 21:37:54 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

Now install docker:

sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9
 
sudo sh -c "echo deb http://get.docker.io/ubuntu docker main > /etc/apt/sources.list.d/docker.list"
 
sudo apt-get update
sudo apt-get install lxc-docker

Create a Docker image
I realised that if I installed Kolab in one go, the image would become too big to upload to https://index.docker.io.
Therefore I have created a Dockerfile which has several steps for downloading and installing the various packages. For a detailed description of a Dockerfile, see the Dockerfile Reference.

My Dockerfile is available on Github: https://github.com/TBits/KolabScripts/blob/Kolab3.3/kolab/Dockerfile. You should store it with filename Dockerfile in your current directory.

This command will build a container following the instructions from the Dockerfile in the current directory. When the instructions have been successful, an image with the name tpokorra/kolab33_centos6 will be created and the intermediate container will be deleted:

sudo docker build -t tpokorra/kolab33_centos6 .

You can see all your local images with this command:

sudo docker images

To finish the container, we need to run setup-kolab; this time we define a hostname as a parameter:

MYAPP=$(sudo docker run --name centos6_kolab33  --privileged=true -h kolab33.test.example.org -d -t -i tpokorra/kolab33_centos6 /bin/bash)
docker attach $MYAPP
# run inside the container:
  echo `hostname -f` > /proc/sys/kernel/hostname
  echo 2 | setup-kolab --default --timezone=Europe/Brussels --directory-manager-pwd=test
  ./initHttpTunnel.sh
  ./initSSL.sh test.example.org
  /root/stop.sh
  exit

Typing exit inside the container will stop the container.

Now you commit this last manual change:

docker commit $MYAPP tpokorra/kolab33_centos6
# delete the container
docker rm $MYAPP

You can push this image to https://index.docker.io:

#create a new account, or login with existing account:
sudo docker login
sudo docker push tpokorra/kolab33_centos6

You can now see the image available here: https://index.docker.io/u/tpokorra/kolab33_centos6/

See this post Installing Demo Version of Kolab 3.3 with Docker about how to install this image on the same or a different machine, for demo and validation purposes.

Current status: There are still some things that are not working well, and I have not tested everything.
But this should be a good starting point for other people as well, to help with a good demo installation of Kolab on Docker.


roundcube's picture
Fri, 2014-09-12 12:40

Roundcube indeed became a huge success story with tens of thousands of installations worldwide. Something I never expected back in 2005 when I started the project as a fresh alternative to the well-established but already aged free webmail packages like SquirrelMail or Horde IMP. And now, some 9 years later, we find ourselves in a similar position to the ones we previously wanted to replace. Although we managed to adapt the Roundcube codebase to ongoing technological innovations, the core architecture is still ruled by the concepts which seemed right back when we started. And we’re talking about building a web app for IE 5 and Netscape 6, when browsers weren’t as capable and performant as they are today, when the term AJAX was not yet known, and when we didn’t have nifty libraries such as jQuery or Backbone.js at hand.

It happens more and more often that, when discussing the implementation of new features for Roundcube, we find ourselves saying “Oh man, that’s going to be an expensive endeavor to squeeze this into our current architecture! If we could just…”. This doesn’t mean that the entire codebase is crap, not at all! But sometimes you just silently wish you could give the core a fresh touch which respects the increased requirements and expectations. And that’s the challenge of every software product that has been around for a while and is still intensively developed.

When looking around, I see inspiring new webmail projects slowly emerging which don’t carry the legacy of a software product designed almost a decade ago. I’m truly happy about this development and I appreciate the efforts of honest coders to create the next generation of free webmail software. On the other hand it also makes me a bit jealous to see others starting from scratch and building fast and responsive webmail clients like Mailpile or RainLoop which make Roundcube look like the old dinosaur. Although they’re not yet as feature rich as Roundcube, the core concepts are very convincing and perfectly fit the technological environment we find ourselves in today.

So what if we could start over and build Roundcube from scratch?

Here are some ideas for how I could imagine building a brand new webmail app with today’s tools and 9 years of experience in developing web(mail) applications:

  • Do more stuff client side: the entire rendering of the UI should be done in Javascript and no more PHP composing HTML pages loaded in iframes.
  • The server should only become a thin wrapper for talking to backend services like IMAP, LDAP, etc.
  • Maybe even use a common API for client-server communication like the one suggested by Inbox.
  • Design a proper data model which is used by both the server and the client.
  • Separate the data model from the view and use Backbone.js for rendering.
  • Widget-based UI composition using simple HTML structures with small template snippets.
  • Keep mobile, touch and hi-res devices in mind when building the UI.
  • Do skinning solely through CSS and maybe allow single template snippets to be overridden.
  • More abstraction for storage and caching layers to allow alternative backends like MongoDB or Redis.
  • Separate user auth from IMAP. This would allow other sources or accounts to be pulled into one session.
  • Use more 3rd party libraries like require.js, moment.js, jQuery or PHPMailer, Monolog or Doctrine ORM.
  • Contribute to the 3rd party modules rather than re-inventing the wheel.

While this may now sound like buzzword bingo from a web developers’ conference (and the list is certainly not complete), I do believe in these very useful and well-developed modules that are out there at our service. This is what free software development is all about: share, use and contribute.

But finally, not every part of our current Roundcube codebase is badly outdated and needs to be replaced. I’d definitely keep our current IMAP, LDAP and HTML sanitizing libraries as well as the plugin system, which turned out to be a stable and important component and a major contributor to Roundcube’s success.

And what keeps us from re-building Roundcube from the ground up? Primarily time and the fear of jeopardizing the Roundcube microcosmos with a somewhat incompatible new version that would require every single plugin to be re-written.

But give us funding for 6 months of intense work and let’s see what happens…


Thu, 2014-09-11 15:06

Some time ago I blogged about fighting spam with amavis for the Kolab community. Now the story continues with the Roundcube integration with amavis.

As mentioned earlier, SpamAssassin is able to store recipient-based preferences in a MySQL table, given some settings in its local.cf (see the SpamAssassin wiki):

# Spamassassin for Roundcubemail
# http://www.tehinterweb.co.uk/roundcube/#pisauserprefs
user_scores_dsn DBI:mysql:ROUNDCUBEMAILDBNAME:localhost:3306
user_scores_sql_password ROUNDCUBEMAILPASSWORD
user_scores_sql_username ROUNDCUBEMAILDBUSERNAME
user_scores_sql_custom_query SELECT preference, value FROM _TABLE_ WHERE username = _USERNAME_ OR username = '$GLOBAL' OR username = CONCAT('%',_DOMAIN_) ORDER BY username ASC

However, accessing this with amavis is a really big problem for many users. Amavis has its own user-based configuration policies, but email plugins such as the Roundcube plugin sauserprefs often only use SpamAssassin and not amavis. Originally, SA was only called once per message by amavis, and therefore recipient-based preferences were not possible at all. This has changed. Now you can use the options @sa_userconf_maps and @sa_username_maps to perform such lookups. Unfortunately these options are still poorly documented. We use them anyway.

The values in @sa_userconf_maps define where amavis has to look for the user preferences. I use MySQL lookups for all recipient addresses.

# use userpref SQL connection from SA local.cf for ALL recipients
@sa_userconf_maps = ({
  '.' => 'sql:'
});

The variable @sa_username_maps tells amavis what to pass to SpamAssassin as _USERNAME_ (see above) for the MySQL lookup. By default the amavis system user is used. In my setup with Kolab and sauserprefs I use a regexp which is supposed to match the recipient email address:

# use recipient email address as _USERNAME_ in userpref mySQL table (_TABLE_)
@sa_username_maps = new_RE (
  [ qr'^([^@]+@.*)'i => '${1}' ]
);

With these additional bits sauserprefs should work. However, it seems to me that the string “*** Spam ***”, which should be added to the subject, does not work (maybe it does in the most recent version). The thresholds do work, though, but better check everything carefully.
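
One way to test the thresholds is to send yourself the GTUBE pattern, which SpamAssassin always scores far above any sane threshold, and check how amavis tags the message; user@mydomain.tld is a placeholder:

echo "XJS*C4JDBQADN1.NSBN3*2IDNEN*GTUBE-STANDARD-ANTI-UBE-TEST-EMAIL*C.34X" | \
  mail -s "GTUBE test" user@mydomain.tld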

Did you succeed? Comments are appreciated!

Filed under: Technik Tagged: amavis, Kolab, Roundcubemail, Spamassassin


Andreas Cordes's picture
Thu, 2014-09-04 21:41

Hi,

now I finished compiling all the +Kolab.org packages for the +Raspberry Pi . Just a short note that you can update your groupware on your Pi to the most recent version of +Kolab.org .

Greetz