Planet

Fri, 2014-11-28 18:00

ProFTPD is a versatile FTP server. I recently integrated it into my Kolab 3.3 server environment, so that user access can be easily organized through the standard kolab-webadmin. The design looks as follows:

Kolab users are able to log in to ProFTPD, but every user gets jailed in his own separate (physical) home directory. Depending on his group memberships, additional shared folders can be displayed and accessed within this home directory.

You will need ProFTPD with support for LDAP and virtual root environments. In Debian and Ubuntu, this is achieved via module packages:

  • proftpd-mod-ldap, proftpd-mod-vroot

On other platforms you may need to compile your own proftpd.

Via kolab-webadmin I created a new organizational unit FTPGroups within the parent unit Groups. Within this unit, you can now add groups of type (Pure) POSIX Group. These groups are later used to restrict or permit access to certain directories, or to apply other custom settings per group, using the IfGroup directive of ProFTPD.

Note that you should stick to sub-units of ou=Groups here, so that the unit is recognized by kolab-webadmin. The LDAP record of such a group may look like this:

dn: cn=ftp_test_group,ou=FTPGroups,ou=Groups,dc=domain,dc=com
cn: ftp_test_group
gidnumber: 1234
objectclass: top
objectclass: groupofuniquenames
objectclass: posixgroup
uniquemember: uid=testuser,ou=People,dc=domain,dc=com

To make sure that our Kolab users and the groups within the sub-unit are mapped correctly to their equivalents in the FTP server, we have to edit the directives for mod_ldap. Just start with my working sample configuration ldap.conf on pastebin, which should be included in your main ProFTPD configuration.

Because we use the standard Kolab LDAP schema, the users possess neither a user ID nor a group ID. Therefore, ProFTPD will fall back to the LDAPDefaultUID (example: the ID of “nobody”) and LDAPDefaultGID (example: 10000). On the system side, a user with this combination of UID and GID must be allowed to read from (and maybe write to) your physical FTP directory tree. You can either add the user or group to your system and set the permissions accordingly, or use access control lists (ACLs). Since I use the ACL approach, the group with ID 10000 does not have to exist in /etc/group. You may install acl by executing

~# apt-get install acl

and mount your FTP storage device with the acl option (to make it persistent, add it to /etc/fstab) by executing

~# mount -o remount,defaults,noexec,rw,acl /dev/sda1 /var/ftp
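To make the acl mount option persistent, the corresponding /etc/fstab entry could look like the following sketch (the device, mount point and ext4 filesystem type are assumptions taken from the example above):

```conf
# /etc/fstab – mount the FTP storage with ACL support enabled
/dev/sda1  /var/ftp  ext4  defaults,noexec,rw,acl  0  2
```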

To grant access to users in our default group 10000 (for both existing and newly created files), we use the setfacl command. Think carefully about this step: users should not be able to remove one of the shared folders accidentally!

~# setfacl     -m g:10000:rx  /var/ftp/*
~# setfacl -d -Rm g:10000:rwx /var/ftp/{share,home}/*
~# setfacl    -Rm g:10000:rwx /var/ftp/{share,home}/*

We want all users to have their own home directory residing in /var/ftp/home/, so make sure this directory exists. To jail each user into their own home directory, change the DefaultRoot directive in your main configuration file /etc/proftpd.conf to look like

DefaultRoot  /var/ftp/home/%u

Nonexistent home directories /var/ftp/home/username will be created on demand, as configured in ldap.conf (see above). At this point, LDAP users should be able to log in and will be dropped into their empty home directory. Now we have to set up the directory permissions and link the shared directories into the home directory. To achieve this we will make extensive use of the IfGroup directive. It is very important that the module mod_ifsession.c is the last module loaded in /etc/proftpd/modules.conf! Additionally, you should have lines that load mod_vroot.c and mod_ldap.c.
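The relevant part of /etc/proftpd/modules.conf could then look like this sketch (only the ordering matters here; all other LoadModule lines are omitted):

```conf
# /etc/proftpd/modules.conf (excerpt)
LoadModule mod_ldap.c
LoadModule mod_vroot.c
# ... other modules ...
LoadModule mod_ifsession.c   # must be the last module loaded
```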

Linking is very simple and works as follows:

<IfGroup ftp_test_group>
   VRootAlias /var/ftp/share /share
</IfGroup>

In terms of security, it is also very useful to limit particular FTP commands to the admin group:

# limit login to users with valid ftp_* groups
<Limit LOGIN>
   AllowGroup ftp_admin_group,ftp_test_group
</Limit>
# in general allow ftp-commands for all users
<Directory />
   <Limit ALL ALL FTP>
      AllowGroup ftp_admin_group,ftp_test_group
   </Limit>
</Directory>
# deny deletion of files (does not cover overwriting)
<Directory />
   <Limit DELE RMD>
      DenyGroup !ftp_admin_group
   </Limit>
</Directory>

I think we are done here. Restart your FTP server by executing

~# service proftpd restart

Here you go! For testing purposes, set the log level to debug and monitor the login process. Also, force SSL/TLS (mod_tls.c), because otherwise everything, even passwords, will be transferred in cleartext! If you run into trouble somewhere, just let me know.
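A minimal mod_tls setup could look like the following sketch (the certificate paths and the strict TLSRequired policy are assumptions you must adapt to your setup):

```conf
<IfModule mod_tls.c>
    TLSEngine                on
    TLSRequired              on
    TLSRSACertificateFile    /etc/ssl/certs/proftpd.pem
    TLSRSACertificateKeyFile /etc/ssl/private/proftpd.key
</IfModule>
```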

Filed under: Linux, Technik Tagged: Kolab, ProFTPd


Cornelius Hald's picture
Tue, 2014-11-11 12:51

Today we’re showing how to extend the single domain setup done earlier to get a truly multi domain Kolab install. You should probably reserve a couple of hours, as there are quite some changes to make and not everything is totally trivial. Also, if you’ve not read the blog post about the single domain setup, now is a good time :)

First of all, you can find the official documentation here. It’s probably a good idea to read it as well. We start with the easy parts and end with postfix, which needs the most changes. At the very end there are a couple of things that may or may not be issues you should be aware of.

Change Amavisd

We tell amavisd to accept all domains.

vi /etc/amavisd/amavisd.conf
# Replace that line
@local_domains_maps = ( [".$mydomain"] );
# With this line
$local_domains_re = new_RE( qr'.*' );

Change Cyrus IMAPD

Tell the IMAP server how to find our other domains. Add the following to the bottom of /etc/imapd.conf

ldap_domain_base_dn: cn=kolab,cn=config
ldap_domain_filter: (&(objectclass=domainrelatedobject)(associateddomain=%s))
ldap_domain_name_attribute: associatedDomain
ldap_domain_scope: sub
ldap_domain_result_attribute: inetdomainbasedn

Change Roundcube (webmail)

Basically you need to change the base_dn in several places. The placeholder ‘%dc’ is replaced at run-time with the real domain the user belongs to.

To save me some typing I’m pasting the diff output produced by git here. So it looks like more than it actually is…

diff --git a/roundcubemail/password.inc.php b/roundcubemail/password.inc.php
index c3d449c..eafc8e5 100644
--- a/roundcubemail/password.inc.php
+++ b/roundcubemail/password.inc.php
@@ -45,7 +45,7 @@

     // LDAP base name (root directory)
     // Exemple: 'dc=exemple,dc=com'
-    $config['password_ldap_basedn'] = 'ou=People,dc=skolar,dc=de';
+    $config['password_ldap_basedn'] = 'ou=People,%dc';

     // LDAP connection method
     // There is two connection method for changing a user's LDAP password.
@@ -99,7 +99,7 @@
     // If password_ldap_searchDN is set, the base to search in using the filter below.
     // Note that you should comment out the default password_ldap_userDN_mask setting
     // for this to take effect.
-    $config['password_ldap_search_base'] = 'ou=People,dc=skolar,dc=de';
+    $config['password_ldap_search_base'] = 'ou=People,%dc';

     // LDAP search filter
     // If password_ldap_searchDN is set, the filter to use when
diff --git a/roundcubemail/calendar.inc.php b/roundcubemail/calendar.inc.php
index 98be7b9..8f98f8a 100644
--- a/roundcubemail/calendar.inc.php
+++ b/roundcubemail/calendar.inc.php
@@ -22,11 +22,11 @@
             'hosts'                 => 'localhost',
             'port'                  => 389,
             'use_tls'               => false,
-            'base_dn'               => 'ou=Resources,dc=skolar,dc=de',
+            'base_dn'               => 'ou=Resources,%dc',
             'user_specific'         => true,
             'bind_dn'               => '%dn',
             'bind_pass'             => '',
-            'search_base_dn'        => 'ou=People,dc=skolar,dc=de',
+            'search_base_dn'        => 'ou=People,%dc',
             'search_bind_dn'        => 'uid=kolab-service,ou=Special Users,dc=skolar,dc=de',
             'search_bind_pw'        => 'xUlA7PzBZnRaYV4',
             'search_filter'         => '(&(objectClass=inetOrgPerson)(mail=%fu))',
diff --git a/roundcubemail/config.inc.php b/roundcubemail/config.inc.php
index bfbfba3..60dc0b2 100644
--- a/roundcubemail/config.inc.php
+++ b/roundcubemail/config.inc.php
@@ -6,7 +6,7 @@

     $config['session_domain'] = '';
     $config['des_key'] = "FMlzG7LeqiUSOSK2T8xKQTHR";
     $config['use_secure_urls'] = true;
     $config['assets_path'] = 'assets/';

@@ -154,11 +154,11 @@
                     'hosts'                     => Array('localhost'),
                     'port'                      => 389,
                     'use_tls'                   => false,
-                    'base_dn'                   => 'ou=People,dc=skolar,dc=de',
+                    'base_dn'                   => 'ou=People,%dc',
                     'user_specific'             => true,
                     'bind_dn'                   => '%dn',
                     'bind_pass'                 => '',
-                    'search_base_dn'            => 'ou=People,dc=skolar,dc=de',
+                    'search_base_dn'            => 'ou=People,%dc',
                     'search_bind_dn'            => 'uid=kolab-service,ou=Special Users,dc=skolar,
                     'search_bind_pw'            => 'xUlA7PzBZnRaYV4',
                     'search_filter'             => '(&(objectClass=inetOrgPerson)(mail=%fu))',
@@ -196,7 +196,7 @@
                             'photo'             => 'jpegphoto'
                         ),
                     'groups'                    => Array(
-                            'base_dn'           => 'ou=Groups,dc=skolar,dc=de',
+                            'base_dn'           => 'ou=Groups,%dc',
                             'filter'            => '(&' . '(|(objectclass=groupofuniquenames)(obj
                             'object_classes'    => Array("top", "groupOfUniqueNames"),
                             'member_attr'       => 'uniqueMember',
diff --git a/roundcubemail/kolab_auth.inc.php b/roundcubemail/kolab_auth.inc.php
index 9fb5335..8eff518 100644
--- a/roundcubemail/kolab_auth.inc.php
+++ b/roundcubemail/kolab_auth.inc.php
@@ -8,7 +8,7 @@
         'port'                      => 389,
         'use_tls'                   => false,
         'user_specific'             => false,
-        'base_dn'                   => 'ou=People,dc=skolar,dc=de',
+        'base_dn'                   => 'ou=People,%dc',
         'bind_dn'                   => 'uid=kolab-service,ou=Special Users,dc=skolar,dc=de',
         'bind_pass'                 => 'xUlA7PzBZnRaYV4',
         'writable'                  => false,
@@ -26,11 +26,14 @@
         'sizelimit'                 => '0',
         'timelimit'                 => '0',
         'groups'                    => Array(
-                'base_dn'           => 'ou=Groups,dc=skolar,dc=de',
+                'base_dn'           => 'ou=Groups,%dc',
                 'filter'            => '(|(objectclass=groupofuniquenames)(objectclass=groupofurl
                 'object_classes'    => Array('top', 'groupOfUniqueNames'),
                 'member_attr'       => 'uniqueMember',
             ),
+        'domain_base_dn'           => 'cn=kolab,cn=config',
+        'domain_filter'            => '(&(objectclass=domainrelatedobject)(associateddomain=%s))'
+        'domain_name_attr'         => 'associateddomain',
     );

Change Postfix

Now this is actually the hardest part, requiring the most changes. Initially I thought there would be a way around it, but it looks like it is currently really needed.

First we apply a couple of changes that allow us to have multiple domains besides our management domain (the domain we used to install Kolab). However, those changes will not support domains having aliases, e.g. the domain kodira.de with an alias of tourschall.com. To get domains with working aliases, we need to do even more.

Postfix Part 1 (basics)

Please follow the instructions given in the official documentation here. I don’t really see how I could write that part better or more compactly. Do all the changes for: mydestination, local_recipient_maps, virtual_alias_maps and transport_maps.

Now, if you don’t need aliases, you’re basically done and you can skip the next section.

Postfix Part 2 (alias domains)

For each domain that should support alias domains we need to add 4 files. We’re doing this based on the following example.

  • Domain: kodira.de
  • Alias: tourschall.com

First, create the directory /etc/postfix/ldap/kodira.de (named after the real domain).

In that directory, create the following 4 files, but do not just copy & paste them. You have to adjust them to your setup.

# local_recipient_maps.cf
# Adjust domain, bind_dn, bind_pw
server_host = localhost
server_port = 389
version = 3
search_base = cn=kolab,cn=config
scope = sub
domain = ldap:/etc/postfix/ldap/kodira.de/mydestination.cf
bind_dn = uid=kolab-service,ou=Special Users,dc=skolar,dc=de
bind_pw = XXX
query_filter = (&(|(mail=%s)(alias=%s))(|(objectclass=kolabinetorgperson)(|(objectclass=kolabgroupofuniquenames)(objectclass=kolabgroupofurls))(|(|(objectclass=groupofuniquenames)(objectclass=groupofurls))(objectclass=kolabsharedfolder))(objectclass=kolabsharedfolder)))
result_attribute = mail
# mydestination.cf
# Adjust bind_dn, bind_pw, query_filter
server_host = localhost
server_port = 389
version = 3
search_base = cn=kolab,cn=config
scope = sub
bind_dn = uid=kolab-service,ou=Special Users,dc=skolar,dc=de
bind_pw = XXX
query_filter = (&(associatedDomain=%s)(associatedDomain=kodira.de))
result_attribute = associateddomain
# transport_maps.cf
# Adjust domain, bind_dn, bind_pw
server_host = localhost
server_port = 389
version = 3
search_base = cn=kolab,cn=config
scope = sub
domain = ldap:/etc/postfix/ldap/kodira.de/mydestination.cf
bind_dn = uid=kolab-service,ou=Special Users,dc=skolar,dc=de
bind_pw = XXX
query_filter = (&(|(mailAlternateAddress=%s)(alias=%s)(mail=%s))(objectclass=kolabinetorgperson))
result_attribute = mail
result_format = lmtp:unix:/var/lib/imap/socket/lmtp
# virtual_alias_maps.cf
# Adjust search_base, domain, bind_dn, bind_pw
server_host = localhost
server_port = 389
version = 3
search_base = dc=kodira,dc=de
scope = sub
domain = ldap:/etc/postfix/ldap/kodira.de/mydestination.cf
bind_dn = uid=kolab-service,ou=Special Users,dc=skolar,dc=de
bind_pw = XXX
query_filter = (&(|(mail=%s)(alias=%s))(objectclass=kolabinetorgperson))
result_attribute = mail
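For reference, the filter in mydestination.cf matches alias domains because a Kolab domain entry carries one associatedDomain value per alias. A hypothetical LDAP entry for the example domain could look roughly like this:

```conf
dn: associateddomain=kodira.de,cn=kolab,cn=config
objectclass: top
objectclass: domainrelatedobject
associateddomain: kodira.de
associateddomain: tourschall.com
```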

Almost done, but don’t forget to reference those files from /etc/postfix/main.cf.
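The references in /etc/postfix/main.cf could look roughly like this sketch (hypothetical; keep your existing map entries and append the per-domain files):

```conf
# /etc/postfix/main.cf (excerpt, hypothetical)
local_recipient_maps = ldap:/etc/postfix/ldap/local_recipient_maps.cf,
    ldap:/etc/postfix/ldap/kodira.de/local_recipient_maps.cf
transport_maps = ldap:/etc/postfix/ldap/transport_maps.cf,
    ldap:/etc/postfix/ldap/kodira.de/transport_maps.cf
virtual_alias_maps = $alias_maps,
    ldap:/etc/postfix/ldap/virtual_alias_maps.cf,
    ldap:/etc/postfix/ldap/kodira.de/virtual_alias_maps.cf
mydestination = ldap:/etc/postfix/ldap/mydestination.cf,
    ldap:/etc/postfix/ldap/kodira.de/mydestination.cf
```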

The bad news is: you have to add and adjust those 4 files for each domain that should support aliases. The good news is: once configured, you can use as many aliases for that domain as you want, with no further config file changes.

Postfix Part 3 (finishing up)

Restart all services or just reboot the machine. Most things should work now, but there are a couple of points you might still need to take care of.

  1. In our main.cf there were references to some catchall maps that we do not use and that do not exist on the file system. Therefore postfix stopped looking at the rest of those maps. We simply deleted the catchall references from main.cf and got rid of that problem.
  2. In our setup we had an issue with a domain having an alias with more than two parts, e.g. mail.kodira.de. As we don’t need addresses of the form user@host.domain.tld, we removed this alias and thus solved the problem.

Create domains and users using WAP

Now you should be able to use the ‘Kolab Web Administration Panel’ (WAP) to create domains and users.

  1. Go to http://<yourserver>/kolab-webadmin
  2. Login as ‘cn=Directory Manager’
  3. Go to ‘Domains’ and add a domain (simply giving it a name is enough)
  4. If you want, add an alias to this domain by clicking the ‘+’ sign
  5. Logout
  6. Login again as ‘cn=Directory Manager’
  7. In the top right corner you should be able to select your newly created domain. Select it.
  8. Go to ‘Users’ and add a user to your new domain
  9. If you want, give the user the role ‘kolab-admin’. If you do, that user is able to log into WAP and administrate that domain. For that login you should not use LDAP notation, but simply user@domain.tld.

Now maybe create a couple of test users on various domains and try to send some mails back and forth. It should work. If not, have a look at these log files:

  • /var/log/maillog
  • /var/log/dirsrv/slapd-mail/access

Also do a grep for ‘kodira’, ‘tourschall’ and ‘example’ in /etc/ to make sure you didn’t accidentally forget to change some example configuration. Last but not least, think about putting /etc/ into a git repository – that will help you review and restore changes you’ve made.

Good luck and have fun :)

The post Kolab 3.3 Multi-Domain Setup on CentOS 7 appeared first on Kodira.


Cornelius Hald's picture
Tue, 2014-11-11 10:54

After a lot of reading and some trial-and-error, I’ve figured out a way to reproducibly install Kolab 3.3 on CentOS 7 with multi-domain support. Most information can be found in the official documentation; however, some parts are not that easy to understand for a Kolab noob.

I won’t go into too much detail here, so this will be mostly a step-by-step guide without a lot of explanation. You really should not use this document as your only source of information; at least read through the official documentation as well. Also, you should feel confident with Linux admin tasks, otherwise Kolab might not be the best choice, as it is not an off-the-shelf solution.

In this document we will use the following hosts and domains. Replace them with your own.

  • Hostname: mail.skolar.de
  • Management domain: skolar.de
  • Primary hosted domain: kodira.de
  • Alias for primary hosted domain: tourschall.com
  • We could go on with a secondary hosted domain, but it works exactly like the primary hosted domain, so we won’t go there…

Let’s start with a fresh minimal CentOS 7 install where you are root.

First we disable SE-Linux and the firewall. You should re-enable both later, but for now we don’t want them to get in our way:

# Check status of SE-Linux
sestatus
# Temporarily disable it
setenforce 0
# Stop firewall
systemctl stop firewalld
# Disable firewall (don't start on next boot)
systemctl disable firewalld

To permanently disable SE-Linux, edit /etc/selinux/config (I recommend you do this now).

Set a valid host name (it needs to be resolvable via DNS):

echo "mail.skolar.de" > /etc/hostname

Add Kolab repositories and GPG keys

rpm -Uhv http://ftp.uma.es/mirror/epel/beta/7/x86_64/epel-release-7-1.noarch.rpm
cd /etc/yum.repos.d/
wget http://obs.kolabsys.com/repositories/Kolab:/3.3/CentOS_7/Kolab:3.3.repo
wget http://obs.kolabsys.com/repositories/Kolab:/3.3:/Updates/CentOS_7/Kolab:3.3:Updates.repo
gpg --search devel@lists.kolab.org
# Do it again, for me it always just worked the second time
gpg --search devel@lists.kolab.org
gpg --export --armor devel@lists.kolab.org > devel.asc
rpm --import devel.asc
rm devel.asc

Install Kolab (including dependencies)

yum install kolab

The next command will start the various services like postfix and apache and should also start MariaDB (which is a replacement for MySQL). Due to bug #3877 it won’t, so we have to start MariaDB manually.

systemctl enable mariadb
systemctl start mariadb

Now start the Kolab setup process. It will ask you for many passwords; most of them you can just leave as they are, but pay attention to the password for “Directory Manager”. Either remember the one setup generated or type in your own. You’ll need that password quite often later on.

setup-kolab

Besides the passwords, the right answers for me were:

Domain -> skolar.de
What MySQL server are we setting up? -> 2: New MySql server
Timezone -> Europe/Berlin

Now might be a good time to reboot and see if all the services start up successfully. If you do, please make sure SE-Linux is turned off permanently.

Great, you should now be able to login to the web admin. The URL and the credentials are as follows:

http://<yourserver>/kolab-webadmin
User: cn=Directory Manager
Password: The password used in setup-kolab

You should be able to create a new Kolab user and log into webmail as that new user. But because of bug #3565 your incoming mail is not properly scanned. Do the following to resolve that issue:

vi /etc/amavisd/amavisd.conf
# Change that line
\&ask_daemon, ["CONTSCAN {}\n", "/var/spool/amavisd/clamd.sock"],
# To look like this
\&ask_daemon, ["CONTSCAN {}\n", "/var/run/clamd.amavisd/clamd.sock"],
# Save, close and restart amavisd
systemctl restart amavisd

This should be all for a single-domain install. You should be able to send and receive mail using the web frontend or dedicated IMAP clients.

That’s all for part 1. Have a look at part 2 where we’re extending this setup to support multiple domains.

The post Kolab 3.3 Single-Domain Setup on CentOS 7 appeared first on Kodira.


roundcube's picture
Mon, 2014-11-10 21:13

We’re proud to announce that the beta release of the next major version 1.1 of
Roundcube webmail is now available for download and testing. With this
milestone we introduce a bunch of new features and some clean-up with the 3rd
party libraries Roundcube uses:

  • Allow searching across multiple folders
  • Improved support for screen readers and assistive technology using
    WCAG 2.0 and WAI ARIA standards
  • Support images in HTML signatures (copy & paste)
  • Added namespace filter and folder searching in folder manager
  • New config option to disable UI elements/actions
  • Stronger password encryption using OpenSSL
  • Support for the IMAP SPECIAL-USE extension
  • Support for Oracle databases
  • Moved 3rd party libs to vendor directory, managed by Composer

And of course plenty of small improvements and bug fixes.

IMPORTANT: with this version, we dropped support for PHP < 5.3.7 and
Internet Explorer < 9. IE7/IE8 support can be restored by enabling the
legacy_browser plugin.

See the complete Changelog at trac.roundcube.net/wiki/Changelog
and download the new packages from roundcube.net/download. Please note that this
is a beta release and we recommend testing it in a separate environment. And
don’t forget to back up your data before installing it.


Wed, 2014-10-29 00:00

Recently, I noticed a "Failed to save changes" error when trying to move events between distinct calendars.
After a short investigation I found that this bug is already fixed, but not packaged for an easy upgrade, so I will shortly describe how to apply the fix for Debian Wheezy and Kolab 3.3.

The above-mentioned bug is already fixed in roundcubemail-plugins-kolab repository.
You can jump directly to the a3d5f717 commit and read the details.

Quick Remedy

We need to replace calendar and libkolab plugins.

Download the code from the already fixed roundcubemail-plugins-kolab repository to the /tmp temporary directory.

# cd /tmp
# wget http://git.kolab.org/roundcubemail-plugins-kolab/snapshot/roundcubemail-plugins-kolab-a3d5f717a2250cfbd7a5652a445adcd6a0845545.tar.gz

Rename the roundcubemail plugin directories that will be replaced in the next step.

# mv /usr/share/roundcubemail/plugins/{calendar,calendar.before_fix}
# mv /usr/share/roundcubemail/plugins/{libkolab,libkolab.before_fix}

Extract both plugins from downloaded archive.

# tar xvfz roundcubemail-plugins-kolab-a3d5f717a2250cfbd7a5652a445adcd6a0845545.tar.gz roundcubemail-plugins-kolab-a3d5f717a2250cfbd7a5652a445adcd6a0845545/plugins/{calendar,libkolab} --strip-components 2

Install and configure plugins.

# mv {calendar,libkolab} /usr/share/roundcubemail/plugins/
# ln -s /etc/roundcubemail/calendar.inc.php /usr/share/roundcubemail/plugins/calendar/config.inc.php
# ln -s /etc/roundcubemail/libkolab.inc.php /usr/share/roundcubemail/plugins/libkolab/config.inc.php

Remove downloaded archive.

# rm roundcubemail-plugins-kolab-a3d5f717a2250cfbd7a5652a445adcd6a0845545.tar.gz

Simple and easy.


Fri, 2014-10-24 00:00

Just in time for the official Kolab 3.3 release, our Gentoo packages for Kolab 3.2 became stable and ready to use. This clears the way for the upcoming release of Kolab 3.3 for Gentoo. Although this release won't bring any major changes, it prepares the ground for upcoming developments and new features in Kolab 3.3. Furthermore, with Kolab 3.2 we introduced an upgrade path between Kolab releases for Gentoo, and we will try our best to keep updates as consistent and comfortable as possible.


roundcube's picture
Fri, 2014-10-10 23:25

PGP encryption is one of the most frequently requested features for Roundcube, and for good reason: more and more people care about end-to-end encryption in their everyday communication. But unfortunately, webmail applications currently can’t fully participate in this game, and doing PGP encryption right in web-based applications isn’t a simple task. Although there are ways and even some basic implementations, all of them have their pros and cons. And yet the ultimate solution is still missing.

Browser extensions to the rescue

In our opinion, the way to go is a browser extension that does the important work and guards the keys. A crucial point is to keep the encryption component under the user’s full control, which in the browser and HTTP world can only be provided by a native browser plugin. And the good news is, there are working extensions available today. The most prominent one is probably Mailvelope, which detects encrypted message bodies in various webmail applications and also hooks into message composition to send signed and encrypted email messages with your favorite webmail app. Another very promising tool for end-to-end encryption is coming our way: p≡p; a browser extension is at least planned for the longer term. And even Google just started their own project with the recently announced End-to-End Chrome extension.

That’s a good start indeed. However, the encryption capabilities of those extensions only cover the message body and leave out attachments or even PGP/MIME messages, mostly because the extension has limited knowledge about the webmail app and there’s no interaction between the web app and the extension. On the other side, the webmail app isn’t aware of the encryption features available in the user’s browser and therefore suppresses certain parts of a message, like signatures. A direct interaction between the webmail and the encryption extension could help add the missing pieces like encrypted attachment upload and message signing. All we need to do is introduce the two components to each other.

From the webmail developer’s perspective

So here’s a loose list of functionality we’d like to see exposed by an encryption browser extension and which we believe would contribute to an integrated solution for secure emailing.

A global (window.encryption-style) object providing functions to:

  • List supported encryption technologies (pgp, s/mime)
  • Switch to manual mode (i.e. disabling automatic detection of webmail containers)

For message display:

  • Register message content area (jQuery-like selector)
  • Setters for message headers (e.g. sender, recipient)
  • Decrypt message content (String) directly
  • Validate signature (pass signature as argument)
  • Download and decrypt attachment from a given URL and
    • a) prompt for saving file
    • b) return a FileReader object for inline display
  • Bonus points: support for pgp/mime; implies full support for MIME message structures

For message composition:

  • Setters for message recipients (or recipient text fields)
  • Register message compose text area (jQuery-like selector)
  • … or functions to encrypt and/or sign message contents (String) directly
  • Query the existence of a public key/certificate for a given recipient address
  • File selector/upload with transparent encryption
  • … or an API to encrypt binary data (from a FileReader object into a new FileReader object)

Regarding file upload for attachments to an encrypted message, some extra challenges exist in an asynchronous client-server web application: attachment encryption requires the final recipients to be known before the (encrypted) file is uploaded to the server. If the list of recipients or the encryption settings change, already uploaded attachments become void and need to be re-encrypted and uploaded again.

And presumably that’s just one example of possible pitfalls in this endeavor to add full-featured PGP encryption to webmail applications. Thus, dear developers of Mailvelope, p≡p, WebPG and Google, please take the above list as a source of inspiration for your further development. We’d gladly cooperate to add the missing pieces.


Timotheus Pokorra's picture
Tue, 2014-10-07 19:02

On the Kolab IRC we have had some issues with apt-get reporting connection failures etc.

So I updated the blogpost from last year: http://www.pokorra.de/2013/10/downloading-from-obs-repo-via-php-proxy-file/

The Kolab Systems OBS now listens on port 80, so there is not really a need for a proxy anymore. But perhaps it helps with debugging the apt-get commands.

I have extended the scripts to work with apt-get on Debian/Ubuntu as well; the original script was for yum only, it seems.

I have set up a small PHP script on a server somewhere on the Internet.

In my sample configuration, I use a Debian server with Lighttpd and PHP.

Install:

apt-get install lighttpd spawn-fcgi php5-curl php5-cgi

changes to /etc/lighttpd/lighttpd.conf:

server.modules = (
        [...]
        "mod_fastcgi",
        "mod_rewrite",
)
 
fastcgi.server = ( ".php" => ((
                     "bin-path" => "/usr/bin/php5-cgi",
                     "socket" => "/tmp/php.socket",
                     "max-procs" => 2,
                     "bin-environment" => (
                       "PHP_FCGI_CHILDREN" => "16",
                       "PHP_FCGI_MAX_REQUESTS" => "10000"
                     ),
                     "bin-copy-environment" => (
                       "PATH", "SHELL", "USER"
                     ),
                     "broken-scriptfilename" => "enable"
                 )))
 
url.rewrite-once = (
    "^/obs\.kolabsys\.com/index.php" => "$0",
    "^/obs\.kolabsys\.com/(.*)" => "/obs.kolabsys.com/index.php?page=$1"
)

and in /var/www/obs.kolabsys.com/index.php:

<?php 
 
$proxyurl="http://kolabproxy2.pokorra.de";
$obsurl="http://obs.kolabsys.com";
 
// it seems file_get_contents does not return the full page
function curl_get_file_contents($URL)
{
    $c = curl_init();
    curl_setopt($c, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($c, CURLOPT_URL, str_replace('&amp;', '&', $URL));
    $contents = curl_exec($c);
    curl_close($c);
    if ($contents) return $contents;
    else return FALSE;
}
 
$page = $_GET['page'];
$filename = basename($page);
debug($page . "   ".$filename);
$content = curl_get_file_contents($obsurl."/".$page);
if (strpos($content, "Error 404") !== false) {
	header("HTTP/1.0 404 Not Found");
	die();
}
if (substr($page, -strlen("/")) === "/")
{
        # print directory listing
        $content = str_replace($obsurl."/", $proxyurl."/obs.kolabsys.com/", $content);
        $content = str_replace('href="/', 'href="'.$proxyurl.'/obs.kolabsys.com/', $content);
        echo $content;
}
else if (substr($filename, -strlen(".repo")) === ".repo")
{
        header("Content-Type: text/plain");
        echo str_replace($obsurl."/", $proxyurl."/obs.kolabsys.com/", $content);
}
else
{
#die($filename);
        header("Content-Type: application/octet-stream");
        header('Content-Disposition: attachment; filename="'.$filename.'"');
        header("Content-Transfer-Encoding: binary");
        echo curl_get_file_contents($obsurl."/".$page);
}
 
function debug($msg){
 if(is_writeable("/tmp/mylog.log")){
    $fh = fopen("/tmp/mylog.log",'a+');
    fputs($fh,"[Log] ".date("d.m.Y H:i:s")." $msg\n");
    fclose($fh);
  }
} 
?>

Now it is possible to download the repo files like this:

cd /etc/yum.repos.d/
wget http://kolabproxy2.pokorra.de/obs.kolabsys.com/repositories/Kolab:/3.3/CentOS_6/Kolab:3.3.repo
wget http://kolabproxy2.pokorra.de/obs.kolabsys.com/repositories/Kolab:/3.3:/Updates/CentOS_6/Kolab:3.3:Updates.repo
yum install kolab

For Ubuntu 14.04:

echo "deb http://kolabproxy2.pokorra.de/obs.kolabsys.com/repositories/Kolab:/3.3/Ubuntu_14.04/ ./" > /etc/apt/sources.list.d/kolab.list
echo "deb http://kolabproxy2.pokorra.de/obs.kolabsys.com/repositories/Kolab:/3.3:/Updates/Ubuntu_14.04/ ./" >> /etc/apt/sources.list.d/kolab.list
apt-get install kolab

This works for all other projects and distributions on obs.kolabsys.com too.
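Since the proxy simply mirrors the path below obs.kolabsys.com, the proxied URL for any project and distribution can be derived mechanically. A minimal shell sketch of the pattern (repo_url is a hypothetical helper, not part of the original scripts):

```shell
# Build a proxied OBS repository URL from an OBS project path and a distribution.
# The proxy mirrors everything below obs.kolabsys.com unchanged.
proxy="http://kolabproxy2.pokorra.de/obs.kolabsys.com"

repo_url() {
    # $1 = OBS project path, e.g. "Kolab:/3.3"; $2 = distribution, e.g. "CentOS_6"
    echo "$proxy/repositories/$1/$2/"
}

repo_url "Kolab:/3.3" "CentOS_6"
# http://kolabproxy2.pokorra.de/obs.kolabsys.com/repositories/Kolab:/3.3/CentOS_6/
```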


Tue, 2014-10-07 00:00

I have been using self-hosted Kolab Groupware every day for quite a while now.
Therefore the need arose to monitor process activity and system resources using the Monit utility.

Table of contents

A couple of words about Monit

Monit is a simple and robust utility for monitoring and automatic maintenance, which is supported on Linux, BSD and OS X.

Software installation

Debian Wheezy currently provides Monit 5.4.

To install it, execute the following command:

$ sudo apt-get install monit

The Monit daemon will be started at boot time. Alternatively, you can use the standard System V init scripts to manage the service.

Initial configuration

Configuration files are located under the /etc/monit/ directory. Default settings are stored in the /etc/monit/monitrc file, which I strongly suggest reading.
Custom configuration will be stored in the /etc/monit/conf.d/ directory.

I will override several important settings using a local.conf file.

Modified settings

  • Set the alert email address to root@example.org
  • Slightly change the default template
  • Define the mail server as localhost
  • Set the default interval to 120 seconds with an initial delay of 180 seconds
  • Enable the local web server to take advantage of additional functionality
    (currently commented out)

$ sudo cat /etc/monit/conf.d/local.conf
# define e-mail recipient
set alert root@example.org

# define e-mail template
set mail-format {
from: monit@$HOST
subject: monit alert -- $EVENT $SERVICE
message: $EVENT Service $SERVICE
Date:        $DATE
Action:      $ACTION
Host:        $HOST
Description: $DESCRIPTION
}

# define server
set mailserver localhost

# define interval and initial delay
set daemon 120 with start delay 180

# set web server for local management
# set httpd port 2812 and use the address localhost allow localhost

Please note that enabling the built-in web server in the way shown above will allow every local user to access and perform Monit operations. It should essentially be disabled or secured with a username and password combination.
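For completeness, a sketch of what a locked-down variant could look like (the admin account and password are placeholders, not settings from my actual configuration):

```
# listen on localhost only and require credentials
set httpd port 2812
    use address localhost
    allow localhost
    allow admin:"change-me"
```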

Command-line operations

Verify configuration syntax

To check configuration syntax execute the following command.

$ sudo monit -t
Control file syntax OK

Start, Stop, Restart actions

Start all services and enable monitoring for them.

$ sudo monit start all

Start all services in resources group and enable monitoring for them.

$ sudo monit -g resources start

Start rootfs service and enable monitoring for it.

$ sudo monit start rootfs

You can initiate the stop action in the same way, which will stop the service and disable monitoring, or execute the restart action to stop and start the corresponding services.

Monitor and unmonitor actions

Monitor all services.

$ sudo monit monitor all

Monitor all services in resources group.

$ sudo monit -g resources monitor

Monitor rootfs service.

$ sudo monit monitor rootfs

Use the unmonitor action to disable monitoring for the corresponding services.

Status action

Print service status.

$ sudo monit status
The Monit daemon 5.6 uptime: 27d 0h 47m

System 'server'
  status                            Running
  monitoring status                 Monitored
  load average                      [0.26] [0.43] [0.48]
  cpu                               12.8%us 2.6%sy 0.0%wa
  memory usage                      2934772 kB [36.4%]
  swap usage                        2897376 kB [35.0%]
  data collected                    Mon, 29 Sep 2014 22:47:49

Filesystem 'rootfs'
  status                            Accessible
  monitoring status                 Monitored
  permission                        660
  uid                               0
  gid                               6
  filesystem flags                  0x1000
  block size                        4096 B
  blocks total                      17161862 [67038.5 MB]
  blocks free for non superuser     7327797 [28624.2 MB] [42.7%]
  blocks free total                 8205352 [32052.2 MB] [47.8%]
  inodes total                      4374528
  inodes free                       4151728 [94.9%]
  data collected                    Mon, 29 Sep 2014 22:47:49

Summary action

Print short service summary.

$ sudo monit summary
The Monit daemon 5.6 uptime: 27d 0h 48m

System 'server'                     Running
Filesystem 'rootfs'                 Accessible

Reload action

Reload configuration and reinitialize Monit daemon.

$ sudo monit reload

Quit action

Terminate Monit daemon.

$ sudo monit quit
monit daemon with pid [5248] killed

Monitor filesystems

The configuration syntax is very consistent and easy to grasp. I will start with a simple example and then proceed to slightly more complex ideas. Just remember to check one thing at a time.

I am using a VPS service due to the easy backup/restore process, so I have only one filesystem on the /dev/root device, which I will monitor as a named rootfs service.

The Monit daemon will generate an alert and send an email if space or inode usage on the rootfs filesystem [stored on the /dev/root device] exceeds 80 percent of the available capacity.

$ sudo cat /etc/monit/conf.d/filesystems.conf
check filesystem rootfs with path /dev/root
  group resources

  if space usage > 80% then alert
  if inode usage > 80% then alert

The above service is placed in resources group for easier management.

Monitor system resources

The following configuration will be stored as a named server service, as it describes resource usage for the whole mail server.

The Monit daemon will check memory usage; if it exceeds 80% of the available capacity for three subsequent events, it will send an alert email.
A recovery message will be sent after two subsequent events to limit the number of sent messages. The same rules apply to the remaining system resources.

The system I am using has four available processors, so the alert will be generated when the five-minute load average exceeds five.

$ sudo cat /etc/monit/conf.d/resources.conf
check system server
  group resources

  if memory usage > 80% for 3 cycles then alert
  else if succeeded for 2 cycles then alert

  if swap usage > 50% for 3 cycles then alert
  else if succeeded for 2 cycles then alert

  if cpu(wait) > 30% for 3 cycles then alert
  else if succeeded for 2 cycles then alert

  if cpu(system) > 60% for 3 cycles then alert
  else if succeeded for 2 cycles then alert

  if cpu(user) > 60% for 3 cycles then alert
  else if succeeded for 2 cycles then alert

  if loadavg(5min) > 5 then alert
  else if succeeded for 2 cycles then alert

The above service is placed in resources group for easier management.

Monitor system services

cron

cron is a daemon used to execute user-specified tasks at scheduled time.

The Monit daemon will use the specified pid file [/var/run/crond.pid] to monitor the [cron] service and restart it if it stops for any reason.
A configuration change will generate an alert message; a permission issue will generate an alert message and disable further monitoring.

A GID of 102 translates to the crontab group.
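Numeric UIDs and GIDs like this one can be resolved against the local account database with getent; a quick sketch (GID 102 is specific to my Debian system, so the portable UID 0 lookup is shown instead):

```shell
# Resolve a numeric ID to its name: getent prints the account record,
# and the name is the first colon-separated field.
# On my system, "getent group 102" would print the crontab group.
getent passwd 0 | cut -d: -f1   # root
```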

$ sudo cat /etc/monit/conf.d/cron.conf
check process cron with pidfile /var/run/crond.pid
  group system
  group scheduled-tasks

  start program = "/usr/sbin/service cron start"
  stop  program = "/usr/sbin/service cron stop"

  if 3 restarts within 5 cycles then timeout

  depends on cron_bin
  depends on cron_rc
  depends on cron_rc.d
  depends on cron_rc.daily
  depends on cron_rc.hourly
  depends on cron_rc.monthly
  depends on cron_rc.weekly
  depends on cron_rc.spool

  check file cron_bin with path /usr/sbin/cron
    group scheduled-tasks
    if failed checksum       then unmonitor
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check file cron_rc with path /etc/crontab
    group scheduled-tasks
    if failed checksum       then alert
    if failed permission 644 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check directory cron_rc.d with path /etc/cron.d
    group scheduled-tasks
    if changed timestamp     then alert
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check directory cron_rc.daily with path /etc/cron.daily
    group scheduled-tasks
    if changed timestamp     then alert
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check directory cron_rc.hourly with path /etc/cron.hourly
    group scheduled-tasks
    if changed timestamp     then alert
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check directory cron_rc.monthly with path /etc/cron.monthly
    group scheduled-tasks
    if changed timestamp     then alert
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check directory cron_rc.weekly with path /etc/cron.weekly
    group scheduled-tasks
    if changed timestamp     then alert
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check directory cron_rc.spool with path /var/spool/cron/crontabs
    group scheduled-tasks
    if changed timestamp      then alert
    if failed permission 1730 then unmonitor
    if failed uid root        then unmonitor
    if failed gid 102         then unmonitor

The above service is placed in system and scheduled-tasks groups for easier management.

rsyslogd

rsyslogd is a message logging service.

$ sudo cat /etc/monit/conf.d/rsyslogd.conf
check process rsyslog with pidfile /var/run/rsyslogd.pid
  group system
  group logging

  start program = "/usr/sbin/service rsyslog start"
  stop  program = "/usr/sbin/service rsyslog stop"

  if 3 restarts within 5 cycles then timeout

  depends on rsyslog_bin
  depends on rsyslog_rc
  depends on rsyslog_rc.d

  check file rsyslog_bin with path /usr/sbin/rsyslogd
    group logging
    if failed checksum       then unmonitor
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check file rsyslog_rc with path /etc/rsyslog.conf
    group logging
    if failed checksum       then alert
    if failed permission 644 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check directory rsyslog_rc.d with path /etc/rsyslog.d
    group logging
    if changed timestamp     then alert
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

The above service is placed in system and logging groups for easier management.

ntpd

Network Time Protocol daemon will be extended by the use of port monitoring.

$ sudo cat /etc/monit/conf.d/ntpd.conf
check process ntp with pidfile /var/run/ntpd.pid
  group system
  group time

  start program = "/usr/sbin/service ntp start"
  stop  program = "/usr/sbin/service ntp stop"

  if failed port 123 type udp then restart

  if 3 restarts within 5 cycles then timeout

  depends on ntp_bin
  depends on ntp_rc

  check file ntp_bin with path /usr/sbin/ntpd
    group time
    if failed checksum       then unmonitor
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check file ntp_rc with path /etc/ntp.conf
    group time
    if failed checksum       then alert
    if failed permission 644 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

The above service is placed in system and time groups for easier management.

OpenSSH

The OpenSSH service will be extended by the use of the match statement to test the content of the configuration file. I assume it is self-explanatory.

$ sudo cat /etc/monit/conf.d/openssh-server.conf
check process openssh with pidfile /var/run/sshd.pid
  group system
  group sshd

  start program = "/usr/sbin/service ssh start"
  stop  program = "/usr/sbin/service ssh stop"

  if failed port 22 with proto ssh then restart

  if 3 restarts within 5 cycles then timeout

  depends on openssh_bin
  depends on openssh_sftp_bin
  depends on openssh_rsa_key
  depends on openssh_dsa_key
  depends on openssh_rc

  check file openssh_bin with path /usr/sbin/sshd
    group sshd
    if failed checksum       then unmonitor
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check file openssh_sftp_bin with path /usr/lib/openssh/sftp-server
    group sshd
    if failed checksum       then unmonitor
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check file openssh_rsa_key with path /etc/ssh/ssh_host_rsa_key
    group sshd
    if failed checksum       then unmonitor
    if failed permission 600 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check file openssh_dsa_key with path /etc/ssh/ssh_host_dsa_key
    group sshd
    if failed checksum       then unmonitor
    if failed permission 600 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check file openssh_rc with path /etc/ssh/sshd_config
    group sshd
    if failed checksum       then alert
    if failed permission 644 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

    if not match "^PasswordAuthentication no" then alert
    if not match "^PubkeyAuthentication yes"  then alert
    if not match "^PermitRootLogin no"        then alert

The above service is placed in system and sshd groups for easier management.

Monitor Kolab services

MySQL

MySQL is an open-source database server used by a wide range of Kolab services.

A UID of 106 translates to the mysql user; a GID of 110 translates to the mysql group (matching the mysql_data check below).

This is the first time I have used the unixsocket statement here.

$ sudo cat /etc/monit/conf.d/mysql.conf
check process mysql with pidfile /var/run/mysqld/mysqld.pid
  group kolab
  group database

  start program = "/usr/sbin/service mysql start"
  stop  program = "/usr/sbin/service mysql stop"

  if failed port 3306 protocol mysql then restart
  if failed unixsocket /var/run/mysqld/mysqld.sock protocol mysql then restart

  if 3 restarts within 5 cycles then timeout

  depends on mysql_bin
  depends on mysql_rc
  depends on mysql_sys_maint
  depends on mysql_data

  check file mysql_bin with path /usr/sbin/mysqld
    group database
    if failed checksum       then unmonitor
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check file mysql_rc with path /etc/mysql/my.cnf
    group database
    if failed checksum       then alert
    if failed permission 644 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check file mysql_sys_maint with path /etc/mysql/debian.cnf
    group database
    if failed checksum       then unmonitor
    if failed permission 600 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check directory mysql_data with path /var/lib/mysql
    group database
    if failed permission 700 then unmonitor
    if failed uid 106        then unmonitor
    if failed gid 110        then unmonitor

The above service is placed in kolab and database groups for easier management.

Apache

Apache is an open-source HTTP server used to serve the user/admin web interface.

Please notice that I am checking the HTTPS port.

$ sudo cat /etc/monit/conf.d/apache.conf
check process apache with pidfile  /var/run/apache2.pid
  group kolab
  group web-server

  start program = "/usr/sbin/service apache2 start"
  stop  program = "/usr/sbin/service apache2 stop"

  if failed port 443 then restart

  if 3 restarts within 5 cycles then timeout

  depends on apache2_bin
  depends on apache2_rc
  depends on apache2_rc_mods
  depends on apache2_rc_sites

  check file apache2_bin with path /usr/sbin/apache2.prefork
    group web-server
    if failed checksum       then unmonitor
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check directory apache2_rc with path /etc/apache2
    group web-server
    if changed timestamp     then alert
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check directory apache2_rc_mods with path /etc/apache2/mods-enabled
    group web-server
    if changed timestamp     then alert
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check directory apache2_rc_sites with path /etc/apache2/sites-enabled
    group web-server
    if changed timestamp     then alert
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

The above service is placed in kolab and web-server groups for easier management.

Kolab daemon

This is the heart of the whole Kolab unified communication and collaboration system, as it is responsible for data synchronization between the different services.

A UID of 413 translates to the kolab-n user; a GID of 412 translates to the kolab group.

$ sudo cat /etc/monit/conf.d/kolab-server.conf
check process kolab-server with pidfile /var/run/kolabd/kolabd.pid
  group kolab
  group kolab-daemon

  start program = "/usr/sbin/service kolab-server start"
  stop  program = "/usr/sbin/service kolab-server stop"

  if 3 restarts within 5 cycles then timeout

  depends on kolab-daemon_bin
  depends on kolab-daemon_rc

  check file kolab-daemon_bin with path /usr/sbin/kolabd
    group kolab-daemon
    if failed checksum       then unmonitor
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check file kolab-daemon_rc with path /etc/kolab/kolab.conf
    group kolab-daemon
    if failed checksum       then alert
    if failed permission 640 then unmonitor
    if failed uid 413        then unmonitor
    if failed gid 412        then unmonitor

The above service is placed in kolab and kolab-daemon groups for easier management.

Kolab saslauthd

Kolab saslauthd is the SASL authentication daemon for multi-domain Kolab deployments.

$ sudo cat /etc/monit/conf.d/kolab-saslauthd.conf
check process kolab-saslauthd with pidfile /var/run/kolab-saslauthd/kolab-saslauthd.pid
  group kolab
  group kolab-saslauthd

  start program = "/usr/sbin/service kolab-saslauthd start"
  stop  program = "/usr/sbin/service kolab-saslauthd stop"

  if 3 restarts within 5 cycles then timeout

  depends on kolab-saslauthd_bin

  check file kolab-saslauthd_bin with path /usr/sbin/kolab-saslauthd
    group kolab-saslauthd
    if failed checksum       then unmonitor
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

The above service is placed in kolab and kolab-saslauthd groups for easier management.

It can be tempting to monitor the /var/run/saslauthd/mux socket, but just leave it alone for now.

Wallace

Wallace is a content-filtering daemon.

$ sudo cat /etc/monit/conf.d/wallace.conf
check process wallace with pidfile /var/run/wallaced/wallaced.pid
  group kolab
  group wallace

  start program = "/usr/sbin/service wallace start"
  stop  program = "/usr/sbin/service wallace stop"

  #if failed port 10026 then restart

  if 3 restarts within 5 cycles then timeout

  depends on wallace_bin

  check file wallace_bin with path /usr/sbin/wallaced
    group wallace
    if failed checksum       then unmonitor
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

The above service is placed in kolab and wallace groups for easier management.

ClamAV

The ClamAV daemon is an open-source, cross-platform antivirus software.

$ sudo cat /etc/monit/conf.d/clamav.conf
check process clamav with pidfile /var/run/clamav/clamd.pid
  group system
  group antivirus

  start program = "/usr/sbin/service clamav-daemon start"
  stop  program = "/usr/sbin/service clamav-daemon stop"

  if 3 restarts within 5 cycles then timeout

  #if failed unixsocket /var/run/clamav/clamd.ctl type udp then alert

  depends on clamav_bin
  depends on clamav_rc

  check file clamav_bin with path /usr/sbin/clamd
    group antivirus
    if failed checksum       then unmonitor
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check file clamav_rc with path /etc/clamav/clamd.conf
    group antivirus
    if failed permission 644 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

The above service is placed in system and antivirus groups for easier management.

Freshclam

Freshclam is a software used to periodically update ClamAV virus databases.

$ sudo cat /etc/monit/conf.d/freshclam.conf
check process freshclam with pidfile /var/run/clamav/freshclam.pid
  group system
  group antivirus-updater

  start program = "/usr/sbin/service clamav-freshclam start"
  stop  program = "/usr/sbin/service clamav-freshclam stop"

  if 3 restarts within 5 cycles then timeout

  depends on freshclam_bin
  depends on freshclam_rc

  check file freshclam_bin with path /usr/bin/freshclam
    group antivirus-updater
    if failed checksum       then unmonitor
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check file freshclam_rc with path /etc/clamav/freshclam.conf
    group antivirus-updater
    if failed permission 444 then unmonitor
    if failed uid 110        then unmonitor
    if failed gid 4          then unmonitor

The above service is placed in system and antivirus-updater groups for easier management.

amavisd-new

Amavis is a high-performance interface between the Postfix mail server and content-filtering services: SpamAssassin as a spam classifier and ClamAV as antivirus protection.

$ sudo cat /etc/monit/conf.d/amavisd-new.conf
check process amavisd-new with pidfile /var/run/amavis/amavisd.pid
  group kolab
  group content-filter

  start program = "/usr/sbin/service amavis start"
  stop  program = "/usr/sbin/service amavis stop"

  if 3 restarts within 5 cycles then timeout

  #if failed port 10024 type tcp then restart
  #if failed unixsocket /var/lib/amavis/amavisd.sock type udp then alert

  depends on amavisd-new_bin
  depends on amavisd-new_rc

  check file amavisd-new_bin with path /usr/sbin/amavisd-new
    group content-filter
    if failed checksum       then unmonitor
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check directory amavisd-new_rc with path /etc/amavis/
    group content-filter
    if changed timestamp     then alert
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

The above service is placed in kolab and content-filter groups for easier management.

The main Directory Server daemon

The main Directory Server daemon is a 389 LDAP Directory Server.

$ sudo cat /etc/monit/conf.d/dirsrv.conf
check process dirsrv with pidfile  /var/run/dirsrv/slapd-xmail.stats
  group kolab
  group dirsrv

  start program = "/usr/sbin/service dirsrv start"
  stop  program = "/usr/sbin/service dirsrv stop"

  if 3 restarts within 5 cycles then timeout

  if failed port 389 type tcp then restart

  depends on dirsrv_bin
  depends on dirsrv_rc

  check file dirsrv_bin with path /usr/sbin/ns-slapd
    group dirsrv
    if failed checksum       then unmonitor
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check directory dirsrv_rc with path /etc/dirsrv/
    group dirsrv
    if changed timestamp     then alert

The above service is placed in kolab and dirsrv groups for easier management.

SpamAssassin

SpamAssassin is a content filter used for spam filtering.

$ sudo cat /etc/monit/conf.d/spamd.conf
check process spamd with pidfile /var/run/spamd.pid
  group system
  group spamd

  start program = "/usr/sbin/service spamassassin start"
  stop  program = "/usr/sbin/service spamassassin stop"

  if 3 restarts within 5 cycles then timeout

  #if failed port 783 type tcp then restart

  depends on spamd_bin
  depends on spamd_rc

  check file spamd_bin with path /usr/sbin/spamd
    group spamd
    if failed checksum       then unmonitor
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check directory spamd_rc with path /etc/spamassassin/
    group spamd
    if changed timestamp     then alert
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

The above service is placed in system and spamd groups for easier management.

Cyrus IMAP/POP3 daemons

The cyrus-imapd daemon is responsible for IMAP/POP3 communication.

$ sudo cat /etc/monit/conf.d/cyrus-imapd.conf
check process cyrus-imapd with pidfile  /var/run/cyrus-master.pid
  group kolab
  group cyrus-imapd

  start program = "/usr/sbin/service cyrus-imapd start"
  stop  program = "/usr/sbin/service cyrus-imapd stop"

  if 3 restarts within 5 cycles then timeout

  if failed port 143 type tcp then restart
  if failed port 4190 type tcp then restart
  if failed port 993 type tcp then restart

  depends on cyrus-imapd_bin
  depends on cyrus-imapd_rc

  check file cyrus-imapd_bin with path /usr/lib/cyrus-imapd/cyrus-master
    group cyrus-imapd
    if failed checksum       then unmonitor
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check file cyrus-imapd_rc with path /etc/cyrus.conf
    group cyrus-imapd
    if failed checksum       then alert
    if failed permission 644 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

The above service is placed in kolab and cyrus-imapd groups for easier management.

Postfix

Postfix is an open-source mail transfer agent used to route and deliver electronic mail.

$ sudo cat /etc/monit/conf.d/postfix.conf
check process postfix with pidfile /var/spool/postfix/pid/master.pid
  group kolab
  group mta

  start program = "/usr/sbin/service postfix start"
  stop program = "/usr/sbin/service postfix stop"

  if 3 restarts within 5 cycles then timeout

  if failed port 25 type tcp then restart
  #if failed port 10025 type tcp then restart
  #if failed port 10027 type tcp then restart
  if failed port 587 type tcp then restart

  depends on postfix_bin
  depends on postfix_rc

  check file postfix_bin with path /usr/lib/postfix/master
    group mta
    if failed checksum       then unmonitor
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

  check directory postfix_rc with path /etc/postfix/
    group mta
    if changed timestamp     then alert
    if failed permission 755 then unmonitor
    if failed uid root       then unmonitor
    if failed gid root       then unmonitor

The above service is placed in kolab and mta groups for easier management.

Ending notes

This blog post is definitely too long already, so I will just mention that a similar configuration can be used to monitor other integrated solutions, like ISPConfig, or custom specialized setups.

In my opinion, Monit is a great utility which simplifies system and service monitoring. Additionally, it provides interesting proactive features, like service restart or arbitrary program execution on selected tests.

Everything is described in the manual page.

$ man monit

mollekopf
Fri, 2014-10-03 01:21

I have been working on better ways to write asynchronous code. In this post I’m going to analyze one of our current tools, KJob, in how it helps us writing asynchronous code and what is missing. I’m then going to present my prototype solution to address these problems.

KJob

In KDE we have the KJob class to wrap asynchronous operations. KJob gives us a framework for progress and error reporting, a uniform start method, and by subclassing it we can easily write our own reusable asynchronous operations. Such an asynchronous operation typically takes a couple of arguments and returns a result.

A KJob, in its simplest form, is the asynchronous equivalent of a function call:

int doSomething(int argument) {
    return getNumber(argument);
}
struct DoSomething : public KJob {
    DoSomething(int argument): mArgument(argument){}

    void start() {
        KJob *job = getNumberAsync(mArgument);
        connect(job, SIGNAL(result(KJob*)), this, SLOT(onJobDone(KJob*)));
        job->start();
    }

    int mResult;
    int mArgument;

private slots:
    void onJobDone(KJob *job) {
        mResult = job->result();
        emitResult();
    }
};

What you’ll notice immediately is that this involves a lot of boilerplate code. It also introduces a lot of complexity into a seemingly trivial task. This is partially because we have to create a class when we actually wanted a function, and partially because we have to use class members to replace stack variables, which are not available across the steps of an asynchronous operation.

So while KJob gives us a tool to wrap asynchronous operations so that they become reusable, it comes at the cost of quite a bit of boilerplate code. It also means that what can be written synchronously in a simple function requires a class when the same code is written asynchronously.

Inversion of Control

A typical operation is of course slightly more complex than doSomething, and often consists of several (asynchronous) operations itself.

What in imperative code looks like this:

int doSomethingComplex(int argument) {
    return operation2(operation1(argument));
}

…results in an asynchronous operation that is scattered over multiple result handlers somewhat like this:

...
void start() {
    KJob *job = operation1(mArgument);
    connect(job, SIGNAL(result(KJob*)), this, SLOT(onOperation1Done(KJob*)));
    job->start();
}

void onOperation1Done(KJob *operation1Job) {
    KJob *job = operation2(operation1Job->result());
    connect(job, SIGNAL(result(KJob*)), this, SLOT(onOperation2Done(KJob*)));
    job->start();
}

void onOperation2Done(KJob *operation2Job) {
    mResult = operation2Job->result();
    emitResult();
}
...

We are forced to split the code over several functions due to the inversion of control introduced by handler-based asynchronous programming. Unfortunately these additional functions (the handlers) that we are now forced to use do not help the program structure in any way. This manifests itself in the rather useless function names that typically follow a pattern such as on$OperationDone() or similar. Further, because the code is scattered over functions, values that are available on the stack in a synchronous function have to be stored explicitly as class members, so that they are available in the handler that needs them for a further step.

The traditional way to make code easy to comprehend is to split it up into functions that are then called by a higher-level function. This kind of function composition is no longer possible with asynchronous programs using our current tools. All we can do is chain handler after handler. Due to the lack of a higher-level function that composes the functionality, a reader is also forced to read every single line of the code, instead of simply skimming the function names and only drilling deeper when more detailed information about the inner workings is required.

Since we are no longer able to structure the code in a useful way using functions, only classes, and in our case KJobs, are left to structure the code. However, creating subjobs is a lot of work when all you need is a function, and while it helps the structure, it scatters the code even more, making it potentially harder to read and understand. Due to this we also often end up with large and complex job classes.

Last but not least, we lose all the usual control structures to the inversion of control. If you write asynchronous code you don’t have the ifs, fors and whiles available that are fundamental to writing code. Well, obviously they are still there, but you can’t use them as usual because you can’t plug a complete asynchronous operation inside an if{}-block. The best you can do is initiate the operation inside the imperative control structures and deal with the results later on in handlers. Because we need control structures to build useful programs, they are usually emulated by building complex state machines where each function depends on the current class state. A typical (anti)pattern of that kind is a for loop creating jobs, with a decreasing counter in the handler to check whether all jobs have been executed. These state machines greatly increase the complexity of the code, are highly error prone, and make larger classes incomprehensible without drawing complex state diagrams (or simply staring at the screen long enough while tearing your hair out).
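To make that counter anti-pattern concrete, here is a minimal sketch stripped of Qt and KJob. startAsync and MultiJob are hypothetical stand-ins, and the “asynchronous” call completes immediately for simplicity:

```cpp
#include <cassert>
#include <functional>
#include <vector>

// Hypothetical stand-in for an asynchronous operation; for this
// illustration the handler is invoked immediately instead of later
// from an eventloop.
void startAsync(int input, std::function<void(int)> handler) {
    handler(input * 2);
}

// The counter anti-pattern: fan out N operations, keep a decreasing
// counter as class state, and report "done" when it reaches zero.
struct MultiJob {
    std::vector<int> mResults;      // class members replace stack variables
    int mPending = 0;               // the hand-rolled state machine
    std::function<void()> mDone;

    void start(const std::vector<int> &inputs, std::function<void()> done) {
        mDone = done;
        mPending = static_cast<int>(inputs.size());
        for (int input : inputs) {
            startAsync(input, [this](int result) {
                mResults.push_back(result);
                if (--mPending == 0) {  // have all jobs been executed?
                    mDone();
                }
            });
        }
    }
};
```

Even in this tiny example, the loop body, the completion condition and the result storage are scattered across class state instead of living on the stack of one function.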

Oh, and before I forget, of course we also no longer get any useful backtraces from gdb as pretty much every backtrace comes straight from the eventloop and we have no clue what was happening before.

As a summary, inversion of control causes:

  • code is scattered over functions that do not help the structure
  • composing functions is no longer possible; what would normally be a function has to become a class
  • control structures are unusable; a state machine is required to emulate them
  • backtraces become mostly useless

As an analogy, your typical asynchronous class is the functional equivalent of a single synchronous function (often over 1000 lines of code!) that uses gotos and some local variables to build control structures. I think it’s obvious that this is a pretty bad way to write code, to say the least.

JobComposer

Fortunately, C++11 brought us a new tool: lambda functions.
Lambda functions allow us to write functions inline with minimal syntactic overhead.
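As a minimal illustration of the property we care about (the makeAdder helper is made up for this example): a lambda captures local state and can be stored or passed around like any other value.

```cpp
#include <functional>

// A C++11 lambda capturing local state ('base'), returned as a
// std::function so a caller can invoke it later -- exactly the
// property that continuation-based APIs rely on.
std::function<int(int)> makeAdder(int base) {
    return [base](int x) { return base + x; };
}
```

Because a lambda travels together with its captured state, the class members that plain KJob subclasses need for passing values between steps become unnecessary.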

Armed with this I set out to find a better way to write asynchronous code.

A first obvious solution is to simply write the result handler as a lambda function, which would allow us to write code like this:

make_async(operation1(), [] (KJob *job) {
    //Do something after operation1()
    make_async(operation2(job->result()), [] (KJob *job) {
        //Do something after operation2()
        ...
    });
});

It’s a simple and concise solution; however, you can’t really build reusable building blocks (like functions) with it. You get one nested tree of lambdas that depend on each other by accessing the results of the previous jobs. What makes this solution non-composable is that the lambda function we pass to make_async starts the asynchronous task, but also extracts results from the previous job. Therefore you couldn’t, for instance, return an async task containing operation2 from a function (because in the same line we extract the result of the previous job).

What we require instead is a way of chaining asynchronous operations together, while keeping the glue code separated from the reusable bits.

JobComposer is my proof of concept to help with this:

class JobComposer : public KJob
{
    Q_OBJECT
public:
    //KJob start function
    void start();

    //This adds a new continuation to the queue
    void add(const std::function<void(JobComposer&, KJob*)> &jobContinuation);

    //This starts the job, and connects to the result signal. Call from continuation.
    void run(KJob*);

    //This starts the job, and connects to the result signal. Call from continuation.
    //Additionally an error-case continuation can be provided that is called in case of error, and that can be used to determine whether further continuations should be executed or not.
    void run(KJob*, const std::function<bool(JobComposer&, KJob*)> &errorHandler);

    //...
};

The basic idea is to wrap each step using a lambda-function to issue the asynchronous operation. Each such continuation (the lambda function) receives a pointer to the previous job to extract results.

Here’s an example how this could be used:

auto task = new JobComposer;
task->add([](JobComposer &t, KJob*){
    KJob *op1Job = operation1();
    t.run(op1Job, [](JobComposer &t, KJob *job) {
        kWarning() << "An error occurred: " << job->errorString();
    });
});
task->add([](JobComposer &t, KJob *job){
    KJob *op2Job = operation2(static_cast<Operation1*>(job)->result());
    t.run(op2Job, [](JobComposer &t, KJob *job) {
        kWarning() << "An error occurred: " << job->errorString();
    });
});
task->add([](JobComposer &t, KJob *job){
    kDebug() << "Result: " << static_cast<Operation2*>(job)->result();
});
task->start();

What you see here is the equivalent of:

int tmp = operation1();
int res = operation2(tmp);
kDebug() << res;

There are several important advantages of this approach over traditional asynchronous code written with plain KJob:

  • The code above, which would normally be spread over several functions, can be written within a single function.
  • Since we can write all code within a single function we can compose functions again. The JobComposer above could be returned from another function and integrated into another JobComposer.
  • Values that are required for a certain step can either be extracted from the previous job, or simply captured in the lambda functions (no more passing of values as members).
  • You only have to read the start() function of a job that is written this way to get an idea what is going on. Not the complete class.
  • A “backtrace” functionality could be built into JobComposer that would allow retrieving useful information about the state of the program even though we’re in the eventloop.

This is of course only a rough prototype, and I’m sure we can craft something better, but at least in my experiments it proved to work very nicely.
What I think would also be useful are a couple of helper jobs that replace the missing control structures, such as a ForeachJob which triggers a continuation for each result, or a job to execute tasks in parallel (instead of serially, as JobComposer does).
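To illustrate the difference between serial and parallel composition, here is a Qt-free sketch with plain callables. runSerial and runParallel are invented names, and a real helper job would of course run the tasks asynchronously:

```cpp
#include <functional>
#include <vector>

using Task = std::function<int(int)>;

// Serial composition, as JobComposer does it: each task consumes the
// result of the previous one.
int runSerial(const std::vector<Task> &tasks, int input) {
    for (const Task &task : tasks) {
        input = task(input);
    }
    return input;
}

// Parallel-style composition: every task sees the same input and the
// results are collected independently. A real helper job would
// dispatch the tasks concurrently and fire its result signal once
// all of them have completed.
std::vector<int> runParallel(const std::vector<Task> &tasks, int input) {
    std::vector<int> results;
    for (const Task &task : tasks) {
        results.push_back(task(input));
    }
    return results;
}
```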

As a little showcase I rewrote a job of the imap resource.
You’ll see a bit of function composition, a ParallelCompositeJob that executes jobs in parallel, and you’ll notice that only relevant functions are left and all class members are gone. I find the result a lot better than the original, and the refactoring was trivial and quick.

I’m quite certain that if we build these tools, we can vastly improve our asynchronous code making it easier to write, read, and debug.
And I think it’s past time that we build proper tools.


roundcube's picture
Mon, 2014-09-29 02:00

We’re proud to announce the next service release for the stable version 1.0.
It contains some bug fixes and improvements we considered important for the
long-term support branch of Roundcube.

It’s considered stable and we recommend updating all production installations
of Roundcube to this version. Download it from roundcube.net/download,
see the full changelog here.

Please do backup before updating!


tobru's picture
Sat, 2014-09-27 00:00



“CASino is an easy to use Single Sign On (SSO) web application written in Ruby.”

It supports different authentication backends, one of them being LDAP. It works very well with the
LDAP backend of Kolab. Just put the following configuration snippet into
your config/cas.yml:

production:
  authenticators:
    kolab:
      authenticator: 'LDAP'
      options:
        host: 'localhost'
        port: 389
        base: 'ou=People,dc=mydomain,dc=tld'
        username_attribute: 'uid'
        admin_user: 'uid=kolab-service,ou=Special Users,dc=mydomain,dc=tld'
        admin_password: 'mykolabservicepassword'
        extra_attributes:
          email: 'mail'
          fullname: 'uid'

You are now able to sign in using your Kolab uid and manage SSO users with the nice
Kolab Webadmin LDAP frontend.

CASino with Kolab LDAP backend was originally published by Tobias Brunner at tobrunet.ch Techblog on September 27, 2014.


Timotheus Pokorra's picture
Wed, 2014-09-17 12:33

This describes how to install a docker image of Kolab.

Please note: this is not meant to be for production use. The main purpose is to provide an easy way for demonstration of features and for product validation.

This installation has not been tested a lot, and could still use some fine tuning. This is just a demonstration of what could be done with Docker for Kolab.

Preparing for Docker
I am using a Jiffybox provided by DomainFactory for downloading a Docker container for Kolab 3.3 running on CentOS 6.

I have installed Ubuntu 12.04 LTS on a Jiffybox.
I am therefore following Docker Installation instructions for Ubuntu for the installation instructions:

Install a kernel that is required by Docker:

sudo apt-get update
sudo apt-get install linux-image-generic-lts-raring linux-headers-generic-lts-raring

After that, in the admin website of JiffyBox, select the custom kernel Bootmanager 64 Bit (pvgrub64); see also the German JiffyBox FAQ. Then restart your JiffyBox.

After the restart, uname -a should show something like:

Linux j89610.servers.jiffybox.net 3.8.0-37-generic #53~precise1-Ubuntu SMP Wed Feb 19 21:37:54 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

Now install docker:

sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9
 
sudo sh -c "echo deb http://get.docker.io/ubuntu docker main > /etc/apt/sources.list.d/docker.list"
 
sudo apt-get update
sudo apt-get install lxc-docker

Install container
The image for the container is available here:
https://index.docker.io/u/tpokorra/kolab33_centos6/
If you want to know how this image was created, read my other blog post http://www.pokorra.de/2014/09/building-a-docker-container-for-kolab-3-3-on-jiffybox/.

To install this image, you need to type in this command:

docker pull  tpokorra/kolab33_centos6

You can create a container from this image and run it:

MYAPP=$(sudo docker run --name centos6_kolab33 -P -h kolab33.test.example.org -d -t -i tpokorra/kolab33_centos6)

You can see all your containers:

docker ps -a

You now have to attach to the container, and inside the container start the services:

docker attach $MYAPP
  /root/start.sh

Starting the services automatically at container startup should be possible, but I did not get it to work with CMD or ENTRYPOINT.

To stop the container, type exit on the container’s console, or run from outside:

docker stop $MYAPP

To delete the container:

docker rm $MYAPP

You can reach the Kolab Webadmin on this URL:
https://localhost/kolab-webadmin. Login with user: cn=Directory Manager, password: test

The Webmail interface is available here:
https://localhost/roundcubemail.


Timotheus Pokorra's picture
Wed, 2014-09-17 12:31

This article is an update of the previous post that built a Docker container for Kolab 3.1: Building a Docker container for Kolab on Jiffybox (March 2014)

Preparation
I am using a Jiffybox provided by DomainFactory for building a Docker container for Kolab 3.3 running on CentOS 6.

I have installed Ubuntu 12.04 LTS on a Jiffybox.
I am therefore following Docker Installation instructions for Ubuntu for the installation instructions:

Install a kernel that is required by Docker:

sudo apt-get update
sudo apt-get install linux-image-generic-lts-raring linux-headers-generic-lts-raring

After that, in the admin website of JiffyBox, select the custom kernel Bootmanager 64 Bit (pvgrub64); see also the German JiffyBox FAQ. Then restart your JiffyBox.

After the restart, uname -a should show something like:

Linux j89610.servers.jiffybox.net 3.8.0-37-generic #53~precise1-Ubuntu SMP Wed Feb 19 21:37:54 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

Now install docker:

sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9
 
sudo sh -c "echo deb http://get.docker.io/ubuntu docker main > /etc/apt/sources.list.d/docker.list"
 
sudo apt-get update
sudo apt-get install lxc-docker

Create a Docker image
I realised that if I installed Kolab in one go, the image would become too big to upload to https://index.docker.io.
Therefore I have created a Dockerfile with several steps for downloading and installing the various packages. For a detailed description of a Dockerfile, see the Dockerfile Reference.

My Dockerfile is available on Github: https://github.com/TBits/KolabScripts/blob/Kolab3.3/kolab/Dockerfile. You should store it with filename Dockerfile in your current directory.

This command builds a container from the instructions in the Dockerfile in the current directory. When the instructions have completed successfully, an image with the name tpokorra/kolab33_centos6 is created and the intermediate container is deleted:

sudo docker build -t tpokorra/kolab33_centos6 .

You can see all your local images with this command:

sudo docker images

To finish the container, we need to run setup-kolab, this time we define a hostname as a parameter:

MYAPP=$(sudo docker run --name centos6_kolab33  --privileged=true -h kolab33.test.example.org -d -t -i tpokorra/kolab33_centos6 /bin/bash)
docker attach $MYAPP
# run inside the container:
  echo `hostname -f` > /proc/sys/kernel/hostname
  echo 2 | setup-kolab --default --timezone=Europe/Brussels --directory-manager-pwd=test
  ./initHttpTunnel.sh
  ./initSSL.sh test.example.org
  /root/stop.sh
  exit

Typing exit inside the container will stop the container.

Now you commit this last manual change:

docker commit $MYAPP tpokorra/kolab33_centos6
# delete the container
docker rm $MYAPP

You can push this image to https://index.docker.io:

#create a new account, or login with existing account:
sudo docker login
sudo docker push tpokorra/kolab33_centos6

You can now see the image available here: https://index.docker.io/u/tpokorra/kolab33_centos6/

See this post Installing Demo Version of Kolab 3.3 with Docker about how to install this image on the same or a different machine, for demo and validation purposes.

Current status: there are still some things that do not work well, and I have not tested everything.
But this should be a good starting point for other people as well, to help build a good demo installation of Kolab on Docker.


roundcube's picture
Fri, 2014-09-12 12:40

Roundcube indeed became a huge success story with tens of thousands of installations worldwide. Something I never expected back in 2005 when I started the project as a fresh alternative to the well-established but already aged free webmail packages like SquirrelMail or Horde IMP. And now, some 9 years later, we find ourselves in a similar position to the ones we previously wanted to replace. Although we managed to adapt the Roundcube codebase to ongoing technological innovations, the core architecture is still ruled by the concepts that seemed right back when we started. And we’re talking about building a web app for IE 5 and Netscape 6, when browsers weren’t as capable and performant as they are today, when the term AJAX had not yet been coined, and when we didn’t have nifty libraries such as jQuery or Backbone.js at hand.

It happens more and more often that, when discussing the implementation of new features for Roundcube, we find ourselves saying “Oh man, that’s going to be an expensive endeavor to squeeze this into our current architecture! If we could just…”. This doesn’t mean that the entire codebase is crap, not at all! But sometimes you just silently wish you could give the core a fresh touch that respects the increased requirements and expectations. And that’s the challenge of every software product that has been around for a while and is still intensively developed.

When looking around, I see inspiring new webmail projects slowly emerging which don’t carry the legacy of a software product designed almost a decade ago. I’m truly happy about this development and I appreciate the efforts of honest coders to create the next generation of free webmail software. On the other hand it also makes me a bit jealous to see others starting from scratch and building fast and responsive webmail clients like Mailpile or RainLoop which make Roundcube look like the old dinosaur. Although they’re not yet as feature rich as Roundcube, the core concepts are very convincing and perfectly fit the technological environment we find ourselves in today.

So what if we could start over and build Roundcube from scratch?

Here are some ideas for how I could imagine building a brand new webmail app with today’s tools and 9 years of experience in developing web(mail) applications:

  • Do more stuff client side: the entire rendering of the UI should be done in Javascript and no more PHP composing HTML pages loaded in iframes.
  • The server should only become a thin wrapper for talking to backend services like IMAP, LDAP, etc.
  • Maybe even use a common API for client-server communication like the one suggested by Inbox.
  • Design a proper data model which is used by both the server and the client.
  • Separate the data model from the view and use Backbone.js for rendering.
  • Widget-based UI composition using simple HTML structures with small template snippets.
  • Keep mobile, touch and hi-res devices in mind when building the UI.
  • Do skinning solely through CSS and maybe allow single template snippets to be overridden.
  • More abstraction for storage and caching layers to allow alternative backends like MongoDB or Redis.
  • Separate user auth from IMAP. This would allow other sources or accounts to be pulled into one session.
  • Use more 3rd party libraries like require.js, moment.js, jQuery or PHPMailer, Monolog or Doctrine ORM.
  • Contribute to the 3rd party modules rather than re-inventing the wheel.

While this may now sound like buzzword bingo from a web developers’ conference (and the list is certainly not complete), I do believe in these very useful and well-developed modules that are out there at our service. This is what free software development is all about: share, use and contribute.

But finally, not every part of the current Roundcube codebase is badly outdated and needs to be replaced. I’d definitely keep our current IMAP, LDAP and HTML sanitizing libraries as well as the plugin system, which turned out to be a stable and important component and a major contributor to Roundcube’s success.

And what keeps us from re-building Roundcube from the ground up? Primarily time and the fear of jeopardizing the Roundcube microcosmos with a somewhat incompatible new version that would require every single plugin to be re-written.

But give us funding for 6 months of intense work and let’s see what happens…


Thu, 2014-09-11 15:06

Some time ago I blogged about fighting spam with amavis for the Kolab community. Now the story continues with the Roundcube integration with amavis.

As mentioned earlier, SpamAssassin is able to store recipient-based preferences in a MySQL table with some settings in its local.cf (see the SpamAssassin wiki):

# Spamassassin for Roundcubemail
# http://www.tehinterweb.co.uk/roundcube/#pisauserprefs
user_scores_dsn DBI:mysql:ROUNDCUBEMAILDBNAME:localhost:3306
user_scores_sql_password ROUNCUBEMAILPASSWORD
user_scores_sql_username ROUNDCUBEMAILDBUSERNAME
user_scores_sql_custom_query SELECT preference, value FROM _TABLE_ WHERE username = _USERNAME_ OR username = '$GLOBAL' OR username = CONCAT('%',_DOMAIN_) ORDER BY username ASC

However, accessing this from amavis is a real problem for many users. Amavis has its own user-based configuration policies, but email plugins such as the Roundcube plugin sauserprefs often only talk to SpamAssassin and not to amavis. Originally, SA was only called once per message by amavis, and therefore recipient-based preferences were not possible at all. This has changed: now you can use the options @sa_userconf_maps and @sa_username_maps to perform such lookups. Unfortunately these options are still poorly documented. We use them anyway.

The values in @sa_userconf_maps define where amavis has to look for the user preferences. I use mySQL lookups for all recipient addresses.

# use userpref SQL connection from SA local.cf for ALL recipients
@sa_userconf_maps = ({
  '.' => 'sql:'
});

The variable @sa_username_maps tells amavis what to pass to SpamAssassin as _USERNAME_ (see above) for the MySQL lookup. By default the amavis system user is used. In my setup with Kolab and sauserprefs I use a regexp that matches the recipient email address:

# use recipient email address as _USERNAME_ in userpref mySQL table (_TABLE_)
@sa_username_maps = new_RE (
  [ qr'^([^@]+@.*)'i => '${1}' ]
);
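To see what the rule does, here is the same pattern approximated in C++ for illustration (usernameForLookup is a made-up helper; amavis itself evaluates the Perl regexp): a full email address is captured whole and passed on as _USERNAME_, while anything without an @ does not match.

```cpp
#include <regex>
#include <string>

// Approximation of the amavis rule above: match a full email address
// case-insensitively and capture it whole, as ${1} does in Perl.
std::string usernameForLookup(const std::string &recipient) {
    static const std::regex emailRe("^([^@]+@.*)", std::regex::icase);
    std::smatch match;
    if (std::regex_search(recipient, match, emailRe)) {
        return match[1];   // the whole address becomes _USERNAME_
    }
    return "";             // no match: amavis falls back to its defaults
}
```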

With these additional bits sauserprefs should work. However, it seems to me that the string “*** Spam ***”, which should be added to the subject, does not get added (maybe it does in the most recent version). The thresholds do work, though, but better check it carefully.

Did you succeed? Comments are appreciated!

Filed under: Technik Tagged: amavis, Kolab, Roundcubemail, Spamassassin


Andreas Cordes's picture
Thu, 2014-09-04 21:41

Hi,

now I have finished compiling all the +Kolab.org packages for the +Raspberry Pi. Just a short note that you can update the groupware on your Pi to the most recent version of +Kolab.org.

Greetz


Thu, 2014-08-21 14:30

Just in time for the official Kolab 3.3 release, our Gentoo packages for Kolab 3.2 became stable and ready to use. This will clear the way for the upcoming release of Kolab 3.3 for Gentoo. Although this release won't bring any major changes, it prepares the ground for upcoming developments and new features in Kolab 3.3. Further, with Kolab 3.2 we introduced an upgrade path between Kolab releases for Gentoo, and we will try our best to keep updates as consistent and comfortable as possible.
Read more ...


grote's picture
Wed, 2014-08-20 12:19

After extensive beta testing, we are very proud to announce the immediate availability of Kolab.org 3.3 today. This release packs more new features than any release before it.


In a trilogy of articles, we already presented the most exciting new features. All of these would be more than enough for one release, but we still have some new functionality up our sleeves that we have not talked about yet.


These features include cross-folder search and a birthday calendar that automatically shows the birthdays of all the contacts from your address book in one neat calendar. There are now dedicated out-of-office settings where you can specify when you are on vacation and which message should be sent under which circumstances. Never again forget to enable or disable your vacation response. Also new in the settings is the Delegation feature, which lets you delegate parts of your account to somebody else. This is especially useful for people with a huge workload who need help with coordinating appointments, for example.



Thanks to extensive testing, issue reporting and fixing by the community, this release is both on time and on par. We would like to especially thank the following people for their outstanding contributions: Daniel Hoffend, Aeneas Jaißle and Timotheus Pokorra. If you would like to participate as well, there are plenty of possibilities for you.


We will highlight most of the new features below. If you are interested in more details, you are invited to check out the articles from our earlier feature trilogy:


Before you read on for the new features, you might already want to head over to the installation guide, change your repository locations and run an upgrade following the upgrade notes from our documentation.


Email Tags


It is now possible to add tags to email messages. They are shown prominently in the message list before the subject of your mails. In the bottom left corner, there is now a tag cloud where you can select tags, so only emails with those tags are shown to you.


Of course you can also assign colors to your tags and add new tags easily. Our new cross-folder-search feature also works for tags and allows you to show all emails from all folders that have a certain tag.


In the future, we plan on using the new tagging system for all Kolab modules, so the same tags can be used for emails, contacts, events, tasks, etc.



Notes


With Kolab.org 3.3 you will be able to work with notes right in the webclient. As with all things Kolab, you can also have multiple notebooks and share them with people. The notes are automatically synchronized with the Kolab Desktop Client and you will also be able to synchronize them to your mobile devices via the ActiveSync protocol.


Notes can be tagged just like tasks and they can have rich-text content including graphics. They can be printed right from the webclient and also sent via email. In the email view, you can add notes to emails. They are listed at the top of the email preview.


Resource Management


Resources are things like cars, projectors or meeting rooms that can only be used by one group of people at the same time. Kolab.org 3.3 makes it easier to manage your resources.


We added a dedicated resource selection dialog which allows you to search and browse through all the available resources. It displays additional information and attributes for the individual resources as well as an availability calendar based on the free-busy data published for the given resource.


Multiple resources of the same kind can be organized in resource collections (e.g. company cars). If someone wants to book "a car", she books the resource collection for her appointment.



Folder Management


Internally, all address books, calendars, task lists, etc. are folders. So far, we did not hide that fact well from users. Kolab.org 3.3 introduces a new folder navigation view that allows you to search and subscribe to shared calendars, address books, task lists etc. directly from within the respective view.


Searches are also expanded to LDAP, so that search results show folders grouped by matching users. When selecting a "folder" from the search results, your selection can be temporary and affect only the current session, or permanent if you always want to see that calendar, for example.



Calendar Quickview


The calendar got a quickview mode which allows you to open an undistorted view on a single calendar without unchecking all other calendars from the main view.


When opening the quickview for the new "virtual user calendar", the calendar view displays events from all calendars of that user you have access to. Additionally, it also shows time blocks from anonymized free/busy information where you only know that the user is unavailable during these times.


 


Accessibility Improvements


The entire web client of Kolab.org 3.3 received plenty of improvements that will benefit people who require assistive technologies. The user interface can now be fully operated with the keyboard and has support for screen readers as well as voice output, as suggested by the WCAG 2.0 guidelines and WAI-ARIA standards.


In the email view, for example, you are now able to tab through all the button elements and operate the message list and the popup menus. Once the message list gains focus, the arrow keys move the cursor while <space> selects a row and <enter> opens the message. A descriptive block explaining the list navigation was added to the page so screen readers can pick it up.


Improvements in Kolab Webadmin


We enhanced the Kolab Webadmin to make it even easier to manage the most frequent administrative tasks from a pretty web interface without the need for the command line. You will of course always be able to use the command line if you prefer.


When creating a new Shared Folder, you can directly edit its ACLs giving read rights to certain users or groups for example. Creation is now also done with sane defaults, so the folder can be used immediately.


Organizational Units from LDAP can now be managed right from the Kolab Web Admin as well. Editing their LDAP access rights (ACIs) directly is also possible.


tobru's picture
Sun, 2014-08-10 00:00



Kolab has released its first beta of the upcoming version 3.3.
To test it on Debian I’ve created a Vagrantfile and a small Puppet module which provisions Kolab into a Debian VM. It’s available
on Github.

How to use it

Make sure you have the latest Vagrant version installed. Please see the official documentation.
Clone the git repository with git clone https://github.com/tobru/kolab3-vagrant.git and change into this directory.
Then run vagrant up and wait a while until Vagrant and Puppet have done their jobs. When it’s finished you’re good to enter the VM with vagrant ssh.
To have a working Kolab installation, setup-kolab needs to be called as root (hint: sudo su) once. It configures the Kolab components.
The Kolab Web Admin Panel is now reachable under http://localhost:8080/kolab-webadmin and Roundcube under
http://localhost:8080/roundcubemail.
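Condensed, the workflow above looks like this (a transcript sketch; it assumes git, Vagrant, and VirtualBox are already installed):

```shell
# Fetch the Vagrant setup and bring up the VM (Puppet provisions Kolab)
git clone https://github.com/tobru/kolab3-vagrant.git
cd kolab3-vagrant
vagrant up

# Enter the VM and configure the Kolab components (run once, as root)
vagrant ssh
#   inside the VM:
#     sudo su
#     setup-kolab

# Afterwards, on the host:
#   Web Admin Panel: http://localhost:8080/kolab-webadmin
#   Roundcube:       http://localhost:8080/roundcubemail
```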

For more information about how Vagrant works, have a look at the official Getting Started guide.

Choose the Kolab version

By default Kolab will be installed from the development repository where all the latest (and maybe broken) packages are located. To install
a different version, just change the version parameter in manifests/default.pp to the desired version.
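For example (the manifest content below is only a stand-in to illustrate such a version parameter, not the module's actual contents; edit your real clone of manifests/default.pp instead):

```shell
# Illustrative sketch: switch the version parameter in manifests/default.pp
# from the development default to a fixed release.
mkdir -p manifests
printf "class { 'kolab': version => 'development' }\n" > manifests/default.pp
sed -i "s/version => '[^']*'/version => '3.3'/" manifests/default.pp
cat manifests/default.pp
```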

Some notes

  • The VM hostname is server.kolab3.dev and is based on chef/debian-7.6 (at this time)
  • Port 8080 on localhost is mapped to port 80 in the VM. No other ports are mapped.
  • MySQL has no password, so while running setup-kolab choose 2: New MySQL server (needs to be initialized) when asked.

PS: Pull requests are always welcome!

Kolab 3 Vagrant box with Puppet provisioning was originally published by Tobias Brunner at tobrunet.ch Techblog on August 10, 2014.


Andreas Cordes
Fri, 2014-08-08 01:06

Hello,

I just finished compiling all modules and performed an upgrade to 3.3 beta1 on my Raspberry Pi.

For the impatient:

deb http://kolab.zion-control.org /

Changes I applied to my installation:

/etc/kolab/kolab.conf

[wallace]
modules = resources, invitationpolicy, footer 
kolab_invitation_policy = ACT_ACCEPT_IF_NO_CONFLICT:zion-control.org, ACT_MANUAL

/etc/kolab-freebusy/config.ini
[httpauth]
type = ldap
host = ldap://localhost:389
bind_dn = "uid=kolab-service,ou=Special Users,dc=zion-control,dc=org"
bind_pw = "IwontTellYou"


[directory "local-cache"]
type = static
fbsource = file:/var/cache/kolab-freebusy/%s.ifb
expires = 10m
[directory "kolab-resources"]
type = ldap
host = ldap://localhost:389
bind_dn = "uid=kolab-service,ou=Special Users,dc=zion-control,dc=org"
bind_pw = "IwontTellYou"
base_dn = "ou=Resources,dc=zion-control,dc=org"
filter = "(&(objectClass=kolabsharedfolder)(mail=%s))"
attributes = mail, kolabtargetfolder
fbsource = "imap://cyrus-admin:IwontTellYou@localhost/%kolabtargetfolder?acl=lrs"
cacheto = /var/cache/kolab-freebusy/%mail.ifb
expires = 10m
loglevel = 100  ; Debug

So far, ActiveSync is still working :-) and there are no major issues besides the ones already known.
More in the next days once I have performed some tests.
Greetz Andreas

Andreas Cordes
Wed, 2014-08-06 14:55

Hi there,

+Kolab just released the 3.3 beta1 version.
My +Raspberry Pi is currently downloading and compiling all the packages.
Because of all the dependencies I solved during the first compile phase, I expect far fewer errors during this installation.
Hope to tell you more tomorrow or even on Friday.
Greets Andreas

grote
Wed, 2014-08-06 11:15

After we have revealed the new features for Kolab.org 3.3 over the past few weeks, you might have already expected that a beta version is not far away. Today is the day where you can finally get your hands on the brand new 3.3 packages and try out all those new features for yourself.


As our project lead explained on the development list, the final release is aimed for August 20, so you have two weeks to test this beta version thoroughly and help make sure that no unresolved issues make their way into the final. We would also appreciate it if people with test environments could test upgrades, especially with existing IMAP spools.


So please warm up your virtual machines and head over to our installation guide. We have provided initial packages for CentOS and Debian. If you are using another distribution, please help to get the packages for those ready in time.


When following the installation guide for CentOS, in order to get the beta packages, please change the repository configuration to:



# cd /etc/yum.repos.d/
# wget http://obs.kolabsys.com:82/Kolab:/Development/CentOS_6/Kolab:Development.repo


When following the installation guide for Debian Wheezy, please change the repository location to



http://obs.kolabsys.com:82/Kolab:/Development/Debian_7.0/


and leave out the updates repository. Currently, there are three known issues for Kolab.org 3.3 beta1 on Debian, but workarounds exist in the tickets. Please consider helping us resolve those tickets. Here's a helpful guide on how to do this with our Open Build System.
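For reference, the resulting APT source entry would look something like this (the trailing `./` component is an assumption about the flat repository layout; adjust it to how your sources.list is organized, then run apt-get update):

```
deb http://obs.kolabsys.com:82/Kolab:/Development/Debian_7.0/ ./
```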


If you find anything, come talk to us in IRC or the development mailing list. When you are sure you found an issue, please report it directly to our issue tracker or just fix it yourself ;)


grote
Tue, 2014-07-22 11:35

With development still in full swing, we are getting closer to a feature freeze and a first beta version of Kolab.org 3.3. In the last weeks, we already shared some details about the new features that will be part of this upcoming Kolab.org version. There will be improved Folder Management and a Calendar Quickview as well as Notes and Accessibility improvements. Now it is time to present two more exciting features that will be part of Kolab.org 3.3.


Please keep in mind that work is not yet done and that this is only a sneak preview. We hope to have something packaged and ready for you to try out soon!


Email Tags


Tags are little labels that you can attach to objects to categorize them or to find them quicker. We introduced tags with our task module and are now expanding the concept to emails.


You can add tags to email messages and remove them again. The tags can have different colors and are shown prominently in the message list. In the bottom left corner, below the folder list, there is now a tag cloud where you can select tags, so only emails with those tags are shown.


For those interested in the technical details of the tag implementation, there is a discussion on our format mailing list. In short: a tag is a Kolab Configuration Object of the type 'relation' that stores all tag information and the relation to certain messages. We also considered using IMAP flags, but decided against it for now.


In the future, the new 'kolab_tags' plugin might provide tag handling capabilities to all other plugins that can make use of tags such as calendar, tasklist, notes, etc. The format was already designed in a way that allows for storing relations between any object type.


Resource Management


With Kolab you can also manage resources like cars, presenters or meeting rooms in your organization. People can book resources themselves if they are available. This ensures that no two groups end up using the same meeting room at the same time.


To make this easier, we added a dedicated resource selection dialog as you can see on the right. The new dialog allows you to search and browse through the available resources. It displays additional information and attributes for the individual resources as well as an availability calendar based on the free-busy data published for the given resource.


The automated processing of iTip messages (invitations) to resources was refactored and now supports fully automated resource booking and updating through the iTip protocol. It will also be possible to define a booking policy for resources that, for example, automatically accepts or refuses bookings based on certain criteria.


Multiple resources of the same kind can be organized in resource collections (e.g. company cars). If someone wants to book "a car", she books the resource collection for her appointment. The Wallace module then allocates a concrete resource from that collection and delegates the booking to the next available resource. The delegation is reflected in the iTip replies and the updated user calendar.




There will be many more small improvements to the web client and under the hood, but we will leave it to you to discover those on your own. This means that this will be the last feature presentation. We hope you enjoyed it!