Planet

vanmeeuwen
Fri, 2015-02-27 16:22

What is the most original birthday present one can give their spouse? Well, how about a release of your favorite collaboration software suite!

I'm pleased to announce that Kolab 3.4 is available via our OBS repositories immediately!

Please see our Installation Guide for instructions on getting you some Kolab on your favorite platform, and if you already have Kolab running, check out the Upgrade Note from Kolab 3.3 to 3.4, kindly contributed for your benefit by Daniel Hoffend.

Kolab 3.4 comes with a new skin, called chameleon -- a nice, clean-cut, modern skin for the web client -- courtesy of Kolab Systems, patron of the Kolab Groupware solution.

Two weeks ago we had our first release planning meeting on IRC, which resulted in very fruitful feedback, contributions and, most importantly, a significant chunk of quality assurance from various participants in the community. A special thanks goes out to Daniel Hoffend and Timotheus Pokorra, who've spent plenty of their spare time ensuring that Kolab 3.4 is the best it can be right out of the box. One slice of the pie on your right is theirs.

We're definitely going to continue to open up more processes, such as, for example, the Kolab 3.5 Roadmap.

The Kolab 3.4 release also marks the first release with an actual release party - though naturally many people are not able to attend. We're celebrating because the Kolab 3.4 release is probably the best quality-assured stable release Kolab.org has yet put out.


Aaron Seigo
Mon, 2015-02-23 12:08
Protocol Plugfest - http://www.protocolsplugfest.com/europe/

The "world wide web" has been such an amazing success in large part because it was based on open protocols and formats that anyone can implement and use on a level playing field. This opened the way for interoperability on a grand and global scale, and is why http and HTML succeeded where many others failed previously.

Unfortunately, not all areas of computing are as blessed with open protocols and formats. Some are quite significant, too, with hundreds of millions of people using and relying on them every single day. Thankfully, some brave souls have stepped up to implement these proprietary protocols and formats using open technology, specifically as free software. The poster child for this is Samba and the ubiquitous file and print server protocols from Microsoft.

Such formats abound and are a key component in every day business (and personal) computer usage, and since the protocols are often not as open as we'd like it can be tricky to provide free, open and interoperable implementations of them. Again, just ask the Samba team. Yet bringing the option of freedom in technologies used by business and government around the world relies on these efforts.

The free software community is moving rapidly on all fronts of this landscape, and to help ensure that our efforts actually do work as expected and that we are sharing useful information with each other between projects, a fantastic conference has been set up: the Protocol Plugfest which will be held in Zaragoza, Spain on May 12-14. This is much like the ODF Plugfest which focuses on office file formats, but with a stronger focus on protocols found in products such as Active Directory, Exchange, Sharepoint, CIFS and LDAP.

The call for papers is now open and several speakers have already been confirmed. These include Kolab Systems' own Georg Greve, who will be speaking on the topic of "Achieving All-Platform Interoperability", reflecting on Kolab's path towards supporting the wide world of clients, data formats and wire protocols such as ActiveSync.

He will also have some exciting announcements to make in this area during the presentation, so I hope everyone interested in free software collaboration suites will keep an eye on the event!

The other speakers include people from LibreOffice, Samba, Zentyal and ... Microsoft! Yes, I count no fewer than six speakers from Microsoft who are there to speak about the various protocols they've inflicted upon the world. This kind of engagement, while not perfect compared to having proper open standards, will certainly help push forward the state of support for all the protocols of the world in free software products.

I hope to see many of you there!


vanmeeuwen
Thu, 2015-02-19 21:24

Welcome back to the ongoing series of blog posts on benchmarking storage pods! Today is another beautiful Thursday and we have some extra information for you.

In my first blog post in this series, I had just barely gotten my hands on a Storage Pod -- and I was out to set a baseline for the storage performance. I mentioned that our intention had been to use SSDs for really fast access, and bulk SATA drives for the massive amounts of storage. I may also have mentioned that the controllers were seemingly unfairly balanced. More details of course are in part I.

First of all, I have to send a big thank you to our vendor, who almost immediately responded with some quick tips, clearly showing that the Storage Pod crowd of 45drives.com is paying attention, and wants you to get the loudest bang for your buck. Much appreciated, and nothing but kudos!

Now, admittedly, I don't know all that much about hardware -- plenty of people are far better qualified experts. I do appreciate the effects of (a bit of, pun intended) electrical interference on a 4-layer mainboard PCB under high-frequency throughput, though, as well as the accuracy of x86 CPU architectures vs., let's say, s390(x). That is to say, I'm completely out of my comfort zone when I ask for advice in a computer shop, because I have to dumb it down (a lot), without even expecting an answer that helps me.

That said, loud the bang will be, because 45 readily available SATA drives of 4 TB each give you a raw storage volume of 180 TB, regardless of what you use it for. Given the cost of the chassis and the cost of the individual drives put together, you will elect to put both performance and data redundancy in the hands of replicating and balancing entire pods, rather than using multiple 3U storage arrays from the usual vendors.

However, you will naturally still want a single Storage Pod to perform well. You will also want each Storage Pod to have some level of redundancy in and of itself, to stop you from having to go into the datacenter with a few drives too often.

I talked about the initial synchronization of the two RAID-5 arrays I created taking a little while at a sustained throughput of about ~75 - ~90 MBps. I'm fairly certain you already appreciate this, but I have to mention anyway that this is an implied throughput per disk, rather than for the array as a whole. It is therefore not a baseline, but it does give us information: if each disk can do ~85 MBps, then 19 such disks should (theoretically) be able to cumulatively achieve a total throughput of just under 1.6 GBps, right? Right, but also wrong. The I/O pattern of a RAID resync is completely different from the I/O pattern of regular use. Luckily though, each SATA backplane is capable of sustaining 5 GBps, so we're also not maxing out the backplane.
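
For the curious, the back-of-the-envelope arithmetic (taking ~85 MBps as the midpoint of the observed resync range) is simply:

# 19 data-bearing disks at ~85 MBps each, in theory:
echo $(( 19 * 85 ))   # => 1615 MBps, i.e. just under 1.6 GBps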

This gives us two interesting paths to explore:

  1. Does "md126" (with 20 participant, active drives) outperform "md128" (with 17 participant, active drives)?
  2. Does substituting the SATA drives for SSDs mean anything (on Highpoint Rocket 370 controllers)?

In this blog post, I will likely only get around to answering question #1 -- sorry to disappoint.

We first seek a baseline comparison on the current situation -- remember: 20 disks on one array, 17 on the other, RAID 5, and each array has two hot spares (not included in the count).

Q: What shall be your benchmark?

The answer is simple: Bonnie++.

Q: How shall you run it?

This is a simple one too, but the answer is a little longer. One opts for a bunch of defaults without tweaking at first. This shall be your baseline.

In our particular scenario, running Red Hat Enterprise Linux 7 on these Storage Pods, the default filesystem is XFS. I come from an era in which this was not the default, however, and we want to compare "now" vs. "then" -- meaning we'll start out with EXT4.

The choice of filesystem, I think, has logical implications for performance, even in a Bonnie++ scenario (with a default chunk size of just under 16 GB). I'm taking into account things like journalling, and there may also be points where one filesystem driver is inclined to hook into kernel-level storage interfaces slightly differently from another filesystem driver.

Hence we have set the scenario: We'll want a genuine comparison of different filesystems for Bonnie++ to benchmark using the two different RAID 5 arrays.

Q: Proper scientists formulate a hypothesis before they start running in circles, so what's yours?

The hypothesis is that md126 (20 disks) will outperform md128 (17 disks), and XFS will outperform EXT4 but not by as much as the md126 array will outperform md128.

I feel inclined to acknowledge the following assumptions with regard to this hypothesis, since you started to go all scientist on me:

  1. XFS surely isn't equal to or more than 118% more efficient than EXT4 at the pattern we're about to throw at it, and
  2. whatever pattern you throw at it when benchmarking very rarely represents the actual patterns thrown at it in a production environment.

First Things First

  1. Reading the output from Bonnie++, and secondly interpreting what it means, is a skill in and of itself -- I know because I've learned the hard way. More on this later.
  2. We're continuously running Munin on the "localhost" -- with its default 5 minute interval, and a non-CGI HTML and graph strategy. This means that every 5 minutes, Munin is eating CPU and I/O (though not on the same disks), and therefore we have to repeat runs of Bonnie++ 10 times, in order to get results that better represent actual as opposed to fictitious throughput.
  3. Bonnie++ is run with only a -d /path/to/mountpoint command-line, with /path/to/mountpoint being a given Logical Volume we use for the specific test. That is to say that each RAID array has been made a Physical Volume, each PV has been added to a Volume Group unique to that PV, and each test has a Logical Volume (of a set, constant size) in the appropriate VG; a sketch of this layout follows below.
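
A minimal sketch of that layout and of the Bonnie++ invocation (the device, VG/LV names and the LV size here are illustrative assumptions, not the exact ones used):

# one PV / VG / LV per RAID array
pvcreate /dev/md126
vgcreate vg_md126 /dev/md126
lvcreate -L 200G -n lv_bonnie vg_md126

# format and mount the test LV (mkfs.xfs instead for the XFS runs)
mkfs.ext4 /dev/vg_md126/lv_bonnie
mkdir -p /mnt/bonnie-md126
mount /dev/vg_md126/lv_bonnie /mnt/bonnie-md126

# Bonnie++ with only -d, repeated 10 times; -u is required when running as root
for run in $(seq 1 10); do
    bonnie++ -d /mnt/bonnie-md126 -u root
done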

Recognizing and acknowledging the I/O pattern likely to be thrown at the Storage Pod helps in determining the type of benchmark you will want to run. After all, a virtualization guest's node disk image -- despite the contents inside that disk -- does establish a pattern slightly different from an IMAP spool or even a number of package build roots / docker images.

To obtain this information, let's see what a guest's disk image tends to do: internally to the guest, a disk is partitioned and filesystems are mounted. Data is read from and written to places in this filesystem, but underneath it all may be something like a qcow2 thin-provisioned image file. This basically means that the I/O pattern is random, yet -- for most tech running inside your VM -- at a block-stream level.

An RPM installation however uses cpio, and a build root tends to install many RPMs -- much like a yum update does, or the yum installs for a Docker container. This particular I/O pattern tends to be supremely expensive on the disk / array controller -- which is why most build roots are in (the in-memory) tmpfs.

Long story short, whether a Storage Pod's individual controllers need to maintain mirrors of entire RAID arrays, whether RAID arrays are themselves (a level of) robust, what your expectations are of the individual disks, how many Storage Pods you planned on including in your environment, as well as the particular technology you're thinking of using to communicate with your Storage Pods -- software iSCSI (tgtd)? NFS? GlusterFS (over NFS? Replicated? Distributed? Both?)? -- all of it matters. This is subject to requirements engineering, backed by large amounts of expertise, experience, information, skill and proper judgement.

How About that 20-Disk vs. 17-Disk Array?

Well, luckily one part of the hypothesis turns out to be true: The "md126" array (20 disks) outperforms the "md128" array (17 disks). Or does it?

However, it only does so using a particular I/O pattern that Bonnie++ tests:

Bonnie++ per-character
          K putc() per second    CPU Usage
md126     894.4                  97.2%
md128     898.1                  93.9%

Say what?

When you run Bonnie++, it will default to doing 16 GB worth of putc() calls, or otherwise individual bits and bytes, or 16 billion in total or so -- give or take a dozen. What Bonnie++ has reported here is an average of 10 individual runs, where md126 averages out putting through 894.4K putc() calls a second, and md128 more -- 898.1K per second to be precise.

Let us back up, and see if these numbers somewhat represent what we find in real life:

Some ~900k calls per second, with ~16 billion or so in total, would mean that the test takes ~5 minutes to complete, right? Check, it does take approximately that long.

So md128 is the clear winner, with more putc() calls per second, and lower CPU usage. Huh?

Bonnie++ "efficient block write"
          K per second    CPU Usage
md126     678483.9        46.1%
md128     582650.7        38.8%

Here, what Bonnie++ calls "efficient block writes" achieve a significantly higher rate for the md126 array than for the md128 array. However, when you calculate back the amount of CPU usage involved, md128 again outperforms md126 in efficiency (at ~15k per 1% vs. ~14k per 1%).
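
Working that efficiency out (throughput divided by CPU usage):

echo "scale=1; 678483.9 / 46.1" | bc   # md126: ~14.7k K/s per 1% CPU
echo "scale=1; 582650.7 / 38.8" | bc   # md128: ~15.0k K/s per 1% CPU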

What are "efficient block writes"? I'm more than happy to admit I cannot say for sure. Deriving from the context I imagine efficient block writes have to do with the I/O scheduler in the kernel and subsequent subsystems. This tells me I will want to perform the same tests using different kernel I/O schedulers for the set of individual block devices in each array. Noted.
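
For reference, switching schedulers per block device is done through sysfs; sdb is just an example device here, and the set of available schedulers depends on your kernel:

cat /sys/block/sdb/queue/scheduler               # lists the available schedulers, the current one in brackets
echo deadline > /sys/block/sdb/queue/scheduler   # switch this device to the deadline scheduler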

Note, however, that these are EXT4-based tests. We still have the XFS tests to go:

Bonnie++ per-character
          K putc() per second    CPU Usage
md126     1987.3                 96.4%
md128     1794.0                 98.0%

This brings us to the second part of the hypothesis -- XFS outperforming EXT4, but by no more than 117%. Well, as far as this benchmark goes, it is busted. Let's look at the "efficient block write" stats:

Bonnie++ "efficient block writes"
          K per second    CPU Usage
md126     699922.7        31.0%
md128     588464.0        29.0%

Not that much gain in throughput -- although apparently XFS is slightly faster than EXT4 -- but what a decrease in CPU usage: XFS seems much more efficient, coming in at a CPU usage of around 30%!

Finally ...

I hope you enjoy reading about Storage Pods and the novice-level approach I'm taking to try and comprehensively benchmark them, using exclusively Free and Open Source Software. That said, your feedback and ideas on things to also try are much appreciated! Please do not hesitate to reach out to vanmeeuwen@kolabsys.com, or hit me up on Twitter (@kanarip).

While it is not part of this series of blog posts about my attempts to get the loudest bang for my buck, we do appreciate collaboration. I've consistently collected the raw statistics and reports, and I'm fully aware that the aforementioned numbers do not mean anything without them. Please do not hesitate to contact Kolab Systems if you are interested in reviewing the raw data yourself.

I also appreciate your feedback on how you think multiple Storage Pods would fit in your infrastructure. Are you considering NFS servers for a virtualization environment, perhaps replicated through DRBD, or would you take a Ceph/GlusterFS approach? How do you think the concept of shared storage would fit in with future technologies such as Docker/Atomic? Hit me up at vanmeeuwen@kolabsys.com or on Twitter.

 


vanmeeuwen
Thu, 2015-02-12 03:41

I have recently obtained access to a so-called Storage Pod -- an open hardware design for stuffing up to 45 SATA III drives into a chassis no higher than 4U -- a.k.a. the "Storinator".

How does one benchmark such goodness? Well, most importantly you first need to set your baseline. This I will do in this Part I of what hopefully becomes a series of posts on the subject.

The pod comes with 2 Highpoint Rocket 750 controller cards connected to 3 SATA back-planes, each of which is capable of transferring up to 5 GBps. This seems unfairly balanced, since the slots these controller cards are stuffed into each have their own maximum transfer rate. Let's keep this in mind while we dive in deeper.

It's stuffed with 39 times 4 TB Western Digital "Red" drives, striking a balance between overall power consumption, capacity and Mean Time Between Failure (MTBF). This leaves 6 slots open, in which are 1 TB Samsung SSDs.

Your first mistake will be to order 39 drives (or actually, 78+) from the same vendor in one fell swoop. Don't do this, unless you have the necessary appreciation of what is likely to happen to your other drives once one drive fails -- drives with very similar serial numbers are likely to suffer from the same manufacturing defect. Suggested solution: be close to or on-site, and have plenty of spare drives. This storage solution is not intended to make you care about each individual drive of CHF 200,- as much as you would when it is your only drive.

The second order of business is to appreciate that the mainboard, CPU and available RAM do not necessarily comprise your ideal server system -- mine has only 8 GB of RAM and an Intel Core i3 quad-core CPU. Nothing to write home about, yet presumably sufficient for some decent performance -- we'll find out.

So how about the baseline? Well, let's take this on a controller-by-controller basis. Two primary entry-points are available when you fire up the system:

  • /dev/disk/by-id/

    This directory contains symbolic links to block devices /dev/sd*, where the link is named such that it represents a human-readable name for the device, namely "type", "make", "model" and "serial". This is your first assistance for inventory.
     

  • /dev/disk/by-path/

    This directory contains the "path" you would think the disk is at from the CPU's perspective. You'll find something like "pci-0000:01:" and "pci-0000:02:" types of names in here. You are not mistaken in thinking these numbers may have something to do with the controller number -- as indeed there are more drives connected to the first controller than there are to the second controller; remember the unfair balancing I mentioned earlier (a quick count is shown below).
     

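A quick way to see that imbalance is to count the whole-disk entries per controller path (using the PCI addresses mentioned above):

cd /dev/disk/by-path/
ls -1 pci-0000:01* | grep -vc -- "-part"   # block devices behind controller 1
ls -1 pci-0000:02* | grep -vc -- "-part"   # block devices behind controller 2
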
So What Is Your Baseline?

A not too unfair baseline is to have as many disks as possible, all on one controller, all of one type, in a type of RAID array where even contiguous blocks are distributed over disks. So not a JBOD type of array, because there contiguous blocks end up on the same disk unless you cross a disk's boundary. I have chosen RAID 5 for the job, because that is likely a large part of what we'll be using when we start using these systems for real.

Using information from the disk devices, write down the following numbers:

# cat /sys/block/sdd/queue/optimal_io_size
0
# cat /sys/block/sdd/queue/minimum_io_size
512
# cat /sys/block/sdd/alignment_offset
0
# cat /sys/block/sdd/queue/physical_block_size
512

If your alignment offset is non-zero, use it as the offset.

If your optimal I/O size is zero, use the literal string "1MiB" as the start for your "Linux RAID auto-detect" partition.

If your optimal I/O size is non-zero, some calculations are involved to align the partition properly.
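
For reference, a commonly used rule of thumb (an assumption on my part, not something taken from the vendor documentation) is to start the first partition at (optimal_io_size + alignment_offset) bytes:

opt=$(cat /sys/block/sdd/queue/optimal_io_size)
off=$(cat /sys/block/sdd/alignment_offset)
echo "$(( opt + off ))B"   # value to hand to 'mkpart primary <start> ...'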

For the end sector of the partitions, I favored the "flat on one's face" approach:

# parted /dev/sdd mklabel gpt
# parted -a optimal /dev/sdd -- mkpart primary 1MiB -1s
Warning: You requested a partition from 1000kB to 4001GB (sectors 1953..7814037167).
The closest location we can manage is 1000kB to 4001GB (sectors 1953..7814037134).
Is this still acceptable to you?
Yes/No? No

So rather than "-1s" we are to use "-34s".

Since all my drives are exactly the same (up to the last few characters of the serial number), I can start cracking -- note that all my disks are completely empty and have no pre-existing medical conditions such as partitions and whatnot:

# cd /dev/disk/by-id/
# for disk in $(readlink ata-WDC* | grep -vE -- "-part[0-9]+$"); do
    parted -a optimal $disk -- mklabel gpt
    parted -a optimal $disk -- mkpart primary 1MiB -34s
    parted -a optimal $disk -- set 1 raid on
done

Before you actually start, it might be worthwhile to get some statistics gathered and graphed:

# yum -y install munin
# systemctl start munin-node.service
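
If you want the node to survive a reboot as well (assuming the same unit name as above):

systemctl enable munin-node.service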

Now, create a RAID 5 array from the disks on controller 1 (21 SATA drives):

# cd /dev/disk/by-path/
# unset disks
# declare -a disks
# for dbp in $(ls -1 pci-0000:01* | grep -E -- "-part[0-9]+$"); do
    disk=$(basename $(readlink $dbp))
    dbi=$(ls -l /dev/disk/by-id/ | grep $disk | grep -vE "(md|wwn)" | awk '{print $9}')
    if [ ! -z "$(echo $dbi | grep ata-WDC)" ]; then
        disks[${#disks[@]}]="/dev/$disk"
    fi
done
# mdadm --create /dev/md126 --level=5 --raid-devices=${#disks[@]} ${disks[@]}

The initial process of syncing can easily take up to 15 hours. You could assume the array is clean of course, but that is not putting your drives through their paces -- remember you will want to find drive errors early.

I'm at a sustained throughput of ~75 - ~90 MBps for the resync, without any tweaking whatsoever. This is by no means the baseline, but we're going to have to wait for the resync to complete before we can start establishing the actual baseline.
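
Progress -- and that sustained throughput figure -- can be watched while the resync runs; the md speed limits below are stock kernel tunables, listed here merely as a pointer:

watch -n 60 cat /proc/mdstat              # resync progress and current speed
cat /proc/sys/dev/raid/speed_limit_min    # md resync speed floor (KB/s)
cat /proc/sys/dev/raid/speed_limit_max    # md resync speed ceiling (KB/s)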

Stay "tuned" for part II -- pun intended.


vanmeeuwen
Thu, 2015-02-12 02:27

So far, Kolab.org releases have largely been planned by me, Jeroen van Meeuwen. That is to say, I may have previously pulled the trigger at your proverbial 04:00 military time and subsequently pulled off power-nap-assisted binges of between 48 and 72 hours.

This can clearly be orchestrated better, and so we are further opening up this process. This time, we call on you in advance to participate in making sure the Kolab 3.4 release is as smooth as silk.

On Friday, February 13th, at 15:00 CET, we shall meet in the IRC channel #kolab on the FreeNode network. We have a little bit of an agenda to discuss with regard to this release, and in the future, we'll have more to discuss in terms of roadmap and development.

For more details, read the announcement to the development mailing list.


Aaron Seigo
Wed, 2015-02-11 16:03

An interesting problem came up during a recent bit of Kolab server-side work: tracking the lifetime of objects in the IMAP store. How hard can it be, right? In my experience, it is the deep, dark holes which start with that question. ;)

In our case, the IMAP server keeps objects (e.g. emails, events, whatever) in directories that appear as folders to the client. It keeps each object in a separate file, so each object gets a unique filename in that folder. That file name plus the folder path is unique at that point in time, something enforced by the filesystem. Let's call this the "object id" or OID for short. Of course, when you move an object from one folder to another that id changes; in fact, the file name may also change and so the OID is not constant over time. We can track those changes, however.
Inside the file is the message content of the object. There is an ID there as well, and generally those are unique to a given user. Let's call this the "message id", or MID for short. However, it is possible to share messages with others, such as when sending a calendar invitation around between users. This can quite easily lead to situations where multiple users have a message with the same MID. They of course have different OIDs .. right?

Well, yes, but only at a given point in time. Over time, with "clever" moving of objects between folders, and particularly once we add in the concept of shared folders (which is the same as saying shared calendars, notes, etc.), it is possible that at different points in time there are objects with the same OID and the same MID which are actually physically different things.

Now if we want to get the history for a given object in the IMAP storage and list all the things relevant to it in an entirely deterministic way, how can we guarantee proper results? In a simple logging model one might want to simply key change events with the OID, MID or some combination, but since those are not guaranteed to be unique over time in any combination, this leaves a small challenge on the table to make it deterministic.

We didn't want to change the data in the messages or the way storage worked. Changing less means fewer opportunities for problems (read: bugs), and besides, it's always fun to work out these kinds of problems. After an hour or so of pen and paper doodling, we were able to devise a deterministic algorithm for tracking object history without changing anything in the storage system.

The key to the puzzle? Message move events are generated when objects move around on the server, allowing an OID+MID change timeline to be logged in parallel, which can be compared with the general modification history and, with a bit of effort, used to reliably differentiate objects even if they had the same OID and MID at different points in time. Given that history queries always have a known starting state in time (now) and that OID+MID is guaranteed unique at any given moment in time (e.g. now), it is possible to track history working backwards from now (or forwards towards it if one must). There is very little overhead to do this, so we elected for this route.

So even though OID + MID uniqueness is not guaranteed over time, it is possible to accurately track object history through time without resorting to injecting some sort of true UUID data in each and every message on top of all the other data that is in there ... not to mention ensuring that it continues to work as expected with all client software and other parts of the Kolab stack.

roundcube
Sun, 2015-02-08 15:12

We’re proud to announce the arrival of the next major version 1.1.0 of
Roundcube webmail, which is now available for download. With this
milestone we introduce the new features added since version 1.0 as well as
some clean-up of the 3rd party libraries:

  • Allow searching across multiple folders
  • Improved support for screen readers and assistive technology using
    WCAG 2.0 and WAI ARIA standards
  • Update to TinyMCE 4.1 to support images in HTML signatures (copy & paste)
  • Added namespace filter and folder searching in folder manager
  • New config option to disable UI elements/actions
  • Stronger password encryption using OpenSSL
  • Support for the IMAP SPECIAL-USE extension
  • Support for Oracle as database backend
  • Manage 3rd party libs with Composer

In addition to that, we added some new features to improve protection
against possible but as-yet unknown CSRF attacks - thanks to the help of
Kolab Systems who supplied the concept
and development resources for this.

Although the new security features are still experimental and disabled by default,
our wiki describes how to enable the Secure URLs
so you can give them a try.

And of course, this new version also includes all patches for reported
CSRF and XSS vulnerabilities previously released in the 1.0.x series.

IMPORTANT: with the 1.1.x series, we drop support for PHP < 5.3.7
and Internet Explorer < 9. IE7/IE8 support can be restored by
enabling the ‘legacy_browser’ plugin.

See the complete Changelog at trac.roundcube.net/wiki/Changelog
and download the new packages from roundcube.net/download.


mollekopf
Sun, 2015-02-08 01:36

Ever since we introduced our ideas for the next version of akonadi, we’ve been working on a proof of concept implementation, but we haven’t talked a lot about it. I’d therefore like to give a short progress report.

By choosing decentralized storage and a key-value store as the underlying technology, we first need to prove that this approach can deliver the desired performance with all pieces of the infrastructure in place. I think we have mostly reached that milestone by now. The new architecture is very flexible and looks promising so far. We managed IMO quite well to keep the levels of abstraction to a necessary minimum, which results in a system that is easily adjusted as new problems need to be solved and feels very controllable from a developer perspective.

We’ve started off with implementing the full stack for a single resource and a single domain type. For this we developed a simple dummy-resource that currently has an in-memory hash map as backend, and can only store events. This is a sufficient first step, as turning that into the full solution is a matter of adding further flatbuffer schemas for other types and defining the relevant indexes necessary to query what we want to query. By only working on a single type we can first carve out the necessary interfaces and make sure that we make the effort required to add new types minimal and thus maximize code reuse.

The design we’re pursuing, as presented during the pim sprint, consists of:

  • A set of resource processes
  • A store per resource, maintained by the individual resources (there is no central store)
  • A set of indexes maintained by the individual resources
  • A clientapi that knows how to access the store and how to talk to the resources through a plugin provided by the resource implementation.

By now we can write to the dummyresource through the client api: the resource internally queues the new entity, updates its indexes and writes the entity to storage. On the reading side we can execute simple queries against the indexes and retrieve the found entities. The synchronizer process can meanwhile also generate new entities, so client and synchronizer can write concurrently to the store. We can therefore do the full write/read roundtrip, meaning we have the most fundamental requirements covered. Still missing are operations other than creating new entities (removal and modifications), and the writeback to the source by the synchronizer. But that’s just a matter of completing the implementation (we have the design).

To the numbers: Writing from the client is currently implemented in a very inefficient way and it’s trivial to drastically improve this, but in my latest test I could already write ~240 (small) entities per second. Reading works at around 40k entities per second (in a single query), including the lookup on the secondary index. The upper limit of what the storage itself can achieve (on my laptop) is at 30k entities per second to write, and 250k entities per second to read, so there is room for improvement =)

Given that design and performance look promising so far, the next milestone will be to refactor the codebase sufficiently to ensure new resources can be added with sufficient ease, and making sure all the necessary facilities (such as a proper logging system), or at least stubs thereof, are available.

I’m writing this on a plane to Singapore, which we’re using as a gateway to Indonesia to chase after waves and volcanoes for the next few weeks, but after that I’m looking forward to going full steam ahead with what we started here. I think it’s going to be something cool =)


Timotheus Pokorra
Thu, 2015-01-29 09:22

TBits.net is glad to announce that nightly tests now run every night against the nightly development packages of Kolab, for both CentOS 6 and Debian Wheezy.

At the moment, the nightly tests also run for the Kolab 3.3 Updates for CentOS 6.

Through these tests, we can spot problems and obvious bugs in the code early, when they are still easy to fix. This is TBits.net’s contribution to making Kolab more stable and reliable.

As soon as Kolab 3.4 is out (expected for the middle of February 2015), we will enable nightly tests for CentOS 6 and Debian Wheezy for Kolab 3.4.

You can see the tests and the results here: https://lbs.solidcharity.com/package/tbits.net/kolab-test/kolab-test

I use the TBits KolabScripts to install the server unattended.

The TBits KolabScripts also contain some Selenium tests written in Python. These tests check if the user can change his own password, and test creating new domains and new users, sending emails, catchall addresses, etc.

The TBits scripts for ISPs are tested as well: these scripts add domain admin functionality to give customers the option to manage their own domains, limited by a domain quota and a maximum number of accounts.

We use the LightBuildServer for running the tests. This software is written in Python, by Timotheus Pokorra. It is very light-weight, does not come with much code, and is easy to extend and configure. It uses LXC to create virtual machines for each test or build run. You can see it live in action for Kolab.

We now also initiate the nightly development packages with the LightBuildServer: https://lbs.solidcharity.com/package/tbits.net/kolab-nightly-sync/updatecode. This job is run every night, fetches the source from git.kolab.org, modifies the package version number, and uploads the new package instructions to obs.kolabsys.com so that they can be built on the OBS server sponsored by Kolab Systems.

If you want to test the nightly packages yourself, please read the chapter in the Kolab Developer Guide: http://docs.kolab.org/developer-guide/nightly-builds/install.html


Andreas Cordes
Mon, 2015-01-26 20:35
Hi,

due to a new job since the beginning of the year and a move to a new country, I have been a bit stressed. In addition to this, my provider changed from IPv4 to IPv6 on a DS-Lite line.

So my port forwarding is no longer working and I had to organize all the stuff to get access to my mails again :-). Well, that's a disadvantage of my solution but I hope this will be solved somehow.

So my script to install Kolab on my truck:

export RELEASE=3.3
export RELEASEDIR=kolab_3.3_src
mkdir -p /mnt/kolab/$RELEASEDIR
mkdir -p /mnt/kolab/kolab_${RELEASE}_deb
echo "deb-src http://obs.kolabsys.com:82/Kolab:/$RELEASE/Debian_7.0/ ./" > /etc/apt/sources.list.d/kolab.list
echo "deb-src http://obs.kolabsys.com:82/Kolab:/$RELEASE:/Updates/Debian_7.0/ ./" >> /etc/apt/sources.list.d/kolab.list
#echo "deb http://kolab-deb.zion-control.org /" >> /etc/apt/sources.list.d/kolab.list
echo "Package: *" >  /etc/apt/preferences.d/kolab
echo "Pin: origin obs.kolabsys.com" >>  /etc/apt/preferences.d/kolab
echo "Pin-Priority: 501" >>  /etc/apt/preferences.d/kolab
wget -qO - http://obs.kolabsys.com:82/Kolab:/$RELEASE/Debian_7.0/Release.key | apt-key add -
wget -qO - http://obs.kolabsys.com:82/Kolab:/$RELEASE:/Updates/Debian_7.0/Release.key | apt-key add -
aptitude update
cd /mnt/kolab/$RELEASEDIR
echo Get debian sources

wget http://obs.kolabsys.com:82/Kolab:/$RELEASE/Debian_7.0/Packages -O packages.txt
wget http://obs.kolabsys.com:82/Kolab:/$RELEASE:/Updates/Debian_7.0/Packages -O packages_updates.txt
apt-get -ym source `grep Package packages.txt | awk '{print $2}' | grep -v "kolab-ucs" | sort -u`
apt-get -ym source `grep Package packages_updates.txt | awk '{print $2}' | grep -v "kolab-ucs" | sort -u`
apt-get -y build-dep `grep Package packages.txt | awk '{print $2}' | grep -v kolab-ucs | sort -u`
dpkg -i 389-ds-base-libs_1.2.11.29-0_armhf.deb
dpkg -i 389-ds-base-dev_1.2.11.29-0_armhf.deb
cd python-icalendar-3.4/
debuild -us -uc -b
cd ..
apt-get install python-dateutil python-tz
dpkg -i python-icalendar_3.4-1_all.deb
cd libcalendaring_4.9.0
debuild -us -uc -b
cd ..
dpkg -i libcalendaring_4.9.0-3_armhf.deb
dpkg -i libcalendaring-dev_4.9.0-3_armhf.deb
apt-get install libboost-program-options-dev
cd libkolabxml-1.1~dev20140624/
debuild -us -uc -b
cd ..
dpkg -i libkolabxml1_1.1~dev20140624-0~kolab1_armhf.deb
dpkg -i libkolabxml-dev_1.1~dev20140624-0~kolab1_armhf.deb
cd libkolab-0.6~dev20140624/
debuild -us -uc -b
cd ..
dpkg -i libkolab0_0.6~dev20140624-0~kolab1_armhf.deb
dpkg -i libkolab-dev_0.6~dev20140624-0~kolab1_armhf.deb
apt-get -y build-dep `grep Package packages.txt | awk '{print $2}' | grep -v kolab-ucs | sort -u`
apt-get -y build-dep `grep Package packages_updates.txt | awk '{print $2}' | grep -v kolab-ucs | sort -u`
ls -ld * | grep '^d' | awk '{print $9}' | while read verz
do
        cd $verz
        debuild -us -uc -b
        cd ..
done

cp *deb ../kolab_${RELEASE}_deb/
cd ../kolab_${RELEASE}_deb
dpkg-scanpackages -m .  | gzip -c9 > Packages.gz
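
To actually install from that freshly built local repository, one would add it as an apt source; a sketch, assuming the directory used above (with RELEASE still set) and a flat repository layout:

echo "deb file:///mnt/kolab/kolab_${RELEASE}_deb ./" > /etc/apt/sources.list.d/kolab-local.list
aptitude update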

After a while it was installed and I had to move all mails to my truck. I decided to reorganize all the mails according to the recipient, in separate folders (because of my C@tch@ll). I used "imapfilter" for that, and for this I wrote a little Lua script:

function processMailbox(raspberryPi,cubietruck, mb,subfolders)
        -- Status
        total, recent,unseen,nextid = raspberryPi[mb]:check_status()
        -- Select all messages
        results = raspberryPi[mb]:select_all()
        number = 1
        for _, message in ipairs(results) do
                print ("")
                mailbox, uid = table.unpack(message)
                header = mailbox[uid]:fetch_header()
                recipient = string.gsub(header, ".*for <([A-Za-z0-9-]+@example.com)>.*","%1")
                if (recipient:find(":")) then
                        headerTo = mailbox[uid]:fetch_field('To')
                        if (headerTo == nil) then
                                recipient = "invalid_to@example.com"
                        else
                                headerTo = headerTo:gsub("\r","")
                                headerTo = headerTo:gsub("\n","")
                                headerTo = headerTo:gsub(" ","")
                                headerTo = headerTo:gsub("To: ","")
                                if (headerTo == "") then
                                        recipient = "invalid_to@example.com"
                                else
                                        recipient = headerTo:gsub("To: ","")
                                end
                        end
                end
                if (recipient:find("<")) then
                        recipient = recipient:gsub(".*<(.*)>","%1")
                end
                toFolder = string.gsub(recipient,"(.-)(@.*)","%1")
                messageId = mailbox[uid]:fetch_field("Message-id")
                messageId = string.gsub(messageId,".-<(.-)>","%1")
                subject   = mailbox[uid]:fetch_field('Subject')
                subject   = subject:gsub("\r"," ")
                subject   = subject:gsub("\n"," ")
                print('Processing : <' .. mb .. '>')
                print('Subject    : <' .. subject .. '>')
                print('Recipient  : <' .. recipient .. '>')
                print('Folder     : <' .. toFolder .. '>')
                print('ID         : <' .. messageId .. '>')
                prozent = number / total * 100
                print('Number     : <' .. number .. '/' .. total .. '> in ' .. mb .. ' (' .. prozent .. ' %)')
                cubietruck:create_mailbox('Archive/' .. toFolder)
                toMove = raspberryPi[mb]:contain_field("Message-id",messageId)
                for _, messageM in ipairs(toMove) do
                        tmMB,tmUID = table.unpack(message)
                end
                toMove:copy_messages(cubietruck['Archive/' .. toFolder])
                number = number + 1
        end
        -- process all subfolders
        if (subfolders) then
                rPi_Mailboxes, rPi_Folders = raspberryPi:list_all(mb)
                for _, mbSub in ipairs(rPi_Mailboxes) do
                        processMailbox(raspberryPi,cubietruck,mbSub,true)
                end
        end
end
---------------
--  Options  --
---------------
options.timeout = 120
options.subscribe = true
options.create = true
----------------
--  Accounts  --
----------------
-- Source is the Pi and target is the Truck
raspberryPi = IMAP {
    server = 'raspberrypi.example.com',
    username = 'username@example.com',
    password = ':-)',
}
cubietruck = IMAP {
    server = 'cubietruck.example.com',
    username = 'username@example.com',
    password = ';-)',
}
processMailbox(raspberryPi,cubietruck,'Subfolder',false)

Don't hesitate to ask any questions
Greetz

roundcube
Sat, 2015-01-24 17:52

We just published a security update to the stable version 1.0.
Besides a recently reported Cross-Site Scripting vulnerability,
this release also contains some bug fixes and improvements we
found important for the long-term support branch of Roundcube.

It’s considered stable and we recommend updating all production installations
of Roundcube to this version. Download it from roundcube.net/download,
and see the full changelog here.

Please do a backup before updating!


greve
Mon, 2015-01-19 12:42
I’m a fossil, apparently. My oldest PGP key dates back to 1997, so around the time when GnuPG just got started – and I switched to it early. Over the years I’ve been working a lot with GnuPG, which perhaps isn’t surprising. Werner Koch has been one of the co-founders of the Free Software Foundation Europe (FSFE) and so we share quite a bit of a long and interesting history together. I was always proud of the work he did – and together with Bernhard Reiter and others was doing what I could to try and support GnuPG when most people did not seem to understand how essential it truly was – and even many security experts declared proprietary encryption technology acceptable. Bernhard was also crucial to start the more than 10 year track record of Kolab development supporting GnuPG over the years. And especially the usability of GnuPG has always been something I’ve advocated for. As the now famous video by Edward Snowden demonstrated, this unfortunately continued to be an unsolved problem but hopefully will be solved “real soon now.”

 

In any case. I’ve been happy with my GnuPG setup for a long time. Which is why the key I’ve been using for the past 16 years looked like this:
sec# 1024D/86574ACA 1999-02-20
uid                  Georg C. F. Greve <greve@gnu.org>
uid                  Georg C. F. Greve <greve@fsfeurope.org>
uid                  Georg C. F. Greve <greve@brave-gnu-world.org>
uid                  Brave GNU World <column@gnu.org>
uid                  Georg C. F. Greve <greve@fsfe.org>
uid                  Georg C. F. Greve <greve@gnuhh.org>
uid                  Georg C. F. Greve (Kolab Systems AG, CEO) <georg.greve@kolabsys.com>
uid                  Georg C. F. Greve (Kolab Systems AG, CEO) <greve@kolabsys.com>
ssb>  1024R/B7DB041C 2005-05-02
ssb>  1024R/7DF16B24 2005-05-02
ssb>  1024R/5378AB47 2005-05-02
You’ll see that I kept the actual primary key off my work machines (look for the ‘#’) and I also moved the actual sub keys onto a hardware token. Naturally a FSFE Fellowship Smart Card from the first batch ever produced.
That smart card is battered and bruised, but its chip is still intact with 58470 signatures and counting, and the key itself is likely still intact and hasn't been compromised, for lack of ever having been on a networked machine. But unfortunately there is no way to extend the length of a key. And while 1024 is probably still okay today, it's not going to last much longer. So I finally went through the motions of generating a new key:
sec#  4096R/B358917A 2015-01-11 [expires: 2020-01-10]
uid                  Georg C. F. Greve (Kolab Systems AG, CEO) <greve@kolabsystems.com>
uid                  Georg C. F. Greve (Kolab Systems AG, CEO) <greve@kolabsystems.ch>
uid                  Georg C. F. Greve (Kolab Systems AG, CEO) <greve@kolabsys.com>
uid                  Georg C. F. Greve (Kolab Community) <georg@kolab.org>
uid                  Georg C. F. Greve (Free Software Foundation Europe, Founding President) <greve@fsfeurope.org>
uid                  Georg C. F. Greve (Free Software Foundation Europe, Founding President) <greve@fsfe.org>
uid                  Georg C. F. Greve (digitalSTROM.org Board) <georg.greve@digitalSTROM.org>
uid                  Georg C. F. Greve <mail@georggreve.net>
uid                  Georg C. F. Greve (GNU Project) <greve@gnu.org>
ssb>  4096R/AD394E01 2015-01-11
ssb>  4096R/B0EE38D8 2015-01-11
ssb>  4096R/1B249D9E 2015-01-11

My basic setup is still the same, and the key has been uploaded to the key servers, signed by my old key, which I have meanwhile revoked and which you should stop using. From now on please use the key
pub   4096R/B358917A 2015-01-11 [expires: 2020-01-10]
      Key fingerprint = E39A C3F5 D81C 7069 B755  4466 CD08 3CE6 B358 917A
exclusively and feel free to verify the fingerprint with me through side channels.

 

Not that this key has any chance to ever again make it among the top 50… but then that is a good sign insofar as it means a lot more people are using GnuPG these days. And that is definitely good news.

And in case you haven’t done so already, go and support GnuPG right now.

 

 


Aaron Seigo
Fri, 2015-01-16 16:16

I spent the middle of this week at the Univention Summit in Bremen where I presented on Kolab in the open cloud appliances track. While there, we unveiled the first parts of the new corporate identity for Kolab Systems. Over the coming weeks, we will be rolling this new identity out across the various Kolab websites, services and informational materials.

The new look provides a very nice punctuation point for the beginning of 2015 and hints at our evolving focus. A new version of Kolab is nigh, and we have an intriguing roadmap to follow from there. In addition, MyKolab is going to be upgraded to this new version of Kolab, bringing new features with it, and will simultaneously get a beautiful new skin reflecting the new branding position, as well as a shift in name from MyKolab to Kolab Now.

Even more exciting is that we will be holding the very first Kolab Summit in The Hague on May 2-3. The openSUSE Conference will also be underway at the same time in The Hague, so there will be lots of opportunities to mix and mingle with both Kolab and openSUSE participants and users. In fact, I'll be dashing over to the openSUSE Conference to deliver a keynote presentation.

This is just the start of what is going to be a busy and fruitful 2015 ...


Andreas Cordes
Sat, 2015-01-03 17:38

Hello,

currently my +Raspberry Pi compiles the packages to the latest version available. (Update from 01.01.2015)
In the next couple of weeks I'm going to migrate from the +Raspberry Pi to my new Cubietruck. I decided to upgrade to a new SBC because the Cubietruck has more power (2 GB RAM instead of 512 MB, ARMv7 instead of ARMv6, and so on).
For the Cubietruck I decided to go the debootstrap way for a Debian image and followed these instructions:
After that I had an amazingly fast boot sequence on my "truck". :-)
I had to fix an issue on my host because the module "binfmt_misc" was not integrated in the kernel which I compiled myself.
The main difference between the Raspberry Pi and the Cubietruck is the CPU.
Debian is available for ARMv7 CPUs as "armhf" and for ARMv6 CPUs as "armel" (soft float). The difference is the performance. If you use "armel" on the Raspberry Pi you will lose lots of performance, which is why the guys from Raspbian created their own repository for ARMv6 and "hard float". They compiled nearly every package for the Pi, and I did that for the +Kolab packages.
The Debian armhf repository does not contain all of the packages which are necessary for the Kolab installation, so my "truck" is compiling all the stuff as well. :-)
Greetz
Andreas

Sat, 2014-12-27 15:33

I have recently updated my Kolab Groupware install from version 3.2 to version 3.3. There are not a ton of new features, but I wanted to see if this would be a huge process or go fairly quickly.

First of all, take a backup. Really, take a backup. You never know what you're going to blow up with Kolab updates. Sometimes they work great, and they are getting better. Just do it. At the very least back up your IMAP store. If you are like me at all and have your IMAP spool mounted over NFS, stop the Cyrus service and unmount the IMAP store.

Also, I am using CentOS 6 and this guide is based on that; the fixes at the end might still apply, though, if you are not running CentOS 6.

Here is what I did; I will also list a few things I did to fix some issues.

Backup the server. I use VMWare ESX so I made a snapshot.

Stop the Cyrus Server.
service cyrus-imapd stop

I unmounted the IMAP store since I use NFS.
umount /var/spool/imap

Follow this guide (I will copy its content below, possibly with some differences).
https://docs.kolab.org/administrator-guide/upgrading-from-kolab-3.1-to-3.3.html

Update your CentOS Installation

# cd /etc/yum.repos.d/
# rm Kolab*.repo
# wget http://obs.kolabsys.com/repositories/Kolab:/3.3/CentOS_6/Kolab:3.3.repo
# wget http://obs.kolabsys.com/repositories/Kolab:/3.3:/Updates/CentOS_6/Kolab:3.3:Updates.repo
# yum update

FILE TO EDIT: /etc/kolab/kolab.conf
Replace example.org with your LDAP and installation primary domain name.

[ldap]
sharedfolder_acl_entry_attribute = acl
modifytimestamp_format = %Y%m%d%H%M%SZ

[kolab_smtp_access_policy]
delegate_sender_header = True
alias_sender_header = True
sender_header = True
xsender_header = True
cache_uri = <copy and paste mysql uri from the kolab_wap section>

[wallace]
modules = resources, invitationpolicy, footer
kolab_invitation_policy = ACT_ACCEPT_IF_NO_CONFLICT:example.org, ACT_MANUAL

If you’re planning to make use of Wallace, please make sure Wallace is enabled to start, using chkconfig on RHEL/CentOS or /etc/default/wallace on Debian.
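
On CentOS 6 that amounts to a one-liner, run before the restarts below:

# chkconfig wallace on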

# service kolab-server restart
# service wallace restart

FILE TO EDIT: /etc/kolab-freebusy/config.ini
Instead of editing the configuration by hand it’s easier to just recreate the configuration using the setup-kolab tool. Your choice.
This step did not work for me, but I do not use freebusy!

# cp /etc/kolab-freebusy/config.ini.rpmnew /etc/kolab-freebusy/config.ini
or
# setup-kolab freebusy

FILE TO EDIT: /etc/roundcubemail/config.inc.php
The plugins were correct on my server, except for adding the new ones, kolab_notes and kolab_tags.

Change the plugin load order the following way:

    move kolab_auth to the top position
    move kolab_config after kolab_addressbook
    add kolab_notes after kolab_folders
    add kolab_tags after kolab_notes
$config['use_secure_urls'] = true;
$config['assets_path'] = '/roundcubemail/assets/';

FILE TO EDIT: /etc/roundcubemail/password.inc.php
Change the password driver from ldap to ldap_simple.

$config['password_driver'] = 'ldap_simple';

FILE TO EDIT: /etc/roundcubemail/kolab_files.inc.php
Update the kolab_files_url to /chwala/ to be protocol independent.
This would not work for me, I kept my old setup.

$config['kolab_files_url'] = '/chwala/';

FILE TO EDIT: /etc/iRony/dav.inc.php
The iRony configuration doesn’t have any special configuration. You might want to consider just taking the new default config file, or changing it based on the differences from the previous version.
For me, nothing changed from 3.2 to 3.3 but you should check.

# cp /etc/iRony/dav.inc.php.rpmnew /etc/iRony/dav.inc.php

FILE TO EDIT: /etc/postfix/ldap/virtual_alias_maps_sharedfolders.cf
To fix the handling of resource invitations you have to adjust your existing virtual alias maps, otherwise you end up with non-delivery reports.
I just had to add the last part.
query_filter = (&(|(mail=%s)(alias=%s))(objectclass=kolabsharedfolder)(kolabFolderType=mail))

FILE TO EDIT: /etc/postfix/master.cf
This will put Wallace as the next content filter after the mail has been returned from amavis to postfix. If you don't want to make use of iTip processing or resource management you can skip this section.

[...]
127.0.0.1:10025     inet        n       -       n       -       100     smtpd
    -o cleanup_service_name=cleanup_internal
    -o content_filter=smtp-wallace:[127.0.0.1]:10026
    -o local_recipient_maps=
[...]

Restart Postfix

# service postfix restart

Update MySQL Database
Connect to MySQL, use the password you use for SQL on that server.

# mysql -u root -p -D kolab
--
-- Table structure for table `ou_types`
--

DROP TABLE IF EXISTS `ou_types`;
/*!40101 SET @saved_cs_client     = @@character_set_client */;
/*!40101 SET character_set_client = utf8 */;
CREATE TABLE `ou_types` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `key` text NOT NULL,
  `name` varchar(256) NOT NULL,
  `description` text NOT NULL,
  `attributes` longtext NOT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `name` (`name`)
) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=latin1;
/*!40101 SET character_set_client = @saved_cs_client */;

--
-- Dumping data for table `ou_types`
--

LOCK TABLES `ou_types` WRITE;
/*!40000 ALTER TABLE `ou_types` DISABLE KEYS */;
INSERT INTO `ou_types` VALUES (1,'unit','Standard Organizational Unit','A standard organizational unit definition','{"auto_form_fields":[],"fields":{"objectclass":["top","organizationalunit"]},"form_fields":{"ou":[],"description":[],"aci":{"optional":true,"type":"aci"}}}');
/*!40000 ALTER TABLE `ou_types` ENABLE KEYS */;
UNLOCK TABLES;


--
-- Table structure for table `sharedfolder_types`
--

DROP TABLE IF EXISTS `sharedfolder_types`;
/*!40101 SET @saved_cs_client     = @@character_set_client */;
/*!40101 SET character_set_client = utf8 */;
CREATE TABLE `sharedfolder_types` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `key` text NOT NULL,
  `name` varchar(256) NOT NULL,
  `description` text NOT NULL,
  `attributes` longtext NOT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `name` (`name`)
) ENGINE=InnoDB AUTO_INCREMENT=8 DEFAULT CHARSET=latin1;
/*!40101 SET character_set_client = @saved_cs_client */;

--
-- Dumping data for table `sharedfolder_types`
--

LOCK TABLES `sharedfolder_types` WRITE;
/*!40000 ALTER TABLE `sharedfolder_types` DISABLE KEYS */;
INSERT INTO `sharedfolder_types` VALUES (1,'addressbook','Shared Address Book','A shared address book','{"auto_form_fields":[],"fields":{"kolabfoldertype":["contact"],"objectclass":["top","kolabsharedfolder"]},"form_fields":{"acl":{"type":"imap_acl","optional":true,"default":"anyone, lrs"},"cn":[]}}'),(2,'calendar','Shared Calendar','A shared calendar','{"auto_form_fields":[],"fields":{"kolabfoldertype":["event"],"objectclass":["top","kolabsharedfolder"]},"form_fields":{"acl":{"type":"imap_acl","optional":true,"default":"anyone, lrs"},"cn":[]}}'),(3,'journal','Shared Journal','A shared journal','{"auto_form_fields":[],"fields":{"kolabfoldertype":["journal"],"objectclass":["top","kolabsharedfolder"]},"form_fields":{"acl":{"type":"imap_acl","optional":true,"default":"anyone, lrs"},"cn":[]}}'),(4,'task','Shared Tasks','A shared tasks folder','{"auto_form_fields":[],"fields":{"kolabfoldertype":["task"],"objectclass":["top","kolabsharedfolder"]},"form_fields":{"acl":{"type":"imap_acl","optional":true,"default":"anyone, lrs"},"cn":[]}}'),(5,'note','Shared Notes','A shared Notes folder','{"auto_form_fields":[],"fields":{"kolabfoldertype":["note"],"objectclass":["top","kolabsharedfolder"]},"form_fields":{"acl":{"type":"imap_acl","optional":true,"default":"anyone, lrs"},"cn":[]}}'),(6,'file','Shared Files','A shared Files folder','{"auto_form_fields":[],"fields":{"kolabfoldertype":["file"],"objectclass":["top","kolabsharedfolder"]},"form_fields":{"acl":{"type":"imap_acl","optional":true,"default":"anyone, lrs"},"cn":[]}}'),(7,'mail','Shared Mail Folder','A shared mail folder','{"auto_form_fields":[],"fields":{"kolabfoldertype":["mail"],"objectclass":["top","kolabsharedfolder","mailrecipient"]},"form_fields":{"acl":{"type":"imap_acl","optional":true,"default":"anyone, lrs"},"cn":[],"alias":{"type":"list","optional":true},"kolabdelegate":{"type":"list","autocomplete":true,"optional":true},"kolaballowsmtprecipient":{"type":"list","optional":true},"kolaballowsmtpsender":{"type":"list","optional":true},"kolabtargetfolder":[],"mail":[]}}');
/*!40000 ALTER TABLE `sharedfolder_types` ENABLE KEYS */;
UNLOCK TABLES;

Go ahead and restart the server now to load everything; you don't really have to, I suppose.

Stuff I fixed/updated/changed to make stuff work….

The assets (images, CSS and such) will not load on the Roundcube web interface:

Edit /etc/roundcubemail/config.inc.php and change

$config['assets_path'] = '/roundcubemail/assets/';

to

$config['assets_path'] = '/assets/';

Still no assets, using SSL? No images and stuff? Let's check your Apache configuration. I had to add an Include line in ssl.conf.

Edit /etc/httpd/conf.d/ssl.conf
I just added the Include line below; your setup may be different, as some people use a VHOST, some use SSL, some use mod_ssl (like me), and some use other SSL setups. Some people need to include the roundcubemail.conf and some won't.

#SSLRequire (    %{SSL_CIPHER} !~ m/^(EXP|NULL)/ \
#            and %{SSL_CLIENT_S_DN_O} eq "Snake Oil, Ltd." \
#            and %{SSL_CLIENT_S_DN_OU} in {"Staff", "CA", "Dev"} \
#            and %{TIME_WDAY} >= 1 and %{TIME_WDAY} <= 5 \
#            and %{TIME_HOUR} >= 8 and %{TIME_HOUR} <= 20       ) \
#           or %{REMOTE_ADDR} =~ m/^192\.76\.162\.[0-9]+$/
#</Location>

Include conf.d/roundcubemail.conf

#   SSL Engine Options:

Okay, I also use a custom port number to access the web interface with my setup; it’s SSL, but it’s not on port 443. You need to turn off secure_urls and change a PHP variable.
Edit /etc/roundcubemail/config.inc.php

$config['use_secure_urls'] = false;

Edit /usr/share/roundcubemail/program/include/rcmail_output_html.php
Around line 187. You can see I changed the $_SERVER line to use HTTP_HOST; this will pick up the custom port.

                $base = implode('/', $_base);
            }

            $path = (rcube_utils::https_check() ? 'https' : 'http') . '://'
                . $_SERVER['HTTP_HOST'] . $base . '/' . $path;
        }

        $this->assets_path = $path;
        $this->set_env('assets_path', $path);

Do you use the files portion of Kolab and it’s not working? Let’s check our configuration for Chwala.

Edit /usr/share/roundcubemail/config/kolab_files.inc.php
Here is my file; check the top lines, the URL fields. At one point I needed to specify HTTPS, and maybe you do too, but I no longer need to.

<?php

// URL of kolab-chwala installation
//$config['kolab_files_url'] = 'http://' . $_SERVER['HTTP_HOST'] . '/chwala/';
//$config['kolab_files_url'] = 'https://' . $_SERVER['HTTP_HOST'] . '/chwala/';
$config['kolab_files_url'] = '/chwala/';

// List of files list columns. Available are: name, size, mtime, type
$config['kolab_files_list_cols'] = array('name', 'mtime', 'size');

// Name of the column to sort files list by
$config['kolab_files_sort_col'] = 'name';

// Order of the files list sort
$config['kolab_files_sort_order'] = 'asc';

?>
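
If the files section still does not work after that, it can help to verify that the Chwala endpoint is reachable at all before digging further. A rough check, assuming Chwala is served under /chwala/ on the same host (that path is an assumption from my setup; adjust host, port and certificate options to yours):

curl -k -s -o /dev/null -w "%{http_code}\n" https://localhost/chwala/api/

Anything other than a 404 at least tells you the web server alias for Chwala is in place.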

Have an Android device or other Exchange ActiveSync client that you know supports sub-folders and user-created folders, yet they never show up? Do all the emails clump together in the inbox? Try this.

Edit /usr/share/kolab-syncroton/lib/kolab_sync_data_email.php
Around line 108. Change windowsoutlook15 to android:

    public function __construct(Syncroton_Model_IDevice $device, DateTime $syncTimeStamp)
    {
        parent::__construct($device, $syncTimeStamp);

        $this->storage = rcube::get_instance()->get_storage();

        // Outlook 2013 support multi-folder
        //$this->ext_devices[] = 'windowsoutlook15';
        $this->ext_devices[] = 'android';

        if ($this->asversion >= 14) {
            $this->tag_categories = true;
        }
    }

Make sure you go into the Roundcube settings, then Folders, and verify that the new folders are checked. Then go to Settings > ActiveSync and, under your device, check the new folders as well.

After everything was good, I was still getting some odd errors with the notes portion. I could create a new notebook in Roundcube and add notes there, but I could not add or move notes into the primary notebook ‘Notes’; I kept getting an error. My Android devices could not add or read notes from it either. Very odd. After poking around I figured it was probably something with the IMAP storage portion. I was right, at least for my issue, and this is what I did.

Log in to the Kolab server via SSH or console as root and then switch to the cyrus user.

su - cyrus

If you get an error with that command, you probably need to check the login shell for cyrus. I changed the user’s login shell to bash.

usermod -s /bin/bash cyrus

Once you are running as the cyrus user, we need to reconstruct the IMAP mailbox.

cd /usr/lib/cyrus-imapd
./reconstruct -r user/test.user

For me the user/test.user was user/username@domain.net

While here I also ran

./cyr_expire -E 3 -D 3 -X 3

To prune old entries from the duplicate delivery database and permanently remove mailboxes and messages that were deleted more than three days ago.


Fri, 2014-12-19 18:23

I thought it might be helpful if I wrote a quick note about my previous work on Debian packaging and Kolab. As my blog can attest, I have written a few articles about packaging and the effort required to make Kolab somewhat more amenable to a convenient and properly functioning installation on Debian systems. Unfortunately, perhaps due to a degree of overambition and perhaps due to me being unable to deliver a convincing and/or palatable set of modifications to achieve these goals, no progress was made over the year or so I spent looking at the situation. I personally do not feel that there was enough of a productive dialogue about aligning these goals with those of the core developers of Kolab, and despite a certain level of interest from others, I no longer have the motivation to keep working on this problem.

Occasionally, I receive mails from people who have read about my experiments with Debian packaging or certain elements of Kolab configuration that became part of the packaging work. This message is intended to communicate that I am no longer working on such things. Getting Kolab to work with other mail transport/delivery/storage systems or other directory systems is not particularly difficult for those with enough experience (and I am a good example of someone who has been able to gain that experience relatively quickly), but integrating this into setup-kolab in an acceptable fashion ultimately proved to be an unrealisable goal.

Other people will presumably continue their work packaging various Kolab libraries for Debian, and some of these lower-level packages may even arrive in the stable release of Debian fairly soon, perhaps even delivering a recent version of those libraries. I do not, however, see any progress on getting other packages into Debian proper. I continue to have the opinion that this unfortunate situation will limit wider adoption of the Kolab technologies and does nobody but the proprietary competition any good.

Since I do not believe in writing software that will never be used – having had that experience in a professional setting where I at least had the consolation of getting paid for such disappointing outcomes (and for the waste of my time) – my current focus is on developing a low-impact form of standards-based calendaring for existing mail systems, without imposing extensive infrastructure requirements when adopting such a solution, and I hope to have something useful to show in the fairly near future. This time last year, I was much more upbeat about the prospect of getting Kolab into Debian and into more people’s hands. Now, I only wish that I had changed course earlier and started on my current endeavour considerably sooner.

But as people like to say: better late than never. I look forward to sharing my non-Kolab groupware developments in the coming months.


mollekopf's picture
Fri, 2014-12-19 10:28

In our current kdepim code we use some classes throughout the codebase. I’m going to outline the problems with that and propose how we can do better.

The Application Domain

Each application has a “domain” it was created for. KOrganizer, for instance, has the calendar domain, and KMail the email domain, and each of those domains can be described with domain objects, which make up the domain model. The domain model of an application is essential, because it defines how we can represent the problems of that domain. If KOrganizer didn’t have a domain model with attendees for events, we wouldn’t have any way to represent attendees internally, and thus couldn’t develop a feature based on that.

The logic implementing the functionality on top of those domain objects is the domain logic. It implements for instance what has to happen if we remove an event from a calendar, or how we can calculate all occurrences of a recurring event.

In the calendaring domain we use KCalCore to provide many of those domain objects and a large part of the domain logic. KCalCore::Event, for instance, represents an event, can hold all the necessary data of that event, and has the domain logic to calculate recurrences built in directly.
Since it is a public library, it provides domain objects and domain logic for all calendaring applications, which is awesome, right? Only if you use it right.

KCalCore

In addition to the containers and the calendaring logic, KCalCore also provides serialization to the iCalendar format, which is why it more or less tries to adhere to the iCalendar RFC for both representation and interpretation of calendaring data. This is of course very useful for applications that deal with that, and there’s nothing particularly wrong with it. One could argue that serialization and interpretation of calendaring data should be split up, but since both are described by the same RFC I think it makes a lot of sense to keep the implementations together.

Coupling

A problem arises when classes like KCalCore::Event are used as domain objects, as the interface to the storage layer, and as the actual storage format, which is precisely what we do in kdepim.

The problem is that we introduce very high coupling between those components/layers, and by choosing a library that adheres to an RFC the whole system is even locked down by a fully grown specification. I suppose that would be fine if only one application were using the storage layer, and that application’s sole purpose were to implement exactly that RFC and nothing else, ever. In all other cases I think it is a mistake.

Domain Logic

The domain logic of an application has to evolve with the application by definition. The domain objects used for that are supposed to model the problem at hand precisely, in a way that allows a domain logic to be built that is easy to understand and to evolve as requirements change. Properties that are not used by an application only hide the important bits of a domain object, and if a new feature is added it must be possible to adjust the domain object to reflect that. By using a class like KCalCore::Event for the domain object, these adjustments become largely impossible.

The consequence is that we employ workarounds everywhere. KCalCore doesn’t provide what you need? Simply store it as a “custom property”. We don’t have a class for calendars? Let’s use Akonadi::Collection with some custom attributes. Mechanisms have been designed to extend these rigid structures so we can at least work with them, but that only led to more complex code that is ever harder to understand.

Instead we could write domain logic that expresses precisely what we need, and is easier to understand and maintain.

Zanshin, for instance, took the calendaring domain and applied the Getting Things Done (GTD) methodology to it. It takes a rather simple approach to todos and initially only required a description, a due date and a state. However, it introduced the notion that only “projects” can have sub-todos. This restriction needs to be reflected in the domain model and implemented in the domain logic. Because there are no projects in KCalCore, it was simply defined that todos with a magic property “X-Project” are projects. There’s nothing wrong with that in itself, but you don’t want to litter your code with “if (todo->hasProperty(X-Project))”. So what do you do? You create a wrapper. And that wrapper is already your new domain object with a nice interface. Kevin fortunately realized that we can do better, and rewrote Zanshin with its own custom domain objects, which simply interface with the KCalCore containers in a thin translation layer to Akonadi. This made the code much clearer, and keeps those “X-Project”-style workarounds in one place only.

Layering

A useful approach for thinking about application architecture is, IMO, layers. It’s not a silver bullet, and shouldn’t be taken to excess I think, but in some cases layers do make a lot of sense. I suggest thinking about the following layers:

  • The presentation layer: Displays stuff and nothing else. This is where you expose your domain model to the UI, and where your QML sits.
  • The domain layer: The core of the application. This is where all the useful magic happens.
  • The data access layer: A thin translation layer between domain and storage. It makes it possible to use the same storage layer from multiple domains and to replace the storage layer without replacing all the rest.
  • The storage layer: The layer that persists the domain model. Akonadi.

By keeping these layers in mind we can do a better job of keeping the coupling at a reasonable level, allowing individual components to change as required.

The presentation layer is required in any case if we want to move to QML. With QML we can no longer have half of the domain logic in the UI code, and most of the domain model should probably be exposed as a model that is directly accessible by QML.

The data access layer is where Akonadi provides a standardized interface for all data, so multiple applications can share the same storage layer. This is currently made up of, e.g., KCalCore for calendars, the Akonadi client API, and a couple of Akonadi objects, such as Akonadi::Item and Akonadi::Collection. As this layer defines what data can be accessed by all applications, it needs to be flexible and likely has to evolve frequently.

The way forward

For Akonadi’s client API, aka the data access layer, I plan on defining a set of interfaces for things like calendars, events, mail folders, emails, etc. This should eventually replace KCalCore, KContacts and friends as the canonical interface to the data.

Applications should eventually move to their own domain logic implementation. For reading and structuring data, models are IMO a suitable tool, and if we design them right this will also pave the way for QML interfaces. Of course KCalCore, for example, still has its uses for its calendaring routines, or as a serialization library to create iTip messages, but we should IMO stop using it for everything. The same of course applies to KContacts.

What we still could do, IMO, is share some domain logic between applications, including some domain objects. A KDEPIM::Domain::Contact could be used across applications, just like KContacts::Addressee was. This keeps different applications from implementing the same logic, but of course also reintroduces coupling between them.

What IMO has to stay separate is the data access layer, which implements an interface to the storage layer, and that doesn’t necessarily conform to the domain layer (you could, for instance, store “blog posts” as notes in storage). This separation is IMO useful, as I expect the application domain to evolve separately from what the actual storage backends provide (see Zanshin).

This is of course quite a chunk of work that won’t happen all at once. But we need to know now where we want to end up in a couple of years, if we intend to ever get there.


roundcube's picture
Thu, 2014-12-18 01:00

We’re proud to announce the next service release to the stable version 1.0.
It contains a security fix along with some bug fixes and improvements for
the long term support branch of Roundcube. The most important ones are:

  • Security: Fix possible CSRF attacks to some address book operations
    as well as to the ACL and Managesieve plugins.
  • Fix attachments encoded in TNEF containers (from Outlook)
  • Fix compatibility with PHP 5.2

It’s considered stable and we recommend updating all production installations
of Roundcube to this version. Download it from roundcube.net/download,
see the full changelog here.

Please do a backup before updating!


Timotheus Pokorra's picture
Mon, 2014-12-15 11:08

In my last article, Kolab/Roundcube with Squirrelmail’s IMAPProxy on CentOS6, I showed how to easily configure an IMAP proxy for Roundcube, and explained the reasons for running an IMAP proxy as well.

Because I investigated the Nginx IMAP Proxy as well, and got it to work after some workarounds, I want to share that setup here too.

stunnel
With Nginx I had this problem: I was not able to connect to the Cyrus IMAP server if /etc/imapd.conf had the line allowplaintext: no. The error you get in /var/log/nginx/error.log is: Login only available under a layer.
I did not want to change the setting to allowplaintext: yes.

See also this discussion on ServerFault: Can nginx be an mail proxy for a backend server that does not accept cleartext logins?

The solution is to use stunnel.

On CentOS6, you can run yum install stunnel. Unfortunately, there is no init script installed, so you cannot run it as a service right away.

I have taken the script from the source tar.gz file from stunnel, and saved it as /etc/init.d/stunnel:

#!/bin/sh
# stunnel SysV startup file
# Copyright by Michal Trojnara 2002,2007,2008
 
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
DAEMON=/usr/bin/stunnel
PIDFILE=/var/run/stunnel/stunnel.pid
 
# Source function library.
. /etc/rc.d/init.d/functions
 
test -f $DAEMON || exit 0
 
case "$1" in
    start)
        echo -n "Starting universal SSL tunnel: stunnel"
        daemon $DAEMON || echo -n " failed"
        echo "."
        ;;
    stop)
        echo -n "Stopping universal SSL tunnel: stunnel"
        if test -r $PIDFILE; then
            kill `cat $PIDFILE` 2> /dev/null || echo -n " failed"
        else
            echo -n " no PID file"
        fi
        echo "."
        ;;
     restart|force-reload)
        echo "Restarting universal SSL tunnel"
        $0 stop
        sleep 1
        $0 start
        echo "done."
        ;;
    *)
        N=${0##*/}
        N=${N#[SK]??}
        echo "Usage: $N {start|stop|restart|force-reload}" >&2
        exit 1
        ;;
esac
 
exit 0

I have created this configuration file /etc/stunnel/stunnel.conf:

; Protocol version (all, SSLv2, SSLv3, TLSv1)
sslVersion = TLSv1
 
; Some security enhancements for UNIX systems - comment them out on Win32
chroot = /var/run/stunnel/
setuid = nobody
setgid = nobody
pid = /stunnel.pid
 
; Some performance tunings
socket = l:TCP_NODELAY=1
socket = r:TCP_NODELAY=1
 
; Use it for client mode
client = yes
; foreground = yes
 
; Service-level configuration
 
[imaps]
accept  = 8993
connect = 993

Some commands you need to run for configuring stunnel:

chmod a+x /etc/init.d/stunnel
service stunnel start
chkconfig stunnel on
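
To verify that stunnel came up and is listening on the local plaintext port, something like this should do (netstat is part of net-tools on CentOS6):

netstat -tlnp | grep 8993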

Nginx IMAP Proxy
Install with yum install nginx.

You have to provide a service for authentication. In my case, I let Cyrus decide whether the password is correct, so I just return the IP and port of the Cyrus server. I point to port 8993, which is the stunnel endpoint in front of port 993 of Cyrus.

This is my file /etc/nginx/nginx.conf

worker_processes  1;
 
events {
  worker_connections  1024;
}
 
error_log  /var/log/nginx/error.log info;
 
mail {
  auth_http  localhost:81/auth;
 
  proxy on;
  imap_capabilities  "IMAP4rev1"  "UIDPLUS"; ## default
  server {
    listen     8143;
    protocol   imap;
  }
}
 
http {
  server {
    listen localhost:81;
    location = /auth {
      add_header Auth-Status OK;
      add_header Auth-Server 127.0.0.1;  # backend ip
      add_header Auth-Port   8993;       # backend port
      return 200;
    }
  }
}

And the usual:

service nginx start
chkconfig nginx on
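
Before pointing Roundcube at the proxy, you can check that both the IMAP proxy port and the little auth service answer. The curl call should come back with the Auth-Status, Auth-Server and Auth-Port headers configured above:

netstat -tlnp | grep nginx
curl -si http://localhost:81/auth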

Roundcube configuration
You need to change the port that Roundcube connects to: instead of port 143, use 8143, where your Nginx IMAP Proxy is running.

In file /etc/roundcubemail/config.inc.php:

$config['default_port'] = 8143;

I have added the initIMAPProxy.sh script to my TBits scripts: initIMAPProxy.sh
Just change the line at the top from up-imapproxy to nginx.


Timotheus Pokorra's picture
Mon, 2014-12-15 11:06

The page http://trac.roundcube.net/wiki/Howto_Config/Performance suggests using a caching IMAP proxy. It caches the connections to the IMAP server for each user, so that not every click on a message in Roundcube leads to the creation of a new connection.

I found several alternatives.

I had a closer look at Nginx IMAP Proxy and Squirrelmail’s IMAP Proxy.

Squirrelmail’s IMAPProxy
This is actually the easiest solution, at least compared to Nginx IMAP Proxy.

Install from EPEL with: yum install up-imapproxy

I have changed the following values in /etc/imapproxy.conf:

server_hostname localhost
listen_port 8143
listen_address 127.0.0.1
server_port 143
force_tls yes

To start the service and enable it at boot:

service imapproxy start
chkconfig imapproxy on

Roundcube configuration
You need to change the port that Roundcube connects to: instead of port 143, use 8143, where Squirrelmail’s IMAP Proxy is running.

In file /etc/roundcubemail/config.inc.php:

$config['default_port'] = 8143;

I have added the initIMAPProxy.sh script to my TBits scripts: initIMAPProxy.sh

Nginx
Since the configuration of the Nginx IMAP Proxy is more complicated, I have created a separate post for that: Kolab/Roundcube with Nginx IMAP Proxy on CentOS6


Timotheus Pokorra's picture
Wed, 2014-12-03 16:28

For testing, it is useful to run setup-kolab in unattended mode.

There might be other reasons too, e.g. as part of a Docker setup.

One option is to use Puppet: GitHub: puppet-module-kolab. I don’t know enough about Puppet myself yet, and I have not tried it.

My way of doing an unattended setup is to patch setup-kolab as follows (see initSetupKolabPatches.sh to see the action in full context):

wget https://raw.githubusercontent.com/TBits/KolabScripts/Kolab3.3/kolab/patches/setupkolab_yes_quietBug2598.patch
wget https://raw.githubusercontent.com/TBits/KolabScripts/Kolab3.3/kolab/patches/setupkolab_directory_manager_pwdBug2645.patch
 
# different paths in debian and centOS
# Debian
pythonDistPackages=/usr/lib/python2.7/dist-packages
if [ ! -d $pythonDistPackages ]; then
  # centOS6
  pythonDistPackages=/usr/lib/python2.6/site-packages
  if [ ! -d $pythonDistPackages ]; then
    # centOS7
    pythonDistPackages=/usr/lib/python2.7/site-packages
  fi
fi
 
patch -p1 -i setupkolab_yes_quietBug2598.patch -d $pythonDistPackages/pykolab
patch -p1 -i setupkolab_directory_manager_pwdBug2645.patch -d $pythonDistPackages

Now you can call setup-kolab this way:

echo 2 | setup-kolab --default --timezone=Europe/Brussels --directory-manager-pwd=test

I need the echo 2 to answer the MySQL options prompt; that is a quick solution for the moment.


Aaron Seigo's picture
Tue, 2014-12-02 09:38

One of the things that came out of the Winter 2014 KDE PIM sprint in Munich is that people felt we, as a team, needed to coordinate more effectively and more often. Bringing together people from Kolab, upstream KDE PIM, downstream packagers and even users of PIM was fantastically productive, and everyone wanted to keep that ball rolling.

One suggestion was to do regular team meetings on IRC to formulate plans and keep up with each other's progress. While the mailing list is great for ongoing developer discussion and review board is fantastic for pushing forward technical steps, coordinating ourselves as a team can really be helped with an hour or two of real time discussion every so often.  So we lined up the first meeting for yesterday, and I have to say that I was very impressed at the turn-out. In all, 12 people signed up on the meeting's scheduling Doodle and I think there were even more in attendance, some just listening in but many participating.

Aleix Pol was kind enough to take notes and sent a good summary to the mailing list. The big topics we covered were the Qt5 / KDE Frameworks 5 (KF5) ports of the KDE PIM libraries, a new revision of Akonadi and a release roadmap for a Qt5/KF5 based Kontact. These are all quite important topics for both Kolab, which relies on Kontact for its desktop client, and KDE itself, so it was good to focus on them and make some progress. And make progress we did!

There will be releases of the libraries for Qt5 as frameworks coming soon. The porting effort, led largely by Laurent Montel, has done a great job to get the code to the point that such a release can be made. The kdepimutils library is nearly gone, with the useful parts finding their way to more appropriate frameworks that already exist, and kcalcore needs a bit more love ... but otherwise we're "there" and just the repository creation remains. Aleix Pol and Dan Vratil will be heading up this set of tasks, and once they are done we will be left with just the small core of libraries that rely on Akonadi. Which brings us to the next topic.

A possible major revision of Akonadi is currently being prototyped. This early development is happening in a separate repository until everyone is confident that the ideas are solid and workable in practice. The goals of this effort include producing a leaner, more robust foundation for applications that would need access to PIM data (such as Kontact), one which is also easier to develop with and for. It is still early days but we hope to have enough of an implementation in place by the end of December that we can not only start talking about it  publicly in more detail, but figure out a realistic and responsible release schedule for Kontact 5.

... and that is where we ended up with the release schedule discussion: we need more information, which  we won't have until January, before we can form a realistic schedule. So that topic has been tabled until the next IRC meeting in January.

The PIMsters won't be waiting until January for our next IRC meeting, however. There will be another one on the 15th of December. Dan Vratil will be organizing, so look for the announcement on the kde-pim at kde.org mailing list if you are interested in joining us.


Fri, 2014-11-28 18:00

ProFTPD is a versatile FTP server. I recently integrated it into my Kolab 3.3 server environment, so that user access can be easily organized with the standard kolab-webadmin. The design looks as follows:

Kolab users are able to log in to ProFTPD, but every user gets jailed in their own separate (physical) home directory. Depending on their group memberships, additional shared folders can be displayed and accessed within this home directory.

You will need proftpd with support for ldap and virtual root environments. In Debian and Ubuntu, this is achieved via module packages:

  • proftpd-mod-ldap, proftpd-mod-vroot

On other platforms you may need to compile your own proftpd.

Via kolab-webadmin I created a new organizational unit FTPGroups within parent unit Groups. Within this unit, you can now add groups of type (Pure) POSIX Group. These groups are later used to restrict or permit access to certain directories or apply other custom settings per group by using the IfGroup directive of ProFTPD.

Note that you should stick to sub-units of ou=Groups here, so that the unit will be recognized by kolab-webadmin. The LDAP record of such a group may look like this:

dn: cn=ftp_test_group,ou=FTPGroups,ou=Groups,dc=domain,dc=com
cn: ftp_test_group
gidnumber: 1234
objectclass: top
objectclass: groupofuniquenames
objectclass: posixgroup
uniquemember: uid=testuser,ou=People,dc=domain,dc=com

To make sure that our Kolab users and the groups within the sub-unit get mapped correctly to their equivalents on the FTP server, we have to edit the directives for mod_ldap. Just start with my working sample configuration ldap.conf on pastebin, which should be included from your main proftpd configuration.

Because we use the standard Kolab LDAP schema, the users possess neither a user ID nor a group ID. Therefore, ProFTPD will fall back to the LDAPDefaultUID (example: the ID of “nobody”) and LDAPDefaultGID (example: 10000). From the system side, a user with this combination of UID and GID should be allowed to read from (and maybe write to) your physical FTP directory tree. You can either add the user or group to your system and set the permissions accordingly, or use access control lists (ACLs). Since I use the ACL approach, the group with ID 10000 does not have to exist in /etc/group. You can install acl by executing

~# apt-get install acl

and mount your FTP storage device with the acl option (to make this persistent, add it to /etc/fstab) by executing

~# mount -o remount,defaults,noexec,rw,acl /dev/sda1 /var/ftp

To allow access for users in our default group 10000 (for both existing and newly created files), we use the setfacl command. Think carefully about this: we do not want users to be able to remove one of the shared folders accidentally!

~# setfacl     -m g:10000:rx  /var/ftp/
~# setfacl -d  -m g:10000:rx  /var/ftp/
~# setfacl -d -Rm g:10000:rwx /var/ftp/*
~# setfacl    -Rm g:10000:rwx /var/ftp/*
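
You can double-check the resulting ACLs, including the default entries that will apply to newly created files, with getfacl:

~# getfacl /var/ftp/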

We wanted all users to have their own home directory, which resides in /var/ftp/home/, so make sure this directory exists. To jail each user to their own home directory, change the DefaultRoot directive in your main configuration file /etc/proftpd.conf to look like

DefaultRoot  /var/ftp/home/%u

Nonexistent home directories (/var/ftp/home/username) will be created as configured in ldap.conf (see above). At this point, LDAP users should be able to log in and will be dropped into their empty home directory. Now we have to set up the directory permissions and link the shared directories into the home directories. To achieve this we will make extensive use of the IfGroup directive. It is very important that mod_ifsession.c is the last module loaded in /etc/proftpd/modules.conf! Additionally, you should have lines that load mod_vroot.c and mod_ldap.c.
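
A quick way to confirm the load order (assuming the Debian-style /etc/proftpd/modules.conf mentioned above) is to list the active LoadModule lines and check that mod_ifsession.c is the last one:

~# grep -n '^LoadModule' /etc/proftpd/modules.conf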

Linking is very simple and works as follows:

<IfGroup ftp_test_group>
   VRootAlias /var/ftp/share /share
</IfGroup>

Very useful in terms of security is limiting the use of particular FTP commands to the admin group:

# limit login to users with valid ftp_* groups
<Limit LOGIN>
   AllowGroup ftp_admin_group,ftp_test_group
</Limit>
# in general allow ftp-commands for all users
<Directory />
   <Limit ALL ALL FTP>
      AllowGroup ftp_admin_group,ftp_test_group
   </Limit>
</Directory>
# deny deletion of files (does not cover overwriting)
<Directory />
   <Limit DELE RMD>
      DenyGroup !ftp_admin_group
   </Limit>
</Directory>

I think we are done here now. Restart your FTP server with

~# service proftpd restart

Here you go! For testing purposes, set the log level to debug and monitor the login process. Also force SSL (mod_tls.c), because otherwise everything, even passwords, will be transferred in cleartext! If you run into trouble somewhere, just let me know.
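
For a quick command-line login test, curl can speak explicit FTPS. This is only a sketch: testuser is a placeholder for one of your Kolab users, and -k disables certificate verification, so use it only against your own test machine:

~# curl -v -k --ssl-reqd --user testuser ftp://localhost/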



Aaron Seigo's picture
Fri, 2014-11-28 13:38

The last month has been a wonderful romp through the land of Kolab for me, getting better acquainted with both the server and client(s) side of things. I had expected to learn quite a bit in that time, of course, but had little idea just what it would end up being. That is half the fun of exploration. Rather than keeping it all to myself, I thought I'd share some of the more interesting bits with you here.

First up: chwala. Ch-what-a? Those of you who know Polish will recognize that word immediately; it means "glory". So, first thing learned: there are fun names buried in the guts of Kolab. Collect them all! Ok, so what does it do?

It's the file storage backend for the Kolab web client. You can find the code here. As with Roundcube, the web-based groupware application that ships with Kolab, it is written in PHP and is there to glue file storage to the groupware application. It is responsible for the "save to cloud" and "attach from cloud" features in the webmail client, for instance, which allow you to keep your files on the server side between recipients on the same host. The files are also available over WebDAV, making it easy to browse and work with them from most any modern file manager.

The default storage system behind the API is exactly what you'd expect from Kolab: IMAP. This makes the file store essentially zero-configuration when setting up stock Kolab, and it gives the file store the same performance and access mechanisms as the other groupware data Kolab stores for you. Quite an elegant solution.

However, Chwala is not limited to IMAP storage. Let's say you want comprehensive offline file synchronization or you wish to integrate it with some network attached storage system you have. No problem: Chwala has a backend API with which you can implement integration with whatever storage system you wish.

In addition to the default IMAP store, Chwala also comes with a backend for Seafile, which is a free software cloud storage system with a cross-platform synchronization client (which happens to be written with Qt, by the way). The Seafile code can be found here.

I think that's pretty spiffy, and is certainly the sort of thing that makes Kolab attractive in professional settings as a "full solution". File storage is a requirement for such environments, and making it a part of the "bigger picture" infrastructure can help lift the mundane task of file management up into where your daily workflow already is.

Chwala!

p.s. A start on a file storage access system was begun in Kontact by the ever-moving, ever-typing, ever-coding Laurent Montel. It would be fantastic to see this mature over time into a full-featured bridge to functionality such as that provided by Chwala. I've used it to access files on MyKolab, but it isn't deeply integrated with Kontact yet (or at least not that I've been able to find), nor does it have support for the "to/from cloud" features.

p.p.s. I haven't yet tried Seafile myself, but have read good things about it online. If you have used it, I'd love to hear about your experiences in the comments below.


Cornelius Hald's picture
Tue, 2014-11-11 12:51

Today we’re showing how to extend the single domain setup done earlier to get a truly multi-domain Kolab install. You should probably reserve a couple of hours, as there are quite a few changes to make and not everything is totally trivial. Also, if you’ve not read the blog post about the single domain setup, now is a good time :)

First of all, you can find the official documentation here. It’s probably a good idea to read it as well. We start with the easy parts and end with postfix, which needs the most changes. At the very end there are a couple of things that may or may not be issues you should be aware of.

Change Amavisd

We tell amavisd to accept all domains.

vi /etc/amavisd/amavisd.conf
# Replace that line
@local_domains_maps = ( [".$mydomain"] );
# With this line
$local_domains_re = new_RE( qr'.*' );

Change Cyrus IMAPD

Tell the IMAP server how to find our other domains. Add the following to the bottom of /etc/imapd.conf

ldap_domain_base_dn: cn=kolab,cn=config
ldap_domain_filter: (&(objectclass=domainrelatedobject)(associateddomain=%s))
ldap_domain_name_attribute: associatedDomain
ldap_domain_scope: sub
ldap_domain_result_attribute: inetdomainbasedn

Change Roundcube (webmail)

Basically you need to change the base_dn in several places. The placeholder ‘%dc’ is replaced at run-time with the real domain the user belongs to.

To save myself some typing I’m pasting the diff output produced by git here, so it looks like more than it actually is…

diff --git a/roundcubemail/password.inc.php b/roundcubemail/password.inc.php
index c3d449c..eafc8e5 100644
--- a/roundcubemail/password.inc.php
+++ b/roundcubemail/password.inc.php
@@ -45,7 +45,7 @@

     // LDAP base name (root directory)
     // Exemple: 'dc=exemple,dc=com'
-    $config['password_ldap_basedn'] = 'ou=People,dc=skolar,dc=de';
+    $config['password_ldap_basedn'] = 'ou=People,%dc';

     // LDAP connection method
     // There is two connection method for changing a user's LDAP password.
@@ -99,7 +99,7 @@
     // If password_ldap_searchDN is set, the base to search in using the filter below.
     // Note that you should comment out the default password_ldap_userDN_mask setting
     // for this to take effect.
-    $config['password_ldap_search_base'] = 'ou=People,dc=skolar,dc=de';
+    $config['password_ldap_search_base'] = 'ou=People,%dc';

     // LDAP search filter
     // If password_ldap_searchDN is set, the filter to use when
diff --git a/roundcubemail/calendar.inc.php b/roundcubemail/calendar.inc.php
index 98be7b9..8f98f8a 100644
--- a/roundcubemail/calendar.inc.php
+++ b/roundcubemail/calendar.inc.php
@@ -22,11 +22,11 @@
             'hosts'                 => 'localhost',
             'port'                  => 389,
             'use_tls'               => false,
-            'base_dn'               => 'ou=Resources,dc=skolar,dc=de',
+            'base_dn'               => 'ou=Resources,%dc',
             'user_specific'         => true,
             'bind_dn'               => '%dn',
             'bind_pass'             => '',
-            'search_base_dn'        => 'ou=People,dc=skolar,dc=de',
+            'search_base_dn'        => 'ou=People,%dc',
             'search_bind_dn'        => 'uid=kolab-service,ou=Special Users,dc=skolar,dc=de',
             'search_bind_pw'        => 'xUlA7PzBZnRaYV4',
             'search_filter'         => '(&(objectClass=inetOrgPerson)(mail=%fu))',
diff --git a/roundcubemail/config.inc.php b/roundcubemail/config.inc.php
index bfbfba3..60dc0b2 100644
--- a/roundcubemail/config.inc.php
+++ b/roundcubemail/config.inc.php
@@ -6,7 +6,7 @@

     $config['session_domain'] = '';
     $config['des_key'] = "FMlzG7LeqiUSOSK2T8xKQTHR";
     $config['use_secure_urls'] = true;
     $config['assets_path'] = 'assets/';

@@ -154,11 +154,11 @@
                     'hosts'                     => Array('localhost'),
                     'port'                      => 389,
                     'use_tls'                   => false,
-                    'base_dn'                   => 'ou=People,dc=skolar,dc=de',
+                    'base_dn'                   => 'ou=People,%dc',
                     'user_specific'             => true,
                     'bind_dn'                   => '%dn',
                     'bind_pass'                 => '',
-                    'search_base_dn'            => 'ou=People,dc=skolar,dc=de',
+                    'search_base_dn'            => 'ou=People,%dc',
                     'search_bind_dn'            => 'uid=kolab-service,ou=Special Users,dc=skolar,
                     'search_bind_pw'            => 'xUlA7PzBZnRaYV4',
                     'search_filter'             => '(&(objectClass=inetOrgPerson)(mail=%fu))',
@@ -196,7 +196,7 @@
                             'photo'             => 'jpegphoto'
                         ),
                     'groups'                    => Array(
-                            'base_dn'           => 'ou=Groups,dc=skolar,dc=de',
+                            'base_dn'           => 'ou=Groups,%dc',
                             'filter'            => '(&' . '(|(objectclass=groupofuniquenames)(obj
                             'object_classes'    => Array("top", "groupOfUniqueNames"),
                             'member_attr'       => 'uniqueMember',
diff --git a/roundcubemail/kolab_auth.inc.php b/roundcubemail/kolab_auth.inc.php
index 9fb5335..8eff518 100644
--- a/roundcubemail/kolab_auth.inc.php
+++ b/roundcubemail/kolab_auth.inc.php
@@ -8,7 +8,7 @@
         'port'                      => 389,
         'use_tls'                   => false,
         'user_specific'             => false,
-        'base_dn'                   => 'ou=People,dc=skolar,dc=de',
+        'base_dn'                   => 'ou=People,%dc',
         'bind_dn'                   => 'uid=kolab-service,ou=Special Users,dc=skolar,dc=de',
         'bind_pass'                 => 'xUlA7PzBZnRaYV4',
         'writable'                  => false,
@@ -26,11 +26,14 @@
         'sizelimit'                 => '0',
         'timelimit'                 => '0',
         'groups'                    => Array(
-                'base_dn'           => 'ou=Groups,dc=skolar,dc=de',
+                'base_dn'           => 'ou=Groups,%dc',
                 'filter'            => '(|(objectclass=groupofuniquenames)(objectclass=groupofurl
                 'object_classes'    => Array('top', 'groupOfUniqueNames'),
                 'member_attr'       => 'uniqueMember',
             ),
+        'domain_base_dn'           => 'cn=kolab,cn=config',
+        'domain_filter'            => '(&(objectclass=domainrelatedobject)(associateddomain=%s))',
+        'domain_name_attr'         => 'associateddomain',
     );

Change Postfix

Now this is actually the hardest part that requires the most changes. Initially I thought there would be a way around that, but it looks like it is currently really needed.

First we apply a couple of changes that allow us to have multiple domains besides our management domain (the domain we used to install Kolab). However, those changes will not support domains having aliases, e.g. the domain kodira.de with an alias of tourschall.com. To get domains with working aliases, we need to do even more.

Postfix Part 1 (basics)

Please follow the instructions given in the official documentation here. I don’t really see how I could write that part better or more compactly. Apply all the changes for mydestination, local_recipient_maps, virtual_alias_maps and transport_maps.

Now, if you don’t need aliases, you’re basically done and you can skip the next section.

Postfix Part 2 (alias domains)

For each domain that should support alias domains we need to add 4 files. We’re doing this based on the following example.

  • Domain: kodira.de
  • Alias: tourschall.com

First create the directory /etc/postfix/ldap/kodira.de (name of the real domain)

In that directory create the following four files, but do not just copy & paste them; you have to adjust them to your setup.

# local_recipient_maps.cf
# Adjust domain, bind_dn, bind_pw
server_host = localhost
server_port = 389
version = 3
search_base = cn=kolab,cn=config
scope = sub
domain = ldap:/etc/postfix/ldap/kodira.de/mydestination.cf
bind_dn = uid=kolab-service,ou=Special Users,dc=skolar,dc=de
bind_pw = XXX
query_filter = (&(|(mail=%s)(alias=%s))(|(objectclass=kolabinetorgperson)(|(objectclass=kolabgroupofuniquenames)(objectclass=kolabgroupofurls))(|(|(objectclass=groupofuniquenames)(objectclass=groupofurls))(objectclass=kolabsharedfolder))(objectclass=kolabsharedfolder)))
result_attribute = mail
# mydestination.cf
# Adjust bind_dn, bind_pw, query_filter
server_host = localhost
server_port = 389
version = 3
search_base = cn=kolab,cn=config
scope = sub
bind_dn = uid=kolab-service,ou=Special Users,dc=skolar,dc=de
bind_pw = XXX
query_filter = (&(associatedDomain=%s)(associatedDomain=kodira.de))
result_attribute = associateddomain
# transport_maps.cf
# Adjust domain, bind_dn, bind_pw
server_host = localhost
server_port = 389
version = 3
search_base = cn=kolab,cn=config
scope = sub
domain = ldap:/etc/postfix/ldap/kodira.de/mydestination.cf
bind_dn = uid=kolab-service,ou=Special Users,dc=skolar,dc=de
bind_pw = XXX
query_filter = (&(|(mailAlternateAddress=%s)(alias=%s)(mail=%s))(objectclass=kolabinetorgperson))
result_attribute = mail
result_format = lmtp:unix:/var/lib/imap/socket/lmtp
# virtual_alias_maps.cf
# Adjust search_base, domain, bind_dn, bind_pw
server_host = localhost
server_port = 389
version = 3
search_base = dc=kodira,dc=de
scope = sub
domain = ldap:/etc/postfix/ldap/kodira.de/mydestination.cf
bind_dn = uid=kolab-service,ou=Special Users,dc=skolar,dc=de
bind_pw = XXX
query_filter = (&(|(mail=%s)(alias=%s))(objectclass=kolabinetorgperson))
result_attribute = mail

Almost done, but don’t forget to reference those files from /etc/postfix/main.cf.
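
Exactly how those references look depends on what the earlier single-domain setup already put into your main.cf, so first inspect the current values and then append the per-domain tables. A rough sketch for the kodira.de example (the "..." stands for whatever is already configured):

postconf transport_maps virtual_alias_maps local_recipient_maps
# then append the per-domain tables in /etc/postfix/main.cf, for example:
#   transport_maps = ..., ldap:/etc/postfix/ldap/kodira.de/transport_maps.cf
#   virtual_alias_maps = ..., ldap:/etc/postfix/ldap/kodira.de/virtual_alias_maps.cf
#   local_recipient_maps = ..., ldap:/etc/postfix/ldap/kodira.de/local_recipient_maps.cf
service postfix reload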

The bad news is: you have to add and adjust those 4 files for each domain which should support aliases. But the good news is: once configured you can use as many aliases for that domain as you want. No need to change config files for that.

Postfix Part 3 (finishing up)

Restart all services or just reboot the machine. Most things should work now, but there are a couple of points you might still need to take care of.

  1. In our main.cf there were references to some catchall maps that we do not use and that do not exist on the file system. Therefore Postfix stopped looking at the rest of those maps. We simply deleted the catchall references from main.cf and got rid of that problem.
  2. In our setup we had an issue with a domain having an alias with more than two parts, e.g. mail.kodira.de. As we don’t need addresses in the form of user@host.domain.tld, we removed this alias and thus solved the problem.
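
If you want to verify the per-domain LDAP lookups directly, postmap can query the tables. The addresses below are placeholders for a user that actually exists in kodira.de (with tourschall.com as its alias domain):

postmap -q user@kodira.de ldap:/etc/postfix/ldap/kodira.de/local_recipient_maps.cf
postmap -q user@tourschall.com ldap:/etc/postfix/ldap/kodira.de/virtual_alias_maps.cf

Each query should print the user's primary mail address if the lookup chain works.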

Create domains and users using WAP

Now you should be able to use the ‘Kolab Web Administration Panel’ (WAP) to create domains and users.

  1. Go to http://<yourserver>/kolab-webadmin
  2. Login as ‘cn=Directory Manager’
  3. Go to ‘Domains’ and add a domain (simply giving it a name is enough)
  4. If you want, add an alias to this domain by clicking the ‘+’ sign
  5. Logout
  6. Login again as ‘cn=Directory Manager’
  7. In the top right corner you should be able to select your newly created domain. Select it.
  8. Go to ‘Users’ and add a user to your new domain
  9. If you want, give the user the role ‘kolab-admin’. If you do, that user is able to log into WAP and to administrate that domain. For that login you should not use LDAP notation, but simply user@domain.tld.

Now maybe create a couple of test users on various domains and try to send some mails back and forth. It should work. If not, have a look at these log files:

  • /var/log/maillog
  • /var/log/dirsrv/slapd-mail/access

Also grep for ‘kodira’, ‘tourschall’ and ‘example’ in /etc/ to make sure you didn’t accidentally forget to change some example configuration (one way to do that is shown below). Last but not least, think about putting /etc/ into a git repository; that will help you review and restore the changes you’ve made.
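
One way to run that grep (adjust the directories to wherever your Kolab-related configuration lives):

grep -ril 'kodira\|tourschall\|example' /etc/postfix /etc/roundcubemail /etc/kolab /etc/imapd.conf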

Good luck and have fun :)

The post Kolab 3.3 Multi-Domain Setup on CentOS 7 appeared first on Kodira.