Planet

Aaron Seigo
Fri, 2015-07-17 17:41

I try to keep a memory of how various aspects of development were for me in past years. I do this by keeping specific projects I've been involved with fresh in my mind, revisiting them every so often and reflecting on how my methods and experiences have changed in the time since. This allows me to wander 5, 10, 15, even 20 years into the past and reflect.

Today I was presenting the "final" code-level design for a project I've been tasked with: an IMAP payload filter for use with Kolab. The best way I can think of to describe it is as a protocol-level firewall (of sorts) for IMAP. The first concrete use case we have for it is to allow non-Kolab-aware clients (e.g. Thunderbird) to connect to a Kolab server and see only the mail folders, implying that the groupware folders are filtered out of the IMAP session. There are a large number of other use case ideas floating about, however, and we wanted to make sure that we could accommodate those in the future by extending the codebase. While drawing out on the whiteboard how I planned for this to come together, along with a break-out of the work into two-week sprints, I commented in passing that it was actually a nicely simple program.

In particular, I'm quite pleased that the "filter groupware folders" feature will actually be implemented quite late in the project, as a very simple and very isolated module that sits on top of general-purpose scaffolding for real-time manipulation of an IMAP stream.

When I arrived back at my desk, I took a moment to reflect on how I would have perceived the same project earlier in my career. One thing that sprang out at me was that the shape of the program was very clear in my head. Roll back a decade and the details would have been much fuzzier. Roll back 15 years and it probably would have been quite hand-wavy at the early stages. Today, I can envision a completed codebase.

If someone had presented that vision to me 10 or 15 years ago, I would have accepted it quite happily ("Yes! A plan!"). Today, I know that plan is a lie in much the same way as a 14-day weather report is: it is the best we can say about the near future from our knowledge of today. If nothing changes, that's what it will be. Things always change, however. This is one of life's few constants.

So what point is there to being able to see an end point? That's a good question, and I have to say that I never attempted to develop the ability to see a codebase in this amount of detail before writing it. It just sort of happened with time and experience, one of the few bonuses of getting older. ;) As such, one might think that since the final codebase will almost certainly not look exactly like what is floating about in my head, this is not actually a good thing to have at all. Could it perhaps lock one mentally into a path which can be realized, but which, when complete, will not match what is actually needed?

A lot of modern development practice revolves around the idea of flexibility. This shows up in various forms: iteration, approaching design in a "fractal" fashion, implementing only what you need now, etc. A challenge inherent in many of these approaches is the risk of growing short-sighted. So often I see projects switch data storage systems, for instance, as they run into completely predictable scalability, performance or durability requirements over time. It's amazing how much developer time is thrown away simply by misjudging at the start what an appropriate storage system would be.

This is where having a long view is really helpful. It should inform the developer(s) about realistic possible futures which can eliminate many classes of "false starts" right at the beginning. It also means that code can be written with purpose and confidence right from the start, because you know where you are headed.

The trick comes in treating this guidance as the lie it is. One must be ready and able to modify that vision continuously to reflect changes in knowledge and requirement. In this way one is not stuck in an inflexible mission while still having enough direction to usefully steer by. My experience has been that this saves a hell of a lot of work in the long run and forces one to consider "flexible enough" designs from the start.

Over the years I've gotten much better at "flexible enough" design, and being able to "dance" the design through the changing sea of time and realities. I expect I will look back in 5, 10, 15 and 20 years and remark on how much I've learned since now, as well.

I am reminded of steering a boat at sea. You point the vessel to where you want to go, along a path you have in your mind that will take you around rocks and currents and weather. You commit to that path. And when the ocean or the weather changes, something you can count on happening, you update your plans and continue steering. Eventually you get there.


mollekopf
Fri, 2015-07-10 14:49

I recently had the dubious pleasure of getting Kontact to work on Windows, and after two weeks of agony it also yielded some results =)

Not only did I get Kontact to build on Windows (sadly still something to be proud of), it is also largely functional. Even timezones are now working in a way that you can collaborate with non-Windows users, although that required a patch or two to kdelibs.

To make the whole exercise as reproducible as possible I collected my complete setup in a git repository [0]. Note that these builds are from the Kolab stable branches, and not all the Windows-specific fixes have made it back upstream yet. That will follow as soon as the waters calm a bit.

If you want to try it yourself you can download an installer here [1], and if you don't (I won't judge you for not using Windows) you can look at the pretty pictures.

[0] https://github.com/cmollekopf/kdepimwindows
[1] http://mirror.kolabsys.com/pub/upload/windows/Kontact-E5-2015-06-30-19-41.exe

[screenshot: Kontact account wizard on Windows]


Aaron Seigo
Fri, 2015-07-03 15:17

Roundcube Next: The Next Steps

The crowdfunding campaign to provide funding and greater community engagement around the refactoring of Roundcube's core to give it a secure future has just wrapped up. We managed to raise $103,531 from 870 people. This obviously surpassed our goal of $80,000, so we're pretty ecstatic. This is not the end, however: now we begin the journey to delivering a first release of Roundcube Next. This blog entry outlines some of that path forward.


Perks

The most obvious thing on our list is to get people's t-shirts and stickers out to them. We have a few hundred of them to print and ship, and it looks like we may be missing a few shipping addresses, so I'll be following up with those people next week. Below is a sneak peek of what the shirts might look like. We're still working out the details, so they may look a bit different than this once they come off the presses, but this should give you an idea. We'll be in touch with people for shirt sizes, color options, etc. in the coming week.


Those who elected for the Kolaborator perk will be notified by email about how to redeem their free months on Kolab Now. Of course, everyone who elected for the in-application-credits mention will get that in due time as well. We've got you all covered! :)

Note that it takes a couple of weeks for Indiegogo to get the funds to us, and we need to wait on that before confirming our orders and shipping for the physical perk items.

Roundcube Backstage

We'll be opening the Roundcube Backstage area in the two weeks or so after wrap-up is complete next week. This will give us enough time to create the Backstage user accounts and get the first set of content in place. We will be using the Discourse platform for discussions and for posting our weekly Backstage updates. I'm really looking forward to reading your feedback there, answering questions, contemplating the amazing future that lies ahead of us, ...

The usual channels of Roundcube blogging, forums and mailing lists will of course remain in use, but the Backstage will see all sorts of extras and closer direct interaction with the developers. If you picked up the Backstage perk, you will get an email next week with information on when and where you can activate your account.

Advisory Committee

The advisory committee members will also be getting an email next week with a welcome note. You'll be asked to confirm who the contact person should be, and they'll get a welcome package with further information. We'll also want some information for use in the credits badge: a logo we can use, a short description you'd like to see with that logo describing your group/company, and the web address we should point people to.

The Actual Project!

The funds we raised will cover getting the new core in place with basic email, contacts and settings apps. We will be able to adopt JMAP into this and build the foundations we so desperately need. The responsive UI that works on phones, tablets and desktop/laptop systems will come as a result of this work as well, something we are all really looking forward to.

Today we had an all-hands meeting to take our current requirements, mock-ups and design docs and reflect on how the feedback we received during the campaign should influence those. We are now putting all this together in a clear and concise form that we can share with everyone, particularly our Advisory Committee members as well as in the Backstage area. This will form the basis for our first round of stakeholder feedback, which I am really looking forward to.

We are committed to building the most productive and collaborative community around any webmail system out there, and these are just our first steps. That we have the opportunity here to work with the likes of Fastmail and Mailpile, two entities that one may have thought of as competitors rather than possible collaborators, really shows our direction in terms of inclusivity and looking for opportunities to collaborate.

Though we are at the end of this crowdfunding phase, this is really just the beginning, and the entire team here isn't waiting a moment to get rolling! Mostly because we're too excited to do anything else ;)



greve
Thu, 2015-07-02 10:01

Software is a social endeavour. The most important advantage of Free Software is its community, because the best Open Source is built by a community of contributors; contribution is the single most important currency and the differentiation between users and community. You want to be part of that community, at least by proxy, because like any community, its members spend time together, exchange ideas, and create cohesion that translates into innovation, features and best practices.

We create nothing less than a common vision of the future.

By the rules of our community, anyone can take our software and use it, extend it, distribute it. A lot of value can be created this way and not everyone has the capabilities to contribute. Others choose not to contribute in order to maximise their personal profits. Short of actively harming others, egoism, even in its most extreme forms, is to be accepted. That is not to say it is necessarily a good idea for you to put the safeguarding of your own personal interests into the hands of an extreme egoist. Or that you should trust in their going the extra mile for you in all the places that you cannot verify.

That is why the most important lesson for non-developers is this: Choose providers based on community participation. Not only are they more likely to know early about problems, putting them in a much better position to provide you with the security you require. They will also ensure you will have a future you like.

Developers know all this already, of course, and typically apply it at least subconsciously.

Growing that kind of community has been one of the key motives to launch Roundcube Next, which is now coming close to closing its phase of bringing together its key contributors. Naturally everyone had good reasons to get involved, as recently covered on Venturebeat.

Last night Sandstorm.io became the single greatest contributor to the campaign in order to build that better future together, for everyone. Over the past weeks, many other companies, some big, some small, have done the same.

Together, we will be that community that will build the future.


mollekopf
Wed, 2015-07-01 17:22

Reproducible testing is hard, and doing it without automated tests is even harder. With Kontact we're unfortunately not yet in a position where we can cover all of the functionality with automated tests.

If manual testing is required, being able to bring the test system into a “clean” state after every test is key to reproducibility.

Fortunately we now have a lightweight virtualization technology available with Linux containers, and Docker makes them fairly trivial to use.

Docker

Docker allows us to create, start and stop containers very easily based on images. Every image contains the current file system state, and each running container is essentially a chroot containing that image content, and a process running in it. Let that process be bash and you have pretty much a fully functional linux system.

The nice thing about this is that it is possible to run an Ubuntu 12.04 container on a Fedora 22 host (or whatever suits your fancy), and whatever I'm doing in the container is not affected by what happens on the host system. Upgrading the host system, for instance, does not affect the container.

Also, starting a container is a matter of a second.
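In shell terms that boils down to a single command; here is a minimal sketch (the image name is only an example):

```shell
# Compose the command for a throwaway interactive container:
# --rm removes the container again on exit, -ti attaches a terminal.
image="ubuntu:12.04"
run_cmd="docker run --rm -ti $image /bin/bash"
echo "$run_cmd"
```

Running that command drops you into a bash prompt inside the container; exit, and the container is gone again.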

Reproducible builds

There is a large variety of distributions out there, and every distribution has its own unique set of dependency versions, so if a colleague is facing a build issue, it is by no means guaranteed that I can reproduce the same problem on my system.

As an additional annoyance, any system upgrade can break my local build setup, meaning I have to be very careful with upgrading my system if I don’t have the time to rebuild it from scratch.

Moving the build system into a docker container therefore has a variety of advantages:
* Builds are reproducible across different machines
* Build dependencies can be centrally managed
* The build system is no longer affected by changes in the host system
* Building for different distributions is a matter of having a couple of docker containers

For building I chose to use kdesrc-build, so building all the necessary repositories is the least amount of effort.

Because I’m still editing the code from outside of the docker container (where my editor runs), I’m simply mounting the source code directory into the container. That way I don’t have to work inside the container, but my builds are still isolated.

Further I’m also mounting the install and build directories, meaning my containers don’t have to store anything and can be completely non-persistent (the less customized, the more reproducible), while I keep my builds fast and incremental. This is not about packaging after all.

Reproducible testing

Now we have a set of binaries that we compiled in a docker container using certain dependencies, so all we need to run the binaries is a docker container that has the necessary runtime dependencies installed.

After a bit of hackery to reuse the host's X11 socket, it's possible to run graphical applications inside a properly set up container.
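A common form of that hackery (sketched here; the image and binary names kontact-env/kontact are assumptions) is to pass the host's X11 socket and a DISPLAY setting into the container:

```shell
# Share the host X11 socket and tell the container which display to use,
# so graphical applications inside it can draw on the host's screen.
x11_cmd="docker run --rm -ti -e DISPLAY=:0 -v /tmp/.X11-unix:/tmp/.X11-unix kontact-env kontact"
echo "$x11_cmd"
```

Depending on the X server's access control you may additionally need to allow local connections (e.g. with xhost).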

The binaries are directly mounted from the install directory, and the prepared docker image contains everything from the necessary configurations to a seeded Kontact configuration for what I need to test. That way it is guaranteed that every time I start the container, Kontact starts up in exactly the same state, zero clicks required. Issues discovered that way can very reliably be reproduced across different machines, as the only thing that differs between two setups is the used hardware (which is largely irrelevant for Kontact).

..with a server

Because I’m typically testing Kontact against a Kolab server, I of course also have a docker container running Kolab. I can again seed the image with various settings (I have for instance a John Doe account setup, for which I have the account and credentials already setup in client container), and the server is completely fresh on every start.

Wrapping it all up

Because a bunch of commands is involved, it's worthwhile writing a couple of scripts to make the usage as easy as possible.

I went for a python wrapper which allows me to:
* build and install kdepim: “devenv srcbuild install kdepim”
* get a shell in the kdepim dir: “devenv srcbuild shell kdepim”
* start the test environment: “devenv start set1 john”

When starting the environment the first parameter defines the dataset used by the server, and the second one specifies which client to start, so I can have two Kontact instances with different users for invitation handling testing and such.
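To illustrate, a call like “devenv start set1 john” could roughly expand to docker commands along these lines (all image and container names here are hypothetical):

```shell
dataset="set1"
client="john"
# Start the seeded Kolab server for the chosen dataset in the background...
server_cmd="docker run -d --name kolab-$dataset kolab-server-$dataset"
# ...and a client container whose image carries the seeded Kontact
# configuration for the chosen user, linked to the server container.
client_cmd="docker run --rm -ti --link kolab-$dataset:kolab kontact-client-$client kontact"
echo "$server_cmd"
echo "$client_cmd"
```

Because the server container is named after the dataset, a second client for a different user can be linked against the very same server.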

Of course you can issue any arbitrary command inside the container, so this can be extended however necessary.

While that would of course have been possible with VMs for a long time, there is a fundamental difference in performance. Executing the build has no noticeable delay compared to simply issuing make, and that includes creating a container from an image, starting the container, and cleaning it up afterwards. Starting the test server + client also takes all of 3 seconds. This kind of efficiency is really what enables us to use this in a lather, rinse, repeat approach.

The development environment

I’m still using the development environment on the host system, so all file-editing and git handling etc. happens as usual so far. I still require the build dependencies on the host system, so clang can compile my files (using YouCompleteMe) and hint if I made a typo, but at least these dependencies are decoupled from what I’m using to build Kontact itself.

I also did a little bit of integration in Vim, so my Make command now actually executes the docker command. This way I get seamless integration and I don’t even notice that I’m no longer building on the host system. Sweet.

While I’m using Vim, there’s no reason why that shouldn’t work with KDevelop (or whatever really..).

I might dockerize my development environment as well (vim + tmux + zsh + git), but more on that in another post.

Overall I’m very happy with the results of investing in a couple of docker containers, and I doubt we could have done the work we did, without that setup. At least not without a bunch of dedicated machines just for that. I’m likely to invest more in that setup, and I’m currently contemplating dockerizing also my development setup.

In any case, sources can be found here:
https://github.com/cmollekopf/docker.git


Timotheus Pokorra
Wed, 2015-07-01 12:52

Just before the Kolab Summit, at the end of April 2015, the Phabricator instance for Kolab went online! Thanks to Jeroen and the team from Kolab Systems who made that happen!

I have to admit it took me a while to get to understand Phabricator, and how to use it. I am still learning, but I now know enough to write an initial post about it.

Phabricator describes itself as an “open source, software engineering platform”. It aims to provide all the tools you need to engineer software. In their words: “a collection of open source web applications that help software companies build better software”.

To some degree, it replaces solutions like Github or Gitlab, but it has much more than Code Repository, Bug Tracking and Wiki functionality. It also has tools for Code Review, Notifications and Continuous builds, Project and Task Management, and much more. For a full list, see http://phabricator.org/applications/

In this post, I want to focus on how you work with the code, and how to submit patches. I am quite used to the idea of Pull Requests as Github does it. Things are a little bit different with Phabricator. But when you get used to it, they are probably more powerful.

Starting with browsing the code: there is the Diffusion application. You can see all the Kolab projects there.
It also shows the “git clone” command at the top for each project.
Admittedly, that is quite crowded, and if you still want the simple cgit interface, you get it here: https://cgit.kolab.org/

Now imagine you have fixed a bug or want to submit a change for the Kolab documentation (project docs). You clone the repository, locally edit the files, and commit them locally.

You can submit patches online with the application Differential: Go to Differential, and at the top right, you find the link “Create Diff“: There you can paste your patch or upload it from file, and specify which project/repository it is for. All the developers of that repository will be notified of your patch. They will review it, and if they accept it, the patch is ready to land. I will explain that below.

Alternatively, you can submit a patch from the command line as well!
Let me introduce you to Arcanist: this is a command line application which is part of Phabricator, that helps with integration of your git directory with Phabricator. There is a good manual for Arcanist: Arcanist User Guide
Arcanist is not part of Fedora yet (I have not checked other distributions), but you can install it from the Kolab Development repository like this, e.g. on Fedora 21:

# import the Kolab key
rpm --import "http://keyserver.ubuntu.com/pks/lookup?op=get&search=0x830C2BCF446D5A45"
curl http://obs.kolabsys.com/repositories/Kolab:/Development/Fedora_21/Kolab:Development.repo -o /etc/yum.repos.d/KolabDevelopment.repo
yum install arcanist
 
# configure arcanist (file: ~/.arcrc)
arc set-config default https://git.kolab.org
arc install-certificate
# go to https://git.kolab.org/conduit/login/ and copy the token and paste it to arc

Now you can create a clone of the repository, in this example the Kolab Documentation:

git clone https://git.kolab.org/diffusion/D/docs.git
# if you have already an account on git.kolab.org, and uploaded your SSH key to your configuration:
# git clone ssh://git@git.kolab.org/diffusion/D/docs.git
cd docs
# do your changes
# vi source/installation-guide/centos.rst
git commit -a
arc diff            # Creates a new revision out of ALL unpushed commits on
                    # this branch.

This will also create a code review item on Differential!

For more options of arc diff, see the Arcanist User Guide on arc diff

By the way, have a look at this example: https://git.kolab.org/D23

Now after your code change was reviewed and accepted, your code change is “ready to land”.
What happens next depends on whether you have write permissions on the repository. If you don’t have them, ask on IRC (freenode #kolab) or on the kolab developers’ mailing list for someone to merge your change.

If you have push permissions, this is what you do (if D23 is your Differential id):

# assuming you have Arcanist configured as described above...
arc patch --nobranch D23
# if we are dealing with a branch:
# arc land D23
git push origin master

I hope this helps to get started with using Phabricator, and it encourages you to keep or start submitting patches to make Kolab even better!


Aaron Seigo
Mon, 2015-06-29 21:36

Roundcube Next crowdfunding success and community

A couple days ago, the Roundcube Next crowdfunding campaign reached our initial funding goal. We even got a piece on Venture Beat, among other places. This was a fantastic result and a nice reward for quite a bit of effort on the entire team's part.

Reaching our funding goal was great, but for me personally the money is secondary to something even more important: community.

You see, Roundcube has been an Internet success for a decade now, but when I sat down to talk with the developers about who their community was and who was participating from it, there wasn't as much to say as one might hope for such a significant project used by that many people.

Unlike the free software projects born in the 90s, many projects these days are not very community focused. They are often much more pragmatic, but also far less idealistic. This is not a bad thing, and I have to say that the focus many of them have on quality (of various sorts) is excellent. There is also a greater tendency to have a company founded around them, a greater tendency to be hosted on the mostly-proprietary Github system with little in the way of community connection other than pull requests. Unlike the Free software projects I have spent most of my time with, these projects hardly try at all to engage with people outside their core team.

This lack of engagement is troubling. Community is one of the open source1 methodology's greatest assets. It is what allows for mutual interests to create a self-reinforcing cycle of creation and support. Without it, you might get a lot of software (though you just as well might not), but you are quite unlikely to get the buy-in, participation and thereby amplifiers and sustainability of the open source of the pre-Github era.

So when we designed the Roundcube Next campaign, we positioned no less than 4 of the perks to be participatory. There are two perks aimed at individual backers (at $75 and $100) which get those people access to what we're calling the Backstage Pass forums. These forums will be directed by the Roundcube core team, and will focus on interaction with the end users and people who host their own instance of Roundcube. Then we have two aimed at larger companies (at $5,000 and $10,000) who use Roundcube as part of their services. Those perks gain them access to Roundcube's new Advisory Committee.

So while these backers are helping us make Roundcube Next a reality, they are also paving a way to participation for themselves. The feedback from them has been extremely good so far, and we will build on that to create the community Roundcube deserves and needs. One that can feed Roundcube with all the forms of support a high profile Free software product requires.

So this crowdfunding campaign is really just the beginning. After this success, we'll surely be doing more fund raising drives in future, and we'd still love to hit our first stretch goal of $120,000 ... but even more vitally this campaign is allowing us to draw closer to our users and deployers, and them with us until, one hopes, there is only an "us": the people who make Roundcube happen together.

That we'll also be delivering the most kick-ass-ever version of Roundcube is pretty damn exciting, too. ;)

p.s. You all have 3 more days to get in on the fun!


1 I differentiate between "Free software" as a philosophy, and "open source" as a methodology; they are not mutually exclusive, but they are different beasts in almost every way, most notably how one is an ideology and the other is a practice.


Aaron Seigo
Sat, 2015-06-27 10:22

Riak KV, Basho and Kolab

As I have mentioned in earlier blog entries, Kolab Enterprise has gained data loss prevention (DLP) functionality this year that goes above and beyond what one tends to find in other groupware products. Kolab's DLP is not just a back-up system that copies mails and other objects to disk for later restore; it actually creates a history of every groupware object in real-time that can later be examined and restored from. This will eventually lead to some very interesting business intelligence features.

The storage system for the Kolab DLP system is Basho's industry-leading distributed NoSQL database, Riak KV. (The "KV" stands for key/value.) We chose Riak KV because it scales naturally (it is designed to be run as a cluster of nodes by default), is robust by design (CAP Theorem ftw), and is dead simple to deploy on development and production machines alike. A further key factor for us is that Basho provides proven enterprise-grade support for its line of Riak products. This was a requirement for us as we need to provide enterprise-grade support for the entire Kolab Enterprise stack.

(It was a nice coincidence that both Riak and core parts of Kolab's DLP system are written in Erlang. ;)

I sat down with Manu Marchel, Managing Director for EMEA at Basho Technologies Inc., recently for a mutual interview. You can read my interview on the Basho blog (I'll update this entry with a link when it is published); here is a transcript of my conversation with Manu:

NoSQL is quite a new technology in the Big Data space. Many people might have heard about things like Hadoop, but how does NoSQL fit in? Could you give everyone the quick cheatsheet on what NoSQL databases are, and specifically on Riak KV, your key-value NoSQL database?

NoSQL databases are the new generation of databases that were designed to address the needs of enterprises to store, manage, analyse and serve the ever increasing amounts of unstructured data that make up over 80% of all data being created nowadays in public clouds or private infrastructures. Apache Hadoop has done a great job of handling batch analytics use cases at massive scale for unstructured data, what I would call exploratory or discovery analytics. What NoSQL databases like Riak do in comparison is help organisations manage their active data workloads as well, providing near real time operational analytics at scale. Most importantly, most businesses need scalability, availability and fault tolerance attributes as core requirements of their current and future applications architecture, and these are deciding factors for NoSQL against traditional relational databases. NoSQL databases started as one of 4 types: Key-Value, Column store, Document and Graph, but nowadays they are becoming multi model, whereby for example Riak can efficiently handle key value, but also documents as well as log/time series data, as demonstrated by our wide range of customers including Kolab Systems.

Riak KV is the most widely adopted NoSQL Key Value database with scalability, high availability, fault tolerance and operational simplicity as its key properties. It is used most often for mission critical use cases and works great for handling User Data, Session Data, Profile Data, Social Data, Real-time Data and Logging Data use cases. It provides near realtime analytics with its secondary indexes, search through Solr, in-situ Map/Reduce and soon to come Apache Spark support. Finally its multi data center replication capability makes it easy to ensure business continuity, geo location of data for low latency access across continents or by segregating workloads to ensure very reliable low latency.

Riak KV is known for its durability, which is part of the reason we chose it for Kolab's DLP system. Could you give us some insight into how Riak KV achieves this?

Hardware does fail, and when it does your IT infrastructure needs to be able to cope and your systems must continue to operate while getting the resources back online as soon as possible. Riak KV was designed to eliminate the impact of the unexpected. Even if network partitions or hardware failures cause unanticipated outages, Riak KV can still read and write your data. This means you never lose data even when part of the system goes down. For Kolab customers, it means that they have the security of knowing that the data loss prevention and auditing that they are paying for is backed up by the best system available to deliver on this promise.

Availability seems to be a very important thing for databases in today’s digital age. How is Riak providing this key feature to Kolab and how does this enhance the Kolab offering?

Simply, Riak KV is designed to intelligently replicate and retrieve data making sure that applications based on the database are always available. Scalability also comes into play here as well. Unlike traditional databases, Riak is designed to respond to the billions of data points and terabytes of data that are being produced -- often in real-time -- as it is able to scale in a near linear fashion to give Kolab the best possible performance. Ultimately this means that Kolab’s application is always available so as an end-user you don’t experience any system outages no matter how busy or active your users are.

We integrate RiakKV with Kolab’s data loss prevention system to store groupware object histories in realtime for auditing and roll-back if needed. Is this unique?

Yes! This is a great example of two great technologies working together to provide an excellent customer experience. Combining the power of Riak KV’s high availability, fault tolerance, and scalability with Kolab’s data loss prevention system means that you have an incredibly strong and powerful system.

Basho is a really unique name for a technology company - is there any history or background to it?

Thank you, we really like our name too. Basho’s name was inspired by the real life Matsuo Basho (1644 – 1694) who is considered by many to be Japan's most renowned and revered writer of haiku. Haiku are known for their balance of lines and syllables where the simplicity of the structure is important. This is a founding, guiding principle that Riak KV is based on, as our operational simplicity is core to our architecture eliminating the need for mindless manual operations as data can be automatically and uniformly distributed.

To see the partnership of Basho's Riak and Kolab Enterprise in action together, come see us in Munich at the TDWI European Conference 22-24th June. We'll be in a booth showing both Riak KV and Kolab Enterprise, and will be happy to answer your questions!


Aaron Seigo's picture
Fri, 2015-06-19 16:51

Riak KV, Basho and Kolab

As I have mentioned in earlier blog entries, Kolab Enterprise has gained data loss prevention (DLP) functionality this year that goes above and beyond what one tends to find in other groupware products. Kolab's DLP is not just a back-up system that copies mails and other objects to disk for later restore; it actually creates a history of every groupware object in real-time that can later be examined and restored from. This will eventually lead to some very interesting business intelligence features.

The storage system for the Kolab DLP system is Basho's industry-leading distributed NoSQL database, Riak KV. (The "KV" stands for key/value.) We chose Riak KV because it scales naturally (it is designed to be run as a cluster of nodes by default), is robust by design (CAP Theorem ftw), and is dead simple to deploy on development and production machines alike. A further key factor for us is that Basho provides proven enterprise-grade support for its line of Riak products. This was a requirement for us as we need to provide enterprise-grade support for the entire Kolab Enterprise stack.

(It was a nice coincidence that both Riak and core parts of Kolab's DLP system are written in Erlang. ;)

I sat down with Manu Marchel, Managing Director for EMEA at Basho Technologies Inc., recently for a mutual interview. You can read my interview on the Basho blog (I'll update this entry with a link to their blog when it is published); here is a transcript of my conversation with Manu:

NoSQL is quite a new technology in the Big Data space. Many people might have heard about things like Hadoop, but how does NoSQL fit in? Could you give everyone the quick cheatsheet on what NoSQL databases are, and specifically on Riak KV, your key value NoSQL database?

NoSQL databases are a new generation of databases designed to address enterprises' needs to store, manage, analyse and serve the ever increasing amounts of unstructured data that make up over 80% of all data being created nowadays in public clouds and private infrastructures. Apache Hadoop has done a great job of handling batch analytics use cases at massive scale for unstructured data, what I would call exploratory or discovery analytics. What NoSQL databases like Riak do in comparison is help organisations manage their active data workloads as well, providing near real-time operational analytics at scale. Most importantly, most businesses need scalability, availability and fault tolerance as core requirements for their current and future application architectures, and these are deciding factors for NoSQL over traditional relational databases. NoSQL databases started out as one of four types: Key-Value, Column store, Document and Graph, but nowadays they are becoming multi-model, whereby, for example, Riak can efficiently handle key-value data, but also documents as well as log/time-series data, as demonstrated by our wide range of customers including Kolab Systems.

Riak KV is the most widely adopted NoSQL key value database, with scalability, high availability, fault tolerance and operational simplicity as its key properties. It is used most often for mission critical use cases and works great for handling User Data, Session Data, Profile Data, Social Data, Real-time Data and Logging Data use cases. It provides near real-time analytics with its secondary indexes, search through Solr, in-situ Map/Reduce and soon-to-come Apache Spark support. Finally, its multi-data-center replication capability makes it easy to ensure business continuity, to geo-locate data for low latency access across continents, or to segregate workloads to ensure very reliable low latency.

Riak KV is known for its durability; it's part of the reason we chose it for Kolab's DLP system. Could you give us some insight into how Riak KV achieves this?

Hardware does fail, and when it does your IT infrastructure needs to be able to cope, and your systems must continue to operate while the failed resources are brought back online as soon as possible. Riak KV was designed to eliminate the impact of the unexpected. Even if network partitions or hardware failures cause unanticipated outages, Riak KV can still read and write your data. This means you never lose data, even when part of the system goes down. For Kolab customers, it means that they have the security of knowing that the data loss prevention and auditing that they are paying for is backed by the best system available to deliver on this promise.

Availability seems to be a very important thing for databases in today’s digital age. How is Riak providing this key feature to Kolab and how does this enhance the Kolab offering?

Simply, Riak KV is designed to intelligently replicate and retrieve data, making sure that applications based on the database are always available. Scalability comes into play here as well. Unlike traditional databases, Riak is designed to respond to the billions of data points and terabytes of data that are being produced, often in real-time, as it is able to scale in a near linear fashion to give Kolab the best possible performance. Ultimately this means that Kolab's application is always available, so as an end-user you don't experience any system outages no matter how busy or active your users are.

We integrate Riak KV with Kolab's data loss prevention system to store groupware object histories in real-time for auditing and roll-back if needed. Is this unique?

Yes! This is a great example of two great technologies working together to provide an excellent customer experience. Combining the power of Riak KV’s high availability, fault tolerance, and scalability with Kolab’s data loss prevention system means that you have an incredibly strong and powerful system.

Basho is a really unique name for a technology company - is there any history or background to it?

Thank you, we really like our name too. Basho's name was inspired by the real-life Matsuo Basho (1644–1694), who is considered by many to be Japan's most renowned and revered writer of haiku. Haiku are known for their balance of lines and syllables, where the simplicity of the structure is important. This is a founding, guiding principle that Riak KV is based on: operational simplicity is core to our architecture, eliminating the need for mindless manual operations since data can be automatically and uniformly distributed.

To see the partnership of Basho's Riak and Kolab Enterprise in action together, come see us in Munich at the TDWI European Conference 22-24th June. We'll be in a booth showing both Riak KV and Kolab Enterprise, and will be happy to answer your questions!


Timotheus Pokorra's picture
Sat, 2015-06-13 22:35

This describes how to install a docker image of Kolab.

Please note: this is not meant to be for production use. The main purpose is to provide an easy way for demonstration of features and for product validation.

This installation has not been tested much and could still use some fine-tuning. It is just a demonstration of what can be done with Docker for Kolab.

Preparing for Docker
I am using a Jiffybox provided by DomainFactory for downloading a Docker container for Kolab 3.4 running on CentOS 7.

I have installed Fedora 21 on a Jiffybox.

Now install docker:

sudo yum install docker-io
sudo systemctl start docker
sudo systemctl enable docker

Install container
The image for the container is available here:
https://registry.hub.docker.com/u/tpokorra/kolab34_centos7/
If you want to know how this image was created, read my other blog post http://www.pokorra.de/2015/06/building-a-docker-container-for-kolab-3-4-on-jiffybox/.

To install this image, you need to type in this command:

docker pull tpokorra/kolab34_centos7

You can create a container from this image and run it:

MYAPP=$(sudo docker run --name centos7_kolab34 -v /sys/fs/cgroup:/sys/fs/cgroup:ro -p 443:443 -h kolab34.test.example.org -d -t -i tpokorra/kolab34_centos7 /bin/bash)

You can see all your containers:

docker ps -a

You should attach to the container, and inside the container change the root password:

docker attach $MYAPP
  # you might need to press Enter to see the login screen
  # login with user root and password root
  # enter a secure password:
  passwd root

To stop the container:

docker stop $MYAPP

To delete the container:

docker rm $MYAPP

You can reach the Kolab Webadmin on this URL (replace localhost with the IP address of the Jiffybox):
https://localhost/kolab-webadmin. Login with user: cn=Directory Manager, password: test

The Webmail interface is available here:
https://localhost/roundcubemail.


Timotheus Pokorra's picture
Sat, 2015-06-13 22:31

This article is an update of the previous post that built a Docker container for Kolab 3.3 from September 2014.

Preparation
I am using a Jiffybox provided by DomainFactory for building the Docker container.

I have installed Fedora 21 on a Jiffybox.

Now install docker:

sudo yum install docker-io
sudo systemctl start docker
sudo systemctl enable docker

Create a Docker image

To learn more about Dockerfiles, see the Dockerfile Reference

My Dockerfile is available on Github: https://github.com/TBits/KolabScripts/blob/Kolab3.4/kolab/Dockerfile. You should store it with filename Dockerfile in your current directory.

This command builds a container following the instructions in the Dockerfile in the current directory. When all instructions complete successfully, an image with the name tpokorra/kolab34_centos7 is created and the intermediate container is deleted:

sudo docker build -t tpokorra/kolab34_centos7 .

You can see all your local images with this command:

sudo docker images

To finish the container, we need to run setup-kolab; this time we pass a hostname as a parameter:

MYAPP=$(sudo docker run --name centos7_kolab34 -v /sys/fs/cgroup:/sys/fs/cgroup:ro -p 443:443 -h kolab34.test.example.org -d -t -i tpokorra/kolab34_centos7 /bin/bash)
docker attach $MYAPP
# you might need to press the Enter key to see the login prompt...
# login with user root and password root
# run inside the container:
echo 2 | setup-kolab --default --timezone=Europe/Brussels --directory-manager-pwd=test
cd /root/KolabScripts-Kolab3.4/kolab
./initHttpTunnel.sh
./initSSL.sh test.example.org
shutdown -h now

Now you commit this last manual change:

docker commit $MYAPP tpokorra/kolab34_centos7
# delete the container
docker rm $MYAPP

You can push this image to https://registry.hub.docker.com:

#create a new account, or login with existing account:
sudo docker login
# there is currently an issue with the Fedora 21 rpm package (docker-io-1.6.0-4.git350a636.fc21.x86_64)
# see also https://forums.docker.com/t/docker-push-error-fata-0001-respository-does-not-exist/1309/18
# solution: yum install --enablerepo=updates-testing docker-io
sudo docker push tpokorra/kolab34_centos7

You can now see the image available here: https://registry.hub.docker.com/u/tpokorra/kolab34_centos7/

See this post Installing Demo Version of Kolab 3.4 with Docker about how to install this image on the same or a different machine, for demo and validation purposes.

Current status: some things still do not work well, and I have not tested everything.
But this should be a good starting point for others as well, to help build a good demo installation of Kolab on Docker.


Timotheus Pokorra's picture
Thu, 2015-06-11 08:30

This post originates from an idea by Stephen Gallagher, who is working on rolekit: "rolekit is a daemon for Linux systems providing a stable D-BUS interface to manage the deployment of [Fedora] Server Roles".
The code of rolekit is available here: https://github.com/sgallagher/rolekit

On his blog, Stephen stated in this post:

A few that I’d love to see (but don’t have time to start on yet):

  • A fileserver role that manages Samba and NFS file-shares (maybe [s]ftp as well).
  • A mail and/or groupware server role built atop something like Kolab
  • A backup server

This made me wonder: what would it be like if Kolab became a Server Role for Fedora and could be installed from the Fedora repositories? Through my work on OpenPetra and Mono I got involved with Fedora, and noticed that the Fedora community tries out new technology, proves whether it works, and then the technology eventually ends up in other distributions as well.

First steps

On IRC, we agreed that the first step would be to create a Copr repository containing the Kolab packages, and to write this blog post describing how to install and configure Kolab.

Creating the Copr Repo

So, here is the Copr repository for Fedora 22: https://copr.fedoraproject.org/coprs/tpokorra/kolab/

I created it by getting the src rpm packages from the Kolab repository, from 3.4 and 3.4 updates, in this order:

  • kolab-utils
  • roundcubemail-plugins-kolab
  • kolab-webadmin
  • kolab
  • pykolab
  • chwala
  • iRony
  • kolab-freebusy
  • roundcubemail-skin-chameleon
  • php-Net-LDAP3
  • roundcubemail
  • kolab-syncroton
  • roundcubemail-plugin-contextmenu
  • kolab-schema
  • kolab-autodiscover
  • python-sievelib
  • php-pear-Net-LDAP2
  • cyrus-imapd

The packages libkolab, libkolabxml and kdepim are already in Fedora, and I did not update them.

Cyrus IMAPd is also in Fedora, https://admin.fedoraproject.org/pkgdb/package/cyrus-imapd/, but not at the latest version. So I used version 2.5 from Kolab.

Roundcubemail is up to date in Fedora, https://admin.fedoraproject.org/pkgdb/package/roundcubemail, but somehow does not provide roundcubemail(core) >= 1.1 as required by some Kolab packages. So I also used the package from Kolab.

I have patched the pykolab package and backported some features to extend the setup-kolab command so that it can be used non-interactively, which will probably be required for integration into rolekit. In Kolab 3.5 (release planned for August 2015), those features will be included.

Installing Kolab from the Copr Repo

I have tested this with Fedora 22.

Please disable SELinux, since there isn't an SELinux policy available yet for Kolab.
Jeroen van Meeuwen worked on one a while ago, but it probably needs updating and testing: https://github.com/kanarip/kolab-selinux

Another thing: the server should have an FQDN, e.g. kolab.example.org. See the installation instructions for details.

dnf install dnf-plugins-core
dnf copr enable tpokorra/kolab
dnf install kolab
mytz=Europe/Brussels
pwd=test
setup-kolab --default --mysqlserver=new --timezone=$mytz --directory-manager-pwd=$pwd

On my setup, I need to add this line to /etc/kolab/kolab.conf in the [kolab-wap] section, because I am running it inside an LXC container with an iptables tunnel for port 80, and the Kolab webadmin does not calculate the URL for the API properly:

api_url = http://localhost/kolab-webadmin/api

You also need to add these lines to /etc/roundcubemail/config.inc.php (this will be fixed in Kolab 3.5):

    # required for php 5.6, see https://bbs.archlinux.org/viewtopic.php?id=193012 and http://php.net/manual/de/context.ssl.php
    # production environment requires real security settings!!!
    $config['imap_conn_options']=array(
            'ssl'=>array(
            'verify_peer_name'=>false,
            'verify_peer'=>false,
            'allow_self_signed'=>true));
    $config['smtp_conn_options']=array(
            'ssl'=>array(
            'verify_peer_name'=>false,
            'verify_peer'=>false,
            'allow_self_signed'=>true));

After this, the Kolab server should be running, and you can go to http://localhost/kolab-webadmin and log in with the user "cn=Directory Manager" (without the quotes) and the password that you specified as a parameter for setup-kolab.

The webmail runs at http://localhost/roundcubemail

Conclusion

I hope this shows the possibilities, and what amount of work still needs to be done.

I guess the existing packages in Fedora should be kept up to date, and missing Kolab packages need to be added to Fedora as well.

Work on SELinux policy is also required (see above).

The other thing: how much should a Kolab server role define about how the server is configured securely? In Kolab upstream, we documented how to secure the server, but left it to the sysadmin to actually enforce security, because the Kolab community cannot take responsibility for the server.

I have a number of scripts that might be useful for rolekit: https://github.com/TBits/KolabScripts There is, for example, a script for setting up a self-signed SSL certificate.


Timotheus Pokorra's picture
Wed, 2015-06-10 09:28

Some weeks ago, I did significant work on getting Kolab 3.4 running on Debian Jessie.

I did this work in my own time, because at TBits.net we focus on CentOS 7.
Still, the work was beneficial to CentOS as well, because I had to do some fixes for PHP 5.6, which will eventually be part of CentOS sometime in the future.

For your interest, here are the bugs I have worked on:

For several weeks, my nightly tests succeed now for Debian Jessie as well, on LBS: see https://lbs.solidcharity.com/package/tbits.net/kolab-test/kolab-test#Kolab3.4_debian/jessie/amd64

I just updated the installation instructions: https://docs.kolab.org/installation-guide/debian.html

I am not using Debian Jessie in production, and that means two things:

  • I cannot say whether it actually works in production. I can only say that it passes my nightly tests.
  • In the future, I think I will need to focus more on CentOS, and cannot invest so much of my own free time into the Debian packaging. I am open to suggestions or sponsorship :) (perhaps https://www.bountysource.com/?)

Timotheus Pokorra's picture
Wed, 2015-06-10 08:50

I realized it would be good to blog here about updates for the Kolab 3.4 Community Edition.

Although it is a community release, and therefore does not come with any guarantee (that is what the Enterprise version is for), some people are using the community edition in production, and we as the community are contributing fixes and maintaining the release.

Thanks to Daniel Hoffend, we now have the latest Roundcube 1.1.2 in Kolab 3.4 Updates. Just run yum update (CentOS) or apt-get update && apt-get upgrade (Debian)…

Daniel also backported a fix a week ago for the installation of Kolab: the Roundcube configuration for the Addressbook was not correct.
More details can be seen in the Change Request on OBS.
You might want to manually update your existing installation in the same way…

And another fix from 10 days ago: Daniel backported a fix for the Roundcube Context menu.
See details in the Change Request on OBS.

In the future, I will aim to post here as soon as we accept updates into Kolab 3.4 Updates.

If you want to contribute to making the community edition of Kolab more stable and secure, please suggest fixes that you know of on the mailing list; and if you enjoy creating a change request on OBS yourself, then go for it, you would be very welcome!


roundcube's picture
Fri, 2015-06-05 02:00

We just published updates to both stable versions 1.0 and 1.1 after fixing many minor bugs and adding some security improvements to the 1.1 release branch. Version 1.0.6 comes with cherry-picked fixes from the more recent version to ensure proper long-term support, especially with regard to security and compatibility.

The security-related fixes in particular are:

  • XSS vulnerability in _mbox argument
  • security improvement in contact photo handling
  • potential info disclosure from temp directory

See the full changelog here.

Both versions are considered stable and we recommend updating all production installations of Roundcube with either of these versions. Download them from roundcube.net/download.

As usual, don’t forget to backup your data before updating!

And there’s one more thing: please support our crowdfunding campaign for Roundcube Next, either directly or by spreading the word about it. Your help is much appreciated!


Thu, 2015-06-04 00:00

Some weeks after the official Kolab 3.4 release, we finally released the Gentoo packages for Kolab 3.3, including the usual benefits like the CalDAV/iCal-ready calendar plugin and the Getmail plugin, which allows fetching mail from any external email account right into your Kolab groupware.

During this release some things required much more work than we expected. To speed things up next time, we plan to cooperate more closely with the Kolab developers and the community. For example, we finally requested that the multi-driver support for the calendar plugin be pushed upstream. The required patch is currently pending and waiting for approval. Further, we had some great release planning meetings with the Kolab folks, where they announced that they will also keep a focus on quality assurance and upgrade paths for the community version. As a first result, a detailed migration guide for Kolab 3.3 can be found here.

In the meantime we keep working on the upcoming Gentoo packages for Kolab 3.4. Included are the brand-new Chameleon skin and a lot of bugfixes that make the Kolab 3.4 release "probably the best quality assured stable release Kolab.org has yet performed".

Find detailed installation instructions in our wiki: https://wiki.awesome-it.de/howtos/kolab

Report bugs or patches to our Gitlab: https://gitlab.awesome-it.de/overlays/kolab/issues

bruederli's picture
Wed, 2015-05-27 19:43

We all know the annoyance of (web) applications not doing what we expect them to do, and staring at tumbling “Loading…” icons has become part of our daily routine. The more digital tools we use, the more sensitive we become to good user experience. UX is the big buzzword, and Roundcube Next is not only about faster development but also very much dedicated to significantly improving the way we interact with our webmail application of choice.

By using top-notch open source technologies which have proven to work for the biggest web applications out there, Roundcube Next will be the responsive, reactive and simply gorgeous email application you want to use more than Gmail or Outlook. The core and the essentials are only the start: a solid email client that can connect to any mailbox and will run everywhere, from your desktop browser to the device in your pocket. But our plans go beyond email, and more perfectly integrated “apps” like calendar, chat, notes or cloud file access will follow.

A first draft – Roundcube Next on iPad

And we didn’t even mention the best part: Roundcube Next will be, just as its predecessor, free software and give you the freedom of choosing the email provider you trust and not the one who reads your mail.

Help us make Roundcube Next the webmail application every serious internet service provider simply has to install for their users. Join the move and talk to your ISP about backing our crowdfunding project and finally get that new shiny thing installed for you and everybody else!



Aaron Seigo's picture
Wed, 2015-05-27 15:28

transactional b-trees and what-not

Over the last few months I've been reading more than the usual number of papers on a selection of software development topics that are of recent interest to me. The topics have been fairly far flung as there are a few projects I have been poking at in my free time.

By way of example, I took a couple of weeks reading about transitory trust algorithms that are resistant to manipulation, which is a pretty interesting problem with some rather elegant (partial) solutions. These solutions are actually implementable at the individual agent level, though computationally impractical if you wish to simulate a whole network, which thankfully was not what I was interested in. (So they are reasonable for implementing real-world systems, though not for simulations or for finding definitive solutions to specific problems.)

This past week I've been reading up on a variety of B-tree algorithms. These have been around since the early 1970s and are extremely common in all sorts of software, so one might expect that after 40+ years of continuous use of such a simple concept there'd be very little to talk about, but it's quite a vast territory. In fact, each year for the last two decades Donald Knuth has held a public lecture around Christmas-time about trees. (Yes, they are Christmas Tree Lectures. ;) Some of the papers I've been reading were published in just the last few years, with quite a bit of interesting research having gone on in this area over the last decade.

The motivation for reading up on the topic is that I've been looking for a tree that is well suited to storing the sorts of indexes Akonadi Next calls for. They need to be representable in a form that multiple processes can access simultaneously, without problems between multiple readers and (at least) one writer; they also need to support transactions, in particular read transactions, so that once a query is started the data being queried remains consistent at least until the query is complete, even if an update happens concurrently. Preferably without blocking, or at least with as little blocking as possible. Bonus points for being able to roll back transactions and for keeping representations of multiple historic versions of the data in certain cases.
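That read-transaction property, where a query keeps seeing consistent data while a writer commits concurrently, can be illustrated with a minimal in-memory copy-on-write store. This is a sketch of the concept only (hypothetical names, single process, nothing like the on-disk multi-process design actually needed here):

```python
# Minimal copy-on-write snapshot store: a reader that begins a "transaction"
# keeps seeing the data as it was, even while a writer commits new versions.
class SnapshotStore:
    def __init__(self):
        self._current = {}  # latest committed version

    def begin_read(self):
        # A read transaction is just a reference to the current version.
        # Because commits replace the dict instead of mutating it, the
        # snapshot stays consistent for as long as the reader holds it.
        return self._current

    def commit(self, updates):
        # Copy-on-write: build a new version, then publish it atomically.
        new_version = dict(self._current)
        new_version.update(updates)
        self._current = new_version

store = SnapshotStore()
store.commit({"mail:1": "unread"})

snapshot = store.begin_read()                         # reader starts a query
store.commit({"mail:1": "read", "mail:2": "unread"})  # concurrent write

assert snapshot["mail:1"] == "unread"          # reader still sees its snapshot
assert store.begin_read()["mail:1"] == "read"  # new readers see the update
```

A real implementation would of course page versions to disk and garbage-collect old ones once no reader holds them; the point here is only that nothing blocks and nothing is mutated in place.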

In the few dozen papers I downloaded onto the tablet for evening reading, I came across Transactions on the Multiversion B+-Tree, which looks like it should do the trick nicely and is also (thankfully) nice and elegant. Worth a read if you're into such things.

As those who have been following Akonadi Next development know, we are using LMDB for storage and it does a very nice job of that but, unfortunately, does not provide "secondary" indexes on data, which Akonadi Next needs. Of course one can "fake" this by inserting the values to be indexed (say, the dates associated with an email or calendar event) as keys, with the value being the key of the actual entry, but this is not particularly beautiful for various reasons, including:

  • this requires manually cleaning up all indexes rather than having a way to efficiently note that a given indexed key/value pair has been removed and have the indexes cleaned up for you
  • some data sets have a rather low cardinality which would be better represented with approaches such as bitmap indexes that point to buckets (themselves perhaps trees) of matching values
  • being able to index multiple boolean flags simultaneously (and efficiently) is desirable for our use cases (think: "unread mails with attachments")
  • date range queries of the sort common in calendars ("show this month", "show this week", e.g.) could also benefit from specialized indexes

I could go on. It's true that these are the sorts of features that your typical SQL database server provides "for free", but in our case it ends up being anything but "free" due to overhead and constraints on design due to schema enforcement. So I have been looking at what we might be able to use to augment LMDB with the desired features, and so the hunt for a nice B+-tree design was on. :) I have no idea what this will all lead to, if anything at all even, as it is purely an evening research project for me at the moment.
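For the curious, the "fake" secondary index mentioned above, where indexed values are written as extra keys whose value is the primary entry's key, can be sketched with an ordinary dict standing in for the key/value store. All names and the key layout here are hypothetical illustrations, not Akonadi Next code:

```python
# Emulating a secondary index in a plain key/value store: the index is just
# more key/value pairs, "date/<date>/<entry-key>" -> <entry-key>, so a date
# range query becomes a key range scan (in LMDB, a cursor over a key range;
# here, a sorted scan of a dict).
entries = {}  # primary store: entry key -> record
index = {}    # secondary "index": composite key -> entry key

def put_mail(key, date, subject):
    entries[key] = {"date": date, "subject": subject}
    index["date/%s/%s" % (date, key)] = key

def query_date_range(start, end):
    return [entry_key
            for composite, entry_key in sorted(index.items())
            if "date/%s" % start <= composite <= "date/%s/\xff" % end]

def delete_mail(key):
    # The drawback called out in the list above: every index entry must be
    # cleaned up by hand, since the store knows nothing of the relationship.
    date = entries.pop(key)["date"]
    del index["date/%s/%s" % (date, key)]

put_mail("mail:1", "2015-05-01", "hello")
put_mail("mail:2", "2015-05-20", "world")
put_mail("mail:3", "2015-06-02", "later")

assert query_date_range("2015-05-01", "2015-05-31") == ["mail:1", "mail:2"]
delete_mail("mail:2")
assert query_date_range("2015-05-01", "2015-05-31") == ["mail:1"]
```

It works, but the manual cleanup and the one-layout-per-query-shape rigidity are exactly why something richer (bitmap indexes for flags, specialized date-range structures) is worth hunting for.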

The application-facing query system in Akonadi Next is slowly making its way toward something nice, but that's another topic for another day.


roundcube's picture
Wed, 2015-05-20 11:22

While Roundcube One originates from a private fun project with email – and only email – in mind, we have learned our lessons and are committed to doing Roundcube Next right from the ground up. In the year 2015, communication combines a variety of tools we need to connect with each other. And that’s exactly what we aim to cover with the architectural design of Roundcube Next. It shall become a solid and open foundation for building communication apps on top of it. Email will certainly remain a key component, as it is still the most important means of communication today. But there’s more, and therefore we want to make Roundcube Next the WordPress of communication, if you will.

After we opened Roundcube up for plugins in version 0.3, we witnessed amazing creativity in what people started building around an open source email application. From a car dealer system to mailing list archives, many custom solutions were built on top of Roundcube. This definitely inspired us to support and facilitate this aspect in the very core of the new system.

The plugin infrastructure of Roundcube Next will be your new best friend for building web apps for your specific communication needs. The new core will provide an easy-to-use framework with lots of reusable components for both building the UI of your application as well as for synchronizing the data to the server and the underlying storage backend of your choice.

So if you’re a developer who got annoyed with the limitations of closed systems from the big vendors and you don’t want to build a complex web application from scratch, Roundcube Next deserves your attention and support. Go to https://roundcu.be/next and get yourself a backstage pass for the Roundcube Next forums or even a seat in the advisory committee. And don’t forget to spread the word about this new opportunity for the free software world.




greve's picture
Tue, 2015-05-19 09:02

If you are a user of Roundcube, you want to contribute to roundcu.be/next. If you are a provider of services, you definitely want to get engaged and join the advisory group. Here is why.

Free Software has won. Or has it? Linux is certainly dominant on the internet. Every activated Android device is another Linux kernel running. At the same time we see a shift towards “dumber” devices which are in many ways more like thin clients of the past. Only they are not connected to your own infrastructure.

Alerted by the success of Google Apps, Microsoft has launched Office 365 to drive its own transformation from a software vendor into a cloud provider. Amazon and others have also joined the race to provide your collaboration platform. The pull of these providers is already enormous, and thanks to network effects, economies of scale, and the ability to leverage deliberate technical incompatibilities to their advantage, their drawing power is only going to increase.

Open Source has managed to catch up to the large providers in most functions, surpassing them in some and trailing slightly in others. Kolab has been essential in providing this alternative, especially where cloud-based services are concerned. Its web application is on par with Office 365 and Google Apps in usability, attractiveness and most functions, and it is the only fully Open Source alternative that scales to millions of users and allows sharing of all data types in ways that are superior to what the proprietary competition has to offer.

Collaborative editing, chat, voice, video – all the forms of synchronous collaboration – are next and will be added incrementally, while Kolab Systems keeps driving the commercial ecosystem around the solution, allowing application service providers (ASPs), institutions and users to run their own services with full professional support. And all parts of Kolab will remain Free and Open, as well as committed to the upstream, according to best Free Software principles. If you want to know what that means, please take a look at Thomas Brüderli's account of how Kolab Systems contributes to Roundcube.

TL;DR: Around 2009, Roundcube founder Thomas Brüderli was contacted by Kolab at a time when his day job left him so little time to work on Roundcube that he had toyed with the thought of stepping back entirely. Kolab Systems hired the primary developers of Roundcube to finish the project, contributing around 95% of all code in all releases since 0.6 and driving it to its 1.0 release and beyond. At the same time, Kolab Systems carefully avoided imposing itself on the Roundcube project itself.

From a Kolab perspective, Roundcube is the web mail component of its web application.

The way we pursued its development made sure that it could be used by any other service provider or ISV. And it was: Roundcube has seen enormous adoption, with millions of downloads, hundreds of thousands of sites, and a user count somewhere beyond the tens of millions. According to cPanel, 62% of their users choose Roundcube as their web mail application. It has been used in a wide range of other applications, including several service providers that offer mail services more robust against commercial and governmental spying. Everyone at Kolab considers this a great success, and finds it rewarding to see our technology contribute essential value to society in so many different ways.

But while adoption sky-rocketed, contribution did not grow in the same way. It is still Kolab Systems driving the vast majority of all code development in Roundcube, along with a small number of occasional contributors. And as a direct result of the Snowden revelations, the development of web collaboration solutions fragmented further. There are a number of proprietary approaches, which should be self-evidently disqualified from being taken seriously, given what we have learned about how solutions get compromised. But there are also Open Source solutions.

The Free Software community has largely responded in one of two ways. Many people felt reinforced in their opinion that people just “should not use the cloud.” Many others declared self-hosting the universal answer to everything, and started to focus on developing solutions for the crypto-hermit.

The problem is that this takes an all-or-nothing approach to privacy and security. It also requires users to become more technical than most of them ever wanted to be, and to give up features, convenience and ease of use as the price of privacy and security. In my view, that ignores the most fundamental lesson we have learned about security over the past decades: people will work around security when they consider it necessary in order to get the job done. So the adoption rate of such technologies will necessarily remain limited to a very small group of users whose concerns are unusually strong.

These groups are often more exposed, more endangered, and more in need of protection and contribute to society in an unusually large way. So developing technology they can use is clearly a good thing.

It just won’t solve the problem at scale.

To do that, we need a generic web application geared towards all of tomorrow's form factors and devices. It should be collaboration-centric and allow deployment in environments from a single user to hundreds of millions of users. It should enable meshed collaboration between sites, be fun to use, elegant and beautiful, and provide security in a way that does not get in the user's face.

Fully Free Software, that solution should be the generic collaboration application that could become, in parts or as a whole, the basis for solutions such as mailpile, which focuses on local machine installations using extensive cryptography, for intermediate solutions such as Mail-in-a-Box, and for generic cloud services by providers such as cPanel or Tucows. It should integrate all forms of on-line collaboration, make use of all the advances in usability for encryption, and be able to grow as technology advances further.

That, in short, is the goal Kolab Systems has set out to achieve with its plans for Roundcube Next.

While we can, and of course will, pursue that goal independently in incremental steps, we believe that would mean missing two rather major opportunities. The first is the chance to tackle this together, as a community. We have a lot of experience, a great UI/UX designer excited about the project, and many good ideas.

But we are not omniscient, and we also want to use this opportunity to achieve what Roundcube 1.0 did not quite manage: to build an active, multi-vendor community around a base technology that is fully Open Source/Free Software and that addresses the collaborative web application need so well that it puts Google Apps and Office 365 to shame, while providing that solution to everyone. The second opportunity is that, while incremental improvements are immensely powerful, sometimes leapfrogging innovation is what you really want.

All of that is what Roundcube Next really represents: The invitation to leapfrog all existing applications, as a community.

So if you are a user who has appreciated Roundcube in the past, or one who would like to be able to choose fully featured services that leave nothing to be desired yet do not compromise your privacy and security, please help push the fast-forward button on Roundcube Next.

And if you are an Application Service Provider, but your name is not Google, Microsoft, Amazon or Apple, Roundcube Next represents the small, strategic investment that might just put you in a position to remain competitive in the future. Become part of the advisory group and join the ongoing discussion about where to take that application, and how to make it reality, together.



Aaron Seigo's picture
Wed, 2015-05-06 11:37

Today at 13:00 UTC I will be hosting a Google+ Hangout with Roundcube founder and lead developer, Thomas Brüderli. I will link the video below once we are done, but everyone is welcome to join us live and provide feedback and questions in IRC while we're chatting.

So, what are we going to talk about? Well, Roundcube, of course! :) I'll be asking Thomas why he decided that now was the appropriate time for a refactor of Roundcube, what it means for Roundcube 1.x (the current stable release), and if we have time we'll start tucking into the current feature and design thinking.

So come join us on the Roundcube G+ page / Youtube channel as well as the #roundcube channel on irc.freenode.net today at 13:00 UTC!

Hope to see you all there!

Update: The video is up on Youtube, with some blank airtime (and a fun moment of feedback) edited out. You can watch it below:


bruederli's picture
Mon, 2015-05-04 13:38

It all started with a hypothetical question: how would we implement Roundcube if we could start over again? That idea has now grown into a concrete plan for creating the responsive, fast and beautiful successor to Roundcube.

The architectural changes necessary for this are clearly too big to be applied to the current Roundcube codebase without breaking compatibility with most plugins and extensions. So we won't take that risky path, but will instead declare Roundcube One feature complete and focus on a new core engine for the future Roundcube webmail application. This will enable everybody to participate in the process of reshaping the architecture and to adapt the existing plugins to the new API as we go along.

There's no doubt that such a major refactoring is a huge endeavor and requires substantial effort in concept work, development and testing. It is nothing that can be done over a weekend, but we also don't want to spend another 10 years making it a reality. Luckily, we have strong partners and supporters to push this forward. Kolab Systems has offered to drive this project by contributing its well-established software development capabilities, from project management and developer power to QA and testing. In addition, the folks at Kolab Digital can't wait to share their expertise on the UX and design side. However, such a level of professionalism also comes at a price.

Getting help from the crowd to back this

In order to enable both Kolab Systems and Kolab Digital to actually assign the necessary resources to the “Roundcube Next” project, we sat down together and decided it would make sense to reach out to the entire Roundcube community to help make this happen. Yesterday, we proudly announced the crowdfunding campaign at the end of the Kolab Summit in The Hague.

The Funding Steps

Together, we can make this a great success! Please help spread the word, back the campaign with a pledge, and join us for what is going to be a fantastic journey. Regular updates will be posted to the crowdfunding page, and we are excited to make the run to our initial goal and beyond with you!

