D-Bus (Desktop Bus) is a simple open-source inter-process communication (IPC) system that lets software applications communicate with one another. It replaced DCOP in KDE 4 and has also been adopted by Gnome, XFCE and other desktops. It is, in fact, the main interoperability mechanism in the “Linux desktop” world, thanks to the freedesktop.org standards.

The architecture of D-Bus is pretty simple: a dbus-daemon server process runs locally and acts as a “message broker”, and applications exchange messages through it.

But of course you already knew that, because you are super-smart developers and/or users.

D-Bus on Windows

What you may not know is how much damage D-Bus is doing to open-source software on Windows.

A few years ago I tried to introduce kdelibs into a large cross-platform project, but it was rejected, despite some obvious advantages, mainly because of D-Bus.

Performance and reliability back then were horrible. It works much better these days, but it still scares Windows users. In fact, you may as well replace “it scares Windows users” with “it scares IT departments in the enterprise world*”.

The reason?

A dozen randomly started processes, IPC with no security at all, difficulty upgrading, killing or even knowing when to update applications, and more. I’m not making this up: it has already happened to me.

* yes, I know our friends from Kolab are doing well, but how many KDE applications on the desktop have you seen outside that “isolation bubble”?

D-Bus on mobile

Another problem is that D-Bus is not available on all platforms (Android, Symbian, iOS, etc), which makes porting KDE applications to those platforms difficult.

Sure, Android uses D-Bus internally, but that’s an implementation detail and we don’t have access to it. That means we still need a solution for platforms where you cannot run or access dbus-daemon.

Do we need a daemon?

A few months ago I was wondering: do we really need this dbus-daemon process at all?

What we have now looks like this:

As you can see, D-Bus is a local IPC mechanism, i.e. it does not allow applications to communicate over the network (although technically, that would not be difficult to implement). And every operating system these days has its own IPC mechanism. Why create a new one with a new daemon? Can’t we use an existing one?

I quickly got my first answer: D-Bus was created to expose a common API (and a common message and data format, i.e. a common “wire protocol”) to applications, so that it’s easy to exchange information.
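To see what a common, self-describing wire format buys you, here is a deliberately simplified toy marshaller inspired by D-Bus type signatures (“i” for a 32-bit integer, “s” for a string). This is not the real D-Bus wire protocol, just a sketch of the idea that a shared serialization format lets unrelated applications exchange structured data:

```python
import struct

# Toy marshaller inspired by D-Bus type signatures ("i" = int32, "s" = string).
# NOT the real D-Bus wire format; it only illustrates why a shared,
# self-describing serialization makes cross-application messaging easy.

def marshal(signature, values):
    out = bytearray()
    for code, value in zip(signature, values):
        if code == "i":                        # 32-bit signed integer
            out += struct.pack("<i", value)
        elif code == "s":                      # length-prefixed UTF-8 string
            data = value.encode("utf-8")
            out += struct.pack("<I", len(data)) + data
        else:
            raise ValueError(f"unsupported type code: {code}")
    return bytes(out)

def unmarshal(signature, payload):
    values, offset = [], 0
    for code in signature:
        if code == "i":
            (v,) = struct.unpack_from("<i", payload, offset)
            offset += 4
        elif code == "s":
            (n,) = struct.unpack_from("<I", payload, offset)
            offset += 4
            v = payload[offset:offset + n].decode("utf-8")
            offset += n
        else:
            raise ValueError(f"unsupported type code: {code}")
        values.append(v)
    return values

# Round-trip a (string, int32) message, as two applications would:
wire = marshal("si", ["org.kde.SomeApp", 42])
assert unmarshal("si", wire) == ["org.kde.SomeApp", 42]
```

Because sender and receiver agree on the signature and the encoding, neither needs to know anything else about the other, which is exactly what D-Bus standardizes (with a far richer type system, of course).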

As for the second question, reusing an existing mechanism, it’s obvious we cannot: KDE applications run on a variety of operating systems, and each of them has a different “native” IPC mechanism. Unices (Linux, BSD, etc) may be quite similar, but Windows, Symbian, etc are definitely very different.

No, we don’t!

So I thought: let’s use some technospeak buzzword and make HR people happy! The façade pattern!

Let’s implement a libdbusfat which offers the libdbus API on one side but talks to a native IPC service on the other. That way we could get rid of the dbus-daemon process and use the platform’s IPC facilities. For each platform, a different “native IPC side” would be implemented: on Windows it could be COM, on Android something else, etc.
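A minimal sketch of that façade idea in Python (everything here, including libdbusfat itself, is hypothetical; the backends are stubs, not real IPC bindings): the application always programs against the same bus API, while a platform-specific backend does the actual transport behind it.

```python
from abc import ABC, abstractmethod

# Hypothetical sketch of the libdbusfat façade: one D-Bus-like API on the
# front, pluggable platform IPC backends on the back. Backends are stubbed.

class IpcBackend(ABC):
    @abstractmethod
    def send(self, destination, message): ...

class DbusDaemonBackend(IpcBackend):
    """Unix: would forward to the real dbus-daemon (stubbed out here)."""
    def send(self, destination, message):
        return f"dbus-daemon -> {destination}: {message}"

class ComBackend(IpcBackend):
    """Windows: would translate the call into COM (stubbed out here)."""
    def send(self, destination, message):
        return f"COM -> {destination}: {message}"

class BusFacade:
    """The API applications see; identical on every platform."""
    def __init__(self, backend: IpcBackend):
        self._backend = backend

    def call_method(self, service, method, *args):
        return self._backend.send(service, f"{method}{args}")

# Application code stays the same regardless of the platform backend:
bus = BusFacade(ComBackend())        # on Windows, no dbus-daemon needed
reply = bus.call_method("org.kde.klipper", "getClipboardContents")
```

The point of the pattern is visible in the last two lines: swapping `ComBackend()` for `DbusDaemonBackend()` changes the transport without touching a single line of application code.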

Pros

The advantage of libdbusfat is that applications would not need any changes and would still be able to use D-Bus, which at the moment is important for cross-desktop interoperability.

On Unix platforms, applications would link to libdbus and talk to dbus-daemon.

On Windows, Android, etc, applications would link to libdbusfat and talk to the native IPC system.

By the magic of this façade pattern, we could compile, for instance, QtDBus so that it works exactly as it does currently but does not require dbus-daemon on Windows. Or Symbian. Or Android.

QtMobility?

QtMobility implements a Publish/Subscribe API with a D-Bus backend, but it serves a completely different purpose: it’s not available to glib/gtk/EFL/etc applications, and it’s implemented in terms of QtDBus (which in turn uses dbus-daemon for D-Bus services on every platform).

It’s, in fact, a perfect candidate to become a user of libdbusfat.

Cons

A lot of work.

You need to cut dbus-daemon in half, establish a clear API which can be implemented in terms of each platform’s IPC, handle data conversion, performance, etc. Very interesting work if you’ve got the time to do it, I must say. Perfect for a Google Summer of Code project, if you already know D-Bus and IPC on a couple of sufficiently different platforms (Linux and Windows, or Linux and Android, or Linux and iOS, etc).

Summary

TL;DR: The idea is to be able to compile applications that require D-Bus without needing to change the applications. This may or may not be true on Android depending on the API, but it is true for Windows.

Are you brave enough to develop libdbusfat in a Qt or KDE GSoC?

This is a short one and probably doable as a Summer of Code project.

The idea: add support for the Microsoft compiler and linker, and for Visual Studio projects and solutions (.sln, .vcproj, etc.), in KDevelop, at least in the Windows version.

QtCreator has support for the first part (compiler and linker).

For the second part (solutions and projects), code can probably be derived (directly or indirectly) from MonoDevelop’s and CMake’s. The starting point would be MSBuild support, as it’s what VS2010 is based on.
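To give a flavour of what that starting point involves, here is a hedged sketch of the kind of minimal MSBuild project parsing an importer would begin with. The sample project below is a hypothetical, stripped-down file for illustration, not a real Visual Studio artifact (though the XML namespace is the one MSBuild project files declare):

```python
import xml.etree.ElementTree as ET

# Sketch: extract the compiled sources from a minimal MSBuild-style project.
# SAMPLE_VCXPROJ is a hypothetical, heavily stripped-down project file.
SAMPLE_VCXPROJ = """\
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <ItemGroup>
    <ClCompile Include="main.cpp" />
    <ClCompile Include="widget.cpp" />
    <ClInclude Include="widget.h" />
  </ItemGroup>
</Project>
"""

NS = {"ms": "http://schemas.microsoft.com/developer/msbuild/2003"}

def list_sources(vcxproj_xml: str):
    """Return the C++ source files (ClCompile items) a project compiles."""
    root = ET.fromstring(vcxproj_xml)
    return [el.get("Include") for el in root.findall(".//ms:ClCompile", NS)]

print(list_sources(SAMPLE_VCXPROJ))  # prints ['main.cpp', 'widget.cpp']
```

A real importer would of course also have to handle configurations, property groups, project references and the .sln container format, but the item-group structure above is the core of it.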

Bonus points if you add C#/.NET support (Qyoto/Kimono).

In a perfectly orchestrated marketing campaign for a 100% free-libre tablet called Spark that will run KDE Plasma Active, Aaron Seigo writes today about the problems they are facing with GPL violations.

Apparently, every Chinese manufacturer is breaking the GPLv2 by not releasing the sources for their modified Linux kernels. Conversation after conversation with Zenithink (designers of the Spark), Synrgic (designers of the Dreambook W7), etc has gone nowhere. To the point that CordiaTab, another similar effort using Gnome instead of KDE, has been cancelled.

I have to say I am very surprised at the lack of kernel sources. What is the Free Software Foundation doing? Why don’t we seek a ban on all imports of tablets whose manufacturers don’t release the full GPL source?

Apple got Samsung Galaxy Tab imports blocked in Germany and Australia over something as ethereal as patents covering the external frame design. We are talking about license infringement, which is easier to demonstrate in court.

China may ignore intellectual property but they cannot ignore business, and no imports means no business. Let’s get all GPL-infringing tablet imports banned and we will get more source in two weeks than we can digest in two years. Heck, I’m surprised Apple is not trying this in court to block Android!

Apparently HTML5 applications are the best thing since sliced bread.

HTML5 is the first platform any mobile vendor supports: iPhone, Android, Windows Phone, BlackBerry, Symbian. All of them.

Windows 8 is said to promote HTML5 as the preferred application development solution.

I used to look kindly at that. But about a month ago I started to get worried: is HTML5 good for everything?

Long-lived applications

In military, industrial, warehouse-management, medical and similar environments, it is not rare for bespoke applications to be developed and stay in use for many years (and I really mean many: 10, 20 or even more!) with barely an update. It’s not rare for those applications to receive only very small updates once every 5 years. Those applications, not Angry Birds, are what keeps the world running: troops know what supplies they can count on, iPhones are manufactured, FedEx is able to deliver your package and your doctor is able to check your health.

But now that everybody seems to be moving to HTML5 webapps, what happens when my warehouse-management application is a webapp and the changes in the newest browsers make it no longer work?

Are vain upgrades the future?

Say my webapp is released in 2014 and works fine with Firefox 14.0 and Chrome 26.0, the newest browsers at the time of release. Fast-forward to 2020, and Firefox 14.0 and Chrome 26.0 do not even install on a Windows 10 computer! What’s the solution?

Should the customer pay for a huge update and redesign to make it work with Firefox 27.1 and Chrome 41.0 in 2020?

A virtual machine with Windows 8 and Firefox 14.0? A portable Mozilla Firefox 14.0 on Windows 10 in 2020, just to be able to use that line-of-business application that only requires a small update once or twice every 5 years? How are the virtual machine and/or portable Firefox 14.0 different from, or better than, a fat client? What’s the advantage? I’d say none!

Native applications usually do not have that kind of problem because their APIs are much more stable. You can still run Win16 applications on Windows 7!

You don’t believe me? We may soon be developing for 76 browsers!

While HTML5 may be fine for applications which are updated very often, it makes me feel very uneasy to see it used in environments where applications will rarely be updated, such as SCADA systems, warehouse management, control systems, medical records, etc.

A solution is needed

It looks like that choice of technology is going to make those applications much more expensive in the medium and long term, paying for “adaptations to new browsers” (sorry, I refuse to call something that adds zero value, other than being able to run on a newer browser, an “update” or “upgrade”).

Or maybe it’s about time to define actual “HTML5 profiles”. Acid3 seems to be too weak a profile: two very different browsers may both pass Acid3, yet a webapp may work in one browser and fail in the other due to bugs, missing or added features, etc.

Something needs to be done.

Yup, one more year I’m attending FOSDEM.

I'm going to FOSDEM, the Free and Open Source Software Developers' European Meeting

If you are coming, feel free to add yourself to the KDE wiki page.

If you are coming to the beer event on Friday but you don’t know anybody, make sure you bring something that identifies you as a Qt/KDE hacker! In any case, a lot of us will be around the KDE booth in the K building.

I will also spend quite some time at the CrossDesktop DevRoom, which is being run by Christophe Fergeau and myself this year.

FOSDEM is one of the largest gatherings of Free Software contributors in the world and happens each February in Brussels (Belgium). One of the developer rooms will be the CrossDesktop DevRoom, which will host Desktop-related talks.

Are you interested in giving a talk about open source and Qt, KDE, Enlightenment, Gnome, XFCE, Windows, Mac OS X, general desktop matters, mobile development, applications that enhance desktops and/or web?

We have extended the deadline for a few more days, until January 8th. If you want to submit a talk proposal, hurry up!

I have to say I am very surprised to see so few Qt/KDE talk proposals. Is there nothing interesting the Qt and KDE worlds have to say to 5,000+ people?

There is more information in the Call for Talks we published a couple of months ago.

If you are interested in Qt/KDE, come visit us at the KDE booth. If you add yourself to the KDE FOSDEM 2012 wiki page, we will be able to better organize the usual dinner on Sunday and/or smaller meetings for “special interest groups”.



Hurry up and submit your proposal, deadline is December 20th!

There is more information in the Call for Talks we published one month ago.



Here I am, with 9 other people, at the KDAB office in Berlin. We are at the KDE e.V. sprint, talking about promo stuff, e.V. stuff, corporate membership, the future, etc. Really interesting stuff.

Most of us (including our intern Inu) spent the morning trying to improve Join the Game; others went to define a policy for what to publish on the donors page, thank-you page, etc.

I’d say it has been very productive. Everybody came with very nice ideas; some of them we will finish here, for others we will need to ask for help from some community members (especially from artists!).

The sprint continues tomorrow.



We are now inviting proposals for talks about Free/Libre/Open-source Software on the topics of Desktop development, Desktop applications and interoperability amongst Desktop Environments. This is a unique opportunity to show novel ideas and developments to a wide technical audience.

Topics accepted include, but are not limited to: Enlightenment, Gnome, KDE, XFCE, Windows, Mac OS X, general desktop matters, applications that enhance desktops and web (when related to desktop).

Talks can be very specific, such as developing mobile applications with Qt Quick, or as general as predictions for the fusion of Desktop and web in 5 years’ time. Topics that are of interest to the users and developers of all desktop environments are especially welcome. The FOSDEM 2011 schedule might give you some inspiration.

Please include the following information when submitting a proposal: your name, the title of your talk (please be descriptive, as titles will be listed alongside around 250 others from other projects) and a short abstract of one or two paragraphs.

The deadline for submissions is December 20th 2011. FOSDEM will be held on the weekend of 4-5 February 2012. Please submit your proposals to crossdesktop-devroom@lists.fosdem.org

Also, if you are attending FOSDEM 2012, please add yourself to the KDE community wiki page so that we can organize better. We need volunteers for the booth!


Red Hat‘s Matthew Garrett let the cat out of the bag about a month ago: when UEFI Secure Boot is adopted by mainboard manufacturers to satisfy Microsoft Windows 8 requirements, it may very well be the case that Linux and others (BSD, Haiku, Minix, OS/2, etc) will no longer boot.

Matthew has written about it extensively and seems to know very well what the issues are (part I, part II), including the details of signing binaries and why Linux does not support Secure Boot yet.

The Free Software Foundation has also released a statement and started a campaign, which is, as usual, anti-Microsoft instead of pro-solutions.

Now let me express my opinion on this matter: this is not Microsoft’s fault.

Facts

Let’s see what the facts are in this controversy:

  • Secure Boot is here to stay. In my humble opinion, the idea is good and it will prevent and/or lessen the effects of malware, especially on Windows.
  • Binaries need to be signed with a certificate from the binaries’ vendor (Microsoft, Apple, Red Hat, etc).
  • The certificate that signs those binaries needs to be installed in the UEFI BIOS.
  • Everybody wants their certificate bundled with the UEFI BIOS so that their operating system works “out of the box”.
  • Given that there are many UEFI and mainboard manufacturers, getting your certificate included is not an easy task: it requires time, effort and money.
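The chain described by those facts can be modeled as a firmware trust store: the UEFI BIOS holds a set of vendor certificates, and a binary boots only if its signature verifies against one of them. Below is a deliberately simplified Python model of that trust decision. Real Secure Boot uses X.509 certificates and asymmetric Authenticode-style signatures; here an HMAC keyed by a vendor secret stands in for the signature, purely to show the trust-store logic, and all the keys and names are made up:

```python
import hashlib
import hmac

# Toy model of Secure Boot's trust decision. A real implementation verifies
# an asymmetric signature against enrolled X.509 certificates; here an HMAC
# keyed by the vendor's secret plays the role of the signature.

def sign(vendor_key: bytes, binary: bytes) -> bytes:
    return hmac.new(vendor_key, binary, hashlib.sha256).digest()

def firmware_allows(trust_store: dict, binary: bytes, signature: bytes) -> bool:
    """Boot only if some enrolled vendor key validates the signature."""
    return any(
        hmac.compare_digest(sign(key, binary), signature)
        for key in trust_store.values()
    )

# Keys enrolled by the mainboard manufacturer at the factory (hypothetical):
uefi_trust_store = {"Microsoft": b"ms-key", "Red Hat": b"rh-key"}

bootloader = b"GRUB2 image"
ok = firmware_allows(uefi_trust_store, bootloader, sign(b"rh-key", bootloader))
blocked = firmware_allows(uefi_trust_store, bootloader, sign(b"slackware-key", bootloader))
# ok is True; blocked is False: Slackware's key was never enrolled.
```

This makes the problem discussed below concrete: a distribution whose certificate never makes it into `uefi_trust_store` simply does not boot, no matter how correctly its binaries are signed.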

Problem

The problem stems from the fact that most Linux vendors do not have the power to get their certificates into the UEFI BIOS. Red Hat and SUSE will for sure get their certificates bundled in server UEFI BIOSes. Debian and Ubuntu? Maybe. NetBSD, OpenIndiana, Slackware, etc? No way.

This is, in my humble opinion, a serious defect in the standard. A huge omission. Apparently, while developing the Secure Boot specification everybody was busy talking about signed binaries, yet nobody thought for a second about how the certificates would get into the UEFI BIOS.

What should have been done

The UEFI secure boot standard should have defined an organization (a “Secure Boot Certification Authority”) that would issue and/or receive certificates from organizations/companies (Red Hat, Oracle, Ubuntu, Microsoft, Apple, etc) that want their binaries signed.

This SBCA would also be in charge of verifying the background of those organizations.

There is actually no need for a new organization: just use an existing one, such as Verisign, which already performs this task for Microsoft for kernel-level binaries (Authenticode).

Given that there is no Secure Boot Certification Authority, Microsoft asked BIOS (UEFI) developers and manufacturers to include their certificates, which looks 100% logical to me. The fact that Linux distributions do not have such power is unfortunate, but it is not Microsoft’s fault at all.

What can we do?

Given its strong ties with Intel, AMD and others, maybe the Linux Foundation could start a task force and a “Temporary Secure Boot Certification Authority” to deal with UEFI BIOS manufacturers and developers.

This task force and TSBCA would act as a proxy for minorities such as Linux, BSD, etc distributions.

I am convinced this is our best chance to get something done in a reasonable amount of time.

Complaining will not get us anything. Screaming at Microsoft will not get us anything. We need to propose solutions.

Wait! Non-Microsoft certificates? Why?

In addition to the missing Secure Boot Certification Authority, there is a second problem apparently nobody is talking about: what is the advantage mainboard manufacturers get from including non-Microsoft certificates?

For instance: why would Gigabyte (or any other mainboard manufacturer) include the certificate for, say, Haiku?

The benefit for Gigabyte would be negligible, and if someone with ill intentions got hold of Haiku’s certificate, that piece of malware would be installable on all of Gigabyte’s mainboards. This would lead to manufacturer-targeted malware, which would be fatal to Gigabyte: “oh, you want to be immune to the-grandchild-of-Stuxnet? Buy (a computer with) an MSI mainboard, which does not include Haiku’s certificate”.

Given that 99% of desktops and laptops only run Windows, the result of this (yet unresolved) problem would be that manufacturers will only install Microsoft certificates, making their boards immune to malware signed in the wild with, say, a Slackware certificate.

If we are lucky, mainboard manufacturers will give us a utility to install more certificates at our own risk.

The solution to the first problem looks easy to me. The solution to the second looks much more worrying to me.