FOSDEM is one of the largest gatherings of Free Software contributors in the world and happens each February in Brussels (Belgium). One of the tracks will be the CrossDesktop DevRoom, which will host Desktop-related talks.

We are now inviting proposals for talks about Free/Libre/Open-source Software on the topics of Desktop development, Desktop applications and interoperability amongst Desktop Environments. This is a unique opportunity to show novel ideas and developments to a wide technical audience.

Topics accepted include, but are not limited to: Enlightenment, Gnome, KDE, Unity, XFCE, Windows, Mac OS X, general desktop matters, applications that enhance desktops and web (when related to desktop).

Talks can be very specific, such as developing mobile applications with Qt Quick, or as general as predictions for the fusion of Desktop and web in 5 years' time. Topics that are of interest to the users and developers of all desktop environments are especially welcome. The FOSDEM 2012 schedule might give you some inspiration:
https://archive.fosdem.org/2012/schedule/track/crossdesktop_devroom.html

Please include the following information when submitting a proposal:

  • Your name
  • The title of your talk (please be descriptive, as it will be listed along with around 250 titles from other projects)
  • Short abstract of one or two paragraphs
  • Short bio
  • Requested time: from 15 to 45 minutes. Normal duration is 30 minutes. Longer duration requests must be properly justified.

The deadline for submissions is December 14th 2012. FOSDEM will be held on the weekend of 2-3 February 2013. Please submit your proposals to crossdesktop-devroom@lists.fosdem.org (subscription page for the mailing list: https://lists.fosdem.org/listinfo/crossdesktop-devroom )

— The CrossDesktop DevRoom 2013 Organization Team

PS: Qt and KDE people are starting to organize for the booth, devroom, Saturday & Sunday night, etc. If you want to help, join kde-promo and add yourself to the wiki.

 

A few months ago I wrote about my disbelief that HTML5 is the right tool for everything. Some people took that as me saying HTML5 is useless.

That’s obviously not true and it’s certainly not what I think.

It's my opinion that there is room for HTML5 and room for native applications, and the decision on which to use should not be taken lightly.

Here are a few questions that may help you to make a wise decision.

 

Target user

Is it corporate? Is it consumer?

Corporate devices are usually under control and users may not be able to install software.

Or traffic may be filtered and users cannot browse to your website to use your webapp; getting it authorized will take months, so they give up before they have even started using it.

Or they may be on a slow Internet connection, and using that HTML5 webapp which took years to develop, with all those nice effects, is hardly possible due to the megabytes of JavaScript and images that need to be downloaded.

As for consumers, despite having full control of their systems, it's not all roses either: not all consumers know how to install software, and/or they may be scared by UAC dialogs (hint: always sign your software with a certificate whose signature chain reaches VeriSign).

 

Target device

Is it a computer? Smartphone? Tablet? Web browser?

If a computer, is it a PC running Windows? Linux? Mac? All of them? Are you trying to reach as many platforms as possible?

How old of a computer are you targeting? Pentium 4? Core 2 Duo? Core i5? How much RAM? Try a fancy website with a lot of HTML5 niftiness on an old computer and you’ll probably be surprised at how slow HTML5 can be, even on modern browsers.

 

Deployment

Deploying native applications in corporate environments is a bit of a nightmare due to different operating system versions, hardware, etc.

Deploying native applications in consumer computers is only a problem if you are targeting low-skilled users.

HTML5 is easy to deploy, provided that you can get the user to use a proper version of the browser. This is workable with consumers but often impossible in corporate environments, so if you go for HTML5 for a corporate application, make sure you support browsers at least as far back as Internet Explorer 8.

For mobile devices (smartphones and tablets), it doesn't really matter whether it's an HTML5 or a native application: it has to be installed on the device, the device goes with the user everywhere, and when the user moves to another device, re-installing all the applications is a matter of accessing the Apple Store, Android Market or equivalent and saying "download it all".


Aaron recently posted an update about the progress of Vivaldi and the new setbacks and progresses in the project.

In case you are not familiar with Vivaldi, here's a quick recap: the idea is to have a tablet running Linux (Mer, the continuation of Maemo and MeeGo) and Plasma Active. Apparently the easiest and cheapest way to achieve this was to get all the source code for the software running on one of the many tablets which are sold with Android (which is, after all, a variation of Linux).

But then problems arose: those tablets run Android and vendors often provide only binary drivers, which are useless for Mer (or any other Linux distribution). Once they finally got enough source to move Vivaldi forward, the board (the electronics) changed, and it was back to square one (or almost).

According to Aaron, it seems this time they have found a partner which is willing to provide the device and the source. Great!

However, since the beginning of the Vivaldi project (back when it was called Spark), there is one thing I have always wondered.

Why Mer? In fact, why a Linux distribution? I know, I know, Plasma Active needs Linux to run.

But what about taking a completely different approach?

Instead of trying to get Mer and Plasma Active running on a tablet which is meant to run Android, why not take a less radical approach?

We have Necessitas (Qt for Android).

So why not take the opposite approach?

Instead of adapting Mer (the operating system) + Plasma Active (the "desktop environment") to a tablet (the "device") which is meant to run Android,

what about this:

Port Plasma Active (desktop environment) to Android (operating system), which is already running on the tablet (the “device”).

Then create KDE's own "CyanogenMod" that can be installed on top of many tablet devices. And sell those devices: you get to choose 7″, 9.7″, 10.1″, etc.

Or maybe even sell Plasma Active and the application in the Android Market, if that’s possible (I don’t know enough about Android and the Market Terms and Conditions to know whether it’s possible technically and legally to replace the desktop).

Yes, that’s a different business and it’s probably not what Make·Play·Live had in mind.

With this approach, the business is no longer about selling hardware + software but mostly about selling software. How to make that profitable is a topic for another post.

And there are technical limitations: Bionic, the set of linked libraries, applications possibly needing tablet-specific details (not unexpected, even with Mer), etc.

But at least it’s something that we know how to do: it’s code, not hardware, and there is no need to deal with people who will promise source code and then won’t deliver. It will work on top of Android, and we just need to create our own distribution.

It’s just one more platform for KDE: first it was Linux, then other Unices, Mac, Windows… next Android.

Am I too crazy or dumb?

 

There is a nifty piece of software called zsync, which is kind of like rsync, except it is totally different.

Rsync

Rsync is mainly useful when you want to synchronize a list of files, or directories, between two servers. It will only download the new files and the files which have changed. It will even delete or back up the files which have been removed at the original site. Nice.

For a project I was involved in until recently at work, we had a slightly different problem: we generate a huge file (an ISO image) containing about 6 GB of data, the daily build of our application. It contains only a handful of files. The problem is that some of them are generated and gigabytes in size, yet from day to day only maybe 100-150 MB change (and it would be even less were it not for this "feature" of .NET that never generates identical binaries, even when compiling exactly the same source code).

Rsync was not useful in this case: it would download the whole file, gigabytes! (Some of the people downloading the ISO are on a slow link in India.)

 

zsync

This is exactly the case zsync targets: zsync will only download the changed parts of the file thanks to the rolling checksum algorithm.
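The "changed parts only" trick rests on a weak rolling checksum. As a rough illustration, here is a generic Adler-32-style rolling checksum of the family rsync and zsync use; this is a sketch for explanation, not zsync's actual implementation:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Weak rolling checksum over a sliding window: 'a' is the sum of the
 * bytes in the window, 'b' is the sum of the running 'a' values. */
typedef struct { uint32_t a, b; size_t len; } rollsum;

static void rollsum_init(rollsum *s, const uint8_t *buf, size_t len) {
    s->a = 0; s->b = 0; s->len = len;
    for (size_t i = 0; i < len; i++) {
        s->a += buf[i];
        s->b += s->a;
    }
}

/* Slide the window one byte: drop 'out', append 'in'. This is O(1),
 * which is what makes scanning a whole file for matches cheap. */
static void rollsum_rotate(rollsum *s, uint8_t out, uint8_t in) {
    s->a += (uint32_t)in - out;
    s->b += s->a - (uint32_t)s->len * out;
}

static uint32_t rollsum_digest(const rollsum *s) {
    return (s->b << 16) | (s->a & 0xffff);
}
```

The client slides this checksum over its local (old) file one byte at a time; wherever the digest matches a block checksum published in the `.zsync` control file, that block does not need to be downloaded.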

Best of all: no need for an rsync server, no opening TCP port 873 (which requires months of arguing with BOFHs in some companies), nothing special at all: HTTP over port 80 and you are done. Provided you are not using Internet Information Server, which happens to support only 6 ranges in an HTTP request (hint: configure nginx in reverse proxy mode).

But I’m digressing.

Cool. Great. Awesome. Zsync. The perfect tool for the problem.

 

Hello Windows

Except this project is for Windows, people work on Windows, they are horrified by anything non-Windows, and zsync is only available for Unix platforms.

Uh oh.

In addition, the Cygwin port suffers from many connection errors on Windows 7 and does not work from a cmd.exe prompt: it wants the Cygwin Bourne shell.

So I started to port zsync to Windows natively.

 

Native port howto

The starting point was:

  • C99 code
  • autotools build system
  • No external dependencies (totally self-contained)
  • Heavy use of POSIX and Unix-only features (such as reading from a socket via file descriptors, renaming a file while it is open, deleting a file while it is open and replacing it with another file while still using the same file descriptor, etc.)

To avoid breaking too much, and because I wanted to contribute my changes upstream, my intention was to do the port step by step:

  1. Linux/gcc/autotools
  2. Linux/gcc/CMake
  3. Cygwin/gcc/CMake
  4. MSYS/MinGW gcc/CMake
  5. Visual C++/CMake

 

Autotools

Autotools was the first stumbling block.

With some work (calling MSYS from a DOS prompt, etc.) it would have been possible to make autotools generate a Visual C++ Makefile, but it would have been painful.

Plus the existing autotools build system did not detect the right configuration on MinGW.

Step 1: replace autotools with CMake, on Linux. This was relatively easy (although time-consuming) and did not require any changes to the code.

 

Cygwin

The second step was to build zsync on Windows using Cygwin (which provides a POSIX compatibility layer) and CMake.

No code changes were required here either, only a few small adjustments to the CMake build system. I tested on Linux again; it still worked fine.

At this point, I had only made Pyrrhic progress: zsync was still Unix-only, but now with a cross-platform build system.

 

MinGW

My next step was a serious one: port zsync to use MinGW, which generates a native Windows application with gcc.

That means using Winsock where required.

And hitting Microsoft's understanding of "POSIX-compliant": the standard Windows POSIX C functions do not allow treating sockets as files or renaming open files, temporary files are created in C:\ (which fails on Windows Vista and newer), etc. And that's when the functions exist at all. In many cases (mkstemp, pread, gmtime_r…) those functions were simply nonexistent and I needed to provide an implementation.
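As an illustration of the kind of small shim this requires (the actual implementations in the port may differ), here is a gmtime_r() built on top of plain gmtime(). On the Windows CRT, gmtime() already uses a thread-local buffer, so copying its result out is safe there:

```c
#include <assert.h>
#include <string.h>
#include <time.h>

/* Minimal gmtime_r() replacement for platforms that lack it.
 * HAVE_GMTIME_R would come from the build system's feature checks. */
#ifndef HAVE_GMTIME_R
static struct tm *compat_gmtime_r(const time_t *timep, struct tm *result) {
    struct tm *tmp = gmtime(timep);   /* CRT-owned (thread-local) buffer */
    if (tmp == NULL)
        return NULL;
    memcpy(result, tmp, sizeof *result);
    return result;
}
#endif
```

Functions like pread() need similar, slightly larger wrappers around the Win32 API.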

Plus adapting the build system. Fortunately, I was still using gcc, and Qt Creator provides great support for MinGW and gdb on Windows, and decent support for CMake.

Some other “surprises” were large file support, a stupid “bug” and the difficulties of emulating all the file locking features of Unix on Windows.

Regarding LFS, I took the easy path: instead of using 64-bit Windows API directly, I used the mingw-w64 flavor of gcc on Windows, which implements 64-bit off_t on 32-bit platforms transparently via _FILE_OFFSET_BITS.
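In practice that means a single macro, defined before any system header is included. A minimal sketch of the idea:

```c
/* Ask for a 64-bit off_t before any system header is included;
 * glibc on 32-bit Linux and mingw-w64 both honor this macro. */
#define _FILE_OFFSET_BITS 64

#include <assert.h>
#include <stddef.h>
#include <sys/types.h>

/* With the macro set, off_t is 8 bytes even on 32-bit targets, so
 * lseek()/fseeko() can address offsets beyond the 2 GiB barrier. */
static size_t off_t_width(void) { return sizeof(off_t); }
```

With plain MinGW (not mingw-w64), the same effect would have required calling the 64-bit Win32 file functions explicitly, hence the "easy path" remark.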

 

Visual C++ misery

Porting to Visual C++ was the last step.

This was not strictly required. After all, what I had been asked for was a native version, not a native version built with Visual C++.

Yet I decided to give VC++2010 a try.

The main problems were the lack of C99 support (though you can partially work around that by compiling as C++) and importing symbols, due to the lack of symbol exports in the shared library (attributes for symbol visibility were introduced in gcc 4.0, but many libraries do not use them because gcc does its "magic", especially MinGW, which will "guess" the exported symbols).

Porting to Visual C++ 2010 required either giving up some C99 features in use (e.g. moving variable declarations to the beginning of functions) or adding a lot of C++-specific workarounds (extern "C").
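A toy example of both workarounds (illustrative; not code from zsync itself). Visual C++ 2010's C compiler rejects the first function, so the second, C89-style form is what it accepts:

```c
#include <assert.h>

/* C99: declarations may appear anywhere, including inside 'for'.
 * gcc accepts this; VC++ 2010's C compiler does not. */
int sum_c99(const int *v, int n) {
    int total = 0;
    for (int i = 0; i < n; i++)
        total += v[i];
    return total;
}

/* C89 rewrite: all declarations moved to the top of the block. */
int sum_c89(const int *v, int n) {
    int total;
    int i;
    total = 0;
    for (i = 0; i < n; i++)
        total += v[i];
    return total;
}

/* The other escape hatch: compile the C99 code as C++, but keep C
 * linkage for the public symbols so C callers still find them. */
#ifdef __cplusplus
extern "C" {
#endif
int sum_c89(const int *v, int n);
#ifdef __cplusplus
}
#endif
```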

I was a bit worried upstream would not accept this code because it didn't really provide any benefit to the application (the benefit is for the developer: a great IDE and a very powerful debugger), so I didn't finish the Visual C++ port. Maybe some day, if Microsoft finally decides to support C99.

The result (so far) is available in the zsync-windows space in Assembla.

 

I know, I know: "Underpaid!? But a councillor, a minister, the Prime Minister, etc. make 80,000 EUR practically tax-free!"

Well, it turns out that is a low salary.

 

Politicians' salaries

A few days ago, Mariano Rajoy's Government approved a bill under which the maximum salary of councillors and mayors will be the same as an MP's: less than 69,000 EUR a year.

The Prime Minister makes a little more, about 78,000 EUR.

 

Real cases

Mr Pizarro made about 10 million euros a year at Endesa. When he was elected MP for the Partido Popular, he went on to make about 65,000 EUR a year. In the 2 years he served as an MP, Manuel Pizarro lost, in round numbers, 20 million euros. Probably more, because we are not counting stock packages, stock options, pensions, etc.

Luis de Guindos, the current Minister of Economy, made more than 300,000 EUR a year as a board member of several companies. Now he earns a quarter of that, is one of the most hated people in Spain, and I don't know how he sleeps at night.

There are more cases: Pedro Morenés (Minister of Defence in the Rajoy Government, former president of MDBA), Pedro Argüelles (Secretary of State for Defence, former president of Boeing Spain), etc.

 

But they are the exception

Given that the private sector pays much better than politics, the usual move is in the opposite direction: from politics to private business.

There are plenty of cases: Felipe González, José María Aznar, Elena Salgado, Pedro Solbes, Eduardo Zaplana, Jordi Sevilla, Josu Jon Imaz, Josep Piqué... a thousand of them.

 

Useless managers

Given all this, is anyone surprised that we are governed by incompetents? It's natural selection!

  • If you are good at your job, you make much more in the private sector than in politics
  • If you are useless, you won't earn a penny in the private sector, so you go into politics.

 

Corollary

The useless go into politics, make contacts, and wreck the economy with their management. Shortly after leaving politics, they move to the private sector, where they will make a lot of money at companies which they "coincidentally" favoured while in office.

 

The solution

Fewer politicians, far fewer, but much better paid: how are we going to find a capable Minister of Economy if we only pay 69,000 EUR a year?

The only people willing to accept such a position are the few good managers who have a political vocation (you can count them on the fingers of one hand), or a horde of useless politicians who want to use the office as a springboard to the private sector, where they will make much more. The latter abound.

(Everything said about ministers and prime ministers applies equally to councillors, mayors, regional MPs, regional presidents, etc.)

 

Wikipedia has a good article on the Kyoto Protocol:

The Kyoto Protocol on climate change is a protocol of the UNFCCC (United Nations Framework Convention on Climate Change) and an international agreement whose objective is to reduce emissions of six greenhouse gases that cause global warming: carbon dioxide (CO2), methane (CH4) and nitrous oxide (N2O), plus three fluorinated industrial gases: hydrofluorocarbons (HFC), perfluorocarbons (PFC) and sulphur hexafluoride (SF6), by at least approximately 5% within the period from 2008 to 2012, compared with 1990 emissions.

Reality

Well, that was the intention. What has actually happened is:

  1. Very few countries have managed to reduce their CO2 emissions
  2. A "clean air" market was created, in which countries exceeding their CO2 emissions bought "non-emissions" from the countries that did comply. An English speaker would say we have literally created a market "out of thin air". I would complete that with "out of thin clean air"

On the first point, it must be said that Spain is one of the most idiotic countries in the Kyoto Protocol: we took on unachievable commitments which nobody actually forced on us, and instead of leaving the Protocol, as the other countries that were not going to comply did (United States, Canada, etc.), we stayed, and we have paid and will keep paying a fortune.

On the second point, the "polluting" countries are not actually buying from "clean" countries but (mostly) from underdeveloped countries, which simply do not pollute more because they have no industry, not because their industry is efficient.

 

What about Spain?

Spain has failed absolutely at everything: we have not met our emissions targets, and instead of leaving we stay in because we are "nice guys", and as a consequence we are going to pay about 800 million euros for the 2008-2012 period.

It is already too late to save those 800 million euros (the deadline was 31 December 2011), but what we can do is "unsubscribe" from the "post-Kyoto club".

 

Polluter! Anti-ecologist! Nature murderer!

Hold your horses.

I would like us to comply with Kyoto, Durban and Bali, but since we are not going to manage it, and since our economy is in tatters, the best thing is to withdraw and do the best we can.

And without paying stupid self-imposed penalties.

I would rather spend those 800 million euros on a thousand more important things.

 

D-Bus (Desktop Bus) is a simple open-source inter-process communication (IPC) system that lets software applications communicate with one another. It replaced DCOP in KDE 4 and has also been adopted by Gnome, XFCE and other desktops. It is, in fact, the main interoperability mechanism in the "Linux desktop" world, thanks to the freedesktop.org standards.

The architecture of D-Bus is pretty simple: there is a dbus-daemon server process which runs locally and acts as a “messaging broker” and applications exchange messages through the dbus-daemon.

But of course you already knew that, because you are super-smart developers and/or users.

D-Bus on Windows

What you may not know is how much damage D-Bus is doing to open source software on Windows.

A few years ago I tried to introduce kdelibs into a large cross-platform project, but it was rejected, despite some obvious advantages, mainly due to D-Bus.

Performance and reliability back then were horrible. It works much better these days, but it still scares Windows users. In fact, you may as well replace "it scares Windows users" with "it scares IT departments in the enterprise world"*.

The reason?

A dozen randomly started processes, IPC with no security at all, difficulty upgrading/killing/knowing when to update applications, and more. I'm not making this up; it has already happened to me.

* Yes, I know our friends at Kolab are doing well, but how many KDE applications on the desktop have you seen outside that "isolation bubble"?

D-Bus on mobile

Another problem is that D-Bus is not available on all platforms (Android, Symbian, iOS, etc.), which makes porting KDE applications to those platforms difficult.

Sure, Android uses D-Bus internally, but that's an implementation detail and we don't have access to it. That means we still need a solution for platforms where you cannot run or access dbus-daemon.

Do we need a daemon?

A few months ago I was wondering: do we really need this dbus-daemon process at all?

What we have now looks like this: every application connects to a local dbus-daemon, which routes the messages between them.

As you can see, D-Bus is a local IPC mechanism, i.e. it does not allow applications to communicate over the network (although technically it would not be difficult to implement). And every operating system these days has its own IPC mechanism. Why create a new one, with a new daemon? Can't we use an existing one?

I quickly got my first answer: D-Bus was created to expose a common API (and message and data format, i. e. a common “wire protocol”) to applications, so that it’s easy to exchange information.

As for the second question, reusing an existing mechanism, it's obvious we cannot: KDE applications run on a variety of operating systems, and every one of them has a different "native" IPC mechanism. Unices (Linux, BSD, etc.) may be quite similar, but Windows, Symbian, etc. are definitely very different.

No, we don’t!

So I thought: let's use some technospeak buzzword and make HR people happy! The façade pattern!

Let's implement a libdbusfat which offers the libdbus API on one side but talks to a native IPC service on the other. That way we could get rid of the dbus-daemon process and use the platform's IPC facilities. For each platform, a different "native IPC side" would be implemented: on Windows it could be COM, on Android something else, etc.
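In code, the façade could be as simple as a function-pointer table behind a libdbus-shaped front. A minimal sketch of the idea; every name here is made up for illustration, and a dummy backend stands in for COM/Binder/whatever the platform provides:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* The per-platform "native IPC side": one of these per backend. */
typedef struct {
    int (*connect)(void);
    int (*send)(const char *dest, const char *msg);
} dbusfat_backend;

static const dbusfat_backend *active_backend = NULL;

/* Chosen at build (or run) time, one backend per platform. */
void dbusfat_use_backend(const dbusfat_backend *b) { active_backend = b; }

/* The facade: shaped like a libdbus call, but no dbus-daemon. */
int dbusfat_send(const char *dest, const char *msg) {
    if (active_backend == NULL || active_backend->connect() != 0)
        return -1;
    return active_backend->send(dest, msg);
}

/* A dummy "native IPC" backend, standing in for COM/Binder/etc. */
static char last_msg[128];
static int dummy_connect(void) { return 0; }
static int dummy_send(const char *dest, const char *msg) {
    (void)dest;                              /* routing ignored here */
    strncpy(last_msg, msg, sizeof last_msg - 1);
    return 0;
}
static const dbusfat_backend dummy_backend = { dummy_connect, dummy_send };
```

The real work, of course, is in making the backends speak D-Bus's wire format and match its semantics, which is where the "cut dbus-daemon in half" effort below comes in.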

Pros

The advantage of libdbusfat would be that applications would not need any changes: they would still be able to use D-Bus, which at the moment is important for cross-desktop interoperability.

On Unix platforms, applications would link to libdbus and talk to dbus-daemon.

On Windows, Android, etc, applications would link to libdbusfat and talk to the native IPC system.

By the magic of the façade pattern, we could compile, for instance, QtDBus so that it works exactly as it does now but does not require dbus-daemon on Windows. Or Symbian. Or Android.

QtMobility?

QtMobility implements a Publish/Subscribe API with a D-Bus backend, but it serves a completely different purpose: it's not available to glib/gtk/EFL/etc. applications and it's implemented in terms of QtDBus (which in turn uses dbus-daemon for D-Bus services on every platform).

It’s, in fact, a perfect candidate to become a user of libdbusfat.

Cons

A lot of work.

You need to cut dbus-daemon in half, establish a clear API which can be implemented in terms of each platform's IPC, handle data conversion, performance, etc. Very interesting work if you've got the time to do it, I must say. Perfect for a Google Summer of Code, if you already know D-Bus and IPC on a couple of sufficiently different platforms (Linux and Windows, or Linux and Android, or Linux and iOS, etc.).

Summary

TL;DR: the idea is to be able to build applications that require D-Bus without needing to change the applications. This may or may not be feasible on Android, depending on the API, but it is for Windows.

Are you brave enough to develop libdbusfat in a Qt or KDE GSoC?

This is a short one and probably doable as a Summer of Code project.

The idea: add support for the Microsoft compiler and linker, and for Visual Studio projects and solutions (.sln, .vcproj, etc.), in KDevelop, at least in the Windows version.

QtCreator has support for the first part (compiler and linker).

For the second part (solutions and projects), code can probably be derived (directly or indirectly) from MonoDevelop's and CMake's. The starting point would be MSBuild support, as it's what VS2010 is based on.

Bonus points if you add C#/.NET support (Qyoto/Kimono).

In a perfectly orchestrated marketing campaign for a 100% free-libre tablet called Spark that will run KDE Plasma Active, Aaron Seigo writes today about the problems they are facing with GPL violations.

Apparently, every Chinese manufacturer is breaking the GPLv2 by not releasing the sources of their modified Linux kernel. Conversation after conversation with Zenithink (designers of the Spark), Sygic (designers of the Dreambook W7), etc. has led nowhere. To the point that CordiaTab, a similar effort using Gnome instead of KDE, has been cancelled.

I have to say I am very surprised at the lack of kernel sources. What is the Free Software Foundation doing? Why don't we seek a ban on all imports of tablets whose manufacturers don't release the full GPL source?

Apple got the Samsung GalaxyTab imports blocked in Germany and Australia for something as ethereal as patents covering the external frame design. We are talking about license infringement, which is easier to demonstrate in court.

China may ignore intellectual property but they cannot ignore business, and no imports means no business. Let’s get all GPL-infringing tablet imports banned and we will get more source in two weeks than we can digest in two years. Heck, I’m surprised Apple is not trying this in court to block Android!

Apparently HTML5 applications are the best thing since sliced bread.

HTML5 is the first platform any mobile vendor supports: iPhone, Android, Windows Phone, BlackBerry, Symbian. All of them.

Windows 8 is said to promote HTML5 as the preferred application development solution.

I used to look kindly at that. But about a month ago I started to get worried: is HTML5 good for everything?

Long-lived applications

In military, industrial, warehouse management, medical, etc. environments, it is not rare for bespoke applications to be developed and stay in use for many years (and I really mean many: 10, 20 or even more!) with barely an update. It's not rare for those applications to receive only very small updates once every 5 years. Those applications, not Angry Birds, are what keeps the world running: troops know what supplies they can count on, iPhones get manufactured, FedEx is able to deliver your package, and your doctor is able to check your health.

But now that everybody seems to be moving to HTML5 webapps, what happens when my warehouse management application is a webapp and changes in newer browsers make it stop working?

Are vain upgrades the future?

Say my webapp is released in 2014 and works fine with Firefox 14.0 and Chrome 26.0, the newest browsers at the time. Fast-forward to 2020, and Firefox 14.0 and Chrome 26.0 do not even install on a Windows 10 computer! What's the solution?

Should the customer pay for a huge update and redesign to make it work with Firefox 27.1 and Chrome 41.0 in 2020?

A virtual machine with Windows 8 and Firefox 14.0? A portable Mozilla Firefox 14.0 on Windows 10 in 2020, just to be able to use that line-of-business application that only needs a small update once or twice every 5 years? How are the virtual machine and/or portable Firefox 14.0 different from, or better than, a fat client? What's the advantage? I'd say none!

Native applications usually do not have that kind of problem because the APIs are much more stable. You can still run Win16 applications on Windows 7!

You don’t believe me? We may soon be developing for 76 browsers!

While HTML5 may be fine for applications which are updated very often, it makes me very uneasy to see it used in environments where applications will rarely be updated, such as SCADAs, warehouse management, control systems, medical records, etc.

A solution is needed

It looks like that choice of technology is going to make those applications much more expensive in the medium and long term, paying for "adaptations to new browsers" (sorry, I refuse to call something an "update" or "upgrade" when it adds zero value other than being able to run on a newer browser).

Or maybe it's about time to define actual "HTML5 profiles". ACID3 seems too weak a profile: two very different browsers may both pass ACID3, yet a webapp may work in one and fail in the other due to bugs, missing or added features, etc.

Something needs to be done.