Qt and KDE will be present at FOSDEM, the largest open-source event in Europe. Once again, we will be sharing the Desktops DevRoom with Gnome, Unity, Enlightenment, LXQt and Hawaii (a Qt Quick desktop environment). We recently published the schedule for the devroom, which will also appear in the printed booklet handed out at the front desk.


For the 2014 edition, the FOSDEM organization wants to achieve 100% recording of presentations. That means every presentation, in every room (devroom, lightning talk, main conference, etc.) must be recorded. That's hundreds of talks. While the FOSDEM and devroom organization teams comprise a lot of people, we are already far too busy with the organizational work and cannot spend time doing the actual recordings.

The good thing is, you can help!

Do you want to join the FOSDEM Video Team and receive the t-shirt? We are now looking for volunteer cameramen (and camerawomen, of course :-) ).

FOSDEM will provide you with equipment and training; you only need to start recording, focus, make sure nobody gets between the camera and the speaker/stage, etc. You do NOT need to record the whole track: even recording a single talk would help. More details on what will be required from you are available in this e-mail from Wouter.

Please contact me (pgquiles at elpauer dot org) if you are interested in recording one or more presentations from the Desktops DevRoom.

Once more, I'm going to FOSDEM 2014, the largest Free/Libre/Open Source Software event in Europe (5,000 attendees every year).


As usual, I will be in charge of the Desktops DevRoom, together with our friends from Gnome (Christophe Fergeau), Unity (Didier Roche), Enlightenment (Philippe Caseiro) and others.

See you in Brussels 1-2 February 2014!

BTW, have you already submitted your talk proposal for the Desktops DevRoom? The deadline (15th December) is very close! Do not wait any longer! See the details here: FOSDEM 2014 Desktops DevRoom Call for Talks

FOSDEM is one of the largest gatherings of Free Software contributors in the world and happens each February in Brussels (Belgium). One of the tracks will be the Desktops DevRoom (formerly known as “CrossDesktop DevRoom”), which will host Desktop-related talks.

We are now inviting proposals for talks about Free/Libre/Open-source Software on the topics of Desktop development, Desktop applications and interoperability amongst Desktop Environments. This is a unique opportunity to show novel ideas and developments to a wide technical audience.

Topics accepted include, but are not limited to: Enlightenment, Gnome, KDE, Unity, XFCE/Razor, Windows, Mac OS X, general desktop matters, applications that enhance desktops and web (when related to desktop).

Talks can be very specific, such as developing mobile applications with Qt Quick; or as general as predictions for the fusion of Desktop and web in 5 years' time. Topics that are of interest to the users and developers of all desktop environments are especially welcome. The FOSDEM 2013 schedule might give you some inspiration.

Please include the following information when submitting a proposal:

  • Your name
  • The title of your talk (please be descriptive, as titles will be listed alongside around 250 others from other projects)
  • Short abstract of one or two paragraphs
  • Short bio (with photo)
  • Requested time: from 15 to 45 minutes. Normal duration is 30 minutes. Longer duration requests must be properly justified. You may be assigned LESS time than you request.

The deadline for submissions is December 14th 2013. FOSDEM will be held on the weekend of 1-2 February 2014. Please use the following website to submit your proposals: https://penta.fosdem.org/submission/FOSDEM14

You can also join the devroom’s mailing list, which is the official communication channel for the DevRoom: desktops-devroom@lists.fosdem.org (subscription page for the mailing list)

– The Desktops DevRoom 2014 Organization Team

A few months ago, the Supreme Court annulled the allocation of Digital Terrestrial Television multiplexes made in 2010 by the Zapatero government.

Everyone claims to be surprised, and the Rajoy government says there is no choice but to remove 9 television channels.

And I say that is a lie.

In reality, there is no reason to remove any channel.

The Supreme Court ruling declares the previous allocation void, so logic dictates that a new allocation must be made. The crux of the matter is who would get the channels in that new allocation:

  • Those who until now own those multiplexes (Mediaset, Antena 3, NetTV, etc.) will want the tender tailored to them so that the new allocation is, what a coincidence, exactly the same as the old one
  • Those who until now rent channels from the multiplex owners (Paramount, Disney, etc.) will want the tender tailored to them so that they are awarded a channel directly and save themselves the rent

A separate issue is that the State has charged the telecommunications operators (Movistar, Vodafone, Orange, etc.) for the radio spectrum that will be freed up (the “digital dividend”):

  • According to the previous rules (and the new ones), and with the European Directive in hand, that money must go to subsidies for adapting aerials
  • However, since the State is broke (well, it has the money, but it goes on envelopes, Olympic bids, EREs, suits, etc.), they want to keep the money and not hand it out for adapting aerials
  • To keep the money without it costing ordinary citizens anything, the idea they have come up with is to annul the previous allocation and not make a new one. With no new multiplexes there are no channels on new frequencies, and therefore we do not have to adapt our aerials. More worthy of Austin Powers than of a decent government.

Explained for children:

  1. Pepito has an apple, Pedrito a pear and Jaimito an orange
  2. Pepito says, “Pedrito, if you give your pear to Jaimito, I will give you my apple” and “Jaimito, if you give me your orange, Pedrito will give you a pear”
  3. Pedrito gives his pear to Jaimito, Jaimito gives his orange to Pepito, and Pepito does NOT give his apple to Pedrito
  4. Pedrito complains and Pepito, rather than put up with complaints about breaking his side of the deal, kills Pedrito and hides the body

The funny thing is that both the government and the multiplex owners claim to be surprised.

Surprised at what?

They all knew the law inside out and lobbied for the channels to be handed out by direct award instead of by public tender, knowing they were breaking the law. Did they expect those who got no multiplex to sit idly by?

To recap:

Once more: re-read what I wrote yesterday

  • The ruling annuls the previous allocation of channels, but does NOT forbid a new allocation of those channels. In fact, quite the opposite: the ruling says the channels must be awarded, but by public tender, not by direct award. It is the Rajoy government that wants us to believe the ruling forces channels to be removed
  • It is true that the Zapatero government collected money from the mobile operators and did NOT spend it. That money was to be handed out during the second half of 2012 and all of 2013 to adapt aerials. It is the Rajoy government that has decided to keep the money and not hand it out for adapting aerials.
  • What is more, the decision to keep the money and NOT hand it out for adapting aerials was made LONG before anything was known about the ruling annulling the previous allocation (check the newspaper archives; any aerial installer on the forum can confirm it)
  • In short: yet another deception by Rajoy. And counting…

If I were Movistar, Vodafone, Orange or Yoigo, I would be watching very closely what happens now. If the government does NOT call a tender to award the multiplexes, I would immediately demand a refund of my share of the 1,800 million euros the State collected as a contribution to aerial adaptation. After all, if there are no new multiplexes, there is no need to re-adapt aerials, and therefore the mobile operators have no reason to pay.

In fact, the most sensible solution to this problem after the ruling would be:

  1. Call a public tender with a submission deadline of June 30
  2. Publish the resolution (who gets a multiplex and who does not) by August 15. Yes, a lot of people will lose their holidays. Too bad.
  3. Allow appeals until September 15
  4. Resolve all appeals by September 30
  5. On December 15, the former multiplex owners cease broadcasting. This gives the old and the new owners 75 days to negotiate channel rentals, in case a new player comes in and an old one drops out, without any interruption to broadcasts.
  6. Why December 15 and not the 31st, which would seem more logical? Because of Christmas, which is a high-audience season, and besides, some channel may want to broadcast the New Year's Eve chimes, and you are not going to switch it off right as the last chime sounds

But of course, that takes will, and what the current government wants is to remove the channels and pocket 1,800 million euros just like that. Shameless.

(Updated with my proposal for a non-traumatic transition plan)

 

Mark Shuttleworth recently criticized Jonathan Riddell for proposing that Xubuntu and others join the Kubuntu community. I thought I could make a few amendments to Mark's writing:

Jonathan Mark says that Canonical Kubuntu is not taking care of the Ubuntu community.

Consider for a minute, Jonathan Mark, the difference between our actions.

Canonical Kubuntu, as one stakeholder in the Ubuntu community, is spending a large amount of energy to evaluate how its actions might impact on all the other stakeholders, and offering to do chunks of work in support of those other stakeholder needs.

You, as one stakeholder in the Ubuntu community, are inviting people to contribute less to the broader project [all the X and Wayland -based desktops], and more to one stakeholder [Unity and Mir].

Hmm. Just because you may not get what you want is no basis for divisive leadership.

Yes, you should figure out what’s important to Kubuntu Ubuntu Unity and Mir, and yes, you should motivate folks to help you achieve those goals. But it’s simply wrong to suggest that Canonical Kubuntu isn’t hugely accommodating to the needs of others, or that it’s not possible to contribute or participate in the parts of Ubuntu which Canonical Kubuntu has a particularly strong interest in. Witness the fantastic work being done on both the system and the apps to bring Ubuntu Plasma to the phone and tablet. That may not be your cup of tea, but it’s tremendously motivating and exciting and energetic.

See Mark? I only needed to do a little search and replace on your words and suddenly, meaning is completely reversed!

Canonical started looking after only its own interests a couple of years ago and totally dumped the community. Many people have noticed this and written about it in the past two years.

How dare you say Jonathan or anyone from Kubuntu is proposing contributing less to the broader community? The broader community uses X and/or Wayland.

Canonical recently came up with Mir, a replacement for X and Wayland, out of thin air. Incompatible with X and Wayland.

No mention of it at all to anyone from X or Wayland.

No mention of it at FOSDEM one month ago, even though I, as the organizer of the Cross Desktop DevRoom, had been stalking your guy for months because we wanted diversity (and we got it: Gnome, KDE, Razor, XFCE, Enlightenment, etc; we even invited OpenBox, FVWM, CDE and others!). I even wrote an e-mail to you personally, warning that Unity was going to lose its opportunity to be on the stand at FOSDEM. You never answered, of course.

Don’t you think Mir, a whole new replacement for X and Wayland, which has been in development for 8 months, deserved a mention at the largest open source event in Europe?

Come on, man.

It is perfectly fine to say “yes, Canonical is not so interested in the community. It’s our way or the highway”.

But do not pretend it’s anything else or someone else is a bad guy.

In fact, is there any bad guy in this story at all? I think there is not; it's just people with different visions and different paths chosen to achieve them.

Maybe Mir and Unity are great ideas, much better than X and Wayland. But that’s not what we are talking about. We are talking about community, and Canonical has been steadily destroying it for a long time already. If you cannot or do not want to see that, you’ve got a huge problem going on.

 

FOSDEM is one of the largest gatherings of Free Software contributors in the world and happens each February in Brussels (Belgium). One of the tracks will be the CrossDesktop DevRoom, which will host Desktop-related talks.

We are now inviting proposals for talks about Free/Libre/Open-source Software on the topics of Desktop development, Desktop applications and interoperability amongst Desktop Environments. This is a unique opportunity to show novel ideas and developments to a wide technical audience.

Topics accepted include, but are not limited to: Enlightenment, Gnome, KDE, Unity, XFCE, Windows, Mac OS X, general desktop matters, applications that enhance desktops and web (when related to desktop).

Talks can be very specific, such as developing mobile applications with Qt Quick; or as general as predictions for the fusion of Desktop and web in 5 years' time. Topics that are of interest to the users and developers of all desktop environments are especially welcome. The FOSDEM 2012 schedule might give you some inspiration:
https://archive.fosdem.org/2012/schedule/track/crossdesktop_devroom.html

Please include the following information when submitting a proposal:

  • Your name
  • The title of your talk (please be descriptive, as titles will be listed alongside around 250 others from other projects)
  • Short abstract of one or two paragraphs
  • Short bio
  • Requested time: from 15 to 45 minutes. Normal duration is 30 minutes. Longer duration requests must be properly justified.

The deadline for submissions is December 14th 2012. FOSDEM will be held on the weekend of 2-3 February 2013. Please submit your proposals to crossdesktop-devroom@lists.fosdem.org (subscription page for the mailing list: https://lists.fosdem.org/listinfo/crossdesktop-devroom )

— The CrossDesktop DevRoom 2013 Organization Team

PS: Qt and KDE people are starting to organize for the booth, devroom, Saturday & Sunday night, etc. If you want to help, join kde-promo and add yourself to the wiki.

 

A few months ago I wrote about my disbelief that HTML5 is the right tool for everything. Some people took that as me saying HTML5 is useless.

That’s obviously not true and it’s certainly not what I think.

It's my opinion that there is room for HTML5 and there is room for native applications, and the decision on which to use should not be taken lightly.

Here are a few questions that may help you to make a wise decision.

 

Target user

Is it corporate? Is it consumer?

Corporate devices are usually under control and users may not be able to install software.

Or traffic may be filtered so users cannot browse to your website to use your webapp, and getting the authorization will take months, so they give up before they have even started using it.

Or they may be on a slow Internet connection, and using that HTML5 webapp that took years to develop and polish with all those nice effects is hardly possible, due to the megabytes of JavaScript and images that need to be downloaded.

As for consumers, despite having full control of their systems, it's not all roses either: not all consumers know how to install software, and they may be scared by UAC dialogs (hint: always sign your software with a certificate whose signature chain reaches VeriSign).

 

Target device

Is it a computer? Smartphone? Tablet? Web browser?

If a computer, is it a PC running Windows? Linux? Mac? All of them? Are you trying to reach as many platforms as possible?

How old of a computer are you targeting? Pentium 4? Core 2 Duo? Core i5? How much RAM? Try a fancy website with a lot of HTML5 niftiness on an old computer and you’ll probably be surprised at how slow HTML5 can be, even on modern browsers.

 

Deployment

Deploying native applications in corporate environments is a bit of a nightmare due to the variety of operating system versions, hardware, etc.

Deploying native applications in consumer computers is only a problem if you are targeting low-skilled users.

HTML5 is easy to deploy, provided that you can get the user to use a proper version of the browser. This is workable with consumers but often impossible with corporate, so if you go for HTML5 for a corporate application, make sure you support everything down to at least Internet Explorer 8.

For mobile devices (smartphones and tablets), it doesn't really matter whether it's an HTML5 or native application: it has to be installed on the device, the device goes with the user everywhere, and when the user moves to another device, re-installing all the applications is a matter of accessing the Apple Store, Android Market or equivalent and saying “download it all”.


Aaron recently posted an update about the progress of Vivaldi and the latest setbacks and advances in the project.

In case you are not familiar with Vivaldi, here's a quick recap: the idea is to have a tablet running Linux (Mer, the continuation of Maemo and MeeGo) and Plasma Active. Apparently the easiest and cheapest way to achieve this was to get all the source code for the software running on one of the many tablets sold with Android (which is, after all, a variant of Linux).

But then the problems arise: those tablets run Android, and vendors often provide only binary drivers, which are useless for Mer (or any other Linux distribution). Once they finally got enough source code to move Vivaldi forward, the board (the electronics) changed, and it was back to square one (or almost).

According to Aaron, it seems this time they have found a partner that is willing to provide the device and the source. Great!

However, since the beginning of the Vivaldi project (back when it was called Spark), there is one thing I have always wondered.

Why Mer? In fact, why a Linux distribution? I know, I know, Plasma Active needs Linux to run.

But what about taking a completely different approach?

Instead of trying to get Mer and Plasma Active running on a tablet that is meant to run Android, why not take a less radical approach?

We have Necessitas (Qt for Android).

So why not take the opposite approach?

Instead of adapting Mer (operating system) + Plasma Active (“desktop environment”) to a tablet (the “device”) which is meant to run Android

what about this:

Port Plasma Active (desktop environment) to Android (operating system), which is already running on the tablet (the “device”).

Then create KDE’s own “CyanogenMod” that can be installed on top of many tablet devices. And sell those devices: you get to choose 7”, 9.7”, 10.1”, etc

Or maybe even sell Plasma Active and the applications in the Android Market, if that's possible (I don't know enough about Android and the Market terms and conditions to know whether replacing the desktop is possible technically and legally).

Yes, that’s a different business and it’s probably not what Make·Play·Live had in mind.

With this approach, the business is no longer about selling hardware + software but mainly about selling software. How to make that profitable is a topic for another post.

And there are technical limitations: Bionic, the number of linked libraries, applications possibly needing tablet-specific adjustments (not unexpected, even with Mer), etc.

But at least it's something we know how to do: it's code, not hardware, and there is no need to deal with people who promise source code and then do not deliver. It will work on top of Android, and we just need to create our own distribution.

It’s just one more platform for KDE: first it was Linux, then other Unices, Mac, Windows… next Android.

Am I too crazy or dumb?

 

There is a nifty piece of software called zsync, which is kind of like rsync, except it is totally different.

Rsync

Rsync is mainly useful when you want to synchronize a list of files, or directories, between two machines. It will only download the new files and the files which have changed. It will even delete or back up the files which have been removed at the original site. Nice.

For a project I was involved in until recently at work, we had a slightly different problem: we generate a huge file (an ISO image) which contains about 6 GB of data. This ISO image contains the daily build of our application, made up of only a handful of files. The problem is that some of them are generated and gigabytes in size, yet from day to day only maybe 100-150 MB change (and it would be even less if it were not for this “feature” of .NET that never generates identical binaries, even from exactly the same source code).

Rsync was not useful in this case: it would download the whole file, gigabytes of it! (some of the people downloading the ISO are on a slow link in India)

 

zsync

This is exactly the case zsync targets: zsync will only download the changed parts of the file, thanks to its rolling-checksum algorithm.
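The rolling checksum is what makes this cheap: the weak checksum of a window can be updated in constant time as the window slides one byte, so the client can scan its old file at every offset for blocks the server already describes. A minimal sketch of the idea (this is the rsync-family weak checksum; zsync's actual implementation differs in details):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Weak rolling checksum in the rsync family: two 16-bit sums over a
   window, both updatable in O(1) when the window slides one byte. */
typedef struct { uint16_t a, b; } rollsum;

/* Compute the checksum of a window from scratch. */
static rollsum rollsum_compute(const unsigned char *buf, size_t len) {
    rollsum r = {0, 0};
    size_t i;
    for (i = 0; i < len; i++) {
        r.a = (uint16_t)(r.a + buf[i]);
        r.b = (uint16_t)(r.b + (len - i) * buf[i]);
    }
    return r;
}

/* Slide the window one byte: drop `out` on the left, add `in` on the right. */
static void rollsum_roll(rollsum *r, size_t len,
                         unsigned char out, unsigned char in) {
    r->a = (uint16_t)(r->a + in - out);
    r->b = (uint16_t)(r->b + r->a - len * out);
}
```

Because rolling is constant-time, checking every byte offset of a multi-gigabyte file against the server's block list stays affordable; a strong checksum then confirms candidate matches.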

Best of all: no need for an rsync server, no opening TCP port 873 (which requires months of arguing with BOFHs at some companies), nothing special: HTTP over port 80 and you are done. Provided you are not using Internet Information Server, which happens to support only 6 ranges in an HTTP request (hint: configure nginx in reverse proxy mode).

But I’m digressing.

Cool. Great. Awesome. Zsync. The perfect tool for the problem.

 

Hello Windows

Except this project is for Windows, people work on Windows, they are horrified by anything non-Windows, and zsync is only available for Unix platforms.

Uh oh.

In addition to that, the Cygwin port suffers from many connection errors on Windows 7 and does not work from a cmd.exe prompt; it wants the Cygwin Bourne shell.

So I started to port zsync to Windows natively.

 

Native port howto

The starting point was:

  • C99 code
  • autotools build system
  • No external dependencies (totally self-contained)
  • Heavy use of POSIX and Unix-only features (such as reading from a socket via file descriptors, renaming a file while it is open, deleting a file while it is open and replacing it with another file yet still using the same file descriptor, etc.)

To avoid breaking too much, and because I wanted to contribute my changes upstream, my intention was to do the port step by step:

  1. Linux/gcc/autotools
  2. Linux/gcc/CMake
  3. Cygwin/gcc/CMake
  4. MSYS/MinGW gcc/CMake
  5. Visual C++/CMake

 

Autotools

Autotools was the first stone in the path.

With some work (calling MSYS from a DOS prompt, etc.) it would have been possible to make it generate a Visual C++ Makefile, but it would have been painful.

Plus the existing autotools build system did not detect the right configuration on MinGW.

Step 1: replace autotools with CMake, on Linux. This was relatively easy (although time-consuming) and did not require any changes to the code.
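For illustration, a minimal CMakeLists.txt in the spirit of that replacement might look like this (target and file names are assumed here, not the actual zsync source layout):

```cmake
cmake_minimum_required(VERSION 2.8)
project(zsync C)

# Library with the checksum/zmap logic, plus the command-line client
# (hypothetical file names, for illustration only).
add_library(zsynclib STATIC libzsync/zsync.c libzsync/zmap.c)
add_executable(zsync client.c http.c)
target_link_libraries(zsync zsynclib)
```

The same CMakeLists.txt can then generate Unix Makefiles, MSYS/MinGW Makefiles or Visual C++ projects, which is exactly the property the later steps of the port rely on.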

 

Cygwin

The second step was to build zsync on Windows using Cygwin (which provides a POSIX compatibility layer) and CMake.

No code changes were required here either, only a few small adjustments to the CMake build system. I tested on Linux again; it worked fine.

At this point I had only made Pyrrhic progress: zsync was still Unix-only, but now with a cross-platform build system.

 

MinGW

My next step was a serious one: port zsync to MinGW, which generates a native Windows application with gcc.

That means using Winsock where required.

And hitting Microsoft's understanding of “POSIX-compliant”: the standard Windows POSIX C functions do not allow treating sockets as files, renaming open files, etc., and temporary files are created in C:\ (which fails on Windows Vista and newer). And that's when the functions exist at all. In many cases (mkstemp, pread, gmtime_r…) those functions were simply nonexistent and I needed to provide an implementation.

Plus adapting the build system. Fortunately I was still using gcc, and Qt Creator provides great support for MinGW and gdb on Windows, and decent support for CMake.

Some other “surprises” were large file support, a stupid “bug” and the difficulty of emulating all the file-locking features of Unix on Windows.

Regarding LFS, I took the easy path: instead of using the 64-bit Windows API directly, I used the mingw-w64 flavor of gcc on Windows, which implements a 64-bit off_t on 32-bit platforms transparently via _FILE_OFFSET_BITS.
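In practice that is a single define, which must appear before any system header; a sketch:

```c
/* Must come before ANY #include for the mapping to take effect. */
#define _FILE_OFFSET_BITS 64

#include <assert.h>
#include <stdio.h>
#include <sys/types.h>

/* off_t (and fseeko, ftello, etc) are now 64-bit even on 32-bit
   platforms -- glibc and mingw-w64 both honor this macro -- so file
   offsets beyond 2 GiB work without calling the 64-bit Windows API
   directly. */
```

With CMake the define can also be injected project-wide (e.g. via add_definitions) so no source file can accidentally include a header first.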

 

Visual C++ misery

Porting to Visual C++ was the last step.

This was not strictly required: after all, all I had been asked for was a native version, not a native version built with Visual C++.

Yet I decided to give VC++2010 a try.

The main problems were the lack of C99 support (though you can partially work around that by compiling as C++) and importing symbols, due to the lack of symbol exports in the shared library (attributes for symbol visibility were introduced in gcc 4.0, but many libraries do not use them because gcc does its “magic”, especially on MinGW, where it will “guess” the symbols).

Porting to Visual C++ 2010 required either giving up some C99 features in use (e.g. moving variable declarations to the beginning of functions) or adding a lot of C++-specific workarounds (extern “C”).
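The two workarounds look roughly like this (a sketch; the function name is invented for illustration):

```c
#include <assert.h>

/* When the C sources are compiled as C++ to sidestep MSVC's missing
   C99 support, this guard keeps the symbols linkable from plain C. */
#ifdef __cplusplus
extern "C" {
#endif

/* MSVC 2010 compiling as C rejects C99 mid-block declarations, so
   every variable moves to the top of the function, C89-style. */
int checksum_block(const unsigned char *buf, int len) {
    int i;       /* declared up front instead of inside the for statement */
    int sum = 0;
    for (i = 0; i < len; i++)
        sum += buf[i];
    return sum;
}

#ifdef __cplusplus
}
#endif
```

gcc and MinGW accept the same source unchanged, which is what makes this style viable for code meant to build everywhere.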

I was a bit worried upstream would not accept this code because it didn't really provide any benefit to the application (only to the developer: a great IDE and a very powerful debugger), so I didn't finish the Visual C++ port. Maybe some day, if Microsoft finally decides to support C99.

The result (so far) is available in the zsync-windows space in Assembla.