A while ago I mentioned that Koen from Emweb had made an interesting proposal at FOSDEM about emerge, the KDE on Windows build tool.

Yesterday, Jarosław Staniek and I reaffirmed our commitment to ‘emerge’. Today, I’d like to go a bit further: let’s bring more developers to emerge by opening it up to other projects. Keep reading!

What is emerge, why is it important and what was Koen’s proposal?

Fact: Microsoft Windows is very different from Unix when it comes to development.

On Unix platforms (that includes Linux and Mac OS X), software is usually installed to /usr: applications in /usr/bin and /usr/sbin, libraries in /usr/lib, headers in /usr/include, common resources in /usr/share, etc. Also, dependency management is usually something you can count on: when you install kdelibs5-dev in Ubuntu, it will automatically install libqt4-dev, kdelibs5-data, libfreetype (runtime), etc. That makes setting up a development environment a very easy task: look for shared libraries, header files, etc. in the common places and you will probably find them.

On Windows there is nothing like that. When you want to compile an application, you need to provide (build and install) all its dependencies yourself, and you need to tell Visual Studio where to find everything. Even CMake usually needs some help in the form of a hint for CMAKE_PREFIX_PATH. As you may imagine, building KDE, which has more than 200 third-party dependencies and tens of modules (and with the move + split to git, many more), becomes an almost insurmountable task.

‘Emerge’ to the rescue: inspired by Gentoo’s emerge, Ralf Habacker, Christian Ehrlicher, Patrick Spendrin and others (yours truly included) developed a tool which downloads the source, configures, builds, installs and packages KDE and its dependencies. It makes a world of difference when building KDE. Actually, it makes building KDE on Windows possible. Once more: thank you very much, guys, impressive tool.

There are two well-differentiated parts in emerge, the ‘engine’ and the ‘recipes’.


This is another idea that has been in my mind for months, and I’m thrilled to have added it to the KDE Google Summer of Code 2011 Ideas wiki: a new installer for KDE on Windows.

Currently, KDE on Windows can be installed using either emerge or the end-user installer. The installer is Cygwin-like, which means it is very different from the usual InstallShield, NSIS, or MSI installers Windows users are used to.

Have you seen Apple’s Mac App Store?


Recently I have been appointed “Verification Leader, Core Foundation Team”, whatever that means. Essentially it translates into taking care of functional testing. Ah, testing: well regarded, often ignored, always controversial.

“Testing” usually means “unit testing” to developers. Code is what we understand (I can’t speak Russian but I can speak C++ 🙂). “I’ve kept my API stable but I’ve optimized the internal implementation. What did I break?” With proper unit testing, you can go for a coffee and when you are back, the answer to the question is on screen: “oh, after the optimization I forgot about corner case X, I’ll fix it”.
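As a minimal illustration of the kind of test that catches this, here is a QtTest sketch; ‘optimizedSum’ is a made-up stand-in for the freshly optimized implementation:

#include <QtTest/QtTest>

// Stand-in for the function whose internals were just "optimized".
int optimizedSum(int a, int b)
{
    return a + b;
}

class SumTest : public QObject
{
    Q_OBJECT
private slots:
    void handlesCornerCases()
    {
        // If the optimization broke a corner case, this fails while
        // you are out getting coffee.
        QCOMPARE(optimizedSum(-1, 1), 0);
        QCOMPARE(optimizedSum(0, 0), 0);
    }
};

QTEST_MAIN(SumTest)
#include "sumtest.moc"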

So the code is right. As Knuth would say “Beware of bugs in the above code; I have only proved it correct, not tried it”.

Try. That’s the key. The user does not care whether the application is implemented in C++2011 using lambdas, concepts and variadic templates, throwing exceptions across shared libraries, or in plain old Visual Basic 6. Does it work? Does it load the file when I click File -> Open? Will it show a computer when I click and drag the computer icon from the palette to the canvas? That’s what matters.
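QtTest can already simulate the user’s side of that interaction. A minimal sketch (the lone button is a stand-in for a real piece of UI; in a real test you would assert on the visible outcome, not just the signal):

#include <QtTest/QtTest>
#include <QPushButton>

class ClickTest : public QObject
{
    Q_OBJECT
private slots:
    void buttonReactsToUser()
    {
        QPushButton button("Open");
        button.show();
        QSignalSpy spy(&button, SIGNAL(clicked(bool)));

        // Synthesize a real left-button click, like a user would do.
        QTest::mouseClick(&button, Qt::LeftButton);

        QCOMPARE(spy.count(), 1);
    }
};

QTEST_MAIN(ClickTest)
#include "clicktest.moc"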

In KDE we have unit tests in kdelibs (in fact, it’s a requirement to get code in) and there are a few UI tests for the KDE User Interface module, which essentially contains widgets for use in your KDE-based application.

What about applications? What about Amarok, Gwenview, Kexi, Okular, etc.? How do we know they work fine? How do we know their actions do what they should do and the application renders as it should? Correct me if I’m wrong, but I think currently we rely on input from our users: when they report something’s misbehaving, we oblige.

Can we do better? Yes, we can. UI testing has been around for decades and even the venerable expect can help with it.

There are many tools for automated GUI testing. Some are open source, like Xnee or White, others are closed-source, like Froglogic Squish and SmartBear TestComplete, which also happen to have the best support for Qt-based applications.

Are those tools useful? Why, yes, they are. For instance, there is a well-known bug in KDE Windows which makes Digikam on Windows show empty context menus when the Qt style is set to CDE (the default; if you use the Oxygen style, everything looks fine). This kind of bug can be easily detected by simulating user actions, then OCR’ing the text of the popup menu. Or even just telling the automatic UI test to take a screenshot of the popup, then comparing pictures. Same for PDFs that render oddly in Okular on some Linux distribution due to fonts, FreeType configuration, an old version of Poppler, etc.
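A rough sketch of the screenshot approach in plain QtTest; the reference image name is an assumed test fixture, and QPixmap::grabWidget is the Qt 4-era way to capture a widget:

#include <QtTest/QtTest>
#include <QMenu>
#include <QPixmap>
#include <QImage>

class ContextMenuTest : public QObject
{
    Q_OBJECT
private slots:
    void menuRendersItsText()
    {
        QMenu menu;
        menu.addAction("Rotate");
        menu.addAction("Rename...");
        menu.popup(QPoint(100, 100));
        QTest::qWait(250); // give the menu time to show and paint

        // Grab what the user actually sees and compare it against a
        // known-good screenshot ("menu-reference.png" is a fixture).
        QImage shot = QPixmap::grabWidget(&menu).toImage()
                          .convertToFormat(QImage::Format_RGB32);
        QImage reference = QImage("menu-reference.png")
                          .convertToFormat(QImage::Format_RGB32);

        // An all-blank menu (the CDE bug above) would fail this check.
        QCOMPARE(shot, reference);
    }
};

QTEST_MAIN(ContextMenuTest)
#include "contextmenutest.moc"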

The tool I am most familiar with is TestComplete, which suffers a bit with alien widgets (you need to build Qt without accessibility support). I have been told Squish is much better but given that there is no freely downloadable version and it does not support WPF (a requirement for the project I’m working on now), I can’t try it.

Here is a wish for KDE users who want to join development but have little or no programming skills: go to SmartBear’s website, download TestComplete and record/write a few UI tests. Let us know (in the forums, on the mailing lists) how far you got. Maybe we can make UI testing (a must in corporate software development) part of the KDE development and release process.

Update: apaku says froglogic provides free licenses for KDE; see the comments.

You surely know the Qt Demo. It provides a number of example applications and demonstrations written to show developers the Qt API in use and to showcase the features found in each of Qt’s core technologies.

Wait. Don’t we have the Qt examples in the source tree to learn the API? Why is the Qt Demo there? In my opinion, because it is a great seller: it’s a single, simple application which shows the greatest and best features of Qt. It’s a great tool to convince your boss Qt is the right choice.

In KDE, we also have examples but we do not have a demo with bells and whistles.

My wish today: step in and create a KDE Demo showing the technologies and applications that make KDE a great platform: KIO, KParts, KAuth, Solid, Plasma, KXmlGui, KIPI plugins, Nepomuk, ThreadWeaver, Sonnet, KNewStuff, Kiosk mode, Kross, KDevPlatform, Phonon (some features are not present in the Qt version), Attica, KWallet, etc.

What would you like to see in a KDE Demo application? Do you think it could be material for a GSoC?

Update (April 26th, 2011): This idea has been accepted for Google Summer of Code 2011; I am mentoring Jon Ander Peñalba this summer!

My last two wishes were a generic version control system library based on git and document-level versioning for Calligra (I’m talking about Calligra here but this applies to many applications; it’s just that Calligra is one where this fits very nicely). Those were the building blocks for this wish: in-document translations.

Say you are a company present in several locations and you want to send a document to your customers. You want to please them, let them know they are loved. One effective way to do that is to write to them in their native language.

I don’t know what your workflow is when you have to write a document in several languages, but this is mine:

  1. Create original document, generally written in English for proper review by everybody with a say and a vote. Save it as document_english.odt.
  2. Translate to Catalan and save as document_catalan.odt.
  3. Translate to Spanish and save as document_spanish.odt.

Now when I want to send the document and translations to someone, I need to send several files.

Drawbacks of my workflow:

  • When attaching several documents to an e-mail, it’s easy to forget to send some translation (oh, I forgot to send you document_urdu.odt!)
  • If the original document (document_english.odt) is updated, I need to remember to manually update document_spanish.odt, document_french.odt, etc.
  • Oh, and each document has to be translated on its own.

Enter my imagination: I want better support for translations in Calligra. It would comprise two parts: in-document translations and automated translation.

In-document translations means instead of having document_english.odt, document_french.odt, etc, we would have a single document.odt that would contain all the translations.

How to do that? By means of the versioning method I wished about yesterday and git branches.

The “master” branch in the document would be the original language the document was written in (say, English) and then, when I want to translate the document to Japanese, I would go to Calligra Words, click on “Create new translation” and choose the language I am translating to. This would create a new branch, essentially a “git checkout -b japanese master”.

Of course it would be better to use ISO 639 codes for the branch names, so that we can show localized language names. I.e., if I am using Calligra in English, I want it to say the translations available in that document are “Original document, Japanese, French”, but if I am using Calligra in Spanish I want it to say the translations available are “Documento original, Japonés, Francés”.

By using git versioning, it would also be quite easy to introduce some “marks” to know when the original document has been altered and therefore the translations need updating (“git diff” to the rescue 🙂).
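A sketch of both operations with plain git driven through QProcess (the real implementation would live behind libQtGit/QVersioned; ‘workTree’ stands for the unpacked document directory from the versioning wish):

#include <QProcess>
#include <QString>
#include <QStringList>

// Create a translation branch, e.g. "ja", starting from the
// original-language branch: git checkout -b ja master
void createTranslationBranch(const QString &workTree, const QString &isoCode)
{
    QProcess git;
    git.setWorkingDirectory(workTree);
    git.start("git", QStringList() << "checkout" << "-b" << isoCode << "master");
    git.waitForFinished(-1);
}

// Return true when the original (master) gained commits the translation
// branch has not seen, i.e. the translation may need updating.
bool translationNeedsUpdate(const QString &workTree, const QString &isoCode)
{
    QProcess git;
    git.setWorkingDirectory(workTree);
    // "git rev-list ja..master" lists commits in master missing from ja.
    git.start("git", QStringList() << "rev-list" << isoCode + "..master");
    git.waitForFinished(-1);
    return !git.readAllStandardOutput().trimmed().isEmpty();
}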

Changing the language of the master branch should also be possible by adding a “make default language” option to Calligra. The default language would be the language in which Calligra opens the document when several translations are available.

Now, let’s go for the second point of my vision: automated translations.

A bit earlier I said that to create a new translation, I would go to Calligra Words, click on “Create new translation” and a new branch would be created. In addition to that, we could ask the user if they want to do an initial translation using some automated tool like Apertium (want to translate from Tajik to Persian? Apertium does), Google Translate, Babelfish, etc.

And now the twist: I’m not a native English speaker. After each BehindKDE I write, I go to Jonathan Riddell and ask him to read the interview, spellcheck, make sure the words not only are correct but are in the proper order, etc (and he kindly does and never complains, thanks Jonathan!).

Would you not like to have a Jonathan Riddell in your Calligra Words for English translations, an Irina Rempt for Dutch translations, etc? I sure would!

So the idea is, after the automated translation, a small notification would pop up and say “hey, I’ve noticed some supersmart bot has translated your document to Bengali. Would you like a human to review the translation and get back to you in 24 hours?”. That would send the document to some professional translator, for instance to Irina or to Prompsit (the company that develops Apertium), which would charge you their fee, and Calligra would receive a commission (5%?).

Automated translation has many wrinkles, but they can all be worked out and they could even create a business:

  • What services should be in for free translation? That’s not a problem: essentially, anyone that’s good. There should be a default which supports the most-used languages and provides good translations.
  • What services should be in for paid translation? That could be a problem: in the near future, when Calligra overtakes Microsoft Office, every translator will ask to be in 🙂 No, really, we’d need them to offer a proper way to submit documents, notify the user of receipt and progress, etc.
  • Privacy. For automated translation, I can either submit the document to the public free translation service, or I can pay a small monthly fee and have a private account on Apertium or Google Translate, so that my document is submitted over SSL and I am assured no one will use it for research or anything else.
  • … and more

As a bonus point, a “translation mode” UI could be added to Calligra. It would show the document in the original language and the translated document side by side and make editing easy, something like Google Translator Toolkit.

In case you think I’m dreaming: no, I’m not. I have had this in mind for more than a year. Last month I talked about it with Gema Ramirez, the CEO of Prompsit (who has been a friend for, I don’t know, 15 years?), and she instantly liked the idea. Maybe this could be material for a shared Apertium-Calligra GSoC?

So here is my third wish: let’s make Calligra the reference tool for users needing translations and for companies providing translation services. Your mission: take my 1,000-word essay and make it real 🙂 I would do it myself but sadly my job and real life leave little spare time for that.

Calligra Suite is an integrated suite of work applications based on Qt and KDE. It is a continuation (fork) of KOffice, and it’s where most of the former KOffice developers have landed. Calligra includes a text processor, a spreadsheet, a presentation application, a database application, vector and image editors, etc.

Two years ago I made a proposal for in-document versioning for (then) KOffice applications. It would be based on libQtGit, which I wished about yesterday, and a new library tentatively called libQVersioned, which would provide a higher-level API.

This is what I wrote back then; I have added a few clarifications:

I’m working on a more general solution towards document versioning.

First, I’m developing libQtGit, which provides a Qt-like API to use git. Unfortunately, it’s been on hold for three months because I was too busy with real work. The good news is last Monday I started hacking on it again.

Second, I intend to develop a framework (tentatively named QVersioned for now) on top of libQtGit. QVersioned will provide QVersionedDir and QVersionedFile classes (amongst others, but those two are the most important). Those classes would essentially have the same API QDir and QFile have. QVersioned would open ZIP files and store a .git directory inside. There are cases, like OpenDocument files (which are already ZIP files), where nothing special would be needed. For other cases (for instance, a .txt file), there would be a .txtv (meaning “.txt versioned”), which would be a ZIP file containing the .txt + the .git directory.
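To make that concrete, here is a purely speculative sketch of what QVersionedFile could look like, extrapolated from the description above (none of the version-aware methods exist; the names are made up for illustration):

#include <QFile>
#include <QString>
#include <QStringList>

// Same API as QFile for day-to-day use...
class QVersionedFile : public QFile
{
public:
    explicit QVersionedFile(const QString &name) : QFile(name) {}

    // ...plus hypothetical version-aware extras, backed by the embedded
    // .git directory inside the ZIP:
    QStringList versions() const;               // list of commit ids
    bool checkoutVersion(const QString &id);     // rewind to an old version
    bool commitVersion(const QString &message);  // record a new version
};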

Now, how did I intend to implement ODF versioning, which was going to be the “this thing works” case? Like this:

  • The .git folder would store the uncompressed contents of the ODF file, i.e. XML, images, etc. This is needed to avoid duplication, allow diffs, etc.
  • There would also be a checkout, which would provide a full copy of the latest version (XML, images, etc.) in the ZIP file, just like any ODF-capable application not supporting QVersioned would expect (these applications would just ignore the .git directory).
  • Clarification: what items 1 and 2 mean is that to version an ODF file, you would do something like:
    1. mkdir doc
    2. cd doc && unzip document.odt
    3. git init
    4. git add .
    5. git commit -a -m "<comment provided by user>"
    6. zip -9 -r document.odt .git *
  • When a QVersioned-capable application opens an ODF file, it compares the XML, images, etc. to the latest version in git:
    • If the diff is empty, the last save was by a QVersioned-capable application
    • If the diff is not empty, the last save was by a QVersioned-incapable application. A new “git commit -a” is performed. Yes, we probably have lost “versions” of the document between this commit and the former one, but something’s better than nothing.
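In code, that open-time check could look roughly like this (a sketch only; ‘workTree’ stands for the unpacked ODF contents with the embedded .git directory):

#include <QProcess>
#include <QString>
#include <QStringList>

// Run "git <args>" inside workTree and return its standard output.
static QString runGit(const QString &workTree, const QStringList &args)
{
    QProcess git;
    git.setWorkingDirectory(workTree);
    git.start("git", args);
    git.waitForFinished(-1);
    return QString::fromLocal8Bit(git.readAllStandardOutput());
}

// On open: if the checkout differs from the last recorded version, the
// file was last saved by a QVersioned-incapable application, so record
// its state as a new version.
void commitUnversionedChanges(const QString &workTree)
{
    // "git status --porcelain" prints nothing when checkout == HEAD.
    if (runGit(workTree, QStringList() << "status" << "--porcelain").isEmpty())
        return; // last save was by a QVersioned-capable application

    runGit(workTree, QStringList() << "add" << "-A");
    runGit(workTree, QStringList() << "commit" << "-m"
           << "Save latest unversioned version on ODF document opening");
}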

By using libQtGit and QVersioned, it would also be possible to add collaboration features such as “send me an update” (i.e. send me a diff which transforms my outdated document into your latest version), collaborative editing (send diffs back and forth) and more things I cannot think of right now.

In case you are interested in the libQtGit API (remember QVersioned will offer a higher-level API), this is the signature of the method you would call:

Git::Repository::Commit* Git::Repository::commit (
    const QString& message = QString(),
    const Commit* c = 0,
    const QString& author = QString(),
    const CommitFlags cf = DefaultCommitFlags,
    const CleanUpMode cum = DefaultCleanUpBehavior
);

That’s equivalent to “git commit -C commit_id -m ‘message’”. CommitFlags is a QFlags and CleanUpMode a QEnum:

enum CommitFlag {
    DefaultCommitFlags = 0x0 /*< git commit */,
    OverrideIndexCommit = 0x1 /*< git commit -a */,
    SignOffCommit = 0x2 /*< git commit -s */,
    AmendCommit = 0x4 /*< git commit --amend */,
    AllowEmptyCommit = 0x8 /*< git commit --allow-empty */,
    NoVerifyCommit = 0x10 /*< git commit -n */
};
Q_DECLARE_FLAGS ( CommitFlags, CommitFlag )

enum CleanUpMode {
    DefaultCleanUpBehavior = 0x0 /*< git commit */,
    VerbatimClean = 0x1 /*< git commit --cleanup=verbatim */,
    WhiteSpaceClean = 0x2 /*< git commit --cleanup=whitespace */,
    StripClean = 0x4 /*< git commit --cleanup=strip */,
    DefaultClean = 0x8 /*< git commit --cleanup=default */
};
Q_ENUMS ( CleanUpMode )

For the “git commit -a -m ‘Save latest unversioned version on ODF document opening’” call, we would use:

// Assuming 'repo' is a valid Git::Repository object

repo->commit (
    "Save latest unversioned version on ODF document opening",
    0,
    "(The application would probably take the author's name from the product registration)",
    Git::Repository::OverrideIndexCommit
);

So, how is libQtGit doing? Well, the API is there for git add, commit, init, mv, rm, checkout, clone, branch, revert, reset, clean, gc, status, merge, push, fetch, pull, rebase, config, update-server-info and (partially) symbolic-ref. When I say “the API is there” I mean “all the QFlags, QEnums, methods, classes and their translation to git parameters are done”. It’s just a matter of implementing the QProcess part, parsing output, etc. Boring and time-consuming, but easy.

In addition to file versioning and collaboration, there is another interesting feature (that I will wish about tomorrow) that could be achieved.


You have probably heard about git, the stupid content tracker. It is a distributed revision control system with an emphasis on speed, and KDE is slowly migrating from Subversion to git.

One of the main problems of git is that it is a fire-and-forget tool: run one command, exit. You cannot run two commands without re-initializing everything, because it was never developed with that in mind. That poses a serious problem if you want to use a git repository from your application. It was not possible to split git in two, a library and an application linking to that library (I tried that, trivially).

The typical solution to the problem was to develop a library which invokes git processes and use that library from your application. For something I wanted to do in KOffice (which I will post as a wish soon), I started to develop a similar library with a Qt-like API. I called it libQtGit and the latest source is available from gitorious.

The main reason I put development on hold was libgit2. At least three or four efforts were started to properly split git into library + application, and one that looked very promising started soon after I started libQtGit. At the time, waiting for this (initially) quickly-progressing libgit2 looked like a good idea (lesson learned: use what you have NOW, not what you will have in a few months, as you may never get what you are waiting for). As I was saying, that libgit2 attempt never made much progress, and in the meantime I started working on other stuff.

These days the situation is very different: Vicent Martí took up libgit2 for his Summer of Code 2010 project and his achievement was excellent.

So here is my wish: implement libQtGit using libgit2 instead of QProcess(“git”).
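To illustrate the difference, here is a minimal sketch of a libgit2-backed repository class. The libgit2 calls are from its current C API; the Qt-style wrapper and its names are my assumption, not actual libQtGit code:

#include <QString>
#include <git2.h>

class GitRepository
{
public:
    explicit GitRepository(const QString &path)
    {
        git_libgit2_init(); // one-time library setup
        // Open the repository in-process: no fork/exec of a git binary,
        // no parsing of textual output, usable as a long-lived object.
        m_valid = git_repository_open(&m_repo,
                                      path.toLocal8Bit().constData()) == 0;
    }

    ~GitRepository()
    {
        if (m_repo)
            git_repository_free(m_repo);
        git_libgit2_shutdown();
    }

    bool isValid() const { return m_valid; }

private:
    git_repository *m_repo = nullptr;
    bool m_valid = false;
};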

Today I am starting a new section I have called “a wish a day”. I will be doing exactly that: writing about something I would do myself if I had enough spare time.

Some are Qt- and KDE-related, others are not. Some are very simple, others would require a lot of development time (not necessarily because of complexity), deep knowledge of something, or both. In some cases, I have started and even made relevant progress, but I did not have the time to finish because I have a real life and a job that pays the bills.

So if you have some spare time and want to develop software but do not have ideas or cannot find anything appealing, check “A wish a day” in the coming weeks. I already have 20 wish-drafts.

Wish 1: libQtGit
Wish 2: Calligra document versioning
Wish 3: Document translation in Calligra
Wish 4: KDE demo
Wish 5: KDE UI testing
Wish 6: AppStore-like installer for KDE on Windows
Wish 7: Make emerge a generic package manager for Windows
Wish 8: Wt Creator
Wish 9: QML WPF
Wish 10: KDE Dolphin “Previous versions” support
Wish 11: Perl Compatible Regular Expressions in CMake
(Not a wish): "A wish a day" did very well in GSoC proposals
Wish 12: Should KDE become the Apache Software Foundation of the Qt world?
Wish 13: KDE-based Digital Whiteboard
Wish 14: Wine-based Application Virtualization
Wish 15: MSVC compiler in KDevelop
Wish 16: libdbusfat

I’m surprised this has gone mostly unnoticed: two Japanese researchers wanted to port the Kernel-based Virtual Machine (KVM) from Linux to Windows. In order to do that, they created a compatibility layer that allows you to run Linux drivers on Windows (in the kernel, no less!). They gave a talk titled WinKVM: Windows Kernel-based Virtual Machine at last year’s KVM Forum, the KVM developers conference.

Source code is available here.

Given that Linux never removes support for any hardware from the kernel unless the driver breaks the kernel badly, the first use I can think of is obsolete hardware whose drivers no longer work on modern versions of Windows. That’s an often-encountered case in military and industrial environments: LAN Emulation cards, industrial devices using proprietary buses and proprietary cards to communicate with PCs, medical devices, etc. In those cases, the hardware generally outlives the computer that runs the software by decades (the software is no problem thanks to virtualization).

A second target “market” could be devices which do not have a Windows driver. Believe it or not, they exist in niche markets (and not-so-niche ones: the free Kinect driver was initially only available for Linux).