Planet Tanglu
November 29, 2017

Cutelyst, the Qt web framework, got a new release. This is likely to be the last release of the year and one of the last of the 1.x.x series. I’d like to add HTTP/2 support before branching 1.x.x and turning master into 2.0, but I’m not sure I’ll do that yet.

Next year I’d like to have Cutelyst 2 packaged on most distros early, since Ubuntu’s LTS is released in April. HTTP/2 might delay this, or I may postpone HTTP/2 instead, since it can already be provided by a front-end server like NGINX.

The 1.11.0 version includes three new plugins written by Matthias Fehring: a Memcached plugin that simplifies talking to a memcached server, a memcached-based session store that keeps session data on the memcached server, and a static compressed plugin to serve compressed versions of static files.

Besides that, I extended the EngineRequest class to serve as a base class for engine requests, allowing me to get rid of some ugly casts and void pointers that carried the real request/connection. This doesn’t affect user code unless you implement your own engine.

Setting a JSON reply is now a bit simpler: two overloads directly accepting QJsonObject and QJsonArray were added.
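
For illustration, a minimal sketch of an action using the new overloads. I’m assuming here that they are extra overloads of Response::setJsonBody and that MyController is a placeholder; check the 1.11.0 API docs for the exact names:

#include <Cutelyst/Context>
#include <Cutelyst/Response>
#include <QJsonObject>

void MyController::status(Cutelyst::Context *c)
{
    QJsonObject json;
    json.insert(QStringLiteral("status"), QStringLiteral("ok"));
    // Previously a QJsonDocument had to be built by hand first
    c->response()->setJsonBody(json);
}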

The license preamble on Cutelyst files was fixed to state that the license is LGPLv2.1+, and pkg-config is now fully supported.

Go get/break/package/.* it!

https://github.com/cutelyst/cutelyst/archive/v1.11.0.tar.gz


October 31, 2017

Cutelyst, the Qt web framework, got a new release: another round of bug fixes and, this time, increased unit test coverage.

The RoleACL plugin is a very useful one. It was written out of the need to control what users could access; sadly, the system I wrote that needed it ended up unused (although I got my money for it), so the plugin didn’t get much attention, partly because it was basically complete.

Then, last release, I added documentation to it due to a user’s request, and shortly after I found out that it wasn’t working at all; even worse, it wasn’t forbidding users as it should. So after the fix I moved on to writing unit tests for many other things. Some areas are still missing, but overall coverage is much larger now.

Enjoy https://github.com/cutelyst/cutelyst/archive/v1.10.0.tar.gz


October 11, 2017

Cutelyst, the Qt web framework, got a new release. This is a rather small release, but it has some important fixes, so I decided to roll it out sooner.

The dispatcher logic got 30% faster, and parsing URL-encoded data is also a bit faster in some cases (using less memory). Context objects can now be instantiated by library users, allowing, for example, notifications from SQL databases to be forwarded to Cutelyst actions or Views. pkg-config support has also improved a bit but still misses most modules.

Have fun https://github.com/cutelyst/cutelyst/archive/v1.9.0.tar.gz


September 28, 2017

Last year, the AppStream specification gained proper support for adding metadata for fonts, after Richard Hughes had done some work on it years ago. We weren’t happy with how fonts were handled at that time, so we searched for better solutions, which is why this took a bit longer to get done. Last year I implemented the final support for fonts in both appstream-generator (the metadata extractor used by Debian and a few others) and the AppStream specification. This blogpost has been sitting in my todo list as a draft for a long time, and I only now managed to finish it, so sorry for announcing this so late. Fonts have already been available via AppStream for a year, and this post just sums up the status quo and some neat tricks if you want to write metainfo files for fonts. If you are following AppStream (or the Debian fonts list), you know everything already 🙂 .

Both Richard and I first tried to extract all the metadata needed to display fonts in a proper way to the users from the font files directly. This turned out to be very difficult, since font metadata is often wrong or incomplete, and certain desirable bits of metadata (like a longer description) are missing entirely. After messing around with different ways to solve this for days (after all, by extracting the data from font files directly we would have had hundreds of fonts immediately available in software centers), I came to the same conclusion as Richard: the best and easiest solution here is to mandate the availability of metainfo files per font.

Which brings me to the second issue: what is a font? Anyone who knows fonts will understand one font as one font face, e.g. “Lato Regular Italic” or “Lato Bold”. A user, however, will see the font family as a font, e.g. just “Lato” instead of all the font faces separated out. Since AppStream data is used primarily by software centers, we want something that is easy for users to understand. Hence, an AppStream “font” component really describes a font family or collection of fonts, instead of individual font faces. We do also want AppStream data to be useful for system components looking for a specific font, which is why font components advertise the individual font face names they contain via a <provides/> tag. Naming fonts and making them identifiable is a whole other issue; I used a document from Adobe on font naming issues as a rough guideline while working on this.

How to write a good metainfo file for a font is best shown with an example. Lato is a good-looking font family that we want displayed in a software center. So, we write a metainfo file for it and place it in /usr/share/metainfo/com.latofonts.Lato.metainfo.xml for the AppStream metadata generator to pick up:

<?xml version="1.0" encoding="UTF-8"?>
<component type="font">
  <id>com.latofonts.Lato</id>
  <metadata_license>FSFAP</metadata_license>
  <project_license>OFL-1.1</project_license>

  <name>Lato</name>
  <summary>A sanserif typeface family</summary>
  <description>
    <p>
      Lato is a sanserif typeface family designed in the Summer 2010 by Warsaw-based designer
      Łukasz Dziedzic (“Lato” means “Summer” in Polish). In December 2010 the Lato family
      was published under the open-source Open Font License by his foundry tyPoland, with
      support from Google.
    </p>
  </description>

  <url type="homepage">http://www.latofonts.com/</url>

  <provides>
    <font>Lato Regular</font>
    <font>Lato Black Italic</font>
    <font>Lato Black</font>
    <font>Lato Bold Italic</font>
    <font>Lato Bold</font>
    <font>Lato Hairline Italic</font>
    ...
  </provides>
</component>

When the file is processed, we know that we need to look for fonts in the package it is contained in. So the appstream-generator will load all the fonts in the package and render example texts for them as images, so we can show users a preview of the font. It will also use heuristics to render an “icon” for the respective font component using its regular typeface. Of course that is not ideal: what if there are multiple font faces in a package? What if the heuristics fail to detect the right font face to display?

This behavior can be influenced by adding <font/> tags to a <provides/> tag in the metainfo file. The font-provides tags should contain the full names of the font faces you want to associate with this font component; if a font file does not define a full name, the family and style are used instead. That way, someone writing the metainfo file can control which fonts belong to the described component. The metadata generator will also pick the first font name mentioned in the <provides/> list as the one to render the example icon for, and it will sort the example text images in the same order as the fonts are listed in the provides tag.

The example lines of text are written in a language matching the font using Pango.

But what about symbolic fonts? Or fonts where all heuristics fail? At the moment, we see ugly tofu characters or boxes instead of an actual, useful representation of the font. This brings me to an unofficial extension to font metainfo files that, as far as I know, only appstream-generator supports at the moment. I am not happy enough with this solution to add it to the real specification, but it serves as a good method to fix up the edge cases where we can not render good example images for fonts. appstream-generator supports the FontIconText and FontSampleText custom AppStream properties to allow metainfo file authors to override the default texts and autodetected values. FontIconText overrides the characters used to render the icon, while FontSampleText can be a line of text used to render the example images. This is especially useful for symbolic fonts, where the heuristics usually fail and we do not know which glyphs would be representative of the font.

For example, a font with mathematical symbols might want to add the following to its metainfo file:

<custom>
  <value key="FontIconText">∑√</value>
  <value key="FontSampleText">∑ ∮ √ ‖...‖ ⊕ 𝔼 ℕ ⋉</value>
</custom>

Any Unicode glyphs are allowed, but asgen will put some length restrictions on the texts.

So, in summary:

  • Fonts are hard
  • I need to blog faster
  • Please add metainfo files to your fonts and submit them upstream if you can!
  • Fonts must have a metainfo file in order to show up in GNOME Software, KDE Discover, AppCenter, etc.
  • The “new” font specification is backward compatible with Richard’s pioneering work from 2014
  • The appstream-generator supports a few non-standard values to influence how font images are rendered that you might be interested in (maybe we can do something like that for appstream-builder as well)
  • The appstream-generator does not (yet?) support the <extends/> logic Richard outlined in his blog post, mainly because it wasn’t necessary in Debian/Ubuntu/Arch yet (which is asgen’s primary audience), and upstream projects would rarely want to write multiple metainfo files.
  • The metainfo files are not supposed to replace the existing fontconfig files, and sadly we can not generate them from existing metadata
  • If you want a more detailed look at writing font metainfo files, take a look at the AppStream specification.
  • Please write more font metadata 😉


August 25, 2017

Sticklyst is a web paste tool, like pastebin or Stick Notes (paste.kde.org), built with Cutelyst and KDE Frameworks.

Building this kind of tool had been on my TODO list for a long time, but I never really put effort into it. When the idea first came up, I decided to look at the code of http://paste.scsys.co.uk/, which is powered by a Perl Catalyst application; to my surprise, the Perl module that handled syntax highlighting was a port of Kate’s code, and it even said it used Kate’s definitions.

If that code used Kate’s, I’d better use the code directly. I hoped it wouldn’t be too coupled to GUI classes, but failed to find the code. A week ago I saw a mention of it on a KDE mailing list, so with a little help from some IRC fellows I found the https://cgit.kde.org/syntax-highlighting.git/ code.

Then, thanks to some tips from Volker Krause, who showed me how to use it without GUI widgets, I managed to build a simple web app, which I presented live at the first Qt conference in Brazil, in São Paulo: https://br.qtcon.org/.
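
For the curious, using the library without any GUI classes boils down to very little code. A sketch based on the KF5 syntax-highlighting API (the file names are placeholders):

#include <KSyntaxHighlighting/Definition>
#include <KSyntaxHighlighting/HtmlHighlighter>
#include <KSyntaxHighlighting/Repository>
#include <KSyntaxHighlighting/Theme>

int main()
{
    using namespace KSyntaxHighlighting;

    Repository repo; // knows all of Kate's syntax definitions and themes
    HtmlHighlighter highlighter;
    highlighter.setDefinition(repo.definitionForFileName(QStringLiteral("paste.cpp")));
    highlighter.setTheme(repo.defaultTheme(Repository::LightTheme));
    highlighter.setOutputFile(QStringLiteral("paste.html"));
    highlighter.highlightFile(QStringLiteral("paste.cpp"));
    return 0;
}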

Just for the record, the application uses less than 3 MB of RAM and renders quickly, using an SQLite database for storage.

The code is 90% complete, it is in production at https://paste.cutelyst.org/, and it is now up for adoption. Basically, someone willing to take care of it could implement the following features that are missing compared to Stick Notes:

  • Extra database support
  • URL shortener
  • User authentication

The list is short and the code is clean, go test, break, hack:

https://github.com/cutelyst/sticklyst

Have fun!



July 27, 2017

Cutelyst, the Qt web framework, has another stable release. This release mostly consists of bug fixes; the commit log is rather small.

It got fixes for Cutelyst-WSGI to work properly on Windows, QtCreator integration fixes, proper installation of DLLs on Windows, and a fix for returning the right status from views (this lets you know, for example, whether View::Email sent the email successfully).

Cutelyst is cross-platform; the code builds on CI running on Windows, OSX and Linux, but UNIX systems remain first class. As I don’t use Windows, nor have a VM configured for development, I have only ensured it builds fine on AppVeyor. Aurel Branzeanu provided some pull requests to improve that; if you happen to have experience with CMake, MSVC and Windows, please help me review them or send other pull requests.

The release also got two API additions:

  • Context::setStash(QString, ParamsMultiMap): ParamsMultiMap, also known as QMap<QString, QString>, is the type used for HTTP query and body parameters; having this method avoids writing ugly code full of QVariant::fromValue() (see the sketch after this list)
  • Application::pathTo(QString): instead of using the QStringList version, which joins the strings to build a path (like Catalyst does), this just takes the string as you would pass it to a QFile, which in turn deals with platform issues
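
A small usage sketch of the setStash() overload (the "filters" key is just a placeholder):

// Before: wrapping the map in a QVariant by hand
c->setStash(QStringLiteral("filters"),
            QVariant::fromValue(c->request()->queryParameters()));

// Now: the ParamsMultiMap overload does it for you
c->setStash(QStringLiteral("filters"), c->request()->queryParameters());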

I’m a bit busy with html-qt and also planning a rewrite of simplemail-qt to be async, so expect smaller Cutelyst releases 🙂

Again, help is welcome (especially with Windows usage).

Get it here https://github.com/cutelyst/cutelyst/archive/v1.8.0.tar.gz


June 14, 2017

CMlyst is a web content management system built using Cutelyst. It was initially inspired by WordPress and later by Ghost, so it’s a mixture of both.

Two years ago I made its first release, and since then I’ve been slowly improving it; it has been in production all that time, serving the www.cutelyst.org web site/blog. The 0.2.0 release was a silent one, marking the transition from QSettings storage to SQLite.

Storing content in QSettings seems quite interesting at first since it’s easy to use, but it quickly proved unsuitable. It kept leaving .lock files around, and since it’s not very fast to access, I had to use a cache with all the data, with a notifier updating it when something changed in the directory; but this didn’t reliably trigger QFileSystemWatcher, so once a new page was out the cache wasn’t properly updated.

Once it was ported to SQLite, I decided to study how Ghost works, mainly because many Qt/KDE developers were switching to it. Ghost is quite simplistic, so it was very easy to provide something fairly compatible with it; porting a Ghost theme to CMlyst requires very few changes, since its syntax is close to Grantlee/Django.

Porting to SQLite also made it clear that an export/import tool was needed, so you can now import/export in a JSON format pretty close to Ghost’s. You can actually even import all your Ghost pages with it, but the opposite won’t work, because we store pages as HTML, not Markdown. My feeling about Markdown is that it is simple to use and convenient for geeks, but it’s yet another thing to teach users, who could simply use a WYSIWYG editor.

Security-wise, you need to make sure that both Markdown and HTML are safe, and CMlyst doesn’t do this, so if you put it in production be sure that only users who know what they are doing use it; you can even break the layout with an unclosed tag.

But don’t worry, I’m working on a fix for this: html-qt is a WHATWG HTML5 specification parser, mostly complete, but the DOM part is not done yet. With it, I can make sure the HTML won’t break the layout, and remove unsafe tags.

Feature-wise, CMlyst has 80% of Ghost’s features; if you like it, please help add the missing features to the admin page.

Some cool numbers

Comparing CMlyst to Ghost can be tricky, but it’s interesting to see the numbers.

Memory usage:

  • CMlyst uses ~5MB
  • Ghost uses ~120MB

Requests per second (using the same page content)

  • CMlyst 3500 req/s (production mode), 1108 req/s (developer mode)
  • Ghost 100 req/s (production mode)

While the req/s numbers are very different, in production you can use NGINX caching, which would make Ghost’s slow rate a non-issue, but that comes at the price of more storage and RAM usage. If you run on AWS micro instances with 1 GB of RAM, it means you can have a lot fewer instances running at the same time; some simple math shows you could have 200 CMlyst instances vs 8 of Ghost.

Try it!

https://github.com/cutelyst/CMlyst/archive/v0.3.0.tar.gz

Sadly, it’s also likely that I’ll be forking Grantlee soon; the lack of maintenance just hit me yesterday (when I was going to release this). Qt 5.7+ changed QDateTime::toString() to include TZ data, which broke the Grantlee date filter, which wasn’t expecting that, so I had to do a weird workaround marking the date as local to avoid the extra information.


May 30, 2017

Cutelyst, the Qt / C++11 web framework, just got another important release.

WebSocket support is probably a key feature of a modern web framework. Perl Catalyst doesn’t look like it was designed with it in mind; the way I found to do WebSockets there wasn’t intuitive.

Qt has WebSocket support via the QWebSockets module, which includes client and server implementations, and I have used it for a few jobs due to the lack of support inside Cutelyst. While looking at its implementation I realized it wouldn’t fit into Cutelyst, and it even had a TODO about a blocking call, so that was a no-go.

I then looked at the uWSGI implementation, and while it is far from RFC compliant or even usable (IMO), the parser was much simpler. However, it requires all data to be available before parsing, and messages can be split into frames (really, 63 bits to address the payload was probably not enough); each frame has a bit telling whether the message is over, which uWSGI simply discards, so fragmented messages can’t be merged back.

WebSockets in Cutelyst have an API close to QWebSocket’s, but since we are a web framework we have more control over things. First, in your WebSocket endpoint you must call:

c->response()->webSocketHandshake()

If it returns false, just return from the method to ignore the request; or it might be the case that you want the same path to show a chat HTML page, which can happen if the client didn’t send the proper headers or if the engine backend/protocol doesn’t support WebSockets. If it returns true, you can connect to the Request object and get notified about pong, close, binary and text (UTF-8) messages and frames. A simple usage looks like:

connect(req, &Request::webSocketTextMessage, [=] (const QString &msg) {
    qDebug() << "Got text msg" << msg;
    c->response()->webSocketTextMessage(msg); // echo it back
});

This works as an echo server; the signals also carry the Cutelyst::Context in case you are connecting to a slot.

Cutelyst’s implementation is non-blocking, a little faster than QWebSockets, uses less memory and passes all http://autobahn.ws/testsuite tests; it doesn’t support compression yet.

systemd socket activation support was added; this is another very cool feature, although it’s still missing a way to shut the application down when idle. In my tests with systemd socket activation, Cutelyst started so fast that the first request, the one that starts the application, takes only twice the time it would take if Cutelyst were already running.

This release also include:

  • Fixes for using cutelyst-wsgi talking FastCGI to Apache
  • Workaround for QLocalServer closing down when an accept fails (very common on prefork/threaded servers)
  • Fixed cutelyst-wsgi in threaded mode, where POST requests would use the same buffer, corrupting data or crashing
  • A few other fixes.

Thanks to WebSockets, plans for Cutelyst 2 are taking shape, but I’ll probably try to add HTTP/2 support first, since the multiplexing part might make the Cutelyst Engine quite complex.

Have fun https://github.com/cutelyst/cutelyst/archive/v1.7.0.tar.gz


May 10, 2017

The new TechEmpower benchmark results are finally out. Round 14 took almost 6 months to be ready, and while more frequent testing would be desirable, this gave us plenty of time to improve things.

Last round, a bug was crashing our threaded benchmarks (which wasted time restarting). Besides fixing that bug, the most notable changes that improved our performance were:

  • Use of a custom epoll event dispatcher instead of the default glib one (the tests without epoll in their name use the default glib)
  • jemalloc, which brought the coolest boost: you just need to link to it (or use LD_PRELOAD) and memory allocation magically gets faster (around 20% in the tests)
  • CPU affinity, which helps the scheduler keep each thread/process pinned to a CPU core
  • SO_REUSEPORT, which on Linux helps get connections evenly distributed among threads/processes
  • Several code optimizations thanks to perf

I’m very glad about our results; Cutelyst managed to stay with the top performers, which really pays off all the hard work, and it also provides us with good numbers to show people who are interested.

Check it out https://www.techempower.com/benchmarks/


April 25, 2017

Once 1.5.0 was released I thought the next release would be a small one. It started with a bunch of bug fixes, and Simon Wilper made a contribution to Utils::Sql; basically, when things get into production you find bugs, so there were tons of fixes to the WSGI module.

Then the first preview of TechEmpower benchmarks round 14 came out, and Cutelyst’s performance was great, so I was planning to release 1.6.0 as it was. But the second preview fixed a bug where Cutelyst results had been scaled up, so our performance was worse than in round 13, and that didn’t make sense since we now had jemalloc and a few other improvements.

Actually, the results on the 40+HT core server were close to the ones I got locally with a single thread.

Looking at the machine’s state, it was clear that only a few (9) workers were running at the same time. I then decided to create an experimental connection balancer for threads: the main thread accepts incoming connections and evenly passes them to each thread, which of course puts a new bottleneck on the main thread. Once the code was ready (and it ended up improving other parts of the WSGI module), I became aware of SO_REUSEPORT.

The SO_REUSEPORT socket option is available on Linux >= 3.9 and, unlike on BSD, it implements a simple load balancer. This made my thread balancer obsolete, but the balancer is still useful on non-Linux systems. The socket option is also nicer since it works for processes as well.
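
For reference, opting into the kernel’s balancing takes only a setsockopt() call on each worker’s listening socket before bind(); a minimal sketch of the technique (not Cutelyst’s actual code):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>

// Each worker creates its own socket, enables SO_REUSEPORT and binds to the
// same port; the kernel (Linux >= 3.9) then distributes incoming connections
// between all sockets bound this way.
int makeListenSocket(unsigned short port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    int one = 1;
    setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one));

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(port);
    bind(fd, (struct sockaddr *)&addr, sizeof(addr));
    listen(fd, SOMAXCONN);
    return fd;
}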

With 80 cores there’s still the chance that the OS scheduler puts most of your threads on the same cores, and maybe even moves them around under load. So an option for setting CPU affinity was also added, allowing each worker to be pinned to one or more cores evenly. It uses the same logic as uWSGI.
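
The affinity part boils down to something like this (again just a sketch of the technique; the uWSGI-style round-robin assignment of cores to workers is left out):

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>

// Pin the calling worker thread to a single core so the scheduler
// stops migrating it between CPUs under load.
void pinToCore(int core)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(cpu_set_t), &set);
}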

Once the WSGI module supported all these features, preview 3 of the benchmarks came out and the results were still terrible… further investigation revealed that a variable supposed to hold the CPU core count was set to 8 instead of 80. I’m sure all this work did improve performance for servers with lots of cores, so in the end the wrong interpretation was good after all 🙂

Preview 4 came out and we are back at the top. I’ll do another post once it’s final.

The code name “to infinity and beyond” came to mind due to the scalability options it got 😀

Last but not least I did my best to get rid of doxygen missing documentation warnings.

Have fun https://github.com/cutelyst/cutelyst/archive/v1.6.0.tar.gz


April 04, 2017

It’s time for a long-overdue blogpost about the status of Tanglu. Tanglu is a Debian derivative, started in early 2013 when the systemd debate at Debian was still hot. It was formed by a few people who wanted to create a Debian derivative for workstations with a time-based release schedule, using and showcasing new technologies (including systemd, but also bundling systems and other things), built in the open with a community, using infrastructure similar to Debian’s. Tanglu is designed explicitly to complement Debian, not to compete with it on all devices.

Tanglu has achieved a lot of great things. We were the first Debian derivative to adopt systemd, and with the help of our contributors we could kill a few nasty issues affecting it and Debian before it ended up becoming the default in Debian Jessie. We also started to use the Calamares installer relatively early, bringing a modern installation experience in addition to the traditional debian-installer. We performed the usrmerge early, uncovering a few more issues which were fed back into Debian to be resolved (while workarounds were added to Tanglu). We also briefly explored switching from initramfs-tools to Dracut, but this release goal was dropped due to issues (though it might be revived later). A lot of other less-impactful changes happened as well, borrowing a lot of useful ideas and code from Ubuntu (kudos to them!).

On the infrastructure side, we set up the Debian Archive Kit (dak), managing to find a couple of issues (mostly hardcoded assumptions about Debian) and reporting them back to make using dak for distributions which aren’t Debian easier. We explored using fedmsg for our infrastructure, went through a long and painful iteration of build systems (buildbot -> Jenkins -> Debile) before finally ending up with Debile, and added a set of own custom tools to collect archive QA information and present it to our developers in an easy to digest way. Except for wanna-build, Tanglu is hosting an almost-complete clone of basic Debian archive management tools.

During the past year, however, the project’s progress slowed down significantly. For this, mostly I am to blame. One of the biggest challenges for a young project is to attract new developers and members and keep them engaged. A lot of the people coming to Tanglu who were interested in contributing were unfortunately not packagers and sometimes not developers, and we didn’t have the manpower to individually mentor these people and teach them the necessary skills. People asking for tasks were usually asked where their interests were and what they would like to do, in order to give them a useful task. This sounds great in principle, but in practice it is actually not very helpful. A curated list of “junior jobs” is a much better starting point. We also invested almost zero time in making our project known and creating the necessary “buzz” and excitement that’s actually needed to sustain a project like this. Doing more in the advertisement domain and the “help newcomers” area is a high-priority issue in the Tanglu bugtracker, which to this day is still open. Doing good alone isn’t enough; talking about it is of crucial importance, and that is something I knew about but didn’t realize the impact of for quite a while. As strange as it sounds, investing in the tech alone isn’t enough; community building is of equal importance.

Regardless of that, Tanglu has members working on the project, but way too few to manage a project of this magnitude (getting package transitions migrated alone is a large task requiring quite some time while at the same time being incredibly boring :P). A lot of our current developers can only invest small amounts of time into the project because they have a lot of other projects as well.

The other reason Tanglu has problems is that too much is centralized on me. That is a problem I have wanted to rectify for a long time, but as soon as a task wasn’t done in Tanglu because no people were available to do it, I completed it. This essentially increased the project’s dependency on me as a single person, giving it a really low bus factor. It not only centralizes power in one person (which actually isn’t a problem as long as that person is available enough to perform tasks when asked), it also centralizes knowledge of how to run services and how to do things. And if you want to give up power, people first need the knowledge of how to perform the specific task (which they will never gain if there’s always that one guy doing it). I still haven’t found a great way to solve this; it’s a problem that essentially kills itself as soon as the project is big enough, but until then the only way to counter it slightly is to write lots of documentation.

Last year I had way less time to work on Tanglu than the project deserves. I also started to work for Purism on their PureOS Debian derivative (which is heavily influenced by some of the choices we made for Tanglu, but with a different focus; that’s probably something for another blogpost). A lot of the stuff I do for Purism duplicates the work I do on Tanglu, and also takes away time I have for the project. Additionally, I need to invest a lot more time in other projects such as AppStream, and a lot of random other stuff that just needs continuous maintenance and discussion (especially AppStream eats up a lot of time since it became really popular in a lot of places). There is also my MSc thesis in neuroscience that requires attention (and is actually in focus most of the time). All in all, I can’t split myself, and KDE’s cloning machine remains broken, so I can’t even use that ;-). In terms of projects there is also a personal hard limit on how much stuff I can handle, and exceeding it long-term is not very healthy, as in these cases I try to satisfy all projects and in the end do not focus enough on any of them, which leaves me with a lot of half-baked stuff (which helps nobody, and most importantly makes me lose the fun, energy and interest to work on it).

Good news everyone! (sort of)

That all sounded overly negative, so where does it leave Tanglu? Fact is, I can not commit the crazy amounts of time to it that I did in 2013. But I love the project, and I actually do have some time I can put into it. My work on Purism has an overlap with Tanglu, so Tanglu can actually benefit from the software I develop for them, maybe creating a synergy effect between PureOS and Tanglu. Tanglu is also important to me as a testing environment for future ideas (be it in infrastructure or in the “make bundling nice!” department).

So, what actually is the way forward? First, maybe I have the chance to find a few people willing to work on tasks in Tanglu. It’s a fun project, and I learned a lot while working on it. Tanglu also possesses some unique properties few other Debian derivatives have, like being built from source completely (allowing us things like swapping core components or compiling with more hardening flags, switching to newer KDE Plasma and GNOME faster, etc.). Second, if we do not have enough manpower, I think converting Tanglu into a rolling-release distribution might be the only viable way to keep the project running. A rolling release scheme creates much less effort for us than making releases (especially time-based ones!). That way, users will have a constantly updated and secure Tanglu system with machines doing most of the background work.

If it turns out that absolutely nothing works and we can’t attract new people to help with Tanglu, it would mean that there generally isn’t much interest from the developer or user side in a project like this, so shutting it down or scaling it down dramatically would be the only option. But I do not think that this is the case, and I believe that having Tanglu around is important. I also have some interesting plans for it which will be fun to implement for testing 🙂

The one thing that had to stop was leaving our users in the dark about what is happening.

Sorry for the long post, but there are some subjects which are worth writing more than 140 characters about 🙂

If you are interested in contributing to Tanglu, get in touch with us! We have an IRC channel #tanglu-devel on Freenode (go there for quicker responses!), forums and mailing lists.

It looks like I will be at Debconf this year as well, so you can also catch me there! I might even talk about PureOS/Tanglu infrastructure at the conference.

March 13, 2017

Cutelyst, the C++/Qt web framework, just got a new stable release.

Right after the last release, Matthias Fehring made another important contribution, adding support for internationalization in Cutelyst: you can now have your Grantlee templates properly translated depending on the user’s settings.

Then on IRC a user asked if Cutelyst-WSGI had HTTPS support, which it didn’t. You could provide HTTPS (as I do) using NGINX in front of your application, or using uwsgi, but of course having it built into Cutelyst-WSGI is a lot more convenient, especially since his use case was embedded devices.

Cutelyst-WSGI also got support for --umask, --pidfile, --pidfile2 and --stop (which sends a signal to stop the instance based on the provided pidfile), plus better documentation, fixes for respawning and cheaping workers, and, since I now have it in production for all my web applications, some critical FastCGI fixes.

The cutelyst command was updated to use the WSGI library, so it works on Windows and OSX without requiring uwsgi, making development easier.

www.cutelyst.org got the Cutelyst logo and an updated CMlyst version, though the site still looks ugly…

Download here: https://github.com/cutelyst/cutelyst/archive/v1.5.0.tar.gz

Have fun!


February 16, 2017

Yes, it’s not a typo.

Thanks to the last batch of improvements, and with the great help of jemalloc, cutelyst-wsgi can do 100k requests per second using a single thread/process on my i5 CPU. Without jemalloc the rate was around 85k req/s.

This, together with the epoll event loop, can really scale your web application. Initially I thought the option to replace Qt’s default glib event loop (on Unix) brought no gain, but after increasing the number of connections it handled them a lot better: with 256 connections, the glib event loop gets to 65k req/s while the epoll one stays at 90k req/s, a lot closer to the numbers when only 32 connections are tested.

Besides these lovely numbers, Matthias Fehring added a new memcached session backend and a change to finally get translations working in Grantlee templates. cutelyst-wsgi got --socket-timeout, --lazy, many fixes, removal of deprecated Qt API usage, and Unix signal handling seems to be working properly now.

Get it! https://github.com/cutelyst/cutelyst/archive/r1.4.0.tar.gz

Hang out on the FreeNode #cutelyst IRC channel or the Google group: https://groups.google.com/forum/#!forum/cutelyst

Have fun!


January 24, 2017

Only 21 days after the last stable release, and some huge progress has been made.

The first big addition is a contribution by Matthias Fehring, which adds a validator module, allowing you to validate user input quickly and easily. A multitude of user input types is available, such as email, IP address, JSON, date and many more, with a syntax that can be used in multiple threads and avoids recreating the parsing rules:

static Validator v({ new ValidatorRequired(QStringLiteral("username")) });

if (v.validate(c, Validator::FillStashOnError)) { ... }

Then I wanted to replace uWSGI on my server with cutelyst-wsgi, but although performance benchmarks show that NGINX still talks faster to cutelyst-wsgi using proxy_pass (HTTP), I wanted to have FastCGI or uwsgi protocol support.

Evaluating FastCGI vs uwsgi was somewhat easy: FastCGI is widely supported, and due to a bad design decision the uwsgi protocol has no concept of keep-alive. So the client talks to NGINX with keep-alive, but NGINX keeps closing the connection when talking to your app, and this makes a huge difference, even if you are using UNIX domain sockets.

uWSGI has served us well, but performance- and flexibility-wise it’s not the right choice anymore. uWSGI in async mode has a fixed number of workers, which makes forking take longer and uses a lot more RAM, and it doesn’t support keep-alive on any protocol. The 2.1 release (which nobody knows when will be released) will support keep-alive for HTTP, but I still fail to see how that would scale with fixed resources.

Here are some numbers when benchmarking with a single worker on my laptop:

uWSGI 30k req/s (FastCGI protocol doesn’t support keep conn)
uWSGI 32k req/s (uwsgi protocol, which also doesn’t support keeping connections open)
cutelyst-wsgi 24k req/s (FastCGI, keep_conn off)
cutelyst-wsgi 40k req/s (FastCGI, keep_conn on)
cutelyst-wsgi 42k req/s (HTTP proxy_pass with keep-alive on)

As you can see, the uwsgi protocol is faster than FastCGI, so if you still need uWSGI, use the uwsgi protocol; but there’s a clear win in using cutelyst-wsgi.
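
On the NGINX side, keeping FastCGI connections open needs both a keepalive upstream and fastcgi_keep_conn; roughly like this (socket address and pool size are placeholders):

upstream cutelyst_app {
    server unix:/tmp/app.sock;   # or 127.0.0.1:9000
    keepalive 16;                # idle connections NGINX keeps open
}

server {
    location / {
        include fastcgi_params;
        fastcgi_pass cutelyst_app;
        fastcgi_keep_conn on;    # don't close after each request
    }
}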

UNIX sockets weren’t supported in cutelyst-wsgi and are now supported with a HACK. Sadly QLocalServer doesn’t expose the socket descriptor (plus a few other things, which are waiting for responses on their bug reports; maybe I’ll find time to write patches and ask for review), so I inspect the children() until a QSocketNotifier is found, and there I get it. Works great, but I know it might break in future Qt releases; at least it won’t crash.

With UNIX sockets came command line options like --uid, --gid, --chown-socket and --socket-access, as well as systemd notify integration.

All of this made me review some code and realize a bad decision I had made, which was to store headers in lower case. Since the uWSGI and FastCGI protocols bring them in upper case form, I was wasting time converting them, and if the request comes over the HTTP protocol header names are case insensitive, so we have to normalize anyway. This behavior is also used by frameworks like Django, and the change brought a good performance boost; it will only break your code if you use request headers in your Grantlee templates (which is uncommon, and we still have few users). Normalizing headers in the Headers class was also causing QString to detach, giving us a performance penalty; it will still detach if you don’t access/set the headers in the stored form (i.e. CONTENT_TYPE).
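
The normalization itself is just the usual CGI convention; a sketch (not the exact Cutelyst code) of what storing headers in that form means:

#include <QString>

// CGI-style header key: "Content-Type" -> "CONTENT_TYPE". uWSGI and FastCGI
// already deliver names in this form, so storing them this way avoids a
// conversion on every request.
static QString normalizeHeaderKey(QString key)
{
    key.replace(QLatin1Char('-'), QLatin1Char('_'));
    return key.toUpper();
}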

These changes made for a boost from 60k req/s to 80k req/s on my machine.

But we are not done: Matthias Fehring also found a security issue. I don’t know when, but some change I made broke the code that returned an invalid user, which was used to check whether authentication was successful, leading a valid username to authenticate even when the logs showed that the password didn’t match. With his patch I added unit tests to make sure this never breaks again.

And to finish, today I wrote unit tests for PBKDF2 according to RFC 6070, and while there I noticed that the code could be faster: before my changes all tests took 44s, now they take 22s. Twice as fast is important, since this is CPU-bound code that needs to be quick to authenticate users without wasting your CPU cycles.
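
PBKDF2 itself is just iterated HMAC. A self-contained sketch (not Cutelyst’s implementation) that can be checked against the RFC 6070 vectors; for example, pbkdf2Sha1("password", "salt", 1, 20) must produce 0c60c80f961f0e71f3a9b524af6012062fe037a6:

#include <QByteArray>
#include <QCryptographicHash>
#include <QMessageAuthenticationCode>
#include <QtEndian>

// PBKDF2-HMAC-SHA1 (RFC 2898): DK = T1 || T2 || ... truncated to dkLen
QByteArray pbkdf2Sha1(const QByteArray &password, const QByteArray &salt,
                      int iterations, int dkLen)
{
    QByteArray key;
    for (quint32 block = 1; key.size() < dkLen; ++block) {
        QByteArray blockIndex(4, '\0'); // INT(block), big-endian
        qToBigEndian(block, reinterpret_cast<uchar *>(blockIndex.data()));

        // U1 = HMAC(password, salt || INT(block))
        QByteArray u = QMessageAuthenticationCode::hash(
                    salt + blockIndex, password, QCryptographicHash::Sha1);
        QByteArray t = u;
        for (int i = 1; i < iterations; ++i) {
            // Ui = HMAC(password, Ui-1); T = T xor Ui
            u = QMessageAuthenticationCode::hash(u, password,
                                                 QCryptographicHash::Sha1);
            for (int j = 0; j < t.size(); ++j)
                t[j] = t[j] ^ u[j];
        }
        key += t;
    }
    return key.left(dkLen);
}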

Get it! https://github.com/cutelyst/cutelyst/archive/r1.3.0.tar.gz

Oh, and while the FreeNode #cutelyst IRC channel is still empty, I created a Cutelyst group on Google Groups: https://groups.google.com/forum/#!forum/cutelyst

Have fun!


January 03, 2017

Cutelyst, the C++/Qt web framework, has a new release.

  • Test coverage got some additions to avoid breakage in the future.
  • Responses without content-length (chunked or close) are now handled properly.
  • The StatusMessage plugin got new, easier-to-use methods, and has the first deprecated API too.
  • The Engine class now has a struct with the request data its subclasses should create; benchmarks showed this as a micro-optimization, but I’ve heard functions with many arguments (as it was before) are bad on ARM, so I guess this is the first optimization for ARM 🙂
  • The Chained dispatcher finally got a performance improvement; I didn’t benchmark it, but it should be much faster now.
  • Increased the usage of lambdas when the called function was small/simple; for some reason they reduce the library size, so I guess it’s a good thing…
  • The Sql helper classes can now use QThread::objectName(), which has the worker thread id as its name, so writing thread-safe code is easier (see the sketch after this list).
  • WSGI got static-map and static-map2 options; both work the same way as in uWSGI, allowing you to serve static files without uWSGI or a webserver.
  • WSGI got both auto-reload and touch-reload implemented, which help a lot during development.
  • Request::addressString() was added to make it easier to get the string representation of the client IP, also removing the IPv6 prefix if IPv4 conversion succeeds.
  • Grantlee::View now exposes the Grantlee::Engine pointer so one can add filters to it (useful for i18n).
  • Context got a locale() method to help with translations.
  • Cutelyst::Core got ~25k smaller
  • Some other small bug fixes and optimizations….
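
The per-thread naming trick from the Sql item above looks roughly like this (a hypothetical sketch; the driver name is a placeholder and the actual helper API may differ):

#include <QSqlDatabase>
#include <QThread>

// QSqlDatabase connections must not be shared between threads, so append the
// worker thread's objectName() to the connection name to get one per thread.
QSqlDatabase threadDatabase(const QString &name)
{
    const QString conn = name + QLatin1Char('-')
                       + QThread::currentThread()->objectName();
    if (QSqlDatabase::contains(conn))
        return QSqlDatabase::database(conn);
    return QSqlDatabase::addDatabase(QStringLiteral("QMYSQL"), conn);
}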

For 1.3.0 I hope the WSGI module can deal with the FastCGI and/or uwsgi protocols, as well as gain helper methods to deal with i18n. But a CMlyst release might come first 🙂

Enjoy https://github.com/cutelyst/cutelyst/archive/r1.2.0.tar.gz


December 16, 2016

Cutelyst the C++/Qt Web Framework just got a new release.

Yesterday I was going to do the 1.1.0 release: after running the tests and being happy with the current state, I wrote this blog post. But before publishing I went to the http://www.cutelyst.org CMS (CMlyst) to paste the same post, and I got hit by a bug I had seen on 1.0.0, which at the time I thought was a bug in CMlyst. So I decided to investigate, and found a few other bugs: our WSGI wasn’t properly changing the current directory, and it wasn’t replacing the + sign with a space when parsing form data, which was the main bug. I did the fixes, tagged 1.1.1, and today found that automatically setting Content-Length wasn’t working when the View rendered nothing.

The Cutelyst Core and Plugins API/ABI are stable, but Cutelyst-WSGI had to have changes: I had forgotten to expose the API needed for it to actually be useful outside Cutelyst, so this isn’t a problem (nobody would have been able to use it anyway). API stability now extends to all components.

Thanks to a KDAB video about perf, I finally managed to use it and was able to detect two slow code paths in Cutelyst-WSGI.

The first, which was causing 7% overhead, was due to QTcpSocket emitting the readyRead() signal and the HttpParser code calling sender() to get the socket. It turns out the sender() implementation is not as simple as I thought, and it contains a QMutexLocker, which maybe caused some threads to wait on it (not sure really). So now there is an HttpParser for each QTcpSocket; this uses a little more memory, but the code got faster. Hopefully in Qt 6 the signal can have a readyRead(QIODevice *) signature.

The second was that I had ended up using QByteArrayMatcher to match \r\n when getting request headers. perf showed it was causing 1.2% overhead, and after doing some benchmarks I replaced it with code that was faster. Using a single thread on my Intel i5 I can process 85k req/s, so things did indeed get a little faster.

It got a contribution from a new developer on View::Email, which made me do a new simplemail-qt release (1.3.0); the code there still needs love regarding performance, but I haven’t managed to find time to improve it yet.

This new release has some new features:

  • An epoll event loop was added to Linux builds; it is supposedly faster than the default glib one, but I couldn’t measure the difference properly. It is optional and requires the CUTELYST_EVENT_LOOP_EPOLL=1 environment variable to be set.
  • ViewEmail got methods to set/get the authentication method and connection type.
  • WSGI: TCP_NODELAY, TCP keep-alive, SO_SNDBUF and SO_RCVBUF can now be set on the command line; these use the same names as in uwsgi.
  • Documentation was improved a bit and can now be generated with make docs, which no longer requires changing the Cutelyst version manually.

And also some important bug fixes:

  • Lazy evaluation of Request methods was broken in 1.0.0
  • WSGI: fixed loading configs and parsing command line options

Download https://github.com/cutelyst/cutelyst/archive/r1.1.2.tar.gz and have fun!


November 17, 2016

Right after my first blog post about Cutelyst, users asked me about more “realistic” benchmarks and mentioned the TechEmpower benchmarks. Though it would have been a fairly easy task to build the tests at that time, it didn’t seem useful to spend the time while Cutelyst was a moving target.

Around 0.12 it became clear that the API was nearly frozen, and everyone loves benchmarks, so they would serve as a “marketing” thing. Since the beginning of the project I’ve learned lots of optimizations, which made the code get faster and faster; there is still room for improvement, but changes now need to be carefully measured.

Cutelyst is the second web framework in the TechEmpower benchmarks that uses Qt, the other one being TreeFrog, and Qt’s appearance was noticed on their blog:

https://www.techempower.com/blog/2016/11/16/framework-benchmarks-round-13/

The cutelyst-thread tests suffered from a segfault (the worker was restarted by the master process); the fix is in the 1.0.0 release, but it was too late for round 13, so in round 14 it will get even better results.

https://www.techempower.com/benchmarks/

If you want to help MongoDB/Grantlee/Clearsilver tests are still missing 😀


November 10, 2016

Cutelyst, the Qt web framework, just reached its first stable release. It’s been 3 years since the first commit, and I can say it finally got to a shape where I think I’m able to keep its API/ABI stable. The idea is to put any breaks into a 2.0 release by the end of next year, although I don’t expect many changes, as I’m quite happy with its current state.

The biggest change from the last release is in the SessionStoreFile class. It initially used QSettings for simplicity, but it was obvious that its performance would be far from ideal, so I replaced QSettings with a plain QFile and QDataStream; this produced smaller session files and made the code twice as fast. Profiling showed it was still slow, though, because it was writing to disk multiple times in the same request, so the code was changed to merge the changes and only save to disk when the Context gets destroyed. On my machine the performance went from 3.5k to 8.5k req/s on reads and writes, and from 4k to 12k on reads, all on the same session id; it should be faster with different ids. This is also very limited by disk IO, as we use a QLockFile to avoid concurrency, so if you need something faster you can subclass SessionStore and use SQL or Redis…
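
The save path is essentially plain Qt serialization; a simplified sketch of the idea (not the exact SessionStoreFile code):

#include <QDataStream>
#include <QFile>
#include <QLockFile>
#include <QVariantHash>

// Write all session values in one go when the Context is destroyed,
// instead of once per changed key as happened with QSettings.
bool saveSession(const QString &path, const QVariantHash &data)
{
    QLockFile lock(path + QLatin1String(".lock")); // serialize concurrent writers
    if (!lock.lock())
        return false;

    QFile file(path);
    if (!file.open(QIODevice::WriteOnly))
        return false;

    QDataStream out(&file);
    out << data; // smaller and faster than the QSettings text format
    return true;
}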

Besides that, the TechEmpower round 13 previews showed a segfault in the new Cutelyst WSGI in threaded mode, very hard to reproduce but with an easy fix. The initial fix made the code a bit ugly, so I checked whether XCode’s clang had gained the thread_local feature, and finally XCode 8 has support for it; if you are using XCode clang you now need version 8.

Many other small bugs also got fixed, and the API from 0.13.0 is probably unchanged.

Now, if you are a distro packager, please 😀 package it; and if you are a dev who was afraid of API breaks, keep calm and have fun!

Download: https://github.com/cutelyst/cutelyst/archive/r1.0.0.tar.gz


October 21, 2016

The last official stable release was done more than 3 years ago, based on Qt/KDE 4 tech. After that, a few fixes got into what would have been 0.4.0, but as I needed to change my priorities it was never released.

Thanks to Lukáš Tinkl it was ported to KF5; in his port he increased the version number to 0.5.0, but still without a proper release, so distros relied on a git checkout.

Since I started writing Cutelyst I have put a brake on other open source projects, mostly just reviewing the few patches that come in. Cutelyst allows me to create more of the stuff I get paid to do, so I’m using my free time to pick paid projects now. A few months ago Cristiano Bergoglio asked me about a KF5 port, and about getting a calibration tool that doesn’t depend on gnome-color-manager, and we made a deal to do these things.

This new release got a few bugs fixed and pending patches merged, and I did some code modernization to make use of Qt5/C++11 features. Oh, and yes, it doesn’t crash on Wayland anymore; but since on Wayland color correction is a task for the compositor, you won’t be able to set an ICC profile for a monitor, only for other devices.

For the next release, you will be able to calibrate your monitor without the need for the GNOME tools.

Download: http://download.kde.org/stable/colord-kde/0.5.0/src/colord-kde-0.5.0.tar.xz.mirrorlist


September 01, 2016

Cutelyst, the Qt web framework, just got a new release: 0.13.0.

A new release was needed now that we have this nice new logo.

Special thanks to Alessandro Longo (Alex L.) for crafting this cute logo, and a cool favicon for Cutelyst web site.

But this release isn’t only about the logo; it’s full of cool things:

When I started Cutelyst, a simple developer engine (read: HTTP engine) was created. It was very slow and mostly an ugly hack, but it helped work on the APIs that matter. I then took a look at uWSGI after a friend said it was awesome, and it was great to be able to deal with many protocols without the hassle of writing parsers for them.

Fast-forward to the 0.12.0 release, and I started to feel that I was reaching a limit on Cutelyst optimizations and that uWSGI was holding us back. And it wasn’t only about performance: memory usage (scalability) was too high for something that should be rather small; it’s written in C after all.

It also has a fixed number of requests it can take: if you start it with 5 threads or processes, that’s 5 blocking clients that can be processed at the same time. If you use the async option, you then have a fixed number of clients per process: 5 processes * 5 async clients = 25 clients at the same time, but these 5 async clients are always pre-allocated, which means each new process is bigger right from launch.

Now think about WebSockets: how can one deal with 5000 simultaneous clients? 50 processes with async = 100? Performance in async mode was also slower, due to the complexity of dealing with them.

So before getting into writing an alternative to uWSGI in Cutelyst, I did a simple experiment: I asked uWSGI to load a Cutelyst app and fork 1000 times, and wrote a simple QCoreApplication that would do the same. uWSGI used > 1 GB of RAM and took around 10s to start, while the Qt app used < 300 MB of RAM and around 3s. ~700 MB is a lot of RAM, and that was enough to get me started.

Cutelyst-wsgi is born. Granted, the command line arguments are very similar to uWSGI’s, and I also followed the same separation between socket and protocol handling; of course, in C++ things are more reusable, so our Protocol class has an HTTP subclass and in the future will have FastCGI and uWSGI ones too.

Did I mention that uWSGI before 2.1 doesn’t support keep-alive? And that 2.1 is not released, nor does anyone know when it will be? Cutelyst-wsgi supports keep-alive and HTTP pipelining, is completely async, and yes, performs a little better. If you put NGINX in front of uWSGI you can get keep-alive support, but guess what? The uwsgi protocol closes the connection to the front server, so it’s quite hard to get very high speeds. Preliminary results of TechEmpower benchmarks round 13 showed Cutelyst hitting these limits while other frameworks were using keep-alive properly.

Thanks to this new engine, the Engine API got several improvements and is quite stable now. Besides it, a few other important changes were made as well:

  • Changed internals to take advantage of NRVO (named return value optimization)
  • Improved the speed of Context::uriFor(); Cutelyst now requires Qt 5.6 due to a behavior change in QUrl
  • Improved the speed and memory usage of the URL query parser (1s faster over 1M iterations). Using QByteArray::split() is very convenient, but it allocates more memory plus a QList for the results; using ::indexOf() and taking the parts manually is both faster and more memory efficient (see the sketch after this list). Yes, this is the kind of optimization we do in Cutelyst::Core; in application code the extra complexity might not be worth it.
  • C++ ranged for loops: all our Q_FOREACH & friends were replaced with ranged for loops
  • Use of the new reverse and equal_range iterators
  • Use of QHash for storing headers; this was done after several benchmarks showed QHash was faster for all common cases (namely, if it kept values() in order like QMap it would be used in other places as well)
  • Replaced most QList with QVector, and internally std::vector
  • Multipart/form-data got faster; it doesn’t seek() anymore, but requires a non-sequential QIODevice, as each Upload object points to parts of the body device
  • Added a few more unit tests
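
The split() vs indexOf() point in the query-parser item above is easiest to see in code. A simplified sketch of the idea (not the actual Cutelyst parser):

#include <QByteArray>

// Walk "a=1&b=2&c" with indexOf() instead of split('&'): no QList of parts
// and no deep copies, since fromRawData() only references the original buffer.
void parseQuery(const QByteArray &query)
{
    int pos = 0;
    while (pos < query.size()) {
        int end = query.indexOf('&', pos);
        if (end < 0)
            end = query.size();
        const QByteArray pair =
                QByteArray::fromRawData(query.constData() + pos, end - pos);
        // ... split `pair` on '=' the same way, then percent-decode ...
        pos = end + 1;
    }
}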

Thanks to the above, the core library is also a bit smaller: ~640KB on x64.

I was planning to do a 1.0 after 0.13, but with this new engine I think it’s better to have a 0.14 version and make sure no more changes in Core will be needed for additional protocols.

Download here enjoy!


June 21, 2016

Cutelyst, a web framework built with Qt, is now closer to its first stable release. With it turning 3 years old at the end of the year, I’m doing my best to finally iron it out and reach an API/ABI compromise. This release is full of cool stuff and a bunch of breaks, which most of the time just require recompiling.

For the last 2-3 weeks I’ve been working hard to get most of it unit tested; the Core behavior is now extensively tested with more than 200 tests. This has already proven its benefits, resulting in improved and fixed code.

Continuous integration got broader runs, with gcc and clang on both OSX and Linux (Travis) and with MSVC 12 and 14 on Windows (AppVeyor). Luckily, most of the features I wanted were implemented, but the compiler is pretty upsetting. Running Cutelyst on Windows would require uwsgi, which can be built with MinGW but is in an experimental state, and the developer HTTP engine is not production ready, so Windows usefulness is limited at the moment.

One of the ‘hypes’ of the moment is non-blocking web servers, and this release also fixes things so that uwsgi --async <number_of_requests> is properly handled. Of course there is no magic: if you enable this on blocking code, requests will still have to wait for your blocking task to finish, but there are many benefits of using this if you have non-blocking code. At the moment, once a slot is called to process the request, if you want to do a GET on some web service you can use QNetworkAccessManager for the call and create a local QEventLoop, so that once QNetworkReply’s finished() is emitted you continue processing. Hopefully some day the QtSql module will have an async API, but you can of course create a thread queue.
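
That local-event-loop pattern looks like this (a sketch; the URL is a placeholder):

#include <QEventLoop>
#include <QNetworkAccessManager>
#include <QNetworkReply>
#include <QNetworkRequest>

// Wait for one reply without blocking the whole worker: the local event
// loop keeps processing other events until finished() fires.
QByteArray fetch(const QUrl &url)
{
    QNetworkAccessManager nam;
    QNetworkReply *reply = nam.get(QNetworkRequest(url));

    QEventLoop loop;
    QObject::connect(reply, &QNetworkReply::finished, &loop, &QEventLoop::quit);
    loop.exec();

    const QByteArray body = reply->readAll();
    reply->deleteLater();
    return body;
}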

A new plugin called StatusMessage was also introduced; it generates an ID which you use when redirecting to some other page, the message is displayed only once, and it doesn’t suffer from those flash race conditions.

The upload parser for Content-Type multipart/form-data got a huge boost in performance, as it now uses QByteArrayMatcher to find boundaries; the bigger the upload, the more apparent the change.

Chunked responses also got several fixes and one great improvement, which allows using them with classes like QXmlStreamWriter by just passing the Response class (which is now a QIODevice) to its constructor or setDevice(). On the first write, the HTTP headers are sent and it starts creating chunks. For some reason this doesn’t work when using the uwsgi protocol behind NGINX; I still need to dig into it, and maybe disable the chunk markup depending on the protocol used by uwsgi.

A Pagination class is also available to ease the work needed to write pagination, with methods to set the proper LIMIT and OFFSET on SQL queries.

Benchmarks for TechEmpower were written and will be available in round 14.

Last but not least, there is now QtCreator integration, which allows creating a new project and Controller classes; but you need to manually copy (or link) the qtcreator directory to ~/.config/QtProject/qtcreator/templates/wizard.


As usual many bug fixes are in.

Help is welcome; you can mail me or hang out on #cutelyst at freenode.

Download here.



June 17, 2016

I have wanted to write this blogpost since April, and even announced it in two previous posts, but never got around to actually writing it until now. And with the recent events in Snappy and Flatpak land, I can not defer this post any longer (unless I want to answer the same questions over and over on IRC ^^).

As you know, I have been developing the Limba 3rd-party software installer since 2014 (see this LWN article explaining the project better than I could 😉 ), a spiritual successor to the Listaller project, which had been in development since roughly 2008. Limba got some competition from Flatpak and Snappy, so it’s natural to ask what the project’s next steps will be.

Meeting with the competition

At the last FOSDEM and at the GNOME Software sprint this year in April, I met with Alexander Larsson and we discussed the rather unfortunate situation we got into, with Flatpak and Limba being in competition.

Both Alex and I have been experimenting with 3rd-party app distribution for quite some time, with me working on Listaller and him working on Glick and Glick2, and none of these projects ever went anywhere. Around the time I started Limba, fixing design mistakes made with Listaller, Alex started a new attempt at software distribution, this time with sandboxing added to the mix and a new OSTree-based design of the software-distribution mechanism. It wasn’t at all clear back then that XdgApp, later renamed to Flatpak, would get huge backing from GNOME and later Red Hat, becoming a very promising candidate for a truly cross-distro software distribution system.

The main difference between Limba and Flatpak is that Limba allows modular runtimes, with things like the toolkit, helper libraries and programs being separate modules which can be updated independently. Flatpak, on the other hand, allows just one static runtime and enforces everything that is not in the runtime already to be bundled with the actual application. So, while a Limba bundle might depend on multiple individual other bundles, Flatpak bundles only have one fixed dependency on a runtime. A compromise between those two concepts is not possible, and since the modular vs. static approaches in Limba and Flatpak were fundamental, conscious design decisions, merging the projects was not possible either.

Alex and I had very productive discussions, and except for the modularity issue, we were pretty much on the same page in every other aspect regarding the sandboxing and app-distribution matters.

Sometimes stepping out of the way is the best way to achieve progress

So, what to do now? Obviously, I can continue to push Limba forward, but given all the other projects I maintain, this seems to be a waste of resources (Limba eats a lot of my spare time). Now with Flatpak and Snappy being available, I am basically competing with Canonical and Red Hat, who can make much more progress much faster than I can as a single developer. Also, Flatpak’s bigger base of contributors compared to Limba is a clear sign of which project the community favors.

Furthermore, I started the Listaller and Limba projects to scratch an itch. When being new to Linux, it was very annoying to me to see some applications only being made available in compiled form for one distribution, and sometimes one that I didn’t use. Getting software was incredibly hard for me as a newbie, and using the package-manager was also unusual back then (no software center apps existed, only package lists). If you wanted to update one app, you usually needed to update your whole distribution, sometimes even to a development version or rolling-release channel, sacrificing stability.

So, if now this issue gets solved by someone else in a good way, there is no point in pushing my solution hard. I developed a tool to solve a problem, and it looks like another tool will fix that issue now before mine does, which is fine, because this longstanding problem will finally be solved. And that’s the thing I actually care most about.

I still think Limba is the superior solution for app distribution, but it is also the one that is most complex and requires additional work by upstream projects to use it properly. Which is something most projects don’t want, and that’s completely fine. 😉

And that being said: I think Flatpak is a great project. Alex has much experience in this matter, and the design of Flatpak is sound. It solves many issues 3rd-party app development faces in a pretty elegant way, and I like it very much for that. Also the focus on sandboxing is great, although that part will need more time to become really useful. (Aside from that, working with Alexander is a pleasure, and he really cares about making Flatpak a truly cross-distributional, vendor independent project.)

Moving forward

So, what I will do now is not to stop Limba development completely, but keep it going as a research project. Maybe one can use Limba bundles to create Flatpak packages more easily. We also discussed having Flatpak launch applications installed by Limba, which would allow both systems to coexist and benefit from each other. Since Limba (unlike Listaller) was also explicitly designed for web-applications, and therefore has a slightly wider scope than Flatpak, this could make sense.

In any case though, I will invest much less time in the Limba project. This is good news for all the people out there using the Tanglu Linux distribution, AppStream-metadata-consuming services, PackageKit on Debian, etc. – those will receive more attention 😉

An integral part of Limba is a web service called “LimbaHub” to accept new bundles, do QA on them and publish them in a public repository. I will likely rewrite it to be a service using Flatpak bundles, maybe even supporting Flatpak bundles and Limba bundles side-by-side (and if useful, maybe also support AppImageKit and Snappy). But this project is still on the drawing board.

Let’s see 🙂

P.S: If you come to Debconf in Cape Town, make sure to not miss my talks about AppStream and bundling 🙂

May 06, 2016

I recently wrote a bigger project in the D programming language, the appstream-generator (asgen). Since I rarely leave the C/C++/Python realm, and came to like many aspects of D, I thought blogging about my experience could be useful for people considering using D.

Disclaimer: I am not an expert on programming language design, and this is not universally valid criticism of D – just my personal opinion from building one project with it.

Why choose D in the first place?

The previous AppStream generator was written in Python, which wasn’t ideal for the task for multiple reasons, most notably multiprocessing and LMDB not working well together (and in general, multiprocessing being terrible to work with) and the need to reimplement some already existing C code in Python again.

So, I wanted a compiled language which would work well together with the existing C code in libappstream. Using C was an option, but my least favourite one (writing this in C would have been much more cumbersome). I looked at Go and Rust and wrote some small programs performing basic operations that I needed for asgen, to get a feeling for the languages. Interfacing C code with Go was relatively hard – since libappstream is a GObject-based C library, I expected to be able to auto-generate Go bindings from the GIR, but there were only a few outdated projects available which did that. Rust, on the other hand, required the most time to learn, and since I only briefly looked into it, I still can’t write Rust code without having the coding reference open. I started to implement the same examples in D just for fun, as I didn’t plan to use D (I was aiming at Go back then), but the language looked interesting. The D language had the huge advantage of being very familiar to me as a C/C++ programmer, while also having a rich standard library, which included great stuff like std.concurrency.Generator, std.parallelism, etc. Translating Python code into D was incredibly easy; additionally, an actively maintained gir-d-generator exists (I created a small fork anyway, to be able to directly link against the libappstream library, instead of dynamically loading it).

What is great about D?

This list is just a huge braindump of things I had on my mind at the time of writing 😉

Interfacing with C

There are multiple things which make D awesome, for example interfacing with C code – and to a limited degree with C++ code – is really easy. Also, working with functions from C in D feels natural. Take these C functions imported into D:

extern(C):
nothrow:

struct _mystruct {}
alias mystruct_p = _mystruct*;

mystruct_p mystruct_create ();
void mystruct_load_file (mystruct_p my, const(char)* filename);
void mystruct_free (mystruct_p my);

You can call them from D code in two ways:

auto test = mystruct_create ();
// treating "test" as function parameter
mystruct_load_file (test, "/tmp/example");
// treating the function as member of "test"
test.mystruct_load_file ("/tmp/example");
test.mystruct_free ();

This allows writing logically sane code, in case the C functions can really be considered member functions of the struct they are acting on. This property of the language is a general concept, so a function which takes a string as first parameter, can also be called like a member function of string.
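
To illustrate, here is a tiny self-contained sketch of this uniform function call syntax (UFCS), using a made-up helper function:

import std.stdio : writeln;

string shout (string s) {
    return s ~ "!";
}

void main () {
    writeln (shout ("hello")); // regular call: prints "hello!"
    writeln ("hello".shout ()); // same call, written as if shout were a member of string
}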

Writing D bindings to existing C code is also really simple, and can even be automated using tools like dstep. Since D can also easily export C functions, calling D code from C is possible as well.
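
As a quick sketch of that other direction (the function name is made up for illustration): compiled into a library, this is callable from C as int d_add(int, int), assuming the caller initializes the D runtime first (e.g. via rt_init()):

// a D function exported with C linkage
extern (C) int d_add (int a, int b) nothrow {
    return a + b;
}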

Getting rid of C++ “cruft”

There are many things which are bad in C++, some of which are inherited from C. D kills pretty much all of the stuff I found annoying. Some cool stuff from D is now in C++ as well, which makes this point a bit less strong, but it’s still valid. E.g. getting rid of the #include preprocessor dance by using symbolic import statements makes sense, and there have IMHO been huge improvements over C++ when it comes to metaprogramming.

Incredibly powerful metaprogramming

Getting into detail about that would take way too long, but the metaprogramming abilities of D must be mentioned. You can do pretty much anything at compile time, for example compiling regular expressions to make them run faster at runtime, or mixing in additional code from string constants. The template system is also very well thought out, and never caused me as many headaches as C++ sometimes manages to do.
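
As a small sketch of both ideas, here is a regular expression compiled at compile time (via std.regex’s ctRegex) together with a string mixin:

import std.regex : ctRegex, matchFirst;
import std.stdio : writeln;

void main () {
    // the pattern is turned into D code at compile time instead of being
    // interpreted at runtime
    auto semver = ctRegex!(`(\d+)\.(\d+)\.(\d+)`);
    auto m = matchFirst ("asgen 0.3.0", semver);
    if (!m.empty)
        writeln (m[1], ".", m[2], ".", m[3]); // 0.3.0

    // string mixins paste code from a string constant into the program
    mixin ("int answer = 42;");
    writeln (answer);
}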

Built-in unit-test support

Unittesting with D is really easy: You just add one or more unittest { } blocks to your code, in which you write your tests. When running the tests, the D compiler will collect the unittest blocks and build a test application out of them.

The unittest scope is useful, because you can keep the actual code and the tests close together, and it encourages writing tests and keeping them up-to-date. Additionally, D has built-in support for contract programming, which helps to further reduce bugs by validating input/output.
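
A minimal sketch of both features (build and run the tests with dmd -unittest):

int divide (int a, int b)
in {
    assert (b != 0, "denominator must not be zero"); // contract, checked on entry
}
do {
    return a / b;
}

unittest {
    // collected by the compiler and executed when built with -unittest
    assert (divide (10, 2) == 5);
    assert (divide (7, 7) == 1);
}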

Safe D

While D gives you the whole power of a low-level system programming language, it also allows you to write safer code and have the compiler check for that, while still being able to use unsafe functions when needed.

Unfortunately, @safe is not the default for functions though.

Separate operators for addition and concatenation

D exclusively uses the + operator for addition, while the ~ operator is used for concatenation. This is likely a personal quirk, but I love it very much that this distinction exists. It’s nice for things like addition of two vectors vs. concatenation of vectors, and makes the whole language much more precise in its meaning.
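
A short example of the distinction:

void main () {
    auto n = 1 + 2; // 3: + always means addition
    auto s = "foo" ~ "bar"; // "foobar": ~ always means concatenation
    auto v = [1, 2] ~ [3, 4]; // [1, 2, 3, 4]
    // "foo" + "bar" does not compile, so the two meanings can never be mixed up
}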

Optional garbage collector

D has an optional garbage collector. Developing in D without GC is currently a bit cumbersome, but these issues are being addressed. If you can live with a GC though, having it active makes programming much easier.

Built-in documentation generator

This is almost granted for most new languages, but still something I want to mention: Ddoc is a standard tool to generate code documentation for D code, with a defined syntax for describing function parameters, classes, etc. It will even take the contents of a unittest { } scope to generate automatic examples for the usage of a function, which is pretty cool.
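
A small sketch of the Ddoc syntax, including a documented unittest which becomes a usage example in the generated documentation:

/**
 * Computes the sum of two integers.
 *
 * Params:
 *      a = the first summand
 *      b = the second summand
 * Returns: the sum of a and b.
 */
int add (int a, int b) {
    return a + b;
}

///
unittest {
    assert (add (2, 3) == 5);
}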

Scope blocks

The scope statement allows one to execute a bit of code when the function exits, when it failed, or when it was successful. This is incredibly useful when working with C code, where a free call needs to be issued when the function is exited, or some arbitrary cleanup needs to be performed on error. Yes, we do have smart pointers in C++ and – with some GCC/Clang extensions – a similar feature in C too. But the scope guard concept in D is much more powerful. See Scope Guard Statement for details.
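
A minimal sketch of scope guards around a manually managed C allocation:

import core.stdc.stdlib : free, malloc;
import std.stdio : writeln;

void work () {
    auto buf = malloc (4096);
    scope (exit) free (buf); // runs however the function is left
    scope (failure) writeln ("cleaning up after an error"); // only if an exception escapes
    scope (success) writeln ("done"); // only on a normal return
    // ... use buf; no free() calls sprinkled over every return path ...
}

void main () {
    work ();
}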

Built-in syntax for parallel programming

Working with threads is so much more fun in D compared to C! I recommend taking a look at the parallelism chapter of the “Programming in D” book.
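
As a taste, the parallel foreach pattern from std.parallelism:

import std.parallelism : parallel;
import std.range : iota;
import std.stdio : writeln;

void main () {
    // iterations are distributed over a task pool sized to the number of CPU cores
    foreach (i; parallel (iota (8))) {
        writeln ("processing item ", i);
    }
}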

“Pure” functions

D allows marking functions as purely functional, which allows the compiler to perform optimizations on them, e.g. caching their return values. See pure-functions.
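
A minimal example:

import std.stdio : writeln;

// a pure function may not access global mutable state, so its result depends
// only on its arguments, which lets the compiler cache or elide calls to it
pure int square (int x) {
    return x * x;
}

void main () {
    writeln (square (6), " ", square (6)); // the two calls may be folded into one
}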

D is fast!

D matches the speed of C++ on almost all occasions, so you won’t lose performance when writing D code – that is, unless you have the GC run often in a threaded environment.

Very active and friendly community

The D community is very active and friendly – so far I only had good experience, and I basically came into the community asking some tough questions regarding distro-integration and ABI stability of D. The D community is very enthusiastic about pushing D and especially the metaprogramming features of D to its limits, and consists of very knowledgeable people. Most discussion happens at the forums/newsgroups at forum.dlang.org.

What is bad about D?

Half-proprietary reference compiler

This is probably the biggest issue. Not because the proprietary compiler is bad per se, but because of the implications this has for the D ecosystem.

For the reference D compiler, Digital Mars’ D (DMD), only the frontend is distributed under a free license (Boost), while the backend is proprietary. The FLOSS frontend is what the free compilers, LLVM D Compiler (LDC) and GNU D Compiler (GDC), are based on. But since DMD is the reference compiler, most features land there first, and the Phobos standard library and druntime are tuned to work with DMD first.

Since major Linux distributions can’t ship with DMD, and the free compilers GDC and LDC lag behind DMD in terms of language, runtime and standard-library compatibility, this creates a split world of code that compiles with LDC, GDC or DMD, but never with all D compilers, due to it relying on features not yet in e.g. GDC’s Phobos.

Especially for Linux distributions, there is no way to say “use this compiler to get the best and latest D compatibility”. Additionally, if people can’t simply apt install latest-d, they are less likely to try the language. This is probably mainly an issue on Linux, but since Linux is the place where web applications are usually written and people are likely to try out new languages, it’s really bad that the proprietary reference compiler is hurting D adoption in that way.

That being said, I want to make clear that DMD is a great compiler, which is very fast and builds efficient code. I only criticise the fact that it is the language’s reference compiler.

UPDATE: To clarify the half-proprietary nature of the compiler, let me quote the D FAQ:

The front end for the dmd D compiler is open source. The back end for dmd is licensed from Symantec, and is not compatible with open-source licenses such as the GPL. Nonetheless, the complete source comes with the compiler, and all development takes place publically on github. Compilers using the DMD front end and the GCC and LLVM open source backends are also available. The runtime library is completely open source using the Boost License 1.0. The gdc and ldc D compilers are completely open sourced.

Phobos (standard library) is deprecating features too quickly

This basically goes hand in hand with the compiler issue mentioned above. Each D compiler ships its own version of Phobos, which it was tested against. For GDC, which I used to compile my code due to LDC having bugs at that time, this means that it is shipping with a very outdated copy of Phobos. Due to the rapid evolution of Phobos, this meant that the documentation of Phobos and the actual code I was working with were not always in sync, leading to many frustrating experiences.

Furthermore, Phobos is sometimes removing deprecated bits about a year after they have been deprecated. Together with the older-Phobos situation, you might find yourself in a place where a feature was dropped, but the cool replacement is not yet available. Or you are unable to import some 3rd-party code because it uses some deprecated-and-removed feature internally. Or you are unable to use other code, because it was developed with a D compiler shipping with a newer Phobos.

This is really annoying, and probably the biggest source of unhappiness I had while working with D – especially the documentation not matching the actual code is a bad experience for someone new to the language.

Incomplete free compilers with varying degrees of maturity

LDC and GDC have bugs, and for someone new to the language it’s not clear which one to choose. Both LDC and GDC have their own issues at times, but they are rapidly getting better, and I only encountered actual compiler bugs in LDC (GDC worked fine, but with an incredibly out-of-date Phobos). All those issues have been fixed in the meantime, but this was a frustrating experience. Some clear advice or explanation of which of the free compilers to prefer when you are new to D would be neat.

For GDC in particular, being developed outside of the main GCC project is likely a problem, because distributors need to manually add it to their GCC packaging, instead of having it readily available. I assume this is due to the DRuntime/Phobos not being subjected to the FSF CLA, but I can’t actually say anything substantial about this issue. Debian adds GDC to its GCC packaging, but e.g. Fedora does not do that.

No ABI compatibility

D has a defined ABI – too bad that in reality, the compilers are not interoperable. A binary compiled with GDC can’t call a library compiled with LDC or DMD. GDC actually doesn’t even support building shared libraries yet. For distributions, this is quite terrible, because it means that there must be one default D compiler, without any exception, and that users also need to use that specific compiler to link against distribution-provided D libraries. The different runtimes per compiler complicate that problem further.

The D package manager, dub, does not yet play well with distro packaging

This is an issue that is important to me, since I want my software to be easily packageable by Linux distributions. The issues causing packaging to be hard are reported as dub issue #838 and issue #839, with quite positive feedback so far, so this might soon be solved.

The GC is sometimes an issue

The garbage collector in D is quite dated (according to their own docs) and is currently being reworked. While working on asgen, which is a program creating a large amount of interconnected data structures in a threaded environment, I realized that the GC significantly slows down the application when threads are used (it also seems to use the UNIX signals SIGUSR1 and SIGUSR2 to stop/resume threads, which I still find odd). Also, the GC performed poorly under memory pressure, which got asgen killed by the OOM killer on some more memory-constrained machines. Triggering a manual collection run once a large batch of these interconnected data structures was no longer needed solved this problem on most systems, but it would of course have been better not to need to give the GC any hints. The stop-the-world behavior isn’t a problem for asgen, but it might be for other applications.

These issues are currently being worked on, with a GSoC project laying the foundation for further GC improvements.

“version” is a reserved word

Okay, that is admittedly a very tiny nitpick, but when developing an app which works with packages and versions, it’s slightly annoying. The version keyword is used for conditional compilation, and needing to abbreviate it to ver in all parts of the code sucks a little (e.g. the “Package” interface can’t have a property “version”, but now has “ver” instead).
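
A short sketch of both sides of that coin:

struct PackageInfo {
    string name;
    string ver; // "version" is a keyword, so the field has to be abbreviated
}

void main () {
    version (linux) {
        // this block is compiled only when targeting Linux
    }
    version (FeatureX) {
        // enabled by building with: dmd -version=FeatureX
    }
}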

The ecosystem is not (yet) mature

In general it can be said that the D ecosystem, while existing for almost 9 years, is not yet that mature. There are various quirks you have to deal with when working with D code on Linux. It’s never anything major; usually you can easily solve these issues and move on, but it’s annoying to have these papercuts.

This is not something which can be resolved by D itself, this point will solve itself as more people start to use D and D support in Linux distributions gets more polished.

Conclusion

I like working with D, and I consider it to be a great language – the quirks in its toolchain are not bad enough to prevent writing great things with it.

At the moment, if I am not writing a shared library or something which uses a lot of existing C++ code, I would prefer D for the task. If a garbage collector is a problem (e.g. for some real-time applications, or when the target architecture can’t run a GC), I would not recommend using D. Rust seems to be the much better choice then.

In any case, D’s flat learning curve (for C/C++ people), paired with the smart choices taken in language design, the powerful metaprogramming, the rich standard library and the helpful community, makes it great to try out and to develop software for scenarios where you would otherwise choose C++ or Java. Quite honestly, I think D could be a great language for tasks where you would usually choose Python, Java or C++, and I am seriously considering replacing quite some Python code with D code. For very low-level stuff, C is IMHO still the better choice.

As always, choosing the right programming language is only 50% technical aspects, and 50% personal taste 😉

UPDATE: To get some idea of D, check out the D tour on the new website tour.dlang.org.

April 26, 2016

This is a question raised quite often, most recently in a blogpost by Thomas, so I thought it would be a good idea to give a slightly longer explanation (and also create an article to link to…).

There are basically three reasons for using XML as the default format for metainfo files:

1. XML is easily forward/backward compatible, while YAML is not

This is a matter of extending the AppStream metainfo files with new entries, or adapting existing entries to new needs.

Take this example XML line for defining an icon for an application:

<icon type="cached">foobar.png</icon>

and now the equivalent YAML:

Icons:
  cached: foobar.png

Now consider we want to add a width and height property to the icons, because we started to allow more than one icon size. Easy for the XML:

<icon type="cached" width="128" height="128">foobar.png</icon>

This line of XML can be read correctly by both old parsers, which will just see the icon as before without reading the size information, and new parsers, which can make use of the additional information if they want. The change is both forward and backward compatible.

This looks different for the YAML file. The “foobar.png” is a string type, and parsers will expect a string as the value for the cached key, while we would need a dictionary there to include the additional width/height information:

Icons:
  cached:
    name: foobar.png
    width: 128
    height: 128

The change shown above will break existing parsers though. Of course, we could add a cached2 key, but that would require people to write two entries, to keep compatibility with older parsers:

Icons:
  cached: foobar.png
  cached2:
    name: foobar.png
    width: 128
    height: 128

Less than ideal.

While there are ways to break compatibility in XML documents too, as well as ways to design YAML documents in a way which minimizes the risk of breaking compatibility later, keeping the format future-proof is far easier with XML compared to YAML (and sometimes simply not possible with YAML documents). This makes XML a good choice for this usecase, since we can not do transitions with thousands of independent upstream projects easily, and need to care about backwards compatibility.

2. Translating YAML is not much fun

A property of AppStream metainfo files is that they can be easily translated into multiple languages. For that, tools like intltool and itstool exist to aid with translating XML using Gettext files. This can be done at project build-time, keeping a clean, minimal XML file, or before, storing the translated strings directly in the XML document. Generally, YAML files can be translated too. Take the following example (shamelessly copied from Dolphin):

<summary>File Manager</summary>
<summary xml:lang="bs">Upravitelj datoteka</summary>
<summary xml:lang="cs">Správce souborů</summary>
<summary xml:lang="da">Filhåndtering</summary>

This would become something like this in YAML:

Summary:
  C: File Manager
  bs: Upravitelj datoteka
  cs: Správce souborů
  da: Filhåndtering

Looks manageable, right? Now, AppStream also covers long descriptions, where individual paragraphs can be translated by the translators. This looks like this in XML:

<description>
  <p>Dolphin is a lightweight file manager. It has been designed with ease of use and simplicity in mind, while still allowing flexibility and customisation. This means that you can do your file management exactly the way you want to do it.</p>
  <p xml:lang="de">Dolphin ist ein schlankes Programm zur Dateiverwaltung. Es wurde mit dem Ziel entwickelt, einfach in der Anwendung, dabei aber auch flexibel und anpassungsfähig zu sein. Sie können daher Ihre Dateiverwaltungsaufgaben genau nach Ihren Bedürfnissen ausführen.</p>
  <p>Features:</p>
  <p xml:lang="de">Funktionen:</p>
  <p xml:lang="es">Características:</p>
  <ul>
    <li>Navigation (or breadcrumb) bar for URLs, allowing you to quickly navigate through the hierarchy of files and folders.</li>
    <li xml:lang="de">Navigationsleiste für Adressen (auch editierbar), mit der Sie schnell durch die Hierarchie der Dateien und Ordner navigieren können.</li>
    <li xml:lang="es">barra de navegación (o de ruta completa) para URL que permite navegar rápidamente a través de la jerarquía de archivos y carpetas.</li>
    <li>Supports several different kinds of view styles and properties and allows you to configure the view exactly how you want it.</li>
    ....
  </ul>
</description>

Now, how would you represent this in YAML? Since we need to preserve the paragraph and enumeration markup somehow, and creating a large chain of YAML dictionaries is not really a sane option, the only choices would be:

  • Embed the HTML markup in the file, and risk non-careful translators breaking the markup by e.g. not closing tags.
  • Use Markdown, and risk people not writing the markup correctly when translating a really long string in Gettext.

In both cases, we would lose the ability to translate individual paragraphs, which also means that as soon as the developer changes the original text in YAML, translators would need to translate the whole thing again, which is inconvenient.

On top of that, there are no tools to translate YAML properly that I am aware of, so we would need to write those too.

3. Allowing XML and YAML makes a confusing story and adds complexity

While adding YAML as a format would not be too hard, given that we already support it for DEP-11 distro metadata (Debian uses this), it would make the business of creating metainfo files more confusing. At the moment, we have a clear story: write the XML, store it in /usr/share/metainfo, use standard tools to translate the translatable entries. Adding YAML to the mix adds an additional choice that needs to be supported for eternity, and it also has the problems mentioned above.

I wanted to add YAML as a format for AppStream, and we discussed this at the hackfest as well, but in the end I think it isn’t worth the pain of supporting it for upstream projects (remember, someone needs to maintain the parsers and specification too, and keep XML and YAML in sync and updated). Don’t get me wrong, I love YAML, but for translated metadata which needs a guarantee on format stability, it is not the ideal choice.

So yeah, XML isn’t fun to write by hand. But for this case, XML is a good choice.

Two weeks ago was the GNOME Software hackfest in London, and I was there! I only just now found the time to blog about it, but better late than never 😉 .

Arriving in London and finding the Red Hat offices

After being stuck in trains for the weekend, but fortunately arriving at the airport in time, I finally made it to London with quite some delay, due to the slow bus transfer from Stansted Airport. After finding the hotel, the next task was to get food and find a place which accepted my credit card, which was surprisingly hard – in defence of London I must say, though, that it was a Sunday, 7 p.m., and my card is somewhat special (in Canada, it managed to crash some card readers, so they needed a hard reset). While searching for food, I also stumbled upon the Red Hat offices, where the hackfest was starting the next day. My hotel, the office and the Tower Bridge were really close together, which was awesome! The last time I had been to London was 2008, and only for a day, so being that close to the city center was great. The hackfest didn’t leave any time to see much of the city, but by being close to the center, one could hardly avoid the “London experience” 😉 .

Cool people working on great stuff

That’s basically the summary of the hackfest 😉 . It was awesome to meet with Richard Hughes again – we hadn’t seen each other in person since 2011, but work on lots of stuff together. This was especially important, since we managed to resolve quite some disagreements we had – Richard even almost managed to make me give in to adding <kudos/> to the AppStream spec, something which I was pretty much against supporting (it didn’t make it yet, but I am no longer against the idea of having it – the remaining issues are solvable).

Meeting Iain Lane again (after FOSDEM) was also very nice, and also seeing other people I’ve only worked with over IRC or bug reports (e.g. William, Kalev, …) was great. Also lots of “new” people were there, like guys from Endless, who build their low-budget computer for developing/emerging countries on top of GNOME and Linux technologies. It’s pretty cool stuff they do, you should check out their website! (they also build their distribution on top of Debian, which is even more awesome, and something I didn’t know before (because many Endless people I met before were associated with GNOME or Fedora, I kind of implicitly assumed the system was based on Fedora 😛 )).

The incarnation of GNOME Software used by Endless looks pretty different from what the normal GNOME user sees, since it’s adjusted for a different audience and input method. But it looks great, and is a good example of how versatile GS already is! And for upstream GNOME, we’ve seen some pretty great mockups done by Endless too – I hope those will make it into production somehow.

Ironically, a “snapstore” was close to the office 😉

XdgApp and sandboxing of apps was also a big topic, aside from Ubuntu and Endless integration. Fortunately, Alexander Larsson was also there to answer all the sandboxing and XdgApp-questions.

I used the time to follow up on a conversation with Alexander we started at FOSDEM this year, about the Limba vs. XdgApp bundling issue. While we are in-line on the sandboxing approach, the way how software is distributed is implemented differently in Limba and XdgApp, and it is bad to have too many bundling systems around (doesn’t make for a good story where we can just tell developers “ship as this bundling format, and it will be supported everywhere”). Talking with Alex about this was very nice, and I think there is a way out of the too-many-solutions dilemma, at least for Limba and XdgApp – I will blog about that separately soon.

On the Ubuntu side, a lot of bugs and issues were squashed and changes upstreamed to GNOME, and people were generally doing their best to reduce Richard’s bus-factor on the project a little 😉 .

I mainly worked on AppStream issues, finishing up the last pieces of appstream-generator and running it against some sample package sets (and later that week against the whole Debian archive). I also started to implement support for showing AppStream issues in the Debian PTS (this work is not finished yet). I also managed to solve a few bugs in the old DEP-11 generator and prepare another release for Ubuntu.

We also enjoyed some good Japanese food, and some incredibly great, but also suddenly very expensive Indian food (but that’s a different story 😉 ).

The most important thing for me, though, was to get together with people actually using AppStream metadata in software centers and also in more specialized places. This yielded some useful findings, e.g. that localized screenshots are not something weird, but actually a wanted feature of Endless for their curated AppStore. So localized screenshots will be part of the next AppStream spec. Also, there seems to be a general need to ship curation information for software centers somehow (which apps are featured? how are they styled? special banners for some featured apps, “app of the day” features, etc.). This problem hasn’t been solved yet, since it’s highly implementation-specific, and AppStream should be distro-agnostic. But it is something we might be able to address in a generic way sooner or later (I need to talk to people at KDE and Elementary about it).

In summary…

It was a great event! Going to conferences and hackfests always makes me feel like it moves projects leaps ahead, even if you do little coding. Sorting out issues together with people you see in person (rather than communicating with them via text messages or video chat), is IMHO always the most productive way to move forward (yeah, unless you do this every week, but I think you get my point 😀 ).

For me, being the only (and youngest ^^) developer at the hackfest who was not employed by any company in the FLOSS business, the hackfest was also motivating to continue to invest spare time into working on these projects.

So, the only thing left to do is a huge shout out of “THANK YOU” to the Ubuntu Community Fund – and therefore the Ubuntu community – for sponsoring me! You rock! Also huge thanks to Canonical for organizing the sponsoring really quickly, so I didn’t get into trouble with paying my flights.

Laney and attente on the Millennium Bridge after we walked the distance between Red Hat and Canonical’s offices.

To worried KDE people: No, I didn’t leave the blue side – I just generally work on cross-desktop stuff, and would like all desktops to work as well as possible 😉

April 16, 2016

Since mid-2015 we have been using the dep11-generator in Debian to build AppStream metadata about the available software components in the distribution.

Getting rid of dep11-generator

Unfortunately, the old Python-based dep11-generator hit some hard limits pretty soon. For example, using multiprocessing with Python was a pain, since it resulted in some very hard-to-track bugs. Also, the multiprocessing approach (as opposed to multithreading) made it impossible to use the underlying LMDB database properly (it was basically closed and reopened in each forked-off process, since pickling the Python LMDB object caused some really funny bugs, which usually manifested themselves in the application hanging forever without any information on what was going on). In addition to that, the Python-based generator forced me to maintain two implementations of the AppStream YAML spec, one in C and one in Python, which consumed quite some time. There were also some other issues (e.g. no unit tests) in the implementation, which made me think about rewriting the generator.

Adventures in Go / Rust / D

Since I didn’t want to write this new piece of software in C (or basically, writing it in C was my last option 😉 ), I explored Go and Rust for this purpose and also did a small prototype in the D programming language, when I was starting to feel really adventurous. And while I never intended to write the new generator in D (I was pretty fixated on Go…), this is what happened. The strong points for D for this particular project were its close relation to C (and ease of using existing C code), its super-flat learning curve for someone who knows and likes C and C++ and its pretty powerful implementations of the concurrent and parallel programming paradigms. That being said, not all is great in D and there are some pretty dark spots too, mainly when it comes to the standard library and compilers. I will dive into my experiences with D in a separate blogpost.

What can you expect from appstream-generator?

So, what can the new appstream-generator do for you? Basically, the same as the old dep11-generator: It will extract metadata from a distribution’s package archive, download and resize screenshots, search for icons and size them properly and generate reports in JSON and HTML of found metadata and issues.

LibAppStream-based parsing, generation of YAML or XML, multi-distro support, …

As opposed to the old generator, the new generator utilizes the metadata parsers and writers of libappstream. This allows it to return the extracted metadata as AppStream YAML (for Debian) or XML (everyone else). It is also written in a distribution-agnostic way, so if someone wants to use it in a distribution other than Debian, this is possible now. It just requires a very small distribution-specific backend to be written; all of the details of the metadata extraction are abstracted away (just two interfaces need to be implemented, as sketched below). While I do not expect anyone except Debian to use this in the near future (most distros have found a solution to generate metadata already), the frontend-backend split is a much cleaner design than what was available in the previous code. It also allows the code to be unit-tested properly, without providing a Debian archive in the testsuite.
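
For illustration, here is the rough shape such a backend split could take; the interface and member names are made up here and are not asgen’s actual API:

// hypothetical sketch of the distro-specific backend interfaces
interface Package {
    @property string name ();
    @property string ver ();
    const(ubyte)[] getFileData (string path); // read one file from the package payload
}

interface PackageIndex {
    // all packages of a suite/section/architecture triplet
    Package[] packagesFor (string suite, string section, string arch);
    void release (); // drop cached data once a suite has been processed
}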

Feature Flags, Optipng, …

The new generator also allows enabling and disabling certain sets of features in a standardized way. E.g. Ubuntu uses a language-pack system for translations, which Debian doesn’t use. Features like this can be implemented as separate modules in the generator which can be disabled. We use this at the moment to e.g. allow descriptions from packages to be used as AppStream descriptions, or for running optipng on the generated PNG images and icons.

No more Contents file dependency

Another issue the old generator had was that it used the Contents file from the Debian archive to find matching icons for an application. We could never be sure whether the entries in the Contents file actually matched the contents of the package we were currently dealing with. What made things worse is that at Ubuntu, the archive software is only updating the Contents file daily (while the generator might run multiple times a day), which has led to software being ignored in the metadata, because icons could not yet be found. Even on Debian, with its quickly-updated Contents file, we could immediately see the effects of an out-of-date Contents file when updating it failed once. In the new generator, we read the contents of each package ourselves now and store them in an LMDB database, bypassing the Contents file and removing the whole class of problems resulting from missing or wrong contents data.

It can’t all be good, right?

That is true, there are also some known issues the new generator has:

Large amounts of RAM required

The better speed of the new generator comes at the cost of holding more stuff in RAM. Much more. When processing data from 5 architectures initially on Debian, the amount of required RAM might lie above 4GB, with the OOM killer sometimes being quicker than the garbage collector… That being said, on subsequent runs the amount of required memory is much lower. Still, this is something I am working on to improve.

What are symbolic links?

To be faster, the appstream-generator reads the md5sums file in .deb packages instead of extracting the payload archive and reading its contents. Since the md5sums file does not list symbolic links, symlinks basically don’t exist for the new generator. This is a problem for software symlinking icons or even .desktop files around, like e.g. LibreOffice does.

I am still investigating how widespread the use of symlinks for icons and .desktop files is, but it looks like fixing the packages (making them move the files rather than symlinking them) might be a better approach than investing additional computing power into finding symlinks, or even switching back to parsing the Contents file. Input on this is welcome!

Deploying asgen

I finished the last pieces of the appstream-generator (together with doing lots of other cool things and talking to great people) at the GNOME Software Hackfest in London last week (detailed blogposts about things that happened there will follow – many thanks once again for the Ubuntu community for sponsoring my attendance!).

Since today, the new generator is running on the Debian infrastructure. If bigger issues are found, we can still roll back to the old code. I decided to deploy this faster, so we can get some good testing done before the Stretch release. Please report any issues you may find!

March 23, 2016

Cutelyst, the Qt web framework, just got a new release. I was planning to do this a while back, but you know how it is: we always want to put in a few more changes, and time to do that is limited.

I was also interested in seeing whether the improvements in Qt 5.6 would result in better benchmark numbers, but they didn’t; the hello world test app is probably too simple for the QString improvements to be noticeable. A real-world application using Grantlee views, SQL and more user data might show some difference. Still, compared to 0.10.0, Cutelyst benchmarks the same, even though there were some small improvements.

The most important changes of this release were:

  • View::Email allows for sending emails using data provided on the stash, and the rendering of the email can be chained to another Cutelyst::View, so for example you can have your email template in Grantlee format, which gets rendered and sent via email (requires simple-mail-qt, which is a fork of SmtpClient-for-Qt with a sane API; View::Email hides the simple-mail-qt API)
  • Utils::Sql provides a set of functions to be used with the QtSql classes. Most important are the functions for serializing a QSqlQuery to a QVariantList of QVariantMaps (or hashes), allowing the query data to be accessed in a View, and the preparedSqlQuery() method, which comes with the macro CPreparedSqlQuery: a lambda that keeps your prepared statement in a static QSqlQuery, avoiding the need for QSqlQuery pointers to keep the prepared queries around (which are a boost for performance)
  • A macro CActionFor which resolves the Action into a static Action * object inside a lambda removing the need to keep resolving a name to an Action
  • Unit tests: at the moment only very limited testing is done, but the Header class has nearly 100% coverage now
  • The upload parser got a fix to properly find the multipart boundary, spotted because the Qt network client sends boundaries a bit differently from Chrome and Firefox; this also resulted in replacing the QRegularExpression match of the boundary part of the header with a simple search that is 5 times faster
  • Require Qt 5.5 (this allows for removal of workaround for QJson::fromHash and allows for the use of Q_ENUM and qCInfo)
  • Fixed crashes, and a memory leak in Stats when enabled
  • Improved usage of QStringLiteral and QLatin1String with clazy help
  • Added CMake support for setting the plugins install directory
  • Added more ‘const’ and ‘auto’
  • Removed the uwsgi --cutelyst-reload option, which never worked and can be replaced by --touch-reload and --lazy
  • Improvements and fixes on cutelyst command line tool
  • Many small bugs fixed

The Cutelyst website is powered by Cutelyst and CMlyst, which is a CMS. Porting CMlyst from QSettings to SQLite is on my TODO list, but only once I figure out why, out of the blue, I get a “database locked” error even though no other query is running (and yes, I tried query.finish()); once I figure that out I’ll make a new CMlyst release.

Download here.

Have fun!


December 14, 2015

Back in 2011, when the AppStream meeting in Nürnberg had just happened, I published the DEP-11 (Debian Extension Project 11) draft together with Michael Vogt and Julian Andres Klode, as an approach to implementing AppStream in Debian.

Back then, the FTPMasters team rejected the suggestion to use the official XML specification, and so the DEP-11 specification was adapted to be based on YAML instead of XML. This wasn’t much of a big deal, since the initial design of DEP-11 was to be a superset of the AppStream specification, so it wasn’t meant to be exactly like AppStream anyway. AppStream back then was only designed for applications (as in “stuff that provides a .desktop file”), but with DEP-11 we aimed for much more: DEP-11 should also describe fonts, drivers, pkg-config files and other metadata, so in the end one would be able to ask the package manager meaningful questions like “is the firmware of device X installed?” or request actions such as “please install me the GIMP”, making it unnecessary to know package names at all, and making packages a mere implementation detail.

Then, GNOME-Software happened and demanded all these features. Back then, I was the de-facto maintainer of the AppStream upstream project already, but didn’t feel like being the maintainer yet, so I only curated the existing specification, without extending it much. The big push forward GNOME-Software created changed that dramatically, and with me taking control of the specification and documenting it properly, the very essence of DEP-11 became AppStream (that was around the AppStream 0.6 release). So today, DEP-11 is mainly a YAML-based version of the AppStream XML specification.

AppStream XML and DEP-11 YAML are both implemented in two projects: GLib- and Qt-based libraries exist to access the metadata, and AppStream is used by the software centers of GNOME, KDE and Elementary.

Today there are two things to celebrate for me: First of all, there is the release of AppStream 0.9 (that happened last Saturday already), which brings some nice improvements to the API for developers and some micro-optimizations to speed up Xapian database queries. Yay!

The second thing is full DEP-11 support in Debian! This means that you don’t need to copy metadata around manually, or install extra packages: All you need to do is to install the appstream package, everything else is done for you, and the data is kept up to date automatically.

This is made possible by APT 1.1 (thanks to the whole APT team!), some dedicated support for it in AppStream directly, the work of our Sysadmin team at Debian, which set up infrastructure to build the metadata automatically, as well as our FTPMasters team where Joerg helped with the final steps of getting the metadata into the archive.

That AppStream data is now in the archive doesn’t mean we live in a perfect utopia yet – there are still issues to be handled, but all the major work is done now and we can now gradually improve the data generator and tools and squash the remaining bugs.

And another item from the good news department: It’s highly likely that Ubuntu will follow Debian in AppStream/DEP-11 support with the upcoming Xenial release!

But how can I make use of the new metadata?

Just install the appstream package – everything is done for you! Another easy way is to install GNOME-Software, which makes use of the new metadata already. KDE Discover in Debian does not enable support for AppStream yet; this will likely come later.

If you prefer to use the command-line, you can now use commands like

sudo appstreamcli install org.kde.kate.desktop

This will simply install the Kate text editor.

Who wants some statistics?

At the moment, the Debian Sid/Unstable suite contains 1714 valid software components. It could be even more if the errors generated during metadata extraction were resolved. For that, the metadata generator has a nice statistics page, showing the number of hints of each type in the suite, as well as the development of the available software components in Debian and the hint type counts over time (this plot feature was just added recently, so we are still a bit low on data). For packagers and interested upstreams, the data extractor creates detailed reports for each package, explaining why data was not included and how to fix the issue (in case something is unclear, please file a bug report and/or get in contact with me).

In summary

Thanks to everyone who helped to make this happen! This project means a lot to me; while writing this blog post I realized that I have basically been working on it for almost 5 years (!) now (and the idea is even older). Seeing it grow into such a huge success in other distributions was a joy, but now Debian can join the game with first-class AppStream support as well, which makes me even happier. After all, Debian is the distribution I feel most at home with.

There is still lots of work to do (and already a few bugs known), but the hardest part of the journey is done – let’s walk into a bright future with AppStream!

December 11, 2015

It is time again for another Tanglu blogpost (to be honest, this article is pretty much overdue 😉 ). I want to shine a spotlight on the work that’s being done in Tanglu, be it ongoing work or past work done for the new release.

Why is the new release taking longer than usual?

As you might have noticed, a new Tanglu release (Tanglu 4 “Dasyatis”) would usually be due this month. We decided a while ago, however, to defer the release, and are now aiming for a release in February/March 2016.

The reason for this change in schedule is the GCC 5 transition (and, more importantly, the huge number of follow-up transitions), as well as some other major infrastructure tasks which we would like to complete and give a good amount of testing before releasing. Also, some issues with our build system, and generally less build power than in previous releases, are a problem (at least the Debile build-system issues could be worked around or solved). The hard disk crash in the forum and bugtracking server also delayed the start of the Tanglu 4 development process a lot.

In all of these tasks, manpower is of course the main problem 😉

Software Tasks

General infrastructure tasks

Improvements on Synchrotron

Synchrotron, the software which is synchronizing the Tanglu archive with the Debian archive, received a few tweaks to make it more robust and detect “installability” of packages more reliably. We also run it more often now.

Rapidumo improvements

Rapidumo is a software written to complement dak (the Debian Archive Kit) in managing the Tanglu archive. It performs automatic QA tasks on the archive and provides a collection of tools to perform various tasks (like triggering package rebuilds).

For Tanglu 4, we sometimes drop broken packages from the archive now, to speed up transitions and to more quickly remove packages which have become uninstallable. Packages removed from the release suite still have a chance to enter it again, but they need to go through Tanglu’s staging area first. The removal process is currently semiautomatic, to avoid unnecessary removals and breakage.

Rapidumo could benefit from some improvements and an interactive web interface (as opposed to static HTML pages) would be nice. Some early work on this is done, but not completed (and has a very low priority at the moment).

DEP-11 integration

There will be a bigger announcement on AppStream and DEP-11 in the next days, so I keep this terse: Tanglu will go for full AppStream integration with the next release, which means no more packaged metadata, but data placed directly in the archive. Work on this has started in Tanglu, but I needed to get back to the drawing board with it, to incorporate some new ideas for using synergies with Debian on generating the DEP-11 metadata.

Phabricator

Phabricator has been integrated well into our infrastructure, but there are still some pending tasks. E.g. we need subprojects in Phabricator, and a more powerful Conduit interface. Those are upstream bugs on Phabricator, and are actively being worked on.

As soon as the missing features are available in Phabricator, we will also pursue the integration of Tanglu bug information with the Debian DistroTracker, which was discussed at DebConf this summer.

UEFI support

UEFI support is a tricky beast. Full UEFI support is a release goal for Tanglu 4 (so we won’t release without it). At the moment, our Live-CDs start on pure EFI systems, but there are several reported issues with the Calamares live-installer as well as the Debian-Installer, which fails to install GRUB correctly.

Work on resolving these problems is ongoing.

Major LiveCD rework

Tanglu 4 will ship with improved live CDs, which e.g. allow selecting the preferred locale early in the bootloader. For managing our live sessions, we switched from live-boot to casper, the same tool which is also used in Ubuntu. Casper fixed a few issues we had and brought some new ones, but overall using it was a good choice, and work on making top-notch live CDs is progressing well.

KDE Plasma

Integration of the latest Plasma release is progressing, but its speed has slowed down since fewer people are working on it. If you want to help Tanglu’s KDE Workspace integration, please help!

For the upcoming release, the Plasma 5 packages which we created based on a collaboration with Kubuntu for the previous Tanglu 3 release have been merged with their Debian counterparts. This fortunately was possible without major problems. Now Tanglu is (mostly) using the same Plasma 5 packages as Debian again (lots of kudos go to the Kubuntu and Debian Qt/KDE packagers!).

GNOME

The same re-merge with Debian has been done on Tanglu’s GNOME flavor (Tanglu also shipped with a more recent GNOME release than Debian in Tanglu 3). So Tanglu 4 GNOME and Debian Testing GNOME are at (almost) the same level.

Unfortunately, the GNOME team is heavily understaffed – a GNOME team member left at the beginning of the year for personal reasons, and the team now only has one semi-active member and me.

Other

An fvwm-nightshade spin is being worked on 🙂 . Apart from that, there are no teams maintaining other flavors in Tanglu.

Security & Stable maintenance

Thanks to several awesome people, the current Tanglu Stable release (Tanglu 3 “Chromodoris”) receives regular updates, even for many packages which are not in the “fully supported” set.

Tangluverse Tasks

Tasks (not) done in the global Tanglu universe.

HTTPS for community sites

Thanks to our participation in the Let’s Encrypt closed beta program, Tanglu websites like the user forums and bugtracker have been fully encrypted for a while, which should make submitting data to these sites much more secure. So far, we haven’t encountered any issues, which means that we will likely aim to get HTTPS encryption enabled on every Tanglu website.

Tanglu.org website

The Tanglu main website didn’t receive much love at all. It could use a facelift and more importantly updated content about Tanglu.

So far, nobody volunteered to update the website, so this task is still open. It is, however, a high-priority task to increase our public visibility as a project, so updating the website is definitely something which will be done soon.

Promotion

It doesn’t hurt to think about how to sell Tanglu as “brand”: What is Tanglu? Our motivation, our goals? What is our slogan? How can we communicate all of this to new users, without having them to read long explanations? (e.g. the Fedora Project has this nicely covered on their main website, and their slogan “Freedom, Friends, Features, First” immediately communicates the project’s key values)

Those are issues engineers don’t think about often, still it is important to present Tanglu in a way that is easy to grasp for people hearing about it for the first time. So far, no people are working on this task specifically, although it regularly comes up on IRC.

Sponsoring & Government

Tanglu is – by design – not backed by any corporation. However, we still need to pay for our servers. Currently, Tanglu is supported by three sponsors, which basically provide the majority of our infrastructure. Also, developers provide additional machines to get Tanglu running and/or packages built.

Still, this dependency on very few sponsors paying the majority of the costs is very bad, since it makes Tanglu vulnerable in case one sponsor decides to end sponsorship. We have no financial backing to be able to e.g. continue to pay for an important server in case its sponsor decides to drop sponsorship.

Also, since Tanglu is no legal entity, accepting donations is hard for us at time, since a private person would need to accept the money on behalf of Tanglu. So we can’t easily make donating to Tanglu possible (at least not in the way we want donations to be).

These issues are currently being worked on, there are a couple of possible solutions on the table. I will write about details as soon as it makes sense to go public with them.

 

In general, I am very happy with the Tanglu community: The developer community is a pleasure to work with, and interactions with our users on the forums or via the bugtracker are highly productive and friendly. Although the development speed has slowed down a bit, Tanglu is still an active and awesome project, and I am looking forward to the cool stuff we will do in future!


September 11, 2015

The Limba project does not only have the goal of allowing developers to deploy their applications directly on multiple Linux distributions while reducing duplication of shared resources; it should also make it easy for developers to build software for Limba.

Limba is worth nothing without good tooling to make it fun to use. That’s why I am working on that too, and I want to share some ideas of how things could work in future and which services I would like to have running. I will also show what is working today already (and that’s quite something!). This time I look at things from a developer’s perspective (since the last posts on Limba were more end-user centric). If you read on, you will also find a nice video of the developer workflow 😉

1. Creating metadata and building the software

To make building Limba packages as simple as possible, Limba reuses already existing metadata, like AppStream metadata, to find information about the software you want to create your package for.

To ensure upstreams can build their software in a clean environment, Limba makes using one as simple as possible: The limba-build CLI tool creates a clean chroot environment quickly by using an environment created by debootstrap (or a comparable tool suitable for the Linux distribution), and then using OverlayFS to have all changes to the environment done during the build process land in a separate directory.

To define build instructions, limba-build uses the same YAML format that Travis CI uses for continuous integration. So there is a chance this data is already present as well (and if not, it’s trivial to write).
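
For illustration, a hypothetical recipe in that Travis-like subset could look like the following (the key names mirror Travis CI conventions; the exact keys limba-build accepts may differ, so treat this purely as a sketch):

before_script:
  - cmake -DCMAKE_INSTALL_PREFIX=/opt/bundle .
script:
  - make
  - make install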

In case upstream projects don’t want to use these tools, e.g. because they already have well-working CI, all commands needed to build a Limba package can be called individually as well (ideally, building a Limba package is just one call to lipkgen).

I am currently planning “DeveloperIPK” packages containing resources needed to develop against another Limba package. With that in place and integrated with the automatic build-environment creation, upstream developers can be sure the application they just built is built against the right libraries as present in the package they depend on. The build tool could even fetch the build-dependencies automatically from a central repository.

2. Uploading the software to a repository

While everyone can set up their own Limba repository, and the limba-build repo command will help with that, there are lots of benefits in having a central place where upstream developers can upload their software.

I am currently developing a service like that, called “LimbaHub”. LimbaHub will contain different repositories distributors can make available to their users by default, e.g. there will be one with only free software, and one for proprietary software. It will also later allow upstreams to create private repositories, e.g. for beta-releases.

3. Security in LimbaHub

Every Limba package is signed with the key of its creator anyway, so in order to get a package into LimbaHub, one needs to get their OpenPGP key accepted by the service first.

Additionally, the Hub service works with a per-package permission system. This means I can e.g. allow the Mozilla release team members to upload a package with the component-ID “org.mozilla.firefox.desktop” or even allow those user(s) to “own” the whole org.mozilla.* namespace.

This should prevent people hijacking other people’s uploads accidentally or on purpose.
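
LimbaHub itself is written in Python/Flask, but the matching rule is simple enough to sketch in Qt/C++ for this page; this is purely illustrative and not LimbaHub code (the function name and the namespace-grant convention are assumptions):

#include <QString>

// Match an uploaded component-ID against a grant, which is either an
// exact ID like "org.mozilla.firefox.desktop" or a namespace grant
// like "org.mozilla.*".
bool grantMatches(const QString &grant, const QString &componentId)
{
    if (grant.endsWith(QLatin1String(".*")))
        return componentId.startsWith(grant.left(grant.size() - 1)); // keeps the trailing dot
    return grant == componentId;
}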

4. QA with LimbaHub

LimbaHub should also act as a guardian over ABI stability and the general quality of the software. We could for example warn upstreams that they broke ABI without declaring that in the package information, or even reject the package then. We could validate .desktop files and AppStream metadata, or even check if a package was built using hardening flags.

This should help both developers to improve their software as well as users who benefit from that effort. In case something really bad gets submitted to LimbaHub, we always have the ability to remove the package from the repositories as a last resort (which might trigger Limba to issue a warning for the user that he won’t receive updates anymore).

What works

Limba, LimbaHub and the tools around them are developing nicely; no big issues have been encountered so far.

That’s why I made a video showing how Limba and LimbaHub currently work together:

Still, there is a lot of room for improvement – Limba has not yet received enough testing, and LimbaHub is merely a proof-of-concept at the moment. Also, lots of high-priority features are not yet implemented.

LimbaHub and Limba need help!

At the moment I am developing LimbaHub and Limba alone, with only occasional contributions from others (which are amazing and highly welcome!). So, if you like Python and Flask and want to help developing LimbaHub, please contact me – the LimbaHub software could benefit from a more experienced Python web developer than me 😉 (and maybe having a designer look over the frontend later makes sense as well). If you are not afraid of C and GLib, and like to chase bugs or play with building Limba packages, consider helping Limba development 🙂

September 09, 2015

Cutelyst, a Qt web framework, just got a new release. This version took a little longer, but now I think it has evolved enough for a new version.

The most important improvements of this release were the new JSON view, which is an easy way to return a JSON response without worrying about including QJson classes (all the magic is done by QJsonObject::fromVariantMap()), and the component splitting that helps us reach API and ABI stabilization.
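
As a minimal sketch of what using it can look like (an illustrative action; I am assuming the stash accessors look like this, and how the JSON view gets selected is left out – check the API docs for the exact calls):

C_ATTR(status, :Local)
void status(Context *c)
{
    // plain QVariant values on the stash; the JSON view serializes the
    // stash to the response body via QJsonObject::fromVariantMap()
    c->setStash(QStringLiteral("status"), QStringLiteral("ok"));
    c->setStash(QStringLiteral("release"), QStringLiteral("0.10.0"));
}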

The Cutelyst::Core target has all the core functionality that a simple application needs; it’s the foundation of the framework – Context, Request and Response are all core classes that one cannot live without. Then you can add the Cutelyst::Plugin::Session target for session handling functionality; if you don’t like it, you can create your own session handling while still using the Cutelyst::Core functionality. Action classes were also split out of Core and are automatically loaded, so you only ship the code you really use: ActionClass(‘REST’) will load the ActionREST plugin, or fail the application if it could not be loaded.
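
In CMake terms, picking components could look roughly like this (the target names are the ones mentioned above; the find_package() call and the app sources are assumptions):

find_package(Cutelyst REQUIRED)

add_executable(myapp main.cpp root.cpp)
# link only the Cutelyst pieces the application actually uses
target_link_libraries(myapp
    Cutelyst::Core
    Cutelyst::Plugin::Session
)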

Here is a more complete version of the changes:

  • Reduce CMake requirement to cmake 2.8.12
  • Use some Qt compile definitions to have more strict building
  • Add a JSON view which gets stash data and returns its data as JSON
  • Split out components, Core, View::Grantlee, View::JSON…
  • Make special Action classes loadable plugins ActionREST
  • Fix build on OSX, with a working Travis CI
  • Fix memory alignment of the uWSGI plugin
  • Make Authentication methods static for convenience
  • Expose engine threads/cores information
  • Align memory of some classes
  • Fix compiling on Qt 5.5
  • Rework Session handling and extend it to match Catalyst::Plugin::Session
  • Now Controllers, Views, Plugins and DispatchTypes get automatically registered if they are children of Cutelyst::Application
  • Fix Session cookie expiration
  • Require Qt 5.4 due to QString::splitRef()
  • Add some more documentation
  • Removed ViewEngine class

The 0.11.0 roadmap:

  • Add unit tests – yes, this is the most important thing I want to add to Cutelyst right now; having a stable API and ABI means nothing if internal logic changes and your app stops working the way it should on a new Cutelyst release
  • Depend on Qt 5.5
  • Declarative way of describing application (QML)
  • Some sort of Catalyst::Plugin::I18N for localization support

Have fun https://github.com/cutelyst/cutelyst/archive/r0.10.0.tar.gz!


September 07, 2015

Piwik told me that people are still sharing my post about the state of GNOME-Software and update notifications in Debian Jessie.

So I thought it might be useful to publish a small update on that matter:

  • If you are using GNOME or KDE Plasma with Debian 8 (Jessie), everything is fine – you will receive update notifications through GNOME-Shell (via g-s-d) or Apper, respectively. You can perform updates on GNOME with GNOME-PackageKit and on KDE Plasma with Apper.
  • If you are using a desktop-environment not supporting PackageKit directly – for example Xfce, which previously relied on external tools – you might want to try pk-update-icon from jessie-backports. The small GTK+ tool will notify about updates and install them via GNOME-PackageKit, basically doing what GNOME-PackageKit did by itself before the functionality was moved into GNOME-Software.
  • For Debian Stretch, the upcoming release of Debian, we will have gnome-software ready and fully working. However, one of the design decisions of upstream is to only allow offline-updates (= download updates in the background, install on request at next reboot) with GNOME-Software. In case you don’t want to use that, GNOME-PackageKit will still be available, and so are of course all the CLI tools.
  • For KDE Plasma 5 on Debian 9 (Stretch), a nice AppStream based software management solution with a user-friendly updater is also planned and being developed upstream. More information on that will come when there’s something ready to show ;-).

I hope that clarifies things a little. Have fun using Debian 8!


UPDATE:

It appears that many people have problems with getting update notifications in GNOME on Jessie. If you are affected by this, please try the following:

  1. Open dconf-editor and navigate to org.gnome.settings-daemon.plugins.updates. Check if the key active is set to true
  2. If that doesn’t help, also check if at org.gnome.settings-daemon.plugins.updates the frequency-refresh-cache value is set to a sane value (e.g. 86400)
  3. Consider increasing the priority value (if it isn’t at 300 already)
  4. If all of that doesn’t help: I guess some internal logic in g-s-d is preventing a cache refresh then (e.g. because it thinks it is on an expensive network connection and therefore doesn’t refresh the cache automatically, or thinks it is running on battery). This is a bug. If that still happens on Debian Stretch with GNOME-Software, please report a bug against the gnome-software package. As a workaround for Jessie you can enable unconditional cache refreshing via the APT cronjob by installing apt-config-auto-update.
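
If you prefer the terminal over dconf-editor, the same keys can be inspected and changed with gsettings (schema and key names as in the steps above; the values are just examples):

gsettings get org.gnome.settings-daemon.plugins.updates active
gsettings set org.gnome.settings-daemon.plugins.updates active true
gsettings set org.gnome.settings-daemon.plugins.updates frequency-refresh-cache 86400
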
August 11, 2015

One of the things we discussed at this year’s Akademy conference is making AppStream work on Kubuntu.

On Debian-based systems, we use a YAML-based implementation of AppStream, called “DEP-11”. DEP-11 exists for historical reasons (the DEP-11 YAML format was once a superset of AppStream) and because YAML, unlike XML, is an accepted file format for the Debian FTPMasters team, who we want to have on board when adding support for AppStream.

So I’ve spent the last few days on setting up the DEP-11 generator for Kubuntu, as well as improving it greatly to produce more meaningful error messages and to generate better output. It became a bit slower in the process, but the greatly improved diagnostics data is worth it.

For example, maintainers of Debian packages will now get a more verbose explanation of issues found with metadata in their packages, making them easier to fix for people who didn’t get in contact with AppStream yet.

At the moment, we generate AppStream metadata for Tanglu, Kubuntu and Debian, but so far only Tanglu makes real use of it by shipping it in a .deb package. Shipping the data as a package is only a workaround though; for a proper implementation, the data will be downloaded by Apt. To achieve that, the data needs to reach the archive first, which is something that I can hopefully discuss and implement with the FTPMasters team of Debian at this year’s Debconf.

When this is done, the new metadata will automatically become available in tools like GNOME-Software or Muon Discover.

How can I see if there are issues with my package?

The dep11-generator tool will return HTML pages showing both the extracted metadata and any issues with it.

You can find the information for the respective distribution here:

Each issue tag will contain a detailed explanation of what went wrong. Errors generally lead to ignoring the metadata, so it will not be processed. Warnings usually concern things which might reduce the amount of metadata or make it less useful, while Info-type hints contain information on how to improve the metadata or make it more useful.

Can I use the data already?

Yes, you can. You just need to place the compressed YAML files in /var/cache/app-info/yaml and the icons in /var/cache/app-info/icons/<suite>-<component>/<size>, for example: /var/cache/app-info/icons/jessie-amd64/64x64
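
A sketch of what that could look like on a Jessie system (the file and icon names are made up; only the target paths follow the scheme above):

sudo cp Components-amd64.yml.gz /var/cache/app-info/yaml/
sudo mkdir -p /var/cache/app-info/icons/jessie-amd64/64x64
sudo cp my-icons/*.png /var/cache/app-info/icons/jessie-amd64/64x64/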

I think I found a bug in the generator…

In that case, please report the issue against the appstream-dep11 package in Debian, or file an issue on GitHub.

The only reason why I announce this feature now is to find remaining generator bugs, before officially announcing the feature on debian-devel-announce.

When will this be officially announced?

I want to give this feature a little bit more testing, and ideally have the integration into the archive ready, so people can see how the metadata looks when rendered in GNOME-Software/Discover. I also want to write a bit more documentation to help Debian developers and upstreams improve their metadata.

Ideally, I also want to incorporate some feedback at Debconf when announcing full AppStream support in Debian. So take all the stuff you’ve read above as a little sneak peek 😉

I will also give a talk at Debconf, titled “AppStream, Limba, XdgApp – Where we are going.” The aim of this talk is to give an insight into the new developments happening in the software distribution area, and what the goal of these different projects is (the talk should give an overview of what’s in the oven and how it will impact Debian). So if you are interested, please drop by 🙂 Maybe setting up a BOF would also be a good idea.

August 10, 2015

I am very late with this, but I still wanted to write a few words about Akademy 2015.

First of all: It was an awesome conference! Meeting all the great people involved with KDE and seeing who I am working with (as in: face to face, not via email) was awesome. We had some very interesting discussions on a wide variety of topics, and also enjoyed quite some beer together. Particularly important to me was of course talking to Daniel Vrátil who is working on XdgApp, and Aleix Pol of Muon Discover fame.

Also meeting with the other Kubuntu members was awesome – I haven’t seen some of them for about 3 years, and also met many cool people for the first time.

My talk on AppStream/Limba went well, except that I got slightly confused by the timer showing that I had only 2 minutes left, after I had just completed the first half of my talk. It turned out that the timer was wrong 😉

Another really nice aspect was being able to get an insight into areas I am usually not involved with, like visual design. It was really interesting to learn about the great work others are doing and to talk to people about their work – and I also managed to scratch an itch in the Systemsettings application, where three categories had shown the same icon. Now Systemsettings looks the way it is supposed to, finally 🙂

The only thing I could maybe complain about was the weather, which was more Scotland/Wales like than Spanish – but that didn’t stop us at all, not even at the social event outside. So I actually don’t complain 😉

We also managed to discuss some new technical stuff, like AppStream for Kubuntu, and plenty of other things that I’ll write about in a separate blog post.

Generally, I got so many new impressions from this year’s Akademy that I could write a 10-page-long blogpost about it while still having to leave things out.

Kudos to the organizers of this Akademy, you did a fantastic job! I also want to thank the Ubuntu community for funding my trip, and the Kubuntu community for pushing me a little to attend :-).

KDE is a great community with people driving the development of Free Software forward. The diversity of people, projects and ideas in KDE is a pleasure, and I am very happy to be part of this community.

(Image: Akademy 2015.)

June 18, 2015

Cutelyst the Qt Web framework just got a new release!

I was planning to do this one after Qt 5.5 was out due to the new qCInfo() logger and Q_GADGET introspection, but I’ll save that for 0.10.0.

This release brings some important API breaks, so if everything goes well I can do another CMlyst release.

  • Some small performance improvements with QStringRef usage and pre-cached _DISPATCH actions
  • Fixes when using uWSGI in threaded mode
  • Speedup uWSGI threaded mode, still slower than pre-forked but quite close
  • Fixes for the session handling as well as having static methods to make the code cleaner
  • Fix uWSGI --lazy-apps mode
  • Use PBKDF2 on Authentication plugin
  • Require CMake 3.0 and its policies
  • Add Request::mangleParams() and Request::uriWith() so that Request API now covers all of Catalyst’s
  • Documented some functions

The Cutelyst repository was moved to GitLab and then moved to GitHub. The main website http://cutelyst.org is now powered by CMlyst and Cutelyst itself 🙂

Download here.

I also started another project: html-qt is a library to create a DOM representation of HTML following the WHATWG specification. It’s needed for CMlyst to be able to remove XSS from published posts. At the moment it has 90% of the tokenizer ready and 5% of the parser/tree construction code, so if you have an interest in the subject (it could potentially be useful for mail clients such as KMail), give me a hand!


April 28, 2015

Cutelyst the Qt Web Framework just got a new release!

With 0.8.0 some important bugs got fixed, which will allow the CMS/blogging app CMlyst to get a new release soon. Besides the bug fixes, a class to extract stats from the application was created: Cutelyst::Stats matches what Catalyst::Stats does, and produces a similar output:

(Screenshot: Cutelyst::Stats output.)

  • There you have a box with the time each action took to complete; looking at the /End action it is clear that it took more time, but most of that time was due to it calling the Grantlee view.
  • On the “Request took: …” line there is the 1278.772/s value, which is how many requests could be handled within a second – of course, that value is for the exact same request and for the single core where it was run.
  • The first line now shows which HTTP method was used, which resource the user wants and where the request came from.

All of that can and should be disabled in production by setting cutelyst.*.debug=false in the [Rules] section of the application ini file (the --ini file of uWSGI). Yeah, this is new too 🙂
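
A minimal sketch of the relevant section in such an ini file (only the [Rules] part is shown; the rest of the file is whatever you already pass to uWSGI):

[Rules]
; silence all Cutelyst debug output in production
cutelyst.*.debug=false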

Speaking of debugging, we now have more information about what user data was sent; if the cutelyst.request.debug logging category is enabled you will see:

(Screenshot: request body debug output.) A similar output is printed for URL query parameters and file uploads.

HTTP/1.1 support for chunked responses was added: once c->response()->write(…) is called, the response headers are sent and a chunk is sent to the client. This is new and experimental, but worked without complaints from web browsers.
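
A rough sketch of how that looks from a controller action (the action itself is made up; write() is the call described above):

C_ATTR(stream, :Local)
void stream(Context *c)
{
    // the first write() sends the response headers; each call is then
    // delivered to the client as one HTTP/1.1 chunk
    c->response()->write("chunk one\n");
    c->response()->write("chunk two\n");
}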

An easier way to declare action arguments or capture arguments was implemented: :AutoArgs and :AutoCaptureArgs. Both are not present in Perl Catalyst because there is no way to introspect a method’s signature in that language – @_ can provide many things, but with Qt meta information we can see which types of arguments a method has as well as how many:

C_ATTR(some_method, :Local :AutoArgs)
void some_method(Context *c, const QString &year, const QString &month);

That will automatically capture two arguments, matching a URL like “/some_method/2014/12”.

Last, the Headers class got many fixes and improvements, such as stripping the fragment as required by RFC 2616.

As always, if you have questions, show up in #cutelyst on freenode or send me a mail.

Download 0.8.0 here

Enjoy!


March 30, 2015

And once again, it’s time for another Limba blogpost 🙂

Limba is a solution to install 3rd-party software on Linux, without interfering with the distribution’s native package manager. It can be useful to try out different software versions, use newer software on a stable OS release or simply to obtain software which does not yet exist for your distribution.

Limba works distribution-independent, so software authors only need to publish their software once for all Linux distributions.

I recently released version 0.4, with which all of the most important features you would expect from a software manager are complete. This includes installing & removing packages, GPG-signing of packages, package repositories, package updates etc. Using Limba is still a bit rough, but most things work pretty well already.

So, it’s time for another progress report. Since a FAQ-like list is easier to digest compared to a long blogpost, I’ll go with that again. Let’s address one important general question first:

How does Limba relate to the GNOME Sandboxing approach?

(If you don’t know about GNOME’s sandboxes, take a look at the GNOME Wiki – Alexander Larsson also blogged about it recently)

First of all: There is no rivalry here and no NIH syndrome involved. Limba and GNOME’s Sandboxes (XdgApp) are different concepts, which both have their place.

The main difference between the two projects is the handling of runtimes. A runtime is the set of shared libraries and other shared resources applications use. This includes libraries like GTK+/Qt5/SDL/libpulse etc. XdgApp applications have one big runtime they can use, built with OSTree. This runtime is static and will not change; it will only receive critical security updates. A runtime in XdgApp is provided by a vendor like GNOME as a compilation of multiple single libraries.

Limba, on the other hand, generates runtimes on the target system on-the-fly out of several subcomponents with dependency-relations between them. Each component can be updated independently, as long as the dependencies are satisfied. The individual components are intended to be provided by the respective upstream projects.

Both projects have their individual upsides and downsides: While the static runtime of XdgApp makes testing simple, it is also harder to extend and more difficult to update. If something you need is not provided by the mega-runtime, you will have to provide it yourself (e.g. we will have some applications ship smaller shared libraries with their binaries, as they are not part of the big runtime).

Limba does not have this issue, but instead, with its dynamic runtimes, relies on upstreams behaving nicely and not breaking ABIs in security updates, so existing applications continue to work even with newer software components.

Obviously, I like the Limba approach more, since it is incredibly flexible and even allows mimicking the behaviour of GNOME’s XdgApp by using absolute dependencies on components.

Do you have an example of a Limba-distributed application?

Yes! I recently created a set of packages for Neverball – Alexander Larsson also created an XdgApp bundle for it, and due to the low amount of stuff Neverball depends on, it was a perfect test subject.

One of the main things I want to achieve with Limba is to integrate it well with continuous integration systems, so you can automatically get a Limba package built for your application and have it tested with the current set of dependencies. Also, building packages should be very easy, and as failsafe as possible.

You can find the current Neverball test in the Limba-Neverball repository on Github. All you need (after installing Limba and the build dependencies of all components) is to run the make_all.sh script.

Later, I also want to provide helper tools to automatically build the software in a chroot environment, and to allow building against the exact version depended on in the Limba package.

Creating a Limba package is trivial: it boils down to writing a simple “control” file describing the dependencies of the package, and writing an AppStream metadata file. If you feel adventurous, you can also add automatic build instructions as a YAML file (which uses a subset of the Travis build config schema).

This is the Neverball Limba package, built on Tanglu 3, run on Fedora 21:

(Screenshot: Limba-installed Neverball.)

Which kernel do I need to run Limba?

The Limba build tools run on any Linux version, but to run applications installed with Limba, you need at least Linux 3.18 (for Limba 0.4.2). I plan to bump the minimum version requirement to Linux 4.0+ very soon, since this release contains some improvements in OverlayFS and a few other kernel features I am thinking about making use of.

Linux 3.18 is included in most Linux distributions released in 2015 (and of course any rolling release distribution and Fedora have it).

Building all these little Limba packages and keeping them up-to-date is annoying…

Yes indeed. I expect that we will see some “bigger” Limba packages bundling a few dependencies, but in general this is a pretty annoying property of Limba currently, since there are so few packages available you can reuse. But I plan to address this. Behind the scenes, I am working on a webservice, which will allow developers to upload Limba packages.

This central resource can then be used by other developers to obtain dependencies. We can also perform some QA on the received packages, map the available software against CVE databases to see if a component is vulnerable and publish that information, etc.

All of this is currently planned, and I can’t say a lot more yet. Stay tuned! (As always: If you want to help, please contact me)

Are the Limba interfaces stable? Can I use it already?

The Limba package format should be stable by now – since Limba is still alpha software, I will, however, make breaking changes in case there is a huge flaw which makes it reasonable to break the IPK package format. I don’t think that this will happen though, as the Limba packages are designed to be easily backward- and forward-compatible.

For the Limba repository format, I might make some more changes though (less invasive, but you might need to rebuild the repository).

tl;dr: Yes! Please use Limba and report bugs, but keep in mind that Limba is still in an early stage of development, and we need bug reports!

Will there be integration into GNOME-Software and Muon?

From the GNOME-Software side, there were positive signals about that, but some technical obstacles need to be resolved first. I have not yet gotten in contact with the Muon crew – they are just implementing AppStream, which is a prerequisite for having any support for Limba[1].

Since PackageKit dropped support for plugins, every software manager needs to implement support for Limba itself.


So, thanks for reading this (again too long) blogpost 🙂 There are some more exciting things coming soon, especially regarding AppStream on Debian/Ubuntu!


[1]: And I should actually help with the AppStream support, but currently I cannot allocate enough time to take on that additional project as well – this might change in a few weeks. Also, Muon does pretty well already!

March 23, 2015

Cutelyst the Qt/C++ web framework just got a new release: 0.7.0 brings many improvements, bugfixes and features.

Most notable of the changes is a shiny new Chained dispatcher that finally got implemented; with it, the Cutelyst tutorial got finished, and there is a new command line tool to bootstrap new projects.

* Request::bodyData() can return a QJsonDocument if the content-type is application/json
* Chained dispatcher
* Fixes on QStringLiteral and QLatin1String usage
* More debug information
* Some API changes to match Catalyst’s
* Some rework on the API handling Plugins (not the best thing yet)
* Moved to GitLab (due to Gitorious being acquired by it)
* “cutelyst” command line tool

For the next release I’ll try to make the QML integration a reality, and try to improve the Plugins API, which is right now the part that needs more love – I would love to hear some advice.

Download it here.

Have fun!


February 12, 2015

Now that Cutelyst allows me to write web applications with the tools I like, I can use it to build the kind of web applications I need when I am not fine with using the existing ones…

WordPress is great software, but it’s written in PHP, which is not a language I like, and there is lots of other software like this written in all sorts of languages except C++/Qt – so why not Cutelyst?

This first release is a very shiny one: it allows for templating, static pages and posts (without comments), and even an RSS feed.

To make things more interesting, CMlyst has a separate library which abstracts the content management part, allowing for different backends to be written. The current one is based on QSettings files, but it’s very easy to write one for SQL, Git, who knows 😛

I have not measured its speed against WordPress or any other CMS – it probably wouldn’t be fair. But if you are curious, I have run it against weighttp and it gives me 4k req/s on a page view, 2.5k req/s on the last-posts listing and 6.5k on RSS feed delivery, all on a Core 2 Duo 2.4GHz, using ~10MB of RAM.

Give it a try!

Download here.


Cutelyst, the Qt/C++ web framework, just took another step toward API stabilization.

Since 0.3.0 I’ve been trying to get the most requests per second out of it, and because of that I decided to replace most QStrings with QByteArrays; the allocation call is indeed simpler in QByteArray, but since most of Qt uses QString for strings, it started to create a problem rather than solving one. Grantlee didn’t play nice with QByteArray, breaking ifequal, and in the end some implicit conversions from UTF-8 were triggered.

So in 0.6.0 I’ve replaced all QByteArray usage in favor of QString, and judging from the hello-world benchmarks the difference is really negligible.

Another big change I actually made today was to the Headers class that is used in Request and Response: the keys stored there were always camel-cased and split by dashes, but if you want to get, say, the User-Agent in a Grantlee template, that won’t work, as dashes break the variable name. So now all keys are lower case and words are separated by underscores, allowing for {{ ctx.request.headers.user_agent }} in Grantlee templates.

Some changes were also made to make it behave more closely to what Catalyst does, and a bunch of bugs from the last release were fixed.

On my TODO list I still need to:

  • Write a chained dispatcher to allow a more flexible way of dispatching.
  • Write tutorials – yeah, this is boring…
  • Write QML integration to allow for writing the application logic in QML too!

This last one would allow for something like this:

Application {
    Controller {
        Action {
            attributes: ":Path(hello)"
            onRequest: {
                ctx.response.body = "Hello World!"
            }
        }
    }
}

Download here

Have fun!


January 27, 2015

Yesterday I released version 0.8 of AppStream, the cross-distribution standard for software metadata that is currently used by GNOME-Software, Muon and Apper to display rich metadata about applications and other software components.

What’s new?

The new release contains some tweaks to AppStream’s documentation, and extends the specification with a few more tags and refinements. For example, we now recommend sizes for screenshots. The recommended sizes are the ones GNOME-Software already uses today, and it is a good idea to ship those to make software centers look great, as other SCs are planning to use them as well. Normal sizes as well as sizes for HiDPI displays are defined. This change affects only the distribution-generated data; the upstream metadata is unaffected by this (the distro-specific metadata generator will resize the screenshots anyway).

Another addition to the spec is the introduction of an optional <source_pkgname/> tag, which holds the source package name the packages defined in <pkgname/> tags are built from. This is mainly for internal use by the distributor, e.g. it can decide to use this information to link to internal resources (like bugtrackers, package-watch etc.). It may also be used by software-center applications as additional information to group software components.

Furthermore, we introduced a <bundle/> tag for future use with 3rd-party application installation solutions. The tag notifies a software-installer about the presence of a 3rd-party application bundle, and provides the necessary information on how to install it. In order to do that, the software-center needs to support the respective installation solution. Currently, the Limba project and Xdg-App bundles are supported. For software managers, it is a good idea to implement support for 3rd-party app installers as soon as the solutions are ready; currently, the projects are being worked on heavily. The new tag is already used by Limba, which is the reason why Limba depends on the latest AppStream release.
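
Put together, the two new tags could appear in the distribution metadata roughly like this (the component-ID, package names and bundle name are invented, and I am assuming the type attribute on <bundle/> – check the specification for the authoritative form):

<component type="desktop">
  <id>org.example.foobar.desktop</id>
  <pkgname>foobar</pkgname>
  <source_pkgname>foobar-src</source_pkgname>
  <bundle type="limba">foobar-1.0.2</bundle>
</component>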

How do I get it?

All AppStream libraries – libappstream, libappstream-qt and libappstream-glib – support the 0.8 specification in their latest version, so in case you are using one of these, you don’t need to do anything. For Debian, the DEP-11 spec is being updated at the moment, and the changes will land in the DEP-11 tools soon.

Improve your metadata!

This call goes especially to many KDE projects! Getting good data is partly a task for the distributor, since packaging issues can result in incorrect or broken data, screenshots need to be properly resized, etc. However, flawed upstream data can also prevent software from being shown, since software with broken or missing data will not be incorporated in the distribution’s AppStream XML data file.

Richard Hughes of Fedora has created a nice overview of software failing to be included. You can see the failed-list here – the data can be filtered by desktop environment etc. For KDE projects, a Comment= field is often missing in their .desktop files (or a <summary/> tag needs to be added to their AppStream upstream XML file). Keep in mind that you are not only helping Fedora by fixing these issues, but also all other distributions consuming the metadata you ship upstream.

For Debian, we will have a similar overview soon, since it is also a very helpful tool to find packaging issues.

If you want to get more information on how to improve your upstream metadata, and how new metadata should look like, take a look at the quickstart guide in the AppStream documentation.

December 02, 2014

Disclaimer: Limba is still in a very early stage of development. Bugs happen, and I give no guarantees on API stability yet.

Limba is a very simple cross-distro package installer, utilizing OverlayFS found in recent Linux kernels (>= 3.18).

As an example I created a small Limba package for one of the Qt5 demo applications, and I would like to share the process of creating Limba packages – it’s quite simple, and I could use some feedback on how well the resulting packages work on multiple distributions.

I assume that you have compiled Limba and installed it – how that is done is described in its README file. So, let’s start.

1. Prepare your application

The cool thing about Limba is that you don’t really have to make many changes to your application. There are a few things to pay attention to, though:

  • Ensure the binaries and data are installed into the right places in the directory hierarchy. Binaries must go to $prefix/bin, for example.
  • Ensure that configuration can be found under /etc as well as under $prefix/etc

This needs to be done so your application will find its data at runtime. Additionally, you need to write an AppStream metadata file, and find out which stuff your application depends on.

2. Create package metadata & install software

2.1 Basics

Now you can create the metadata necessary to build a Limba package. Just run

cd /path/to/my/project
lipkgen make-template

This will create a “lipkg” directory, containing a “control” file and a “metainfo.xml” file, which can be a symlink to the AppStream metadata, or be new metadata.

Now, configure your application with /opt/bundle as install prefix (-DCMAKE_INSTALL_PREFIX=/opt/bundle, --prefix=/opt/bundle, etc.) and install it to the lipkg/inst_target directory.
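
For a CMake-based project, that could boil down to something like this (a sketch using the standard DESTDIR convention; whether lipkgen expects the full prefix path inside inst_target is worth verifying against the Limba documentation):

cmake -DCMAKE_INSTALL_PREFIX=/opt/bundle .
make
make install DESTDIR=$(pwd)/lipkg/inst_target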

2.2 Handling dependencies

If your software has dependencies on other packages, just get the Limba packages for these dependencies, or build new ones. Then place the resulting IPK packages in the lipkg/repo directory. Ideally, you should be able to fetch Limba packages which contain the software components directly from their upstream developers.

Then, open the lipkg/control file and adjust the “Requires” line. The names of the components you depend on match their AppStream-IDs (<id/> tag in the AppStream XML document). Any version-relation (>=, >>, <<, <=, <>) is supported, and specified in brackets after the component-id.

The resulting control-file might look like this:

Format-Version: 1.0

Requires: Qt5Core (>= 5.3), Qt5DBus (>= 5.3), libpng12

If the specified dependencies are in the repo/ subdirectory, these packages will get installed automatically, if your application package is installed. Otherwise, Limba depends on the user to install these packages manually – there is no interaction with the distribution’s package-manager (yet?).

3. Building the package

In order to build your package, make sure the content in inst_target/ is up to date, then run

lipkgen build lipkg/

This will build your package and output it in the lipkg/ directory.

4. Testing the package

You can now test your package. Just run

sudo lipa install package.ipk

Your software should install successfully. If you provided a .desktop file in $prefix/share/applications, you should find your application in your desktop’s application menu. Otherwise, you can run a binary from the command line – just append the version of your package to the binary name (bash-completion helps). Alternatively, you can use the runapp command, which lets you run any binary in your bundle/package and is quite helpful for debugging (since the environment a Limba-installed application runs in is different from that of other applications).

Example:

runapp ${component_id}-${version}:/bin/binary-name

And that’s it! 🙂

I used these steps to create a Limba package for the OpenGL Qt5 demo on Tanglu 2 (Bartholomea), and tested it on Kubuntu 15.04 (Vivid) with KDE, as well as on an up-to-date Fedora 21, with GNOME and without any Qt or KDE stuff installed:

(Screenshots: the Qt5 demo installed via Limba, on Kubuntu and on Fedora.)

I encountered a few obstacles when building the packages, e.g. Qt5 initially didn’t find the right QPA plugin – that has been fixed by adjusting a config file in the Qt5Gui package. Also, on Fedora, a matching libpng was missing, so I included that as well.

You can find the packages at GitHub currently (but I am planning to move them to a different place soon). The biggest issue with Limba at the moment is that it needs Linux 3.18, or an older kernel with OverlayFS support compiled in. Apart from that and a few bugs, the experience is quite smooth. As soon as I am sure there are no hidden fundamental issues, I can think of implementing more features, like signing packages and automatically updating them.

Have fun playing around with Limba!

November 24, 2014

A bit more than one year after the initial commit, Cutelyst makes its 5th release.

It’s now powering 3 commercial applications; the last one recently got into production and is the most complex of them, making heavy use of Grantlee and Cutelyst capabilities.

Speaking of Grantlee: if you use it on Qt5 you will get hit by QTBUG-41469, which sadly doesn’t seem to get fixed in time for 5.4, but uWSGI can constrain your application’s resources so your server doesn’t go out of memory (worth the leak due to its usefulness).

Here is an overview since 0.4.0 release:

  • Remove hardcoded build setting of “Debug”, so that one can build with “Release”, increasing performance by up to 20% – https://gitorious.org/cutelyst/pages/CutelystPerformance
  • Request::uploads() API was changed to be useful in the real world, filling a QMap with the form field name as the key and keeping the proper order sent by the client
  • Introduced a new C_ATTR macro which allows having the same Perl attribute syntax, like C_ATTR(method_name, :Path(/foo/bar) :Args)
  • Added an Action class RoleACL which allows for doing authorization with access control lists, making it easy to deny access to some resources if a user doesn’t match the needed role
  • Added a RenderView class to make it easier to delegate the rendering to a view such as Grantlee
  • Request class is now a QObject so that we can use it in Grantlee as ctx.request.something
  • Make use of the uWSGI ini (--ini) configuration file to also configure the Cutelyst application
  • Better docs
  • As always some bugs were fixed

I’m very happy with the results – all those site performance tools like webpagetest give great scores for the apps – and I have started to work on translating the Catalyst tutorial to Cutelyst, but I realized that I need the Chained dispatcher working before that…

If you want to try it, I’ve made a hello-world app available today at https://gitorious.org/cutelyst/hello-world

Download here!


November 10, 2014

As some of you already know, since the larger restructuring in PackageKit for the 1.0 release, I am rethinking Listaller, the 3rd-party application installer for Linux systems, as well.

During the past weeks, I was playing around with a lot of different ideas and code, to make installations of 3rd-party software easily possible on Linux, but also working together with the distribution package manager. I now have come up with an experimental project, which might achieve this.

Motivation

Many of you know Lennart’s famous blogpost on how we put together Linux distributions. And he makes a lot of good and valid points there (in fact, I agree with his reasoning). The proposed solution, however, is not something which I am very excited about, at least not for the use-case of installing a simple application[1]. Leaving things like the exclusive dependency on technology like Btrfs aside, the solution outlined by Lennart basically bypasses the distribution itself, instead of working together with it. This results in a duplication of installed libraries, making it harder to keep an overview of which versions of which software components are actually running on the system. There is also a risk of security holes due to libraries not being updated. The security issues are worked around by a superior sandbox, which still needs to be implemented (but will definitely come soon, maybe next year).

I wanted to explore a different approach of managing 3rd-party applications on Linux systems, which allows sharing as much code as possible between applications.

Limba – Glick2 and Listaller concepts merged

In order to allow easy creation of software packages, as well as the ability to share software between different 3rd-party applications, I took heavy inspiration from Alexander Larsson’s Glick2 project, combining it with ideas from the application-directory based Listaller.

The result is Limba (named after the Limba tree, not the voodoo spirit – I needed some name starting with “li” to keep the prefix used in Listaller, and for a tool like this the name didn’t really matter 😉 ).

Limba uses OverlayFS to combine an application with its dependencies before running it, as well as mount namespaces and shared subtrees. Except for OverlayFS, which just landed in the kernel recently, all other kernel features needed by Limba have been available for years (and many distributions ship OverlayFS on older kernels as well).

How does it work?

In order to achieve separation of software, each software component is located in a separate container (= package). A software component can be an application, like Kate or GEdit, but can also be a single shared library (openssl) or even a full runtime (KDE Frameworks 5 parts, GNOME 3).

Each of these software components can be identified via AppStream metadata, which is just a few bits of XML. A Limba package can declare a dependency on any other software component. In case that software is available in the distribution’s repositories, the version found there can be used. Otherwise, another Limba package providing the software is required.

Limba packages can be provided from software repositories (e.g. provided by the distributor), or be nested in other packages. For example, imagine the software “Kate” requires a version of the Qt5 libraries, >= 5.2. The downloadable package for “Kate” can be bundled with that dependency, by including the “Qt5 5.2” Limba package in the “Kate” package. In case another software is installed later, which also requires the same version of Qt, the already installed version will be used.

Since the software components are located in separate directories under /opt/software, an application will not automatically find its dependencies, or be able to locate its own files. Therefore, each application has to be run by a special tool, which merges the directory trees of the main application and its dependencies together using OverlayFS. This has the nice side effect that the main application can override files from its dependencies, if necessary. The tool also sets up a new mount namespace, so if the application is compiled with a certain prefix, it does not need to be relocatable to find its data files.
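
Conceptually, the run tool does something along these lines (a heavily simplified sketch, not Limba’s actual code – the paths and component names are invented, and the real tool also sets up the mount namespace mentioned above):

#include <sys/mount.h>
#include <cstdio>

int main()
{
    // Read-only overlay: the application's tree is stacked on top of its
    // dependency's tree, so the app sees one merged directory hierarchy
    // (and its own files shadow those of the dependency).
    const char *opts =
        "lowerdir=/opt/software/org.example.app-1.0/data:"
        "/opt/software/org.example.dep-1.2/data";
    if (mount("overlay", "/run/limba/root", "overlay", MS_RDONLY, opts) != 0) {
        perror("mount overlay");
        return 1;
    }
    return 0;
}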

At installation time, certain files (e.g. the .desktop file) are split out of the installed directory tree, so the newly installed application achieves almost full system integration.

AQNAY*

Can I use Limba now?

Limba is an experiment. I like it very much, but it might happen that I find some issues with it and kill it off again. So, if you feel adventurous, you can compile the source code and use the example “Foobar” application to play around with Limba. Before it can be used in production (if at all), some more time is needed.

I will publish documentation on how to test the project soon.

Doesn’t OverlayFS have a maximum stacking depth?

Oh yes it has! The “How does it work” explanation doesn’t tell the whole truth in that regard (mainly to keep the section small). In fact, Limba will generate a “runtime” for the newly installed software, which is a directory with links to the actual individual software components the runtime consists of. The runtime is identified by a UUID. This runtime is then mounted together with the respective application using OverlayFS. This works pretty well, and also means no dependency resolution has to be done immediately before an application is started.

That dependency stuff gives me a headache…

Admittedly, allowing dependencies adds a whole lot of complexity. Other approaches, like the one outlined by Lennart, work around that (and there are good reasons for doing that as well).

In my opinion, the dependency-sharing and de-duplication of software components, as well as the ability to use the components which are packaged by your Linux distribution, are worth the extra effort.

Can you give an overview of future plans for Limba?

Sure, so here is the stuff which currently works:

  • Creating simple packages
  • Installing packages
  • Very basic dependency resolution (no relations (like >, <, =) are respected yet)
  • Running applications
  • Initial bits of system integration (.desktop files are registered)

These features are planned for the near future:

  • Support for removing software
  • Automatic software updates of 3rd-party software
  • Atomic updates
  • Better system integration
  • Integration with the new sandboxing features
  • GPG signing of packages
  • More documentation / bugfixes

Remember that Limba is an experiment, still 😉

XKCD 927

Technically, I am replacing one solution with another one here, so the situation does not change at all ;-). But indeed, some duplicate work is done due to more people working in this area now on similar questions.

But I think this is a good thing, because the solutions worked on are fundamentally different approaches, and by exploring multiple ways of doing things, we will come up with something great in the end. (XKCD reference)

Doesn’t the use of OverlayFS have an impact on the performance of software running with Limba?

I ran some synthetic benchmarks and didn’t notice any problems – even the startup speed of Limba applications is only a few milliseconds slower than the startup of the “raw” native application. However, I will still have to run further tests to give a definitive answer on this.

How do you solve ABI compatibility issues?

This approach requires software to keep its ABI stable. But since software can have strict dependencies on a specific version of another component (although I’d discourage that), even people who are worried about this issue can be happy. We are getting much better at tracking unwanted ABI breaks, and larger projects offer a stable API/ABI during a major release cycle. For smaller dependencies, there are, as explained above, stricter dependencies.

In summary, I don’t think ABI incompatibilities will be a problem with this approach – at least not more than they have been in general. (The libuild facilities from Listaller to minimize dependencies will still be present in Limba, of course)

You are wrong because of $X!

Please leave a comment in this case! I’d love to discuss new ideas and find the limits of the Limba concept – that’s why I am writing C code after all, since what looks great on paper might not work in reality or have issues one hasn’t thought about before. So any input is welcome!

Conclusion

Last but not least I want to thank Alexander Larsson for writing Glick2, which Limba is heavily inspired from, and for his patient replies to my emails.

If Limba turns out to be a good idea, you can expect a few more blog posts about it soon.


* Answered questions nobody asked yet

[1]: Don’t get me wrong, I would like to have these ideas implemented – they offer great value. But I think for “simple” software deployment, the solution is overkill.

November 06, 2014

… or “Why do I not see any update notifications on my brand-new Debian Jessie installation??”

This is a short explanation of the status quo, and also explains the “no update notifications” issue in a slightly more detailed way, since I am already getting bug reports for that.

As you might know, GNOME provides GNOME-Software for installation of applications via PackageKit. In order to work properly, GNOME-Software needs AppStream metadata, which is not yet available in Debian. There was a GSoC student working on the necessary code for that, but the code is not ready yet and doesn’t produce good results so far. Therefore, I postponed AppStream integration to Jessie+1, with an option to include some metadata for GNOME and KDE to use via a normal .deb package.

Then, GNOME was updated to 3.14. GNOME 3.14 moved lots of stuff into GNOME-Software, including the support for update notifications (which had been in g-s-d before). GNOME-Software is also the only thing which can edit the application groups in GNOME-Shell, at least currently.

So obviously, there was now a much stronger motivation to support GNOME-Software in Jessie. The appstream-glib library, which GNOME-Software uses exclusively to read AppStream metadata, didn’t support the DEP-11 metadata format which Debian uses in place of the original AppStream XML for a while, but does so in its current development branch. So that component had to be packaged first. Later, GNOME-Software was uploaded to the archive as well, but still lacked the required metadata. That data was provided by me as a .deb package later, locally generated using the current code by my SoC student (the data isn’t great, but better than nothing). So far with the good news.

But there are multiple issues at the moment. First of all, the appstream-data package didn’t pass NEW so far, due to its complex copyright situation (nothing we can’t resolve, since app-install-data, which appstream-data would replace, is in Debian as well). Also, GNOME-Software is exclusively using offline-updates (more information also on [1] and [2]) at the moment. This isn’t always working right now, since I haven’t had the time to test it properly – and I didn’t expect it to be used in Debian Jessie as well[3].

Furthermore, the offline-updates feature requires systemd (which isn’t an issue in itself, I am quite fine with that, but people not using it will get unexpected results, unless someone does the work to implement offline-updates with sysvinit).

Since we are in freeze at the moment, and obviously this stuff is not ready yet, GNOME is currently without update notifications and without a way to change the shell application groups.

So, how can we fix this? One way would of course be to patch notification support back into g-s-d, if the new layout there allows doing that. But that would not give us the other features GNOME-Software provides, like application-group-editing.

Implementing that differently and patching it to make it work would be more work than, or at least the same amount of work as, making GNOME-Software run properly. I therefore prefer getting GNOME-Software to run, at least with basic functionality. That would likely mean hiding things like the offline-update functionality, and using online-updates with GNOME-PackageKit instead.

Obviously, this approach has its own issues, like doing most of the work post-freeze, which kind of defeats the purpose of the freeze and would need some close coordination with the release team.

So, this is the status quo right now. It is kind of unfortunate that GNOME moved crucial functionality into a new component which requires additional integration work by the distributors so quickly, but that’s not worth talking about now. We need a way forward to bring update notifications back, and there is currently work going on to do that. For all Debian users: Please be patient while we resolve the situation, and sorry for the inconvenience. For all developers: If you would like to help, please contact me or Laurent Bigonville, there are some tasks which could use some help.

As a small remark: If you are using KDE, you are lucky – Apper provides the notification support like it always did, and thanks to improvements in aptcc and PackageKit, it even is a bit faster now. For the Xfce and <other_desktop> people, you need to check if your desktop provides integration with PackageKit for update-checking. At least Xfce doesn’t, but after GNOME-PackageKit removed support for it (which was moved to gnome-settings-daemon and now to GNOME-Software) nobody stepped up to implement it yet (so if you want to do it – it’s not super-complicated, but knowledge of C and GTK+ is needed).

----

[3]: It looks like dpkg tries to ask a debconf question for some reason, or an external tool like apt-listchanges is interfering with the process, which must run completely unsupervised. There is some debugging needed to resolve these Debian-specific issues.

October 10, 2014

As you might know, due to invasive changes in PackageKit, I am currently rewriting the 3rd-party application installer Listaller. Since I am not the only one looking at the 3rd-party app-installation issue (there is a larger effort going on at GNOME, based on Lennart’s ideas), it makes sense to redesign some concepts of Listaller.

Currently, dependencies and applications are installed into directories in /opt, and Listaller contains some logic to make applications find dependencies, and to talk to the package manager to install missing things. This has some drawbacks, like the need to install an application before using it, the need for applications to be relocatable, and application-installations being non-atomic.

Glick2

There is/was another 3rd-party app installer approach on the GNOME side, by Alexander Larsson, called Glick2. Glick uses application bundles (do you remember Klik from back in the days?) mounted via FUSE. This allows some neat features, like atomic installations and software upgrades, no need for relocatable apps and no need to install the application.

However, it also has disadvantages. Quoting the introduction document for Glick2:

“Bundling isn’t perfect, there are some well known disadvantages. Increased disk footprint is one, although current storage space size makes this not such a big issues. Another problem is with security (or bugfix) updates in bundled libraries. With bundled libraries its much harder to upgrade a single library, as you need to find and upgrade each app that uses it. Better tooling and upgrader support can lessen the impact of this, but not completely eliminate it.”

This is where Listaller does better, since it was designed with a strong focus on avoiding code duplication.

Also, currently Glick doesn’t have support for updates and software-repositories, which Listaller had.

Combining Listaller and Glick ideas

So, why not combine the ideas of Listaller and Glick? In order to have Glick share resources, the system needs to know which shared resources are available. This is not possible if there is one huge Glick bundle containing all of the application's dependencies. So I modularized Glick bundles to contain just one software component, which could be e.g. GTK+ or Qt, GStreamer, or even a larger framework (e.g. "GNOME 3.14 Platform"). These components are identified using AppStream XML metadata, which allows them to be installed from the distributor's software repositories as well, if that is wanted.
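
For illustration, such a component bundle could carry a small AppStream metainfo document identifying what it provides. This is a hypothetical sketch – the id, name and library entries are made up, only the <provides/> structure follows the AppStream spec:

<component>
  <id>qt5-base</id>
  <name>Qt 5 Base Libraries</name>
  <summary>Base libraries of the Qt 5 toolkit</summary>
  <provides>
    <library>libQt5Core.so.5</library>
    <library>libQt5Gui.so.5</library>
  </provides>
</component>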

If you now want to deploy your application, you first create a Glick bundle for it. Then, in a second step, you pack your application bundle together with its dependencies in one larger tarball, which can also be GPG-signed and can contain additional metadata.

The resulting “metabundle” will look like this:

[Diagram: metabundle layout – the application's own Glick bundle packed together with the Glick bundles of its dependencies in one signed archive]

This doesn't look like we share resources yet, right? The dependencies are still bundled with the application requiring them. The trick lies in the "installation" step: While the application above can be executed right away without installing it, there will also be an option to install it. For the user, this will mean that the application shows up in GNOME-Shell's overview or KDE's Plasma launcher, gets properly registered with mimetypes and is – if installed for all users – available system-wide.

Technically, this will mean that the application's main bundle is extracted and moved to a special location on the file system, and so are the dependency-bundles. If bundles already exist, they will not be installed again, and the new application will simply use the existing software. Since the bundles contain information about their dependencies, the system is able to determine which software is needed and which can simply be deleted from the installation directories.

If the application is started now, the bundles are combined and mounted, so the application can see the libraries it depends on.

Additionally, this concept allows secure updates of applications and shared resources. The bundle metadata contains a URL which points to a bundle repository. If new versions are released, the system's auto-updater can automatically pick these up and install them – this means e.g. the Qt bundle will receive security updates, even if the developer who shipped it with his/her app didn't think of updating it.

Conclusion

So far, no productive code exists for this – I just have a proof-of-concept here. But I pretty much like the idea, and I am thinking about going further in that direction, since it allows deploying applications on the Linux desktop as well as deploying software on servers in a way which plays nice with the native package manager, and which does not duplicate much code (less risk of having not-updated libraries with security flaws around).

However, there might be issues I haven’t thought about yet. Also, it makes sense to look at GNOME to see how the whole “3rd-party app deployment” issue develops. In case I go further with Listaller-NEXT, it is highly likely that it will make use of the ideas sketched above (comments and feedback are more than welcome!).

October 07, 2014

This is yet another big step for Cutelyst: this release brings several important fixes over the last version, and stabilizes the API a bit more. So far I have successfully deployed 3 commercial applications built on top of Cutelyst + Grantlee5, and the result is great.

If you don't know Cutelyst yet: it's a web framework allowing you to build web applications using Qt. For more info check (this blog and) our website/wiki, which is still a work in progress: http://cutelyst.org – or join #cutelyst on freenode

This release brings the following improvements:

  • Further speed improvements: simplifying several code paths and improving the dispatcher logic resulted in an overall 15% speedup
  • Added an API to enable Grantlee template caching, which greatly improves speed when using Grantlee templating
  • Improved Query and Body parameters to properly deal with posts that contain the same field id multiple times

New features:

  • REST API – Since 0.1.0 I was asked about supporting REST, and since I needed it for another project I got involved in, the support landed early. The behavior is the same as Catalyst::Action::REST, which allows you to easily add e.g. a foo_DELETE method that will get called automatically if the request method is DELETE (see the sketch right after this list).
  • Added a Credential HTTP plugin to handle Basic HTTP (and in future Digest) authentication
  • Added support for Authenticate and ProxyAuthenticate basic parsing on the Headers class
  • Finished the Context::uriFor() methods, which allow for easily building a URI.
  • Added a method to do a DNS PTR lookup to get the hostname of the client
  • Added a C_PATH to more easily set the matching part of the path (thanks to Dan Vrátil)
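
To give an idea of how the REST dispatching looks in practice, here is a minimal sketch. It uses today's Cutelyst controller macros (the API of this early release may have differed slightly), and the class and method names are made up:

#include <Cutelyst/Controller>

using namespace Cutelyst;

class Items : public Controller
{
    Q_OBJECT
public:
    // The base action; ActionClass(REST) dispatches to item_<METHOD> below.
    C_ATTR(item, :Local :ActionClass(REST))
    void item(Context *c) { Q_UNUSED(c); }

    // Invoked for GET requests on /items/item
    C_ATTR(item_GET, :Private)
    void item_GET(Context *c) {
        c->response()->setBody(QStringLiteral("here is the item"));
    }

    // Invoked automatically when the request method is DELETE
    C_ATTR(item_DELETE, :Private)
    void item_DELETE(Context *c) {
        c->response()->setBody(QStringLiteral("item deleted"));
    }
};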

Fixes:

  • Fixed a few memory leaks
  • Fixed a crash if the body wasn’t set
  • Fixed uWSGI body buffered device
  • And a lot of other misbehaviors found post-release…

For the next release I hope to be able to port the Catalyst tutorial to the Cutelyst equivalent, and finish a few other API changes.

As before the API is unstable but don’t be afraid of playing with it, most changes will simply require a rebuild of your application.

Have fun! https://gitorious.org/cutelyst/cutelyst/archive/0f28d7da9b04061fe80f7e9cf24a120250a02e02.tar.gz


September 11, 2014

It is time for another report on Listaller, the cross-distro 3rd-party package installer, which has now been in development for – depending on how you count – 5-6 years. This will become a longer post, so you might grab some coffee or tea 😉

The original idea

The Listaller project was initially started with the goal to make application deployment on Linux distributions as simple as possible, by providing a unified package installation format and tools which make building apps for multiple distributions easier and deployment of updates simple. The key ideas were:

  • Seamless integration of all installation steps into the system – users shouldn’t care about the origin of their application, they just handle all installed apps with the same tool and update all apps with the same interface they use for updating the system.
  • Out-of-the-box sandboxing for all 3rd-party apps
  • Easy signing and key-validation for Listaller packages
  • Simple creation of updates for developers
  • Resource-sharing: It should always be clear which application uses which library, duplicates should be avoided. The distribution-provided software should take priority, since it is often well-maintained and receives security updates.

The current state

The current release of Listaller handles all of this with a plugin for PackageKit, the cross-distro package-management abstraction layer. It hooks into PackageKit and reads information passing through to the native distributor backend, and if it encounters Listaller software, it handles it appropriately. It can also inject update information. This results in all Listaller software being shown in any PackageKit frontends, and people can work with it just like if the packages were native packages. Listaller package installations are controlled by a machine policy, so the administrator can decide that e.g. only packages from a trusted source (= GPG signature in trusted database) can be installed. Dependencies can be pulled from the distributor’s repositories, or optionally from external sources, like the PyPI.

This sounds good on paper, but the current implementation has various problems.

The issues

The current Listaller approach has some problems. The biggest one lies in the future: Soon, there will be no PackageKit plugins anymore! PackageKit 1.0 will remove support for them, because they appear to be a major source of crashes; even the in-tree plugins cause problems. Also, the PackageKit service itself is currently being trimmed of unneeded features and less-used code. These changes in PackageKit are great and needed for the project (and I support these efforts), but they cause a pretty huge problem for Listaller: The project relies on the PackageKit plugin – if used without it, you lose the system-integration part, which is one of the key concepts of Listaller, and a primary goal.

But this issue is not the only one. There are more. One huge problem for Listaller is dependency-solving: It needs to know where to get software from in case it isn't installed already. And that has to be done in a cross-distributional way. This is an incredibly complex task, and Listaller contains lots of workarounds for various quirks. It contains so many hacks for distro-specific stuff that it became really hard to understand. The Listaller dependency model also became very complex, because it tried to handle many corner-cases. This is bad, of course. But the workarounds weren't added for fun; they were added because it was assumed to be easier than fixing the root cause, which would have required collaboration between distributors and some changes on the stack, which seemed unlikely to happen at the time the code was written.

The systemd effort

Another thing which affects Listaller is the latest push from the systemd team to allow cross-distro 3rd-party installations. I definitely recommend reading the linked blogpost from Lennart, if you have some spare time! The identified problems are the same as for Listaller, but the solution they propose is completely different, and about three orders of magnitude more invasive than whatever the Listaller project had in mind (I made these numbers up, so don't ask!). There are also a few issues I see with Lennart's approach; I will probably go into detail about that in another blogpost (e.g. it requires multiple copies of a library lying around, where one version might have a security vulnerability and another one doesn't – it's hard to ensure everything is up to date and secure that way, even if you have a top-notch sandbox). I have great respect for the systemd crew and especially Lennart, and I hope they succeed with their efforts. However, I also think Listaller can achieve similar things with a less-invasive solution, at least for 3rd-party app-installations (Listaller is one of the partial-fix solutions with strict focus, so not a direct competitor to the holistic systemd approach. Both solutions could happily live together.)

A step into the future

Some might have guessed it already: There are some bigger changes coming to Listaller! The most important one is that there will be no Listaller anymore, at least not in its old form.

Since the current code relies heavily on the PackageKit plugin, and contains some ugly workarounds, it doesn’t make much sense to continue working on it.

Instead, I started the Listaller.NEXT project, which is a rewrite of Listaller in C. There are some goals for the rewrite:

  • No stupid hacks and workarounds: We will not add any workaround. If there is a problem, we will fix it at its source, even if that might be more invasive.
  • Trimmed down project: The new incarnation of Listaller will only support installations of statically linked software at the beginning. We will start with a very small, robust core, and then add more features (like dependency-solving) gradually, but only if they are useful. There will be no feature-creep like in the previous version.
  • Faster development cycle: Releases will happen much faster, not only two or three times a year
  • Integration: Since there is no PackageKit plugin anymore, but integration is still one of Listaller’s key concepts, we will integrate Listaller into downstream tools, ranging from Apper to GNOME-Software. Richard Hughes will help with the integration and user interfaces, so Listaller applications get displayed properly.
  • AppStream-first: AppStream is the ultimate tool for Listaller to detect dependencies. With the 0.6 release, the Listaller component-concept was merged into it, which makes it a very powerful and non-hackish solution for dependency-detection. We will advance the use of its metadata, and probably use it exclusively, which would restrict Listaller to only work properly on distributions which ship AppStream metadata.
  • No desktop-only focus: The previous Listaller was focused only on desktop GUI apps. The new version will be developed with a much larger target audience in mind, including server deployments ("Can I use it to deploy my server app?" is one of the most frequently asked questions about Listaller – with the new version, the answer is yes)
  • We will continue to improve the static-linking and cross-distro development toolchain (libuild, with ligcc, lig++ and binreloc), to make building portable apps easier.

I made a last release of the 0.5.x series of Listaller, to work with PackageKit 0.9.x – the future lies in the C port.

If you are using Listaller (and I know of people who do, for example some deploy statically-linked stuff on internal test-setups with it), stay tuned. The packaging format will stay mostly compatible with the current version, so you will not see many changes there (the plan is to freeze it very soon, so no backwards-incompatible changes are made anymore). The 0.5.x series will receive critical bugfixes if necessary.

Help needed!

As always, there is help needed! Writing C is not that difficult 😉 But user feedback is welcome as well, in case you have an idea. The new code will be hosted on Github in the new listaller-next branch (currently not that much to find there). Long-term, we will completely migrate away from Launchpad.

You can expect more blogposts about the Listaller concepts and progress in the next months (as soon as I am done with some AppStream-related things, which take priority).

August 25, 2014

Another mostly bugfix release to make packagers and users happy :)

Sadly I needed to change the direction of where I put most of my efforts, which means that I'm focusing more on getting some commercial products done to get the bills paid (as fundraising campaigns don't work well all the time). For a long time I've been trying to polish everything I could to have the desktop I wanted, but recently I realized that the way I was doing it would never work: first because I'd need to convince people to think like I do, second because no one in free software writes stuff for free, and this took me a lot of time to realize.

Almost everyone writes stuff for himself, otherwise there's no pleasure, so unlike at companies you can't tell a free software developer to work on something he/she doesn't like. This is one explanation for why most of the projects I started received very little help – important help (don't get offended), but still I don't have active developers in Apper*, print-manager, libudisks-qt, colord-kde, aptcc* (* Matthias and some Fedora dudes have added some nice features) and a few others. The KDE community has always been kind enough to notice my code mistakes or even fix the code by themselves, but features are a different matter.

Don’t worry I’m not moving to OSX :P

But for a long time I've been building in my mind the workspace that I want, and with Wayland I realized it would be somewhat easy to achieve what I want when speaking of a desktop shell: basically a shell where all widgets are independent processes and a QML compositor just properly places their surfaces. Aaron already covered the pros/cons of such an approach, however I'm stubborn… I know it's a huge task to start a new workspace/DE or whatever, and I'm not going to do that right away (though I have played with some Wayland code already); instead I'm trying to get my commercial software to pay for it, which might take quite some time :P

So I would just like to maybe catch someone who cares for some of this stuff I maintain and can give a hand, especially on KF5. I don't yet have KF5 packages ready in my distro, and as I said I'm focusing on other stuff; I'll still maintain these projects and eventually port them by myself, but I'm mostly in bugfix mode :P – except for Cutelyst, which is a project I'm actively working on, as I need it for the web stuff I've been doing :)

A good start is porting print-manager to KF5 which should be rather easy.

And here is hopefully the last Qt4/KDE4 based Apper :P

http://download.kde.org/stable/apper/0.9.1/src/apper-0.9.1.tar.xz.mirrorlist

Enjoy.


August 16, 2014

There hasn't been a progress-report on DEP-11 for some time, but that doesn't mean there was no work going on.

DEP-11 is Debian's implementation of AppStream, as well as an effort to enhance the metadata available about software in Debian. While initially AppStream was only about applications, DEP-11 was designed with a larger scope, to collect data about libraries, binaries and things like Python modules. Now, since AppStream 0.6, DEP-11 and AppStream have essentially the same scope, with the difference of DEP-11 metadata being described in YAML, while official AppStream data is XML. That was due to a request by our ftpmasters team, which doesn't like XML (which is also not used anywhere in Debian, as opposed to YAML). But this doesn't mean that people will have to deal with the YAML file format: The libappstream library will just take DEP-11 data as another data source for its Xapian database, allowing anything using libappstream to access that data just like the XML stuff. Richard's libappstream-glib will also receive support for the DEP-11 format soon, filling its in-memory data cache and enabling the use of GNOME-Software on Debian.

So, what has been done so far? The past months, my Google Summer of Code student, Abhishek Bhattacharjee, was working hard to integrate DEP-11 support into dak, the Debian Archive Kit, which maintains the whole Debian archive. The result will be an additional metadata table in our internal Postgres database, storing detailed information about the software available in a Debian package, as well as "Components-<arch>.yml.gz" files in the Debian repositories. Dak will also produce an application icon-cache and a screenshots repository. During the time of the SoC, Abhishek focused mainly on the applications part of things, and less on the other components (like extracting data about Python modules or libraries) – these things can easily be implemented later.

The remaining steps will be to polish the code and make it merge-ready for Debian's dak (as soon as it has received enough testing, we will likely give it a try on the Tanglu Debian derivative). Following that, Apt will be extended to fetch the DEP-11 data on-demand on systems where it is useful (which is currently mostly desktop-systems) – if you want to save a little bit of space, you will be able to disable downloading this extra metadata in Apt. From there, libappstream will take the data for its Xapian db. This will lead to the removal of the much-hated (from ftpmasters' and maintainers' side) app-install-data package, which has not been updated for two years and only contains a small fraction of the metadata provided by DEP-11.

What Debian will ultimately gain from this effort is support for software-centers like GNOME-Software, and improved support for tools like Apper and Muon in displaying applications. Long-term, with more metadata being available, it would be cool to add support for it to "specialized package managers", like Python's pip, npm or gem, to make them fetch information about available distribution software and install that instead of their own copies from 3rd-party repositories, if possible. This should ultimately lead to less code duplication on distributions and will likely result in fewer security issues, since the officially maintained and integrated distribution packages can easily be used, if possible. This is no attempt to make tools like pip obsolete, but an attempt to have the different tools installing software on your machine communicate better, instead of creating parallel worlds in terms of software management. Another nice side effect of more metadata will be options to search for tools handling mimetypes in the software repos (in case you can't open a file), smart software centers installing missing firmware, and automatic suggestions for developers on which software they need to install in order to build a specific software package. Also, the data allows us to match software across distributions; for that, I will have some news soon (not sure how soon though, as I am currently in thesis-writing-mode, and therefore don't have that much spare time). Since the goal is to have these features available on all distributions supporting AppStream, it will take longer to realize – but we are on a good way.

So, if you want some more information about my student’s awesome work, you can read his blogpost about it. He will also be at Debconf’14 (Portland). (I can’t make it this time, but I surely won’t miss the next Debconf)

Sadly, I only see a very small chance to have the basic DEP-11 stuff land in-time for Jessie (lots of review work needs to be done, and some more code needs to be written), but we will definitively have it in Jessie+1.

A small example on how this data will look like can be found here – a larger, actual file is available here. Any questions and feedback are highly appreciated.

July 24, 2014

Release early, release often has never been my strength, especially since I don't do fair scheduling of all the projects I'm involved in…

So, since I needed to better polish Cutelyst, I took more time on it, and the result is great: around a 100% speedup and a few new features added.

The Cutelyst uWSGI plugin now has support for --thread, which will create a QThread to process a request. However, I strongly discourage its usage in Cutelyst: the performance is ~7% worse, and a crash in your code will break other requests. As of now, ASYNC mode is not supported in threaded mode due to a limitation in the uWSGI request queue.

Thanks to valgrind I managed to take a hello-world application from 5K requests per second on 0.2.0 to 10K req/s on 0.3.0 on an Intel Core2Duo 2.4GHz (or 44K req/s on an AMD Phenom II 965 x4 3.4GHz). However, if you enable Grantlee templating you get around 600 req/s, so if I happen to have time I will be looking into improving its performance.

Response::body() is now a QIODevice, so you can set a QFile* or something else and have Cutelyst send it back.
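
A minimal sketch of how that can be used, assuming today's Cutelyst API (the exact calls in 0.3.0 may have differed; controller and file names are illustrative):

#include <Cutelyst/Controller>
#include <QFile>

using namespace Cutelyst;

class Files : public Controller
{
    Q_OBJECT
public:
    C_ATTR(report, :Local)
    void report(Context *c) {
        // Parent the device to the Context so it is deleted with the request.
        auto file = new QFile(QStringLiteral("report.pdf"), c);
        if (file->open(QIODevice::ReadOnly)) {
            c->response()->setBody(file); // Cutelyst streams the device back
        } else {
            c->response()->setStatus(404);
        }
    }
};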

Now http://cutelyst.org points to a gitorious wiki which is slowly getting populated, and the API documentation is available at http://api.cutelyst.org.

The 0.3.0 tarball can be downloaded here

Have fun :)


July 16, 2014

Today I am very happy to announce the release of AppStream 0.7, the second-largest release (judging by commit number) after 0.6. AppStream 0.7 brings many new features for the specification, adds lots of good stuff to libappstream, introduces a new libappstream-qt library for Qt developers and, as always, fixes some bugs.

Unfortunately we broke the API/ABI of libappstream, so please adjust your code accordingly. Apart from that, any other changes are backwards-compatible. So, here is an overview of what’s new in AppStream 0.7:

Specification changes

Distributors may now specify a new <languages/> tag in their distribution XML, providing information about the languages a component supports and the completion-percentage for each language. This allows software-centers to apply smart filtering on applications to highlight the ones which are available in the user's native language.
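
In the distro XML this looks roughly like the following (the language codes and percentages are just examples):

<languages>
  <lang percentage="100">en_GB</lang>
  <lang percentage="96">de</lang>
</languages>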

A new addon component type was added to represent software which is designed to be used together with a specific other application (think of a Firefox addon or GNOME-Shell extension). Software-center applications can group the addons together with their main application to provide an easy way for users to install additional functionality for existing applications.
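
A metainfo file for such an addon might look like this sketch – the ids and names are made up, and I am assuming the <extends/> tag, which ties an addon to its parent application:

<component type="addon">
  <id>myshell-extension</id>
  <extends>gnome-shell.desktop</extends>
  <name>My Shell Extension</name>
  <summary>Adds an example feature to GNOME-Shell</summary>
</component>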

The <provides/> tag gained a new dbus item-type to expose D-Bus interface names the component provides to the outside world. This means in future it will be possible to search for components providing a specific dbus service:

$ appstream-index what-provides dbus org.freedesktop.PackageKit.desktop system

(if you are using the cli tool)
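
The upstream metadata side of this would carry something like the following (the interface name is just an example; the type attribute distinguishes the system and session bus):

<provides>
  <dbus type="system">org.freedesktop.PackageKit</dbus>
</provides>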

A <developer_name/> tag was added to the generic component definition to define the name of the component developer in a human-readable form. Possible values are, for example “The KDE Community”, “GNOME Developers” or even the developer’s full name. This value can be (optionally) translated and will be displayed in software-centers.

An <update_contact/> tag was added to the specification, to provide a convenient way for distributors to reach upstream to talk about changes made to their metadata or issues with the latest software update. This tag was already used by some projects before, and has now been added to the official specification.

Timestamps in <release/> tags must now be UNIX epochs, YYYYMMDD is no longer valid (fortunately, everyone is already using UNIX epochs).

Last but not least, the <pkgname/> tag is now allowed multiple times per component. We still recommend creating metapackages according to the contents the upstream metadata describes and placing the file there. However, in some cases defining one component to be in multiple packages is a short way to make metadata available correctly without excessive package-tuning (which can become difficult if a <provides/> tag needs to be satisfied).

As a small sidenote: The multiarch path in /usr/share/appdata is now deprecated, because we think that we can live without it (by shipping -data packages per library and using smarter AppStream metadata generators which take advantage of the ability to define multiple <pkgname/> tags).

Documentation updates

In general, the documentation of the specification has been reworked to be easier to understand and to include less duplication of information. We now use extensive crosslinking to show you the information you need in order to write metadata for your upstream project, or to implement a metadata generator for your distribution.

Because the specification needs to define the allowed tags completely and contain as much information as possible, it is not very easy to digest for upstream authors who just want some metadata shipped quickly. In order to help them, we now have "Quickstart pages" in the documentation, which are rich in examples and contain the most important subset of information you need to write a good metadata file. These quickstart guides already exist for desktop-applications and addons; more will follow in future.

We also have an explicit section dealing with the question “How do I translate upstream metadata?” now.

More changes to the docs are planned for the next point releases. You can find the full project documentation at Freedesktop.

AppStream GObject library and tools

The libappstream library also received lots of changes. The most important one: We switched from LGPL-3+ to LGPL-2.1+. People who know me know that I love the v3 license family of GPL licenses – I like it for tivoization protection, its explicit compatibility with some important other licenses and cosmetic details, like entities not losing their right to use the software forever after a license violation. However, a LGPL-3+ library does not mix well with projects licensed under other open source licenses, mainly GPL-2-only projects. I want libappstream to be used by anyone without forcing the project to change its license. For some reason, using the library from proprietary code is easier than using it from a GPL-2-only open source project. The license change was also a popular request of people wanting to use the library, so I made the switch with 0.7. If you want to know more about the LGPL-3 issues, I recommend reading this blogpost by Nikos (GnuTLS).

On the code-side, libappstream received a large pile of bugfixes and some internal restructuring. This makes the cache builder about 5% faster (depending on your system and the amount of metadata which needs to be processed) and prepares for future changes (e.g. I plan to obsolete PackageKit’s desktop-file-database in the long term).

The library also brings back support for legacy AppData files, which it can now read. However, appstream-validate will not validate these files (and kindly ask you to migrate to the new format).

The appstream-index tool received some changes, making its command-line interface a bit more modern. It is also possible now to place the Xapian cache at arbitrary locations, which is a nice feature for developers.

Additionally, the testsuite got improved and should now work on systems which do not have metadata installed.

Of course, libappstream also implements all features of the new 0.7 specification.

With the 0.7 release, some symbols were removed which have been deprecated for a few releases, most notably as_component_get/set_idname, as_database_find_components_by_str, as_component_get/set_homepage and the “pkgname” property of AsComponent (which is now a string array and called “pkgnames”). API level was bumped to 1.

Appstream-Qt

A Qt library to access AppStream data has been added. So if you want to use AppStream metadata in your Qt application, you can easily do that now without touching any GLib/GObject based code!

Special thanks to Sune Vuorela for his nice rework of the Qt library!

And that's it with the changes for now! Thanks to everyone who helped make 0.7 ready, be it with feedback, contributions to the documentation, translations or code. You can get the release tarballs at Freedesktop. Have fun!

July 05, 2014

Five months ago I announced the very first release of Cutelyst, a web framework powered by Qt 5. Time passes, and I have started shaping my real-world applications written with it. I should probably have released a newer version earlier, but it's not very nice to keep releasing an API that changes a lot; still, the next version (0.3.0) will contain API changes too. What changed from 0.1.0 to 0.2.0:

  • I have setup a documentation website http://cutelyst.org
  • I started a WordPress-like blog written with Cutelyst https://gitorious.org/cutelyst/untitled – right now this is a nice example app; if you like it please help :)
  • Changed the API to expose the forked behavior, thus being able to set up database connections in each new process, avoiding a mess
  • uWSGI got a patch to deal with workers that don't want to work; in other words, when something post-fork fails, like reaching the database connection limit, the worker can exit and won't be restarted.
  • A bunch of bugfixes, of course
  • Make use of categorized logging (i.e. qCDebug)
  • Allow uWSGI to restart the application as soon as your application .so file changes; this greatly improves development speed :)
  • Encapsulated the post body into a QIODevice subclass on the uWSGI engine; this means a QFile is used if the --post-buffering limit is reached, or a custom IODevice that deals with the request memory buffer
  • A parser to get uploaded content “multipart/form-data” which is fast and can work with lots of data

For the next version I'll try to get a proper Qt event loop integration. Download here. Have fun!


June 18, 2014

Bartholomea annulata | (c) Kevin Bryant

It is time for a new Tanglu update, which has been overdue for a long time now!

Many things happened in Tanglu development, so here is just a short overview of what was done in the past months.

Infrastructure

Debile

The whole Tanglu distribution is now built with Debile, replacing Jenkins, which was difficult to use for package building purposes (although Jenkins is great for other things). You can see the Tanglu builders in action at buildd.tg.o.

The migration to Debile took a lot of time (a lot more than expected), and blocked the Bartholomea development at the beginning, but now it is working smoothly. Many thanks to all people who have been involved with making Debile work for Tanglu, especially Jon Severinsson. And of course many thanks to the Debile developers for helping with the integration, Sylvestre Ledru and of course Paul Tagliamonte.

Archive Server Migration

Those who read the tanglu-announce mailinglist know this already: We moved the main archive server stuff at archive.tg.o to a new location, and to a very powerful machine. We also added some additional security measures to it, to prevent attacks.

The previous machine is now being used for the bugtracker at bugs.tg.o and for some other things, including an archive mirror and the new Tanglu User Forums. See more about that below :-)

Transitions

There is huge ongoing work on package transitions. Take a look at our transition tracker and the staging migration log to get a taste of it.

Merging with Debian Unstable is also going on right now, and we are working on merging some of the Tanglu changes which are useful for Debian as well (or which just reduce the diff to Tanglu) back to their upstream packages.

Installer

Work on the Tanglu Live-Installer, although badly needed, has not yet been started (it's a task ready for the taking by anyone who'd like to do it!) – however, some awesome progress has been made in making the Debian-Installer work for Tanglu, which allows us to perform minimal installations of the Tanglu base system and allows easier support of alternative Tanglu flavours. The work on d-i also uncovered a bug which appeared with the latest version of findutils, which has been reported upstream before Debian could run into it. This awesome progress was possible thanks to the work of Philip Muškovac and Thomas Funk (in really hard debug sessions).

Tanglu Forums

We finally have the long-awaited Tanglu user forums ready! As discussed in the last meeting, a popular demand on IRC and our mailing lists was a forum or Stackexchange-like service for users to communicate, since many people can work better with that than with mailinglists.

Therefore, the new English TangluUsers forum is now ready at TangluUsers.org. The forum software is in an alpha version though, so we might experience some bugs which haven’t been uncovered in the testing period. We will watch how the software performs and then decide if we stick to it or maybe switch to another one. But so far, we are really happy with the Misago Forums, and our usage of it already led to the inclusion of some patches against Misago. It also is actively maintained and has an active community.

Misc Things

KDE

We will ship with at least KDE Applications 4.13, maybe some 4.14 things as well (if we are lucky, since Tanglu will likely be in feature-freeze when this stuff is released). The other KDE parts will remain on their latest version from the 4.x series. For Tanglu 3, we might update KDE SC 4.x to KDE Frameworks 5 and use Plasma 5 though.

GNOME

Due to the lack of manpower on the GNOME flavor, GNOME will ship in the same version available in Debian Sid – maybe with some stuff pulled from Experimental, where it makes sense. A GNOME flavor is planned to be available.

Common infrastructure

We currently run with systemd 208, but a switch to 210 is planned. Tanglu 2 also targets the X.org server in version 1.16. For more changes, stay tuned. The kernel release for Bartholomea is also not yet decided.

Artwork

Work on the default Tanglu 2 design has started as well – any artwork submissions are most welcome!

Tanglu joins the OIN

The Tanglu project is now a proud member (licensee) of the Open Invention Network (OIN), which builds a pool of defensive patents to protect the Linux ecosystem from companies who are trying to use patents against Linux. Although the Tanglu community does not fully support the generally positive stance the OIN has on software patents, the OIN effort is very useful and we agree with its goal. Therefore, Tanglu joined the OIN as a licensee.


And that’s the stuff for now! If you have further questions, just join us on #tanglu or #tanglu-devel on Freenode, or write to our newly created forum! – You can, as always, also subscribe to our mailinglists to get in touch.

May 26, 2014

And again, another KDE-AppStream post ;-) If you want to know more about AppStream metadata and why adding it to your project is a good idea, you might be interested in this blogpost (and several previous ones I wrote).

Originally, my plan was to directly push metadata to most KDE projects. The problem is that there is no way to reach all maintainers and have them opt out of getting metadata pushed to their repositories. There is also no technical policy for a KDE project, since "KDE" is really only about the community right now, and there are no technical criteria a project under the KDE umbrella has to fulfill (at least to my knowledge; in theory, even GTK+ projects are perfectly fine within KDE).

Since I feel very uncomfortable in touching other people’s repositories without sending them a note first, I think the best way forward is an opt-in approach.

So, if you want your KDE project to ship metadata, follow these simple steps:

1. Check if there already is metadata for your project

That's right – we already have some metadata available. Check out the kde-appstream-metadata-templates repository at Github. You can take the XML file from there, if you want. Just make sure that there are no invalid tags in the description field (no <a/> nodes allowed, for example – the content is not HTML!), check that you have an SPDX-compliant <project_license/> tag, check that the provided public interfaces in the <provides/> tag match your project, and also test if the URLs work.

Then you can copy the modified AppStream metadata to your project.

2. Write new metadata

How to write new metadata is described in detail at this TechBase Wiki page. Just follow the instructions.
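
If you just want a skeleton to start from, a minimal 0.6-style metainfo file looks roughly like this (all values are placeholders; see the wiki page for the authoritative reference):

<?xml version="1.0" encoding="UTF-8"?>
<component type="desktop">
  <id>myapp.desktop</id>
  <metadata_license>CC0-1.0</metadata_license>
  <project_license>GPL-2.0+</project_license>
  <name>MyApp</name>
  <summary>One-line summary of MyApp</summary>
  <description>
    <p>A longer description of what MyApp does.</p>
  </description>
  <url type="homepage">http://myapp.example.org</url>
</component>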

In case you need help or want me to push the metadata to your project if you don’t have the time, you can also write me an email: matthias [{AT}] tenstral . net – or alternatively file a bug against the Github project linked above.

Don’t forget to have CMake install your shiny new metadata into /usr/share/appdata/.
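
With CMake that is a one-liner (the file name is a placeholder; adjust the destination if your project uses install-dir variables):

install(FILES myapp.appdata.xml DESTINATION share/appdata)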

All metadata you add to your project will automatically get translated by the KDE l10n scripts, no further action is required. So far, projects like Ark, K3b and Calligra are shipping metadata, and the number of AppStream-aware projects in KDE is growing constantly, which greatly improves their visibility in software centers, and will help distributions a lot in organizing installed software.

If you have further questions, please ask! :-)

May 05, 2014

This new release doesn't bring new features, but it's an important one since it supports the PackageKit 0.9.x API; it is also the first release which uses the now 100% async calls of PackageKit-Qt.

Making Pk-Qt sync-free wasn't so complicated, but since I had to rethink the API, the client code of Apper also needed to be adapted. I must say that I was lazy sometimes and just created a QEventLoop in Apper to wait for the reply… which basically means that there will be new bugs in Apper, and hopefully some old ones will be fixed.

It's interesting that using DBus doesn't mean the code talking to it is async: the proxy classes generated by qdbusxml2cpp do a blocking call every time you ask for a property, so you have to avoid that and do a GetProperties call which returns to a callback. Although this worked well for PackageKitQt, I failed to do the same for libnm-qt, because it's nearly impossible to make it somehow reliable due to NetworkManager itself not using gdbus – you can't have a reliable snapshot of it unless you do an introspection call, which is also insane, so I prefer to wait for when they switch to gdbus :P
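
For the record, the non-blocking pattern looks roughly like this with QtDBus – a sketch using the Qt 5 connect syntax; the standard org.freedesktop.DBus.Properties interface calls this method GetAll, and the service and property names here are just examples:

#include <QDBusConnection>
#include <QDBusMessage>
#include <QDBusPendingCallWatcher>
#include <QDBusPendingReply>
#include <QVariant>

void fetchPackageKitProperties()
{
    // Fetch all properties in one async round-trip instead of using the
    // generated proxy's blocking per-property accessors.
    QDBusMessage msg = QDBusMessage::createMethodCall(
        QStringLiteral("org.freedesktop.PackageKit"),
        QStringLiteral("/org/freedesktop/PackageKit"),
        QStringLiteral("org.freedesktop.DBus.Properties"),
        QStringLiteral("GetAll"));
    msg << QStringLiteral("org.freedesktop.PackageKit");

    QDBusPendingCall call = QDBusConnection::systemBus().asyncCall(msg);
    auto watcher = new QDBusPendingCallWatcher(call);
    QObject::connect(watcher, &QDBusPendingCallWatcher::finished,
                     [](QDBusPendingCallWatcher *w) {
        QDBusPendingReply<QVariantMap> reply = *w;
        if (!reply.isError()) {
            const QVariantMap props = reply.value();
            // ...use props.value(QStringLiteral("VersionMajor")) etc.
        }
        w->deleteLater();
    });
}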

Apper 0.9.0 depends on Qt4, KDE4 libs and PackageKitQt 0.9.2.

I'm not giving it much attention lately since I'm slowly building Apper on Qt5, which will be using QtQuick2; as my time is short, supporting this old version is mainly to make packagers happy :)

http://download.kde.org/stable/apper/0.9.0/src/apper-0.9.0.tar.xz.mirrorlist


May 03, 2014

It took some time, but now it's finally done: KDE has translations for AppStream upstream metadata!

AppStream is a Freedesktop project to extend the metadata available about software in distributions, especially regarding applications. Distributions compile a metadata file from data collected from packages, .desktop files and possibly other information sources, and create an AppStream XML file from it, which is then – directly or via a Xapian cache – read by software-center-like applications such as GNOME-Software or KDE's Apper.

Since the metadata available from current sources is not standardized and rather poor, upstream projects can ship small XML files, AppStream upstream metadata or AppData in short. These files contain additional information about a project, such as a long description and links to screenshots. They also provide hints about public interfaces a software provides, for example binaries and libraries, making it possible for distributors to give users exactly the right package name in case they are missing a software component.

So, in order to represent graphical KDE applications like they deserve it in the new software centers making use of AppStream, we need to ship AppData files, with long descriptions, screenshots and a few URLs.

But how can you create these metadata files? In case you want your graphical KDE app to ship an AppData file, there is now a help page on the Techbase Wiki which provides all information needed to get started!

For non-visual stuff or software which just wants to publish its provided interfaces with AppStream metadata, there is a dedicated page for that as well. Shipping metadata for non-GUI apps will help programmers to satisfy dependencies in order to compile new software, enhance bash-completion for missing binaries and provide some other neat stuff (take a look at this blogpost to get a taste of it).

And if you want to read a FAQ about the metadata stuff and get the bigger picture, just go to the Techbase Wiki page about AppStream metadata as well.

The pages are not 100% final, so if you have questions, please write me a mail and I’ll update the pages, or simply correct/refine it by yourself (it’s a wiki afterall).

And now to the best thing: As soon as you ship an AppStream upstream metadata file (*.appdata.xml for apps or *.metainfo.xml for other stuff), the KDE l10n-script (Scripty!) will automatically start translating it, just like we already do with .desktop files. No further actions are necessary.

I already have a large amount of metadata files here, partially auto-generated, which show that we have about 160+ applications in KDE which could get an AppData file, not counting any frameworks or other non-GUI stuff yet. Since that is a bit much to submit via Reviewboard (which I originally planned to do), I hope I can commit the changes directly to the respective repositories, where the maintainers can take a look at it and adjust it to their liking. If that idea does not receive approval, I will just publish a set of data at some place for the KDE app maintainers to take as a reference (the auto-generated stuff needs some fixup to be commit-ready (which I’d do in case I can just commit changes)). Either way, it is safe now to write and ship AppData files in KDE projects!

In order to get your stuff translated, it is necessary that you follow the AppStream 0.6 metadata specification, and not one of the older revisions. You can easily detect 0.6 metadata by the <component> root node, instead of <application>, or by it having a metadata_license tag. We don't support the older versions simply because it's not necessary, as there were only two KDE projects shipping AppData before, which are now using 0.6 data as well. Since 0.6, the metadata XML format is guaranteed to be stable, and the only reason which could make me change it in an incompatible way is to prevent something as bad as the end of the world from happening (== won't happen ;-) ). You can find the full specification (upstream and distro data) here.

All parsers are able to handle 0.6 data now, and the existing tools are almost all migrated already (might take a few months to hit the distributions though).

So, happy metadata-writing! :-)

Thanks to all people who helped with making this happen, and especially Burkhard Lück and Albert Astals Cid for their patch-review and help with integrating the necessary changes into the KDE l10n-script.

April 25, 2014

Today I released AppStream and libappstream 0.6.1, which feature mostly bugfixes, so nothing incredibly exciting to see there (but this also means no API/ABI breaks). The release clarifies some paragraphs in the spec which people found confusing, and fixes a few issues (like one example in the docs not being valid AppStream metadata). As the only spec extension, we introduce a "priority" property in distro metadata to allow metadata from one repository to override data shipped by another one. This is used (although with a similar syntax) in Fedora already to have "placeholder" data for non-free stuff, which gets overridden by the real metadata if a new application was added. In general, the priority property was added to make the answer to the question "which data is preferred" much less magic.

The libappstream library got some new API to query component data in different ways, and I also brought back support for Vala (so if you missed the Vapi file: It’s back now, although you have to manually enable this feature).

The CLI tool also got some extensions to query AppStream data. Here is a brief introduction:

First of all, we need to make sure the database is up-to-date, which should be the case already (it is rebuilt automatically):

$ sudo appstream-index refresh

The database will only be rebuilt when necessary; if you want to force a rebuild anyway, use the "--force" parameter.

Now imagine we want to search for an app containing the word “media” (in description, keywords, summary, …):

$ appstream-index s media

which will return:

Identifier: gnome-media-player.desktop [desktop-app]
Name: GNOME Media Player
Summary: A simple media player for GNOME
Package: gnome-media-player
----
Identifier: mediaplayer-app.desktop [desktop-app]
Name: Media Player
Summary: Media Player
Package: mediaplayer-app
----
Identifier: kde4__plasma-mediacenter.desktop [desktop-app]
Name: Plasma Media Center
Summary: A mediacenter user interface written with the Plasma framework
Package: plasma-mediacenter
----
etc.

If we already know the name of a .desktop file, or the ID of a component, we can have the tool print out information about the application, including which package it was installed from:

$ appstream-index get lyx.desktop

If we want to see more details, including e.g. a screenshot URL and a longer description, we can pass "--details" to the tool:

Identifier: lyx.desktop [desktop-app]
Name: LyX
Summary: An advanced document processor with the power of LaTeX.
Package: lyx-common
Homepage: http://www.lyx.org/
Icon: lyx.png
Description: LyX is a document processor that encourages an approach to writing
 based on the structure of your documents (WYSIWYM) and not simply
 their appearance (WYSIWYG).
 
 LyX combines the power and flexibility of TeX/LaTeX[...]
Sample Screenshot URL: http://alt.fedoraproject.org/pub/alt/screenshots/f21/source/lyx-ea535ddf18b5c7328c5e88d2cd2cbd8c.png
License: GPLv2+

(I truncated the results slightly ;-) )

Okay, so far so good. But now it gets really exciting (and this is a feature added with 0.6.1): We can now query a component by the items it provides. For example, I want to know which software provides the library libfoo.so.2:

$ appstream-index what-provides lib libfoo.so.2

This also works with binaries, or Python modules:

$ appstream-index what-provides bin apper

This stuff works distribution-agnostic, as long as software ships upstream metadata with a valid <provides/> field, or the distributor adds it while generating AppStream distro metadata.
This means that software can – as soon as we have sufficient metadata of this kind – declare its dependencies upstream in the form of a simple text file, referencing the components needed to build and run it on any Linux distribution. Users can simply install missing stuff by passing that file to their package manager, which can look up the component-to-package mapping and versions and do the right thing in installing the dependencies. So basically, this allows what "pip -r" does for Python, but for any application (not only Python stuff), and based on the distributor's package database.
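
For reference, such a <provides/> block in the upstream metainfo file looks like this (the item values are illustrative; library, binary and python2/python3 are among the item types the spec defines):

<provides>
  <library>libfoo.so.2</library>
  <binary>apper</binary>
  <python2>mymodule</python2>
</provides>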

With the provides-items, we can also scan software to detect its dependencies automatically (and have them in a distro-agnostic form directly). We can also easily search for missing mimetype-handlers, missing kernel modules, missing firmware etc. to install them on-demand, making the system much smarter in handling its dependencies. And users don't need to do search orgies to find the right component for a given task.

Also on my todo list for the future, based on this feature: A small tool telling upstream authors which distribution has their application in which version, using just one command (and AppStream data from multiple distros).

Also planned: A cross-distro information page showing which distros ship which library versions, Python modules and application versions (and also the support status of the distro), so developers know which library versions (or GCC versions etc.) they should at least support to make their application easily available on most distributions.

As always, you can get the releases on Freedesktop, as well as the AppStream specification.

April 19, 2014

Hi,

this is just a quick bugfix release, the last one depending on the PackageKit 0.8 series; it doesn't have new features apart from some fixed support for AppStream 0.6.

For the next 0.8.3 version PackageKit 0.9 will be required.

For more information you can look at the git logs.

http://download.kde.org/stable/apper/0.8.2/src/apper-0.8.2.tar.xz.mirrorlist

Have fun :)


February 12, 2014

FOSDEM is over, and it has been a great time! I talked to many people (thanks to full devrooms ;-) – Graph Databases and Config Management were crazy, JavaScript as well…) and there might be some cool new things to come soon, which we discussed there :-)
I also got some good and positive feedback on the projects I work on, and met many people from Debian, Kubuntu, KDE and GNOME (haven't seen some of them for almost 3 years) – one of the best things about being at FOSDEM is that you not only see people "of your own kind" – for example, for me as a Debian developer it was great to see Fedora people and discuss things with them, something which rarely happens at Debian conferences. Also, having GNOME and KDE closely together again (literally, their stands were next to each other…) is something I missed since the last Desktop Summit in 2011.

My talks were also good, except for the beamer-slides-technical-error at the beginning, which took quite some time (I blame KScreen ;-) ).

In case you’re interested in the slides, here they are: slides for FOSDEM’14 AppStream/Listaller talks.

The slides can likely be understood without the talk, they are way too detailed (usually I only show images on slides, but that doesn’t help people who can’t see the talk ^^)

I hope I can make it to FOSDEM’15 as well – I’ve been there only once, but it already is my favourite FOSS-conference (and I love Belgian waffles) ;-)