Planet Tanglu
August 13, 2018

Virtlyst – a web interface to manage virtual machines built with Cutelyst/Qt/C++ – got a new release.

This new release includes a bunch of bug fixes, the most important probably being the ability to warn the user before important actions, to help avoid mistakes.

Most commits came from new contributor René Linder, who is also working on a Bootstrap 4 theme, and Lukas Steiner created a Dockerfile for it. This is especially cool because the Virtlyst repository now has 4 authors, while Cutelyst, which is way older, has only 6.

For the next release I’ll also try to add user management (today you have a single admin account, and adding new users has to be done via SQL), which wasn’t available in the original WebVirtMgr project but is surely the most important missing feature.

Have Fun! https://github.com/cutelyst/Virtlyst/archive/v1.2.0.tar.gz

July 17, 2018

Cutelyst, a C++ web framework based on Qt, got a new release. This release has some important bug fixes, so upgrading is really recommended.

Most of this release’s fixes came from a side project I started called Cloudlyst. I did some work on the NextCloud client, and due to that I became interested in how the WebDAV protocol works, so Cloudlyst is a server implementation of WebDAV; it also passes all litmus tests. The WebDAV protocol makes heavy use of REST concepts, and although it uses XML instead of JSON, that’s actually a good choice, since XML can be parsed progressively, which is important for large directories.

Since the path URL now has to deal with file paths, it’s very important that it handles special characters well, and sadly it did not. I had tried to optimize percent-encoding decoding using a single QString, instead of going back and forth with toLatin1() then fromUtf8(), and this wasn’t working at all. To fix this properly, the URL is now parsed a single time at once, so the QString path() is fully decoded, which is a little faster and avoids allocations. And this is now unit tested 🙂

Besides that there was:

  • Fix for a regression in auto-reloading apps in cutelyst-wsgi
  • Fix CSRF token for multipart/form-data (Sebastian Held)
  • Allow compiling the WSGI module when Qt was not built with SSL support

The last one, plus another commit, fixed some build issues I had with buildroot, for which I also created a package, so soon you will be able to select Cutelyst from the buildroot menu.

Have fun https://github.com/cutelyst/cutelyst/releases/tag/v2.5.0

June 07, 2018

Yesterday TechEmpower released the results for round 16 of their benchmarking tests; you can see their blog post about it here. And like for round 15, I’d like to add my commentary about it here.

Before you look at the results web site, it’s important to be aware of a few things. First, round 16 runs on new hardware, more powerful than in the previous rounds. They also Dockerized the tests, which allowed us to pull different distro images, cache package installs and be isolated from other frameworks. So don’t try to compare to round 15.

Having Cutelyst tested there has brought it many benefits. In previous rounds we noticed that, when testing on a server with many CPU cores, letting the operating system do the scheduling wasn’t a good idea, so we added a CPU affinity feature to cutelyst-wsgi; while uWSGI also has one, our logic does this core pinning for threads and not only processes.

While testing at home on an AMD Phenom II X4, pre-fork mode was much faster than threaded mode, but on my 5th Gen Intel laptop the results were closer and threading was much better. Thanks to TFB I found out that each process was being assigned to CPU core 0, which made 28 processes bound to a single core. In this new round you can see that the difference between running threaded or in pre-fork mode is negligible; in some tests pre-fork is even faster. Pre-fork does consume more RAM: running Virtlyst with 100 threads uses around 35 MiB, while 100 processes use around 130 MiB, but multiple processes are better if your code happens to crash.

An important feature of the TechEmpower benchmarks is its filters, which allow you to filter out stuff that doesn’t matter to you. Due to the above fix you can look at the results and see the close threads vs pre-fork match here.

Using an HTML templating engine is very important for real-world apps, so in round 16 I added a test that uses Grantlee for rendering. It is around 46% slower than creating the HTML directly in C++; there might be room for improvements in Grantlee, but the fortunes results aren’t bad.

Some people asked why there were so many occurrences of Cutelyst in the tests. The reason is that these benchmarks allow you to test some features and see what performs better. In round 15 it became clear that using epoll vs Qt’s glib-based event loop was a clear win, so in round 16 we don’t test Qt’s glib one anymore, and epoll is now the default event dispatcher in Cutelyst 2 on Linux.

This round, however, I tried to reason about TCP_NODELAY, and while the results are close, it’s fairly clear that due to the blocking nature of the SQL tests, TCP_NODELAY decreases latency and increases performance a bit, since it doesn’t wait to assemble a bigger TCP packet before sending.

As I also mentioned in the release notes of Cutelyst 2.4.0, a fix was made to avoid a stack overflow, and you can see the results in the “Plain Text” test: before that fix, Cutelyst was crashing and respawning, which limited the results to 1,000,000 requests/second; after the fix it does 2,800,000 requests/second.

This round used Ubuntu 18.04 as base, Cutelyst 2.4.0 and Qt 5.9.5.

Last but not least, if you still try to compare to round 15 🙂 you might notice Cutelyst went lower in the ranking. That’s in part due to frameworks getting optimizations, but mostly due to new micro/platform frameworks; you can enable the Fullstack classification filter if you want to compare frameworks that provide features similar to what Cutelyst offers.

May 29, 2018

Cutelyst, the C++/Qt web framework, got another update.

This release includes:

  • Fix for our epoll event loop, spotted by our growing community
  • An SQL query conversion optimization (also from the community); we have helper methods to create QVariantHash, QVariantMap, QVariantList… from QSqlQuery to be used by Grantlee templates
  • New SQL query conversion to JSON types (QJsonObject, QJsonArray…), making it easier and faster to build REST services, as shown in our blog post about creating RESTful applications with Qt and Cutelyst
  • New boolean methods to test for the HTTP method used (isPUT(), isDelete()…), easing the work on REST apps
  • Fix for our CPU affinity logic: thanks to the TechEmpower benchmarks, I found out that when creating many worker processes they were all being assigned to a single CPU core. I knew that pre-fork was usually slower than threads on modern CPUs, but was always intrigued why the difference was so big. Hopefully it will be smaller now.

Since I’m currently building a REST service, OAuth2 seems like something that would be of good use; one user has started working on this, and hopefully it will soon be ready.

Have fun https://github.com/cutelyst/cutelyst/releases/tag/v2.4.0

 

May 17, 2018

This mini tutorial aims to show you the fundamentals of creating a RESTful application with Qt, as a client and as a server with the help of Cutelyst.

Services with REST APIs have become very popular in recent years, and interacting with them may be necessary to integrate with other services and keep your application relevant; it may also be interesting to replace your own custom protocol with a REST implementation.

REST is strongly associated with JSON; however, JSON is not required for a service to be RESTful. The way data is exchanged is chosen by whoever defines the API, i.e. it is possible to have REST exchanging messages in XML or another format. We will use JSON for its popularity and simplicity, and because the QJsonDocument class is present in the Qt Core module.

A REST service is mainly characterized by making use of the less-used HTTP headers and methods. Browsers basically use GET to fetch data and POST to send form and file data, while REST clients will also use other methods like DELETE, PUT and HEAD. Concerning headers, many APIs define headers for authentication; for example, X-Application-Token can contain a key generated only for the application of a user X, so that if this header does not contain the correct data, the request will not have access to the data.

Let’s start by defining the server API:

  • /api/v1/users
    • GET – Gets the list of users
      • Answer: [“uuid1”, “uuid2”]
    • POST – Register new user
      • Send: {“name”: “someone”, “age”: 32}
      • Answer: {“status”: “ok / error”, “uuid”: “new user uuid”, “error”: “msg in case of error”}
  • /api/v1/users/<UUID> – where <UUID> should be replaced by the user’s UUID
    • GET – Gets user information
      • Answer: {“name”: “someone”, “age”: 32}
    • PUT – Update user information
      • Send: {“name”: “someone”, “age”: 57}
      • Answer: {“status”: “ok / error”, “error”: “msg in case of error”}
    • DELETE – Delete user
      • Answer: {“status”: “ok / error”, “error”: “msg in case of error”}

For the sake of simplicity we will store the data using QSettings; we do not recommend it for real applications, but SQL or something like that is beyond the scope of this tutorial. We also assume that you already have Qt and Cutelyst installed. The code is available at https://github.com/ceciletti/example-qt-cutelyst-rest

Part 1 – RESTful Server with C++, Cutelyst and Qt

First we create the server application:

$ cutelyst2 --create-app ServerREST

And then we will create the Controller that will have the API methods:

$ cutelyst2 --controller ApiV1

Once the new class has been created, instantiate it in serverrest.cpp, in the init() method:

#include "apiv1.h"

bool ServerREST::init()
{
    new ApiV1(this);
    ...

Add the following methods to the file “apiv1.h”:

C_ATTR(users, :Local :AutoArgs :ActionClass(REST))
void users(Context *c);

C_ATTR(users_GET, :Private)
void users_GET(Context *c);

C_ATTR(users_POST, :Private)
void users_POST(Context *c);

C_ATTR(users_uuid, :Path('users') :AutoArgs :ActionClass(REST))
void users_uuid(Context *c, const QString &uuid);

C_ATTR(users_uuid_GET, :Private)
void users_uuid_GET(Context *c, const QString &uuid);

C_ATTR(users_uuid_PUT, :Private)
void users_uuid_PUT(Context *c, const QString &uuid);

C_ATTR(users_uuid_DELETE, :Private)
void users_uuid_DELETE(Context *c, const QString &uuid);

The C_ATTR macro is used to add metadata about the class that the MOC will generate; this is how Cutelyst knows how to map the URLs to these functions.

  • :Local – Maps the method name to the URL, generating /api/v1/users
  • :AutoArgs – Automatically checks the number of arguments after the Context *; in users_uuid we have only one, so the method will be called if the URL is /api/v1/users/any-thing
  • :ActionClass(REST) – Loads the REST plugin, which will create an Action class to take care of this method; ActionREST will call the other methods depending on the HTTP method used
  • :Private – Registers the action as private in Cutelyst, so that it is not directly accessible via a URL

This is enough to have automatic mapping to each function depending on the HTTP method. It is important to note that the first function (without _METHOD) is always executed; for more information see the API of ActionREST.
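To make that concrete, here is a minimal sketch (not from the tutorial’s repository) of what the base method can do; ActionREST runs it before dispatching to the _METHOD variants:

// Hypothetical body for the base action: it runs for every HTTP method on
// /api/v1/users before ActionREST forwards to users_GET, users_POST, etc.
void ApiV1::users(Context *c)
{
    qDebug() << "users endpoint called with method" << c->request()->method();
}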

For brevity I will show only the GET code of users; the rest can be seen on GitHub:

void ApiV1::users_GET(Context *c)
{
    QSettings s;
    const QStringList uuids = s.childGroups();

    c->response()->setJsonArrayBody(QJsonArray::fromStringList(uuids));
}
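As a rough illustration of the API defined above, users_POST could look like this (a sketch, not the repository’s actual code; it needs <QJsonDocument>, <QUuid> and <QSettings>):

void ApiV1::users_POST(Context *c)
{
    const QJsonObject in = QJsonDocument::fromJson(c->request()->body()->readAll()).object();

    QJsonObject out;
    if (in.contains(QStringLiteral("name")) && in.contains(QStringLiteral("age"))) {
        // QUuid::toString() wraps the UUID in braces, strip them for the API
        QString uuid = QUuid::createUuid().toString();
        uuid.remove(QLatin1Char('{')).remove(QLatin1Char('}'));

        QSettings s;
        s.beginGroup(uuid);
        s.setValue(QStringLiteral("name"), in.value(QStringLiteral("name")).toString());
        s.setValue(QStringLiteral("age"), in.value(QStringLiteral("age")).toInt());
        s.endGroup();

        out.insert(QStringLiteral("status"), QStringLiteral("ok"));
        out.insert(QStringLiteral("uuid"), uuid);
    } else {
        out.insert(QStringLiteral("status"), QStringLiteral("error"));
        out.insert(QStringLiteral("error"), QStringLiteral("missing name or age"));
    }
    c->response()->setJsonObjectBody(out);
}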

After implementing all the methods, start the server:

cutelyst2 -r --server --app-file path_to_it

To test the API you can try a POST with curl:

curl -H "Content-Type: application/json" -X POST -d '{"name": "someone", "age": 32}' http://localhost:3000/api/v1/users

Okay, now you have a REST server application, made with Qt, with one of the fastest response times in the old west 🙂

No, seriously – check out the benchmarks.

Now let’s go to part 2, which is to create the client application that will consume this API.

Part 2 – REST Client Application

First create a QWidgets project with a QMainWindow. The goal here is just to see how to create REST requests from Qt code, so we assume that you are already familiar with creating graphical interfaces with it.

Our interface will be composed of:

  • 1 – QComboBox where we will list users’ UUIDs
  • 1 – QLineEdit to enter and display the user name
  • 1 – QSpinBox to enter and view user age
  • 2 – QPushButton
    • To create or update a user’s registry
    • To delete the user record

Once the interface is designed, our QMainWindow subclass needs a pointer to a QNetworkAccessManager, the class responsible for handling communication with network services such as HTTP and FTP. This class works asynchronously; it operates like a browser that creates up to 6 simultaneous connections to the same server, and if you make more requests at the same time it will put them in a queue (or pipeline them if set).

Then create a QNetworkAccessManager *m_nam; as a member of your class so we can reuse it. Our request to obtain the list of users will be quite simple:

QNetworkRequest request(QUrl("http://localhost:3000/api/v1/users"));

QNetworkReply *reply = m_nam->get(request);
connect(reply, &QNetworkReply::finished, this, [this, reply] {
    reply->deleteLater();
    const QJsonDocument doc = QJsonDocument::fromJson(reply->readAll());
    const QJsonArray array = doc.array();

    for (const QJsonValue &value : array) {
        ui->uuidCB->addItem(value.toString());
    }
});

This fills our QComboBox with the data obtained from the server via GET. Now let’s look at the registration code, which is a little more complex:

QNetworkRequest request(QUrl("http://localhost:3000/api/v1/users"));
request.setHeader(QNetworkRequest::ContentTypeHeader, "application/json");

QJsonObject obj {
    {"name", ui->nameLE->text()},
    ("age", ui->ageSP->value()}
};

QNetworkReply *reply = m_nam->post(request, QJsonDocument(obj).toJson());
connect(reply, &QNetworkReply::finished, this, [this, reply] {
    reply->deleteLater();
    const QJsonDocument doc = QJsonDocument::fromJson(reply->readAll());
    const QJsonObject obj = doc.object();

    if (obj.value("status").toString() == "ok") {
        ui->uuidCB->addItem(obj.value("uuid").toString());
    } else {
        qWarning() << "ERROR" << obj.value("error").toString();
    }
});

With the above code we send an HTTP request using the POST method, which, like PUT, accepts sending data to the server. It is important to tell the server what kind of data it will be dealing with, so the “Content-Type” header is set to “application/json”; Qt prints a warning on the terminal if the content type has not been defined. As soon as the server responds, we add the new UUID to the combobox so that it stays up to date without having to fetch all UUIDs again.

As demonstrated, QNetworkAccessManager already has ready-made methods for the most common REST actions; however, if you want to send a request of a type such as OPTIONS, you will have to send a custom request:

m_nam->sendCustomRequest(request, "OPTIONS");

Did you like the article? Help by giving a star to Cutelyst and/or supporting me on Patreon

https://github.com/cutelyst/cutelyst

May 10, 2018

Cutelyst – The C++ Web Framework built with Qt, has a new release.

In this release a behavior change was made: when asking for POST or URL query parameters, or cookies, that have multiple keys, the last inserted one (closest to the right) is returned. Previously QMap was filled in reverse order so that values() would have them in left-to-right order, but this is not desired, and most other frameworks also return the last inserted value. To still get the ordered list, Request::queryParameters(“key”) builds a list in left-to-right order (while QMap::values() will have them reversed).
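A minimal sketch of the difference, assuming a request like ?key=a&key=b (names as used in this post):

// QMap-style access now returns the last inserted (right-most) value:
const QString last = c->request()->queryParameters().value(QStringLiteral("key")); // "b"

// The new overload builds the values in left-to-right order:
const QStringList ordered = c->request()->queryParameters(QStringLiteral("key")); // "a", "b"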

There are some fixes in the FastCGI implementation, as well as properly getting values when the uWSGI FastCGI protocol was in use.

Lastly, a crash when performing the TechEmpower benchmarks resulted in the removal of some nested event loop calls that were creating a huge stack, and made me realize that my proposed methods to make async apps easier were broken by design: basically, you can’t use QEventLoop::exec() to wait for events. So Context::wait() and next() are deprecated and the code disabled, with the exception of WebSockets; until I can make some changes to the HTTP/1 and FastCGI parsers, both are to be considered synchronous.

Removing a QCoreApplication::processEvents() call that had been added to try to give other connections an execution slice almost doubled the speed in plain text when benchmarking locally: using a single core, the requests per second went from 105 K to 190 K. After understanding the stupidity I did, this isn’t much of a surprise 🙂

https://github.com/cutelyst/cutelyst/releases/tag/v2.3.0

May 02, 2018

Virtlyst – a libvirt web interface to manage virtual machines has a new release.

This release finishes the support for connecting to TCP or TLS virtd servers; it also fixes creating new instances from the flavor panel, plus a few other fixes.

Of course I forgot to add the mandatory screenshot last time so here it goes:

[Screenshot: instances]

[Screenshot: console]

Go get it! https://github.com/cutelyst/Virtlyst/releases/tag/v1.1.0

If you like this software you can also support my work via patreon.

April 26, 2018

Cutelyst just got a new release!

Thanks to the release of Virtlyst – a web manager for virtual machines – several bugs were found and fixed in Cutelyst. The most important one in this release is in the WebSockets implementation, which broke due to the addition of HTTP/2 to cutelyst-wsgi2.

Fixing this was quite interesting: when I fixed the issue the first time, it started to make deleteLater() calls on null objects, which didn’t crash, but since Qt warns you about that it was hurting performance. Then I fixed the deleteLater() calls and WebSockets broke again 🙂 A little gdb foo later, I found out that when I asked for the WebSocket to close, Qt emitted disconnected as a result of that call, and that signal was deleting the object we rely on when we called close, which crashed the app.

As a new feature, the Context class has two new methods to help with async apps: a wait() method that creates a counter-based local event loop, and a next() method that decreases the event loop counter until quitting it. This way you can, for example, connect two QNetworkReply::finished() signals to Context::next() and call Context::wait(2); once the two requests are finished, wait() returns and the client receives the data.
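A minimal sketch of how that could look in a controller, based on my reading of the description above (m_nam is a hypothetical QNetworkAccessManager member; note that the May 10 post above explains these helpers were later deprecated):

void Api::aggregate(Context *c)
{
    QNetworkReply *r1 = m_nam->get(QNetworkRequest(QUrl(QStringLiteral("http://service-a/data"))));
    QNetworkReply *r2 = m_nam->get(QNetworkRequest(QUrl(QStringLiteral("http://service-b/data"))));
    connect(r1, &QNetworkReply::finished, c, &Context::next);
    connect(r2, &QNetworkReply::finished, c, &Context::next);

    c->wait(2); // local event loop runs until next() has been called twice

    c->response()->setBody(r1->readAll() + r2->readAll());
    r1->deleteLater();
    r2->deleteLater();
}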

Codacy was also added to increase the number of checks in our code base, and it has already shown some real issues.

UPDATE: Right after the release I found that a cppcheck code fix broke listening on TCP sockets; yes, writing more unit tests, especially for the WSGI module, is at the top of my TODO list. So 2.2.1 is out now 🙂

Get it here https://github.com/cutelyst/cutelyst/releases/tag/v2.2.1

April 19, 2018

Virtlyst is a web tool that allows you to manage virtual machines.

In essence it’s a clone of webvirtmgr, but using Cutelyst as the backend. The reasoning behind this was that my father-in-law needs a server for his ASP app on a Win2k server; the host has only 4 GiB of RAM, and after a week running webvirtmgr it was eating 300 MiB, close to 10% of all available RAM. To get a VNC or SPICE tunnel it spawns websockify, and each new instance uses around 20 MiB of RAM.

I found this unacceptable for a tool that is only going to be used once in a while, like when the Win2k server freezes or goes BSOD; CPU usage, while higher, didn’t play a role in this.

Choosing a web interface for KVM/libvirt is no easy task. I have used Archipel, and while it’s a nice app it is a pain to install; OpenStack is also overly complicated, and had a button labeled “Terminate” that deleted the instance entirely. A simple tool that’s especially simple to install was what I needed, and in fact it is what a bunch of people need: here at work, OpenStack was also discarded due to the hard installation process, and a mix of virsh/virt-manager is used.

Since webvirtmgr is a Django app, using its HTML templates was a breeze. Grantlee didn’t work with them out of the box – it has a few missing features – but one can work around those. The libvirt API is also quite well documented, so development was quite fast: it took me 3 weeks, and it could easily have been less than 2 if I had a bit more time.

So Virtlyst v1.0.0 is out. It uses less than 10 MiB of RAM, and implements the VNC/SPICE WebSocket proxy in the same process, so each connection adds almost nothing to memory usage. The software is already in production, and has all the features of webvirtmgr except the migration part, which I plan to add in a future release.

Following the steps of some fellow developers, I also created a Patreon account, so if you like what I do and would like to support my work, please consider becoming a Patron.

Enough said, go get it!

https://github.com/cutelyst/Virtlyst/archive/v1.0.0.tar.gz

April 02, 2018

Cutelyst, a Qt/C++ web framework, got bumped to 2.1. Yesterday I found a bug that, although not critical, is important enough to roll out a new release.

As a new feature we have the LangSelect plugin to help with auto-detection of the user language (Matthias).

And some important bug fixes:

  • Fix compilation with Qt 5.11
  • clazy fixes
  • Fix undefined cutelyst2_plugin symbol of uWSGI plugin
  • Fix for IPv6 address binding
  • Fix StoreHtpasswd::addUser if file doesn’t exist (Bart)
  • Fix parsing when the URL-encoded data (in the body or query) has a key without a value but carries the equals sign (?foo=&bar)

Get it https://github.com/cutelyst/cutelyst/archive/v2.1.0.tar.gz

 

March 20, 2018

Cutelyst, the Qt/C++ web framework, just got a major release update. Around one and a half years ago Cutelyst v1 got its first release with a stable API/ABI; many improvements were made during this period, but now it was time to clean up the mistakes and make room for new features.

Porting applications to v2 is mostly a breeze: since most API changes were done in the Engine class, replacing 1 with 2 and recompiling should be enough in most cases; at least this was the case for CMlyst, Sticklyst and my personal applications.

Due to the cleanup, the Cutelyst Core module got a size reduction, while the WSGI module increased a bit due to the new HTTP/2 parser. Windows MSVC is finally able to build and test all modules.

The WSGI module now defaults to using our custom epoll event loop (it can be switched back to Qt’s default one with an environment variable); this allows for steady performance without degradation when an increased number of simultaneous connections is made.

The validator plugins by Matthias got their share of improvements, and a new password quality validator was added, plus manual pages for the tools.

The HTTP/2 parser adds more value to our framework. Its binary nature makes it very easy to implement: in two days most of it was already working. But HTTP/2 comes with a dependency called HPACK, which has its own RFC. HPACK is the header compression mechanism created for HTTP/2, because the gzip compression used by SPDY had a security issue on HTTPS called CRIME.

The problem is that HPACK is not very trivial to implement; it took many hours and made KCalc my best friend when converting hex to binary to decimal and what not…

The Cutelyst HTTP/2 parser passes all tests of a tool named h2spec, and using h2load it even showed more requests per second than HTTP/1, but it’s complicated to benchmark these two different protocols, especially with different load tools.

Upgrading from HTTP/1.1 is supported with a switch, as is enabling H2 on HTTPS using ALPN negotiation (which is the only option browsers support); H2C, or HTTP/2 in clear text, is also supported, but it’s only useful if the client can connect with prior knowledge.

If you know HTTP/2, your question is: “Does it support server push?”. No, it doesn’t at the moment. SERVER_PUSH is a feature that allows the server to send CSS and JavaScript without the browser asking for it, so it can avoid the request the browser would make. However, this feature isn’t magical – it won’t make slow websites super fast – it’s also hard to do right, and each browser has its own complicated issues with this feature.

I strongly recommend reading this https://jakearchibald.com/2017/h2-push-tougher-than-i-thought/ .

This does not mean SERVER_PUSH won’t be implemented, quite the opposite: due to the need to implement it properly, I want more time to study the RFC and browser behavior so that I can provide a good API.

I have also done some last-minute performance improvements with the help of KDAB’s Hotspot and perf, and I must say that the days of profiling with weird/huge perf command line options are gone. Awesome tool!

Get it! https://github.com/cutelyst/cutelyst/archive/v2.0.0.tar.gz

If you like it please give us a star on GitHub!

Have fun!

February 20, 2018

Apper, the package/app manager based on PackageKit, got its 1.0.0 version on its 10th birthday!

[Screenshot: Apper 1.0.0]

Following PackageKit-Qt 1.0.1, I’m now doing an Apper release to match its API. The Apper source code had been ported to Qt5/KF5, but not completely, still using some kdelibs4 support classes; now the code got modernized to C++11 and the new Qt connect syntax, no code depends on kdelibs4 support anymore, it is properly using AppStream now, and the package model got some performance improvements.

It’s now a standalone application (no longer integrated in System Settings); this was because 99% of its usage was by calling the apper executable, and it helped fix a crash caused by it being a KCM module.

Feature-wise I plan to add support for fwupd, allowing users to update firmware with a native tool, but time is short and Cutelyst 2 with HTTP/2 support is my top priority now – help is welcome!

Download https://download.kde.org/stable/apper/1.0.0/apper-1.0.0.tar.xz.mirrorlist

February 16, 2018

On Valentine’s day TechEmpower released the results of round 15 of its benchmark tests for web frameworks. It took almost a year since round 14, and I really hope round 16 comes out sooner.

Since this round took a long time and was scheduled for release many times last year, I decided not to update Cutelyst, to avoid not having the chance to fix any issues and ending up with broken results. Cutelyst 1.9.0 and Qt 5.9 were used; both had some performance improvements compared to round 14, and thus you can see better results in this round – most notably, the JSON test went from 480K requests/second to 611K req/s. Also, due to this old Cutelyst release, jemalloc was again not used, because of a bug we had in the CMake files that didn’t link against it.

In this round some other frameworks also did their own optimizations, and a few managed to do better than Cutelyst, even though we were faster in all tests compared to the last round. It might even be related to some OS tuning, as most results seemed to go up a bit; however, if you apply the “Fullstack” frameworks filter, Cutelyst is leading in most tests.

The TreeFrog framework had results in TechEmpower long before I wrote the tests for Cutelyst, but due to errors in the TreeFrog tests in recent rounds, this was the first round where you can compare the results of two Qt web frameworks.

For the next round I expect the results to be even better, now that we properly use jemalloc and our epoll dispatcher got a better implementation. I also plan to use Cutelyst 2 and try increasing some buffers, as the servers have plenty of RAM that I didn’t care to use.

If you want to know more about Cutelyst visit https://cutelyst.org/ and give us a Star on GitHub (we are now past 300!!) https://github.com/cutelyst/cutelyst

February 07, 2018

This last month I decided to do some work on print-manager. Its code dates back to 2010, so it’s 8 years old now; my last commits were in 2014, after which Jan Grulich did the KF5/Qt5 port. Last year I tried to do some improvements but only managed to do a single commit.

Basically, it got my attention when I plugged in a printer last year, after a distro reinstall, and it crashed kded. So last month I took some time to work on it again, reducing the bug count from 54 to 22. There are still some bugs I’d like to fix, but I don’t have time anymore and they are not easy to reproduce. To highlight some of the improvements:

  • Code now uses the new Qt5 connect syntax everywhere possible
  • Code now builds with strict checks on casting from ASCII, removes usage of API deprecated before Qt 5.9, and uses ranged for loops and initializer lists
  • kded code is a bit simpler & safer
  • Wayland add-printer icon fixed
  • One Plasma startup crash fixed
  • and many other bugs fixed

So I’m back to working on the stuff I usually do, but there are still some important/valid bugs one could take care of; please feel free to try to fix them, and I’ll do my best to review your changes.

UPDATE: print-manager is part of KDE Applications, so all of these will be in the upcoming 18.04.

January 12, 2018

Happy new year, everyone!

PackageKitQt is a Qt library to interface with PackageKit.

It’s been a while since I did a proper PackageKitQt release, mostly because I’m focusing on other projects; but PackageKit’s API itself isn’t evolving as fast as it was, so updating stuff is quite easy.

It hadn’t seen a proper release in a while, and it got several patches from other contributors, including the Qt5 port, so 0.10.0 is for that. The 1.0.0 release is to match the PackageKit 1.0.0 API; while 0.10.0 works with that as well, some signals and methods that were useless in PackageKitQt were removed from PackageKit.

Add to the 1.0.0 release the complete offline-updates interface, some API cleanups, missing enums, and a performance improvement in parsing package IDs.

Due to the complexity and effort needed to roll a release, from now on releases will be done with tags on GitHub, so packagers stay tuned 🙂

https://github.com/hughsie/PackageKit-Qt/archive/v1.0.0.tar.gz

https://github.com/hughsie/PackageKit-Qt/archive/v0.10.0.tar.gz

December 31, 2017

The year is about to end, and so is the Cutelyst v1 series. I wasn’t planning another release this year, but Matthias added some nice new features, so I decided to roll 1.12 in 2017, branching the 1.x.x series; master is now officially Cutelyst 2, with no stable API/ABI until 2.0.0 is tagged.

HTTP/2 support will hopefully be part of Cutelyst 2.0.0. There aren’t any drastic changes in v2; the most important things are fixing MSVC builds and removing deprecated API.

Back to this release: it includes a new CSRF protection plugin, with a Grantlee tag similar to what Django has. Add to this many fixes, and the epoll event loop dispatcher is now even faster and got many fixes; performance-wise, its great advantage is when dealing with many simultaneous connections, where it can be 2-3 times faster than the default glib one.

https://github.com/cutelyst/cutelyst/archive/v1.12.0.tar.gz

Happy new hacking year!

November 29, 2017

Cutelyst, the Qt web framework, got a new release. This is likely to be the last of the year, and will be one of the last releases of the 1.x.x series. I’d like to add HTTP/2 support before branching 1.x.x and having master as 2.0, but I’m not sure I’ll do that yet.

For next year I’d like to have Cutelyst 2 packaged in most distros soon, due to Ubuntu’s LTS being released in April, and H2 might delay this – or I might delay H2, since it can be provided by a front-end server like Nginx.

The 1.11.0 version includes three new plugins written by Matthias Fehring: a Memcached plugin that simplifies talking to a memcached server, a memcached-based session store that keeps session information on the memcached server, and a static compressed plugin to serve compressed versions of static files.

Besides that, I extended the EngineRequest class to serve as a base class for the Engine’s requests, allowing us to get rid of some ugly casts and void pointers that carried the real request/connection. This doesn’t affect user code as long as it doesn’t implement its own engine.

Setting a JSON reply is a bit simpler now that two overloads directly accepting QJsonObject and QJsonArray were added.

The Cutelyst license preamble in the files was fixed to state it’s LGPLv2.1+, and finally pkg-config is now fully supported.

Go get/break/package/.* it!

https://github.com/cutelyst/cutelyst/archive/v1.11.0.tar.gz

October 31, 2017

Cutelyst, the Qt web framework, got a new release: another round of bug fixes, and this time increased unit test coverage.

The RoleACL plugin is a very useful one; it was written due to the need to control what users could access. Sadly the system I wrote that needed this went unused (although I got my money for it), so this plugin didn’t get much attention, partially because it was basically complete.

Then last release I added documentation to it due to a user’s request, and shortly after I found out that it wasn’t working at all – even worse, it wasn’t forbidding users like it should. So after the fix I moved on to writing unit tests for many other things. There’s still stuff missing, but overall coverage is much larger now.

Enjoy https://github.com/cutelyst/cutelyst/archive/v1.10.0.tar.gz

October 11, 2017

Cutelyst, the Qt web framework, got a new release. This is a rather small release, but it has some important fixes, so I decided to roll it sooner.

The dispatcher logic got 30% faster; parsing URL-encoded data is also a bit faster in some cases (using less memory); Context objects can now be instantiated by library users, to allow, for example, getting notifications from SQL databases and forwarding them to Cutelyst actions or Views; and pkg-config support has improved a bit but still misses most modules.

Have fun https://github.com/cutelyst/cutelyst/archive/v1.9.0.tar.gz

September 28, 2017

Last year, the AppStream specification gained proper support for adding metadata for fonts, after Richard Hughes had done some work on it years ago. We weren’t happy with how fonts were handled at that time, so we searched for better solutions, which is why this took a bit longer to be done. Last year, I implemented the final support for fonts in both appstream-generator (the metadata extractor used by Debian and a few others) and the AppStream specification. This blogpost was sitting on my todo list as a draft for a long time, and I only just managed to finish it, so sorry for announcing this so late. Fonts have already been available via AppStream for a year, and this post just sums up the status quo and some neat tricks if you want to write metainfo files for fonts. If you are following AppStream (or the Debian fonts list), you know everything already 🙂 .

Both Richard and I first tried to extract all the metadata needed to display fonts in a proper way to the users from the font files directly. This turned out to be very difficult, since font metadata is often wrong or incomplete, and certain desirable bits of metadata (like a longer description) are missing entirely. After messing around with different ways to solve this for days (after all, by extracting the data from font files directly we would have hundreds of fonts directly available in software centers), I also came to the same conclusion as Richard: the best and easiest solution here is to mandate the availability of metainfo files per font.

Which brings me to the second issue: what is a font? Any person who knows about fonts will understand one font as one font face, e.g. “Lato Regular Italic” or “Lato Bold”. A user, however, will see the font family as a font, e.g. just “Lato” instead of all the font faces separated out. Since AppStream data is used primarily by software centers, we want something that is easy for users to understand. Hence, an AppStream “font” component really describes a font family or collection of fonts, instead of individual font faces. We do also want AppStream data to be useful for system components looking for a specific font, which is why font components will advertise the individual font face names they contain via a <provides/> tag. Naming fonts and making them identifiable is a whole other issue; I used a document from Adobe on font naming issues as a rough guideline while working on this.

How to write a good metainfo file for a font is best shown with an example. Lato is a good-looking font family that we want displayed in a software center. So, we write a metainfo file for it and place it in /usr/share/metainfo/com.latofonts.Lato.metainfo.xml for the AppStream metadata generator to pick up:

<?xml version="1.0" encoding="UTF-8"?>
<component type="font">
  <id>com.latofonts.Lato</id>
  <metadata_license>FSFAP</metadata_license>
  <project_license>OFL-1.1</project_license>

  <name>Lato</name>
  <summary>A sanserif typeface family</summary>
  <description>
    <p>
      Lato is a sanserif typeface family designed in the Summer 2010 by Warsaw-based designer
      Łukasz Dziedzic (“Lato” means “Summer” in Polish). In December 2010 the Lato family
      was published under the open-source Open Font License by his foundry tyPoland, with
      support from Google.
    </p>
  </description>

  <url type="homepage">http://www.latofonts.com/</url>

  <provides>
    <font>Lato Regular</font>
    <font>Lato Black Italic</font>
    <font>Lato Black</font>
    <font>Lato Bold Italic</font>
    <font>Lato Bold</font>
    <font>Lato Hairline Italic</font>
    ...
  </provides>
</component>

When the file is processed, we know that we need to look for fonts in the package it is contained in. So, the appstream-generator will load all the fonts in the package and render example texts for them as an image, so we can show users a preview of the font. It will also use heuristics to render an “icon” for the respective font component using its regular typeface. Of course that is not ideal – what if there are multiple font faces in a package? What if the heuristics fail to detect the right font face to display?

This behavior can be influenced by adding <font/> tags to the <provides/> tag in the metainfo file. The font-provides tags should contain the full names of the font faces you want to associate with this font component. If the font file does not define a full name, the family and style are used instead. That way, someone writing the metainfo file can control which fonts belong to the described component. The metadata generator will also pick the first mentioned font name in the <provides/> list as the one to render the example icon for. It will also sort the example text images in the same order as the fonts are listed in the provides-tag.

The example lines of text are written in a language matching the font using Pango.

But what about symbolic fonts? Or fonts where any heuristic fails? At the moment, we see ugly tofu characters or boxes instead of an actual, useful representation of the font. This brings me to an unofficial extension to font metainfo files that, as far as I know, only appstream-generator supports at the moment. I am not happy enough with this solution to add it to the real specification, but it serves as a good method to fix up the edge cases where we can not render good example images for fonts. appstream-generator supports the FontIconText and FontSampleText custom AppStream properties to allow metainfo file authors to override the default texts and autodetected values. FontIconText will override the characters used to render the icon, while FontSampleText can be a line of text used to render the example images. This is especially useful for symbolic fonts, where the heuristics usually fail and we do not know which glyphs would be representative for a font.

For example, a font with mathematical symbols might want to add the following to its metainfo file:

<custom>
  <value key="FontIconText">∑√</value>
  <value key="FontSampleText">∑ ∮ √ ‖...‖ ⊕ 𝔼 ℕ ⋉</value>
</custom>

Any Unicode glyphs are allowed, but asgen will put some length restrictions on the texts.

So, in summary:

  • Fonts are hard
  • I need to blog faster
  • Please add metainfo files to your fonts and submit them upstream if you can!
  • Fonts must have a metainfo file in order to show up in GNOME Software, KDE Discover, AppCenter, etc.
  • The “new” font specification is backwards compatible with Richard’s pioneering work from 2014
  • The appstream-generator supports a few non-standard values to influence how font images are rendered that you might be interested in (maybe we can do something like that for appstream-builder as well)
  • The appstream-generator does not (yet?) support the <extends/> logic Richard outlined in his blog post, mainly because it wasn’t necessary in Debian/Ubuntu/Arch yet (which is asgen’s primary audience), and upstream projects would rarely want to write multiple metainfo files.
  • The metainfo files are not supposed to replace the existing fontconfig files, and we can not generate them from existing metadata, sadly
  • If you want a more detailed look at writing font metainfo files, take a look at the AppStream specification.
  • Please write more font metadata 😉

 

August 25, 2017

Sticklyst is a web paste tool, like pastebin or Stick Notes (paste.kde.org), built with Cutelyst and KDE Frameworks.

Building this kind of tool had been on my TODO list for a long time, but I never really put effort into it. When the idea first came up, I decided to look at the code of http://paste.scsys.co.uk/, which is powered by a Perl Catalyst application; to my surprise, the Perl module that handled syntax highlighting was a port of Kate’s code, and it even said it used Kate’s definitions.

If that code used Kate’s, I figured I’d better use the code directly. I hoped it wouldn’t be too coupled to GUI classes, but I failed to find the code. A week ago I saw a mention of it on a KDE mailing list, so with a little help from some IRC fellows I found the https://cgit.kde.org/syntax-highlighting.git/ code.

Then, thanks to some tips from Volker Krause, who showed me how to use it without GUI widgets, I managed to build a simple web app, which I presented live at the first Qt conference here in Brazil, in São Paulo: https://br.qtcon.org/.

Just for the record, the application uses less than 3 MB of RAM and renders quickly, using an SQLite db for storage.

The code is 90% complete, is in production at https://paste.cutelyst.org/, and is now up for adoption. Basically, someone willing to take care of it could implement the following features that are missing compared to Stick Notes:

  • Extra database support
  • URL shortener
  • User authentication

The list is short and the code is clean, go test, break, hack:

https://github.com/cutelyst/sticklyst

Have fun!

 

 

July 27, 2017

Cutelyst, the Qt web framework, has another stable release. This release is mostly filled with bug fixes; the commit log is rather small.

It got fixes in Cutelyst-WSGI to work properly on Windows, QtCreator integration fixes, proper installation of DLLs on Windows, and a fix for returning the right status from views (this lets you know if, for example, View::Email sent the email successfully).

Cutelyst is cross-platform; the code builds on CI running on Windows, OSX and Linux, but first class is still UNIX systems, as I don’t use Windows, nor have a VM with a development setup, so I have only ensured it builds fine on AppVeyor. Aurel Branzeanu provided some pull requests to improve that; if you happen to have experience with CMake, MSVC and Windows, please help me review them or make other pull requests.

The release also got two API additions:

  • Context::setStash(QString, ParamsMultiMap) – ParamsMultiMap, also known as QMap<QString, QString>, is a type used for HTTP query and body parameters; having this method avoids writing ugly code full of QVariant::fromValue() (see the sketch after this list)
  • Application::pathTo(QString) – instead of using the QStringList version, which joins the strings to build a path (like Catalyst does), this just takes the string as you would pass it to a QFile, which in turn deals with platform issues.
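A minimal sketch of the setStash() addition in a hypothetical action (the action name and the "template" value are illustrative):

void Root::search(Context *c)
{
    // ParamsMultiMap goes straight into the stash, no QVariant::fromValue() dance
    c->setStash(QStringLiteral("query"), c->request()->queryParameters());
    c->setStash(QStringLiteral("template"), QStringLiteral("search.html"));
}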

I’m a bit busy with html-qt and also planning a rewrite of simplemail-qt to be async, so expect smaller Cutelyst releases 🙂

Again, help is welcome (especially with Windows usage).

Get it here https://github.com/cutelyst/cutelyst/archive/v1.8.0.tar.gz

June 14, 2017

CMlyst is a web content management system built using Cutelyst. It was initially inspired by WordPress and then Ghost, so it’s a mixture of both.

Two years ago I did its first release, and since then I’ve been slowly improving it; it’s been in production for that long, serving the www.cutelyst.org web site/blog. The 0.2.0 release was a silent one, marking the transition from QSettings storage to SQLite.

Storing content in QSettings is quite interesting at first, since it’s easy to use, but it quickly showed itself unsuitable. First, it kept leaving .lock files around; second, it’s not very fast to access, so I used a cache with all the data and a notifier to update it when something changed in the directory – but this also didn’t properly trigger QFileSystemWatcher, so once a new page was out the cache wasn’t properly updated.

Once it was ported to SQLite, I decided to study how Ghost worked, mainly due to many Qt/KDE developers switching to it. Ghost is quite simplistic, so it was very easy to provide something quite compatible with it; porting a Ghost theme to CMlyst requires very few changes, due to its syntax being close to Grantlee/Django.

Due to the porting to SQLite, it also became clear that an export/import tool was needed, so you can now import/export in a JSON format pretty close to Ghost’s – actually, you can even import all your Ghost pages with it, but the opposite won’t work, because we store pages as HTML, not Markdown. My feeling about Markdown is that it is simple to use and convenient for geeks, but it’s yet another thing to teach users, who could simply use a WYSIWYG editor.

Security-wise you need to be sure that both Markdown and HTML are safe, and CMlyst doesn’t do this, so if you put it in production be sure that only users who know what they are doing use it – you can even break the layout with an unclosed tag.

But don’t worry, I’m working on a fix for this: html-qt is a WHATWG HTML5 specification parser, mostly complete, but the part that builds a DOM is not done yet. With it, I can make sure the HTML won’t break the layout, and remove unsafe tags.

Feature-wise, CMlyst has 80% of Ghost’s features; if you like it, please help add the missing features to the admin page.

Some cool numbers

Comparing CMlyst to Ghost can be tricky, but it’s interesting to see the numbers.

Memory usage:

  • CMlyst uses ~5MB
  • Ghost uses ~120MB

Requests per second (using the same page content)

  • CMlyst 3500/rps (production mode), 1108/rps (developer mode)
  • Ghost 100/rps (production mode)

While the RPS numbers are very different, in production you can use the NGINX cache, which would make Ghost’s slow RPS a non-issue – but that comes at the price of more storage and RAM usage. If you run on an AWS micro instance with 1 GB of RAM, this means you can have far fewer instances running at the same time: some simple math shows you could have 200 CMlyst instances vs 8 of Ghost.

Try it!

https://github.com/cutelyst/CMlyst/archive/v0.3.0.tar.gz

Sadly, it also looks like I’ll soon be forking Grantlee; the lack of maintenance just hit me yesterday (when I was going to release this): Qt 5.7+ changed QDateTime::toString() to include TZ data, which broke the Grantlee date filter, which wasn’t expecting that, so I had to do a weird workaround marking the date as local to avoid the extra information.

May 30, 2017

Cutelyst, the Qt/C++11 web framework, just got another important release.

WebSocket support is probably a key feature to have in a modern web framework. Perl Catalyst doesn’t look like it was designed with it in mind; the way I found to do WS there wasn’t intuitive.

Qt has WebSocket support via the QWebSockets module, which includes client and server implementations, and I have used it for a few jobs due to the lack of support inside Cutelyst. While looking at its implementation, I realized that it wouldn’t fit into Cutelyst, and it even had a TODO about a blocking call – so, no go.

I then looked at the uWSGI implementation, and while it is far from being RFC-compliant or even usable (IMO), the parser was much simpler – but it required that all data be available to parse. Messages can be split into frames (really, 63 bits to address the payload was probably not enough?), and each frame has a bit to indicate whether the message is over, which uWSGI simply discards; thus any fragmented message can’t be merged back.

WebSockets in Cutelyst have an API close to what QWebSocket has, but since we are a web framework we have more control over things. First, in your WebSocket endpoint you must call:

c->response()->webSocketHandshake()

If the return is false, just return from the method to ignore the request – or it might be the case that you want the same path to show a chat HTML page; this can happen if the client didn’t send the proper headers or if the Engine backend/protocol doesn’t support WebSockets. If true, you can connect to the Request object and get notified about pong, close, binary and text (UTF-8) messages and frames. A simple usage is:

connect(req, &Request::webSocketTextMessage, [=] (const QString &msg) {
    qDebug() << "Got text msg" << msg;
    c->response()->webSocketTextMessage(msg);
});

This will work as an echo server; the signals also include the Cutelyst::Context in case you are connecting to a slot.
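Putting both pieces together, a hypothetical endpoint could look roughly like this (a sketch under the API described above, with an invented Chat controller):

void Chat::ws(Context *c)
{
    if (!c->response()->webSocketHandshake()) {
        // Not a WebSocket request, or the engine doesn't support it;
        // we could render a normal chat HTML page here instead.
        return;
    }

    Request *req = c->request();
    connect(req, &Request::webSocketTextMessage, [=] (const QString &msg) {
        c->response()->webSocketTextMessage(msg); // echo the message back
    });
}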

The Cutelyst implementation is non-blocking, a little faster than QWebSockets, uses less memory, and passes all http://autobahn.ws/testsuite tests; it doesn’t support compression yet.

systemd socket activation support was added; this is another very cool feature, although it’s still missing a way to shut the application down on idle. In my tests with systemd socket activation, Cutelyst started so fast that the first request, which starts it, takes only twice the time it would take if Cutelyst were already running.

This release also include:

  • Fixes for using cutelyst-wsgi talking FastCGI to Apache
  • Workaround for QLocalServer closing down when an accept fails (very common when used on pre-fork/threaded servers)
  • Fix for cutelyst-wsgi in threaded mode, where POST requests would use the same buffer, corrupting data or crashing
  • A few other fixes.

Due to WebSockets, plans for Cutelyst 2 are taking shape, but I’ll probably try to add HTTP/2 support first, since the multiplexing part might make the Cutelyst Engine quite complex.

Have fun https://github.com/cutelyst/cutelyst/archive/v1.7.0.tar.gz

May 10, 2017

The new TechEmpower benchmark results are finally out; round 14 took almost 6 months to be ready, and while more frequent testing is desirable, this gave us plenty of time to improve things.

Last round a bug was crashing our threaded benchmarks (which wasted time restarting). Besides having that bug fixed, the most notable changes that improved our performance were:

  • Use of a custom epoll event dispatcher instead of the default glib one (the tests without “epoll” in their name use the default glib dispatcher)
  • jemalloc, which brought the coolest boost: you just need to link to it (or use LD_PRELOAD) and memory allocation magically gets faster (around 20% in the tests)
  • CPU affinity, which helps the scheduler keep each thread/process pinned to a CPU core (see the sketch after this list)
  • SO_REUSEPORT, which on Linux helps get connections evenly distributed among threads/processes
  • Several code optimizations thanks to perf
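For the CPU affinity item, here is an illustration of the underlying mechanism – pinning the current thread to a single core on Linux (a sketch, not Cutelyst’s actual code; build with -D_GNU_SOURCE):

#include <pthread.h>
#include <sched.h>

// Pin the calling thread to one CPU core so the scheduler stops moving it.
static void pinToCore(int core)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(cpu_set_t), &set);
}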

I’m very glad with our results: Cutelyst managed to stay among the top performers, which really pays off all the hard work; it also provides us with good numbers to show people who are interested.

Check it out https://www.techempower.com/benchmarks/

April 25, 2017

Once 1.5.0 was released, I thought the next release would be a small one. It started with a bunch of bug fixes, and Simon Wilper made a contribution to Utils::Sql; basically, when things get out to production you find bugs, so there were tons of fixes to the WSGI module.

Then the first preview of TechEmpower benchmarks round 14 came out, and Cutelyst performance was great, so I was planning to release 1.6.0 as it was – but the second preview fixed a bug where Cutelyst results were scaled up, so our performance was worse than in round 13, and that didn’t make sense since we now had jemalloc and a few other improvements.

Actually, the results on the 40+HT core server were close to the ones I got locally with a single thread.

Looking at the machine state, it was clear that only a few (9) workers were running at the same time. I then decided to create an experimental connection balancer for threads: basically, the main thread accepts incoming connections and evenly passes them to each thread; this of course puts a new bottleneck on the main thread. Once the code was ready – which ended up improving other parts of WSGI – I became aware of SO_REUSEPORT.

The SO_REUSEPORT socket option is available on Linux >3.9 and, different from BSD, it implements a simple load balancer. This obsoleted my thread balancer, although that is still useful on !Linux. This option is also nicer since it works for processes as well.
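As an illustration (not Cutelyst’s actual code): each worker opens its own listening socket on the same port with SO_REUSEPORT set before bind(), and the kernel distributes incoming connections among them:

#include <arpa/inet.h>
#include <cstdint>
#include <netinet/in.h>
#include <sys/socket.h>

// Each worker calls this for the same port; Linux >= 3.9 load balances.
int listenReusePort(uint16_t port)
{
    int fd = ::socket(AF_INET, SOCK_STREAM, 0);
    int on = 1;
    ::setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &on, sizeof(on)); // must be set before bind()

    sockaddr_in addr = {};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(port);
    ::bind(fd, reinterpret_cast<sockaddr *>(&addr), sizeof(addr));
    ::listen(fd, SOMAXCONN);
    return fd;
}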

With 80 cores there’s still the chance that the OS scheduler puts most of your threads on the same cores, and maybe even moves them around under load. So an option for setting CPU affinity was also added; it allows each worker to be pinned to one or more cores evenly. It uses the same logic as uwsgi.

Now that the WSGI module supported all these features, preview 3 of the benchmarks came out and the results were still terrible… further investigation revealed that a variable supposed to be set to the CPU core count was set to 8 instead of 80. I’m sure all this work did improve performance for servers with a lot of cores, so in the end the wrong interpretation was good after all 🙂

Preview 4 came out and we are back at the top; I’ll do another post once it’s final.

The code name “to infinity and beyond” came to mind due to the scalability options it got 😀

Last but not least, I did my best to get rid of doxygen’s missing-documentation warnings.

Have fun https://github.com/cutelyst/cutelyst/archive/v1.6.0.tar.gz


April 04, 2017

It’s time for a long-overdue blogpost about the status of Tanglu. Tanglu is a Debian derivative, started in early 2013 when the systemd debate at Debian was still hot. It was formed by a few people who wanted to create a Debian derivative for workstations with a time-based release schedule, using and showcasing new technologies (which include systemd, but also bundling systems and other things), and built in the open by a community using infrastructure similar to Debian’s. Tanglu is designed explicitly to complement Debian and not to compete with it on all devices.

Tanglu has achieved a lot of great things. We were the first Debian derivative to adopt systemd, and with the help of our contributors we could kill a few nasty issues affecting it and Debian before it ended up becoming default in Debian Jessie. We also started to use the Calamares installer relatively early, bringing a modern installation experience in addition to the traditional debian-installer. We performed the usrmerge early, uncovering a few more issues which were fed back into Debian to be resolved (while workarounds were added to Tanglu). We also briefly explored switching from initramfs-tools to Dracut, but this release goal was dropped due to issues (but might be revived later). A lot of other less-impactful changes happened as well, borrowing a lot of useful ideas and code from Ubuntu (kudos to them!).

On the infrastructure side, we set up the Debian Archive Kit (dak), managing to find a couple of issues (mostly hardcoded assumptions about Debian) and reporting them back to make using dak for distributions which aren’t Debian easier. We explored using fedmsg for our infrastructure, went through a long and painful iteration of build systems (buildbot -> Jenkins -> Debile) before finally ending up with Debile, and added a set of own custom tools to collect archive QA information and present it to our developers in an easy to digest way. Except for wanna-build, Tanglu is hosting an almost-complete clone of basic Debian archive management tools.

During the past year, however, the project’s progress slowed down significantly. For this, mostly I am to blame. One of the biggest challenges for a young project is to attract new developers and members and keep them engaged. A lot of the people coming to Tanglu and being interested in contributing were unfortunately not packagers and sometimes not developers, and we didn’t have the manpower to individually mentor these people and teach them the necessary skills. People asking for tasks were usually asked where their interests were and what they would like to do, to give them a useful task. This sounds great in principle, but in practice it is actually not very helpful. A curated list of “junior jobs” is a much better starting point. We also invested almost zero time in making our project known and creating the necessary “buzz” and excitement that’s actually needed to sustain a project like this. Doing more in the advertisement domain and “help newcomers” area is a high-priority issue in the Tanglu bugtracker, which to this day is still open. Doing good alone isn’t enough; talking about it is of crucial importance, and that is something I knew about but didn’t realize the impact of for quite a while. As strange as it sounds, investing in the tech alone isn’t enough – community building is of equal importance.

Regardless of that, Tanglu has members working on the project, but way too few to manage a project of this magnitude (getting package transitions migrated alone is a large task requiring quite some time while at the same time being incredibly boring :P). A lot of our current developers can only invest small amounts of time into the project because they have a lot of other projects as well.

The other reason Tanglu has problems is that too much is centralized on me. That is a problem I have wanted to rectify for a long time, but as soon as a task wasn’t getting done in Tanglu because no people were available to do it, I completed it. This essentially increased the project’s dependency on me as a single person, giving it a really low bus factor. It not only centralizes power in one person (which actually isn’t a problem as long as that person is available enough to perform tasks when asked), it also centralizes knowledge of how to run services and how to do things. And if you want to give up power, people first need the knowledge to perform the specific task (which they will never gain if there’s always that one guy doing it). I still haven’t found a great way to solve this – it’s a problem that essentially kills itself as soon as the project is big enough, but until then the only way to counter it slightly is to write lots of documentation.

Last year I had way less time to work on Tanglu than the project deserves. I also started to work for Purism on their PureOS Debian derivative (which is heavily influenced by some of the choices we made for Tanglu, but with a different focus – that’s probably something for another blogpost). A lot of the stuff I do for Purism duplicates the work I do on Tanglu, and also takes away time I have for the project. Additionally I need to invest a lot more time into other projects such as AppStream and a lot of random other stuff that just needs continuous maintenance and discussion (especially AppStream eats up a lot of time since it became really popular in a lot of places). There is also my MSc thesis in neuroscience that requires attention (and is actually in focus most of the time). All in all, I can’t split myself, and KDE’s cloning machine remains broken, so I can’t even use that ;-). In terms of projects there is also a personal hard limit on how much stuff I can handle, and exceeding it long-term is not very healthy; in those cases I try to satisfy all projects and in the end don’t focus enough on any of them, which leaves me with a lot of half-baked stuff (which helps nobody, and most importantly makes me lose the fun, energy and interest to work on it).

Good news everyone! (sort of)

So, this sounded overly negative; where does this leave Tanglu? The fact is, I cannot commit the crazy amount of time to it that I did in 2013. But I love the project, and I actually do have some time I can put into it. My work for Purism has an overlap with Tanglu, so Tanglu can actually benefit from the software I develop for them, maybe creating a synergy effect between PureOS and Tanglu. Tanglu is also important to me as a testing environment for future ideas (be it in infrastructure or in the “make bundling nice!” department).

So, what actually is the way forward? First, maybe I have the chance to find a few people willing to work on tasks in Tanglu. It’s a fun project, and I learned a lot while working on it. Tanglu also possesses some unique properties few other Debian derivatives have, like being built from source completely (allowing us things like swapping core components or compiling with more hardening flags, switching to newer KDE Plasma and GNOME faster, etc.). Second, if we do not have enough manpower, I think converting Tanglu into a rolling-release distribution might be the only viable way to keep the project running. A rolling release scheme creates much less effort for us than making releases (especially time-based ones!). That way, users will have a constantly updated and secure Tanglu system with machines doing most of the background work.

If it turns out that absolutely nothing works and we can’t attract new people to help with Tanglu, it would mean that there generally isn’t much interest from the developer or user side in a project like this, so shutting it down or scaling it down dramatically would be the only option. But I do not think that this is the case, and I believe that having Tanglu around is important. I also have some interesting plans for it which will be fun to implement for testing 🙂

The one thing that had to stop was leaving our users in the dark about what is happening.

Sorry for the long post, but there are some subjects which are worth writing more than 140 characters about 🙂

If you are interested in contributing to Tanglu, get in touch with us! We have an IRC channel #tanglu-devel on Freenode (go there for quicker responses!), forums and mailing lists.

It looks like I will be at Debconf this year as well, so you can also catch me there! I might even talk about PureOS/Tanglu infrastructure at the conference.

March 13, 2017

Cutelyst the C++/Qt web framework just got a new stable release.

Right after the last release, Matthias Fehring made another important contribution, adding support for internationalization in Cutelyst: you can now have your Grantlee templates properly translated depending on the user’s settings.

Then on IRC a user asked whether Cutelyst-WSGI had HTTPS support, which it didn’t. You could enable HTTPS (as I do) by putting NGINX in front of your application or by using uWSGI, but of course having it built into Cutelyst-WSGI is a lot more convenient, especially since his use case was embedded devices.

Cutelyst-WSGI also got support for --umask, --pidfile, --pidfile2 and --stop (which sends a signal to stop the instance based on the pidfile provided), better documentation, and fixes for respawning and cheaping workers. And since I now run it in production for all my web applications, FastCGI got some critical fixes.

The cutelyst command was updated to use the WSGI library so it works on Windows and OS X without requiring uWSGI, making development easier.

www.cutelyst.org got the Cutelyst logo and an updated CMlyst version, though the site still looks ugly…

Download here: https://github.com/cutelyst/cutelyst/archive/v1.5.0.tar.gz

Have fun!


February 16, 2017

Yes, it’s not a typo.

Thanks to the last batch of improvements, and with the great help of jemalloc, cutelyst-wsgi can do 100k requests per second using a single thread/process on my i5 CPU. Without jemalloc the rate was around 85k req/s.

This, together with the EPoll event loop, can really scale your web application. Initially I thought the option to replace Qt’s default glib event loop (on Unix) brought no gain, but after increasing the number of connections it handles them a lot better. With 256 connections, the glib event loop gets to 65k req/s, while the EPoll one stays at 90k req/s, a lot closer to the number measured when only 32 connections are tested.

Besides these lovely numbers, Matthias Fehring added a new memcached session backend and a change to finally get translations to work on Grantlee templates. cutelyst-wsgi got --socket-timeout, --lazy, many fixes, removal of deprecated Qt API usage, and Unix signal handling seems to be working properly now.

Get it! https://github.com/cutelyst/cutelyst/archive/r1.4.0.tar.gz

Hang out on the FreeNode #cutelyst IRC channel or the Google group: https://groups.google.com/forum/#!forum/cutelyst

Have fun!


January 24, 2017

Only 21 days after the last stable release and some huge progress was made.

The first big addition is a contribution by Matthias Fehring, which adds a validator module, allowing you to validate user input quickly and easily. A multitude of user input types is available, such as email, IP address, JSON, date and many more, with a syntax that can be used in multiple threads and avoids recreating the parsing rules:

static Validator v({ new ValidatorRequired(QStringLiteral("username")) });

if (v.validate(c, Validator::FillStashOnError)) { … }

Then I wanted to replace uWSGI on my server with cutelyst-wsgi, but although performance benchmarks show that NGINX still talks faster to cutelyst-wsgi using proxy_pass (HTTP), I wanted to have FastCGI or uwsgi protocol support.

Evaluating FastCGI vs. uwsgi was somewhat easy: FastCGI is widely supported, and due to a bad design decision the uwsgi protocol has no concept of keep-alive. So the client talks to NGINX with keep-alive, but NGINX keeps closing the connection when talking to your app, and this makes a huge difference, even if you are using UNIX domain sockets.

uWSGI has served us well, but performance- and flexibility-wise it’s not the right choice anymore. In async mode uWSGI has a fixed number of workers, which makes forking take longer and uses a lot more RAM, and it doesn’t support keep-alive on any protocol. The 2.1 release (which nobody knows when will be released) will support keep-alive for HTTP, but I still fail to see how that would scale with fixed resources.

Here are some numbers when benchmarking with a single worker on my laptop:

uWSGI 30k req/s (FastCGI protocol doesn’t support keep conn)
uWSGI 32k req/s (uwsgi protocol that also doesn’t support keeping connections)
cutelyst-wsgi 24k req/s (FastCGI keep_conn off)
cutelyst-wsgi 40k req/s (FastCGI keep_conn on)
cutelyst-wsgi 42k req/s (HTTP proxy_pass with keep conn on)

As you can see, the uwsgi protocol is faster than FastCGI, so if you still need uWSGI, use the uwsgi protocol; but there’s a clear win in using cutelyst-wsgi.

UNIX sockets weren’t supported in cutelyst-wsgi and are now supported with a HACK. Sadly, QLocalServer doesn’t expose the socket descriptor, plus a few other things are waiting for responses on their bug reports (maybe I’ll find time to write patches and ask for review), so I inspect the children() until a QSocketNotifier is found and get the descriptor from it. It works great, but I know it might break in future Qt releases; at least it won’t crash.
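
In case it helps to picture the hack, here is a minimal sketch of the children() inspection (my own illustration of the idea, not the actual cutelyst-wsgi code; it knowingly relies on Qt internals):

#include <QLocalServer>
#include <QSocketNotifier>

// Look for the QSocketNotifier QLocalServer creates internally for its
// listening socket; its socket() is the descriptor we are after.
static qintptr localServerDescriptor(QLocalServer *server)
{
    const auto notifiers = server->findChildren<QSocketNotifier *>();
    for (QSocketNotifier *notifier : notifiers) {
        if (notifier->type() == QSocketNotifier::Read)
            return notifier->socket();
    }
    return -1; // Qt internals changed; the hack no longer applies
}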

Along with UNIX sockets came command line options like --uid, --gid, --chown-socket and --socket-access, as well as systemd notify integration.

All of this made me review some code and realize a bad decision I had made: storing headers in lower case. Since the uWSGI and FastCGI protocols deliver them in upper-case form, I was wasting time converting them, and if the request comes over HTTP the header names are case-insensitive, so we have to normalize them anyway. This behavior is also used by frameworks like Django, and the change brought a good performance boost. It will only break your code if you use request headers in your Grantlee templates (which is uncommon, and we still have few users). Normalizing headers in the Headers class was also causing QString to detach, giving us a performance penalty; it will still detach if you don’t access/set the headers in the stored form (i.e. CONTENT_TYPE).
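
Roughly, such a normalization looks like this (a sketch of the concept, not the actual Headers implementation):

#include <QString>

// HTTP header field names are case-insensitive, so store them once in the
// CGI-style form uWSGI/FastCGI already deliver, e.g. "Content-Type" ->
// "CONTENT_TYPE"; no further conversion is needed later.
static QString normalizeHeaderKey(const QString &field)
{
    QString key = field;
    for (QChar &ch : key) {
        if (ch == QLatin1Char('-'))
            ch = QLatin1Char('_');
        else
            ch = ch.toUpper();
    }
    return key;
}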

These changes made for a boost from 60k req/s to 80k req/s on my machine.

But we are not done: Matthias Fehring also found a security issue. I don’t know when, but some change of mine broke the code that returned an invalid user, which was used to check whether authentication was successful, allowing a valid username to authenticate even when the logs showed the password didn’t match. Together with his patch I added unit tests to make sure this never breaks again.

And to finish, today I wrote unit tests for PBKDF2 according to RFC 6070, and while there I noticed the code could be faster: before my changes all tests took 44s, now they take 22s. Twice as fast matters, since this is CPU-bound code that needs to be fast to authenticate users without wasting your CPU cycles.
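
For reference, here is a textbook-style sketch of PBKDF2 built on Qt’s QMessageAuthenticationCode (my illustration of the algorithm, not the actual Cutelyst implementation):

#include <QByteArray>
#include <QCryptographicHash>
#include <QMessageAuthenticationCode>
#include <QtEndian>

static QByteArray pbkdf2(QCryptographicHash::Algorithm method,
                         const QByteArray &password, const QByteArray &salt,
                         int rounds, int keyLength)
{
    QByteArray key;
    quint32 blockIndex = 1;
    while (key.size() < keyLength) {
        // U1 = HMAC(password, salt || INT_32_BE(blockIndex))
        QByteArray index(4, '\0');
        qToBigEndian(blockIndex, reinterpret_cast<uchar *>(index.data()));
        QByteArray u = QMessageAuthenticationCode::hash(salt + index, password, method);
        QByteArray block = u;
        // Un = HMAC(password, Un-1); the block is U1 xor U2 xor ... xor Uc
        for (int i = 1; i < rounds; ++i) {
            u = QMessageAuthenticationCode::hash(u, password, method);
            for (int j = 0; j < block.size(); ++j)
                block[j] = block[j] ^ u[j];
        }
        key += block;
        ++blockIndex;
    }
    return key.left(keyLength);
}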

Get it! https://github.com/cutelyst/cutelyst/archive/r1.3.0.tar.gz

Oh, and while the FreeNode #cutelyst IRC channel is still empty, I created a Cutelyst group on Google Groups: https://groups.google.com/forum/#!forum/cutelyst

Have fun!


January 03, 2017

Cutelyst the C++/Qt web framework has a new release.

  • Test coverage got some additions to avoid breakage in the future.
  • Responses without content-length (chunked or close) are now handled properly.
  • The StatusMessage plugin got new, easier-to-use methods, and has the first deprecated API too.
  • The Engine class now has a struct with the request data the subclass should create; benchmarks showed this as a micro-optimization, but I’ve heard functions with many arguments (as it was before) are bad on ARM, so I guess this is the first optimization for ARM 🙂
  • The Chained dispatcher finally got a performance improvement; I didn’t benchmark it, but it should be much faster now.
  • Increased the usage of lambdas when the called function was small/simple; for some reason they reduce the library size, so I guess it’s a good thing…
  • The Sql helper classes can now use QThread::objectName(), which has the worker thread id as its name, so writing thread-safe code is easier (see the sketch after this list).
  • WSGI got static-map and static-map2 options; both work the same way as in uWSGI, allowing you to serve static files without needing uWSGI or a web server.
  • WSGI got both auto-reload and touch-reload implemented, which help a lot during development.
  • Request::addressString() was added to make it easier to get the string representation of the client IP, also removing the IPv6 prefix if IPv4 conversion succeeds.
  • Grantlee::View now exposes the Grantlee::Engine pointer so one can add filters to it (useful for i18n).
  • Context got a locale() method to help with translations.
  • Cutelyst::Core got ~25k smaller.
  • Some other small bug fixes and optimizations…

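To illustrate the Sql helper bullet above, here is a hedged sketch of deriving one database connection per worker thread from QThread::objectName(); the connection and driver names are illustrative, not Cutelyst’s actual helper API:

#include <QSqlDatabase>
#include <QThread>

static QSqlDatabase databaseForThread(const QString &baseName)
{
    // cutelyst-wsgi names each worker thread after its id, so appending
    // objectName() yields a distinct connection per worker thread.
    const QString name = baseName + QLatin1Char('-') + QThread::currentThread()->objectName();
    if (QSqlDatabase::contains(name))
        return QSqlDatabase::database(name);
    QSqlDatabase db = QSqlDatabase::addDatabase(QStringLiteral("QPSQL"), name);
    db.setDatabaseName(QStringLiteral("mydb")); // illustrative settings
    return db;
}
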
For 1.3.0 I hope the WSGI module will be able to deal with the FastCGI and/or uwsgi protocols, and to add helper methods to deal with i18n. But a CMlyst release might come first 🙂

Enjoy https://github.com/cutelyst/cutelyst/archive/r1.2.0.tar.gz


December 16, 2016

Cutelyst the C++/Qt Web Framework just got a new release.

Yesterday I was going to do the 1.1.0 release. After running the tests and being happy with the current state I wrote this blog post, but before publishing I went to the http://www.cutelyst.org CMS (CMlyst) to paste the same post, and I got hit by a bug I had seen on 1.0.0, which at the time I thought was a bug in CMlyst. So I decided to investigate and found a few other bugs: our WSGI was not properly changing the current directory, and was not replacing the + sign with a ' ' space when parsing form data, which was the main bug. So I did the fixes, tagged 1.1.1, and today found that automatically setting Content-Length wasn’t working when the View rendered nothing.

The Cutelyst Core and Plugins API/ABI are stable, but Cutelyst-WSGI had to have changes: I forgot to expose the API needed for it to actually be useful outside Cutelyst. This isn’t a problem (nobody would have been able to use it anyway), so API stability is now extended to all components.

Thanks to a KDAB video about perf I finally managed to use it and was able to detect two slow code paths in Cutelyst-WSGI.

The first, causing 7% overhead, was due to QTcpSocket emitting the readyRead() signal and the HttpParser code calling sender() to get the socket. It turns out the sender() implementation is not as simple as I thought, and there is a QMutexLocker which may have caused some threads to wait on it (not really sure). So now for each QTcpSocket there is an HttpParser instance; this uses a little more memory, but the code got faster. Hopefully in Qt 6 the signal can have a readyRead(QIODevice *) signature.
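
A hedged sketch of the same idea, capturing the socket instead of calling sender() in the slot; the Parser type here is a hypothetical stand-in for Cutelyst’s HttpParser:

#include <QTcpServer>
#include <QTcpSocket>

// Hypothetical stand-in for Cutelyst's HttpParser.
class Parser {
public:
    void feed(const QByteArray &data) { Q_UNUSED(data) /* parse bytes */ }
};

static void setupServer(QTcpServer *server)
{
    QObject::connect(server, &QTcpServer::newConnection, [server]() {
        QTcpSocket *socket = server->nextPendingConnection();
        auto parser = new Parser; // one parser per socket, no sender() lookup
        QObject::connect(socket, &QTcpSocket::readyRead, [socket, parser]() {
            parser->feed(socket->readAll());
        });
        QObject::connect(socket, &QTcpSocket::disconnected, [socket, parser]() {
            delete parser;
            socket->deleteLater();
        });
    });
}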

The second was that I had ended up using QByteArrayMatcher to match \r\n when splitting request headers; perf showed it was causing 1.2% overhead, and after doing some benchmarks I replaced it with code that was faster. Using a single thread on my Intel i5 I can process 85k req/s, so things did indeed get a little faster.

It got a contribution from a new developer on View::Email, which made me do a new simplemail-qt release (1.3.0); the code there still needs love regarding performance, but I haven’t managed to find time to improve it yet.

This new release has some new features:

  • An EPoll event loop was added to Linux builds; this is supposedly faster than the default GLib one, but I couldn’t measure the difference properly. It is optional and requires the CUTELYST_EVENT_LOOP_EPOLL=1 environment variable to be set.
  • ViewEmail got methods to set/get the authentication method and connection type
  • WSGI: TCP_NODELAY, TCP KEEPALIVE, SO_SNDBUF and SO_RCVBUF can now be set on the command line; these use the same names as in uwsgi
  • Documentation was improved a bit and can now be generated with make docs, which no longer requires me to change the Cutelyst version manually

And also some important bug fixes:

  • Lazy evaluation of Request methods was broken on 1.0.0
  • WSGI: Fixed loading configs and parsing command line options

Download https://github.com/cutelyst/cutelyst/archive/r1.1.2.tar.gz and have fun!


November 17, 2016

Right in my first blog post about Cutelyst, users asked me about more “realistic” benchmarks and mentioned the TechEmpower benchmarks. Though building the tests would have been a fairly easy task, at that time it didn’t seem useful to spend the effort, as Cutelyst was a moving target.

Around 0.12 it became clear that the API was nearly frozen, and since everyone loves benchmarks, they would serve as a “marketing” thing. Since the beginning of the project I’ve learned lots of optimizations which made the code faster and faster; there is still room for improvement, but changes now need to be carefully measured.

Cutelyst is the second web framework on the TechEmpower benchmarks that uses Qt, the other one being TreeFrog, and Qt’s appearance was noticed on their blog:

https://www.techempower.com/blog/2016/11/16/framework-benchmarks-round-13/

The cutelyst-thread tests suffered from a segfault (the crashed process was restarted by the master process); the fix is in the 1.0.0 release, but it was too late for round 13, so in round 14 it will get even better results.

https://www.techempower.com/benchmarks/

If you want to help, the MongoDB/Grantlee/Clearsilver tests are still missing 😀


November 10, 2016

Cutelyst, the Qt web framework, just reached its first stable release. It’s been 3 years since the first commit, and I can say it has finally got to a shape where I think I’m able to keep its API/ABI stable. The idea is to put any breaks into a 2.0 release by the end of next year, although I don’t expect many changes as I’m quite happy with its current state.

The biggest change from the last release is in the SessionStoreFile class. It initially used QSettings for its simplicity, but it was obvious that its performance would be far from ideal, so I replaced QSettings with a plain QFile and QDataStream. This produced smaller session files and made the code twice as fast, but profiling showed it was still slow, because it was writing to disk multiple times during the same request. So the code was changed to merge the changes and only save to disk when the Context gets destroyed. On my machine the performance went from 3.5k to 8.5k on reads and writes, and from 4k to 12k on reads (this is with the same session id; it should probably be faster with different ids). This is also very limited by disk IO, as we use a QLockFile to avoid concurrency, so if you need something faster you can subclass SessionStore and use SQL or Redis…
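
A hedged sketch of that approach: a plain QFile + QDataStream write, guarded by a QLockFile (the file layout here is illustrative, not Cutelyst’s actual on-disk format):

#include <QDataStream>
#include <QFile>
#include <QLockFile>
#include <QVariantHash>

static bool saveSession(const QString &path, const QVariantHash &data)
{
    QLockFile lock(path + QLatin1String(".lock"));
    if (!lock.lock()) // serialize access between processes
        return false;
    QFile file(path);
    if (!file.open(QIODevice::WriteOnly))
        return false;
    QDataStream out(&file);
    out << data; // QDataStream serializes QVariantHash directly
    return out.status() == QDataStream::Ok;
}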

Besides that, the TechEmpower round 13 previews showed a segfault in the new Cutelyst WSGI in threaded mode, very hard to reproduce but with an easy fix. The initial fix made the code a bit ugly, so I checked whether Xcode’s clang had got an update on the thread_local feature, and finally Xcode 8 supports it; if you are using Xcode’s clang you now need version 8.

Many other small bug fixes also got in, and the API is probably unchanged from 0.13.0.

Now, if you are a distro packager, please 😀 package it; and if you are a dev who was afraid of API breaks, keep calm and have fun!

Download: https://github.com/cutelyst/cutelyst/archive/r1.0.0.tar.gz


October 21, 2016

The last official stable release was done more than 3 years ago and was based on Qt/KDE 4 tech. After that, a few fixes got into what would have been 0.4.0, but as I needed to change my priorities it was never released.

Thanks to Lukáš Tinkl it was ported to KF5; in his port he increased the version number to 0.5.0, but still without a proper release, distros had to rely on a git checkout.

Since I started writing Cutelyst I had put a brake on other open source projects, mostly just reviewing the few patches that came in. Cutelyst allows me to create more of the stuff I get paid to do, so I’m using my free time to pick paid projects now. A few months ago Cristiano Bergoglio asked me about a KF5 port, and about getting a calibration tool that doesn’t depend on gnome-color-manager, and we made a deal to do this work.

This new release got a few bugs fixed and pending patches merged, and I did some code modernization to make use of Qt5/C++11 features. Oh, and yes, it doesn’t crash on Wayland anymore, but since on Wayland color correction is a task for the compositor, you won’t be able to set an ICC profile for a monitor, only for other devices.

For the next release, you will be able to calibrate your monitor without the need for the GNOME tools.

Download: http://download.kde.org/stable/colord-kde/0.5.0/src/colord-kde-0.5.0.tar.xz.mirrorlist


September 01, 2016

Cutelyst, the Qt web framework, just got a new release, 0.13.0.

A new release was needed now that we have this nice new logo.

Special thanks to Alessandro Longo (Alex L.) for crafting this cute logo, and a cool favicon for the Cutelyst web site.

But this release ain’t only about the logo, it’s full of cool things:

When I started Cutelyst, a simple developer Engine (read: HTTP engine) was created. It was very slow and mostly ugly hackery, but it helped work on the APIs that matter. I then took a look at uWSGI after a friend said it was awesome, and it was great to be able to deal with many protocols without the hassle of writing parsers for them.

Fast-forwarding to the 0.12.0 release, I started to feel that I was reaching a limit on Cutelyst optimizations and that uWSGI was holding us back. And it wasn’t only about performance: memory usage (scalability) was too high for something that should be rather small; it’s written in C after all.

It also has a fixed number of requests it can take: if you start it with 5 threads or processes, that’s 5 blocking clients that can be processed at the same time. If you use the async option you then have a fixed number of clients per process: 5 processes * 5 async clients = 25 clients at the same time. But these 5 async clients are always pre-allocated, which means each new process is also bigger right from launch.

Now think about WebSockets: how can one deal with 5000 simultaneous clients? 50 processes with async = 100? Performance in async mode was also slower, due to the complexity of dealing with them.

So before getting into writing an alternative to uWSGI in Cutelyst, I did a simple experiment: I asked uWSGI to load a Cutelyst app and fork 1000 times, and wrote a simple QCoreApplication that would do the same. uWSGI used > 1 GB of RAM and took around 10s to start, while the Qt app used < 300 MB of RAM and around 3s. ~700 MB is a lot of RAM, and that was enough to get me started.

Cutelyst-wsgi is born. Granted, the command line arguments are very similar to uWSGI’s, and I also followed the same separation between socket and protocol handling; of course in C++ things are more reusable, so our Protocol class has an HTTP subclass and in the future will have FastCGI and uwsgi ones too.

Did I say uWSGI before 2.1 doesn’t support keep-alive? And that 2.1 is not released, nor does anyone know when it will be? Cutelyst-wsgi supports keep-alive and HTTP pipelining, is completely async, and yes, performs a little better. If you put NGINX in front of uWSGI you can get keep-alive support, but guess what? The uwsgi protocol closes the connection to the front server, so it’s quite hard to get very high speeds. Preliminary results of TechEmpower Benchmarks round 13 showed Cutelyst hitting these limits while other frameworks were using keep-alive properly.

Thanks to this new engine, the Engine API got several improvements and is quite stable now. Besides that, a few other important changes were made as well:

  • Change internals to take advantage of NRVO (named return value optimization)
  • Improved speed of Context::uriFor(), making Cutelyst now require Qt 5.6 due to a behavior change in QUrl
  • Improved speed and memory usage of the URL query parser, 1s faster over 1M iterations. Using QByteArray::split() is very convenient, but it allocates more memory and a QList for the results; using ::indexOf() and extracting the parts manually is both faster and more memory-efficient (see the sketch after this list). Yes, this is the kind of optimization we do in Cutelyst::Core where it makes a difference; in application code the extra complexity might not be worth it.
  • C++ ranged for loops: all our Q_FOREACH & friends were replaced with ranged for loops
  • Use of the new reverse and equal_range iterators
  • Use QHash for storing headers; this was done after several benchmarks showed QHash was faster for all common cases (namely, if it kept the values() in order like QMap it would be used in other places as well)
  • Replaced most QList with QVector, and internally std::vector
  • Multipart/form-data got faster; it doesn’t seek() anymore, but requires a non-sequential QIODevice, as each Upload object points to parts of the body device.
  • Add a few more unit tests.

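Here is a hedged sketch of the split()-vs-indexOf() point from the list above, simplified (no percent-decoding) and not the actual Cutelyst parser:

#include <QByteArray>
#include <QList>
#include <QPair>

static QList<QPair<QByteArray, QByteArray>> parseQuery(const QByteArray &query)
{
    QList<QPair<QByteArray, QByteArray>> out;
    int pos = 0;
    while (pos < query.size()) {
        int amp = query.indexOf('&', pos); // end of this key=value pair
        if (amp == -1)
            amp = query.size();
        const int eq = query.indexOf('=', pos);
        if (eq != -1 && eq < amp) // key with a value
            out.append(qMakePair(query.mid(pos, eq - pos),
                                 query.mid(eq + 1, amp - eq - 1)));
        else // key without a value
            out.append(qMakePair(query.mid(pos, amp - pos), QByteArray()));
        pos = amp + 1;
    }
    return out;
}
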
Thanks to the above the core library size is also a bit smaller, ~640KB on x64.

I was planning to do a 1.0 after 0.13 but with this new engine I think it’s better to have a 0.14 version, and make sure no more changes in Core will be needed for additional protocols.

Download here and enjoy!


June 21, 2016

Cutelyst, a web framework built with Qt, is now closer to having its first stable release. With it turning 3 years old at the end of the year, I’m doing my best to finally iron it out and reach an API/ABI commitment. This release is full of cool stuff and a bunch of breaks, which most of the time just require recompiling.

For the last 2-3 weeks I’ve been working hard to get most of it unit tested; the Core behavior is now extensively tested with more than 200 tests. This has already proven its benefits, resulting in improved and fixed code.

Continuous integration got broader runs, with GCC and Clang on both OS X and Linux (Travis) and with MSVC 12 and 14 on Windows (AppVeyor). Luckily most of the features I wanted were implemented, but the MSVC compiler is pretty upsetting. Running Cutelyst on Windows would require uWSGI, which can be built with MinGW but is in an experimental state, and the developer HTTP engine is not production-ready, so Windows usefulness is limited at the moment.

One of the ‘hypes’ of the moment is non-blocking web servers, and this release also addresses that, so that uwsgi --async <number_of_requests> is properly handled. Of course there is no magic: if you enable this on blocking code, the requests will still have to wait for your blocking task to finish, but there are many benefits if you have non-blocking code. At the moment, once a slot is called to process the request and, say, you want to do a GET on some web service, you can use QNetworkAccessManager to do your call and create a local QEventLoop, so that once QNetworkReply’s finished() is emitted you continue processing. Hopefully some day the QtSql module will have an async API, but you can of course create a thread queue.
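
A minimal sketch of that pattern, using only public Qt API (my illustration, not a Cutelyst helper):

#include <QEventLoop>
#include <QNetworkAccessManager>
#include <QNetworkReply>
#include <QNetworkRequest>
#include <QUrl>

static QByteArray waitForGet(QNetworkAccessManager *nam, const QUrl &url)
{
    QNetworkReply *reply = nam->get(QNetworkRequest(url));
    QEventLoop loop;
    // Spin a local event loop; other requests keep being processed meanwhile.
    QObject::connect(reply, &QNetworkReply::finished, &loop, &QEventLoop::quit);
    loop.exec();
    const QByteArray body = reply->readAll();
    reply->deleteLater();
    return body;
}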

A new plugin called StatusMessage was also introduced; it generates an ID which you use when redirecting to some other page, the message is displayed only once, and it doesn’t suffer from the usual flash race conditions.

The upload parser for Content-Type multipart/form-data got a huge boost in performance, as it now uses QByteArrayMatcher to find boundaries; the bigger the upload, the more apparent the change.
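
For illustration, this is roughly how QByteArrayMatcher can be used to collect boundary offsets; a simplified sketch, not the actual upload parser:

#include <QByteArray>
#include <QByteArrayMatcher>
#include <QVector>

static QVector<int> boundaryOffsets(const QByteArray &body, const QByteArray &boundary)
{
    QByteArrayMatcher matcher(boundary); // precomputes the skip table once
    QVector<int> offsets;
    int from = 0;
    while ((from = matcher.indexIn(body, from)) != -1) {
        offsets.append(from);
        from += boundary.size();
    }
    return offsets;
}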

Chunked responses also got several fixes and one great improvement which allows using them with classes like QXmlStreamWriter, by just passing the Response class (which is now a QIODevice) to its constructor or setDevice(). On the first write the HTTP headers are sent and it starts creating chunks. For some reason this doesn’t work when using the uwsgi protocol behind NGINX; I still need to dig in, and maybe disable the chunk markup depending on the protocol used by uwsgi.
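
A hedged sketch of that usage inside an action, assuming Context::response() hands back the Response as a QIODevice (header names and includes per my understanding of the 0.13 API; treat details as illustrative):

#include <Cutelyst/Context>
#include <Cutelyst/Response>
#include <QXmlStreamWriter>

using namespace Cutelyst;

static void streamXml(Context *c)
{
    // Response is a QIODevice, so the writer can use it directly; the HTTP
    // headers go out on the first write and chunking begins.
    QXmlStreamWriter xml(c->response());
    xml.writeStartDocument();
    xml.writeStartElement(QStringLiteral("items"));
    xml.writeTextElement(QStringLiteral("item"), QStringLiteral("hello"));
    xml.writeEndElement(); // items
    xml.writeEndDocument();
}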

A Pagination class is also available to ease the work needed to write pagination, with methods to set the proper LIMIT and OFFSET on SQL queries.

Benchmarks for the TechEmpower framework were written and will be available on Round 14.

Last but not least there is now a QtCreator integration, which allows for creating a new project and Controller classes, but you need to manually copy (or link) the qtcreator directory to ~/.config/QtProject/qtcreator/templates/wizard.


As usual many bug fixes are in.

Help is welcome, you can mail me or hang on #cutelyst at freenode.

Download here.

 


June 17, 2016

I have wanted to write this blogpost since April, and even announced it in two previous posts, but never got to actually write it until now. And with the recent events in Snappy and Flatpak land, I cannot defer this post any longer (unless I want to answer the same questions over and over on IRC ^^).

As you know, I have been developing the Limba 3rd-party software installer since 2014 (see this LWN article explaining the project better than I could 😉 ), a spiritual successor to the Listaller project, which had been in development since roughly 2008. Limba got some competition from Flatpak and Snappy, so it’s natural to ask what the project’s next steps will be.

Meeting with the competition

At last FOSDEM and at the GNOME Software sprint this year in April, I met with Alexander Larsson and we discussed the rather unfortunate situation we got into, with Flatpak and Limba being in competition.

Both Alex and I have been experimenting with 3rd-party app distribution for quite some time, with me working on Listaller and him working on Glick and Glick2. None of these projects ever went anywhere. Around the time I started Limba, fixing design mistakes made with Listaller, Alex started a new attempt at software distribution, this time with sandboxing added to the mix and a new OSTree-based design of the software-distribution mechanism. It wasn’t at all clear back then that XdgApp, later renamed to Flatpak, would get huge backing from GNOME and later Red Hat, becoming a very promising candidate for a truly cross-distro software distribution system.

The main difference between Limba and Flatpak is that Limba allows modular runtimes, with things like the toolkit, helper libraries and programs being separate modules which can be updated independently. Flatpak, on the other hand, allows just one static runtime and enforces everything that is not already in the runtime to be bundled with the actual application. So, while a Limba bundle might depend on multiple individual other bundles, Flatpak bundles have only one fixed dependency on a runtime. Getting a compromise between those two concepts is not possible, and since the modular vs. static approaches in Limba and Flatpak were fundamental, conscious design decisions, merging the projects was not possible either.

Alex and I had very productive discussions, and except for the modularity issue, we were pretty much on the same page in every other aspect regarding the sandboxing and app-distribution matters.

Sometimes stepping out of the way is the best way to achieve progress

So, what to do now? Obviously, I can continue to push Limba forward, but given all the other projects I maintain, this seems to be a waste of resources (Limba eats a lot of my spare time). Now with Flatpak and Snappy being available, I am basically competing with Canonical and Red Hat, who can make much more progress much faster than I can as a single developer. Also, Flatpak’s bigger base of contributors compared to Limba is a clear sign of which project the community favors more.

Furthermore, I started the Listaller and Limba projects to scratch an itch. When being new to Linux, it was very annoying to me to see some applications only being made available in compiled form for one distribution, and sometimes one that I didn’t use. Getting software was incredibly hard for me as a newbie, and using the package-manager was also unusual back then (no software center apps existed, only package lists). If you wanted to update one app, you usually needed to update your whole distribution, sometimes even to a development version or rolling-release channel, sacrificing stability.

So, if now this issue gets solved by someone else in a good way, there is no point in pushing my solution hard. I developed a tool to solve a problem, and it looks like another tool will fix that issue now before mine does, which is fine, because this longstanding problem will finally be solved. And that’s the thing I actually care most about.

I still think Limba is the superior solution for app distribution, but it is also the one that is most complex and requires additional work by upstream projects to use it properly. Which is something most projects don’t want, and that’s completely fine. 😉

And that being said: I think Flatpak is a great project. Alex has much experience in this matter, and the design of Flatpak is sound. It solves many issues 3rd-party app development faces in a pretty elegant way, and I like it very much for that. Also the focus on sandboxing is great, although that part will need more time to become really useful. (Aside from that, working with Alexander is a pleasure, and he really cares about making Flatpak a truly cross-distributional, vendor independent project.)

Moving forward

So, what I will do now is not to stop Limba development completely, but keep it going as a research project. Maybe one can use Limba bundles to create Flatpak packages more easily. We also discussed having Flatpak launch applications installed by Limba, which would allow both systems to coexist and benefit from each other. Since Limba (unlike Listaller) was also explicitly designed for web-applications, and therefore has a slightly wider scope than Flatpak, this could make sense.

In any case though, I will invest much less time in the Limba project. This is good news for all the people out there using the Tanglu Linux distribution, AppStream-metadata-consuming services, PackageKit on Debian, etc. – those will receive more attention 😉

An integral part of Limba is a web service called “LimbaHub” to accept new bundles, do QA on them and publish them in a public repository. I will likely rewrite it to be a service using Flatpak bundles, maybe even supporting Flatpak bundles and Limba bundles side-by-side (and if useful, maybe also support AppImageKit and Snappy). But this project is still on the drawing board.

Let’s see 🙂

P.S: If you come to Debconf in Cape Town, make sure to not miss my talks about AppStream and bundling 🙂

May 06, 2016

I recently wrote a bigger project in the D programming language, the appstream-generator (asgen). Since I rarely leave the C/C++/Python realm, and came to like many aspects of D, I thought blogging about my experience could be useful for people considering to use D.

Disclaimer: I am not an expert on programming language design, and this is not universally valid criticism of D – just my personal opinion from building one project with it.

Why choose D in the first place?

The previous AppStream generator was written in Python, which wasn’t ideal for the task for multiple reasons, most notably multiprocessing and LMDB not working well together (and in general, multiprocessing being terrible to work with) and the need to reimplement some already existing C code in Python again.

So, I wanted a compiled language which would work well together with the existing C code in libappstream. Using C was an option, but my least favourite one (writing this in C would have been much more cumbersome). I looked at Go and Rust and wrote some small programs performing basic operations that I needed for asgen, to get a feeling for each language. Interfacing C code with Go was relatively hard – since libappstream is a GObject-based C library, I expected to be able to auto-generate Go bindings from the GIR, but there were only a few outdated projects available which did that. Rust, on the other hand, required the most time to learn, and since I only looked into it briefly, I still can’t write Rust code without having the coding reference open. I started to implement the same examples in D just for fun, as I didn’t plan to use D (I was aiming at Go back then), but the language looked interesting. The D language had the huge advantage of being very familiar to me as a C/C++ programmer, while also having a rich standard library, which included great stuff like std.concurrency.Generator, std.parallelism, etc. Translating Python code into D was incredibly easy; additionally, a gir-d-generator which is actively maintained exists (I created a small fork anyway, to be able to directly link against the libappstream library, instead of dynamically loading it).

What is great about D?

This list is just a huge braindump of things I had on my mind at the time of writing 😉

Interfacing with C

There are multiple things which make D awesome, for example interfacing with C code – and to a limited degree with C++ code – is really easy. Also, working with functions from C in D feels natural. Take these C functions imported into D:

extern(C):
nothrow:

struct _mystruct {}
alias mystruct_p = _mystruct*;

mystruct_p mystruct_create ();
void mystruct_load_file (mystruct_p my, const(char) *filename);
void mystruct_free (mystruct_p my);

You can call them from D code in two ways:

auto test = mystruct_create ();
// treating "test" as function parameter
mystruct_load_file (test, "/tmp/example");
// treating the function as member of "test"
test.mystruct_load_file ("/tmp/example");
test.mystruct_free ();

This allows writing logically sane code, in case the C functions can really be considered member functions of the struct they are acting on. This property of the language is a general concept, so a function which takes a string as first parameter, can also be called like a member function of string.

Writing D bindings to existing C code is also really simple, and can even be automatized using tools like dstep. Since D can also easily export C functions, calling D code from C is also possible.

Getting rid of C++ “cruft”

There are many things which are bad in C++, some of which are inherited from C. D kills pretty much all of the stuff I found annoying. Some cool stuff from D is now in C++ as well, which makes this point a bit less strong, but it’s still valid. E.g. getting rid of the #include preprocessor dance by using symbolic import statements makes sense, and there have IMHO been huge improvements over C++ when it comes to metaprogramming.

Incredibly powerful metaprogramming

Getting into detail about that would take way too long, but the metaprogramming abilities of D must be mentioned. You can do pretty much anything at compiletime, for example compiling regular expressions to make them run faster at runtime, or mixing in additional code from string constants. The template system is also very well thought out, and never caused me headaches as much as C++ sometimes manages to do.

Built-in unit-test support

Unittesting with D is really easy: You just add one or more unittest { } blocks to your code, in which you write your tests. When running the tests, the D compiler will collect the unittest blocks and build a test application out of them.

The unittest scope is useful, because you can keep the actual code and the tests close together, and it encourages writing tests and keep them up-to-date. Additionally, D has built-in support for contract programming, which helps to further reduce bugs by validating input/output.

Safe D

While D gives you the whole power of a low-level system programming language, it also allows you to write safer code and have the compiler check for that, while still being able to use unsafe functions when needed.

Unfortunately, @safe is not the default for functions though.

Separate operators for addition and concatenation

D exclusively uses the + operator for addition, while the ~ operator is used for concatenation. This is likely a personal quirk, but I love it very much that this distinction exists. It’s nice for things like addition of two vectors vs. concatenation of vectors, and makes the whole language much more precise in its meaning.

Optional garbage collector

D has an optional garbage collector. Developing in D without GC is currently a bit cumbersome, but these issues are being addressed. If you can live with a GC though, having it active makes programming much easier.

Built-in documentation generator

This is almost granted for most new languages, but still something I want to mention: Ddoc is a standard tool to generate code documentation for D code, with a defined syntax for describing function parameters, classes, etc. It will even take the contents of a unittest { } scope to generate automatic examples for the usage of a function, which is pretty cool.

Scope blocks

The scope statement allows one to execute a bit of code before the function exits, when it failed, or when it was successful. This is incredibly useful when working with C code, where a free statement needs to be issued when the function is exited, or some arbitrary cleanup needs to be performed on error. Yes, we do have smart pointers in C++ and – with some GCC/Clang extensions – a similar feature in C too. But the scopes concept in D is much more powerful. See Scope Guard Statement for details.

Built-in syntax for parallel programming

Working with threads is so much more fun in D compared to C! I recommend taking a look at the parallelism chapter of the “Programming in D” book.

“Pure” functions

D allows marking functions as purely functional, which allows the compiler to do optimizations on them, e.g. caching their return value. See pure-functions.

D is fast!

D matches the speed of C++ in almost all occasions, so you won’t lose performance when writing D code – that is, unless you have the GC run often in a threaded environment.

Very active and friendly community

The D community is very active and friendly – so far I only had good experience, and I basically came into the community asking some tough questions regarding distro-integration and ABI stability of D. The D community is very enthusiastic about pushing D and especially the metaprogramming features of D to its limits, and consists of very knowledgeable people. Most discussion happens at the forums/newsgroups at forum.dlang.org.

What is bad about D?

Half-proprietary reference compiler

This is probably the biggest issue. Not because the proprietary compiler is bad per se, but because of the implications this has for the D ecosystem.

For the reference D compiler, Digital Mars’ D (DMD), only the frontend is distributed under a free license (Boost), while the backend is proprietary. The FLOSS frontend is what the free compilers, LLVM D Compiler (LDC) and GNU D Compiler (GDC) are based on. But since DMD is the reference compiler, most features land there first, and the Phobos standard library and druntime is tuned to work with DMD first.

Since major Linux distributions can’t ship DMD, and the free compilers GDC and LDC lag behind DMD in terms of language, runtime and standard-library compatibility, this creates a split world of code that compiles with LDC, GDC or DMD, but never with all D compilers, due to it relying on features not yet in e.g. GDC’s Phobos.

Especially for Linux distributions, there is no way to say “use this compiler to get the best and latest D compatibility”. Additionally, if people can’t simply apt install latest-d, they are less likely to try the language. This is probably mainly an issue on Linux, but since Linux is the place where web applications are usually written and people are likely to try out new languages, it’s really bad that the proprietary reference compiler is hurting D adoption in that way.

That being said, I want to make clear that DMD is a great compiler, which is very fast and builds efficient code. I only criticise the fact that it is the language’s reference compiler.

UPDATE: To clarify the half-proprietary nature of the compiler, let me quote the D FAQ:

The front end for the dmd D compiler is open source. The back end for dmd is licensed from Symantec, and is not compatible with open-source licenses such as the GPL. Nonetheless, the complete source comes with the compiler, and all development takes place publically on github. Compilers using the DMD front end and the GCC and LLVM open source backends are also available. The runtime library is completely open source using the Boost License 1.0. The gdc and ldc D compilers are completely open sourced.

Phobos (standard library) is deprecating features too quickly

This basically goes hand in hand with the compiler issue mentioned above. Each D compiler ships its own version of Phobos, which it was tested against. For GDC, which I used to compile my code due to LDC having bugs at that time, this means that it is shipping with a very outdated copy of Phobos. Due to the rapid evolution of Phobos, this meant that the documentation of Phobos and the actual code I was working with were not always in sync, leading to many frustrating experiences.

Furthermore, Phobos is sometimes removing deprecated bits about a year after they have been deprecated. Together with the older-Phobos situation, you might find yourself in a place where a feature was dropped, but the cool replacement is not yet available. Or you are unable to import some 3rd-party code because it uses some deprecated-and-removed feature internally. Or you are unable to use other code, because it was developed with a D compiler shipping with a newer Phobos.

This is really annoying, and probably the biggest source of unhappiness I had while working with D – especially the documentation not matching the actual code is a bad experience for someone new to the language.

Incomplete free compilers with varying degrees of maturity

LDC and GDC have bugs, and for someone new to the language it’s not clear which one to choose. Both LDC and GDC have their own issues at times, but they are rapidly getting better, and I only encountered actual compiler bugs in LDC (GDC worked fine, but with an incredibly out-of-date Phobos). All those issues have since been fixed, but it was a frustrating experience. Some clear advice or explanation of which of the free compilers to prefer when you are new to D would be neat.

For GDC in particular, being developed outside of the main GCC project is likely a problem, because distributors need to manually add it to their GCC packaging, instead of having it readily available. I assume this is due to the DRuntime/Phobos not being subjected to the FSF CLA, but I can’t actually say anything substantial about this issue. Debian adds GDC to its GCC packaging, but e.g. Fedora does not do that.

No ABI compatibility

D has a defined ABI – too bad that in reality, the compilers are not interoperable. A binary compiled with GDC can’t call a library compiled with LDC or DMD. GDC actually doesn’t even support building shared libraries yet. For distributions, this is quite terrible, because it means that there must be one default D compiler, without any exception, and that users also need to use that specific compiler to link against distribution-provided D libraries. The different runtimes per compiler complicate that problem further.

The D package manager, dub, does not yet play well with distro packaging

This is an issue that is important to me, since I want my software to be easily packageable by Linux distributions. The issues causing packaging to be hard are reported as dub issue #838 and issue #839, with quite positive feedback so far, so this might soon be solved.

The GC is sometimes an issue

The garbage collector in D is quite dated (according to their own docs) and is currently being reworked. While working on asgen, which is a program creating a large amount of interconnected data structures in a threaded environment, I realized that the GC significantly slows down the application when threads are used (it also seems to use the UNIX signals SIGUSR1 and SIGUSR2 to stop/resume threads, which I still find odd). The GC also performed poorly under memory pressure, which got asgen killed by the OOM killer on some more memory-constrained machines. Triggering a manual collection run at the point where a large batch of these interconnected data structures was no longer needed solved this problem for most systems, but it would of course have been better not to need to give the GC any hints. The stop-the-world behavior isn’t a problem for asgen, but it might be for other applications.

These issues are at time being worked on, with a GSoC project laying the foundation for further GC improvements.

“version” is a reserved word

Okay, that is admittedly a very tiny nitpick, but when developing an app which works with packages and versions, it’s slightly annoying. The version keyword is used for conditional compilation, and needing to abbreviate it to ver in all parts of the code sucks a little (e.g. the “Package” interface can’t have a property “version”, but now has “ver” instead).

The ecosystem is not (yet) mature

In general it can be said that the D ecosystem, while existing for almost 9 years, is not yet that mature. There are various quirks you have to deal with when working with D code on Linux. It’s never anything major; usually you can easily solve these issues and move on, but it’s annoying to have these papercuts.

This is not something which can be resolved by D itself, this point will solve itself as more people start to use D and D support in Linux distributions gets more polished.

Conclusion

I like working with D, and I consider it to be a great language – the quirks it has in its toolchain are not bad enough to prevent writing great things with it.

At the moment, if I am not writing a shared library or something which uses much existing C++ code, I would prefer D for the task. If a garbage collector is a problem (e.g. for some real-time applications, or when the target architecture can’t run a GC), I would not recommend using D. Rust seems to be the much better choice then.

In any case, D’s flat learning curve (for C/C++ people) paired with the smart choices taken in language design, the powerful metaprogramming, the rich standard library and helpful community makes it great to try out and to develop software for scenarios where you would otherwise choose C++ or Java. Quite honestly, I think D could be a great language for tasks where you would usually choose Python, Java or C++, and I am seriously considering to replace quite some Python code with D code. For very low-level stuff, C is IMHO still the better choice.

As always, choosing the right programming language is only 50% technical aspects, and 50% personal taste 😉

UPDATE: To get some idea of D, check out the D tour on the new website tour.dlang.org.

April 26, 2016

This is a question raised quite often, the last time in a blogpost by Thomas, so I thought it would be a good idea to give a slightly longer explanation (and also create an article to link to…).

There are basically three reasons for using XML as the default format for metainfo files:

1. XML is easily forward/backward compatible, while YAML is not

This is a matter of extending the AppStream metainfo files with new entries, or adapt existing entries to new needs.

Take this example XML line for defining an icon for an application:

<icon type="cached">foobar.png</icon>

and now the equivalent YAML:

Icons:
  cached: foobar.png

Now consider we want to add a width and height property to the icons, because we started to allow more than one icon size. Easy for the XML:

<icon type="cached" width="128" height="128">foobar.png</icon>

This line of XML can be read correctly by both old parsers, which will just see the icon as before without reading the size information, and new parsers, which can make use of the additional information if they want. The change is both forward and backward compatible.

This looks different for the YAML file. The “foobar.png” is a string type, and parsers will expect a string as the value for the cached key, while we would need a dictionary there to include the additional width/height information:

Icons:
  cached:
    name: foobar.png
    width: 128
    height: 128

The change shown above will break existing parsers though. Of course, we could add a cached2 key, but that would require people to write two entries, to keep compatibility with older parsers:

Icons:
  cached: foobar.png
  cached2:
    name: foobar.png
    width: 128
    height: 128

Less than ideal.

While there are ways to break compatibility in XML documents too, as well as ways to design YAML documents to minimize the risk of breaking compatibility later, keeping the format future-proof is far easier with XML than with YAML (and sometimes simply not possible with YAML documents). This makes XML a good choice for this use case, since we cannot easily do transitions with thousands of independent upstream projects, and we need to care about backwards compatibility.

2. Translating YAML is not much fun

A property of AppStream metainfo files is that they can be easily translated into multiple languages. For that, tools like intltool and itstool exist to aid with translating XML using Gettext files. This can be done at project build-time, keeping a clean, minimal XML file, or before, storing the translated strings directly in the XML document. Generally, YAML files can be translated too. Take the following example (shamelessly copied from Dolphin):

<summary>File Manager</summary>
<summary xml:lang="bs">Upravitelj datoteka</summary>
<summary xml:lang="cs">Správce souborů</summary>
<summary xml:lang="da">Filhåndtering</summary>

This would become something like this in YAML:

Summary:
  C: File Manager
  bs: Upravitelj datoteka
  cs: Správce souborů
  da: Filhåndtering

Looks manageable, right? Now, AppStream also covers long descriptions, where individual paragraphs can be translated by the translators. This looks like this in XML:

<description>
  <p>Dolphin is a lightweight file manager. It has been designed with ease of use and simplicity in mind, while still allowing flexibility and customisation. This means that you can do your file management exactly the way you want to do it.</p>
  <p xml:lang="de">Dolphin ist ein schlankes Programm zur Dateiverwaltung. Es wurde mit dem Ziel entwickelt, einfach in der Anwendung, dabei aber auch flexibel und anpassungsfähig zu sein. Sie können daher Ihre Dateiverwaltungsaufgaben genau nach Ihren Bedürfnissen ausführen.</p>
  <p>Features:</p>
  <p xml:lang="de">Funktionen:</p>
  <p xml:lang="es">Características:</p>
  <ul>
    <li>Navigation (or breadcrumb) bar for URLs, allowing you to quickly navigate through the hierarchy of files and folders.</li>
    <li xml:lang="de">Navigationsleiste für Adressen (auch editierbar), mit der Sie schnell durch die Hierarchie der Dateien und Ordner navigieren können.</li>
    <li xml:lang="es">barra de navegación (o de ruta completa) para URL que permite navegar rápidamente a través de la jerarquía de archivos y carpetas.</li>
    <li>Supports several different kinds of view styles and properties and allows you to configure the view exactly how you want it.</li>
    ....
  </ul>
</description>

Now, how would you represent this in YAML? Since we need to preserve the paragraph and enumeration markup somehow, and creating a large chain of YAML dictionaries is not really a sane option, the only choices would be:

  • Embed the HTML markup in the file, and risk non-careful translators breaking the markup by e.g. not closing tags.
  • Use Markdown, and risk people not writing the markup correctly when translating a really long string in Gettext.

In both cases, we would lose the ability to translate individual paragraphs, which also means that as soon as the developer changes the original text in the YAML, translators would need to translate the whole bunch again, which is inconvenient.

On top of that, there are no tools to translate YAML properly that I am aware of, so we would need to write those too.

3. Allowing XML and YAML makes a confusing story and adds complexity

While adding YAML as a format would not be too hard, given that we already support it for DEP-11 distro metadata (Debian uses this), it would make the business of creating metainfo files more confusing. Right now, we have a clear story: write the XML, store it in /usr/share/metainfo, use standard tools to translate the translatable entries. Adding YAML to the mix adds an additional choice that needs to be supported for eternity, and it also has the problems mentioned above.

I wanted to add YAML as format for AppStream, and we discussed this at the hackfest as well, but in the end I think it isn’t worth the pain of supporting it for upstream projects (remember, someone needs to maintain the parsers and specification too and keep XML and YAML in sync and updated). Don’t get me wrong, I love YAML, but for translated metadata which needs a guarantee on format stability it is not the ideal choice.

So yeah, XML isn’t fun to write by hand. But for this case, XML is a good choice.

Two weeks ago was the GNOME Software hackfest in London, and I’ve been there! I only just now found the time to blog about it, but better late than never 😉 .

Arriving in London and finding the Red Hat offices

After being stuck in trains for the weekend, but fortunately arriving at the airport in time, I finally made it to London with quite some delay due to the slow bus transfer from Stansted Airport. After finding the hotel, the next issue was to get food at a place which accepted my credit card, which was surprisingly hard – in defence of London I must say, though, that it was a Sunday, 7 p.m., and my card is somewhat special (in Canada, it managed to crash some card readers, so they needed a hard-reset). While searching for food, I also accidentally found the Red Hat offices where the hackfest was starting the next day. My hotel, the office and the Tower Bridge were really close together, which was awesome! The last time I was in London was 2008, and only for a day, so being that close to the city center was great. The hackfest didn’t leave any time to visit the city much, but by being close to the center, one could hardly avoid the “London experience” 😉 .

Cool people working on great stuff

That’s basically the summary for the hackfest 😉 . It was awesome to meet with Richard Hughes again, since we hadn’t seen each other in person since 2011, despite working on lots of stuff together. This was especially important, since we managed to resolve quite some disagreements we had over stuff – Richard even almost managed to make me give in to adding <kudos/> to the AppStream spec, something which I was pretty against supporting (it didn’t make it yet, but I am no longer against the idea of having that – the remaining issues are solvable).

Meeting Iain Lane again (after FOSDEM) was also very nice, and also seeing other people I’ve only worked with over IRC or bug reports (e.g. William, Kalev, …) was great. Also lots of “new” people were there, like guys from Endless, who build their low-budget computer for developing/emerging countries on top of GNOME and Linux technologies. It’s pretty cool stuff they do, you should check out their website! (they also build their distribution on top of Debian, which is even more awesome, and something I didn’t know before (because many Endless people I met before were associated with GNOME or Fedora, I kind of implicitly assumed the system was based on Fedora 😛 )).

The incarnation of GNOME Software used by Endless looks pretty different from what the normal GNOME user sees, since it’s adjusted for a different audience and input method. But it looks great, and is a good example of how versatile GS already is! And for upstream GNOME, we’ve seen some pretty great mockups done by Endless too – I hope those will make it into production somehow.

Ironically, a “snapstore” was close to the office 😉

XdgApp and sandboxing of apps was also a big topic, aside from Ubuntu and Endless integration. Fortunately, Alexander Larsson was also there to answer all the sandboxing and XdgApp-questions.

I used the time to follow up on a conversation with Alexander we started at FOSDEM this year, about the Limba vs. XdgApp bundling issue. While we are in line on the sandboxing approach, the way software is distributed is implemented differently in Limba and XdgApp, and it is bad to have too many bundling systems around (it doesn’t make for a good story where we can just tell developers “ship as this bundling format, and it will be supported everywhere”). Talking with Alex about this was very nice, and I think there is a way out of the too-many-solutions dilemma, at least for Limba and XdgApp – I will blog about that separately soon.

On the Ubuntu side, a lot of bugs and issues were squashed and changes upstreamed to GNOME, and people were generally doing their best to reduce Richard’s bus-factor on the project a little 😉 .

I mainly worked on AppStream issues, finishing up the last pieces of appstream-generator and running it against some sample package sets (and later that week against the whole Debian archive). I also started to implement support for showing AppStream issues in the Debian PTS (this work is not finished yet). I also managed to solve a few bugs in the old DEP-11 generator and prepare another release for Ubuntu.

We also enjoyed some good Japanese food, and some incredibly great, but also suddenly very expensive Indian food (but that’s a different story 😉 ).

The most important thing for me though was to get together with people actually using AppStream metadata in software centers and also in more specialized places. This yielded some useful findings, e.g. that localized screenshots are not something weird, but actually a wanted feature of Endless for their curated AppStore. So localized screenshots will be part of the next AppStream spec. Also, there seems to be a general need to ship curation information for software centers somehow (which apps are featured? How are they styled? Special banners for some featured apps, “app of the day” features, etc.). This problem hasn’t been solved, since it’s highly implementation-specific, and AppStream should be distro-agnostic. But it is something we might be able to address in a generic way sooner or later (I need to talk to people at KDE and Elementary about it).

In summary…

It was a great event! Going to conferences and hackfests always makes me feel like it moves projects leaps ahead, even if you do little coding. Sorting out issues together with people you see in person (rather than communicating with them via text messages or video chat), is IMHO always the most productive way to move forward (yeah, unless you do this every week, but I think you get my point 😀 ).

For me, being the only (and youngest ^^) developer at the hackfest who was not employed by any company in the FLOSS business, the hackfest was also motivating to continue to invest spare time into working on these projects.

So, the only thing left to do is a huge shout out of “THANK YOU” to the Ubuntu Community Fund – and therefore the Ubuntu community – for sponsoring me! You rock! Also huge thanks to Canonical for organizing the sponsoring really quickly, so I didn’t get into trouble with paying my flights.

Laney and attente on the Millennium Bridge after we walked the distance between Red Hat and Canonical’s offices.

To worried KDE people: No, I didn’t leave the blue side – I just generally work on cross-desktop stuff, and would like all desktops to work as well as possible 😉

April 16, 2016

AppStream Generator

Since mid-2015 we have been using the dep11-generator in Debian to build AppStream metadata about available software components in the distribution.

Getting rid of dep11-generator

Unfortunately, the old Python-based dep11-generator was hitting some hard limits pretty soon. For example, using multiprocessing with Python was a pain, since it resulted in some very hard-to-track bugs. The multiprocessing approach (as opposed to multithreading) also made it impossible to use the underlying LMDB database properly (it was basically closed and reopened in each forked-off process, since pickling the Python LMDB object caused some really funny bugs, which usually manifested themselves in the application hanging forever without any information on what was going on). In addition to that, the Python-based generator forced me to maintain two implementations of the AppStream YAML spec, one in C and one in Python, which consumed quite some time. There were also some other issues (e.g. no unit tests) in the implementation, which made me think about rewriting the generator.

Adventures in Go / Rust / D

Since I didn’t want to write this new piece of software in C (or basically, writing it in C was my last option 😉 ), I explored Go and Rust for this purpose and also did a small prototype in the D programming language, when I was starting to feel really adventurous. And while I never intended to write the new generator in D (I was pretty fixated on Go…), this is what happened. The strong points for D for this particular project were its close relation to C (and ease of using existing C code), its super-flat learning curve for someone who knows and likes C and C++ and its pretty powerful implementations of the concurrent and parallel programming paradigms. That being said, not all is great in D and there are some pretty dark spots too, mainly when it comes to the standard library and compilers. I will dive into my experiences with D in a separate blogpost.

What good to expect from appstream-generator?

So, what can the new appstream-generator do for you? Basically, the same as the old dep11-generator: It will extract metadata from a distribution’s package archive, download and resize screenshots, search for icons and size them properly and generate reports in JSON and HTML of found metadata and issues.

LibAppStream-based parsing, generation of YAML or XML, multi-distro support, …

As opposed to the old generator, the new generator utilizes the metadata parsers and writers of libappstream. This allows it to return the extracted metadata as AppStream YAML (for Debian) or XML (everyone else). It is also written in a distribution-agnostic way, so if someone wants to use it in a different distribution than Debian, this is possible now. It just requires a very small distribution-specific backend to be written; all the details of the metadata extraction are abstracted away (just two interfaces need to be implemented, roughly sketched below). While I do not expect anyone except Debian to use this in the near future (most distros have found a solution to generate metadata already), the frontend-backend split is a much cleaner design than what was available in the previous code. It also allows unit-testing the code properly, without providing a Debian archive in the testsuite.
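
To give an idea of how small such a backend is, here is a rough sketch of the frontend-backend split in C++ (illustrative only: asgen itself is written in D, and the real interface names and signatures differ):

#include <cstdint>
#include <memory>
#include <string>
#include <vector>

// One package in the archive; the generator only needs a name and a way
// to read individual files (e.g. .desktop files, icons) from the payload.
class Package {
public:
    virtual ~Package() = default;
    virtual std::string name() const = 0;
    virtual std::vector<uint8_t> readFile(const std::string &path) = 0;
};

// The package index a distribution-specific backend has to provide:
// list all packages for a suite/component/architecture triple.
class PackageIndex {
public:
    virtual ~PackageIndex() = default;
    virtual std::vector<std::shared_ptr<Package>>
    packagesFor(const std::string &suite, const std::string &component,
                const std::string &arch) = 0;
};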

Feature Flags, Optipng, …

The new generator also allows enabling and disabling certain sets of features in a standardized way. E.g. Ubuntu uses a language-pack system for translations, which Debian doesn’t use. Features like this can be implemented as separate, disableable modules in the generator. We currently use this to e.g. allow descriptions from packages to be used as AppStream descriptions, or for running optipng on the generated PNG images and icons.

No more Contents file dependency

Another issue the old generator had was that it used the Contents file from the Debian archive to find matching icons for an application. We could never be sure whether the entries in the Contents file actually matched the contents of the package we were currently dealing with. What made things worse is that at Ubuntu, the archive software is only updating the Contents file weekly (while the generator might run multiple times a day), which has led to software being ignored in the metadata, because icons could not yet be found. Even on Debian, with its quickly-updated Contents file, we could immediately see the effects of an out-of-date Contents file when updating it failed once. In the new generator, we read the contents of each package ourselves now and store them in an LMDB database, bypassing the Contents file and removing the whole class of problems resulting from missing or wrong contents-data.

It can’t all be good, right?

That is true, there are also some known issues the new generator has:

Large amounts of RAM required

The better speed of the new generator comes at the cost of holding more stuff in RAM. Much more. When processing data from 5 architectures initially on Debian, the amount of required RAM might lie above 4GB, with the OOM killer sometimes being quicker than the garbage collector… That being said, on subsequent runs the amount of required memory is much lower. Still, this is something I am working on to improve.

What are symbolic links?

To be faster, the appstream-generator reads the md5sums file in .deb packages instead of extracting the payload archive and reading its contents. Since the md5sums file does not list symbolic links, symlinks basically don’t exist for the new generator. This is a problem for software symlinking icons or even .desktop files around, like e.g. LibreOffice does.

I am still investigating how widespread the use of symlinks for icons and .desktop files is, but it looks like fixing packages (making them not-symlink stuff and rather move the files) might be the better approach than investing additional computing power to find symlinks or even switch back to parsing the Contents file. Input on this is welcome!
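
For illustration, this is roughly what reading the md5sums member boils down to (a C++ sketch, not the actual implementation, which is written in D): each line is “<md5 hash>  <relative path>”, and since symlinks carry no checksum, they simply never show up.

#include <fstream>
#include <string>
#include <vector>

// Parse a package's md5sums file into a list of absolute file paths.
// Symbolic links are absent from md5sums, which is exactly the
// limitation described above.
std::vector<std::string> filesFromMd5sums(const std::string &md5sumsPath)
{
    std::vector<std::string> files;
    std::ifstream in(md5sumsPath);
    std::string line;
    while (std::getline(in, line)) {
        // Skip the checksum column and the separating whitespace.
        const auto sep = line.find_first_of(" \t");
        if (sep == std::string::npos)
            continue;
        const auto start = line.find_first_not_of(" \t", sep);
        if (start != std::string::npos)
            files.push_back("/" + line.substr(start));
    }
    return files;
}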

Deploying asgen

I finished the last pieces of the appstream-generator (together with doing lots of other cool things and talking to great people) at the GNOME Software Hackfest in London last week (detailed blogposts about things that happened there will follow – many thanks once again for the Ubuntu community for sponsoring my attendance!).

As of today, the new generator is running on the Debian infrastructure. If bigger issues are found, we can still roll back to the old code. I decided to deploy this sooner rather than later, so we can get some good testing done before the Stretch release. Please report any issues you may find!

March 23, 2016

Cutelyst, the Qt web framework, just got a new release. I was planning to do this a while back, but you know how it is: we always want to put in a few more changes, and time to do that is limited.

I was also interested in seeing if the improvements in Qt 5.6 would result in better benchmark numbers, but they didn’t; the hello world test app is probably too simple for the QString improvements to be noticed. A real-world application using Grantlee views, SQL and more user data might show some difference. Still, compared to 0.10.0, Cutelyst is benchmarking the same, even though there were some small improvements.

The most important changes of this release were:

  • View::Email allows for sending emails using data provided on the stash, and it can chain the rendering of the email to another Cutelyst::View; so, for example, you can have your email template in Grantlee format, which gets rendered and sent via email (this requires simple-mail-qt, which is a fork of SmtpClient-for-Qt with a saner API; View::Email hides the simple-mail-qt API)
  • Utils::Sql provides a set of functions to be used with QSql classes, most importantly serializing QSqlQuery into a QVariantList of QVariantMaps (or Hashes), allowing access to the query data in a View, and the preparedSqlQuery() method, which comes with a macro CPreparedSqlQuery, a lambda that keeps your prepared statement in a static QSqlQuery; this avoids the need for QSqlQuery pointers to keep the prepared queries around (which is a boost for performance, see the sketch after this list)
  • A macro CActionFor which resolves the Action into a static Action * object inside a lambda removing the need to keep resolving a name to an Action
  • Unit tests: at the moment only very limited testing is done, but the Header class has nearly 100% coverage now
  • The upload parser got a fix to properly find the multipart boundary, spotted due to the Qt client sending boundaries a bit differently from Chrome and FF; this also resulted in replacing QRegularExpression for matching the boundary part of the header with a simple search that is 5 times faster
  • Require Qt 5.5 (this allows for removal of workaround for QJson::fromHash and allows for the use of Q_ENUM and qCInfo)
  • Fixed crashes and a memory leak in Stats when enabled
  • Improved usage of QStringLiteral and QLatin1String with clazy help
  • Added CMake support for setting the plugins install directory
  • Added more ‘const’ and ‘auto’
  • Removed uwsgi --cutelyst-reload option which never worked and can be replaced by --touch-reload and --lazy
  • Improvements and fixes on cutelyst command line tool
  • Many small bugs fixed
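
The pattern behind CPreparedSqlQuery, mentioned in the Utils::Sql bullet above, is roughly the following; a minimal sketch of the idea, not the exact macro expansion, and the table and column names are made up:

#include <QSqlQuery>
#include <QString>
#include <QVariant>

// The prepared statement lives in a function-local static, so prepare()
// runs only once; later calls reuse the same QSqlQuery. (An open default
// database connection and single-threaded use are assumed here;
// QSqlQuery objects must not be shared across threads.)
QVariant userName(int id)
{
    static QSqlQuery query = [] {
        QSqlQuery q;
        q.prepare(QStringLiteral("SELECT name FROM users WHERE id = :id"));
        return q;
    }();
    query.bindValue(QStringLiteral(":id"), id);
    query.exec();
    return query.next() ? query.value(0) : QVariant();
}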

The Cutelyst website is powered by Cutelyst and CMlyst, which is a CMS. Porting CMlyst from QSettings to SQLite is on my TODO list, but only once I figure out why, out of the blue, I get a “database locked” error even when no other query is running (and yes, I tried query.finish()); once I figure that out, I’ll make a new CMlyst release.

Download here.

Have fun!


December 14, 2015

AppStream on Debian

Back in 2011, when the AppStream meeting in Nürnberg had just happened, I published the DEP-11 (Debian Extension Project 11) draft together with Michael Vogt and Julian Andres Klode, as an approach to implement AppStream in Debian.

Back then, the FTPMasters team rejected the suggestion to use the official XML specification, and so the DEP-11 specification was adapted to be based on YAML instead of XML. This wasn’t much of a big deal, since the initial design of DEP-11 was to be a superset of the AppStream specification, so it wasn’t meant to be exactly like AppStream anyway. AppStream back then was only designed for applications (as in “stuff that provides a .desktop file”), but with DEP-11 we aimed for much more: DEP-11 should also describe fonts, drivers, pkg-config files and other metadata, so in the end one would be able to ask the package manager meaningful questions like “is the firmware of device X installed?” or request actions such as “please install me the GIMP”, making it unnecessary to know package names at all, and making packages a mere implementation detail.

Then, GNOME-Software happened and demanded all these features. Back then, I was the de-facto maintainer of the AppStream upstream project already, but didn’t feel like being the maintainer yet, so I only curated the existing specification, without extending it much. The big push forward GNOME-Software created changed that dramatically, and with me taking control of the specification and documenting it properly, the very essence of DEP-11 became AppStream (that was around the AppStream 0.6 release). So today, DEP-11 is mainly a YAML-based version of the AppStream XML specification.

AppStream XML and DEP-11 YAML are implemented by two projects; GLib and Qt libraries exist to access the metadata, and AppStream is used by the software centers of GNOME, KDE and Elementary.

Today there are two things to celebrate for me: First of all, there is the release of AppStream 0.9 (that happened last Saturday already), which brings some nice improvements to the API for developers and some micro-optimizations to speed up Xapian database queries. Yay!

The second thing is full DEP-11 support in Debian! This means that you don’t need to copy metadata around manually, or install extra packages: All you need to do is to install the appstream package, everything else is done for you, and the data is kept up to date automatically.

This is made possible by APT 1.1 (thanks to the whole APT team!), some dedicated support for it in AppStream directly, the work of our Sysadmin team at Debian, which set up infrastructure to build the metadata automatically, as well as our FTPMasters team where Joerg helped with the final steps of getting the metadata into the archive.

That AppStream data is now in the archive doesn’t mean we live in a perfect utopia yet – there are still issues to be handled, but all the major work is done now and we can now gradually improve the data generator and tools and squash the remaining bugs.

And another item from the good news department: It’s highly likely that Ubuntu will follow Debian in AppStream/DEP-11 support with the upcoming Xenial release!

But how can I make use of the new metadata?

Just install the appstream package – everything is done for you! Another easy way is to install GNOME-Software, which makes use of the new metadata already. KDE Discover in Debian does not enable support for AppStream yet; this will likely come later.

If you prefer to use the command-line, you can now use commands like

sudo appstreamcli install org.kde.kate.desktop

This will simply install the Kate text editor.

Who wants some statistics?

At the moment, the Debian Sid/Unstable suite contains 1714 valid software components. It could be even more if the errors generated during metadata extraction were resolved. For that, the metadata generator has a nice statistics page, showing the amount of each hint type in the suite and the development of the available software components in Debian and the hint type counts over time (this plot feature was just added recently, so we are still a bit low on data). For packagers and interested upstreams, the data extractor creates detailed reports for each package, explaining why data was not included and how to fix the issue (in case something is unclear, please file a bug report and/or get in contact with me).

In summary

Thanks to everyone who helped to make this happen! For me this project means a lot; when writing this blog post I realized that I have basically been working on it for almost 5 years (!) now (and the idea is even older). Seeing it grow to such a huge success in other distributions was a joy, but now Debian can join the game with first-class AppStream support as well, which makes me even happier. After all, Debian is the distribution where I feel most at home.

There is still lots of work to do (and already a few bugs known), but the hardest part of the journey is done – let’s walk into a bright future with AppStream!

December 11, 2015

It is time again for another Tanglu blogpost (to be honest, this article is pretty much overdue 😉 ). I want to shine a spotlight on the work that’s done in Tanglu, be it ongoing or past work done for the new release.

Why is the new release taking longer than usual?

As you might have noticed, a new Tanglu release (Tanglu 4 “Dasyatis”) would usually be published this month. We decided a while ago, however, to defer the release, and are now aiming for a release in February / March 2016.

The reason for this change in schedule is the GCC 5 transition (and, more importantly, the huge amount of follow-up transitions) as well as some other major infrastructure tasks, which we would like to complete and give a good amount of testing before releasing. Some issues with our build system, and generally less build power than in previous releases, are a problem as well (at least the Debile build-system issues could be worked around or solved). The hard disk crash in the forum and bugtracking server also delayed the start of the Tanglu 4 development process a lot.

In all of these tasks, manpower is of course the main problem 😉

Software Tasks

General infrastructure tasks

Improvements on Synchrotron

Synchrotron, the software which is synchronizing the Tanglu archive with the Debian archive, received a few tweaks to make it more robust and detect “installability” of packages more reliably. We also run it more often now.

Rapidumo improvements

Rapidumo is a software written to complement dak (the Debian Archive Kit) in managing the Tanglu archive. It performs automatic QA tasks on the archive and provides a collection of tools to perform various tasks (like triggering package rebuilds).

For Tanglu 4, we sometimes drop broken packages from the archive now, to speed up transitions and to more quickly remove packages which became uninstallable. Packages removed from the release suite still have a chance to enter it again, but they need to go through Tanglu’s staging area first. The removal process is currently semiautomatic, to avoid unnecessary removals and breakage.

Rapidumo could benefit from some improvements and an interactive web interface (as opposed to static HTML pages) would be nice. Some early work on this is done, but not completed (and has a very low priority at the moment).

DEP-11 integration

There will be a bigger announcement on AppStream and DEP-11 in the next few days, so I’ll keep this terse: Tanglu will go for full AppStream integration with the next release, which means no more packaged metadata, but data placed directly in the archive. Work on this has started in Tanglu, but I needed to get back to the drawing board with it, to incorporate some new ideas for using synergies with Debian on generating the DEP-11 metadata.

Phabricator

Phabricator has been integrated well into our infrastructure, but there are still some pending tasks. E.g. we need subprojects in Phabricator, and a more powerful Conduit interface. Those are upstream bugs on Phabricator, and are actively being worked on.

As soon as the missing features are available in Phabricator, we will also pursue the integration of Tanglu bug information with the Debian DistroTracker, which was discussed at DebConf this summer.

UEFI support

UEFI support is a tricky beast. Full UEFI support is a release goal for Tanglu 4 (so we won’t release without it). At the moment, our Live-CDs start on pure EFI systems, but there are several reported issues with the Calamares live-installer as well as the Debian-Installer, which fails to install GRUB correctly.

Work on resolving these problems is ongoing.

Major LiveCD rework

Tanglu 4 will ship with improved live-cds, which e.g. allow selecting the preferred locale early in the bootloader. For managing our live sessions, we switched from live-boot to casper, the same tool which is also used in Ubuntu. Casper fixed a few issues we had (and brought some new ones), but overall using it was a good choice, and work on making top-notch live-cds is progressing well.

KDE Plasma

Integration of the latest Plasma release is progressing, but its speed has slowed down since fewer people are working on it. If you want to help with Tanglu’s KDE Workspace integration, please get in touch!

For the upcoming release, the Plasma 5 packages which we created based on a collaboration with Kubuntu for the previous Tanglu 3 release have been merged with their Debian counterparts. This action fortunately was possible without major problems. Now Tanglu is (mostly) using the same Plasma 5 packages as Debian again (lots of kudos go to the Kubuntu and Debian Qt/KDE packagers!).

GNOME

The same re-merge with Debian has been done on Tanglu’s GNOME flavor (Tanglu also shipped with a more recent GNOME release than Debian in Tanglu 3). So Tanglu 4 GNOME and Debian Testing GNOME are at (almost) the same level.

Unfortunately, the GNOME team is heavily understaffed – a GNOME team member left at the beginning of the year for personal reasons, and the team now only has one semi-active member and me.

Other

An fvwm-nightshade spin is being worked on 🙂 . Apart from that, there are no teams maintaining other flavors in Tanglu.

Security & Stable maintenance

Thanks to several awesome people, the current Tanglu Stable release (Tanglu 3 “Chromodoris”) receives regular updates, even for many packages which are not in the “fully supported” set.

Tangluverse Tasks

Tasks (not) done in the global Tanglu universe.

HTTPS for community sites

Thanks to our participation in the Let’s Encrypt closed beta program, Tanglu websites like the user forums and bugtracker have been fully encrypted for a while, which should make submitting data to these sites much more secure. So far, we haven’t encountered issues, which means that we will likely aim to get HTTPS encryption enabled on every Tanglu website.

Tanglu.org website

The Tanglu main website didn’t receive much love at all. It could use a facelift and more importantly updated content about Tanglu.

So far, nobody volunteered to update the website, so this task is still open. It is, however, a high-priority task to increase our public visibility as a project, so updating the website is definitely something which will be done soon.

Promotion

It doesn’t hurt to think about how to sell Tanglu as “brand”: What is Tanglu? Our motivation, our goals? What is our slogan? How can we communicate all of this to new users, without having them to read long explanations? (e.g. the Fedora Project has this nicely covered on their main website, and their slogan “Freedom, Friends, Features, First” immediately communicates the project’s key values)

Those are issues engineers don’t think about often; still, it is important to present Tanglu in a way that is easy to grasp for people hearing about it for the first time. So far, nobody is working on this task specifically, although it regularly comes up on IRC.

Sponsoring & Government

Tanglu is – by design – not backed by any corporation. However, we still need to get our servers paid for. Currently, Tanglu is supported by three sponsors, which provide the majority of our infrastructure. Developers also provide additional machines to keep Tanglu running and/or packages built.

Still, this dependency on very few sponsors paying the majority of the costs is very bad, since it makes Tanglu vulnerable in case one sponsor decides to end sponsorship. We have no financial backing that would allow us to e.g. continue paying for an important server in case its sponsor decides to drop sponsorship.

Also, since Tanglu is no legal entity, accepting donations is hard for us at the moment, since a private person would need to accept the money on behalf of Tanglu. So we can’t easily make donating to Tanglu possible (at least not in the way we want donations to be handled).

These issues are currently being worked on, there are a couple of possible solutions on the table. I will write about details as soon as it makes sense to go public with them.

 

In general, I am very happy with the Tanglu community: The developer community is a pleasure to work with, and interactions with our users on the forums or via the bugtracker are highly productive and friendly. Although the development speed has slowed down a bit, Tanglu is still an active and awesome project, and I am looking forward to the cool stuff we will do in future!

 

September 11, 2015

LimbaHub

The Limba project does not only have the goal of allowing developers to deploy their applications directly on multiple Linux distributions while reducing duplication of shared resources; it should also make it easy for developers to build software for Limba.

Limba is worth nothing without good tooling to make it fun to use. That’s why I am working on that too, and I want to share some ideas of how things could work in the future and which services I would like to have running. I will also show what is working today already (and that’s quite something!). This time I look at things from a developer’s perspective (since the last posts on Limba were more end-user centric). If you read on, you will also find a nice video of the developer workflow 😉

1. Creating metadata and building the software

To make building Limba packages as simple as possible, Limba reuses already existing metadata, like AppStream metadata to find information about the software you want to create your package for.

To ensure upstreams can build their software in a clean environment, Limba makes using one as simple as possible: the limba-build CLI tool quickly creates a clean chroot environment using debootstrap (or a comparable tool suitable for the Linux distribution), and then uses OverlayFS so that all changes made to the environment during the build process land in a separate directory.
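
The OverlayFS part boils down to a mount like the following; a minimal sketch with made-up paths (the pristine debootstrap tree is the read-only lower layer, and everything the build writes ends up in the upper directory):

#include <sys/mount.h>
#include <cstdio>

int main()
{
    // Combine the base chroot (read-only) with an empty "changes"
    // directory; the build then runs chrooted into the merged tree.
    if (mount("overlay", "/srv/build/merged", "overlay", 0,
              "lowerdir=/srv/build/base,"
              "upperdir=/srv/build/changes,"
              "workdir=/srv/build/work") != 0) {
        perror("mount"); // needs root and a reasonably recent kernel
        return 1;
    }
    // Afterwards, /srv/build/changes contains exactly what the build
    // produced, ready to be packaged.
    return 0;
}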

To define build instructions, limba-build uses the same YAML format TravisCI uses as well for continuous integration. So there is a chance this data is already present as well (if not, it’s trivial to write).

In case upstream projects don’t want to use these tools, e.g. because they have well-working CI already, then all commands needed to build a Limba package can be called individually as well (ideally, building a Limba package is just one call to lipkgen).

I am currently planning “DeveloperIPK” packages containing resources needed to develop against another Limba package. With that in place and integrated with the automatic build-environment creation, upstream developers can be sure the application they just built is built against the right libraries as present in the package they depend on. The build tool could even fetch the build-dependencies automatically from a central repository.

2. Uploading the software to a repository

While everyone can set up their own Limba repository, and the limba-build repo command will help with that, there are lots of benefits in having a central place where upstream developers can upload their software to.

I am currently developing a service like that, called “LimbaHub”. LimbaHub will contain different repositories distributors can make available to their users by default, e.g. there will be one with only free software, and one for proprietary software. It will also later allow upstreams to create private repositories, e.g. for beta-releases.

3. Security in LimbaHub

Every Limba package is signed with the key of its creator anyway, so in order to get a package into LimbaHub, one needs to get their OpenPGP key accepted by the service first.

Additionally, the Hub service works with a per-package permission system. This means I can e.g. allow the Mozilla release team members to upload a package with the component-ID “org.mozilla.firefox.desktop” or even allow those user(s) to “own” the whole org.mozilla.* namespace.

This should prevent people hijacking other people’s uploads accidentally or on purpose.

4. QA with LimbaHub

LimbaHub should also act as a guardian over ABI stability and the general quality of the software. We could, for example, warn upstreams that they broke ABI without declaring it in the package information, or even reject the package in that case. We could validate .desktop files and AppStream metadata, or even check if a package was built using hardening flags.

This should help both developers to improve their software as well as users who benefit from that effort. In case something really bad gets submitted to LimbaHub, we always have the ability to remove the package from the repositories as a last resort (which might trigger Limba to issue a warning for the user that he won’t receive updates anymore).

What works

Limba, LimbaHub and the tools around them are developing nicely; so far, no big issues have been encountered.

That’s why I made a video showing how Limba and LimbaHub work together at the moment:

Still, there is a lot of room for improvement – Limba has not yet received enough testing, and LimbaHub is merely a proof of concept at the moment. Also, lots of high-priority features are not yet implemented.

LimbaHub and Limba need help!

At the moment I am developing LimbaHub and Limba alone, with only occasional contributions from others (which are amazing and highly welcomed!). So, if you like Python and Flask, and want to help develop LimbaHub, please contact me – the LimbaHub software could benefit from a more experienced Python web developer than me 😉 (and maybe having a designer look over the frontend later makes sense as well). If you are not afraid of C and GLib, and like to chase bugs or play with building Limba packages, consider helping Limba development 🙂

September 09, 2015

Cutelyst, a Qt web framework, just got a new release. This version took a little longer, but now I think it has evolved enough for a new release.

The most important improvements of this release are the new JSON view, which is an easy way to return a JSON response without worrying about including QJson classes (all the magic is done by QJsonObject::fromVariantMap()), and the component splitting that helps us reach API and ABI stabilization.

The Cutelyst::Core target has all the core functionality that a simple application needs; it’s the foundation of the framework. Context, Request and Response are all core classes that one cannot live without. You can then add the Cutelyst::Plugin::Session target for session handling functionality; in case you don’t like it, you can create your own session handling and still use the Cutelyst::Core functionality. Action classes were also split out of Core and are loaded automatically, so you only carry the code you are really using: ActionClass('REST') will load the ActionREST plugin, or fail the application if it could not be loaded.
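
In practice, declaring such an action looks roughly like this (a hypothetical controller method, following the C_ATTR attribute syntax used elsewhere in Cutelyst; the method name is made up):

// :ActionClass(REST) asks the dispatcher to load the ActionREST plugin
// for this action when the application starts.
C_ATTR(item, :Local :ActionClass(REST))
void item(Context *c);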

Here is a more complete version of the changes:

  • Reduce CMake requirement to cmake 2.8.12
  • Use some Qt compile definitions to have more strict building
  • Add a JSON view which takes the stash data and returns it as JSON (see the sketch after this list)
  • Split out components, Core, View::Grantlee, View::JSON…
  • Make special Action classes loadable plugins ActionREST
  • Fix build on OSX, with a working Travis CI
  • Fix memory alignment of the uWSGI plugin
  • Make Authentication methods static for convenience
  • Expose engine threads/cores information
  • Align memory of some classes
  • Fix compiling on Qt 5.5
  • Rework Session handling and extend it to match Catalyst::Plugin::Session
  • Now Controllers, Views, Plugins and DispatchTypes get automatically registered if they are children of Cutelyst::Application
  • Fix Session cookie expires
  • Require Qt 5.4 due to QString::splitRef()
  • Add some more documentation
  • Removed ViewEngine class
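
As a rough illustration of the new JSON view mentioned above (the controller and key names are made up, and setStash() is assumed as the stash accessor): whatever lands on the stash is serialized via QJsonObject::fromVariantMap() and returned as the response body.

// Hypothetical controller action: with View::JSON configured as the
// application's view, the stash below would render as
// {"status": "ok", "uptime": 42}.
void Api::status(Context *c)
{
    c->setStash(QStringLiteral("status"), QStringLiteral("ok"));
    c->setStash(QStringLiteral("uptime"), 42);
}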

The 0.11.0 roadmap:

  • Add unit tests; yes, this is the most important thing I want to add to Cutelyst right now. Having a stable API and ABI means nothing if internal logic changes and your app stops working the way it should on a new Cutelyst release
  • Depend on Qt 5.5
  • Declarative way of describing application (QML)
  • Some sort of Catalyst::Plugin::I18N for localization support

Have fun https://github.com/cutelyst/cutelyst/archive/r0.10.0.tar.gz!


September 07, 2015

Piwik told me that people are still sharing my post about the state of GNOME-Software and update notifications in Debian Jessie.

So I thought it might be useful to publish a small update on that matter:

  • If you are using GNOME or KDE Plasma with Debian 8 (Jessie), everything is fine – you will receive update notifications through GNOME Shell (via g-s-d) or Apper, respectively. You can perform updates on GNOME with GNOME-PackageKit and with Apper on KDE Plasma.
  • If you are using a desktop-environment not supporting PackageKit directly – for example Xfce, which previously relied on external tools – you might want to try pk-update-icon from jessie-backports. The small GTK+ tool will notify about updates and install them via GNOME-PackageKit, basically doing what GNOME-PackageKit did by itself before the functionality was moved into GNOME-Software.
  • For Debian Stretch, the upcoming release of Debian, we will have gnome-software ready and fully working. However, one of the design decisions of upstream is to only allow offline-updates (= download updates in the background, install on request at next reboot) with GNOME-Software. In case you don’t want to use that, GNOME-PackageKit will still be available, and so are of course all the CLI tools.
  • For KDE Plasma 5 on Debian 9 (Stretch), a nice AppStream based software management solution with a user-friendly updater is also planned and being developed upstream. More information on that will come when there’s something ready to show ;-).

I hope that clarifies things a little. Have fun using Debian 8!


UPDATE:

It appears that many people have problems with getting update notifications in GNOME on Jessie. If you are affected by this, please try the following:

  1. Open dconf-editor and navigate to org.gnome.settings-daemon.plugins.updates. Check if the key active is set to true
  2. If that doesn’t help, also check if at org.gnome.settings-daemon.plugins.updates the frequency-refresh-cache value is set to a sane value (e.g. 86400)
  3. Consider increasing the priority value (if it isn’t at 300 already)
  4. If all of that doesn’t help: I guess some internal logic in g-s-d is preventing a cache refresh then (e.g. because it thinks it is on an expensive network connection and therefore doesn’t refresh the cache automatically, or thinks it is running on battery). This is a bug. If that still happens on Debian Stretch with GNOME-Software, please report a bug against the gnome-software package. As a workaround for Jessie you can enable unconditional cache refreshing via the APT cronjob by installing apt-config-auto-update.

August 11, 2015

One of the things we discussed at this year’s Akademy conference is making AppStream work on Kubuntu.

On Debian-based systems, we use a YAML-based implementation of AppStream, called “DEP-11”. DEP-11 exists for historical reasons (the DEP-11 YAML format was once a superset of AppStream) and because YAML, unlike XML, is a file format accepted by the Debian FTPMasters team, whom we want to have on board when adding support for AppStream.

So I’ve spent the last few days on setting up the DEP-11 generator for Kubuntu, as well as improving it greatly to produce more meaningful error messages and to generate better output. It became a bit slower in the process, but the greatly improved diagnostics data is worth it.

For example, maintainers of Debian packages will now get a more verbose explanation of issues found with metadata in their packages, making them easier to fix for people who haven’t been in contact with AppStream yet.

At the moment, we generate AppStream metadata for Tanglu, Kubuntu and Debian, but so far only Tanglu makes real use of it by shipping it in a .deb package. Shipping the data as a package is only a workaround though; for a proper implementation, the data will be downloaded by APT. To achieve that, the data needs to reach the archive first, which is something that I can hopefully discuss and implement with the FTPMasters team of Debian at this year’s DebConf.

When this is done, the new metadata will automatically become available in tools like GNOME-Software or Muon Discover.

How can I see if there are issues with my package?

The dep11-generator tool produces HTML pages showing both the extracted metadata and any issues with it.

You can find the information for the respective distribution here:

Each issue tag will contain a detailed explanation of what went wrong. Errors generally lead to ignoring the metadata, so it will not be processed. Warnings usually concern things which might reduce the amount of metadata or make it less useful, while Info-type hints contain information on how to improve the metadata or make it more useful.

Can I use the data already?

Yes, you can. You just need to place the compressed YAML files in /var/cache/app-info/yaml and the icons in /var/cache/app-info/icons/<suite>-<component>/<size>, for example: /var/cache/app-info/icons/jessie-amd64/64x64

I think I found a bug in the generator…

In that case, please report the issue against the appstream-dep11 package in Debian, or file an issue on GitHub.

The only reason why I announce this feature now is to find remaining generator bugs, before officially announcing the feature on debian-devel-announce.

When will this be officially announced?

I want to give this feature a little bit more testing, and ideally have the integration into the archive ready, so people can see how the metadata looks like when rendered in GNOME-Software/Discover. I also want to write a bit more documentation to help Debian developers and upstreams to improve their metadata.

Ideally, I also want to incorporate some feedback at Debconf when announcing full AppStream support in Debian. So take all the stuff you’ve read above as a little sneak peek 😉

I will also give a talk at DebConf, titled “AppStream, Limba, XdgApp – Where we are going.” The aim of this talk is to give an insight into the new developments happening in the software distribution area, and what the goal of these different projects is (the talk should give an overview of what’s in the oven and how it will impact Debian). So if you are interested, please drop by 🙂 Maybe setting up a BoF would also be a good idea.

August 10, 2015

I am very late with this, but I still wanted to write a few words about Akademy 2015.

First of all: It was an awesome conference! Meeting all the great people involved with KDE and seeing who I am working with (as in: face to face, not via email) was awesome. We had some very interesting discussions on a wide variety of topics, and also enjoyed quite some beer together. Particularly important to me was of course talking to Daniel Vrátil who is working on XdgApp, and Aleix Pol of Muon Discover fame.

Also meeting with the other Kubuntu members was awesome – I haven’t seen some of them for about 3 years, and also met many cool people for the first time.

My talk on AppStream/Limba went well, except that I got slightly confused by the timer showing that I had only 2 minutes left, after I had just completed the first half of my talk. It turned out that the timer was wrong 😉

Another really nice aspect was to be able to get an insight into areas where I am usually not involved with, like visual design. It was really interesting to learn about the great work others are doing and to talk to people about their work – and I also managed to scratch an itch in the Systemsettings application, where three categories had shown the same icon. Now Systemsettings looks like it is supposed to be, finally 🙂

The only thing I could maybe complain about was the weather, which was more Scotland/Wales like than Spanish – but that didn’t stop us at all, not even at the social event outside. So I actually don’t complain 😉

We also managed to discuss some new technical stuff, like AppStream for Kubuntu, and plenty of other things that I’ll write about in a separate blog post.

Generally, I got so many new impressions from this year’s Akademy, that I could write a 10-pages long blogpost about it while still having to leave out things.

Kudos to the organizers of this Akademy, you did a fantastic job! I also want to thank the Ubuntu community for funding my trip, and the Kubuntu community for pushing me a little to attend :-).

KDE is a great community with people driving the development of Free Software forward. The diversity of people, projects and ideas in KDE is a pleasure, and I am very happy to be part of this community.


June 18, 2015

Cutelyst, the Qt web framework, just got a new release!

I was planning to do this one after Qt 5.5 was out due to the new qCInfo() logger and Q_GADGET introspection, but I’ll save that for 0.10.0.

This release brings some important API breaks, so if everything goes well I can do another CMlyst release.

  • Some small performance improvements with QStringRef usage and pre-cached _DISPATCH actions
  • Fixes when using uWSGI in threaded mode
  • Speedup uWSGI threaded mode, still slower than pre-forked but quite close
  • Fixes for the session handling as well as having static methods to make the code cleaner
  • Fix uWSGI --lazy-apps mode
  • Use PBKDF2 on Authentication plugin
  • Require CMake 3.0 and its policies
  • Add Request::mangleParams() and Request::uriWith() so that Request API now covers all of Catalyst’s
  • Documented some functions

The Cutelyst repository was moved to GitLab and then moved to GitHub. The main website http://cutelyst.org is now powered by CMlyst, and thus by Cutelyst itself 🙂

Download here.

I also started another project: html-qt is a library to create a DOM representation of HTML following the WHATWG specifications. It’s needed for CMlyst to be able to remove XSS from published posts. At the moment it has 90% of the tokenizer ready and 5% of the parser/tree construction code, so if you have an interest in the subject (it could potentially be useful for mail clients such as KMail), give me a hand!


April 28, 2015

Cutelyst, the Qt web framework, just got a new release!

With 0.8.0, some important bugs got fixed, which will allow the CMS/blogger app CMlyst to get a new release soon. Besides the bug fixes, a class to extract stats from the application was created: Cutelyst::Stats matches what Catalyst::Stats does and produces a similar output:

(screenshot: Cutelyst::Stats output)

 

  • There you have a box with the time each action took to complete; looking at the /End action it is clear that it took more time, but most of that was due to it calling the Grantlee View.
  • On the “Request took: …” line there is the 1278.772/s value, which indicates how many requests could be handled within a second; of course, that value holds for the exact same request and the single core where it was run.
  • The first line now shows which HTTP method was used, which resource the user wants and where the request came from.

All of that can and should be disabled in production by setting cutelyst.*.debug=false in the [Rules] section of the application ini file (--ini of uWSGI). Yeah, this is new too 🙂

Speaking of debugging, we now have more information about what user data was sent; if the cutelyst.request.debug logging category is enabled, you will see:

A similar output is printed for URL query parameters and file uploads.

HTTP/1.1 support for chunked responses was added: once c->response()->write(…) is called, the response headers are sent and a chunk goes out to the client. This is new and experimental, but worked without complaints from web browsers.
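
In code, this looks roughly like the following (a sketch: write() is the call from the text above, while the method name and content-type setup are assumptions):

// Hypothetical controller action streaming a chunked response.
void Stream::events(Context *c)
{
    c->response()->setContentType(QStringLiteral("text/plain"));
    c->response()->write("first chunk\n");  // headers + first chunk sent here
    c->response()->write("second chunk\n"); // each later write is one chunk
}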

An easier way to declare action arguments or capture arguments was implemented: :AutoArgs and :AutoCaptureArgs. Both are not present in Perl Catalyst, because there is no way to introspect method signatures in that language (@_ can provide many things), but with Qt meta information we can see which types of arguments a method has, as well as how many:

C_ATTR(some_method, :Local :AutoArgs)
void some_method(Context *c, const QString &year, const QString &month);

That will automatically capture two arguments, matching a URL like “/some_method/2014/12”.

Lastly, the Headers class got many fixes and improvements, such as stripping the fragment as mandated by RFC 2616.

As always if you have questions show up in #cutelyst at freenode or send me a mail.

Download 0.8.0 here

Enjoy!


March 30, 2015

And once again, it’s time for another Limba blogpost 🙂

Limba is a solution to install 3rd-party software on Linux, without interfering with the distribution’s native package manager. It can be useful to try out different software versions, use newer software on a stable OS release or simply to obtain software which does not yet exist for your distribution.

Limba works distribution-independent, so software authors only need to publish their software once for all Linux distributions.

I recently released version 0.4, with which the most important features you would expect from a software manager are complete. This includes installing & removing packages, GPG-signing of packages, package repositories, package updates etc. Using Limba is still a bit rough, but most things work pretty well already.

So, it’s time for another progress report. Since a FAQ-like list is easier to digest than a long blogpost, I’ll go with that format again. Let’s address one important general question first:

How does Limba relate to the GNOME Sandboxing approach?

(If you don’t know about GNOMEs sandboxes, take a look at the GNOME Wiki – Alexander Larsson also blogged about it recently)

First of all: There is no rivalry here and no NIH syndrome involved. Limba and GNOMEs Sandboxes (XdgApp) are different concepts, which both have their place.

The main difference between the two projects is the handling of runtimes. A runtime is the set of shared libraries and other shared resources applications use. This includes libraries like GTK+/Qt5/SDL/libpulse etc. XdgApp applications have one big runtime they can use, built with OSTree. This runtime is static and will not change; it will only receive critical security updates. A runtime in XdgApp is provided by a vendor like GNOME as a compilation of multiple individual libraries.

Limba, on the other hand, generates runtimes on the target system on-the-fly out of several subcomponents with dependency-relations between them. Each component can be updated independently, as long as the dependencies are satisfied. The individual components are intended to be provided by the respective upstream projects.
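
As an illustration of the dynamic-runtime idea (not Limba’s actual code, which is written in C/GLib; names and structures here are made up), the resolver essentially collects a component and its whole dependency closure, each piece updatable on its own:

#include <map>
#include <set>
#include <string>
#include <vector>

struct Component {
    std::string id;                        // e.g. "org.example.libfoo"
    std::vector<std::string> dependencies; // component IDs it needs
};

// Walk the dependency graph and gather every component the app needs;
// the union of their files would then be combined into one runtime.
void collectRuntime(const std::string &id,
                    const std::map<std::string, Component> &pool,
                    std::set<std::string> &runtime)
{
    if (!runtime.insert(id).second)
        return; // already collected
    const auto it = pool.find(id);
    if (it == pool.end())
        return; // unresolved; a real resolver would report an error
    for (const auto &dep : it->second.dependencies)
        collectRuntime(dep, pool, runtime);
}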

Both projects have their individual upsides and downsides: while the static runtime of XdgApp makes testing simple, it is also harder to extend and more difficult to update. If something you need is not provided by the mega-runtime, you will have to provide it yourself (e.g. we will have some applications ship smaller shared libraries with their binaries, as they are not part of the big runtime).

Limba does not have this issue, but instead, with its dynamic runtimes, relies on upstreams behaving nice and not breaking ABIs in security updates, so existing applications continue to be working even with newer software components.

Obviously, I like the Limba approach more, since it is incredibly flexible and even allows mimicking the behaviour of GNOME’s XdgApp by using absolute dependencies on components.

Do you have an example of a Limba-distributed application?

Yes! I recently created a set of packages for Neverball – Alexander Larsson also created an XdgApp bundle for it, and due to the low amount of stuff Neverball depends on, it was a perfect test subject.

One of the main things I want to achieve with Limba is to integrate it well with continuous integration systems, so you can automatically get a Limba package built for your application and have it tested with the current set of dependencies. Also, building packages should be very easy, and as failsafe as possible.

You can find the current Neverball test in the Limba-Neverball repository on Github. All you need (after installing Limba and the build dependencies of all components) is to run the make_all.sh script.

Later, I also want to provide helper tools to automatically build the software in a chroot environment, and to allow building against the exact version depended on in the Limba package.

Creating a Limba package is trivial: it boils down to creating a simple “control” file describing the dependencies of the package, and writing an AppStream metadata file. If you feel adventurous, you can also add automatic build instructions as a YAML file (which uses a subset of the Travis build config schema).

This is the Neverball Limba package, built on Tanglu 3, run on Fedora 21:

Limba-installed Neverball

Which kernel do I need to run Limba?

The Limba build tools run on any Linux version, but to run applications installed with Limba, you need at least Linux 3.18 (for Limba 0.4.2). I plan to bump the minimum version requirement to Linux 4.0+ very soon, since this release contains some improvements in OverlayFS and a few other kernel features I am thinking about making use of.

Linux 3.18 is included in most Linux distributions released in 2015 (and of course any rolling release distribution and Fedora have it).

Building all these little Limba packages and keeping them up-to-date is annoying…

Yes indeed. I expect that we will see some “bigger” Limba packages bundling a few dependencies, but in general this is a pretty annoying property of Limba currently, since there are so few packages available you can reuse. But I plan to address this. Behind the scenes, I am working on a webservice, which will allow developers to upload Limba packages.

This central resource can then be used by other developers to obtain dependencies. We can also perform some QA on the received packages, map the available software against CVE databases to see if a component is vulnerable and publish that information, etc.

All of this is currently planned, and I can’t say a lot more yet. Stay tuned! (As always: If you want to help, please contact me)

Are the Limba interfaces stable? Can I use it already?

The Limba package format should be stable by now. Since Limba is still alpha software, I will, however, make breaking changes in case there is a huge flaw which makes it reasonable to break the IPK package format. I don’t think that this will happen though, as the Limba packages are designed to be easily backward- and forward-compatible.

For the Limba repository format, I might make some more changes though (less invasive, but you might need to rebuild the repository).

tl;dr: Yes! Please use Limba and report bugs, but keep in mind that Limba is still in an early stage of development, and we need bug reports!

Will there be integration into GNOME-Software and Muon?

From the GNOME-Software side, there were positive signals about that, but some technical obstacles need to be resolved first. I have not yet gotten in contact with the Muon crew – they are just implementing AppStream, which is a prerequisite for having any support for Limba[1].

Since PackageKit dropped support for plugins, every software manager needs to implement support for Limba itself.


So, thanks for reading this (again too long) blogpost 🙂 There are some more exciting things coming soon, especially regarding AppStream on Debian/Ubuntu!


[1]: And I should actually help with the AppStream support, but currently I can not allocate enough time to take that additional project as well – this might change in a few weeks. Also, Muon does pretty well already!

March 23, 2015

Cutelyst, the Qt/C++ web framework, just got a new release: 0.7.0 brings many improvements, bug fixes and features.

Most notable of the changes is a shiny new Chained dispatcher that finally got implemented; with it the Cutelyst Tutorial got finished, and a new command line tool to bootstrap new projects was added.

* Request::bodyData() can return a QJsonDocument if the content-type is application/json (see the sketch after this list)
* Chained dispatcher
* Fixes to QStringLiteral and QLatin1String usage
* More debug information
* Some API changes to match Catalyst’s
* Some rework on the API handling Plugins (not the best thing yet)
* Moved to GitLab (due to Gitorious being acquired by it)
* “cutelyst” command line tool
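As a small illustration of the bodyData() change, here is roughly how a JSON request body can be consumed from an action. This is my sketch, not code from the release: the controller is hypothetical, and it assumes bodyData() hands back a QVariant wrapping the parsed QJsonDocument:

#include <QJsonDocument>
#include <QJsonObject>
#include <Cutelyst/Controller>

using namespace Cutelyst;

class Api : public Controller
{
    Q_OBJECT
public:
    C_ATTR(create, :Path(create) :Args(0))
    void create(Context *ctx)
    {
        // With an application/json content-type, bodyData() already
        // carries the parsed document
        const QJsonDocument doc = ctx->request()->bodyData().value<QJsonDocument>();
        const QString name = doc.object().value(QStringLiteral("name")).toString();
        ctx->response()->setBody(QStringLiteral("Hello, ") + name);
    }
};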

For the next release I’ll try to make the QML integration a reality, and try to improve the Plugins API, which is right now the part that needs more love; I would love to hear some advice.

Download it here.

Have fun!


February 12, 2015

Now that Cutelyst allows me to write web applications with the tools I like, I can use it to build the kind of web applications I need but am not happy using the existing options for…

WordPress is great software, but it’s written in PHP, which is not a language I like, and the many alternatives are written in all sorts of languages but C++/Qt, so why not build one with Cutelyst?

This first release is a very shiny one: it allows for templating, static pages and posts (without comments), and even an RSS feed.

To make things more interesting, CMlyst has a separate library which abstracts the content management part, allowing for different backends to be written. The current one is based on QSettings files, but it would be very easy to write one for SQL, Git, who knows 😛 (see the sketch below).
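To sketch the idea (these class and method names are mine, not CMlyst’s actual API), such a backend abstraction could look like this:

#include <QString>
#include <QStringList>

// Hypothetical content backend interface; CMlyst's real API differs
class AbstractContentBackend
{
public:
    virtual ~AbstractContentBackend() {}

    // Return the body of the page or post stored under the given path
    virtual QString content(const QString &path) const = 0;

    // Persist content under the given path; return false on failure
    virtual bool saveContent(const QString &path, const QString &body) = 0;

    // List the paths of the most recent posts, newest first
    virtual QStringList lastPosts(int count) const = 0;
};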

I have not measured its speed against WordPress or any other CMS, as it probably wouldn’t be a fair comparison. But if you are curious, I have run it against weighttp and it gives me 4k req/s on showing a page, 2.5k req/s on listing the latest posts and 6.5k req/s on delivering the RSS feed, all on a Core2Duo 2.4GHz, using ~10MB of RAM.

Give it a try!

Download here.


Cutelyst, the Qt/C++ web framework, just took another step toward API stabilization.

Since 0.3.0 I’ve been trying to squeeze the most requests per second out of it, and because of that I decided to replace most QStrings with QByteArrays; the allocation call is indeed simpler in QByteArray, but since most of Qt uses QString for strings it started to create a problem rather than solve one. Grantlee didn’t play nice with QByteArray, breaking ifequal, and in the end some implicit conversions from UTF-8 were triggered.

So in 0.6.0 I’ve replaced all QByteArray usage in favor of QString, and in the hello world benchmarks the difference is really negligible.

Another big change I actually made today was to the Headers class used by Request and Response. The keys stored there were always camel-cased and split by dashes, so if you wanted to get, say, the User-Agent header in a Grantlee template it wouldn’t work, as dashes break the variable name. Now all keys are lower case and words are separated by underscores, allowing for {{ ctx.request.headers.user_agent }} in Grantlee templates.
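The normalization itself boils down to something like this (my illustration, not necessarily the exact code in Cutelyst):

#include <QString>

// Turn a header field name like "User-Agent" into "user_agent",
// so it is usable as a variable name in Grantlee templates
static QString normalizeHeaderKey(const QString &field)
{
    QString key = field.toLower();
    key.replace(QLatin1Char('-'), QLatin1Char('_'));
    return key;
}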

Some changes were also made to make it behave closer to what Catalyst does, and a bunch of bugs from the last release were fixed.

On my TODO list I still need to:

  • Write a chained dispatcher to allow a more flexible way of dispatching.
  • Write tutorials, yeah this is boring…
  • Write QML integration to allow for writing the application logic in QML too!

This last one would allow for something like this:

Application {
    Controller {
        Action {
            // Dispatch requests for /hello to this action
            attributes: ":Path(hello)"
            // Handler run for each matching request
            onRequest: {
                ctx.response.body = "Hello World!"
            }
        }
    }
}

Download here

Have fun!


January 27, 2015

Yesterday I released version 0.8 of AppStream, the cross-distribution standard for software metadata that is currently used by GNOME-Software, Muon and Apper to display rich metadata about applications and other software components.

What’s new?

The new release contains some tweaks to AppStream’s documentation, and extends the specification with a few more tags and refinements. For example, we now recommend sizes for screenshots. The recommended sizes are the ones GNOME-Software already uses today, and it is a good idea to ship those to make software centers look great, as other SCs are planning to use them as well. Normal sizes as well as sizes for HiDPI displays are defined. This change affects only the distribution-generated data; the upstream metadata is unaffected (the distro-specific metadata generator will resize the screenshots anyway).
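As an illustration, a screenshot entry in the distro XML carrying explicit sizes might look roughly like this (the URLs and exact dimensions here are placeholders; see the spec for the recommended values):

<screenshot type="default">
  <image type="source" width="1600" height="900">http://example.com/foobar/screenshot.png</image>
  <image type="thumbnail" width="624" height="351">http://example.com/foobar/screenshot-624x351.png</image>
</screenshot>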

Another addition to the spec is the introduction of an optional <source_pkgname/> tag, which holds the source package name the packages defined in <pkgname/> tags are built from. This is mainly for internal use by the distributor, e.g. it can decide to use this information to link to internal resources (like bugtrackers, package-watch etc.). It may also be used by software-center applications as additional information to group software components.

Furthermore, we introduced a <bundle/> tag for future use with 3rd-party application installation solutions. The tag notifies a software installer about the presence of a 3rd-party application bundle, and provides the necessary information on how to install it. In order to do that, the software center needs to support the respective installation solution. Currently, the Limba project and Xdg-App bundles are supported. For software managers, it is a good idea to implement support for 3rd-party app installers as soon as the solutions are ready; these projects are currently under heavy development. The new tag is already used by Limba, which is why Limba depends on the latest AppStream release.
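A hedged sketch of how the two new tags could appear together in a component entry (all values here are invented):

<component type="desktop">
  <id>foobar.desktop</id>
  <pkgname>foobar</pkgname>
  <source_pkgname>foobar-src</source_pkgname>
  <bundle type="limba">foobar-1.0</bundle>
</component>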

How do I get it?

All AppStream libraries – libappstream, libappstream-qt and libappstream-glib – support the 0.8 specification in their latest version, so in case you are using one of these, you don’t need to do anything. For Debian, the DEP-11 spec is being updated at the moment, and the changes will land in the DEP-11 tools soon.

Improve your metadata!

This call goes especially to many KDE projects! Getting good data is partly a task for the distributor, since packaging issues can result in incorrect or broken data, screenshots need to be properly resized, etc. However, flawed upstream data can also prevent software from being shown, since software with broken or missing data will not be incorporated in the distro’s AppStream XML data file.

Richard Hughes of Fedora has created a nice overview of software failing to be included. You can see the failed-list here – the data can be filtered by desktop environment etc. For KDE projects, a Comment= field is often missing in their .desktop files (or a <summary/> tag needs to be added to their AppStream upstream XML file). Keep in mind that you are not only helping Fedora by fixing these issues, but also all other distributions consuming the metadata you ship upstream.

For Debian, we will have a similar overview soon, since it is also a very helpful tool to find packaging issues.

If you want to get more information on how to improve your upstream metadata, and how new metadata should look like, take a look at the quickstart guide in the AppStream documentation.

December 02, 2014

Disclaimer: Limba is still in a very early stage of development. Bugs happen, and I give no guarantees on API stability yet.

Limba is a very simple cross-distro package installer, utilizing OverlayFS found in recent Linux kernels (>= 3.18).

As an example, I created a small Limba package for one of the Qt5 demo applications, and I would like to share the process of creating Limba packages – it’s quite simple, and I could use some feedback on how well the resulting packages work on multiple distributions.

I assume that you have compiled Limba and installed it – how that is done is described in its README file. So, let’s start.

1. Prepare your application

The cool thing about Limba is that you don’t really have to do many changes on your application. There are a few things to pay attention to, though:

  • Ensure the binaries and data are installed into the right places in the directory hierarchy. Binaries must go to $prefix/bin, for example.
  • Ensure that configuration can be found under /etc as well as under $prefix/etc.

This needs to be done so your application will find its data at runtime. Additionally, you need to write an AppStream metadata file, and find out which stuff your application depends on.

2. Create package metadata & install software

2.1 Basics

Now you can create the metadata necessary to build a Limba package. Just run

cd /path/to/my/project
lipkgen make-template

This will create a “lipkg” directory, containing a “control” file and a “metainfo.xml” file, which can be a symlink to the AppStream metadata, or be new metadata.

Now, configure your application with /opt/bundle as the install prefix (-DCMAKE_INSTALL_PREFIX=/opt/bundle, --prefix=/opt/bundle, etc.) and install it to the lipkg/inst_target directory.
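For a CMake-based project, that boils down to something like the following (the paths are examples):

cd /path/to/my/project
mkdir build && cd build
cmake .. -DCMAKE_INSTALL_PREFIX=/opt/bundle
make
make install DESTDIR=/path/to/my/project/lipkg/inst_target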

2.2 Handling dependencies

If your software has dependencies on other packages, just get the Limba packages for these dependencies, or build new ones. Then place the resulting IPK packages in the lipkg/repo directory. Ideally, you should be able to fetch Limba packages which contain the software components directly from their upstream developers.

Then, open the lipkg/control file and adjust the “Requires” line. The names of the components you depend on match their AppStream-IDs (<id/> tag in the AppStream XML document). Any version-relation (>=, >>, <<, <=, <>) is supported, and specified in brackets after the component-id.

The resulting control-file might look like this:

Format-Version: 1.0

Requires: Qt5Core (>= 5.3), Qt5DBus (>= 5.3), libpng12

If the specified dependencies are in the repo/ subdirectory, these packages will get installed automatically, if your application package is installed. Otherwise, Limba depends on the user to install these packages manually – there is no interaction with the distribution’s package-manager (yet?).

3. Building the package

In order to build your package, make sure the content in inst_target/ is up to date, then run

lipkgen build lipkg/

This will build your package and output it in the lipkg/ directory.

4. Testing the package

You can now test your package. Just run

sudo lipa install package.ipk

Your software should install successfully. If you provided a .desktop file in $prefix/share/applications, you should find your application in your desktop’s application menu. Otherwise, you can run a binary from the command line: just append the version of your package to the binary name (bash-completion helps). Alternatively, you can use the runapp command, which lets you run any binary in your bundle/package and is quite helpful for debugging (since the environment a Limba-installed application runs in differs from that of other applications).

Example:

runapp ${component_id}-${version}:/bin/binary-name

And that’s it! 🙂

I used these steps to create a Limba package for the OpenGL Qt5 demo on Tanglu 2 (Bartholomea), and tested it on Kubuntu 15.04 (Vivid) with KDE, as well as on an up-to-date Fedora 21, with GNOME and without any Qt or KDE stuff installed:

[Screenshots: the Qt5 demo installed via Limba, running on Kubuntu and on Fedora]

I encountered a few obstacles when building the packages, e.g. Qt5 initially didn’t find the right QPA plugin – that has been fixed by adjusting a config file in the Qt5Gui package. Also, on Fedora, a matching libpng was missing, so I included that as well.

You can find the packages on GitHub currently (but I am planning to move them to a different place soon). The biggest issue with Limba at the moment is that it needs Linux 3.18, or an older kernel with OverlayFS support compiled in. Apart from that and a few bugs, the experience is quite smooth. As soon as I am sure there are no hidden fundamental issues, I can think of implementing more features, like signing packages and automatically updating them.

Have fun playing around with Limba!

November 24, 2014

A bit more than one year after the initial commit, Cutelyst makes its 5th release.

It’s now powering 3 commercial applications; the last one recently went into production and is the most complex of them, making heavy use of Grantlee and Cutelyst capabilities.

Speaking of Grantlee, if you use it on Qt5 you will get hit by QTBUG-41469, which sadly doesn’t seem to be getting fixed in time for 5.4, but uWSGI can constrain your application’s resources so your server doesn’t run out of memory (Grantlee is worth the leak due to its usefulness).

Here is an overview since 0.4.0 release:

  • Removed the hardcoded build setting of “Debug”, so that one can build with “Release”, increasing performance by up to 20% – https://gitorious.org/cutelyst/pages/CutelystPerformance
  • The Request::uploads() API was changed to be useful in the real world, filling a QMap with the form field name as the key, in the proper order sent by the client
  • Introduced a new C_ATTR macro which allows the same Perl attribute syntax, like C_ATTR(method_name, :Path(/foo/bar) :Args) (see the sketch after this list)
  • Added an Action class, RoleACL, which allows doing authorization based on access control lists, making it easy to deny access to some resources if a user doesn’t have the needed role
  • Added a RenderView class to make it easier to delegate the rendering to a view such as Grantlee
  • The Request class is now a QObject subclass so that we can use it in Grantlee as ctx.request.something
  • Make use of uWSGI’s ini (--ini) configuration file to also configure the Cutelyst application
  • Better docs
  • As always some bugs were fixed
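Here is a rough sketch of C_ATTR in use; take it as an illustration only, since the exact includes and method signatures may have differed in this release:

#include <Cutelyst/Controller>

using namespace Cutelyst;

class Root : public Controller
{
    Q_OBJECT
public:
    // Dispatch requests for /foo/bar (with any number of trailing
    // path arguments) to this method
    C_ATTR(method_name, :Path(/foo/bar) :Args)
    void method_name(Context *ctx)
    {
        ctx->response()->setBody(QStringLiteral("Hello from /foo/bar"));
    }
};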

I’m very happy with the results: all those site performance tools like WebPageTest give great scores for the apps. I have started working on translating the Catalyst tutorial to Cutelyst, but I realized that I need the Chained dispatcher working before that…

If you want to try it, I’ve made a hello-world app available today at https://gitorious.org/cutelyst/hello-world

Download here!


November 10, 2014

As some of you already know, since the larger restructuring in PackageKit for the 1.0 release, I am rethinking Listaller, the 3rd-party application installer for Linux systems, as well.

During the past weeks, I was playing around with a lot of different ideas and code to make installation of 3rd-party software easily possible on Linux while also working together with the distribution’s package manager. I have now come up with an experimental project which might achieve this.

Motivation

Many of you know Lennart’s famous blogpost on how we put together Linux distributions. He makes a lot of good and valid points there (in fact, I agree with his reasoning). The proposed solution, however, is not something I am very excited about, at least not for the use-case of installing a simple application[1]. Leaving aside things like the exclusive dependency on technology such as Btrfs, the solution outlined by Lennart basically bypasses the distribution itself instead of working together with it. This results in duplication of installed libraries, making it harder to see which versions of which software components are actually running on the system. There is also a risk of security holes due to libraries not being updated. The security issues are worked around by a superior sandbox, which still needs to be implemented (but will definitely come soon, maybe next year).

I wanted to explore a different approach of managing 3rd-party applications on Linux systems, which allows sharing as much code as possible between applications.

Limba – Glick2 and Listaller concepts merged

In order to allow easy creation of software packages, as well as the ability to share software between different 3rd-party applications, I took heavy inspiration from Alexander Larsson’s Glick2 project, combining it with ideas from the application-directory based Listaller.

The result is Limba (named after the Limba tree, not the voodoo spirit – I needed some name starting with “li” to keep the prefix used in Listaller, and for a tool like this the name didn’t really matter 😉 ).

Limba uses OverlayFS to combine an application with its dependencies before running it, as well as mount namespaces and shared subtrees. Except for OverlayFS, which only landed in the kernel recently, all other kernel features needed by Limba have been available for years (and many distributions ship OverlayFS on older kernels as well).

How does it work?

In order to achieve separation of software, each software component is located in a separate container (= package). A software component can be an application, like Kate or GEdit, but also a single shared library (openssl) or even a full runtime (the KDE Frameworks 5 parts, GNOME 3).

Each of these software components can be identified via AppStream metadata, which is just a few bits of XML. A Limba package can declare a dependency on any other software component. In case that software is available in the distribution’s repositories, the version found there can be used. Otherwise, another Limba package providing the software is required.

Limba packages can be provided from software repositories (e.g. provided by the distributor), or be nested in other packages. For example, imagine the software “Kate” requires a version of the Qt5 libraries, >= 5.2. The downloadable package for “Kate” can be bundled with that dependency, by including the “Qt5 5.2” Limba package in the “Kate” package. In case another software is installed later, which also requires the same version of Qt, the already installed version will be used.

Since the software components are located in separate directories under /opt/software, an application will not automatically find its dependencies, or be able to locate its own files. Therefore, each application has to be run by a special tool which merges the directory trees of the main application and its dependencies together using OverlayFS. This has the nice side effect that the main application can override files from its dependencies, if necessary. The tool also sets up a new mount namespace, so if the application is compiled with a certain prefix, it does not need to be relocatable to find its data files.
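Conceptually, the merge resembles an OverlayFS mount like the one below; this is just an illustration with made-up paths, not the exact options Limba passes. Note how the application layer sits on top, so its files win over those of the dependency:

mount -t overlay overlay \
    -o lowerdir=/opt/software/Qt5Core-5.3,upperdir=/opt/software/myapp-1.0,workdir=/tmp/limba-work \
    /run/limba/myapp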

At installation time, to achieve better system integration, certain files (like e.g. the .desktop file) are split out of the installed directory tree, so the newly installed application achieves almost full system integration.

AQNAY*

Can I use Limba now?

Limba is an experiment. I like it very much, but it might happen that I find some issues with it and kill it off again. So, if you feel adventurous, you can compile the source code and use the example “Foobar” application to play around with Limba. Before it can be used in production (if at all), some more time is needed.

I will publish documentation on how to test the project soon.

Doesn’t OverlayFS have a maximum stacking depth?

Oh yes it has! The “How does it work” explanation doesn’t tell the whole truth in that regard (mainly to keep the section small). In fact, Limba will generate a “runtime” for the newly installed software, which is a directory with links to the actual individual software components the runtime consists of. The runtime is identified by a UUID. This runtime is then mounted together with the respective application using OverlayFS. This works pretty well, and also means no dependency resolution has to be done immediately before an application is started.

That dependency stuff gives me a headache…

Admittedly, allowing dependencies adds a whole lot of complexity. Other approaches, like the one outlined by Lennart, work around that (and there are good reasons for doing so).

In my opinion, the dependency-sharing and de-duplication of software components, as well as the ability to use the components which are packaged by your Linux distribution is worth the extra effort.

Can you give an overview of future plans for Limba?

Sure, so here is the stuff which currently works:

  • Creating simple packages
  • Installing packages
  • Very basic dependency resolution (no relations (like >, <, =) are respected yet)
  • Running applications
  • Initial bits of system integration (.desktop files are registered)

These features are planned for the near future:

  • Support for removing software
  • Automatic software updates of 3rd-party software
  • Atomic updates
  • Better system integration
  • Integration with the new sandboxing features
  • GPG signing of packages
  • More documentation / bugfixes

Remember that Limba is an experiment, still 😉

XKCD 927

Technically, I am replacing one solution with another one here, so the situation does not change at all ;-). But indeed, some duplicate work is being done, as more people are now working on similar questions in this area.

But I think this is a good thing, because the solutions worked on are fundamentally different approaches, and by exploring multiple ways of doing things, we will come up with something great in the end. (XKCD reference)

Doesn’t the use of OverlayFS have an impact on the performance of software running with Limba?

I ran some synthetic benchmarks and didn’t notice any problems – even the startup speed of Limba applications is only a few milliseconds slower than the startup of the “raw” native application. However, I will still have to run further tests to give a definitive answer on this.

How do you solve ABI compatibility issues?

This approach requires software to keep its ABI stable. But since a package can have a strict dependency on a specific version of a component (although I’d discourage that), even people who are worried about this issue can be happy. We are getting much better at tracking unwanted ABI breaks, and larger projects offer a stable API/ABI during a major release cycle. For smaller dependencies, stricter version requirements can be used, as explained above.

In summary, I don’t think ABI incompatibilities will be a problem with this approach – at least not more than they have been in general. (The libuild facilities from Listaller to minimize dependencies will still be present in Limba, of course.)

You are wrong because of $X!

Please leave a comment in this case! I’d love to discuss new ideas and find the limits of the Limba concept – that’s why I am writing C code after all, since what looks great on paper might not work in reality or have issues one hasn’t thought about before. So any input is welcome!

Conclusion

Last but not least I want to thank Alexander Larsson for writing Glick2, which Limba is heavily inspired from, and for his patient replies to my emails.

If Limba turns out to be a good idea, you can expect a few more blog posts about it soon.


* Answered questions nobody asked yet

[1]: Don’t get me wrong, I would like to have these ideas implemented – they offer great value. But I think for “simple” software deployment, the solution is overkill.