|February 12, 2014|
I also got some good and positive feedback on the projects I work on, and met many people from Debian, Kubuntu, KDE and GNOME (some of whom I hadn’t seen for almost 3 years). One of the best things about being at FOSDEM is that you not only see people “of your own kind” – for me as a Debian developer, for example, it was great to see Fedora people and discuss things with them, something which rarely happens at Debian conferences. Also, having GNOME and KDE closely together again (literally – their stands were next to each other…) is something I had missed since the last Desktop Summit in 2011.
My talks also went well, except for the technical problems with the slides and projector at the beginning, which took quite some time to resolve (I blame KScreen).
In case you’re interested in the slides, here they are: slides for FOSDEM’14 AppStream/Listaller talks.
The slides can likely be understood without the talk, since they are very detailed (usually I only show images on slides, but that doesn’t help people who can’t see the talk ^^).
I hope I can make it to FOSDEM’15 as well – I’ve only been there once, but it is already my favourite FOSS conference (and I love Belgian waffles).
|February 11, 2014|
Three months ago, Cutelyst was started.
It began as something of a proof of concept, to see if I could build something that could be used in the real world, but the more I progressed on getting all the pieces together, the happier I got with the overall result. And the initial result is here today!
I have made a few benchmarks comparing the delivery of some simple pages using Django (Python) and Perl Catalyst. Overall, the time to first byte was around 3 times faster than both; comparing RAM usage, it was around 2MB for Cutelyst vs. 50MB for the Perl version and 20MB for the Python one. The CPU time was also much lower than both, which probably means it could handle more requests – but this isn’t a completely fair benchmark (as always), because there are several other things to measure…
Below I’ll try to clarify some doubts raised from my last post and what makes this new Web Framework unique:
But there is already QDjango and Tufao…
I didn’t know about these when I first started Cutelyst, but after taking a look at them, both miss a templating system (or integration with one). I also didn’t like the way one registers actions to be dispatched; IMO the way it’s done in Cutelyst is easier and nicer. They also don’t come with WSGI integration – they have their own HTTP parser/server. And I didn’t like the look of the code.
Why not Django or any other well-established framework out there?…
Apart from QDjango and Tufao there are (AFAIK) no other Qt-based frameworks. There are indeed some C++ ones that I knew before, but I didn’t like their way of doing things; in fact, when I started with Perl Catalyst I really liked how it was written. So, in short: I don’t know other languages well enough, and I have no plans to waste my time learning more of them – I’d rather try to master at least one (although I know I’m far from it).
What’s in this release?
If you are concerned about API/ABI stability, I can’t promise it just yet. Right now it’s almost stable; the 0.2 or 0.3 version will probably come with a promised stable API.
And yes, I have just put it into production: http://serial.igloobox.com.br is running it right now (but the site is user-restricted).
My plan now is to set up cutelyst.org with docs and examples, as well as to write a blog, bug tracker and wiki based on it.
|January 26, 2014|
After years of missing this conference, I will finally make it to Brussels this time!
I will even give some talks: one about Listaller (in the lightning-talks section) and one about AppStream and Listaller in the Distributions track. The lightning talk will explain why distributing (some) applications independently of the distribution makes sense, and how Listaller does this. I will also very briefly talk about the concepts behind Listaller and the tools it offers application developers to create & package cross-distro apps.
The AppStream & Listaller talk will be much more detailed. It will cover the rationale for AppStream, what AppData is good for and how AppData files relate to AppStream. I will also reflect on the adoption of AppData in GNOME/Fedora and on why GNOME-Software is the right way forward in developing software centers. It will of course also include our future plans for AppStream.
On the Listaller side, I will talk about how Listaller is connected to AppStream and PackageKit, and why distributions should ship a way to install cross-distro packaged apps at all. I will explain module definitions and why they are useful. An overview of the internals of Listaller and its system integration is also included, as well as a comparison with competing installation solutions.
If you are at FOSDEM and have questions about AppStream/PackageKit/Listaller/Tanglu/Debian/etc., please ask them! See you there!
|January 19, 2014|
Since yesterday, we have an (installable!) Beta2 release of Tanglu!
Compared to the previous snapshot, it comes with a huge amount of improvements and bugfixes:
You can download the release from one of these mirrors:
As a note to Debian: systemd is working pretty well for Tanglu so far.
I am pretty happy with this Beta2 release, because Tanglu is shaping up to be the distribution we imagined in the beginning.
Have fun! And as bonus, here are some images from Beta2:
|November 04, 2013|
While reading stuff posted by others about AppStream, and because of the discussion happening about AppData on kde-core-devel right now, I feel the need to clarify a few things – especially because some are implementing AppStream in a way which is not really ideal right now. This is to some extent my fault, because I should have communicated this in a much more visible way.
To those people who don’t know it already: AppStream is a Freedesktop project aiming at providing basic building blocks to create distro- and desktop-agnostic software centers.
So, let’s answer some questions about AppStream!
No, not at all! It was originally created by people from at least 4 different distributions, and I took great care that it would not become specific to any desktop or distribution. GNOME just happened to go ahead and implement the specs, which was absolutely necessary, since for a long time there had been little progress on implementing AppStream.
AppStream is a bunch of things, so I will only focus on what we have specified right now and what is working.
Basically, the distributor compiles a list of applications available in the repositories and makes it available in some defined directories on the system. AppStream defines an XML specification for that, but since some people don’t want to or can’t use it, there are also other ways to publish AppStream application data. For example, Debian will likely use YAML for that.
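For illustration, a distributor-provided entry might look roughly like this (this is only a sketch with a made-up package, not a verbatim copy of the spec – see the official specification for the real schema):

```xml
<applications version="0.1">
  <application>
    <id type="desktop">kate.desktop</id>
    <pkgname>kate</pkgname>
    <name>Kate</name>
    <summary>Advanced text editor</summary>
  </application>
</applications>
```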
This data is taken by the AppStream software (running as a PackageKit plugin) and transformed into a Xapian database. This database is then in turn used by software centers, in combination with PackageKit, to present applications.
This is the reason why it is bad to use the XML data directly – it might not be available on every distribution. The Xapian database is what matters. The database can be accessed using libappstream, a GLib based library (so far, there was no need for a Qt version).
The libappstream stuff was under heavy construction, and GNOME wanted to be fast and ship the stuff with Fedora in their next release. They’ll likely switch to the Xapian db soon, or offer it as a backend.
Yes, Apper can utilize it, see one of my previous blogposts.
AppData is an initiative started by Richard Hughes in order to enhance the quality of application descriptions shipped with AppStream. It defines a small file, $appname.appdata.xml, which describes the application, defines screenshots etc. These files can be parsed at the distribution’s side in order to enhance the app metadata. They can also be translated upstream.
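A minimal AppData file looks roughly like this (illustrative only – the application id and URLs here are hypothetical; consult the AppData spec for the authoritative format):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<application>
  <id type="desktop">myapp.desktop</id>
  <licence>CC0</licence>
  <url type="homepage">http://example.org/myapp</url>
  <description>
    <p>A longer description of the application, for software centers.</p>
  </description>
  <screenshots>
    <screenshot type="default">http://example.org/myapp/screenshot.png</screenshot>
  </screenshots>
</application>
```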
AppData might be merged into the XDG AppStream pages later, but that is to be discussed.
No, nobody is forced to. However, the GNOME Software application center prefers applications shipping more metadata and “punishes” the others, so shipping an AppData file makes sense for a higher listing in GS. This is a policy decision by GNOME; KDE can make its own here.
Shipping AppData files makes sense, in general, because it enhances the metadata distributed with your application. It is also the file-format used by Listaller (well, a superset of it) in order to generate cross-distro app-packages, so you might want to consider adding one.
Yes, you can find them on the AppData specification page. However, these are recommendations rather than requirements, and they are currently aimed at GNOME apps. I later want to generalize that and create a separate page with recommendations for KDE (Martin had some good ideas already).
The screenshots are defined in AppData files and then cached by the distributor. If there are no “official” screenshots, user-provided screenshots will be taken, using a screenshot service with a screenshots.d.n-like API.
The official AppStream spec, the thing which distributors should implement, can be found on Freedesktop.
The AppData spec can be found here. It also includes some nice hints on how to handle screenshots etc., and includes its own FAQ.
Great! Please get in contact with me or Richard. The only features we would not consider for the official standard are desktop/distro-specific ones (which should be obvious).
I will extend this FAQ, if I feel the need for it, so this article might change a bit.
|November 01, 2013|
Do you recall that “Qt everywhere” branding?
Well, that’s one of the things I really like about Qt: being able to deploy everywhere with minimal to zero changes in source code. But sometimes your application needs a web interface – some sort of web backend or web admin – and when that happens, it breaks that slogan (AFAIK)…
I only got in contact with web languages (not HTML) at university. ColdFusion and PHP were the first ones, and while PHP was sort of cool because its syntax is close to C, I never really liked it. I’m not a web designer, nor do I know how to make pretty websites, but in today’s world building web apps is inevitable.
So years ago I learned about Ruby on Rails; people were saying all kinds of good things about it – it was the framework of the hype. Although I knew Ruby wasn’t the fastest scripting language, my needs didn’t demand that, so I got a book, read it cover to cover and happily started my web app. A week later I hit a complex problem with an apparently impossible solution. It seemed that RoR makes easy things stupidly easy and complex things impossible…
Then I met Catalyst, a Perl framework, and got excited again. But I knew nothing about Perl, so I read three more books and started to love the language – I still do. Last weekend, however, when trying to resurrect an old Catalyst application, I got an error which I had no idea how to fix. On IRC I was told the problem might be that I had mixed CPAN packages with distro packages. Not very convinced, I realized the bigger problem I was actually facing: I’m not fluent in Perl. No matter how many books you read about a language, you only get fluent in it by speaking it almost every day – and this applies to computer languages too, of course.
So if 99% of the time (or a bit less now, with QML) I’m programming in C++, it means I know a lot more about debugging it than I do Perl. Hence Cutelyst! Writing web apps in Qt/C++ will be much more efficient for me, and hopefully for you too.
Cutelyst shares lots of the Catalyst API and design ideas, though the implementation is of course completely different. Also, for the sake of HTML templating, I plan to create a Grantlee View plugin.
You can take a look at the proof of concept:
|October 28, 2013|
Long time with no article about Tanglu! This was mainly because we were busy with the project itself, improving various aspects of the distribution.
So, here is a new summary of what has been done and what you can expect from the first release.
We further improved the automatic archive QA. There is now qa.tanglu.org, which constantly monitors the number of uninstallable or unbuildable packages in the Tanglu suites. It also provides status information on the metapackage generator, which helps us in finding out which packages are available on the live-cds. Furthermore, information about the staging->devel migration process is provided, to answer the question why a package does not migrate (this still needs some improvements, but it is being worked on).
We also use some code from Ubuntu to monitor package versions in Debian and upstream, which helps to see if others have released newer versions of software shipped with Tanglu.
This has already resulted in many improvements: the Tanglu Aequorea suite does not contain unbuildable packages (at least not due to build-dependency changes), and all live CDs are working well.
We will soon migrate the archive to a new server, which frees some server capacities we can use for automated QA and things like automatic live-CD building.
We currently don’t do alpha releases of Tanglu, but we create live-CD snapshots, which are available at releases.tanglu.org (or mirror1, mirror2). These snapshots still have issues and are just early previews. They also ship without an installer – we are still working on that part. Please note that “CD” more or less means DVD or USB stick right now (and this won’t change – the expected image size will be around 800MB).
I am happy to announce that we will do a release this year, most likely in December. But what can you expect from the release?
We will ship with KDE 4.11, which will be the only desktop we officially support so far. The reason is simply lack of manpower – we could promise to support more, but that would just not be realistic for the small team. So we focus on KDE (Plasma Shells) right now, and try to make it awesome. Also, the team consists mostly of KDE people right now, which contributed to that decision ;-).
If you want to try Tanglu, right now the KDE live-images are the best to try it out.
We will also provide images with GNOME. The problem is that the GNOME team does not have enough manpower to maintain the whole desktop or to upgrade it to the latest version (it is essentially just me looking at it from time to time). So GNOME will be available in a “preview” state. We invite everyone with GNOME knowledge to join the project and help improve Tanglu-GNOME – GNOMErs, we want you!
We ship with systemd by default, which works nicely so far, although more testing needs to be done. The logind service will be used to replace ConsoleKit, if we manage to get everything in place and working in time (if there are issues, we might switch back to CK for one release). There are some plans to use a higher systemd version, due to some improvements made there, but whether this will be done is still unclear (Debian will most likely stick to 204 for some time, because with systemd > 205, running it as pid 1 is mandatory to use logind – and Debian is just in the process of deciding which init system to use there).
Systemd will run in SysVInit compatibility mode for most of the available services. This will improve in later Tanglu releases. Of course, systemd is usable even if not every init script has been converted to a service file; it just has an impact on startup times, so Tanglu will not be the distribution with the fastest startup times (yet).
Tanglu consists mostly of packages from Debian Testing (Jessie), but we take full advantage of the Debian archive, so you will also find package versions from Unstable or even Experimental (where it made sense). A very small portion of packages has also been merged with Ubuntu. Although stuff has been changed, the incompatibilities with Debian are almost zero, so if you install Tanglu, it will currently feel like a more up-to-date Debian Testing with some extras.
Still, the differences are large enough that upgrading a Debian system to Tanglu might result in some issues.
Right now, the installer is a major field of work. Tanglu will most likely ship with the Debian-Installer, because it is the easiest thing to do right now.
For later releases, it is planned to also offer the Ubiquity installer (the thing Ubuntu uses), or a new installer with a similar UI and concept.
Tanglu will ship with a fully working Qt5 (which is currently being tested and updated) and the latest version of Wayland/Weston as a preview for developers to play around with.
We also ship with Perl 5.18 and Haskell GHC 7.6, as well as with GCC 4.8 as the default compiler (although the whole distribution does not yet compile with GCC 4.8). We might ship with Linux 3.12, but this also depends on other things. The Linux kernel build will be the same as in Debian. There might be more things to come.
Please do always keep in mind that Tanglu is a new project, so the first release might still have some teething problems – but we’re working on it. The Aequorea release will be supported one to two more months after the next release is out.
I started to draft a Tanglu policy, which defines stuff like the procedures to become a Tanglu project member and/or developer, some basic values of the distribution, decision making processes etc. This work is at a very early stage right now, and will need lots of feedback later. But as soon as it is done, joining the project will be easier and what Tanglu is will be well-defined.
The policy will also include a Code-of-Conduct as an additional document from the start.
First of all, thanks to everyone working on Tanglu! You are amazing! Also, many thanks to every Debian developer or contributor who helped a lot in setting the project up and contributed knowledge. And of course, thanks to everyone else who contributed by creating awesome artwork, helped with code, Tanglu archive mirrors, buildd-servers or by testing the distribution and providing feedback! (given the state the aequorea suite was in at the beginning, testing was a really frustrating activity – but people still did it!)
So, if you would like to help, just find us on #tanglu-devel on Freenode or join the tg-devel mailing list. We also really need some people taking care of the Tanglu website and updating it with recent information from time to time, so people can see what is going on. Until we have the Tanglu policy finished and the Tanglu members and developers directory in place (software which allows us to track all registered developers and gives them an @tanglu.org mail alias), getting started might be a bit confusing, but we do our best to make it easy for new people to join. The best way is asking people.
Tanglu is still created by an incredibly small team with a large task to accomplish. Help is welcome!
|August 25, 2013|
We now use a modified version of Debian’s Britney tool to migrate packages from the newly-created “staging” suite to our current development branch “aequorea”. This ensures that all packages are completely built on all architectures and don’t break other packages.
New uploads and syncs/merges now happen through the staging area, where they can be tested and blocked on demand, so our current aequorea development branch stays installable and usable for development. People who want the *very* latest stuff can add the staging sources to their sources.list (but we don’t encourage you to do that).
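If you really want to live on the edge, an entry in /etc/apt/sources.list might look roughly like this (the exact archive URL and component names are illustrative – check the Tanglu documentation for the real ones):

```
deb http://archive.tanglu.org/tanglu staging main
```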
The Synchrotron toolset we use to sync packages with Debian recently gained the ability to sync packages using regular expressions. This makes it possible to quickly sync many packages which match a pattern.
The infrastructure has been tweaked a lot to remove quirks, and it now works quite smoothly. Also, all Debian tools now work flawlessly in the Tanglu environment.
A few issues are remaining, but nothing really important is affected anymore (and some problems are merely cosmetic).
Long term, we plan to replace the Jenkins build infrastructure with the software running Paul Tagliamonte’s debuild.me (only the buildd service; the archive will still be managed by dak). This requires lots of work, but will result in software usable not only by Tanglu, but also by Debian itself and everyone who wants a build service capable of building a Debian-based distro.
Tanglu now offers KDE 4.10 by default, a sync of GNOME 3.8 is currently in progress. The packages will be updated depending on our resources and otherwise just be synced from Debian unstable/experimental (and made working for Tanglu).
Tanglu now offers systemd 204 as the default init system, and we transitioned the whole distribution to the latest version of udev. This even highlighted a few issues which could be fixed before the latest systemd reached Debian experimental. The udev transition went nicely, and hopefully Debian will fix bug #717418 soon too, so both distributions run with the same udev version (which obviously makes things easier for Tanglu ^^).
We now have a Plymouth boot screen and wallpapers, and more stuff is in progress.
This is what we are working on – we have some problems creating a working live CD, since live-build has some issues with our stack. We are currently resolving them, but because of the lack of manpower this is progressing slowly (all contributors also work on other FLOSS projects and of course have their day jobs too :P).
As soon as we have working live-media, we can do a first alpha release and offer installation media.
Tanglu is a large task, and we need every bit of help we can get – right now especially technical help from people who can build packages (Debian Developers/Maintainers, Ubuntu developers, etc.). We especially need someone to take care of the live CD.
But the website needs some help too, and we need more or improved artwork. In general, if you have an idea how to make Tanglu better (something which matches its goals, of course) and want to realize it, just come to us! You can reach us on the mailing lists (tanglu-project for stuff related to the project, tanglu-devel for development of the distro) or in #tanglu / #tanglu-devel (on Freenode).
|August 06, 2013|
Some days ago I was listening to the radio, and a famous Brazilian comedian was saying how stupid it is to express your thoughts about things, because later you will probably regret what you said and realize how dumb you are… Although I partly agree with him, I decided to share this, as maybe someone is facing the same problems I’m facing.
I don’t want to spread FUD, so if my tests/conclusions or anything I said is wrong, please explain and I’ll update or drop the text entirely.
Lately I’ve been working a lot on a Qt5/Quick2 application, and I decided to share some thoughts about this new piece of tech. The application I’m working on basically has to index more than 16K video and music files and show their covers so the user can choose what to listen to or watch – it’s an app for jukebox machines. Xapian rocks!
First I need to say that I really like Quick2 – the name is right about how fast you can write your app. In Quick2 we have some very cool graphics effects like blur, shadows and others, which are quite interesting for this kind of app: I can blur the background, draw shadows on the CD covers and so on. The multimedia part is also very easy to use, so playing a video or music file is not something you worry about (though I still need to find a way to play files that I encrypt, to stop stupid recorders).
Most people have also probably heard about the Scene Graph, the new renderer of QML2, which uses OpenGL (EGL, IIRC). It brings many improvements in performance and quality, and expands the possibilities of QML2.
However, if you are considering it for your own projects: QWidgets vs. QtQuick is a question many developers can’t answer yet. To me it’s now a bit clearer what to choose.
Every now and then I hear people saying that OS X is good because its software is made for the hardware, and while I partly agree with that, the same is valid for Quick2. Martin recently wrote an interesting blog post about “what makes a lightweight desktop lightweight”, which describes the problem of properly describing a technology.
My point is that with Quick2 we need to draw a line between legacy and supported hardware. To me this is quite complicated, because you can have a very powerful CPU from, say, 8 years ago coupled with a very slow GPU, and brand-new hardware with the opposite configuration – the Raspberry Pi, for example, has a rather slow CPU with a great GPU.
So while QWidgets/Quick1 performed quite well on those old machines, Quick2 doesn’t. To illustrate: I have a computer for testing my jukebox application that has crappy VIA graphics with a 3.2GHz Intel D CPU. When I went to deploy/test my super cool and animated application there, I almost cried – after removing the blur/shadow/transparency effects, I could get a maximum of 10fps fullscreen at 1024×768. Toggling the compositing switch didn’t make any difference (nice work, KWin team), and using other window managers didn’t improve the poor experience either.
Aaron gave me a hint about llvmpipe, which is something I had heard of but never truly understood. Basically, llvmpipe is a software renderer for exactly this case, where the GPU can’t handle the OpenGL instructions, and after some investigation it seems all new distros use it to render on these poor GPUs. Now that I knew about it, I looked at the CPU usage – it was pretty high, and leaking…
Before you jump to the wrong conclusion that Quick2 is bad, please note that for any hardware capable of OpenGL it’s an excellent choice, because it can offload the rendering from your CPU to the GPU. And this is where we have a new era of “lightweight” desktops, as they use less CPU.
Something that does bother me is that Plasma 2 will need newer hardware specs – I won’t be able to run KDE SC 5 (or whatever it will be called) on such hardware. I went on thinking that it would be nice to have something like lxde-qt as our QWidget shell and still use Qt5 with our KDE stuff. In the long run, I believe we won’t be able to share much of the GUI apps, as Quick2 will probably become more and more common – not to mention that it’s a nicer way of developing GUI stuff.
And if you wonder what I’m going to do: I’m not totally sure, but right now I have two user interfaces in the same binary, one QWidget-based and another Quick2. I thought about going back to Quick1/Qt4, but then I wouldn’t have the cool Quick2 stuff. Thanks to the model-based approach, I’m sharing most of the code between the two interfaces. And yes, since QWidget on Qt5 still uses raster rendering, it provides a much more responsive experience for the application.
I hope this gives you some clarity if you are targeting hardware like that. In the future I hope to be able to deploy on Raspberry Pis, but the lack of an RTC makes it a non-option right now.
If you know some magic way to improve the FPS, please let me know – I’d love to have a single GUI.
|July 30, 2013|
I finally managed to roll a new release of Apper!
Here is a small list of changes:
This release is almost a bugfix-only one; more features are planned, but I lack time right now…
Here is the download link:
PS: My wife is back home. Thank you to all who donated a bit to the Litteras campaign, but it is still far from reaching the goal, and I believe it won’t reach it. As I said, I’ll continue its development, but not the way I planned, since I’ll need to find other ways to fund myself.
|July 25, 2013|
I don’t even know how to start this one, but wow – whenever I think my life can’t get worse, it surprises me again.
So last month I started a campaign on Indiegogo for Litteras, a new email client with EWS (Exchange) support. It started great – in about 2 days I had 5% of the goal – but then it stalled completely. That was probably because I failed to blog more about it and its progress, but it also showed how wrong I was in thinking people dislike Akonadi: I got lots of feedback from users that like it. I gave it a second try, and it was more or less what I had experienced in the past. It worked with my Gmail account, but I still don’t like it much – I was able to make KMail unusable by killing the MySQL process. I should file a bug, but lots of things happened after this.
One of the greatest things about the campaign was that I was informed about KDSoap, which doesn’t show up in a “Qt SOAP” Google search. I haven’t started to use it yet, but it will help a lot in developing the libews-qt library. The API of the lib is quite nice to use, and it is somewhat close to the .NET one, but async.
Soon after the campaign stopped receiving funds, I got a proposal to write software for a specific machine. The software isn’t that hard to do (it’s almost done), and it will finally allow me to run my own company (in which I plan to have a few paid people doing KDE stuff at certain hours). Then I got my vacation and went to meet these people, so I expect it to be selling soon.
My vacation is over. Thank God, last month I managed to get rid of my bank debts (I still have my car and the German lawyer to pay). My wife was going to travel to Argentina to deal with the issue that happened in Germany, but at the beginning of the year she fell and broke her foot – she got some metal in it, and to this day she says it still hurts sometimes. I was counting on finally having the money to pay for her trip and the German/Argentinian lawyers, and then it happened again…
She was out shopping with my kid, and with my kid driving her crazy (like most kids do) she mistakenly put some stuff she was buying in the baby stroller – and to make it worse, that happened in two stores. She even called me to ask if she could buy what she wanted, and I said OK. But then a woman in the second store started to scream loudly, and the mess was made. She had been waiting for me to get off work to go back to the first store and buy the stuff.
The police were called, and since she had things from two stores, she had to stay under arrest. It was impossible not to remember when this happened to me in Germany, but I’m sure here it is probably worse – not to mention we are having the coldest days of the year (it even snowed in some places). Luckily, I had some clothes to leave with her.
Now I’ll have to spend 5K USD on the lawyer and probably some more cash for the police. I’m glad tomorrow I get my paycheck and next week my vacation, but it’s still a lot of money, so it isn’t going to be easy to deal with this.
Most of the people who read my blog probably don’t believe in God. I have my own logical and non-logical reasons to believe, and I really feel that something is trying to tear my family apart. In the two years after my daughter died, we were finally starting to love each other again – thanks to this vacation, the debts being gone and lots of talking – and now this… If that’s not evil, surely it’s really bad luck…
So I would really appreciate it if you could help us a bit with this (again – thank you very much if you did the last times) and donate a penny to that campaign. Due to the feedback, I realized that not supporting Akonadi was a bad decision, so my plans for it now are:
* Give libews-qt its own repo
* Make libews-qt use KDSoap
* Cover more of the EWS API
* Implement an Akonadi resource for managing contacts
* _MAYBE_ implement an Akonadi resource for managing calendars, if the above proves to be as easy as the docs say
This is already lots of work, and I plan to do it as soon as I have my company up and running (hopefully within a month). Since we are moving to Qt5, it probably makes more sense not to promise more, as some things might change. If everything goes well, I’ll also try to make use of the lib in KMail, but I don’t know how long that would take, so I prefer not to promise it – the contacts resource I know I can implement. And by the way, folder syncing is already mostly working in Litteras (though not yet the emails inside the folders).
If you can’t help with money please pray for my family or just wish us luck.
|June 03, 2013|
Before you jump to the comments section and start a flame war, please let me give you a little ground.
Two years ago, when I got back to Brazil, I went back to the same job but hired by a different company, and the new company uses Microsoft Exchange as its mail solution. No matter how hard I tried, it'd be almost impossible to think they would ever move away from it; I'd need to convince people I don't know personally, who are also on another continent...
I believe I'm not alone in this, so instead of dreaming of the IMAP day I decided to take a look at which email clients could talk to Exchange. At that time there were ZERO Linux email clients supporting it, and to be fair I'm not talking about MAPI, which is another protocol used to communicate with Exchange but is disabled/blocked at my company and many others.
There is a program that takes OWA (Outlook Web Access) and converts it to IMAP, but I didn't like the idea nor had the will to set it up, as it looked complex. So for some time I just gave up and used the webmail, but it's really bad; the 2007 version especially doesn't auto-refresh the page, and even if it did it wouldn't notify me about new emails.
So when my boss told me he could easily set it up in his OSX Mail I got intrigued: how could it talk to Exchange if MAPI was not an option? After some research I became aware of EWS (Exchange Web Services), which is a SOAP specification for talking to Exchange over HTTP. I then tried to use the gSOAP library to auto-generate the interfaces and code to talk to it (as I had used it before for another SOAP project), but as soon as the code was also linked to any KDE library I got some weird DSO error from the linker... I tried to find out how to fix this linker issue but couldn't get help or find a solution.
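To give an idea of what "SOAP over HTTP" means in practice, here is a minimal sketch (in Python, just for illustration) of building an EWS FindItem request. The namespace URIs follow Microsoft's published EWS schema; the server URL in the comment is a placeholder, and a real client would of course add authentication.

```python
# Illustrative sketch: an EWS call is just an XML envelope POSTed to
# the server's /EWS/Exchange.asmx endpoint with Content-Type: text/xml.
SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
EWS_M_NS = "http://schemas.microsoft.com/exchange/services/2006/messages"
EWS_T_NS = "http://schemas.microsoft.com/exchange/services/2006/types"

def find_item_envelope(folder="inbox"):
    """Build a minimal FindItem request listing the items in a folder."""
    return (
        '<soap:Envelope xmlns:soap="%s" xmlns:m="%s" xmlns:t="%s">'
        '<soap:Body>'
        '<m:FindItem Traversal="Shallow">'
        '<m:ItemShape><t:BaseShape>IdOnly</t:BaseShape></m:ItemShape>'
        '<m:ParentFolderIds><t:DistinguishedFolderId Id="%s"/></m:ParentFolderIds>'
        '</m:FindItem>'
        '</soap:Body>'
        '</soap:Envelope>'
    ) % (SOAP_NS, EWS_M_NS, EWS_T_NS, folder)

# A client would then POST this, e.g. to
# https://mail.example.com/EWS/Exchange.asmx (placeholder host).
envelope = find_item_envelope()
```

Since it is plain XML over HTTP, no MAPI plumbing is needed, which is exactly why EWS works where MAPI is blocked.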
I then sort of gave up and time passed again. Evolution Mail got EWS support, but it also didn't work with my company's setup, no idea why. Recently a new version of it started to work with my company's server and I started using it, but besides the fact that it's a GTK app it doesn't work well for me: it's slow, and the address completion is quite useless...
So, time to give Litteras another try. But wait! Why not give KMail EWS support instead of writing yet another email client?
To put it simply: I don't like MySQL on my servers (you can imagine how I feel about it on a desktop), and even if it were PostgreSQL I simply think it's wrong to store my mails in a SQL database. Granted, KMail works great for lots of users, but I myself don't like the underlying tech; it's probably much more a matter of taste.
Now, KMail developers tell me the emails are stored as regular files, which is something I do know. But then there is a dependency on Akonadi, which is the part that can use MySQL, SQLite... So, to not spread FUD, I'll try to put it another way: I myself, while trying to use it, didn't find a way to avoid Akonadi, and I saw lots of other people not being able to do it either. Every place I looked for information says that Akonadi caches the email information so it can be easily retrieved in other places, so it's not the same as storing the emails in the database; still, I myself (as mentioned earlier) don't like this idea much. As for whether one can still disable Akonadi, I actually find that hard to believe, since all the information I have found says that right now Akonadi provides the resources that fetch the emails. So basically: continue using what you have been using if it works for you; for my taste I just don't like it, and that's the reason why not everyone uses GTK or Qt or PostgreSQL. There are sometimes technical reasons, but that's not entirely the case here; MySQL might perform well for this use case, but past experience with it on servers gave me a trauma.
PS: I hope I clarified this part. As I knew, it would be hard to explain that this is a personal matter (though I ended the line stating so), and I needed to state my reasons for not being willing to go the KMail/Akonadi way right now, as that would be the first question.
BUT if you like KMail and would like EWS support, be happy: I'm building an EWS Qt library, so this will benefit any KDE/Qt developer willing to write yet another mail client, and adding this kind of support to KMail should be much easier once the library is in place; I could even try to do it myself.
So what about Litteras?
Litteras already is a KDE application (as it uses KDE libraries), and I want it to feature EWS, POP3 and IMAP support. Locally I plan to store the emails in MailDir format and index them with Xapian. I also want it to feature a clean user interface and, most importantly, do lots of magic when setting up an email account.
Right now you can just type your email and password and it will find the EWS server, provided it was deployed following Microsoft's specification; it's actually even better than Evolution in this regard right now, as they don't do the DNS SRV search to find the right server (which is what my company's setup needs). It also downloads your folders and messages, but doesn't fetch the bodies yet, nor store them locally.
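The account-setup magic follows Microsoft's Autodiscover convention: from the email address alone you can derive the endpoints to probe, plus the DNS SRV record to fall back on. A small sketch of just the name derivation (the actual HTTP probing and DNS lookup are left out, and the domain is a placeholder):

```python
# Derive Autodiscover candidates from an email address, per the
# convention Microsoft documents: try well-known HTTPS URLs first,
# then fall back to a DNS SRV lookup to find the real host.
def autodiscover_candidates(address):
    domain = address.rsplit("@", 1)[1]
    return {
        "urls": [
            "https://%s/autodiscover/autodiscover.xml" % domain,
            "https://autodiscover.%s/autodiscover/autodiscover.xml" % domain,
        ],
        # Queried only if the URLs above fail; this is the step
        # that setups like the author's company require.
        "srv": "_autodiscover._tcp.%s" % domain,
    }

candidates = autodiscover_candidates("user@example.com")
```

The SRV fallback is exactly the step Evolution was skipping, which is why it failed on some deployments.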
Now let’s do business :D I want your support:
Here is the indiegogo campaign to support two months of development, an amount of 6500 USD. Details on what I’ll do can be found there.
Go grab the code and hack it if you want: https://gitorious.org/litteras
|May 29, 2013|
For those who don't know, print-manager is a project I started back in 2010 (yes, it's 3 years old now!), but it only got included in KDE 4.10 (the current stable release). The reason is that, since it was meant to replace system-config-printer-kde, it needed to provide at least the most used features, and that only became possible once the logic to find a good PPD for a given printer was exposed through DBus, so we could use that but not its GUI.
I'm very pleased to say that my expectation of receiving bug reports about missing features was wrong: only one of the few bugs it got was of that kind (sadly it's not the easiest one to fix). The number of bugs was also quite low and its acceptance quite positive, which helped me fix most of them in a short time. Currently there are only 2 outstanding bugs, and neither of them is a crash (all crashes were fixed); one is a missing convenience feature, and the other is half fixed, but I failed to set up a similar environment to figure out what was happening, so this next version will include some debug info to try to figure that out.
Now, CUPS is not a new project that gains features every day; actually we fear having more features removed. That being said, print-manager already covers most of its functionality. One CUPS feature I didn't make use of in 4.10 was the ability to reprint canceled/completed jobs; this only works if the job is preserved, which is something that can be disabled on the server. Of course we only enable the button if the job supports it: notice the Reprint button. Now, there's something even cooler than that: the last column, "From Hostname", is also new, but the coolest part is that this was actually a feature request from a KDE 3 (kprint) bug; funny enough, I fixed it on the exact same day it turned 10 years old (I accept cookies...). Next we have a bunch of under-the-hood changes and fixes.
When I first presented the print-manager plasmoid it had a feature to configure which jobs to show. Having "Active jobs", "Completed jobs" and "All jobs" ended up becoming an issue: if the completed jobs list had some 20 entries, the plasmoid took several seconds to load, so since "Active jobs" is always the smallest list I removed the other options to avoid bug reports. Later, Apper gained a plasmoid and I ran into the same slowness very quickly; a list of 200 updates (which is not uncommon) was taking half a minute to load.
My previous investigations showed that the Plasma DataEngines were too slow; in fact, I believe it's the mapping between the DataEngine and QML, since plasmaengineexplorer is not _that_ slow. This was a no-go for Apper, so I created a hybrid C++/QML plasmoid, and I quickly noticed this would also be the best thing for print-manager. Not only is it blazing fast now, it also has the important benefit of not keeping four model implementations (the QWidgets dialogs already had them), so now I fix one model and the plasmoid, the KCM and print-queue all get it. If you have used the 4.10 version you will also notice an important improvement above: instead of a weird LineEdit to select which printers the plasmoid will display, you now have a nice checkable list.
Bonus points to Kai Uwe Broulik, who added the second icon there (and did some more stuff), which opens the full System Settings printers module so you don't need to open System Settings itself. And yes, I'd like to have a different icon for the first option, but I failed to find the option (if it even exists) to do that, as Plasma uses the plasmoid icon for the first item. The internal library that talks to CUPS also got several improvements and fixes which made it Qt5-safe, due to QHash storing items in random order (yes, Qt4 says we can't rely on the order, but we know we can :P). To be honest I don't really know how well this worked with the mess it was... The plasmoid also got several improvements:
And finally the plasmoid full of printers and a few jobs, with the new NIHSwitch, fully draggable and with an I/O visual to avoid confusion; I've done my real-world testing and so far nobody got confused:
Until KDE Frameworks 5 is released (and probably also packaged), the development of print-manager will continue in the next SC 4.x releases; of course the list of TODOs is quite small, so if you are willing to give a hand, send me an email.
|May 26, 2013|
I have spent the last two weeks working to make the print-manager experience in 4.11 the best I could, and this post should be about that. Sadly, it's not.
Whenever I write free software I write it because I want to, because I have the need, and since I'm not paid to do this I spend the time I can. Besides the selfishness, I value user feedback a lot; Apper is an example of user feedback, not perfect yet, but lots of things there changed because of it. Yet sometimes one has to have the final word.
There has been some noise over the last years about "not invented here", forks and diversity. People blame the Linux ecosystem for having no direction, no focus, and hence failing on the desktop. But they forget that even countries with much more control were divided because people are different. Heck, yes, everyone is different, so there is no way of pleasing everyone, and this is what I like in Linux.
I'd rather be using OSX if it weren't for that; really, OSX has awesome applications: iPhoto, Mail, Finder... and now I even have a MacBook, so I could instead be building OSX apps and making money! Why don't I use it?
Because I can't change it. No matter how good it is for lots of people, it's not good for me in several respects. And heck, our desktop is far from what I want a desktop to be, but still I can help change this and have lots of fun.
So why am I sad? Simply because I'll need to fork a component which I was actually willing to improve, and no, it's not because my improvements were rejected or ignored, but because some people don't like switches. Yes, you don't like them either? Fine, but take a look at what has just happened:
Now think for a moment what is that checkbox trying to tell you
What about these?
Easy, that’s the list of printers I want to go shopping
No, I'm not questioning whether you like switches better than checkboxes; I'm fine if you do. I'm questioning the API that was changed post freeze, without being listed on the feature plan, and that has just given me more work to do.
A checkbox must have a description text unless it's in a list of things, and even then it is bound to some action, normally described with a text or an icon. So even if I were OK with the change, I'd need to fix this at soft freeze.
Yes, I could instead just fork the component and not waste time writing a blog post. But as the mailing list didn't work out (I raised my points in three different threads and suddenly it got committed), I'd like to hear what users of print-manager or KScreen or any application using switches think about it.
Granted, I'll keep the switches there; hopefully I'll manage to find time to write a better one, as I agree this one is indeed confusing. But the fact that it's confusing doesn't mean we should replace it instead of fixing it.
And let me apologize for making this public, but we aren't an evil company that must hide in mailing lists. I believe users should be able to give feedback, even if it goes to /dev/null.
This is my personal opinion.
And that being said, I must say I'm very sad, really.
UPDATE: I have changed this text a lot trying to make it not sound like a personal critique or FUD. But I didn't notice that I removed the last sentence, which was the actual reason for this post. I did notice some users a bit confused about what the conclusion was, but only now do I see that the text got lost (as you can notice in the text above, I did mention feedback, but feedback on what?)...
So the question was: "How do you feel about these screenshots? Do you think that if a plasmoid is written for Plasma Active, where switches are allowed, it can run with checkboxes on the Desktop (with no change)?"
In other words: "Does my plasmoid look fine on the Desktop now? So that I don't have to create a specific version of it for the Desktop."
IMO, leaving aside the decision itself (which is not what I planned to change, as I know it wouldn't change just because of a blog post), I think simply replacing a switch with a checkbox without any other change is not sufficient for applications. All the work I do on System Settings modules uses QWidgets, and though I love switches and feel there are at least two modules I maintain that could have them, they don't. The reason is simple: there is no QSwitch or KSwitch; they never existed, and I never intended to write one as it would simply look different. Once Plasma started shipping one in their API I was very happy, and when programming with QWidgets I envy Plasma for having it...
|May 12, 2013|
I was asked by some people to write a status report about the whole PK/AS/LI stuff – sorry guys that it took so much time to write it ;-).
PackageKit is an incredibly successful project. With the 0.8.x series it received many performance improvements, and it now has the same speed on my computer as the distribution's native tools. PackageKit is used in almost all major Linux distributions, except for Ubuntu. But even Ubuntu has written a compatibility layer, so most calls to PackageKit will work.
The only major distro where PackageKit is currently not available seems to be Gentoo (and I am not sure about the shape of the Gentoo PackageKit backend either).
Debian Wheezy includes PackageKit by default, and in Jessie we are going to replace some distribution-specific tools with PackageKit frontends (mostly the old and unmaintained update-notifier and Software-Updater; no worries, we are not going for a Synaptic replacement, which currently wouldn't be possible with PK anyway).
Unfortunately, some PackageKit backends have still not been adjusted for the 0.8.x API and only run on 0.7.x. This is bad, since 0.8.x is a huge step forward for PackageKit. But the situation is slowly improving: with the latest OpenSUSE release, the Zypper backend is now available on 0.8.x too.
Being able to run a PackageKit from the 0.8.x series is a requirement for both AppStream and Listaller.
(AppStream is a cross-distro effort for building Software-Center-like applications. It covers stuff like a screenshot service, ratings & reviews etc. The most important component is a Xapian database storing information about all available applications in the distribution's repositories. The Xapian DB is distro-agnostic, but distributors need to provide data to fill it. AppStream offers an application-centric way to look at a package database.)
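To make "application-centric" concrete: the database answers questions like "which package ships an image editor?" rather than "which package is named gimp". The real store is a Xapian full-text index; the toy Python sketch below uses a plain list with made-up entries purely to show the shape of the lookup.

```python
# Toy illustration of the AppStream lookup: search by application
# name/keywords, get back the package that ships the app. The real
# implementation indexes this data in a Xapian database.
APPS = [
    {"name": "Kate", "keywords": ["editor", "text"], "pkgname": "kate"},
    {"name": "GIMP", "keywords": ["image", "editor"], "pkgname": "gimp"},
]

def search_apps(term):
    """Return package names whose app matches the search term."""
    term = term.lower()
    return [a["pkgname"] for a in APPS
            if term in a["name"].lower() or term in a["keywords"]]

result = search_apps("editor")  # both apps advertise "editor"
```

A Software Center then only needs this index plus PackageKit to resolve the returned package names into installable packages.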
The AppStream side doesn't look incredibly great, but the situation is improving. As far as I know, OpenSUSE is shipping AppStream XML to generate the database, Ubuntu ships the desktop files, and I am working on AppStream support in Debian's Archive Kit. On the Fedora side, negotiations with the infrastructure team are still going on. I haven't heard anything from Mageia and the other AppStream participants yet.
Unfortunately, at least for OpenSUSE, the AppStream efforts seem to be stalled, due to people having moved to different tasks. But efforts to add the remaining missing things exist.
On the software side, Apper (KDE PackageKit frontend) has full support for AppStream. Apper just needs to be compiled with some extra flags to make it use the AppStream database.
On the GNOME-side, GNOME-Software is being developed. The tool will make use of the AppStream database, on distributions where it is available.
Also, a Software-Center for Elementary and other GTK+-based desktops is being developed, which is based on AppStream (already quite usable!).
Using the Ubuntu Software Center on non-Ubuntu-based distributions is still not much fun, but with the AppStream database available and a working PackageKit 0.8.x with a backend which supports parallel transactions, it is possible to use it.
On the infrastructure side: I recently landed some patches in AppStream-Core, which will improve the search function a lot. AppStream-Core contains all tools necessary to generate the AppStream database. It also contains libappstream, a GObject-based library which can be used to access the AppStream database.
Also, we are discussing dropping PackageKit's internal desktop-file cache in favour of the AppStream database. If we do that, we will also add software descriptions to the AppStream DB, to improve search results and speed up searching for applications. Because we have to deprecate API for that, I expect this change to happen with PackageKit 0.9.x.
As soon as the Freedesktop wiki is alive again and my account is re-enabled, I will create a compatibility list showing which distribution implements which parts of the PK/AS/LI stack, especially focusing on components needed for AppStream.
Only a few distributions package AppStream-Core so far. Although it is beta-software, creating packages for it and shipping the required data to generate the AppStream database would be a very huge step forward.
(Listaller is a cross-distro 3rd-party software installer, which integrates into PackageKit and AppStream. It allows installing 3rd-party applications, which are not part of the distributor’s repositories, using standard tools used also for native-package handling. Everything which uses PackageKit can make use of Listaller packages too. Listaller also allows sandboxing of new applications, and uses an AppDir-like approach for installing software.)
Listaller is currently undergoing its last transition before a release with stable API and specifications can be made. Dependency solving will be improved a lot during the current release cycle, making it less powerful, but working on all distributions instead. (Fedora always had an advantage in dependency solving, due to RPM providing more package metadata for Listaller to use.) This change was delayed by discussions with the ZeroInstall team about possibly using ZeroInstall feeds to provide missing dependencies. We did not come to a conclusion about extending the XML, so Listaller will contain its own format to define a dependency, which can reference a ZeroInstall feed. That should be a good solution for everyone.
All these changes will result in IPK1.2, a new version of the IPK spec with small changes in the pkoptions file syntax and huge changes in dependency-handling. The new code is slowly stabilizing in a separate branch, and will soon be merged into master.
The next Listaller release will be the last one of the 0.5.x series, we will start 0.6.x then. KDE currently has support for Listaller through Apper, which is enabled on a few distributions. In GNOME, optional Listaller support is being developed and will be available in one of the upcoming releases.
Currently, to my knowledge, only a few distributions package Listaller. This should improve, so it is easier for application developers to deploy IPK packages.
The upcoming changes in KDE and GNOME to build stable developer platforms will help Listaller a lot in finding matching dependencies, and for stuff which only depends on one software framework, installation should be a matter of seconds.
As you can see, lots of things are happening, and there is improvement in all components related to installing and presenting software on Linux machines. However, all these projects have a severe lack of manpower; AppStream and Listaller especially have very few people working on the tools (at the moment, only two active developers). This is the main reason for the slow development. But I am confident that we will have something shipped in the next distribution releases; at least AppStream should be ready by then.
: I don't blame Ubuntu for that – at the time they wrote their own solution, PackageKit did not have all the required features. (This situation has fortunately changed now.)
NOTE: I might extend this post with feedback from the different distributions, as soon as I get it.
|May 01, 2013|
I've been wanting to do a new colord-kde release for a while; the fact is that there is still some stuff to do, and I have to balance my time between a bunch of other things I do in parallel. I just find it boring to stick to one thing at a time.
This release is highly recommended as it has lots of fixes compared to 0.2.0; some distros shipped backports for some of those issues, but over the last week I have fixed even more stuff. Here is a quick list of changes:
For the next release I'll try to make sure the KWin color correction feature works with colord-kde (it should just work, but it seems it doesn't, so I have to dig into this), and as a cool new feature we will have native KDE dialogs for the calibration phase!
|April 26, 2013|
For those who don't follow Hughsie's blog, I'm reposting it here. It's about helping with statistics data if you use colord-kde.
What he asks is quite simple, but it doesn't make sense if you don't have colord-kde installed. You don't need to have ever touched it: colord-kde creates an EDID ICC profile for your display automatically, so the kded module only needs to have run once. Please try:
A favour, my geeky friends:
gnome-settings-daemon and colord-kde create an ICC profile based on the color information found in the EDID blob. Sometimes the EDID data returns junk, and so the profile is also junk. This can do weird things to color managed applications. I'm trying to find a heuristic for when to automatically suppress the profile creation for bad EDIDs, such as the red primary being where blue belongs and that kind of thing. To do this, I need data. If you could run this command, I'd be very grateful:

for f in $HOME/.local/share/icc/edid-* ; do
    curl -i -F upload=@$f http://www.hughski.com/profile-store.php
done
This uploads the auto-EDID profile to my webserver. There is no way I can trace this data back to any particular user, and no identifiable data is stored in the profile other than the MD5 of the EDID blob. I'll be sharing the processed data when I've got enough uploads. If you think that your EDID profile is wrong, then I'd really appreciate you also emailing me the "Location:" output from curl, although this is completely optional. Thanks!
|April 12, 2013|
Stop me if you can
If you followed my last post about sessionk you might be wondering "what the hell...". Well, I like to code on stuff I'm in need of; as for sessionk, I hope to give it an update soon, now that I have more or less the whole picture.
So what's up with networking? If you haven't seen the new plasma network manager, go take a look. The greatest thing about it, in my opinion, is having new blood around; so when I looked at it I decided I should stop complaining and do something I have wanted for a long time.
There's nothing fundamentally wrong with the NM plasmoids; it's just that for the use case I'm interested in, no plasmoid will ever fit: the Mom use-case. If you have non-nerd friends, a wife, kids or parents that use Linux, you know that they will someday call you. And when they do, you need some sort of script to diagnose why "Facebook" isn't opening. My script is like this:
As you can see, it's hard to describe a plasmoid UI by phone; also, the user might have removed the plasmoid from the tray, or might be using plasma-netbook (I spent half an hour trying to explain where the K menu was until I figured out it was the netbook edition...). Also, the current Network Manager KCM only handles connections, which means you must have a plasmoid if you want to manage the network.
This is where System Settings comes in:
With this new plasma-nm I felt it was just the right time for me to do this; more people actively looking at NM means people can fix your code, and the other way around too. Last week I started on this, and at the same time I tried to give some Qt/C++ classes to Jayson Rowe, and we immediately felt that some parts of the API were hard to use. For example, the IPv4 class was giving you an int, and when I saw this I had no idea how to convert that easily to a string; luckily there is a QHostAddress class, which I had never used, so I decided to make libnm-qt actually return a QHostAddress. I then started lots of changes on the lib, among them a change in how pointers are handled, which has fixed some crashes here.
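The int-vs-QHostAddress pain is easy to illustrate: the raw value is a packed 32-bit IPv4 address, which UI code then has to unpack itself. A small Python sketch of the conversion (the byte-order assumption here is mine: the value is shown as a little-endian host would read a network-byte-order address, e.g. on x86; QHostAddress hides exactly this kind of detail):

```python
# Turn a packed 32-bit IPv4 value into a human-readable dotted quad,
# i.e. the conversion that returning QHostAddress spares every caller.
import socket
import struct

def packed_ipv4_to_dotted(value):
    """Convert a 32-bit IPv4 value (LE-read network order) to '127.0.0.1' form."""
    return socket.inet_ntoa(struct.pack("<I", value))

addr = packed_ipv4_to_dotted(0x0100007F)  # 127.0.0.1 on a little-endian host
```

Wrapping the raw int once, inside the library, means no application developer has to remember the byte-order rules again, which is the point of the libnm-qt change.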
And here is the first screenshot
If you want to take a look it’s at git://anongit.kde.org/plasma-nm.git
|April 08, 2013|
Hello everyone! I am very excited to report on the awesome progress we made with Tanglu, the new Debian-based Linux distribution.
First of all, some off-topic info: I don't feel comfortable with posting too much Tanglu stuff to Planet KDE, as it is often not KDE-related. So, in future, Planet KDE won't get Tanglu information unless it is KDE-related. You might want to take a look at Planet Tanglu for (much) more information.
So, what happened during the last weeks? Because I haven't had lectures, I worked nearly full-time on Tanglu, setting up most of the infrastructure we need. (This will change next week, when I have lectures again, and I also have work to do on other projects, not just Tanglu ^^) Also, we already have an awesome community of translators, designers and developers. Thanks to them, the Tanglu website is now translated into 6 languages; more are in the pipeline and will be merged later. Also, a new website based on the Django framework is in progress.
We ran a logo contest to find a new and official Tanglu logo, as the previous logo draft was too close to the Debian logo (I asked the trademark people at Debian). More than 30 valid votes (you had to be subscribed to a Tanglu mailing list) were received for 7 logo proposals, and we now have a final logo:
I like it very much
I decided to use dak, the Debian Archive Kit, to handle the Tanglu archive. Choosing dak over smaller and easier-to-use solutions had multiple reasons; the main one is that dak is way more flexible than the smaller solutions (like reprepro or mini-dak) and able to handle the large archive of Tanglu. Also, dak is lightning fast. And I would have been involved with dak sooner or later anyway, because I will implement the DEP-11 extension to the Debian Archive later (making the archive application-friendly).
Working with dak is not exactly fun. The documentation is not that awesome, and dak contains a lot of stuff hardcoded for Debian, e.g. it often expects the "unstable" suite to be present. Also, running dak on Debian Wheezy turned out to be a problem, as the Python module apt_pkg changed its API and dak naturally had problems with that. But with the great help of some Debian ftpmasters (many thanks for that!), dak is now working for Tanglu, managing the whole archive. There are still some quirks which need to be fixed, but the archive is in a usable state, accepting and integrating packages.
The work on dak is also great for Debian: I resolved many issues with non-Debian dak installations, and made many parts of dak Wheezy-proof. Also, I added a few functions which might also be useful for Debian itself. All patches have of course been submitted to upstream-dak.
This is also nearly finished. Wanna-build, the software which manages all buildds for an archive, is a bit complicated to use. I still have some issues with it, but it does its job so far. (I need to talk to the Debian wanna-build admins for help; e.g. wanna-build seems to be unable to handle arch:all-only packages, and build logs are only submitted in parts.)
The status of Tanglu builds can be viewed at the provisoric Buildd-Status pages.
Setting up a working buildd is also a tricky thing: it involves patching sbuild to escape bug #696841 and applying various workarounds to make the buildd work and upload packages correctly. I will write instructions on how to set up and maintain a buildd soon. At the moment we have only one i386 buildd up and running, but more servers (in numbers: 3) are prepared and need to be turned into buildds.
After working with wanna-build and dak, I fully understand why Canonical developed Launchpad and its Soyuz module for Ubuntu. But I think we might be able to achieve something similar using just the tools Debian already uses (maybe a little less comfortable than LP, but setting up our own LP instance would have been much more trouble).
The import of packages from the Debian archive has finished. Importing the archive surfaced many issues and some odd findings (I didn't know that there are packages in the archive which haven't received an upload since 2004!), but it has finally finished, and the archive is in a consistent state at the moment. To have a continuous package import from Debian while a distribution is in development, we need some changes to wanna-build, which will hopefully be possible.
The online package search is (after resolving many issues, who expected that? :P) up and running. You can search for any package there. Some issues remain, e.g. the file-contents listing doesn't work and changelog support is broken, but the basic functionality is there.
We now also have a bugtracker, based on the Trac software. The Tanglu bugtracker is automatically synced with the Tanglu archive, meaning you'll find all packages in Trac to report bugs against. dak will automatically add new packages every day. Trac still needs a few comfort adjustments, e.g. submitting replies via email or tracking package versions.
The Tanglu metapackages have been published in a first alpha version. We will support GNOME 3 and KDE 4 as long as this is possible (= enough people working on the packaging). The Tanglu packages will also depend on systemd, which we will need in GNOME anyway, and which also enables some great new features in KDE.
A side-effect of using systemd is, at least for the start, that Tanglu boots a bit slowly, because we haven't done any systemd adjustments, and because the packaged systemd is very old. We will have to wait for the systemd and udev maintainers to merge the packages and release a new version first before this will improve. (I don't want to do this downstream in Tanglu, because I don't know the plans at Debian for that; I only know the information Tollef Fog Heen & Co. provided at FOSDEM.)
The community really surprised me! We got an incredible amount of great feedback on Tanglu, and most people liked the idea. I think we are one of the least-flamed new distributions ever started ;-). Also, without the very active community, kickstarting Tanglu would not have been possible. My guess was that we might have something running next year; now, with the community's help, I see a chance for a release in October.
The only thing people complained about was the name of the distribution. And to be really honest, I am not too happy with the name either. But finding a name was an incredibly difficult process (finding something all parties liked), and Tanglu was a good compromise. “Tanglu” has absolutely no meaning; it was picked because it sounded interesting. The name was created by combining the Brazilian “Tangerine” (Clementine) and the German “Iglu” (igloo). I also don’t think the name matters that much, and I am more interested in the system itself than in its name. Also, companies produce a lot of incredibly weird names; Tanglu is relatively harmless compared to that ;-).
In general, thanks to everyone participating in Tanglu! You are driving the project forward!
I hereby announce the name of the first Tanglu release, 1.1 “Aequorea Victoria”. It is Daniel’s fault that Tanglu releases will be named after jellyfish – you can ask him why if you want. I picked Aequorea because this kind of jellyfish was particularly important for research in molecular biology: its green fluorescent protein (GFP) caused a small revolution in science and resulted in a Nobel Prize in 2008 for the researchers involved in GFP research. (For the interested: you can tag proteins with GFP and determine their position using light microscopy. GFP also made many other fancy new lab methods possible.)
Because Tanglu itself is more or less experimental at the moment, I found the connection to research just right for the very first release. We don’t have a date yet for when this version will be officially released, but I expect it to be in October, if the development speed increases a little and more developers get interested and work on it.
We will need to formalize the Tanglu project policy soon, both the technical and the social policies. In general, regarding free software and technical aspects, we strictly adhere to the Debian Free Software Guidelines, the Debian Social Contract and the Debian Policy. Some extra stuff will be written later, please be patient!
I was approached by the Open Invention Network about joining it as a member. In general, I have no objections, because it would benefit Tanglu. However, the OIN has a very tolerant public stance on software patents, which I don’t like much – Debian did not join the OIN for this reason. For Tanglu, I think we could still join the OIN without anyone thinking that we endorse that stance. Joining would simply be pragmatic: we support the OIN as a way to protect the Linux ecosystem from software patents, even if we don’t like its stance on software patents and see the issue differently.
Because this affects the whole Tanglu project, I don’t want to decide this alone, but get some feedback from the Tanglu community before making a decision.
Yes and no. We don’t provide installation images yet, so trying Tanglu is a difficult task (you need to install Debian and then upgrade it to Tanglu) – if you want to experiment with it, I recommend trying Tanglu in a VM.
Great, then please catch one of us on IRC or subscribe to the mailing lists. The best thing is not to ask for work, but to suggest something you want to do; others will then tell you if that is possible and maybe help with the task.
For now, packages can only be uploaded by Debian Developers, Ubuntu Developers or Debian Maintainers who have contacted me directly and whose keys have been verified. This will change later, but at the current state of the Tanglu archive (= fewer safety checks for packages), I only want people to upload who definitely have the knowledge to create sane packages (you can also prove that otherwise, of course). We will establish a new-member process later.
If you want to provide a Tanglu archive mirror, we would be very happy, so that the main server doesn’t have to carry all the load.
If you have experience in creating Linux Live-CDs or have worked with the Ubiquity installer, helping with these parts would be awesome!
Unfortunately, we cannot reuse parts of Linux Mint Debian, because many of their packages don’t build from source and are repackaged binaries, which is a no-go for the Tanglu main archive.
And here is a screenshot of the very first Tanglu installation (currently more Debian than Tanglu):
I have been involved in Debian for a very long time now, first as Debian Maintainer and then as Debian Developer – and I never thought much about the work the Debian system administrators do. I didn’t know how dak worked, how Wanna-build handles the buildds, or what exactly the ftpmasters have to do – I only knew the very basic theory of what these people do. But that is something very different from experiencing how much work setting up and maintaining the infrastructure is, and what an awesome job these people do for Debian, keeping it all up and running and secure! Kudos for that, to all people maintaining Debian infrastructure! You rock! (And I will never ever complain about slow buildds or packages which stay in NEW for too long.)
|April 01, 2013|
|March 26, 2013|
|March 21, 2013|
|March 14, 2013|
Today I make an announcement I thought I would never ever make. But things changed.
Discussion about this has a long history, starting as a non-serious suggestion at the Desktop Summit 2011 and continuing with people on IRC, but back then it was decided that it wouldn’t be worth the effort. This has changed too, and a small team has formed to work on it.
We hereby announce Tanglu, a new Debian-based-Linux distribution.
Tanglu will be based on Debian Testing and follow Debian development closely. It will have a 6-month release cycle, and its target audience are Linux desktop users. We will make installing and setting up the distro as easy as possible.
Tanglu will be usable for both developers of upstream software and the average Linux user and Linux newbie. This is possible because, in our opinion, developers and users don’t have different needs for a desktop system: both kinds of users like a polished desktop which “just works”. We will, however, not apply any kind of fancy modification to upstream software; we will basically just distribute what upstream created, so users get an almost “pure” GNOME and KDE experience.
Tanglu is designed to solve the issue that Debian is frozen for a long time and Debian Developers can’t easily make new upstream versions available for testing. During a Debian freeze, DDs can upload their software to the current Tanglu development version and later start the new Debian cycle with already-tested packages from Tanglu. The delta between Tanglu and Debian should be kept as minimal as possible. However, Tanglu is not meant as an experimental distribution for Debian, so please upload experimental stuff to Experimental; only packages good enough for a release should go into Tanglu.
Ideally, Tanglu and Debian should work well together in mixed environments, where you for example have Debian servers and multiple Tanglu desktops with newer software, targeted at desktop users. Since the differences between Tanglu and Debian should not be large, administering both systems should be very easy (if you know Debian).
Tanglu will be an open project, driven by the community. At the beginning of each cycle, people can propose release goals they want to implement (similar to Fedora, but without FESCo). These proposals are discussed in public and rejected if there are major technical concerns. If consensus about a proposal is lacking, a vote is held, and the proposal is accepted with an absolute majority. If that does not happen, the proposal is postponed to the next release, where people can vote on it again; if nobody wants the feature by then, it is rejected. In general, decisions made by Debian take precedence and have to be followed.
We don’t think we know every package and every piece of software better than the original upstream. That’s why it makes a lot of sense to rely on feedback from others and to have a community-based, peer-reviewed distribution, instead of developing stuff in secret and dumping it on the community. Tanglu will have a highly predictable set of features, defined at the beginning of a cycle, so you will know as early as possible what to expect from the next release and can plan for it.
Tanglu will make it easy to deploy applications for it. It will contain a software center, similar to what Ubuntu has. We will also try to establish a “Linux AppCenter”, a place for Linux applications, which will be open not only to Tanglu but can be adopted by any other distribution too. Possible income will flow back into development of the platform.
Now, let’s answer the FQA (Future Questions Asked):
Why don’t you contribute to Debian directly and create yet another distribution?
First of all, we do contribute to Debian. And for me, I can say that I will contribute to Debian even more. The point is that Debian cannot cover all possible use-cases, and with Tanglu we want to make a distro which fills that gap. You might ask why we have to create a new distro for that, instead of making improvements inside Debian. Creating a new distro allows us to do things we could never do in Debian: for example, we will include proprietary firmware, we will make it easy to install proprietary software (but won’t ship it by default), and we will have a time-based release cycle. These are already no-gos for Debian, and that’s fine – we don’t want Debian to support these cases, as it is already a great distribution. We want to offer a distro as close to Debian as possible, but with a few modifications for use-cases which are not covered by Debian itself. Of course we will participate in DEX.
If Debian Developers contribute to Tanglu, freezes will take even longer!
This is an often-heard concern; it comes up in every mailing list discussion about continuing development during a freeze. I disagree: packaging new upstream versions does not slow down the testing and improvement of packages in Testing. Also, Tanglu is an offer for Debian Developers to participate (we will sync privileges for their packages) – we don’t expect anyone to work on it, but since we think DDs know their packages best, we will make it possible for them to participate without extra barriers. We hope that Tanglu can add value to Debian and that Debian cycles can start with better-tested packages.
You said you are a small team – you cannot develop a whole distribution with it!
Let’s put that to the test! All people working on this are well aware that the project cannot survive without substantial community involvement in the long run. But we see a chance that many people are interested in it and that there is high demand for it.
At the beginning, we will start with a small set of packages. We will also sync many packages from Ubuntu to reduce the workload; for example, it is planned to use the Ubuntu kernel and KDE packaging. This keeps the initial workload low and reduces duplicate work.
We even have some possible sponsors for the new distribution. But nothing is set in stone yet, so just wait for it to happen.
Why not participate in Arch, OpenSUSE $other_distro?
These are not Debian. I know it sounds odd, but if you like the Debian way of doing things, you want to use a Debian-based distribution. There is nothing wrong with openSUSE, and Debian has issues too. But we want to stay close to Debian and use its tools and way of doing things.
I hate you!!! You are doing it wrong!! The project is useless!
Well, that’s fine. But there is no reason to hate us. If you dislike our idea, there are basically two outcomes. First, you hate us and the project is successful – in that case your hate was misplaced, as there were definitely people who liked the project and contributed to it. Second, you hate us and we fail – in that case there is no reason for hate either, as the project will just vanish and you don’t have to worry about it. So hating it would have been a big waste of energy either way.
Also keep in mind that forking is a way to keep development healthy and to adapt software to new use-cases which it didn’t target before. And we are not introducing incompatibilities here (like e.g. writing our own display server could). Instead, we want to stay close to Debian and reuse as much code as possible.
Which desktop will you use?
Everyone can add a new desktop to Tanglu, as long as the desktop-environment is present in Debian. Long term, we will have to offer Linux-newbies a default flavour, probably by setting a default download on the website. But as long as there is a community for a given desktop-environment, the desktop is considered as supported.
At the beginning, we will focus on KDE, as many people have experience with it. But adding vanilla GNOME is planned too.
Can you say something about the software used in Tanglu?
Yes, but this is still in flux, so I can’t promise final decisions here. On the server side, we will try to use dak for repository management as soon as we have enough server capacity. We will also use the standard Debian repository GUI and basically reuse most of the software there, to diverge as little as possible from Debian.
The distribution itself could use a Linux kernel from Ubuntu and systemd as the primary init system, as well as logind as the login manager. It will be based on current Debian Testing with some fresh packages from Unstable and Experimental. We might also use the Ubuntu driver packages and KDE packaging. We expect a very rough start with the first release, but there will be enough time to polish Tanglu.
UPDATE 2014-02-14: Just because this pops up online incredibly often: Tanglu does not and likely will not use an Ubuntu kernel. Tanglu 1 (Aequorea Victoria) ships with Linux 3.12 derived directly from Debian.
Nice idea! How can I help?
Well, at the moment you can help with basically anything – from writing manuals and designing logos and web pages to administering a web server and creating packages. We are at an early stage of development, but we wanted to go public as soon as possible, to include the community and receive feedback, so we can make the distro community-centric from the beginning. Most of the infrastructure is currently in progress too.
So, if you want to get started with Tanglu, subscribe to our mailing list tanglu-devel and write a mail to it, introducing yourself. We can then include you in the loop. Generally, if you want access to our machines, a trusted GPG signature will help a lot.
If you want to talk to us, join #tanglu-devel on Freenode! Most discussions currently happen there.
And that’s it! Tanglu will be awesome!
Some other projects of mine will develop a bit slower because I am now involved in Tanglu. But nothing will stop, and there is some pretty cool stuff coming for both GNOME and KDE (and I still have to implement DEP-11 for Debian).
|March 06, 2013|
Disclaimer: This post just sums up a concept for a new distribution which matches certain ideals. It is not the announcement of a new distribution. These are just abstract ideas. (However, if there is high interest in a project like this, it might of course develop into something real…)
I have been involved in Debian and Ubuntu for a long time now. When Ubuntu started, I was a Debian Testing user, and I immediately switched to Ubuntu, because I liked the idea of a short-release-cycle, user-centric, company-supported, Debian-based Linux distribution. However, I have been back on Debian for a long time now, for many reasons which nearly all had to do with Canonical policy. But this is not a post to criticise Ubuntu, so I’ll leave out most of that part. I am highly disappointed in how Ubuntu is developing – not only are the technical decisions at least questionable, the social and community side is not that great anymore either. There is a high asymmetry in the relation between Canonical and other developers; Ubuntu mailing lists basically don’t produce meaningful results, and sometimes even mutate into a Canonical Q&A session. The community does not seem to have a large influence on decisions about core services, and it can’t have one as long as things are developed behind closed doors. (This is all, of course, my subjective impression.)
But really, nobody can argue against the basic idea of Ubuntu and the great things Ubuntu created. Also, many of the processes Ubuntu uses to develop the distribution are great and well-working, and there is a highly active community around it. As you simply cannot argue with Canonical to change their policy (they are a company with undisclosed plans, and they have every right to apply whatever policy they want), the natural way in any OSS project would be to fork it. But doing that blindly would just create another distribution, which would almost certainly vanish again soon, since there are already many Ubuntu derivatives covering many use-cases on an Ubuntu base.
I discussed this stuff with Daniel some time ago, and we did some kind of brainstorming about what a perfect distribution would look like, from the perspective of a developer who wants to use a Debian-based distribution.
Here is a list of points which would define such a project:
This is basically what we would like to have in a new distribution. If you take a closer look, you will see that such an effort would basically create a close-to-upstream, user-centric, short-cycle, Debian-based and close-to-Debian distribution, which would cover many use cases, including fixing the “use Experimental for new packages during freeze” issue at Debian (DISTRO could be used as an environment to run cutting-edge technology which is generally stable enough, but not yet “Debian-Stable”-stable). Something like this does not exist at the moment. If you take a second look at the list above, you will also see that I mixed technical aspects with organizational aspects. This is intentional. This is just brainstorming, because it is good to know what you would like to have, instead of complaining about the status quo of other projects.
But maybe there will be a distribution which matches some of the above points, to create an upstream-friendly entirely community based Ubuntu.
|February 27, 2013|
A few years ago, when I was attending UDS-M, the Kubuntu guys and I tried to understand why KDE took so long to start. We used bootchart and started debugging startkde & friends; after disabling lots of kded modules and a bunch of other stuff, we still couldn’t get below the ~10-second mark.
Time has passed, and no matter how fast your computer is, we still have to wait a lot for KDE to start. Is KDE slow? Is Qt slow? Is C++ slow? All these questions kept coming to my mind, and lots of friends ask why KDE is slow to start when it is fast once it has started. So the other day I decided to try Unity, to see whether all the FUD was true or not – and I was stunned: a desktop bloated with Python scripts and stuff was much faster than KDE to start; it’s really fast to load. After a few hours trying to use it and not even being able to move the panel to the bottom, I gave up and installed Kubuntu again…
I keep seeing Martin (the KWin maintainer) blogging about the milliseconds he was able to shave off to make KWin start faster, but I still can’t notice it – it’s like there is a 10-second timer before KSplash goes away…
So I decided to disassemble it, and created a thing called “sessionk” (yeah, like systemd but with a K). The idea behind sessionk was to debug and understand all phases of startup and find out what is being slow. The very first code started KWin and Plasma, but KRunner couldn’t launch anything, so I had to go deeper and deeper until I think I now fully understand what happens today:
In short, our code works but it’s showing its age. KWin surely starts much faster than that – actually, it hardly takes more than one second to start.
What needs improvement? Lots of stuff. Even without that 4-second timer, Plasma still takes quite a while to load; with Plasma 2 it’s said to start almost instantly, but you only get that behavior in trade for removing some widgets. Another killer is kded4: if I load all its modules before Plasma and KWin, it delays startup by around 2 seconds and makes Plasma freeze a lot…
A suggestion would be to improve ksmserver, so I took a look at the job, and I really don’t feel I can make “safe” changes to it – it’s 13-year-old code that WORKS! If I change anything and break it, your session is gone: startkde returns and X is closed. A second reason is its current BSD license, and I refuse to code under this license (I won’t get into the merits of the license). A third reason is that I want sessionk to be small, with as few dependencies as possible (like removing the shutdown dialog from the main process). The fourth and last reason is that I don’t think ksmserver is doing a good job of restoring sessions – for instance, Chromium always complains that it was closed abruptly. That could be fixed, but due to the code’s license and age, and because the code isn’t easy for me to understand, I decided to push sessionk a bit further. I might not replace ksmserver/startkde, but it will surely show that in other components of KDE we have large room for improvement.
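To make the idea of debugging a phased session startup more concrete, here is a toy sketch (this is not sessionk’s actual C++ code; Python and the stand-in command `true` are used purely for illustration). Each phase launches its processes, waits for them, and records how long the phase took – roughly the kind of measurement that reveals which startup step eats the time:

```python
import subprocess
import time

# Hypothetical phased startup: each phase must finish before the next
# starts, roughly like startkde launching kcminit, kded, kwin and plasma.
# The commands are stand-ins ("true") so the sketch is runnable anywhere.
PHASES = [
    ["true"],           # phase 0: early init (stand-in for kcminit)
    ["true", "true"],   # phase 1: kded + kwin (stand-ins)
    ["true"],           # phase 2: plasma shell (stand-in)
]

def start_session(phases):
    """Run each phase's commands in parallel, but phases sequentially.
    Returns the wall-clock duration of each phase."""
    timings = []
    for cmds in phases:
        t0 = time.monotonic()
        procs = [subprocess.Popen(c) for c in cmds]
        for p in procs:
            p.wait()  # block until every process in this phase is done
        timings.append(time.monotonic() - t0)
    return timings

timings = start_session(PHASES)
print(len(timings))  # → 3
```

A real session manager would of course wait for readiness signals (D-Bus registration, window manager selection ownership) rather than process exit, but the per-phase timing idea is the same.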
What sessionk does:
Videos say more than a thousand words…
Looking for code? Be careful as it’s still work in progress and might crash your session…
EDIT: Forgot to thank Kai Uwe Broulik for that awesome Plasma zoom-in effect
|January 08, 2013|
Happy new year everyone
I was planning to release Apper as a Christmas gift, but the release wasn’t good: lots of small bugs here and there due to the changes to accommodate QML and the new plasmoid updater.
The new release was initially just meant to be an update to the new PackageKit 0.8.0 API, but then I realized I could make lots of improvements without losing time on things I want to port to QML.
One of the biggest changes in the PackageKit 0.8 series is a feature I have wanted since the moment I joined the PackageKit project: thanks to Matthias Klumpp, we can now run parallel transactions in the daemon, which means you will be able to browse packages while installing (if the backend supports that). Right now Apper is still designed around the old behavior of only one transaction at a time, and most backends don’t have this feature anyway… I’m planning to implement it for APTcc, but forking a process and doing IPC is not something you want to do every day… So it’s likely Apper 0.8.1 or later will be able to do two things at a time.
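For the curious, the “forking a process and doing IPC” part can be sketched minimally like this (a toy illustration of the daemon/worker split, not APTcc code; the message string is made up). The parent plays the daemon, the forked child plays a backend worker running one transaction, and a pipe carries the status back:

```python
import os

def run_transaction_in_worker():
    """Fork a worker process and read its result back over a pipe —
    a toy version of running a package transaction out-of-process."""
    r, w = os.pipe()
    pid = os.fork()
    if pid == 0:                      # child: the "backend worker"
        os.close(r)
        os.write(w, b"transaction-finished")  # report status to the daemon
        os.close(w)
        os._exit(0)
    # parent: the "daemon" side, reads the worker's status report
    os.close(w)
    with os.fdopen(r, "rb") as f:
        status = f.read().decode()
    os.waitpid(pid, 0)                # reap the worker
    return status

print(run_transaction_in_worker())   # → transaction-finished
```

The appeal is isolation (a crashing transaction doesn’t take the daemon down); the pain, as the post says, is that every piece of state now has to be serialized across that boundary.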
So what changed from Apper 0.7.x to 0.8.0:
I also want to thank the people who reported bugs, committed fixes and tested it, so it can be a good release.
For the future, I first want to make APTcc handle parallel transactions so I can test this new way of dealing with PackageKit transactions; then I’ll start porting it to a QML interface, which will make the experience much nicer.
Thank you for using it and enjoy the new release
|December 12, 2012|
I’m very glad to be able to make new posts about stuff I like again, though my situation is still not resolved. We have sent some papers to lawyers in Argentina, and since flights are too expensive this year, my wife will only be going there next year. The donation money helped a lot, but almost half of it is already gone; we still need to pay the German lawyer, and if you know how BRL compares to EUR, it doesn’t play nice. Maybe next year I’ll set up another pledgie, and I hope to have paid off most of the bank debts by then…
Back to the topic: I’ve already said I was planning a new updater, and now I want to share it with you. I’m so happy about it because it makes the updating use case much easier… come on, just click the icon and press update – how much easier could it be?
Apper 0.8.0 will probably be released next week. Tomorrow I’ll try to do the last string changes, and then I’ll give the translators a week to update the few missing strings; 0.8.1 will hopefully follow in January to fix whatever the newly introduced features certainly broke.
I’m not entirely happy yet with how it works, but it works. I still have to show some KDialogs to review dependencies, accept untrusted packages and so on… hopefully in the near future all of that will be inside the plasmoid.
The old tray icon will be gone with this release, and more changes to move away from the regular tray have been made too (I’ll blog about them when I do the release).
And really, thank you very much for your support. Thanks to everyone who cared about me, I was able to be at my wife’s birthday on the 7th, my son’s on the 8th, and today at mine, on the coolest date of the year: 12/12/12.
Enjoy the screen shots:
UPDATE – I forgot to include the screenshot of what you see when you click and expand an update, showing its changelog (some info is still missing):
|November 29, 2012|
|November 16, 2012|
After a real nightmare of 5 days in prison, I’m finally safe at home. First of all I must thank God, my wife, all of my friends, the local media and my family, who really provided us with great help. It took me a lot of time to read all the messages that were exchanged while I was in prison. Without the help of all of you I’d probably still be there – when the police officer said on Monday morning that I was free to go, I couldn’t believe it: Brazil’s embassy had said up to 3 months, the lawyer I got in Munich said at least a few weeks, and my wife was told up to 6 months, not to mention that I would probably have spent even more time in Argentina.
Though the trial is not over yet, at least now I can stay home with my family without the risk of losing my job, and my wife will go to Argentina to defend me and do whatever else is needed. I was amazed at how much money was raised in such a short time. As some of you know, the travel to the Color Management Hackfest was already a bit hard due to my bank issues, but thankfully pretty much everything was already paid. So we will be using the collected money to pay lawyers and travel expenses to Argentina; I don’t know if that will be sufficient, but it’s enough to start the process. If you didn’t have the chance to donate and still would like to, just send me an email and I’ll tell you the PayPal account – it makes quite a difference.
Next month brings my wife’s, my son’s and then my own birthday, so when I got arrested pretty much all I could think of was missing the 3rd birthday of my kid – these really would have been the worst holidays of my life since my daughter died.
Being home now with my kid messing around is really priceless, really thank you all very much!
Sorry for my wife’s noise on Planet KDE, but I think you all can understand how odd this situation was.
As for the geek stuff, I’ll soon be pushing more code
|November 13, 2012|
Hello friends! We have wonderful news! The Argentine lawyers have challenged, before the Argentine courts, the entire procedure that was applied to Daniel, and the Brazilian press also took up the case with strength, pressing the Foreign Ministry and saying that if something was not done quickly, everything would be exposed in the media. And I think that he will be released today, on parole! I can hardly believe it – everyone who gave testimony in favor of Daniel, saying that they know him and that he is a KDE developer who has worked hard on development, and all the expressions of affection, helped a lot in this whole situation. We are now looking for a new ticket so he can return to Brazil once he is out of jail, and we will continue fighting this case, but the most important thing is that he will be on parole here in Brazil – I can hardly believe it!! I can only thank God for wonderful friends like you! And thank you all for the help in all its forms. I should explain that the money that was collected is being used to pay lawyers and documents, and now Daniel’s own trip back to Brazil. It will be a great battle and he still runs the risk of another imprisonment; we will fight for that not to happen! I ask you to continue praying to God for his safety and release. I greatly appreciate everyone!! God bless you!
|November 12, 2012|
Hello friends! We are making a lot of progress: we have 4 Argentine lawyers willing to help us with the case, and 5 Brazilian lawyers also committed to getting Daniel out of this whole situation as soon as possible. The lawyer who handles the case in Germany has shown herself to be a great person, very thorough and willing to help however possible; she has kept me informed of every step – we spend hours every day on the phone, seeking solutions and listing the documents I am providing to the Brazilian and Argentine authorities. She has even scheduled a visit to Daniel today and asked for a message from all of us. We used some of the money from the donations to cover legal expenses and documentation, and sent money to the prison via an account given to me by the Brazilian consulate in Germany, so it does not need any password. We are very hopeful that everything will go well: this morning the Argentine lawyers were at the court, already had access to the case, and should already have started on a defense that will at least get him parole. With the grace of God this whole nightmare will end soon and in the best way possible. I ask you to continue helping us in every possible way… Many thanks to all! Elisabeth Nicoletti.
|November 11, 2012|
Hello friends… I’ve been a little absent because I’m busy gathering all the documents and evidence I can, as well as sorting out my own documentation and flights… I’m also going in person to the official organ of the Brazilian courts to provide all the documentation possible. I ask you to continue helping and praying for us; I will post updates as soon as possible. Thank you, all you friends!! God bless us!
|November 10, 2012|
Good day, friends… We are very moved by the donations and the demonstrations of affection for my husband, Daniel Nicoletti. On behalf of our family, I really appreciate it! God bless you. We’re planning a trip for tomorrow or the day after; we already have money to buy a one-way ticket. I am gathering all the documentation and photos so I can help him as best as possible. I have had contact with many of his friends, who are now my friends too, and I noticed there is no Argentine among them, so I would like to make a call for help so we can get Daniel a good Argentine lawyer and a good Brazilian lawyer – I ask you to help us find both. Contact me at any time of day or night! Thank you! Elisabeth, Christopher Nicoletti.
|November 09, 2012|
Hello! This blogpost is personal and not technology-related at all, but I would appreciate it if you would read it anyway, in this case. Generally, I don’t like sending off-topic posts to the Planets, but this is an extreme situation and I feel it is important that you know what has happened.
First of all, who is Daniel Nicoletti? He is a KDE and Freedesktop developer, the author of many great things like Apper, the PackageKit-based KDE package manager, a new KDE Print Manager, colord-kde and the Aptcc backend PackageKit uses on Debian-based systems. Also, he is a friend of mine and we work on many projects together.
Recently, he was traveling to the Linux Color Management Hackfest 2012, but he never reached Brno, instead he was arrested by Interpol at the airport in Munich.
The reason is a tragic accident about a year ago. If you are a KDE developer/user, you might know the story already. Daniel and his family wanted to move to Argentina. On the way, a car suddenly stopped on the road. To avoid a collision, Daniel tried to dodge the car, which then collided with a truck. His daughter later died as a result of the accident. The KDE release 4.6.3 was dedicated to her memory.
After these events, the family decided to move back to Brazil for various reasons. Meanwhile – and without Daniel knowing anything about it – Argentina filed an Interpol request for him, accusing him of murdering his daughter. (As – of course – he was no longer in Argentina)
At the moment, he is detained in Munich and Argentina is asking for his extradition, but Germany is not convinced of his guilt.
Thanks to our really awesome community (really, I can’t express how impressed and proud I am) he received some help quickly, after his wife posted a desperate request for help via his Google+ account. – At first, she didn’t even know what exactly had happened, because no communication was allowed to Daniel (and still is not), and we all assumed an accident.
Now he has a lawyer who is communicating with his wife. In the worst case, Daniel will be stuck for at least another 6 months, unless his wife can come over and intercede for him.
This surreal story really came out of nowhere. The day before he left for the hackfest, I was reviewing a patch with him and everything was fine, and he was excited to go to the CM Hackfest.
At the moment, his family is collecting money so his wife can go to Munich and resolve the situation, because after these incidents there is not much money left. So, if you want, you can donate for him.
I really appreciate any help to get Daniel back and clear up this injustice, and I wish him and his family all the best! And I hope that this nightmare will end quickly.
Thank you for reading through this…
UPDATE: Daniel’s wife will soon travel to Germany with a one-way ticket. She will meet a member of the community at the airport who can show her around, help her communicate with the locals and offer her a place to stay. At the moment, they are seeking a lawyer who specializes in international affairs of this kind. (Brazilian citizen, accused in Argentina, currently detained in Germany…)
Because some people asked about this: you can find the whole story about the accident on Daniel’s Blog. Also, a local newspaper reported about it. (But my Portuguese and Spanish are only as good as Google Translator allows) Also, as far as I know, communication between the Brazilian side and Germany is going on.
Currently, nobody can speak to Daniel. I’ll keep you updated.
Also, for Brazilians: you can also donate here, if Pledgie does not work for you. (this was requested a few times) – I just hope they will get the money even if the goals are not reached; I don’t know the rules these platforms apply.
|October 04, 2012|
Many awesome new releases have been made this week so far!
The new PackageKit features lots of bugfixes and small improvements. It also contains a few new features, like showing PolicyKit dialogs on the command-line, even if no GUI is running.
If you want to try the new PackageKit and are a Debian user, you can install it from Debian Experimental. (Packages are available) The next Apper release will support this PackageKit series too.
The “we-break-everything” Listaller release contains the finalized IPK1.1 spec, which is now frozen – so if you create packages, you can be sure future Listaller releases will support them too. The Listaller API was also refreshed, making it much easier to use.
This new release does not include the promised support for IPK-repositories/software-updates, this feature was not ready for 0.5.6, but it might come with 0.5.7 – then it will also include deltaIPKs, one of the most-requested features by game developers (and probably LibreOffice users :P).
Instead, the new Listaller contains rewritten support for GPG key handling, giving distributors and users full control over the GPG public keys Listaller trusts. Also, by default Listaller won’t install packages with security level “Low” anymore. Of course you can change this setting if you want.
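The idea behind that policy can be pictured with a small sketch (conceptual only, with hypothetical names – this is not Listaller’s actual API): each package gets a security level derived from its signature status, and installation is refused below a configurable minimum.

```python
# Conceptual sketch, not Listaller's real code: trust levels and a
# configurable minimum threshold for installation.
from enum import IntEnum

class SecurityLevel(IntEnum):
    DANGEROUS = 0   # unsigned or broken signature
    LOW = 1         # signed, but the key is not trusted
    MEDIUM = 2      # signed with a key the user marked as trusted
    HIGH = 3        # signed with a vendor/distributor key

def may_install(pkg_level: SecurityLevel,
                minimum: SecurityLevel = SecurityLevel.MEDIUM) -> bool:
    """Refuse packages below the configured minimum trust level."""
    return pkg_level >= minimum

assert not may_install(SecurityLevel.LOW)                  # rejected by default
assert may_install(SecurityLevel.LOW, SecurityLevel.LOW)   # user lowered the bar
```

The point of making the threshold a setting is that distributors and users can tighten or relax the policy without patching the installer itself.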
The Listaller documentation is currently being rewritten so we can retire the Wiki, which means you will soon have excellent documentation of Listaller’s features. Lots of other improvements happened too; check the release notes for details.
The new Listaller requires at least PackageKit 0.8.4, because it requires some changes done in this PackageKit version.
I am really excited about this one! AppStream-Core is a collection of tools I started writing during my SoC which allows creating & accessing the AppStream database. In combination with PackageKit, AppStream-Core offers a nice way to build Software-Center-like applications. AppStream-Core is also a central place where all changes for AppStream support can be implemented in a distro-agnostic way. If your distribution wants to support software-centers, it should ship this collection of tools. (of course, AppStream data still needs to be provided by distribution-specific tools)
The 0.1 release is not meant as a stable release; it is only something you can play with at the moment. Right after 0.1 was released, I already made some important changes in the master branch, so the whole project is in motion and there is no API-stability promise yet. But of course I encourage everyone to test the project
AppStream-Core is a prerequisite to make stuff like application-display in KDE’s Apper work or to help GNOME implementing a tool like GNOME-Software. (it is also recommended to run the USC on other distros)
|September 03, 2012|
Nearly everyone who’s reading this blog will know about Listaller, a project started by me to make cross-distro 3rd-party software installations possible & secure.
This project was started years ago, and lots of things have changed in the Linux desktop world and down the stack, so I think it is time to look back and summarize what has happened – and also to finally write down my evil plan for what Listaller should achieve. People who know me and work with me already know what I would like to achieve, and they pushed me a little to write this “vision” down.
I’m not a big fan of project “visions”, but in this case defining a final goal visible to everyone (and not just me) will help a lot.
So, if you want to read something about the state of the Linux desktop and my view on the recent “Linux desktop has failed” discussion as well as lots of history and future visions of the Listaller Project, keep on reading! (If not, go here and enjoy the day ;-))
When Listaller was started in December 2007 (I guess, I don’t have an exact date), software management on Linux distributions was in a very sad state. It was usual to manage software using a native package manager, like Synaptic or YaST. Also, extending the software repositories was not really easy. Ubuntu did its first experiments with an application-centric software manager at that time, and PPAs weren’t that common. (I don’t know if Launchpad already offered this service back then – I think it was implemented later)
Regarding cross-distro 3rd-party software installing, there were a few competing projects like LOKI/MoJo, Autopackage, ZeroInstall, Klik, etc. which all weren’t that common and only a few people used that software. Also, they didn’t solve the main issue: Packages aren’t user-friendly! A “package” is a somewhat abstract concept which is hard to understand for non-technical users. These projects only focused on installing cross-distro packages, but I wanted more.
So I started the Listaller project. Listaller was designed to build a cache of applications installed on a distribution and offer uninstalling these apps, no matter how they were installed. Listaller implemented its own package-manager abstraction layer, which basically worked with plugins that called native distribution tools like “dpkg”, “rpm”, “apt-get” or “yum”. The only distributions supported were Debian, Ubuntu and SUSE, and we had very poor support for Fedora.
Listaller also contained a new format to install applications, which was designed to abstract package management complexity. The Listaller package format was able to carry links to existing native packages and download & install them in order. It also was able to contain any binary installer, so it could also carry Autopackage or LOKI installers and perform necessary steps to execute these binaries and make sure everything is set up correctly. Finally, it was also possible to include the application install files directly into a Listaller package (file extension *.lipkg at the beginning, later changed to *.ipk) – It was planned that this solution should later be the only possible way to install software, the other IPK-package-modes were made to ease transition. The Listaller package generator also produced a logobutton, showing all distributions this package supported, which developers were able to put on their websites, because the generated Listaller packages were sometimes still limited to some distributions. (Listaller was able to detect that to a certain degree) I found only one of these buttons on my disk. It looked like the thing on the right side.
The software manager was very similar to later tools managing applications instead of packages, which was the goal of this effort.
Here’s a screenshot of the very early Listaller application manager (and its crappy UI):
With the rise of PackageKit, the whole situation shifted a little: Now we finally had one abstraction layer for all package managers distributions offered, which was much more powerful than the poor thing I created. I followed the PackageKit project since version 0.1.18 (I think) and later joined the project as developer and implemented support for it in Listaller.
During that time, the Klik installer project died and Autopackage merged into Listaller (after both projects first only did collaboration), leaving only very few projects with cross-distro software management in scope.
Then the AppStream project was created (unfortunately I wasn’t able to attend the meeting due to exams) and suddenly all the goals the Listaller Project wanted to achieve back then had been met – except for one: cross-distro software installations. (AppStream implements app-centric software management in a cross-distro way, so this Listaller goal is now achieved)
So I shifted the scope of Listaller to 3rd-party software-installations only and started a new implementation of it from scratch using the new Vala language. I also extended PackageKit with support for Listaller, because having multiple UIs for package management is highly confusing for users and the new design of Listaller made it possible to extend any PackageKit GUI with the ability to manage Listaller packages.
Listaller was extended to use ZeroInstall feeds to satisfy dependencies too, so resolving dependencies would work better, and both ZeroInstall and Listaller could benefit from the availability of more feeds. The Listaller support in PackageKit was then split out into a plugin, so distributions are able to enable/disable Listaller as they like. (and the plugin code also made PackageKit’s code much better and easier to handle!)
Many other changes happened too, which I won’t summarize, this blogpost is already too long – Unfortunately the rewrite made Listaller unusable for a very long time and some people already considered the project dead.
So, many people might now ask the following:
I consider cross-distro software management a key element for the success of Linux desktops.
At the moment we have a very old protocol for how software reaches end-users: upstream creates it, downstream the distributions package it and deliver it to their users. This unfortunately does not leave people any choice to selectively install or update the software they want. It is also not easily possible to install software which is not in the repositories. Of course, someone could create packages – but then they have to build their software for a multitude of distributions, which is a lot of work (you can’t even target all RPM-based distributions using just one spec file without doing many distro-specific checks)
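As an illustration of those distro-specific checks, a single RPM spec file often ends up full of conditionals like the following (the package names here are made up for the example):

```spec
# Hypothetical excerpt: the same spec needs per-distro conditionals
# just to declare an equivalent build dependency everywhere.
%if 0%{?fedora}
BuildRequires: foo-devel
%endif
%if 0%{?suse_version}
BuildRequires: libfoo-devel
%endif
%if 0%{?rhel} && 0%{?rhel} <= 6
# older RHEL needs a backported compatibility package
BuildRequires: foo-compat
%endif
```

Each new target distribution adds another branch, which is exactly the kind of per-distro busywork a cross-distro format tries to eliminate.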
It is important that developers can get quick feedback from their users on new software, without needing to go into the details of package management and having to wait for distributors to ship new software. (Take Debian stable as an example) Also, users should be able to install software without having to worry about compromising their system. (By installing a foreign package, you’re basically giving someone else root access to your computer)
The OpenBuildService, although being a great tool, is not a solution for this – also PPAs aren’t, because any change on the native package configuration can make the system unusable, install malicious software or break upgrade paths, and they’re hard to handle for non-technical users.
There is a big lack of commercial software for Linux. Why? Usually we hear the argument that “Linux has not a big enough market share”. I don’t think this argument counts for much, as Apple also had a very small market share at one point and still had commercial apps. The situation on Linux is that you – as a proprietary software vendor – don’t just have to support “Linux”: you need to support a multitude of different configurations and distributions, and this fragmented world is very hard to handle.
First, some clarifications about my point of view on certain “social” aspects in the Linux community, so you can understand me easier:
I think both are important and we should fully support both. For me, I want the OS platform as open as possible, to have unlimited possibilities regarding the stuff I can do with it. And there should never ever be something limiting my freedom to make any change I want on the Linux platform. BUT, if people want to deploy proprietary software on that free-and-open-source platform, I welcome them too, because everyone should be able to use the FLOSS platform for everything they want. Also, it is better if more people use a free platform and are aware of it – this is the best way to prevent things like locked hardware devices and the UEFI hell. More app developers will certainly help in adopting the Linux platform. That’s why I welcome Valve on Linux too, although they’re proprietary and I don’t play games.
I always prefer free software over proprietary software, but there are some valid arguments for developing closed-source stuff too, and I don’t want to exclude these developers or impose limits on how our users can get software.
What is “Linux”? Many people already did lots of fighting about the name – I don’t want to do this here.
Sure, Linux is an OS kernel – technically. But what is Linux in society? For me, “Linux” is not a kernel or OS; instead it’s a value and a brand which covers all Linux OSes. When I talk about “Linux” I refer to all components a Linux distribution is made of: the kernel, the plumbing layer, the toolkits, the desktop, the GPL and the freedom we want to give to our users, as well as the great community. Among the general public, Linux is a known word for “free operating system” – Linux is recognized; many small distributions are not. If I want to speak about the kernel, I just say “the Kernel” or “the Linux Kernel”.
People port software to “Linux”, people use “Linux” etc. – Linux is used as a brand and word for a free operating system.
So, it is important that software companies can target the Linux market if they want – not only a small fragment of it called “Ubuntu” or “Fedora”, but the whole Linux world using just one software build. People will see a download link for Linux software, not just a simple tarball and recognize Linux as equal (better!) to Windows and Mac.
The Linux world is big – it’s just not recognized as big. The Humble Indie Bundles already showed that Linux users pay for software (I never doubted that and it doesn’t really surprise me), so we just need more app developers now.
Mark Shuttleworth recognized the potential Linux has – with programs to promote the Ubuntu desktop, workshops for app developers, the PPA way to submit software and app-centric software management, they’re doing everything right. But of course they’re Ubuntu-centric: software is developed “for Ubuntu”, and you won’t find the word “Linux” anywhere on the Ubuntu homepage. Having a universal software distribution format for Linux would also be fair for smaller distributions, so they could receive new applications very fast too, even though they don’t support Ubuntu PPAs.
Listaller is currently designed to do exactly that – create packages to target the whole Linux “market”. These packages provide FLOSS developers a great way to push new releases directly to their end-users, removing the “distributor” part in the software release pipeline. Proprietary developers instead get a great way to target any Linux distribution.
The package format is based on standards like DOAP, so every application which ships DOAP information (i.e., every GNOME application) can generate an IPK package really fast.
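For reference, a minimal DOAP description looks roughly like this (the project name and URL are illustrative):

```xml
<!-- Minimal DOAP project description -->
<Project xmlns="http://usefulinc.com/ns/doap#"
         xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
  <name>FooPaint</name>
  <shortdesc>A simple painting application</shortdesc>
  <homepage rdf:resource="http://example.org/foopaint"/>
</Project>
```

Since this metadata (name, short description, homepage) is exactly what a package needs too, a generator can reuse it directly instead of asking developers to write it twice.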
Of course, there are some limits: Listaller is designed to install applications, and only applications. Installing core infrastructure is not supported. This means you can’t install the full KDE with it or install systemd. You also can’t execute arbitrary code at install-time, most-used tasks are hardcoded.
Removing flexibility means adding security here.
The recent changes in Listaller wouldn’t be possible if the Linux desktop weren’t already becoming more app-developer friendly and more consistent: there is an increased awareness of ABI compatibility, systemd is unifying many OS functions which were handled completely differently everywhere before, and Freedesktop is a great project which allows collaboration on components which work on all distributions.
For everything not yet covered, Listaller provides tools to make applications binary-compatible with many distributions, but I hope these things won’t be used in future anymore.
There is a lot of new development in the GNOME community which will make the life of a cross-distro packager, and therefore Listaller, easier: stable APIs, a “GNOME SDK” etc. These changes might make it possible for Listaller packages containing pure GNOME applications to just depend on “GNOMESDK >= 3.8” instead of handling every single dependency separately.
On the KDE side, the project is splitting into “Workspaces”, “Frameworks” and “Applications”, which is also a great move forward for cross-distro apps and will make packaging KDE apps in general easier.
So, with these new developments, I think it will finally be possible to build a cross-distro software installer with a design driven by users’ needs, instead of creating a big set of hacks and workarounds to overcome all the technical limitations of cross-distro software: we might be able to fix these problems at the source. (The increased care about ABI stability in the FLOSS community already helps a lot)
Also, Listaller itself needs to be future-proof: We have two developments which affect Listaller:
For the first point: I do not believe in WebApps replacing traditional software – there are just too many limitations, and I also don’t like the idea of web apps storing much personal data on servers I don’t control. (Although there are efforts to fix that) I also think the Web technology is in many points inferior to what we use on the desktop today (be it Qt or GTK+). Instead, I think we’ll have more and more “hybrid” applications soon, which e.g. have logic implemented in C and run a UI built with web technology. Also QML is a very innovative and great approach for UI construction.
For the second point: I consider every application store as limitation of user’s freedom, as long as it is the only possible appstore and/or adding other stores is impossible. Competition between appstore vendors is very important.
So I imagine the following (far away) scenario: Distributions ship Listaller with their own software-store enabled, if they have one. The base OS is made of native packages, all additional software (also newer versions) is delivered through Listaller packages. (This doesn’t mean there are no applications like Firefox in the native packages – everything stays as-it-is, Listaller packages are an addition)
Because Listaller packages are distro-independent, OpenSUSE users can also use Ubuntu’s appstore-source, so there is no longer any dominance of a distribution regarding availability of precompiled software.
Ideally, one organisation creates a “Linux AppStore” which is added to many distributions by default. (but maybe disabled and easy to enable) This software store would sell applications by commercial software vendors and – because it should be carried by a non-profit organisation – send most of the earned money back to the distributions, which could use it to improve themselves.
This software store would be very attractive for software vendors, because they could target the whole Linux market and would be very visible to potential customers, on distributions from Gentoo to Debian. Also Linux itself would be much more visible to others.
Listaller packages are by design very secure, so these packages can’t harm the system. (they’re usually signed and don’t install stuff in system directories – a sandbox interface also exists, but is not used at the moment) Upgrades will work without problems, because the native package configuration is never touched, so distributors will receive fewer bug reports about those issues. PPAs will instead be used by people who want the latest Linux infrastructure, like a new KDE, GNOME, systemd or PackageKit version, and not by people who just want the latest GIMP.
Because the setup process is 100% controlled by Listaller (no custom user scripts in packages, many things handled automagically), distributors can control every single 3rd-party software setup on their distribution just by adjusting Listaller to their needs. So distributors still have control over every piece of software installed on their distribution.
Also, Listaller is nicely integrated in all existing application-managers if they use PackageKit, so extra-UIs aren’t needed and users don’t even know that the software is there.
Of course, installing standalone software packages is still possible – they might of course be run in a sandbox by default.
Listaller will have logic to maintain the different software sources and provide information in a way frontends can display it nicely.
Of course I’m working on the code to make it ready for future use. The first step was adding support for package-updates using package deltas in Listaller 0.5.6 – unfortunately this feature has been delayed because of my SoC project and because I completely broke the API Listaller uses to talk to PackageKit during my work on PackageKit. So the next release will be without a major new feature, but with many bugfixes.
To make Listaller releases more frequent, and because Listaller depends completely on PackageKit and internal APIs of PackageKit, I’ll sync the Listaller release cycle with the one of PackageKit. This means you will get a new Listaller version one week after a PackageKit release. By doing this I also hope new changes will reach people faster – and there will never be a case where Listaller and PackageKit are incompatible. (At the moment, Listaller 0.5.5 won’t work with the PackageKit 0.8.x series)
Of course, the project needs contributors! At the moment, “we” means just me and a few part-time people who create maybe one small commit every two months, since the main contributors started writing their bachelor theses or doing other stuff (at work).
I would be very happy about comments regarding the plan above, which is of course very rough at the moment… To make it more concrete, I’ll try to talk to people at GNOME about their vision, because it seems they have similar plans, or at least a plan for application-management improvements, which could be valuable input.
And finally: is the Linux desktop dead? The answer is NO! There might have been mistakes in the past, but both KDE and GNOME have clear goals for the future, are more end-user focused than ever, care about ABI stability, collaborate on an increasing number of projects (although it could be more ^^), prepare for targeting mobile devices and are used by millions of people. We should be very happy. Let’s see what we achieve in the future; even if it is not desktop dominance over all other OSes, it will be great anyway!
If you want to contact me, you’ll find me on IRC: #PackageKit (Freenode). There’s also a Google Group about Listaller, but I’m looking for a traditional mailing list at the moment… Reaching me via mail is always the easiest way
|August 30, 2012|
This year I did a Google Summer of Code Project for OpenSUSE as part of their cross-distribution collaboration track. (Again, many thanks for letting me work on this and for doing a cross-distro track!)
So, what did I achieve this summer? (Leaving out all the problems and stuff which didn’t work as I expected )
I did work on three components: AppStream, the Software Center and PackageKit.
At the beginning of the SoC, I thought I would be spending most of my time at the Software Center and in seeing Python code. Instead, I put lots of effort into PackageKit and writing C code. The reason for that was that when I started, all operations I performed in the Software Center were very slow, because PackageKit was slow. Also, some things weren’t possible, e.g. I couldn’t fetch details about an application from PackageKit while an installation was going on, because the installation blocked all other requests on PackageKit. So, I first focused on PackageKit.
I implemented various small speed optimizations, which made PackageKit a little bit faster. (on Ubuntu, it’s faster than Aptd now, but that’s not a fair comparison) I also added a package sqlcache, which can be used to access package data very fast, if enabled.
The biggest thing I implemented was support for parallel transactions. Parallel transactions allow PackageKit backends to run some transactions in parallel, if they support this feature. This means that you can now query package details or search the package database using PackageKit while installing or removing packages at the same time.
It also enables frontends to query more data faster, which will speed up every PackageKit client if the PackageKit backend supports parallelization. At the moment, the Zif backend fully supports this feature and the Aptcc backend supports it partially.
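The scheduling idea can be sketched in a few lines (a conceptual model, not PackageKit’s actual implementation): read-only transactions may share access to the package cache, while modifying transactions run exclusively.

```python
# Conceptual sketch of parallel-transaction scheduling: queries run
# concurrently, install/remove transactions get exclusive access.
import threading

class TransactionScheduler:
    READONLY_ROLES = ("search", "get-details", "resolve")

    def __init__(self):
        self._cond = threading.Condition()
        self._queries = 0          # running read-only transactions
        self._exclusive = False    # a modifying transaction is running

    def run(self, role, work):
        readonly = role in self.READONLY_ROLES
        with self._cond:
            if readonly:
                # queries only wait for a modifying transaction
                while self._exclusive:
                    self._cond.wait()
                self._queries += 1
            else:
                # modifying transactions wait for everything
                while self._exclusive or self._queries:
                    self._cond.wait()
                self._exclusive = True
        try:
            return work()
        finally:
            with self._cond:
                if readonly:
                    self._queries -= 1
                else:
                    self._exclusive = False
                self._cond.notify_all()

sched = TransactionScheduler()
print(sched.run("get-details", lambda: "details while idle"))  # prints "details while idle"
```

With a scheme like this, a frontend can keep fetching package details while an installation runs, instead of queuing every request behind it.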
Together with Richard Hughes I refactored the backend API, so backends now have some more options to optimize the way they handle cache openings and jobs. (most of the credit for these API changes goes to Richard)
The Software Center is a fork of the original Ubuntu Software Center, because it was not possible to have Canonical drop the CLA they apply on Software Center’s code. (and which is a problem for new contributors, especially those employed by companies)
At the Software Center side I did lots of bugfixing and speed improvements; for example, the SC now starts really fast (if the PackageKit backend is fast too) and data fetching is super-fast too. I also removed some problems preventing the SC from working properly on non-Ubuntu distributions and ported the code to our newly created APIs.
I am still not happy with the state of the Software Center, as there are many unsolved bugs and the tool is not yet user-friendly. But I also think these problems will vanish when distributions start to ship AppStream data to fill the Software Center database and when PackageKit backends are improved.
Here are some screenshots of the PackageKit-based Software Center running on Debian and Fedora:
You can already test the code if you have at least PackageKit 0.8.4 installed. Unfortunately distributions don’t yet provide all the AppStream data (the information which matches package names with applications and contains icons & stuff), so using it is quite difficult at the moment.
For the AppStream project itself I created infrastructure to create & maintain the AppStream Xapian database. This database is used by application managers to query data about applications. It also makes searching applications really easy. At the beginning, all code which created this database was hard-wired in the original Ubuntu Software Center. So, if you wanted AppStream data, you had to install the USC. Now, I wrote a nice tool in C++ and Vala, which will build the database from various sources and which allows querying the database using a very simple interface.
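Conceptually, the database is an inverted index from search terms to applications; the real implementation uses Xapian, but the idea can be sketched like this (class and package names are illustrative):

```python
# Conceptual sketch of an application database: an inverted index
# mapping search terms to the applications that contain them.
from collections import defaultdict

class AppDatabase:
    def __init__(self):
        self._index = defaultdict(set)   # term -> set of package names
        self._apps = {}                  # package name -> (app name, description)

    def add(self, pkgname, name, description):
        self._apps[pkgname] = (name, description)
        for term in (name + " " + description).lower().split():
            self._index[term].add(pkgname)

    def search(self, query):
        """Return the package names matching ALL query terms."""
        terms = query.lower().split()
        if not terms:
            return set()
        result = set(self._index.get(terms[0], set()))
        for term in terms[1:]:
            result &= self._index.get(term, set())
        return result

db = AppDatabase()
db.add("gimp", "GIMP", "image editor and photo retouching")
db.add("inkscape", "Inkscape", "vector image editor")
print(db.search("image editor"))   # both applications match
```

A real Xapian database adds stemming, ranking and on-disk storage on top of this, but the query interface exposed to software centers stays about this simple.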
These changes will allow alternative Software Center implementations to use AppStream and it will make some other interesting development possible (e.g. application support in Apper, which I implemented after my SoC project was completed, see this post for details.)
There are already some new software centers in progress, for example the Light Software Center, which was designed for use with PackageKit from the beginning.
At the moment I have a request running at Freedesktop for space to present the new project & APIs, as well as to upload release tarballs, so this code can hit distribution repositories soon.
It bothers me a little that I cannot present you with a 100% bug-free, end-user-usable Software Center at the end of my project. But instead, I killed all the big technical problems which previously made implementing a Software Center on other distributions impossible.
I also made lots of changes which make PackageKit even more pleasant to use, and I created Software Center infrastructure which will allow many projects to implement AppStream features. For example, GNOME developers can get their application-centric software managers, and KDE people will receive an AppStream-ready version of Apper soon.
So in the end, this project was very successful and I’m really happy with the results. Also, it feels like the AppStream project itself gained momentum now, and I will of course continue developing stuff around AppStream – stay tuned, there will be more news later
I can only recommend every student out there to consider applying for the Summer of Code! It has been an absolutely great time, meeting new people and discussing stuff with them has always been fun for me, but also the feeling of being able to really move a project forward and taking the time to do that is awesome!
During my SoC project, I really had some problems finding a balance between university work and GSoC work, because university was giving me a hard time (lots of stuff to learn!). In the end, even a harddrive crash in the hottest development period couldn’t stop me (now I have a new drive, and I was fortunately able to restore most of the old data)
Working with Vincent and of course Richard (whom I’ve been working with for years now) has been a lot of fun, and interacting with various people in the community was great, but also sometimes very exhausting – in the cross-distro and cross-desktop world you really have to put lots of energy into convincing other people to make certain changes happen. But during my SoC I achieved more in the area of application-centric software management than ever before, so it was totally worth it.
When I look at the projects fellow SoC students completed, there are even more great projects to see – just take a look at them! Some other students I talked to also agree that the experience alone was already worth doing their SoC project.
So, if you are a student and are able to write code or do some other cool stuff in FLOSS projects: Consider applying for GSoC 2013!
So, that’s it for now – I have already planned some new blogposts, so stay tuned!
|August 26, 2012|
During my SoC project, I also created appstream-core, a small library and infrastructure to create & use the AppStream application database. This project enables all other tools to use AppStream data and combine it with PackageKit information.
So it was only natural to bring this stuff to Apper, our favorite PackageKit-based KDE package manager. Apper has had a function like this before, but it was limited to Kubuntu. Now, the AppStream support will work on all distributions which support AppStream and ship at least PackageKit 0.8.4 (currently unreleased).
The result of Apper+AppStream looks like this:
As you can see, Apper now shows applications instead of packages (all packages which don’t ship applications are still presented as packages). It also uses the Debian screenshot service to display a screenshot; this feature is experimental and not yet completely finished on the AppStream side.
Additional credits for this go to Daniel Nicoletti, because I based my work on previous work done by him.
The new code is already in Apper Git. You will need PackageKit >= 0.8.4 and AppStream-Core to test it, but because these changes are experimental at this time, I’d suggest waiting for your distribution to ship them (much of this software is still unreleased).
So, this is just a sneak preview of the cool stuff to come. Stay tuned!
|August 12, 2012|
My SoC project is nearly finished now, which unfortunately doesn’t mean that we will have a completely usable Software Center on all distributions – but the most difficult steps are done, all specification issues are solved, and I talked to many people from other distributions about the AppStream project and we made lots of progress. I’ll publish details in my GSoC final report. This post is meant as a guide to the steps necessary to make distribution X support the AppStream project and – using AppStream – an application-centric software management solution and Software-Center-like applications.
We rely on PackageKit for package management, so to support your distribution, we need a backend for your package manager.
Make sure that your backend supports parallelization! This feature is used heavily by application-management frontends like the Ubuntu Software Center and will generally make your backend much faster.
First of all, you will of course need to ship PackageKit. We also need AppStream-Core (with UAI) to be installed. (I’ll make a first alpha release of this after my GSoC project is complete and I have discussed some things regarding this piece of software with my mentor(s).) If you want, you can also ship the cross-distro version of the Software Center – you’ll need to package its dependencies too, as some extra Python components aren’t present in all distributions.
This is the most important step. AppStream data can be provided in the Ubuntu AppInstall format or Debian DEP-11, but we suggest you use the AppStream XML spec, which is supported on every distro. You need a script, fitting your distribution’s needs, which generates this data from packages and the applications’ desktop files. (This script needs to run on the distribution’s servers.) OpenSUSE and Fedora already have such generator tools. You also need to make this script extract icons from packages, so the Software-Center implementations can display icons of not-installed packages.
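To give an idea of what such a generator does, here is a minimal, self-contained sketch that turns a .desktop file into an AppStream-style XML entry. The element names follow the early AppStream XML drafts and are an assumption here – check them against the current specification before using anything like this for real.

```python
# Sketch: generate AppStream-style XML for one application from its
# .desktop file data. Element names are an assumption based on early
# AppStream XML drafts -- verify against the current specification.
import configparser
import xml.etree.ElementTree as ET

def desktop_to_appstream(desktop_text, desktop_id, pkgname):
    """Turn a .desktop file's [Desktop Entry] section into an
    <application> element for the AppStream XML database."""
    cp = configparser.ConfigParser(interpolation=None)
    cp.read_string(desktop_text)
    entry = cp["Desktop Entry"]

    app = ET.Element("application")
    ET.SubElement(app, "id", type="desktop").text = desktop_id
    ET.SubElement(app, "pkgname").text = pkgname
    ET.SubElement(app, "name").text = entry.get("Name", pkgname)
    if "Comment" in entry:
        ET.SubElement(app, "summary").text = entry["Comment"]
    if "Icon" in entry:
        ET.SubElement(app, "icon", type="stock").text = entry["Icon"]
    return app

desktop = """[Desktop Entry]
Name=Example Editor
Comment=Edit example files
Icon=example-editor
"""
root = ET.Element("applications", version="0.1")
root.append(desktop_to_appstream(desktop, "example-editor.desktop",
                                 "example-editor"))
print(ET.tostring(root, encoding="unicode"))
```

A real generator would of course iterate over all packages in the archive and also extract the referenced icons, as described above.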
The data should probably be regenerated automatically every week on distro branches which are rolling-release or in development. For stable distros where package names don’t change and no applications are added or removed, generating the data at release-time should be enough.
You can ship AppStream data any way you want: package it as a regular package (done in Ubuntu) or let the package manager download it as part of the repository metadata (planned on-request for Debian). You just need to make sure the XML files end up in /usr/share/app-info/xmls and the icons are placed in /usr/share/app-info/icons. It makes sense to encode the repository name in the XML file name to avoid filename conflicts if another (3rd-party?) repository wants to install application data with the same name.
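Encoding the repository name could look like this small sketch – the naming scheme is only an illustration, since only the target directories are fixed, not the file names:

```python
# Sketch: build install paths for AppStream data, encoding the
# repository name into the XML file name to avoid conflicts between
# repositories. The naming scheme itself is illustrative only.
import os

APPSTREAM_XML_DIR = "/usr/share/app-info/xmls"
APPSTREAM_ICON_DIR = "/usr/share/app-info/icons"

def appstream_xml_path(repo_name):
    """Return the target path for a repository's AppStream XML file."""
    safe = repo_name.replace("/", "_")   # keep it a single file name
    return os.path.join(APPSTREAM_XML_DIR, safe + ".xml")

print(appstream_xml_path("debian-wheezy-main"))
# -> /usr/share/app-info/xmls/debian-wheezy-main.xml
```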
If you use the package way to make the data available, make sure to depend on the “update-appstream-index” tool and execute “appstream-index --refresh --nowait” in the package’s post-install script, so the AppStream index gets rebuilt. If you deliver the data another way, make sure to trigger the cache rebuild there too.
If you use the cross-distro non-CLA fork of the Ubuntu Software Center, you may need to add a profile for your distribution to it. At this time we already support Ubuntu, Debian, Fedora and OpenSUSE. You might also want to do other adjustments to fit your distribution’s needs.
If you don’t use the USC, you can skip this step and just use another implementation. (A “Light Software Center” by the Xubuntu/Lubuntu team is already in progress, as well as something by the Elementary project; I’ll write a blog post about these projects as soon as they become more mature.)
There might be some quirks in your data which need to be removed, e.g. applications listed which don’t belong there, wrong descriptions, apps listed twice, system services (KDE!) listed, or a very slow PackageKit backend. So this feature just needs testing now; otherwise everything should be set up and ready.
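A data-quality pass over the generated entries might look like this sketch. The quirk heuristics (hidden desktop files, service-like categories, duplicates) are my assumptions about what counts as a quirk, not part of any spec:

```python
# Sketch: filter obvious quirks out of generated AppStream entries.
# An "entry" here is a plain dict; the heuristics below (NoDisplay,
# service-like categories, duplicates) are assumptions for illustration.
def filter_entries(entries):
    seen_ids = set()
    cleaned = []
    for e in entries:
        if e.get("nodisplay"):                           # hidden .desktop files
            continue
        if "System-Service" in e.get("categories", []):  # services, not apps
            continue
        if e["id"] in seen_ids:                          # apps listed twice
            continue
        seen_ids.add(e["id"])
        cleaned.append(e)
    return cleaned

entries = [
    {"id": "editor.desktop", "categories": ["Utility"]},
    {"id": "editor.desktop", "categories": ["Utility"]},       # duplicate
    {"id": "kded-module.desktop", "categories": ["System-Service"]},
    {"id": "hidden.desktop", "nodisplay": True},
]
print([e["id"] for e in filter_entries(entries)])
# -> ['editor.desktop']
```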
Please note: I’ll update this blogpost with new information if I find something I forgot to mention above. If I do changes, I’ll add a short note about the changes here.
|August 09, 2012|
With the most recent release of PackageKit, PackageKit 0.8.3 (published last Monday!), all my changes regarding parallelization have been merged into our master branch, which means parallelization features are now available for backends to use! Yay!
The best thing about the new parallelization is that it will have an incredibly high impact on PackageKit’s speed: frontends are now able to process many resolve requests at the same time. Also, you will no longer have to wait for PackageKit to finish installing packages before you can continue browsing the list of installed packages. (This was the main reason to develop this feature in my SoC, as it is crucial for Software-Center-like applications.)
Internally, many other things have changed as well, which allow backends to do very clever cache handling: the cache can stay open for a longer time, so we avoid the delay of a backend reopening the cache when running many transactions. We now have one “Backend” and multiple “BackendJobs” which process specific requests, for example Resolve() or InstallPackages() – this provides backend authors with a flexible way to handle transactions.
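Conceptually, the Backend/BackendJob split can be pictured like this sketch, with plain Python threads standing in for the daemon’s real scheduler. The names and structure are illustrative only, not the actual PackageKit API:

```python
# Conceptual sketch of the Backend/BackendJob split: one shared backend
# holds the (simulated) package cache, which stays open while several
# independent jobs run in parallel. Not the real PackageKit API.
import threading
import time

class Backend:
    def __init__(self):
        self.cache = {"vim": "8.2", "git": "2.39"}  # stays open across jobs
        self.cache_lock = threading.Lock()          # serializes write access

class BackendJob(threading.Thread):
    def __init__(self, backend, action, package, results):
        super().__init__()
        self.backend, self.action = backend, action
        self.package, self.results = package, results

    def run(self):
        if self.action == "GetDetails":             # read-only query
            self.results.append((self.package,
                                 self.backend.cache.get(self.package)))
        elif self.action == "InstallPackages":      # writes: take the lock
            with self.backend.cache_lock:
                time.sleep(0.1)                     # simulate a slow install
                self.backend.cache[self.package] = "installed"

backend = Backend()
results = []
install = BackendJob(backend, "InstallPackages", "htop", results)
details = BackendJob(backend, "GetDetails", "vim", results)
install.start()
details.start()      # answered while the install is still running
install.join(); details.join()
print(results)
print(backend.cache["htop"])
```

The point of the sketch is that the read-only GetDetails job is answered immediately, instead of queuing behind the running install, which is exactly what Software-Center-like frontends need.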
Parallel processing is implemented in a way that makes it nearly impossible to get into a situation where everything is waiting and the daemon is deadlocked (only a bug in PK or the backend could cause this). Avoiding this situation was – of course – very important, as otherwise the package database could be damaged or a database lock might never be released.
The new changes also require some work from our backend authors. By default, parallel transactions are disabled for backends which haven’t declared that they support them; backend authors need to explicitly enable the feature as soon as their backend supports it. We highly recommend enabling parallelization, as PackageKit frontends might start to rely on it and will be much slower without a parallelized backend. Backend authors will also have to port to the new PkBackendJob infrastructure. Most necessary changes have already been made by automatic scripts, but of course it is better if people who use the backend and know the package manager take a look at it.
So, backend authors: We want you! (to fix the backends) Richard and I have created a porting guide which summarizes all changes required for backends to be PK 0.8.x compatible (backends/PORTING.txt).
Parallelization is a very invasive change and we have only one release with it enabled, so please help testing to find possible remaining bugs!
And end-users can look forward to a much faster PackageKit soon, which will also work as a Software-Center engine!
|July 09, 2012|
It’s been a very long time since my last blogpost, but you can be sure I haven’t gone missing! I was very busy writing exams (and preparing for them) during the last weeks, and university was giving me a hard time. I still have to write two more exams, but one of them is in August and lectures end next week, so I will have much more time again to work on my GSoC project on the Software Center.
So, what has happened so far? (If you read the mailinglists, you might want to skip this part.)
I looked into the code and updated some very odd parts the SC used to access PackageKit, but there are still many pieces of code left which don’t look good and need optimization. During my work on the SC, I found out that with the way PackageKit currently works, it would be impossible to implement the Software Center in a user-friendly way. For example, not being able to see the details of an application while another one is installing sucks. Also, the round-trips to the daemon slowed the whole thing down.
First I did some optimizations on the code which loads the package cache, so the SC now starts very fast. (It had an incredible startup time of ~30 minutes on my machine before; now it’s down to 4 seconds or faster to show the UI, plus ~20 more seconds to be fully ready.) I also made a few modifications to PackageKit, which saved us anywhere from milliseconds to seconds, depending on the requested action.
To solve the general issues with PackageKit, I wrote a module for PK to create a cache of all packages. This solution resulted in a massive speed gain for the Software Center, but slowed down PackageKit actions like Refresh() a little. The cache also suffered – naturally – from all the problems of a cache, e.g. it went out of sync extremely fast (you just needed to use a native package management tool). Additionally, it duplicated or even triplicated package data, and was generally disliked by many people, including me in the first place. I originally chose to implement the cache because it was the easiest way to get the needed functionality (parallel access to package data and fast cache loading) in time for my SoC project. But a workaround solution serves nobody in the long run.
So I sat down with Richard Hughes to think about how we could change PackageKit to serve the needs of a Software Center. He rejected some of my proposals, and we had a very extensive discussion about a suggestion by Daniel Nicoletti, but in the end we arrived at a solution which will allow PackageKit to execute software management tasks in parallel, if the backend supports it. The chosen solution requires massive changes to the backend API and is the most invasive in general, but it’s also a solution which does things right(tm), without any workarounds or additional layers to access package information.
Implementing this functionality is not exactly trivial: we broke the PackageKit backend API completely, so all backends will now need massive changes to support the new functions and even to compile again. So here’s a call for backend authors: please fix your backends! You can see how the Yum, Zif and Aptcc backends were changed; there’s also very simple documentation of the required changes in backends/PORTING.txt (the file will be updated soon). The implementation is not complete – I’m working on the missing pieces now, so some changes might still happen.
All these changes mean PackageKit will soon be able to execute several actions in parallel if the backend supports it, for example running GetDetails() on a package while InstallPackages() is running too. We count on backend authors to implement this functionality, otherwise distributions running PackageKit without a backend which supports parallelization will not deliver a good “Software-Center experience”.
On the Software Center side, I’m currently implementing the PackageKit history feature. After that, I will have to do some polishing and many changes to the code which talks to PackageKit, I guess – there’s lots of room for improvement! There are also some Ubuntu specifics which need to be solved, and the code which generates the AppStream Xapian database needs to be split out, so that alternative Software Center implementations are possible without having to install the “original” Ubuntu Software Center. (I will work with Vincent and Michael on this.)
If you want to try the current SC, just grab the code from Gitorious and try it – but be careful! You’ll need the latest development version of PackageKit (master branch) to make it work, and at the moment the only working backends are Yum, Zif, Aptcc and Dummy. So Fedora, Debian and Ubuntu users are lucky for now; all other distributions will need to update their backends as soon as the dust settles. (There are also some more new and cool features in the unstable series, for example the systemd support Richard has implemented, and the offline-upgrade feature.) If your PackageKit is not the latest version, the SC will just crash on startup – I haven’t found out how to check version numbers of imported GObject Introspection data in Python to throw a proper error message. (In general, it seems like version checking is not used very much in the Python world.) At the moment, using the SC is still not trivial, but at the end of this project everyone should be able to make use of it (which, to be honest, is not the case right now).
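One way to fail gracefully would be to ask the running daemon for its version before doing anything else – PackageKit exposes VersionMajor/VersionMinor/VersionMicro properties on its D-Bus interface – and compare that in Python. Here is a minimal sketch of the comparison part only; the D-Bus lookup itself is omitted so the sketch stays self-contained:

```python
# Sketch: refuse to start with a friendly message if the PackageKit
# daemon is too old. The (major, minor, micro) tuple would come from
# the daemon's VersionMajor/VersionMinor/VersionMicro D-Bus properties
# (that lookup is omitted here to keep the sketch self-contained).
import sys

MINIMUM_PK = (0, 8, 3)   # first release with parallelization support

def check_pk_version(daemon_version, minimum=MINIMUM_PK):
    """Return True if the daemon is new enough, else print a hint."""
    if daemon_version >= minimum:
        return True
    print("PackageKit %d.%d.%d is too old, need at least %d.%d.%d."
          % (daemon_version + minimum), file=sys.stderr)
    return False

print(check_pk_version((0, 8, 3)))   # -> True
print(check_pk_version((0, 7, 6)))   # -> False (plus a hint on stderr)
```

Python’s tuple comparison does the lexicographic version check for free, which is why the version is kept as a tuple rather than a string.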
During the next weeks you can expect more code changes again, as the discussion part seems to be over and I have more time again. (Doing cross-distro projects is 60% talking and 40% coding.) I’ll also try to blog more, so everyone can stay informed without having to read mailinglist threads.
In the end (and I think I should make that my signature): kudos to Richard Hughes for being an excellent maintainer and for helping me so much with the most difficult changes to PK. Also, once again: thank you, OpenSUSE, for letting me work on this!
planet.tanglu.org is powered by Venus and the tanglu.org community.