You’ve seen the news: the first release candidate for Qt 5.0 has just been released. And if you haven’t, you can go download it from http://qt-project.org/downloads. I’d like first of all to congratulate everyone involved in getting it out, with a special nod to the release team. Thanks for all the work!
You can download the source code and binaries from the official Qt Project CDN. The tarballs for building on Unix systems like Linux are in the split_sources subdir — Linux distribution packagers will want to use them. Once those packages exist for distributions, they will be listed on the Qt 5 unofficial builds page.
This is the first Qt 5 release to also include installable binaries. Windows and Mac users have had them for ages in Qt 4, and Linux users have enjoyed them in the past in the form of the Qt SDK binary installers. As far as I can remember, this is the first pure Qt library release to contain Linux installers.
But, as the nature of the beast goes, those installers are known to work only on Ubuntu distributions. The main reason is that Qt 5 depends on the ICU libraries, whose developers went to the “OpenSSL School of Releasing” (along with the Boost developers) and haven’t yet learned to make binary-compatible releases. Sorry about that. If you don’t have the build capacity to compile the sources yourself, you may want to wait until packaged binary builds (RPMs and DEBs) show up for your distribution.
The goal of the beta, as I explained in a previous blog post, is to gather feedback on the implementation and to collect bug reports. From this point on, the Qt 5 API is “soft-frozen”, meaning that it will not change incompatibly any more, except to fix major issues that we encounter or that are reported to us as feedback. If that happens, we’ll make sure to note it in the release notes.
That means that Qt 5.0 beta1 is a suitable starting point for porting applications and writing new code. Your work will not be wasted. But you might run into bugs, so please report them to us in the Qt Project Task Tracker. We’re also very interested in bugs related to packaging, building, the installers, documentation, etc. Just be sure to check the Known Issues page before reporting anything.
Last week, I wrote three blogs about the situation with starting child processes on Unix and being notified of their exit. I raised several problems with the current implementation, which I have tried to solve and for which I now have a proposal. If you haven’t yet, you should take some time to read the previous three blogs:
- Part 1: Launching processes on Unix;
- Part 2: Finding out that a child process exited on Unix (http://www.macieira.org/blog/2012/07/forkfd-part-2-finding-out-that-a-child-process-exited-on-unix/);
- Part 3: QProcess’s requirements and current solution;
In the previous posts in my series of blogs about starting and managing sub-processes on Unix, I talked about how it’s implemented and how the current solutions have limitations. In this post, I’ll show how QtCore has solved the problem (to the extent that it can be solved) and what requirements a new solution must fulfill.
In my previous blog post, I said that the solutions we’ve implemented on Linux are a good start, but not the full solution. We can start a child process properly, but we still can’t properly find out when it exited.
Early in the Qt 5 development cycle, we had made the decision to deprecate QPointer and replace it with the more modern QWeakPointer. That decision is now reversed, so please continue using QPointer where you were using it. Moreover, don’t use QWeakPointer except in conjunction with QSharedPointer.
To understand the reason behind this back and forth, we need to go back a little in history.
Yesterday, one of my contributions to Qt was merged, finally adding better support for optimised raster painting on Windows with SSE2 and AVX instructions. This feature has long been present on Unix systems, but it was somewhat lacking on Windows.
If you’ve read my past blogs, you know I often talk about and work on Single Instruction Multiple Data (SIMD) improvements. The idea is quite simple: if you have a lot of identical operations to perform and each piece of source data is independent of the others, you can execute those operations in parallel, improving throughput (processors are optimised for loading chunks of memory of a certain size, so if we only use small quantities, we waste resources). In the past, I’ve mostly worked on SIMD for string operations, like comparison, searching, and conversion to and from Latin-1. That’s sometimes unrewarding because strings are quite small, so we don’t get the full gain of the improved throughput.
But you might not know that SIMD in Qt actually started in the QtGui library, in the raster drawing code. There, the data sizes often range from several kilobytes to multiple megabytes — a tiny 16×16 icon has 256 pixels, each of which is 4 bytes wide, which adds up to 1 kB; you reach 1 MB at 512×512. As you might gather, even copying such data blocks is a somewhat expensive operation. So it’s no wonder that the more common operations, such as compositing and alpha blending, needed optimisation. And I cannot claim credit for doing them; those were done by very talented hackers working at Trolltech back in the day.
My history with the drawables started about 6 months ago, during the last romjul, when I realised that the optimisations applied to the raster painting code could use some love. Back then, we were still mixing MMX code into the painting code, even when we reported we were using SSE. In fact, when Qt said it was enabling SSE (not SSE2), it was actually just using some new instructions that came with SSE, but on the old MMX-technology registers. My first action in that area for Qt 5.0 was to finally remove support for the old MMX-era optimisations, all of which only increased the code size of a Qt build but weren’t used anywhere. The next level of optimisations (SSE2 and above) overrode the older ones — remember that all 64-bit capable processors have SSE2 support.
Another thing I noticed back then was that we weren’t using the full extent of the optimisations possible. With GCC, we were forced to pass some extra compiler options so GCC would allow us to use the intrinsic functions for SSE2 and SSSE3, but that was not the case for the Microsoft compiler. In addition, the Windows configuration did not try to use the intrinsics to verify that they were really available; it simply checked for the presence of the header that usually declares them. What’s more, those checks had not been updated for the SSSE3 optimisations done in 2010 in cooperation with Intel, which meant those optimisations were disabled on Windows.
On Unix, right after removing the old MMX-era code, I proceeded to a very quick and easy gain: add AVX support, the new generation of SIMD instructions from Intel. It was easy because I barely had to write code: if you compile SSE2-era code with GCC’s -msse2avx option (which is automatically enabled by -mavx), it will generate the code using the new AVX instructions. The advantage lies in the fact that the AVX instructions use a new coding mechanism (called the VEX prefix) which specifies an additional register, allowing the compiler to use fewer instructions to accomplish the same goal. Using the expanded 256-bit registers will have to wait for AVX2, coming next year.
Except that even this easy improvement had never come to Windows either. Until now.
To enable Windows support, I had to update the way that the configuration detected the capabilities of the compiler, which is what took most of my time: dealing with building on Windows and with the binary configure.exe is not exactly my forte. Now, like on Unix, the Windows configuration will ask the compiler to try and compile some code. The checks are now shared with Unix, so we have the full range of checks available: SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, AVX, and AVX2. Previously, the only one that remained after I removed the MMX-era checks was SSE2.
Another update I made was to tell the Microsoft compiler to improve the code it generates. Since it did not require special compiler options to enable its support for SSE2, no one had thought until now to pass it the /arch:SSE2 option. Like on Unix, we now pass this option whenever we’re compiling code that uses SSE2 anyway, making the compiler use the extended instruction set for generic code, not just what we wrote with intrinsics. From there, adding support for /arch:AVX was trivial: if you have Microsoft Visual C++ 10.0 or higher (it comes with Visual Studio 2010), you now also get the AVX-era instructions, and Qt will enable them at runtime if it detects that your processor supports them.
I’m not done yet. I also have a couple of other quick performance wins, all from improving code generation. Those changes are a bit more complex than the previous ones, and I haven’t cleaned them up properly after 6 months of rebasing. I hope to add them to Qt 5.1 soon after its branch opens.
Every now and again, someone posts on IRC or to a mailing list about an issue they’ve had, and their description of the problem is that “it doesn’t work”. There’s nothing more annoying to the person giving help than to see that description…
That happened to me twice again today, which is what prompted me to write this blog. At this point, I’d like to shamelessly plug my work on Lydia Pintscher’s Open Advice book: I wrote chapter 10 in that book, called “The Art of Problem Solving”. And if you haven’t read the book yet, or even skimmed through it, I recommend you do. It’s full of great advice from experienced people, in many areas related to open source development, contribution, advocacy, and other forms of participation.
The first section of my chapter is called Phrasing the question correctly, in which I wrote:
The most useless problem statement that one can face is “it doesn’t work”, yet we seem to get it far too often. It is a true statement, as evidently something is off. Nevertheless, the phrasing does not provide any clue as to where to start looking for answers.
The question is where we start off. In the context of asking for assistance on IRC, a mailing list, or a forum, it’s supposed to give the help-giver hints as to what is askew, so that they can begin forming theories about the root cause of the problem and applying problem-solving techniques to confirm or refute them. But note how all those techniques rest on knowing what exactly is wrong.
I’m not saying I expect a full analysis of the situation by the original poster, just as I don’t expect a person who is not a health professional to provide one when going to the doctor’s. But a minimum of information is necessary. Imagine you went to the doctor and said only “I’m feeling ill”. What do you think the doctor could do with that information? So why do people think that “it doesn’t work” is enough information for an engineering help-giver?
At this point, you might say that “it’s just a conversation starter,” a way to break the ice and begin the discussion. And while I might be inclined to agree with you in a social context, such as a live face-to-face discussion, I do not when it comes to interaction via the Internet. That’s definitely the case when the communication is not in real time: if it takes six hours for an answer to come, then the first usable theory won’t reach the poster until 12 hours after the first post.
But even in real-time communications it’s necessary, as more often than not it’s a matter of attracting the attention of the help-givers. If I’m somewhat busy, you cannot expect me to spend precious minutes asking, “so, what exactly happened? what did you expect to happen?”
How would you know what to say in the original post, then? Here are a couple of suggestions, some of which are, I hope, obvious:
- the description of what actually happened;
- the description of what was supposed to happen;
- the actions that you took that led up to the event;
- a description of the environment, such as versions of the relevant programs and settings you changed;
- the logs of any programs involved that might include relevant information;
- if you’ve tested other conditions and whether they’ve failed or succeeded;
- if it’s a recent issue, when it started happening and when you last saw it working;
- any theories you might have about the issue;
- what you have already done, so far, to fix the issue;
- what sources of information you’ve used to diagnose the problem.
Try to provide as much information as possible, in a concise manner, appropriate to the medium. For example, on IRC, you cannot write a 20-line description of all the theories you may have, but it’s certainly doable to describe the event that led you to think, “hang on, this isn’t right,” and provide a link to further information such as a pastebin of the logs.
Some other quick advice:
- Use your brain! Exercise it; that’s how it develops. Read the logs that you’ve got, especially compiler error logs, and interpret them. Form your own theories and test them if you can, disproving some and proving others.
- Don’t argue with the evidence. If the compiler tells you there’s an error, then you’ve got an error (the exception is when you suspect a compiler bug);
- Do your homework: use Google and other search tools to find out more information. For example, if you’ve got an error message, search for that specific message. If you don’t do this, you may get the answer in the form of a LMGTFY link.
- Use the appropriate channels: asking the wrong audience will not get you closer to the answer, but it might raise your frustration and that of your audience, and it may delay the process.
- Know your tools, know how to use them. It might be acceptable for a newbie, student or hobbyist not to know them, but it’s not for a professional. Not only should you know how to use them, but also a little of how they work.
- The first error is usually the most important one.
- A warning is (often) not an error, but warnings weren’t meant to be ignored.
I’m sure my readers have more advice to give. What else would you suggest, either as advice or as information a help-giver could use?