Linux Kernel News - July 2013
The Linux kernel community is busy integrating and testing 3.11 content, working on 3.12 development, and finalizing the topic agenda for the upcoming LinuxCon Europe and Kernel Summit, scheduled to be held in Edinburgh, UK, October 21-23, 2013. Let's start with the release news.
Mainline Release (Linus's tree) News
Since my last report, 3.10 has been released, and 3.11 is now at 3.11-rc3. The ACPI backlight changes that went into 3.11-rc2 caused regressions and have been reverted in this rc; I will go into the details of the backlight changes later in this article. The CRC T10 DIF crypto support has also been reverted, since that change has initrd infrastructure problems.
For more information on this rc, please refer to Linus's 3.11-rc3 release notes: https://lkml.org/lkml/2013/7/29/16
Linus says rc3 has 50% more commits than rc2. On a humorous note, Linus thinks the increase in the number of commits is partly because he asked people to break up the water-cooler conversations and get back to work. Now, after having received 50% more commits than rc2, he is asking people to enjoy the summer, take a break, and send in just regression fixes. We will have to wait and see whether the next rc has fewer commits.
Stable releases News
- The latest stable release is 3.10.4.
- The previous stable release, 3.9.11, is now EOL; there will be no more updates to that stable branch.
- Longterm stable releases are 3.0.88, 3.2.49, and 3.4.55.
- Extended stable releases are 3.5.7.y and 3.8.13.y.
- Longterm releases for 2.6.y are 2.6.32.61 and 2.6.34.14.
- Linux RT stable releases are 3.0.85-rt113, 3.2.48-rt69, 3.6.11.6-rt38, and 3.4.52-rt67.
ACPI backlight changes
Windows 8 doesn't use ACPI to control the backlight and leaves backlight control up to individual graphics drivers. However, not all graphics drivers implement backlight control. In addition, backlight control doesn't work correctly on several platforms when the OS tells the BIOS it supports Windows 8 using the standard ACPI _OSI method. You might be wondering why the Linux kernel cares whether Windows 8 no longer uses ACPI to control the backlight. The reasons are not straightforward, and the following explains why.
The ACPI _OSI method is defined so that an OS can call it to inform the platform BIOS which features the OS supports. However, platform BIOSes started using this method to query for OS identification, instead of using the _OS method, which is defined for that purpose. On several platforms, the BIOS tunes firmware features dynamically after determining the OS identity via _OSI, applying work-arounds for known bugs, if any, specific to that OS.
The Linux kernel doesn't identify itself as Linux when the BIOS queries for OS identification using the _OSI method. The intent behind this decision is to prevent the proliferation of Linux-specific handling in vendor platform BIOS code. Such vendor-specific code could cause Linux kernel bugs and performance problems, especially when Linux can handle a feature better in the kernel and the BIOS overrides it with a suboptimal implementation of its own. Essentially, vendor code specific to an OS ties the OS to the platform based on assumptions that might be incorrect and/or no longer valid as OS features evolve. Eliminating such special ties leads to a simpler maintenance model at both ends.
For more information on the _OSI strings Linux supports, please refer to struct acpi_interface_info acpi_default_supported_interfaces, defined in drivers/acpi/acpica/utosi.c.
With that background, let's talk about why Linux cares about Windows 8 backlight control status. Windows 8 requires a minimum level of backlight support for platforms that run it: for example, at least 101 different brightness control levels. In Linux 3.7, the kernel started returning true in response to the BIOS _OSI query for Windows 8, to make the BIOS enable the enhanced backlight features Windows 8 requires. The good news is that the platform then has the enhanced backlight feature. However, this caused the issues related to missing backlight support in drivers and broken platform backlight implementations to surface, making backlight support in the Linux kernel unstable.
Disabling (not registering) the ACPI backlight interface on these platforms sounds like an obvious, simple solution; however, it doesn't work in all cases. For example, disabling the ACPI backlight interface on a platform that has a working ACPI backlight interface and a graphics driver that can control the backlight is not prudent, as it turns off a feature for no good reason. At the other end of the spectrum is a platform with a broken ACPI backlight interface and an equally broken platform-vendor backlight driver. In that case, if the kernel doesn't register a backlight interface, the platform will register its own broken backlight driver, which is another equally bad situation to be in.
A set of patches went into Linux 3.11-rc2 to address these ACPI backlight issues for the Intel i915 graphics driver. These patches change the kernel to register the ACPI backlight interface as before, and have the i915 driver unregister that interface when it loads on platforms whose firmware calls _OSI to check for Windows 8. This fix addresses the two cases discussed earlier. When a platform has a working ACPI backlight interface and a graphics driver that implements backlight controls, registering the ACPI backlight interface ensures that the feature stays enabled. On a platform with a working ACPI backlight interface and the i915 driver, the ACPI backlight interface gets unregistered when the driver is loaded, addressing the need to disable the interface when i915 is in use. Registering the ACPI backlight interface early on also ensures that a broken vendor backlight driver will not be registered by the BIOS.
However, since this patch set introduced regressions, it has been reverted in Linux 3.11-rc3 and work is underway to address the regressions. A future commit without regressions that fixes the original problems could be expected in a later 3.11-rc.
For more information on the problem and the patch set, please refer to: http://permalink.gmane.org/gmane.linux.kernel.commits.head/396675
How to ask git to track new files added by a patch
A recent discussion about needing to delete untracked files after applying a stable release patch resulted in several good tips on dealing with patches that add new files, from several maintainers and from the git author, Linus, himself. When a patch that adds new files is applied as a patch file via "patch -p1 < ../file.patch", git doesn't know about the new files, and they will be treated as untracked. "git diff" will not show the files in its output, and "git status" will show them as untracked. For the most part, there are no issues with building and installing kernels and so on; however, "git reset --hard" will not remove the newly added files, and a subsequent git pull will fail. There are a couple of ways to tell git about the new files and have it track them, thereby avoiding the above issues:
Option 1: After applying a patch that adds new files, run "git clean" to remove the untracked files. For example, "git clean -dfx" force-removes untracked directories and files, including files that would normally be skipped by the standard ignore rules in .gitignore. You can add the -q option to run git clean in quiet mode if you don't care to know which files are removed.
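Here is a minimal sketch of option 1 using a throwaway repository; the repository and file names are made up for illustration:

```shell
#!/bin/sh
# Throwaway repo showing `git clean -dfx` removing an untracked file
# while leaving tracked files alone.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo
cd demo
git config user.email demo@example.com
git config user.name demo
echo "tracked content" > tracked.txt
git add tracked.txt
git commit -qm "initial commit"
# Simulate a file left behind by a plain `patch` run: git never saw it
echo "stray" > untracked.txt
git clean -dfx
ls    # only tracked.txt remains
```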
Option 2: An alternate approach is to tell git to track the newly added files by applying the patch with "git apply --index file.patch". This applies the patch and adds the result to the index. Once this is done, "git diff" will show the newly added files in its output, and "git status" will report their status correctly, tagging them as newly created files.
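And a minimal sketch of option 2, again in a throwaway repository; the patch and file names (add-file.patch, newfile.c) are made up for illustration:

```shell
#!/bin/sh
# Throwaway repo: apply a patch that adds a brand-new file with
# `git apply --index`, so git tracks and stages the file immediately.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo
cd demo
git config user.email demo@example.com
git config user.name demo
echo "base" > base.txt
git add base.txt
git commit -qm "initial commit"
# A unified diff that creates a new file, newfile.c
cat > ../add-file.patch <<'EOF'
--- /dev/null
+++ b/newfile.c
@@ -0,0 +1,3 @@
+int main(void)
+{
+}
EOF
git apply --index ../add-file.patch
git status --porcelain newfile.c    # "A  newfile.c": staged, tracked
```

Had the same patch been applied with plain "patch -p1", the same command would have shown "?? newfile.c", git's marker for an untracked file.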
As for me, I like the second option and plan on using it instead of the patch command to apply stable release patches from now on.
Please find the thread on this topic at https://lkml.org/lkml/2013/7/24/488
Tips on how to implement good tracepoint code
A recent tracepoint patch I sent for review resulted in a discussion on best practices and tips for tracepoint implementation from the tracepoint author and maintainer, Steven Rostedt. I am sharing what I learned to help others who embark on adding new tracepoints to kernel code.
Tracepoints use jump labels, which are essentially runtime code modification of a branch:
    [ code ]
    nop
back:
    [ code ]
    return;

tracepoint:
    [ tracepoint code ]
    jmp back;
And when we enable the tracepoint, the code is modified to be:
    [ code ]
    jmp tracepoint
back:
    [ code ]
    return;

tracepoint:
    [ tracepoint code ]
    jmp back;
This is clever, and it should result in the tracepoint adding no overhead when the tracepoint is disabled. However, in some cases gcc gets confused in its optimization and pushes the tracepoint parameter processing into the code path that runs even when the tracepoint is disabled. So the moral of the story is that it is usually better practice to have a tracepoint do as much work as possible inside the TRACE_EVENT() macro.
As an example, here is the bad form, where the call site does the parameter processing itself:

    trace_event_example(dev_name(dev), dev_driver_string(dev),
                        dev->parent ? dev_name(dev->parent) : "none");

And here is the good form, where the call site passes just the pointer:

    trace_event_example(dev);

And in the TRACE_EVENT() macro:

    TP_fast_assign(
        const char *tmp = dev->parent ? dev_name(dev->parent) : "none";
        __assign_str(device, dev_name(dev));
        __assign_str(driver, dev_driver_string(dev));
        __assign_str(parent, tmp);
    )
You can see how, in the good example, the parameter processing is done in the macro. When in doubt, comparing the generated assembly (.s) output for tracepoint code that does the work outside the TRACE_EVENT() macro with code that does the work inside the macro will show you the differences in optimization.
As you might have observed, there has been steady progress since my last report toward another Linux release, with an emphasis on quality over new features, and without regressions. If it takes reverting features to achieve that quality goal, that is goodness in the end.
Shuah Khan is a Senior Linux Kernel Developer at Samsung's Open Source Group.