diff -u: What's New in Kernel Development
One problem with Linux has been its implementation of system calls. As Andy Lutomirski pointed out recently, it's very messy. Even identifying which system calls were implemented on which architectures, he said, was very difficult, as was mapping a call's name to its number, and mapping call argument registers to actual system call arguments.
Some user programs, such as strace and glibc, needed this sort of information, but their ways of gathering it, though effective, were messy too.
Andy proposed slogging through the kernel code and writing up a text file that would serve as a "master list" of system calls, giving the call name, the corresponding call number, the supported architectures and other information. Among other things, this would allow tools like glibc to eliminate their ugly implementations and use a simple library to get this information out of the kernel.
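A master list along those lines could be as simple as a whitespace-delimited table; the sketch below is purely illustrative (the columns and entries are my own hypothetical example, not a file from the kernel tree), mapping each call's number and ABI to its name and in-kernel entry point:

```
# number  abi     name   entry point
0         common  read   sys_read
1         common  write  sys_write
2         common  open   sys_open
```

A tool like strace or glibc could then parse one canonical file instead of re-deriving the same mapping from architecture-specific kernel sources.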
H. Peter Anvin liked the idea, but said it would take a lot of work to get it right. He mentioned that he'd been advocating something along the same lines for a long time, dating back to his work on klibc.
Various other folks liked Andy's idea as well—particularly anyone involved with user code that currently had to deduce system call organization piecemeal. David Howells remarked that it would be wonderful if strace could rely on Andy's master list as well. And, Michael Kerrisk said the manpages project also would be interested in tracking the progress of the master list.
There's always a special case that would benefit from tweaking the process scheduler just a little further than is good for the system as a whole. Recently, Khalid Aziz from Oracle submitted some code to allow user processes to claim additional timeslices. Typically, the kernel itself controls that sort of resource allocation, because otherwise the system would depend on user applications being friendly and well written.
But, Khalid's database folks had noticed a problem with large numbers of threads vying for the same mutex. If one of those threads had the mutex and was almost ready to give it up, the scheduler might run through the whole queue of other processes, none of which could actually run because they were all waiting for that one mutex. And like a thumb in the eye, the process holding the mutex was all set to give it up, but couldn't, since it had been preempted. Much better, Khalid said, would be to allow the process holding the mutex to delay preemption, long enough to give up that mutex. Then all the other processes could take their turn and do actual work, rather than spend their precious timeslices spinning on an unavailable lock.
Khalid said his code showed a 3–5% speedup over the unpatched scheduler. But, there was still a fair bit of reluctance to accept his code into the kernel.
In particular, H. Peter Anvin pointed out that Khalid's code allowed userspace to transform the kernel's natural preemptive multitasking into a cooperative multitasking model, in which processes all had to agree on who would get timeslices, and when—and some processes could aggressively claim timeslices at the expense of the others.
Davidlohr Bueso pointed out that a voluntary preemption model might work better with the kernel's existing implementation, allowing processes to give up their timeslice to another process voluntarily. There was no danger from hostile processes there.
There were various suggestions for alternatives to Khalid's design, but Khalid maintained each time that his way was fastest. Thomas Gleixner was unmoved, saying, "It's a horrible idea. What you are creating is a crystal ball-based form of time-bound priority ceiling with the worst userspace interface I've ever seen."
That was the real problem, apparently. Giving user code the ability to preempt the normal scheduling process meant that neither the kernel nor other userspace processes could predict the behavior of the system, or even properly debug problems.
At one point Thomas said, "What you're trying to do is essentially creating an ABI which we have to support and maintain forever. And that definitely is worth a few serious questions." He added, "If we allow you to special-case your database workload, then we have no argument why we should not do the same thing for real-time workloads where the SCHED_FAIR housekeeping thread can hold a lock shortly to access some important data in the SCHED_FIFO real-time computation thread. Of course the RT people want to avoid the lock contention as much as you do, just for different reasons."
Eric W. Biederman also objected to Khalid's code, saying, "You allow any task to extend its timeslice. Which means I will get the question why does really_important_job only miss its latency guarantees when running on the same box as sched_preempt_using_job?" And he said, "Your change appears to have extremely difficult to debug non-local effects."
There seems to be a lot of interest in implementing a feature like the one Khalid has proposed, but there also seem to be security, debuggability and maintainability concerns that make the whole thing very iffy. Still, it's possible that Khalid could address those concerns and come up with a patch that does what the database people want, without the mess.