Interview with Ted Ts'o

Ted discusses his work on the Linux kernel, Linux International, Linux Standard Base and other areas of the Open Source community.
______________________

Comments


Searching for unexpected ABI breaks

Andrey's picture

To catch ABI breaks in Linux shared libraries, you can use tools that statically analyze library header files and shared objects:
1) ABI-compliance-checker, http://ispras.linux-foundation.org/index.php/ABI_compliance_checker
2) icheck, http://www.digipedia.pl/man/icheck.1.html

Re: Interview: Interview with Ted Ts'o

Anonymous's picture

It is time that the process of linux kernel development is formalized -- soon it may be too late. Linux may lose momentum, and the current state of linux kernel development only plays into the hands of Microsoft.

For true acceptance of linux within the IT industry, a credible organization, an entity, must stand behind the kernel development--not an individual. At present, no sane large business enterprise would build its infrastructure on linux. The reason for that is simple: how do I trust my multi-billion $ business to an OS whose development is controlled by a single guy in his spare time? What if he gets sick, or tired, or simply goes nuts? No large enterprise would take such a stupid step.

Heavyweights of the industry should come together and invest resources in a body which would coordinate linux kernel development full-time -- an organization initially headed by Linus himself. Such a body could plan linux's long-term future, release new kernel versions, and employ the best linux kernel developers full-time on a competitive basis. Such a body would ensure that linux does not fork, and give a lot more confidence to the real world about linux. It would be a rock-solid guarantee of the continuity and stability of the linux kernel. Otherwise, once the linux hype is over, we may wake up in a world still dominated by Microsoft -- if they survive the linux wave, nothing will put an end to their madness and arrogance.

Re: Interview: Interview with Ted Ts'o

Anonymous's picture

Any plans to go to a microkernel, a la QNX's neutrino for example? This seems to be the logical next step in the layering approach that UNIX initiated.

Re: Interview: Interview with Ted Ts'o

Anonymous's picture

Linux, to my understanding, is quite efficiently modular as is. But out of curiosity, what advantages would the pure micro-kernel approach bring to the present Linux marketplace?

Re: Interview: Interview with Ted Ts'o

Anonymous's picture

>...or simply goes nuts...

...or bananas, from all the excited big infrastructure manufacturers jostling to get their favorite toys into the official Linux kernel tree!

libgal is not supposed to be a stable library

Anonymous's picture

libgal is not an official GNOME library. It was only created to share code between Evolution and Gnumeric, and it was never meant to be stable, nor to be part of the platform. It is more of a playground and a code-sharing device than a system library. That's why its ABI is unstable; it was never advertised as stable either.

So I don't think holding it up as an example of lack of ABI stability is fair. ;-)

It's time for the Linux Community Process

Anonymous's picture

A formalization of the Linux kernel development process, akin to Java's JCP (www.jcp.org), is necessary at this time. Linux will never be truly open otherwise, IMHO.

Re: An informed view from the outside

Anonymous's picture

I think everything is just about fine as it is. Maybe there could be a bit more formalism in the process, but I think this could come from new tools or basic organization rather than yet another 'commercial-ish official organization'.

The use of BitKeeper instead of relying on CVS and/or straight source has probably sped things up for Linus. He did have a bit of a 'slump' in 'throughput and scheduling' before and after the transition occurred.

2.5 looks like a major, major step forward compared to the previous kernel generations. There are a lot of issues, and a lot of new technologies being incorporated. This is not the time for too much haste in development; a lot of far-reaching decisions are being made, and they should not be rushed. Patches that are not ripe will be dropped.

The rejection of patches is a two-way thing: incorporating too many patches against the clock leads to kernel bloat. So some people get a bit narked at not having their work included in 'the kernel' -- but so what? That is just a good argument for forming more 'rings of conformance and testing' around the kernel proper. The tendency to need this is seen as forking, but if things are done right, with the right mechanisms put in place ('the organization'), then forking becomes just special-interest groups of kernel hackers doing more work and testing on individual patches, or sets of patches, before putting them up for acceptance in the kernel proper.

This is where maybe a bit of organization is needed: perhaps a web space devoted to allowing individuals or parties to 'formally' express interest in a given area/topic or set of topics under development. Developers could then search a database of kernel development 'initiatives' and either join an existing one working on that area or form a new interest group for others to join, on the coding and/or testing fronts.

There I will leave it.

Re: An informed view from the outside

Anonymous's picture

Still, there will always be that question of undue influence, rightly or wrongly.

Re: It's time for the Linux Community Process

Anonymous's picture

I am amazed at how easy the process of getting patches included in the stock kernel is. I don't think it could get much easier.

Perhaps it's because my changes were so small and obvious, or perhaps I've just been lucky. But I've had no problem getting patches accepted.

Re: It's time for the Linux Community Process

Anonymous's picture

> A formalization of the Linux kernel development process, akin to Java's JCP (www.jcp.org), is necessary at this time. Linux will never be truly open otherwise, IMHO.

Troll. But I'll bite. The development process has nothing to do with how "open" it is; Linux's "openness" is described precisely in the GPL. Besides, what makes you think adding a layer of bureaucracy would improve kernel development in any way? Perhaps if the developers weren't volunteers, they would tolerate a political, bureaucratic process interfering with their hacking. Maybe.

Re: It's time for the Linux Community Process

Anonymous's picture

APIs in the kernel do tend to be a bit unstable, but at the same time they are worked in only after being "reviewed" by the community. For instance, async IO is not in the kernel because Linus rejected it, since there was no feedback on it.

If there is no feedback, it means that no one uses it, and it is by no means perfect, so this argument makes sense.

So Linux, as a community project, is indeed already following a community process. And everything that is not new follows POSIX, which has been a standard for ages.

So try and think: what issue are you trying to solve? If you can't come up with a real-world problem you actually saw, maybe there is no problem at all :O)

emmanuel.

Re: It's time for the Linux Community Process

Anonymous's picture

POSIX is a formal process, is it not?

------

www.unix-systems.org/version3/

Re: It's time for the Linux Community Process

Anonymous's picture

You're a ***** idiot.
