More Letters to the Editor

Word Processor Standards

Now that you have brought up the subject of standards, the issue of standards for word processors across the whole world of computers is one that needs addressing. It is already ten years overdue. The exchange of documents among Microsoft Word, WordPerfect, Apple and Applix is a catastrophe.

Thirty years ago, there was an organization called SHARE that, under the umbrella of the Department of Defense, organized user companies, from Exxon and General Motors to insurance companies, banks and the federal government, to create standards for IBM.

Since GNU/Linux seems to have mastered the art of self-discipline among the herd of nerds who maintain the system, maybe we can raise the cry to start a movement to create a word-processor interchange standard?

The Microsoft Word Standard is a mess and unacceptable. We need a standard for transferring documents between processors—import and export. Some organization with market clout needs to take the lead, but we can certainly publicize the problem to our benefit.

How do we start?

—Bob Stanfield,

Your Review

I saw a link in Linux Today to your review of the UNIX awk and sed Programmer's Interactive Workbook.

Note: I am sending this comment to the reviewer and Linux Journal as well as posting at Linux Today.

I am the author of the UNIX System Administrator's Interactive Workbook, the original book in this series, and I feel it would be helpful to clarify a few things. The books in this series are intended to be accompanied by a more standard reference book. They are designed for the beginner. Because this approach is new, it will be fine-tuned as we learn more about what students really need.

I have not actually read the UNIX Awk and Sed Programmer's Interactive Workbook, but I do know the intent of the series. I also cannot specifically address the issues raised by the review. Basically, the beginner is stepped through a process to learn the necessary skills. Many things are, of necessity, left out, with the intent that readers will supplement these skills with other books. This may not be made clear enough in the text and accompanying publicity. I have made it clear in my own text and publicity, but people still get confused on this issue.

My research interests include the process beginners use to learn computers on their own. This is different, in my opinion, from how things are taught in most computer science classes. Once this process is better understood and applied, I think academic computer learning will be a lot more fun. It is like learning how to skate: you learn each skill as you go along, then strengthen it and move on to the next skill you want to learn. This approach takes up a lot more text space and of necessity restricts what can be covered.

—Joe Kaplenk,

Prediction of the End of UNIX!

Please accept this slightly tongue-in-cheek submission to your letters section of Linux Journal.

I predict that the Unix world will cease to exist on a Tuesday morning, at 14 minutes and 8 seconds past 3, in January 2038 (in Sydney, Australia). Why? Because at that time it will be 2147444048 seconds since 00:00:00 (you know what I mean), 1 January 1970. For those purists out there who already know what I'm talking about, the 39600-second (11-hour) difference is the Sydney-to-UTC timezone difference (and I'm too lazy to change the mickey-mouse-type test program to align to UTC).

How do we stop this?

The simplest way will be to change

   typedef long    time_t;

in time.h (or somewhere included below time.h) to

   typedef long long       time_t;

and then all routines associated with time_t. (If you don't accept that long long defines 64 bits, then substitute your favourite 64-bit storage class definition.)

This will give us a few more years before the time flips over again.

No doubt this will take a few years to implement, perhaps as a different set of routines, i.e.,

   time(long *) => ltime(long long *)
   ctime(long *) => lctime(long long *)

etc., plus some macros in the time.h include-file tree to help convert, and to warn existing users of these changes.

I realise that this problem is a long way off, but in the spirit of Unix (the socialist operating system, as someone recently suggested), let's start the change now.

Just in case I have lost a lot of people: Unix time is based on the number of seconds since 00:00:00 UTC, January 1, 1970 and is stored as a long (a signed 32-bit quantity), which means a maximum of 2^31 - 1 = 2147483647 seconds before the time overflows and we go back to Friday, Dec 13, 1901 (or at least that's what my current Solaris box will do, again in Sydney, Australia). By changing this storage class to a 64-bit signed quantity, we get a maximum of 2^63 - 1 = 9223372036854775807 seconds, which should hold us for a few more years (approximately 68 versus 292471208677 years).

Yes, we could make the change simpler and just go to an unsigned long storage type (given that the system call and associated routines don't make use of the high bit), but this just gives us another 68 or so years, i.e., it moves the problem to 2106 or somewhere nearby. Also, by the time this change starts, most CPU/compiler ensembles will either directly execute single-op 64-bit instructions or emulate them if not.

As a side issue, I want to congratulate Kyle Enlow (LJ Letters, June 1999). I was as annoyed as he was when I read the Linux certification article (LJ, April 1999). We need Linux certification as much as we need NT or Windows.


—Burn Alting,

Even before getting Linux

It's a great pleasure to say that Linux Journal is the greatest reason I have installed Red Hat Linux on my PC. I subscribed to LJ about a year ago with my friend's help (using his credit card), and since getting my issues and reading all the good things about Linux, I was hunting for an installation CD. Being a student with a limited budget, I managed to get Red Hat 5.1 (scratch marks all over) for a mere US$3.00++, which is RM10.00 in Malaysia.

I had read about all its greatness, especially that Linux is not commercially oriented, unlike Microsoft products, which are too expensive for me.

I admit installing Linux was not an easy task for a beginner. Without a proper reference manual, except for the readme and docs on the CD, I managed to get Linux running after hundreds of trial-and-error procedures, and after a month or so of trying to understand how to configure X Windows properly, I got everything running well.

Up until now, I have had no regrets for all the trouble of installing it, and now that I have a proper book, which one of my lecturers gave me after seeing how interested I am in Linux, I find it is so easy to install.

Now I have customized and tailored it. Currently, I am teaching myself how to write Perl/C/C++; learning Pascal, HTML and Java on my own showed I am determined enough to learn something even without the proper tools or resources. Thank God my mother understands how much I love sitting in front of my computer all night and day just trying to better myself at it. The philosophy of Linux itself thrills and moves me. Everybody helping each other to better an OS for everyone's benefit is the most decent thing next to being a priest.

My life is full of excitement nowadays, never dull like the days when DOS and Windows 95 ruled the personal OS world. I hope someday I will be of benefit to all those running Linux; right now, I just have to graduate.

P.S. Linux has made me the envy of my coursemates learning about UNIX, since I tend to know more and answer the right questions. :) Thanks, LJ and Linus, for the great OS. Keep it free so poor students trying to graduate, like me, can better themselves.

—Desmond Lewis,


Thanks for such a great magazine. That's all, just thanks.

—Jamie Matthews,

Linux and PCI modems

Shawn McMahon (Letters, June 1999) is partially right. There are PCI modems that won't work with Linux, and there are PCI modems that WILL work with Linux, including with the 2.2 kernel shipped with Red Hat 6.0.

I'll bet there are a few heads scratching over the above statement. I fell into a working modem by sheer luck. Over a year ago, I purchased a Sportster PCI 33.6 modem. It worked with Red Hat 5.x; I just set the jumpers for COM2, stuck it in, set the BIOS to turn off the internal COM2 and booted Linux. I had a 14.4 that also worked.

Because of some recent changes, I purchased a PCI 56k and sure enough, it refused to work. I took it back and got an external 56k.

OK, why did the 14.4 and 33.6 work? I gave some clues, which I hope you noticed. Yep, PCI modems that can be manually configured for the comm-port settings will work; PnP modems will not. When you manually set the modem, you are setting it up as if it were an external modem using a built-in comm port. BTW, the 33.6 could be configured for PnP; I just never did it that way, being used to setting cards for the comm port that will be used.

So, if you can find a PCI modem that has configurable jumper settings (doubtful these days), it will work. Otherwise, like me, you will have to use an external modem, which is a blessing, because that frees up a PCI slot for a different card. You lose the comm port no matter what, so why not use the internal comm port and use the PCI slot for something else?

—Mike Brown,

IP Bandwidth Management article in June 99 issue

This is in response to Jamal Hadi Salim's article titled “IP Bandwidth Management”, in the June 1999 issue of Linux Journal. The article provides a clear description of the traffic control capabilities of Linux. It seems that some impressive packet scheduling work has been done for the Linux kernel. However, I take issue with some of the claims in the article. I'll address these in the following paragraphs.

Jamal states in his “Final Word” that “As far as I know, Linux is the most sophisticated QoS enabled OS available today” and that “I am not aware of any such functionality in Microsoft products.” This came as a great surprise to me. Anybody who has been active in the IETF (the birthplace of RSVP, diffserv and most interesting QoS technologies) is aware of the work Microsoft folks are doing in this area, in collaboration with folks from Cisco, Intel and numerous others.

To be a “sophisticated QoS-enabled OS” requires a lot more than a packet scheduler in the kernel. A QoS-enabled OS needs to include at least packet scheduling, diffserv codepoint (DSCP) marking and RSVP signaling. It is necessary to take a holistic approach to QoS. Allow me to elaborate: the purpose of QoS is, ultimately, to enhance the experience of the user of networked applications, be they multimedia applications or other mission-critical applications such as ERP, e-mail, etc. These applications suffer when there is congestion in the network anywhere between sender and receiver. The network starts with the sending host's network stack. However, in most cases, the congestion is not in the sender's stack; rather, it is somewhere deeper in the network. To provide QoS in the sending stack only, and not in a true end-to-end manner, is analogous to putting an HOV lane on the entry ramp to a congested highway. It doesn't make my commute to work any better; it just gives me quicker access to the congested highway.

How can hosts contribute to true end-to-end QoS? A number of mechanisms are available. Primarily, these fall into the realms of marking and signaling. Marking is the act of marking packets for certain behaviour in the network. This can be done by marking a DSCP in IP packet headers. Diffserv-enabled routers in the network prioritize packets based on the behaviour specified in the packet headers. An additional form of marking is 802.1p, achieved by adding to a packet's MAC header when it is submitted to an 802 LAN. Many switches prioritize packets based on the 802.1p mark. Signaling mechanisms include, at layer 3, RSVP and, at layer 2, ATM signaling, proposed extensions to DOCSIS for cable modems and a myriad of other link-layer-specific signaling protocols. Signaling protocols are used to reserve resources for certain traffic flows in the network, to assist the network in recognizing traffic from certain users or applications, to provide admission control and feedback to applications, and to coordinate QoS mechanisms among separate network elements. Recent work in the IETF has led to the integration of RSVP signaling with diffserv packet handling. This marriage of technologies enables the benefits of RSVP to be realized without the scalability concerns that impeded its early deployment.

Providing true end-to-end QoS in a host OS requires support for both signaling and marking mechanisms, as well as traffic shaping. A host OS needs to provide support for these mechanisms in an integrated manner, as opposed to isolated 'boxes' of functionality. The mechanisms need to be able to respond to standard network policy controls. Finally, the host needs to provide end-user applications and traffic-management applications a simple and cohesive set of APIs that pull together all the disparate QoS mechanisms. Windows 2000 does so by providing a simple extension to the Winsock API which coordinates RSVP signaling, DSCP marking, 802.1p marking, ATM signaling (and additional layer-2 signaling mechanisms) and packet scheduling. As such, I claim that while Linux may have the most sophisticated packet scheduler, it is Windows 2000 that provides the most sophisticated, ready-to-use QoS support.

—Yoram Bernet,
QoS Development Lead, Microsoft

Re: IP Bandwidth Management

RE: Yoram Bernet's letter.

Linux 2.2 is shipping now, and the QoS work has been shipping since at least the fall of 1998. Windows 2000 is not yet shipping, nor are a dozen other products. So, with the exception of whatever Windows 98 has, the rest is really vapourware as far as the end user is concerned.

However, when W2000 ships, I'm sure that its solution will be complete, while what is shipping with Linux is rough and lacks a good user interface.

—Michael Richardson,

At The Forge Factual Error: Don't Trust Hidden Fields

You're right about not trusting hidden fields. I'm surprised that I told people not to worry about hidden fields, or the origin of a form, since I'm aware of the potential security risks involved. Thanks for reminding me of this; I hope not to forget it when writing future columns.

Using MD5 to verify the contents of a hidden field is a rather clever idea.

I'll take a look at MIME::Lite for sending mail with attachments. It sounds like a useful module to use in my own work, as well as in future ATF articles.

As for more articles about mod_perl, I certainly expect to write them in the coming months. I find mod_perl increasingly fascinating and useful, and I think others will benefit from trading ideas about it.

Thanks for writing, and please let me know if you have any additional ideas!

—Reuven M. Lerner,

Concerns about certification

In reading the recent articles in LJ about Linux certification, I have become concerned. I read a lot about why certification is important to us all, but what I did not see is any disclosure of self-interest. Will P. Tobin Maginnis or Sair, Inc. make any income from this certification process? Will Dan York or Linuxcare? Dan is a technical instructor. Will generating an environment where certification is required create additional billable training opportunities? If this is a money-making venture, then be forthcoming about it, and don't veil it with the argument that this is what is good for the community. I'm not opposed to making money, but let's be honest about the objectives. (It can be profitable and good for the community, but if the primary objective is generating revenue, the community will not be well served.) Anyone who thinks the primary motive of other proprietary software certifications (you know who they are) is to better serve the community needs to follow the money to where the real motivation lies.

Another question this has raised is, can anyone offer a Linux certification? If so, can I offer the Geo's certification and administer tests and collect fees? This could lead to fragmentation and diminish the value of all Linux certifications. Which one is credible?

So what is the remedy? I don't know, but here are some thoughts. I think in order for a certification body to be credible it needs to:

1.) Be a non-profit organization. Only then will we know for sure that the goal is community, not money.

2.) Have all volunteer staff. No paid employees. Salaries to employees of even a non-profit organization can lead to self interest. Volunteers will do it because it is good for the community. Linux User Groups could be utilized to administer tests.

3.) Have no expenses outside of direct expense of test administration. No office, no expense accounts, etc.

4.) No fees outside of direct expense of test administration. Like printing, postage, website.

5.) Full public disclosure of all finances. The community needs to know that the interest of the certification body is only the community.

This may seem extreme, but I think it is necessary in order to keep the objectives of the testing body in line with the community. It's true that other software-certification bodies don't work this way, but again, follow the money to understand their motivation. I am not opposed to making money. I am a working programmer. I think there are plenty of opportunities for organizations other than the testing body to charge fees for training and support, but letting them create a need for training, via creating a need for certification that they will administer and profit from, does not put the best interest of the community first.

I don't think I am alone. I think there is a (so far) silent majority that is concerned and is watching the certification issue unfold.

Any volunteers?

—George Saich,

Nasty bug in LJ #61 Pthread code

Hopefully you can redirect this letter.

There is a really nasty bug in the code published in the Pthreads article.

It puts a new focus on buffer overflow in your online article.


The published code reads:

   int main()
      pthread_t *tide; //
      int i;
      for(i=0; i<10; i++) {

It should read:

   int main()
      pthread_t tide[10];
      int i;
      for(i=0; i<10; i++) {

—Klaus Pedersen,

About the Linux standards article

Title: LSB: more facts, please

I'm disappointed in Daniel Quinlan's article where it discusses the Linux Standard Base (LSB), because it doesn't explain the current state of the standard, whether there are concrete LSB plans or a schedule, and if there aren't, why not. What's happening in the LSB effort? I believe it is a good idea to keep the community informed.

I'm also disappointed in all the distributions, because I think the LSB must be one of their highest priorities; it should have been in place some time ago, yet it seems the LSB is still in its early stages.

Reading the opinions of the distributions' people is also not very gratifying: even though they say they don't want to see fragmentation, they actually bundle generic products as distribution-specific ones. This causes more confusion in the market. Distributions should force themselves to avoid this, in order to make Linux a stable and truly homogeneous open-source platform.

IMHO, Linux is just starting to compete in the marketplace, and distribution competition should not rely on things that produce fragmentation. The LSB is a must, and it needs much more than promises and good wishes.

I hope actions will be taken fast to avoid this in the future.

Thanks for providing this medium for user feedback.

—Ulisses Alonso Camaro,

Re: Linux and E-Commerce article in LJ 63

To: Yermos Lamers
On page 41 of the July 1999 issue (number 63) of Linux Journal, you state in your second bulleted point on that page (the sixth bullet in 'Putting It All Together'):

If the transaction was authorized ...
[...] The reasoning is that if someone steals your credit card, it is unlikely they have your address as well.

Say what??? That's for the public to see, right?

The chance that a credit card loss occurs as a result of the loss of a purse or wallet is significant. In every case I'm aware of, the driver's license contains both the address and the zip/postal code. Please check the contents of all of your employees' wallets and see. Then note whether some other unique number that is also known to the credit card company is present. I don't know for a fact, but I think the home phone number would be a better check. Here in the States, it is possible to locate a super telephone directory on-line, to see if the customer's name matches a phone number, which is much less likely to show up in the wallet. Even so, this is not fool-resistant, much less foolproof. If the baddies know that it is part of the check, they'll consult the super phone book as well, before placing an order with you.

No, this is not the 'secure' way of authenticating the customer. I know that 1-2 percent is the industry average, but there are ways to improve on that. Making certain that the billing address is the same as the shipping address will help. This may have been what you implied with your address check, but if so, it may have been too subtle and, at the same time, not subtle enough.

Otherwise, a good article. I like the 'real world' approach to solving your problems.

—Bill Staehle,

False report

On page 18, paragraph 3, line 2 of the June issue, you say that ssh fails to compile with glibc 2.1. I have been using glibc 2.1 for about four months now and have never failed to compile ssh. BUT, you have to have a very clean and neat set of include files in the include path in order for the job to be accomplished. Mixing headers from new and old versions of glibc doesn't help a bit.

Thank you for your time and BTW, thank you for your magazine.

—Gerasimos Melissaratos,

Netscape and glibc 2.1

Thanks for a great magazine!

I read the article in the June 1999 issue, on page 18, by David A. Bandel about glibc 2.1 vs. glibc 2.0 and problems with Netscape. I don't know anything about his problem except what he wrote, but I had some problems with Netscape and Stampede (based on glibc 2.1). At first I blamed glibc 2.1, but the problem wasn't there; it was in libstdc++. Netscape 4.51 didn't like the libstdc++ 2.7.x that came with Stampede, but after putting just the binaries from Red Hat 5.2 in /usr/local/lib, it worked fine. Netscape 4.6 needs libstdc++ 2.8.x, and the Red Hat version seems to work fine.

BTW: I think I read somewhere on the Internet that there had been some sort of mixup with the Netscape binaries, so that they put the libc5 and glibc 2.0 versions in the wrong places. Maybe that's why he had to change to the libc5 version.

BTW2: I can recommend a great program for file management. It's called FileRunner, and it's able to do a lot of things; take a look at:


Re: Question on an article

Every package I review in LJ has been compiled and run on a Linux system. Just off hand, I can't tell you which system, but it will be either a modified Debian 2.1 (w/ a 2.2 kernel) or a Caldera OpenLinux 2.2 system (with updated libraries).

The biggest problem folks face in compiling things on Linux is libraries. You need lots of them, and current ones.

As for compiling Saint: I looked at the Saint site. It specifically mentions both Red Hat 6.0 and Caldera OpenLinux 2.2 as capable of compiling Saint (there were problems originally with glibc-2.1).

I don't create RPMs or Deb packages for myself, so I won't start doing it for others. I just don't have the time or disk space. If you are having problems compiling the program, send me the error messages, and perhaps I can help decipher them.

Besides, if you're running a system with libc5 or glibc-2.0.7, any binaries I send you will just segfault.

I'd be happy to help you get Saint compiled. Just make sure you have current copies of libraries I listed, get the latest version of Saint, and if you have problems, e-mail me the error messages, and I'll see what we can do.

Good Luck,

—David A. Bandel,