Evergreen 486 to 586 Upgrade Processor
While overall performance is very subjective (few of us use exactly the same mix of tools and applications), I have to say that, for me, the upgrade was a huge improvement. The machine felt completely different, with applications starting and running faster and boot-up and shut-down times greatly reduced.
A kernel compile ran almost exactly twice as fast: the upgraded machine completed a make clean; make zImage in 31 minutes, 36.66 seconds (31:36.66), a task which the original 486 processor completed in an elapsed time (measured using /usr/bin/time) of 1:00:30.24, slightly over one hour. Given that the rest of the system (main memory and the I/O subsystem) was unchanged, this is quite a respectable increase in performance. Perhaps the biggest perceived change was to the performance of X-hosted applications. Netscape, always a CPU and memory hog, started up much faster, as did xterms, a clock and the other applications which I normally run. X performance on the system also improved, but with the current low-resolution monitor and standard VGA video card, I don't envisage using it as a true desktop machine very much (perhaps that's the next upgrade target area). Network (NFS/Samba) performance seemed pretty much unchanged.
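For anyone wanting to repeat this kind of before-and-after comparison, the method is simple. This is a minimal sketch assuming only standard POSIX tools; a short sleep stands in for the lengthy make clean; make zImage the article timed:

```shell
#!/bin/sh
# Measure elapsed (wall-clock) time around a command, the same figure
# /usr/bin/time reported for the kernel builds quoted above.
start=$(date +%s)
sleep 1    # stand-in for: make clean; make zImage
end=$(date +%s)
echo "elapsed: $((end - start))s"
```

In practice you would run /usr/bin/time make zImage directly, since it also reports user and system CPU time, which helps separate a faster processor from a faster disk.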
As for a downside, there isn't one. Nothing needs to be recompiled or changed in any way. There are no reliability issues. The fan on the Evergreen unit is completely inaudible when the covers are on the system.
The only fault I have with the whole kit is that mysterious cache configuration jumper. It is clearly shown in the illustrations, but ignored in the body of the text. The appendix contains a brief explanation of the differences between write-through and write-back cache. The kit comes with the jumper pre-set to a default of write-through and while it is unlikely anyone could get into trouble using this setting, a simple comment, even if only “leave well alone”, would have been better than nothing at all.
This lack of information on the cache configuration setting led to my trying out Evergreen's technical support by e-mail. They responded to my question within 48 hours, to let me know that the write-back cache option will work only with systems which have been specifically designed with that option in mind. The fact that Evergreen's tech-support responded within a reasonable time and that their web site is an easily accessible, round-the-clock source of information is of no little importance with a product where some degree of “do it yourself” is involved.
For anyone owning nothing more powerful than a 486 and on a limited budget, this upgrade path is certainly one which I would recommend. Overall system performance has been improved with no decrease in reliability. There were no operating system changes involved and my existing kernel worked fine.
With all of this coming at a cost roughly equivalent to one-tenth of the cheapest, bottom-end system from a mainstream manufacturer, it has to qualify as perhaps the cheapest, legal way to get yourself a “new” machine. Not only that, but your “significant other” need never know.
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can find a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality allows UNIX system administrators to seem to always have the right tool for the job.
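The example above can be sketched as a single command. This assumes the conventional find and grep tools; the /home directory and the search string are placeholders, so the sketch builds a small demo tree instead of touching a real home directory:

```shell
#!/bin/sh
# Build a tiny demo tree standing in for /home.
demo=$(mktemp -d)
echo "ERROR: disk full" > "$demo/app.log"
echo "all systems go"   > "$demo/other.log"

# The "erector-set" pipeline itself: find every .log file under the
# directory, then have grep search each one for a particular entry.
# grep -H prefixes each match with the file name it came from.
find "$demo" -name '*.log' -exec grep -H 'ERROR' {} +

rm -rf "$demo"
```

Neither tool knows anything about the other; find locates the files and grep searches them, and the combination is more useful than either alone.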
Cron traditionally has been considered another such a tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
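For readers who haven't looked at a crontab recently, a single hypothetical entry shows both what cron does well and where it stops:

```
# crontab format: minute hour day-of-month month day-of-week command
# Hypothetical example: run a log-rotation script at 02:30 every Sunday.
30 2 * * 0  /usr/local/sbin/rotate-logs.sh
```

What a crontab cannot express is exactly what the webinar is about: dependencies between jobs, retries on failure, or coordinating jobs across multiple hosts.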
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
- Google's SwiftShader Released
- Interview with Patrick Volkerding
- SUSE LLC's SUSE Manager
- Tech Tip: Really Simple HTTP Server with Python
- My +1 Sword of Productivity
- Murat Yener and Onur Dundar's Expert Android Studio (Wrox)
- Returning Values from Bash Functions
- SuperTuxKart 0.9.2 Released
- Non-Linux FOSS: Caffeine!
- Managing Linux Using Puppet
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn't consider the total cost of ownership, nor the advantages of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here, just cold, hard, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide