Linux in Higher Education: Open Source, Open Minds, Social Justice
It's generally agreed that college and university students should learn the fundamentals of information technology, including the use of operating systems, office application software and the Internet. It's quite another matter, though, to pay for the necessary infrastructure--wired dormitories, industrial-strength servers, lots of PCs around campus, and pricey commercial software for student use. Now that Linux and open-source office applications such as AbiWord and Gnumeric are available for free, institutions of higher education can save big money in software costs, and more than a few campuses and university consortia are starting to take Linux seriously (see, for example, Robiette 1999). They're discovering what Linux users already know--namely, that Linux, compared to Microsoft Windows, offers an unbeatable combination of advantages, including a zero price tag, do-it-yourself flexibility, freedom from licensing headaches, stability, performance, compliance with public standards, interoperability with existing systems, and a design that reduces the threat of computer viruses (see Prasad 1999).
As I'll argue in this essay, there's much more at stake here than money. Open-source software in general--and Linux in particular--holds the key to the ability of colleges and universities to retain their traditions of scientific and scholarly excellence as they adapt to an increasingly computerized world. By establishing Linux as the international standard for academic computing, institutions of higher education can directly address challenges to the integrity of scientific research, do a better job of preparing students for a world of rapidly changing technology, and combat the growing and disturbing disparities in access to information technology. The following sections detail the case for Linux in higher education--a case that, in my view, amounts to a moral imperative.
Since its earliest days, science has been based on a gift-economy notion very much like the one underlying open-source software: scientists receive credit and prestige for their discoveries, but they do not receive ownership of them. On the contrary, scientists are expected to publish their findings in open, public journals, which are accessible to all. These journals print scientific articles only after a submission passes peer review, in which a scientist's peers scrutinize all of the assumptions and calculations that produced the conclusions. The journal's editor will publish a scientific article only when the peer reviewers conclude that the underlying methods are sound. To be sure, the system doesn't always work perfectly, but--like democracy--it is clearly superior to its alternatives.
Increasingly, scientists are beginning to see that their use of closed-source software poses a profound threat to the integrity of science (Kiernan 1999). Computer software is increasingly used to analyze research results or simulate real-world systems. However, scientists rarely make their software available to other scientists for scrutiny--and even when they do, they often rely on closed-source programs in which the underlying source code is protected by copyright and trade secrecy claims. This practice strikes at the heart of science, namely, the notion of verifiability. To be accepted as valid, all calculations and assumptions that go into a given scientific conclusion must be open to public scrutiny. Yet closed-source software makes such scrutiny impossible.
These are the simple facts, from which Dan Gezelter, a professor of biochemistry at the University of Notre Dame, draws a compelling conclusion: scientists are positively obligated to use open-source software, and what is more, the future of an increasingly computerized scientific enterprise may well depend on their decision to do so (Gezelter 1999; cf. Wilson 1999). Increasingly, scientists and university librarians are developing clearinghouses and large-scale development projects to create more open-source alternatives for use in higher education (see the Open Science Project and oss4lib).
But the use of open-source software is insufficient. If the future of science depends on scientists' use of open-source software, one can very well argue that colleges and universities are under a positive obligation to move away from closed-source computing infrastructures as well as closed-source software. Consider this: many of the instructions in computer programs do little more than issue directives to the operating system, by means of the operating system's application programming interface (API). To verify scientific software fully, the scientific community may need to examine the program's interaction with the operating system. Yet Microsoft refuses to document the Windows API fully and regards the Windows source code as an immensely valuable trade secret. What is more, Microsoft has taken the lead in lobbying for proposed changes to the U.S. commercial code that would effectively criminalize reverse engineering.
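On an open platform, this kind of scrutiny is possible in practice. As a minimal sketch, a reviewer on Linux can use the strace utility to record every system call a program makes--that is, its complete interaction with the operating system. Here the command true merely stands in for a hypothetical analysis program:

```shell
# Record the program's interactions with the kernel: every file it opens,
# reads, writes and closes ends up as one line in trace.log.
# ('true' is a stand-in for a real scientific analysis program.)
strace -o trace.log -e trace=openat,read,write,close true

# Each logged line is a documented, inspectable system call.
wc -l trace.log
```

No comparable end-to-end audit is possible when the operating system's interfaces are undocumented and its source is secret.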
It's not enough for scientists to use open-source software; they must also use an open-source operating system. Colleges and universities can help to ensure that both the software and the operating systems used in science are open source by moving to Linux as an international standard for academic computing.
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can locate a particular file or files based on all kinds of criteria. It's easy to string these tools together to build even more powerful ones--for example, a command that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set approach means UNIX system administrators always seem to have the right tool for the job.
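The find-plus-grep combination described above fits on a single line; the search pattern here is invented for illustration:

```shell
# Find every .log file under /home and print the names of those
# containing the string "connection refused".
# -type f restricts the search to regular files;
# -exec ... {} + batches the file names into as few grep calls as possible;
# -l makes grep print matching file names rather than matching lines.
find /home -name '*.log' -type f -exec grep -l 'connection refused' {} +
```

Swapping grep -l for plain grep would print the matching lines themselves, and the same skeleton works with any other per-file command in place of grep.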
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to upgrade your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
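For context, a cron job is just a one-line crontab entry; the script path and schedule below are made up for illustration:

```shell
# Minute Hour Day Month Weekday  Command
# Run a nightly log rotation at 2:30 AM, appending output to a log file.
30 2 * * * /usr/local/bin/rotate-logs.sh >> /var/log/rotate.log 2>&1
```

Cron handles the schedule and nothing more--there is no built-in dependency handling, retry logic or cross-host coordination, which is precisely where dedicated job schedulers come in.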
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn’t consider the total cost of ownership, and it doesn’t consider the advantages of real processing power, high availability and aggressive multithreading.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just cold, hard, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide