University of Toronto WearComp Linux Project
This paper is part one of a two-part series. In this part I will describe a framework for machine intelligence that arises from the existence of human intelligence in the feedback loop of a computational process.
I will also describe the apparatus of the invention that realizes this form of intelligence, beginning with a historical perspective outlining its visual and photographic origins. The apparatus of this invention, called “WearComp”, emphasizes self-determination and personal empowerment.
I also intend to present the material within a philosophical context I call COSHER (Completely Open Source, Headers, Engineering and Research) that also emphasizes self-determination and mastery over one's own destiny.
This “personal empowerment” aspect of my work is what I believe to be a fundamental issue in operating systems such as Linux. It is this aspect that WearComp and Linux have in common, and it is for this reason that Linux is the selected operating system for WearComp.
An important goal of being COSHER is allowing anyone the option of acquiring, and thus advancing, the world's knowledge base.
I will also introduce a construct called “Humanistic Intelligence” (HI). HI is motivated by the philosophy of science, e.g., open peer review and the ability to construct one's own experimental space. HI provides a new synergy between humans and machines that seeks to involve the human rather than having computers emulate human thought or replace humans. Particular goals of HI are human involvement at the individual level and providing individuals with tools to challenge society's preconceived notions of human-computer relationships. An emphasis in this article is on computational frameworks surrounding “visual intelligence” devices, such as video cameras interfaced to computer systems.
I begin with a statement of what I believe to be a fundamental problem we face in today's society as it pertains to computers and, in particular, to computer program source code and disclosure. Later, I will suggest what I believe to be solutions to this problem. Linux is one solution, together with an outlook based on science and on self-determination and individual empowerment at the personal level.
A first, fundamental problem is that of software hegemony, seamlessness of thought and the building of computer science upon a foundation of secrecy. Advanced computer systems are an area in which a single individual can make a tremendous contribution to the advancement of human knowledge, yet individuals are often prevented from doing so by various forms of software fascism. A system that excludes any individual from exploring it fully may prevent that individual from “thinking outside the box” (especially when the box is “welded shut”). Such software hegemonies can prevent some individuals from participating in the culture of computer science and the advancement of the state of the art.
A second fundamental problem pertains to some of the new directions in human-computer interaction (HCI). These new directions are characterized by computers everywhere, constantly monitoring our activities and responding intelligently. This is the ubiquitous surveillance paradigm in which keyboards and mice are replaced by cameras and microphones watching us at all times. Perpetrators of this environmental intelligence claim we are being watched for our benefit and that they are making the world a better place for us.
Computers everywhere, constantly monitoring our activities and responding intelligently, have the potential to make matters worse from the software hegemony perspective, because of the possibility of excluding the individual user from knowledge not only of certain aspects of the computer upon his or her desk, but also of the principle of operation and the function of everyday things. Moreover, the implications of secrecy within the context of these intelligence-gathering functions pose a serious threat to personal privacy, solitude and freedom.
Science provides us with ever-changing schools of thought, opinions, ideas and the like, while building upon a foundation of verifiable (and sometimes evolving) truth. The foundations, laws and theories of science, although held to be true, may at any time be called into question as new experimental results unfold. Thus, when doing an experiment, we may begin by making certain assumptions; at any time, these assumptions may themselves be tested and verified.
In particular, a scientific experiment is a form of investigation that leads wherever the evidence may take us. In many cases, the evidence takes us back to questioning the very assumptions and foundations we had previously taken as truth. In some cases, instead of making a new discovery along the lines anticipated by previous scientists, we learn that another previous discovery was false or inaccurate. Sometimes these are the biggest and most important discoveries—things that are found out by accident.
Any scientific system that tries to anticipate “what 99% of the users of our result will need” may be constructing a thought prison for the other 1% of users who are the very people most likely to advance human knowledge. In many ways, the entire user base is in this thought prison, but many would never know it since their own explorations do not take them to the outermost walls of this thought prison.
Thus, a situation in which one or more of the foundation elements are held in secret is contrary to the principles of science. Although many results in science are treated as a “black box” for operational simplicity, there is always the possibility that the evidence may lead us inside that box.
Imagine, for example, conducting an experiment on a chemical reaction between a proprietary solution “A”, mixed with a secret powder “B”, brought to a temperature of 212 degrees T (a top-secret temperature scale, which you are not allowed to convert to other units). It is hard to imagine where one might publish results of such an experiment, except perhaps in the Journal of Non-Reproducible Results.
Now, it is quite likely that one could make some new discoveries about the chemical reaction between A and B without knowing what A and B are. One might even be able to complete a doctoral dissertation and obtain a Ph.D. for the study of the reaction between A and B (assuming large enough quantities of A and B were available).
Results in computer science that are based, in part, on undisclosed matters inhibit the ability of the scientist to follow the evidence wherever it may lead. Even in a situation where the evidence does not lead inside one of the secret “black boxes”, science conducted in this manner is irresponsible in the sense that another scientist in the future may wish to build upon the result and may, in fact, conduct an experiment that leads backwards as well as forwards. Should the new scientist follow evidence that leads backwards, inside one of these secret black boxes, then the first scientist will have created a foundation contaminated by secrecy. In the interest of academic integrity, better science would result if all the foundations upon which it was built were subject to full examination by any scientist who might, at some time in the future, wish to build upon a given discovery.
Thus, although many computer scientists may work at a high level, there would be great merit in a computational foundation open to examination by others, even if the particular scientist using the computational foundation does not wish to examine it. For example, the designer of a high-level numerical algorithm who uses a computer with a fully disclosed operating system (such as Linux) does other scientists a great service, even if he or she works only at the API level and never intends to examine the source code of the supporting libraries or of the Linux operating system underneath them.