PyCon DC 2004
The most entertaining talk was "Faster than C: Static Type Interface with Starkiller", by Michael Salib. That was only the fourth best title of the conference, however. The top three were "'Scripting Language' My Arse: Using Python for Voice over IP", by Anthony Baxter; "Flour and Water Make Bread", by David Ascher; and "Two Impromptus, or How Python Helped Us Design Our Kitchen", by Andrew Koenig.
The Starkiller talk was enjoyable because Mike is one of two speakers who don't pull any punches--he calls a spade a shovel. His justification for writing a Python-to-C++ compiler was, "For the 15% of applications where speed matters, Python is slow. Sure, you can write it in C++, but C++ sucks. That's why we're using Python." Which actually makes a lot of sense if you think about it.
He went on to explain how Python's dynamic features that we love so dearly are the reason Python is so hard to optimize. "Python has lots of runtime choice points, but thirty years of compiler optimization research depends on eliminating runtime choices." By "runtime choices" he's referring to the fact that a variable may change type, attributes may be added after startup, and the like, and all this happens after a traditional optimizer would have finished and said sayonara. "But dynamic capability is good because Python kicks ass." So Starkiller follows the 80/20 rule by optimizing what it can and leaving the rest. In particular, it refuses to optimize functions that contain eval, exec or dynamic module loading. But that's okay because most users don't use them.
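To make those "runtime choice points" concrete, here is a small, hypothetical illustration (the names are mine, not Salib's) of the two dynamic behaviors mentioned above--rebinding a variable to a new type, and adding an attribute after the class is defined:

```python
# Runtime choice points that defeat ahead-of-time optimization:

x = 3            # x starts life as an int...
x = "three"      # ...and becomes a str; no single static type to pin down

class Config:
    pass         # no attributes declared here

cfg = Config()
cfg.debug = True # an attribute appears long after "compile time"

print(type(x).__name__)   # str
print(cfg.debug)          # True
```

A traditional optimizer wants to know, once and for all, what `x` is and what slots `Config` has; in Python both answers can change while the program runs.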
Once those cases are eliminated, Starkiller examines the assignment statements to determine the types:
x = 3; y = x; z = y; z = 4.3  # x is int, y is int, z is int or float
The types can be traced similarly through function arguments and return values. What about polymorphic functions? Starkiller handles them the same way C++ does: by generating distinct same-name functions for all argument combinations (aka overloading). Mike offers a few benchmarks to demonstrate the speed of this approach but also cautions, "All benchmarks are lies."
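Here is a hedged sketch of what "distinct same-name functions" means in practice. The function and call sites below are hypothetical; the point is that whole-program analysis sees each argument-type combination and can, conceptually, emit one specialized C++ overload per combination:

```python
# A polymorphic Python function that Starkiller would specialize.
def double(x):
    return x + x

# The analysis sees three call-site type combinations, so (conceptually)
# three C++ overloads get generated:
#   int         double(int x)         { return x + x; }
#   double      double(double x)      { return x + x; }
#   std::string double(std::string x) { return x + x; }
print(double(21))     # 42
print(double(1.5))    # 3.0
print(double("ab"))   # abab
```

Each specialized version can then be optimized with the argument types fixed, which is exactly what C++ overloading gives the compiler.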
If you want to play with Starkiller, you're out of luck because there's no public download yet. There actually were several talks this conference presenting software that's not yet available, either because it's not robust enough or it's waiting for legal paperwork. I didn't see that in previous conferences. A few attendees commented, "Well, it's not as useful as a talk on something that's available, but on the other hand it's good to learn about cutting-edge research as soon as possible." The conference reviewers seemed to have done their job of approving pre-alpha talks only if they covered an area that was central to Python and long on Python's wishlist. Mike had his own reason for not revealing the code. "If you kill me now, you'll never get it."
Mike offered an obligatory acknowledgment. "Who owns Starkiller? MIT! Who paid for Starkiller's development? You did! Pat yourselves on the back! Thank you, taxpayers!!!" He then begged the audience not to tell DARPA that Starkiller is a Python-to-C++ converter rather than the sun-destroying weapon they think he's building. The presentation slides have a few more zingers too; they're available under the Session Papers link on the PyConDC2004 Aftermath link in Resources.
Finally, Mike ended with an indictment of the sun. "Destroy the sun! We hatessss it! It burns! The pale yellow face mocks us, keeps us from hearing the machine. It causes global warming. It causes sunburns. DARPA says the sun is bad, it warms our enemies. It weakens our dependence on foreign oil. There's only one logical conclusion: we must destroy the sun."
Guido had one question during Q&A. He asked, "Is it difficult to have so much attitude all the time?"
(There's a discrepancy about when Guido asked that question. My notes say it was during this talk. The SubEthaEdit notes say it was during Anthony Baxter's VoIP talk, which was almost as feisty. But of course my notes are right.)
Ah yes, that VoIP talk. Voice over IP may be Python's next killer application. Internet telephony currently is growing slowly, but it has the potential to become really big when it reaches critical mass. As VoIP becomes ubiquitous, people will need a new generation of applications and phones. Shtoom, which runs on Twisted, is one small step in that direction. It needs a better UI, but it proves that Python is up to the task, even though VoIP requires generating sound packets exactly 20 milliseconds apart.
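The 20-millisecond requirement is worth a moment. A minimal sketch of one way to hold that cadence in Python--this is my illustration, not Shtoom's actual code--is to sleep against absolute deadlines rather than a fixed delay, so timing error doesn't accumulate from frame to frame:

```python
import time

FRAME_SECS = 0.020  # VoIP audio frames go out every 20 ms

def send_frames(send_packet, count):
    """Pace calls to send_packet() at FRAME_SECS intervals.

    Sleeping toward an absolute deadline (rather than sleeping a fixed
    20 ms after each send) keeps drift from accumulating across frames.
    """
    deadline = time.monotonic()
    for _ in range(count):
        send_packet()
        deadline += FRAME_SECS
        time.sleep(max(0.0, deadline - time.monotonic()))

# demo: pace three no-op packets (takes roughly 60 ms)
send_frames(lambda: None, 3)
```

In an event-driven framework like Twisted the same idea appears as a repeating timed call rather than a sleep loop, but the drift-correction principle is the same.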
By the way, the title of the VoIP paper is slightly modified from the original. It's currently "'Scripting Language' My Arse: Using Python for Voice over IP." Originally it was "'Scripting Language' My Shiny Metal Arse: Using Python for Voice over IP".
Zope 3 hasn't changed much since last year; it's just further developed. It still aims to be friendlier to application developers than Zope 2: more Pythonic, more modularized, more explicit in its API use (for example, less implicit acquisition), with better documentation earlier and better integration of Web-based and filesystem-based application development methods. This will make applications more portable between Zope and other environments and make individual Zope features more accessible to non-Zope applications.
The PyPy talk showed that a Python virtual machine can be written in 16,000 lines of Python. The prompt looks like this: >>>>, with one extra > for each recursive level of PyPy. PyPy is markedly slower than CPython, and each nested level multiplies the slowdown: the innermost interpreter interprets the code, the next interpreter interprets the interpreter interpreting the code, and so on. But the PyPy developers have faith that they eventually can make it faster than CPython.
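A crude way to feel the cost of stacked interpretation--this is a toy using exec, not how PyPy actually works--is to have one interpreted program interpret another:

```python
# Level 0: CPython runs this file.
# Level 1: CPython interprets `outer`, which itself...
# Level 2: ...interprets `inner`. Each level adds interpretive overhead.
inner = "result = 6 * 7"
outer = "ns = {}\nexec(inner_src, ns)\nresult = ns['result']"

scope = {"inner_src": inner}
exec(outer, scope)
print(scope["result"])   # 42
```

Every extra level pays the full dispatch cost of the level below it, which is why a PyPy prompt with several >s is so much slower than plain >>>.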
Pyrex stands alongside ctypes as an indispensable part of the C extension writer's toolkit. Pyrex compiles ordinary Python to C, but for better optimization you can use the cdef statement to declare variables as certain C types. Expressions using those variables are compiled directly to C, bypassing Python's slow object infrastructure.
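As a sketch of what a cdef declaration buys you (this is an illustrative module in the Pyrex dialect of the era, not compiled or benchmarked here), compare the typed loop below with the same loop over Python objects:

```pyrex
# integrate.pyx -- hypothetical Pyrex source; compiled to C by Pyrex
def integrate(double a, double b, int n):
    cdef double dx, s
    cdef int i
    dx = (b - a) / n
    s = 0.0
    # i, s and dx are cdef-declared, so Pyrex compiles this loop to a
    # plain C for-loop with no Python object allocation or dispatch.
    for i from 0 <= i < n:
        s = s + (a + i * dx) * (a + i * dx) * dx
    return s
```

Without the cdef lines, every addition and multiplication would go through Python's generic object protocol; with them, the expressions bypass it entirely.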
Several other talks are worth mentioning but I don't have the space. Browse the Session Papers link on the PyConDC2004 Aftermath site and see which topics might interest you.
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can find a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality allows UNIX system administrators to always seem to have the right tool for the job.
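The find-plus-grep example above fits in one line. Here it is as a self-contained demonstration (the sandbox directory, file names and search string "timeout" are all illustrative):

```shell
# Build a tiny sandbox to search over
dir=$(mktemp -d)
mkdir -p "$dir/logs"
printf 'connection timeout\n' > "$dir/logs/app.log"
printf 'all good\n'           > "$dir/logs/quiet.log"
printf 'timeout\n'            > "$dir/notes.txt"    # wrong extension

# The chained tools: find selects the .log files, grep -l filters
# them by content and prints only the names of files that match
matches=$(find "$dir" -type f -name '*.log' -exec grep -l 'timeout' {} +)
echo "$matches"
```

Swap `"$dir"` for /home and the pattern for whatever you're hunting, and you have the tool the paragraph describes, built from two existing ones.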
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn’t consider the total cost of ownership, and it doesn’t consider the advantage of real processing power, high-availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just hard, cold, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide