Keeping Up with Python: the 2.2 Release
Python 2.2 made its debut at the end of 2001, and its first bug-fix release, 2.2.1, recently followed from the core developers at PythonLabs. The 2.2.x family is full of new features and capabilities, some of them significant additions and improvements to the language. Together, these updates give Python developers a real boost in flexibility.
Python is a simple yet robust language combining the ease of scripting tools with the application-building power of compiled object-oriented programming languages. With Jython, the Java-compiled edition of the Python interpreter, Java programmers are discovering a tool that raises their productivity and development speed to a new level.
You can stay up-to-speed on these changes by reading the PEPs (Python Enhancement Proposals), which exist to give any reasonable idea a hearing from the Python community. Before any update to the language is considered, the problem and proposed solution are presented, along with the rationale for, and details behind, the change. Not only can you get the exact details of a PEP at the web site (see Resources), but you can also find out its status. Once the community reaches consensus, a subset of PEPs is approved and slated for each release. For example, the changes in 2.2 (meaning the entire 2.2.x set of releases) consist primarily of five major PEPs: 234, 238, 252, 253 and 255.
For starters, 2.2 begins the process of unifying Python integers and long integers. Integer calculations can no longer raise overflow errors, because values that overflow are automatically converted to longs. Statically nested scopes, introduced in 2.1, are now standard and free Python from its restrictive two-scope model (see PEP 227). Previously, you had to put from __future__ import nested_scopes at the start of a script to enable nested scopes; that directive is no longer necessary. Unicode support has also been upgraded to allow UCS-4 (32-bit unsigned integers; see PEP 261). Minor updates to the Python Standard Library include a new email package, a new XML-RPC module, optional IPv6 support in the socket module and the new hotshot profiler.
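The integer unification is easy to see in a short sketch. Before 2.2, multiplying two large machine-word integers could raise an OverflowError; afterward, the result is silently promoted to an arbitrary-precision integer (the example below runs under a modern Python, where the unification is complete and there is only one integer type):

```python
# Values that would have overflowed a machine-word int are now
# promoted to arbitrary-precision integers automatically.
big = 2 ** 62          # near the limit of a 64-bit machine word
result = big * big     # would have raised OverflowError before 2.2
print(result == 2 ** 124)  # still an exact integer: True
```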
The most significant changes and additions to 2.2 are iterators and generators, changing the division operator and unifying types and classes.
Iterators give the programmer the ability to traverse, or “iterate through”, the elements of a data set. They are especially useful when the items of such sets are of differing types. Python has already simplified part of this programming process, as its sequence data types (lists, strings, tuples) are already heterogeneous, and iterating through them is as simple as a “for” loop, without having to create any special mechanism.
The new iteration support in Python works seamlessly with Python sequences but now also allows programmers to iterate through nonsequence types, including user-defined objects. An additional benefit is the improvement of iteration through other Python types.
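A user-defined object can participate in iteration by implementing the iterator protocol itself. Here is a minimal sketch with a hypothetical CountDown class (shown in the modern spelling, where the method is __next__ and you call next(it); in Python 2.2 the method was simply named next()):

```python
# A hypothetical user-defined object that supports iteration by
# implementing the iterator protocol itself.
class CountDown:
    def __init__(self, start):
        self.current = start

    def __iter__(self):
        return self              # the object serves as its own iterator

    def __next__(self):
        if self.current <= 0:
            raise StopIteration  # signals the end of the set
        self.current -= 1
        return self.current + 1

for n in CountDown(3):
    print(n)                     # prints 3, 2, 1
```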
Now, that all sounds good, but why add iterators to Python? In particular, PEP 234 states that the enhancement will:
Provide an extensible iterator interface.
Bring performance enhancements to list iteration.
Allow for big performance improvements in dictionary iteration.
Allow for the creation of a true iteration interface as opposed to overriding methods originally meant for random element access.
Be backward-compatible with all existing user-defined classes and extension objects that emulate sequences and mappings.
Result in more concise and readable code that iterates over nonsequence collections (mappings and files for instance).
Iterators can be created directly by using the new iter() built-in function or implicitly for objects that come with their own iteration interface. For example, lists have a built-in iteration interface, so “for eachItem in myList” will not change at all.
Calling iter(obj) returns an iterator for that type of object. An iterator has a single method, next(), that returns the next item in the set. A new exception, StopIteration, signals the end of the set.
Iterators do have restrictions, however. You can't move backward, go back to the beginning or copy an iterator. If you want to iterate over the same objects again (or simultaneously), you have to create another iterator object.
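The one-shot nature of iterators is easy to demonstrate: to walk the same collection twice, or in parallel, you create a fresh iterator for each traversal (shown here with the modern next(it) built-in spelling):

```python
# Two independent iterators over the same list: each keeps its
# own position, so advancing one does not affect the other.
data = ['a', 'b', 'c']
first = iter(data)
second = iter(data)      # independent of `first`

print(next(first))   # 'a'
print(next(first))   # 'b'
print(next(second))  # 'a' -- second starts from the beginning
```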
As mentioned before, iterating through Python sequence types is as expected:
    >>> myTuple = (123, 'xyz', 45.67)
    >>> i = iter(myTuple)
    >>> i.next()
    123
    >>> i.next()
    'xyz'
    >>> i.next()
    45.67
    >>> i.next()
    Traceback (most recent call last):
      File "<stdin>", line 1, in ?
    StopIteration
If this had been an actual program, we would have enclosed the code inside a try-except block. Sequences now automatically produce their own iterators, so a “for” loop:
    for i in seq:
        do_something_to(i)

under the covers now really behaves like this:

    fetch = iter(seq)
    while 1:
        try:
            i = fetch.next()
        except StopIteration:
            break
        do_something_to(i)

However, your code doesn't need to change, because the “for” loop itself calls the iterator's next() method.
There is also a second form of the iter() built-in function, iter(callable, sentinel), which likewise returns an iterator. The difference is that each call to the iterator's next() method invokes callable() to obtain successive values, and StopIteration is raised when the value sentinel is returned.
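A classic use of the two-argument form is reading a file until end-of-file, since readline() returns the empty string '' at EOF and that string makes a natural sentinel. The sketch below uses a temporary file purely to keep the example self-contained (and the modern next()-style iteration under the hood):

```python
# iter(callable, sentinel): each step calls the callable and stops
# when it returns the sentinel -- here, '' at end-of-file.
import tempfile

with tempfile.TemporaryFile(mode='w+') as f:
    f.write('alpha\nbeta\n')
    f.seek(0)
    lines = [line for line in iter(f.readline, '')]

print(lines)  # ['alpha\n', 'beta\n']
```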