Import This: the Tenth International Python Conference
The Zope keynote, "Open Source Content Management", was delivered by Tony Byrne of CMS Watch, an industry portal for content-management issues. What is a content management system (CMS)? It's a set of business rules and editorial processes managed by people; specifically, it's not a category of software. Anybody with a web site has a CMS, even if they do it all by hand. Even the increasingly popular practice of blogging is arguably a kind of personal content management. Tony calls the software "CM tools"; however, some others call them CMSs, including another author quoted below.
So, why use CM tools in your CMS?
- To devolve control and avoid the webmaster bottleneck (where nothing happens unless the webmaster does it).
- To allow people to specialize in what they do best (creating and maintaining content), letting the machine do what it does best (the mundane tasks).
- To divide content into flexible, reusable chunks.
- To easily provide alternate presentation formats, for instance for disabled readers.
Tony exposed a few industry buzzwords:
- Scalable platform vs out-of-the-box
These are two of Tony's favorites. In fact, they're mutually exclusive. Easy-to-install, out-of-the-box products probably won't work for you unless your situation is exactly like the one the authors envisioned. Conversely, scalable products are difficult to install.
- XML compliant
This is another gem. How can a product be XML compliant when XML itself is changing? Does merely being able to dump a data structure into an XML text file count as XML compliance? Fine, but anybody and their dog can do that.
- Intuitive interface
Intuitive to whom? To the program's authors, of course. Were end-users involved in designing the "intuitive" interface? And how much effort would it take to go from the product as shipped to the demo the sales staff initially showed the customer?
- Dynamic content management
This is not always necessary. More and more sites are discovering the value of pregenerating "static" pages for data that doesn't change extremely often. Not only does it cut down on server resources, but it's search-engine friendly. The only time dynamic pages are truly necessary is when the page is customized according to unpredictable user input or changes very rapidly (say, the latest stock quotes). A hybrid approach is also possible: pregenerate the portions of a page that don't change often, and leave a box for the content that must be calculated on the fly. But often you'll find that trivial personalizations ("Good morning, Sara. Your last login was 1 day 12 minutes 3 seconds ago.") are more hassle to maintain than they're worth.
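The hybrid approach can be sketched with nothing more than standard tools. In this minimal illustration, the file names, the placeholder token and the stock quote are all invented for the example; a real site would substitute its own template and a live data source:

```shell
# Pregenerate the static shell of the page once (e.g. nightly),
# leaving a placeholder where the dynamic content goes:
cat > page.tmpl <<'EOF'
<html><body>
<h1>Quotes</h1>
<!--DYNAMIC-->
</body></html>
EOF

# At request time, fill only the dynamic box.
# The quote here is a stand-in for a live stock lookup.
quote="ACME: 42.17"
sed "s/<!--DYNAMIC-->/$quote/" page.tmpl > page.html
```

Everything outside the placeholder is served as pregenerated, search-engine-friendly static content; only the one box is computed per request.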
CMS Watch has an article on the six questions you should ask your CMS software vendor regarding security. If they say, "Mega Big Bank uses our CMS and they wouldn't if they weren't sure of the security", the author, Colin Cornelius, responds, "I could tell you a thing or two about how financial institutions select a CMS, and security doesn't always enter into it."
There are three phases of web content management: production (what happens before somebody clicks a link to that page), publishing (what happens after they click) and distribution (how is the content reformatted and sent to alternate output devices). What's the best way to design a workflow system that adequately addresses all three phases and by which you can evaluate potential tools? Use a plain old word processor or spreadsheet.
CMS is really an immature market. There are 220 vendors of CMS software, of varying qualities. Many people use two systems, one for production and one for publishing.
Many CMS companies have gotten out of the search-engine business, and good for them. Designing a good search engine is difficult. Paying $10,000 to a dedicated search-engine company that knows their stuff is well worth it.
Syndication is one thing Tony recommends. That means the sharing of article metadata with other sites, such as LJ's news.rss file or the "recent Slashdot items" links you see on some sites. There's an article about syndication on the CMS Watch site.
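Since a feed like news.rss is just structured text, the basic idea of syndication can be sketched with ordinary tools. The feed contents below are invented for the example, and a production consumer would use a proper XML parser rather than grep and sed:

```shell
# A minimal RSS fragment standing in for a feed like LJ's news.rss:
cat > news.rss <<'EOF'
<rss version="2.0"><channel>
<item><title>Zope keynote recap</title></item>
<item><title>CMS security checklist</title></item>
</channel></rss>
EOF

# Pull out the item titles (grep -o is a GNU extension;
# this only works because each title sits on one line):
grep -o '<title>[^<]*</title>' news.rss | sed 's/<[^>]*>//g'

rm -f news.rss
```

A site consuming the feed would render those titles as its "recent items" links, pointing back to the originating site.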
Here's Tony's analysis of open-source CM technology, including Zope and all others: good cost, requires substantial support, the support is great but the documentation sucks.
Tony closed with a warning to the Zope community, a list of the top things people say when he mentions Zope:
- "Why does it have such a funny name?"
- "I looked at Zope, but I still don't understand what it is."
- "It seems like a kind of religion."
- "I'd consider it for my intranet, but it won't necessarily work with my Java or Oracle production server."
- "We're a Java/COM/Perl shop."
- "Is the Zope corporation for me or against me? I'm an integration consultant. Does the Zope company want to make me more productive or steal my business?"
Tony thinks it's usually better to go with an off-the-shelf content management tool than to roll your own. He predicts that Java will be used more and more for XML work, and that production and publishing will continue to be handled separately.
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can locate a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as one that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality allows UNIX system administrators to always seem to have the right tool for the job.
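The find-plus-grep combination just described fits in a single pipeline. The directory and search string below are invented so the example is self-contained; in practice you would point find at /home, as in the text:

```shell
# Build a throwaway directory with two log files so the
# example runs anywhere (stand-in for /home):
dir=$(mktemp -d)
echo "ERROR: connection refused" > "$dir/app.log"
echo "all quiet" > "$dir/other.log"

# Find every .log file under $dir and search each one for
# the entry, printing matching file names and lines:
find "$dir" -name '*.log' -type f -exec grep -H 'connection refused' {} +

rm -r "$dir"
```

The `-exec ... {} +` form hands batches of file names to grep, which is both faster and safer with odd file names than piping find's output through xargs unquoted.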
Cron traditionally has been considered another such tool, this one for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
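Both cron's appeal and its limits show up in its configuration format. A typical crontab entry (the script path here is purely illustrative) is just a time specification and a command:

```
# m  h  dom mon dow  command
 30  2   *   *   *   /usr/local/bin/rotate-logs.sh
```

This runs the script at 02:30 every day, but expresses nothing about job dependencies, retries, load, or coordination across hosts, which is roughly where the case for upgrading beyond cron begins.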
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers.
- SUSE LLC's SUSE Manager
- Murat Yener and Onur Dundar's Expert Android Studio (Wrox)
- My +1 Sword of Productivity
- Managing Linux Using Puppet
- Non-Linux FOSS: Caffeine!
- Doing for User Space What We Did for Kernel Space
- SuperTuxKart 0.9.2 Released
- Parsing an RSS News Feed with a Bash Script
- Google's SwiftShader Released
- Rogue Wave Software's Zend Server
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn't consider the total cost of ownership, nor the advantages of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just hard, cold, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future.