Last week in New York, I shared a cab with a friend who works for Google. He was the guy who, with permission from his company, gave me a scoop that had to stay embargoed until 8pm Pacific time tonight (Thursday, as I write this), while I was out having an anniversary dinner with my wife.
What the hell, scoops are overrated anyway. News is news. In this case, the news is that Google has released Picasa, its photo editing and organizing software, on Linux. That's before releasing it for the Mac (if they ever do). I believe this is a first.
Picasa began as the product of a Pasadena, California company of the same name. That company was founded in October 2001. Google bought it in May 2004.
The migration was done with Wine. (Details here.) While not every feature of the original Windows version is implemented, most are, and more are planned. Among the current features is the ability to detect and interact with an attached camera, which is a cool thing.
I recorded a long conversation earlier today with Chris DiBona, the open source program manager at Google. But it's late and I don't have time to go back over it right now. Suffice to say that Chris says Picasa on Linux is way cool. Judging from responses on Digg and other places, Chris isn't alone.
As of 11:50pm, less than four hours since the news hit, Google Blog Search finds 2,121 posts that mention Picasa and Linux. Technorati finds 1,928. Those are just benchmarks. It'll be interesting to see how those numbers grow over the next several days.
Doc Searls is Senior Editor of Linux Journal