FIASCO—An Open-Source Fractal Image and Sequence Codec
FIASCO provides a powerful compression library that is intended to replace JPEG and MPEG for low bit-rate applications. FIASCO is an asymmetric compression method: you get software-based, real-time decoding at the cost of slow encoding. It is especially well suited to applications where images or videos are compressed only once but requested and decoded many times (e.g., World Wide Web applications). Finally, if FIASCO were combined with an open-source sound and speech codec (e.g., Vorbis, see Resources), a complete video compression system for low bit rates would be available for free.
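As a rough illustration of this encode-once, decode-many workflow, the sketch below uses the FIASCO library's C interface (fiasco.h). The function names (fiasco_coder, fiasco_decoder_new, fiasco_decoder_get_length, fiasco_decoder_write_frame, fiasco_decoder_delete, fiasco_get_error_message) follow the fiasco.h header as distributed with the library, but their exact signatures, the file names, the quality value of 20 and the NULL default options here are illustrative assumptions, not code from the FIASCO distribution.

    /*
     * A minimal sketch of FIASCO's encode-once, decode-many model.
     * Compile with something like: gcc -o wfademo wfademo.c -lfiasco
     */
    #include <stdio.h>
    #include <fiasco.h>

    int
    main (void)
    {
       /* NULL-terminated list of input frames in PNM format
          (file names are made up for this example). */
       char const *frames [] = { "frame0.pgm", "frame1.pgm", NULL };
       fiasco_decoder_t *decoder;
       unsigned frame, n_frames;

       /* Slow, one-time step: encode the frames into the WFA
          stream "video.wfa" at quality 20, using the coder's
          default options (NULL). */
       if (!fiasco_coder (frames, "video.wfa", 20.0, NULL))
       {
          fprintf (stderr, "coding failed: %s\n",
                   fiasco_get_error_message ());
          return 1;
       }

       /* Fast, repeatable step: open the WFA stream ... */
       decoder = fiasco_decoder_new ("video.wfa", NULL);
       if (!decoder)
       {
          fprintf (stderr, "decoding failed: %s\n",
                   fiasco_get_error_message ());
          return 1;
       }

       /* ... and write every decoded frame back out as a
          PNM image. */
       n_frames = fiasco_decoder_get_length (decoder);
       for (frame = 0; frame < n_frames; frame++)
       {
          char name [32];

          snprintf (name, sizeof (name), "decoded%u.pgm", frame);
          if (!fiasco_decoder_write_frame (decoder, name))
             break;
       }
       fiasco_decoder_delete (decoder);

       return 0;
    }

In a real deployment, only the decoding half of this program would run repeatedly (for instance, in a web browser plugin) against a file that was encoded just once, which is exactly where FIASCO's asymmetry pays off.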
Ullrich Hafner (email@example.com, ulli.linuxave.net) has been a software engineer at software design & management AG (sd&m), Germany, since 1999. He developed FIASCO for his PhD thesis, Low Bit-Rate Image and Video Coding with Weighted Finite Automata (see Resources), from 1994 to 1999.