The KiBS CRM is a Web-enabled, SaaS-based CRM module offering “integrated sales, marketing, customer service and support” together in one package. It is the first application in the Kyliptix Integrated Business Suite (KiBS), which targets small- and mid-sized businesses. Kyliptix claims that KiBS “is capable of integrating with existing front- and back-office applications”, meaning that customers are “no longer forced to engage a system integrator to create problematic patch code to ensure interoperability and communication between the multiple software applications”. By working with existing data rather than replicating or porting data to other locations, says Kyliptix, “KiBS eliminates compatibility issues and errors stemming from improper synchronizations”. KiBS is built on a LAMP platform and uses an Ajax methodology. Additional modules are forthcoming, according to the company.
Getting your TV fix delivered via IP is becoming ever more common, and one way to understand that universe better is with Joseph Weber and Tom Newberry's new book, IPTV Crash Course. This work is an “accessible overview” of IPTV—that is, the convergence of the Internet and digital video technology. Its mission is to “explain the fundamentals of IPTV”, as well as “how the business models of service carriers will change” due to the utilization of new technologies. Although much of the tech stuff will be familiar to most of us, the societal and economic impacts covered here are likely to tickle suit and geek alike.
AML has graced this page numerous times with its offerings, and this time around it has a new data-capture device, the M5900, which aims to “supply big-business functionality at a small-business price”. AML's target customer is one needing “high performance for everyday, all-day data collection applications, including inventory control, factory-floor management, price verification, shipping/receiving, asset tracking” and so on. Feature-wise, one will find 32MB RAM/16MB Flash ROM (with 10MB of user-available non-volatile memory), a 200MHz ARM9 processor, a rechargeable lithium-ion battery (plus backup), a backlit LCD, a 55-key keypad and an SQLite database engine—with an embedded Linux OS running the show, of course. Other options include industrial or general-purpose configurations, as well as four different laser choices.
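The M5900's on-board SQLite engine means collected data can be stored and queried on the device itself rather than round-tripped to a server. As a rough illustration of the sort of inventory lookup such a scanner might perform—the schema, table name and item data here are hypothetical, not AML's—using Python's built-in sqlite3 module:

```python
import sqlite3

# Hypothetical inventory table; schema and sample data are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE inventory (
        sku TEXT PRIMARY KEY,
        description TEXT,
        qty INTEGER
    )"""
)
conn.executemany(
    "INSERT INTO inventory VALUES (?, ?, ?)",
    [("A100", "Widget", 42), ("B200", "Gadget", 7)],
)

def lookup(sku):
    """Return (description, qty) for a scanned SKU, or None if unknown."""
    return conn.execute(
        "SELECT description, qty FROM inventory WHERE sku = ?", (sku,)
    ).fetchone()

print(lookup("A100"))  # -> ('Widget', 42)
```

Keeping the database local is what makes the “all-day data collection” use case work offline; records can be synchronized back to a host system whenever the device is docked.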
James Gray is Products Editor for Linux Journal.