How Can Companies That Rely on Technology Consistently Not Pay for It?
I was going to ask you for your take on what this year's marketing trend to boost sales will be. We have seen things like green computing, virtualization, the ever-popular security (pick your favorite subtopic: USB data slurping, laptop encryption, firewalls and so on), and whether Linux is for smart people. But then I was talking with a couple of friends, one technical and one an end user, and ended up shaking my head.
The topic: how can companies that rely on technology to make money continue to fail to spend the money necessary to keep that technology not only functional, but recoverable? The first story came from a friend in logistics. She is an end user, and she sent me an email along the lines of "they have been playing with the servers and just lost payroll". She is not savvy enough to explain what "lost payroll" means, but the upshot was that the company was going to have to call the person who set the system up and get him to fix it. That person, as I understand the story, is no longer employed by the company and lives several hundred miles away.
The second story comes from a friend who runs his own small business taking care of other small businesses' IT needs. He cannot find enough quality people to cover all of his customers, and he occasionally calls me when he runs into Linux issues. He was telling me about a problem at one of his customers and describing the infrastructure, and he was about halfway through the tale when I asked him why he had done it that way. The upshot was that the customer would not pay for a more robust solution, despite the fact that the design had several single points of failure and certainly considerable risk.
As someone who works in an enterprise environment and who has also worked in small associations and small businesses, I can say this is hardly a new situation. But as the economy begins to slough off jobs and spending becomes even more restrictive, companies will continue to cut essential services, further putting their IT infrastructure at risk. IT is always the ugly duckling at the show. At best, when everything is working, no one sees us and the bean counters wonder why they are paying us. At worst, it is mass chaos and the bean counters wonder why they are paying us.
So what to do? One of the best things is to collect and keep current metrics that show dollars per cycle (feel free to define a cycle). A common metric is the amount of money the company loses per minute the database is down, or production losses per minute of outage. Some of these numbers are fairly easy to obtain; others require some creative accounting and a lot of schmoozing with the HR and financial folks, who do not always like talking about things like gross salary or operating costs.

Another, more subjective area is morale. When IT systems are running correctly, people do not see them and are focused on other things. When they are not running well, people are tense. For example, when I took over at a company several years ago, the network had a terrible reputation, crashing on a regular basis, almost like clockwork. The result was a lot of tension and excessive spending on supplemental equipment at the department level just to get the job done. It took a while to beat the network into shape, but once it was humming, people were more concerned with other things and actually smiled at the IT folks.

Finally, there are several risk factors that can be documented. For example, every dollar cut from backup solutions results in a measurably longer time to restore in the event of failure. The failure need not be catastrophic. It could be as simple as the CEO's assistant deleting a key memo to the shareholders (or, more likely, the equity company). If it cannot be retrieved, the assistant's time has been wasted, and that has both a dollar value and a risk attached to it. You also can discuss security issues. The case of the missing payroll is a big red flag to me on a number of levels, not the least of which is letting a former employee back into the systems, as benign as it might be.
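The dollars-per-minute metric is simple arithmetic once you have the inputs. Here is a minimal sketch; every figure in it is an invented example, not real data, and your own numbers would come from the finance folks mentioned above:

```shell
#!/bin/sh
# Back-of-the-envelope outage cost estimate.
# All figures below are hypothetical placeholders.
REVENUE_PER_HOUR=12000      # revenue the database supports per hour
OUTAGE_MINUTES=45           # length of the outage

# Dollars at risk = hourly revenue * (minutes down / 60)
COST=$((REVENUE_PER_HOUR * OUTAGE_MINUTES / 60))
echo "Estimated revenue at risk: \$${COST}"
```

Even a rough number like this gives the bean counters something concrete to weigh against the cost of a better backup or redundancy plan.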
Companies have a love/hate relationship with their IT organizations. As IT people, we need to be more proactive in showing value, both to make sure we can afford to do the job expected of us, and to ensure we can properly describe the risks of cost cutting in a language that those holding the purse strings will understand.
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can locate a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as one that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality means UNIX system administrators always seem to have the right tool for the job.
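The find-plus-grep combination described above can be sketched in one line. The snippet below builds a small sandbox directory first so it is self-contained and safe to run; the file names and the search string "ERROR" are illustrative only (against a real system you would point find at /home or wherever your logs live):

```shell
#!/bin/sh
# Build a small sandbox so the example is self-contained.
dir=$(mktemp -d)
echo "ERROR: disk full" > "$dir/app.log"
echo "all quiet"        > "$dir/other.log"

# The combination from the text: find every .log file under a
# directory, then have grep search each one for a particular entry.
# -l prints only the names of files that contain a match.
matches=$(find "$dir" -type f -name '*.log' -exec grep -l 'ERROR' {} +)

echo "$matches"
rm -rf "$dir"
```

Using `-exec ... {} +` hands find's results to grep in batches, which avoids the word-splitting pitfalls of piping filenames through xargs unquoted.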
Cron traditionally has been considered another such tool, this one for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
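For readers who have not looked at one recently, a classic crontab entry pairs a five-field time specification with a command; the script path below is hypothetical:

```
# min hour day-of-month month day-of-week  command
30    2    *            *     1-5          /usr/local/bin/nightly-backup.sh
```

This runs the script at 2:30 a.m. every weekday. Cron's simplicity is its strength, and also exactly what the webinar asks you to reconsider: there is no dependency handling, retry logic or cross-machine coordination.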
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
- Stunnel Security for Oracle
- SourceClear Open
- SUSE LLC's SUSE Manager
- Murat Yener and Onur Dundar's Expert Android Studio (Wrox)
- My +1 Sword of Productivity
- Managing Linux Using Puppet
- Tech Tip: Really Simple HTTP Server with Python
- Non-Linux FOSS: Caffeine!
- Google's SwiftShader Released
- Doing for User Space What We Did for Kernel Space
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn't consider the total cost of ownership, and it doesn't consider the advantages of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just hard, cold, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide