When does communication become spam?
An issue near and dear to my heart, both personally and professionally, is that of spam. Not the lunch meat, which, when prepared correctly, I happen to enjoy, but that bane of email, the unsolicited commercial email (UCE). At what point does legitimate communication cross the line to become junk?
I am asking this question for a couple of reasons. The first is an article in this morning's Washington Post about Facebook users having their accounts deactivated for abuse because they were using the system for, theoretically, just what its designers intended: staying in contact with friends. The second reason is more critical to me, and that is sending out updates about hazardous weather and preparations to members of my volunteer cadre. As TS Hanna approaches, the traffic will increase, and many of the systems that some of my people are on consider this uptick in mail to be spam, regardless of whether or not the end user actually chose to receive it.
Now, on the other side of the coin are all those administrators (of which I am one) who bemoan the resources consumed in handling all this stuff. These are not just server resources, but bandwidth, router CPU cycles, firewalls and so on, most of which cost money to operate. And I am the first person to tell you that real UCE is more than 50% of my inbox on a given day. That is the stuff we all know and love: our friends in Nigeria asking us to help them out as good citizens of the world, the congratulatory emails from Ireland telling us we are set for life, or the reminder messages from our local pharmacy about those medicines we have been looking for to increase our performance (and get more done in a day, one presumes). This stuff really and truly deserves a one-way trip to the bit bucket.
But what about the stuff that is grey? Call it mail from lists, or promotional traffic because you signed up to download some code and now they want to sell you their newest toy. Or, in my case, communications updating you on the status of deployment, current predictions, and other trivialities of storm preparations. Or, in the case of Ms. Coe, using the system the way it was theoretically intended, which is keeping friends up to date on what she is doing.
This, of course, is the problem. As listservs (including social networking sites) became more user friendly and anyone could sign up to any system, larger and larger volumes of email and other traffic began to flow. As more and more legitimate and not-so-legitimate users joined the Internet, many felt (and still feel) that it is their ISP's responsibility to monitor and manage the flow. I can understand, to a point, why this is the case, but at some point ISPs also have a responsibility to their end users to pass the traffic without making arbitrary decisions about what is and is not legitimate.
At the moment, for example, there are three tropical storms/hurricanes spinning in the Atlantic, down from four on Tuesday. On Tuesday I was receiving one email per storm per two-hour period (give or take), plus unscheduled updates. Not a lot of traffic, really, for a single individual. But I was also receiving dozens of follow-up emails from various state agencies, emergency alerting systems and other distribution channels that mirrored the information coming from the National Hurricane Center. Many systems would have shut my feeds down long before even half of those messages had come in if I had not specifically turned OFF my ISP's filters on the front end (I know this because when I first signed up, it was during hurricane season and I was missing updates; I eventually found them in an on-line spam folder). The point here is that these are emails I specifically signed up for, I expect to receive, and I am annoyed when I don't get them. And frankly, these are trivial when you think about it. I could tell you stories about people on help desks who don't get responses back from customers because their mail servers flag the replies as spam, so they never make it to the ticketing systems, and these are internal systems on internal networks.
There has to be a happy balance. Clearly, in the case of Facebook, human oversight is required; hopefully the case will be reviewed and adjudicated, and the user can go back to talking to her friends. At the very least from a PR perspective, that is how it should be resolved. Email is a completely different issue. Many messages are dumped automagically by programs using algorithms and other guessing engines that try to determine, based on topic, frequency, keywords or some combination, whether or not a message is legitimate, which only results in user frustration when key messages are dumped and junk gets through.
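To make the guesswork concrete, here is a toy keyword-scoring filter in shell. It is purely illustrative (the keyword list, threshold and message are all made up, and no real filter is this simple), but the basic gamble is the same: count suspicious signals and junk anything past a threshold.

```shell
# Hypothetical keyword-scoring filter: count how many "spammy"
# words appear in a message and junk it past a threshold.
printf 'URGENT: claim your FREE prize money now\n' > message.txt

score=0
for word in FREE prize winner URGENT; do
    # -q: quiet (exit status only), -i: case-insensitive
    if grep -qi "$word" message.txt; then
        score=$((score + 1))
    fi
done

# Two or more hits and the message is junked, no human involved.
if [ "$score" -ge 2 ]; then echo spam; else echo ham; fi
```

A legitimate storm-update email mentioning an "urgent" evacuation would score just as badly, which is exactly how key messages get dumped while junk tuned to dodge the keywords gets through.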
Now there are options. But most of the ones in development or loose deployment depend on either preregistering with some agency or some other arcane solution that isn't quite ready to handle the load (or cannot be easily retrofitted into existing systems).
The old chestnut "if you build it, they will come" is a sure sign of success. But when a tool is successful, we need to ensure that it is not hobbled by its very success.
Practical Task Scheduling Deployment
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can find a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality allows UNIX system administrators to seem to always have the right tool for the job.
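The find-plus-grep combination described above can be wired together in one line. A minimal sketch follows; the directory tree, file names and search string ("ERROR") are invented here for illustration, with a small sample tree standing in for /home so the example is self-contained:

```shell
# Build a tiny sample tree standing in for /home.
mkdir -p demo_home/alice demo_home/bob
printf 'boot ok\nERROR: disk full\n' > demo_home/alice/app.log
printf 'all quiet today\n'           > demo_home/bob/app.log
printf 'ERROR: wrong extension\n'    > demo_home/bob/notes.txt

# Combine the tools: find locates every .log file, and grep -l
# prints the names of those containing the entry we care about.
find demo_home -name '*.log' -type f -exec grep -l 'ERROR' {} +
```

Only demo_home/alice/app.log is reported: notes.txt matches the pattern but not the file test, and bob's log matches the file test but not the pattern. Each tool does its one job, and the composition does the rest.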
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
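For reference, cron's whole interface is a five-field time specification plus a command. A couple of typical crontab lines (the script paths are hypothetical) look like this:

```shell
# crontab format: minute hour day-of-month month day-of-week command
# Run a nightly report at 2:30 AM every day:
30 2 * * * /usr/local/bin/nightly-report.sh

# Run a cleanup job at 6:00 PM on weekdays (Mon-Fri) only:
0 18 * * 1-5 /usr/local/bin/cleanup.sh
```

Everything beyond this (dependencies between jobs, retries, centralized monitoring, cross-host scheduling) is where cron starts to run out of road, which is exactly the upgrade question the webinar takes up.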
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. View Now!
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn't consider the total cost of ownership, and it doesn't consider the advantages of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here, just cold, hard, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide!