Here is the project.
SCOPE: Set up a basic Linux server (CentOS or another approved Linux distribution) running e-mail server software such as Zimbra to act as an SMTP relay between our CRM server and our customers, dealers, prospects, and leads.
PROVIDED INFORMATION: We currently send out batches of 5,000 e-mails, at about 20KB per e-mail, several times a month. Batches could grow to as many as 100,000 e-mails, with no single e-mail larger than 500KB.
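For rough storage sizing, the worst case above (one full batch sitting in the mail queue at once) works out to just under 50GB. A quick back-of-envelope sketch in Python, where the 2x headroom for deferred mail and logs is an assumption, not a figure from this brief:

```python
# Worst-case spool sizing from the figures above.
batch_size = 100_000   # e-mails per batch (upper bound)
msg_kb = 500           # maximum size per e-mail, in KB

spool_gb = batch_size * msg_kb / 1024 / 1024
print(f"Full batch in queue: {spool_gb:.1f} GB")  # ~47.7 GB

# Assumption: 2x headroom for deferred/retried mail, bounces, and logs.
print(f"Suggested spool allocation: {spool_gb * 2:.0f} GB")
```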
HARDWARE: Need a list of requirements for the following:
- # of Processors and speed of each
- Amount of memory needed
- Amount of storage space needed
PROPOSED SOLUTION: Set up a virtual server in our current VMware environment that will run the Linux OS and the e-mail server software. Expand International of America, Inc. would provide the host machine and grant access to set up the virtual server as needed.
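For context, once the relay is up, the CRM only needs to point its outbound SMTP at the new virtual server. A minimal connectivity check from the CRM side might look like the sketch below; the hostname and addresses are placeholders, not values from this project, and it assumes the relay accepts mail from the CRM's IP on port 25 without authentication (a common internal setup):

```python
# Minimal relay smoke test using Python's standard library.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "crm@example.com"            # placeholder sender
msg["To"] = "test-recipient@example.net"   # placeholder recipient
msg["Subject"] = "Relay connectivity check"
msg.set_content("If this arrives, the SMTP relay is accepting and forwarding mail.")

with smtplib.SMTP("relay.example.com", 25) as smtp:  # placeholder hostname
    smtp.send_message(msg)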
REQUIREMENTS: Written proposal with the following included:
- Hardware requirements
- Software to be used
- Proposed number of hours required to perform the project
- Hourly rate or total price for the project
- Training for basic troubleshooting, location of log files, backup, and maintenance (see the sketch after this list)
- Other as needed.
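On the troubleshooting/training point above, here is a hedged sketch of the kind of check we would want covered, assuming a Postfix-style MTA on CentOS, where mail activity is logged to /var/log/maillog and `mailq` prints the outbound queue:

```python
# Rough queue-depth check for basic troubleshooting (Postfix assumed).
import re
import subprocess

def queued_messages() -> int:
    """Count messages waiting in the outbound queue via `mailq`."""
    out = subprocess.run(["mailq"], capture_output=True, text=True).stdout
    if "Mail queue is empty" in out:
        return 0
    # Postfix's summary line looks like: "-- 120 Kbytes in 9 Requests."
    m = re.search(r"in (\d+) Requests?", out)
    return int(m.group(1)) if m else 0

backlog = queued_messages()
if backlog > 10_000:  # threshold is an assumption; tune to the batch size
    print(f"{backlog} messages queued - check /var/log/maillog for deferrals")
```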
This should cover it.
Let me know.