AlphaMail Is Scalable and Accessible Web Mail
# telnet imap.example.com 143 # for no SSL
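If your server accepts connections only on the SSL port, telnet won't do; the standard openssl command-line client opens an equivalent encrypted session:

# openssl s_client -connect imap.example.com:993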
These commands connect you to the IMAP server and allow you to enter protocol commands. Type the following (the numbers are part of the commands):
1 login username password
2 list "" "%"
3 logout
The username and password, of course, should be real user credentials for a typical IMAP account. The responses to the second command should look like this:
* LIST (\HasNoChildren) "." "INBOX.Spam"
* LIST (\HasNoChildren) "." "INBOX.Trash"
which indicates that "." is the separator and makes it pretty obvious that INBOX is a common prefix (in this case, all entries start with INBOX.).
The prefix parameter is primarily an interface optimization: the interface removes the prefix when displaying most folder names in order to make things more compact. You can hand-edit any of the parameters in the resulting alphamail_config file, which is a commented text file. The entry for defining a pair of typical IMAP servers that serve two mail exchanges looks like this:
imap_servers: example.com=imap.example.com:993[INBOX.], example.net=imap.example.net:143[/]
The above setting indicates that users should be able to select their mail domain on login (example.com or example.net), and associates these with a corresponding IMAP server, port, prefix and IMAP path separator.
The separator in the brackets is always required, but the prefix is not. The notation [/] means no prefix, with slash as the separator. The IMAP connections will be insecure if you use anything but the SSL alternate port 993.
Attachment viewers and other external programs run in a sandbox that uses a chroot jail, user ID protections and other filesystem restrictions to ensure that a bug in a viewer cannot compromise anything more than the file the user is trying to view, which by definition would be the file containing the exploit. This is where you will use the extra user you created earlier.
The sandbox utility is installed in /usr/local/libexec/sandbox, by default, and is a setuid program. It is important that the permissions of this executable allow execution by the Web server, but it is a security hazard to allow any other user access to the utility. I recommend that AlphaMail be run on a standalone system that serves only Web mail and nothing else, with no shell access for users.
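One way to arrange this is to make the binary owned by root, executable by the Web server's group and inaccessible to everyone else. Assuming your httpd runs in the apache group (substitute whatever group your Web server actually uses):

# chown root:apache /usr/local/libexec/sandbox
# chmod 4750 /usr/local/libexec/sandbox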
The configuration also asks you to configure the large file-sharing system. This option allows users to upload files to the AlphaMail system, so that others can download them later. Large file sharing is useful when someone needs to send a file that is larger than is allowed or recommended as part of an e-mail message. File sharing has several safeguards to prevent abuse, including terms-of-use agreements, size limits, password protection, encryption, download limits and time-based expirations. Choosing a zero size for the size limit in file sharing disables the feature.
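The file-sharing settings live in the same commented alphamail_config file. The parameter name below is only an illustrative placeholder (check the comments in your generated file for the real ones); the point is that a zero limit switches the feature off:

file_share_size_limit: 104857600  # hypothetical name: 100MB upload cap; set to 0 to disable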
The final step is to edit the Apache configuration. Make sure that mod_perl2 and libapreq2 are loaded with directives such as:
LoadModule apreq_module modules/mod_apreq2.so
LoadModule perl_module modules/mod_perl.so
And include the generated alphamail.conf Apache configuration file. For example:
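Include /etc/httpd/conf/alphamail.conf

(The path here is an assumption; point the Include at wherever the configuration step actually wrote alphamail.conf.)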
Apache and imap_webcache must be running for AlphaMail to work. Startup order does not matter. A sample Red Hat init script for the Web cache is included and will be installed in /usr/local/share/alphamail/util/init.d.
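On a Red Hat-style system, putting the Web cache under init control looks something like this (the script name imap_webcache is an assumption; use whatever name you find in the init.d directory):

# cp /usr/local/share/alphamail/util/init.d/imap_webcache /etc/init.d/
# chkconfig --add imap_webcache
# service imap_webcache start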
A garbage collection script must be run periodically from cron. AlphaMail writes numerous files as the mail system operates, most of which are decoded MIME messages and attachments. These files cannot be cleaned reliably by the Web software, as there are no guarantees about user behavior. The script is called garbage_sweeper and is well documented in the Administration Guide.
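An hourly crontab entry is a reasonable starting point; the path below assumes the sweeper is installed alongside the other utilities, and the right interval depends on your mail volume and disk space:

0 * * * * /usr/local/share/alphamail/util/garbage_sweeper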
AlphaMail is in production use at the University of Oregon. The performance and usability results have been very encouraging, and the performance figures are available at the AlphaMail home page.
However, the system is still new, and there are some latent bugs that have yet to be solved. The imap_webcache itself is a rather complicated piece of software that may have occasional problems. As a result, I recommend running an included utility called the hang_detector (in /usr/local/share/alphamail/util by default). You must edit this script before using it, and it requires a valid IMAP user in order to work.
It runs a full query against the Web cache every 15 seconds and is capable of restarting the imap_webcache (via the included init script). It is also capable of sending mail to administrators if desired.
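The script must be kept running, so start it at boot. One simple approach, assuming you run it as root after filling in the IMAP credentials, is a line at the end of /etc/rc.d/rc.local:

/usr/local/share/alphamail/util/hang_detector &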