Managing Projects with WebCollab
Installation is extremely simple and took me less than ten minutes. I used my distro's vanilla installations of Apache, PHP and MySQL, which worked perfectly. Linux beginners will find the most difficult part of the install to be creating a new user in MySQL that has the appropriate access to the database. Aside from that, this is definitely something that a person who is just beginning to experiment with Linux can install without complication.
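For readers new to MySQL, that step can be done ahead of time from the mysql command-line client. The database name, user name and password below are placeholders of my own choosing, not values WebCollab requires; substitute whatever you plan to enter during setup:

```shell
# Create a database and a dedicated MySQL user for WebCollab.
# "webcollab", "webuser" and "secret" are example placeholders.
mysql -u root -p <<'SQL'
CREATE DATABASE webcollab;
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER, INDEX
    ON webcollab.* TO 'webuser'@'localhost' IDENTIFIED BY 'secret';
FLUSH PRIVILEGES;
SQL
```

Granting only the privileges the application needs, rather than ALL PRIVILEGES, keeps the account from being useful for anything beyond its own database.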
There are two methods of installation, one using the command line and the other through a Web-based setup routine. I chose the latter. For the sake of an example, I use collab.example.com here.
Download and unzip the tarball into your Web directory:
# tar -zxvf WebCollab-1.62.tar.gz
Change the permissions on the main config file:
# cd WebCollab-1.62/config
# chmod 666 config.php
Point a Web browser to collab.example.com/WebCollab-1.62/setup.php. This guides you through the automated portion of the setup, which includes creating an SQL database and running a table creation script, as well as setting four configuration variables in config.php.
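Once the setup routine reports success, it is worth confirming from the command line that the table creation script actually ran. The database and user names here are the example placeholders from the database-creation step, not anything WebCollab mandates:

```shell
# List the tables the setup script created; -p prompts for the
# password of the example "webuser" account.
mysql -u webuser -p webcollab -e 'SHOW TABLES;'
```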
Restore the permissions on the config file:
# chmod 664 config.php
A user name and password are required to access any part of the software, and each account's permissions determine which projects can and cannot be viewed or edited. As with most other systems, only administrative users have the ability to add or remove accounts. A user also can be designated as a project or task owner, giving that account the ability to perform administrative tasks on that particular project or task.
Groups also play an important role in task designation. Projects can be assigned by the owner to a user or to an entire group. Subgroups or task groups can then be used to further delegate tasks within a given project.
The access control is fairly comprehensive and provides a lot of flexibility for administrators and project owners. For example, I frequently share projects with other engineers, which requires that all parties be able to edit tasks, mark jobs complete and change due dates. In these cases, I assign everyone in my group edit rights to the project. I also have projects that I allow clients to view, and in those cases, I want to restrict them to read-only access and keep them from viewing any project other than their own. Both setups are easily achieved by checking or unchecking the All users can view or Anyone in the user group can edit options.
WebCollab is ideal for projects that can be broken down into a series of tasks with brief, one- or two-sentence descriptions. Each project and task has a description field, start date, end date, priority and assigned group. When a task is created for a given project, it is treated as a subproject, with the same information fields and editable data as its parent job. This portion of the software is similar to most popular to-do lists, but it adds the flexibility to be used as a quick checklist or as a fairly in-depth breakdown of a task with running commentary.
The project and task views are where the software really shines. As mentioned before, each has its own file upload section and message board. Users can ask one another questions, make comments, upload relevant documents and much more. This collaborative aspect is what separates WebCollab from a traditional to-do list or task management application and makes it more of a project-oriented groupware. I find that the message board makes things more interactive and thus more interesting for everyone involved.