Setting Up Subversion for One or Multiple Projects
To achieve finer-grained access control, we want to define a separate policy for each project. We analyse the trusted-subnet case here because it is the more involved one. In this scenario, we allow password-based authentication over plain HTTP only from the trusted subnet. The policy for each project goes in a separate file under /svn/conf/policies/public or /svn/conf/policies/private. To pull these files in, add the following lines to your /svn/conf/mod_dav_svn.conf file:
Include /svn/conf/policies/public/*
Include /svn/conf/policies/private/*
Suppose we have two public projects, foo and bar. John and Bob are foo's developers, while John and Mike are bar's. We want each project's developers to have full access, over HTTP from the trusted subnet, only to the project they develop. First of all, let's fill in the users' password file:
sackville apache2 # bin/htpasswd -c /svn/conf/svnpasswd john
*****
sackville apache2 # bin/htpasswd /svn/conf/svnpasswd bob
*****
....
Then, we create the users' group file: each project is associated with a group whose name has the form (public|private)_projectname and contains the users participating in the project:
public_foo: john bob
public_bar: john mike
We save this file as /svn/conf/svngroups. The last operation consists of associating a file in the /svn/conf/policies/public directory with a project. foo's access control policy file is called /svn/conf/policies/public/foo and contains the following lines:
<Location /public/foo>
    <LimitExcept GET PROPFIND OPTIONS REPORT>
        AuthType Basic
        AuthName "Public Subversion repository for project Foo"
        AuthUserFile /svn/conf/svnpasswd
        AuthGroupFile /svn/conf/svngroups
        Require group public_foo
    </LimitExcept>
</Location>
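For completeness, bar's policy file, /svn/conf/policies/public/bar, follows the same pattern; only the project name, group and AuthName banner change (the banner text here is just a plausible example):

<Location /public/bar>
    <LimitExcept GET PROPFIND OPTIONS REPORT>
        AuthType Basic
        AuthName "Public Subversion repository for project Bar"
        AuthUserFile /svn/conf/svnpasswd
        AuthGroupFile /svn/conf/svngroups
        Require group public_bar
    </LimitExcept>
</Location>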
We could move AuthType, AuthUserFile and AuthGroupFile to the default policy file to avoid replicating configuration entries. We also have to add the Satisfy directive there, so that users coming from the trusted subnet are still required to authenticate during a read/write session. So modify your public_default_policy.conf file in this way:
<Location /public>
    Dav svn
    SVNParentPath /svn/repository/public
    <LimitExcept GET PROPFIND OPTIONS REPORT>
        Order deny,allow
        Deny from all
        Allow from 192.168.0.0/24
        Satisfy all
    </LimitExcept>
</Location>
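If you do factor the authentication directives out of the per-project files, they could be stated once in the same default Location block, along these lines (a sketch; the exact placement is a matter of taste):

<Location /public>
    AuthType Basic
    AuthUserFile /svn/conf/svnpasswd
    AuthGroupFile /svn/conf/svngroups
</Location>

Each per-project policy file then needs only its AuthName banner and Require group line inside the LimitExcept block.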
The configuration for private projects is quite similar; we simply discard any LimitExcept directive, so private_default_policy.conf becomes:
<Location /private>
    Dav svn
    SVNParentPath /svn/repository/private
    Order deny,allow
    Deny from all
    Allow from 192.168.0.0/24
    Satisfy all
</Location>
and the private project worldconquest's access control policy file is:
<Location /private/worldconquest>
    AuthType Basic
    AuthName "Private Subversion repository for project WorldConquest"
    AuthUserFile /svn/conf/svnpasswd
    AuthGroupFile /svn/conf/svngroups
    Require group private_worldconquest
</Location>
Now it's time to consider HTTPS connections, which allow users around the world to access the repository, granting password confidentiality and data integrity over insecure channels such as the Internet. Apache manages HTTPS in a separate virtual host space, which is set up using a configuration like this one:
<VirtualHost _default_:443>
    ...
</VirtualHost>
But which Apache are we talking about? External Apache1 or internal Apache2 plus Subversion? Subversion clients using HTTPS can connect to the external Apache1 Web server, of course, and try to establish secure connections to it. So we must configure HTTPS for our external Apache1 Web server. There's no need for proxy requests to the internal Apache2 Web server to be delivered over a secure connection too, but because we want to centralize access control policies in our Apache2 Web server, we must provide a mechanism for Apache2 to discriminate proxy requests coming from an external secure channel.
We use another port (assume 8081) to discriminate when an HTTP request has been delivered to the Apache1 Web server using SSL. So when an HTTP request hits Apache1 over SSL, it is proxied internally in clear to port 8081, where Apache2 is listening. As usual, remember to block incoming connections to port 8081 from external hosts or to bind Apache2 to the loopback interface (or both).
In the Apache1 configuration file, add the following line to the SSL virtual host directive:
ProxyPass /svn/ http://localhost:8081/
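Putting the pieces together, the Apache1 SSL virtual host might look something like this (the certificate paths and the ProxyPassReverse line are assumptions, not part of the original setup):

<VirtualHost _default_:443>
    SSLEngine on
    SSLCertificateFile /etc/apache/ssl/server.crt
    SSLCertificateKeyFile /etc/apache/ssl/server.key

    ProxyPass /svn/ http://localhost:8081/
    ProxyPassReverse /svn/ http://localhost:8081/
</VirtualHost>

ProxyPassReverse rewrites the Location headers in responses, so any redirect issued by the internal Apache2 still points clients at the external HTTPS URL.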
Now tell Apache2 to listen on port 8081 by adding the following entry to your httpd.conf file:

Listen localhost:8081
Now we must set up a virtual host environment for access through port 8081. The main difference with respect to HTTP connections is related to source-based access control: using HTTPS connections we drop the notion of trusted and untrusted subnet. HTTPS requests can arrive from just anywhere.
Thus, the default policy files must include a VirtualHost directive for port 8081. Here's the one for public projects, which we put in the /svn/conf/public_default_policy.conf file:
<VirtualHost _default_:8081>
    <Location /public>
        Dav svn
        SVNParentPath /svn/repository/public
        Order allow,deny
        Allow from all
        <LimitExcept GET PROPFIND OPTIONS REPORT>
            Order deny,allow
            Deny from all
            Satisfy any
        </LimitExcept>
    </Location>
    Include /svn/conf/policies/public/*
</VirtualHost>
As usual, the Satisfy clause lets us grant write access only to authenticated users. In addition, we recycled the per-project configuration files (see the Include directive) because they do not depend on the source-based access control policy, although we could specialize them for another purpose if we needed to. The default policy for private projects is similar:
<VirtualHost _default_:8081>
    <Location /private>
        Dav svn
        SVNParentPath /svn/repository/private
        Order allow,deny
        Allow from all
        Satisfy all
    </Location>
    Include /svn/conf/policies/private/*
</VirtualHost>