The wonderful applications of HMAC

This article describes the “Hashed Message Authentication Code”, or HMAC for short, and a few examples of its applications. In many situations, the use of an HMAC ensures a high level of security while at the same time simplifying otherwise complex solutions.

The HMAC construction was first published in 1996 by Mihir Bellare, Ran Canetti, and Hugo Krawczyk. Its structure is described thoroughly in RFC2104. In short, it defines a way of verifying both the integrity and authenticity of a message, building on a hash function in combination with a secret key.

A simplified way of defining a formula for HMAC can be written as:

HMAC = hashFunction(message + key)

The above shows the principle. The actual implementation requires some additional steps, such as padding and two passes of hashing the key, but what you see here is the general, elegantly simple idea.

The hash function can be any cryptographically secure algorithm, and while the RFC suggests MD5 and SHA-1, the construction applies equally well to other algorithms (note that HMAC is not sensitive to the known collision vulnerabilities found in MD5, but MD5 is generally not recommended anyway). The key is nothing but a sequence of random or pseudo-random bytes, preferably the same length as the output of the selected hash algorithm.
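The padding and the two passes of hashing mentioned above can be made concrete with a small, illustrative Java implementation of the RFC2104 construction, cross-checked against the standard library. This is a sketch for understanding, not production code, and the class and method names are made up for this example:

```java
import java.security.MessageDigest;
import java.util.Arrays;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class ManualHmac {

  // HMAC per RFC2104: H((K xor opad) || H((K xor ipad) || message))
  public static byte[] hmacSha1(byte[] key, byte[] message) throws Exception {
    int blockSize = 64; // SHA-1 block size in bytes
    MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
    if (key.length > blockSize) {
      key = sha1.digest(key); // keys longer than one block are hashed first
    }
    key = Arrays.copyOf(key, blockSize); // then zero-padded to the block size
    byte[] ipad = new byte[blockSize];
    byte[] opad = new byte[blockSize];
    for (int i = 0; i < blockSize; i++) {
      ipad[i] = (byte) (key[i] ^ 0x36);
      opad[i] = (byte) (key[i] ^ 0x5c);
    }
    sha1.update(ipad);
    byte[] inner = sha1.digest(message); // first pass
    sha1.update(opad);
    return sha1.digest(inner); // second pass
  }

  public static void main(String[] args) throws Exception {
    byte[] key = "PoorKey".getBytes("UTF-8");
    byte[] msg = "Message to be processed".getBytes("UTF-8");

    byte[] manual = hmacSha1(key, msg);

    // Cross-check against the standard library implementation
    Mac mac = Mac.getInstance("HmacSHA1");
    mac.init(new SecretKeySpec(key, "HmacSHA1"));
    byte[] library = mac.doFinal(msg);

    System.out.println("Matches library: " + Arrays.equals(manual, library));
  }
}
```

Running it prints whether the hand-rolled result matches the built-in HmacSHA1 implementation used later in this article.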

Usage scenarios

The HMAC can be applied in a number of scenarios, for example:

  • Sending an e-mail with a password reset link that is valid only for a certain time and can only be used once. The HMAC “magic” allows for this without any added server state.
  • Verifying an e-mail address in order to create or activate an account.
  • Authenticating form data that has been sent to the user’s web browser and then posted back.
  • Authenticating data sent by external applications – typically any scenario where you provide a service that has the notion of an “API key”. In this case you share a common secret key with the application user. The added benefit of this approach is that HMACs are computationally inexpensive and do not require much memory, making them very suitable for “Internet of Things” (IoT) devices.

Code samples

This section contains a couple of code examples to illustrate the use of HMACs in Java and C#. The examples are simple and just illustrate basic usage, but they give a general idea of the language support. The goal is that both examples should produce the same result, as interoperability may be of interest for your particular use case.

Java code example

The following Java code example shows how to produce an HMAC using the standard Java security API functions:

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.util.Formatter;
 
public class Main {
 
  public static String toHexString(byte[] bytes) {
    Formatter formatter = new Formatter();
    for (byte b : bytes) {
      formatter.format("%02x", b);
    }
    return formatter.toString();
  }
 
  public static void main(String[] args) throws Exception {
    byte[] message = "Message to be processed".getBytes("UTF8");
    byte[] keybytes = "PoorKey".getBytes("UTF8");
    SecretKeySpec key = new SecretKeySpec(keybytes,"HmacSHA1");
    Mac hmac = Mac.getInstance("HmacSHA1");
    hmac.init(key);
    byte[] bytes = hmac.doFinal(message);
    System.out.println("HMAC: " + toHexString(bytes));
  }
}

The output produced when run is:

HMAC: 14510b1f7ec15554fbadcad358dfc2230eabfdc3

C# code example

The following C# code example shows how to produce an HMAC using the standard .Net security API functions:

using System;
using System.Security.Cryptography;
using System.Text;
 
namespace HmacSample
{
  class MainClass
  {
    public static string ByteArrayToString(byte[] ba)
    {
      StringBuilder hex = new StringBuilder(ba.Length * 2);
      foreach (byte b in ba)
        hex.AppendFormat("{0:x2}", b);
      return hex.ToString();
    }
 
    public static void Main (string[] args)
    {
      string key = "PoorKey";
      string data = "Message to be processed";
      var hmac = new HMACSHA1(Encoding.UTF8.GetBytes(key));
      byte[] bytes = hmac.ComputeHash(Encoding.UTF8.GetBytes(data));
      Console.WriteLine("HMAC: " + ByteArrayToString(bytes));
    }
  }
}

The output produced when run is:

HMAC: 14510b1f7ec15554fbadcad358dfc2230eabfdc3

The standard libraries are evidently implementing the same specification, since they both produce the same result.

Example applications

Password reset e-mail

Assume the following scenario:

  • You are operating a site where users can log in with their e-mail address and a password of choice upon registration.
  • You want to allow the users to select “forgot password” and have a password reset link sent to their e-mail address.
  • The link must only be valid for one hour and may only be used once.
  • You have created a secret key that is known only to you. Let’s call this K.

When the user requests a password reset link:

  1. Construct a string consisting of the user’s e-mail address, the server’s current time, and the hash of the current user password. Let’s call this message M.
  2. Calculate the HMAC of M, using K as the secret key.
  3. Construct a URL containing the path to your password reset page and, as parameters, the user’s e-mail address, the current time and the HMAC produced in the previous step, e.g.:

https://www.example.com/forgotPassword?user=user%40example.com&time=20151205T131159Z
&hmac=3902ed847ff28930b5f141abfa8b471681253673

Note that you should make sure to URL-encode the parameters, as seen in the “user”-argument, where the “@”-sign is encoded as “%40”.

  4. Include the computed URL in an e-mail message that you send to the end user.

When the user receives the e-mail, he or she follows the URL and gets to your forgotPassword page. In order to be allowed to set a new password, the following verification is performed on the server:

  1. Find the GET-parameters for e-mail, current time and HMAC.
  2. Look up the hash of the user’s password from the password database and the server’s secret key, K.
  3. Concatenate the e-mail, current time and user’s password hash, producing M’.
  4. Calculate the HMAC of M’ using K as the key.
  5. If the calculated HMAC is equal to the HMAC supplied as a GET parameter, you can be sure that the timestamp and e-mail parameters have not been tampered with. You also know that the link has not been used before, since the current password hash is unchanged.
  6. If the timestamp is older than one hour, inform the user that the link is no longer valid. If the supplied HMAC differs from the calculated one, the link has either been tampered with or already been used (a completed password change alters the password hash), so inform the user that the link can only be used once. If none of the above apply, the link is valid and you should present the user with an input field that allows them to enter a new password.
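The generation and verification steps above can be sketched in Java as follows. The class and method names (ResetLink, buildLink, verify) and the sample key and password hash are hypothetical; a production version should additionally check the timestamp age and preferably use a constant-time comparison:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class ResetLink {

  // HMAC-SHA1 of the message, returned as lowercase hex
  static String hmacHex(String message, byte[] key) throws Exception {
    Mac mac = Mac.getInstance("HmacSHA1");
    mac.init(new SecretKeySpec(key, "HmacSHA1"));
    StringBuilder hex = new StringBuilder();
    for (byte b : mac.doFinal(message.getBytes("UTF-8")))
      hex.append(String.format("%02x", b));
    return hex.toString();
  }

  // Build the link over M = email + timestamp + passwordHash, using server key k
  static String buildLink(String email, String timestamp, String passwordHash,
                          byte[] k) throws Exception {
    String m = email + timestamp + passwordHash;
    return "https://www.example.com/forgotPassword?user="
        + java.net.URLEncoder.encode(email, "UTF-8")
        + "&time=" + timestamp + "&hmac=" + hmacHex(m, k);
  }

  // Verify by recomputing from the stored password hash and comparing
  static boolean verify(String email, String timestamp, String passwordHash,
                        String suppliedHmac, byte[] k) throws Exception {
    return hmacHex(email + timestamp + passwordHash, k).equals(suppliedHmac);
  }

  public static void main(String[] args) throws Exception {
    byte[] k = "server-secret".getBytes("UTF-8"); // K, known only to the server
    String link = buildLink("user@example.com", "20151205T131159Z", "ab12cd", k);
    System.out.println(link);
    String hmac = link.substring(link.indexOf("&hmac=") + 6);
    System.out.println("Valid: "
        + verify("user@example.com", "20151205T131159Z", "ab12cd", hmac, k));
  }
}
```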

Note: Rather than using a generic server key, it should be possible to use the user’s stored password hash as the key for the HMAC. I cannot think of any security flaws in this method (given that the password hashes are derived in a sound manner), but as always when dealing with security, the path is beset on all sides with pitfalls, so don’t take my word for it.

Compare the above with the perhaps most intuitive (but less elegant) approach of solving the problem:

  • Generate a long, random string and store it in the database together with a time stamp of when it was created.
  • Append this string to a password reset-URL that you send to the user’s e-mail.
  • Wait for the URL to be called and check that the time is not over-due. If still valid, let the user change password.
  • Introduce a timer that periodically deletes unused password reset tickets from the database.

Clearly, this approach involves both new database state and timer tasks for cleanup, which is quite unnecessary when the same result can be achieved by just using an HMAC.

Account activation e-mail

The scenario for e-mail activation is very similar to the password reset scenario. Assuming a new user has entered their e-mail address and a password for the pending account, an HMAC is created over a timestamp and the e-mail address. These are baked together in a URL that is attached to an e-mail, which is sent to the address supplied by the user.

The verification process pretty much follows the steps above, perhaps with a timeout of a day or so, effectively making sure that the user is in possession of the supplied e‑mail address.

Two-party authenticated communication

In this scenario, picture yourself providing an online service for the Internet of Things (IoT), where embedded devices are able to send all kinds of information (typically sensor information, such as temperature readings or GPS coordinates) to be stored in a cloud storage service you provide. The connected devices typically have a limited amount of memory and computational power. They are usually able to connect to the Internet over Wi-Fi, but are often not able to use SSL or to handle the memory-consuming big-integer calculations needed for public-key cryptography. You still want to be able to authenticate valid users of your service and make sure no one is able to impersonate others or alter information in transit between the device and your service endpoint.

In order to provide the service, you start by sharing a randomly generated secret key (a kind of API key) that you store locally together with other user information. The users of your service (i.e. the IoT devices) then report their data to your service by following the steps below:

  1. Extract the data to report, say sensor readings for temperature and humidity.
  2. Compute an HMAC of the sensor readings and user ID.
  3. Construct a URL of the form:

http://sensorservice.example.com?user=alice&arg0=25&arg1=65&hmac=b5f141abfa8b471681253673

where user is the unique user name, arg0 is the temperature reading and arg1 is the humidity reading. The HMAC is computed over the value of (user + arg0 + arg1) together with the shared secret key.

  4. Make the HTTP request; if everything works as expected, an HTTP status 200 OK response is returned.

As the provider of the service, you would typically do something like the following in order to validate the request:

  1. Receive the HTTP request, read the value of the request parameter “user”, and look up the shared secret key for that user (alice in this case).
  2. Loop through the arguments (including the user name, but not the HMAC) and concatenate the values.
  3. Compute the HMAC of the resulting string using the secret key. Compare the result with the caller-supplied “hmac” argument.
  4. If the HMACs are the same, the user name is valid and the contents of the arguments cannot have been altered in transit – thus store them in the database and return HTTP status 200 OK.

Note: Make sure that you agree with the users of the service on the order of concatenation of the arguments, as a different order will produce a different HMAC.
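Assuming both parties agree on the fixed order user + arg0 + arg1, the device-side and server-side computations can be sketched as below. The class name, the sample key and the sensor values are made up for illustration:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class SensorAuth {

  // HMAC-SHA1 of the message, returned as lowercase hex
  static String hmacHex(String message, byte[] key) throws Exception {
    Mac mac = Mac.getInstance("HmacSHA1");
    mac.init(new SecretKeySpec(key, "HmacSHA1"));
    StringBuilder hex = new StringBuilder();
    for (byte b : mac.doFinal(message.getBytes("UTF-8")))
      hex.append(String.format("%02x", b));
    return hex.toString();
  }

  public static void main(String[] args) throws Exception {
    byte[] sharedKey = "alice-api-key".getBytes("UTF-8"); // provisioned out of band
    String user = "alice", arg0 = "25", arg1 = "65";

    // Both sides concatenate the values in the same, agreed order
    String deviceHmac = hmacHex(user + arg0 + arg1, sharedKey); // sent in the URL
    String serverHmac = hmacHex(user + arg0 + arg1, sharedKey); // recomputed on the server

    System.out.println("Request accepted: " + deviceHmac.equals(serverHmac));
  }
}
```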

You may also want to compare the suggested approach with the traditional API-key use case, where the key is simply passed as an argument, allowing anyone that intercepts calls to impersonate your users.

Final Thoughts

Hashed message authentication codes (HMACs) are very useful when you wish to send data out to untrusted destinations and wish to be able to verify that, whenever you get the data back, the information has not been altered, as is the case with a password reset e‑mail, for example.

Many system interactions involve communication between just two parties, and if it is feasible to share a common secret, it is straightforward to use an HMAC to verify both the integrity of the information passed and the identity of the other party, as is the case with the IoT cloud service, for example.


Simple interoperable encryption in Java and .net

It is quite common to require encryption of data that is being sent between different systems. More often than not, the scenario is also a simple point-to-point communication. In these cases, a public key approach adds significant complexity to the solution and could be replaced by an equally secure alternative based on symmetric encryption. This article describes a simple approach to such a solution that also demonstrates interoperability between Java and .net environments.

Based on the scenario described above, the following criteria have been identified for selection of the encryption algorithm:

  • The algorithm should build on open standards as this generally is good for interoperability.
  • The algorithm should not be protected by patents.
  • The algorithm should have no severe known vulnerabilities.
  • The encryption should build on a symmetric cipher with a shared secret, which is simple and generally good from a performance perspective.

Additional requirements on the overall solution are:

  • Every encrypted message should utilize an initialization vector (IV) in order to avoid ever producing the same ciphertext, even if the source message could be identical (which could be quite common in a system-to-system communication scenario).
  • The encryption key should be generated from a shared password. This is practical when agreeing on the key, since it can be performed without passing the key in binary form between the communicating parties.

The above criteria can be met, for example, by using the AES (Rijndael) encryption algorithm, which is a modern, well-tested and high-performing block cipher. Another option could be Triple DES, which has been around for a long time and has withstood extensive cryptanalysis, but it is not very efficient in software implementations. This leaves AES as the choice.

As with pretty much all block ciphers, AES uses a binary key for encryption and decryption. This binary key needs to be derived from a password, as was one of the requirements for the solution. There are several methods of deriving binary key data from a password, and they are typically employing different hashing algorithms. One such method is described in RFC2898 and will work for our purposes.

The algorithm-specific parameters chosen for the solution are:

Encryption and decryption

  • Algorithm: AES.
  • Key length: 128 bits (which gives good protection and at the same time works well with US export restrictions).
  • Block size: 128 bits.
  • Operation modus: CBC (Cipher Block Chaining – which prevents repeated input data from producing repeated cipher text).
  • Padding: PKCS#7 (Padding is used for filling the block with data if the plaintext does not fit an even number of blocks. PKCS#7 and PKCS#5 are the same for practical purposes, but PKCS#5 is formally only defined for 64 bit block sizes)

Key generation

  • Key generator: RFC2898
  • Method: PBKDF2
  • Pseudorandom function (PRF): Hmac with SHA-1
  • Number of iterations: 1024

When the encrypted information is transferred between sender and receiver, it needs to be stored in a way that enforces interoperability. Binary information is always tricky and can be interpreted differently on different platforms. ASCII is a lot easier to work with, and in order to use ASCII as the carrier, Base64 encoding and decoding is used by the sender and receiver.
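As a small illustration of the transport encoding, here is a sketch using Java 8’s java.util.Base64 (the sample bytes are arbitrary):

```java
import java.util.Arrays;
import java.util.Base64;

public class Base64Demo {
  public static void main(String[] args) {
    // Arbitrary binary data, including bytes that are negative in Java
    byte[] binary = {0, -1, 127, -128, 64};

    // Encode to an ASCII-safe string for transport
    String wire = Base64.getEncoder().encodeToString(binary);

    // Decode back to the original bytes on the receiving side
    byte[] back = Base64.getDecoder().decode(wire);

    System.out.println("Encoded: " + wire);
    System.out.println("Round trip OK: " + Arrays.equals(binary, back));
  }
}
```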

Implementation

These implementation examples show Java and C# code. The C# examples have been verified using the Mono platform – an open-source implementation of the CLR and the C# language that is binary-compatible with the .net framework. Mono is cross-platform and runs on various Linux platforms as well as MacOS and Windows.

Key generation in Java

The following code example creates an AES key from a password. The password should be known only to the sender and receiver. The salt can be communicated openly and is only used to prevent keys from being looked up in precomputed rainbow tables (in real-world implementations, the salt should be configurable, not hard-coded as in this example).

String password = "sOme*ShaREd*SecreT";
byte[] salt = new byte[]{-84, -119, 25, 56, -100, 100, -120, -45, 84, 67, 96, 10, 24, 111, 112, -119, 3};
SecretKeyFactory factory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1");
KeySpec spec = new PBEKeySpec(password.toCharArray(), salt, 1024, 128);
SecretKey tmp = factory.generateSecret(spec);
SecretKeySpec secret = new SecretKeySpec(tmp.getEncoded(), "AES");
System.out.println("Key:" + Base64.getEncoder().encodeToString(secret.getEncoded()));

Key generation in C#

The following code creates an AES key from a password in the same way as the previous, Java-based, example. The resulting key is identical. Please note that the salt is the same in both examples; the values look different only because Java’s bytes are always signed, whereas C#’s byte type is unsigned.

byte[] salt = new byte[]{172, 137, 25, 56, 156, 100, 136, 211, 84, 67, 96, 10, 24, 111, 112, 137, 3};
int iterations = 1024;
var rfc2898 = new System.Security.Cryptography.Rfc2898DeriveBytes("sOme*ShaREd*SecreT", salt, iterations);
byte[] key = rfc2898.GetBytes(16);
String keyB64 = Convert.ToBase64String(key);
System.Console.WriteLine("Key: " + keyB64);

Encryption in Java

The following example encrypts a message in the form of a string stored in the variable cleartext. By not initializing the algorithm with an IV, a unique byte sequence will be generated for every invocation of the code. This IV needs to be sent with the encrypted message in order for the receiving system to decrypt the message. The variable secret contains the binary secret key generated in the previous example.

Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
cipher.init(Cipher.ENCRYPT_MODE, secret);
AlgorithmParameters params = cipher.getParameters();
byte[] iv = params.getParameterSpec(IvParameterSpec.class).getIV();
byte[] ciphertext = cipher.doFinal(cleartext.getBytes("UTF-8"));
System.out.println("IV:" + Base64.getEncoder().encodeToString(iv));
System.out.println("Cipher text:" + Base64.getEncoder().encodeToString(ciphertext));

Encryption in C#

The following example encrypts a message in the form of a string stored in the variable cleartext. The variable secret contains the binary secret key generated in the previous example.

AesManaged aesCipher = new AesManaged();
aesCipher.KeySize = 128;
aesCipher.BlockSize = 128;
aesCipher.Mode = CipherMode.CBC;
aesCipher.Padding = PaddingMode.PKCS7;
aesCipher.Key = key;
byte[] b = System.Text.Encoding.UTF8.GetBytes(cleartext);
ICryptoTransform encryptTransform = aesCipher.CreateEncryptor();
byte[] ctext = encryptTransform.TransformFinalBlock(b, 0, b.Length);
System.Console.WriteLine("IV:" + Convert.ToBase64String(aesCipher.IV));
System.Console.WriteLine("Cipher text: " + Convert.ToBase64String(ctext));

Decryption in Java

Decryption is performed by initializing the algorithm with the same IV as was used during encryption and specifying the decryption mode of operation. In the example below, the variable iv is assumed to contain the correct initialization vector:

Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
cipher.init(Cipher.DECRYPT_MODE, secret, new IvParameterSpec(iv));
String plaintext = new String(cipher.doFinal(ciphertext), "UTF-8");
System.out.println(plaintext);

Decryption in C#

Decryption is performed assuming that aesCipher is initialized with the same parameters as earlier and that the IV used during encryption is stored in the variable iv. The encrypted message is stored in the variable cipherText:

aesCipher.IV = iv;
ICryptoTransform decryptTransform = aesCipher.CreateDecryptor();
byte[] plainText = decryptTransform.TransformFinalBlock(cipherText, 0, cipherText.Length);
System.Console.WriteLine("Decrypted: " + System.Text.Encoding.UTF8.GetString(plainText));
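For reference, the Java fragments above can be combined into one self-contained round trip under the same parameters (PBKDF2 with HmacSHA1, 1024 iterations, AES-128 in CBC mode with PKCS5 padding). This is a sketch with some variable names adjusted, not the full production example:

```java
import javax.crypto.Cipher;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.PBEKeySpec;
import javax.crypto.spec.SecretKeySpec;
import java.util.Base64;

public class AesRoundTrip {
  public static void main(String[] args) throws Exception {
    byte[] salt = {-84, -119, 25, 56, -100, 100, -120, -45, 84, 67, 96, 10, 24, 111, 112, -119, 3};

    // Derive a 128-bit AES key with PBKDF2 (RFC2898), 1024 iterations
    SecretKeyFactory factory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1");
    PBEKeySpec spec = new PBEKeySpec("sOme*ShaREd*SecreT".toCharArray(), salt, 1024, 128);
    SecretKeySpec secret = new SecretKeySpec(factory.generateSecret(spec).getEncoded(), "AES");

    // Encrypt with a fresh, randomly generated IV
    Cipher enc = Cipher.getInstance("AES/CBC/PKCS5Padding");
    enc.init(Cipher.ENCRYPT_MODE, secret);
    byte[] iv = enc.getIV();
    byte[] ciphertext = enc.doFinal("*** Top secret ***".getBytes("UTF-8"));
    System.out.println("IV: " + Base64.getEncoder().encodeToString(iv));
    System.out.println("Ciphertext: " + Base64.getEncoder().encodeToString(ciphertext));

    // Decrypt using the transmitted IV
    Cipher dec = Cipher.getInstance("AES/CBC/PKCS5Padding");
    dec.init(Cipher.DECRYPT_MODE, secret, new IvParameterSpec(iv));
    System.out.println("Decrypted: " + new String(dec.doFinal(ciphertext), "UTF-8"));
  }
}
```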

JavaScript (added 2018-03-12)

The same functionality can be implemented in JavaScript as well. I am using the crypto library that is available by default in node.js. The following example will create the key, perform a decryption of the outputs of the previous examples and then perform encryption again.

var crypto = require('crypto')

// Derive a key
var salt = new Uint8Array([172, 137, 25, 56, 156, 100, 136, 211, 84, 67, 96, 10, 24, 111, 112, 137, 3]);
var password = "sOme*ShaREd*SecreT";
var iterations = 1024;
var prng = "sha1"
var keylen = 128/8;
var derivedKey = crypto.pbkdf2Sync(password, salt, iterations, keylen, prng)
console.log();
console.log("--------- Key generation ------------");
console.log("Key length: " + keylen*8 + " bits")
console.log("Salt: " + salt);
console.log("Password: " + password + " (ASCII)");
console.log("Iterations: " + iterations);
console.log("Algorithm: " + prng);
console.log("Encoding: BASE64");
console.log("-------------------------------------");
console.log("Key: " + Buffer.from(derivedKey).toString('base64'));
console.log("-------------------------------------");
console.log();

// Do some decryption
var ivb64 = 'TZuWY0W5Yn9l9F2DEiU0hg==';
var iv = Buffer.from(ivb64, 'base64');
var ciphertextb64 = 'eq1sel7Muoz/nOO8YFNLG629iM0qis+oDhvzT9pmvW8=';
var ciphertext = Buffer.from(ciphertextb64, 'base64');
var ciphername = 'aes-128-cbc';
var decipher = crypto.createDecipheriv(ciphername, derivedKey, iv);
let decrypted = decipher.update(ciphertext, undefined, 'utf8');
decrypted += decipher.final('utf8');
console.log("----------- Decryption --------------");
console.log("Method: " + ciphername);
console.log("IV: " + ivb64 + " (BASE64)");
console.log("Key: " + Buffer.from(derivedKey).toString('base64') + " (BASE64)");
console.log("Ciphertext: " + ciphertextb64 + " (BASE64)");
console.log("-------------------------------------");
console.log("Plain text: " + decrypted);
console.log("-------------------------------------");
console.log();

// Do some encryption
var cipher = crypto.createCipheriv('aes-128-cbc', derivedKey, iv);
var plaintext = '*** Top secret ***';
let encrypted = cipher.update(plaintext, 'utf8', 'base64');
encrypted += cipher.final('base64');
console.log("----------- Encryption --------------");
console.log("Method: " + ciphername);
console.log("IV: " + ivb64 + " (BASE64)");
console.log("Key: " + Buffer.from(derivedKey).toString('base64') + " (BASE64)");
console.log("Plain text: " + plaintext);
console.log("-------------------------------------");
console.log("Ciphertext: " + encrypted + " (BASE64)");
console.log("-------------------------------------");
console.log();

In order to run the JavaScript example, you should have node.js installed. Save the contents into a file, e.g. app.js, and run it with node app.js.

If you wish to run the code in a browser (which does not implement the crypto library from node), you can install the browserify module:

npm install -g browserify

Then “browserify” the node.js code:

browserify app.js -o bundle.js

Finally, add a simple HTML wrapper (e.g. index.html) for the script:

<html>
 <head>
 </head>
 <body>
 <script src="bundle.js"></script>
 Open the JS console to view output
 </body>
</html>

Open the HTML file in the browser and then view the output in the debug console. Unfortunately, the “browserified” output gets fairly large (a few hundred kilobytes), so it might not be a solution for everyone.

Final words

The above examples can easily be generalized into production code which can be used to ensure confidentiality between two systems in many types of integration scenarios. The code has been verified to work in both Java and .net environments, but the full code (imports etc.) has been omitted in the above examples for readability. A working code example can be found here (ODT document with both a Java and a C# class).


Setting up a blacklist proxy with automatic updates using Squid and SquidGuard

The versatile, open source proxy server Squid can be used together with the plug-in SquidGuard to set up a flexible blacklist proxy server. Together with a simple cron job and a shell script, the database of blacklisted sites is kept up to date. This article describes the process step-by-step of how to get up and running.

I will be setting up the solution on an Ubuntu 9 server which conveniently has the necessary software available in its repositories. The setup should be very similar for other Linux environments, but you might have to compile the software from scratch.

Install and configure Squid

First of all, install and configure Squid. I did this in a previous post when I was looking at configuring a whitelist proxy.

sudo apt-get install squid

Edit the Squid configuration file, /etc/squid/squid.conf and find the http_port tag. By default Squid listens to port 3128 for requests. If you want to change it, uncomment the line and change the port number.

Next, define who is allowed to access the proxy. Find the TAG: http_access heading and, below it, the line ‘INSERT YOUR OWN RULE(S) HERE…’. Uncomment the line:

#http_access allow localnet

You will also need to define what is meant by localnet. Find the TAG: ACL heading, and look for something like the following line:

#acl localnet src 192.168.1.0/24 192.168.2.0/24

Change the IP address and netmask above so that it matches your local network. In my case, I am on a local network with addresses ranging from 192.168.0.1 to 192.168.0.255. This means that the netmask is 255.255.255.0 – i.e. 3 bytes of “ones”, or 24 bits. So for my network it looks like this:

acl localnet src 192.168.0.0/24

Now start Squid if it’s not already running and then tell it to reload its configuration:

sudo /etc/init.d/squid start
sudo squid -k reconfigure

You should now be able to use the proxy server from your web browser. You will not be able to get anything blocked just yet, but you should get pages served if everything was set up correctly.

Install SquidGuard

Start by installing SquidGuard using apt-get:

sudo apt-get install squidguard

Next, prepare Squid for use with SquidGuard, so once more open up /etc/squid/squid.conf in your favorite text editor.

You need to tell Squid where SquidGuard is. Find the TAG: url_rewrite_program heading. There is no default setting, so add a new line:

url_rewrite_program /usr/bin/squidGuard -c /etc/squid/squidGuard.conf

Prepare the blacklist database

Before going into further configuration of SquidGuard, you will want access to a database of blacklisted sites and URLs.

Download the file getlists.odt, rename it getlists.sh and set the executable flag:

wget https://steelmon.files.wordpress.com/2010/12/getlists.odt
sudo mv getlists.odt  /usr/local/bin/getlists.sh
sudo chmod +x /usr/local/bin/getlists.sh

The file ending is odt rather than sh, since WordPress does not allow shell scripts to be uploaded.

Now, create the database by executing the script:

sudo getlists.sh

You should now see some output from the script, and after some time of processing you should be able to verify the result by listing the contents of the blacklist database directory:

ls -l /var/lib/squidguard/db/blacklists/

Configure SquidGuard

Open the SquidGuard configuration file, /etc/squid/squidGuard.conf for edit, and replace the contents with the following:

#
# CONFIG FILE FOR SQUIDGUARD
#
dbhome /var/lib/squidguard/db/blacklists
logdir /var/log/squid
dest ads {
  domainlist ads/domains
  urllist ads/urls
} 

dest aggressive {
  domainlist aggressive/domains
  urllist aggressive/urls
} 
dest drugs {
  domainlist drugs/domains
  urllist drugs/urls
} 
dest hacking {
  domainlist hacking/domains
  urllist hacking/urls
} 
dest porn {
  domainlist porn/domains
  urllist porn/urls
} 
dest redirector {
  domainlist redirector/domains
  urllist redirector/urls
} 
dest suspect {
  domainlist suspect/domains
  urllist suspect/urls
} 
dest warez {
  domainlist warez/domains
  urllist warez/urls
} 
dest audio-video {
  domainlist audio-video/domains
  urllist audio-video/urls
} 
dest gambling {
  domainlist gambling/domains
  urllist gambling/urls
} 
dest mail {
  domainlist mail/domains
} 
dest proxy {
  domainlist proxy/domains
  urllist proxy/urls
} 
dest spyware {
  domainlist spyware/domains
  urllist spyware/urls
} 
dest violence {
  domainlist violence/domains
  urllist violence/urls
} 
acl {
  default {
    pass !ads !aggressive !drugs !hacking !porn !redirector !suspect !warez !audio-video !gambling !mail !proxy !spyware !violence all
    redirect http://www.x509.se/block.html
  }
}

Among the last lines, there is a URL to a page that is served whenever content is blocked. You should change the URL to your own block page (unless you’re happy with my extremely sparse one in Swedish).

Compile the SquidGuard database. This may take a while to complete:

sudo squidGuard -C all

Start Squid, which in turn will start SquidGuard, and reload the configuration:

sudo /etc/init.d/squid start
sudo squid -k reconfigure

Troubleshooting

If you are having problems, most likely it’s related to permissions. You can get some useful information by running SquidGuard from the command line:

sudo su - proxy
echo "http://www.ubuntu.com {client ip address}/ - - GET" | squidGuard -d -c /etc/squid/squidGuard.conf

You can change the URL to whatever you’d like to test for access or denial. The IP address is the address of the computer you want to simulate surfing the net from.

If you encounter any problems with permissions, you may try the following:

sudo chown proxy:proxy /etc/squid/squidGuard.conf
sudo chown -R proxy:proxy /var/lib/squidguard/db
sudo chown -R proxy:proxy /var/log/squid/
chmod 644 /etc/squid/squidGuard.conf
chmod -R 640 /var/lib/squidguard/db
chmod -R 644 /var/log/squid/
find /var/lib/squidguard/db -type d -exec chmod 755 \{\} \; -print
chmod 755 /var/log/squid

More detailed troubleshooting information is available in the references section.

Automating the blacklist updates

When everything is up and running, you may want to automate the update procedure. This is easily accomplished by setting up a cron job. Open the cron table in interactive mode:

sudo crontab -e

Add the following line at the end of the file:

30 3 * * * /usr/local/bin/getlists.sh

This will run the blacklist download script every night at 30 minutes past 3.

References

Create a secure backup solution with chrooted SFTP

Secure Shell (SSH) is a very versatile tool that allows you to connect remotely in a secure manner. One of its most common uses is to transfer files, and in these cases it can be confusing for users to see the entire host file system. It also makes the host system more vulnerable when exposed to all users. On *NIX systems, however, it is possible to change the root of the file system so that a user, for example, sees their own home directory as a virtual root. Up until version 4.9 of OpenSSH it was quite complex to set up a chrooted environment, but with later versions of OpenSSH it has become a lot easier, provided you only need to transfer files over the SSH protocol (i.e. SFTP, no shell access). This article provides step-by-step instructions on how to set up a chrooted SFTP solution.

Assumptions

I have been using Ubuntu 8.04 to set up the solution. Any Debian-based distribution should probably work the same way, provided reasonably new versions of OpenSSH are available. It should also be quite simple to translate the instructions into any other *NIX flavour. I have access to two machines – one acting as the server and the other as the client. If you would like to try it out on a single machine first, it works equally well using virtual hosts.

1. Install OpenSSH Server

In Ubuntu, OpenSSH does not come installed by default, so the first step is to install the OpenSSH server from the repositories. On the server host, run the following command:

server$ sudo apt-get install openssh-server

2. Setup chroot jail for sftp

Next, we are going to restrict SFTP access to a certain directory that will act as a virtual root of the file system – much like what many are used to from regular FTP. Start by creating a user, bkuser, that will be used to access the server remotely:

server$ sudo adduser bkuser

You will be asked to enter some information about the user. Just follow the instructions, providing at least a password for the newly created user.

Next, we are going to modify the file /etc/ssh/sshd_config, so that users belonging to the sftpusers group will be restricted to a chrooted directory without access to the rest of the host file system:

server$ sudo nano /etc/ssh/sshd_config

Include the following lines:

# Make sure you replace any existing 'Subsystem sftp' line with this
Subsystem sftp internal-sftp

# Add these lines at the end of sshd_config
# Put users in the sftpusers group in a chroot jail
Match Group sftpusers
ChrootDirectory %h
ForceCommand internal-sftp
AllowTcpForwarding no

This means that all users you add to the sftpusers group will be chrooted to their home directory, and will only be able to run the internal SFTP process.

In order for the changes to take effect, restart the ssh daemon:

server$ sudo /etc/init.d/ssh restart
* Restarting OpenBSD Secure Shell server sshd [ OK ]

Create a directory structure for the virtual SFTP server root:

server$ sudo mkdir /var/sftp
server$ sudo mkdir /var/sftp/.ssh

Create a new group for SFTP-only users. Membership of this group determines whether a user is chrooted or not:

server$ sudo groupadd sftpusers

Next, we will configure the SFTP users as follows:

  • Assign them to the sftpusers group
  • Deny any shell access by setting their shell to /bin/false
  • Reassign their home directory to the desired chroot directory. This directory, and all directories above it, must be owned by root and not writable by any other user
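
The root-ownership rule is easy to get wrong, and sshd will refuse the connection if it is violated. As a small illustration (not part of the original setup), the following POSIX shell function lists every directory that must be root-owned for a given chroot path:

```shell
# Hypothetical helper: print every directory that OpenSSH requires to be
# root-owned (and not group/world-writable) for ChrootDirectory to work
chroot_parents() {
  p=$1
  while [ "$p" != "/" ]; do
    echo "$p"
    p=$(dirname "$p")
  done
  echo "/"
}

chroot_parents /var/sftp
```

For /var/sftp this prints /var/sftp, /var and /, each of which can then be checked with e.g. stat -c %U.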

For the user bkuser created earlier, start by assigning the user to the sftpusers group:

server$ sudo usermod -g sftpusers bkuser

Make sure the user does not have any shell access:

server$ sudo usermod -s /bin/false bkuser

Remove the old home directory and create a new directory for the user under the virtual root:

server$ sudo rm -r /home/bkuser
server$ sudo mkdir /var/sftp/bkuser

Make bkuser the owner of the bkuser subdirectory under the virtual root:

server$ sudo chown bkuser:bkuser /var/sftp/bkuser

Assign /var/sftp as the new home directory for bkuser:

server$ sudo usermod -d /var/sftp/ bkuser

3. Configure RSA key authentication

This section describes how to set up public key based authentication for the SFTP access. In order to do this, we need to head over to the client and start by creating the bkuser account there as well:

client$ sudo adduser bkuser

Follow the instructions to create the new user. Next, as the newly created user, we are going to create an RSA keypair consisting of a private and a public key:

client$ sudo su - bkuser
client$ mkdir ~/.ssh
client$ chmod 700 ~/.ssh
client$ ssh-keygen -q -f ~/.ssh/id_rsa -t rsa

When asked for a passphrase for the private key, we simply press enter in order to set a blank passphrase. This will allow us to connect later without being prompted for a password.
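
As a side note, the passphrase can also be supplied non-interactively with the -N flag. A sketch using a throwaway temp directory rather than ~/.ssh:

```shell
# Sketch: generate a keypair with an empty passphrase (-N "") without
# any interactive prompt, then print the public key fingerprint
dir=$(mktemp -d)
ssh-keygen -q -t rsa -N "" -f "$dir/id_rsa"
ssh-keygen -l -f "$dir/id_rsa.pub"
```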

Now, we need to copy the public key to the server:

client$ sftp server

Enter the password to get to the sftp> prompt. Then:

sftp> cd bkuser
sftp> put .ssh/id_rsa.pub
Uploading .ssh/id_rsa.pub to /bkuser/id_rsa.pub
.ssh/id_rsa.pub 100% 395 0.4KB/s 00:00
sftp> exit

Now, we need to set up the public key based authentication over at the server. Issue the following commands to copy the public key to the list of authorized keys that the SFTP server will accept for authentication:

server$ sudo -i
server$ cd /var/sftp
server$ cat bkuser/id_rsa.pub >> .ssh/authorized_keys
server$ rm bkuser/id_rsa.pub
server$ exit

Now, let's try it out, back on the client:

client$ sftp server

If successful, there should be no password prompt.

4. Install sshfs file system and mount sftp to a local directory

In order to simplify file transfer, this step allows us to mount the SFTP directory locally, where it acts just like an ordinary directory even though the contents are pushed over the network to the SFTP server. The tool used for this purpose is SSHFS (SSH File System), which operates in user space, so there is no need for elevated privileges to mount the remote file system. First, install SSHFS by issuing the following command:

client$ sudo apt-get install sshfs

Next, we will su into bkuser and set up the local mount point:

client$ sudo su - bkuser
client$ mkdir mnt
client$ mkdir mnt/backup

And then, mount the remote SFTP file system to the newly created mount point:

client$ sshfs server:/bkuser ~/mnt/backup

Try adding and removing files in ~/mnt/backup, and once verified, unmount by issuing the fusermount command:

client$ fusermount -u ~/mnt/backup

5. Use rsync to perform backup

There are tons of guides available on how to use rsync, so this will be kept at a bare minimum. The goal here is just to simply create a backup copy of a single directory structure:

client$ sshfs server:/bkuser ~/mnt/backup
client$ rsync -a -v /path/to/files/ ~/mnt/backup/
client$ fusermount -u ~/mnt/backup

The above commands first mount the remote SFTP server, and then use rsync to copy the contents of /path/to/files/ to the server. The -a flag stands for archive and is a convenience flag that preserves ownership, time stamps and other attributes of the transferred files. Finally, the file system is unmounted. After being unmounted, the ~/mnt/backup/ directory should be empty.

6. Combine it all into a scheduled backup

In order to work as a backup solution, it is convenient to set up the rsync operation as a scheduled task. In *NIX systems, the most straightforward way of doing this is by setting up a cron job. Cron schedules a single command, so in order to perform multiple operations, we need to create a simple shell script that in turn gets called by the scheduler. First of all, we are going to create a directory for the script:

client$ mkdir ~/bin

Create a shell script called backup-files.sh in the ~/bin directory, for example with nano:

client$ nano ~/bin/backup-files.sh

Enter the following contents:

#!/bin/sh
sshfs server:/bkuser ~/mnt/backup
rsync -a -v /path/to/files/ ~/mnt/backup/
fusermount -u ~/mnt/backup

Make sure the script is runnable by setting the eXecutable flag:

client$ chmod +x ~/bin/backup-files.sh

In order to set up the scheduled task, we need to edit the crontab for bkuser:

client$ crontab -e

This will bring up an editor, where the following should be entered:

# m h dom mon dow command
0 3 * * * /home/bkuser/bin/backup-files.sh

The first five columns in the crontab specify the schedule:

  • Minute of hour
  • Hour of day
  • Day of month
  • Month
  • Day of week

The last column specifies the command to run at the scheduled time. So in our case, the backup-files.sh script is called at 03:00 every day of the month, every month and every day of the week.
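
Other schedules follow the same five-column pattern. A couple of hypothetical variations:

```
# m h dom mon dow command
# Every night at 03:30
30 3 * * * /home/bkuser/bin/backup-files.sh
# Every Sunday at 04:00
0 4 * * 0 /home/bkuser/bin/backup-files.sh
```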

That's it! You now have a simple, working and secure backup solution.

Setting up a strict whitelist proxy server using Squid

Squid is an open source proxy server that comes pre-installed with many Linux distributions. The software can be used for a lot of neat stuff, but I came across a situation where I wanted to be able to lock down access to the whole web except for a few approved sites – kind of an information kiosk scenario.

Assumptions

I am using Ubuntu Server 9.04, which comes with Squid installed already. Apparently it is not automatically installed with Ubuntu Desktop, but it is available in the repositories and as such can be installed quite easily by:

sudo apt-get install squid

Configuration

Once you’re set with a standard installation, edit /etc/squid/squid.conf and locate the line starting with INSERT YOUR OWN... Now, add the following lines:

acl whitelist dstdomain "/etc/squid/whitelist.txt"
http_access allow whitelist

You may want to comment out the line http_access allow localhost if you want the same rules to apply to localhost as well.

You can now edit /etc/squid/whitelist.txt and add domains using the following pattern:

  • example.com will add that domain
  • .example.com will add example.com and all subdomains.
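
As an illustration, a whitelist permitting all of Wikipedia plus one exact domain could look like this (example entries, not from my actual setup):

```
# /etc/squid/whitelist.txt
.wikipedia.org
example.com
```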

It seems possible to be a lot more sophisticated with regular expressions and stuff, but this was good enough for me.

Reload the squid configuration:

sudo /etc/init.d/squid reload

Error pages are located in /usr/share/squid/errors and can be customized.

Finally, you’ll need to configure your browser to use the proxy server. If you are running Firefox, follow these steps:

  • From the Firefox menu, choose Edit > Preferences. Click “Advanced” and then “Network”.
  • Click “Settings” and select the “Manual Proxy Configuration” radio button.
  • In the “HTTP Proxy” field enter the name or IP address of the machine running your proxy.
  • In the “Port” field enter the value 3128 and check “Use this proxy server for all protocols”.

You should now be able to visit only the sites registered in the whitelist.

Setting up CAS on Tomcat with Apache2 and SSL on Ubuntu – part 4

A common scenario when providing services via the web is to expose all applications via an Apache front end. The Apache server acts as a dispatcher, or reverse proxy, and takes care of virtual hosting as well as any SSL traffic. This way, only one IP address needs to be exposed to the Internet while users get the experience of multiple stand-alone sites. This article series also describes how applications can be conveniently put behind access control using the Central Authentication Service, or CAS for short.

Part 4 – Protecting resources with CAS

This article is number four in a series. The steps described here are based on configurations that have been performed in the earlier steps. It is strongly recommended to read these first:

We’ll start by downloading and deploying CAS. The CAS server is found at http://www.ja-sig.org/downloads/cas/cas-server-3.3.4-release.tar.gz. While we’re at it, we are going to download the CAS client as well: http://www.ja-sig.org/downloads/cas-clients/cas-client-java-2.1.1.tar.gz

Extract the server and client to a suitable location, e.g. your desktop. From the extracted server directory, copy modules/cas-server-webapp-3.3.4.war to Tomcat's webapps/ directory (if you are following all steps in this guide, it should be located in /opt/apache-tomcat-5.5.28/webapps). Start Tomcat and point the web browser to http://localhost:8080/cas-server-webapp-3.3.4/login.

You should now see the CAS login page where you can enter your username and password. By default, you should be able to log in by entering password in both the user name and password fields. If everything is working, a page stating that you have logged in successfully should be displayed.

Now, we need to configure Tomcat to use CAS for authentication. In this setup we will be looking at CAS-enabling the Tomcat Manager application (there is a link to the Manager application under the Administration headline in the top left corner of Tomcat’s welcome page). If you try to access it at this stage, you will just be asked to log in using a basic auth scheme.

Locate the web.xml descriptor for the manager web application, found in /opt/apache-tomcat-5.5.28/server/webapps/manager/WEB-INF. We will now remove the container authentication and security configuration and replace it with CAS and a simple authorization filter. To be on the safe side, make a backup copy in case you mess things up and want to start over:

cp web.xml web.xml.bk

Open the web.xml file and locate the following line, about two thirds into the file:

<!-- Define reference to the user database for looking up roles -->

Remove everything from there to the end of the configuration file, leaving just the closing line:

</web-app>

Scroll back to the top of the configuration, and insert the following, just after the closing </description> tag, about ten lines from the top:

  <filter>
    <filter-name>CASFilter</filter-name>
    <filter-class>edu.yale.its.tp.cas.client.filter.CASFilter</filter-class>
    <init-param>
        <param-name>edu.yale.its.tp.cas.client.filter.loginUrl</param-name>
        <param-value>https://one.example.com/cas-server-webapp-3.3.4/login</param-value>
    </init-param>
    <init-param>
        <param-name>edu.yale.its.tp.cas.client.filter.validateUrl</param-name>
        <param-value>https://one.example.com/cas-server-webapp-3.3.4/serviceValidate</param-value>
    </init-param>
    <init-param>
        <param-name>edu.yale.its.tp.cas.client.filter.serverName</param-name>
        <param-value>one.example.com:443</param-value>
    </init-param>
  </filter>

  <filter>
    <filter-name>Authz Filter</filter-name>
    <filter-class>edu.yale.its.tp.cas.client.filter.SimpleCASAuthorizationFilter</filter-class>
    <init-param>
        <param-name>edu.yale.its.tp.cas.client.filter.authorizedUsers</param-name>
        <param-value>password</param-value>
    </init-param>
  </filter>

  <filter-mapping>
      <filter-name>CASFilter</filter-name>
      <url-pattern>/*</url-pattern>
  </filter-mapping>

  <filter-mapping>
      <filter-name>Authz Filter</filter-name>
      <url-pattern>/*</url-pattern>
  </filter-mapping>

What the above configuration says is that a servlet filter, CASFilter, will intercept all requests to the application and check that the user is authenticated by CAS. If not, the user will be redirected to the CAS login page. In the same way, the servlet filter Authz Filter will make sure that the user is authorized. The authorization is extremely simple in this example – it just checks that the user name is in the list specified in the filter parameter edu.yale.its.tp.cas.client.filter.authorizedUsers. The authentication filter (CASFilter) parameters might benefit from some more explanation:

  • edu.yale.its.tp.cas.client.filter.loginUrl points to the URL where users are redirected in case they are not already authenticated
  • edu.yale.its.tp.cas.client.filter.validateUrl points to the URL of the web service that provides the validation service. This URL needs to start with HTTPS – otherwise an exception will be thrown. Since we are using Apache2 as a front end to take care of all SSL handling, we point to an address that is managed by Apache (i.e. port 443). We will have to trust the SSL certificate of Apache, as will be shown shortly.
  • edu.yale.its.tp.cas.client.filter.serverName points to the server name (and port) to which the user gets redirected to after a successful login.

In order for the Tomcat Manager application to be able to communicate with CAS (through the just defined servlet filter), we need to add the CAS client jar file. Provided you extracted the client archive to the desktop, copy the file ~/Desktop/cas-client-java-2.1.1/dist/casclient.jar to the Tomcat manager's lib directory:

cp ~/Desktop/cas-client-java-2.1.1/dist/casclient.jar /opt/apache-tomcat-5.5.28/server/webapps/manager/WEB-INF/lib

If you are using a commercial certificate for your Apache2 web server (i.e. a certificate that has been signed by a root CA certificate found in the JVM trust store), you are done. Since this example is based on a self-signed certificate, we need to add our certificate to the trust store manually. The reason for this is that the CAS servlet filter uses HTTPS, and Java (rightfully) refuses to establish a connection if it cannot verify that the identity of the remote side is valid.

To import the Apache2 certificate into the trust store, start by converting it into DER format:

cd /etc/apache2/ssl
sudo openssl x509 -in apache.pem -out apache.crt -outform DER

Now, import the DER certificate into the JVM trust store:

keytool -import -trustcacerts -keystore /usr/lib/jvm/java-6-sun/jre/lib/security/cacerts -storepass changeit -alias apache -file /etc/apache2/ssl/apache.crt

Restart Tomcat and enjoy the CAS-enabled Tomcat Manager!

References

http://www.ja-sig.org/wiki/display/CAS/CASifying+Tomcat+Manager

Setting up CAS on Tomcat with Apache2 and SSL on Ubuntu – part 3

A common scenario when providing services via the web is to expose all applications via an Apache front end. The Apache server acts as a dispatcher, or reverse proxy, and takes care of virtual hosting as well as any SSL traffic. This way, only one IP address needs to be exposed to the Internet while users get the experience of multiple stand-alone sites. This article series also describes how applications can be conveniently put behind access control using the Central Authentication Service, or CAS for short.

Part 3 – Adding Tomcat behind an Apache2 reverse proxy

This article is the third in a series. The steps described here are based on configurations that have been performed in the earlier steps. It is strongly recommended to read these first:

Before starting any configuration, we need to make sure that the required components are installed. We need to have a working Java installation and Apache Tomcat. I have been using Java 1.6 and Tomcat 5.5, but it should probably work with later versions as well. Start by installing Java:

sudo apt-get install sun-java6-jdk

Tomcat is downloaded from the Apache web site. Choose the core package and extract it to /opt, or another place of your choice. You may even want to put Tomcat onto a separate server (on which you’ll need Java installed as well).

Now, you should be able to start Tomcat by running:

/opt/apache-tomcat-5.5.28/bin/startup.sh

If you get an error message stating that the JAVA_HOME variable is not set, you can add the following line to the file /etc/environment:

JAVA_HOME=/usr/lib/jvm/java-6-sun/

This will set the JAVA_HOME environment variable globally for all users. In order to read it into memory without rebooting, run the following command:

source /etc/environment
export JAVA_HOME

Direct your browser to http://localhost:8080 and make sure you reach the Tomcat welcome page.

Now that Tomcat is up and running, we want to enable Apache2 to serve as a front end, taking care of virtual hosting and SSL acceleration. One could argue that any Tomcat installation using SSL benefits from having an Apache front end that handles SSL encryption and decryption in native code.

The preferred way of connecting Apache2 with Tomcat is by using the AJP protocol provided by the mod_jk Apache module. This requires a couple of configurations. We’ll start by installing the required Apache2 module:

sudo apt-get install libapache2-mod-jk

Next, we need to create the file /etc/apache2/conf.d/tomcat. By putting it in the conf.d directory it is automatically included into the Apache2 configuration:

# mod_jk config
# Where to find workers.properties
JkWorkersFile /etc/apache2/workers.properties
#
# Where to put jk logs
JkLogFile /var/log/apache2/jk.log
#
# Set the jk log level [debug/error/info]
JkLogLevel info
# Select the log format
JkLogStampFormat "[%a %b %d %H:%M:%S %Y] "
#
#JkOptions indicate to send SSL KEY SIZE,
JkOptions +ForwardKeySize +ForwardURICompat -ForwardDirectories
#
# JkRequestLogFormat set the request format
JkRequestLogFormat "%w %V %T"

Next, create the file /etc/apache2/workers.properties:

#
# This file provides minimal jk configuration properties needed to
# connect to Tomcat.
#
# We define a worker named 'default'
ps=/
workers.java_home=/usr/lib/jvm/java-6-sun/
worker.list=default
worker.default.port=8009
worker.default.host=localhost
worker.default.type=ajp13
worker.default.lbfactor=1

Edit the virtual host configuration /etc/apache2/sites-enabled/one.example.com-http and add the following just above the line starting with DocumentRoot:

JkMount /* default
JkMount /*.jsp default
DirectoryIndex index.jsp index.html
# Globally deny access to the WEB-INF directory
<LocationMatch ".*WEB-INF.*">
deny from all
</LocationMatch>

Repeat the above with the HTTPS virtual host configuration /etc/apache2/sites-enabled/one.example.com-ssl.

Restart the Apache2 web server:

sudo /etc/init.d/apache2 restart

Now, you should be able to visit the Tomcat start page by directing the browser to either http://one.example.com or https://one.example.com.

Next Step

The next step is to get CAS up and running.

References

http://blog.beplacid.net/2007/11/20/howto-apache-2-tomcat-5525-and-mod_jk-under-debian/

Setting up CAS on Tomcat with Apache2 and SSL on Ubuntu – part 2

A common scenario when providing services via the web is to expose all applications via an Apache front end. The Apache server acts as a dispatcher, or reverse proxy, and takes care of virtual hosting as well as any SSL traffic. This way, only one IP address needs to be exposed to the Internet while users get the experience of multiple stand-alone sites. This article series also describes how applications can be conveniently put behind access control using the Central Authentication Service, or CAS for short.

Part 2 – Adding SSL support to Apache2 virtual hosts

In this part of the article series we are going to set up Apache2 so that it is possible to access it through SSL. The Apache2 configuration we are working on is described in this previous post.

Note that there is a limitation to how you can configure virtual hosts over HTTPS. Remember that traffic is encrypted when it arrives at the web server, and this includes the host name in the request URL. Thus it is impossible for Apache to know which certificate to use to decrypt the request. There are, however, wildcard certificates, which can be used for domain patterns like *.example.com. If we use a wildcard certificate, i.e. always the same certificate, we can host an arbitrary number of virtual hosts under the domain, and they will all use the same certificate.

Now, let’s get started. The first thing to do is to make sure Apache2 has the necessary module for serving HTTPS traffic:

sudo a2enmod ssl

Next, we need to set up the key and certificate to use for our virtual hosts. This can be done either by purchasing a commercial certificate or by creating a self-signed certificate. In this article we are going to create a self-signed certificate. This means that anyone accessing our site will get a warning that the root certificate is not trusted, but otherwise it works identically to a commercial certificate. Set up the self-signed certificate by issuing the following commands:

sudo mkdir /etc/apache2/ssl
sudo make-ssl-cert /usr/share/ssl-cert/ssleay.cnf /etc/apache2/ssl/apache.pem
sudo chmod a+r /etc/apache2/ssl/apache.pem

When you issue the second command above, you will be asked for the certificate identity. At this point, enter *.example.com. The asterisk (*) indicates that it is a wildcard certificate that matches any host name directly under example.com.
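
You can verify the identity stored in a certificate with openssl. The sketch below creates a throwaway self-signed wildcard certificate in a temp directory and prints its subject (a one-step alternative to make-ssl-cert, shown only for illustration):

```shell
# Sketch: generate a throwaway self-signed wildcard certificate and
# confirm that its subject CN is *.example.com
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=*.example.com" \
  -keyout "$dir/key.pem" -out "$dir/cert.pem" 2>/dev/null
openssl x509 -in "$dir/cert.pem" -noout -subject
```

The same x509 command can of course be pointed at the real /etc/apache2/ssl/apache.pem to check what identity it carries.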

Now we are all set to set up a couple of virtual hosts that will respond to HTTPS requests. As discussed in the previous post, this is done by adding configuration files to the /etc/apache2/sites-available directory. Start with the file /etc/apache2/sites-available/one.example.com-ssl:

#
# Virtual host (HTTPS)
# one.example.com (/var/www/one.example.com)
#
<VirtualHost *:443>
ServerAdmin webmaster@one.example.com
ServerName one.example.com:443
#
SSLEngine On
SSLCertificateFile /etc/apache2/ssl/apache.pem
#
# Indexes + Directory Root.
DirectoryIndex index.html
DocumentRoot /var/www/one.example.com/htdocs
#
# Logfiles
ErrorLog /var/www/one.example.com/logs/error.log
CustomLog /var/www/one.example.com/logs/access.log combined
</VirtualHost>

Continue with the second host, /etc/apache2/sites-available/two.example.com-ssl:

#
# Virtual host (HTTPS)
# two.example.com (/var/www/two.example.com)
#
<VirtualHost *:443>
ServerAdmin webmaster@two.example.com
ServerName two.example.com:443
#
SSLEngine On
SSLCertificateFile /etc/apache2/ssl/apache.pem
#
# Indexes + Directory Root.
DirectoryIndex index.html
DocumentRoot /var/www/two.example.com/htdocs
#
# Logfiles
ErrorLog /var/www/two.example.com/logs/error.log
CustomLog /var/www/two.example.com/logs/access.log combined
</VirtualHost>

The two host configurations above point to the same web content directories as their non-SSL counterparts. This is by no means necessary – they could have pointed to any location, since they are in fact different virtual hosts from the perspective of the Apache2 web server.

Finally, let's see it in action by enabling the sites, restarting Apache and pointing the browser to the new virtual hosts:

sudo a2ensite one.example.com-ssl
sudo a2ensite two.example.com-ssl
sudo /etc/init.d/apache2 restart

You should now be able to access https://one.example.com and https://two.example.com and see different content. If you are using a self-signed certificate, like I have described here, you will be asked to override the default security behavior in the browser. Assuming you trust yourself – go ahead and add the exceptions.

Next step

The next step will be to add Tomcat to the setup and make Apache2 communicate with the Tomcat backend through the AJP protocol as provided by the mod_jk module.

Setting up CAS on Tomcat with Apache2 and SSL on Ubuntu – part 1

A common scenario when providing services via the web is to expose all applications via an Apache front end. The Apache server acts as a dispatcher, or reverse proxy, and takes care of virtual hosting as well as any SSL traffic. This way, only one IP address needs to be exposed to the Internet while users get the experience of multiple stand-alone sites. This article series also describes how applications can be conveniently put behind access control using the Central Authentication Service, or CAS for short.

Part 1 – Setting up Apache2 for virtual hosting

Installing and configuring Apache

We are going to create an environment where we can host multiple websites with a single server. This is quite straightforward to do with the Apache web server. We are going to cover the use of the NameVirtualHost directive, which does not require any hard-wiring of IP addresses. The only thing you need is for your domain names to resolve to the IP address of your web server.

For example, if you have an Apache server running on the IP address 10.0.1.20 and you wish to host the two sites one.example.com and two.example.com, you'll need to make sure that these names resolve to the IP address of your server. If you wish to do this in an isolated environment and avoid setting up DNS records, you can simply add the host names to your /etc/hosts file, which should then look something like the following (replacing the IP addresses with the address of your intended server):

127.0.0.1    localhost
10.0.1.20    one.example.com
10.0.1.20    two.example.com

After this preparation, the first thing to do is to setup an Apache2 web server. In Ubuntu, it is conveniently located in the repositories:

sudo apt-get install apache2

This installs the Apache web server and creates a basic configuration. After installation you should be able to access your newly created server on http://localhost. You should also be able to access the default server site on the addresses http://one.example.com and http://two.example.com.

The next thing to do is to enable virtual hosts in your Apache configuration. The simplest way to do this is to create a file called /etc/apache2/conf.d/virtual.conf and include the following content in it:

#
#  Handle multiple virtual hosts.
#
NameVirtualHost *:80
NameVirtualHost *:443

This effectively means that you are going to map all requests on ports 80 and 443 (HTTPS) to virtual hosts. Note that there is a limitation to how you can configure virtual hosts over HTTPS. Remember that traffic is encrypted when it arrives at the web server, and this includes the host name in the request URL. Thus it is impossible for Apache to know which certificate to use to decrypt the request. There are, however, wildcard certificates, which can be used for domain patterns like *.example.com. If we use a wildcard certificate, we can host an arbitrary number of virtual hosts under the domain, and they will all use the same certificate. More on that in a later post.

Next, we need to create the two virtual web sites. All site configurations under apache2 are kept in the directory /etc/apache2/sites-available. Start by creating the file /etc/apache2/sites-available/one.example.com-http:

#
#  Virtual host
#  one.example.com (/var/www/one.example.com)
#
<VirtualHost *:80>
ServerAdmin webmaster@one.example.com
ServerName  one.example.com:80

# Indexes + Directory Root.
DirectoryIndex index.html
DocumentRoot /var/www/one.example.com/htdocs

# Logfiles
ErrorLog  /var/www/one.example.com/logs/error.log
CustomLog /var/www/one.example.com/logs/access.log combined
</VirtualHost>

Note that the ServerName directive marks the name of the virtual host; when Apache receives a request whose URL contains this name, it will be directed to the directory /var/www/one.example.com/htdocs. Now, let's do the same with the second virtual host by creating the file /etc/apache2/sites-available/two.example.com-http:

#
#  Virtual host
#  two.example.com (/var/www/two.example.com)
#
<VirtualHost *:80>
ServerAdmin webmaster@two.example.com
ServerName  two.example.com:80

# Indexes + Directory Root.
DirectoryIndex index.html
DocumentRoot /var/www/two.example.com/htdocs

# Logfiles
ErrorLog  /var/www/two.example.com/logs/error.log
CustomLog /var/www/two.example.com/logs/access.log combined
</VirtualHost>

The last thing to do is to create a directory structure and some content to display for the sites:

sudo mkdir /var/www/one.example.com/htdocs
sudo mkdir /var/www/one.example.com/logs
sudo mkdir /var/www/two.example.com/htdocs
sudo mkdir /var/www/two.example.com/logs
echo "site one.example.com" | sudo tee /var/www/one.example.com/htdocs/index.html
echo "site two.example.com" | sudo tee /var/www/two.example.com/htdocs/index.html

Finally, let's enable the sites, restart Apache and see the result. Enabling the sites boils down to creating symbolic links in the /etc/apache2/sites-enabled directory. These are automatically included by Apache when the configuration is loaded. For convenience there is a script available that creates the link for us:

sudo a2ensite one.example.com-http
sudo a2ensite two.example.com-http
sudo /etc/init.d/apache2 restart

Now, direct the browser to http://one.example.com, followed by http://two.example.com. You should be greeted by two different web sites. Voila!

Next step

The next step is to enable SSL with the use of a wildcard certificate.

References

http://www.debian-administration.org/articles/412