• 2011-02-20 13:56:00

    Keeping your Linux users in sync in a cluster

    As a website becomes more popular, your application might not be able to cope with the demand that your users put on your server. To accommodate that demand, you will need to move beyond the one-server solution and create a cluster. The easiest cluster to create is two servers: one to handle your web requests (HTTP/PHP etc.) and one to handle your database load (MySQL). Even this setup can only get you so far; if your site keeps growing, you will need a cluster of several servers.


    This How-To does not cover database replication; that deserves a separate blog post. However, clustering your database is relatively easy with MySQL replication. You can set up a virtual hostname (something like mysql.mydomain.com) that is visible only within your network, and configure your application to use it as the database host. The virtual hostname always maps to the current master server, while the slave(s) only replicate.

    For instance, if you have two servers A and B, you can configure both of them to act as either master or slave in MySQL. You then set one of them as master (A) and the other as slave (B). If something happens to A, B gets promoted to master instantly. Once A comes back up, it gets demoted to a slave, and the process is repeated if/when B has a problem. This can be a very good solution, but you will need pretty evenly matched servers to keep up with the demand. Alternatively, B can be less powerful than A; when A comes back up, you keep it momentarily as a slave (until everything is replicated) and then promote it back to master.

    One thing to note about replication (which I found out through trial and error): MySQL keeps binary logs to handle replication. If you are not careful in your deployment, MySQL will never recycle those logs, and on a busy site you will soon run out of disk space. By default those logs live under /var/lib/mysql.

    By changing directives in my.cnf you can store the binary logs in a different folder and even set up 'garbage collection' or recycling. You can for instance set the logs to rotate every X days with the following directive in my.cnf:

    expire_logs_days = 5

    I set mine to 5 days, which is extremely generous. If your replication breaks, you must have the means to know about it within minutes (see Nagios for a good monitoring tool). In most cases 2 days is more than enough.
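    Putting the two pieces together, a minimal my.cnf sketch could look like the following (the log path here is an example, not from my actual servers):

```ini
[mysqld]
# Write binary logs outside /var/lib/mysql (example path)
log-bin          = /var/log/mysql/mysql-bin
# Recycle binary logs older than 5 days
expire_logs_days = 5
```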


    There are numerous ways of keeping your cluster in sync. A really good tool that I have used when playing around with a cluster is csync2. Installation is really easy, and all you need is a cron task every X minutes (the interval is up to you) to synchronize the new files. Think of it as a two-way rsync. Another tool that can do this is Unison, but I found it slow and difficult to implement - that's just me though.

    Assume a website served by two (or more) servers behind a load balancer. If your users upload files, you cannot control which server receives a given upload. As a result, if user A uploads the file abc.txt to server A, user B might be served by server B and would not be able to access the file. csync2 synchronizes the file across all the servers, thus providing access to the content everywhere and keeping multiple copies of it (additional backup, if you like).
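    As a rough sketch of such a setup (the group name, host names, key path and synchronized folder below are made-up examples, not from my actual configuration), /etc/csync2.cfg on each server could look like this:

```
group webcluster {
    host serverA serverB;
    key /etc/csync2.key;       # shared key, generated with csync2 -k
    include /var/www/uploads;  # the folder holding user-uploaded files
    auto younger;              # on conflict, keep the newer copy
}
```

    The cron entry on each server would then be something along the lines of `*/5 * * * * /usr/sbin/csync2 -x` to synchronize every five minutes.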


    An alternative to keeping everything synchronized is to use NFS. This approach has many advantages and some disadvantages; it is up to you whether the disadvantages are something you can live with.

    • NFS is slow - slower than direct access to a local hard drive.
    • Most likely you will use a symlink to the NFS folder, which can slow things down even more.
    • The content does not depend on any individual web server.
    • The web servers can be low- to medium-spec boxes without the need for really fast or large hard drives.
    • A well-designed NFS setup with DRBD provides a RAID-1 over the network. Using gigabit network interface cards you can keep performance at really high levels.

    I know that my friend Floren does not agree with my approach on the NFS and would definitely have gone with the csync2 approach. Your implementation depends on your needs.

    Users and Groups

    Using the NFS approach, we need to keep the files and permissions properly set up for our application. Assume that we have two servers and we need to create one user to access our application and upload files.

    The user has been created on both servers and the files are stored on the NFS. Connecting to server A and looking at the files we can see something like this:

    drwxr-xr-x 17 niden  niden  4096 Feb 18 13:41 www.niden.net
    drwxr-xr-x  5 niden  niden  4096 Nov 15 22:10 www.niden.net.files
    drwxr-xr-x  7 beauty beauty 4096 Nov 21 17:42 www.beautyandthegeek.it

    However when connecting to server B, the same listing tells another story:

    drwxr-xr-x 17 508    510    4096 Feb 18 13:41 www.niden.net
    drwxr-xr-x  5 508    510    4096 Nov 15 22:10 www.niden.net.files
    drwxr-xr-x  7 510    511    4096 Nov 21 17:42 www.beautyandthegeek.it

    The problem here is the uid and gid of each user and group respectively. Somehow (and this can happen really easily) server A had one or more extra users added to it, so its internal user-ID counter advanced by one or more and is no longer identical to that of server B. Adding a new user on server A will therefore produce a uid of 510, while on server B the same process will produce a uid of 508.

    To set up all users identically on all servers, we need two commands: groupadd and useradd (in some Linux distributions you might find them as addgroup and adduser).


    First of all you will need to add the groups. You can of course keep all users in one group, but my implementation keeps one user and one group per access. To cater for that, I had to first create a group for each user and then the user account itself. Like users, groups have unique IDs (gid). From the groupadd manual, the purpose of the gid is:

    The numerical value of the group's ID. This value must be unique, unless the -o option is used. The value must be non-negative. The default is to use the smallest ID value greater than 999 and greater than every other group. Values between 0 and 999 are typically reserved for system accounts.

    I chose to assign each group a unique ID (you can override this behavior with the -o switch in the command below, which allows a gid to be reused by more than one group). The arbitrary starting number that I chose was 2000.

    As an example, I will set niden as the user/group for accessing this site and beauty as the user/group that accesses BeautyAndTheGeek.IT. Note that this is only an example.

    groupadd --gid 2000 niden
    groupadd --gid 2001 beauty

    Repeat the process as many times as needed for your setup, then connect to the second server and repeat it there. Of course, if you have more than two servers, repeat the process on every server that accesses your NFS.


    The next step is to add the users. As with the groups, we need to set the uid. From the useradd manual, the purpose of the uid is:

    The numerical value of the user's ID. This value must be unique, unless the -o option is used. The value must be non-negative. The default is to use the smallest ID value greater than 999 and greater than every other user. Values between 0 and 999 are typically reserved for system accounts.

    As with the groups, I chose to assign each user a unique ID starting from 2000.

    So in the example above, the commands that I used were:

    useradd --uid 2000 -g niden --create-home niden
    useradd --uid 2001 -g beauty --create-home beauty

    You can also use a different syntax, utilizing the numeric gids:

    useradd --uid 2000 --gid 2000 --create-home niden
    useradd --uid 2001 --gid 2001 --create-home beauty

    Again, repeat the process as many times as needed for your setup and to as many servers as needed.
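    To avoid typos when repeating this on several servers, the whole sequence can be scripted. This is only a sketch using the example accounts above; the provision_cmds helper is my own, not a standard tool, and it merely prints the commands so they can be reviewed before being run as root on each server:

```shell
#!/bin/sh
# Print the groupadd/useradd pairs for each account, starting at id 2000,
# so the exact same commands can be run on every server in the cluster.
provision_cmds() {
    id=2000
    for name in "$@"; do
        echo "groupadd --gid $id $name"
        echo "useradd --uid $id --gid $id --create-home $name"
        id=$((id + 1))
    done
}

provision_cmds niden beauty
```

    Piping the output through sh (as root) on every server applies identical uids and gids everywhere.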

    In the example above I used the --create-home switch (or -m) so that a home folder is created under /home for each user. Your setup might not need this step. Check the references at the bottom of this blog post for the manual pages of groupadd and useradd.

    I would suggest that you keep a log of which user/group has which uid/gid. It helps in the long run, plus it is a good habit to keep proper documentation on projects :)


    So what about the passwords on all servers? My approach is crude but effective. I connected to the first server and set the password for each user, writing down each password:

    passwd niden

    Once I had all the passwords set, I opened the /etc/shadow file.

    nano /etc/shadow

    and that revealed a long list of users and their scrambled (hashed) passwords.
    Since I knew that I had added niden and beauty as users, I copied those two lines. I then connected to the second server, opened /etc/shadow and located the two lines referencing the niden and beauty users. I deleted the existing lines, pasted the ones that I had copied from server A, and saved the file. Now the passwords are synchronized on both servers.
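    The copy-and-paste step can also be scripted. The sketch below is my own helper, not a standard tool: it pulls the user:hash pairs for the chosen accounts out of shadow-format input, and the resulting pairs can be applied on another server with chpasswd -e (which accepts pre-encrypted passwords):

```shell
#!/bin/sh
# Extract "user:hash" pairs for the given users from shadow-format input.
# Usage: extract_hashes niden beauty < /etc/shadow
extract_hashes() {
    users=$*
    while IFS= read -r line; do
        name=${line%%:*}            # field 1: the user name
        for u in $users; do
            # keep only fields 1 and 2 (name and password hash)
            [ "$name" = "$u" ] && echo "$line" | cut -d: -f1,2
        done
    done
}
```

    On server A you would run `extract_hashes niden beauty < /etc/shadow > pairs`, copy the file over, and on server B run `chpasswd -e < pairs`.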


    The above might not be the best way of keeping users in sync in a cluster, but it gives you an idea of where to start. There are different implementations available (Google is your friend) and your mileage may vary. The above has worked for me for a number of years, since I have never needed to add more than a handful of users on the servers each year.


  • 2009-12-10 12:00:00

    Faster rsync and emerge in Gentoo


    Recently I started setting up a cluster of 7 Gentoo boxes for a project I am working on. The problem with boxes coming straight out of a hosting company's setup process is that they do not contain the packages that you need. Therefore you need to set up your USE flags and emerge the packages required by each box's role.

    I have implemented the following procedure many times on my local networks (since I have more than one Gentoo box) and have also implemented the same process at work (we run 3 Gentoo boxes).

    The way to speed up rsync and emerge is to run a local rsync mirror and http-replicator. This will not make the packages compile faster; what it will do is reduce your network's resource usage (downloads in particular), since each package is downloaded only once, and reduce the time you wait for each package to download. The same applies to rsync.

    My network has, as I said, 7 boxes: 5 of them will be used as web servers (so effectively they share the same USE flags) and 2 as database servers. For the purposes of this tutorial I will name the web servers ws1, ws2, ws3, ws4, ws5 and the database servers db1, db2. The ws1 box will also act as the local rsync mirror and will run http-replicator.

    I am going to set up the /etc/hosts file on each machine so that the local network is resolved on each box and no hits to the DNS are required. So for my network (the 10.13.18.x addresses below are examples consistent with the rest of this guide) I have:

    10.13.18.1  ws1
    10.13.18.2  ws2
    10.13.18.3  ws3
    10.13.18.4  ws4
    10.13.18.5  ws5
    10.13.18.6  db1
    10.13.18.7  db2

    Modify the above to your specific setup needs.

    Setting up a local rsync

    Server setup (ws1)

    A really good tutorial can be found in the Gentoo Documentation, but here is the short version:

    The ws1 box already has the rsync package installed. All I need to do is start the daemon, but some configuration is necessary before I start the service:

    nano -w /etc/rsyncd.conf

    and what I should have in there is:

    # Restrict the number of connections
    max connections = 5
    # Important!! Always use chroot
    use chroot = yes
    # Read-only access is all the clients need
    read only = yes
    # The daemon runs with no privileges
    uid = nobody
    gid = nobody
    # Recommended: restrict via IP (subnets or just IP addresses)
    hosts allow = 10.13.18.0/24
    # Everyone else denied
    hosts deny  = *

    [niden-gentoo-portage]
    # The local portage
    path = /usr/portage
    comment = niden.net Gentoo Portage tree
    exclude = /distfiles /packages

    That's it. Now I add the service to the default runlevel and start it:

    rc-update add rsyncd default
    /etc/init.d/rsyncd start

    NOTE: If you have a firewall using iptables, you will need to add the following rule (adjust the source network to match yours):

    # RSYNC
    -A INPUT --protocol tcp --source 10.13.18.0/24 --match state --state NEW --destination-port 873 --jump ACCEPT

    Client setup

    In my clients I need to edit the /etc/make.conf file and change the SYNC directive to:

    SYNC="rsync://ws1/niden-gentoo-portage"

    or I can use the IP address:

    SYNC="rsync://10.13.18.1/niden-gentoo-portage"
    Note that the path used in the SYNC command is what I have specified as a section in the rsyncd.conf file (niden-gentoo-portage in my setup). This path can be anything you like.


    I have already run

    emerge --sync

    in the ws1 box, so all I need to do now is run it on my clients. Once I run it I can see the following (at the top of the listing):

    emerge --sync
    >>> Starting rsync with rsync://ws1/niden-gentoo-portage...
    receiving incremental file list

    So everything works as I expect it.

    Setting up http-replicator

    http-replicator is a proxy server. When a machine (the local or a remote) requests a package, http-replicator checks its cache and if the file is there, it passes it to the requesting machine. If the file doesn't exist though, http-replicator downloads it from a mirror and then passes it to the requesting machine. The file is then kept in http-replicator's cache for future requests. This way I save on resources by downloading once and serving many times locally.

    Although this might not seem like a 'pure speedup', it will make your installations and updates faster, since the download factor is reduced to a bare minimum. Waiting for packages like MySQL, GNOME or others to download does take a long time. Multiply that time by the number of machines on your network and you can see the benefits of a setup like this.

    Server setup (ws1)

    First of all I need to emerge the package

    emerge http-replicator

    Once everything is done I need to change the configuration file to suit my needs:

    nano -w /etc/conf.d/http-replicator

    and the file should have:

    GENERAL_OPTS="--dir /var/cache/http-replicator"
    GENERAL_OPTS="$GENERAL_OPTS --user portage"
    DAEMON_OPTS="$DAEMON_OPTS --alias /usr/portage/packages/All:All"
    DAEMON_OPTS="$DAEMON_OPTS --log /var/log/http-replicator.log"
    DAEMON_OPTS="$DAEMON_OPTS --ip 10.13.18.*"
    ## The proxy port on which the server listens for http requests:
    DAEMON_OPTS="$DAEMON_OPTS --port 8080"

    The last line with the --port parameter specifies the port that http-replicator will listen on. You can change it to whatever you want. The --ip parameter restricts who is allowed to connect to this proxy server; I have allowed my whole internal network, so change it to suit your needs. Lastly, the --dir option is where the cached data is stored. You can change it to whatever you like; I have left it as is, and therefore I need to create that folder:

    mkdir /var/cache/http-replicator

    Since I have specified that the user that this proxy will run as is portage (see --user directive above) I need to change the owner of my cache folder:

    chown portage:portage /var/cache/http-replicator

    I add the service to the default runlevel and start it:

    rc-update add http-replicator default
    /etc/init.d/http-replicator start

    NOTE: If you have a firewall using iptables, you will need to add the following rule:

    -A INPUT --protocol tcp --source 10.13.18.0/24 --match state --state NEW --destination-port 8080 --jump ACCEPT

    You will also need to regularly run

    rm -rf /usr/portage/distfiles/*

    to clear the distfiles folder (the downloads are already cached by http-replicator). I have added this to a bash script which I run every night using cron.
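    A minimal sketch of that nightly job as a cron entry (the file name and 03:00 schedule are examples):

```shell
# /etc/cron.d/distfiles-cleanup on ws1 (example file name and schedule)
0 3 * * * root rm -rf /usr/portage/distfiles/*
```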

    Client setup

    In my clients I need to edit the /etc/make.conf file and point the fetch commands at the proxy:

    http_proxy="http://ws1:8080"
    FETCHCOMMAND="/usr/bin/wget -t 5 --passive-ftp \${URI} -O \${DISTDIR}/\${FILE}"
    RESUMECOMMAND="/usr/bin/wget -t 5 --passive-ftp \${URI} -O \${DISTDIR}/\${FILE}"

    I have commented out any previous FETCHCOMMAND and RESUMECOMMAND statements.


    The testing begins in one of the clients (you can choose any package):

    emerge logrotate

    and see in the output that everything works fine

    ws2 ~ # emerge logrotate
    Calculating dependencies... done!
    >>> Verifying ebuild manifests
    >>> Emerging (1 of 1) app-admin/logrotate-3.7.8
    >>> Downloading 'http://distfiles.gentoo.org/distfiles/logrotate-3.7.8.tar.gz'
    --2009-12-10 06:46:47--  http://distfiles.gentoo.org/distfiles/logrotate-3.7.8.tar.gz
    Resolving ws1... 10.13.18.1
    Connecting to ws1|10.13.18.1|:8080... connected.
    Proxy request sent, awaiting response... 200 OK
    Length: 43246 (42K)
    Saving to: `/usr/portage/distfiles/logrotate-3.7.8.tar.gz'
    100%[=============================>] 43,246      --.-K/s   in 0s
    2009-12-10 06:46:47 (89.6 MB/s) - `/usr/portage/distfiles/logrotate-3.7.8.tar.gz' saved [43246/43246]

    Final thoughts

    Setting up local proxies makes your network as efficient as possible. It not only reduces the download time for your updates but is also courteous to the Gentoo community: since mirrors are run by volunteers or non-profit organizations, it is only fair not to abuse their resources by downloading an update more than once for your network.

    I hope this quick guide will help you and your network :)