• 2011-05-18 14:49:00

    Using autounmask to Unmask Packages in Gentoo

    Gentoo is one of my favorite Linux distributions. Although I am comfortable with other distributions, Gentoo has a special place in my heart and whenever I can use it I do :)

    There are however some times when I would like to install a package - mostly to test something - and the package is masked. Masked packages are not considered "production ready", so although they exist in the Portage tree, they are not available to be installed by default.

    To allow a masked package to be installed, you will need to unmask that package by adding a corresponding entry in the /etc/portage/package.keywords file.

    The problem arises when the masked package (that you just unmasked) depends on other packages that are also masked. You then need to rinse and repeat the process for every masked dependency until everything is in place and the package you want can finally be installed.
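
    To illustrate the 'rinse and repeat' part, this is roughly what /etc/portage/package.keywords ends up looking like after manually unmasking a package and one of its masked dependencies (the dependency entry below is hypothetical, just to show the pattern):

    # /etc/portage/package.keywords
    =www-misc/fcgiwrap-1.0.3 ~amd64
    # a hypothetical masked dependency pulled in by the package above
    =dev-libs/fcgi-2.4.0 ~amd64
    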

    NOTE: Playing with masked packages is like playing with fire. If you don't know what you are doing or you are not ready to potentially have an unusable system, don't follow the instructions below or unmask any packages.

    app-portage/autounmask

    The Gentoo developers have created a little utility that will unmask each package that needs to be unmasked. The utility is app-portage/autounmask.

    I wanted to unmask www-misc/fcgiwrap so my manual method would be:

    echo "=www-misc/fcgiwrap-1.0.3 ~amd64" >> /etc/portage/package.keywords
    

    and would then emerge the package

    emerge =www-misc/fcgiwrap-1.0.3
    

    Instead I used autounmask:

    emerge app-portage/autounmask
    

    and

    autounmask www-misc/fcgiwrap-1.0.3
    
    autounmask version 0.27 (using PortageXS-0.02.09 and portage-2.1.9.42)
    * Using repository: /usr/portage
    * Using package.keywords file: /etc/portage/package.keywords
    * Using package.unmask file: /etc/portage/package.unmask
    * Using package.use file: /etc/portage/package.use
    * Unmasking www-misc/fcgiwrap-1.0.3 and its dependencies.. this might take a while..
    * Added '=www-misc/fcgiwrap-1.0.3 ~amd64' to /etc/portage/package.keywords
    * done!
    

    Once that is done I can issue the emerge command and voila!

    emerge www-misc/fcgiwrap
    

    Although in my case there was only one entry to add, when trying to unmask packages that have multiple masked dependencies, such as gnome, kde etc., autounmask can be a very helpful utility.

  • 2011-03-21 14:10:00

    Slicehost vs. Linode

    Through the years I have hosted my sites with various hosting companies. I have had really good experiences, like Vertexhost, and really terrible ones - I don't remember the name of the host, but the guy running it (a kid, as it turned out later) managed to lose 1.6GB of my data. You can safely say that I have been burned by bad hosting companies but have also enjoyed the services of good ones. In the case of Vertexhost, I part-owned that company a few years back and I know that the current owner is a straight-up guy who really cares for his customers.

    Since I moved my emails to Google Apps I only need the hosting for my personal sites such as my blog, my wife's sites (burntoutmom.com, greekmommy.net) and a few other small sites.

    I used to host those sites on one of my company's clusters. The bandwidth consumed was nothing to write home about (I think in total it was a couple of GB per month ~ 1.00 USD) so it didn't matter that I had them there. However, recent events forced me to move them out of that cluster. I was on the market for good and relatively cheap hosting. I did not want to purchase my own server or co-locate with someone else. My solution was going to be a VPS since I would be in control of what I install and what I need.

    Slicehost

    Without much thought, I signed up for Slicehost, which is a subsidiary of Rackspace, a very well known and reputable company.

    I got their 4GB package (250.00 USD per month) and installed Gentoo on it. Apart from the price which was a bit steep, everything else was fine. I was happy to be able to host my sites in a configuration that I was comfortable with, under the understanding that if the VPS failed, then all my sites would go down. That however is the risk that everyone takes while hosting their sites on a single machine. The higher the availability and redundancy the higher the cost.

    I must admit that signing up was not a very happy experience. I went and paid with my credit card, as they pro-rate your month based on your package. Almost immediately after signing up came the email informing me that my credit card had been charged for the relevant amount. I got into the box through ssh, updated the /etc/make.conf file with the USE flags that I needed, ran emerge --sync and then emerge --update --deep --newuse --verbose world so as to update the system.

    It must have been around 5-10 minutes into the process that I received an email from Slicehost saying that they are checking my account information and that I need to confirm my credit card details. I immediately replied to their email (gotta love the desktop notifications on GMail), with the information they needed.

    After I sent the email, I noticed that the box was not responding. I tried to log back in and could not. I was also logged out of (and could not log back in to) their management console on the Slicehost site. I was fuming! They severed the connection to the VPS in the middle of a compilation to check my credit card information. I understand that they need to perform checks for fraud but two questions came to mind:

    • Why did they have to sever the connection and not just send an email, and only block access to the box if I did not reply? That would have been a heck of a lot less of an inconvenience to me, i.e. the end user.
    • Why did the initial email say that my credit card has been charged and it had not?

    No more than 10 minutes later the whole thing had been resolved. I received an email saying that "everything is OK and your account has been restored", at which point I logged back in to redo the compilations. I also received emails from their support/billing team apologizing but stating that although the initial email states that they charge the credit card, they don't. It is something they need to correct because it pisses people (like me) off.

    There was nothing wrong with my setup - everything was working perfectly but the price was really what was bothering me. I would be able to support the sites for a few months, but since literally none of them is making money (maybe a dollar here or there from my blog but that is about it), I would have to pay out of pocket for the hosting every month. I had to find a different solution that would be:

    • cheaper than Slicehost
    • flexible in terms of setup
    • easy to use in terms of controlling your VPS

    After a lot of research I ended up with two winners: Linode and Prgmr. I opted for Linode because, although it was quite a bit more expensive than Prgmr, it had the better console for managing your VPS. I will, however, try out Prgmr's services in the near future so as to assess how good they are. They definitely cannot be beat on price.

    Linode

    Setting up an account with Linode was very easy. I didn't have any of the mini-saga I had with Slicehost. The account was created right there and then, my credit card was charged and I was up and running in no time. Immediately I could see a difference in price. Linode's package for 4GB of RAM is 90.00 USD cheaper (159.00 USD vs. 250.00 USD for Slicehost). For the same package, the price difference is huge.

    I started testing the network, creating my VPS in the Atlanta, GA datacenter (Linode offers a number of data centers to choose from). The functionality available to me was identical and in some cases superior to that of Slicehost: there are a lot more distributions to choose from, and you can partition your VPS the way you want, to name a couple of examples.

    Sifting through the documentation, I saw a few topics regarding high availability websites. The articles described using DRBD, nginx, heartbeat, pacemaker etc. to keep your sites highly available. I was intrigued by the information and set off to create a load balancer using two VPSs and nginx. I have documented the process and it will appear as another blog post later this week.

    While experimenting with the load balancer (and it was Saturday evening) I had to add a new IP address to one of the VPS instances. At the time my account would not allow such a change and I had to contact support. I did and got a reply in less than 5 minutes. I was really impressed by this. Subsequent tickets were answered within the 5 minute time frame. Kudos to Linode support for their speed and professionalism.

    Conclusion

    For a lot cheaper, Linode offered the same thing that Slicehost did. Moving my sites from one VPS to another was a matter of changing my DNS records to point to the new IP address.

    I have been using Linode for a week and so far so good. The support is superb, the documentation is full of how-tos that allow me to experiment with anything I want - and the prices are not going to break me.


  • 2010-08-21 13:38:00

    Create an inexpensive hourly remote backup

    There are two kinds of people, those who backup regularly, and those that never had a hard drive fail

    As you can tell the above is my favorite quote. It is so true and I believe everyone should evaluate how much their data (emails, documents, files) is worth to them and, based on that value, create a backup strategy that suits them. I know for sure that if I ever lost the pictures and videos of my family I would be devastated since those are irreplaceable.

    So the question is: how can I have an inexpensive backup solution? All my documents and emails are stored in Google, since my domain is on Google Apps. What happens, though, with the live/development servers that host all my work? I program on a daily basis and the code has to be backed up regularly, so that a hard drive failure does not result in loss of time and money.

    So here is my solution. I have an old computer (IBM ThinkCentre) which I decided to beef up a bit. I bought 4GB of RAM for it from eBay for less than $100. Although this was not necessary, since my solution would be based on Linux (Gentoo in particular), I wanted to have faster compilation times for packages.

    I bought two external drives (750GB and 500GB respectively) and one 750GB internal drive. I already have a 120GB hard drive in the computer. The two external ones are connected to the computer using USB while the internal ones are connected using SATA.

    The external drives are formatted using NTFS while the whole computer is built using ReiserFS.

    Here is the approach:

    • I have a working Gentoo installation on the machine
    • I have an active Internet connection
    • I have installed LVM on the machine and set up the core system on the 120GB drive while the 500GB drive is on LVM
    • I have 300GB active on the LVM (from the available 500GB)
    • I have generated a public SSH key (I will need this to exchange it with the target servers)
    • I have mounted the internal 500GB drive to the /storage folder
    • I have mounted the external USB 750GB drive to the /backup_hourly folder
    • I have mounted the external USB 500GB drive to the /backup_daily folder

    Here is how my backup works:

    Every hour a script runs. The script uses rsync to synchronize files and folders from a remote server locally. Those files and folders are kept in server-named subfolders in the /storage folder (remember this is my LVM). So for instance my subfolders will be /storage/beryllium.niden.net, /storage/nitrogen.niden.net, /storage/argon.niden.net etc.

    Once the rsync completes, the script continues by compressing the relevant 'server' folder, creating a compressed file with a date-time stamp in its name.

    When all compressions are completed, if the time that the script has executed is midnight, the backups are moved from the /storage folder to the /backup_daily folder (which has the external USB 500Gb mounted). If it is any other time, the files are moved in the /backup_hourly folder (which has the external USB 750Gb mounted).

    This way I ensure that I keep a lot of backups (daily and hourly ones). The backups are being recycled, so older ones get deleted. The amount of data that you need to archive as well as the storage space you have available dictate how far back you can go in your hourly and daily cycles.

    So let's get down to business. The script itself:

    #!/bin/bash
    DATE=`date +%Y-%m-%d-%H-%M`
    DATE2=`date +%Y-%m-%d`
    DATEBACK_HOUR=`date --date='6 days ago' +%Y-%m-%d`
    DATEBACK_DAY=`date --date='60 days ago' +%Y-%m-%d`
    FLAGS="--archive --verbose --numeric-ids --delete --rsh=ssh"
    BACKUP_DRIVE="/storage"
    DAY_USB_DRIVE="/backup_daily"
    HOUR_USB_DRIVE="/backup_hourly"
    

    These are some variables that I need for the script to work. DATE and DATE2 are used to date/time stamp the backups, while the DATEBACK_* variables are used to clear previous backups. In my case they are set to 6 days ago for the hourly backups and 60 days ago for the daily ones. They can be set to whatever you want provided that you do not run out of space.

    The FLAGS variable keeps the rsync command options, while BACKUP_DRIVE, DAY_USB_DRIVE and HOUR_USB_DRIVE hold the locations of the rsync folders and the daily and hourly backup storage areas.

    The script works with arrays. I have 4 arrays to do the work, and 3 of them (rsync_info, rsync_source, rsync_target) must have exactly the same number of elements.

    # RSync Information
    rsync_info[1]="beryllium.niden.net html rsync"
    rsync_info[2]="beryllium.niden.net db rsync"
    rsync_info[3]="nitrogen.niden.net html rsync"
    rsync_info[4]="nitrogen.niden.net db rsync"
    rsync_info[5]="nitrogen.niden.net svn rsync"
    rsync_info[6]="argon.niden.net html rsync"
    

    This is the first array, which holds descriptions of what needs to be done as far as the source is concerned. These descriptions get appended to the log and help me identify which step I am in.

    # RSync Source Folders
    rsync_source[1]="beryllium.niden.net:/var/www/localhost/htdocs/"
    rsync_source[2]="beryllium.niden.net:/niden_backup/db/"
    rsync_source[3]="nitrogen.niden.net:/var/www/localhost/htdocs/"
    rsync_source[4]="nitrogen.niden.net:/niden_backup/db"
    rsync_source[5]="nitrogen.niden.net:/niden_backup/svn"
    rsync_source[6]="argon.niden.net:/var/www/localhost/htdocs/"
    

    This array holds the source host and folder. Remember that I have already exchanged SSH keys with each server, therefore when the script runs it connects to the source server without a password prompt. If you log in with a different (non-root) user, you will need to alter the contents of the rsync_source array so that each entry also includes that user (user@host:/path).
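
    For reference, the key exchange itself is a one-off step per source server. A minimal sketch, run on the backup box (the root user here is an assumption; use whichever account you rsync as):

    # generate a key pair without a passphrase so cron can use it
    ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
    # copy the public key to a source server
    ssh-copy-id root@beryllium.niden.net
    # verify that a password-less login now works
    ssh root@beryllium.niden.net hostname
    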

    # RSync Target Folders
    rsync_target[1]="beryllium.niden.net/html/"
    rsync_target[2]="beryllium.niden.net/db/"
    rsync_target[3]="nitrogen.niden.net/html/"
    rsync_target[4]="nitrogen.niden.net/db/"
    rsync_target[5]="nitrogen.niden.net/svn/"
    rsync_target[6]="argon.niden.net/html/"
    

    This array holds the target locations for the rsync. These folders exist in my case under the /storage subfolder.

    # GZip target files
    servers[1]="beryllium.niden.net"
    servers[2]="nitrogen.niden.net"
    servers[3]="argon.niden.net"
    

    This array holds the names of the folders to be archived. These are the folders directly under the /storage folder and I am also using this array for the prefix of the compressed files. The suffix of the compressed files is a date/time stamp.

    Here is how the script evolves:

    echo "BACKUP START" >> $BACKUP_DRIVE/logs/$DATE.log
    date >> $BACKUP_DRIVE/logs/$DATE.log
    
    # Loop through the RSync process
    element_count=${#rsync_info[@]}
    let "element_count = $element_count + 1"
    index=1
    while [ "$index" -lt "$element_count" ]
    do
        echo ${rsync_info[$index]} >> $BACKUP_DRIVE/logs/$DATE.log
        rsync $FLAGS ${rsync_source[$index]} $BACKUP_DRIVE/${rsync_target[$index]} >> $BACKUP_DRIVE/logs/$DATE.log
        let "index = $index + 1"
    done
    

    The snippet above loops through the rsync_info array and prints out the information in the log file. Right after that it uses the rsync_source and rsync_target arrays (as well as the FLAGS variable) to rsync the contents of the source server with the local folder. Remember that all three arrays have to be identical in size (rsync_info, rsync_source, rsync_target).

    The next thing to do is zip the data (I loop through the servers array)

    # Looping to GZip data
    element_count=${#servers[@]}
    let "element_count = $element_count + 1"
    index=1
    while [ "$index" -lt "$element_count" ]
    do
        echo "GZip ${servers[$index]}" >> $BACKUP_DRIVE/logs/$DATE.log
        tar cvfz $BACKUP_DRIVE/${servers[$index]}-$DATE.tgz $BACKUP_DRIVE/${servers[$index]} >> $BACKUP_DRIVE/logs/$DATE.log
        let "index = $index + 1"
    done
    

    The compression method I use is tar/gzip. I found it to be fast with a good compression ratio. You can choose anything you like.

    Now I need to delete old files from the drives and copy the files on those drives. I use the servers array again.

    # Looping to copy the produced files (if applicable) to the daily drive
    element_count=${#servers[@]}
    let "element_count = $element_count + 1"
    index=1
    
    while [ "$index" -lt "$element_count" ]
    do
        # Copy the midnight files
        echo "Removing old daily midnight files" >> $BACKUP_DRIVE/logs/$DATE.log
        rm -f $DAY_USB_DRIVE/${servers[$index]}/${servers[$index]}-$DATEBACK_DAY*.* >> $BACKUP_DRIVE/logs/$DATE.log
        echo "Copying daily midnight files" >> $BACKUP_DRIVE/logs/$DATE.log
        cp -v $BACKUP_DRIVE/${servers[$index]}-$DATE2-00-*.tgz $DAY_USB_DRIVE/${servers[$index]} >> $BACKUP_DRIVE/logs/$DATE.log
        rm -f $BACKUP_DRIVE/${servers[$index]}-$DATE2-00-*.tgz >> $BACKUP_DRIVE/logs/$DATE.log
    
        # Now copy the files in the hourly
        echo "Removing old hourly files" >> $BACKUP_DRIVE/logs/$DATE.log
        rm -f $HOUR_USB_DRIVE/${servers[$index]}/${servers[$index]}-$DATEBACK_HOUR*.* >> $BACKUP_DRIVE/logs/$DATE.log
        echo "Copying hourly files" >> $BACKUP_DRIVE/logs/$DATE.log
        cp -v $BACKUP_DRIVE/${servers[$index]}-$DATE.tgz $HOUR_USB_DRIVE/${servers[$index]} >> $BACKUP_DRIVE/logs/$DATE.log
        rm -f $BACKUP_DRIVE/${servers[$index]}-$DATE.tgz >> $BACKUP_DRIVE/logs/$DATE.log
        let "index = $index + 1"
    done
    
    echo "BACKUP END" >> $BACKUP_DRIVE/logs/$DATE.log
    

    The last part of the script loops through the servers array and:

    • Deletes the old files (recycling of space) from the daily backup drive (/backup_daily) according to the DATEBACK_DAY variable. If the files are not found a warning will appear in the log.
    • Copies the daily midnight file to the daily drive (if the file does not exist it will simply echo a warning in the log - I do not worry about warnings of this kind in the log file and was too lazy to use an IF EXISTS condition)
    • Removes the daily midnight file from the /storage drive.

    The reason I am using copy and then remove instead of the move (mv) command is that I have found this method to be faster.

    Finally the same thing happens with the hourly files

    • Old files are removed (DATEBACK_HOUR variable)
    • Hourly file gets copied to the /backup_hourly drive
    • Hourly file gets deleted from the /storage drive

    All I need now is to add the script in my crontab and let it run every hour.

    NOTE: The first time you will run the script you will need to do it manually (not in a cron job). The reason behind it is that the first time rsync will need to download all the contents of the source servers/folders in the /storage drive so as to create an exact mirror. Once that lengthy step is done, the script can be added in the crontab. Subsequent runs of the script will download only the changed/deleted files.
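
    Assuming the script is saved as /root/remote-backup.sh (the path is just an example), the crontab entry to run it at the top of every hour looks something like this:

    # m h dom mon dow command
    0 * * * * /bin/bash /root/remote-backup.sh > /dev/null 2>&1
    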

    This method can be very effective while not using a ton of bandwidth every hour. I have used this method for the best part of a year now and it has saved me a couple of times.

    The last thing I need to present is the backup script that I have for my databases. As you noticed above, the source folder for beryllium.niden.net, as far as databases are concerned, is beryllium.niden.net/db/. What I do is dump and zip the databases every hour on my servers. This is not the most efficient way of doing things and it adds to the bandwidth consumption every hour (since the dump creates a new file every hour), but it works for me. The following script runs on my database servers every hour at the 45th minute:

    #!/bin/bash
    
    DBUSER=mydbuser
    DBPASS='dbpassword'
    DBHOST=localhost
    BACKUPFOLDER="/niden_backup"
    DBNAMES="`mysql --user=$DBUSER --password=$DBPASS --host=$DBHOST --batch --skip-column-names -e "show databases"| sed 's/ /%/g'`"
    OPTIONS="--quote-names --opt --compress "
    
    # Clear the backup folder
    rm -fR $BACKUPFOLDER/db/*.*
    
    for i in $DBNAMES; do
        echo Dumping Database: $i
        mysqldump --user=$DBUSER --password=$DBPASS --host=$DBHOST $OPTIONS $i > $BACKUPFOLDER/db/$i.sql
        tar cvfz $BACKUPFOLDER/db/$i.tgz $BACKUPFOLDER/db/$i.sql
        rm -f $BACKUPFOLDER/db/$i.sql
    done
    

    That's it.

    The backup script can be found in my GitHub here.

    Update: The metric units for the drives were GB not MB. Thanks to Jani Hartikainen for pointing it out.

  • 2010-08-01 13:11:00

    Subversion Backup How-To

    I will start this post once again with the words of a wise man:

    There are two kinds of people, those who backup regularly, and those that never had a hard drive fail

    So the moral of the story here is backup often. If something is to happen, the impact on your operations will be minimal if your backup strategy is in place and operational.

    There are a lot of backup scenarios and strategies. Most of them suggest a backup once a day, usually at the early hours of the day. This however might not work very well with a fast paced environment where data changes several times per hour. This kind of environment is usually a software development one.

    If you have chosen Subversion to be your software version control system then you will need a backup strategy for your repositories. Since the code changes very often, this strategy cannot rely on the daily backup schedule. The reason is that, in software, a day's worth of lost work usually costs a lot more than the programmers' actual daily rate.

    Below are some of the scripts I have used over the years for my incremental backups, which I hope will help you too. You are more than welcome to copy and paste the scripts and use them or modify them to suit your needs. Please note, though, that the scripts are provided as is and that you must verify your backup strategy with a full backup/restore cycle. I cannot assume responsibility for anything that might happen in your system.

    Now that the 'legal' stuff is out of the way, here are the different strategies that you can adopt. :)

    svn-hot-backup

    This is a script that is provided with Subversion. It copies (and compresses if requested) the whole repository to a specified location. This technique allows for a full copy of the repository to be moved to a different location. The target location can be a resource on the local machine or a network resource. You can also backup on the local drive and then as a next step transfer the target files to an offsite location with FTP, SCP, RSync or any other mechanism you prefer.

    #!/bin/bash
    
    # Grab listing of repositories and copy each
    # repository accordingly
    
    SVNFLD="/var/svn"
    BACKUPFLD="/backup"
    
    # First clean up the backup folder
    rm -f $BACKUPFLD/*.*
    
    for i in $(ls -1v $SVNFLD); do
        if [ $i != 'conf' ]; then
            /usr/bin/svn-hot-backup --archive-type=bz2 $SVNFLD/$i $BACKUPFLD
        fi
    done
    

    This script will create a copy of each of your repositories and compress it as a bz2 file in the target location. Note that I am filtering out 'conf'. The reason is that I keep a conf folder with some configuration scripts in the same SVN folder. You can adapt the script to your needs to include/exclude repositories/folders as needed.

    This technique gives you the ability to immediately restore a repository (or more than one) by changing the configuration file of SVN to point to the backup location. If you run the script every hour or so then your downtime and loss will be minimal, should something happen.

    There are some configuration options that you can tweak by editing the actual svn-hot-backup script. In Gentoo it is located under /usr/bin/. The default number of backups (num_backups) that the script will keep is 64. You can choose 0 to keep them all but you can adjust it according to your storage or your backup strategy.

    One last thing to note is that you can change the compression mechanism by changing the parameter of the --archive-type option. The compression types supported are gz (.tar.gz), bz2 (.tar.bz2) and zip (.zip)

    Full backup using dump

    This method is similar to the svn-hot-backup. It works by 'dumping' the repository in a portable file format and compressing it.

    #!/bin/bash
    
    # Grab listing of folders and dump each
    # repository accordingly
    
    SVNFLD="/var/svn"
    BACKUPFLD="/backup"
    
    # First clean up the backup folder
    rm -f $BACKUPFLD/svn/*.*
    
    for i in $(ls -1v $SVNFLD); do
        if [ $i != 'conf' ]; then
            svnadmin dump $SVNFLD/$i/ > $BACKUPFLD/$i.svn.dump
            tar cvfz $BACKUPFLD/svn/$i.tgz $BACKUPFLD/$i.svn.dump
            rm -f $BACKUPFLD/$i.svn.dump
        fi
    done
    

    As you can see, this version does the same thing as the svn-hot-backup. It does however give you a bit more control over the whole backup process and allows for a different compression mechanism - since the compression happens on a separate line in the script.

    NOTE: If you use the hotcopy parameter in svnadmin (svnadmin hotcopy ....) you will be duplicating the behavior of svn-hot-backup.

    Incremental backup using dump based on revision

    This last method is what I use at work. We have our repositories backed up externally and we rely on the backup script to have everything backed up and transferred to the external location within an hour, since our backup strategy is an hourly backup. We have discovered that sometimes the size of a repository can cause problems with the transfer, since the Internet line will not be able to transfer the files across in the allocated time. This happened once in the past with a repository that ended up being 500MB (don't ask :)).

    So in order to minimize the upload time, I have altered the script to dump each repository's revision in a separate file. Here is how it works:

    We backup using rsync. This way the 'old' files are not being transferred.
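
    As a sketch, the rsync call that pushes the dump folder to the off-site machine looks something like this (the user and host below are made up for illustration):

    # transfer only new/changed dump files to the off-site backup host
    rsync --archive --compress /backup/svn/ backupuser@offsite.example.com:/backup/svn/
    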

    Every hour the script loops through each repository name and does the following:

    • Checks if the .latest file exists in the svn-latest folder. If not, then it sets the LASTDUMP variable to 0.
    • If the file exists, it reads it and obtains the number stored in that file. It then stores that number incremented by 1 in the LASTDUMP variable.
    • Checks the number of the latest revision and stores it in the LASTREVISION variable
    • It loops through the repository, dumps each revision (LASTDUMP to LASTREVISION) and compresses it

    This method creates new files every hour so long as new code has been added to each repository via commits. The rsync command will then pick up only the new files and nothing else, therefore the data transferred is reduced to a bare minimum, easily allowing for hourly external backups. With this method we can also restore a single revision in a repository if we need to (see the restore sketch after the script below).

    The script that achieves that is as follows:

    #!/bin/bash
    
    # Grab listing of folders and dump each
    # repository accordingly
    
    SVNFLD="/var/svn"
    BACKUPFLD="/backup"
    CHECKFLD=$BACKUPFLD/svn-latest
    
    for i in $(ls -1v $SVNFLD); do
        if [ $i != 'conf' ]; then
            # Find out what our 'start' will be
            if [ -f $CHECKFLD/$i.latest ]
            then
                LATEST=$(cat $CHECKFLD/$i.latest)
                LASTDUMP=$(($LATEST + 1))
            else
                LASTDUMP=0
            fi
    
            # This is the 'end' for the loop
            LASTREVISION=$(svnlook youngest $SVNFLD/$i/)
    
            for ((r=$LASTDUMP; r<=$LASTREVISION; r++)); do
                svnadmin dump $SVNFLD/$i/ --revision $r > $BACKUPFLD/$i-$r.svn.dump
                tar cvfz $BACKUPFLD/svn/$i-$r.tgz $BACKUPFLD/$i-$r.svn.dump
                rm -f $BACKUPFLD/$i-$r.svn.dump
                echo $r > $CHECKFLD/$i.latest
            done
        fi
    done
    
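
    To restore from these per-revision dumps (as mentioned above), the revisions are loaded back in order with svnadmin load. A minimal sketch, with an illustrative repository name; test your own restore path end to end before relying on it:

    # recreate the repository and load each dumped revision in order
    svnadmin create /var/svn/myrepo
    for f in $(ls -1v /backup/svn/myrepo-*.tgz); do
        tar xzf $f --to-stdout | svnadmin load /var/svn/myrepo
    done
    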

    Conclusion

    You must always backup your data. The frequency is dictated by the rate that your data updates and how critical your data is. I hope that the methods presented in this blog post will complement your programming and source control should you choose to adopt them.

  • 2010-01-10 12:00:00

    Create a SSL Certificate in Linux

    There are times that I want to set up a secure communication with the server I am working on. This might be because I want to run phpMyAdmin over SSL (I do not like unencrypted communications over the Internet), install a certificate for an eShop for a client or just for my personal use.

    The first time I did this, I had to research it on the Internet and, after a bit of trial and error, I managed to get everything working. However, if you do not do something on a regular basis you will forget it. I am no exception to this rule, hence this post to help me remember what I did and hopefully help you too.

    Prerequisites:

    This how-to assumes that you are running Gentoo, however these instructions can easily be applied to any other Linux distribution.

    I need to check if openssl is installed:

    vanadium ~ # emerge --pretend dev-libs/openssl
    
    These are the packages that would be merged, in order:
    
    Calculating dependencies... done!
    [ebuild  R  ] dev-libs/openssl-0.9.8l-r2
    

    If you do not see the [R] next to the package (and you see an N for instance), that means that you need to install the package. Issuing:

    vanadium  ~ # emerge --verbose dev-libs/openssl
    

    will do the trick.

    Generate the Private Key

    I like to generate keys with a lot of bits. All of my certificates have 4096 bits. This is a personal preference and it does not hurt to keep that value. Your host or Signing Authority (like GoDaddy, VeriSign, Thawte etc.) might ask you in their instructions to generate one with 2048 bits so don't be alarmed there.

    Creating the RSA private key with 4096 bits using Triple-DES:

    vanadium ~ # openssl genrsa -des3 -out /root/vanadium.niden.net.locked.key 4096
    Generating RSA private key, 4096 bit long modulus
    .............................................................++
    ...........++
    e is 65537 (0x10001)
    Enter pass phrase for /root/vanadium.niden.net.locked.key:
    Verifying - Enter pass phrase for /root/vanadium.niden.net.locked.key:
    

    Remove the passphrase from the Private Key

    The key that was created earlier has a passphrase. Although this is good, it does have a side effect that any web server administrator does not like - the passphrase itself. Once the certificate is installed using the key (with the passphrase), every time that Apache is restarted, it will prompt the operator for the passphrase. This can be very inconvenient if your web server reboots in the middle of the night. Since Apache will be waiting for the passphrase, your site will be inaccessible.

    To avoid this inconvenience, I am removing the passphrase from the key. If you noticed, the key that I created above has 'locked' in its name; that is so I know that that particular key still has the passphrase on it. I first need to copy the key and then remove the passphrase:

    vanadium ~ # cp -v vanadium.niden.net.locked.key vanadium.niden.net.key
    `vanadium.niden.net.locked.key' -> `vanadium.niden.net.key'
    vanadium ~ # openssl rsa -in vanadium.niden.net.locked.key -out vanadium.niden.net.key
    Enter pass phrase for vanadium.niden.net.locked.key:
    writing RSA key
    

    Generate the Certificate Signing Request (CSR)

    The purpose of the CSR is to be sent to one of the Certificate Authorities (GoDaddy, VeriSign, Thawte etc.) for verification. Alternatively I can self-sign the CSR (see below).

    Upon generation of this CSR I am asked for particular pieces of information to be incorporated in the CSR. The most important piece of information that I need to ensure is correct is the Common Name. The answer to that question has to be the name of my web server - vanadium.niden.net in my case.

    NOTE: I am using the key without the passphrase.

    The command to generate the CSR is as follows:

    vanadium ~ # openssl req -new -key vanadium.niden.net.key -out vanadium.niden.net.csr
    You are about to be asked to enter information that will be incorporated
    into your certificate request.
    What you are about to enter is what is called a Distinguished Name or a DN.
    There are quite a few fields but you can leave some blank
    For some fields there will be a default value,
    If you enter '.', the field will be left blank.
    -----
    Country Name (2 letter code) [AU]:US
    State or Province Name (full name) [Some-State]:Virginia
    Locality Name (eg, city) []:Arlington
    Organization Name (eg, company) [Internet Widgits Pty Ltd]:niden.net
    Organizational Unit Name (eg, section) []:IT
    Common Name (eg, YOUR name) []:vanadium.niden.net
    Email Address []:[email protected]
    
    Please enter the following 'extra' attributes
    to be sent with your certificate request
    A challenge password []:
    An optional company name []:
    

    Once this step is completed, I open the CSR file with a text editor, copy the contents and paste them in the relevant field of the Certification Authority (in my case GoDaddy), so that they can verify the CSR and issue the certificate.

    If however this is a development box or you do not want your certificate signed by a Certification Authority, you can check the section below on how to generate a self-signed certificate.

    Generating a Self-Signed Certificate

    At this point you will need to generate a self-signed certificate because you either don't plan on having your certificate signed by a CA, or you wish to test your new SSL implementation while the CA is signing your certificate. This temporary certificate will generate an error in the client browser to the effect that the signing certificate authority is unknown and not trusted.

    To generate a temporary certificate which is good for 365 days, issue the following command:

    vanadium ~ # openssl x509 -req -days 365 -in vanadium.niden.net.csr -signkey vanadium.niden.net.key -out vanadium.niden.net.crt
    Signature ok
    subject=/C=US/ST=Virginia/L=Arlington/O=niden.net/OU=IT/CN=vanadium.niden.net/emailAddress=[email protected]
    Getting Private key
    
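
    Before installing the certificate it does not hurt to inspect it and confirm that the Common Name and validity dates are what I expect (an optional sanity check):

    vanadium ~ # openssl x509 -in vanadium.niden.net.crt -noout -subject -dates
    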

    Installation

    For my system, the certificates are kept under /etc/ssl/apache2/ so I am going to copy them there (you need to adjust the instructions below to suit your system/installation):

    vanadium ~ # cp -v vanadium.niden.net.key /etc/ssl/apache2/
    vanadium ~ # cp -v vanadium.niden.net.crt /etc/ssl/apache2/
    

    I also need to open the relevant file to enable the certificate

    vanadium ~ # nano -w /etc/apache2/vhosts.d/00_default_ssl_vhost.conf
    

    In that file I need to change the following directives:

    SSLCertificateFile /etc/ssl/apache2/vanadium.niden.net.crt
    SSLCertificateKeyFile /etc/ssl/apache2/vanadium.niden.net.key

    If my certificate was issued by a Certificate Authority, the bundle of files I received also includes the CA certificate file. I can enable it with the following directive:

    SSLCACertificateFile /etc/ssl/apache2/vanadium.niden.net.ca-bundle.crt
    

    Restarting Apache

    vanadium ~ # /etc/init.d/apache2 restart
     * Stopping apache2 ...                              [ ok ]
     * Starting apache2 ...                              [ ok ]
    

    Navigating to https://vanadium.niden.net should tell me if what I did was successful or not. If your browser (Google Chrome in my case) gives you a bright red screen with all sorts of warnings, that means that

    • either you self signed the certificate - in which case it complains about the certificate not being signed by a Certificate Authority or
    • you made a mistake and the Common Name in the certificate is not the same as the host name.

    Both these errors are easy to fix.
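
    A handy way to see exactly which certificate the server is presenting (useful when debugging either of the errors above) is openssl's built-in client:

    vanadium ~ # openssl s_client -connect vanadium.niden.net:443
    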

  • 2009-12-10 12:00:00

    Faster rsync and emerge in Gentoo

    Scenario

    Recently I have started setting up a cluster of 7 Gentoo boxes for a project I am working on. The problem with boxes coming right out of the setup process of a hosting company is that they do not contain the packages that you need. Therefore you need to set up your USE flags and emerge the packages you require as per the role of every box.

    I have implemented the following procedure many times in my local networks (since I have more than one Gentoo box) and have also implemented the same process at work (we run 3 Gentoo boxes).

    The way to speed up rsync and emerge is to run a local rsync mirror and to use http-replicator. This will not make the packages compile faster, but it will reduce the resource usage of your network (downloads in particular), since each package will be downloaded only once, and it will reduce the time you have to wait for each package to be downloaded. The same applies to the rsync tree.

    As I said, my network has 7 boxes: 5 of them are going to be used as web servers, so effectively they have the same USE flags, and 2 as database servers. For the purposes of this tutorial I will name the web servers ws1, ws2, ws3, ws4, ws5 and the database servers db1, db2. The ws1 box will be used as the local rsync mirror and will run http-replicator.

    I am going to set up the /etc/hosts file on each machine so that the local network is resolved in each box and no hits to the DNS are required. So for my network I have:

    10.13.18.101  ws1
    10.13.18.102  ws2
    10.13.18.103  ws3
    10.13.18.104  ws4
    10.13.18.105  ws5
    10.13.18.201  db1
    10.13.18.202  db2
    

    Modify the above to your specific setup needs.

    Setting up a local rsync

    Server setup (ws1)

    There is a really good tutorial in the Gentoo Documentation, but here is the short version:

    The ws1 box already has the rsync package installed. All I need to do is start the daemon. Some configuration is necessary before I start the service:

    nano -w /etc/rsyncd.conf
    

    and what I should have in there is:

    # Restrict the number of connections
    max connections = 5
    # Important!! Always use chroot
    use chroot = yes
    # Just in case you are allowed only read only access
    read only = yes
    # The user has no privileges
    uid = nobody
    gid = nobody
    # Recommended: Restrict via IP (subnets or just IP addresses)
    hosts allow = 10.13.18.0/24
    # Everyone else denied
    hosts deny  = *
    
    # The local portage
    [niden-gentoo-portage]
    path = /usr/portage
    comment = niden.net Gentoo Portage tree
    exclude = /distfiles /packages
    

    That's it. Now I add the service to the default runlevel and start the service

    rc-update add rsyncd default
    /etc/init.d/rsyncd start
    

    NOTE: If you have a firewall using iptables, you will need to add the following rule:

    # RSYNC
    -A INPUT --protocol tcp --source 10.13.18.0/24 --match state --state NEW --destination-port 873 --jump ACCEPT
    

    Client setup

    In my clients I need to edit the /etc/make.conf file and change the SYNC directive to:

    SYNC="rsync://ws1/niden-gentoo-portage"
    

    or I can use the IP address:

    SYNC="rsync://10.13.18.101/niden-gentoo-portage"
    

    Note that the path used in the SYNC command is what I have specified as a section in the rsyncd.conf file (niden-gentoo-portage in my setup). This path can be anything you like.

    Testing

    I have already run

    emerge --sync
    

    in the ws1 box, so all I need to do now is run it on my clients. Once I run it I can see the following (at the top of the listing):

    emerge --sync
    >>> Starting rsync with rsync://10.13.18.101/niden-gentoo-portage...
    receiving incremental file list
    ......
    

    So everything works as I expect it.
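
    One thing worth doing on ws1 itself (my own addition to the setup; adjust the schedule to taste) is to keep the local mirror fresh automatically with a nightly cron entry, so the clients always sync against a recent tree:

    # on ws1: sync the local Portage tree from an official mirror once a day
    30 3 * * * /usr/bin/emerge --sync > /dev/null 2>&1
    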

    Setting up http-replicator

    http-replicator is a proxy server. When a machine (the local or a remote) requests a package, http-replicator checks its cache and if the file is there, it passes it to the requesting machine. If the file doesn't exist though, http-replicator downloads it from a mirror and then passes it to the requesting machine. The file is then kept in http-replicator's cache for future requests. This way I save on resources by downloading once and serving many times locally.

    Although this might not seem like a 'pure speedup', it will make your installations and updates faster since the download factor will be reduced to a bare minimum. Waiting for packages like mysql, Gnome or others to be downloaded does take a long time. Multiply that time by the number of machines you have on your network and you can see the benefits of having a setup like this.

    Server setup (ws1)

    First of all I need to emerge the package

    emerge http-replicator
    

    Once everything is done I need to change the configuration file to suit my needs:

    nano -w /etc/conf.d/http-replicator
    

    and the file should have:

    GENERAL_OPTS="--dir /var/cache/http-replicator"
    GENERAL_OPTS="$GENERAL_OPTS --user portage"
    DAEMON_OPTS="$GENERAL_OPTS"
    DAEMON_OPTS="$DAEMON_OPTS --alias /usr/portage/packages/All:All"
    DAEMON_OPTS="$DAEMON_OPTS --log /var/log/http-replicator.log"
    DAEMON_OPTS="$DAEMON_OPTS --ip 10.13.18.*"
    ## The proxy port on which the server listens for http requests:
    DAEMON_OPTS="$DAEMON_OPTS --port 8080"
    

    The last line with the --port parameter specifies the port that the http-replicator will listen to. You can change it to whatever you want. Also the --ip parameter restricts who is allowed to connect to this proxy server. I have allowed my whole internal network; change it to suit your needs. Lastly the --dir option is where the cached data is stored. You can change it to whatever you like. I have left it to what it is. Therefore I need to create that folder:

    mkdir /var/cache/http-replicator
    

    Since I have specified that the user that this proxy will run as is portage (see --user directive above) I need to change the owner of my cache folder:

    chown portage:portage /var/cache/http-replicator
    

    I add the service to the default runlevel and start the service

    rc-update add http-replicator default
    /etc/init.d/http-replicator start
    

    NOTE: If you have a firewall using iptables, you will need to add the following rule:

    # HTTP-REPLICATOR
    -A INPUT --protocol tcp --source 10.13.18.0/24 --match state --state NEW --destination-port 8080 --jump ACCEPT
    

    You will also need to regularly run

    repcacheman
    

    and

    rm -rf /usr/portage/distfiles/*
    

    to clear the distfiles folder. I have added those to a bash script and I run it every night using cron, as sketched below.
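
    A minimal sketch of that nightly script (saved, say, as /root/replicator-clean.sh - the name is just an example) would be:

    #!/bin/bash
    # run repcacheman and clear the distfiles folder, as described above
    repcacheman
    rm -rf /usr/portage/distfiles/*
    

    and the crontab entry to run it every night at 02:00:

    0 2 * * * /bin/bash /root/replicator-clean.sh > /dev/null 2>&1
    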

    Client setup

    In my clients I need to edit the /etc/make.conf file and add the http_proxy and RESUMECOMMAND directives:

    http_proxy="http://ws1:8080"
    RESUMECOMMAND=" /usr/bin/wget -t 5 --passive-ftp \${URI} -O \${DISTDIR}/\${FILE}"
    

    I have commented out any previous RESUMECOMMAND statements.

    Testing

    The testing begins in one of the clients (you can choose any package):

    emerge logrotate
    

    and see in the output that everything works fine

    ws2 ~ # emerge logrotate
    Calculating dependencies... done!
    
    >>> Verifying ebuild manifests
    
    >>> Emerging (1 of 1) app-admin/logrotate-3.7.8
    >>> Downloading 'http://distfiles.gentoo.org/distfiles/logrotate-3.7.8.tar.gz'
    --2009-12-10 06:46:47--  http://distfiles.gentoo.org/distfiles/logrotate-3.7.8.tar.gz
    Resolving ws1... 10.13.18.101
    Connecting to ws1|10.13.18.101|:8080... connected.
    Proxy request sent, awaiting response... 200 OK
    Length: 43246 (42K)
    Saving to: `/usr/portage/distfiles/logrotate-3.7.8.tar.gz'
    
    100%[=============================>] 43,246      --.-K/s   in 0s
    
    2009-12-10 06:46:47 (89.6 MB/s) - `/usr/portage/distfiles/logrotate-3.7.8.tar.gz' saved [43246/43246]
    .....
    

    Final thoughts

    Setting up local proxies allows your network to be as efficient as possible. It does not only reduce the download time for your updates but it is also courteous to the Gentoo community. Since mirrors are run by volunteers or non-profit organizations, it is only fair to not abuse the resources by downloading an update more than once for your network.

    I hope this quick guide will help you and your network :)

  • 2009-11-04 12:00:00

    Gentoo Stage 1 Installation

    This is my effort to install Gentoo Linux on my Acer Ferrari LMi 3000.

    The first thing I did was to start ssh. The live CD of Gentoo had already identified my network card which is based on the via-rhine module.

    # /etc/init.d/sshd start
    

    The key is generated and of course I needed to change the password to something known (it is scrambled for security reasons by the Live CD)

    # passwd
    

    Following that I left the notebook where it is and invoked PuTTY from my Windows box to access it. I printed the Gentoo Handbook just in case and had it nearby as a reference.

    I decided to check the hard drive performance. I used

    # hdparm -tT /dev/hda
    

    and it reported:

    /dev/hda:
    Timing cached reads: 1068 MB in 2.00 seconds = 533.81 MB/sec 
    Timing buffered disk reads: 80 MB in 3.05 seconds = 26.22 MB/sec
    

    Just in case I activated DMA:

    # hdparm -d 1 /dev/hda
    

    and it reported:

    /dev/hda:
     setting using_dma to 1 (on)
     using_dma    =  1 (on)
    

    So now, with the hard drive tweaked for max performance and the network working just fine (I am using ssh, so it must be working), I skip to chapter 4 to prepare my disks. I read thoroughly through the installation guide and decided to proceed with the following structure:

    Partition Filesystem  Size             Description
    /dev/hda1 ReiserFS    110 Mb           Boot partition
    /dev/hda2 swap       1024 Mb           Swap partition
    /dev/hda3 ReiserFS   Rest of the Disk  Root partition
    

    This partition scheme is nearly identical to the one used by the guide, except that my choice of filesystem is ReiserFS and I have increased the swap to 1024 MB.

    I used the cfdisk tool that comes with the CD.

    # cfdisk
    

    and in that program I defined:

    Name Flags    Part   Type  FS Type [Label] Size (MB)
    ----------------------------------------------------
    hda1 Boot   Primary  Linux                   106.93
    hda2        Primary  Linux swap             1019.94
    hda3        Primary  Linux                 58884.78
    

    I toggled the Boot flag from the interface after having selected hda1. Once I finished with the partitioning I chose Write and confirmed it so that the partition table is written on the disk. I chose Quit and then rebooted the system just in case.

    # reboot
    

    I restarted the system and it booted again from the Live CD. Again I started sshd after setting a password for the root account. Now it is time to format my partitions. The first one is the boot partition and I chose to label it boot

    # mkreiserfs -l boot /dev/hda1
    

    following that the root partition which was labeled root

    # mkreiserfs -l root /dev/hda3
    

    Finally time to format the swap partition

    # mkswap /dev/hda2
    

    and activate it

    # swapon /dev/hda2
    

    The partitions are now ready so all I have to do is mount them and start the installation.

    # mount /dev/hda3 /mnt/gentoo
    

    I will need to create a boot folder in the newly mounted partition

    # mkdir /mnt/gentoo/boot
    

    and now mount the boot partition in that folder

    # mount /dev/hda1 /mnt/gentoo/boot
    

    Moving on, I need to check the date/time by issuing the following command:

    # date
    

    The time was a bit off so I had to set it using the following command:

    # date 120123042004
    

    (where 12 is the month, 01 the day, 23 the hour, 04 the minute and 2004 the year)

    Now it is time to fetch the tarball. First I change the directory to /mnt/gentoo

    # cd /mnt/gentoo
    

    and then I use the links2 program (I like it better) to navigate through the mirrors and pick one which is closer to me (Austria)

    # links2 http://www.gentoo.org/main/en/mirrors.xml
    

    I chose the Inode network and then navigated to /releases/x86/2004.2/stages/x86 and downloaded the stage1-x86-2004.2.tar.bz2. Following that I unpacked the stage:

    # tar -xvjpf stage1-x86-2004.2.tar.bz2
    

    Then I had to tweak the make.conf file

    # nano /mnt/gentoo/etc/make.conf
    

    My make.conf is as follows:

    USE="-* X aalib acl acpi aim alsa apache2 apm audiofile 
    avi berkdb bidi bindist bitmap-fonts bzlib caps cdr 
    cpdflib crypt cscope ctype cups curl curlwrappers 
    dba dbx dga dio directfb divx4linux dvd dvdr encode 
    ethereal exif fam fastcgi fbcon fdftk flac flash 
    flatfile foomaticdb ftp gd gdbm ggi gif gmp gnome 
    gnutls gphoto2 gpm gtk gtk2 gtkhtml iconv icq imagemagick 
    imap imlib inifile innodb ipv6 jabber jack jikes jpeg 
    kerberos krb4 ladcca lcms ldap libwww mad maildir 
    mailwrapper mbox mcal memlimit mhash mikmod ming mmap mmx 
    motif moznocompose moznoirc moznomail mpeg mpi msn mssql 
    mysql -mysqli nas ncurses netcdf nhc98 nis nls offensive 
    oggvorbis opengl oscar pam pcmcia pcntl pcre pda pdflib 
    perl php pic pie plotutils png pnp posix ppds prelude 
    python quicktime readline samba sasl scanner sdl session 
    shared sharedmem simplexml slang slp snmp soap sockets 
    socks5 speex spell spl ssl svga sysvipc szip tcltk tcpd 
    tetex theora tidy tiff tokenizer truetype trusted uclibc 
    unicode usb vhosts videos wavelan wddx wmf xface xine 
    xml xml2 xmlrpc xmms xosd xprint xsl xv xvid yahoo yaz 
    zeo zlib x86"
    CHOST="i686-pc-linux-gnu"
    
    CFLAGS="-march=athlon-xp -O3 -pipe -fomit-frame-pointer"
    
    CXXFLAGS="${CFLAGS}"
    ACCEPT_KEYWORDS="~x86"
    PORTAGE_TMPDIR=/var/tmp
    PORTDIR=/usr/portage
    DISTDIR=${PORTDIR}/distfiles
    PKGDIR=${PORTDIR}/packages
    PORT_LOGDIR=/var/log/portage
    PORTDIR_OVERLAY=/usr/local/portage
    
    http_proxy="http://taurus.niden.net:8080"
        RESUMECOMMAND="
            /usr/bin/wget 
            -t 5 
            --passive-ftp \${URI} 
            -O \${DISTDIR}/\${FILE}"
    
    GENTOO_MIRRORS="
        http://gentoo.inode.at/ 
        http://gentoo.osuosl.org 
        http://gentoo.oregonstate.edu"
    SYNC="rsync://taurus.niden.net/portage"
    
    MAKEOPTS="-j2"
    
    AUTOCLEAN="yes"
    FEATURES="sandbox"
    

    You will notice that I use

    http_proxy="http://taurus.niden.net:8080" \
        RESUMECOMMAND="
            /usr/bin/wget 
                -t 5 
                --passive-ftp \${URI} 
                -O \${DISTDIR}/\${FILE}"
    

    because I have set up the httpd-replicator on my server and keep a local rsync mirror so that I don’t abuse the internet bandwidth. You will not need these lines on your installation. Additionally I set up my sync mirror to be my local server

    SYNC="rsync://taurus.niden.net/portage"
    

    whereas you will need to use one of the below (the closer to your location the better)

    Default: "rsync://rsync.gentoo.org/gentoo-portage" 
    North America: "rsync://rsync.namerica.gentoo.org/gentoo-portage" 
    South America: "rsync://rsync.samerica.gentoo.org/gentoo-portage" 
    Europe: "rsync://rsync.europe.gentoo.org/gentoo-portage" 
    Asia: "rsync://rsync.asia.gentoo.org/gentoo-portage" 
    Australia: "rsync://rsync.au.gentoo.org/gentoo-portage"
    

    Also I set up some Portage paths which have to be created (PORTDIR_OVERLAY and PORT_LOGDIR):

    # mkdir /mnt/gentoo/usr/local/portage 
    # mkdir /mnt/gentoo/var/log/portage
    

    Before chrooting I need to copy the resolv.conf file in our mounted partition

    # cp -L /etc/resolv.conf /mnt/gentoo/etc/resolv.conf
    

    mount the proc partition

    # mount -t proc none /mnt/gentoo/proc
    

    and chroot to the new environment

    # chroot /mnt/gentoo /bin/bash 
    # env-update 
    # source /etc/profile
    

    Now let us update the portage for the first time

    # emerge sync
    

    and here comes the wait - bootstrapping

    # cd /usr/portage 
    # scripts/bootstrap.sh
    

    The compilation started at 13:30 and finished at 16:14, almost 3 hours later, error free, so I moved on to emerge my whole system.

    # emerge system
    

    73 packages were to be merged and for that I started at 06:00 and finished at 08:27. Not bad for my baby notebook!
    There was one config file that needed updating so I went on and updated it:

    # etc-update
    

    It appeared that there were trivial changes, nothing to report.

    So now off to set our timezone. For me it is Vienna, Austria. A little look at my system with:

    # ls /usr/share/zoneinfo
    

    reveals a Europe folder which in turn has the Vienna zone. Hence the command to set the link to my timezone:

    # ln -sf /usr/share/zoneinfo/Europe/Vienna /etc/localtime
    

    Okay now to the easy stuff. We need to grab a kernel. From the choices (and the handbook plus the Gentoo Kernel Guide have a wealth of information helping you choose) I opted for gentoo-dev-sources.

    # emerge gentoo-dev-sources
    

    Before choosing what I need for my kernel (modules or built in), I went and read the Gentoo udev guide. I chose to go with udev since this is the way things are moving and I might as well get a head start with it.

    First I need to emerge the udev, which will emerge baselayout and hotplug

    # emerge udev
    

    and also coldplug for boot support on plugged devices

    # emerge coldplug
    

    Now I have to compile the kernel. This requires a bit more attention so I tried to get it right the first time.

    # cd /usr/src/linux 
    # make menuconfig
    

    After making my choices I compile the kernel

    # make && make modules_install
    

    I installed the kernel by copying the relevant file in my boot partition:

    # cp arch/i386/boot/bzImage /boot/kernel-2.6.10-gentoo-r4
    

    I also copy the System.map and the .config file just in case:

    # cp System.map /boot/System.map-2.6.10-gentoo-r4 
    # cp .config /boot/config-2.6.10-gentoo-r4
    

    At this point I need to sort out the fstab file for my system to load properly.

    # nano -w /etc/fstab
    

    My fstab is as follows:

    /dev/hda1 /boot      reiserfs noauto,noatime,notail 1 2
    /dev/hda2 none       swap     defaults              0 0
    /dev/hda3 /          reiserfs noatime               0 1
    none      /proc      proc     defaults              0 0
    none      /dev/pts   devpts   defaults              0 0
    none      /dev/shm   tmpfs    defaults              0 0
    none      /sys       sysfs    defaults              0 0
    /dev/hdc  /mnt/cdrom auto     noauto,ro             0 0
    

    What follows is the host name, domain name and network configuration.

    Hostname

    # nano -w /etc/conf.d/hostname
    

    Domain name

    # nano -w /etc/conf.d/domainname
    

    Adding the domain name to the default runlevel

    # rc-update add domainname default
    

    There is no need for me to touch the /etc/conf.d/net file since I will be using DHCP for my LAN. I won’t add the network to the default runlevel either, since I usually connect via the wireless interface rather than the wired LAN one - more on that a bit later.

    Finally I need to set up the hosts file:

    # nano -w /etc/hosts
    

    with the available hosts in my network.

    What follows is the PCMCIA setup. This is handled by emerging the pcmcia-cs package (note that I am using the -X USE flag since I don’t want xorg-x11 to be installed now - the handbook is king!)

    # USE="-X" emerge pcmcia-cs
    

    A critical dependency is dhcpcd. I need to emerge it so that I can obtain an IP address from my router

    # emerge dhcpcd
    

    Also critical is to set the root password

    # passwd
    

    I am also emerging pciutils. This will give me lspci later on

    # emerge pciutils
    

    Now is the time for the system tools. I will install a system logger, a cron daemon, file system tools and bootloader.

    System Logger - I chose syslog-ng.

    # emerge syslog-ng
    

    and added it to the default runlevel

    # rc-update add syslog-ng default
    

    Cron daemon - I chose vixie-cron

    # emerge vixie-cron
    

    and added it to the default runlevel

    # rc-update add vixie-cron default
    

    File System tools - Naturally I need reiserfsprogs due to my file system

    # emerge reiserfsprogs
    

    Bootloader - I chose grub.

    # emerge grub
    

    Once grub was compiled, I set up my grub.conf

    # nano -w /boot/grub/grub.conf
    
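
    For reference, with the partition layout above (boot on /dev/hda1, root on /dev/hda3) and the kernel image I copied earlier, a minimal grub.conf looks roughly like this (a sketch - adjust the kernel file name and root device to your setup):

    default 0
    timeout 10
    title Gentoo Linux 2.6.10-gentoo-r4
    root (hd0,0)
    kernel /boot/kernel-2.6.10-gentoo-r4 root=/dev/hda3
    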

    Now let us set grub properly by updating the /etc/mtab

    # cp /proc/mounts /etc/mtab
    

    and grub-install will finish the job

    # grub-install --root-directory=/boot /dev/hda
    

    Finally we are ready to reboot the system.

    Exit the chrooted environment

    # exit
    

    change directory to the root of the Live CD

    # cd /
    

    unmount the mounted partitions

    # umount /mnt/gentoo/boot/ /mnt/gentoo/proc/ /mnt/gentoo/
    

    and reboot

    # reboot
    

    Make sure you eject the CD when the system reboots because you don’t want to boot from it.

    Well, it appears to be OK so far: the grub menu showed up and, after the whole boot sequence, I had my first Linux login.