• 2016-09-29

    Migrating a SVN repository to Git (Bitbucket)

    Preface

    This article explains how to migrate an SVN repository to Git. Although this guide uses Bitbucket as the Git host, you can easily adjust the steps to migrate to a different Git repository provider.

    I was recently tasked with migrating a repository from SVN to Git (Bitbucket). I tried the importer from Bitbucket, but it failed due to corruption in our SVN repository.

    So I had no alternative but to do things by hand. Below is the process I used, along with some gotchas.

    Authors

    SVN stores just a username with every commit, so nikos could be one user and Nikos could be another. Git, however, also stores the email of the user, and to make things work perfectly we need to create an authors.txt file which contains the mapping between the SVN users and the Git users.

    NOTE The authors.txt file is not strictly necessary for the migration. It only helps map the SVN usernames to your current users (in your Git installation).

    The format of the file is simple:

    captain = Captain America <[email protected]>
    

    If you already have the file ready, skip the command below. Alternatively, you can generate the authors.txt file by running the following command in your SVN project folder:

    svn log -q | \
        awk -F '|' '/^r/ {sub("^ ", "", $2); sub(" $", "", $2); print $2" = "$2" <"$2">"}' | \
        sort -u > authors.txt
    
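
    To sanity-check what that pipeline produces, here is a self-contained run of the same awk command against canned `svn log -q` output (the usernames here are hypothetical):

```shell
# Canned `svn log -q` output; real output has the same "rN | user | date" shape
sample='------------------------------------------------------------------------
r2 | captain | 2016-09-01 10:00:00 +0000 (Thu, 01 Sep 2016) | 1 line
------------------------------------------------------------------------
r1 | nikos | 2016-08-31 09:00:00 +0000 (Wed, 31 Aug 2016) | 1 line
------------------------------------------------------------------------'

printf '%s\n' "$sample" | \
    awk -F '|' '/^r/ {sub("^ ", "", $2); sub(" $", "", $2); print $2" = "$2" <"$2">"}' | \
    sort -u
```

    Each username comes out as name = name <name>, one line per user; you then edit each line by hand into the full-name/email format shown earlier.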

    Conventions

    • The source SVN repository is called SVNSOURCE
    • The target GIT repository is called GITTARGET
    • The SVN URL is https://svn.avengers.org/svn

    Commands

    Create a work folder and cd into it

    mkdir source_repo
    cd source_repo/
    

    Initialize the Git repository and copy the authors file in it

    git svn init https://svn.avengers.org/svn/SVNSOURCE/ --stdlayout 
    cp ../authors.txt .
    

    Set up the authors mapping file in the config

    git config svn.authorsfile authors.txt
    

    Check the config just in case

    git config --local --list
    

    The output should be something like this:

    core.repositoryformatversion=0
    core.filemode=true
    core.bare=false
    core.logallrefupdates=true
    svn-remote.svn.url=https://svn.avengers.org/svn/SVNSOURCE
    svn-remote.svn.fetch=trunk:refs/remotes/trunk
    svn-remote.svn.branches=branches/*:refs/remotes/*
    svn-remote.svn.tags=tags/*:refs/remotes/tags/*
    svn.authorsfile=authors.txt
    

    Get the data from SVN (rerun the command if there is a timeout or proxy error)

    git svn fetch
    

    Check the status of the repository and the branches

    git status
    git branch -a
    

    Create the new bare work folder

    cd ..
    mkdir source_bare
    cd source_bare/
    

    Initialize the bare folder and map the trunk

    git init --bare .
    git symbolic-ref HEAD refs/heads/trunk
    

    Return to the work folder

    cd ..
    cd source_repo/
    

    Add the bare repo as the remote and push the data to it

    git remote add bare ../source_bare/
    git config remote.bare.push 'refs/remotes/*:refs/heads/*'
    git push bare
    

    Return to the bare work folder and check the branches

    cd ..
    cd source_bare/
    git branch
    

    Rename trunk to master

    git branch -m trunk master
    

    Note all the branches that are prefixed with tags/ and adjust the lines below (repeating them as many times as necessary) to convert the SVN tags to Git tags

    git tag 3.0.0 refs/heads/tags/3.0.0
    ...
    git branch -D tags/3.0.0
    ...
    

    Alternatively you can put the following in a script and run it:

    git for-each-ref --format='%(refname)' refs/heads/tags | \
    cut -d / -f 4 | \
    while read ref
    do
      git tag "$ref" "refs/heads/tags/$ref";
      git branch -D "tags/$ref";
    done
    
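
    If you want to convince yourself that the loop behaves before running it on the real repository, here is a self-contained sanity check in a throwaway repository (the tag name 1.0.0 is hypothetical):

```shell
# Build a scratch repo with a tags/1.0.0 branch, run the same loop as above,
# and confirm the branch has become a proper tag.
tmp="$(mktemp -d)"
cd "$tmp"
git init -q .
git -c user.email=you@example.com -c user.name=you \
    commit -q --allow-empty -m 'initial commit'
git branch tags/1.0.0

git for-each-ref --format='%(refname)' refs/heads/tags | \
cut -d / -f 4 | \
while read ref
do
  git tag "$ref" "refs/heads/tags/$ref";
  git branch -D "tags/$ref" >/dev/null;
done

git tag    # prints: 1.0.0
```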

    Check the branches and the new tags

    git branch
    git tag
    

    Check the authors

    git log
    

    Push the repository to Bitbucket

    git push --mirror [email protected]:avengers/GITTARGET
    

    Enjoy

  • 2015-10-29 21:15:00

    Setting up AWS RDS MySQL replication offsite

    Preface

    Recently I worked on setting up replication between our AWS RDS instance and a server running as a MySQL slave at our location. Although the task was not difficult, there are quite a few areas that one needs to pay attention to.

    In this blog post, I am going to outline the process step by step, so that others can benefit and not lose time trying to discover what went wrong.

    Disclaimer: Google, DuckDuckGo, the AWS forums and this blog post have been invaluable guides to help me do what I needed to do.

    Setup

    • One RDS MySQL or Aurora instance (Master)
    • One server running MySQL in your premises (or wherever you want to put it) (Slave)
    • Appropriate access granted on the master for the IP of the slave

    Master Setup

    There is little to do on our master (RDS). Depending on the database size and update frequency, we need to set the maximum retention time for the binary logs. For a very large database we need a high number, so that there is enough time to export the database from the master, import it into the slave, and start replication.

    Connect to your database and run the following command:

    MySQL [(none)]> call mysql.rds_set_configuration('binlog retention hours', 24);
    

    You can use a different number of hours; I am using 24 for this example.

    Slave Setup

    I am assuming that MySQL is installed on the machine that has been designated as the slave, and that the machine has ample space for the actual data as well as the binary logs that will be created for the replication.

    Edit my.cnf

    The location of this file is usually under /etc or /etc/mysql. Depending on your distribution it might be located elsewhere.

    [mysqld]
    ...
    
    #bind-address = 0.0.0.0
    
    # Logging and Replication
    general_log_file  = /logs/mysql.log
    general_log       = 1
    log_error         = /logs/mysql_safe.log
    log_slow_queries  = /logs/mysql-slow.log
    long_query_time   = 2
    slave-skip-errors = 1062
    log-queries-not-using-indexes
    
    server-id         = 1234567
    log_bin           = /logs/mysql-bin.log
    expire_logs_days  = 2
    max_binlog_size   = 100M
    

    Note: The configuration file will contain a lot more entries but the ones above are the ones you need to pay attention to.

    • bind-address: We need to comment this line so that we can connect to the instance from somewhere else in the network. Keep this line if you are going to work only on the slave machine and allow no connections from elsewhere.
    • general_log_file: The location of your query log file. You can disable this (see next entry) but it is always good to keep it on at least at the start, to ensure that replication is moving smoothly. Tailing that log will give you a nice indicator of the activity in your database.
    • general_log: Enable or disable the general log
    • log_error: Where to store the errors log
    • log_slow_queries: Where to store the slow queries log. Especially helpful in identifying bottlenecks in your application
    • long_query_time: The time threshold that defines what counts as a slow query
    • slave-skip-errors: 1062 is the "1062 | Error 'Duplicate entry 'xyz' for key 1' on query. Default database: 'db'. Query: 'INSERT INTO ...'" error. Helpful especially when the replication starts.
    • log-queries-not-using-indexes: We want this because it can help identify potential bottlenecks in the application
    • server-id: A unique ID for your slave instance.
    • log_bin: Where the binary replication logs are kept
    • expire_logs_days: How long to keep the replication logs for
    • max_binlog_size: Maximum replication log size (per file)

    Once you set these up, restart your MySQL instance

    /etc/init.d/mysql restart
    
    Download the SSL CA bundle for RDS

    In your slave server, navigate to /etc/mysql and download the rds-combined-ca-bundle.pem file. This file will be used by the slave to ensure that all the replication traffic is done using SSL and nobody can eavesdrop on your data in transit.

    cd /etc/mysql
    wget http://s3.amazonaws.com/rds-downloads/rds-combined-ca-bundle.pem
    

    NOTE You can put the rds-combined-ca-bundle.pem anywhere on your slave. If you change the path, you will have to modify the command to connect the slave to the master (shown further below) to specify the exact location of the key.

    Import timezone data

    This step might not be necessary depending on your MySQL installation. However since RDS works with UTC, you might find your replication breaking because your slave MySQL instance cannot understand the UTC timezone. The shell command you need to run on your slave machine to fix this is:

    mysql_tzinfo_to_sql /usr/share/zoneinfo | mysql -u root mysql
    
    Creating the RDS related tables

    RDS uses its own tables to keep track of the replication status and other related data such as the replication heartbeat, configuration etc. Those tables need to be present in the mysql database of your slave in order for the replication to work.

    DROP TABLE IF EXISTS `rds_configuration`;
    CREATE TABLE `rds_configuration` (
      `name` varchar(100) NOT NULL,
      `value` varchar(100) DEFAULT NULL,
      `description` varchar(300) NOT NULL,
      PRIMARY KEY (`name`)
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
    
    DROP TABLE IF EXISTS `rds_global_status_history`;
    CREATE TABLE `rds_global_status_history` (
      `collection_end` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
      `collection_start` timestamp NULL DEFAULT NULL,
      `variable_name` varchar(64) NOT NULL,
      `variable_value` varchar(1024) NOT NULL,
      `variable_delta` int(20) NOT NULL,
      PRIMARY KEY (`collection_end`,`variable_name`)
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
    
    DROP TABLE IF EXISTS `rds_global_status_history_old`;
    CREATE TABLE `rds_global_status_history_old` (
      `collection_end` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
      `collection_start` timestamp NULL DEFAULT NULL,
      `variable_name` varchar(64) NOT NULL,
      `variable_value` varchar(1024) NOT NULL,
      `variable_delta` int(20) NOT NULL,
      PRIMARY KEY (`collection_end`,`variable_name`)
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
    
    DROP TABLE IF EXISTS `rds_heartbeat2`;
    CREATE TABLE `rds_heartbeat2` (
      `id` int(11) NOT NULL,
      `value` bigint(20) DEFAULT NULL,
      PRIMARY KEY (`id`)
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
    
    DROP TABLE IF EXISTS `rds_history`;
    CREATE TABLE `rds_history` (
      `action_timestamp` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
      `called_by_user` varchar(50) NOT NULL,
      `action` varchar(20) NOT NULL,
      `mysql_version` varchar(50) NOT NULL,
      `master_host` varchar(255) DEFAULT NULL,
      `master_port` int(11) DEFAULT NULL,
      `master_user` varchar(16) DEFAULT NULL,
      `master_log_file` varchar(50) DEFAULT NULL,
      `master_log_pos` mediumtext,
      `master_ssl` tinyint(1) DEFAULT NULL
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
    
    DROP TABLE IF EXISTS `rds_replication_status`;
    CREATE TABLE `rds_replication_status` (
      `action_timestamp` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
      `called_by_user` varchar(50) NOT NULL,
      `action` varchar(20) NOT NULL,
      `mysql_version` varchar(50) NOT NULL,
      `master_host` varchar(255) DEFAULT NULL,
      `master_port` int(11) DEFAULT NULL
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
    
    DROP TABLE IF EXISTS `rds_sysinfo`;
    CREATE TABLE `rds_sysinfo` (
      `name` varchar(25) DEFAULT NULL,
      `value` varchar(50) DEFAULT NULL
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
    

    NOTE I am not 100% sure that all of these tables are needed. I have only seen rds_heartbeat2 and rds_replication_status used. You can experiment when you enable replication, adding each table in turn if needed. You can confirm whether the above are correct for your instance by connecting to the master and taking a mysqldump of the mysql database.

    Replication

    Replication user

    We need to create a user in our master database that has the appropriate rights to perform all the replication-related actions. These commands need to be run on the master. For this example I am creating a user called rpluser with the password 424242:

    MySQL [(none)]> CREATE USER 'rpluser'@'%' IDENTIFIED BY '424242';
    MySQL [(none)]> GRANT REPLICATION SLAVE ON *.* TO 'rpluser'@'%';
    
    Master Status

    Connect to your master and issue this command:

    MySQL [(none)]> show master status;
    

    The output will be something like this:

    +----------------------------+----------+--------------+------------------+-------------------+
    | File                       | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
    +----------------------------+----------+--------------+------------------+-------------------+
    | mysql-bin-changelog.000123 |   171819 |              |                  |                   |
    +----------------------------+----------+--------------+------------------+-------------------+
    

    Keep those values handy (File and Position) since we will use them to instruct the slave where to start requesting data from the master (binlog file and position).
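
    Since File and Position are needed again further below, you can also pull them out programmatically. This is a minimal sketch run against canned output; in real use, replace the status variable with the result of a mysql -e 'SHOW MASTER STATUS\G' call (the \G terminator prints the columns vertically, which is easier to parse):

```shell
# Parse File and Position out of the vertical (\G) form of SHOW MASTER STATUS
status='*************************** 1. row ***************************
             File: mysql-bin-changelog.000123
         Position: 171819'

file="$(printf '%s\n' "$status" | awk '$1 == "File:"     {print $2}')"
pos="$(printf '%s\n' "$status"  | awk '$1 == "Position:" {print $2}')"
echo "MASTER_LOG_FILE='$file', MASTER_LOG_POS=$pos"
```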

    mysqldump

    Take a database dump of all the databases in RDS (excluding information_schema and mysql). If your database can afford a bit of downtime, you can use the --opt flag of mysqldump, which will lock all tables until the backup completes. If not, you can use the --skip-add-locks flag. More information about mysqldump options can be found in the MySQL documentation.

    mysqldump --host=myinstance.rds.amazonaws.com --user='root' --password my_db > /backups/my_db.sql
    

    Adjust the above command to fit your needs. Once all databases have been dumped, we need to import them into the slave.

    Importing data in the slave

    Navigate to the folder containing all the *.sql dump files, connect to the slave database, and start sourcing them.

    cd /backups
    mysql --host=192.168.1.2 --user='root' --password
    MySQL [(none)]> create database my_db;
    MySQL [(none)]> use my_db;
    MySQL [my_db]> source my_db.sql;
    

    Repeat the process of creating the database, using it and sourcing the dump file until all your databases have been imported.
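
    The create/use/source cycle can also be scripted. Below is a minimal sketch with a hypothetical import_dumps helper; it assumes the dumps live in /backups and that each file is named after its database, and it leaves the actual mysql invocations commented out so you can review the plan before running it:

```shell
# Derive the database name from each dump file and announce the import;
# uncomment the mysql lines to actually run the imports against the slave.
import_dumps() {
    dir="$1"
    for dump in "$dir"/*.sql; do
        [ -e "$dump" ] || continue          # no dumps found, nothing to do
        db="$(basename "$dump" .sql)"       # e.g. my_db.sql -> my_db
        echo "importing $dump into database $db"
        # mysql --host=192.168.1.2 --user=root -p \
        #     -e "CREATE DATABASE IF NOT EXISTS \`$db\`"
        # mysql --host=192.168.1.2 --user=root -p "$db" < "$dump"
    done
}

import_dumps /backups
```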

    NOTE There are other ways of doing the above, such as piping the results directly into the database, or even having the slave pull the data straight from RDS without a mysqldump. Whichever way you choose is up to you. In my experience, the direct import worked for a while, until our database grew to a point where it was timing out or breaking during the import, so I opted for the multi-step approach. Have a look at this section in the AWS RDS documentation for more options.

    Connecting to the master

    Once the restore has completed, it is time to connect the slave to the master. In order to do that, we need to verify the following:

    • The name of the RDS instance (for the command below I will use myinstance.rds.amazonaws.com)
    • The name of the replication user (we chose rpluser)
    • The password of the replication user (we chose 424242)
    • The master log file (see above, we got mysql-bin-changelog.000123 from show master status;)
    • The master log file position (see above, we got 171819)
    • The location of the SSL certificate (we used /etc/mysql/rds-combined-ca-bundle.pem)

    The command we need to run on the slave MySQL server is (newlines added for readability):

    MySQL [(none)]> CHANGE MASTER TO 
        -> MASTER_HOST='myinstance.rds.amazonaws.com', 
        -> MASTER_USER='rpluser', 
        -> MASTER_PASSWORD='424242', 
        -> MASTER_LOG_FILE='mysql-bin-changelog.000123', 
        -> MASTER_LOG_POS=171819, 
        -> MASTER_SSL=1, 
        -> MASTER_SSL_CERT='', 
        -> MASTER_SSL_CA='/etc/mysql/rds-combined-ca-bundle.pem', 
        -> MASTER_SSL_KEY='';
    
    Starting the replication

    All we have to do now is to start the slave:

    MySQL [(none)]> START SLAVE;
    

    We can check if everything is OK either by using the general log (see the my.cnf section), tailing it from the shell:

    tail -f /logs/mysql.log
    

    or by issuing this command on the mysql prompt:

    MySQL [(none)]> SHOW SLAVE STATUS\G
    *************************** 1. row ***************************
                   Slave_IO_State: Waiting for master to send event
                      Master_Host: myinstance.rds.amazonaws.com
                      Master_User: rpluser
                      Master_Port: 3306
                    Connect_Retry: 60
                  Master_Log_File: mysql-bin-changelog.000123
              Read_Master_Log_Pos: 171819
                   Relay_Log_File: mysqld-relay-bin.000002
                    Relay_Log_Pos: 123
    ...
                       Last_Errno: 0
                       Last_Error: 
    ...
               Master_SSL_Allowed: Yes
               Master_SSL_CA_File: /etc/mysql/rds-combined-ca-bundle.pem
               Master_SSL_CA_Path: 
                  Master_SSL_Cert: 
                Master_SSL_Cipher: 
                   Master_SSL_Key: 
    ...
          Slave_SQL_Running_State: Slave has read all relay log; waiting for the slave I/O thread to update it
    ...
    

    Congratulations on your working RDS slave (offsite) machine :)

    Conclusion

    This blog post by no means exhausts all the topics that replication covers. For additional information please see the references below.

    I hope you find this post helpful :)

    References

  • 2015-10-18 13:09:00

    Using Traits to add more functionality to your classes in PHP

    Traits are a mechanism for code reuse in single inheritance languages such as PHP.

    A Trait is similar to a class, but only intended to group functionality in a fine-grained and consistent way. It is not possible to instantiate a Trait on its own. It is an addition to traditional inheritance and enables horizontal composition of behavior; that is, the application of class members without requiring inheritance. (Source: PHP Manual)

    Traits were introduced in PHP 5.4.0. However, a lot of developers have not yet embraced them and taken advantage of the power that they offer.

    As mentioned in the snippet from the PHP manual above, Traits are a mechanism to reuse code, making your code more DRY.

    Let's have a look at a real life example of how Traits can help you with your Phalcon project, or any project you might have.

    Models

    With Phalcon, we have model classes, each of which pretty much represents a table in our database and allows us to interact with a record or a resultset for the needs of our application.

    Scenario

    We have an application where we need to store information about Companies. Each Company can have one or more Customers as well as one or more Employees. We chose to store that information in three different tables.

    For each Employee or Customer, we need to store their first name, middle name and last name. However we also need to be able to show the full name in this format:

    <Last name>, <First Name> <Middle Name>
    

    Using custom getter

    In each model we can use a custom getter method in the Phalcon model to calculate the full name of the record.

    Employee
    namespace NDN\Models;
    
    class Employee
    {
        ...
        public function getFullName()
        {
            return trim(
                sprintf(
                    '%s, %s %s',
                    $this->getLastName(),
                    $this->getFirstName(),
                    $this->getMiddleName()
                )
            );
        }
    }
    
    Customer
    namespace NDN\Models;
    
    class Customer
    {
        ...
        public function getFullName()
        {
            return trim(
                sprintf(
                    '%s, %s %s',
                    $this->getLastName(),
                    $this->getFirstName(),
                    $this->getMiddleName()
                )
            );
        }
    }
    

    The above introduces a problem. If we want to change the behavior of getFullName, we will have to visit both models and change the relevant method in each one. In addition, we are using the same code in two different files, i.e. duplicating code and effort.

    We could create a base model class that our Customer and Employee models extend, and put the getFullName function in there. However, that deepens the inheritance chain and could lead to maintenance nightmares.

    For instance, we would have to create a base model class that only the Customer and Employee models extend; but what happens if we need common functionality for other models? We would then need to create another base model class, and so on and so forth. If we instead pile all the common functionality into one base model class, we end up with functions that do not apply to all of our models, and thus a maintenance nightmare.

    NOTE: We can also use the afterFetch method to create a calculated field which will be available for us to use. We can use either the getter or the afterFetch like so:

    namespace NDN\Models;
    
    class Customer
    {
        ...
        public function afterFetch()
        {
            $this->full_name = trim(
                sprintf(
                    '%s, %s %s',
                    $this->getLastName(),
                    $this->getFirstName(),
                    $this->getMiddleName()
                )
            );
        }
    }
    

    Traits

    We can use a trait to offer the same functionality while keeping our code DRY. Since a Trait cannot be instantiated by itself, we attach it wherever we need it, in this case the Employee and Customer models.

    namespace NDN\Traits;
    
    trait FullNameTrait
    {
        /**
     * Gets the user's first/middle/last name and formats it in a readable format
         *
         * @return  string
         */
        public function getFullName()
        {
            return trim(
                sprintf(
                    '%s, %s %s',
                    $this->getLastName(),
                    $this->getFirstName(),
                    $this->getMiddleName()
                )
            );
        }
    }
    

    We can now attach this trait to the relevant models:

    Employee
    namespace NDN\Models;
    
    use NDN\Traits\FullNameTrait;
    
    class Employee
    {
        use FullNameTrait;
    }
    
    Customer
    namespace NDN\Models;
    
    use NDN\Traits\FullNameTrait;
    
    class Customer
    {
        use FullNameTrait;
    }
    

    Now we can use the getFullName() function in our two models to get the full name of the Employee or Customer, calculated from the relevant model fields.

    // Customer:
    // first_name:  John
    // middle_name: Mark
    // last_name:   Doe
    
    // Prints: Doe, John Mark
    echo $customer->getFullName();
    
    // Employee:
    // first_name:  Stanley
    // middle_name: Martin
    // last_name:   Lieber
    
    // Prints: Lieber, Stanley Martin
    echo $employee->getFullName();
    

    Conclusion

    Traits can be very powerful and helpful allies, keeping our code very flexible and reusable.

    Give it a try!

  • 2013-09-15 12:00:00

    Let the RDBMS do more than just store data

    One of the common "mistakes" that programmers make (and I have been guilty as charged many a time in the past) is not using the tools that are available to them to the maximum extent possible.

    A common example is using the RDBMS of your choice to only store and retrieve data, without taking advantage of its power and its features to the full extent.

    An RDBMS can do much, much more. One can use triggers to auto-update fields (as I will demonstrate in this blog post), log data into tables, trigger cascade deletes, etc.; stored procedures can compute complex data sets, joining tables and transforming data; views can offer easier representations of data, hiding complex queries from the actual application. Features such as stored procedures and views can also offer security enhancements as well as maintainability to an application. Execution, for instance, can be restricted to particular groups/logins, while changing a stored procedure or view only requires a change on the database layer and not in the application itself.

    In this blog post I will show you a simple example on how one can transfer some of the processing of an application to the RDBMS. I am using MariaDB as the RDBMS and PhalconPHP as the PHP framework.

    The RDBMS

    Each table of my database has several common fields that are used for logging and reporting as well as recording status.

    An example table is as follows

    CREATE TABLE IF NOT EXISTS co_address (
      id             int(11)      unsigned NOT NULL AUTO_INCREMENT,
      address_line_1 varchar(150) COLLATE utf8_unicode_ci DEFAULT NULL,
      address_line_2 varchar(150) COLLATE utf8_unicode_ci DEFAULT NULL,
      address_line_3 varchar(150) COLLATE utf8_unicode_ci DEFAULT NULL,
      region         varchar(6)   COLLATE utf8_unicode_ci DEFAULT NULL,
      post_code      varchar(24)  COLLATE utf8_unicode_ci DEFAULT NULL,
      country        varchar(2)   COLLATE utf8_unicode_ci DEFAULT NULL,
      created_id     int(11)      unsigned NOT NULL DEFAULT '0',
      created_date   datetime              NOT NULL,
      updated_id     int(11)      unsigned NOT NULL DEFAULT '0',
      updated_date   datetime              NOT NULL,
      deleted        tinyint(1)   unsigned NOT NULL DEFAULT '0',
      PRIMARY KEY (id),
      KEY created_id (created_id),
      KEY created_date (created_date),
      KEY updated_id (updated_id),
      KEY updated_date (updated_date),
      KEY deleted (deleted)
    ) ENGINE=InnoDB  
    DEFAULT CHARSET=utf8
    COLLATE=utf8_unicode_ci
    COMMENT='Holds addresses for various entities' AUTO_INCREMENT=1 ;
    

    The fields are:

    Field Name     Description
    created_id     The id of the user that created the record
    created_date   The date/time that the record was created
    updated_id     The id of the user that last updated the record
    updated_date   The date/time that the record was last updated
    deleted        A soft delete flag

    There is not much I can do with the user ids (created/updated) or the deleted column (see also the notes below regarding the latter). However, as far as the dates are concerned, I can definitely let MariaDB handle those updates.

    Triggers

    The work is delegated to triggers, attached to each table.

    --
    -- Triggers address
    --
    DROP TRIGGER IF EXISTS trg_created_date;
    DELIMITER //
    CREATE TRIGGER trg_created_date BEFORE INSERT ON co_address
     FOR EACH ROW SET NEW.created_date = NOW(), NEW.updated_date = NOW()
    //
    DELIMITER ;
    DROP TRIGGER IF EXISTS trg_updated_date;
    DELIMITER //
    CREATE TRIGGER trg_updated_date BEFORE UPDATE ON co_address
     FOR EACH ROW SET NEW.updated_date = NOW()
    //
    DELIMITER ;
    

    The triggers above update the created_date and updated_date fields automatically upon insert/update.

    Phalcon Model

    I needed to make some changes to my Address model in order to allow the triggers to work without interference from the model.

    class Address extends PhModel
    {
        public function initialize()
        {
            // Disable literals
            $this->setup(['phqlLiterals' => false]);
    
            // We skip these since they are handled by the RDBMS
            $this->skipAttributes(
                [
                    'created_date',
                    'updated_date',
                ]
            );
    
        }
    
        public function getSource()
        {
            return 'co_address';
        }
    
        public function getCreatedDate()
        {
            return $this->created_date;
        }
    
        public function getUpdatedDate()
        {
            return $this->updated_date;
        }
    }
    

    By using skipAttributes, I am instructing the Phalcon model not to update those fields. By doing so, I am letting my triggers worry about that data.

    Conclusion

    It might seem that I am delegating a very trivial task, but in the grand scheme of things the models of an application can be very complex and contain a lot of logic (and so might controllers). Delegating some of that logic to the RDBMS simplifies things and also increases the performance of the application, which now requires just a bit less computational power.

    NOTES

    For a soft delete feature, i.e. automatically updating the deleted field when a DELETE is issued, a trigger will not work. Instead, one can use a stored procedure for it. See this Stack Overflow answer.

  • 2012-11-25 12:00:00

    Building a web app with PhalconPHP and AngularJS Update

    It's been a while since I last wrote a blog post, so I wanted to touch on the effort to upgrade the application that I wrote for Harry Hog Football using PhalconPHP and AngularJS.

    If you haven't read it, the first two blog posts were here and here.

    The application was written using the 0.4.5 version of PhalconPHP. Since then there have been significant changes to the framework, such as the introduction of a DI container, injectable objects and lately interfaces (in 0.7.0, to be released in a couple of days), so I had to make some changes.

    There are a couple of things that I as a developer would like to see in PhalconPHP, and I am pretty sure they will appear later on since, let's face it, the framework is still very young (not even at version 1.0 yet). Despite its "youth" it is a robust framework with excellent support, features and a growing community. One of those features is behaviors, which I had to implement myself; that was something new that came with this upgrade.

    Recently a new repository called the incubator was created on GitHub, where developers can share implementations of common tasks that act as drop-ins to the framework and extend it. These implementations are all written in PHP, so everyone can just download and use them. The more submissions come in, the more the framework will grow, and eventually these submissions will become part of the framework itself.

    Converting the 0.4.x application to 0.5.x

    The task of converting everything from 0.4 to 0.5 was a bit challenging. The reason was the DI container and how best to use it to suit the needs of the current application. These challenges would not even be an issue if one started writing their application from scratch, but since I had everything in place, I ventured into upgrading vs. rewriting. Note that this kind of upgrade will most likely never happen again, since the framework has been changed so that future upgrades will not require developers to rewrite their code (as I did now). From 0.5.x onward the framework design has been more or less "frozen".

    I decided to create a new library to help me with my tasks. I therefore created a custom bootstrap class that instantiates everything I want in my code. A short snippet of the class is below (the full code of course is in my GitHub repo, which you are more than welcome to download and modify to suit your needs):

    namespace NDN;
    
    use \Phalcon\Config\Adapter\Ini as PhConfig;
    use \Phalcon\Loader as PhLoader;
    // ...
    use \Phalcon\Exception as PhException;
    
    class Bootstrap
    {
        private $_di;
    
        /**
         * Constructor
         * 
         * @param $di
         */
        public function __construct($di)
        {
            $this->_di = $di;
        }
    
        /**
         * Runs the application performing all initializations
         * 
         * @param $options
         *
         * @return mixed
         */
        public function run($options)
        {
            $loaders = array(
                'config',
                'loader',
                'environment',
                'timezone',
                'debug',
                'flash',
                'url',
                'dispatcher',
                'view',
                'logger',
                'database',
                'session',
                'cache',
                'behaviors',
            );
    
    
            try {
                foreach ($loaders as $service)
                {
                    $function = 'init' . ucfirst($service);
    
                    $this->$function($options);
                }
    
                $application = new PhApplication();
                $application->setDI($this->_di);
    
                return $application->handle()->getContent();
    
            } catch (PhException $e) {
                echo $e->getMessage();
            } catch (\PDOException $e) {
                echo $e->getMessage();
            }
        }
    
        // Protected functions
    
        /**
         * Initializes the config. Reads it from its location and
         * stores it in the Di container for easier access
         *
         * @param array $options
         */
        protected function initConfig($options = array())
        {
            $configFile = ROOT_PATH . '/app/var/config/config.ini';
    
            // Create the new object
            $config = new PhConfig($configFile);
    
            // Store it in the Di container
            $this->_di->set('config', $config);
        }
    
        /**
         * Initializes the loader
         *
         * @param array $options
         */
        protected function initLoader($options = array())
        {
            $config = $this->_di->get('config');
    
            // Creates the autoloader
            $loader = new PhLoader();
    
            $loader->registerDirs(
                array(
                    ROOT_PATH . $config->app->path->controllers,
                    ROOT_PATH . $config->app->path->models,
                    ROOT_PATH . $config->app->path->library,
                )
            );
    
            // Register the namespace
            $loader->registerNamespaces(
                array("NDN" => $config->app->path->library)
            );
    
            $loader->register();
        }
    
        ....
    
        /**
         * Initializes the view and Volt
         *
         * @param array $options
         */
        protected function initView($options = array())
        {
            $config = $this->_di->get('config');
            $di     = $this->_di;
    
            $this->_di->set(
                'volt',
                function($view, $di) use($config)
                {
                    $volt = new PhVolt($view, $di);
                    $volt->setOptions(
                        array(
                            'compiledPath'      => ROOT_PATH . $config->app->volt->path,
                            'compiledExtension' => $config->app->volt->extension,
                            'compiledSeparator' => $config->app->volt->separator,
                            'stat'              => (bool) $config->app->volt->stat,
                        )
                    );
                    return $volt;
                }
            );
        }
        ....
    
        /**
         * Initializes the model behaviors
         *
         * @param array $options
         */
        protected function initBehaviors($options = array())
        {
            $session = $this->_di->getShared('session');
    
            // Timestamp
            $this->_di->set(
                'Timestamp',
                function() use ($session)
                {
                    $timestamp = new Models\Behaviors\Timestamp($session);
                    return $timestamp;
                }
            );
        }
    }
    

    I chose to show a few sections of this bootstrap, which I will explain shortly. What this bootstrap class does is initialize my whole environment while keeping my index.php file small.

    error_reporting(E_ALL);
    
    try {
    
        if (!defined('ROOT_PATH')) {
            define('ROOT_PATH', dirname(dirname(__FILE__)));
        }
    
        // Using require once because I want to get the specific
        // bootloader class here. The loader will be initialized
        // in my bootstrap class
        require_once ROOT_PATH . '/app/library/NDN/Bootstrap.php';
        require_once ROOT_PATH . '/app/library/NDN/Error.php';
    
        // Instantiate the DI container
        $di  = new \Phalcon\DI\FactoryDefault();
    
        // Instantiate the bootstrap class and inject the DI container
        // in it so that services can be registered
        $app = new \NDN\Bootstrap($di);
    
        // Here we go!
        echo $app->run(array());
    
    } catch (\Phalcon\Exception $e) {
        echo $e->getMessage();
    }
    

    As you can see the index.php is very small in terms of code.

    Let's have a look at a couple of the functions that are in the bootstrap.

        /**
         * Initializes the config. Reads it from its location and
         * stores it in the Di container for easier access
         *
         * @param array $options
         */
        protected function initConfig($options = array())
        {
            $configFile = ROOT_PATH . '/app/var/config/config.ini';
    
            // Create the new object
            $config = new PhConfig($configFile);
    
            // Store it in the Di container
            $this->_di->set('config', $config);
        }
    

    Pretty straightforward. The config INI file is read from its location and stored in the DI container. I need to do this first, since a lot of the application's parameters are controlled from that file.
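    For reference, the config.ini this reads could contain a fragment along the following lines (the paths are illustrative; this assumes the Ini adapter's dotted-key nesting, which is what produces the nested $config->app->path->controllers lookups used elsewhere in the bootstrap):

```ini
; Illustrative fragment - keys mirror the $config->app->... lookups
[app]
path.controllers = /app/controllers/
path.models      = /app/models/
path.library     = /app/library/
```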

        /**
         * Initializes the loader
         *
         * @param array $options
         */
        protected function initLoader($options = array())
        {
            $config = $this->_di->get('config');
    
            // Creates the autoloader
            $loader = new PhLoader();
    
            $loader->registerDirs(
                array(
                    ROOT_PATH . $config->app->path->controllers,
                    ROOT_PATH . $config->app->path->models,
                    ROOT_PATH . $config->app->path->library,
                )
            );
    
            // Register the namespace
            $loader->registerNamespaces(
                array("NDN" => $config->app->path->library)
            );
    
            $loader->register();
        }
    

    The loader is what does all the discovery of classes for me. As you can see I store a lot of the paths in the config INI file, and I register my custom namespace NDN.
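    To illustrate what the namespace registration buys: conceptually, the loader maps a class name under the NDN prefix to a file beneath the registered directory. The plain-PHP sketch below shows the idea (the resolve() helper and the paths are hypothetical, not Phalcon API; Phalcon does the equivalent internally, in C):

```php
<?php
// Hypothetical sketch of how a registered namespace prefix maps a
// class name to a file path.
function resolve($class, array $namespaces)
{
    foreach ($namespaces as $prefix => $dir) {
        if (strpos($class, $prefix . '\\') === 0) {
            $relative = substr($class, strlen($prefix) + 1);
            return $dir . str_replace('\\', '/', $relative) . '.php';
        }
    }
    return null; // not handled by any registered namespace
}

echo resolve('NDN\\Bootstrap', array('NDN' => 'app/library/NDN/'));
// prints app/library/NDN/Bootstrap.php
```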

        /**
         * Initializes the view and Volt
         *
         * @param array $options
         */
        protected function initView($options = array())
        {
            $config = $this->_di->get('config');
            $di     = $this->_di;
    
            $this->_di->set(
                'volt',
                function($view, $di) use($config)
                {
                    $volt = new PhVolt($view, $di);
                    $volt->setOptions(
                        array(
                            'compiledPath'      => ROOT_PATH . $config->app->volt->path,
                            'compiledExtension' => $config->app->volt->extension,
                            'compiledSeparator' => $config->app->volt->separator,
                            'stat'              => (bool) $config->app->volt->stat,
                        )
                    );
                    return $volt;
                }
            );
        }
    

    This is an interesting one: registering the view and Volt. Volt is the template engine that comes with Phalcon; it is inspired by Twig and written in C, thus offering maximum performance. I set the compiled path, extension and separator for the template files, and I also have a config variable that controls whether the application recompiles the templates on every request. In a production environment that variable (stat) will be set to false, since templates do not change.

        /**
         * Initializes the model behaviors
         *
         * @param array $options
         */
        protected function initBehaviors($options = array())
        {
            $session = $this->_di->getShared('session');
    
            // Timestamp
            $this->_di->set(
                'Timestamp',
                function() use ($session)
                {
                    $timestamp = new Models\Behaviors\Timestamp($session);
                    return $timestamp;
                }
            );
        }
    

    The above is my implementation of behaviors. Of course it is far from perfect, but it works the way I want it to. A better implementation has been written by Wojtek Gancarczyk and is available in the incubator. All I do here is go through the behaviors I have (only Timestamp for now) and register them in the DI container, so that I can reuse them later with any model that needs them.

    Models

    Every model that interacts with my database tables extends the NDN\Model class.

    class Model extends \Phalcon\Mvc\Model
    {
        protected $behaviors = array();
    
        /**
         * Adds a behavior in the model
         *
         * @param $behavior
         */
        public function addBehavior($behavior)
        {
            $this->behaviors[$behavior] = true;
        }
    
        /**
         * beforeSave hook - runs the registered behaviors prior to saving
         */
        public function beforeSave()
        {
            $di = \Phalcon\DI::getDefault();
    
            foreach ($this->behaviors as $behavior => $active)
            {
                if ($active && $di->has($behavior))
                {
                    $di->get($behavior)->beforeSave($this);
                }
            }
        }
    
        /**
         * @param array $parameters
         *
         * @static
         * @return Phalcon_Model_Resultset Model[]
         */
        static public function find($parameters = array())
        {
            return parent::find($parameters);
        }
    
        /**
         * @param array $parameters
         *
         * @static
         * @return  Phalcon_Model_Base   Models
         */
        static public function findFirst($parameters = array())
        {
            return parent::findFirst($parameters);
        }
    }
    

    The class itself is pretty simple, offering find and findFirst to the classes that extend it. The interesting part is that it also runs the registered behaviors from the relevant hook. For instance, the beforeSave hook goes through the registered behaviors (the $behaviors array), checks whether each one is active and exists in the DI container, retrieves it from there, and then calls beforeSave on the behavior class.

    The behavior class is equally simple:

    class Timestamp
    {
        protected $session;
    
        public function __construct($session)
        {
            $this->session = $session;
        }
    
        /**
         * beforeSave hook - called prior to any Save (insert/update)
         */
        public function beforeSave($record)
        {
            $auth     = $this->session->get('auth');
            $userId   = (isset($auth['id'])) ? (int) $auth['id'] : 0;
            $datetime = date('Y-m-d H:i:s');
            if (empty($record->created_at_user_id)) {
                $record->created_at         = $datetime;
                $record->created_at_user_id = $userId;
            }
            $record->last_update         = $datetime;
            $record->last_update_user_id = $userId;
        }
    }
    

    So effectively, every time I call the save() function on a model, this piece of code is executed, populating my fields with the date/time and the user that created and/or last updated the record.
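    Stripped of the Phalcon specifics, the whole mechanism boils down to the following self-contained sketch (the plain array stands in for the DI container; class and field names are illustrative):

```php
<?php
// Plain-PHP sketch of the behavior dispatch: a registry of behavior
// factories (standing in for the DI container) consulted before a save.
class TimestampBehavior
{
    public function beforeSave($record)
    {
        $record->last_update = date('Y-m-d H:i:s');
    }
}

class Record
{
    public $last_update = null;
    protected $behaviors = array('Timestamp' => true);

    public function save(array $container)
    {
        // Run every active, registered behavior before persisting
        foreach ($this->behaviors as $name => $active) {
            if ($active && isset($container[$name])) {
                $behavior = call_user_func($container[$name]);
                $behavior->beforeSave($this);
            }
        }
        // ... the actual persistence would happen here
        return true;
    }
}

$container = array(
    'Timestamp' => function () { return new TimestampBehavior(); },
);

$record = new Record();
$record->save($container);
echo $record->last_update; // e.g. 2012-07-12 12:00:00
```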

    In order to get this functionality to work, all I have to do in my model is to register the behavior like so:

    class Episodes extends \NDN\Model
    {
        /**
         * Initializes the class and sets any relationships with other models
         */
        public function initialize()
        {
            $this->addBehavior('Timestamp');
            $this->hasMany('id', 'Awards', 'episode_id');
        }
    }
    

    Controllers

    Very little has changed in the controller logic, so that was the easiest part of the upgrade. Of course I tweaked a few things, but the code works as is. I still extend my custom NDN\Controller class, which takes care of my breadcrumbs (NDN\Breadcrumbs) as well as the construction of the top menu. The biggest difference from the previous version is that I stopped using AngularJS to populate the menu (so I am no longer sending a JSON array to the view) and used Volt instead. It was a matter of preference and nothing more.

    Views

    Quite a bit of work had to be done in the views to switch everything to use Volt. Of course every view extension had to be changed to .volt but that was not the only change. I split the layout to use partials so that the header, navigation and footer are different sections (organizing things a bit better) and kept the master layout index.volt.

    I started using the built in Volt functions to generate content as well as tags and it was a nice surprise to see that everything was easy to use and it worked!

    <!DOCTYPE html>
    <html ng-app='HHF'>
        {{ partial('partials/header') }} 
        <body>
            <div id="spinner" style="display: none;">
                {{ image('img/ajax-loader.gif') }} Loading ...
            </div>
    
            {{ partial('partials/navbar') }}
    
            <div class='container-fluid'>
                <div class='row-fluid'>
                    <ul class='breadcrumb'>
                        <li>
                            {% for bc in breadcrumbs %}
                            {% if (bc['active']) %}
                            {{ bc['text'] }}
                            {% else %}
                            <a href='{{ bc['link'] }}'>{{ bc['text'] }}</a> 
                            <span class='divider'>/</span>
                            {% endif %}
                            {% endfor %}
                        </li>
                    </ul>
                </div>
    
                <?php echo $this->flash->output() ?>
    
                <div class="row-fluid">
                    <?php echo $this->getContent() ?>
                </div> <!-- row -->
    
                {{ partial('partials/footer') }}
            </div>
    
            {{ javascript_include(config.app.js.jquery, config.app.js.local) }}
            {{ javascript_include(config.app.js.jquery_ui, config.app.js.local) }}
            {{ javascript_include(config.app.js.bootstrap, config.app.js.local) }}
            {{ javascript_include(config.app.js.angular, config.app.js.local) }}
            {{ javascript_include(config.app.js.angular_resource, config.app.js.local) }}
            {{ javascript_include(config.app.js.angular_ui, config.app.js.local) }}
            {{ javascript_include('js/utils.js') }}
    
        </body>
    </html>
    

    The above is the index.volt. As you can see, I call on the partials/header.volt, then the partials/navbar.volt (where the menu is generated), and then I construct the breadcrumbs (note the {% for bc in breadcrumbs %} block). After that come the flash messenger output, the main content, the footer and finally the JavaScript includes that I need.

    I am still using AngularJS to make the necessary AJAX calls to the relevant controller to retrieve the data, and also to display this data on screen (the data is cached to avoid unnecessary database hits).

    The Episodes view became

    {{ content() }}
    
    <div>
        <ul class='nav nav-tabs'>
            <li class='pull-right'>
                {{ addButton }}
            </li>
        </ul>
    </div>
    
    <div ng-controller='MainCtrl'>
        <table class='table table-bordered table-striped ng-cloak' ng-cloak>
            <thead>
            <tr>
                <th><a href='' ng-click="predicate='number'; reverse=!reverse">#</a></th>
                <th><a href='' ng-click="predicate='air_date'; reverse=!reverse">Date</a></th>
                <th><a href='' ng-click="predicate='outcome'; reverse=!reverse">W/L</a></th>
                <th><a href='' ng-click="predicate='summary'; reverse=!reverse">Summary</a></th>
            </tr>
            </thead>
            <tbody>
                <tr ng-repeat="episode in data.results | orderBy:predicate:reverse">
                    <td>[[episode.number]]</td>
                    <td width='7%'>[[episode.air_date]]</td>
                    <td>[[episode.outcome]]</td>
                    <td>[[episode.summary]]</td>
                    {% if (addButton) %}
                    <td width='1%'><a href='/episodes/edit/[[episode.id]]'><i class='icon-pencil'></i></a></td>
                    <td width='1%'><a href='/episodes/delete/[[episode.id]]'><i class='icon-remove'></i></a></td>
                    {% endif %}
                </tr>
            </tbody>
        </table>
    </div>
    

    The beauty of AngularJS! I only have to pass a JSON array with my results. ng-repeat with the orderBy filter allows me to present the data to the user and offer sorting capabilities per column. This is all done at the browser level without any database hits! Pretty awesome feature!

    For those that have used AngularJS in the past, you will note that I had to change the interpolate provider (i.e. the characters that wrap a string or a piece of code that AngularJS understands). Usually these characters are the curly brackets {{ }} but I changed them to [[ ]] to avoid collisions with Volt.

    This was done with a couple of lines of code in the definition of my AngularJS module:

    var ngModule = angular.module(
            'HHF', 
            ['ngResource', 'ui']
        )
        .config(
            function ($interpolateProvider) {
                $interpolateProvider.startSymbol('[[');
                $interpolateProvider.endSymbol(']]');
            }
        );
    

    Conclusion

    I spent at most a day working on this, mostly because I wanted to try various things and see how they worked. The actual time to convert the application (because, let's face it, it is a small application) was a couple of hours, inclusive of the time it took me to rename certain fields, reorganize the folder structure, compile the new extension on my server and upload the data upstream.

    I am very satisfied with both AngularJS, which helps tremendously in my presentation layer, as well as with Phalcon. Phalcon's new design makes implementation a breeze, while AngularJS offers a lot of flexibility on the view layer.

    As written before, you are more than welcome to download the source code of this application here and use it for your own needs.

  • 2012-09-02 12:00:00

    AngularJS - Simplicity in your browser

    Recently I was contacted by an acquaintance through my Google+ circles, who needed some help with a project of hers.

    Her task was to redesign a church website. Pretty simple stuff, CSS, HTML and content.

    Scope

    The particular church videotapes all the sermons and posts them on their channel in LiveStream for their followers to watch. One of the requirements was to redo the video archives page and to offer a link where followers can download the audio of each sermon for listening.

    Design (kinda)

    After the initial contact, I decided to get rid of all the bloated jQuery code that controlled the video player and use AngularJS to drive the generation of content. Two key factors influenced my decision:

    • the use of ng-repeat to generate the table that will list all the available sermons and
    • the variable binding that AngularJS offers to play the video in the available player.

    I also decided to switch the player to a new updated one that LiveStream offered, which features a slider to jump through the video, volume control and more.

    Previous code

    The previous code for that page was around 300 lines. The file had some CSS in it and quite a few lines of HTML, but was heavy on JavaScript. There were a lot of jQuery functions which controlled the retrieval of the available videos per playlist. Each playlist was effectively a collection of videos for a particular year. jQuery observed clicks on specific links, made an AJAX call to the LiveStream API to retrieve the list of available data in JSON format, and output the formatted results (as a table) on screen. It was something like this:

    head
    title
    link - css
    style (inline)
    ....
    end inline style
    script jQuery
    script jQueryUI
    script jquery.timer
    link jquery-ui CSS
    link slider CSS
    style (inline)
    ....
    end inline style
    script jQuery
    $(document).ready ... // Document ready
    ....
    $("#div_col dd").click // Bind click to a year
    .....
    getPlaylists(year) // Not sure why this was ever here, not used
    ....
    getPlaylistClips(playlistID) // Gets the clips of the playlist
    .....
    playClip(clipID) // Plays the clip in the player
    .....
    end jQuery script
    script Video Player
    ....
    end head
    body
    navigation
    main container
    list of years
    instructions
    container to show video player
    container to show video list
    end body
    end html
    

    Enter AngularJS

    I checked the latest video player from LiveStream. The code was much cleaner, and all I had to do was bind one variable, the GUID of the video, in the relevant call so that the video could be played. I also bound another variable (the video title) above the video so as to offer more information to the user.

    With a bit of Bootstrap CSS, I created two tabs listing the two years, 2012 and 2011. A function was created in my AngularJS module to accept the year and make the relevant call to the LiveStream API to receive the data as a JSON object.

    ng-repeat (with ng-cloak) was used to "print" the data on screen, and the application was ready.

    I removed all the cruft and created one small file that is loaded and offers the functionality that we need. It is 50 lines of code (just the JavaScript part). The code is below, with added comments for the reader to follow:

    // Create the module and inject the Resource object
    var ngModule = angular.module("CHF", ['ngResource']);
    
    // The main controller that needs the scope and resource
    ngModule.controller("MainCtrl", function ($scope, $resource) {
    
        // Calculates the current year
        //  ensures we always get the last year on first load
        $scope.currentYear = function () {
            var currentDate = new Date();
            return currentDate.getFullYear();
        };
    
        // This is the playlist array. This is obtained by 
        //  LiveStream and it changes once every year. 
        //  Hardly an effort by the administrator
        $scope.playlists = [
            {"year":"2012", "guid":"63426-xxx-xxx-xxx"},
            {"year":"2011", "guid":"84f84-xxx-xxx-xxx"}
        ];
    
        // This couldn't be simpler. It merely sets some variables 
        //  in the scope. By doing so, the binding in the relevant
        //  variables will allow the video to play and the title 
        //  to update.
        $scope.playVideo = function (element) {
            $scope.currentVideo  = element.guid;
            $scope.currentTitle  = element.title;
        };
    
        // This is the core. It makes the AJAX request to the 
        // LiveStream API so that it can get the JSON data back
        $scope.makeRequest = function (year) {
    
            // Calculating the current year and the year selected.
            // Their difference offers an offset which effectively 
            // is the offset of the array stored in $scope.playlists
            var thisYear    = $scope.currentYear();
            var diff        = thisYear - year;
    
            var objData = $scope.playlists[diff];

            // Guard against a year that is not in the playlists array
            if (objData && objData.guid)
            {
                var reqData = $resource(
                    "http://livestream_url/2.0/:action",
                    {
                        action:'listclips.json', 
                        id:objData.guid,
                        query: {isArray: true},
                        maxresults:"500",
                        callback:'JSON_CALLBACK'
                    },
                    {get:{method:'JSONP'}}
                );
    
                // Set the year and get the data
                $scope.year    = year;
                $scope.listData = reqData.get();
            }
        };
    
        // This is the first load - load the current year
        $scope.makeRequest($scope.currentYear());
    
    });
    

    Now moving into the HTML side of things:

    <div id="playerContainer" style='text-align:center;'>
        <p ng-cloak>
            {{currentTitle}}
        </p>
        <iframe 
            width="560" 
            height="340" 
            src="http://ls_url?clip={{currentVideo}}&params" 
            style="border:0;outline:0" 
            frameborder="0" 
            scrolling="no">
        </iframe>
    </div>
    
    <br />
    <div>
        <ul class="nav nav-tabs">
            <li ng-repeat="playlist in playlists" 
                   ng-class="{true:'active',false:''}[year == playlist.year]">
                <a ng-click="makeRequest(playlist.year)">{{playlist.year}}</a>
            </li>
        </ul>
        <table class="table table-bordered" style="width: 100%;">
            <thead>
                <tr>
                    <th>Date/Title</th>
                    <th>Audio</th>
                </tr>
            </thead>
            <tbody>
                <tr ng-cloak ng-repeat="video in listData.channel.item">
                    <td ng-click="playVideo(video)">{{video.title}}</td>
                    <td>
                        <span ng-show="video.description">
                            <a href="{{video.description}}" title="Download Audio">
                                <i class="icon-download-alt"></i>
                            </a>
                        </span>
                    </td>
                </tr>
            </tbody>
        </table>
    </div>
    

    That is all the HTML I had to change. With the full HTML file at 100 lines and 50 lines for the AngularJS-related JavaScript, I can safely say that I achieved a 50% reduction in code while offering the same functionality - and if I might say so, it is much, much cleaner.

    The final page ends up with the video player at the top, the year tabs beneath it, and the table of sermons with their audio download links below.

    Pointers

    <div id="playerContainer" style='text-align:center;'>
        <p ng-cloak>
        {{currentTitle}}
        </p>
        <iframe 
            width="560" 
            height="340" 
            src="http://ls_url?clip={{currentVideo}}&params" 
            style="border:0;outline:0" 
            frameborder="0" 
            scrolling="no">
        </iframe>
    </div>
    

    This block displays the video player and due to the variable binding that AngularJS offers, the minute those variables change, the video is ready to be played.

    <ul class="nav nav-tabs">
        <li ng-repeat="playlist in playlists" 
               ng-class="{true:'active',false:''}[year == playlist.year]">
            <a ng-click="makeRequest(playlist.year)">{{playlist.year}}</a>
        </li>
    </ul>
    

    This block shows the tabs depicting each playlist. In our case these are years. ng-repeat does all the hard work, printing the data that is defined in our JS file. The ng-class is there to change the class of the tab to "active" when the tab is clicked/selected. The ng-click initiates a request through makeRequest, a function defined in our javascript file (see above).

    <tbody>
        <tr ng-cloak ng-repeat="video in listData.channel.item">
            <td ng-click="playVideo(video)">{{video.title}}</td>
            <td>
                <span ng-show="video.description">
                    <a href="{{video.description}}" 
                       title="Download Audio">
                        <i class="icon-download-alt"></i>
                    </a>
                </span>
            </td>
        </tr>
    </tbody>
    

    Finally the data is displayed on screen. ng-cloak makes sure that the content is displayed only when the data is there (otherwise browsers might show something like {{video.description}} which is not nice from a UI perspective). ng-repeat loops through the data and "prints" the table.

    The description of the video is used as storage for the URL pointing to the MP3 audio file, so that users can download it. Therefore I use ng-show to display the link only if it exists.

    Conclusion

    This whole project took no more than 30 minutes, which included the time I had to research and experiment a bit with the LiveStream API. This is a flexible design, with much, much cleaner code (and a lot less of it). When the admin needs to add a new playlist (aka year), all they have to do is open the JS file and add a new element to the $scope.playlists array. The application will take care of the rest automatically.

    I cannot think of a way to do this with fewer lines of code.

    If you haven't heard of AngularJS or used it, I would highly encourage you to give it a shot. Great project, awesome support and a very very responsive, helpful and polite community.

  • 2012-07-12 12:00:00

    Building a web app with PhalconPHP and AngularJS Part II

    This is Part II of a series of posts on building an application using Phalcon and AngularJS. Part I is located here.

    Preface

    I have recently discovered Phalcon and was impressed with its speed and ease of use. At the time of this writing, PhalconPHP is at version 0.4.2, with a serious redesign coming down the line in 0.5.x.

    Phalcon takes a different approach from other PHP frameworks (see Zend, Symfony, CakePHP etc.). It is written in C and compiled as a module which is then loaded on your web server. Effectively, the whole framework is in memory for you to use, without needing to hit the file system to include a file here or a file there.
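    Enabling the framework therefore comes down to a php.ini (or conf.d) entry pointing at the compiled module, along these lines:

```ini
; Load the compiled Phalcon module (the exact path varies per setup)
extension=phalcon.so
```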

    Advantages

    The core advantage of this approach is speed. The framework is in memory, ready to deliver its functionality, so your application is now only concerned with its own files, not the framework's. Once a framework is mature enough for use, its files don't change much; yet for any of the traditional frameworks, PHP still needs to scan the files, load them and then interpret them. This has a serious impact on performance, especially for large projects.

    Another advantage is that since the framework is a module on your web server, you don't need to upload library files to each and every application you install on your host.

    Finally, you can mix and match whatever you need, using any of the components as 'glue' components rather than the whole framework. Most of the major frameworks also follow this methodology for most of their components, yet performance remains an issue. Additionally, with any other framework, one might need to upload a very complicated and deep file structure to their web server just to take advantage of a single component in an application.

    Disadvantages

    Support and bug tracing are the two weaknesses of Phalcon. By support I do not mean support from the developers. On the contrary, the developers are doing a great job listening to the relatively young community and issuing fixes. However, as with any framework, if you find a bug, you will try to trace the code back through each component in an effort to find a solution to your problem. When developing an application where you have access to the source files (the library PHP files, as Zend Framework has), not only can you learn from those implementations, but you can also quickly fix something that might be broken and continue working. With Phalcon you will need to wait until the next version is released, unless you are fluent in C and can play around with the source code. For most PHP programmers (like myself), the process will be: report the bug and wait for the fix.

    Since the framework is a module on your web server, you will need to be careful with upgrades. If your applications do not take advantage of the latest functionality the framework offers, an upgrade might fix something in one application while breaking something in another. You cannot mix and match versions of Phalcon per application.

    Consideration

    Phalcon is very young as a framework. It does have a lot of power, but there are a lot of things still missing (for instance relationships between models and a query builder). In time these pieces will be implemented and the framework will grow stronger :)

    Implementation

    I downloaded the INVO sample application and set it up on my web server. Using that as a starting point, I started modifying it to fit my needs. I also set up the PhalconPHP developer tools and PHPStorm support.

    For this application, I needed a table to store information about every podcast episode, a table to store all players and a table to store the users (namely Aaron, Josh and John). The Awards table would be the one that would store all the information regarding the game balls and kick in the balls awards.

    Models

    Once those were in place I started building my models and relevant controllers/views. Setting a model up was really easy. I would create the table in my database and then run

    phalcon create-model --table-name episodes
    

    and my model would be ready for me to use (example below for Episodes).

    class Episodes extends Phalcon_Model_Base 
    {
        public $id;
        public $number;
        public $summary;
        public $airDate;
        public $outcome;
        public $createdAt;
        public $createdAtUserId;
        public $lastUpdate;
        public $lastUpdateUserId;
    }
    

    After a while I decided I wanted to keep track of who created a record and when, and who last updated a record and when, for certain tables. After some refactoring I created my own model class that provides the functionality I needed, and extended that class in the relevant models.

    My custom class (which takes care of the createdAt, createdAtUserId, lastUpdate and lastUpdateUserId fields) also takes advantage of the beforeSave hook to ensure that these fields are transparently updated. The find and findFirst static functions are used throughout the models and there is no reason to repeat them in each model, so they ended up in this custom class. (Comments removed to preserve space.)

    use NDN_Session as Session;
    
    class NDN_Model extends Phalcon_Model_Base
    {
        public $createdAt;
        public $createdAtUserId;
        public $lastUpdate;
        public $lastUpdateUserId;
    
        public function beforeSave()
        {
            $auth     = Session::get('auth');
            $datetime = date('Y-m-d H:i:s');

            // Stamp the creation fields only on the first save
            if (empty($this->createdAtUserId)) {
                $this->createdAt       = $datetime;
                $this->createdAtUserId = (int) $auth['id'];
            }

            // Refresh the last-update fields on every save
            $this->lastUpdate       = $datetime;
            $this->lastUpdateUserId = (int) $auth['id'];
        }
    
        static public function find($parameters = array())
        {
            return parent::find($parameters);
        }
    
        static public function findFirst($parameters = array())
        {
            return parent::findFirst($parameters);
        }
    }
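    Any model that needs the audit fields simply extends this class. A minimal sketch (it assumes the Phalcon 0.4 extension is loaded and the NDN_Model class above is in scope, so it is not runnable on its own):

```php
<?php
// Sketch only: Episodes inherits beforeSave() from NDN_Model, so the
// audit columns are stamped transparently whenever a record is saved.
class Episodes extends NDN_Model
{
    public $id;
    public $number;
    public $summary;
    public $airDate;
    public $outcome;
}
```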
    
    Session

    Although Phalcon provides a flash messenger utility, I had an issue with using the _forward function on a controller after an action (say Add or Edit) was completed: effectively, the data would not refresh on screen. To combat that I used _redirect. However, all the messages that I had in the flash messenger (Phalcon_Flash) would disappear. An easy solution was to extend Phalcon_Session and create two new functions, setFlash and getFlash. setFlash is called whenever I want to set a message for the user to see; it stores the message in a session variable. Before the controller is dispatched, getFlash is called to return any messages waiting to be displayed; after that, the messages are cleared from the session and displayed on screen.

    class NDN_Session extends Phalcon_Session
    {
        public static function setFlash($class, $message, $css)
        {
            $data = array(
                'class'   => $class,
                'message' => $message,
                'css'     => $css,
            );
            self::set('flash', $data);
        }
    
        public static function getFlash()
        {
            $data = self::get('flash');
            if (is_array($data)) {
                self::remove('flash');
                return $data;
            } else {
                return null;
            }
        }
    }
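    The intended flow in a controller action is a set-then-redirect cycle. A sketch (it assumes the NDN_Session class above and the Phalcon 0.4 session component, so it is not runnable on its own):

```php
<?php
// After a successful save: store the message, then redirect.
// The message survives the redirect because it lives in the session.
NDN_Session::setFlash('success', 'Episode saved', 'alert alert-success');
// $this->_redirect('episodes');

// On the next request, before dispatch, the base controller retrieves
// the message; getFlash() also clears it so it is shown only once.
$message = NDN_Session::getFlash();
// $message: array('class' => ..., 'message' => ..., 'css' => ...) or null
```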
    
    Breadcrumbs

    I wanted to show breadcrumbs to the user, as a way to easily navigate throughout the application. To do so, I created my own Breadcrumbs class which holds an array of areas that the user is in. The class has a generate function, which returns back a JSON string. This is to be parsed by AngularJS so as to display the breadcrumbs.
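    The idea can be sketched in plain PHP. The class and method names below are assumptions; only generate() returning a JSON string comes from the description above:

```php
<?php
// Minimal sketch of a breadcrumbs container that serializes to JSON
// for AngularJS to parse and render.
class Breadcrumbs
{
    private $crumbs = array();

    // Record an area the user is in; returns $this to allow chaining
    public function add($caption, $link)
    {
        $this->crumbs[] = array('caption' => $caption, 'link' => $link);
        return $this;
    }

    // Return the trail as a JSON string for the view layer
    public function generate()
    {
        return json_encode($this->crumbs);
    }
}

$bc = new Breadcrumbs();
$bc->add('Home', '/')->add('Episodes', '/episodes');
echo $bc->generate();
// [{"caption":"Home","link":"\/"},{"caption":"Episodes","link":"\/episodes"}]
```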

    Controllers

    I created my controllers using the Phalcon Developer Tools. Whether you use the webtools or the command line makes no difference. The skeleton of the controller is generated for you to use.

    Because of the flash messenger and _redirect issue that I mentioned in the previous section, I had to extend the base controller to add functionality that would allow me to show messages on screen after a redirect. Other reasons for this new class were to allow for a prefix on each page title, and to generate breadcrumbs and menus.

    use Phalcon_Tag as Tag;
    
    use Phalcon_Flash as Flash;
    use NDN_Session as Session;
    
    class NDN_Controller extends Phalcon_Controller 
    {
        protected $_bc = null;
        public function initialize()
        {
            Tag::prependTitle('HHF G&KB Awards | ');
            $this->_bc = new NDN_Breadcrumbs();
        }
    
        public function beforeDispatch()
        {
            $message = Session::getFlash();
            if (is_array($message)) {
                Flash::$message['class'](
                    $message['message'], $message['css']
                );
            }
            $this->view->setVar('breadcrumbs', $this->_bc->generate());
        }
    
        protected function _constructMenu($controller)
        {
            $commonMenu = array(
                'index'      => 'Home', 
                'awards'     => 'Awards', 
                'players'    => 'Players', 
                'episodes'   => 'Episodes', 
                'about'      => 'About', 
                'contact'    => 'Contact Us', 
            ); 
            $auth = Session::get('auth'); 
    
            $class  = get_class($controller); 
            $class  = str_replace('Controller', '', $class); 
            $active         = strtolower($class); 
            $sessionCaption = ($auth) ? 'Log Out'         : 'Log In'; 
            $sessionAction  = ($auth) ? '/session/logout' : '/session/index'; 
    
            $leftMenu = array(); 
            foreach ($commonMenu as $link => $text) { 
                $isActive   = (bool) ($active == $link); 
                $newLink  = ('index' == $link) ? '/' : '/' . $link; 
                $leftMenu[] = array( 
                    'active' => $isActive, 
                    'link'   => $newLink, 
                    'text'   => $text, 
                ); 
            } 
    
            $menu = new StdClass(); 
            $menu->current = $active; 
            $menu->left    = $leftMenu; 
    
            if ($auth != false) { 
                $sessionCaption .= ' ' . $auth['name']; 
            } 
    
            $menu->rightLink = $sessionAction; 
            $menu->rightText = $sessionCaption; 
    
            return json_encode($menu); 
        } 
    }
    

    Each controller would extend my base controller. In the initialize function:

    • the page title is set,
    • the breadcrumbs are added (and generated later on in the beforeDispatch of the base controller),
    • the menu is generated and passed to the view for AngularJS to process,
    • additional variables would be generated for displaying elements based on whether a user is logged in or not.
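    As a sketch of how this fits together (it assumes the Phalcon 0.4 extension, the NDN_* classes from earlier, and a hypothetical add() method on the breadcrumbs class, so it is not runnable on its own), a child controller might look like this:

```php
<?php
// Sketch only: requires the Phalcon 0.4 extension plus the NDN_Controller,
// NDN_Breadcrumbs and Episodes classes shown elsewhere in this post.
class EpisodesController extends NDN_Controller
{
    public function initialize()
    {
        parent::initialize(); // sets the title prefix and the breadcrumbs object

        // add() is an assumed NDN_Breadcrumbs method for registering a crumb
        $this->_bc->add('Episodes', '/episodes');

        // JSON menu string for AngularJS to render
        $this->view->setVar('menu', $this->_constructMenu($this));
    }

    public function indexAction()
    {
        // Episodes extends the custom NDN_Model base class
        $this->view->setVar('episodes', Episodes::find());
    }
}
```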
    Views

    Creating the views was really easy. I already had the structure ready from the sample application (INVO) and with the help of Bootstrap CSS, I was done in no time. The views inherit from a base view (index.phtml) located at the root of the views folder. That view holds the skeleton of the web page and content is injected accordingly based on each controller (and its view).

    In that file I added the relevant variables that will be used by AngularJS as well as variables that hold conditional elements (i.e. elements that appear when a user is logged in).

    More on the views in the next installment of this series.

    Conclusion

    With all that the application was ready as far as the main structure was concerned. Tying everything with AngularJS was the next step, which will be covered in part III of this How-To.

    The whole application, from start to finish, took less than 4 hours to develop. This included breaks, reading the manual and making design decisions based on my ever changing requirements.


  • 2012-07-10 12:00:00

    Building a web app with PhalconPHP and AngularJS Part I

    There are plenty of frameworks on the Internet, most of them free, that a programmer can use to build a web application. Two of these are PhalconPHP and AngularJS.

    I decided to use those two frameworks and build a simple application which will keep track of the Game Balls and Kick in the Balls awards of Harry Hog Football.

    Harry Hog Football is a podcast that has been going strong since 2005, created by Redskins fans for Redskins fans. (For those that do not know, the Washington Redskins are a team in the National Football League in the USA.)

    Every week during the regular season, Aaron, Josh and John create a podcast, where they discuss the recent game, the injuries, the cuts, the new signings and they offer their Game Balls to the best players of the week as well as the Kick in the Balls awards for the ones that (according to the podcasters) 'suck'.

    I therefore created an application to record all those game balls and kick in the balls awards, so that we can all see, who is the most valuable player and who is the least valuable player for the Redskins throughout the years (the term valuable is used loosely here).

    As a starting point I used the INVO application that PhalconPHP showcases as an easy application to get you started. I modified it significantly to address my needs, refactoring classes as much as possible to get the least amount of code with maximum usability.

    After building the application, I listened to all the episodes I could find, and entered the game balls and kick in the balls awards in the database. The models use the Phalcon_Model_Base class to handle data, while the rest of the application is handled by the Phalcon_Controller (and the view, of course).

    The data transfer between the application and the relevant sections is primarily handled by AngularJS, which is dominant in the view layer. AngularJS controllers handle menu creation and breadcrumbs, as well as displaying results on screen.

    Twitter's Bootstrap CSS is used to put the final touches on the application.

    In subsequent posts I will explain each layer in turn, starting with PhalconPHP and continuing with AngularJS.

    This of course is by no means the perfect implementation. It has been a fun project for me, working on it on my own free time. You are more than welcome to fork the project and make any modifications you need. For those that are interested in getting straight to the code, it is available on Github here.

    NOTE: The Github repository contains code that works with nginx. If you are having problems with Apache, check public/index.php - there is a note there about nginx (you will probably need to remove it).


  • 2012-06-29 12:00:00

    How to build an inexpensive RAID for storage and streaming

    Overview

    I have written about this before and it is still my favorite mantra:

    There are people that take backups and there are people that never had a hard drive fail

    This mantra is basically what should drive everyone to take regular backups of their electronic files, so as to ensure that nothing is lost in case of hardware failures.

    Let's face it. Hard drives or any magnetic media are man made and will fail at some point. When is what we don't know, so we have to be prepared for it.

    Services like Google Drive, iCloud, Dropbox, BoundlessCloud and others offer good backup services, ensuring that there is at least one safe copy of your data. But that is not enough. You should ensure that whatever happens, the memories stored in your pictures or videos, the important email communications, the important documents are all kept in a safe place and there are multiple backups of it. Once they are gone, they are gone for good, so backups are the only way to ensure that this does not happen.

    Background

    My current setup at home consists of a few notebooks, a Mac mini and a Shuttle computer with a 1TB hard drive, where I store all my pictures, some movies and my songs. I use Google Music Manager for my songs so that they are available at any time on my Android phone, Picasa to share my photos with my family and friends, and Google Drive to keep every folder I have in sync. I also use RocksBox to stream some of that content (especially the movies) upstairs to either of our TVs through the Roku boxes we have.

    Recently I went downstairs and noticed that the Shuttle computer (which ran Windows XP at the time) was stuck in the POST screen. I rebooted the machine but it refused to load Windows, getting stuck either in the computer's POST screen or at the Starting Windows screen.

    Something was wrong and my best guess was that the hard drive was failing. I then booted in Safe Mode (which worked), set the hard drive to be checked for defects and rebooted again to let the whole process finish. After an hour or so, the system was still checking the free space, stuck at 1%. The hard drive was making some weird noises, so I shut the system down.

    Although I had backups of all my files on the cloud through Picasa, Google Music Manager and Google Drive, I still wanted to salvage whatever I had in there just in case. I therefore booted up the system with a Linux live CD, mounted the hard drive and used FileZilla to transfer all the data from the Shuttle's hard drive to another computer. There was of course a bit of juggling going on since I had to transfer data in multiple hard drives due to space restrictions.

    Replacing the storage

    I had to find something cheap and practical, so I went to Staples and found a very nice (for the money) computer by Lenovo. It was only $300, offering a 1TB 7200 RPM SATA hard drive, an i5 processor and 6GB of RAM.

    As soon as I got the system I booted it up and started setting everything up; before the end of the day everything was back in place, synced to the cloud.

    However, the question remained: what happens if the hard drive fails again? Let's face it, I had lost a lot of time setting everything up, and I wasn't prepared to go through that once more.

    My solution was simple:

    Purchase a cheap hardware RAID controller, a 1TB hard drive identical to the one I had, and a third hard drive to hold the OS. This way, the two 1TB hard drives can be connected to the RAID controller in a mirror setup (RAID 1), while the additional drive keeps the operating system.

    I opted for a solid state drive from Crucial for the OS. Although it was not necessary to have that kind of speed, I thought it wouldn't hurt. It did hurt my pocket a bit but c'est la vie. For your own setup you can choose whatever you like.

    Hardware

    NOTE : For those that are not interested in having a solid state drive for the OS, one can always go with other, much cheaper drives such as this one.

    Setup

    After all the components arrived, I opened the computer and had a look at what I was facing. One thing I had not realized was the lack of space for the third hard drive (the one that would hold the OS). I was under the impression that it would fit under the DVD-ROM drive, but I had not accounted for the SD card reader installed in that space, so I had to be a bit creative (Picture 1).

    A couple of good measurements and two holes with the power drill created a perfect mounting point for the solid state drive. It now sits securely in front of the card reader connections, without interfering in any way.

    The second hard drive and the raid card were really easy to install, just a couple of screws and everything was set in place.

    The second hard drive ended up in the only expansion 'bay' available for this system. This is below the existing drive, mounted on the left side of the case. The actual housing has guides that allow you to slide the drive until the screw holes are aligned and from there it is a two minute job to secure the drive in place.

    I also had a generic nVidia 460 card with 1GB of RAM, which I also installed in the system. It was not included in the purchases for this build, but it costs around $45 (if not less) now. I have had it for over a year; it was installed in the old Shuttle computer, so I wasn't going to let it go to waste.

    With everything in place, all I had to do was boot the system up and enter the BIOS screen to ensure that the SSD drive had a higher boot priority than any other drive.

    Once that was done, I put the installation disks in the DVD-ROM drive and restored the system onto the SSD drive. Four DVDs later (around 30 minutes) the system was installed and booted up. It took another couple of hours, however, until I had everything set up; the myriad of Windows updates (plus my slow Internet connection) contributed to the delay. I have to admit that the SSD drive was a very good purchase - I have never before seen Windows boot in less than 10 seconds (from power-up to the login screen).

    The Windows updates included the nVidia driver, so everything was set up (well, almost). The only driver not installed was the one for the HighPoint RocketRAID controller.

    The installation disk provided that driver, along with a web-based configuration tool. After I installed the driver and did a quick reboot, the RAID configuration tool turned out not to be easy to understand, but I figured it out even without reading the manual.

    Entering the Disk Manager, I initialized and formatted the drive and from then on, I started copying all my files in there.

    As a last minute change, I decided not to install RocksBox and to go with Plex Media Server instead. After playing around with Plex, I found that it was a lot easier to set up than RocksBox (RocksBox requires a web server to be installed on the server machine, whereas Plex automatically discovers servers). Installing the relevant channel on my Roku boxes was really easy and everything was ready to work right out of the box, so to speak.

    Problems

    The only problem I encountered had to do with Lenovo itself. I basically wanted to install the system on the SSD drive. Since the main drive was 1TB and the SSD drive 128GB, I could not use CloneZilla or Image for Windows to move the system from one drive to the other. I tried almost everything: I shrank the 1TB system partition to make it fit on the 128GB drive, turned hibernation off, and rebooted a couple of times in Safe Mode to remove unmovable files - in short, it was a huge waste of time.

    Since Lenovo did not include the installation disks (only an applications DVD), I called their support line and inquired about those disks. I was sent from the hardware department to the software department, where a gentleman informed me that I would have to pay $65 to purchase the OS disks. You can imagine my frustration at the fact that I had already paid for the OS by purchasing the computer. We went back and forth with the technician, and in the end I got routed to a manager who told me I could create the disks myself using Lenovo's proprietary software.

    The rescue-disk creation process required 10 DVDs, so I started it. On DVD 7 the process halted. I repeated the process, only to see the same error on DVD 4. The following day I called Lenovo hardware support and managed to talk to a lady who was more than willing to send me the installation disks free of charge. Unfortunately, right after I gave her my address the line dropped, so I had to call again.

    The second phone call did not go very well. I was transferred to the software department again, where I was told that I would have to pay $65 for the disks. The odd thing is that the technician tried to convince me that Lenovo doesn't actually pay money to Microsoft since they get an OEM license. Knowing this is not correct, and since the technician was getting really rude, I asked to speak to a supervisor. The supervisor was even worse, and having already spent 45 minutes on the phone, I asked to be transferred to the hardware department again. Once there, I spoke to another lady, explained the situation and how long I had been trying to get this resolved (we are at 55 minutes now), and she happily took my information and sent me the installation disks free of charge.

    Conclusion

    The setup discussed in this post is an inexpensive and relatively secure way of storing data in your own home/home network. The RAID 1 configuration offers redundancy, while the price of the system does not break the bank.

    I am very disappointed with Lenovo for trying to charge me for something I had already paid for (the operating system, that is). Luckily, the ladies at the hardware department were a lot more accommodating and I got what I wanted in the end.

    I hope this guide helped you.

  • 2012-01-15 12:00:00

    Downgrading PHPUnit from 3.6 to 3.5

    Recently I had to rebuild my computer, and decided to install Linux Mint 12 (Lisa), which is a very lean installation - for my taste that is.

    Going through the whole process of reinstalling all the packages that I had or needed, PHPUnit was one of them. Easy enough: a couple of commands did the trick

    sudo apt-get install php-pear
    sudo pear upgrade PEAR
    sudo pear config-set auto_discover 1
    sudo pear install pear.phpunit.de/PHPUnit
    

    I wanted to run my tests after that, only to find an error in the execution:

    PHP Fatal error:  Call to undefined method PHPUnit_Util_Filter::addFileToFilter()
     in /home/www/project/library/PHPUnit/Framework.php on line 48
    

    At first I thought that it was a path error, so I included /usr/share/php/PHPUnit and other paths in php.ini, but with no luck. With a little bit of Googling I found out that there had been some changes in the 3.6 version of PHPUnit and things don't work the way they did before.

    Effectively, 3.6 had some refactoring done and thus the line:

    PHPUnit_Util_Filter::addDirectoryToFilter("$dir/tests");
    

    changed to

    PHP_CodeCoverage_Filter::getInstance()
            ->addDirectoryToBlacklist("$dir/tests");
    

    Since I didn't want to change my whole test suite, I had to find another solution, i.e. downgrade PHPUnit to 3.5.
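    For reference, another way out would have been a small version-detecting shim in the test bootstrap (the helper name below is made up, and I still preferred the downgrade):

```php
<?php
// Hypothetical bootstrap helper: route to whichever filter API the
// loaded PHPUnit version provides. Returns which API was used, or
// null when PHPUnit is not loaded at all.
function filterTestsDirectory($dir)
{
    if (class_exists('PHP_CodeCoverage_Filter')) {
        // PHPUnit 3.6+ API
        PHP_CodeCoverage_Filter::getInstance()
            ->addDirectoryToBlacklist("$dir/tests");
        return '3.6';
    }
    if (class_exists('PHPUnit_Util_Filter')) {
        // PHPUnit 3.5 API
        PHPUnit_Util_Filter::addDirectoryToFilter("$dir/tests");
        return '3.5';
    }
    return null;
}
```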

    Unfortunately, specifying the version directly did not work

    sudo pear install phpunit/PHPUnit-3.5.15
    

    since it would still pull the latest versions of the dependencies and I would end up with the 3.6 files.

    So I went one step further and installed specific versions of the relevant dependencies to satisfy the 3.5.15 version.

    Uninstallation of 3.6

    pear uninstall phpunit/PHPUnit_Selenium
    pear uninstall phpunit/DbUnit
    pear uninstall phpunit/PHPUnit
    pear uninstall phpunit/PHP_CodeCoverage
    pear uninstall phpunit/PHP_Iterator
    pear uninstall phpunit/PHPUnit_MockObject
    pear uninstall phpunit/Text_Template
    pear uninstall phpunit/PHP_Invoker
    pear uninstall phpunit/PHP_Timer
    pear uninstall phpunit/File_Iterator
    pear uninstall pear.symfony-project.com/YAML
    

    Installation of 3.5.15

    pear install pear.symfony-project.com/YAML-1.0.2
    pear install phpunit/PHPUnit_Selenium-1.0.1
    pear install phpunit/PHP_Timer-1.0.0
    pear install phpunit/Text_Template-1.0.0
    pear install phpunit/PHPUnit_MockObject-1.0.3
    pear install phpunit/File_Iterator-1.2.3
    pear install phpunit/PHP_CodeCoverage-1.0.2
    pear install phpunit/DbUnit-1.0.0
    pear install phpunit/PHPUnit-3.5.15
    

    I hope you find the above useful :)