To follow this guide you need Java (preferably Sun Java), MySQL 5.x, and a Hadoop cluster already installed. We have already published separate guides on installing Percona MySQL on an Ubuntu server and installing Hadoop on a single server. Chukwa's visualization interface additionally requires HBase. Chukwa is a system designed for reliable log collection and processing with Hadoop, and its cluster management scripts rely on SSH. So we need a Hadoop and HBase cluster on which Chukwa will process data, a collector process that writes collected data to HBase, and agent processes that send monitoring data to the collector. Chukwa was designed around Hadoop: it stores data in HDFS, and its demux functionality internally runs a MapReduce job to compute the key-value pairs.

Steps to Install Apache Chukwa
Download a binary release of Apache Chukwa from the official release page, or build it from the source repository:
https://chukwa.apache.org/releases.html
https://github.com/apache/chukwa
Untar it with the tar -xzvf command. Copy Chukwa to each node being monitored, and run a collector. The official documentation refers to the directory containing Chukwa as CHUKWA_HOME. Create that directory and move the files into it.
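The unpack-and-relocate step might look like the sketch below. The version number and the CHUKWA_HOME location are assumptions; substitute the release you actually downloaded. The first two lines only fabricate a stand-in tarball so the sketch is runnable end to end — skip them when you have the real archive.

```shell
# Stand-in for a downloaded release so this sketch runs as-is
# (skip these two lines when you have a real Chukwa tarball):
mkdir -p chukwa-0.4.0/bin && touch chukwa-0.4.0/bin/chukwa
tar -czf chukwa-0.4.0.tar.gz chukwa-0.4.0

# The actual steps:
tar -xzvf chukwa-0.4.0.tar.gz        # unpack the release
mkdir -p chukwa_home                 # assumed CHUKWA_HOME location
cp -r chukwa-0.4.0/. chukwa_home/    # move the files into CHUKWA_HOME
export CHUKWA_HOME="$PWD/chukwa_home"
```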
---
Make sure that JAVA_HOME is set and points to the Java runtime. The Chukwa configuration files live in the CHUKWA_HOME/conf directory with a *.template extension appended. Copy, rename, and modify each *.template file, so that, for example, chukwa-collector-conf.xml.template becomes chukwa-collector-conf.xml. The script conf/chukwa-env.sh holds these and other settings. In conf/chukwa-env.sh, set CHUKWA_LOG_DIR and CHUKWA_PID_DIR, set JAVA_HOME to your Java installation, set HADOOP_JAR to $CHUKWA_HOME/hadoopjars/hadoop-0.18.2.jar (version shown as an example), and set CHUKWA_IDENT_STRING to the Chukwa cluster name. Then edit CHUKWA_HOME/conf/chukwa-collector-conf.xml and set the writer.hdfs.filesystem property to the HDFS root URL.
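A minimal chukwa-env.sh fragment might look like the following; every path, the Hadoop jar version, and the cluster name are illustrative assumptions to be adjusted for your hosts:

```shell
# conf/chukwa-env.sh -- illustrative values only
export JAVA_HOME=/usr/lib/jvm/java-6-sun                      # your Java installation
export HADOOP_JAR=$CHUKWA_HOME/hadoopjars/hadoop-0.18.2.jar   # example version
export CHUKWA_IDENT_STRING=my_chukwa_cluster                  # Chukwa cluster name
export CHUKWA_LOG_DIR=$CHUKWA_HOME/var/log
export CHUKWA_PID_DIR=$CHUKWA_HOME/var/run
```

In chukwa-collector-conf.xml, the writer.hdfs.filesystem property would then point at your HDFS root, for example hdfs://namenode.example.com:9000/ (hostname assumed).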
If the Hadoop configuration files are located in the HADOOP_HOME/conf directory, then:
cp $CHUKWA_HOME/conf/hadoop-log4j.properties $HADOOP_HOME/conf/log4j.properties
cp $CHUKWA_HOME/conf/hadoop-metrics.properties $HADOOP_HOME/conf/hadoop-metrics.properties
ln -s $HADOOP_HOME/conf/hadoop-site.xml $CHUKWA_HOME/conf/hadoop-site.xml
cp $HADOOP_HOME/lib/hadoop-*-core.jar $CHUKWA_HOME/hadoopjars/
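The hadoop-metrics.properties file copied above contains an @CHUKWA_LOG_DIR@ placeholder that must be replaced with a real path. This can be done by hand or with sed; the sketch below fabricates a one-line stand-in for the file (the real one lives in HADOOP_HOME/conf) and the log path is an assumption:

```shell
# Stand-in for HADOOP_HOME/conf/hadoop-metrics.properties:
printf 'chukwa.log.dir=@CHUKWA_LOG_DIR@\n' > hadoop-metrics.properties

# Replace the placeholder with a real log directory (path assumed):
sed -i 's|@CHUKWA_LOG_DIR@|/opt/chukwa/var/log|g' hadoop-metrics.properties
grep 'chukwa.log.dir' hadoop-metrics.properties
# -> chukwa.log.dir=/opt/chukwa/var/log
```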
Edit the HADOOP_HOME/conf/hadoop-metrics.properties file and change the @CHUKWA_LOG_DIR@ parameter to a real log directory path such as CHUKWA_HOME/var/log. The remaining step is the installation of MySQL. A generic way to install MySQL is:
tar -xzvf mysql-*.tar.gz -C $CHUKWA_HOME/opt
cd $CHUKWA_HOME/opt/mysql-*
cp my.cnf $CHUKWA_HOME/opt/mysql-*
We need to run these commands for the general MySQL installation and configuration process:
./scripts/mysql_install_db
./bin/mysqld_safe &
./bin/mysqladmin -u root create <clustername>
./bin/mysql -u root <clustername> < $CHUKWA_HOME/conf/database_create_table
Edit the CHUKWA_HOME/conf/jdbc.conf configuration file to map the cluster name to the MySQL JDBC URL:
<clustername>=jdbc:mysql://localhost:3306/<clustername>?user=root
Download the MySQL Connector/J from the MySQL site and copy the jar file into CHUKWA_HOME/lib. Then grant the replication privilege and load the Chukwa schema:
mysql -u root -p
Enter password:
GRANT REPLICATION SLAVE ON *.* TO '<username>'@'%' IDENTIFIED BY '<password>';
FLUSH PRIVILEGES;
# migrate data from Chukwa
use <database_name>
source /path/to/chukwa/conf/database_create_table.sql
source /path/to/chukwa/conf/database_upgrade.sql
Restart your Hadoop Cluster. Make sure HBase is started. After Hadoop and HBase are started, run:
bin/hbase shell < CHUKWA_HOME/etc/chukwa/hbase.schema
Add the collector hostnames to CHUKWA_HOME/etc/chukwa/collectors. For data analytics with Apache Pig you need extra environment setup, similar to Hadoop's.
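The collectors file lists one collector per line; the hostnames and port below are placeholders for illustration:

```
http://collector1.example.com:8080/
http://collector2.example.com:8080/
```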
Start the Chukwa services with their init scripts:
# on nodes where the Chukwa collector is installed
CHUKWA_HOME/tools/init.d/chukwa-collector start
# on the data processor node
CHUKWA_HOME/tools/init.d/chukwa-data-processors start
# check the Chukwa collector process
CHUKWA_HOME/tools/init.d/chukwa-collector status
The Hadoop Infrastructure Care Center (HICC) is the Chukwa web user interface. Download Apache Tomcat, decompress the tarball into CHUKWA_HOME/opt, and copy CHUKWA_HOME/hicc.war into apache-tomcat-x.y.z/webapps.
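The HICC deployment amounts to a single copy into Tomcat's webapps directory. The sketch below fabricates the Tomcat layout and an empty hicc.war so it is runnable as-is; the real hicc.war ships with Chukwa, and the Tomcat version number is an assumption:

```shell
# Stand-ins for the real artifacts:
mkdir -p opt/apache-tomcat-6.0.53/webapps
touch hicc.war                      # the real file ships in CHUKWA_HOME

# Deploy HICC; Tomcat expands the war file on startup:
cp hicc.war opt/apache-tomcat-6.0.53/webapps/
```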
Installation and configuration of Apache Chukwa is not easy; the detailed administration guide can help you further.