Recently, I got around to installing Hadoop 0.20.205 using its rpm. I also used the included configuration scripts to create a functional multi node Hadoop configuration. I chose to use a non-secure configuration. I discovered a couple of gotchas along the way.
Pre-requisite: My test cluster consists of 4 CentOS 5.7 VMs each with dual cores and 2GB of memory. I named these 4 VMs ‘master’, ‘slave1’, ‘slave2’, and ‘slave3’. I created a hosts file mapping these names to their IP addresses and copied it over to each of these machines. I also configured the VM ‘master’ to be able to do passwordless ssh into the three slaves.
- Login to the node ‘master’ as root, and do the following.
- Download the JDK and install it. I am using JDK 1.6.0 Update 29. Add a file /etc/profile.d/java.sh that sets the env variable JAVA_HOME and adds $JAVA_HOME/bin to the path. Run ‘java -version’ and ensure that you are getting Oracle JDK 1.6 and not openjdk or some other such silliness.
- Download the rpm ‘hadoop-0.20.205.0-1.i386.rpm’, and install it using ‘rpm --install hadoop-0.20.205.0-1.i386.rpm’.
- Hadoop includes a convenient script /usr/sbin/hadoop-setup-conf.sh for generating configuration files (hadoop does not suffer from a paucity of configuration options). First, I need to run this script on the node ‘master’ to generate the configuration files. The command line I used was as follows: ‘/usr/sbin/hadoop-setup-conf.sh --namenode-host=master --jobtracker-host=master --conf-dir=/etc/hadoop --hdfs-dir=/var/lib/hadoop/hdfs --namenode-dir=/var/lib/hadoop/hdfs/namenode --mapred-dir=/var/lib/hadoop/mapred --datanode-dir=/var/lib/hadoop/hdfs/data --log-dir=/var/log/hadoop --auto --mapreduce-user=mapred --dfs-support-append=true’
- At this point, logout of the shell, and then login again (as root). This is necessary because a file /etc/profile.d/hadoop-env.sh is created with critical environment variables. Without these env variables sourced, subsequent operations will fail.
- Now, format the HDFS.
- Start up the namenode.
- Start up the jobtracker.
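The exact commands for these three steps did not survive above. A hedged sketch of what they presumably were, based on the helper scripts and init scripts the 0.20.205 rpm installs (the script and service names here are assumptions, consistent with the hadoop-datanode service used later in this post):

```
# Assumed commands for the three steps above (names are assumptions):
# 1. Format HDFS (the rpm ships a helper; 'hadoop namenode -format' also works)
/usr/sbin/hadoop-setup-hdfs.sh
# 2. Start the namenode via its init script
/etc/init.d/hadoop-namenode start
# 3. Start the jobtracker via its init script
/etc/init.d/hadoop-jobtracker start
```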
At this point, your ‘master’ is ready. Next, we set up the slaves.
- Login as root into the node ‘slave1’.
- Download and install the JDK. See instructions for master above.
- Download and install the Hadoop RPM. See instructions for master above.
- Run the same ‘/usr/sbin/hadoop-setup-conf.sh’ command as you did on the master to generate config files. Note that the config files for the slaves are exactly the same as for the master.
- Finally, start the datanode with ‘/etc/init.d/hadoop-datanode start’, and start the tasktracker service as well.
Once the slaves are setup, browse over to http://master:50070/ to get to the NameNode web UI. Ensure that there are three ‘Live Nodes’ listed. Also, browse over to http://master:50030/ to get to the JobTracker web UI. Ensure that the jobtracker can see three nodes.
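Besides the web UIs, the datanode count can be checked from the shell. A quick sketch, assuming the generated configuration is in place and you are logged into ‘master’:

```
# Print an HDFS cluster summary, including the number of live datanodes
hadoop dfsadmin -report
```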
As the final step, run the wordcount example. I did so not as root, but as the user ‘jagane’.
- First, I created a home directory on HDFS for the user ‘jagane’. Logged into the Linux system ‘master’ as root, I typed ‘/usr/sbin/hadoop-create-user.sh -u jagane’.
- Next, I logged into the Linux system ‘master’ as user ‘jagane’ and created an input directory on HDFS, like so: ‘hadoop fs -mkdir /user/jagane/input’.
- I am going to run word count on the Linux dict, so I type in ‘hadoop fs -copyFromLocal /usr/share/dict/linux.words /user/jagane/input’ to copy the dict file over to HDFS.
- Finally, the moment of truth. I typed in ‘hadoop jar /usr/share/hadoop/hadoop-examples-0.20.205.0.jar wordcount /user/jagane/input /user/jagane/output’. That actually worked. I counted the words in the Linux dict.
- To prove that it worked, I dumped the output using ‘hadoop fs -cat /user/jagane/output/part-r-00000’.
Well, there you have it. Hadoop 0.20.205 from rpm in a jiffy (‘big data’ jiffy that is).
I’m sure folks have a dozen different ways (chef, puppet, pdsh) of installing and managing Hadoop. But there is something about the elegance of a well-packaged rpm and a nice configuration-generation script that is just great.
Congratulations to Eric Yang on putting together the rpm, and its configuration script.
This is a quick post. Here is a link to the video of my Livebackup presentation at the KVM Forum 2011.
Here is a link to the presentation:
Recently, I have been working on a piece of technology that enables full and incremental backups of running VMs. My first implementation is for KVM. It is, by design, a very low-overhead technology. It does not impose the cost of continuous synchronization the way drbd does. Yet it offers a solution that should work for a substantial number of VMs.
I have been involved in discussions with the KVM community to get this software into the main kvm codebase. Towards this end, I presented LiveBackup to the KVM community at the KVM forum 2011. Here is a link to the KVM forum 2011 agenda.
And here is a link to the slides for my presentation.
Most presentations were recorded, and I will post a link when the recordings become available.
Recently, I got around to installing Oracle Linux 6 (RHEL 6 clone) on a machine in order to experiment with kvm. The machine I installed it on is an Intel Core 2 Duo 6400 with 4 GB of RAM. Intel Virtualization Technology (VT) is present in this chip, and enabled in the BIOS. I am running the server headless.
At install time, I chose the ‘virtual host’ option.
Setting up a bridge ‘br0’ in order to enable VM bridged networking:
Oracle Linux installs a default bridge virbr0 that is useful if you want to configure the VM to use ‘host only’ networking. I wanted a bridged network VM, i.e. the VM’s virtual interface should appear on my physical network just as any other machine would. There are a few steps that I need to do in order to enable this:
- Create a new bridge ‘br0’ and assign it the static IP address that used to be associated with eth0.
- Make the physical network interface ‘eth0’ be an uplink port to this bridge ‘br0’.
The assumption here is that the physical network card in the system is ‘eth0’. If you have ‘eth1’ connected to the network, make the corresponding changes to the setup described. Another assumption here is that at the end of the Linux install, ‘eth0’ has the static IP address 192.168.1.201/24 with gateway 192.168.1.10.
First create a new file /etc/sysconfig/network-scripts/ifcfg-br0 with the following contents:
DEVICE="br0"
TYPE=Bridge
NM_CONTROLLED="no"
ONBOOT="yes"
BOOTPROTO=static
IPADDR=192.168.1.201
NETMASK=255.255.255.0
GATEWAY=192.168.1.10
Next, delete the old ifcfg-eth0 file, and create a new one with the following contents:
DEVICE="eth0"
NM_CONTROLLED="no"
ONBOOT="yes"
BRIDGE=br0
Set up /etc/resolv.conf to point to the free Google DNS servers:
nameserver 8.8.8.8
nameserver 8.8.4.4
Reboot the system. When it comes up again, the bridge br0 should have the IP address, and eth0 should be an uplink port on the bridge, as shown below:
br0       Link encap:Ethernet  HWaddr 00:1C:C0:07:20:70
          inet addr:192.168.1.201  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::21c:c0ff:fe07:2070/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:69422 errors:0 dropped:0 overruns:0 frame:0
          TX packets:38962 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:8360996 (7.9 MiB)  TX bytes:9988797 (9.5 MiB)

eth0      Link encap:Ethernet  HWaddr 00:1C:C0:07:20:70
          inet6 addr: fe80::21c:c0ff:fe07:2070/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:69433 errors:0 dropped:0 overruns:0 frame:0
          TX packets:39489 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:9335186 (8.9 MiB)  TX bytes:10020863 (9.5 MiB)
          Memory:e0400000-e0420000
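The bridge membership can also be verified directly. A small sketch, assuming the bridge-utils package is installed:

```
# List bridges and their uplink ports; eth0 should appear under br0
brctl show
```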
Setting up yum to use a disk based copy of the install DVD as its repository:
After installation, I copied the contents of the install DVD into a directory called /root/cdrom. Then I ran the command ‘createrepo .’ from the /root/cdrom directory. Note that I had to do a ‘rpm --install’ of the createrepo rpm before I could do this. I also created a file /etc/yum.repos.d/iso.repo with the following contents:
[iso_repository]
baseurl=file:///root/cdrom
enabled=1
One more step:
I ran the following on Oracle Linux:
# rpm --import /root/cdrom/RPM-GPG-KEY
I ran the following on CentOS 6:
# rpm --import /root/cdrom/RPM-GPG-KEY-CentOS-6
Now, yum can find rpms from the /root/cdrom directory.
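To confirm that yum really is resolving packages from the local directory, an install can be restricted to just that repository. A sketch (the package name is only an example):

```
# Install using only the iso repository defined above
yum --disablerepo='*' --enablerepo=iso_repository install createrepo
```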
Starting to install RHEL6 in a newly created blank VM using the libvirt command line tool virt-install:
First, I turned off the firewall using ‘/etc/init.d/iptables stop’, since I want to connect to the guest console using vncviewer from my desktop. Remember, this server is running headless.
# mkdir -p /vms/1
# virt-install --name=el6guest --arch=x86_64 --ram=512 --os-type=linux --os-variant=rhel6 --hvm --network bridge=br0 --cdrom=/dev/cdrom --disk path=/vms/1/vdisk0,size=16 --accelerate --vnc --vnclisten=0.0.0.0
Starting install...
Creating storage file vdisk0          |  16 GB     00:00
Creating domain...                    |    0 B     00:00
Cannot open display:
Run 'virt-viewer --help' to see a full list of available command line options
Domain installation still in progress. You can reconnect to the console to complete the installation process.
Connecting to the console of the newly created VM in order to start installation:
The VM el6guest has now been created by virt-install, but we don’t yet know which vnc port the guest is listening on. Run the command virsh as follows:
[root@localhost ~]# virsh vncdisplay el6guest
:0
The ‘:0’ printed out by virsh tells us that the vnc server for guest el6guest is listening on port 5900, i.e. display 0.
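The display-to-port mapping is simple arithmetic. A small sketch that turns the virsh output into a TCP port (the ‘:0’ value is hard-coded here for illustration):

```shell
# vnc port = 5900 + display number; strip the leading ':' from virsh output
disp=":0"                         # e.g. disp=$(virsh vncdisplay el6guest)
port=$((5900 + ${disp#:}))
echo "$port"                      # prints 5900
```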
Startup the vncviewer binary on your desktop, and connect to the VM server at display 0. The Oracle Linux installer console will come up in vncviewer.
I installed a base server with root password el6guest. Once installation is complete, hit restart. The VM will shutdown at this point. You can restart the VM from virsh as shown below:
[root@localhost qemu]# virsh
Welcome to virsh, the virtualization interactive terminal.

Type:  'help' for help with commands
       'quit' to quit

virsh # start el6guest
Domain el6guest started

virsh # vncdisplay el6guest
:0
Now, you can connect to display 0 using a vncviewer from your desktop, and the VM’s console will show up. That’s all, folks.
Often, you may need to mount individual partitions of a virtual disk image. Here is a cheat sheet for doing so under Linux:
The first sector of a hard disk contains the Master Boot Record or MBR. In the case of a virtual disk image file, the first 512 bytes of the file represent the MBR.
The disk is divided into partitions, and each partition can be formatted with a different filesystem. In order to mount the filesystem, you need to locate the partition table entry, determine the first sector of the partition, and supply that to the mount command for mounting.
The partition table is located at offset 0x1be (decimal 446). To dump the first partition entry type the following command:
# od -A d -t x1 vdisk0
Locate offset 446 of the print out:
0000432 00 00 00 00 00 00 00 00 55 20 06 00 00 00 80 01
0000448 01 00 83 fe ff 0e 3f 00 00 00 10 f0 bf 00 00 00
The first partition table entry begins at offset 446: it is the sixteen bytes ‘80 01 01 00 83 fe ff 0e 3f 00 00 00 10 f0 bf 00’, spanning the last two bytes of the first line above and the first fourteen bytes of the second. Bytes 9, 10, 11 and 12 of the entry (‘3f 00 00 00’) constitute the offset, in sectors, of the beginning of the partition. Remember that it is in Little Endian byte order, so the 32 bit number is actually 0x0000003f, i.e. 63 decimal. This is a sector count, and each sector is 512 bytes long, so the actual file offset of the beginning of the first partition in file vdisk0 is (63 * 512), i.e. 32256.
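The same calculation can be done mechanically with od. A sketch that reads the four bytes at file offset 454 (446 + 8) and converts them to a byte offset; it assumes a little-endian host, since od decodes the u4 field in host byte order:

```shell
# Read the partition's starting LBA (bytes 9-12 of the first entry at
# file offset 454), then multiply by the 512-byte sector size.
lba=$(od -A n -t u4 -j 454 -N 4 vdisk0 | tr -d ' ')
echo $((lba * 512))               # 32256 for the example dump above
```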
Hence, use the following command to mount the first partition of virtual disk file vdisk0
# mount -o loop,offset=32256 ./vdisk0 ./mnt
Now you can ‘cd’ into ‘./mnt’ and view and modify files in the first partition of vdisk0. Remember to umount ./mnt when you are done.
Till a few years ago, if you wanted to run something on the Internet, either to provide service to the public, or to your own employees, you would do one of the following:
- Your own in-house Datacenter: You put your servers in your data closet, bought a T1 or T3 link from your telco, and published a DNS name for your service (chat.mycompany.com, for example)
- You could rent a full/half/quarter rack from a co-location facility such as the now-defunct Exodus. Load up your own servers, install your server OS, install your applications, install your firewall, etc., and run your service.
The principal reasons for renting space in a colo rack were:
- Good electricity (dual power sources + UPS, for example)
- Good network connectivity (fiber connection to Sprint, ATT and other backbone networks)
- Higher bandwidth for a lower price (a T1 is 1.544 Mbps to your own data closet and may cost you $500/month, versus a $350 10 Mbps link at the colo with capacity to burst up to 100 Mbps for at most 5% of the time)
Setting up and getting going in a colo was, and continues to be, a pain. It could involve a multi-year contract and some upfront setup charges.
Fast forward to today, and the colo scenario has been replaced by Cloud computing. Cloud computing consists of the following different types:
- Infrastructure As A Service (IaaS) – rent virtual machines from the Cloud Service Provider, and run (almost) any software (OS + Apps) on it
- Example: Amazon EC2. The smallest VM is 8.5 cents an hour ($744.60/year)
- Platform As A Service (PaaS) – rent capacity on an application platform to run your application. You do not get to choose the OS or hardware, and the application environment is usually very restricted. Google App Engine, for instance, requires you to write brand new applications in a language called Python
- Example: Salesforce’s force.com and Google’s App Engine
- Software As A Service (SaaS) – you rent an application. You do not get to choose hardware, the Operating System, or the Application. You just buy ‘functionality’.
- Example: WebEx. You purchase the capability to run web conferencing.
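The EC2 annual figure quoted above is straight arithmetic; a one-liner to check it (8.5 cents an hour is the rate quoted above):

```shell
# 8.5 cents/hour x 24 hours x 365 days = dollars per year
awk 'BEGIN { printf "%.2f\n", 0.085 * 24 * 365 }'
```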
All three cloud compute options bring about an ease of use and a low barrier to entry for customers that is an astounding improvement over the colo scenario of years past. Billing by the hour of use, pioneered by Amazon’s EC2 service, makes it very easy for customers to try out software and develop on the cloud.
Advances in Cloud compute technologies are encouraging people to outsource their own datacenter and its operations to the Cloud. One step in this migration may be to run Cloud compute software in your own datacenter, and then move suitable applications to the public cloud.
I set out to accomplish a (seemingly) simple task: Install CentOS 5.4 with KVM Virtualization on a system and then create a CentOS 5.4 KVM VM with virtio Net and Disk drivers.
It turns out that there is more to this task than meets the eye. So, here’s my step by step procedure.
Step 1: Install CentOS (Redhat 5.4) with KVM Virtualization on an Intel VT or AMD Pacifica enabled server (I used an Intel Core 2 Duo E6420/2GB/120GB SATA system)
- Install CentOS 5.4 64 bit with the “Virtualization” option
- While installing, choose “Customize now” instead of “Customize later”, and select KVM instead of ‘Virtualization’ in the Virtualization customization screen.
- For this install, I chose to disable SELinux. I’m sure it’s useful in some security contexts, but for most of my use it is just a source of endless problems. Someday, I might actually spend the time to learn how SELinux works. Right now, it feels to me like the Windowsification of Linux. Moving on…
- When the newly installed system boots up, you need to create a bridge (software switch) called br0, move the IP address of eth0 to br0, and then make eth0 an uplink to the bridge br0. Here’s how to do it:
- Create a file /etc/sysconfig/network-scripts/ifcfg-br0 with the following contents:
- Edit /etc/sysconfig/network-scripts/ifcfg-eth0 and replace its contents with:
- Reboot your system. Note that this configuration is for a static IP server.
- Add the following lines to /etc/sysconfig/iptables to allow relevant traffic:
- -A RH-Firewall-1-INPUT -i br0 -j ACCEPT
- -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 5900:6900 -j ACCEPT
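The contents of the ifcfg-br0 and ifcfg-eth0 files referred to above are not shown; they presumably follow the same pattern as the files given earlier on this page for Oracle Linux 6. A sketch, with the IP address, netmask, and gateway as placeholder assumptions for your own network:

```
# /etc/sysconfig/network-scripts/ifcfg-br0 (addresses are assumptions)
DEVICE=br0
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.1.201
NETMASK=255.255.255.0
GATEWAY=192.168.1.10

# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
BRIDGE=br0
```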
Step 2: Create a CentOS 5.4 KVM VM using the CentOS boot CD
Well, this task got complicated quickly. My intent was to make this VM connect to a bridged network interface, so that I could benchmark it by running nuttcp to another physical machine. CentOS 5.4 (Redhat 5.4) does not come with scripts for a bridged VM network out of the box. This is why we needed to create the br0 bridge in the previous step.
- In order to use the br0 bridge effectively, we need a utility called tunctl (I have a precompiled version here: http://www.thinsy.com/utils/tunctl.gz). Place this in /usr/sbin on your new CentOS box.
- It turns out that creating a VM by calling qemu directly involves a lot of options. I ended up building a script for this purpose. You can download it here: http://www.thinsy.com/utils/start_a_kvm.sh.gz. Place start_a_kvm.sh in /usr/sbin.
- Create a directory for our VM called /vms/1
- Create two 8GB files vdisk0 and vdisk1 in this directory using the following commands:
- dd if=/dev/zero of=./vdisk0 count=1 bs=1 seek=8589934591
- dd if=/dev/zero of=./vdisk1 count=1 bs=1 seek=8589934591
- Create a file called vm.params with the following contents (a sample is available at http://www.thinsy.com/utils/vm.params):
- For booting and installing the VM from the CentOS 5.4 CD image, run the following command:
- /usr/sbin/start_a_kvm.sh /vms/1/vm.params /tmp/CentOS-5.4-i386-bin-DVD.iso boot_from_cd
- This will cause a tap interface called tap0 to be created and connected to the bridge br0, and the VM to be started by calling kvm-qemu directly
- The start_a_kvm.sh script sets up the VM to publish a graphical console using the VNC protocol at TCP port 5900 + $VNCDISP, where VNCDISP is set in the vm.params file. Use your favorite vncviewer to connect to this graphical console.
- When the VM is started up, you will get the graphical console of the VM. Now go through the process of installing the OS on your newly created VM
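The dd commands in the list above exploit sparse files: writing a single byte at offset 8 GiB - 1 yields a file whose apparent size is 8 GiB while consuming almost no disk blocks. A quick way to see this:

```shell
# Create the sparse disk image exactly as above, then compare the
# apparent size (bytes) with the number of allocated blocks.
dd if=/dev/zero of=./vdisk0 count=1 bs=1 seek=8589934591
stat -c 'size=%s blocks=%b' ./vdisk0   # size=8589934592; blocks stay tiny
```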
Step 3: First boot of the CentOS in your newly created VM
After the OS installation is completed, you can reboot the VM from the virtual hard drive, without the CDROM image attached. Here is the command to do that:
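The command itself is missing above; presumably it is the same wrapper script invoked without the install ISO and the boot_from_cd argument, so that the VM boots from vdisk0. This exact invocation is an assumption:

```
# Hypothetical: boot the installed VM from its virtual hard drive
/usr/sbin/start_a_kvm.sh /vms/1/vm.params
```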
There you have it – a KVM VM with paravirtualized drivers (virtio) for network and disk.
All this without the use of libvirt or virt-manager or one of the myriad programs that did not quite work for me.
Step 4: Fixup VNC mouse tracking
One of the most annoying things about the qemu vnc server is the fact that the mouse works like cr**. Here’s a simple fix for that problem. Download the following xorg.conf file and place it in your newly created VM’s /etc/X11 directory. This configures a VNC screen of size 1024×768 with a mouse that actually works – http://www.thinsy.com/utils/xorg.conf.gz