Credits
Author: Michael Guenther
Editor: Aaron Loucks
Dancing Elephants: Michael V. Shuman
Introductions
Aaron Loucks
- Senior Technical Operations Engineer, CCHA
Michael Guenther
- Technical Operations Team Lead, CCHA
- ~16 months active Hadoop Admin
Experience
- Documentation for Hadoop is still pretty thin.
- Manning (finally) released their book, so now we have 3 Hadoop books.
- HBase has even less documentation available and no books. (July for Lars George's book. Probably, hopefully.)
- Cloudera didn't officially support HBase until CDH3.
Playing Catch Up
IS and Ops came to the game a bit later than development, so we had to play catch-up early in the project. We had to write a lot of our own tools and implement our own processes (rack awareness, log cleanup, metadata backups, config deployment, etc.). We also needed to learn a lot about Linux system details and network setup and configuration.
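For instance, the log-cleanup piece was just a cron'd script. A minimal sketch, assuming CDH-style log locations under /var/log/hadoop and retention windows chosen purely for illustration (our actual tool differed):

  #!/bin/bash
  # prune-hadoop-logs.sh -- keep rotated daemon logs and old task userlogs from
  # filling the log partition. Paths and retention are illustrative placeholders.
  find /var/log/hadoop -name '*.log.*' -mtime +14 -delete
  find /var/log/hadoop/userlogs -mindepth 1 -maxdepth 1 -type d -mtime +7 -exec rm -rf {} +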
In some places, Ops is part of IS. The correct model depends on staffing and on which group fulfills the various enterprise roles.
Administering Hadoop/HBase created a problem for our traditional support model and for non-SA activity on the machines. It took some time to get used to the new system and what was needed for us to run and maintain it, most of which changed with CDH3.
- We currently use a mount and an ssh script to push configs (rough sketch below).
- Cloudera recommended using Puppet or Chef.
- We haven't made that jump yet. When the cluster goes heterogeneous, we will investigate further.
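A minimal sketch of what we mean by an ssh/rsync config push; the node list file and config path are placeholders, not our actual layout:

  #!/bin/bash
  # push-configs.sh -- naive config push to every node in the cluster.
  CONF_DIR=/etc/hadoop/conf
  NODE_LIST=/opt/cluster/nodes.txt   # one hostname per line (placeholder path)

  while read -r host; do
    echo "Pushing configs to $host"
    rsync -a --delete "$CONF_DIR/" "$host:$CONF_DIR/" || echo "WARNING: push to $host failed"
  done < "$NODE_LIST"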
Dell R710s
- Dual Intel Quad-cores
- 72GB of RAM
- SCSI drives in RAID configuration (~70GB)

Dell R410s
- Dual Intel Quad-cores
- 64GB of RAM
- 4x2TB 5400RPM SATA in JBOD configuration

Dell R610s
- Dual Intel Quad-cores
- 32GB of RAM
- SCSI drives in RAID configuration (~70GB)
Rack Details
- TOR switches: Cisco 4948s
- 1GbE links to TOR
- 42U rack, ~32U usable for servers
Network
- TORs are 1GbE to the core (Cisco 6509s). Channel bonding is possible if needed.
- 10GbE is being investigated if needed.
- 192Gb backbone
When we added new servers, we ran into rack space issues.
- Our rack breakdown for UAT datanodes is 5, 10, and 15 servers.
- Uneven datanode distribution isn't handled well by HBase and Hadoop.
- Re-racking was not an option.
- Options: turn off rack awareness, go with the uneven rack arrangement, or lie to Hadoop? (A sketch of the "lie" follows below.)
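"Lying to Hadoop" just means having the rack-awareness script report whatever topology you want it to believe. Hadoop invokes the script named by topology.script.file.name in core-site.xml with host addresses as arguments and expects one rack name per argument; the mapping file below is invented for illustration:

  #!/bin/bash
  # rack-map.sh -- referenced by topology.script.file.name in core-site.xml.
  # Prints one rack name per host/IP argument; unknown hosts fall back to /default-rack.
  MAP=/etc/hadoop/conf/rack.map    # illustrative file, lines like: 10.1.2.3 /dc1/rack1

  for host in "$@"; do
    rack=$(awk -v h="$host" '$1 == h {print $2}' "$MAP")
    echo "${rack:-/default-rack}"
  done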
Be Paranoid.
- Set up monitoring and alerting.
- Set your trash option in HDFS to greater than the 10-minute default.
- Lock down your cluster:
  - Keep everyone off of your cluster.
  - Provide a client server for user interaction.
- Again, use multiple dfs.name.dir locations.
- Run wgets regularly against the namenode image and edits URLs to create a backup (sketch below).
- Back up your config files prior to any major change (or even a minor one).
- Save your job statistics to HDFS (mapred.job.tracker.persist.jobstatus.dir).
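A rough sketch of the wget backup. The hostname is a placeholder, and the port 50070 getimage servlet paths are the 0.20/CDH3-era defaults, so adjust for your build:

  #!/bin/bash
  # nn-backup.sh -- pull copies of the fsimage and edits via the namenode's HTTP interface.
  NN=namenode.example.com:50070            # placeholder host:port
  DEST=/backup/namenode/$(date +%Y%m%d-%H%M)
  mkdir -p "$DEST"

  wget -q -O "$DEST/fsimage" "http://$NN/getimage?getimage=1"
  wget -q -O "$DEST/edits"   "http://$NN/getimage?getedit=1"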
Our CDH yum repo mirrorlist was set to this initially:
mirrorlist=http://archive.cloudera.com/redhat/cdh/3/mirrors
- That's the repo for the latest and greatest CDH3 build (B2, B3, B4, etc.).
- We are on CDH3B3, so we needed to set our repo mirrorlist to this:
mirrorlist=http://archive.cloudera.com/redhat/cdh/3b3/mirrors
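A hedged example of making that change on a node. The repo filename is illustrative (yours may differ); the URLs are the ones above:

  # Pin the Cloudera mirrorlist to the b3 tree so a plain 'yum update' can't walk you
  # onto a newer CDH3 beta. The repo filename is a placeholder.
  sed -i 's#cdh/3/mirrors#cdh/3b3/mirrors#' /etc/yum.repos.d/cloudera-cdh3.repo
  grep mirrorlist /etc/yum.repos.d/cloudera-cdh3.repo
  # expected: mirrorlist=http://archive.cloudera.com/redhat/cdh/3b3/mirrors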
We expected file corruption to be worse than it turned out to be when it happened.
- It's still bad, but only the files listed as corrupt are lost.
- It wasn't the swath of destruction we thought it would be.
Cloudera might be able to work some magic, but you've almost certainly lost your file(s).
Document EVERYTHING
- It's a bit tiresome at first, but issues can sometimes go months between recurrences.
- Write it down now and save yourself having to research it again.
- This is especially true when you are setting up your first cluster. There's a lot to learn, and it's really easy to forget.
- Pay special attention to the error message that goes along with the problem. HBase tends to have extremely vague exceptions and error logging.
The ability to change job priority from the web page caused some serious problems.
- Users grow frustrated when their jobs aren't running, so they increase the priority.
- Now their job is running, but others are being starved.
- We ended up restricting page access to a very small subset of users.
Smaller Issues
- Missing pid files?
- Users receive a zip file exception when running a job.
- CDH3 install/upgrade requires a local hadoop user.
- The JobTracker complains about a port already in use. Check your mapred-site.xml.
- Set your minimum heap equal to your max.
- Set your memory explicitly per process, not using HADOOP_HEAPSIZE.
- Set your map and reduce heap size as final in your mapred-site.xml (excerpt below).
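A hedged mapred-site.xml excerpt showing the idea. The per-task-type properties below are the CDH3-era names (older builds only have mapred.child.java.opts), the -Xmx values are placeholders rather than our production numbers, and final stops job submitters from overriding them:

  <!-- mapred-site.xml excerpt: heap set explicitly per task type and marked final -->
  <property>
    <name>mapred.map.child.java.opts</name>
    <value>-Xmx1024m</value>
    <final>true</final>
  </property>
  <property>
    <name>mapred.reduce.child.java.opts</name>
    <value>-Xmx2048m</value>
    <final>true</final>
  </property>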
- Set your hbase user's ulimits high; 64k is good (see the sketch after this list).
- Sometimes HBase takes a really long time to start back up (2 hours one Saturday).
- 0.89 had a WAL file corruption problem.
- Keep your quorum off of your data nodes (off that rack, really).
- HBase is extremely sensitive to network events, maintenance, connectivity issues, etc.
- Give your region servers more memory than your HMaster. Region servers can, and will, run out of memory and crash.
- RowCounter is your friend for nonresponsive region servers.
- ZooKeeper should be set to 1 GB of JVM heap.
- Talk to Cloudera about special JVM settings for your HBase daemons.
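A rough sketch of the ulimit and ZooKeeper heap settings mentioned above, assuming HBase runs as the hbase user and the quorum is a standalone ZooKeeper; the file paths are the standard locations, and the values are the ones from the slides:

  # /etc/security/limits.conf -- raise the hbase user's open-file limit (64k per the slide)
  hbase  soft  nofile  65536
  hbase  hard  nofile  65536

  # ZooKeeper conf/java.env -- cap the quorum's JVM heap at 1 GB
  export JVMFLAGS="-Xmx1g"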