Chapter 1. Upgrading

Table of Contents

1.1. Upgrading from 0.94.x to 0.96.x
1.2. Upgrading from 0.92.x to 0.94.x
1.3. Upgrading from 0.90.x to 0.92.x
1.3.1. You can’t go back!
1.3.2. MSLAB is ON by default
1.3.3. Distributed splitting is on by default
1.3.4. Memory accounting is different now
1.3.5. On the Hadoop version to use
1.3.6. HBase 0.92.0 ships with ZooKeeper 3.4.2
1.3.7. Online alter is off by default
1.3.8. WebUI
1.3.9. Security tarball
1.3.10. Experimental off-heap cache
1.3.11. Changes in HBase replication
1.3.12. RegionServer now aborts if OOME
1.3.13. HFile V2 and the “Bigger, Fewer” Tendency
1.4. Upgrading to HBase 0.90.x from 0.20.x or 0.89.x

You cannot skip major versions when upgrading. If you are upgrading from version 0.20.x to 0.92.x, you must first go from 0.20.x to 0.90.x and then from 0.90.x to 0.92.x.

Review ???, in particular the section on Hadoop version.

1.1. Upgrading from 0.94.x to 0.96.x

The Singularity

You will have to stop your old 0.94 cluster completely to upgrade. If you are replicating between clusters, both clusters will have to go down to upgrade. Make sure it is a clean shutdown so there are no WAL files lying around (TODO: Can 0.96 read 0.94 WAL files?). Make sure ZooKeeper is cleared of state. All clients must be upgraded to 0.96 too.

The API has changed in a few areas, in particular in how you use coprocessors (TODO: MapReduce too?).

TODO: Write about 3.4 zk ensemble and multi support

1.2. Upgrading from 0.92.x to 0.94.x

0.92 and 0.94 are interface compatible. You can do a rolling upgrade between these versions.
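
A rolling upgrade here means deploying the 0.94 binaries to every node and then restarting the daemons one at a time. A minimal sketch, assuming the bin/graceful_stop.sh script that ships with HBase and a conf/regionservers file listing your region servers:

  # Restart each region server in turn once the 0.94 binaries are in
  # place. graceful_stop.sh unloads regions from the server, stops it,
  # restarts it, and reloads the regions it was carrying.
  $ for rs in `cat conf/regionservers | sort`; do
      ./bin/graceful_stop.sh --restart --reload --debug $rs
    done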

1.3. Upgrading from 0.90.x to 0.92.x

Upgrade Guide

You will find that 0.92.0 runs a little differently from 0.90.x releases. Here are a few things to watch out for when upgrading from 0.90.x to 0.92.0.

tl;dr

If you have no patience, here are the important things to know when upgrading.

  1. Once you upgrade, you can’t go back.
  2. MSLAB is on by default. Watch that heap usage if you have a lot of regions.
  3. Distributed splitting is on by default. It should make region server failover faster.
  4. There’s a separate tarball for security.
  5. If -XX:MaxDirectMemorySize is set in your hbase-env.sh, it’s going to enable the experimental off-heap cache (You may not want this).

1.3.1. You can’t go back!

To move to 0.92.0, all you need to do is shut down your cluster, replace your HBase 0.90.x binaries with HBase 0.92.0 binaries (be sure you clear out all 0.90.x instances), and restart (you cannot do a rolling restart from 0.90.x to 0.92.x -- you must do a full restart). On startup, the .META. table content is rewritten, removing the table schema from the info:regioninfo column. Also, any flushes done after the first startup will write out data in the new 0.92.0 file format, HFile V2. This means you cannot go back to 0.90.x once you’ve started HBase 0.92.0 over your HBase data directory.
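
In outline, the move looks something like this (a sketch only; paths will vary with your installation):

  # Cleanly stop the 0.90.x cluster.
  $ ./bin/stop-hbase.sh
  # Replace the 0.90.x binaries with the 0.92.0 binaries on every node,
  # making sure no 0.90.x instances remain anywhere.
  # Then start the new version over the same HBase data directory.
  $ ./bin/start-hbase.sh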

1.3.2. MSLAB is ON by default

In 0.92.0, the hbase.hregion.memstore.mslab.enabled flag is set to true (see ???). In 0.90.x it was false. When it is enabled, memstores allocate memory in 2MB MSLAB chunks even if the memstore has zero or just a few small elements. This is usually fine, but if you had lots of regions per regionserver in a 0.90.x cluster (and MSLAB was off), you may find yourself OOME'ing on upgrade because thousands of regions * number of column families * 2MB MSLAB (at a minimum) puts your heap over the top. Set hbase.hregion.memstore.mslab.enabled to false, or set the MSLAB size down from 2MB by setting hbase.hregion.memstore.mslab.chunksize to something less.
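
For example, to turn MSLAB off, or to shrink the chunk size, add something like the following to hbase-site.xml (the 1MB chunk size below is illustrative only):

  <!-- Disable MSLAB entirely... -->
  <property>
    <name>hbase.hregion.memstore.mslab.enabled</name>
    <value>false</value>
  </property>
  <!-- ...or keep it on but use smaller chunks, e.g. 1MB instead of 2MB. -->
  <property>
    <name>hbase.hregion.memstore.mslab.chunksize</name>
    <value>1048576</value>
  </property>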

1.3.3. Distributed splitting is on by default

Previously, the WAL logs of a crashed server were split by the Master alone. In 0.92.0, log splitting is done by the cluster (see “HBASE-1364 [performance] Distributed splitting of regionserver commit logs”). This should cut down significantly on the amount of time it takes to split logs and get regions back online again.
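
If you need to fall back to Master-only splitting, there is a switch for it; a sketch for hbase-site.xml, assuming the 0.92 property name hbase.master.distributed.log.splitting (verify against your release's hbase-default.xml):

  <!-- Property name assumed from the 0.92 codebase; default is true. -->
  <property>
    <name>hbase.master.distributed.log.splitting</name>
    <value>false</value>
  </property>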

1.3.4. Memory accounting is different now

In 0.92.0, ??? indices and bloom filters take up residence in the same LRU cache used for caching blocks that come from the filesystem. In 0.90.x, the HFile v1 indices lived outside of the LRU, so they took up space even if the index was on a ‘cold’ file, one that wasn’t being actively used. With the indices now in the LRU, you may find you have less space for block caching. Adjust your block cache size accordingly. See ??? for more detail. The block cache default size has been changed in 0.92.0 from 0.2 (20 percent of heap) to 0.25 (25 percent).
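
The block cache fraction is governed by hfile.block.cache.size in hbase-site.xml; to trim it back toward the old 20 percent, for example:

  <!-- Fraction of heap given to the block cache; 0.92.0 defaults
       to 0.25, up from 0.2 in 0.90.x. -->
  <property>
    <name>hfile.block.cache.size</name>
    <value>0.2</value>
  </property>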

1.3.5. On the Hadoop version to use

Run 0.92.0 on Hadoop 1.0.x (or CDH3u3 when it ships). The performance benefits are worth making the move. Otherwise, our Hadoop prescription is as it has been: you need a Hadoop that supports a working sync. See ???.

If running on Hadoop 1.0.x (or CDH3u3), enable local reads. See the Practical Caching presentation for ruminations on the performance benefits of ‘going local’ (and for how to enable local reads).
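
As a sketch, the HDFS-side settings of that era look like the following in hdfs-site.xml; the property names and the hbase user are assumptions drawn from the Hadoop 1.0.x/CDH3u3 local-read feature, and the presentation linked above has the authoritative recipe:

  <!-- Allow the user running the region servers to read block files
       directly (adjust the user name to your deployment). -->
  <property>
    <name>dfs.block.local-path-access.user</name>
    <value>hbase</value>
  </property>
  <!-- Client side: use short-circuit local reads when possible. -->
  <property>
    <name>dfs.client.read.shortcircuit</name>
    <value>true</value>
  </property>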

1.3.6. HBase 0.92.0 ships with ZooKeeper 3.4.2

If you can, upgrade your ZooKeeper. If you can’t, 3.4.2 clients should work against 3.3.x ensembles (HBase makes use of the 3.4.2 API).

1.3.7. Online alter is off by default

In 0.92.0, we’ve added an experimental online schema alter facility (see ???). It is off by default. Enable it at your own risk. Online alter and splitting tables do not play well together, so be sure your cluster is quiescent while using this feature (for now).
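
To enable it, set the following in hbase-site.xml; the property name here is as in the 0.92 codebase (check your release's hbase-default.xml):

  <!-- Experimental in 0.92.0: allow altering the schema of enabled tables. -->
  <property>
    <name>hbase.online.schema.update.enable</name>
    <value>true</value>
  </property>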

1.3.8. WebUI

The webui has had a few additions made in 0.92.0. It now shows a list of the regions currently in transition, recent compactions/flushes, and a process list of running processes (usually empty if all is well and requests are being handled promptly). Other additions include requests by region, a debugging servlet dump, etc.

1.3.9. Security tarball

We now ship with two tarballs: secure and insecure HBase. Documentation on how to set up a secure HBase is on the way.

1.3.10. Experimental off-heap cache

A new cache was contributed to 0.92.0 to act as a middle ground between the “on-heap” cache, which is the current LRU cache the region servers have, and the operating system cache, which is out of our control. To enable it, set “-XX:MaxDirectMemorySize” in hbase-env.sh to the maximum direct memory size, and specify hbase.offheapcache.percentage in hbase-site.xml with the percentage that you want to dedicate to the off-heap cache. This should only be set for servers and not for clients. Use at your own risk. See this blog post for additional information on this new experimental feature: http://www.cloudera.com/blog/2012/01/caching-in-hbase-slabcache/
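
Putting the two settings together, a server-side configuration might look like the sketch below; the 1G cap and the 0.8 value are illustrative only, and how the percentage is expressed should be checked against your release's hbase-default.xml. In hbase-env.sh:

  # Cap direct (off-heap) memory for the region server JVM, e.g. at 1G.
  export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -XX:MaxDirectMemorySize=1G"

And in hbase-site.xml:

  <!-- Share of MaxDirectMemorySize to dedicate to the off-heap cache. -->
  <property>
    <name>hbase.offheapcache.percentage</name>
    <value>0.8</value>
  </property>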

1.3.11. Changes in HBase replication

0.92.0 adds two new features: multi-slave and multi-master replication. The way to enable this is the same as adding a new peer, so in order to have multi-master you would just run add_peer for each cluster that acts as a master to the other slave clusters. Collisions are handled at the timestamp level, which may or may not be what you want; this needs to be evaluated on a per-use-case basis. Replication is still experimental in 0.92 and is disabled by default; run it at your own risk.
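
For example, to have one cluster act as master to two slave clusters, you would run add_peer once per slave from the HBase shell; the peer ids and ZooKeeper quorum addresses below are placeholders:

  hbase> add_peer '1', "slave1-zk1,slave1-zk2,slave1-zk3:2181:/hbase"
  hbase> add_peer '2', "slave2-zk1,slave2-zk2,slave2-zk3:2181:/hbase"

For multi-master, each cluster that acts as a master runs its own add_peer commands pointing at its slaves.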

1.3.12. RegionServer now aborts if OOME

On an OOME, we now have the JVM kill -9 the regionserver process so it goes down fast. Previously, a RegionServer might stick around after incurring an OOME, limping along in some wounded state. To disable this facility (we recommend you leave it in place), you would need to edit the bin/hbase file. Look for the addition of the -XX:OnOutOfMemoryError="kill -9 %p" arguments (see HBASE-4769, ‘Abort RegionServer Immediately on OOME’).
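
The argument in question, as added to the server launch line in bin/hbase, looks like this; removing it disables the fast abort:

  # On OutOfMemoryError, kill -9 the process (%p expands to the JVM pid).
  -XX:OnOutOfMemoryError="kill -9 %p"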

1.3.13. HFile V2 and the “Bigger, Fewer” Tendency

0.92.0 stores data in a new format, ???. As HBase runs, it will move all your data from HFile v1 to HFile v2 format. This auto-migration runs in the background as flushes and compactions run. HFile V2 allows HBase to run with larger regions/files. In fact, we encourage all HBasers going forward to tend toward Facebook axiom #1: run with larger, fewer regions. If you have lots of regions now -- more than 100s per host -- you should look into setting your region size up after you move to 0.92.0 (in 0.92.0, the default size is now 1G, up from 256M), and then running the online merge tool (see “HBASE-1621 merge tool should work on online cluster, but disabled table”).
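
Region size is governed by hbase.hregion.max.filesize in hbase-site.xml; to push past the new 1G default toward fewer, bigger regions, you might set something like the following (the 4G value is illustrative only):

  <!-- Maximum region size in bytes; 0.92.0 defaults to 1G (up from
       256M in 0.90.x). 4G shown as an example. -->
  <property>
    <name>hbase.hregion.max.filesize</name>
    <value>4294967296</value>
  </property>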

1.4. Upgrading to HBase 0.90.x from 0.20.x or 0.89.x

This version of 0.90.x HBase can be started on data written by HBase 0.20.x or HBase 0.89.x. There is no need for a migration step. HBase 0.89.x and 0.90.x do write out the names of region directories differently -- they name them with an md5 hash of the region name rather than a Jenkins hash -- so this means that once started, there is no going back to HBase 0.20.x.

Be sure to remove the hbase-default.xml from your conf directory on upgrade. A 0.20.x version of this file will have sub-optimal configurations for 0.90.x HBase. The hbase-default.xml file is now bundled into the HBase jar and read from there. If you would like to review the content of this file, see it in the src tree at src/main/resources/hbase-default.xml or see ???.

Finally, if upgrading from 0.20.x, check your .META. schema in the shell. In the past we would recommend that users run with a 16kb MEMSTORE_FLUSHSIZE. Run hbase> scan '-ROOT-' in the shell. This will output the current .META. schema. Check the MEMSTORE_FLUSHSIZE. Is it 16kb (16384)? If so, you will need to change it (the 'normal'/default value is 64MB (67108864)). Run the script bin/set_meta_memstore_size.rb. This will make the necessary edit to your .META. schema. Failure to make this change will make for a slow cluster [1] .
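
The check and fix look something like this in a shell session; the invocation of the JRuby script is a sketch, assuming the usual way of running the scripts bundled with HBase:

  # In the HBase shell: dump the current .META. schema (output elided).
  hbase> scan '-ROOT-'
  # If MEMSTORE_FLUSHSIZE shows 16384, run the bundled fixup script.
  $ ./bin/hbase org.jruby.Main bin/set_meta_memstore_size.rb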