Below we list the important configurations. We've divided this section into required configurations and worth-a-look recommended configs.
See Section 1.3.1, “Requirements”. It lists at least two required configurations needed to run HBase bearing load: i.e. Section 1.3.1.6, “ulimit and nproc”, and Section 1.3.1.7, “dfs.datanode.max.xcievers”.
The default ZooKeeper session timeout is three minutes (specified in milliseconds). This means that if a server crashes, it will be three minutes before the Master notices the crash and starts recovery. You might like to tune the timeout down to a minute or even less so the Master notices failures sooner. Before changing this value, be sure you have your JVM garbage collection configuration under control; otherwise, a long garbage collection that lasts beyond the ZooKeeper session timeout will take out your RegionServer. (You might be fine with this -- you probably want recovery to start on the server if a RegionServer has been in GC for a long period of time.)
To change this configuration, edit hbase-site.xml, copy the changed file around the cluster, and restart.
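For example, a minimal hbase-site.xml snippet that lowers the timeout to one minute might look like the following, assuming the session timeout discussed above is the zookeeper.session.timeout property; the 60000 value is illustrative rather than a recommendation:

<property>
  <!-- ZooKeeper session timeout in milliseconds; 60000 = one minute (illustrative) -->
  <name>zookeeper.session.timeout</name>
  <value>60000</value>
</property>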
We set this value high to save our having to field noob questions up on the mailing lists asking why a RegionServer went down during a massive import. The usual cause is that their JVM is untuned and they are running into long GC pauses. Our thinking is that while users are getting familiar with HBase, we'd save them having to know all of its intricacies. Later when they've built some confidence, then they can play with configuration such as this.
This setting defines the number of threads that are kept open to answer incoming requests to user tables. The default of 10 is rather low in order to prevent users from killing their region servers when using large write buffers with a high number of concurrent clients. The rule of thumb is to keep this number low when the payload per request approaches a megabyte (big puts, scans using a large cache) and high when the payload is small (gets, small puts, ICVs, deletes).
It is safe to set that number to the maximum number of incoming clients if their payload is small, the typical example being a cluster that serves a website since puts aren't typically buffered and most of the operations are gets.
The reason why it is dangerous to keep this setting high is that the aggregate size of all the puts that are currently happening in a region server may impose too much pressure on its memory, or even trigger an OutOfMemoryError. A region server running on low memory will trigger its JVM's garbage collector to run more frequently up to a point where GC pauses become noticeable (the reason being that all the memory used to keep all the requests' payloads cannot be trashed, no matter how hard the garbage collector tries). After some time, the overall cluster throughput is affected since every request that hits that region server will take longer, which exacerbates the problem even more.
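As a sketch, for a small-payload workload you might raise the handler count in hbase-site.xml; the property assumed here is hbase.regionserver.handler.count, and 30 is an illustrative value, not a recommendation:

<property>
  <!-- Number of RPC handler threads per RegionServer; 30 is illustrative -->
  <name>hbase.regionserver.handler.count</name>
  <value>30</value>
</property>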
HBase ships with a reasonable, conservative configuration that will work on nearly all machine types that people might want to test with. If you have larger machines -- where HBase has an 8G or larger heap -- you might find the following configuration options helpful. TODO.
You should consider enabling LZO compression. It's near-frictionless and in almost all cases boosts performance.
Unfortunately, HBase cannot ship with LZO because of licensing issues; HBase is Apache-licensed, LZO is GPL. Therefore LZO must be installed after the HBase install. See the Using LZO Compression wiki page for how to make LZO work with HBase.
A common problem users run into when using LZO is that while the initial setup of the cluster runs smoothly, a month goes by and some sysadmin goes to add a machine to the cluster, only they'll have forgotten to do the LZO fixup on the new machine. In versions since HBase 0.90.0, we should fail in a way that makes it plain what the problem is, but maybe not. Remember you read this paragraph[11].
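To guard against that scenario, the hbase.regionserver.codecs feature referenced in Section B.2 can be used to make a RegionServer refuse to start when a listed codec is missing; a sketch of what that might look like in hbase-site.xml (the lzo,gz value is illustrative):

<property>
  <!-- Codecs that must load at RegionServer startup; startup fails if any is missing -->
  <name>hbase.regionserver.codecs</name>
  <value>lzo,gz</value>
</property>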
See also Appendix B, Compression In HBase at the tail of this book.
Consider going to larger regions to cut down on the total number of regions on your cluster. Generally, fewer regions to manage makes for a smoother-running cluster. (You can always later manually split the big regions should one prove hot and you want to spread the request load over the cluster.) By default, regions are 256MB in size. You could run with 1G. Some run with even larger regions, 4G or more. Adjust hbase.hregion.max.filesize in your hbase-site.xml.
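For example, a 1G region size might be configured like this; hbase.hregion.max.filesize is expressed in bytes, and 1073741824 is 1G:

<property>
  <!-- Maximum store file size before a region splits; 1073741824 bytes = 1G -->
  <name>hbase.hregion.max.filesize</name>
  <value>1073741824</value>
</property>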
Rather than let HBase auto-split your Regions, manage the splitting manually[12].
With growing amounts of data, splits will continually be needed. Since you always know exactly what regions you have, long-term debugging and profiling is much easier with manual splits. It is hard to trace the logs to understand region-level problems if regions keep splitting and getting renamed. Data offlining bugs + unknown number of split regions == oh crap! If an HLog or StoreFile was mistakenly unprocessed by HBase due to a weird bug and you notice it a day or so later, you can be assured that the regions specified in these files are the same as the current regions, and you have fewer headaches trying to restore/replay your data.
You can finely tune your compaction algorithm. With roughly uniform data growth, it's easy to cause split / compaction storms as the regions all roughly hit the same data size at the same time. With manual splits, you can let staggered, time-based major compactions spread out your network IO load.
How do I turn off automatic splitting? Automatic splitting is determined by the configuration value hbase.hregion.max.filesize. It is not recommended that you set this to Long.MAX_VALUE in case you forget about manual splits. A suggested setting is 100GB, which would result in > 1hr major compactions if reached.
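A sketch of that suggested 100GB setting in hbase-site.xml; the value is in bytes (107374182400 = 100GB):

<property>
  <!-- Effectively disables automatic splits at sane data sizes; 107374182400 bytes = 100GB -->
  <name>hbase.hregion.max.filesize</name>
  <value>107374182400</value>
</property>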
What's the optimal number of pre-split regions to create? Mileage will vary depending upon your application. You could start low with 10 pre-split regions per server and watch as data grows over time. It's better to err on the side of too few regions and do a rolling split later.
A more complicated answer is that this depends upon the largest storefile in your region. With a growing data size, this will get larger over time. You want the largest region to be just big enough that the Store compaction selection algorithm only compacts it due to a timed major compaction. If you don't, your cluster can be prone to compaction storms as the algorithm decides to run major compactions on a large series of regions all at once. Note that compaction storms are due to the uniform data growth, not the manual split decision.
If you pre-split your regions too thin, you can increase the major compaction interval by configuring HConstants.MAJOR_COMPACTION_PERIOD. If your data size grows too large, use the (post-0.90.0 HBase) org.apache.hadoop.hbase.util.RegionSplitter script to perform a network-IO-safe rolling split of all regions.
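As a sketch, and assuming (as in 0.90-era HBase) that HConstants.MAJOR_COMPACTION_PERIOD is backed by the hbase.hregion.majorcompaction property (in milliseconds), stretching the major compaction interval to a week might look like the following; the value is illustrative:

<property>
  <!-- Interval between timed major compactions; 604800000 ms = 7 days (illustrative) -->
  <name>hbase.hregion.majorcompaction</name>
  <value>604800000</value>
</property>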
[11] See Section B.2, “hbase.regionserver.codecs” for a feature to help protect against failed LZO install.
[12] What follows is taken from the javadoc at the head of the org.apache.hadoop.hbase.util.RegionSplitter tool added to HBase post-0.90.0 release.