ssh must be installed and sshd must be running to use Hadoop's scripts to manage remote Hadoop and HBase daemons. You must be able to ssh to all nodes, including your local node, using passwordless login (Google "ssh passwordless login").
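For example, a minimal sketch of setting up passwordless login from the node you launch the scripts on (the hadoop user and slave-node-1 hostname below are placeholders; substitute your own):
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
ssh-copy-id hadoop@slave-node-1
ssh hadoop@slave-node-1 hostname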
HBase uses the local hostname to self-report its IP address. Both forward and reverse DNS resolution should work.
If your machine has multiple interfaces, HBase will use the interface that the primary hostname resolves to.
If this is insufficient, you can set hbase.regionserver.dns.interface to indicate the primary interface. This only works if your cluster configuration is consistent and every host has the same network interface configuration. Another alternative is setting hbase.regionserver.dns.nameserver to choose a different nameserver than the system-wide default.
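For example, a sketch of what the relevant entries in hbase-site.xml might look like (the eth1 interface name and 10.0.0.2 nameserver address are illustrative values only; substitute your own):
<property>
  <name>hbase.regionserver.dns.interface</name>
  <value>eth1</value>
</property>
<property>
  <name>hbase.regionserver.dns.nameserver</name>
  <value>10.0.0.2</value>
</property>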
HBase expects the loopback IP address to be 127.0.0.1. Ubuntu and some other distributions, for example, will default to 127.0.1.1 and this will cause problems for you.
/etc/hosts should look something like this:
127.0.0.1 localhost
127.0.0.1 ubuntu.ubuntu-domain ubuntu
The clocks on cluster members should be in basic alignment. Some skew is tolerable, but wild skew could generate odd behaviors. Run NTP on your cluster, or an equivalent.
If you are having problems querying data, or "weird" cluster operations, check system time!
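As a quick sanity check, assuming passwordless ssh is set up and your node list lives in the conf/regionservers file of your HBase install, something like the following prints each node's clock so you can eyeball the skew:
for host in $(cat conf/regionservers); do echo -n "$host: "; ssh "$host" date; done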
HBase is a database. It uses a lot of files all at the same time. The default ulimit -n -- i.e. the per-user file descriptor limit -- of 1024 on most *nix systems is insufficient (on Mac OS X it is 256). Any significant amount of loading will run you into trouble. You may also notice errors such as the following:
2010-04-06 03:04:37,542 INFO org.apache.hadoop.hdfs.DFSClient: Exception in createBlockOutputStream java.io.EOFException
2010-04-06 03:04:37,542 INFO org.apache.hadoop.hdfs.DFSClient: Abandoning block blk_-6935524980745310745_1391901
Do yourself a favor and change the upper bound on the number of file descriptors. Set it to north of 10k. The math runs roughly as follows: per ColumnFamily there is at least one StoreFile and possibly up to 5 or 6 if the region is under load. Multiply the average number of StoreFiles per ColumnFamily times the number of regions per RegionServer. For example, assuming that a schema had 3 ColumnFamilies per region with an average of 3 StoreFiles per ColumnFamily, and there are 100 regions per RegionServer, the JVM will open 3 * 3 * 100 = 900 file descriptors (not counting open jar files, config files, etc.)
You should also up the hbase user's nproc setting; under load, a low nproc setting could manifest as OutOfMemoryError [2] [3].
To be clear, upping the file descriptors and nproc for the user who is running the HBase process is an operating system configuration, not an HBase configuration. Also, a common mistake is that administrators will up the file descriptors for a particular user but, for whatever reason, HBase will be running as someone else. HBase prints the ulimit it is seeing as the first line in its logs. Ensure that it is correct. [4]
If you are on Ubuntu you will need to make the following changes:
In the file /etc/security/limits.conf add a line like:
hadoop - nofile 32768
Replace hadoop with whatever user is running Hadoop and HBase. If you have separate users, you will need two entries, one for each user. In the same file set nproc hard and soft limits. For example:
hadoop soft nproc 32000
hadoop hard nproc 32000
In the file /etc/pam.d/common-session add the following as the last line in the file:
session required pam_limits.so
Otherwise the changes in /etc/security/limits.conf won't be applied.
Don't forget to log out and back in again for the changes to take effect!
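To confirm the new limits took effect for the right user, log back in as the user that will run HBase (hadoop in the example above) and check what the shell reports:
ulimit -n
ulimit -u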
HBase has been little tested running on Windows. Running a production install of HBase on top of Windows is not recommended.
If you are running HBase on Windows, you must install Cygwin to have a *nix-like environment for the shell scripts. The full details are explained in the Windows Installation guide. Also search our user mailing list to pick up the latest fixes figured out by Windows users.
[2] See Jack Levin's "major hdfs issues" note up on the user list.
[3] The requirement to up system limits is not peculiar to HBase; databases in general need it. See, for example, the section "Setting Shell Limits for the Oracle User" in Short Guide to install Oracle 10 on Linux.
[4] A useful read on setting configuration on your Hadoop cluster is Aaron Kimball's "Configuration Parameters: What can you just ignore?"