org.apache.hadoop.hbase.regionserver
Class HLog

java.lang.Object
  org.apache.hadoop.hbase.regionserver.HLog
public class HLog
HLog stores all the edits to the HStore. It performs logfile-rolling, so external callers are not aware that the underlying file is being rolled.
A single HLog is used by several HRegions simultaneously. Each HRegion is identified by a unique long int. HRegions do not need to declare themselves before using the HLog; they simply include their HRegion-id in the append or completeCacheFlush calls.
An HLog consists of multiple on-disk files, which have a chronological order. As data is flushed to other (better) on-disk structures, the log becomes obsolete. We can destroy all the log messages for a given HRegion-id up to the most-recent CACHEFLUSH message from that HRegion.
It's only practical to delete entire files. Thus, we delete an entire on-disk file F when all of the messages in F have a log-sequence-id that's older (smaller) than the most-recent CACHEFLUSH message for every HRegion that has a message in F.
Synchronized methods can never execute in parallel. However, between the start of a cache flush and its completion point, appends are allowed but log rolling is not. To prevent log rolling from taking place during this period, a separate reentrant lock is used.
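As a rough, hedged illustration of the append flow described above, the sketch below shows a region handing an edit to a shared HLog using only methods documented on this page; the helper class name and the import locations are assumptions for illustration, not part of HBase itself.

```java
import java.io.IOException;

import org.apache.hadoop.hbase.HRegionInfo;              // assumed package location
import org.apache.hadoop.hbase.regionserver.HLog;
import org.apache.hadoop.hbase.regionserver.HLogEdit;    // assumed package location

// Minimal sketch, not actual HBase internals: several regions can hand their
// edits to the same HLog; the log identifies each caller by the HRegionInfo
// supplied with the append, so no prior registration is needed.
public class HLogAppendSketch {
  public static void appendEdit(HLog log, HRegionInfo regionInfo,
                                byte[] row, HLogEdit edit) throws IOException {
    // The region simply includes its identity with each append call.
    log.append(regionInfo, row, edit);
    // sync() pushes buffered edits out to the underlying file (Syncable).
    log.sync();
  }
}
```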
Constructor Summary | |
---|---|
HLog(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path dir, org.apache.hadoop.conf.Configuration conf, LogRollListener listener) | Create an edit log at the given dir location. |
Method Summary | |
---|---|
void | append(HRegionInfo regionInfo, byte[] row, HLogEdit logEdit) - Append an entry to the log. |
void | append(HRegionInfo regionInfo, HLogEdit logEdit) - Append an entry without a row to the log. |
void | close() - Shut down the log. |
void | closeAndDelete() - Shut down the log and delete the log directory. |
org.apache.hadoop.fs.Path | computeFilename(long fn) - This is a convenience method that computes a new filename with a given file-number. |
long | getFilenum() - Accessor for tests. |
static String | getHLogDirectoryName(HServerInfo info) - Construct the HLog directory name. |
long | getSequenceNumber() |
static boolean | isMetaColumn(byte[] column) |
static void | main(String[] args) - Pass one or more log file names and it will either dump out a text version on stdout or split the specified log files. |
byte[] | rollWriter() - Roll the log writer. |
static void | splitLog(org.apache.hadoop.fs.Path rootDir, org.apache.hadoop.fs.Path srcDir, org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.conf.Configuration conf) - Split up a bunch of regionserver commit log files that are no longer being written to, into new files, one per region, for the regions to replay on startup. |
void | sync() |
Methods inherited from class java.lang.Object |
---|
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait |
Constructor Detail |
---|
public HLog(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path dir, org.apache.hadoop.conf.Configuration conf, LogRollListener listener) throws IOException

Create an edit log at the given dir location.

You should never have to load an existing log. If there is a log at startup, it should have already been processed and deleted by the time the HLog object is started up.

Parameters:
fs -
dir -
conf -
listener -

Throws:
IOException
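A hedged sketch of using this constructor follows; the class name, the log directory, and the null listener are placeholders for illustration, not how a region server actually wires things up.

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.regionserver.HLog;

// Hedged sketch only: a real region server passes a LogRollListener so it can
// be told when the log wants to be rolled.
public class HLogOpenSketch {
  public static HLog openFreshLog(Configuration conf) throws IOException {
    FileSystem fs = FileSystem.get(conf);
    // The target directory is expected to be fresh: any log left over from a
    // previous run should already have been split and deleted.
    Path logDir = new Path("/hbase/log_example");   // placeholder path
    return new HLog(fs, logDir, conf, null);        // null listener: illustration only
  }
}
```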
Method Detail |
---|
public long getFilenum()
public long getSequenceNumber()
public byte[] rollWriter() throws org.apache.hadoop.hbase.regionserver.FailedLogCloseException, IOException

Roll the log writer. Note that this method cannot be synchronized, because it is possible that startCacheFlush runs first, obtaining the cacheFlushLock; this method could then start, obtaining the lock on this but blocking while waiting for the cacheFlushLock; completeCacheFlush could then be called, which would wait for the lock on this and consequently never release the cacheFlushLock.

Throws:
FailedLogCloseException
IOException
public org.apache.hadoop.fs.Path computeFilename(long fn)

This is a convenience method that computes a new filename with a given file-number.

Parameters:
fn - file number
public void closeAndDelete() throws IOException

Shut down the log and delete the log directory.

Throws:
IOException
public void close() throws IOException

Shut down the log.

Throws:
IOException
public void sync() throws IOException

Specified by:
sync in interface org.apache.hadoop.fs.Syncable

Throws:
IOException
public void append(HRegionInfo regionInfo, HLogEdit logEdit) throws IOException

Append an entry without a row to the log.

Parameters:
regionInfo -
logEdit -

Throws:
IOException
public void append(HRegionInfo regionInfo, byte[] row, HLogEdit logEdit) throws IOException

Append an entry to the log.

Parameters:
regionInfo -
row -
logEdit -

Throws:
IOException
public static boolean isMetaColumn(byte[] column)

Parameters:
column -
public static void splitLog(org.apache.hadoop.fs.Path rootDir, org.apache.hadoop.fs.Path srcDir, org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.conf.Configuration conf) throws IOException

Split up a bunch of regionserver commit log files that are no longer being written to, into new files, one per region, for the regions to replay on startup.

Parameters:
rootDir - qualified root directory of the HBase instance
srcDir - directory of log files to split, e.g. ${ROOTDIR}/log_HOST_PORT
fs - FileSystem
conf - HBaseConfiguration

Throws:
IOException
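A hedged example of startup-time log splitting follows; the helper class name and the root and source directories are placeholders chosen to match the ${ROOTDIR}/log_HOST_PORT pattern above.

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.regionserver.HLog;

// Hedged sketch: the paths below stand in for a real HBase root directory and
// a dead server's ${ROOTDIR}/log_HOST_PORT directory.
public class SplitLogSketch {
  public static void splitDeadServerLogs(Configuration conf) throws IOException {
    FileSystem fs = FileSystem.get(conf);
    Path rootDir = new Path("/hbase");                       // placeholder root
    Path srcDir = new Path(rootDir, "log_10.0.0.1_60020");   // placeholder log dir
    // Rewrites the commit logs under srcDir into one file per region so each
    // region can replay its own edits when it is next opened.
    HLog.splitLog(rootDir, srcDir, fs, conf);
  }
}
```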
public static String getHLogDirectoryName(HServerInfo info)

Construct the HLog directory name.

Parameters:
info - HServerInfo for server
public static void main(String[] args) throws IOException

Pass one or more log file names and it will either dump out a text version on stdout or split the specified log files.

Parameters:
args -

Throws:
IOException