java.lang.Object
  org.apache.hadoop.hbase.regionserver.wal.HLog

public class HLog
HLog stores all the edits to the HStore. It is the HBase write-ahead-log implementation. It performs log-file rolling, so external callers are not aware that the underlying file is being rolled.
There is one HLog per RegionServer. All edits for all Regions carried by a particular RegionServer are entered first in the HLog.
Each HRegion is identified by a unique long int. HRegions do not need to declare themselves before using the HLog; they simply include their HRegion-id in the append or completeCacheFlush calls.
An HLog consists of multiple on-disk files, which have a chronological order. As data is flushed to other (better) on-disk structures, the log becomes obsolete. We can destroy all the log messages for a given HRegion-id up to the most-recent CACHEFLUSH message from that HRegion.
It's only practical to delete entire files. Thus, we delete an entire on-disk file F when all of the messages in F have a log-sequence-id that's older (smaller) than the most-recent CACHEFLUSH message for every HRegion that has a message in F.
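The file-deletion rule above can be modeled in a few lines of plain Java. This is an illustrative sketch (the class and map names are hypothetical, not HLog's actual bookkeeping): a file F is deletable once every region with entries in F has flushed past the highest sequence id that region wrote into F.

```java
import java.util.Map;

public class WalFileGc {
    // A log file is deletable when, for every region with entries in it,
    // the region's most recent cache-flush sequence id is >= the highest
    // sequence id that region wrote into the file.
    static boolean isDeletable(Map<String, Long> maxSeqInFile,
                               Map<String, Long> lastFlushedSeq) {
        for (Map.Entry<String, Long> e : maxSeqInFile.entrySet()) {
            Long flushed = lastFlushedSeq.get(e.getKey());
            if (flushed == null || flushed < e.getValue()) {
                return false; // some region still has unflushed edits here
            }
        }
        return true;
    }

    public static void main(String[] args) {
        Map<String, Long> inFile = Map.of("regionA", 100L, "regionB", 150L);
        // regionB has only flushed through 120, so the file must be kept.
        System.out.println(isDeletable(inFile, Map.of("regionA", 200L, "regionB", 120L)));
        // Both regions have flushed past their entries; file is deletable.
        System.out.println(isDeletable(inFile, Map.of("regionA", 200L, "regionB", 150L)));
    }
}
```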
Synchronized methods can never execute in parallel. However, between the start of a cache flush and the completion point, appends are allowed but log rolling is not. To prevent log rolling taking place during this period, a separate reentrant lock is used.
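The flush-window locking described above can be sketched with a plain `ReentrantLock`. This is a toy model, not HLog's actual fields or method signatures: the flush holds the lock for the whole window, appends never touch it, and a roll must acquire it.

```java
import java.util.concurrent.locks.ReentrantLock;

public class FlushWindowLock {
    final ReentrantLock cacheFlushLock = new ReentrantLock();

    void startCacheFlush()    { cacheFlushLock.lock(); }   // opens the flush window
    void completeCacheFlush() { cacheFlushLock.unlock(); } // closes it
    void append()             { /* appends proceed without cacheFlushLock */ }

    // A log roll may only proceed when no flush window is open.
    boolean tryRoll() {
        if (!cacheFlushLock.tryLock()) return false; // a flush is in progress
        try { return true; /* roll the writer here */ }
        finally { cacheFlushLock.unlock(); }
    }

    public static void main(String[] args) throws InterruptedException {
        FlushWindowLock wal = new FlushWindowLock();
        Thread flusher = new Thread(() -> {
            wal.startCacheFlush();
            try { Thread.sleep(200); } catch (InterruptedException ignored) {}
            wal.completeCacheFlush();
        });
        flusher.start();
        Thread.sleep(50);                  // let the flush window open
        wal.append();                      // appends are still allowed
        System.out.println(wal.tryRoll()); // but rolling is not
        flusher.join();
        System.out.println(wal.tryRoll()); // window closed: rolling allowed again
    }
}
```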
To read an HLog, call getReader(org.apache.hadoop.fs.FileSystem, org.apache.hadoop.fs.Path, org.apache.hadoop.conf.Configuration).
Nested Class Summary

| Modifier and Type | Class | Description |
|---|---|---|
| static class | HLog.Entry | Utility class that keeps track of an edit together with its key. Only used when splitting logs. |
| static interface | HLog.Reader | |
| static interface | HLog.Writer | |
Field Summary

| Modifier and Type | Field |
|---|---|
| static long | FIXED_OVERHEAD |
| static byte[] | METAFAMILY |
Constructor Summary

| Constructor | Description |
|---|---|
| HLog(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path dir, org.apache.hadoop.fs.Path oldLogDir, org.apache.hadoop.conf.Configuration conf) | Constructor. |
| HLog(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path dir, org.apache.hadoop.fs.Path oldLogDir, org.apache.hadoop.conf.Configuration conf, List<WALObserver> listeners, boolean failIfLogDirExists, String prefix) | Create an edit log at the given dir location. |
| HLog(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path dir, org.apache.hadoop.fs.Path oldLogDir, org.apache.hadoop.conf.Configuration conf, List<WALObserver> listeners, String prefix) | Create an edit log at the given dir location. |
Method Summary

| Modifier and Type | Method | Description |
|---|---|---|
| void | abortCacheFlush() | Abort a cache flush. |
| void | append(HRegionInfo info, byte[] tableName, WALEdit edits, long now) | Append a set of edits to the log. |
| void | append(HRegionInfo regionInfo, HLogKey logKey, WALEdit logEdit) | Append an entry to the log. |
| void | append(HRegionInfo regionInfo, WALEdit logEdit, long now, boolean isMetaRegion) | Append an entry to the log. |
| void | close() | Shut down the log. |
| void | closeAndDelete() | Shut down the log and delete the log directory. |
| void | completeCacheFlush(byte[] encodedRegionName, byte[] tableName, long logSeqId, boolean isMetaRegion) | Complete the cache flush. Protected by cacheFlushLock. |
| protected org.apache.hadoop.fs.Path | computeFilename() | Convenience method that computes a new filename using the current HLog file-number. |
| protected org.apache.hadoop.fs.Path | computeFilename(long filenum) | Convenience method that computes a new filename with the given file-number. |
| static HLog.Writer | createWriter(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path path, org.apache.hadoop.conf.Configuration conf) | Get a writer for the WAL. |
| protected HLog.Writer | createWriterInstance(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path path, org.apache.hadoop.conf.Configuration conf) | Allows subclasses to inject different writers without having to extend other methods like rollWriter(). |
| protected void | doWrite(HRegionInfo info, HLogKey logKey, WALEdit logEdit) | |
| protected org.apache.hadoop.fs.Path | getDir() | Get the directory we are making logs in. |
| long | getFilenum() | |
| static String | getHLogDirectoryName(HServerInfo info) | Construct the HLog directory name. |
| static String | getHLogDirectoryName(String serverName) | Construct the HLog directory name. |
| static String | getHLogDirectoryName(String serverAddress, long startCode) | Construct the HLog directory name. |
| static Class<? extends HLogKey> | getKeyClass(org.apache.hadoop.conf.Configuration conf) | |
| static HLog.Reader | getReader(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path path, org.apache.hadoop.conf.Configuration conf) | Get a reader for the WAL. |
| static org.apache.hadoop.fs.Path | getRegionDirRecoveredEditsDir(org.apache.hadoop.fs.Path regiondir) | |
| long | getSequenceNumber() | |
| static NavigableSet<org.apache.hadoop.fs.Path> | getSplitEditFilesSorted(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path regiondir) | Returns a sorted set of the edit files made by the wal-log splitter. |
| static long | getSyncOps() | |
| static long | getSyncTime() | |
| static long | getWriteOps() | |
| static long | getWriteTime() | |
| void | hsync() | |
| static boolean | isMetaFamily(byte[] family) | |
| static void | main(String[] args) | Pass one or more log file names; it will either dump a text version on stdout or split the specified log files. |
| protected HLogKey | makeKey(byte[] regionName, byte[] tableName, long seqnum, long now) | |
| static org.apache.hadoop.fs.Path | moveAsideBadEditsFile(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path edits) | Move aside a bad edits file. |
| static HLogKey | newKey(org.apache.hadoop.conf.Configuration conf) | |
| void | registerWALActionsListener(WALObserver listener) | |
| byte[][] | rollWriter() | Roll the log writer. |
| void | setSequenceNumber(long newvalue) | Called by HRegionServer when it opens a new region, to ensure that log sequence numbers are always greater than the latest sequence number of the region being brought on-line. |
| long | startCacheFlush() | By acquiring a log sequence ID, we can allow log messages to continue while we flush the cache. |
| void | sync() | |
| boolean | unregisterWALActionsListener(WALObserver listener) | |
| static boolean | validateHLogFilename(String filename) | |
Methods inherited from class java.lang.Object

clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
Field Detail

public static final byte[] METAFAMILY

public static final long FIXED_OVERHEAD
Constructor Detail

public HLog(org.apache.hadoop.fs.FileSystem fs,
            org.apache.hadoop.fs.Path dir,
            org.apache.hadoop.fs.Path oldLogDir,
            org.apache.hadoop.conf.Configuration conf)
     throws IOException

Constructor.

Parameters:
    fs - filesystem handle
    dir - path to where hlogs are stored
    oldLogDir - path to where hlogs are archived
    conf - configuration to use
Throws:
    IOException
public HLog(org.apache.hadoop.fs.FileSystem fs,
            org.apache.hadoop.fs.Path dir,
            org.apache.hadoop.fs.Path oldLogDir,
            org.apache.hadoop.conf.Configuration conf,
            List<WALObserver> listeners,
            String prefix)
     throws IOException

Create an edit log at the given dir location. You should never have to load an existing log. If there is a log at startup, it should have already been processed and deleted by the time the HLog object is started up.

Parameters:
    fs - filesystem handle
    dir - path to where hlogs are stored
    oldLogDir - path to where hlogs are archived
    conf - configuration to use
    listeners - listeners on WAL events; listeners passed here are registered before anything else is done, e.g. before the constructor calls rollWriter().
    prefix - should always be hostname and port in a distributed environment; it will be URL-encoded before being used. If prefix is null, "hlog" is used.
Throws:
    IOException
public HLog(org.apache.hadoop.fs.FileSystem fs,
            org.apache.hadoop.fs.Path dir,
            org.apache.hadoop.fs.Path oldLogDir,
            org.apache.hadoop.conf.Configuration conf,
            List<WALObserver> listeners,
            boolean failIfLogDirExists,
            String prefix)
     throws IOException

Create an edit log at the given dir location. You should never have to load an existing log. If there is a log at startup, it should have already been processed and deleted by the time the HLog object is started up.

Parameters:
    fs - filesystem handle
    dir - path to where hlogs are stored
    oldLogDir - path to where hlogs are archived
    conf - configuration to use
    listeners - listeners on WAL events; listeners passed here are registered before anything else is done, e.g. before the constructor calls rollWriter().
    failIfLogDirExists - if true, an IOException is thrown if dir already exists.
    prefix - should always be hostname and port in a distributed environment; it will be URL-encoded before being used. If prefix is null, "hlog" is used.
Throws:
    IOException
Method Detail

public static long getWriteOps()

public static long getWriteTime()

public static long getSyncOps()

public static long getSyncTime()

public void registerWALActionsListener(WALObserver listener)

public boolean unregisterWALActionsListener(WALObserver listener)

public long getFilenum()

public void setSequenceNumber(long newvalue)

Parameters:
    newvalue - We'll set the log edit/sequence number to this value if it is greater than the current value.

public long getSequenceNumber()
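The "set only if greater" contract of setSequenceNumber can be modeled with an atomic compare-and-set loop. This is an illustrative sketch with hypothetical field names, not HLog's actual implementation:

```java
import java.util.concurrent.atomic.AtomicLong;

public class MonotonicSequence {
    private final AtomicLong logSeqNum = new AtomicLong(0);

    // Raise the sequence number to newvalue, but never lower it.
    void setSequenceNumber(long newvalue) {
        long cur;
        while ((cur = logSeqNum.get()) < newvalue) {
            if (logSeqNum.compareAndSet(cur, newvalue)) break; // retry on race
        }
    }

    long getSequenceNumber() { return logSeqNum.get(); }

    public static void main(String[] args) {
        MonotonicSequence seq = new MonotonicSequence();
        seq.setSequenceNumber(42);
        seq.setSequenceNumber(7);  // ignored: smaller than the current value
        System.out.println(seq.getSequenceNumber());
    }
}
```

The CAS loop makes the update safe even when several regions are brought on-line concurrently.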
public byte[][] rollWriter() throws FailedLogCloseException, IOException

Roll the log writer. Note that this method cannot be synchronized: it is possible that startCacheFlush runs and obtains the cacheFlushLock, then this method starts, obtains the lock on this, but blocks on the cacheFlushLock; completeCacheFlush would then wait for the lock on this and consequently never release the cacheFlushLock.

Returns:
    region names, as returned by HRegionInfo.getEncodedName()
Throws:
    FailedLogCloseException
    IOException
protected HLog.Writer createWriterInstance(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path path, org.apache.hadoop.conf.Configuration conf) throws IOException

Allows subclasses to inject different writers without having to extend other methods like rollWriter().

Parameters:
    fs
    path
    conf
Throws:
    IOException
public static HLog.Reader getReader(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path path, org.apache.hadoop.conf.Configuration conf) throws IOException

Get a reader for the WAL.

Parameters:
    fs
    path
    conf
Throws:
    IOException
public static HLog.Writer createWriter(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path path, org.apache.hadoop.conf.Configuration conf) throws IOException

Get a writer for the WAL.

Parameters:
    path
    conf
Throws:
    IOException
protected org.apache.hadoop.fs.Path computeFilename()

Convenience method that computes a new filename using the current HLog file-number.

protected org.apache.hadoop.fs.Path computeFilename(long filenum)

Convenience method that computes a new filename with the given file-number.

Parameters:
    filenum - to use
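HLog file names are built from the log prefix and the file-number. The sketch below assumes a `<prefix>.<filenum>` naming convention, consistent with validateHLogFilename accepting names that end in a dot followed by digits; the exact pattern is an assumption for illustration, not taken from this page:

```java
public class HLogFilenames {
    // Assumed convention: "<prefix>.<filenum>", e.g. "hlog.1299861234567".
    static String computeFilename(String prefix, long filenum) {
        return prefix + "." + filenum;
    }

    // Assumed validation: the name must end in "." followed by digits.
    static boolean validateHLogFilename(String filename) {
        return filename.matches(".*\\.\\d+");
    }

    public static void main(String[] args) {
        System.out.println(computeFilename("hlog", 1234));
        System.out.println(validateHLogFilename("hlog.1234"));
        System.out.println(validateHLogFilename("hlog.txt"));
    }
}
```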
public void closeAndDelete() throws IOException

Shut down the log and delete the log directory.

Throws:
    IOException

public void close() throws IOException

Shut down the log.

Throws:
    IOException
public void append(HRegionInfo regionInfo, WALEdit logEdit, long now, boolean isMetaRegion) throws IOException

Append an entry to the log.

Parameters:
    regionInfo
    logEdit
    now - Time of this edit write.
Throws:
    IOException
protected HLogKey makeKey(byte[] regionName, byte[] tableName, long seqnum, long now)

Parameters:
    regionName
    tableName
    seqnum
    now
public void append(HRegionInfo regionInfo, HLogKey logKey, WALEdit logEdit) throws IOException

Append an entry to the log.

Parameters:
    regionInfo
    logKey
    logEdit
Throws:
    IOException
public void append(HRegionInfo info, byte[] tableName, WALEdit edits, long now) throws IOException

Append a set of edits to the log. Logs cannot be restarted once closed, or once the HLog process dies. Each time the HLog starts, it must create a new log. This means that other systems should process the log appropriately upon each startup (and prior to initializing HLog). The method is synchronized to prevent appends during the completion of a cache flush or for the duration of a log roll.

Parameters:
    info
    tableName
    edits
    now
Throws:
    IOException
public void sync() throws IOException

Specified by:
    sync in interface org.apache.hadoop.fs.Syncable
Throws:
    IOException
public void hsync() throws IOException

Throws:
    IOException

protected void doWrite(HRegionInfo info, HLogKey logKey, WALEdit logEdit) throws IOException

Throws:
    IOException
public long startCacheFlush()

By acquiring a log sequence ID, we can allow log messages to continue while we flush the cache.

See Also:
    completeCacheFlush(byte[], byte[], long, boolean), abortCacheFlush()
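The intended call sequence is startCacheFlush, then either completeCacheFlush on success or abortCacheFlush on failure. Below is a self-contained toy model of that protocol (hypothetical class and field names, pure JDK; the real HLog also tracks per-region sequence ids and filesystem state):

```java
import java.util.concurrent.locks.ReentrantLock;

public class FlushProtocol {
    private final ReentrantLock cacheFlushLock = new ReentrantLock();
    private long logSeqNum = 0;
    long lastFlushedSeqId = -1;

    // Acquire a sequence id and open the flush window; appends may continue.
    long startCacheFlush() {
        cacheFlushLock.lock();
        return ++logSeqNum;
    }

    // On success: record how far we have flushed, then close the window.
    void completeCacheFlush(long logSeqId) {
        lastFlushedSeqId = logSeqId;
        cacheFlushLock.unlock();
    }

    // On failure: close the window without advancing the flushed marker.
    void abortCacheFlush() {
        cacheFlushLock.unlock();
    }

    public static void main(String[] args) {
        FlushProtocol wal = new FlushProtocol();
        long seqId = wal.startCacheFlush();
        try {
            // ... flush the memstore to an on-disk structure here ...
            wal.completeCacheFlush(seqId);
        } catch (RuntimeException e) {
            wal.abortCacheFlush(); // leave the flushed marker untouched
        }
        System.out.println(wal.lastFlushedSeqId);
    }
}
```

Pairing complete/abort with the lock in a try/catch mirrors the rule that the flush window must always be closed, even when the flush fails.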
public void completeCacheFlush(byte[] encodedRegionName, byte[] tableName, long logSeqId, boolean isMetaRegion) throws IOException

Complete the cache flush. Protected by cacheFlushLock.

Parameters:
    encodedRegionName
    tableName
    logSeqId
Throws:
    IOException
public void abortCacheFlush()

Abort a cache flush.

public static boolean isMetaFamily(byte[] family)

Parameters:
    family

public static Class<? extends HLogKey> getKeyClass(org.apache.hadoop.conf.Configuration conf)

public static HLogKey newKey(org.apache.hadoop.conf.Configuration conf) throws IOException

Throws:
    IOException
public static String getHLogDirectoryName(HServerInfo info)

Construct the HLog directory name.

Parameters:
    info - HServerInfo for server

public static String getHLogDirectoryName(String serverAddress, long startCode)

Construct the HLog directory name.

Parameters:
    serverAddress
    startCode

public static String getHLogDirectoryName(String serverName)

Construct the HLog directory name.

Parameters:
    serverName
protected org.apache.hadoop.fs.Path getDir()

Get the directory we are making logs in.

public static boolean validateHLogFilename(String filename)
public static NavigableSet<org.apache.hadoop.fs.Path> getSplitEditFilesSorted(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path regiondir) throws IOException

Returns a sorted set of the edit files made by the wal-log splitter.

Parameters:
    fs
    regiondir
Returns:
    the edit files under regiondir as a sorted set
Throws:
    IOException
public static org.apache.hadoop.fs.Path moveAsideBadEditsFile(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path edits) throws IOException

Move aside a bad edits file.

Parameters:
    fs
    edits - Edits file to move aside.
Throws:
    IOException
public static org.apache.hadoop.fs.Path getRegionDirRecoveredEditsDir(org.apache.hadoop.fs.Path regiondir)

Parameters:
    regiondir - This region's directory in the filesystem.
Returns:
    the recovered-edits directory under regiondir
public static void main(String[] args) throws IOException

Pass one or more log file names; it will either dump a text version on stdout or split the specified log files.

Parameters:
    args
Throws:
    IOException