org.apache.hadoop.hdfs.server.namenode
Class FSNamesystem

java.lang.Object
  extended by org.apache.hadoop.hdfs.server.namenode.FSNamesystem
All Implemented Interfaces:
FSConstants, FSClusterStats, FSNamesystemMBean, NameNodeMXBean

@InterfaceAudience.Private
public class FSNamesystem
extends Object
implements FSConstants, FSNamesystemMBean, FSClusterStats, NameNodeMXBean

FSNamesystem does the actual bookkeeping work for the NameNode. It tracks several important tables:
1) valid fsname --> blocklist (kept on disk, logged)
2) set of all valid blocks (inverted #1)
3) block --> machinelist (kept in memory, rebuilt dynamically from reports)
4) machine --> blocklist (inverted #2)
5) LRU cache of updated-heartbeat machines
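
As an illustration only, the five tables can be pictured as the in-memory structures below; the class and field names are hypothetical, not the actual FSNamesystem members:

   // Hypothetical sketch of the bookkeeping tables (assumes java.util.* and
   // the HDFS Block / DatanodeDescriptor types); not the real field names.
   class NamesystemTablesSketch {
     Map<String, List<Block>> fsnameToBlocks = new HashMap<>();              // 1) persisted, logged
     Set<Block> validBlocks = new HashSet<>();                               // 2) inverse of #1
     Map<Block, Set<DatanodeDescriptor>> blockToMachines = new HashMap<>();  // 3) rebuilt from reports
     Map<DatanodeDescriptor, Set<Block>> machineToBlocks = new HashMap<>();  // 4) inverse of #3
     LinkedHashMap<DatanodeDescriptor, Long> heartbeatLru =
         new LinkedHashMap<>(16, 0.75f, true);                               // 5) access-ordered LRU
   }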


Nested Class Summary
 
Nested classes/interfaces inherited from interface org.apache.hadoop.hdfs.protocol.FSConstants
FSConstants.DatanodeReportType, FSConstants.SafeModeAction, FSConstants.UpgradeAction
 
Field Summary
static org.apache.commons.logging.Log auditLog
          Logger for audit events, noting successful FSNamesystem operations.
 org.apache.hadoop.hdfs.server.namenode.FSDirectory dir
           
 LeaseManager leaseManager
           
 org.apache.hadoop.util.Daemon lmthread
           
static org.apache.commons.logging.Log LOG
           
 org.apache.hadoop.util.Daemon replthread
           
 
Fields inherited from interface org.apache.hadoop.hdfs.protocol.FSConstants
BLOCK_INVALIDATE_CHUNK, BLOCKREPORT_INITIAL_DELAY, BLOCKREPORT_INTERVAL, BUFFER_SIZE, DEFAULT_BLOCK_SIZE, DEFAULT_BYTES_PER_CHECKSUM, DEFAULT_DATA_SOCKET_SIZE, DEFAULT_FILE_BUFFER_SIZE, DEFAULT_REPLICATION_FACTOR, DEFAULT_WRITE_PACKET_SIZE, HDFS_URI_SCHEME, HEARTBEAT_INTERVAL, LAYOUT_VERSION, LEASE_HARDLIMIT_PERIOD, LEASE_RECOVER_PERIOD, LEASE_SOFTLIMIT_PERIOD, MAX_PATH_DEPTH, MAX_PATH_LENGTH, MIN_BLOCKS_FOR_WRITE, QUOTA_DONT_SET, QUOTA_RESET, SIZE_OF_INTEGER, SMALL_BUFFER_SIZE
 
Method Summary
 boolean abandonBlock(Block b, String src, String holder)
          The client would like to let go of the given block.
 void blockReceived(DatanodeID nodeID, Block block, String delHint)
          The given node is reporting that it received a certain block.
 void cancelDelegationToken(org.apache.hadoop.security.token.Token<DelegationTokenIdentifier> token)
           
 void close()
          Close down this file system manager.
 boolean completeFile(String src, String holder, Block last)
          Complete in-progress write to the given file.
 int computeDatanodeWork()
          Compute block replication and block invalidation work that can be scheduled on data-nodes.
 void concat(String target, String[] srcs)
          Moves all the blocks from srcs and appends them to target; to avoid rollbacks, the validity of ALL of the arguments is verified before the actual move starts.
 void createSymlink(String target, String link, org.apache.hadoop.fs.permission.PermissionStatus dirPerms, boolean createParent)
          Create a symbolic link.
 DatanodeInfo[] datanodeReport(FSConstants.DatanodeReportType type)
           
 boolean delete(String src, boolean recursive)
          Remove the indicated file from namespace.
 void DFSNodesStatus(ArrayList<DatanodeDescriptor> live, ArrayList<DatanodeDescriptor> dead)
           
 LocatedBlock getAdditionalBlock(String src, String clientName, Block previous, HashMap<org.apache.hadoop.net.Node,org.apache.hadoop.net.Node> excludedNodes)
          The client would like to obtain an additional block for the indicated filename (which is being written to).
 int getBlockCapacity()
           
 long getBlocksTotal()
          Get the total number of blocks in the system.
 long getCapacityRemaining()
          Total non-used raw bytes.
 float getCapacityRemainingPercent()
          Total remaining space by data nodes as percentage of total capacity
 long getCapacityTotal()
          Total raw bytes including non-dfs used space.
 long getCapacityUsed()
          Total used space by data nodes
 long getCapacityUsedNonDFS()
          Total used space by data nodes for non DFS purposes such as storing temporary files on the local file system
 float getCapacityUsedPercent()
          Total used space by data nodes as percentage of total capacity
 long getCorruptReplicaBlocks()
          Returns number of blocks with corrupt replicas
 DatanodeDescriptor getDatanode(DatanodeID nodeID)
          Get data node by storage ID.
 DatanodeInfo getDataNodeInfo(String name)
           
 String getDeadNodes()
          Returned information is a JSON representation of a map with host name as the key; the value is a map of dead-node attribute keys to their values
 ArrayList<DatanodeDescriptor> getDecommissioningNodes()
           
 String getDecomNodes()
          Returned information is a JSON representation of a map with host name as the key; the value is a map of decommissioning-node attribute keys to their values
 org.apache.hadoop.security.token.Token<DelegationTokenIdentifier> getDelegationToken(org.apache.hadoop.io.Text renewer)
           
 DelegationTokenSecretManager getDelegationTokenSecretManager()
          Returns the DelegationTokenSecretManager instance in the namesystem.
 long getExcessBlocks()
           
 long getFilesTotal()
          Total number of files and directories
 long getFree()
          Gets total non-used raw bytes.
 FSNamesystemMetrics getFSNamesystemMetrics()
          Get the FSNamesystemMetrics.
 String getFSState()
          The state of the file system: Safemode or Operational
 long getGenerationStamp()
          Gets the generation stamp for this filesystem
 DirectoryListing getListing(String src, byte[] startAfter, boolean needLocation)
          Get a partial listing of the indicated directory
 String getLiveNodes()
          Returned information is a JSON representation of a map with host name as the key; the value is a map of live-node attribute keys to their values
 long getMissingBlocksCount()
           
static Collection<URI> getNamespaceDirs(org.apache.hadoop.conf.Configuration conf)
           
static Collection<URI> getNamespaceEditsDirs(org.apache.hadoop.conf.Configuration conf)
           
 long getNonDfsUsedSpace()
          Gets total used space by data nodes for non DFS purposes such as storing temporary files on the local file system
 int getNumDeadDataNodes()
          Number of dead data nodes
 int getNumLiveDataNodes()
          Number of live data nodes
 long getPendingDeletionBlocks()
           
 long getPendingReplicationBlocks()
          Blocks pending to be replicated
 float getPercentRemaining()
          Gets the total remaining space by data nodes as percentage of total capacity
 float getPercentUsed()
          Gets the total used space by data nodes as percentage of total capacity
 String getRegistrationID()
          Get registrationID for datanodes based on the namespaceID.
 String getSafemode()
          Gets the safemode status
 long getScheduledReplicationBlocks()
          Blocks scheduled for replication
 Date getStartTime()
           
static Collection<URI> getStorageDirs(org.apache.hadoop.conf.Configuration conf, String propertyName)
           
 int getThreads()
          Gets the number of threads.
 long getTotal()
          Gets total raw bytes including non-dfs used space.
 long getTotalBlocks()
          Gets the total numbers of blocks on the cluster.
 long getTotalFiles()
          Gets the total number of files on the cluster
 int getTotalLoad()
          Total number of connections.
 long getUnderReplicatedBlocks()
          Blocks under replicated
protected  org.apache.hadoop.fs.permission.PermissionStatus getUpgradePermission()
          Return the default path permission when upgrading from releases with no permissions (<=0.15) to releases with permissions (>=0.16)
 long getUsed()
          Gets the used space by data nodes.
 String getVersion()
          Gets the version of Hadoop.
 boolean isUpgradeFinalized()
          Checks if upgrade is finalized.
 void logUpdateMasterKey(org.apache.hadoop.security.token.delegation.DelegationKey key)
          Log the updateMasterKey operation to edit logs
 void markBlockAsCorrupt(Block blk, DatanodeInfo dn)
          Mark the block belonging to datanode as corrupt
 boolean mkdirs(String src, org.apache.hadoop.fs.permission.PermissionStatus permissions, boolean createParent)
          Create all the necessary directories
 int numCorruptReplicas(Block blk)
           
 void processReport(DatanodeID nodeID, BlockListAsLongs newReport)
          The given node is reporting all its blocks.
 void refreshNodes(org.apache.hadoop.conf.Configuration conf)
          Rereads the config to get hosts and exclude list file names.
 void registerDatanode(DatanodeRegistration nodeReg)
          Register Datanode.
 void removeDatanode(DatanodeID nodeID)
          Remove a datanode descriptor.
 long renewDelegationToken(org.apache.hadoop.security.token.Token<DelegationTokenIdentifier> token)
           
 void setGenerationStamp(long stamp)
          Sets the generation stamp for this filesystem
 void setNodeReplicationLimit(int limit)
           
 void setOwner(String src, String username, String group)
          Set owner for an existing file.
 void setPermission(String src, org.apache.hadoop.fs.permission.FsPermission permission)
          Set permissions for an existing file.
 boolean setReplication(String src, short replication)
          Set replication for an existing file.
 void setTimes(String src, long mtime, long atime)
          Stores the modification and access time for this inode.
 void shutdown()
          Shut down FSNamesystem.
 void stopDecommission(DatanodeDescriptor node)
          Stop decommissioning the specified datanodes.
 
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
 

Field Detail

LOG

public static final org.apache.commons.logging.Log LOG

auditLog

public static final org.apache.commons.logging.Log auditLog
Logger for audit events, noting successful FSNamesystem operations. Emits to FSNamesystem.audit at INFO. Each event causes a set of tab-separated key=value pairs to be written for the following properties: ugi=<ugi in RPC> ip=<remote IP> cmd=<command> src=<src path> dst=<dst path (optional)> perm=<permissions (optional)>
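
An audit line with hypothetical values would therefore look like (tab-separated):

   ugi=alice  ip=/10.0.0.7  cmd=mkdirs  src=/user/alice/tmp  dst=null  perm=alice:supergroup:rwxr-xr-x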


dir

public org.apache.hadoop.hdfs.server.namenode.FSDirectory dir

leaseManager

public LeaseManager leaseManager

lmthread

public org.apache.hadoop.util.Daemon lmthread

replthread

public org.apache.hadoop.util.Daemon replthread
Method Detail

getNamespaceDirs

public static Collection<URI> getNamespaceDirs(org.apache.hadoop.conf.Configuration conf)

getStorageDirs

public static Collection<URI> getStorageDirs(org.apache.hadoop.conf.Configuration conf,
                                             String propertyName)

getNamespaceEditsDirs

public static Collection<URI> getNamespaceEditsDirs(org.apache.hadoop.conf.Configuration conf)

getUpgradePermission

protected org.apache.hadoop.fs.permission.PermissionStatus getUpgradePermission()
Return the default path permission when upgrading from releases with no permissions (<=0.15) to releases with permissions (>=0.16)


close

public void close()
Close down this file system manager. Causes the heartbeat and lease daemons to stop; waits briefly for them to finish, but a short timeout returns control to the caller.


setPermission

public void setPermission(String src,
                          org.apache.hadoop.fs.permission.FsPermission permission)
                   throws org.apache.hadoop.security.AccessControlException,
                          FileNotFoundException,
                          SafeModeException,
                          org.apache.hadoop.fs.UnresolvedLinkException,
                          IOException
Set permissions for an existing file.

Throws:
IOException
org.apache.hadoop.security.AccessControlException
FileNotFoundException
SafeModeException
org.apache.hadoop.fs.UnresolvedLinkException

setOwner

public void setOwner(String src,
                     String username,
                     String group)
              throws org.apache.hadoop.security.AccessControlException,
                     FileNotFoundException,
                     SafeModeException,
                     org.apache.hadoop.fs.UnresolvedLinkException,
                     IOException
Set owner for an existing file.

Throws:
IOException
org.apache.hadoop.security.AccessControlException
FileNotFoundException
SafeModeException
org.apache.hadoop.fs.UnresolvedLinkException

concat

public void concat(String target,
                   String[] srcs)
            throws IOException,
                   org.apache.hadoop.fs.UnresolvedLinkException
Moves all the blocks from srcs and appends them to target. To avoid rollbacks, the validity of ALL of the arguments is verified before the actual move starts.

Parameters:
target -
srcs -
Throws:
IOException
org.apache.hadoop.fs.UnresolvedLinkException
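
A minimal sketch of the verify-everything-first pattern this method describes; isValidSourceFor and moveBlocks are hypothetical helpers, not the actual implementation:

   // Hypothetical sketch: validate every argument before moving anything,
   // so no partial move ever needs to be rolled back.
   void concatSketch(String target, String[] srcs) throws IOException {
     for (String src : srcs) {
       if (!isValidSourceFor(target, src))      // hypothetical check
         throw new IOException("invalid concat source: " + src);
     }
     for (String src : srcs) {
       moveBlocks(src, target);                 // hypothetical move; runs only if all checks passed
     }
   }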

setTimes

public void setTimes(String src,
                     long mtime,
                     long atime)
              throws IOException,
                     org.apache.hadoop.fs.UnresolvedLinkException
Stores the modification and access time for this inode. The access time is precise up to an hour. The transaction, if needed, is written to the edit log but is not flushed.

Throws:
IOException
org.apache.hadoop.fs.UnresolvedLinkException

createSymlink

public void createSymlink(String target,
                          String link,
                          org.apache.hadoop.fs.permission.PermissionStatus dirPerms,
                          boolean createParent)
                   throws IOException,
                          org.apache.hadoop.fs.UnresolvedLinkException
Create a symbolic link.

Throws:
IOException
org.apache.hadoop.fs.UnresolvedLinkException

setReplication

public boolean setReplication(String src,
                              short replication)
                       throws IOException,
                              org.apache.hadoop.fs.UnresolvedLinkException
Set replication for an existing file. The NameNode sets new replication and schedules either replication of under-replicated data blocks or removal of the excessive block copies if the blocks are over-replicated.

Parameters:
src - file name
replication - new replication
Returns:
true if successful; false if file does not exist or is a directory
Throws:
IOException
org.apache.hadoop.fs.UnresolvedLinkException
See Also:
ClientProtocol.setReplication(String, short)
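
A hedged usage sketch, assuming an already-initialized FSNamesystem instance named fsn:

   // Sketch only: fsn is an assumed, initialized FSNamesystem.
   boolean changed = fsn.setReplication("/user/alice/data.txt", (short) 3);
   if (!changed) {
     // the path does not exist or is a directory
   }
   // Any resulting under- or over-replication is repaired asynchronously
   // by the replication monitor, not by this call itself.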

getAdditionalBlock

public LocatedBlock getAdditionalBlock(String src,
                                       String clientName,
                                       Block previous,
                                       HashMap<org.apache.hadoop.net.Node,org.apache.hadoop.net.Node> excludedNodes)
                                throws LeaseExpiredException,
                                       NotReplicatedYetException,
                                       QuotaExceededException,
                                       SafeModeException,
                                       org.apache.hadoop.fs.UnresolvedLinkException,
                                       IOException
The client would like to obtain an additional block for the indicated filename (which is being written to). Returns an array consisting of the block plus a set of machines. The first machine on the list is where the client should write the data; subsequent items in the list must be provided in the connection to that first datanode so they can form the write pipeline. Makes sure the previous blocks have been reported by datanodes and are replicated. Returns an empty two-element array if the client should "try again later".

Throws:
LeaseExpiredException
NotReplicatedYetException
QuotaExceededException
SafeModeException
org.apache.hadoop.fs.UnresolvedLinkException
IOException
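
A sketch of how a writer interprets the result, assuming LocatedBlock's standard getBlock() and getLocations() accessors:

   // Sketch: pick the pipeline head and pass the rest of the pipeline along.
   LocatedBlock lb = fsn.getAdditionalBlock(src, clientName, previous, excludedNodes);
   DatanodeInfo[] targets = lb.getLocations();
   DatanodeInfo pipelineHead = targets[0];   // the client streams block data to this node
   // targets[1..] are handed to pipelineHead so it can forward packets down the pipeline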

abandonBlock

public boolean abandonBlock(Block b,
                            String src,
                            String holder)
                     throws LeaseExpiredException,
                            FileNotFoundException,
                            org.apache.hadoop.fs.UnresolvedLinkException,
                            IOException
The client would like to let go of the given block.

Throws:
LeaseExpiredException
FileNotFoundException
org.apache.hadoop.fs.UnresolvedLinkException
IOException

completeFile

public boolean completeFile(String src,
                            String holder,
                            Block last)
                     throws SafeModeException,
                            org.apache.hadoop.fs.UnresolvedLinkException,
                            IOException
Complete in-progress write to the given file.

Returns:
true if successful, false if the client should continue to retry (e.g if not all blocks have reached minimum replication yet)
Throws:
IOException - on error (e.g. lease mismatch, file not open, file deleted)
SafeModeException
org.apache.hadoop.fs.UnresolvedLinkException
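
Because false means "retry", callers typically loop; a minimal sketch with a hypothetical fixed backoff (real clients use a retry policy):

   // Sketch only: retry until the last block reaches minimum replication.
   while (!fsn.completeFile(src, holder, last)) {
     Thread.sleep(400);   // hypothetical backoff interval
   }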

markBlockAsCorrupt

public void markBlockAsCorrupt(Block blk,
                               DatanodeInfo dn)
                        throws IOException
Mark the block belonging to datanode as corrupt

Parameters:
blk - Block to be marked as corrupt
dn - Datanode which holds the corrupt replica
Throws:
IOException

delete

public boolean delete(String src,
                      boolean recursive)
               throws org.apache.hadoop.security.AccessControlException,
                      SafeModeException,
                      org.apache.hadoop.fs.UnresolvedLinkException,
                      IOException
Remove the indicated file from namespace.

Throws:
org.apache.hadoop.security.AccessControlException
SafeModeException
org.apache.hadoop.fs.UnresolvedLinkException
IOException
See Also:
ClientProtocol.delete(String, boolean), for a detailed description and a description of the exceptions

mkdirs

public boolean mkdirs(String src,
                      org.apache.hadoop.fs.permission.PermissionStatus permissions,
                      boolean createParent)
               throws IOException,
                      org.apache.hadoop.fs.UnresolvedLinkException
Create all the necessary directories

Throws:
IOException
org.apache.hadoop.fs.UnresolvedLinkException

getListing

public DirectoryListing getListing(String src,
                                   byte[] startAfter,
                                   boolean needLocation)
                            throws org.apache.hadoop.security.AccessControlException,
                                   org.apache.hadoop.fs.UnresolvedLinkException,
                                   IOException
Get a partial listing of the indicated directory

Parameters:
src - the directory name
startAfter - the name to start after
needLocation - if blockLocations need to be returned
Returns:
a partial listing starting after startAfter
Throws:
org.apache.hadoop.security.AccessControlException - if access is denied
org.apache.hadoop.fs.UnresolvedLinkException - if symbolic link is encountered
IOException - if other I/O error occurred
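
Since the listing is partial, enumerating a large directory is a loop that feeds the last returned name back in as startAfter. A sketch, assuming DirectoryListing's getPartialListing(), getLastName(), and hasMore() accessors and the HdfsFileStatus.EMPTY_NAME constant:

   // Sketch: page through a directory using startAfter as a cursor.
   byte[] cookie = HdfsFileStatus.EMPTY_NAME;     // start at the beginning
   DirectoryListing page;
   do {
     page = fsn.getListing("/user/alice", cookie, false);
     for (HdfsFileStatus stat : page.getPartialListing()) {
       System.out.println(stat.getLocalName());   // process one entry
     }
     cookie = page.getLastName();                 // resume after the last entry returned
   } while (page.hasMore());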

registerDatanode

public void registerDatanode(DatanodeRegistration nodeReg)
                      throws IOException
Register Datanode.

The purpose of registration is to identify whether the new datanode serves a new data storage, and will report new data block copies which the namenode was not aware of, or whether the datanode is a replacement node for a data storage that was previously served by a different or the same (in terms of host:port) datanode. Data storages are distinguished by their storageIDs. When a new data storage is reported, the namenode issues it a new, unique storageID.

Finally, the namenode returns its namespaceID as the registrationID for the datanodes. namespaceID is a persistent attribute of the name space. The registrationID is checked every time the datanode communicates with the namenode; datanodes with an inappropriate registrationID are rejected. If the namenode stops and then restarts, it can restore its namespaceID and continue serving the datanodes that have previously registered with it, without restarting the whole cluster.

Throws:
IOException
See Also:
DataNode.register()
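
A highly simplified sketch of the storageID decision described above; newUniqueStorageID and descriptorsByStorageId are hypothetical stand-ins for the real internals:

   // Hypothetical sketch of storageID handling on registration.
   void registerSketch(DatanodeRegistration nodeReg) {
     if (nodeReg.getStorageID().isEmpty()) {
       // brand-new data storage: issue a fresh, unique storageID
       nodeReg.setStorageID(newUniqueStorageID());   // hypothetical helper
     }
     // key the descriptor by storageID, so a restarted or relocated datanode
     // (same storage, possibly different host:port) replaces its old entry
     descriptorsByStorageId.put(nodeReg.getStorageID(), new DatanodeDescriptor(nodeReg));
   }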

getRegistrationID

public String getRegistrationID()
Get registrationID for datanodes based on the namespaceID.

Returns:
registration ID
See Also:
registerDatanode(DatanodeRegistration), FSImage.newNamespaceID()

computeDatanodeWork

public int computeDatanodeWork()
                        throws IOException
Compute block replication and block invalidation work that can be scheduled on data-nodes. The datanode will be informed of this work at the next heartbeat.

Returns:
number of blocks scheduled for replication or removal.
Throws:
IOException
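
The method is designed to be driven by a periodic daemon (see the replthread field); a sketch of such a loop, with a hypothetical flag and interval:

   // Sketch: the replication monitor periodically computes datanode work.
   while (namesystemRunning) {                    // hypothetical flag
     int scheduled = fsn.computeDatanodeWork();   // replication + invalidation work
     LOG.info("Scheduled " + scheduled + " block operations");
     Thread.sleep(3000L);                         // hypothetical recheck interval
   }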

setNodeReplicationLimit

public void setNodeReplicationLimit(int limit)

removeDatanode

public void removeDatanode(DatanodeID nodeID)
                    throws IOException
Remove a datanode descriptor.

Parameters:
nodeID - datanode ID.
Throws:
IOException

processReport

public void processReport(DatanodeID nodeID,
                          BlockListAsLongs newReport)
                   throws IOException
The given node is reporting all its blocks. Use this info to update the (machine-->blocklist) and (block-->machinelist) tables.

Throws:
IOException
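
Conceptually, a full report rebuilds this node's slice of tables #3 and #4 from the class description. A hypothetical sketch over the illustrative maps shown there:

   // Hypothetical sketch of how a full block report refreshes the in-memory maps.
   void processReportSketch(DatanodeDescriptor node, Iterable<Block> reported) {
     for (Block stale : machineToBlocks.getOrDefault(node, Collections.emptySet())) {
       blockToMachines.get(stale).remove(node);   // drop this node's stale entries
     }
     Set<Block> fresh = new HashSet<>();
     for (Block b : reported) {
       fresh.add(b);
       blockToMachines.computeIfAbsent(b, k -> new HashSet<>()).add(node);
     }
     machineToBlocks.put(node, fresh);            // machine --> blocklist, rebuilt
   }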

blockReceived

public void blockReceived(DatanodeID nodeID,
                          Block block,
                          String delHint)
                   throws IOException
The given node is reporting that it received a certain block.

Throws:
IOException

getMissingBlocksCount

public long getMissingBlocksCount()

getCapacityTotal

public long getCapacityTotal()
Total raw bytes including non-dfs used space.

Specified by:
getCapacityTotal in interface FSNamesystemMBean
Returns:
- total capacity in bytes

getCapacityUsed

public long getCapacityUsed()
Total used space by data nodes

Specified by:
getCapacityUsed in interface FSNamesystemMBean
Returns:
- used capacity in bytes

getCapacityUsedPercent

public float getCapacityUsedPercent()
Total used space by data nodes as percentage of total capacity


getCapacityUsedNonDFS

public long getCapacityUsedNonDFS()
Total used space by data nodes for non DFS purposes such as storing temporary files on the local file system


getCapacityRemaining

public long getCapacityRemaining()
Total non-used raw bytes.

Specified by:
getCapacityRemaining in interface FSNamesystemMBean
Returns:
- free capacity in bytes

getCapacityRemainingPercent

public float getCapacityRemainingPercent()
Total remaining space by data nodes as percentage of total capacity


getTotalLoad

public int getTotalLoad()
Total number of connections.

Specified by:
getTotalLoad in interface FSClusterStats
Specified by:
getTotalLoad in interface FSNamesystemMBean
Returns:
- total load of FSNamesystem

datanodeReport

public DatanodeInfo[] datanodeReport(FSConstants.DatanodeReportType type)
                              throws org.apache.hadoop.security.AccessControlException
Throws:
org.apache.hadoop.security.AccessControlException

DFSNodesStatus

public void DFSNodesStatus(ArrayList<DatanodeDescriptor> live,
                           ArrayList<DatanodeDescriptor> dead)

stopDecommission

public void stopDecommission(DatanodeDescriptor node)
                      throws IOException
Stop decommissioning the specified datanodes.

Throws:
IOException

getDataNodeInfo

public DatanodeInfo getDataNodeInfo(String name)

getStartTime

public Date getStartTime()

refreshNodes

public void refreshNodes(org.apache.hadoop.conf.Configuration conf)
                  throws IOException
Rereads the config to get the hosts and exclude list file names, and rereads those files to update the hosts and exclude lists. It checks if any of the hosts have changed states:
1. Added to hosts --> no further work needed here.
2. Removed from hosts --> mark AdminState as decommissioned.
3. Added to exclude --> start decommission.
4. Removed from exclude --> stop decommission.

Throws:
IOException
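
A condensed sketch of those four checks; hostsList, excludeList, and the decommission helpers are hypothetical stand-ins:

   // Hypothetical sketch of the per-node state check after re-reading the lists.
   for (DatanodeDescriptor node : allDatanodes) {
     boolean inHosts   = hostsList.contains(node.getHost());
     boolean inExclude = excludeList.contains(node.getHost());
     if (!inHosts) {
       node.setDecommissioned();    // 2) removed from hosts
     } else if (inExclude) {
       startDecommission(node);     // 3) added to exclude (hypothetical helper)
     } else {
       stopDecommission(node);      // 4) removed from exclude; 1) newly added hosts need nothing more
     }
   }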

getDatanode

public DatanodeDescriptor getDatanode(DatanodeID nodeID)
                               throws IOException
Get data node by storage ID.

Parameters:
nodeID -
Returns:
DatanodeDescriptor or null if the node is not found.
Throws:
IOException

getBlocksTotal

public long getBlocksTotal()
Get the total number of blocks in the system.

Specified by:
getBlocksTotal in interface FSNamesystemMBean
Returns:
- number of allocated blocks

getFilesTotal

public long getFilesTotal()
Description copied from interface: FSNamesystemMBean
Total number of files and directories

Specified by:
getFilesTotal in interface FSNamesystemMBean
Returns:
- num of files and directories

getPendingReplicationBlocks

public long getPendingReplicationBlocks()
Description copied from interface: FSNamesystemMBean
Blocks pending to be replicated

Specified by:
getPendingReplicationBlocks in interface FSNamesystemMBean
Returns:
- num of blocks to be replicated

getUnderReplicatedBlocks

public long getUnderReplicatedBlocks()
Description copied from interface: FSNamesystemMBean
Blocks under replicated

Specified by:
getUnderReplicatedBlocks in interface FSNamesystemMBean
Returns:
- num of blocks under replicated

getCorruptReplicaBlocks

public long getCorruptReplicaBlocks()
Returns number of blocks with corrupt replicas


getScheduledReplicationBlocks

public long getScheduledReplicationBlocks()
Description copied from interface: FSNamesystemMBean
Blocks scheduled for replication

Specified by:
getScheduledReplicationBlocks in interface FSNamesystemMBean
Returns:
- num of blocks scheduled for replication

getPendingDeletionBlocks

public long getPendingDeletionBlocks()

getExcessBlocks

public long getExcessBlocks()

getBlockCapacity

public int getBlockCapacity()

getFSState

public String getFSState()
Description copied from interface: FSNamesystemMBean
The state of the file system: Safemode or Operational

Specified by:
getFSState in interface FSNamesystemMBean
Returns:
the state

getFSNamesystemMetrics

public FSNamesystemMetrics getFSNamesystemMetrics()
Get the FSNamesystemMetrics.


shutdown

public void shutdown()
Shut down FSNamesystem.


getNumLiveDataNodes

public int getNumLiveDataNodes()
Number of live data nodes

Specified by:
getNumLiveDataNodes in interface FSNamesystemMBean
Returns:
Number of live data nodes

getNumDeadDataNodes

public int getNumDeadDataNodes()
Number of dead data nodes

Specified by:
getNumDeadDataNodes in interface FSNamesystemMBean
Returns:
Number of dead data nodes

setGenerationStamp

public void setGenerationStamp(long stamp)
Sets the generation stamp for this filesystem


getGenerationStamp

public long getGenerationStamp()
Gets the generation stamp for this filesystem


numCorruptReplicas

public int numCorruptReplicas(Block blk)

getDecommissioningNodes

public ArrayList<DatanodeDescriptor> getDecommissioningNodes()

getDelegationTokenSecretManager

public DelegationTokenSecretManager getDelegationTokenSecretManager()
Returns the DelegationTokenSecretManager instance in the namesystem.

Returns:
delegation token secret manager object

getDelegationToken

public org.apache.hadoop.security.token.Token<DelegationTokenIdentifier> getDelegationToken(org.apache.hadoop.io.Text renewer)
                                                                                     throws IOException
Parameters:
renewer -
Returns:
Token
Throws:
IOException

renewDelegationToken

public long renewDelegationToken(org.apache.hadoop.security.token.Token<DelegationTokenIdentifier> token)
                          throws org.apache.hadoop.security.token.SecretManager.InvalidToken,
                                 IOException
Parameters:
token -
Returns:
New expiryTime of the token
Throws:
org.apache.hadoop.security.token.SecretManager.InvalidToken
IOException

cancelDelegationToken

public void cancelDelegationToken(org.apache.hadoop.security.token.Token<DelegationTokenIdentifier> token)
                           throws IOException
Parameters:
token -
Throws:
IOException

logUpdateMasterKey

public void logUpdateMasterKey(org.apache.hadoop.security.token.delegation.DelegationKey key)
                        throws IOException
Log the updateMasterKey operation to edit logs

Parameters:
key - new delegation key.
Throws:
IOException

getVersion

public String getVersion()
Gets the version of Hadoop.

Specified by:
getVersion in interface NameNodeMXBean
Returns:
the version

getUsed

public long getUsed()
Description copied from interface: NameNodeMXBean
Gets the used space by data nodes.

Specified by:
getUsed in interface NameNodeMXBean
Returns:
the used space by data nodes

getFree

public long getFree()
Description copied from interface: NameNodeMXBean
Gets total non-used raw bytes.

Specified by:
getFree in interface NameNodeMXBean
Returns:
total non-used raw bytes

getTotal

public long getTotal()
Description copied from interface: NameNodeMXBean
Gets total raw bytes including non-dfs used space.

Specified by:
getTotal in interface NameNodeMXBean
Returns:
the total raw bytes including non-dfs used space

getSafemode

public String getSafemode()
Description copied from interface: NameNodeMXBean
Gets the safemode status

Specified by:
getSafemode in interface NameNodeMXBean
Returns:
the safemode status

isUpgradeFinalized

public boolean isUpgradeFinalized()
Description copied from interface: NameNodeMXBean
Checks if upgrade is finalized.

Specified by:
isUpgradeFinalized in interface NameNodeMXBean
Returns:
true, if upgrade is finalized

getNonDfsUsedSpace

public long getNonDfsUsedSpace()
Description copied from interface: NameNodeMXBean
Gets total used space by data nodes for non DFS purposes such as storing temporary files on the local file system

Specified by:
getNonDfsUsedSpace in interface NameNodeMXBean
Returns:
the non dfs space of the cluster

getPercentUsed

public float getPercentUsed()
Description copied from interface: NameNodeMXBean
Gets the total used space by data nodes as percentage of total capacity

Specified by:
getPercentUsed in interface NameNodeMXBean
Returns:
the percentage of used space on the cluster.

getPercentRemaining

public float getPercentRemaining()
Description copied from interface: NameNodeMXBean
Gets the total remaining space by data nodes as percentage of total capacity

Specified by:
getPercentRemaining in interface NameNodeMXBean
Returns:
the percentage of the remaining space on the cluster

getTotalBlocks

public long getTotalBlocks()
Description copied from interface: NameNodeMXBean
Gets the total numbers of blocks on the cluster.

Specified by:
getTotalBlocks in interface NameNodeMXBean
Returns:
the total number of blocks of the cluster

getTotalFiles

public long getTotalFiles()
Description copied from interface: NameNodeMXBean
Gets the total number of files on the cluster

Specified by:
getTotalFiles in interface NameNodeMXBean
Returns:
the total number of files on the cluster

getThreads

public int getThreads()
Description copied from interface: NameNodeMXBean
Gets the number of threads.

Specified by:
getThreads in interface NameNodeMXBean
Returns:
the number of threads

getLiveNodes

public String getLiveNodes()
Returned information is a JSON representation of a map with host name as the key; the value is a map of live-node attribute keys to their values

Specified by:
getLiveNodes in interface NameNodeMXBean
Returns:
the live node information

getDeadNodes

public String getDeadNodes()
Returned information is a JSON representation of a map with host name as the key; the value is a map of dead-node attribute keys to their values

Specified by:
getDeadNodes in interface NameNodeMXBean
Returns:
the dead node information

getDecomNodes

public String getDecomNodes()
Returned information is a JSON representation of a map with host name as the key; the value is a map of decommissioning-node attribute keys to their values

Specified by:
getDecomNodes in interface NameNodeMXBean
Returns:
the decommissioning node information


Copyright © 2009 The Apache Software Foundation