java.lang.Object
  org.apache.hadoop.hdfs.server.namenode.NameNode

@InterfaceAudience.Private
public class NameNode
NameNode serves as both the directory namespace manager and the "inode table" for the Hadoop DFS. There is a single NameNode running in any DFS deployment. (Well, except when there is a second backup/failover NameNode.)

The NameNode controls two critical tables:

1) filename -> blocksequence (namespace)
2) block -> machinelist ("inodes")

The first table is stored on disk and is very precious. The second table is rebuilt every time the NameNode comes up.

'NameNode' refers to both this class as well as the 'NameNode server'. The 'FSNamesystem' class actually performs most of the filesystem management. The majority of the 'NameNode' class itself is concerned with exposing the IPC interface and the HTTP server to the outside world, plus some configuration management.

NameNode implements the ClientProtocol interface, which allows clients to ask for DFS services. ClientProtocol is not designed for direct use by authors of DFS client code; end users should instead use the org.apache.hadoop.fs.FileSystem class.

NameNode also implements the DatanodeProtocol interface, used by DataNode programs that actually store DFS data blocks. These methods are invoked repeatedly and automatically by all the DataNodes in a DFS deployment.

NameNode also implements the NamenodeProtocol interface, used by secondary name-nodes or rebalancing processes to get partial name-node state, for example a partial blocksMap.
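The two critical tables above can be pictured with plain maps. The sketch below is purely illustrative (the class and method names are hypothetical, not HDFS code); its point is that only the first table is durable, while the second is repopulated from DataNode block reports after every restart.

```java
import java.util.*;

// Illustrative model of the NameNode's two critical tables (not HDFS code).
public class NameNodeTables {
    // Table 1: filename -> blocksequence (the namespace; persisted on disk).
    final Map<String, List<Long>> namespace = new HashMap<>();
    // Table 2: block -> machinelist ("inodes"); never persisted, rebuilt
    // from DataNode block reports each time the NameNode comes up.
    final Map<Long, Set<String>> blockLocations = new HashMap<>();

    void createFile(String path, List<Long> blockIds) {
        namespace.put(path, blockIds);
    }

    // Simulates a DataNode block report repopulating table 2 after restart.
    void blockReport(String datanode, List<Long> storedBlocks) {
        for (long b : storedBlocks)
            blockLocations.computeIfAbsent(b, k -> new HashSet<>()).add(datanode);
    }

    Set<String> locate(long blockId) {
        return blockLocations.getOrDefault(blockId, Collections.emptySet());
    }
}
```

On restart only `namespace` would be reloaded from disk; `blockLocations` starts empty and fills as `blockReport()` calls arrive, which is why the second table is "rebuilt every time the NameNode comes up."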
Nested Class Summary |
---|
Nested classes/interfaces inherited from interface org.apache.hadoop.hdfs.protocol.FSConstants |
---|
FSConstants.DatanodeReportType, FSConstants.SafeModeAction, FSConstants.UpgradeAction |
Field Summary | |
---|---|
static int |
DEFAULT_PORT
|
protected InetSocketAddress |
httpAddress
HTTP server address |
protected org.apache.hadoop.http.HttpServer |
httpServer
httpServer |
static org.apache.commons.logging.Log |
LOG
|
protected FSNamesystem |
namesystem
|
protected NamenodeRegistration |
nodeRegistration
Registration information of this name-node |
protected HdfsConstants.NamenodeRole |
role
|
protected InetSocketAddress |
rpcAddress
RPC server address |
protected InetSocketAddress |
serviceRPCAddress
Address of the RPC server used for HDFS services (DataNode) communication |
protected org.apache.hadoop.ipc.Server |
serviceRpcServer
RPC server for HDFS Services communication. |
static org.apache.commons.logging.Log |
stateChangeLog
|
protected boolean |
stopRequested
only used for testing purposes |
Fields inherited from interface org.apache.hadoop.hdfs.protocol.ClientProtocol |
---|
GET_STATS_CAPACITY_IDX, GET_STATS_CORRUPT_BLOCKS_IDX, GET_STATS_MISSING_BLOCKS_IDX, GET_STATS_REMAINING_IDX, GET_STATS_UNDER_REPLICATED_IDX, GET_STATS_USED_IDX, versionID |
Fields inherited from interface org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol |
---|
DISK_ERROR, DNA_ACCESSKEYUPDATE, DNA_FINALIZE, DNA_INVALIDATE, DNA_RECOVERBLOCK, DNA_REGISTER, DNA_SHUTDOWN, DNA_TRANSFER, DNA_UNKNOWN, FATAL_DISK_ERROR, INVALID_BLOCK, NOTIFY, versionID |
Fields inherited from interface org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol |
---|
ACT_CHECKPOINT, ACT_SHUTDOWN, ACT_UNKNOWN, FATAL, JA_CHECKPOINT_TIME, JA_IS_ALIVE, JA_JOURNAL, JA_JSPOOL_START, NOTIFY, versionID |
Fields inherited from interface org.apache.hadoop.security.authorize.RefreshAuthorizationPolicyProtocol |
---|
versionID |
Fields inherited from interface org.apache.hadoop.security.RefreshUserMappingsProtocol |
---|
versionID |
Constructor Summary | |
---|---|
|
NameNode(org.apache.hadoop.conf.Configuration conf)
Start NameNode. |
protected |
NameNode(org.apache.hadoop.conf.Configuration conf,
HdfsConstants.NamenodeRole role)
|
Method Summary | |
---|---|
void |
abandonBlock(Block b,
String src,
String holder)
The client needs to give up on the block. |
LocatedBlock |
addBlock(String src,
String clientName,
Block previous,
DatanodeInfo[] excludedNodes)
A client that wants to write an additional block to the indicated filename (which must currently be open for writing) should call addBlock(). |
LocatedBlock |
append(String src,
String clientName)
Append to the end of the file. |
void |
blockReceived(DatanodeRegistration nodeReg,
Block[] blocks,
String[] delHints)
blockReceived() allows the DataNode to tell the NameNode about recently-received block data, with a hint for the preferred replica to delete when there are excess replicas of a block. |
DatanodeCommand |
blockReport(DatanodeRegistration nodeReg,
long[] blocks)
blockReport() tells the NameNode about all the locally-stored blocks. |
void |
cancelDelegationToken(org.apache.hadoop.security.token.Token<DelegationTokenIdentifier> token)
Cancel an existing delegation token. |
void |
commitBlockSynchronization(Block block,
long newgenerationstamp,
long newlength,
boolean closeFile,
boolean deleteblock,
DatanodeID[] newtargets)
Commit block synchronization in lease recovery |
boolean |
complete(String src,
String clientName,
Block last)
The client is done writing data to the given filename, and would like to complete it. |
void |
concat(String trg,
String[] src)
Moves blocks from srcs to trg and deletes srcs |
void |
create(String src,
org.apache.hadoop.fs.permission.FsPermission masked,
String clientName,
org.apache.hadoop.io.EnumSetWritable<org.apache.hadoop.fs.CreateFlag> flag,
boolean createParent,
short replication,
long blockSize)
Create a new file entry in the namespace. |
static NameNode |
createNameNode(String[] argv,
org.apache.hadoop.conf.Configuration conf)
|
void |
createSymlink(String target,
String link,
org.apache.hadoop.fs.permission.FsPermission dirPerms,
boolean createParent)
Create symlink to a file or directory. |
boolean |
delete(String src)
Deprecated. |
boolean |
delete(String src,
boolean recursive)
Delete the given file or directory from the file system. |
UpgradeStatusReport |
distributedUpgradeProgress(FSConstants.UpgradeAction action)
Report distributed upgrade progress or force current upgrade to proceed. |
void |
endCheckpoint(NamenodeRegistration registration,
CheckpointSignature sig)
A request to the active name-node to finalize previously started checkpoint. |
void |
errorReport(DatanodeRegistration nodeReg,
int errorCode,
String msg)
Handle an error report from a datanode. |
void |
errorReport(NamenodeRegistration registration,
int errorCode,
String msg)
Report to the active name-node an error occurred on a subordinate node. |
void |
finalizeUpgrade()
Finalize previous upgrade. |
static void |
format(org.apache.hadoop.conf.Configuration conf)
Format a new filesystem. |
void |
fsync(String src,
String clientName)
Write all metadata for this file into persistent storage. |
static InetSocketAddress |
getAddress(org.apache.hadoop.conf.Configuration conf)
|
static InetSocketAddress |
getAddress(String address)
|
ExportedBlockKeys |
getBlockKeys()
Get the current block keys |
LocatedBlocks |
getBlockLocations(String src,
long offset,
long length)
Get locations of the blocks of the specified file within the specified range. |
BlocksWithLocations |
getBlocks(DatanodeInfo datanode,
long size)
Get a list of blocks belonging to datanode whose total size equals size. |
org.apache.hadoop.fs.ContentSummary |
getContentSummary(String path)
Get ContentSummary rooted at the specified directory. |
DatanodeInfo[] |
getDatanodeReport(FSConstants.DatanodeReportType type)
Get a report on the system's current datanodes. |
org.apache.hadoop.security.token.Token<DelegationTokenIdentifier> |
getDelegationToken(org.apache.hadoop.io.Text renewer)
Get a valid Delegation Token. |
long |
getEditLogSize()
Deprecated. |
HdfsFileStatus |
getFileInfo(String src)
Get the file info for a specific file. |
HdfsFileStatus |
getFileLinkInfo(String src)
Get the file info for a specific file. |
FSImage |
getFSImage()
|
File |
getFsImageName()
Returns the name of the fsImage file |
File[] |
getFsImageNameCheckpoint()
Returns the name of the fsImage file uploaded by periodic checkpointing |
static String |
getHostPortString(InetSocketAddress addr)
Compose a "host:port" string from the address. |
InetSocketAddress |
getHttpAddress()
Returns the address of the NameNode's HTTP server, which is used to access the name-node web UI. |
protected InetSocketAddress |
getHttpServerAddress(org.apache.hadoop.conf.Configuration conf)
|
static String |
getInfoServer(org.apache.hadoop.conf.Configuration conf)
|
String |
getLinkTarget(String path)
Return the target of the given symlink. |
DirectoryListing |
getListing(String src,
byte[] startAfter,
boolean needLocation)
Get a partial listing of the indicated directory |
InetSocketAddress |
getNameNodeAddress()
Returns the address on which the NameNode is listening. |
static NameNodeMetrics |
getNameNodeMetrics()
|
long |
getPreferredBlockSize(String filename)
Get the block size for the given file. |
long |
getProtocolVersion(String protocol,
long clientVersion)
|
HdfsConstants.NamenodeRole |
getRole()
|
protected InetSocketAddress |
getRpcServerAddress(org.apache.hadoop.conf.Configuration conf)
|
org.apache.hadoop.fs.FsServerDefaults |
getServerDefaults()
Get server default values for a number of configuration params. |
static InetSocketAddress |
getServiceAddress(org.apache.hadoop.conf.Configuration conf,
boolean fallback)
Fetches the address that services should use when connecting to the namenode. Depending on the value of fallback, returns null if the service-specific address is not configured, or falls back to the default namenode address used by both clients and services. |
protected InetSocketAddress |
getServiceRpcServerAddress(org.apache.hadoop.conf.Configuration conf)
Given a configuration, get the address of the service RPC server; returns null if the service RPC address is not configured. |
long[] |
getStats()
Get a set of statistics about the filesystem. |
static URI |
getUri(InetSocketAddress namenode)
|
protected void |
initialize(org.apache.hadoop.conf.Configuration conf)
Initialize name-node. |
boolean |
isInSafeMode()
Is the cluster currently in safe mode? |
void |
join()
Wait for service to finish. |
void |
journal(NamenodeRegistration registration,
int jAction,
int length,
byte[] args)
Journal edit records. |
long |
journalSize(NamenodeRegistration registration)
Get the size of the active name-node journal (edit log) in bytes. |
Collection<org.apache.hadoop.hdfs.server.namenode.FSNamesystem.CorruptFileBlockInfo> |
listCorruptFileBlocks(String path,
String startBlockAfter)
|
protected void |
loadNamesystem(org.apache.hadoop.conf.Configuration conf)
|
static void |
main(String[] argv)
|
void |
metaSave(String filename)
Dumps namenode state into the specified file |
boolean |
mkdirs(String src,
org.apache.hadoop.fs.permission.FsPermission masked,
boolean createParent)
Create a directory (or hierarchy of directories) with the given name and permission. |
UpgradeCommand |
processUpgradeCommand(UpgradeCommand comm)
This is a very general way to send a command to the name-node during distributed upgrade process. |
boolean |
recoverLease(String src,
String clientName)
Start lease recovery. |
void |
refreshNodes()
Refresh the list of datanodes that the namenode should allow to connect. |
void |
refreshServiceAcl()
|
void |
refreshSuperUserGroupsConfiguration()
|
void |
refreshUserToGroupsMappings()
|
NamenodeRegistration |
register(NamenodeRegistration registration)
Register a subordinate name-node, such as a backup node. |
DatanodeRegistration |
registerDatanode(DatanodeRegistration nodeReg)
Register Datanode. |
boolean |
rename(String src,
String dst)
Deprecated. |
void |
rename(String src,
String dst,
org.apache.hadoop.fs.Options.Rename... options)
Rename src to dst. |
long |
renewDelegationToken(org.apache.hadoop.security.token.Token<DelegationTokenIdentifier> token)
Renew an existing delegation token. |
void |
renewLease(String clientName)
Client programs can cause stateful changes in the NameNode that affect other clients. |
void |
reportBadBlocks(LocatedBlock[] blocks)
The client has detected an error on the specified located blocks and is reporting them to the server. |
boolean |
restoreFailedStorage(String arg)
Enable/Disable restore failed storage. |
CheckpointSignature |
rollEditLog()
Deprecated. |
void |
rollFsImage(CheckpointSignature sig)
Deprecated. |
void |
saveNamespace()
Save namespace image. |
DatanodeCommand[] |
sendHeartbeat(DatanodeRegistration nodeReg,
long capacity,
long dfsUsed,
long remaining,
int xmitsInProgress,
int xceiverCount,
int failedVolumes)
A data node notifies the name node that it is alive; returns an array of block-oriented commands for the datanode to execute. |
protected void |
setHttpServerAddress(org.apache.hadoop.conf.Configuration conf)
|
void |
setOwner(String src,
String username,
String groupname)
Set Owner of a path (i.e. |
void |
setPermission(String src,
org.apache.hadoop.fs.permission.FsPermission permissions)
Set permissions for an existing file/directory. |
void |
setQuota(String path,
long namespaceQuota,
long diskspaceQuota)
Set the quota for a directory. |
boolean |
setReplication(String src,
short replication)
Set replication for an existing file. |
protected void |
setRpcServerAddress(org.apache.hadoop.conf.Configuration conf)
|
protected void |
setRpcServiceServerAddress(org.apache.hadoop.conf.Configuration conf)
Modifies the passed configuration to contain the service RPC address setting |
boolean |
setSafeMode(FSConstants.SafeModeAction action)
Enter, leave or get safe mode. |
static void |
setServiceAddress(org.apache.hadoop.conf.Configuration conf,
String address)
Set the configuration property for the service rpc address to address |
void |
setTimes(String src,
long mtime,
long atime)
Sets the modification and access time of the file to the specified time. |
NamenodeCommand |
startCheckpoint(NamenodeRegistration registration)
A request to the active name-node to start a checkpoint. |
void |
stop()
Stop all NameNode threads and wait for all to finish. |
LocatedBlock |
updateBlockForPipeline(Block block,
String clientName)
Get a new generation stamp together with an access token for a block under construction. This method is called only when a client needs to recover a failed pipeline or set up a pipeline for appending to a block. |
void |
updatePipeline(String clientName,
Block oldBlock,
Block newBlock,
DatanodeID[] newNodes)
Update a pipeline for a block under construction |
void |
verifyRequest(NodeRegistration nodeReg)
Verify request. |
void |
verifyVersion(int version)
Verify version. |
NamespaceInfo |
versionRequest()
Request name-node version and storage information. |
Methods inherited from class java.lang.Object |
---|
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait |
Field Detail |
---|
public static final int DEFAULT_PORT
public static final org.apache.commons.logging.Log LOG
public static final org.apache.commons.logging.Log stateChangeLog
protected FSNamesystem namesystem
protected HdfsConstants.NamenodeRole role
protected org.apache.hadoop.ipc.Server serviceRpcServer
protected InetSocketAddress rpcAddress
protected InetSocketAddress serviceRPCAddress
protected org.apache.hadoop.http.HttpServer httpServer
protected InetSocketAddress httpAddress
protected boolean stopRequested
protected NamenodeRegistration nodeRegistration
Constructor Detail |
---|
public NameNode(org.apache.hadoop.conf.Configuration conf) throws IOException
The name-node can be started with one of the following startup options:
REGULAR - normal name node startup
FORMAT - format the name node
BACKUP - start a backup node
CHECKPOINT - start a checkpoint node
UPGRADE - start the cluster upgrade and create a snapshot of the current file system state
ROLLBACK - roll the cluster back to the previous state
FINALIZE - finalize the previous upgrade
IMPORT - import a checkpoint
The conf will be modified to reflect the actual ports on which the NameNode is up and running, if the user passes the port as zero in the conf.
conf - configuration
IOException
protected NameNode(org.apache.hadoop.conf.Configuration conf, HdfsConstants.NamenodeRole role) throws IOException
IOException
Method Detail |
---|
public long getProtocolVersion(String protocol, long clientVersion) throws IOException
getProtocolVersion
in interface org.apache.hadoop.ipc.VersionedProtocol
IOException
public static void format(org.apache.hadoop.conf.Configuration conf) throws IOException
IOException
public static NameNodeMetrics getNameNodeMetrics()
public static InetSocketAddress getAddress(String address)
public static void setServiceAddress(org.apache.hadoop.conf.Configuration conf, String address)
public static InetSocketAddress getServiceAddress(org.apache.hadoop.conf.Configuration conf, boolean fallback)
public static InetSocketAddress getAddress(org.apache.hadoop.conf.Configuration conf)
public static URI getUri(InetSocketAddress namenode)
public static String getHostPortString(InetSocketAddress addr)
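getHostPortString(InetSocketAddress) and getAddress(String) are, in effect, inverses of each other: one composes a "host:port" string, the other parses one back into a socket address. A minimal sketch using only the JDK (the helper names mirror, but are not, the NameNode statics):

```java
import java.net.InetSocketAddress;

public class HostPort {
    // Compose a "host:port" string, as NameNode.getHostPortString does.
    static String getHostPortString(InetSocketAddress addr) {
        return addr.getHostName() + ":" + addr.getPort();
    }

    // Parse "host:port" back into an (unresolved) socket address,
    // roughly what NameNode.getAddress(String) returns.
    static InetSocketAddress getAddress(String hostPort) {
        int colon = hostPort.lastIndexOf(':');
        String host = hostPort.substring(0, colon);
        int port = Integer.parseInt(hostPort.substring(colon + 1));
        return InetSocketAddress.createUnresolved(host, port);
    }
}
```

Using `createUnresolved` avoids a DNS lookup at parse time, which is usually what an address-munging helper wants.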
public HdfsConstants.NamenodeRole getRole()
protected InetSocketAddress getServiceRpcServerAddress(org.apache.hadoop.conf.Configuration conf) throws IOException
IOException
protected InetSocketAddress getRpcServerAddress(org.apache.hadoop.conf.Configuration conf) throws IOException
IOException
protected void setRpcServiceServerAddress(org.apache.hadoop.conf.Configuration conf)
protected void setRpcServerAddress(org.apache.hadoop.conf.Configuration conf)
protected InetSocketAddress getHttpServerAddress(org.apache.hadoop.conf.Configuration conf)
protected void setHttpServerAddress(org.apache.hadoop.conf.Configuration conf)
protected void loadNamesystem(org.apache.hadoop.conf.Configuration conf) throws IOException
IOException
protected void initialize(org.apache.hadoop.conf.Configuration conf) throws IOException
conf
- the configuration
IOException
public static String getInfoServer(org.apache.hadoop.conf.Configuration conf)
public void join()
public void stop()
public BlocksWithLocations getBlocks(DatanodeInfo datanode, long size) throws IOException
NamenodeProtocol
Get a list of blocks belonging to datanode whose total size equals size.
getBlocks
in interface NamenodeProtocol
datanode - a data node
size - requested size
IOException
Balancer
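The getBlocks contract — return blocks of the given datanode until the requested total size is reached, as the Balancer uses it — can be sketched as follows (illustrative only, not the FSNamesystem implementation):

```java
import java.util.*;

public class BlockPicker {
    // Pick blocks from a datanode's (blockId -> size) table until their
    // combined size reaches the requested total, mirroring the contract
    // of getBlocks(datanode, size).
    static List<Long> getBlocks(Map<Long, Long> blockSizes, long requested) {
        List<Long> picked = new ArrayList<>();
        long total = 0;
        for (Map.Entry<Long, Long> e : blockSizes.entrySet()) {
            if (total >= requested) break;  // reached the requested total
            picked.add(e.getKey());
            total += e.getValue();
        }
        return picked;
    }
}
```

Note the loop may overshoot the requested size by up to one block; the real call has the same "at least size bytes" flavor rather than an exact match.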
public ExportedBlockKeys getBlockKeys() throws IOException
getBlockKeys
in interface NamenodeProtocol
IOException
public void errorReport(NamenodeRegistration registration, int errorCode, String msg) throws IOException
NamenodeProtocol
errorReport
in interface NamenodeProtocol
registration - requesting node.
errorCode - indicates the error
msg - free text description of the error
IOException
public NamenodeRegistration register(NamenodeRegistration registration) throws IOException
NamenodeProtocol
register
in interface NamenodeProtocol
NamenodeRegistration of the node, which this node has just registered with.
IOException
public NamenodeCommand startCheckpoint(NamenodeRegistration registration) throws IOException
NamenodeProtocol
startCheckpoint
in interface NamenodeProtocol
registration
- the requesting node
CheckpointCommand
if checkpoint is allowed.
IOException
CheckpointCommand, NamenodeCommand, NamenodeProtocol.ACT_SHUTDOWN
public void endCheckpoint(NamenodeRegistration registration, CheckpointSignature sig) throws IOException
NamenodeProtocol
endCheckpoint
in interface NamenodeProtocol
registration - the requesting node
sig - CheckpointSignature which identifies the checkpoint.
IOException
public long journalSize(NamenodeRegistration registration) throws IOException
NamenodeProtocol
journalSize
in interface NamenodeProtocol
registration
- the requesting node
IOException
public void journal(NamenodeRegistration registration, int jAction, int length, byte[] args) throws IOException
NamenodeProtocol
EditLogBackupOutputStream in order to synchronize meta-data changes with the backup namespace image.
journal
in interface NamenodeProtocol
registration - active node registration
jAction - journal action
length - length of the byte array
args - byte array containing serialized journal records
IOException
public org.apache.hadoop.security.token.Token<DelegationTokenIdentifier> getDelegationToken(org.apache.hadoop.io.Text renewer) throws IOException
ClientProtocol
getDelegationToken
in interface ClientProtocol
renewer
- the designated renewer for the token
IOException
public long renewDelegationToken(org.apache.hadoop.security.token.Token<DelegationTokenIdentifier> token) throws org.apache.hadoop.security.token.SecretManager.InvalidToken, IOException
ClientProtocol
renewDelegationToken
in interface ClientProtocol
token
- delegation token obtained earlier
IOException
org.apache.hadoop.security.token.SecretManager.InvalidToken
public void cancelDelegationToken(org.apache.hadoop.security.token.Token<DelegationTokenIdentifier> token) throws IOException
ClientProtocol
cancelDelegationToken
in interface ClientProtocol
token
- delegation token
IOException
public LocatedBlocks getBlockLocations(String src, long offset, long length) throws IOException
Return LocatedBlocks which contains file length, blocks and their locations. DataNode locations for each block are sorted by the distance to the client's address. The client will then have to contact one of the indicated DataNodes to obtain the actual data.
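The "sorted by distance to the client's address" behavior can be approximated with a comparator over a toy distance metric. In HDFS proper the metric is network-topology distance; the three-level weights below (same node, same rack, off-rack) are an illustrative stand-in:

```java
import java.util.*;

public class ReplicaSorter {
    // 0 = same node, 1 = same rack, 2 = off-rack: a simplified stand-in
    // for the network-topology distance HDFS actually computes.
    static int distance(String client, String replicaHost, Map<String, String> rackOf) {
        if (client.equals(replicaHost)) return 0;
        if (Objects.equals(rackOf.get(client), rackOf.get(replicaHost))) return 1;
        return 2;
    }

    // Order a block's replica hosts so the closest one comes first,
    // as getBlockLocations does for each returned block.
    static List<String> sortByDistance(String client, List<String> replicas,
                                       Map<String, String> rackOf) {
        List<String> out = new ArrayList<>(replicas);
        out.sort(Comparator.comparingInt(h -> distance(client, h, rackOf)));
        return out;
    }
}
```

The client then contacts the first (closest) DataNode and falls back to the next one on failure.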
getBlockLocations
in interface ClientProtocol
src
- file nameoffset
- range start offsetlength
- range length
org.apache.hadoop.security.AccessControlException
- If access is denied
FileNotFoundException
- If file src
does not exist
org.apache.hadoop.fs.UnresolvedLinkException
- If src
contains a symlink
IOException
- If an I/O error occurred
public org.apache.hadoop.fs.FsServerDefaults getServerDefaults() throws IOException
getServerDefaults
in interface ClientProtocol
IOException
public void create(String src, org.apache.hadoop.fs.permission.FsPermission masked, String clientName, org.apache.hadoop.io.EnumSetWritable<org.apache.hadoop.fs.CreateFlag> flag, boolean createParent, short replication, long blockSize) throws IOException
This will create an empty file specified by the source path. The path should reflect a full path originated at the root. The name-node does not have a notion of "current" directory for a client.
Once created, the file is visible and available for read to other clients. However, other clients cannot ClientProtocol.delete(String, boolean), re-create, or ClientProtocol.rename(String, String) it until the file is completed or implicitly released by lease expiration.
Blocks have a maximum size. Clients that intend to create multi-block files must also use ClientProtocol.addBlock(String, String, Block, DatanodeInfo[]).
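The single-writer rule above — a file being created is immediately visible, but cannot be deleted, re-created, or renamed by other clients until it is completed or its lease expires — can be modeled with a small lease table. This is an illustrative model only; the real NameNode tracks leases in its LeaseManager:

```java
import java.util.*;

public class LeaseModel {
    // path -> lease holder for files currently open for writing.
    private final Map<String, String> leases = new HashMap<>();
    private final Set<String> files = new HashSet<>();

    void create(String path, String client) {
        if (leases.containsKey(path))
            throw new IllegalStateException("already being created: " + path);
        files.add(path);          // visible to readers immediately
        leases.put(path, client); // but locked against other writers
    }

    void complete(String path, String client) {
        if (!client.equals(leases.get(path)))
            throw new IllegalStateException("not the lease holder");
        leases.remove(path);      // writing done; lock released
    }

    // A mutating operation is allowed only for the holder, or once
    // the lease is gone (completed or expired).
    boolean canDelete(String path, String client) {
        String holder = leases.get(path);
        return holder == null || holder.equals(client);
    }
}
```

Lease expiration (see renewLease below) would simply remove the entry from `leases`, which is why an abandoned file eventually becomes writable by others.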
create
in interface ClientProtocol
src - path of the file being created.
masked - masked permission.
clientName - name of the current client.
flag - indicates whether the file should be overwritten if it already exists, created if it does not exist, or appended to.
createParent - create missing parent directories if true
replication - block replication factor.
blockSize - maximum block size.
org.apache.hadoop.security.AccessControlException
- If access is denied
AlreadyBeingCreatedException
- if the path does not exist.
DSQuotaExceededException
- If file creation violates disk space
quota restriction
org.apache.hadoop.fs.FileAlreadyExistsException
- If file src
already exists
FileNotFoundException
- If parent of src
does not exist
and createParent
is false
org.apache.hadoop.fs.ParentNotDirectoryException
- If parent of src
is not a
directory.
NSQuotaExceededException
- If file creation violates name space
quota restriction
SafeModeException
- create not allowed in safemode
org.apache.hadoop.fs.UnresolvedLinkException
- If src
contains a symlink
IOException
- If an I/O error occurred
RuntimeExceptions:
public LocatedBlock append(String src, String clientName) throws IOException
Allows appending to an existing file if the server is configured with the parameter dfs.support.append set to true; otherwise throws an IOException.
append
in interface ClientProtocol
src - path of the file being appended to.
clientName - name of the current client.
org.apache.hadoop.security.AccessControlException
- if permission to append to the file is denied by the system. As usual, on the client side the exception will be wrapped into a RemoteException.
FileNotFoundException
- If file src
is not found
DSQuotaExceededException
- If append violates disk space quota
restriction
SafeModeException
- append not allowed in safemode
org.apache.hadoop.fs.UnresolvedLinkException
- If src
contains a symlink
IOException
- If an I/O error occurred.
RuntimeExceptions:
public boolean recoverLease(String src, String clientName) throws IOException
recoverLease
in interface ClientProtocol
src - path of the file to start lease recovery
clientName - name of the current client
IOException
public boolean setReplication(String src, short replication) throws IOException
The NameNode sets replication to the new value and returns. The actual block replication is not expected to be performed during this method call. The blocks will be populated or removed in the background as the result of the routine block maintenance procedures.
setReplication
in interface ClientProtocol
src - file name
replication - new replication
org.apache.hadoop.security.AccessControlException
- If access is denied
DSQuotaExceededException
- If replication violates disk space
quota restriction
FileNotFoundException
- If file src
is not found
SafeModeException
- not allowed in safemode
org.apache.hadoop.fs.UnresolvedLinkException
- if src
contains a symlink
IOException
- If an I/O error occurred
public void setPermission(String src, org.apache.hadoop.fs.permission.FsPermission permissions) throws IOException
setPermission
in interface ClientProtocol
org.apache.hadoop.security.AccessControlException
- If access is denied
FileNotFoundException
- If file src
is not found
SafeModeException
- not allowed in safemode
org.apache.hadoop.fs.UnresolvedLinkException
- If src
contains a symlink
IOException
- If an I/O error occurred
public void setOwner(String src, String username, String groupname) throws IOException
setOwner
in interface ClientProtocol
username - If it is null, the original username remains unchanged.
groupname - If it is null, the original groupname remains unchanged.
org.apache.hadoop.security.AccessControlException
- If access is denied
FileNotFoundException
- If file src
is not found
SafeModeException
- not allowed in safemode
org.apache.hadoop.fs.UnresolvedLinkException
- If src
contains a symlink
IOException
- If an I/O error occurred
public LocatedBlock addBlock(String src, String clientName, Block previous, DatanodeInfo[] excludedNodes) throws IOException
ClientProtocol
addBlock
in interface ClientProtocol
src - the file being created
clientName - the name of the client that adds the block
previous - previous block
excludedNodes - a list of nodes that should not be allocated for the current block
org.apache.hadoop.security.AccessControlException
- If access is denied
FileNotFoundException
- If file src
is not found
NotReplicatedYetException
- previous blocks of the file are not
replicated yet. Blocks cannot be added until replication
completes.
SafeModeException
- create not allowed in safemode
org.apache.hadoop.fs.UnresolvedLinkException
- If src
contains a symlink
IOException
- If an I/O error occurred
public void abandonBlock(Block b, String src, String holder) throws IOException
abandonBlock
in interface ClientProtocol
org.apache.hadoop.security.AccessControlException
- If access is denied
FileNotFoundException
- file src
is not found
org.apache.hadoop.fs.UnresolvedLinkException
- If src
contains a symlink
IOException
- If an I/O error occurred
public boolean complete(String src, String clientName, Block last) throws IOException
complete
in interface ClientProtocol
org.apache.hadoop.security.AccessControlException
- If access is denied
FileNotFoundException
- If file src
is not found
SafeModeException
- create not allowed in safemode
org.apache.hadoop.fs.UnresolvedLinkException
- If src
contains a symlink
IOException
- If an I/O error occurred
public void reportBadBlocks(LocatedBlock[] blocks) throws IOException
reportBadBlocks
in interface ClientProtocol
reportBadBlocks
in interface DatanodeProtocol
blocks
- Array of located blocks to report
IOException
public LocatedBlock updateBlockForPipeline(Block block, String clientName) throws IOException
updateBlockForPipeline
in interface ClientProtocol
block - a block
clientName - the name of the client
IOException
- if any error occurs
public void updatePipeline(String clientName, Block oldBlock, Block newBlock, DatanodeID[] newNodes) throws IOException
ClientProtocol
updatePipeline
in interface ClientProtocol
clientName - the name of the client
oldBlock - the old block
newBlock - the new block containing new generation stamp and length
newNodes - datanodes in the pipeline
IOException
- if any error occurs
public void commitBlockSynchronization(Block block, long newgenerationstamp, long newlength, boolean closeFile, boolean deleteblock, DatanodeID[] newtargets) throws IOException
commitBlockSynchronization
in interface DatanodeProtocol
IOException
public long getPreferredBlockSize(String filename) throws IOException
ClientProtocol
getPreferredBlockSize
in interface ClientProtocol
filename
- The name of the file
IOException
org.apache.hadoop.fs.UnresolvedLinkException
- if the path contains a symlink.
@Deprecated public boolean rename(String src, String dst) throws IOException
rename
in interface ClientProtocol
src - existing file or directory name.
dst - new name.
IOException
- an I/O error occurred
public void concat(String trg, String[] src) throws IOException
concat
in interface ClientProtocol
trg - existing file
src - list of existing files (same block size, same replication)
IOException
- if some arguments are invalid
org.apache.hadoop.fs.UnresolvedLinkException
- if trg or srcs contains a symlink
public void rename(String src, String dst, org.apache.hadoop.fs.Options.Rename... options) throws IOException
Without OVERWRITE option, rename fails if the dst already exists. With OVERWRITE option, rename overwrites the dst, if it is a file or an empty directory. Rename fails if dst is a non-empty directory.
This implementation of rename is atomic.
rename
in interface ClientProtocol
src - existing file or directory name.
dst - new name.
options - Rename options
org.apache.hadoop.security.AccessControlException
- If access is denied
DSQuotaExceededException
- If rename violates disk space
quota restriction
org.apache.hadoop.fs.FileAlreadyExistsException
- If dst already exists and options has Options.Rename.OVERWRITE option false.
FileNotFoundException
- If src
does not exist
NSQuotaExceededException
- If rename violates namespace
quota restriction
org.apache.hadoop.fs.ParentNotDirectoryException
- If parent of dst
is not a directory
SafeModeException
- rename not allowed in safemode
org.apache.hadoop.fs.UnresolvedLinkException
- If src
or
dst
contains a symlink
IOException
- If an I/O error occurred
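The OVERWRITE rules above reduce to a couple of checks before the move. A sketch against a flat path-to-contents map (illustrative only; the real rename also moves subtrees, refuses non-empty destination directories, and checks quotas):

```java
import java.util.*;

public class RenameModel {
    enum Rename { NONE, OVERWRITE }

    // Flat path -> contents table standing in for the namespace.
    final Map<String, String> fs = new HashMap<>();

    void rename(String src, String dst, Rename opt) {
        if (!fs.containsKey(src))
            throw new NoSuchElementException(src);          // src must exist
        if (fs.containsKey(dst) && opt != Rename.OVERWRITE)
            throw new IllegalStateException("destination exists: " + dst);
        fs.put(dst, fs.remove(src));  // single step: atomic from the caller's view
    }
}
```

Because the remove-and-put happens as one step under the namesystem lock, callers never observe a state where both or neither path exists, which is what "this implementation of rename is atomic" promises.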
@Deprecated public boolean delete(String src) throws IOException
ClientProtocol
Any blocks belonging to the deleted files will be garbage-collected.
delete
in interface ClientProtocol
src
- existing name.
org.apache.hadoop.fs.UnresolvedLinkException
- if src
contains a symlink.
IOException
public boolean delete(String src, boolean recursive) throws IOException
Same as delete but provides a way to avoid accidentally deleting non-empty directories programmatically.
delete
in interface ClientProtocol
src - existing name
recursive - if true deletes a non-empty directory recursively, else throws an exception.
org.apache.hadoop.security.AccessControlException
- If access is denied
FileNotFoundException
- If file src
is not found
SafeModeException
- create not allowed in safemode
org.apache.hadoop.fs.UnresolvedLinkException
- If src
contains a symlink
IOException
- If an I/O error occurred
public boolean mkdirs(String src, org.apache.hadoop.fs.permission.FsPermission masked, boolean createParent) throws IOException
Specified by: mkdirs in interface ClientProtocol
Parameters:
src - The path of the directory being created
masked - The masked permission of the directory being created
createParent - create missing parent directory if true
Throws:
org.apache.hadoop.security.AccessControlException - If access is denied
org.apache.hadoop.fs.FileAlreadyExistsException - If src already exists
FileNotFoundException - If parent of src does not exist and createParent is false
NSQuotaExceededException - If file creation violates quota restriction
org.apache.hadoop.fs.ParentNotDirectoryException - If parent of src is not a directory
SafeModeException - create not allowed in safemode
org.apache.hadoop.fs.UnresolvedLinkException - If src contains a symlink
IOException - If an I/O error occurred.
RunTimeExceptions:
public void renewLease(String clientName) throws IOException
Description copied from interface: ClientProtocol
The NameNode will revoke the locks and live file-creates of clients that it thinks have died. A client tells the NameNode that it is still alive by periodically calling renewLease(). If a certain amount of time passes since the last call to renewLease(), the NameNode assumes the client has died.
Specified by: renewLease in interface ClientProtocol
Throws:
org.apache.hadoop.security.AccessControlException - permission denied
IOException - If an I/O error occurred
public DirectoryListing getListing(String src, byte[] startAfter, boolean needLocation) throws IOException
Description copied from interface: ClientProtocol
Specified by: getListing in interface ClientProtocol
Parameters:
src - the directory name
startAfter - the name to start listing after, encoded in Java UTF-8
needLocation - if the FileStatus should contain block locations
Throws:
org.apache.hadoop.security.AccessControlException - permission denied
FileNotFoundException - file src is not found
org.apache.hadoop.fs.UnresolvedLinkException - If src contains a symlink
IOException - If an I/O error occurred
public HdfsFileStatus getFileInfo(String src) throws IOException
Specified by: getFileInfo in interface ClientProtocol
Parameters:
src - The string representation of the path to the file
Throws:
org.apache.hadoop.security.AccessControlException - permission denied
FileNotFoundException - file src is not found
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
IOException - If an I/O error occurred
public HdfsFileStatus getFileLinkInfo(String src) throws IOException
Specified by: getFileLinkInfo in interface ClientProtocol
Parameters:
src - The string representation of the path to the file
Throws:
org.apache.hadoop.security.AccessControlException - permission denied
org.apache.hadoop.fs.UnresolvedLinkException - if src contains a symlink
IOException - If an I/O error occurred
public long[] getStats()
Description copied from interface: ClientProtocol
Use constants such as ClientProtocol.GET_STATS_CAPACITY_IDX in place of actual numbers to index into the returned array.
Specified by: getStats in interface ClientProtocol
public DatanodeInfo[] getDatanodeReport(FSConstants.DatanodeReportType type) throws IOException
Description copied from interface: ClientProtocol
Specified by: getDatanodeReport in interface ClientProtocol
Throws:
IOException
public boolean setSafeMode(FSConstants.SafeModeAction action) throws IOException
Description copied from interface: ClientProtocol
Safe mode is a name node state in which the name node does not accept changes to the name space (read-only) and does not replicate or delete blocks.
Safe mode is entered automatically at name node startup. Safe mode can also be entered manually using setSafeMode(SafeModeAction.SAFEMODE_ENTER).
At startup the name node accepts data node reports, collecting information about block locations. In order to leave safe mode it needs to collect a configurable percentage, called the threshold, of blocks that satisfy the minimal replication condition. The minimal replication condition is that each block must have at least dfs.namenode.replication.min replicas. When the threshold is reached, the name node extends safe mode for a configurable amount of time to let the remaining data nodes check in before it starts replicating missing blocks. Then the name node leaves safe mode.
If safe mode is turned on manually using setSafeMode(SafeModeAction.SAFEMODE_ENTER), the name node stays in safe mode until it is manually turned off using setSafeMode(SafeModeAction.SAFEMODE_LEAVE).
The current state of the name node can be verified using setSafeMode(SafeModeAction.SAFEMODE_GET).
Specified by: setSafeMode in interface ClientProtocol
Parameters:
action -
Throws:
IOException
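The safe-mode exit condition above boils down to a fraction check: the share of blocks meeting the minimal replication condition must reach the configured threshold. A minimal sketch, assuming a helper class of our own (the threshold value comes from configuration in real HDFS; the exact configuration key varies by Hadoop version):

```java
// Sketch of the safe-mode threshold check described above.
// Hypothetical helper, not the FSNamesystem implementation.
public class SafeModeCheck {
    /**
     * @param safeBlocks  blocks reported with at least
     *                    dfs.namenode.replication.min replicas
     * @param totalBlocks total blocks in the namespace
     * @param threshold   required fraction, e.g. 0.999
     */
    public static boolean thresholdReached(long safeBlocks, long totalBlocks,
                                           double threshold) {
        if (totalBlocks == 0) return true; // empty namespace: nothing to wait for
        return (double) safeBlocks / totalBlocks >= threshold;
    }

    public static void main(String[] args) {
        System.out.println(thresholdReached(998, 1000, 0.999));
        System.out.println(thresholdReached(999, 1000, 0.999));
    }
}
```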
public boolean isInSafeMode()
public boolean restoreFailedStorage(String arg) throws org.apache.hadoop.security.AccessControlException
Description copied from interface: ClientProtocol
Sets a flag to enable restore of failed storage replicas.
Specified by: restoreFailedStorage in interface ClientProtocol
Throws:
org.apache.hadoop.security.AccessControlException
public void saveNamespace() throws IOException
Description copied from interface: ClientProtocol
Saves the current namespace into storage directories and resets the edits log. Requires superuser privilege and safe mode.
Specified by: saveNamespace in interface ClientProtocol
Throws:
org.apache.hadoop.security.AccessControlException - if the superuser privilege is violated.
IOException - if image creation failed.
public void refreshNodes() throws IOException
Specified by: refreshNodes in interface ClientProtocol
Throws:
IOException
@Deprecated public long getEditLogSize() throws IOException
Specified by: getEditLogSize in interface NamenodeProtocol
Throws:
IOException
@Deprecated public CheckpointSignature rollEditLog() throws IOException
Specified by: rollEditLog in interface NamenodeProtocol
Throws:
IOException
@Deprecated public void rollFsImage(CheckpointSignature sig) throws IOException
Specified by: rollFsImage in interface NamenodeProtocol
Parameters:
sig - the signature of this checkpoint (old fsimage)
Throws:
IOException
public void finalizeUpgrade() throws IOException
Description copied from interface: ClientProtocol
Specified by: finalizeUpgrade in interface ClientProtocol
Throws:
IOException
public UpgradeStatusReport distributedUpgradeProgress(FSConstants.UpgradeAction action) throws IOException
Description copied from interface: ClientProtocol
Specified by: distributedUpgradeProgress in interface ClientProtocol
Parameters:
action - FSConstants.UpgradeAction to perform
Throws:
IOException
public void metaSave(String filename) throws IOException
Specified by: metaSave in interface ClientProtocol
Throws:
IOException
public Collection<org.apache.hadoop.hdfs.server.namenode.FSNamesystem.CorruptFileBlockInfo> listCorruptFileBlocks(String path, String startBlockAfter) throws org.apache.hadoop.security.AccessControlException, IOException
Parameters:
path - Sub-tree used in querying corrupt files
startBlockAfter - Paging support: pass in the last block returned from the previous call, and a number of corrupt blocks after that point are returned
Throws:
org.apache.hadoop.security.AccessControlException
IOException
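The startBlockAfter parameter implements cursor-style paging: each call hands back the last key of the previous page and receives the next batch. A minimal sketch of that pattern over an ordered list (the data and helper names are hypothetical, not Hadoop API):

```java
import java.util.*;

// Illustration of cursor-based paging as used by startBlockAfter above.
public class CursorPaging {
    /** Returns up to 'limit' entries strictly after 'startAfter' (null = from start). */
    public static List<String> page(List<String> sorted, String startAfter, int limit) {
        List<String> out = new ArrayList<>();
        for (String s : sorted) {
            if (startAfter != null && s.compareTo(startAfter) <= 0) continue;
            out.add(s);
            if (out.size() == limit) break;
        }
        return out;
    }

    public static void main(String[] args) {
        List<String> blocks = Arrays.asList("blk_1", "blk_2", "blk_3", "blk_4");
        // First page, then pass the last returned key as the cursor.
        System.out.println(page(blocks, null, 2));
        System.out.println(page(blocks, "blk_2", 2));
    }
}
```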
public org.apache.hadoop.fs.ContentSummary getContentSummary(String path) throws IOException
Returns the ContentSummary rooted at the specified directory.
Specified by: getContentSummary in interface ClientProtocol
Parameters:
path - The string representation of the path
Throws:
org.apache.hadoop.security.AccessControlException - permission denied
FileNotFoundException - file path is not found
org.apache.hadoop.fs.UnresolvedLinkException - if path contains a symlink.
IOException - If an I/O error occurred
public void setQuota(String path, long namespaceQuota, long diskspaceQuota) throws IOException
Specified by: setQuota in interface ClientProtocol
Parameters:
path - The string representation of the path to the directory
namespaceQuota - Limit on the number of names in the tree rooted at the directory
diskspaceQuota - Limit on disk space occupied by all the files under this directory.
The quota can take three kinds of values: (1) 0 or more sets the quota to that value, (2) FSConstants.QUOTA_DONT_SET implies the quota will not be changed, and (3) FSConstants.QUOTA_RESET implies the quota will be reset. Any other value is a runtime error.
Throws:
org.apache.hadoop.security.AccessControlException - permission denied
FileNotFoundException - file path is not found
QuotaExceededException - if the directory size is greater than the given quota
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
IOException - If an I/O error occurred
public void fsync(String src, String clientName) throws IOException
Specified by: fsync in interface ClientProtocol
Parameters:
src - The string representation of the path
clientName - The string representation of the client
Throws:
org.apache.hadoop.security.AccessControlException - permission denied
FileNotFoundException - file src is not found
org.apache.hadoop.fs.UnresolvedLinkException - if src contains a symlink.
IOException - If an I/O error occurred
public void setTimes(String src, long mtime, long atime) throws IOException
Description copied from interface: ClientProtocol
Specified by: setTimes in interface ClientProtocol
Parameters:
src - The string representation of the path
mtime - The number of milliseconds since Jan 1, 1970. Setting mtime to -1 means that modification time should not be set by this call.
atime - The number of milliseconds since Jan 1, 1970. Setting atime to -1 means that access time should not be set by this call.
Throws:
org.apache.hadoop.security.AccessControlException - permission denied
FileNotFoundException - file src is not found
org.apache.hadoop.fs.UnresolvedLinkException - if src contains a symlink.
IOException - If an I/O error occurred
public void createSymlink(String target, String link, org.apache.hadoop.fs.permission.FsPermission dirPerms, boolean createParent) throws IOException
Description copied from interface: ClientProtocol
Specified by: createSymlink in interface ClientProtocol
Parameters:
target - The path of the destination that the link points to.
link - The path of the link being created.
dirPerms - permissions to use when creating parent directories
createParent - if true then missing parent dirs are created; if false then parent must exist
Throws:
org.apache.hadoop.security.AccessControlException - permission denied
org.apache.hadoop.fs.FileAlreadyExistsException - If file link already exists
FileNotFoundException - If parent of link does not exist and createParent is false
org.apache.hadoop.fs.ParentNotDirectoryException - If parent of link is not a directory.
org.apache.hadoop.fs.UnresolvedLinkException - if link contains a symlink.
IOException - If an I/O error occurred
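Conceptually, createSymlink records a mapping from the link path to a target path, and lookups follow such mappings until a non-link path is reached (which is why other calls can raise UnresolvedLinkException). A minimal in-memory sketch of that idea, with a map-based store and class name that are purely illustrative:

```java
import java.util.*;

// Sketch of symlink creation and resolution as described above.
// Hypothetical helper, not Hadoop code.
public class LinkTable {
    private final Map<String, String> links = new HashMap<>();

    /** Records that 'link' points at 'target'. */
    public void createSymlink(String target, String link) {
        links.put(link, target);
    }

    /** Follows links until a non-link path is reached, failing on cycles. */
    public String resolve(String path) {
        Set<String> seen = new HashSet<>();
        while (links.containsKey(path)) {
            if (!seen.add(path)) {
                throw new IllegalStateException("symlink cycle at " + path);
            }
            path = links.get(path);
        }
        return path;
    }

    public static void main(String[] args) {
        LinkTable t = new LinkTable();
        t.createSymlink("/user/data", "/home/alice/data");
        System.out.println(t.resolve("/home/alice/data"));
    }
}
```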
public String getLinkTarget(String path) throws IOException
Description copied from interface: ClientProtocol
Specified by: getLinkTarget in interface ClientProtocol
Parameters:
path - The path with a link that needs resolution.
Throws:
org.apache.hadoop.security.AccessControlException - permission denied
FileNotFoundException - If path does not exist
IOException - If the given path does not refer to a symlink or an I/O error occurred
public DatanodeRegistration registerDatanode(DatanodeRegistration nodeReg) throws IOException
Description copied from interface: DatanodeProtocol
Specified by: registerDatanode in interface DatanodeProtocol
Returns: an updated DatanodeRegistration, which contains a new storageID if the datanode did not have one, and a registration ID for further communication.
Throws:
IOException
See Also: DataNode.dnRegistration, FSNamesystem.registerDatanode(DatanodeRegistration)
public DatanodeCommand[] sendHeartbeat(DatanodeRegistration nodeReg, long capacity, long dfsUsed, long remaining, int xmitsInProgress, int xceiverCount, int failedVolumes) throws IOException
Specified by: sendHeartbeat in interface DatanodeProtocol
Throws:
IOException
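Heartbeats are how the NameNode tracks datanode liveness: a datanode that has not sent a heartbeat within an expiry window is considered dead. The sketch below models that rule; the expiry formula (2 × recheck interval + 10 × heartbeat interval) reflects common NameNode behavior but should be treated as an assumption, since the exact constants vary by Hadoop version, and the class is a hypothetical illustration.

```java
// Sketch of heartbeat-based liveness tracking, per the assumptions above.
public class HeartbeatMonitor {
    public static long expiryMillis(long recheckIntervalMs, long heartbeatIntervalMs) {
        // Assumed formula; verify against your Hadoop version's FSNamesystem.
        return 2 * recheckIntervalMs + 10 * heartbeatIntervalMs;
    }

    public static boolean isDead(long nowMs, long lastHeartbeatMs,
                                 long recheckIntervalMs, long heartbeatIntervalMs) {
        return nowMs - lastHeartbeatMs
                > expiryMillis(recheckIntervalMs, heartbeatIntervalMs);
    }

    public static void main(String[] args) {
        // Example: 5 min recheck, 3 s heartbeat -> 10.5 min expiry window.
        System.out.println(expiryMillis(5 * 60_000L, 3_000L));
    }
}
```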
public DatanodeCommand blockReport(DatanodeRegistration nodeReg, long[] blocks) throws IOException
Description copied from interface: DatanodeProtocol
Specified by: blockReport in interface DatanodeProtocol
Parameters:
blocks - the block list as an array of longs. Each block is represented as 2 longs. This is done instead of Block[] to reduce the memory used by block reports.
Throws:
IOException
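The flat long[] encoding avoids per-object overhead when a datanode reports millions of blocks: each block contributes a fixed number of adjacent longs. A simplified sketch of that idea, encoding a hypothetical (id, length) pair per block (the real HDFS helper for this is BlockListAsLongs; the class below is an illustration, not its implementation):

```java
// Sketch of the compact block-report encoding described above.
public class BlockReportCodec {
    /** Flattens blocks[i] = {id, length} into one long[] of 2 longs per block. */
    public static long[] encode(long[][] blocks) {
        long[] out = new long[blocks.length * 2];
        for (int i = 0; i < blocks.length; i++) {
            out[2 * i] = blocks[i][0];     // block id
            out[2 * i + 1] = blocks[i][1]; // block length
        }
        return out;
    }

    public static long idAt(long[] report, int i)     { return report[2 * i]; }
    public static long lengthAt(long[] report, int i) { return report[2 * i + 1]; }

    public static void main(String[] args) {
        long[] report = encode(new long[][] { {1001L, 64L}, {1002L, 128L} });
        System.out.println(idAt(report, 1) + " " + lengthAt(report, 1));
    }
}
```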
public void blockReceived(DatanodeRegistration nodeReg, Block[] blocks, String[] delHints) throws IOException
Description copied from interface: DatanodeProtocol
Specified by: blockReceived in interface DatanodeProtocol
Throws:
IOException
public void errorReport(DatanodeRegistration nodeReg, int errorCode, String msg) throws IOException
Specified by: errorReport in interface DatanodeProtocol
Throws:
IOException
public NamespaceInfo versionRequest() throws IOException
Description copied from interface: NamenodeProtocol
Specified by: versionRequest in interface DatanodeProtocol
Specified by: versionRequest in interface NamenodeProtocol
Returns: a NamespaceInfo identifying the versions and storage information of the name-node
Throws:
IOException
public UpgradeCommand processUpgradeCommand(UpgradeCommand comm) throws IOException
Description copied from interface: DatanodeProtocol
Specified by: processUpgradeCommand in interface DatanodeProtocol
Throws:
IOException
public void verifyRequest(NodeRegistration nodeReg) throws IOException
Parameters:
nodeReg - data node registration
Throws:
IOException
public void verifyVersion(int version) throws IOException
Parameters:
version -
Throws:
IOException
public File getFsImageName() throws IOException
Throws:
IOException
public FSImage getFSImage()
public File[] getFsImageNameCheckpoint() throws IOException
Throws:
IOException
public InetSocketAddress getNameNodeAddress()
public InetSocketAddress getHttpAddress()
public void refreshServiceAcl() throws IOException
Specified by: refreshServiceAcl in interface org.apache.hadoop.security.authorize.RefreshAuthorizationPolicyProtocol
Throws:
IOException
public void refreshUserToGroupsMappings() throws IOException
Specified by: refreshUserToGroupsMappings in interface org.apache.hadoop.security.RefreshUserMappingsProtocol
Throws:
IOException
public void refreshSuperUserGroupsConfiguration()
Specified by: refreshSuperUserGroupsConfiguration in interface org.apache.hadoop.security.RefreshUserMappingsProtocol
public static NameNode createNameNode(String[] argv, org.apache.hadoop.conf.Configuration conf) throws IOException
Throws:
IOException
public static void main(String[] argv) throws Exception
Throws:
Exception