@InterfaceAudience.Private
@InterfaceStability.Evolving
public interface ClientProtocol

ClientProtocol is used by user code, via the DistributedFileSystem class, to communicate with the NameNode. User code can manipulate the directory namespace, as well as open/close file streams, etc.
Field Summary

static int GET_STATS_CAPACITY_IDX
static int GET_STATS_CORRUPT_BLOCKS_IDX
static int GET_STATS_MISSING_BLOCKS_IDX
static int GET_STATS_REMAINING_IDX
static int GET_STATS_UNDER_REPLICATED_IDX
static int GET_STATS_USED_IDX
static long versionID
    Compared to the previous version the following changes have been introduced (only the latest change is reflected).
Method Summary

void abandonBlock(Block b, String src, String holder)
    The client can give up on a block by calling abandonBlock().

LocatedBlock addBlock(String src, String clientName, Block previous, DatanodeInfo[] excludeNodes)
    A client that wants to write an additional block to the indicated filename (which must currently be open for writing) should call addBlock().

LocatedBlock append(String src, String clientName)
    Append to the end of the file.

void cancelDelegationToken(org.apache.hadoop.security.token.Token<DelegationTokenIdentifier> token)
    Cancel an existing delegation token.

boolean complete(String src, String clientName, Block last)
    The client is done writing data to the given filename, and would like to complete it.

void concat(String trg, String[] srcs)
    Moves blocks from srcs to trg and deletes srcs.

void create(String src, org.apache.hadoop.fs.permission.FsPermission masked, String clientName, org.apache.hadoop.io.EnumSetWritable<org.apache.hadoop.fs.CreateFlag> flag, boolean createParent, short replication, long blockSize)
    Create a new file entry in the namespace.

void createSymlink(String target, String link, org.apache.hadoop.fs.permission.FsPermission dirPerm, boolean createParent)
    Create a symlink to a file or directory.

boolean delete(String src)
    Deprecated. Use delete(String, boolean) instead.

boolean delete(String src, boolean recursive)
    Delete the given file or directory from the file system.

UpgradeStatusReport distributedUpgradeProgress(FSConstants.UpgradeAction action)
    Report distributed upgrade progress or force the current upgrade to proceed.

void finalizeUpgrade()
    Finalize the previous upgrade.

void fsync(String src, String client)
    Write all metadata for this file into persistent storage.

LocatedBlocks getBlockLocations(String src, long offset, long length)
    Get the locations of the blocks of the specified file within the specified range.

org.apache.hadoop.fs.ContentSummary getContentSummary(String path)
    Get the ContentSummary rooted at the specified directory.

DatanodeInfo[] getDatanodeReport(FSConstants.DatanodeReportType type)
    Get a report on the system's current datanodes.

org.apache.hadoop.security.token.Token<DelegationTokenIdentifier> getDelegationToken(org.apache.hadoop.io.Text renewer)
    Get a valid delegation token.

HdfsFileStatus getFileInfo(String src)
    Get the file info for a specific file or directory.

HdfsFileStatus getFileLinkInfo(String src)
    Get the file info for a specific file or directory; if the path refers to a symlink, the info for the link itself is returned.

String getLinkTarget(String path)
    Return the target of the given symlink.

DirectoryListing getListing(String src, byte[] startAfter, boolean needLocation)
    Get a partial listing of the indicated directory.

long getPreferredBlockSize(String filename)
    Get the block size for the given file.

org.apache.hadoop.fs.FsServerDefaults getServerDefaults()
    Get server default values for a number of configuration params.

long[] getStats()
    Get a set of statistics about the filesystem.

void metaSave(String filename)
    Dumps namenode data structures into the specified file.

boolean mkdirs(String src, org.apache.hadoop.fs.permission.FsPermission masked, boolean createParent)
    Create a directory (or hierarchy of directories) with the given name and permission.

boolean recoverLease(String src, String clientName)
    Start lease recovery.

void refreshNodes()
    Tells the namenode to reread the hosts and exclude files.

boolean rename(String src, String dst)
    Deprecated. Use rename(String, String, Options.Rename...) instead.

void rename(String src, String dst, org.apache.hadoop.fs.Options.Rename... options)
    Rename src to dst.

long renewDelegationToken(org.apache.hadoop.security.token.Token<DelegationTokenIdentifier> token)
    Renew an existing delegation token.

void renewLease(String clientName)
    Client programs can cause stateful changes in the NameNode that affect other clients.

void reportBadBlocks(LocatedBlock[] blocks)
    The client wants to report corrupted blocks (blocks with specified locations on datanodes).

boolean restoreFailedStorage(String arg)
    Enable/disable restore of failed storage.

void saveNamespace()
    Save the namespace image.

void setOwner(String src, String username, String groupname)
    Set the owner of a path (i.e. a file or a directory).

void setPermission(String src, org.apache.hadoop.fs.permission.FsPermission permission)
    Set permissions for an existing file/directory.

void setQuota(String path, long namespaceQuota, long diskspaceQuota)
    Set the quota for a directory.

boolean setReplication(String src, short replication)
    Set replication for an existing file.

boolean setSafeMode(FSConstants.SafeModeAction action)
    Enter, leave, or get safe mode.

void setTimes(String src, long mtime, long atime)
    Sets the modification and access time of the file to the specified time.

LocatedBlock updateBlockForPipeline(Block block, String clientName)
    Get a new generation stamp together with an access token for a block under construction. This method is called only when a client needs to recover a failed pipeline or set up a pipeline for appending to a block.

void updatePipeline(String clientName, Block oldBlock, Block newBlock, DatanodeID[] newNodes)
    Update a pipeline for a block under construction.
Methods inherited from interface org.apache.hadoop.ipc.VersionedProtocol

getProtocolVersion
Field Detail
static final long versionID
static final int GET_STATS_CAPACITY_IDX
static final int GET_STATS_USED_IDX
static final int GET_STATS_REMAINING_IDX
static final int GET_STATS_UNDER_REPLICATED_IDX
static final int GET_STATS_CORRUPT_BLOCKS_IDX
static final int GET_STATS_MISSING_BLOCKS_IDX
Method Detail
getBlockLocations

@Nullable
LocatedBlocks getBlockLocations(String src, long offset, long length)
              throws org.apache.hadoop.security.AccessControlException, FileNotFoundException, org.apache.hadoop.fs.UnresolvedLinkException, IOException

Returns LocatedBlocks, which contains the file length, blocks, and their locations. DataNode locations for each block are sorted by distance to the client's address. The client then has to contact one of the indicated DataNodes to obtain the actual data.
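To make the range semantics concrete, the following standalone sketch computes which block indices a byte range [offset, offset+length) touches for a fixed block size. This is a plain-Java illustration with an assumed 64 MB block size, not part of the HDFS client; the real client works with the LocatedBlocks result directly.

```java
public class BlockRange {
    // For a file split into fixed-size blocks, compute the first and last
    // block indices a byte range [offset, offset+length) touches. This mirrors
    // the bookkeeping a client does with the result of getBlockLocations.
    static long[] blocksForRange(long blockSize, long offset, long length) {
        if (length <= 0) return new long[] { -1, -1 }; // empty range: no blocks
        long first = offset / blockSize;
        long last = (offset + length - 1) / blockSize;
        return new long[] { first, last };
    }

    public static void main(String[] args) {
        long mb64 = 64L * 1024 * 1024;                  // assumed block size
        long[] r = blocksForRange(mb64, mb64 - 10, 20); // range straddles a block boundary
        System.out.println(r[0] + ".." + r[1]);         // 0..1
    }
}
```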
Parameters:
    src - file name
    offset - range start offset
    length - range length
Throws:
    org.apache.hadoop.security.AccessControlException - if access is denied
    FileNotFoundException - if file src does not exist
    org.apache.hadoop.fs.UnresolvedLinkException - if src contains a symlink
    IOException - if an I/O error occurred

getServerDefaults

org.apache.hadoop.fs.FsServerDefaults getServerDefaults()
                                      throws IOException

Get server default values for a number of configuration params.

Throws:
    IOException
create

void create(String src, org.apache.hadoop.fs.permission.FsPermission masked, String clientName, org.apache.hadoop.io.EnumSetWritable<org.apache.hadoop.fs.CreateFlag> flag, boolean createParent, short replication, long blockSize)
     throws org.apache.hadoop.security.AccessControlException, AlreadyBeingCreatedException, DSQuotaExceededException, org.apache.hadoop.fs.FileAlreadyExistsException, FileNotFoundException, NSQuotaExceededException, org.apache.hadoop.fs.ParentNotDirectoryException, SafeModeException, org.apache.hadoop.fs.UnresolvedLinkException, IOException

Create a new file entry in the namespace. This creates an empty file at the specified source path. The path must be a full path rooted at the root; the name-node has no notion of a "current" directory for a client.

Once created, the file is visible and available for read to other clients. However, other clients cannot delete(String, boolean), re-create, or rename(String, String) it until the file is completed or its lease expires.

Blocks have a maximum size. Clients that intend to create multi-block files must also use addBlock(String, String, Block, DatanodeInfo[]).
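The create/addBlock/complete write flow can be sketched against an in-memory stand-in. WriteProtocolStub below is a hypothetical simplification of the relevant subset of ClientProtocol (the real interface is an RPC proxy to the NameNode, and blocks are real Block objects rather than strings):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for the subset of ClientProtocol a writer uses.
interface WriteProtocolStub {
    void create(String src, String clientName);
    String addBlock(String src, String clientName, String previous);
    boolean complete(String src, String clientName, String last);
}

public class WriteFlowSketch {
    // Multi-block write flow: create the file, request one block at a time
    // (passing the previous block so the NameNode can commit it), complete.
    static List<String> writeFile(WriteProtocolStub nn, String src, String client, int nBlocks) {
        nn.create(src, client);
        List<String> blocks = new ArrayList<>();
        String prev = null;
        for (int i = 0; i < nBlocks; i++) {
            prev = nn.addBlock(src, client, prev);
            blocks.add(prev);
        }
        nn.complete(src, client, prev);
        return blocks;
    }

    // In-memory stub that hands out sequential block ids.
    static WriteProtocolStub demoStub() {
        return new WriteProtocolStub() {
            int next = 0;
            public void create(String src, String clientName) { }
            public String addBlock(String src, String clientName, String previous) {
                return "blk_" + (next++);
            }
            public boolean complete(String src, String clientName, String last) { return true; }
        };
    }

    public static void main(String[] args) {
        System.out.println(writeFile(demoStub(), "/tmp/f", "c1", 3)); // [blk_0, blk_1, blk_2]
    }
}
```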
Parameters:
    src - path of the file being created
    masked - masked permission
    clientName - name of the current client
    flag - indicates whether the file should be overwritten if it already exists, created if it does not exist, or appended to
    createParent - create missing parent directories if true
    replication - block replication factor
    blockSize - maximum block size
Throws:
    org.apache.hadoop.security.AccessControlException - if access is denied
    AlreadyBeingCreatedException - if the file is already being created by another client
    DSQuotaExceededException - if file creation violates the disk space quota restriction
    org.apache.hadoop.fs.FileAlreadyExistsException - if file src already exists
    FileNotFoundException - if the parent of src does not exist and createParent is false
    org.apache.hadoop.fs.ParentNotDirectoryException - if the parent of src is not a directory
    NSQuotaExceededException - if file creation violates the namespace quota restriction
    SafeModeException - create not allowed in safemode
    org.apache.hadoop.fs.UnresolvedLinkException - if src contains a symlink
    IOException - if an I/O error occurred
RuntimeExceptions:
    org.apache.hadoop.fs.InvalidPathException - if path src is invalid

append

LocatedBlock append(String src, String clientName)
             throws org.apache.hadoop.security.AccessControlException, DSQuotaExceededException, FileNotFoundException, SafeModeException, org.apache.hadoop.fs.UnresolvedLinkException, IOException
Append to the end of the file. Appending is allowed only if the server is configured with the parameter dfs.support.append set to true; otherwise an IOException is thrown.

Parameters:
    src - path of the file to append to
    clientName - name of the current client
Throws:
    org.apache.hadoop.security.AccessControlException - if permission to append to the file is denied by the system. As usual, on the client side the exception will be wrapped into a RemoteException.
    FileNotFoundException - if file src is not found
    DSQuotaExceededException - if the append violates the disk space quota restriction
    SafeModeException - append not allowed in safemode
    org.apache.hadoop.fs.UnresolvedLinkException - if src contains a symlink
    IOException - if an I/O error occurred
RuntimeExceptions:
    UnsupportedOperationException - if append is not supported

setReplication

boolean setReplication(String src, short replication)
        throws org.apache.hadoop.security.AccessControlException, DSQuotaExceededException, FileNotFoundException, SafeModeException, org.apache.hadoop.fs.UnresolvedLinkException, IOException
The NameNode sets the replication to the new value and returns. The actual block replication is not expected to be performed during this method call. The blocks will be populated or removed in the background as a result of the routine block maintenance procedures.

Parameters:
    src - file name
    replication - new replication
Throws:
    org.apache.hadoop.security.AccessControlException - if access is denied
    DSQuotaExceededException - if the change violates the disk space quota restriction
    FileNotFoundException - if file src is not found
    SafeModeException - not allowed in safemode
    org.apache.hadoop.fs.UnresolvedLinkException - if src contains a symlink
    IOException - if an I/O error occurred

setPermission

void setPermission(String src, org.apache.hadoop.fs.permission.FsPermission permission)
     throws org.apache.hadoop.security.AccessControlException, FileNotFoundException, SafeModeException, org.apache.hadoop.fs.UnresolvedLinkException, IOException
Set permissions for an existing file/directory.

Throws:
    org.apache.hadoop.security.AccessControlException - if access is denied
    FileNotFoundException - if file src is not found
    SafeModeException - not allowed in safemode
    org.apache.hadoop.fs.UnresolvedLinkException - if src contains a symlink
    IOException - if an I/O error occurred

setOwner

void setOwner(String src, String username, String groupname)
     throws org.apache.hadoop.security.AccessControlException, FileNotFoundException, SafeModeException, org.apache.hadoop.fs.UnresolvedLinkException, IOException
Set the owner of a path (i.e. a file or a directory).

Parameters:
    src - the path
    username - if null, the original username remains unchanged
    groupname - if null, the original groupname remains unchanged
Throws:
    org.apache.hadoop.security.AccessControlException - if access is denied
    FileNotFoundException - if file src is not found
    SafeModeException - not allowed in safemode
    org.apache.hadoop.fs.UnresolvedLinkException - if src contains a symlink
    IOException - if an I/O error occurred

abandonBlock

void abandonBlock(Block b, String src, String holder)
     throws org.apache.hadoop.security.AccessControlException, FileNotFoundException, org.apache.hadoop.fs.UnresolvedLinkException, IOException
The client can give up on a block by calling abandonBlock().

Throws:
    org.apache.hadoop.security.AccessControlException - if access is denied
    FileNotFoundException - if file src is not found
    org.apache.hadoop.fs.UnresolvedLinkException - if src contains a symlink
    IOException - if an I/O error occurred

addBlock

LocatedBlock addBlock(String src, String clientName, @Nullable Block previous, @Nullable DatanodeInfo[] excludeNodes)
             throws org.apache.hadoop.security.AccessControlException, FileNotFoundException, NotReplicatedYetException, SafeModeException, org.apache.hadoop.fs.UnresolvedLinkException, IOException
A client that wants to write an additional block to the indicated filename (which must currently be open for writing) should call addBlock().

Parameters:
    src - the file being created
    clientName - the name of the client that adds the block
    previous - previous block
    excludeNodes - a list of nodes that should not be allocated for the current block
Throws:
    org.apache.hadoop.security.AccessControlException - if access is denied
    FileNotFoundException - if file src is not found
    NotReplicatedYetException - if previous blocks of the file are not yet replicated; blocks cannot be added until replication completes
    SafeModeException - create not allowed in safemode
    org.apache.hadoop.fs.UnresolvedLinkException - if src contains a symlink
    IOException - if an I/O error occurred

complete

boolean complete(String src, String clientName, Block last)
        throws org.apache.hadoop.security.AccessControlException, FileNotFoundException, SafeModeException, org.apache.hadoop.fs.UnresolvedLinkException, IOException
The client is done writing data to the given filename, and would like to complete it.

Throws:
    org.apache.hadoop.security.AccessControlException - if access is denied
    FileNotFoundException - if file src is not found
    SafeModeException - not allowed in safemode
    org.apache.hadoop.fs.UnresolvedLinkException - if src contains a symlink
    IOException - if an I/O error occurred

reportBadBlocks

void reportBadBlocks(LocatedBlock[] blocks)
     throws IOException
The client wants to report corrupted blocks (blocks with specified locations on datanodes).

Parameters:
    blocks - array of located blocks to report
Throws:
    IOException

rename

@Deprecated
boolean rename(String src, String dst)
        throws org.apache.hadoop.fs.UnresolvedLinkException, IOException
Deprecated. Use rename(String, String, Options.Rename...) instead.

Parameters:
    src - existing file or directory name
    dst - new name
Throws:
    IOException - if an I/O error occurred
    org.apache.hadoop.fs.UnresolvedLinkException

concat

void concat(String trg, String[] srcs)
     throws IOException, org.apache.hadoop.fs.UnresolvedLinkException
Moves blocks from srcs to trg and deletes srcs.

Parameters:
    trg - existing file
    srcs - list of existing files (same block size, same replication)
Throws:
    IOException - if some arguments are invalid
    org.apache.hadoop.fs.UnresolvedLinkException - if trg or srcs contains a symlink

rename

void rename(String src, String dst, org.apache.hadoop.fs.Options.Rename... options)
     throws org.apache.hadoop.security.AccessControlException, DSQuotaExceededException, org.apache.hadoop.fs.FileAlreadyExistsException, FileNotFoundException, NSQuotaExceededException, org.apache.hadoop.fs.ParentNotDirectoryException, SafeModeException, org.apache.hadoop.fs.UnresolvedLinkException, IOException
Rename src to dst. Without the OVERWRITE option, the rename fails if dst already exists. With the OVERWRITE option, the rename overwrites dst if it is a file or an empty directory; the rename fails if dst is a non-empty directory.

This implementation of rename is atomic.
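The overwrite rule can be modeled with an in-memory map standing in for the namespace. This is an illustrative sketch of the documented semantics only, not HDFS code, and it ignores directories, quotas, and atomicity:

```java
import java.util.HashMap;
import java.util.Map;

public class RenameOverwriteSketch {
    // Documented rule: without OVERWRITE the rename fails if dst exists;
    // with OVERWRITE, dst is replaced by src.
    static boolean rename(Map<String, String> ns, String src, String dst, boolean overwrite) {
        if (!ns.containsKey(src)) return false;          // src must exist
        if (ns.containsKey(dst) && !overwrite) return false;
        ns.put(dst, ns.remove(src));                     // move the entry
        return true;
    }

    public static void main(String[] args) {
        Map<String, String> ns = new HashMap<>();
        ns.put("/a", "data");
        ns.put("/b", "old");
        System.out.println(rename(ns, "/a", "/b", false)); // false: /b exists
        System.out.println(rename(ns, "/a", "/b", true));  // true: /b overwritten
        System.out.println(ns.get("/b"));                  // data
    }
}
```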
Parameters:
    src - existing file or directory name
    dst - new name
    options - rename options
Throws:
    org.apache.hadoop.security.AccessControlException - if access is denied
    DSQuotaExceededException - if the rename violates the disk space quota restriction
    org.apache.hadoop.fs.FileAlreadyExistsException - if dst already exists and the options do not include Options.Rename.OVERWRITE
    FileNotFoundException - if src does not exist
    NSQuotaExceededException - if the rename violates the namespace quota restriction
    org.apache.hadoop.fs.ParentNotDirectoryException - if the parent of dst is not a directory
    SafeModeException - rename not allowed in safemode
    org.apache.hadoop.fs.UnresolvedLinkException - if src or dst contains a symlink
    IOException - if an I/O error occurred
delete

@Deprecated
boolean delete(String src)
        throws IOException, org.apache.hadoop.fs.UnresolvedLinkException

Deprecated. Use delete(String, boolean) instead.

Delete the given file or directory from the file system. Any blocks belonging to the deleted files will be garbage-collected.

Parameters:
    src - existing name
Throws:
    org.apache.hadoop.fs.UnresolvedLinkException - if src contains a symlink
    IOException
delete

boolean delete(String src, boolean recursive)
        throws org.apache.hadoop.security.AccessControlException, FileNotFoundException, SafeModeException, org.apache.hadoop.fs.UnresolvedLinkException, IOException

Same as delete(String), but provides a way to avoid accidentally deleting non-empty directories programmatically.

Parameters:
    src - existing name
    recursive - if true, deletes a non-empty directory recursively; otherwise throws an exception
Throws:
    org.apache.hadoop.security.AccessControlException - if access is denied
    FileNotFoundException - if file src is not found
    SafeModeException - delete not allowed in safemode
    org.apache.hadoop.fs.UnresolvedLinkException - if src contains a symlink
    IOException - if an I/O error occurred

mkdirs

boolean mkdirs(String src, org.apache.hadoop.fs.permission.FsPermission masked, boolean createParent)
        throws org.apache.hadoop.security.AccessControlException, org.apache.hadoop.fs.FileAlreadyExistsException, FileNotFoundException, NSQuotaExceededException, org.apache.hadoop.fs.ParentNotDirectoryException, SafeModeException, org.apache.hadoop.fs.UnresolvedLinkException, IOException
Create a directory (or hierarchy of directories) with the given name and permission.

Parameters:
    src - the path of the directory being created
    masked - the masked permission of the directory being created
    createParent - create missing parent directories if true
Throws:
    org.apache.hadoop.security.AccessControlException - if access is denied
    org.apache.hadoop.fs.FileAlreadyExistsException - if src already exists
    FileNotFoundException - if the parent of src does not exist and createParent is false
    NSQuotaExceededException - if directory creation violates the quota restriction
    org.apache.hadoop.fs.ParentNotDirectoryException - if the parent of src is not a directory
    SafeModeException - create not allowed in safemode
    org.apache.hadoop.fs.UnresolvedLinkException - if src contains a symlink
    IOException - if an I/O error occurred
RuntimeExceptions:
    org.apache.hadoop.fs.InvalidPathException - if src is invalid

getListing

DirectoryListing getListing(String src, byte[] startAfter, boolean needLocation)
                 throws org.apache.hadoop.security.AccessControlException, FileNotFoundException, org.apache.hadoop.fs.UnresolvedLinkException, IOException
Get a partial listing of the indicated directory.

Parameters:
    src - the directory name
    startAfter - the name to start listing after, encoded in Java UTF-8
    needLocation - if true, the returned FileStatus entries should contain block locations
Throws:
    org.apache.hadoop.security.AccessControlException - permission denied
    FileNotFoundException - if file src is not found
    org.apache.hadoop.fs.UnresolvedLinkException - if src contains a symlink
    IOException - if an I/O error occurred

renewLease

void renewLease(String clientName)
     throws org.apache.hadoop.security.AccessControlException, IOException
Client programs can cause stateful changes in the NameNode that affect other clients. A client tells the NameNode that it is still alive by periodically calling renewLease(). If a certain amount of time passes since the last call to renewLease(), the NameNode assumes the client has died, and revokes its locks and live file-creates.
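The timeout check behind lease expiry can be sketched as follows. The one-hour limit here is an assumed illustrative value, not a quoted Hadoop constant, and the class is a standalone model rather than NameNode code:

```java
public class LeaseCheck {
    // The NameNode presumes a client dead if too much time has passed since
    // its last renewLease() call. Assumed illustrative limit: 1 hour.
    static final long HARD_LIMIT_MS = 60L * 60 * 1000;

    static boolean leaseExpired(long lastRenewalMs, long nowMs) {
        return nowMs - lastRenewalMs > HARD_LIMIT_MS;
    }

    public static void main(String[] args) {
        long lastRenewal = 0L;
        System.out.println(leaseExpired(lastRenewal, 30L * 60 * 1000));  // false: renewed recently
        System.out.println(leaseExpired(lastRenewal, 2L * 3600 * 1000)); // true: client presumed dead
    }
}
```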
Throws:
    org.apache.hadoop.security.AccessControlException - permission denied
    IOException - if an I/O error occurred

recoverLease

boolean recoverLease(String src, String clientName)
        throws IOException
Start lease recovery.

Parameters:
    src - path of the file to start lease recovery on
    clientName - name of the current client
Throws:
    IOException
getStats

long[] getStats()
       throws IOException

Get a set of statistics about the filesystem. Use the public GET_STATS_*_IDX constants, rather than literal numbers, to index into the returned array.

Throws:
    IOException
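As a standalone sketch of indexing the stats array: the index values below are assumptions for illustration only; real code should always use the interface's GET_STATS_*_IDX constants rather than literal positions.

```java
public class FsStats {
    // Assumed positions for illustration; use the interface's constants in
    // real code rather than hard-coded numbers.
    static final int GET_STATS_CAPACITY_IDX = 0;
    static final int GET_STATS_USED_IDX = 1;
    static final int GET_STATS_REMAINING_IDX = 2;

    // Pull one named statistic out of the raw long[] that getStats() returns.
    static long remainingBytes(long[] stats) {
        return stats[GET_STATS_REMAINING_IDX];
    }

    public static void main(String[] args) {
        long[] stats = { 1000L, 400L, 600L }; // sample capacity/used/remaining
        System.out.println(remainingBytes(stats)); // 600
    }
}
```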
getDatanodeReport

DatanodeInfo[] getDatanodeReport(FSConstants.DatanodeReportType type)
               throws IOException

Get a report on the system's current datanodes.

Throws:
    IOException

getPreferredBlockSize

long getPreferredBlockSize(String filename)
     throws IOException, org.apache.hadoop.fs.UnresolvedLinkException

Get the block size for the given file.

Parameters:
    filename - the name of the file
Throws:
    IOException
    org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink

setSafeMode

boolean setSafeMode(FSConstants.SafeModeAction action)
        throws IOException
Enter, leave, or get safe mode. Safe mode is a name node state in which the name node does not accept changes to the name space (read-only) and does not replicate or delete blocks.

Safe mode is entered automatically at name node startup. It can also be entered manually using setSafeMode(SafeModeAction.SAFEMODE_ENTER).

At startup the name node accepts data node reports, collecting information about block locations. In order to leave safe mode it needs to collect a configurable percentage, called the threshold, of blocks that satisfy the minimal replication condition. The minimal replication condition is that each block must have at least dfs.namenode.replication.min replicas. When the threshold is reached, the name node extends safe mode for a configurable amount of time to let the remaining data nodes check in before it starts replicating missing blocks. Then the name node leaves safe mode.
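The startup exit condition can be sketched as a simple threshold check. This is an illustrative model of the rule described above, not NameNode code, and it ignores the extension period:

```java
public class SafeModeThreshold {
    // The name node may leave startup safe mode once the fraction of blocks
    // meeting the minimal replication condition reaches the configured
    // threshold (illustrative check only).
    static boolean thresholdReached(long safeBlocks, long totalBlocks, double threshold) {
        if (totalBlocks == 0) return true; // nothing to wait for
        return (double) safeBlocks / totalBlocks >= threshold;
    }

    public static void main(String[] args) {
        System.out.println(thresholdReached(998, 1000, 0.999)); // false: 99.8% < 99.9%
        System.out.println(thresholdReached(999, 1000, 0.999)); // true: threshold met
    }
}
```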
If safe mode is turned on manually using setSafeMode(SafeModeAction.SAFEMODE_ENTER), then the name node stays in safe mode until it is manually turned off using setSafeMode(SafeModeAction.SAFEMODE_LEAVE). The current state of the name node can be verified using setSafeMode(SafeModeAction.SAFEMODE_GET).

Parameters:
    action - the safe mode action to perform
Throws:
    IOException
saveNamespace

void saveNamespace()
     throws org.apache.hadoop.security.AccessControlException, IOException

Saves the current namespace into the storage directories and resets the edits log. Requires superuser privilege and safe mode.

Throws:
    org.apache.hadoop.security.AccessControlException - if the superuser privilege is violated
    IOException - if image creation failed

restoreFailedStorage

boolean restoreFailedStorage(String arg)
        throws org.apache.hadoop.security.AccessControlException
Enable/disable restore of failed storage. Sets a flag to enable restore of failed storage replicas.

Throws:
    org.apache.hadoop.security.AccessControlException - if the superuser privilege is violated

refreshNodes

void refreshNodes()
     throws IOException

Tells the namenode to reread the hosts and exclude files.

Throws:
    IOException

finalizeUpgrade

void finalizeUpgrade()
     throws IOException

Finalize the previous upgrade.

Throws:
    IOException

distributedUpgradeProgress

@Nullable
UpgradeStatusReport distributedUpgradeProgress(FSConstants.UpgradeAction action)
                    throws IOException

Report distributed upgrade progress or force the current upgrade to proceed.

Parameters:
    action - the FSConstants.UpgradeAction to perform
Throws:
    IOException

metaSave

void metaSave(String filename)
     throws IOException

Dumps namenode data structures into the specified file.

Throws:
    IOException
getFileInfo

@Nullable
HdfsFileStatus getFileInfo(String src)
               throws org.apache.hadoop.security.AccessControlException, FileNotFoundException, org.apache.hadoop.fs.UnresolvedLinkException, IOException

Get the file info for a specific file or directory.

Parameters:
    src - the string representation of the path to the file
Throws:
    org.apache.hadoop.security.AccessControlException - permission denied
    FileNotFoundException - if file src is not found
    org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink
    IOException - if an I/O error occurred

getFileLinkInfo

HdfsFileStatus getFileLinkInfo(String src)
               throws org.apache.hadoop.security.AccessControlException, org.apache.hadoop.fs.UnresolvedLinkException, IOException
Get the file info for a specific file or directory; if the final path component is a symlink, the info returned describes the link itself rather than its target.

Parameters:
    src - the string representation of the path to the file
Throws:
    org.apache.hadoop.security.AccessControlException - permission denied
    org.apache.hadoop.fs.UnresolvedLinkException - if src contains a symlink
    IOException - if an I/O error occurred

getContentSummary

org.apache.hadoop.fs.ContentSummary getContentSummary(String path)
                                    throws org.apache.hadoop.security.AccessControlException, FileNotFoundException, org.apache.hadoop.fs.UnresolvedLinkException, IOException
Get the ContentSummary rooted at the specified directory.

Parameters:
    path - the string representation of the path
Throws:
    org.apache.hadoop.security.AccessControlException - permission denied
    FileNotFoundException - if file path is not found
    org.apache.hadoop.fs.UnresolvedLinkException - if path contains a symlink
    IOException - if an I/O error occurred

setQuota

void setQuota(String path, long namespaceQuota, long diskspaceQuota)
     throws org.apache.hadoop.security.AccessControlException, FileNotFoundException, org.apache.hadoop.fs.UnresolvedLinkException, IOException
Set the quota for a directory.

Parameters:
    path - the string representation of the path to the directory
    namespaceQuota - limit on the number of names in the tree rooted at the directory
    diskspaceQuota - limit on the disk space occupied by all the files under this directory

Each quota can take one of three kinds of values: (1) 0 or more sets the quota to that value, (2) FSConstants.QUOTA_DONT_SET implies the quota will not be changed, and (3) FSConstants.QUOTA_RESET implies the quota will be reset. Any other value is a runtime error.

Throws:
    org.apache.hadoop.security.AccessControlException - permission denied
    FileNotFoundException - if file path is not found
    QuotaExceededException - if the directory size is greater than the given quota
    org.apache.hadoop.fs.UnresolvedLinkException - if path contains a symlink
    IOException - if an I/O error occurred

fsync

void fsync(String src, String client)
     throws org.apache.hadoop.security.AccessControlException, FileNotFoundException, org.apache.hadoop.fs.UnresolvedLinkException, IOException
Write all metadata for this file into persistent storage.

Parameters:
    src - the string representation of the path
    client - the string representation of the client
Throws:
    org.apache.hadoop.security.AccessControlException - permission denied
    FileNotFoundException - if file src is not found
    org.apache.hadoop.fs.UnresolvedLinkException - if src contains a symlink
    IOException - if an I/O error occurred

setTimes

void setTimes(String src, long mtime, long atime)
     throws org.apache.hadoop.security.AccessControlException, FileNotFoundException, org.apache.hadoop.fs.UnresolvedLinkException, IOException
Sets the modification and access time of the file to the specified time.

Parameters:
    src - the string representation of the path
    mtime - the number of milliseconds since Jan 1, 1970; setting mtime to -1 means that the modification time should not be set by this call
    atime - the number of milliseconds since Jan 1, 1970; setting atime to -1 means that the access time should not be set by this call
Throws:
    org.apache.hadoop.security.AccessControlException - permission denied
    FileNotFoundException - if file src is not found
    org.apache.hadoop.fs.UnresolvedLinkException - if src contains a symlink
    IOException - if an I/O error occurred

createSymlink

void createSymlink(String target, String link, org.apache.hadoop.fs.permission.FsPermission dirPerm, boolean createParent)
     throws org.apache.hadoop.security.AccessControlException, org.apache.hadoop.fs.FileAlreadyExistsException, FileNotFoundException, org.apache.hadoop.fs.ParentNotDirectoryException, SafeModeException, org.apache.hadoop.fs.UnresolvedLinkException, IOException
Create a symlink to a file or directory.

Parameters:
    target - the path of the destination that the link points to
    link - the path of the link being created
    dirPerm - permissions to use when creating parent directories
    createParent - if true, missing parent directories are created; if false, the parent must exist
Throws:
    org.apache.hadoop.security.AccessControlException - permission denied
    org.apache.hadoop.fs.FileAlreadyExistsException - if file link already exists
    FileNotFoundException - if the parent of link does not exist and createParent is false
    org.apache.hadoop.fs.ParentNotDirectoryException - if the parent of link is not a directory
    org.apache.hadoop.fs.UnresolvedLinkException - if link contains a symlink
    IOException - if an I/O error occurred
    SafeModeException - create not allowed in safemode
getLinkTarget

String getLinkTarget(String path)
       throws org.apache.hadoop.security.AccessControlException, FileNotFoundException, IOException

Return the target of the given symlink.

Parameters:
    path - the path with a link that needs resolution
Throws:
    org.apache.hadoop.security.AccessControlException - permission denied
    FileNotFoundException - if path does not exist
    IOException - if the given path does not refer to a symlink, or an I/O error occurred

updateBlockForPipeline

LocatedBlock updateBlockForPipeline(Block block, String clientName)
             throws IOException
Get a new generation stamp together with an access token for a block under construction. This method is called only when a client needs to recover a failed pipeline or set up a pipeline for appending to a block.

Parameters:
    block - a block
    clientName - the name of the client
Throws:
    IOException - if any error occurs

updatePipeline

void updatePipeline(String clientName, Block oldBlock, Block newBlock, DatanodeID[] newNodes)
     throws IOException
Update a pipeline for a block under construction.

Parameters:
    clientName - the name of the client
    oldBlock - the old block
    newBlock - the new block containing the new generation stamp and length
    newNodes - datanodes in the pipeline
Throws:
    IOException - if any error occurs

getDelegationToken

org.apache.hadoop.security.token.Token<DelegationTokenIdentifier> getDelegationToken(org.apache.hadoop.io.Text renewer)
                                                                  throws IOException
Get a valid delegation token.

Parameters:
    renewer - the designated renewer for the token
Throws:
    IOException

renewDelegationToken

long renewDelegationToken(org.apache.hadoop.security.token.Token<DelegationTokenIdentifier> token)
     throws IOException

Renew an existing delegation token.

Parameters:
    token - the delegation token obtained earlier
Throws:
    IOException

cancelDelegationToken

void cancelDelegationToken(org.apache.hadoop.security.token.Token<DelegationTokenIdentifier> token)
     throws IOException

Cancel an existing delegation token.

Parameters:
    token - the delegation token
Throws:
    IOException