org.apache.hadoop.hdfs
Class DFSClient

java.lang.Object
  extended by org.apache.hadoop.hdfs.DFSClient
All Implemented Interfaces:
Closeable, FSConstants

@InterfaceAudience.Private
public class DFSClient
extends Object
implements FSConstants, Closeable

DFSClient can connect to a Hadoop Filesystem and perform basic file tasks. It uses the ClientProtocol to communicate with a NameNode daemon, and connects directly to DataNodes to read/write block data. Hadoop DFS users should obtain an instance of DistributedFileSystem, which uses DFSClient to handle filesystem tasks.
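For most applications the entry point is DistributedFileSystem, obtained through the FileSystem factory, rather than DFSClient itself (DFSClient is marked @InterfaceAudience.Private). An illustrative sketch; the NameNode URI is a placeholder and a running HDFS cluster is assumed:

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class DfsHello {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // The URI names the NameNode; "namenode:8020" is a placeholder address.
        FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:8020/"), conf);
        System.out.println(fs.exists(new Path("/")));  // the root directory
        fs.close();                                    // releases the underlying DFSClient
    }
}
```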


Nested Class Summary
static class DFSClient.DFSDataInputStream
          The Hdfs implementation of FSDataInputStream
 
Nested classes/interfaces inherited from interface org.apache.hadoop.hdfs.protocol.FSConstants
FSConstants.DatanodeReportType, FSConstants.SafeModeAction, FSConstants.UpgradeAction
 
Field Summary
static org.apache.commons.logging.Log LOG
           
static int MAX_BLOCK_ACQUIRE_FAILURES
           
static long SERVER_DEFAULTS_VALIDITY_PERIOD
           
 
Fields inherited from interface org.apache.hadoop.hdfs.protocol.FSConstants
BLOCK_INVALIDATE_CHUNK, BLOCKREPORT_INITIAL_DELAY, BLOCKREPORT_INTERVAL, BUFFER_SIZE, DEFAULT_BLOCK_SIZE, DEFAULT_BYTES_PER_CHECKSUM, DEFAULT_DATA_SOCKET_SIZE, DEFAULT_FILE_BUFFER_SIZE, DEFAULT_REPLICATION_FACTOR, DEFAULT_WRITE_PACKET_SIZE, HDFS_URI_SCHEME, HEARTBEAT_INTERVAL, LAYOUT_VERSION, LEASE_HARDLIMIT_PERIOD, LEASE_RECOVER_PERIOD, LEASE_SOFTLIMIT_PERIOD, MAX_PATH_DEPTH, MAX_PATH_LENGTH, MIN_BLOCKS_FOR_WRITE, QUOTA_DONT_SET, QUOTA_RESET, SIZE_OF_INTEGER, SMALL_BUFFER_SIZE
 
Constructor Summary
DFSClient(org.apache.hadoop.conf.Configuration conf)
          Deprecated. Deprecated as of 0.21.
DFSClient(InetSocketAddress nameNodeAddr, org.apache.hadoop.conf.Configuration conf)
          Same as this(nameNodeAddr, conf, null);
DFSClient(InetSocketAddress nameNodeAddr, org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.FileSystem.Statistics stats)
          Same as this(nameNodeAddr, null, conf, stats);
 
Method Summary
 void cancelDelegationToken(org.apache.hadoop.security.token.Token<DelegationTokenIdentifier> token)
           
 void close()
          Close the file system, abandoning all leases and files being created, and close connections to the namenode.
 void concat(String trg, String[] srcs)
          Move blocks from the source files to the target file and delete the sources. See ClientProtocol.concat(String, String[]).
 OutputStream create(String src, boolean overwrite)
          Call create(String, boolean, short, long, Progressable) with default replication and blockSize and null progress.
 OutputStream create(String src, boolean overwrite, org.apache.hadoop.util.Progressable progress)
          Call create(String, boolean, short, long, Progressable) with default replication and blockSize.
 OutputStream create(String src, boolean overwrite, short replication, long blockSize)
          Call create(String, boolean, short, long, Progressable) with null progress.
 OutputStream create(String src, boolean overwrite, short replication, long blockSize, org.apache.hadoop.util.Progressable progress)
          Call create(String, boolean, short, long, Progressable, int) with default bufferSize.
 OutputStream create(String src, boolean overwrite, short replication, long blockSize, org.apache.hadoop.util.Progressable progress, int buffersize)
          Call create(String, FsPermission, EnumSet, short, long, Progressable, int) with default permission FsPermission.getDefault().
 OutputStream create(String src, org.apache.hadoop.fs.permission.FsPermission permission, EnumSet<org.apache.hadoop.fs.CreateFlag> flag, boolean createParent, short replication, long blockSize, org.apache.hadoop.util.Progressable progress, int buffersize)
          Create a new dfs file with the specified block replication with write-progress reporting and return an output stream for writing into the file.
 OutputStream create(String src, org.apache.hadoop.fs.permission.FsPermission permission, EnumSet<org.apache.hadoop.fs.CreateFlag> flag, short replication, long blockSize, org.apache.hadoop.util.Progressable progress, int buffersize)
          Call create(String, FsPermission, EnumSet, boolean, short, long, Progressable, int) with createParent set to true.
static ClientProtocol createNamenode(org.apache.hadoop.conf.Configuration conf)
          The locking hierarchy is to first acquire lock on DFSClient object, followed by lock on leasechecker, followed by lock on an individual DFSOutputStream.
static ClientProtocol createNamenode(InetSocketAddress nameNodeAddr, org.apache.hadoop.conf.Configuration conf)
           
 void createSymlink(String target, String link, boolean createParent)
          Creates a symbolic link.
 DatanodeInfo[] datanodeReport(FSConstants.DatanodeReportType type)
           
 boolean delete(String src)
          Deprecated. 
 boolean delete(String src, boolean recursive)
          Delete a file or directory.
 UpgradeStatusReport distributedUpgradeProgress(FSConstants.UpgradeAction action)
           
 boolean exists(String src)
          Implemented using getFileInfo(src)
 void finalizeUpgrade()
           
 org.apache.hadoop.fs.BlockLocation[] getBlockLocations(String src, long start, long length)
          Get block location info about a file. getBlockLocations() returns a list of hostnames that store data for a specific file region.
 long getBlockSize(String f)
           
 long getCorruptBlocksCount()
          Returns count of blocks with at least one replica marked corrupt.
 long getDefaultBlockSize()
          Get the default block size for this cluster
 short getDefaultReplication()
           
 org.apache.hadoop.security.token.Token<DelegationTokenIdentifier> getDelegationToken(org.apache.hadoop.io.Text renewer)
           
 org.apache.hadoop.fs.FsStatus getDiskStatus()
           
 org.apache.hadoop.fs.MD5MD5CRC32FileChecksum getFileChecksum(String src)
          Get the checksum of a file.
static org.apache.hadoop.fs.MD5MD5CRC32FileChecksum getFileChecksum(String src, ClientProtocol namenode, SocketFactory socketFactory, int socketTimeout)
          Get the checksum of a file.
 HdfsFileStatus getFileInfo(String src)
          Get the file info for a specific file or directory.
 HdfsFileStatus getFileLinkInfo(String src)
          Get the file info for a specific file or directory.
 String getLinkTarget(String path)
          Resolve the *first* symlink, if any, in the path.
 long getMissingBlocksCount()
          Returns count of blocks with no good replicas left.
 ClientProtocol getNamenode()
          Get the namenode associated with this DFSClient object
 org.apache.hadoop.fs.FsServerDefaults getServerDefaults()
          Get server default values for a number of configuration params.
 long getUnderReplicatedBlocksCount()
          Returns count of blocks with one or more replicas missing.
 DirectoryListing listPaths(String src, byte[] startAfter)
          Get a partial listing of the indicated directory; no block locations are fetched.
 DirectoryListing listPaths(String src, byte[] startAfter, boolean needLocation)
          Get a partial listing of the indicated directory. Use HdfsFileStatus.EMPTY_NAME as startAfter to fetch a listing starting from the first entry in the directory.
 void metaSave(String pathname)
          Dumps DFS data structures into specified file.
 boolean mkdirs(String src)
          Deprecated. 
 boolean mkdirs(String src, org.apache.hadoop.fs.permission.FsPermission permission, boolean createParent)
          Create a directory (or hierarchy of directories) with the given name and permission.
 DFSInputStream open(String src)
           
 DFSInputStream open(String src, int buffersize, boolean verifyChecksum)
          Create an input stream that obtains a nodelist from the namenode, and then reads from all the right places.
 DFSInputStream open(String src, int buffersize, boolean verifyChecksum, org.apache.hadoop.fs.FileSystem.Statistics stats)
          Deprecated. Use open(String, int, boolean) instead.
 OutputStream primitiveCreate(String src, org.apache.hadoop.fs.permission.FsPermission absPermission, EnumSet<org.apache.hadoop.fs.CreateFlag> flag, boolean createParent, short replication, long blockSize, org.apache.hadoop.util.Progressable progress, int buffersize, int bytesPerChecksum)
          Same as create(String, FsPermission, EnumSet, short, long, Progressable, int) except that the permission is absolute (i.e. it has already been masked against the umask).
 boolean primitiveMkdir(String src, org.apache.hadoop.fs.permission.FsPermission absPermission)
          Same as mkdirs(String, FsPermission, boolean) except that the permission has already been masked against the umask.
 void refreshNodes()
          Refresh the hosts and exclude files.
 boolean rename(String src, String dst)
          Deprecated. Use rename(String, String, Options.Rename...) instead.
 void rename(String src, String dst, org.apache.hadoop.fs.Options.Rename... options)
          Rename file or directory.
 long renewDelegationToken(org.apache.hadoop.security.token.Token<DelegationTokenIdentifier> token)
           
 void reportBadBlocks(LocatedBlock[] blocks)
          Report corrupt blocks that were discovered by the client.
 void setOwner(String src, String username, String groupname)
          Set file or directory owner.
 void setPermission(String src, org.apache.hadoop.fs.permission.FsPermission permission)
          Set permissions to a file or directory.
 boolean setReplication(String src, short replication)
          Set replication for an existing file.
 boolean setSafeMode(FSConstants.SafeModeAction action)
          Enter, leave or get safe mode.
 void setTimes(String src, long mtime, long atime)
          Set the modification and access times of a file.
static String stringifyToken(org.apache.hadoop.security.token.Token<DelegationTokenIdentifier> token)
          A test method for printing out tokens
 String toString()
          
 
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, wait, wait, wait
 

Field Detail

LOG

public static final org.apache.commons.logging.Log LOG

SERVER_DEFAULTS_VALIDITY_PERIOD

public static final long SERVER_DEFAULTS_VALIDITY_PERIOD
See Also:
Constant Field Values

MAX_BLOCK_ACQUIRE_FAILURES

public static final int MAX_BLOCK_ACQUIRE_FAILURES
See Also:
Constant Field Values
Constructor Detail

DFSClient

@Deprecated
public DFSClient(org.apache.hadoop.conf.Configuration conf)
          throws IOException
Deprecated. Deprecated as of 0.21.

Same as this(NameNode.getAddress(conf), conf);

Throws:
IOException
See Also:
DFSClient(InetSocketAddress, Configuration)

DFSClient

public DFSClient(InetSocketAddress nameNodeAddr,
                 org.apache.hadoop.conf.Configuration conf)
          throws IOException
Same as this(nameNodeAddr, conf, null);

Throws:
IOException
See Also:
DFSClient(InetSocketAddress, Configuration, org.apache.hadoop.fs.FileSystem.Statistics)

DFSClient

public DFSClient(InetSocketAddress nameNodeAddr,
                 org.apache.hadoop.conf.Configuration conf,
                 org.apache.hadoop.fs.FileSystem.Statistics stats)
          throws IOException
Same as this(nameNodeAddr, null, conf, stats);

Throws:
IOException
See Also:
DFSClient(InetSocketAddress, ClientProtocol, Configuration, org.apache.hadoop.fs.FileSystem.Statistics)
Method Detail

createNamenode

public static ClientProtocol createNamenode(org.apache.hadoop.conf.Configuration conf)
                                     throws IOException
The locking hierarchy is to first acquire lock on DFSClient object, followed by lock on leasechecker, followed by lock on an individual DFSOutputStream.

Throws:
IOException

createNamenode

public static ClientProtocol createNamenode(InetSocketAddress nameNodeAddr,
                                            org.apache.hadoop.conf.Configuration conf)
                                     throws IOException
Throws:
IOException

close

public void close()
           throws IOException
Close the file system, abandoning all leases and files being created, and close connections to the namenode.

Specified by:
close in interface Closeable
Throws:
IOException

getDefaultBlockSize

public long getDefaultBlockSize()
Get the default block size for this cluster

Returns:
the default block size in bytes

getBlockSize

public long getBlockSize(String f)
                  throws IOException
Throws:
IOException
See Also:
ClientProtocol.getPreferredBlockSize(String)

getServerDefaults

public org.apache.hadoop.fs.FsServerDefaults getServerDefaults()
                                                        throws IOException
Get server default values for a number of configuration params.

Throws:
IOException
See Also:
ClientProtocol.getServerDefaults()

stringifyToken

public static String stringifyToken(org.apache.hadoop.security.token.Token<DelegationTokenIdentifier> token)
                             throws IOException
A test method for printing out tokens

Parameters:
token -
Returns:
String representation of the token
Throws:
IOException

getDelegationToken

public org.apache.hadoop.security.token.Token<DelegationTokenIdentifier> getDelegationToken(org.apache.hadoop.io.Text renewer)
                                                                                     throws IOException
Throws:
IOException
See Also:
ClientProtocol.getDelegationToken(Text)

renewDelegationToken

public long renewDelegationToken(org.apache.hadoop.security.token.Token<DelegationTokenIdentifier> token)
                          throws org.apache.hadoop.security.token.SecretManager.InvalidToken,
                                 IOException
Throws:
org.apache.hadoop.security.token.SecretManager.InvalidToken
IOException
See Also:
ClientProtocol.renewDelegationToken(Token)

cancelDelegationToken

public void cancelDelegationToken(org.apache.hadoop.security.token.Token<DelegationTokenIdentifier> token)
                           throws org.apache.hadoop.security.token.SecretManager.InvalidToken,
                                  IOException
Throws:
org.apache.hadoop.security.token.SecretManager.InvalidToken
IOException
See Also:
ClientProtocol.cancelDelegationToken(Token)
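The three token methods above form a lifecycle: obtain, renew, cancel. A sketch, assuming an already-constructed DFSClient on a secured cluster; the renewer name is a placeholder:

```java
import java.io.IOException;
import org.apache.hadoop.hdfs.DFSClient;
import org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.security.token.Token;

public class TokenLifecycle {
    static void demo(DFSClient client) throws IOException {
        // Obtain a token that the named principal may renew.
        Token<DelegationTokenIdentifier> token =
            client.getDelegationToken(new Text("renewer"));
        // Extend its lifetime; returns the new expiration time.
        long nextExpiry = client.renewDelegationToken(token);
        // Debug print via the static helper documented above.
        System.out.println(DFSClient.stringifyToken(token));
        // Revoke the token once it is no longer needed.
        client.cancelDelegationToken(token);
    }
}
```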

reportBadBlocks

public void reportBadBlocks(LocatedBlock[] blocks)
                     throws IOException
Report corrupt blocks that were discovered by the client.

Throws:
IOException
See Also:
ClientProtocol.reportBadBlocks(LocatedBlock[])

getDefaultReplication

public short getDefaultReplication()

getBlockLocations

public org.apache.hadoop.fs.BlockLocation[] getBlockLocations(String src,
                                                              long start,
                                                              long length)
                                                       throws IOException,
                                                              org.apache.hadoop.fs.UnresolvedLinkException
Get block location info about a file. getBlockLocations() returns a list of hostnames that store data for a specific file region. It returns a set of hostnames for every block within the indicated region. This function is very useful when writing code that considers data placement when performing operations. For example, the MapReduce system tries to schedule tasks on the same machines as the data blocks the tasks process.

Throws:
IOException
org.apache.hadoop.fs.UnresolvedLinkException
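A sketch of how a placement-aware caller might use this method, assuming an existing DFSClient and a path that exists; the 128 MB region length is an arbitrary choice:

```java
import java.io.IOException;
import java.util.Arrays;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.hdfs.DFSClient;

public class WhereAreMyBlocks {
    // Print which hosts store each block in the first 128 MB of a file.
    static void print(DFSClient client, String src) throws IOException {
        BlockLocation[] blocks = client.getBlockLocations(src, 0, 128L << 20);
        for (BlockLocation b : blocks) {
            System.out.println("offset=" + b.getOffset()
                + " len=" + b.getLength()
                + " hosts=" + Arrays.toString(b.getHosts()));
        }
    }
}
```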

open

public DFSInputStream open(String src)
                    throws IOException,
                           org.apache.hadoop.fs.UnresolvedLinkException
Throws:
IOException
org.apache.hadoop.fs.UnresolvedLinkException

open

@Deprecated
public DFSInputStream open(String src,
                           int buffersize,
                           boolean verifyChecksum,
                           org.apache.hadoop.fs.FileSystem.Statistics stats)
                    throws IOException,
                           org.apache.hadoop.fs.UnresolvedLinkException
Deprecated. Use open(String, int, boolean) instead.

Create an input stream that obtains a node list from the namenode, and then reads from all the right places. Creates an inner subclass of InputStream that does the right out-of-band work.

Throws:
IOException
org.apache.hadoop.fs.UnresolvedLinkException

open

public DFSInputStream open(String src,
                           int buffersize,
                           boolean verifyChecksum)
                    throws IOException,
                           org.apache.hadoop.fs.UnresolvedLinkException
Create an input stream that obtains a node list from the namenode, and then reads from all the right places. Creates an inner subclass of InputStream that does the right out-of-band work.

Throws:
IOException
org.apache.hadoop.fs.UnresolvedLinkException
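A minimal read loop over the returned stream, assuming an existing DFSClient; the 4096-byte buffer size is a placeholder and verifyChecksum is left at the safe value of true:

```java
import java.io.IOException;
import org.apache.hadoop.hdfs.DFSClient;
import org.apache.hadoop.hdfs.DFSInputStream;

public class ReadFile {
    // Count the bytes in a file by reading it end to end.
    static long countBytes(DFSClient client, String src) throws IOException {
        DFSInputStream in = client.open(src, 4096, true);
        try {
            byte[] buf = new byte[4096];
            long total = 0;
            int n;
            while ((n = in.read(buf)) > 0) {
                total += n;
            }
            return total;
        } finally {
            in.close();
        }
    }
}
```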

getNamenode

public ClientProtocol getNamenode()
Get the namenode associated with this DFSClient object

Returns:
the namenode associated with this DFSClient object

create

public OutputStream create(String src,
                           boolean overwrite)
                    throws IOException
Call create(String, boolean, short, long, Progressable) with default replication and blockSize and null progress.

Throws:
IOException

create

public OutputStream create(String src,
                           boolean overwrite,
                           org.apache.hadoop.util.Progressable progress)
                    throws IOException
Call create(String, boolean, short, long, Progressable) with default replication and blockSize.

Throws:
IOException

create

public OutputStream create(String src,
                           boolean overwrite,
                           short replication,
                           long blockSize)
                    throws IOException
Call create(String, boolean, short, long, Progressable) with null progress.

Throws:
IOException

create

public OutputStream create(String src,
                           boolean overwrite,
                           short replication,
                           long blockSize,
                           org.apache.hadoop.util.Progressable progress)
                    throws IOException
Call create(String, boolean, short, long, Progressable, int) with default bufferSize.

Throws:
IOException

create

public OutputStream create(String src,
                           boolean overwrite,
                           short replication,
                           long blockSize,
                           org.apache.hadoop.util.Progressable progress,
                           int buffersize)
                    throws IOException
Call create(String, FsPermission, EnumSet, short, long, Progressable, int) with default permission FsPermission.getDefault().

Parameters:
src - File name
overwrite - overwrite an existing file if true
replication - replication factor for the file
blockSize - maximum block size
progress - interface for reporting client progress
buffersize - underlying buffersize
Returns:
output stream
Throws:
IOException

create

public OutputStream create(String src,
                           org.apache.hadoop.fs.permission.FsPermission permission,
                           EnumSet<org.apache.hadoop.fs.CreateFlag> flag,
                           short replication,
                           long blockSize,
                           org.apache.hadoop.util.Progressable progress,
                           int buffersize)
                    throws IOException
Call create(String, FsPermission, EnumSet, boolean, short, long, Progressable, int) with createParent set to true.

Throws:
IOException

create

public OutputStream create(String src,
                           org.apache.hadoop.fs.permission.FsPermission permission,
                           EnumSet<org.apache.hadoop.fs.CreateFlag> flag,
                           boolean createParent,
                           short replication,
                           long blockSize,
                           org.apache.hadoop.util.Progressable progress,
                           int buffersize)
                    throws IOException
Create a new dfs file with the specified block replication with write-progress reporting and return an output stream for writing into the file.

Parameters:
src - File name
permission - The permission of the file being created. If null, use default permission FsPermission.getDefault()
flag - indicates create a new file or create/overwrite an existing file or append to an existing file
createParent - create missing parent directory if true
replication - block replication
blockSize - maximum block size
progress - interface for reporting client progress
buffersize - underlying buffer size
Returns:
output stream
Throws:
IOException
See Also:
ClientProtocol.create for a detailed description of exceptions thrown
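A sketch of the full overload, assuming an existing DFSClient; the permission, replication factor, block size, and buffer size shown are illustrative values, not defaults mandated by the API:

```java
import java.io.IOException;
import java.io.OutputStream;
import java.util.EnumSet;
import org.apache.hadoop.fs.CreateFlag;
import org.apache.hadoop.fs.permission.FsPermission;
import org.apache.hadoop.hdfs.DFSClient;

public class CreateFile {
    static void write(DFSClient client, String src) throws IOException {
        OutputStream out = client.create(
            src,
            FsPermission.getDefault(),                       // permission
            EnumSet.of(CreateFlag.CREATE, CreateFlag.OVERWRITE),
            true,                                            // createParent
            (short) 3,                                       // replication
            64L * 1024 * 1024,                               // block size: 64 MB
            null,                                            // no progress reporting
            4096);                                           // buffer size
        try {
            out.write("hello".getBytes("UTF-8"));
        } finally {
            out.close();                                     // completes the file
        }
    }
}
```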

primitiveCreate

public OutputStream primitiveCreate(String src,
                                    org.apache.hadoop.fs.permission.FsPermission absPermission,
                                    EnumSet<org.apache.hadoop.fs.CreateFlag> flag,
                                    boolean createParent,
                                    short replication,
                                    long blockSize,
                                    org.apache.hadoop.util.Progressable progress,
                                    int buffersize,
                                    int bytesPerChecksum)
                             throws IOException,
                                    org.apache.hadoop.fs.UnresolvedLinkException
Same as create(String, FsPermission, EnumSet, short, long, Progressable, int) except that the permission is absolute (i.e. it has already been masked against the umask).

Throws:
IOException
org.apache.hadoop.fs.UnresolvedLinkException

createSymlink

public void createSymlink(String target,
                          String link,
                          boolean createParent)
                   throws IOException
Creates a symbolic link.

Throws:
IOException
See Also:
ClientProtocol.createSymlink(String, String,FsPermission, boolean)

getLinkTarget

public String getLinkTarget(String path)
                     throws IOException
Resolve the *first* symlink, if any, in the path.

Throws:
IOException
See Also:
ClientProtocol.getLinkTarget(String)

setReplication

public boolean setReplication(String src,
                              short replication)
                       throws IOException
Set replication for an existing file.

Parameters:
src - file name
replication -
Throws:
IOException
See Also:
ClientProtocol.setReplication(String, short)

rename

@Deprecated
public boolean rename(String src,
                      String dst)
               throws IOException
Deprecated. Use rename(String, String, Options.Rename...) instead.

Rename file or directory.

Throws:
IOException
See Also:
ClientProtocol.rename(String, String)

concat

public void concat(String trg,
                   String[] srcs)
            throws IOException
Move blocks from the source files to the target file and delete the sources. See ClientProtocol.concat(String, String[]).

Throws:
IOException

rename

public void rename(String src,
                   String dst,
                   org.apache.hadoop.fs.Options.Rename... options)
            throws IOException
Rename file or directory.

Throws:
IOException
See Also:
ClientProtocol.rename(String, String, Options.Rename...)

delete

@Deprecated
public boolean delete(String src)
               throws IOException
Deprecated. 

Delete file or directory. See ClientProtocol.delete(String).

Throws:
IOException

delete

public boolean delete(String src,
                      boolean recursive)
               throws IOException
Delete a file or directory. Deletes the contents of the directory if it is non-empty and recursive is set to true.

Throws:
IOException
See Also:
ClientProtocol.delete(String, boolean)

exists

public boolean exists(String src)
               throws IOException
Implemented using getFileInfo(src)

Throws:
IOException

listPaths

public DirectoryListing listPaths(String src,
                                  byte[] startAfter)
                           throws IOException
Get a partial listing of the indicated directory; no block locations are fetched.

Throws:
IOException

listPaths

public DirectoryListing listPaths(String src,
                                  byte[] startAfter,
                                  boolean needLocation)
                           throws IOException
Get a partial listing of the indicated directory. Use HdfsFileStatus.EMPTY_NAME as startAfter to fetch a listing starting from the first entry in the directory.

Throws:
IOException
See Also:
ClientProtocol.getListing(String, byte[], boolean)
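Because listPaths returns a partial listing, a caller pages through a large directory by resuming from the last entry of each page. A sketch, assuming an existing DFSClient:

```java
import java.io.IOException;
import org.apache.hadoop.hdfs.DFSClient;
import org.apache.hadoop.hdfs.protocol.DirectoryListing;
import org.apache.hadoop.hdfs.protocol.HdfsFileStatus;

public class ListAll {
    // Print every entry of a directory, one page at a time.
    static void list(DFSClient client, String dir) throws IOException {
        byte[] startAfter = HdfsFileStatus.EMPTY_NAME;  // start at the first entry
        DirectoryListing page;
        do {
            page = client.listPaths(dir, startAfter);
            if (page == null) {
                break;                                  // directory does not exist
            }
            for (HdfsFileStatus f : page.getPartialListing()) {
                System.out.println(f.getLocalName());
            }
            startAfter = page.getLastName();            // resume point for next page
        } while (page.hasMore());
    }
}
```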

getFileInfo

public HdfsFileStatus getFileInfo(String src)
                           throws IOException
Get the file info for a specific file or directory.

Parameters:
src - The string representation of the path to the file
Returns:
object containing information regarding the file or null if file not found
Throws:
IOException
See Also:
ClientProtocol.getFileInfo(String) for a description of exceptions

getFileLinkInfo

public HdfsFileStatus getFileLinkInfo(String src)
                               throws IOException
Get the file info for a specific file or directory. If src refers to a symlink then the FileStatus of the link is returned.

Parameters:
src - path to a file or directory
Throws:
IOException
See Also:
ClientProtocol.getFileLinkInfo(String)

getFileChecksum

public org.apache.hadoop.fs.MD5MD5CRC32FileChecksum getFileChecksum(String src)
                                                             throws IOException
Get the checksum of a file.

Parameters:
src - The file path
Returns:
The checksum
Throws:
IOException
See Also:
DistributedFileSystem.getFileChecksum(Path)

getFileChecksum

public static org.apache.hadoop.fs.MD5MD5CRC32FileChecksum getFileChecksum(String src,
                                                                           ClientProtocol namenode,
                                                                           SocketFactory socketFactory,
                                                                           int socketTimeout)
                                                                    throws IOException
Get the checksum of a file.

Parameters:
src - The file path
Returns:
The checksum
Throws:
IOException

setPermission

public void setPermission(String src,
                          org.apache.hadoop.fs.permission.FsPermission permission)
                   throws IOException
Set permissions to a file or directory.

Parameters:
src - path name.
permission -
Throws:
IOException
See Also:
ClientProtocol.setPermission(String, FsPermission)

setOwner

public void setOwner(String src,
                     String username,
                     String groupname)
              throws IOException
Set file or directory owner.

Parameters:
src - path name.
username - user id.
groupname - user group.
Throws:
IOException
See Also:
ClientProtocol.setOwner(String, String, String)

getDiskStatus

public org.apache.hadoop.fs.FsStatus getDiskStatus()
                                            throws IOException
Throws:
IOException
See Also:
ClientProtocol.getStats()

getMissingBlocksCount

public long getMissingBlocksCount()
                           throws IOException
Returns count of blocks with no good replicas left. Normally should be zero.

Throws:
IOException

getUnderReplicatedBlocksCount

public long getUnderReplicatedBlocksCount()
                                   throws IOException
Returns count of blocks with one or more replicas missing.

Throws:
IOException

getCorruptBlocksCount

public long getCorruptBlocksCount()
                           throws IOException
Returns count of blocks with at least one replica marked corrupt.

Throws:
IOException

datanodeReport

public DatanodeInfo[] datanodeReport(FSConstants.DatanodeReportType type)
                              throws IOException
Throws:
IOException

setSafeMode

public boolean setSafeMode(FSConstants.SafeModeAction action)
                    throws IOException
Enter, leave or get safe mode.

Throws:
IOException
See Also:
ClientProtocol.setSafeMode(FSConstants.SafeModeAction)
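The same method serves as both query and command, selected by the action argument. A sketch of an administrative check-then-leave sequence, assuming an existing DFSClient with sufficient privileges:

```java
import java.io.IOException;
import org.apache.hadoop.hdfs.DFSClient;
import org.apache.hadoop.hdfs.protocol.FSConstants;

public class SafeModeCheck {
    static void leaveIfInSafeMode(DFSClient client) throws IOException {
        // SAFEMODE_GET only queries the current state.
        boolean inSafeMode = client.setSafeMode(FSConstants.SafeModeAction.SAFEMODE_GET);
        if (inSafeMode) {
            // SAFEMODE_LEAVE asks the namenode to exit safe mode.
            client.setSafeMode(FSConstants.SafeModeAction.SAFEMODE_LEAVE);
        }
    }
}
```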

refreshNodes

public void refreshNodes()
                  throws IOException
Refresh the hosts and exclude files. (Rereads them.) See ClientProtocol.refreshNodes() for more details.

Throws:
IOException
See Also:
ClientProtocol.refreshNodes()

metaSave

public void metaSave(String pathname)
              throws IOException
Dumps DFS data structures into specified file.

Throws:
IOException
See Also:
ClientProtocol.metaSave(String)

finalizeUpgrade

public void finalizeUpgrade()
                     throws IOException
Throws:
IOException
See Also:
ClientProtocol.finalizeUpgrade()

distributedUpgradeProgress

public UpgradeStatusReport distributedUpgradeProgress(FSConstants.UpgradeAction action)
                                               throws IOException
Throws:
IOException
See Also:
ClientProtocol.distributedUpgradeProgress(FSConstants.UpgradeAction)

mkdirs

@Deprecated
public boolean mkdirs(String src)
               throws IOException
Deprecated. 

Throws:
IOException

mkdirs

public boolean mkdirs(String src,
                      org.apache.hadoop.fs.permission.FsPermission permission,
                      boolean createParent)
               throws IOException
Create a directory (or hierarchy of directories) with the given name and permission.

Parameters:
src - The path of the directory being created
permission - The permission of the directory being created. If permission == null, use FsPermission.getDefault().
createParent - create missing parent directory if true
Returns:
True if the operation succeeds.
Throws:
IOException
See Also:
ClientProtocol.mkdirs(String, FsPermission, boolean)

primitiveMkdir

public boolean primitiveMkdir(String src,
                              org.apache.hadoop.fs.permission.FsPermission absPermission)
                       throws IOException
Same as mkdirs(String, FsPermission, boolean) except that the permission has already been masked against the umask.

Throws:
IOException

setTimes

public void setTimes(String src,
                     long mtime,
                     long atime)
              throws IOException
Set the modification and access times of a file.

Throws:
IOException
See Also:
ClientProtocol.setTimes(String, long, long)

toString

public String toString()

Overrides:
toString in class Object


Copyright © 2009 The Apache Software Foundation