org.apache.hadoop.dfs
Class NameNode

java.lang.Object
  extended by org.apache.hadoop.dfs.NameNode
All Implemented Interfaces:
org.apache.hadoop.dfs.ClientProtocol, org.apache.hadoop.dfs.DatanodeProtocol, org.apache.hadoop.dfs.FSConstants

public class NameNode
extends Object
implements org.apache.hadoop.dfs.ClientProtocol, org.apache.hadoop.dfs.DatanodeProtocol, org.apache.hadoop.dfs.FSConstants

NameNode serves as both the directory namespace manager and the "inode table" for the Hadoop DFS. There is a single NameNode running in any DFS deployment (except when a second backup/failover NameNode is present).

The NameNode controls two critical tables:
  1) filename -> blocksequence (the namespace)
  2) block -> machinelist (the "inodes")
The first table is stored on disk and is very precious. The second table is rebuilt every time the NameNode comes up.

'NameNode' refers both to this class and to the NameNode server. The 'FSNamesystem' class actually performs most of the filesystem management. The majority of the 'NameNode' class itself is concerned with exposing the IPC interface to the outside world, plus some configuration management.

NameNode implements the ClientProtocol interface, which allows clients to ask for DFS services. ClientProtocol is not designed for direct use by authors of DFS client code; end users should instead use the org.apache.hadoop.fs.FileSystem class.

NameNode also implements the DatanodeProtocol interface, used by the DataNode programs that actually store DFS data blocks. These methods are invoked repeatedly and automatically by all the DataNodes in a DFS deployment.

Author:
Mike Cafarella
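
Because the class itself is mostly the IPC/server wrapper around FSNamesystem, it can be embedded and run directly. The sketch below uses only the constructors and methods documented on this page; the Configuration import path, the name directory, and the port number are placeholder assumptions, not values taken from this documentation.

    import java.io.File;
    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;   // assumed location of Configuration
    import org.apache.hadoop.dfs.NameNode;

    public class NameNodeLauncher {
        public static void main(String[] args) throws IOException {
            Configuration conf = new Configuration();

            // format() destroys any filesystem already at the configured location,
            // so it is only appropriate when bootstrapping a brand-new DFS.
            NameNode.format(conf);

            // Start the server at an explicit directory and port; the one-argument
            // constructor would instead use the default location from 'conf'.
            NameNode namenode = new NameNode(new File("/tmp/dfs/name"), 9000, conf);

            // Wait for the service to finish (normally, it runs forever).
            namenode.join();
        }
    }

In a real deployment the DataNodes and clients then reach this server over IPC; the per-method sketches further down simply assume such a handle is available as 'namenode'.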

Field Summary
static int BLOCK_SIZE
static long BLOCKREPORT_INTERVAL
static int BUFFER_SIZE
static byte CHUNKED_ENCODING
static int COMPLETE_SUCCESS
static long DATANODE_STARTUP_PERIOD
static long EXPIRE_INTERVAL
static long HEARTBEAT_INTERVAL
static long LEASE_PERIOD
static Logger LOG
static int MIN_BLOCKS_FOR_WRITE
static byte OP_ACK
static byte OP_BLOCKRECEIVED
static byte OP_BLOCKREPORT
static byte OP_CLIENT_ABANDONBLOCK
static byte OP_CLIENT_ABANDONBLOCK_ACK
static byte OP_CLIENT_ADDBLOCK
static byte OP_CLIENT_ADDBLOCK_ACK
static byte OP_CLIENT_COMPLETEFILE
static byte OP_CLIENT_COMPLETEFILE_ACK
static byte OP_CLIENT_DATANODE_HINTS
static byte OP_CLIENT_DATANODE_HINTS_ACK
static byte OP_CLIENT_DATANODEREPORT
static byte OP_CLIENT_DATANODEREPORT_ACK
static byte OP_CLIENT_DELETE
static byte OP_CLIENT_DELETE_ACK
static byte OP_CLIENT_EXISTS
static byte OP_CLIENT_EXISTS_ACK
static byte OP_CLIENT_ISDIR
static byte OP_CLIENT_ISDIR_ACK
static byte OP_CLIENT_LISTING
static byte OP_CLIENT_LISTING_ACK
static byte OP_CLIENT_MKDIRS
static byte OP_CLIENT_MKDIRS_ACK
static byte OP_CLIENT_OBTAINLOCK
static byte OP_CLIENT_OBTAINLOCK_ACK
static byte OP_CLIENT_OPEN
static byte OP_CLIENT_OPEN_ACK
static byte OP_CLIENT_RAWSTATS
static byte OP_CLIENT_RAWSTATS_ACK
static byte OP_CLIENT_RELEASELOCK
static byte OP_CLIENT_RELEASELOCK_ACK
static byte OP_CLIENT_RENAMETO
static byte OP_CLIENT_RENAMETO_ACK
static byte OP_CLIENT_RENEW_LEASE
static byte OP_CLIENT_RENEW_LEASE_ACK
static byte OP_CLIENT_STARTFILE
static byte OP_CLIENT_STARTFILE_ACK
static byte OP_CLIENT_TRYAGAIN
static byte OP_ERROR
static byte OP_FAILURE
static byte OP_HEARTBEAT
static byte OP_INVALIDATE_BLOCKS
static byte OP_READ_BLOCK
static byte OP_READSKIP_BLOCK
static byte OP_TRANSFERBLOCKS
static byte OP_TRANSFERDATA
static byte OP_WRITE_BLOCK
static int OPERATION_FAILED
static int READ_TIMEOUT
static byte RUNLENGTH_ENCODING
static int STILL_WAITING
static long WRITE_COMPLETE
 
Constructor Summary
NameNode(Configuration conf)
          Create a NameNode at the default location
NameNode(File dir, int port, Configuration conf)
          Create a NameNode at the specified location and start it.
 
Method Summary
 void abandonBlock(org.apache.hadoop.dfs.Block b, String src)
          The client needs to give up on the block.
 void abandonFileInProgress(String src)
          A client that wants to abandon writing to the current file should call abandonFileInProgress().
 org.apache.hadoop.dfs.LocatedBlock addBlock(String src, String clientMachine)
          A client that wants to write an additional block to the indicated filename (which must currently be open for writing) should call addBlock().
 void blockReceived(String sender, org.apache.hadoop.dfs.Block[] blocks)
          blockReceived() allows the DataNode to tell the NameNode about recently-received block data.
 org.apache.hadoop.dfs.Block[] blockReport(String sender, org.apache.hadoop.dfs.Block[] blocks)
          blockReport() tells the NameNode about all the locally-stored blocks.
 boolean complete(String src, String clientName)
          The client is done writing data to the given filename, and would like to complete it.
 org.apache.hadoop.dfs.LocatedBlock create(String src, String clientName, String clientMachine, boolean overwrite)
          Create a new file.
 boolean delete(String src)
          Remove the given filename from the filesystem
 void errorReport(String sender, String msg)
          errorReport() tells the NameNode about something that has gone awry.
 boolean exists(String src)
          Check whether the given file exists
static void format(Configuration conf)
          Format a new filesystem.
 org.apache.hadoop.dfs.BlockCommand getBlockwork(String sender, int xmitsInProgress)
          Return a block-oriented command for the datanode to execute.
 org.apache.hadoop.dfs.DatanodeInfo[] getDatanodeReport()
          Get a full report on the system's current datanodes.
 String[][] getHints(String src, long start, long len)
          getHints() returns a list of hostnames that store data for a specific file region.
 org.apache.hadoop.dfs.DFSFileInfo[] getListing(String src)
          Get a listing of the indicated directory
 long[] getStats()
          Get a set of statistics about the filesystem.
 boolean isDir(String src)
          Check whether the given filename is a directory or not.
 void join()
          Wait for service to finish.
static void main(String[] argv)
           
 boolean mkdirs(String src)
          Create a directory (or hierarchy of directories) with the given name.
 boolean obtainLock(String src, String clientName, boolean exclusive)
          obtainLock() is used for lock management.
 org.apache.hadoop.dfs.LocatedBlock[] open(String src)
          Open an existing file, at the given name.
 boolean releaseLock(String src, String clientName)
          releaseLock() is called if the client would like to release a held lock.
 boolean rename(String src, String dst)
          Rename an item in the fs namespace
 void renewLease(String clientName)
          Client programs can cause stateful changes in the NameNode that affect other clients.
 void reportWrittenBlock(org.apache.hadoop.dfs.LocatedBlock lb)
          The client can report a set of blocks that it has written.
 void sendHeartbeat(String sender, long capacity, long remaining)
          sendHeartbeat() tells the NameNode that the DataNode is still alive and well.
 
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
 

Field Detail

LOG

public static final Logger LOG

BLOCK_SIZE

public static final int BLOCK_SIZE
See Also:
Constant Field Values

MIN_BLOCKS_FOR_WRITE

public static final int MIN_BLOCKS_FOR_WRITE
See Also:
Constant Field Values

WRITE_COMPLETE

public static final long WRITE_COMPLETE
See Also:
Constant Field Values

OP_ERROR

public static final byte OP_ERROR
See Also:
Constant Field Values

OP_HEARTBEAT

public static final byte OP_HEARTBEAT
See Also:
Constant Field Values

OP_BLOCKRECEIVED

public static final byte OP_BLOCKRECEIVED
See Also:
Constant Field Values

OP_BLOCKREPORT

public static final byte OP_BLOCKREPORT
See Also:
Constant Field Values

OP_TRANSFERDATA

public static final byte OP_TRANSFERDATA
See Also:
Constant Field Values

OP_CLIENT_OPEN

public static final byte OP_CLIENT_OPEN
See Also:
Constant Field Values

OP_CLIENT_STARTFILE

public static final byte OP_CLIENT_STARTFILE
See Also:
Constant Field Values

OP_CLIENT_ADDBLOCK

public static final byte OP_CLIENT_ADDBLOCK
See Also:
Constant Field Values

OP_CLIENT_RENAMETO

public static final byte OP_CLIENT_RENAMETO
See Also:
Constant Field Values

OP_CLIENT_DELETE

public static final byte OP_CLIENT_DELETE
See Also:
Constant Field Values

OP_CLIENT_COMPLETEFILE

public static final byte OP_CLIENT_COMPLETEFILE
See Also:
Constant Field Values

OP_CLIENT_LISTING

public static final byte OP_CLIENT_LISTING
See Also:
Constant Field Values

OP_CLIENT_OBTAINLOCK

public static final byte OP_CLIENT_OBTAINLOCK
See Also:
Constant Field Values

OP_CLIENT_RELEASELOCK

public static final byte OP_CLIENT_RELEASELOCK
See Also:
Constant Field Values

OP_CLIENT_EXISTS

public static final byte OP_CLIENT_EXISTS
See Also:
Constant Field Values

OP_CLIENT_ISDIR

public static final byte OP_CLIENT_ISDIR
See Also:
Constant Field Values

OP_CLIENT_MKDIRS

public static final byte OP_CLIENT_MKDIRS
See Also:
Constant Field Values

OP_CLIENT_RENEW_LEASE

public static final byte OP_CLIENT_RENEW_LEASE
See Also:
Constant Field Values

OP_CLIENT_ABANDONBLOCK

public static final byte OP_CLIENT_ABANDONBLOCK
See Also:
Constant Field Values

OP_CLIENT_RAWSTATS

public static final byte OP_CLIENT_RAWSTATS
See Also:
Constant Field Values

OP_CLIENT_DATANODEREPORT

public static final byte OP_CLIENT_DATANODEREPORT
See Also:
Constant Field Values

OP_CLIENT_DATANODE_HINTS

public static final byte OP_CLIENT_DATANODE_HINTS
See Also:
Constant Field Values

OP_ACK

public static final byte OP_ACK
See Also:
Constant Field Values

OP_TRANSFERBLOCKS

public static final byte OP_TRANSFERBLOCKS
See Also:
Constant Field Values

OP_INVALIDATE_BLOCKS

public static final byte OP_INVALIDATE_BLOCKS
See Also:
Constant Field Values

OP_FAILURE

public static final byte OP_FAILURE
See Also:
Constant Field Values

OP_CLIENT_OPEN_ACK

public static final byte OP_CLIENT_OPEN_ACK
See Also:
Constant Field Values

OP_CLIENT_STARTFILE_ACK

public static final byte OP_CLIENT_STARTFILE_ACK
See Also:
Constant Field Values

OP_CLIENT_ADDBLOCK_ACK

public static final byte OP_CLIENT_ADDBLOCK_ACK
See Also:
Constant Field Values

OP_CLIENT_RENAMETO_ACK

public static final byte OP_CLIENT_RENAMETO_ACK
See Also:
Constant Field Values

OP_CLIENT_DELETE_ACK

public static final byte OP_CLIENT_DELETE_ACK
See Also:
Constant Field Values

OP_CLIENT_COMPLETEFILE_ACK

public static final byte OP_CLIENT_COMPLETEFILE_ACK
See Also:
Constant Field Values

OP_CLIENT_TRYAGAIN

public static final byte OP_CLIENT_TRYAGAIN
See Also:
Constant Field Values

OP_CLIENT_LISTING_ACK

public static final byte OP_CLIENT_LISTING_ACK
See Also:
Constant Field Values

OP_CLIENT_OBTAINLOCK_ACK

public static final byte OP_CLIENT_OBTAINLOCK_ACK
See Also:
Constant Field Values

OP_CLIENT_RELEASELOCK_ACK

public static final byte OP_CLIENT_RELEASELOCK_ACK
See Also:
Constant Field Values

OP_CLIENT_EXISTS_ACK

public static final byte OP_CLIENT_EXISTS_ACK
See Also:
Constant Field Values

OP_CLIENT_ISDIR_ACK

public static final byte OP_CLIENT_ISDIR_ACK
See Also:
Constant Field Values

OP_CLIENT_MKDIRS_ACK

public static final byte OP_CLIENT_MKDIRS_ACK
See Also:
Constant Field Values

OP_CLIENT_RENEW_LEASE_ACK

public static final byte OP_CLIENT_RENEW_LEASE_ACK
See Also:
Constant Field Values

OP_CLIENT_ABANDONBLOCK_ACK

public static final byte OP_CLIENT_ABANDONBLOCK_ACK
See Also:
Constant Field Values

OP_CLIENT_RAWSTATS_ACK

public static final byte OP_CLIENT_RAWSTATS_ACK
See Also:
Constant Field Values

OP_CLIENT_DATANODEREPORT_ACK

public static final byte OP_CLIENT_DATANODEREPORT_ACK
See Also:
Constant Field Values

OP_CLIENT_DATANODE_HINTS_ACK

public static final byte OP_CLIENT_DATANODE_HINTS_ACK
See Also:
Constant Field Values

OP_WRITE_BLOCK

public static final byte OP_WRITE_BLOCK
See Also:
Constant Field Values

OP_READ_BLOCK

public static final byte OP_READ_BLOCK
See Also:
Constant Field Values

OP_READSKIP_BLOCK

public static final byte OP_READSKIP_BLOCK
See Also:
Constant Field Values

RUNLENGTH_ENCODING

public static final byte RUNLENGTH_ENCODING
See Also:
Constant Field Values

CHUNKED_ENCODING

public static final byte CHUNKED_ENCODING
See Also:
Constant Field Values

OPERATION_FAILED

public static final int OPERATION_FAILED
See Also:
Constant Field Values

STILL_WAITING

public static final int STILL_WAITING
See Also:
Constant Field Values

COMPLETE_SUCCESS

public static final int COMPLETE_SUCCESS
See Also:
Constant Field Values

HEARTBEAT_INTERVAL

public static final long HEARTBEAT_INTERVAL
See Also:
Constant Field Values

EXPIRE_INTERVAL

public static final long EXPIRE_INTERVAL
See Also:
Constant Field Values

BLOCKREPORT_INTERVAL

public static final long BLOCKREPORT_INTERVAL
See Also:
Constant Field Values

DATANODE_STARTUP_PERIOD

public static final long DATANODE_STARTUP_PERIOD
See Also:
Constant Field Values

LEASE_PERIOD

public static final long LEASE_PERIOD
See Also:
Constant Field Values

READ_TIMEOUT

public static final int READ_TIMEOUT
See Also:
Constant Field Values

BUFFER_SIZE

public static final int BUFFER_SIZE

Constructor Detail

NameNode

public NameNode(Configuration conf)
         throws IOException
Create a NameNode at the default location


NameNode

public NameNode(File dir,
                int port,
                Configuration conf)
         throws IOException
Create a NameNode at the specified location and start it.

Method Detail

format

public static void format(Configuration conf)
                   throws IOException
Format a new filesystem. Destroys any filesystem that may already exist at this location.

Throws:
IOException

join

public void join()
Wait for service to finish. (Normally, it runs forever.)


open

public org.apache.hadoop.dfs.LocatedBlock[] open(String src)
                                          throws IOException
Description copied from interface: org.apache.hadoop.dfs.ClientProtocol
Open an existing file, at the given name. Returns block and DataNode info. The client will then have to contact each indicated DataNode to obtain the actual data. There is no need to call close() or any other function after calling open().

Specified by:
open in interface org.apache.hadoop.dfs.ClientProtocol
Throws:
IOException
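
A minimal read-side sketch of the contract above, assuming 'namenode' is a reachable NameNode (or any ClientProtocol handle) and the path is illustrative; the per-block accessors of LocatedBlock are not documented on this page, so the actual data fetch is left as a comment.

    org.apache.hadoop.dfs.LocatedBlock[] blocks = namenode.open("/user/alice/data.txt");
    // One entry per block, each pairing a block with the DataNodes that hold it.
    // The client contacts those DataNodes directly to read the bytes; no close()
    // or other call is needed after open().
    System.out.println("file has " + blocks.length + " block(s)");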

create

public org.apache.hadoop.dfs.LocatedBlock create(String src,
                                                 String clientName,
                                                 String clientMachine,
                                                 boolean overwrite)
                                          throws IOException
Description copied from interface: org.apache.hadoop.dfs.ClientProtocol
Create a new file. Get back block and datanode info, which describes where the first block should be written. Successfully calling this method prevents any other client from creating a file under the given name, but the caller must invoke complete() for the file to be added to the filesystem. Blocks have a maximum size. Clients that intend to create multi-block files must also use reportWrittenBlock() and addBlock().

Specified by:
create in interface org.apache.hadoop.dfs.ClientProtocol
Throws:
IOException
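
A client-side sketch of the multi-block write handshake described above, again assuming a 'namenode' handle; the path, client name, and client machine are illustrative, and pushing the actual bytes to the DataNodes named in each LocatedBlock is elided.

    // assumes: import org.apache.hadoop.dfs.LocatedBlock; import org.apache.hadoop.dfs.NameNode;
    static void writeTwoBlockFile(NameNode namenode) throws IOException {
        String src = "/user/alice/part-0";                 // illustrative path
        LocatedBlock first = namenode.create(src, "alice-client", "alice-host", true);
        // ... push the first block's bytes to the DataNodes listed in 'first' ...
        namenode.reportWrittenBlock(first);

        // Ask for an additional block; a null response means the NameNode could
        // not allocate one, and the caller should simply try again.
        LocatedBlock second;
        do {
            second = namenode.addBlock(src, "alice-host");
        } while (second == null);
        // ... push the second block's bytes, then report it as well ...
        namenode.reportWrittenBlock(second);

        // complete() stays false until the blocks are minimally replicated,
        // so poll until the file has really been closed.
        while (!namenode.complete(src, "alice-client")) {
            // optionally back off between attempts
        }
    }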

addBlock

public org.apache.hadoop.dfs.LocatedBlock addBlock(String src,
                                                   String clientMachine)
                                            throws IOException
Description copied from interface: org.apache.hadoop.dfs.ClientProtocol
A client that wants to write an additional block to the indicated filename (which must currently be open for writing) should call addBlock(). addBlock() returns block and datanode info, just like the initial call to create(). A null response means the NameNode could not allocate a block, and that the caller should try again.

Specified by:
addBlock in interface org.apache.hadoop.dfs.ClientProtocol
Throws:
IOException

reportWrittenBlock

public void reportWrittenBlock(org.apache.hadoop.dfs.LocatedBlock lb)
                        throws IOException
The client can report a set of blocks that it has written. These blocks are reported via the client instead of the datanode to prevent weird heartbeat race conditions.

Specified by:
reportWrittenBlock in interface org.apache.hadoop.dfs.ClientProtocol
Throws:
IOException

abandonBlock

public void abandonBlock(org.apache.hadoop.dfs.Block b,
                         String src)
                  throws IOException
The client needs to give up on the block.

Specified by:
abandonBlock in interface org.apache.hadoop.dfs.ClientProtocol
Throws:
IOException

abandonFileInProgress

public void abandonFileInProgress(String src)
                           throws IOException
Description copied from interface: org.apache.hadoop.dfs.ClientProtocol
A client that wants to abandon writing to the current file should call abandonFileInProgress(). After this call, any client can call create() to obtain the filename. Any blocks that have been written for the file will be garbage-collected.

Specified by:
abandonFileInProgress in interface org.apache.hadoop.dfs.ClientProtocol
Throws:
IOException

complete

public boolean complete(String src,
                        String clientName)
                 throws IOException
Description copied from interface: org.apache.hadoop.dfs.ClientProtocol
The client is done writing data to the given filename, and would like to complete it. The function returns whether the file has been closed successfully. If the function returns false, the caller should try again. A call to complete() will not return true until all the file's blocks have been replicated the minimum number of times. Thus, DataNode failures may cause a client to call complete() several times before succeeding.

Specified by:
complete in interface org.apache.hadoop.dfs.ClientProtocol
Throws:
IOException

getHints

public String[][] getHints(String src,
                           long start,
                           long len)
                    throws IOException
Description copied from interface: org.apache.hadoop.dfs.ClientProtocol
getHints() returns a list of hostnames that store data for a specific file region. It returns a set of hostnames for every block within the indicated region. This function is very useful when writing code that considers data-placement when performing operations. For example, the MapReduce system tries to schedule tasks on the same machines as the data-block the task processes.

Specified by:
getHints in interface org.apache.hadoop.dfs.ClientProtocol
Throws:
IOException
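
For example, a scheduler could print where the first 64 MB of an (illustrative) input file lives before assigning work, along the lines of:

    String[][] hints = namenode.getHints("/user/alice/input.txt", 0L, 64L * 1024 * 1024);
    for (int b = 0; b < hints.length; b++) {
        System.out.print("block " + b + " is stored on:");
        for (int h = 0; h < hints[b].length; h++) {
            System.out.print(" " + hints[b][h]);
        }
        System.out.println();
    }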

rename

public boolean rename(String src,
                      String dst)
               throws IOException
Description copied from interface: org.apache.hadoop.dfs.ClientProtocol
Rename an item in the fs namespace

Specified by:
rename in interface org.apache.hadoop.dfs.ClientProtocol
Throws:
IOException

delete

public boolean delete(String src)
               throws IOException
Description copied from interface: org.apache.hadoop.dfs.ClientProtocol
Remove the given filename from the filesystem

Specified by:
delete in interface org.apache.hadoop.dfs.ClientProtocol
Throws:
IOException

exists

public boolean exists(String src)
               throws IOException
Description copied from interface: org.apache.hadoop.dfs.ClientProtocol
Check whether the given file exists

Specified by:
exists in interface org.apache.hadoop.dfs.ClientProtocol
Throws:
IOException

isDir

public boolean isDir(String src)
              throws IOException
Description copied from interface: org.apache.hadoop.dfs.ClientProtocol
Check whether the given filename is a directory or not.

Specified by:
isDir in interface org.apache.hadoop.dfs.ClientProtocol
Throws:
IOException

mkdirs

public boolean mkdirs(String src)
               throws IOException
Description copied from interface: org.apache.hadoop.dfs.ClientProtocol
Create a directory (or hierarchy of directories) with the given name.

Specified by:
mkdirs in interface org.apache.hadoop.dfs.ClientProtocol
Throws:
IOException
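
The namespace calls above compose in the obvious way. A small sketch with an illustrative path (each call can throw IOException; getListing() is described further below):

    String dir = "/user/alice/reports";
    if (!namenode.exists(dir)) {
        namenode.mkdirs(dir);                            // creates the parent hierarchy too
    }
    if (namenode.isDir(dir)) {
        org.apache.hadoop.dfs.DFSFileInfo[] listing = namenode.getListing(dir);
        System.out.println(dir + " contains " + listing.length + " entries");
    }
    namenode.rename(dir + "/draft.txt", dir + "/final.txt");
    namenode.delete(dir + "/obsolete.txt");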

obtainLock

public boolean obtainLock(String src,
                          String clientName,
                          boolean exclusive)
                   throws IOException
Description copied from interface: org.apache.hadoop.dfs.ClientProtocol
obtainLock() is used for lock management. It returns true if the lock has been seized correctly. It returns false if the lock could not be obtained, and the client should try again. Locking is a part of most filesystems and is useful for a number of inter-process synchronization tasks.

Specified by:
obtainLock in interface org.apache.hadoop.dfs.ClientProtocol
Throws:
IOException

releaseLock

public boolean releaseLock(String src,
                           String clientName)
                    throws IOException
Description copied from interface: org.apache.hadoop.dfs.ClientProtocol
releaseLock() is called if the client would like to release a held lock. It returns true if the lock is correctly released. It returns false if the client should wait and try again.

Specified by:
releaseLock in interface org.apache.hadoop.dfs.ClientProtocol
Throws:
IOException
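
Both lock calls can ask the caller to wait and retry, so client code typically loops on them. A sketch with illustrative names (IOException and InterruptedException handling omitted):

    String src = "/user/alice/shared-state";
    // Keep trying until the exclusive lock has actually been seized.
    while (!namenode.obtainLock(src, "alice-client", true)) {
        Thread.sleep(1000);
    }
    try {
        // ... do the work that required exclusive access ...
    } finally {
        // releaseLock() may also ask the caller to wait and try again.
        while (!namenode.releaseLock(src, "alice-client")) {
            Thread.sleep(1000);
        }
    }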

renewLease

public void renewLease(String clientName)
                throws IOException
Description copied from interface: org.apache.hadoop.dfs.ClientProtocol
Client programs can cause stateful changes in the NameNode that affect other clients. A client may obtain a file and neither abandon nor complete it. A client might hold a series of locks that prevent other clients from proceeding. Clearly, it would be bad if a client held a bunch of locks that it never gave up. This can happen easily if the client dies unexpectedly. So, the NameNode will revoke the locks and live file-creates for clients that it thinks have died. A client tells the NameNode that it is still alive by periodically calling renewLease(). If a certain amount of time passes since the last call to renewLease(), the NameNode assumes the client has died.

Specified by:
renewLease in interface org.apache.hadoop.dfs.ClientProtocol
Throws:
IOException
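
In practice this means a client that holds locks or open file-creates keeps a background thread pinging the NameNode well inside the lease period. A sketch, with an arbitrary renewal interval and exception handling omitted ('namenode' must be a final local or a field for the inner class to see it):

    Thread leaseRenewer = new Thread(new Runnable() {
        public void run() {
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    namenode.renewLease("alice-client");   // "I am still alive"
                    Thread.sleep(3000);                    // well under the lease period
                } catch (Exception e) {
                    break;                                 // stop renewing on failure
                }
            }
        }
    });
    leaseRenewer.setDaemon(true);
    leaseRenewer.start();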

getListing

public org.apache.hadoop.dfs.DFSFileInfo[] getListing(String src)
                                               throws IOException
Description copied from interface: org.apache.hadoop.dfs.ClientProtocol
Get a listing of the indicated directory

Specified by:
getListing in interface org.apache.hadoop.dfs.ClientProtocol
Throws:
IOException

getStats

public long[] getStats()
                throws IOException
Description copied from interface: org.apache.hadoop.dfs.ClientProtocol
Get a set of statistics about the filesystem. Right now, only two values are returned. [0] contains the total storage capacity of the system, in bytes. [1] contains the available storage of the system, in bytes.

Specified by:
getStats in interface org.apache.hadoop.dfs.ClientProtocol
Throws:
IOException
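
The two slots can be unpacked directly; a sketch:

    long[] stats = namenode.getStats();
    long capacityBytes  = stats[0];    // total storage capacity of the system
    long availableBytes = stats[1];    // storage still available
    System.out.println("DFS usage: " + (capacityBytes - availableBytes)
        + " of " + capacityBytes + " bytes");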

getDatanodeReport

public org.apache.hadoop.dfs.DatanodeInfo[] getDatanodeReport()
                                                       throws IOException
Description copied from interface: org.apache.hadoop.dfs.ClientProtocol
Get a full report on the system's current datanodes. One DatanodeInfo object is returned for each DataNode.

Specified by:
getDatanodeReport in interface org.apache.hadoop.dfs.ClientProtocol
Throws:
IOException

sendHeartbeat

public void sendHeartbeat(String sender,
                          long capacity,
                          long remaining)
Description copied from interface: org.apache.hadoop.dfs.DatanodeProtocol
sendHeartbeat() tells the NameNode that the DataNode is still alive and well. Includes some status info, too.

Specified by:
sendHeartbeat in interface org.apache.hadoop.dfs.DatanodeProtocol

blockReport

public org.apache.hadoop.dfs.Block[] blockReport(String sender,
                                                 org.apache.hadoop.dfs.Block[] blocks)
Description copied from interface: org.apache.hadoop.dfs.DatanodeProtocol
blockReport() tells the NameNode about all the locally-stored blocks. The NameNode returns an array of Blocks that have become obsolete and should be deleted. This function is meant to upload *all* the locally-stored blocks. It's invoked upon startup and then infrequently afterwards.

Specified by:
blockReport in interface org.apache.hadoop.dfs.DatanodeProtocol
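
From the DataNode side these two calls are periodic: a heartbeat roughly every HEARTBEAT_INTERVAL and a full report roughly every BLOCKREPORT_INTERVAL (the units are assumed to be milliseconds). A heavily simplified sketch of that loop; 'running', 'datanodeName', 'capacityBytes', 'remainingBytes', and 'locallyStoredBlocks()' are hypothetical stand-ins for the real DataNode's state, and exception handling is omitted:

    long lastHeartbeat = 0, lastBlockReport = 0;
    while (running) {
        long now = System.currentTimeMillis();
        if (now - lastHeartbeat >= NameNode.HEARTBEAT_INTERVAL) {
            namenode.sendHeartbeat(datanodeName, capacityBytes, remainingBytes);
            lastHeartbeat = now;
        }
        if (now - lastBlockReport >= NameNode.BLOCKREPORT_INTERVAL) {
            // Upload *all* locally stored blocks; whatever comes back is obsolete
            // and should be deleted from local storage.
            org.apache.hadoop.dfs.Block[] obsolete =
                namenode.blockReport(datanodeName, locallyStoredBlocks());
            lastBlockReport = now;
        }
        Thread.sleep(1000);
    }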

blockReceived

public void blockReceived(String sender,
                          org.apache.hadoop.dfs.Block[] blocks)
Description copied from interface: org.apache.hadoop.dfs.DatanodeProtocol
blockReceived() allows the DataNode to tell the NameNode about recently-received block data. For example, whenever client code writes a new Block here, or another DataNode copies a Block to this DataNode, it will call blockReceived().

Specified by:
blockReceived in interface org.apache.hadoop.dfs.DatanodeProtocol

errorReport

public void errorReport(String sender,
                        String msg)
Description copied from interface: org.apache.hadoop.dfs.DatanodeProtocol
errorReport() tells the NameNode about something that has gone awry. Useful for debugging.

Specified by:
errorReport in interface org.apache.hadoop.dfs.DatanodeProtocol

getBlockwork

public org.apache.hadoop.dfs.BlockCommand getBlockwork(String sender,
                                                       int xmitsInProgress)
Return a block-oriented command for the datanode to execute. This will be either a transfer or a delete operation.

Specified by:
getBlockwork in interface org.apache.hadoop.dfs.DatanodeProtocol

main

public static void main(String[] argv)
                 throws Exception
Throws:
Exception


Copyright © 2006 The Apache Software Foundation