Uses of Class
org.apache.hadoop.hdfs.protocol.DatanodeInfo

Packages that use DatanodeInfo
org.apache.hadoop.hdfs A distributed implementation of FileSystem.
org.apache.hadoop.hdfs.protocol   
org.apache.hadoop.hdfs.server.common   
org.apache.hadoop.hdfs.server.namenode   
org.apache.hadoop.hdfs.server.protocol   
 

Uses of DatanodeInfo in org.apache.hadoop.hdfs
 

Methods in org.apache.hadoop.hdfs that return DatanodeInfo
 DatanodeInfo[] DFSClient.datanodeReport(FSConstants.DatanodeReportType type)
           
 DatanodeInfo DFSInputStream.getCurrentDatanode()
          Returns the datanode from which the stream is currently reading.
 DatanodeInfo DFSClient.DFSDataInputStream.getCurrentDatanode()
          Returns the datanode from which the stream is currently reading.
 DatanodeInfo[] DistributedFileSystem.getDataNodeStats()
          Return statistics for each datanode.
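The report methods above can be exercised from an ordinary client. A minimal sketch, assuming a reachable HDFS deployment whose address is picked up from the default Configuration; the class name DatanodeStats is illustrative only:

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;

public class DatanodeStats {
  public static void main(String[] args) throws IOException {
    // Assumes fs.default.name in the configuration points at a
    // running HDFS namenode.
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    if (!(fs instanceof DistributedFileSystem)) {
      throw new IOException("Not running against HDFS");
    }
    DistributedFileSystem dfs = (DistributedFileSystem) fs;

    // getDataNodeStats() returns one DatanodeInfo per datanode.
    for (DatanodeInfo dn : dfs.getDataNodeStats()) {
      // getDatanodeReport() renders name, capacity, usage, etc.
      System.out.println(dn.getDatanodeReport());
    }
    fs.close();
  }
}
```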
 

Uses of DatanodeInfo in org.apache.hadoop.hdfs.protocol
 

Methods in org.apache.hadoop.hdfs.protocol that return DatanodeInfo
 DatanodeInfo[] ClientProtocol.getDatanodeReport(FSConstants.DatanodeReportType type)
          Get a report on the system's current datanodes.
 DatanodeInfo[] LocatedBlock.getLocations()
           
static DatanodeInfo DatanodeInfo.read(DataInput in)
          Read a DatanodeInfo.
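DatanodeInfo implements Hadoop's Writable contract, so the static read(DataInput) above pairs with the instance method write(DataOutput). A minimal round-trip sketch, assuming a DatanodeInfo obtained elsewhere (for example from DistributedFileSystem.getDataNodeStats()):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

import org.apache.hadoop.hdfs.protocol.DatanodeInfo;

public class DatanodeInfoRoundTrip {
  // Serialize a DatanodeInfo and read it back, returning the copy.
  static DatanodeInfo roundTrip(DatanodeInfo original) throws IOException {
    // write(DataOutput) is the counterpart of the static read(DataInput).
    ByteArrayOutputStream buf = new ByteArrayOutputStream();
    original.write(new DataOutputStream(buf));

    return DatanodeInfo.read(
        new DataInputStream(new ByteArrayInputStream(buf.toByteArray())));
  }
}
```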
 

Methods in org.apache.hadoop.hdfs.protocol with parameters of type DatanodeInfo
 LocatedBlock ClientProtocol.addBlock(String src, String clientName, Block previous, DatanodeInfo[] excludeNodes)
          A client that wants to write an additional block to the indicated filename (which must currently be open for writing) should call addBlock().
protected abstract  void DataTransferProtocol.Receiver.opReplaceBlock(DataInputStream in, Block blk, String sourceId, DatanodeInfo src, org.apache.hadoop.security.token.Token<BlockTokenIdentifier> blockToken)
          Abstract OP_REPLACE_BLOCK method.
static void DataTransferProtocol.Sender.opReplaceBlock(DataOutputStream out, Block blk, String storageId, DatanodeInfo src, org.apache.hadoop.security.token.Token<BlockTokenIdentifier> blockToken)
          Send OP_REPLACE_BLOCK.
protected abstract  void DataTransferProtocol.Receiver.opWriteBlock(DataInputStream in, Block blk, int pipelineSize, DataTransferProtocol.BlockConstructionStage stage, long newGs, long minBytesRcvd, long maxBytesRcvd, String client, DatanodeInfo src, DatanodeInfo[] targets, org.apache.hadoop.security.token.Token<BlockTokenIdentifier> blockToken)
          Abstract OP_WRITE_BLOCK method.
static void DataTransferProtocol.Sender.opWriteBlock(DataOutputStream out, Block blk, int pipelineSize, DataTransferProtocol.BlockConstructionStage stage, long newGs, long minBytesRcvd, long maxBytesRcvd, String client, DatanodeInfo src, DatanodeInfo[] targets, org.apache.hadoop.security.token.Token<BlockTokenIdentifier> blockToken)
          Send OP_WRITE_BLOCK.
 

Constructors in org.apache.hadoop.hdfs.protocol with parameters of type DatanodeInfo
DatanodeInfo(DatanodeInfo from)
           
LocatedBlock(Block b, DatanodeInfo[] locs)
           
LocatedBlock(Block b, DatanodeInfo[] locs, long startOffset)
           
LocatedBlock(Block b, DatanodeInfo[] locs, long startOffset, boolean corrupt)
           
UnregisteredNodeException(DatanodeID nodeID, DatanodeInfo storedNode)
          The exception is thrown if a different data-node claims the same storage id as the existing one.
 

Uses of DatanodeInfo in org.apache.hadoop.hdfs.server.common
 

Methods in org.apache.hadoop.hdfs.server.common that return DatanodeInfo
static DatanodeInfo JspHelper.bestNode(DatanodeInfo[] nodes, boolean doRandom)
           
static DatanodeInfo JspHelper.bestNode(LocatedBlock blk)
           
static DatanodeInfo JspHelper.bestNode(LocatedBlocks blks)
           
 

Methods in org.apache.hadoop.hdfs.server.common with parameters of type DatanodeInfo
static DatanodeInfo JspHelper.bestNode(DatanodeInfo[] nodes, boolean doRandom)
           
 

Uses of DatanodeInfo in org.apache.hadoop.hdfs.server.namenode
 

Subclasses of DatanodeInfo in org.apache.hadoop.hdfs.server.namenode
 class DatanodeDescriptor
          DatanodeDescriptor tracks stats on a given DataNode, such as available storage capacity, last update time, etc., and maintains a set of blocks stored on the datanode.
 

Methods in org.apache.hadoop.hdfs.server.namenode that return DatanodeInfo
 DatanodeInfo[] FSNamesystem.datanodeReport(FSConstants.DatanodeReportType type)
           
 DatanodeInfo FSNamesystem.getDataNodeInfo(String name)
           
 DatanodeInfo[] NameNode.getDatanodeReport(FSConstants.DatanodeReportType type)
           
 

Methods in org.apache.hadoop.hdfs.server.namenode with parameters of type DatanodeInfo
 LocatedBlock NameNode.addBlock(String src, String clientName, Block previous, DatanodeInfo[] excludedNodes)
           
 BlocksWithLocations BackupNode.getBlocks(DatanodeInfo datanode, long size)
           
 BlocksWithLocations NameNode.getBlocks(DatanodeInfo datanode, long size)
           
 void FSNamesystem.markBlockAsCorrupt(Block blk, DatanodeInfo dn)
          Mark the block belonging to datanode as corrupt.
 

Uses of DatanodeInfo in org.apache.hadoop.hdfs.server.protocol
 

Methods in org.apache.hadoop.hdfs.server.protocol that return DatanodeInfo
 DatanodeInfo[][] BlockCommand.getTargets()
           
 

Methods in org.apache.hadoop.hdfs.server.protocol with parameters of type DatanodeInfo
 BlocksWithLocations NamenodeProtocol.getBlocks(DatanodeInfo datanode, long size)
          Get a list of blocks belonging to datanode whose total size equals size.
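getBlocks(DatanodeInfo, long) is the call the Balancer uses to pull a batch of blocks stored on a source datanode. A sketch, assuming an already-constructed NamenodeProtocol RPC proxy (how that proxy is obtained is outside this page's scope); the accessor names on BlocksWithLocations.BlockWithLocations are assumed from that class's API:

```java
import java.io.IOException;

import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
import org.apache.hadoop.hdfs.server.protocol.BlocksWithLocations;
import org.apache.hadoop.hdfs.server.protocol.BlocksWithLocations.BlockWithLocations;
import org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol;

public class BlockFetchSketch {
  // List roughly 'size' bytes worth of blocks stored on 'datanode'.
  static void listBlocks(NamenodeProtocol namenode, DatanodeInfo datanode,
                         long size) throws IOException {
    BlocksWithLocations batch = namenode.getBlocks(datanode, size);
    for (BlockWithLocations b : batch.getBlocks()) {
      // Each entry carries the block plus the storage IDs of the
      // datanodes holding a replica.
      System.out.println(b.getBlock() + " replicated on "
          + b.getStorageIDs().length + " datanode(s)");
    }
  }
}
```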
 

Constructors in org.apache.hadoop.hdfs.server.protocol with parameters of type DatanodeInfo
BlockRecoveryCommand.RecoveringBlock(Block b, DatanodeInfo[] locs, long newGS)
          Create RecoveringBlock.
 



Copyright © 2009 The Apache Software Foundation