A B C D E F G H I J K L M N O P Q R S T U V W X Z

A

abandonBlock(Block, String) - Method in class org.apache.hadoop.dfs.NameNode
The client needs to give up on the block.
abandonFileInProgress(String, String) - Method in class org.apache.hadoop.dfs.NameNode
 
abort(long) - Method in class org.apache.hadoop.hbase.HClient
Abort a row mutation
abort(long) - Method in class org.apache.hadoop.hbase.HRegion
Abort a pending set of writes.
abort(Text, long, long) - Method in interface org.apache.hadoop.hbase.HRegionInterface
Abort a row mutation
abort(Text, long, long) - Method in class org.apache.hadoop.hbase.HRegionServer
Abort a row mutation
abort(Path) - Method in class org.apache.hadoop.mapred.PhasedFileSystem
Deprecated. Aborts a single file.
abort() - Method in class org.apache.hadoop.mapred.PhasedFileSystem
Deprecated. Aborts the file creation, all uncommitted files created by this PhasedFileSystem instance are deleted.
abortBatch() - Method in class org.apache.hadoop.hbase.HClient
Abort a batch mutation
abortRequested - Variable in class org.apache.hadoop.hbase.HRegionServer
 
ABSOLUTE - Static variable in class org.apache.hadoop.metrics.spi.MetricValue
 
AbstractMetricsContext - Class in org.apache.hadoop.metrics.spi
The main class of the Service Provider Interface.
AbstractMetricsContext() - Constructor for class org.apache.hadoop.metrics.spi.AbstractMetricsContext
Creates a new instance of AbstractMetricsContext
accept(Path) - Method in interface org.apache.hadoop.fs.PathFilter
Tests whether or not the specified abstract pathname should be included in a pathname list.
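
For example, a filter can be implemented to exclude files by name when listing or globbing paths; a minimal sketch, assuming a ".tmp" suffix marks temporary files:

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.fs.PathFilter;

    // Sketch of a PathFilter that skips temporary files (the ".tmp" convention is illustrative).
    public class SkipTempFilter implements PathFilter {
      public boolean accept(Path path) {
        return !path.getName().endsWith(".tmp");
      }
    }
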
accept(Writable) - Method in interface org.apache.hadoop.mapred.SequenceFileInputFilter.Filter
Filter function: decides whether a record should be filtered or not.
accept(Writable) - Method in class org.apache.hadoop.mapred.SequenceFileInputFilter.MD5Filter
Filtering method: if MD5(key) % frequency == 0, returns true; otherwise returns false.
accept(Writable) - Method in class org.apache.hadoop.mapred.SequenceFileInputFilter.PercentFilter
Filtering method: if record# % frequency == 0, returns true; otherwise returns false.
accept(Writable) - Method in class org.apache.hadoop.mapred.SequenceFileInputFilter.RegexFilter
Filtering method: if the key matches the regex, returns true; otherwise returns false.
acceptedRow(Text) - Method in class org.apache.hadoop.hbase.filter.PageRowFilter
Called to let filter know that the specified row has been included in the results (passed all filtering).
acceptedRow(Text) - Method in class org.apache.hadoop.hbase.filter.RegExpRowFilter
Called to let filter know that the specified row has been included in the results (passed all filtering).
acceptedRow(Text) - Method in interface org.apache.hadoop.hbase.filter.RowFilterInterface
Called to let filter know that the specified row has been included in the results (passed all filtering).
acceptedRow(Text) - Method in class org.apache.hadoop.hbase.filter.RowFilterSet
Called to let filter know that the specified row has been included in the results (passed all filtering).
activateOptions() - Method in class org.apache.hadoop.mapred.TaskLogAppender
 
add(Object) - Method in class org.apache.hadoop.contrib.utils.join.ArrayListBackedIterator
 
add(Object) - Method in interface org.apache.hadoop.contrib.utils.join.ResetableIterator
 
add(Text, TreeMap<Text, byte[]>, long) - Method in class org.apache.hadoop.hbase.HMemcache
Store a value.
add(DatanodeDescriptor) - Method in class org.apache.hadoop.net.NetworkTopology
Add a data node, updating the data node counter and rack counter if necessary.
add(Key) - Method in class org.onelab.filter.BloomFilter
 
add(Key) - Method in class org.onelab.filter.CountingBloomFilter
 
add(Key) - Method in class org.onelab.filter.DynamicBloomFilter
 
add(Key) - Method in class org.onelab.filter.Filter
Adds a key to this filter.
add(ArrayList<Key>) - Method in class org.onelab.filter.Filter
Adds a list of keys to this filter.
add(Collection<Key>) - Method in class org.onelab.filter.Filter
Adds a collection of keys to this filter.
add(Key[]) - Method in class org.onelab.filter.Filter
Adds an array of keys to this filter.
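
A minimal sketch of populating a filter with these add methods; the Key(byte[]) constructor and the membershipTest(Key) query are listed under later letters and assumed here:

    import org.onelab.filter.BloomFilter;
    import org.onelab.filter.Key;

    public class BloomSketch {
      public static void main(String[] args) {
        // Vector size and number of hash functions (see the BloomFilter(int, int) constructor below).
        BloomFilter filter = new BloomFilter(1000, 4);
        filter.add(new Key("row-1".getBytes()));
        filter.add(new Key("row-2".getBytes()));
        // membershipTest is assumed; it may return false positives but never false negatives.
        System.out.println(filter.membershipTest(new Key("row-1".getBytes())));
      }
    }
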
add(Key) - Method in class org.onelab.filter.RetouchedBloomFilter
 
add_escapes(String) - Method in exception org.apache.hadoop.hbase.shell.generated.ParseException
Used to convert raw characters to their escaped versions when the raw versions cannot be used as part of an ASCII string literal.
add_escapes(String) - Method in exception org.apache.hadoop.record.compiler.generated.ParseException
Used to convert raw characters to their escaped versions when the raw versions cannot be used as part of an ASCII string literal.
addArchiveToClassPath(Path, Configuration) - Static method in class org.apache.hadoop.filecache.DistributedCache
Add an archive path to the current set of classpath entries.
addBlock(String, String) - Method in class org.apache.hadoop.dfs.NameNode
 
addCacheArchive(URI, Configuration) - Static method in class org.apache.hadoop.filecache.DistributedCache
Add an archive to be localized to the conf.
addCacheFile(URI, Configuration) - Static method in class org.apache.hadoop.filecache.DistributedCache
Add a file to be localized to the conf
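
A sketch of registering a cache file and an archive on the job configuration before submission; the DFS paths are hypothetical:

    import java.net.URI;
    import org.apache.hadoop.filecache.DistributedCache;
    import org.apache.hadoop.mapred.JobConf;

    public class CacheSetup {
      public static void main(String[] args) throws Exception {
        JobConf job = new JobConf();
        // Both entries are localized on each task node before the tasks run.
        DistributedCache.addCacheFile(new URI("/user/me/lookup.dat"), job);        // hypothetical path
        DistributedCache.addCacheArchive(new URI("/user/me/dictionary.zip"), job); // hypothetical path
      }
    }
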
addClass(String, Class, String) - Method in class org.apache.hadoop.util.ProgramDriver
This is the method that adds the classes to the repository.
addColumn(ColumnName, boolean) - Method in class org.apache.hadoop.examples.dancing.DancingLinks
Add a column to the table
addColumn(ColumnName) - Method in class org.apache.hadoop.examples.dancing.DancingLinks
Add a column to the table
addColumn(Text, HColumnDescriptor) - Method in class org.apache.hadoop.hbase.HClient
Add a column to an existing table
addColumn(Text, HColumnDescriptor) - Method in class org.apache.hadoop.hbase.HMaster
addColumn(Text, HColumnDescriptor) - Method in interface org.apache.hadoop.hbase.HMasterInterface
 
addDefaultResource(String) - Method in class org.apache.hadoop.conf.Configuration
Add a default resource.
addDefaultResource(URL) - Method in class org.apache.hadoop.conf.Configuration
Add a default resource.
addDefaultResource(Path) - Method in class org.apache.hadoop.conf.Configuration
Add a default resource.
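
A sketch of loading an extra default resource and then reading a property; the resource and property names are hypothetical:

    import org.apache.hadoop.conf.Configuration;

    public class ConfSketch {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        conf.addDefaultResource("my-defaults.xml");           // hypothetical classpath resource
        String value = conf.get("my.property", "fallback");   // falls back if the property is unset
        System.out.println(value);
      }
    }
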
addDoubleValue(Object, double) - Method in class org.apache.hadoop.contrib.utils.join.JobBase
Increment the given counter by the given incremental value. If the counter does not exist, one is created with value 0.
addEscapes(String) - Static method in error org.apache.hadoop.hbase.shell.generated.TokenMgrError
Replaces unprintable characters with their escaped (or unicode escaped) equivalents in the given string.
addEscapes(String) - Static method in error org.apache.hadoop.record.compiler.generated.TokenMgrError
Replaces unprintable characters with their escaped (or unicode escaped) equivalents in the given string.
addFalsePositive(Key) - Method in class org.onelab.filter.RetouchedBloomFilter
Adds false positive information to this retouched Bloom filter.
addFalsePositive(Collection<Key>) - Method in class org.onelab.filter.RetouchedBloomFilter
Adds a collection of false positive information to this retouched Bloom filter.
addFalsePositive(ArrayList<Key>) - Method in class org.onelab.filter.RetouchedBloomFilter
Adds a list of false positive information to this retouched Bloom filter.
addFalsePositive(Key[]) - Method in class org.onelab.filter.RetouchedBloomFilter
Adds an array of false positive information to this retouched Bloom filter.
addFamily(HColumnDescriptor) - Method in class org.apache.hadoop.hbase.HTableDescriptor
Add a column family.
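
A sketch of assembling a table descriptor; the String-based constructors and the trailing ':' in family names are assumptions about the HBase API of this release:

    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.HTableDescriptor;

    public class TableSketch {
      public static void main(String[] args) {
        HTableDescriptor desc = new HTableDescriptor("webtable");   // assumed String constructor
        desc.addFamily(new HColumnDescriptor("contents:"));         // assumed family-name convention
        // The descriptor would then be handed to HClient.createTable (not listed in this section).
      }
    }
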
addFileset(FileSet) - Method in class org.apache.hadoop.record.compiler.ant.RccTask
Adds a fileset that can consist of one or more files
addFileToClassPath(Path, Configuration) - Static method in class org.apache.hadoop.filecache.DistributedCache
Add a file path to the current set of classpath entries. It adds the file to the cache as well.
addFinalResource(String) - Method in class org.apache.hadoop.conf.Configuration
Add a final resource.
addFinalResource(URL) - Method in class org.apache.hadoop.conf.Configuration
Add a final resource.
addFinalResource(Path) - Method in class org.apache.hadoop.conf.Configuration
Add a final resource.
addInputPath(Path) - Method in class org.apache.hadoop.mapred.JobConf
 
additionalConfSpec_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
addJob(Job) - Method in class org.apache.hadoop.mapred.jobcontrol.JobControl
Add a new job.
addJobs(Collection<Job>) - Method in class org.apache.hadoop.mapred.jobcontrol.JobControl
Add a collection of jobs
addLongValue(Object, long) - Method in class org.apache.hadoop.contrib.utils.join.JobBase
Increment the given counter by the given incremental value. If the counter does not exist, one is created with value 0.
addMissing(String, long) - Method in class org.apache.hadoop.dfs.NamenodeFsck.FsckResult
Add a missing block name, plus its size.
addName(Class, String) - Static method in class org.apache.hadoop.io.WritableName
Add an alternate name for a class.
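
For example, an alternate name keeps previously written data readable after a Writable class is renamed or moved; the alias below is hypothetical:

    import org.apache.hadoop.io.WritableName;

    public class NameSketch {
      public static void main(String[] args) {
        // Files written under the old class name remain readable by the current class.
        WritableName.addName(org.apache.hadoop.io.Text.class, "com.example.old.MyText");  // hypothetical alias
      }
    }
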
addNextValue(Object) - Method in class org.apache.hadoop.mapred.lib.aggregate.DoubleValueSum
add a value to the aggregator
addNextValue(double) - Method in class org.apache.hadoop.mapred.lib.aggregate.DoubleValueSum
add a value to the aggregator
addNextValue(Object) - Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueMax
add a value to the aggregator
addNextValue(long) - Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueMax
add a value to the aggregator
addNextValue(Object) - Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueMin
add a value to the aggregator
addNextValue(long) - Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueMin
add a value to the aggregator
addNextValue(Object) - Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueSum
add a value to the aggregator
addNextValue(long) - Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueSum
add a value to the aggregator
addNextValue(Object) - Method in class org.apache.hadoop.mapred.lib.aggregate.StringValueMax
add a value to the aggregator
addNextValue(Object) - Method in class org.apache.hadoop.mapred.lib.aggregate.StringValueMin
add a value to the aggregator
addNextValue(Object) - Method in class org.apache.hadoop.mapred.lib.aggregate.UniqValueCount
add a value to the aggregator
addNextValue(Object) - Method in interface org.apache.hadoop.mapred.lib.aggregate.ValueAggregator
add a value to the aggregator
addNextValue(Object) - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueHistogram
add the given val to the aggregator.
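
A sketch of driving one of these aggregators directly; getReport() comes from the ValueAggregator interface (listed under G, not here) and is assumed to return the accumulated sum as a String:

    import org.apache.hadoop.mapred.lib.aggregate.LongValueSum;

    public class SumSketch {
      public static void main(String[] args) {
        LongValueSum sum = new LongValueSum();
        sum.addNextValue(3L);           // long form
        sum.addNextValue("4");          // Object form; the value is parsed from its string representation
        System.out.println(sum.getReport());   // assumed to print the running total, "7"
      }
    }
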
addPhase(String) - Method in class org.apache.hadoop.util.Progress
Adds a named node to the tree.
addPhase() - Method in class org.apache.hadoop.util.Progress
Adds a node to the tree.
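
A sketch of composing a two-phase progress tree; set(float) and get() are listed under later letters and assumed to record and report fractional progress:

    import org.apache.hadoop.util.Progress;

    public class ProgressSketch {
      public static void main(String[] args) {
        Progress root = new Progress();
        Progress copy = root.addPhase("copy");   // each phase gets an equal share of the parent
        root.addPhase("sort");
        copy.set(0.5f);                          // assumed: half way through the first phase
        System.out.println(root.get());          // assumed: roughly 0.25 overall
      }
    }
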
addRow(boolean[]) - Method in class org.apache.hadoop.examples.dancing.DancingLinks
Add a row to the table.
addServlet(String, String, Class<T>) - Method in class org.apache.hadoop.mapred.StatusHttpServer
Add a servlet in the server.
addTableFooter(JspWriter) - Method in class org.apache.hadoop.dfs.JspHelper
 
addTableHeader(JspWriter) - Method in class org.apache.hadoop.dfs.JspHelper
 
addTableRow(JspWriter, String[]) - Method in class org.apache.hadoop.dfs.JspHelper
 
addTableRow(JspWriter, String[], int) - Method in class org.apache.hadoop.dfs.JspHelper
 
addTaskEnvironment_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
adjustBeginLineColumn(int, int) - Method in class org.apache.hadoop.hbase.shell.generated.SimpleCharStream
Method to adjust line and column numbers for the start of a token.
adjustBeginLineColumn(int, int) - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
Method to adjust line and column numbers for the start of a token.
adjustTop() - Method in class org.apache.hadoop.util.PriorityQueue
Should be called when the Object at top changes values.
adminState - Variable in class org.apache.hadoop.dfs.DatanodeInfo
 
AggregateWordCount - Class in org.apache.hadoop.examples
This is an example Aggregated Hadoop Map/Reduce application.
AggregateWordCount() - Constructor for class org.apache.hadoop.examples.AggregateWordCount
 
AggregateWordCount.WordCountPlugInClass - Class in org.apache.hadoop.examples
 
AggregateWordCount.WordCountPlugInClass() - Constructor for class org.apache.hadoop.examples.AggregateWordCount.WordCountPlugInClass
 
aggregatorDescriptorList - Variable in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorJobBase
 
allFinished() - Method in class org.apache.hadoop.mapred.jobcontrol.JobControl
 
AlreadyBeingCreatedException - Exception in org.apache.hadoop.dfs
The exception that happens when you ask to create a file that already is being created, but is not closed yet.
AlreadyBeingCreatedException(String) - Constructor for exception org.apache.hadoop.dfs.AlreadyBeingCreatedException
 
AND - Static variable in interface org.apache.hadoop.hbase.shell.generated.ParserConstants
 
and(Filter) - Method in class org.onelab.filter.BloomFilter
 
and(Filter) - Method in class org.onelab.filter.CountingBloomFilter
 
and(Filter) - Method in class org.onelab.filter.DynamicBloomFilter
 
and(Filter) - Method in class org.onelab.filter.Filter
Performs a logical AND between this filter and a specified filter.
APP_NAME - Static variable in class org.apache.hadoop.hbase.shell.HelpManager
application name
APP_VERSION - Static variable in class org.apache.hadoop.hbase.shell.HelpManager
version of the code
append(Writable) - Method in class org.apache.hadoop.io.ArrayFile.Writer
Append a value to the file.
append(WritableComparable, Writable) - Method in class org.apache.hadoop.io.MapFile.Writer
Append a key/value pair to the map.
append(Writable, Writable) - Method in class org.apache.hadoop.io.SequenceFile.Writer
Append a key/value pair.
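
A sketch of appending key/value pairs to a SequenceFile, assuming the SequenceFile.createWriter(FileSystem, Configuration, Path, keyClass, valueClass) factory listed under later letters:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.SequenceFile;
    import org.apache.hadoop.io.Text;

    public class AppendSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        SequenceFile.Writer writer = SequenceFile.createWriter(
            fs, conf, new Path("/tmp/example.seq"), LongWritable.class, Text.class);
        writer.append(new LongWritable(1), new Text("first record"));
        writer.append(new LongWritable(2), new Text("second record"));
        writer.close();
      }
    }
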
append(WritableComparable) - Method in class org.apache.hadoop.io.SetFile.Writer
Deprecated. Append a key to a set.
append(LoggingEvent) - Method in class org.apache.hadoop.mapred.TaskLogAppender
 
append(LoggingEvent) - Method in class org.apache.hadoop.metrics.jvm.EventCounter
 
append(byte[], int, int) - Method in class org.apache.hadoop.record.Buffer
Append specified bytes to the buffer.
append(byte[]) - Method in class org.apache.hadoop.record.Buffer
Append specified bytes to the buffer
appendRaw(byte[], int, int, SequenceFile.ValueBytes) - Method in class org.apache.hadoop.io.SequenceFile.Writer
 
archiveURIs - Variable in class org.apache.hadoop.streaming.StreamJob
 
argv_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
ArrayFile - Class in org.apache.hadoop.io
A dense file-based mapping from integers to values.
ArrayFile() - Constructor for class org.apache.hadoop.io.ArrayFile
 
ArrayFile.Reader - Class in org.apache.hadoop.io
Provide access to an existing array file.
ArrayFile.Reader(FileSystem, String, Configuration) - Constructor for class org.apache.hadoop.io.ArrayFile.Reader
Construct an array reader for the named file.
ArrayFile.Writer - Class in org.apache.hadoop.io
Write a new array file.
ArrayFile.Writer(Configuration, FileSystem, String, Class) - Constructor for class org.apache.hadoop.io.ArrayFile.Writer
Create the named file for values of the named class.
ArrayFile.Writer(Configuration, FileSystem, String, Class, SequenceFile.CompressionType, Progressable) - Constructor for class org.apache.hadoop.io.ArrayFile.Writer
Create the named file for values of the named class.
ArrayListBackedIterator - Class in org.apache.hadoop.contrib.utils.join
This class provides an implementation of ResetableIterator.
ArrayListBackedIterator() - Constructor for class org.apache.hadoop.contrib.utils.join.ArrayListBackedIterator
 
ArrayListBackedIterator(ArrayList<Object>) - Constructor for class org.apache.hadoop.contrib.utils.join.ArrayListBackedIterator
 
arrayToString(String[]) - Static method in class org.apache.hadoop.util.StringUtils
Given an array of strings, return a comma-separated list of its elements.
ArrayWritable - Class in org.apache.hadoop.io
A Writable for arrays containing instances of a class.
ArrayWritable() - Constructor for class org.apache.hadoop.io.ArrayWritable
 
ArrayWritable(Class) - Constructor for class org.apache.hadoop.io.ArrayWritable
 
ArrayWritable(Class, Writable[]) - Constructor for class org.apache.hadoop.io.ArrayWritable
 
ArrayWritable(String[]) - Constructor for class org.apache.hadoop.io.ArrayWritable
 
available() - Method in class org.apache.hadoop.fs.FSInputChecker
 
available() - Method in class org.apache.hadoop.io.compress.GzipCodec.GzipInputStream
 
available() - Method in class org.apache.hadoop.mapred.TaskLog.Reader
 

B

backup(int) - Method in class org.apache.hadoop.hbase.shell.generated.SimpleCharStream
 
backup(int) - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
BasicCommand - Class in org.apache.hadoop.hbase.shell
 
BasicCommand() - Constructor for class org.apache.hadoop.hbase.shell.BasicCommand
 
BatchOperation - Class in org.apache.hadoop.hbase.io
batch update operation
BatchOperation() - Constructor for class org.apache.hadoop.hbase.io.BatchOperation
default constructor used by Writable
BatchOperation(Text, byte[]) - Constructor for class org.apache.hadoop.hbase.io.BatchOperation
Creates a put operation
BatchOperation(Text) - Constructor for class org.apache.hadoop.hbase.io.BatchOperation
Creates a delete operation
batchUpdate(Text, long, BatchUpdate) - Method in interface org.apache.hadoop.hbase.HRegionInterface
Applies a batch of updates via one RPC
batchUpdate(Text, long, BatchUpdate) - Method in class org.apache.hadoop.hbase.HRegionServer
Applies a batch of updates via one RPC
BatchUpdate - Class in org.apache.hadoop.hbase.io
A Writable object that contains a series of BatchOperations. There is one BatchUpdate object per server, so a series of batch operations can result in multiple BatchUpdate objects if the batch contains rows served by multiple region servers.
BatchUpdate() - Constructor for class org.apache.hadoop.hbase.io.BatchUpdate
constructor
beginColumn - Variable in class org.apache.hadoop.hbase.shell.generated.Token
beginLine and beginColumn describe the position of the first character of this token; endLine and endColumn describe the position of the last character of this token.
beginColumn - Variable in class org.apache.hadoop.record.compiler.generated.Token
beginLine and beginColumn describe the position of the first character of this token; endLine and endColumn describe the position of the last character of this token.
beginLine - Variable in class org.apache.hadoop.hbase.shell.generated.Token
beginLine and beginColumn describe the position of the first character of this token; endLine and endColumn describe the position of the last character of this token.
beginLine - Variable in class org.apache.hadoop.record.compiler.generated.Token
beginLine and beginColumn describe the position of the first character of this token; endLine and endColumn describe the position of the last character of this token.
BeginToken() - Method in class org.apache.hadoop.hbase.shell.generated.SimpleCharStream
 
BeginToken() - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
bestNode(LocatedBlock) - Method in class org.apache.hadoop.dfs.JspHelper
 
BinaryRecordInput - Class in org.apache.hadoop.record
 
BinaryRecordInput(InputStream) - Constructor for class org.apache.hadoop.record.BinaryRecordInput
Creates a new instance of BinaryRecordInput
BinaryRecordInput(DataInput) - Constructor for class org.apache.hadoop.record.BinaryRecordInput
Creates a new instance of BinaryRecordInput
BinaryRecordOutput - Class in org.apache.hadoop.record
 
BinaryRecordOutput(OutputStream) - Constructor for class org.apache.hadoop.record.BinaryRecordOutput
Creates a new instance of BinaryRecordOutput
BinaryRecordOutput(DataOutput) - Constructor for class org.apache.hadoop.record.BinaryRecordOutput
Creates a new instance of BinaryRecordOutput
Block - Class in org.apache.hadoop.fs.s3
Holds metadata about a block of data being stored in a FileSystemStore.
Block(long, long) - Constructor for class org.apache.hadoop.fs.s3.Block
 
BLOCK_INVALIDATE_CHUNK - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
blockCrcUpgradeGetBlockLocations(Block) - Method in class org.apache.hadoop.dfs.NameNode
 
blockExists(long) - Method in interface org.apache.hadoop.fs.s3.FileSystemStore
 
blockReceived(DatanodeRegistration, Block[]) - Method in class org.apache.hadoop.dfs.NameNode
 
blockReport(DatanodeRegistration, Block[]) - Method in class org.apache.hadoop.dfs.NameNode
 
BLOCKREPORT_INTERVAL - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
BLOOMFILTER - Static variable in class org.apache.hadoop.hbase.BloomFilterDescriptor
Bloom filter, as defined by Bloom in 1970.
BloomFilter - Class in org.onelab.filter
Implements a Bloom filter, as defined by Bloom in 1970.
BloomFilter() - Constructor for class org.onelab.filter.BloomFilter
Default constructor - use with readFields
BloomFilter(int, int) - Constructor for class org.onelab.filter.BloomFilter
Constructor
BloomFilterDescriptor - Class in org.apache.hadoop.hbase
Supplied as a parameter to HColumnDescriptor to specify what kind of bloom filter to use for a column, and its configuration parameters
BloomFilterDescriptor() - Constructor for class org.apache.hadoop.hbase.BloomFilterDescriptor
Default constructor - used in conjunction with Writable
BloomFilterDescriptor(int, int, int) - Constructor for class org.apache.hadoop.hbase.BloomFilterDescriptor
 
BOOLEAN_TKN - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
BooleanWritable - Class in org.apache.hadoop.io
A WritableComparable for booleans.
BooleanWritable() - Constructor for class org.apache.hadoop.io.BooleanWritable
 
BooleanWritable(boolean) - Constructor for class org.apache.hadoop.io.BooleanWritable
 
BooleanWritable.Comparator - Class in org.apache.hadoop.io
A Comparator optimized for BooleanWritable.
BooleanWritable.Comparator() - Constructor for class org.apache.hadoop.io.BooleanWritable.Comparator
 
bufcolumn - Variable in class org.apache.hadoop.hbase.shell.generated.SimpleCharStream
 
bufcolumn - Variable in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
buffer - Variable in class org.apache.hadoop.hbase.shell.generated.SimpleCharStream
 
Buffer - Class in org.apache.hadoop.record
A byte sequence that is used as a Java native type for buffer.
Buffer() - Constructor for class org.apache.hadoop.record.Buffer
Create a zero-count sequence.
Buffer(byte[]) - Constructor for class org.apache.hadoop.record.Buffer
Create a Buffer using the byte array as the initial value.
Buffer(byte[], int, int) - Constructor for class org.apache.hadoop.record.Buffer
Create a Buffer using the byte range as the initial value.
buffer - Variable in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
BUFFER_SIZE - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
BUFFER_TKN - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
BufferedFSInputStream - Class in org.apache.hadoop.fs
A class that optimizes reading from FSInputStream by buffering.
BufferedFSInputStream(FSInputStream, int) - Constructor for class org.apache.hadoop.fs.BufferedFSInputStream
Creates a BufferedFSInputStream with the specified buffer size, and saves its argument, the input stream in, for later use.
bufline - Variable in class org.apache.hadoop.hbase.shell.generated.SimpleCharStream
 
bufline - Variable in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
bufpos - Variable in class org.apache.hadoop.hbase.shell.generated.SimpleCharStream
 
bufpos - Variable in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
buildRoot(HttpServletRequest, XMLOutputter) - Method in class org.apache.hadoop.dfs.ListPathsServlet
Build a map from the query string, setting values and defaults.
BuiltInZlibDeflater - Class in org.apache.hadoop.io.compress.zlib
A wrapper around java.util.zip.Deflater to make it conform to org.apache.hadoop.io.compress.Compressor interface.
BuiltInZlibDeflater(int, boolean) - Constructor for class org.apache.hadoop.io.compress.zlib.BuiltInZlibDeflater
 
BuiltInZlibDeflater(int) - Constructor for class org.apache.hadoop.io.compress.zlib.BuiltInZlibDeflater
 
BuiltInZlibDeflater() - Constructor for class org.apache.hadoop.io.compress.zlib.BuiltInZlibDeflater
 
BuiltInZlibInflater - Class in org.apache.hadoop.io.compress.zlib
A wrapper around java.util.zip.Inflater to make it conform to org.apache.hadoop.io.compress.Decompressor interface.
BuiltInZlibInflater(boolean) - Constructor for class org.apache.hadoop.io.compress.zlib.BuiltInZlibInflater
 
BuiltInZlibInflater() - Constructor for class org.apache.hadoop.io.compress.zlib.BuiltInZlibInflater
 
BYTE_TKN - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
byteDesc(long) - Static method in class org.apache.hadoop.fs.FsShell
Return an abbreviated English-language description of the byte length.
bytesToCodePoint(ByteBuffer) - Static method in class org.apache.hadoop.io.Text
Returns the next code point at the current position in the buffer.
BytesWritable - Class in org.apache.hadoop.io
A byte sequence that is usable as a key or value.
BytesWritable() - Constructor for class org.apache.hadoop.io.BytesWritable
Create a zero-size sequence.
BytesWritable(byte[]) - Constructor for class org.apache.hadoop.io.BytesWritable
Create a BytesWritable using the byte array as the initial value.
BytesWritable.Comparator - Class in org.apache.hadoop.io
A Comparator optimized for BytesWritable.
BytesWritable.Comparator() - Constructor for class org.apache.hadoop.io.BytesWritable.Comparator
 
byteToHexString(byte[]) - Static method in class org.apache.hadoop.util.StringUtils
Given an array of bytes, converts it to a hex string representation.

C

cacheArchives - Variable in class org.apache.hadoop.streaming.StreamJob
 
cacheFiles - Variable in class org.apache.hadoop.streaming.StreamJob
 
cacheFlusherLock - Variable in class org.apache.hadoop.hbase.HRegionServer
 
call(Writable, InetSocketAddress) - Method in class org.apache.hadoop.ipc.Client
Make a call, passing param, to the IPC server running at address, returning the value.
call(Writable[], InetSocketAddress[]) - Method in class org.apache.hadoop.ipc.Client
Makes a set of calls in parallel.
call(Method, Object[][], InetSocketAddress[], Configuration) - Static method in class org.apache.hadoop.ipc.RPC
Expert: Make multiple, parallel calls to a set of servers.
call(Writable) - Method in class org.apache.hadoop.ipc.RPC.Server
 
call(Writable) - Method in class org.apache.hadoop.ipc.Server
Called for each call.
cancelLease(long, long) - Method in class org.apache.hadoop.hbase.Leases
Client explicitly cancels a lease.
capacity - Variable in class org.apache.hadoop.dfs.DatanodeInfo
 
captureOutAndError(List<String>, File, File, long) - Static method in class org.apache.hadoop.mapred.TaskLog
Wrap a command in a shell to capture stdout and stderr to files.
charAt(int) - Method in class org.apache.hadoop.io.Text
Returns the Unicode Scalar Value (32-bit integer value) for the character at position.
checkDir(File) - Static method in class org.apache.hadoop.util.DiskChecker
 
checkMaster() - Method in class org.apache.hadoop.hbase.HClient
 
checkOutputSpecs(FileSystem, JobConf) - Method in class org.apache.hadoop.hbase.mapred.TableOutputFormat
 
checkOutputSpecs(FileSystem, JobConf) - Method in class org.apache.hadoop.mapred.lib.NullOutputFormat
 
checkOutputSpecs(FileSystem, JobConf) - Method in interface org.apache.hadoop.mapred.OutputFormat
Check whether the output specification for a job is appropriate.
checkOutputSpecs(FileSystem, JobConf) - Method in class org.apache.hadoop.mapred.OutputFormatBase
 
checkPath(Path) - Method in class org.apache.hadoop.fs.FileSystem
Check that a Path belongs to this FileSystem.
checkPath(Path) - Method in class org.apache.hadoop.fs.FilterFileSystem
Check that a Path belongs to this FileSystem.
checkpoint() - Method in class org.apache.hadoop.fs.Trash
Create a trash checkpoint.
checkReservedTableName(Text) - Method in class org.apache.hadoop.hbase.HClient
 
checkState() - Method in class org.apache.hadoop.mapred.jobcontrol.Job
Check and update the state of this job.
CHECKSUM_CRC32 - Static variable in class org.apache.hadoop.dfs.DataChecksum
 
CHECKSUM_NULL - Static variable in class org.apache.hadoop.dfs.DataChecksum
 
ChecksumDistributedFileSystem - Class in org.apache.hadoop.dfs
An implementation of ChecksumFileSystem over DistributedFileSystem.
ChecksumDistributedFileSystem() - Constructor for class org.apache.hadoop.dfs.ChecksumDistributedFileSystem
 
ChecksumDistributedFileSystem(InetSocketAddress, Configuration) - Constructor for class org.apache.hadoop.dfs.ChecksumDistributedFileSystem
Deprecated.  
ChecksumException - Exception in org.apache.hadoop.fs
Thrown for checksum errors.
ChecksumException(String, long) - Constructor for exception org.apache.hadoop.fs.ChecksumException
 
ChecksumFileSystem - Class in org.apache.hadoop.fs
Abstract Checksumed FileSystem.
ChecksumFileSystem(FileSystem) - Constructor for class org.apache.hadoop.fs.ChecksumFileSystem
 
checkURIs(URI[], URI[]) - Static method in class org.apache.hadoop.filecache.DistributedCache
This method checks if there is a conflict in the fragment names of the uris.
chmod(String, String) - Static method in class org.apache.hadoop.fs.FileUtil
Change the permissions on a filename.
chooseRandom(String) - Method in class org.apache.hadoop.net.NetworkTopology
Randomly choose one node from scope. If scope starts with ~, choose one from all data nodes except those in scope; otherwise, choose one from within scope.
cleanup() - Method in class org.apache.hadoop.io.SequenceFile.Sorter.SegmentDescriptor
The default cleanup.
cleanup(int) - Static method in class org.apache.hadoop.mapred.TaskLog
Purge old user logs.
cleanup(Configuration, JobConf, String, String) - Method in class org.apache.hadoop.util.CopyFiles.CopyFilesMapper
Interface to clean up *distcp*-specific resources.
cleanup(Configuration, JobConf, String, String) - Method in class org.apache.hadoop.util.CopyFiles.FSCopyFilesMapper
 
cleanup(Configuration, JobConf, String, String) - Method in class org.apache.hadoop.util.CopyFiles.HTTPCopyFilesMapper
 
CLEAR - Static variable in interface org.apache.hadoop.hbase.shell.generated.ParserConstants
 
clear() - Method in class org.apache.hadoop.util.PriorityQueue
Removes all entries from the PriorityQueue.
clear() - Method in class org.onelab.filter.HashFunction
Clears this hash function.
ClearCommand - Class in org.apache.hadoop.hbase.shell
 
ClearCommand() - Constructor for class org.apache.hadoop.hbase.shell.ClearCommand
 
clearCommand() - Method in class org.apache.hadoop.hbase.shell.generated.Parser
 
Client - Class in org.apache.hadoop.ipc
A client for an IPC service.
Client(Class, Configuration) - Constructor for class org.apache.hadoop.ipc.Client
Construct an IPC client whose values are of the given Writable class.
clone(Writable, JobConf) - Static method in class org.apache.hadoop.io.WritableUtils
Make a copy of a writable object using serialization to a buffer.
clone() - Method in class org.apache.hadoop.record.Buffer
 
clone() - Method in class org.onelab.filter.BloomFilter
Returns a shallow copy of this Bloom filter.
clone() - Method in class org.onelab.filter.CountingBloomFilter
 
clone() - Method in class org.onelab.filter.DynamicBloomFilter
 
cloneFileAttributes(Path, Path, Progressable) - Method in class org.apache.hadoop.io.SequenceFile.Sorter
Clones the attributes (like compression) of the input file and creates a corresponding Writer.
close() - Method in class org.apache.hadoop.contrib.utils.join.ArrayListBackedIterator
 
close() - Method in class org.apache.hadoop.contrib.utils.join.DataJoinMapperBase
 
close() - Method in class org.apache.hadoop.contrib.utils.join.DataJoinReducerBase
 
close() - Method in interface org.apache.hadoop.contrib.utils.join.ResetableIterator
 
close() - Method in class org.apache.hadoop.dfs.DistributedFileSystem
 
close() - Method in class org.apache.hadoop.examples.PiEstimator.PiMapper
 
close() - Method in class org.apache.hadoop.examples.PiEstimator.PiReducer
 
close() - Method in class org.apache.hadoop.fs.FileSystem
No more filesystem operations are needed.
close() - Method in class org.apache.hadoop.fs.FilterFileSystem
 
close() - Method in class org.apache.hadoop.fs.FSDataOutputStream
 
close() - Method in class org.apache.hadoop.fs.FsShell
 
close() - Method in class org.apache.hadoop.fs.RawLocalFileSystem
 
close() - Method in class org.apache.hadoop.hbase.HAbstractScanner
Mechanism used to shut down the whole scan
close() - Method in interface org.apache.hadoop.hbase.HInternalScannerInterface
Close the scanner.
close() - Method in class org.apache.hadoop.hbase.HRegion
Close down this HRegion.
close(long) - Method in interface org.apache.hadoop.hbase.HRegionInterface
Close a scanner
close(long) - Method in class org.apache.hadoop.hbase.HRegionServer
Close a scanner
close() - Method in interface org.apache.hadoop.hbase.HScannerInterface
 
close() - Method in class org.apache.hadoop.hbase.Leases
Shut down this Leases instance.
close(Reporter) - Method in class org.apache.hadoop.hbase.mapred.TableOutputFormat.TableRecordWriter
 
close() - Method in interface org.apache.hadoop.io.Closeable
Called after the last call to any other method on this object to free and/or flush resources.
close() - Method in class org.apache.hadoop.io.compress.CompressionInputStream
 
close() - Method in class org.apache.hadoop.io.compress.CompressionOutputStream
 
close() - Method in class org.apache.hadoop.io.compress.GzipCodec.GzipInputStream
 
close() - Method in class org.apache.hadoop.io.compress.GzipCodec.GzipOutputStream
 
close() - Method in class org.apache.hadoop.io.MapFile.Reader
Close the map.
close() - Method in class org.apache.hadoop.io.MapFile.Writer
Close the map.
close() - Method in class org.apache.hadoop.io.SequenceFile.Reader
Close the file.
close() - Method in interface org.apache.hadoop.io.SequenceFile.Sorter.RawKeyValueIterator
closes the iterator so that the underlying streams can be closed
close() - Method in class org.apache.hadoop.io.SequenceFile.Writer
Close the file.
close() - Method in class org.apache.hadoop.mapred.JobClient
 
close() - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorCombiner
Do nothing.
close() - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorJobBase
 
close() - Method in class org.apache.hadoop.mapred.lib.FieldSelectionMapReduce
 
close() - Method in class org.apache.hadoop.mapred.LineRecordReader
 
close() - Method in class org.apache.hadoop.mapred.MapReduceBase
Default implementation that does nothing.
close() - Method in interface org.apache.hadoop.mapred.RecordReader
Close this to future operations.
close(Reporter) - Method in interface org.apache.hadoop.mapred.RecordWriter
Close this to future operations.
close() - Method in class org.apache.hadoop.mapred.SequenceFileRecordReader
 
close() - Method in class org.apache.hadoop.mapred.TaskLog.Reader
 
close() - Method in class org.apache.hadoop.mapred.TaskLogAppender
 
close() - Method in class org.apache.hadoop.mapred.TaskTracker
Close down the TaskTracker and all its components.
close(Reporter) - Method in class org.apache.hadoop.mapred.TextOutputFormat.LineRecordWriter
 
close() - Method in class org.apache.hadoop.metrics.jvm.EventCounter
 
close() - Method in interface org.apache.hadoop.metrics.MetricsContext
Stops monitoring and also frees any buffered data, returning this object to its initial state.
close() - Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
Stops monitoring and frees buffered data, returning this object to its initial state.
close() - Method in class org.apache.hadoop.streaming.PipeMapper
 
close() - Method in class org.apache.hadoop.streaming.PipeReducer
 
close() - Method in class org.apache.hadoop.streaming.StreamBaseRecordReader
Close this to future operations.
close() - Method in class org.apache.hadoop.util.CopyFiles.FSCopyFilesMapper
 
Closeable - Interface in org.apache.hadoop.io
That which can be closed.
closeAfterLeasesExpire() - Method in class org.apache.hadoop.hbase.Leases
Shuts down this lease instance when all outstanding leases expire.
closeAll() - Static method in class org.apache.hadoop.fs.FileSystem
Close all cached filesystems.
closed(Text) - Method in interface org.apache.hadoop.hbase.RegionUnavailableListener
regionName is closed and no longer available.
closeSocket(Socket) - Static method in class org.apache.hadoop.fs.FileUtil
 
closeStream(Closeable) - Static method in class org.apache.hadoop.fs.FileUtil
 
closing(Text) - Method in interface org.apache.hadoop.hbase.RegionUnavailableListener
regionName is closing.
cluster_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
ClusterStatus - Class in org.apache.hadoop.mapred
Summarizes the size and current state of the cluster.
cmdStatement() - Method in class org.apache.hadoop.hbase.shell.generated.Parser
 
CodeBuffer - Class in org.apache.hadoop.record.compiler
A wrapper around StringBuffer that automatically does indentation
COL_REGIONINFO - Static variable in interface org.apache.hadoop.hbase.HConstants
ROOT/META column family member - contains HRegionInfo
COL_SERVER - Static variable in interface org.apache.hadoop.hbase.HConstants
ROOT/META column family member - contains HServerAddress.toString()
COL_STARTCODE - Static variable in interface org.apache.hadoop.hbase.HConstants
ROOT/META column family member - contains server start code (a long)
collate(Object[], String) - Static method in class org.apache.hadoop.streaming.StreamUtil
 
collate(List, String) - Static method in class org.apache.hadoop.streaming.StreamUtil
 
collect(WritableComparable, TaggedMapOutput, OutputCollector, Reporter) - Method in class org.apache.hadoop.contrib.utils.join.DataJoinReducerBase
The subclass can overwrite this method to perform additional filtering and/or other processing logic before a value is collected.
collect(Text, KeyedDataArrayWritable) - Method in class org.apache.hadoop.hbase.mapred.TableOutputCollector
Restrict Table Map/Reduce's output to be a Text key and a record.
collect(WritableComparable, Writable) - Method in interface org.apache.hadoop.mapred.OutputCollector
Adds a key/value pair to the output.
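
A sketch of a map function emitting a pair through the collector, using the non-generic Mapper interface of this release:

    import java.io.IOException;
    import org.apache.hadoop.io.Writable;
    import org.apache.hadoop.io.WritableComparable;
    import org.apache.hadoop.mapred.MapReduceBase;
    import org.apache.hadoop.mapred.Mapper;
    import org.apache.hadoop.mapred.OutputCollector;
    import org.apache.hadoop.mapred.Reporter;

    public class IdentityLineMapper extends MapReduceBase implements Mapper {
      public void map(WritableComparable key, Writable value,
                      OutputCollector output, Reporter reporter) throws IOException {
        // Pass each input record through unchanged.
        output.collect(key, value);
      }
    }
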
collected - Variable in class org.apache.hadoop.contrib.utils.join.DataJoinReducerBase
 
collector - Variable in class org.apache.hadoop.hbase.mapred.TableOutputCollector
The collector object
COLUMN - Static variable in interface org.apache.hadoop.hbase.shell.generated.ParserConstants
 
column - Variable in class org.apache.hadoop.hbase.shell.generated.SimpleCharStream
 
column - Variable in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
COLUMN_FAMILY - Static variable in interface org.apache.hadoop.hbase.HConstants
The ROOT and META column family
COLUMN_FAMILY_ARRAY - Static variable in interface org.apache.hadoop.hbase.HConstants
 
COLUMN_LIST - Static variable in class org.apache.hadoop.hbase.mapred.TableInputFormat
space delimited list of columns
COLUMNFAMILIES - Static variable in interface org.apache.hadoop.hbase.shell.generated.ParserConstants
 
combine(Object[], Object[]) - Method in class org.apache.hadoop.contrib.utils.join.DataJoinReducerBase
 
comCmd_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
COMMA - Static variable in interface org.apache.hadoop.hbase.shell.generated.ParserConstants
 
COMMA_TKN - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
Command - Class in org.apache.hadoop.fs
A base class for running a unix command like du or df
Command() - Constructor for class org.apache.hadoop.fs.Command
 
Command - Interface in org.apache.hadoop.hbase.shell
 
CommandFactory - Interface in org.apache.hadoop.hbase.shell
The Parser uses command factories to create commands.
commit(long) - Method in class org.apache.hadoop.hbase.HClient.BatchHandler
Finalize a batch mutation
commit(long) - Method in class org.apache.hadoop.hbase.HClient
Finalize a row mutation
commit(long, long) - Method in class org.apache.hadoop.hbase.HClient
Finalize a row mutation
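
A sketch of a single-row mutation guarded by the client-side lock id; HClient's constructor, openTable, startUpdate and put are listed under later letters and their signatures are assumed here:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HClient;
    import org.apache.hadoop.io.Text;

    public class MutationSketch {
      public static void main(String[] args) throws Exception {
        HClient client = new HClient(new Configuration());      // assumed constructor
        client.openTable(new Text("webtable"));                  // assumed signature
        long lockid = client.startUpdate(new Text("row-1"));     // assumed: obtain a row lock id
        try {
          client.put(lockid, new Text("contents:"), "hello".getBytes());   // assumed signature
          client.commit(lockid);     // finalize the row mutation
        } catch (Exception e) {
          client.abort(lockid);      // give up on the pending mutation
          throw e;
        }
      }
    }
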
commit(long, long) - Method in class org.apache.hadoop.hbase.HRegion
Commit a pending set of writes to the memcache.
commit(Text, long, long, long) - Method in interface org.apache.hadoop.hbase.HRegionInterface
Finalize a row mutation
commit(Text, long, long, long) - Method in class org.apache.hadoop.hbase.HRegionServer
Finalize a row mutation
commit(Path) - Method in class org.apache.hadoop.mapred.PhasedFileSystem
Deprecated. Commits a single file to its final location as passed in create* methods.
commit() - Method in class org.apache.hadoop.mapred.PhasedFileSystem
Deprecated. Commits files to their final locations as passed in create* methods.
commitBatch() - Method in class org.apache.hadoop.hbase.HClient
Finalize a batch mutation
commitBatch(long) - Method in class org.apache.hadoop.hbase.HClient
Finalize a batch mutation
compare(byte[], int) - Method in class org.apache.hadoop.dfs.DataChecksum
Compares the checksum located at buf[offset] with the current checksum.
compare(byte[], int, int, byte[], int, int) - Method in class org.apache.hadoop.hbase.io.ImmutableBytesWritable.Comparator
Compare the buffers in serialized form.
compare(byte[], int, int, byte[], int, int) - Method in class org.apache.hadoop.io.BooleanWritable.Comparator
 
compare(byte[], int, int, byte[], int, int) - Method in class org.apache.hadoop.io.BytesWritable.Comparator
Compare the buffers in serialized form.
compare(byte[], int, int, byte[], int, int) - Method in class org.apache.hadoop.io.FloatWritable.Comparator
 
compare(byte[], int, int, byte[], int, int) - Method in class org.apache.hadoop.io.IntWritable.Comparator
 
compare(byte[], int, int, byte[], int, int) - Method in class org.apache.hadoop.io.LongWritable.Comparator
 
compare(WritableComparable, WritableComparable) - Method in class org.apache.hadoop.io.LongWritable.DecreasingComparator
 
compare(byte[], int, int, byte[], int, int) - Method in class org.apache.hadoop.io.LongWritable.DecreasingComparator
 
compare(byte[], int, int, byte[], int, int) - Method in class org.apache.hadoop.io.MD5Hash.Comparator
 
compare(byte[], int, int, byte[], int, int) - Method in class org.apache.hadoop.io.Text.Comparator
 
compare(byte[], int, int, byte[], int, int) - Method in class org.apache.hadoop.io.UTF8.Comparator
Deprecated.  
compare(byte[], int, int, byte[], int, int) - Method in class org.apache.hadoop.io.WritableComparator
Optimization hook.
compare(WritableComparable, WritableComparable) - Method in class org.apache.hadoop.io.WritableComparator
Compare two WritableComparables.
compare(Object, Object) - Method in class org.apache.hadoop.io.WritableComparator
 
compare(byte[], int, int, byte[], int, int) - Method in class org.apache.hadoop.record.RecordComparator
 
compare(byte[], int, int, byte[], int, int) - Method in class org.apache.hadoop.tools.Logalyzer.LogComparator
 
compareBytes(byte[], int, int, byte[], int, int) - Static method in class org.apache.hadoop.io.WritableComparator
Lexicographic order of binary data.
compareBytes(byte[], int, int, byte[], int, int) - Static method in class org.apache.hadoop.record.Utils
Lexicographic order of binary data.
compareTo(Object) - Method in class org.apache.hadoop.dfs.DatanodeID
Comparable.
compareTo(Object) - Method in class org.apache.hadoop.fs.Path
 
compareTo(Object) - Method in class org.apache.hadoop.hbase.BloomFilterDescriptor
 
compareTo(Object) - Method in class org.apache.hadoop.hbase.HClient.RegionLocation
compareTo(Object) - Method in class org.apache.hadoop.hbase.HColumnDescriptor
 
compareTo(Object) - Method in class org.apache.hadoop.hbase.HLogKey
 
compareTo(Object) - Method in class org.apache.hadoop.hbase.HRegionInfo
compareTo(Object) - Method in class org.apache.hadoop.hbase.HServerAddress
compareTo(Object) - Method in class org.apache.hadoop.hbase.HStoreFile
 
compareTo(Object) - Method in class org.apache.hadoop.hbase.HStoreKey
 
compareTo(Object) - Method in class org.apache.hadoop.hbase.HTableDescriptor
 
compareTo(Object) - Method in class org.apache.hadoop.hbase.io.ImmutableBytesWritable
Define the sort order of the BytesWritable.
compareTo(byte[]) - Method in class org.apache.hadoop.hbase.io.ImmutableBytesWritable
 
compareTo(Object) - Method in class org.apache.hadoop.io.BooleanWritable
 
compareTo(Object) - Method in class org.apache.hadoop.io.BytesWritable
Define the sort order of the BytesWritable.
compareTo(Object) - Method in class org.apache.hadoop.io.FloatWritable
Compares two FloatWritables.
compareTo(Object) - Method in class org.apache.hadoop.io.IntWritable
Compares two IntWritables.
compareTo(Object) - Method in class org.apache.hadoop.io.LongWritable
Compares two LongWritables.
compareTo(Object) - Method in class org.apache.hadoop.io.MD5Hash
Compares this object with the specified object for order.
compareTo(Object) - Method in class org.apache.hadoop.io.SequenceFile.Sorter.SegmentDescriptor
 
compareTo(Object) - Method in class org.apache.hadoop.io.Text
Compare two Texts bytewise using standard UTF8 ordering.
compareTo(Object) - Method in class org.apache.hadoop.io.UTF8
Deprecated. Compare two UTF8s.
compareTo(Object) - Method in class org.apache.hadoop.io.VIntWritable
Compares two VIntWritables.
compareTo(Object) - Method in class org.apache.hadoop.io.VLongWritable
Compares two VLongWritables.
compareTo(Object) - Method in class org.apache.hadoop.record.Buffer
Define the sort order of the Buffer.
compareTo(Object) - Method in class org.apache.hadoop.record.Record
 
compareTo(Object) - Method in class org.onelab.filter.BloomFilter
 
compareTo(Object) - Method in class org.onelab.filter.CountingBloomFilter
 
compareTo(Object) - Method in class org.onelab.filter.DynamicBloomFilter
 
compareTo(Object) - Method in class org.onelab.filter.Filter
 
compareTo(Object) - Method in class org.onelab.filter.Key
 
compareTo(Object) - Method in class org.onelab.filter.RetouchedBloomFilter
 
complete(String, String) - Method in class org.apache.hadoop.dfs.NameNode
 
complete() - Method in class org.apache.hadoop.util.Progress
Completes this node, moving the parent node to its next child.
COMPLETE_CACHEFLUSH - Static variable in interface org.apache.hadoop.hbase.HConstants
Value written to HLog on a complete cache flush
COMPLETE_SUCCESS - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
completedJobs() - Method in class org.apache.hadoop.mapred.JobTracker
 
completeLocalOutput(Path, Path) - Method in class org.apache.hadoop.fs.ChecksumFileSystem
 
completeLocalOutput(Path, Path) - Method in class org.apache.hadoop.fs.FileSystem
Called when we're all done writing to the target.
completeLocalOutput(Path, Path) - Method in class org.apache.hadoop.fs.FilterFileSystem
Called when we're all done writing to the target.
completeLocalOutput(Path, Path) - Method in class org.apache.hadoop.fs.RawLocalFileSystem
 
completeLocalOutput(Path, Path) - Method in class org.apache.hadoop.mapred.PhasedFileSystem
Deprecated.  
completeUpgrade() - Method in interface org.apache.hadoop.dfs.Upgradeable
Complete upgrade.
compress(byte[], int, int) - Method in interface org.apache.hadoop.io.compress.Compressor
Fills specified buffer with compressed data.
compress(byte[], int, int) - Method in class org.apache.hadoop.io.compress.lzo.LzoCompressor
 
compress(byte[], int, int) - Method in class org.apache.hadoop.io.compress.zlib.BuiltInZlibDeflater
 
compress(byte[], int, int) - Method in class org.apache.hadoop.io.compress.zlib.ZlibCompressor
 
CompressedWritable - Class in org.apache.hadoop.io
A base-class for Writables which store themselves compressed and lazily inflate on field access.
CompressedWritable() - Constructor for class org.apache.hadoop.io.CompressedWritable
 
CompressionCodec - Interface in org.apache.hadoop.io.compress
This class encapsulates a streaming compression/decompression pair.
CompressionCodecFactory - Class in org.apache.hadoop.io.compress
A factory that will find the correct codec for a given filename.
CompressionCodecFactory(Configuration) - Constructor for class org.apache.hadoop.io.compress.CompressionCodecFactory
Find the codecs specified in the config value io.compression.codecs and register them.
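
A sketch of picking a codec by filename and reading through it; getCodec(Path) and CompressionCodec.createInputStream appear under later letters and are assumed here:

    import java.io.InputStream;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.compress.CompressionCodec;
    import org.apache.hadoop.io.compress.CompressionCodecFactory;

    public class CodecSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path file = new Path("/logs/events.gz");             // hypothetical path
        CompressionCodecFactory factory = new CompressionCodecFactory(conf);
        CompressionCodec codec = factory.getCodec(file);     // assumed: lookup by file extension
        InputStream in = (codec == null)
            ? fs.open(file)
            : codec.createInputStream(fs.open(file));        // assumed decompressing wrapper
        in.close();
      }
    }
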
CompressionInputStream - Class in org.apache.hadoop.io.compress
A compression input stream.
CompressionInputStream(InputStream) - Constructor for class org.apache.hadoop.io.compress.CompressionInputStream
Create a compression input stream that reads the decompressed bytes from the given stream.
CompressionOutputStream - Class in org.apache.hadoop.io.compress
A compression output stream.
CompressionOutputStream(OutputStream) - Constructor for class org.apache.hadoop.io.compress.CompressionOutputStream
Create a compression output stream that writes the compressed bytes to the given stream.
Compressor - Interface in org.apache.hadoop.io.compress
Specification of a stream-based 'compressor' which can be plugged into a CompressionOutputStream to compress data.
ConditionExpression() - Method in class org.apache.hadoop.hbase.shell.generated.Parser
 
conf - Static variable in class org.apache.hadoop.dfs.JspHelper
 
conf - Variable in class org.apache.hadoop.hbase.HRegionServer
 
conf - Variable in class org.apache.hadoop.mapred.SequenceFileRecordReader
 
conf - Variable in class org.apache.hadoop.util.ToolBase
 
config_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
configPath_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
Configurable - Interface in org.apache.hadoop.conf
Something that may be configured with a Configuration.
Configuration - Class in org.apache.hadoop.conf
Provides access to configuration parameters.
Configuration() - Constructor for class org.apache.hadoop.conf.Configuration
A new configuration.
Configuration(Configuration) - Constructor for class org.apache.hadoop.conf.Configuration
A new configuration with the same settings cloned from another.
configure(JobConf) - Method in class org.apache.hadoop.contrib.utils.join.DataJoinMapperBase
 
configure(JobConf) - Method in class org.apache.hadoop.contrib.utils.join.DataJoinReducerBase
 
configure(JobConf) - Method in class org.apache.hadoop.contrib.utils.join.JobBase
Initializes a new instance from a JobConf.
configure(JobConf) - Method in class org.apache.hadoop.examples.dancing.DistributedPentomino.PentMap
 
configure(JobConf) - Method in class org.apache.hadoop.examples.PiEstimator.PiMapper
Mapper configuration.
configure(JobConf) - Method in class org.apache.hadoop.examples.PiEstimator.PiReducer
Reducer configuration.
configure(JobConf) - Method in class org.apache.hadoop.hbase.mapred.GroupingTableMap
 
configure(JobConf) - Method in class org.apache.hadoop.hbase.mapred.TableInputFormat
 
configure(JobConf) - Method in class org.apache.hadoop.hbase.mapred.TableMap
 
configure(JobConf) - Method in interface org.apache.hadoop.mapred.JobConfigurable
Initializes a new instance from a JobConf.
configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.aggregate.UserDefinedValueAggregatorDescriptor
Do nothing.
configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorBaseDescriptor
get the input file name.
configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorCombiner
The combiner does not need to be configured.
configure(JobConf) - Method in interface org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorDescriptor
Configure the object
configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorJobBase
 
configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.FieldSelectionMapReduce
 
configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.HashPartitioner
 
configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.KeyFieldBasedPartitioner
 
configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.MultithreadedMapRunner
 
configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.RegexMapper
 
configure(JobConf) - Method in class org.apache.hadoop.mapred.MapReduceBase
Default implementation that does nothing.
configure(JobConf) - Method in class org.apache.hadoop.mapred.MapRunner
 
configure(JobConf) - Method in class org.apache.hadoop.mapred.TextInputFormat
 
configure(JobConf) - Method in class org.apache.hadoop.streaming.PipeMapper
 
configure(JobConf) - Method in class org.apache.hadoop.streaming.PipeMapRed
 
configure(JobConf) - Method in class org.apache.hadoop.tools.Logalyzer.LogRegexMapper
 
configure(JobConf) - Method in class org.apache.hadoop.util.CopyFiles.FSCopyFilesMapper
Mapper configuration.
configure(JobConf) - Method in class org.apache.hadoop.util.CopyFiles.HTTPCopyFilesMapper
 
Configured - Class in org.apache.hadoop.conf
Base class for things that may be configured with a Configuration.
Configured(Configuration) - Constructor for class org.apache.hadoop.conf.Configured
Construct a Configured.
ConsoleTable - Class in org.apache.hadoop.hbase.shell
Manufactures a console table; the implementation is simplistic.
ConsoleTable() - Constructor for class org.apache.hadoop.hbase.shell.ConsoleTable
 
contains(DatanodeDescriptor) - Method in class org.apache.hadoop.net.NetworkTopology
Check if the tree contains the given data node.
ContextFactory - Class in org.apache.hadoop.metrics
Factory class for creating MetricsContext objects.
ContextFactory() - Constructor for class org.apache.hadoop.metrics.ContextFactory
Creates a new instance of ContextFactory
copy(FileSystem, Path, FileSystem, Path, boolean, Configuration) - Static method in class org.apache.hadoop.fs.FileUtil
Copy files between FileSystems.
copy(File, FileSystem, Path, boolean, Configuration) - Static method in class org.apache.hadoop.fs.FileUtil
Copy local files to a FileSystem.
copy(FileSystem, Path, File, boolean, Configuration) - Static method in class org.apache.hadoop.fs.FileUtil
Copy FileSystem files to local files.
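
A sketch of copying between two FileSystems; the boolean flag controls whether the source is deleted after the copy, and the paths are hypothetical:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.FileUtil;
    import org.apache.hadoop.fs.Path;

    public class CopySketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem localFs = FileSystem.getLocal(conf);
        FileSystem dfs = FileSystem.get(conf);
        // Copy a local file into the DFS; 'false' keeps the source in place.
        FileUtil.copy(localFs, new Path("/tmp/input.txt"),      // hypothetical source
                      dfs, new Path("/user/me/input.txt"),      // hypothetical destination
                      false, conf);
      }
    }
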
copy(byte[], int, int) - Method in class org.apache.hadoop.record.Buffer
Copy the specified byte array to the Buffer.
copy(Configuration, String, String, Path, boolean, boolean) - Static method in class org.apache.hadoop.util.CopyFiles
Driver to copy srcPath to destPath depending on required protocol.
CopyFiles - Class in org.apache.hadoop.util
A Map-reduce program to recursively copy directories between different file-systems.
CopyFiles() - Constructor for class org.apache.hadoop.util.CopyFiles
 
CopyFiles.CopyFilesMapper - Class in org.apache.hadoop.util
Base class for all distcp mappers.
CopyFiles.CopyFilesMapper() - Constructor for class org.apache.hadoop.util.CopyFiles.CopyFilesMapper
 
CopyFiles.FSCopyFilesMapper - Class in org.apache.hadoop.util
DFSCopyFilesMapper: The mapper for copying files from the DFS.
CopyFiles.FSCopyFilesMapper() - Constructor for class org.apache.hadoop.util.CopyFiles.FSCopyFilesMapper
 
CopyFiles.HTTPCopyFilesMapper - Class in org.apache.hadoop.util
 
CopyFiles.HTTPCopyFilesMapper() - Constructor for class org.apache.hadoop.util.CopyFiles.HTTPCopyFilesMapper
 
copyFromLocalFile(boolean, Path, Path) - Method in class org.apache.hadoop.fs.ChecksumFileSystem
 
copyFromLocalFile(Path, Path) - Method in class org.apache.hadoop.fs.FileSystem
The src file is on the local disk.
copyFromLocalFile(boolean, Path, Path) - Method in class org.apache.hadoop.fs.FileSystem
The src file is on the local disk.
copyFromLocalFile(boolean, Path, Path) - Method in class org.apache.hadoop.fs.FilterFileSystem
The src file is on the local disk.
copyFromLocalFile(boolean, Path, Path) - Method in class org.apache.hadoop.fs.LocalFileSystem
 
copyFromLocalFile(boolean, Path, Path) - Method in class org.apache.hadoop.mapred.PhasedFileSystem
Deprecated.  
copyMerge(FileSystem, Path, FileSystem, Path, boolean, Configuration, String) - Static method in class org.apache.hadoop.fs.FileUtil
Copy all files in a directory to one output file (merge).
copyToLocalFile(boolean, Path, Path) - Method in class org.apache.hadoop.fs.ChecksumFileSystem
The src file is under FS, and the dst is on the local disk.
copyToLocalFile(Path, Path, boolean) - Method in class org.apache.hadoop.fs.ChecksumFileSystem
The src file is under FS, and the dst is on the local disk.
copyToLocalFile(Path, Path) - Method in class org.apache.hadoop.fs.FileSystem
The src file is under FS, and the dst is on the local disk.
copyToLocalFile(boolean, Path, Path) - Method in class org.apache.hadoop.fs.FileSystem
The src file is under FS, and the dst is on the local disk.
copyToLocalFile(boolean, Path, Path) - Method in class org.apache.hadoop.fs.FilterFileSystem
The src file is under FS, and the dst is on the local disk.
copyToLocalFile(boolean, Path, Path) - Method in class org.apache.hadoop.fs.LocalFileSystem
 
copyToLocalFile(boolean, Path, Path) - Method in class org.apache.hadoop.mapred.PhasedFileSystem
Deprecated.  
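
A minimal sketch of moving a file into and back out of the configured file system with these methods; the paths are hypothetical.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class LocalCopyExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);   // the configured (e.g. DFS) file system
        // Push a local file into the file system ...
        fs.copyFromLocalFile(new Path("/tmp/report.txt"), new Path("/user/reports/report.txt"));
        // ... and pull it back onto the local disk.
        fs.copyToLocalFile(new Path("/user/reports/report.txt"), new Path("/tmp/report-copy.txt"));
      }
    }
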
Counters - Class in org.apache.hadoop.mapred
A set of named counters.
Counters() - Constructor for class org.apache.hadoop.mapred.Counters
 
Counters.Group - Class in org.apache.hadoop.mapred
Represents a group of counters, comprising the counters from a particular counter enum class.
COUNTING_BLOOMFILTER - Static variable in class org.apache.hadoop.hbase.BloomFilterDescriptor
counting Bloom filter, as defined by Fan et al.
CountingBloomFilter - Class in org.onelab.filter
Implements a counting Bloom filter, as defined by Fan et al.
CountingBloomFilter() - Constructor for class org.onelab.filter.CountingBloomFilter
Default constructor - use with readFields
CountingBloomFilter(int, int) - Constructor for class org.onelab.filter.CountingBloomFilter
Constructor
countNumOfAvailableNodes(String, List<DatanodeDescriptor>) - Method in class org.apache.hadoop.net.NetworkTopology
Return the number of leaves that are in scope but not in excludedNodes; if scope starts with ~, return the number of data nodes that are in neither scope nor excludedNodes.
create(Path, boolean, int, short, long, Progressable) - Method in class org.apache.hadoop.dfs.DistributedFileSystem
 
create(Path, boolean, int, short, long, Progressable) - Method in class org.apache.hadoop.dfs.HftpFileSystem
 
create(String, String, boolean, short, long) - Method in class org.apache.hadoop.dfs.NameNode
 
create(Path, boolean, int, short, long, Progressable) - Method in class org.apache.hadoop.fs.ChecksumFileSystem
Opens an FSDataOutputStream at the indicated Path with write-progress reporting.
create(Path) - Method in class org.apache.hadoop.fs.FileSystem
Opens an FSDataOutputStream at the indicated Path.
create(Path, Progressable) - Method in class org.apache.hadoop.fs.FileSystem
Create an FSDataOutputStream at the indicated Path with write-progress reporting.
create(Path, short) - Method in class org.apache.hadoop.fs.FileSystem
Opens an FSDataOutputStream at the indicated Path.
create(Path, short, Progressable) - Method in class org.apache.hadoop.fs.FileSystem
Opens an FSDataOutputStream at the indicated Path with write-progress reporting.
create(Path, boolean, int) - Method in class org.apache.hadoop.fs.FileSystem
Opens an FSDataOutputStream at the indicated Path.
create(Path, boolean, int, Progressable) - Method in class org.apache.hadoop.fs.FileSystem
Opens an FSDataOutputStream at the indicated Path with write-progress reporting.
create(Path, boolean, int, short, long) - Method in class org.apache.hadoop.fs.FileSystem
Opens an FSDataOutputStream at the indicated Path.
create(Path, boolean, int, short, long, Progressable) - Method in class org.apache.hadoop.fs.FileSystem
Opens an FSDataOutputStream at the indicated Path with write-progress reporting.
create(Path, boolean, int, short, long, Progressable) - Method in class org.apache.hadoop.fs.FilterFileSystem
Opens an FSDataOutputStream at the indicated Path with write-progress reporting.
create(Path, boolean, int, short, long, Progressable) - Method in class org.apache.hadoop.fs.RawLocalFileSystem
 
create(Path, boolean, int, short, long, Progressable) - Method in class org.apache.hadoop.fs.s3.S3FileSystem
 
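
For instance, creating a file with an explicit buffer size, replication factor and block size could look like the sketch below; the path and sizing values are hypothetical.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class CreateExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        // Overwrite any existing file, 4 KB buffer, replication 3, 64 MB blocks.
        FSDataOutputStream out = fs.create(new Path("/user/data/part-0"),
                                           true, 4096, (short) 3, 64L * 1024 * 1024);
        out.writeUTF("hello");
        out.close();
      }
    }
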
CREATE - Static variable in interface org.apache.hadoop.hbase.shell.generated.ParserConstants
 
create(Class<?>, Object, RetryPolicy) - Static method in class org.apache.hadoop.io.retry.RetryProxy
Create a proxy for an interface of an implementation class using the same retry policy for each method in the interface.
create(Class<?>, Object, Map<String, RetryPolicy>) - Static method in class org.apache.hadoop.io.retry.RetryProxy
Create a proxy for an interface of an implementation class using a set of retry policies specified by method name.
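
A minimal sketch of wrapping an implementation behind a retry proxy; the Echo interface and FlakyEcho class are hypothetical placeholders, and the backoff parameters are illustrative only.

    import java.util.concurrent.TimeUnit;
    import org.apache.hadoop.io.retry.RetryPolicies;
    import org.apache.hadoop.io.retry.RetryPolicy;
    import org.apache.hadoop.io.retry.RetryProxy;

    public class RetryExample {
      // Hypothetical interface and implementation, used only for illustration.
      interface Echo { String echo(String msg) throws Exception; }
      static class FlakyEcho implements Echo {
        public String echo(String msg) throws Exception { return msg; }
      }

      public static void main(String[] args) throws Exception {
        RetryPolicy policy =
            RetryPolicies.exponentialBackoffRetry(5, 200, TimeUnit.MILLISECONDS);
        Echo echo = (Echo) RetryProxy.create(Echo.class, new FlakyEcho(), policy);
        System.out.println(echo.echo("ping"));   // failed calls are retried per the policy
      }
    }
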
create(Path, boolean, int, short, long, Progressable) - Method in class org.apache.hadoop.mapred.PhasedFileSystem
Deprecated.  
createAllSymlink(Configuration, File, File) - Static method in class org.apache.hadoop.filecache.DistributedCache
This method creates symlinks in another directory for all files in a given directory
CreateCommand - Class in org.apache.hadoop.hbase.shell
 
CreateCommand() - Constructor for class org.apache.hadoop.hbase.shell.CreateCommand
 
createCommand() - Method in class org.apache.hadoop.hbase.shell.generated.Parser
 
createCompressor() - Method in interface org.apache.hadoop.io.compress.CompressionCodec
Create a new Compressor for use by this CompressionCodec.
createCompressor() - Method in class org.apache.hadoop.io.compress.DefaultCodec
 
createCompressor() - Method in class org.apache.hadoop.io.compress.GzipCodec
 
createCompressor() - Method in class org.apache.hadoop.io.compress.LzoCodec
 
createDataJoinJob(String[]) - Static method in class org.apache.hadoop.contrib.utils.join.DataJoinJob
 
createDecompressor() - Method in interface org.apache.hadoop.io.compress.CompressionCodec
Create a new Decompressor for use by this CompressionCodec.
createDecompressor() - Method in class org.apache.hadoop.io.compress.DefaultCodec
 
createDecompressor() - Method in class org.apache.hadoop.io.compress.GzipCodec
 
createDecompressor() - Method in class org.apache.hadoop.io.compress.LzoCodec
 
createGroupKey(byte[][]) - Method in class org.apache.hadoop.hbase.mapred.GroupingTableMap
Create a key by concatenating multiple column values.
createHardLink(File, File) - Static method in class org.apache.hadoop.fs.FileUtil.HardLink
 
createInputStream(InputStream) - Method in interface org.apache.hadoop.io.compress.CompressionCodec
Create a stream decompressor that will read from the given input stream.
createInputStream(InputStream, Decompressor) - Method in interface org.apache.hadoop.io.compress.CompressionCodec
Create a CompressionInputStream that will read from the given InputStream with the given Decompressor.
createInputStream(InputStream) - Method in class org.apache.hadoop.io.compress.DefaultCodec
 
createInputStream(InputStream, Decompressor) - Method in class org.apache.hadoop.io.compress.DefaultCodec
 
createInputStream(InputStream) - Method in class org.apache.hadoop.io.compress.GzipCodec
 
createInputStream(InputStream, Decompressor) - Method in class org.apache.hadoop.io.compress.GzipCodec
 
createInputStream(InputStream) - Method in class org.apache.hadoop.io.compress.LzoCodec
 
createInputStream(InputStream, Decompressor) - Method in class org.apache.hadoop.io.compress.LzoCodec
 
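
Reading a gzip-compressed file through a codec might look like the sketch below; the input path is hypothetical, and GzipCodec stands in for any CompressionCodec.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.compress.CompressionCodec;
    import org.apache.hadoop.io.compress.CompressionInputStream;
    import org.apache.hadoop.io.compress.GzipCodec;
    import org.apache.hadoop.util.ReflectionUtils;

    public class DecompressExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        // Instantiate the codec through ReflectionUtils so it picks up the Configuration.
        CompressionCodec codec =
            (CompressionCodec) ReflectionUtils.newInstance(GzipCodec.class, conf);
        CompressionInputStream in =
            codec.createInputStream(fs.open(new Path("/user/data/input.gz")));
        BufferedReader reader = new BufferedReader(new InputStreamReader(in));
        System.out.println(reader.readLine());   // first line of the decompressed data
        reader.close();
      }
    }
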
createInstance(String) - Static method in class org.apache.hadoop.mapred.lib.aggregate.UserDefinedValueAggregatorDescriptor
Create an instance of the given class
createKey(String) - Static method in class org.apache.hadoop.hbase.util.Keying
Makes a key out of the passed URI for use as a row name or column qualifier.
createKey() - Method in class org.apache.hadoop.mapred.KeyValueLineRecordReader
 
createKey() - Method in class org.apache.hadoop.mapred.LineRecordReader
 
createKey() - Method in interface org.apache.hadoop.mapred.RecordReader
Create an object of the appropriate type to be used as a key.
createKey() - Method in class org.apache.hadoop.mapred.SequenceFileAsTextRecordReader
 
createKey() - Method in class org.apache.hadoop.mapred.SequenceFileRecordReader
 
createKey() - Method in class org.apache.hadoop.streaming.StreamBaseRecordReader
 
createLease(long, long, LeaseListener) - Method in class org.apache.hadoop.hbase.Leases
Obtain a lease
createLeaseName(long, long) - Method in class org.apache.hadoop.hbase.Leases
Create a lease id out of the holder and resource ids.
createLocalTempFile(File, String, boolean) - Static method in class org.apache.hadoop.fs.FileUtil
Create a tmp file for a base file.
createNewFile(Path) - Method in class org.apache.hadoop.fs.FileSystem
Creates the given Path as a brand-new zero-length file.
createOutputStream(OutputStream) - Method in interface org.apache.hadoop.io.compress.CompressionCodec
Create a CompressionOutputStream that will write to the given OutputStream.
createOutputStream(OutputStream, Compressor) - Method in interface org.apache.hadoop.io.compress.CompressionCodec
Create a CompressionOutputStream that will write to the given OutputStream with the given Compressor.
createOutputStream(OutputStream) - Method in class org.apache.hadoop.io.compress.DefaultCodec
 
createOutputStream(OutputStream, Compressor) - Method in class org.apache.hadoop.io.compress.DefaultCodec
 
createOutputStream(OutputStream) - Method in class org.apache.hadoop.io.compress.GzipCodec
 
createOutputStream(OutputStream, Compressor) - Method in class org.apache.hadoop.io.compress.GzipCodec
 
createOutputStream(OutputStream) - Method in class org.apache.hadoop.io.compress.LzoCodec
 
createOutputStream(OutputStream, Compressor) - Method in class org.apache.hadoop.io.compress.LzoCodec
 
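
The corresponding write path, sketched with GzipCodec and a hypothetical output path.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.compress.CompressionCodec;
    import org.apache.hadoop.io.compress.CompressionOutputStream;
    import org.apache.hadoop.io.compress.GzipCodec;
    import org.apache.hadoop.util.ReflectionUtils;

    public class CompressExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        CompressionCodec codec =
            (CompressionCodec) ReflectionUtils.newInstance(GzipCodec.class, conf);
        CompressionOutputStream out =
            codec.createOutputStream(fs.create(new Path("/user/data/output.gz")));
        out.write("compressed payload\n".getBytes("UTF-8"));
        out.finish();   // flush remaining compressed data without closing the underlying stream
        out.close();
      }
    }
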
createRecord(String) - Method in interface org.apache.hadoop.metrics.MetricsContext
Creates a new MetricsRecord instance with the given recordName.
createRecord(MetricsContext, String) - Static method in class org.apache.hadoop.metrics.MetricsUtil
Utility method to create and return a new metrics record instance within the given context.
createRecord(String) - Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
Creates a new AbstractMetricsRecord instance with the given recordName.
createResetableIterator() - Method in class org.apache.hadoop.contrib.utils.join.DataJoinReducerBase
The subclass can provide a different implementation of ResetableIterator.
createSocketAddr(String) - Static method in class org.apache.hadoop.dfs.DataNode
Util method to build a socket address from either a host:port pair or a filesystem URI of the form protocol://host:port/path.
createSymlink(Configuration) - Static method in class org.apache.hadoop.filecache.DistributedCache
This method allows you to create symlinks in the current working directory of the task to all the cache files/archives
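
A minimal job-setup sketch; the cached file path is hypothetical, and the URI fragment is assumed to supply the symlink name.

    import java.net.URI;
    import org.apache.hadoop.filecache.DistributedCache;
    import org.apache.hadoop.mapred.JobConf;

    public class CacheSetupExample {
      public static void main(String[] args) throws Exception {
        JobConf job = new JobConf(CacheSetupExample.class);
        // Ship a lookup file with the job and request a symlink named "lookup.dat"
        // in each task's working directory.
        DistributedCache.addCacheFile(new URI("/user/data/lookup.dat#lookup.dat"), job);
        DistributedCache.createSymlink(job);
        // ... set mapper/reducer classes and submit the job as usual ...
      }
    }
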
createTable(HTableDescriptor) - Method in class org.apache.hadoop.hbase.HClient
Creates a new table
createTable(HTableDescriptor) - Method in class org.apache.hadoop.hbase.HMaster
createTable(HTableDescriptor) - Method in interface org.apache.hadoop.hbase.HMasterInterface
 
createTableAsync(HTableDescriptor) - Method in class org.apache.hadoop.hbase.HClient
Creates a new table but does not block and wait for it to come online.
createTitle(JspWriter, HttpServletRequest, String) - Static method in class org.apache.hadoop.dfs.JspHelper
 
createTmpFileForWrite(String, long, Configuration) - Method in class org.apache.hadoop.fs.LocalDirAllocator
Creates a temporary file in the local FS.
createValue() - Method in class org.apache.hadoop.mapred.LineRecordReader
 
createValue() - Method in interface org.apache.hadoop.mapred.RecordReader
Create an object of the appropriate type to be used as the value.
createValue() - Method in class org.apache.hadoop.mapred.SequenceFileAsTextRecordReader
 
createValue() - Method in class org.apache.hadoop.mapred.SequenceFileRecordReader
 
createValue() - Method in class org.apache.hadoop.streaming.StreamBaseRecordReader
 
createValueAggregatorJob(String[]) - Static method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorJob
Create an Aggregate based map/reduce job.
createValueAggregatorJobs(String[]) - Static method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorJob
 
createValueBytes() - Method in class org.apache.hadoop.io.SequenceFile.Reader
 
createWriter(FileSystem, Configuration, Path, Class, Class) - Static method in class org.apache.hadoop.io.SequenceFile
Construct the preferred type of SequenceFile Writer.
createWriter(FileSystem, Configuration, Path, Class, Class, SequenceFile.CompressionType) - Static method in class org.apache.hadoop.io.SequenceFile
Construct the preferred type of SequenceFile Writer.
createWriter(FileSystem, Configuration, Path, Class, Class, SequenceFile.CompressionType, Progressable) - Static method in class org.apache.hadoop.io.SequenceFile
Construct the preferred type of SequenceFile Writer.
createWriter(FileSystem, Configuration, Path, Class, Class, SequenceFile.CompressionType, CompressionCodec) - Static method in class org.apache.hadoop.io.SequenceFile
Construct the preferred type of SequenceFile Writer.
createWriter(FileSystem, Configuration, Path, Class, Class, SequenceFile.CompressionType, CompressionCodec, Progressable, SequenceFile.Metadata) - Static method in class org.apache.hadoop.io.SequenceFile
Construct the preferred type of SequenceFile Writer.
createWriter(FileSystem, Configuration, Path, Class, Class, SequenceFile.CompressionType, CompressionCodec, Progressable) - Static method in class org.apache.hadoop.io.SequenceFile
Construct the preferred type of SequenceFile Writer.
createWriter(Configuration, FSDataOutputStream, Class, Class, SequenceFile.CompressionType, CompressionCodec, SequenceFile.Metadata) - Static method in class org.apache.hadoop.io.SequenceFile
Construct the preferred type of 'raw' SequenceFile Writer.
createWriter(Configuration, FSDataOutputStream, Class, Class, SequenceFile.CompressionType, CompressionCodec) - Static method in class org.apache.hadoop.io.SequenceFile
Construct the preferred type of 'raw' SequenceFile Writer.
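
For example, writing Text/IntWritable pairs with the simplest createWriter overload; the output path is hypothetical.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.SequenceFile;
    import org.apache.hadoop.io.Text;

    public class SeqWriteExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        SequenceFile.Writer writer = SequenceFile.createWriter(
            fs, conf, new Path("/user/data/pairs.seq"), Text.class, IntWritable.class);
        writer.append(new Text("alpha"), new IntWritable(1));
        writer.append(new Text("beta"), new IntWritable(2));
        writer.close();
      }
    }
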
CSTRING_TKN - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
CsvRecordInput - Class in org.apache.hadoop.record
 
CsvRecordInput(InputStream) - Constructor for class org.apache.hadoop.record.CsvRecordInput
Creates a new instance of CsvRecordInput
CsvRecordOutput - Class in org.apache.hadoop.record
 
CsvRecordOutput(OutputStream) - Constructor for class org.apache.hadoop.record.CsvRecordOutput
Creates a new instance of CsvRecordOutput
CUR_DIR - Static variable in class org.apache.hadoop.fs.Path
 
curChar - Variable in class org.apache.hadoop.hbase.shell.generated.ParserTokenManager
 
curChar - Variable in class org.apache.hadoop.record.compiler.generated.RccTokenManager
 
currentToken - Variable in exception org.apache.hadoop.hbase.shell.generated.ParseException
This is the last token that has been consumed successfully.
currentToken - Variable in exception org.apache.hadoop.record.compiler.generated.ParseException
This is the last token that has been consumed successfully.

D

Daemon - Class in org.apache.hadoop.util
A thread that has called Thread.setDaemon(boolean) with true.
Daemon() - Constructor for class org.apache.hadoop.util.Daemon
Construct a daemon thread.
Daemon(Runnable) - Constructor for class org.apache.hadoop.util.Daemon
Construct a daemon thread.
DancingLinks<ColumnName> - Class in org.apache.hadoop.examples.dancing
A generic solver for tile-laying problems using Knuth's dancing links algorithm.
DancingLinks() - Constructor for class org.apache.hadoop.examples.dancing.DancingLinks
 
DancingLinks.SolutionAcceptor<ColumnName> - Interface in org.apache.hadoop.examples.dancing
Applications should implement this to receive the solutions to their problems.
DATA_FILE_NAME - Static variable in class org.apache.hadoop.io.MapFile
The name of the data file.
DATA_TRANFER_VERSION - Static variable in interface org.apache.hadoop.dfs.FSConstants
Version for data transfers between clients and datanodes. This should change when the serialization of DatanodeInfo changes, not just when the protocol changes.
DataChecksum - Class in org.apache.hadoop.dfs
This class provides an interface and utilities for processing checksums for DFS data transfers.
DataInputBuffer - Class in org.apache.hadoop.io
A reusable DataInput implementation that reads from an in-memory buffer.
DataInputBuffer() - Constructor for class org.apache.hadoop.io.DataInputBuffer
Constructs a new empty buffer.
DataJoinJob - Class in org.apache.hadoop.contrib.utils.join
This class implements the main function for creating a map/reduce job to join data of different sources.
DataJoinJob() - Constructor for class org.apache.hadoop.contrib.utils.join.DataJoinJob
 
DataJoinMapperBase - Class in org.apache.hadoop.contrib.utils.join
This abstract class serves as the base class for the mapper class of a data join job.
DataJoinMapperBase() - Constructor for class org.apache.hadoop.contrib.utils.join.DataJoinMapperBase
 
DataJoinReducerBase - Class in org.apache.hadoop.contrib.utils.join
This abstract class serves as the base class for the reducer class of a data join job.
DataJoinReducerBase() - Constructor for class org.apache.hadoop.contrib.utils.join.DataJoinReducerBase
 
DataNode - Class in org.apache.hadoop.dfs
DataNode is a class (and program) that stores a set of blocks for a DFS deployment.
DatanodeDescriptor - Class in org.apache.hadoop.dfs
DatanodeDescriptor tracks stats on a given DataNode, such as available storage capacity, last update time, etc., and maintains a set of blocks stored on the datanode.
DatanodeDescriptor() - Constructor for class org.apache.hadoop.dfs.DatanodeDescriptor
Default constructor
DatanodeDescriptor(DatanodeID) - Constructor for class org.apache.hadoop.dfs.DatanodeDescriptor
DatanodeDescriptor constructor
DatanodeDescriptor(DatanodeID, String) - Constructor for class org.apache.hadoop.dfs.DatanodeDescriptor
DatanodeDescriptor constructor
DatanodeDescriptor(DatanodeID, String, String) - Constructor for class org.apache.hadoop.dfs.DatanodeDescriptor
DatanodeDescriptor constructor
DatanodeDescriptor(DatanodeID, long, long, int) - Constructor for class org.apache.hadoop.dfs.DatanodeDescriptor
DatanodeDescriptor constructor
DatanodeDescriptor(DatanodeID, String, String, long, long, int) - Constructor for class org.apache.hadoop.dfs.DatanodeDescriptor
DatanodeDescriptor constructor
DatanodeID - Class in org.apache.hadoop.dfs
DatanodeID is composed of the data node name (hostname:portNumber) and the data storage ID, which it currently represents.
DatanodeID() - Constructor for class org.apache.hadoop.dfs.DatanodeID
DatanodeID default constructor
DatanodeID(DatanodeID) - Constructor for class org.apache.hadoop.dfs.DatanodeID
DatanodeID copy constructor
DatanodeID(String, String, int) - Constructor for class org.apache.hadoop.dfs.DatanodeID
Create DatanodeID
DatanodeInfo - Class in org.apache.hadoop.dfs
DatanodeInfo represents the status of a DataNode.
DatanodeInfo.AdminStates - Enum in org.apache.hadoop.dfs
 
DataOutputBuffer - Class in org.apache.hadoop.io
A reusable DataOutput implementation that writes to an in-memory buffer.
DataOutputBuffer() - Constructor for class org.apache.hadoop.io.DataOutputBuffer
Constructs a new empty buffer.
dateForm - Static variable in class org.apache.hadoop.fs.FsShell
 
debug_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
debugStream - Variable in class org.apache.hadoop.hbase.shell.generated.ParserTokenManager
 
debugStream - Variable in class org.apache.hadoop.record.compiler.generated.RccTokenManager
 
decode(byte[]) - Static method in class org.apache.hadoop.io.Text
Converts the provided byte array to a String using the UTF-8 encoding.
decode(byte[], int, int) - Static method in class org.apache.hadoop.io.Text
 
decode(byte[], int, int, boolean) - Static method in class org.apache.hadoop.io.Text
Converts the provided byte array to a String using the UTF-8 encoding.
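
A small sketch decoding UTF-8 bytes back into a Java String (Text.encode, listed under E, is the inverse operation).

    import org.apache.hadoop.io.Text;

    public class TextDecodeExample {
      public static void main(String[] args) throws Exception {
        byte[] utf8 = "héllo".getBytes("UTF-8");      // raw UTF-8 bytes
        String decoded = Text.decode(utf8);           // back to a String
        System.out.println(decoded.equals("héllo"));  // true
      }
    }
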
decodeRemoteException(RemoteException) - Static method in class org.apache.hadoop.hbase.RemoteExceptionHandler
Converts org.apache.hadoop.ipc.RemoteException into original exception, if possible.
decompress(byte[], int, int) - Method in interface org.apache.hadoop.io.compress.Decompressor
Fills specified buffer with uncompressed data.
decompress(byte[], int, int) - Method in class org.apache.hadoop.io.compress.lzo.LzoDecompressor
 
decompress(byte[], int, int) - Method in class org.apache.hadoop.io.compress.zlib.BuiltInZlibInflater
 
decompress(byte[], int, int) - Method in class org.apache.hadoop.io.compress.zlib.ZlibDecompressor
 
Decompressor - Interface in org.apache.hadoop.io.compress
Specification of a stream-based 'de-compressor' which can be plugged into a CompressionInputStream to decompress data.
DEFAULT - Static variable in interface org.apache.hadoop.hbase.shell.generated.ParserConstants
 
DEFAULT - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
DEFAULT_BELL_ENABLED - Static variable in class org.apache.hadoop.hbase.Shell
audible keyboard bells
DEFAULT_BLOCK_SIZE - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
DEFAULT_HBASE_DIR - Static variable in interface org.apache.hadoop.hbase.HConstants
Default HBase instance root directory
DEFAULT_HOST - Static variable in interface org.apache.hadoop.hbase.HConstants
 
DEFAULT_MASTER_ADDRESS - Static variable in interface org.apache.hadoop.hbase.HConstants
Default master address
DEFAULT_MAX_FILE_SIZE - Static variable in interface org.apache.hadoop.hbase.HConstants
Default maximum file size
DEFAULT_PERIOD - Static variable in interface org.apache.hadoop.metrics.MetricsContext
Default period in seconds at which data is sent to the metrics system.
DEFAULT_RACK - Static variable in class org.apache.hadoop.net.NetworkTopology
 
DEFAULT_REGION_SERVER_CLASS - Static variable in interface org.apache.hadoop.hbase.HConstants
Default region server interface class name.
DEFAULT_REGIONSERVER_ADDRESS - Static variable in interface org.apache.hadoop.hbase.HConstants
Default region server address
DefaultCodec - Class in org.apache.hadoop.io.compress
 
DefaultCodec() - Constructor for class org.apache.hadoop.io.compress.DefaultCodec
 
DefaultJobHistoryParser - Class in org.apache.hadoop.mapred
Default parser for job history files.
DefaultJobHistoryParser() - Constructor for class org.apache.hadoop.mapred.DefaultJobHistoryParser
 
DefaultJobHistoryParser.MasterIndex - Class in org.apache.hadoop.mapred
Contents of a job history file.
DefaultJobHistoryParser.MasterIndex() - Constructor for class org.apache.hadoop.mapred.DefaultJobHistoryParser.MasterIndex
 
define(Class, WritableComparator) - Static method in class org.apache.hadoop.io.WritableComparator
Register an optimized comparator for a WritableComparable implementation.
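
For illustration, the same registration idiom applied to FloatWritable and its optimized comparator (FloatWritable registers this comparator itself in a static initializer).

    import org.apache.hadoop.io.FloatWritable;
    import org.apache.hadoop.io.WritableComparator;

    public class ComparatorExample {
      public static void main(String[] args) {
        // Register a raw-bytes comparator so sorting can avoid full deserialization.
        WritableComparator.define(FloatWritable.class, new FloatWritable.Comparator());
        WritableComparator cmp = WritableComparator.get(FloatWritable.class);
        System.out.println(cmp.compare(new FloatWritable(1.5f), new FloatWritable(2.5f)));   // negative
      }
    }
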
define(Class, RecordComparator) - Static method in class org.apache.hadoop.record.RecordComparator
Register an optimized comparator for a Record implementation.
delete(Path) - Method in class org.apache.hadoop.dfs.DistributedFileSystem
Delete Path f, whether it is a file or a directory.
delete(Path) - Method in class org.apache.hadoop.dfs.HftpFileSystem
 
delete(String) - Method in class org.apache.hadoop.dfs.NameNode
 
delete(Path) - Method in class org.apache.hadoop.fs.ChecksumFileSystem
Delete Path f, whether it is a file or a directory.
delete(Path) - Method in class org.apache.hadoop.fs.FileSystem
Delete a file
delete(Path) - Method in class org.apache.hadoop.fs.FilterFileSystem
Delete a file
delete(Path) - Method in class org.apache.hadoop.fs.RawLocalFileSystem
 
delete(Path) - Method in class org.apache.hadoop.fs.s3.S3FileSystem
 
delete(long, Text) - Method in class org.apache.hadoop.hbase.HClient.BatchHandler
Delete the value for a column
delete(long, Text) - Method in class org.apache.hadoop.hbase.HClient
Delete the value for a column
delete(long, Text) - Method in class org.apache.hadoop.hbase.HRegion
Delete a value or write a value.
delete(Text, long, long, Text) - Method in interface org.apache.hadoop.hbase.HRegionInterface
Delete the value for a column
delete(Text, long, long, Text) - Method in class org.apache.hadoop.hbase.HRegionServer
Delete the value for a column
delete(long, Text) - Method in class org.apache.hadoop.hbase.io.BatchUpdate
Delete the value for a column
DELETE - Static variable in interface org.apache.hadoop.hbase.shell.generated.ParserConstants
 
delete(FileSystem, String) - Static method in class org.apache.hadoop.io.MapFile
Deletes the named map file.
delete(Path) - Method in class org.apache.hadoop.mapred.PhasedFileSystem
Deprecated.  
delete(Key) - Method in class org.onelab.filter.CountingBloomFilter
Removes a specified key from this counting Bloom filter.
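
A minimal sketch of adding, testing and deleting a key; the vector size and hash count are arbitrary illustration values.

    import org.onelab.filter.CountingBloomFilter;
    import org.onelab.filter.Key;

    public class CountingFilterExample {
      public static void main(String[] args) throws Exception {
        CountingBloomFilter filter = new CountingBloomFilter(1024, 4);   // 1024 bits, 4 hashes
        Key key = new Key("row-42".getBytes("UTF-8"));
        filter.add(key);
        System.out.println(filter.membershipTest(key));   // true
        filter.delete(key);                               // counting filters support removal
        System.out.println(filter.membershipTest(key));   // false, assuming no colliding keys
      }
    }
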
DELETE_BYTES - Static variable in interface org.apache.hadoop.hbase.HConstants
Value stored for a deleted item
DELETE_OP - Static variable in class org.apache.hadoop.hbase.io.BatchOperation
delete operation
deleteBlock(Block) - Method in interface org.apache.hadoop.fs.s3.FileSystemStore
 
deleteColumn(Text, Text) - Method in class org.apache.hadoop.hbase.HClient
Delete a column from a table
deleteColumn(Text, Text) - Method in class org.apache.hadoop.hbase.HMaster
deleteColumn(Text, Text) - Method in interface org.apache.hadoop.hbase.HMasterInterface
 
DeleteCommand - Class in org.apache.hadoop.hbase.shell
 
DeleteCommand() - Constructor for class org.apache.hadoop.hbase.shell.DeleteCommand
 
deleteCommand() - Method in class org.apache.hadoop.hbase.shell.generated.Parser
 
deleteINode(Path) - Method in interface org.apache.hadoop.fs.s3.FileSystemStore
 
deleteLocalFiles() - Method in class org.apache.hadoop.mapred.JobConf
 
deleteLocalFiles(String) - Method in class org.apache.hadoop.mapred.JobConf
 
deleteSnapshot() - Method in class org.apache.hadoop.hbase.HMemcache
Delete the snapshot, remove from history.
deleteTable(Text) - Method in class org.apache.hadoop.hbase.HClient
Deletes a table
deleteTable(Text) - Method in class org.apache.hadoop.hbase.HMaster
deleteTable(Text) - Method in interface org.apache.hadoop.hbase.HMasterInterface
 
DELIMITER - Static variable in class org.apache.hadoop.hbase.HRegionInfo
delimiter used between portions of a region name
DEPENDENT_FAILED - Static variable in class org.apache.hadoop.mapred.jobcontrol.Job
 
depth() - Method in class org.apache.hadoop.fs.Path
Return the number of elements in this path.
DescCommand - Class in org.apache.hadoop.hbase.shell
 
DescCommand() - Constructor for class org.apache.hadoop.hbase.shell.DescCommand
 
descCommand() - Method in class org.apache.hadoop.hbase.shell.generated.Parser
 
DESCRIBE - Static variable in interface org.apache.hadoop.hbase.shell.generated.ParserConstants
 
deserialize(InputStream) - Static method in class org.apache.hadoop.fs.s3.INode
 
deserialize(RecordInput, String) - Method in class org.apache.hadoop.record.Record
Deserialize a record with a tag (usually field name)
deserialize(RecordInput) - Method in class org.apache.hadoop.record.Record
Deserialize a record without a tag
detailedUsage_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
df - Static variable in class org.apache.hadoop.dfs.HftpFileSystem
 
DF - Class in org.apache.hadoop.fs
Filesystem disk space usage statistics.
DF(File, Configuration) - Constructor for class org.apache.hadoop.fs.DF
 
DF(File, long) - Constructor for class org.apache.hadoop.fs.DF
 
DF_INTERVAL_DEFAULT - Static variable in class org.apache.hadoop.fs.DF
 
dfmt(double) - Static method in class org.apache.hadoop.streaming.StreamUtil
 
DFSAdmin - Class in org.apache.hadoop.dfs
This class provides some DFS administrative access.
DFSAdmin() - Constructor for class org.apache.hadoop.dfs.DFSAdmin
Construct a DFSAdmin object.
DFSck - Class in org.apache.hadoop.dfs
This class provides rudimentary checking of DFS volumes for errors and sub-optimal conditions.
DFSck(Configuration) - Constructor for class org.apache.hadoop.dfs.DFSck
Filesystem checker.
DFSNodesStatus(ArrayList<DatanodeDescriptor>, ArrayList<DatanodeDescriptor>) - Method in class org.apache.hadoop.dfs.JspHelper
 
digest(byte[]) - Static method in class org.apache.hadoop.io.MD5Hash
Construct a hash value for a byte array.
digest(byte[], int, int) - Static method in class org.apache.hadoop.io.MD5Hash
Construct a hash value for a byte array.
digest(String) - Static method in class org.apache.hadoop.io.MD5Hash
Construct a hash value for a String.
digest(UTF8) - Static method in class org.apache.hadoop.io.MD5Hash
Construct a hash value for a String.
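
A short usage sketch:

    import org.apache.hadoop.io.MD5Hash;

    public class Md5Example {
      public static void main(String[] args) {
        MD5Hash hash = MD5Hash.digest("hello, world");
        System.out.println(hash);                                        // hex form of the 16-byte digest
        System.out.println(hash.equals(MD5Hash.digest("hello, world"))); // true
      }
    }
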
DIRECTORY_INODE - Static variable in class org.apache.hadoop.fs.s3.INode
 
disable_tracing() - Method in class org.apache.hadoop.hbase.shell.generated.Parser
 
disable_tracing() - Method in class org.apache.hadoop.record.compiler.generated.Rcc
 
disableTable(Text) - Method in class org.apache.hadoop.hbase.HClient
Disables a table (takes it off-line). If it is being served, the master will tell the servers to stop serving it.
disableTable(Text) - Method in class org.apache.hadoop.hbase.HMaster
disableTable(Text) - Method in interface org.apache.hadoop.hbase.HMasterInterface
 
DiskChecker - Class in org.apache.hadoop.util
Class that provides utility functions for checking disk problems
DiskChecker() - Constructor for class org.apache.hadoop.util.DiskChecker
 
DiskChecker.DiskErrorException - Exception in org.apache.hadoop.util
 
DiskChecker.DiskErrorException(String) - Constructor for exception org.apache.hadoop.util.DiskChecker.DiskErrorException
 
DiskChecker.DiskOutOfSpaceException - Exception in org.apache.hadoop.util
 
DiskChecker.DiskOutOfSpaceException(String) - Constructor for exception org.apache.hadoop.util.DiskChecker.DiskOutOfSpaceException
 
displayByteArray(byte[]) - Static method in class org.apache.hadoop.io.WritableUtils
 
DistributedCache - Class in org.apache.hadoop.filecache
The DistributedCache maintains all the caching information for cached archives, unarchives all the files, and returns the path
DistributedCache() - Constructor for class org.apache.hadoop.filecache.DistributedCache
 
DistributedFileSystem - Class in org.apache.hadoop.dfs
Implementation of the abstract FileSystem for the DFS system.
DistributedFileSystem() - Constructor for class org.apache.hadoop.dfs.DistributedFileSystem
 
DistributedFileSystem(InetSocketAddress, Configuration) - Constructor for class org.apache.hadoop.dfs.DistributedFileSystem
Deprecated.  
DistributedPentomino - Class in org.apache.hadoop.examples.dancing
Launch a distributed pentomino solver.
DistributedPentomino() - Constructor for class org.apache.hadoop.examples.dancing.DistributedPentomino
 
DistributedPentomino.PentMap - Class in org.apache.hadoop.examples.dancing
Each map takes a line, which represents a prefix move and finds all of the solutions that start with that prefix.
DistributedPentomino.PentMap() - Constructor for class org.apache.hadoop.examples.dancing.DistributedPentomino.PentMap
 
distributedUpgradeProgress(FSConstants.UpgradeAction) - Method in class org.apache.hadoop.dfs.ChecksumDistributedFileSystem
 
distributedUpgradeProgress(FSConstants.UpgradeAction) - Method in class org.apache.hadoop.dfs.DistributedFileSystem
 
distributedUpgradeProgress(FSConstants.UpgradeAction) - Method in class org.apache.hadoop.dfs.NameNode
 
DNS - Class in org.apache.hadoop.net
A class that provides direct and reverse lookup functionalities, allowing the querying of specific network interfaces or nameservers.
DNS() - Constructor for class org.apache.hadoop.net.DNS
 
doAnalyze(String, String, String, String, String) - Method in class org.apache.hadoop.tools.Logalyzer
doAnalyze:
doArchive(String, String) - Method in class org.apache.hadoop.tools.Logalyzer
doArchive: Workhorse function to archive log-files.
doCommandLine(String[]) - Method in class org.apache.hadoop.hbase.HClient
Process command-line args.
doGet(HttpServletRequest, HttpServletResponse) - Method in class org.apache.hadoop.dfs.FileDataServlet
Service a GET request as described below.
doGet(HttpServletRequest, HttpServletResponse) - Method in class org.apache.hadoop.dfs.FsckServlet
 
doGet(HttpServletRequest, HttpServletResponse) - Method in class org.apache.hadoop.dfs.GetImageServlet
 
doGet(HttpServletRequest, HttpServletResponse) - Method in class org.apache.hadoop.dfs.ListPathsServlet
Service a GET request as described below.
doGet(HttpServletRequest, HttpServletResponse) - Method in class org.apache.hadoop.dfs.SecondaryNameNode.GetImageServlet
 
doGet(HttpServletRequest, HttpServletResponse) - Method in class org.apache.hadoop.dfs.StreamFile
 
doGet(HttpServletRequest, HttpServletResponse) - Method in class org.apache.hadoop.mapred.StatusHttpServer.StackServlet
 
doGet(HttpServletRequest, HttpServletResponse) - Method in class org.apache.hadoop.mapred.TaskLogServlet
Get the logs via http.
doGet(HttpServletRequest, HttpServletResponse) - Method in class org.apache.hadoop.mapred.TaskTracker.MapOutputServlet
 
doMain(Configuration, String[]) - Method in class org.apache.hadoop.util.ToolBase
Work as a main program: execute a command and handle any exceptions
Done() - Method in class org.apache.hadoop.hbase.shell.generated.SimpleCharStream
 
done(String) - Method in class org.apache.hadoop.mapred.TaskTracker
The task is done.
Done() - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
done() - Method in interface org.apache.hadoop.record.Index
 
doSync() - Method in class org.apache.hadoop.io.SequenceFile.Sorter.SegmentDescriptor
Do the sync checks
DOT - Static variable in interface org.apache.hadoop.hbase.shell.generated.ParserConstants
 
DOT_TKN - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
DOUBLE_TKN - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
DOUBLE_VALUE_SUM - Static variable in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorBaseDescriptor
 
DoubleValueSum - Class in org.apache.hadoop.mapred.lib.aggregate
This class implements a value aggregator that sums up a sequence of double values.
DoubleValueSum() - Constructor for class org.apache.hadoop.mapred.lib.aggregate.DoubleValueSum
The default constructor
doUpdates(MetricsContext) - Method in class org.apache.hadoop.metrics.jvm.JvmMetrics
This will be called periodically (with the period being configuration dependent).
doUpdates(MetricsContext) - Method in interface org.apache.hadoop.metrics.Updater
Timer-based call-back from the metric library.
driver(String[]) - Static method in class org.apache.hadoop.record.compiler.generated.Rcc
 
driver(String[]) - Method in class org.apache.hadoop.util.ProgramDriver
This is a driver for the example programs.
DROP - Static variable in interface org.apache.hadoop.hbase.shell.generated.ParserConstants
 
DropCommand - Class in org.apache.hadoop.hbase.shell
 
DropCommand() - Constructor for class org.apache.hadoop.hbase.shell.DropCommand
 
dropCommand() - Method in class org.apache.hadoop.hbase.shell.generated.Parser
 
dump() - Method in interface org.apache.hadoop.fs.s3.FileSystemStore
Diagnostic method to dump all INodes to the console.
DynamicBloomFilter - Class in org.onelab.filter
Implements a dynamic Bloom filter, as defined in the INFOCOM 2006 paper.
DynamicBloomFilter(int, int, int) - Constructor for class org.onelab.filter.DynamicBloomFilter
Constructor.

E

emitRecord(String, String, OutputRecord) - Method in class org.apache.hadoop.metrics.file.FileContext
Emits a metrics record to a file.
emitRecord(String, String, OutputRecord) - Method in class org.apache.hadoop.metrics.ganglia.GangliaContext
 
emitRecord(String, String, OutputRecord) - Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
Sends a record to the metrics system.
emitRecord(String, String, OutputRecord) - Method in class org.apache.hadoop.metrics.spi.NullContext
Do-nothing version of emitRecord
EMPTY_ARRAY - Static variable in class org.apache.hadoop.mapred.TaskCompletionEvent
 
enable_tracing() - Method in class org.apache.hadoop.hbase.shell.generated.Parser
 
enable_tracing() - Method in class org.apache.hadoop.record.compiler.generated.Rcc
 
enableTable(Text) - Method in class org.apache.hadoop.hbase.HClient
Brings a table on-line (enables it)
enableTable(Text) - Method in class org.apache.hadoop.hbase.HMaster
enableTable(Text) - Method in interface org.apache.hadoop.hbase.HMasterInterface
 
encode(String) - Static method in class org.apache.hadoop.io.Text
Converts the provided String to bytes using the UTF-8 encoding.
encode(String, boolean) - Static method in class org.apache.hadoop.io.Text
Converts the provided String to bytes using the UTF-8 encoding.
end() - Method in interface org.apache.hadoop.io.compress.Compressor
Closes the compressor and discards any unprocessed input.
end() - Method in interface org.apache.hadoop.io.compress.Decompressor
Closes the decompressor and discards any unprocessed input.
end() - Method in class org.apache.hadoop.io.compress.lzo.LzoCompressor
 
end() - Method in class org.apache.hadoop.io.compress.lzo.LzoDecompressor
 
end() - Method in class org.apache.hadoop.io.compress.zlib.ZlibCompressor
 
end() - Method in class org.apache.hadoop.io.compress.zlib.ZlibDecompressor
 
endColumn - Variable in class org.apache.hadoop.hbase.shell.generated.Token
beginLine and beginColumn describe the position of the first character of this token; endLine and endColumn describe the position of the last character of this token.
endColumn - Variable in class org.apache.hadoop.record.compiler.generated.Token
beginLine and beginColumn describe the position of the first character of this token; endLine and endColumn describe the position of the last character of this token.
endLine - Variable in class org.apache.hadoop.hbase.shell.generated.Token
beginLine and beginColumn describe the position of the first character of this token; endLine and endColumn describe the position of the last character of this token.
endLine - Variable in class org.apache.hadoop.record.compiler.generated.Token
beginLine and beginColumn describe the position of the first character of this token; endLine and endColumn describe the position of the last character of this token.
endMap(String) - Method in class org.apache.hadoop.record.BinaryRecordInput
 
endMap(TreeMap, String) - Method in class org.apache.hadoop.record.BinaryRecordOutput
 
endMap(String) - Method in class org.apache.hadoop.record.CsvRecordInput
 
endMap(TreeMap, String) - Method in class org.apache.hadoop.record.CsvRecordOutput
 
endMap(String) - Method in interface org.apache.hadoop.record.RecordInput
Check the mark for end of the serialized map.
endMap(TreeMap, String) - Method in interface org.apache.hadoop.record.RecordOutput
Mark the end of a serialized map.
endMap(String) - Method in class org.apache.hadoop.record.XmlRecordInput
 
endMap(TreeMap, String) - Method in class org.apache.hadoop.record.XmlRecordOutput
 
endRecord(String) - Method in class org.apache.hadoop.record.BinaryRecordInput
 
endRecord(Record, String) - Method in class org.apache.hadoop.record.BinaryRecordOutput
 
endRecord(String) - Method in class org.apache.hadoop.record.CsvRecordInput
 
endRecord(Record, String) - Method in class org.apache.hadoop.record.CsvRecordOutput
 
endRecord(String) - Method in interface org.apache.hadoop.record.RecordInput
Check the mark for end of the serialized record.
endRecord(Record, String) - Method in interface org.apache.hadoop.record.RecordOutput
Mark the end of a serialized record.
endRecord(String) - Method in class org.apache.hadoop.record.XmlRecordInput
 
endRecord(Record, String) - Method in class org.apache.hadoop.record.XmlRecordOutput
 
endVector(String) - Method in class org.apache.hadoop.record.BinaryRecordInput
 
endVector(ArrayList, String) - Method in class org.apache.hadoop.record.BinaryRecordOutput
 
endVector(String) - Method in class org.apache.hadoop.record.CsvRecordInput
 
endVector(ArrayList, String) - Method in class org.apache.hadoop.record.CsvRecordOutput
 
endVector(String) - Method in interface org.apache.hadoop.record.RecordInput
Check the mark for end of the serialized vector.
endVector(ArrayList, String) - Method in interface org.apache.hadoop.record.RecordOutput
Mark the end of a serialized vector.
endVector(String) - Method in class org.apache.hadoop.record.XmlRecordInput
 
endVector(ArrayList, String) - Method in class org.apache.hadoop.record.XmlRecordOutput
 
ensureInflated() - Method in class org.apache.hadoop.io.CompressedWritable
Must be called by all methods which access fields to ensure that the data has been uncompressed.
entries() - Method in class org.apache.hadoop.conf.Configuration
Deprecated. use iterator() instead
env_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
Environment - Class in org.apache.hadoop.streaming
This is a class used to get the current environment on the host machines running the map/reduce.
Environment() - Constructor for class org.apache.hadoop.streaming.Environment
 
EOF - Static variable in interface org.apache.hadoop.hbase.shell.generated.ParserConstants
 
EOF - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
eol - Variable in exception org.apache.hadoop.hbase.shell.generated.ParseException
The end of line string for this machine.
eol - Variable in exception org.apache.hadoop.record.compiler.generated.ParseException
The end of line string for this machine.
equals(Object) - Method in class org.apache.hadoop.dfs.DatanodeID
 
equals(Object) - Method in class org.apache.hadoop.fs.Path
 
equals(Object) - Method in class org.apache.hadoop.hbase.BloomFilterDescriptor
 
equals(Object) - Method in class org.apache.hadoop.hbase.HClient.RegionLocation
equals(Object) - Method in class org.apache.hadoop.hbase.HColumnDescriptor
 
equals(Object) - Method in class org.apache.hadoop.hbase.HLogKey
 
equals(Object) - Method in class org.apache.hadoop.hbase.HRegionInfo
equals(Object) - Method in class org.apache.hadoop.hbase.HServerAddress
equals(Object) - Method in class org.apache.hadoop.hbase.HStoreFile
 
equals(Object) - Method in class org.apache.hadoop.hbase.HStoreKey
 
equals(Object) - Method in class org.apache.hadoop.hbase.HTableDescriptor
 
equals(Object) - Method in class org.apache.hadoop.hbase.io.ImmutableBytesWritable
Are the two byte sequences equal?
EQUALS - Static variable in interface org.apache.hadoop.hbase.shell.generated.ParserConstants
 
equals(Object) - Method in class org.apache.hadoop.io.BooleanWritable
 
equals(Object) - Method in class org.apache.hadoop.io.BytesWritable
Are the two byte sequences equal?
equals(Object) - Method in class org.apache.hadoop.io.FloatWritable
Returns true iff o is a FloatWritable with the same value.
equals(Object) - Method in class org.apache.hadoop.io.IntWritable
Returns true iff o is a IntWritable with the same value.
equals(Object) - Method in class org.apache.hadoop.io.LongWritable
Returns true iff o is a LongWritable with the same value.
equals(Object) - Method in class org.apache.hadoop.io.MD5Hash
Returns true iff o is an MD5Hash whose digest contains the same values.
equals(SequenceFile.Metadata) - Method in class org.apache.hadoop.io.SequenceFile.Metadata
 
equals(Object) - Method in class org.apache.hadoop.io.SequenceFile.Sorter.SegmentDescriptor
 
equals(Object) - Method in class org.apache.hadoop.io.Text
Returns true iff o is a Text with the same contents.
equals(Object) - Method in class org.apache.hadoop.io.UTF8
Deprecated. Returns true iff o is a UTF8 with the same contents.
equals(Object) - Method in class org.apache.hadoop.io.VIntWritable
Returns true iff o is a VIntWritable with the same value.
equals(Object) - Method in class org.apache.hadoop.io.VLongWritable
Returns true iff o is a VLongWritable with the same value.
equals(Object) - Method in class org.apache.hadoop.record.Buffer
 
equals(Object) - Method in class org.onelab.filter.BloomFilter
 
equals(Object) - Method in class org.onelab.filter.CountingBloomFilter
 
equals(Object) - Method in class org.onelab.filter.DynamicBloomFilter
 
equals(Object) - Method in class org.onelab.filter.Key
 
equals(Object) - Method in class org.onelab.filter.RetouchedBloomFilter
 
errorReport(DatanodeRegistration, int, String) - Method in class org.apache.hadoop.dfs.NameNode
 
EventCounter - Class in org.apache.hadoop.metrics.jvm
A log4J Appender that simply counts logging events in three levels: fatal, error and warn.
EventCounter() - Constructor for class org.apache.hadoop.metrics.jvm.EventCounter
 
ExampleDriver - Class in org.apache.hadoop.examples
A description of an example program based on its class and a human-readable description.
ExampleDriver() - Constructor for class org.apache.hadoop.examples.ExampleDriver
 
execCommand(String[]) - Static method in class org.apache.hadoop.fs.Command
Static method to execute a command.
execute(HClient) - Method in class org.apache.hadoop.hbase.shell.ClearCommand
 
execute(HClient) - Method in interface org.apache.hadoop.hbase.shell.Command
Execute a command
execute(HClient) - Method in class org.apache.hadoop.hbase.shell.CreateCommand
 
execute(HClient) - Method in class org.apache.hadoop.hbase.shell.DeleteCommand
 
execute(HClient) - Method in class org.apache.hadoop.hbase.shell.DescCommand
 
execute(HClient) - Method in class org.apache.hadoop.hbase.shell.DropCommand
 
execute(HClient) - Method in class org.apache.hadoop.hbase.shell.ExitCommand
 
execute(HClient) - Method in class org.apache.hadoop.hbase.shell.HelpCommand
 
execute(HClient) - Method in class org.apache.hadoop.hbase.shell.InsertCommand
 
execute(HClient) - Method in class org.apache.hadoop.hbase.shell.SelectCommand
 
execute(HClient) - Method in class org.apache.hadoop.hbase.shell.ShowCommand
 
execute() - Method in class org.apache.hadoop.record.compiler.ant.RccTask
Invoke the Hadoop record compiler on each record definition file
executeTime(boolean, long, long) - Static method in class org.apache.hadoop.hbase.Shell
Return a string describing the code execution time.
exists(Path) - Method in class org.apache.hadoop.dfs.DistributedFileSystem
 
exists(Path) - Method in class org.apache.hadoop.dfs.HftpFileSystem
 
exists(String) - Method in class org.apache.hadoop.dfs.NameNode
 
exists(Path) - Method in class org.apache.hadoop.fs.FileSystem
Check whether the given Path exists.
exists(Path) - Method in class org.apache.hadoop.fs.FilterFileSystem
Check whether the given Path exists.
exists(Path) - Method in class org.apache.hadoop.fs.RawLocalFileSystem
 
exists(Path) - Method in class org.apache.hadoop.fs.s3.S3FileSystem
 
EXIT - Static variable in interface org.apache.hadoop.hbase.shell.generated.ParserConstants
 
ExitCommand - Class in org.apache.hadoop.hbase.shell
 
ExitCommand() - Constructor for class org.apache.hadoop.hbase.shell.ExitCommand
 
exitCommand() - Method in class org.apache.hadoop.hbase.shell.generated.Parser
 
exitUsage(boolean) - Method in class org.apache.hadoop.streaming.StreamJob
 
ExpandBuff(boolean) - Method in class org.apache.hadoop.hbase.shell.generated.SimpleCharStream
 
ExpandBuff(boolean) - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
expectedTokenSequences - Variable in exception org.apache.hadoop.hbase.shell.generated.ParseException
Each entry in this array is an array of integers.
expectedTokenSequences - Variable in exception org.apache.hadoop.record.compiler.generated.ParseException
Each entry in this array is an array of integers.
exponentialBackoffRetry(int, long, TimeUnit) - Static method in class org.apache.hadoop.io.retry.RetryPolicies
Keep trying a limited number of times, waiting a growing amount of time between attempts, and then fail by re-throwing the exception.
expunge() - Method in class org.apache.hadoop.fs.Trash
Delete old checkpoints.
extractFamily(Text) - Static method in class org.apache.hadoop.hbase.HStoreKey
Extracts the column family name from a column. For example, returns 'info' if the specified column was 'info:server'.
extractFamily(Text, boolean) - Static method in class org.apache.hadoop.hbase.HStoreKey
Extracts the column family name from a column. For example, returns 'info' if the specified column was 'info:server'.
extractKeyValues(KeyedDataArrayWritable) - Method in class org.apache.hadoop.hbase.mapred.GroupingTableMap
Extract columns values from the current record.
extractQualifier(Text) - Static method in class org.apache.hadoop.hbase.HStoreKey
Extracts the column qualifier, the portion that follows the colon (':') family/qualifier separator.
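
A small sketch splitting a column name into its family and qualifier; the column value is hypothetical.

    import org.apache.hadoop.hbase.HStoreKey;
    import org.apache.hadoop.io.Text;

    public class ColumnPartsExample {
      public static void main(String[] args) throws Exception {
        Text column = new Text("info:server");
        System.out.println(HStoreKey.extractFamily(column));      // info
        System.out.println(HStoreKey.extractQualifier(column));   // server
      }
    }
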

F

fail(String) - Method in class org.apache.hadoop.streaming.StreamJob
 
FAILED - Static variable in class org.apache.hadoop.mapred.jobcontrol.Job
 
FAILED - Static variable in class org.apache.hadoop.mapred.JobStatus
 
failedJobs() - Method in class org.apache.hadoop.mapred.JobTracker
 
families() - Method in class org.apache.hadoop.hbase.HTableDescriptor
All the column families in this table.
FAMILY_INDICATOR - Static variable in interface org.apache.hadoop.hbase.shell.Command
family indicator
Field() - Method in class org.apache.hadoop.record.compiler.generated.Rcc
 
FieldSelectionMapReduce - Class in org.apache.hadoop.mapred.lib
This class implements a mapper/reducer class that can be used to perform field selections in a manner similar to unix cut.
FieldSelectionMapReduce() - Constructor for class org.apache.hadoop.mapred.lib.FieldSelectionMapReduce
 
file - Variable in class org.apache.hadoop.fs.FSInputChecker
The name of the file from which data is read
FILE_NAME_PROPERTY - Static variable in class org.apache.hadoop.metrics.file.FileContext
 
FILE_TYPES - Static variable in class org.apache.hadoop.fs.s3.INode
 
FileAlreadyExistsException - Exception in org.apache.hadoop.mapred
Used when target file already exists for any operation and is not configured to be overwritten.
FileAlreadyExistsException() - Constructor for exception org.apache.hadoop.mapred.FileAlreadyExistsException
 
FileAlreadyExistsException(String) - Constructor for exception org.apache.hadoop.mapred.FileAlreadyExistsException
 
FileContext - Class in org.apache.hadoop.metrics.file
Metrics context for writing metrics to a file.

This class is configured by setting ContextFactory attributes which in turn are usually configured through a properties file.

FileContext() - Constructor for class org.apache.hadoop.metrics.file.FileContext
Creates a new instance of FileContext
FileDataServlet - Class in org.apache.hadoop.dfs
Redirect queries about the hosted filesystem to an appropriate datanode.
FileDataServlet() - Constructor for class org.apache.hadoop.dfs.FileDataServlet
 
fileExtension(String) - Method in class org.apache.hadoop.streaming.JarBuilder
 
FileInputFormat - Class in org.apache.hadoop.mapred
A base class for file-based InputFormat implementations.
FileInputFormat() - Constructor for class org.apache.hadoop.mapred.FileInputFormat
 
FileSplit - Class in org.apache.hadoop.mapred
A section of an input file.
FileSplit(Path, long, long, JobConf) - Constructor for class org.apache.hadoop.mapred.FileSplit
Constructs a split.
FileStatus - Interface in org.apache.hadoop.fs
Interface that represents the client side information for a file.
FileSystem - Class in org.apache.hadoop.fs
An abstract base class for a fairly generic filesystem.
FileSystem() - Constructor for class org.apache.hadoop.fs.FileSystem
 
FileSystemStore - Interface in org.apache.hadoop.fs.s3
A facility for storing and retrieving INodes and Blocks.
fileURIs - Variable in class org.apache.hadoop.streaming.StreamJob
 
FileUtil - Class in org.apache.hadoop.fs
A collection of file-processing utility methods
FileUtil() - Constructor for class org.apache.hadoop.fs.FileUtil
 
FileUtil.HardLink - Class in org.apache.hadoop.fs
Class for creating hardlinks.
FileUtil.HardLink() - Constructor for class org.apache.hadoop.fs.FileUtil.HardLink
 
FillBuff() - Method in class org.apache.hadoop.hbase.shell.generated.SimpleCharStream
 
FillBuff() - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
filter(Text) - Method in class org.apache.hadoop.hbase.filter.PageRowFilter
Filters on just a row key.
filter(Text, Text, byte[]) - Method in class org.apache.hadoop.hbase.filter.PageRowFilter
Filters on row key and/or a column key.
filter(Text) - Method in class org.apache.hadoop.hbase.filter.RegExpRowFilter
Filters on just a row key.
filter(Text, Text, byte[]) - Method in class org.apache.hadoop.hbase.filter.RegExpRowFilter
Filters on row key and/or a column key.
filter(Text) - Method in interface org.apache.hadoop.hbase.filter.RowFilterInterface
Filters on just a row key.
filter(Text, Text, byte[]) - Method in interface org.apache.hadoop.hbase.filter.RowFilterInterface
Filters on row key and/or a column key.
filter(Text) - Method in class org.apache.hadoop.hbase.filter.RowFilterSet
Filters on just a row key.
filter(Text, Text, byte[]) - Method in class org.apache.hadoop.hbase.filter.RowFilterSet
Filters on row key and/or a column key.
Filter - Class in org.onelab.filter
Defines the general behavior of a filter.
Filter() - Constructor for class org.onelab.filter.Filter
 
Filter(int, int) - Constructor for class org.onelab.filter.Filter
Constructor.
filterAllRemaining() - Method in class org.apache.hadoop.hbase.filter.PageRowFilter
Determines if the filter has decided that all remaining results should be filtered (skipped).
filterAllRemaining() - Method in class org.apache.hadoop.hbase.filter.RegExpRowFilter
Determines if the filter has decided that all remaining results should be filtered (skipped).
filterAllRemaining() - Method in interface org.apache.hadoop.hbase.filter.RowFilterInterface
Determines if the filter has decided that all remaining results should be filtered (skipped).
filterAllRemaining() - Method in class org.apache.hadoop.hbase.filter.RowFilterSet
Determines if the filter has decided that all remaining results should be filtered (skipped).
FilterFileSystem - Class in org.apache.hadoop.fs
A FilterFileSystem contains some other file system, which it uses as its basic file system, possibly transforming the data along the way or providing additional functionality.
FilterFileSystem(FileSystem) - Constructor for class org.apache.hadoop.fs.FilterFileSystem
 
filterNotNull(TreeMap<Text, byte[]>) - Method in class org.apache.hadoop.hbase.filter.PageRowFilter
Filters the row if given columns that are expected to be null are non-null, or if there are criteria for columns not included in the given column set.
filterNotNull(TreeMap<Text, byte[]>) - Method in class org.apache.hadoop.hbase.filter.RegExpRowFilter
Filters the row if given columns that are expected to be null are non-null, or if there are criteria for columns not included in the given column set.
filterNotNull(TreeMap<Text, byte[]>) - Method in interface org.apache.hadoop.hbase.filter.RowFilterInterface
Filters the row if given columns that are expected to be null are non-null, or if there are criteria for columns not included in the given column set.
filterNotNull(TreeMap<Text, byte[]>) - Method in class org.apache.hadoop.hbase.filter.RowFilterSet
Filters the row if given columns that are expected to be null are non-null, or if there are criteria for columns not included in the given column set.
finalize() - Method in class org.apache.hadoop.io.compress.lzo.LzoDecompressor
 
finalize() - Method in class org.apache.hadoop.io.compress.zlib.ZlibDecompressor
 
finalizeUpgrade() - Method in class org.apache.hadoop.dfs.ChecksumDistributedFileSystem
Finalize previously upgraded file system state.
finalizeUpgrade() - Method in class org.apache.hadoop.dfs.DFSAdmin
Command to ask the namenode to finalize previously performed upgrade.
finalizeUpgrade() - Method in class org.apache.hadoop.dfs.DistributedFileSystem
Finalize previously upgraded file system state.
finalizeUpgrade() - Method in class org.apache.hadoop.dfs.NameNode
 
finalKey(WritableComparable) - Method in class org.apache.hadoop.io.MapFile.Reader
Reads the final key from the file.
find(String) - Method in class org.apache.hadoop.io.Text
 
find(String, int) - Method in class org.apache.hadoop.io.Text
Finds any occurrence of what in the backing buffer, starting at position start.
findByte(byte[], int, int, byte) - Static method in class org.apache.hadoop.streaming.UTF8ByteArrayUtils
Find the first occurrence of the given byte b in a UTF-8 encoded string
findInClasspath(String) - Static method in class org.apache.hadoop.streaming.StreamUtil
 
findInClasspath(String, ClassLoader) - Static method in class org.apache.hadoop.streaming.StreamUtil
 
findNthByte(byte[], byte, int) - Static method in class org.apache.hadoop.streaming.UTF8ByteArrayUtils
Find the nth occurrence of the given byte b in a UTF-8 encoded string
findRootRegion() - Method in class org.apache.hadoop.hbase.HMaster
findRootRegion() - Method in interface org.apache.hadoop.hbase.HMasterInterface
 
findSeparator(byte[], int, int, byte) - Static method in class org.apache.hadoop.mapred.KeyValueLineRecordReader
 
findTab(byte[], int, int) - Static method in class org.apache.hadoop.streaming.UTF8ByteArrayUtils
Find the first occurrence of a tab in a UTF-8 encoded string
findTab(byte[]) - Static method in class org.apache.hadoop.streaming.UTF8ByteArrayUtils
Find the first occurrence of a tab in a UTF-8 encoded string
finish() - Method in class org.apache.hadoop.io.compress.CompressionOutputStream
Finishes writing compressed data to the output stream without closing the underlying stream.
finish() - Method in interface org.apache.hadoop.io.compress.Compressor
When called, indicates that compression should end with the current contents of the input buffer.
finish() - Method in class org.apache.hadoop.io.compress.GzipCodec.GzipOutputStream
 
finish() - Method in class org.apache.hadoop.io.compress.lzo.LzoCompressor
 
finish() - Method in class org.apache.hadoop.io.compress.zlib.ZlibCompressor
 
finished() - Method in interface org.apache.hadoop.io.compress.Compressor
Returns true if the end of the compressed data output stream has been reached.
finished() - Method in interface org.apache.hadoop.io.compress.Decompressor
Returns true if the end of the compressed data output stream has been reached.
finished() - Method in class org.apache.hadoop.io.compress.lzo.LzoCompressor
 
finished() - Method in class org.apache.hadoop.io.compress.lzo.LzoDecompressor
 
finished() - Method in class org.apache.hadoop.io.compress.zlib.ZlibCompressor
 
finished() - Method in class org.apache.hadoop.io.compress.zlib.ZlibDecompressor
 
fix(FileSystem, Path, Class, Class, boolean, Configuration) - Static method in class org.apache.hadoop.io.MapFile
This method attempts to fix a corrupt MapFile by re-creating its index.
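A hedged usage sketch of MapFile.fix; the path, key/value classes, and the meaning of the boolean flag (assumed here to be a dry-run switch) are illustrative assumptions.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.MapFile;
    import org.apache.hadoop.io.Text;

    public class FixMapFileExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        // Re-create the index of a possibly corrupt MapFile directory;
        // the directory and key/value classes are assumptions for illustration.
        MapFile.fix(fs, new Path("/data/example.map"), Text.class, IntWritable.class,
                    false /* assumed dry-run flag: false = actually rewrite */, conf);
      }
    }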
FIXING_DELETE - Static variable in class org.apache.hadoop.dfs.NamenodeFsck
Delete corrupted files.
FIXING_MOVE - Static variable in class org.apache.hadoop.dfs.NamenodeFsck
Move corrupted files to /lost+found .
FIXING_NONE - Static variable in class org.apache.hadoop.dfs.NamenodeFsck
Don't attempt any fixing .
FLOAT_TKN - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
FloatWritable - Class in org.apache.hadoop.io
A WritableComparable for floats.
FloatWritable() - Constructor for class org.apache.hadoop.io.FloatWritable
 
FloatWritable(float) - Constructor for class org.apache.hadoop.io.FloatWritable
 
FloatWritable.Comparator - Class in org.apache.hadoop.io
A Comparator optimized for FloatWritable.
FloatWritable.Comparator() - Constructor for class org.apache.hadoop.io.FloatWritable.Comparator
 
flush() - Method in class org.apache.hadoop.io.compress.CompressionOutputStream
 
flush() - Method in class org.apache.hadoop.io.compress.GzipCodec.GzipOutputStream
 
flush() - Method in class org.apache.hadoop.metrics.file.FileContext
Flushes the output writer, forcing updates to disk.
flush() - Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
Called each period after all records have been emitted, this method does nothing.
flushBuffer() - Method in class org.apache.hadoop.fs.FSOutputSummer
 
format(Configuration) - Static method in class org.apache.hadoop.dfs.NameNode
Format a new filesystem.
formatBytes(long) - Static method in class org.apache.hadoop.streaming.StreamUtil
 
formatBytes2(long) - Static method in class org.apache.hadoop.streaming.StreamUtil
 
formatPercent(double, int) - Static method in class org.apache.hadoop.util.StringUtils
Format a percentage for presentation to the user.
formatTimeDiff(long, long) - Static method in class org.apache.hadoop.util.StringUtils
Given a finish and start time in long milliseconds, returns a String in the format Xhrs, Ymins, Z sec, for the time difference between two times.
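A small sketch of the two StringUtils formatting helpers above; the sample numbers are arbitrary and the exact output strings are whatever the methods produce.

    import org.apache.hadoop.util.StringUtils;

    public class FormatExample {
      public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        Thread.sleep(1500);                       // stand-in for real work
        long finish = System.currentTimeMillis();
        // Human-readable percentage and elapsed-time strings.
        System.out.println(StringUtils.formatPercent(0.756, 2));
        System.out.println(StringUtils.formatTimeDiff(finish, start));
      }
    }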
fourRotations - Static variable in class org.apache.hadoop.examples.dancing.Pentomino
Are all 4 rotations unique?
fs - Variable in class org.apache.hadoop.fs.FilterFileSystem
 
fs - Variable in class org.apache.hadoop.fs.FsShell
 
fsck() - Method in class org.apache.hadoop.dfs.NamenodeFsck
Check files on DFS, starting from the indicated path.
FsckServlet - Class in org.apache.hadoop.dfs
This class is used in Namesystem's jetty to do fsck on namenode.
FsckServlet() - Constructor for class org.apache.hadoop.dfs.FsckServlet
 
FSConstants - Interface in org.apache.hadoop.dfs
Some handy constants
FSConstants.NodeType - Enum in org.apache.hadoop.dfs
Type of the node
FSConstants.SafeModeAction - Enum in org.apache.hadoop.dfs
 
FSConstants.StartupOption - Enum in org.apache.hadoop.dfs
 
FSConstants.UpgradeAction - Enum in org.apache.hadoop.dfs
Distributed upgrade actions.
FSDataInputStream - Class in org.apache.hadoop.fs
Utility that wraps an FSInputStream in a DataInputStream and buffers input through a BufferedInputStream.
FSDataInputStream(InputStream) - Constructor for class org.apache.hadoop.fs.FSDataInputStream
 
FSDataOutputStream - Class in org.apache.hadoop.fs
Utility that wraps an OutputStream in a DataOutputStream, buffers output through a BufferedOutputStream and creates a checksum file.
FSDataOutputStream(OutputStream) - Constructor for class org.apache.hadoop.fs.FSDataOutputStream
 
FSError - Error in org.apache.hadoop.fs
Thrown for unexpected filesystem errors, presumed to reflect disk errors in the native filesystem.
fsError(String, String) - Method in class org.apache.hadoop.mapred.TaskTracker
A child task had a local filesystem error.
FSInputChecker - Class in org.apache.hadoop.fs
This is a generic input stream for verifying checksums for data before it is read by a user.
FSInputChecker(Path, int) - Constructor for class org.apache.hadoop.fs.FSInputChecker
Constructor
FSInputChecker(Path, int, Checksum, int, int) - Constructor for class org.apache.hadoop.fs.FSInputChecker
Constructor
FSInputStream - Class in org.apache.hadoop.fs
FSInputStream is a generic old InputStream with a little bit of RAF-style seek ability.
FSInputStream() - Constructor for class org.apache.hadoop.fs.FSInputStream
 
FSOutputSummer - Class in org.apache.hadoop.fs
This is a generic output stream for generating checksums for data before it is written to the underlying stream
FSOutputSummer(Checksum, int, int) - Constructor for class org.apache.hadoop.fs.FSOutputSummer
 
FsShell - Class in org.apache.hadoop.fs
Provide command line access to a FileSystem.
FsShell() - Constructor for class org.apache.hadoop.fs.FsShell
 
fullyDelete(File) - Static method in class org.apache.hadoop.fs.FileUtil
Delete a directory and all its contents.
fullyDelete(FileSystem, Path) - Static method in class org.apache.hadoop.fs.FileUtil
Recursively delete a directory.
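A short sketch contrasting the two fullyDelete variants, one for a local java.io.File and one for a path on a Hadoop FileSystem; the /tmp/scratch path is an assumption.

    import java.io.File;
    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.FileUtil;
    import org.apache.hadoop.fs.Path;

    public class CleanupExample {
      public static void main(String[] args) throws IOException {
        // Local variant: removes the directory and everything under it.
        FileUtil.fullyDelete(new File("/tmp/scratch"));
        // FileSystem variant: recursively deletes a path on the configured file system.
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        FileUtil.fullyDelete(fs, new Path("/tmp/scratch"));
      }
    }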

G

GangliaContext - Class in org.apache.hadoop.metrics.ganglia
Context for sending metrics to Ganglia.
GangliaContext() - Constructor for class org.apache.hadoop.metrics.ganglia.GangliaContext
Creates a new instance of GangliaContext
genCode(String, String, ArrayList<String>) - Method in class org.apache.hadoop.record.compiler.JFile
Generate record code in given language.
generateEntry(String, String, Object) - Static method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorBaseDescriptor
 
generateGroupKey(TaggedMapOutput) - Method in class org.apache.hadoop.contrib.utils.join.DataJoinMapperBase
Generate a map output key.
generateInputTag(String) - Method in class org.apache.hadoop.contrib.utils.join.DataJoinMapperBase
Determine the source tag based on the input file name.
generateKeyValPairs(Object, Object) - Method in class org.apache.hadoop.examples.AggregateWordCount.WordCountPlugInClass
 
generateKeyValPairs(Object, Object) - Method in class org.apache.hadoop.mapred.lib.aggregate.UserDefinedValueAggregatorDescriptor
Generate a list of aggregation-id/value pairs for the given key/value pairs by delegating the invocation to the real object.
generateKeyValPairs(Object, Object) - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorBaseDescriptor
Generate 1 or 2 aggregation-id/value pairs for the given key/value pair.
generateKeyValPairs(Object, Object) - Method in interface org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorDescriptor
Generate a list of aggregation-id/value pairs for the given key/value pair.
generateParseException() - Method in class org.apache.hadoop.hbase.shell.generated.Parser
 
generateParseException() - Method in class org.apache.hadoop.record.compiler.generated.Rcc
 
generateTaggedMapOutput(Writable) - Method in class org.apache.hadoop.contrib.utils.join.DataJoinMapperBase
Generate a tagged map output value.
generateValueAggregator(String) - Static method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorBaseDescriptor
 
GenericWritable - Class in org.apache.hadoop.io
A wrapper for Writable instances.
GenericWritable() - Constructor for class org.apache.hadoop.io.GenericWritable
 
get(String, Object) - Method in class org.apache.hadoop.conf.Configuration
Deprecated. A side map of Configuration to Object should be used instead.
get(String) - Method in class org.apache.hadoop.conf.Configuration
Returns the value of the name property, or null if no such property exists.
get(String, String) - Method in class org.apache.hadoop.conf.Configuration
Returns the value of the name property.
get(int) - Method in class org.apache.hadoop.dfs.LocatedBlocks
Get located block.
get(Configuration) - Static method in class org.apache.hadoop.fs.FileSystem
Returns the configured filesystem implementation.
get(URI, Configuration) - Static method in class org.apache.hadoop.fs.FileSystem
Returns the FileSystem for this URI's scheme and authority.
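A minimal sketch of looking up configuration values and obtaining FileSystem handles; the property key shown is a standard Hadoop key, while the hdfs:// URI is an assumption.

    import java.io.IOException;
    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    public class GetFileSystemExample {
      public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        // Property lookup: the two-argument form supplies a default when the key is unset.
        String bufferSize = conf.get("io.file.buffer.size", "4096");
        // Default (configured) file system, then one addressed by an explicit URI.
        FileSystem fs = FileSystem.get(conf);
        FileSystem hdfs = FileSystem.get(URI.create("hdfs://namenode:9000/"), conf);
        System.out.println(bufferSize + " " + fs.getUri() + " " + hdfs.getUri());
      }
    }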
get(Text, Text) - Method in class org.apache.hadoop.hbase.HClient
Get a single value for the specified row and column
get(Text, Text, int) - Method in class org.apache.hadoop.hbase.HClient
Get the specified number of versions of the specified row and column
get(Text, Text, long, int) - Method in class org.apache.hadoop.hbase.HClient
Get the specified number of versions of the specified row and column with the specified timestamp.
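A hedged sketch of the HClient get variant above, assuming a table has already been created and that openTable (not shown in this part of the index) selects it; the table name, column name, and return type used here are assumptions.

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HClient;
    import org.apache.hadoop.io.Text;

    public class HClientGetExample {
      public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        HClient client = new HClient(conf);
        client.openTable(new Text("webtable"));  // assumed table-selection call
        // Latest value for the given row and column; names are assumptions.
        byte[] latest = client.get(new Text("row1"), new Text("contents:"));
        System.out.println(latest == null ? 0 : latest.length);
      }
    }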
get(HStoreKey, int) - Method in class org.apache.hadoop.hbase.HMemcache
Look back through all the backlog TreeMaps to find the target.
get(Text, Text, Text) - Method in interface org.apache.hadoop.hbase.HRegionInterface
Retrieve a single value from the specified region for the specified row and column keys
get(Text, Text, Text, int) - Method in interface org.apache.hadoop.hbase.HRegionInterface
Get the specified number of versions of the specified row and column
get(Text, Text, Text, long, int) - Method in interface org.apache.hadoop.hbase.HRegionInterface
Get the specified number of versions of the specified row and column with the specified timestamp.
get(Text, Text, Text) - Method in class org.apache.hadoop.hbase.HRegionServer
Retrieve a single value from the specified region for the specified row and column keys
get(Text, Text, Text, int) - Method in class org.apache.hadoop.hbase.HRegionServer
Get the specified number of versions of the specified row and column
get(Text, Text, Text, long, int) - Method in class org.apache.hadoop.hbase.HRegionServer
Get the specified number of versions of the specified row and column with the specified timestamp.
get() - Method in class org.apache.hadoop.hbase.io.ImmutableBytesWritable
Get the data from the BytesWritable.
get() - Method in class org.apache.hadoop.hbase.io.KeyedDataArrayWritable
 
get(long, Writable) - Method in class org.apache.hadoop.io.ArrayFile.Reader
Return the nth value in the file.
get() - Method in class org.apache.hadoop.io.ArrayWritable
 
get() - Method in class org.apache.hadoop.io.BooleanWritable
Returns the value of the BooleanWritable
get() - Method in class org.apache.hadoop.io.BytesWritable
Get the data from the BytesWritable.
get() - Method in class org.apache.hadoop.io.FloatWritable
Return the value of this FloatWritable.
get() - Method in class org.apache.hadoop.io.GenericWritable
Return the wrapped instance.
get() - Method in class org.apache.hadoop.io.IntWritable
Return the value of this IntWritable.
get() - Method in class org.apache.hadoop.io.LongWritable
Return the value of this LongWritable.
get(WritableComparable, Writable) - Method in class org.apache.hadoop.io.MapFile.Reader
Return the value for the named key, or null if none exists.
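A minimal sketch of a MapFile.Reader lookup using the get method above; the directory name and key/value types are assumptions.

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.MapFile;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.io.Writable;

    public class MapFileLookupExample {
      public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        MapFile.Reader reader = new MapFile.Reader(fs, "/data/example.map", conf);
        IntWritable value = new IntWritable();
        Writable found = reader.get(new Text("some-key"), value);  // null if the key is absent
        System.out.println(found == null ? "missing" : value.toString());
        reader.close();
      }
    }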
get() - Static method in class org.apache.hadoop.io.NullWritable
Returns the single instance of this class.
get() - Method in class org.apache.hadoop.io.ObjectWritable
Return the instance, or null if none.
get(Text) - Method in class org.apache.hadoop.io.SequenceFile.Metadata
 
get(WritableComparable) - Method in class org.apache.hadoop.io.SetFile.Reader
Read the matching key from a set into key.
get() - Method in class org.apache.hadoop.io.TwoDArrayWritable
 
get() - Method in class org.apache.hadoop.io.VIntWritable
Return the value of this VIntWritable.
get() - Method in class org.apache.hadoop.io.VLongWritable
Return the value of this VLongWritable.
get(Class) - Static method in class org.apache.hadoop.io.WritableComparator
Get a comparator for a WritableComparable implementation.
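A small sketch of fetching a registered comparator via WritableComparator.get and comparing two values.

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.WritableComparator;

    public class ComparatorExample {
      public static void main(String[] args) {
        // Look up the comparator registered for IntWritable and compare two instances.
        WritableComparator comparator = WritableComparator.get(IntWritable.class);
        int c = comparator.compare(new IntWritable(1), new IntWritable(2));
        System.out.println(c < 0);  // true: 1 sorts before 2
      }
    }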
get() - Static method in class org.apache.hadoop.ipc.Server
Returns the server instance called under or null.
get(DataInput) - Static method in class org.apache.hadoop.record.BinaryRecordInput
Get a thread-local record input for the supplied DataInput.
get(DataOutput) - Static method in class org.apache.hadoop.record.BinaryRecordOutput
Get a thread-local record output for the supplied DataOutput.
get() - Method in class org.apache.hadoop.record.Buffer
Get the data from the Buffer.
get() - Method in class org.apache.hadoop.util.Progress
Returns the overall progress of the root.
getAbsolutePath(String) - Method in class org.apache.hadoop.streaming.PathFinder
Returns the full path name of this file if it is listed in the path
getAddress(Configuration) - Static method in class org.apache.hadoop.mapred.JobTracker
 
getAllTasks() - Method in class org.apache.hadoop.mapred.JobHistory.JobInfo
Returns all map and reduce tasks.
getApproxChkSumLength(long) - Static method in class org.apache.hadoop.fs.ChecksumFileSystem
 
getArchiveClassPaths(Configuration) - Static method in class org.apache.hadoop.filecache.DistributedCache
Get the archive entries in classpath as an array of Path
getArchiveTimestamps(Configuration) - Static method in class org.apache.hadoop.filecache.DistributedCache
Get the timestamps of the archives
getAssignedTracker(String) - Method in class org.apache.hadoop.mapred.JobTracker
Get tracker name for a given task id.
getAttribute(String) - Method in class org.apache.hadoop.mapred.StatusHttpServer
Get the value in the webapp context.
getAttribute(String) - Method in class org.apache.hadoop.metrics.ContextFactory
Returns the value of the named attribute, or null if there is no attribute of that name.
getAttribute(String) - Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
Convenience method for subclasses to access factory attributes.
getAttributeNames() - Method in class org.apache.hadoop.metrics.ContextFactory
Returns the names of all the factory's attributes.
getAttributeTable(String) - Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
Returns an attribute-value map derived from the factory attributes by finding all factory attributes that begin with contextName.tableName.
getAvailable() - Method in class org.apache.hadoop.fs.DF
 
getAvailableSkipRefresh() - Method in class org.apache.hadoop.fs.DF
 
getBasePathInJarOut(String) - Method in class org.apache.hadoop.streaming.JarBuilder
 
getBasicCommand() - Method in class org.apache.hadoop.hbase.shell.BasicCommand
 
getBeginColumn() - Method in class org.apache.hadoop.hbase.shell.generated.SimpleCharStream
 
getBeginColumn() - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
getBeginLine() - Method in class org.apache.hadoop.hbase.shell.generated.SimpleCharStream
 
getBeginLine() - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
getBiggerInt(int[]) - Static method in class org.apache.hadoop.hbase.shell.ConsoleTable
 
getBindAddress() - Method in class org.apache.hadoop.hbase.HServerAddress
 
getBlockLocations(String, long, long) - Method in class org.apache.hadoop.dfs.NameNode
 
getBlocks() - Method in class org.apache.hadoop.fs.s3.INode
 
getBlockSize() - Method in class org.apache.hadoop.dfs.HftpFileSystem.HftpFileStatus
 
getBlockSize(String) - Method in class org.apache.hadoop.dfs.NameNode
 
getBlockSize() - Method in interface org.apache.hadoop.fs.FileStatus
Get the block size of the file.
getBlockSize(Path) - Method in class org.apache.hadoop.fs.FileSystem
Deprecated. Use getFileStatus() instead
getBoolean(String, boolean) - Method in class org.apache.hadoop.conf.Configuration
Returns the value of the name property as a boolean.
getBoundAntProperty(String, String) - Static method in class org.apache.hadoop.streaming.StreamUtil
 
getBytes() - Method in class org.apache.hadoop.io.Text
Returns the raw bytes.
getBytes() - Method in class org.apache.hadoop.io.UTF8
Deprecated. The raw bytes.
getBytes(String) - Static method in class org.apache.hadoop.io.UTF8
Deprecated. Convert a string to a UTF-8 encoded byte array.
getBytes() - Method in class org.onelab.filter.Key
 
getBytesPerChecksum() - Method in class org.apache.hadoop.dfs.DataChecksum
 
getBytesPerSum() - Method in class org.apache.hadoop.fs.ChecksumFileSystem
Return the bytes per checksum.
getBytesRead() - Method in class org.apache.hadoop.io.compress.zlib.ZlibCompressor
Returns the total number of uncompressed bytes input so far.
getBytesRead() - Method in class org.apache.hadoop.io.compress.zlib.ZlibDecompressor
Returns the total number of uncompressed bytes input so far.
getBytesWritten() - Method in class org.apache.hadoop.io.compress.zlib.ZlibCompressor
Returns the total number of compressed bytes output so far.
getBytesWritten() - Method in class org.apache.hadoop.io.compress.zlib.ZlibDecompressor
Returns the total number of compressed bytes output so far.
getCacheArchives(Configuration) - Static method in class org.apache.hadoop.filecache.DistributedCache
Get cache archives set in the Configuration
getCacheFiles(Configuration) - Static method in class org.apache.hadoop.filecache.DistributedCache
Get cache files set in the Configuration
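A hedged sketch pairing getCacheFiles with the corresponding addCacheFile registration call; the HDFS URI is an assumption.

    import java.io.IOException;
    import java.net.URI;
    import org.apache.hadoop.filecache.DistributedCache;
    import org.apache.hadoop.mapred.JobConf;

    public class CacheExample {
      public static void main(String[] args) throws IOException {
        JobConf job = new JobConf();
        // Register a file to be distributed with the job (URI is an assumption),
        // then read back what is currently configured.
        DistributedCache.addCacheFile(URI.create("hdfs://namenode:9000/lib/lookup.dat"), job);
        URI[] files = DistributedCache.getCacheFiles(job);
        System.out.println(files == null ? 0 : files.length);
      }
    }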
getCapacity() - Method in class org.apache.hadoop.dfs.DatanodeInfo
The raw capacity.
getCapacity() - Method in class org.apache.hadoop.fs.DF
 
getCapacity() - Method in class org.apache.hadoop.io.BytesWritable
Get the capacity, which is the maximum size that could be handled without resizing the backing storage.
getCapacity() - Method in class org.apache.hadoop.record.Buffer
Get the capacity, which is the maximum count that could be handled without resizing the backing storage.
getCapacitySkipRefresh() - Method in class org.apache.hadoop.fs.DF
 
getCategory(List<List<Pentomino.ColumnName>>) - Method in class org.apache.hadoop.examples.dancing.Pentomino
Find whether the solution has the x in the upper left quadrant, the x-midline, the y-midline or in the center.
getChecksumFile(Path) - Method in class org.apache.hadoop.fs.ChecksumFileSystem
Return the name of the checksum file associated with a file.
getChecksumFileLength(Path, long) - Method in class org.apache.hadoop.fs.ChecksumFileSystem
Return the length of the checksum file given the size of the actual file.
getChecksumSize() - Method in class org.apache.hadoop.dfs.DataChecksum
 
getChecksumType() - Method in class org.apache.hadoop.dfs.DataChecksum
 
getChunkPosition(long) - Method in class org.apache.hadoop.fs.FSInputChecker
Return position of beginning of chunk containing pos.
getClass(String, Class<?>) - Method in class org.apache.hadoop.conf.Configuration
Returns the value of the name property as a Class.
getClass(String, Class<? extends U>, Class<U>) - Method in class org.apache.hadoop.conf.Configuration
Returns the value of the name property as a Class.
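A minimal sketch of the class-valued Configuration getter above; the property name example.codec.class is made up for illustration.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.compress.CompressionCodec;
    import org.apache.hadoop.io.compress.DefaultCodec;

    public class GetClassExample {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Read a class-valued property with a default and an expected supertype.
        Class<? extends CompressionCodec> codecClass =
            conf.getClass("example.codec.class", DefaultCodec.class, CompressionCodec.class);
        System.out.println(codecClass.getName());
      }
    }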
getClass(String, Configuration) - Static method in class org.apache.hadoop.io.WritableName
Return the class for a name.
getClassByName(String) - Method in class org.apache.hadoop.conf.Configuration
Load a class by name.
getClassByName(String) - Static method in class org.apache.hadoop.contrib.utils.join.DataJoinJob
 
getClassLoader() - Method in class org.apache.hadoop.conf.Configuration
Get the class loader for this job.
getClassName() - Method in exception org.apache.hadoop.ipc.RemoteException
 
getClientVersion() - Method in exception org.apache.hadoop.ipc.RPC.VersionMismatch
Get the client's preferred version
getClosest(WritableComparable, Writable) - Method in class org.apache.hadoop.io.MapFile.Reader
Finds the record that is the closest match to the specified key.
getClusterNick() - Method in class org.apache.hadoop.streaming.StreamJob
 
getClusterStatus() - Method in class org.apache.hadoop.mapred.JobClient
 
getClusterStatus() - Method in interface org.apache.hadoop.mapred.JobSubmissionProtocol
Get the current status of the cluster
getClusterStatus() - Method in class org.apache.hadoop.mapred.JobTracker
 
getCodec(Path) - Method in class org.apache.hadoop.io.compress.CompressionCodecFactory
Find the relevant compression codec for the given file based on its filename suffix.
getCodecClasses(Configuration) - Static method in class org.apache.hadoop.io.compress.CompressionCodecFactory
Get the list of codecs listed in the configuration
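A short sketch of choosing a codec by filename suffix with CompressionCodecFactory and opening a (possibly) decompressed stream; the log path is an assumption.

    import java.io.IOException;
    import java.io.InputStream;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.compress.CompressionCodec;
    import org.apache.hadoop.io.compress.CompressionCodecFactory;

    public class CodecLookupExample {
      public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path file = new Path("/logs/part-00000.gz");   // path is an assumption
        // Pick a codec from the filename suffix; null means no configured codec matches.
        CompressionCodec codec = new CompressionCodecFactory(conf).getCodec(file);
        InputStream in = (codec == null)
            ? fs.open(file)
            : codec.createInputStream(fs.open(file));
        in.close();
      }
    }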
getColumn() - Method in class org.apache.hadoop.hbase.HLogEdit
 
getColumn() - Method in class org.apache.hadoop.hbase.HStoreKey
 
getColumn() - Method in class org.apache.hadoop.hbase.io.BatchOperation
 
getColumn() - Method in class org.apache.hadoop.hbase.shell.DeleteCommand
 
getColumn() - Method in class org.apache.hadoop.hbase.shell.generated.SimpleCharStream
Deprecated.  
getColumn(int) - Method in class org.apache.hadoop.hbase.shell.InsertCommand
 
getColumn() - Method in class org.apache.hadoop.hbase.shell.SelectCommand
 
getColumn() - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
Deprecated.  
getColumnName(int) - Method in class org.apache.hadoop.examples.dancing.DancingLinks
Get the name of a given column as a string
getCombinerClass() - Method in class org.apache.hadoop.mapred.JobConf
 
getCombinerOutput() - Method in class org.apache.hadoop.mapred.lib.aggregate.DoubleValueSum
 
getCombinerOutput() - Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueMax
 
getCombinerOutput() - Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueMin
 
getCombinerOutput() - Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueSum
 
getCombinerOutput() - Method in class org.apache.hadoop.mapred.lib.aggregate.StringValueMax
 
getCombinerOutput() - Method in class org.apache.hadoop.mapred.lib.aggregate.StringValueMin
 
getCombinerOutput() - Method in class org.apache.hadoop.mapred.lib.aggregate.UniqValueCount
 
getCombinerOutput() - Method in interface org.apache.hadoop.mapred.lib.aggregate.ValueAggregator
 
getCombinerOutput() - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueHistogram
 
getCommand() - Method in class org.apache.hadoop.hbase.shell.BasicCommand
Basic commands are their own factories.
getCommand() - Method in interface org.apache.hadoop.hbase.shell.CommandFactory
 
getCompression() - Method in class org.apache.hadoop.hbase.HColumnDescriptor
 
getCompressionCodec() - Method in class org.apache.hadoop.io.SequenceFile.Reader
Returns the compression codec of data in this file.
getCompressionCodec() - Method in class org.apache.hadoop.io.SequenceFile.Writer
Returns the compression codec of data in this file.
getCompressionType(Configuration) - Static method in class org.apache.hadoop.io.SequenceFile
Get the compression type for the reduce outputs
getCompressMapOutput() - Method in class org.apache.hadoop.mapred.JobConf
Are the outputs of the maps compressed?
getCompressorType() - Method in interface org.apache.hadoop.io.compress.CompressionCodec
Get the type of Compressor needed by this CompressionCodec.
getCompressorType() - Method in class org.apache.hadoop.io.compress.DefaultCodec
 
getCompressorType() - Method in class org.apache.hadoop.io.compress.GzipCodec
 
getCompressorType() - Method in class org.apache.hadoop.io.compress.LzoCodec
 
getCompressOutput(JobConf) - Static method in class org.apache.hadoop.mapred.OutputFormatBase
Is the reduce output compressed?
getCondition() - Method in class org.apache.hadoop.hbase.shell.SelectCommand
 
getConditionSize() - Method in class org.apache.hadoop.hbase.shell.SelectCommand
 
getConf() - Method in interface org.apache.hadoop.conf.Configurable
Return the configuration used by this object.
getConf() - Method in class org.apache.hadoop.conf.Configured
 
getConf() - Method in class org.apache.hadoop.fs.FilterFileSystem
 
getConf() - Method in class org.apache.hadoop.hbase.HClient
 
getConf() - Method in class org.apache.hadoop.hbase.HRegion
 
getConf() - Method in class org.apache.hadoop.io.compress.DefaultCodec
 
getConf() - Method in class org.apache.hadoop.io.compress.LzoCodec
 
getConf() - Method in class org.apache.hadoop.io.ObjectWritable
 
getConf() - Method in class org.apache.hadoop.mapred.SequenceFileInputFilter.FilterBase
 
getConf() - Method in class org.apache.hadoop.tools.Logalyzer.LogComparator
 
getConf() - Method in class org.apache.hadoop.util.ToolBase
 
getConfResourceAsInputStream(String) - Method in class org.apache.hadoop.conf.Configuration
Returns an input stream attached to the configuration resource with the given name.
getConfResourceAsReader(String) - Method in class org.apache.hadoop.conf.Configuration
Returns a reader attached to the configuration resource with the given name.
getContentLength(Path) - Method in class org.apache.hadoop.dfs.ChecksumDistributedFileSystem
 
getContentLength(Path) - Method in class org.apache.hadoop.dfs.DistributedFileSystem
 
getContentLength(Path) - Method in class org.apache.hadoop.fs.FileSystem
Return the number of bytes of the given path. If f is a file, return the size of the file; if f is a directory, return the size of the directory tree.
getContext(String) - Method in class org.apache.hadoop.metrics.ContextFactory
Returns the named MetricsContext instance, constructing it if necessary using the factory's current configuration attributes.
getContext(String) - Static method in class org.apache.hadoop.metrics.MetricsUtil
Utility method to return the named context.
getContext() - Method in class org.apache.hadoop.streaming.PipeMapRed
 
getContextFactory() - Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
Returns the factory by which this context was created.
getContextName() - Method in interface org.apache.hadoop.metrics.MetricsContext
Returns the context name.
getContextName() - Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
Returns the context name.
getCorruptFiles() - Method in class org.apache.hadoop.dfs.NamenodeFsck.FsckResult
Return the number of corrupted files.
getCount() - Method in class org.apache.hadoop.record.Buffer
Get the current count of the buffer.
getCounter(Enum) - Method in class org.apache.hadoop.mapred.Counters
Returns current value of the specified counter, or 0 if the counter does not exist.
getCounter(String) - Method in class org.apache.hadoop.mapred.Counters.Group
Returns the value of the specified counter, or 0 if the counter does not exist.
getCounterNames() - Method in class org.apache.hadoop.mapred.Counters.Group
Returns the counters for this group, with their names localized.
getCounters() - Method in interface org.apache.hadoop.mapred.RunningJob
Gets the counters for this job.
getCounters() - Method in class org.apache.hadoop.mapred.TaskReport
A table of counters.
getCurrentSplit(JobConf) - Static method in class org.apache.hadoop.streaming.StreamUtil
 
getCurrentTrashDir() - Method in class org.apache.hadoop.fs.FsShell
Returns the Trash object associated with this shell.
getCurrentValue(Writable) - Method in class org.apache.hadoop.io.SequenceFile.Reader
Get the 'value' corresponding to the last read 'key'.
getCurrentValue(Writable) - Method in class org.apache.hadoop.mapred.SequenceFileRecordReader
 
getData() - Method in class org.apache.hadoop.contrib.utils.join.TaggedMapOutput
 
getData() - Method in class org.apache.hadoop.hbase.io.KeyedData
 
getData() - Method in class org.apache.hadoop.io.DataOutputBuffer
Returns the current contents of the buffer.
getDataNode() - Static method in class org.apache.hadoop.dfs.DataNode
Return the DataNode object
getDatanodeReport() - Method in class org.apache.hadoop.dfs.DatanodeInfo
A formatted string for reporting the status of the DataNode.
getDatanodeReport() - Method in class org.apache.hadoop.dfs.NameNode
 
getDataNodeStats() - Method in class org.apache.hadoop.dfs.ChecksumDistributedFileSystem
Return statistics for each datanode.
getDataNodeStats() - Method in class org.apache.hadoop.dfs.DistributedFileSystem
Return statistics for each datanode.
getDate() - Static method in class org.apache.hadoop.util.VersionInfo
The date that Hadoop was compiled.
getDeclaredClass() - Method in class org.apache.hadoop.io.ObjectWritable
Return the class this is meant to be.
getDecompressorType() - Method in interface org.apache.hadoop.io.compress.CompressionCodec
Get the type of Decompressor needed by this CompressionCodec.
getDecompressorType() - Method in class org.apache.hadoop.io.compress.DefaultCodec
 
getDecompressorType() - Method in class org.apache.hadoop.io.compress.GzipCodec
 
getDecompressorType() - Method in class org.apache.hadoop.io.compress.LzoCodec
 
getDefaultBlockSize() - Method in class org.apache.hadoop.dfs.DistributedFileSystem
 
getDefaultBlockSize() - Method in class org.apache.hadoop.fs.FileSystem
Return the number of bytes that large input files should optimally be split into to minimize i/o time.
getDefaultBlockSize() - Method in class org.apache.hadoop.fs.FilterFileSystem
Return the number of bytes that large input files should optimally be split into to minimize i/o time.
getDefaultExtension() - Method in interface org.apache.hadoop.io.compress.CompressionCodec
Get the default filename extension for this kind of compression.
getDefaultExtension() - Method in class org.apache.hadoop.io.compress.DefaultCodec
 
getDefaultExtension() - Method in class org.apache.hadoop.io.compress.GzipCodec
 
getDefaultExtension() - Method in class org.apache.hadoop.io.compress.LzoCodec
Get the default filename extension for this kind of compression.
getDefaultHost(String, String) - Static method in class org.apache.hadoop.net.DNS
Returns the default (first) host name associated by the provided nameserver with the address bound to the specified network interface
getDefaultHost(String) - Static method in class org.apache.hadoop.net.DNS
Returns the default (first) host name associated by the default nameserver with the address bound to the specified network interface
getDefaultIP(String) - Static method in class org.apache.hadoop.net.DNS
Returns the first available IP address associated with the provided network interface
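A small sketch of the DNS helpers above; eth0 is an assumed interface name and "default" is the conventional value for the default nameserver.

    import java.net.UnknownHostException;
    import org.apache.hadoop.net.DNS;

    public class DnsExample {
      public static void main(String[] args) throws UnknownHostException {
        // Host name via an assumed interface and the default nameserver,
        // then the first IP bound to that interface.
        String host = DNS.getDefaultHost("eth0", "default");
        String ip   = DNS.getDefaultIP("eth0");
        System.out.println(host + " / " + ip);
      }
    }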
getDefaultReplication() - Method in class org.apache.hadoop.dfs.DistributedFileSystem
 
getDefaultReplication() - Method in class org.apache.hadoop.fs.FileSystem
Get the default replication.
getDefaultReplication() - Method in class org.apache.hadoop.fs.FilterFileSystem
Get the default replication.
getDependingJobs() - Method in class org.apache.hadoop.mapred.jobcontrol.Job
 
getDescription() - Method in interface org.apache.hadoop.dfs.Upgradeable
Description of the upgrade object for displaying.
getDiagnostics() - Method in class org.apache.hadoop.mapred.TaskReport
A list of error messages.
getDigest() - Method in class org.apache.hadoop.io.MD5Hash
Returns the digest bytes.
getDirPath() - Method in class org.apache.hadoop.fs.DF
 
getDisplayName() - Method in class org.apache.hadoop.mapred.Counters.Group
Returns localized name of the group.
getDisplayName(String) - Method in class org.apache.hadoop.mapred.Counters.Group
Returns localized name of the specified counter.
getDistance(DatanodeDescriptor, DatanodeDescriptor) - Method in class org.apache.hadoop.net.NetworkTopology
Return the distance between two data nodes. It is assumed that the distance from a node to its parent is 1. The distance between two nodes is calculated by summing up their distances to their closest common ancestor.
getDoubleValue(Object) - Method in class org.apache.hadoop.contrib.utils.join.JobBase
 
getDU(File) - Static method in class org.apache.hadoop.fs.FileUtil
Takes an input dir and returns the du on that local directory.
getEditLogSize() - Method in class org.apache.hadoop.dfs.NameNode
Returns the size of the current edit log.
getEmptier() - Method in class org.apache.hadoop.fs.Trash
Return a Runnable that periodically empties the trash.
getEndColumn() - Method in class org.apache.hadoop.hbase.shell.generated.SimpleCharStream
 
getEndColumn() - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
getEndKey() - Method in class org.apache.hadoop.hbase.HRegion
 
getEndKey() - Method in class org.apache.hadoop.hbase.HRegionInfo
 
getEndLine() - Method in class org.apache.hadoop.hbase.shell.generated.SimpleCharStream
 
getEndLine() - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
getEndRow() - Method in class org.apache.hadoop.hbase.mapred.TableSplit
 
getEntry(MapFile.Reader[], Partitioner, WritableComparable, Writable) - Static method in class org.apache.hadoop.mapred.MapFileOutputFormat
Get an entry from output generated by this class.
getError() - Static method in class org.apache.hadoop.metrics.jvm.EventCounter
 
getEventId() - Method in class org.apache.hadoop.mapred.TaskCompletionEvent
Returns event Id.
getExcludedHosts() - Method in class org.apache.hadoop.util.HostsFileReader
 
getExecString() - Method in class org.apache.hadoop.fs.Command
Return an array containing the command name and its parameters.
getExecutable(JobConf) - Static method in class org.apache.hadoop.mapred.pipes.Submitter
Get the URI of the application's executable.
getFactor() - Method in class org.apache.hadoop.io.SequenceFile.Sorter
Get the number of streams to merge at once.
getFactory(Class) - Static method in class org.apache.hadoop.io.WritableFactories
Return the previously defined factory for a class.
getFactory() - Static method in class org.apache.hadoop.metrics.ContextFactory
Returns the singleton ContextFactory instance, constructing it if necessary.
getFailedJobs() - Method in class org.apache.hadoop.mapred.jobcontrol.JobControl
 
getFatal() - Static method in class org.apache.hadoop.metrics.jvm.EventCounter
 
getFile(String, String) - Method in class org.apache.hadoop.conf.Configuration
Returns a local file name under a directory named in dirsProp with the given path.
getFile() - Method in class org.apache.hadoop.mapred.FileSplit
Deprecated. Call FileSplit.getPath() instead.
getFileCacheHints(Path, long, long) - Method in class org.apache.hadoop.dfs.DistributedFileSystem
 
getFileCacheHints(Path, long, long) - Method in class org.apache.hadoop.fs.FileSystem
Return a 2D array of size 1x1 or greater, containing hostnames where portions of the given file can be found.
getFileCacheHints(Path, long, long) - Method in class org.apache.hadoop.fs.FilterFileSystem
Return a 2D array of size 1x1 or greater, containing hostnames where portions of the given file can be found.
getFileCacheHints(Path, long, long) - Method in class org.apache.hadoop.mapred.PhasedFileSystem
Deprecated.  
getFileClassPaths(Configuration) - Static method in class org.apache.hadoop.filecache.DistributedCache
Get the file entries in classpath as an array of Path
getFileInfo(String) - Method in class org.apache.hadoop.dfs.NameNode
 
getFileLength() - Method in class org.apache.hadoop.dfs.LocatedBlocks
 
getFileName() - Method in class org.apache.hadoop.metrics.file.FileContext
Returns the configured file name, or null.
getFiles(PathFilter) - Method in class org.apache.hadoop.fs.InMemoryFileSystem
 
getFileStatus(Path) - Method in class org.apache.hadoop.dfs.ChecksumDistributedFileSystem
Returns the stat information about the file.
getFileStatus(Path) - Method in class org.apache.hadoop.dfs.DistributedFileSystem
Returns the stat information about the file.
getFileStatus(Path) - Method in class org.apache.hadoop.dfs.HftpFileSystem
 
getFileStatus(Path) - Method in class org.apache.hadoop.fs.FileSystem
 
getFileStatus(Path) - Method in class org.apache.hadoop.fs.FilterFileSystem
Get file status.
getFileStatus(Path) - Method in class org.apache.hadoop.fs.RawLocalFileSystem
 
getFileStatus(Path) - Method in class org.apache.hadoop.fs.s3.S3FileSystem
FileStatus for S3 file systems.
getFilesystem() - Method in class org.apache.hadoop.fs.DF
 
getFileSystem(Configuration) - Method in class org.apache.hadoop.fs.Path
Return the FileSystem that owns this Path.
getFilesystem() - Method in class org.apache.hadoop.hbase.HRegion
 
getFileSystem() - Method in class org.apache.hadoop.mapred.TaskTracker
Return the DFS filesystem
getFilesystemName() - Method in interface org.apache.hadoop.mapred.JobSubmissionProtocol
A MapReduce system always operates on a single filesystem.
getFilesystemName() - Method in class org.apache.hadoop.mapred.JobTracker
Grab the local fs name
getFileTimestamps(Configuration) - Static method in class org.apache.hadoop.filecache.DistributedCache
Get the timestamps of the files
getFileType() - Method in class org.apache.hadoop.fs.s3.INode
 
getFinishTime() - Method in class org.apache.hadoop.mapred.TaskReport
Get finish time of task.
getFlippable() - Method in class org.apache.hadoop.examples.dancing.Pentomino.Piece
 
getFloat(String, float) - Method in class org.apache.hadoop.conf.Configuration
Returns the value of the name property as a float.
getFormattedTimeWithDiff(DateFormat, long, long) - Static method in class org.apache.hadoop.util.StringUtils
Formats time in ms and appends difference (finishTime - startTime) as returned by formatTimeDiff().
getFs() - Method in class org.apache.hadoop.mapred.JobClient
Get a filesystem handle.
getFsEditName() - Method in class org.apache.hadoop.dfs.NameNode
Returns the name of the edits file
getFsImageName() - Method in class org.apache.hadoop.dfs.NameNode
Returns the name of the fsImage file
getFsImageNameCheckpoint() - Method in class org.apache.hadoop.dfs.NameNode
Returns the name of the fsImage file uploaded by periodic checkpointing
getFSSize() - Method in class org.apache.hadoop.fs.InMemoryFileSystem
 
getFull(HStoreKey) - Method in class org.apache.hadoop.hbase.HMemcache
Return all the available columns for the given key.
getGroup(String) - Method in class org.apache.hadoop.mapred.Counters
Returns the named counter group, or an empty group if there is none with the specified name.
getGroupNames() - Method in class org.apache.hadoop.mapred.Counters
Returns the names of all counter classes.
getHadoopClientHome() - Method in class org.apache.hadoop.streaming.StreamJob
 
getHost() - Method in class org.apache.hadoop.dfs.DatanodeID
 
getHost() - Method in class org.apache.hadoop.streaming.Environment
 
getHostName() - Method in class org.apache.hadoop.dfs.DatanodeInfo
 
getHostname() - Static method in class org.apache.hadoop.util.StringUtils
Return hostname without throwing exception.
getHosts(String, String) - Static method in class org.apache.hadoop.net.DNS
Returns all the host names associated by the provided nameserver with the address bound to the specified network interface
getHosts(String) - Static method in class org.apache.hadoop.net.DNS
Returns all the host names associated by the default nameserver with the address bound to the specified network interface
getHosts() - Method in class org.apache.hadoop.util.HostsFileReader
 
getHRegionConnection(HServerAddress) - Method in class org.apache.hadoop.hbase.HClient
Establishes a connection to the region server at the specified address.
getId() - Method in class org.apache.hadoop.fs.s3.Block
 
GetImage() - Method in class org.apache.hadoop.hbase.shell.generated.SimpleCharStream
 
GetImage() - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
GetImageServlet - Class in org.apache.hadoop.dfs
This class is used in Namesystem's jetty to retrieve a file.
GetImageServlet() - Constructor for class org.apache.hadoop.dfs.GetImageServlet
 
getIndexInterval() - Method in class org.apache.hadoop.io.MapFile.Writer
The number of entries that are added before an index entry is added.
getInetSocketAddress() - Method in class org.apache.hadoop.hbase.HServerAddress
 
getInfo() - Static method in class org.apache.hadoop.metrics.jvm.EventCounter
 
getInfoPort() - Method in class org.apache.hadoop.dfs.DatanodeID
 
getInfoPort() - Method in class org.apache.hadoop.mapred.JobTracker
 
getInputFormat() - Method in class org.apache.hadoop.mapred.JobConf
 
getInputKeyClass() - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Call RecordReader.createKey().
getInputPaths() - Method in class org.apache.hadoop.mapred.JobConf
 
getInputSplit() - Method in interface org.apache.hadoop.mapred.Reporter
Get the InputSplit object for a map.
getInputValueClass() - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Call RecordReader.createValue().
getInt(String, int) - Method in class org.apache.hadoop.conf.Configuration
Returns the value of the name property as an integer.
getInt() - Method in class org.apache.hadoop.hbase.shell.generated.Parser
 
getInterfaceName() - Method in exception org.apache.hadoop.ipc.RPC.VersionMismatch
Get the interface name
getIPs(String) - Static method in class org.apache.hadoop.net.DNS
Returns all the IPs associated with the provided interface, if any, in textual form.
getIsJavaMapper(JobConf) - Static method in class org.apache.hadoop.mapred.pipes.Submitter
Check whether the job is using a Java Mapper.
getIsJavaRecordReader(JobConf) - Static method in class org.apache.hadoop.mapred.pipes.Submitter
Check whether the job is using a Java RecordReader
getIsJavaRecordWriter(JobConf) - Static method in class org.apache.hadoop.mapred.pipes.Submitter
Will the reduce use a Java RecordWriter?
getIsJavaReducer(JobConf) - Static method in class org.apache.hadoop.mapred.pipes.Submitter
Check whether the job is using a Java Reducer.
getJar() - Method in class org.apache.hadoop.mapred.JobConf
 
getJob(String) - Method in class org.apache.hadoop.mapred.JobClient
Get a RunningJob object to track an ongoing job.
getJob(String) - Method in class org.apache.hadoop.mapred.JobTracker
 
getJobClient() - Method in class org.apache.hadoop.mapred.TaskTracker
The connection to the JobTracker, used by the TaskRunner for locating remote files.
getJobConf() - Method in class org.apache.hadoop.mapred.jobcontrol.Job
 
getJobCounters(String) - Method in interface org.apache.hadoop.mapred.JobSubmissionProtocol
Grab the current job counters
getJobCounters(String) - Method in class org.apache.hadoop.mapred.JobTracker
 
getJobFile() - Method in class org.apache.hadoop.mapred.JobProfile
Get the configuration file for the job.
getJobFile() - Method in interface org.apache.hadoop.mapred.RunningJob
Returns the path of the submitted job.
getJobID() - Method in class org.apache.hadoop.mapred.jobcontrol.Job
 
getJobId() - Method in class org.apache.hadoop.mapred.JobProfile
Get the job id.
getJobId() - Method in class org.apache.hadoop.mapred.JobStatus
 
getJobID() - Method in interface org.apache.hadoop.mapred.RunningJob
Returns an identifier for the job
getJobName() - Method in class org.apache.hadoop.mapred.JobConf
Get the user-specified job name.
getJobName() - Method in class org.apache.hadoop.mapred.jobcontrol.Job
 
getJobName() - Method in class org.apache.hadoop.mapred.JobProfile
Get the user-specified job name.
getJobName() - Method in interface org.apache.hadoop.mapred.RunningJob
Returns the name of the job
getJobPriority() - Method in class org.apache.hadoop.mapred.JobConf
Get the job priority for this job.
getJobProfile(String) - Method in interface org.apache.hadoop.mapred.JobSubmissionProtocol
Grab a handle to a job that is already known to the JobTracker.
getJobProfile(String) - Method in class org.apache.hadoop.mapred.JobTracker
 
getJobStatus(String) - Method in interface org.apache.hadoop.mapred.JobSubmissionProtocol
Grab a handle to a job that is already known to the JobTracker.
getJobStatus(String) - Method in class org.apache.hadoop.mapred.JobTracker
 
getJobTrackerHostPort() - Method in class org.apache.hadoop.streaming.StreamJob
 
getJobTrackerMachine() - Method in class org.apache.hadoop.mapred.JobTracker
 
getKeepCommandFile(JobConf) - Static method in class org.apache.hadoop.mapred.pipes.Submitter
Does the user want to keep the command file for debugging? If this is true, pipes will write a copy of the command data to a file in the task directory named "downlink.data", which may be used to run the C++ program under the debugger.
getKeepFailedTaskFiles() - Method in class org.apache.hadoop.mapred.JobConf
Should the temporary files for failed tasks be kept?
getKeepTaskFilesPattern() - Method in class org.apache.hadoop.mapred.JobConf
Get the regular expression that is matched against the task names to see if we need to keep the files.
getKey() - Method in class org.apache.hadoop.hbase.io.KeyedData
 
getKey() - Method in interface org.apache.hadoop.io.SequenceFile.Sorter.RawKeyValueIterator
Gets the current raw key
getKey() - Method in class org.apache.hadoop.io.SequenceFile.Sorter.SegmentDescriptor
Returns the stored rawKey
getKeyClass() - Method in class org.apache.hadoop.io.MapFile.Reader
Returns the class of keys in this file.
getKeyClass() - Method in class org.apache.hadoop.io.SequenceFile.Reader
Returns the class of keys in this file.
getKeyClass() - Method in class org.apache.hadoop.io.SequenceFile.Writer
Returns the class of keys in this file.
getKeyClass() - Method in class org.apache.hadoop.io.WritableComparator
Returns the WritableComparable implementation class.
getKeyClass() - Method in class org.apache.hadoop.mapred.KeyValueLineRecordReader
 
getKeyClass() - Method in class org.apache.hadoop.mapred.SequenceFileRecordReader
The class of key that must be passed to SequenceFileRecordReader.next(Writable,Writable).
getLastUpdate() - Method in class org.apache.hadoop.dfs.DatanodeInfo
The time when this information was accurate.
getLen() - Method in class org.apache.hadoop.dfs.HftpFileSystem.HftpFileStatus
 
getLen() - Method in interface org.apache.hadoop.fs.FileStatus
 
getLength(Path) - Method in class org.apache.hadoop.fs.FileSystem
Deprecated. Use getFileStatus() instead
getLength() - Method in class org.apache.hadoop.fs.s3.Block
 
getLength() - Method in class org.apache.hadoop.hbase.mapred.TableSplit
 
getLength() - Method in class org.apache.hadoop.io.DataInputBuffer
Returns the length of the input.
getLength() - Method in class org.apache.hadoop.io.DataOutputBuffer
Returns the length of the valid data currently in the buffer.
getLength() - Method in class org.apache.hadoop.io.SequenceFile.Writer
Returns the current length of the output file.
getLength() - Method in class org.apache.hadoop.io.Text
Returns the number of bytes in the byte array
getLength() - Method in class org.apache.hadoop.io.UTF8
Deprecated. The number of bytes in the encoded string.
getLength() - Method in class org.apache.hadoop.mapred.FileSplit
The number of bytes in the file to process.
getLength() - Method in interface org.apache.hadoop.mapred.InputSplit
Get the number of input bytes in the split.
getLength() - Method in class org.apache.hadoop.mapred.MultiFileSplit
 
getLength(int) - Method in class org.apache.hadoop.mapred.MultiFileSplit
Returns the length of the ith Path
getLengths() - Method in class org.apache.hadoop.mapred.MultiFileSplit
Returns an array containing the lengths of the files in the split
getLevel() - Method in class org.apache.hadoop.dfs.DatanodeInfo
Return this node's level in the tree.
getLevel() - Method in interface org.apache.hadoop.net.Node
Return this node's level in the tree.
getLevel() - Method in class org.apache.hadoop.net.NodeBase
Return this node's level in the tree.
getLine() - Method in class org.apache.hadoop.hbase.shell.generated.SimpleCharStream
Deprecated.  
getLine() - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
Deprecated.  
getListenerAddress() - Method in class org.apache.hadoop.ipc.Server
Return the socket (ip+port) on which the RPC server is listening.
getListing(String) - Method in class org.apache.hadoop.dfs.NameNode
 
getLiteralValues() - Method in class org.apache.hadoop.hbase.shell.generated.Parser
 
getLoadNativeLibraries(JobConf) - Method in class org.apache.hadoop.util.NativeCodeLoader
Return whether native hadoop libraries, if present, can be used for this job.
getLocal(Configuration) - Static method in class org.apache.hadoop.fs.FileSystem
Get the local file system.
getLocalCache(URI, Configuration, Path, boolean, long, Path) - Static method in class org.apache.hadoop.filecache.DistributedCache
Get the locally cached file or archive; it could either be previously cached (and valid) or copied from the FileSystem now.
getLocalCacheArchives(Configuration) - Static method in class org.apache.hadoop.filecache.DistributedCache
Return the path array of the localized caches
getLocalCacheFiles(Configuration) - Static method in class org.apache.hadoop.filecache.DistributedCache
Return the path array of the localized files
getLocalDirs() - Method in class org.apache.hadoop.mapred.JobConf
 
getLocalPath(String, String) - Method in class org.apache.hadoop.conf.Configuration
Returns a local file under a directory named in dirsProp with the given path.
getLocalPath(String) - Method in class org.apache.hadoop.mapred.JobConf
Constructs a local file name.
getLocalPathForWrite(String, Configuration) - Method in class org.apache.hadoop.fs.LocalDirAllocator
Get a path from the local FS.
getLocalPathForWrite(String, long, Configuration) - Method in class org.apache.hadoop.fs.LocalDirAllocator
Get a path from the local FS.
getLocalPathToRead(String, Configuration) - Method in class org.apache.hadoop.fs.LocalDirAllocator
Get a path from the local FS for reading.
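A hedged sketch of LocalDirAllocator allocation across the directories named by a configuration property; mapred.local.dir and the relative file name are assumptions.

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.LocalDirAllocator;
    import org.apache.hadoop.fs.Path;

    public class LocalDirExample {
      public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        // Bound to the comma-separated directory list named by a configuration property.
        LocalDirAllocator allocator = new LocalDirAllocator("mapred.local.dir");
        Path scratch = allocator.getLocalPathForWrite("task/tmpfile", conf);
        Path again   = allocator.getLocalPathToRead("task/tmpfile", conf);
        System.out.println(scratch + " " + again);
      }
    }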
getLocatedBlocks() - Method in class org.apache.hadoop.dfs.LocatedBlocks
Get located blocks.
getLocations() - Method in class org.apache.hadoop.hbase.mapred.TableSplit
 
getLocations() - Method in class org.apache.hadoop.mapred.FileSplit
 
getLocations() - Method in interface org.apache.hadoop.mapred.InputSplit
Get the list of hostnames where the input split is located.
getLocations() - Method in class org.apache.hadoop.mapred.MultiFileSplit
 
getLog() - Method in class org.apache.hadoop.hbase.HRegion
 
getLong(String, long) - Method in class org.apache.hadoop.conf.Configuration
Returns the value of the name property as a long.
getLongValue(Object) - Method in class org.apache.hadoop.contrib.utils.join.JobBase
 
getMapCompletionEvents(String, int, int) - Method in class org.apache.hadoop.mapred.TaskTracker
 
getMapCount(int, long, JobClient) - Method in class org.apache.hadoop.util.CopyFiles.CopyFilesMapper
Calculate how many maps to run.
getMapOutputCompressionType() - Method in class org.apache.hadoop.mapred.JobConf
Get the compression type for the map outputs.
getMapOutputCompressorClass(Class<? extends CompressionCodec>) - Method in class org.apache.hadoop.mapred.JobConf
Get the codec for compressing the map outputs
getMapOutputKeyClass() - Method in class org.apache.hadoop.mapred.JobConf
Get the key class for the map output data.
getMapOutputValueClass() - Method in class org.apache.hadoop.mapred.JobConf
Get the value class for the map output data.
getMapperClass() - Method in class org.apache.hadoop.mapred.JobConf
 
getMapredJobID() - Method in class org.apache.hadoop.mapred.jobcontrol.Job
 
getMapRunnerClass() - Method in class org.apache.hadoop.mapred.JobConf
 
getMapTaskReports(String) - Method in class org.apache.hadoop.mapred.JobClient
Get the information of the current state of the map tasks of a job.
getMapTaskReports(String) - Method in interface org.apache.hadoop.mapred.JobSubmissionProtocol
Grab a bunch of info on the map tasks that make up the job
getMapTaskReports(String) - Method in class org.apache.hadoop.mapred.JobTracker
 
getMapTasks() - Method in class org.apache.hadoop.mapred.ClusterStatus
The number of currently running map tasks.
getMasterAddress() - Method in class org.apache.hadoop.hbase.HMaster
 
getMaxMapAttempts() - Method in class org.apache.hadoop.mapred.JobConf
Get the configured number of maximum attempts that will be made to run a map task, as specified by the mapred.map.max.attempts property.
getMaxMapTaskFailuresPercent() - Method in class org.apache.hadoop.mapred.JobConf
Get the maximum percentage of map tasks that can fail without the job being aborted.
getMaxReduceAttempts() - Method in class org.apache.hadoop.mapred.JobConf
Get the configured number of maximum attempts that will be made to run a reduce task, as specified by the mapred.reduce.max.attempts property.
getMaxReduceTaskFailuresPercent() - Method in class org.apache.hadoop.mapred.JobConf
Get the maximum percentage of reduce tasks that can fail without the job being aborted.
getMaxTaskFailuresPerTracker() - Method in class org.apache.hadoop.mapred.JobConf
Get the maximum number of failures of a given job that are tolerated per tasktracker.
getMaxTasks() - Method in class org.apache.hadoop.mapred.ClusterStatus
The maximum capacity for running tasks in the cluster.
getMaxVersions() - Method in class org.apache.hadoop.hbase.HColumnDescriptor
 
getMemory() - Method in class org.apache.hadoop.io.SequenceFile.Sorter
Get the total amount of buffer memory, in bytes.
getMessage() - Method in exception org.apache.hadoop.hbase.shell.generated.ParseException
This method has the standard behavior when this object has been created using the standard constructors.
getMessage() - Method in error org.apache.hadoop.hbase.shell.generated.TokenMgrError
You can also modify the body of this method to customize your error messages.
getMessage() - Method in exception org.apache.hadoop.mapred.InvalidInputException
Get a summary message of the problems found.
getMessage() - Method in class org.apache.hadoop.mapred.jobcontrol.Job
 
getMessage() - Method in exception org.apache.hadoop.record.compiler.generated.ParseException
This method has the standard behavior when this object has been created using the standard constructors.
getMessage() - Method in error org.apache.hadoop.record.compiler.generated.TokenMgrError
You can also modify the body of this method to customize your error messages.
getMetadata() - Method in class org.apache.hadoop.io.SequenceFile.Metadata
 
getMetadata() - Method in class org.apache.hadoop.io.SequenceFile.Reader
Returns the metadata object of the file
getMetric(String) - Method in class org.apache.hadoop.metrics.spi.OutputRecord
Returns the metric object which can be a Float, Integer, Short or Byte.
getMetricNames() - Method in class org.apache.hadoop.metrics.spi.OutputRecord
Returns the set of metric names.
getMissingIds() - Method in class org.apache.hadoop.dfs.NamenodeFsck.FsckResult
Return a list of missing block names (as list of Strings).
getMissingSize() - Method in class org.apache.hadoop.dfs.NamenodeFsck.FsckResult
Return total size of missing data, in bytes.
getModificationTime() - Method in class org.apache.hadoop.dfs.HftpFileSystem.HftpFileStatus
 
getModificationTime() - Method in interface org.apache.hadoop.fs.FileStatus
Get the modification time of the file.
getMount() - Method in class org.apache.hadoop.fs.DF
 
getMsg() - Method in class org.apache.hadoop.hbase.HMsg
Accessor
getMsg() - Method in class org.apache.hadoop.hbase.shell.ReturnMsg
 
getName() - Method in class org.apache.hadoop.dfs.DatanodeID
 
getName() - Method in class org.apache.hadoop.dfs.DistributedFileSystem
Deprecated.  
getName() - Method in class org.apache.hadoop.examples.dancing.Pentomino.Piece
 
getName() - Method in class org.apache.hadoop.fs.FileSystem
Deprecated. Call getUri() instead.
getName() - Method in class org.apache.hadoop.fs.FilterFileSystem
Deprecated. Call getUri() instead.
getName() - Method in class org.apache.hadoop.fs.Path
Returns the final component of this path.
getName() - Method in class org.apache.hadoop.fs.RawLocalFileSystem
Deprecated.  
getName() - Method in class org.apache.hadoop.fs.s3.S3FileSystem
 
getName() - Method in class org.apache.hadoop.hbase.HColumnDescriptor
 
getName() - Method in class org.apache.hadoop.hbase.HTableDescriptor
 
getName(Class) - Static method in class org.apache.hadoop.io.WritableName
Return the name for a class.
getName() - Method in class org.apache.hadoop.mapred.Counters.Group
Returns raw name of the group.
getName() - Method in class org.apache.hadoop.mapred.PhasedFileSystem
Deprecated.  
getName() - Method in interface org.apache.hadoop.net.Node
Return this node's name
getName() - Method in class org.apache.hadoop.net.NodeBase
Return this node's name
getNamed(String, Configuration) - Static method in class org.apache.hadoop.fs.FileSystem
Deprecated. Call get(URI, Configuration) instead.
getNamenode() - Method in class org.apache.hadoop.dfs.DataNode
Return the namenode's identifier
getNameNodeAddr() - Method in class org.apache.hadoop.dfs.DataNode
 
getNameNodeAddress() - Method in class org.apache.hadoop.dfs.NameNode
Returns the address on which the NameNode is listening.
getNetworkLocation() - Method in class org.apache.hadoop.dfs.DatanodeInfo
rack name
getNetworkLocation() - Method in interface org.apache.hadoop.net.Node
Return the string representation of this node's network location
getNetworkLocation() - Method in class org.apache.hadoop.net.NodeBase
Return this node's network location
getNewJobId() - Method in interface org.apache.hadoop.mapred.JobSubmissionProtocol
Allocate a name for the job.
getNewJobId() - Method in class org.apache.hadoop.mapred.JobTracker
Allocates a new JobId string.
getNextToken() - Method in class org.apache.hadoop.hbase.shell.generated.Parser
 
getNextToken() - Method in class org.apache.hadoop.hbase.shell.generated.ParserTokenManager
 
getNextToken() - Method in class org.apache.hadoop.record.compiler.generated.Rcc
 
getNextToken() - Method in class org.apache.hadoop.record.compiler.generated.RccTokenManager
 
getNullContext(String) - Static method in class org.apache.hadoop.metrics.ContextFactory
Returns a "null" context - one which does nothing.
getNumber() - Method in class org.apache.hadoop.metrics.spi.MetricValue
 
getNumberColumns() - Method in class org.apache.hadoop.examples.dancing.DancingLinks
Get the number of columns.
getNumBytesInSum() - Method in class org.apache.hadoop.dfs.DataChecksum
 
getNumFiles(PathFilter) - Method in class org.apache.hadoop.fs.InMemoryFileSystem
 
getNumMapTasks() - Method in class org.apache.hadoop.mapred.JobConf
 
getNumOfLeaves() - Method in class org.apache.hadoop.net.NetworkTopology
Return the total number of data nodes
getNumOfRacks() - Method in class org.apache.hadoop.net.NetworkTopology
Return the total number of racks
getNumPaths() - Method in class org.apache.hadoop.mapred.MultiFileSplit
Returns the number of Paths in the split
getNumReduceTasks() - Method in class org.apache.hadoop.mapred.JobConf
 
getObject(String) - Method in class org.apache.hadoop.conf.Configuration
Deprecated. A side map of Configuration to Object should be used instead.
getOp() - Method in class org.apache.hadoop.hbase.io.BatchOperation
 
getOutputCompressorClass(JobConf, Class) - Static method in class org.apache.hadoop.mapred.OutputFormatBase
Get the codec for compressing the reduce outputs
getOutputFormat() - Method in class org.apache.hadoop.mapred.JobConf
 
getOutputKeyClass() - Method in class org.apache.hadoop.mapred.JobConf
 
getOutputKeyComparator() - Method in class org.apache.hadoop.mapred.JobConf
 
getOutputPath() - Method in class org.apache.hadoop.mapred.JobConf
 
getOutputValueClass() - Method in class org.apache.hadoop.mapred.JobConf
 
getOutputValueGroupingComparator() - Method in class org.apache.hadoop.mapred.JobConf
Get the user defined comparator for grouping values.
getOverReplicatedBlocks() - Method in class org.apache.hadoop.dfs.NamenodeFsck.FsckResult
Return the number of over-replicated blocks.
getParent() - Method in class org.apache.hadoop.dfs.DatanodeInfo
Return this node's parent
getParent() - Method in class org.apache.hadoop.fs.Path
Returns the parent of a path or null if at root.
getParent() - Method in interface org.apache.hadoop.net.Node
Return this node's parent
getParent() - Method in class org.apache.hadoop.net.NodeBase
Return this node's parent
getPartition(WritableComparable, Writable, int) - Method in class org.apache.hadoop.mapred.lib.HashPartitioner
Use Object.hashCode() to partition.
getPartition(WritableComparable, Writable, int) - Method in class org.apache.hadoop.mapred.lib.KeyFieldBasedPartitioner
Use Object.hashCode() to partition.
getPartition(WritableComparable, Writable, int) - Method in interface org.apache.hadoop.mapred.Partitioner
Returns the partition number for a given entry, given the total number of partitions.
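
A minimal stand-alone sketch of the hash-and-modulo technique the getPartition entries above describe; the class and method below are hypothetical illustrations, not the Hadoop HashPartitioner itself.

    // Hypothetical sketch of hashCode()-based partitioning.
    public class HashPartitionSketch {
        // Returns a partition index in [0, numPartitions).
        static int getPartition(Object key, int numPartitions) {
            // Mask the sign bit so negative hash codes still map to a valid index.
            return (key.hashCode() & Integer.MAX_VALUE) % numPartitions;
        }

        public static void main(String[] args) {
            System.out.println(getPartition("some-key", 4)); // prints a value between 0 and 3
        }
    }
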
getPartitionerClass() - Method in class org.apache.hadoop.mapred.JobConf
 
getPath() - Method in class org.apache.hadoop.dfs.DatanodeInfo
 
getPath() - Method in class org.apache.hadoop.dfs.HftpFileSystem.HftpFileStatus
 
getPath() - Method in class org.apache.hadoop.mapred.FileSplit
The file containing this split's data.
getPath(int) - Method in class org.apache.hadoop.mapred.MultiFileSplit
Returns the ith Path
getPath() - Method in class org.apache.hadoop.net.NodeBase
Return this node's path
getPaths() - Method in class org.apache.hadoop.mapred.MultiFileSplit
Returns all the Paths in the split
getPercentUsed() - Method in class org.apache.hadoop.fs.DF
 
getPercentUsed() - Method in class org.apache.hadoop.fs.InMemoryFileSystem
 
getPercentUsedSkipRefresh() - Method in class org.apache.hadoop.fs.DF
 
getPeriod() - Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
Returns the timer period.
getPlatformName() - Static method in class org.apache.hadoop.util.PlatformName
Get the complete platform as per the java-vm.
getPort() - Method in class org.apache.hadoop.dfs.DatanodeID
 
getPort() - Method in class org.apache.hadoop.hbase.HServerAddress
 
getPort() - Method in class org.apache.hadoop.mapred.StatusHttpServer
Get the port that the server is on
getPos() - Method in class org.apache.hadoop.fs.BufferedFSInputStream
 
getPos() - Method in exception org.apache.hadoop.fs.ChecksumException
 
getPos() - Method in class org.apache.hadoop.fs.FSDataInputStream
 
getPos() - Method in class org.apache.hadoop.fs.FSDataOutputStream
 
getPos() - Method in class org.apache.hadoop.fs.FSInputChecker
 
getPos() - Method in class org.apache.hadoop.fs.FSInputStream
Return the current offset from the start of the file
getPos() - Method in interface org.apache.hadoop.fs.Seekable
Return the current offset from the start of the file
getPos() - Method in class org.apache.hadoop.mapred.LineRecordReader
 
getPos() - Method in interface org.apache.hadoop.mapred.RecordReader
Returns the current position in the input.
getPos() - Method in class org.apache.hadoop.mapred.SequenceFileRecordReader
 
getPos() - Method in class org.apache.hadoop.streaming.StreamBaseRecordReader
Returns the current position in the input.
getPosition() - Method in class org.apache.hadoop.io.DataInputBuffer
Returns the current position in the input.
getPosition() - Method in class org.apache.hadoop.io.SequenceFile.Reader
Return the current byte position in the input file.
getProblems() - Method in exception org.apache.hadoop.mapred.InvalidInputException
Get the complete list of the problems reported.
getProgress() - Method in interface org.apache.hadoop.io.SequenceFile.Sorter.RawKeyValueIterator
Gets the Progress object; this has a float (0.0 - 1.0) indicating the bytes processed by the iterator so far
getProgress() - Method in class org.apache.hadoop.mapred.LineRecordReader
Get the progress within the split
getProgress() - Method in interface org.apache.hadoop.mapred.RecordReader
How far has the reader gone through the input.
getProgress() - Method in class org.apache.hadoop.mapred.SequenceFileRecordReader
Return the progress within the input split
getProgress() - Method in class org.apache.hadoop.mapred.TaskReport
The amount completed, between zero and one.
getProgress() - Method in class org.apache.hadoop.streaming.StreamBaseRecordReader
 
getProtocolVersion(String, long) - Method in class org.apache.hadoop.dfs.NameNode
 
getProtocolVersion(String, long) - Method in class org.apache.hadoop.hbase.HMaster
Return protocol version corresponding to protocol interface.
getProtocolVersion(String, long) - Method in class org.apache.hadoop.hbase.HRegionServer
Return protocol version corresponding to protocol interface.
getProtocolVersion(String, long) - Method in interface org.apache.hadoop.ipc.VersionedProtocol
Return protocol version corresponding to protocol interface.
getProtocolVersion(String, long) - Method in class org.apache.hadoop.mapred.JobTracker
 
getProtocolVersion(String, long) - Method in class org.apache.hadoop.mapred.TaskTracker
 
getProxy(Class, long, InetSocketAddress, Configuration) - Static method in class org.apache.hadoop.ipc.RPC
Construct a client-side proxy object that implements the named protocol, talking to a server at the named address.
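
A hedged client-side sketch based only on the signature above; EchoProtocol, its versionID convention, and the address are hypothetical placeholders.

    import java.net.InetSocketAddress;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.ipc.RPC;
    import org.apache.hadoop.ipc.VersionedProtocol;

    // Hypothetical protocol interface; a real server would expose the same interface.
    interface EchoProtocol extends VersionedProtocol {
        long versionID = 1L;           // assumed convention for the version argument
        String echo(String message);   // hypothetical RPC method
    }

    public class RpcClientSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            EchoProtocol proxy = (EchoProtocol) RPC.getProxy(
                EchoProtocol.class, EchoProtocol.versionID,
                new InetSocketAddress("localhost", 9000), conf);
            System.out.println(proxy.echo("hello"));
        }
    }
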
getQueryStr() - Method in class org.apache.hadoop.hbase.shell.generated.Parser
 
getRaw(String) - Method in class org.apache.hadoop.conf.Configuration
Get the value of the name property, without doing variable expansion.
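
A small illustration of the difference this entry describes; the property names are made up, and the ${...} expansion performed by get() is an assumption drawn from the wording above.

    import org.apache.hadoop.conf.Configuration;

    public class GetRawSketch {
        public static void main(String[] args) {
            Configuration conf = new Configuration();
            conf.set("base.dir", "/data");
            conf.set("log.dir", "${base.dir}/logs");    // value contains a variable reference
            System.out.println(conf.get("log.dir"));    // expanded, e.g. "/data/logs" (assumed behavior)
            System.out.println(conf.getRaw("log.dir")); // unexpanded: "${base.dir}/logs"
        }
    }
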
getRawCapacity() - Method in class org.apache.hadoop.dfs.ChecksumDistributedFileSystem
Return the total raw capacity of the filesystem, disregarding replication.
getRawCapacity() - Method in class org.apache.hadoop.dfs.DistributedFileSystem
Return the total raw capacity of the filesystem, disregarding replication.
getRawFileSystem() - Method in class org.apache.hadoop.fs.ChecksumFileSystem
get the raw file system
getRawUsed() - Method in class org.apache.hadoop.dfs.ChecksumDistributedFileSystem
Return the total raw used space in the filesystem, disregarding replication.
getRawUsed() - Method in class org.apache.hadoop.dfs.DistributedFileSystem
Return the total raw used space in the filesystem, disregarding replication.
getReaders(FileSystem, Path, Configuration) - Static method in class org.apache.hadoop.mapred.MapFileOutputFormat
Open the output generated by this format.
getReaders(Configuration, Path) - Static method in class org.apache.hadoop.mapred.SequenceFileOutputFormat
Open the output generated by this format.
getReadyJobs() - Method in class org.apache.hadoop.mapred.jobcontrol.JobControl
 
getRecordName() - Method in interface org.apache.hadoop.metrics.MetricsRecord
Returns the record name.
getRecordName() - Method in class org.apache.hadoop.metrics.spi.MetricsRecordImpl
Returns the record name.
getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.hbase.mapred.TableInputFormat
 
getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.mapred.FileInputFormat
 
getRecordReader(InputSplit, JobConf, Reporter) - Method in interface org.apache.hadoop.mapred.InputFormat
Construct a RecordReader for a FileSplit.
getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.mapred.KeyValueTextInputFormat
 
getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.mapred.MultiFileInputFormat
 
getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.mapred.SequenceFileAsTextInputFormat
 
getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.mapred.SequenceFileInputFilter
Create a record reader for the given split
getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.mapred.SequenceFileInputFormat
 
getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.mapred.TextInputFormat
 
getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.streaming.StreamInputFormat
 
getRecordWriter(FileSystem, JobConf, String, Progressable) - Method in class org.apache.hadoop.hbase.mapred.TableOutputFormat
 
getRecordWriter(FileSystem, JobConf, String, Progressable) - Method in class org.apache.hadoop.mapred.lib.NullOutputFormat
 
getRecordWriter(FileSystem, JobConf, String, Progressable) - Method in class org.apache.hadoop.mapred.MapFileOutputFormat
 
getRecordWriter(FileSystem, JobConf, String, Progressable) - Method in interface org.apache.hadoop.mapred.OutputFormat
Construct a RecordWriter with Progressable.
getRecordWriter(FileSystem, JobConf, String, Progressable) - Method in class org.apache.hadoop.mapred.OutputFormatBase
 
getRecordWriter(FileSystem, JobConf, String, Progressable) - Method in class org.apache.hadoop.mapred.SequenceFileOutputFormat
 
getRecordWriter(FileSystem, JobConf, String, Progressable) - Method in class org.apache.hadoop.mapred.TextOutputFormat
 
getReducerClass() - Method in class org.apache.hadoop.mapred.JobConf
 
getReduceTaskReports(String) - Method in class org.apache.hadoop.mapred.JobClient
Get the information of the current state of the reduce tasks of a job.
getReduceTaskReports(String) - Method in interface org.apache.hadoop.mapred.JobSubmissionProtocol
Grab a bunch of info on the reduce tasks that make up the job
getReduceTaskReports(String) - Method in class org.apache.hadoop.mapred.JobTracker
 
getReduceTasks() - Method in class org.apache.hadoop.mapred.ClusterStatus
The number of current running reduce tasks.
getRegion(Text) - Method in class org.apache.hadoop.hbase.HRegionServer
Protected utility method for safely obtaining an HRegion handle.
getRegion(Text, boolean) - Method in class org.apache.hadoop.hbase.HRegionServer
Protected utility method for safely obtaining an HRegion handle.
getRegionDir() - Method in class org.apache.hadoop.hbase.HRegion
 
getRegionId() - Method in class org.apache.hadoop.hbase.HRegion
 
getRegionId() - Method in class org.apache.hadoop.hbase.HRegionInfo
 
getRegionInfo() - Method in class org.apache.hadoop.hbase.HClient.RegionLocation
 
getRegionInfo() - Method in class org.apache.hadoop.hbase.HMsg
Accessor
getRegionInfo(Text) - Method in interface org.apache.hadoop.hbase.HRegionInterface
Get metainfo about an HRegion
getRegionInfo(Text) - Method in class org.apache.hadoop.hbase.HRegionServer
Get metainfo about an HRegion
getRegionLocation(Text) - Method in class org.apache.hadoop.hbase.HClient
 
getRegionName() - Method in class org.apache.hadoop.hbase.HRegion
 
getRegionName() - Method in class org.apache.hadoop.hbase.HRegionInfo
 
getRemaining() - Method in class org.apache.hadoop.dfs.DatanodeInfo
The raw free space.
getRemoteAddress() - Static method in class org.apache.hadoop.ipc.Server
Returns remote address as a string when invoked inside an RPC.
getRemoteIp() - Static method in class org.apache.hadoop.ipc.Server
Returns the remote side's IP address when invoked inside an RPC; returns null in case of an error.
getReplication() - Method in class org.apache.hadoop.dfs.HftpFileSystem.HftpFileStatus
 
getReplication() - Method in class org.apache.hadoop.dfs.NamenodeFsck.FsckResult
Return the intended replication factor, against which the over/under-replicated blocks are counted.
getReplication() - Method in interface org.apache.hadoop.fs.FileStatus
Get the replication factor of a file.
getReplication(Path) - Method in class org.apache.hadoop.fs.FileSystem
Deprecated. Use getFileStatus() instead
getReplicationFactor() - Method in class org.apache.hadoop.dfs.NamenodeFsck.FsckResult
Return the actual replication factor.
getReport() - Method in class org.apache.hadoop.contrib.utils.join.JobBase
log the counters
getReport() - Method in class org.apache.hadoop.mapred.lib.aggregate.DoubleValueSum
 
getReport() - Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueMax
 
getReport() - Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueMin
 
getReport() - Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueSum
 
getReport() - Method in class org.apache.hadoop.mapred.lib.aggregate.StringValueMax
 
getReport() - Method in class org.apache.hadoop.mapred.lib.aggregate.StringValueMin
 
getReport() - Method in class org.apache.hadoop.mapred.lib.aggregate.UniqValueCount
 
getReport() - Method in interface org.apache.hadoop.mapred.lib.aggregate.ValueAggregator
 
getReport() - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueHistogram
 
getReportDetails() - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueHistogram
 
getReportItems() - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueHistogram
 
getResource(String) - Method in class org.apache.hadoop.conf.Configuration
Returns the URL for the named resource.
getRevision() - Static method in class org.apache.hadoop.util.VersionInfo
Get the subversion revision number for the root directory
getRootDir() - Method in class org.apache.hadoop.hbase.HRegion
 
getRotations() - Method in class org.apache.hadoop.examples.dancing.Pentomino.Piece
 
getRow(Text) - Method in class org.apache.hadoop.hbase.HClient
Get all the data for the specified row
getRow(Text, Text) - Method in interface org.apache.hadoop.hbase.HRegionInterface
Get all the data for the specified row
getRow(Text, Text) - Method in class org.apache.hadoop.hbase.HRegionServer
Get all the data for the specified row
getRow() - Method in class org.apache.hadoop.hbase.HStoreKey
 
getRow() - Method in class org.apache.hadoop.hbase.shell.DeleteCommand
 
getRow() - Method in class org.apache.hadoop.hbase.shell.InsertCommand
 
getRow() - Method in class org.apache.hadoop.hbase.shell.SelectCommand
 
getRunnable() - Method in class org.apache.hadoop.util.Daemon
 
getRunningJobs() - Method in class org.apache.hadoop.mapred.jobcontrol.JobControl
 
getRunningJobs() - Method in class org.apache.hadoop.mapred.JobTracker
Version that is called from a timer thread, and therefore needs to be careful to synchronize.
getRunState() - Method in class org.apache.hadoop.mapred.JobStatus
 
getSafeModeText() - Method in class org.apache.hadoop.dfs.JspHelper
 
getScanner(Text[], Text, long, RowFilterInterface) - Method in class org.apache.hadoop.hbase.HRegion
Return an iterator that scans over the HRegion, returning the indicated columns for only the rows that match the data filter.
getSerializedLength() - Method in class org.apache.hadoop.fs.s3.INode
 
getServer(Object, String, int, Configuration) - Static method in class org.apache.hadoop.ipc.RPC
Construct a server for a protocol implementation instance listening on a port and address.
getServer(Object, String, int, int, boolean, Configuration) - Static method in class org.apache.hadoop.ipc.RPC
Construct a server for a protocol implementation instance listening on a port and address.
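
A hedged server-side counterpart to the getProxy sketch above; the protocol is hypothetical, and the argument meanings (instance, bind address, port, configuration) and the explicit start() call are inferred rather than confirmed by this index.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.ipc.RPC;
    import org.apache.hadoop.ipc.Server;
    import org.apache.hadoop.ipc.VersionedProtocol;

    // Hypothetical protocol; a real deployment would share this interface with clients.
    interface EchoProtocol extends VersionedProtocol {
        long versionID = 1L;
        String echo(String message);
    }

    public class RpcServerSketch implements EchoProtocol {
        public String echo(String message) { return message; }

        public long getProtocolVersion(String protocol, long clientVersion) {
            return versionID;
        }

        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Arguments inferred from the signature: (instance, bind address, port, conf).
            Server server = RPC.getServer(new RpcServerSketch(), "0.0.0.0", 9000, conf);
            server.start();  // assumed: the returned server must be started explicitly
        }
    }
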
getServerAddress() - Method in class org.apache.hadoop.hbase.HClient.RegionLocation
 
getServerAddress() - Method in class org.apache.hadoop.hbase.HServerInfo
 
getServerVersion() - Method in exception org.apache.hadoop.ipc.RPC.VersionMismatch
Get the server's agreed to version.
getSessionId() - Method in class org.apache.hadoop.mapred.JobConf
Get the user-specified session identifier.
getShape(boolean, int) - Method in class org.apache.hadoop.examples.dancing.Pentomino.Piece
 
getSize() - Method in class org.apache.hadoop.hbase.io.ImmutableBytesWritable
Get the current size of the buffer.
getSize() - Method in class org.apache.hadoop.io.BytesWritable
Get the current size of the buffer.
getSpace(int) - Static method in class org.apache.hadoop.streaming.StreamUtil
 
getSpeculativeExecution() - Method in class org.apache.hadoop.mapred.JobConf
Should speculative execution be used for this job?
getSplits(int) - Method in class org.apache.hadoop.examples.dancing.Pentomino
Generate a list of prefixes to a given depth
getSplits(JobConf, int) - Method in class org.apache.hadoop.hbase.mapred.TableInputFormat
A split will be created for each HRegion of the input table
getSplits(JobConf, int) - Method in class org.apache.hadoop.mapred.FileInputFormat
Splits files returned by FileInputFormat.listPaths(JobConf) when they're too big.
getSplits(JobConf, int) - Method in interface org.apache.hadoop.mapred.InputFormat
Splits a set of input files.
getSplits(JobConf, int) - Method in class org.apache.hadoop.mapred.MultiFileInputFormat
 
getStart() - Method in class org.apache.hadoop.mapred.FileSplit
The position of the first byte in the file to process.
getStartCode() - Method in class org.apache.hadoop.hbase.HServerInfo
 
getStartKey() - Method in class org.apache.hadoop.hbase.HRegion
 
getStartKey() - Method in class org.apache.hadoop.hbase.HRegionInfo
 
getStartKeys() - Method in class org.apache.hadoop.hbase.HClient
Gets the starting row key for every region in the currently open table
getStartRow() - Method in class org.apache.hadoop.hbase.mapred.TableSplit
 
getStartTime() - Method in class org.apache.hadoop.mapred.JobStatus
 
getStartTime() - Method in class org.apache.hadoop.mapred.JobTracker
 
getStartTime() - Method in class org.apache.hadoop.mapred.TaskReport
Get start time of task.
getState() - Method in class org.apache.hadoop.mapred.jobcontrol.Job
 
getState() - Method in class org.apache.hadoop.mapred.jobcontrol.JobControl
 
getState() - Method in class org.apache.hadoop.mapred.TaskReport
The most recent state, reported by a Reporter.
getStats() - Method in class org.apache.hadoop.dfs.NameNode
 
getStatusText(boolean) - Method in class org.apache.hadoop.dfs.UpgradeStatusReport
Get upgrade status data as text for reporting.
getStorageID() - Method in class org.apache.hadoop.dfs.DatanodeID
 
getString() - Method in class org.apache.hadoop.hbase.shell.generated.Parser
TODO : expressions codes need more love.
getStringLiteral() - Method in class org.apache.hadoop.hbase.shell.generated.Parser
 
getStrings(String) - Method in class org.apache.hadoop.conf.Configuration
Returns the value of the name property as an array of strings.
getStrings(String) - Static method in class org.apache.hadoop.util.StringUtils
Returns an array of strings.
getSuccessfulJobs() - Method in class org.apache.hadoop.mapred.jobcontrol.JobControl
 
GetSuffix(int) - Method in class org.apache.hadoop.hbase.shell.generated.SimpleCharStream
 
GetSuffix(int) - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
getSum() - Method in class org.apache.hadoop.mapred.lib.aggregate.DoubleValueSum
 
getSum() - Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueSum
 
getSymlink(Configuration) - Static method in class org.apache.hadoop.filecache.DistributedCache
This method checks to see if symlinks are to be created for the localized cache files in the current working directory.
getSystemDir() - Method in class org.apache.hadoop.mapred.JobConf
 
getTableDesc() - Method in class org.apache.hadoop.hbase.HRegion
 
getTableDesc() - Method in class org.apache.hadoop.hbase.HRegionInfo
 
getTableName() - Method in class org.apache.hadoop.hbase.mapred.TableSplit
 
getTableNameFromRegionName(Text) - Static method in class org.apache.hadoop.hbase.HRegionInfo
Extracts table name prefix from a region name.
getTableServers(Text) - Method in class org.apache.hadoop.hbase.HClient
Gets the servers of the given table.
getTabSize(int) - Method in class org.apache.hadoop.hbase.shell.generated.SimpleCharStream
 
getTabSize(int) - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
getTag() - Method in class org.apache.hadoop.contrib.utils.join.TaggedMapOutput
 
getTag(String) - Method in class org.apache.hadoop.metrics.spi.OutputRecord
Returns a tag object, which can be a String, Integer, Short or Byte.
getTagNames() - Method in class org.apache.hadoop.metrics.spi.OutputRecord
Returns the set of tag names
getTask(String) - Method in class org.apache.hadoop.mapred.TaskTracker
Called upon startup by the child process, to fetch Task data.
getTaskAttempts() - Method in class org.apache.hadoop.mapred.JobHistory.Task
Returns all task attempts for this task.
getTaskCompletionEvents(String, int, int) - Method in interface org.apache.hadoop.mapred.JobSubmissionProtocol
Get task completion events for the jobid, starting from fromEventId.
getTaskCompletionEvents(String, int, int) - Method in class org.apache.hadoop.mapred.JobTracker
 
getTaskCompletionEvents(int) - Method in interface org.apache.hadoop.mapred.RunningJob
 
getTaskDiagnostics(String, String, String) - Method in class org.apache.hadoop.mapred.JobTracker
Get the diagnostics for a given task
getTaskId() - Method in class org.apache.hadoop.mapred.TaskCompletionEvent
Returns task id.
getTaskId() - Method in class org.apache.hadoop.mapred.TaskLogAppender
Getter/Setter methods for log4j.
getTaskId() - Method in class org.apache.hadoop.mapred.TaskReport
The id of the task.
getTaskInfo(JobConf) - Static method in class org.apache.hadoop.streaming.StreamUtil
 
getTaskLogFile(String, TaskLog.LogName) - Static method in class org.apache.hadoop.mapred.TaskLog
 
getTaskLogLength(JobConf) - Static method in class org.apache.hadoop.mapred.TaskLog
Get the desired maximum length of task's logs.
getTaskOutputFilter(JobConf) - Static method in class org.apache.hadoop.mapred.JobClient
Get the task output filter out of the JobConf
getTaskOutputFilter() - Method in class org.apache.hadoop.mapred.JobClient
Deprecated. 
getTaskStatus() - Method in class org.apache.hadoop.mapred.TaskCompletionEvent
Returns enum Status.SUCCESS or Status.FAILURE.
getTaskTracker(String) - Method in class org.apache.hadoop.mapred.JobTracker
 
getTaskTrackerHttp() - Method in class org.apache.hadoop.mapred.TaskCompletionEvent
The HTTP location of the tasktracker where this task ran.
getTaskTrackerReportPort() - Method in class org.apache.hadoop.mapred.TaskTracker
Return the port to which the tasktracker is bound.
getTaskTrackers() - Method in class org.apache.hadoop.mapred.ClusterStatus
The number of task trackers in the cluster.
getTime() - Method in class org.apache.hadoop.hbase.shell.SelectCommand
 
getTimestamp(Configuration, URI) - Static method in class org.apache.hadoop.filecache.DistributedCache
Returns mtime of a given cache file on hdfs.
getTimestamp() - Method in class org.apache.hadoop.hbase.HLogEdit
 
getTimestamp() - Method in class org.apache.hadoop.hbase.HStoreKey
 
getToken(int) - Method in class org.apache.hadoop.hbase.shell.generated.Parser
 
getToken(int) - Method in class org.apache.hadoop.record.compiler.generated.Rcc
 
getTotalBlocks() - Method in class org.apache.hadoop.dfs.NamenodeFsck.FsckResult
Return the total number of blocks in the scanned area.
getTotalDirs() - Method in class org.apache.hadoop.dfs.NamenodeFsck.FsckResult
Return total number of directories encountered during this scan.
getTotalFiles() - Method in class org.apache.hadoop.dfs.NamenodeFsck.FsckResult
Return total number of files encountered during this scan.
getTotalLogFileSize() - Method in class org.apache.hadoop.mapred.TaskLogAppender
 
getTotalSize() - Method in class org.apache.hadoop.dfs.NamenodeFsck.FsckResult
Return total size of scanned data, in bytes.
getTotalSubmissions() - Method in class org.apache.hadoop.mapred.JobTracker
 
getTracker() - Static method in class org.apache.hadoop.mapred.JobTracker
 
getTrackerIdentifier() - Method in class org.apache.hadoop.mapred.JobTracker
Get the unique identifier (ie.
getTrackerPort() - Method in class org.apache.hadoop.mapred.JobTracker
 
getTrackingURL() - Method in interface org.apache.hadoop.mapred.RunningJob
Returns a URL where some job progress information will be displayed.
getType() - Method in interface org.apache.hadoop.dfs.Upgradeable
Get the type of the software component, which this object is upgrading.
getType() - Method in class org.apache.hadoop.hbase.shell.ReturnMsg
 
getTypes() - Method in class org.apache.hadoop.io.GenericWritable
Return all classes that may be wrapped.
getUnderReplicatedBlocks() - Method in class org.apache.hadoop.dfs.NamenodeFsck.FsckResult
Return the number of under-replicated blocks.
getUniqueItems() - Method in class org.apache.hadoop.mapred.lib.aggregate.UniqValueCount
 
getUpgradeStatus() - Method in interface org.apache.hadoop.dfs.Upgradeable
Upgrade status determines a percentage of the work done out of the total amount required by the upgrade.
getUpgradeStatus() - Method in class org.apache.hadoop.dfs.UpgradeStatusReport
Get the upgrade status as a percentage of the total upgrade done.
getUpgradeStatusReport(boolean) - Method in interface org.apache.hadoop.dfs.Upgradeable
Get status report for the upgrade.
getUri() - Method in class org.apache.hadoop.dfs.DistributedFileSystem
 
getUri() - Method in class org.apache.hadoop.dfs.HftpFileSystem
 
getUri() - Method in class org.apache.hadoop.fs.FileSystem
Returns a URI whose scheme and authority identify this FileSystem.
getUri() - Method in class org.apache.hadoop.fs.FilterFileSystem
Returns a URI whose scheme and authority identify this FileSystem.
getUri() - Method in class org.apache.hadoop.fs.RawLocalFileSystem
 
getUri() - Method in class org.apache.hadoop.fs.s3.S3FileSystem
 
getURIs(String, String) - Method in class org.apache.hadoop.streaming.StreamJob
Get the URIs of all the files/caches.
getURL() - Method in class org.apache.hadoop.mapred.JobProfile
Get the link to the web-ui for details of the job.
getUrl() - Static method in class org.apache.hadoop.util.VersionInfo
Get the subversion URL for the root Hadoop directory.
getUsed() - Method in class org.apache.hadoop.fs.DF
 
getUsed() - Method in class org.apache.hadoop.fs.FileSystem
Return the total size of all files in the filesystem.
getUsedSkipRefresh() - Method in class org.apache.hadoop.fs.DF
 
getUser() - Method in class org.apache.hadoop.mapred.JobConf
Get the reported username for this job.
getUser() - Method in class org.apache.hadoop.mapred.JobProfile
Get the user id.
getUser() - Static method in class org.apache.hadoop.util.VersionInfo
The user that compiled Hadoop.
getUsername() - Method in class org.apache.hadoop.mapred.JobStatus
 
getVal() - Method in class org.apache.hadoop.hbase.HLogEdit
 
getVal() - Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueMax
 
getVal() - Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueMin
 
getVal() - Method in class org.apache.hadoop.mapred.lib.aggregate.StringValueMax
 
getVal() - Method in class org.apache.hadoop.mapred.lib.aggregate.StringValueMin
 
getValue() - Method in class org.apache.hadoop.dfs.DataChecksum
 
getValue() - Method in class org.apache.hadoop.hbase.io.BatchOperation
 
getValue(int) - Method in class org.apache.hadoop.hbase.shell.InsertCommand
 
getValue() - Method in interface org.apache.hadoop.io.SequenceFile.Sorter.RawKeyValueIterator
Gets the current raw value
getValueClass() - Method in class org.apache.hadoop.io.ArrayWritable
 
getValueClass() - Method in class org.apache.hadoop.io.MapFile.Reader
Returns the class of values in this file.
getValueClass() - Method in class org.apache.hadoop.io.SequenceFile.Reader
Returns the class of values in this file.
getValueClass() - Method in class org.apache.hadoop.io.SequenceFile.Writer
Returns the class of values in this file.
getValueClass() - Method in class org.apache.hadoop.mapred.SequenceFileRecordReader
The class of value that must be passed to SequenceFileRecordReader.next(Writable, Writable).
getVersion() - Method in interface org.apache.hadoop.dfs.Upgradeable
Get the layout version of the upgrade object.
getVersion() - Method in class org.apache.hadoop.dfs.UpgradeStatusReport
Get the layout version of the currently running upgrade.
getVersion() - Method in interface org.apache.hadoop.fs.s3.FileSystemStore
 
getVersion() - Method in class org.apache.hadoop.io.VersionedWritable
Return the version number of the current implementation.
getVersion() - Static method in class org.apache.hadoop.util.VersionInfo
Get the Hadoop version.
getVIntSize(long) - Static method in class org.apache.hadoop.io.WritableUtils
Get the encoded length of an integer stored in a variable-length format.
getVIntSize(long) - Static method in class org.apache.hadoop.record.Utils
Get the encoded length of an integer stored in a variable-length format.
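
A small check of the relationship these two entries describe, comparing the predicted encoded length with the bytes actually written by WritableUtils.writeVLong; which values need more than one byte is a detail of the vint encoding and is not asserted here.

    import org.apache.hadoop.io.DataOutputBuffer;
    import org.apache.hadoop.io.WritableUtils;

    public class VIntSizeSketch {
        public static void main(String[] args) throws Exception {
            for (long value : new long[] { 0, 1, 127, 128, 1000000L }) {
                DataOutputBuffer out = new DataOutputBuffer();
                WritableUtils.writeVLong(out, value);   // write in variable-length format
                System.out.println(value + ": predicted=" + WritableUtils.getVIntSize(value)
                    + " actual=" + out.getLength());
            }
        }
    }
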
getWaitingJobs() - Method in class org.apache.hadoop.mapred.jobcontrol.JobControl
 
getWarn() - Static method in class org.apache.hadoop.metrics.jvm.EventCounter
 
getWeight() - Method in class org.onelab.filter.Key
 
getWorkingDirectory() - Method in class org.apache.hadoop.dfs.DistributedFileSystem
 
getWorkingDirectory() - Method in class org.apache.hadoop.dfs.HftpFileSystem
 
getWorkingDirectory() - Method in class org.apache.hadoop.fs.FileSystem
Get the current working directory for the given file system
getWorkingDirectory() - Method in class org.apache.hadoop.fs.FilterFileSystem
Get the current working directory for the given file system
getWorkingDirectory() - Method in class org.apache.hadoop.fs.RawLocalFileSystem
 
getWorkingDirectory() - Method in class org.apache.hadoop.fs.s3.S3FileSystem
 
getWorkingDirectory() - Method in class org.apache.hadoop.mapred.JobConf
Get the current working directory for the default file system.
getXceiverCount() - Method in class org.apache.hadoop.dfs.DatanodeInfo
number of active connections
getZlibCompressor(Configuration) - Static method in class org.apache.hadoop.io.compress.zlib.ZlibFactory
Return the appropriate implementation of the zlib compressor.
getZlibCompressorType(Configuration) - Static method in class org.apache.hadoop.io.compress.zlib.ZlibFactory
Return the appropriate type of the zlib compressor.
getZlibDecompressor(Configuration) - Static method in class org.apache.hadoop.io.compress.zlib.ZlibFactory
Return the appropriate implementation of the zlib decompressor.
getZlibDecompressorType(Configuration) - Static method in class org.apache.hadoop.io.compress.zlib.ZlibFactory
Return the appropriate type of the zlib decompressor.
globPaths(Path) - Method in class org.apache.hadoop.fs.FileSystem
Return all the files that match filePattern and are not checksum files.
globPaths(Path, PathFilter) - Method in class org.apache.hadoop.fs.FileSystem
Glob all the file names that match filePattern and are accepted by the filter.
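
A brief usage sketch based on the two globPaths signatures above; the path pattern and the filter are made up, and the Path[] return type is an assumption.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.fs.PathFilter;

    public class GlobSketch {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            // Expand a wildcard pattern into concrete paths (checksum files are excluded).
            Path[] logs = fs.globPaths(new Path("/logs/2007-*/part-*"));
            // Same, but additionally restricted by a caller-supplied filter.
            Path[] parts = fs.globPaths(new Path("/logs/*"), new PathFilter() {
                public boolean accept(Path p) { return p.getName().startsWith("part-"); }
            });
            System.out.println(logs.length + " / " + parts.length);
        }
    }
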
go() - Method in class org.apache.hadoop.streaming.StreamJob
This is the method that actually initializes the job conf and submits the job to the jobtracker.
goodClassOrNull(String, String) - Static method in class org.apache.hadoop.streaming.StreamUtil
It may seem strange to silently switch behaviour when a String is not a classname; the reason is simplified usage.
Grep - Class in org.apache.hadoop.examples
 
GROUP_COLUMNS - Static variable in class org.apache.hadoop.hbase.mapred.GroupingTableMap
JobConf parameter to specify the columns used to produce the key passed to collect from the map phase
GroupingTableMap - Class in org.apache.hadoop.hbase.mapred
Extract grouping columns from input record
GroupingTableMap() - Constructor for class org.apache.hadoop.hbase.mapred.GroupingTableMap
default constructor
GT_TKN - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
GzipCodec - Class in org.apache.hadoop.io.compress
This class creates gzip compressors/decompressors.
GzipCodec() - Constructor for class org.apache.hadoop.io.compress.GzipCodec
 
GzipCodec.GzipInputStream - Class in org.apache.hadoop.io.compress
 
GzipCodec.GzipInputStream(InputStream) - Constructor for class org.apache.hadoop.io.compress.GzipCodec.GzipInputStream
 
GzipCodec.GzipInputStream(DecompressorStream) - Constructor for class org.apache.hadoop.io.compress.GzipCodec.GzipInputStream
Allow subclasses to directly set the inflater stream.
GzipCodec.GzipOutputStream - Class in org.apache.hadoop.io.compress
A bridge that wraps around a DeflaterOutputStream to make it a CompressionOutputStream.
GzipCodec.GzipOutputStream(OutputStream) - Constructor for class org.apache.hadoop.io.compress.GzipCodec.GzipOutputStream
 
GzipCodec.GzipOutputStream(CompressorStream) - Constructor for class org.apache.hadoop.io.compress.GzipCodec.GzipOutputStream
Allow children types to put a different type in here.

H

HAbstractScanner - Class in org.apache.hadoop.hbase
Abstract base class that implements the HScannerInterface.
hadoopAliasConf_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
HadoopStreaming - Class in org.apache.hadoop.streaming
The main entrypoint.
HadoopStreaming() - Constructor for class org.apache.hadoop.streaming.HadoopStreaming
 
HadoopVersionAnnotation - Annotation Type in org.apache.hadoop
A package attribute that captures the version of Hadoop that was compiled.
halfDigest() - Method in class org.apache.hadoop.io.MD5Hash
Construct a half-sized version of this MD5.
handle(JobHistory.RecordTypes, Map<JobHistory.Keys, String>) - Method in interface org.apache.hadoop.mapred.JobHistory.Listener
Callback method for history parser.
hasFamily(Text) - Method in class org.apache.hadoop.hbase.HTableDescriptor
Checks to see if this table contains the given column family
hash - Variable in class org.onelab.filter.Filter
The hash function used to map a key to several positions in the vector.
hash(Key) - Method in class org.onelab.filter.HashFunction
Hashes a specified key into several integers.
hashBytes(byte[], int) - Static method in class org.apache.hadoop.io.WritableComparator
Compute hash for binary data.
hashCode() - Method in class org.apache.hadoop.dfs.DatanodeID
 
hashCode() - Method in class org.apache.hadoop.fs.Path
 
hashCode() - Method in class org.apache.hadoop.hbase.BloomFilterDescriptor
 
hashCode() - Method in class org.apache.hadoop.hbase.HClient.RegionLocation
 
hashCode() - Method in class org.apache.hadoop.hbase.HColumnDescriptor
 
hashCode() - Method in class org.apache.hadoop.hbase.HLogKey
 
hashCode() - Method in class org.apache.hadoop.hbase.HRegionInfo
 
hashCode() - Method in class org.apache.hadoop.hbase.HServerAddress
 
hashCode() - Method in class org.apache.hadoop.hbase.HStoreFile
 
hashCode() - Method in class org.apache.hadoop.hbase.HStoreKey
 
hashCode() - Method in class org.apache.hadoop.hbase.HTableDescriptor
 
hashCode() - Method in class org.apache.hadoop.hbase.io.ImmutableBytesWritable
 
hashCode() - Method in class org.apache.hadoop.io.BooleanWritable
 
hashCode() - Method in class org.apache.hadoop.io.BytesWritable
 
hashCode() - Method in class org.apache.hadoop.io.FloatWritable
 
hashCode() - Method in class org.apache.hadoop.io.IntWritable
 
hashCode() - Method in class org.apache.hadoop.io.LongWritable
 
hashCode() - Method in class org.apache.hadoop.io.MD5Hash
Returns a hash code value for this object.
hashCode() - Method in class org.apache.hadoop.io.SequenceFile.Metadata
 
hashCode() - Method in class org.apache.hadoop.io.SequenceFile.Sorter.SegmentDescriptor
 
hashCode() - Method in class org.apache.hadoop.io.Text
hash function
hashCode() - Method in class org.apache.hadoop.io.UTF8
Deprecated.  
hashCode() - Method in class org.apache.hadoop.io.VIntWritable
 
hashCode() - Method in class org.apache.hadoop.io.VLongWritable
 
hashCode() - Method in class org.apache.hadoop.record.Buffer
 
hashCode() - Method in class org.onelab.filter.BloomFilter
 
hashCode() - Method in class org.onelab.filter.CountingBloomFilter
 
hashCode() - Method in class org.onelab.filter.DynamicBloomFilter
 
hashCode() - Method in class org.onelab.filter.Filter
 
hashCode() - Method in class org.onelab.filter.Key
 
hashCode() - Method in class org.onelab.filter.RetouchedBloomFilter
 
HashFunction - Class in org.onelab.filter
Implements a hash object that returns a certain number of hashed values.
HashFunction(int, int) - Constructor for class org.onelab.filter.HashFunction
Constructor.
HashPartitioner - Class in org.apache.hadoop.mapred.lib
Partition keys by their Object.hashCode().
HashPartitioner() - Constructor for class org.apache.hadoop.mapred.lib.HashPartitioner
 
hasNext() - Method in class org.apache.hadoop.contrib.utils.join.ArrayListBackedIterator
 
hasSimpleInputSpecs_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
HBASE_DIR - Static variable in interface org.apache.hadoop.hbase.HConstants
Parameter name for HBase instance root directory
HBaseConfiguration - Class in org.apache.hadoop.hbase
 
HBaseConfiguration() - Constructor for class org.apache.hadoop.hbase.HBaseConfiguration
 
HClient - Class in org.apache.hadoop.hbase
HClient manages a connection to a single HRegionServer.
HClient(Configuration) - Constructor for class org.apache.hadoop.hbase.HClient
Creates a new HClient
HClient.BatchHandler - Class in org.apache.hadoop.hbase
 
HClient.BatchHandler() - Constructor for class org.apache.hadoop.hbase.HClient.BatchHandler
constructor
HClient.RegionLocation - Class in org.apache.hadoop.hbase
 
HColumnDescriptor - Class in org.apache.hadoop.hbase
A HColumnDescriptor contains information about a column family such as the number of versions, compression settings, etc.
HColumnDescriptor() - Constructor for class org.apache.hadoop.hbase.HColumnDescriptor
Default constructor.
HColumnDescriptor(String) - Constructor for class org.apache.hadoop.hbase.HColumnDescriptor
Construct a column descriptor specifying only the family name. The other attributes are defaulted.
HColumnDescriptor(Text, int, HColumnDescriptor.CompressionType, boolean, int, BloomFilterDescriptor) - Constructor for class org.apache.hadoop.hbase.HColumnDescriptor
Constructor - specify all parameters.
HColumnDescriptor.CompressionType - Enum in org.apache.hadoop.hbase
The type of compression.
HConstants - Interface in org.apache.hadoop.hbase
HConstants holds a bunch of HBase-related constants
HEADER - Static variable in class org.apache.hadoop.ipc.Server
The first four bytes of Hadoop RPC connections
HEADER_LEN - Static variable in class org.apache.hadoop.dfs.DataChecksum
 
heartbeat(TaskTrackerStatus, boolean, boolean, short) - Method in class org.apache.hadoop.mapred.JobTracker
The periodic heartbeat mechanism between the TaskTracker and the JobTracker.
HEARTBEAT_INTERVAL - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
height - Variable in class org.apache.hadoop.examples.dancing.Pentomino
 
HELP - Static variable in interface org.apache.hadoop.hbase.shell.generated.ParserConstants
 
help - Static variable in class org.apache.hadoop.hbase.shell.HelpManager
help contents map
helpCommand() - Method in class org.apache.hadoop.hbase.shell.generated.Parser
 
HelpCommand - Class in org.apache.hadoop.hbase.shell
 
HelpCommand() - Constructor for class org.apache.hadoop.hbase.shell.HelpCommand
 
HelpContents - Class in org.apache.hadoop.hbase.shell
 
HelpContents() - Constructor for class org.apache.hadoop.hbase.shell.HelpContents
 
HelpManager - Class in org.apache.hadoop.hbase.shell
Prints a usage message for the program to the given stream.
HelpManager() - Constructor for class org.apache.hadoop.hbase.shell.HelpManager
 
hexchars - Static variable in class org.apache.hadoop.record.Utils
 
hexStringToByte(String) - Static method in class org.apache.hadoop.util.StringUtils
Given a hex string, this will return the byte array corresponding to the string.
HftpFileSystem - Class in org.apache.hadoop.dfs
An implementation of a protocol for accessing filesystems over HTTP.
HftpFileSystem() - Constructor for class org.apache.hadoop.dfs.HftpFileSystem
 
HftpFileSystem.HftpFileStatus - Class in org.apache.hadoop.dfs
 
HftpFileSystem.HftpFileStatus(long, boolean, short, long, long, Path) - Constructor for class org.apache.hadoop.dfs.HftpFileSystem.HftpFileStatus
 
HGlobals - Class in org.apache.hadoop.hbase
Global values used for finding and scanning the root and meta tables.
HGlobals() - Constructor for class org.apache.hadoop.hbase.HGlobals
 
HInternalScannerInterface - Interface in org.apache.hadoop.hbase
Internally, we need to be able to determine if the scanner is doing wildcard column matches (when only a column family is specified or if a column regex is specified) or if multiple members of the same column family were specified.
HLocking - Class in org.apache.hadoop.hbase
HLocking is a set of lock primitives that does not rely on a particular thread holding the monitor for an object.
HLocking() - Constructor for class org.apache.hadoop.hbase.HLocking
Constructor
HLog - Class in org.apache.hadoop.hbase
HLog stores all the edits to the HStore.
HLogEdit - Class in org.apache.hadoop.hbase
A log value.
HLogEdit() - Constructor for class org.apache.hadoop.hbase.HLogEdit
 
HLogEdit(Text, byte[], long) - Constructor for class org.apache.hadoop.hbase.HLogEdit
 
HLogKey - Class in org.apache.hadoop.hbase
A Key for an entry in the change log.
HLogKey() - Constructor for class org.apache.hadoop.hbase.HLogKey
Create an empty key useful when deserializing
HLogKey(Text, Text, Text, long) - Constructor for class org.apache.hadoop.hbase.HLogKey
Create the log key! We maintain the tablename mainly for debugging purposes.
HMaster - Class in org.apache.hadoop.hbase
HMaster is the "master server" for HBase.
HMaster(Configuration) - Constructor for class org.apache.hadoop.hbase.HMaster
Build the HMaster out of a raw configuration item.
HMaster(Path, HServerAddress, Configuration) - Constructor for class org.apache.hadoop.hbase.HMaster
Build the HMaster
HMasterInterface - Interface in org.apache.hadoop.hbase
Clients interact with the HMasterInterface to gain access to meta-level HBase functionality, like finding an HRegionServer and creating/destroying tables.
HMasterRegionInterface - Interface in org.apache.hadoop.hbase
HRegionServers interact with the HMasterRegionInterface to report on local goings-on and to obtain data-handling instructions from the HMaster.
HMemcache - Class in org.apache.hadoop.hbase
The HMemcache holds in-memory modifications to the HRegion.
HMemcache() - Constructor for class org.apache.hadoop.hbase.HMemcache
constructor
HMemcache.Snapshot - Class in org.apache.hadoop.hbase
represents the state of the memcache at a specified point in time
HMsg - Class in org.apache.hadoop.hbase
HMsg is for communicating instructions between the HMaster and the HRegionServers.
HMsg() - Constructor for class org.apache.hadoop.hbase.HMsg
Default constructor.
HMsg(byte) - Constructor for class org.apache.hadoop.hbase.HMsg
Construct a message with an empty HRegionInfo
HMsg(byte, HRegionInfo) - Constructor for class org.apache.hadoop.hbase.HMsg
Construct a message with the specified message code and HRegionInfo
HostsFileReader - Class in org.apache.hadoop.util
 
HostsFileReader(String, String) - Constructor for class org.apache.hadoop.util.HostsFileReader
 
HRegion - Class in org.apache.hadoop.hbase
HRegion stores data for a certain region of a table.
HRegion(Path, HLog, FileSystem, Configuration, HRegionInfo, Path) - Constructor for class org.apache.hadoop.hbase.HRegion
HRegion constructor.
HREGION_LOGDIR_NAME - Static variable in interface org.apache.hadoop.hbase.HConstants
Used to construct the name of the log directory for a region server
HREGION_OLDLOGFILE_NAME - Static variable in interface org.apache.hadoop.hbase.HConstants
Name of old log file for reconstruction
HREGIONDIR_PREFIX - Static variable in interface org.apache.hadoop.hbase.HConstants
Used to construct the name of the directory in which a HRegion resides
HRegionInfo - Class in org.apache.hadoop.hbase
HRegion information.
HRegionInfo() - Constructor for class org.apache.hadoop.hbase.HRegionInfo
Default constructor - creates empty object
HRegionInfo(byte[]) - Constructor for class org.apache.hadoop.hbase.HRegionInfo
Construct a HRegionInfo object from byte array
HRegionInfo(long, HTableDescriptor, Text, Text) - Constructor for class org.apache.hadoop.hbase.HRegionInfo
Construct HRegionInfo with explicit parameters
HRegionInterface - Interface in org.apache.hadoop.hbase
Clients interact with HRegionServers using a handle to the HRegionInterface.
HRegionServer - Class in org.apache.hadoop.hbase
HRegionServer makes a set of HRegions available to clients.
HRegionServer(Configuration) - Constructor for class org.apache.hadoop.hbase.HRegionServer
Starts a HRegionServer at the default location
HRegionServer(Path, HServerAddress, Configuration) - Constructor for class org.apache.hadoop.hbase.HRegionServer
Starts a HRegionServer at the specified location
HScannerInterface - Interface in org.apache.hadoop.hbase
HScannerInterface iterates through a set of rows.
HServerAddress - Class in org.apache.hadoop.hbase
HServerAddress is a "label" for a HBase server that combines the host name and port number.
HServerAddress() - Constructor for class org.apache.hadoop.hbase.HServerAddress
Empty constructor, used for Writable
HServerAddress(InetSocketAddress) - Constructor for class org.apache.hadoop.hbase.HServerAddress
Construct a HServerAddress from an InetSocketAddress
HServerAddress(String) - Constructor for class org.apache.hadoop.hbase.HServerAddress
Construct a HServerAddress from a string of the form hostname:port
HServerAddress(String, int) - Constructor for class org.apache.hadoop.hbase.HServerAddress
Construct a HServerAddress from hostname, port number
HServerAddress(HServerAddress) - Constructor for class org.apache.hadoop.hbase.HServerAddress
Construct a HServerAddress from another HServerAddress
HServerInfo - Class in org.apache.hadoop.hbase
HServerInfo contains metainfo about an HRegionServer, including details about the source machine and load statistics.
HServerInfo() - Constructor for class org.apache.hadoop.hbase.HServerInfo
 
HServerInfo(HServerAddress, long) - Constructor for class org.apache.hadoop.hbase.HServerInfo
 
HServerInfo(HServerInfo) - Constructor for class org.apache.hadoop.hbase.HServerInfo
 
HStoreFile - Class in org.apache.hadoop.hbase
Each HStore maintains a bunch of different data files.
HStoreKey - Class in org.apache.hadoop.hbase
A Key for a stored row
HStoreKey() - Constructor for class org.apache.hadoop.hbase.HStoreKey
Default constructor used in conjunction with Writable interface
HStoreKey(Text) - Constructor for class org.apache.hadoop.hbase.HStoreKey
Create an HStoreKey specifying only the row. The column defaults to the empty string and the time stamp defaults to Long.MAX_VALUE.
HStoreKey(Text, long) - Constructor for class org.apache.hadoop.hbase.HStoreKey
Create an HStoreKey specifying the row and timestamp. The column name defaults to the empty string.
HStoreKey(Text, Text) - Constructor for class org.apache.hadoop.hbase.HStoreKey
Create an HStoreKey specifying the row and column names. The timestamp defaults to Long.MAX_VALUE.
HStoreKey(Text, Text, long) - Constructor for class org.apache.hadoop.hbase.HStoreKey
Create an HStoreKey specifying all the fields
HStoreKey(HStoreKey) - Constructor for class org.apache.hadoop.hbase.HStoreKey
Construct a new HStoreKey from another
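
A short illustration of the defaulting behavior described by the HStoreKey constructors above; the row, column, and timestamp values are arbitrary.

    import org.apache.hadoop.hbase.HStoreKey;
    import org.apache.hadoop.io.Text;

    public class HStoreKeySketch {
        public static void main(String[] args) {
            Text row = new Text("row1");
            Text column = new Text("info:name");
            HStoreKey rowOnly    = new HStoreKey(row);               // empty column, Long.MAX_VALUE timestamp
            HStoreKey rowAndTime = new HStoreKey(row, 42L);          // empty column
            HStoreKey rowAndCol  = new HStoreKey(row, column);       // Long.MAX_VALUE timestamp
            HStoreKey full       = new HStoreKey(row, column, 42L);  // everything explicit
            HStoreKey copy       = new HStoreKey(full);              // copy constructor
            System.out.println(full.getRow() + " " + full.getTimestamp());
        }
    }
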
HTableDescriptor - Class in org.apache.hadoop.hbase
HTableDescriptor contains the name of an HTable, and its column families.
HTableDescriptor() - Constructor for class org.apache.hadoop.hbase.HTableDescriptor
Constructs an empty object
HTableDescriptor(String) - Constructor for class org.apache.hadoop.hbase.HTableDescriptor
Constructor.
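
A hedged sketch of building table metadata from the descriptors listed here; the addFamily call is an assumption (it does not appear in this index excerpt), and the table and family names are made up.

    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.io.Text;

    public class DescriptorSketch {
        public static void main(String[] args) {
            HTableDescriptor table = new HTableDescriptor("webtable");
            // Family with all attributes defaulted except the name.
            HColumnDescriptor contents = new HColumnDescriptor("contents:");
            table.addFamily(contents);   // assumed API for attaching the family
            System.out.println(table.hasFamily(new Text("contents:")));
        }
    }
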
humanReadableInt(long) - Static method in class org.apache.hadoop.util.StringUtils
Given an integer, return a string that is in an approximate, but human readable format.

I

ID - Static variable in interface org.apache.hadoop.hbase.shell.generated.ParserConstants
 
IDENT_TKN - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
IdentityMapper - Class in org.apache.hadoop.mapred.lib
Implements the identity function, mapping inputs directly to outputs.
IdentityMapper() - Constructor for class org.apache.hadoop.mapred.lib.IdentityMapper
 
IdentityReducer - Class in org.apache.hadoop.mapred.lib
Performs no reduction, writing all input values directly to the output.
IdentityReducer() - Constructor for class org.apache.hadoop.mapred.lib.IdentityReducer
 
IdentityTableMap - Class in org.apache.hadoop.hbase.mapred
Pass the given key and record as-is to reduce
IdentityTableMap() - Constructor for class org.apache.hadoop.hbase.mapred.IdentityTableMap
constructor
IdentityTableReduce - Class in org.apache.hadoop.hbase.mapred
Write each key/record pair to the table.
IdentityTableReduce() - Constructor for class org.apache.hadoop.hbase.mapred.IdentityTableReduce
constructor
idWithinJob() - Method in class org.apache.hadoop.mapred.TaskCompletionEvent
 
ifmt(double) - Static method in class org.apache.hadoop.streaming.StreamUtil
 
image - Variable in class org.apache.hadoop.hbase.shell.generated.Token
The string image of the token.
image - Variable in class org.apache.hadoop.record.compiler.generated.Token
The string image of the token.
ImmutableBytesWritable - Class in org.apache.hadoop.hbase.io
A byte sequence that is usable as a key or value.
ImmutableBytesWritable() - Constructor for class org.apache.hadoop.hbase.io.ImmutableBytesWritable
Create a zero-size sequence.
ImmutableBytesWritable(byte[]) - Constructor for class org.apache.hadoop.hbase.io.ImmutableBytesWritable
Create a ImmutableBytesWritable using the byte array as the initial value.
ImmutableBytesWritable(ImmutableBytesWritable) - Constructor for class org.apache.hadoop.hbase.io.ImmutableBytesWritable
Set the new ImmutableBytesWritable to a copy of the contents of the passed ibw.
ImmutableBytesWritable(byte[], int, int) - Constructor for class org.apache.hadoop.hbase.io.ImmutableBytesWritable
Set the value to a copy of the given byte range
ImmutableBytesWritable.Comparator - Class in org.apache.hadoop.hbase.io
A Comparator optimized for ImmutableBytesWritable.
ImmutableBytesWritable.Comparator() - Constructor for class org.apache.hadoop.hbase.io.ImmutableBytesWritable.Comparator
 
in - Variable in class org.apache.hadoop.io.compress.CompressionInputStream
The underlying input stream, from which compressed data is read.
inbuf - Variable in class org.apache.hadoop.hbase.HAbstractScanner
 
inBuf - Variable in class org.apache.hadoop.hbase.shell.generated.SimpleCharStream
 
inBuf - Variable in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
Include() - Method in class org.apache.hadoop.record.compiler.generated.Rcc
 
INCLUDE_TKN - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
incr() - Method in interface org.apache.hadoop.record.Index
 
incrAllCounters(Counters) - Method in class org.apache.hadoop.mapred.Counters
Increments multiple counters by their amounts in another Counters instance.
incrCounter(Enum, long) - Method in class org.apache.hadoop.mapred.Counters
Increments the specified counter by the specified amount, creating it if it didn't already exist.
incrCounter(Enum, long) - Method in interface org.apache.hadoop.mapred.Reporter
Increments the counter identified by the key, which can be of any enum type, by the specified amount.
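
A hedged example of the enum-based counter increment described above; the enum and the helper method are hypothetical, and the helper is meant to be called from inside a map() or reduce() implementation that received a Reporter.

    import java.io.IOException;
    import org.apache.hadoop.mapred.Reporter;

    public class CounterSketch {
        // Hypothetical counter names; any enum type can be used as the key.
        enum Records { GOOD, BAD }

        static void tally(String line, Reporter reporter) throws IOException {
            if (line.trim().length() == 0) {
                reporter.incrCounter(Records.BAD, 1);   // creates the counter on first use
            } else {
                reporter.incrCounter(Records.GOOD, 1);
            }
        }
    }
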
INCREMENT - Static variable in class org.apache.hadoop.metrics.spi.MetricValue
 
incrementWeight(double) - Method in class org.onelab.filter.Key
Increments the weight of this key with a specified value.
incrementWeight() - Method in class org.onelab.filter.Key
Increments the weight of this key by one.
incrMetric(String, int) - Method in interface org.apache.hadoop.metrics.MetricsRecord
Increments the named metric by the specified value.
incrMetric(String, short) - Method in interface org.apache.hadoop.metrics.MetricsRecord
Increments the named metric by the specified value.
incrMetric(String, byte) - Method in interface org.apache.hadoop.metrics.MetricsRecord
Increments the named metric by the specified value.
incrMetric(String, float) - Method in interface org.apache.hadoop.metrics.MetricsRecord
Increments the named metric by the specified value.
incrMetric(String, int) - Method in class org.apache.hadoop.metrics.spi.MetricsRecordImpl
Increments the named metric by the specified value.
incrMetric(String, short) - Method in class org.apache.hadoop.metrics.spi.MetricsRecordImpl
Increments the named metric by the specified value.
incrMetric(String, byte) - Method in class org.apache.hadoop.metrics.spi.MetricsRecordImpl
Increments the named metric by the specified value.
incrMetric(String, float) - Method in class org.apache.hadoop.metrics.spi.MetricsRecordImpl
Increments the named metric by the specified value.
Index - Interface in org.apache.hadoop.record
Interface that acts as an iterator for deserializing maps.
INDEX_FILE_NAME - Static variable in class org.apache.hadoop.io.MapFile
The name of the index file.
infoPort - Variable in class org.apache.hadoop.dfs.DatanodeID
 
init() - Method in class org.apache.hadoop.fs.FsShell
 
init(JobConf) - Method in class org.apache.hadoop.mapred.JobClient
 
init(String, ContextFactory) - Method in class org.apache.hadoop.metrics.file.FileContext
 
init(String, ContextFactory) - Method in class org.apache.hadoop.metrics.ganglia.GangliaContext
 
init(String, String) - Static method in class org.apache.hadoop.metrics.jvm.JvmMetrics
 
init(String, ContextFactory) - Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
Initializes the context.
init() - Method in class org.apache.hadoop.streaming.StreamJob
 
init() - Method in class org.apache.hadoop.streaming.StreamXmlRecordReader
 
initialize(URI, Configuration) - Method in class org.apache.hadoop.dfs.DistributedFileSystem
 
initialize(URI, Configuration) - Method in class org.apache.hadoop.dfs.HftpFileSystem
 
initialize(URI, Configuration) - Method in class org.apache.hadoop.fs.FileSystem
Called after a new FileSystem instance is constructed.
initialize(URI, Configuration) - Method in class org.apache.hadoop.fs.FilterFileSystem
Called after a new FileSystem instance is constructed.
initialize(URI, Configuration) - Method in class org.apache.hadoop.fs.RawLocalFileSystem
 
initialize(URI, Configuration) - Method in interface org.apache.hadoop.fs.s3.FileSystemStore
 
initialize(URI, Configuration) - Method in class org.apache.hadoop.fs.s3.MigrationTool
 
initialize(URI, Configuration) - Method in class org.apache.hadoop.fs.s3.S3FileSystem
 
initialize(int) - Method in class org.apache.hadoop.util.PriorityQueue
Subclass constructors must call this.
initializePieces() - Method in class org.apache.hadoop.examples.dancing.OneSidedPentomino
Define the one sided pieces.
initializePieces() - Method in class org.apache.hadoop.examples.dancing.Pentomino
Fill in the pieces list.
initJob(String, String, String, Class<? extends TableMap>, JobConf) - Static method in class org.apache.hadoop.hbase.mapred.GroupingTableMap
Use this before submitting a TableMap job.
initJob(String, String, Class<? extends TableMap>, JobConf) - Static method in class org.apache.hadoop.hbase.mapred.TableMap
Use this before submitting a TableMap job.
initJob(String, Class<? extends TableReduce>, JobConf) - Static method in class org.apache.hadoop.hbase.mapred.TableReduce
Use this before submitting a TableReduce job.
InMemoryFileSystem - Class in org.apache.hadoop.fs
An implementation of the in-memory filesystem.
InMemoryFileSystem() - Constructor for class org.apache.hadoop.fs.InMemoryFileSystem
 
InMemoryFileSystem(URI, Configuration) - Constructor for class org.apache.hadoop.fs.InMemoryFileSystem
 
INode - Class in org.apache.hadoop.fs.s3
Holds file metadata including type (regular file, or directory), and the list of blocks that are pointers to the data.
INode(INode.FileType, Block[]) - Constructor for class org.apache.hadoop.fs.s3.INode
 
inodeExists(Path) - Method in interface org.apache.hadoop.fs.s3.FileSystemStore
 
Input() - Method in class org.apache.hadoop.record.compiler.generated.Rcc
 
input_stream - Variable in class org.apache.hadoop.hbase.shell.generated.ParserTokenManager
 
input_stream - Variable in class org.apache.hadoop.record.compiler.generated.RccTokenManager
 
inputFile - Variable in class org.apache.hadoop.contrib.utils.join.DataJoinMapperBase
 
inputFile - Variable in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorBaseDescriptor
 
InputFormat - Interface in org.apache.hadoop.mapred
An input data format.
InputFormatBase - Class in org.apache.hadoop.mapred
Deprecated. replaced by FileInputFormat
InputFormatBase() - Constructor for class org.apache.hadoop.mapred.InputFormatBase
Deprecated.  
inputFormatSpec_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
inputSpecs_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
InputSplit - Interface in org.apache.hadoop.mapred
The description of the data for a single map task.
inputStream - Variable in class org.apache.hadoop.hbase.shell.generated.SimpleCharStream
 
inputStream - Variable in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
inputTag - Variable in class org.apache.hadoop.contrib.utils.join.DataJoinMapperBase
 
inReaderSpec_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
INSERT - Static variable in interface org.apache.hadoop.hbase.shell.generated.ParserConstants
 
insert(Object) - Method in class org.apache.hadoop.util.PriorityQueue
Adds element to the PriorityQueue in log(size) time if either the PriorityQueue is not full, or not lessThan(element, top()).
insertCommand() - Method in class org.apache.hadoop.hbase.shell.generated.Parser
 
InsertCommand - Class in org.apache.hadoop.hbase.shell
 
InsertCommand() - Constructor for class org.apache.hadoop.hbase.shell.InsertCommand
 
INT_TKN - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
IntWritable - Class in org.apache.hadoop.io
A WritableComparable for ints.
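A small sketch of IntWritable as a boxed, comparable int:

    import org.apache.hadoop.io.IntWritable;

    public class IntWritableSketch {
      public static void main(String[] args) {
        IntWritable a = new IntWritable(7);
        IntWritable b = new IntWritable();
        b.set(42);
        System.out.println(a.get() + b.get());    // 49
        System.out.println(a.compareTo(b) < 0);   // true: 7 < 42
      }
    }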
IntWritable() - Constructor for class org.apache.hadoop.io.IntWritable
 
IntWritable(int) - Constructor for class org.apache.hadoop.io.IntWritable
 
IntWritable.Comparator - Class in org.apache.hadoop.io
A Comparator optimized for IntWritable.
IntWritable.Comparator() - Constructor for class org.apache.hadoop.io.IntWritable.Comparator
 
InvalidColumnNameException - Exception in org.apache.hadoop.hbase
 
InvalidColumnNameException() - Constructor for exception org.apache.hadoop.hbase.InvalidColumnNameException
 
InvalidColumnNameException(String) - Constructor for exception org.apache.hadoop.hbase.InvalidColumnNameException
 
InvalidFileTypeException - Exception in org.apache.hadoop.mapred
Used when file type differs from the desired file type.
InvalidFileTypeException() - Constructor for exception org.apache.hadoop.mapred.InvalidFileTypeException
 
InvalidFileTypeException(String) - Constructor for exception org.apache.hadoop.mapred.InvalidFileTypeException
 
InvalidInputException - Exception in org.apache.hadoop.mapred
This class wraps a list of problems with the input, so that the user can get a list of problems together instead of finding and fixing them one by one.
InvalidInputException(List<IOException>) - Constructor for exception org.apache.hadoop.mapred.InvalidInputException
Create the exception with the given list.
InvalidJobConfException - Exception in org.apache.hadoop.mapred
This exception is thrown when the jobconf is missing some mandatory attributes, or when the value of some attributes is invalid.
InvalidJobConfException() - Constructor for exception org.apache.hadoop.mapred.InvalidJobConfException
 
InvalidJobConfException(String) - Constructor for exception org.apache.hadoop.mapred.InvalidJobConfException
 
InvalidRowFilterException - Exception in org.apache.hadoop.hbase.filter
Used to indicate an invalid RowFilter.
InvalidRowFilterException() - Constructor for exception org.apache.hadoop.hbase.filter.InvalidRowFilterException
 
InvalidRowFilterException(String) - Constructor for exception org.apache.hadoop.hbase.filter.InvalidRowFilterException
 
InverseMapper - Class in org.apache.hadoop.mapred.lib
A Mapper that swaps keys and values.
InverseMapper() - Constructor for class org.apache.hadoop.mapred.lib.InverseMapper
 
isAbsolute() - Method in class org.apache.hadoop.fs.Path
True if the directory of this path is absolute.
isAbsolute() - Method in class org.apache.hadoop.metrics.spi.MetricValue
 
isAlive - Variable in class org.apache.hadoop.dfs.DatanodeDescriptor
 
isBlockCompressed() - Method in class org.apache.hadoop.io.SequenceFile.Reader
Returns true if records are block-compressed.
isChecksumFile(Path) - Static method in class org.apache.hadoop.fs.ChecksumFileSystem
Return true iff file is a checksum file name.
isComplete() - Method in interface org.apache.hadoop.mapred.RunningJob
Non-blocking function to check whether the job is finished or not.
isCompleted() - Method in class org.apache.hadoop.mapred.jobcontrol.Job
 
isCompressed() - Method in class org.apache.hadoop.io.SequenceFile.Reader
Returns true if values are compressed.
isContextValid(String) - Static method in class org.apache.hadoop.fs.LocalDirAllocator
Method to check whether a context is valid
isCygwin() - Static method in class org.apache.hadoop.streaming.StreamUtil
 
isDir() - Method in class org.apache.hadoop.dfs.HftpFileSystem.HftpFileStatus
 
isDir(String) - Method in class org.apache.hadoop.dfs.NameNode
 
isDir() - Method in interface org.apache.hadoop.fs.FileStatus
Is this a directory?
isDirectory(Path) - Method in class org.apache.hadoop.fs.FileSystem
Deprecated. Use getFileStatus() instead
isDirectory() - Method in class org.apache.hadoop.fs.s3.INode
 
isDisableHistory() - Static method in class org.apache.hadoop.mapred.JobHistory
Returns history disable status.
isFile(Path) - Method in class org.apache.hadoop.fs.FileSystem
True iff the named path is a regular file.
isFile() - Method in class org.apache.hadoop.fs.s3.INode
 
isFile(Path) - Method in class org.apache.hadoop.fs.s3.S3FileSystem
 
isHealthy() - Method in class org.apache.hadoop.dfs.NamenodeFsck.FsckResult
DFS is considered healthy if there are no missing blocks.
isIdle() - Method in class org.apache.hadoop.mapred.TaskTracker
Is this task tracker idle?
isIncrement() - Method in class org.apache.hadoop.metrics.spi.MetricValue
 
isLocalHadoop() - Method in class org.apache.hadoop.streaming.StreamJob
 
isLocalJobTracker(JobConf) - Static method in class org.apache.hadoop.streaming.StreamUtil
 
isMapTask() - Method in class org.apache.hadoop.mapred.TaskCompletionEvent
 
isMasterRunning() - Method in class org.apache.hadoop.hbase.HClient
 
isMasterRunning() - Method in class org.apache.hadoop.hbase.HMaster
isMasterRunning() - Method in interface org.apache.hadoop.hbase.HMasterInterface
 
isMonitoring() - Method in interface org.apache.hadoop.metrics.MetricsContext
Returns true if monitoring is currently in progress.
isMonitoring() - Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
Returns true if monitoring is currently in progress.
isMultipleMatchScanner() - Method in class org.apache.hadoop.hbase.HAbstractScanner
 
isMultipleMatchScanner() - Method in interface org.apache.hadoop.hbase.HInternalScannerInterface
 
isNativeCodeLoaded() - Static method in class org.apache.hadoop.util.NativeCodeLoader
Check if native-hadoop code is loaded for this platform.
isNativeLzoLoaded() - Static method in class org.apache.hadoop.io.compress.lzo.LzoCompressor
Check if lzo compressors are loaded and initialized.
isNativeLzoLoaded() - Static method in class org.apache.hadoop.io.compress.lzo.LzoDecompressor
Check if lzo decompressors are loaded and initialized.
isNativeLzoLoaded(Configuration) - Static method in class org.apache.hadoop.io.compress.LzoCodec
Check if native-lzo library is loaded & initialized.
isNativeZlibLoaded(Configuration) - Static method in class org.apache.hadoop.io.compress.zlib.ZlibFactory
Check if native-zlib code is loaded & initialized correctly and can be loaded for this job.
IsolationRunner - Class in org.apache.hadoop.mapred
 
IsolationRunner() - Constructor for class org.apache.hadoop.mapred.IsolationRunner
 
isOnSameRack(DatanodeDescriptor, DatanodeDescriptor) - Method in class org.apache.hadoop.net.NetworkTopology
Check if two data nodes are on the same rack
isReady() - Method in class org.apache.hadoop.mapred.jobcontrol.Job
 
isSplitable(FileSystem, Path) - Method in class org.apache.hadoop.mapred.FileInputFormat
Is the given filename splitable? Usually true, but if the file is stream compressed, it will not be.
isSplitable(FileSystem, Path) - Method in class org.apache.hadoop.mapred.TextInputFormat
 
isSuccessful() - Method in interface org.apache.hadoop.mapred.RunningJob
True iff job completed successfully.
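A brief sketch of polling a RunningJob handle; the buildConf() helper is hypothetical and stands for a fully configured JobConf:

    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.RunningJob;

    public class PollJob {
      public static void main(String[] args) throws Exception {
        JobConf conf = buildConf();               // hypothetical: job settings go here
        JobClient client = new JobClient(conf);
        RunningJob job = client.submitJob(conf);  // non-blocking submit
        while (!job.isComplete()) {               // poll until finished
          System.out.println("map progress: " + job.mapProgress());
          Thread.sleep(5000);
        }
        System.out.println(job.isSuccessful() ? "succeeded" : "failed");
      }

      private static JobConf buildConf() {
        return new JobConf();                     // placeholder only
      }
    }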
isWildcardScanner() - Method in class org.apache.hadoop.hbase.HAbstractScanner
 
isWildcardScanner() - Method in interface org.apache.hadoop.hbase.HInternalScannerInterface
 
iterator() - Method in class org.apache.hadoop.conf.Configuration
Go through the list of String key-value pairs in the configuration.
iterator() - Method in class org.apache.hadoop.hbase.io.BatchUpdate
 

J

jar_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
JarBuilder - Class in org.apache.hadoop.streaming
This class is the main class for generating job.jar for Hadoop Streaming jobs.
JarBuilder() - Constructor for class org.apache.hadoop.streaming.JarBuilder
 
JBoolean - Class in org.apache.hadoop.record.compiler
 
JBoolean() - Constructor for class org.apache.hadoop.record.compiler.JBoolean
Creates a new instance of JBoolean
JBuffer - Class in org.apache.hadoop.record.compiler
Code generator for "buffer" type.
JBuffer() - Constructor for class org.apache.hadoop.record.compiler.JBuffer
Creates a new instance of JBuffer
JByte - Class in org.apache.hadoop.record.compiler
Code generator for "byte" type.
JByte() - Constructor for class org.apache.hadoop.record.compiler.JByte
 
jc_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
JDouble - Class in org.apache.hadoop.record.compiler
 
JDouble() - Constructor for class org.apache.hadoop.record.compiler.JDouble
Creates a new instance of JDouble
JField<T> - Class in org.apache.hadoop.record.compiler
A thin wrapper around a record field.
JField(String, T) - Constructor for class org.apache.hadoop.record.compiler.JField
Creates a new instance of JField
JFile - Class in org.apache.hadoop.record.compiler
Container for the Hadoop Record DDL.
JFile(String, ArrayList<JFile>, ArrayList<JRecord>) - Constructor for class org.apache.hadoop.record.compiler.JFile
Creates a new instance of JFile
JFloat - Class in org.apache.hadoop.record.compiler
 
JFloat() - Constructor for class org.apache.hadoop.record.compiler.JFloat
Creates a new instance of JFloat
JInt - Class in org.apache.hadoop.record.compiler
Code generator for "int" type
JInt() - Constructor for class org.apache.hadoop.record.compiler.JInt
Creates a new instance of JInt
jj_nt - Variable in class org.apache.hadoop.hbase.shell.generated.Parser
 
jj_nt - Variable in class org.apache.hadoop.record.compiler.generated.Rcc
 
jjFillToken() - Method in class org.apache.hadoop.hbase.shell.generated.ParserTokenManager
 
jjFillToken() - Method in class org.apache.hadoop.record.compiler.generated.RccTokenManager
 
jjnewLexState - Static variable in class org.apache.hadoop.record.compiler.generated.RccTokenManager
 
jjstrLiteralImages - Static variable in class org.apache.hadoop.hbase.shell.generated.ParserTokenManager
 
jjstrLiteralImages - Static variable in class org.apache.hadoop.record.compiler.generated.RccTokenManager
 
JLong - Class in org.apache.hadoop.record.compiler
Code generator for "long" type
JLong() - Constructor for class org.apache.hadoop.record.compiler.JLong
Creates a new instance of JLong
JMap - Class in org.apache.hadoop.record.compiler
 
JMap(JType, JType) - Constructor for class org.apache.hadoop.record.compiler.JMap
Creates a new instance of JMap
job - Variable in class org.apache.hadoop.contrib.utils.join.DataJoinMapperBase
 
job - Variable in class org.apache.hadoop.contrib.utils.join.DataJoinReducerBase
 
Job - Class in org.apache.hadoop.mapred.jobcontrol
This class encapsulates a MapReduce job and its dependency.
Job(JobConf, ArrayList) - Constructor for class org.apache.hadoop.mapred.jobcontrol.Job
Construct a job.
JobBase - Class in org.apache.hadoop.contrib.utils.join
A common base class implementing some statistics-collecting mechanisms that are commonly used in a typical map/reduce job.
JobBase() - Constructor for class org.apache.hadoop.contrib.utils.join.JobBase
 
JobClient - Class in org.apache.hadoop.mapred
JobClient interacts with the JobTracker network interface.
JobClient() - Constructor for class org.apache.hadoop.mapred.JobClient
Build a job client, connect to the default job tracker
JobClient(JobConf) - Constructor for class org.apache.hadoop.mapred.JobClient
 
JobClient(InetSocketAddress, Configuration) - Constructor for class org.apache.hadoop.mapred.JobClient
Build a job client, connect to the indicated job tracker.
JobClient.TaskStatusFilter - Enum in org.apache.hadoop.mapred
 
JobConf - Class in org.apache.hadoop.mapred
A map/reduce job configuration.
JobConf() - Constructor for class org.apache.hadoop.mapred.JobConf
Construct a map/reduce job configuration.
JobConf(Class) - Constructor for class org.apache.hadoop.mapred.JobConf
Construct a map/reduce job configuration.
JobConf(Configuration) - Constructor for class org.apache.hadoop.mapred.JobConf
Construct a map/reduce job configuration.
JobConf(Configuration, Class) - Constructor for class org.apache.hadoop.mapred.JobConf
Construct a map/reduce job configuration.
JobConf(String) - Constructor for class org.apache.hadoop.mapred.JobConf
Construct a map/reduce configuration.
JobConf(Path) - Constructor for class org.apache.hadoop.mapred.JobConf
Construct a map/reduce configuration.
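A minimal driver sketch showing the usual pattern of building a JobConf and submitting it with JobClient.runJob(); the input and output paths are hypothetical, and the InverseMapper/LongSumReducer pairing (both listed in this index) is chosen only to illustrate the wiring:

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.lib.InverseMapper;
    import org.apache.hadoop.mapred.lib.LongSumReducer;

    public class ExampleDriver {
      public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(ExampleDriver.class);
        conf.setJobName("example");
        conf.setMapperClass(InverseMapper.class);    // swaps keys and values
        conf.setReducerClass(LongSumReducer.class);  // sums long values per key
        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(LongWritable.class);
        conf.setInputPath(new Path("/user/example/in"));    // path-setting API of this era
        conf.setOutputPath(new Path("/user/example/out"));
        JobClient.runJob(conf);                      // submit and wait for completion
      }
    }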
jobConf_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
JobConfigurable - Interface in org.apache.hadoop.mapred
Something that may be configured with a JobConf.
JobControl - Class in org.apache.hadoop.mapred.jobcontrol
This class encapsulates a set of MapReduce jobs and their dependencies.
JobControl(String) - Constructor for class org.apache.hadoop.mapred.jobcontrol.JobControl
Construct a job control for a group of jobs.
JobEndNotifier - Class in org.apache.hadoop.mapred
 
JobEndNotifier() - Constructor for class org.apache.hadoop.mapred.JobEndNotifier
 
JobHistory - Class in org.apache.hadoop.mapred
Provides methods for writing to and reading from job history.
JobHistory() - Constructor for class org.apache.hadoop.mapred.JobHistory
 
JobHistory.HistoryCleaner - Class in org.apache.hadoop.mapred
Delete history files older than one month.
JobHistory.HistoryCleaner() - Constructor for class org.apache.hadoop.mapred.JobHistory.HistoryCleaner
 
JobHistory.JobInfo - Class in org.apache.hadoop.mapred
Helper class for logging or reading back events related to job start, finish or failure.
JobHistory.JobInfo(String) - Constructor for class org.apache.hadoop.mapred.JobHistory.JobInfo
Create new JobInfo
JobHistory.Keys - Enum in org.apache.hadoop.mapred
Job history files contain key="value" pairs, where keys belong to this enum.
JobHistory.Listener - Interface in org.apache.hadoop.mapred
Callback interface for reading back log events from JobHistory.
JobHistory.MapAttempt - Class in org.apache.hadoop.mapred
Helper class for logging or reading back events related to start, finish or failure of a Map Attempt on a node.
JobHistory.MapAttempt() - Constructor for class org.apache.hadoop.mapred.JobHistory.MapAttempt
 
JobHistory.RecordTypes - Enum in org.apache.hadoop.mapred
Record types are identifiers for each line of log in history files.
JobHistory.ReduceAttempt - Class in org.apache.hadoop.mapred
Helper class for logging or reading back events related to start, finish or failure of a Reduce Attempt on a node.
JobHistory.ReduceAttempt() - Constructor for class org.apache.hadoop.mapred.JobHistory.ReduceAttempt
 
JobHistory.Task - Class in org.apache.hadoop.mapred
Helper class for logging or reading back events related to Task's start, finish or failure.
JobHistory.Task() - Constructor for class org.apache.hadoop.mapred.JobHistory.Task
 
JobHistory.TaskAttempt - Class in org.apache.hadoop.mapred
Base class for Map and Reduce TaskAttempts.
JobHistory.TaskAttempt() - Constructor for class org.apache.hadoop.mapred.JobHistory.TaskAttempt
 
JobHistory.Values - Enum in org.apache.hadoop.mapred
This enum contains some of the values commonly used by history log events.
jobId_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
jobInfo() - Method in class org.apache.hadoop.streaming.StreamJob
 
JobPriority - Enum in org.apache.hadoop.mapred
Used to describe the priority of the running job.
JobProfile - Class in org.apache.hadoop.mapred
A JobProfile is a MapReduce primitive.
JobProfile() - Constructor for class org.apache.hadoop.mapred.JobProfile
Construct an empty JobProfile.
JobProfile(String, String, String, String, String) - Constructor for class org.apache.hadoop.mapred.JobProfile
Construct a JobProfile from the userid, jobid, job config-file, job-details URL and job name.
JobStatus - Class in org.apache.hadoop.mapred
Describes the current status of a job.
JobStatus() - Constructor for class org.apache.hadoop.mapred.JobStatus
 
JobStatus(String, float, float, int) - Constructor for class org.apache.hadoop.mapred.JobStatus
Create a job status object for a given jobid.
jobsToComplete() - Method in class org.apache.hadoop.mapred.JobClient
 
jobsToComplete() - Method in interface org.apache.hadoop.mapred.JobSubmissionProtocol
Get the jobs that are not completed and not failed
jobsToComplete() - Method in class org.apache.hadoop.mapred.JobTracker
 
JobSubmissionProtocol - Interface in org.apache.hadoop.mapred
Protocol that a JobClient and the central JobTracker use to communicate.
JobTracker - Class in org.apache.hadoop.mapred
JobTracker is the central location for submitting and tracking MR jobs in a network environment.
JOBTRACKER_START_TIME - Static variable in class org.apache.hadoop.mapred.JobHistory
 
join() - Method in class org.apache.hadoop.dfs.NameNode
Wait for service to finish.
join() - Method in class org.apache.hadoop.ipc.Server
Wait for the server to be stopped.
JRecord - Class in org.apache.hadoop.record.compiler
 
JRecord(String, ArrayList<JField<JType>>) - Constructor for class org.apache.hadoop.record.compiler.JRecord
Creates a new instance of JRecord
JspHelper - Class in org.apache.hadoop.dfs
 
JspHelper() - Constructor for class org.apache.hadoop.dfs.JspHelper
 
JString - Class in org.apache.hadoop.record.compiler
 
JString() - Constructor for class org.apache.hadoop.record.compiler.JString
Creates a new instance of JString
JType - Class in org.apache.hadoop.record.compiler
Abstract Base class for all types supported by Hadoop Record I/O.
JType() - Constructor for class org.apache.hadoop.record.compiler.JType
 
JVector - Class in org.apache.hadoop.record.compiler
 
JVector(JType) - Constructor for class org.apache.hadoop.record.compiler.JVector
Creates a new instance of JVector
JvmMetrics - Class in org.apache.hadoop.metrics.jvm
Singleton class which reports Java Virtual Machine metrics to the metrics API.

K

key() - Method in class org.apache.hadoop.io.ArrayFile.Reader
Returns the key associated with the most recent call to ArrayFile.Reader.seek(long), ArrayFile.Reader.next(Writable), or ArrayFile.Reader.get(long,Writable).
Key - Class in org.onelab.filter
The general behavior of a key that must be stored in a filter.
Key() - Constructor for class org.onelab.filter.Key
default constructor - use with readFields
Key(byte[]) - Constructor for class org.onelab.filter.Key
Constructor.
Key(byte[], double) - Constructor for class org.onelab.filter.Key
Constructor.
KeyedData - Class in org.apache.hadoop.hbase.io
KeyedData is just a data pair.
KeyedData() - Constructor for class org.apache.hadoop.hbase.io.KeyedData
Default constructor.
KeyedData(HStoreKey, byte[]) - Constructor for class org.apache.hadoop.hbase.io.KeyedData
Create a KeyedData object specifying the parts
KeyedDataArrayWritable - Class in org.apache.hadoop.hbase.io
Wraps an array of KeyedData items as a Writable.
KeyedDataArrayWritable() - Constructor for class org.apache.hadoop.hbase.io.KeyedDataArrayWritable
Make a record of length 0
KeyFieldBasedPartitioner - Class in org.apache.hadoop.mapred.lib
 
KeyFieldBasedPartitioner() - Constructor for class org.apache.hadoop.mapred.lib.KeyFieldBasedPartitioner
 
Keying - Class in org.apache.hadoop.hbase.util
Utility for creating HBase-friendly keys.
Keying() - Constructor for class org.apache.hadoop.hbase.util.Keying
 
keys - Variable in class org.apache.hadoop.hbase.HAbstractScanner
 
keyToUri(String) - Static method in class org.apache.hadoop.hbase.util.Keying
Reverse the Keying.createKey(String) transform.
KeyValueLineRecordReader - Class in org.apache.hadoop.mapred
This class treats a line in the input as a key/value pair separated by a separator character.
KeyValueLineRecordReader(Configuration, FileSplit) - Constructor for class org.apache.hadoop.mapred.KeyValueLineRecordReader
 
KeyValueTextInputFormat - Class in org.apache.hadoop.mapred
An InputFormat for plain text files.
KeyValueTextInputFormat() - Constructor for class org.apache.hadoop.mapred.KeyValueTextInputFormat
 
killJob(String) - Method in interface org.apache.hadoop.mapred.JobSubmissionProtocol
Kill the indicated job
killJob(String) - Method in class org.apache.hadoop.mapred.JobTracker
 
killJob() - Method in interface org.apache.hadoop.mapred.RunningJob
Kill the running job.
kind - Variable in class org.apache.hadoop.hbase.shell.generated.Token
An integer that describes the kind of this token.
kind - Variable in class org.apache.hadoop.record.compiler.generated.Token
An integer that describes the kind of this token.

L

largestNumOfValues - Variable in class org.apache.hadoop.contrib.utils.join.DataJoinReducerBase
 
lastUpdate - Variable in class org.apache.hadoop.dfs.DatanodeInfo
 
LAYOUT_VERSION - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
LBRACE_TKN - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
LEASE_HARDLIMIT_PERIOD - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
LEASE_SOFTLIMIT_PERIOD - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
leaseCheckFrequency - Variable in class org.apache.hadoop.hbase.Leases
 
leaseExpired() - Method in interface org.apache.hadoop.hbase.LeaseListener
When a lease expires, this method is called.
LeaseExpiredException - Exception in org.apache.hadoop.dfs
The lease that was being used to create this file has expired.
LeaseExpiredException(String) - Constructor for exception org.apache.hadoop.dfs.LeaseExpiredException
 
LeaseListener - Interface in org.apache.hadoop.hbase
LeaseListener is an interface meant to be implemented by users of the Leases class.
leasePeriod - Variable in class org.apache.hadoop.hbase.Leases
 
Leases - Class in org.apache.hadoop.hbase
Leases. There are several server classes in HBase that need to track external clients that occasionally send heartbeats.
Leases(long, long) - Constructor for class org.apache.hadoop.hbase.Leases
Creates a lease
leases - Variable in class org.apache.hadoop.hbase.Leases
 
lessThan(Object, Object) - Method in class org.apache.hadoop.util.PriorityQueue
Determines the ordering of objects in this priority queue.
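A minimal sketch of a PriorityQueue subclass: the constructor sizes the heap via initialize(int) and lessThan() supplies the ordering (here, a min-ordering over IntWritable):

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.util.PriorityQueue;

    public class IntMinQueue extends PriorityQueue {
      public IntMinQueue(int maxSize) {
        initialize(maxSize);   // subclass constructors must call this
      }
      public boolean lessThan(Object a, Object b) {
        return ((IntWritable) a).get() < ((IntWritable) b).get();
      }
    }

With this ordering the smallest element sits at the top of the heap, so once the queue is full insert() retains the largest values seen.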
level - Variable in class org.apache.hadoop.net.NodeBase
 
LexicalError(boolean, int, int, int, String, char) - Static method in error org.apache.hadoop.hbase.shell.generated.TokenMgrError
Returns a detailed message for the Error when it is thrown by the token manager to indicate a lexical error.
LexicalError(boolean, int, int, int, String, char) - Static method in error org.apache.hadoop.record.compiler.generated.TokenMgrError
Returns a detailed message for the Error when it is thrown by the token manager to indicate a lexical error.
lexStateNames - Static variable in class org.apache.hadoop.hbase.shell.generated.ParserTokenManager
 
lexStateNames - Static variable in class org.apache.hadoop.record.compiler.generated.RccTokenManager
 
LIMIT - Static variable in interface org.apache.hadoop.hbase.shell.generated.ParserConstants
 
limitDecimal(double, int) - Static method in class org.apache.hadoop.fs.FsShell
 
line - Variable in class org.apache.hadoop.hbase.shell.generated.SimpleCharStream
 
line - Variable in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
LineRecordReader - Class in org.apache.hadoop.mapred
Treats keys as offset in file and value as line.
LineRecordReader(Configuration, FileSplit) - Constructor for class org.apache.hadoop.mapred.LineRecordReader
 
LineRecordReader(InputStream, long, long) - Constructor for class org.apache.hadoop.mapred.LineRecordReader
 
LINK_URI - Static variable in class org.apache.hadoop.streaming.StreamJob
 
listDeepSubPaths(Path) - Method in interface org.apache.hadoop.fs.s3.FileSystemStore
 
listJobConfProperties() - Method in class org.apache.hadoop.streaming.StreamJob
Prints out the jobconf properties on stdout when verbose is specified.
listPaths(Path) - Method in class org.apache.hadoop.dfs.DistributedFileSystem
 
listPaths(Path) - Method in class org.apache.hadoop.dfs.HftpFileSystem
 
listPaths(Path[]) - Method in class org.apache.hadoop.fs.ChecksumFileSystem
Filter raw files in the given paths using the default checksum filter.
listPaths(Path) - Method in class org.apache.hadoop.fs.ChecksumFileSystem
Filter raw files in the given path using the default checksum filter.
listPaths(Path) - Method in class org.apache.hadoop.fs.FileSystem
List files in a directory.
listPaths(Path[]) - Method in class org.apache.hadoop.fs.FileSystem
Filter files in the given paths using the default checksum filter.
listPaths(Path, PathFilter) - Method in class org.apache.hadoop.fs.FileSystem
Filter files in a directory.
listPaths(Path[], PathFilter) - Method in class org.apache.hadoop.fs.FileSystem
Filter files in a list directories using user-supplied path filter.
listPaths(Path) - Method in class org.apache.hadoop.fs.FilterFileSystem
List files in a directory.
listPaths(Path) - Method in class org.apache.hadoop.fs.RawLocalFileSystem
 
listPaths(Path) - Method in class org.apache.hadoop.fs.s3.S3FileSystem
 
listPaths(JobConf) - Method in class org.apache.hadoop.mapred.FileInputFormat
List input directories.
listPaths(JobConf) - Method in class org.apache.hadoop.mapred.SequenceFileInputFormat
 
ListPathsServlet - Class in org.apache.hadoop.dfs
Obtain meta-information about a filesystem.
ListPathsServlet() - Constructor for class org.apache.hadoop.dfs.ListPathsServlet
 
listStatus(Path) - Method in class org.apache.hadoop.dfs.HftpFileSystem
 
listSubPaths(Path) - Method in interface org.apache.hadoop.fs.s3.FileSystemStore
 
listTables() - Method in class org.apache.hadoop.hbase.HClient
List all the userspace tables.
ljustify(String, int) - Static method in class org.apache.hadoop.streaming.StreamUtil
 
Load() - Static method in class org.apache.hadoop.hbase.shell.HelpContents
add help contents
LocalDirAllocator - Class in org.apache.hadoop.fs
An implementation of a round-robin scheme for disk allocation for creating files.
LocalDirAllocator(String) - Constructor for class org.apache.hadoop.fs.LocalDirAllocator
Create an allocator object
LocalFileSystem - Class in org.apache.hadoop.fs
Implement the FileSystem API for the checksumed local filesystem.
LocalFileSystem() - Constructor for class org.apache.hadoop.fs.LocalFileSystem
 
LocalFileSystem(FileSystem) - Constructor for class org.apache.hadoop.fs.LocalFileSystem
 
localHadoop_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
localizeBin(String) - Static method in class org.apache.hadoop.streaming.StreamUtil
 
localRunnerNotification(JobConf, JobStatus) - Static method in class org.apache.hadoop.mapred.JobEndNotifier
 
locatedBlockCount() - Method in class org.apache.hadoop.dfs.LocatedBlocks
Get number of located blocks.
LocatedBlocks - Class in org.apache.hadoop.dfs
Collection of blocks with their locations and the file length.
location - Variable in class org.apache.hadoop.net.NodeBase
 
lock(Path, boolean) - Method in class org.apache.hadoop.dfs.DistributedFileSystem
Deprecated.  
lock(Path, boolean) - Method in class org.apache.hadoop.fs.ChecksumFileSystem
 
lock(Path, boolean) - Method in class org.apache.hadoop.fs.FileSystem
Deprecated. FS does not support file locks anymore.
lock(Path, boolean) - Method in class org.apache.hadoop.fs.FilterFileSystem
Deprecated. FS does not support file locks anymore.
lock(Path, boolean) - Method in class org.apache.hadoop.fs.RawLocalFileSystem
Deprecated.  
lock - Variable in class org.apache.hadoop.hbase.HRegionServer
 
lock(Path, boolean) - Method in class org.apache.hadoop.mapred.PhasedFileSystem
Deprecated.  
LockException - Exception in org.apache.hadoop.hbase
 
LockException() - Constructor for exception org.apache.hadoop.hbase.LockException
 
LockException(String) - Constructor for exception org.apache.hadoop.hbase.LockException
 
LOG - Static variable in class org.apache.hadoop.contrib.utils.join.JobBase
 
LOG - Static variable in class org.apache.hadoop.dfs.DataNode
 
LOG - Static variable in class org.apache.hadoop.dfs.NameNode
 
LOG - Static variable in class org.apache.hadoop.dfs.NamenodeFsck
 
LOG - Static variable in class org.apache.hadoop.dfs.SecondaryNameNode
 
LOG - Static variable in class org.apache.hadoop.fs.FileSystem
 
LOG - Static variable in class org.apache.hadoop.fs.FSInputChecker
 
log - Variable in class org.apache.hadoop.hbase.HRegionServer
 
LOG - Static variable in class org.apache.hadoop.hbase.Leases
 
LOG - Static variable in class org.apache.hadoop.io.compress.CompressionCodecFactory
 
LOG - Static variable in class org.apache.hadoop.io.SequenceFile
 
LOG - Static variable in class org.apache.hadoop.ipc.Client
 
LOG - Static variable in class org.apache.hadoop.ipc.Server
 
log(Log) - Method in class org.apache.hadoop.mapred.Counters
Logs the current counter values.
LOG - Static variable in class org.apache.hadoop.mapred.FileInputFormat
 
LOG - Static variable in class org.apache.hadoop.mapred.JobHistory
 
LOG - Static variable in class org.apache.hadoop.mapred.JobTracker
 
LOG - Static variable in class org.apache.hadoop.mapred.lib.FieldSelectionMapReduce
 
LOG - Static variable in class org.apache.hadoop.mapred.TaskTracker
 
LOG - Static variable in class org.apache.hadoop.net.NetworkTopology
 
LOG - Static variable in class org.apache.hadoop.streaming.PipeMapRed
 
LOG - Static variable in class org.apache.hadoop.streaming.StreamBaseRecordReader
 
LOG - Static variable in class org.apache.hadoop.streaming.StreamJob
 
Logalyzer - Class in org.apache.hadoop.tools
Logalyzer: A utility tool for archiving and analyzing hadoop logs.
Logalyzer() - Constructor for class org.apache.hadoop.tools.Logalyzer
 
Logalyzer.LogComparator - Class in org.apache.hadoop.tools
A WritableComparator optimized for UTF8 keys of the logs.
Logalyzer.LogComparator() - Constructor for class org.apache.hadoop.tools.Logalyzer.LogComparator
 
Logalyzer.LogRegexMapper - Class in org.apache.hadoop.tools
A Mapper that extracts text matching a regular expression.
Logalyzer.LogRegexMapper() - Constructor for class org.apache.hadoop.tools.Logalyzer.LogRegexMapper
 
logFailed(String, long, int, int) - Static method in class org.apache.hadoop.mapred.JobHistory.JobInfo
Logs job failed event.
logFailed(String, String, String, long, String, String) - Static method in class org.apache.hadoop.mapred.JobHistory.MapAttempt
Log task attempt failed event.
logFailed(String, String, String, long, String, String) - Static method in class org.apache.hadoop.mapred.JobHistory.ReduceAttempt
Log failed reduce task attempt.
logFailed(String, String, String, long, String) - Static method in class org.apache.hadoop.mapred.JobHistory.Task
Log task failed event.
logFinished(String, long, int, int, int, int) - Static method in class org.apache.hadoop.mapred.JobHistory.JobInfo
Log job finished.
logFinished(String, String, String, long, String) - Static method in class org.apache.hadoop.mapred.JobHistory.MapAttempt
Log finish time of map task attempt.
logFinished(String, String, String, long, long, long, String) - Static method in class org.apache.hadoop.mapred.JobHistory.ReduceAttempt
Log finished event of this task.
logFinished(String, String, String, long) - Static method in class org.apache.hadoop.mapred.JobHistory.Task
Log finish time of task.
logKilled(String, String, String, long, String, String) - Static method in class org.apache.hadoop.mapred.JobHistory.MapAttempt
Log task attempt killed event.
logKilled(String, String, String, long, String, String) - Static method in class org.apache.hadoop.mapred.JobHistory.ReduceAttempt
Log killed reduce task attempt.
logRollerLock - Variable in class org.apache.hadoop.hbase.HRegionServer
 
logSpec() - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorJobBase
 
logStarted(String, long, int, int) - Static method in class org.apache.hadoop.mapred.JobHistory.JobInfo
Logs launch time of job.
logStarted(String, String, String, long, String) - Static method in class org.apache.hadoop.mapred.JobHistory.MapAttempt
Log start time of this map task attempt.
logStarted(String, String, String, long, String) - Static method in class org.apache.hadoop.mapred.JobHistory.ReduceAttempt
Log start time of Reduce task attempt.
logStarted(String, String, String, long) - Static method in class org.apache.hadoop.mapred.JobHistory.Task
Log start time of task (TIP).
logSubmitted(String, String, String, long, String) - Static method in class org.apache.hadoop.mapred.JobHistory.JobInfo
Log job submitted event to history.
logThreadInfo(Log, String, long) - Static method in class org.apache.hadoop.util.ReflectionUtils
Log the current thread stacks at INFO level.
LONG_TKN - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
LONG_VALUE_MAX - Static variable in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorBaseDescriptor
 
LONG_VALUE_MIN - Static variable in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorBaseDescriptor
 
LONG_VALUE_SUM - Static variable in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorBaseDescriptor
 
LongSumReducer - Class in org.apache.hadoop.mapred.lib
A Reducer that sums long values.
LongSumReducer() - Constructor for class org.apache.hadoop.mapred.lib.LongSumReducer
 
LongValueMax - Class in org.apache.hadoop.mapred.lib.aggregate
This class implements a value aggregator that maintain the maximum of a sequence of long values.
LongValueMax() - Constructor for class org.apache.hadoop.mapred.lib.aggregate.LongValueMax
the default constructor
LongValueMin - Class in org.apache.hadoop.mapred.lib.aggregate
This class implements a value aggregator that maintain the minimum of a sequence of long values.
LongValueMin() - Constructor for class org.apache.hadoop.mapred.lib.aggregate.LongValueMin
the default constructor
LongValueSum - Class in org.apache.hadoop.mapred.lib.aggregate
This class implements a value aggregator that sums up a sequence of long values.
LongValueSum() - Constructor for class org.apache.hadoop.mapred.lib.aggregate.LongValueSum
the default constructor
LongWritable - Class in org.apache.hadoop.io
A WritableComparable for longs.
LongWritable() - Constructor for class org.apache.hadoop.io.LongWritable
 
LongWritable(long) - Constructor for class org.apache.hadoop.io.LongWritable
 
LongWritable.Comparator - Class in org.apache.hadoop.io
A Comparator optimized for LongWritable.
LongWritable.Comparator() - Constructor for class org.apache.hadoop.io.LongWritable.Comparator
 
LongWritable.DecreasingComparator - Class in org.apache.hadoop.io
A decreasing Comparator optimized for LongWritable.
LongWritable.DecreasingComparator() - Constructor for class org.apache.hadoop.io.LongWritable.DecreasingComparator
 
LPAREN - Static variable in interface org.apache.hadoop.hbase.shell.generated.ParserConstants
 
LT_TKN - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
LzoCodec - Class in org.apache.hadoop.io.compress
A CompressionCodec for a streaming lzo compression/decompression pair.
LzoCodec() - Constructor for class org.apache.hadoop.io.compress.LzoCodec
 
LzoCompressor - Class in org.apache.hadoop.io.compress.lzo
A Compressor based on the lzo algorithm.
LzoCompressor(LzoCompressor.CompressionStrategy, int) - Constructor for class org.apache.hadoop.io.compress.lzo.LzoCompressor
Creates a new compressor using the specified LzoCompressor.CompressionStrategy.
LzoCompressor() - Constructor for class org.apache.hadoop.io.compress.lzo.LzoCompressor
Creates a new compressor with the default lzo1x_1 compression.
LzoCompressor.CompressionStrategy - Enum in org.apache.hadoop.io.compress.lzo
The compression algorithm for lzo library.
LzoDecompressor - Class in org.apache.hadoop.io.compress.lzo
A Decompressor based on the lzo algorithm.
LzoDecompressor(LzoDecompressor.CompressionStrategy, int) - Constructor for class org.apache.hadoop.io.compress.lzo.LzoDecompressor
Creates a new lzo decompressor.
LzoDecompressor() - Constructor for class org.apache.hadoop.io.compress.lzo.LzoDecompressor
Creates a new lzo decompressor.
LzoDecompressor.CompressionStrategy - Enum in org.apache.hadoop.io.compress.lzo
 

M

main(String[]) - Static method in class org.apache.hadoop.conf.Configuration
For debugging.
main(String[]) - Static method in class org.apache.hadoop.contrib.utils.join.DataJoinJob
 
main(String[]) - Static method in class org.apache.hadoop.dfs.DataNode
 
main(String[]) - Static method in class org.apache.hadoop.dfs.DFSAdmin
main() has some simple utility methods.
main(String[]) - Static method in class org.apache.hadoop.dfs.DFSck
 
main(String[]) - Static method in class org.apache.hadoop.dfs.NameNode
 
main(String[]) - Static method in class org.apache.hadoop.dfs.SecondaryNameNode
main() has some simple utility methods.
main(String[]) - Static method in class org.apache.hadoop.examples.AggregateWordCount
The main driver for word count map/reduce program.
main(String[]) - Static method in class org.apache.hadoop.examples.dancing.DistributedPentomino
Launch the solver on a 9x10 board with the one-sided pentominoes.
main(String[]) - Static method in class org.apache.hadoop.examples.dancing.OneSidedPentomino
Solve the 3x30 puzzle.
main(String[]) - Static method in class org.apache.hadoop.examples.dancing.Pentomino
Solve the 6x10 pentomino puzzle.
main(String[]) - Static method in class org.apache.hadoop.examples.dancing.Sudoku
Solves a set of sudoku puzzles.
main(String[]) - Static method in class org.apache.hadoop.examples.ExampleDriver
 
main(String[]) - Static method in class org.apache.hadoop.examples.Grep
 
main(String[]) - Static method in class org.apache.hadoop.examples.PiEstimator
Launches all the tasks in order.
main(String[]) - Static method in class org.apache.hadoop.examples.RandomWriter
 
main(String[]) - Static method in class org.apache.hadoop.examples.Sort
The main driver for sort program.
main(String[]) - Static method in class org.apache.hadoop.examples.WordCount
The main driver for word count map/reduce program.
main(String[]) - Static method in class org.apache.hadoop.fs.DF
 
main(String[]) - Static method in class org.apache.hadoop.fs.FsShell
main() has some simple utility methods
main(String[]) - Static method in class org.apache.hadoop.fs.s3.MigrationTool
 
main(String[]) - Static method in class org.apache.hadoop.fs.Trash
Run an emptier.
main(String[]) - Static method in class org.apache.hadoop.hbase.HClient
Main program
main(String[]) - Static method in class org.apache.hadoop.hbase.HLog
Pass a log file and it will dump out a text version on stdout.
main(String[]) - Static method in class org.apache.hadoop.hbase.HMaster
Main program
main(String[]) - Static method in class org.apache.hadoop.hbase.HRegionServer
 
main(String[]) - Static method in class org.apache.hadoop.hbase.Shell
Main method
main(String[]) - Static method in class org.apache.hadoop.io.compress.CompressionCodecFactory
A little test program.
main(String[]) - Static method in class org.apache.hadoop.io.MapFile
 
main(String[]) - Static method in class org.apache.hadoop.mapred.IsolationRunner
Run a single task
main(String[]) - Static method in class org.apache.hadoop.mapred.JobClient
 
main(String[]) - Static method in class org.apache.hadoop.mapred.jobcontrol.Job
 
main(String[]) - Static method in class org.apache.hadoop.mapred.JobTracker
Start the JobTracker process.
main(String[]) - Static method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorJob
create and run an Aggregate based map/reduce job.
main(String[]) - Static method in class org.apache.hadoop.mapred.pipes.Submitter
Submit a pipes job based on the command line arguments.
main(String[]) - Static method in class org.apache.hadoop.mapred.TaskTracker.Child
 
main(String[]) - Static method in class org.apache.hadoop.mapred.TaskTracker
Start the TaskTracker, point toward the indicated JobTracker
main(String[]) - Static method in class org.apache.hadoop.record.compiler.generated.Rcc
 
main(String[]) - Static method in class org.apache.hadoop.streaming.HadoopStreaming
 
main(String[]) - Static method in class org.apache.hadoop.streaming.JarBuilder
Test program
main(String[]) - Static method in class org.apache.hadoop.streaming.PathFinder
 
main(String[]) - Static method in class org.apache.hadoop.tools.Logalyzer
 
main(String[]) - Static method in class org.apache.hadoop.util.CopyFiles
 
main(String[]) - Static method in class org.apache.hadoop.util.PlatformName
 
main(String[]) - Static method in class org.apache.hadoop.util.PrintJarMainClass
 
main(String[]) - Static method in class org.apache.hadoop.util.RunJar
Run a Hadoop job jar.
main(String[]) - Static method in class org.apache.hadoop.util.VersionInfo
 
makeJavaCommand(Class, String[]) - Static method in class org.apache.hadoop.streaming.StreamUtil
 
makeQualified(Path) - Method in class org.apache.hadoop.fs.FileSystem
Make sure that a path specifies a FileSystem.
makeQualified(Path) - Method in class org.apache.hadoop.fs.FilterFileSystem
Make sure that a path specifies a FileSystem.
makeQualified(FileSystem) - Method in class org.apache.hadoop.fs.Path
Returns a qualified path object.
makeRelative(Path, Path) - Static method in class org.apache.hadoop.util.CopyFiles.CopyFilesMapper
Make a path relative with respect to a root path.
makeShellPath(String) - Static method in class org.apache.hadoop.fs.FileUtil
Convert an OS-native filename to a path that works for the shell.
makeShellPath(File) - Static method in class org.apache.hadoop.fs.FileUtil
Convert an OS-native filename to a path that works for the shell.
map(WritableComparable, Writable, OutputCollector, Reporter) - Method in class org.apache.hadoop.contrib.utils.join.DataJoinMapperBase
 
map(WritableComparable, Writable, OutputCollector, Reporter) - Method in class org.apache.hadoop.contrib.utils.join.DataJoinReducerBase
 
map(WritableComparable, Writable, OutputCollector, Reporter) - Method in class org.apache.hadoop.examples.dancing.DistributedPentomino.PentMap
Break the prefix string into moves (a sequence of integer row ids that will be selected for each column in order).
map(WritableComparable, Writable, OutputCollector, Reporter) - Method in class org.apache.hadoop.examples.PiEstimator.PiMapper
Map method.
map(WritableComparable, Writable, OutputCollector, Reporter) - Method in class org.apache.hadoop.examples.WordCount.MapClass
 
map(HStoreKey, KeyedDataArrayWritable, TableOutputCollector, Reporter) - Method in class org.apache.hadoop.hbase.mapred.GroupingTableMap
Extract the grouping columns from value to construct a new key.
map(HStoreKey, KeyedDataArrayWritable, TableOutputCollector, Reporter) - Method in class org.apache.hadoop.hbase.mapred.IdentityTableMap
Pass the key, value to reduce
map(WritableComparable, Writable, OutputCollector, Reporter) - Method in class org.apache.hadoop.hbase.mapred.TableMap
Input:
map(HStoreKey, KeyedDataArrayWritable, TableOutputCollector, Reporter) - Method in class org.apache.hadoop.hbase.mapred.TableMap
Call a user defined function on a single HBase record, represented by a key and its associated record value.
map(WritableComparable, Writable, OutputCollector, Reporter) - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorCombiner
Do nothing.
map(WritableComparable, Writable, OutputCollector, Reporter) - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorMapper
the map function.
map(WritableComparable, Writable, OutputCollector, Reporter) - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorReducer
Do nothing.
map(WritableComparable, Writable, OutputCollector, Reporter) - Method in class org.apache.hadoop.mapred.lib.FieldSelectionMapReduce
The identity function.
map(WritableComparable, Writable, OutputCollector, Reporter) - Method in class org.apache.hadoop.mapred.lib.IdentityMapper
The identity function.
map(WritableComparable, Writable, OutputCollector, Reporter) - Method in class org.apache.hadoop.mapred.lib.InverseMapper
The inverse function.
map(WritableComparable, Writable, OutputCollector, Reporter) - Method in class org.apache.hadoop.mapred.lib.RegexMapper
 
map(WritableComparable, Writable, OutputCollector, Reporter) - Method in class org.apache.hadoop.mapred.lib.TokenCountMapper
 
map(WritableComparable, Writable, OutputCollector, Reporter) - Method in interface org.apache.hadoop.mapred.Mapper
Maps a single input key/value pair into intermediate key/value pairs.
Map() - Method in class org.apache.hadoop.record.compiler.generated.Rcc
 
map(WritableComparable, Writable, OutputCollector, Reporter) - Method in class org.apache.hadoop.streaming.PipeMapper
 
map(WritableComparable, Writable, OutputCollector, Reporter) - Method in class org.apache.hadoop.tools.Logalyzer.LogRegexMapper
 
map(WritableComparable, Writable, OutputCollector, Reporter) - Method in class org.apache.hadoop.util.CopyFiles.FSCopyFilesMapper
Map method.
map(WritableComparable, Writable, OutputCollector, Reporter) - Method in class org.apache.hadoop.util.CopyFiles.HTTPCopyFilesMapper
 
MAP_TKN - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
mapCmd_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
MapFile - Class in org.apache.hadoop.io
A file-based map from keys to values.
MapFile() - Constructor for class org.apache.hadoop.io.MapFile
 
MapFile.Reader - Class in org.apache.hadoop.io
Provide access to an existing map.
MapFile.Reader(FileSystem, String, Configuration) - Constructor for class org.apache.hadoop.io.MapFile.Reader
Construct a map reader for the named map.
MapFile.Reader(FileSystem, String, WritableComparator, Configuration) - Constructor for class org.apache.hadoop.io.MapFile.Reader
Construct a map reader for the named map using the named comparator.
MapFile.Writer - Class in org.apache.hadoop.io
Writes a new map.
MapFile.Writer(Configuration, FileSystem, String, Class, Class) - Constructor for class org.apache.hadoop.io.MapFile.Writer
Create the named map for keys of the named class.
MapFile.Writer(Configuration, FileSystem, String, Class, Class, SequenceFile.CompressionType, Progressable) - Constructor for class org.apache.hadoop.io.MapFile.Writer
Create the named map for keys of the named class.
MapFile.Writer(Configuration, FileSystem, String, Class, Class, SequenceFile.CompressionType) - Constructor for class org.apache.hadoop.io.MapFile.Writer
Create the named map for keys of the named class.
MapFile.Writer(Configuration, FileSystem, String, WritableComparator, Class) - Constructor for class org.apache.hadoop.io.MapFile.Writer
Create the named map using the named key comparator.
MapFile.Writer(Configuration, FileSystem, String, WritableComparator, Class, SequenceFile.CompressionType) - Constructor for class org.apache.hadoop.io.MapFile.Writer
Create the named map using the named key comparator.
MapFile.Writer(Configuration, FileSystem, String, WritableComparator, Class, SequenceFile.CompressionType, Progressable) - Constructor for class org.apache.hadoop.io.MapFile.Writer
Create the named map using the named key comparator.
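A minimal sketch of writing and then reading a MapFile on the local filesystem; the directory name and data are hypothetical, and keys must be appended in sorted order:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.MapFile;
    import org.apache.hadoop.io.Text;

    public class MapFileSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.getLocal(conf);
        String dir = "example.map";                            // hypothetical map directory

        MapFile.Writer writer =
            new MapFile.Writer(conf, fs, dir, IntWritable.class, Text.class);
        for (int i = 0; i < 100; i++) {                        // keys appended in sorted order
          writer.append(new IntWritable(i), new Text("value-" + i));
        }
        writer.close();

        MapFile.Reader reader = new MapFile.Reader(fs, dir, conf);
        Text value = new Text();
        reader.get(new IntWritable(42), value);                // random access by key
        System.out.println(value);
        reader.close();
      }
    }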
MapFileOutputFormat - Class in org.apache.hadoop.mapred
An OutputFormat that writes MapFiles.
MapFileOutputFormat() - Constructor for class org.apache.hadoop.mapred.MapFileOutputFormat
 
mapOutputFieldSeparator - Variable in class org.apache.hadoop.streaming.PipeMapRed
 
mapOutputLost(String, String) - Method in class org.apache.hadoop.mapred.TaskTracker
A completed map task's output has been lost.
Mapper - Interface in org.apache.hadoop.mapred
Maps input key/value pairs to a set of intermediate key/value pairs.
mapProgress() - Method in class org.apache.hadoop.mapred.JobStatus
 
mapProgress() - Method in interface org.apache.hadoop.mapred.RunningJob
Returns a float between 0.0 and 1.0, indicating progress on the map portion of the job.
mapRedFinished() - Method in class org.apache.hadoop.streaming.PipeMapRed
 
MapReduceBase - Class in org.apache.hadoop.mapred
Base class for Mapper and Reducer implementations.
MapReduceBase() - Constructor for class org.apache.hadoop.mapred.MapReduceBase
 
MapRunnable - Interface in org.apache.hadoop.mapred
Expert: Permits greater control of map processing.
MapRunner - Class in org.apache.hadoop.mapred
Default MapRunnable implementation.
MapRunner() - Constructor for class org.apache.hadoop.mapred.MapRunner
 
mark(int) - Method in class org.apache.hadoop.fs.FSInputChecker
 
markSupported() - Method in class org.apache.hadoop.fs.FSInputChecker
 
MASTER_ADDRESS - Static variable in interface org.apache.hadoop.hbase.HConstants
Parameter name for master address
MASTER_INDEX_LOG_FILE - Static variable in class org.apache.hadoop.mapred.JobHistory
 
MasterNotRunningException - Exception in org.apache.hadoop.hbase
 
MasterNotRunningException() - Constructor for exception org.apache.hadoop.hbase.MasterNotRunningException
 
MasterNotRunningException(String) - Constructor for exception org.apache.hadoop.hbase.MasterNotRunningException
 
matchesRowCol(HStoreKey) - Method in class org.apache.hadoop.hbase.HStoreKey
 
matchesRowFamily(HStoreKey) - Method in class org.apache.hadoop.hbase.HStoreKey
 
matchesWithoutColumn(HStoreKey) - Method in class org.apache.hadoop.hbase.HStoreKey
 
MAX_PATH_DEPTH - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
MAX_PATH_LENGTH - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
MAXIMUM_FP - Static variable in interface org.onelab.filter.RemoveScheme
MaximumFP Selection.
maxNextCharInd - Variable in class org.apache.hadoop.hbase.shell.generated.SimpleCharStream
 
maxNextCharInd - Variable in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
mayExit_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
MD5_LEN - Static variable in class org.apache.hadoop.io.MD5Hash
 
MD5_LEN - Static variable in class org.apache.hadoop.mapred.SequenceFileInputFilter.MD5Filter
 
MD5Hash - Class in org.apache.hadoop.io
A Writable for MD5 hash values.
MD5Hash() - Constructor for class org.apache.hadoop.io.MD5Hash
Constructs an MD5Hash.
MD5Hash(String) - Constructor for class org.apache.hadoop.io.MD5Hash
Constructs an MD5Hash from a hex string.
MD5Hash(byte[]) - Constructor for class org.apache.hadoop.io.MD5Hash
Constructs an MD5Hash with a specified value.
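A small sketch, assuming the static MD5Hash.digest(byte[]) helper, of computing a digest and round-tripping it through its hex form:

    import org.apache.hadoop.io.MD5Hash;

    public class Md5Sketch {
      public static void main(String[] args) {
        MD5Hash a = MD5Hash.digest("hello".getBytes());   // assumed static helper
        MD5Hash b = new MD5Hash(a.toString());            // rebuild from the hex string
        System.out.println(a);                            // 32-character hex digest
        System.out.println(a.equals(b));                  // true
      }
    }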
MD5Hash.Comparator - Class in org.apache.hadoop.io
A WritableComparator optimized for MD5Hash keys.
MD5Hash.Comparator() - Constructor for class org.apache.hadoop.io.MD5Hash.Comparator
 
membershipTest(Key) - Method in class org.onelab.filter.BloomFilter
 
membershipTest(Key) - Method in class org.onelab.filter.CountingBloomFilter
 
membershipTest(Key) - Method in class org.onelab.filter.DynamicBloomFilter
 
membershipTest(Key) - Method in class org.onelab.filter.Filter
Determines whether a specified key belongs to this filter.
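A rough sketch of the filter API; the two-argument BloomFilter(vectorSize, nbHash) constructor and the sizes used here are assumptions about this version of the onelab package:

    import org.onelab.filter.BloomFilter;
    import org.onelab.filter.Key;

    public class BloomSketch {
      public static void main(String[] args) {
        BloomFilter filter = new BloomFilter(1000, 4);   // assumed constructor arguments
        Key present = new Key("row-17".getBytes());
        filter.add(present);                             // record the key in the filter
        System.out.println(filter.membershipTest(present));                        // true
        System.out.println(filter.membershipTest(new Key("row-99".getBytes())));   // false (with high probability)
      }
    }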
merge(List<SequenceFile.Sorter.SegmentDescriptor>, Path) - Method in class org.apache.hadoop.io.SequenceFile.Sorter
Merges the list of segments of type SegmentDescriptor
merge(Path[], boolean, Path) - Method in class org.apache.hadoop.io.SequenceFile.Sorter
Merges the contents of files passed in Path[] using a max factor value that is already set
merge(Path[], boolean, int, Path) - Method in class org.apache.hadoop.io.SequenceFile.Sorter
Merges the contents of files passed in Path[]
merge(Path[], Path, boolean) - Method in class org.apache.hadoop.io.SequenceFile.Sorter
Merges the contents of files passed in Path[]
merge(Path[], Path) - Method in class org.apache.hadoop.io.SequenceFile.Sorter
Merge the provided files.
merge(List, List, String) - Method in class org.apache.hadoop.streaming.JarBuilder
 
MergeSort - Class in org.apache.hadoop.util
An implementation of the core algorithm of MergeSort.
MergeSort(Comparator<IntWritable>) - Constructor for class org.apache.hadoop.util.MergeSort
 
mergeSort(int[], int[], int, int) - Method in class org.apache.hadoop.util.MergeSort
 
META_TABLE_NAME - Static variable in interface org.apache.hadoop.hbase.HConstants
The META table's name.
metaSave(String) - Method in class org.apache.hadoop.dfs.ChecksumDistributedFileSystem
 
metaSave(String[], int) - Method in class org.apache.hadoop.dfs.DFSAdmin
Dumps DFS data structures into the specified file.
metaSave(String) - Method in class org.apache.hadoop.dfs.DistributedFileSystem
 
metaSave(String) - Method in class org.apache.hadoop.dfs.NameNode
Dumps namenode state into the specified file.
MetricsContext - Interface in org.apache.hadoop.metrics
The main interface to the metrics package.
MetricsException - Exception in org.apache.hadoop.metrics
General-purpose, unchecked metrics exception.
MetricsException() - Constructor for exception org.apache.hadoop.metrics.MetricsException
Creates a new instance of MetricsException
MetricsException(String) - Constructor for exception org.apache.hadoop.metrics.MetricsException
Creates a new instance of MetricsException
MetricsRecord - Interface in org.apache.hadoop.metrics
A named and optionally tagged set of records to be sent to the metrics system.
MetricsRecordImpl - Class in org.apache.hadoop.metrics.spi
An implementation of MetricsRecord.
MetricsRecordImpl(String, AbstractMetricsContext) - Constructor for class org.apache.hadoop.metrics.spi.MetricsRecordImpl
Creates a new instance of FileRecord
MetricsUtil - Class in org.apache.hadoop.metrics
Utility class to simplify creation and reporting of hadoop metrics.
MetricValue - Class in org.apache.hadoop.metrics.spi
A Number that is either an absolute or an incremental amount.
MetricValue(Number, boolean) - Constructor for class org.apache.hadoop.metrics.spi.MetricValue
Creates a new instance of MetricValue
midKey() - Method in class org.apache.hadoop.io.MapFile.Reader
Get the key at approximately the middle of the file.
MigrationTool - Class in org.apache.hadoop.fs.s3
This class is a tool for migrating data from an older to a newer version of an S3 filesystem.
MigrationTool() - Constructor for class org.apache.hadoop.fs.s3.MigrationTool
 
MIN_BLOCKS_FOR_WRITE - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
MINIMUM_FN - Static variable in interface org.onelab.filter.RemoveScheme
MinimumFN Selection.
minRecWrittenToEnableSkip_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
mkdirs(Path) - Method in class org.apache.hadoop.dfs.DistributedFileSystem
 
mkdirs(Path) - Method in class org.apache.hadoop.dfs.HftpFileSystem
 
mkdirs(String) - Method in class org.apache.hadoop.dfs.NameNode
 
mkdirs(Path) - Method in class org.apache.hadoop.fs.ChecksumFileSystem
 
mkdirs(Path) - Method in class org.apache.hadoop.fs.FileSystem
Make the given file and all non-existent parents into directories.
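mkdirs(Path) creates the directory and any missing parents in one call and reports success as a boolean. A minimal sketch against the FileSystem API; the path is just an example:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class MkdirsSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);   // the default filesystem named in the configuration

        // Creates /tmp/example/nested along with any non-existent parents.
        boolean created = fs.mkdirs(new Path("/tmp/example/nested"));
        System.out.println("created: " + created);
      }
    }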
mkdirs(Path) - Method in class org.apache.hadoop.fs.FilterFileSystem
Make the given file and all non-existent parents into directories.
mkdirs(Path) - Method in class org.apache.hadoop.fs.RawLocalFileSystem
Creates the specified directory hierarchy.
mkdirs(Path) - Method in class org.apache.hadoop.fs.s3.S3FileSystem
 
mkdirsWithExistsCheck(File) - Static method in class org.apache.hadoop.util.DiskChecker
The semantics of the mkdirsWithExistsCheck method differ from those of the mkdirs method provided by Sun's java.io.File class in the following way: while creating the non-existent parent directories, this method checks for the existence of those directories if the mkdir fails at any point (since that directory might have just been created by some other process).
modifFmt - Static variable in class org.apache.hadoop.fs.FsShell
 
Module() - Method in class org.apache.hadoop.record.compiler.generated.Rcc
 
MODULE_TKN - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
ModuleName() - Method in class org.apache.hadoop.record.compiler.generated.Rcc
 
moveFromLocalFile(Path, Path) - Method in class org.apache.hadoop.fs.FileSystem
The src file is on the local disk.
moveFromLocalFile(Path, Path) - Method in class org.apache.hadoop.fs.RawLocalFileSystem
 
moveToLocalFile(Path, Path) - Method in class org.apache.hadoop.fs.FileSystem
The src file is under FS, and the dst is on the local disk.
moveToTrash(Path) - Method in class org.apache.hadoop.fs.Trash
Move a file or directory to the current trash directory.
msg(String) - Method in class org.apache.hadoop.streaming.StreamJob
 
MSG_CALL_SERVER_STARTUP - Static variable in class org.apache.hadoop.hbase.HMsg
Region server is unknown to master.
MSG_NEW_REGION - Static variable in class org.apache.hadoop.hbase.HMsg
region server is now serving a region produced by a region split
MSG_REGION_CLOSE - Static variable in class org.apache.hadoop.hbase.HMsg
Stop serving the specified region
MSG_REGION_CLOSE_WITHOUT_REPORT - Static variable in class org.apache.hadoop.hbase.HMsg
Stop serving the specified region and don't report back that it's closed
MSG_REGION_OPEN - Static variable in class org.apache.hadoop.hbase.HMsg
Start serving the specified region
MSG_REGIONSERVER_STOP - Static variable in class org.apache.hadoop.hbase.HMsg
Master tells region server to stop
MSG_REGIONSERVER_STOP_IN_ARRAY - Static variable in class org.apache.hadoop.hbase.HMsg
 
MSG_REPORT_CLOSE - Static variable in class org.apache.hadoop.hbase.HMsg
region server is no longer serving the specified region
MSG_REPORT_EXITING - Static variable in class org.apache.hadoop.hbase.HMsg
region server is shutting down
MSG_REPORT_OPEN - Static variable in class org.apache.hadoop.hbase.HMsg
region server is now serving the specified region
MultiFileInputFormat - Class in org.apache.hadoop.mapred
An abstract InputFormat that returns MultiFileSplits from MultiFileInputFormat.getSplits(JobConf, int).
MultiFileInputFormat() - Constructor for class org.apache.hadoop.mapred.MultiFileInputFormat
 
MultiFileSplit - Class in org.apache.hadoop.mapred
A sub-collection of input files.
MultiFileSplit(JobConf, Path[], long[]) - Constructor for class org.apache.hadoop.mapred.MultiFileSplit
 
MultithreadedMapRunner - Class in org.apache.hadoop.mapred.lib
Multithreaded implementation of org.apache.hadoop.mapred.MapRunnable.
MultithreadedMapRunner() - Constructor for class org.apache.hadoop.mapred.lib.MultithreadedMapRunner
 

N

name - Variable in class org.apache.hadoop.dfs.DatanodeID
 
name - Variable in class org.apache.hadoop.net.NodeBase
 
NameNode - Class in org.apache.hadoop.dfs
NameNode serves as both directory namespace manager and "inode table" for the Hadoop DFS.
NameNode(Configuration) - Constructor for class org.apache.hadoop.dfs.NameNode
Start NameNode.
NameNode(String, int, Configuration) - Constructor for class org.apache.hadoop.dfs.NameNode
Create a NameNode at the specified location and start it.
NamenodeFsck - Class in org.apache.hadoop.dfs
This class provides rudimentary checking of DFS volumes for errors and sub-optimal conditions.
NamenodeFsck(Configuration, NameNode, Map<String, String[]>, HttpServletResponse) - Constructor for class org.apache.hadoop.dfs.NamenodeFsck
Filesystem checker.
NamenodeFsck.FsckResult - Class in org.apache.hadoop.dfs
FsckResult of checking, plus overall DFS statistics.
NamenodeFsck.FsckResult() - Constructor for class org.apache.hadoop.dfs.NamenodeFsck.FsckResult
 
NativeCodeLoader - Class in org.apache.hadoop.util
A helper to load the native hadoop code.
NativeCodeLoader() - Constructor for class org.apache.hadoop.util.NativeCodeLoader
 
needChecksum() - Method in class org.apache.hadoop.fs.FSInputChecker
Return true if there is a need for checksum verification
needsDictionary() - Method in interface org.apache.hadoop.io.compress.Decompressor
Returns true if a preset dictionary is needed for decompression.
needsDictionary() - Method in class org.apache.hadoop.io.compress.lzo.LzoDecompressor
 
needsDictionary() - Method in class org.apache.hadoop.io.compress.zlib.ZlibDecompressor
 
needsInput() - Method in interface org.apache.hadoop.io.compress.Compressor
Returns true if the input data buffer is empty and setInput() should be called to provide more input.
needsInput() - Method in interface org.apache.hadoop.io.compress.Decompressor
Returns true if the input data buffer is empty and setInput() should be called to provide more input.
needsInput() - Method in class org.apache.hadoop.io.compress.lzo.LzoCompressor
 
needsInput() - Method in class org.apache.hadoop.io.compress.lzo.LzoDecompressor
 
needsInput() - Method in class org.apache.hadoop.io.compress.zlib.ZlibCompressor
 
needsInput() - Method in class org.apache.hadoop.io.compress.zlib.ZlibDecompressor
 
NetworkTopology - Class in org.apache.hadoop.net
The class represents a cluster of computers with a tree-structured hierarchical network topology.
NetworkTopology() - Constructor for class org.apache.hadoop.net.NetworkTopology
 
newDataChecksum(int, int) - Static method in class org.apache.hadoop.dfs.DataChecksum
 
newDataChecksum(byte[], int) - Static method in class org.apache.hadoop.dfs.DataChecksum
Creates a DataChecksum from HEADER_LEN bytes from arr[offset].
newDataChecksum(DataInputStream) - Static method in class org.apache.hadoop.dfs.DataChecksum
Constructs a DataChecksum by reading HEADER_LEN bytes from the input stream in.
newInstance(Class, Configuration) - Static method in class org.apache.hadoop.io.WritableFactories
Create a new instance of a class with a defined factory.
newInstance(Class) - Static method in class org.apache.hadoop.io.WritableFactories
Create a new instance of a class with a defined factory.
newInstance() - Method in interface org.apache.hadoop.io.WritableFactory
Return a new instance.
newInstance(Class<?>, Configuration) - Static method in class org.apache.hadoop.util.ReflectionUtils
Create an object for the given class and initialize it from conf
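newInstance(Class, Configuration) is the usual way configured classes (codecs, input formats, and the like) are instantiated reflectively and then handed the job Configuration when they are Configurable. A small sketch; IntWritable is used only as a convenient class with a no-argument constructor:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.util.ReflectionUtils;

    public class NewInstanceSketch {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Constructs the object reflectively; a Configurable class would also receive setConf(conf).
        IntWritable value = (IntWritable) ReflectionUtils.newInstance(IntWritable.class, conf);
        value.set(42);
        System.out.println(value);
      }
    }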
newKey() - Method in class org.apache.hadoop.io.WritableComparator
Construct a new WritableComparable instance.
newRecord(String) - Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
Subclasses should override this if they subclass MetricsRecordImpl.
newToken(int) - Static method in class org.apache.hadoop.hbase.shell.generated.Token
Returns a new Token object, by default.
newToken(int) - Static method in class org.apache.hadoop.record.compiler.generated.Token
Returns a new Token object, by default.
next() - Method in class org.apache.hadoop.contrib.utils.join.ArrayListBackedIterator
 
next(HStoreKey, TreeMap<Text, byte[]>) - Method in class org.apache.hadoop.hbase.HAbstractScanner
Get the next set of values for this scanner.
next(HStoreKey, TreeMap<Text, byte[]>) - Method in interface org.apache.hadoop.hbase.HInternalScannerInterface
Grab the next row's worth of values.
next(long) - Method in interface org.apache.hadoop.hbase.HRegionInterface
Get the next set of values
next(long) - Method in class org.apache.hadoop.hbase.HRegionServer
Get the next set of values
next(HStoreKey, TreeMap<Text, byte[]>) - Method in interface org.apache.hadoop.hbase.HScannerInterface
 
next - Variable in class org.apache.hadoop.hbase.shell.generated.Token
A reference to the next regular (non-special) token from the input stream.
next(Writable) - Method in class org.apache.hadoop.io.ArrayFile.Reader
Read and return the next value in the file.
next(WritableComparable, Writable) - Method in class org.apache.hadoop.io.MapFile.Reader
Read the next key/value pair in the map into key and val.
next(Writable) - Method in class org.apache.hadoop.io.SequenceFile.Reader
Read the next key in the file into key, skipping its value.
next(Writable, Writable) - Method in class org.apache.hadoop.io.SequenceFile.Reader
Read the next key/value pair in the file into key and val.
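The next(key, value) form above is normally driven in a loop until it returns false at end of file. A minimal read-loop sketch, assuming an existing file of <Text, IntWritable> records at a hypothetical path:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.SequenceFile;
    import org.apache.hadoop.io.Text;

    public class SequenceFileReadSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path file = new Path("/tmp/example.seq");   // hypothetical input file

        SequenceFile.Reader reader = new SequenceFile.Reader(fs, file, conf);
        try {
          Text key = new Text();
          IntWritable value = new IntWritable();
          // next() fills key and value in place and returns false at end of file.
          while (reader.next(key, value)) {
            System.out.println(key + "\t" + value);
          }
        } finally {
          reader.close();
        }
      }
    }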
next(DataOutputBuffer) - Method in class org.apache.hadoop.io.SequenceFile.Reader
Deprecated. Call SequenceFile.Reader.nextRaw(DataOutputBuffer,SequenceFile.ValueBytes).
next() - Method in interface org.apache.hadoop.io.SequenceFile.Sorter.RawKeyValueIterator
Sets up the current key and value (for getKey and getValue)
next(WritableComparable) - Method in class org.apache.hadoop.io.SetFile.Reader
Read the next key in a set into key.
next(Writable, Writable) - Method in class org.apache.hadoop.mapred.KeyValueLineRecordReader
Read key/value pair in a line.
next(Writable, Writable) - Method in class org.apache.hadoop.mapred.LineRecordReader
Read a line.
next(Writable, Writable) - Method in interface org.apache.hadoop.mapred.RecordReader
Reads the next key/value pair.
next(Writable, Writable) - Method in class org.apache.hadoop.mapred.SequenceFileAsTextRecordReader
Read key/value pair in a line.
next(Writable, Writable) - Method in class org.apache.hadoop.mapred.SequenceFileRecordReader
 
next(Writable) - Method in class org.apache.hadoop.mapred.SequenceFileRecordReader
 
next - Variable in class org.apache.hadoop.record.compiler.generated.Token
A reference to the next regular (non-special) token from the input stream.
next(Writable, Writable) - Method in class org.apache.hadoop.streaming.StreamBaseRecordReader
Read a record.
next(Writable, Writable) - Method in class org.apache.hadoop.streaming.StreamXmlRecordReader
 
nextRaw(DataOutputBuffer, SequenceFile.ValueBytes) - Method in class org.apache.hadoop.io.SequenceFile.Reader
Read 'raw' records.
nextRawKey(DataOutputBuffer) - Method in class org.apache.hadoop.io.SequenceFile.Reader
Read 'raw' keys.
nextRawKey() - Method in class org.apache.hadoop.io.SequenceFile.Sorter.SegmentDescriptor
Fills up the rawKey object with the key returned by the Reader
nextRawValue(SequenceFile.ValueBytes) - Method in class org.apache.hadoop.io.SequenceFile.Reader
Read 'raw' values.
nextRawValue(SequenceFile.ValueBytes) - Method in class org.apache.hadoop.io.SequenceFile.Sorter.SegmentDescriptor
Fills up the passed rawValue with the value corresponding to the key read earlier
Node - Interface in org.apache.hadoop.net
The interface defines a node in a network topology.
NodeBase - Class in org.apache.hadoop.net
A base class that implements interface Node
NodeBase() - Constructor for class org.apache.hadoop.net.NodeBase
Default constructor
NodeBase(String) - Constructor for class org.apache.hadoop.net.NodeBase
Construct a node from its path
NodeBase(String, String) - Constructor for class org.apache.hadoop.net.NodeBase
Construct a node from its name and its location
NodeBase(String, String, Node, int) - Constructor for class org.apache.hadoop.net.NodeBase
Construct a node from its name and its location
normalize(String) - Static method in class org.apache.hadoop.net.NodeBase
Normalize a path
NoServerForRegionException - Exception in org.apache.hadoop.hbase
 
NoServerForRegionException() - Constructor for exception org.apache.hadoop.hbase.NoServerForRegionException
 
NoServerForRegionException(String) - Constructor for exception org.apache.hadoop.hbase.NoServerForRegionException
 
not() - Method in class org.onelab.filter.BloomFilter
 
not() - Method in class org.onelab.filter.CountingBloomFilter
 
not() - Method in class org.onelab.filter.DynamicBloomFilter
 
not() - Method in class org.onelab.filter.Filter
Performs a logical NOT on this filter.
NOTEQUAL - Static variable in interface org.apache.hadoop.hbase.shell.generated.ParserConstants
 
NotReplicatedYetException - Exception in org.apache.hadoop.dfs
The file has not finished being written to enough datanodes yet.
NotReplicatedYetException(String) - Constructor for exception org.apache.hadoop.dfs.NotReplicatedYetException
 
NotServingRegionException - Exception in org.apache.hadoop.hbase
 
NotServingRegionException() - Constructor for exception org.apache.hadoop.hbase.NotServingRegionException
 
NotServingRegionException(String) - Constructor for exception org.apache.hadoop.hbase.NotServingRegionException
 
NULL - Static variable in interface org.apache.hadoop.mapred.Reporter
A constant of Reporter type that does nothing.
NullContext - Class in org.apache.hadoop.metrics.spi
Null metrics context: a metrics context which does nothing.
NullContext() - Constructor for class org.apache.hadoop.metrics.spi.NullContext
Creates a new instance of NullContext
NullOutputFormat - Class in org.apache.hadoop.mapred.lib
Consume all outputs and put them in /dev/null.
NullOutputFormat() - Constructor for class org.apache.hadoop.mapred.lib.NullOutputFormat
 
NullWritable - Class in org.apache.hadoop.io
Singleton Writable with no data.
NUM - Static variable in interface org.apache.hadoop.hbase.shell.generated.ParserConstants
 
NUM_OF_VALUES_FIELD - Static variable in class org.apache.hadoop.contrib.utils.join.DataJoinReducerBase
 
numOfMapOutputKeyFields - Variable in class org.apache.hadoop.streaming.PipeMapRed
 
numOfMapOutputPartitionFields - Variable in class org.apache.hadoop.streaming.PipeMapRed
 
numOfReduceOutputKeyFields - Variable in class org.apache.hadoop.streaming.PipeMapRed
 
numOfValues - Variable in class org.apache.hadoop.contrib.utils.join.DataJoinReducerBase
 
numReduceTasksSpec_ - Variable in class org.apache.hadoop.streaming.StreamJob
 

O

ObjectWritable - Class in org.apache.hadoop.io
A polymorphic Writable that writes an instance with its class name.
ObjectWritable() - Constructor for class org.apache.hadoop.io.ObjectWritable
 
ObjectWritable(Object) - Constructor for class org.apache.hadoop.io.ObjectWritable
 
ObjectWritable(Class, Object) - Constructor for class org.apache.hadoop.io.ObjectWritable
 
obtainLock(String, String, boolean) - Method in class org.apache.hadoop.dfs.NameNode
Deprecated.  
obtainReadLock() - Method in class org.apache.hadoop.hbase.HLocking
Caller needs the nonexclusive read-lock
obtainScanner(Text[], Text) - Method in class org.apache.hadoop.hbase.HClient
Get a scanner on the current table starting at the specified row.
obtainScanner(Text[], Text, long) - Method in class org.apache.hadoop.hbase.HClient
Get a scanner on the current table starting at the specified row.
obtainScanner(Text[], Text, RowFilterInterface) - Method in class org.apache.hadoop.hbase.HClient
Get a scanner on the current table starting at the specified row.
obtainScanner(Text[], Text, long, RowFilterInterface) - Method in class org.apache.hadoop.hbase.HClient
Get a scanner on the current table starting at the specified row.
obtainWriteLock() - Method in class org.apache.hadoop.hbase.HLocking
Caller needs the exclusive write-lock
offerService() - Method in class org.apache.hadoop.dfs.DataNode
Main loop for the DataNode.
offerService() - Method in class org.apache.hadoop.mapred.JobTracker
Run forever
okCols - Variable in class org.apache.hadoop.hbase.HAbstractScanner
 
ONE - Static variable in interface org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorDescriptor
 
oneRotation - Static variable in class org.apache.hadoop.examples.dancing.Pentomino
Is the piece fixed under rotation?
OneSidedPentomino - Class in org.apache.hadoop.examples.dancing
Of the "normal" 12 pentominos, 6 of them have distinct shapes when flipped.
OneSidedPentomino() - Constructor for class org.apache.hadoop.examples.dancing.OneSidedPentomino
 
OneSidedPentomino(int, int) - Constructor for class org.apache.hadoop.examples.dancing.OneSidedPentomino
 
onlineRegions - Variable in class org.apache.hadoop.hbase.HRegionServer
 
OP_ACK - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_BLOCKRECEIVED - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_BLOCKREPORT - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_CLIENT_ABANDONBLOCK - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_CLIENT_ABANDONBLOCK_ACK - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_CLIENT_ADDBLOCK - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_CLIENT_ADDBLOCK_ACK - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_CLIENT_COMPLETEFILE - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_CLIENT_COMPLETEFILE_ACK - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_CLIENT_DATANODE_HINTS - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_CLIENT_DATANODE_HINTS_ACK - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_CLIENT_DATANODEREPORT - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_CLIENT_DATANODEREPORT_ACK - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_CLIENT_DELETE - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_CLIENT_DELETE_ACK - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_CLIENT_EXISTS - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_CLIENT_EXISTS_ACK - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_CLIENT_ISDIR - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_CLIENT_ISDIR_ACK - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_CLIENT_LISTING - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_CLIENT_LISTING_ACK - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_CLIENT_MKDIRS - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_CLIENT_MKDIRS_ACK - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_CLIENT_OBTAINLOCK - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_CLIENT_OBTAINLOCK_ACK - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_CLIENT_OPEN - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_CLIENT_OPEN_ACK - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_CLIENT_RAWSTATS - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_CLIENT_RAWSTATS_ACK - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_CLIENT_RELEASELOCK - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_CLIENT_RELEASELOCK_ACK - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_CLIENT_RENAMETO - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_CLIENT_RENAMETO_ACK - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_CLIENT_RENEW_LEASE - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_CLIENT_RENEW_LEASE_ACK - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_CLIENT_STARTFILE - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_CLIENT_STARTFILE_ACK - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_CLIENT_TRYAGAIN - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_ERROR - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_FAILURE - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_HEARTBEAT - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_INVALIDATE_BLOCKS - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_READ_BLOCK - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_READ_METADATA - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_STATUS_ERROR - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_STATUS_ERROR_CHECKSUM - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_STATUS_ERROR_EXISTS - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_STATUS_ERROR_INVALID - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_STATUS_SUCCESS - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_TRANSFERBLOCKS - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_TRANSFERDATA - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OP_WRITE_BLOCK - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
open(Path, int) - Method in class org.apache.hadoop.dfs.DistributedFileSystem
 
open(Path, int) - Method in class org.apache.hadoop.dfs.HftpFileSystem
 
open(String, long, long) - Method in class org.apache.hadoop.dfs.NameNode
 
open(Path, int) - Method in class org.apache.hadoop.fs.ChecksumFileSystem
Opens an FSDataInputStream at the indicated Path.
open(Path, int) - Method in class org.apache.hadoop.fs.FileSystem
Opens an FSDataInputStream at the indicated Path.
open(Path) - Method in class org.apache.hadoop.fs.FileSystem
Opens an FSDataInputStream at the indicated Path.
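open(Path) and open(Path, int) return an FSDataInputStream, which also supports the PositionedReadable read/readFully calls indexed below, so bytes can be pulled from an arbitrary offset without seeking. A short sketch against a hypothetical file and offset:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class OpenAndPreadSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        FSDataInputStream in = fs.open(new Path("/tmp/example.dat"), 4096);   // hypothetical file
        try {
          byte[] buf = new byte[16];
          // Positioned read: fills buf from byte offset 128 without moving the stream position.
          in.readFully(128L, buf);
          System.out.println(new String(buf));
        } finally {
          in.close();
        }
      }
    }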
open(Path, int) - Method in class org.apache.hadoop.fs.FilterFileSystem
Opens an FSDataInputStream at the indicated Path.
open(Path, int) - Method in class org.apache.hadoop.fs.RawLocalFileSystem
 
open(Path, int) - Method in class org.apache.hadoop.fs.s3.S3FileSystem
 
openScanner(Text, Text[], Text, long, RowFilterInterface) - Method in interface org.apache.hadoop.hbase.HRegionInterface
Opens a remote scanner with a RowFilter.
openScanner(Text, Text[], Text, long, RowFilterInterface) - Method in class org.apache.hadoop.hbase.HRegionServer
Opens a remote scanner with a RowFilter.
openTable(Text) - Method in class org.apache.hadoop.hbase.HClient
Loads information so that a table can be manipulated.
OPERATION_FAILED - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
OPTIONS - Static variable in interface org.apache.hadoop.hbase.shell.generated.ParserConstants
 
OR - Static variable in interface org.apache.hadoop.hbase.shell.generated.ParserConstants
 
or(Filter) - Method in class org.onelab.filter.BloomFilter
 
or(Filter) - Method in class org.onelab.filter.CountingBloomFilter
 
or(Filter) - Method in class org.onelab.filter.DynamicBloomFilter
 
or(Filter) - Method in class org.onelab.filter.Filter
Performs a logical OR between this filter and a specified filter.
org.apache.hadoop - package org.apache.hadoop
 
org.apache.hadoop.conf - package org.apache.hadoop.conf
Configuration of system parameters.
org.apache.hadoop.contrib.utils.join - package org.apache.hadoop.contrib.utils.join
 
org.apache.hadoop.dfs - package org.apache.hadoop.dfs
A distributed implementation of FileSystem.
org.apache.hadoop.examples - package org.apache.hadoop.examples
Hadoop example code.
org.apache.hadoop.examples.dancing - package org.apache.hadoop.examples.dancing
This package is a distributed implementation of Knuth's dancing links algorithm that can run under Hadoop.
org.apache.hadoop.filecache - package org.apache.hadoop.filecache
 
org.apache.hadoop.fs - package org.apache.hadoop.fs
An abstract file system API.
org.apache.hadoop.fs.s3 - package org.apache.hadoop.fs.s3
A distributed implementation of FileSystem that uses Amazon S3.
org.apache.hadoop.hbase - package org.apache.hadoop.hbase
Provides HBase, the Hadoop simple database.
org.apache.hadoop.hbase.filter - package org.apache.hadoop.hbase.filter
 
org.apache.hadoop.hbase.io - package org.apache.hadoop.hbase.io
 
org.apache.hadoop.hbase.mapred - package org.apache.hadoop.hbase.mapred
 
org.apache.hadoop.hbase.shell - package org.apache.hadoop.hbase.shell
 
org.apache.hadoop.hbase.shell.generated - package org.apache.hadoop.hbase.shell.generated
 
org.apache.hadoop.hbase.util - package org.apache.hadoop.hbase.util
 
org.apache.hadoop.io - package org.apache.hadoop.io
Generic i/o code for use when reading and writing data to the network, to databases, and to files.
org.apache.hadoop.io.compress - package org.apache.hadoop.io.compress
 
org.apache.hadoop.io.compress.lzo - package org.apache.hadoop.io.compress.lzo
 
org.apache.hadoop.io.compress.zlib - package org.apache.hadoop.io.compress.zlib
 
org.apache.hadoop.io.retry - package org.apache.hadoop.io.retry
A mechanism for selectively retrying methods that throw exceptions under certain circumstances.
org.apache.hadoop.ipc - package org.apache.hadoop.ipc
Tools to help define network clients and servers.
org.apache.hadoop.mapred - package org.apache.hadoop.mapred
A system for scalable, fault-tolerant, distributed computation over large data collections.
org.apache.hadoop.mapred.jobcontrol - package org.apache.hadoop.mapred.jobcontrol
Utilities for managing dependent jobs.
org.apache.hadoop.mapred.lib - package org.apache.hadoop.mapred.lib
Library of generally useful mappers, reducers, and partitioners.
org.apache.hadoop.mapred.lib.aggregate - package org.apache.hadoop.mapred.lib.aggregate
Classes for performing various counting and aggregations.
org.apache.hadoop.mapred.pipes - package org.apache.hadoop.mapred.pipes
Hadoop Pipes allows C++ code to use Hadoop DFS and map/reduce.
org.apache.hadoop.metrics - package org.apache.hadoop.metrics
This package defines an API for reporting performance metric information.
org.apache.hadoop.metrics.file - package org.apache.hadoop.metrics.file
Implementation of the metrics package that writes the metrics to a file.
org.apache.hadoop.metrics.ganglia - package org.apache.hadoop.metrics.ganglia
Implementation of the metrics package that sends metric data to Ganglia.
org.apache.hadoop.metrics.jvm - package org.apache.hadoop.metrics.jvm
 
org.apache.hadoop.metrics.spi - package org.apache.hadoop.metrics.spi
The Service Provider Interface for the Metrics API.
org.apache.hadoop.net - package org.apache.hadoop.net
Network-related classes.
org.apache.hadoop.record - package org.apache.hadoop.record
Hadoop record I/O contains classes and a record description language translator for simplifying serialization and deserialization of records in a language-neutral manner.
org.apache.hadoop.record.compiler - package org.apache.hadoop.record.compiler
This package contains classes needed for code generation from the hadoop record compiler.
org.apache.hadoop.record.compiler.ant - package org.apache.hadoop.record.compiler.ant
 
org.apache.hadoop.record.compiler.generated - package org.apache.hadoop.record.compiler.generated
This package contains code generated by JavaCC from the Hadoop record syntax file rcc.jj.
org.apache.hadoop.streaming - package org.apache.hadoop.streaming
 
org.apache.hadoop.tools - package org.apache.hadoop.tools
 
org.apache.hadoop.util - package org.apache.hadoop.util
Common utilities.
org.onelab.filter - package org.onelab.filter
 
out - Variable in class org.apache.hadoop.io.compress.CompressionOutputStream
The output stream to be compressed.
outbuf - Variable in class org.apache.hadoop.hbase.HAbstractScanner
 
outerrThreadsThrowable - Variable in class org.apache.hadoop.streaming.PipeMapRed
 
output_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
OUTPUT_TABLE - Static variable in class org.apache.hadoop.hbase.mapred.TableOutputFormat
JobConf parameter that specifies the output table
OutputCollector - Interface in org.apache.hadoop.mapred
Passed to Mapper and Reducer implementations to collect output data.
OutputFormat - Interface in org.apache.hadoop.mapred
An output data format.
OutputFormatBase - Class in org.apache.hadoop.mapred
A base class for OutputFormat.
OutputFormatBase() - Constructor for class org.apache.hadoop.mapred.OutputFormatBase
 
outputFormatSpec_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
OutputRecord - Class in org.apache.hadoop.metrics.spi
Represents a record of metric data to be sent to a metrics system.
outputSingleNode_ - Variable in class org.apache.hadoop.streaming.StreamJob
 

P

packageFiles_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
packageJobJar() - Method in class org.apache.hadoop.streaming.StreamJob
 
PageRowFilter - Class in org.apache.hadoop.hbase.filter
Implementation of RowFilterInterface that limits results to a specific page size.
PageRowFilter() - Constructor for class org.apache.hadoop.hbase.filter.PageRowFilter
Default constructor, filters nothing.
PageRowFilter(long) - Constructor for class org.apache.hadoop.hbase.filter.PageRowFilter
Constructor that takes a maximum page size.
parent - Variable in class org.apache.hadoop.net.NodeBase
 
parse(String, int) - Static method in class org.apache.hadoop.metrics.spi.Util
Parses a space and/or comma separated sequence of server specifications of the form hostname or hostname:port.
parseArgs(String[], int, Configuration) - Static method in class org.apache.hadoop.fs.FileSystem
Parse the cmd-line args, starting at i.
ParseException - Exception in org.apache.hadoop.hbase.shell.generated
This exception is thrown when parse errors are encountered.
ParseException(Token, int[][], String[]) - Constructor for exception org.apache.hadoop.hbase.shell.generated.ParseException
This constructor is used by the method "generateParseException" in the generated parser.
ParseException() - Constructor for exception org.apache.hadoop.hbase.shell.generated.ParseException
The following constructors are for use by you for whatever purpose you can think of.
ParseException(String) - Constructor for exception org.apache.hadoop.hbase.shell.generated.ParseException
 
ParseException - Exception in org.apache.hadoop.record.compiler.generated
This exception is thrown when parse errors are encountered.
ParseException(Token, int[][], String[]) - Constructor for exception org.apache.hadoop.record.compiler.generated.ParseException
This constructor is used by the method "generateParseException" in the generated parser.
ParseException() - Constructor for exception org.apache.hadoop.record.compiler.generated.ParseException
The following constructors are for use by you for whatever purpose you can think of.
ParseException(String) - Constructor for exception org.apache.hadoop.record.compiler.generated.ParseException
 
parseExecResult(BufferedReader) - Method in class org.apache.hadoop.fs.Command
Parse the execution result
parseHistory(File, JobHistory.Listener) - Static method in class org.apache.hadoop.mapred.JobHistory
Parses history file and invokes Listener.handle() for each line of history.
parseJobTasks(File, JobHistory.JobInfo) - Static method in class org.apache.hadoop.mapred.DefaultJobHistoryParser
Populates a JobInfo object from the job's history log file.
parseMasterIndex(File) - Static method in class org.apache.hadoop.mapred.DefaultJobHistoryParser
Parses a master index file and returns a DefaultJobHistoryParser.MasterIndex.
Parser - Class in org.apache.hadoop.hbase.shell.generated
Parses the command line.
Parser(String) - Constructor for class org.apache.hadoop.hbase.shell.generated.Parser
 
Parser(InputStream) - Constructor for class org.apache.hadoop.hbase.shell.generated.Parser
 
Parser(InputStream, String) - Constructor for class org.apache.hadoop.hbase.shell.generated.Parser
 
Parser(Reader) - Constructor for class org.apache.hadoop.hbase.shell.generated.Parser
 
Parser(ParserTokenManager) - Constructor for class org.apache.hadoop.hbase.shell.generated.Parser
 
ParserConstants - Interface in org.apache.hadoop.hbase.shell.generated
 
ParserTokenManager - Class in org.apache.hadoop.hbase.shell.generated
 
ParserTokenManager(SimpleCharStream) - Constructor for class org.apache.hadoop.hbase.shell.generated.ParserTokenManager
 
ParserTokenManager(SimpleCharStream, int) - Constructor for class org.apache.hadoop.hbase.shell.generated.ParserTokenManager
 
Partitioner - Interface in org.apache.hadoop.mapred
Partitions the key space.
partitionerSpec_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
Path - Class in org.apache.hadoop.fs
Names a file or directory in a FileSystem.
Path(String, String) - Constructor for class org.apache.hadoop.fs.Path
Resolve a child path against a parent path.
Path(Path, String) - Constructor for class org.apache.hadoop.fs.Path
Resolve a child path against a parent path.
Path(String, Path) - Constructor for class org.apache.hadoop.fs.Path
Resolve a child path against a parent path.
Path(Path, Path) - Constructor for class org.apache.hadoop.fs.Path
Resolve a child path against a parent path.
Path(String) - Constructor for class org.apache.hadoop.fs.Path
Construct a path from a String.
Path(String, String, String) - Constructor for class org.apache.hadoop.fs.Path
Construct a Path from components.
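The parent/child constructors above resolve a child against a parent rather than concatenating strings, which keeps separators consistent. For example:

    import org.apache.hadoop.fs.Path;

    public class PathSketch {
      public static void main(String[] args) {
        Path parent = new Path("/user/example");            // from a String
        Path child = new Path(parent, "logs/part-00000");   // resolved against the parent
        System.out.println(child);                          // /user/example/logs/part-00000
      }
    }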
PATH_SEPARATOR - Static variable in class org.apache.hadoop.net.NodeBase
 
PATH_SEPARATOR_STR - Static variable in class org.apache.hadoop.net.NodeBase
 
PathFilter - Interface in org.apache.hadoop.fs
 
PathFinder - Class in org.apache.hadoop.streaming
Maps a relative pathname to an absolute pathname using the PATH environment variable.
PathFinder() - Constructor for class org.apache.hadoop.streaming.PathFinder
Construct a PathFinder object using the path from java.class.path
PathFinder(String) - Constructor for class org.apache.hadoop.streaming.PathFinder
Construct a PathFinder object using the path from the specified system environment variable.
pathToFile(Path) - Method in class org.apache.hadoop.fs.LocalFileSystem
Convert a path to a File.
pathToFile(Path) - Method in class org.apache.hadoop.fs.RawLocalFileSystem
Convert a path to a File.
Pentomino - Class in org.apache.hadoop.examples.dancing
 
Pentomino(int, int) - Constructor for class org.apache.hadoop.examples.dancing.Pentomino
Create the model for a given pentomino set of pieces and board size.
Pentomino() - Constructor for class org.apache.hadoop.examples.dancing.Pentomino
Create the object without initialization.
Pentomino.ColumnName - Interface in org.apache.hadoop.examples.dancing
This interface is just a marker for what types I expect to get back as column names.
Pentomino.Piece - Class in org.apache.hadoop.examples.dancing
Maintain information about a puzzle piece.
Pentomino.Piece(String, String, boolean, int[]) - Constructor for class org.apache.hadoop.examples.dancing.Pentomino.Piece
 
Pentomino.SolutionCategory - Enum in org.apache.hadoop.examples.dancing
 
PERIOD_PROPERTY - Static variable in class org.apache.hadoop.metrics.file.FileContext
 
phase() - Method in class org.apache.hadoop.util.Progress
Returns the current sub-node executing.
PhasedFileSystem - Class in org.apache.hadoop.mapred
Deprecated. PhasedFileSystem is no longer used during speculative execution of tasks.
PhasedFileSystem(FileSystem, String, String, String) - Constructor for class org.apache.hadoop.mapred.PhasedFileSystem
Deprecated. This constructor is used to wrap a FileSystem object into a PhasedFileSystem.
PhasedFileSystem(FileSystem, JobConf) - Constructor for class org.apache.hadoop.mapred.PhasedFileSystem
Deprecated. This constructor is used to wrap a FileSystem object into a PhasedFileSystem.
pickSrcDatanode(DFSFileInfo, NameNode) - Static method in class org.apache.hadoop.dfs.FileDataServlet
Select a datanode to service this request.
pieces - Variable in class org.apache.hadoop.examples.dancing.Pentomino
 
PiEstimator - Class in org.apache.hadoop.examples
A map/reduce program to estimate the value of Pi using the Monte Carlo method.
PiEstimator() - Constructor for class org.apache.hadoop.examples.PiEstimator
 
PiEstimator.PiMapper - Class in org.apache.hadoop.examples
Mapper class for Pi estimation.
PiEstimator.PiMapper() - Constructor for class org.apache.hadoop.examples.PiEstimator.PiMapper
 
PiEstimator.PiReducer - Class in org.apache.hadoop.examples
 
PiEstimator.PiReducer() - Constructor for class org.apache.hadoop.examples.PiEstimator.PiReducer
 
ping(String) - Method in class org.apache.hadoop.mapred.TaskTracker
Child checking to see if we're alive.
PipeMapper - Class in org.apache.hadoop.streaming
A generic Mapper bridge.
PipeMapper() - Constructor for class org.apache.hadoop.streaming.PipeMapper
 
PipeMapRed - Class in org.apache.hadoop.streaming
Shared functionality for PipeMapper, PipeReducer.
PipeMapRed() - Constructor for class org.apache.hadoop.streaming.PipeMapRed
 
PipeReducer - Class in org.apache.hadoop.streaming
A generic Reducer bridge.
PipeReducer() - Constructor for class org.apache.hadoop.streaming.PipeReducer
 
PlatformName - Class in org.apache.hadoop.util
A helper class for getting build-info of the java-vm.
PlatformName() - Constructor for class org.apache.hadoop.util.PlatformName
 
pop() - Method in class org.apache.hadoop.util.PriorityQueue
Removes and returns the least element of the PriorityQueue in log(size) time.
PositionedReadable - Interface in org.apache.hadoop.fs
Stream that permits positional reading.
PREP - Static variable in class org.apache.hadoop.mapred.JobStatus
 
prependPathComponent(String) - Method in class org.apache.hadoop.streaming.PathFinder
Prepends the specified component to the path list.
preserveInput(boolean) - Method in class org.apache.hadoop.io.SequenceFile.Sorter.SegmentDescriptor
Whether to delete the files when no longer needed
prevCharIsCR - Variable in class org.apache.hadoop.hbase.shell.generated.SimpleCharStream
 
prevCharIsCR - Variable in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
prevCharIsLF - Variable in class org.apache.hadoop.hbase.shell.generated.SimpleCharStream
 
prevCharIsLF - Variable in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
printFoot() - Static method in class org.apache.hadoop.hbase.shell.ConsoleTable
 
printGotoForm(JspWriter, int, String) - Static method in class org.apache.hadoop.dfs.JspHelper
 
printHead(String) - Static method in class org.apache.hadoop.hbase.shell.ConsoleTable
 
printHelp(String) - Static method in class org.apache.hadoop.hbase.shell.HelpManager
 
PrintJarMainClass - Class in org.apache.hadoop.util
A micro-application that prints the main class name out of a jar file.
PrintJarMainClass() - Constructor for class org.apache.hadoop.util.PrintJarMainClass
 
printLine(int, String, String, String) - Static method in class org.apache.hadoop.hbase.shell.ConsoleTable
 
printPathWithLinks(String, JspWriter, int) - Static method in class org.apache.hadoop.dfs.JspHelper
 
printTable(int, String) - Static method in class org.apache.hadoop.hbase.shell.ConsoleTable
 
printThreadInfo(PrintWriter, String) - Static method in class org.apache.hadoop.util.ReflectionUtils
Print all of the thread's information and stack traces.
printUsage(String) - Method in class org.apache.hadoop.dfs.DFSAdmin
Displays format of commands.
printVersion() - Method in class org.apache.hadoop.hbase.shell.HelpManager
Print out the program version.
PriorityQueue - Class in org.apache.hadoop.util
A PriorityQueue maintains a partial ordering of its elements such that the least element can always be found in constant time.
PriorityQueue() - Constructor for class org.apache.hadoop.util.PriorityQueue
 
processUpgradeCommand(UpgradeCommand) - Method in class org.apache.hadoop.dfs.NameNode
 
ProgramDriver - Class in org.apache.hadoop.util
A driver that is used to run programs added to it
ProgramDriver() - Constructor for class org.apache.hadoop.util.ProgramDriver
 
progress(String, float, String, TaskStatus.Phase, Counters) - Method in class org.apache.hadoop.mapred.TaskTracker
Called periodically to report Task progress, from 0.0 to 1.0.
Progress - Class in org.apache.hadoop.util
Utility to assist with generation of progress reports.
Progress() - Constructor for class org.apache.hadoop.util.Progress
Creates a new root node.
progress() - Method in interface org.apache.hadoop.util.Progressable
callback for reporting progress.
Progressable - Interface in org.apache.hadoop.util
An interface for callbacks when a method makes some progress.
pseudoSortByDistance(DatanodeDescriptor, DatanodeDescriptor[]) - Method in class org.apache.hadoop.net.NetworkTopology
Sort the nodes array by their distances to the reader. It linearly scans the array; if a local node is found, it is swapped with the first element of the array.
purge() - Method in interface org.apache.hadoop.fs.s3.FileSystemStore
Delete everything.
purgeCache(Configuration) - Static method in class org.apache.hadoop.filecache.DistributedCache
Clear the entire contents of the cache and delete the backing files.
put(long, Text, byte[]) - Method in class org.apache.hadoop.hbase.HClient.BatchHandler
Change the value for the specified column
put(long, Text, byte[]) - Method in class org.apache.hadoop.hbase.HClient
Change a value for the specified column.
put(long, Text, byte[]) - Method in class org.apache.hadoop.hbase.HRegion
Put a cell value into the locked row.
put(Text, long, long, Text, byte[]) - Method in interface org.apache.hadoop.hbase.HRegionInterface
Change a value for the specified column
put(Text, long, long, Text, byte[]) - Method in class org.apache.hadoop.hbase.HRegionServer
Change a value for the specified column
put(long, Text, byte[]) - Method in class org.apache.hadoop.hbase.io.BatchUpdate
Change a value for the specified column
put(Object) - Method in class org.apache.hadoop.util.PriorityQueue
Adds an Object to a PriorityQueue in log(size) time.
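pop() and put(Object) above are the public face of a heap whose ordering comes from a subclass. A minimal sketch, assuming the Lucene-style protected initialize(int) and lessThan(Object, Object) hooks; the Long-valued queue is illustrative only:

    import org.apache.hadoop.util.PriorityQueue;

    // A minimal min-heap of Longs; lessThan() defines the ordering, initialize() sets the capacity.
    public class LongMinQueue extends PriorityQueue {
      public LongMinQueue(int maxSize) {
        initialize(maxSize);
      }

      protected boolean lessThan(Object a, Object b) {
        return ((Long) a).longValue() < ((Long) b).longValue();
      }

      public static void main(String[] args) {
        LongMinQueue q = new LongMinQueue(16);
        q.put(Long.valueOf(7));
        q.put(Long.valueOf(3));
        q.put(Long.valueOf(5));
        System.out.println(q.pop());   // 3 -- the least element comes out first
      }
    }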
PUT_OP - Static variable in class org.apache.hadoop.hbase.io.BatchOperation
put operation

Q

quarterDigest() - Method in class org.apache.hadoop.io.MD5Hash
Return a 32-bit digest of the MD5.
QUOTED_STRING - Static variable in interface org.apache.hadoop.hbase.shell.generated.ParserConstants
 

R

RANDOM - Static variable in interface org.onelab.filter.RemoveScheme
Random selection.
RandomWriter - Class in org.apache.hadoop.examples
This program uses map/reduce to run a distributed job with no interaction between the tasks, where each task writes a large unsorted random binary sequence file of BytesWritable.
RandomWriter() - Constructor for class org.apache.hadoop.examples.RandomWriter
 
RATIO - Static variable in interface org.onelab.filter.RemoveScheme
Ratio Selection.
RawLocalFileSystem - Class in org.apache.hadoop.fs
Implement the FileSystem API for the raw local filesystem.
RawLocalFileSystem() - Constructor for class org.apache.hadoop.fs.RawLocalFileSystem
 
RBRACE_TKN - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
Rcc - Class in org.apache.hadoop.record.compiler.generated
 
Rcc(InputStream) - Constructor for class org.apache.hadoop.record.compiler.generated.Rcc
 
Rcc(InputStream, String) - Constructor for class org.apache.hadoop.record.compiler.generated.Rcc
 
Rcc(Reader) - Constructor for class org.apache.hadoop.record.compiler.generated.Rcc
 
Rcc(RccTokenManager) - Constructor for class org.apache.hadoop.record.compiler.generated.Rcc
 
RccConstants - Interface in org.apache.hadoop.record.compiler.generated
 
RccTask - Class in org.apache.hadoop.record.compiler.ant
Hadoop record compiler ant Task
RccTask() - Constructor for class org.apache.hadoop.record.compiler.ant.RccTask
Creates a new instance of RccTask
RccTokenManager - Class in org.apache.hadoop.record.compiler.generated
 
RccTokenManager(SimpleCharStream) - Constructor for class org.apache.hadoop.record.compiler.generated.RccTokenManager
 
RccTokenManager(SimpleCharStream, int) - Constructor for class org.apache.hadoop.record.compiler.generated.RccTokenManager
 
read(long, byte[], int, int) - Method in class org.apache.hadoop.fs.BufferedFSInputStream
 
read(long, byte[], int, int) - Method in class org.apache.hadoop.fs.FSDataInputStream
 
read() - Method in class org.apache.hadoop.fs.FSInputChecker
Read one checksum-verified byte
read(byte[], int, int) - Method in class org.apache.hadoop.fs.FSInputChecker
Read checksum verified bytes from this byte-input stream into the specified byte array, starting at the given offset.
read(long, byte[], int, int) - Method in class org.apache.hadoop.fs.FSInputStream
 
read(long, byte[], int, int) - Method in interface org.apache.hadoop.fs.PositionedReadable
Read up to the specified number of bytes, from a given position within a file, and return the number of bytes read.
read(byte[], int, int) - Method in class org.apache.hadoop.io.compress.CompressionInputStream
Read bytes from the stream.
read() - Method in class org.apache.hadoop.io.compress.GzipCodec.GzipInputStream
 
read(byte[], int, int) - Method in class org.apache.hadoop.io.compress.GzipCodec.GzipInputStream
 
read(DataInput) - Static method in class org.apache.hadoop.io.MD5Hash
Constructs, reads and returns an instance.
read() - Method in class org.apache.hadoop.mapred.TaskLog.Reader
 
read(byte[], int, int) - Method in class org.apache.hadoop.mapred.TaskLog.Reader
 
READ_TIMEOUT - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
readBool(String) - Method in class org.apache.hadoop.record.BinaryRecordInput
 
readBool(String) - Method in class org.apache.hadoop.record.CsvRecordInput
 
readBool(String) - Method in interface org.apache.hadoop.record.RecordInput
Read a boolean from serialized record.
readBool(String) - Method in class org.apache.hadoop.record.XmlRecordInput
 
readBuffer(String) - Method in class org.apache.hadoop.record.BinaryRecordInput
 
readBuffer(String) - Method in class org.apache.hadoop.record.CsvRecordInput
 
readBuffer(String) - Method in interface org.apache.hadoop.record.RecordInput
Read byte array from serialized record.
readBuffer(String) - Method in class org.apache.hadoop.record.XmlRecordInput
 
readByte(String) - Method in class org.apache.hadoop.record.BinaryRecordInput
 
readByte(String) - Method in class org.apache.hadoop.record.CsvRecordInput
 
readByte(String) - Method in interface org.apache.hadoop.record.RecordInput
Read a byte from serialized record.
readByte(String) - Method in class org.apache.hadoop.record.XmlRecordInput
 
readChar() - Method in class org.apache.hadoop.hbase.shell.generated.SimpleCharStream
 
readChar() - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
readChunk(long, byte[], int, int, byte[]) - Method in class org.apache.hadoop.fs.FSInputChecker
Reads in next checksum chunk data into buf at offset and checksum into checksum.
readCompressedByteArray(DataInput) - Static method in class org.apache.hadoop.io.WritableUtils
 
readCompressedString(DataInput) - Static method in class org.apache.hadoop.io.WritableUtils
 
readCompressedStringArray(DataInput) - Static method in class org.apache.hadoop.io.WritableUtils
 
readDouble(byte[], int) - Static method in class org.apache.hadoop.io.WritableComparator
Parse a double from a byte array.
readDouble(String) - Method in class org.apache.hadoop.record.BinaryRecordInput
 
readDouble(String) - Method in class org.apache.hadoop.record.CsvRecordInput
 
readDouble(String) - Method in interface org.apache.hadoop.record.RecordInput
Read a double-precision number from serialized record.
readDouble(byte[], int) - Static method in class org.apache.hadoop.record.Utils
Parse a double from a byte array.
readDouble(String) - Method in class org.apache.hadoop.record.XmlRecordInput
 
readEnum(DataInput, Class<T>) - Static method in class org.apache.hadoop.io.WritableUtils
Read an Enum value from DataInput; Enums are read and written using String values.
readFields(DataInput) - Method in class org.apache.hadoop.dfs.DatanodeID
 
readFields(DataInput) - Method in class org.apache.hadoop.dfs.DatanodeInfo
 
readFields(DataInput) - Method in class org.apache.hadoop.dfs.LocatedBlocks
 
readFields(DataInput) - Method in class org.apache.hadoop.dfs.UpgradeStatusReport
 
readFields(DataInput) - Method in class org.apache.hadoop.hbase.BloomFilterDescriptor
 
readFields(DataInput) - Method in class org.apache.hadoop.hbase.filter.PageRowFilter
Reads the fields of this object from in.
readFields(DataInput) - Method in class org.apache.hadoop.hbase.filter.RegExpRowFilter
Reads the fields of this object from in.
readFields(DataInput) - Method in class org.apache.hadoop.hbase.filter.RowFilterSet
Reads the fields of this object from in.
readFields(DataInput) - Method in class org.apache.hadoop.hbase.HColumnDescriptor
 
readFields(DataInput) - Method in class org.apache.hadoop.hbase.HLogEdit
 
readFields(DataInput) - Method in class org.apache.hadoop.hbase.HLogKey
 
readFields(DataInput) - Method in class org.apache.hadoop.hbase.HMsg
 
readFields(DataInput) - Method in class org.apache.hadoop.hbase.HRegionInfo
Reads the fields of this object from in.
readFields(DataInput) - Method in class org.apache.hadoop.hbase.HServerAddress
Reads the fields of this object from in.
readFields(DataInput) - Method in class org.apache.hadoop.hbase.HServerInfo
 
readFields(DataInput) - Method in class org.apache.hadoop.hbase.HStoreFile
 
readFields(DataInput) - Method in class org.apache.hadoop.hbase.HStoreKey
 
readFields(DataInput) - Method in class org.apache.hadoop.hbase.HTableDescriptor
 
readFields(DataInput) - Method in class org.apache.hadoop.hbase.io.BatchOperation
Reads the fields of this object from in.
readFields(DataInput) - Method in class org.apache.hadoop.hbase.io.BatchUpdate
Reads the fields of this object from in.
readFields(DataInput) - Method in class org.apache.hadoop.hbase.io.ImmutableBytesWritable
 
readFields(DataInput) - Method in class org.apache.hadoop.hbase.io.KeyedData
 
readFields(DataInput) - Method in class org.apache.hadoop.hbase.io.KeyedDataArrayWritable
 
readFields(DataInput) - Method in class org.apache.hadoop.hbase.mapred.TableSplit
 
readFields(DataInput) - Method in class org.apache.hadoop.io.ArrayWritable
 
readFields(DataInput) - Method in class org.apache.hadoop.io.BooleanWritable
 
readFields(DataInput) - Method in class org.apache.hadoop.io.BytesWritable
 
readFields(DataInput) - Method in class org.apache.hadoop.io.CompressedWritable
 
readFields(DataInput) - Method in class org.apache.hadoop.io.FloatWritable
 
readFields(DataInput) - Method in class org.apache.hadoop.io.GenericWritable
 
readFields(DataInput) - Method in class org.apache.hadoop.io.IntWritable
 
readFields(DataInput) - Method in class org.apache.hadoop.io.LongWritable
 
readFields(DataInput) - Method in class org.apache.hadoop.io.MD5Hash
 
readFields(DataInput) - Method in class org.apache.hadoop.io.NullWritable
 
readFields(DataInput) - Method in class org.apache.hadoop.io.ObjectWritable
 
readFields(DataInput) - Method in class org.apache.hadoop.io.SequenceFile.Metadata
 
readFields(DataInput) - Method in class org.apache.hadoop.io.Text
deserialize
readFields(DataInput) - Method in class org.apache.hadoop.io.TwoDArrayWritable
 
readFields(DataInput) - Method in class org.apache.hadoop.io.UTF8
Deprecated.  
readFields(DataInput) - Method in class org.apache.hadoop.io.VersionedWritable
 
readFields(DataInput) - Method in class org.apache.hadoop.io.VIntWritable
 
readFields(DataInput) - Method in class org.apache.hadoop.io.VLongWritable
 
readFields(DataInput) - Method in interface org.apache.hadoop.io.Writable
Reads the fields of this object from in.
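readFields(DataInput) is the deserialization half of the Writable contract; every implementation above pairs it with write(DataOutput), reading fields back in exactly the order they were written. A minimal custom Writable with hypothetical fields:

    import java.io.DataInput;
    import java.io.DataOutput;
    import java.io.IOException;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.io.Writable;

    public class PairWritable implements Writable {
      private final Text name = new Text();   // illustrative fields
      private long count;

      public void write(DataOutput out) throws IOException {
        name.write(out);                      // serialize fields in a fixed order...
        out.writeLong(count);
      }

      public void readFields(DataInput in) throws IOException {
        name.readFields(in);                  // ...and read them back in the same order
        count = in.readLong();
      }
    }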
readFields(DataInput) - Method in class org.apache.hadoop.mapred.ClusterStatus
 
readFields(DataInput) - Method in class org.apache.hadoop.mapred.Counters
 
readFields(DataInput) - Method in class org.apache.hadoop.mapred.FileSplit
 
readFields(DataInput) - Method in class org.apache.hadoop.mapred.JobProfile
 
readFields(DataInput) - Method in class org.apache.hadoop.mapred.JobStatus
 
readFields(DataInput) - Method in class org.apache.hadoop.mapred.MultiFileSplit
 
readFields(DataInput) - Method in class org.apache.hadoop.mapred.TaskCompletionEvent
 
readFields(DataInput) - Method in class org.apache.hadoop.mapred.TaskReport
 
readFields(DataInput) - Method in class org.apache.hadoop.record.Record
 
readFields(DataInput) - Method in class org.onelab.filter.BloomFilter
 
readFields(DataInput) - Method in class org.onelab.filter.CountingBloomFilter
 
readFields(DataInput) - Method in class org.onelab.filter.DynamicBloomFilter
 
readFields(DataInput) - Method in class org.onelab.filter.Filter
 
readFields(DataInput) - Method in class org.onelab.filter.Key
 
readFields(DataInput) - Method in class org.onelab.filter.RetouchedBloomFilter
 
readFieldsCompressed(DataInput) - Method in class org.apache.hadoop.io.CompressedWritable
Subclasses implement this instead of CompressedWritable.readFields(DataInput).
readFloat(byte[], int) - Static method in class org.apache.hadoop.io.WritableComparator
Parse a float from a byte array.
readFloat(String) - Method in class org.apache.hadoop.record.BinaryRecordInput
 
readFloat(String) - Method in class org.apache.hadoop.record.CsvRecordInput
 
readFloat(String) - Method in interface org.apache.hadoop.record.RecordInput
Read a single-precision float from serialized record.
readFloat(byte[], int) - Static method in class org.apache.hadoop.record.Utils
Parse a float from a byte array.
readFloat(String) - Method in class org.apache.hadoop.record.XmlRecordInput
 
readFully(long, byte[], int, int) - Method in class org.apache.hadoop.fs.BufferedFSInputStream
 
readFully(long, byte[]) - Method in class org.apache.hadoop.fs.BufferedFSInputStream
 
readFully(InputStream, byte[], int, int) - Static method in class org.apache.hadoop.fs.FileUtil
 
readFully(long, byte[], int, int) - Method in class org.apache.hadoop.fs.FSDataInputStream
 
readFully(long, byte[]) - Method in class org.apache.hadoop.fs.FSDataInputStream
 
readFully(InputStream, byte[], int, int) - Static method in class org.apache.hadoop.fs.FSInputChecker
A utility function that tries to read up to len bytes from stm
readFully(long, byte[], int, int) - Method in class org.apache.hadoop.fs.FSInputStream
 
readFully(long, byte[]) - Method in class org.apache.hadoop.fs.FSInputStream
 
readFully(long, byte[], int, int) - Method in interface org.apache.hadoop.fs.PositionedReadable
Read the specified number of bytes, from a given position within a file.
readFully(long, byte[]) - Method in interface org.apache.hadoop.fs.PositionedReadable
Read a number of bytes equal to the length of the buffer, from a given position within a file.
readInt(byte[], int) - Static method in class org.apache.hadoop.io.WritableComparator
Parse an integer from a byte array.
readInt(String) - Method in class org.apache.hadoop.record.BinaryRecordInput
 
readInt(String) - Method in class org.apache.hadoop.record.CsvRecordInput
 
readInt(String) - Method in interface org.apache.hadoop.record.RecordInput
Read an integer from serialized record.
readInt(String) - Method in class org.apache.hadoop.record.XmlRecordInput
 
readLine() - Method in class org.apache.hadoop.mapred.LineRecordReader
 
readLine(InputStream, OutputStream) - Static method in class org.apache.hadoop.mapred.LineRecordReader
 
readLine(InputStream) - Static method in class org.apache.hadoop.streaming.UTF8ByteArrayUtils
Read a utf8 encoded line from a data input stream.
readLong(byte[], int) - Static method in class org.apache.hadoop.io.WritableComparator
Parse a long from a byte array.
readLong(String) - Method in class org.apache.hadoop.record.BinaryRecordInput
 
readLong(String) - Method in class org.apache.hadoop.record.CsvRecordInput
 
readLong(String) - Method in interface org.apache.hadoop.record.RecordInput
Read a long integer from serialized record.
readLong(String) - Method in class org.apache.hadoop.record.XmlRecordInput
 
readObject(DataInput, Configuration) - Static method in class org.apache.hadoop.io.ObjectWritable
Read a Writable, String, primitive type, or an array of the preceding.
readObject(DataInput, ObjectWritable, Configuration) - Static method in class org.apache.hadoop.io.ObjectWritable
Read a Writable, String, primitive type, or an array of the preceding.
readString(DataInput) - Static method in class org.apache.hadoop.io.Text
Read a UTF8 encoded string from in
readString(DataInput) - Static method in class org.apache.hadoop.io.UTF8
Deprecated. Read a UTF-8 encoded string.
readString(DataInput) - Static method in class org.apache.hadoop.io.WritableUtils
 
readString(String) - Method in class org.apache.hadoop.record.BinaryRecordInput
 
readString(String) - Method in class org.apache.hadoop.record.CsvRecordInput
 
readString(String) - Method in interface org.apache.hadoop.record.RecordInput
Read a UTF-8 encoded string from serialized record.
readString(String) - Method in class org.apache.hadoop.record.XmlRecordInput
 
readStringArray(DataInput) - Static method in class org.apache.hadoop.io.WritableUtils
 
readUnsignedShort(byte[], int) - Static method in class org.apache.hadoop.io.WritableComparator
Parse an unsigned short from a byte array.
readVInt(byte[], int) - Static method in class org.apache.hadoop.io.WritableComparator
Reads a zero-compressed encoded integer from a byte array and returns it.
readVInt(DataInput) - Static method in class org.apache.hadoop.io.WritableUtils
Reads a zero-compressed encoded integer from input stream and returns it.
readVInt(byte[], int) - Static method in class org.apache.hadoop.record.Utils
Reads a zero-compressed encoded integer from a byte array and returns it.
readVInt(DataInput) - Static method in class org.apache.hadoop.record.Utils
Reads a zero-compressed encoded integer from a stream and returns it.
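A small sketch of the zero-compressed encoding round trip using the WritableUtils variants listed above (the buffer classes are purely for illustration):

  import org.apache.hadoop.io.DataInputBuffer;
  import org.apache.hadoop.io.DataOutputBuffer;
  import org.apache.hadoop.io.WritableUtils;

  DataOutputBuffer out = new DataOutputBuffer();
  WritableUtils.writeVInt(out, 300);            // small magnitudes use fewer bytes on the wire
  DataInputBuffer in = new DataInputBuffer();
  in.reset(out.getData(), out.getLength());
  int value = WritableUtils.readVInt(in);       // value == 300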
readVLong(byte[], int) - Static method in class org.apache.hadoop.io.WritableComparator
Reads a zero-compressed encoded long from a byte array and returns it.
readVLong(DataInput) - Static method in class org.apache.hadoop.io.WritableUtils
Reads a zero-compressed encoded long from input stream and returns it.
readVLong(byte[], int) - Static method in class org.apache.hadoop.record.Utils
Reads a zero-compressed encoded long from a byte array and returns it.
readVLong(DataInput) - Static method in class org.apache.hadoop.record.Utils
Reads a zero-compressed encoded long from a stream and returns it.
READY - Static variable in class org.apache.hadoop.mapred.jobcontrol.Job
 
Record() - Method in class org.apache.hadoop.record.compiler.generated.Rcc
 
Record - Class in org.apache.hadoop.record
Abstract class that is extended by generated classes.
Record() - Constructor for class org.apache.hadoop.record.Record
 
RECORD_TKN - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
RecordComparator - Class in org.apache.hadoop.record
A raw record comparator base class
RecordComparator(Class) - Constructor for class org.apache.hadoop.record.RecordComparator
Construct a raw Record comparison implementation.
RecordInput - Interface in org.apache.hadoop.record
Interface that all the Deserializers have to implement.
RecordList() - Method in class org.apache.hadoop.record.compiler.generated.Rcc
 
RecordOutput - Interface in org.apache.hadoop.record
Interface that all the serializers have to implement.
RecordReader - Interface in org.apache.hadoop.mapred
Reads key/value pairs from an input file FileSplit.
RecordWriter - Interface in org.apache.hadoop.mapred
Writes key/value pairs to an output file.
redCmd_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
reduce(WritableComparable, Iterator, OutputCollector, Reporter) - Method in class org.apache.hadoop.contrib.utils.join.DataJoinMapperBase
 
reduce(WritableComparable, Iterator, OutputCollector, Reporter) - Method in class org.apache.hadoop.contrib.utils.join.DataJoinReducerBase
 
reduce(WritableComparable, Iterator, OutputCollector, Reporter) - Method in class org.apache.hadoop.examples.PiEstimator.PiReducer
Reduce method.
reduce(WritableComparable, Iterator, OutputCollector, Reporter) - Method in class org.apache.hadoop.examples.WordCount.Reduce
 
reduce(Text, Iterator, TableOutputCollector, Reporter) - Method in class org.apache.hadoop.hbase.mapred.IdentityTableReduce
No aggregation, output pairs of (key, record)
reduce(WritableComparable, Iterator, OutputCollector, Reporter) - Method in class org.apache.hadoop.hbase.mapred.TableReduce
Create a unique key for table insertion by appending a local counter to the given key.
reduce(Text, Iterator, TableOutputCollector, Reporter) - Method in class org.apache.hadoop.hbase.mapred.TableReduce
 
reduce(WritableComparable, Iterator, OutputCollector, Reporter) - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorCombiner
Combines values for a given key.
reduce(WritableComparable, Iterator, OutputCollector, Reporter) - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorMapper
Do nothing.
reduce(WritableComparable, Iterator, OutputCollector, Reporter) - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorReducer
 
reduce(WritableComparable, Iterator, OutputCollector, Reporter) - Method in class org.apache.hadoop.mapred.lib.FieldSelectionMapReduce
 
reduce(WritableComparable, Iterator, OutputCollector, Reporter) - Method in class org.apache.hadoop.mapred.lib.IdentityReducer
Writes all keys and values directly to output.
reduce(WritableComparable, Iterator, OutputCollector, Reporter) - Method in class org.apache.hadoop.mapred.lib.LongSumReducer
 
reduce(WritableComparable, Iterator, OutputCollector, Reporter) - Method in interface org.apache.hadoop.mapred.Reducer
Combines values for a given key.
reduce(WritableComparable, Iterator, OutputCollector, Reporter) - Method in class org.apache.hadoop.streaming.PipeReducer
 
reduceOutFieldSeparator - Variable in class org.apache.hadoop.streaming.PipeMapRed
 
reduceProgress() - Method in class org.apache.hadoop.mapred.JobStatus
 
reduceProgress() - Method in interface org.apache.hadoop.mapred.RunningJob
Returns a float between 0.0 and 1.0, indicating progress on the reduce portion of the job.
Reducer - Interface in org.apache.hadoop.mapred
Reduces a set of intermediate values which share a key to a smaller set of values.
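For orientation, a minimal Reducer written against the non-generic signature shown in the entries above, summing LongWritable values per key; the class name is illustrative only:

  import java.io.IOException;
  import java.util.Iterator;
  import org.apache.hadoop.io.LongWritable;
  import org.apache.hadoop.io.WritableComparable;
  import org.apache.hadoop.mapred.MapReduceBase;
  import org.apache.hadoop.mapred.OutputCollector;
  import org.apache.hadoop.mapred.Reducer;
  import org.apache.hadoop.mapred.Reporter;

  public class SumReducer extends MapReduceBase implements Reducer {
    public void reduce(WritableComparable key, Iterator values,
                       OutputCollector output, Reporter reporter) throws IOException {
      long sum = 0;
      while (values.hasNext()) {
        sum += ((LongWritable) values.next()).get();   // assumes LongWritable values
      }
      output.collect(key, new LongWritable(sum));
    }
  }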
ReflectionUtils - Class in org.apache.hadoop.util
General reflection utils
ReflectionUtils() - Constructor for class org.apache.hadoop.util.ReflectionUtils
 
refresh() - Method in class org.apache.hadoop.util.HostsFileReader
 
refreshNodes() - Method in class org.apache.hadoop.dfs.ChecksumDistributedFileSystem
 
refreshNodes() - Method in class org.apache.hadoop.dfs.DFSAdmin
Command to ask the namenode to reread the hosts and excluded hosts file.
refreshNodes() - Method in class org.apache.hadoop.dfs.DistributedFileSystem
 
refreshNodes() - Method in class org.apache.hadoop.dfs.NameNode
 
RegexMapper - Class in org.apache.hadoop.mapred.lib
A Mapper that extracts text matching a regular expression.
RegexMapper() - Constructor for class org.apache.hadoop.mapred.lib.RegexMapper
 
regexpEscape(String) - Static method in class org.apache.hadoop.streaming.StreamUtil
 
RegExpRowFilter - Class in org.apache.hadoop.hbase.filter
Implementation of RowFilterInterface that can filter by rowkey regular expression and/or individual column values (equals comparison only).
RegExpRowFilter() - Constructor for class org.apache.hadoop.hbase.filter.RegExpRowFilter
Default constructor, filters nothing.
RegExpRowFilter(String) - Constructor for class org.apache.hadoop.hbase.filter.RegExpRowFilter
Constructor that takes a row key regular expression to filter on.
RegExpRowFilter(String, Map<Text, byte[]>) - Constructor for class org.apache.hadoop.hbase.filter.RegExpRowFilter
Constructor that takes a row key regular expression to filter on.
REGION_SERVER_CLASS - Static variable in interface org.apache.hadoop.hbase.HConstants
Parameter name for what region server interface to use.
RegionNotFoundException - Exception in org.apache.hadoop.hbase
 
RegionNotFoundException() - Constructor for exception org.apache.hadoop.hbase.RegionNotFoundException
 
RegionNotFoundException(String) - Constructor for exception org.apache.hadoop.hbase.RegionNotFoundException
 
REGIONSERVER_ADDRESS - Static variable in interface org.apache.hadoop.hbase.HConstants
Parameter name for hbase.regionserver address.
regionServerReport(HServerInfo, HMsg[]) - Method in class org.apache.hadoop.hbase.HMaster
regionServerReport(HServerInfo, HMsg[]) - Method in interface org.apache.hadoop.hbase.HMasterRegionInterface
 
RegionServerRunningException - Exception in org.apache.hadoop.hbase
Thrown if the region server log directory exists (which indicates another region server is running at the same address)
RegionServerRunningException() - Constructor for exception org.apache.hadoop.hbase.RegionServerRunningException
Default Constructor
RegionServerRunningException(String) - Constructor for exception org.apache.hadoop.hbase.RegionServerRunningException
Constructs the exception and supplies a string as the message
regionServerStartup(HServerInfo) - Method in class org.apache.hadoop.hbase.HMaster
regionServerStartup(HServerInfo) - Method in interface org.apache.hadoop.hbase.HMasterRegionInterface
 
RegionUnavailableListener - Interface in org.apache.hadoop.hbase
Used as a callback mechanism so that an HRegion can notify the HRegionServer of the different stages of making an HRegion unavailable.
register(DatanodeRegistration, String) - Method in class org.apache.hadoop.dfs.NameNode
 
registerNotification(JobConf, JobStatus) - Static method in class org.apache.hadoop.mapred.JobEndNotifier
 
registerUpdater(Updater) - Method in interface org.apache.hadoop.metrics.MetricsContext
Registers a callback to be called at regular time intervals, as determined by the implementation-class specific configuration.
registerUpdater(Updater) - Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
Registers a callback to be called at time intervals determined by the configuration.
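A rough sketch of the updater pattern, assuming a MetricsContext obtained elsewhere and the usual Updater callback shape (doUpdates(MetricsContext)); the record and metric names are invented:

  import org.apache.hadoop.metrics.MetricsContext;
  import org.apache.hadoop.metrics.MetricsRecord;
  import org.apache.hadoop.metrics.Updater;

  void registerRequestMetrics(MetricsContext ctx) {
    final MetricsRecord rec = ctx.createRecord("myrecord");   // illustrative record name
    ctx.registerUpdater(new Updater() {
      public void doUpdates(MetricsContext unused) {
        rec.setMetric("requests", 42);   // push current values at each configured interval
        rec.update();
      }
    });
  }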
ReInit(InputStream) - Method in class org.apache.hadoop.hbase.shell.generated.Parser
 
ReInit(InputStream, String) - Method in class org.apache.hadoop.hbase.shell.generated.Parser
 
ReInit(Reader) - Method in class org.apache.hadoop.hbase.shell.generated.Parser
 
ReInit(ParserTokenManager) - Method in class org.apache.hadoop.hbase.shell.generated.Parser
 
ReInit(SimpleCharStream) - Method in class org.apache.hadoop.hbase.shell.generated.ParserTokenManager
 
ReInit(SimpleCharStream, int) - Method in class org.apache.hadoop.hbase.shell.generated.ParserTokenManager
 
ReInit(Reader, int, int, int) - Method in class org.apache.hadoop.hbase.shell.generated.SimpleCharStream
 
ReInit(Reader, int, int) - Method in class org.apache.hadoop.hbase.shell.generated.SimpleCharStream
 
ReInit(Reader) - Method in class org.apache.hadoop.hbase.shell.generated.SimpleCharStream
 
ReInit(InputStream, String, int, int, int) - Method in class org.apache.hadoop.hbase.shell.generated.SimpleCharStream
 
ReInit(InputStream, int, int, int) - Method in class org.apache.hadoop.hbase.shell.generated.SimpleCharStream
 
ReInit(InputStream, String) - Method in class org.apache.hadoop.hbase.shell.generated.SimpleCharStream
 
ReInit(InputStream) - Method in class org.apache.hadoop.hbase.shell.generated.SimpleCharStream
 
ReInit(InputStream, String, int, int) - Method in class org.apache.hadoop.hbase.shell.generated.SimpleCharStream
 
ReInit(InputStream, int, int) - Method in class org.apache.hadoop.hbase.shell.generated.SimpleCharStream
 
ReInit(InputStream) - Method in class org.apache.hadoop.record.compiler.generated.Rcc
 
ReInit(InputStream, String) - Method in class org.apache.hadoop.record.compiler.generated.Rcc
 
ReInit(Reader) - Method in class org.apache.hadoop.record.compiler.generated.Rcc
 
ReInit(RccTokenManager) - Method in class org.apache.hadoop.record.compiler.generated.Rcc
 
ReInit(SimpleCharStream) - Method in class org.apache.hadoop.record.compiler.generated.RccTokenManager
 
ReInit(SimpleCharStream, int) - Method in class org.apache.hadoop.record.compiler.generated.RccTokenManager
 
ReInit(Reader, int, int, int) - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
ReInit(Reader, int, int) - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
ReInit(Reader) - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
ReInit(InputStream, String, int, int, int) - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
ReInit(InputStream, int, int, int) - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
ReInit(InputStream, String) - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
ReInit(InputStream) - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
ReInit(InputStream, String, int, int) - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
ReInit(InputStream, int, int) - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
release(Path) - Method in class org.apache.hadoop.dfs.DistributedFileSystem
Deprecated.  
release(Path) - Method in class org.apache.hadoop.fs.ChecksumFileSystem
 
release(Path) - Method in class org.apache.hadoop.fs.FileSystem
Deprecated. FS does not support file locks anymore.
release(Path) - Method in class org.apache.hadoop.fs.FilterFileSystem
Deprecated. FS does not support file locks anymore.
release(Path) - Method in class org.apache.hadoop.fs.RawLocalFileSystem
Deprecated.  
release(Path) - Method in class org.apache.hadoop.mapred.PhasedFileSystem
Deprecated.  
releaseCache(URI, Configuration) - Static method in class org.apache.hadoop.filecache.DistributedCache
This is the opposite of getLocalCache.
releaseLock(String, String) - Method in class org.apache.hadoop.dfs.NameNode
Deprecated.  
releaseReadLock() - Method in class org.apache.hadoop.hbase.HLocking
Caller is finished with the nonexclusive read-lock
releaseWriteLock() - Method in class org.apache.hadoop.hbase.HLocking
Caller is finished with the write lock
remaining - Variable in class org.apache.hadoop.dfs.DatanodeInfo
 
RemoteException - Exception in org.apache.hadoop.ipc
 
RemoteException(String, String) - Constructor for exception org.apache.hadoop.ipc.RemoteException
 
RemoteExceptionHandler - Class in org.apache.hadoop.hbase
An immutable class which contains a static method for handling org.apache.hadoop.ipc.RemoteException exceptions.
remove() - Method in class org.apache.hadoop.contrib.utils.join.ArrayListBackedIterator
 
remove() - Method in interface org.apache.hadoop.metrics.MetricsRecord
Removes, from the buffered data table, all rows having tags that equal the tags that have been set on this record.
remove(MetricsRecordImpl) - Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
Called by MetricsRecordImpl.remove().
remove() - Method in class org.apache.hadoop.metrics.spi.MetricsRecordImpl
Removes the row, if it exists, in the buffered data table having tags that equal the tags that have been set on this record.
remove(MetricsRecordImpl) - Method in class org.apache.hadoop.metrics.spi.NullContext
Do-nothing version of remove
remove(DatanodeDescriptor) - Method in class org.apache.hadoop.net.NetworkTopology
Remove a data node; update data node counter & rack counter if necessary.
removeAttribute(String) - Method in class org.apache.hadoop.metrics.ContextFactory
Removes the named attribute if it exists.
RemoveScheme - Interface in org.onelab.filter
Defines the different remove scheme for retouched Bloom filters.
removeSuffix(String, String) - Static method in class org.apache.hadoop.io.compress.CompressionCodecFactory
Removes a suffix from a filename, if it has it.
removeTag(String) - Method in interface org.apache.hadoop.metrics.MetricsRecord
Removes any tag of the specified name.
removeTag(String) - Method in class org.apache.hadoop.metrics.spi.MetricsRecordImpl
Removes any tag of the specified name.
rename(Path, Path) - Method in class org.apache.hadoop.dfs.DistributedFileSystem
Rename files/dirs
rename(Path, Path) - Method in class org.apache.hadoop.dfs.HftpFileSystem
 
rename(String, String) - Method in class org.apache.hadoop.dfs.NameNode
 
rename(Path, Path) - Method in class org.apache.hadoop.fs.ChecksumFileSystem
Rename files/dirs
rename(Path, Path) - Method in class org.apache.hadoop.fs.FileSystem
Renames Path src to Path dst.
rename(Path, Path) - Method in class org.apache.hadoop.fs.FilterFileSystem
Renames Path src to Path dst.
rename(Path, Path) - Method in class org.apache.hadoop.fs.RawLocalFileSystem
 
rename(Path, Path) - Method in class org.apache.hadoop.fs.s3.S3FileSystem
 
rename(FileSystem, String, String) - Static method in class org.apache.hadoop.io.MapFile
Renames an existing map directory.
rename(Path, Path) - Method in class org.apache.hadoop.mapred.PhasedFileSystem
Deprecated.  
renewLease(String) - Method in class org.apache.hadoop.dfs.NameNode
 
renewLease(long) - Method in class org.apache.hadoop.hbase.HClient
Renew lease on update
renewLease(long, long) - Method in interface org.apache.hadoop.hbase.HRegionInterface
Renew lease on update
renewLease(long, long) - Method in class org.apache.hadoop.hbase.HRegionServer
Renew lease on update
renewLease(long, long) - Method in class org.apache.hadoop.hbase.Leases
Renew a lease
report() - Method in class org.apache.hadoop.contrib.utils.join.JobBase
log the counters
report() - Method in class org.apache.hadoop.dfs.DFSAdmin
Gives a report on how the FileSystem is doing.
reportBadBlocks(LocatedBlock[]) - Method in class org.apache.hadoop.dfs.NameNode
The client has detected an error on the specified located blocks and is reporting them to the server.
reportChecksumFailure(Path, FSDataInputStream, long, FSDataInputStream, long) - Method in class org.apache.hadoop.dfs.ChecksumDistributedFileSystem
We need to find the blocks that didn't match.
reportChecksumFailure(Path, FSDataInputStream, long, FSDataInputStream, long) - Method in class org.apache.hadoop.dfs.DistributedFileSystem
We need to find the blocks that didn't match.
reportChecksumFailure(Path, FSDataInputStream, long, FSDataInputStream, long) - Method in class org.apache.hadoop.fs.ChecksumFileSystem
Report a checksum error to the file system.
reportChecksumFailure(Path, FSDataInputStream, long, FSDataInputStream, long) - Method in class org.apache.hadoop.fs.LocalFileSystem
Moves files to a bad file directory on the same device, so that their storage will not be reused.
reportDiagnosticInfo(String, String) - Method in class org.apache.hadoop.mapred.TaskTracker
Called when the task dies before completion, and we want to report back diagnostic info
reporter - Variable in class org.apache.hadoop.contrib.utils.join.DataJoinMapperBase
 
reporter - Variable in class org.apache.hadoop.contrib.utils.join.DataJoinReducerBase
 
Reporter - Interface in org.apache.hadoop.mapred
Passed to application code to permit alteration of status.
reportTaskTrackerError(String, String, String) - Method in class org.apache.hadoop.mapred.JobTracker
 
requiresLayout() - Method in class org.apache.hadoop.metrics.jvm.EventCounter
 
reserveSpaceWithCheckSum(Path, long) - Method in class org.apache.hadoop.fs.InMemoryFileSystem
Register a file with its size.
reset() - Method in class org.apache.hadoop.contrib.utils.join.ArrayListBackedIterator
 
reset() - Method in interface org.apache.hadoop.contrib.utils.join.ResetableIterator
 
reset() - Method in class org.apache.hadoop.dfs.DataChecksum
 
reset() - Method in class org.apache.hadoop.fs.FSInputChecker
 
reset() - Method in class org.apache.hadoop.hbase.filter.PageRowFilter
Resets the state of the filter.
reset() - Method in class org.apache.hadoop.hbase.filter.RegExpRowFilter
Resets the state of the filter.
reset() - Method in interface org.apache.hadoop.hbase.filter.RowFilterInterface
Resets the state of the filter.
reset() - Method in class org.apache.hadoop.hbase.filter.RowFilterSet
Resets the state of the filter.
reset() - Method in interface org.apache.hadoop.io.compress.Compressor
Resets compressor so that a new set of input data can be processed.
reset() - Method in interface org.apache.hadoop.io.compress.Decompressor
Resets decompressor so that a new set of input data can be processed.
reset() - Method in class org.apache.hadoop.io.compress.lzo.LzoCompressor
 
reset() - Method in class org.apache.hadoop.io.compress.lzo.LzoDecompressor
 
reset() - Method in class org.apache.hadoop.io.compress.zlib.ZlibCompressor
 
reset() - Method in class org.apache.hadoop.io.compress.zlib.ZlibDecompressor
 
reset(byte[], int) - Method in class org.apache.hadoop.io.DataInputBuffer
Resets the data that the buffer reads.
reset(byte[], int, int) - Method in class org.apache.hadoop.io.DataInputBuffer
Resets the data that the buffer reads.
reset() - Method in class org.apache.hadoop.io.DataOutputBuffer
Resets the buffer to empty.
reset() - Method in class org.apache.hadoop.io.MapFile.Reader
Re-positions the reader before its first key.
reset() - Method in class org.apache.hadoop.mapred.lib.aggregate.DoubleValueSum
reset the aggregator
reset() - Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueMax
reset the aggregator
reset() - Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueMin
reset the aggregator
reset() - Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueSum
reset the aggregator
reset() - Method in class org.apache.hadoop.mapred.lib.aggregate.StringValueMax
reset the aggregator
reset() - Method in class org.apache.hadoop.mapred.lib.aggregate.StringValueMin
reset the aggregator
reset() - Method in class org.apache.hadoop.mapred.lib.aggregate.UniqValueCount
reset the aggregator
reset() - Method in interface org.apache.hadoop.mapred.lib.aggregate.ValueAggregator
reset the aggregator
reset() - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueHistogram
reset the aggregator
reset() - Method in class org.apache.hadoop.record.Buffer
Reset the buffer to 0 size
ResetableIterator - Interface in org.apache.hadoop.contrib.utils.join
This interface defines an iterator that helps the reducer class re-group the values in the values iterator of the reduce method according to their source tags.
resetState() - Method in class org.apache.hadoop.io.compress.CompressionInputStream
Reset the decompressor to its initial state and discard any buffered data, as the underlying stream may have been repositioned.
resetState() - Method in class org.apache.hadoop.io.compress.CompressionOutputStream
Reset the compression to the initial state.
resetState() - Method in class org.apache.hadoop.io.compress.GzipCodec.GzipInputStream
 
resetState() - Method in class org.apache.hadoop.io.compress.GzipCodec.GzipOutputStream
 
resortPriority() - Method in class org.apache.hadoop.mapred.JobTracker
Sort jobs by priority and then by start time.
resume() - Method in class org.apache.hadoop.mapred.jobcontrol.JobControl
resume the suspended thread
retiringRegions - Variable in class org.apache.hadoop.hbase.HRegionServer
 
RETOUCHED_BLOOMFILTER - Static variable in class org.apache.hadoop.hbase.BloomFilterDescriptor
retouched Bloom filter, as defined in the CoNEXT 2006 paper.
RetouchedBloomFilter - Class in org.onelab.filter
Implements a retouched Bloom filter, as defined in the CoNEXT 2006 paper.
RetouchedBloomFilter() - Constructor for class org.onelab.filter.RetouchedBloomFilter
Default constructor - use with readFields
RetouchedBloomFilter(int, int) - Constructor for class org.onelab.filter.RetouchedBloomFilter
Constructor
retrieveBlock(Block, long) - Method in interface org.apache.hadoop.fs.s3.FileSystemStore
 
retrieveINode(Path) - Method in interface org.apache.hadoop.fs.s3.FileSystemStore
 
RETRY_FOREVER - Static variable in class org.apache.hadoop.io.retry.RetryPolicies
Keep trying forever.
retryByException(RetryPolicy, Map<Class<? extends Exception>, RetryPolicy>) - Static method in class org.apache.hadoop.io.retry.RetryPolicies
Set a default policy with some explicit handlers for specific exceptions.
retryByRemoteException(RetryPolicy, Map<Class<? extends Exception>, RetryPolicy>) - Static method in class org.apache.hadoop.io.retry.RetryPolicies
A retry policy for RemoteException; set a default policy with some explicit handlers for specific exceptions.
RetryPolicies - Class in org.apache.hadoop.io.retry
A collection of useful implementations of RetryPolicy.
RetryPolicies() - Constructor for class org.apache.hadoop.io.retry.RetryPolicies
 
RetryPolicy - Interface in org.apache.hadoop.io.retry
Specifies a policy for retrying method failures.
RetryProxy - Class in org.apache.hadoop.io.retry
A factory for creating retry proxies.
RetryProxy() - Constructor for class org.apache.hadoop.io.retry.RetryProxy
 
retryUpToMaximumCountWithFixedSleep(int, long, TimeUnit) - Static method in class org.apache.hadoop.io.retry.RetryPolicies
Keep trying a limited number of times, waiting a fixed time between attempts, and then fail by re-throwing the exception.
retryUpToMaximumCountWithProportionalSleep(int, long, TimeUnit) - Static method in class org.apache.hadoop.io.retry.RetryPolicies
Keep trying a limited number of times, waiting a growing amount of time between attempts, and then fail by re-throwing the exception.
retryUpToMaximumTimeWithFixedSleep(long, long, TimeUnit) - Static method in class org.apache.hadoop.io.retry.RetryPolicies
Keep trying for a maximum time, waiting a fixed time between attempts, and then fail by re-throwing the exception.
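A hedged sketch of wiring one of these policies into a retry proxy; MyProtocol and rawImpl are placeholders, and RetryProxy.create(Class, Object, RetryPolicy) is assumed as the factory method (it is not listed in this section):

  import java.util.concurrent.TimeUnit;
  import org.apache.hadoop.io.retry.RetryPolicies;
  import org.apache.hadoop.io.retry.RetryPolicy;
  import org.apache.hadoop.io.retry.RetryProxy;

  // MyProtocol is a placeholder interface; rawImpl is an instance that may fail transiently.
  RetryPolicy policy =
      RetryPolicies.retryUpToMaximumCountWithFixedSleep(3, 2, TimeUnit.SECONDS);
  MyProtocol proxy = (MyProtocol) RetryProxy.create(MyProtocol.class, rawImpl, policy);
  // Calls through proxy are retried up to 3 times, sleeping 2 seconds between attempts.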
ReturnMsg - Class in org.apache.hadoop.hbase.shell
 
ReturnMsg(int, String) - Constructor for class org.apache.hadoop.hbase.shell.ReturnMsg
 
ReturnMsg(int) - Constructor for class org.apache.hadoop.hbase.shell.ReturnMsg
 
reverseDns(InetAddress, String) - Static method in class org.apache.hadoop.net.DNS
Returns the hostname associated with the specified IP address by the provided nameserver.
rjustify(String, int) - Static method in class org.apache.hadoop.streaming.StreamUtil
 
rollEditLog() - Method in class org.apache.hadoop.dfs.NameNode
Roll the edit log.
rollFsImage() - Method in class org.apache.hadoop.dfs.NameNode
Roll the image
ROOT - Static variable in class org.apache.hadoop.net.NodeBase
 
ROOT_TABLE_NAME - Static variable in interface org.apache.hadoop.hbase.HConstants
The root table's name.
ROW - Static variable in interface org.apache.hadoop.hbase.shell.generated.ParserConstants
 
RowFilterInterface - Interface in org.apache.hadoop.hbase.filter
Interface used for row-level filters applied to HRegion.HScanner scan results during calls to next().
RowFilterSet - Class in org.apache.hadoop.hbase.filter
Implementation of RowFilterInterface that represents a set of RowFilters which will be evaluated with a specified boolean operator AND/OR.
RowFilterSet() - Constructor for class org.apache.hadoop.hbase.filter.RowFilterSet
Default constructor, filters nothing.
RowFilterSet(Set<RowFilterInterface>) - Constructor for class org.apache.hadoop.hbase.filter.RowFilterSet
Constructor that takes a set of RowFilters.
RowFilterSet(RowFilterSet.Operator, Set<RowFilterInterface>) - Constructor for class org.apache.hadoop.hbase.filter.RowFilterSet
Constructor that takes a set of RowFilters and an operator.
RPAREN - Static variable in interface org.apache.hadoop.hbase.shell.generated.ParserConstants
 
RPC - Class in org.apache.hadoop.ipc
A simple RPC mechanism.
RPC.Server - Class in org.apache.hadoop.ipc
An RPC Server.
RPC.Server(Object, Configuration, String, int) - Constructor for class org.apache.hadoop.ipc.RPC.Server
Construct an RPC server.
RPC.Server(Object, Configuration, String, int, int, boolean) - Constructor for class org.apache.hadoop.ipc.RPC.Server
Construct an RPC server.
RPC.VersionMismatch - Exception in org.apache.hadoop.ipc
A version mismatch for the RPC protocol.
RPC.VersionMismatch(String, long, long) - Constructor for exception org.apache.hadoop.ipc.RPC.VersionMismatch
Create a version mismatch exception
run() - Method in class org.apache.hadoop.dfs.DataNode
No matter what kind of exception we get, keep retrying offerService().
run(Configuration) - Static method in class org.apache.hadoop.dfs.DataNode
Start datanode daemon.
run(String[]) - Method in class org.apache.hadoop.dfs.DFSAdmin
 
run(String[]) - Method in class org.apache.hadoop.dfs.DFSck
 
run(String[]) - Method in class org.apache.hadoop.dfs.NamenodeFsck
 
run() - Method in class org.apache.hadoop.dfs.SecondaryNameNode
 
run(String[]) - Method in class org.apache.hadoop.examples.RandomWriter
This is the main routine for launching a distributed random write job.
run() - Method in class org.apache.hadoop.fs.Command
Run a command
run(String[]) - Method in class org.apache.hadoop.fs.FsShell
run
run(String[]) - Method in class org.apache.hadoop.fs.s3.MigrationTool
 
run() - Method in class org.apache.hadoop.hbase.HMaster
Main processing loop
run() - Method in class org.apache.hadoop.hbase.HRegionServer
The HRegionServer sticks in this loop until closed.
run(String[]) - Method in class org.apache.hadoop.mapred.JobClient
 
run() - Method in class org.apache.hadoop.mapred.jobcontrol.JobControl
The main loop for the thread.
run() - Method in class org.apache.hadoop.mapred.JobHistory.HistoryCleaner
Cleans up history data.
run(RecordReader, OutputCollector, Reporter) - Method in class org.apache.hadoop.mapred.lib.MultithreadedMapRunner
 
run(RecordReader, OutputCollector, Reporter) - Method in interface org.apache.hadoop.mapred.MapRunnable
Called to execute mapping.
run(RecordReader, OutputCollector, Reporter) - Method in class org.apache.hadoop.mapred.MapRunner
 
run() - Method in class org.apache.hadoop.mapred.TaskTracker
The server retry loop.
run(String[]) - Method in class org.apache.hadoop.util.CopyFiles
This is the main driver for recursively copying directories across file systems.
run(String[]) - Method in interface org.apache.hadoop.util.Tool
execute the command with the given arguments
RunJar - Class in org.apache.hadoop.util
Run a Hadoop job jar.
RunJar() - Constructor for class org.apache.hadoop.util.RunJar
 
runJob(JobConf) - Static method in class org.apache.hadoop.contrib.utils.join.DataJoinJob
Submit/run a map/reduce job.
runJob(JobConf) - Static method in class org.apache.hadoop.mapred.JobClient
Utility that submits a job, then polls for progress until the job is complete.
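A compact, illustrative submission through runJob using the old mapred API; the driver class, paths, mapper and reducer are placeholders, not anything prescribed by JobClient itself:

  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.io.LongWritable;
  import org.apache.hadoop.io.Text;
  import org.apache.hadoop.mapred.JobClient;
  import org.apache.hadoop.mapred.JobConf;
  import org.apache.hadoop.mapred.lib.IdentityMapper;
  import org.apache.hadoop.mapred.lib.IdentityReducer;

  JobConf job = new JobConf(MyDriver.class);     // MyDriver: placeholder class used to locate the job jar
  job.setJobName("example");
  job.setInputPath(new Path("/tmp/in"));         // illustrative paths
  job.setOutputPath(new Path("/tmp/out"));
  job.setMapperClass(IdentityMapper.class);
  job.setReducerClass(IdentityReducer.class);
  job.setOutputKeyClass(LongWritable.class);     // matches the default TextInputFormat key type
  job.setOutputValueClass(Text.class);
  JobClient.runJob(job);                         // submits the job and polls until it completes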
running - Variable in class org.apache.hadoop.hbase.Leases
 
RUNNING - Static variable in class org.apache.hadoop.mapred.jobcontrol.Job
 
RUNNING - Static variable in class org.apache.hadoop.mapred.JobStatus
 
running_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
RunningJob - Interface in org.apache.hadoop.mapred
Includes details on a running MapReduce job.
runningJobs() - Method in class org.apache.hadoop.mapred.JobTracker
 

S

S3Exception - Exception in org.apache.hadoop.fs.s3
Thrown if there is a problem communicating with Amazon S3.
S3Exception(Throwable) - Constructor for exception org.apache.hadoop.fs.s3.S3Exception
 
S3FileSystem - Class in org.apache.hadoop.fs.s3
A FileSystem backed by Amazon S3.
S3FileSystem() - Constructor for class org.apache.hadoop.fs.s3.S3FileSystem
 
S3FileSystem(FileSystemStore) - Constructor for class org.apache.hadoop.fs.s3.S3FileSystem
 
S3FileSystemException - Exception in org.apache.hadoop.fs.s3
Thrown when there is a fatal exception while using S3FileSystem.
S3FileSystemException(String) - Constructor for exception org.apache.hadoop.fs.s3.S3FileSystemException
 
safeGetCanonicalPath(File) - Static method in class org.apache.hadoop.streaming.StreamUtil
 
SafeModeException - Exception in org.apache.hadoop.dfs
This exception is thrown when the name node is in safe mode.
SafeModeException(String, FSNamesystem.SafeModeInfo) - Constructor for exception org.apache.hadoop.dfs.SafeModeException
 
scannerClosed - Variable in class org.apache.hadoop.hbase.HAbstractScanner
 
SecondaryNameNode - Class in org.apache.hadoop.dfs
The Secondary NameNode is a helper to the primary NameNode.
SecondaryNameNode(Configuration) - Constructor for class org.apache.hadoop.dfs.SecondaryNameNode
Create a connection to the primary namenode.
SecondaryNameNode.GetImageServlet - Class in org.apache.hadoop.dfs
This class is used in Namesystem's jetty to retrieve a file.
SecondaryNameNode.GetImageServlet() - Constructor for class org.apache.hadoop.dfs.SecondaryNameNode.GetImageServlet
 
seek(long) - Method in class org.apache.hadoop.fs.BufferedFSInputStream
 
seek(long) - Method in class org.apache.hadoop.fs.FSDataInputStream
 
seek(long) - Method in class org.apache.hadoop.fs.FSInputChecker
Seek to the given position in the stream.
seek(long) - Method in class org.apache.hadoop.fs.FSInputStream
Seek to the given offset from the start of the file.
seek(long) - Method in interface org.apache.hadoop.fs.Seekable
Seek to the given offset from the start of the file.
seek(long) - Method in class org.apache.hadoop.io.ArrayFile.Reader
Positions the reader before its nth value.
seek(WritableComparable) - Method in class org.apache.hadoop.io.MapFile.Reader
Positions the reader at the named key, or if none such exists, at the first entry after the named key.
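A brief sketch of key lookup in an existing MapFile (directory name and key invented for illustration); get() looks up the named key and fills in the value if an exact match is present:

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.io.MapFile;
  import org.apache.hadoop.io.Text;

  Configuration conf = new Configuration();
  FileSystem fs = FileSystem.get(conf);
  MapFile.Reader reader = new MapFile.Reader(fs, "/tmp/example.map", conf);  // illustrative path
  Text value = new Text();
  if (reader.get(new Text("somekey"), value) != null) {
    // value now holds the entry stored under "somekey"
  }
  reader.close();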
seek(long) - Method in class org.apache.hadoop.io.SequenceFile.Reader
Set the current byte position in the input file.
seek(WritableComparable) - Method in class org.apache.hadoop.io.SetFile.Reader
 
seek(long) - Method in class org.apache.hadoop.mapred.SequenceFileRecordReader
 
Seekable - Interface in org.apache.hadoop.fs
Stream that permits seeking.
seekNextRecordBoundary() - Method in class org.apache.hadoop.streaming.StreamBaseRecordReader
Implementation should seek forward in_ to the first byte of the next record.
seekNextRecordBoundary() - Method in class org.apache.hadoop.streaming.StreamXmlRecordReader
 
seekToNewSource(long) - Method in class org.apache.hadoop.fs.BufferedFSInputStream
 
seekToNewSource(long) - Method in class org.apache.hadoop.fs.FSDataInputStream
 
seekToNewSource(long) - Method in class org.apache.hadoop.fs.FSInputStream
Seeks a different copy of the data.
seekToNewSource(long) - Method in interface org.apache.hadoop.fs.Seekable
Seeks a different copy of the data.
seenPrimary_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
SELECT - Static variable in interface org.apache.hadoop.hbase.shell.generated.ParserConstants
 
selectCommand() - Method in class org.apache.hadoop.hbase.shell.generated.Parser
 
SelectCommand - Class in org.apache.hadoop.hbase.shell
 
SelectCommand() - Constructor for class org.apache.hadoop.hbase.shell.SelectCommand
 
selectFoot() - Static method in class org.apache.hadoop.hbase.shell.ConsoleTable
 
selectHead() - Static method in class org.apache.hadoop.hbase.shell.ConsoleTable
 
selectiveClearing(Key, short) - Method in class org.onelab.filter.RetouchedBloomFilter
Performs the selective clearing for a given key.
SEMICOLON_TKN - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
sendHeartbeat(DatanodeRegistration, long, long, int, int) - Method in class org.apache.hadoop.dfs.NameNode
Data node notifies the name node that it is alive; returns a block-oriented command for the datanode to execute.
SEPARATOR - Static variable in class org.apache.hadoop.fs.Path
The directory separator, a slash.
SEPARATOR_CHAR - Static variable in class org.apache.hadoop.fs.Path
 
SequenceFile - Class in org.apache.hadoop.io
Support for flat files of binary key/value pairs.
SequenceFile.CompressionType - Enum in org.apache.hadoop.io
The type of compression.
SequenceFile.Metadata - Class in org.apache.hadoop.io
The class encapsulating the metadata of a file.
SequenceFile.Metadata() - Constructor for class org.apache.hadoop.io.SequenceFile.Metadata
 
SequenceFile.Metadata(TreeMap<Text, Text>) - Constructor for class org.apache.hadoop.io.SequenceFile.Metadata
 
SequenceFile.Reader - Class in org.apache.hadoop.io
Reads key/value pairs from a sequence-format file.
SequenceFile.Reader(FileSystem, Path, Configuration) - Constructor for class org.apache.hadoop.io.SequenceFile.Reader
Open the named file.
SequenceFile.Sorter - Class in org.apache.hadoop.io
Sorts key/value pairs in a sequence-format file.
SequenceFile.Sorter(FileSystem, Class, Class, Configuration) - Constructor for class org.apache.hadoop.io.SequenceFile.Sorter
Sort and merge files containing the named classes.
SequenceFile.Sorter(FileSystem, WritableComparator, Class, Configuration) - Constructor for class org.apache.hadoop.io.SequenceFile.Sorter
Sort and merge using an arbitrary WritableComparator.
SequenceFile.Sorter.RawKeyValueIterator - Interface in org.apache.hadoop.io
The interface to iterate over raw keys/values of SequenceFiles.
SequenceFile.Sorter.SegmentDescriptor - Class in org.apache.hadoop.io
This class defines a merge segment.
SequenceFile.Sorter.SegmentDescriptor(long, long, Path) - Constructor for class org.apache.hadoop.io.SequenceFile.Sorter.SegmentDescriptor
Constructs a segment
SequenceFile.ValueBytes - Interface in org.apache.hadoop.io
The interface to 'raw' values of SequenceFiles.
SequenceFile.Writer - Class in org.apache.hadoop.io
Write key/value pairs to a sequence-format file.
SequenceFile.Writer(FileSystem, Configuration, Path, Class, Class) - Constructor for class org.apache.hadoop.io.SequenceFile.Writer
Create the named file.
SequenceFile.Writer(FileSystem, Configuration, Path, Class, Class, Progressable, SequenceFile.Metadata) - Constructor for class org.apache.hadoop.io.SequenceFile.Writer
Create the named file with write-progress reporter.
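A short write-then-read sketch using the constructors listed above (the path and key/value types are illustrative):

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.io.IntWritable;
  import org.apache.hadoop.io.SequenceFile;
  import org.apache.hadoop.io.Text;

  Configuration conf = new Configuration();
  FileSystem fs = FileSystem.get(conf);
  Path file = new Path("/tmp/example.seq");                       // illustrative path
  SequenceFile.Writer writer =
      new SequenceFile.Writer(fs, conf, file, Text.class, IntWritable.class);
  writer.append(new Text("alpha"), new IntWritable(1));
  writer.close();

  SequenceFile.Reader reader = new SequenceFile.Reader(fs, file, conf);
  Text key = new Text();
  IntWritable val = new IntWritable();
  while (reader.next(key, val)) { /* consume key/value pairs */ }
  reader.close();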
SequenceFileAsTextInputFormat - Class in org.apache.hadoop.mapred
This class is similar to SequenceFileInputFormat, except that it generates a SequenceFileAsTextRecordReader, which converts the input keys and values to their String forms by calling the toString() method.
SequenceFileAsTextInputFormat() - Constructor for class org.apache.hadoop.mapred.SequenceFileAsTextInputFormat
 
SequenceFileAsTextRecordReader - Class in org.apache.hadoop.mapred
This class converts the input keys and values to their String forms by calling the toString() method.
SequenceFileAsTextRecordReader(Configuration, FileSplit) - Constructor for class org.apache.hadoop.mapred.SequenceFileAsTextRecordReader
 
SequenceFileInputFilter - Class in org.apache.hadoop.mapred
A class that allows a map/reduce job to work on a sample of sequence files.
SequenceFileInputFilter() - Constructor for class org.apache.hadoop.mapred.SequenceFileInputFilter
 
SequenceFileInputFilter.Filter - Interface in org.apache.hadoop.mapred
filter interface
SequenceFileInputFilter.FilterBase - Class in org.apache.hadoop.mapred
Base class for Filters.
SequenceFileInputFilter.FilterBase() - Constructor for class org.apache.hadoop.mapred.SequenceFileInputFilter.FilterBase
 
SequenceFileInputFilter.MD5Filter - Class in org.apache.hadoop.mapred
This class returns a set of records by examining the MD5 digest of its key against a filtering frequency f.
SequenceFileInputFilter.MD5Filter() - Constructor for class org.apache.hadoop.mapred.SequenceFileInputFilter.MD5Filter
 
SequenceFileInputFilter.PercentFilter - Class in org.apache.hadoop.mapred
This class returns a percentage of records. The percentage is determined by a filtering frequency f using the criterion record# % f == 0.
SequenceFileInputFilter.PercentFilter() - Constructor for class org.apache.hadoop.mapred.SequenceFileInputFilter.PercentFilter
 
SequenceFileInputFilter.RegexFilter - Class in org.apache.hadoop.mapred
Filters records by matching the key against a regular expression.
SequenceFileInputFilter.RegexFilter() - Constructor for class org.apache.hadoop.mapred.SequenceFileInputFilter.RegexFilter
 
SequenceFileInputFormat - Class in org.apache.hadoop.mapred
An InputFormat for SequenceFiles.
SequenceFileInputFormat() - Constructor for class org.apache.hadoop.mapred.SequenceFileInputFormat
 
SequenceFileOutputFormat - Class in org.apache.hadoop.mapred
An OutputFormat that writes SequenceFiles.
SequenceFileOutputFormat() - Constructor for class org.apache.hadoop.mapred.SequenceFileOutputFormat
 
SequenceFileRecordReader - Class in org.apache.hadoop.mapred
An RecordReader for SequenceFiles.
SequenceFileRecordReader(Configuration, FileSplit) - Constructor for class org.apache.hadoop.mapred.SequenceFileRecordReader
 
serialize() - Method in class org.apache.hadoop.fs.s3.INode
 
serialize(RecordOutput, String) - Method in class org.apache.hadoop.record.Record
Serialize a record with tag (usually the field name)
serialize(RecordOutput) - Method in class org.apache.hadoop.record.Record
Serialize a record without a tag
Server - Class in org.apache.hadoop.ipc
An abstract IPC service.
Server(String, int, Class, int, Configuration) - Constructor for class org.apache.hadoop.ipc.Server
Constructs a server listening on the named port and address.
serverInfo - Variable in class org.apache.hadoop.hbase.HRegionServer
 
set(String, Object) - Method in class org.apache.hadoop.conf.Configuration
Deprecated.  
set(String, String) - Method in class org.apache.hadoop.conf.Configuration
Sets the value of the name property.
set(Checksum, int, int) - Method in class org.apache.hadoop.fs.FSInputChecker
Set the checksum related parameters
set(HStoreKey) - Method in class org.apache.hadoop.hbase.HStoreKey
Set the value of this HStoreKey from the supplied key
set(KeyedData[]) - Method in class org.apache.hadoop.hbase.io.KeyedDataArrayWritable
Sets the KeyedData array
set(Writable[]) - Method in class org.apache.hadoop.io.ArrayWritable
 
set(boolean) - Method in class org.apache.hadoop.io.BooleanWritable
Set the value of the BooleanWritable
set(BytesWritable) - Method in class org.apache.hadoop.io.BytesWritable
Set the BytesWritable to the contents of the given newData.
set(byte[], int, int) - Method in class org.apache.hadoop.io.BytesWritable
Set the value to a copy of the given byte range
set(float) - Method in class org.apache.hadoop.io.FloatWritable
Set the value of this FloatWritable.
set(Writable) - Method in class org.apache.hadoop.io.GenericWritable
Set the instance that is wrapped.
set(int) - Method in class org.apache.hadoop.io.IntWritable
Set the value of this IntWritable.
set(long) - Method in class org.apache.hadoop.io.LongWritable
Set the value of this LongWritable.
set(MD5Hash) - Method in class org.apache.hadoop.io.MD5Hash
Copy the contents of another instance into this instance.
set(Object) - Method in class org.apache.hadoop.io.ObjectWritable
Reset the instance.
set(Text, Text) - Method in class org.apache.hadoop.io.SequenceFile.Metadata
 
set(String) - Method in class org.apache.hadoop.io.Text
Set to contain the contents of a string.
set(byte[]) - Method in class org.apache.hadoop.io.Text
Set to a utf8 byte array
set(Text) - Method in class org.apache.hadoop.io.Text
copy a text.
set(byte[], int, int) - Method in class org.apache.hadoop.io.Text
Set the Text to range of bytes
set(Writable[][]) - Method in class org.apache.hadoop.io.TwoDArrayWritable
 
set(String) - Method in class org.apache.hadoop.io.UTF8
Deprecated. Set to contain the contents of a string.
set(UTF8) - Method in class org.apache.hadoop.io.UTF8
Deprecated. Set to contain the contents of a string.
set(int) - Method in class org.apache.hadoop.io.VIntWritable
Set the value of this VIntWritable.
set(long) - Method in class org.apache.hadoop.io.VLongWritable
Set the value of this LongWritable.
set(byte[]) - Method in class org.apache.hadoop.record.Buffer
Use the specified bytes array as underlying sequence.
set(float) - Method in class org.apache.hadoop.util.Progress
Called during execution on a leaf node to set its progress.
setArchiveTimestamps(Configuration, String) - Static method in class org.apache.hadoop.filecache.DistributedCache
This is to check the timestamp of the archives to be localized
setArgument(String) - Method in class org.apache.hadoop.hbase.shell.DescCommand
 
setArgument(String) - Method in class org.apache.hadoop.hbase.shell.DropCommand
 
setArgument(String) - Method in class org.apache.hadoop.hbase.shell.HelpCommand
 
setArgument(String) - Method in class org.apache.hadoop.hbase.shell.ShowCommand
 
setAttribute(String, Object) - Method in class org.apache.hadoop.mapred.StatusHttpServer
Set a value in the webapp context.
setAttribute(String, Object) - Method in class org.apache.hadoop.metrics.ContextFactory
Sets the named factory attribute to the specified value, creating it if it did not already exist.
setBoolean(String, boolean) - Method in class org.apache.hadoop.conf.Configuration
Sets the value of the name property to a boolean.
setCacheArchives(URI[], Configuration) - Static method in class org.apache.hadoop.filecache.DistributedCache
Set the configuration with the given set of archives
setCacheFiles(URI[], Configuration) - Static method in class org.apache.hadoop.filecache.DistributedCache
Set the configuration with the given set of files
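A minimal sketch of registering cache entries on a job configuration; the URIs are invented and must point at files/archives that already exist in the job's FileSystem:

  import java.net.URI;
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.filecache.DistributedCache;

  Configuration conf = new Configuration();
  DistributedCache.setCacheFiles(new URI[] { URI.create("/tmp/lookup.dat") }, conf);    // illustrative file
  DistributedCache.setCacheArchives(new URI[] { URI.create("/tmp/dict.zip") }, conf);   // illustrative archive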
setCapacity(int) - Method in class org.apache.hadoop.io.BytesWritable
Change the capacity of the backing storage.
setCapacity(int) - Method in class org.apache.hadoop.record.Buffer
Change the capacity of the backing storage.
setClass(String, Class<?>, Class<?>) - Method in class org.apache.hadoop.conf.Configuration
Sets the value of the name property to the name of a class.
setClassLoader(ClassLoader) - Method in class org.apache.hadoop.conf.Configuration
Set the class loader that will be used to load the various objects.
setCodecClasses(Configuration, List<Class>) - Static method in class org.apache.hadoop.io.compress.CompressionCodecFactory
Sets a list of codec classes in the configuration.
setColumn(Text) - Method in class org.apache.hadoop.hbase.HStoreKey
Change the value of the column key
setColumnfamilies(List<String>) - Method in class org.apache.hadoop.hbase.shell.CreateCommand
 
setColumnfamilies(List<String>) - Method in class org.apache.hadoop.hbase.shell.InsertCommand
 
setColumnFilter(Text, byte[]) - Method in class org.apache.hadoop.hbase.filter.RegExpRowFilter
Specify a value that must be matched for the given column.
setColumnFilters(Map<Text, byte[]>) - Method in class org.apache.hadoop.hbase.filter.RegExpRowFilter
Set column filters for a number of columns.
setCombinerClass(Class<? extends Reducer>) - Method in class org.apache.hadoop.mapred.JobConf
 
setCompressionType(Configuration, SequenceFile.CompressionType) - Static method in class org.apache.hadoop.io.SequenceFile
Set the compression type for sequence files.
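For example (a sketch; BLOCK is one of the SequenceFile.CompressionType values):

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.io.SequenceFile;

  Configuration conf = new Configuration();
  // Ask for block compression on SequenceFiles written with this configuration.
  SequenceFile.setCompressionType(conf, SequenceFile.CompressionType.BLOCK);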
setCompressMapOutput(boolean) - Method in class org.apache.hadoop.mapred.JobConf
Should the map outputs be compressed before transfer? Uses the SequenceFile compression.
setCompressOutput(JobConf, boolean) - Static method in class org.apache.hadoop.mapred.OutputFormatBase
Set whether the output of the reduce is compressed
setCondition(Map<String, List<String>>) - Method in class org.apache.hadoop.hbase.shell.DeleteCommand
 
setCondition(Map<String, List<String>>) - Method in class org.apache.hadoop.hbase.shell.InsertCommand
 
setCondition(Map<String, List<String>>) - Method in class org.apache.hadoop.hbase.shell.SelectCommand
 
setConf(Configuration) - Method in interface org.apache.hadoop.conf.Configurable
Set the configuration to be used by this object.
setConf(Configuration) - Method in class org.apache.hadoop.conf.Configured
 
setConf(Configuration) - Method in class org.apache.hadoop.io.compress.DefaultCodec
 
setConf(Configuration) - Method in class org.apache.hadoop.io.compress.LzoCodec
 
setConf(Configuration) - Method in class org.apache.hadoop.io.ObjectWritable
 
setConf(Configuration) - Method in class org.apache.hadoop.mapred.SequenceFileInputFilter.MD5Filter
configure the filter according to configuration
setConf(Configuration) - Method in class org.apache.hadoop.mapred.SequenceFileInputFilter.PercentFilter
configure the filter by checking the configuration
setConf(Configuration) - Method in class org.apache.hadoop.mapred.SequenceFileInputFilter.RegexFilter
configure the Filter by checking the configuration
setConf(Configuration) - Method in class org.apache.hadoop.tools.Logalyzer.LogComparator
 
setConf(Configuration) - Method in class org.apache.hadoop.util.CopyFiles
 
setConf(Object, Configuration) - Static method in class org.apache.hadoop.util.ReflectionUtils
Check and set 'configuration' if necessary.
setConf(Configuration) - Method in class org.apache.hadoop.util.ToolBase
 
setContentionTracing(boolean) - Static method in class org.apache.hadoop.util.ReflectionUtils
 
setCorruptFiles(long) - Method in class org.apache.hadoop.dfs.NamenodeFsck.FsckResult
 
setDebugStream(PrintStream) - Method in class org.apache.hadoop.hbase.shell.generated.ParserTokenManager
 
setDebugStream(PrintStream) - Method in class org.apache.hadoop.record.compiler.generated.RccTokenManager
 
setDestdir(File) - Method in class org.apache.hadoop.record.compiler.ant.RccTask
Sets directory where output files will be generated
setDictionary(byte[], int, int) - Method in interface org.apache.hadoop.io.compress.Compressor
Sets preset dictionary for compression.
setDictionary(byte[], int, int) - Method in interface org.apache.hadoop.io.compress.Decompressor
Sets preset dictionary for compression.
setDictionary(byte[], int, int) - Method in class org.apache.hadoop.io.compress.lzo.LzoCompressor
 
setDictionary(byte[], int, int) - Method in class org.apache.hadoop.io.compress.lzo.LzoDecompressor
 
setDictionary(byte[], int, int) - Method in class org.apache.hadoop.io.compress.zlib.ZlibCompressor
 
setDictionary(byte[], int, int) - Method in class org.apache.hadoop.io.compress.zlib.ZlibDecompressor
 
setDigest(String) - Method in class org.apache.hadoop.io.MD5Hash
Sets the digest value from a hex string.
setDisableHistory(boolean) - Static method in class org.apache.hadoop.mapred.JobHistory
Enable/disable history logging.
setDoubleValue(Object, double) - Method in class org.apache.hadoop.contrib.utils.join.JobBase
Set the given counter to the given value
setEventId(int) - Method in class org.apache.hadoop.mapred.TaskCompletionEvent
set event Id.
setExecutable(JobConf, String) - Static method in class org.apache.hadoop.mapred.pipes.Submitter
Set the URI for the application's executable.
setFactor(int) - Method in class org.apache.hadoop.io.SequenceFile.Sorter
Set the number of streams to merge at once.
setFactory(Class, WritableFactory) - Static method in class org.apache.hadoop.io.WritableFactories
Define a factory for a class.
setFailonerror(boolean) - Method in class org.apache.hadoop.record.compiler.ant.RccTask
Given multiple files (via fileset), set the error handling behavior
SetFile - Class in org.apache.hadoop.io
A file-based set of keys.
SetFile() - Constructor for class org.apache.hadoop.io.SetFile
 
setFile(File) - Method in class org.apache.hadoop.record.compiler.ant.RccTask
Sets the record definition file attribute
SetFile.Reader - Class in org.apache.hadoop.io
Provide access to an existing set file.
SetFile.Reader(FileSystem, String, Configuration) - Constructor for class org.apache.hadoop.io.SetFile.Reader
Construct a set reader for the named set.
SetFile.Reader(FileSystem, String, WritableComparator, Configuration) - Constructor for class org.apache.hadoop.io.SetFile.Reader
Construct a set reader for the named set using the named comparator.
SetFile.Writer - Class in org.apache.hadoop.io
Deprecated. pass a Configuration too
SetFile.Writer(FileSystem, String, Class) - Constructor for class org.apache.hadoop.io.SetFile.Writer
Deprecated. Create the named set for keys of the named class.
SetFile.Writer(Configuration, FileSystem, String, Class, SequenceFile.CompressionType) - Constructor for class org.apache.hadoop.io.SetFile.Writer
Deprecated. Create a set naming the element class and compression type.
SetFile.Writer(Configuration, FileSystem, String, WritableComparator, SequenceFile.CompressionType) - Constructor for class org.apache.hadoop.io.SetFile.Writer
Deprecated. Create a set naming the element comparator and compression type.
setFileTimestamps(Configuration, String) - Static method in class org.apache.hadoop.filecache.DistributedCache
This is to check the timestamp of the files to be localized
setFilterClass(Configuration, Class) - Static method in class org.apache.hadoop.mapred.SequenceFileInputFilter
set the filter class
setFrequency(Configuration, int) - Static method in class org.apache.hadoop.mapred.SequenceFileInputFilter.MD5Filter
set the filtering frequency in configuration
setFrequency(Configuration, int) - Static method in class org.apache.hadoop.mapred.SequenceFileInputFilter.PercentFilter
Set the frequency and store it in conf.
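Putting the filter-related setters above together, a hedged configuration sketch that samples roughly one record in ten:

  import org.apache.hadoop.mapred.JobConf;
  import org.apache.hadoop.mapred.SequenceFileInputFilter;

  JobConf job = new JobConf();
  job.setInputFormat(SequenceFileInputFilter.class);
  SequenceFileInputFilter.setFilterClass(job, SequenceFileInputFilter.PercentFilter.class);
  // Records with record# % 10 == 0 pass the filter.
  SequenceFileInputFilter.PercentFilter.setFrequency(job, 10);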
setHostName(String) - Method in class org.apache.hadoop.dfs.DatanodeInfo
 
setIndexInterval(int) - Method in class org.apache.hadoop.io.MapFile.Writer
Sets the index interval.
setInput(byte[], int, int) - Method in interface org.apache.hadoop.io.compress.Compressor
Sets input data for compression.
setInput(byte[], int, int) - Method in interface org.apache.hadoop.io.compress.Decompressor
Sets input data for decompression.
setInput(byte[], int, int) - Method in class org.apache.hadoop.io.compress.lzo.LzoCompressor
 
setInput(byte[], int, int) - Method in class org.apache.hadoop.io.compress.lzo.LzoDecompressor
 
setInput(byte[], int, int) - Method in class org.apache.hadoop.io.compress.zlib.ZlibCompressor
 
setInput(byte[], int, int) - Method in class org.apache.hadoop.io.compress.zlib.ZlibDecompressor
 
setInputFormat(Class<? extends InputFormat>) - Method in class org.apache.hadoop.mapred.JobConf
 
setInputKeyClass(Class) - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Not used
setInputPath(Path) - Method in class org.apache.hadoop.mapred.JobConf
 
setInputValueClass(Class) - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Not used
setInt(String, int) - Method in class org.apache.hadoop.conf.Configuration
Sets the value of the name property to an integer.
setIsJavaMapper(JobConf, boolean) - Static method in class org.apache.hadoop.mapred.pipes.Submitter
Set whether the Mapper is written in Java.
setIsJavaRecordReader(JobConf, boolean) - Static method in class org.apache.hadoop.mapred.pipes.Submitter
Set whether the job is using a Java RecordReader.
setIsJavaRecordWriter(JobConf, boolean) - Static method in class org.apache.hadoop.mapred.pipes.Submitter
Set whether the job will use a Java RecordWriter.
setIsJavaReducer(JobConf, boolean) - Static method in class org.apache.hadoop.mapred.pipes.Submitter
Set whether the Reducer is written in Java.
setJar(String) - Method in class org.apache.hadoop.mapred.JobConf
 
setJarByClass(Class) - Method in class org.apache.hadoop.mapred.JobConf
Set the job's jar file by finding an example class location.
setJobConf(JobConf) - Method in class org.apache.hadoop.mapred.jobcontrol.Job
Set the mapred job conf for this job.
setJobConf() - Method in class org.apache.hadoop.streaming.StreamJob
 
setJobID(String) - Method in class org.apache.hadoop.mapred.jobcontrol.Job
Set the job ID for this job.
setJobName(String) - Method in class org.apache.hadoop.mapred.JobConf
Set the user-specified job name.
setJobName(String) - Method in class org.apache.hadoop.mapred.jobcontrol.Job
Set the job name for this job.
setJobPriority(JobPriority) - Method in class org.apache.hadoop.mapred.JobConf
Set job priority for this job.
setKeepCommandFile(JobConf, boolean) - Static method in class org.apache.hadoop.mapred.pipes.Submitter
Set whether to keep the command file for debugging
setKeepFailedTaskFiles(boolean) - Method in class org.apache.hadoop.mapred.JobConf
Set whether the framework should keep the intermediate files for failed tasks.
setKeepTaskFilesPattern(String) - Method in class org.apache.hadoop.mapred.JobConf
Set a regular expression for task names that should be kept.
setLanguage(String) - Method in class org.apache.hadoop.record.compiler.ant.RccTask
Sets the output language option
setLevel(int) - Method in class org.apache.hadoop.dfs.DatanodeInfo
 
setLimit(int) - Method in class org.apache.hadoop.hbase.shell.CreateCommand
 
setLimit(int) - Method in class org.apache.hadoop.hbase.shell.SelectCommand
 
setLoadNativeLibraries(JobConf, boolean) - Method in class org.apache.hadoop.util.NativeCodeLoader
Set if native hadoop libraries, if present, can be used for this job.
setLocalArchives(Configuration, String) - Static method in class org.apache.hadoop.filecache.DistributedCache
Set the conf to contain the location for localized archives
setLocalFiles(Configuration, String) - Static method in class org.apache.hadoop.filecache.DistributedCache
Set the conf to contain the location for localized files
setLong(String, long) - Method in class org.apache.hadoop.conf.Configuration
Sets the value of the name property to a long.
setLongValue(Object, long) - Method in class org.apache.hadoop.contrib.utils.join.JobBase
Set the given counter to the given value
setMapOutputCompressionType(SequenceFile.CompressionType) - Method in class org.apache.hadoop.mapred.JobConf
Set the compression type for the map outputs.
setMapOutputCompressorClass(Class<? extends CompressionCodec>) - Method in class org.apache.hadoop.mapred.JobConf
Set the given class as the compression codec for the map outputs.
setMapOutputKeyClass(Class<? extends WritableComparable>) - Method in class org.apache.hadoop.mapred.JobConf
Set the key class for the map output data.
setMapOutputValueClass(Class<? extends Writable>) - Method in class org.apache.hadoop.mapred.JobConf
Set the value class for the map output data.
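A minimal sketch of declaring the intermediate (map output) types on a JobConf with the two setters above; the job name is illustrative and the rest of the job setup is omitted:

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.JobConf;

    public class MapOutputTypesExample {
      public static void main(String[] args) {
        JobConf job = new JobConf();
        job.setJobName("map-output-types-demo");   // illustrative name
        // Declare the key/value classes emitted by the map phase.
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);
      }
    }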
setMapperClass(Class<? extends Mapper>) - Method in class org.apache.hadoop.mapred.JobConf
 
setMapredJobID(String) - Method in class org.apache.hadoop.mapred.jobcontrol.Job
Set the mapred ID for this job.
setMapRunnerClass(Class<? extends MapRunnable>) - Method in class org.apache.hadoop.mapred.JobConf
 
setMaxItems(long) - Method in class org.apache.hadoop.mapred.lib.aggregate.UniqValueCount
Set the limit on the number of unique values
setMaxMapAttempts(int) - Method in class org.apache.hadoop.mapred.JobConf
Expert: Set the maximum number of attempts that will be made to run a map task.
setMaxMapTaskFailuresPercent(int) - Method in class org.apache.hadoop.mapred.JobConf
Set the maximum percentage of map tasks that can fail without the job being aborted.
setMaxReduceAttempts(int) - Method in class org.apache.hadoop.mapred.JobConf
Expert: Set the maximum number of attempts that will be made to run a reduce task.
setMaxReduceTaskFailuresPercent(int) - Method in class org.apache.hadoop.mapred.JobConf
Set the maximum percentage of reduce tasks that can fail without the job being aborted.
setMaxTaskFailuresPerTracker(int) - Method in class org.apache.hadoop.mapred.JobConf
Set the maximum number of task failures of a given job per tasktracker.
setMemory(int) - Method in class org.apache.hadoop.io.SequenceFile.Sorter
Set the total amount of buffer memory, in bytes.
setMessage(String) - Method in class org.apache.hadoop.mapred.jobcontrol.Job
Set the message for this job.
setMetric(String, int) - Method in interface org.apache.hadoop.metrics.MetricsRecord
Sets the named metric to the specified value.
setMetric(String, short) - Method in interface org.apache.hadoop.metrics.MetricsRecord
Sets the named metric to the specified value.
setMetric(String, byte) - Method in interface org.apache.hadoop.metrics.MetricsRecord
Sets the named metric to the specified value.
setMetric(String, float) - Method in interface org.apache.hadoop.metrics.MetricsRecord
Sets the named metric to the specified value.
setMetric(String, int) - Method in class org.apache.hadoop.metrics.spi.MetricsRecordImpl
Sets the named metric to the specified value.
setMetric(String, short) - Method in class org.apache.hadoop.metrics.spi.MetricsRecordImpl
Sets the named metric to the specified value.
setMetric(String, byte) - Method in class org.apache.hadoop.metrics.spi.MetricsRecordImpl
Sets the named metric to the specified value.
setMetric(String, float) - Method in class org.apache.hadoop.metrics.spi.MetricsRecordImpl
Sets the named metric to the specified value.
setMinSplitSize(long) - Method in class org.apache.hadoop.mapred.FileInputFormat
 
setMissingSize(long) - Method in class org.apache.hadoop.dfs.NamenodeFsck.FsckResult
 
setName(Class, String) - Static method in class org.apache.hadoop.io.WritableName
Set the name that a class should be known as to something other than the class name.
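A small sketch of the alias registration above; the alias string is purely illustrative:

    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.io.WritableName;

    public class WritableNameExample {
      public static void main(String[] args) {
        // Register an alternate name so data written under an old class name
        // can still be resolved after the class has been renamed or moved.
        WritableName.setName(Text.class, "legacy.io.Text");
      }
    }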
setNumMapTasks(int) - Method in class org.apache.hadoop.mapred.JobConf
 
setNumReduceTasks(int) - Method in class org.apache.hadoop.mapred.JobConf
 
setObject(String, Object) - Method in class org.apache.hadoop.conf.Configuration
Deprecated.  
setOutputCompressorClass(JobConf, Class) - Static method in class org.apache.hadoop.mapred.OutputFormatBase
Set the given class as the output compression codec.
setOutputFormat(Class<? extends OutputFormat>) - Method in class org.apache.hadoop.mapred.JobConf
 
setOutputKeyClass(Class<? extends WritableComparable>) - Method in class org.apache.hadoop.mapred.JobConf
 
setOutputKeyComparatorClass(Class<? extends WritableComparator>) - Method in class org.apache.hadoop.mapred.JobConf
 
setOutputPath(Path) - Method in class org.apache.hadoop.mapred.JobConf
 
setOutputValueClass(Class<? extends Writable>) - Method in class org.apache.hadoop.mapred.JobConf
 
setOutputValueGroupingComparator(Class) - Method in class org.apache.hadoop.mapred.JobConf
Set the user defined comparator for grouping values.
setOverReplicatedBlocks(long) - Method in class org.apache.hadoop.dfs.NamenodeFsck.FsckResult
 
setParent(Node) - Method in class org.apache.hadoop.dfs.DatanodeInfo
 
setPartitionerClass(Class<? extends Partitioner>) - Method in class org.apache.hadoop.mapred.JobConf
 
setPattern(Configuration, String) - Static method in class org.apache.hadoop.mapred.SequenceFileInputFilter.RegexFilter
Define the filtering regex and store it in the configuration.
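A minimal sketch of configuring the regex filter above; the pattern is made up for the example and the rest of the job setup is omitted:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapred.SequenceFileInputFilter;

    public class RegexFilterExample {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Keep only records whose key matches the regex.
        SequenceFileInputFilter.RegexFilter.setPattern(conf, "^user_.*");
      }
    }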
setPeriod(int) - Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
Sets the timer period
setPrinter(DancingLinks.SolutionAcceptor<Pentomino.ColumnName>) - Method in class org.apache.hadoop.examples.dancing.Pentomino
Set the printer for the puzzle.
setProgressable(Progressable) - Method in class org.apache.hadoop.io.SequenceFile.Sorter
Set the progressable object in order to report progress.
setQuietMode(boolean) - Method in class org.apache.hadoop.conf.Configuration
Make this class quiet.
setReducerClass(Class<? extends Reducer>) - Method in class org.apache.hadoop.mapred.JobConf
 
setReplication(Path, short) - Method in class org.apache.hadoop.dfs.DistributedFileSystem
 
setReplication(String, short) - Method in class org.apache.hadoop.dfs.NameNode
 
setReplication(int) - Method in class org.apache.hadoop.dfs.NamenodeFsck.FsckResult
 
setReplication(Path, short) - Method in class org.apache.hadoop.fs.ChecksumFileSystem
Set replication for an existing file.
setReplication(Path, short) - Method in class org.apache.hadoop.fs.FileSystem
Set replication for an existing file.
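A short sketch of changing the replication factor of an existing file through the FileSystem method above; the path is illustrative:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class SetReplicationExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        // Request three replicas for an existing file.
        boolean accepted = fs.setReplication(new Path("/user/demo/data.seq"), (short) 3);
        System.out.println("replication change accepted: " + accepted);
      }
    }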
setReplication(Path, short) - Method in class org.apache.hadoop.fs.FilterFileSystem
Set replication for an existing file.
setReplication(Path, short) - Method in class org.apache.hadoop.mapred.PhasedFileSystem
Deprecated.  
setRow(Text) - Method in class org.apache.hadoop.hbase.HStoreKey
Change the value of the row key
setRunState(int) - Method in class org.apache.hadoop.mapred.JobStatus
Change the current run state of the job.
setSafeMode(FSConstants.SafeModeAction) - Method in class org.apache.hadoop.dfs.ChecksumDistributedFileSystem
Enter, leave or get safe mode.
setSafeMode(String[], int) - Method in class org.apache.hadoop.dfs.DFSAdmin
Safe mode maintenance command.
setSafeMode(FSConstants.SafeModeAction) - Method in class org.apache.hadoop.dfs.DistributedFileSystem
Enter, leave or get safe mode.
setSafeMode(FSConstants.SafeModeAction) - Method in class org.apache.hadoop.dfs.NameNode
 
setSessionId(String) - Method in class org.apache.hadoop.mapred.JobConf
Set the user-specified session identifier.
setSize(int) - Method in class org.apache.hadoop.io.BytesWritable
Change the size of the buffer.
setSpeculativeExecution(boolean) - Method in class org.apache.hadoop.mapred.JobConf
Turn on or off speculative execution for this job.
setState(int) - Method in class org.apache.hadoop.mapred.jobcontrol.Job
Set the state for this job.
setStatus(String) - Method in interface org.apache.hadoop.mapred.Reporter
Alter the application's status description.
setStatus(String) - Method in class org.apache.hadoop.util.Progress
 
setTable(String) - Method in class org.apache.hadoop.hbase.shell.CreateCommand
 
setTable(String) - Method in class org.apache.hadoop.hbase.shell.DeleteCommand
 
setTable(String) - Method in class org.apache.hadoop.hbase.shell.InsertCommand
 
setTable(String) - Method in class org.apache.hadoop.hbase.shell.SelectCommand
 
setTabSize(int) - Method in class org.apache.hadoop.hbase.shell.generated.SimpleCharStream
 
setTabSize(int) - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
setTag(Text) - Method in class org.apache.hadoop.contrib.utils.join.TaggedMapOutput
 
setTag(String, String) - Method in interface org.apache.hadoop.metrics.MetricsRecord
Sets the named tag to the specified value.
setTag(String, int) - Method in interface org.apache.hadoop.metrics.MetricsRecord
Sets the named tag to the specified value.
setTag(String, short) - Method in interface org.apache.hadoop.metrics.MetricsRecord
Sets the named tag to the specified value.
setTag(String, byte) - Method in interface org.apache.hadoop.metrics.MetricsRecord
Sets the named tag to the specified value.
setTag(String, String) - Method in class org.apache.hadoop.metrics.spi.MetricsRecordImpl
Sets the named tag to the specified value.
setTag(String, int) - Method in class org.apache.hadoop.metrics.spi.MetricsRecordImpl
Sets the named tag to the specified value.
setTag(String, short) - Method in class org.apache.hadoop.metrics.spi.MetricsRecordImpl
Sets the named tag to the specified value.
setTag(String, byte) - Method in class org.apache.hadoop.metrics.spi.MetricsRecordImpl
Sets the named tag to the specified value.
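A minimal sketch of filling in a MetricsRecord with the setTag/setMetric/update calls listed above; how the record is obtained from a MetricsContext is left to the caller, and the tag and metric names are illustrative:

    import org.apache.hadoop.metrics.MetricsRecord;

    public class MetricsRecordExample {
      // Populate a record obtained elsewhere and queue it for the next emission.
      static void report(MetricsRecord record, String hostname, int bytesRead) {
        record.setTag("hostname", hostname);
        record.setMetric("bytesRead", bytesRead);
        record.update();
      }
    }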
setTaskId(String) - Method in class org.apache.hadoop.mapred.TaskCompletionEvent
Sets task id.
setTaskId(String) - Method in class org.apache.hadoop.mapred.TaskLogAppender
 
setTaskOutputFilter(JobClient.TaskStatusFilter) - Method in class org.apache.hadoop.mapred.JobClient
Deprecated. 
setTaskOutputFilter(JobConf, JobClient.TaskStatusFilter) - Static method in class org.apache.hadoop.mapred.JobClient
Modify the JobConf to set the task output filter
setTaskStatus(TaskCompletionEvent.Status) - Method in class org.apache.hadoop.mapred.TaskCompletionEvent
Set task status.
setTaskTrackerHttp(String) - Method in class org.apache.hadoop.mapred.TaskCompletionEvent
Set task tracker http location.
setThreads(int, int) - Method in class org.apache.hadoop.mapred.StatusHttpServer
 
setTimeout(int) - Method in class org.apache.hadoop.ipc.Client
Sets the timeout used for network i/o.
setTimeout(int) - Method in class org.apache.hadoop.ipc.Server
Sets the timeout used for network i/o.
setTotalBlocks(long) - Method in class org.apache.hadoop.dfs.NamenodeFsck.FsckResult
 
setTotalDirs(long) - Method in class org.apache.hadoop.dfs.NamenodeFsck.FsckResult
 
setTotalFiles(long) - Method in class org.apache.hadoop.dfs.NamenodeFsck.FsckResult
 
setTotalLogFileSize(long) - Method in class org.apache.hadoop.mapred.TaskLogAppender
 
setTotalSize(long) - Method in class org.apache.hadoop.dfs.NamenodeFsck.FsckResult
 
setUnderReplicatedBlocks(long) - Method in class org.apache.hadoop.dfs.NamenodeFsck.FsckResult
 
setup(Configuration, JobConf, String[], String, Path, boolean) - Method in class org.apache.hadoop.util.CopyFiles.CopyFilesMapper
Interface to initialize distcp-specific map tasks.
setup(Configuration, JobConf, String[], String, Path, boolean) - Method in class org.apache.hadoop.util.CopyFiles.FSCopyFilesMapper
Initialize DFSCopyFileMapper specific job-configuration.
setup(Configuration, JobConf, String[], String, Path, boolean) - Method in class org.apache.hadoop.util.CopyFiles.HTTPCopyFilesMapper
Initialize HTTPCopyFileMapper specific job.
setUser(String) - Method in class org.apache.hadoop.mapred.JobConf
Set the reported username for this job.
setUserJobConfProps(boolean) - Method in class org.apache.hadoop.streaming.StreamJob
Sets the user jobconf variables specified with -jobconf key=value.
setValueClass(Class) - Method in class org.apache.hadoop.io.ArrayWritable
 
setValues(List<String>) - Method in class org.apache.hadoop.hbase.shell.InsertCommand
 
setVerbose(boolean) - Method in class org.apache.hadoop.streaming.JarBuilder
 
setVersion(long) - Method in class org.apache.hadoop.hbase.HStoreKey
Change the value of the timestamp field
setWorkingDirectory(Path) - Method in class org.apache.hadoop.dfs.DistributedFileSystem
 
setWorkingDirectory(Path) - Method in class org.apache.hadoop.dfs.HftpFileSystem
 
setWorkingDirectory(Path) - Method in class org.apache.hadoop.fs.FileSystem
Set the current working directory for the given file system.
setWorkingDirectory(Path) - Method in class org.apache.hadoop.fs.FilterFileSystem
Set the current working directory for the given file system.
setWorkingDirectory(Path) - Method in class org.apache.hadoop.fs.RawLocalFileSystem
Set the working directory to the given directory.
setWorkingDirectory(Path) - Method in class org.apache.hadoop.fs.s3.S3FileSystem
 
setWorkingDirectory(Path) - Method in class org.apache.hadoop.mapred.JobConf
Set the current working directory for the default file system
Shell - Class in org.apache.hadoop.hbase
An hbase shell.
Shell() - Constructor for class org.apache.hadoop.hbase.Shell
 
shippedCanonFiles_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
shouldPreserveInput() - Method in class org.apache.hadoop.io.SequenceFile.Sorter.SegmentDescriptor
 
shouldRetry(Exception, int) - Method in interface org.apache.hadoop.io.retry.RetryPolicy
Determines whether the framework should retry a method for the given exception, and the number of retries that have been made for that operation so far.
SHOW - Static variable in interface org.apache.hadoop.hbase.shell.generated.ParserConstants
 
showCommand() - Method in class org.apache.hadoop.hbase.shell.generated.Parser
 
ShowCommand - Class in org.apache.hadoop.hbase.shell
 
ShowCommand() - Constructor for class org.apache.hadoop.hbase.shell.ShowCommand
 
shutdown() - Method in class org.apache.hadoop.dfs.DataNode
Shut down this instance of the datanode.
shutdown() - Method in class org.apache.hadoop.dfs.SecondaryNameNode
Shut down this instance of the secondary name node.
shutdown() - Method in class org.apache.hadoop.hbase.HClient
Shuts down the HBase instance
shutdown() - Method in class org.apache.hadoop.hbase.HMaster
Shutdown an HBase cluster.
shutdown() - Method in interface org.apache.hadoop.hbase.HMasterInterface
Shutdown an HBase cluster.
shutdown() - Method in class org.apache.hadoop.mapred.TaskTracker
 
SimpleCharStream - Class in org.apache.hadoop.hbase.shell.generated
An implementation of interface CharStream, where the stream is assumed to contain only ASCII characters (without unicode processing).
SimpleCharStream(Reader, int, int, int) - Constructor for class org.apache.hadoop.hbase.shell.generated.SimpleCharStream
 
SimpleCharStream(Reader, int, int) - Constructor for class org.apache.hadoop.hbase.shell.generated.SimpleCharStream
 
SimpleCharStream(Reader) - Constructor for class org.apache.hadoop.hbase.shell.generated.SimpleCharStream
 
SimpleCharStream(InputStream, String, int, int, int) - Constructor for class org.apache.hadoop.hbase.shell.generated.SimpleCharStream
 
SimpleCharStream(InputStream, int, int, int) - Constructor for class org.apache.hadoop.hbase.shell.generated.SimpleCharStream
 
SimpleCharStream(InputStream, String, int, int) - Constructor for class org.apache.hadoop.hbase.shell.generated.SimpleCharStream
 
SimpleCharStream(InputStream, int, int) - Constructor for class org.apache.hadoop.hbase.shell.generated.SimpleCharStream
 
SimpleCharStream(InputStream, String) - Constructor for class org.apache.hadoop.hbase.shell.generated.SimpleCharStream
 
SimpleCharStream(InputStream) - Constructor for class org.apache.hadoop.hbase.shell.generated.SimpleCharStream
 
SimpleCharStream - Class in org.apache.hadoop.record.compiler.generated
An implementation of interface CharStream, where the stream is assumed to contain only ASCII characters (without unicode processing).
SimpleCharStream(Reader, int, int, int) - Constructor for class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
SimpleCharStream(Reader, int, int) - Constructor for class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
SimpleCharStream(Reader) - Constructor for class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
SimpleCharStream(InputStream, String, int, int, int) - Constructor for class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
SimpleCharStream(InputStream, int, int, int) - Constructor for class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
SimpleCharStream(InputStream, String, int, int) - Constructor for class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
SimpleCharStream(InputStream, int, int) - Constructor for class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
SimpleCharStream(InputStream, String) - Constructor for class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
SimpleCharStream(InputStream) - Constructor for class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
simpleHostname(String) - Static method in class org.apache.hadoop.util.StringUtils
Given a full hostname, return the word up to the first dot.
size() - Method in class org.apache.hadoop.mapred.Counters.Group
Returns the number of counters in this group.
size() - Method in class org.apache.hadoop.mapred.Counters
Returns the total number of counters, by summing the number of counters in each group.
size() - Method in class org.apache.hadoop.util.PriorityQueue
Returns the number of elements currently stored in the PriorityQueue.
skip(long) - Method in class org.apache.hadoop.fs.BufferedFSInputStream
 
skip(long) - Method in class org.apache.hadoop.fs.FSInputChecker
Skips over and discards n bytes of data from the input stream.
skip(long) - Method in class org.apache.hadoop.io.compress.GzipCodec.GzipInputStream
 
skip(DataInput) - Static method in class org.apache.hadoop.io.Text
Skips over one Text in the input.
skip(DataInput) - Static method in class org.apache.hadoop.io.UTF8
Deprecated. Skips over one UTF8 in the input.
skipCompressedByteArray(DataInput) - Static method in class org.apache.hadoop.io.WritableUtils
 
skipFully(InputStream, long) - Static method in class org.apache.hadoop.fs.FileUtil
 
skipFully(DataInput, int) - Static method in class org.apache.hadoop.io.WritableUtils
Skip len bytes in the input stream.
solution(List<List<ColumnName>>) - Method in interface org.apache.hadoop.examples.dancing.DancingLinks.SolutionAcceptor
A callback to return a solution to the application.
solve(int[], DancingLinks.SolutionAcceptor<ColumnName>) - Method in class org.apache.hadoop.examples.dancing.DancingLinks
Given a prefix, find solutions under it.
solve(DancingLinks.SolutionAcceptor<ColumnName>) - Method in class org.apache.hadoop.examples.dancing.DancingLinks
Solve a complete problem
solve(int[]) - Method in class org.apache.hadoop.examples.dancing.Pentomino
Find all of the solutions that start with the given prefix.
solve() - Method in class org.apache.hadoop.examples.dancing.Pentomino
Find all of the solutions to the puzzle.
solve() - Method in class org.apache.hadoop.examples.dancing.Sudoku
 
Sort - Class in org.apache.hadoop.examples
This is the trivial map/reduce program that does absolutely nothing other than use the framework to fragment and sort the input values.
Sort() - Constructor for class org.apache.hadoop.examples.Sort
 
sort(Path[], Path, boolean) - Method in class org.apache.hadoop.io.SequenceFile.Sorter
Perform a file sort from a set of input files into an output file.
sort(Path, Path) - Method in class org.apache.hadoop.io.SequenceFile.Sorter
The backwards compatible interface to sort.
sortAndIterate(Path[], Path, boolean) - Method in class org.apache.hadoop.io.SequenceFile.Sorter
Perform a file sort from a set of input files and return an iterator.
sortedLeases - Variable in class org.apache.hadoop.hbase.Leases
 
sortNodeList(ArrayList<DatanodeDescriptor>, String, String) - Method in class org.apache.hadoop.dfs.JspHelper
 
SOURCE_TAGS_FIELD - Static variable in class org.apache.hadoop.contrib.utils.join.DataJoinReducerBase
 
specialConstructor - Variable in exception org.apache.hadoop.hbase.shell.generated.ParseException
This variable determines which constructor was used to create this object and thereby affects the semantics of the "getMessage" method (see below).
specialConstructor - Variable in exception org.apache.hadoop.record.compiler.generated.ParseException
This variable determines which constructor was used to create this object and thereby affects the semantics of the "getMessage" method (see below).
specialToken - Variable in class org.apache.hadoop.hbase.shell.generated.Token
This field is used to access special tokens that occur prior to this token, but after the immediately preceding regular (non-special) token.
specialToken - Variable in class org.apache.hadoop.record.compiler.generated.Token
This field is used to access special tokens that occur prior to this token, but after the immediately preceding regular (non-special) token.
split(int) - Method in class org.apache.hadoop.examples.dancing.DancingLinks
Generate a list of row choices to cover the first moves.
splitKeyVal(byte[], int, int, Text, Text, int) - Static method in class org.apache.hadoop.streaming.UTF8ByteArrayUtils
Split a UTF-8 byte array into key and value, assuming the delimiter is at splitpos.
splitKeyVal(byte[], Text, Text, int) - Static method in class org.apache.hadoop.streaming.UTF8ByteArrayUtils
Split a UTF-8 byte array into key and value, assuming the delimiter is at splitpos.
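A small sketch of the splitKeyVal(byte[], Text, Text, int) overload above, assuming a tab-delimited line; the sample input is made up:

    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.streaming.UTF8ByteArrayUtils;

    public class SplitKeyValExample {
      public static void main(String[] args) throws Exception {
        byte[] line = "apple\t17".getBytes("UTF-8");
        // Locate the tab delimiter by hand; splitpos is its byte offset.
        int splitpos = -1;
        for (int i = 0; i < line.length; i++) {
          if (line[i] == '\t') { splitpos = i; break; }
        }
        Text key = new Text();
        Text value = new Text();
        UTF8ByteArrayUtils.splitKeyVal(line, key, value, splitpos);
        System.out.println(key + " -> " + value);   // apple -> 17
      }
    }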
splitOrCompactCheckFrequency - Variable in class org.apache.hadoop.hbase.HRegionServer
 
splitOrCompactLock - Variable in class org.apache.hadoop.hbase.HRegionServer
 
start() - Method in class org.apache.hadoop.ipc.Server
Starts the service.
start() - Method in class org.apache.hadoop.mapred.StatusHttpServer
Start the server.
startBatchUpdate() - Method in class org.apache.hadoop.hbase.HClient
Start a batch of row insertions/updates.
startLocalOutput(Path, Path) - Method in class org.apache.hadoop.fs.ChecksumFileSystem
 
startLocalOutput(Path, Path) - Method in class org.apache.hadoop.fs.FileSystem
Returns a local File that the user can write output to.
startLocalOutput(Path, Path) - Method in class org.apache.hadoop.fs.FilterFileSystem
Returns a local File that the user can write output to.
startLocalOutput(Path, Path) - Method in class org.apache.hadoop.fs.RawLocalFileSystem
 
startLocalOutput(Path, Path) - Method in class org.apache.hadoop.mapred.PhasedFileSystem
Deprecated.  
startMap(String) - Method in class org.apache.hadoop.record.BinaryRecordInput
 
startMap(TreeMap, String) - Method in class org.apache.hadoop.record.BinaryRecordOutput
 
startMap(String) - Method in class org.apache.hadoop.record.CsvRecordInput
 
startMap(TreeMap, String) - Method in class org.apache.hadoop.record.CsvRecordOutput
 
startMap(String) - Method in interface org.apache.hadoop.record.RecordInput
Check the mark for start of the serialized map.
startMap(TreeMap, String) - Method in interface org.apache.hadoop.record.RecordOutput
Mark the start of a map to be serialized.
startMap(String) - Method in class org.apache.hadoop.record.XmlRecordInput
 
startMap(TreeMap, String) - Method in class org.apache.hadoop.record.XmlRecordOutput
 
startMonitoring() - Method in class org.apache.hadoop.metrics.file.FileContext
Starts or restarts monitoring by opening, in append mode, the file specified by the fileName attribute, if specified.
startMonitoring() - Method in interface org.apache.hadoop.metrics.MetricsContext
Starts or restarts monitoring, the emitting of metrics records as they are updated.
startMonitoring() - Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
Starts or restarts monitoring, the emitting of metrics records.
startMonitoring() - Method in class org.apache.hadoop.metrics.spi.NullContext
Do-nothing version of startMonitoring
startNextPhase() - Method in class org.apache.hadoop.util.Progress
Called during execution to move to the next phase at this level in the tree.
startNotifier() - Static method in class org.apache.hadoop.mapred.JobEndNotifier
 
startRecord(String) - Method in class org.apache.hadoop.record.BinaryRecordInput
 
startRecord(Record, String) - Method in class org.apache.hadoop.record.BinaryRecordOutput
 
startRecord(String) - Method in class org.apache.hadoop.record.CsvRecordInput
 
startRecord(Record, String) - Method in class org.apache.hadoop.record.CsvRecordOutput
 
startRecord(String) - Method in interface org.apache.hadoop.record.RecordInput
Check the mark for start of the serialized record.
startRecord(Record, String) - Method in interface org.apache.hadoop.record.RecordOutput
Mark the start of a record to be serialized.
startRecord(String) - Method in class org.apache.hadoop.record.XmlRecordInput
 
startRecord(Record, String) - Method in class org.apache.hadoop.record.XmlRecordOutput
 
startTracker(JobConf) - Static method in class org.apache.hadoop.mapred.JobTracker
Start the JobTracker with given configuration.
startUpdate(Text) - Method in class org.apache.hadoop.hbase.HClient.BatchHandler
Start a batch row insertion/update.
startUpdate(Text) - Method in class org.apache.hadoop.hbase.HClient
Start an atomic row insertion/update.
startUpdate(Text) - Method in class org.apache.hadoop.hbase.HRegion
The caller wants to apply a series of writes to a single row in the HRegion.
startUpdate(Text, long, Text) - Method in interface org.apache.hadoop.hbase.HRegionInterface
Start an atomic row insertion/update.
startUpdate(Text, long, Text) - Method in class org.apache.hadoop.hbase.HRegionServer
Start an atomic row insertion/update.
startUpdate(Text) - Method in class org.apache.hadoop.hbase.io.BatchUpdate
Start a batch row insertion/update.
startUpgrade() - Method in interface org.apache.hadoop.dfs.Upgradeable
Prepare for the upgrade.
startupShutdownMessage(Class, String[], Log) - Static method in class org.apache.hadoop.util.StringUtils
Print a log message for starting up and shutting down
startVector(String) - Method in class org.apache.hadoop.record.BinaryRecordInput
 
startVector(ArrayList, String) - Method in class org.apache.hadoop.record.BinaryRecordOutput
 
startVector(String) - Method in class org.apache.hadoop.record.CsvRecordInput
 
startVector(ArrayList, String) - Method in class org.apache.hadoop.record.CsvRecordOutput
 
startVector(String) - Method in interface org.apache.hadoop.record.RecordInput
Check the mark for start of the serialized vector.
startVector(ArrayList, String) - Method in interface org.apache.hadoop.record.RecordOutput
Mark the start of a vector to be serialized.
startVector(String) - Method in class org.apache.hadoop.record.XmlRecordInput
 
startVector(ArrayList, String) - Method in class org.apache.hadoop.record.XmlRecordOutput
 
stateChangeLog - Static variable in class org.apache.hadoop.dfs.NameNode
 
staticFlag - Static variable in class org.apache.hadoop.hbase.shell.generated.SimpleCharStream
 
staticFlag - Static variable in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
StatusHttpServer - Class in org.apache.hadoop.mapred
Create a Jetty embedded server to answer http requests.
StatusHttpServer(String, String, int, boolean) - Constructor for class org.apache.hadoop.mapred.StatusHttpServer
Create a status server on the given port.
StatusHttpServer.StackServlet - Class in org.apache.hadoop.mapred
A very simple servlet to serve up a text representation of the current stack traces.
StatusHttpServer.StackServlet() - Constructor for class org.apache.hadoop.mapred.StatusHttpServer.StackServlet
 
STILL_WAITING - Static variable in interface org.apache.hadoop.dfs.FSConstants
 
stop() - Method in class org.apache.hadoop.dfs.NameNode
Stop all NameNode threads and wait for all to finish.
stop() - Method in class org.apache.hadoop.ipc.Client
Stop all threads related to this client.
stop() - Method in class org.apache.hadoop.ipc.Server
Stops the service.
stop() - Method in class org.apache.hadoop.mapred.jobcontrol.JobControl
Set the thread state to STOPPING so that the thread will stop when it wakes up.
stop() - Method in class org.apache.hadoop.mapred.StatusHttpServer
Stop the server.
stopClient() - Static method in class org.apache.hadoop.ipc.RPC
Stop all RPC client connections
stopMonitoring() - Method in class org.apache.hadoop.metrics.file.FileContext
Stops monitoring, closing the file.
stopMonitoring() - Method in interface org.apache.hadoop.metrics.MetricsContext
Stops monitoring.
stopMonitoring() - Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
Stops monitoring.
stopNotifier() - Static method in class org.apache.hadoop.mapred.JobEndNotifier
 
stopRequested - Variable in class org.apache.hadoop.hbase.HRegionServer
 
stopTracker() - Static method in class org.apache.hadoop.mapred.JobTracker
 
storageID - Variable in class org.apache.hadoop.dfs.DatanodeID
 
storeBlock(Block, File) - Method in interface org.apache.hadoop.fs.s3.FileSystemStore
 
storeINode(Path, INode) - Method in interface org.apache.hadoop.fs.s3.FileSystemStore
 
StreamBaseRecordReader - Class in org.apache.hadoop.streaming
Shared functionality for hadoopStreaming formats.
StreamBaseRecordReader(FSDataInputStream, FileSplit, Reporter, JobConf, FileSystem) - Constructor for class org.apache.hadoop.streaming.StreamBaseRecordReader
 
streamBlockInAscii(InetSocketAddress, long, long, long, long, JspWriter) - Method in class org.apache.hadoop.dfs.JspHelper
 
StreamFile - Class in org.apache.hadoop.dfs
 
StreamFile() - Constructor for class org.apache.hadoop.dfs.StreamFile
 
StreamInputFormat - Class in org.apache.hadoop.streaming
An input format that selects a RecordReader based on a JobConf property.
StreamInputFormat() - Constructor for class org.apache.hadoop.streaming.StreamInputFormat
 
StreamJob - Class in org.apache.hadoop.streaming
All the client-side work happens here.
StreamJob(String[], boolean) - Constructor for class org.apache.hadoop.streaming.StreamJob
 
StreamLineRecordReader - Class in org.apache.hadoop.streaming
Deprecated.  
StreamLineRecordReader(Configuration, FileSplit) - Constructor for class org.apache.hadoop.streaming.StreamLineRecordReader
Deprecated.  
StreamOutputFormat - Class in org.apache.hadoop.streaming
Deprecated.  
StreamOutputFormat() - Constructor for class org.apache.hadoop.streaming.StreamOutputFormat
Deprecated.  
StreamSequenceRecordReader - Class in org.apache.hadoop.streaming
Deprecated.  
StreamSequenceRecordReader(Configuration, FileSplit) - Constructor for class org.apache.hadoop.streaming.StreamSequenceRecordReader
Deprecated.  
StreamUtil - Class in org.apache.hadoop.streaming
Utilities not available elsewhere in Hadoop.
StreamUtil() - Constructor for class org.apache.hadoop.streaming.StreamUtil
 
StreamXmlRecordReader - Class in org.apache.hadoop.streaming
A way to interpret XML fragments as Mapper input records.
StreamXmlRecordReader(FSDataInputStream, FileSplit, Reporter, JobConf, FileSystem) - Constructor for class org.apache.hadoop.streaming.StreamXmlRecordReader
 
STRING - Static variable in interface org.apache.hadoop.hbase.shell.generated.ParserConstants
 
STRING_LITERAL - Static variable in interface org.apache.hadoop.hbase.shell.generated.ParserConstants
 
STRING_VALUE_MAX - Static variable in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorBaseDescriptor
 
STRING_VALUE_MIN - Static variable in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorBaseDescriptor
 
stringifyException(Throwable) - Static method in class org.apache.hadoop.util.StringUtils
Make a string representation of the exception.
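A brief sketch of turning an exception into a single loggable string with the method above:

    import org.apache.hadoop.util.StringUtils;

    public class StringifyExceptionExample {
      public static void main(String[] args) {
        try {
          throw new IllegalStateException("something went wrong");
        } catch (Exception e) {
          // The returned string includes the message and the stack trace.
          System.err.println(StringUtils.stringifyException(e));
        }
      }
    }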
stringifySolution(int, int, List<List<Pentomino.ColumnName>>) - Static method in class org.apache.hadoop.examples.dancing.Pentomino
Convert a solution to the puzzle returned by the model into a string that represents the placement of the pieces onto the board.
stringToPath(String[]) - Static method in class org.apache.hadoop.util.StringUtils
 
stringToURI(String[]) - Static method in class org.apache.hadoop.util.StringUtils
 
StringUtils - Class in org.apache.hadoop.util
General string utils
StringUtils() - Constructor for class org.apache.hadoop.util.StringUtils
 
StringValueMax - Class in org.apache.hadoop.mapred.lib.aggregate
This class implements a value aggregator that maintains the biggest of a sequence of strings.
StringValueMax() - Constructor for class org.apache.hadoop.mapred.lib.aggregate.StringValueMax
the default constructor
StringValueMin - Class in org.apache.hadoop.mapred.lib.aggregate
This class implements a value aggregator that maintains the smallest of a sequence of strings.
StringValueMin() - Constructor for class org.apache.hadoop.mapred.lib.aggregate.StringValueMin
the default constructor
submit() - Method in class org.apache.hadoop.mapred.jobcontrol.Job
Submit this job to mapred.
submitAndMonitorJob() - Method in class org.apache.hadoop.streaming.StreamJob
 
submitJob(String) - Method in class org.apache.hadoop.mapred.JobClient
Submit a job to the MR system
submitJob(JobConf) - Method in class org.apache.hadoop.mapred.JobClient
Submit a job to the MR system
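A minimal sketch of submitting a job through JobClient.submitJob(JobConf); a real job would also need input/output paths and mapper/reducer classes, which are omitted here:

    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.RunningJob;

    public class SubmitJobExample {
      public static void main(String[] args) throws Exception {
        JobConf job = new JobConf();
        job.setJobName("submit-demo");       // illustrative; job setup omitted
        JobClient client = new JobClient(job);
        // submitJob returns without waiting for completion; the RunningJob
        // handle can be used to track the job after submission.
        RunningJob running = client.submitJob(job);
      }
    }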
submitJob(String) - Method in interface org.apache.hadoop.mapred.JobSubmissionProtocol
Submit a Job for execution.
submitJob(String) - Method in class org.apache.hadoop.mapred.JobTracker
JobTracker.submitJob() kicks off a new job.
submitJob(JobConf) - Static method in class org.apache.hadoop.mapred.pipes.Submitter
Submit a job to the map/reduce cluster.
Submitter - Class in org.apache.hadoop.mapred.pipes
The main entry point and job submitter.
Submitter() - Constructor for class org.apache.hadoop.mapred.pipes.Submitter
 
SUCCEEDED - Static variable in class org.apache.hadoop.mapred.JobStatus
 
SUCCESS - Static variable in class org.apache.hadoop.mapred.jobcontrol.Job
 
Sudoku - Class in org.apache.hadoop.examples.dancing
This class uses the dancing links algorithm from Knuth to solve sudoku puzzles.
Sudoku(InputStream) - Constructor for class org.apache.hadoop.examples.dancing.Sudoku
Set up a puzzle board to the given size.
Sudoku.ColumnName - Interface in org.apache.hadoop.examples.dancing
This interface is a marker class for the columns created for the Sudoku solver.
suffix(String) - Method in class org.apache.hadoop.fs.Path
Adds a suffix to the final name in the path.
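A tiny sketch of the suffix method above (together with toUri, listed later in this index); the path is illustrative:

    import org.apache.hadoop.fs.Path;

    public class PathSuffixExample {
      public static void main(String[] args) {
        Path part = new Path("/user/demo/part-00000");
        Path backup = part.suffix(".bak");
        System.out.println(backup);          // /user/demo/part-00000.bak
        System.out.println(part.toUri());    // the same path as a URI
      }
    }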
sum(Counters, Counters) - Static method in class org.apache.hadoop.mapred.Counters
Convenience method for computing the sum of two sets of counters.
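A minimal sketch of combining two counter sets with the convenience method above; how the two Counters objects are obtained (for example, from two completed jobs) is left to the caller:

    import org.apache.hadoop.mapred.Counters;

    public class CountersSumExample {
      // Combine the counters of two jobs into a single total.
      static Counters combine(Counters first, Counters second) {
        return Counters.sum(first, second);
      }
    }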
suspend() - Method in class org.apache.hadoop.mapred.jobcontrol.JobControl
Suspend the running thread.
SwitchTo(int) - Method in class org.apache.hadoop.hbase.shell.generated.ParserTokenManager
 
SwitchTo(int) - Method in class org.apache.hadoop.record.compiler.generated.RccTokenManager
 
symLink(String, String) - Static method in class org.apache.hadoop.fs.FileUtil
Create a soft link between a source and a destination, on a local disk only.
sync(long) - Method in class org.apache.hadoop.io.SequenceFile.Reader
Seek to the next sync mark past a given position.
sync() - Method in class org.apache.hadoop.io.SequenceFile.Writer
Create a sync point.
SYNC_INTERVAL - Static variable in class org.apache.hadoop.io.SequenceFile
The number of bytes between sync points.
syncSeen() - Method in class org.apache.hadoop.io.SequenceFile.Reader
Returns true iff the previous call to next passed a sync mark.

T

tableExists(Text) - Method in class org.apache.hadoop.hbase.HClient
 
TableExistsException - Exception in org.apache.hadoop.hbase
 
TableExistsException() - Constructor for exception org.apache.hadoop.hbase.TableExistsException
 
TableExistsException(String) - Constructor for exception org.apache.hadoop.hbase.TableExistsException
 
TableInputFormat - Class in org.apache.hadoop.hbase.mapred
Convert HBase tabular data into a format that is consumable by Map/Reduce
TableInputFormat() - Constructor for class org.apache.hadoop.hbase.mapred.TableInputFormat
 
TableMap - Class in org.apache.hadoop.hbase.mapred
Scan an HBase table to sort by a specified sort column.
TableMap() - Constructor for class org.apache.hadoop.hbase.mapred.TableMap
constructor
TableNotDisabledException - Exception in org.apache.hadoop.hbase
 
TableNotDisabledException() - Constructor for exception org.apache.hadoop.hbase.TableNotDisabledException
 
TableNotDisabledException(String) - Constructor for exception org.apache.hadoop.hbase.TableNotDisabledException
 
TableOutputCollector - Class in org.apache.hadoop.hbase.mapred
Refine the types that can be collected from Table Map/Reduce jobs.
TableOutputCollector() - Constructor for class org.apache.hadoop.hbase.mapred.TableOutputCollector
 
TableOutputFormat - Class in org.apache.hadoop.hbase.mapred
Convert Map/Reduce output and write it to an HBase table
TableOutputFormat() - Constructor for class org.apache.hadoop.hbase.mapred.TableOutputFormat
constructor
TableOutputFormat.TableRecordWriter - Class in org.apache.hadoop.hbase.mapred
Convert Reduce output (key, value) to (HStoreKey, KeyedDataArrayWritable) and write to an HBase table
TableOutputFormat.TableRecordWriter(HClient) - Constructor for class org.apache.hadoop.hbase.mapred.TableOutputFormat.TableRecordWriter
Instantiate a TableRecordWriter with the HBase HClient for writing.
TableReduce - Class in org.apache.hadoop.hbase.mapred
Write a table, sorting by the input key
TableReduce() - Constructor for class org.apache.hadoop.hbase.mapred.TableReduce
Constructor
TableSplit - Class in org.apache.hadoop.hbase.mapred
A table split corresponds to a key range [low, high)
TableSplit() - Constructor for class org.apache.hadoop.hbase.mapred.TableSplit
default constructor
TableSplit(Text, Text, Text) - Constructor for class org.apache.hadoop.hbase.mapred.TableSplit
Constructor
tabSize - Variable in class org.apache.hadoop.hbase.shell.generated.SimpleCharStream
 
tabSize - Variable in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
tag - Variable in class org.apache.hadoop.contrib.utils.join.TaggedMapOutput
 
TaggedMapOutput - Class in org.apache.hadoop.contrib.utils.join
This abstract class serves as the base class for the values that flow from the mappers to the reducers in a data join job.
TaggedMapOutput() - Constructor for class org.apache.hadoop.contrib.utils.join.TaggedMapOutput
 
TaskCompletionEvent - Class in org.apache.hadoop.mapred
This is used to track task completion events on job tracker.
TaskCompletionEvent() - Constructor for class org.apache.hadoop.mapred.TaskCompletionEvent
Default constructor for Writable.
TaskCompletionEvent(int, String, int, boolean, TaskCompletionEvent.Status, String) - Constructor for class org.apache.hadoop.mapred.TaskCompletionEvent
Constructor.
TaskCompletionEvent.Status - Enum in org.apache.hadoop.mapred
 
TaskLog - Class in org.apache.hadoop.mapred
A simple logger to handle the task-specific user logs.
TaskLog() - Constructor for class org.apache.hadoop.mapred.TaskLog
 
TaskLog.LogName - Enum in org.apache.hadoop.mapred
The filter for userlogs.
TaskLog.Reader - Class in org.apache.hadoop.mapred
 
TaskLog.Reader(String, TaskLog.LogName, long, long) - Constructor for class org.apache.hadoop.mapred.TaskLog.Reader
Read a log file from start to end positions.
TaskLogAppender - Class in org.apache.hadoop.mapred
A simple log4j-appender for the task child's map-reduce system logs.
TaskLogAppender() - Constructor for class org.apache.hadoop.mapred.TaskLogAppender
 
TaskLogServlet - Class in org.apache.hadoop.mapred
A servlet that is run by the TaskTrackers to provide the task logs via http.
TaskLogServlet() - Constructor for class org.apache.hadoop.mapred.TaskLogServlet
 
TaskReport - Class in org.apache.hadoop.mapred
A report on the state of a task.
TaskReport() - Constructor for class org.apache.hadoop.mapred.TaskReport
 
TaskTracker - Class in org.apache.hadoop.mapred
TaskTracker is a process that starts and tracks MR Tasks in a networked environment.
TaskTracker(JobConf) - Constructor for class org.apache.hadoop.mapred.TaskTracker
Start with the local machine name, and the default JobTracker
TaskTracker.Child - Class in org.apache.hadoop.mapred
The main() for child processes.
TaskTracker.Child() - Constructor for class org.apache.hadoop.mapred.TaskTracker.Child
 
TaskTracker.MapOutputServlet - Class in org.apache.hadoop.mapred
This class is used in TaskTracker's Jetty to serve the map outputs to other nodes.
TaskTracker.MapOutputServlet() - Constructor for class org.apache.hadoop.mapred.TaskTracker.MapOutputServlet
 
taskTrackers() - Method in class org.apache.hadoop.mapred.JobTracker
 
terminatedCommand() - Method in class org.apache.hadoop.hbase.shell.generated.Parser
Parses the given array of command line arguments.
Text - Class in org.apache.hadoop.io
This class stores text using standard UTF8 encoding.
Text() - Constructor for class org.apache.hadoop.io.Text
 
Text(String) - Constructor for class org.apache.hadoop.io.Text
Construct from a string.
Text(Text) - Constructor for class org.apache.hadoop.io.Text
Construct from another text.
Text(byte[]) - Constructor for class org.apache.hadoop.io.Text
Construct from a byte array.
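A short sketch of basic Text usage, drawing on the constructors above plus toString, getLength and utf8Length from elsewhere in this index:

    import org.apache.hadoop.io.Text;

    public class TextExample {
      public static void main(String[] args) {
        Text greeting = new Text("héllo");            // stored as standard UTF-8
        System.out.println(greeting.getLength());     // length in bytes, not chars
        System.out.println(greeting.toString());      // convert back to a String
        System.out.println(Text.utf8Length("héllo")); // UTF-8 bytes needed for a String
      }
    }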
Text.Comparator - Class in org.apache.hadoop.io
A WritableComparator optimized for Text keys.
Text.Comparator() - Constructor for class org.apache.hadoop.io.Text.Comparator
 
TextInputFormat - Class in org.apache.hadoop.mapred
An InputFormat for plain text files.
TextInputFormat() - Constructor for class org.apache.hadoop.mapred.TextInputFormat
 
TextOutputFormat - Class in org.apache.hadoop.mapred
An OutputFormat that writes plain text files.
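A minimal sketch of selecting the plain-text input and output formats above on a JobConf; the rest of the job setup is omitted:

    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.TextInputFormat;
    import org.apache.hadoop.mapred.TextOutputFormat;

    public class TextFormatsExample {
      public static void main(String[] args) {
        JobConf job = new JobConf();
        // Read plain text lines and write plain text lines.
        job.setInputFormat(TextInputFormat.class);
        job.setOutputFormat(TextOutputFormat.class);
      }
    }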
TextOutputFormat() - Constructor for class org.apache.hadoop.mapred.TextOutputFormat
 
TextOutputFormat.LineRecordWriter - Class in org.apache.hadoop.mapred
 
TextOutputFormat.LineRecordWriter(DataOutputStream) - Constructor for class org.apache.hadoop.mapred.TextOutputFormat.LineRecordWriter
 
THREAD_WAKE_FREQUENCY - Static variable in interface org.apache.hadoop.hbase.HConstants
Parameter name for how often threads should wake up
threadWakeFrequency - Variable in class org.apache.hadoop.hbase.HRegionServer
 
TIME - Static variable in interface org.apache.hadoop.hbase.shell.generated.ParserConstants
 
timestamp - Variable in class org.apache.hadoop.hbase.HAbstractScanner
 
toArray(List<byte[]>) - Static method in class org.apache.hadoop.hbase.io.ImmutableBytesWritable
 
toArray() - Method in class org.apache.hadoop.io.ArrayWritable
 
toArray() - Method in class org.apache.hadoop.io.TwoDArrayWritable
 
token - Variable in class org.apache.hadoop.hbase.shell.generated.Parser
 
Token - Class in org.apache.hadoop.hbase.shell.generated
Describes the input token stream.
Token() - Constructor for class org.apache.hadoop.hbase.shell.generated.Token
 
token - Variable in class org.apache.hadoop.record.compiler.generated.Rcc
 
Token - Class in org.apache.hadoop.record.compiler.generated
Describes the input token stream.
Token() - Constructor for class org.apache.hadoop.record.compiler.generated.Token
 
token_source - Variable in class org.apache.hadoop.hbase.shell.generated.Parser
 
token_source - Variable in class org.apache.hadoop.record.compiler.generated.Rcc
 
TokenCountMapper - Class in org.apache.hadoop.mapred.lib
A Mapper that maps text values into <token, freq> pairs.
TokenCountMapper() - Constructor for class org.apache.hadoop.mapred.lib.TokenCountMapper
 
tokenImage - Variable in exception org.apache.hadoop.hbase.shell.generated.ParseException
This is a reference to the "tokenImage" array of the generated parser within which the parse error occurred.
tokenImage - Static variable in interface org.apache.hadoop.hbase.shell.generated.ParserConstants
 
tokenImage - Variable in exception org.apache.hadoop.record.compiler.generated.ParseException
This is a reference to the "tokenImage" array of the generated parser within which the parse error occurred.
tokenImage - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
TokenMgrError - Error in org.apache.hadoop.hbase.shell.generated
 
TokenMgrError() - Constructor for error org.apache.hadoop.hbase.shell.generated.TokenMgrError
 
TokenMgrError(String, int) - Constructor for error org.apache.hadoop.hbase.shell.generated.TokenMgrError
 
TokenMgrError(boolean, int, int, int, String, char, int) - Constructor for error org.apache.hadoop.hbase.shell.generated.TokenMgrError
 
TokenMgrError - Error in org.apache.hadoop.record.compiler.generated
 
TokenMgrError() - Constructor for error org.apache.hadoop.record.compiler.generated.TokenMgrError
 
TokenMgrError(String, int) - Constructor for error org.apache.hadoop.record.compiler.generated.TokenMgrError
 
TokenMgrError(boolean, int, int, int, String, char, int) - Constructor for error org.apache.hadoop.record.compiler.generated.TokenMgrError
 
Tool - Interface in org.apache.hadoop.util
A tool interface that supports generic options handling.
ToolBase - Class in org.apache.hadoop.util
This is a base class to support generic command options.
ToolBase() - Constructor for class org.apache.hadoop.util.ToolBase
 
top() - Method in class org.apache.hadoop.util.PriorityQueue
Returns the least element of the PriorityQueue in constant time.
toString() - Method in class org.apache.hadoop.conf.Configuration
 
toString() - Method in class org.apache.hadoop.dfs.DataNode
 
toString() - Method in class org.apache.hadoop.dfs.DatanodeID
 
toString() - Method in class org.apache.hadoop.dfs.DistributedFileSystem
 
toString() - Method in class org.apache.hadoop.dfs.NamenodeFsck.FsckResult
 
toString() - Method in class org.apache.hadoop.dfs.UpgradeStatusReport
Print basic upgradeStatus details.
toString() - Method in class org.apache.hadoop.fs.DF
 
toString() - Method in class org.apache.hadoop.fs.Path
 
toString() - Method in class org.apache.hadoop.fs.RawLocalFileSystem
 
toString() - Method in class org.apache.hadoop.fs.s3.Block
 
toString() - Method in class org.apache.hadoop.hbase.BloomFilterDescriptor
 
toString() - Method in class org.apache.hadoop.hbase.HClient.RegionLocation
 
toString() - Method in class org.apache.hadoop.hbase.HColumnDescriptor
 
toString() - Method in class org.apache.hadoop.hbase.HLogEdit
 
toString() - Method in class org.apache.hadoop.hbase.HLogKey
 
toString() - Method in class org.apache.hadoop.hbase.HMsg
 
toString() - Method in class org.apache.hadoop.hbase.HRegionInfo
 
toString() - Method in class org.apache.hadoop.hbase.HServerAddress
 
toString() - Method in class org.apache.hadoop.hbase.HServerInfo
 
toString() - Method in class org.apache.hadoop.hbase.HStoreKey
 
toString() - Method in class org.apache.hadoop.hbase.HTableDescriptor
 
toString() - Method in class org.apache.hadoop.hbase.io.ImmutableBytesWritable
Generate the stream of bytes as hex pairs separated by ' '.
toString() - Method in class org.apache.hadoop.hbase.mapred.TableSplit
 
toString() - Method in class org.apache.hadoop.hbase.shell.generated.Token
Returns the image.
toString() - Method in class org.apache.hadoop.io.BytesWritable
Generate the stream of bytes as hex pairs separated by ' '.
toString() - Method in class org.apache.hadoop.io.compress.CompressionCodecFactory
Print the extension map out as a string.
toString() - Method in class org.apache.hadoop.io.FloatWritable
 
toString() - Method in class org.apache.hadoop.io.IntWritable
 
toString() - Method in class org.apache.hadoop.io.LongWritable
 
toString() - Method in class org.apache.hadoop.io.MD5Hash
Returns a string representation of this object.
toString() - Method in class org.apache.hadoop.io.SequenceFile.Metadata
 
toString() - Method in class org.apache.hadoop.io.SequenceFile.Reader
Returns the name of the file.
toString() - Method in class org.apache.hadoop.io.Text
Convert text back to string
toString() - Method in class org.apache.hadoop.io.UTF8
Deprecated. Convert to a String.
toString() - Method in exception org.apache.hadoop.io.VersionMismatchException
Returns a string representation of this object.
toString() - Method in class org.apache.hadoop.io.VIntWritable
 
toString() - Method in class org.apache.hadoop.io.VLongWritable
 
toString() - Method in class org.apache.hadoop.mapred.FileSplit
 
toString() - Method in class org.apache.hadoop.mapred.jobcontrol.Job
 
toString() - Method in class org.apache.hadoop.mapred.lib.aggregate.UserDefinedValueAggregatorDescriptor
 
toString() - Method in class org.apache.hadoop.mapred.TaskCompletionEvent
 
toString() - Method in enum org.apache.hadoop.mapred.TaskLog.LogName
 
toString() - Method in class org.apache.hadoop.net.NetworkTopology
Convert a network tree to a string.
toString() - Method in class org.apache.hadoop.net.NodeBase
Return this node's string representation
toString() - Method in class org.apache.hadoop.record.Buffer
 
toString(String) - Method in class org.apache.hadoop.record.Buffer
Convert the byte buffer to a string using a specific character encoding.
toString() - Method in class org.apache.hadoop.record.compiler.CodeBuffer
 
toString() - Method in class org.apache.hadoop.record.compiler.generated.Token
Returns the image.
toString() - Method in class org.apache.hadoop.record.Record
 
toString() - Method in class org.apache.hadoop.util.Progress
 
toString() - Method in class org.onelab.filter.BloomFilter
Returns a String representation of this Bloom filter.
toString() - Method in class org.onelab.filter.CountingBloomFilter
 
toString() - Method in class org.onelab.filter.DynamicBloomFilter
 
toStrings() - Method in class org.apache.hadoop.io.ArrayWritable
 
touch(File) - Static method in class org.apache.hadoop.streaming.StreamUtil
 
toUri() - Method in class org.apache.hadoop.fs.Path
Convert this to a URI.
toURI(String) - Static method in class org.apache.hadoop.util.CopyFiles
 
transform(InputStream, InputStream, Writer) - Static method in class org.apache.hadoop.util.XMLUtils
Transform input xml given a stylesheet.
Trash - Class in org.apache.hadoop.fs
Provides a trash feature.
Trash(Configuration) - Constructor for class org.apache.hadoop.fs.Trash
Construct a trash can accessor.
truncate() - Method in class org.apache.hadoop.record.Buffer
Change the capacity of the backing store to match the current count of the buffer.
TRY_ONCE_DONT_FAIL - Static variable in class org.apache.hadoop.io.retry.RetryPolicies
Try once, and fail silently for void methods, or by re-throwing the exception for non-void methods.
TRY_ONCE_THEN_FAIL - Static variable in class org.apache.hadoop.io.retry.RetryPolicies
Try once, and fail by re-throwing the exception.
TwoDArrayWritable - Class in org.apache.hadoop.io
A Writable for 2D arrays containing a matrix of instances of a class.
TwoDArrayWritable(Class) - Constructor for class org.apache.hadoop.io.TwoDArrayWritable
 
TwoDArrayWritable(Class, Writable[][]) - Constructor for class org.apache.hadoop.io.TwoDArrayWritable
 
twoRotations - Static variable in class org.apache.hadoop.examples.dancing.Pentomino
Is the piece identical if rotated 180 degrees?
Type() - Method in class org.apache.hadoop.record.compiler.generated.Rcc
 
TYPE_SEPARATOR - Static variable in interface org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorDescriptor
 

U

UNIQ_VALUE_COUNT - Static variable in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorBaseDescriptor
 
UniqValueCount - Class in org.apache.hadoop.mapred.lib.aggregate
This class implements a value aggregator that dedupes a sequence of objects.
UniqValueCount() - Constructor for class org.apache.hadoop.mapred.lib.aggregate.UniqValueCount
the default constructor
UniqValueCount(long) - Constructor for class org.apache.hadoop.mapred.lib.aggregate.UniqValueCount
constructor
unJar(File, File) - Static method in class org.apache.hadoop.util.RunJar
Unpack a jar file into a directory.
UnknownScannerException - Exception in org.apache.hadoop.hbase
 
UnknownScannerException() - Constructor for exception org.apache.hadoop.hbase.UnknownScannerException
 
UnknownScannerException(String) - Constructor for exception org.apache.hadoop.hbase.UnknownScannerException
 
unregisterUpdater(Updater) - Method in interface org.apache.hadoop.metrics.MetricsContext
Removes a callback, if it exists.
unregisterUpdater(Updater) - Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
Removes a callback, if it exists.
unZip(File, File) - Static method in class org.apache.hadoop.fs.FileUtil
Given a zip File as input, unzip it into the directory passed as the second parameter.
update(byte[], int, int) - Method in class org.apache.hadoop.dfs.DataChecksum
 
update(int) - Method in class org.apache.hadoop.dfs.DataChecksum
 
update() - Method in interface org.apache.hadoop.metrics.MetricsRecord
Updates the table of buffered data which is to be sent periodically.
update(MetricsRecordImpl) - Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
Called by MetricsRecordImpl.update().
update() - Method in class org.apache.hadoop.metrics.spi.MetricsRecordImpl
Updates the table of buffered data which is to be sent periodically.
update(MetricsRecordImpl) - Method in class org.apache.hadoop.metrics.spi.NullContext
Do-nothing version of update
UpdateLineColumn(char) - Method in class org.apache.hadoop.hbase.shell.generated.SimpleCharStream
 
UpdateLineColumn(char) - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
Updater - Interface in org.apache.hadoop.metrics
Call-back interface.
Upgradeable - Interface in org.apache.hadoop.dfs
Common interface for distributed upgrade objects.
upgradeProgress(String[], int) - Method in class org.apache.hadoop.dfs.DFSAdmin
Command to request current distributed upgrade status, a detailed status, or to force the upgrade to proceed.
upgradeStatus - Variable in class org.apache.hadoop.dfs.UpgradeStatusReport
 
UpgradeStatusReport - Class in org.apache.hadoop.dfs
Base upgrade status report class.
UpgradeStatusReport() - Constructor for class org.apache.hadoop.dfs.UpgradeStatusReport
 
UpgradeStatusReport(int, short) - Constructor for class org.apache.hadoop.dfs.UpgradeStatusReport
 
uriToString(URI[]) - Static method in class org.apache.hadoop.util.StringUtils
 
usage() - Static method in class org.apache.hadoop.record.compiler.generated.Rcc
 
UserDefinedValueAggregatorDescriptor - Class in org.apache.hadoop.mapred.lib.aggregate
This class implements a wrapper for a user defined value aggregator descriptor.
UserDefinedValueAggregatorDescriptor(String, JobConf) - Constructor for class org.apache.hadoop.mapred.lib.aggregate.UserDefinedValueAggregatorDescriptor
 
userJobConfProps_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
USTRING_TKN - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
UTF8 - Class in org.apache.hadoop.io
Deprecated. replaced by Text
UTF8() - Constructor for class org.apache.hadoop.io.UTF8
Deprecated.  
UTF8(String) - Constructor for class org.apache.hadoop.io.UTF8
Deprecated. Construct from a given string.
UTF8(UTF8) - Constructor for class org.apache.hadoop.io.UTF8
Deprecated. Construct from a given string.
UTF8.Comparator - Class in org.apache.hadoop.io
Deprecated. A WritableComparator optimized for UTF8 keys.
UTF8.Comparator() - Constructor for class org.apache.hadoop.io.UTF8.Comparator
Deprecated.  
UTF8_ENCODING - Static variable in interface org.apache.hadoop.hbase.HConstants
When we encode strings, we always specify UTF8 encoding
UTF8ByteArrayUtils - Class in org.apache.hadoop.streaming
General utilities for byte arrays containing UTF-8 encoded strings.
UTF8ByteArrayUtils() - Constructor for class org.apache.hadoop.streaming.UTF8ByteArrayUtils
 
utf8Length(String) - Static method in class org.apache.hadoop.io.Text
For the given string, returns the number of UTF-8 bytes required to encode the string.
Util - Class in org.apache.hadoop.metrics.spi
Static utility methods
Utils - Class in org.apache.hadoop.record
Various utility functions for the Hadoop record I/O runtime.

V

validate(Text[]) - Method in class org.apache.hadoop.hbase.filter.PageRowFilter
Validates that this filter applies only to a subset of the given columns.
validate(Text[]) - Method in class org.apache.hadoop.hbase.filter.RegExpRowFilter
Validates that this filter applies only to a subset of the given columns.
validate(Text[]) - Method in interface org.apache.hadoop.hbase.filter.RowFilterInterface
Validates that this filter applies only to a subset of the given columns.
validate(Text[]) - Method in class org.apache.hadoop.hbase.filter.RowFilterSet
Validates that this filter applies only to a subset of the given columns.
validateInput(JobConf) - Method in class org.apache.hadoop.hbase.mapred.TableInputFormat
 
validateInput(JobConf) - Method in class org.apache.hadoop.mapred.FileInputFormat
 
validateInput(JobConf) - Method in interface org.apache.hadoop.mapred.InputFormat
Are the input directories valid? This method is used to test the input directories when a job is submitted so that the framework can fail early with a useful error message when the input directory does not exist.
validateInput(JobConf) - Method in class org.apache.hadoop.streaming.StreamBaseRecordReader
This implementation always returns true.
validateUTF8(byte[]) - Static method in class org.apache.hadoop.io.Text
Check if a byte array contains valid UTF-8.
validateUTF8(byte[], int, int) - Static method in class org.apache.hadoop.io.Text
Check whether the given range of a byte array is valid UTF-8.
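
A hedged sketch of validating raw bytes before wrapping them in a Text; it assumes invalid input is signalled by an exception, and all names below are hypothetical:

    // Hypothetical check of raw bytes with Text.validateUTF8.
    import org.apache.hadoop.io.Text;

    public class ValidateUtf8Demo {
      public static void main(String[] args) {
        byte[] good = {(byte) 0x68, (byte) 0x69};          // "hi"
        byte[] bad  = {(byte) 0xC0, (byte) 0x20};          // malformed sequence
        for (byte[] candidate : new byte[][] {good, bad}) {
          try {
            Text.validateUTF8(candidate);                  // a (byte[], int, int) form also exists
            System.out.println("valid");
          } catch (Exception e) {                          // assumed failure signal
            System.out.println("invalid: " + e);
          }
        }
      }
    }
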
vals - Variable in class org.apache.hadoop.hbase.HAbstractScanner
 
VALUE_HISTOGRAM - Static variable in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorBaseDescriptor
 
ValueAggregator - Interface in org.apache.hadoop.mapred.lib.aggregate
This interface defines the minimal protocol for value aggregators.
ValueAggregatorBaseDescriptor - Class in org.apache.hadoop.mapred.lib.aggregate
This class implements the common functionalities of the subclasses of ValueAggregatorDescriptor class.
ValueAggregatorBaseDescriptor() - Constructor for class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorBaseDescriptor
 
ValueAggregatorCombiner - Class in org.apache.hadoop.mapred.lib.aggregate
This class implements the generic combiner of Aggregate.
ValueAggregatorCombiner() - Constructor for class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorCombiner
 
ValueAggregatorDescriptor - Interface in org.apache.hadoop.mapred.lib.aggregate
This interface defines the contract a value aggregator descriptor must support.
ValueAggregatorJob - Class in org.apache.hadoop.mapred.lib.aggregate
This is the main class for creating a map/reduce job using Aggregate framework.
ValueAggregatorJob() - Constructor for class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorJob
 
ValueAggregatorJobBase - Class in org.apache.hadoop.mapred.lib.aggregate
This abstract class implements some common functionality of the generic mapper, reducer and combiner classes of Aggregate.
ValueAggregatorJobBase() - Constructor for class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorJobBase
 
ValueAggregatorMapper - Class in org.apache.hadoop.mapred.lib.aggregate
This class implements the generic mapper of Aggregate.
ValueAggregatorMapper() - Constructor for class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorMapper
 
ValueAggregatorReducer - Class in org.apache.hadoop.mapred.lib.aggregate
This class implements the generic reducer of Aggregate.
ValueAggregatorReducer() - Constructor for class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorReducer
 
ValueHistogram - Class in org.apache.hadoop.mapred.lib.aggregate
This class implements a value aggregator that computes the histogram of a sequence of strings.
ValueHistogram() - Constructor for class org.apache.hadoop.mapred.lib.aggregate.ValueHistogram
 
valueOf(String) - Static method in enum org.apache.hadoop.dfs.DatanodeInfo.AdminStates
Returns the enum constant of this type with the specified name.
valueOf(String) - Static method in enum org.apache.hadoop.dfs.FSConstants.NodeType
Returns the enum constant of this type with the specified name.
valueOf(String) - Static method in enum org.apache.hadoop.dfs.FSConstants.SafeModeAction
Returns the enum constant of this type with the specified name.
valueOf(String) - Static method in enum org.apache.hadoop.dfs.FSConstants.StartupOption
Returns the enum constant of this type with the specified name.
valueOf(String) - Static method in enum org.apache.hadoop.dfs.FSConstants.UpgradeAction
Returns the enum constant of this type with the specified name.
valueOf(String) - Static method in enum org.apache.hadoop.examples.dancing.Pentomino.SolutionCategory
Returns the enum constant of this type with the specified name.
valueOf(String) - Static method in enum org.apache.hadoop.hbase.HColumnDescriptor.CompressionType
Returns the enum constant of this type with the specified name.
valueOf(String) - Static method in enum org.apache.hadoop.io.compress.lzo.LzoCompressor.CompressionStrategy
Returns the enum constant of this type with the specified name.
valueOf(String) - Static method in enum org.apache.hadoop.io.compress.lzo.LzoDecompressor.CompressionStrategy
Returns the enum constant of this type with the specified name.
valueOf(String) - Static method in enum org.apache.hadoop.io.compress.zlib.ZlibCompressor.CompressionHeader
Returns the enum constant of this type with the specified name.
valueOf(String) - Static method in enum org.apache.hadoop.io.compress.zlib.ZlibCompressor.CompressionLevel
Returns the enum constant of this type with the specified name.
valueOf(String) - Static method in enum org.apache.hadoop.io.compress.zlib.ZlibCompressor.CompressionStrategy
Returns the enum constant of this type with the specified name.
valueOf(String) - Static method in enum org.apache.hadoop.io.compress.zlib.ZlibDecompressor.CompressionHeader
Returns the enum constant of this type with the specified name.
valueOf(String) - Static method in enum org.apache.hadoop.io.SequenceFile.CompressionType
Returns the enum constant of this type with the specified name.
valueOf(String) - Static method in enum org.apache.hadoop.mapred.JobClient.TaskStatusFilter
Returns the enum constant of this type with the specified name.
valueOf(String) - Static method in enum org.apache.hadoop.mapred.JobHistory.Keys
Returns the enum constant of this type with the specified name.
valueOf(String) - Static method in enum org.apache.hadoop.mapred.JobHistory.RecordTypes
Returns the enum constant of this type with the specified name.
valueOf(String) - Static method in enum org.apache.hadoop.mapred.JobHistory.Values
Returns the enum constant of this type with the specified name.
valueOf(String) - Static method in enum org.apache.hadoop.mapred.JobPriority
Returns the enum constant of this type with the specified name.
valueOf(String) - Static method in enum org.apache.hadoop.mapred.TaskCompletionEvent.Status
Returns the enum constant of this type with the specified name.
valueOf(String) - Static method in enum org.apache.hadoop.mapred.TaskLog.LogName
Returns the enum constant of this type with the specified name.
values() - Static method in enum org.apache.hadoop.dfs.DatanodeInfo.AdminStates
Returns an array containing the constants of this enum type, in the order they're declared.
values() - Static method in enum org.apache.hadoop.dfs.FSConstants.NodeType
Returns an array containing the constants of this enum type, in the order they're declared.
values() - Static method in enum org.apache.hadoop.dfs.FSConstants.SafeModeAction
Returns an array containing the constants of this enum type, in the order they're declared.
values() - Static method in enum org.apache.hadoop.dfs.FSConstants.StartupOption
Returns an array containing the constants of this enum type, in the order they're declared.
values() - Static method in enum org.apache.hadoop.dfs.FSConstants.UpgradeAction
Returns an array containing the constants of this enum type, in the order they're declared.
values() - Static method in enum org.apache.hadoop.examples.dancing.Pentomino.SolutionCategory
Returns an array containing the constants of this enum type, in the order they're declared.
values() - Static method in enum org.apache.hadoop.hbase.HColumnDescriptor.CompressionType
Returns an array containing the constants of this enum type, in the order they're declared.
VALUES - Static variable in interface org.apache.hadoop.hbase.shell.generated.ParserConstants
 
values() - Static method in enum org.apache.hadoop.io.compress.lzo.LzoCompressor.CompressionStrategy
Returns an array containing the constants of this enum type, in the order they're declared.
values() - Static method in enum org.apache.hadoop.io.compress.lzo.LzoDecompressor.CompressionStrategy
Returns an array containing the constants of this enum type, in the order they're declared.
values() - Static method in enum org.apache.hadoop.io.compress.zlib.ZlibCompressor.CompressionHeader
Returns an array containing the constants of this enum type, in the order they're declared.
values() - Static method in enum org.apache.hadoop.io.compress.zlib.ZlibCompressor.CompressionLevel
Returns an array containing the constants of this enum type, in the order they're declared.
values() - Static method in enum org.apache.hadoop.io.compress.zlib.ZlibCompressor.CompressionStrategy
Returns an array containing the constants of this enum type, in the order they're declared.
values() - Static method in enum org.apache.hadoop.io.compress.zlib.ZlibDecompressor.CompressionHeader
Returns an array containing the constants of this enum type, in the order they're declared.
values() - Static method in enum org.apache.hadoop.io.SequenceFile.CompressionType
Returns an array containing the constants of this enum type, in the order they're declared.
values() - Static method in enum org.apache.hadoop.mapred.JobClient.TaskStatusFilter
Returns an array containing the constants of this enum type, in the order they're declared.
values() - Static method in enum org.apache.hadoop.mapred.JobHistory.Keys
Returns an array containing the constants of this enum type, in the order they're declared.
values() - Static method in enum org.apache.hadoop.mapred.JobHistory.RecordTypes
Returns an array containing the constants of this enum type, in the order they're declared.
values() - Static method in enum org.apache.hadoop.mapred.JobHistory.Values
Returns an array containing the constants of this enum type, in the order they're declared.
values() - Static method in enum org.apache.hadoop.mapred.JobPriority
Returns an array containing the constants of this enum type, in the order they're declared.
values() - Static method in enum org.apache.hadoop.mapred.TaskCompletionEvent.Status
Returns an array containing the constants of this enum type, in the order they're declared.
values() - Static method in enum org.apache.hadoop.mapred.TaskLog.LogName
Returns an array containing the constants of this enum type, in the order they're declared.
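
The valueOf and values entries above are the helpers Java generates for every enum; as one hedged example using SequenceFile.CompressionType (the class and strings below are illustrative only):

    // Hypothetical use of the generated enum helpers with SequenceFile.CompressionType.
    import org.apache.hadoop.io.SequenceFile;

    public class EnumHelperDemo {
      public static void main(String[] args) {
        // Parse a name, e.g. one read from a hypothetical configuration value.
        SequenceFile.CompressionType type =
            SequenceFile.CompressionType.valueOf("BLOCK");
        System.out.println("chosen: " + type);

        // Enumerate all constants in declaration order.
        for (SequenceFile.CompressionType t : SequenceFile.CompressionType.values()) {
          System.out.println(t);
        }
      }
    }
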
Vector() - Method in class org.apache.hadoop.record.compiler.generated.Rcc
 
VECTOR_TKN - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
verbose - Variable in class org.apache.hadoop.streaming.JarBuilder
 
verbose_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
verifyRequest(DatanodeRegistration) - Method in class org.apache.hadoop.dfs.NameNode
Verify request.
verifyVersion(int) - Method in class org.apache.hadoop.dfs.NameNode
Verify version.
version - Variable in class org.apache.hadoop.dfs.UpgradeStatusReport
 
VersionedProtocol - Interface in org.apache.hadoop.ipc
Superclass of all protocols that use Hadoop RPC.
VersionedWritable - Class in org.apache.hadoop.io
A base class for Writables that provides version checking.
VersionedWritable() - Constructor for class org.apache.hadoop.io.VersionedWritable
 
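
A minimal hedged sketch of a VersionedWritable subclass, assuming the usual pattern of delegating to the superclass so the version byte is written and checked; the PersonRecord class and its field are hypothetical:

    // Hypothetical Writable that relies on VersionedWritable's version checking.
    import java.io.DataInput;
    import java.io.DataOutput;
    import java.io.IOException;
    import org.apache.hadoop.io.VersionedWritable;

    public class PersonRecord extends VersionedWritable {
      private static final byte VERSION = 1;
      private String name = "";

      public byte getVersion() {
        return VERSION;
      }

      public void write(DataOutput out) throws IOException {
        super.write(out);            // writes the version byte first
        out.writeUTF(name);
      }

      public void readFields(DataInput in) throws IOException {
        super.readFields(in);        // raises VersionMismatchException on mismatch
        name = in.readUTF();
      }
    }
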
versionID - Static variable in interface org.apache.hadoop.hbase.HMasterInterface
 
versionID - Static variable in interface org.apache.hadoop.hbase.HMasterRegionInterface
 
versionID - Static variable in interface org.apache.hadoop.hbase.HRegionInterface
initial version
versionID - Static variable in interface org.apache.hadoop.mapred.JobSubmissionProtocol
 
VersionInfo - Class in org.apache.hadoop.util
This class finds the package information for Hadoop and the version information recorded by the HadoopVersionAnnotation.
VersionInfo() - Constructor for class org.apache.hadoop.util.VersionInfo
 
VersionMismatchException - Exception in org.apache.hadoop.fs.s3
Thrown when Hadoop cannot read the version of the data stored in S3FileSystem.
VersionMismatchException(String, String) - Constructor for exception org.apache.hadoop.fs.s3.VersionMismatchException
 
VersionMismatchException - Exception in org.apache.hadoop.io
Thrown by VersionedWritable.readFields(DataInput) when the version of an object being read does not match the current implementation version as returned by VersionedWritable.getVersion().
VersionMismatchException(byte, byte) - Constructor for exception org.apache.hadoop.io.VersionMismatchException
 
versionRequest() - Method in class org.apache.hadoop.dfs.NameNode
 
VIntWritable - Class in org.apache.hadoop.io
A WritableComparable for integer values stored in variable-length format.
VIntWritable() - Constructor for class org.apache.hadoop.io.VIntWritable
 
VIntWritable(int) - Constructor for class org.apache.hadoop.io.VIntWritable
 
VLongWritable - Class in org.apache.hadoop.io
A WritableComparable for longs in a variable-length format.
VLongWritable() - Constructor for class org.apache.hadoop.io.VLongWritable
 
VLongWritable(long) - Constructor for class org.apache.hadoop.io.VLongWritable
 

W

waitForCompletion() - Method in interface org.apache.hadoop.mapred.RunningJob
Blocks until the job is complete.
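
A hedged driver fragment showing the blocking call with the classic mapred API; it assumes the JobConf has already been given its mapper, reducer and input/output paths, and the class name is hypothetical:

    // Hypothetical driver fragment: submit a job and block until it finishes.
    import java.io.IOException;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.RunningJob;

    public class BlockingDriver {
      public static void main(String[] args) throws IOException {
        JobConf conf = new JobConf(BlockingDriver.class);
        // ... mapper, reducer, input and output paths configured here ...

        JobClient client = new JobClient(conf);
        RunningJob job = client.submitJob(conf);
        job.waitForCompletion();                 // blocks until the job is complete
        System.out.println("successful: " + job.isSuccessful());
      }
    }
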
waitForProxy(Class, long, InetSocketAddress, Configuration) - Static method in class org.apache.hadoop.ipc.RPC
 
WAITING - Static variable in class org.apache.hadoop.mapred.jobcontrol.Job
 
WHERE - Static variable in interface org.apache.hadoop.hbase.shell.generated.ParserConstants
 
WhereClause() - Method in class org.apache.hadoop.hbase.shell.generated.Parser
 
width - Variable in class org.apache.hadoop.examples.dancing.Pentomino
 
windowBits() - Method in enum org.apache.hadoop.io.compress.zlib.ZlibCompressor.CompressionHeader
 
windowBits() - Method in enum org.apache.hadoop.io.compress.zlib.ZlibDecompressor.CompressionHeader
 
WithinMultiLineComment - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
WithinOneLineComment - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
WordCount - Class in org.apache.hadoop.examples
This is an example Hadoop Map/Reduce application.
WordCount() - Constructor for class org.apache.hadoop.examples.WordCount
 
WordCount.MapClass - Class in org.apache.hadoop.examples
Counts the words in each line.
WordCount.MapClass() - Constructor for class org.apache.hadoop.examples.WordCount.MapClass
 
WordCount.Reduce - Class in org.apache.hadoop.examples
A reducer class that just emits the sum of the input values.
WordCount.Reduce() - Constructor for class org.apache.hadoop.examples.WordCount.Reduce
 
wrapping(String, String[], boolean) - Static method in class org.apache.hadoop.hbase.shell.HelpManager
 
Writable - Interface in org.apache.hadoop.io
A simple, efficient serialization protocol based on DataInput and DataOutput.
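
A minimal hedged sketch of implementing the interface; the PointWritable class and its fields are hypothetical:

    // Hypothetical Writable with two int fields; write and readFields must mirror each other.
    import java.io.DataInput;
    import java.io.DataOutput;
    import java.io.IOException;
    import org.apache.hadoop.io.Writable;

    public class PointWritable implements Writable {
      private int x;
      private int y;

      public void write(DataOutput out) throws IOException {
        out.writeInt(x);
        out.writeInt(y);
      }

      public void readFields(DataInput in) throws IOException {
        x = in.readInt();
        y = in.readInt();
      }
    }
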
WritableComparable - Interface in org.apache.hadoop.io
An interface which extends both Writable and Comparable.
WritableComparator - Class in org.apache.hadoop.io
A Comparator for WritableComparables.
WritableComparator(Class) - Constructor for class org.apache.hadoop.io.WritableComparator
Construct for a WritableComparable implementation.
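
A hedged example of looking up the comparator registered for a key class and comparing two deserialized keys; the values are arbitrary:

    // Hypothetical lookup of the comparator registered for IntWritable keys.
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.WritableComparator;

    public class ComparatorDemo {
      public static void main(String[] args) {
        WritableComparator cmp = WritableComparator.get(IntWritable.class);
        int result = cmp.compare(new IntWritable(1), new IntWritable(2));
        System.out.println(result < 0);          // true: 1 sorts before 2
      }
    }
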
WritableFactories - Class in org.apache.hadoop.io
Factories for non-public writables.
WritableFactory - Interface in org.apache.hadoop.io
A factory for a class of Writable.
WritableName - Class in org.apache.hadoop.io
Utility to permit renaming of Writable implementation classes without invalidating files that contain their class name.
WritableUtils - Class in org.apache.hadoop.io
 
WritableUtils() - Constructor for class org.apache.hadoop.io.WritableUtils
 
write(OutputStream) - Method in class org.apache.hadoop.conf.Configuration
Writes non-default properties in this configuration.
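
A small hedged example that dumps the explicitly set properties as XML; the property name is made up:

    // Hypothetical dump of explicitly-set configuration properties.
    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;

    public class ConfDumpDemo {
      public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        conf.set("example.made.up.key", "42");   // only non-default entries are written
        conf.write(System.out);                  // XML on standard output
      }
    }
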
write(DataOutput) - Method in class org.apache.hadoop.dfs.DatanodeID
 
write(DataOutput) - Method in class org.apache.hadoop.dfs.DatanodeInfo
 
write(DataOutput) - Method in class org.apache.hadoop.dfs.LocatedBlocks
 
write(DataOutput) - Method in class org.apache.hadoop.dfs.UpgradeStatusReport
 
write(int) - Method in class org.apache.hadoop.fs.FSOutputSummer
Write one byte
write(byte[], int, int) - Method in class org.apache.hadoop.fs.FSOutputSummer
Writes len bytes from the specified byte array, starting at offset off, and generates a checksum for each data chunk.
write(DataOutput) - Method in class org.apache.hadoop.hbase.BloomFilterDescriptor
 
write(DataOutput) - Method in class org.apache.hadoop.hbase.filter.PageRowFilter
Writes the fields of this object to out.
write(DataOutput) - Method in class org.apache.hadoop.hbase.filter.RegExpRowFilter
Writes the fields of this object to out.
write(DataOutput) - Method in class org.apache.hadoop.hbase.filter.RowFilterSet
Writes the fields of this object to out.
write(DataOutput) - Method in class org.apache.hadoop.hbase.HColumnDescriptor
 
write(DataOutput) - Method in class org.apache.hadoop.hbase.HLogEdit
 
write(DataOutput) - Method in class org.apache.hadoop.hbase.HLogKey
 
write(DataOutput) - Method in class org.apache.hadoop.hbase.HMsg
 
write(DataOutput) - Method in class org.apache.hadoop.hbase.HRegionInfo
Writes the fields of this object to out.
write(DataOutput) - Method in class org.apache.hadoop.hbase.HServerAddress
Writes the fields of this object to out.
write(DataOutput) - Method in class org.apache.hadoop.hbase.HServerInfo
 
write(DataOutput) - Method in class org.apache.hadoop.hbase.HStoreFile
 
write(DataOutput) - Method in class org.apache.hadoop.hbase.HStoreKey
 
write(DataOutput) - Method in class org.apache.hadoop.hbase.HTableDescriptor
 
write(DataOutput) - Method in class org.apache.hadoop.hbase.io.BatchOperation
Writes the fields of this object to out.
write(DataOutput) - Method in class org.apache.hadoop.hbase.io.BatchUpdate
Writes the fields of this object to out.
write(DataOutput) - Method in class org.apache.hadoop.hbase.io.ImmutableBytesWritable
 
write(DataOutput) - Method in class org.apache.hadoop.hbase.io.KeyedData
 
write(DataOutput) - Method in class org.apache.hadoop.hbase.io.KeyedDataArrayWritable
 
write(WritableComparable, Writable) - Method in class org.apache.hadoop.hbase.mapred.TableOutputFormat.TableRecordWriter
Expects the key to be of type Text and the value to be of type KeyedDataArrayWritable.
write(DataOutput) - Method in class org.apache.hadoop.hbase.mapred.TableSplit
 
write(DataOutput) - Method in class org.apache.hadoop.io.ArrayWritable
 
write(DataOutput) - Method in class org.apache.hadoop.io.BooleanWritable
 
write(DataOutput) - Method in class org.apache.hadoop.io.BytesWritable
 
write(byte[], int, int) - Method in class org.apache.hadoop.io.compress.CompressionOutputStream
Write compressed bytes to the stream.
write(int) - Method in class org.apache.hadoop.io.compress.GzipCodec.GzipOutputStream
 
write(byte[], int, int) - Method in class org.apache.hadoop.io.compress.GzipCodec.GzipOutputStream
 
write(DataOutput) - Method in class org.apache.hadoop.io.CompressedWritable
 
write(DataInput, int) - Method in class org.apache.hadoop.io.DataOutputBuffer
Writes bytes from a DataInput directly into the buffer.
write(DataOutput) - Method in class org.apache.hadoop.io.FloatWritable
 
write(DataOutput) - Method in class org.apache.hadoop.io.GenericWritable
 
write(DataOutput) - Method in class org.apache.hadoop.io.IntWritable
 
write(DataOutput) - Method in class org.apache.hadoop.io.LongWritable
 
write(DataOutput) - Method in class org.apache.hadoop.io.MD5Hash
 
write(DataOutput) - Method in class org.apache.hadoop.io.NullWritable
 
write(DataOutput) - Method in class org.apache.hadoop.io.ObjectWritable
 
write(DataOutput) - Method in class org.apache.hadoop.io.SequenceFile.Metadata
 
write(DataOutput) - Method in class org.apache.hadoop.io.Text
Serializes this object to out; the length is written using zero-compressed encoding.
write(DataOutput) - Method in class org.apache.hadoop.io.TwoDArrayWritable
 
write(DataOutput) - Method in class org.apache.hadoop.io.UTF8
Deprecated.  
write(DataOutput) - Method in class org.apache.hadoop.io.VersionedWritable
 
write(DataOutput) - Method in class org.apache.hadoop.io.VIntWritable
 
write(DataOutput) - Method in class org.apache.hadoop.io.VLongWritable
 
write(DataOutput) - Method in interface org.apache.hadoop.io.Writable
Writes the fields of this object to out.
write(DataOutput) - Method in class org.apache.hadoop.mapred.ClusterStatus
 
write(DataOutput) - Method in class org.apache.hadoop.mapred.Counters
 
write(DataOutput) - Method in class org.apache.hadoop.mapred.FileSplit
 
write(DataOutput) - Method in class org.apache.hadoop.mapred.JobProfile
 
write(DataOutput) - Method in class org.apache.hadoop.mapred.JobStatus
 
write(DataOutput) - Method in class org.apache.hadoop.mapred.MultiFileSplit
 
write(WritableComparable, Writable) - Method in interface org.apache.hadoop.mapred.RecordWriter
Writes a key/value pair.
write(DataOutput) - Method in class org.apache.hadoop.mapred.TaskCompletionEvent
 
write(DataOutput) - Method in class org.apache.hadoop.mapred.TaskReport
 
write(WritableComparable, Writable) - Method in class org.apache.hadoop.mapred.TextOutputFormat.LineRecordWriter
 
write(DataOutput) - Method in class org.apache.hadoop.record.Record
 
write(DataOutput) - Method in class org.onelab.filter.BloomFilter
 
write(DataOutput) - Method in class org.onelab.filter.CountingBloomFilter
 
write(DataOutput) - Method in class org.onelab.filter.DynamicBloomFilter
 
write(DataOutput) - Method in class org.onelab.filter.Filter
 
write(DataOutput) - Method in class org.onelab.filter.Key
 
write(DataOutput) - Method in class org.onelab.filter.RetouchedBloomFilter
 
writeBool(boolean, String) - Method in class org.apache.hadoop.record.BinaryRecordOutput
 
writeBool(boolean, String) - Method in class org.apache.hadoop.record.CsvRecordOutput
 
writeBool(boolean, String) - Method in interface org.apache.hadoop.record.RecordOutput
Write a boolean to serialized record.
writeBool(boolean, String) - Method in class org.apache.hadoop.record.XmlRecordOutput
 
writeBuffer(Buffer, String) - Method in class org.apache.hadoop.record.BinaryRecordOutput
 
writeBuffer(Buffer, String) - Method in class org.apache.hadoop.record.CsvRecordOutput
 
writeBuffer(Buffer, String) - Method in interface org.apache.hadoop.record.RecordOutput
Write a buffer to serialized record.
writeBuffer(Buffer, String) - Method in class org.apache.hadoop.record.XmlRecordOutput
 
writeByte(byte, String) - Method in class org.apache.hadoop.record.BinaryRecordOutput
 
writeByte(byte, String) - Method in class org.apache.hadoop.record.CsvRecordOutput
 
writeByte(byte, String) - Method in interface org.apache.hadoop.record.RecordOutput
Write a byte to serialized record.
writeByte(byte, String) - Method in class org.apache.hadoop.record.XmlRecordOutput
 
writeChunk(byte[], int, int, byte[]) - Method in class org.apache.hadoop.fs.FSOutputSummer
 
writeCompressed(DataOutput) - Method in class org.apache.hadoop.io.CompressedWritable
Subclasses implement this instead of CompressedWritable.write(DataOutput).
writeCompressedByteArray(DataOutput, byte[]) - Static method in class org.apache.hadoop.io.WritableUtils
 
writeCompressedBytes(DataOutputStream) - Method in interface org.apache.hadoop.io.SequenceFile.ValueBytes
Write compressed bytes to outStream.
writeCompressedString(DataOutput, String) - Static method in class org.apache.hadoop.io.WritableUtils
 
writeCompressedStringArray(DataOutput, String[]) - Static method in class org.apache.hadoop.io.WritableUtils
 
writeDouble(double, String) - Method in class org.apache.hadoop.record.BinaryRecordOutput
 
writeDouble(double, String) - Method in class org.apache.hadoop.record.CsvRecordOutput
 
writeDouble(double, String) - Method in interface org.apache.hadoop.record.RecordOutput
Write a double precision floating point number to serialized record.
writeDouble(double, String) - Method in class org.apache.hadoop.record.XmlRecordOutput
 
writeEnum(DataOutput, Enum) - Static method in class org.apache.hadoop.io.WritableUtils
Writes the String value of an enum to a DataOutput.
writeFile(SequenceFile.Sorter.RawKeyValueIterator, SequenceFile.Writer) - Method in class org.apache.hadoop.io.SequenceFile.Sorter
Writes records from RawKeyValueIterator into a file represented by the passed writer
writeFloat(float, String) - Method in class org.apache.hadoop.record.BinaryRecordOutput
 
writeFloat(float, String) - Method in class org.apache.hadoop.record.CsvRecordOutput
 
writeFloat(float, String) - Method in interface org.apache.hadoop.record.RecordOutput
Write a single-precision float to serialized record.
writeFloat(float, String) - Method in class org.apache.hadoop.record.XmlRecordOutput
 
writeHeader(DataOutputStream) - Method in class org.apache.hadoop.dfs.DataChecksum
Writes the checksum header to the output stream out.
writeInt(int, String) - Method in class org.apache.hadoop.record.BinaryRecordOutput
 
writeInt(int, String) - Method in class org.apache.hadoop.record.CsvRecordOutput
 
writeInt(int, String) - Method in interface org.apache.hadoop.record.RecordOutput
Write an integer to serialized record.
writeInt(int, String) - Method in class org.apache.hadoop.record.XmlRecordOutput
 
writeItem(DFSFileInfo, XMLOutputter, NameNode) - Method in class org.apache.hadoop.dfs.ListPathsServlet
Write a node to output.
writeLong(long, String) - Method in class org.apache.hadoop.record.BinaryRecordOutput
 
writeLong(long, String) - Method in class org.apache.hadoop.record.CsvRecordOutput
 
writeLong(long, String) - Method in interface org.apache.hadoop.record.RecordOutput
Write a long integer to serialized record.
writeLong(long, String) - Method in class org.apache.hadoop.record.XmlRecordOutput
 
writeObject(DataOutput, Object, Class, Configuration) - Static method in class org.apache.hadoop.io.ObjectWritable
Write a Writable, String, primitive type, or an array of the preceding.
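
A hedged round-trip sketch that pairs this with the companion readObject method; the buffer names are hypothetical:

    // Hypothetical round trip of a String through ObjectWritable's static helpers.
    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.DataInputStream;
    import java.io.DataOutputStream;
    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.ObjectWritable;

    public class ObjectWritableDemo {
      public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bytes);
        ObjectWritable.writeObject(out, "hello", String.class, conf);

        DataInputStream in =
            new DataInputStream(new ByteArrayInputStream(bytes.toByteArray()));
        Object back = ObjectWritable.readObject(in, conf);
        System.out.println(back);                // prints hello
      }
    }
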
writeString(DataOutput, String) - Static method in class org.apache.hadoop.io.Text
Write a UTF-8 encoded string to out.
writeString(DataOutput, String) - Static method in class org.apache.hadoop.io.UTF8
Deprecated. Write a UTF-8 encoded string.
writeString(DataOutput, String) - Static method in class org.apache.hadoop.io.WritableUtils
 
writeString(String, String) - Method in class org.apache.hadoop.record.BinaryRecordOutput
 
writeString(String, String) - Method in class org.apache.hadoop.record.CsvRecordOutput
 
writeString(String, String) - Method in interface org.apache.hadoop.record.RecordOutput
Write a unicode string to serialized record.
writeString(String, String) - Method in class org.apache.hadoop.record.XmlRecordOutput
 
writeStringArray(DataOutput, String[]) - Static method in class org.apache.hadoop.io.WritableUtils
 
writeUncompressedBytes(DataOutputStream) - Method in interface org.apache.hadoop.io.SequenceFile.ValueBytes
Writes the uncompressed bytes to the outStream.
writeValue(DataOutputStream, boolean) - Method in class org.apache.hadoop.dfs.DataChecksum
Writes the current checksum to the stream.
writeValue(byte[], int, boolean) - Method in class org.apache.hadoop.dfs.DataChecksum
Writes the current checksum to a buffer.
writeVInt(DataOutput, int) - Static method in class org.apache.hadoop.io.WritableUtils
Serializes an integer to a binary stream with zero-compressed encoding.
writeVInt(DataOutput, int) - Static method in class org.apache.hadoop.record.Utils
Serializes an int to a binary stream with zero-compressed encoding.
writeVLong(DataOutput, long) - Static method in class org.apache.hadoop.io.WritableUtils
Serializes a long to a binary stream with zero-compressed encoding.
writeVLong(DataOutput, long) - Static method in class org.apache.hadoop.record.Utils
Serializes a long to a binary stream with zero-compressed encoding.
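
A hedged round-trip sketch of the WritableUtils variants; the values chosen are arbitrary:

    // Hypothetical demonstration of zero-compressed (variable-length) encoding.
    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.DataInputStream;
    import java.io.DataOutputStream;
    import java.io.IOException;
    import org.apache.hadoop.io.WritableUtils;

    public class VIntDemo {
      public static void main(String[] args) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bytes);
        WritableUtils.writeVInt(out, 42);          // small values take a single byte
        WritableUtils.writeVLong(out, 1L << 40);   // larger values use more bytes as needed
        System.out.println("encoded size: " + bytes.size() + " bytes");

        DataInputStream in =
            new DataInputStream(new ByteArrayInputStream(bytes.toByteArray()));
        System.out.println(WritableUtils.readVInt(in));    // 42
        System.out.println(WritableUtils.readVLong(in));   // 1099511627776
      }
    }
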
WrongRegionException - Exception in org.apache.hadoop.hbase
 
WrongRegionException() - Constructor for exception org.apache.hadoop.hbase.WrongRegionException
 
WrongRegionException(String) - Constructor for exception org.apache.hadoop.hbase.WrongRegionException
 

X

xceiverCount - Variable in class org.apache.hadoop.dfs.DatanodeInfo
 
XmlRecordInput - Class in org.apache.hadoop.record
XML Deserializer.
XmlRecordInput(InputStream) - Constructor for class org.apache.hadoop.record.XmlRecordInput
Creates a new instance of XmlRecordInput
XmlRecordOutput - Class in org.apache.hadoop.record
XML Serializer.
XmlRecordOutput(OutputStream) - Constructor for class org.apache.hadoop.record.XmlRecordOutput
Creates a new instance of XmlRecordOutput
XMLUtils - Class in org.apache.hadoop.util
General XML utilities.
XMLUtils() - Constructor for class org.apache.hadoop.util.XMLUtils
 
xor(Filter) - Method in class org.onelab.filter.BloomFilter
 
xor(Filter) - Method in class org.onelab.filter.CountingBloomFilter
 
xor(Filter) - Method in class org.onelab.filter.DynamicBloomFilter
 
xor(Filter) - Method in class org.onelab.filter.Filter
Performs a logical XOR between this filter and a specified filter.

Z

ZlibCompressor - Class in org.apache.hadoop.io.compress.zlib
A Compressor based on the popular zlib compression algorithm.
ZlibCompressor(ZlibCompressor.CompressionLevel, ZlibCompressor.CompressionStrategy, ZlibCompressor.CompressionHeader, int) - Constructor for class org.apache.hadoop.io.compress.zlib.ZlibCompressor
Creates a new compressor using the specified compression level.
ZlibCompressor() - Constructor for class org.apache.hadoop.io.compress.zlib.ZlibCompressor
Creates a new compressor with the default compression level.
ZlibCompressor.CompressionHeader - Enum in org.apache.hadoop.io.compress.zlib
The type of header for compressed data.
ZlibCompressor.CompressionLevel - Enum in org.apache.hadoop.io.compress.zlib
The compression level for the zlib library.
ZlibCompressor.CompressionStrategy - Enum in org.apache.hadoop.io.compress.zlib
The compression strategy for the zlib library.
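
A hedged sketch of constructing and driving the compressor through the Compressor interface; it assumes the native zlib library is available (construction fails otherwise), and the buffer sizes and class name are arbitrary:

    // Hypothetical direct use of ZlibCompressor via the Compressor interface.
    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import org.apache.hadoop.io.compress.zlib.ZlibCompressor;

    public class ZlibDemo {
      public static void main(String[] args) throws IOException {
        byte[] data = "a string that is compressed a string that is compressed".getBytes();

        ZlibCompressor compressor = new ZlibCompressor(
            ZlibCompressor.CompressionLevel.BEST_COMPRESSION,
            ZlibCompressor.CompressionStrategy.DEFAULT_STRATEGY,
            ZlibCompressor.CompressionHeader.DEFAULT_HEADER,
            64 * 1024);

        compressor.setInput(data, 0, data.length);
        compressor.finish();

        byte[] buf = new byte[64 * 1024];
        ByteArrayOutputStream compressed = new ByteArrayOutputStream();
        while (!compressor.finished()) {
          int n = compressor.compress(buf, 0, buf.length);
          compressed.write(buf, 0, n);
        }
        System.out.println(data.length + " -> " + compressed.size() + " bytes");
      }
    }
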
ZlibDecompressor - Class in org.apache.hadoop.io.compress.zlib
A Decompressor based on the popular zlib compression algorithm.
ZlibDecompressor(ZlibDecompressor.CompressionHeader, int) - Constructor for class org.apache.hadoop.io.compress.zlib.ZlibDecompressor
Creates a new decompressor.
ZlibDecompressor() - Constructor for class org.apache.hadoop.io.compress.zlib.ZlibDecompressor
 
ZlibDecompressor.CompressionHeader - Enum in org.apache.hadoop.io.compress.zlib
The headers to detect from compressed data.
ZlibFactory - Class in org.apache.hadoop.io.compress.zlib
A collection of factories to create the right zlib/gzip compressor/decompressor instances.
ZlibFactory() - Constructor for class org.apache.hadoop.io.compress.zlib.ZlibFactory
 


Copyright © 2006 The Apache Software Foundation