Deprecated Methods |
org.apache.hadoop.hbase.HTable.abortBatch(long)
Batch operations are now the default. abortBatch(long) is now
implemented by HTable.abort(long). |
org.apache.hadoop.hbase.HTable.commitBatch(long)
Batch operations are now the default. commitBatch(long) is now
implemented by HTable.commit(long). |
org.apache.hadoop.hbase.HTable.commitBatch(long, long)
Batch operations are now the default. commitBatch(long, long)
is now implemented by HTable.commit(long, long). |
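A minimal migration sketch for the Batch-suffixed methods above (the same
pattern covers startBatchUpdate at the end of this list), assuming the
0.1-era HTable API in which rows and columns are Text values and
startUpdate(Text) returns a lock id; the column name is illustrative:

    import java.io.IOException;
    import org.apache.hadoop.hbase.HTable;
    import org.apache.hadoop.io.Text;

    public class BatchMigration {
      public static void updateRow(HTable table, Text row) throws IOException {
        long lockid = table.startUpdate(row);   // was startBatchUpdate(row)
        try {
          // "info:state" is a hypothetical column family:qualifier pair.
          table.put(lockid, new Text("info:state"), "ok".getBytes());
          table.commit(lockid);                 // was commitBatch(lockid)
        } catch (IOException e) {
          table.abort(lockid);                  // was abortBatch(lockid)
          throw e;
        }
      }
    }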
org.apache.hadoop.util.CopyFiles.copy(Configuration, String, String, Path, boolean, boolean)
|
org.apache.hadoop.dfs.DataNode.createSocketAddr(String)
|
org.apache.hadoop.conf.Configuration.entries()
Use Configuration.iterator() instead. |
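A short sketch of the replacement, assuming iterator() yields the key/value
pairs as Map.Entry<String, String> (as it does in later releases):

    import java.util.Iterator;
    import java.util.Map;
    import org.apache.hadoop.conf.Configuration;

    public class DumpConf {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // iterator() replaces the deprecated entries().
        Iterator<Map.Entry<String, String>> it = conf.iterator();
        while (it.hasNext()) {
          Map.Entry<String, String> entry = it.next();
          System.out.println(entry.getKey() + " = " + entry.getValue());
        }
      }
    }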
org.apache.hadoop.conf.Configuration.get(String, Object)
A side map of Configuration to Object should be used instead. |
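One way to read this note (the same pattern applies to getObject,
set(String, Object), and setObject below): keep non-String values in a plain
Map that lives alongside the Configuration rather than inside it. The class
and method names here are illustrative:

    import java.util.HashMap;
    import java.util.Map;
    import org.apache.hadoop.conf.Configuration;

    public class ConfWithSideMap {
      private final Configuration conf = new Configuration();
      // Holds the objects that used to be stashed in the Configuration.
      private final Map<String, Object> sideMap = new HashMap<String, Object>();

      public void setObject(String name, Object value) {
        sideMap.put(name, value);            // was conf.setObject(name, value)
      }

      public Object getObject(String name, Object defaultValue) {
        Object value = sideMap.get(name);    // was conf.get(name, defaultValue)
        return value != null ? value : defaultValue;
      }
    }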
org.apache.hadoop.fs.FileSystem.getBlockSize(Path)
Use getFileStatus() instead. |
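A sketch of the getFileStatus() replacement; the same FileStatus also covers
the deprecated getLength, getReplication, and isDirectory entries below, so
one call replaces several per-attribute calls:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class StatSketch {
      public static void describe(Path path) throws IOException {
        FileSystem fs = FileSystem.get(new Configuration());
        FileStatus stat = fs.getFileStatus(path);
        long blockSize    = stat.getBlockSize();    // was fs.getBlockSize(path)
        long length       = stat.getLen();          // was fs.getLength(path)
        short replication = stat.getReplication();  // was fs.getReplication(path)
        boolean isDir     = stat.isDir();           // was fs.isDirectory(path)
        System.out.println(path + ": " + length + " bytes, " + replication
            + " replicas, blockSize=" + blockSize + ", dir=" + isDir);
      }
    }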
org.apache.hadoop.record.compiler.generated.SimpleCharStream.getColumn()
|
org.apache.hadoop.hbase.hql.generated.SimpleCharStream.getColumn()
|
org.apache.hadoop.mapred.Counters.Group.getCounter(String)
|
org.apache.hadoop.mapred.Counters.Group.getCounterNames()
Iterate through the group instead. |
org.apache.hadoop.mapred.Counters.Group.getDisplayName(String)
Get the counter directly. |
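A sketch covering both notes, assuming Counters.Group is iterable over
Counters.Counter (which is what "iterate through the group" suggests):

    import org.apache.hadoop.mapred.Counters;

    public class CounterDump {
      public static void dump(Counters counters, String groupName) {
        Counters.Group group = counters.getGroup(groupName);
        for (Counters.Counter counter : group) {
          // getDisplayName()/getCounter() on the Counter itself replace
          // the deprecated lookups on the Group.
          System.out.println(counter.getDisplayName()
              + " = " + counter.getCounter());
        }
      }
    }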
org.apache.hadoop.mapred.FileSplit.getFile()
Call FileSplit.getPath() instead. |
org.apache.hadoop.mapred.JobConf.getInputKeyClass()
Call RecordReader.createKey() instead. |
org.apache.hadoop.mapred.JobConf.getInputValueClass()
Call RecordReader.createValue() instead. |
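A sketch of the replacement for both entries above: the RecordReader supplies
properly typed, reusable key/value instances, so the job configuration no
longer needs to name the classes:

    import java.io.IOException;
    import org.apache.hadoop.mapred.RecordReader;

    public class ReadAll<K, V> {
      public long countRecords(RecordReader<K, V> reader) throws IOException {
        K key = reader.createKey();       // was jobConf.getInputKeyClass()
        V value = reader.createValue();   // was jobConf.getInputValueClass()
        long count = 0;
        while (reader.next(key, value)) { // the same instances are reused
          count++;
        }
        return count;
      }
    }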
org.apache.hadoop.fs.FileSystem.getLength(Path)
Use getFileStatus() instead. |
org.apache.hadoop.fs.kfs.KosmosFileSystem.getLength(Path)
|
org.apache.hadoop.record.compiler.generated.SimpleCharStream.getLine()
|
org.apache.hadoop.hbase.hql.generated.SimpleCharStream.getLine()
|
org.apache.hadoop.mapred.ClusterStatus.getMaxTasks()
Use ClusterStatus.getMaxMapTasks() and/or
ClusterStatus.getMaxReduceTasks() instead. |
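A sketch of the split, obtaining the ClusterStatus from a JobClient; the old
single figure becomes two per-phase capacities:

    import java.io.IOException;
    import org.apache.hadoop.mapred.ClusterStatus;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;

    public class SlotReport {
      public static void report(JobConf conf) throws IOException {
        ClusterStatus status = new JobClient(conf).getClusterStatus();
        int mapSlots = status.getMaxMapTasks();       // was getMaxTasks()
        int reduceSlots = status.getMaxReduceTasks();
        System.out.println("map slots: " + mapSlots
            + ", reduce slots: " + reduceSlots);
      }
    }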
org.apache.hadoop.dfs.DistributedFileSystem.getName()
|
org.apache.hadoop.fs.RawLocalFileSystem.getName()
|
org.apache.hadoop.fs.FilterFileSystem.getName()
Call getUri() instead. |
org.apache.hadoop.fs.FileSystem.getName()
Call getUri() instead. |
org.apache.hadoop.fs.kfs.KosmosFileSystem.getName()
|
org.apache.hadoop.fs.FileSystem.getNamed(String, Configuration)
Call get(URI, Configuration) instead. |
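A sketch covering the getName()/getNamed() family above; the namenode URI is
illustrative:

    import java.io.IOException;
    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    public class FsByUri {
      public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        // was FileSystem.getNamed("namenode:9000", conf)
        FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:9000/"), conf);
        // was fs.getName(); the URI now identifies the filesystem
        System.out.println("connected to " + fs.getUri());
      }
    }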
org.apache.hadoop.conf.Configuration.getObject(String)
A side map of Configuration to Object should be used instead. |
org.apache.hadoop.fs.FileSystem.getReplication(Path)
Use getFileStatus() instead. |
org.apache.hadoop.fs.kfs.KosmosFileSystem.getReplication(Path)
|
org.apache.hadoop.net.NetUtils.getServerAddress(Configuration, String, String, String)
|
org.apache.hadoop.mapred.JobConf.getSpeculativeExecution()
Use JobConf.getMapSpeculativeExecution() or
JobConf.getReduceSpeculativeExecution() instead to check whether
speculative execution is enabled for this job (it defaults to
true). |
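A sketch of checking the per-phase flags in place of the combined getter:

    import org.apache.hadoop.mapred.JobConf;

    public class SpecCheck {
      public static boolean anySpeculation(JobConf job) {
        return job.getMapSpeculativeExecution()   // was getSpeculativeExecution()
            || job.getReduceSpeculativeExecution();
      }
    }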
org.apache.hadoop.mapred.JobClient.getTaskOutputFilter()
|
org.apache.hadoop.fs.FileSystem.globPaths(Path)
|
org.apache.hadoop.fs.FileSystem.globPaths(Path, PathFilter)
|
org.apache.hadoop.fs.FileSystem.isDirectory(Path)
Use getFileStatus() instead. |
org.apache.hadoop.fs.kfs.KosmosFileSystem.isDirectory(Path)
|
org.apache.hadoop.fs.kfs.KosmosFileSystem.isFile(Path)
|
org.apache.hadoop.fs.FileSystem.listPaths(Path)
|
org.apache.hadoop.fs.FileSystem.listPaths(Path[])
|
org.apache.hadoop.fs.FileSystem.listPaths(Path[], PathFilter)
|
org.apache.hadoop.fs.FileSystem.listPaths(Path, PathFilter)
|
org.apache.hadoop.fs.RawLocalFileSystem.lock(Path, boolean)
|
org.apache.hadoop.fs.kfs.KosmosFileSystem.lock(Path, boolean)
|
org.apache.hadoop.io.SequenceFile.Reader.next(DataOutputBuffer)
Call SequenceFile.Reader.nextRaw(DataOutputBuffer, SequenceFile.ValueBytes) instead. |
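A sketch of the raw-read loop; I am assuming nextRaw() returns a negative
value at end of file and that createValueBytes() supplies the reusable value
holder:

    import java.io.IOException;
    import org.apache.hadoop.io.DataOutputBuffer;
    import org.apache.hadoop.io.SequenceFile;

    public class RawScan {
      public static long countRecords(SequenceFile.Reader reader)
          throws IOException {
        DataOutputBuffer rawKey = new DataOutputBuffer();
        SequenceFile.ValueBytes rawValue = reader.createValueBytes();
        long count = 0;
        while (reader.nextRaw(rawKey, rawValue) >= 0) { // was next(rawKey)
          count++;
          rawKey.reset(); // clear the reused key buffer for the next record
        }
        return count;
      }
    }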
org.apache.hadoop.mapred.LineRecordReader.readLine(InputStream, OutputStream)
|
org.apache.hadoop.fs.RawLocalFileSystem.release(Path)
|
org.apache.hadoop.fs.kfs.KosmosFileSystem.release(Path)
|
org.apache.hadoop.hbase.HTable.renewLease(long)
Batch updates are now the default. Consequently this method
does nothing. |
org.apache.hadoop.conf.Configuration.set(String, Object)
|
org.apache.hadoop.mapred.JobConf.setInputKeyClass(Class)
Not used. |
org.apache.hadoop.mapred.JobConf.setInputValueClass(Class)
Not used. |
org.apache.hadoop.conf.Configuration.setObject(String, Object)
|
org.apache.hadoop.mapred.JobConf.setSpeculativeExecution(boolean)
Use JobConf.setMapSpeculativeExecution(boolean) or
JobConf.setReduceSpeculativeExecution(boolean) instead to turn
speculative execution on or off for this job. |
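A sketch of configuring the phases independently, which the single boolean
could not express:

    import org.apache.hadoop.mapred.JobConf;

    public class SpecConfig {
      public static void mapOnlySpeculation(JobConf job) {
        job.setMapSpeculativeExecution(true);     // was setSpeculativeExecution(true)
        job.setReduceSpeculativeExecution(false); // reducers opt out independently
      }
    }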
org.apache.hadoop.mapred.JobClient.setTaskOutputFilter(JobClient.TaskStatusFilter)
|
org.apache.hadoop.hbase.HTable.startBatchUpdate(Text)
Batch operations are now the default. startBatchUpdate(Text) is now
implemented by HTable.startUpdate(Text); see the migration sketch
after the commitBatch entries at the top of this list. |