Uses of Class org.apache.hadoop.mapred.JobConf
===

Packages that use JobConf
---

| Package | Description |
|---|---|
| org.apache.hadoop.abacus | Hadoop-based Abacus is a specialization of the map/reduce framework for performing various counting and aggregation operations. |
| org.apache.hadoop.examples | Hadoop example code. |
| org.apache.hadoop.io | Generic i/o code for use when reading and writing data to the network, to databases, and to files. |
| org.apache.hadoop.mapred | A system for scalable, fault-tolerant, distributed computation over large data collections. |
| org.apache.hadoop.mapred.jobcontrol | Utilities for managing dependent jobs. |
| org.apache.hadoop.mapred.lib | Library of generally useful mappers, reducers, and partitioners. |
| org.apache.hadoop.streaming | |
| org.apache.hadoop.tools | |
| org.apache.hadoop.util | Common utilities. |
Uses of JobConf in org.apache.hadoop.abacus
---

Methods in org.apache.hadoop.abacus that return JobConf:

| Modifier and Type | Method and Description |
|---|---|
| static JobConf | ValueAggregatorJob.createValueAggregatorJob(String[] args) Create an Abacus-based map/reduce job. |

Methods in org.apache.hadoop.abacus with parameters of type JobConf:

| Modifier and Type | Method and Description |
|---|---|
| void | UserDefinedValueAggregatorDescriptor.configure(JobConf job) Do nothing. |
| void | ValueAggregatorCombiner.configure(JobConf job) The combiner does not need configuration. |
| void | ValueAggregatorBaseDescriptor.configure(JobConf job) Get the input file name. |
| void | ValueAggregatorJobBase.configure(JobConf job) |
| void | ValueAggregatorDescriptor.configure(JobConf job) Configure the object. |
| void | JobBase.configure(JobConf job) Initializes a new instance from a JobConf. |
| static boolean | ValueAggregatorJob.runJob(JobConf job) Submit/run a map/reduce job. |

Constructors in org.apache.hadoop.abacus with parameters of type JobConf:

| Constructor and Description |
|---|
| UserDefinedValueAggregatorDescriptor(String className, JobConf job) |
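Taken together, createValueAggregatorJob and runJob form the usual Abacus driver pattern: build a JobConf from the command-line arguments, then submit it and wait. A minimal sketch, assuming args carries the input/output paths and aggregator descriptor settings in the layout ValueAggregatorJob expects (consult its documentation for the exact argument order):

```java
import org.apache.hadoop.abacus.ValueAggregatorJob;
import org.apache.hadoop.mapred.JobConf;

public class AbacusDriver {
  public static void main(String[] args) throws Exception {
    // Build the aggregation job from command-line arguments
    // (paths, number of reducers, descriptor classes, ...).
    JobConf job = ValueAggregatorJob.createValueAggregatorJob(args);
    // Submit and block until the job finishes.
    ValueAggregatorJob.runJob(job);
  }
}
```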
Uses of JobConf in org.apache.hadoop.examples
---

Methods in org.apache.hadoop.examples with parameters of type JobConf:

| Modifier and Type | Method and Description |
|---|---|
| void | PiEstimator.PiMapper.configure(JobConf job) Mapper configuration. |
| void | PiEstimator.PiReducer.configure(JobConf job) Reducer configuration. |
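Both configure methods follow the standard JobConfigurable pattern: the framework hands each task a JobConf before any records are processed, and the task reads its parameters out of it. A minimal sketch of that pattern (the property name my.sample.count is hypothetical, not one PiEstimator actually uses):

```java
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;

public class ConfiguredTask extends MapReduceBase {
  private long sampleCount;

  // Called once per task before any map/reduce calls; read
  // job-level settings here rather than in the record loop.
  public void configure(JobConf job) {
    sampleCount = job.getLong("my.sample.count", 1000L);
  }
}
```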
Uses of JobConf in org.apache.hadoop.io
---

Methods in org.apache.hadoop.io with parameters of type JobConf:

| Modifier and Type | Method and Description |
|---|---|
| static Writable | WritableUtils.clone(Writable orig, JobConf conf) Make a copy of a writable object using serialization to a buffer. |
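WritableUtils.clone is handy when the framework reuses a single Writable instance across calls and you need to keep a value around. A minimal sketch:

```java
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.io.WritableUtils;
import org.apache.hadoop.mapred.JobConf;

public class CloneExample {
  public static void main(String[] args) throws Exception {
    JobConf conf = new JobConf();
    Text original = new Text("hello");
    // Deep copy by serializing to a buffer and deserializing a
    // fresh instance; 'copy' is independent of 'original'.
    Writable copy = WritableUtils.clone(original, conf);
    original.set("changed");
    System.out.println(copy);  // still prints "hello"
  }
}
```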
Uses of JobConf in org.apache.hadoop.mapred
---

Methods in org.apache.hadoop.mapred with parameters of type JobConf:

| Modifier and Type | Method and Description |
|---|---|
| void | OutputFormatBase.checkOutputSpecs(FileSystem ignored, JobConf job) |
| void | OutputFormat.checkOutputSpecs(FileSystem ignored, JobConf job) Check whether the output specification for a job is appropriate. |
| void | TextInputFormat.configure(JobConf conf) |
| void | MapRunner.configure(JobConf job) |
| void | MapReduceBase.configure(JobConf job) Default implementation that does nothing. |
| void | JobConfigurable.configure(JobConf job) Initializes a new instance from a JobConf. |
| static boolean | OutputFormatBase.getCompressOutput(JobConf conf) Is the reduce output compressed? |
| static Class | OutputFormatBase.getOutputCompressorClass(JobConf conf, Class defaultValue) Get the codec for compressing the reduce outputs. |
| RecordReader | TextInputFormat.getRecordReader(InputSplit genericSplit, JobConf job, Reporter reporter) |
| RecordReader | SequenceFileInputFormat.getRecordReader(InputSplit split, JobConf job, Reporter reporter) |
| RecordReader | SequenceFileInputFilter.getRecordReader(InputSplit split, JobConf job, Reporter reporter) Create a record reader for the given split. |
| abstract RecordReader | InputFormatBase.getRecordReader(InputSplit split, JobConf job, Reporter reporter) |
| RecordReader | InputFormat.getRecordReader(InputSplit split, JobConf job, Reporter reporter) Construct a RecordReader for a FileSplit. |
| RecordWriter | TextOutputFormat.getRecordWriter(FileSystem ignored, JobConf job, String name, Progressable progress) |
| RecordWriter | SequenceFileOutputFormat.getRecordWriter(FileSystem ignored, JobConf job, String name, Progressable progress) |
| abstract RecordWriter | OutputFormatBase.getRecordWriter(FileSystem ignored, JobConf job, String name, Progressable progress) |
| RecordWriter | MapFileOutputFormat.getRecordWriter(FileSystem ignored, JobConf job, String name, Progressable progress) |
| RecordWriter | OutputFormat.getRecordWriter(FileSystem ignored, JobConf job, String name, Progressable progress) Construct a RecordWriter with a Progressable. |
| InputSplit[] | InputFormatBase.getSplits(JobConf job, int numSplits) Splits files returned by InputFormatBase.listPaths(JobConf) when they're too big. |
| InputSplit[] | InputFormat.getSplits(JobConf job, int numSplits) Splits a set of input files. |
| static JobClient.TaskStatusFilter | JobClient.getTaskOutputFilter(JobConf job) Get the task output filter out of the JobConf. |
| protected Path[] | SequenceFileInputFormat.listPaths(JobConf job) |
| protected Path[] | InputFormatBase.listPaths(JobConf job) List input directories. |
| static void | JobClient.runJob(JobConf job) Utility that submits a job, then polls for progress until the job is complete. |
| static void | OutputFormatBase.setCompressOutput(JobConf conf, boolean val) Set whether the output of the reduce is compressed. |
| static void | OutputFormatBase.setOutputCompressorClass(JobConf conf, Class codecClass) Set the given class as the output compression codec. |
| static void | JobClient.setTaskOutputFilter(JobConf job, JobClient.TaskStatusFilter newValue) Modify the JobConf to set the task output filter. |
| RunningJob | JobClient.submitJob(JobConf job) Submit a job to the MR system. |
| void | InputFormatBase.validateInput(JobConf job) |
| void | InputFormat.validateInput(JobConf job) Are the input directories valid? This method tests the input directories when a job is submitted, so the framework can fail early with a useful error message if an input directory does not exist. |

Constructors in org.apache.hadoop.mapred with parameters of type JobConf:

| Constructor and Description |
|---|
| FileSplit(Path file, long start, long length, JobConf conf) Constructs a split. |
| PhasedFileSystem(FileSystem fs, JobConf conf) This constructor wraps a FileSystem object in a PhasedFileSystem. |
| TaskTracker(JobConf conf) Start with the local machine name and the default JobTracker. |
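The methods above center on the classic driver flow: populate a JobConf, then hand it to JobClient.runJob (blocking) or JobClient.submitJob (non-blocking). A minimal sketch of a blocking driver, assuming the era-appropriate JobConf.setInputPath/setOutputPath setters and a mapper/reducer configured elsewhere:

```java
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.OutputFormatBase;
import org.apache.hadoop.mapred.TextInputFormat;
import org.apache.hadoop.mapred.TextOutputFormat;

public class Driver {
  public static void main(String[] args) throws Exception {
    JobConf job = new JobConf(Driver.class);
    job.setJobName("example");

    job.setInputFormat(TextInputFormat.class);
    job.setOutputFormat(TextOutputFormat.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(LongWritable.class);
    // job.setMapperClass(...); job.setReducerClass(...);

    job.setInputPath(new Path(args[0]));
    job.setOutputPath(new Path(args[1]));

    // Optionally compress reduce output (see OutputFormatBase above).
    OutputFormatBase.setCompressOutput(job, true);

    // Submit, poll for progress, and throw if the job fails.
    JobClient.runJob(job);
  }
}
```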
Uses of JobConf in org.apache.hadoop.mapred.jobcontrol
---

Methods in org.apache.hadoop.mapred.jobcontrol that return JobConf:

| Modifier and Type | Method and Description |
|---|---|
| JobConf | Job.getJobConf() |

Methods in org.apache.hadoop.mapred.jobcontrol with parameters of type JobConf:

| Modifier and Type | Method and Description |
|---|---|
| void | Job.setJobConf(JobConf jobConf) Set the mapred job conf for this job. |

Constructors in org.apache.hadoop.mapred.jobcontrol with parameters of type JobConf:

| Constructor and Description |
|---|
| Job(JobConf jobConf, ArrayList dependingJobs) Construct a job. |
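The dependingJobs argument is what makes this package useful: a Job only becomes ready once every job in its depending list has succeeded, and a JobControl instance (the companion class in this package) drives the whole graph. A minimal sketch, assuming two already-populated JobConfs and the usual JobControl API (addJob, allFinished, stop):

```java
import java.util.ArrayList;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.jobcontrol.Job;
import org.apache.hadoop.mapred.jobcontrol.JobControl;

public class ChainedJobs {
  public static void main(String[] args) throws Exception {
    JobConf firstConf = new JobConf();   // configure as usual
    JobConf secondConf = new JobConf();

    Job first = new Job(firstConf, new ArrayList());
    ArrayList deps = new ArrayList();
    deps.add(first);
    Job second = new Job(secondConf, deps);  // waits for 'first'

    JobControl control = new JobControl("chain");
    control.addJob(first);
    control.addJob(second);

    new Thread(control).start();  // JobControl is a Runnable
    while (!control.allFinished()) {
      Thread.sleep(5000);
    }
    control.stop();
  }
}
```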
Uses of JobConf in org.apache.hadoop.mapred.lib
---

Methods in org.apache.hadoop.mapred.lib with parameters of type JobConf:

| Modifier and Type | Method and Description |
|---|---|
| void | RegexMapper.configure(JobConf job) |
| void | MultithreadedMapRunner.configure(JobConf job) |
| void | HashPartitioner.configure(JobConf job) |
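These classes receive their JobConf through the same JobConfigurable.configure hook described under org.apache.hadoop.mapred; the framework calls it on each task before any records flow. Wiring one into a job is a one-liner, sketched here with HashPartitioner (the default partitioner, set explicitly only for illustration):

```java
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.lib.HashPartitioner;

public class LibSetup {
  public static void main(String[] args) {
    JobConf job = new JobConf();
    // Each map task instantiates the partitioner and calls
    // HashPartitioner.configure(job) before partitioning records.
    job.setPartitionerClass(HashPartitioner.class);
  }
}
```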
Uses of JobConf in org.apache.hadoop.streaming
---

Fields in org.apache.hadoop.streaming declared as JobConf:

| Modifier and Type | Field |
|---|---|
| protected JobConf | StreamJob.jobConf_ |

Methods in org.apache.hadoop.streaming with parameters of type JobConf:

| Modifier and Type | Method and Description |
|---|---|
| void | StreamOutputFormat.checkOutputSpecs(FileSystem fs, JobConf job) Check whether the output specification for a job is appropriate. |
| void | MuxOutputFormat.checkOutputSpecs(FileSystem fs, JobConf job) |
| void | PipeMapRed.configure(JobConf job) |
| static FileSplit | StreamUtil.getCurrentSplit(JobConf job) |
| RecordReader | StreamInputFormat.getRecordReader(InputSplit genericSplit, JobConf job, Reporter reporter) |
| RecordReader | TupleInputFormat.getRecordReader(InputSplit split, JobConf job, Reporter reporter) |
| RecordReader | MergerInputFormat.getRecordReader(InputSplit split, JobConf job, Reporter reporter) |
| RecordWriter | StreamOutputFormat.getRecordWriter(FileSystem fs, JobConf job, String name, Progressable progr) |
| RecordWriter | MuxOutputFormat.getRecordWriter(FileSystem fs, JobConf job, String name, Progressable progr) |
| InputSplit[] | StreamInputFormat.getSplits(JobConf job, int numSplits) |
| InputSplit[] | MergerInputFormat.getSplits(JobConf job, int numSplits) Delegate to the primary InputFormat. |
| static org.apache.hadoop.streaming.StreamUtil.TaskId | StreamUtil.getTaskInfo(JobConf job) |
| static boolean | StreamUtil.isLocalJobTracker(JobConf job) |
| void | StreamBaseRecordReader.validateInput(JobConf job) This implementation always returns true. |

Constructors in org.apache.hadoop.streaming with parameters of type JobConf:

| Constructor and Description |
|---|
| StreamBaseRecordReader(FSDataInputStream in, FileSplit split, Reporter reporter, JobConf job, FileSystem fs) |
| StreamLineRecordReader(FSDataInputStream in, FileSplit split, Reporter reporter, JobConf job, FileSystem fs) |
| StreamSequenceRecordReader(FSDataInputStream in, FileSplit split, Reporter reporter, JobConf job, FileSystem fs) |
| StreamXmlRecordReader(FSDataInputStream in, FileSplit split, Reporter reporter, JobConf job, FileSystem fs) |
Uses of JobConf in org.apache.hadoop.tools
---

Methods in org.apache.hadoop.tools with parameters of type JobConf:

| Modifier and Type | Method and Description |
|---|---|
| void | Logalyzer.LogRegexMapper.configure(JobConf job) |
Uses of JobConf in org.apache.hadoop.util
---

Methods in org.apache.hadoop.util with parameters of type JobConf:

| Modifier and Type | Method and Description |
|---|---|
| abstract void | CopyFiles.CopyFilesMapper.cleanup(Configuration conf, JobConf jobConf, String srcPath, String destPath) Interface to clean up *distcp*-specific resources. |
| void | CopyFiles.FSCopyFilesMapper.cleanup(Configuration conf, JobConf jobConf, String srcPath, String destPath) |
| void | CopyFiles.HTTPCopyFilesMapper.cleanup(Configuration conf, JobConf jobConf, String srcPath, String destPath) |
| void | CopyFiles.FSCopyFilesMapper.configure(JobConf job) Mapper configuration. |
| void | CopyFiles.HTTPCopyFilesMapper.configure(JobConf job) |
| abstract void | CopyFiles.CopyFilesMapper.setup(Configuration conf, JobConf jobConf, String[] srcPaths, String destPath, boolean ignoreReadFailures) Interface to initialize *distcp*-specific map tasks. |
| void | CopyFiles.FSCopyFilesMapper.setup(Configuration conf, JobConf jobConf, String[] srcPaths, String destPath, boolean ignoreReadFailures) Initialize DFSCopyFileMapper-specific job configuration. |
| void | CopyFiles.HTTPCopyFilesMapper.setup(Configuration conf, JobConf jobConf, String[] srcPaths, String destPath, boolean ignoreReadFailures) Initialize HTTPCopyFileMapper-specific job configuration. |