org.apache.hcatalog.mapreduce
Class HCatEximOutputFormat
java.lang.Object
org.apache.hadoop.mapreduce.OutputFormat<org.apache.hadoop.io.WritableComparable<?>,HCatRecord>
org.apache.hcatalog.mapreduce.HCatBaseOutputFormat
org.apache.hcatalog.mapreduce.HCatEximOutputFormat
public class HCatEximOutputFormat
extends HCatBaseOutputFormat
The OutputFormat to use to write data to HCat without an HCat server. The output can then
be imported into an HCat instance, or used with an HCatEximInputFormat. As in
HCatOutputFormat, the key value is ignored and should be given as null. The value is the
HCatRecord to write.
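The reduce-side usage is sketched below. Apart from HCatRecord and DefaultHCatRecord, the class and column names are illustrative assumptions, not taken from this page; each output record is written with a null key, as described above.

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.WritableComparable;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hcatalog.data.DefaultHCatRecord;
import org.apache.hcatalog.data.HCatRecord;

// Illustrative reducer: the key handed to context.write() is ignored by
// HCatEximOutputFormat, so null is passed; the value is the HCatRecord.
public class ExportReducer
        extends Reducer<Text, IntWritable, WritableComparable<?>, HCatRecord> {

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable v : values) {
            sum += v.get();
        }
        // Two fields (e.g. "name", "count") matching the column schema passed
        // to HCatEximOutputFormat.setOutput() on the driver side.
        HCatRecord record = new DefaultHCatRecord(2);
        record.set(0, key.toString());
        record.set(1, sum);
        context.write(null, record);
    }
}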
Method Summary
org.apache.hadoop.mapreduce.OutputCommitter
          getOutputCommitter(org.apache.hadoop.mapreduce.TaskAttemptContext context)
          Get the output committer for this output format.
org.apache.hadoop.mapreduce.RecordWriter<org.apache.hadoop.io.WritableComparable<?>,HCatRecord>
          getRecordWriter(org.apache.hadoop.mapreduce.TaskAttemptContext context)
          Get the record writer for the job.
static void
          setOutput(org.apache.hadoop.mapreduce.Job job, java.lang.String dbname,
                    java.lang.String tablename, java.lang.String location,
                    HCatSchema partitionSchema, java.util.List<java.lang.String> partitionValues,
                    HCatSchema columnSchema)
static void
          setOutput(org.apache.hadoop.mapreduce.Job job, java.lang.String dbname,
                    java.lang.String tablename, java.lang.String location,
                    HCatSchema partitionSchema, java.util.List<java.lang.String> partitionValues,
                    HCatSchema columnSchema, java.lang.String isdname, java.lang.String osdname,
                    java.lang.String ifname, java.lang.String ofname,
                    java.lang.String serializationLib)

Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
HCatEximOutputFormat
public HCatEximOutputFormat()
getRecordWriter
public org.apache.hadoop.mapreduce.RecordWriter<org.apache.hadoop.io.WritableComparable<?>,HCatRecord> getRecordWriter(org.apache.hadoop.mapreduce.TaskAttemptContext context)
throws java.io.IOException,
java.lang.InterruptedException
- Get the record writer for the job. Uses the Table's default OutputStorageDriver
to get the record writer.
- Specified by:
getRecordWriter
in class org.apache.hadoop.mapreduce.OutputFormat<org.apache.hadoop.io.WritableComparable<?>,HCatRecord>
- Parameters:
  context - the information about the current task.
- Returns:
  a RecordWriter to write the output for the job.
- Throws:
java.io.IOException
java.lang.InterruptedException
getOutputCommitter
public org.apache.hadoop.mapreduce.OutputCommitter getOutputCommitter(org.apache.hadoop.mapreduce.TaskAttemptContext context)
throws java.io.IOException,
java.lang.InterruptedException
- Get the output committer for this output format. This is responsible
for ensuring the output is committed correctly.
- Specified by:
getOutputCommitter
in class org.apache.hadoop.mapreduce.OutputFormat<org.apache.hadoop.io.WritableComparable<?>,HCatRecord>
- Parameters:
  context - the task context
- Returns:
  an output committer
- Throws:
java.io.IOException
java.lang.InterruptedException
setOutput
public static void setOutput(org.apache.hadoop.mapreduce.Job job,
java.lang.String dbname,
java.lang.String tablename,
java.lang.String location,
HCatSchema partitionSchema,
java.util.List<java.lang.String> partitionValues,
HCatSchema columnSchema)
throws HCatException
- Throws:
HCatException
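A driver-side sketch of this overload follows. The database, table, location, partition values, and columns are made-up placeholders, and the schema and record classes (HCatSchema, HCatFieldSchema, DefaultHCatRecord) are assumed from the standard HCatalog packages:

import java.util.Arrays;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.WritableComparable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hcatalog.data.DefaultHCatRecord;
import org.apache.hcatalog.data.schema.HCatFieldSchema;
import org.apache.hcatalog.data.schema.HCatSchema;
import org.apache.hcatalog.mapreduce.HCatEximOutputFormat;

public class EximExportDriver {
    public static void main(String[] args) throws Exception {
        Job job = new Job(new Configuration(), "hcat-exim-export");
        job.setJarByClass(EximExportDriver.class);

        // Column schema of the HCatRecords the job writes (illustrative columns).
        HCatSchema columns = new HCatSchema(Arrays.asList(
            new HCatFieldSchema("name", HCatFieldSchema.Type.STRING, null),
            new HCatFieldSchema("count", HCatFieldSchema.Type.INT, null)));

        // Partition schema plus the values of the single partition written by this job.
        HCatSchema partitionSchema = new HCatSchema(Arrays.asList(
            new HCatFieldSchema("datestamp", HCatFieldSchema.Type.STRING, null)));

        // Describe the export target directly; no HCat server is contacted.
        HCatEximOutputFormat.setOutput(job, "default", "page_counts",
            "/user/exports/page_counts", partitionSchema,
            Arrays.asList("20110101"), columns);

        job.setOutputFormatClass(HCatEximOutputFormat.class);
        job.setOutputKeyClass(WritableComparable.class);
        job.setOutputValueClass(DefaultHCatRecord.class);
        // ... configure the mapper, reducer, and input format, then submit the job.
    }
}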
setOutput
public static void setOutput(org.apache.hadoop.mapreduce.Job job,
java.lang.String dbname,
java.lang.String tablename,
java.lang.String location,
HCatSchema partitionSchema,
java.util.List<java.lang.String> partitionValues,
HCatSchema columnSchema,
java.lang.String isdname,
java.lang.String osdname,
java.lang.String ifname,
java.lang.String ofname,
java.lang.String serializationLib)
throws HCatException
- Throws:
HCatException
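For the extended overload above, a hedged example call is sketched below, reusing the job and schemas from the previous driver sketch. The storage-driver, file-format, and SerDe class names are commonly used RCFile-based values given only as an assumption; confirm them against the HCatalog and Hive versions in use:

// The extra arguments name the input/output storage drivers, the Hive file
// formats, and the SerDe to record alongside the exported data.
HCatEximOutputFormat.setOutput(job, "default", "page_counts",
    "/user/exports/page_counts", partitionSchema,
    Arrays.asList("20110101"), columns,
    "org.apache.hcatalog.rcfile.RCFileInputDriver",          // isdname
    "org.apache.hcatalog.rcfile.RCFileOutputDriver",         // osdname
    "org.apache.hadoop.hive.ql.io.RCFileInputFormat",        // ifname
    "org.apache.hadoop.hive.ql.io.RCFileOutputFormat",       // ofname
    "org.apache.hadoop.hive.serde2.columnar.ColumnarSerDe"); // serializationLib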