org.apache.hcatalog.rcfile
Class RCFileMapReduceOutputFormat
java.lang.Object
  org.apache.hadoop.mapreduce.OutputFormat<K,V>
    org.apache.hadoop.mapreduce.lib.output.FileOutputFormat<org.apache.hadoop.io.WritableComparable<?>,org.apache.hadoop.hive.serde2.columnar.BytesRefArrayWritable>
      org.apache.hcatalog.rcfile.RCFileMapReduceOutputFormat
public class RCFileMapReduceOutputFormat
extends org.apache.hadoop.mapreduce.lib.output.FileOutputFormat<org.apache.hadoop.io.WritableComparable<?>,org.apache.hadoop.hive.serde2.columnar.BytesRefArrayWritable>
The RC file output format using the new Hadoop mapreduce APIs.
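As a sketch of typical usage, the output format is wired into a job driver before submission. This is an illustrative example, not part of the documented API: the job name, column count, and output path are placeholders, and keys are assumed to be ignored by the RCFile writer (any WritableComparable key type works).

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hcatalog.rcfile.RCFileMapReduceOutputFormat;

public class RCFileJobSetup {
    public static Job configure(Configuration conf) throws Exception {
        // Tell the RCFile writer how many columns each row carries.
        // This must match the width of the BytesRefArrayWritable values
        // the job emits (3 here is a placeholder).
        RCFileMapReduceOutputFormat.setColumnNumber(conf, 3);

        Job job = new Job(conf, "rcfile-write");
        job.setOutputFormatClass(RCFileMapReduceOutputFormat.class);
        // Keys are not stored in the RCFile; NullWritable keeps them empty.
        job.setOutputKeyClass(NullWritable.class);
        job.setOutputValueClass(
            org.apache.hadoop.hive.serde2.columnar.BytesRefArrayWritable.class);
        FileOutputFormat.setOutputPath(job, new Path("/tmp/rcfile-out"));
        return job;
    }
}
```

Note that setColumnNumber must be called on the Configuration before the Job is constructed from it, so the setting is captured in the job's configuration.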
Fields inherited from class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
BASE_OUTPUT_NAME, PART
Method Summary
org.apache.hadoop.mapreduce.RecordWriter<org.apache.hadoop.io.WritableComparable<?>,org.apache.hadoop.hive.serde2.columnar.BytesRefArrayWritable>
          getRecordWriter(org.apache.hadoop.mapreduce.TaskAttemptContext task)
static void
          setColumnNumber(org.apache.hadoop.conf.Configuration conf, int columnNum)
          Sets the number of columns in the given configuration.
Methods inherited from class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
checkOutputSpecs, getCompressOutput, getDefaultWorkFile, getOutputCommitter, getOutputCompressorClass, getOutputName, getOutputPath, getPathForWorkFile, getUniqueFile, getWorkOutputPath, setCompressOutput, setOutputCompressorClass, setOutputName, setOutputPath
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
RCFileMapReduceOutputFormat
public RCFileMapReduceOutputFormat()
setColumnNumber
public static void setColumnNumber(org.apache.hadoop.conf.Configuration conf,
int columnNum)
- Sets the number of columns in the given configuration.
- Parameters:
conf - configuration instance on which to set the column number
columnNum - number of columns for the RCFile's Writer
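The rows a job emits must match the column count passed to setColumnNumber. A minimal, illustrative reducer sketch (the key/value types and comma-splitting logic are assumptions, not part of the documented API):

```java
import java.io.IOException;
import org.apache.hadoop.hive.serde2.columnar.BytesRefArrayWritable;
import org.apache.hadoop.hive.serde2.columnar.BytesRefWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

// Hypothetical reducer: turns each comma-separated line into one
// 3-column RCFile row. The width (3) must equal the value passed to
// RCFileMapReduceOutputFormat.setColumnNumber(conf, 3).
public class RowReducer
        extends Reducer<Text, Text, NullWritable, BytesRefArrayWritable> {
    @Override
    protected void reduce(Text key, Iterable<Text> values, Context ctx)
            throws IOException, InterruptedException {
        for (Text value : values) {
            String[] fields = value.toString().split(",", 3);
            BytesRefArrayWritable row = new BytesRefArrayWritable(fields.length);
            for (int i = 0; i < fields.length; i++) {
                byte[] bytes = fields[i].getBytes("UTF-8");
                // Each cell is a byte range into the field's buffer.
                row.set(i, new BytesRefWritable(bytes, 0, bytes.length));
            }
            ctx.write(NullWritable.get(), row);
        }
    }
}
```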
getRecordWriter
public org.apache.hadoop.mapreduce.RecordWriter<org.apache.hadoop.io.WritableComparable<?>,org.apache.hadoop.hive.serde2.columnar.BytesRefArrayWritable> getRecordWriter(org.apache.hadoop.mapreduce.TaskAttemptContext task)
throws java.io.IOException,
java.lang.InterruptedException
- Specified by:
getRecordWriter
in class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat<org.apache.hadoop.io.WritableComparable<?>,org.apache.hadoop.hive.serde2.columnar.BytesRefArrayWritable>
- Throws:
java.io.IOException
java.lang.InterruptedException