org.apache.hcatalog.mapreduce
Class HCatEximInputFormat

java.lang.Object
  extended by org.apache.hadoop.mapreduce.InputFormat<org.apache.hadoop.io.WritableComparable,HCatRecord>
      extended by org.apache.hcatalog.mapreduce.HCatBaseInputFormat
          extended by org.apache.hcatalog.mapreduce.HCatEximInputFormat

public class HCatEximInputFormat
extends HCatBaseInputFormat

The InputFormat to use to read data from HCat.


Constructor Summary
HCatEximInputFormat()
           
 
Method Summary
static java.util.List<HCatSchema> setInput(org.apache.hadoop.mapreduce.Job job, java.lang.String location, java.util.Map<java.lang.String,java.lang.String> partitionFilter)
          Set the input to use for the Job.
 
Methods inherited from class org.apache.hcatalog.mapreduce.HCatBaseInputFormat
createRecordReader, getOutputSchema, getSplits, getTableSchema, setOutputSchema
 
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
 

Constructor Detail

HCatEximInputFormat

public HCatEximInputFormat()

Method Detail

setInput

public static java.util.List<HCatSchema> setInput(org.apache.hadoop.mapreduce.Job job,
                                                  java.lang.String location,
                                                  java.util.Map<java.lang.String,java.lang.String> partitionFilter)
                                           throws java.io.IOException
Set the input to use for the Job. This queries the metadata file with the specified partition predicates, gets the matching partitions, and puts the information in the job's conf object. The inputInfo object is updated with information needed in the client context.

Parameters:
job - the job object
location - the filesystem location of the exported data to read
partitionFilter - the map of partition key/value predicates used to select the matching partitions
Returns:
two hcat schemas, for the table columns and the partition keys
Throws:
java.io.IOException - the exception in communicating with the metadata server
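A minimal driver sketch showing how setInput might be wired into a MapReduce job. The export location (/user/hive/exports/mytable), the partition key (ds) and its value, and the driver class name are hypothetical, and running this requires the HCatalog and Hadoop jars on the classpath:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hcatalog.data.schema.HCatSchema;
import org.apache.hcatalog.mapreduce.HCatEximInputFormat;

public class EximReadDriver {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = new Job(conf, "read-exim-data");
    job.setJarByClass(EximReadDriver.class);
    job.setInputFormatClass(HCatEximInputFormat.class);

    // Hypothetical partition predicate: select only partitions where ds = 2011-01-01.
    Map<String, String> partitionFilter = new HashMap<String, String>();
    partitionFilter.put("ds", "2011-01-01");

    // Configure the job's input from the (hypothetical) export location.
    // The returned list holds two schemas: the table columns and the partition keys.
    List<HCatSchema> schemas =
        HCatEximInputFormat.setInput(job, "/user/hive/exports/mytable", partitionFilter);
    HCatSchema tableSchema = schemas.get(0);
    HCatSchema partitionSchema = schemas.get(1);

    // ... set the mapper, output format, and output path, then submit the job.
  }
}
```

The returned schemas can be used in the client to set up projections or to validate the mapper's expectations before the job is submitted.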