| Changed Classes | Description |
| --- | --- |
| Launcher | |
| MRCompiler | The compiler that compiles a given physical plan into a DAG of MapReduce operators, which can then be converted into the JobControl structure. |
| MapReduceLauncher | Main class that launches Pig for MapReduce. |
| MapReduceOper | An operator model for a MapReduce job. |
| MapReducePOStoreImpl | This class is used to have a POStore write to DFS via an output collector/record writer. |
| MergeJoinIndexer | Used to generate an on-the-fly index for doing a merge join efficiently. |
| PhyPlanSetter | Sets the parent plan for all physical operators. |
| PigHadoopLogger | A singleton class that implements the PigLogger interface for use in the map-reduce context. |
| PigInputFormat | |
| PigOutputCommitter | A specialization of the default FileOutputCommitter that allows Pig to in turn delegate calls to the OutputCommitter(s) of the StoreFunc(s)' OutputFormat(s); see the delegation sketch after this table. |
| PigRecordReader | A wrapper around the actual RecordReader and LoadFunc, needed for two reasons: 1) to intercept the initialize call from Hadoop and initialize the underlying RecordReader with the right Context object, which is done by looking up the Context corresponding to the input split this reader is supposed to process; 2) to give Hadoop consistent key-value types (Text and Tuple, respectively), so PigRecordReader calls the underlying loader's getNext() to obtain the Tuple value, while the key is a null Text since the key is not used as input to map() in Pig. A sketch of this wrapper pattern follows the table. |
| PigSecondaryKeyComparator | |
| PigSplit | The main split class that maintains important information about the input split. |
| SampleOptimizer | A visitor to optimize plans in which a sample job immediately follows a load/store-only MR job. |
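The wrapper pattern described for PigRecordReader can be illustrated with a minimal sketch. This is not the actual Pig source: the class name `SimplifiedPigRecordReader`, its constructor, and the direct pass-through in `initialize()` are assumptions for illustration (the real class also looks up the TaskAttemptContext that matches the split before initializing the wrapped reader).

```java
import java.io.IOException;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.pig.LoadFunc;
import org.apache.pig.data.Tuple;

// Hypothetical, simplified illustration of the wrapper described above.
public class SimplifiedPigRecordReader extends RecordReader<Text, Tuple> {

    private final RecordReader<?, ?> wrappedReader; // the loader's own reader
    private final LoadFunc loadFunc;                // produces Tuples from that reader
    private Tuple currentValue;

    public SimplifiedPigRecordReader(RecordReader<?, ?> wrappedReader, LoadFunc loadFunc) {
        this.wrappedReader = wrappedReader;
        this.loadFunc = loadFunc;
    }

    @Override
    public void initialize(InputSplit split, TaskAttemptContext context)
            throws IOException, InterruptedException {
        // Reason 1: intercept initialize() and forward it to the underlying reader.
        wrappedReader.initialize(split, context);
    }

    @Override
    public boolean nextKeyValue() throws IOException, InterruptedException {
        // Reason 2: the map() input types stay fixed at (Text, Tuple); the value
        // comes from the loader, the key is never populated.
        currentValue = loadFunc.getNext();
        return currentValue != null;
    }

    @Override
    public Text getCurrentKey() {
        return null; // the key is unused in Pig's map() input
    }

    @Override
    public Tuple getCurrentValue() {
        return currentValue;
    }

    @Override
    public float getProgress() throws IOException, InterruptedException {
        return wrappedReader.getProgress();
    }

    @Override
    public void close() throws IOException {
        wrappedReader.close();
    }
}
```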
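The delegation described for PigOutputCommitter follows the same idea: one committer fans each lifecycle call out to the committers of the individual stores' OutputFormats. The sketch below is a hypothetical `DelegatingOutputCommitter` built directly on Hadoop's `OutputCommitter` API, not the actual Pig class or its constructor.

```java
import java.io.IOException;
import java.util.List;

import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.OutputCommitter;
import org.apache.hadoop.mapreduce.TaskAttemptContext;

// Hypothetical sketch of fanning committer calls out to per-store committers.
public class DelegatingOutputCommitter extends OutputCommitter {

    private final List<OutputCommitter> storeCommitters;

    public DelegatingOutputCommitter(List<OutputCommitter> storeCommitters) {
        this.storeCommitters = storeCommitters;
    }

    @Override
    public void setupJob(JobContext context) throws IOException {
        for (OutputCommitter c : storeCommitters) {
            c.setupJob(context);
        }
    }

    @Override
    public void setupTask(TaskAttemptContext context) throws IOException {
        for (OutputCommitter c : storeCommitters) {
            c.setupTask(context);
        }
    }

    @Override
    public boolean needsTaskCommit(TaskAttemptContext context) throws IOException {
        // Commit the task if any of the underlying committers needs it.
        boolean needsCommit = false;
        for (OutputCommitter c : storeCommitters) {
            needsCommit |= c.needsTaskCommit(context);
        }
        return needsCommit;
    }

    @Override
    public void commitTask(TaskAttemptContext context) throws IOException {
        for (OutputCommitter c : storeCommitters) {
            if (c.needsTaskCommit(context)) {
                c.commitTask(context);
            }
        }
    }

    @Override
    public void abortTask(TaskAttemptContext context) throws IOException {
        for (OutputCommitter c : storeCommitters) {
            c.abortTask(context);
        }
    }
}
```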