org.apache.pig.piggybank.storage
Class CSVExcelStorage
java.lang.Object
org.apache.pig.LoadFunc
org.apache.pig.FileInputLoadFunc
org.apache.pig.builtin.PigStorage
org.apache.pig.piggybank.storage.CSVExcelStorage
- All Implemented Interfaces:
- LoadMetadata, LoadPushDown, OrderedLoadFunc, StoreFuncInterface, StoreMetadata
public class CSVExcelStorage
- extends PigStorage
- implements StoreFuncInterface, LoadPushDown
CSV loading and storing with support for multi-line fields,
and escaping of delimiters and double quotes within fields;
uses CSV conventions of Excel 2007.
Arguments allow for control over:
Which field delimiter to use (default = ',')
Whether line breaks are allowed inside of fields (YES_MULTILINE = yes, NO_MULTILINE = no, default = no)
How line breaks are to be written when storing (UNIX = LF, WINDOWS = CRLF, NOCHANGE = system default, default = system default)
What to do with header rows (first line of each file):
On load: READ_INPUT_HEADER = read header rows, SKIP_INPUT_HEADER = do not read header rows, default = read header rows
On store: WRITE_OUTPUT_HEADER = write a header row, SKIP_OUTPUT_HEADER = do not write a header row, default = do not write a header row
Usage:
STORE x INTO ''
USING org.apache.pig.piggybank.storage.CSVExcelStorage(
[DELIMITER[,
{YES_MULTILINE | NO_MULTILINE}[,
{UNIX | WINDOWS | NOCHANGE}[,
{READ_INPUT_HEADER | SKIP_INPUT_HEADER | WRITE_OUTPUT_HEADER | SKIP_OUTPUT_HEADER}]]]]
);
Linebreak settings are only used during store; during load, no conversion is performed.
WARNING: A danger with enabling multiline fields during load is that unbalanced
double quotes will cause slurping up of input until a balancing double
quote is found, or until something breaks. If you are not expecting
newlines within fields it is therefore more robust to use NO_MULTILINE,
which is the default for that reason.
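For example, a load/store pair using these options might look like the following sketch. This is illustrative only: the jar registration, file paths, relation names, and schema below are assumptions, not part of this class.
REGISTER piggybank.jar;  -- the piggybank jar must be registered so Pig can find CSVExcelStorage
-- Load a comma-delimited CSV, skipping the header row of each input file.
-- NO_MULTILINE (the default) is kept because no embedded line breaks are expected.
data = LOAD 'input/sales.csv'
    USING org.apache.pig.piggybank.storage.CSVExcelStorage(',', 'NO_MULTILINE', 'NOCHANGE', 'SKIP_INPUT_HEADER')
    AS (id:int, name:chararray, amount:double);
-- Store the result with a header row and UNIX (LF) line endings.
STORE data INTO 'output/sales_clean'
    USING org.apache.pig.piggybank.storage.CSVExcelStorage(',', 'NO_MULTILINE', 'UNIX', 'WRITE_OUTPUT_HEADER');
-- If fields may legitimately contain line breaks, YES_MULTILINE can be passed instead,
-- subject to the unbalanced-quote risk described in the warning above:
notes = LOAD 'input/notes.csv'
    USING org.apache.pig.piggybank.storage.CSVExcelStorage(',', 'YES_MULTILINE');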
This is Andreas Paepcke's CSVExcelStorage with a few modifications.
Methods inherited from class org.apache.pig.builtin.PigStorage:
cleanupOnFailure, cleanupOnSuccess, equals, equals, getOutputFormat, getPartitionKeys, getSchema, getStatistics, hashCode, readField, relToAbsPathForStoreLocation, setPartitionFilter, setStoreFuncUDFContextSignature, setStoreLocation, storeSchema, storeStatistics
LINEFEED
protected static final byte LINEFEED
- See Also:
- Constant Field Values
DOUBLE_QUOTE
protected static final byte DOUBLE_QUOTE
- See Also:
- Constant Field Values
RECORD_DEL
protected static final byte RECORD_DEL
- See Also:
- Constant Field Values
in
protected org.apache.hadoop.mapreduce.RecordReader in
schema
protected ResourceSchema schema
CSVExcelStorage
public CSVExcelStorage()
CSVExcelStorage
public CSVExcelStorage(String delimiter)
CSVExcelStorage
public CSVExcelStorage(String delimiter,
String multilineTreatmentStr)
CSVExcelStorage
public CSVExcelStorage(String delimiter,
String multilineTreatmentStr,
String eolTreatmentStr)
CSVExcelStorage
public CSVExcelStorage(String delimiter,
String multilineTreatmentStr,
String eolTreatmentStr,
String headerTreatmentStr)
checkSchema
public void checkSchema(ResourceSchema s)
throws IOException
- Description copied from interface:
StoreFuncInterface
- Set the schema for data to be stored. This will be called on the
front end during planning if the store is associated with a schema.
A Store function should implement this function to
check that a given schema is acceptable to it. For example, it
can check that the correct partition keys are included;
a storage function to be written directly to an OutputFormat can
make sure the schema will translate in a well defined way.
- Specified by:
checkSchema
in interface StoreFuncInterface
- Overrides:
checkSchema
in class PigStorage
- Parameters:
s
- to be checked
- Throws:
IOException
- if this schema is not acceptable. It should include
a detailed error message indicating what is wrong with the schema.
prepareToWrite
public void prepareToWrite(org.apache.hadoop.mapreduce.RecordWriter writer)
- Description copied from interface:
StoreFuncInterface
- Initialize StoreFuncInterface to write data. This will be called during
execution before the call to putNext.
- Specified by:
prepareToWrite
in interface StoreFuncInterface
- Overrides:
prepareToWrite
in class PigStorage
- Parameters:
writer
- RecordWriter to use.
putNext
public void putNext(Tuple tupleToWrite)
throws IOException
- Description copied from interface:
StoreFuncInterface
- Write a tuple to the data store.
- Specified by:
putNext
in interface StoreFuncInterface
- Overrides:
putNext
in class PigStorage
- Parameters:
tupleToWrite
- the tuple to store.
- Throws:
IOException
- if an exception occurs during the write
getNext
public Tuple getNext()
throws IOException
- Description copied from class:
LoadFunc
- Retrieves the next tuple to be processed. Implementations should NOT reuse
tuple objects (or inner member objects) they return across calls and
should return a different tuple object in each call.
- Overrides:
getNext
in class PigStorage
- Returns:
- the next tuple to be processed or null if there are no more tuples
to be processed.
- Throws:
IOException
- if there is an exception while retrieving the next
tuple
setLocation
public void setLocation(String location,
org.apache.hadoop.mapreduce.Job job)
throws IOException
- Description copied from class:
LoadFunc
- Communicate to the loader the location of the object(s) being loaded.
The location string passed to the LoadFunc here is the return value of
LoadFunc.relativeToAbsolutePath(String, Path). Implementations
should use this method to communicate the location (and any other information)
to its underlying InputFormat through the Job object.
This method will be called in the frontend and backend multiple times. Implementations
should bear in mind that this method is called multiple times and should
ensure there are no inconsistent side effects due to the multiple calls.
- Overrides:
setLocation
in class PigStorage
- Parameters:
location
- Location as returned by
LoadFunc.relativeToAbsolutePath(String, Path)
job
- the Job object to store or retrieve earlier stored information from the UDFContext
- Throws:
IOException
- if the location is not valid.
getInputFormat
public org.apache.hadoop.mapreduce.InputFormat getInputFormat()
- Description copied from class:
LoadFunc
- This will be called during planning on the front end. This is the
instance of InputFormat (rather than the class name) because the
load function may need to instantiate the InputFormat in order
to control how it is constructed.
- Overrides:
getInputFormat
in class PigStorage
- Returns:
- the InputFormat associated with this loader.
prepareToRead
public void prepareToRead(org.apache.hadoop.mapreduce.RecordReader reader,
PigSplit split)
- Description copied from class:
LoadFunc
- Initializes LoadFunc for reading data. This will be called during execution
before any calls to getNext. The RecordReader needs to be passed here because
it has been instantiated for a particular InputSplit.
- Overrides:
prepareToRead
in class PigStorage
- Parameters:
reader
- RecordReader to be used by this instance of the LoadFunc
split
- The input PigSplit to process
pushProjection
public LoadPushDown.RequiredFieldResponse pushProjection(LoadPushDown.RequiredFieldList requiredFieldList)
throws FrontendException
- Description copied from interface:
LoadPushDown
- Indicate to the loader fields that will be needed. This can be useful for
loaders that access data that is stored in a columnar format where indicating
columns to be accessed ahead of time will save scans. This method will
not be invoked by the Pig runtime if all fields are required. So implementations
should assume that if this method is not invoked, then all fields from
the input are required. If the loader function cannot make use of this
information, it is free to ignore it by returning an appropriate response.
- Specified by:
pushProjection
in interface LoadPushDown
- Overrides:
pushProjection
in class PigStorage
- Parameters:
requiredFieldList
- RequiredFieldList indicating which columns will be needed.
This structure is read only. User cannot make change to it inside pushProjection.
- Returns:
- Indicates which fields will be returned
- Throws:
FrontendException
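As a hedged illustration of when this matters (paths, relation names, and fields below are hypothetical): if a script only references a subset of the loaded columns, Pig may call pushProjection so the loader can avoid materializing the unused fields.
data = LOAD 'input/sales.csv'
    USING org.apache.pig.piggybank.storage.CSVExcelStorage(',', 'NO_MULTILINE', 'NOCHANGE', 'SKIP_INPUT_HEADER')
    AS (id:int, name:chararray, amount:double);
-- Only name and amount are used downstream, so Pig's column pruning may pass a
-- RequiredFieldList covering just those two fields to this loader.
projected = FOREACH data GENERATE name, amount;
DUMP projected;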
setUDFContextSignature
public void setUDFContextSignature(String signature)
- Description copied from class:
LoadFunc
- This method will be called by Pig both in the front end and back end to
pass a unique signature to the LoadFunc. The signature can be used
to store into the UDFContext any information which the LoadFunc
needs to store between various method invocations in the
front end and back end. A use case is to store the LoadPushDown.RequiredFieldList
passed to it in LoadPushDown.pushProjection(RequiredFieldList) for
use in the back end before returning tuples in LoadFunc.getNext().
This method will be called before other methods in LoadFunc.
- Overrides:
setUDFContextSignature
in class PigStorage
- Parameters:
signature
- a unique signature to identify this LoadFunc
getFeatures
public List<LoadPushDown.OperatorSet> getFeatures()
- Description copied from interface:
LoadPushDown
- Determine the operators that can be pushed to the loader.
Note that by indicating a loader can accept a certain operator
(such as selection) the loader is not promising that it can handle
all selections. When it is passed the actual operators to
push down it will still have a chance to reject them.
- Specified by:
getFeatures
in interface LoadPushDown
- Overrides:
getFeatures
in class PigStorage
- Returns:
- list of all features that the loader can support