org.apache.pig.piggybank.storage
Class DBStorage

java.lang.Object
  org.apache.pig.StoreFunc
    org.apache.pig.piggybank.storage.DBStorage

All Implemented Interfaces:
    StoreFuncInterface

public class DBStorage
extends StoreFunc
Constructor Summary

DBStorage(String driver, String jdbcURL, String insertQuery)

DBStorage(String driver, String jdbcURL, String user, String pass, String insertQuery)

DBStorage(String driver, String jdbcURL, String user, String pass, String insertQuery, String batchSize)
Method Summary

org.apache.hadoop.mapreduce.OutputFormat getOutputFormat()
    Return the OutputFormat associated with StoreFunc.

void prepareToWrite(org.apache.hadoop.mapreduce.RecordWriter writer)
    Initialise the database connection and prepared statement here.

void putNext(Tuple tuple)
    Write the tuple directly to the database.

void setStoreLocation(String location, org.apache.hadoop.mapreduce.Job job)
    Communicate to the storer the location where the data needs to be stored.

Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
Constructor Detail

DBStorage

public DBStorage(String driver, String jdbcURL, String insertQuery)

DBStorage

public DBStorage(String driver, String jdbcURL, String user, String pass, String insertQuery)
          throws SQLException

    Throws:
        SQLException

DBStorage

public DBStorage(String driver, String jdbcURL, String user, String pass, String insertQuery, String batchSize)
          throws RuntimeException

    Throws:
        RuntimeException
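In a Pig script the constructor arguments are normally supplied as quoted strings in a STORE ... USING clause; the sketch below constructs the storer directly in Java instead. The driver class, connection URL, credentials, INSERT statement and batch size are all placeholder values, not defaults of the Piggybank API.

import org.apache.pig.piggybank.storage.DBStorage;

public class DBStorageConstruction {
    public static void main(String[] args) throws Exception {
        // All connection details here are placeholder values.
        DBStorage storer = new DBStorage(
            "com.mysql.jdbc.Driver",                         // JDBC driver class
            "jdbc:mysql://dbhost:3306/sales",                // JDBC connection URL
            "pig",                                           // database user
            "secret",                                        // database password
            "INSERT INTO orders (id, amount) VALUES (?, ?)", // parameterised insert
            "100");                                          // rows per JDBC batch
    }
}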
Method Detail

putNext

public void putNext(Tuple tuple)
             throws IOException

    Write the tuple directly to the database.

    Specified by:
        putNext in interface StoreFuncInterface
    Specified by:
        putNext in class StoreFunc
    Parameters:
        tuple - the tuple to store.
    Throws:
        IOException - if an exception occurs during the write
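How a tuple maps onto the parameterised INSERT is the heart of this method. The helper below is an illustrative sketch, not Piggybank code: it assumes a PreparedStatement built from the user-supplied insertQuery and binds each tuple field to the matching ?-placeholder.

import java.io.IOException;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import org.apache.pig.data.Tuple;

// Illustrative helper, not part of Piggybank: bind a Pig tuple to a
// PreparedStatement created from the user-supplied INSERT query.
public final class TupleBinder {
    private TupleBinder() {}

    public static void bindAndQueue(Tuple tuple, PreparedStatement ps)
            throws IOException, SQLException {
        for (int i = 0; i < tuple.size(); i++) {
            // Tuple fields are 0-based, JDBC parameters are 1-based.
            ps.setObject(i + 1, tuple.get(i));
        }
        ps.addBatch(); // queued rows are later executed as a JDBC batch
    }
}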
getOutputFormat

public org.apache.hadoop.mapreduce.OutputFormat getOutputFormat()
                                                throws IOException

    Description copied from class: StoreFunc
    Return the OutputFormat associated with StoreFunc. This will be called
    on the front end during planning and on the back end during execution.

    Specified by:
        getOutputFormat in interface StoreFuncInterface
    Specified by:
        getOutputFormat in class StoreFunc
    Returns:
        the OutputFormat associated with StoreFunc
    Throws:
        IOException - if an exception occurs while constructing the OutputFormat
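Because rows are sent to the database over JDBC rather than through Hadoop's output path, the OutputFormat a storer like this returns can be essentially inert. The class below is an assumed shape for illustration only, not the actual class DBStorage returns.

import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.OutputCommitter;
import org.apache.hadoop.mapreduce.OutputFormat;
import org.apache.hadoop.mapreduce.RecordWriter;
import org.apache.hadoop.mapreduce.TaskAttemptContext;

// Assumed shape for illustration: an OutputFormat whose writer does nothing,
// because the storer pushes rows to the database itself in putNext.
public class PassThroughOutputFormat extends OutputFormat<Object, Object> {
    @Override
    public RecordWriter<Object, Object> getRecordWriter(TaskAttemptContext ctx) {
        return new RecordWriter<Object, Object>() {
            @Override public void write(Object key, Object value) { }
            @Override public void close(TaskAttemptContext c) { }
        };
    }

    @Override
    public void checkOutputSpecs(JobContext ctx) {
        // No filesystem output location to validate.
    }

    @Override
    public OutputCommitter getOutputCommitter(TaskAttemptContext ctx) {
        // A committer with no filesystem work to perform.
        return new OutputCommitter() {
            @Override public void setupJob(JobContext c) { }
            @Override public void setupTask(TaskAttemptContext c) { }
            @Override public boolean needsTaskCommit(TaskAttemptContext c) { return false; }
            @Override public void commitTask(TaskAttemptContext c) { }
            @Override public void abortTask(TaskAttemptContext c) { }
        };
    }
}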
prepareToWrite

public void prepareToWrite(org.apache.hadoop.mapreduce.RecordWriter writer)
                    throws IOException

    Initialise the database connection and prepared statement here.

    Specified by:
        prepareToWrite in interface StoreFuncInterface
    Specified by:
        prepareToWrite in class StoreFunc
    Parameters:
        writer - RecordWriter to use.
    Throws:
        IOException - if an exception occurs during initialization
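A JDBC-backed storer can ignore the RecordWriter argument; what matters is establishing the connection and compiling the insert statement once per task. Below is a minimal sketch of that setup, assuming the connection details were captured by the constructor; it is not the actual Piggybank implementation.

import java.io.IOException;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

// Minimal sketch of the JDBC setup prepareToWrite performs, assuming the
// connection details come from the constructor arguments shown above.
public class JdbcSetupSketch {
    private final String driver, jdbcURL, user, pass, insertQuery;
    private Connection con;
    private PreparedStatement ps;

    public JdbcSetupSketch(String driver, String jdbcURL, String user,
                           String pass, String insertQuery) {
        this.driver = driver;
        this.jdbcURL = jdbcURL;
        this.user = user;
        this.pass = pass;
        this.insertQuery = insertQuery;
    }

    public void prepareToWrite() throws IOException {
        try {
            Class.forName(driver);                  // register the JDBC driver
            con = DriverManager.getConnection(jdbcURL, user, pass);
            ps = con.prepareStatement(insertQuery); // reused for every tuple
        } catch (Exception e) {
            throw new IOException("Unable to initialise database connection", e);
        }
    }
}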
setStoreLocation

public void setStoreLocation(String location, org.apache.hadoop.mapreduce.Job job)
                      throws IOException

    Description copied from class: StoreFunc
    Communicate to the storer the location where the data needs to be stored.
    The location string passed to the StoreFunc here is the return value of
    StoreFunc.relToAbsPathForStoreLocation(String, Path). This method will be
    called in the frontend and backend multiple times. Implementations should
    bear in mind that this method is called multiple times and should ensure
    there are no inconsistent side effects due to the multiple calls.
    StoreFunc.checkSchema(ResourceSchema) will be called before any call to
    StoreFunc.setStoreLocation(String, Job).

    Specified by:
        setStoreLocation in interface StoreFuncInterface
    Specified by:
        setStoreLocation in class StoreFunc
    Parameters:
        location - Location returned by StoreFunc.relToAbsPathForStoreLocation(String, Path)
        job - The Job object
    Throws:
        IOException - if the location is not valid.
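Since the method runs multiple times on both the frontend and the backend, an implementation should do only idempotent work here. The sketch below records the location in the job configuration under a hypothetical key name; repeated calls simply overwrite the same value.

import org.apache.hadoop.mapreduce.Job;

// Illustration of idempotent setStoreLocation work; the configuration key
// "db.storage.location" is a hypothetical name, not a Piggybank property.
public class LocationSketch {
    public void setStoreLocation(String location, Job job) {
        // Re-setting the same property on every call leaves the job
        // configuration in the same state, so repeated calls are safe.
        job.getConfiguration().set("db.storage.location", location);
    }
}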