org.apache.hadoop.hbase.mapreduce
Class IdentityTableReducer

java.lang.Object
  org.apache.hadoop.mapreduce.Reducer<KEYIN,VALUEIN,KEYOUT,org.apache.hadoop.io.Writable>
    org.apache.hadoop.hbase.mapreduce.TableReducer<org.apache.hadoop.io.Writable,org.apache.hadoop.io.Writable,org.apache.hadoop.io.Writable>
      org.apache.hadoop.hbase.mapreduce.IdentityTableReducer

public class IdentityTableReducer
extends TableReducer<org.apache.hadoop.io.Writable,org.apache.hadoop.io.Writable,org.apache.hadoop.io.Writable>
Convenience class that simply writes all values (which must be Put or Delete instances) passed to it out to the configured HBase table. This works in combination with TableOutputFormat, which actually does the writing to HBase.

Keys are passed along but ignored in TableOutputFormat. However, they can be used to control how your values will be divided up amongst the specified number of reducers.

You can also use the TableMapReduceUtil class to set up the two classes in one step:

  TableMapReduceUtil.initTableReducerJob("table", IdentityTableReducer.class, job);

This will also set the proper TableOutputFormat, which is given the table parameter. The Put or Delete instances define the row and columns implicitly.
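The one-step setup above fits into a job driver roughly as sketched below. The table name "target_table" is a placeholder for illustration, the mapper setup is elided, and the snippet assumes HBase and Hadoop of this API era (the Job constructor shown was current then) are on the classpath; it is a sketch, not a runnable program on its own.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.mapreduce.IdentityTableReducer;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.mapreduce.Job;

public class IdentityReducerJob {
    public static void main(String[] args) throws Exception {
        // Load HBase configuration (hbase-site.xml) on top of the Hadoop defaults.
        Configuration conf = HBaseConfiguration.create();
        Job job = new Job(conf, "identity-table-reducer");
        job.setJarByClass(IdentityReducerJob.class);

        // ... mapper setup goes here; the mapper must emit Put or Delete values ...

        // Wires IdentityTableReducer and TableOutputFormat to the target
        // table in one call, as described above.
        TableMapReduceUtil.initTableReducerJob(
            "target_table", IdentityTableReducer.class, job);

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```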
Nested classes/interfaces inherited from class org.apache.hadoop.mapreduce.Reducer:
  org.apache.hadoop.mapreduce.Reducer.Context
Method Summary

void reduce(org.apache.hadoop.io.Writable key,
            Iterable<org.apache.hadoop.io.Writable> values,
            org.apache.hadoop.mapreduce.Reducer.Context context)
     Writes each given record, consisting of the row key and the given values, to the configured OutputFormat.
Methods inherited from class org.apache.hadoop.mapreduce.Reducer:
  cleanup, run, setup

Methods inherited from class java.lang.Object:
  clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
IdentityTableReducer
public IdentityTableReducer()
reduce
public void reduce(org.apache.hadoop.io.Writable key,
Iterable<org.apache.hadoop.io.Writable> values,
org.apache.hadoop.mapreduce.Reducer.Context context)
throws IOException,
InterruptedException
- Writes each given record, consisting of the row key and the given values, to the configured OutputFormat. It emits the row key and each Put or Delete as a separate key/value pair.
- Overrides:
reduce
in class org.apache.hadoop.mapreduce.Reducer<org.apache.hadoop.io.Writable,org.apache.hadoop.io.Writable,org.apache.hadoop.io.Writable,org.apache.hadoop.io.Writable>
- Parameters:
key - The current row key.
values - The Put or Delete list for the given row.
context - The context of the reduce.
- Throws:
IOException
- When writing the record fails.
InterruptedException
- When the job gets interrupted.
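The pass-through behavior of reduce — one output pair per input value, key unchanged — can be illustrated with a self-contained sketch. Strings and a simple list stand in for the Writable types and Reducer.Context of the real class; the loop body mirrors where the real method would call context.write(key, value).

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class IdentityReduceSketch {
    // Stand-in for Reducer.Context: collects the emitted (key, value) pairs.
    static List<String> sink = new ArrayList<>();

    // Mirrors the identity reduce: emits the key paired with each value,
    // one output record per input value.
    static void reduce(String key, Iterable<String> values) {
        for (String v : values) {
            sink.add(key + "=" + v); // context.write(key, v) in the real class
        }
    }

    public static void main(String[] args) {
        // Three values for one row key yield three separate output pairs.
        reduce("row1", Arrays.asList("put-a", "put-b", "delete-c"));
        System.out.println(sink); // [row1=put-a, row1=put-b, row1=delete-c]
    }
}
```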
Copyright © 2011 The Apache Software Foundation. All Rights Reserved.