Packages that use Text | |
---|---|
org.apache.hadoop.contrib.utils.join | |
org.apache.hadoop.examples | Hadoop example code. |
org.apache.hadoop.examples.dancing | This package is a distributed implementation of Knuth's dancing links algorithm that can run under Hadoop. |
org.apache.hadoop.hbase | Provides HBase, the Hadoop simple database. |
org.apache.hadoop.hbase.filter | |
org.apache.hadoop.hbase.hql | |
org.apache.hadoop.hbase.io | |
org.apache.hadoop.hbase.mapred | |
org.apache.hadoop.hbase.rest | |
org.apache.hadoop.io | Generic i/o code for use when reading and writing data to the network, to databases, and to files. |
org.apache.hadoop.mapred | A software framework for easily writing applications which process vast amounts of data (multi-terabyte data-sets) in parallel on large clusters (thousands of nodes) built of commodity hardware in a reliable, fault-tolerant manner. |
org.apache.hadoop.mapred.lib | Library of generally useful mappers, reducers, and partitioners. |
org.apache.hadoop.mapred.lib.aggregate | Classes for performing various counting and aggregations. |
org.apache.hadoop.streaming | Hadoop Streaming is a utility which allows users to create and run Map-Reduce jobs with any executables (e.g. shell utilities) as the mapper and/or the reducer. |
org.apache.hadoop.tools | |
org.apache.hadoop.util | Common utilities. |
Uses of Text in org.apache.hadoop.contrib.utils.join |
---|
Fields in org.apache.hadoop.contrib.utils.join declared as Text | |
---|---|
protected Text |
DataJoinMapperBase.inputTag
|
static Text |
DataJoinReducerBase.NUM_OF_VALUES_FIELD
|
static Text |
DataJoinReducerBase.SOURCE_TAGS_FIELD
|
protected Text |
TaggedMapOutput.tag
|
Methods in org.apache.hadoop.contrib.utils.join that return Text | |
---|---|
protected abstract Text |
DataJoinMapperBase.generateGroupKey(TaggedMapOutput aRecord)
Generate a map output key. |
protected abstract Text |
DataJoinMapperBase.generateInputTag(String inputFile)
Determine the source tag based on the input file name. |
Text |
TaggedMapOutput.getTag()
|
Methods in org.apache.hadoop.contrib.utils.join with parameters of type Text | |
---|---|
void |
TaggedMapOutput.setTag(Text tag)
|
Uses of Text in org.apache.hadoop.examples |
---|
Methods in org.apache.hadoop.examples that return types with arguments of type Text | |
---|---|
ArrayList<Map.Entry<Text,Text>> |
AggregateWordCount.WordCountPlugInClass.generateKeyValPairs(Object key,
Object val)
|
ArrayList<Map.Entry<Text,Text>> |
AggregateWordHistogram.AggregateWordHistogramPlugin.generateKeyValPairs(Object key,
Object val)
Parse the given value, generate an aggregation-id/value pair per word. |
Methods in org.apache.hadoop.examples with parameters of type Text | |
---|---|
void |
WordCount.MapClass.map(LongWritable key,
Text value,
OutputCollector<Text,IntWritable> output,
Reporter reporter)
|
void |
WordCount.Reduce.reduce(Text key,
Iterator<IntWritable> values,
OutputCollector<Text,IntWritable> output,
Reporter reporter)
|
Method parameters in org.apache.hadoop.examples with type arguments of type Text | |
---|---|
void |
WordCount.MapClass.map(LongWritable key,
Text value,
OutputCollector<Text,IntWritable> output,
Reporter reporter)
|
void |
WordCount.Reduce.reduce(Text key,
Iterator<IntWritable> values,
OutputCollector<Text,IntWritable> output,
Reporter reporter)
|
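The WordCount entries above pair a map that tokenizes each Text line and emits (word, 1) with a reduce that sums the counts per key. A minimal plain-Java sketch of that logic, with no Hadoop types (the class and method names here are illustrative, not part of the API above):

```java
import java.util.Map;
import java.util.StringTokenizer;
import java.util.TreeMap;

// Sketch of the logic behind WordCount.MapClass.map and WordCount.Reduce.reduce:
// map tokenizes each input line and emits (word, 1); reduce sums the counts
// that share a key. A TreeMap stands in for the framework's sorted shuffle.
public class WordCountSketch {
    /** Tokenize lines and accumulate per-word counts, as map + reduce would. */
    public static Map<String, Integer> countWords(String[] lines) {
        Map<String, Integer> counts = new TreeMap<>();
        for (String line : lines) {
            StringTokenizer itr = new StringTokenizer(line);
            while (itr.hasMoreTokens()) {
                // map emits (word, 1); reduce sums the values sharing a key
                counts.merge(itr.nextToken(), 1, Integer::sum);
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        System.out.println(countWords(new String[] {"a b a", "b a"})); // {a=3, b=2}
    }
}
```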
Uses of Text in org.apache.hadoop.examples.dancing |
---|
Methods in org.apache.hadoop.examples.dancing with parameters of type Text | |
---|---|
void |
DistributedPentomino.PentMap.map(WritableComparable key,
Text value,
OutputCollector<Text,Text> output,
Reporter reporter)
Break the prefix string into moves (a sequence of integer row ids that will be selected for each column in order). |
Method parameters in org.apache.hadoop.examples.dancing with type arguments of type Text | |
---|---|
void |
DistributedPentomino.PentMap.map(WritableComparable key,
Text value,
OutputCollector<Text,Text> output,
Reporter reporter)
Break the prefix string into moves (a sequence of integer row ids that will be selected for each column in order). |
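PentMap.map above decodes a prefix string into a sequence of integer row ids, one per column, that seed the dancing-links search. A small sketch of that decoding step; the comma delimiter and the names below are assumptions for illustration, since the actual encoding is defined by the Pentomino code:

```java
import java.util.Arrays;

// Illustrative sketch of the prefix parsing described for
// DistributedPentomino.PentMap.map: turn an encoded prefix into the
// sequence of integer row ids ("moves") selected for each column in order.
public class PrefixSketch {
    /** Parse a prefix such as "1,5,12" into the moves {1, 5, 12}. */
    public static int[] parseMoves(String prefix) {
        String[] parts = prefix.split(",");
        int[] moves = new int[parts.length];
        for (int i = 0; i < parts.length; i++) {
            moves[i] = Integer.parseInt(parts[i].trim());
        }
        return moves;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(parseMoves("1,5,12"))); // [1, 5, 12]
    }
}
```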
Uses of Text in org.apache.hadoop.hbase |
---|
Fields in org.apache.hadoop.hbase declared as Text | |
---|---|
static Text[] |
HConstants.ALL_META_COLUMNS
All the columns in the catalog -ROOT- and .META. tables. |
static Text |
HConstants.COL_REGIONINFO
ROOT/META column family member - contains HRegionInfo |
static Text[] |
HConstants.COL_REGIONINFO_ARRAY
Array of column - contains HRegionInfo |
static Text |
HConstants.COL_SERVER
ROOT/META column family member - contains HServerAddress.toString() |
static Text |
HConstants.COL_SPLITA
the lower half of a split region |
static Text |
HConstants.COL_SPLITB
the upper half of a split region |
static Text |
HConstants.COL_STARTCODE
ROOT/META column family member - contains server start code (a long) |
static Text |
HConstants.COLUMN_FAMILY
The ROOT and META column family (Text) |
static Text[] |
HConstants.COLUMN_FAMILY_ARRAY
Array of meta column names |
static Text |
HConstants.EMPTY_START_ROW
Used by scanners, etc., when they want to start at the beginning of a region |
static Text |
HConstants.EMPTY_TEXT
An empty instance of Text. |
static Text |
HConstants.META_TABLE_NAME
The META table's name. |
static Text |
HConstants.ROOT_TABLE_NAME
The root table's name. |
protected Text |
HTable.tableName
|
Fields in org.apache.hadoop.hbase with type parameters of type Text | |
---|---|
protected TreeMap<Text,Vector<org.apache.hadoop.hbase.HAbstractScanner.ColumnMatcher>> |
HAbstractScanner.okCols
|
protected SortedMap<Text,HRegion> |
HRegionServer.onlineRegions
|
protected Map<Text,HRegion> |
HRegionServer.retiringRegions
|
Methods in org.apache.hadoop.hbase that return Text | |
---|---|
Text |
HLogEdit.getColumn()
|
Text |
HStoreKey.getColumn()
|
Text |
HRegion.getEndKey()
|
Text |
HRegionInfo.getEndKey()
|
Text |
HColumnDescriptor.getFamilyName()
|
Text |
HTableDescriptor.getName()
|
Text |
HColumnDescriptor.getName()
|
Text |
HRegion.getRegionName()
|
Text |
HRegionInfo.getRegionName()
|
Text |
HMaster.MetaRegion.getRegionName()
|
Text |
HStoreKey.getRow()
|
Text |
HStore.getRowKeyAtOrBefore(Text row,
long timestamp)
Find the key that matches row exactly, or the one that immediately precedes it. |
Text |
HRegion.getStartKey()
|
Text |
HRegionInfo.getStartKey()
|
Text |
HMaster.MetaRegion.getStartKey()
|
Text[] |
HTable.getStartKeys()
Gets the starting row key for every region in the currently open table |
Text |
HTable.getTableName()
|
static Text |
HRegionInfo.getTableNameFromRegionName(Text regionName)
Extracts table name prefix from a region name. |
Methods in org.apache.hadoop.hbase that return types with arguments of type Text | |
---|---|
TreeMap<Text,HColumnDescriptor> |
HTableDescriptor.families()
All the column families in this table. |
Map<Text,byte[]> |
HRegion.getClosestRowBefore(Text row,
long ts)
Return all the data for the row that matches row exactly, or the one that immediately precedes it, at or immediately before ts. |
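The "matches exactly, or immediately precedes" contract of getClosestRowBefore is the floor operation on a sorted key space. A sketch of the same semantics using a sorted map over string row keys (the map-backed store and names here are illustrative, not the HBase implementation):

```java
import java.util.TreeMap;

// Sketch of the lookup contract described for HRegion.getClosestRowBefore:
// return the row that matches the requested key exactly, or the row
// immediately preceding it. TreeMap.floorEntry has exactly this
// "at or before" semantics.
public class ClosestRowSketch {
    /** Row key at or immediately before the requested row, or null if none. */
    public static String closestRowBefore(TreeMap<String, byte[]> rows, String row) {
        java.util.Map.Entry<String, byte[]> e = rows.floorEntry(row);
        return e == null ? null : e.getKey();
    }

    /** Tiny demo store holding rows "row1" and "row3". */
    public static String demo(String row) {
        TreeMap<String, byte[]> rows = new TreeMap<>();
        rows.put("row1", new byte[0]);
        rows.put("row3", new byte[0]);
        return closestRowBefore(rows, row);
    }

    public static void main(String[] args) {
        System.out.println(demo("row2")); // row1 (immediately precedes "row2")
        System.out.println(demo("row3")); // row3 (exact match)
    }
}
```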
SortedMap<Text,HColumnDescriptor> |
HTableDescriptor.getFamilies()
|
Map<Text,byte[]> |
HRegion.getFull(Text row)
Fetch all the columns for the indicated row. |
Map<Text,byte[]> |
HRegion.getFull(Text row,
long ts)
Fetch all the columns for the indicated row at a specified timestamp. |
Map<Text,HMaster.MetaRegion> |
HMaster.getOnlineMetaRegions()
|
SortedMap<Text,HRegion> |
HRegionServer.getOnlineRegions()
|
SortedMap<Text,byte[]> |
HTable.getRow(Text row)
Get all the data for the specified row at the latest timestamp |
SortedMap<Text,byte[]> |
HTable.getRow(Text row,
long ts)
Get all the data for the specified row at a specified timestamp |
Iterator<Map.Entry<HStoreKey,SortedMap<Text,byte[]>>> |
HAbstractScanner.iterator()
|
Iterator<Map.Entry<HStoreKey,SortedMap<Text,byte[]>>> |
HTable.ClientScanner.iterator()
|
Methods in org.apache.hadoop.hbase with parameters of type Text | |
---|---|
void |
HMaster.addColumn(Text tableName,
HColumnDescriptor column)
Adds a column to the specified table |
void |
HBaseAdmin.addColumn(Text tableName,
HColumnDescriptor column)
Add a column to an existing table |
void |
HMasterInterface.addColumn(Text tableName,
HColumnDescriptor column)
Adds a column to the specified table |
void |
HRegionInterface.batchUpdate(Text regionName,
long timestamp,
BatchUpdate b)
Applies a batch of updates via one RPC |
void |
HRegionServer.batchUpdate(Text regionName,
long timestamp,
BatchUpdate b)
Applies a batch of updates via one RPC |
protected void |
HBaseAdmin.checkReservedTableName(Text tableName)
|
void |
HConnection.close(Text tableName)
Discard all the information about this table |
void |
RegionUnavailableListener.closed(Text regionName)
regionName is closed and no longer available. |
void |
RegionUnavailableListener.closing(Text regionName)
regionName is closing. |
void |
HTable.delete(long lockid,
Text column)
Delete the value for a column. |
void |
HTable.deleteAll(Text row)
Completely delete the row's cells. |
void |
HRegion.deleteAll(Text row,
long ts)
Delete all cells of the same age as the passed timestamp or older. |
void |
HTable.deleteAll(Text row,
long ts)
Completely delete the row's cells of the same timestamp or older. |
void |
HTable.deleteAll(Text row,
Text column)
Delete all cells that match the passed row and column. |
void |
HRegion.deleteAll(Text row,
Text column,
long ts)
Delete all cells of the same age as the passed timestamp or older. |
void |
HRegionInterface.deleteAll(Text regionName,
Text row,
long timestamp)
Delete all cells that match the passed row and whose timestamp is equal-to or older than the passed timestamp. |
void |
HTable.deleteAll(Text row,
Text column,
long ts)
Delete all cells that match the passed row and column and whose timestamp is equal-to or older than the passed timestamp. |
void |
HRegionServer.deleteAll(Text regionName,
Text row,
long timestamp)
Delete all cells that match the passed row and whose timestamp is equal-to or older than the passed timestamp. |
void |
HRegionInterface.deleteAll(Text regionName,
Text row,
Text column,
long timestamp)
Delete all cells that match the passed row and column and whose timestamp is equal-to or older than the passed timestamp. |
void |
HRegionServer.deleteAll(Text regionName,
Text row,
Text column,
long timestamp)
Delete all cells that match the passed row and column and whose timestamp is equal-to or older than the passed timestamp. |
void |
HMaster.deleteColumn(Text tableName,
Text columnName)
Deletes a column from the specified table |
void |
HBaseAdmin.deleteColumn(Text tableName,
Text columnName)
Delete a column from a table |
void |
HMasterInterface.deleteColumn(Text tableName,
Text columnName)
Deletes a column from the specified table |
void |
HTable.deleteFamily(Text row,
Text family)
Delete all cells for a row with matching column family at all timestamps. |
void |
HRegion.deleteFamily(Text row,
Text family,
long timestamp)
Delete all cells for a row with matching column family with timestamps less than or equal to timestamp. |
void |
HTable.deleteFamily(Text row,
Text family,
long timestamp)
Delete all cells for a row with matching column family with timestamps less than or equal to timestamp. |
void |
HRegionInterface.deleteFamily(Text regionName,
Text row,
Text family,
long timestamp)
Delete all cells for a row with matching column family with timestamps less than or equal to timestamp. |
void |
HRegionServer.deleteFamily(Text regionName,
Text row,
Text family,
long timestamp)
Delete all cells for a row with matching column family with timestamps less than or equal to timestamp. |
void |
HMaster.deleteTable(Text tableName)
Deletes a table |
void |
HBaseAdmin.deleteTable(Text tableName)
Deletes a table |
void |
HMasterInterface.deleteTable(Text tableName)
Deletes a table |
void |
HMaster.disableTable(Text tableName)
Take table offline |
void |
HBaseAdmin.disableTable(Text tableName)
Disables a table (takes it off-line). If it is being served, the master will tell the servers to stop serving it. |
void |
HMasterInterface.disableTable(Text tableName)
Take table offline |
void |
HMaster.enableTable(Text tableName)
Puts the table on-line (only needed if table has been previously taken offline) |
void |
HBaseAdmin.enableTable(Text tableName)
Brings a table on-line (enables it) |
void |
HMasterInterface.enableTable(Text tableName)
Puts the table on-line (only needed if table has been previously taken offline) |
static String |
HRegionInfo.encodeRegionName(Text regionName)
|
static TextSequence |
HStoreKey.extractFamily(Text col)
Extracts the column family name from a column. For example, returns 'info' if the specified column was 'info:server'. |
static TextSequence |
HStoreKey.extractFamily(Text col,
boolean withColon)
Extracts the column family name from a column. For example, returns 'info' if the specified column was 'info:server'. |
static TextSequence |
HStoreKey.extractQualifier(Text col)
Extracts the column qualifier, the portion that follows the colon (':') family/qualifier separator. |
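The extractFamily/extractQualifier pair above splits a column name such as 'info:server' at the colon separator. A plain-string sketch of that split; the class and method names here are hypothetical stand-ins for the Text-based HBase API:

```java
// Sketch of the family/qualifier split performed by HStoreKey.extractFamily
// and HStoreKey.extractQualifier, using plain strings instead of Text.
// The ':' separator comes from the descriptions above.
public class ColumnNameSketch {
    /** Returns the family portion, e.g. "info" for "info:server". */
    public static String extractFamily(String column, boolean withColon) {
        int colon = column.indexOf(':');
        if (colon < 0) return column;                       // no qualifier present
        return column.substring(0, withColon ? colon + 1 : colon);
    }

    /** Returns the qualifier portion, e.g. "server" for "info:server". */
    public static String extractQualifier(String column) {
        int colon = column.indexOf(':');
        return colon < 0 ? "" : column.substring(colon + 1);
    }

    public static void main(String[] args) {
        System.out.println(extractFamily("info:server", false)); // info
        System.out.println(extractQualifier("info:server"));     // server
    }
}
```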
byte[] |
HRegion.get(Text row,
Text column)
Fetch a single data item. |
byte[] |
HTable.get(Text row,
Text column)
Get a single value for the specified row and column |
byte[][] |
HRegion.get(Text row,
Text column,
int numVersions)
Fetch multiple versions of a single data item |
byte[][] |
HTable.get(Text row,
Text column,
int numVersions)
Get the specified number of versions of the specified row and column |
byte[][] |
HRegion.get(Text row,
Text column,
long timestamp,
int numVersions)
Fetch multiple versions of a single data item, with timestamp. |
byte[][] |
HTable.get(Text row,
Text column,
long timestamp,
int numVersions)
Get the specified number of versions of the specified row and column with the specified timestamp. |
byte[] |
HRegionInterface.get(Text regionName,
Text row,
Text column)
Retrieve a single value from the specified region for the specified row and column keys |
byte[] |
HRegionServer.get(Text regionName,
Text row,
Text column)
Retrieve a single value from the specified region for the specified row and column keys |
byte[][] |
HRegionInterface.get(Text regionName,
Text row,
Text column,
int numVersions)
Get the specified number of versions of the specified row and column |
byte[][] |
HRegionServer.get(Text regionName,
Text row,
Text column,
int numVersions)
Get the specified number of versions of the specified row and column |
byte[][] |
HRegionInterface.get(Text regionName,
Text row,
Text column,
long timestamp,
int numVersions)
Get the specified number of versions of the specified row and column with the specified timestamp. |
byte[][] |
HRegionServer.get(Text regionName,
Text row,
Text column,
long timestamp,
int numVersions)
Get the specified number of versions of the specified row and column with the specified timestamp. |
Map<Text,byte[]> |
HRegion.getClosestRowBefore(Text row,
long ts)
Return all the data for the row that matches row exactly, or the one that immediately precedes it, at or immediately before ts. |
HbaseMapWritable |
HRegionInterface.getClosestRowBefore(Text regionName,
Text row)
Return all the data for the row that matches row exactly, or the one that immediately precedes it. |
HbaseMapWritable |
HRegionServer.getClosestRowBefore(Text regionName,
Text row)
Return all the data for the row that matches row exactly, or the one that immediately precedes it. |
HbaseMapWritable |
HRegionInterface.getClosestRowBefore(Text regionName,
Text row,
long ts)
Return all the data for the row that matches row exactly, or the one that immediately precedes it, at or immediately before ts. |
HbaseMapWritable |
HRegionServer.getClosestRowBefore(Text regionName,
Text row,
long ts)
Return all the data for the row that matches row exactly, or the one that immediately precedes it, at or immediately before ts. |
Map<Text,byte[]> |
HRegion.getFull(Text row)
Fetch all the columns for the indicated row. |
Map<Text,byte[]> |
HRegion.getFull(Text row,
long ts)
Fetch all the columns for the indicated row at a specified timestamp. |
protected HRegion |
HRegionServer.getRegion(Text regionName)
Protected utility method for safely obtaining an HRegion handle. |
protected HRegion |
HRegionServer.getRegion(Text regionName,
boolean checkRetiringRegions)
Protected utility method for safely obtaining an HRegion handle. |
HRegionInfo |
HRegionInterface.getRegionInfo(Text regionName)
Get metainfo about an HRegion |
HRegionInfo |
HRegionServer.getRegionInfo(Text regionName)
Get metainfo about an HRegion |
SortedMap<Text,byte[]> |
HTable.getRow(Text row)
Get all the data for the specified row at the latest timestamp |
SortedMap<Text,byte[]> |
HTable.getRow(Text row,
long ts)
Get all the data for the specified row at a specified timestamp |
HbaseMapWritable |
HRegionInterface.getRow(Text regionName,
Text row)
Get all the data for the specified row |
HbaseMapWritable |
HRegionServer.getRow(Text regionName,
Text row)
Get all the data for the specified row |
HbaseMapWritable |
HRegionInterface.getRow(Text regionName,
Text row,
long ts)
Get all the data for the specified row at a given timestamp |
HbaseMapWritable |
HRegionServer.getRow(Text regionName,
Text row,
long ts)
Get all the data for the specified row at a given timestamp |
Text |
HStore.getRowKeyAtOrBefore(Text row,
long timestamp)
Find the key that matches row exactly, or the one that immediately precedes it. |
HScannerInterface |
HRegion.getScanner(Text[] cols,
Text firstRow,
long timestamp,
RowFilterInterface filter)
Return an iterator that scans over the HRegion, returning the indicated columns for only the rows that match the data filter. |
static Path |
HTableDescriptor.getTableDir(Path rootdir,
Text tableName)
|
static Text |
HRegionInfo.getTableNameFromRegionName(Text regionName)
Extracts table name prefix from a region name. |
boolean |
HTableDescriptor.hasFamily(Text family)
Checks to see if this table contains the given column family |
HRegionLocation |
HConnection.locateRegion(Text tableName,
Text row)
Find the location of the region of tableName that row lives in. |
void |
HMaster.modifyColumn(Text tableName,
Text columnName,
HColumnDescriptor descriptor)
Modifies an existing column on the specified table |
void |
HBaseAdmin.modifyColumn(Text tableName,
Text columnName,
HColumnDescriptor descriptor)
Modify an existing column family on a table |
void |
HMasterInterface.modifyColumn(Text tableName,
Text columnName,
HColumnDescriptor descriptor)
Modifies an existing column on the specified table |
HScannerInterface |
HTable.obtainScanner(Text[] columns,
Text startRow)
Get a scanner on the current table starting at the specified row. |
HScannerInterface |
HTable.obtainScanner(Text[] columns,
Text startRow,
long timestamp)
Get a scanner on the current table starting at the specified row. |
HScannerInterface |
HTable.obtainScanner(Text[] columns,
Text startRow,
long timestamp,
RowFilterInterface filter)
Get a scanner on the current table starting at the specified row. |
HScannerInterface |
HTable.obtainScanner(Text[] columns,
Text startRow,
RowFilterInterface filter)
Get a scanner on the current table starting at the specified row. |
HScannerInterface |
HTable.obtainScanner(Text[] columns,
Text startRow,
Text stopRow)
Get a scanner on the current table starting at the specified row and ending just before stopRow |
HScannerInterface |
HTable.obtainScanner(Text[] columns,
Text startRow,
Text stopRow,
long timestamp)
Get a scanner on the current table starting at the specified row and ending just before stopRow |
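The obtainScanner variants with a stopRow scan from startRow and end just before stopRow, i.e. the half-open interval [startRow, stopRow). SortedMap.subMap has the same inclusive-start/exclusive-stop contract, which makes the semantics easy to illustrate (the class and helper names below are hypothetical):

```java
import java.util.SortedMap;
import java.util.TreeMap;

// Sketch of the scan range described for HTable.obtainScanner with a stopRow:
// the scan covers the half-open interval [startRow, stopRow), so the stop row
// itself is never returned.
public class ScanRangeSketch {
    /** Keys a scan over [startRow, stopRow) would visit. */
    public static SortedMap<String, String> scan(TreeMap<String, String> table,
                                                 String startRow, String stopRow) {
        return table.subMap(startRow, stopRow); // inclusive start, exclusive stop
    }

    /** Demo over rows a, b, c scanning [a, c). */
    public static String demo() {
        TreeMap<String, String> t = new TreeMap<>();
        t.put("a", "1");
        t.put("b", "2");
        t.put("c", "3");
        return String.join(",", scan(t, "a", "c").keySet());
    }

    public static void main(String[] args) {
        System.out.println(demo()); // a,b  ("c" is excluded)
    }
}
```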
long |
HRegionInterface.openScanner(Text regionName,
Text[] columns,
Text startRow,
long timestamp,
RowFilterInterface filter)
Opens a remote scanner with a RowFilter. |
long |
HRegionServer.openScanner(Text regionName,
Text[] cols,
Text firstRow,
long timestamp,
RowFilterInterface filter)
Opens a remote scanner with a RowFilter. |
void |
HTable.put(long lockid,
Text column,
byte[] val)
Update a value for the specified column. |
void |
HTable.put(long lockid,
Text column,
Writable val)
Update a value for the specified column. |
HRegionLocation |
HConnection.relocateRegion(Text tableName,
Text row)
Find the location of the region of tableName that row lives in, ignoring any value that might be in the cache. |
void |
HStoreKey.setColumn(Text newcol)
Change the value of the column key |
void |
HStoreKey.setRow(Text newrow)
Change the value of the row key |
long |
HTable.startBatchUpdate(Text row)
Deprecated. Batch operations are now the default. startBatchUpdate is now implemented by HTable.startUpdate(Text). |
long |
HTable.startUpdate(Text row)
Start an atomic row insertion/update. |
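startUpdate returns a lock id scoped to one row; the put entries above then stage column values under that id, and a commit step applies them as one atomic change. A toy in-memory sketch of that calling pattern, assuming a commit call applies the staged values; this illustrates the protocol only, and every name here is hypothetical rather than the HBase implementation:

```java
import java.util.HashMap;
import java.util.Map;

// Toy sketch of the row-update protocol implied by HTable.startUpdate and
// HTable.put: stage values under a per-row lock id, then apply them in one step.
public class RowUpdateSketch {
    private final Map<Long, Map<String, byte[]>> pending = new HashMap<>();
    private final Map<Long, String> lockRow = new HashMap<>();
    private final Map<String, Map<String, byte[]>> store = new HashMap<>();
    private long nextLock = 1;

    /** Begin an update for one row; returns the lock id for later calls. */
    public long startUpdate(String row) {
        long lockid = nextLock++;
        lockRow.put(lockid, row);
        pending.put(lockid, new HashMap<>());
        return lockid;
    }

    /** Stage a value for a column under the given lock id. */
    public void put(long lockid, String column, byte[] val) {
        pending.get(lockid).put(column, val);
    }

    /** Apply all staged values for the lock's row in one step. */
    public void commit(long lockid) {
        String row = lockRow.remove(lockid);
        store.computeIfAbsent(row, r -> new HashMap<>())
             .putAll(pending.remove(lockid));
    }

    /** Read a committed value, or null if absent. */
    public byte[] get(String row, String column) {
        Map<String, byte[]> cols = store.get(row);
        return cols == null ? null : cols.get(column);
    }
}
```

Staged values only become visible once the whole row update is committed, which is what makes the row insertion/update atomic from a reader's point of view.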
boolean |
HBaseAdmin.tableExists(Text tableName)
|
boolean |
HConnection.tableExists(Text tableName)
Checks if tableName exists. |
Method parameters in org.apache.hadoop.hbase with type arguments of type Text | |
---|---|
protected HRegionInfo |
HMaster.getHRegionInfo(Map<Text,byte[]> map)
|
boolean |
HAbstractScanner.next(HStoreKey key,
SortedMap<Text,byte[]> results)
Get the next set of values for this scanner. |
boolean |
HTable.ClientScanner.next(HStoreKey key,
SortedMap<Text,byte[]> results)
|
boolean |
HScannerInterface.next(HStoreKey key,
SortedMap<Text,byte[]> results)
Grab the next row's worth of values. |
Constructors in org.apache.hadoop.hbase with parameters of type Text | |
---|---|
HColumnDescriptor(Text name,
int maxVersions,
HColumnDescriptor.CompressionType compression,
boolean inMemory,
int maxValueLength,
BloomFilterDescriptor bloomFilter)
Constructor Specify all parameters. |
|
HLogEdit(Text column,
byte[] bval,
long timestamp)
Construct a fully initialized HLogEdit |
|
HLogKey(Text regionName,
Text tablename,
Text row,
long logSeqNum)
Create the log key! We maintain the tablename mainly for debugging purposes. |
|
HRegionInfo(HTableDescriptor tableDesc,
Text startKey,
Text endKey)
Construct HRegionInfo with explicit parameters |
|
HRegionInfo(HTableDescriptor tableDesc,
Text startKey,
Text endKey,
boolean split)
Construct HRegionInfo with explicit parameters |
|
HStoreKey(Text row)
Create an HStoreKey specifying only the row. The column defaults to the empty string and the timestamp defaults to Long.MAX_VALUE. |
|
HStoreKey(Text row,
long timestamp)
Create an HStoreKey specifying the row and timestamp. The column name defaults to the empty string. |
|
HStoreKey(Text row,
Text column)
Create an HStoreKey specifying the row and column names. The timestamp defaults to LATEST_TIMESTAMP. |
|
HStoreKey(Text row,
Text column,
long timestamp)
Create an HStoreKey specifying all the fields |
|
HTable.ClientScanner(Text[] columns,
Text startRow,
long timestamp,
RowFilterInterface filter)
|
|
HTable(HBaseConfiguration conf,
Text tableName)
Creates an object to access a HBase table |
Uses of Text in org.apache.hadoop.hbase.filter |
---|
Fields in org.apache.hadoop.hbase.filter declared as Text | |
---|---|
protected Text |
StopRowFilter.stopRowKey
|
Methods in org.apache.hadoop.hbase.filter that return Text | |
---|---|
Text |
StopRowFilter.getStopRowKey()
An accessor for the stopRowKey |
Methods in org.apache.hadoop.hbase.filter with parameters of type Text | |
---|---|
boolean |
RowFilterInterface.filter(Text rowKey)
Filters on just a row key. |
boolean |
PageRowFilter.filter(Text rowKey)
Filters on just a row key. |
boolean |
RowFilterSet.filter(Text rowKey)
Filters on just a row key. |
boolean |
RegExpRowFilter.filter(Text rowKey)
Filters on just a row key. |
boolean |
InclusiveStopRowFilter.filter(Text rowKey)
|
boolean |
WhileMatchRowFilter.filter(Text rowKey)
Filters on just a row key. |
boolean |
StopRowFilter.filter(Text rowKey)
Filters on just a row key. |
boolean |
RowFilterInterface.filter(Text rowKey,
Text colKey,
byte[] data)
Filters on row key and/or a column key. |
boolean |
PageRowFilter.filter(Text rowKey,
Text colKey,
byte[] data)
Filters on row key and/or a column key. |
boolean |
RowFilterSet.filter(Text rowKey,
Text colKey,
byte[] data)
Filters on row key and/or a column key. |
boolean |
RegExpRowFilter.filter(Text rowKey,
Text colKey,
byte[] data)
Filters on row key and/or a column key. |
boolean |
WhileMatchRowFilter.filter(Text rowKey,
Text colKey,
byte[] data)
Filters on row key and/or a column key. |
boolean |
StopRowFilter.filter(Text rowKey,
Text colKey,
byte[] data)
Filters on row key and/or a column key. Because StopRowFilter does not examine column information, this method defaults to calling the rowKey-only version of filter. |
void |
RowFilterInterface.rowProcessed(boolean filtered,
Text key)
Called to let filter know the final decision (to pass or filter) on a given row. |
void |
PageRowFilter.rowProcessed(boolean filtered,
Text rowKey)
Called to let filter know the final decision (to pass or filter) on a given row. |
void |
RowFilterSet.rowProcessed(boolean filtered,
Text rowKey)
Called to let filter know the final decision (to pass or filter) on a given row. |
void |
RegExpRowFilter.rowProcessed(boolean filtered,
Text rowKey)
Called to let filter know the final decision (to pass or filter) on a given row. |
void |
WhileMatchRowFilter.rowProcessed(boolean filtered,
Text rowKey)
Called to let filter know the final decision (to pass or filter) on a given row. |
void |
StopRowFilter.rowProcessed(boolean filtered,
Text rowKey)
Called to let filter know the final decision (to pass or filter) on a given row. |
void |
RegExpRowFilter.setColumnFilter(Text colKey,
byte[] value)
Specify a value that must be matched for the given column. |
void |
RowFilterInterface.validate(Text[] columns)
Validates that this filter applies only to a subset of the given columns. |
void |
PageRowFilter.validate(Text[] columns)
Validates that this filter applies only to a subset of the given columns. |
void |
RowFilterSet.validate(Text[] columns)
Validates that this filter applies only to a subset of the given columns. |
void |
RegExpRowFilter.validate(Text[] columns)
Validates that this filter applies only to a subset of the given columns. |
void |
WhileMatchRowFilter.validate(Text[] columns)
Validates that this filter applies only to a subset of the given columns. |
void |
StopRowFilter.validate(Text[] columns)
Validates that this filter applies only to a subset of the given columns. |
Method parameters in org.apache.hadoop.hbase.filter with type arguments of type Text | |
---|---|
boolean |
RowFilterInterface.filterNotNull(TreeMap<Text,byte[]> columns)
Filters a row if the given row (columns) has a columnKey expected to be null AND the value associated with that columnKey is non-null. |
boolean |
PageRowFilter.filterNotNull(TreeMap<Text,byte[]> columns)
Filters a row if the given row (columns) has a columnKey expected to be null AND the value associated with that columnKey is non-null. |
boolean |
RowFilterSet.filterNotNull(TreeMap<Text,byte[]> columns)
Filters a row if the given row (columns) has a columnKey expected to be null AND the value associated with that columnKey is non-null. |
boolean |
RegExpRowFilter.filterNotNull(TreeMap<Text,byte[]> columns)
Filters a row if the given row (columns) has a columnKey expected to be null AND the value associated with that columnKey is non-null. |
boolean |
WhileMatchRowFilter.filterNotNull(TreeMap<Text,byte[]> columns)
Filters a row if the given row (columns) has a columnKey expected to be null AND the value associated with that columnKey is non-null. |
boolean |
StopRowFilter.filterNotNull(TreeMap<Text,byte[]> columns)
Filters a row if the given row (columns) has a columnKey expected to be null AND the value associated with that columnKey is non-null. Because StopRowFilter does not examine column information, this method defaults to calling filterAllRemaining(). |
void |
RegExpRowFilter.setColumnFilters(Map<Text,byte[]> columnFilter)
Set column filters for a number of columns. |
Constructors in org.apache.hadoop.hbase.filter with parameters of type Text | |
---|---|
InclusiveStopRowFilter(Text stopRowKey)
Constructor that takes a stopRowKey on which to filter |
|
StopRowFilter(Text stopRowKey)
Constructor that takes a stopRowKey on which to filter |
Constructor parameters in org.apache.hadoop.hbase.filter with type arguments of type Text | |
---|---|
RegExpRowFilter(String rowKeyRegExp,
Map<Text,byte[]> columnFilter)
Constructor that takes a row key regular expression to filter on. |
Uses of Text in org.apache.hadoop.hbase.hql |
---|
Methods in org.apache.hadoop.hbase.hql that return Text | |
---|---|
Text |
InsertCommand.getColumn(int i)
|
Text[] |
DeleteCommand.getColumnList(HBaseAdmin admin,
HTable hTable)
|
Text |
InsertCommand.getRow()
|
Uses of Text in org.apache.hadoop.hbase.io |
---|
Subclasses of Text in org.apache.hadoop.hbase.io | |
---|---|
class |
TextSequence
As CharSequence is to String, so is TextSequence to Text
(except a TextSequence is a Text whereas a String is a CharSequence). |
Methods in org.apache.hadoop.hbase.io that return Text | |
---|---|
Text |
BatchOperation.getColumn()
|
Text |
BatchUpdate.getRow()
|
Text |
TextSequence.toText()
|
Methods in org.apache.hadoop.hbase.io with parameters of type Text | |
---|---|
void |
BatchUpdate.delete(long lid,
Text column)
Delete the value for a column. Deletes the cell whose row/column/commit-timestamp match those of the delete. |
void |
BatchUpdate.put(long lid,
Text column,
byte[] val)
Change a value for the specified column |
void |
TextSequence.set(Text other)
|
long |
BatchUpdate.startUpdate(Text row)
Start a batch row insertion/update. |
Constructors in org.apache.hadoop.hbase.io with parameters of type Text | |
---|---|
BatchOperation(Text column)
Creates a DELETE batch operation. |
|
BatchOperation(Text column,
byte[] value)
Create a batch operation. |
|
TextSequence(Text d)
|
|
TextSequence(Text d,
int s)
|
|
TextSequence(Text d,
int s,
int e)
|
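The BatchUpdate methods listed above follow a start/put/delete lifecycle keyed by a lock id. A minimal sketch using only the signatures on this page; the no-argument constructor is assumed (BatchUpdate is a Writable), and committing the update to a table happens elsewhere and is not shown:

```java
import org.apache.hadoop.hbase.io.BatchUpdate;
import org.apache.hadoop.io.Text;

public class BatchUpdateSketch {
    public static void main(String[] args) {
        BatchUpdate bu = new BatchUpdate();
        // Start a batch row insertion/update; the returned lock id
        // ties subsequent put/delete calls to this row.
        long lid = bu.startUpdate(new Text("row1"));
        // Change a value for the specified column.
        bu.put(lid, new Text("info:name"), "value".getBytes());
        // Delete the cell for another column in the same row.
        bu.delete(lid, new Text("info:stale"));
    }
}
```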
Uses of Text in org.apache.hadoop.hbase.mapred |
---|
Fields in org.apache.hadoop.hbase.mapred declared as Text | |
---|---|
protected Text[] |
GroupingTableMap.m_columns
|
Methods in org.apache.hadoop.hbase.mapred that return Text | |
---|---|
protected Text |
GroupingTableMap.createGroupKey(byte[][] vals)
Create a key by concatenating multiple column values. |
Text |
TableSplit.getEndRow()
|
Text |
TableSplit.getStartRow()
|
Text |
TableSplit.getTableName()
|
Methods in org.apache.hadoop.hbase.mapred that return types with arguments of type Text | |
---|---|
RecordWriter<Text,org.apache.hadoop.hbase.mapred.LuceneDocumentWrapper> |
IndexOutputFormat.getRecordWriter(FileSystem fs,
JobConf job,
String name,
Progressable progress)
|
Methods in org.apache.hadoop.hbase.mapred with parameters of type Text | |
---|---|
void |
IndexTableReduce.reduce(Text key,
Iterator<MapWritable> values,
OutputCollector<Text,org.apache.hadoop.hbase.mapred.LuceneDocumentWrapper> output,
Reporter reporter)
|
void |
IdentityTableReduce.reduce(Text key,
Iterator<MapWritable> values,
OutputCollector<Text,MapWritable> output,
Reporter reporter)
No aggregation; output pairs of (key, record). |
void |
TableOutputFormat.TableRecordWriter.write(Text key,
MapWritable value)
Writes a key/value pair. |
Method parameters in org.apache.hadoop.hbase.mapred with type arguments of type Text | |
---|---|
void |
GroupingTableMap.map(HStoreKey key,
MapWritable value,
OutputCollector<Text,MapWritable> output,
Reporter reporter)
Extract the grouping columns from value to construct a new key. |
void |
IdentityTableMap.map(HStoreKey key,
MapWritable value,
OutputCollector<Text,MapWritable> output,
Reporter reporter)
Pass the key and value to reduce. |
abstract void |
TableReduce.reduce(K key,
Iterator<V> values,
OutputCollector<Text,MapWritable> output,
Reporter reporter)
|
void |
IndexTableReduce.reduce(Text key,
Iterator<MapWritable> values,
OutputCollector<Text,org.apache.hadoop.hbase.mapred.LuceneDocumentWrapper> output,
Reporter reporter)
|
void |
IdentityTableReduce.reduce(Text key,
Iterator<MapWritable> values,
OutputCollector<Text,MapWritable> output,
Reporter reporter)
No aggregation; output pairs of (key, record). |
Constructors in org.apache.hadoop.hbase.mapred with parameters of type Text | |
---|---|
TableSplit(Text tableName,
Text startRow,
Text endRow)
Constructor |
Uses of Text in org.apache.hadoop.hbase.rest |
---|
Method parameters in org.apache.hadoop.hbase.rest with type arguments of type Text | |
---|---|
protected void |
GenericHandler.outputColumnsMime(org.mortbay.servlet.MultiPartResponse mpr,
Map<Text,byte[]> m)
|
protected void |
GenericHandler.outputColumnsXml(org.znerd.xmlenc.XMLOutputter outputter,
Map<Text,byte[]> m)
|
Uses of Text in org.apache.hadoop.io |
---|
Methods in org.apache.hadoop.io that return Text | |
---|---|
Text |
SequenceFile.Metadata.get(Text name)
|
Methods in org.apache.hadoop.io that return types with arguments of type Text | |
---|---|
TreeMap<Text,Text> |
SequenceFile.Metadata.getMetadata()
|
Methods in org.apache.hadoop.io with parameters of type Text | |
---|---|
Text |
SequenceFile.Metadata.get(Text name)
|
void |
Text.set(Text other)
Copy a text. |
void |
SequenceFile.Metadata.set(Text name,
Text value)
|
Constructors in org.apache.hadoop.io with parameters of type Text | |
---|---|
Text(Text utf8)
Construct from another text. |
Constructor parameters in org.apache.hadoop.io with type arguments of type Text | |
---|---|
SequenceFile.Metadata(TreeMap<Text,Text> arg)
|
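The Text copy constructor and set(Text) above copy the underlying bytes rather than sharing them, and SequenceFile.Metadata wraps a TreeMap<Text,Text>. A short sketch of both; the class name is illustrative:

```java
import java.util.TreeMap;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

public class TextUsageSketch {
    public static void main(String[] args) {
        // Text(Text) copies the bytes, so later changes to the
        // source do not affect the copy.
        Text src = new Text("hello");
        Text copy = new Text(src);
        src.set(new Text("changed"));
        System.out.println(copy);  // still prints "hello"

        // SequenceFile.Metadata is a Text-to-Text map built over a TreeMap.
        SequenceFile.Metadata meta =
            new SequenceFile.Metadata(new TreeMap<Text, Text>());
        meta.set(new Text("creator"), new Text("sketch"));
        System.out.println(meta.get(new Text("creator")));
    }
}
```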
Uses of Text in org.apache.hadoop.mapred |
---|
Methods in org.apache.hadoop.mapred that return Text | |
---|---|
Text |
SequenceFileAsTextRecordReader.createKey()
|
Text |
KeyValueLineRecordReader.createKey()
|
Text |
SequenceFileAsTextRecordReader.createValue()
|
Text |
LineRecordReader.createValue()
|
Text |
KeyValueLineRecordReader.createValue()
|
Methods in org.apache.hadoop.mapred that return types with arguments of type Text | |
---|---|
RecordReader<LongWritable,Text> |
TextInputFormat.getRecordReader(InputSplit genericSplit,
JobConf job,
Reporter reporter)
|
RecordReader<Text,Text> |
SequenceFileAsTextInputFormat.getRecordReader(InputSplit split,
JobConf job,
Reporter reporter)
|
RecordReader<Text,Text> |
KeyValueTextInputFormat.getRecordReader(InputSplit genericSplit,
JobConf job,
Reporter reporter)
|
Methods in org.apache.hadoop.mapred with parameters of type Text | |
---|---|
boolean |
LineRecordReader.next(LongWritable key,
Text value)
Read a line. |
boolean |
SequenceFileAsTextRecordReader.next(Text key,
Text value)
Read key/value pair in a line. |
boolean |
KeyValueLineRecordReader.next(Text key,
Text value)
Read key/value pair in a line. |
int |
LineRecordReader.LineReader.readLine(Text str)
Read from the InputStream into the given Text. |
Uses of Text in org.apache.hadoop.mapred.lib |
---|
Methods in org.apache.hadoop.mapred.lib with parameters of type Text | |
---|---|
void |
TokenCountMapper.map(K key,
Text value,
OutputCollector<Text,LongWritable> output,
Reporter reporter)
|
void |
RegexMapper.map(K key,
Text value,
OutputCollector<Text,LongWritable> output,
Reporter reporter)
|
void |
FieldSelectionMapReduce.reduce(Text key,
Iterator<Text> values,
OutputCollector<Text,Text> output,
Reporter reporter)
|
Method parameters in org.apache.hadoop.mapred.lib with type arguments of type Text | |
---|---|
void |
TokenCountMapper.map(K key,
Text value,
OutputCollector<Text,LongWritable> output,
Reporter reporter)
|
void |
RegexMapper.map(K key,
Text value,
OutputCollector<Text,LongWritable> output,
Reporter reporter)
|
void |
FieldSelectionMapReduce.map(K key,
V val,
OutputCollector<Text,Text> output,
Reporter reporter)
The identity function. |
void |
FieldSelectionMapReduce.reduce(Text key,
Iterator<Text> values,
OutputCollector<Text,Text> output,
Reporter reporter)
|
Uses of Text in org.apache.hadoop.mapred.lib.aggregate |
---|
Fields in org.apache.hadoop.mapred.lib.aggregate declared as Text | |
---|---|
static Text |
ValueAggregatorDescriptor.ONE
|
Methods in org.apache.hadoop.mapred.lib.aggregate that return types with arguments of type Text | |
---|---|
static Map.Entry<Text,Text> |
ValueAggregatorBaseDescriptor.generateEntry(String type,
String id,
Text val)
|
ArrayList<Map.Entry<Text,Text>> |
ValueAggregatorBaseDescriptor.generateKeyValPairs(Object key,
Object val)
Generate 1 or 2 aggregation-id/value pairs for the given key/value pair. |
ArrayList<Map.Entry<Text,Text>> |
ValueAggregatorDescriptor.generateKeyValPairs(Object key,
Object val)
Generate a list of aggregation-id/value pairs for the given key/value pair. |
ArrayList<Map.Entry<Text,Text>> |
UserDefinedValueAggregatorDescriptor.generateKeyValPairs(Object key,
Object val)
Generate a list of aggregation-id/value pairs for the given key/value pairs by delegating the invocation to the real object. |
Methods in org.apache.hadoop.mapred.lib.aggregate with parameters of type Text | |
---|---|
static Map.Entry<Text,Text> |
ValueAggregatorBaseDescriptor.generateEntry(String type,
String id,
Text val)
|
void |
ValueAggregatorReducer.reduce(Text key,
Iterator<Text> values,
OutputCollector<Text,Text> output,
Reporter reporter)
|
void |
ValueAggregatorMapper.reduce(Text arg0,
Iterator<Text> arg1,
OutputCollector<Text,Text> arg2,
Reporter arg3)
Do nothing. |
void |
ValueAggregatorCombiner.reduce(Text key,
Iterator<Text> values,
OutputCollector<Text,Text> output,
Reporter reporter)
Combines values for a given key. |
Method parameters in org.apache.hadoop.mapred.lib.aggregate with type arguments of type Text | |
---|---|
void |
ValueAggregatorReducer.map(K1 arg0,
V1 arg1,
OutputCollector<Text,Text> arg2,
Reporter arg3)
Do nothing. |
void |
ValueAggregatorMapper.map(K1 key,
V1 value,
OutputCollector<Text,Text> output,
Reporter reporter)
The map function. |
void |
ValueAggregatorCombiner.map(K1 arg0,
V1 arg1,
OutputCollector<Text,Text> arg2,
Reporter arg3)
Do nothing. |
void |
ValueAggregatorReducer.reduce(Text key,
Iterator<Text> values,
OutputCollector<Text,Text> output,
Reporter reporter)
|
void |
ValueAggregatorMapper.reduce(Text arg0,
Iterator<Text> arg1,
OutputCollector<Text,Text> arg2,
Reporter arg3)
Do nothing. |
void |
ValueAggregatorCombiner.reduce(Text key,
Iterator<Text> values,
OutputCollector<Text,Text> output,
Reporter reporter)
Combines values for a given key. |
Uses of Text in org.apache.hadoop.streaming |
---|
Methods in org.apache.hadoop.streaming that return Text | |
---|---|
Text |
StreamBaseRecordReader.createKey()
|
Text |
StreamBaseRecordReader.createValue()
|
Methods in org.apache.hadoop.streaming that return types with arguments of type Text | |
---|---|
RecordReader<Text,Text> |
StreamInputFormat.getRecordReader(InputSplit genericSplit,
JobConf job,
Reporter reporter)
|
Methods in org.apache.hadoop.streaming with parameters of type Text | |
---|---|
boolean |
StreamXmlRecordReader.next(Text key,
Text value)
|
abstract boolean |
StreamBaseRecordReader.next(Text key,
Text value)
Read a record. |
static void |
UTF8ByteArrayUtils.splitKeyVal(byte[] utf,
int start,
int length,
Text key,
Text val,
int splitPos)
Split a UTF-8 byte array into key and value, assuming that the delimiter is at splitPos. |
static void |
UTF8ByteArrayUtils.splitKeyVal(byte[] utf,
Text key,
Text val,
int splitPos)
Split a UTF-8 byte array into key and value, assuming that the delimiter is at splitPos. |
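A small sketch of the splitKeyVal overload above, assuming the position of the tab delimiter has already been located (index 3 below): key receives the bytes before splitPos, and val the bytes after it.

```java
import java.io.IOException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.streaming.UTF8ByteArrayUtils;

public class SplitKeyValSketch {
    public static void main(String[] args) throws IOException {
        byte[] line = "key\tvalue".getBytes("UTF-8");
        Text key = new Text();
        Text val = new Text();
        // splitPos (3) is the index of the '\t' delimiter itself.
        UTF8ByteArrayUtils.splitKeyVal(line, key, val, 3);
        System.out.println(key + " / " + val);
    }
}
```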
Uses of Text in org.apache.hadoop.tools |
---|
Methods in org.apache.hadoop.tools with parameters of type Text | |
---|---|
void |
Logalyzer.LogRegexMapper.map(K key,
Text value,
OutputCollector<Text,LongWritable> output,
Reporter reporter)
|
Method parameters in org.apache.hadoop.tools with type arguments of type Text | |
---|---|
void |
Logalyzer.LogRegexMapper.map(K key,
Text value,
OutputCollector<Text,LongWritable> output,
Reporter reporter)
|
Uses of Text in org.apache.hadoop.util |
---|
Method parameters in org.apache.hadoop.util with type arguments of type Text | |
---|---|
void |
CopyFiles.FSCopyFilesMapper.map(LongWritable key,
org.apache.hadoop.util.CopyFiles.FilePair value,
OutputCollector<WritableComparable,Text> out,
Reporter reporter)
Map method. |