Uses of Class
org.apache.hadoop.io.Text

Packages that use Text
org.apache.hadoop.contrib.utils.join   
org.apache.hadoop.examples Hadoop example code. 
org.apache.hadoop.examples.dancing This package is a distributed implementation of Knuth's dancing links algorithm that can run under Hadoop. 
org.apache.hadoop.hbase Provides HBase, the Hadoop simple database. 
org.apache.hadoop.hbase.filter   
org.apache.hadoop.hbase.io   
org.apache.hadoop.hbase.mapred   
org.apache.hadoop.hbase.shell   
org.apache.hadoop.io Generic I/O code for use when reading and writing data to the network, to databases, and to files. 
org.apache.hadoop.mapred A software framework for easily writing applications which process vast amounts of data (multi-terabyte data-sets) in parallel on large clusters (thousands of nodes) built of commodity hardware in a reliable, fault-tolerant manner. 
org.apache.hadoop.mapred.lib Library of generally useful mappers, reducers, and partitioners. 
org.apache.hadoop.mapred.lib.aggregate Classes for performing various counting and aggregations. 
org.apache.hadoop.streaming Hadoop Streaming is a utility which allows users to create and run Map-Reduce jobs with any executables (e.g. Unix shell utilities) as the mapper and/or the reducer. 
org.apache.hadoop.tools   
org.apache.hadoop.util Common utilities. 
 

Uses of Text in org.apache.hadoop.contrib.utils.join
 

Fields in org.apache.hadoop.contrib.utils.join declared as Text
protected  Text DataJoinMapperBase.inputTag
           
static Text DataJoinReducerBase.NUM_OF_VALUES_FIELD
           
static Text DataJoinReducerBase.SOURCE_TAGS_FIELD
           
protected  Text TaggedMapOutput.tag
           
 

Methods in org.apache.hadoop.contrib.utils.join that return Text
protected abstract  Text DataJoinMapperBase.generateGroupKey(TaggedMapOutput aRecord)
          Generate a map output key.
protected abstract  Text DataJoinMapperBase.generateInputTag(String inputFile)
          Determine the source tag based on the input file name.
 Text TaggedMapOutput.getTag()
           
 

Methods in org.apache.hadoop.contrib.utils.join with parameters of type Text
 void TaggedMapOutput.setTag(Text tag)
           
 
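The abstract members above define the data-join contract: a concrete mapper assigns each input source a Text tag and groups records by a Text join key. A minimal sketch of one possible tagging scheme (class name, field layout, and the "orders" heuristic are hypothetical; the class is left abstract because DataJoinMapperBase declares further abstract methods that do not appear in this Text-specific listing):

```java
// Sketch only: assumes the contrib data-join classes listed above are on the classpath.
import org.apache.hadoop.contrib.utils.join.DataJoinMapperBase;
import org.apache.hadoop.contrib.utils.join.TaggedMapOutput;
import org.apache.hadoop.io.Text;

public abstract class SourceTaggingMapper extends DataJoinMapperBase {
  // Derive the source tag from the input file name (hypothetical scheme).
  protected Text generateInputTag(String inputFile) {
    return new Text(inputFile.contains("orders") ? "O" : "C");
  }

  // Group records from all sources on a shared join key: here, the first
  // comma-separated field (getData() is assumed from TaggedMapOutput).
  protected Text generateGroupKey(TaggedMapOutput aRecord) {
    String line = aRecord.getData().toString();
    return new Text(line.split(",")[0]);
  }
}
```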

Uses of Text in org.apache.hadoop.examples
 

Methods in org.apache.hadoop.examples that return types with arguments of type Text
 ArrayList<Map.Entry<Text,Text>> AggregateWordCount.WordCountPlugInClass.generateKeyValPairs(Object key, Object val)
           
 

Methods in org.apache.hadoop.examples with parameters of type Text
 void WordCount.MapClass.map(LongWritable key, Text value, OutputCollector<Text,IntWritable> output, Reporter reporter)
           
 void WordCount.Reduce.reduce(Text key, Iterator<IntWritable> values, OutputCollector<Text,IntWritable> output, Reporter reporter)
           
 

Method parameters in org.apache.hadoop.examples with type arguments of type Text
 void WordCount.MapClass.map(LongWritable key, Text value, OutputCollector<Text,IntWritable> output, Reporter reporter)
           
 void WordCount.Reduce.reduce(Text key, Iterator<IntWritable> values, OutputCollector<Text,IntWritable> output, Reporter reporter)
           
 
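The WordCount signatures above follow the classic map/reduce pattern: the mapper tokenizes each Text line into (word, 1) pairs, and the reducer sums the counts per word. A condensed sketch of how such classes are typically written against this era's mapred API (the real example classes live in org.apache.hadoop.examples):

```java
// Sketch of a WordCount-style mapper and reducer using the pre-0.20 "mapred" API.
import java.io.IOException;
import java.util.Iterator;
import java.util.StringTokenizer;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

public class WordCountSketch {
  public static class MapClass extends MapReduceBase
      implements Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    public void map(LongWritable key, Text value,
        OutputCollector<Text, IntWritable> output, Reporter reporter)
        throws IOException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());   // Text.set reuses the backing buffer
        output.collect(word, ONE);
      }
    }
  }

  public static class Reduce extends MapReduceBase
      implements Reducer<Text, IntWritable, Text, IntWritable> {
    public void reduce(Text key, Iterator<IntWritable> values,
        OutputCollector<Text, IntWritable> output, Reporter reporter)
        throws IOException {
      int sum = 0;
      while (values.hasNext()) {
        sum += values.next().get();
      }
      output.collect(key, new IntWritable(sum));
    }
  }
}
```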

Uses of Text in org.apache.hadoop.examples.dancing
 

Methods in org.apache.hadoop.examples.dancing with parameters of type Text
 void DistributedPentomino.PentMap.map(WritableComparable key, Text value, OutputCollector<Text,Text> output, Reporter reporter)
          Break the prefix string into moves (a sequence of integer row ids that will be selected for each column in order).
 

Method parameters in org.apache.hadoop.examples.dancing with type arguments of type Text
 void DistributedPentomino.PentMap.map(WritableComparable key, Text value, OutputCollector<Text,Text> output, Reporter reporter)
          Break the prefix string into moves (a sequence of integer row ids that will be selected for each column in order).
 

Uses of Text in org.apache.hadoop.hbase
 

Fields in org.apache.hadoop.hbase declared as Text
static Text HConstants.COL_REGIONINFO
          ROOT/META column family member - contains HRegionInfo
static Text[] HConstants.COL_REGIONINFO_ARRAY
          Array of column - contains HRegionInfo
static Text HConstants.COL_SERVER
          ROOT/META column family member - contains HServerAddress.toString()
static Text HConstants.COL_SPLITA
          the lower half of a split region
static Text HConstants.COL_SPLITB
          the upper half of a split region
static Text HConstants.COL_STARTCODE
          ROOT/META column family member - contains server start code (a long)
static Text HConstants.COLUMN_FAMILY
          The ROOT and META column family (Text)
static Text[] HConstants.COLUMN_FAMILY_ARRAY
          Array of meta column names
static Text HConstants.EMPTY_START_ROW
          Used by scanners, etc., when they want to start at the beginning of a region.
static Text HConstants.EMPTY_TEXT
          An empty instance of Text.
static Text HConstants.META_TABLE_NAME
          The META table's name.
static Text HConstants.ROOT_TABLE_NAME
          The root table's name.
protected  Text HTable.tableName
           
 

Fields in org.apache.hadoop.hbase with type parameters of type Text
protected  TreeMap<Text,Vector<org.apache.hadoop.hbase.HAbstractScanner.ColumnMatcher>> HAbstractScanner.okCols
           
protected  SortedMap<Text,HRegion> HRegionServer.onlineRegions
           
protected  Map<Text,HRegion> HRegionServer.retiringRegions
           
protected  SortedMap<Text,HRegionLocation> HTable.tableServers
           
 

Methods in org.apache.hadoop.hbase that return Text
static Text HStoreKey.extractFamily(Text col)
          Extracts the column family name from a column. For example, returns 'info' if the specified column was 'info:server'.
static Text HStoreKey.extractFamily(Text col, boolean withColon)
          Extracts the column family name from a column. For example, returns 'info' if the specified column was 'info:server'.
static Text HStoreKey.extractQualifier(Text col)
          Extracts the column qualifier, the portion that follows the colon (':') family/qualifier separator.
 Text HLogEdit.getColumn()
           
 Text HStoreKey.getColumn()
           
 Text HRegion.getEndKey()
           
 Text HRegionInfo.getEndKey()
           
 Text HTableDescriptor.getName()
           
 Text HColumnDescriptor.getName()
           
 Text HRegion.getRegionName()
           
 Text HRegionInfo.getRegionName()
           
 Text HMaster.MetaRegion.getRegionName()
           
 Text HStoreKey.getRow()
           
 Text HRegion.getStartKey()
           
 Text HRegionInfo.getStartKey()
           
 Text HMaster.MetaRegion.getStartKey()
           
 Text[] HTable.getStartKeys()
          Gets the starting row key for every region in the currently open table
 Text HTable.getTableName()
           
static Text HRegionInfo.getTableNameFromRegionName(Text regionName)
          Extracts table name prefix from a region name.
 

Methods in org.apache.hadoop.hbase that return types with arguments of type Text
 TreeMap<Text,HColumnDescriptor> HTableDescriptor.families()
          All the column families in this table.
 SortedMap<Text,HColumnDescriptor> HTableDescriptor.getFamilies()
           
 TreeMap<Text,byte[]> HMemcache.getFull(HStoreKey key)
          Return all the available columns for the given key.
 Map<Text,HMaster.MetaRegion> HMaster.getOnlineMetaRegions()
           
 SortedMap<Text,HRegion> HRegionServer.getOnlineRegions()
           
 SortedMap<Text,byte[]> HTable.getRow(Text row)
          Get all the data for the specified row
 SortedMap<Text,HRegionLocation> HConnection.getTableServers(Text tableName)
          Gets the servers of the given table.
 SortedMap<Text,HRegionLocation> HConnection.reloadTableServers(Text tableName)
          Reloads servers for the specified table.
 

Methods in org.apache.hadoop.hbase with parameters of type Text
 void HMemcache.add(Text row, TreeMap<Text,byte[]> columns, long timestamp)
          Store a value.
 void HMaster.addColumn(Text tableName, HColumnDescriptor column)
          Adds a column to the specified table
 void HBaseAdmin.addColumn(Text tableName, HColumnDescriptor column)
          Add a column to an existing table
 void HMasterInterface.addColumn(Text tableName, HColumnDescriptor column)
          Adds a column to the specified table
 void HRegionInterface.batchUpdate(Text regionName, long timestamp, BatchUpdate b)
          Applies a batch of updates via one RPC
 void HRegionServer.batchUpdate(Text regionName, long timestamp, BatchUpdate b)
          Applies a batch of updates via one RPC
protected  void HBaseAdmin.checkReservedTableName(Text tableName)
           
 void HConnection.close(Text tableName)
          Discard all the information about this table
 void RegionUnavailableListener.closed(Text regionName)
          regionName is closed and no longer available.
 void RegionUnavailableListener.closing(Text regionName)
          regionName is closing.
protected  void HRegionServer.commit(Text regionName, long lockid, long timestamp)
           
 void HRegion.delete(long lockid, Text targetCol)
          Delete a value or write a value.
 void HTable.delete(long lockid, Text column)
          Delete the value for a column.
protected  void HRegionServer.delete(Text regionName, long lockid, Text column)
           
 void HTable.deleteAll(Text row, Text column)
          Delete all values for a column
 void HRegion.deleteAll(Text row, Text column, long ts)
          Delete all cells of the same age as the passed timestamp or older.
 void HTable.deleteAll(Text row, Text column, long ts)
          Delete all values for a column
 void HRegionInterface.deleteAll(Text regionName, Text row, Text column, long timestamp)
          Delete all cells that match the passed row and column and whose timestamp is equal-to or older than the passed timestamp.
 void HRegionServer.deleteAll(Text regionName, Text row, Text column, long timestamp)
          Delete all cells that match the passed row and column and whose timestamp is equal-to or older than the passed timestamp.
 void HMaster.deleteColumn(Text tableName, Text columnName)
          Deletes a column from the specified table
 void HBaseAdmin.deleteColumn(Text tableName, Text columnName)
          Delete a column from a table
 void HMasterInterface.deleteColumn(Text tableName, Text columnName)
          Deletes a column from the specified table
 void HMaster.deleteTable(Text tableName)
          Deletes a table
 void HBaseAdmin.deleteTable(Text tableName)
          Deletes a table
 void HMasterInterface.deleteTable(Text tableName)
          Deletes a table
 void HMaster.disableTable(Text tableName)
          Take table offline
 void HBaseAdmin.disableTable(Text tableName)
          Disables a table (takes it off-line). If it is being served, the master will tell the servers to stop serving it.
 void HMasterInterface.disableTable(Text tableName)
          Take table offline
 void HMaster.enableTable(Text tableName)
          Puts the table on-line (only needed if table has been previously taken offline)
 void HBaseAdmin.enableTable(Text tableName)
          Brings a table on-line (enables it)
 void HMasterInterface.enableTable(Text tableName)
          Puts the table on-line (only needed if table has been previously taken offline)
static Text HStoreKey.extractFamily(Text col)
          Extracts the column family name from a column. For example, returns 'info' if the specified column was 'info:server'.
static Text HStoreKey.extractFamily(Text col, boolean withColon)
          Extracts the column family name from a column. For example, returns 'info' if the specified column was 'info:server'.
static Text HStoreKey.extractQualifier(Text col)
          Extracts the column qualifier, the portion that follows the colon (':') family/qualifier separator.
 byte[] HTable.get(Text row, Text column)
          Get a single value for the specified row and column
 byte[][] HTable.get(Text row, Text column, int numVersions)
          Get the specified number of versions of the specified row and column
 byte[][] HTable.get(Text row, Text column, long timestamp, int numVersions)
          Get the specified number of versions of the specified row and column with the specified timestamp.
 byte[] HRegionInterface.get(Text regionName, Text row, Text column)
          Retrieve a single value from the specified region for the specified row and column keys
 byte[] HRegionServer.get(Text regionName, Text row, Text column)
          Retrieve a single value from the specified region for the specified row and column keys
 byte[][] HRegionInterface.get(Text regionName, Text row, Text column, int numVersions)
          Get the specified number of versions of the specified row and column
 byte[][] HRegionServer.get(Text regionName, Text row, Text column, int numVersions)
          Get the specified number of versions of the specified row and column
 byte[][] HRegionInterface.get(Text regionName, Text row, Text column, long timestamp, int numVersions)
          Get the specified number of versions of the specified row and column with the specified timestamp.
 byte[][] HRegionServer.get(Text regionName, Text row, Text column, long timestamp, int numVersions)
          Get the specified number of versions of the specified row and column with the specified timestamp.
protected  HRegion HRegionServer.getRegion(Text regionName)
          Protected utility method for safely obtaining an HRegion handle.
protected  HRegion HRegionServer.getRegion(Text regionName, boolean checkRetiringRegions)
          Protected utility method for safely obtaining an HRegion handle.
static Path HRegion.getRegionDir(Path dir, Text regionName)
          Computes the Path of the HRegion
 HRegionInfo HRegionInterface.getRegionInfo(Text regionName)
          Get metainfo about an HRegion
 HRegionInfo HRegionServer.getRegionInfo(Text regionName)
          Get metainfo about an HRegion
 SortedMap<Text,byte[]> HTable.getRow(Text row)
          Get all the data for the specified row
 MapWritable HRegionInterface.getRow(Text regionName, Text row)
          Get all the data for the specified row
 MapWritable HRegionServer.getRow(Text regionName, Text row)
          Get all the data for the specified row
 HInternalScannerInterface HRegion.getScanner(Text[] cols, Text firstRow, long timestamp, RowFilterInterface filter)
          Return an iterator that scans over the HRegion, returning the indicated columns for only the rows that match the data filter.
static Text HRegionInfo.getTableNameFromRegionName(Text regionName)
          Extracts table name prefix from a region name.
 SortedMap<Text,HRegionLocation> HConnection.getTableServers(Text tableName)
          Gets the servers of the given table.
 boolean HTableDescriptor.hasFamily(Text family)
          Checks to see if this table contains the given column family
 HScannerInterface HTable.obtainScanner(Text[] columns, Text startRow)
           Get a scanner on the current table starting at the specified row.
 HScannerInterface HTable.obtainScanner(Text[] columns, Text startRow, long timestamp)
           Get a scanner on the current table starting at the specified row.
 HScannerInterface HTable.obtainScanner(Text[] columns, Text startRow, long timestamp, RowFilterInterface filter)
           Get a scanner on the current table starting at the specified row.
 HScannerInterface HTable.obtainScanner(Text[] columns, Text startRow, RowFilterInterface filter)
           Get a scanner on the current table starting at the specified row.
 long HRegionInterface.openScanner(Text regionName, Text[] columns, Text startRow, long timestamp, RowFilterInterface filter)
           Opens a remote scanner with a RowFilter.
 long HRegionServer.openScanner(Text regionName, Text[] cols, Text firstRow, long timestamp, RowFilterInterface filter)
            
 void HRegion.put(long lockid, Text targetCol, byte[] val)
          Put a cell value into the locked row.
 void HTable.put(long lockid, Text column, byte[] val)
          Change a value for the specified column.
protected  void HRegionServer.put(Text regionName, long lockid, Text column, byte[] val)
           
 SortedMap<Text,HRegionLocation> HConnection.reloadTableServers(Text tableName)
          Reloads servers for the specified table.
 void HStoreKey.setColumn(Text newcol)
          Change the value of the column key
 void HStoreKey.setRow(Text newrow)
          Change the value of the row key
 long HTable.startBatchUpdate(Text row)
          Deprecated. Batch operations are now the default; startBatchUpdate is now implemented by HTable.startUpdate(Text).
 long HRegion.startUpdate(Text row)
          The caller wants to apply a series of writes to a single row in the HRegion.
 long HTable.startUpdate(Text row)
          Start an atomic row insertion/update.
protected  long HRegionServer.startUpdate(Text regionName, Text row)
           
 boolean HBaseAdmin.tableExists(Text tableName)
           
 boolean HConnection.tableExists(Text tableName)
          Checks if tableName exists.
 

Method parameters in org.apache.hadoop.hbase with type arguments of type Text
 void HMemcache.add(Text row, TreeMap<Text,byte[]> columns, long timestamp)
          Store a value.
 boolean HAbstractScanner.next(HStoreKey key, SortedMap<Text,byte[]> results)
          Get the next set of values for this scanner.
 boolean HTable.ClientScanner.next(HStoreKey key, SortedMap<Text,byte[]> results)
          Grab the next row's worth of values.
 boolean HScannerInterface.next(HStoreKey key, SortedMap<Text,byte[]> results)
          Grab the next row's worth of values.
 

Constructors in org.apache.hadoop.hbase with parameters of type Text
HColumnDescriptor(Text name, int maxVersions, HColumnDescriptor.CompressionType compression, boolean inMemory, int maxValueLength, BloomFilterDescriptor bloomFilter)
          Constructor Specify all parameters.
HLogEdit(Text column, byte[] bval, long timestamp)
          Construct a fully initialized HLogEdit
HLogKey(Text regionName, Text tablename, Text row, long logSeqNum)
          Create the log key! We maintain the tablename mainly for debugging purposes.
HRegionInfo(long regionId, HTableDescriptor tableDesc, Text startKey, Text endKey)
          Construct HRegionInfo with explicit parameters
HRegionInfo(long regionId, HTableDescriptor tableDesc, Text startKey, Text endKey, boolean split)
          Construct HRegionInfo with explicit parameters
HStoreKey(Text row)
          Create an HStoreKey specifying only the row. The column defaults to the empty string and the timestamp defaults to Long.MAX_VALUE.
HStoreKey(Text row, long timestamp)
          Create an HStoreKey specifying the row and timestamp. The column name defaults to the empty string.
HStoreKey(Text row, Text column)
          Create an HStoreKey specifying the row and column names. The timestamp defaults to Long.MAX_VALUE.
HStoreKey(Text row, Text column, long timestamp)
          Create an HStoreKey specifying all the fields.
HTable.ClientScanner(Text[] columns, Text startRow, long timestamp, RowFilterInterface filter)
           
HTable(Configuration conf, Text tableName)
          Creates an object to access a HBase table
 
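The HTable methods above combine into a lock/put/commit update cycle followed by reads. A minimal usage sketch; the table, row, and column names are hypothetical, and HTable.commit(long) and HBaseConfiguration are assumed from the same era's API since they do not appear in this Text-specific listing:

```java
// Sketch of the pre-0.20 HBase client API listed above.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HTable;
import org.apache.hadoop.io.Text;

public class HTableSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new HBaseConfiguration();          // assumed config class
    HTable table = new HTable(conf, new Text("mytable"));   // HTable(Configuration, Text)

    // Atomic single-row update: obtain a row lock, buffer puts, then commit.
    long lockid = table.startUpdate(new Text("row1"));
    table.put(lockid, new Text("info:server"), "host1".getBytes());
    table.commit(lockid);                                   // commit(long) assumed

    // Read it back; column names are family:qualifier Text values.
    byte[] value = table.get(new Text("row1"), new Text("info:server"));
  }
}
```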

Uses of Text in org.apache.hadoop.hbase.filter
 

Methods in org.apache.hadoop.hbase.filter that return Text
 Text StopRowFilter.getStopRowKey()
          An accessor for the stopRowKey
 

Methods in org.apache.hadoop.hbase.filter with parameters of type Text
 boolean RowFilterInterface.filter(Text rowKey)
          Filters on just a row key.
 boolean PageRowFilter.filter(Text rowKey)
          Filters on just a row key.
 boolean RowFilterSet.filter(Text rowKey)
          Filters on just a row key.
 boolean RegExpRowFilter.filter(Text rowKey)
          Filters on just a row key.
 boolean WhileMatchRowFilter.filter(Text rowKey)
          Filters on just a row key.
 boolean StopRowFilter.filter(Text rowKey)
          Filters on just a row key.
 boolean RowFilterInterface.filter(Text rowKey, Text colKey, byte[] data)
          Filters on row key and/or a column key.
 boolean PageRowFilter.filter(Text rowKey, Text colKey, byte[] data)
          Filters on row key and/or a column key.
 boolean RowFilterSet.filter(Text rowKey, Text colKey, byte[] data)
          Filters on row key and/or a column key.
 boolean RegExpRowFilter.filter(Text rowKey, Text colKey, byte[] data)
          Filters on row key and/or a column key.
 boolean WhileMatchRowFilter.filter(Text rowKey, Text colKey, byte[] data)
          Filters on row key and/or a column key.
 boolean StopRowFilter.filter(Text rowKey, Text colKey, byte[] data)
          Filters on row key and/or a column key. Because StopRowFilter does not examine column information, this method defaults to calling the rowKey-only version of filter.
 void RowFilterInterface.rowProcessed(boolean filtered, Text key)
          Called to let filter know the final decision (to pass or filter) on a given row.
 void PageRowFilter.rowProcessed(boolean filtered, Text rowKey)
          Called to let filter know the final decision (to pass or filter) on a given row.
 void RowFilterSet.rowProcessed(boolean filtered, Text rowKey)
          Called to let filter know the final decision (to pass or filter) on a given row.
 void RegExpRowFilter.rowProcessed(boolean filtered, Text rowKey)
          Called to let filter know the final decision (to pass or filter) on a given row.
 void WhileMatchRowFilter.rowProcessed(boolean filtered, Text rowKey)
          Called to let filter know the final decision (to pass or filter) on a given row.
 void StopRowFilter.rowProcessed(boolean filtered, Text rowKey)
          Called to let filter know the final decision (to pass or filter) on a given row.
 void RegExpRowFilter.setColumnFilter(Text colKey, byte[] value)
          Specify a value that must be matched for the given column.
 void RowFilterInterface.validate(Text[] columns)
          Validates that this filter applies only to a subset of the given columns.
 void PageRowFilter.validate(Text[] columns)
          Validates that this filter applies only to a subset of the given columns.
 void RowFilterSet.validate(Text[] columns)
          Validates that this filter applies only to a subset of the given columns.
 void RegExpRowFilter.validate(Text[] columns)
          Validates that this filter applies only to a subset of the given columns.
 void WhileMatchRowFilter.validate(Text[] columns)
          Validates that this filter applies only to a subset of the given columns.
 void StopRowFilter.validate(Text[] columns)
          Validates that this filter applies only to a subset of the given columns.
 

Method parameters in org.apache.hadoop.hbase.filter with type arguments of type Text
 boolean RowFilterInterface.filterNotNull(TreeMap<Text,byte[]> columns)
           Filters a row if: 1) the given row (the columns argument) has a columnKey expected to be null AND the value associated with that columnKey is non-null.
 boolean PageRowFilter.filterNotNull(TreeMap<Text,byte[]> columns)
           Filters a row if: 1) the given row (the columns argument) has a columnKey expected to be null AND the value associated with that columnKey is non-null.
 boolean RowFilterSet.filterNotNull(TreeMap<Text,byte[]> columns)
           Filters a row if: 1) the given row (the columns argument) has a columnKey expected to be null AND the value associated with that columnKey is non-null.
 boolean RegExpRowFilter.filterNotNull(TreeMap<Text,byte[]> columns)
           Filters a row if: 1) the given row (the columns argument) has a columnKey expected to be null AND the value associated with that columnKey is non-null.
 boolean WhileMatchRowFilter.filterNotNull(TreeMap<Text,byte[]> columns)
           Filters a row if: 1) the given row (the columns argument) has a columnKey expected to be null AND the value associated with that columnKey is non-null.
 boolean StopRowFilter.filterNotNull(TreeMap<Text,byte[]> columns)
           Filters a row if: 1) the given row (the columns argument) has a columnKey expected to be null AND the value associated with that columnKey is non-null. Because StopRowFilter does not examine column information, this method defaults to calling filterAllRemaining().
 void RegExpRowFilter.setColumnFilters(Map<Text,byte[]> columnFilter)
          Set column filters for a number of columns.
 

Constructors in org.apache.hadoop.hbase.filter with parameters of type Text
StopRowFilter(Text stopRowKey)
          Constructor that takes a stopRowKey on which to filter
 

Constructor parameters in org.apache.hadoop.hbase.filter with type arguments of type Text
RegExpRowFilter(String rowKeyRegExp, Map<Text,byte[]> columnFilter)
          Constructor that takes a row key regular expression to filter on.
 
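The filter constructors above compose with HTable.obtainScanner(Text[], Text, RowFilterInterface). A sketch with hypothetical row-key patterns and column values:

```java
// Sketch: build row filters from the constructors listed above.
import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.hbase.filter.RegExpRowFilter;
import org.apache.hadoop.hbase.filter.StopRowFilter;
import org.apache.hadoop.io.Text;

public class FilterSketch {
  public static void main(String[] args) {
    // Stop scanning once the row key reaches "row-999" (hypothetical key).
    StopRowFilter stop = new StopRowFilter(new Text("row-999"));

    // Pass rows whose key matches a pattern and whose column holds a given value.
    Map<Text, byte[]> columnFilter = new HashMap<Text, byte[]>();
    columnFilter.put(new Text("info:server"), "host1".getBytes());
    RegExpRowFilter regexp = new RegExpRowFilter("row-.*", columnFilter);

    // Either filter can then be handed to
    // HTable.obtainScanner(columns, startRow, filter).
  }
}
```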

Uses of Text in org.apache.hadoop.hbase.io
 

Methods in org.apache.hadoop.hbase.io that return Text
 Text BatchOperation.getColumn()
           
 Text BatchUpdate.getRow()
           
 

Methods in org.apache.hadoop.hbase.io with parameters of type Text
 void BatchUpdate.delete(long lid, Text column)
          Delete the value for a column. Deletes the cell whose row/column/commit-timestamp match those of the delete.
 void BatchUpdate.put(long lid, Text column, byte[] val)
          Change a value for the specified column
 long BatchUpdate.startUpdate(Text row)
          Start a batch row insertion/update.
 

Constructors in org.apache.hadoop.hbase.io with parameters of type Text
BatchOperation(BatchOperation.Operation operation, Text column, byte[] value)
          Creates a put operation
BatchOperation(Text column)
          Creates a DELETE operation
BatchOperation(Text column, byte[] value)
          Creates a PUT operation
 
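The BatchUpdate members above mirror HTable's startUpdate/put/delete cycle, but buffer the operations so they can be applied in one RPC. A sketch with hypothetical row and column names (the no-argument BatchUpdate constructor is assumed; it is not part of this Text-specific listing):

```java
// Sketch of assembling a BatchUpdate from the methods listed above.
import org.apache.hadoop.hbase.io.BatchUpdate;
import org.apache.hadoop.io.Text;

public class BatchUpdateSketch {
  public static void main(String[] args) {
    BatchUpdate batch = new BatchUpdate();            // no-arg constructor assumed
    long lid = batch.startUpdate(new Text("row1"));   // start a batch row update
    batch.put(lid, new Text("info:server"), "host1".getBytes());
    batch.delete(lid, new Text("info:old"));
    // The assembled batch is then applied in a single RPC via
    // HRegionInterface.batchUpdate(regionName, timestamp, batch).
  }
}
```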

Uses of Text in org.apache.hadoop.hbase.mapred
 

Methods in org.apache.hadoop.hbase.mapred that return Text
protected  Text GroupingTableMap.createGroupKey(byte[][] vals)
          Create a key by concatenating multiple column values.
 Text TableSplit.getEndRow()
           
 Text TableSplit.getStartRow()
           
 Text TableSplit.getTableName()
           
 

Methods in org.apache.hadoop.hbase.mapred that return types with arguments of type Text
 RecordWriter<Text,org.apache.hadoop.hbase.mapred.LuceneDocumentWrapper> IndexOutputFormat.getRecordWriter(FileSystem fs, JobConf job, String name, Progressable progress)
           
 

Methods in org.apache.hadoop.hbase.mapred with parameters of type Text
 void TableOutputCollector.collect(Text key, MapWritable value)
          Restrict Table Map/Reduce's output to be a Text key and a record.
 void IndexTableReduce.reduce(Text key, Iterator<MapWritable> values, OutputCollector<Text,org.apache.hadoop.hbase.mapred.LuceneDocumentWrapper> output, Reporter reporter)
           
 void IdentityTableReduce.reduce(Text key, Iterator values, TableOutputCollector output, Reporter reporter)
          No aggregation; output pairs of (key, record).
abstract  void TableReduce.reduce(Text key, Iterator values, TableOutputCollector output, Reporter reporter)
           
 void TableOutputFormat.TableRecordWriter.write(Text key, MapWritable value)
          Writes a key/value pair.
 

Method parameters in org.apache.hadoop.hbase.mapred with type arguments of type Text
 void IndexTableReduce.reduce(Text key, Iterator<MapWritable> values, OutputCollector<Text,org.apache.hadoop.hbase.mapred.LuceneDocumentWrapper> output, Reporter reporter)
           
 

Constructors in org.apache.hadoop.hbase.mapred with parameters of type Text
TableSplit(Text tableName, Text startRow, Text endRow)
          Constructor
 
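The reduce signatures above show how a TableReduce emits Text row keys with MapWritable records. A sketch of a pass-through reducer in the style of IdentityTableReduce (class name hypothetical; raw Iterator matches the listed signature):

```java
// Sketch of a TableReduce implementation matching the signatures above.
import java.io.IOException;
import java.util.Iterator;
import org.apache.hadoop.hbase.mapred.TableOutputCollector;
import org.apache.hadoop.hbase.mapred.TableReduce;
import org.apache.hadoop.io.MapWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.Reporter;

public class PassThroughTableReduce extends TableReduce {
  // Emit each (key, record) pair unchanged, like IdentityTableReduce.
  public void reduce(Text key, Iterator values,
      TableOutputCollector output, Reporter reporter) throws IOException {
    while (values.hasNext()) {
      output.collect(key, (MapWritable) values.next());
    }
  }
}
```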

Uses of Text in org.apache.hadoop.hbase.shell
 

Methods in org.apache.hadoop.hbase.shell that return Text
 Text InsertCommand.getColumn(int i)
           
 Text[] DeleteCommand.getColumnList(HBaseAdmin admin, HTable hTable)
           
 Text InsertCommand.getRow()
           
 

Uses of Text in org.apache.hadoop.io
 

Methods in org.apache.hadoop.io that return Text
 Text SequenceFile.Metadata.get(Text name)
           
 

Methods in org.apache.hadoop.io that return types with arguments of type Text
 TreeMap<Text,Text> SequenceFile.Metadata.getMetadata()
           
 

Methods in org.apache.hadoop.io with parameters of type Text
 Text SequenceFile.Metadata.get(Text name)
           
 void Text.set(Text other)
          Copy a text.
 void SequenceFile.Metadata.set(Text name, Text value)
           
 

Constructors in org.apache.hadoop.io with parameters of type Text
Text(Text utf8)
          Construct from another text.
 

Constructor parameters in org.apache.hadoop.io with type arguments of type Text
SequenceFile.Metadata(TreeMap<Text,Text> arg)
           
 
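The io members above cover Text copying and the Text-to-Text metadata map stored in a SequenceFile header. A small sketch combining them (metadata key/value strings are hypothetical):

```java
// Sketch of the Text and SequenceFile.Metadata members listed above.
import java.util.TreeMap;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

public class TextMetadataSketch {
  public static void main(String[] args) {
    Text a = new Text("hello");
    Text b = new Text(a);   // Text(Text): construct from another text
    b.set(a);               // set(Text): copy a text into an existing instance

    // Metadata is a TreeMap<Text,Text> carried in the SequenceFile header.
    SequenceFile.Metadata meta =
        new SequenceFile.Metadata(new TreeMap<Text, Text>());
    meta.set(new Text("creator"), new Text("sketch"));
    Text creator = meta.get(new Text("creator"));
  }
}
```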

Uses of Text in org.apache.hadoop.mapred
 

Methods in org.apache.hadoop.mapred that return Text
 Text SequenceFileAsTextRecordReader.createKey()
           
 Text KeyValueLineRecordReader.createKey()
           
 Text SequenceFileAsTextRecordReader.createValue()
           
 Text LineRecordReader.createValue()
           
 Text KeyValueLineRecordReader.createValue()
           
 

Methods in org.apache.hadoop.mapred that return types with arguments of type Text
 RecordReader<LongWritable,Text> TextInputFormat.getRecordReader(InputSplit genericSplit, JobConf job, Reporter reporter)
           
 RecordReader<Text,Text> SequenceFileAsTextInputFormat.getRecordReader(InputSplit split, JobConf job, Reporter reporter)
            
 RecordReader<Text,Text> KeyValueTextInputFormat.getRecordReader(InputSplit genericSplit, JobConf job, Reporter reporter)
            
 

Methods in org.apache.hadoop.mapred with parameters of type Text
 boolean LineRecordReader.next(LongWritable key, Text value)
          Read a line.
 boolean SequenceFileAsTextRecordReader.next(Text key, Text value)
          Read key/value pair in a line.
 boolean KeyValueLineRecordReader.next(Text key, Text value)
          Read key/value pair in a line.
 

Uses of Text in org.apache.hadoop.mapred.lib
 

Methods in org.apache.hadoop.mapred.lib with parameters of type Text
 void TokenCountMapper.map(K key, Text value, OutputCollector<Text,LongWritable> output, Reporter reporter)
           
 void RegexMapper.map(K key, Text value, OutputCollector<Text,LongWritable> output, Reporter reporter)
           
 void FieldSelectionMapReduce.reduce(Text key, Iterator<Text> values, OutputCollector<Text,Text> output, Reporter reporter)
           
 

Method parameters in org.apache.hadoop.mapred.lib with type arguments of type Text
 void TokenCountMapper.map(K key, Text value, OutputCollector<Text,LongWritable> output, Reporter reporter)
           
 void RegexMapper.map(K key, Text value, OutputCollector<Text,LongWritable> output, Reporter reporter)
           
 void FieldSelectionMapReduce.map(K key, V val, OutputCollector<Text,Text> output, Reporter reporter)
          The identity function.
 void FieldSelectionMapReduce.reduce(Text key, Iterator<Text> values, OutputCollector<Text,Text> output, Reporter reporter)
           
 

Uses of Text in org.apache.hadoop.mapred.lib.aggregate
 

Fields in org.apache.hadoop.mapred.lib.aggregate declared as Text
static Text ValueAggregatorDescriptor.ONE
           
 

Methods in org.apache.hadoop.mapred.lib.aggregate that return types with arguments of type Text
static Map.Entry<Text,Text> ValueAggregatorBaseDescriptor.generateEntry(String type, String id, Text val)
           
 ArrayList<Map.Entry<Text,Text>> ValueAggregatorBaseDescriptor.generateKeyValPairs(Object key, Object val)
          Generate 1 or 2 aggregation-id/value pairs for the given key/value pair.
 ArrayList<Map.Entry<Text,Text>> ValueAggregatorDescriptor.generateKeyValPairs(Object key, Object val)
          Generate a list of aggregation-id/value pairs for the given key/value pair.
 ArrayList<Map.Entry<Text,Text>> UserDefinedValueAggregatorDescriptor.generateKeyValPairs(Object key, Object val)
          Generate a list of aggregation-id/value pairs for the given key/value pairs by delegating the invocation to the real object.
 

Methods in org.apache.hadoop.mapred.lib.aggregate with parameters of type Text
static Map.Entry<Text,Text> ValueAggregatorBaseDescriptor.generateEntry(String type, String id, Text val)
           
 void ValueAggregatorReducer.reduce(Text key, Iterator<Text> values, OutputCollector<Text,Text> output, Reporter reporter)
           
 void ValueAggregatorMapper.reduce(Text arg0, Iterator<Text> arg1, OutputCollector<Text,Text> arg2, Reporter arg3)
          Do nothing.
 void ValueAggregatorCombiner.reduce(Text key, Iterator<Text> values, OutputCollector<Text,Text> output, Reporter reporter)
          Combines values for a given key.
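generateEntry(type, id, val) above pairs a record-derived value with an aggregation key that names both the aggregator type (for example "LongValueSum") and the id being aggregated. A plain-Java sketch under that assumption; the ':' separator and the class name are this sketch's choices for illustration, not necessarily Hadoop's:

```java
import java.util.AbstractMap;
import java.util.Map;

// AggregateEntrySketch is a hypothetical illustration of what
// generateEntry(type, id, val) produces: one aggregation-id/value pair
// whose key combines the aggregator type with a record-derived id.
// The ':' separator is an assumption made for illustration only.
final class AggregateEntrySketch {
    static Map.Entry<String, String> generateEntry(String type, String id, String val) {
        return new AbstractMap.SimpleImmutableEntry<>(type + ":" + id, val);
    }
}
```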
 

Method parameters in org.apache.hadoop.mapred.lib.aggregate with type arguments of type Text
 void ValueAggregatorReducer.map(K1 arg0, V1 arg1, OutputCollector<Text,Text> arg2, Reporter arg3)
          Do nothing.
 void ValueAggregatorMapper.map(K1 key, V1 value, OutputCollector<Text,Text> output, Reporter reporter)
          The map function.
 void ValueAggregatorCombiner.map(K1 arg0, V1 arg1, OutputCollector<Text,Text> arg2, Reporter arg3)
          Do nothing.
 void ValueAggregatorReducer.reduce(Text key, Iterator<Text> values, OutputCollector<Text,Text> output, Reporter reporter)
           
 void ValueAggregatorMapper.reduce(Text arg0, Iterator<Text> arg1, OutputCollector<Text,Text> arg2, Reporter arg3)
          Do nothing.
 void ValueAggregatorCombiner.reduce(Text key, Iterator<Text> values, OutputCollector<Text,Text> output, Reporter reporter)
          Combines values for a given key.
 

Uses of Text in org.apache.hadoop.streaming
 

Methods in org.apache.hadoop.streaming that return Text
 Text StreamBaseRecordReader.createKey()
           
 Text StreamBaseRecordReader.createValue()
           
 

Methods in org.apache.hadoop.streaming that return types with arguments of type Text
 RecordReader<Text,Text> StreamInputFormat.getRecordReader(InputSplit genericSplit, JobConf job, Reporter reporter)
           
 

Methods in org.apache.hadoop.streaming with parameters of type Text
 boolean StreamXmlRecordReader.next(Text key, Text value)
           
abstract  boolean StreamBaseRecordReader.next(Text key, Text value)
          Read a record.
static void UTF8ByteArrayUtils.splitKeyVal(byte[] utf, int start, int length, Text key, Text val, int splitPos)
          Split a UTF-8 byte array into a key and a value, assuming the delimiter is at splitPos.
static void UTF8ByteArrayUtils.splitKeyVal(byte[] utf, Text key, Text val, int splitPos)
          Split a UTF-8 byte array into a key and a value, assuming the delimiter is at splitPos.
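splitKeyVal above takes UTF-8 bytes plus the known position of the key/value delimiter (typically a tab) and copies the bytes before it into the key and the bytes after it into the value. A plain-JDK sketch of the same split (SplitSketch is hypothetical, not the Hadoop utility; the real methods fill caller-supplied Text objects instead of returning Strings):

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

// SplitSketch is a hypothetical illustration of what splitKeyVal does:
// given UTF-8 bytes and the position of the key/value delimiter, copy
// the bytes before it into the key and the bytes after it into the
// value, skipping the delimiter byte itself.
final class SplitSketch {
    static String[] splitKeyVal(byte[] utf, int splitPos) {
        byte[] key = Arrays.copyOfRange(utf, 0, splitPos);
        byte[] val = Arrays.copyOfRange(utf, splitPos + 1, utf.length);
        return new String[] {
            new String(key, StandardCharsets.UTF_8),
            new String(val, StandardCharsets.UTF_8)
        };
    }
}
```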
 

Uses of Text in org.apache.hadoop.tools
 

Methods in org.apache.hadoop.tools with parameters of type Text
 void Logalyzer.LogRegexMapper.map(K key, Text value, OutputCollector<Text,LongWritable> output, Reporter reporter)
           
 

Method parameters in org.apache.hadoop.tools with type arguments of type Text
 void Logalyzer.LogRegexMapper.map(K key, Text value, OutputCollector<Text,LongWritable> output, Reporter reporter)
           
 

Uses of Text in org.apache.hadoop.util
 

Method parameters in org.apache.hadoop.util with type arguments of type Text
 void CopyFiles.FSCopyFilesMapper.map(LongWritable key, org.apache.hadoop.util.CopyFiles.FilePair value, OutputCollector<WritableComparable,Text> out, Reporter reporter)
          Map method.
 



Copyright © 2006 The Apache Software Foundation