Packages that use Table | |
---|---|
org.apache.hadoop.hive.ql.exec | Hive QL execution tasks, operators, functions and other handlers. |
org.apache.hadoop.hive.ql.hooks | |
org.apache.hadoop.hive.ql.index | |
org.apache.hadoop.hive.ql.lockmgr | Hive Lock Manager interfaces and some custom implementations |
org.apache.hadoop.hive.ql.metadata | |
org.apache.hadoop.hive.ql.metadata.formatting | |
org.apache.hadoop.hive.ql.optimizer | |
org.apache.hadoop.hive.ql.optimizer.physical.index | |
org.apache.hadoop.hive.ql.optimizer.ppr | |
org.apache.hadoop.hive.ql.parse | |
org.apache.hadoop.hive.ql.plan | |
org.apache.hadoop.hive.ql.security.authorization | |
Uses of Table in org.apache.hadoop.hive.ql.exec |
---|
Methods in org.apache.hadoop.hive.ql.exec with parameters of type Table | |
---|---|
static void |
Utilities.addMapWork(MapredWork mr,
Table tbl,
String alias,
Operator<?> work)
|
static boolean |
Utilities.checkJDOPushDown(Table tab,
ExprNodeDesc expr)
Check if the partition pruning expression can be pushed down to JDO filtering. |
static String |
ArchiveUtils.conflictingArchiveNameOrNull(Hive db,
Table tbl,
LinkedHashMap<String,String> partSpec)
Determines whether one can insert into the partition(s) or whether there is a conflict with an archive. |
static ArchiveUtils.PartSpecInfo |
ArchiveUtils.PartSpecInfo.create(Table tbl,
Map<String,String> partSpec)
Extracts a partial prefix specification from the table and a key-value map. |
org.apache.hadoop.fs.Path |
ArchiveUtils.PartSpecInfo.createPath(Table tbl)
Creates the path in the filesystem where partitions matching the prefix should lie. |
static TableDesc |
Utilities.getTableDesc(Table tbl)
|
static void |
Utilities.validatePartSpec(Table tbl,
Map<String,String> partSpec)
|
Uses of Table in org.apache.hadoop.hive.ql.hooks |
---|
Methods in org.apache.hadoop.hive.ql.hooks that return Table | |
---|---|
Table |
Entity.getT()
|
Table |
Entity.getTable()
Get the table associated with the entity. |
Methods in org.apache.hadoop.hive.ql.hooks with parameters of type Table | |
---|---|
void |
Entity.setT(Table t)
|
Constructors in org.apache.hadoop.hive.ql.hooks with parameters of type Table | |
---|---|
Entity(Table t)
Constructor for a table. |
|
Entity(Table t,
boolean complete)
|
|
ReadEntity(Table t)
Constructor. |
|
WriteEntity(Table t)
Constructor for a table. |
|
WriteEntity(Table t,
boolean complete)
|
Uses of Table in org.apache.hadoop.hive.ql.index |
---|
Methods in org.apache.hadoop.hive.ql.index with parameters of type Table | |
---|---|
List<Task<?>> |
TableBasedIndexHandler.generateIndexBuildTaskList(Table baseTbl,
Index index,
List<Partition> indexTblPartitions,
List<Partition> baseTblPartitions,
Table indexTbl,
Set<ReadEntity> inputs,
Set<WriteEntity> outputs)
|
List<Task<?>> |
HiveIndexHandler.generateIndexBuildTaskList(Table baseTbl,
Index index,
List<Partition> indexTblPartitions,
List<Partition> baseTblPartitions,
Table indexTbl,
Set<ReadEntity> inputs,
Set<WriteEntity> outputs)
Requests that the handler generate a plan for building the index; the plan should read the base table and write out the index representation. |
Uses of Table in org.apache.hadoop.hive.ql.lockmgr |
---|
Constructors in org.apache.hadoop.hive.ql.lockmgr with parameters of type Table | |
---|---|
HiveLockObject(Table tbl,
HiveLockObject.HiveLockObjectData lockData)
|
Uses of Table in org.apache.hadoop.hive.ql.metadata |
---|
Methods in org.apache.hadoop.hive.ql.metadata that return Table | |
---|---|
Table |
Table.copy()
|
Table |
Partition.getTable()
|
Table |
Hive.getTable(String tableName)
Returns metadata for the table named tableName |
Table |
Hive.getTable(String tableName,
boolean throwException)
Returns metadata for the table named tableName |
Table |
Hive.getTable(String dbName,
String tableName)
Returns metadata of the table |
Table |
Hive.getTable(String dbName,
String tableName,
boolean throwException)
Returns metadata of the table |
Table |
Hive.newTable(String tableName)
|
Methods in org.apache.hadoop.hive.ql.metadata with parameters of type Table | |
---|---|
void |
Hive.alterTable(String tblName,
Table newTbl)
Updates the existing table metadata with the new metadata. |
Partition |
Hive.createPartition(Table tbl,
Map<String,String> partSpec)
Creates a partition. |
Partition |
Hive.createPartition(Table tbl,
Map<String,String> partSpec,
org.apache.hadoop.fs.Path location,
Map<String,String> partParams,
String inputFormat,
String outputFormat,
int numBuckets,
List<FieldSchema> cols,
String serializationLib,
Map<String,String> serdeParams,
List<String> bucketCols,
List<Order> sortCols)
Creates a partition |
void |
Hive.createTable(Table tbl)
Creates the table with the given objects. |
void |
Hive.createTable(Table tbl,
boolean ifNotExists)
Creates the table with the given objects. |
Partition |
Hive.getPartition(Table tbl,
Map<String,String> partSpec,
boolean forceCreate)
|
Partition |
Hive.getPartition(Table tbl,
Map<String,String> partSpec,
boolean forceCreate,
String partPath,
boolean inheritTableSpecs)
Returns partition metadata |
List<Partition> |
Hive.getPartitions(Table tbl)
Get all the partitions that the table has. |
List<Partition> |
Hive.getPartitions(Table tbl,
Map<String,String> partialPartSpec)
Get all the partitions of the table that match the given partial specification. |
List<Partition> |
Hive.getPartitions(Table tbl,
Map<String,String> partialPartSpec,
short limit)
Get all the partitions of the table that match the given partial specification. |
List<Partition> |
Hive.getPartitionsByFilter(Table tbl,
String filter)
Get a list of Partitions by filter. |
List<Partition> |
Hive.getPartitionsByNames(Table tbl,
List<String> partNames)
Get all partitions of the table that match the given list of partition names. |
List<Partition> |
Hive.getPartitionsByNames(Table tbl,
Map<String,String> partialPartSpec)
Get all the partitions of the table that match the given partial specification. |
void |
Hive.renamePartition(Table tbl,
Map<String,String> oldPartSpec,
Partition newPart)
Rename an old partition to a new partition. |
void |
Partition.setTable(Table table)
Should be only used by serialization. |
Constructors in org.apache.hadoop.hive.ql.metadata with parameters of type Table | |
---|---|
DummyPartition(Table tbl,
String name)
|
|
DummyPartition(Table tbl,
String name,
Map<String,String> partSpec)
|
|
Partition(Table tbl)
Create an empty partition. |
|
Partition(Table tbl,
Map<String,String> partSpec,
org.apache.hadoop.fs.Path location)
Create partition object with the given info. |
|
Partition(Table tbl,
Partition tp)
|
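Several of the Hive.getPartitions and Hive.getPartitionsByNames overloads above accept a partial partition specification. The following is a minimal pure-JDK sketch of the matching rule they describe (it is not Hive's implementation, and the class and method names are hypothetical): a full partition spec matches when it agrees with every key/value pair in the partial spec.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class PartialSpecDemo {
    // Hypothetical helper: a full partition spec matches a partial spec
    // when it contains every key of the partial spec with the same value.
    static boolean matches(Map<String, String> fullSpec, Map<String, String> partialSpec) {
        return partialSpec.entrySet().stream()
                .allMatch(e -> e.getValue().equals(fullSpec.get(e.getKey())));
    }

    public static void main(String[] args) {
        List<Map<String, String>> partitions = List.of(
                Map.of("ds", "2008-04-08", "hr", "11"),
                Map.of("ds", "2008-04-08", "hr", "12"),
                Map.of("ds", "2008-04-09", "hr", "11"));
        // A partial spec on ds alone selects both hr values for that day.
        Map<String, String> partial = Map.of("ds", "2008-04-08");
        List<Map<String, String>> matched = partitions.stream()
                .filter(p -> matches(p, partial))
                .collect(Collectors.toList());
        System.out.println(matched.size()); // prints 2
    }
}
```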
Uses of Table in org.apache.hadoop.hive.ql.metadata.formatting |
---|
Methods in org.apache.hadoop.hive.ql.metadata.formatting with parameters of type Table | |
---|---|
void |
JsonMetaDataFormatter.describeTable(DataOutputStream out,
String colPath,
String tableName,
Table tbl,
Partition part,
List<FieldSchema> cols,
boolean isFormatted,
boolean isExt)
Describe table. |
void |
MetaDataFormatter.describeTable(DataOutputStream out,
String colPath,
String tableName,
Table tbl,
Partition part,
List<FieldSchema> cols,
boolean isFormatted,
boolean isExt)
Describe table. |
void |
TextMetaDataFormatter.describeTable(DataOutputStream outStream,
String colPath,
String tableName,
Table tbl,
Partition part,
List<FieldSchema> cols,
boolean isFormatted,
boolean isExt)
|
static String |
MetaDataFormatUtils.getTableInformation(Table table)
|
Method parameters in org.apache.hadoop.hive.ql.metadata.formatting with type arguments of type Table | |
---|---|
void |
JsonMetaDataFormatter.showTableStatus(DataOutputStream out,
Hive db,
HiveConf conf,
List<Table> tbls,
Map<String,String> part,
Partition par)
|
void |
MetaDataFormatter.showTableStatus(DataOutputStream out,
Hive db,
HiveConf conf,
List<Table> tbls,
Map<String,String> part,
Partition par)
Show the table status. |
void |
TextMetaDataFormatter.showTableStatus(DataOutputStream outStream,
Hive db,
HiveConf conf,
List<Table> tbls,
Map<String,String> part,
Partition par)
|
Uses of Table in org.apache.hadoop.hive.ql.optimizer |
---|
Methods in org.apache.hadoop.hive.ql.optimizer with parameters of type Table | |
---|---|
static List<Index> |
IndexUtils.getIndexes(Table baseTableMetaData,
List<String> matchIndexTypes)
Get a list of indexes on a table that match given types. |
Method parameters in org.apache.hadoop.hive.ql.optimizer with type arguments of type Table | |
---|---|
static Set<Partition> |
IndexUtils.checkPartitionsCoveredByIndex(TableScanOperator tableScan,
ParseContext pctx,
Map<Table,List<Index>> indexes)
Check the partitions used by the table scan to make sure they also exist in the index table. |
Uses of Table in org.apache.hadoop.hive.ql.optimizer.physical.index |
---|
Constructor parameters in org.apache.hadoop.hive.ql.optimizer.physical.index with type arguments of type Table | |
---|---|
IndexWhereProcessor(Map<Table,List<Index>> indexes)
|
Uses of Table in org.apache.hadoop.hive.ql.optimizer.ppr |
---|
Methods in org.apache.hadoop.hive.ql.optimizer.ppr with parameters of type Table | |
---|---|
static boolean |
PartitionPruner.onlyContainsPartnCols(Table tab,
ExprNodeDesc expr)
Find out whether the condition only contains partitioned columns. |
static PrunedPartitionList |
PartitionPruner.prune(Table tab,
ExprNodeDesc prunerExpr,
HiveConf conf,
String alias,
Map<String,PrunedPartitionList> prunedPartitionsMap)
Get the partition list for the table that satisfies the partition pruner condition. |
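The idea behind PartitionPruner.onlyContainsPartnCols can be sketched in plain JDK terms (this is an illustration, not Hive's expression-walking code, and the names are hypothetical): a predicate can be evaluated at pruning time only if every column it references is a partition column of the table.

```java
import java.util.Set;

public class PartnColCheck {
    // Hypothetical sketch: the condition is prunable only when the set of
    // columns it references is a subset of the table's partition columns.
    static boolean onlyPartitionColumns(Set<String> referencedCols, Set<String> partitionCols) {
        return partitionCols.containsAll(referencedCols);
    }

    public static void main(String[] args) {
        Set<String> partCols = Set.of("ds", "hr");
        System.out.println(onlyPartitionColumns(Set.of("ds"), partCols));        // true
        System.out.println(onlyPartitionColumns(Set.of("ds", "uid"), partCols)); // false
    }
}
```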
Uses of Table in org.apache.hadoop.hive.ql.parse |
---|
Fields in org.apache.hadoop.hive.ql.parse declared as Table | |
---|---|
Table |
BaseSemanticAnalyzer.tableSpec.tableHandle
|
Methods in org.apache.hadoop.hive.ql.parse that return Table | |
---|---|
Table |
QBMetaData.getDestTableForAlias(String alias)
|
Table |
QBMetaData.getSrcForAlias(String alias)
|
Table |
QBMetaData.getTableForAlias(String alias)
|
Methods in org.apache.hadoop.hive.ql.parse that return types with arguments of type Table | |
---|---|
HashMap<String,Table> |
QBMetaData.getAliasToTable()
|
HashMap<TableScanOperator,Table> |
ParseContext.getTopToTable()
|
Methods in org.apache.hadoop.hive.ql.parse with parameters of type Table | |
---|---|
static void |
EximUtil.createExportDump(org.apache.hadoop.fs.FileSystem fs,
org.apache.hadoop.fs.Path metadataPath,
Table tableHandle,
List<Partition> partitions)
|
boolean |
BaseSemanticAnalyzer.isValidPrefixSpec(Table tTable,
Map<String,String> spec)
Checks whether the given specification is a proper specification for a prefix of the partition columns. For a table partitioned by ds, hr, min, valid specifications are (ds='2008-04-08'), (ds='2008-04-08', hr='12'), and (ds='2008-04-08', hr='12', min='30'); an invalid one is, for example, (ds='2008-04-08', min='30'). |
void |
QBMetaData.setDestForAlias(String alias,
Table tab)
|
void |
QBMetaData.setSrcForAlias(String alias,
Table tab)
|
Method parameters in org.apache.hadoop.hive.ql.parse with type arguments of type Table | |
---|---|
void |
ParseContext.setTopToTable(HashMap<TableScanOperator,Table> topToTable)
|
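The prefix rule described for BaseSemanticAnalyzer.isValidPrefixSpec can be sketched with a small pure-JDK check (an illustration under assumed names, not the Hive implementation): the spec's keys must cover a leading prefix of the table's partition columns, with no gaps.

```java
import java.util.List;
import java.util.Map;

public class PrefixSpecDemo {
    // Hypothetical sketch of the prefix rule: count how many leading
    // partition columns the spec covers, then require that the spec has
    // no keys beyond that prefix.
    static boolean isValidPrefix(List<String> partCols, Map<String, String> spec) {
        int covered = 0;
        for (String col : partCols) {
            if (spec.containsKey(col)) {
                covered++;
            } else {
                break; // the first missing column ends the prefix
            }
        }
        return covered == spec.size();
    }

    public static void main(String[] args) {
        List<String> cols = List.of("ds", "hr", "min");
        // (ds, hr) is a valid prefix; (ds, min) skips hr and is not.
        System.out.println(isValidPrefix(cols, Map.of("ds", "2008-04-08", "hr", "12")));  // true
        System.out.println(isValidPrefix(cols, Map.of("ds", "2008-04-08", "min", "30"))); // false
    }
}
```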
Uses of Table in org.apache.hadoop.hive.ql.plan |
---|
Constructors in org.apache.hadoop.hive.ql.plan with parameters of type Table | |
---|---|
DynamicPartitionCtx(Table tbl,
Map<String,String> partSpec,
String defaultPartName,
int maxParts)
|
Uses of Table in org.apache.hadoop.hive.ql.security.authorization |
---|
Methods in org.apache.hadoop.hive.ql.security.authorization with parameters of type Table | |
---|---|
void |
HiveAuthorizationProvider.authorize(Table table,
Partition part,
List<String> columns,
Privilege[] readRequiredPriv,
Privilege[] writeRequiredPriv)
Checks authorization privileges against a list of columns. |
void |
DefaultHiveAuthorizationProvider.authorize(Table table,
Partition part,
List<String> columns,
Privilege[] inputRequiredPriv,
Privilege[] outputRequiredPriv)
|
void |
HiveAuthorizationProvider.authorize(Table table,
Privilege[] readRequiredPriv,
Privilege[] writeRequiredPriv)
Checks authorization privileges against a Hive table object. |
void |
DefaultHiveAuthorizationProvider.authorize(Table table,
Privilege[] inputRequiredPriv,
Privilege[] outputRequiredPriv)
|
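The authorize(...) methods above take separate arrays of required read and write privileges. A minimal pure-JDK sketch of that contract (an illustration with hypothetical names, not the provider implementations listed here): authorization succeeds only when every required privilege, read and write, is held.

```java
import java.util.Set;

public class PrivilegeCheckDemo {
    // Hypothetical sketch of the authorize contract: all required read
    // privileges and all required write privileges must be held.
    static boolean authorized(Set<String> held, Set<String> readRequired, Set<String> writeRequired) {
        return held.containsAll(readRequired) && held.containsAll(writeRequired);
    }

    public static void main(String[] args) {
        Set<String> held = Set.of("SELECT", "UPDATE");
        System.out.println(authorized(held, Set.of("SELECT"), Set.of("UPDATE"))); // true
        System.out.println(authorized(held, Set.of("SELECT"), Set.of("DROP")));   // false
    }
}
```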