Packages that use HiveException | |
---|---|
org.apache.hadoop.hive.ql.exec | Hive QL execution tasks, operators, functions and other handlers. |
org.apache.hadoop.hive.ql.exec.persistence | |
org.apache.hadoop.hive.ql.index | |
org.apache.hadoop.hive.ql.index.bitmap | |
org.apache.hadoop.hive.ql.index.compact | |
org.apache.hadoop.hive.ql.io | |
org.apache.hadoop.hive.ql.io.rcfile.merge | |
org.apache.hadoop.hive.ql.lockmgr | Hive Lock Manager interfaces and some custom implementations. |
org.apache.hadoop.hive.ql.metadata | |
org.apache.hadoop.hive.ql.metadata.formatting | |
org.apache.hadoop.hive.ql.optimizer | |
org.apache.hadoop.hive.ql.optimizer.ppr | |
org.apache.hadoop.hive.ql.parse | |
org.apache.hadoop.hive.ql.plan | |
org.apache.hadoop.hive.ql.security | |
org.apache.hadoop.hive.ql.security.authorization | |
org.apache.hadoop.hive.ql.session | |
org.apache.hadoop.hive.ql.udf.generic | Standard toolkit and framework for generic User-defined functions. |
org.apache.hadoop.hive.ql.udf.xml | |
org.apache.hive.builtins | |
Uses of HiveException in org.apache.hadoop.hive.ql.exec |
---|
Subclasses of HiveException in org.apache.hadoop.hive.ql.exec | |
---|---|
class |
AmbiguousMethodException
Exception thrown by the UDF and UDAF method resolvers in case a unique method is not found. |
class |
NoMatchingMethodException
Exception thrown by the UDF and UDAF method resolvers in case no matching method is found. |
class |
UDFArgumentException
Exception thrown when a UDF argument is invalid. |
class |
UDFArgumentLengthException
Exception thrown when a UDF receives the wrong number of arguments. |
class |
UDFArgumentTypeException
Exception thrown when UDF arguments have the wrong types. |
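The division of labor among these subclasses is easiest to see in a UDF's initialize() method. The following is an illustrative sketch only; the class and its semantics are hypothetical, not part of Hive:

```java
import org.apache.hadoop.hive.ql.exec.UDFArgumentException;
import org.apache.hadoop.hive.ql.exec.UDFArgumentLengthException;
import org.apache.hadoop.hive.ql.exec.UDFArgumentTypeException;
import org.apache.hadoop.hive.ql.metadata.HiveException;
import org.apache.hadoop.hive.ql.udf.generic.GenericUDF;
import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector;
import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector.Category;
import org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorFactory;

// Hypothetical UDF showing where the argument-checking subclasses fit.
public class GenericUDFExample extends GenericUDF {

  @Override
  public ObjectInspector initialize(ObjectInspector[] arguments)
      throws UDFArgumentException {
    if (arguments.length != 1) {
      // Wrong argument count -> UDFArgumentLengthException.
      throw new UDFArgumentLengthException("example() takes exactly one argument");
    }
    if (arguments[0].getCategory() != Category.PRIMITIVE) {
      // Wrong argument type -> UDFArgumentTypeException(position, message).
      throw new UDFArgumentTypeException(0, "example() expects a primitive type");
    }
    return PrimitiveObjectInspectorFactory.javaStringObjectInspector;
  }

  @Override
  public Object evaluate(DeferredObject[] arguments) throws HiveException {
    Object value = arguments[0].get(); // get() itself may throw HiveException
    return value == null ? null : value.toString();
  }

  @Override
  public String getDisplayString(String[] children) {
    return "example(" + children[0] + ")";
  }
}
```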
Methods in org.apache.hadoop.hive.ql.exec that throw HiveException | |
---|---|
void |
FileSinkOperator.FSPaths.abortWriters(org.apache.hadoop.fs.FileSystem fs,
boolean abort,
boolean delete)
|
static URI |
ArchiveUtils.addSlash(URI u)
Ensures that the URI points to a directory by appending a slash to it. |
protected void |
CommonJoinOperator.checkAndGenObject()
|
void |
Operator.cleanUpInputFileChanged()
|
void |
MapOperator.cleanUpInputFileChangedOp()
|
void |
SMBMapJoinOperator.cleanUpInputFileChangedOp()
|
void |
TableScanOperator.cleanUpInputFileChangedOp()
|
void |
Operator.cleanUpInputFileChangedOp()
|
void |
MapJoinOperator.cleanUpInputFileChangedOp()
|
void |
FetchTask.clearFetch()
Clear the Fetch Operator. |
void |
FetchOperator.clearFetchContext()
Clear the context, if anything needs to be done. |
void |
SkewJoinHandler.close(boolean abort)
|
void |
ScriptOperator.close(boolean abort)
|
void |
Operator.close(boolean abort)
|
void |
MapOperator.closeOp(boolean abort)
Closes extra child operators that are initialized but not executed. |
void |
HashTableSinkOperator.closeOp(boolean abort)
|
void |
SMBMapJoinOperator.closeOp(boolean abort)
|
void |
GroupByOperator.closeOp(boolean abort)
We need to forward all the aggregations to children. |
void |
LimitOperator.closeOp(boolean abort)
|
void |
CommonJoinOperator.closeOp(boolean abort)
All done. |
void |
HashTableDummyOperator.closeOp(boolean abort)
|
void |
FileSinkOperator.closeOp(boolean abort)
|
protected void |
UDTFOperator.closeOp(boolean abort)
|
void |
TableScanOperator.closeOp(boolean abort)
|
void |
JoinOperator.closeOp(boolean abort)
All done. |
protected void |
Operator.closeOp(boolean abort)
Operator specific close routine. |
void |
MapJoinOperator.closeOp(boolean abort)
|
void |
FileSinkOperator.FSPaths.closeWriters(boolean abort)
|
Integer |
ExprNodeGenericFuncEvaluator.compare(Object row)
If the genericUDF is a base comparison, it returns an integer based on the result of comparing the two sides of the UDF, like the compareTo method in Comparable. |
static ArrayList<Object> |
JoinUtil.computeKeys(Object row,
List<ExprNodeEvaluator> keyFields,
List<ObjectInspector> keyFieldsOI)
Return the key as a standard object. |
static AbstractMapJoinKey |
JoinUtil.computeMapJoinKeys(Object row,
List<ExprNodeEvaluator> keyFields,
List<ObjectInspector> keyFieldsOI)
Return the key as a standard object. |
static Object[] |
JoinUtil.computeMapJoinValues(Object row,
List<ExprNodeEvaluator> valueFields,
List<ObjectInspector> valueFieldsOI,
List<ExprNodeEvaluator> filters,
List<ObjectInspector> filtersOI,
int[] filterMap)
Return the value as a standard object. |
static ArrayList<Object> |
JoinUtil.computeValues(Object row,
List<ExprNodeEvaluator> valueFields,
List<ObjectInspector> valueFieldsOI,
List<ExprNodeEvaluator> filters,
List<ObjectInspector> filtersOI,
int[] filterMap)
Return the value as a standard object. |
static String |
ArchiveUtils.conflictingArchiveNameOrNull(Hive db,
Table tbl,
LinkedHashMap<String,String> partSpec)
Determines whether one can insert into the partition(s) or whether there is a conflict with an archive. |
static ArchiveUtils.PartSpecInfo |
ArchiveUtils.PartSpecInfo.create(Table tbl,
Map<String,String> partSpec)
Extract partial prefix specification from table and key-value map |
org.apache.hadoop.fs.Path |
ArchiveUtils.PartSpecInfo.createPath(Table tbl)
Creates the path where partitions matching the prefix should lie in the filesystem. |
void |
GroupByOperator.endGroup()
|
void |
CommonJoinOperator.endGroup()
Forward a record of join results. |
void |
JoinOperator.endGroup()
Forward a record of join results. |
void |
Operator.endGroup()
|
Object |
ExprNodeNullEvaluator.evaluate(Object row)
|
Object |
ExprNodeConstantEvaluator.evaluate(Object row)
|
abstract Object |
ExprNodeEvaluator.evaluate(Object row)
Evaluate the expression given the row. |
Object |
ExprNodeGenericFuncEvaluator.evaluate(Object row)
|
Object |
ExprNodeFieldEvaluator.evaluate(Object row)
|
Object |
ExprNodeColumnEvaluator.evaluate(Object row)
|
protected void |
GroupByOperator.forward(Object[] keys,
GenericUDAFEvaluator.AggregationBuffer[] aggs)
Forward a record of keys and aggregation results. |
protected void |
Operator.forward(Object row,
ObjectInspector rowInspector)
|
void |
UDTFOperator.forwardUDTFOutput(Object o)
forwardUDTFOutput is typically called indirectly by the GenericUDTF when the GenericUDTF has generated output rows that should be passed on to the next operator(s) in the DAG. |
void |
MapJoinOperator.generateMapMetaData()
|
static int |
ArchiveUtils.getArchivingLevel(Partition p)
Returns the archiving level, i.e., how many fields were set in the partial specification that ARCHIVE was run for. |
static List<LinkedHashMap<String,String>> |
Utilities.getFullDPSpecs(org.apache.hadoop.conf.Configuration conf,
DynamicPartitionCtx dpCtx)
Construct a list of full partition spec from Dynamic Partition Context and the directory names corresponding to these dynamic partitions. |
URI |
ArchiveUtils.HarPathHelper.getHarUri(URI original,
HadoopShims shim)
|
String |
ArchiveUtils.PartSpecInfo.getName()
Generates name for prefix partial partition specification. |
static HashMap<Byte,List<ObjectInspector>> |
JoinUtil.getObjectInspectorsFromEvaluators(Map<Byte,List<ExprNodeEvaluator>> exprEntries,
ObjectInspector[] inputObjInspector,
int posBigTableAlias)
|
ObjectInspector |
FetchOperator.getOutputObjectInspector()
Returns the output ObjectInspector; never null. |
static String |
ArchiveUtils.getPartialName(Partition p,
int level)
Get a prefix of the given partition's string representation. |
static PartitionDesc |
Utilities.getPartitionDesc(Partition part)
|
static PartitionDesc |
Utilities.getPartitionDescFromTableDesc(TableDesc tblDesc,
Partition part)
|
static RowContainer |
JoinUtil.getRowContainer(org.apache.hadoop.conf.Configuration hconf,
List<ObjectInspector> structFieldObjectInspectors,
Byte alias,
int containerSize,
Map<Byte,TableDesc> spillTableDesc,
JoinDesc conf,
boolean noFilter,
org.apache.hadoop.mapred.Reporter reporter)
|
void |
SkewJoinHandler.handleSkew(int tag)
|
protected static ObjectInspector[] |
Operator.initEvaluators(ExprNodeEvaluator[] evals,
int start,
int length,
ObjectInspector rowInspector)
Initialize an array of ExprNodeEvaluator from start, for specified length and return the result ObjectInspectors. |
protected static ObjectInspector[] |
Operator.initEvaluators(ExprNodeEvaluator[] evals,
ObjectInspector rowInspector)
Initialize an array of ExprNodeEvaluator and return the result ObjectInspectors. |
protected static StructObjectInspector |
ReduceSinkOperator.initEvaluatorsAndReturnStruct(ExprNodeEvaluator[] evals,
List<List<Integer>> distinctColIndices,
List<String> outputColNames,
int length,
ObjectInspector rowInspector)
Initializes array of ExprNodeEvaluator. |
protected static StructObjectInspector |
Operator.initEvaluatorsAndReturnStruct(ExprNodeEvaluator[] evals,
List<String> outputColName,
ObjectInspector rowInspector)
Initialize an array of ExprNodeEvaluator and put the return values into a StructObjectInspector with integer field names. |
void |
Operator.initialize(org.apache.hadoop.conf.Configuration hconf,
ObjectInspector[] inputOIs)
Initializes operators only if all parents have been initialized. |
ObjectInspector |
ExprNodeNullEvaluator.initialize(ObjectInspector rowInspector)
|
ObjectInspector |
ExprNodeConstantEvaluator.initialize(ObjectInspector rowInspector)
|
abstract ObjectInspector |
ExprNodeEvaluator.initialize(ObjectInspector rowInspector)
Initialize should be called once and only once. |
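The contract stated above (initialize once, then evaluate per row) is worth making concrete. A minimal hypothetical driver, assuming the caller supplies the ExprNodeDesc, the row ObjectInspector, and the rows:

```java
import org.apache.hadoop.hive.ql.exec.ExprNodeEvaluator;
import org.apache.hadoop.hive.ql.exec.ExprNodeEvaluatorFactory;
import org.apache.hadoop.hive.ql.metadata.HiveException;
import org.apache.hadoop.hive.ql.plan.ExprNodeDesc;
import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector;

// Hypothetical driver for the initialize-once / evaluate-per-row contract.
final class EvaluatorContractSketch {
  static Object[] evaluateAll(ExprNodeDesc expr, ObjectInspector rowOI,
      Object[] rows) throws HiveException {
    ExprNodeEvaluator eval = ExprNodeEvaluatorFactory.get(expr);
    // initialize() is called exactly once; the returned ObjectInspector
    // describes how to interpret every value evaluate() produces.
    ObjectInspector outputOI = eval.initialize(rowOI);
    Object[] results = new Object[rows.length];
    for (int i = 0; i < rows.length; i++) {
      results[i] = eval.evaluate(rows[i]); // per row; may throw HiveException
    }
    return results;
  }
}
```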
ObjectInspector |
ExprNodeGenericFuncEvaluator.initialize(ObjectInspector rowInspector)
|
ObjectInspector |
ExprNodeFieldEvaluator.initialize(ObjectInspector rowInspector)
|
ObjectInspector |
ExprNodeColumnEvaluator.initialize(ObjectInspector rowInspector)
|
void |
MapOperator.initializeAsRoot(org.apache.hadoop.conf.Configuration hconf,
MapredWork mrwork)
Initializes this map op as the root of the tree. |
protected void |
Operator.initializeChildren(org.apache.hadoop.conf.Configuration hconf)
Calls initialize on each of the children with outputObjectInspector as the output row format. |
void |
SMBMapJoinOperator.initializeLocalWork(org.apache.hadoop.conf.Configuration hconf)
|
void |
Operator.initializeLocalWork(org.apache.hadoop.conf.Configuration hconf)
|
void |
SMBMapJoinOperator.initializeMapredLocalWork(MapJoinDesc conf,
org.apache.hadoop.conf.Configuration hconf,
MapredLocalWork localWork,
org.apache.commons.logging.Log l4j)
|
void |
MapOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf)
|
protected void |
HashTableSinkOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf)
|
protected void |
SelectOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf)
|
protected void |
SMBMapJoinOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf)
|
protected void |
ReduceSinkOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf)
|
protected void |
FilterOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf)
|
protected void |
CollectOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf)
|
protected void |
GroupByOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf)
|
protected void |
LimitOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf)
|
protected void |
UnionOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf)
UnionOperator will transform the input rows if the inputObjInspectors from different parents are different. |
protected void |
ScriptOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf)
|
protected void |
CommonJoinOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf)
|
protected void |
HashTableDummyOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf)
|
protected void |
FileSinkOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf)
|
protected void |
UDTFOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf)
|
protected void |
TableScanOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf)
|
protected void |
LateralViewJoinOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf)
|
protected void |
AbstractMapJoinOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf)
|
protected void |
ListSinkOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf)
|
protected void |
JoinOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf)
|
protected void |
Operator.initializeOp(org.apache.hadoop.conf.Configuration hconf)
Operator specific initialization. |
protected void |
ExtractOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf)
|
protected void |
MapJoinOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf)
|
static Object |
FunctionRegistry.invoke(Method m,
Object thisObject,
Object... arguments)
|
protected static byte |
JoinUtil.isFiltered(Object row,
List<ExprNodeEvaluator> filters,
List<ObjectInspector> ois,
int[] filterMap)
Checks the row against the given filters; the returned byte encodes which filters the row does not pass. |
void |
Operator.jobClose(org.apache.hadoop.conf.Configuration conf,
boolean success,
JobCloseFeedBack feedBack)
Unlike other operator interfaces which are called from map or reduce task, jobClose is called from the jobclient side once the job has completed. |
void |
FileSinkOperator.jobCloseOp(org.apache.hadoop.conf.Configuration hconf,
boolean success,
JobCloseFeedBack feedBack)
|
void |
JoinOperator.jobCloseOp(org.apache.hadoop.conf.Configuration hconf,
boolean success,
JobCloseFeedBack feedBack)
|
void |
Operator.jobCloseOp(org.apache.hadoop.conf.Configuration conf,
boolean success,
JobCloseFeedBack feedBack)
|
static void |
ExecDriver.main(String[] args)
|
static void |
Utilities.mvFileToFinalPath(String specPath,
org.apache.hadoop.conf.Configuration hconf,
boolean success,
org.apache.commons.logging.Log log,
DynamicPartitionCtx dpCtx,
FileSinkDesc conf,
org.apache.hadoop.mapred.Reporter reporter)
|
protected GenericUDAFEvaluator.AggregationBuffer[] |
GroupByOperator.newAggregations()
|
void |
Operator.process(Object row,
int tag)
Process the row. |
void |
MapOperator.process(org.apache.hadoop.io.Writable value)
|
void |
MapOperator.processOp(Object row,
int tag)
|
void |
HashTableSinkOperator.processOp(Object row,
int tag)
|
void |
SelectOperator.processOp(Object row,
int tag)
|
void |
SMBMapJoinOperator.processOp(Object row,
int tag)
|
void |
ReduceSinkOperator.processOp(Object row,
int tag)
|
void |
FilterOperator.processOp(Object row,
int tag)
|
void |
ForwardOperator.processOp(Object row,
int tag)
|
void |
CollectOperator.processOp(Object row,
int tag)
|
void |
GroupByOperator.processOp(Object row,
int tag)
|
void |
LimitOperator.processOp(Object row,
int tag)
|
void |
UnionOperator.processOp(Object row,
int tag)
|
void |
ScriptOperator.processOp(Object row,
int tag)
|
void |
HashTableDummyOperator.processOp(Object row,
int tag)
|
void |
LateralViewForwardOperator.processOp(Object row,
int tag)
|
void |
FileSinkOperator.processOp(Object row,
int tag)
|
void |
UDTFOperator.processOp(Object row,
int tag)
|
void |
TableScanOperator.processOp(Object row,
int tag)
Other than gathering statistics for the ANALYZE command, the table scan operator simply forwards the row. |
void |
LateralViewJoinOperator.processOp(Object row,
int tag)
An important assumption for processOp() is that for a given row from the TS, the LVJ will first get the row from the left select operator, followed by all the corresponding rows from the UDTF operator. |
void |
ListSinkOperator.processOp(Object row,
int tag)
|
void |
JoinOperator.processOp(Object row,
int tag)
|
abstract void |
Operator.processOp(Object row,
int tag)
Process the row. |
void |
ExtractOperator.processOp(Object row,
int tag)
|
void |
MapJoinOperator.processOp(Object row,
int tag)
|
boolean |
FetchOperator.pushRow()
Get the next row and push it down to the operator tree. |
static void |
Utilities.rename(org.apache.hadoop.fs.FileSystem fs,
org.apache.hadoop.fs.Path src,
org.apache.hadoop.fs.Path dst)
Rename src to dst; if dst already exists, move the files in src into dst. |
static void |
Utilities.renameOrMoveFiles(org.apache.hadoop.fs.FileSystem fs,
org.apache.hadoop.fs.Path src,
org.apache.hadoop.fs.Path dst)
Rename src to dst; if dst already exists, move the files in src into dst. |
protected void |
GroupByOperator.resetAggregations(GenericUDAFEvaluator.AggregationBuffer[] aggs)
|
void |
MapOperator.setChildren(org.apache.hadoop.conf.Configuration hconf)
|
int |
DDLTask.showColumns(Hive db,
ShowColumnsDesc showCols)
|
void |
GroupByOperator.startGroup()
|
void |
CommonJoinOperator.startGroup()
|
void |
Operator.startGroup()
|
static void |
FunctionRegistry.unregisterTemporaryUDF(String functionName)
|
protected void |
GroupByOperator.updateAggregations(GenericUDAFEvaluator.AggregationBuffer[] aggs,
Object row,
ObjectInspector rowInspector,
boolean hashAggr,
boolean newEntryForHashAggr,
Object[][] lastInvoke)
|
Constructors in org.apache.hadoop.hive.ql.exec that throw HiveException | |
---|---|
ArchiveUtils.HarPathHelper(HiveConf hconf,
URI archive,
URI originalBase)
Creates helper for archive. |
|
ExecDriver(MapredWork plan,
org.apache.hadoop.mapred.JobConf job,
boolean isSilent)
Constructor/Initialization for invocation as independent utility. |
|
MapredLocalTask(MapredLocalWork plan,
org.apache.hadoop.mapred.JobConf job,
boolean isSilent)
|
|
MapRedTask(MapredWork plan,
org.apache.hadoop.mapred.JobConf job,
boolean isSilent)
|
Uses of HiveException in org.apache.hadoop.hive.ql.exec.persistence |
---|
Methods in org.apache.hadoop.hive.ql.exec.persistence that throw HiveException | |
---|---|
abstract void |
AbstractRowContainer.add(Row t)
|
void |
RowContainer.add(Row t)
|
void |
MapJoinRowContainer.add(Row t)
|
abstract void |
AbstractRowContainer.clear()
Remove all elements in the RowContainer. |
void |
HashMapWrapper.clear()
|
void |
RowContainer.clear()
Remove all elements in the RowContainer. |
void |
MapJoinRowContainer.clear()
Remove all elements in the RowContainer. |
void |
HashMapWrapper.close()
Close the persistent hash table and clean it up. |
void |
RowContainer.copyToDFSDirecory(org.apache.hadoop.fs.FileSystem destFs,
org.apache.hadoop.fs.Path destPath)
|
abstract Row |
AbstractRowContainer.first()
|
Row |
RowContainer.first()
|
Row |
MapJoinRowContainer.first()
|
abstract Row |
AbstractRowContainer.next()
|
Row |
RowContainer.next()
|
Row |
MapJoinRowContainer.next()
|
boolean |
HashMapWrapper.put(K key,
V value)
|
void |
MapJoinRowContainer.reset(MapJoinRowContainer<Object[]> other)
|
Constructors in org.apache.hadoop.hive.ql.exec.persistence that throw HiveException | |
---|---|
RowContainer(org.apache.hadoop.conf.Configuration jc,
org.apache.hadoop.mapred.Reporter reporter)
|
|
RowContainer(int bs,
org.apache.hadoop.conf.Configuration jc,
org.apache.hadoop.mapred.Reporter reporter)
|
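The first()/next() pair above implies a simple cursor-style iteration protocol: first() returns the first row (or null when the container is empty) and next() returns subsequent rows until it yields null. A hypothetical helper, not part of Hive:

```java
import org.apache.hadoop.hive.ql.exec.persistence.AbstractRowContainer;
import org.apache.hadoop.hive.ql.metadata.HiveException;

// Hypothetical helper showing the cursor protocol of AbstractRowContainer.
final class RowContainerScanSketch {
  static <R> int countRows(AbstractRowContainer<R> container) throws HiveException {
    int count = 0;
    // Both first() and next() may spill to disk and throw HiveException.
    for (R row = container.first(); row != null; row = container.next()) {
      count++; // a real caller would process the row here
    }
    return count;
  }
}
```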
Uses of HiveException in org.apache.hadoop.hive.ql.index |
---|
Methods in org.apache.hadoop.hive.ql.index that throw HiveException | |
---|---|
void |
AggregateIndexHandler.analyzeIndexDefinition(Table baseTable,
Index idx,
Table indexTable)
|
void |
HiveIndexHandler.analyzeIndexDefinition(Table baseTable,
Index index,
Table indexTable)
Requests that the handler validate an index definition and fill in additional information about its stored representation. |
boolean |
HiveIndexResult.contains(org.apache.hadoop.mapred.FileSplit split)
|
List<Task<?>> |
TableBasedIndexHandler.generateIndexBuildTaskList(Table baseTbl,
Index index,
List<Partition> indexTblPartitions,
List<Partition> baseTblPartitions,
Table indexTbl,
Set<ReadEntity> inputs,
Set<WriteEntity> outputs)
|
List<Task<?>> |
HiveIndexHandler.generateIndexBuildTaskList(Table baseTbl,
Index index,
List<Partition> indexTblPartitions,
List<Partition> baseTblPartitions,
Table indexTbl,
Set<ReadEntity> inputs,
Set<WriteEntity> outputs)
Requests that the handler generate a plan for building the index; the plan should read the base table and write out the index representation. |
protected abstract Task<?> |
TableBasedIndexHandler.getIndexBuilderMapRedTask(Set<ReadEntity> inputs,
Set<WriteEntity> outputs,
List<FieldSchema> indexField,
boolean partitioned,
PartitionDesc indexTblPartDesc,
String indexTableName,
PartitionDesc baseTablePartDesc,
String baseTableName,
String dbName)
|
Constructors in org.apache.hadoop.hive.ql.index that throw HiveException | |
---|---|
HiveIndexResult(List<String> indexFiles,
org.apache.hadoop.mapred.JobConf conf)
|
Uses of HiveException in org.apache.hadoop.hive.ql.index.bitmap |
---|
Methods in org.apache.hadoop.hive.ql.index.bitmap that throw HiveException | |
---|---|
void |
BitmapIndexHandler.analyzeIndexDefinition(Table baseTable,
Index index,
Table indexTable)
|
protected Task<?> |
BitmapIndexHandler.getIndexBuilderMapRedTask(Set<ReadEntity> inputs,
Set<WriteEntity> outputs,
List<FieldSchema> indexField,
boolean partitioned,
PartitionDesc indexTblPartDesc,
String indexTableName,
PartitionDesc baseTablePartDesc,
String baseTableName,
String dbName)
|
Uses of HiveException in org.apache.hadoop.hive.ql.index.compact |
---|
Methods in org.apache.hadoop.hive.ql.index.compact that throw HiveException | |
---|---|
void |
CompactIndexHandler.analyzeIndexDefinition(Table baseTable,
Index index,
Table indexTable)
|
protected Task<?> |
CompactIndexHandler.getIndexBuilderMapRedTask(Set<ReadEntity> inputs,
Set<WriteEntity> outputs,
List<FieldSchema> indexField,
boolean partitioned,
PartitionDesc indexTblPartDesc,
String indexTableName,
PartitionDesc baseTablePartDesc,
String baseTableName,
String dbName)
|
Uses of HiveException in org.apache.hadoop.hive.ql.io |
---|
Methods in org.apache.hadoop.hive.ql.io that throw HiveException | |
---|---|
static boolean |
HiveFileFormatUtils.checkInputFormat(org.apache.hadoop.fs.FileSystem fs,
HiveConf conf,
Class<? extends org.apache.hadoop.mapred.InputFormat> inputFormatCls,
ArrayList<org.apache.hadoop.fs.FileStatus> files)
Checks whether the files are in the same format as the given input format. |
static FileSinkOperator.RecordWriter |
HiveFileFormatUtils.getHiveRecordWriter(org.apache.hadoop.mapred.JobConf jc,
TableDesc tableInfo,
Class<? extends org.apache.hadoop.io.Writable> outputClass,
FileSinkDesc conf,
org.apache.hadoop.fs.Path outPath,
org.apache.hadoop.mapred.Reporter reporter)
|
static FileSinkOperator.RecordWriter |
HiveFileFormatUtils.getRecordWriter(org.apache.hadoop.mapred.JobConf jc,
HiveOutputFormat<?,?> hiveOutputFormat,
Class<? extends org.apache.hadoop.io.Writable> valueClass,
boolean isCompressed,
Properties tableProp,
org.apache.hadoop.fs.Path outPath,
org.apache.hadoop.mapred.Reporter reporter)
|
Uses of HiveException in org.apache.hadoop.hive.ql.io.rcfile.merge |
---|
Methods in org.apache.hadoop.hive.ql.io.rcfile.merge that throw HiveException | |
---|---|
static org.apache.hadoop.fs.Path |
RCFileMergeMapper.backupOutputPath(org.apache.hadoop.fs.FileSystem fs,
org.apache.hadoop.fs.Path outpath,
org.apache.hadoop.mapred.JobConf job)
|
static void |
RCFileMergeMapper.jobClose(String outputPath,
boolean success,
org.apache.hadoop.mapred.JobConf job,
SessionState.LogHelper console,
DynamicPartitionCtx dynPartCtx,
org.apache.hadoop.mapred.Reporter reporter)
|
Uses of HiveException in org.apache.hadoop.hive.ql.lockmgr |
---|
Subclasses of HiveException in org.apache.hadoop.hive.ql.lockmgr | |
---|---|
class |
LockException
Exception from lock manager. |
Uses of HiveException in org.apache.hadoop.hive.ql.metadata |
---|
Subclasses of HiveException in org.apache.hadoop.hive.ql.metadata | |
---|---|
class |
InvalidTableException
Exception thrown when a specified table is not found. |
Methods in org.apache.hadoop.hive.ql.metadata that throw HiveException | |
---|---|
void |
Hive.alterDatabase(String dbName,
Database db)
|
void |
Hive.alterIndex(String dbName,
String baseTblName,
String idxName,
Index newIdx)
Updates the existing index metadata with the new metadata. |
void |
Hive.alterPartition(String tblName,
Partition newPart)
Updates the existing partition metadata with the new metadata. |
void |
Hive.alterPartitions(String tblName,
List<Partition> newParts)
Updates the existing partitions' metadata with the new metadata. |
void |
Hive.alterTable(String tblName,
Table newTbl)
Updates the existing table metadata with the new metadata. |
void |
HiveMetaStoreChecker.checkMetastore(String dbName,
String tableName,
List<? extends Map<String,String>> partitions,
CheckResult result)
Check the metastore for inconsistencies: data missing in either the metastore or on the DFS. |
void |
Table.checkValidity()
|
Table |
Table.copy()
|
protected static void |
Hive.copyFiles(HiveConf conf,
org.apache.hadoop.fs.Path srcf,
org.apache.hadoop.fs.Path destf,
org.apache.hadoop.fs.FileSystem fs)
|
protected void |
Table.copyFiles(org.apache.hadoop.fs.Path srcf)
Inserts the specified files into the partition. |
void |
Hive.createDatabase(Database db)
Create a Database. |
void |
Hive.createDatabase(Database db,
boolean ifNotExist)
Create a database |
void |
Hive.createIndex(String tableName,
String indexName,
String indexHandlerClass,
List<String> indexedCols,
String indexTblName,
boolean deferredRebuild,
String inputFormat,
String outputFormat,
String serde,
String storageHandler,
String location,
Map<String,String> idxProps,
Map<String,String> tblProps,
Map<String,String> serdeProps,
String collItemDelim,
String fieldDelim,
String fieldEscape,
String lineDelim,
String mapKeyDelim,
String indexComment)
|
Partition |
Hive.createPartition(Table tbl,
Map<String,String> partSpec)
Creates a partition. |
Partition |
Hive.createPartition(Table tbl,
Map<String,String> partSpec,
org.apache.hadoop.fs.Path location,
Map<String,String> partParams,
String inputFormat,
String outputFormat,
int numBuckets,
List<FieldSchema> cols,
String serializationLib,
Map<String,String> serdeParams,
List<String> bucketCols,
List<Order> sortCols)
Creates a partition |
void |
Hive.createRole(String roleName,
String ownerName)
|
void |
Hive.createTable(String tableName,
List<String> columns,
List<String> partCols,
Class<? extends org.apache.hadoop.mapred.InputFormat> fileInputFormat,
Class<?> fileOutputFormat)
Creates the table metadata and the directory for the table data. |
void |
Hive.createTable(String tableName,
List<String> columns,
List<String> partCols,
Class<? extends org.apache.hadoop.mapred.InputFormat> fileInputFormat,
Class<?> fileOutputFormat,
int bucketCount,
List<String> bucketCols)
Creates the table metadata and the directory for the table data. |
void |
Hive.createTable(Table tbl)
Creates the table from the given Table object. |
void |
Hive.createTable(Table tbl,
boolean ifNotExists)
Creates the table from the given Table object. |
boolean |
Hive.databaseExists(String dbName)
Query metadata to see if a database with the given name already exists. |
boolean |
Hive.deletePartitionColumnStatistics(String dbName,
String tableName,
String partName,
String colName)
|
boolean |
Hive.deleteTableColumnStatistics(String dbName,
String tableName,
String colName)
|
void |
Hive.dropDatabase(String name)
Drop a database. |
void |
Hive.dropDatabase(String name,
boolean deleteData,
boolean ignoreUnknownDb)
Drop a database |
void |
Hive.dropDatabase(String name,
boolean deleteData,
boolean ignoreUnknownDb,
boolean cascade)
Drop a database |
boolean |
Hive.dropIndex(String db_name,
String tbl_name,
String index_name,
boolean deleteData)
|
boolean |
Hive.dropPartition(String tblName,
List<String> part_vals,
boolean deleteData)
|
boolean |
Hive.dropPartition(String db_name,
String tbl_name,
List<String> part_vals,
boolean deleteData)
|
void |
Hive.dropRole(String roleName)
|
void |
Hive.dropTable(String tableName)
Drops table along with the data in it. |
void |
Hive.dropTable(String dbName,
String tableName)
Drops table along with the data in it. |
void |
Hive.dropTable(String dbName,
String tableName,
boolean deleteData,
boolean ignoreUnknownTab)
Drops the table. |
PrincipalPrivilegeSet |
Hive.get_privilege_set(HiveObjectType objectType,
String db_name,
String table_name,
List<String> part_values,
String column_name,
String user_name,
List<String> group_names)
|
static Hive |
Hive.get()
|
static Hive |
Hive.get(HiveConf c)
Gets the Hive object for the current thread. |
static Hive |
Hive.get(HiveConf c,
boolean needsRefresh)
Get a connection to the metastore. |
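A typical call sequence through these methods, sketched for illustration; the helper class is hypothetical, the database and table names are placeholders, and error handling is simplified:

```java
import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hadoop.hive.ql.metadata.Hive;
import org.apache.hadoop.hive.ql.metadata.HiveException;
import org.apache.hadoop.hive.ql.metadata.InvalidTableException;
import org.apache.hadoop.hive.ql.metadata.Table;

// Hypothetical lookup helper built on the thread-local Hive object.
final class MetastoreLookupSketch {
  static Table findTable(String dbName, String tableName) throws HiveException {
    Hive db = Hive.get(new HiveConf()); // thread-local metastore connection
    try {
      return db.getTable(dbName, tableName);
    } catch (InvalidTableException missing) {
      // InvalidTableException (a HiveException subclass listed above)
      // signals that the table does not exist.
      return null;
    }
  }
}
```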
List<String> |
Hive.getAllDatabases()
Get all existing database names. |
List<Index> |
Table.getAllIndexes(short max)
|
List<String> |
Hive.getAllRoleNames()
Get all existing role names. |
List<String> |
Hive.getAllTables()
Get all table names for the current database. |
List<String> |
Hive.getAllTables(String dbName)
Get all table names for the specified database. |
static HiveAuthenticationProvider |
HiveUtils.getAuthenticator(org.apache.hadoop.conf.Configuration conf,
HiveConf.ConfVars authenticatorConfKey)
|
HiveAuthorizationProvider |
HiveStorageHandler.getAuthorizationProvider()
Returns the implementation-specific authorization provider. |
HiveAuthorizationProvider |
DefaultStorageHandler.getAuthorizationProvider()
|
static HiveAuthorizationProvider |
HiveUtils.getAuthorizeProviderManager(org.apache.hadoop.conf.Configuration conf,
HiveConf.ConfVars authorizationProviderConfKey,
HiveAuthenticationProvider authenticator)
|
Database |
Hive.getDatabase(String dbName)
Get the database by name. |
List<String> |
Hive.getDatabasesByPattern(String databasePattern)
Get all existing databases that match the given pattern. |
static List<FieldSchema> |
Hive.getFieldsFromDeserializer(String name,
Deserializer serde)
|
Index |
Hive.getIndex(String qualifiedIndexName)
|
Index |
Hive.getIndex(String baseTableName,
String indexName)
|
Index |
Hive.getIndex(String dbName,
String baseTableName,
String indexName)
|
List<Index> |
Hive.getIndexes(String dbName,
String tblName,
short max)
|
static HiveIndexHandler |
HiveUtils.getIndexHandler(HiveConf conf,
String indexHandlerClass)
|
Class<? extends org.apache.hadoop.mapred.InputFormat> |
Partition.getInputFormatClass()
|
Class<? extends HiveOutputFormat> |
Partition.getOutputFormatClass()
|
Partition |
Hive.getPartition(Table tbl,
Map<String,String> partSpec,
boolean forceCreate)
|
Partition |
Hive.getPartition(Table tbl,
Map<String,String> partSpec,
boolean forceCreate,
String partPath,
boolean inheritTableSpecs)
Returns partition metadata |
ColumnStatistics |
Hive.getPartitionColumnStatistics(String dbName,
String tableName,
String partName,
String colName)
|
List<String> |
Hive.getPartitionNames(String tblName,
short max)
|
List<String> |
Hive.getPartitionNames(String dbName,
String tblName,
Map<String,String> partSpec,
short max)
|
List<String> |
Hive.getPartitionNames(String dbName,
String tblName,
short max)
|
List<Partition> |
Hive.getPartitions(Table tbl)
Get all the partitions that the table has. |
List<Partition> |
Hive.getPartitions(Table tbl,
Map<String,String> partialPartSpec)
Get all the partitions of the table that match the given partial specification. |
List<Partition> |
Hive.getPartitions(Table tbl,
Map<String,String> partialPartSpec,
short limit)
Get all the partitions of the table that match the given partial specification. |
List<Partition> |
Hive.getPartitionsByFilter(Table tbl,
String filter)
Get a list of Partitions by filter. |
List<Partition> |
Hive.getPartitionsByNames(Table tbl,
List<String> partNames)
Get all partitions of the table that match the list of given partition names. |
List<Partition> |
Hive.getPartitionsByNames(Table tbl,
Map<String,String> partialPartSpec)
Get all the partitions of the table that match the given partial specification. |
org.apache.hadoop.fs.Path[] |
Partition.getPath(Sample s)
|
static HiveStorageHandler |
HiveUtils.getStorageHandler(org.apache.hadoop.conf.Configuration conf,
String className)
|
Table |
Hive.getTable(String tableName)
Returns metadata for the table named tableName |
Table |
Hive.getTable(String tableName,
boolean throwException)
Returns metadata for the table named tableName |
Table |
Hive.getTable(String dbName,
String tableName)
Returns metadata of the table |
Table |
Hive.getTable(String dbName,
String tableName,
boolean throwException)
Returns metadata of the table |
ColumnStatistics |
Hive.getTableColumnStatistics(String dbName,
String tableName,
String colName)
|
List<String> |
Hive.getTablesByPattern(String tablePattern)
Returns all existing tables from default database which match the given pattern. |
List<String> |
Hive.getTablesByPattern(String dbName,
String tablePattern)
Returns all existing tables from the specified database which match the given pattern. |
List<String> |
Hive.getTablesForDb(String database,
String tablePattern)
Returns all existing tables from the given database which match the given pattern. |
boolean |
Hive.grantPrivileges(PrivilegeBag privileges)
|
boolean |
Hive.grantRole(String roleName,
String userName,
PrincipalType principalType,
String grantor,
PrincipalType grantorType,
boolean grantOption)
|
boolean |
Table.isValidSpec(Map<String,String> spec)
|
List<Role> |
Hive.listRoles(String userName,
PrincipalType principalType)
|
ArrayList<LinkedHashMap<String,String>> |
Hive.loadDynamicPartitions(org.apache.hadoop.fs.Path loadPath,
String tableName,
Map<String,String> partSpec,
boolean replace,
int numDP,
boolean holdDDLTime)
Given a source directory name of the load path, load all dynamically generated partitions into the specified table and return a list of strings that represent the dynamic partition paths. |
void |
Hive.loadPartition(org.apache.hadoop.fs.Path loadPath,
String tableName,
Map<String,String> partSpec,
boolean replace,
boolean holdDDLTime,
boolean inheritTableSpecs)
Load a directory into a Hive Table Partition - Alters existing content of the partition with the contents of loadPath. |
void |
Hive.loadTable(org.apache.hadoop.fs.Path loadPath,
String tableName,
boolean replace,
boolean holdDDLTime)
Load a directory into a Hive Table. |
Table |
Hive.newTable(String tableName)
|
void |
Hive.renamePartition(Table tbl,
Map<String,String> oldPartSpec,
Partition newPart)
Rename an old partition to a new partition. |
protected void |
Table.replaceFiles(org.apache.hadoop.fs.Path srcf)
Replaces the directory corresponding to the table with srcf. |
protected static void |
Hive.replaceFiles(org.apache.hadoop.fs.Path srcf,
org.apache.hadoop.fs.Path destf,
org.apache.hadoop.fs.Path oldPath,
HiveConf conf)
Replaces files in the partition with new data set specified by srcf. |
boolean |
Hive.revokePrivileges(PrivilegeBag privileges)
|
boolean |
Hive.revokeRole(String roleName,
String userName,
PrincipalType principalType)
|
void |
Table.setBucketCols(List<String> bucketCols)
|
void |
Table.setInputFormatClass(String name)
|
void |
Table.setOutputFormatClass(String name)
|
void |
Table.setSkewedColNames(List<String> skewedColNames)
|
void |
Table.setSkewedColValues(List<List<String>> skewedValues)
|
void |
Table.setSkewedInfo(SkewedInfo skewedInfo)
|
void |
Partition.setSkewedValueLocationMap(List<String> valList,
String dirName)
|
void |
Table.setSkewedValueLocationMap(List<String> valList,
String dirName)
|
void |
Table.setSortCols(List<Order> sortOrder)
|
void |
Table.setStoredAsSubDirectories(boolean storedAsSubDirectories)
|
void |
Partition.setValues(Map<String,String> partSpec)
Set Partition's values |
List<HiveObjectPrivilege> |
Hive.showPrivilegeGrant(HiveObjectType objectType,
String principalName,
PrincipalType principalType,
String dbName,
String tableName,
List<String> partValues,
String columnName)
|
List<Role> |
Hive.showRoleGrant(String principalName,
PrincipalType principalType)
|
void |
TestPartition.testPartition()
Test that the Partition spec is created properly. |
void |
TestHiveMetaStoreChecker.testPartitionsCheck()
|
void |
TestHiveMetaStoreChecker.testTableCheck()
|
boolean |
Hive.updatePartitionColumnStatistics(ColumnStatistics statsObj)
|
boolean |
Hive.updateTableColumnStatistics(ColumnStatistics statsObj)
|
Constructors in org.apache.hadoop.hive.ql.metadata that throw HiveException | |
---|---|
DummyPartition(Table tbl,
String name)
|
|
DummyPartition(Table tbl,
String name,
Map<String,String> partSpec)
|
|
Partition(Table tbl)
create an empty partition. |
|
Partition(Table tbl,
Map<String,String> partSpec,
org.apache.hadoop.fs.Path location)
Create partition object with the given info. |
|
Partition(Table tbl,
Partition tp)
|
|
Sample(int num,
int fraction,
Dimension d)
|
Uses of HiveException in org.apache.hadoop.hive.ql.metadata.formatting |
---|
Methods in org.apache.hadoop.hive.ql.metadata.formatting that throw HiveException | |
---|---|
void |
JsonMetaDataFormatter.asJson(OutputStream out,
Map<String,Object> data)
Convert the map to a JSON string. |
void |
JsonMetaDataFormatter.describeTable(DataOutputStream out,
String colPath,
String tableName,
Table tbl,
Partition part,
List<FieldSchema> cols,
boolean isFormatted,
boolean isExt)
Describe table. |
void |
MetaDataFormatter.describeTable(DataOutputStream out,
String colPath,
String tableName,
Table tbl,
Partition part,
List<FieldSchema> cols,
boolean isFormatted,
boolean isExt)
Describe table. |
void |
TextMetaDataFormatter.describeTable(DataOutputStream outStream,
String colPath,
String tableName,
Table tbl,
Partition part,
List<FieldSchema> cols,
boolean isFormatted,
boolean isExt)
|
void |
JsonMetaDataFormatter.error(OutputStream out,
String msg,
int errorCode)
Write an error message. |
void |
MetaDataFormatter.error(OutputStream out,
String msg,
int errorCode)
Write an error message. |
void |
TextMetaDataFormatter.error(OutputStream out,
String msg,
int errorCode)
Write an error message. |
void |
JsonMetaDataFormatter.logInfo(OutputStream out,
String msg,
int errorCode)
Write a log info message. |
void |
MetaDataFormatter.logInfo(OutputStream out,
String msg,
int errorCode)
Write a log info message. |
void |
TextMetaDataFormatter.logInfo(OutputStream out,
String msg,
int errorCode)
Write a log info message. |
void |
JsonMetaDataFormatter.logWarn(OutputStream out,
String msg,
int errorCode)
Write a log warn message. |
void |
MetaDataFormatter.logWarn(OutputStream out,
String msg,
int errorCode)
Write a log warn message. |
void |
TextMetaDataFormatter.logWarn(OutputStream out,
String msg,
int errorCode)
Write a log warn message. |
void |
JsonMetaDataFormatter.showDatabaseDescription(DataOutputStream out,
String database,
String comment,
String location,
Map<String,String> params)
Show the description of a database |
void |
MetaDataFormatter.showDatabaseDescription(DataOutputStream out,
String database,
String comment,
String location,
Map<String,String> params)
Describe a database. |
void |
TextMetaDataFormatter.showDatabaseDescription(DataOutputStream outStream,
String database,
String comment,
String location,
Map<String,String> params)
Describe a database |
void |
JsonMetaDataFormatter.showDatabases(DataOutputStream out,
List<String> databases)
Show a list of databases |
void |
MetaDataFormatter.showDatabases(DataOutputStream out,
List<String> databases)
Show the databases |
void |
TextMetaDataFormatter.showDatabases(DataOutputStream outStream,
List<String> databases)
Show the list of databases |
void |
JsonMetaDataFormatter.showTablePartitons(DataOutputStream out,
List<String> parts)
Show the table partitions. |
void |
MetaDataFormatter.showTablePartitons(DataOutputStream out,
List<String> parts)
Show the table partitions. |
void |
TextMetaDataFormatter.showTablePartitons(DataOutputStream outStream,
List<String> parts)
Show the table partitions. |
void |
JsonMetaDataFormatter.showTables(DataOutputStream out,
Set<String> tables)
Show a list of tables. |
void |
MetaDataFormatter.showTables(DataOutputStream out,
Set<String> tables)
Show a list of tables. |
void |
TextMetaDataFormatter.showTables(DataOutputStream out,
Set<String> tables)
Show a list of tables. |
void |
JsonMetaDataFormatter.showTableStatus(DataOutputStream out,
Hive db,
HiveConf conf,
List<Table> tbls,
Map<String,String> part,
Partition par)
|
void |
MetaDataFormatter.showTableStatus(DataOutputStream out,
Hive db,
HiveConf conf,
List<Table> tbls,
Map<String,String> part,
Partition par)
Show the table status. |
void |
TextMetaDataFormatter.showTableStatus(DataOutputStream outStream,
Hive db,
HiveConf conf,
List<Table> tbls,
Map<String,String> part,
Partition par)
|
Uses of HiveException in org.apache.hadoop.hive.ql.optimizer |
---|
Methods in org.apache.hadoop.hive.ql.optimizer that throw HiveException | |
---|---|
static Set<Partition> |
IndexUtils.checkPartitionsCoveredByIndex(TableScanOperator tableScan,
ParseContext pctx,
Map<Table,List<Index>> indexes)
Check the partitions used by the table scan to make sure they also exist in the index table. |
Uses of HiveException in org.apache.hadoop.hive.ql.optimizer.ppr |
---|
Methods in org.apache.hadoop.hive.ql.optimizer.ppr that throw HiveException | |
---|---|
static Object |
PartExprEvalUtils.evalExprWithPart(ExprNodeDesc expr,
LinkedHashMap<String,String> partSpec,
StructObjectInspector rowObjectInspector)
Evaluate expression with partition columns |
static Object |
PartExprEvalUtils.evaluateExprOnPart(Map<PrimitiveObjectInspector,ExprNodeEvaluator> pair,
Object[] rowWithPart)
|
static Map<PrimitiveObjectInspector,ExprNodeEvaluator> |
PartExprEvalUtils.prepareExpr(ExprNodeDesc expr,
List<String> partNames,
StructObjectInspector rowObjectInspector)
|
static PrunedPartitionList |
PartitionPruner.prune(Table tab,
ExprNodeDesc prunerExpr,
HiveConf conf,
String alias,
Map<String,PrunedPartitionList> prunedPartitionsMap)
Get the partition list for the table that satisfies the partition pruner condition. |
Uses of HiveException in org.apache.hadoop.hive.ql.parse |
---|
Subclasses of HiveException in org.apache.hadoop.hive.ql.parse | |
---|---|
class |
SemanticException
Exception from SemanticAnalyzer. |
Methods in org.apache.hadoop.hive.ql.parse that throw HiveException | |
---|---|
List<Task<? extends Serializable>> |
IndexUpdater.generateUpdateTasks()
|
Hive |
HiveSemanticAnalyzerHookContext.getHive()
|
Hive |
HiveSemanticAnalyzerHookContextImpl.getHive()
|
PrunedPartitionList |
ParseContext.getPrunedPartitions(String alias,
TableScanOperator ts)
|
boolean |
BaseSemanticAnalyzer.isValidPrefixSpec(Table tTable,
Map<String,String> spec)
Checks whether the given specification is a valid prefix specification of the partition columns. For a table partitioned by (ds, hr, min), valid specifications are (ds='2008-04-08'), (ds='2008-04-08', hr='12'), and (ds='2008-04-08', hr='12', min='30'); an invalid one is, for example, (ds='2008-04-08', min='30'). |
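For illustration, the valid and invalid specifications from the description above can be built as ordered maps. The helper class is hypothetical; the actual check is BaseSemanticAnalyzer.isValidPrefixSpec(table, spec):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical builders for the partition specs described above.
final class PrefixSpecSketch {
  static Map<String, String> validSpec() {
    Map<String, String> spec = new LinkedHashMap<String, String>();
    spec.put("ds", "2008-04-08");
    spec.put("hr", "12");   // (ds, hr) is a prefix of (ds, hr, min): valid
    return spec;
  }

  static Map<String, String> invalidSpec() {
    Map<String, String> spec = new LinkedHashMap<String, String>();
    spec.put("ds", "2008-04-08");
    spec.put("min", "30");  // skips hr, so it is not a prefix: invalid
    return spec;
  }
}
```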
Uses of HiveException in org.apache.hadoop.hive.ql.plan |
---|
Constructors in org.apache.hadoop.hive.ql.plan that throw HiveException | |
---|---|
PartitionDesc(Partition part)
|
|
PartitionDesc(Partition part,
TableDesc tblDesc)
|
Uses of HiveException in org.apache.hadoop.hive.ql.security |
---|
Methods in org.apache.hadoop.hive.ql.security that throw HiveException | |
---|---|
void |
DummyHiveMetastoreAuthorizationProvider.authorize(Database db,
Privilege[] readRequiredPriv,
Privilege[] writeRequiredPriv)
|
void |
DummyHiveMetastoreAuthorizationProvider.authorize(Partition part,
Privilege[] readRequiredPriv,
Privilege[] writeRequiredPriv)
|
void |
DummyHiveMetastoreAuthorizationProvider.authorize(Privilege[] readRequiredPriv,
Privilege[] writeRequiredPriv)
|
void |
DummyHiveMetastoreAuthorizationProvider.authorize(Table table,
Partition part,
List<String> columns,
Privilege[] readRequiredPriv,
Privilege[] writeRequiredPriv)
|
void |
DummyHiveMetastoreAuthorizationProvider.authorize(Table table,
Privilege[] readRequiredPriv,
Privilege[] writeRequiredPriv)
|
void |
InjectableDummyAuthenticator.destroy()
|
void |
DummyAuthenticator.destroy()
|
void |
HiveAuthenticationProvider.destroy()
|
void |
HadoopDefaultAuthenticator.destroy()
|
void |
DummyHiveMetastoreAuthorizationProvider.init(org.apache.hadoop.conf.Configuration conf)
|
Uses of HiveException in org.apache.hadoop.hive.ql.security.authorization |
---|
Methods in org.apache.hadoop.hive.ql.security.authorization that throw HiveException | |
---|---|
void |
HiveAuthorizationProvider.authorize(Database db,
Privilege[] readRequiredPriv,
Privilege[] writeRequiredPriv)
Authorize privileges against a database object. |
void |
StorageBasedAuthorizationProvider.authorize(Database db,
Privilege[] readRequiredPriv,
Privilege[] writeRequiredPriv)
|
void |
BitSetCheckedAuthorizationProvider.authorize(Database db,
Privilege[] inputRequiredPriv,
Privilege[] outputRequiredPriv)
|
void |
HiveAuthorizationProvider.authorize(Partition part,
Privilege[] readRequiredPriv,
Privilege[] writeRequiredPriv)
Authorize privileges against a Hive partition object. |
void |
StorageBasedAuthorizationProvider.authorize(Partition part,
Privilege[] readRequiredPriv,
Privilege[] writeRequiredPriv)
|
void |
BitSetCheckedAuthorizationProvider.authorize(Partition part,
Privilege[] inputRequiredPriv,
Privilege[] outputRequiredPriv)
|
void |
StorageBasedAuthorizationProvider.authorize(org.apache.hadoop.fs.Path path,
Privilege[] readRequiredPriv,
Privilege[] writeRequiredPriv)
Authorize privileges against a path. |
void |
HiveAuthorizationProvider.authorize(Privilege[] readRequiredPriv,
Privilege[] writeRequiredPriv)
Authorize user-level privileges. |
void |
StorageBasedAuthorizationProvider.authorize(Privilege[] readRequiredPriv,
Privilege[] writeRequiredPriv)
|
void |
BitSetCheckedAuthorizationProvider.authorize(Privilege[] inputRequiredPriv,
Privilege[] outputRequiredPriv)
|
void |
HiveAuthorizationProvider.authorize(Table table,
Partition part,
List<String> columns,
Privilege[] readRequiredPriv,
Privilege[] writeRequiredPriv)
Authorize privileges against a list of columns. |
void |
StorageBasedAuthorizationProvider.authorize(Table table,
Partition part,
List<String> columns,
Privilege[] readRequiredPriv,
Privilege[] writeRequiredPriv)
|
void |
BitSetCheckedAuthorizationProvider.authorize(Table table,
Partition part,
List<String> columns,
Privilege[] inputRequiredPriv,
Privilege[] outputRequiredPriv)
|
void |
HiveAuthorizationProvider.authorize(Table table,
Privilege[] readRequiredPriv,
Privilege[] writeRequiredPriv)
Authorize privileges against a Hive table object. |
void |
StorageBasedAuthorizationProvider.authorize(Table table,
Privilege[] readRequiredPriv,
Privilege[] writeRequiredPriv)
|
void |
BitSetCheckedAuthorizationProvider.authorize(Table table,
Privilege[] inputRequiredPriv,
Privilege[] outputRequiredPriv)
|
protected boolean |
BitSetCheckedAuthorizationProvider.authorizePrivileges(PrincipalPrivilegeSet privileges,
Privilege[] inputPriv,
boolean[] inputCheck,
Privilege[] outputPriv,
boolean[] outputCheck)
|
protected boolean |
BitSetCheckedAuthorizationProvider.authorizeUserPriv(Privilege[] inputRequiredPriv,
boolean[] inputCheck,
Privilege[] outputRequiredPriv,
boolean[] outputCheck)
|
PrincipalPrivilegeSet |
HiveAuthorizationProviderBase.HiveProxy.get_privilege_set(HiveObjectType column,
String dbName,
String tableName,
List<String> partValues,
String col,
String userName,
List<String> groupNames)
|
Database |
HiveAuthorizationProviderBase.HiveProxy.getDatabase(String dbName)
|
protected org.apache.hadoop.fs.Path |
StorageBasedAuthorizationProvider.getDbLocation(Database db)
|
void |
DefaultHiveMetastoreAuthorizationProvider.init(org.apache.hadoop.conf.Configuration conf)
|
void |
HiveAuthorizationProvider.init(org.apache.hadoop.conf.Configuration conf)
|
void |
DefaultHiveAuthorizationProvider.init(org.apache.hadoop.conf.Configuration conf)
|
void |
StorageBasedAuthorizationProvider.init(org.apache.hadoop.conf.Configuration conf)
|
Constructors in org.apache.hadoop.hive.ql.security.authorization that throw HiveException | |
---|---|
AuthorizationPreEventListener(org.apache.hadoop.conf.Configuration config)
|
Uses of HiveException in org.apache.hadoop.hive.ql.session |
---|
Methods in org.apache.hadoop.hive.ql.session that throw HiveException | |
---|---|
static CreateTableAutomaticGrant |
CreateTableAutomaticGrant.create(HiveConf conf)
|
Uses of HiveException in org.apache.hadoop.hive.ql.udf.generic |
---|
Methods in org.apache.hadoop.hive.ql.udf.generic that throw HiveException | |
---|---|
void |
NGramEstimator.add(ArrayList<String> ng)
Adds a new n-gram to the estimation. |
void |
GenericUDAFEvaluator.aggregate(GenericUDAFEvaluator.AggregationBuffer agg,
Object[] parameters)
This function will be called by GroupByOperator when it sees a new input row. |
abstract void |
GenericUDTF.close()
Called to notify the UDTF that there are no more rows to process. |
void |
GenericUDTFParseUrlTuple.close()
|
void |
GenericUDTFExplode.close()
|
void |
GenericUDTFStack.close()
|
void |
GenericUDTFInline.close()
|
void |
GenericUDTFJSONTuple.close()
|
void |
UDTFCollector.collect(Object input)
|
void |
Collector.collect(Object input)
Other classes call collect() with the data they have. |
Integer |
GenericUDFBaseCompare.compare(GenericUDF.DeferredObject[] arguments)
|
Object |
GenericUDAFEvaluator.evaluate(GenericUDAFEvaluator.AggregationBuffer agg)
Returns the aggregation result: the terminatePartial() output in the PARTIAL modes, the terminate() output otherwise. |
Object |
GenericUDFTestGetJavaBoolean.evaluate(GenericUDF.DeferredObject[] arguments)
|
Object |
GenericUDFTestGetJavaString.evaluate(GenericUDF.DeferredObject[] arguments)
|
Object |
GenericUDFTestTranslate.evaluate(GenericUDF.DeferredObject[] arguments)
|
Object |
GenericUDFEvaluateNPE.evaluate(GenericUDF.DeferredObject[] arguments)
|
Object |
GenericUDFConcatWS.evaluate(GenericUDF.DeferredObject[] arguments)
|
Object |
GenericUDFAssertTrue.evaluate(GenericUDF.DeferredObject[] arguments)
|
Object |
GenericUDFBetween.evaluate(GenericUDF.DeferredObject[] arguments)
|
Object |
GenericUDFSize.evaluate(GenericUDF.DeferredObject[] arguments)
|
Object |
GenericUDFPrintf.evaluate(GenericUDF.DeferredObject[] arguments)
|
Object |
AbstractGenericUDFEWAHBitmapBop.evaluate(GenericUDF.DeferredObject[] arguments)
|
Object |
GenericUDFHash.evaluate(GenericUDF.DeferredObject[] arguments)
|
Object |
GenericUDFArrayContains.evaluate(GenericUDF.DeferredObject[] arguments)
|
Object |
GenericUDFOPEqualOrGreaterThan.evaluate(GenericUDF.DeferredObject[] arguments)
|
Object |
GenericUDFCase.evaluate(GenericUDF.DeferredObject[] arguments)
|
Object |
GenericUDFSentences.evaluate(GenericUDF.DeferredObject[] arguments)
|
Object |
GenericUDFNamedStruct.evaluate(GenericUDF.DeferredObject[] arguments)
|
Object |
GenericUDFOPEqual.evaluate(GenericUDF.DeferredObject[] arguments)
|
Object |
GenericUDFElt.evaluate(GenericUDF.DeferredObject[] arguments)
|
Object |
GenericUDFUnion.evaluate(GenericUDF.DeferredObject[] arguments)
|
Object |
GenericUDFOPNotEqual.evaluate(GenericUDF.DeferredObject[] arguments)
|
Object |
GenericUDFCoalesce.evaluate(GenericUDF.DeferredObject[] arguments)
|
Object |
GenericUDFWhen.evaluate(GenericUDF.DeferredObject[] arguments)
|
Object |
GenericUDFTimestamp.evaluate(GenericUDF.DeferredObject[] arguments)
|
Object |
GenericUDFIf.evaluate(GenericUDF.DeferredObject[] arguments)
|
Object |
GenericUDFLocate.evaluate(GenericUDF.DeferredObject[] arguments)
|
Object |
GenericUDFMapKeys.evaluate(GenericUDF.DeferredObject[] arguments)
|
Object |
GenericUDFToBinary.evaluate(GenericUDF.DeferredObject[] arguments)
|
Object |
GenericUDFArray.evaluate(GenericUDF.DeferredObject[] arguments)
|
Object |
GenericUDFOPNull.evaluate(GenericUDF.DeferredObject[] arguments)
|
Object |
GenericUDFMap.evaluate(GenericUDF.DeferredObject[] arguments)
|
Object |
GenericUDFTranslate.evaluate(GenericUDF.DeferredObject[] arguments)
|
Object |
GenericUDFInFile.evaluate(GenericUDF.DeferredObject[] arguments)
|
Object |
GenericUDFOPNot.evaluate(GenericUDF.DeferredObject[] arguments)
|
Object |
GenericUDFInstr.evaluate(GenericUDF.DeferredObject[] arguments)
|
Object |
GenericUDFField.evaluate(GenericUDF.DeferredObject[] arguments)
|
Object |
GenericUDFOPAnd.evaluate(GenericUDF.DeferredObject[] arguments)
|
Object |
GenericUDFOPOr.evaluate(GenericUDF.DeferredObject[] arguments)
|
Object |
GenericUDFBridge.evaluate(GenericUDF.DeferredObject[] arguments)
|
Object |
GenericUDFStruct.evaluate(GenericUDF.DeferredObject[] arguments)
|
abstract Object |
GenericUDF.evaluate(GenericUDF.DeferredObject[] arguments)
Evaluate the GenericUDF with the arguments. |
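A GenericUDF can also be driven by hand, the way Hive's unit tests do, by wrapping precomputed values in GenericUDF.DeferredJavaObject. An illustrative sketch using GenericUDFConcatWS from this listing; the driver class is hypothetical:

```java
import org.apache.hadoop.hive.ql.metadata.HiveException;
import org.apache.hadoop.hive.ql.udf.generic.GenericUDF.DeferredJavaObject;
import org.apache.hadoop.hive.ql.udf.generic.GenericUDF.DeferredObject;
import org.apache.hadoop.hive.ql.udf.generic.GenericUDFConcatWS;
import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector;
import org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorFactory;
import org.apache.hadoop.io.Text;

// Hypothetical driver: initialize() once with the argument ObjectInspectors,
// then evaluate() with deferred arguments. Both calls may throw HiveException.
final class CallConcatWsSketch {
  static Object concat() throws HiveException {
    GenericUDFConcatWS udf = new GenericUDFConcatWS();
    ObjectInspector stringOI =
        PrimitiveObjectInspectorFactory.writableStringObjectInspector;
    udf.initialize(new ObjectInspector[] { stringOI, stringOI, stringOI });
    DeferredObject[] args = {
        new DeferredJavaObject(new Text("-")),   // separator
        new DeferredJavaObject(new Text("a")),
        new DeferredJavaObject(new Text("b")) };
    return udf.evaluate(args);                   // expected: Text "a-b"
  }
}
```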
Object |
GenericUDFSortArray.evaluate(GenericUDF.DeferredObject[] arguments)
|
Object |
GenericUDFEWAHBitmapEmpty.evaluate(GenericUDF.DeferredObject[] arguments)
|
Object |
GenericUDFFormatNumber.evaluate(GenericUDF.DeferredObject[] arguments)
|
Object |
GenericUDFSplit.evaluate(GenericUDF.DeferredObject[] arguments)
|
Object |
GenericUDFOPEqualOrLessThan.evaluate(GenericUDF.DeferredObject[] arguments)
|
Object |
GenericUDFMapValues.evaluate(GenericUDF.DeferredObject[] arguments)
|
Object |
GenericUDFOPEqualNS.evaluate(GenericUDF.DeferredObject[] arguments)
|
Object |
GenericUDFIn.evaluate(GenericUDF.DeferredObject[] arguments)
|
Object |
GenericUDFReflect.evaluate(GenericUDF.DeferredObject[] arguments)
|
Object |
GenericUDFOPNotNull.evaluate(GenericUDF.DeferredObject[] arguments)
|
Object |
GenericUDFOPGreaterThan.evaluate(GenericUDF.DeferredObject[] arguments)
|
Object |
GenericUDFOPLessThan.evaluate(GenericUDF.DeferredObject[] arguments)
|
Object |
GenericUDFIndex.evaluate(GenericUDF.DeferredObject[] arguments)
|
Object |
GenericUDFFromUtcTimestamp.evaluate(GenericUDF.DeferredObject[] arguments)
|
Object |
GenericUDFStringToMap.evaluate(GenericUDF.DeferredObject[] arguments)
|
protected void |
GenericUDTF.forward(Object o)
Passes an output row to the collector. |
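forward() and close() together define the UDTF output protocol: forward() emits one row downstream, and close() is the last chance to flush anything buffered. A minimal hypothetical UDTF that emits each input value twice, illustrative only and not part of Hive:

```java
import java.util.Arrays;
import org.apache.hadoop.hive.ql.exec.UDFArgumentException;
import org.apache.hadoop.hive.ql.metadata.HiveException;
import org.apache.hadoop.hive.ql.udf.generic.GenericUDTF;
import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector;
import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorFactory;
import org.apache.hadoop.hive.serde2.objectinspector.StructObjectInspector;
import org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorFactory;

// Hypothetical UDTF: one string output column, two output rows per input row.
public class GenericUDTFDuplicateSketch extends GenericUDTF {

  @Override
  public StructObjectInspector initialize(ObjectInspector[] argOIs)
      throws UDFArgumentException {
    return ObjectInspectorFactory.getStandardStructObjectInspector(
        Arrays.asList("col"),
        Arrays.<ObjectInspector>asList(
            PrimitiveObjectInspectorFactory.javaStringObjectInspector));
  }

  @Override
  public void process(Object[] args) throws HiveException {
    // A real UDTF would interpret args with the ObjectInspectors from
    // initialize(); this sketch just stringifies the first argument.
    Object[] row = new Object[] { String.valueOf(args[0]) };
    forward(row); // each forward() call pushes one output row downstream
    forward(row);
  }

  @Override
  public void close() throws HiveException {
    // Nothing buffered here; a buffering UDTF would forward() its remainder.
  }
}
```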
Object |
GenericUDF.DeferredObject.get()
|
Object |
GenericUDF.DeferredJavaObject.get()
|
GenericUDAFEvaluator.AggregationBuffer |
GenericUDAFCovariance.GenericUDAFCovarianceEvaluator.getNewAggregationBuffer()
|
GenericUDAFEvaluator.AggregationBuffer |
GenericUDAFHistogramNumeric.GenericUDAFHistogramNumericEvaluator.getNewAggregationBuffer()
|
abstract GenericUDAFEvaluator.AggregationBuffer |
GenericUDAFEvaluator.getNewAggregationBuffer()
Get a new aggregation object. |
GenericUDAFEvaluator.AggregationBuffer |
GenericUDAFPercentileApprox.GenericUDAFPercentileApproxEvaluator.getNewAggregationBuffer()
|
GenericUDAFEvaluator.AggregationBuffer |
GenericUDAFSum.GenericUDAFSumDouble.getNewAggregationBuffer()
|
GenericUDAFEvaluator.AggregationBuffer |
GenericUDAFSum.GenericUDAFSumLong.getNewAggregationBuffer()
|
GenericUDAFEvaluator.AggregationBuffer |
GenericUDAFMin.GenericUDAFMinEvaluator.getNewAggregationBuffer()
|
GenericUDAFEvaluator.AggregationBuffer |
GenericUDAFVariance.GenericUDAFVarianceEvaluator.getNewAggregationBuffer()
|
GenericUDAFEvaluator.AggregationBuffer |
GenericUDAFContextNGrams.GenericUDAFContextNGramEvaluator.getNewAggregationBuffer()
|
GenericUDAFEvaluator.AggregationBuffer |
GenericUDAFCount.GenericUDAFCountEvaluator.getNewAggregationBuffer()
|
GenericUDAFEvaluator.AggregationBuffer |
GenericUDAFCorrelation.GenericUDAFCorrelationEvaluator.getNewAggregationBuffer()
|
GenericUDAFEvaluator.AggregationBuffer |
GenericUDAFComputeStats.GenericUDAFBooleanStatsEvaluator.getNewAggregationBuffer()
|
GenericUDAFEvaluator.AggregationBuffer |
GenericUDAFComputeStats.GenericUDAFLongStatsEvaluator.getNewAggregationBuffer()
|
GenericUDAFEvaluator.AggregationBuffer |
GenericUDAFComputeStats.GenericUDAFDoubleStatsEvaluator.getNewAggregationBuffer()
|
GenericUDAFEvaluator.AggregationBuffer |
GenericUDAFComputeStats.GenericUDAFStringStatsEvaluator.getNewAggregationBuffer()
|
GenericUDAFEvaluator.AggregationBuffer |
GenericUDAFComputeStats.GenericUDAFBinaryStatsEvaluator.getNewAggregationBuffer()
|
GenericUDAFEvaluator.AggregationBuffer |
GenericUDAFMax.GenericUDAFMaxEvaluator.getNewAggregationBuffer()
|
GenericUDAFEvaluator.AggregationBuffer |
GenericUDAFCollectSet.GenericUDAFMkSetEvaluator.getNewAggregationBuffer()
|
GenericUDAFEvaluator.AggregationBuffer |
GenericUDAFEWAHBitmap.GenericUDAFEWAHBitmapEvaluator.getNewAggregationBuffer()
|
GenericUDAFEvaluator.AggregationBuffer |
GenericUDAFnGrams.GenericUDAFnGramEvaluator.getNewAggregationBuffer()
|
GenericUDAFEvaluator.AggregationBuffer |
GenericUDAFAverage.GenericUDAFAverageEvaluator.getNewAggregationBuffer()
|
ArrayList<Object[]> |
NGramEstimator.getNGrams()
Returns the final top-k n-grams in a format suitable for returning to Hive. |
protected double[] |
GenericUDAFPercentileApprox.GenericUDAFPercentileApproxEvaluator.getQuantileArray(ConstantObjectInspector quantileOI)
|
ObjectInspector |
GenericUDAFCovariance.GenericUDAFCovarianceEvaluator.init(GenericUDAFEvaluator.Mode m,
ObjectInspector[] parameters)
|
ObjectInspector |
GenericUDAFHistogramNumeric.GenericUDAFHistogramNumericEvaluator.init(GenericUDAFEvaluator.Mode m,
ObjectInspector[] parameters)
|
ObjectInspector |
GenericUDAFEvaluator.init(GenericUDAFEvaluator.Mode m,
ObjectInspector[] parameters)
Initialize the evaluator. |
ObjectInspector |
GenericUDAFPercentileApprox.GenericUDAFSinglePercentileApproxEvaluator.init(GenericUDAFEvaluator.Mode m,
ObjectInspector[] parameters)
|
ObjectInspector |
GenericUDAFPercentileApprox.GenericUDAFMultiplePercentileApproxEvaluator.init(GenericUDAFEvaluator.Mode m,
ObjectInspector[] parameters)
|
ObjectInspector |
GenericUDAFSum.GenericUDAFSumDouble.init(GenericUDAFEvaluator.Mode m,
ObjectInspector[] parameters)
|
ObjectInspector |
GenericUDAFSum.GenericUDAFSumLong.init(GenericUDAFEvaluator.Mode m,
ObjectInspector[] parameters)
|
ObjectInspector |
GenericUDAFMin.GenericUDAFMinEvaluator.init(GenericUDAFEvaluator.Mode m,
ObjectInspector[] parameters)
|
ObjectInspector |
GenericUDAFVariance.GenericUDAFVarianceEvaluator.init(GenericUDAFEvaluator.Mode m,
ObjectInspector[] parameters)
|
ObjectInspector |
GenericUDAFContextNGrams.GenericUDAFContextNGramEvaluator.init(GenericUDAFEvaluator.Mode m,
ObjectInspector[] parameters)
|
ObjectInspector |
GenericUDAFCount.GenericUDAFCountEvaluator.init(GenericUDAFEvaluator.Mode m,
ObjectInspector[] parameters)
|
ObjectInspector |
GenericUDAFCorrelation.GenericUDAFCorrelationEvaluator.init(GenericUDAFEvaluator.Mode m,
ObjectInspector[] parameters)
|
ObjectInspector |
GenericUDAFComputeStats.GenericUDAFBooleanStatsEvaluator.init(GenericUDAFEvaluator.Mode m,
ObjectInspector[] parameters)
|
ObjectInspector |
GenericUDAFComputeStats.GenericUDAFLongStatsEvaluator.init(GenericUDAFEvaluator.Mode m,
ObjectInspector[] parameters)
|
ObjectInspector |
GenericUDAFComputeStats.GenericUDAFDoubleStatsEvaluator.init(GenericUDAFEvaluator.Mode m,
ObjectInspector[] parameters)
|
ObjectInspector |
GenericUDAFComputeStats.GenericUDAFStringStatsEvaluator.init(GenericUDAFEvaluator.Mode m,
ObjectInspector[] parameters)
|
ObjectInspector |
GenericUDAFComputeStats.GenericUDAFBinaryStatsEvaluator.init(GenericUDAFEvaluator.Mode m,
ObjectInspector[] parameters)
|
ObjectInspector |
GenericUDAFMax.GenericUDAFMaxEvaluator.init(GenericUDAFEvaluator.Mode m,
ObjectInspector[] parameters)
|
ObjectInspector |
GenericUDAFCollectSet.GenericUDAFMkSetEvaluator.init(GenericUDAFEvaluator.Mode m,
ObjectInspector[] parameters)
|
ObjectInspector |
GenericUDAFEWAHBitmap.GenericUDAFEWAHBitmapEvaluator.init(GenericUDAFEvaluator.Mode m,
ObjectInspector[] parameters)
|
ObjectInspector |
GenericUDAFnGrams.GenericUDAFnGramEvaluator.init(GenericUDAFEvaluator.Mode m,
ObjectInspector[] parameters)
|
ObjectInspector |
GenericUDAFAverage.GenericUDAFAverageEvaluator.init(GenericUDAFEvaluator.Mode m,
ObjectInspector[] parameters)
|
ObjectInspector |
GenericUDAFBridge.GenericUDAFBridgeEvaluator.init(GenericUDAFEvaluator.Mode m,
ObjectInspector[] parameters)
|
void |
NGramEstimator.initialize(int pk,
int ppf,
int pn)
Sets the 'k', 'pf', and 'n' parameters. |
void |
GenericUDAFCovariance.GenericUDAFCovarianceEvaluator.iterate(GenericUDAFEvaluator.AggregationBuffer agg,
Object[] parameters)
|
void |
GenericUDAFHistogramNumeric.GenericUDAFHistogramNumericEvaluator.iterate(GenericUDAFEvaluator.AggregationBuffer agg,
Object[] parameters)
|
abstract void |
GenericUDAFEvaluator.iterate(GenericUDAFEvaluator.AggregationBuffer agg,
Object[] parameters)
Iterate through the original data. |
void |
GenericUDAFPercentileApprox.GenericUDAFPercentileApproxEvaluator.iterate(GenericUDAFEvaluator.AggregationBuffer agg,
Object[] parameters)
|
void |
GenericUDAFSum.GenericUDAFSumDouble.iterate(GenericUDAFEvaluator.AggregationBuffer agg,
Object[] parameters)
|
void |
GenericUDAFSum.GenericUDAFSumLong.iterate(GenericUDAFEvaluator.AggregationBuffer agg,
Object[] parameters)
|
void |
GenericUDAFMin.GenericUDAFMinEvaluator.iterate(GenericUDAFEvaluator.AggregationBuffer agg,
Object[] parameters)
|
void |
GenericUDAFVariance.GenericUDAFVarianceEvaluator.iterate(GenericUDAFEvaluator.AggregationBuffer agg,
Object[] parameters)
|
void |
GenericUDAFContextNGrams.GenericUDAFContextNGramEvaluator.iterate(GenericUDAFEvaluator.AggregationBuffer agg,
Object[] parameters)
|
void |
GenericUDAFCount.GenericUDAFCountEvaluator.iterate(GenericUDAFEvaluator.AggregationBuffer agg,
Object[] parameters)
|
void |
GenericUDAFCorrelation.GenericUDAFCorrelationEvaluator.iterate(GenericUDAFEvaluator.AggregationBuffer agg,
Object[] parameters)
|
void |
GenericUDAFComputeStats.GenericUDAFBooleanStatsEvaluator.iterate(GenericUDAFEvaluator.AggregationBuffer agg,
Object[] parameters)
|
void |
GenericUDAFComputeStats.GenericUDAFLongStatsEvaluator.iterate(GenericUDAFEvaluator.AggregationBuffer agg,
Object[] parameters)
|
void |
GenericUDAFComputeStats.GenericUDAFDoubleStatsEvaluator.iterate(GenericUDAFEvaluator.AggregationBuffer agg,
Object[] parameters)
|
void |
GenericUDAFComputeStats.GenericUDAFStringStatsEvaluator.iterate(GenericUDAFEvaluator.AggregationBuffer agg,
Object[] parameters)
|
void |
GenericUDAFComputeStats.GenericUDAFBinaryStatsEvaluator.iterate(GenericUDAFEvaluator.AggregationBuffer agg,
Object[] parameters)
|
void |
GenericUDAFMax.GenericUDAFMaxEvaluator.iterate(GenericUDAFEvaluator.AggregationBuffer agg,
Object[] parameters)
|
void |
GenericUDAFCollectSet.GenericUDAFMkSetEvaluator.iterate(GenericUDAFEvaluator.AggregationBuffer agg,
Object[] parameters)
|
void |
GenericUDAFEWAHBitmap.GenericUDAFEWAHBitmapEvaluator.iterate(GenericUDAFEvaluator.AggregationBuffer agg,
Object[] parameters)
|
void |
GenericUDAFnGrams.GenericUDAFnGramEvaluator.iterate(GenericUDAFEvaluator.AggregationBuffer agg,
Object[] parameters)
|
void |
GenericUDAFAverage.GenericUDAFAverageEvaluator.iterate(GenericUDAFEvaluator.AggregationBuffer agg,
Object[] parameters)
|
void |
GenericUDAFBridge.GenericUDAFBridgeEvaluator.iterate(GenericUDAFEvaluator.AggregationBuffer agg,
Object[] parameters)
|
void |
GenericUDFInFile.load(InputStream is)
Load the file from an InputStream. |
void |
GenericUDAFCovariance.GenericUDAFCovarianceEvaluator.merge(GenericUDAFEvaluator.AggregationBuffer agg,
Object partial)
|
void |
GenericUDAFHistogramNumeric.GenericUDAFHistogramNumericEvaluator.merge(GenericUDAFEvaluator.AggregationBuffer agg,
Object partial)
|
abstract void |
GenericUDAFEvaluator.merge(GenericUDAFEvaluator.AggregationBuffer agg,
Object partial)
Merge with a partial aggregation result. |
void |
GenericUDAFPercentileApprox.GenericUDAFPercentileApproxEvaluator.merge(GenericUDAFEvaluator.AggregationBuffer agg,
Object partial)
|
void |
GenericUDAFSum.GenericUDAFSumDouble.merge(GenericUDAFEvaluator.AggregationBuffer agg,
Object partial)
|
void |
GenericUDAFSum.GenericUDAFSumLong.merge(GenericUDAFEvaluator.AggregationBuffer agg,
Object partial)
|
void |
GenericUDAFMin.GenericUDAFMinEvaluator.merge(GenericUDAFEvaluator.AggregationBuffer agg,
Object partial)
|
void |
GenericUDAFVariance.GenericUDAFVarianceEvaluator.merge(GenericUDAFEvaluator.AggregationBuffer agg,
Object partial)
|
void |
GenericUDAFContextNGrams.GenericUDAFContextNGramEvaluator.merge(GenericUDAFEvaluator.AggregationBuffer agg,
Object obj)
|
void |
GenericUDAFCount.GenericUDAFCountEvaluator.merge(GenericUDAFEvaluator.AggregationBuffer agg,
Object partial)
|
void |
GenericUDAFCorrelation.GenericUDAFCorrelationEvaluator.merge(GenericUDAFEvaluator.AggregationBuffer agg,
Object partial)
|
void |
GenericUDAFComputeStats.GenericUDAFBooleanStatsEvaluator.merge(GenericUDAFEvaluator.AggregationBuffer agg,
Object partial)
|
void |
GenericUDAFComputeStats.GenericUDAFLongStatsEvaluator.merge(GenericUDAFEvaluator.AggregationBuffer agg,
Object partial)
|
void |
GenericUDAFComputeStats.GenericUDAFDoubleStatsEvaluator.merge(GenericUDAFEvaluator.AggregationBuffer agg,
Object partial)
|
void |
GenericUDAFComputeStats.GenericUDAFStringStatsEvaluator.merge(GenericUDAFEvaluator.AggregationBuffer agg,
Object partial)
|
void |
GenericUDAFComputeStats.GenericUDAFBinaryStatsEvaluator.merge(GenericUDAFEvaluator.AggregationBuffer agg,
Object partial)
|
void |
GenericUDAFMax.GenericUDAFMaxEvaluator.merge(GenericUDAFEvaluator.AggregationBuffer agg,
Object partial)
|
void |
GenericUDAFCollectSet.GenericUDAFMkSetEvaluator.merge(GenericUDAFEvaluator.AggregationBuffer agg,
Object partial)
|
void |
GenericUDAFEWAHBitmap.GenericUDAFEWAHBitmapEvaluator.merge(GenericUDAFEvaluator.AggregationBuffer agg,
Object partial)
|
void |
GenericUDAFnGrams.GenericUDAFnGramEvaluator.merge(GenericUDAFEvaluator.AggregationBuffer agg,
Object partial)
|
void |
GenericUDAFAverage.GenericUDAFAverageEvaluator.merge(GenericUDAFEvaluator.AggregationBuffer agg,
Object partial)
|
void |
GenericUDAFBridge.GenericUDAFBridgeEvaluator.merge(GenericUDAFEvaluator.AggregationBuffer agg,
Object partial)
|
void |
NGramEstimator.merge(List<org.apache.hadoop.io.Text> other)
Takes a serialized n-gram estimator object created by the serialize() method and merges it with the current n-gram object. |
abstract void |
GenericUDTF.process(Object[] args)
Passes a set of arguments to the UDTF for processing. |
void |
GenericUDTFParseUrlTuple.process(Object[] o)
|
void |
GenericUDTFExplode.process(Object[] o)
|
void |
GenericUDTFStack.process(Object[] args)
|
void |
GenericUDTFInline.process(Object[] os)
|
void |
GenericUDTFJSONTuple.process(Object[] o)
|
void |
GenericUDAFCovariance.GenericUDAFCovarianceEvaluator.reset(GenericUDAFEvaluator.AggregationBuffer agg)
|
void |
GenericUDAFHistogramNumeric.GenericUDAFHistogramNumericEvaluator.reset(GenericUDAFEvaluator.AggregationBuffer agg)
|
abstract void |
GenericUDAFEvaluator.reset(GenericUDAFEvaluator.AggregationBuffer agg)
Reset the aggregation. |
void |
GenericUDAFPercentileApprox.GenericUDAFPercentileApproxEvaluator.reset(GenericUDAFEvaluator.AggregationBuffer agg)
|
void |
GenericUDAFSum.GenericUDAFSumDouble.reset(GenericUDAFEvaluator.AggregationBuffer agg)
|
void |
GenericUDAFSum.GenericUDAFSumLong.reset(GenericUDAFEvaluator.AggregationBuffer agg)
|
void |
GenericUDAFMin.GenericUDAFMinEvaluator.reset(GenericUDAFEvaluator.AggregationBuffer agg)
|
void |
GenericUDAFVariance.GenericUDAFVarianceEvaluator.reset(GenericUDAFEvaluator.AggregationBuffer agg)
|
void |
GenericUDAFContextNGrams.GenericUDAFContextNGramEvaluator.reset(GenericUDAFEvaluator.AggregationBuffer agg)
|
void |
GenericUDAFCount.GenericUDAFCountEvaluator.reset(GenericUDAFEvaluator.AggregationBuffer agg)
|
void |
GenericUDAFCorrelation.GenericUDAFCorrelationEvaluator.reset(GenericUDAFEvaluator.AggregationBuffer agg)
|
void |
GenericUDAFComputeStats.GenericUDAFBooleanStatsEvaluator.reset(GenericUDAFEvaluator.AggregationBuffer agg)
|
void |
GenericUDAFComputeStats.GenericUDAFLongStatsEvaluator.reset(GenericUDAFEvaluator.AggregationBuffer agg)
|
void |
GenericUDAFComputeStats.GenericUDAFDoubleStatsEvaluator.reset(GenericUDAFEvaluator.AggregationBuffer agg)
|
void |
GenericUDAFComputeStats.GenericUDAFStringStatsEvaluator.reset(GenericUDAFEvaluator.AggregationBuffer agg)
|
void |
GenericUDAFComputeStats.GenericUDAFBinaryStatsEvaluator.reset(GenericUDAFEvaluator.AggregationBuffer agg)
|
void |
GenericUDAFMax.GenericUDAFMaxEvaluator.reset(GenericUDAFEvaluator.AggregationBuffer agg)
|
void |
GenericUDAFCollectSet.GenericUDAFMkSetEvaluator.reset(GenericUDAFEvaluator.AggregationBuffer agg)
|
void |
GenericUDAFEWAHBitmap.GenericUDAFEWAHBitmapEvaluator.reset(GenericUDAFEvaluator.AggregationBuffer agg)
|
void |
GenericUDAFnGrams.GenericUDAFnGramEvaluator.reset(GenericUDAFEvaluator.AggregationBuffer agg)
|
void |
GenericUDAFAverage.GenericUDAFAverageEvaluator.reset(GenericUDAFEvaluator.AggregationBuffer agg)
|
void |
GenericUDAFBridge.GenericUDAFBridgeEvaluator.reset(GenericUDAFEvaluator.AggregationBuffer agg)
|
ArrayList<org.apache.hadoop.io.Text> |
NGramEstimator.serialize()
In preparation for a Hive merge() call, serializes the current n-gram estimator object into an ArrayList of Text objects. |
Object |
GenericUDAFStdSample.GenericUDAFStdSampleEvaluator.terminate(GenericUDAFEvaluator.AggregationBuffer agg)
|
Object |
GenericUDAFCovariance.GenericUDAFCovarianceEvaluator.terminate(GenericUDAFEvaluator.AggregationBuffer agg)
|
Object |
GenericUDAFHistogramNumeric.GenericUDAFHistogramNumericEvaluator.terminate(GenericUDAFEvaluator.AggregationBuffer agg)
|
Object |
GenericUDAFCovarianceSample.GenericUDAFCovarianceSampleEvaluator.terminate(GenericUDAFEvaluator.AggregationBuffer agg)
|
abstract Object |
GenericUDAFEvaluator.terminate(GenericUDAFEvaluator.AggregationBuffer agg)
Get final aggregation result. |
Object |
GenericUDAFPercentileApprox.GenericUDAFSinglePercentileApproxEvaluator.terminate(GenericUDAFEvaluator.AggregationBuffer agg)
|
Object |
GenericUDAFPercentileApprox.GenericUDAFMultiplePercentileApproxEvaluator.terminate(GenericUDAFEvaluator.AggregationBuffer agg)
|
Object |
GenericUDAFSum.GenericUDAFSumDouble.terminate(GenericUDAFEvaluator.AggregationBuffer agg)
|
Object |
GenericUDAFSum.GenericUDAFSumLong.terminate(GenericUDAFEvaluator.AggregationBuffer agg)
|
Object |
GenericUDAFVarianceSample.GenericUDAFVarianceSampleEvaluator.terminate(GenericUDAFEvaluator.AggregationBuffer agg)
|
Object |
GenericUDAFMin.GenericUDAFMinEvaluator.terminate(GenericUDAFEvaluator.AggregationBuffer agg)
|
Object |
GenericUDAFVariance.GenericUDAFVarianceEvaluator.terminate(GenericUDAFEvaluator.AggregationBuffer agg)
|
Object |
GenericUDAFContextNGrams.GenericUDAFContextNGramEvaluator.terminate(GenericUDAFEvaluator.AggregationBuffer agg)
|
Object |
GenericUDAFCount.GenericUDAFCountEvaluator.terminate(GenericUDAFEvaluator.AggregationBuffer agg)
|
Object |
GenericUDAFCorrelation.GenericUDAFCorrelationEvaluator.terminate(GenericUDAFEvaluator.AggregationBuffer agg)
|
Object |
GenericUDAFComputeStats.GenericUDAFBooleanStatsEvaluator.terminate(GenericUDAFEvaluator.AggregationBuffer agg)
|
Object |
GenericUDAFComputeStats.GenericUDAFLongStatsEvaluator.terminate(GenericUDAFEvaluator.AggregationBuffer agg)
|
Object |
GenericUDAFComputeStats.GenericUDAFDoubleStatsEvaluator.terminate(GenericUDAFEvaluator.AggregationBuffer agg)
|
Object |
GenericUDAFComputeStats.GenericUDAFStringStatsEvaluator.terminate(GenericUDAFEvaluator.AggregationBuffer agg)
|
Object |
GenericUDAFComputeStats.GenericUDAFBinaryStatsEvaluator.terminate(GenericUDAFEvaluator.AggregationBuffer agg)
|
Object |
GenericUDAFMax.GenericUDAFMaxEvaluator.terminate(GenericUDAFEvaluator.AggregationBuffer agg)
|
Object |
GenericUDAFCollectSet.GenericUDAFMkSetEvaluator.terminate(GenericUDAFEvaluator.AggregationBuffer agg)
|
Object |
GenericUDAFEWAHBitmap.GenericUDAFEWAHBitmapEvaluator.terminate(GenericUDAFEvaluator.AggregationBuffer agg)
|
Object |
GenericUDAFnGrams.GenericUDAFnGramEvaluator.terminate(GenericUDAFEvaluator.AggregationBuffer agg)
|
Object |
GenericUDAFAverage.GenericUDAFAverageEvaluator.terminate(GenericUDAFEvaluator.AggregationBuffer agg)
|
Object |
GenericUDAFStd.GenericUDAFStdEvaluator.terminate(GenericUDAFEvaluator.AggregationBuffer agg)
|
Object |
GenericUDAFBridge.GenericUDAFBridgeEvaluator.terminate(GenericUDAFEvaluator.AggregationBuffer agg)
|
Object |
GenericUDAFCovariance.GenericUDAFCovarianceEvaluator.terminatePartial(GenericUDAFEvaluator.AggregationBuffer agg)
|
Object |
GenericUDAFHistogramNumeric.GenericUDAFHistogramNumericEvaluator.terminatePartial(GenericUDAFEvaluator.AggregationBuffer agg)
|
abstract Object |
GenericUDAFEvaluator.terminatePartial(GenericUDAFEvaluator.AggregationBuffer agg)
Get partial aggregation result. |
Object |
GenericUDAFPercentileApprox.GenericUDAFPercentileApproxEvaluator.terminatePartial(GenericUDAFEvaluator.AggregationBuffer agg)
|
Object |
GenericUDAFSum.GenericUDAFSumDouble.terminatePartial(GenericUDAFEvaluator.AggregationBuffer agg)
|
Object |
GenericUDAFSum.GenericUDAFSumLong.terminatePartial(GenericUDAFEvaluator.AggregationBuffer agg)
|
Object |
GenericUDAFMin.GenericUDAFMinEvaluator.terminatePartial(GenericUDAFEvaluator.AggregationBuffer agg)
|
Object |
GenericUDAFVariance.GenericUDAFVarianceEvaluator.terminatePartial(GenericUDAFEvaluator.AggregationBuffer agg)
|
Object |
GenericUDAFContextNGrams.GenericUDAFContextNGramEvaluator.terminatePartial(GenericUDAFEvaluator.AggregationBuffer agg)
|
Object |
GenericUDAFCount.GenericUDAFCountEvaluator.terminatePartial(GenericUDAFEvaluator.AggregationBuffer agg)
|
Object |
GenericUDAFCorrelation.GenericUDAFCorrelationEvaluator.terminatePartial(GenericUDAFEvaluator.AggregationBuffer agg)
|
Object |
GenericUDAFComputeStats.GenericUDAFBooleanStatsEvaluator.terminatePartial(GenericUDAFEvaluator.AggregationBuffer agg)
|
Object |
GenericUDAFComputeStats.GenericUDAFLongStatsEvaluator.terminatePartial(GenericUDAFEvaluator.AggregationBuffer agg)
|
Object |
GenericUDAFComputeStats.GenericUDAFDoubleStatsEvaluator.terminatePartial(GenericUDAFEvaluator.AggregationBuffer agg)
|
Object |
GenericUDAFComputeStats.GenericUDAFStringStatsEvaluator.terminatePartial(GenericUDAFEvaluator.AggregationBuffer agg)
|
Object |
GenericUDAFComputeStats.GenericUDAFBinaryStatsEvaluator.terminatePartial(GenericUDAFEvaluator.AggregationBuffer agg)
|
Object |
GenericUDAFMax.GenericUDAFMaxEvaluator.terminatePartial(GenericUDAFEvaluator.AggregationBuffer agg)
|
Object |
GenericUDAFCollectSet.GenericUDAFMkSetEvaluator.terminatePartial(GenericUDAFEvaluator.AggregationBuffer agg)
|
Object |
GenericUDAFEWAHBitmap.GenericUDAFEWAHBitmapEvaluator.terminatePartial(GenericUDAFEvaluator.AggregationBuffer agg)
|
Object |
GenericUDAFnGrams.GenericUDAFnGramEvaluator.terminatePartial(GenericUDAFEvaluator.AggregationBuffer agg)
|
Object |
GenericUDAFAverage.GenericUDAFAverageEvaluator.terminatePartial(GenericUDAFEvaluator.AggregationBuffer agg)
|
Object |
GenericUDAFBridge.GenericUDAFBridgeEvaluator.terminatePartial(GenericUDAFEvaluator.AggregationBuffer agg)
|
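The evaluator rows above all follow the same lifecycle: init() negotiates ObjectInspectors for the current Mode, getNewAggregationBuffer() and reset() manage per-group state, iterate() consumes original rows, terminatePartial() and merge() carry partial results across map/reduce boundaries, and terminate() produces the final value, with every step declaring HiveException. As a minimal sketch of that contract (a hypothetical non-null counter, not one of the built-in evaluators listed here):

```java
import org.apache.hadoop.hive.ql.metadata.HiveException;
import org.apache.hadoop.hive.ql.udf.generic.GenericUDAFEvaluator;
import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector;
import org.apache.hadoop.hive.serde2.objectinspector.PrimitiveObjectInspector;
import org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorFactory;
import org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorUtils;
import org.apache.hadoop.io.LongWritable;

// Hypothetical evaluator (not part of Hive): counts non-null values.
public class CountNonNullEvaluator extends GenericUDAFEvaluator {

  // Per-group scratch state; handed out by getNewAggregationBuffer()
  // and threaded through iterate()/merge()/terminate().
  static class CountBuffer implements AggregationBuffer {
    long count;
  }

  private PrimitiveObjectInspector partialOI;

  @Override
  public ObjectInspector init(Mode m, ObjectInspector[] parameters) throws HiveException {
    super.init(m, parameters);
    if (m == Mode.PARTIAL2 || m == Mode.FINAL) {
      // In these modes parameters[0] describes terminatePartial()'s output.
      partialOI = (PrimitiveObjectInspector) parameters[0];
    }
    // Both the partial and the final result are longs.
    return PrimitiveObjectInspectorFactory.writableLongObjectInspector;
  }

  @Override
  public AggregationBuffer getNewAggregationBuffer() throws HiveException {
    CountBuffer buf = new CountBuffer();
    reset(buf);
    return buf;
  }

  @Override
  public void reset(AggregationBuffer agg) throws HiveException {
    ((CountBuffer) agg).count = 0;
  }

  @Override
  public void iterate(AggregationBuffer agg, Object[] parameters) throws HiveException {
    // PARTIAL1/COMPLETE: called once per original row.
    if (parameters[0] != null) {
      ((CountBuffer) agg).count++;
    }
  }

  @Override
  public Object terminatePartial(AggregationBuffer agg) throws HiveException {
    return new LongWritable(((CountBuffer) agg).count);
  }

  @Override
  public void merge(AggregationBuffer agg, Object partial) throws HiveException {
    // PARTIAL2/FINAL: fold in a terminatePartial() result from another task.
    if (partial != null) {
      ((CountBuffer) agg).count += PrimitiveObjectInspectorUtils.getLong(partial, partialOI);
    }
  }

  @Override
  public Object terminate(AggregationBuffer agg) throws HiveException {
    return new LongWritable(((CountBuffer) agg).count);
  }
}
```

In PARTIAL1/COMPLETE modes Hive drives iterate() over raw rows; in PARTIAL2/FINAL modes it drives merge() over the terminatePartial() outputs of other tasks, which is why the tables above list both paths for every evaluator.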
Uses of HiveException in org.apache.hadoop.hive.ql.udf.xml |
---|
Methods in org.apache.hadoop.hive.ql.udf.xml that throw HiveException | |
---|---|
Object |
GenericUDFXPath.evaluate(GenericUDF.DeferredObject[] arguments)
|
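The xpath family enters through GenericUDF.evaluate(GenericUDF.DeferredObject[]), where HiveException can surface both from evaluate() itself and from DeferredObject.get() while a lazily-computed argument is materialized. A minimal sketch of that contract, using a hypothetical identity function rather than GenericUDFXPath:

```java
import org.apache.hadoop.hive.ql.exec.UDFArgumentException;
import org.apache.hadoop.hive.ql.exec.UDFArgumentLengthException;
import org.apache.hadoop.hive.ql.metadata.HiveException;
import org.apache.hadoop.hive.ql.udf.generic.GenericUDF;
import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector;
import org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorFactory;
import org.apache.hadoop.io.Text;

// Hypothetical UDF (not part of Hive): returns its single argument as a string.
public class IdentityStringUDF extends GenericUDF {

  @Override
  public ObjectInspector initialize(ObjectInspector[] arguments)
      throws UDFArgumentException {
    if (arguments.length != 1) {
      // UDFArgumentLengthException is one of the HiveException subclasses
      // listed in the org.apache.hadoop.hive.ql.exec section.
      throw new UDFArgumentLengthException("identity_string expects exactly one argument");
    }
    return PrimitiveObjectInspectorFactory.writableStringObjectInspector;
  }

  @Override
  public Object evaluate(DeferredObject[] arguments) throws HiveException {
    // DeferredObject.get() itself declares HiveException: the argument is
    // evaluated lazily and may fail only when materialized here.
    Object value = arguments[0].get();
    return value == null ? null : new Text(value.toString());
  }

  @Override
  public String getDisplayString(String[] children) {
    return "identity_string(" + children[0] + ")";
  }
}
```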
Uses of HiveException in org.apache.hive.builtins |
---|
Methods in org.apache.hive.builtins that throw HiveException | |
---|---|
GenericUDAFEvaluator.AggregationBuffer |
UDAFUnionMap.Evaluator.getNewAggregationBuffer()
|
ObjectInspector |
UDAFUnionMap.Evaluator.init(GenericUDAFEvaluator.Mode m,
ObjectInspector[] parameters)
|
void |
UDAFUnionMap.Evaluator.iterate(GenericUDAFEvaluator.AggregationBuffer agg,
Object[] input)
|
void |
UDAFUnionMap.Evaluator.merge(GenericUDAFEvaluator.AggregationBuffer agg,
Object partial)
|
void |
UDAFUnionMap.Evaluator.reset(GenericUDAFEvaluator.AggregationBuffer agg)
|
Object |
UDAFUnionMap.Evaluator.terminate(GenericUDAFEvaluator.AggregationBuffer agg)
|
Object |
UDAFUnionMap.Evaluator.terminatePartial(GenericUDAFEvaluator.AggregationBuffer agg)
|
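UDAFUnionMap.Evaluator above repeats the same aggregation lifecycle sketched earlier. The remaining flavor listed in these tables is the table-generating function, where process() consumes one input row and forward() emits zero or more output rows, both declaring HiveException. A minimal sketch, using a hypothetical list-exploding UDTF rather than any of the built-ins:

```java
import java.util.Arrays;
import java.util.List;
import org.apache.hadoop.hive.ql.exec.UDFArgumentException;
import org.apache.hadoop.hive.ql.metadata.HiveException;
import org.apache.hadoop.hive.ql.udf.generic.GenericUDTF;
import org.apache.hadoop.hive.serde2.objectinspector.ListObjectInspector;
import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector;
import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorFactory;
import org.apache.hadoop.hive.serde2.objectinspector.StructObjectInspector;

// Hypothetical UDTF (not part of Hive): one output row per array element.
public class ExplodeListUDTF extends GenericUDTF {

  private ListObjectInspector listOI;
  private final Object[] row = new Object[1];

  @Override
  public StructObjectInspector initialize(ObjectInspector[] argOIs)
      throws UDFArgumentException {
    if (argOIs.length != 1
        || argOIs[0].getCategory() != ObjectInspector.Category.LIST) {
      throw new UDFArgumentException("explode_list expects a single array argument");
    }
    listOI = (ListObjectInspector) argOIs[0];
    // One output column whose type matches the array's element type.
    return ObjectInspectorFactory.getStandardStructObjectInspector(
        Arrays.asList("col"),
        Arrays.asList(listOI.getListElementObjectInspector()));
  }

  @Override
  public void process(Object[] args) throws HiveException {
    List<?> elements = listOI.getList(args[0]);
    if (elements == null) {
      return;
    }
    for (Object e : elements) {
      row[0] = e;
      // forward() passes the row to the collector and may itself throw
      // HiveException raised by downstream operators.
      forward(row);
    }
  }

  @Override
  public void close() throws HiveException {
    // No buffered rows to flush at end of input.
  }
}
```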