Uses of Class
org.apache.hadoop.hive.ql.metadata.HiveException

Packages that use HiveException
org.apache.hadoop.hive.ql.exec Hive QL execution tasks, operators, functions and other handlers. 
org.apache.hadoop.hive.ql.exec.persistence   
org.apache.hadoop.hive.ql.io   
org.apache.hadoop.hive.ql.metadata   
org.apache.hadoop.hive.ql.optimizer.ppr   
org.apache.hadoop.hive.ql.parse   
org.apache.hadoop.hive.ql.plan   
org.apache.hadoop.hive.ql.udf.generic Standard toolkit and framework for generic User-defined functions. 
org.apache.hadoop.hive.ql.udf.xml   
 

Uses of HiveException in org.apache.hadoop.hive.ql.exec
 

Subclasses of HiveException in org.apache.hadoop.hive.ql.exec
 class AmbiguousMethodException
          Exception thrown by the UDF and UDAF method resolvers in case a unique method is not found.
 class NoMatchingMethodException
          Exception thrown by the UDF and UDAF method resolvers in case no matching method is found.
 class UDFArgumentException
          Exception thrown when a UDF argument is invalid.
 class UDFArgumentLengthException
          Exception thrown when a UDF is called with the wrong number of arguments.
 class UDFArgumentTypeException
          Exception thrown when UDF arguments have the wrong types.
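These argument exceptions are typically raised while a GenericUDF validates its arguments in initialize(). Below is a minimal sketch of that pattern; the class GenericUDFExample is hypothetical, not part of Hive:

import org.apache.hadoop.hive.ql.exec.UDFArgumentException;
import org.apache.hadoop.hive.ql.exec.UDFArgumentLengthException;
import org.apache.hadoop.hive.ql.exec.UDFArgumentTypeException;
import org.apache.hadoop.hive.ql.metadata.HiveException;
import org.apache.hadoop.hive.ql.udf.generic.GenericUDF;
import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector;
import org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorFactory;

// Hypothetical UDF: returns its single primitive argument as a string.
public class GenericUDFExample extends GenericUDF {
  @Override
  public ObjectInspector initialize(ObjectInspector[] arguments)
      throws UDFArgumentException {
    // Wrong number of arguments -> UDFArgumentLengthException.
    if (arguments.length != 1) {
      throw new UDFArgumentLengthException("example() takes exactly one argument");
    }
    // Wrong argument type -> UDFArgumentTypeException (argument index, message).
    if (arguments[0].getCategory() != ObjectInspector.Category.PRIMITIVE) {
      throw new UDFArgumentTypeException(0, "example() expects a primitive argument");
    }
    return PrimitiveObjectInspectorFactory.javaStringObjectInspector;
  }

  @Override
  public Object evaluate(DeferredObject[] arguments) throws HiveException {
    Object value = arguments[0].get(); // DeferredObject.get() may itself throw HiveException
    return value == null ? null : value.toString();
  }

  @Override
  public String getDisplayString(String[] children) {
    return "example(" + children[0] + ")";
  }
}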
 

Methods in org.apache.hadoop.hive.ql.exec that throw HiveException
 void FileSinkOperator.FSPaths.abortWriters(org.apache.hadoop.fs.FileSystem fs, boolean abort, boolean delete)
           
protected  void CommonJoinOperator.checkAndGenObject()
           
 void FetchOperator.clearFetchContext()
          Clear the context, if anything needs to be done.
 void SkewJoinHandler.close(boolean abort)
           
 void Operator.close(boolean abort)
           
 void ScriptOperator.close(boolean abort)
           
 void MapJoinOperator.closeOp(boolean abort)
           
 void GroupByOperator.closeOp(boolean abort)
          We need to forward all the aggregations to children.
protected  void Operator.closeOp(boolean abort)
          Operator specific close routine.
 void SMBMapJoinOperator.closeOp(boolean abort)
           
 void CommonJoinOperator.closeOp(boolean abort)
          All done.
 void FileSinkOperator.closeOp(boolean abort)
           
protected  void UDTFOperator.closeOp(boolean abort)
           
 void MapOperator.closeOp(boolean abort)
          Close extra child operators that were initialized but not executed.
 void JoinOperator.closeOp(boolean abort)
          All done.
 void FileSinkOperator.FSPaths.closeWriters(boolean abort)
           
protected static ArrayList<Object> CommonJoinOperator.computeValues(Object row, List<ExprNodeEvaluator> valueFields, List<ObjectInspector> valueFieldsOI)
          Return the value as a standard object.
 void GroupByOperator.endGroup()
           
 void Operator.endGroup()
           
 void CommonJoinOperator.endGroup()
          Forward a record of join results.
 void JoinOperator.endGroup()
          Forward a record of join results.
 Object ExprNodeNullEvaluator.evaluate(Object row)
           
abstract  Object ExprNodeEvaluator.evaluate(Object row)
          Evaluate the expression given the row.
 Object ExprNodeColumnEvaluator.evaluate(Object row)
           
 Object ExprNodeGenericFuncEvaluator.evaluate(Object row)
           
 Object ExprNodeConstantEvaluator.evaluate(Object row)
           
 Object ExprNodeFieldEvaluator.evaluate(Object row)
           
protected  void GroupByOperator.forward(ArrayList<Object> keys, GenericUDAFEvaluator.AggregationBuffer[] aggs)
          Forward a record of keys and aggregation results.
protected  void Operator.forward(Object row, ObjectInspector rowInspector)
           
 void UDTFOperator.forwardUDTFOutput(Object o)
          forwardUDTFOutput is typically called indirectly by the GenericUDTF when the GenericUDTF has generated output rows that should be passed on to the next operator(s) in the DAG.
protected static HashMap<Byte,List<ObjectInspector>> CommonJoinOperator.getObjectInspectorsFromEvaluators(Map<Byte,List<ExprNodeEvaluator>> exprEntries, ObjectInspector[] inputObjInspector)
           
 ObjectInspector FetchOperator.getOutputObjectInspector()
           
static PartitionDesc Utilities.getPartitionDesc(Partition part)
           
 void SkewJoinHandler.handleSkew(int tag)
           
protected static ObjectInspector[] Operator.initEvaluators(ExprNodeEvaluator[] evals, ObjectInspector rowInspector)
          Initialize an array of ExprNodeEvaluator and return the result ObjectInspectors.
protected static StructObjectInspector Operator.initEvaluatorsAndReturnStruct(ExprNodeEvaluator[] evals, List<String> outputColName, ObjectInspector rowInspector)
          Initialize an array of ExprNodeEvaluator and put the return values into a StructObjectInspector with integer field names.
 void Operator.initialize(org.apache.hadoop.conf.Configuration hconf, ObjectInspector[] inputOIs)
          Initializes operators only if all parents have been initialized.
 ObjectInspector ExprNodeNullEvaluator.initialize(ObjectInspector rowInspector)
           
abstract  ObjectInspector ExprNodeEvaluator.initialize(ObjectInspector rowInspector)
          Initialize should be called once and only once; it returns the ObjectInspector for the values produced by evaluate (a usage sketch appears at the end of this section).
 ObjectInspector ExprNodeColumnEvaluator.initialize(ObjectInspector rowInspector)
           
 ObjectInspector ExprNodeGenericFuncEvaluator.initialize(ObjectInspector rowInspector)
           
 ObjectInspector ExprNodeConstantEvaluator.initialize(ObjectInspector rowInspector)
           
 ObjectInspector ExprNodeFieldEvaluator.initialize(ObjectInspector rowInspector)
           
 void MapOperator.initializeAsRoot(org.apache.hadoop.conf.Configuration hconf, MapredWork mrwork)
          Initializes this map op as the root of the tree.
protected  void Operator.initializeChildren(org.apache.hadoop.conf.Configuration hconf)
          Calls initialize on each of the children with outputObjectInspector as the output row format.
 void Operator.initializeLocalWork(org.apache.hadoop.conf.Configuration hconf)
           
 void SMBMapJoinOperator.initializeLocalWork(org.apache.hadoop.conf.Configuration hconf)
           
 void SMBMapJoinOperator.initializeMapredLocalWork(MapJoinDesc conf, org.apache.hadoop.conf.Configuration hconf, MapredLocalWork localWork, org.apache.commons.logging.Log l4j)
           
protected  void LateralViewJoinOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf)
           
protected  void MapJoinOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf)
           
protected  void GroupByOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf)
           
protected  void Operator.initializeOp(org.apache.hadoop.conf.Configuration hconf)
          Operator specific initialization.
protected  void ExtractOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf)
           
protected  void SMBMapJoinOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf)
           
protected  void LimitOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf)
           
protected  void SelectOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf)
           
protected  void UnionOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf)
          UnionOperator will transform the input rows if the inputObjInspectors from different parents are different.
protected  void CommonJoinOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf)
           
protected  void FileSinkOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf)
           
protected  void UDTFOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf)
           
protected  void ReduceSinkOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf)
           
 void MapOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf)
           
protected  void AbstractMapJoinOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf)
           
protected  void ScriptOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf)
           
protected  void CollectOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf)
           
protected  void FilterOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf)
           
protected  void JoinOperator.initializeOp(org.apache.hadoop.conf.Configuration hconf)
           
static Object FunctionRegistry.invoke(Method m, Object thisObject, Object... arguments)
           
 void Operator.jobClose(org.apache.hadoop.conf.Configuration conf, boolean success, JobCloseFeedBack feedBack)
          Unlike other operator interfaces, which are called from a map or reduce task, jobClose is called from the JobClient side once the job has completed.
 void FileSinkOperator.jobClose(org.apache.hadoop.conf.Configuration hconf, boolean success, JobCloseFeedBack feedBack)
           
 void JoinOperator.jobClose(org.apache.hadoop.conf.Configuration hconf, boolean success, JobCloseFeedBack feedBack)
           
static void ExecDriver.main(String[] args)
           
 void FileSinkOperator.mvFileToFinalPath(String specPath, org.apache.hadoop.conf.Configuration hconf, boolean success, org.apache.commons.logging.Log log, DynamicPartitionCtx dpCtx)
           
protected  GenericUDAFEvaluator.AggregationBuffer[] GroupByOperator.newAggregations()
           
 void Operator.process(Object row, int tag)
          Process the row.
 void MapOperator.process(org.apache.hadoop.io.Writable value)
           
 void ExecMapperContext.processInputFileChangeForLocalWork()
           
 void LateralViewJoinOperator.processOp(Object row, int tag)
          An important assumption for processOp() is that for a given row from the TS, the LVJ will first get the row from the left select operator, followed by all the corresponding rows from the UDTF operator.
 void LateralViewForwardOperator.processOp(Object row, int tag)
           
 void MapJoinOperator.processOp(Object row, int tag)
           
 void GroupByOperator.processOp(Object row, int tag)
           
abstract  void Operator.processOp(Object row, int tag)
          Process the row.
 void ExtractOperator.processOp(Object row, int tag)
           
 void SMBMapJoinOperator.processOp(Object row, int tag)
           
 void LimitOperator.processOp(Object row, int tag)
           
 void SelectOperator.processOp(Object row, int tag)
           
 void UnionOperator.processOp(Object row, int tag)
           
 void FileSinkOperator.processOp(Object row, int tag)
           
 void UDTFOperator.processOp(Object row, int tag)
           
 void ForwardOperator.processOp(Object row, int tag)
           
 void ReduceSinkOperator.processOp(Object row, int tag)
           
 void MapOperator.processOp(Object row, int tag)
           
 void TableScanOperator.processOp(Object row, int tag)
          Currently, the table scan operator does not do anything special other than just forwarding the row.
 void ScriptOperator.processOp(Object row, int tag)
           
 void CollectOperator.processOp(Object row, int tag)
           
 void FilterOperator.processOp(Object row, int tag)
           
 void JoinOperator.processOp(Object row, int tag)
           
static void Utilities.rename(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path src, org.apache.hadoop.fs.Path dst)
          Rename src to dst, or, if dst already exists, move the files in src into dst.
static void Utilities.renameOrMoveFiles(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path src, org.apache.hadoop.fs.Path dst)
          Rename src to dst, or, if dst already exists, move the files in src into dst.
protected  void GroupByOperator.resetAggregations(GenericUDAFEvaluator.AggregationBuffer[] aggs)
           
 void MapOperator.setChildren(org.apache.hadoop.conf.Configuration hconf)
           
 void GroupByOperator.startGroup()
           
 void Operator.startGroup()
           
 void CommonJoinOperator.startGroup()
           
static void FunctionRegistry.unregisterTemporaryUDF(String functionName)
           
protected  void GroupByOperator.updateAggregations(GenericUDAFEvaluator.AggregationBuffer[] aggs, Object row, ObjectInspector rowInspector, boolean hashAggr, boolean newEntryForHashAggr, Object[][] lastInvoke)
           
 

Constructors in org.apache.hadoop.hive.ql.exec that throw HiveException
ExecDriver(MapredWork plan, org.apache.hadoop.mapred.JobConf job, boolean isSilent)
          Constructor/Initialization for invocation as independent utility.
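A usage sketch for the ExprNodeEvaluator contract listed above (initialize once, then evaluate per row). A constant expression is used so that no real row is needed; passing null for the row inspector and the row is an assumption that holds only because a constant evaluator ignores both:

import org.apache.hadoop.hive.ql.exec.ExprNodeEvaluator;
import org.apache.hadoop.hive.ql.exec.ExprNodeEvaluatorFactory;
import org.apache.hadoop.hive.ql.metadata.HiveException;
import org.apache.hadoop.hive.ql.plan.ExprNodeConstantDesc;
import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector;

// Hypothetical driver class, not part of Hive.
public class EvaluatorSketch {
  public static void main(String[] args) throws HiveException {
    // A constant expression; real plans use column and function descriptors.
    ExprNodeConstantDesc desc = new ExprNodeConstantDesc(42);
    ExprNodeEvaluator eval = ExprNodeEvaluatorFactory.get(desc);

    // initialize() must be called once and only once before evaluate().
    ObjectInspector outputOI = eval.initialize(null); // constant evaluators ignore the row OI

    // evaluate() may then be called once per input row.
    Object result = eval.evaluate(null); // the row is unused for a constant
    System.out.println(outputOI.getTypeName() + ": " + result);
  }
}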
 

Uses of HiveException in org.apache.hadoop.hive.ql.exec.persistence
 

Methods in org.apache.hadoop.hive.ql.exec.persistence that throw HiveException
 void RowContainer.add(Row t)
           
 void HashMapWrapper.clear()
          Clean up the hash table.
 void RowContainer.clear()
          Remove all elements in the RowContainer.
 void HashMapWrapper.close()
          Close the persistent hash table and clean it up.
 void RowContainer.copyToDFSDirecory(org.apache.hadoop.fs.FileSystem destFs, org.apache.hadoop.fs.Path destPath)
           
 Row RowContainer.first()
           
 V HashMapWrapper.get(K key)
          Get the value based on the key.
 Row RowContainer.next()
           
 void HashMapWrapper.put(K key, V value)
          Put the key value pair in the hash table.
 void HashMapWrapper.remove(Object key)
          Remove the key-value pair for the given key from the hash table.
 

Constructors in org.apache.hadoop.hive.ql.exec.persistence that throw HiveException
RowContainer(org.apache.hadoop.conf.Configuration jc)
           
RowContainer(int blockSize, org.apache.hadoop.conf.Configuration jc)
           
RowContainer(int blockSize, SerDe sd, ObjectInspector oi, org.apache.hadoop.conf.Configuration jc)
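RowContainer buffers rows in memory up to a block size and can spill to disk beyond that, as used by the join operators. A minimal in-memory sketch, assuming rows are represented as List<Object> (RowContainer's type parameter); note that spilling additionally requires the SerDe-aware constructor above:

import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hive.ql.exec.persistence.RowContainer;
import org.apache.hadoop.hive.ql.metadata.HiveException;

// Hypothetical driver class, not part of Hive.
public class RowContainerSketch {
  public static void main(String[] args) throws HiveException {
    Configuration conf = new Configuration();
    // blockSize caps the in-memory block; this sketch stays below it.
    RowContainer<List<Object>> rows = new RowContainer<List<Object>>(1024, conf);

    List<Object> row = new ArrayList<Object>();
    row.add("key");
    row.add(Integer.valueOf(1));
    rows.add(row);

    // Iterate with first()/next(); next() returns null once exhausted.
    for (List<Object> r = rows.first(); r != null; r = rows.next()) {
      System.out.println(r);
    }
    rows.clear(); // removes all rows (and any spill files)
  }
}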
           
 

Uses of HiveException in org.apache.hadoop.hive.ql.io
 

Methods in org.apache.hadoop.hive.ql.io that throw HiveException
static boolean HiveFileFormatUtils.checkInputFormat(org.apache.hadoop.fs.FileSystem fs, HiveConf conf, Class<? extends org.apache.hadoop.mapred.InputFormat> inputFormatCls, ArrayList<org.apache.hadoop.fs.FileStatus> files)
          Checks whether the given files are in the same format as the given input format.
static FileSinkOperator.RecordWriter HiveFileFormatUtils.getHiveRecordWriter(org.apache.hadoop.mapred.JobConf jc, TableDesc tableInfo, Class<? extends org.apache.hadoop.io.Writable> outputClass, FileSinkDesc conf, org.apache.hadoop.fs.Path outPath)
           
static FileSinkOperator.RecordWriter HiveFileFormatUtils.getRecordWriter(org.apache.hadoop.mapred.JobConf jc, HiveOutputFormat<?,?> hiveOutputFormat, Class<? extends org.apache.hadoop.io.Writable> valueClass, boolean isCompressed, Properties tableProp, org.apache.hadoop.fs.Path outPath)
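A sketch of the format check above; the warehouse path is hypothetical, and this assumes a reachable FileSystem and a locally configured HiveConf:

import java.util.ArrayList;
import java.util.Arrays;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hadoop.hive.ql.io.HiveFileFormatUtils;
import org.apache.hadoop.mapred.TextInputFormat;

// Hypothetical driver class, not part of Hive.
public class CheckFormatSketch {
  public static void main(String[] args) throws Exception {
    HiveConf conf = new HiveConf(CheckFormatSketch.class);
    FileSystem fs = FileSystem.get(conf);
    Path dir = new Path("/user/hive/warehouse/example"); // hypothetical path
    ArrayList<FileStatus> files =
        new ArrayList<FileStatus>(Arrays.asList(fs.listStatus(dir)));
    // Throws HiveException on failure; returns whether the files match.
    boolean matches =
        HiveFileFormatUtils.checkInputFormat(fs, conf, TextInputFormat.class, files);
    System.out.println("files match TextInputFormat: " + matches);
  }
}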
           
 

Uses of HiveException in org.apache.hadoop.hive.ql.metadata
 

Subclasses of HiveException in org.apache.hadoop.hive.ql.metadata
 class InvalidTableException
          Exception thrown when a referenced table is invalid or does not exist.
 

Methods in org.apache.hadoop.hive.ql.metadata that throw HiveException
 void Hive.alterPartition(String tblName, Partition newPart)
          Updates the existing partition metadata with the new metadata.
 void Hive.alterTable(String tblName, Table newTbl)
          Updates the existing table metadata with the new metadata.
 void HiveMetaStoreChecker.checkMetastore(String dbName, String tableName, List<? extends Map<String,String>> partitions, CheckResult result)
          Check the metastore for inconsistencies, data missing in either the metastore or on the dfs.
 void Table.checkValidity()
           
 Table Table.copy()
           
protected  void Table.copyFiles(org.apache.hadoop.fs.Path srcf)
          Inserts the specified files into the partition.
protected static void Hive.copyFiles(org.apache.hadoop.fs.Path srcf, org.apache.hadoop.fs.Path destf, org.apache.hadoop.fs.FileSystem fs)
           
 void Hive.createDatabase(Database db)
          Create a Database.
 void Hive.createDatabase(Database db, boolean ifNotExist)
          Create a database; when ifNotExist is set, an existing database is not an error.
 Partition Hive.createPartition(Table tbl, Map<String,String> partSpec)
          Creates a partition.
 Partition Hive.createPartition(Table tbl, Map<String,String> partSpec, org.apache.hadoop.fs.Path location)
          Creates a partition at the specified location.
 void Hive.createTable(String tableName, List<String> columns, List<String> partCols, Class<? extends org.apache.hadoop.mapred.InputFormat> fileInputFormat, Class<?> fileOutputFormat)
          Creates the table metadata and the directory for the table data.
 void Hive.createTable(String tableName, List<String> columns, List<String> partCols, Class<? extends org.apache.hadoop.mapred.InputFormat> fileInputFormat, Class<?> fileOutputFormat, int bucketCount, List<String> bucketCols)
          Creates the table metadata and the directory for the table data.
 void Hive.createTable(Table tbl)
          Creates the table from the given Table object.
 void Hive.createTable(Table tbl, boolean ifNotExists)
          Creates the table from the given Table object; when ifNotExists is set, an existing table is not an error.
 boolean Hive.databaseExists(String dbName)
          Query metadata to see if a database with the given name already exists.
 void Hive.dropDatabase(String name)
          Drop a database.
 void Hive.dropDatabase(String name, boolean deleteData, boolean ignoreUnknownDb)
          Drop a database, optionally deleting its data and ignoring an unknown database.
 boolean Hive.dropPartition(String db_name, String tbl_name, List<String> part_vals, boolean deleteData)
           
 void Hive.dropTable(String tableName)
          Drops table along with the data in it.
 void Hive.dropTable(String dbName, String tableName)
          Drops table along with the data in it.
 void Hive.dropTable(String dbName, String tableName, boolean deleteData, boolean ignoreUnknownTab)
          Drops the table.
static Hive Hive.get()
           
static Hive Hive.get(HiveConf c)
          Gets the Hive object for the current thread.
static Hive Hive.get(HiveConf c, boolean needsRefresh)
          Get a connection to the metastore (a usage sketch appears at the end of this section).
 List<String> Hive.getAllDatabases()
          Get all existing database names.
 List<String> Hive.getAllTables()
          Get all table names for the current database.
 List<String> Hive.getAllTables(String dbName)
          Get all table names for the specified database.
 List<String> Hive.getDatabasesByPattern(String databasePattern)
          Get all existing databases that match the given pattern.
static List<FieldSchema> Hive.getFieldsFromDeserializer(String name, Deserializer serde)
           
 Class<? extends org.apache.hadoop.mapred.InputFormat> Partition.getInputFormatClass()
           
 Class<? extends HiveOutputFormat> Partition.getOutputFormatClass()
           
 Partition Hive.getPartition(Table tbl, Map<String,String> partSpec, boolean forceCreate)
          Returns partition metadata.
 List<String> Hive.getPartitionNames(String dbName, String tblName, Map<String,String> partSpec, short max)
           
 List<String> Hive.getPartitionNames(String dbName, String tblName, short max)
           
 List<Partition> Hive.getPartitions(Table tbl)
          Get all the partitions of the table.
 List<Partition> Hive.getPartitions(Table tbl, Map<String,String> partialPartSpec)
          Get all partitions of the table that match the given partial specification.
 org.apache.hadoop.fs.Path[] Partition.getPath(Sample s)
           
static HiveStorageHandler HiveUtils.getStorageHandler(org.apache.hadoop.conf.Configuration conf, String className)
           
 Table Hive.getTable(String tableName)
          Returns metadata for the table named tableName in the current database.
 Table Hive.getTable(String dbName, String tableName)
          Returns metadata of the table.
 Table Hive.getTable(String dbName, String tableName, boolean throwException)
          Returns metadata of the table.
 List<String> Hive.getTablesByPattern(String tablePattern)
          Returns all existing tables from default database which match the given pattern.
 List<String> Hive.getTablesByPattern(String dbName, String tablePattern)
          Returns all existing tables from the specified database which match the given pattern.
 List<String> Hive.getTablesForDb(String database, String tablePattern)
          Returns all existing tables from the given database which match the given pattern.
 boolean Table.isValidSpec(Map<String,String> spec)
           
 ArrayList<LinkedHashMap<String,String>> Hive.loadDynamicPartitions(org.apache.hadoop.fs.Path loadPath, String tableName, Map<String,String> partSpec, boolean replace, org.apache.hadoop.fs.Path tmpDirPath, int numDP)
          Given a source directory name of the load path, load all dynamically generated partitions into the specified table and return a list of strings that represent the dynamic partition paths.
 void Hive.loadPartition(org.apache.hadoop.fs.Path loadPath, String tableName, Map<String,String> partSpec, boolean replace, org.apache.hadoop.fs.Path tmpDirPath)
          Load a directory into a Hive Table Partition - Alters existing content of the partition with the contents of loadPath.
 void Hive.loadTable(org.apache.hadoop.fs.Path loadPath, String tableName, boolean replace, org.apache.hadoop.fs.Path tmpDirPath)
          Load a directory into a Hive Table.
protected  void Table.replaceFiles(org.apache.hadoop.fs.Path srcf, org.apache.hadoop.fs.Path tmpd)
          Replaces files in the partition with new data set specified by srcf.
protected static void Hive.replaceFiles(org.apache.hadoop.fs.Path srcf, org.apache.hadoop.fs.Path destf, org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path tmppath)
          Replaces files in the partition with the new data set specified by srcf.
 void Table.setBucketCols(List<String> bucketCols)
           
 void Table.setInputFormatClass(String name)
           
 void Table.setOutputFormatClass(String name)
           
 void Table.setSortCols(List<Order> sortOrder)
           
 

Constructors in org.apache.hadoop.hive.ql.metadata that throw HiveException
Partition(Table tbl)
          Create an empty partition.
Partition(Table tbl, Map<String,String> partSpec, org.apache.hadoop.fs.Path location)
          Create partition object with the given info.
Partition(Table tbl, Partition tp)
           
Sample(int num, int fraction, Dimension d)
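A usage sketch for the metastore client above, assuming a configured metastore and a table named example in the default database (both hypothetical):

import java.util.List;
import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hadoop.hive.ql.metadata.Hive;
import org.apache.hadoop.hive.ql.metadata.HiveException;
import org.apache.hadoop.hive.ql.metadata.InvalidTableException;
import org.apache.hadoop.hive.ql.metadata.Partition;
import org.apache.hadoop.hive.ql.metadata.Table;

// Hypothetical driver class, not part of Hive.
public class MetadataSketch {
  public static void main(String[] args) {
    try {
      // Hive.get() returns the thread-local client for the given conf.
      Hive db = Hive.get(new HiveConf(MetadataSketch.class));
      Table tbl = db.getTable("example"); // current (default) database
      for (Partition p : db.getPartitions(tbl)) {
        System.out.println(p);
      }
      // Pattern syntax follows the metastore's matching rules.
      List<String> tables = db.getTablesByPattern("example");
      System.out.println(tables);
    } catch (InvalidTableException e) {
      System.err.println("no such table: " + e.getMessage());
    } catch (HiveException e) {
      e.printStackTrace();
    }
  }
}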
           
 

Uses of HiveException in org.apache.hadoop.hive.ql.optimizer.ppr
 

Methods in org.apache.hadoop.hive.ql.optimizer.ppr that throw HiveException
static PrunedPartitionList PartitionPruner.prune(Table tab, ExprNodeDesc prunerExpr, HiveConf conf, String alias, Map<String,PrunedPartitionList> prunedPartitionsMap)
          Get the partition list for the table that satisfies the partition pruner condition.
 

Uses of HiveException in org.apache.hadoop.hive.ql.parse
 

Subclasses of HiveException in org.apache.hadoop.hive.ql.parse
 class SemanticException
          Exception from SemanticAnalyzer.
 

Uses of HiveException in org.apache.hadoop.hive.ql.plan
 

Constructors in org.apache.hadoop.hive.ql.plan that throw HiveException
PartitionDesc(Partition part)
           
 

Uses of HiveException in org.apache.hadoop.hive.ql.udf.generic
 

Methods in org.apache.hadoop.hive.ql.udf.generic that throw HiveException
 void GenericUDAFEvaluator.aggregate(GenericUDAFEvaluator.AggregationBuffer agg, Object[] parameters)
          This function will be called by GroupByOperator when it sees a new input row.
 void GenericUDTFExplode.close()
           
abstract  void GenericUDTF.close()
          Called to notify the UDTF that there are no more rows to process.
 void Collector.collect(Object input)
          Called by other classes to hand output data to the collector.
 void UDTFCollector.collect(Object input)
           
 Object GenericUDAFEvaluator.evaluate(GenericUDAFEvaluator.AggregationBuffer agg)
          Called by GroupByOperator to obtain the aggregation result; dispatches to terminatePartial or terminate depending on the mode (see the evaluator sketch at the end of this section).
 Object GenericUDFHash.evaluate(GenericUDF.DeferredObject[] arguments)
           
 Object GenericUDFIn.evaluate(GenericUDF.DeferredObject[] arguments)
           
 Object GenericUDFArrayContains.evaluate(GenericUDF.DeferredObject[] arguments)
           
 Object GenericUDFInstr.evaluate(GenericUDF.DeferredObject[] arguments)
           
 Object GenericUDFElt.evaluate(GenericUDF.DeferredObject[] arguments)
           
 Object GenericUDFSplit.evaluate(GenericUDF.DeferredObject[] arguments)
           
 Object GenericUDFField.evaluate(GenericUDF.DeferredObject[] arguments)
           
 Object GenericUDFLocate.evaluate(GenericUDF.DeferredObject[] arguments)
           
 Object GenericUDFBridge.evaluate(GenericUDF.DeferredObject[] arguments)
           
 Object GenericUDFStruct.evaluate(GenericUDF.DeferredObject[] arguments)
           
 Object GenericUDFConcatWS.evaluate(GenericUDF.DeferredObject[] arguments)
           
 Object GenericUDFIndex.evaluate(GenericUDF.DeferredObject[] arguments)
           
 Object GenericUDFOPNull.evaluate(GenericUDF.DeferredObject[] arguments)
           
 Object GenericUDFOPNotNull.evaluate(GenericUDF.DeferredObject[] arguments)
           
 Object GenericUDFMap.evaluate(GenericUDF.DeferredObject[] arguments)
           
 Object GenericUDFSize.evaluate(GenericUDF.DeferredObject[] arguments)
           
 Object GenericUDFWhen.evaluate(GenericUDF.DeferredObject[] arguments)
           
 Object GenericUDFArray.evaluate(GenericUDF.DeferredObject[] arguments)
           
 Object GenericUDFCase.evaluate(GenericUDF.DeferredObject[] arguments)
           
 Object GenericUDFIf.evaluate(GenericUDF.DeferredObject[] arguments)
           
abstract  Object GenericUDF.evaluate(GenericUDF.DeferredObject[] arguments)
          Evaluate the GenericUDF with the arguments.
 Object GenericUDFCoalesce.evaluate(GenericUDF.DeferredObject[] arguments)
           
protected  void GenericUDTF.forward(Object o)
          Passes an output row to the collector.
 Object GenericUDF.DeferredObject.get()
           
 GenericUDAFEvaluator.AggregationBuffer GenericUDAFAverage.GenericUDAFAverageEvaluator.getNewAggregationBuffer()
           
 GenericUDAFEvaluator.AggregationBuffer GenericUDAFMin.GenericUDAFMinEvaluator.getNewAggregationBuffer()
           
 GenericUDAFEvaluator.AggregationBuffer GenericUDAFCount.GenericUDAFCountEvaluator.getNewAggregationBuffer()
           
 GenericUDAFEvaluator.AggregationBuffer GenericUDAFSum.GenericUDAFSumDouble.getNewAggregationBuffer()
           
 GenericUDAFEvaluator.AggregationBuffer GenericUDAFSum.GenericUDAFSumLong.getNewAggregationBuffer()
           
 GenericUDAFEvaluator.AggregationBuffer GenericUDAFHistogramNumeric.GenericUDAFHistogramNumericEvaluator.getNewAggregationBuffer()
           
abstract  GenericUDAFEvaluator.AggregationBuffer GenericUDAFEvaluator.getNewAggregationBuffer()
          Get a new aggregation object.
 GenericUDAFEvaluator.AggregationBuffer GenericUDAFVariance.GenericUDAFVarianceEvaluator.getNewAggregationBuffer()
           
 GenericUDAFEvaluator.AggregationBuffer GenericUDAFMax.GenericUDAFMaxEvaluator.getNewAggregationBuffer()
           
 ObjectInspector GenericUDAFAverage.GenericUDAFAverageEvaluator.init(GenericUDAFEvaluator.Mode m, ObjectInspector[] parameters)
           
 ObjectInspector GenericUDAFMin.GenericUDAFMinEvaluator.init(GenericUDAFEvaluator.Mode m, ObjectInspector[] parameters)
           
 ObjectInspector GenericUDAFCount.GenericUDAFCountEvaluator.init(GenericUDAFEvaluator.Mode m, ObjectInspector[] parameters)
           
 ObjectInspector GenericUDAFSum.GenericUDAFSumDouble.init(GenericUDAFEvaluator.Mode m, ObjectInspector[] parameters)
           
 ObjectInspector GenericUDAFSum.GenericUDAFSumLong.init(GenericUDAFEvaluator.Mode m, ObjectInspector[] parameters)
           
 ObjectInspector GenericUDAFBridge.GenericUDAFBridgeEvaluator.init(GenericUDAFEvaluator.Mode m, ObjectInspector[] parameters)
           
 ObjectInspector GenericUDAFHistogramNumeric.GenericUDAFHistogramNumericEvaluator.init(GenericUDAFEvaluator.Mode m, ObjectInspector[] parameters)
           
 ObjectInspector GenericUDAFEvaluator.init(GenericUDAFEvaluator.Mode m, ObjectInspector[] parameters)
          Initialize the evaluator.
 ObjectInspector GenericUDAFVariance.GenericUDAFVarianceEvaluator.init(GenericUDAFEvaluator.Mode m, ObjectInspector[] parameters)
           
 ObjectInspector GenericUDAFMax.GenericUDAFMaxEvaluator.init(GenericUDAFEvaluator.Mode m, ObjectInspector[] parameters)
           
 void GenericUDAFAverage.GenericUDAFAverageEvaluator.iterate(GenericUDAFEvaluator.AggregationBuffer agg, Object[] parameters)
           
 void GenericUDAFMin.GenericUDAFMinEvaluator.iterate(GenericUDAFEvaluator.AggregationBuffer agg, Object[] parameters)
           
 void GenericUDAFCount.GenericUDAFCountEvaluator.iterate(GenericUDAFEvaluator.AggregationBuffer agg, Object[] parameters)
           
 void GenericUDAFSum.GenericUDAFSumDouble.iterate(GenericUDAFEvaluator.AggregationBuffer agg, Object[] parameters)
           
 void GenericUDAFSum.GenericUDAFSumLong.iterate(GenericUDAFEvaluator.AggregationBuffer agg, Object[] parameters)
           
 void GenericUDAFBridge.GenericUDAFBridgeEvaluator.iterate(GenericUDAFEvaluator.AggregationBuffer agg, Object[] parameters)
           
 void GenericUDAFHistogramNumeric.GenericUDAFHistogramNumericEvaluator.iterate(GenericUDAFEvaluator.AggregationBuffer agg, Object[] parameters)
           
abstract  void GenericUDAFEvaluator.iterate(GenericUDAFEvaluator.AggregationBuffer agg, Object[] parameters)
          Iterate through original data.
 void GenericUDAFVariance.GenericUDAFVarianceEvaluator.iterate(GenericUDAFEvaluator.AggregationBuffer agg, Object[] parameters)
           
 void GenericUDAFMax.GenericUDAFMaxEvaluator.iterate(GenericUDAFEvaluator.AggregationBuffer agg, Object[] parameters)
           
 void GenericUDAFAverage.GenericUDAFAverageEvaluator.merge(GenericUDAFEvaluator.AggregationBuffer agg, Object partial)
           
 void GenericUDAFMin.GenericUDAFMinEvaluator.merge(GenericUDAFEvaluator.AggregationBuffer agg, Object partial)
           
 void GenericUDAFCount.GenericUDAFCountEvaluator.merge(GenericUDAFEvaluator.AggregationBuffer agg, Object partial)
           
 void GenericUDAFSum.GenericUDAFSumDouble.merge(GenericUDAFEvaluator.AggregationBuffer agg, Object partial)
           
 void GenericUDAFSum.GenericUDAFSumLong.merge(GenericUDAFEvaluator.AggregationBuffer agg, Object partial)
           
 void GenericUDAFBridge.GenericUDAFBridgeEvaluator.merge(GenericUDAFEvaluator.AggregationBuffer agg, Object partial)
           
 void GenericUDAFHistogramNumeric.GenericUDAFHistogramNumericEvaluator.merge(GenericUDAFEvaluator.AggregationBuffer agg, Object partial)
           
abstract  void GenericUDAFEvaluator.merge(GenericUDAFEvaluator.AggregationBuffer agg, Object partial)
          Merge with partial aggregation result.
 void GenericUDAFVariance.GenericUDAFVarianceEvaluator.merge(GenericUDAFEvaluator.AggregationBuffer agg, Object partial)
           
 void GenericUDAFMax.GenericUDAFMaxEvaluator.merge(GenericUDAFEvaluator.AggregationBuffer agg, Object partial)
           
 void GenericUDTFExplode.process(Object[] o)
           
abstract  void GenericUDTF.process(Object[] args)
          Give a set of arguments for the UDTF to process.
 void GenericUDAFAverage.GenericUDAFAverageEvaluator.reset(GenericUDAFEvaluator.AggregationBuffer agg)
           
 void GenericUDAFMin.GenericUDAFMinEvaluator.reset(GenericUDAFEvaluator.AggregationBuffer agg)
           
 void GenericUDAFCount.GenericUDAFCountEvaluator.reset(GenericUDAFEvaluator.AggregationBuffer agg)
           
 void GenericUDAFSum.GenericUDAFSumDouble.reset(GenericUDAFEvaluator.AggregationBuffer agg)
           
 void GenericUDAFSum.GenericUDAFSumLong.reset(GenericUDAFEvaluator.AggregationBuffer agg)
           
 void GenericUDAFBridge.GenericUDAFBridgeEvaluator.reset(GenericUDAFEvaluator.AggregationBuffer agg)
           
 void GenericUDAFHistogramNumeric.GenericUDAFHistogramNumericEvaluator.reset(GenericUDAFEvaluator.AggregationBuffer agg)
           
abstract  void GenericUDAFEvaluator.reset(GenericUDAFEvaluator.AggregationBuffer agg)
          Reset the aggregation.
 void GenericUDAFVariance.GenericUDAFVarianceEvaluator.reset(GenericUDAFEvaluator.AggregationBuffer agg)
           
 void GenericUDAFMax.GenericUDAFMaxEvaluator.reset(GenericUDAFEvaluator.AggregationBuffer agg)
           
 Object GenericUDAFAverage.GenericUDAFAverageEvaluator.terminate(GenericUDAFEvaluator.AggregationBuffer agg)
           
 Object GenericUDAFMin.GenericUDAFMinEvaluator.terminate(GenericUDAFEvaluator.AggregationBuffer agg)
           
 Object GenericUDAFVarianceSample.GenericUDAFVarianceSampleEvaluator.terminate(GenericUDAFEvaluator.AggregationBuffer agg)
           
 Object GenericUDAFCount.GenericUDAFCountEvaluator.terminate(GenericUDAFEvaluator.AggregationBuffer agg)
           
 Object GenericUDAFSum.GenericUDAFSumDouble.terminate(GenericUDAFEvaluator.AggregationBuffer agg)
           
 Object GenericUDAFSum.GenericUDAFSumLong.terminate(GenericUDAFEvaluator.AggregationBuffer agg)
           
 Object GenericUDAFBridge.GenericUDAFBridgeEvaluator.terminate(GenericUDAFEvaluator.AggregationBuffer agg)
           
 Object GenericUDAFStd.GenericUDAFStdEvaluator.terminate(GenericUDAFEvaluator.AggregationBuffer agg)
           
 Object GenericUDAFHistogramNumeric.GenericUDAFHistogramNumericEvaluator.terminate(GenericUDAFEvaluator.AggregationBuffer agg)
           
abstract  Object GenericUDAFEvaluator.terminate(GenericUDAFEvaluator.AggregationBuffer agg)
          Get final aggregation result.
 Object GenericUDAFVariance.GenericUDAFVarianceEvaluator.terminate(GenericUDAFEvaluator.AggregationBuffer agg)
           
 Object GenericUDAFStdSample.GenericUDAFStdSampleEvaluator.terminate(GenericUDAFEvaluator.AggregationBuffer agg)
           
 Object GenericUDAFMax.GenericUDAFMaxEvaluator.terminate(GenericUDAFEvaluator.AggregationBuffer agg)
           
 Object GenericUDAFAverage.GenericUDAFAverageEvaluator.terminatePartial(GenericUDAFEvaluator.AggregationBuffer agg)
           
 Object GenericUDAFMin.GenericUDAFMinEvaluator.terminatePartial(GenericUDAFEvaluator.AggregationBuffer agg)
           
 Object GenericUDAFCount.GenericUDAFCountEvaluator.terminatePartial(GenericUDAFEvaluator.AggregationBuffer agg)
           
 Object GenericUDAFSum.GenericUDAFSumDouble.terminatePartial(GenericUDAFEvaluator.AggregationBuffer agg)
           
 Object GenericUDAFSum.GenericUDAFSumLong.terminatePartial(GenericUDAFEvaluator.AggregationBuffer agg)
           
 Object GenericUDAFBridge.GenericUDAFBridgeEvaluator.terminatePartial(GenericUDAFEvaluator.AggregationBuffer agg)
           
 Object GenericUDAFHistogramNumeric.GenericUDAFHistogramNumericEvaluator.terminatePartial(GenericUDAFEvaluator.AggregationBuffer agg)
           
abstract  Object GenericUDAFEvaluator.terminatePartial(GenericUDAFEvaluator.AggregationBuffer agg)
          Get partial aggregation result.
 Object GenericUDAFVariance.GenericUDAFVarianceEvaluator.terminatePartial(GenericUDAFEvaluator.AggregationBuffer agg)
           
 Object GenericUDAFMax.GenericUDAFMaxEvaluator.terminatePartial(GenericUDAFEvaluator.AggregationBuffer agg)
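The evaluator methods above follow a fixed life cycle: init for each mode, getNewAggregationBuffer/reset, iterate over raw rows, terminatePartial and merge around the shuffle, and terminate for the final result. A minimal sketch of that life cycle; this hypothetical CountNonNullEvaluator is not the built-in count implementation:

import org.apache.hadoop.hive.ql.metadata.HiveException;
import org.apache.hadoop.hive.ql.udf.generic.GenericUDAFEvaluator;
import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector;
import org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorFactory;
import org.apache.hadoop.io.LongWritable;

public class CountNonNullEvaluator extends GenericUDAFEvaluator {
  static class CountBuffer implements AggregationBuffer {
    long count;
  }

  @Override
  public ObjectInspector init(Mode m, ObjectInspector[] parameters)
      throws HiveException {
    super.init(m, parameters);
    // Both the partial and the final results are longs here.
    return PrimitiveObjectInspectorFactory.writableLongObjectInspector;
  }

  @Override
  public AggregationBuffer getNewAggregationBuffer() throws HiveException {
    return new CountBuffer();
  }

  @Override
  public void reset(AggregationBuffer agg) throws HiveException {
    ((CountBuffer) agg).count = 0;
  }

  @Override
  public void iterate(AggregationBuffer agg, Object[] parameters)
      throws HiveException {
    if (parameters[0] != null) {
      ((CountBuffer) agg).count++;
    }
  }

  @Override
  public Object terminatePartial(AggregationBuffer agg) throws HiveException {
    return terminate(agg);
  }

  @Override
  public void merge(AggregationBuffer agg, Object partial) throws HiveException {
    // Simplification: a robust implementation reads partial through the
    // ObjectInspector passed to init() rather than casting directly.
    if (partial != null) {
      ((CountBuffer) agg).count += ((LongWritable) partial).get();
    }
  }

  @Override
  public Object terminate(AggregationBuffer agg) throws HiveException {
    return new LongWritable(((CountBuffer) agg).count);
  }
}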
           
 

Uses of HiveException in org.apache.hadoop.hive.ql.udf.xml
 

Methods in org.apache.hadoop.hive.ql.udf.xml that throw HiveException
 Object GenericUDFXPath.evaluate(GenericUDF.DeferredObject[] arguments)
           
 



Copyright © 2010 The Apache Software Foundation