Packages that use SemanticException

  org.apache.hadoop.hive.ql
  org.apache.hadoop.hive.ql.exec
      Hive QL execution tasks, operators, functions and other handlers.
  org.apache.hadoop.hive.ql.lib
  org.apache.hadoop.hive.ql.metadata
  org.apache.hadoop.hive.ql.optimizer
  org.apache.hadoop.hive.ql.optimizer.index
  org.apache.hadoop.hive.ql.optimizer.lineage
  org.apache.hadoop.hive.ql.optimizer.pcr
  org.apache.hadoop.hive.ql.optimizer.physical
  org.apache.hadoop.hive.ql.optimizer.physical.index
  org.apache.hadoop.hive.ql.optimizer.ppr
  org.apache.hadoop.hive.ql.optimizer.unionproc
  org.apache.hadoop.hive.ql.parse
  org.apache.hadoop.hive.ql.plan
  org.apache.hadoop.hive.ql.ppd
  org.apache.hadoop.hive.ql.tools
  org.apache.hadoop.hive.ql.udf.generic
      Standard toolkit and framework for generic user-defined functions.
  org.apache.hive.builtins
Uses of SemanticException in org.apache.hadoop.hive.ql

Methods in org.apache.hadoop.hive.ql that throw SemanticException:

  void QTestUtil.resetParser()
Uses of SemanticException in org.apache.hadoop.hive.ql.exec

Subclasses of SemanticException in org.apache.hadoop.hive.ql.exec:

  class AmbiguousMethodException
      Exception thrown by the UDF and UDAF method resolvers when a unique method is not found.
  class NoMatchingMethodException
      Exception thrown by the UDF and UDAF method resolvers when no matching method is found.
  class UDFArgumentException
      Exception thrown when there is something wrong with a UDF argument.
  class UDFArgumentLengthException
      Exception thrown when a UDF receives the wrong number of arguments.
  class UDFArgumentTypeException
      Exception thrown when UDF arguments have the wrong types.
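These subclasses surface as compile-time errors: a UDF typically throws them from GenericUDF.initialize while argument types are being checked. A minimal sketch (the class name and its string-only contract are illustrative assumptions, not an existing built-in):

    // Hedged sketch of a one-argument, string-only GenericUDF.
    import org.apache.hadoop.hive.ql.exec.UDFArgumentException;
    import org.apache.hadoop.hive.ql.exec.UDFArgumentLengthException;
    import org.apache.hadoop.hive.ql.exec.UDFArgumentTypeException;
    import org.apache.hadoop.hive.ql.metadata.HiveException;
    import org.apache.hadoop.hive.ql.udf.generic.GenericUDF;
    import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector;
    import org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorFactory;

    public class GenericUDFIdentityString extends GenericUDF {
      @Override
      public ObjectInspector initialize(ObjectInspector[] arguments)
          throws UDFArgumentException {
        if (arguments.length != 1) {
          // Wrong arity: refuse the call during compilation, not at runtime.
          throw new UDFArgumentLengthException("identity_string() takes exactly one argument");
        }
        if (!"string".equals(arguments[0].getTypeName())) {
          // Wrong type: the first constructor argument is the offending position.
          throw new UDFArgumentTypeException(0, "identity_string() expects a string argument");
        }
        return PrimitiveObjectInspectorFactory.javaStringObjectInspector;
      }

      @Override
      public Object evaluate(DeferredObject[] arguments) throws HiveException {
        Object value = arguments[0].get();
        return value == null ? null : value.toString();
      }

      @Override
      public String getDisplayString(String[] children) {
        return "identity_string(" + children[0] + ")";
      }
    }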
Methods in org.apache.hadoop.hive.ql.exec that throw SemanticException:

  static GenericUDAFEvaluator FunctionRegistry.getGenericUDAFEvaluator(String name, List<ObjectInspector> argumentOIs, boolean isDistinct, boolean isAllColumns)
      Get the GenericUDAF evaluator for the name and argumentClasses.
  void Operator.removeChildAndAdoptItsChildren(Operator<? extends Serializable> child)
      Remove a child and add all of the child's children to the location of the child.
  static void Utilities.reworkMapRedWork(Task<? extends Serializable> task, boolean reworkMapredWork, HiveConf conf)
      The check here is kind of not clean.
  static void Utilities.validateColumnNames(List<String> colNames, List<String> checkCols)
  static void Utilities.validatePartSpec(Table tbl, Map<String,String> partSpec)
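A hedged sketch of resolving an aggregate evaluator by name with the signature listed above; the "count" function name and the single-long-column argument list are illustrative:

    import java.util.Arrays;
    import java.util.List;
    import org.apache.hadoop.hive.ql.exec.FunctionRegistry;
    import org.apache.hadoop.hive.ql.parse.SemanticException;
    import org.apache.hadoop.hive.ql.udf.generic.GenericUDAFEvaluator;
    import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector;
    import org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorFactory;

    public class EvaluatorLookup {
      public static void main(String[] args) throws SemanticException {
        // Resolve the evaluator for count(x) over a single long column.
        List<ObjectInspector> argOIs = Arrays.<ObjectInspector>asList(
            PrimitiveObjectInspectorFactory.javaLongObjectInspector);
        GenericUDAFEvaluator eval = FunctionRegistry.getGenericUDAFEvaluator(
            "count", argOIs, false /* isDistinct */, false /* isAllColumns */);
        System.out.println(eval.getClass().getName());
      }
    }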
Uses of SemanticException in org.apache.hadoop.hive.ql.lib

Methods in org.apache.hadoop.hive.ql.lib that throw SemanticException:

  int RuleRegExp.cost(Stack<Node> stack)
      Returns the cost of the rule for the specified stack.
  int Rule.cost(Stack<Node> stack)
  void TaskGraphWalker.dispatch(Node nd, Stack<Node> ndStack)
  void DefaultGraphWalker.dispatch(Node nd, Stack<Node> ndStack)
      Dispatch the current operator.
  Object Dispatcher.dispatch(Node nd, Stack<Node> stack, Object... nodeOutputs)
      Dispatcher function.
  Object DefaultRuleDispatcher.dispatch(Node nd, Stack<Node> ndStack, Object... nodeOutputs)
      Dispatcher function.
  void TaskGraphWalker.dispatch(Node nd, Stack<Node> ndStack, TaskGraphWalker.TaskGraphWalkerContext walkerCtx)
      Dispatch the current operator.
  Object NodeProcessor.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)
      Generic process for all ops that don't have specific implementations.
  void TaskGraphWalker.startWalking(Collection<Node> startNodes, HashMap<Node,Object> nodeOutput)
      Starting point for walking.
  void GraphWalker.startWalking(Collection<Node> startNodes, HashMap<Node,Object> nodeOutput)
      Starting point for walking.
  void DefaultGraphWalker.startWalking(Collection<Node> startNodes, HashMap<Node,Object> nodeOutput)
      Starting point for walking.
  void TaskGraphWalker.walk(Node nd)
      Walk the current operator and its descendants.
  void PreOrderWalker.walk(Node nd)
      Walk the current operator and its descendants.
  void DefaultGraphWalker.walk(Node nd)
      Walk the current operator and its descendants.
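These lib classes compose into Hive's standard tree-walking pattern: rules map node patterns to processors, a dispatcher picks the cheapest matching rule, and a walker drives dispatch over the operator DAG, with SemanticException propagating out of startWalking. A minimal sketch, assuming a caller supplies the top-level nodes and a NodeProcessorCtx implementation:

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.LinkedHashMap;
    import java.util.Map;
    import java.util.Stack;
    import org.apache.hadoop.hive.ql.lib.*;
    import org.apache.hadoop.hive.ql.parse.SemanticException;

    public class WalkExample {
      // Processor invoked when the rule matches; throwing SemanticException aborts the walk.
      static final NodeProcessor TS_PROC = new NodeProcessor() {
        public Object process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx,
            Object... nodeOutputs) throws SemanticException {
          // ... handle a matched table scan here ...
          return null;
        }
      };

      // Fallback for nodes that match no rule.
      static final NodeProcessor DEFAULT_PROC = new NodeProcessor() {
        public Object process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx,
            Object... nodeOutputs) throws SemanticException {
          return null;
        }
      };

      public static void walk(ArrayList<Node> topNodes, NodeProcessorCtx procCtx)
          throws SemanticException {
        Map<Rule, NodeProcessor> rules = new LinkedHashMap<Rule, NodeProcessor>();
        rules.put(new RuleRegExp("R1", "TS%"), TS_PROC);  // "TS%" matches table scan operators
        Dispatcher disp = new DefaultRuleDispatcher(DEFAULT_PROC, rules, procCtx);
        GraphWalker walker = new DefaultGraphWalker(disp);
        HashMap<Node, Object> nodeOutput = new HashMap<Node, Object>();
        walker.startWalking(topNodes, nodeOutput);  // may throw SemanticException
      }
    }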
Uses of SemanticException in org.apache.hadoop.hive.ql.metadata

Methods in org.apache.hadoop.hive.ql.metadata that throw SemanticException:

  void DummySemanticAnalyzerHook1.postAnalyze(HiveSemanticAnalyzerHookContext context, List<Task<? extends Serializable>> rootTasks)
  void DummySemanticAnalyzerHook.postAnalyze(HiveSemanticAnalyzerHookContext context, List<Task<? extends Serializable>> rootTasks)
  ASTNode DummySemanticAnalyzerHook1.preAnalyze(HiveSemanticAnalyzerHookContext context, ASTNode ast)
  ASTNode DummySemanticAnalyzerHook.preAnalyze(HiveSemanticAnalyzerHookContext context, ASTNode ast)
Uses of SemanticException in org.apache.hadoop.hive.ql.optimizer

Methods in org.apache.hadoop.hive.ql.optimizer that throw SemanticException:

  static void MapJoinProcessor.checkMapJoin(int mapJoinPos, JoinCondDesc[] condns)
  static MapJoinOperator MapJoinProcessor.convertMapJoin(LinkedHashMap<Operator<? extends Serializable>,OpParseContext> opParseCtxMap, JoinOperator op, QBJoinTree joinTree, int mapJoinPos, boolean noCheckOuterJoin)
      Convert a regular join to a map-side join.
  List<String> ColumnPrunerProcCtx.genColLists(Operator<? extends Serializable> curOp)
      Creates the list of internal column names (used in the RowResolver and different from the external column names) that are needed in the subtree.
  MapJoinOperator MapJoinProcessor.generateMapJoinOperator(ParseContext pctx, JoinOperator op, QBJoinTree joinTree, int mapJoinPos)
  static String MapJoinProcessor.genMapJoinOpAndLocalWork(MapredWork newWork, JoinOperator op, int mapJoinPos)
  static List<Index> IndexUtils.getIndexes(Table baseTableMetaData, List<String> matchIndexTypes)
      Get a list of indexes on a table that match the given types.
  static void GenMapRedUtils.initMapJoinPlan(Operator<? extends Serializable> op, GenMRProcContext ctx, boolean readInputMapJoin, boolean readInputUnion, boolean setReducer, int pos)
  static void GenMapRedUtils.initMapJoinPlan(Operator<? extends Serializable> op, GenMRProcContext opProcCtx, boolean readInputMapJoin, boolean readInputUnion, boolean setReducer, int pos, boolean createLocalPlan)
      Initialize the current plan by adding it to root tasks.
  static void GenMapRedUtils.initPlan(ReduceSinkOperator op, GenMRProcContext opProcCtx)
      Initialize the current plan by adding it to root tasks.
  static void GenMapRedUtils.initUnionPlan(ReduceSinkOperator op, GenMRProcContext opProcCtx)
      Initialize the current union plan.
  static void GenMapRedUtils.joinPlan(Operator<? extends Serializable> op, Task<? extends Serializable> oldTask, Task<? extends Serializable> task, GenMRProcContext opProcCtx, int pos, boolean split, boolean readMapJoinData, boolean readUnionData)
  static void GenMapRedUtils.joinPlan(Operator<? extends Serializable> op, Task<? extends Serializable> oldTask, Task<? extends Serializable> task, GenMRProcContext opProcCtx, int pos, boolean split, boolean readMapJoinData, boolean readUnionData, boolean createLocalWork)
      Merge the current task with the task for the current reducer.
  static SamplePruner.LimitPruneRetStatus SamplePruner.limitPrune(Partition part, long sizeLimit, int fileLimit, Collection<org.apache.hadoop.fs.Path> retPathList)
      Try to generate a subset of the partition's files that reaches the size limit using fewer than fileLimit files.
  static void GenMapRedUtils.mergeMapJoinUnion(UnionOperator union, GenMRProcContext ctx, int pos)
  ParseContext Optimizer.optimize()
      Invoke all the transformations one-by-one, and alter the query plan.
  Object SamplePruner.FilterPPR.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)
  Object SamplePruner.DefaultPPR.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)
  Object MapJoinProcessor.CurrentMapJoin.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)
      Store the current mapjoin in the context.
  Object MapJoinProcessor.MapJoinFS.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)
      Store the current mapjoin in a list of mapjoins followed by a filesink.
  Object MapJoinProcessor.MapJoinDefault.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)
      Store the mapjoin in a rejected list.
  Object MapJoinProcessor.Default.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)
      Nothing to do.
  Object MapJoinFactory.TableScanMapJoin.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)
  Object MapJoinFactory.ReduceSinkMapJoin.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)
  Object MapJoinFactory.MapJoin.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)
      Create a task by splitting the plan below the join.
  Object MapJoinFactory.MapJoinMapJoin.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)
  Object MapJoinFactory.UnionMapJoin.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)
  Object GroupByOptimizer.BucketGroupByProcessor.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)
  Object GenMRUnion1.process(Node nd, Stack<Node> stack, NodeProcessorCtx opProcCtx, Object... nodeOutputs)
      Union Operator encountered.
  Object GenMRTableScan1.process(Node nd, Stack<Node> stack, NodeProcessorCtx opProcCtx, Object... nodeOutputs)
      Table Scan encountered.
  Object GenMRRedSink4.process(Node nd, Stack<Node> stack, NodeProcessorCtx opProcCtx, Object... nodeOutputs)
      Reduce Scan encountered.
  Object GenMRRedSink3.process(Node nd, Stack<Node> stack, NodeProcessorCtx opProcCtx, Object... nodeOutputs)
      Reduce Scan encountered.
  Object GenMRRedSink2.process(Node nd, Stack<Node> stack, NodeProcessorCtx opProcCtx, Object... nodeOutputs)
      Reduce Scan encountered.
  Object GenMRRedSink1.process(Node nd, Stack<Node> stack, NodeProcessorCtx opProcCtx, Object... nodeOutputs)
      Reduce Scan encountered.
  Object GenMROperator.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)
      Reduce Scan encountered.
  Object GenMRFileSink1.process(Node nd, Stack<Node> stack, NodeProcessorCtx opProcCtx, Object... nodeOutputs)
      File Sink Operator encountered.
  Object ColumnPrunerProcFactory.ColumnPrunerFilterProc.process(Node nd, Stack<Node> stack, NodeProcessorCtx ctx, Object... nodeOutputs)
  Object ColumnPrunerProcFactory.ColumnPrunerGroupByProc.process(Node nd, Stack<Node> stack, NodeProcessorCtx ctx, Object... nodeOutputs)
  Object ColumnPrunerProcFactory.ColumnPrunerDefaultProc.process(Node nd, Stack<Node> stack, NodeProcessorCtx ctx, Object... nodeOutputs)
  Object ColumnPrunerProcFactory.ColumnPrunerTableScanProc.process(Node nd, Stack<Node> stack, NodeProcessorCtx ctx, Object... nodeOutputs)
  Object ColumnPrunerProcFactory.ColumnPrunerReduceSinkProc.process(Node nd, Stack<Node> stack, NodeProcessorCtx ctx, Object... nodeOutputs)
  Object ColumnPrunerProcFactory.ColumnPrunerLateralViewJoinProc.process(Node nd, Stack<Node> stack, NodeProcessorCtx ctx, Object... nodeOutputs)
  Object ColumnPrunerProcFactory.ColumnPrunerSelectProc.process(Node nd, Stack<Node> stack, NodeProcessorCtx ctx, Object... nodeOutputs)
  Object ColumnPrunerProcFactory.ColumnPrunerJoinProc.process(Node nd, Stack<Node> stack, NodeProcessorCtx ctx, Object... nodeOutputs)
  Object ColumnPrunerProcFactory.ColumnPrunerMapJoinProc.process(Node nd, Stack<Node> stack, NodeProcessorCtx ctx, Object... nodeOutputs)
  static org.apache.hadoop.fs.Path[] SamplePruner.prune(Partition part, FilterDesc.sampleDesc sampleDescr)
      Prunes to get all the files in the partition that satisfy the TABLESAMPLE clause.
  static void GenMapRedUtils.setTaskPlan(String alias_id, Operator<? extends Serializable> topOp, MapredWork plan, boolean local, GenMRProcContext opProcCtx)
      Set the current task in the mapredWork.
  static void GenMapRedUtils.setTaskPlan(String alias_id, Operator<? extends Serializable> topOp, MapredWork plan, boolean local, GenMRProcContext opProcCtx, PrunedPartitionList pList)
      Set the current task in the mapredWork.
  static void GenMapRedUtils.setTaskPlan(String path, String alias, Operator<? extends Serializable> topOp, MapredWork plan, boolean local, TableDesc tt_desc)
      Set the current task in the mapredWork.
  static void GenMapRedUtils.splitPlan(ReduceSinkOperator op, GenMRProcContext opProcCtx)
      Split the current plan by creating a temporary destination.
  static void GenMapRedUtils.splitTasks(Operator<? extends Serializable> op, Task<? extends Serializable> parentTask, Task<? extends Serializable> childTask, GenMRProcContext opProcCtx, boolean setReducer, boolean local, int posn)
  ParseContext Transform.transform(ParseContext pctx)
      All transformation steps implement this interface.
  ParseContext SortedMergeBucketMapJoinOptimizer.transform(ParseContext pctx)
  ParseContext SamplePruner.transform(ParseContext pctx)
  ParseContext ReduceSinkDeDuplication.transform(ParseContext pctx)
  ParseContext MapJoinProcessor.transform(ParseContext pactx)
      Transform the query tree.
  ParseContext JoinReorder.transform(ParseContext pactx)
      Transform the query tree.
  ParseContext GroupByOptimizer.transform(ParseContext pctx)
  ParseContext ColumnPruner.transform(ParseContext pactx)
      Transform the query tree.
  ParseContext BucketMapJoinOptimizer.transform(ParseContext pctx)
  void ColumnPruner.ColumnPrunerWalker.walk(Node nd)
      Walk the given operator.
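Optimizer passes plug in through Transform.transform(ParseContext) and signal failure by throwing SemanticException. A skeletal custom pass, assuming the walker wiring shown earlier in the lib section:

    import org.apache.hadoop.hive.ql.optimizer.Transform;
    import org.apache.hadoop.hive.ql.parse.ParseContext;
    import org.apache.hadoop.hive.ql.parse.SemanticException;

    public class NoopTransform implements Transform {
      // Invoked by the Optimizer once per enabled transformation.
      public ParseContext transform(ParseContext pctx) throws SemanticException {
        // Walk pctx.getTopOps() with a GraphWalker here and rewrite operators as needed.
        // Returning the (possibly modified) ParseContext hands control to the next pass.
        return pctx;
      }
    }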
Uses of SemanticException in org.apache.hadoop.hive.ql.optimizer.index

Methods in org.apache.hadoop.hive.ql.optimizer.index that throw SemanticException:

  static ParseContext RewriteParseContextGenerator.generateOperatorTree(HiveConf conf, String command)
      Parse the input String command and generate an ASTNode tree.
  void RewriteQueryUsingAggregateIndexCtx.invokeRewriteQueryProc(Operator<? extends Serializable> topOp)
      Walk the original operator tree with the DefaultGraphWalker, applying the rules.
  ParseContext RewriteGBUsingIndex.transform(ParseContext pctx)
Uses of SemanticException in org.apache.hadoop.hive.ql.optimizer.lineage

Methods in org.apache.hadoop.hive.ql.optimizer.lineage that throw SemanticException:

  static LineageInfo.Dependency ExprProcFactory.getExprDependency(LineageCtx lctx, Operator<? extends Serializable> inpOp, ExprNodeDesc expr)
      Gets the expression dependencies for the expression.
  Object OpProcFactory.TransformLineage.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)
  Object OpProcFactory.TableScanLineage.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)
  Object OpProcFactory.JoinLineage.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)
  Object OpProcFactory.LateralViewJoinLineage.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)
  Object OpProcFactory.SelectLineage.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)
  Object OpProcFactory.GroupByLineage.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)
  Object OpProcFactory.UnionLineage.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)
  Object OpProcFactory.ReduceSinkLineage.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)
  Object OpProcFactory.DefaultLineage.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)
  Object ExprProcFactory.ColumnExprProcessor.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)
  Object ExprProcFactory.GenericExprProcessor.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)
  Object ExprProcFactory.DefaultExprProcessor.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)
  ParseContext Generator.transform(ParseContext pctx)
Uses of SemanticException in org.apache.hadoop.hive.ql.optimizer.pcr

Methods in org.apache.hadoop.hive.ql.optimizer.pcr that throw SemanticException:

  Object PcrOpProcFactory.FilterPCR.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)
  Object PcrOpProcFactory.DefaultPCR.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)
  Object PcrExprProcFactory.ColumnExprProcessor.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)
  Object PcrExprProcFactory.GenericFuncExprProcessor.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)
  Object PcrExprProcFactory.FieldExprProcessor.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)
  Object PcrExprProcFactory.DefaultExprProcessor.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)
  ParseContext PartitionConditionRemover.transform(ParseContext pctx)
  static PcrExprProcFactory.NodeInfoWrapper PcrExprProcFactory.walkExprTree(String tabAlias, ArrayList<Partition> parts, ExprNodeDesc pred)
      Remove partition conditions when necessary from the expression tree.
Uses of SemanticException in org.apache.hadoop.hive.ql.optimizer.physical

Methods in org.apache.hadoop.hive.ql.optimizer.physical that throw SemanticException:

  PhysicalContext PhysicalOptimizer.optimize()
      Invoke all the resolvers one-by-one, and alter the physical plan.
  Object SkewJoinProcFactory.SkewJoinJoinProcessor.process(Node nd, Stack<Node> stack, NodeProcessorCtx ctx, Object... nodeOutputs)
  Object SkewJoinProcFactory.SkewJoinDefaultProcessor.process(Node nd, Stack<Node> stack, NodeProcessorCtx ctx, Object... nodeOutputs)
  Object LocalMapJoinProcFactory.MapJoinFollowedByGroupByProcessor.process(Node nd, Stack<Node> stack, NodeProcessorCtx ctx, Object... nodeOutputs)
  Object LocalMapJoinProcFactory.LocalMapJoinProcessor.process(Node nd, Stack<Node> stack, NodeProcessorCtx ctx, Object... nodeOutputs)
  static void GenMRSkewJoinProcessor.processSkewJoin(JoinOperator joinOp, Task<? extends Serializable> currTask, ParseContext parseCtx)
      Create tasks for processing skew joins.
  PhysicalContext SkewJoinResolver.resolve(PhysicalContext pctx)
  PhysicalContext PhysicalPlanResolver.resolve(PhysicalContext pctx)
      All physical plan resolvers have to implement this entry method.
  PhysicalContext MetadataOnlyOptimizer.resolve(PhysicalContext pctx)
  PhysicalContext MapJoinResolver.resolve(PhysicalContext pctx)
  PhysicalContext IndexWhereResolver.resolve(PhysicalContext physicalContext)
  PhysicalContext CommonJoinResolver.resolve(PhysicalContext pctx)
Uses of SemanticException in org.apache.hadoop.hive.ql.optimizer.physical.index

Methods in org.apache.hadoop.hive.ql.optimizer.physical.index that throw SemanticException:

  Object IndexWhereTaskDispatcher.dispatch(Node nd, Stack<Node> stack, Object... nodeOutputs)
  Object IndexWhereProcessor.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)
Uses of SemanticException in org.apache.hadoop.hive.ql.optimizer.ppr

Methods in org.apache.hadoop.hive.ql.optimizer.ppr that throw SemanticException:

  static ExprNodeDesc ExprProcFactory.genPruner(String tabAlias, ExprNodeDesc pred, boolean hasNonPartCols)
      Generates the partition pruner for the expression tree.
  Object OpProcFactory.FilterPPR.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)
  Object OpProcFactory.DefaultPPR.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)
  Object ExprProcFactory.ColumnExprProcessor.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)
  Object ExprProcFactory.GenericFuncExprProcessor.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)
  Object ExprProcFactory.FieldExprProcessor.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)
  Object ExprProcFactory.DefaultExprProcessor.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)
  ParseContext PartitionPruner.transform(ParseContext pctx)
Uses of SemanticException in org.apache.hadoop.hive.ql.optimizer.unionproc

Methods in org.apache.hadoop.hive.ql.optimizer.unionproc that throw SemanticException:

  Object UnionProcFactory.MapRedUnion.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)
  Object UnionProcFactory.MapUnion.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)
  Object UnionProcFactory.MapJoinUnion.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)
  Object UnionProcFactory.UnknownUnion.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)
  Object UnionProcFactory.NoUnion.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)
  ParseContext UnionProcessor.transform(ParseContext pCtx)
      Transform the query tree.
Uses of SemanticException in org.apache.hadoop.hive.ql.parse

Methods in org.apache.hadoop.hive.ql.parse that throw SemanticException:

  void BaseSemanticAnalyzer.analyze(ASTNode ast, Context ctx)
  void SemanticAnalyzer.analyzeInternal(ASTNode ast)
  void LoadSemanticAnalyzer.analyzeInternal(ASTNode ast)
  void ImportSemanticAnalyzer.analyzeInternal(ASTNode ast)
  void FunctionSemanticAnalyzer.analyzeInternal(ASTNode ast)
  void ExportSemanticAnalyzer.analyzeInternal(ASTNode ast)
  void ExplainSemanticAnalyzer.analyzeInternal(ASTNode ast)
  void DDLSemanticAnalyzer.analyzeInternal(ASTNode ast)
  abstract void BaseSemanticAnalyzer.analyzeInternal(ASTNode ast)
  static String BaseSemanticAnalyzer.charSetString(String charSetName, String charSetString)
  static void EximUtil.createExportDump(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path metadataPath, Table tableHandle, List<Partition> partitions)
  static void EximUtil.doCheckCompatibility(String currVersion, String version, String fcVersion)
  void SemanticAnalyzer.doPhase1(ASTNode ast, QB qb, org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.Phase1Ctx ctx_1)
      Phase 1: (including, but not limited to): 1.
  void SemanticAnalyzer.doPhase1QBExpr(ASTNode ast, QBExpr qbexpr, String id, String alias)
  protected HashMap<String,String> BaseSemanticAnalyzer.extractPartitionSpecs(org.antlr.runtime.tree.Tree partspec)
  static HashMap<Node,Object> TypeCheckProcFactory.genExprNode(ASTNode expr, TypeCheckCtx tcCtx)
  ExprNodeDesc SemanticAnalyzer.genExprNodeDesc(ASTNode expr, RowResolver input)
      Generates an expression node descriptor for the expression passed in the arguments.
  ExprNodeDesc SemanticAnalyzer.genExprNodeDesc(ASTNode expr, RowResolver input, TypeCheckCtx tcCtx)
      Generates an expression node descriptor for the expression passed in the arguments.
  Operator SemanticAnalyzer.genPlan(QB qb)
  static BaseSemanticAnalyzer SemanticAnalyzerFactory.get(HiveConf conf, ASTNode tree)
  ColumnInfo RowResolver.get(String tab_alias, String col_alias)
      Gets the ColumnInfo for a column reference of the form tab_alias.col_alias.
  protected List<FieldSchema> BaseSemanticAnalyzer.getColumns(ASTNode ast)
  static List<FieldSchema> BaseSemanticAnalyzer.getColumns(ASTNode ast, boolean lowerCase)
      Get the list of FieldSchema out of the ASTNode.
  ColumnInfo RowResolver.getExpression(ASTNode node)
      Retrieves the ColumnInfo corresponding to a source expression which exactly matches the string rendering of the given ASTNode.
  void SemanticAnalyzer.getMetaData(QB qb)
  static String DDLSemanticAnalyzer.getTypeName(int token)
  protected static String BaseSemanticAnalyzer.getTypeStringFromAST(ASTNode typeNode)
  protected void BaseSemanticAnalyzer.handleGenericFileFormat(ASTNode node)
  void HiveSemanticAnalyzerHook.postAnalyze(HiveSemanticAnalyzerHookContext context, List<Task<? extends Serializable>> rootTasks)
      Invoked after Hive performs its own semantic analysis on a statement (including optimization).
  void AbstractSemanticAnalyzerHook.postAnalyze(HiveSemanticAnalyzerHookContext context, List<Task<? extends Serializable>> rootTasks)
  ASTNode HiveSemanticAnalyzerHook.preAnalyze(HiveSemanticAnalyzerHookContext context, ASTNode ast)
      Invoked before Hive performs its own semantic analysis on a statement.
  ASTNode AbstractSemanticAnalyzerHook.preAnalyze(HiveSemanticAnalyzerHookContext context, ASTNode ast)
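A compile-time hook receives the AST before and after Hive's own analysis and can veto a statement by throwing SemanticException. A hedged sketch (the DROP TABLE policy is an illustrative assumption, and the TOK_DROPTABLE constant is assumed from HiveParser):

    import java.io.Serializable;
    import java.util.List;
    import org.apache.hadoop.hive.ql.exec.Task;
    import org.apache.hadoop.hive.ql.parse.ASTNode;
    import org.apache.hadoop.hive.ql.parse.AbstractSemanticAnalyzerHook;
    import org.apache.hadoop.hive.ql.parse.HiveParser;
    import org.apache.hadoop.hive.ql.parse.HiveSemanticAnalyzerHookContext;
    import org.apache.hadoop.hive.ql.parse.SemanticException;

    public class NoDropTableHook extends AbstractSemanticAnalyzerHook {
      @Override
      public ASTNode preAnalyze(HiveSemanticAnalyzerHookContext context, ASTNode ast)
          throws SemanticException {
        // Reject DROP TABLE before Hive analyzes it any further.
        if (ast.getToken().getType() == HiveParser.TOK_DROPTABLE) {
          throw new SemanticException("DROP TABLE is disabled by policy");
        }
        return ast;  // the unchanged AST continues through normal analysis
      }

      @Override
      public void postAnalyze(HiveSemanticAnalyzerHookContext context,
          List<Task<? extends Serializable>> rootTasks) throws SemanticException {
        // Inspect the generated root tasks here if needed.
      }
    }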
  Object TypeCheckProcFactory.NullExprProcessor.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)
  Object TypeCheckProcFactory.NumExprProcessor.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)
  Object TypeCheckProcFactory.StrExprProcessor.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)
  Object TypeCheckProcFactory.BoolExprProcessor.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)
  Object TypeCheckProcFactory.ColumnExprProcessor.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)
  Object TypeCheckProcFactory.DefaultExprProcessor.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)
  Object PrintOpTreeProcessor.process(Node nd, Stack<Node> stack, NodeProcessorCtx ctx, Object... nodeOutputs)
  static ExprNodeDesc TypeCheckProcFactory.processGByExpr(Node nd, Object procCtx)
      Function to do groupby subexpression elimination.
  static Map.Entry<Table,List<Partition>> EximUtil.readMetaData(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path metadataPath)
  static String EximUtil.relativeToAbsolutePath(HiveConf conf, String location)
  static String BaseSemanticAnalyzer.stripQuotes(String val)
  void TestEximUtil.testCheckCompatibility()
  void SemanticAnalyzer.validate()
  void BaseSemanticAnalyzer.validate()
  void GenMapRedWalker.walk(Node nd)
      Walk the given operator.

Constructors in org.apache.hadoop.hive.ql.parse that throw SemanticException:

  BaseSemanticAnalyzer.tableSpec(Hive db, HiveConf conf, ASTNode ast)
  BaseSemanticAnalyzer.tableSpec(Hive db, HiveConf conf, ASTNode ast, boolean allowDynamicPartitionsSpec, boolean allowPartialPartitionsSpec)
  BaseSemanticAnalyzer(HiveConf conf)
  DDLSemanticAnalyzer(HiveConf conf)
  ExplainSemanticAnalyzer(HiveConf conf)
  ExportSemanticAnalyzer(HiveConf conf)
  FunctionSemanticAnalyzer(HiveConf conf)
  ImportSemanticAnalyzer(HiveConf conf)
  LoadSemanticAnalyzer(HiveConf conf)
  SemanticAnalyzer(HiveConf conf)
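These constructors are usually reached through SemanticAnalyzerFactory, which picks an analyzer based on the root AST token; both analysis and validation may throw SemanticException. A sketch, assuming a configured HiveConf and an existing Context (note that Hive's Driver additionally normalizes the root token before handing the tree to the factory):

    import org.apache.hadoop.hive.conf.HiveConf;
    import org.apache.hadoop.hive.ql.Context;
    import org.apache.hadoop.hive.ql.parse.ASTNode;
    import org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer;
    import org.apache.hadoop.hive.ql.parse.ParseDriver;
    import org.apache.hadoop.hive.ql.parse.SemanticAnalyzerFactory;

    public class AnalyzeExample {
      public static void analyze(HiveConf conf, Context ctx, String command)
          throws Exception {
        ASTNode tree = new ParseDriver().parse(command, ctx);  // throws ParseException
        BaseSemanticAnalyzer sem = SemanticAnalyzerFactory.get(conf, tree);
        sem.analyze(tree, ctx);  // throws SemanticException on semantic errors
        sem.validate();          // post-analysis consistency checks
      }
    }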
Uses of SemanticException in org.apache.hadoop.hive.ql.plan

Methods in org.apache.hadoop.hive.ql.plan that throw SemanticException:

  static ReduceSinkDesc PlanUtils.getReduceSinkDesc(ArrayList<ExprNodeDesc> keyCols, ArrayList<ExprNodeDesc> valueCols, List<String> outputColumnNames, boolean includeKey, int tag, int numPartitionFields, int numReducers)
      Create the reduce sink descriptor.
  static ReduceSinkDesc PlanUtils.getReduceSinkDesc(ArrayList<ExprNodeDesc> keyCols, int numKeys, ArrayList<ExprNodeDesc> valueCols, List<List<Integer>> distinctColIndices, List<String> outputKeyColumnNames, List<String> outputValueColumnNames, boolean includeKey, int tag, int numPartitionFields, int numReducers)
      Create the reduce sink descriptor.
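A hedged call sketch for the first overload; the column lists are caller-supplied placeholders, and the -1 values for tag and numReducers follow the common Hive convention of "no tag byte" and "planner decides" respectively:

    import java.util.ArrayList;
    import java.util.List;
    import org.apache.hadoop.hive.ql.parse.SemanticException;
    import org.apache.hadoop.hive.ql.plan.ExprNodeDesc;
    import org.apache.hadoop.hive.ql.plan.PlanUtils;
    import org.apache.hadoop.hive.ql.plan.ReduceSinkDesc;

    public class ReduceSinkExample {
      public static ReduceSinkDesc build(ArrayList<ExprNodeDesc> keyCols,
          ArrayList<ExprNodeDesc> valueCols, List<String> outputColumnNames)
          throws SemanticException {
        return PlanUtils.getReduceSinkDesc(
            keyCols, valueCols, outputColumnNames,
            false,           // includeKey: value columns do not repeat the keys
            -1,              // tag: -1 emits no tag byte
            keyCols.size(),  // numPartitionFields: partition on all key columns
            -1);             // numReducers: let the planner decide
      }
    }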
Uses of SemanticException in org.apache.hadoop.hive.ql.ppd

Methods in org.apache.hadoop.hive.ql.ppd that throw SemanticException:

  static ExprWalkerInfo ExprWalkerProcFactory.extractPushdownPreds(OpWalkerInfo opContext, Operator<? extends Serializable> op, ExprNodeDesc pred)
  static ExprWalkerInfo ExprWalkerProcFactory.extractPushdownPreds(OpWalkerInfo opContext, Operator<? extends Serializable> op, List<ExprNodeDesc> preds)
      Extracts pushdown predicates from the given list of predicate expressions.
  protected ExprWalkerInfo OpProcFactory.DefaultPPD.mergeChildrenPred(Node nd, OpWalkerInfo owi, Set<String> excludedAliases, boolean ignoreAliases)
  protected boolean OpProcFactory.DefaultPPD.mergeWithChildrenPred(Node nd, OpWalkerInfo owi, ExprWalkerInfo ewi, Set<String> aliases, boolean ignoreAliases)
      Takes the current operator's pushdown predicates and merges them with the children's pushdown predicates.
  Object OpProcFactory.ScriptPPD.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)
  Object OpProcFactory.UDTFPPD.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)
  Object OpProcFactory.LateralViewForwardPPD.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)
  Object OpProcFactory.TableScanPPD.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)
  Object OpProcFactory.FilterPPD.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)
  Object OpProcFactory.JoinPPD.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)
  Object OpProcFactory.ReduceSinkPPD.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)
  Object OpProcFactory.DefaultPPD.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)
  Object ExprWalkerProcFactory.ColumnExprProcessor.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)
      Converts the reference from child row resolver to current row resolver.
  Object ExprWalkerProcFactory.FieldExprProcessor.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)
  Object ExprWalkerProcFactory.GenericFuncExprProcessor.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)
  Object ExprWalkerProcFactory.DefaultExprProcessor.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)
  ParseContext PredicatePushDown.transform(ParseContext pctx)
Uses of SemanticException in org.apache.hadoop.hive.ql.tools

Methods in org.apache.hadoop.hive.ql.tools that throw SemanticException:

  void LineageInfo.getLineageInfo(String query)
      Parses the given query and gets the lineage info.
  static void LineageInfo.main(String[] args)
  Object LineageInfo.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)
      Implements the process method for the NodeProcessor interface.
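The standalone lineage tool can be driven directly; a sketch (the query string is illustrative, and the getInputTableList/getOutputTableList accessors are assumed from the same class):

    import org.apache.hadoop.hive.ql.tools.LineageInfo;

    public class LineageExample {
      public static void main(String[] args) throws Exception {
        LineageInfo info = new LineageInfo();
        // Parses the query and collects source/destination tables;
        // may throw SemanticException on an invalid statement.
        info.getLineageInfo("INSERT OVERWRITE TABLE dest SELECT col1 FROM src");
        System.out.println("inputs:  " + info.getInputTableList());
        System.out.println("outputs: " + info.getOutputTableList());
      }
    }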
Uses of SemanticException in org.apache.hadoop.hive.ql.udf.generic

Uses of SemanticException in org.apache.hive.builtins

Methods in org.apache.hive.builtins that throw SemanticException:

  GenericUDAFEvaluator UDAFUnionMap.getEvaluator(TypeInfo[] parameters)