Uses of Class
org.apache.hadoop.hive.ql.parse.SemanticException
Packages that use SemanticException

| Package | Description |
|---|---|
| org.apache.hadoop.hive.ql.exec | Hive QL execution tasks, operators, functions and other handlers. |
| org.apache.hadoop.hive.ql.lib | |
| org.apache.hadoop.hive.ql.optimizer | |
| org.apache.hadoop.hive.ql.optimizer.lineage | |
| org.apache.hadoop.hive.ql.optimizer.physical | |
| org.apache.hadoop.hive.ql.optimizer.ppr | |
| org.apache.hadoop.hive.ql.optimizer.unionproc | |
| org.apache.hadoop.hive.ql.parse | |
| org.apache.hadoop.hive.ql.plan | |
| org.apache.hadoop.hive.ql.ppd | |
| org.apache.hadoop.hive.ql.tools | |
| org.apache.hadoop.hive.ql.udf.generic | Standard toolkit and framework for generic User-defined functions. |
Uses of SemanticException in org.apache.hadoop.hive.ql.exec

Subclasses of SemanticException in org.apache.hadoop.hive.ql.exec

| Modifier and Type | Class and Description |
|---|---|
| `class` | `AmbiguousMethodException` Exception thrown by the UDF and UDAF method resolvers when a unique method is not found. |
| `class` | `NoMatchingMethodException` Exception thrown by the UDF and UDAF method resolvers when no matching method is found. |
| `class` | `UDFArgumentException` Exception thrown when there is a problem with a UDF argument. |
| `class` | `UDFArgumentLengthException` Exception thrown when a UDF receives the wrong number of arguments. |
| `class` | `UDFArgumentTypeException` Exception thrown when UDF arguments have the wrong types. |

Methods in org.apache.hadoop.hive.ql.exec that throw SemanticException

| Modifier and Type | Method and Description |
|---|---|
| `static GenericUDAFEvaluator` | `FunctionRegistry.getGenericUDAFEvaluator(String name, List<TypeInfo> argumentTypeInfos, boolean isDistinct, boolean isAllColumns)` Get the GenericUDAF evaluator for the name and argumentClasses. |
| `static void` | `Utilities.validateColumnNames(List<String> colNames, List<String> checkCols)` |
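For orientation, the sketch below shows how a caller might resolve a UDAF evaluator through `FunctionRegistry.getGenericUDAFEvaluator` and handle the `SemanticException` subclasses listed above. It is a minimal, hypothetical example (class name and argument types are illustrative), assuming the Hive QL and SerDe jars are on the classpath and `TypeInfoFactory.stringTypeInfo` is available as in contemporary Hive releases.

```java
import java.util.Arrays;
import java.util.List;

import org.apache.hadoop.hive.ql.exec.AmbiguousMethodException;
import org.apache.hadoop.hive.ql.exec.FunctionRegistry;
import org.apache.hadoop.hive.ql.parse.SemanticException;
import org.apache.hadoop.hive.ql.udf.generic.GenericUDAFEvaluator;
import org.apache.hadoop.hive.serde2.typeinfo.TypeInfo;
import org.apache.hadoop.hive.serde2.typeinfo.TypeInfoFactory;

public class UdafLookupSketch {
  public static void main(String[] args) {
    // Argument types for a call such as: count(some_string_column)
    List<TypeInfo> argTypes =
        Arrays.asList((TypeInfo) TypeInfoFactory.stringTypeInfo);
    try {
      GenericUDAFEvaluator eval = FunctionRegistry.getGenericUDAFEvaluator(
          "count", argTypes, /* isDistinct */ false, /* isAllColumns */ false);
      System.out.println(eval == null ? "no such function"
          : "resolved: " + eval.getClass().getName());
    } catch (AmbiguousMethodException e) {
      // Thrown when more than one method matches the argument types.
      System.err.println("ambiguous: " + e.getMessage());
    } catch (SemanticException e) {
      // Covers the remaining subclasses (NoMatchingMethodException,
      // UDFArgumentException, ...) as well as plain SemanticException.
      System.err.println("failed: " + e.getMessage());
    }
  }
}
```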
Uses of SemanticException in org.apache.hadoop.hive.ql.lib

Methods in org.apache.hadoop.hive.ql.lib that throw SemanticException

| Modifier and Type | Method and Description |
|---|---|
| `int` | `Rule.cost(Stack<Node> stack)` |
| `int` | `RuleRegExp.cost(Stack<Node> stack)` Returns the cost of the rule for the specified stack. |
| `void` | `DefaultGraphWalker.dispatch(Node nd, Stack<Node> ndStack)` Dispatch the current operator. |
| `Object` | `DefaultRuleDispatcher.dispatch(Node nd, Stack<Node> ndStack, Object... nodeOutputs)` Dispatcher function. |
| `Object` | `Dispatcher.dispatch(Node nd, Stack<Node> stack, Object... nodeOutputs)` Dispatcher function. |
| `Object` | `NodeProcessor.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)` Generic process for all ops that don't have specific implementations. |
| `void` | `GraphWalker.startWalking(Collection<Node> startNodes, HashMap<Node,Object> nodeOutput)` Starting point for walking. |
| `void` | `DefaultGraphWalker.startWalking(Collection<Node> startNodes, HashMap<Node,Object> nodeOutput)` Starting point for walking. |
| `void` | `DefaultGraphWalker.walk(Node nd)` Walk the current operator and its descendants. |
| `void` | `PreOrderWalker.walk(Node nd)` Walk the current operator and its descendants. |
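The `NodeProcessor`/`Dispatcher`/`GraphWalker` trio above is the generic tree-walking machinery, and `SemanticException` is how a processor aborts a walk. Below is a minimal sketch of the pattern; the `DefaultRuleDispatcher(NodeProcessor, Map<Rule,NodeProcessor>, NodeProcessorCtx)` constructor is an assumption from the Hive sources of this era (it is not shown in the table above).

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.Map;
import java.util.Stack;

import org.apache.hadoop.hive.ql.lib.DefaultGraphWalker;
import org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher;
import org.apache.hadoop.hive.ql.lib.Dispatcher;
import org.apache.hadoop.hive.ql.lib.GraphWalker;
import org.apache.hadoop.hive.ql.lib.Node;
import org.apache.hadoop.hive.ql.lib.NodeProcessor;
import org.apache.hadoop.hive.ql.lib.NodeProcessorCtx;
import org.apache.hadoop.hive.ql.lib.Rule;
import org.apache.hadoop.hive.ql.parse.SemanticException;

public class WalkSketch {

  // A processor that counts nodes and vetoes null ones.
  static class CountingProcessor implements NodeProcessor {
    int visited = 0;

    public Object process(Node nd, Stack<Node> stack,
        NodeProcessorCtx procCtx, Object... nodeOutputs)
        throws SemanticException {
      if (nd == null) {
        throw new SemanticException("null node in walk");
      }
      visited++;
      return null; // no per-node output
    }
  }

  public static int countNodes(ArrayList<Node> roots) throws SemanticException {
    CountingProcessor proc = new CountingProcessor();
    // No pattern rules registered: every node falls through to the default.
    Map<Rule, NodeProcessor> rules = new HashMap<Rule, NodeProcessor>();
    Dispatcher disp = new DefaultRuleDispatcher(proc, rules, null);
    GraphWalker walker = new DefaultGraphWalker(disp);
    // startWalking dispatches each reachable node; a SemanticException
    // thrown by the processor propagates out of this call.
    walker.startWalking(roots, new HashMap<Node, Object>());
    return proc.visited;
  }
}
```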
Uses of SemanticException in org.apache.hadoop.hive.ql.optimizer

Methods in org.apache.hadoop.hive.ql.optimizer that throw SemanticException

| Modifier and Type | Method and Description |
|---|---|
| `static void` | `MapJoinProcessor.checkMapJoin(int mapJoinPos, JoinCondDesc[] condns)` |
| `List<String>` | `ColumnPrunerProcCtx.genColLists(Operator<? extends Serializable> curOp)` Creates the list of internal column names (these names are used in the RowResolver and differ from the external column names) that are needed in the subtree. |
| `static void` | `GenMapRedUtils.initMapJoinPlan(Operator<? extends Serializable> op, GenMRProcContext ctx, boolean readInputMapJoin, boolean readInputUnion, boolean setReducer, int pos)` |
| `static void` | `GenMapRedUtils.initMapJoinPlan(Operator<? extends Serializable> op, GenMRProcContext opProcCtx, boolean readInputMapJoin, boolean readInputUnion, boolean setReducer, int pos, boolean createLocalPlan)` Initialize the current plan by adding it to root tasks. |
| `static void` | `GenMapRedUtils.initPlan(ReduceSinkOperator op, GenMRProcContext opProcCtx)` Initialize the current plan by adding it to root tasks. |
| `static void` | `GenMapRedUtils.initUnionPlan(ReduceSinkOperator op, GenMRProcContext opProcCtx)` Initialize the current union plan. |
| `static void` | `GenMapRedUtils.joinPlan(Operator<? extends Serializable> op, Task<? extends Serializable> oldTask, Task<? extends Serializable> task, GenMRProcContext opProcCtx, int pos, boolean split, boolean readMapJoinData, boolean readUnionData)` |
| `static void` | `GenMapRedUtils.joinPlan(Operator<? extends Serializable> op, Task<? extends Serializable> oldTask, Task<? extends Serializable> task, GenMRProcContext opProcCtx, int pos, boolean split, boolean readMapJoinData, boolean readUnionData, boolean createLocalWork)` Merge the current task with the task for the current reducer. |
| `static void` | `GenMapRedUtils.mergeMapJoinUnion(UnionOperator union, GenMRProcContext ctx, int pos)` |
| `ParseContext` | `Optimizer.optimize()` Invoke all the transformations one-by-one, and alter the query plan. |
| `Object` | `GenMRUnion1.process(Node nd, Stack<Node> stack, NodeProcessorCtx opProcCtx, Object... nodeOutputs)` Union Operator encountered. |
| `Object` | `GenMROperator.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)` Reduce Scan encountered. |
| `Object` | `MapJoinFactory.TableScanMapJoin.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)` |
| `Object` | `MapJoinFactory.ReduceSinkMapJoin.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)` |
| `Object` | `MapJoinFactory.MapJoin.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)` Create a task by splitting the plan below the join. |
| `Object` | `MapJoinFactory.MapJoinMapJoin.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)` |
| `Object` | `MapJoinFactory.UnionMapJoin.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)` |
| `Object` | `GenMRFileSink1.process(Node nd, Stack<Node> stack, NodeProcessorCtx opProcCtx, Object... nodeOutputs)` File Sink Operator encountered. |
| `Object` | `GenMRRedSink4.process(Node nd, Stack<Node> stack, NodeProcessorCtx opProcCtx, Object... nodeOutputs)` Reduce Scan encountered. |
| `Object` | `ColumnPrunerProcFactory.ColumnPrunerFilterProc.process(Node nd, Stack<Node> stack, NodeProcessorCtx ctx, Object... nodeOutputs)` |
| `Object` | `ColumnPrunerProcFactory.ColumnPrunerGroupByProc.process(Node nd, Stack<Node> stack, NodeProcessorCtx ctx, Object... nodeOutputs)` |
| `Object` | `ColumnPrunerProcFactory.ColumnPrunerDefaultProc.process(Node nd, Stack<Node> stack, NodeProcessorCtx ctx, Object... nodeOutputs)` |
| `Object` | `ColumnPrunerProcFactory.ColumnPrunerTableScanProc.process(Node nd, Stack<Node> stack, NodeProcessorCtx ctx, Object... nodeOutputs)` |
| `Object` | `ColumnPrunerProcFactory.ColumnPrunerReduceSinkProc.process(Node nd, Stack<Node> stack, NodeProcessorCtx ctx, Object... nodeOutputs)` |
| `Object` | `ColumnPrunerProcFactory.ColumnPrunerLateralViewJoinProc.process(Node nd, Stack<Node> stack, NodeProcessorCtx ctx, Object... nodeOutputs)` |
| `Object` | `ColumnPrunerProcFactory.ColumnPrunerSelectProc.process(Node nd, Stack<Node> stack, NodeProcessorCtx ctx, Object... nodeOutputs)` |
| `Object` | `ColumnPrunerProcFactory.ColumnPrunerJoinProc.process(Node nd, Stack<Node> stack, NodeProcessorCtx ctx, Object... nodeOutputs)` |
| `Object` | `ColumnPrunerProcFactory.ColumnPrunerMapJoinProc.process(Node nd, Stack<Node> stack, NodeProcessorCtx ctx, Object... nodeOutputs)` |
| `Object` | `GenMRRedSink2.process(Node nd, Stack<Node> stack, NodeProcessorCtx opProcCtx, Object... nodeOutputs)` Reduce Scan encountered. |
| `Object` | `GenMRTableScan1.process(Node nd, Stack<Node> stack, NodeProcessorCtx opProcCtx, Object... nodeOutputs)` Table Scan encountered. |
| `Object` | `SamplePruner.FilterPPR.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)` |
| `Object` | `SamplePruner.DefaultPPR.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)` |
| `Object` | `GenMRRedSink1.process(Node nd, Stack<Node> stack, NodeProcessorCtx opProcCtx, Object... nodeOutputs)` Reduce Scan encountered. |
| `Object` | `MapJoinProcessor.CurrentMapJoin.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)` Store the current mapjoin in the context. |
| `Object` | `MapJoinProcessor.MapJoinFS.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)` Store the current mapjoin in a list of mapjoins followed by a filesink. |
| `Object` | `MapJoinProcessor.MapJoinDefault.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)` Store the mapjoin in a rejected list. |
| `Object` | `MapJoinProcessor.Default.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)` Nothing to do. |
| `Object` | `GroupByOptimizer.BucketGroupByProcessor.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)` |
| `Object` | `GenMRRedSink3.process(Node nd, Stack<Node> stack, NodeProcessorCtx opProcCtx, Object... nodeOutputs)` Reduce Scan encountered. |
| `static org.apache.hadoop.fs.Path[]` | `SamplePruner.prune(Partition part, FilterDesc.sampleDesc sampleDescr)` Prunes to get all the files in the partition that satisfy the TABLESAMPLE clause. |
| `static void` | `GenMapRedUtils.setTaskPlan(String alias_id, Operator<? extends Serializable> topOp, MapredWork plan, boolean local, GenMRProcContext opProcCtx)` Set the current task in the MapredWork. |
| `static void` | `GenMapRedUtils.setTaskPlan(String path, String alias, Operator<? extends Serializable> topOp, MapredWork plan, boolean local, TableDesc tt_desc)` Set the current task in the MapredWork. |
| `static void` | `GenMapRedUtils.splitPlan(ReduceSinkOperator op, GenMRProcContext opProcCtx)` Split the current plan by creating a temporary destination. |
| `static void` | `GenMapRedUtils.splitTasks(Operator<? extends Serializable> op, Task<? extends Serializable> parentTask, Task<? extends Serializable> childTask, GenMRProcContext opProcCtx, boolean setReducer, boolean local, int posn)` |
| `ParseContext` | `BucketMapJoinOptimizer.transform(ParseContext pctx)` |
| `ParseContext` | `JoinReorder.transform(ParseContext pactx)` Transform the query tree. |
| `ParseContext` | `SamplePruner.transform(ParseContext pctx)` |
| `ParseContext` | `ColumnPruner.transform(ParseContext pactx)` Transform the query tree. |
| `ParseContext` | `Transform.transform(ParseContext pctx)` All transformation steps implement this interface. |
| `ParseContext` | `SortedMergeBucketMapJoinOptimizer.transform(ParseContext pctx)` |
| `ParseContext` | `MapJoinProcessor.transform(ParseContext pactx)` Transform the query tree. |
| `ParseContext` | `GroupByOptimizer.transform(ParseContext pctx)` |
| `ParseContext` | `ReduceSinkDeDuplication.transform(ParseContext pctx)` |
| `void` | `ColumnPruner.ColumnPrunerWalker.walk(Node nd)` Walk the given operator. |
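All of the `transform(ParseContext)` entries above implement the `Transform` interface; a pass either returns the (possibly rewritten) `ParseContext` or throws `SemanticException` to abort compilation. A skeletal, hypothetical pass illustrating the contract:

```java
import org.apache.hadoop.hive.ql.optimizer.Transform;
import org.apache.hadoop.hive.ql.parse.ParseContext;
import org.apache.hadoop.hive.ql.parse.SemanticException;

// Hypothetical no-op pass; only the interface contract is real.
public class NoOpTransform implements Transform {
  public ParseContext transform(ParseContext pctx) throws SemanticException {
    if (pctx == null) {
      throw new SemanticException("no parse context to transform");
    }
    // A real pass (e.g. ColumnPruner, MapJoinProcessor) walks the
    // operator tree here using the org.apache.hadoop.hive.ql.lib walkers.
    return pctx;
  }
}
```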
Uses of SemanticException in org.apache.hadoop.hive.ql.optimizer.lineage

Methods in org.apache.hadoop.hive.ql.optimizer.lineage that throw SemanticException

| Modifier and Type | Method and Description |
|---|---|
| `static LineageInfo.Dependency` | `ExprProcFactory.getExprDependency(LineageCtx lctx, Operator<? extends Serializable> inpOp, ExprNodeDesc expr)` Gets the expression dependencies for the expression. |
| `Object` | `OpProcFactory.TransformLineage.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)` |
| `Object` | `OpProcFactory.TableScanLineage.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)` |
| `Object` | `OpProcFactory.JoinLineage.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)` |
| `Object` | `OpProcFactory.LateralViewJoinLineage.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)` |
| `Object` | `OpProcFactory.SelectLineage.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)` |
| `Object` | `OpProcFactory.GroupByLineage.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)` |
| `Object` | `OpProcFactory.UnionLineage.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)` |
| `Object` | `OpProcFactory.ReduceSinkLineage.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)` |
| `Object` | `OpProcFactory.DefaultLineage.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)` |
| `Object` | `ExprProcFactory.ColumnExprProcessor.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)` |
| `Object` | `ExprProcFactory.GenericExprProcessor.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)` |
| `Object` | `ExprProcFactory.DefaultExprProcessor.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)` |
| `ParseContext` | `Generator.transform(ParseContext pctx)` |
Uses of SemanticException in org.apache.hadoop.hive.ql.optimizer.physical

Methods in org.apache.hadoop.hive.ql.optimizer.physical that throw SemanticException

| Modifier and Type | Method and Description |
|---|---|
| `PhysicalContext` | `PhysicalOptimizer.optimize()` Invoke all the resolvers one-by-one, and alter the physical plan. |
| `Object` | `SkewJoinProcFactory.SkewJoinJoinProcessor.process(Node nd, Stack<Node> stack, NodeProcessorCtx ctx, Object... nodeOutputs)` |
| `Object` | `SkewJoinProcFactory.SkewJoinDefaultProcessor.process(Node nd, Stack<Node> stack, NodeProcessorCtx ctx, Object... nodeOutputs)` |
| `static void` | `GenMRSkewJoinProcessor.processSkewJoin(JoinOperator joinOp, Task<? extends Serializable> currTask, ParseContext parseCtx)` Create tasks for processing skew joins. |
| `PhysicalContext` | `SkewJoinResolver.resolve(PhysicalContext pctx)` |
| `PhysicalContext` | `PhysicalPlanResolver.resolve(PhysicalContext pctx)` All physical plan resolvers have to implement this entry method. |
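`PhysicalPlanResolver.resolve` is the physical-plan analogue of `Transform.transform`: `PhysicalOptimizer.optimize()` invokes each registered resolver in turn, and any of them may throw `SemanticException`. A skeletal, hypothetical resolver:

```java
import org.apache.hadoop.hive.ql.optimizer.physical.PhysicalContext;
import org.apache.hadoop.hive.ql.optimizer.physical.PhysicalPlanResolver;
import org.apache.hadoop.hive.ql.parse.SemanticException;

// Hypothetical no-op resolver; only the entry-method contract is real.
public class NoOpResolver implements PhysicalPlanResolver {
  public PhysicalContext resolve(PhysicalContext pctx) throws SemanticException {
    // A real resolver (e.g. SkewJoinResolver) rewrites the task tree here.
    return pctx;
  }
}
```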
Uses of SemanticException in org.apache.hadoop.hive.ql.optimizer.ppr

Methods in org.apache.hadoop.hive.ql.optimizer.ppr that throw SemanticException

| Modifier and Type | Method and Description |
|---|---|
| `static ExprNodeDesc` | `ExprProcFactory.genPruner(String tabAlias, ExprNodeDesc pred, boolean hasNonPartCols)` Generates the partition pruner for the expression tree. |
| `Object` | `OpProcFactory.FilterPPR.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)` |
| `Object` | `OpProcFactory.DefaultPPR.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)` |
| `Object` | `ExprProcFactory.ColumnExprProcessor.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)` |
| `Object` | `ExprProcFactory.GenericFuncExprProcessor.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)` |
| `Object` | `ExprProcFactory.FieldExprProcessor.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)` |
| `Object` | `ExprProcFactory.DefaultExprProcessor.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)` |
| `ParseContext` | `PartitionPruner.transform(ParseContext pctx)` |
Uses of SemanticException in org.apache.hadoop.hive.ql.optimizer.unionproc

Methods in org.apache.hadoop.hive.ql.optimizer.unionproc that throw SemanticException

| Modifier and Type | Method and Description |
|---|---|
| `Object` | `UnionProcFactory.MapRedUnion.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)` |
| `Object` | `UnionProcFactory.MapUnion.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)` |
| `Object` | `UnionProcFactory.MapJoinUnion.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)` |
| `Object` | `UnionProcFactory.UnknownUnion.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)` |
| `Object` | `UnionProcFactory.NoUnion.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)` |
| `ParseContext` | `UnionProcessor.transform(ParseContext pCtx)` Transform the query tree. |
Uses of SemanticException in org.apache.hadoop.hive.ql.parse

Methods in org.apache.hadoop.hive.ql.parse that throw SemanticException

| Modifier and Type | Method and Description |
|---|---|
| `void` | `BaseSemanticAnalyzer.analyze(ASTNode ast, Context ctx)` |
| `void` | `LoadSemanticAnalyzer.analyzeInternal(ASTNode ast)` |
| `void` | `SemanticAnalyzer.analyzeInternal(ASTNode ast)` |
| `void` | `FunctionSemanticAnalyzer.analyzeInternal(ASTNode ast)` |
| `void` | `DDLSemanticAnalyzer.analyzeInternal(ASTNode ast)` |
| `abstract void` | `BaseSemanticAnalyzer.analyzeInternal(ASTNode ast)` |
| `void` | `ExplainSemanticAnalyzer.analyzeInternal(ASTNode ast)` |
| `static String` | `BaseSemanticAnalyzer.charSetString(String charSetName, String charSetString)` |
| `void` | `SemanticAnalyzer.doPhase1(ASTNode ast, QB qb, org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.Phase1Ctx ctx_1)` Phase 1 of semantic analysis: walks the AST and populates the QB (aliases, destinations, and so on). |
| `void` | `SemanticAnalyzer.doPhase1QBExpr(ASTNode ast, QBExpr qbexpr, String id, String alias)` |
| `ExprNodeDesc` | `SemanticAnalyzer.genExprNodeDesc(ASTNode expr, RowResolver input)` Generates an expression node descriptor for the expression passed in the arguments. |
| `Operator` | `SemanticAnalyzer.genPlan(QB qb)` |
| `static BaseSemanticAnalyzer` | `SemanticAnalyzerFactory.get(HiveConf conf, ASTNode tree)` |
| `ColumnInfo` | `RowResolver.get(String tab_alias, String col_alias)` Gets the ColumnInfo for a column reference of the form tab_alias.col_alias. |
| `protected List<FieldSchema>` | `BaseSemanticAnalyzer.getColumns(ASTNode ast)` |
| `protected List<FieldSchema>` | `BaseSemanticAnalyzer.getColumns(ASTNode ast, boolean lowerCase)` Get the list of FieldSchema out of the ASTNode. |
| `ColumnInfo` | `RowResolver.getExpression(ASTNode node)` Retrieves the ColumnInfo corresponding to a source expression which exactly matches the string rendering of the given ASTNode. |
| `void` | `SemanticAnalyzer.getMetaData(QB qb)` |
| `static String` | `DDLSemanticAnalyzer.getTypeName(int token)` |
| `protected static String` | `BaseSemanticAnalyzer.getTypeStringFromAST(ASTNode typeNode)` |
| `Object` | `TypeCheckProcFactory.NullExprProcessor.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)` |
| `Object` | `TypeCheckProcFactory.NumExprProcessor.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)` |
| `Object` | `TypeCheckProcFactory.StrExprProcessor.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)` |
| `Object` | `TypeCheckProcFactory.BoolExprProcessor.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)` |
| `Object` | `TypeCheckProcFactory.ColumnExprProcessor.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)` |
| `Object` | `TypeCheckProcFactory.DefaultExprProcessor.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)` |
| `Object` | `PrintOpTreeProcessor.process(Node nd, Stack<Node> stack, NodeProcessorCtx ctx, Object... nodeOutputs)` |
| `static ExprNodeDesc` | `TypeCheckProcFactory.processGByExpr(Node nd, Object procCtx)` Performs group-by subexpression elimination. |
| `static String` | `BaseSemanticAnalyzer.stripQuotes(String val)` |
| `void` | `SemanticAnalyzer.validate()` |
| `void` | `BaseSemanticAnalyzer.validate()` |
| `void` | `GenMapRedWalker.walk(Node nd)` Walk the given operator. |

Constructors in org.apache.hadoop.hive.ql.parse that throw SemanticException

| Constructor and Description |
|---|
| `BaseSemanticAnalyzer.tableSpec(Hive db, HiveConf conf, ASTNode ast)` |
| `BaseSemanticAnalyzer(HiveConf conf)` |
| `DDLSemanticAnalyzer(HiveConf conf)` |
| `ExplainSemanticAnalyzer(HiveConf conf)` |
| `FunctionSemanticAnalyzer(HiveConf conf)` |
| `LoadSemanticAnalyzer(HiveConf conf)` |
| `SemanticAnalyzer(HiveConf conf)` |
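Taken together, the entries above describe the analyzer driver sequence: obtain an analyzer from `SemanticAnalyzerFactory.get(conf, tree)`, run `analyze(ast, ctx)`, then `validate()`. The sketch below wires them up; it assumes the `ParseDriver.parse(String, Context)` and `Context(Configuration)` APIs of the Hive sources of this era (neither appears in the tables above) and omits session setup and cleanup.

```java
import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hadoop.hive.ql.Context;
import org.apache.hadoop.hive.ql.parse.ASTNode;
import org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer;
import org.apache.hadoop.hive.ql.parse.ParseDriver;
import org.apache.hadoop.hive.ql.parse.SemanticAnalyzerFactory;
import org.apache.hadoop.hive.ql.parse.SemanticException;

public class AnalyzeSketch {
  public static void analyze(String command) throws Exception {
    HiveConf conf = new HiveConf(AnalyzeSketch.class);
    Context ctx = new Context(conf);
    ASTNode tree = new ParseDriver().parse(command, ctx);
    try {
      // Picks the Load/DDL/Explain/Function/SemanticAnalyzer by root token.
      BaseSemanticAnalyzer sem = SemanticAnalyzerFactory.get(conf, tree);
      sem.analyze(tree, ctx);
      sem.validate();
    } catch (SemanticException e) {
      System.err.println("FAILED: " + e.getMessage());
    }
  }
}
```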
Uses of SemanticException in org.apache.hadoop.hive.ql.plan

Methods in org.apache.hadoop.hive.ql.plan that throw SemanticException

| Modifier and Type | Method and Description |
|---|---|
| `static ReduceSinkDesc` | `PlanUtils.getReduceSinkDesc(ArrayList<ExprNodeDesc> keyCols, ArrayList<ExprNodeDesc> valueCols, List<String> outputColumnNames, boolean includeKey, int tag, int numPartitionFields, int numReducers)` Create the reduce sink descriptor. |
Uses of SemanticException in org.apache.hadoop.hive.ql.ppd

Methods in org.apache.hadoop.hive.ql.ppd that throw SemanticException

| Modifier and Type | Method and Description |
|---|---|
| `static ExprWalkerInfo` | `ExprWalkerProcFactory.extractPushdownPreds(OpWalkerInfo opContext, Operator<? extends Serializable> op, ExprNodeDesc pred)` |
| `static ExprWalkerInfo` | `ExprWalkerProcFactory.extractPushdownPreds(OpWalkerInfo opContext, Operator<? extends Serializable> op, List<ExprNodeDesc> preds)` Extracts pushdown predicates from the given list of predicate expressions. |
| `protected void` | `OpProcFactory.DefaultPPD.mergeWithChildrenPred(Node nd, OpWalkerInfo owi, ExprWalkerInfo ewi, Set<String> aliases, boolean ignoreAliases)` Takes the current operator's pushdown predicates and merges them with the children's pushdown predicates. |
| `Object` | `ExprWalkerProcFactory.ColumnExprProcessor.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)` Converts the reference from the child row resolver to the current row resolver. |
| `Object` | `ExprWalkerProcFactory.FieldExprProcessor.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)` |
| `Object` | `ExprWalkerProcFactory.GenericFuncExprProcessor.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)` |
| `Object` | `ExprWalkerProcFactory.DefaultExprProcessor.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)` |
| `Object` | `OpProcFactory.ScriptPPD.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)` |
| `Object` | `OpProcFactory.LateralViewForwardPPD.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)` |
| `Object` | `OpProcFactory.TableScanPPD.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)` |
| `Object` | `OpProcFactory.FilterPPD.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)` |
| `Object` | `OpProcFactory.JoinPPD.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)` |
| `Object` | `OpProcFactory.ReduceSinkPPD.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)` |
| `Object` | `OpProcFactory.DefaultPPD.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)` |
| `ParseContext` | `PredicatePushDown.transform(ParseContext pctx)` |
Uses of SemanticException in org.apache.hadoop.hive.ql.tools

Methods in org.apache.hadoop.hive.ql.tools that throw SemanticException

| Modifier and Type | Method and Description |
|---|---|
| `void` | `LineageInfo.getLineageInfo(String query)` Parses the given query and gets the lineage info. |
| `static void` | `LineageInfo.main(String[] args)` |
| `Object` | `LineageInfo.process(Node nd, Stack<Node> stack, NodeProcessorCtx procCtx, Object... nodeOutputs)` Implements the process method for the NodeProcessor interface. |
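The `LineageInfo` tool bundles this machinery into a standalone utility. A typical use looks like the sketch below; note that the accessors `getInputTableList()` and `getOutputTableList()` are taken from the Hive sources of this era and do not appear in the table above, so treat them as assumptions.

```java
import org.apache.hadoop.hive.ql.tools.LineageInfo;

public class LineageSketch {
  public static void main(String[] args) throws Exception {
    String query =
        "INSERT OVERWRITE TABLE dest SELECT a.key, a.value FROM src a";
    LineageInfo info = new LineageInfo();
    // Parses the query; propagates SemanticException on analysis errors.
    info.getLineageInfo(query);
    System.out.println("inputs:  " + info.getInputTableList());
    System.out.println("outputs: " + info.getOutputTableList());
  }
}
```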
Uses of SemanticException in org.apache.hadoop.hive.ql.udf.generic