Packages that use TreeID

| Package | Description |
|---|---|
| `org.apache.mahout.df.mapred.partial` | Partial-data mapreduce implementation of Random Decision Forests. The builder splits the data, using a FileInputSplit, among the mappers. |
| `org.apache.mahout.df.mapreduce.partial` | Partial-data mapreduce implementation of Random Decision Forests. The builder splits the data, using a FileInputSplit, among the mappers. |

Uses of TreeID in org.apache.mahout.df.mapred.partial

Methods in org.apache.mahout.df.mapred.partial with parameters of type TreeID

| Modifier and Type | Method and Description |
|---|---|
| `void` | `Step2Mapper.configure(int partition, Dataset dataset, TreeID[] keys, Node[] trees, int numInstances)` Useful for testing. |
| `protected static void` | `PartialBuilder.processOutput(org.apache.hadoop.mapred.JobConf job, org.apache.hadoop.fs.Path outputPath, int[] firstIds, TreeID[] keys, Node[] trees, PredictionCallback callback)` Processes the output from the output path. |
| `void` | `Step2Job.run(org.apache.hadoop.conf.Configuration conf, TreeID[] keys, Node[] trees, PredictionCallback callback)` Run the second step. |

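All three methods take the same pair of parallel arrays: each `TreeID` in `keys` identifies the tree stored at the same index in `trees`. The sketch below shows how the `configure(...)` overload above, described as useful for testing, could wire up a `Step2Mapper` by hand in a unit test. It is a hedged sketch only: the no-argument `Step2Mapper` constructor and the packages of `Dataset` (`org.apache.mahout.df.data`) and `Node` (`org.apache.mahout.df.node`) are assumptions, not facts stated on this page.

```java
import org.apache.mahout.df.data.Dataset;               // assumed package
import org.apache.mahout.df.mapred.partial.Step2Mapper;
import org.apache.mahout.df.mapreduce.partial.TreeID;
import org.apache.mahout.df.node.Node;                  // assumed package

/** Hedged sketch: configuring a Step2Mapper for a unit test without a JobConf. */
public final class Step2MapperTestSetup {

  static Step2Mapper newConfiguredMapper(int partition, Dataset dataset,
                                         TreeID[] keys, Node[] trees,
                                         int numInstances) {
    // keys[i] and trees[i] are treated as describing the same decision tree.
    Step2Mapper mapper = new Step2Mapper();  // no-arg constructor assumed
    mapper.configure(partition, dataset, keys, trees, numInstances);
    return mapper;
  }
}
```
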
Method parameters in org.apache.mahout.df.mapred.partial with type arguments of type TreeID

| Modifier and Type | Method and Description |
|---|---|
| `void` | `Step2Mapper.map(org.apache.hadoop.io.LongWritable key, org.apache.hadoop.io.Text value, org.apache.hadoop.mapred.OutputCollector<TreeID,MapredOutput> output, org.apache.hadoop.mapred.Reporter reporter)` |
| `void` | `Step1Mapper.map(org.apache.hadoop.io.LongWritable key, org.apache.hadoop.io.Text value, org.apache.hadoop.mapred.OutputCollector<TreeID,MapredOutput> output, org.apache.hadoop.mapred.Reporter reporter)` |

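Both `map(...)` signatures use `OutputCollector<TreeID,MapredOutput>`, so `TreeID` serves as the intermediate (map output) key of the partial-data jobs. Below is a minimal, hedged sketch of how a driver on the old `mapred` API would declare that; the choice of `Step1Mapper` here is illustrative only, and the value class is left out because its package is not shown on this page.

```java
import org.apache.hadoop.mapred.JobConf;
import org.apache.mahout.df.mapred.partial.Step1Mapper;
import org.apache.mahout.df.mapreduce.partial.TreeID;

/** Hedged sketch: declaring TreeID as the map output key in a mapred driver. */
public final class PartialJobKeySetup {

  static void configureIntermediateKey(JobConf conf) {
    conf.setMapperClass(Step1Mapper.class);
    // TreeID appears as the OutputCollector key above, so it is the
    // map output key class of the job.
    conf.setMapOutputKeyClass(TreeID.class);
    // The matching value class is MapredOutput (its package is not shown
    // on this page, so it is omitted from this sketch).
  }
}
```
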
Uses of TreeID in org.apache.mahout.df.mapreduce.partial

Methods in org.apache.mahout.df.mapreduce.partial that return TreeID

| Modifier and Type | Method and Description |
|---|---|
| `TreeID` | `TreeID.clone()` |

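Because `clone()` is overridden covariantly to return `TreeID` rather than `Object`, a keys array can be deep-copied without casts, for example before handing it to code that may mutate the IDs. A small hedged sketch; the `throws` clause is purely defensive and may not be needed if `TreeID.clone()` declares no checked exception.

```java
import org.apache.mahout.df.mapreduce.partial.TreeID;

final class TreeIDs {

  /** Returns a per-element copy of the given keys array. */
  static TreeID[] deepCopy(TreeID[] keys) throws CloneNotSupportedException {
    TreeID[] copy = new TreeID[keys.length];
    for (int i = 0; i < keys.length; i++) {
      // clone() returns TreeID directly, so no cast is required.
      copy[i] = keys[i] == null ? null : keys[i].clone();
    }
    return copy;
  }
}
```
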
Methods in org.apache.mahout.df.mapreduce.partial with parameters of type TreeID

| Modifier and Type | Method and Description |
|---|---|
| `void` | `Step2Mapper.configure(int partition, Dataset dataset, TreeID[] keys, Node[] trees, int numInstances)` Useful for testing. |
| `static int` | `InterResults.load(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path forestPath, int numMaps, int numTrees, int partition, TreeID[] keys, Node[] trees)` Load the trees and the keys returned from the first step. |
| `protected static void` | `PartialBuilder.processOutput(org.apache.hadoop.mapreduce.JobContext job, org.apache.hadoop.fs.Path outputPath, int[] firstIds, TreeID[] keys, Node[] trees, PredictionCallback callback)` Processes the output from the output path. |
| `void` | `Step2Job.run(org.apache.hadoop.conf.Configuration conf, TreeID[] keys, Node[] trees, PredictionCallback callback)` Run the second step. |
| `static void` | `InterResults.store(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path forestPath, TreeID[] keys, Node[] trees, int[] sizes)` Write the forest trees into a file. |

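`InterResults` is the hand-off between the two steps: `store(...)` persists the parallel keys/trees arrays produced by the first step, and `load(...)` fills caller-allocated arrays with them before the second step. A hedged sketch of that round trip follows; the package of `Node` (`org.apache.mahout.df.node`), the `IOException` declarations, the one-slot-per-tree array sizing, and the meaning of `sizes` and of `load(...)`'s int return value are assumptions not stated on this page.

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.mahout.df.mapreduce.partial.InterResults;
import org.apache.mahout.df.mapreduce.partial.TreeID;
import org.apache.mahout.df.node.Node;                  // assumed package

/** Hedged sketch of the step-1 / step-2 hand-off through InterResults. */
public final class ForestHandOff {

  /** After step 1: write the forest (keys, trees) plus per-partition sizes. */
  static void save(Path forestPath, TreeID[] keys, Node[] trees, int[] sizes)
      throws IOException {
    FileSystem fs = forestPath.getFileSystem(new Configuration());
    InterResults.store(fs, forestPath, keys, trees, sizes);
  }

  /** Before step 2: reload the keys and trees for the given partition. */
  static Node[] reload(Path forestPath, int numMaps, int numTrees, int partition)
      throws IOException {
    FileSystem fs = forestPath.getFileSystem(new Configuration());
    // Caller-allocated output arrays; load(...) fills them in place.
    TreeID[] keys = new TreeID[numTrees];
    Node[] trees = new Node[numTrees];
    // The int returned by load(...) is not documented on this page; ignored here.
    InterResults.load(fs, forestPath, numMaps, numTrees, partition, keys, trees);
    return trees;
  }
}
```
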