Uses of Class org.apache.mahout.math.VectorWritable
====================================================
Packages that use VectorWritable
--------------------------------

| Package | Description |
|---|---|
| org.apache.mahout.cf.taste.hadoop.item | |
| org.apache.mahout.clustering.canopy | |
| org.apache.mahout.clustering.dirichlet | |
| org.apache.mahout.clustering.dirichlet.models | |
| org.apache.mahout.clustering.fuzzykmeans | |
| org.apache.mahout.clustering.kmeans | This package provides an implementation of the k-means clustering algorithm. |
| org.apache.mahout.clustering.lda | |
| org.apache.mahout.clustering.meanshift | |
| org.apache.mahout.math | |
| org.apache.mahout.math.hadoop | |
Uses of VectorWritable in org.apache.mahout.cf.taste.hadoop.item
----------------------------------------------------------------

Methods in org.apache.mahout.cf.taste.hadoop.item with parameters of type VectorWritable

| Modifier and Type | Method and Description |
|---|---|
| void | UserVectorToCooccurrenceMapper.map(org.apache.hadoop.io.LongWritable userID, VectorWritable userVector, org.apache.hadoop.mapred.OutputCollector<org.apache.hadoop.io.IntWritable,org.apache.hadoop.io.IntWritable> output, org.apache.hadoop.mapred.Reporter reporter) |
| void | RecommenderMapper.map(org.apache.hadoop.io.LongWritable userID, VectorWritable vectorWritable, org.apache.hadoop.mapred.OutputCollector<org.apache.hadoop.io.LongWritable,RecommendedItemsWritable> output, org.apache.hadoop.mapred.Reporter reporter) |

Method parameters in org.apache.mahout.cf.taste.hadoop.item with type arguments of type VectorWritable

| Modifier and Type | Method and Description |
|---|---|
| void | UserVectorToCooccurrenceReducer.reduce(org.apache.hadoop.io.IntWritable index1, java.util.Iterator<org.apache.hadoop.io.IntWritable> index2s, org.apache.hadoop.mapred.OutputCollector<org.apache.hadoop.io.IntWritable,VectorWritable> output, org.apache.hadoop.mapred.Reporter reporter) |
| void | ToUserVectorReducer.reduce(org.apache.hadoop.io.LongWritable userID, java.util.Iterator<ItemPrefWritable> itemPrefs, org.apache.hadoop.mapred.OutputCollector<org.apache.hadoop.io.LongWritable,VectorWritable> output, org.apache.hadoop.mapred.Reporter reporter) |
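Every entry in this package follows the old org.apache.hadoop.mapred API: the framework hands the implementation a VectorWritable value, and results go out through an OutputCollector. The sketch below shows the general shape of such a mapper; the class name and the values it emits are illustrative assumptions, not the actual co-occurrence logic.

```java
// Minimal sketch of a mapred-style mapper over VectorWritable values.
// NonZeroIndexMapper and its output are hypothetical; only the parameter
// pattern (key, VectorWritable, OutputCollector, Reporter) comes from the
// signatures listed above.
import java.io.IOException;
import java.util.Iterator;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;
import org.apache.mahout.math.Vector;
import org.apache.mahout.math.VectorWritable;

public class NonZeroIndexMapper extends MapReduceBase
    implements Mapper<LongWritable, VectorWritable, IntWritable, IntWritable> {

  @Override
  public void map(LongWritable userID,
                  VectorWritable userVector,
                  OutputCollector<IntWritable, IntWritable> output,
                  Reporter reporter) throws IOException {
    Vector vector = userVector.get();              // unwrap the Mahout Vector
    Iterator<Vector.Element> nonZeros = vector.iterateNonZero();
    while (nonZeros.hasNext()) {
      Vector.Element e = nonZeros.next();
      // emit (item index, 1) for every item the user has expressed a preference for
      output.collect(new IntWritable(e.index()), new IntWritable(1));
    }
  }
}
```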
Uses of VectorWritable in org.apache.mahout.clustering.canopy
-------------------------------------------------------------

Methods in org.apache.mahout.clustering.canopy with parameters of type VectorWritable

| Modifier and Type | Method and Description |
|---|---|
| void | ClusterMapper.map(org.apache.hadoop.io.WritableComparable<?> key, VectorWritable point, org.apache.hadoop.mapred.OutputCollector<org.apache.hadoop.io.Text,VectorWritable> output, org.apache.hadoop.mapred.Reporter reporter) |
| void | CanopyMapper.map(org.apache.hadoop.io.WritableComparable<?> key, VectorWritable point, org.apache.hadoop.mapred.OutputCollector<org.apache.hadoop.io.Text,VectorWritable> output, org.apache.hadoop.mapred.Reporter reporter) |

Method parameters in org.apache.mahout.clustering.canopy with type arguments of type VectorWritable

| Modifier and Type | Method and Description |
|---|---|
| void | CanopyClusterer.emitPointToExistingCanopies(Vector point, java.util.List<Canopy> canopies, org.apache.hadoop.mapred.OutputCollector<org.apache.hadoop.io.Text,VectorWritable> collector, org.apache.hadoop.mapred.Reporter reporter). This method is used by the CanopyMapper to perform canopy inclusion tests and to emit the point keyed by its covering canopies to the output. |
| void | ClusterMapper.map(org.apache.hadoop.io.WritableComparable<?> key, VectorWritable point, org.apache.hadoop.mapred.OutputCollector<org.apache.hadoop.io.Text,VectorWritable> output, org.apache.hadoop.mapred.Reporter reporter) |
| void | CanopyMapper.map(org.apache.hadoop.io.WritableComparable<?> key, VectorWritable point, org.apache.hadoop.mapred.OutputCollector<org.apache.hadoop.io.Text,VectorWritable> output, org.apache.hadoop.mapred.Reporter reporter) |
| void | CanopyReducer.reduce(org.apache.hadoop.io.Text key, java.util.Iterator<VectorWritable> values, org.apache.hadoop.mapred.OutputCollector<org.apache.hadoop.io.Text,Canopy> output, org.apache.hadoop.mapred.Reporter reporter) |
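The emitPointToExistingCanopies row above is the only documented behavior in this section: each point is tested against the existing canopies and emitted once per covering canopy, keyed by that canopy. A minimal sketch of that emit step follows; the List<Vector> of centers, the t1 threshold handling, and the key format are assumptions standing in for the real Canopy and CanopyClusterer types.

```java
// Minimal sketch of emitting a point keyed by each covering canopy.
// The center list, threshold, and key naming are assumptions; the real
// inclusion test lives in CanopyClusterer.emitPointToExistingCanopies.
import java.io.IOException;
import java.util.List;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.mahout.math.Vector;
import org.apache.mahout.math.VectorWritable;

final class CanopyEmitSketch {

  private CanopyEmitSketch() { }

  /** Emits the point once for every canopy center within distance t1 of it. */
  static void emitPointToCoveringCanopies(Vector point,
                                          List<Vector> canopyCenters,
                                          double t1,
                                          OutputCollector<Text, VectorWritable> collector)
      throws IOException {
    VectorWritable wrapped = new VectorWritable();
    wrapped.set(point);
    for (int i = 0; i < canopyCenters.size(); i++) {
      double distance = canopyCenters.get(i).minus(point).norm(2); // Euclidean distance
      if (distance < t1) {
        collector.collect(new Text("C-" + i), wrapped);
      }
    }
  }
}
```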
Uses of VectorWritable in org.apache.mahout.clustering.dirichlet
----------------------------------------------------------------

Methods in org.apache.mahout.clustering.dirichlet that return types with arguments of type VectorWritable

| Modifier and Type | Method and Description |
|---|---|
| static DirichletState<VectorWritable> | DirichletDriver.createState(java.lang.String modelFactory, java.lang.String modelPrototype, int prototypeSize, int numModels, double alpha_0). Creates a DirichletState object from the given arguments. |
| static DirichletState<VectorWritable> | DirichletMapper.getDirichletState(org.apache.hadoop.mapred.JobConf job) |

Methods in org.apache.mahout.clustering.dirichlet with parameters of type VectorWritable

| Modifier and Type | Method and Description |
|---|---|
| void | DirichletMapper.map(org.apache.hadoop.io.WritableComparable<?> key, VectorWritable v, org.apache.hadoop.mapred.OutputCollector<org.apache.hadoop.io.Text,VectorWritable> output, org.apache.hadoop.mapred.Reporter reporter) |

Method parameters in org.apache.mahout.clustering.dirichlet with type arguments of type VectorWritable

| Modifier and Type | Method and Description |
|---|---|
| void | DirichletReducer.configure(DirichletState<VectorWritable> state) |
| void | DirichletMapper.configure(DirichletState<VectorWritable> state) |
| void | DirichletMapper.map(org.apache.hadoop.io.WritableComparable<?> key, VectorWritable v, org.apache.hadoop.mapred.OutputCollector<org.apache.hadoop.io.Text,VectorWritable> output, org.apache.hadoop.mapred.Reporter reporter) |
| void | DirichletReducer.reduce(org.apache.hadoop.io.Text key, java.util.Iterator<VectorWritable> values, org.apache.hadoop.mapred.OutputCollector<org.apache.hadoop.io.Text,DirichletCluster<VectorWritable>> output, org.apache.hadoop.mapred.Reporter reporter) |
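DirichletDriver.createState is the documented entry point for building the DirichletState<VectorWritable> that DirichletMapper.configure and DirichletReducer.configure consume. The call below is a hedged sketch: the model-factory and prototype class names, the numeric arguments, and the blanket exception handling are assumptions, since only the signature appears on this page.

```java
// Minimal sketch of a createState call site. Class-name strings and values
// are assumptions; any checked exceptions declared by createState are
// absorbed by the catch-all below.
import org.apache.mahout.clustering.dirichlet.DirichletDriver;
import org.apache.mahout.clustering.dirichlet.DirichletState;
import org.apache.mahout.math.VectorWritable;

final class DirichletStateSketch {
  public static void main(String[] args) {
    try {
      DirichletState<VectorWritable> state = DirichletDriver.createState(
          "org.apache.mahout.clustering.dirichlet.models.NormalModelDistribution", // modelFactory (assumed)
          "org.apache.mahout.math.DenseVector",                                     // modelPrototype (assumed)
          2,    // prototypeSize: cardinality of the prototype vector
          10,   // numModels: number of cluster models to sample
          1.0); // alpha_0: Dirichlet concentration parameter
      System.out.println("created " + state);
    } catch (Exception e) {
      e.printStackTrace();
    }
  }
}
```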
Uses of VectorWritable in org.apache.mahout.clustering.dirichlet.models
-----------------------------------------------------------------------

Methods in org.apache.mahout.clustering.dirichlet.models that return VectorWritable

| Modifier and Type | Method and Description |
|---|---|
| VectorWritable | VectorModelDistribution.getModelPrototype() |

Methods in org.apache.mahout.clustering.dirichlet.models with parameters of type VectorWritable

| Modifier and Type | Method and Description |
|---|---|
| void | NormalModel.observe(VectorWritable x) |
| void | L1Model.observe(VectorWritable x) |
| void | AsymmetricSampledNormalModel.observe(VectorWritable v) |
| double | NormalModel.pdf(VectorWritable v) |
| double | L1Model.pdf(VectorWritable x) |
| double | AsymmetricSampledNormalModel.pdf(VectorWritable v) |
| void | VectorModelDistribution.setModelPrototype(VectorWritable modelPrototype) |

Constructors in org.apache.mahout.clustering.dirichlet.models with parameters of type VectorWritable

| Constructor and Description |
|---|
| AsymmetricSampledNormalDistribution(VectorWritable modelPrototype) |
| L1ModelDistribution(VectorWritable modelPrototype) |
| NormalModelDistribution(VectorWritable modelPrototype) |
| SampledNormalDistribution(VectorWritable modelPrototype) |
| VectorModelDistribution(VectorWritable modelPrototype) |
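The model classes above share a two-method contract around VectorWritable: observe(...) records a point into the model, and pdf(...) returns the model's density at a point. The sketch below exercises that contract on a NormalModel supplied by the caller; how the model is constructed or its parameters recomputed is not shown on this page and is deliberately left out.

```java
// Minimal sketch using only the observe/pdf signatures listed above. The
// wrap(...) helper and the idea of scoring before observing are illustrative;
// model construction and parameter updates are out of scope here.
import org.apache.mahout.clustering.dirichlet.models.NormalModel;
import org.apache.mahout.math.DenseVector;
import org.apache.mahout.math.VectorWritable;

final class ModelScoringSketch {

  /** Scores the query under the model's current parameters, then records the data as observations. */
  static double scoreThenObserve(NormalModel model, double[][] data, double[] query) {
    double density = model.pdf(wrap(query));
    for (double[] row : data) {
      model.observe(wrap(row));
    }
    return density;
  }

  private static VectorWritable wrap(double[] values) {
    VectorWritable vw = new VectorWritable();
    vw.set(new DenseVector(values));
    return vw;
  }
}
```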
Uses of VectorWritable in org.apache.mahout.clustering.fuzzykmeans
------------------------------------------------------------------

Methods in org.apache.mahout.clustering.fuzzykmeans with parameters of type VectorWritable

| Modifier and Type | Method and Description |
|---|---|
| void | FuzzyKMeansMapper.map(org.apache.hadoop.io.WritableComparable<?> key, VectorWritable point, org.apache.hadoop.mapred.OutputCollector<org.apache.hadoop.io.Text,FuzzyKMeansInfo> output, org.apache.hadoop.mapred.Reporter reporter) |
| void | FuzzyKMeansClusterMapper.map(org.apache.hadoop.io.WritableComparable<?> key, VectorWritable point, org.apache.hadoop.mapred.OutputCollector<org.apache.hadoop.io.Text,FuzzyKMeansOutput> output, org.apache.hadoop.mapred.Reporter reporter) |
Uses of VectorWritable in org.apache.mahout.clustering.kmeans
-------------------------------------------------------------

Methods in org.apache.mahout.clustering.kmeans with parameters of type VectorWritable

| Modifier and Type | Method and Description |
|---|---|
| void | KMeansMapper.map(org.apache.hadoop.io.WritableComparable<?> key, VectorWritable point, org.apache.hadoop.mapred.OutputCollector<org.apache.hadoop.io.Text,KMeansInfo> output, org.apache.hadoop.mapred.Reporter reporter) |
| void | KMeansClusterMapper.map(org.apache.hadoop.io.WritableComparable<?> key, VectorWritable point, org.apache.hadoop.mapred.OutputCollector<org.apache.hadoop.io.Text,org.apache.hadoop.io.Text> output, org.apache.hadoop.mapred.Reporter reporter) |
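KMeansClusterMapper above emits Text keys and Text values, which is enough to illustrate the assignment step this package implements: find the nearest cluster center for each incoming point and emit the point under that cluster's key. The helper below is a sketch of that step only; the cluster representation, key format, and distance measure are assumptions rather than Mahout's actual KMeansMapper logic.

```java
// Minimal sketch of nearest-center assignment for a VectorWritable point.
// The List<Vector> of centers, the "cluster-i" key, and the Euclidean
// distance are assumptions for illustration.
import java.io.IOException;
import java.util.List;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.mahout.math.Vector;
import org.apache.mahout.math.VectorWritable;

final class NearestCenterSketch {

  static void emitNearestCenter(VectorWritable point,
                                List<Vector> centers,
                                OutputCollector<Text, Text> output) throws IOException {
    Vector p = point.get();
    int best = -1;
    double bestDistance = Double.MAX_VALUE;
    for (int i = 0; i < centers.size(); i++) {
      double d = centers.get(i).minus(p).norm(2); // Euclidean distance to center i
      if (d < bestDistance) {
        bestDistance = d;
        best = i;
      }
    }
    output.collect(new Text("cluster-" + best), new Text(p.toString()));
  }
}
```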
Uses of VectorWritable in org.apache.mahout.clustering.lda
----------------------------------------------------------

Methods in org.apache.mahout.clustering.lda with parameters of type VectorWritable

| Modifier and Type | Method and Description |
|---|---|
| void | LDAMapper.map(org.apache.hadoop.io.WritableComparable<?> key, VectorWritable wordCountsWritable, org.apache.hadoop.mapreduce.Mapper.Context context) |
Uses of VectorWritable in org.apache.mahout.clustering.meanshift
----------------------------------------------------------------

Methods in org.apache.mahout.clustering.meanshift with parameters of type VectorWritable

| Modifier and Type | Method and Description |
|---|---|
| void | MeanShiftCanopyCreatorMapper.map(org.apache.hadoop.io.WritableComparable<?> key, VectorWritable vector, org.apache.hadoop.mapred.OutputCollector<org.apache.hadoop.io.Text,MeanShiftCanopy> output, org.apache.hadoop.mapred.Reporter reporter) |
Uses of VectorWritable in org.apache.mahout.math
------------------------------------------------

Subclasses of VectorWritable in org.apache.mahout.math

| Modifier and Type | Class and Description |
|---|---|
| class | MultiLabelVectorWritable: Writable to handle serialization of a vector and a variable list of associated label indexes. |
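MultiLabelVectorWritable layers label indexes on top of the same Writable mechanics that VectorWritable itself uses. The round trip below is a generic sketch of those mechanics, using only the standard Writable contract (write/readFields) and VectorWritable's get/set accessors; none of it is specific to MultiLabelVectorWritable.

```java
// Minimal sketch of serializing and deserializing a VectorWritable through
// the standard Hadoop Writable contract.
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

import org.apache.mahout.math.DenseVector;
import org.apache.mahout.math.Vector;
import org.apache.mahout.math.VectorWritable;

final class VectorWritableRoundTrip {
  public static void main(String[] args) throws IOException {
    VectorWritable written = new VectorWritable();
    written.set(new DenseVector(new double[] {1.0, 2.0, 3.0}));

    // serialize via Writable.write(DataOutput)
    ByteArrayOutputStream bytes = new ByteArrayOutputStream();
    written.write(new DataOutputStream(bytes));

    // deserialize into a fresh instance via Writable.readFields(DataInput)
    VectorWritable read = new VectorWritable();
    read.readFields(new DataInputStream(new ByteArrayInputStream(bytes.toByteArray())));

    Vector restored = read.get();
    System.out.println("restored cardinality = " + restored.size());
  }
}
```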
Uses of VectorWritable in org.apache.mahout.math.hadoop
-------------------------------------------------------

Fields in org.apache.mahout.math.hadoop with type parameters of type VectorWritable

| Modifier and Type | Field and Description |
|---|---|
| protected org.apache.hadoop.mapred.OutputCollector<org.apache.hadoop.io.NullWritable,VectorWritable> | TimesSquaredJob.TimesSquaredMapper.out |

Methods in org.apache.mahout.math.hadoop with parameters of type VectorWritable

| Modifier and Type | Method and Description |
|---|---|
| void | TransposeJob.TransposeMapper.map(org.apache.hadoop.io.IntWritable r, VectorWritable v, org.apache.hadoop.mapred.OutputCollector<org.apache.hadoop.io.IntWritable,DistributedRowMatrix.MatrixEntryWritable> out, org.apache.hadoop.mapred.Reporter reporter) |
| void | TimesSquaredJob.TimesMapper.map(org.apache.hadoop.io.IntWritable rowNum, VectorWritable v, org.apache.hadoop.mapred.OutputCollector<org.apache.hadoop.io.NullWritable,VectorWritable> out, org.apache.hadoop.mapred.Reporter rep) |
| void | TimesSquaredJob.TimesSquaredMapper.map(T rowNum, VectorWritable v, org.apache.hadoop.mapred.OutputCollector<org.apache.hadoop.io.NullWritable,VectorWritable> out, org.apache.hadoop.mapred.Reporter rep) |
| protected double | TimesSquaredJob.TimesSquaredMapper.scale(VectorWritable v) |

Method parameters in org.apache.mahout.math.hadoop with type arguments of type VectorWritable

| Modifier and Type | Method and Description |
|---|---|
| void | MatrixMultiplicationJob.MatrixMultiplyMapper.map(org.apache.hadoop.io.IntWritable index, org.apache.hadoop.mapred.join.TupleWritable v, org.apache.hadoop.mapred.OutputCollector<org.apache.hadoop.io.IntWritable,VectorWritable> out, org.apache.hadoop.mapred.Reporter reporter) |
| void | TimesSquaredJob.TimesMapper.map(org.apache.hadoop.io.IntWritable rowNum, VectorWritable v, org.apache.hadoop.mapred.OutputCollector<org.apache.hadoop.io.NullWritable,VectorWritable> out, org.apache.hadoop.mapred.Reporter rep) |
| void | TimesSquaredJob.TimesSquaredMapper.map(T rowNum, VectorWritable v, org.apache.hadoop.mapred.OutputCollector<org.apache.hadoop.io.NullWritable,VectorWritable> out, org.apache.hadoop.mapred.Reporter rep) |
| void | TransposeJob.TransposeReducer.reduce(org.apache.hadoop.io.IntWritable outRow, java.util.Iterator<DistributedRowMatrix.MatrixEntryWritable> it, org.apache.hadoop.mapred.OutputCollector<org.apache.hadoop.io.IntWritable,VectorWritable> out, org.apache.hadoop.mapred.Reporter reporter) |
| void | MatrixMultiplicationJob.MatrixMultiplicationReducer.reduce(org.apache.hadoop.io.IntWritable rowNum, java.util.Iterator<VectorWritable> it, org.apache.hadoop.mapred.OutputCollector<org.apache.hadoop.io.IntWritable,VectorWritable> out, org.apache.hadoop.mapred.Reporter reporter) |
| void | TimesSquaredJob.VectorSummingReducer.reduce(org.apache.hadoop.io.NullWritable n, java.util.Iterator<VectorWritable> vectors, org.apache.hadoop.mapred.OutputCollector<org.apache.hadoop.io.NullWritable,VectorWritable> out, org.apache.hadoop.mapred.Reporter reporter) |
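TimesSquaredJob.VectorSummingReducer, whose signature closes the table above, reduces a stream of VectorWritable values under a single NullWritable key into one summed vector. The reducer below mirrors that signature; its body is an assumption about what such a summing reduce looks like, not Mahout's implementation.

```java
// Minimal sketch of a summing reducer with the same signature as
// TimesSquaredJob.VectorSummingReducer.reduce; the accumulation logic is an
// assumption for illustration.
import java.io.IOException;
import java.util.Iterator;

import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;
import org.apache.mahout.math.Vector;
import org.apache.mahout.math.VectorWritable;

public class VectorSumSketchReducer extends MapReduceBase
    implements Reducer<NullWritable, VectorWritable, NullWritable, VectorWritable> {

  @Override
  public void reduce(NullWritable key,
                     Iterator<VectorWritable> vectors,
                     OutputCollector<NullWritable, VectorWritable> out,
                     Reporter reporter) throws IOException {
    Vector sum = null;
    while (vectors.hasNext()) {
      Vector v = vectors.next().get();
      // plus() returns a new vector, so the framework-owned value can be reused safely
      sum = (sum == null) ? v.plus(0.0) : sum.plus(v);
    }
    if (sum != null) {
      VectorWritable result = new VectorWritable();
      result.set(sum);
      out.collect(NullWritable.get(), result);
    }
  }
}
```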