Modifier and Type | Method and Description |
---|---|
Query | Querier.getQuery() |
Constructor and Description |
---|
Querier(java.util.List<java.lang.String> selectorsInput, Paillier paillierInput, Query queryInput, java.util.Map<java.lang.Integer,java.lang.String> embedSelectorMapInput) |
Modifier and Type | Method and Description |
---|---|
static java.util.List<scala.Tuple2<java.lang.Long,java.math.BigInteger>> | ComputeEncryptedRow.computeEncRow(java.math.BigInteger part, Query query, int rowIndex, int colIndex) |
static java.util.List<scala.Tuple2<java.lang.Long,java.math.BigInteger>> | ComputeEncryptedRow.computeEncRow(BytesArrayWritable dataPartitions, Query query, int rowIndex, int colIndex)<br>Method to compute the encrypted row elements for a query from extracted data partitions in the form of BytesArrayWritable |
static java.util.List<scala.Tuple2<java.lang.Long,java.math.BigInteger>> | ComputeEncryptedRow.computeEncRow(java.lang.Iterable<BytesArrayWritable> dataPartitionsIter, Query query, int rowIndex, boolean limitHitsPerSelector, int maxHitsPerSelector, boolean useCache)<br>Method to compute the encrypted row elements for a query from extracted data partitions in the form of Iterable |
static java.util.List<scala.Tuple2<java.lang.Long,java.math.BigInteger>> | ComputeEncryptedRow.computeEncRow(java.util.List<java.math.BigInteger> dataPartitions, Query query, int rowIndex, int colIndex)<br>Method to compute the encrypted row elements for a query from extracted data partitions in the form of ArrayList<BigInteger> |
static java.util.List<scala.Tuple2<java.lang.Long,java.math.BigInteger>> | ComputeEncryptedRow.computeEncRowBI(java.lang.Iterable<java.util.List<java.math.BigInteger>> dataPartitionsIter, Query query, int rowIndex, boolean limitHitsPerSelector, int maxHitsPerSelector, boolean useCache)<br>Method to compute the encrypted row elements for a query from extracted data partitions in the form of Iterable |
static void | ComputeEncryptedRow.loadCacheFromHDFS(org.apache.hadoop.fs.FileSystem fs, java.lang.String hdfsFileName, Query query)<br>Populate the cache based on the pre-generated exp table in HDFS |
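The computeEncRow variants above all reduce to the same homomorphic step: for each data partition, raise the row's Paillier query element to the partition value modulo N^2. The following is a minimal, self-contained sketch of that idea using only java.math.BigInteger; the class name EncRowSketch, the toy modulus, and the simplified math are illustrative assumptions, not Pirk's actual implementation.

```java
import java.math.BigInteger;
import java.util.ArrayList;
import java.util.List;

// Conceptual sketch (not the real ComputeEncryptedRow): per data partition,
// the encrypted row element is queryElement^partition mod N^2, which under
// Paillier homomorphically "adds" the partition into the ciphertext.
public class EncRowSketch {

    static List<BigInteger> computeEncRow(List<BigInteger> dataPartitions,
                                          BigInteger queryElement,
                                          BigInteger nSquared) {
        List<BigInteger> out = new ArrayList<>();
        for (BigInteger part : dataPartitions) {
            // Modular exponentiation: one encrypted element per partition
            out.add(queryElement.modPow(part, nSquared));
        }
        return out;
    }

    public static void main(String[] args) {
        BigInteger nSquared = BigInteger.valueOf(77L * 77L); // toy N^2 with N = 77
        BigInteger queryElement = BigInteger.valueOf(10);    // stand-in ciphertext
        List<BigInteger> parts = List.of(BigInteger.valueOf(3), BigInteger.valueOf(5));
        System.out.println(computeEncRow(parts, queryElement, nSquared));
    }
}
```

The limitHitsPerSelector/maxHitsPerSelector parameters in the real methods cap how many rows are emitted per selector; that bookkeeping is omitted here.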
Modifier and Type | Method and Description |
---|---|
Query | BroadcastVars.getQuery() |
Modifier and Type | Method and Description |
---|---|
static org.apache.spark.api.java.JavaPairRDD<java.lang.Integer,java.lang.Iterable<scala.Tuple2<java.lang.Integer,java.math.BigInteger>>> | ComputeExpLookupTable.computeExpTable(org.apache.spark.api.java.JavaSparkContext sc, org.apache.hadoop.fs.FileSystem fs, BroadcastVars bVars, Query query, java.lang.String queryInputFile, java.lang.String outputDirExp)<br>Method to create the distributed modular exponentiation lookup table in HDFS for a given Query |
static org.apache.spark.api.java.JavaPairRDD<java.lang.Integer,java.lang.Iterable<scala.Tuple2<java.lang.Integer,java.math.BigInteger>>> | ComputeExpLookupTable.computeExpTable(org.apache.spark.api.java.JavaSparkContext sc, org.apache.hadoop.fs.FileSystem fs, BroadcastVars bVars, Query query, java.lang.String queryInputFile, java.lang.String outputDirExp, boolean useModExpJoin)<br>Method to create the distributed modular exponentiation lookup table in HDFS for a given Query |
void | BroadcastVars.setQuery(Query queryIn) |
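The point of a modular exponentiation lookup table is to trade memory for repeated modPow calls: precompute q^i mod N^2 for every value i a data partition can take, then answer each exponentiation with an index lookup. The sketch below shows that precomputation in plain Java; the name ExpTableSketch and the toy parameters are assumptions, and the real computeExpTable builds the table as a distributed Spark RDD persisted to HDFS rather than an in-memory list.

```java
import java.math.BigInteger;
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the exp-table idea: table.get(i) == q^i mod N^2,
// built incrementally with one modular multiplication per entry.
public class ExpTableSketch {

    static List<BigInteger> buildExpTable(BigInteger q, BigInteger nSquared, int maxPartitionValue) {
        List<BigInteger> table = new ArrayList<>(maxPartitionValue + 1);
        BigInteger power = BigInteger.ONE;           // q^0
        for (int i = 0; i <= maxPartitionValue; i++) {
            table.add(power);
            power = power.multiply(q).mod(nSquared); // advance to q^(i+1)
        }
        return table;
    }

    public static void main(String[] args) {
        BigInteger nSquared = BigInteger.valueOf(5929); // toy N^2 with N = 77
        // 8-bit partitions take values 0..255, so 256 entries per query element
        List<BigInteger> table = buildExpTable(BigInteger.valueOf(10), nSquared, 255);
        // A lookup now replaces a modPow call:
        System.out.println(table.get(3).equals(BigInteger.valueOf(10).modPow(BigInteger.valueOf(3), nSquared)));
    }
}
```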
Constructor and Description |
---|
Responder(Query queryInput) |
Modifier and Type | Method and Description |
---|---|
static Query | StormUtils.getQuery(boolean useHdfs, java.lang.String hdfsUri, java.lang.String queryFile)<br>Method to read in a serialized Query object from the given queryFile |
static Query | StormUtils.prepareQuery(java.util.Map map)<br>Method to read in and return a serialized Query object from the given file and to initialize/load the query.schemas and data.schemas |
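Reading a serialized Query object, as StormUtils.getQuery does, typically means standard Java object deserialization from a file. The sketch below shows that pattern with a stand-in QueryStub class; the class name, field, and local-file I/O are assumptions for illustration, and the real method can also read from HDFS when useHdfs is set.

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Hedged sketch of reading a serialized query object from a file.
// QueryStub stands in for Pirk's Query class, which is Serializable.
public class QueryReadSketch {

    static class QueryStub implements Serializable {
        private static final long serialVersionUID = 1L;
        final String queryType;
        QueryStub(String queryType) { this.queryType = queryType; }
    }

    static QueryStub readQuery(File queryFile) throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(new FileInputStream(queryFile))) {
            return (QueryStub) in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        // Round-trip: write a stub query to a temp file, then read it back
        File f = File.createTempFile("query", ".ser");
        try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(f))) {
            out.writeObject(new QueryStub("email"));
        }
        System.out.println(readQuery(f).queryType);
        f.delete();
    }
}
```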