dapply {SparkR}    R Documentation

dapply

Description

dapply: Apply a function to each partition of a SparkDataFrame. The result is returned as a new SparkDataFrame.

dapplyCollect: Apply a function to each partition of a SparkDataFrame and collect the result back to R as a local data.frame.
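
The contrast in a minimal sketch (assuming an active sqlContext, as in the examples below; variable names are illustrative): dapply returns a distributed SparkDataFrame and therefore needs an explicit schema, while dapplyCollect runs the same function but brings the result back as an ordinary local data.frame, so no schema is required.

  df <- createDataFrame(sqlContext, iris)
  # distributed result: an output schema must be supplied
  out <- dapply(df, function(part) { part }, schema(df))
  # local result: no schema argument, returns a plain data.frame
  lout <- dapplyCollect(df, function(part) { part })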

Usage

## S4 method for signature 'SparkDataFrame,function,structType'
dapply(x, func, schema)

## S4 method for signature 'SparkDataFrame,function'
dapplyCollect(x, func)

Arguments

x

A SparkDataFrame

func

A function to be applied to each partition of the SparkDataFrame. func should have only one parameter, to which a data.frame corresponding to one partition will be passed. The output of func should be a data.frame.

schema

The schema of the resulting SparkDataFrame after the function is applied. The column names and types must match the output of func.
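
Since schema must mirror the output of func exactly, a minimal sketch of the correspondence (the column names id and value, and the input columns a and b, are hypothetical):

  # func returns a data.frame with an integer column and a double column,
  # so the schema declares matching Spark SQL types in the same order
  outSchema <- structType(structField("id", "integer"),
                          structField("value", "double"))
  df2 <- dapply(df, function(part) {
           data.frame(id = part$a, value = part$b * 2)
         }, outSchema)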

See Also

Other SparkDataFrame functions: $, $<-, select, select, select,SparkDataFrame,Column-method, select,SparkDataFrame,list-method, selectExpr; SparkDataFrame-class, dataFrame; [, [[, subset; agg, agg, count,GroupedData-method, summarize, summarize; arrange, arrange, arrange, orderBy, orderBy, orderBy, orderBy; as.data.frame, as.data.frame,SparkDataFrame-method; attach, attach,SparkDataFrame-method; cache; collect; colnames, colnames, colnames<-, colnames<-, columns, names, names<-; coltypes, coltypes, coltypes<-, coltypes<-; columns, dtypes, printSchema, schema, schema; count, nrow; describe, describe, describe, summary, summary, summary,AFTSurvivalRegressionModel-method, summary,GeneralizedLinearRegressionModel-method, summary,KMeansModel-method, summary,NaiveBayesModel-method; dim; distinct, unique; dropDuplicates, dropDuplicates; dropna, dropna, fillna, fillna, na.omit, na.omit; drop, drop; dtypes; except, except; explain, explain; filter, filter, where, where; first, first; groupBy, groupBy, group_by, group_by; head; histogram; insertInto, insertInto; intersect, intersect; isLocal, isLocal; join; limit, limit; merge, merge; mutate, mutate, transform, transform; ncol; persist; printSchema; rbind, rbind, unionAll, unionAll; registerTempTable, registerTempTable; rename, rename, withColumnRenamed, withColumnRenamed; repartition; sample, sample, sample_frac, sample_frac; saveAsParquetFile, saveAsParquetFile, write.parquet, write.parquet; saveAsTable, saveAsTable; saveDF, saveDF, write.df, write.df, write.df; selectExpr; showDF, showDF; show, show, show,GroupedData-method, show,WindowSpec-method; str; take; unpersist; withColumn, withColumn; write.jdbc, write.jdbc; write.json, write.json; write.text, write.text

Examples

## Not run: 
  df <- createDataFrame(sqlContext, iris)
  df1 <- dapply(df, function(x) { x }, schema(df))
  collect(df1)

  # filter and add a column
  df <- createDataFrame(
          sqlContext,
          list(list(1L, 1, "1"), list(2L, 2, "2"), list(3L, 3, "3")),
          c("a", "b", "c"))
  schema <- structType(structField("a", "integer"), structField("b", "double"),
                     structField("c", "string"), structField("d", "integer"))
  df1 <- dapply(
           df,
           function(x) {
             # keep only the rows whose first column ("a") is greater than 1
             y <- x[x[1] > 1, ]
             # append a new integer column d = a + 1
             y <- cbind(y, y[1] + 1L)
           },
           schema)
  collect(df1)
  # the result
  #       a b c d
  #     1 2 2 2 3
  #     2 3 3 3 4

## End(Not run)
## Not run: 
  df <- createDataFrame(sqlContext, iris)
  ldf <- dapplyCollect(df, function(x) { x })

  # filter and add a column
  df <- createDataFrame(
          sqlContext,
          list(list(1L, 1, "1"), list(2L, 2, "2"), list(3L, 3, "3")),
          c("a", "b", "c"))
  ldf <- dapplyCollect(
           df,
           function(x) {
             # keep only the rows whose first column ("a") is greater than 1
             y <- x[x[1] > 1, ]
             # append a new integer column d = a + 1
             y <- cbind(y, y[1] + 1L)
           })
  # the result
  #       a b c d
  #       2 2 2 3
  #       3 3 3 4

## End(Not run)
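
A further sketch that makes the per-partition semantics visible (the partition count 4L is arbitrary): func is invoked once per partition, so returning nrow(x) yields one row count per partition.

## Not run: 
  df <- createDataFrame(sqlContext, iris)
  df <- repartition(df, 4L)
  # one output row per partition, containing that partition's row count
  counts <- dapplyCollect(df, function(x) { data.frame(n = nrow(x)) })

## End(Not run)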

[Package SparkR version 2.0.0 Index]