[[ {SparkR}		R Documentation

Subset

Description

Return subsets of a SparkDataFrame according to the given conditions.

Usage

## S4 method for signature 'SparkDataFrame,numericOrcharacter'
x[[i]]

## S4 method for signature 'SparkDataFrame'
x[i, j, ..., drop = FALSE]

## S4 method for signature 'SparkDataFrame'
subset(x, subset, select, drop = FALSE, ...)

Arguments

x

A SparkDataFrame.

i

For `[[`: the index (numeric) or name (character) of the single Column to extract. For `[`: (optional) a logical expression used to filter rows, as with subset.

j

(Optional) The column(s) to select: a numeric index, a character name, or a vector or list of either.

...

Currently not used.

drop

If TRUE, a Column will be returned if the resulting dataset has only one column. Otherwise, a SparkDataFrame will always be returned.

subset

(Optional) A logical expression to filter on rows.

select

Expression for a single Column, or a list of columns to select from the SparkDataFrame.

Value

A new SparkDataFrame containing only the rows that meet the condition, with the selected columns.

See Also

Other SparkDataFrame functions: $, $<-, select, selectExpr; SparkDataFrame-class, dataFrame; agg, count, summarize; arrange, orderBy; as.data.frame; attach; cache; collect; colnames, colnames<-, columns, names, names<-; coltypes, coltypes<-; columns, dtypes, printSchema, schema; count, nrow; dapply, dapplyCollect; describe, summary; dim; distinct, unique; dropDuplicates; dropna, fillna, na.omit; drop; except; explain; filter, where; first; groupBy, group_by; head; histogram; insertInto; intersect; isLocal; join; limit; merge; mutate, transform; ncol; persist; printSchema; rbind, unionAll; registerTempTable; rename, withColumnRenamed; repartition; sample, sample_frac; saveAsParquetFile, write.parquet; saveAsTable; saveDF, write.df; showDF; show; str; take; unpersist; withColumn; write.jdbc; write.json; write.text

Other subsetting functions: $, $<-, select, selectExpr; filter, where

Examples

## Not run: 
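  # Assumed setup for these examples: df is a SparkDataFrame with
  # columns "name" and "age", e.g. df <- read.df(path, "json")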
  # Columns can be selected using `[[` and `[`
  df[[2]] == df[["age"]]
  df[,2] == df[,"age"]
  df[,c("name", "age")]
  # Or to filter rows
  df[df$age > 20,]
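  # (equivalent to filter(df, df$age > 20))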
  # A SparkDataFrame can be subset on both rows and Columns
  df[df$name == "Smith", c(1,2)]
  df[df$age %in% c(19, 30), 1:2]
  subset(df, df$age %in% c(19, 30), 1:2)
  subset(df, df$age %in% c(19), select = c(1,2))
  subset(df, select = c(1,2))
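  # The `drop` argument (a sketch based on its description above):
  # with drop = TRUE, a single selected column is returned as a Column;
  # with the default drop = FALSE, a one-column SparkDataFrame is returned
  df[, "age", drop = TRUE]
  subset(df, select = "age", drop = TRUE)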

## End(Not run)

[Package SparkR version 2.0.0 Index]