arrange {SparkR}		R Documentation

Arrange

Description

Sort a SparkDataFrame by the specified column(s).

For a WindowSpec, defines the ordering columns.

Usage

## S4 method for signature 'SparkDataFrame,Column'
arrange(x, col, ...)

## S4 method for signature 'SparkDataFrame,character'
arrange(x, col, ..., decreasing = FALSE)

## S4 method for signature 'SparkDataFrame,characterOrColumn'
orderBy(x, col, ...)

## S4 method for signature 'WindowSpec,character'
orderBy(x, col, ...)

## S4 method for signature 'WindowSpec,Column'
orderBy(x, col, ...)

arrange(x, col, ...)

orderBy(x, col, ...)

Arguments

x

A SparkDataFrame to be sorted.

col

A character vector or Column object(s) indicating the fields to sort on.

...

Additional sorting fields.

decreasing

A logical vector indicating the sort order for each column when a character vector is specified for col.

x

A WindowSpec whose ordering is being defined (for the WindowSpec methods).

Value

A SparkDataFrame sorted by the specified column(s).

For the WindowSpec methods, a WindowSpec with the ordering columns set.

See Also

Other SparkDataFrame functions: $, $<-, select, select, select,SparkDataFrame,Column-method, select,SparkDataFrame,list-method, selectExpr; SparkDataFrame-class, dataFrame; [, [[, subset; agg, agg, count,GroupedData-method, summarize, summarize; as.data.frame, as.data.frame,SparkDataFrame-method; attach, attach,SparkDataFrame-method; cache; collect; colnames, colnames, colnames<-, colnames<-, columns, names, names<-; coltypes, coltypes, coltypes<-, coltypes<-; columns, dtypes, printSchema, schema, schema; count, nrow; dapply, dapply, dapplyCollect, dapplyCollect; describe, describe, describe, summary, summary, summary,AFTSurvivalRegressionModel-method, summary,GeneralizedLinearRegressionModel-method, summary,KMeansModel-method, summary,NaiveBayesModel-method; dim; distinct, unique; dropDuplicates, dropDuplicates; dropna, dropna, fillna, fillna, na.omit, na.omit; drop, drop; dtypes; except, except; explain, explain; filter, filter, where, where; first, first; groupBy, groupBy, group_by, group_by; head; histogram; insertInto, insertInto; intersect, intersect; isLocal, isLocal; join; limit, limit; merge, merge; mutate, mutate, transform, transform; ncol; persist; printSchema; rbind, rbind, unionAll, unionAll; registerTempTable, registerTempTable; rename, rename, withColumnRenamed, withColumnRenamed; repartition; sample, sample, sample_frac, sample_frac; saveAsParquetFile, saveAsParquetFile, write.parquet, write.parquet; saveAsTable, saveAsTable; saveDF, saveDF, write.df, write.df, write.df; selectExpr; showDF, showDF; show, show, show,GroupedData-method, show,WindowSpec-method; str; take; unpersist; withColumn, withColumn; write.jdbc, write.jdbc; write.json, write.json; write.text, write.text

Other windowspec_method: partitionBy, partitionBy; rangeBetween, rangeBetween; rowsBetween, rowsBetween

Examples

## Not run: 
sc <- sparkR.init()
sqlContext <- sparkRSQL.init(sc)
path <- "path/to/file.json"
df <- read.json(sqlContext, path)
arrange(df, df$col1)                           # ascending by col1 (the default)
arrange(df, asc(df$col1), desc(abs(df$col2)))  # mix ascending and descending Columns
arrange(df, "col1", decreasing = TRUE)         # character form, single sort order
arrange(df, "col1", "col2", decreasing = c(TRUE, FALSE))  # per-column sort order

## End(Not run)
## Not run: 
  orderBy(ws, "col1", "col2")
  orderBy(ws, df$col1, df$col2)

## End(Not run)
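A WindowSpec whose ordering has been set with orderBy is typically passed to over to evaluate a window function. A minimal sketch, assuming a SparkDataFrame df with illustrative columns "dept" and "salary":

```r
## Not run: 
  # Rank rows within each department by salary. windowPartitionBy, over,
  # and rank are SparkR functions; the column names are hypothetical.
  ws <- orderBy(windowPartitionBy("dept"), "salary")
  df <- withColumn(df, "salary_rank", over(rank(), ws))

## End(Not run)
```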

[Package SparkR version 2.0.0 Index]