read.stream {SparkR} | R Documentation
Description

Returns the dataset in a data source as a SparkDataFrame.

Usage

read.stream(source = NULL, schema = NULL, ...)
Arguments

source    The name of the external data source.

schema    The data schema defined in structType or a DDL-formatted string; this is required for file-based streaming data sources.

...       additional external data source specific named options, for instance path for file-based streaming data sources.
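As a minimal sketch of the two accepted schema forms (the session call and the input directory "/tmp/json-input" are illustrative assumptions, not part of this page):

sparkR.session()

# Schema built with structType()/structField()
jsonSchema <- structType(structField("name", "string"),
                         structField("age", "integer"))
people <- read.stream("json", path = "/tmp/json-input", schema = jsonSchema)

# The same schema expressed as a DDL-formatted string
people2 <- read.stream("json", path = "/tmp/json-input",
                       schema = "name STRING, age INT")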
Details

The data source is specified by the source and a set of options (...). If source is not specified, the default data source configured by "spark.sql.sources.default" will be used.
Value

SparkDataFrame

Note

read.stream since 2.2.0

experimental

Examples
## Not run:
##D sparkR.session()
##D df <- read.stream("socket", host = "localhost", port = 9999)
##D q <- write.stream(df, "text", path = "/home/user/out", checkpointLocation = "/home/user/cp")
##D
##D df <- read.stream("json", path = jsonDir, schema = schema, maxFilesPerTrigger = 1)
##D stringSchema <- "name STRING, info MAP<STRING, DOUBLE>"
##D df1 <- read.stream("json", path = jsonDir, schema = stringSchema, maxFilesPerTrigger = 1)
## End(Not run)
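A hedged follow-up sketch of managing a started query: the word-count pipeline, paths, and timeout below are illustrative assumptions, using the SparkR query helpers awaitTermination and stopQuery.

sparkR.session()
lines <- read.stream("socket", host = "localhost", port = 9999)
words <- selectExpr(lines, "explode(split(value, ' ')) as word")
counts <- count(groupBy(words, "word"))
q <- write.stream(counts, "console", outputMode = "complete")
awaitTermination(q, 10000)   # block for up to 10 seconds while results arrive
stopQuery(q)                 # stop the streaming query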