Class: DataStreamReader

eclairjs/sql/streaming.DataStreamReader

Interface used to load a streaming Dataset from external storage systems (e.g. file systems, key-value stores, etc.). Use SparkSession.readStream() to access this.

Constructor

new DataStreamReader()

Since:
  • EclairJS 0.7 Spark 2.0.0

Methods

csv(path) → {Dataset}

:: Experimental :: Loads a CSV file stream and returns the result as a Dataset. This function will go through the input once to determine the input schema if `inferSchema` is enabled. To avoid going through the entire data once, disable the `inferSchema` option or specify the schema explicitly using schema(). You can set the following CSV-specific options to deal with CSV files:
  • `maxFilesPerTrigger` (default: no max limit): sets the maximum number of new files to be considered in every trigger.
  • `sep` (default `,`): sets the single character as a separator for each field and value.
  • `encoding` (default `UTF-8`): decodes the CSV files by the given encoding type.
  • `quote` (default `"`): sets the single character used for escaping quoted values where the separator can be part of the value. If you would like to turn off quotations, set an empty string rather than `null`. This behaviour is different from `com.databricks.spark.csv`.
  • `escape` (default `\`): sets the single character used for escaping quotes inside an already quoted value.
  • `comment` (default empty string): sets the single character used for skipping lines beginning with this character. By default, it is disabled.
  • `header` (default `false`): uses the first line as names of columns.
  • `inferSchema` (default `false`): infers the input schema automatically from data. It requires one extra pass over the data.
  • `ignoreLeadingWhiteSpace` (default `false`): defines whether or not leading whitespaces from values being read should be skipped.
  • `ignoreTrailingWhiteSpace` (default `false`): defines whether or not trailing whitespaces from values being read should be skipped.
  • `nullValue` (default empty string): sets the string representation of a null value.
  • `nanValue` (default `NaN`): sets the string representation of a non-number value.
  • `positiveInf` (default `Inf`): sets the string representation of a positive infinity value.
  • `negativeInf` (default `-Inf`): sets the string representation of a negative infinity value.
  • `dateFormat` (default `null`): sets the string that indicates a date format. Custom date formats follow the formats at `java.text.SimpleDateFormat`. This applies to both date type and timestamp type. By default, it is `null`, which means timestamps and dates are parsed by `java.sql.Timestamp.valueOf()` and `java.sql.Date.valueOf()`.
  • `maxColumns` (default `20480`): defines a hard limit of how many columns a record can have.
  • `maxCharsPerColumn` (default `1000000`): defines the maximum number of characters allowed for any given value being read.
  • `mode` (default `PERMISSIVE`): allows a mode for dealing with corrupt records during parsing.
    • `PERMISSIVE` : sets other fields to `null` when it meets a corrupted record. When a schema is set by the user, it sets `null` for extra fields.
    • `DROPMALFORMED` : ignores the whole corrupted records.
    • `FAILFAST` : throws an exception when it meets corrupted records.
Parameters:
  path (string)
Since:
  • EclairJS 0.7 Spark 2.0.0
Returns:
  {Dataset}
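
Example (illustrative sketch, not from the original docs; assumes an EclairJS SparkSession bound to `sparkSession`, with an illustrative path and option values):
  var csvStream = sparkSession.readStream()
    .option("header", "true")        // first line supplies the column names
    .option("inferSchema", "true")   // triggers the extra pass described above
    .csv("/path/to/csvDirectory/");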

format(source) → {DataStreamReader}

:: Experimental :: Specifies the input data source format.
Parameters:
  source (string)
Since:
  • EclairJS 0.7 Spark 2.0.0
Returns:
  {DataStreamReader}
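
Example (illustrative sketch; `sparkSession` is an assumed, already-created EclairJS SparkSession):
  // Pick the source format explicitly, then read a path with load().
  var ds = sparkSession.readStream()
    .format("json")
    .load("/path/to/directory/");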

json(path) → {Dataset}

:: Experimental :: Loads a JSON file stream (one object per line) and returns the result as a Dataset. This function goes through the input once to determine the input schema. If you know the schema in advance, specify it with schema() to avoid the extra scan. You can set the following JSON-specific options to deal with non-standard JSON files:
  • `maxFilesPerTrigger` (default: no max limit): sets the maximum number of new files to be considered in every trigger.
  • `primitivesAsString` (default `false`): infers all primitive values as a string type.
  • `prefersDecimal` (default `false`): infers all floating-point values as a decimal type. If the values do not fit in decimal, then it infers them as doubles.
  • `allowComments` (default `false`): ignores Java/C++ style comments in JSON records.
  • `allowUnquotedFieldNames` (default `false`): allows unquoted JSON field names.
  • `allowSingleQuotes` (default `true`): allows single quotes in addition to double quotes.
  • `allowNumericLeadingZeros` (default `false`): allows leading zeros in numbers (e.g. 00012).
  • `allowBackslashEscapingAnyCharacter` (default `false`): allows accepting quoting of all characters using the backslash quoting mechanism.
  • `mode` (default `PERMISSIVE`): allows a mode for dealing with corrupt records during parsing.
    • `PERMISSIVE` : sets other fields to `null` when it meets a corrupted record, and puts the malformed string into a new field configured by `columnNameOfCorruptRecord`. When a schema is set by the user, it sets `null` for extra fields.
    • `DROPMALFORMED` : ignores the whole corrupted records.
    • `FAILFAST` : throws an exception when it meets corrupted records.
  • `columnNameOfCorruptRecord` (default is the value specified in `spark.sql.columnNameOfCorruptRecord`): allows renaming the new field having malformed string created by `PERMISSIVE` mode. This overrides `spark.sql.columnNameOfCorruptRecord`.
Parameters:
  path (string)
Since:
  • EclairJS 0.7 Spark 2.0.0
Returns:
  {Dataset}
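
Example (illustrative sketch; assumes an EclairJS SparkSession bound to `sparkSession`):
  var jsonStream = sparkSession.readStream()
    .option("mode", "DROPMALFORMED")   // drop corrupted records, per the mode option above
    .json("/path/to/jsonDirectory/");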

load([path]) → {Dataset}

:: Experimental :: Loads input in as a Dataset, for data streams that read from some path.
Parameters:
  path (string, optional)
Since:
  • EclairJS 0.7 Spark 2.0.0
Returns:
  {Dataset}
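
Example (illustrative sketch; `sparkSession` is assumed, and the "socket" source with its host/port options comes from Spark's built-in streaming sources, not from this page):
  // The path argument is optional; sources like "socket" take options instead.
  var socketDs = sparkSession.readStream()
    .format("socket")
    .option("host", "localhost")
    .option("port", "9999")
    .load();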

option(key, value) → {DataStreamReader}

:: Experimental :: Adds an input option for the underlying data source.
Parameters:
  key (string)
  value (string)
Since:
  • EclairJS 0.7 Spark 2.0.0
Returns:
  {DataStreamReader}
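
Example (illustrative sketch; `sparkSession` is assumed):
  // Options are plain string key/value pairs; maxFilesPerTrigger is documented above.
  var reader = sparkSession.readStream()
    .option("maxFilesPerTrigger", "1");  // returns the reader, so calls chain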

parquet(path) → {Dataset}

:: Experimental :: Loads a Parquet file stream, returning the result as a Dataset. You can set the following Parquet-specific option(s) for reading Parquet files:
  • `maxFilesPerTrigger` (default: no max limit): sets the maximum number of new files to be considered in every trigger.
  • `mergeSchema` (default is the value specified in `spark.sql.parquet.mergeSchema`): sets whether we should merge schemas collected from all Parquet part-files. This will override `spark.sql.parquet.mergeSchema`.
Parameters:
  path (string)
Since:
  • EclairJS 0.7 Spark 2.0.0
Returns:
  {Dataset}
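
Example (illustrative sketch; `sparkSession` is assumed):
  var parquetStream = sparkSession.readStream()
    .option("mergeSchema", "true")   // overrides spark.sql.parquet.mergeSchema
    .parquet("/path/to/parquetDirectory/");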

schema(schema) → {DataStreamReader}

:: Experimental :: Specifies the input schema. Some data sources (e.g. JSON) can infer the input schema automatically from data. By specifying the schema here, the underlying data source can skip the schema inference step, and thus speed up data loading.
Parameters:
  schema (module:eclairjs/sql/types.StructType)
Since:
  • EclairJS 0.7 Spark 2.0.0
Returns:
  {DataStreamReader}
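
Example (illustrative sketch; `sparkSession` is assumed, and the require path and DataTypes helpers are assumptions based on the module:eclairjs/sql/types reference above):
  var types = require('eclairjs/sql/types');        // assumed module path
  var schema = types.DataTypes.createStructType([   // assumed Java-style helpers
    types.DataTypes.createStructField("name", types.DataTypes.StringType, true),
    types.DataTypes.createStructField("age", types.DataTypes.IntegerType, true)
  ]);
  // With an explicit schema, the source skips schema inference entirely.
  var ds = sparkSession.readStream().schema(schema).json("/path/to/directory/");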

text(path) → {Dataset}

:: Experimental :: Loads text files and returns a Dataset whose schema starts with a string column named "value", followed by partitioned columns if there are any. Each line in the text files is a new row in the resulting Dataset. You can set the following text-specific options to deal with text files:
  • `maxFilesPerTrigger` (default: no max limit): sets the maximum number of new files to be considered in every trigger.
Parameters:
  path (string)
Since:
  • EclairJS 0.7 Spark 2.0.0
Returns:
  {Dataset}
Example
  // EclairJS (JavaScript): read a directory of text files as a streaming
  // Dataset with a single string column named "value".
  var lines = sparkSession.readStream().text("/path/to/directory/");