Constructor
new DataStreamWriter()
- Since:
- EclairJS 0.7 Spark 2.0.0
Methods
foreach(openCallback, processCallback, closeFunction, openFunctionBindArgsopt, processFunctionBindArgsopt, closeFunctionBindArgsopt) → {module:eclairjs/sql/streaming.DataStreamWriter}
Starts the execution of the streaming query, which will continually send results to the given external system. The processCallback is invoked as new data arrives, and can be used to send the data generated by the [[DataFrame]]/module:eclairjs/sql.Dataset to an external system.
Parameters:
Name | Type | Attributes | Description
---|---|---|---
openCallback | module:eclairjs/sql/streaming.DataStreamWriter~openCallback | | Used to open a connection to the external system.
processCallback | module:eclairjs/sql/streaming.DataStreamWriter~processCallback | | Used to send data to the external system.
closeFunction | module:eclairjs/sql/streaming.DataStreamWriter~closeCallback | | Used to close the connection to the external system.
openFunctionBindArgs | Array.&lt;object&gt; | &lt;optional&gt; | Arguments to bind to openCallback.
processFunctionBindArgs | Array.&lt;object&gt; | &lt;optional&gt; | Arguments to bind to processCallback.
closeFunctionBindArgs | Array.&lt;object&gt; | &lt;optional&gt; | Arguments to bind to closeFunction.
- Since:
- EclairJS 0.5 Spark 2.0.0
Returns:
- Type
- module:eclairjs/sql/streaming.DataStreamWriter
Example
var query = counts.writeStream().foreach(
    function(partitionId, version) {
        // open a connection to the external system
        var socket = new java.net.Socket(serverAddress, port);
        return socket;
    },
    function(socket, value) {
        // send each value; println triggers the writer's auto-flush,
        // and the socket stays open for subsequent values
        var out = new java.io.PrintWriter(socket.getOutputStream(), true);
        out.println(JSON.stringify(value));
    },
    function(socket) {
        // close the connection
        socket.close();
    }).start();
format(source) → {module:eclairjs/sql/streaming.DataStreamWriter}
Specifies the underlying output data source. Built-in options include "parquet", "json", etc.
Parameters:
Name | Type | Description |
---|---|---|
source | string | Name of the output data source, e.g. "parquet", "json".
- Since:
- EclairJS 0.7 Spark 2.0.0
Returns:
- Type
- module:eclairjs/sql/streaming.DataStreamWriter
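For example, writing the stream out as JSON files (a sketch; the output path and checkpoint location below are placeholders):

```
// Write the streaming Dataset as JSON files under the given path.
var query = df.writeStream()
    .format("json")
    .option("checkpointLocation", "/tmp/checkpoints")
    .start("/tmp/output");
```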
option(key, value) → {module:eclairjs/sql/streaming.DataStreamWriter}
Adds an output option for the underlying data source.
Parameters:
Name | Type | Description |
---|---|---|
key | string | Option key.
value | string | Option value.
- Since:
- EclairJS 0.7 Spark 2.0.0
Returns:
- Type
- module:eclairjs/sql/streaming.DataStreamWriter
outputMode(outputMode) → {module:eclairjs/sql/streaming.DataStreamWriter}
Specifies how data of a streaming DataFrame/Dataset is written to a streaming sink.
- `append`: only the new rows in the streaming DataFrame/Dataset will be written to the sink
- `complete`: all the rows in the streaming DataFrame/Dataset will be written to the sink every time there are some updates
Parameters:
Name | Type | Description |
---|---|---|
outputMode | string | Either "append" or "complete".
- Since:
- EclairJS 0.7 Spark 2.0.0
Returns:
- Type
- module:eclairjs/sql/streaming.DataStreamWriter
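A minimal sketch of selecting the output mode (assuming `counts` is a streaming aggregation, for which "complete" mode applies; the "console" sink is used here for illustration):

```
// Rewrite the full result table to the console on every update.
var query = counts.writeStream()
    .outputMode("complete")
    .format("console")
    .start();
```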
partitionBy(…colNames) → {module:eclairjs/sql/streaming.DataStreamWriter}
Partitions the output by the given columns on the file system. If specified, the output is
laid out on the file system similar to Hive's partitioning scheme. As an example, when we
partition a dataset by year and then month, the directory layout would look like:
- year=2016/month=01/
- year=2016/month=02/
Partitioning is one of the most widely used techniques to optimize physical data layout.
It provides a coarse-grained index for skipping unnecessary data reads when queries have
predicates on the partitioned columns. In order for partitioning to work well, the number
of distinct values in each column should typically be less than tens of thousands.
This was initially applicable only to Parquet, but in Spark 1.5+ it covers JSON, text, ORC, and Avro as well.
Parameters:
Name | Type | Attributes | Description |
---|---|---|---|
colNames | string | &lt;repeatable&gt; | Name(s) of the column(s) to partition by.
- Since:
- EclairJS 0.5 Spark 1.4.0
Returns:
- Type
- module:eclairjs/sql/streaming.DataStreamWriter
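Following the year/month layout described above, a sketch (assuming the Dataset has `year` and `month` columns; paths are placeholders):

```
// Produces directories such as year=2016/month=01/ under the output path.
var query = df.writeStream()
    .format("parquet")
    .partitionBy("year", "month")
    .option("checkpointLocation", "/tmp/checkpoints")
    .start("/tmp/output");
```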
queryName(queryName) → {module:eclairjs/sql/streaming.DataStreamWriter}
Specifies the name of the StreamingQuery that can be started with `start()`.
This name must be unique among all the currently active queries in the associated SQLContext.
Parameters:
Name | Type | Description |
---|---|---|
queryName | string | Unique name for the query.
- Since:
- EclairJS 0.7 Spark 2.0.0
Returns:
- Type
- module:eclairjs/sql/streaming.DataStreamWriter
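A sketch of naming a query so it can be identified among the active queries (the "memory" sink shown here is an assumption for illustration; the name "myCounts" is a placeholder):

```
// The query name must be unique among active queries in the SQLContext.
var query = counts.writeStream()
    .queryName("myCounts")
    .format("memory")
    .start();
```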
start(pathopt) → {module:eclairjs/sql/streaming.StreamingQuery}
Starts the execution of the streaming query, which will continually output results to the given
path as new data arrives. The returned StreamingQuery object can be used to interact with
the stream.
Parameters:
Name | Type | Attributes | Description |
---|---|---|---|
path | string | &lt;optional&gt; | Output path, for file-based data sources.
- Since:
- EclairJS 0.7 Spark 2.0.0
Returns:
- Type
- module:eclairjs/sql/streaming.StreamingQuery
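A sketch showing the returned StreamingQuery being used to interact with the stream (assuming `awaitTermination` is exposed on the wrapper as in Spark; the path is a placeholder):

```
var query = df.writeStream()
    .format("parquet")
    .start("/tmp/output");
// Block until the query is stopped or fails.
query.awaitTermination();
```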
trigger(trigger) → {module:eclairjs/sql/streaming.DataStreamWriter}
Set the trigger for the stream query. The default value is `ProcessingTime(0)` and it will run
the query as fast as possible.
Parameters:
Name | Type | Description |
---|---|---|
trigger | module:eclairjs/sql/streaming.Trigger | The trigger to use.
- Since:
- EclairJS 0.7 Spark 2.0.0
Returns:
- Type
- module:eclairjs/sql/streaming.DataStreamWriter
Example
df.writeStream().trigger(ProcessingTime.create("10 seconds"))
Type Definitions
closeCallback(connection, bindArgsopt)
Used to close the connection to the external system.
Parameters:
Name | Type | Attributes | Description
---|---|---|---
connection | object | | Connection returned by module:eclairjs/sql/streaming.DataStreamWriter~openCallback.
bindArgs | Array.&lt;object&gt; | &lt;optional&gt; | Arguments bound to the callback.
openCallback(partitionId, version, bindArgsopt) → {object}
This callback is used to open a connection to the external system.
Parameters:
Name | Type | Attributes | Description
---|---|---|---
partitionId | number | |
version | number | |
bindArgs | Array.&lt;object&gt; | &lt;optional&gt; | Arguments bound to the callback.
Returns:
connection that is passed to module:eclairjs/sql/streaming.DataStreamWriter~processCallback
and module:eclairjs/sql/streaming.DataStreamWriter~closeCallback
- Type
- object
processCallback(connection, value, bindArgsopt)
This callback consumes data generated by a StreamingQuery. Typically it is used to send the generated data to an external system from each partition, so all initialization (e.g. opening a connection or initiating a transaction) should be done in the module:eclairjs/sql/streaming.DataStreamWriter~openCallback.
Parameters:
Name | Type | Attributes | Description
---|---|---|---
connection | object | | Connection returned by module:eclairjs/sql/streaming.DataStreamWriter~openCallback.
value | number | |
bindArgs | Array.&lt;object&gt; | &lt;optional&gt; | Arguments bound to the callback.
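The bindArgs parameters are sketched below under the assumption, based on the callback signatures above, that each bind-args array is appended after the callback's standard parameters when it is invoked; `serverAddress` and `port` here are placeholder names:

```
var query = counts.writeStream().foreach(
    function(partitionId, version, serverAddress, port) {
        // serverAddress and port arrive via openFunctionBindArgs
        return new java.net.Socket(serverAddress, port);
    },
    function(socket, value) {
        var out = new java.io.PrintWriter(socket.getOutputStream(), true);
        out.println(JSON.stringify(value));
    },
    function(socket) {
        socket.close();
    },
    ["myhost.example.com", 9999], // openFunctionBindArgs
    null,                         // processFunctionBindArgs
    null                          // closeFunctionBindArgs
).start();
```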