
Changelog

Egress route filtering for completion events

You can now filter the messages and batches that are routed to an egress sink by specifying a filter condition. The filter condition is a SQL-like expression, e.g. metadata.messageId = 'report' AND starts_with(metadata.deviceId, 'prod'). If the condition evaluates to true, the event is routed to the egress sink; otherwise, it is discarded.


Correspondingly, the CLI commands for creating and updating routes have been extended to support filtering:

spotf stream egress-route create-or-update sql \
--name "sql-route" \
--stream-group-name "example-group" \
--stream-name "example-stream" \
--egress-sink-name "postgresql-sink" \
--with-filter "metadata.messageId = 'report'"

Delta Egress schema configuration - Portal and CLI integration added

Users of the Delta Egress feature can now configure a non-default table schema not only via the HTTP API but also through the Portal and CLI.

Support in the Portal is integrated into the existing Egress Route creation and modification dialog.

Correspondingly, the CLI command for managing Delta Egress routes has been extended to support the table schema configuration:

spotf stream egress-route create-or-update delta \
--name 'delta-route' \
--stream-group-name 'example-group' \
--stream-name 'example-stream' \
--egress-sink-name 'delta-sink' \
--schema 'Otel' \
--otel-schema-logs 'true' \
--otel-schema-metrics 'true' \
--otel-schema-traces 'true'

Configurable Delta Egress table schema and OpenTelemetry

Until now, user data routed through Delta Egress was constrained to a textual row-based format and a single, generic output Delta table schema. Although this approach covered many of our customers' original use cases, with the growing popularity of Databricks, new use cases appeared over time that were tedious or impossible to achieve.

To address these emerging use cases, we redesigned Delta Egress to support new data formats and table schemas. At the same time, we added first-class support for processing OpenTelemetry data in binary Protobuf format, together with a new table schema tailor-made for OpenTelemetry. For example, you can read the resulting OpenTelemetry traces table with PySpark:

import pyspark

# Placeholders for your Azure Blob Storage account and container.
storage_account_name = "<storage-account-name>"
container_name = "<container-name>"

# A Spark session with Delta Lake support (in Databricks, this is
# available out of the box).
spark = pyspark.sql.SparkSession.builder.appName("MyApp").getOrCreate()

# Path of the OpenTelemetry traces table created by Delta Egress.
table_uri = f"abfss://{container_name}@{storage_account_name}.dfs.core.windows.net/otel/traces"

df = spark.read.format("delta").load(table_uri)

rows = df.collect()

# Columns defined by the OpenTelemetry table schema.
trace_id = rows[0]['otel_trace_id']
span_id = rows[0]['otel_span_id']

The new OpenTelemetry table schema can be configured via the API when creating a new Egress Route. Support in the Portal and CLI is on the way.
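If you want to try it before the Portal and CLI support lands, here is a minimal sketch of such an API call using Python and the requests library. The endpoint path follows the egress route API shown elsewhere on this page; the property names under config.delta mirror the CLI flags from the entry above and are assumptions, so please check the API reference for the authoritative request body.

import requests

# Placeholders for your workspace, stream, and credentials.
workspace_id = "<workspace-id>"
stream_group_name = "example-group"
stream_name = "example-stream"
token = "<api-token>"

uri = (
    "https://api.eu1.spotflow.io"
    f"/workspaces/{workspace_id}"
    f"/stream-groups/{stream_group_name}"
    f"/streams/{stream_name}"
    "/egress-routes/delta-route"
)

# NOTE: The schema-related property names are assumptions mirroring the
# CLI flags; consult the API reference for the exact request body.
body = {
    "properties": {
        "egressSinkName": "delta-sink",
        "config": {
            "delta": {
                "schema": "Otel",
                "otelSchemaLogs": True,
                "otelSchemaMetrics": True,
                "otelSchemaTraces": True,
            }
        },
    }
}

response = requests.patch(uri, json=body, headers={"Authorization": f"Bearer {token}"})
response.raise_for_status()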

New tutorial on sending telemetry from Teltonika RUTX router

If you use Teltonika RUTX routers, take a look at our new tutorial.

The tutorial shows how to create a custom Rust application that uses the Spotflow Device SDK and how to compile it for the router. You'll also see how to install the application on the router as a background service and how to visualize the telemetry data in the Grafana instance integrated into the Spotflow IoT Platform.

Device SDK is open-source

The source code of the Device SDK is available on GitHub under the MIT license. The repository also contains a description of the architecture and instructions for compilation and test execution.

Simplified data flows configuration

We've simplified the configuration of data flows in the Spotflow IoT Platform. The configuration page now shows a graphical representation of the data flow, making it easier to understand the data processing pipeline: where the data flows for each stream and, conversely, where the data comes from for each egress sink.

SQL Egress now supports 'json' and 'jsonb' PostgreSQL data types

Customers using SQL Egress can now write data to PostgreSQL columns of the json and jsonb data types. This update enables a number of scenarios that were not possible before, such as storing data types not natively supported by SQL Egress or storing data with an evolving schema.

No configuration changes are needed to start using this feature.
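For illustration, here's a minimal sketch of reading such data back with Python and psycopg2. The table and column names are hypothetical examples, and psycopg2 is just one common PostgreSQL client.

import psycopg2

# Hypothetical target table, e.g. created with:
#   CREATE TABLE telemetry (device_id text, payload jsonb);
conn = psycopg2.connect("dbname=example user=example")

with conn, conn.cursor() as cur:
    # jsonb columns can be queried directly, e.g. with the ->> operator.
    cur.execute(
        "SELECT payload->>'temperature' FROM telemetry WHERE device_id = %s",
        ("device-1",),
    )
    print(cur.fetchone()[0])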

OTEL Egress can now be managed via CLI

Spotflow customers can now manage OTEL Egress (OpenTelemetry) sinks and routes via the spotf CLI, alongside the already existing support in the Portal.

The newly available CLI commands are:

Creating example Egress Sink
spotf egress-sink create-or-update otel \
--name 'otel-sink' \
--endpoint 'https://otlp-gateway-prod-eu-west-3.grafana.net/otlp' \
--basic-auth-username 'user' \
--basic-auth-password 'password'
Creating example Egress Route
spotf stream egress-route create-or-update otel \
--name 'otel-route' \
--stream-group-name 'example-group' \
--stream-name 'example-stream' \
--egress-sink-name 'otel-sink' \
--log-labels 'deviceId:true'

New tutorial on data routing to Databricks

We've added a new tutorial showing how to easily send data to Delta Tables (Databricks) using the Spotflow IoT Platform. The tutorial also instructs you how to access the data in Databricks.

Sounds interesting? Give it a try right away!

Public preview of Device SDK for Rust

You can now use the Rust interface of the Device SDK. Because the interface hasn't reached version 1.0 yet, there might be minor breaking changes in the future. However, we don't plan to remove any existing functionality.

We'll be happy to hear your feedback on the new interface!

Delta Egress can now be managed via Portal and CLI

Spotflow customers can now manage Delta Egress sinks and routes via the Portal and the spotf CLI.

The newly available CLI commands are:

Creating example Egress Sink
spotf egress-sink create-or-update delta azure-blob \
--name 'delta-sink' \
--connection-string 'AccountName=...;AccountKey=...' \
--container-name 'tables'
Creating example Egress Route
spotf stream egress-route create-or-update delta \
--name 'delta-route' \
--stream-group-name 'example-group' \
--stream-name 'example-stream' \
--egress-sink-name 'delta-sink' \
--directory-path 'table/path'

New tutorials on data routing to observability backends and Azure Event Hubs

We've added new tutorials showing how easy it is to send data to Azure Event Hubs or observability backends using the Spotflow IoT Platform. The tutorials also show how to read the incoming data or visualize it, respectively.

Sounds interesting? Give it a try right away!

New interface of Device SDK for Python and C

We've simplified the interface of Device SDK for Python and C:

Changes in version 2.0.0 of the Python interface:

  • DeviceClient.start now accepts all the options directly as arguments instead of using DeviceClientOptions.
  • The class ProvisioningToken was replaced by a simple string.
  • DeviceClient.send_message waits until the Device SDK sends the Message to the Platform. We recommend using DeviceClient.enqueue_message for production scenarios because it doesn't block code execution during connection outages, as the sketch below shows.
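To make the changes concrete, here's a minimal sketch of the 2.0.0 interface. The method names come from the list above; the module name and the exact parameters of DeviceClient.start are assumptions, so see the Device SDK documentation for the authoritative signatures.

# Module and parameter names below are assumptions; see the Device SDK
# documentation for the exact signatures.
from spotflow_device import DeviceClient

# Options are now passed directly as arguments, and the provisioning
# token is a plain string instead of a ProvisioningToken instance.
client = DeviceClient.start(
    device_id="my-device",
    provisioning_token="<provisioning-token>",
)

# enqueue_message doesn't block during connection outages (recommended
# for production); send_message would instead wait for delivery.
client.enqueue_message(b'{"temperature": 21.5}')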

Improved onboarding experience

We've introduced a new interactive quickstart tutorial to improve the first interactions with the platform.

When each user signs up, we set up an initial workspace for them, so there are no obstacles to start using the platform. When the user lands in our Portal, they'll see a tutorial that guides them through the initial steps within the platform:

  1. Using the Device SDK.
  2. Provisioning a new device.
  3. Sending messages into the platform.
  4. Visualizing the data in our built-in Grafana.

Device Fleet Configuration templates are now fully editable

Spotflow customers using Device Fleet Configurations to configure devices at scale can now modify the configuration's content or path.

This feature allows editing a configuration used by many devices without needing to delete it and create a new one. Our goal with this update is to make it easy for customers to make iterative changes to large fleets of devices, even when running in production.

Please see Tutorial: Configure Multiple Devices if you want to start using Device Fleet Configurations.

Documentation overhaul: Now more user-friendly and use-case oriented!

We've completely redesigned our documentation to be more intuitive and focused on real-world use cases. Dive in to find clearer guides and more helpful examples tailored just for you!

Devices can now be deleted via CLI

Spotflow users can now delete devices directly through the command-line interface (CLI). This feature provides a quick and efficient way to delete devices without needing to access the Portal. To delete a device, use the spotf device delete command followed by the device's ID. Here's an example:

spotf device delete --device-id "<device_id>"

Please refer to the CLI command reference documentation for detailed instructions.

Route data to AWS S3


Spotflow customers can now seamlessly route data streams from devices into AWS S3 buckets. This feature is provided as a new Egress Sink kind to which both new and existing streams can be routed.

See the detailed documentation on the Amazon S3 Egress Sink page.

The target S3 bucket is specified via bucket name, region, and IAM user access key. Optionally, a static prefix of the target path can be added. The new egress sink can be configured via Portal, CLI, or API.

Previously, this use case was possible only by writing custom processors consuming data through one of the existing egress sinks, such as Azure Event Hub. This option is still viable for existing and new customers, but we recommend migrating to the new AWS S3 egress sink.

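As a starting point, here's a minimal sketch of creating such a sink via the API with Python and the requests library. The endpoint follows the egress sink API shown elsewhere on this page; the property names under config are assumptions derived from the configuration fields described above, so please check the Amazon S3 Egress Sink documentation for the authoritative request body.

import requests

# Placeholders for your workspace and credentials.
workspace_id = "<workspace-id>"
token = "<api-token>"

uri = (
    "https://api.eu1.spotflow.io"
    f"/workspaces/{workspace_id}"
    "/egress-sinks/s3-sink"
)

# NOTE: The property names under "config" are assumptions; consult the
# Amazon S3 Egress Sink documentation for the exact request body.
body = {
    "properties": {
        "config": {
            "awsS3": {
                "bucketName": "<bucket-name>",
                "region": "<region>",
                "accessKeyId": "<access-key-id>",
                "secretAccessKey": "<secret-access-key>",
            }
        }
    }
}

response = requests.patch(uri, json=body, headers={"Authorization": f"Bearer {token}"})
response.raise_for_status()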

Route data to Delta Tables in Azure Blob Storage


Data streams from devices can now be directly inserted into Delta Tables backed by Azure Blob Storage. This feature is provided as a new Egress Sink kind to which both new and existing streams can be routed.

The target Azure Blob Storage container is specified via a connection string and container name. The Delta Table is created in the specified container, and the data is inserted into it. Optionally, a static prefix of the target path can be added.

The created Delta Table contains several predefined columns, such as stream_name, message_id, and ingress_enqueued_date_time, as well as the column payload_line with the content of the stream message.
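For example, once data has been routed, the predefined columns can be read back with PySpark, analogously to the OpenTelemetry example earlier on this page (the table path below is a placeholder):

import pyspark

spark = pyspark.sql.SparkSession.builder.appName("MyApp").getOrCreate()

# Placeholder path of a Delta Table created by Delta Egress.
table_uri = "abfss://<container>@<account>.dfs.core.windows.net/table/path"

df = spark.read.format("delta").load(table_uri)

# Predefined columns plus the stream message content in payload_line.
df.select("stream_name", "message_id", "ingress_enqueued_date_time", "payload_line").show()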

See the detailed documentation on the Delta Egress Sink page.

The new egress sink can be configured via the API. Support in the CLI and Portal is not yet available.

Creating Egress Sink
#!/usr/bin/env pwsh

$body = @{
    properties = @{
        config = @{
            delta = @{
                azureBlobStorage = @{
                    connectionString = "AccountName=...;AccountKey=..."
                    containerName = "delta"
                }
            }
        }
    }
}

$uri = "https://api.eu1.spotflow.io"
$uri += "/workspaces/$workspaceId"
$uri += "/egress-sinks/delta-example"

Invoke-WebRequest `
    -Method Patch `
    -Uri $uri `
    -Headers @{ "Authorization" = "Bearer $token" } `
    -ContentType "application/json" `
    -Body ($body | ConvertTo-Json -Depth 10)
Creating Egress Route
#!/usr/bin/env pwsh

$body = @{
    properties = @{
        egressSinkName = "delta-sink-sample"
        config = @{
            delta = @{
                directoryPath = "sample-path"
            }
        }
    }
}

$uri = "https://api.eu1.spotflow.io"
$uri += "/workspaces/$workspaceId"
$uri += "/stream-groups/$streamGroupName"
$uri += "/streams/$streamName"
$uri += "/egress-routes/delta-route-sample"

Invoke-WebRequest `
    -Method Patch `
    -Uri $uri `
    -Headers @{ "Authorization" = "Bearer $token" } `
    -ContentType "application/json" `
    -Body ($body | ConvertTo-Json -Depth 10)

Product changelog started

The product changelog has been started. Spotflow engineers and product managers will keep you informed about the latest updates and improvements on this page. Stay tuned!