Using Spotflow with Resource-Constrained Devices
How to configure Spotflow's device module to fit tight RAM, bandwidth, and power budgets on embedded hardware.
Spotflow's device module is designed to be tunable. Every major subsystem (logging, metrics, coredumps, and the TLS stack) exposes Kconfig options that directly control its RAM allocation, transmission frequency, and CPU footprint. This page walks through the options most relevant to devices with limited resources and explains the trade-offs behind each one.
The guidance here targets Zephyr and Nordic nRF Connect SDK. ESP-IDF shares the same underlying principles but uses sdkconfig.defaults instead of prj.conf.
The defaults are intentionally set to cover the widest range of use cases and boards, which means they are higher than most production deployments actually require. This page gives an overview of the options for reducing memory and bandwidth allocations. If you have specific requirements or constraints, reach out via hello@spotflow.io or Discord and we will help you find the best optimization approach for your setup.
Memory
Memory usage is controlled through Kconfig heap additions. Each subsystem declares its own heap pool contribution; enabling a subsystem adds that amount to the system heap. Nothing is allocated silently.
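All of these contributions land in Zephyr's single shared system heap, the same pool that serves k_malloc in application code (Zephyr sums every HEAP_MEM_POOL_ADD_SIZE_* symbol into that one pool). A minimal sketch of what that sharing means in practice; the function and buffer size here are illustrative:

```c
#include <zephyr/kernel.h>

/* Zephyr sums every HEAP_MEM_POOL_ADD_SIZE_* contribution into the one
 * system heap behind k_malloc(), so Spotflow's pools and application
 * allocations share the same arena. */
void heap_sharing_demo(void)
{
    void *scratch = k_malloc(256); /* served by the shared system heap */

    if (scratch == NULL) {
        /* Heap exhausted: raise CONFIG_HEAP_MEM_POOL_SIZE or trim the
         * subsystem contributions listed below. */
        return;
    }

    k_free(scratch);
}
```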
The table below shows the default heap contribution for each subsystem:
| Subsystem | Kconfig option | Default |
|---|---|---|
| Logging backend | HEAP_MEM_POOL_ADD_SIZE_SPOTFLOW_LOGGING | 6 KB |
| Custom metrics | HEAP_MEM_POOL_ADD_SIZE_SPOTFLOW_METRICS | 8 KB |
| System metrics | HEAP_MEM_POOL_ADD_SIZE_SPOTFLOW_METRICS_SYSTEM | 20 KB |
| Coredumps backend | HEAP_MEM_POOL_ADD_SIZE_SPOTFLOW_COREDUMPS | 18 KB |
| Mbed TLS | MBEDTLS_HEAP_SIZE | 32 KB |
Metrics and coredumps are disabled by default (CONFIG_SPOTFLOW_METRICS=n, CONFIG_SPOTFLOW_COREDUMPS=n). With only logging enabled, the baseline is roughly 38 KB (6 KB logging + 32 KB Mbed TLS), plus a 2.5 KB processing thread stack.
The Mbed TLS heap dominates the baseline. Of the roughly 40.5 KB total (heap plus thread stack), 32 KB is Mbed TLS and ~8.5 KB is Spotflow itself (6 KB logging heap + 2.5 KB thread stack). Mbed TLS uses this heap for the full TLS session: certificate chain verification and the input and output record buffers for the encrypted connection.
If your firmware already uses a TLS library, the 32 KB may not be an incremental cost as it may already be accounted for in your memory budget. If you are evaluating footprint against other solutions, compare the numbers without the TLS layer unless the competing solution includes one too. If you are using a different TLS library or need a lower-footprint TLS option, contact us and we can look at adjusting the integration.
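If you want to verify how much of that 32 KB your workload actually touches, Mbed TLS can report its heap high-water mark. A sketch, assuming your Mbed TLS build enables MBEDTLS_MEMORY_BUFFER_ALLOC_C and MBEDTLS_MEMORY_DEBUG (not every Zephyr configuration does):

```c
#include <mbedtls/memory_buffer_alloc.h>
#include <zephyr/sys/printk.h>

/* Print the peak Mbed TLS heap usage observed so far. Call after a full
 * TLS connect/publish cycle to right-size CONFIG_MBEDTLS_HEAP_SIZE.
 * Requires MBEDTLS_MEMORY_DEBUG in the Mbed TLS configuration. */
void print_tls_heap_peak(void)
{
    size_t peak_bytes, peak_blocks;

    mbedtls_memory_buffer_alloc_max_get(&peak_bytes, &peak_blocks);
    printk("Mbed TLS peak heap: %zu bytes in %zu blocks\n",
           peak_bytes, peak_blocks);
}
```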
Reducing the logging heap
The logging heap is sized to hold a queue of CBOR-encoded messages waiting to be sent. Four options control it:
| Kconfig option | Default | Effect |
|---|---|---|
| SPOTFLOW_LOG_BACKEND_QUEUE_SIZE | 16 messages | Depth of the in-RAM log queue |
| SPOTFLOW_LOG_BUFFER_SIZE | 512 bytes | Raw per-message buffer before CBOR encoding |
| SPOTFLOW_CBOR_LOG_MAX_LEN | 1024 bytes | CBOR-encoded output buffer per message |
| SPOTFLOW_LOG_INCLUDE_BODY_TEMPLATE | y | Whether to include the original format template alongside the interpolated message |
If your log lines are short (under ~100 characters), you can safely reduce SPOTFLOW_LOG_BUFFER_SIZE and SPOTFLOW_CBOR_LOG_MAX_LEN. Messages that exceed the buffer are dropped.
Disabling SPOTFLOW_LOG_INCLUDE_BODY_TEMPLATE omits the original format template from each log message, sending only the final interpolated string. This reduces per-message CBOR payload size at the cost of losing the template in the cloud.
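To make the trade-off concrete, here is what a single Zephyr log call contributes in each mode (the payload descriptions are simplified for illustration):

```c
#include <zephyr/logging/log.h>

LOG_MODULE_REGISTER(sensor, LOG_LEVEL_INF);

void report_temperature(int temp_c)
{
    LOG_INF("temperature=%d", temp_c);

    /* With CONFIG_SPOTFLOW_LOG_INCLUDE_BODY_TEMPLATE=y (default), the
     * CBOR message carries both the interpolated body "temperature=23"
     * and the template "temperature=%d".
     * With =n, only "temperature=23" is sent. */
}
```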
# Default: ~6 KB heap
CONFIG_HEAP_MEM_POOL_ADD_SIZE_SPOTFLOW_LOGGING=6144
# Constrained: ~3 KB heap
CONFIG_SPOTFLOW_LOG_BACKEND_QUEUE_SIZE=8
CONFIG_SPOTFLOW_LOG_BUFFER_SIZE=256
CONFIG_SPOTFLOW_CBOR_LOG_MAX_LEN=512
CONFIG_HEAP_MEM_POOL_ADD_SIZE_SPOTFLOW_LOGGING=3072

Reducing the custom metrics heap
When you register a labeled metric, two parameters directly determine how much heap it consumes:
- MAX_LABELS_PER_METRIC (Kconfig, compile-time) — the maximum number of label key-value pairs any single metric report will ever carry. Each label slot costs ~48 bytes per timeseries regardless of whether that slot is actually used, because the array is sized at compile time.
- max_timeseries (registration call, per-metric) — the number of unique label combinations the SDK must hold in memory simultaneously. Each unique combination gets its own independent aggregation state (sum, count, min, max) for the full duration of the aggregation window.
The heap cost per metric follows:
timeseries_size = ~36 + (48 × MAX_LABELS_PER_METRIC) bytes
per-metric heap = 80 + (timeseries_size × max_timeseries) bytes

max_timeseries should equal the number of unique label value combinations you expect at runtime. For example, a metric with a location label (3 possible values: north, south, east) and a channel label (2 possible values: a, b) produces 3 × 2 = 6 combinations, so set max_timeseries=6. If a new combination appears beyond that limit, the SDK silently drops reports for it until the aggregation window resets.
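A sketch of that 6-combination example, using hypothetical registration and report calls (the real Spotflow metric API names and signatures may differ; see the metrics reference):

```c
#include <stddef.h>

/* Hypothetical prototypes, for illustration only: */
struct spotflow_metric;
struct spotflow_metric *spotflow_metric_register_float(const char *name,
                                                       size_t max_timeseries);
void spotflow_metric_report_float_with_labels(struct spotflow_metric *m,
                                              double value, ...);

void metrics_init(void)
{
    /* location has 3 possible values and channel has 2, so this metric
     * can produce 3 x 2 = 6 unique label combinations -> max_timeseries = 6.
     * With CONFIG_SPOTFLOW_METRICS_MAX_LABELS_PER_METRIC=2:
     *   timeseries_size = ~36 + 48 * 2 = ~132 bytes
     *   per-metric heap = 80 + 132 * 6 = ~872 bytes (the ~875 row below) */
    struct spotflow_metric *temp =
        spotflow_metric_register_float("temperature_c",
                                       /* max_timeseries = */ 6);

    /* Each report carries at most 2 labels, matching the Kconfig ceiling: */
    spotflow_metric_report_float_with_labels(temp, 23.5,
                                             "location", "north",
                                             "channel", "a");
}
```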
The following table shows how MAX_LABELS_PER_METRIC affects timeseries size and the total heap for that 6-combination example:
| MAX_LABELS_PER_METRIC | Bytes per timeseries | Heap for 6 timeseries |
|---|---|---|
| 1 | ~84 bytes | ~590 bytes |
| 2 | ~132 bytes | ~875 bytes |
| 4 (default) | ~228 bytes | ~1.5 KB |
| 8 (max) | ~420 bytes | ~2.6 KB |
Set MAX_LABELS_PER_METRIC to the highest label count used by any single metric in your application, not to a generous ceiling. If your most label-heavy metric uses 2 labels, set it to 2. System metrics use at most 1 label.
# Default: 4 labels per timeseries, up to 32 metrics
CONFIG_SPOTFLOW_METRICS_MAX_LABELS_PER_METRIC=4
CONFIG_SPOTFLOW_METRICS_MAX_REGISTERED=32
# Constrained: trim to actual label usage
CONFIG_SPOTFLOW_METRICS_MAX_LABELS_PER_METRIC=2
CONFIG_SPOTFLOW_METRICS_MAX_REGISTERED=8
CONFIG_HEAP_MEM_POOL_ADD_SIZE_SPOTFLOW_METRICS=4096

Reducing the system metrics heap
The system metrics heap is the largest contributor after Mbed TLS. Thread stack monitoring dominates its cost: with the default of tracking up to 32 threads automatically, stack metrics alone account for roughly 15 KB of the 20 KB default.
# Default: 20 KB heap, all threads tracked automatically
CONFIG_HEAP_MEM_POOL_ADD_SIZE_SPOTFLOW_METRICS_SYSTEM=20480
CONFIG_SPOTFLOW_METRICS_SYSTEM_STACK_ALL_THREADS=y
CONFIG_SPOTFLOW_METRICS_SYSTEM_STACK_MAX_THREADS=32
# Constrained: ~8 KB heap, only track specific threads manually
CONFIG_SPOTFLOW_METRICS_SYSTEM_STACK_ALL_THREADS=n
CONFIG_SPOTFLOW_METRICS_SYSTEM_STACK_MAX_THREADS=4
CONFIG_HEAP_MEM_POOL_ADD_SIZE_SPOTFLOW_METRICS_SYSTEM=8192

When STACK_ALL_THREADS=n, register the threads you care about explicitly from application code:
spotflow_metrics_system_enable_thread_stack(my_critical_thread);

Reducing the coredumps heap
The coredumps heap is sized to buffer binary chunks before MQTT transmission. Reducing the chunk size and queue depth lowers the heap proportionally:
heap ≈ (CHUNK_SIZE × QUEUE_SIZE) + 2 KB

For the defaults below, that is 1024 × 16 + 2048 = 18432 bytes.

# Default: ~18 KB heap
CONFIG_SPOTFLOW_COREDUMPS_CHUNK_SIZE=1024
CONFIG_SPOTFLOW_COREDUMPS_BACKEND_QUEUE_SIZE=16
CONFIG_HEAP_MEM_POOL_ADD_SIZE_SPOTFLOW_COREDUMPS=18432
# Constrained: ~6 KB heap
CONFIG_SPOTFLOW_COREDUMPS_CHUNK_SIZE=512
CONFIG_SPOTFLOW_COREDUMPS_BACKEND_QUEUE_SIZE=8
CONFIG_HEAP_MEM_POOL_ADD_SIZE_SPOTFLOW_COREDUMPS=6144

On boards with limited flash for storing the coredump image before upload, also enable:
CONFIG_DEBUG_COREDUMP_MEMORY_DUMP_MIN=y

Bandwidth
Log level filtering
The most direct bandwidth reduction is filtering out low-severity logs before they are serialized and queued. Use SPOTFLOW_DEFAULT_SENT_LOG_LEVEL to set the initial filter at boot time:
| Value | Level | When to use |
|---|---|---|
| 1 | ERROR | Production devices: only failures |
| 2 | WARNING | Production devices: failures and anomalies |
| 3 | INFO | Staging / burn-in: standard operational logs |
| 4 | DEBUG | Development: all messages (default) |
# Default: send all log levels (DEBUG and above)
CONFIG_SPOTFLOW_DEFAULT_SENT_LOG_LEVEL=4
# Production: send only WARNING and ERROR
CONFIG_SPOTFLOW_DEFAULT_SENT_LOG_LEVEL=2

You can also change the log level at runtime from the Spotflow portal without a firmware update (see Adjust Device's Minimal Log Severity). On MQTT connect, the device reports its current level; when you change it in the portal, the device applies it immediately. To persist the level across reboots so it takes effect before the first MQTT connection, enable the Zephyr settings subsystem:
CONFIG_FLASH=y
CONFIG_FLASH_MAP=y
CONFIG_NVS=y
CONFIG_SETTINGS=y
CONFIG_SETTINGS_NVS=y
# CONFIG_SPOTFLOW_SETTINGS=y is set automatically when CONFIG_SETTINGS=y

Also suppress the device module's own internal logs in production builds:
# 1 = ERROR only (default is 2 = WARNING)
CONFIG_SPOTFLOW_MODULE_DEFAULT_LOG_LEVEL=1

Metrics aggregation
Metrics are aggregated on-device before transmission: individual samples are accumulated and sent as a single message containing sum, count, min, and max for the window. This means transmission frequency is decoupled from sampling frequency.
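A sketch of what that decoupling means in application code, again with a hypothetical report call (only the aggregation behavior is taken from this page):

```c
#include <zephyr/kernel.h>

/* Hypothetical report call, for illustration only: */
struct spotflow_metric;
void spotflow_metric_report_float(struct spotflow_metric *m, double value);

extern struct spotflow_metric *temp_metric; /* registered elsewhere  */
extern double read_temperature_c(void);     /* your existing sampler */

void sampling_loop(void)
{
    while (true) {
        /* Each call only updates the in-RAM aggregate (sum, count,
         * min, max); nothing is transmitted until the aggregation
         * window closes. */
        spotflow_metric_report_float(temp_metric, read_temperature_c());

        /* 10 s sampling with a 1-hour window: 360 samples fold into
         * one MQTT message per metric. */
        k_sleep(K_SECONDS(10));
    }
}
```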
| Aggregation interval | Constant | Transmissions per metric per day |
|---|---|---|
| No aggregation | SPOTFLOW_AGG_INTERVAL_NONE | One per sample |
| 1 minute | SPOTFLOW_AGG_INTERVAL_1MIN | 1440 |
| 1 hour | SPOTFLOW_AGG_INTERVAL_1HOUR | 24 |
| 1 day | SPOTFLOW_AGG_INTERVAL_1DAY | 1 |
One-hour aggregation is typically sufficient for fleet health monitoring. One-day aggregation is practical for low-bandwidth deployments where only long-term trends matter. No aggregation is appropriate for event-like metrics where each occurrence is individually significant.
The default aggregation interval and heartbeat frequency are both configurable:
# Default: 1-minute aggregation, 60-second heartbeat
CONFIG_SPOTFLOW_METRICS_DEFAULT_AGGREGATION_INTERVAL=60
CONFIG_SPOTFLOW_METRICS_HEARTBEAT_INTERVAL=60
# Low-bandwidth: 1-hour aggregation, 1-hour heartbeat
CONFIG_SPOTFLOW_METRICS_DEFAULT_AGGREGATION_INTERVAL=3600
CONFIG_SPOTFLOW_METRICS_HEARTBEAT_INTERVAL=3600

System metrics use separate collection and aggregation intervals:
# Default: sample every 10 s, send aggregated every 60 s
CONFIG_SPOTFLOW_METRICS_SYSTEM_COLLECTION_INTERVAL=10
CONFIG_SPOTFLOW_METRICS_SYSTEM_AGGREGATION_INTERVAL=60
# Low-bandwidth: sample every 60 s, send aggregated every hour
CONFIG_SPOTFLOW_METRICS_SYSTEM_COLLECTION_INTERVAL=60
CONFIG_SPOTFLOW_METRICS_SYSTEM_AGGREGATION_INTERVAL=3600

CBOR encoding
The Spotflow device module always uses CBOR (Concise Binary Object Representation) for all messages — logs, metrics, and coredumps. CBOR uses single-byte field identifiers and encodes values in their native binary form, which produces significantly smaller payloads than text-based formats.
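For intuition on why CBOR is compact, consider a generic two-field map. This illustrates CBOR itself, not Spotflow's actual message schema:

```c
#include <stdint.h>

/* CBOR encoding of the map {1: 3, 2: "boot"} — 9 bytes on the wire: */
static const uint8_t cbor_example[] = {
    0xA2,                     /* map with 2 key/value pairs        */
    0x01, 0x03,               /* key 1 -> unsigned integer 3       */
    0x02,                     /* key 2 ->                          */
    0x64, 'b', 'o', 'o', 't', /*   text string of length 4, "boot" */
};

/* The equivalent JSON, {"1":3,"2":"boot"}, takes 18 bytes — twice the
 * size even before any field name longer than one character. */
```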
Dictionary Logging
Dictionary logging support is in development.
Standard text logging transmits the full format string with every message. On a constrained link, those strings add up quickly: a single log line with a short message and two numeric arguments can easily exceed 50 bytes on the wire, and a busy firmware may emit hundreds of such messages per second.
Dictionary logging eliminates that overhead. Instead of transmitting the format string, the device sends only a compact numeric reference pointing to the string in the build's ELF file. Arguments are encoded in their native binary form rather than converted to text. The host-side parser reconstructs the full human-readable message offline using a build-time dictionary file. String formatting is never performed on the device.
The difference in what is actually transmitted over the wire for a single log call:
// Text logging — full string sent with every message (~50 bytes)
"[INF] sensor_read: temperature=23 pressure=1012"
// Dictionary logging — ELF reference + raw binary arguments (~6 bytes)
[0x3A2F] 0x00000017 0x000003F4

Beyond bandwidth, dictionary logging also reduces CPU overhead in the logging backend thread by skipping on-device string formatting.
Power and Radio Activity
Longer aggregation windows and a longer heartbeat interval directly reduce the number of MQTT publishes per day, which is the primary factor in radio-on time for cellular and low-power wireless devices.
The design choices that reduce radio overhead by default:
- QoS 0: The device module always uses MQTT QoS 0. No acknowledgment round-trips and no retransmit state machine mean fewer radio transactions per message.
- Aggregation: A single aggregated MQTT message replaces hundreds of raw samples. With 1-hour aggregation, a metric that is sampled every 10 seconds produces one transmission instead of 360.
- Heartbeat: The heartbeat interval controls how often the uptime metric is sent. Extending it from the default 60 seconds to 3600 seconds eliminates 59 out of every 60 heartbeat transmissions.
The configuration in the Metrics aggregation section above covers all the relevant options.
CPU Overhead
The MQTT processing runs on a dedicated background thread at K_LOWEST_APPLICATION_THREAD_PRIO by default (priority 14 with Zephyr's default priority configuration). Application threads therefore preempt it freely, and it only runs when no application work is pending. No additional configuration is needed to achieve this behavior.
The main tuning lever for CPU overhead is the system metrics collection interval. Sampling heap, stack, CPU utilization, and network counters has a small but nonzero cost. Increasing the interval reduces that overhead:
# Default: collect system metrics every 10 seconds
CONFIG_SPOTFLOW_METRICS_SYSTEM_COLLECTION_INTERVAL=10
# Reduced overhead: collect every 60 seconds
CONFIG_SPOTFLOW_METRICS_SYSTEM_COLLECTION_INTERVAL=60

CPU utilization metrics (SPOTFLOW_METRICS_SYSTEM_CPU) depend on Zephyr's CPU_LOAD subsystem, which is only available on single-core (non-SMP) targets: Cortex-M, RISC-V, and Cortex-A. On SMP builds, disable this metric (CONFIG_SPOTFLOW_METRICS_SYSTEM_CPU=n) to avoid a build error.
Offline Operation
The logging backend stores messages in a circular queue in RAM. When the device loses connectivity, messages continue to accumulate in the queue. When the connection is restored, the processing thread drains the queue and transmits the buffered messages automatically; no application code is required.
When the queue is full and the device is still offline, the oldest messages are overwritten. This newest-wins eviction policy preserves the most recent context at the cost of older entries; with the default depth, only the 16 most recent messages survive an extended outage. Queue depth is controlled by SPOTFLOW_LOG_BACKEND_QUEUE_SIZE.
The log buffer is volatile: if the device reboots while offline, buffered logs are lost. Only coredumps survive reboots, because they are stored in a dedicated flash partition and uploaded on the next boot once connectivity is available. Metrics behave similarly: aggregated values are held in RAM within the current aggregation window, and if the device reboots mid-window, the partial aggregation is discarded.
Enabling Only What You Need
Metrics and coredumps are disabled by default. Only enable subsystems you actively use:
| Subsystem | Enable with | Default heap |
|---|---|---|
| Logging | CONFIG_SPOTFLOW_LOG_BACKEND=y (auto-enabled) | 6 KB |
| Custom metrics | CONFIG_SPOTFLOW_METRICS=y | 8 KB |
| System metrics | CONFIG_SPOTFLOW_METRICS_SYSTEM=y | 20 KB |
| Coredumps | CONFIG_SPOTFLOW_COREDUMPS=y | 18 KB |
Within system metrics, each metric type can be toggled independently:
CONFIG_SPOTFLOW_METRICS_SYSTEM_HEAP=y
CONFIG_SPOTFLOW_METRICS_SYSTEM_NETWORK=y
CONFIG_SPOTFLOW_METRICS_SYSTEM_CPU=y
CONFIG_SPOTFLOW_METRICS_SYSTEM_CONNECTION=y
CONFIG_SPOTFLOW_METRICS_SYSTEM_RESET_CAUSE=y
# Disable stack monitoring if thread count is high and heap is tight
CONFIG_SPOTFLOW_METRICS_SYSTEM_STACK=n

Disabling SPOTFLOW_METRICS_SYSTEM_STACK removes the dominant cost of the system metrics heap (roughly 15 KB of the 20 KB default).
Also disable IPv6 if your network stack does not require it; this is a common memory saving in Zephyr samples and applies regardless of Spotflow:
CONFIG_NET_IPV6=n

Minimum Footprint Configuration
The following is a starting-point prj.conf for a constrained device using logging only. Adjust queue sizes and log level to match your application's line lengths and production logging requirements.
# Enable Spotflow
CONFIG_SPOTFLOW=y
# Reduce logging heap: ~1 KB instead of default 6 KB
CONFIG_SPOTFLOW_LOG_BACKEND_QUEUE_SIZE=4
CONFIG_SPOTFLOW_LOG_BUFFER_SIZE=128
CONFIG_SPOTFLOW_CBOR_LOG_MAX_LEN=160
CONFIG_HEAP_MEM_POOL_ADD_SIZE_SPOTFLOW_LOGGING=1024
# Send INFO and above by default
CONFIG_SPOTFLOW_DEFAULT_SENT_LOG_LEVEL=3
# SDK internal logs: INFO and above
CONFIG_SPOTFLOW_MODULE_DEFAULT_LOG_LEVEL=3
# Omit format template from log messages to reduce payload size
CONFIG_SPOTFLOW_LOG_INCLUDE_BODY_TEMPLATE=n
# IPv4 only
CONFIG_NET_IPV6=n

For a device that also uses metrics, extend with:
CONFIG_SPOTFLOW_METRICS=y
CONFIG_SPOTFLOW_METRICS_SYSTEM=y
# Aggregate metrics hourly instead of every minute
CONFIG_SPOTFLOW_METRICS_DEFAULT_AGGREGATION_INTERVAL=3600
CONFIG_SPOTFLOW_METRICS_HEARTBEAT_INTERVAL=3600
# Reduce system metrics heap by limiting thread tracking
CONFIG_SPOTFLOW_METRICS_SYSTEM_STACK_ALL_THREADS=n
CONFIG_SPOTFLOW_METRICS_SYSTEM_STACK_MAX_THREADS=4
CONFIG_HEAP_MEM_POOL_ADD_SIZE_SPOTFLOW_METRICS_SYSTEM=8192
# Sample system metrics less frequently
CONFIG_SPOTFLOW_METRICS_SYSTEM_COLLECTION_INTERVAL=60
CONFIG_SPOTFLOW_METRICS_SYSTEM_AGGREGATION_INTERVAL=3600
# Trim label slots to actual usage (saves ~48 bytes per timeseries per unused label)
CONFIG_SPOTFLOW_METRICS_MAX_LABELS_PER_METRIC=1

Learn more
Fundamentals: Metrics
Fundamentals: Logging
Fundamentals: Crash reports & core dumps
Guide: Advanced configuration for Zephyr