
MQTT + Sparkplug Support

1.0

While NF provides native APIs for exchanging data with other systems, it also implements Sparkplug over MQTT.

  • MQTT is a standardized pub/sub protocol widely adopted in the IoT space. Azure IoT, AWS IoT, and GCP IoT all implement portions of the standard, although none is a full implementation and each has various restrictions. MQTT does not define any particular structure for topics or message payloads.
  • Sparkplug B is a specification for how to implement IoT and Industrial IoT applications on top of MQTT. It is not possible to implement fully when using a cloud provider as the broker, due to restrictions on topic format and lack of support for certain protocol features.

Developers Guide

Protocol Overview

The Sparkplug protocol defines a topic and payload format on top of MQTT, and specifies when messages should be sent.

In the Sparkplug protocol, there are two main participants:

  • Nodes are essentially gateways which read from underlying devices and communicate over MQTT.
  • Devices are the pieces of equipment actually generating the data; for instance, controllers or other sensors.

In this architecture, NF is a node, and BACnet devices are exposed as devices. When a node or device starts up, it publishes an NBIRTH or DBIRTH message which contains the metadata for all points available on that node or device -- in Sparkplug language, these are called Metrics.

In NF's implementation, each BACnet object is mapped to a Sparkplug Metric, whose name is the UUID identifier in NF. The DBIRTH message contains, within the metric's PropertySet, all BACnet properties which were read during the latest scan.

After the DBIRTH has been sent, NF will send DDATA messages as new data become available. After an outage when the MQTT broker was not reachable, NF can also send buffered data. In this case those values will have the is_historical flag set to indicate they are no longer current.
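For illustration, standard Sparkplug B topics have the form spBv1.0/<group_id>/<message_type>/<node_id>[/<device_id>]. Assuming a group ID of normalgw, a node named TestDevice, and a BACnet device 500 (the same illustrative names used in the Azure example below), the messages above would appear on topics like:

spBv1.0/normalgw/NBIRTH/TestDevice
spBv1.0/normalgw/DBIRTH/TestDevice/500
spBv1.0/normalgw/DDATA/TestDevice/500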

Payload Format

1.4

The "Sparkplug B" protocol defines its protocol format using a protocol buffer. Protocol buffers generate an efficient binary format from a simple descriptoin file, called a proto file. The full set of definitions is available in sparkplug_b.proto.

Normal Framework supports several different encodings of this format, controlled by the SPARKPLUG_PAYLOAD_FORMAT environment variable. The options are:

  • proto+gzip (default): binary protobuf format, then gzipped. This generates the most compact messages.
  • proto: binary protobuf
  • json: the json-equivalent encoding of the protobuf message

Info

If you are using either of the binary formats, you will need to use the protocol buffer libraries for your language in order to decode the payload.

To use a proto file, first run the protocol buffer compiler for your target language to generate bindings. If you don't have the compiler installed already, you can get it here.

$ protoc --js_out=import_style=commonjs,binary:. sparkplug_b.proto
$ protoc --python_out=. sparkplug_b.proto

If you don't want to install the compiler, you can also just download the results: sparkplug_b_pb.js sparkplug_b_pb2.py.

In addition to the protobuf encoding, NF also compresses the payload using GZip in the default proto+gzip format. To unpack a message, you need to decompress it and then parse it using the protocol buffer definition.

// only dependency for node is google-protobuf
var fs = require("fs");
var zlib = require("zlib");
var sparkplug_b = require("./sparkplug_b_pb.js");

fs.readFile("payload.pb", (err, data) => {
    if (err) throw err;
    // decompress the gzip wrapper, then parse the protobuf
    var buf = zlib.unzipSync(data);
    var message = sparkplug_b.Payload.deserializeBinary(buf);
    console.log(JSON.stringify(message.toObject()));
});
# need pip install protobuf
import gzip
import sparkplug_b_pb2

# gzip.open decompresses the wrapper; ParseFromString decodes the protobuf
with gzip.open("payload.pb") as fp:
    payload = sparkplug_b_pb2.Payload()
    payload.ParseFromString(fp.read())
    print(payload)
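The examples above assume the default proto+gzip format. For reference, here is a minimal Python sketch covering all three SPARKPLUG_PAYLOAD_FORMAT options; decode_payload is an illustrative helper, not part of NF:

# decode a raw message body under any of the three payload formats
import gzip
from google.protobuf import json_format
import sparkplug_b_pb2

def decode_payload(data, fmt="proto+gzip"):
    payload = sparkplug_b_pb2.Payload()
    if fmt == "proto+gzip":
        # gzipped binary protobuf (the default)
        payload.ParseFromString(gzip.decompress(data))
    elif fmt == "proto":
        # plain binary protobuf
        payload.ParseFromString(data)
    elif fmt == "json":
        # json-equivalent encoding of the protobuf message
        json_format.Parse(data, payload)
    return payload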

Reliability Considerations

Because MQTT is a message bus, data can be published with no guarantee that a consumer is available to insert them into a database. If data are sent and no one is listening, those data will typically be lost. NF therefore contains several mechanisms to help ensure that all data collected can be reliably saved to a database.

  • Sender-based reliability: if the MQTT broker or receiver can be run in a high-availability, persistent configuration so that all data which are sent are guaranteed to be archived, NF can be run with the SPARKPLUG_AUTO_RECOVER flag. In this mode, NF will ensure all data are sent to the broker at least once; after an outage where the broker isn't reachable, it will send buffered data which were generated during the outage.

  • Receiver-based reliability: if the broker or upstream system isn't highly available, NF can run in a mode where data are only sent while the broker is available. After an outage, the consumer is responsible for tracking which data have been inserted and generating a retry request asking NF to resend data which occurred during the outage (see the sketch below).

As a rule of thumb, sender-based reliability is a good fit if using services like Azure IoT, since they already contain mechanisms for persisting messages and are highly available. If you are running your own database and broker, receiver-based reliability allows you to guarantee all data will be archived without needing complex system configurations to run an HA broker and database.
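As an illustration of the receiver's side, the sketch below (using the paho-mqtt 1.x-style client API; the broker address and topic filter are assumptions) watches the Sparkplug seq field, which counts from 0 to 255 and wraps, to detect that messages were missed during an outage. Issuing the actual retry request to NF is not shown.

# pip install paho-mqtt protobuf
import gzip
import paho.mqtt.client as mqtt
import sparkplug_b_pb2

last_seq = None

def on_message(client, userdata, msg):
    global last_seq
    payload = sparkplug_b_pb2.Payload()
    payload.ParseFromString(gzip.decompress(msg.payload))
    # seq runs 0-255 and wraps; a jump means messages were missed
    # and the consumer should ask NF to resend that interval
    if last_seq is not None and payload.seq != (last_seq + 1) % 256:
        print("gap detected after seq", last_seq)
    last_seq = payload.seq

client = mqtt.Client()  # paho-mqtt 1.x callback API
client.on_message = on_message
client.connect("localhost", 1883)
client.subscribe("spBv1.0/normalgw/#")
client.loop_forever()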

Considerations when using a Cloud Provider

Cloud providers vary slightly, but they all impose certain restrictions on the use of MQTT:

  • They all require provider-specific authentication logic, specifying their use of X.509 certificates, MQTT client IDs, and passwords;
  • Payload size restrictions make compressing messages highly desirable;
  • Lack of support for retained messages means other mechanisms should be used to record device databases;
  • Restrictions on topic structure break conformance with the specification.

Azure IoT Hub

Azure imposes payload and topic format restrictions, making it impossible to use the standard format. When Azure is in use, the following changes are made:

  • Device events can only be sent to devices/<device_id>/messages/events/. Therefore all messages are sent to that topic, with additional path components attached to the query portion of the path; e.g., devices/TestDevice/messages/events/cmd=DBIRTH&grp=normalgw&node=TestDevice&dev=500 (see the sketch after this list).
  • Sparkplug requires that all device metadata be sent within a single DBIRTH message; however this can cause the Azure message size limit to be exceeded. NF will paginate device metadata into multiple DBIRTH messages with the same sequence number; by default up to 1000 objects can be placed into a single message.
  • NF implements the Shared Access Signature algorithms for generating MQTT passwords.
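To make the first point concrete, here is a sketch of how such a topic string is assembled; azure_topic is an illustrative helper, not an NF API:

# build an Azure-style event topic with Sparkplug fields in the query string
from urllib.parse import urlencode

def azure_topic(device_id, cmd, grp, node, dev=None):
    params = {"cmd": cmd, "grp": grp, "node": node}
    if dev is not None:
        params["dev"] = dev
    return "devices/{}/messages/events/{}".format(device_id, urlencode(params))

print(azure_topic("TestDevice", "DBIRTH", "normalgw", "TestDevice", 500))
# devices/TestDevice/messages/events/cmd=DBIRTH&grp=normalgw&node=TestDevice&dev=500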

Automatic Configuration

1.4

NF now supports configuring an Azure IoT Hub connection using the EdgeHubConnectionString variable. Simply copy the value from the Azure portal into your application (the value that looks like HostName=nf-test.azure-devices.net;DeviceId=nf-test-1;SharedAccessKey=hZm8lXLkdN80cKvWN8t+...). This is the recommended way to configure an IoT Hub connection and sets all required variables to the recommended values.
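The connection string is a semicolon-separated list of key=value pairs; you do not need to parse it yourself, but for illustration:

# split an IoT Hub connection string into its fields
conn = ("HostName=nf-test.azure-devices.net;DeviceId=nf-test-1;"
        "SharedAccessKey=hZm8lXLkdN80cKvWN8t+...")
fields = dict(part.split("=", 1) for part in conn.split(";"))
print(fields["HostName"], fields["DeviceId"])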

Manual Configuration

Alternatively, after creating an Azure IoT hub and provisioning a device, you may use the following configuration to connect NF to your IoT Hub:

Environment Variable          Description                              Value
AZURE_SHARED_ACCESS_KEY       Set to access key provided by Azure      <base64 value>
AZURE_SHARED_ACCESS_SIG_TTL   How long SASs are valid, in seconds      604800
MQTT_BROKER                   Hub URL                                  <hub_name>.azure-devices.net
MQTT_CAFILE                   CA to validate hub against               /etc/cacerts/BaltimoreCyberTrustRoot.crt
MQTT_CLIENT_ID                MQTT client identifier                   <device name>
MQTT_PORT                     MQTT-over-TLS port                       8883
MQTT_PROTOCOL                 Connect using TLS                        tls
SPARKPLUG_AUTO_RECOVER        Send buffered data after an outage       1
SPARKPLUG_DDATA_SET_NAME      Don't use Sparkplug's alias mechanism    1
SPARKPLUG_NAMESPACE           Conform to topic format restriction      devices/<device name>/messages/events/
SPARKPLUG_NODE_ID             Sparkplug node ID                        <device name>
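For reference, AZURE_SHARED_ACCESS_KEY and AZURE_SHARED_ACCESS_SIG_TTL feed into Azure's documented Shared Access Signature construction, sketched below; the exact resource URI NF signs is an assumption:

# standard Azure SAS token: HMAC-SHA256 over the resource URI and expiry time
import base64, hashlib, hmac, time
from urllib.parse import quote_plus

def generate_sas_token(resource_uri, b64_key, ttl=604800):
    expiry = int(time.time()) + ttl
    to_sign = quote_plus(resource_uri) + "\n" + str(expiry)
    sig = base64.b64encode(
        hmac.new(base64.b64decode(b64_key),
                 to_sign.encode("utf-8"), hashlib.sha256).digest())
    return "SharedAccessSignature sr={}&sig={}&se={}".format(
        quote_plus(resource_uri), quote_plus(sig), expiry)

# e.g. generate_sas_token("<hub_name>.azure-devices.net/devices/<device name>",
#                         "<base64 value>")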

Sparkplug over HTTP

Although Sparkplug is defined over MQTT, we also support a simplified version over HTTP for convenience in environments where an MQTT connection is not possible or troublesome.

When enabled, the Sparkplug driver will send each Sparkplug payload to an HTTP server in the body of a POST request. The request is influenced by a number of environment variables; see the service configuration page for details.
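A minimal receiver for this mode might look like the following sketch, assuming the default proto+gzip payload format; the listen port is an arbitrary choice for this example:

# pip install protobuf; accepts POSTed Sparkplug payloads
import gzip
from http.server import BaseHTTPRequestHandler, HTTPServer
import sparkplug_b_pb2

class SparkplugHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # read the request body, decompress it, and parse the protobuf
        body = self.rfile.read(int(self.headers["Content-Length"]))
        payload = sparkplug_b_pb2.Payload()
        payload.ParseFromString(gzip.decompress(body))
        print(payload)
        self.send_response(200)
        self.end_headers()

HTTPServer(("", 8080), SparkplugHandler).serve_forever()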