...

  • Core Data – tests that measure the read/write performance of Core Data and its underlying database under different load conditions. For example, varying the number of concurrent clients writing to Core Data with different payload sizes and measuring the time it takes to issue each write call, or varying the number of concurrent clients reading different-sized collections of events from Core Data and measuring the time it takes to issue each read call (a minimal harness along these lines is sketched after this list).
  • Support Logging – tests that measure log write and log query performance of the Logging Services under different load conditions. For example, measure the time it takes to write a log message for a varying number of Logging (write) clients and varying payload sizes, or measure the time it takes to query a log message for a varying number of Logging (read) clients, varying payload sizes, and a varying number of log entries already present in the system.

          (MKB: these should be do-able in isolation of EdgeX given it is a separate service?)

  • Support Notifications – tests that measure the time from when a Notification is sent to the service to the point the message has been pushed to all registered receivers, under different load conditions. For example, measure the fan-out performance where one publisher sends a Notification to the service and a varying number of clients subscribe to receive it, or the fan-in performance where a varying number of concurrent publishers send Notifications to the service and a single client subscribes to receive all of them.  (MKB: these should be do-able in isolation of EdgeX given it is a separate service?)
  • Core Command – tests that measure the time it takes to issue GET and PUT commands to a device/sensor via the Command Service under different load conditions. For example, measure the time it takes for a varying number of concurrent clients to each issue a GET command to read a property value from a device, or for a varying number of concurrent clients to set a property on a device with a PUT command.
  • Rules Engine – do we need dedicated Rules Engine tests?
  • Export Services – do we need dedicated Export Service performance tests? For example, measure the performance when writing to a specific Cloud instance (e.g. Google IoT Core)?
  • Device Services – for baseline and regression test purposes many of the general performance tests outlined above may be able to be performed using a Virtual Device Service. However, it is also necessary and desirable to be able to repeat at least a subset of these tests with real Device Services (e.g. Modbus, MQTT or BACnet Device Services), perhaps connected to real devices or minimally connected to a simulator. The performance of each individual Device Service will be implementation specific.
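
As an illustration of what such a load test might look like, the following is a minimal sketch in Go of a concurrent-client write-latency harness against Core Data. The endpoint URL, event payload, and client count are placeholders, not part of an actual EdgeX test suite:

    package main

    import (
        "bytes"
        "fmt"
        "net/http"
        "sync"
        "time"
    )

    // coreDataURL is a placeholder; point it at the Core Data event endpoint
    // of the instance under test.
    const coreDataURL = "http://localhost:48080/api/v1/event"

    func main() {
        clients := 20 // vary: 1, 10, 20, 50, 100
        payload := []byte(`{"device":"virtual-1","readings":[{"name":"temperature","value":"21.5"}]}`)

        latencies := make([]time.Duration, clients)
        var wg sync.WaitGroup
        for i := 0; i < clients; i++ {
            wg.Add(1)
            go func(n int) {
                defer wg.Done()
                start := time.Now()
                resp, err := http.Post(coreDataURL, "application/json", bytes.NewReader(payload))
                if err == nil {
                    resp.Body.Close()
                }
                latencies[n] = time.Since(start) // time to issue one write call
            }(i)
        }
        wg.Wait()

        for n, d := range latencies {
            fmt.Printf("client %d: %v\n", n, d)
        }
    }

The same pattern (vary the client count and payload, record per-call latency) applies to the Logging, Notifications and Command tests above.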

Requirements

  1. Automated tests – the performance tests must be able to be integrated into the EdgeX build/test (CI pipeline) infrastructure and run on demand. (MKB: why only performance tests here?)
  2. Standalone tests - the performance tests must be able to be run standalone on a developer’s desktop without dependencies on the EdgeX build/test infrastructure. (MKB: why only performance tests here?)
  3. Results logging and display – the results of the performance tests must be recorded in a format that enables a set of graphical performance test curves to be produced and easily displayed in a web browser. This includes displaying historical performance trends so that any regression in EdgeX performance can be easily identified (see the results record sketch after this list).
  4. The performance tests should be able to be run on different CPU (ARM 32 and 64 bit, x86 64 bit) and OS (Linux and Windows) combinations.
  5. Performance tests should be able to be run against both dockerized (default) and un-dockerized versions of EdgeX.
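
To support requirement 3, each measurement could be emitted as one machine-readable record (e.g. JSON Lines) that a browser-based charting tool can load to plot performance curves and historical trends. A sketch in Go; the field names are illustrative, not a defined EdgeX schema:

    package main

    import (
        "encoding/json"
        "os"
        "time"
    )

    // Result is one performance measurement; appending these to a results file
    // over successive builds gives the data needed for trend/regression charts.
    type Result struct {
        Test        string        `json:"test"`        // e.g. "core-data-write"
        Timestamp   time.Time     `json:"timestamp"`
        Platform    string        `json:"platform"`    // e.g. "linux/arm64"
        Clients     int           `json:"clients"`
        PayloadSize int           `json:"payloadSize"` // bytes
        Latency     time.Duration `json:"latencyNs"`
    }

    func main() {
        enc := json.NewEncoder(os.Stdout)
        enc.Encode(Result{
            Test:      "core-data-write",
            Timestamp: time.Now().UTC(),
            Platform:  "linux/amd64",
            Clients:   20,
            Latency:   12 * time.Millisecond,
        })
    }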

Test Cases

Footprint

  1. Measure the file size in bytes of each EdgeX Microservice executable.
  2. Measure the file size in bytes of each EdgeX Microservice docker image.
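
Both footprint measurements can be scripted. A sketch in Go, assuming the docker CLI is available; the binary path and image name are placeholders:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        // 1. Executable size in bytes.
        if info, err := os.Stat("/path/to/core-data"); err == nil { // placeholder path
            fmt.Printf("binary size: %d bytes\n", info.Size())
        }

        // 2. Docker image size in bytes, via `docker image inspect`.
        out, err := exec.Command("docker", "image", "inspect",
            "--format", "{{.Size}}", "edgexfoundry/docker-core-data-go").Output() // placeholder image name
        if err == nil {
            fmt.Printf("image size: %s bytes\n", strings.TrimSpace(string(out)))
        }
    }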

...

  1. Measure the memory (RAM) in bytes consumed by each EdgeX microservice in their idle state (running but no data flowing through the system).
  2. Measure the memory (RAM) in bytes consumed by each EdgeX microservice with [1, 10, 20, 50, 100] devices, reading an event with [1, 5, 10, 50, 100] readings at a sample rate of [100 ms, 1 sec, 10 sec, 30 sec, 60 sec]. (MKB: payload size? payload mix?)
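
For the dockerized deployment, the idle and loaded memory figures can be sampled with docker stats. A sketch in Go; the container name is a placeholder:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Sample current memory usage of one service container with `docker stats`.
        out, err := exec.Command("docker", "stats", "--no-stream",
            "--format", "{{.Name}}: {{.MemUsage}}", "edgex-core-data").Output() // placeholder container name
        if err != nil {
            fmt.Println("docker stats failed:", err)
            return
        }
        fmt.Println(strings.TrimSpace(string(out)))
    }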

Service Startup

  1. Measure the time it takes to start up each EdgeX microservice; this includes the time it takes to create the docker container and configure and initialize the service.
  2. Measure the time it takes to start up each EdgeX microservice with existing docker containers and initialized data.
  3. Measure the time it takes to start up the complete set of EdgeX microservices required to enable data to be read from a device and exported.
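
Startup time can be measured by starting the service container and polling its ping/health endpoint until it answers. A sketch in Go, assuming a /api/v1/ping style endpoint; the URL and container name are placeholders:

    package main

    import (
        "fmt"
        "net/http"
        "os/exec"
        "time"
    )

    func main() {
        pingURL := "http://localhost:48080/api/v1/ping" // placeholder ping endpoint
        start := time.Now()

        // Start the service container (assumes it already exists, i.e. test case 2).
        if err := exec.Command("docker", "start", "edgex-core-data").Run(); err != nil { // placeholder name
            fmt.Println("docker start failed:", err)
            return
        }

        // Poll until the service answers, then report elapsed startup time.
        for {
            resp, err := http.Get(pingURL)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    break
                }
            }
            if time.Since(start) > 2*time.Minute {
                fmt.Println("timed out waiting for service")
                return
            }
            time.Sleep(100 * time.Millisecond)
        }
        fmt.Printf("startup time: %v\n", time.Since(start))
    }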

...

  1. Measure the throughput that can be achieved by reading an event with [1, 5, 10, 50, 100] readings from [1, 10, 20, 50, 100] devices (virtual) at a sample rate of [100 ms, 1 sec, 10 sec, 30 sec, 60 sec] and exporting it (via the Export Service). (MKB: payload size?)
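
One way to measure end-to-end export throughput is to register a local HTTP endpoint as the export destination and count the events it receives. The sketch below (in Go) assumes the Export Service can be configured to POST exported events to an arbitrary HTTP URL; the port is a placeholder:

    package main

    import (
        "fmt"
        "net/http"
        "sync/atomic"
        "time"
    )

    func main() {
        var received int64

        // Count every exported event POSTed to this sink.
        http.HandleFunc("/sink", func(w http.ResponseWriter, r *http.Request) {
            atomic.AddInt64(&received, 1)
            w.WriteHeader(http.StatusOK)
        })

        // Report throughput (events/sec) every 10 seconds.
        go func() {
            for range time.Tick(10 * time.Second) {
                n := atomic.SwapInt64(&received, 0)
                fmt.Printf("throughput: %.1f events/sec\n", float64(n)/10.0)
            }
        }()

        http.ListenAndServe(":9999", nil) // placeholder port; register this URL as the export destination
    }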

Core Data

  1. Measure the time it takes to write an event with [1, 100] readings from [1, 10, 20, 50, 100] devices (virtual or emulated) to Core Data.
  2. Measure the time it takes to read an event by ID with [1, 100] readings from [1, 10, 20, 50, 100] clients from Core Data.
  3. Measure the time it takes to read a set of events and readings with a complex query (e.g. all events for a given device, created within a time range and with a specific reading type attached (e.g. temperature)).
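
A sketch of test case 1 in Go, building an event with a configurable number of readings and timing the write to Core Data. The endpoint and payload shape are assumptions, not the confirmed Core Data API:

    package main

    import (
        "bytes"
        "encoding/json"
        "fmt"
        "net/http"
        "time"
    )

    type reading struct {
        Name  string `json:"name"`
        Value string `json:"value"`
    }

    type event struct {
        Device   string    `json:"device"`
        Readings []reading `json:"readings"`
    }

    func main() {
        const coreDataURL = "http://localhost:48080/api/v1/event" // placeholder endpoint

        for _, n := range []int{1, 100} { // vary readings per event
            e := event{Device: "virtual-1"}
            for i := 0; i < n; i++ {
                e.Readings = append(e.Readings, reading{Name: fmt.Sprintf("r%d", i), Value: "42"})
            }
            body, _ := json.Marshal(e)

            start := time.Now()
            resp, err := http.Post(coreDataURL, "application/json", bytes.NewReader(body))
            if err == nil {
                resp.Body.Close()
            }
            fmt.Printf("%d readings: %v\n", n, time.Since(start))
        }
    }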

...