'Barcelona': October 2017

'California': ~June 2018

'Delhi': ~October 2018

'Edinburgh': ~April 2019

'Fuji': ~October 2019

'Geneva': ~April 2020

'Hanoi': ~October 2020

Edinburgh Release (Committed)

Backlog (April 2019)

Priorities shown in the table below are provisional and subject to review.

| Item # | Description | Priority | Target Release |
| --- | --- | --- | --- |
| 1 | Improved unit tests across services (improvements should be driven by the specific WG dev teams); see the unit test sketch after this table | 1 | Fuji |
| 2 | Improve blackbox test structure, including reorganization of the tests and better test case documentation (source: ramya.ranganathan@intel.com) | 1 | Fuji |
| 3 | New test framework (e.g. Robot or Cucumber) to support additional types of functional/blackbox and system integration tests, e.g. Device Service or system-level latency tests | 1 | Fuji |
| 4 | Dockerize the blackbox testing infrastructure for deployment simplification/flexibility | 2 | Geneva |
| 5 | System integration tests – we currently have no end-to-end tests, e.g. Device Service read data -> Core Data -> Rules Engine or Export Service; see the end-to-end sketch after this table | 2 | Geneva |
| 6 | Blackbox tests for the new Application Services microservices | 1 | Fuji |
| 7 | Blackbox tests for Device Services – Virtual, Modbus, MQTT, BACnet, OPC UA. Develop a common set of tests that can be run against all Device Services. | 1 | Fuji |
| 8 | Configuration testing – all blackbox tests currently run with a single static configuration; we should test additional EdgeX configurations | 2 | Geneva |
| 9 | Automated performance testing – API load testing (measure response time) and metrics (CPU, memory) collection for all EdgeX microservices (this work was started during the Edinburgh iteration); see the load-test sketch after this table | 1 | Fuji |
| 10 | Automated performance testing – EdgeX microservice startup times (STRETCH GOAL FOR EDINBURGH) | 1 | Fuji |
| 11 | Automated performance testing – automated system-level latency and throughput testing (e.g. device read to export, or device read to analytics to device actuation) | 1 | Fuji |
| 12 | Automated performance testing – baseline performance of service binaries outside containers | 3 | Hanoi |
| 13 | Blackbox and performance test runs against the other supported container technologies (e.g. snaps) | 3 | Hanoi |
| 14 | Automated performance testing – the ability to create summary reports/dashboards of key EdgeX performance indicators, with alerts if thresholds have been exceeded | 2 | Geneva |
| 15 | Test coverage analysis, e.g. using tools such as Codecov.io | 1 | Fuji |
| 16 | Static code analysis, e.g. using tools such as SonarQube or Coverity to identify badly written code, memory leaks and security vulnerabilities | 2 | Geneva |
| 17 | Tracing during testing, e.g. based on the OpenTracing standard using tools such as Zipkin or Jaeger | 3 | Hanoi |
| 18 | Replacement of RAML API documentation with Swagger | 2 | Geneva |
| 19 | Move the blackbox tests into the edgex-go repository. This will allow blackbox tests to be more easily added/updated by developers as part of a PR resulting from an API change. | 3 | Hanoi |
| 20 | Run the blackbox tests for an individual EdgeX microservice when a PR is issued. | 3 | Hanoi |
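As a reference for item 1, below is a minimal sketch of the table-driven style idiomatic in Go (the language of the EdgeX services). The `clamp` function is hypothetical and simply stands in for whatever service logic a WG dev team is covering.

```go
package example

import "testing"

// clamp is a hypothetical helper standing in for service logic under test.
func clamp(v, lo, hi int) int {
	if v < lo {
		return lo
	}
	if v > hi {
		return hi
	}
	return v
}

// TestClamp shows the table-driven pattern: each case is a row,
// so dev teams can extend coverage by adding rows rather than new funcs.
func TestClamp(t *testing.T) {
	tests := []struct {
		name            string
		v, lo, hi, want int
	}{
		{"below range", -1, 0, 10, 0},
		{"in range", 5, 0, 10, 5},
		{"above range", 11, 0, 10, 10},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			if got := clamp(tt.v, tt.lo, tt.hi); got != tt.want {
				t.Errorf("clamp(%d, %d, %d) = %d, want %d", tt.v, tt.lo, tt.hi, got, tt.want)
			}
		})
	}
}
```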
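For item 5, a rough sketch of one end-to-end leg expressed as a Go test. The port (48080) and paths (/api/v1/ping, /api/v1/event/count) are assumed EdgeX 1.x Core Data defaults; treat the flow as illustrative, not as the final harness design.

```go
package e2e

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"testing"
)

// coreData is the assumed EdgeX 1.x Core Data endpoint; adjust per deployment.
const coreData = "http://localhost:48080"

func get(t *testing.T, path string) string {
	resp, err := http.Get(coreData + path)
	if err != nil {
		t.Fatalf("GET %s: %v", path, err)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		t.Fatalf("GET %s: status %d", path, resp.StatusCode)
	}
	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		t.Fatalf("GET %s: read body: %v", path, err)
	}
	return string(body)
}

// TestDeviceToCoreData sketches one leg of the pipeline: with a (virtual)
// device service posting readings, verify events are arriving in Core Data.
func TestDeviceToCoreData(t *testing.T) {
	// The service must be up before we assert on data flow.
	if pong := get(t, "/api/v1/ping"); pong != "pong" {
		t.Fatalf("unexpected ping response: %q", pong)
	}
	// With the virtual device service running, the event count should grow.
	fmt.Println("event count:", get(t, "/api/v1/event/count"))
}
```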
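For item 9, a minimal load-test sketch in Go: fire a fixed number of concurrent GETs at one endpoint and report a latency distribution. The target URL and request counts are placeholders, and a real harness would also collect CPU/memory metrics, which this sketch omits.

```go
package main

import (
	"fmt"
	"net/http"
	"sort"
	"sync"
	"time"
)

func main() {
	// Placeholder target: Core Data ping on its assumed EdgeX 1.x port.
	const url = "http://localhost:48080/api/v1/ping"
	const requests = 100
	const workers = 10

	jobs := make(chan int)
	latencies := make([]time.Duration, requests)
	var wg sync.WaitGroup

	// Each worker times one request per job index; indexes are unique,
	// so writes to the latencies slice do not race.
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for i := range jobs {
				start := time.Now()
				resp, err := http.Get(url)
				if err == nil {
					resp.Body.Close()
				}
				latencies[i] = time.Since(start)
			}
		}()
	}
	for i := 0; i < requests; i++ {
		jobs <- i
	}
	close(jobs)
	wg.Wait()

	// Report a simple latency distribution.
	sort.Slice(latencies, func(a, b int) bool { return latencies[a] < latencies[b] })
	fmt.Printf("p50=%v p95=%v max=%v\n",
		latencies[requests/2], latencies[requests*95/100], latencies[requests-1])
}
```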


Fuji Release (Committed)

Geneva Planning

Backlog (November 2019)

Priorities shown in the table below are provisional and subject to review.

| Item # | Description | Priority | Target Release |
| --- | --- | --- | --- |
| 1 | Improved unit tests across services (improvements should be driven by the specific WG dev teams) | 1 | Fuji |
| 2 | Improve blackbox test structure, including reorganization of the tests and better test case documentation (source: ramya.ranganathan@intel.com) | 1 | Fuji |
| 3 | New test framework (e.g. Robot or Cucumber) to support additional types of functional/blackbox and system integration tests, e.g. Device Service or system-level latency tests | 1 | Fuji |
| 4 | Dockerize the blackbox testing infrastructure for deployment simplification/flexibility | 2 | Geneva |
| 5 | System integration tests – we currently have no end-to-end tests, e.g. Device Service read data -> Core Data -> Rules Engine or Export Service | 1 | Geneva |
| 6 | Blackbox tests for the new Application Services microservices | 1 | Geneva |
| 7 | Blackbox tests for Device Services – Virtual, Modbus, MQTT, BACnet, OPC UA. Develop a common set of tests that can be run against all Device Services. | 1 | Geneva |
| 8 | Configuration testing – all blackbox tests currently run with a single static configuration; we should test additional EdgeX configurations | 2 | Geneva |
| 9 | Automated performance testing – API load testing (measure response time) and metrics (CPU, memory) collection for all EdgeX microservices (this work was started during the Edinburgh iteration) | 2 | Geneva |
| 10 | Automated performance testing – EdgeX microservice startup times (STRETCH GOAL FOR EDINBURGH) | 1 | Fuji |
| 11 | Automated performance testing – automated system-level latency and throughput testing (e.g. device read to export, or device read to analytics to device actuation) | 1 | Fuji |
| 12 | Automated performance testing – baseline performance of service binaries outside containers | 3 | Hanoi |
| 13 | Blackbox and performance test runs against the other supported container technologies (e.g. snaps) | 3 | Hanoi |
| 14 | Automated performance testing – the ability to create summary reports/dashboards of key EdgeX performance indicators, with alerts if thresholds have been exceeded | 2 | Geneva |
| 15 | Test coverage analysis, e.g. using tools such as Codecov.io | 1 | Fuji |
| 16 | Static code analysis, e.g. using tools such as SonarQube or Coverity to identify badly written code, memory leaks and security vulnerabilities | 2 | Geneva |
| 17 | Tracing during testing, e.g. based on the OpenTracing standard using tools such as Zipkin or Jaeger | 3 | Hanoi |
| 18 | Replacement of RAML API documentation with Swagger | 1 | Geneva |
| 19 | Move the blackbox tests into the edgex-go repository. This will allow blackbox tests to be more easily added/updated by developers as part of a PR resulting from an API change. | 3 | Hanoi |
| 20 | Run the blackbox tests for an individual EdgeX microservice when a PR is issued. | 3 | Hanoi |
| 21 | Edinburgh/Fuji backward compatibility testing | 2 | Geneva |