...

  • Global configuration - there is a lot of configuration duplication across services, which means many files must be touched when only one item needs to be changed (examples: topic name, log level).  The open issues: where would the common configuration go, and how would it be made available?  Would it work with or without Consul, and with or without security?  Would it work the same in containerized, snap, and non-containerized environments?  Per Kamakura planning (Nov 21), this issue is backlogged for now, but should be addressed before the next major release (v3).
  • There is a general concern that EdgeX is too big for some edge environments.  This is largely because enterprise/cloud products are used for security and configuration/registry.  Are there alternative products?  Could the open source groups supplying Consul, Kong, and Vault be partners in developing edge-sized versions of their products?  It is estimated that we use only about 10% of their functionality.  Per the Kamakura planning meeting (Nov 21), this issue was backlogged for a future release.  A good first project for those new to EdgeX: research alternative products or ways to reduce the size of the existing large services.
  • The SMA and the system management executor are deprecated.  The SMA's features are only supported in some environments.  Most deployment orchestration tools/platforms, or even the OS, can provide memory and CPU utilization for the services.  Start/stop/restart can also be handled by other means when running containerized, for example.  Configuration information is available through Consul when it is running.  So what replaces the SMA going forward?  Does anything replace it?  What features need to be provided by EdgeX directly versus by the adopter's choice of management system?
  • There are requests from adopters to expand support-notifications, namely to send notifications via SMS, WebSockets, or a message protocol (like MQTT or Redis Pub/Sub).  There is also a request to make using/sending notifications easier.  Today, a developer must write code directly into a service to be able to send a notification.  Is there a way to do some notifications by configuration?  At the very least, better examples need to be provided on how to use the notification service.  Also needed is a way for the rules engine to trigger notifications.  Per the Kamakura planning meeting (Nov 21), this enhancement has been deferred to a future release.
  • ARM 32 support - MongoDB is not available on ARM 32; CI/CD infrastructure is required
  • Full Windows development support - ZeroMQ libraries do not (easily) allow compiling and developing all of EdgeX on a Windows platform.
  • Alternate (from Docker/Docker Compose & Snappy) deployment and orchestration options for EdgeX.  Today, while the community and users of EdgeX are free to deploy EdgeX as they see fit based on their use case/needs, the EdgeX community provides Docker container images and a docker-compose.yml file to help get and deploy EdgeX.  Going forward, it is anticipated that the community will seek and even need alternate means of deploying, managing, and orchestrating the EdgeX micro services.  Options include using Kubernetes, Swarm, Mesos, Nomad, to name a few.  Additional support may be offered by system management tools or facilities.
  • Changes to configuration in Consul are immediately made available to applications.  However, an application must implement a “watcher” to see a configuration change and then call to get that change (the core-config-watcher in GitHub was meant to demonstrate how to do this but was never implemented or used in any service).  Further, even if micro services are made more dynamic in watching for and using new/updated configuration, some configuration changes would only work after a restart (ex: the REST endpoint port number of the micro service).  Is there a way to signify which configuration is allowed to be changed at runtime and which can only take effect after a restart of the service?
  • Facilitate command information being supplied and known to north-side systems.  For example, how would we provide Azure IoT with the commands that it or a cloud solution could use to actuate on devices?  Azure IoT or another north-side system could make a request to the command service for the information, but this requires the information to be pulled rather than pushed.
  • Code signing – how to certify the integrity of the system
  • Artifact signing (exe, JAR, Docker containers, etc.)
  • How to secure EdgeX’s devices.  How do we make sure a sensor or device is safe to accept data from?  How do we onboard/provision a device securely?
  • Explore potential use of Hyperledger.  How can an audit of data collected at the edge be established and tracked (given the limited resources at the edge)?

  • Protect data at rest (encrypt the database)
  • Protect data in motion (encrypt data passing between services or outside of EdgeX)
  • Support privacy concerns (GDPR, HIPAA, etc.)
  • Improving EdgeX resiliency in the face of issues or non-availability of some resources/services/etc. (typically for core and above services and not device services)
    • Ensure all micro services follow the 12-factor app methodology (see https://12factor.net/)
    • Allow services to be load balanced
    • Allow services to fail over
    • Allow for dynamic workload allocation
    • Allow services to live anywhere and be moved without requiring lots of configuration changes in other services
    • Allow services to be distributed across hosts - and across API gateways (requiring service to service communication protection)
  • Support truly distributed microservices
    • Allow services to run on multiple host machines
    • Secure distributed EdgeX with reverse proxy 
    • Cross EdgeX instance command actuation (ex:  device X on EdgeX box A triggers action on device Y on EdgeX box B)
    • Front a collection of duplicate microservices with a load balancer (allow for the microservice copies to scale up or down based on load); allow multiple instances of any microservice (for future load balancing and scaling efforts - today only single instances are allowed)
  • Develop a test environment/playground to test high-availability and distributed service functionality.
  • Support enrichment functions (an EAI concept) in export services (or application services).  Allow additional data or information to be added to the flow of sensor data to the northbound side.  This might be information about the sensor/device that captured it or information about the commands to actuate back down on a sensor/device.
  • Support additional northbound formats
    • Haystack
    • OPC UA
  • Support additional southside connectors
    • Profinet/Profibus
    • CANBus
    • LoRa
    • IoTivity
    • Zigbee
    • Z-Wave
  • Provide more tooling for the device service SDKs.  The original Java DS SDK was command line driven.  In the future, generating a new service from an SDK can/should be done from tools such as IntelliJ.
  • Allow EdgeX (the entire platform) to be multi-tenant.  Data and services can be attributed to individual clients and protected from one another.
  • Downsampling: the device service may receive new, unsolicited readings from a device (e.g., in a pub/sub type of scenario).  In this case, there should be a setting to specify whether we accept all readings or downsample because the source is pumping data too fast.  This is a very common scenario when dealing with high-frequency sensor packages.
  • Data Transformation: This is something we have always considered as a potential in EdgeX – a filter, or even a small transformation engine, between device services and core data.  Not a full-blown export, but something that serves a similar purpose and is common across services.  We have even thought about making it some type of quick CEP/rules engine feeder for those decisions that can't wait to go through the rest of the layers.
  • While REST will not go away (a REST API will still exist around each micro service), there may be a need to implement point-to-point messaging between select services, or to adopt some type of message bus uniformly across all of EdgeX to support messaging among services.  Messaging provides more asynchronous communications, typically lower latency, and better (or more finely tuned) communication quality of service (QoS).  Are there places where messaging might be more appropriate (like between core data and export distro today)?  Would a use case dictate that an alternate messaging infrastructure be used among all services, with an underlying message bus to support it?  Would alternate protocols (SNMP, WebSockets, etc.) be desired in some use cases?  For the Delhi release, some alternate communication requirements, design, and early message implementation experimentation is likely to occur.
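
The downsampling setting described above could be as simple as a "keep every Nth reading" counter applied before readings are pushed north. A minimal sketch — the `Reading` and `Downsampler` types are hypothetical, not part of any EdgeX SDK (a production version might instead rate-limit by time window):

```go
package main

import "fmt"

// Reading is a stand-in for a device reading (hypothetical type).
type Reading struct {
	Device string
	Value  float64
}

// Downsampler keeps one reading out of every N, dropping the rest.
// A device service could apply this before sending readings to core
// data when a pub/sub source produces data faster than needed.
type Downsampler struct {
	N     int // keep every Nth reading; N<=1 keeps everything
	count int
}

// Accept reports whether this reading should be kept.
func (d *Downsampler) Accept() bool {
	if d.N <= 1 {
		return true
	}
	d.count++
	if d.count >= d.N {
		d.count = 0
		return true
	}
	return false
}

func main() {
	ds := &Downsampler{N: 3} // keep every third reading
	for i := 1; i <= 9; i++ {
		r := Reading{Device: "sensor-1", Value: float64(i)}
		if ds.Accept() {
			fmt.Println("keep", r.Value)
		}
	}
}
```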
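
The small transformation engine mentioned in the Data Transformation bullet above could take the shape of a chain of filter/transform functions between a device service and core data. The sketch below assumes nothing from EdgeX — the `Event` type, `Transform` signature, and `Pipeline` helper are all illustrative:

```go
package main

import "fmt"

// Event is a stand-in for a sensor event (hypothetical type).
type Event struct {
	Device string
	Value  float64
}

// Transform takes an event and returns a possibly modified event;
// returning ok=false drops the event from the pipeline.
type Transform func(Event) (out Event, ok bool)

// Pipeline runs each transform in order, stopping if one drops the event.
func Pipeline(e Event, stages ...Transform) (Event, bool) {
	for _, t := range stages {
		var ok bool
		if e, ok = t(e); !ok {
			return e, false
		}
	}
	return e, true
}

func main() {
	// Drop out-of-range values, then convert Celsius to Fahrenheit.
	inRange := func(e Event) (Event, bool) { return e, e.Value > -40 && e.Value < 125 }
	toF := func(e Event) (Event, bool) { e.Value = e.Value*9/5 + 32; return e, true }

	if out, ok := Pipeline(Event{"thermo-1", 25}, inRange, toF); ok {
		fmt.Println(out.Device, out.Value) // 25C converted to 77F
	}
}
```

A drop stage at the front of such a chain is also where a quick CEP/rules feeder could tap the stream before the rest of the layers see it.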

    • Should we allow configuration properties to be overridden by command-line-provided properties (as Java allows/provides) for our Go services?
  • Define security testing and implement the necessary harness to automate security testing. 
    • Port scanning (to ensure something hasn't been accidentally left open)
    • Check for weak passwords
    • Test positive and negative access based on access control lists
  • Improve binary data support
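
The command-line override question above maps naturally onto Go's standard `flag` package. One sketch (the `Config` fields and flag names are made up for illustration): load configuration first, then apply only those flags the user explicitly passed, which mirrors how Java system properties override defaults:

```go
package main

import (
	"flag"
	"fmt"
)

// Config is a hypothetical slice of service configuration.
type Config struct {
	LogLevel string
	Port     int
}

// applyOverrides lets command-line flags override values already
// loaded from the configuration file/provider. Only flags the user
// actually passed are applied (checked via fs.Visit), so unset flags
// never clobber configured values with defaults.
func applyOverrides(cfg *Config, args []string) error {
	fs := flag.NewFlagSet("service", flag.ContinueOnError)
	logLevel := fs.String("loglevel", cfg.LogLevel, "override log level")
	port := fs.Int("port", cfg.Port, "override REST port")
	if err := fs.Parse(args); err != nil {
		return err
	}
	fs.Visit(func(f *flag.Flag) { // visits only flags explicitly set
		switch f.Name {
		case "loglevel":
			cfg.LogLevel = *logLevel
		case "port":
			cfg.Port = *port
		}
	})
	return nil
}

func main() {
	cfg := Config{LogLevel: "INFO", Port: 48080} // as loaded from TOML/Consul
	applyOverrides(&cfg, []string{"-loglevel", "DEBUG"})
	fmt.Printf("%+v\n", cfg) // Port keeps its configured value
}
```

In a real service the `args` slice would be `os.Args[1:]`; taking it as a parameter keeps the function testable.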

    • Local edge analytics may be fed binary data and create intelligence from it to allow for actuation at the edge based on that data (example: examine an image and detect the presence of a person of interest).

    • Support alternate message formats for service-to-service exchange (Protobuf, XML, etc.)
  • Wrap any open source software with an API, or provide a common/replaceable library for services to use to communicate with the third-party infrastructure (Consul, Vault, Kong, the database, etc.).  This allows easier replacement or substitution of the infrastructure in the future.
  • Allow for more messaging between services to support more async communication, better QoS, and durability of information.  This could be supported by MQTT, AMQP, 0MQ, DDS, NATS, gRPC, WebSockets, or other alternate technology.  The interfaces to the communications between services should allow for easy replacement.
  • Data visualization - how can sensor data be better visualized and analyzed at the edge?
  • Automate the generation of the API documents from the code (versus manual creation of the API documentation today)
  • Many of the original device services were created with driver stacks that are not fit for purpose on all platforms, are no longer supported, are not homogeneous in their makeup (e.g., some parts in Java while other elements are in Python), or use stacks that are not considered the best option for the protocol today.  The BLE and BACnet device services fall under this categorization.
  • Several of the device services were created to prove, conceptually, how to connect to a device using that protocol, but it may not be a full implementation of the protocol API.  For example, the SNMP device service implements enough to drive a Patlite and a few other devices, but does not understand all of SNMP.
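
The wrap-with-an-API bullet above amounts to putting a small interface between services and each piece of third-party infrastructure. A sketch for the secret store case — the `SecretStore` interface and both implementations are hypothetical, not EdgeX APIs; the point is that a Vault-backed type and a lightweight in-memory type satisfy the same contract, so swapping the infrastructure never touches service code:

```go
package main

import (
	"errors"
	"fmt"
)

// SecretStore abstracts the secret-management infrastructure so
// Vault (or a lighter-weight alternative) can be swapped without
// touching service code. The interface is an illustrative sketch.
type SecretStore interface {
	GetSecret(path, key string) (string, error)
	StoreSecret(path, key, value string) error
}

// memStore is an in-memory implementation, useful for tests and for
// deployments that opt out of running Vault entirely.
type memStore struct {
	data map[string]string
}

func newMemStore() *memStore { return &memStore{data: map[string]string{}} }

func (m *memStore) StoreSecret(path, key, value string) error {
	m.data[path+"/"+key] = value
	return nil
}

func (m *memStore) GetSecret(path, key string) (string, error) {
	v, ok := m.data[path+"/"+key]
	if !ok {
		return "", errors.New("secret not found: " + path + "/" + key)
	}
	return v, nil
}

func main() {
	// A Vault-backed implementation would plug in here unchanged.
	var store SecretStore = newMemStore()
	store.StoreSecret("core-data", "db-password", "s3cret")
	pw, _ := store.GetSecret("core-data", "db-password")
	fmt.Println("retrieved:", pw)
}
```

The same shape applies to the registry (Consul), the API gateway (Kong), and the database.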

  • Allow services to respond to native init processes (example: systemd).  The functionality added with the outlined system management APIs should help facilitate systemd calls.

  • System management - storing system metrics locally

  • System management - setting configuration (finding ways to differentiate read-only versus read-write props)

  • System management - actuation based on metric change (a "rules engine" for control plane data)

  • System management - add alerts and notifications (service down, metric above threshold, etc.)
  • Use of QoS and/or blockchain to prioritize resource usage by certain services (which might be detected by System Management metric collection)
  • Automatic code formatting in the CI/CD pipeline
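
The metric-driven actuation and alerting bullets above describe what is essentially a small rules engine over control plane data. A minimal sketch of one rule type — the `ThresholdRule` name and metric names are illustrative only; the fired/re-arm latch keeps a metric hovering above the limit from generating an alert on every sample:

```go
package main

import "fmt"

// ThresholdRule fires an alert when a collected metric crosses a
// limit -- a minimal sketch of a "rules engine" for control plane
// data. Names are illustrative, not EdgeX APIs.
type ThresholdRule struct {
	Metric string
	Limit  float64
	fired  bool
}

// Evaluate returns an alert message the first time the metric
// exceeds the limit, and re-arms once it drops back below.
func (r *ThresholdRule) Evaluate(value float64) (string, bool) {
	if value > r.Limit && !r.fired {
		r.fired = true
		return fmt.Sprintf("ALERT: %s=%.1f exceeds %.1f", r.Metric, value, r.Limit), true
	}
	if value <= r.Limit {
		r.fired = false
	}
	return "", false
}

func main() {
	rule := &ThresholdRule{Metric: "core-data.cpu", Limit: 80}
	for _, v := range []float64{40, 85, 90, 60, 95} {
		if msg, ok := rule.Evaluate(v); ok {
			fmt.Println(msg) // would be handed to support-notifications
		}
	}
}
```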

  • Additional language SDKs
    • Device Service SDKs in other than Go or C
    • Application functions SDK in other than Go
    • Tooling for SDKs (CLI, JetBrains or Eclipse plugins, etc.)
  • Tooling / UI for Device Profile creation
  • System Management Agent to store configuration as a config provider in a lightweight configuration store (as a replacement for Consul).
  • Secure service-to-service communications
  • Produce deployment artifacts that contain multiple services (e.g. a single core service docker container versus core, metadata and command services)
  • Produce service executables that combine services (e.g. create a single core executable that is core, metadata and command all in one) via the build/make process.

...