PTP Error Threshold and Proper Streaming Stop/Restart Procedure

How many nanoseconds off does PTP need to be before stopping data streaming and re-starting? Also, what is the proper procedure for stopping and re-starting openDAQ data streaming?

Hi Jim,
I am working on a diagram to help with understanding the implementation of PTP clock slewing.
The main point is that for the testing scenario, the concept of stopping and re-starting data acquisition based on a threshold is valid. However, for permanent long-term monitoring, that concept is not appropriate. The acquisition clocks need to stay in sync with the PTP master continuously, and a record of the current offset from the master should be kept with each result, so that data acquisition does not have to stop and re-start.
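The continuous-monitoring approach above can be sketched as follows. This is an illustrative structure only (the names `AcquiredBlock` and `acquire_block` are hypothetical, not part of any openDAQ or CT3 API): rather than stopping acquisition on a threshold, each acquired block simply carries the PTP offset that was current when it was sampled.

```python
from dataclasses import dataclass

@dataclass
class AcquiredBlock:
    timestamp_ns: int      # device timestamp of the first sample in the block
    ptp_offset_ns: int     # offset from the PTP master at acquisition time
    samples: list          # the measured values

def acquire_block(timestamp_ns: int, samples: list, current_ptp_offset_ns: int) -> AcquiredBlock:
    """Tag a block with the offset in force at acquisition time.

    Acquisition never stops; the offset is recorded alongside the data so
    sync quality can be judged in post-processing.
    """
    return AcquiredBlock(timestamp_ns, current_ptp_offset_ns, samples)

# Example: a block acquired while the clock was 37 ns behind the master.
block = acquire_block(1_700_000_000_000, [0.1, 0.2, 0.3], current_ptp_offset_ns=-37)
```

The design choice here is that sync quality becomes a property of the data rather than a gate on acquisition, which is what permanent monitoring requires.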

Thanks Andrew. I look forward to your diagram. Yes, it is best if the data streaming is not re-started. When testing our CT3 electronics in a tight network with a single PTPv2 grandmaster, the clocks are kept in sync within +/- 50 nanoseconds. I'm hoping that is the normal customer use case and PTPv2 sync error for our product. Of course, that is dependent on the quality of the PTPv2 master and network. For us, it would not be convenient to send the clock offset with each data sample, and hopefully it would not be necessary if our clock offsets are low. We output up to 44 channels at 500, 1000 or 2000 samples per second. We use a single time domain for the 44 synchronized sampled channels, so one offset would cover each set if we had to send it.

Regarding the procedure for stopping and re-starting openDAQ data streaming, this feature that is currently in our backlog might come in handy: Packet Flush.

At the moment you can also disconnect and re-connect. However, someone may correct me on that. I will let you know if I find a more explicit set of instructions.

Thank you Dusan. I'm thinking that, with our CT3 device acting as a streaming server, the best we could do is stop the connection if we determine that the PTP offset is greater than we would want to accept for our Wheel Force Transducer data output. I think that all re-starting would need to be done on the client side. Is that correct? Our main interest is being openDAQ compatible with Dewesoft/HBK data acquisition hardware. In that case, our CT3 would most likely need to send the PTP offset metadata for the data acquisition system to determine synchronization accuracy and potentially perform a reset. I'm making a lot of assumptions here, so please correct me if I'm wrong. Thanks!
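The server-side policy described here can be sketched as a single decision function (the threshold value and function name are illustrative assumptions, not CT3 behavior): the server can only decide to drop the connection; any restart has to come from the client.

```python
# Assumed acceptable bound; the +/- 50 ns figure comes from the tight-network
# test mentioned earlier in the thread and would be configurable in practice.
MAX_ACCEPTABLE_OFFSET_NS = 50

def should_drop_connection(ptp_offset_ns: int) -> bool:
    """Server-side check: stop streaming when the PTP offset is out of bounds.

    The server cannot re-start the client's stream; it can only refuse to
    keep serving data it considers out of sync.
    """
    return abs(ptp_offset_ns) > MAX_ACCEPTABLE_OFFSET_NS
```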

Good morning Jim,

Thanks for your input. I agree with the following:

In that case, our CT3 would most likely need to send the PTP offset metadata for the data acquisition system to determine synchronization accuracy and potentially perform a reset, with the caveat that the reset needs to be requested by the openDAQ Client.

In my view, the following logic applies if the Multi Reader is used on the Client Side:

  • The openDAQ Client starts streaming.
  • The openDAQ's Multi Reader will try to align the data to a common time grid. For this, all signals need to have the same Time Source / Grand Master.
  • If the Multi Reader is successful, it assumes that the signals are synchronously received. The metadata of the PTP accuracy can be stored in parallel for post-processing.

Use-Case: The Client observes the metadata and decides to restart the measurement. Then:

  • The Client stops streaming.
  • The Client restarts streaming (packet flush executed).
  • The Multi Reader starts again to find a common start point. For this, the PTP accuracy must be as good as possible so that the Multi Reader can start. We can discuss if the Multi Reader needs an option to specify which sync accuracy is acceptable.
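The restart flow above can be simulated with a minimal sketch. All names here are hypothetical (this is not the openDAQ API); the Multi Reader is modeled as only finding a common start point when the sync accuracy is within an acceptable bound, which is exactly the configurable option proposed in the last bullet.

```python
# Assumed configurable threshold for the Multi Reader to accept alignment.
ACCEPTABLE_SYNC_ACCURACY_NS = 100

def multi_reader_can_align(sync_accuracy_ns: int) -> bool:
    """Model: alignment succeeds only with sufficiently good PTP accuracy."""
    return sync_accuracy_ns <= ACCEPTABLE_SYNC_ACCURACY_NS

def restart_measurement(sync_accuracy_ns: int) -> list:
    """Simulate the client-side restart sequence from the bullets above."""
    log = []
    log.append("client stops streaming")
    log.append("client restarts streaming (packet flush executed)")
    if multi_reader_can_align(sync_accuracy_ns):
        log.append("multi reader found common start point")
    else:
        log.append("multi reader failed: sync accuracy too poor")
    return log
```

Making the accuracy bound an explicit parameter of the Multi Reader, rather than an implicit property of the network, is what would let a client decide up front whether a restart is even worth attempting.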

To conclude - your CT3 and other openDAQ devices need to send the PTP accuracy as metadata in the stream. How this will look needs further discussion in the synchronization working group.


A topic for the metadata already exists here: Link. Feel free to share your ideas and even potential requirements.

Thanks Nils,

Yes, that makes sense. I was initially hoping to avoid sending the PTP offsets because I originally thought they would need to be sent with each sample set. Now I think they should only need to be sent at the rate of the PTP offset corrections in the streaming server, which I believe is commonly about once per second. Even if the PTP corrections occurred more frequently, their rate should be nowhere near the output sample rates of the CT3. That should be doable as a unique message to the client for assessing the sync quality of the provided data stream.
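A quick back-of-the-envelope check supports this. Using the rates stated in the thread (44 channels, up to 2000 samples per second, offset corrections at roughly 1 Hz, which is an assumed typical PTP servo rate), the offset metadata is vanishingly rare compared to the data itself:

```python
channels = 44            # CT3 synchronized channels
sample_rate_hz = 2000    # highest CT3 output rate from the thread
offset_update_hz = 1     # assumed typical PTP correction rate (~once/second)

# Samples produced per second across all channels.
samples_per_second = channels * sample_rate_hz   # 44 * 2000 = 88 000

# Samples covered by each offset message at a 1 Hz update rate.
samples_per_offset_message = samples_per_second / offset_update_hz
```

So one offset message covers on the order of tens of thousands of samples, which is why sending it as a separate low-rate message to the client is cheap.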