correct idiom for time-aligned network tables? #6888
Hey, I spent a while measuring our camera latency, which varies depending on the alignment of the sensor row loop, the coprocessor CPU loop, and the RIO main loop. I'd like to get the end-to-end latency to be correct, and I imagined the time alignment of NetworkTables would help. Before I get too far down that road, I just wanted to verify (@PeterJohnson ?) what I'm supposed to do. In the (Python) coprocessor client, I have an estimate of the "age" of a pose:

```python
publisher.set(pose, ntcore._now() - age_in_microsec)
```

and in my (Java) RIO listener, I go:

```java
void listen(NetworkTableEvent event) {
  var value = event.valueData.value;
  var time = value.getServerTime() / 1e6;
  var pose = someDecoder(value.getRaw());
  poseEstimator.addVisionMeasurement(pose, time);
}
```

So the "age" of the signal on the coprocessor is translated into the server time base. Is that right? I think the key question is whether
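For context, here is roughly how such a listener could be wired up on the RIO side. This is only a sketch: the topic name "/vision/pose", the "raw" type string, and the class name are invented for illustration, and the decode / addVisionMeasurement step is left as a comment because it depends on the pose encoding.

```java
import java.util.EnumSet;

import edu.wpi.first.networktables.NetworkTableEvent;
import edu.wpi.first.networktables.NetworkTableInstance;
import edu.wpi.first.networktables.PubSubOption;
import edu.wpi.first.networktables.RawSubscriber;

public class VisionTimeExample {
  private final RawSubscriber poseSub;

  public VisionTimeExample(NetworkTableInstance inst) {
    // Subscribe to the raw pose topic published by the coprocessor.
    // "/vision/pose" and the "raw" type string are placeholders.
    poseSub = inst.getRawTopic("/vision/pose").subscribe("raw", new byte[0],
        PubSubOption.sendAll(true), PubSubOption.keepDuplicates(true));

    // Fire listen() for every value received on that subscriber.
    inst.addListener(poseSub, EnumSet.of(NetworkTableEvent.Kind.kValueAll), this::listen);
  }

  private void listen(NetworkTableEvent event) {
    var value = event.valueData.value;
    // Timestamp attached by the publisher, expressed in the server time base,
    // converted from microseconds to seconds.
    double time = value.getServerTime() / 1e6;
    byte[] raw = value.getRaw();
    // Decode `raw` into a pose and call addVisionMeasurement(pose, time) here.
  }
}
```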
Replies: 4 comments 7 replies
Yes, that's the correct approach.

Unrelated to the original question: keep in mind that if you're using a listener callback, it will be called from a separate thread, so you'll want to synchronize accesses to the pose estimator. It's usually easier to either use
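One way to keep everything on the main robot thread is to skip the callback entirely and drain the subscriber's queue from robotPeriodic via readQueue(). A sketch of that, with invented names, assuming the pose arrives on a raw topic (field names follow the Timestamped* value classes as I recall them):

```java
import edu.wpi.first.networktables.NetworkTableInstance;
import edu.wpi.first.networktables.PubSubOption;
import edu.wpi.first.networktables.RawSubscriber;
import edu.wpi.first.networktables.TimestampedRaw;

public class PolledVision {
  private final RawSubscriber poseSub =
      NetworkTableInstance.getDefault().getRawTopic("/vision/pose").subscribe(
          "raw", new byte[0], PubSubOption.sendAll(true), PubSubOption.keepDuplicates(true));

  /** Call this from robotPeriodic(); everything runs on the main robot thread. */
  public void periodic() {
    // readQueue() returns every value received since the last call, with timestamps.
    for (TimestampedRaw sample : poseSub.readQueue()) {
      double time = sample.serverTime / 1e6; // seconds, server time base
      byte[] raw = sample.value;
      // Decode `raw` and feed the pose estimator here; no synchronization needed.
    }
  }
}
```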

I was poking at an issue in simulation today, and realized that ... Have we talked about this before? It's super confusing for some values to be, like, the time, but for other values to be magically zero, depending on who wrote them. How should the consumer know what is what?
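A quick way to see what each value actually carries is to log both timestamps side by side. A minimal sketch, with the topic name invented, that prints the local-time and server-time fields of every incoming value:

```java
import java.util.EnumSet;

import edu.wpi.first.networktables.DoubleSubscriber;
import edu.wpi.first.networktables.NetworkTableEvent;
import edu.wpi.first.networktables.NetworkTableInstance;

public class TimeDebug {
  public TimeDebug(NetworkTableInstance inst) {
    DoubleSubscriber sub = inst.getDoubleTopic("/debug/x").subscribe(0.0);
    inst.addListener(sub, EnumSet.of(NetworkTableEvent.Kind.kValueAll), event -> {
      var v = event.valueData.value;
      // getTime() is in the local time base; getServerTime() is in the server's.
      System.out.printf("local=%d us  server=%d us%n", v.getTime(), v.getServerTime());
    });
  }
}
```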

I apologize for my ongoing trouble here. I'm investigating drift between the RoboRIO and the Raspberry Pi, and it seems like the time sync protocol isn't being used at all. Is there a simple way to check?

What I have is a simple RoboRIO program that does two things: (a) publish the FPGA time, and (b) listen for time sync. So literally the following:

```java
public Robot() {
  listener = NetworkTableListener.createTimeSyncListener(
      NetworkTableInstance.getDefault(), true, this::consume);
}

// ...

public void robotPeriodic() {
  pub.set(RobotController.getFPGATime());
}
```

(The "consume" function just prints.)

On the Raspberry Pi side, it just runs a loop every 20 ms that computes the difference between the two clocks:

```python
servernow = sub.getAtomic()
pinow = ntcore._now()
pub.set(servernow.time - pinow)
```

This measurement drifts about 10 milliseconds in 200 seconds, which is enough for us to care, and seems like about the same as the Raspberry Pi clock drift. The bigger question is why my "consume" function is only called at startup.

I've also measured the offset on the Pi, just publishing the value in the same main loop:

```python
offset = ntcore.NetworkTableInstance.getDefault().getServerTimeOffset()
if offset is not None:
    offset_pub.set(offset)
```

This appears to be set once and then never changed. Any advice? How do you observe the behavior of the time sync mechanism?
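One way to watch the sync mechanism more directly is to print everything the time-sync event reports, rather than just the fact that it fired. A sketch along those lines, assuming the event carries a TimeSyncEventData payload with offset, round-trip, and validity fields (the field names here are from memory and worth double-checking):

```java
import edu.wpi.first.networktables.NetworkTableEvent;
import edu.wpi.first.networktables.NetworkTableInstance;
import edu.wpi.first.networktables.NetworkTableListener;

public class TimeSyncWatcher {
  public TimeSyncWatcher() {
    // immediateNotify=true fires once with the current offset at registration;
    // after that, the callback fires only when the offset is re-measured.
    NetworkTableListener.createTimeSyncListener(
        NetworkTableInstance.getDefault(), true, this::consume);
  }

  private void consume(NetworkTableEvent event) {
    var sync = event.timeSyncData;
    if (sync != null) {
      System.out.printf("offset=%d us  rtt/2=%d us  valid=%b%n",
          sync.serverTimeOffset, sync.rtt2, sync.valid);
    }
  }
}
```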
Yes, for nearly all use cases, you want time in the local time base. The server time is there for clients that want to have that visibility, which is mostly dashboards or programmer debugging tools, not coprocessors.
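In concrete terms, using the local time base in a value listener just means reading getTime() rather than getServerTime(). A minimal sketch of that, assuming getTime() reports microseconds in the client's own time base:

```java
import edu.wpi.first.networktables.NetworkTableEvent;

public final class LocalTime {
  private LocalTime() {}

  /** Timestamp of a value event in this client's local time base, in seconds. */
  public static double localTimestampSeconds(NetworkTableEvent event) {
    // getTime() is already expressed in the local time base, so no
    // server-offset arithmetic is needed for a typical consumer.
    return event.valueData.value.getTime() / 1e6;
  }
}
```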