Hello @priyajakhar ,
I will document a few of your private responses here that are OK to share, so they may help future users encountering similar issues.
- I am not constraining the max speed on the controller
- The Amiga is being run outdoors; the accuracies are around h=0.014 and v=0.012, and carr_soln is 2.
- The early stop was somewhat consistent when I ran some recent tests. I am also not running reverse tracks.
OK so that is all good. I looked at the example code you sent me and ran it on one of our robots. It ran successfully, with some tweaks.
Sleeping in an async for loop
In your track follower’s stream_status method (based on the stream_track_state method of the Follow a Track | Farm-ng Developers example), you are sleeping inside an async for loop.
# Stream the track_follower state
async def stream_status(self) -> None:
    await asyncio.sleep(1.0)
    message: TrackFollowerState
    with open(logfile, "a") as l_file:
        async for _, message in self.clients["track_follower"].subscribe(SubscribeRequest(uri=Uri(path="/state"))):
            l_file.write(str(message))
            print("###################\n", message)
            await asyncio.sleep(1)
This is a very common mistake, but it is something you should almost never do (unless you have a very good reason and the sleep is much shorter than the expected loop rate). What is happening here is that the async for is already asynchronously waiting for the next TrackFollowerState message, and this releases the event loop to schedule other tasks while it waits.
By adding await asyncio.sleep(1) inside the async for loop, you delay the loop (and the next message getting printed) by an additional second per message, which creates a massive backlog for a message streamed at ~20 Hz. Because the max queue size on the gRPC stream is large, you end up receiving very late TrackFollowerState messages.
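To make the effect concrete, here is a small standalone toy of mine (not Amiga SDK code) that mimics a buffered ~20 Hz stream being consumed by a loop that sleeps 1 second per message; every message it handles is staler than the last, because roughly 19 new messages queue up for each one it reads:

import asyncio
import time


async def producer(queue: asyncio.Queue, rate_hz: float = 20.0) -> None:
    # Stand-in for the gRPC stream: push a timestamped message at ~20 Hz.
    seq = 0
    while True:
        await asyncio.sleep(1.0 / rate_hz)
        seq += 1
        queue.put_nowait((seq, time.monotonic()))


async def slow_consumer(queue: asyncio.Queue) -> None:
    # BAD: sleeping once per message, like the sleep inside your async for loop.
    # The queue grows by ~19 messages per second, so each message read is older
    # than the one before it.
    for _ in range(5):
        seq, produced_at = await queue.get()
        age = time.monotonic() - produced_at
        print(f"msg {seq}: {age:.2f}s old, {queue.qsize()} messages queued behind it")
        await asyncio.sleep(1.0)


async def main() -> None:
    queue: asyncio.Queue = asyncio.Queue()  # unbounded, like a large gRPC buffer
    producer_task = asyncio.create_task(producer(queue))
    await slow_consumer(queue)
    producer_task.cancel()


asyncio.run(main())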
Slowing down the state stream
If you do want to slow the rate at which messages are printed, you can use the every_n parameter of the SubscribeRequest proto. In this case, you could set every_n=20 to limit the 20 Hz message stream to 1 Hz (though I would recommend a much lower number so you get more fine-grained status updates).
See: farm-ng-core/protos/farm_ng/core/event_service.proto at main · farm-ng/farm-ng-core · GitHub
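For instance, reusing the client and URI from your script (this is just a fragment of your existing stream_status method, not a complete program), keeping one in every 20 messages turns the ~20 Hz stream into roughly 1 Hz:

# Fragment: same client/URI as your script, with only every_n added.
request = SubscribeRequest(uri=Uri(path="/state"), every_n=20)  # ~20 Hz -> ~1 Hz
async for _, message in self.clients["track_follower"].subscribe(request):
    print(message)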
Not looking at message.status
I’d also recommend you look at the status field of the TrackFollowerState messages to monitor what is happening if your robot is stopping early.
If your robot is in fact stopping early, this will tell you why.
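As a rough sketch (the field names are taken from the expected output below; treat the exact proto layout as an assumption and double-check it against the track_follower protos), you could flag failures like this:

def log_failures(message: TrackFollowerState) -> None:
    # Sketch only: controllable and failure_modes are the robot_status fields
    # shown in the expected output below, e.g. failure_modes: FILTER_TIMEOUT.
    robot_status = message.status.robot_status
    if robot_status.failure_modes:
        print("Track follower failure modes:", robot_status.failure_modes)
    if not robot_status.controllable:
        print("Robot is currently not controllable.")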
Recommended stream_status method
Putting it all together, I’d advocate you change to something more like:
async def stream_status(self) -> None:
    await asyncio.sleep(1.0)
    message: TrackFollowerState
    with open(logfile, "a") as l_file:
        async for _, message in self.clients["track_follower"].subscribe(SubscribeRequest(uri=Uri(path="/state"), every_n=2)):
            l_file.write(str(message))
            print("###################\n", message.progress)
            print(message.status)
Expected output
The last message before finishing the track and the first message at completion should look like:
###################
track_size: 42
goal_waypoint_index: 41
closest_waypoint_index: 41
distance_total: 4.0000000000000009
distance_remaining: 0.060659848555762945
duration_total: 5
duration_remaining: 0.075824810694703676
track_status: TRACK_FOLLOWING
robot_status {
  controllable: true
}
driving_direction: DIRECTION_FORWARD
waypoint_order: ORDER_STANDARD
###################
track_size: 42
goal_waypoint_index: 41
closest_waypoint_index: 41
distance_total: 4.0000000000000009
duration_total: 5
track_status: TRACK_COMPLETE
robot_status {
  controllable: true
}
driving_direction: DIRECTION_FORWARD
waypoint_order: ORDER_STANDARD
If it does fail, you will see the reason. E.g.:
###################
track_size: 42
distance_total: 4.0000000000000009
distance_remaining: 7.1767206273288089
duration_total: 5
duration_remaining: 8.97090078416101
track_status: TRACK_FAILED
robot_status {
  failure_modes: FILTER_TIMEOUT
}
driving_direction: DIRECTION_FORWARD
waypoint_order: ORDER_STANDARD
Next steps
I hand-measured the final position of the robot, and it was consistently stopping between 3.95 and 4.0 meters unless there was a failure reason.
Please let me know what you see now.
Best,
Kyle