Custom Brain application tutorial is live!

Hi everyone :wave:

The full tutorial for developing a custom application on the brain is live!

This example application and tutorial are designed to help you develop your own custom applications and deploy them to the Amiga Brain. In this tutorial you create an app that displays the camera stream and lets you drive your Amiga from the touchscreen interface on your Brain.

You can find the full tutorial at: Virtual Joystick Tutorial

The topics covered in this tutorial include:

  • Creating kivy applications
  • GRPC / asyncio application development
  • Streaming an Oak camera with the camera client
  • Streaming Amiga state information with the canbus client
  • Auto control mode of Amiga robot with the canbus client

Please check it out and let us know if you have any questions or suggestions as you go through it :smiley:


Can these three lines of code from the virtual-joystick be explained?
if response_stream is None:
    print("Start sending CAN messages")
    response_stream = client.stub.sendCanbusMessage(self.pose_generator())

Hi Dejan,

If anyone is missing context, we have a bi-directional RPC stream and a Generator called pose_generator explained here:

Virtual Joystick Tutorial - auto-control

So these three lines (1) check for an existing bi-directional RPC stream and (2) create the bi-directional RPC stream if it doesn’t exist.

We only want one bi-directional stream to exist at a time or we could send conflicting control messages to the Amiga. (Imagine two streams of commands with one commanding the Amiga to hold its position and the other to drive forward).
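As a minimal sketch of that single-stream guard (the class and method names here are illustrative, not the tutorial's actual API):

```python
class JoystickControl:
    """Illustrative holder for the single bi-directional stream."""

    def __init__(self):
        self.response_stream = None

    def ensure_stream(self, create_stream):
        # Create the bi-directional RPC stream only if none exists yet,
        # so at most one stream of control commands is active at a time.
        if self.response_stream is None:
            self.response_stream = create_stream()
        return self.response_stream


calls = []
control = JoystickControl()
control.ensure_stream(lambda: calls.append("created") or "stream")
control.ensure_stream(lambda: calls.append("created") or "stream")
# The factory ran only once; the second call reused the existing stream.
```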

After that (in the following lines) we iterate over the response_stream checking for a success response from the canbus server on each sent canbus command, and delete the response_stream if the server does not respond with a success message. This is so it can be recreated when the server is ready again (this should happen rarely if ever in our case).
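That iterate-and-recreate pattern can be sketched with a stand-in stream (the real canbus client replies are protobuf messages, not plain booleans, so the success check here is an assumption):

```python
import asyncio


async def fake_response_stream():
    # Stand-in for the server's reply stream: one reply per sent command,
    # ending with a failure to exercise the recreate-later path.
    for success in (True, True, False):
        yield success


async def consume(stream):
    """Iterate replies; drop the stream on the first non-success reply
    so it can be recreated once the server is ready again."""
    async for success in stream:
        if not success:
            return None  # the caller would set self.response_stream = None
    return stream


result = asyncio.run(consume(fake_response_stream()))
# result is None here, signalling that the stream should be recreated.
```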

Thanks for the response, Kyle. We are able to control the motors from our app.

It is clear that “pose_generator” has a forever-running loop, but it is less clear what
happens in “response_stream = client.stub.sendCanbusMessage(self.pose_generator())”.
Is “pose_generator” a new task on top of the other three tasks? What is the argument
“self.pose_generator()”? I do not see that the “pose_generator” function returns a value.

pose_generator is a Python generator function, so it does not return a single value; calling it returns a generator object that yields a sequence of values as you iterate over it (like iterating over a list).
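A self-contained toy illustration of that point (the real pose_generator reads the joystick state and builds canbus messages; the tuples below are placeholders, and the real loop runs forever rather than stopping after three values):

```python
def pose_generator():
    # Toy stand-in for the tutorial's pose_generator: it yields one
    # (speed, angular_rate) command per iteration instead of returning
    # a single value.
    for _ in range(3):
        yield (0.5, 0.0)  # e.g. 0.5 m/s forward, no turning


gen = pose_generator()   # calling it returns a generator object
commands = list(gen)     # the consumer pulls values by iterating
```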

That line does not create a new task; rather, it passes the iterator of Amiga control commands to the forever-running canbus server task dedicated to sending CAN messages. This task runs in the background of the Amiga as long as the canbus service is running, and waits for iterators of canbus messages to unpack, reformat, and send on the CAN bus. If that task has no iterator currently passing it messages, it just sits and waits.

Because we want to control the robot with the virtual joystick for the entire time the app is running, the pose_generator has no condition under which it ends; it just loops forever, passing values to the canbus service.
If you only wanted to pass a fixed set of commands, you could replace the pose_generator with a list of proto-defined AmigaRpdo1 commands (using make_amiga_rpdo1_proto). Or, if you only wanted to command the Amiga for a fixed period of time, you could add a break in the pose_generator loop. In both cases, the auto control should stop after the iterator completes and wait for the next time an iterator is passed.

I’d recommend triggering these short-lived control periods with a kivy button. And stop the stream of control messages with a couple of zero-speed, zero-angular-rate messages so the Amiga knows to stop right away!
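Putting those two suggestions together, here is a hedged sketch of a short-lived generator that ends with explicit stop commands (the tuples stand in for the proto-defined AmigaRpdo1 messages, which are not shown here):

```python
def short_control_burst(num_commands=10, speed=0.5):
    # Drive forward for a fixed number of commands...
    for _ in range(num_commands):
        yield (speed, 0.0)
    # ...then finish with a couple of zero-speed, zero-angular-rate
    # messages so the Amiga stops right away when the iterator completes.
    for _ in range(2):
        yield (0.0, 0.0)


burst = list(short_control_burst(num_commands=3))
# The burst ends with stop commands, so the last message is (0.0, 0.0).
```

Because the generator is finite, the auto control stops after the iterator completes, as described above, and the canbus service waits for the next iterator.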