Getting started with Brain ADK and additional sensors

Hi team,

We’ve recently received our Amiga platforms. We’re awaiting the delivery of the batteries, but I’d like to get started on some proof-of-concept code snippets.

I was able to install the ADK from source and I’ve read through how the camera_client example works, but I’d like to better understand the event/messaging architecture.

We are planning to integrate a Specim FX10 hyperspectral sensor (https://ftp.stemmer-imaging.com/webdavs/docmanager/161584-Specim-FX10-Reference-Manual.pdf) and a SICK LMS4000 (https://cdn.sick.com/media/docs/0/90/790/operating_instructions_lms4000_2d_lidar_sensors_en_im0079790.pdf). There are a handful of protocols for interacting with these devices, including GigE Vision and direct TCP messaging for the FX10, and the SOPAS Engineering Tool from SICK.

To frame my question, I suppose I’m asking how the messaging works. From the camera_client it seems we’re not writing an event loop which talks to the devices, but rather a message receiver which awaits data sent by the devices. Is that how we’d be best implementing an interface to these new sensors?

Hello @gsainsbury86 ,

This is a great question!

Fundamentally, all of our message passing is built on top of gRPC in our farm-ng-core open source repository, for the most part in event_service.py. This covers both data streams (subscribe) and interactions initiated by the client (requestReply). Our services (including device drivers like the camera) are built on top of this class and pass protobuf messages, which are often custom defined. See the farm-ng-core protos & farm-ng-amiga protos.
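For a feel of what the subscribe side looks like, here is a minimal sketch adapted from the camera_client pattern; the service_config.json file name and the printed fields are placeholders for your own setup:

```python
import asyncio
from pathlib import Path

from farm_ng.core.event_client import EventClient
from farm_ng.core.event_service_pb2 import EventServiceConfig
from farm_ng.core.events_file_reader import proto_from_json_file


async def main() -> None:
    # Load the service config (host, port, and subscription URIs).
    config: EventServiceConfig = proto_from_json_file(
        Path("service_config.json"), EventServiceConfig()
    )
    # Stream decoded protobuf messages as the service publishes them.
    async for event, message in EventClient(config).subscribe(
        config.subscriptions[0], decode=True
    ):
        print(event.uri.path, type(message))


if __name__ == "__main__":
    asyncio.run(main())
```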

Using gRPC provides a number of benefits in this context, including:

  • Separating the device driver processes from the applications using them
  • Seamless communication between processes running in different languages / environments
  • One-to-many publisher / subscriber framework
  • Straightforward communication (e.g., data streaming) between devices over the network (by configuring the address)

We currently have three open source examples that implement a custom service and client using the EventServiceGrpc class from event_service.py.
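Those examples all follow roughly the same shape. As a rough sketch (the constructor and config fields here follow the counter-style example, but please check the examples themselves for the exact details):

```python
import asyncio

import grpc
from google.protobuf.wrappers_pb2 import Int32Value

from farm_ng.core.event_service import EventServiceGrpc
from farm_ng.core.event_service_pb2 import EventServiceConfig


async def publish_counter(event_service: EventServiceGrpc) -> None:
    # Publish a monotonically increasing counter once per second.
    count = 0
    while True:
        await event_service.publish("/counter", Int32Value(value=count))
        count += 1
        await asyncio.sleep(1.0)


async def main() -> None:
    config = EventServiceConfig(name="counter_service", port=50051, host="localhost")
    event_service = EventServiceGrpc(grpc.aio.server(), config)
    # Serve gRPC and run the publish loop concurrently.
    await asyncio.gather(event_service.serve(), publish_counter(event_service))


if __name__ == "__main__":
    asyncio.run(main())
```

Any client on the network can then subscribe to the /counter path without knowing anything about how the service produces its data.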

Is that how we’d be best implementing an interface to these new sensors?

It’s hard to say. If we were to internally develop a driver for a new sensor, we would build it on top of the EventServiceGrpc class. If I were in your shoes, I would first look at already implemented protocols for these devices and see if any fall in line with your intended use of the sensors. If there exists a device driver with a communication protocol that is already implemented, you’ll save a lot of time and be able to jump more quickly into doing cool stuff with the data. I would not anticipate any conflict between using other protocols in an application and our gRPC-based messaging.

Things to consider when deciding if the available driver options fall in line with how you will be using the sensor:

  • Are you streaming the device data over the network?
  • Do you intend to consume the device data by a single application or are you hoping to make something more general?
  • How much processing of the data is required? How much will be directly on board the brain?

I hope this helps.

– Kyle

Thank you very much for the detailed reply, Kyle. It looks like I’ve got some reading to do!

I think our intention is to design an application which will acquire imagery/line scans on a regular timing interval. I think streaming the data over the network will be the most straightforward approach, and I don’t think we intend to do much pre-processing on the brain, but I’m also not certain. We’re working with partners in our engineering department, but I want to get the ball rolling.

If I were in your shoes, I would first look at already implemented protocols for these devices and see if any fall in line with your intended use of the sensors. If there exists a device driver with a communication protocol that is already implemented, you’ll save a lot of time and be able to jump more quickly into doing cool stuff with the data.

In relation to this, I’m still not certain how I would consume that data if not by wrapping it in an EventServiceGrpc service. Though it looks like, if I did implement that class, I could perform the data request to the device directly inside the request_reply_handler function definition.
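To make sure I understand, something like this is what I have in mind; read_scan() and its reply are placeholders for a real device call and a custom protobuf message, and I’d register the handler with the service the same way the two_ints example does:

```python
from farm_ng.core.event_pb2 import Event
from google.protobuf.empty_pb2 import Empty
from google.protobuf.message import Message


async def read_scan() -> Message:
    # Hypothetical: query the FX10 / LMS4000 over its native protocol and
    # pack the result into a custom protobuf message. Stubbed out here.
    return Empty()


async def request_reply_handler(event: Event, message: Message) -> Message:
    # Idea: trigger the sensor acquisition right here, inside the handler,
    # rather than running a separate event loop for the device.
    if event.uri.path == "/scan":
        return await read_scan()
    return Empty()
```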

I’ll have a play and let you know how I get on.

Hi @gsainsbury86,

I think streaming the data over the network will be the most straightforward approach, and I don’t think we intend to do much pre-processing on the brain, but I’m also not certain.

In this case, using the EventServiceGrpc class will be a good avenue for you. The Custom Service Examples listed above should be very helpful as you develop your services and any custom protobuf messages. For the regular-interval acquisition you described, the core of the service could be as simple as the sketch below.
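A very rough sketch, where grab_frame() is a placeholder for however you end up reading the sensor, and whatever it returns must be a protobuf message:

```python
import asyncio

from farm_ng.core.event_service import EventServiceGrpc
from google.protobuf.wrappers_pb2 import BytesValue


async def grab_frame() -> BytesValue:
    # Placeholder for your FX10 / LMS4000 read; a real version would
    # return your own custom protobuf message instead of raw bytes.
    return BytesValue(value=b"")


async def acquisition_loop(event_service: EventServiceGrpc, period_s: float) -> None:
    # Acquire and publish on a fixed interval; any subscribed client
    # on the network will receive each frame as it is published.
    while True:
        frame = await grab_frame()
        await event_service.publish("/frame", frame)
        await asyncio.sleep(period_s)
```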

Have fun!
– Kyle

Hi Kyle,

I got a bit stuck today. When trying to run the example Service Client demo, I got an error about not being able to find the two_ints_pb2 module. The documentation says these were the included protobuf files, but that we could regenerate them with genprotos.py. I couldn’t find the original files or the genprotos script. After some reading, I found that I could generate the file with `protoc --python_out=. two_ints.proto`. Just a note in case anyone else gets stuck.

After that, I was able to define a new protobuf file for passing image data and get a minimal example working where I pass an existing image between a client and a service.
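For anyone following along, the shape of it is roughly this; image_data_pb2 and the ImageData fields are just what I happened to define in my own image_data.proto, not anything from the ADK:

```python
import cv2
from image_data_pb2 import ImageData  # generated from my own image_data.proto


def encode_image(path: str) -> ImageData:
    # Read an image from disk and pack it into my custom message;
    # the service then publishes this to any subscribed client.
    frame = cv2.imread(path)
    return ImageData(
        height=frame.shape[0],
        width=frame.shape[1],
        data=cv2.imencode(".png", frame)[1].tobytes(),
    )
```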

Now I’m wrestling with harvesters, a Python module for talking to GigE cameras… but I’ll assume that’s outside your expertise :sweat_smile:

Thank you for the feedback, @gsainsbury86, and sorry those files were missing. I’ve opened a ticket for the missing files referenced in the example:

I’m glad you were able to find a way around it in the meantime.

Good luck with the harvesters module, and yes that is outside our expertise!

– Kyle