Object Detection Using LEDDAR

About myself

My name is Enzo Evers. I am a fourth-year ICT student at Fontys Hogeschool. At Fontys I focus on embedded systems and mostly work with FPGAs for my projects. While FPGAs aren't really part of the ICT program, I am very interested in their underlying technology. Besides school and ATeam, I work part-time at Prodrive Technologies in Son. After the summer of 2018 I was looking for a project involving control systems and heard about ATeam through a teacher. I then joined the team and started working on object detection with the Leddar.

My project in short

I am working on object detection using Leddars. A Leddar is essentially an advanced distance sensor that uses LEDs to measure the distance to objects. The object's position, width, and other properties are calculated from the Leddar data. This object data can then be used, for example, for automated parking.

This semester (September 2018 till February 2019), three seventh-semester Fontys Automotive students (Peter Maas, Lennard Buskus and Jamal Kor) worked on an automated parking algorithm, using the Leddar data as feedback for their system. The Leddar is placed on the back of the car, and the car drives backwards into the parking spot. The car starts or stops steering based on the measured distance to an object (in this case the car behind the parking spot). A demo is shown in the video.

The Leddars are currently the only sensors on the back of the car. Radars and cameras will be added soon. The Leddars will thus be one of the sensor types contributing to a complete world model of the car's surroundings.

The automated parking algorithm in action

A technical explanation for the engineers

The specific Leddar we use (M16D-75B0005) measures 16 segments of 3 degrees each, up to ~45 m. For every segment that detects something within that range, the Leddar reports a distance. The reported distance is the straight-line distance from the Leddar to the object. The figure below shows how the Leddar measures its objects.
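As a minimal sketch of the geometry (assuming segment 0 sits at one edge of the 16 × 3 = 48 degree field of view and that the sensor looks along the +y axis; the axis conventions are my assumption, not taken from the actual code), the straight-line distance of a segment converts to x/y coordinates like this:

```c
#include <math.h>

#define NUM_SEGMENTS  16
#define SEGMENT_DEG   3.0f                    /* degrees per segment */
#define DEG_TO_RAD    (3.14159265f / 180.0f)

/* Convert a segment index and its measured straight-line distance
 * into x/y coordinates relative to the sensor. Segment 0 is assumed
 * to sit at one edge of the 48-degree field of view, and the sensor
 * is assumed to look along the +y axis. */
static void segment_to_xy(int segment, float distance, float *x, float *y)
{
    /* Angle of the segment's center, relative to the boresight. */
    float angle_deg = ((float)segment + 0.5f) * SEGMENT_DEG
                      - (NUM_SEGMENTS * SEGMENT_DEG) / 2.0f;
    float angle_rad = angle_deg * DEG_TO_RAD;

    *x = distance * sinf(angle_rad);   /* lateral offset   */
    *y = distance * cosf(angle_rad);   /* forward distance */
}
```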

The algorithm I made compares the x and y coordinates of the current segment with those of the previous segment, and checks whether the gap between them is big enough to start a new object. If the gap isn't big enough, the segment's coordinate is added to the current object. When the maximum number of objects has been found but not all segments have been processed yet, the algorithm checks whether any remaining segment has a coordinate closer to the car than the furthest object in the current list. If so, the furthest object is removed and its slot is reused for the remaining segments. This continues until all segments have been processed.
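A simplified sketch of this grouping step is shown below. The gap threshold, the object struct and the maximum of four objects are illustrative assumptions, and the eviction of the furthest object is only hinted at in a comment to keep the sketch short:

```c
#include <math.h>
#include <stdbool.h>

#define NUM_SEGMENTS   16
#define MAX_OBJECTS     4      /* assumed object limit              */
#define GAP_THRESHOLD   0.5f   /* assumed gap size in meters        */

typedef struct {
    float min_x, max_x;        /* object extent in x                */
    float closest_y;           /* closest measured forward distance */
    int   num_segments;
} object_t;

/* Group per-segment (x, y) points into objects. A new object is
 * started whenever the gap to the previous segment's point exceeds
 * GAP_THRESHOLD in either direction. Returns the number of objects. */
static int group_segments(const float x[NUM_SEGMENTS],
                          const float y[NUM_SEGMENTS],
                          const bool  valid[NUM_SEGMENTS],
                          object_t objects[MAX_OBJECTS])
{
    int count = 0;
    bool have_prev = false;
    float prev_x = 0.0f, prev_y = 0.0f;

    for (int i = 0; i < NUM_SEGMENTS; i++) {
        if (!valid[i]) {        /* nothing within range: an empty */
            have_prev = false;  /* segment breaks the object      */
            continue;
        }

        bool new_object = !have_prev
            || fabsf(x[i] - prev_x) > GAP_THRESHOLD
            || fabsf(y[i] - prev_y) > GAP_THRESHOLD;

        if (new_object) {
            if (count == MAX_OBJECTS)
                break;  /* real code: evict the furthest object and
                           keep processing the remaining segments  */
            objects[count].min_x = x[i];
            objects[count].max_x = x[i];
            objects[count].closest_y = y[i];
            objects[count].num_segments = 0;
            count++;
        }

        object_t *obj = &objects[count - 1];
        if (x[i] < obj->min_x) obj->min_x = x[i];
        if (x[i] > obj->max_x) obj->max_x = x[i];
        if (y[i] < obj->closest_y) obj->closest_y = y[i];
        obj->num_segments++;

        prev_x = x[i];
        prev_y = y[i];
        have_prev = true;
    }
    return count;
}
```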

The algorithm still needs optimization, and more object features need to be extracted from the Leddar's data. For the autonomous parking project, however, the current implementation is sufficient.

PCB

To use the algorithm, an ESP32 and an external CAN module were used to read the Leddar data and then send the processed object data to the car. However, this setup isn't ideal because of the breadboard with jumper wires, the logic level converter and the external CAN module. Because of this, I made a PCB with an ATSAME51J19A (Cortex-M4F) on it. This chip is certainly overkill, but it has two integrated CAN controllers, which saves some money in the end.
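As an illustration of how the object data could be packed into CAN frames (the frame ID, layout and centimeter scaling below are my assumptions for the sketch, not the actual message format used on the car):

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical layout: one CAN frame per detected object, with
 * positions scaled to centimeters as signed 16-bit values. Bytes
 * are copied in the MCU's native (little-endian) order. */
typedef struct {
    uint32_t id;
    uint8_t  len;
    uint8_t  data[8];
} can_frame_t;

static can_frame_t pack_object_frame(uint8_t object_index,
                                     float min_x, float max_x,
                                     float closest_y)
{
    can_frame_t frame = {0};
    int16_t min_x_cm = (int16_t)(min_x * 100.0f);
    int16_t max_x_cm = (int16_t)(max_x * 100.0f);
    int16_t y_cm     = (int16_t)(closest_y * 100.0f);

    frame.id  = 0x200u + object_index;   /* assumed base CAN ID */
    frame.len = 6;
    memcpy(&frame.data[0], &min_x_cm, sizeof min_x_cm);
    memcpy(&frame.data[2], &max_x_cm, sizeof max_x_cm);
    memcpy(&frame.data[4], &y_cm,     sizeof y_cm);
    return frame;
}
```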

The image shows version one of the PCB. So far the PCB has only been tested by uploading a simple "blink LED" program. Next, I plan to write the code to use the integrated CAN controllers. The object detection algorithm itself can stay mostly the same.

Version 1 of the PCB for object detection using LEDDARs