Drone following instructions

Reading instructions from QR codes and executing them using an Android application


Intro

Recently I got an opportunity to build a drone prototype controlled by an Android device. First I had to choose the best candidate. The requirements were a small size and an SDK with video streaming. After some research I decided that the Bebop 2 from Parrot would be the best choice. Parrot is one of the few companies with an open SDK for developers, and they have recently released the 3rd version of it.

The first step was to try the example Android application. It covers almost every basic feature: connecting to the drone, moving around, taking high-quality pictures and accessing the drone's media.

One of the steps for the prototype would be autonomous landing on a pattern. I did some research on existing solutions and found this paper describing the theory behind such a landing. So I decided to create an Android application that navigates the drone to land on a detected pattern (in this case a QR code). Later I made an update so the application can read instructions from the detected patterns and execute them sequentially.


Drone details

The Bebop 2 has many cool features I'm not going to write about, but I will draw your attention to the flight time of about 22 minutes, which is quite useful for development. After going through the SDK documentation I found a small disadvantage: the ultrasonic sensor for altitude detection is not yet accessible via the API. On the other hand, I was pleasantly surprised by the camera features. The drone has a single front-facing camera with a fisheye lens. The camera uses the gyroscope, so the streamed video stays fixed in one position even when the drone leans to the sides. You can also set this angle via the API and get the output video from the requested angle. For the purpose of this prototype I needed the frontal and bottom views. Streaming quality can be set up to 640 x 368 px. Recording quality has a higher resolution but is not accessible as a stream. The video resolution can also be set via the API.
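For illustration, here is a minimal sketch of requesting a camera angle; it assumes an already connected ARDeviceController from ARSDK3, and the exact method name and value ranges should be verified against your SDK version.

    // Minimal sketch (ARSDK3): aim the fisheye's virtual camera downward
    // for the landing phase. Assumes `deviceController` is connected.
    private void lookDown(ARDeviceController deviceController) {
        // tilt in degrees (negative tilts the view down), pan in degrees
        deviceController.getFeatureARDrone3().sendCameraOrientation((byte) -80, (byte) 0);
    }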

Bebop 2 on a cardboard landing pad

Detection and output

I had a small issue getting a raw image from the video stream. After solving it, I used asynchronous QR code detection with the Google vision library. A small disadvantage of this library is that the result object does not contain the rotation of the QR code, so I had to add this missing method, as sketched below. I also needed some output drawing, so I added a transparent layer above the stream.
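The idea in a minimal sketch, assuming frames arrive as Bitmaps and a BarcodeDetector built with new BarcodeDetector.Builder(context).setBarcodeFormats(Barcode.QR_CODE).build(); the corner points are documented to come in clockwise order starting with the top-left corner, which is enough to recover the rotation:

    import android.graphics.Bitmap;
    import android.graphics.Point;
    import android.util.SparseArray;
    import com.google.android.gms.vision.Frame;
    import com.google.android.gms.vision.barcode.Barcode;
    import com.google.android.gms.vision.barcode.BarcodeDetector;

    class QrRotation {
        // Derive the QR code rotation from the corner points, since the
        // Barcode result object itself has no rotation field.
        static Double detectQrRotationDegrees(BarcodeDetector detector, Bitmap frameBitmap) {
            Frame frame = new Frame.Builder().setBitmap(frameBitmap).build();
            SparseArray<Barcode> barcodes = detector.detect(frame);
            if (barcodes.size() == 0) return null;   // no code in this frame

            Point[] c = barcodes.valueAt(0).cornerPoints;
            // The vector from the top-left to the top-right corner follows the
            // code's top edge; its angle is the rotation in the image plane.
            return Math.toDegrees(Math.atan2(c[1].y - c[0].y, c[1].x - c[0].x));
        }
    }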

Sequence of moves

Searching through existing solutions I found a few libraries written in Python or JavaScript that can execute movements sequentially. These moves work as a sorted list of commands that are executed in a predefined order. I implemented my own move sequence, which consists of 3 different command types (see the sketch after the list):

  • time move - executes a move for a given time (e.g. move forward for 3000 milliseconds)
  • single action - execution of a single action (e.g. take off, land, take a picture, ...)
  • condition - executes an action after a condition is satisfied (e.g. after locking onto the pattern, reads the instruction and adds it to the command stack)
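A minimal sketch of how such command types can look (the names are mine, not taken from the project source); a runner starts each command in order and polls isDone() before advancing:

    // Illustrative sketch of the three command types.
    interface Condition { boolean holds(); }

    interface DroneCommand {
        void start();
        boolean isDone();
    }

    // time move - e.g. "move forward for 3000 milliseconds"
    class TimeMove implements DroneCommand {
        private final Runnable begin, end;
        private final long durationMs;
        private long startedAt;
        TimeMove(Runnable begin, Runnable end, long durationMs) {
            this.begin = begin; this.end = end; this.durationMs = durationMs;
        }
        public void start() { startedAt = System.currentTimeMillis(); begin.run(); }
        public boolean isDone() {
            if (System.currentTimeMillis() - startedAt < durationMs) return false;
            end.run();   // stop the movement once the time is up
            return true;
        }
    }

    // single action - e.g. take off, land, take a picture
    class SingleAction implements DroneCommand {
        private final Runnable action;
        SingleAction(Runnable action) { this.action = action; }
        public void start() { action.run(); }
        public boolean isDone() { return true; }
    }

    // condition - run an action once a predicate holds (e.g. pattern
    // locked -> parse the QR message and push new commands onto the stack)
    class ConditionCommand implements DroneCommand {
        private final Condition condition;
        private final Runnable action;
        ConditionCommand(Condition condition, Runnable action) {
            this.condition = condition; this.action = action;
        }
        public void start() { }
        public boolean isDone() {
            if (!condition.holds()) return false;
            action.run();
            return true;
        }
    }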

Landing

Landing is a condition command type. The condition part is centering the drone above the pattern, and the action part is the landing itself. So let's describe the centering condition. I used seven simple independent controllers to center the drone at the exact position above the pattern. Five of them handle movement and two handle possible corrections. Each controller gets the pattern position, with a timestamp, from the asynchronous QR code detector running on the video stream.

The movement controllers are quite straightforward: each takes care of movement along one axis in both directions (see the sketch after the list below). The rotation controller turns the drone to the pattern's orientation so the next instruction is executed with the same heading; it only becomes active once the other four movement controllers report the drone as centered. The small correction controller uses the last detected QR code position (e.g. if the pattern was last seen at the bottom of the frame, try moving backward). The large correction controller kicks in when the pattern has not been detected for a longer time (1-3 s) and starts a searching procedure consisting of a few steps (move up, rotate, ...). Both correction controllers are time limited.

movement controllers
  • forward / backward
  • left / right
  • rotate clockwise / rotate anticlockwise
  • up / down
  • pattern rotation

correction controllers
  • small correction (uses the last known pattern position)
  • large correction (everything is lost, just try to find the pattern)
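As a sketch, one of the movement controllers could look like the following; it assumes the BebopDrone helper class from Parrot's SDK sample application (with its setRoll and setFlag piloting methods), and the tolerance and speed values are only illustrative:

    // Sketch of the left/right movement controller: keep the pattern
    // horizontally centered in the frame. Values are illustrative.
    class LeftRightController {
        private static final float TOLERANCE = 0.08f; // fraction of frame width
        private final BebopDrone drone;               // Parrot sample helper class
        LeftRightController(BebopDrone drone) { this.drone = drone; }

        /** offsetX: pattern center relative to frame center, in -0.5 .. 0.5 */
        boolean update(float offsetX) {
            if (Math.abs(offsetX) <= TOLERANCE) {
                drone.setRoll((byte) 0);              // centered on this axis
                return true;
            }
            drone.setFlag((byte) 1);                  // enable roll/pitch piloting
            drone.setRoll((byte) (offsetX > 0 ? 15 : -15)); // -100..100 scale
            return false;
        }
    }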

Executing instructions

Executing an instruction is the same condition command type as the landing action. The condition part is again centering the drone above the pattern, but the action afterwards is to parse the QR code message and add a new command onto the command stack. Each message has a simple structure (e.g. "id:5;fw:2000" means go forward for 2000 milliseconds) and a unique identifier which has to be larger than the previous one.
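A minimal parsing sketch for this format (the forwardFor helper and field names beyond "id" and "fw" are hypothetical); the id check ensures that re-reading the same code while hovering does not queue the instruction twice:

    import java.util.HashMap;
    import java.util.Map;

    class InstructionParser {
        // Split a message like "id:5;fw:2000" into key/value fields.
        static Map<String, String> parseMessage(String raw) {
            Map<String, String> fields = new HashMap<>();
            for (String part : raw.split(";")) {
                String[] kv = part.split(":", 2);
                if (kv.length == 2) fields.put(kv[0].trim(), kv[1].trim());
            }
            return fields;
        }
    }

    // Usage while locked onto a pattern:
    //   Map<String, String> msg = InstructionParser.parseMessage(qrText);
    //   int id = Integer.parseInt(msg.get("id"));
    //   if (id > lastExecutedId) {               // reject stale / repeated codes
    //       commandStack.push(forwardFor(Long.parseLong(msg.get("fw"))));
    //       lastExecutedId = id;
    //   }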

Testing

For testing purposes I made a few cardboard landing pads. The pads cannot be too light, otherwise the downwash from the drone's propellers would blow them away.

The instruction-following process was tested indoors and outdoors. The outdoor results were insufficient: even a light side wind pushed the drone away from the QR code, and the drone had to run the searching procedure several times to successfully read an instruction from the pattern.

On the other hand, the indoor results were quite satisfying. The QR code was detected almost every time and the searching procedure was never launched. The indoor testing was captured in the attached video.

Result

From my point of view, the simple on/off controllers could be replaced by functions describing speed over time for each movement. Locking onto the pattern would then be quicker, and the approach could also be used outdoors. For better orientation in space some positioning system could be used, but that was not the point of this exercise.


