Thursday, November 1, 2018

Toilet light wall

LED screen made of empty toilet rolls

Why?

The first idea was to reuse empty toilet rolls and have some fun with an Arduino and a few LEDs. After gluing the first block (4x3 LEDs connected to an Arduino Nano), I decided it would be cool to control it right from an Android phone, so I added a Bluetooth adapter and created an Android app. As time went by, more empty toilet rolls appeared, so I built one block after another... After some time I set my goal at 4x4 blocks altogether (192 LEDs). As the blocks were added, the functionality of the Android app also grew.

first 6 blocks


What can it do?

Because it is quite clear from the video below, let's just briefly list the features of the Android app:
  • draw (real-time)
  • toggle random LED (screen saver)
  • animation (includes also editor for key-frame animation)
  • font loop
  • incoming sound
  • simple game using accelerometer

front view 

Was there anything interesting?

Aside from the gluing, soldering and programming, I came across a few interesting things:
  • I needed to design a protocol so the LEDs could be updated in real time. I ended up with 2 bytes for each block: 12 bits for the LED states and the remaining 4 bits as the block's identifier in the grid (a minimal packing sketch follows this list).
  • I had never used multiple Bluetooth devices before. In this setup, my power supply was not sufficient, so I needed to split the screen and use multiple Bluetooth adapters.
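The exact bit layout is not spelled out above, so here is a minimal packing sketch in Java, assuming the upper 4 bits of the 2-byte frame carry the block identifier and the lower 12 bits carry the LED states:

public final class BlockFrame {

    /** leds: 12 booleans (one per LED in a 4x3 block), blockId: 0-15. */
    public static byte[] encode(boolean[] leds, int blockId) {
        if (leds.length != 12 || blockId < 0 || blockId > 15) {
            throw new IllegalArgumentException("12 LEDs and a 4-bit id expected");
        }
        int frame = (blockId & 0x0F) << 12;          // upper 4 bits: block id (assumed layout)
        for (int i = 0; i < 12; i++) {
            if (leds[i]) frame |= (1 << i);          // lower 12 bits: LED on/off
        }
        return new byte[] { (byte) (frame >> 8), (byte) (frame & 0xFF) };
    }
}

The two returned bytes are what the app would push over the Bluetooth serial link for each block.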

view from the back

Result

To sum it up, I've made quite a large screen (76x60 cm) with a horribly small resolution (16x12 px) that can be controlled only via Bluetooth with a custom protocol. A possible use case might be as some sort of art exhibition object or as a domestic screen saver.




Wednesday, July 12, 2017

Drone following instructions

Reading instructions from QR codes and executing them using an Android application


Intro

Recently I got an opportunity to build a drone prototype controlled by an Android device. First I had to choose the best candidate. The requirements were: small size and an SDK with video streaming. After some research I decided that the Bebop 2 from Parrot would be the best choice. Parrot is one of the few companies that has an open SDK for developers, and they have recently released the 3rd version of it.

The first step was to try the Android example application. This example covers almost every basic feature: connecting to the drone, moving around, taking high-quality pictures and accessing the drone's media.

One of the steps for the prototype would be autonomous landing on a pattern. I did some research on existing solutions and found this paper that describes the theory behind the landing. So I decided to create an Android application that navigates the drone to land on a detected pattern (in this case a QR code). Later, I made an update so the application can read instructions from the detected patterns and execute them sequentially.


Drone details

The Bebop 2 has many cool features I'm not going to write about, but I will draw your attention to the flight time, which is about 22 minutes and quite useful for development. After going through the SDK documentation, I found a small disadvantage: the ultrasonic sensor for altitude detection is not yet accessible via the API. On the other hand, I was pleasantly surprised by the camera features. The drone has one camera placed in the front with a fish-eye lens. The camera uses the gyroscope, so the streamed video stays fixed even when the drone leans to the sides. You can also set this angle via the API to get the output video from the requested angle. For the purpose of this prototype I needed the frontal and bottom views. Streaming video quality can be set up to 640x368 px. The recording quality has a higher resolution but is not accessible as a stream. The video resolution can also be set via the API.

Bebop 2 on a cardboard landing pad

Detection and output

I had a small issue getting a raw image from the video stream. After getting past it, I used asynchronous QR code detection with the Google Vision library. A small disadvantage of this library is that the result object does not contain the rotation of the QR code, so I had to add this missing method. I also needed some output drawing, so I added a transparent layer above the stream.
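A rough sketch of such a "missing method": the Google Vision Barcode result exposes its corner points, so the rotation can be estimated from them. The assumption here (not guaranteed by the library docs) is that the first two corner points form the top edge of the code:

import android.graphics.Point;
import com.google.android.gms.vision.barcode.Barcode;

public final class QrRotation {

    /** Returns the estimated rotation of the detected code in degrees, range (-180, 180]. */
    public static double rotationDegrees(Barcode barcode) {
        Point[] c = barcode.cornerPoints;
        double dx = c[1].x - c[0].x;   // vector along the assumed top edge
        double dy = c[1].y - c[0].y;
        return Math.toDegrees(Math.atan2(dy, dx));
    }
}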

Sequence of moves

Searching for existing solutions, I found a few libraries written in Python or JavaScript that can execute movements sequentially. These moves work as a sorted list of commands that are executed in a predefined order. I implemented my own sequence of moves, which consists of 3 different types of commands (a minimal sketch follows the list):

  • time move - executes a move for a given time (e.g. move forward for 3000 milliseconds)
  • single action - execution of a single action (e.g. take off, land, take picture, ...)
  • condition - executes some action after a condition is satisfied (e.g. after locking onto the pattern, read the instruction and add it to the command stack)
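A minimal sketch of these three command types in Java. The names and the DroneApi wrapper are illustrative, not the actual project or Parrot SDK API:

interface DroneApi {                       // hypothetical thin wrapper around the SDK
    void move(float pitch, float roll, float yaw, float gaz);
    void takeOff();
    void land();
}

interface Command { void execute(DroneApi drone) throws InterruptedException; }

class TimeMove implements Command {        // e.g. move forward for 3000 ms
    private final float pitch; private final long millis;
    TimeMove(float pitch, long millis) { this.pitch = pitch; this.millis = millis; }
    public void execute(DroneApi drone) throws InterruptedException {
        drone.move(pitch, 0, 0, 0);
        Thread.sleep(millis);
        drone.move(0, 0, 0, 0);            // stop after the time window
    }
}

class SingleAction implements Command {    // e.g. take off, land, take picture
    private final Runnable action;
    SingleAction(Runnable action) { this.action = action; }
    public void execute(DroneApi drone) { action.run(); }
}

class ConditionCommand implements Command { // e.g. center above the pattern, then act
    private final java.util.function.BooleanSupplier condition;
    private final Command then;
    ConditionCommand(java.util.function.BooleanSupplier condition, Command then) {
        this.condition = condition; this.then = then;
    }
    public void execute(DroneApi drone) throws InterruptedException {
        while (!condition.getAsBoolean()) Thread.sleep(50); // wait until satisfied
        then.execute(drone);
    }
}

The commands are queued and executed one after another, which is all the "sequence of moves" needs.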

Landing

Landing is a condition-type command. The condition part is to center the drone above the pattern, and the action part is the landing itself. So let's describe the centering condition. I used seven simple independent controllers to center the drone at the exact position above the pattern. Five of those controllers are for movement and two of them are for corrections. Each of these controllers gets the position from the asynchronous QR code detector on the video stream, together with a time stamp.

The movement controllers are quite straightforward: each takes care of movement along one axis, in both directions. The rotation controller rotates the drone to the pattern's orientation so the next instruction is executed with the same heading; it becomes active only once the other four movement controllers report that the drone is centered. The small correction controller uses the knowledge of the last detected QR code position (e.g. if the last position was detected near the bottom, try to move backward). The large correction controller is launched when the pattern has not been detected for a longer time (1-3 s) and starts a searching procedure that consists of a few steps (move up, rotate, ...). Both correction controllers are time limited. A minimal controller sketch follows the lists below.

movement controllers
  • forward / backward
  • left / right
  • rotate clockwise / rotate anticlockwise
  • up / down
  • pattern rotation

correction controllers
  • small correction (uses the last known pattern position)
  • large correction (everything is lost, just try to find the pattern again)
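A minimal sketch of one movement controller (assumed structure, not the real code): each controller watches one axis of the detected pattern position and outputs a correcting command whenever the pattern drifts away from the image center.

class AxisController {
    private final double deadband;      // tolerance around the center, in pixels
    private final float speed;          // fixed correction speed sent to the drone

    AxisController(double deadband, float speed) {
        this.deadband = deadband;
        this.speed = speed;
    }

    /**
     * offset: signed distance of the pattern from the image center on this axis.
     * Returns the correction to send (negative, zero or positive speed).
     */
    float update(double offset) {
        if (Math.abs(offset) <= deadband) return 0f;   // centered, nothing to do
        return offset > 0 ? -speed : speed;            // move back toward the center
    }

    boolean isCentered(double offset) {
        return Math.abs(offset) <= deadband;
    }
}

Seven such independent instances (one per axis plus the two corrections with their own timing logic) are enough to express the centering condition.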

Executing instructions

Executing an instruction is the same condition-type command as landing. The condition part is to center the drone above the pattern, but the action afterwards is to parse the QR code message and add a new command to the command stack. Each message has a simple structure (e.g. "id:5;fw:2000" means go forward for 2000 milliseconds) and a unique identifier which has to be larger than the previous one.
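A minimal parsing sketch for the message format above ("id:5;fw:2000"); it only handles the "action:duration" style shown here, and any keys other than "id" and "fw" are assumptions for illustration:

final class InstructionParser {

    static final class Instruction {
        final int id; final String action; final long argument;
        Instruction(int id, String action, long argument) {
            this.id = id; this.action = action; this.argument = argument;
        }
    }

    private int lastId = -1;   // identifiers must be strictly increasing

    /** Returns the parsed instruction, or null if it was already executed. */
    Instruction parse(String message) {
        String[] parts = message.split(";");
        int id = Integer.parseInt(parts[0].split(":")[1]);
        if (id <= lastId) return null;                 // ignore repeated detections
        lastId = id;
        String[] command = parts[1].split(":");
        return new Instruction(id, command[0], Long.parseLong(command[1]));
    }
}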

Testing

For testing purposes I made a few cardboard landing pads. The pads cannot be too light, otherwise the downwash from the drone blows them away.

The instruction-following process was tested indoors and outdoors. The outdoor results were insufficient: even a light side wind pushed the drone away from the QR code, and to successfully read an instruction from the pattern, the drone had to run the searching procedure several times.

On the other hand, the indoor results were quite satisfying. The QR code was detected almost every time and the searching procedure was not launched even once. You can see the indoor testing in the attached video.

Result

From my point of view, the simple logic controllers could be replaced by functions that describe the speed over time for each movement. The lock onto the pattern would then happen more quickly and could work outdoors too. For better orientation in space, some positioning system could be used, but that was not the point of this exercise.




Friday, March 24, 2017

Robotic arm with computer vision

Robotic arm with computer vision - picking up the object


Idea

The main idea was to build an environment with a robotic arm that can execute various commands based on an image analysis of the scene. In this article I'm going to describe all parts of the idea. For the first task I chose detecting and moving a single object.



Environment

The whole environment consists of a few parts mounted together. For the base I chose an old table and repainted it white to get better contrast with the objects. Onto the middle of the longer side I mounted a robotic arm that I got from eBay. The arm has 6 servo motors, with a rotating base and a claw on the other end. The parts are made of aluminium and are quite solid. Then I took some perforated metal ledges, shortened them, mounted them to the corners of the table and screwed it all together. Then I attached an RGB LED strip to the bottom side of the top part of the construction. Finally I placed a USB camera at the top of the construction so it can see the whole scene.

Communication with arm

The robotic arm has 6 servo motors. The quickest way is to get a servo controller that allows us to control a single servo or a group of servos. I chose a controller with a serial interface and a custom protocol, so the communication can be done over USB with a few lines of code in any language.

Example of a group operation:
#1P1500#2P2300#3P2300#4P2300#5P2300#6P2300T100\r\n
#<servo-index>P<rotation-value>#<servo-index>P<rotation-value>...T<execution-time>\r\n
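A small Java sketch of building such a group command; the transport is left as a plain OutputStream, since any serial library (or the serial device file directly) can deliver the bytes:

import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;
import java.util.Map;

final class ServoCommand {

    /** positions: servo index -> rotation value, timeMs: execution time (the T part). */
    static String group(Map<Integer, Integer> positions, int timeMs) {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<Integer, Integer> e : positions.entrySet()) {
            sb.append('#').append(e.getKey()).append('P').append(e.getValue());
        }
        return sb.append('T').append(timeMs).append("\r\n").toString();
    }

    static void send(OutputStream serial, String command) throws IOException {
        serial.write(command.getBytes(StandardCharsets.US_ASCII));
        serial.flush();
    }
}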


Logic flow

The application has a few independent parts that communicate with each other.

The input from the camera runs in a separate thread and performs preprocessing on each interesting frame. (An interesting frame is a frame captured after movement has been detected.) The result of the preprocessing is a list of detected objects with coordinates.
The interesting frame is then sent to the main logic, where all the modules are registered. If there is no active module, it tries to initialize the first one that satisfies its initial conditions. If a module is active, the interesting frame is sent to that module. The module takes care of the logic and decides what to do next, then sends movement commands into a queue for the USB communicator.
The USB communicator repeatedly reads messages from its queue and sends commands to the controller via USB (a minimal sketch follows). The controller then moves the servo motors.
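A minimal sketch of the queue-based USB communicator (assumed structure): modules enqueue command strings, and a background thread drains the queue and writes them to the controller's serial connection.

import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

final class UsbCommunicator implements Runnable {

    private final BlockingQueue<String> commands = new LinkedBlockingQueue<>();
    private final OutputStream serial;

    UsbCommunicator(OutputStream serial) { this.serial = serial; }

    /** Called by modules to schedule a movement command. */
    void enqueue(String command) { commands.add(command); }

    @Override public void run() {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                String command = commands.take();       // blocks until a command arrives
                serial.write(command.getBytes(StandardCharsets.US_ASCII));
                serial.flush();
            }
        } catch (InterruptedException | IOException e) {
            Thread.currentThread().interrupt();         // stop the communicator thread
        }
    }
}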

schema of logic flow

Calculation of the next move

One of the most frequently used features will be picking up an object. After we get the preprocessed input from the camera, we have to calculate the move to pick up the object. At this point we have a frame with the detected object and the center of the arm. We also know the real size of the table, the lengths of the arm parts and the base height. Our task is to calculate the angle for each servo in the arm so it can reach and pick up the object. We can split this problem into two smaller problems, each of which is a bit of a geometry exercise.

The first part is the base rotation (we can imagine this as a view from the top). This is a trigonometry exercise where we know all the points of a triangle (and therefore two of its sides) and want to calculate the angle between them. Then we recalculate the angle into the value for the servo controller.

The second part is the rotation of the three servos that lean the arm (we can imagine this as a view from the side). In the first version we do not know the height of the object, so we use a constant instead. The problem is very similar to the previous trigonometry problem: this time we have one side length and an angle for each arm segment, so if we substitute the right three values, we know whether we reached the object. I used brute force to search for the three angles (it took less than one second). A sketch of both calculations follows.
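A geometry sketch under assumed arm dimensions; the segment lengths and base height below are placeholders, not the real measurements, and the joint model is a generic planar 3-link arm:

final class ArmKinematics {

    static final double BASE_HEIGHT = 8.0;                   // cm, assumed
    static final double[] SEGMENTS = { 12.0, 12.0, 10.0 };    // cm, assumed link lengths

    /** Base rotation (view from the top): angle from the arm base to the object. */
    static double baseAngle(double objX, double objY, double baseX, double baseY) {
        return Math.atan2(objY - baseY, objX - baseX);
    }

    /**
     * Brute-force search (view from the side) over the three joint angles: pick the
     * combination whose end point lands closest to the target distance and height.
     */
    static double[] leanAngles(double targetDist, double targetHeight) {
        double[] best = null;
        double bestError = Double.MAX_VALUE;
        for (int a = 0; a <= 180; a += 2)
            for (int b = 0; b <= 180; b += 2)
                for (int c = 0; c <= 180; c += 2) {
                    double a1 = Math.toRadians(a);                 // first joint from horizontal
                    double a2 = a1 + Math.toRadians(b - 90);       // relative second joint
                    double a3 = a2 + Math.toRadians(c - 90);       // relative third joint
                    double d = SEGMENTS[0] * Math.cos(a1) + SEGMENTS[1] * Math.cos(a2)
                             + SEGMENTS[2] * Math.cos(a3);
                    double h = BASE_HEIGHT + SEGMENTS[0] * Math.sin(a1)
                             + SEGMENTS[1] * Math.sin(a2) + SEGMENTS[2] * Math.sin(a3);
                    double error = Math.hypot(d - targetDist, h - targetHeight);
                    if (error < bestError) { bestError = error; best = new double[] { a, b, c }; }
                }
        return best;                                          // joint angles in degrees
    }
}

With a 2-degree step the search evaluates under a million combinations, which comfortably fits the "less than one second" observation above.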



Modules

The idea is to create an application with easily insertable modules. Each module can be imagined as a series of moves with custom logic. These modules extend a template class. Each module is defined by a list of states.
Each state can be changed by one of the following triggers:

time trigger - wait some time before the next move
interesting frame trigger - when movement is detected by the camera
command execution trigger - a broadcast from the USB controller that the move was executed
So the application has all the logic for each task separated into custom modules. For example, the module for picking up an object can have the following states:

start (interesting frame trigger)
pick up object (time trigger)
move object (time trigger)
release object and return to default position (time trigger)
verify that the object was moved (wait until the last move is executed - command execution trigger)


First testing

A small issue I came across while writing with C++ OpenCV was that you cannot show an image from a background thread; only the main thread can call the imshow() method. So I used a singleton instance that keeps the images, and the main thread shows them afterwards.

One of the open problems is detecting the object's height. It's not possible to detect the object's height from a single camera; a sensor at the end of the arm or some other approach could be used.

Even though the first tests of picking up the object were successful, more calibration is required. After this calibration, learning could be used to find the best spot on the object for a successful grip. Also, the claw should hold the object firmly enough not to slip.






Tuesday, January 10, 2017

Counting dice and train wagons using computer vision

Computer vision exercises with preprocessing


Before the next project I decided to do some computer vision exercises. Each example is based on simple image preprocessing logic. No data structures or learning are required.

Dice

I got this idea while browsing the net and was curious about how hard it could be to write such a script. I'll describe the algorithm in steps; a sketch of the core dot-counting steps follows the list.


  1. movement detection: Comparing a few frames with thresholds tells us whether something is moving in the frame. Adding a small time window after the movement stops gives us more precise information.
  2. remove background: Thresholding the grayscale frame removes the background and gives us a binary image with the objects.
  3. cropping the objects: Using contours to detect the objects and then separate them by cropping.
  4. detecting dots: Inverting the image, the dots become objects that can again be simply detected using contours.
  5. filtering dots: If a die is also visible from the side, dots on that side can be recognized as well, but we can simply filter them out by comparing the aspect ratios of their bounding boxes.
  6. result: Count the recognized dots and draw some visualization onto the output frame.
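A minimal OpenCV (Java bindings) sketch of steps 2, 4 and 5 applied to a single cropped die; the Otsu threshold and the aspect-ratio tolerance are assumed values to be tuned for the actual lighting:

import java.util.ArrayList;
import java.util.List;
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.Rect;
import org.opencv.imgproc.Imgproc;

final class DiceDots {

    static int countDots(Mat croppedDieBgr) {
        Mat gray = new Mat(), binary = new Mat(), inverted = new Mat();
        Imgproc.cvtColor(croppedDieBgr, gray, Imgproc.COLOR_BGR2GRAY);
        Imgproc.threshold(gray, binary, 0, 255,
                Imgproc.THRESH_BINARY | Imgproc.THRESH_OTSU);   // remove background
        Core.bitwise_not(binary, inverted);                     // dots become white blobs

        List<MatOfPoint> contours = new ArrayList<>();
        Imgproc.findContours(inverted, contours, new Mat(),
                Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);

        int dots = 0;
        for (MatOfPoint contour : contours) {
            Rect box = Imgproc.boundingRect(contour);
            double ratio = (double) box.width / box.height;
            if (ratio > 0.7 && ratio < 1.3) dots++;             // keep roughly square dots
        }
        return dots;
    }
}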

The results are quite good. More testing with different dice and different backgrounds should be the next step.

2017_01-count-dice.zip




Train Wagon counter

Everyone is doing car counters on highways, but you can't find any for counting train wagons. For the next exercise I chose a static video of a train from YouTube. Again, I'll briefly describe the algorithm in steps; a sketch of the signal-processing step follows the list.


  1. compare frame with background: Compare every new frame to a background frame (by background I mean a frame without the train). We get the first binary frame.
  2. compare last two frames: Comparing two consecutive frames gives us the actual movement. We get another binary frame.
  3. combine binary frames: Combine these frames with an OR operation.
  4. morphological operation: Use morphological opening to remove the noise.
  5. fill in the holes: Detect areas using contours and check that they are all filled.
  6. select the area: The idea is to choose an area of the frame where we can clearly see the background between wagons. For this video we've chosen the right edge of the image.
  7. signal processing: Now we're facing a new problem: finding local minima in a signal function. Adding some thresholds and limits on repetition, we can detect the local minima.
  8. result: Count the wagons from the filtered signal and add some visualization to see when and where a local minimum is detected.
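A minimal sketch of step 7 under assumed thresholds: the "signal" is the amount of foreground in the selected edge area per frame, a gap between wagons shows up as a local minimum, and a debounce keeps the same gap from being counted twice.

final class WagonCounter {

    private final double gapThreshold;   // foreground ratio below which we assume a gap
    private final int minGapFrames;      // gap must persist this many frames (debounce)
    private int consecutiveGapFrames = 0;
    private boolean countedThisGap = false;
    private int wagons = 0;

    WagonCounter(double gapThreshold, int minGapFrames) {
        this.gapThreshold = gapThreshold;
        this.minGapFrames = minGapFrames;
    }

    /** Call once per frame with the foreground ratio (0..1) of the selected edge area. */
    void onFrame(double foregroundRatio) {
        if (foregroundRatio < gapThreshold) {
            consecutiveGapFrames++;
            if (consecutiveGapFrames >= minGapFrames && !countedThisGap) {
                wagons++;                // one stable gap between wagons = one boundary
                countedThisGap = true;
            }
        } else {
            consecutiveGapFrames = 0;    // train body visible again, arm for the next gap
            countedThisGap = false;
        }
    }

    int wagonCount() { return wagons; }
}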

This approach is very limited and works only in good light conditions, and the wagons should be of the same type and color. The next step could be using colors, shapes or more wagon details to make it more accurate.




Sunday, July 31, 2016

Play table

Using proximity sensors to play MIDI tones, combined with LED visualization


Description

The aim of this project was to create a table-sized device with multiple proximity sensors that can play MIDI tones. Each sensor has a few LEDs to show the hand's distance above the table. The table can be used by one or more people at once.



Hardware setup

First I created a prototype from cardboard to test the sensors and some of the logic behind them. Then I ordered a custom plotted sticker with a design that was painted with Bare Conductive paint. Afterwards I drilled some holes and connected the Touch Board with 7 Arduino Nanos. Each Arduino was connected to 13 LEDs. At the end I added two potentiometers: one for controlling the volume and one for changing the note setup.



Programming

Programming consists of two parts: a master (Touch Board) program that reads values from the proximity sensors and sends messages to the slaves (Arduinos), and a slave program that reads messages from the master and toggles the LEDs.
Using the Arduino IDE for programming was helpful, even more so because the Touch Board already has a built-in MIDI library, so it was quite simple to use. For communication I created a simple protocol to distribute the proximity values from the master to the slaves (a sketch follows the protocol example below). There is no noticeable delay between the real distance and the lit-up LEDs (the baud rate is set to 9600).

Protocol example:
ID_SEPARATOR # ID # VALUE_SEPARATOR # VALUE #.....
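A minimal encoding sketch of this master-to-slave message. The actual separator characters are not shown in the post, so ID_SEPARATOR and VALUE_SEPARATOR below are placeholders:

final class ProximityProtocol {

    static final char ID_SEPARATOR = 'I';      // placeholder
    static final char VALUE_SEPARATOR = 'V';   // placeholder

    /** values: one proximity reading per sensor id, in the order they are sent. */
    static String encode(int[] values) {
        StringBuilder message = new StringBuilder();
        for (int id = 0; id < values.length; id++) {
            message.append(ID_SEPARATOR).append(id)
                   .append(VALUE_SEPARATOR).append(values[id]);
        }
        return message.toString();
    }
}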



Lesson learned

My very first idea was to control the volume for each sensor by distance. I tried to use separate MIDI channels, but it didn't work, so I had to come up with another solution. I decided to use thresholds for the distances.

During testing the cardboard version, I had some small issues with the Touch Board. Sending messages out using TX like on any other Arduino didn't work here. Later, after reading the official docs, I noticed that this board has two serial ports, so I had to use Serial1 instead (a minor issue, but I didn't find it mentioned anywhere).



Update 2016-11-20


Testing

I got in touch with Julian, who is a musician. After every testing session we had, he gave me tips for more features or small changes so he could play something on the table. We went through a few refactoring cycles and some optimization (the memory limit can be reached very easily).



New features

Play table provides a USB MIDI output interface. This feature allows us to master the output from the table using a MIDI protocol.

I solved the previous issue: each sensor can now use its own volume by setting a different channel. I used this feature for two sensor modes which use the distance as a trigger to launch a single tone or a chord.

During the testing sessions, we came up with a few more sensor modes:

  • chord mode: plays a chord across the full sensor range
  • multi chord: toggles multiple chords based on the distance above the sensor
  • arpeggio mode: plays tones at a tempo that changes dynamically with the hand distance above the sensor
  • arpeggio auto mode: same as arpeggio mode, but with auto-play after the user touches the sensor


Lesson learned

After a few sessions the painted sensors were a little smudged because the paint wore away from touching. I was thinking about adding a new thin layer of paint, but transparent spray paint solved this problem for good. The resistance characteristics did not visibly change. So if you are planning to use Bare Conductive paint, you can freely use transparent spray paint to protect it from smearing.



Sunday, June 26, 2016

Creating fake SMS with Android

How to create fake sent and received SMS messages


Android fake SMS

I was curious about how hard it could be to create a fake SMS, so I decided to do some research. I found a simple app that allows you to create a fake SMS. The interface is very straightforward: the user can set the type of message, date and time. The only thing you have to agree to is setting this app as the default messaging application. At this point, I was sure it is possible to do this without root permissions.

My first idea was to send a fake intent to pretend the system received a new SMS. I did some searching, but none of it worked. Lots of answers were pointing to "android.provider.Telephony.SMS_DELIVER", which is the system intent action for broadcasting. But since Android 4.4 (KitKat), only one app is selected as the default messaging app and can receive this intent, and broadcasting it is allowed only for system applications. The permission for reading/writing SMS is also granted only to the default messaging app. There is a blog post that describes all these changes from Android 4.4.

After reading the above mentioned blog post, I updated my application so it can be used as a default messaging app (I just updated the manifest and created all the necessary receivers with empty bodies). Then I realized I don't need to send any broadcast; I can simply write an SMS through the content provider URI. Following these steps allows you to create a fake SMS (a minimal sketch follows the list).


  • to make an application act as a messaging application, you need to create the receivers and a service
  • add permissions to the manifest: WRITE_SMS, READ_SMS, RECEIVE_SMS
  • at the start of an activity (or when writing a new SMS), you have to ask the user to set this application as the default messaging application
  • don't forget to check whether the user has granted the permissions to read/write SMS (request the required permission if not)
  • just write a new message: getContentResolver().insert(Uri.parse("content://sms/inbox"), values);
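A minimal sketch of the last step: inserting a fake received SMS through the content provider. It only works while the app is the default messaging app and holds the SMS permissions; the sender and text are of course up to you.

import android.content.ContentValues;
import android.content.Context;
import android.net.Uri;
import android.provider.Telephony;

final class FakeSms {

    static void insertReceived(Context context, String sender, String text) {
        ContentValues values = new ContentValues();
        values.put(Telephony.Sms.ADDRESS, sender);
        values.put(Telephony.Sms.BODY, text);
        values.put(Telephony.Sms.DATE, System.currentTimeMillis());
        values.put(Telephony.Sms.READ, 0);                       // show as unread
        values.put(Telephony.Sms.TYPE, Telephony.Sms.MESSAGE_TYPE_INBOX);
        context.getContentResolver().insert(Uri.parse("content://sms/inbox"), values);
    }
}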

Now you can create a fake SMS generator for your friends, family or even favourite transport service :) .

download source code here



Monday, February 29, 2016

Self playable game on smartphone

Connecting an Android phone to a servo motor via USB OTG

Hardware

I recently put together a Lego phone holder that can be rotated in one direction (the y-axis, along the longer side of the phone). The end of the holder is attached to a servo motor. The servo motor is connected to the Android smartphone over USB OTG via a motor module. The motor module is a controller that can drive up to 24 servo motors and runs on 5 volts.

Software

The motor module already has its own protocol burned in. It is a combination of rotation and speed values separated by newline characters.

The second part was an Android application that sends messages to this servo module over the serial protocol. I used an example of USB serial communication with an Arduino and rewrote it for my device.

Game

For this exercise I chose a game that I made about a year ago. It is a simple game that uses the accelerometer to move a rocket through gates. Leaning the device to the sides allows you to control the position of the rocket. The goal is to fly through as many gates as you can. The speed increases with every gate you pass.

Steps

My first idea was to detect the values of the maximum rotation to both sides, then use these values to dynamically calculate all rotations of the holder. But after a few changes to my holder construction, I realized that my configuration is a little bit different each time. So I created a calibration method that runs at start-up and detects the limit values by itself (a minimal sketch follows). During the game, the values are then recalculated from these limits. For better understanding, please watch the video below.
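A minimal sketch of the calibration idea (assumed structure, not the actual app code): the start-up routine records the servo values at which the holder hits its physical limits, and the game then maps the desired tilt onto that calibrated range.

final class HolderCalibration {

    private int minServoValue;   // servo value at the left physical limit
    private int maxServoValue;   // servo value at the right physical limit

    /** Called at start-up once the limit probing has found both end stops. */
    void setLimits(int minServoValue, int maxServoValue) {
        this.minServoValue = minServoValue;
        this.maxServoValue = maxServoValue;
    }

    /** tilt: desired lean from the game, -1 (full left) .. +1 (full right). */
    int servoValueFor(double tilt) {
        double clamped = Math.max(-1.0, Math.min(1.0, tilt));
        double normalized = (clamped + 1.0) / 2.0;               // map to 0..1
        return (int) Math.round(minServoValue + normalized * (maxServoValue - minServoValue));
    }
}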

Result

The practical usage of a "self-playable holder" is literally none. But if you look at it as an exercise, it is a nice introduction to connecting an Android device to a servo motor.














