Tuesday, August 13, 2013

Capturing Arm Motion in Real-time using Two IMUs


Created by Cheni Chadowitz, Rui Wang, Abhinav Parate, and Deepak Ganesan


Introduction

This project was one step towards using a single, wrist-worn sensor to recognize the motions we make throughout our day in the midst of various activities. Activities like eating, drinking, running, and smoking all have unique motions associated with them, and by using an IMU (inertial measurement unit) worn on the wrist, we hope to be able to recognize when known activities begin and end. For our purposes, we used the MPU-9150, produced by InvenSense Inc. The MPU-9150 is a 9-axis sensor that includes a 3-axis gyroscope, a 3-axis accelerometer, and a 3-axis compass. Together, these sensors can produce an error-corrected quaternion that indicates the 3D orientation of the sensor with respect to magnetic North and gravity.

MPU-9150 by InvenSense Inc.
Source: http://invensense.com/mems/gyro/documents/AN-MPU-9150IMF%20InvenSense%20MotionFit%20SDK%20Quick%20Start%20Guide.pdf
For this project in particular, we used two MPU-9150 sensors to produce a realistic, real-time graphical representation of an arm moving in 3D space. By mounting one sensor on the bicep just below the shoulder and a second sensor on the wrist, we can reconstruct the orientation of the entire arm in 3D space. We found this representation to be fairly intuitive when used to identify the activity the subject is engaged in at the time. Using this graphical representation, we will gather and label a large amount of training data so that we are able to recognize a variety of activities when subjects are wearing a single, wrist-mounted sensor.
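Each sensor reports an absolute orientation (a quaternion relative to North and gravity), so each arm segment can be posed independently: rotate a reference "arm hanging straight down" vector by the corresponding quaternion, then chain the two segments end to end. The minimal Processing sketch below illustrates the idea; it is not the code from our download, and it skips the sensor-to-screen axis calibration that the real sketch performs.

float[] qUpper = {1, 0, 0, 0};   // w, x, y, z from the bicep sensor (placeholder)
float[] qLower = {1, 0, 0, 0};   // w, x, y, z from the wrist sensor (placeholder)

void setup() {
  size(600, 600, P3D);
}

void draw() {
  background(255);
  PVector shoulder = new PVector(width/2, height/4, 0);
  PVector rest = new PVector(0, 1, 0);                      // arm hanging straight down

  PVector elbow = PVector.add(shoulder,
      PVector.mult(rotateByQuat(rest, qUpper), 120));       // upper arm, 120 px
  PVector wrist = PVector.add(elbow,
      PVector.mult(rotateByQuat(rest, qLower), 100));       // forearm, 100 px

  stroke(0);
  strokeWeight(4);
  line(shoulder.x, shoulder.y, shoulder.z, elbow.x, elbow.y, elbow.z);
  line(elbow.x, elbow.y, elbow.z, wrist.x, wrist.y, wrist.z);
}

// Rotate vector v by the unit quaternion q = (w, x, y, z): v' = v + 2w(u x v) + 2u x (u x v)
PVector rotateByQuat(PVector v, float[] q) {
  PVector u = new PVector(q[1], q[2], q[3]);
  PVector t = PVector.mult(u.cross(v), 2);
  return PVector.add(v, PVector.add(PVector.mult(t, q[0]), u.cross(t)));
}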

We show a brief demo in the video above. Both sensors were mounted in the correct alignment with elastic medical bandages, and then the subject went through a variety of short motions. The video on the left is a real-time representation of the orientations the sensors are producing, shown by manipulating a 3D model of a mannequin. The video on the right is of the subject himself. The software used is described and provided below.

Software Setup:

The software provided was developed using Processing due to its ease of use and the ability to hit the ground running. Since Processing is Java-based, you can view and edit the source code in any Java environment; however, it may be easier to download and install the Processing IDE and edit the source code there.

Currently, the latest version of the software is available on GitHub, but a .zip archive download is available here, too.

Bluetooth setup:

In Windows and Mac OS X, you must first pair the two sensors with your computer. It is normally easier if you pair one at a time and wait for each device to complete the setup before pairing the next.

The software has not been tested under Linux; however, if you are able to pair both sensors (as with Windows and Mac) and create a serial port for each, it may work successfully. The serial library included with Processing needs to be able to access the serial port. Check here for possible tips.

Once you have paired your sensors, you need to edit the config.json file located in the data folder to indicate the correct serial ports for each sensor. If you start the sketch (or the included executables) without changing the configuration, it will display the available serial ports so that you can use the correct values in the configuration file. Once you have saved the configuration, it will attempt to connect to the sensors the next time you start it.
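As a rough illustration of the startup logic, the snippet below lists the available ports and then opens the two configured ones. The field names and baud rate shown here are only examples; check the config.json and source code included in the download for the actual values.

import processing.serial.*;

Serial bicepPort, wristPort;

void setup() {
  // Print every serial port the library can see, so the right names can be
  // copied into config.json.
  println(Serial.list());

  // Field names ("bicepPort", "wristPort") and the baud rate are illustrative.
  JSONObject config = loadJSONObject("config.json");   // lives in the data folder
  bicepPort = new Serial(this, config.getString("bicepPort"), 115200);
  wristPort = new Serial(this, config.getString("wristPort"), 115200);
}

void draw() { }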

Make sure to check below for your particular operating system to see any additional tips and details.

Windows: 

At the time of writing, the version of the serial library included with Processing (RXTXcomm.jar) has a bug that prevents it from working correctly. This will only affect you if you try to edit the source code in the Processing IDE after adding the source code as a sketch; the included standalone executables have already been built with the fixed serial library. To fix this, follow these steps (after you have installed Processing):

  1. Download and extract the zip archive containing the fixed library here.
  2. Open [Processing folder]\modes\java\libraries\serial\library\
  3. Place the included RXTXcomm.jar library in the location you opened in the previous step.
  4. Place the included rxtxSerial.dll file in the windows32 folder.

Mac OS X:

If you are using the provided executable, you must edit the config.json file first. However, it is located inside the app itself, so to edit the configuration file you must first right-click on the app and choose Show Package Contents. Then browse to Contents/Resources/Java/data/, where you should find the config.json file.

To run the executable, follow the instructions in the provided readme.txt file.


Running the sketch:

Once you have Processing installed, you should copy the source code into your Sketchbook folder. To find out where this is located, launch the Processing IDE and go to File > Preferences.
If you would like to use a different model, save your .obj files in the data folder within the sketch folder.
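Processing loads Wavefront OBJ files directly, so swapping in another model is just a matter of loading it by name. In this fragment the file name is a placeholder; use whatever file you placed in the data folder (any materials or textures the OBJ references must sit there as well).

PShape model;

void setup() {
  size(600, 600, P3D);
  model = loadShape("mannequin.obj");   // placeholder name -- your file in the data folder
}

void draw() {
  background(255);
  lights();
  translate(width/2, height/2, 0);
  shape(model);
}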

The source code has been commented so that it is easy to understand and edit.


Features:

There are a few simple features available. 

  • If you run the sketch without specifying the serial port numbers in the config.json file, it will list the available ports so that you can easily specify the correct ones in the config file. Once the sketch has successfully connected to the sensors, you should see the arm moving in conjunction with the sensors. If one or both segments of the arm do not appear to be moving, you may have selected the input port for the sensor rather than the output (each sensor exposes two serial ports, since the sensors can also receive commands).
  • If you would like to see (below right) or hide (below left) the full human model while controlling the arm, press 't' while the window has focus to toggle that view.
Hiding the full body (left). Showing the full body (right).




  • By clicking and dragging the mouse horizontally across the window, you can rotate the scene. Both of these interactions are sketched in the snippet below.
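Stripped to their essentials, the two interactions look roughly like this (variable names are illustrative, not necessarily those used in the actual source):

boolean showFullBody = true;   // toggled with the 't' key
float sceneRotation = 0;       // driven by horizontal mouse drags

void setup() {
  size(600, 600, P3D);
}

void keyPressed() {
  if (key == 't') showFullBody = !showFullBody;
}

void mouseDragged() {
  // Horizontal drag distance since the last event rotates the scene.
  sceneRotation += (mouseX - pmouseX) * 0.01;
}

void draw() {
  background(255);
  translate(width/2, height/2, 0);
  rotateY(sceneRotation);
  // ... draw either the full model or just the arm, depending on showFullBody
}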

Hardware Setup:

The software has been designed with the expectation that the sensors will be worn in a particular orientation (as shown below). 

The first sensor should be worn on the outside of the right bicep, just below the shoulder. The micro-USB port should be pointing up, the on/off switch should be pointing forward, and the battery should be resting against your arm.

The second sensor should be worn on the top of your right wrist, where you typically have the face of a watch. Again, with your arm hanging straight down by your side, the micro-USB port should be pointing up with the on/off switch pointing forward, and the battery resting against your wrist.

The two sensors should be in line with each other when your arm is hanging down by your side.




In the video shown above, we used a 4"-wide elastic bandage (normally used for wrapping sprains, etc.) cut lengthwise to make two 2"-wide strips, each about 2 ft long. We wrapped them a couple of times around the arm first, then positioned the sensors and continued wrapping the rest of the bandage. A more convenient solution may be an arm or wrist band with a slot for a music player.




Friday, July 2, 2010

Realtime Tracking With a Pan-Tilt Camera





Introduction

The human eye is amazingly adept at tracking moving objects. The process is so natural to humans that it happens without any conscious effort. While this remarkable ability depends in part on the human brain's immense processing power, the fast response of the extraocular muscles and the eyeball's light weight are also vital. Even a small point and shoot camera mounted on a servo is typically too heavy and slow to move with the agility of the human eye. How, then, can we give a computer the ability to track movement quickly and responsively?

Thanks to recent progress in camera miniaturization, small, easily maneuverable cameras are now readily available. In this project, we use a first-person-view (FPV) camera intended for use on model airplanes. The camera is mounted on servo motors that can aim it with two degrees of freedom. The entire assembly weighs just 32 grams, only slightly more than a typical human eyeball. Coupled with a GPU-based tracking algorithm, the FPV camera allows the computer to robustly track a wide array of patterns and objects with excellent speed and stability.

The above video clip shows a short demonstration. We built a simple camera tracking system using the FPV camera. The video demonstrates how the tracking camera snaps to a person moving in front of it. We show both the view captured by the tracking camera (the smaller video), and the view from a different camera that shows the movement of the tracking camera (the larger video).

How to build it

Parts List (links to parts included)

Note: the software (downloadable below) requires a PC with CUDA-capable graphics hardware (GPU).

The Camera
We used a 420-line pan-tilt camera manufactured by Fat Shark. The camera is mounted on two servo motors, which allow for about 170° of rotation on the yaw axis and 90° of rotation on the pitch axis. The camera produces composite video in PAL format. An NTSC version of the camera is available as well, but it was out of stock when we ordered our parts.

Power
Because the video transmitter requires 12 volts, we power the system from a 12V rechargeable lithium battery. We use a voltage regulator to provide 5 volts for the camera and the servos. We added capacitors before and after the regulator to eliminate voltage fluctuations. Both the video and servo cables from the camera connect to headers on the voltage regulator circuit, which provide the regulated 5V supply. The servo control signals and the video output are passed through to a second set of headers, which connect to the Arduino and the frame grabber, respectively. The video output header additionally provides a 12V power supply for the video transmitter.


A picture and the schematic of the voltage regulator circuit.


Digitizing the Video

We used a USB frame-grabber manufactured by StarTech to read the video into the host PC. The frame grabber supports both NTSC and PAL composite video, so the NTSC camera could be used without any hardware changes. We used a video cable sold by Digital Products Company to connect the frame grabber to the video output header on the voltage regulator circuit. The cable also has a power jack, which provides 12 volts to the video transmitter.



The camera is connected to the frame grabber (L), and the Arduino is connected to the servos (R).

The frame grabber provides 640 × 480 interlaced video at 25 FPS. For efficiency, we downsample the video to half resolution for tracking. Our downsampling filter discards the even lines to eliminate errors due to combing artifacts. We display the video at full resolution, after eliminating combing artifacts with a standard deinterlacing filter.
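In isolation, the downsampling step amounts to keeping one field of each interlaced frame. The fragment below shows the idea on a grayscale pixel buffer; it is illustrative Java, not the project's GPU code, and the horizontal averaging is just one reasonable choice.

// Keep only the odd scanlines (one field) and halve each kept line horizontally.
int[] downsampleHalf(int[] frame, int w, int h) {
  int[] out = new int[(w / 2) * (h / 2)];
  for (int y = 1; y < h; y += 2) {               // discard the even lines
    for (int x = 0; x + 1 < w; x += 2) {
      int a = frame[y * w + x];                  // grayscale values, 0-255
      int b = frame[y * w + x + 1];
      out[(y / 2) * (w / 2) + x / 2] = (a + b) / 2;
    }
  }
  return out;
}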


Controlling the Servos

We used an Arduino Diecimila to generate the control signals for the servos. The Arduino receives the desired pulse widths for the servos over its serial port. Each pulse width is encoded as a 16-bit integer, with 1 bit reserved to select one of the two servos. We use the Servo library included with the Arduino software to generate the PWM signals.
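On the host side, a servo command therefore amounts to packing a pulse width and a servo-select bit into two bytes and writing them to the serial port. The sketch below is only an illustration of that packing; the actual bit layout and byte order used by our firmware may differ, and the port name and baud rate are placeholders.

import processing.serial.*;

Serial arduino;

void setup() {
  arduino = new Serial(this, "COM3", 57600);   // placeholder port name and baud rate
  sendServoCommand(0, 1500);                   // center the pan servo (1500 us pulse)
  sendServoCommand(1, 1200);                   // tilt servo to a 1200 us pulse
}

// Pack the pulse width (microseconds) and a servo-select bit into a 16-bit word.
// Putting the select bit in the MSB and sending the high byte first is an
// assumption for illustration, not necessarily what our firmware expects.
void sendServoCommand(int servo, int pulseWidthUs) {
  int word = (servo << 15) | (pulseWidthUs & 0x7FFF);
  arduino.write((word >> 8) & 0xFF);   // high byte
  arduino.write(word & 0xFF);          // low byte
}

void draw() { }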

Wireless Operation

We can eliminate the wired connections to the host PC with a wireless transmitter and receiver for the video, and a wireless RF link for the servos.




The RF Link

The wireless RF link transmits the servo angles digitally, with a range of up to 500 feet. Both the transmitter and the receiver connect to Arduinos running the VirtualWire library. The transmitter Arduino (connected to the host PC) broadcasts each 2-byte angle, followed by a byte of all zeros to keep the transmitter and receiver in sync. The receiver Arduino updates the servo angles when an angle is transmitted correctly (that is, when all 3 bytes are received). Because VirtualWire is incompatible with the Arduino's Servo library, we use the SoftwareServo library to control the servos.
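For reference, the on-air frame is just three bytes. The Java fragment below illustrates the format only; the real encoding and decoding run on the two Arduinos with VirtualWire, and the byte order shown is an assumption.

// Frame: two bytes of angle followed by a zero sync byte.
byte[] encodeAngleFrame(int angle) {
  return new byte[] { (byte) (angle >> 8), (byte) angle, 0 };
}

// Returns the decoded angle, or -1 if the frame is incomplete or out of sync.
int decodeAngleFrame(byte[] frame) {
  if (frame.length != 3 || frame[2] != 0) return -1;
  return ((frame[0] & 0xFF) << 8) | (frame[1] & 0xFF);
}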

Wireless Video

The wireless video transmitter broadcasts NTSC or PAL video with a range of up to 500 meters. The transmitter plugs directly into the RCA jack from the camera. The 12 volt line on the video output header powers the transmitter. The RCA jack on the receiver plugs directly into the frame grabber. The receiver can be powered by a generic 12 volt power adapter.

Software

The software for the project, including source code and build files, is available via the links below. The tracking software is based on an algorithm developed jointly by the UMass Computer Vision and UMass Computer Graphics Labs. It is written for an NVIDIA GPU using CUDA; running on the GPU is necessary to achieve the real-time rates we show in the video. Our implementation has only been tested on Windows 7, but we do not foresee major difficulties in porting it to other versions of Windows. The software can also be built for Linux (Ubuntu), but we did not have a frame grabber that worked under Linux, so we have not built a complete system there. Still, we have tested each of the components of our system under Linux and they all work properly. More details about the software and how to build it are given in the attached README files.

Download Software