We try to predict with machine learning whether the person carrying the device is running, walking, sitting, or biking. The iPhone now has a dedicated motion coprocessor (the M8); it measures acceleration and rotation rate and predicts your activity. Our device is in your pocket and we will predict what it is doing. The code is on GitHub.

The talk is about feature engineering, not the basics of machine learning.
I have prepared a log file named activitydata.csv, which I recorded with my smartphone, so I now have labelled activities. The device has six degrees of freedom: three acceleration axes and three rotation axes. Since I have this motion data together with the labelled activities, it is supervised learning: every sample comes with a description of the activity being performed.
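A minimal sketch of loading such a log file with pandas. The column names here are assumptions; the actual schema of activitydata.csv is not shown in the talk.

```python
import pandas as pd

# Load the labelled sensor log (column names are hypothetical).
df = pd.read_csv("activitydata.csv")
# Expected columns: accel_x, accel_y, accel_z (acceleration),
# rot_x, rot_y, rot_z (rotation rate), activity (the label).
print(df["activity"].unique())  # e.g. ['sitting', 'walking', 'running', 'biking']
```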

The first feature I want to introduce is absolute acceleration.
Take all three acceleration axes and compute the magnitude to get the absolute acceleration. Plotting it, you can see differences between the activities, so it is a promising candidate. As the actual feature we take the difference between the rolling max and the rolling min (computed with pandas) of the absolute acceleration. (Demo of charts for the feature.) In a scatterplot of the feature there is a clear difference between sitting and running, but walking and biking cannot be separated.
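A sketch of this first feature, assuming the hypothetical column names from the loading example above and an arbitrary window size:

```python
import numpy as np

# Absolute acceleration: Euclidean norm of the three acceleration axes.
df["abs_accel"] = np.sqrt(df["accel_x"]**2 + df["accel_y"]**2 + df["accel_z"]**2)

# Feature 1: spread of the signal inside a rolling window,
# i.e. rolling max minus rolling min (the window size is an assumption).
window = 100
df["accel_spread"] = (df["abs_accel"].rolling(window).max()
                      - df["abs_accel"].rolling(window).min())
```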

We need a second feature - the rotation rate of the device.
Rotation rate means how fast the device turns. The values while walking and biking show a good difference. Engineers know what to do with periodic signals: use the FFT algorithm. You can throw a signal in and get its frequency spectrum back. We run the FFT over the whole dataset and can see the difference between biking and walking. So this is the second, and most important, feature for separating all the activities.
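A sketch of how the dominant frequency of one window of the rotation-rate signal could be extracted with NumPy's FFT; the sample rate and column name are assumptions:

```python
import numpy as np

def dominant_frequency(signal, sample_rate):
    """Return the strongest frequency (in Hz) of a 1-D signal."""
    signal = signal - signal.mean()           # remove the DC offset
    spectrum = np.abs(np.fft.rfft(signal))    # magnitude spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return freqs[spectrum.argmax()]

# Example: one window of rotation rate, sampled at an assumed 100 Hz.
# freq = dominant_frequency(df["rot_x"].to_numpy()[:512], sample_rate=100)
```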

Now we come to the machine learning part - I did my best to work out how to estimate the activities.
The two features produce four clusters for the four different activities - it looks like an easy case. If the accuracy score is 100% you did something wrong, but here I don't know what I did wrong. The SVC figured out boundaries for every activity. Let us try a real example with the TinkerForge: I connect to it, load the classifier, calculate the features from the raw data, and then fire everything through the predictor.
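A sketch of the classification step with scikit-learn's SVC, assuming the two engineered features have been computed per window into the hypothetical columns accel_spread and dominant_freq:

```python
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Two features per window plus the activity label.
X = df[["accel_spread", "dominant_freq"]].dropna()
y = df.loc[X.index, "activity"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)
clf = SVC(kernel="rbf")
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))  # accuracy on held-out windows

# Live prediction: compute the same two features from a window of fresh
# sensor readings and fire them through the predictor.
# clf.predict([[spread, freq]])  # -> e.g. array(['walking'], dtype=object)
```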

Live demo


Not really 100 percent, but if you look at your Apple device, it does detect every action you are doing.

Questions
(Questions are inaudible)