Thursday, September 22, 2011

Blog #10

Sensing foot gestures from the pocket


Authors:
Jeremy Scott - University of Toronto, Toronto, ON, Canada
David Dearman - University of Toronto, Toronto, ON, Canada
Koji Yatani - University of Toronto, Toronto, ON, Canada
Khai N. Truong - University of Toronto, Toronto, ON, Canada


Proceeding
UIST '10 Proceedings of the 23rd annual ACM symposium on User interface software and technology

Summary #1
The first part of the paper is a pilot study on how well people can perform foot-based gestures without visual feedback. The study directly measured the position of the foot while participants performed 4 different gesture types, with no visual feedback given.

Hypothesis
Since it was unknown whether people would be able to perform foot gestures accurately, the pilot study simply tested whether the research direction was reasonable.

Methods
Participants were asked to make 4 different types of foot gestures:

  • Dorsiflexion - moving the foot up with the heel stationary
  • Plantar flexion - moving the heel up with the toes stationary
  • Heel rotation - rotation about the heel
  • Toe rotation - rotation about the toes.
The results were gathered by direct position measurement, using 6 motion capture cameras and a marker model attached to the foot.
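
As a rough illustration of what the motion capture setup enables (this is not the paper's actual pipeline), below is a minimal Python sketch of estimating a heel-rotation angle from heel and toe marker positions. The marker layout, coordinates, and 30-degree target are all hypothetical.

    import numpy as np

    def rotation_angle_deg(heel, toe_start, toe_end):
        """Signed angle (degrees) swept about the heel, from marker
        positions projected onto the floor plane (x, y)."""
        v0 = np.asarray(toe_start, dtype=float) - np.asarray(heel, dtype=float)
        v1 = np.asarray(toe_end, dtype=float) - np.asarray(heel, dtype=float)
        cross = v0[0] * v1[1] - v0[1] * v1[0]  # z-component of the 2D cross product
        dot = v0[0] * v1[0] + v0[1] * v1[1]
        return np.degrees(np.arctan2(cross, dot))

    # Hypothetical trial: the toe marker moves while the heel stays planted.
    achieved = rotation_angle_deg(heel=(0.0, 0.0),
                                  toe_start=(0.25, 0.0),
                                  toe_end=(0.22, 0.12))
    error = abs(30.0 - achieved)  # deviation from a 30-degree target rotation
    print(f"achieved {achieved:.1f} deg, error {error:.1f} deg")

The same signed-angle computation would work for toe rotation by treating the toe marker as the pivot instead.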



Results
Dorsiflexion was the least efficient gesture: participants did not prefer the motion, it took the most time, and it was the least accurate. The other gestures averaged less than 9 degrees of error.

Summary #2
Following the pilot study, a second study was done to determine whether the motion capture cameras could be replaced with a simple accelerometer located in or around the participants' pants. Since dorsiflexion was not a good gesture, it was left out of this study.

Hypothesis
The researchers believed that algorithms analyzing acceleration data from an iPhone could be enough to recognize foot gestures without visual feedback.

Methods
The phone was placed around the participants' hips in three locations: in the pocket, on the side (as if in a holster), and in the back pocket. The participants were then asked to make 10 different gestures with their feet while the phone recorded the acceleration data. 64 FFT coefficients were generated from the data, and Naive Bayes was used to classify the motion. All participants were right-footed.
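
Since the paper only names the pieces (64 FFT coefficients fed to Naive Bayes), here is a minimal Python sketch of how that feature-extraction and classification step could look. The window shape, the FFT length, and the use of scikit-learn's GaussianNB are my assumptions, not details from the paper.

    import numpy as np
    from sklearn.naive_bayes import GaussianNB

    def fft_features(window, n_coeffs=64):
        """Convert one window of accelerometer samples (n_samples x 3,
        one column per axis) into FFT magnitude features. Fixing the FFT
        length at 2 * n_coeffs yields n_coeffs + 1 coefficients per axis;
        the first n_coeffs are kept so every window gives equal-size features."""
        window = np.asarray(window, dtype=float)
        spectrum = np.abs(np.fft.rfft(window, n=2 * n_coeffs, axis=0))
        return spectrum[:n_coeffs].ravel()  # first 64 magnitudes per axis, flattened

    def train_classifier(windows, labels):
        """Fit Naive Bayes on labelled gesture recordings."""
        X = np.array([fft_features(w) for w in windows])
        clf = GaussianNB()  # Gaussian variant assumed; the paper just says Naive Bayes
        clf.fit(X, labels)
        return clf

At recognition time, the phone would slide a window over the live accelerometer stream and call clf.predict(fft_features(window).reshape(1, -1)) on each one.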

Results
The hypothesis was largely supported, with an accuracy of 82-92%. The least accurate position for the phone was the back pocket.

Discussion
I was surprised they were able to get so much accuracy out of just an accelerometer. While it would have been nearly foolproof to simply attach strain sensors to the participants' feet, that approach is not as elegant or user friendly. One fault I can imagine is that errors would not be easily undone. Since roughly one out of every ten inputs will be misclassified, there would need to be some mechanism to give feedback on an error and to easily undo the previous action. I don't know how to start with this issue, and I feel that the researchers acknowledge it.
Since this amount of precision can be achieved with acceleration alone, I wonder how many other problems could be solved by using all of a smartphone's sensors with the same efficiency. This suggests a large space of user-input problems that have not yet been explored.
