Tuesday, September 6, 2011

Paper Reading #3: Pen + touch = new tools

Pen + touch = new tools

Ken Hinckley, Koji Yatani, Michel Pahud, Nicole Coddington,
Jenny Rodenhouse, Andy Wilson, Hrvoje Benko, and Bill Buxton
Microsoft Research, One Microsoft Way, Redmond, WA 98052

Released: UIST '10, Proceedings of the 23rd annual ACM symposium on User Interface Software and Technology


Summary:
Using pen and touch interfaces simultaneously allows new types of gestures to be created. The paper presents and reviews Manual Deskterity, a prototype scrapbook application built on Microsoft Surface. Normally, pen and touch interfaces are either not used together or are used interchangeably, with no differentiation between them, such as using the pen for the same tasks as a finger. Several design considerations were based on observed behaviors:

  • Touch and pen had specific roles suited to the capabilities of each: hands are good for manipulation, while pens are good for precise input. 
  • When both hands are needed for touch interaction, people tuck the pen into their palm, returning that hand to its usual role.
  • People hold clippings of pictures with their non-preferred hand.
  • People hold the paper while writing.
  • People frame paper with their index finger and thumb while reading.
  • People cut clippings from inspirational material while holding it with their non-preferred hand.
  • People arrange their workspace so that the material is close.
  • People pile "interesting things" and hold them with their non-preferred hand.
  • Some people draw along the edges of clippings.
  • People hold their place in a stack of papers with their non-preferred thumb.
Using these behaviors, a system was designed that uses pen and touch synergistically but does not require both for every task, since the pen may not always be available. The core idea is that the "pen writes, touch manipulates."
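
To make that division of labor concrete, here is a minimal sketch of the "pen writes, touch manipulates" dispatch rule. This is my own illustration in Python, not the authors' code; the event fields and Canvas methods are assumptions.

    # Minimal sketch of the core rule: the pen writes, touch manipulates.
    # Event fields and Canvas methods are hypothetical, not the paper's API.

    class Canvas:
        def draw_ink(self, stroke):
            print("ink stroke:", stroke)

        def move(self, target, delta):
            print("move", target, "by", delta)

    def dispatch(event, canvas):
        if event["device"] == "pen":
            # The pen defaults to leaving ink on the canvas.
            canvas.draw_ink(event["stroke"])
        elif event["device"] == "touch":
            # Fingers default to manipulation: dragging objects around.
            canvas.move(event["target"], event["delta"])

    dispatch({"device": "pen", "stroke": [(0, 0), (5, 5)]}, Canvas())
    dispatch({"device": "touch", "target": "photo1", "delta": (10, 0)}, Canvas())

The interesting gestures in the paper arise when the two channels are combined, so that a held object changes what the pen means.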


Hypothesis:

The purpose of the study was to gain "insight as to how people work with physical tools and pieces of paper."


Methods:
The study was conducted on the Microsoft Surface platform. The pen is an infrared pen, which the Surface senses differently from finger touches, so the two inputs can be distinguished. Several gestures combine pen and touch. One gesture cuts a picture by holding it with a finger and fully crossing it with the pen. Copying an image is done by holding the image and "peeling off" a copy with the pen. Any other image can serve as a straightedge for other gestures, such as the cut gesture. When a finger is pressed down, a transparent dot appears at the original location, and moving the finger away from the dot opens a color menu. Straight lines can be drawn with the tape tool, an extension of the built-in ink tool: to enter tape mode, the user holds a finger on the ink line. Eight participants were chosen, all right-handed.
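
To illustrate how such a combined gesture might be resolved in software, here is a rough Python sketch of the cut and peel-off-copy interpretations. The helper names, object representation, and bounding-box geometry are my own assumptions, not the Manual Deskterity implementation.

    # Rough sketch of resolving a pen stroke against objects held by touch.
    # All names and the bounding-box geometry are hypothetical illustrations.

    def contains(bounds, pt):
        (x0, y0, x1, y1), (x, y) = bounds, pt
        return x0 <= x <= x1 and y0 <= y <= y1

    def crosses_completely(stroke, bounds):
        # A "full cross" starts and ends outside the object but passes through it.
        passes_through = any(contains(bounds, pt) for pt in stroke)
        return (passes_through and not contains(bounds, stroke[0])
                and not contains(bounds, stroke[-1]))

    def interpret_pen_stroke(stroke, held_objects):
        # A pen stroke on an object held by the non-preferred hand becomes a
        # tool action; with nothing held, the pen falls back to plain ink.
        for obj in held_objects:
            if crosses_completely(stroke, obj["bounds"]):
                return ("cut", obj)        # hold + fully crossing stroke = scissors
            if contains(obj["bounds"], stroke[0]):
                return ("peel_copy", obj)  # hold + stroke dragged off = copy
        return ("ink", stroke)

For example, interpret_pen_stroke([(0, 5), (5, 5), (12, 5)], [{"bounds": (2, 2, 8, 8)}]) returns a cut, because the stroke enters and exits the held object's bounds.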




Results:
The paper's goal was to present this new interface and observe how people behave with it. Pen + touch was found helpful in supplementing input and in letting users switch between pen and touch modes. Since not every task can be implemented exactly as it is performed in real life, the gestures were "designed" to map the possible pen and touch movements in a logical way. Because the gestures are not obvious, the test users needed some instruction. The authors themselves are not convinced that their implementation will scale to a full-blown application, but the underlying ideas could.

Discussion:
In my opinion, the new tools created by using the pen to enhance gestures could just as easily have been implemented with menus or buttons. However, the point of this research is to find ways of moving beyond that line of thinking. Since both the theory and the application are incomplete, I don't find this as interesting as the Hands-on Math article.
