Thursday, October 13, 2011

Paper Reading #18


Biofeedback game design: using direct and indirect physiological control to enhance game interaction



Proceeding
CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems
Summary
Several physiological sensors are tested for enhancing a video game. Primary interaction is performed with a normal Xbox controller, while other sensor data such as respiration and heart rate are collected passively.

Hypothesis
1. How do users respond when physiological sensors are used to augment rather than replace game controllers?
2. Which types of physiological sensors (indirect versus direct) work best for which in-game tasks?


Methods
Six sensors are used to gather data from the user.

  • Gaze interaction. Cameras track a user's eye movements to determine where they are looking. This is a direct physiological input.
  • Electromyography. Sensors are placed longitudinally along a muscle group to determine the electrical activation level of the muscle.
  • Electrodermal activity. This determines the level of conductance of a user's skin. This indicates psychological arousal and as such is an indirect input.
  • Electrocardiography. This reads the activity of the heart. Even though this is autonomously controlled, it can be mildly influenced consciously. Heart rate is an indirect input. 
  • Respiration. When a person breathes in, their chest expands. A strain sensor is placed around the body to determine the level of respiration. This is directly controllable.
  • Temperature. Homeostasis keeps internal body temperature constant, but in this case the reading is directly controllable by blowing hot air onto the sensor.
Together, these sensors were used to control different aspects of a video game in different studies. The size of enemies was varied to make them easier to hit; since larger enemies are more intimidating, only a shadow of the sprite was made larger. Speed and jumping height were varied to make the avatar faster or able to jump higher. The weather was changed to make snow appear across the screen. The fallback weapon, a flamethrower, has a variable length. Lastly, "Medusa's Gaze" is a power-up that uses eye tracking to let the user freeze enemies by looking at them. These variables were changed across two test conditions.
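
To make the mapping idea concrete, here is a minimal Python sketch of how physiological readings might be scaled into game variables like those above. The sensor names, ranges, and scale factors are my own illustrative assumptions, not the paper's implementation.

    def normalize(value, low, high):
        """Clamp a raw sensor reading into the 0..1 range."""
        span = high - low
        if span <= 0:
            return 0.0
        return max(0.0, min(1.0, (value - low) / span))

    def map_sensors_to_game(readings, baselines):
        """Map normalized sensor values onto game variables.

        `readings` and `baselines` are dicts keyed by sensor name; the
        specific sensors, ranges, and scale factors below are illustrative
        assumptions, not the values used in the study.
        """
        # Indirect sensor (heart rate) drives a background variable: weather.
        arousal = normalize(readings["heart_rate"],
                            baselines["hr_rest"], baselines["hr_max"])
        snow_intensity = arousal  # more arousal -> heavier snow

        # Direct sensor (respiration strain) drives flamethrower length.
        breath = normalize(readings["respiration"],
                           baselines["exhale"], baselines["inhale"])
        flame_length = 1.0 + 2.0 * breath  # 1x to 3x base length

        # Direct sensor (forearm EMG) drives avatar speed and jump height.
        effort = normalize(readings["emg"], 0.0, baselines["emg_max"])
        speed_multiplier = 1.0 + 0.5 * effort

        return {"snow_intensity": snow_intensity,
                "flame_length": flame_length,
                "speed_multiplier": speed_multiplier}

    # Example frame update with made-up readings.
    state = map_sensors_to_game(
        {"heart_rate": 95.0, "respiration": 0.7, "emg": 0.4},
        {"hr_rest": 60.0, "hr_max": 140.0, "exhale": 0.2,
         "inhale": 1.0, "emg_max": 1.0})
    print(state)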


Results
The researchers found that users preferred direct control devices, even though they enjoyed using the sensors. They concluded that these sensors are best used for changing background variables rather than primary interactions. Among the individual sensors, users preferred ones they could actively control, such as breathing and muscle contraction, as opposed to heart rate.

Discussion
I think the SensorLib framework could be used for very rich interactions in games. Even so, most users would probably prefer not having to wear anything. One thing I was thinking about was the difference between natural mappings and relevant biofeedback mappings.

Tuesday, October 11, 2011

Paper Reading #17

Privacy Risks Emerging from the Adoption of Innocuous Wearable Sensors in the Mobile Environment

Authors: 
Andrew Raij - University of South Florida, Tampa, Florida, USA
Animikh Ghosh - SETLabs, InfoSys, Bangalore, India
Santosh Kumar - University of Memphis, Memphis, Tennessee, USA
Mani Srivastava - University of California, Los Angeles, Los Angeles, California, USA

Proceeding
CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems


Summary
AutoSense is a data collection system concerned with the physiological status of humans. It collects inertial data along with cardiac, temperature, skin conductance, and respiration measurements.

Hypothesis
When the AutoSense apparatus was created, it was not known that certain inferences could be made from its data. The researchers proposed that participants cannot fully understand the risk of releasing their data unless they have a personal stake in it.

Methods
The first goal of the research was to assess how people regard their data sets, since the data includes information about their physiological, behavioral, and psychological states. This was measured by having two primary groups: one that did not have a personal stake in the data released, and one that did.
The second goal was to design the study so that the participants could limit access to their data, or even prevent logging of certain points of data, such as acceleration.
The last goal was to determine how easily a participant can be re-identified using the data sets.


Results
People were less likely to share some parts of their data if they understood that it could be used against them (personal stake). There was also considerably more stress for the S-Post group. Participants were less stressed about the exercise data set, but there was still concern about the privacy of their exercise preferences.

Discussion
Today we live in a world where the amount of data being shared is enormous. Some people take this as an opportunity to hide everything about themselves, while others embrace it by actively sharing their lives with strangers on the internet. Obviously, when privacy is removed and there is any incriminating activity, a potential for issues arises. So if privacy were not an option, there would be one of two choices: be exposed for "wrong" actions, or choose not to do those actions knowing that people are watching.

From this, I do not believe that privacy should be an option since it allows unwise actions to be performed.

Paper Reading #15

Madgets: actuating widgets on interactive tabletops

Authors:

Malte Weiss - RWTH Aachen University, Aachen, Germany
Florian Schwarz - RWTH Aachen University, Aachen, Germany
Simon Jakubowski - RWTH Aachen University, Aachen, Germany
Jan Borchers - RWTH Aachen University, Aachen, Germany

Proceeding
UIST '10 Proceedings of the 23rd annual ACM symposium on User interface software and technology


Summary
Madgets are interactive physical objects that can be manipulated by a user or by the interactive tabletop itself. An array of electromagnets below a multitouch display allows for such interactions.

Hypothesis
Madgets combine the benefits of untethered manipulable objects along with the system's ability to manipulate them as well. Other systems have this ability, such as interactive sound controllers, but do not share the Madgets' degree of physical freedom.

Methods
The system is composed of an array of 228 electromagnets, all controlled by PWM signals from an Arduino. On top of the table, the Madgets are held in place by permanent magnets attached to them and the electromagnets inside the system. Directly above the magnet array is a TFT panel that allows images to be displayed in the interaction area. On top of that is an acrylic sheet which is lit from the edges by infrared LEDs. When a user touches this sheet, infrared light is reflected downwards. Since the magnet array would normally block the transmission of light, several fiber optic cables route the light around the magnet array. There, the light is picked up by an infrared camera, which can then track any touch on the entire surface. Madgets are actuated by determining which electromagnet to use for attracting or repelling the permanent magnets.
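
As a rough illustration of the actuation step, here is a minimal Python sketch that picks an electromagnet near a target position and derives a PWM duty cycle to send to the Arduino; the grid spacing, proportional rule, and function names are my own assumptions rather than the authors' control code.

    import math

    # Illustrative grid of electromagnet center positions (in millimetres).
    CELL = 19.0  # assumed spacing between electromagnets
    GRID_W, GRID_H = 19, 12  # 228 electromagnets, as described above

    def nearest_coil(x, y):
        """Index of the electromagnet closest to a point on the table."""
        col = min(GRID_W - 1, max(0, round(x / CELL)))
        row = min(GRID_H - 1, max(0, round(y / CELL)))
        return row * GRID_W + col

    def actuation_command(magnet_pos, target_pos, max_duty=255):
        """Pick the coil to energize and a PWM duty cycle.

        The coil nearest the target attracts the permanent magnet; the duty
        cycle grows with distance (clamped), a simple proportional rule.
        """
        dx = target_pos[0] - magnet_pos[0]
        dy = target_pos[1] - magnet_pos[1]
        dist = math.hypot(dx, dy)
        coil = nearest_coil(*target_pos)
        duty = int(min(max_duty, max_duty * dist / (2 * CELL)))
        return coil, duty  # e.g. sent to the Arduino as "coil index, PWM value"

    print(actuation_command((40.0, 40.0), (60.0, 40.0)))
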
There are 6 sample Madgets demonstrated:

  1. Buttons: Using the fact that the 3rd dimension can be utilized, buttons can be held upwards using repulsive force. When a user presses the button down, the system sees the difference in infrared light and can perform an action (a rough sketch of this detection follows the list).
  2. Clutch: Since both vertical and horizontal actuation are now possible, the button Madget is modified with a slider that can physically disable a button by sliding beneath it. This can be useful when a button should not be pressable in the current state.
  3. Force feedback: Madgets can be manipulated by the system to create force feedback. This can be done in the form of resistance, vibration feedback, or "notches."
  4. Induction: The magnet array is powerful enough to transfer inductive power to an LED.
  5. Motor: By actuating a tangential magnetic force, a pair of magnets can be spun around an axis at a constant velocity. This can be used to create much more complex systems.
  6. Bell: By building on the concept of the button, a magnet can be forcefully launched against a bell to ring it.
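
To make the button detection from item 1 concrete, here is a small Python sketch that thresholds the change in reflected infrared intensity under the button's known position; the values and threshold are illustrative assumptions, not the paper's implementation.

    def button_pressed(ir_frame, button_cell, baseline, threshold=30):
        """Detect a Madget button press from the IR camera image.

        `ir_frame` is a 2D list of IR intensities, `button_cell` the (row,
        col) under the button, and `baseline` the intensity seen while the
        button floats on its repulsive field. Values here are assumptions.
        """
        row, col = button_cell
        # A pressed button sits on the surface and reflects more IR light
        # down into the fiber optics than the floating baseline does.
        return ir_frame[row][col] - baseline > threshold

    frame = [[12, 14, 13],
             [15, 60, 16],   # bright spot: the button is being pressed
             [13, 14, 12]]
    print(button_pressed(frame, (1, 1), baseline=18))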



Results
The system itself had very few problems, aside from a small overheating issue: if an electromagnet is held at full power for too long, it gets too hot. An algorithm was developed to spread the effort of the array across several magnets when one is estimated to be too hot.
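
Here is a minimal sketch of that kind of load-spreading logic, assuming a simple per-coil temperature estimate: a coil above a temperature limit hands its duty cycle to a cooler neighbor. The model and thresholds are my own simplifications, not the authors' algorithm.

    def spread_load(duties, temps, limit=60.0):
        """Redistribute PWM duty away from coils estimated to be too hot.

        `duties` and `temps` are lists indexed by coil; a coil above the
        temperature `limit` hands its duty to its coolest neighbor.
        All values and the neighbor rule are illustrative assumptions.
        """
        adjusted = list(duties)
        for i, temp in enumerate(temps):
            if temp <= limit or adjusted[i] == 0:
                continue
            neighbors = [j for j in (i - 1, i + 1) if 0 <= j < len(temps)]
            coolest = min(neighbors, key=lambda j: temps[j], default=None)
            if coolest is not None and temps[coolest] < limit:
                adjusted[coolest] += adjusted[i]
                adjusted[i] = 0
        return adjusted

    # Coil 1 is over the limit, so its duty shifts to a cooler neighbor.
    print(spread_load([100, 200, 50], [40.0, 72.0, 35.0]))
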
Conflicting goals: when widgets are actuated in a human-understandable way, there is less confusion about what they do, but the simplicity of those actions prevents complex interactions. The researchers admit they need to find a balance between these goals.

Conclusion
I imagine this system uses quite a bit of electricity to operate. Although the concept is interesting, the gains in simplicity on the user side are not close to how easy it is to create a virtual interaction object within a touch environment. Ironically, the fusion of software and physical input can be prototyped just as easily with the Arduino that controls the magnet array as with the Madgets themselves. Since the real world is constantly manipulated by software (especially in industrial environments), there will always be a need for this kind of interaction, but I doubt it will be in the form of Madgets.

Paper Reading #14

TeslaTouch: electrovibration for touch surfaces

Authors
Olivier Bau Disney Research Pittsburgh, Pittsburgh, PA, USA and Université Paris-Sud, Orsay, France
Ivan Poupyrev Disney Research Pittsburgh, Pittsburgh, PA, USA
Ali Israr Disney Research Pittsburgh, Pittsburgh, PA, USA
Chris Harrison Disney Research Pittsburgh and Carnegie Mellon University, Pittsburgh, PA, USA

Proceeding
UIST '10 Proceedings of the 23rd annual ACM symposium on User interface software and technology

Summary
TeslaTouch is a touch interface that uses oscillating electric forces to create tactile feedback. It works by periodically attracting the surface of the finger to the touch surface, changing the amount of friction based on the applied voltage.

Hypothesis
The researchers believed this system could be used to simulate different types of surfaces dynamically.

Methods
The TeslaTouch system was created from a transparent electrode sandwiched between a glass plate and an insulator. Images were displayed using a projector. High voltages were applied to the electrode to attract the skin of the finger using electrostatic forces. Much better results were achieved when the user was grounded. The safety of the system is ensured by limiting the current the power supply can deliver. Different types of signals and strengths were tested on participants, and their opinions were recorded in categories such as how slick the surface felt and whether it felt more like vibration or friction. A scale of pleasantness was also considered. First, the minimum detection threshold for each user was determined using a step-wise cutoff procedure. The main tests compared frequencies of 80 and 400 cycles per second and amplitudes of 85 and 115 volts.
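
For intuition about the stimuli being compared, here is a small Python sketch that generates a periodic drive voltage for a given frequency and amplitude and runs a toy descending step-wise threshold search; the sinusoidal waveform shape and step sizes are assumptions on my part.

    import math

    def drive_sample(t, freq_hz, amplitude_v):
        """Instantaneous drive voltage at time t for a periodic stimulus.

        TeslaTouch modulates friction with a periodic high-voltage signal;
        a sine shape is assumed here purely for illustration.
        """
        return amplitude_v * math.sin(2 * math.pi * freq_hz * t)

    def staircase_threshold(can_feel, start_v=120.0, step_v=5.0, floor_v=0.0):
        """Toy descending-staircase search for a detection threshold.

        `can_feel(v)` returns True if the participant reports feeling the
        stimulus at amplitude v. The procedure and step size are illustrative.
        """
        v = start_v
        while v > floor_v and can_feel(v):
            v -= step_v
        return v + step_v  # last amplitude that was still detectable

    print(drive_sample(0.001, 80.0, 115.0))
    # Example: pretend the participant feels anything above 35 V.
    print(staircase_threshold(lambda v: v > 35.0))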

Results
Higher frequencies were perceived as more "smooth" and "waxy", and higher voltages intensified these sensations. Also, compared to other vibration interfaces, this system does not generate any audible noise.

Conclusion
I think this system could be used to generate much richer tactile sensations than mere sine waves. I'm imagining having the system determine which direction a finger is moving and then creating a shaped wave based on the direction of the motion. This way there could be much richer types of sensations, and possibly sensations that may not be possible in the physical world.


Thursday, September 29, 2011

Blog #13


Combining multiple depth cameras and projectors for interactions on, above and between surfaces



Authors:
Andrew D. Wilson - Microsoft Research, Redmond, WA, USA
Hrvoje Benko - Microsoft Research, Redmond, WA, USA

Proceeding
UIST '10 Proceedings of the 23rd annual ACM symposium on User interface software and technology

Summary
Depth cameras and projectors are used to make an interactive room named "LightSpace." Interactions are performed on plain, non-electronic surfaces in a natural fashion that requires only the cameras and projectors. The point of this approach is to limit external devices: the surfaces need no special instrumentation, and nothing needs to be worn on the person.

Hypothesis
To the authors' knowledge, no research had been done on making an interactive space that relies solely on depth cameras and projectors.

Methods
The system is composed of a central placement of 3 depth cameras to view the entire room, and 3 projectors. Two projectors are for the table and walls, and one is for projections onto the person, such as a virtual object to be held in the hand. After calibrating the depth cameras using 3 points on the two interaction surfaces, a mesh model of the person is created. For the table, all interaction is analyzed only within a 10 cm volume above the table. The resolution is fine enough to detect touches on the table, essentially creating a multitouch interface on any surface. There are three types of interactions possible with this system:

  • Multi-touch interaction on the "dumb" surfaces
  • Holding and transferring virtual icons of objects by grabbing them off the side of an interface
  • Activating a menu by holding a hand in a column of air above an icon of a surface.
Instead of performing computation on the mesh generated from the cameras, "virtual cameras" were used, built from orthographic projections of the mesh. Three virtual cameras were generated: one for each of the two surfaces, and one for the entire room.
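
Here is a minimal sketch of the virtual-camera idea: the depth points are projected straight down onto a tabletop-aligned grid, producing a 2D height map that ordinary image-processing code can operate on. The plane, grid resolution, and representation are my own assumptions.

    def orthographic_view(points, width, height, cell=0.01):
        """Project 3D points straight down onto a tabletop-aligned grid.

        `points` are (x, y, z) tuples in table coordinates (metres), with z
        measured above the table surface. The result is a 2D height map,
        one assumed form a "virtual camera" image could take.
        """
        cols = round(width / cell)
        rows = round(height / cell)
        image = [[0.0] * cols for _ in range(rows)]
        for x, y, z in points:
            c = round(x / cell)
            r = round(y / cell)
            if 0 <= r < rows and 0 <= c < cols:
                image[r][c] = max(image[r][c], z)  # keep the highest point
        return image

    # A fingertip hovering 2 cm over the table shows up as a non-zero cell;
    # touch detection could then threshold this map within the 10 cm volume.
    view = orthographic_view([(0.30, 0.20, 0.02), (0.31, 0.20, 0.005)],
                             width=1.0, height=0.6)
    print(view[20][30], view[20][31])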

    

Discussion
When I was watching the video presentation of the LightSpace concept, I couldn't help noticing how rough the interactions were on the surfaces. This is probably due to either the low resolution of the camera prototypes or the fact that the underside of the hands cannot be seen. One solution would be to use depth cameras in more diverse locations, but then there would be more complexity to the system.

This paper was published approximately one month before the release of the Kinect system. Since then an SDK has been released for it, and many people have used it for creative hacks. In my opinion, to stay within the main idea of the paper (reducing external complexity), future 3D interaction would have to use actual 3D projection, since currently this can only be emulated by actively measuring the person's position in space. That, however, requires external hardware to be worn on the person.

Tuesday, September 27, 2011

Blog #12

Enabling beyond-surface interactions for interactive surface with an invisible projection


Authors:
Li-Wei Chan - National Taiwan University, Taipei, Taiwan ROC
Hsiang-Tao Wu - National Taiwan University, Taipei, Taiwan ROC
Hui-Shan Kao - National Taiwan University, Taipei, Taiwan ROC
Ju-Chun Ko - National Taiwan University, Taipei, Taiwan ROC
Home-Ru Lin - National Taiwan University, Taipei, Taiwan ROC
Mike Y. Chen - National Taiwan University, Taipei, Taiwan ROC
Jane Hsu - National Taiwan University, Taipei, Taiwan ROC
Yi-Ping Hung - National Taiwan University, Taipei, Taiwan ROC



Proceeding:
UIST '10 Proceedings of the 23rd annual ACM symposium on User interface software and technology

Summary:
Using infrared cameras, an infrared projector, and a color projector to facilitate multi-touch input along with invisible markers on a table, three mechanisms for beyond-surface interactions are proposed and explained.

Hypothesis:
There was no specific hypothesis, but they did mention that the system allowed for more intuitive navigation and more enjoyable use.

Methods:
Under the table there are 2 cameras and 2 projectors. A color projector displays the screen, an IR projector projects invisible infrared markers, and 2 IR cameras in different corners of the table pick up multi-touch interaction. When an interaction is detected by subtracting what is seen by an above-table camera from what is expected from the invisible markers, the markers within the zone of interaction are removed to prevent them from interfering with the input itself. The markers themselves allow the mobile above-table camera to calculate its own 3D position and orientation.
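
As a hedged sketch of the pose-from-markers step, the snippet below uses OpenCV's solvePnP to recover a camera's position and orientation from the known tabletop positions of a few markers and their pixel positions in the camera image; the marker layout and intrinsics are made-up values for illustration, not the authors' calibration.

    import numpy as np
    import cv2

    # Table-plane coordinates (metres) of four detected IR markers; z = 0
    # because the markers lie on the tabletop. Values are made up.
    object_points = np.array([[0.0, 0.0, 0.0],
                              [0.2, 0.0, 0.0],
                              [0.2, 0.2, 0.0],
                              [0.0, 0.2, 0.0]], dtype=np.float32)

    # Pixel coordinates of the same markers as seen by the mobile camera
    # (assumed low-resolution sensor). Also made up.
    image_points = np.array([[110.0, 90.0],
                             [210.0, 95.0],
                             [205.0, 190.0],
                             [105.0, 185.0]], dtype=np.float32)

    # Assumed pinhole intrinsics for the mobile camera.
    camera_matrix = np.array([[300.0, 0.0, 160.0],
                              [0.0, 300.0, 120.0],
                              [0.0, 0.0, 1.0]], dtype=np.float32)
    dist_coeffs = np.zeros(4, dtype=np.float32)  # assume no lens distortion

    # Recover the camera's rotation and translation relative to the table.
    ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                                  camera_matrix, dist_coeffs)
    if ok:
        rotation = cv2.Rodrigues(rvec)[0]
        print("camera position (table coords):", (-rotation.T @ tvec).ravel())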

With this system in place, a projector attached to the upper camera can give more information about a subset of the graphics on the table. This in essence allows for greater resolution and interaction.

When the markers are analyzed by a tablet instead, a virtual representation of the table can be shown.

The whole system is powered by a normal consumer desktop.

Results:
The original revisions of the software suffered from vibration (jitter) of the projected displays, which was fixed with Kalman filtering.
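
For intuition, here is a minimal one-dimensional Kalman filter sketch of the kind of smoothing that removes such jitter; the scalar state and noise variances are my own simplifications (the actual system presumably filters the full estimated pose).

    def kalman_smooth(measurements, process_var=1e-4, meas_var=4e-2):
        """Smooth a jittery 1D signal (e.g., one pose coordinate).

        A constant-position model: the state is the value itself, the
        process noise allows slow drift, and the measurement noise absorbs
        jitter. Variances are illustrative assumptions.
        """
        estimate, variance = measurements[0], 1.0
        smoothed = [estimate]
        for z in measurements[1:]:
            # Predict: the value is assumed unchanged, uncertainty grows.
            variance += process_var
            # Update: blend prediction and measurement by the Kalman gain.
            gain = variance / (variance + meas_var)
            estimate += gain * (z - estimate)
            variance *= (1.0 - gain)
            smoothed.append(estimate)
        return smoothed

    # A noisy, roughly constant projector alignment value gets steadier.
    print(kalman_smooth([10.2, 9.8, 10.4, 9.9, 10.1, 10.3]))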

Discussion:
At first I didn't understand the reason to use markers on such a display, but now that I see what they were doing, the uses for such interactions are nearly limitless. With a multi-touch display and a multi-touch tablet for further 3D manipulation, I can see this having great use in CAD niches. The one thing that bothered me about the upper cameras was the low resolution of 320x240 pixels. I have done the hack of turning a normal webcam into an infrared one myself, and the limits of the CCD were depressing.


Blog #11


Multitoe: high-precision interaction with back-projected floors based on high-resolution multi-touch input

Authors:
Thomas Augsten Hasso Plattner Institute, Potsdam, Germany
Konstantin Kaefer Hasso Plattner Institute, Potsdam, Germany
René Meusel Hasso Plattner Institute, Potsdam, Germany
Caroline Fetzer Hasso Plattner Institute, Potsdam, Germany
Dorian Kanitz Hasso Plattner Institute, Potsdam, Germany
Thomas Stoff Hasso Plattner Institute, Potsdam, Germany
Torsten Becker Hasso Plattner Institute, Potsdam, Germany
Christian Holz Hasso Plattner Institute, Potsdam, Germany
Patrick Baudisch Hasso Plattner Institute, Potsdam, Germany

Proceeding
UIST '10 Proceedings of the 23rd annual ACM symposium on User interface software and technology

Summary
User input onto a touch interface using the feet is studied. An apparatus using FTIR touch sensing and images back-projected onto a floor is tested. Participants are asked to test and give input on the general ergonomics of the system.

Hypothesis
Since a touch interface is limited to what is within "arm's reach," an interface using feet instead would have a much larger possible input area.

Methods
A back-projected surface using frustrated total internal reflection (FTIR) sensing is used for input resolution of about 1 mm. First, 8 participants were asked to show how they could walk over hypothetical buttons without activating them, and then activate one with a different gesture based entirely on their own ideas. Some ideas were less plausible than others, since walking on heels can be difficult or even dangerous. It was decided that a tap gesture is best for activation, and simply walking would not be interpreted as an action.

The next test was to determine how tap gestures should be interpreted, since the user may want to use their big toe, the ball of their foot, or the tip. It was found that a default position could easily be machine-learned for each person, which helps the next step.
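
As a toy version of that per-user calibration, the sketch below learns a default "hotspot" by averaging a few calibration taps and then accepts later contacts only if they land near it; the coordinate representation and distance threshold are assumptions on my part, not the authors' machine-learning method.

    import math

    def learn_hotspot(calibration_taps):
        """Average contact position of a user's calibration taps.

        Each tap is an (x, y) point in the foot's own coordinate frame
        (an assumed representation for this sketch).
        """
        n = len(calibration_taps)
        cx = sum(x for x, _ in calibration_taps) / n
        cy = sum(y for _, y in calibration_taps) / n
        return cx, cy

    def is_intentional_tap(contact, hotspot, radius=2.0):
        """Treat contacts near the learned hotspot as deliberate taps.

        Contacts far from the hotspot (e.g., the heel during walking)
        are ignored. The 2 cm radius is an illustrative threshold.
        """
        dx = contact[0] - hotspot[0]
        dy = contact[1] - hotspot[1]
        return math.hypot(dx, dy) <= radius

    hotspot = learn_hotspot([(3.1, 10.2), (2.8, 9.8), (3.0, 10.0)])
    print(is_intentional_tap((3.2, 10.1), hotspot))   # near the toe -> True
    print(is_intentional_tap((3.0, -8.0), hotspot))   # heel contact -> False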

Users were asked to type on 3 different sized keyboards using tap gestures calibrated to each participant. The keys of the smallest keyboard measured 1.1 cm wide, while those of the largest measured 5.3 cm wide. The users were timed from the start key to the end key, and each error was logged.

Another test of the system's usefulness included navigating a game using differing pressures on the feet with natural mappings to left, right, forwards, and backwards, as well as turning.

Results
Since the prototype was rather small, it did not show the full capability of the system. However, user input worked well enough to achieve a 3% error rate on the large-keyboard test. Most of the users preferred the largest keyboard, while a few preferred the medium-sized one.

Discussion
In my opinion, this prototype does not merit much attention, but the full-scale system in development could have much more functionality. My biggest issue with the paper is that most people simply don't have good eye-foot coordination. Other gestures could plausibly be made while barefoot, but physiological limitations would still make them a hindrance.