[Chart: "Quantity of Training Data vs. Accuracy"; y-axis: classification accuracy, 0–100%; x-axis: blocks of training data.]
Figure 8. Classification accuracy versus blocks of training data for four finger gestures with bags in hand. Each training block takes seven seconds for a four-finger classifier.
still requires careful consideration of appropriate interaction techniques. Here we explore some of the design issues related to using muscle-computer interfaces for input.
Visual Feedback: Speed and Accuracy
Our experiments demonstrate that the proposed gesture set can be accurately recognized via muscle-sensing in the absence of visual feedback, which is critical to many applications, including nearly all hands-free mobile scenarios. However, visual feedback makes the system more predictable and gives users an opportunity to adapt their behavior to that of the recognition system. For example, participants could experiment with finger position or exertion to improve recognition. This can be seen in Part B of our experiment, where participants held a travel mug in their hands. The average accuracy of the system was much higher when participants had visual feedback. However, this came at the cost of reduced speed. On average, participants spent more time performing each gesture, as they adjusted their gestures until the system made the correct classification. This speed-accuracy tradeoff should be considered carefully in the context of an application. In applications where an error can easily be undone and the gesture repeated (e.g., in a mobile music player), the higher speed that comes from feedback-free gesture input may justify an increased error rate. In contrast, in applications where an incorrect gesture might be more costly (e.g., when controlling a mechanical device or playing a game), the decreased speed that comes from using visual feedback might be reasonable.
Engagement, Disengagement, & Calibration
A wearable, always-available input system needs a mechanism for engaging and disengaging the system. We do not want the system to interpret every squeeze or pinch action as a command. In our experiment, we used the left hand to support engagement and disengagement, and we feel that this separation of tasks across the two hands is a reasonable option for real applications. However, it would be worthwhile to look at how engagement and disengagement might be supported by sensing only one hand. In particular, is there a physical action unique enough to be robustly classified during everyday activity such that it can be used as an engagement delimiter? One example of such an action might be squeezing the hand into a fist twice in succession. In our limited exploration of this topic, a fist clench has appeared to be easily distinguishable from other typical movements, so this may be a starting point for future muscle-computer interfaces.
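As a concrete sketch, a double-clench delimiter like the one suggested above can be implemented as a small state machine over a classifier's per-frame fist/no-fist output. The class name, the 0.6-second gap threshold, and the frame interface below are illustrative assumptions, not details of our system:

```python
import time


class ClenchDelimiter:
    """Toggle engagement when two fist clenches occur in quick
    succession. `max_gap` is the longest pause (in seconds) allowed
    between the two clenches; the threshold is an illustrative choice."""

    def __init__(self, max_gap=0.6):
        self.max_gap = max_gap
        self.last_clench = None   # time of the previous (single) clench
        self.was_fist = False     # previous frame's fist state
        self.engaged = False

    def update(self, is_fist, now=None):
        """Feed one classifier frame; returns True when the
        engagement state toggles on this frame."""
        now = time.monotonic() if now is None else now
        toggled = False
        # rising edge: the hand has just closed into a fist
        if is_fist and not self.was_fist:
            if self.last_clench is not None and now - self.last_clench <= self.max_gap:
                self.engaged = not self.engaged
                self.last_clench = None
                toggled = True
            else:
                self.last_clench = now  # first clench; wait for the second
        self.was_fist = is_fist
        return toggled
```

A clench pair separated by more than `max_gap` simply restarts the count, so isolated everyday grasps would not toggle the system.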
Multi-Finger Interactions
Our experiments focused on recognition of single gestures performed one at a time. The system's ability to recognize these gestures indicates that we could develop interaction techniques that rely on sequences of gestures. It would also be interesting to compare such sequenced interaction with simultaneous performance of several gestures at a time. For example, how does recognition performance compare when doing an index finger pinch followed by a middle finger pinch versus a simultaneous index and middle finger pinch? Apart from recognition performance, users' perception and performance of these different styles of multi-finger interactions must also be considered carefully.
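One way to prototype the sequenced style of interaction is a small matcher that buffers recent gesture labels and fires a command when a known sequence completes. The bindings, label strings, and timeout below are hypothetical, chosen only to illustrate the idea:

```python
import time
from collections import deque


class GestureSequencer:
    """Map short gesture sequences to commands (a sketch; the
    vocabulary and 1.5 s timeout are illustrative assumptions)."""

    def __init__(self, bindings, timeout=1.5):
        self.bindings = bindings  # tuple of gesture labels -> command
        self.timeout = timeout
        self.history = deque(maxlen=max(len(k) for k in bindings))
        self.last_time = None

    def feed(self, gesture, now=None):
        """Feed one recognized gesture; returns a command or None."""
        now = time.monotonic() if now is None else now
        # a long pause between gestures resets the partial sequence
        if self.last_time is not None and now - self.last_time > self.timeout:
            self.history.clear()
        self.last_time = now
        self.history.append(gesture)
        for seq, command in self.bindings.items():
            if len(self.history) >= len(seq) and tuple(self.history)[-len(seq):] == tuple(seq):
                self.history.clear()
                return command
        return None
```

The chorded (simultaneous) alternative would instead hand the classifier one combined frame, which is exactly the recognition-performance comparison raised above.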
Ongoing and Future Directions
Air-Guitar Hero
Encouraged by these results, we developed an application that allows a user to play the game Guitar Hero using our muscle-computer interface. In Guitar Hero, users hold a guitar-like controller and press buttons using both hands as the system presents stimuli timed to popular music. Using our muscle-computer interface, users can now play with an "air-guitar": a user controls four buttons with our pinching gestures and moves the opposite wrist in a strumming motion. Informal tests of the system show that users are able to complete the easy mode of the game. We demonstrate this system in our video figure.
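The four-buttons-plus-strum scheme amounts to a thin translation layer between the recognizer and the game. The gesture labels and fret names below are illustrative assumptions, as the actual game integration is not detailed here:

```python
# Hypothetical mapping from pinch-gesture labels to fret buttons.
FRET_KEYS = {
    "index_pinch":  "green",
    "middle_pinch": "red",
    "ring_pinch":   "yellow",
    "pinky_pinch":  "blue",
}


def frame_to_input(finger_labels, strum_detected):
    """Translate one recognition frame into the set of fret buttons
    to press in the game.

    `finger_labels` is the set of pinch gestures currently held by the
    pinching hand; `strum_detected` is True when the opposite wrist
    makes a strumming motion."""
    held = {FRET_KEYS[g] for g in finger_labels if g in FRET_KEYS}
    # notes fire only when frets are held *and* a strum occurs,
    # mirroring the two-handed division of labor described above
    return held if strum_detected else set()
```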
Wireless Electromyography
Although we extended previous work by not tethering people's arms and hands to specific orientations or surfaces, our experiment was conducted in a lab using a wired electromyography device, and we have yet to validate our classification approaches in scenarios with more variable gesture execution. To this end, we have recently created a small, low-power wireless prototype muscle-sensing unit (see Figure 9). Each of these units is equipped with four electrodes (two differential electromyography channels) sampling at 128 Hz, and multiple units can be used simultaneously. We are currently working to put this wireless unit into an armband form factor with dry electrodes.
Figure 9. Our wireless EMG device prototype, weighing five grams and measuring 26x18x8 mm.
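For context, a typical first processing step on such a two-channel, 128 Hz stream is a windowed amplitude feature such as RMS. The window length and the feature choice below are our assumptions for illustration, not the device's documented pipeline:

```python
import math

SAMPLE_RATE_HZ = 128   # per the prototype's specification
CHANNELS = 2           # two differential EMG channels per unit
WINDOW = 32            # 32 samples = 250 ms at 128 Hz (our choice)


def rms_features(window_samples):
    """Compute the per-channel RMS amplitude of one window.

    `window_samples` is a list of (ch0, ch1) tuples. RMS is a common
    EMG amplitude feature; the classifier's exact feature set may differ."""
    n = len(window_samples)
    return tuple(
        math.sqrt(sum(frame[c] ** 2 for frame in window_samples) / n)
        for c in range(CHANNELS)
    )
```

In use, a ring buffer of raw samples would be sliced into `WINDOW`-length chunks, e.g. `rms_features(buffer[-WINDOW:])`, and the resulting features fed to the gesture classifier.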
