Results
In both parts of our experiment, we collected gesture exam-
ples to train our recognizer and then asked participants to
complete tasks using those gestures in a two-handed tech-
nique. For each part, we examine the average accuracies
our system achieved in classifying finger gestures.
While each part of the experiment was conducted with a set
of four finger gestures, we also present, for Parts A and B,
an offline analysis of a gesture recognizer that uses only the
first three fingers (index, middle, and ring), to demonstrate
the potential tradeoff of gesture richness against classification
accuracy. We chose the pinky as the finger to
remove in this analysis because participants reported that it
was the most uncomfortable to manipulate.
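One small but easy-to-miss point when comparing the three- and four-finger recognizers is that the chance baseline shifts with the number of classes. A quick sanity check (plain arithmetic, no data from the study):

```python
# Uniform-random-guess baselines for the two gesture set sizes.
four_finger_chance = 1 / 4    # index, middle, ring, pinky
three_finger_chance = 1 / 3   # pinky dropped as least comfortable

print(f"4-gesture chance: {four_finger_chance:.0%}")
print(f"3-gesture chance: {three_finger_chance:.0%}")
```

So a three-finger accuracy must clear a 33% floor, rather than 25%, before it says anything about the sensing.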
Part A: Hands-Free Finger Gesture Recognition
As described above, variability in arm posture (particularly
twisting of the forearm) presents a challenge for accurate
finger gesture classification. To explore this issue, we
trained the gesture recognizer in each of three postures in-
dependently, and performed an offline analysis testing each
recognizer with the test data from the other two postures.
As shown in Table 1, the system performed best when clas-
sifying pinch gestures using training data that was gathered
in the same posture. Furthermore, training transferred more
effectively between postures that were more similar. This
can be seen by grouping these results by distance (in
amount of arm rotation) between training and testing post-
ures. Distance zero represents training and testing on the
same posture. Distance one represents a small rotation
away, that is, either of the extremes to the midpoint or vice
versa. Distance two represents training on one of the ex-
treme positions and testing on the other.
The mean accuracy for distance zero is 77%, while distance
one classifies at 72% and distance two at 63%. A univariate
ANOVA on classification accuracy with rotation distance
as the only factor shows a main effect of distance
(F2,105=5.79, p=0.004). Posthoc tests with Bonferroni correction
for multiple comparisons show this effect driven by
significant differences between distance zero and distance
two (p=0.003) and marginally between distance one and
distance two (p=0.086). Note that a random classifier would
be operating at about 25% for the four-finger gestures.
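The distance grouping described above can be recomputed directly from Table 1's cells. A minimal sketch (the cell values are transcribed from Table 1; note that the published ANOVA operates on per-participant accuracies, which are not listed in this excerpt):

```python
from statistics import mean

# Cross-posture classification accuracies (%); rows are the training
# posture, columns the testing posture, both ordered Left, Center, Right.
# Values transcribed from Table 1.
acc = [
    [78, 72, 57],   # trained on Left
    [70, 79, 74],   # trained on Center
    [68, 73, 74],   # trained on Right
]

# Rotation "distance" between training and testing posture:
# 0 = same posture, 1 = extreme <-> midpoint, 2 = extreme <-> extreme.
for d in range(3):
    cells = [acc[i][j] for i in range(3) for j in range(3) if abs(i - j) == d]
    print(f"distance {d}: {len(cells)} cells, mean accuracy {mean(cells):.1f}%")
```

These reproduce the reported group means of 77%, 72%, and 63% (the last rounding up from 62.5%).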
Train      Left     Center   Right
Left       78%      72%      57%
Center     70%      79%      74%
Right      68%      73%      74%
Table 1. Classification accuracies among pinch postures,
averaged across all users. Rows give the training posture and
columns the testing posture. Chance classification for this
four-gesture problem is 25%.
However, when all of the training data (75 blocks) is used
to train the gesture recognizer, instead of training data from
a single posture, the average accuracy over all of a person's
test data is 79% with a standard deviation of 13% (see
Figure 6). This demonstrates that training in a variety of
postures could lead to relatively robust models that find the
invariants and work well across the range of postures.
Exploring more complex methods of modeling posture
independence remains future work. Reducing the gesture
recognizer to just the first three fingers increased this
accuracy to 85% with a standard deviation of 11%.
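The "all training data" condition amounts to concatenating the per-posture training blocks before fitting a single model. A sketch of that pooling, with synthetic features and a nearest-centroid rule standing in for the paper's actual EMG features and classifier (both are assumptions here, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for EMG feature vectors: one training block per
# posture, four finger-gesture classes (index, middle, ring, pinky).
# Real features would come from the forearm muscle-sensing band.
def make_block(posture_shift, n=40, dim=8):
    y = rng.integers(0, 4, size=n)            # gesture label per example
    X = rng.normal(scale=0.3, size=(n, dim))
    X += y[:, None] * 1.0 + posture_shift     # class + posture structure
    return X, y

posture_shifts = [-0.2, 0.0, 0.2]             # left, center, right twist
blocks = [make_block(s) for s in posture_shifts]

# Pool all postures' training data into one model instead of training a
# separate recognizer per posture.
X = np.vstack([Xb for Xb, _ in blocks])
y = np.concatenate([yb for _, yb in blocks])
centroids = np.stack([X[y == c].mean(axis=0) for c in range(4)])

def classify(x):
    # Assign the gesture whose pooled centroid is nearest.
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

train_acc = np.mean([classify(x) == c for x, c in zip(X, y)])
print(f"pooled-training accuracy on training data: {train_acc:.0%}")
```

Because the pooled centroids average over all three postures, the model absorbs some posture variation instead of overfitting to one forearm rotation, mirroring the robustness effect reported above.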
Part B: Hands-Busy Finger Gesture Recognition
Participants performed finger gestures both sitting down
with a travel mug in their hand and while standing with
laptop bags in their hands. The system attempted to classify
gestures both when the participants did and did not have
visual feedback from the recognizer.
When participants held a travel mug in their hand, the four-
finger recognizer attained an average accuracy of 65%
without visual feedback (see Figure 7). Mean classification
improved dramatically, to 85%, with visual feedback. A
two-way ANOVA (finger × presence/absence of visual
feedback) on classification accuracy revealed that the re-
sults with visual feedback were significantly higher than
without (F1,10=24.86, p=0.001). The system also classified
much more accurately when only classifying among three
fingers instead of four: 77% without feedback and 86%
with feedback.
Participants spent a mean of 1.61 seconds between gestures
without visual feedback. This slowed to a mean of 3.42
seconds when they had visual feedback. An ANOVA revealed
a main effect for feedback (F1,10=13.86, p=0.004).
While holding a bag in each hand, the system classified
participants' four-finger gestures at an accuracy of 86%
without visual feedback and 88% with visual feedback (see
Figure 7). When the classification was reduced to three
fingers, the system's accuracy was better: 91% without
visual feedback and, similarly, 90% with feedback.
On average, participants waited 1.69 seconds to squeeze
their left fist when there was no visual feedback. This
increased to 2.67 seconds when they had visual feedback of
Figure 6. Mean classification accuracies for pinch gestures
(4-finger and 3-finger recognizers). Error bars represent
standard deviation in all graphs.
