
ARTICLE

International Journal of Advanced Robotic Systems

Markerless Kinect-Based Hand Tracking for Robot Teleoperation
Regular Paper

Guanglong Du, Ping Zhang*, Jianhua Mai and Zeling Li

Department of Computer Science, South China University of Technology, P.R. China


* Corresponding author E-mail: pzhang@scut.edu.cn

Received 9 Apr 2012; Accepted 23 May 2012


DOI: 10.5772/50093
© 2012 Du et al.; licensee InTech. This is an open access article distributed under the terms of the Creative
Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use,
distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract This paper presents a real-time remote robot teleoperation method using markerless Kinect-based hand tracking. With this tracking algorithm, the 3D positions of the index fingertip and the thumb tip are estimated by processing depth images from the Kinect. The hand pose is then used to specify the pose of a remote robot's end-effector in real time. This method provides a way to send a whole task to a remote robot, instead of the limited motion commands of gesture-based approaches, and it has been tested in pick-and-place tasks.

Keywords: robot manipulator, markerless, Kinect

1.Introduction

If a task is too complex for an autonomous robot to complete, then human intelligence is required to make decisions and control the robot, especially in unstructured dynamic environments. Furthermore, when the robot is in a dangerous environment, robot teleoperation may be necessary. Some human-robot interfaces (Yussof et al. [1]; Mitsantisuk et al. [2]), such as joysticks, dials and robot replicas, have been commonly used, but these contacting mechanical devices require unnatural hand and arm motions to complete a teleoperation task.

Another, more natural way to communicate complex motions to a remote robot is to track the operator's hand-arm motion used to complete the required task, using contacting electromagnetic tracking sensors, inertial sensors and gloves instrumented with angle sensors (Hirche et al. [3]; Villaverde et al. [4]; Wang et al. [5]). However, these contacting devices may hinder natural human limb motions.

Because vision-based techniques are non-contact and hinder hand-arm motions less, they have also been used. Vision-based methods usually use physical markers placed on the anatomical body part (Kofman et al. [6]; Lathuilière and Hervé [7]; Guanglong Du et al. [8]). There are many applications (Peer et al. [9]; Borghese and Rigiroli [10]; Kofman et al. [6]) using marker-based human motion tracking; however, because body markers may hinder the motion in highly dexterous tasks and may become occluded, marker-based tracking is not always practical. Thus, a markerless approach seems better for many applications.


Compared to image-based tracking that uses markers, markerless tracking is not only less invasive, but also eliminates the problems of marker occlusion and identification (Verma [11]). Thus, markerless tracking may be a better approach for remote robot teleoperation. However, existing markerless human-limb tracking techniques have so many limitations that they may be difficult to use in robot teleoperation applications. Many existing markerless tracking techniques capture images and compute the motion later, as a post-process (Goncalves et al. [12]; Kakadiaris et al. [13]; Ueda et al. [14]; Rosales and Sclaroff [15]). For remote robot teleoperation, the markerless tracking has to run simultaneously in real time when controlling continuous robot motion. To allow the human operator to perform the hand-arm motions for a task in a natural way without interruption, the position and orientation of the hand and arm should be provided immediately. Many techniques can only provide 2D image information of the human motion (Koara et al. [16]; MacCormick and Isard [17]), and their tracking methods cannot be extended to accurate 3D joint-position data. An end-effector of a remote robot would require the 3D position and orientation of the operator's limb-joint centres with respect to a fixed reference system, and identifying human body parts in different orientations has always been a significant challenge (Kakadiaris et al. [13]; Goncalves et al. [12]; Triesch and Malsburg [18]).

For robot teleoperation, there is limited research on markerless human tracking. Most techniques have tried to use a human-robot interface based on hand-gesture recognition to control robot motion (Fong et al. [19]; Hu et al. [20]; Moy [21]). Coquin et al. and Ionescu et al. [22] developed markerless hand-gesture recognition methods which can be used for mobile robot control, where only a few different commands, such as "go", "stop", "left" and "right", are enough. However, for object manipulation in 3D space, it is not possible to achieve natural control and flexible robot motion using gestures only. If a human operator wants to use gestures, he/she needs to think of those limited separate commands that the human-robot interface can understand, like "move up", "down", "forward" and so on. A better way of human-robot interaction would be to permit the operator to focus on the complex global task, as a human naturally does when grasping and manipulating objects in 3D space, instead of thinking about what type of hand motions are required. To achieve this goal, a method is needed that allows the operator to complete the task using natural hand-arm motions while providing the robot with information on the hand-arm motion in real time, such as the anatomical position and orientation of the hand and arm (Kofman et al. [23]). However, to achieve the initialization in that work, the human operator must assume a simple posture with an unclothed arm in front of a dark background, with the hand placed higher than the shoulder. It is not possible to get a precise result with a complex background. In addition, the human operator would find it hard to work in cold weather, as the arm is unclothed. The method is also limited by lighting effects, i.e., it is difficult to use when it is too bright or too dark.

This paper presents a method of remote robot teleoperation using markerless Kinect-based 3D hand tracking of the human operator (Figure 1). Markerless Kinect-based hand tracking is used to acquire the 3D anatomical position and orientation, and the data are then sent to the robot manipulator through a human-robot interface, enabling the robot end-effector to copy the operator's hand motion in real time. This natural way of communicating with the robot allows the operator to focus on the task instead of thinking in terms of the limited separate commands that the human-robot interface can understand, as in gesture-based approaches. Using the non-invasive Kinect-based tracking avoids the problem that physical sensors, cables and other contacting interfaces may hinder natural motions, as well as the problems of marker occlusion and identification that arise with marker-based approaches.

Figure 1. Non-invasive robot teleoperation system based on the Kinect


2. Human hand tracking and positioning system

Human hand tracking and positioning is carried out by continuously processing RGB images and depth images of an operator who is performing the hand motion to complete a robot manipulation task. The RGB images and depth images are captured by the Kinect, which is fixed in front of the operator.

The Kinect senses depth with an infrared projector and an infrared camera, and performs visual recognition with a standard visible-spectrum camera.

2.1 Kinect coordinate system

In Figure 2, an operator stands in front of the Kinect and controls a robot. We define the Kinect coordinate system as shown in Figure 2: axis X points upward, axis Y points to the right and axis Z points along the depth direction. The Kinect can capture the depth of any object in its workspace. In Figure 2 we can see the index fingertip (I), the thumb tip (T) and the part of the hand between the thumb and the index finger (B). The distances between the Kinect and I, B, T and U all differ: I and T are closest to the Kinect and the upper arm U is furthest. The 3D position of B is used to control the position of the robot end-effector, while I, T and B together are used to control its orientation.

Figure 2. Depth of objects. K: Kinect; I: index fingertip; T: thumb tip; B: the part of the hand between the thumb and the index finger; U: upper arm.

2.2 Image capture and segmentation of the hand

In order to capture the hand motion used for controlling the robot manipulator, we need to separate the hand from the depth image. The arm is segmented from the body by thresholding the raw depth image.

Figure 3. Segmentation of the hand and determination of the thumb and index-fingertip positions


A depth image D(i,j), shown in Figure 3a, records the depth of all the pixels of the RGB image shown in Figure 3b. Assume that the distance between the human operator and the Kinect is not more than T (m) and that there is no other object between the human operator and the Kinect. For all i and j in the depth image D(i,j), the body image Cb(i,j) is then segmented as:

$$C_b(i,j) = \{\, d(i,j) \mid d(i,j) \le T;\ d(i,j) \in D;\ i = 1,2,\dots,n;\ j = 1,2,\dots,m \,\} \qquad (1)$$

where d(i,j) is a pixel of the depth image D, n is the width of D and m is the height of D.

When the human operator holds out the hand to control the robot manipulator, the arm is closer to the Kinect than the body, so we can first compute the mean depth M of the whole body, including the arm:

$$M = \frac{1}{\lvert C_b \rvert}\sum_{d(i,j)\,\in\,C_b} d(i,j) \qquad (2)$$

Then we can extract the arm region A(i,j) as follows:

$$A = \{\, d(i,j) \mid d(i,j) \in C_b \ \text{and}\ d(i,j) < M \,\} \qquad (3)$$

The arm region A is shown in Figure 3c.
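For concreteness, the segmentation in (1)-(3) can be written down in a few lines. The following C++ fragment is a minimal sketch under our own assumptions (a row-major depth array in metres; all function and variable names are ours), not the authors' implementation:

```cpp
#include <vector>
#include <cstddef>

// Minimal sketch of the segmentation in Eqs. (1)-(3).
// depth: row-major n x m depth image in metres; T: max operator distance.
// Returns a mask: 0 = background, 1 = body, 2 = arm (closer than body mean).
std::vector<int> segmentArm(const std::vector<float>& depth,
                            std::size_t n, std::size_t m, float T) {
    std::vector<int> mask(n * m, 0);
    // Eq. (1): keep valid pixels closer than T as the body region Cb.
    double sum = 0.0;
    std::size_t count = 0;
    for (std::size_t k = 0; k < n * m; ++k) {
        if (depth[k] > 0.0f && depth[k] <= T) {
            mask[k] = 1;
            sum += depth[k];
            ++count;
        }
    }
    if (count == 0) return mask;
    // Eq. (2): mean depth M over the body region (arm included).
    const float M = static_cast<float>(sum / count);
    // Eq. (3): the outstretched arm is closer than the body mean.
    for (std::size_t k = 0; k < n * m; ++k)
        if (mask[k] == 1 && depth[k] < M) mask[k] = 2;
    return mask;
}
```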

2.3 Determination of thumb and index-fingertip positions

The positions of the thumb tip and the index fingertip are determined from the image that contains the arm. The arm region A3d(x,y,z) can be reconstructed from A(i,j), as shown in Figure 3d.

For all 2D points (i,j) in A(i,j), the 3D points can be calculated by:

$$A_{3d}(x,y,z) = [\,i,\ j,\ d(i,j)\,] \qquad (4)$$

Then the 3D points A3d(x,y,z) are projected onto the plane YOZ, as shown in Figure 3e:

$$A_{YOZ}(y,z) = A_{3d}(0,y,z) = [\,j,\ d(i,j)\,] \qquad (5)$$

Define the minimize-project function f:

$$f(y) = \min_{z = 1,2,\dots,m} \big( A_{YOZ}(y,z) \big) \qquad (6)$$

Determine the one maximum of the minimize-project function f (at y = y1) and its two minima (at y = y2 and y = y3). The 3D points of I, T and B can then be reconstructed; for each extremum $y_k$, the x-coordinate is recovered as the mean $\overline{x'}$ over all arm points $A_{3d}(x', y_k, f(y_k))$:

$$I(x,y,z) = \big(\,\overline{x'},\ y_2,\ f(y_2)\,\big) \qquad (7)$$

$$T(x,y,z) = \big(\,\overline{x'},\ y_3,\ f(y_3)\,\big) \qquad (8)$$

$$B(x,y,z) = \big(\,\overline{x'},\ y_1,\ f(y_1)\,\big) \qquad (9)$$
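The projection-and-extrema step in (5)-(9) can be sketched as follows. The neighbour-comparison search for the extrema, the depth tolerance used to recover x and all names here are our assumptions; the paper does not specify how the extrema of f are located.

```cpp
#include <vector>
#include <limits>
#include <algorithm>
#include <cmath>

struct Point3 { float x, y, z; };

// Mean x of the arm points lying at column y with depth close to z;
// this recovers the x lost in the YOZ projection, as in Eqs. (7)-(9).
static float meanX(const std::vector<Point3>& arm, int y, float z) {
    float sum = 0.0f; int n = 0;
    for (const Point3& p : arm)
        if (static_cast<int>(p.y) == y && std::abs(p.z - z) < 1e-3f) {
            sum += p.x; ++n;
        }
    return n ? sum / n : 0.0f;
}

// Sketch of Eqs. (5)-(9): project the arm onto the YOZ plane, take the
// per-column minimum depth f(y), then read the fingertips off the two
// local minima of f and the web point B off the maximum between them.
// Assumes the arm mask yields at least two local minima.
void findFingertips(const std::vector<Point3>& arm, int width,
                    Point3& I, Point3& T, Point3& B) {
    const float INF = std::numeric_limits<float>::infinity();
    std::vector<float> f(width, INF);                  // Eq. (6)
    for (const Point3& p : arm) {
        int y = static_cast<int>(p.y);
        if (y >= 0 && y < width) f[y] = std::min(f[y], p.z);
    }
    // Local minima of f: the fingertips poke out towards the camera.
    std::vector<int> minima;
    for (int y = 1; y + 1 < width; ++y)
        if (f[y] < INF && f[y] <= f[y - 1] && f[y] <= f[y + 1])
            minima.push_back(y);
    // Keep the two shallowest minima (closest points to the Kinect).
    std::sort(minima.begin(), minima.end(),
              [&](int a, int b) { return f[a] < f[b]; });
    int y2 = minima.at(0), y3 = minima.at(1);          // I at y2, T at y3
    // B: maximum of f between the two fingertips (the web of the hand).
    int lo = std::min(y2, y3), hi = std::max(y2, y3), y1 = lo;
    for (int y = lo; y <= hi; ++y)
        if (f[y] < INF && f[y] > f[y1]) y1 = y;
    I = { meanX(arm, y2, f[y2]), float(y2), f[y2] };   // Eq. (7)
    T = { meanX(arm, y3, f[y3]), float(y3), f[y3] };   // Eq. (8)
    B = { meanX(arm, y1, f[y1]), float(y1), f[y1] };   // Eq. (9)
}
```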

3. Position model

To avoid large-scale motion when the operator performs a manipulation, we need to confine the working space of the operator to a relatively small space. However, the working space of the remote robot should not be limited. This means that a mapping from a relatively small space to an unconfined large space is necessary. A direct mapping from the small space to a larger space would lose precision. To avoid this problem, we adopt a differential positioning method.

Similar to a mouse or keyboard, the position of the hand can be calculated by an incremental method. From Section 2, the 3D positions of B, T and I are calculated in the world coordinate system, as shown in Figure 3. The initial position and orientation of the robot end-effector at the starting point are stored as the robot reference position and orientation, respectively. The position of the robot tool-control point on the end-effector is controlled by position B of the human operator's hand.

Define the 3D positions of I, T and B in the current frame as I(x,y,z), T(x,y,z) and B(x,y,z), respectively. Define the length of the line segment joining the index fingertip (I) and the thumb tip (T) of the operator's hand as L (shown in Figure 5):

$$L = \lVert\, T(x,y,z) - I(x,y,z) \,\rVert \qquad (10)$$

The 3D position of B in the last frame is B″(x,y,z). The end-effector reference position in the last frame is P″(x,y,z), and the new end-effector reference position P′ is updated as:

$$P' = \begin{cases} P'' + \lambda\,\big(B - B''\big), & L > u \\ P'', & L \le u \end{cases} \qquad (11)$$

where λ is an adjustable scaling factor and u is a threshold that determines whether the robot keeps moving or pauses. When L = 0, the operator has stopped controlling the robot, as shown in Figure 4.
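A minimal sketch of the differential positioning rule (10)-(11) follows; the vector type and the function name are ours, and λ and u are the tunable scale factor and pause threshold described above.

```cpp
#include <cmath>

struct Vec3 {
    float x, y, z;
    Vec3 operator-(const Vec3& o) const { return { x - o.x, y - o.y, z - o.z }; }
    Vec3 operator+(const Vec3& o) const { return { x + o.x, y + o.y, z + o.z }; }
    Vec3 operator*(float s) const { return { x * s, y * s, z * s }; }
    float norm() const { return std::sqrt(x * x + y * y + z * z); }
};

// Sketch of Eqs. (10)-(11): move the end-effector reference by the scaled
// hand increment only while the pinch is open wider than the threshold u.
// lambda trades coarse control (large) against fine control (small).
Vec3 updateReference(const Vec3& P_prev,   // last end-effector reference P''
                     const Vec3& B_curr,   // hand point B, current frame
                     const Vec3& B_prev,   // hand point B, last frame
                     const Vec3& I, const Vec3& T,
                     float lambda, float u) {
    const float L = (T - I).norm();        // Eq. (10): pinch distance
    if (L <= u) return P_prev;             // pinch closed: robot pauses
    return P_prev + (B_curr - B_prev) * lambda;   // Eq. (11)
}
```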

Figure 4. Hand pose

Figure 5. Positioning model

Because λ is an adjustable parameter, the space manipulated by the operator is theoretically infinite, and we can obtain coarse control and fine control by adjusting the value of λ.

4. Orientation Model

As described in Figure 6, the orientation of the end-effector accords with the orientation formed by the thumb tip, the index fingertip and the part of the hand between the thumb and the index finger.

Figure 6. Orientation model

The orientation of the end-effector is calculated using the 3D positions of I, T and B. In the mapping of the operator's hand to the robot-tool coordinate system, the line from B to the midpoint M of the line segment joining the index fingertip (I) and the thumb tip (T) of the operator's hand is mapped to the robot-tool axis X (Figure 4), and the XY plane is defined by B, I and T.

This means that if we obtain the transformation matrix from the coordinate system of the console to the coordinate system of the operator's hand, we can obtain the transformation matrix from the base coordinate system to the end-effector. The details of the derivation of the orientation matrix are given below.

Assume that the origin of the operator's hand coordinate system is identical to that of the console coordinate system and that the transformation matrix is a 3×3 matrix M. Let point A in the operator's hand coordinate system transfer to point A′ in the console coordinate system; we then have:

$$A' = MA \qquad (12)$$

In hand tracking and positioning, the unit vectors [x1, x2, x3], [y1, y2, y3] and [z1, z2, z3] along directions X, Y and Z can be measured by the Kinect, yielding:

$$\begin{bmatrix} m_{11} & m_{12} & m_{13} \\ m_{21} & m_{22} & m_{23} \\ m_{31} & m_{32} & m_{33} \end{bmatrix} \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} \qquad (13)$$

$$\begin{bmatrix} m_{11} & m_{12} & m_{13} \\ m_{21} & m_{22} & m_{23} \\ m_{31} & m_{32} & m_{33} \end{bmatrix} \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix} = \begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix} \qquad (14)$$

$$\begin{bmatrix} m_{11} & m_{12} & m_{13} \\ m_{21} & m_{22} & m_{23} \\ m_{31} & m_{32} & m_{33} \end{bmatrix} \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} = \begin{bmatrix} z_1 \\ z_2 \\ z_3 \end{bmatrix} \qquad (15)$$

Through (13), (14) and (15), we can get:

$$M = \begin{bmatrix} m_{11} & m_{12} & m_{13} \\ m_{21} & m_{22} & m_{23} \\ m_{31} & m_{32} & m_{33} \end{bmatrix} = \begin{bmatrix} x_1 & y_1 & z_1 \\ x_2 & y_2 & z_2 \\ x_3 & y_3 & z_3 \end{bmatrix} \qquad (16)$$
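The rotation part of M is therefore just the three measured unit axes written as columns. A sketch of how those axes can be obtained from the tracked points B, I and T, following the geometric description above (axis X from B to the midpoint of segment IT, the XY plane through B, I and T), is given below; the Gram-Schmidt orthogonalization and all names are our reading, since the paper gives no explicit formula for this step.

```cpp
#include <cmath>

struct V3 { float x, y, z; };
static V3 sub(V3 a, V3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static float dot(V3 a, V3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static V3 cross(V3 a, V3 b) {
    return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}
static V3 normalized(V3 a) {
    float n = std::sqrt(dot(a, a));
    return { a.x / n, a.y / n, a.z / n };
}

// Build the hand frame of Section 4: X runs from B to the midpoint of
// segment IT, the XY plane contains B, I and T, and Z completes a
// right-handed frame. The columns [X Y Z] give the rotation part of the
// matrix M in Eqs. (13)-(16).
void handFrame(V3 B, V3 I, V3 T, V3& X, V3& Y, V3& Z) {
    // Midpoint of segment IT (called M in the text; renamed to avoid
    // clashing with the matrix M).
    V3 mid = { 0.5f * (I.x + T.x), 0.5f * (I.y + T.y), 0.5f * (I.z + T.z) };
    X = normalized(sub(mid, B));               // robot-tool axis X
    V3 v = sub(I, B);                          // any vector in the XY plane
    float c = dot(v, X);
    Y = normalized({ v.x - c * X.x, v.y - c * X.y, v.z - c * X.z });
    Z = cross(X, Y);                           // right-handed completion
}
```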

As stated before, the transformation matrix from the console coordinate system to the operator's hand coordinate system is identical to the one from the base coordinate system to the end-effector coordinate system, and the translation relationship between the end-effector and the base coordinate system has already been obtained in the positioning model, so the homogeneous transformation matrix of the orientation is:

$$M = \begin{bmatrix} x_1 & y_1 & z_1 & p_1 \\ x_2 & y_2 & z_2 & p_2 \\ x_3 & y_3 & z_3 & p_3 \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad (17)$$

Through (17), the angles of the six joints $(\theta_1, \theta_2, \dots, \theta_6)$ can be obtained by inverse kinematics. Note that $[\,p_1, p_2, p_3\,]$ is the translation vector from the base coordinate system to the end-effector.

5. Virtual Robot Manipulation System

We use a six degree-of-freedom industrial robot to perform this experiment, as shown in Figure 7. The task is to grab a target object in the robot's working space and then place the object at a destination.

There are two working modes for the robot. The first is to calculate the angle of every joint by inverse kinematics according to the position of the end-effector. After the joints execute all the requested angles, the end-effector of the virtual robot reaches the destination. This mode is suitable for situations where no obstacle occurs in the workspace of the virtual robot. The second mode is suitable for situations where an obstacle shows up in the virtual robot's working space. In this mode the virtual robot has to move along a safe path, which ensures that the virtual robot will not collide with the obstacle.

In the D-H representation, $A_i$ denotes the homogeneous coordinate transformation matrix from coordinate frame i−1 to frame i:

$$A_i = \begin{bmatrix} \cos\theta_i & -\sin\theta_i\cos\alpha_i & \sin\theta_i\sin\alpha_i & l_i\cos\theta_i \\ \sin\theta_i & \cos\theta_i\cos\alpha_i & -\cos\theta_i\sin\alpha_i & l_i\sin\theta_i \\ 0 & \sin\alpha_i & \cos\alpha_i & r_i \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad (18)$$

where $\theta_i$ is the joint angle, $\alpha_i$ the link twist, $l_i$ the link length and $r_i$ the link offset.

For a robot with six joints, the homogeneous coordinate transformation matrix from the base coordinate system to the end-effector's coordinate system is defined as:

$$T_6 = A_1 A_2 \cdots A_6 = \begin{bmatrix} n_6^0 & s_6^0 & a_6^0 & p_6^0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad (19)$$

where $n_6^0$ is the roll vector of the end-effector, $s_6^0$ the pitch vector, $a_6^0$ the yaw vector and $p_6^0$ the position vector.

Using (17) and (19), we have:

$$T_6 = M \qquad (20)$$
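As a sketch of (18)-(20), the following C++ fragment builds the per-joint D-H transform and chains the six transforms into T6; the parameter naming follows (18), while the function names and the dense 4×4 representation are our choices.

```cpp
#include <array>
#include <cmath>

using Mat4 = std::array<std::array<double, 4>, 4>;

// Eq. (18): homogeneous D-H transform from frame i-1 to frame i, with
// joint angle theta, link twist alpha, link length l and link offset r.
Mat4 dhTransform(double theta, double alpha, double l, double r) {
    const double ct = std::cos(theta), st = std::sin(theta);
    const double ca = std::cos(alpha), sa = std::sin(alpha);
    return {{ { ct, -st * ca,  st * sa, l * ct },
              { st,  ct * ca, -ct * sa, l * st },
              { 0.0,      sa,       ca,      r },
              { 0.0,     0.0,      0.0,    1.0 } }};
}

Mat4 multiply(const Mat4& A, const Mat4& B) {
    Mat4 C{};   // zero-initialized accumulator
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k) C[i][j] += A[i][k] * B[k][j];
    return C;
}

// Eq. (19): chain the six joint transforms into T6. Equating T6 with the
// matrix M of Eq. (17), as in Eq. (20), is what the inverse kinematics
// solves for the joint angles theta.
Mat4 forwardKinematics(const std::array<double, 6>& theta,
                       const std::array<double, 6>& alpha,
                       const std::array<double, 6>& l,
                       const std::array<double, 6>& r) {
    Mat4 T = dhTransform(theta[0], alpha[0], l[0], r[0]);
    for (int i = 1; i < 6; ++i)
        T = multiply(T, dhTransform(theta[i], alpha[i], l[i], r[i]));
    return T;
}
```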

Figure 7. Six-axis robot manipulator used at the remote robot site

6. Experiments

We evaluated the algorithm on our robot platform. For testing, we built an experimental teleoperation environment: a set of emulation environments for the industrial robot and a video-based virtual reality system at the local site, with the real robot in its working environment at the remote site. In this experiment, considering the real environment of teleoperation, we limited the bandwidth to 30 kB/s, with a delay time of approximately 3 seconds.

To evaluate the Kinect-based teleoperation algorithm described in this paper, we used C++ to develop a Kinect-based human-robot interface system (Figure 8) for the teleoperation of a six-axis industrial robot. This experimental system includes three modules (a minimal sketch of the resulting control loop follows this list):
1) The human hand tracking and positioning system captures the hand images and then calculates the 3D positions of T (the thumb tip), I (the index fingertip) and B (the part of the hand between the thumb and the index finger).
2) The virtual robot manipulation system drives the virtual robot based on the joint angles calculated through inverse kinematics. If the commands are safe, they are transmitted to the remote site to control the real robot.
3) The remote site transmits the video to the local site, and the video fusion system displays the virtual environment and the real environment together; the edges of the virtual robot overlay the video frame transmitted from the remote site.
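At the level of per-frame control flow, the three modules combine into a simple loop: track the hand, solve the inverse kinematics, verify the command on the virtual robot and only then forward it to the remote site. The sketch below illustrates that loop; every function here is a stub standing in for the corresponding subsystem, not an API from the paper.

```cpp
#include <iostream>

struct Pose { double p[3]; double rpy[3]; };

// Stub subsystems standing in for the three modules of Section 6; each
// body is a placeholder, not the paper's implementation.
Pose trackHand()                  { return {}; }  // module 1: Kinect tracking
bool solveInverseKinematics(const Pose&, double[6]) { return true; }
bool commandIsSafe(const double[6]) { return true; }  // virtual-robot check
void transmitToRemoteSite(const double[6]) {}         // module 3 uplink

int main() {
    // Per-frame teleoperation loop: track, solve, verify on the virtual
    // robot, and only then forward the joint command to the real robot.
    for (int frame = 0; frame < 3; ++frame) {   // stand-in for a live loop
        Pose hand = trackHand();
        double joints[6];
        if (!solveInverseKinematics(hand, joints)) continue;
        if (commandIsSafe(joints)) transmitToRemoteSite(joints);
        else std::cout << "unsafe command suppressed\n";
    }
}
```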

7. Result

After reconstructing and controlling the robot by inverse kinematics, the precision of manipulation decreases because of the transformation of the coordinate system and the solving of the equation set.

Figure 9 shows the position and orientation of the robot's end-effector and the operator's hand during the teleoperation experiments. The dashed line represents the end-effector's path. The solid line with green squares represents the path of the operator's hand. The virtual robot was manipulated to grab a ball placed on a square. The data generated by this experiment show that the position errors ranged from −13 to +13 mm and the orientation errors ranged from −2 to +2 degrees. Figure 9(c,d,e) shows the X, Y and Z displacements of the end-effector and the hand, while their rotations are shown in Figure 9(f,g,h).

In the experiment, the operator placed his hand in the workspace to control the virtual robot. The orientation of the virtual robot's end-effector coincided with the human hand. The position of the virtual robot's end-effector was adjusted by moving the human hand through different faces of the direction space, as shown in Figure 3.

As shown in Figure 3, the way the operator controls the robot is natural and intuitive. Because an incremental method similar to keyboard control is used, the operator is not required to make large-scale movements to control the robot.

Figure 8. Non-invasive vision-based teleoperation system

8. Discussion

In the remote unstructured environment of robot teleoperation, we assume that all the remote-robot-site components, including the robotic arm, the robot controller, cameras on the end-effectors and some other cameras, can be installed on a mobile platform and enter those unstructured environments. The method shown here has been proved on grabbing objects, picking up objects and positioning accurately while grabbing objects in the fine-adjustment control mode. One advantage of this system is that it includes the operator in the decision and control loop. It allows a robot to grab, move and place an object without any prior knowledge, such as the starting location or even the destination location. There are similar tasks which require decision making when picking up objects and targets from among multiple objects, such as packing, or clearing away objects which may contain dangerous items. It is expected that this system can be used to achieve more complex poses when the joints of the robot are limited. The hole task shows how to determine the positions of an extruded body and a target hole placed randomly. Assembly and disassembly may include more constrained hole tasks; for these, an appropriate grab hook, a bigger hole and a groove may be needed unless this system includes force feedback.

Figure 9. Analysis of the experiment

Compared with automatic capture (Kofman et al. [6]), this algorithm uses manual positioning. Considering hand tremor, this algorithm includes coarse-adjustment and fine-adjustment functions. When guiding the robot, we can use the coarse adjustment to move the robot close to the target quickly. When grabbing the target, we can use the fine adjustment to position the robot accurately. This ensures the safety and efficiency of the teleoperation and solves the problem of inaccuracy caused by manual operation.

This paper contributes a guiding teleoperation system based on non-contact measurement. By using Kinect-based tracking, robot teleoperation allows the operator to control the robot in a more natural way. Generally speaking, the operation task can be accomplished using the same hand motion that would naturally be used in the task and, what is more, this Kinect-based tracking is non-contact. Thus, compared with the commonly used contacting electromagnetic devices, sensor-based devices and data gloves, non-contact devices may cause less hindrance to natural human-limb motion. The method proposed here allows the operator to focus on the task instead of thinking of how to decompose the commands into the simple commands that a voice-recognition teleoperation system can understand. This method is more natural and intuitive than the operation in Kofman et al. [23]. The system can be used immediately without any initialization, and this non-contacting control system can be used outdoors. Because this algorithm uses infrared distance measurement to get the arm information, it can ignore lighting effects and does not need to extract the 3D coordinates by accurate image processing. That allows the system to be used in more severe environments, for example when it is too bright or too dark. In addition, the algorithm of [23] needs a bare hand to recognize the colour of the skin; otherwise, it cannot be used to extract the hand data. Compared with that algorithm, this algorithm does not require a bare hand, and the operator can wear gloves when using the system in a cold outdoor working environment. That enlarges the field of application of the system.

9. Conclusion

A method of human-robot interaction using markerless Kinect-based tracking of the human hand for robot-manipulator teleoperation has been presented. Via real-time tracking of the thumb tip, the index fingertip and the part of the hand between the thumb and the index finger, the 3D position and orientation of the hand are computed accurately, and the robot manipulator can be controlled by hand to perform pick-and-place tasks. To complete more complex tasks, multiple Kinects will be used together in future work.

10. References

[1] Yussof H, Capi G, Nasu Y, Yamano M, Ohka M. A CORBA-Based Control Architecture for Real-Time Teleoperation Tasks in a Developmental Humanoid Robot. International Journal of Advanced Robotic Systems, 8(2):29-48, 2011.
[2] Mitsantisuk C, Katsura S, Ohishi K. Force Control of Human-Robot Interaction Using Twin Direct-Drive Motor System Based on Modal Space Design. IEEE Transactions on Industrial Electronics, 57(4):1338-1392, 2010.
[3] Hirche S, Buss M. Human-Oriented Control for Haptic Teleoperation. Proceedings of the IEEE, 100(3):623-647, 2012.
[4] Villaverde AF, Raimundez C, Barreiro A. Passive Internet-based Crane Teleoperation with Haptic Aids. International Journal of Control, Automation and Systems, 10(1):78-87, 2012.
[5] Wang Z, Giannopoulos E, Slater M, Peer A, Buss M. Handshake: Realistic Human-Robot Interaction in Haptic Enhanced Virtual Reality. Presence: Teleoperators and Virtual Environments, 20(4):371-392, 2011.
[6] Kofman J, Wu X, Luu T, Verma S. Teleoperation of a robot manipulator using a vision-based human-robot interface. IEEE Transactions on Industrial Electronics, 52(5):1206-1219, 2005.
[7] Lathuilière F, Hervé J-Y. Visual hand posture tracking in a gripper guiding application. Proceedings of the International Conference on Robotics and Automation (ICRA), 1688-1694, 2000.
[8] Guanglong Du, Ping Zhang, Liying Yang, Yanbin Su. Robot teleoperation using a vision-based manipulation method. Audio, Language and Image Processing (ICALIP), 2010 International Conference, 945-949, 2010.
[9] Peer A, Pongrac H, Buss M. Influence of Varied Human Movement Control on Task Performance and Feeling of Telepresence. Presence: Teleoperators and Virtual Environments, 19(5):463-481, 2010.
[10] Borghese NA, Rigiroli P. Tracking densely moving markers. IEEE First International Symposium on 3D Data Processing and Transmission, Padova, 682-685, 2002.
[11] Verma S. Vision-based markerless 3D human-arm tracking. M.A.Sc. Thesis, Department of Mechanical Engineering, University of Ottawa, Ottawa, Canada, 2004.
[12] Goncalves L, Di Bernardo E, Ursella E, Perona P. Monocular tracking of the human arm in 3D. Proceedings of the IEEE International Conference on Computer Vision (ICCV '95), 764-770, 1995.
[13] Kakadiaris IA, Metaxas D, Bajcsy R. Active part-decomposition, shape and motion estimation of articulated objects: a physics-based approach. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 980-984, 1994.
[14] Ueda E, Matsumoto Y, Imai M, Ogasawara T. Hand pose estimation for vision-based human interface. 10th IEEE International Workshop on Robot and Human Communication (RO-MAN 2001), 473-478, 2001.
[15] Rosales R, Sclaroff S. Inferring body pose without tracking body parts. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2:721-727, 2000.
[16] Koara K, Nishikawa A, Miyazaki F. Contour-based hierarchical part decomposition method for human body motion analysis from video sequences. In: Human Friendly Mechatronics, ed. E. Arai, T. Arai and M. Takano, Elsevier Science, 2001.
[17] MacCormick J, Isard M. Partitioned sampling, articulated objects, and interface-quality hand tracking. Proceedings of the European Conference on Computer Vision, 2:3-19, 2000.
[18] Triesch J, von der Malsburg C. A system for person-independent hand posture recognition against complex backgrounds. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(12):1449-1453, 2002.
[19] Fong T, Conti F, Grange S, Baur C. Novel interfaces for remote driving: gesture, haptic and PDA. SPIE Telemanipulator & Telepresence Technologies VII, 4195:300-311, 2000.
[20] Hu C, Meng MQ-H, Liu PX, Wang X. Visual gesture recognition for human-machine interface of robot teleoperation. IEEE/RSJ International Conference on Intelligent Robots and Systems, USA, 1560-1565, 2003.

10 Int J Adv Robotic Sy, 2012, Vol. 9, 36:2012

[21] Moy MC. Gesture-based interaction with a pet robot. Proceedings of the 16th National Conference on Artificial Intelligence and 11th Conference on Innovative Applications of Artificial Intelligence, USA, 628-633, 1999.
[22] Ionescu B, Coquin D, Lambert P, Buzuloiu V. Dynamic hand gesture recognition using the skeleton of the hand. EURASIP Journal on Applied Signal Processing, 13:2101-2109, 2005.
[23] Kofman J, Verma S, Wu X. Robot-Manipulator Teleoperation by Markerless Vision-Based Hand-Arm Tracking. International Journal of Optomechatronics, 1:331-357, 2007.

