Keywords: robot manipulator, markerless, Kinect

1. Introduction
www.intechopen.com
Because vision-based techniques are non-contact and pose less hindrance to hand-arm motions, they have also been used. Vision-based methods typically use physical markers placed on the anatomical body part (Kofman et al. [6]; Lathuilière and Hervé [7]; Guanglong Du et al. [8]). There are many applications (Peer et al. [9]; Borghese and Rigiroli [10]; Kofman et al. [6]) using marker-based human motion tracking; however, because body markers may hinder the motion in highly dexterous tasks and may become occluded, marker-based tracking is not always practical. Thus, a markerless approach seems better for many applications.
Int J Adv Robotic Sy, 2012, Vol. 9, 36:2012
Guanglong Du, Ping Zhang, Jianhua Mai and Zeling Li: Markerless Kinect-Based Hand Tracking for Robot Teleoperation
gestures only. If a human operator wants to use gestures, he/she needs to think in terms of the limited, separate commands that the human-robot interface can understand, such as move up, down, forward and so on. A better form of human-robot interaction would permit the operator to focus on the complex global task, as a human naturally does when grasping and manipulating objects in 3D space, instead of thinking about what type of hand motions are required. To achieve this goal, a method is needed that allows the operator to complete the task using natural hand-arm motions, providing the robot with information about the hand-arm motion in real time, such as the hand and arm anatomical position and orientation (Kofman et al. [23]). However, to achieve the initialization, the human operator must assume a simple posture with an unclothed arm in front of a dark background, with the hand placed higher than the shoulder. It is not possible to get a precise result with a complex background. In addition, the human operator would find it hard to work in cold weather, as the arm is unclothed. The method is also limited by lighting effects, i.e., it is difficult to use when it is too bright or too dark.
Figure 1. Non-invasive robot teleoperation system based on the Kinect
2. Human hand tracking and positioning system

Human hand tracking and positioning is carried out by continuously processing RGB images and depth images of an operator who is performing the hand motion to complete a robot manipulation task. The RGB images and depth images are captured by the Kinect, which is fixed in front of the operator.
2.1 Kinect coordinate system

In Figure 2, an operator stands in front of the Kinect and controls a robot. We define the Kinect coordinate system as shown in Figure 2: axis X points upward, axis Y points rightward and axis Z points along the viewing (depth) direction. The Kinect can capture the depth of any object in its workspace. In Figure 2 we can see the index finger tip (I), the thumb tip (T) and the part of the hand between the thumb and the index finger (B). The distances between the Kinect and I, B, T and U all differ: I and T are closest to the Kinect and the upper arm U is furthest. The 3D position of B is used to control the position of the robot end-effector, while I, T and B together are used to control the orientation of the robot end-effector.

Figure 2. Depth of objects. K: Kinect; I: index finger tip; T: thumb tip; B: the part of the hand between the thumb and the index finger; U: upper arm.
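As a minimal sketch of how a depth pixel such as I, T or B can be turned into a 3D point in the Kinect coordinate system, pinhole back-projection can be used. The intrinsic parameters (FX, FY, CX, CY) below are illustrative calibration values, not taken from the paper:

```python
import numpy as np

# Illustrative Kinect depth-camera intrinsics (assumed, from calibration).
FX, FY = 585.0, 585.0      # focal lengths in pixels
CX, CY = 320.0, 240.0      # principal point for a 640x480 depth image

def pixel_to_3d(u, v, depth_mm):
    """Back-project depth pixel (u, v) with depth in millimetres to a 3D
    point in the Kinect camera frame, using the pinhole camera model."""
    z = depth_mm / 1000.0          # depth in metres
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return np.array([x, y, z])

# Example: the fingertip I is closer to the Kinect than the upper arm U,
# so its depth (z) is smaller.
I = pixel_to_3d(300, 200, 800)    # fingertip at 0.8 m
U = pixel_to_3d(350, 300, 1100)   # upper arm at 1.1 m
```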
2.2 Image capture and segmentation of hand

In order to capture the hand motion used for controlling the robot manipulator, we need to separate the hand from the depth image. The arm is segmented from the body by thresholding the raw depth image:
$$C_b(i,j) = \{\, d(i,j) \mid d(i,j) \le T;\; d(i,j) \in D;\; i = 1,2,\dots,n;\; j = 1,2,\dots,m \,\} \tag{1}$$

Where d(i, j) is a pixel of the depth image D, n is the width of D and m is the height of D.
$$\bar{C} = \frac{1}{mn}\sum_{i=1}^{n}\sum_{j=1}^{m} C_b(i,j) \tag{2}$$
Then the arm region A(i, j) can be divided accordingly; the arm region A is shown in Figure 3c.
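A minimal sketch of the depth-threshold segmentation of eq. (1); marking background pixels as 0 and the sample threshold value are our own assumptions, and the NumPy formulation is illustrative:

```python
import numpy as np

def segment_arm(depth, T):
    """Keep only valid pixels closer than threshold T (eq. 1): since the
    operator faces the Kinect, the hand/arm occupies the smallest depths.
    Background pixels are marked with 0 (an assumed convention)."""
    return np.where((depth > 0) & (depth <= T), depth, 0)

# Toy 3x3 depth image in millimetres: the arm (<= 1000 mm) is in front
# of the body (> 1000 mm).
depth = np.array([[1200,  900,  850],
                  [1150,  820,  800],
                  [1300, 1250, 1240]], dtype=float)
A = segment_arm(depth, T=1000)   # body pixels become 0, arm pixels survive
```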
2.3 Determination of thumb and index finger tip positions

Define the minimize-project function f over the segmented arm region A, which takes the minimal depth in each image row:

$$f(y) = \min_{x} A(x, y)$$

Determine the one maximum (at y = y1) and the two minima (at y = y2, y = y3) of the minimize-project function. Then the 3D points of I, T and B can be reconstructed by:

$$I(x, y, z):\quad x = \sum_{x'} A_{3d}\big(x', y_2, f(y_2)\big),\quad y = y_2,\quad z = f(y_2) \tag{7}$$

$$T(x, y, z):\quad x = \sum_{x'} A_{3d}\big(x', y_3, f(y_3)\big),\quad y = y_3,\quad z = f(y_3) \tag{8}$$

$$B(x, y, z):\quad x = \sum_{x'} A_{3d}\big(x', y_1, f(y_1)\big),\quad y = y_1,\quad z = f(y_1) \tag{9}$$

where A_{3d}(x', y, z) denotes the 3D reconstruction of arm pixel (x', y) at depth z.
3. Position model

Similar to the mouse and the keyboard, the position of the hand can be calculated by an incremental method. From Section 2, the 3D positions of B, T and I are calculated in the world coordinate system, as shown in Figure 3. The initial position and orientation of the robot end-effector at the starting point are also stored as the robot reference position and orientation, respectively. The position of the robot tool-control point on the end-effector is controlled by the position B of the human operator.
$$L = \lVert T(x, y, z) - I(x, y, z) \rVert \tag{10}$$
$$P' = \begin{cases} P'' + \Delta B, & L > u \\ P'', & L \le u \end{cases} \tag{11}$$

where P'' is the previous end-effector position and ΔB is the increment of the hand position B.
Where u is a threshold that determines whether the robot keeps moving or pauses. When L = 0, the operator stops controlling the robot, as shown in Figure 4.
Figure 4. Hand pose
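The incremental position model with the pinch-distance clutch of eqs. (10)-(11) can be sketched as below; the threshold value u = 0.04 m and the function name are illustrative assumptions, not values from the paper:

```python
import numpy as np

def update_position(P_prev, B_now, B_prev, I, T, u=0.04):
    """Incremental update: the end-effector follows the increment of point B
    only while the pinch distance L = ||T - I|| exceeds threshold u
    (eqs. 10-11); otherwise the robot pauses at its previous position."""
    L = np.linalg.norm(np.asarray(T) - np.asarray(I))
    if L > u:
        return P_prev + (np.asarray(B_now) - np.asarray(B_prev))  # keep moving
    return P_prev                                                 # pause

P = np.array([0.5, 0.0, 0.3])
# Fingers apart (L = 0.08 m > u): the hand increment moves the robot.
P = update_position(P,
                    B_now=[0.11, 0.0, 0.0], B_prev=[0.10, 0.0, 0.0],
                    I=[0.0, 0.0, 0.0], T=[0.08, 0.0, 0.0])
```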
Figure 5. Positioning model

4. Orientation Model

As described in Figure 6, the orientation of the end-effector is in accordance with the orientation formed by the thumb tip, the index finger tip and the part of the hand between the thumb and the index finger.

Figure 6. Orientation model

The orientation of the end-effector is calculated using the 3D positions of I, T and B. In the mapping of the operator hand to the robot-tool coordinate system, the line from B to the midpoint M of the line segment which joins the index finger tip (I) and the thumb tip (T) on the operator hand is mapped to the robot-tool axis X (Figure 4), and the XY plane is defined by B, I and T.

This means that if we only obtain the transformation matrix from the coordinate system of the console to the coordinate system of the operator's hand, we can obtain the transformation matrix from the base coordinate system to the end-effector. The details of the derivation of the orientation matrix are given below. The unit vectors [x1, x2, x3], [y1, y2, y3] and [z1, z2, z3] in the directions X, Y and Z can be measured by the Kinect, yielding:

$$\begin{bmatrix} m_{11} & m_{12} & m_{13} \\ m_{21} & m_{22} & m_{23} \\ m_{31} & m_{32} & m_{33} \end{bmatrix} \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} \tag{13}$$

$$\begin{bmatrix} m_{11} & m_{12} & m_{13} \\ m_{21} & m_{22} & m_{23} \\ m_{31} & m_{32} & m_{33} \end{bmatrix} \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix} = \begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix} \tag{14}$$

$$\begin{bmatrix} m_{11} & m_{12} & m_{13} \\ m_{21} & m_{22} & m_{23} \\ m_{31} & m_{32} & m_{33} \end{bmatrix} \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} = \begin{bmatrix} z_1 \\ z_2 \\ z_3 \end{bmatrix} \tag{15}$$

Through (13), (14) and (15), we can get:

$$\begin{bmatrix} m_{11} & m_{12} & m_{13} \\ m_{21} & m_{22} & m_{23} \\ m_{31} & m_{32} & m_{33} \end{bmatrix} = \begin{bmatrix} x_1 & y_1 & z_1 \\ x_2 & y_2 & z_2 \\ x_3 & y_3 & z_3 \end{bmatrix} \tag{16}$$

$$M = \begin{bmatrix} x_1 & y_1 & z_1 & p_1 \\ x_2 & y_2 & z_2 & p_2 \\ x_3 & y_3 & z_3 & p_3 \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{17}$$

Notice that [p1, p2, p3] is the translation vector from the base coordinate system to the end-effector.
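The construction of the hand orientation frame (axis X from B toward the midpoint M of segment IT, XY plane through B, I and T) can be sketched as follows; completing a right-handed frame with cross products is a standard choice we assume here, not a detail given in the paper:

```python
import numpy as np

def hand_frame(I, T, B):
    """Orthonormal frame from the three tracked points: X runs from B to
    the midpoint M of segment IT, the XY plane contains B, I and T, and
    Z completes the right-handed frame. Columns are [x | y | z], matching
    the rotation part of eq. (16)/(17)."""
    I, T, B = map(np.asarray, (I, T, B))
    M = 0.5 * (I + T)                   # midpoint of the fingertip segment
    x = M - B
    x = x / np.linalg.norm(x)           # unit X axis
    z = np.cross(T - B, I - B)          # normal of the B-I-T plane
    z = z / np.linalg.norm(z)           # unit Z axis
    y = np.cross(z, x)                  # unit Y axis, lies in the XY plane
    return np.column_stack([x, y, z])

# Symmetric toy configuration: the frame comes out as the identity.
R = hand_frame(I=[0.0, 1.0, 0.0], T=[0.0, -1.0, 0.0], B=[-1.0, 0.0, 0.0])
```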
5. Virtual Robot Manipulation System

There are two working modes for the robot. The first is to calculate the angle of every joint (θ1, θ2, …, θ6) by inverse kinematics according to the position of the end-effector. After the joints execute all of the requested angles, the end-effector of the virtual robot reaches the destination. This mode is suitable for situations where no obstacle is present in the workspace of the virtual robot. The second mode is suitable for situations where an obstacle shows up in the virtual robot's working space. In this mode the virtual robot has to move along a safe path, which ensures that the virtual robot will not collide with the obstacle.
$$A_i = \begin{bmatrix} \cos\theta_i & -\sin\theta_i\cos\alpha_i & \sin\theta_i\sin\alpha_i & l_i\cos\theta_i \\ \sin\theta_i & \cos\theta_i\cos\alpha_i & -\cos\theta_i\sin\alpha_i & l_i\sin\theta_i \\ 0 & \sin\alpha_i & \cos\alpha_i & r_i \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{18}$$

$$T_6 = A_1 A_2 \cdots A_6 = [\, n_6 \;\; o_6 \;\; a_6 \;\; p_6 \,] \tag{19}$$

Where p_6 is the position vector. Using (17) and (19), we have:

$$T_6 = M \tag{20}$$
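The Denavit-Hartenberg link transform of eq. (18) and the chain product of eq. (19) can be sketched numerically as below; the DH parameter values are placeholders, not those of the actual manipulator:

```python
import numpy as np

def dh(theta, alpha, l, r):
    """Denavit-Hartenberg link transform A_i of eq. (18);
    theta and alpha in radians, l and r in metres."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ ct, -st * ca,  st * sa, l * ct],
                     [ st,  ct * ca, -ct * sa, l * st],
                     [0.0,       sa,       ca,      r],
                     [0.0,      0.0,      0.0,    1.0]])

# T6 = A1 A2 ... A6 (eq. 19); joint angles and link parameters here are
# placeholder values for illustration only.
params = [(0.1 * i, np.pi / 2, 0.2, 0.05) for i in range(6)]
T6 = np.eye(4)
for th, al, l, r in params:
    T6 = T6 @ dh(th, al, l, r)   # chain the six link transforms
```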
Figure 7. Six-axis robot manipulator used at the remote robot site (a, b)
6. Experiments

We evaluated the algorithm on our robot platform. For testing, we built an experimental teleoperation environment: a set of emulation environments for the technical robot and a set of video-based virtual-reality systems at the local site, while the remote site is the real robot in its working environment. In this experiment, to reflect a realistic teleoperation environment, we limited the bandwidth to 30 kB/s, with a delay time of approximately 3 seconds.
7. Results

Figure 9 shows the position and orientation of the robot's end-effector and the operator's hand during the teleoperation experiments. The dashed line represents the end-effector's path, and the solid line with green squares represents the path of the operator's hand. The virtual robot was manipulated to grab a ball placed on a square. The data generated by this experiment show that the position errors ranged from -13 to +13 mm and the orientation errors ranged from -2 to +2 degrees. Figure 9(c, d, e) shows the X, Y and Z displacements of the end-effector and the hand, while their rotations are shown in Figure 9(f, g, h).
Figure 8. Non-invasive vision-based teleoperation system

8. Discussion
object without any prior knowledge such as the starting location or even the destination location. There are similar tasks that require decision-making when picking up objects and targets from among multiple objects, such as packing, or clearing away objects that may contain dangerous items. It is expected that this system can be used to achieve those more complex poses when the joints of the robot are limited. The hole task shows how to determine the position of an extruded body and a randomly placed target hole. Assembly and disassembly may include more constrained hole tasks. We may need an appropriate grab hook, a bigger hole and a groove unless this system includes force feedback.
Figure 9. Analysis of the experiment (a-h)
9. Conclusion

10. References
[17]
[18]
[19]
[20]
[21] Moy, Milyn C. 1999. Gesture-based interaction with a Pet Robot. Proceedings of the 16th National Conference on Artificial Intelligence and 11th Conference on Innovative Applications of Artificial Intelligence, USA, 628-633.
[22] Ionescu, Bogdan, Coquin, Didier, Lambert, Patrick and Buzuloiu, Vasile. 2005. Dynamic hand gesture recognition using the skeleton of the hand. Journal on Applied Signal Processing 13: 2101-2109.
[23] Kofman, Jonathan, Verma, Siddharth and Wu, Xianghai. 2007. Robot-Manipulator Teleoperation by Markerless Vision-Based Hand-Arm Tracking. International Journal of Optomechatronics 1: 331-357.