
2012 Third International Conference on Intelligent Control and Information Processing July 15-17, 2012 - Dalian, China

A Survey of Vision Based Autonomous Aerial Refueling for Unmanned Aerial Vehicles
Borui Li, Chundi Mu, and Botao Wu

Abstract—Unmanned Aerial Vehicles (UAVs) are expected to play a role similar to that of manned aircraft in both military and civilian fields. At present, the major shortcoming of UAVs is their limited payload and endurance, which significantly impacts their performance in long-duration and complex missions. To overcome this shortcoming, the capability of in-flight aerial refueling has become a critical requirement for UAVs. The purpose of this paper is to provide a survey of vision-based techniques that can be applied to autonomous aerial refueling for UAVs, including vision-based navigation and visual servo control. Multi-sensor fusion methods and simulation approaches for autonomous aerial refueling are also discussed.

I. INTRODUCTION

The techniques of Unmanned Aerial Vehicles (UAVs) have developed rapidly during the past 20 years, and UAVs are playing an increasingly important role in both military and civilian fields [1, 2]. They are widely used in applications such as reconnaissance, surveillance, real-time monitoring, air-to-ground attack, and other high-risk tasks. However, limited payload and endurance significantly restrict UAV applications. Giving UAVs the capability of autonomous aerial refueling (AAR) is a very effective way to overcome this shortcoming and improve UAV performance: AAR considerably increases UAV mission capabilities by extending their endurance and range [1, 3].

The development of autonomous aerial refueling for UAVs (UAV-AAR) builds on existing in-flight aerial refueling (AR) technology for manned aircraft. There are two major methods of in-flight AR [1, 4]. The first is the probe-and-drogue refueling (PDR) method, in which the receiver aircraft uses a probe mounted on its forebody to connect to a drogue trailed from the tanker; once the refueling operation is finished, the receiver slows down to disconnect. The second is the boom-and-receptacle refueling (BRR) method, in which a boom operator on board the tanker is in charge of the refueling and disconnecting operations while the two aircraft fly in formation, and the receiver aircraft has to maintain an appropriate pose with respect to the tanker. Compared with the BRR method, the PDR method is simpler and easier to implement. However, the PDR method suffers from poorer adaptability to bad weather and a longer refueling time due to its slower fuel transfer rate, which are undesirable in many UAV missions [1]. To extend the two existing AR methods to meet the demands of UAV-AAR, many efforts have been devoted to both the PDR and the BRR methods over the last decade [4-9], and recent research has gradually focused on the BRR method.

During UAV-AAR it is critical to estimate the relative pose between the tanker and the UAV, which requires sufficiently accurate and reliable navigation and control systems. In the literature, traditional navigation and control techniques for UAVs are generally based on the Global Positioning System (GPS), the Inertial Navigation System (INS), and similar sensors. However, these have limitations: the GPS signal may be blocked by the tanker during UAV-AAR, because the tanker is usually much larger than the UAV [5], and the main drawback of the INS is integration drift. Computer vision has therefore been proposed as an alternative or a complement to the above sensor systems [10]. A vision-based sensor system, usually consisting of one or more cameras, is well suited to UAVs because it is lightweight, low-cost, and provides rich information about the surrounding environment. By fusing data from visual sensors and other sensor systems, more precise pose estimation can be obtained [6]. The general block diagram of a typical vision-based UAV-AAR navigation and control system is shown in Fig. 1.

Fig. 1. Block diagram of a typical vision-based UAV-AAR navigation and control system.

Manuscript received April 9, 2012. Borui Li is with the Department of Automation, Tsinghua University, Beijing, China (e-mail: lbr07@mails.tsinghua.edu.cn). Chundi MU is with the Department of Automation, Tsinghua University, Beijing, China (e-mail: muchd@mail.tsinghua.edu.cn). Botao Wu is with the Department of Automation, Tsinghua University, Beijing, China (e-mail: lottan3@gmail.com).

Although many vision-based techniques for UAVs are already available, there is still a pressing need to develop more appropriate and robust vision-based techniques for UAV-AAR, because UAV-AAR demands high-frequency updates and centimeter-level navigation accuracy [4, 11]. In this paper, we discuss various issues associated with vision-based UAV-AAR and review the relevant computer vision techniques.

The rest of this paper is organized as follows. Section II introduces practical mapless vision-based navigation methods. Section III presents three major approaches to visual servo control. Multi-sensor fusion techniques are discussed in Section IV. Section V summarizes existing simulation approaches for UAV-AAR using both the PDR and the BRR methods. Finally, conclusions are given in Section VI.

II. VISION-BASED NAVIGATION

Vision-based navigation refers to using computer vision information to determine a proper route from an initial position to a target position. Depending on whether a map is required, vision-based navigation for all types of mobile robots, including airborne, ground, and underwater robots, can be divided into two categories [2, 12]: map-based navigation and mapless navigation. Since the great majority of UAVs use mapless navigation systems, this section focuses on mapless navigation. Mapless vision-based navigation for UAVs is commonly achieved through two methods [2]: feature detecting and tracking, and optical flow.

A. Feature Detecting and Tracking

In feature detecting and tracking based navigation systems, features are first detected and extracted from the image, and then matched and tracked by suitable algorithms. Features are usually distinctive elements of the image such as corners, lines, and edges. Madison et al. [13] develop vision-aided navigation for small UAVs that tracks landmarks using texture features in two-dimensional (2-D) images. Ludington et al. [14] use a dark cube on a much lighter background as the target. Lowe [15] develops the Scale Invariant Feature Transform (SIFT), an influential feature generation method that extracts features invariant to image scaling, translation, and rotation; SIFT has become a commonly used method for landmark detection by mobile robots [2]. Ollero et al. [16] propose a feature matching method that computes and exploits the homography between two consecutive images, and apply it to motion compensation and object detection for UAVs.

In UAV-AAR, feature selection can be easier because artificial optical markers can be mounted on both the tanker and the UAV to aid navigation. Doebbler et al. [4] paint a passive visual target of known size, shape, and location on the UAV and track it with visual snake methods, which use a closed, non-intersecting contour to segment the target area of the image. Campa et al. [5] mount optical markers on the tanker and a camera on the UAV; a set of vision-based techniques including feature extraction, point matching, and pose estimation then allows the UAV to detect and track these specific features.
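To make the feature detecting and matching step concrete, the following sketch detects keypoints in two consecutive frames and matches them across frames. It is a generic OpenCV example with placeholder image files and an ORB detector, not the specific pipeline of any of the surveyed systems, which use their own markers, detectors, and matchers.

```python
# Minimal feature detection and matching sketch (generic OpenCV example,
# not the exact pipeline of any surveyed system). Image paths are placeholders.
import cv2

prev = cv2.imread("frame_t0.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame_t1.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=500)           # scale/rotation-tolerant binary features
kp1, des1 = orb.detectAndCompute(prev, None)  # keypoints + descriptors in frame t0
kp2, des2 = orb.detectAndCompute(curr, None)  # keypoints + descriptors in frame t1

# Brute-force Hamming matcher with cross-checking to reject ambiguous matches
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

# The matched point pairs can then feed homography or pose estimation.
pts_prev = [kp1[m.queryIdx].pt for m in matches]
pts_curr = [kp2[m.trainIdx].pt for m in matches]
print(f"{len(matches)} matches between consecutive frames")
```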

B. Optical Flow

Optical flow is the pattern of apparent visual motion of objects across consecutive images; it is caused by the relative motion between the visual sensor and the observed objects and therefore carries their motion information. It is widely accepted that optical flow can be used for motion estimation in UAV navigation systems [2, 12]. Talukder and Matthies [17] combine a stereo vision system with dense optical flow to develop a comprehensive real-time solution for moving-object detection that can be implemented without any constraints on the motion of the robot, the objects, or the environment. Ding et al. [18] use optical flow measurements as additional information in an extended Kalman filter based multi-sensor fusion strategy to build an integrated GPS/INS/optical-flow navigation system for UAVs. Within this area, one line of research studies how flying insects such as bees use optical flow in their navigation, motivated by the fact that insects achieve highly accurate and reliable navigation based on optical flow [2]; an excellent comprehensive review of these biologically inspired navigation methods is given by Srinivasan et al. [19].

III. VISUAL SERVO CONTROL

Visual servo can be roughly described as a control approach that uses computer vision information to control the pose of a mobile robot (e.g., a UAV) with respect to reference objects such as landmarks and targets [20]. Let e(t) denote the difference between the current and the desired pose of the robot; the objective of visual servo control is then to minimize e(t). One of the difficult problems in visual servo is the out-of-view problem: while the robot is carrying out its mission, the reference objects may leave the field of view of the visual sensors, and the visual servo loop can no longer be executed [21]. There are three main approaches to visual servo [22, 23]: position-based visual servo (PBVS), image-based visual servo (IBVS), and advanced visual servo. A large body of work exists in this area, of which this section reviews outstanding and representative examples; the out-of-view problem is also discussed.

A. Position-Based Visual Servo

Position-based visual servo consists of two steps [20]. First, features are extracted from the image data provided by the visual sensors. Second, the features are used to estimate the current pose of the robot with respect to the reference objects, and this pose information is used directly in the control law [24]. The error e(t) is described with 3-D parameters, so PBVS is also known as the 3-D visual servo approach [22]. Mondragón et al. [25] present a robust real-time 3-D pose estimation method for UAVs based on the projection matrix and homographies, using planar reference objects. After corners are detected in the images, the method uses the pyramidal Lucas-Kanade optical flow algorithm to generate a set of matched points and a Random Sample Consensus (RANSAC) algorithm to estimate a robust homography between the reference object and the image. The method was tested on real UAV flights, and the estimated pose had the same level of accuracy and quality as the pose information provided by the INS. To tackle the out-of-view problem, Thuilot et al. [21] propose an approach that tracks an on-line generated trajectory which is iteratively recomputed to adapt to the current state of the robot; experiments demonstrated the feasibility and validity of this approach.
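A minimal sketch of the corner-tracking and robust homography estimation pipeline described for [25] is given below. It uses OpenCV's pyramidal Lucas-Kanade tracker and RANSAC homography fitting; the image files and camera intrinsics are placeholder assumptions, and the snippet is an illustration of the idea rather than the authors' implementation.

```python
# Sketch in the spirit of [25]: corners tracked with pyramidal Lucas-Kanade optical
# flow, then a robust homography fitted with RANSAC. Placeholder images and intrinsics.
import cv2
import numpy as np

ref = cv2.imread("reference_view.png", cv2.IMREAD_GRAYSCALE)   # planar reference object
cur = cv2.imread("current_view.png", cv2.IMREAD_GRAYSCALE)     # current camera frame

# Detect corners on the reference image
pts_ref = cv2.goodFeaturesToTrack(ref, maxCorners=300, qualityLevel=0.01, minDistance=7)

# Track the corners into the current frame with pyramidal Lucas-Kanade optical flow
pts_cur, status, _ = cv2.calcOpticalFlowPyrLK(ref, cur, pts_ref, None)
good_ref = pts_ref[status.flatten() == 1]
good_cur = pts_cur[status.flatten() == 1]

# Robust homography between reference object and image via RANSAC
H, inliers = cv2.findHomography(good_ref, good_cur, cv2.RANSAC, ransacReprojThreshold=3.0)
print("estimated homography:\n", H)

# With a calibrated camera (intrinsic matrix K, assumed known), the homography can
# be decomposed into candidate rotations and translations for 3-D pose estimation.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])  # placeholder
retval, Rs, ts, normals = cv2.decomposeHomographyMat(H, K)
```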

B. Image-Based Visual Servo

Unlike PBVS, IBVS treats visual sensors as 2-D sensors, since all the features used are directly available in the images [22]. With IBVS, the error e(t) is usually defined in the image feature parameter space [20]. Hamel and Mahony [26] develop an IBVS control algorithm for a class of under-actuated rigid-body eye-in-hand robot systems, using backstepping techniques to derive a Lyapunov-based control law. The algorithm considers the full dynamics of the rigid-body motion and only requires bounds on the relative depths of the observed image points; however, pose information from inertial devices is still used in the construction of the visual error. Hamel and Mahony extend their work in [27] by constructing the visual error entirely from image data, and local exponential stability of the system is proved. Guenard et al. [28] implement this IBVS control algorithm on a quadrotor UAV with a camera mounted below it; experiments on stabilizing the UAV in quasi-stationary flight over a specified target fixed on the ground confirm the robustness and excellent performance of the controller. To address the out-of-view problem within the IBVS framework, Mehta et al. [29] employ multiple reference objects to develop Euclidean homographies for an unmanned ground vehicle using image data from a moving airborne monocular camera performing simultaneous localization and mapping (SLAM); whenever one reference object moves out of the camera's field of view, another enters it, and numerical simulation results prove asymptotic regulation of the controller. Lee and Kim [30] propose an adaptive IBVS control algorithm for a quadrotor UAV that uses an adaptive gain to keep the reference objects in the field of view.
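For illustration, the classic point-feature IBVS control law v = -lambda * L^+ * e(t), as presented in tutorials such as [20, 22], can be sketched as follows. The feature coordinates, depths, and gain are made-up example values, and real systems such as [26]-[30] use considerably more elaborate formulations.

```python
# Minimal classic IBVS sketch: camera velocity command v = -lambda * L^+ * e,
# where e stacks the errors of normalized image-point features and L is the
# interaction (image Jacobian) matrix for point features [20, 22].
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction matrix of one normalized image point at estimated depth Z."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(points, desired, depths, gain=0.5):
    """Stack per-point errors and Jacobians, return the 6-DOF camera velocity."""
    error = (points - desired).reshape(-1)                  # feature error e(t)
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(points, depths)])
    return -gain * np.linalg.pinv(L) @ error                # v = -lambda * L^+ * e

# Example: four tracked marker points (normalized coordinates) and their targets.
current = np.array([[0.12, 0.05], [-0.08, 0.06], [-0.09, -0.07], [0.11, -0.06]])
target  = np.array([[0.10, 0.10], [-0.10, 0.10], [-0.10, -0.10], [0.10, -0.10]])
depths  = np.array([8.0, 8.0, 8.0, 8.0])                    # rough depth estimates (m)
print("camera velocity (vx, vy, vz, wx, wy, wz):", ibvs_velocity(current, target, depths))
```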

C. Advanced Visual Servo

PBVS and IBVS are the two fundamental and typical approaches to visual servo, but both have disadvantages. Convergence and stability problems may occur when applying visual servo control, as Chaumette emphasizes with examples [31]. With IBVS, local minima may be reached because of unrealizable image motion, and singularities of the image Jacobian matrix can cause unstable behavior. PBVS uses the estimated pose information in the control law, so the performance of the system is highly sensitive to camera calibration errors [27]. These problems arise especially when the initial position is far from the desired position [32]. To combine the respective advantages of the two approaches and circumvent their disadvantages, several advanced visual servo approaches have been proposed [23], such as 2-1/2-D visual servo and switching schemes.

Chaumette et al. [32] propose 2-1/2-D visual servo, which selects visual features defined partly in 2-D and partly in 3-D; additional information can be found in [33]. The approach obtains 3-D information by projective reconstruction instead of relying on 3-D models of the reference objects, and it is able to solve the out-of-view problem regardless of the initial position of the camera. A decoupled control law is designed using image features and the 3-D information extracted from the partial pose estimation. Another application of 2-1/2-D visual servo is given by Metni and Hamel [34], who apply a 2-1/2-D visual servo control law to a UAV for bridge inspection. Gans and Hutchinson [35] present a switching scheme that uses a Lyapunov function as an evaluation function for a visual servo system integrating an IBVS and a PBVS controller: the system starts with one of the controllers and switches to the other once the Lyapunov function exceeds a predefined threshold. Another example of a switching scheme is given by Chesi et al. [36]. In addition, multi-camera systems are another effective way to improve the performance of visual servo controllers; with a stereo-vision sensor system, 3-D pose information can be estimated by triangulation even if the 3-D models of the reference objects are unknown [22].

IV. MULTI-SENSOR FUSION

UAVs are usually equipped with a number of sensors such as visual sensors, GPS, and INS. Since a multi-sensor fusion system can achieve a higher level of accuracy than a single-type sensor system, it is an essential and critical subsystem of UAV-AAR. This section presents representative approaches for data fusion using the Kalman filter and other methods.

A. Kalman Filter Based Methods

The Kalman filter is still recognized as the most powerful technique for multi-sensor fusion. The extended Kalman filter (EKF) and the unscented Kalman filter (UKF) are preferred by the vast majority of multi-sensor fusion approaches for UAV-AAR, especially the EKF. Mammarella et al. [37] use the EKF to fuse data from visual sensors and a GPS/INS system, and discuss the implementation of the EKF in detail. Webb et al. [38] utilize the implicit EKF (IEKF) together with the coplanarity constraint for pose estimation and achieve satisfactory accuracy in simulation experiments. Williamson et al. [6] design an approach for integrating data from visual sensors with GPS and INS, applied to relative pose estimation between the tanker and the UAV using a modified-gain EKF (MGEKF); simulation results demonstrate that vision-based and GPS measurements, together or individually, can be used to correct the inertial states.

The performance of the Kalman filter is significantly affected by uncertainty in the covariances of the process noise and observation errors, so adaptive Kalman filters have been proposed to reduce this impact. Ding et al. [18] design an adaptive filter based on covariance matching for integrating GPS, INS, and optical flow measurements: the adaptation algorithm compares the covariance estimates of the innovation and residual series of the Kalman filter with their theoretical values and determines an optimal relative weighting between the process-noise and observation-error covariance matrices by minimizing the root mean square of the innovation series. Sattigeri et al. [39] propose an adaptive pose estimator for vision-based target tracking using a neural-network-augmented Kalman filter. Oh and Johnson [40] implement both the EKF and the UKF in their inertial navigation system for multi-sensor fusion; a simulation comparing the two navigation systems with a neural-network-based adaptive nonlinear controller showed that the UKF-based system had higher computational complexity but achieved higher accuracy than the EKF-based system in the specified integrated inertial navigation system.
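The basic predict-correct structure underlying these Kalman-filter fusion schemes can be illustrated with a deliberately simplified linear filter that fuses GPS-based and vision-based relative-position measurements with an INS-like constant-velocity prediction. The state dimension, noise values, and measurements below are illustrative assumptions, not the formulations of [6, 18, 37], which operate on full 6-DOF states with extended or modified-gain filters.

```python
# Toy linear Kalman-filter fusion of relative-position measurements from two
# sources (e.g. GPS and vision) with a constant-velocity prediction model.
import numpy as np

dt = 0.02                                   # 50 Hz update rate, assumed
F = np.array([[1.0, dt], [0.0, 1.0]])       # state: [relative position, relative velocity]
Q = np.diag([1e-4, 1e-3])                   # process noise (tuning assumption)
H = np.array([[1.0, 0.0]])                  # both sensors observe position only

x = np.array([[30.0], [0.0]])               # initial guess: 30 m separation, zero closing rate
P = np.eye(2)

def kf_step(x, P, z, R):
    # Predict with the constant-velocity model, then correct with measurement z.
    x = F @ x
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R                     # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

R_gps, R_vision = np.array([[4.0]]), np.array([[0.01]])     # vision far more precise up close
x, P = kf_step(x, P, z=np.array([[29.6]]), R=R_gps)         # GPS-based measurement
x, P = kf_step(x, P, z=np.array([[29.81]]), R=R_vision)     # vision-based measurement
print("fused relative position estimate (m):", float(x[0]))
```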

B. Fuzzy Strategy and Others

Campa et al. [41] present a fuzzy fusion strategy to combine vision-based and GPS-based measurements. According to the relative distance between the tanker and the UAV, the strategy is defined as follows (a simple sketch of this rule is given at the end of this subsection):
1) Large distance: only GPS-based measurements are used, because the visual sensor system may face the out-of-view problem.
2) Intermediate distance: a weighted average of GPS-based and vision-based measurements is used.
3) Short distance: only vision-based measurements are used, because the GPS signal may be blocked or jammed.

Seraji and Serrano [42] design a hierarchical multi-sensor decision fusion system for terrain safety assessment. The system consists of three main steps. First, three multi-sensor fusion methods (fuzzy set theory, Bayesian probability theory, and Dempster-Shafer belief theory) are used to independently provide a safety score representing the safeness of the terrain. Second, the maximum, minimum, and an adjustable weighted average of the three scores are calculated. Finally, the appropriate result is selected based on the distance between the spacecraft and the landing target.
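A simple numerical sketch of the distance-scheduled rule described for [41] follows. The threshold distances and the linear blend in the intermediate region are illustrative assumptions; the original strategy is formulated as a fuzzy rule base rather than the crisp thresholds used here.

```python
# Sketch of the distance-scheduled fusion rule described for [41]: GPS-only at
# large separation, vision-only at short range, and a weighted average in between.
# Threshold distances (far/near) and the linear blend are illustrative assumptions.
def fuse_position(d, p_gps, p_vision, far=80.0, near=20.0):
    """Blend GPS- and vision-based relative position estimates by tanker-UAV distance d (m)."""
    if d >= far:                 # large distance: markers may be out of view, trust GPS
        w_vision = 0.0
    elif d <= near:              # short distance: GPS may be blocked or jammed, trust vision
        w_vision = 1.0
    else:                        # intermediate distance: weighted average
        w_vision = (far - d) / (far - near)
    return w_vision * p_vision + (1.0 - w_vision) * p_gps

print(fuse_position(100.0, p_gps=101.2, p_vision=100.4))   # -> GPS value only
print(fuse_position(50.0,  p_gps=49.8,  p_vision=50.3))    # -> blended estimate
print(fuse_position(10.0,  p_gps=9.1,   p_vision=9.95))    # -> vision value only
```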

V. SIMULATION OF VISION-BASED UAV-AAR

UAV-AAR approaches should be carefully and adequately simulated before being applied to an actual system, and both digital simulation and hardware-in-the-loop simulation deserve attention. This section reviews different modeling and simulation approaches for both the PDR and the BRR method. An overview of the outstanding publications referenced in this section is given in Table I.

TABLE I
SIMULATION APPROACHES REFERENCED IN SECTION V

References    UAV-AAR Method   Camera Position   Accuracy (cm)
[7, 43, 44]   PDR              UAV               2
[9, 45]       PDR              UAV               -
[8]           PDR              UAV               -
[5, 10, 46]   BRR              UAV               <5
[4]           BRR              Tanker            2
[6]           BRR              Tanker            <10

A. Probe-and-Drogue Refueling Method

One of the most outstanding efforts in vision-based UAV-AAR using the PDR method is VisNav [7, 43, 44], by Valasek et al. VisNav is an active vision-based navigation system that provides six degree-of-freedom (DOF) information. It achieves high accuracy when estimating the relative pose of a UAV with respect to a stationary drogue, using light-emitting diodes (LEDs) mounted on the drogue as beacons.

A proportional-integral-filter optimal nonzero-setpoint control law with control-rate weighting (PIF-NZSP-CRW) is utilized, and a digital UAV model called UCAV6, a small-scale AV-8B Harrier aircraft, is used for numerical simulation. Kimmett et al. [43] develop a more realistic simulation environment in which a variational Kalman filter (VKF) filters the noise and estimates controller states not supplied by VisNav; a discrete proportional-integral-filter command generator tracker (PIF-CGT) controller is further derived in [44].

Pollini et al. [9] describe a digital simulation system for UAV-AAR using the PDR method to test the related vision-based algorithms. The system is built from commercial off-the-shelf (COTS) software and includes models of the tanker and the UAV based on simple point-mass approximations, as well as models of the refueling hose and wake effects. The visual subsystem, which generates frames of the view from the camera mounted on the UAV, is based on the synthetic environment creation software DynaWORLDS, while the other parts are developed in MATLAB/Simulink; two shared-memory buffers simulate the inter-process communication between the two software packages. Fravolini et al. [45] improve this simulation system by linking the models to a Virtual Reality Toolbox (VRT) interface that visualizes the simulation. The tanker model is based on a Boeing 747 (B-747) and the UAV model on an F-4 fighter; the autopilot of the tanker is designed with an LQR-based approach, while the UAV uses a robust H∞ controller.

Herrnberger et al. [8] also present a simulation approach for investigating the vision-based probe-and-drogue UAV-AAR maneuver. It uses a nonlinear digital model of a slightly unstable receiver aircraft, a simplified dynamic model of the drogue, a flight control system based on linear methods, and path and trajectory control algorithms; a Kalman filter is employed for relative position estimation. Murillo and Lu [3] compare three control techniques (robust servomechanism, model following, and mixed-sensitivity H∞) for designing an AAR control system with a reduced-order model, using a simulation based on linearized models of a B-747 tanker and an F/A-18 receiver.

B. Boom-and-Receptacle Refueling Method

Campa et al. [5] describe a simulation environment for a vision-based UAV-AAR approach using the BRR method. A visual camera is mounted on top of the UAV, and passive optical markers are placed on the tanker to aid the relative position estimation between the tanker and the UAV.
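The marker-based relative pose estimation used in such BRR setups can be illustrated with a standard perspective-n-point (PnP) computation: given the known 3-D positions of the markers on the tanker and their detected image locations, the camera-to-tanker pose is recovered. The marker coordinates, camera intrinsics, and simulated pose below are placeholder values, and the snippet is a generic OpenCV sketch rather than the estimation algorithm of [5] or [6].

```python
# Hedged sketch of marker-based relative pose estimation (PnP). Generic OpenCV code
# with placeholder geometry, not the algorithm of any specific surveyed paper.
import cv2
import numpy as np

# Known 3-D marker locations in the tanker body frame (placeholder coordinates, m)
markers_3d = np.array([[0.0, 0.0, 0.0],
                       [2.0, 0.0, 0.0],
                       [2.0, 1.2, 0.0],
                       [0.0, 1.2, 0.0],
                       [1.0, 0.6, 0.4]], dtype=np.float64)

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])              # assumed camera intrinsics
dist = np.zeros(5)                           # assume negligible lens distortion

# Simulate the detected marker centroids by projecting with a "true" relative pose
rvec_true = np.array([0.05, -0.10, 0.02])
tvec_true = np.array([[-1.0], [0.5], [25.0]])    # tanker roughly 25 m ahead of the camera
markers_2d, _ = cv2.projectPoints(markers_3d, rvec_true, tvec_true, K, dist)

# Recover the relative pose from the 3-D/2-D correspondences
ok, rvec, tvec = cv2.solvePnP(markers_3d, markers_2d, K, dist, flags=cv2.SOLVEPNP_EPNP)
R, _ = cv2.Rodrigues(rvec)                   # rotation of the tanker frame in the camera frame
print("estimated relative translation (m):", tvec.ravel())
```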

Dynamic simulation of the two aircraft, the boom, atmospheric turbulence, and wake effects is performed in MATLAB/Simulink, and the results are exported to a Virtual Reality Toolbox interface to present a realistic view of the refueling maneuver. The 3-D models of the aircraft and the boom are designed in 3D Studio; a B-747 model and a B-2 model are rescaled to match the sizes of a KC-135 tanker and an ICE-101 UAV. The VRT interface continuously provides images of the tanker to simulate the camera mounted on the UAV. The environment also includes the related vision-based algorithm modules; additional details and comparisons of various vision-based algorithms can be found in [10, 46]. An equivalent model of the ICE-101 for UAV-AAR research is presented by Barfield and Hinchman [47].

Doebbler et al. [4] propose a different solution for boom-and-receptacle UAV-AAR. The visual snake optical camera is mounted at the rear of the tanker above the boom, looking down on the UAV, and the UAV carries a visual docking target painted around the refueling receptacle. The boom docking control law is the PIF-NZSP-CRW controller mentioned above. Simulation results demonstrate that the control system is able to dock the boom into the receptacle on the UAV with an accuracy of 2 cm while keeping the maximum docking relative velocity below 0.5 m/s.

Williamson et al. [6] present an impressive real-time, hardware-in-the-loop simulation system to demonstrate the performance of a multi-sensor fusion system for AAR. It uses one-eighth-scale physical models of an F-16 receiver aircraft and the boom, which brings the simulation environment closer to reality, while the tanker is simulated in software and assumed to fly at an altitude of 6100 m with a velocity of 123 m/s. Beacons are mounted on top of the receiver, and a camera system on the tanker captures images of the receiver aircraft. Test results show that centimeter-level accuracy can be achieved.

VI. CONCLUSIONS

AAR has proven to be a valuable technology that can be applied to both unmanned and manned aircraft to increase their mission capabilities. Computer vision provides an effective way to achieve the high level of accuracy required by UAV-AAR, and a visual sensor system can also enhance a UAV's ability to sense its surroundings and increase its autonomy. Among the numerous studies of computer vision techniques for UAVs, this survey has focused on the most representative ones. Visual navigation based on feature detecting and tracking deserves particular attention, because specific optical markers can serve as features in UAV-AAR. Particle filters and other types of filters may provide further effective methods for multi-sensor fusion. At present, most digital UAV-AAR simulation approaches are based on MATLAB/Simulink, and the next stage would be hardware-in-the-loop simulation.

REFERENCES
[1] J. P. Nalepka and J. L. Hinchman, "Automated aerial refueling: Extending the effectiveness of unmanned air vehicles," in Proc. AIAA Modeling and Simulation Technologies Conf., San Francisco, 2005, pp. 240-247.
[2] F. Bonin-Font, A. Ortiz and G. Oliver, "Visual navigation for mobile robots: A survey," Journal of Intelligent & Robotic Systems, vol. 53, no. 3, pp. 263-296, Nov. 2008.
[3] O. Murillo and P. Lu, "Comparison of autonomous aerial refueling controllers using reduced order models," in Proc. AIAA Guidance, Navigation and Control Conference and Exhibit, Honolulu, 2008.
[4] J. Doebbler, T. Spaeth, J. Valasek, M. J. Monda, and H. Schaub, "Boom and receptacle autonomous air refueling using visual snake optical sensor," Journal of Guidance, Control, and Dynamics, vol. 30, no. 6, pp. 1753-1769, Nov.-Dec. 2007.
[5] G. Campa, M. R. Napolitano and M. L. Fravolini, "Simulation environment for machine vision based aerial refueling for UAVs," IEEE Trans. Aerospace and Electronic Systems, vol. 45, no. 1, pp. 138-151, Jan. 2009.
[6] W. Williamson, G. Glenn, V. Dang, J. Speyer, S. Stecko, and J. Takacs, "Sensor fusion applied to autonomous aerial refueling," Journal of Guidance, Control, and Dynamics, vol. 32, no. 1, pp. 262-275, Jan.-Feb. 2009.
[7] J. Valasek, K. Gunnam, J. Kimmett, M. D. Tandale, J. L. Junkins, and D. Hughes, "Vision-based sensor and navigation system for autonomous air refueling," Journal of Guidance, Control, and Dynamics, vol. 28, no. 5, pp. 979-989, Sep.-Oct. 2005.
[8] M. Herrnberger, G. Sachs, F. Holzapfel, W. Tostmann, and E. Weixler, "Simulation analysis of autonomous aerial refueling procedures," in Proc. AIAA Guidance, Navigation, and Control Conference and Exhibit, San Francisco, 2005.
[9] L. Pollini, G. Campa, F. Giulietti, and M. Innocenti, "Virtual simulation set-up for UAVs aerial refuelling," in Proc. AIAA Modeling and Simulation Technologies Conference and Exhibit, Austin, 2003.
[10] M. L. Fravolini, G. Campa and M. R. Napolitano, "Evaluation of machine vision algorithms for autonomous aerial refueling for unmanned aerial vehicles," Journal of Aerospace Computing, Information and Communication, vol. 4, no. 9, pp. 968-985, Sep. 2007.
[11] Y. Liu and Q. Dai, "A survey of computer vision applied in aerial robotic vehicles," in Proc. Int. Conf. Optics, Photonics and Energy Engineering, Wuhan, China, 2010, pp. 277-280.
[12] G. N. DeSouza and A. C. Kak, "Vision for mobile robot navigation: A survey," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 24, no. 2, pp. 237-267, Feb. 2002.
[13] R. Madison, G. Andrews, P. DeBitetto, S. Rasmussen, and M. Bottkol, "Vision-aided navigation for small UAVs in GPS-challenged environments," in Proc. AIAA InfoTech at Aerospace Conf., Rohnert Park, 2007, pp. 318-325.
[14] B. Ludington, E. N. Johnson and G. J. Vachtsevanos, "Vision based navigation and target tracking for unmanned aerial vehicles," in Intelligent Systems, Control and Automation: Science and Engineering, vol. 33, K. P. Valavanis, Ed. Dordrecht: Springer Netherlands, 2007, pp. 245-266.
[15] D. G. Lowe, "Object recognition from local scale-invariant features," in Proc. 7th IEEE Int. Conf. Computer Vision, Kerkyra, Greece, 1999, pp. 1150-1157.
[16] A. Ollero, J. Ferruz, F. Caballero, S. Hurtado, and L. Merino, "Motion compensation and object detection for autonomous helicopter visual navigation in the COMETS system," in Proc. IEEE Int. Conf. Robotics and Automation, New Orleans, 2004, pp. 19-24.
[17] A. Talukder and L. Matthies, "Real-time detection of moving objects from moving vehicles using dense stereo and optical flow," in Proc. IEEE/RSJ Int. Conf. Intelligent Robots and Systems, Sendai, Japan, 2004, pp. 3718-3725.
[18] W. Ding, J. Wang and A. Almagbile, "Adaptive filter design for UAV navigation with GPS/INS/optic flow integration," in Proc. Int. Conf. Electrical Engineering, Wuhan, China, 2010, pp. 4623-4626.
[19] M. V. Srinivasan, S. W. Zhang, J. S. Chahl, G. Stange, and M. Garratt, "An overview of insect-inspired guidance for application in ground and airborne platforms," Proceedings of the Institution of Mechanical Engineers, Part G: Journal of Aerospace Engineering, vol. 218, no. 6, pp. 375-388, Dec. 2004.

[20] S. Hutchinson, G. D. Hager and P. I. Corke, "A tutorial on visual servo control," IEEE Trans. Robotics and Automation, vol. 12, no. 5, pp. 651-670, Oct. 1996.
[21] B. Thuilot, P. Martinet, L. Cordesses, and J. Gallice, "Position based visual servoing: Keeping the object in the field of vision," in Proc. IEEE Int. Conf. Robotics and Automation, Washington DC, 2002, pp. 1624-1629.
[22] F. Chaumette and S. Hutchinson, "Visual servo control, Part I: Basic approaches," IEEE Robotics & Automation Magazine, vol. 13, no. 6, pp. 82-90, Dec. 2006.
[23] F. Chaumette and S. Hutchinson, "Visual servo control, Part II: Advanced approaches," IEEE Robotics & Automation Magazine, vol. 14, no. 1, pp. 109-118, Mar. 2007.
[24] O. Bourquardez, R. Mahony, N. Guenard, F. Chaumette, T. Hamel, and L. Eck, "Image-based visual servo control of the translation kinematics of a quadrotor aerial vehicle," IEEE Trans. Robotics, vol. 25, no. 3, pp. 743-749, June 2009.
[25] I. F. Mondragón, P. Campoy, C. Martínez, and M. A. Olivares-Méndez, "3D pose estimation based on planar object tracking for UAVs control," in Proc. IEEE Int. Conf. Robotics and Automation, Anchorage, 2010, pp. 35-41.
[26] T. Hamel and R. Mahony, "Visual servoing of an under-actuated dynamic rigid-body system: An image-based approach," IEEE Trans. Robotics and Automation, vol. 18, no. 2, pp. 187-198, Apr. 2002.
[27] T. Hamel and R. Mahony, "Image based visual servo control for a class of aerial robotic systems," Automatica, vol. 43, no. 11, pp. 1975-1983, Mar. 2007.
[28] N. Guenard, T. Hamel and R. Mahony, "A practical visual servo control for an unmanned aerial vehicle," IEEE Trans. Robotics, vol. 24, no. 2, pp. 331-340, Apr. 2008.
[29] S. Mehta, G. Hu, A. Dani, and W. Dixon, "Multi-reference visual servo control of an unmanned ground vehicle," in Proc. AIAA Guidance, Navigation and Control Conference and Exhibit, Honolulu, 2008.
[30] D. Lee and H. J. Kim, "Adaptive visual servo control for a quadrotor helicopter," in Proc. Int. Conf. Control Automation and Systems, Gyeonggi-do, Korea, 2010, pp. 1049-1052.
[31] F. Chaumette, "Potential problems of stability and convergence in image-based and position-based visual servoing," Lecture Notes in Control and Information Sciences, vol. 237, pp. 66-78, 1998.
[32] F. Chaumette and E. Malis, "2 1/2 D visual servoing: A possible solution to improve image-based and position-based visual servoings," in Proc. IEEE Int. Conf. Robotics and Automation, San Francisco, 2000, pp. 630-635.
[33] E. Malis, F. Chaumette and S. Boudet, "2-1/2-D visual servoing," IEEE Trans. Robotics and Automation, vol. 15, no. 2, pp. 238-250, Apr. 1999.
[34] N. Metni and T. Hamel, "A UAV for bridge inspection: Visual servoing control law with orientation limits," Automation in Construction, vol. 17, no. 1, pp. 3-10, Nov. 2007.
[35] N. R. Gans and S. A. Hutchinson, "An asymptotically stable switched system visual controller for eye in hand robots," in Proc. IEEE/RSJ Int. Conf. Intelligent Robots and Systems, Las Vegas, 2003, pp. 735-742.
[36] G. Chesi, K. Hashimoto, D. Prattichizzo, and A. Vicino, "Keeping features in the field of view in eye-in-hand visual servoing: A switching approach," IEEE Trans. Robotics, vol. 20, no. 5, pp. 908-914, Oct. 2004.
[37] M. Mammarella, G. Campa, M. R. Napolitano, M. L. Fravolini, Y. Gu, and M. G. Perhinschi, "Machine vision/GPS integration using EKF for the UAV aerial refueling problem," IEEE Trans. Systems, Man and Cybernetics, Part C: Applications and Reviews, vol. 38, no. 6, pp. 791-801, Nov. 2008.
[38] T. Webb, R. Prazenica, A. Kurdila, and R. Lind, "Vision-based state estimation for uninhabited aerial vehicles," in Proc. AIAA Guidance, Navigation, and Control Conf. and Exhibit, San Francisco, 2005.
[39] R. Sattigeri, E. Johnson, A. Calise, and J. Ha, "Vision-based target tracking with adaptive target state estimator," in Proc. AIAA Guidance, Navigation and Control Conference and Exhibit, Hilton Head, 2007.
[40] S. Oh and E. Johnson, "Development of UAV navigation system based on unscented Kalman filter," in Proc. AIAA Guidance, Navigation, and Control Conference and Exhibit, Keystone, 2006.
[41] G. Campa, M. L. Fravolini, A. Ficola, M. R. Napolitano, B. Seanor, and M. G. Perhinschi, "Autonomous aerial refueling for UAVs using a combined GPS-machine vision guidance," in Proc. AIAA Guidance, Navigation, and Control Conf., Providence, 2004, pp. 3125-3135.

[42] H. Seraji and N. Serrano, "A multisensor decision fusion system for terrain safety assessment," IEEE Trans. Robotics, vol. 25, no. 1, pp. 99-108, Feb. 2009.
[43] J. Kimmett, J. Valasek and J. Junkins, "Autonomous aerial refueling utilizing a vision based navigation system," in Proc. AIAA Guidance, Navigation, and Control Conference and Exhibit, Monterey, 2002.
[44] J. Kimmett, J. Valasek and J. L. Junkins, "Vision based controller for autonomous aerial refueling," in Proc. IEEE Int. Conf. Control Applications, Glasgow, U.K., 2002, pp. 1138-1143.
[45] M. Fravolini, A. Ficola, M. Napolitano, G. Campa, and M. Perhinschi, "Development of modelling and control tools for aerial refueling for UAVs," in Proc. AIAA Guidance, Navigation, and Control Conference and Exhibit, Austin, 2003.
[46] M. Mammarella, G. Campa, M. R. Napolitano, and M. L. Fravolini, "Comparison of point matching algorithms for the UAV aerial refueling problem," Machine Vision and Applications, vol. 21, no. 3, pp. 241-251, Apr. 2010.
[47] A. F. Barfield and J. L. Hinchman, "An equivalent model for UAV automated aerial refueling research," in Proc. AIAA Modeling and Simulation Technologies Conf., San Francisco, 2005, pp. 248-254.
