Jorge Lobo


jlobo@isr.uc.pt

Journal articles

2007
P Corke, J Lobo, J Dias (2007)  An introduction to inertial and visual sensing   INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH 26: 6. 519-535  
Abstract: In this paper we present a tutorial introduction to two important senses for biological and robotic systems - inertial and visual perception. We discuss the fundamentals of these two sensing modalities from a biological and an engineering perspective. Digital camera chips and micro-machined accelerometers and gyroscopes are now commodities, and when combined with today's available computing can provide robust estimates of self-motion as well as 3D scene structure, without external infrastructure. We discuss the complementarity of these sensors, describe some fundamental approaches to fusing their outputs and survey the field.
Notes: Times Cited: 9
J Lobo, J Dias (2007)  Relative pose calibration between visual and inertial sensors   INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH 26: 6. 561-575  
Abstract: This paper proposes an approach to calibrate off-the-shelf cameras and inertial sensors to have a useful integrated system to be used in static and dynamic situations. When both sensors are integrated in a system their relative pose needs to be determined. The rotation between the camera and the inertial sensor can be estimated, concurrently with camera calibration, by having both sensors observe the vertical direction in several poses. The camera relies on a vertical chequered planar target and the inertial sensor on gravity to obtain a vertical reference. Depending on the setup and system motion, the translation between the two sensors can also be important. Using a simple passive turntable and static images, the translation can be estimated. The system needs to be placed in several poses and adjusted to turn about the inertial sensor centre, so that the lever arm to the camera can be determined. Simulation and real data results are presented to show the validity and simple requirements of the proposed methods.
Notes: Times Cited: 9
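A minimal sketch of the rotation step of this calibration, assuming paired unit vectors are already extracted: cam_verticals[i] is the vertical measured by the camera from the chequered target and imu_verticals[i] the gravity direction from the inertial sensor, both in pose i (hypothetical names). It solves the standard absolute-orientation (Wahba) problem by SVD; the paper's own estimator may differ.

    import numpy as np

    def rotation_from_verticals(cam_verticals, imu_verticals):
        """Least-squares rotation R with cam_i ~= R @ imu_i for all pairs."""
        A = np.asarray(cam_verticals)   # (N, 3) unit vectors, camera frame
        B = np.asarray(imu_verticals)   # (N, 3) unit vectors, inertial frame
        H = B.T @ A                     # 3x3 correlation matrix
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        return Vt.T @ D @ U.T           # reflection-safe rotation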
2004
J Lobo, J Dias (2004)  Inertial sensed ego-motion for 3D vision   JOURNAL OF ROBOTIC SYSTEMS 21: 1. 3-12  
Abstract: Inertial sensors attached to a camera can provide valuable data about camera pose and movement. In biological vision systems, inertial cues provided by the vestibular system are fused with vision at an early processing stage. In this article we set a framework for the combination of these two sensing modalities. Cameras can be seen as ray direction measuring devices, and in the case of stereo vision, depth along the ray can also be computed. The ego-motion can be sensed by the inertial sensors, but there are limitations determined by the sensor noise level. Keeping track of the vertical direction is required, so that gravity acceleration can be compensated for, and provides a valuable spatial reference. Results are shown of stereo depth map alignment using the vertical reference. The depth map points are mapped to a vertically aligned world frame of reference. In order to detect the ground plane, a histogram of the different heights is computed. Taking the ground plane as a reference plane for the acquired maps, the fusion of multiple maps reduces to a 2D translation and rotation problem. The dynamic inertial cues can be used as a first approximation for this transformation, allowing a fast depth map registration method. They also provide an image independent location of the image focus of expansion and center of rotation useful during visual based navigation tasks. (C) 2004 Wiley Periodicals, Inc.
Notes: Times Cited: 10
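A sketch of the vertical alignment and height histogram described above, under simplifying assumptions: points is an (N, 3) array of stereo depth-map points in the camera frame and R_wc the camera-to-world rotation recovered from the inertial vertical reference (both hypothetical inputs), with the world z axis taken along gravity.

    import numpy as np

    def detect_ground_height(points, R_wc, bin_size=0.05):
        """Histogram point heights in a vertically aligned frame; the most
        populated bin is taken as the ground plane height."""
        heights = (points @ R_wc.T)[:, 2]      # z in the aligned frame
        bins = np.arange(heights.min(), heights.max() + bin_size, bin_size)
        counts, edges = np.histogram(heights, bins=bins)
        i = np.argmax(counts)
        return 0.5 * (edges[i] + edges[i + 1])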
2003
J Lobo, J Dias (2003)  Inertial sensed ego-motion for 3D vision   PROCEEDINGS OF THE 11TH INTERNATIONAL CONFERENCE ON ADVANCED ROBOTICS 2003, VOL 1-3 1907-1914  
Abstract: Inertial sensors attached to a camera can provide valuable data about camera pose and movement. In biological vision systems, inertial cues provided by the vestibular system are fused with vision at an early processing stage. In this article we set a framework for the combination of these two sensing modalities. Cameras can be seen as ray direction measuring devices, and in the case of stereo vision, depth along the ray can also be computed. The ego-motion can be sensed by the inertial sensors, but there are limitations determined by the sensor noise level. Keeping track of the vertical direction is required, so that gravity acceleration can be compensated for, and provides a valuable spatial reference. Results are shown of stereo depth map alignment using the vertical reference. The depth map points are mapped to a vertically aligned world frame of reference. In order to detect the ground plane, a histogram of the different heights is computed. Taking the ground plane as a reference plane for the acquired maps, the fusion of multiple maps reduces to a 2D translation and rotation problem. The dynamic inertial cues can be used as a first approximation for this transformation, allowing a fast depth map registration method. They also provide an image independent location of the image focus of expansion and center of rotation useful during visual based navigation tasks.
Notes: Times Cited: 1
J Alves, J Lobo, J Dias (2003)  Camera-inertial sensor modelling and alignment for visual navigation   PROCEEDINGS OF THE 11TH INTERNATIONAL CONFERENCE ON ADVANCED ROBOTICS 2003, VOL 1-3 1693-1698  
Abstract: Inertial sensors attached to a camera can provide valuable data about camera pose and movement. In biological vision systems, inertial cues provided by the vestibular system are fused with vision at an early processing stage. Vision systems in autonomous vehicles can also benefit by taking inertial cues into account. In order to use off-the-shelf inertial sensors attached to a camera, appropriate modelling and calibration techniques are required. Camera calibration has been extensively studied, and standard techniques established. Inertial navigation systems, relying on high-end sensors, also have established techniques. This paper presents a technique for modelling and calibrating the camera integrated with low-cost inertial sensors (three gyros and three accelerometers) for full 3D sensing. Using a pendulum with an encoded shaft, inertial sensor alignment, bias and scale factor can be estimated. Having both the camera and the inertial sensors observing the vertical direction at different poses, the rigid rotation between the two frames of reference can be estimated. Preliminary simulation and real data results are presented.
Notes: Times Cited: 2
J Lobo, C Queiroz, J Dias (2003)  World feature detection and mapping using stereovision and inertial sensors   ROBOTICS AND AUTONOMOUS SYSTEMS 44: 1. 69-81  
Abstract: This paper explores the fusion of inertial information with vision for 3D reconstruction. A method is proposed for vertical line segment detection and subsequent local geometric map building. Visual and inertial sensing are two sensory modalities that can be explored to give robust solutions on image segmentation and recovery of 3D structure from images, increasing the capabilities of autonomous vehicles and enlarging the application potential of vision systems. From the inertial sensors, a camera stereo rig, and a few system parameters we can recover the 3D parameters of the ground plane and vertical lines. The homography between stereo images of ground points can be found. By detecting the vertical line segments in each image, and using the homography of ground points for the foot of each segment, the lines can be matched and reconstructed in 3D. The mobile robot then maps the detected vertical line segments in a world map as it moves. To build this map an outlier removal method is implemented and a statistical approach used, so that a simplified metric map can be obtained for robot navigation. (C) 2003 Elsevier Science B.V. All rights reserved.
Notes: Times Cited: 7
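The homography between stereo images of ground points mentioned above has the standard plane-induced form; a sketch under assumed inputs, with the ground plane written as n . X + d = 0 in the first camera's frame: K1 and K2 are the stereo intrinsics, (R, t) the known rigid transformation between the cameras, n the ground normal from the inertial vertical, and d the signed camera height (all hypothetical names).

    import numpy as np

    def ground_homography(K1, K2, R, t, n, d):
        """Homography mapping ground-plane pixels from camera 1 to camera 2."""
        H = K2 @ (R - np.outer(t, n) / d) @ np.linalg.inv(K1)
        return H / H[2, 2]

The foot p1 of a vertical segment in the left image (a homogeneous pixel) then maps to p2 ~ H @ p1 in the right image, which is how segments can be matched before 3D reconstruction.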
J Lobo, J Dias (2003)  Vision and inertial sensor cooperation using gravity as a vertical reference   IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 25: 12. 1597-1608  
Abstract: This paper explores the combination of inertial sensor data with vision. Visual and inertial sensing are two sensory modalities that can be explored to give robust solutions on image segmentation and recovery of 3D structure from images, increasing the capabilities of autonomous robots and enlarging the application potential of vision systems. In biological systems, the information provided by the vestibular system is fused at a very early processing stage with vision, playing a key role in the execution of visual movements such as gaze holding and tracking, and the visual cues aid the spatial orientation and body equilibrium. In this paper, we set a framework for using inertial sensor data in vision systems, and describe some results obtained. The unit sphere projection camera model is used, providing a simple model for inertial data integration. Using the vertical reference provided by the inertial sensors, the image horizon line can be determined. Using just one vanishing point and the vertical, we can recover the camera's focal distance and provide an external bearing for the system's navigation frame of reference. Knowing the geometry of a stereo rig and its pose from the inertial sensors, the collineation of level planes can be recovered, providing enough restrictions to segment and reconstruct vertical features and leveled planar patches.
Notes: Times Cited: 32
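A sketch of how the horizon line follows from the inertial vertical, using the standard result that the vanishing line of planes with normal n (expressed in camera coordinates) is l ~ K^-T n; the intrinsics K and the gravity direction g_cam are assumed known, and the paper's unit sphere model is replaced by a plain pinhole model for brevity.

    import numpy as np

    def horizon_line(K, g_cam):
        """Homogeneous image line l = (a, b, c), with a*u + b*v + c = 0."""
        l = np.linalg.inv(K).T @ g_cam
        return l / np.linalg.norm(l[:2])   # normalise for pixel distances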
J Lobo, L Almeida, J Alves, J Dias (2003)  Registration and segmentation for 3D map building - A solution based on stereo vision and inertial sensors   2003 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, VOLS 1-3, PROCEEDINGS 139-144  
Abstract: This article presents a technique for registration and segmentation of dense depth maps provided by a stereo vision system. The vision system uses inertial sensors to give a reference for camera pose. The maps are registered using a modified version of the ICP (Iterative Closest Point) algorithm. The proposed technique explores the integration of inertial sensor data for dense map registration. Depth maps obtained by vision systems are very point-of-view dependent, providing discrete layers of detected depth aligned with the camera. In this work we use inertial sensors to recover camera pose, and rectify the maps to a reference ground plane, enabling the segmentation of vertical and horizontal geometric features and map registration. We propose a real-time methodology to segment these dense depth maps, supporting higher-level tasks such as segmentation of structures, object recognition, robot navigation or any other task that requires a three-dimensional representation of the physical environment. The aim of this work is a fast real-time system that can be applied to autonomous robotic systems or to automated car driving systems, for modelling the road and identifying obstacles and roadside features in real-time.
Notes: Times Cited: 0
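Once both depth maps are rectified to a gravity-aligned frame as described, the remaining registration has only a rotation about the vertical plus a translation. A sketch of the closed-form step for matched point pairs P and Q (both (N, 3) and already rectified; finding the correspondences, i.e. the ICP nearest-neighbour loop, is omitted):

    import numpy as np

    def align_yaw_translation(P, Q):
        """Rotation about the vertical and translation taking P onto Q."""
        Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)
        # Only the horizontal (x, y) coordinates constrain the yaw angle.
        num = np.sum(Pc[:, 0] * Qc[:, 1] - Pc[:, 1] * Qc[:, 0])
        den = np.sum(Pc[:, 0] * Qc[:, 0] + Pc[:, 1] * Qc[:, 1])
        yaw = np.arctan2(num, den)
        c, s = np.cos(yaw), np.sin(yaw)
        Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
        return Rz, Q.mean(axis=0) - Rz @ P.mean(axis=0)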
2002
J Lobo, L Almeida, J Dias (2002)  Segmentation of dense depth maps using inertial data A real-time implementation   2002 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-3, PROCEEDINGS 92-97  
Abstract: In this paper we propose a real-time system that extracts information from dense relative depth maps. This method enables the integration of depth cues into higher-level processes including segmentation of structures, object recognition, robot navigation or any other task that requires a three-dimensional representation of the physical environment. Inertial sensors coupled to a vision system can provide important inertial cues for the ego-motion and system pose. The sensed gravity provides a vertical reference. Depth maps obtained from a stereo camera system can be segmented using this vertical reference, identifying structures such as vertical features and levelled planes. In our work we explore the integration of inertial sensor data in vision systems. Depth maps obtained by vision systems are very point-of-view dependent, providing discrete layers of detected depth aligned with the camera. In this work we use inertial sensors to recover camera pose, and rectify the maps to a reference ground plane, enabling the segmentation of vertical and horizontal geometric features. The aim of this work is a fast real-time system, so that it can be applied to autonomous robotic systems or to automated car driving systems, for modelling the road, identifying obstacles and roadside features in real-time.
Notes: Times Cited: 0
J F Ferreira, J Lobo, J Dias (2002)  Tele-3D - Developing a handheld scanner using structured light projection   FIRST INTERNATIONAL SYMPOSIUM ON 3D DATA PROCESSING VISUALIZATION AND TRANSMISSION 788-791  
Abstract: Three-dimensional surface reconstruction from two-dimensional images is a process with great potential for use in different fields of research, commerce and industrial production. In this article we will describe the evolution of a project comprising the study and development of systems which implement the aforementioned process, exploring several techniques with the final aim of devising the best possible compromise between flexibility, performance and cost-effectiveness. We will firstly focus our attention on past work, namely the description of the implementation and results of a fixed system involving a camera and a laser-stripe projector mounted on a pan-tilt unit which sweeps the surface vertically with a horizontal stripe. Then we will describe our current work on the development of a fully portable, handheld system using cameras, projected structured light and inertial/magnetic positioning and attitude sensors - the Tele-3D scanner.
Notes: Times Cited: 0
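A sketch of the basic laser-stripe triangulation behind such scanners: a lit pixel back-projects to a ray from the camera centre, and intersecting that ray with the calibrated light plane yields the 3D surface point. The intrinsics K and the plane parameters (n, d) are assumed known from calibration; names are illustrative.

    import numpy as np

    def triangulate_stripe_pixel(u, v, K, n, d):
        """3D point where the ray of pixel (u, v) meets the light plane
        n . X + d = 0, with the camera at the origin."""
        ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
        s = -d / (n @ ray)       # scale along the ray to reach the plane
        return s * ray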
2001
J Lobo, J Dias (2001)  Fusing of image and inertial sensing for camera calibration   MFI2001 : INTERNATIONAL CONFERENCE ON MULTISENSOR FUSION AND INTEGRATION FOR INTELLIGENT SYSTEMS 103-108  
Abstract: This paper explores the integration of inertial sensor data with vision. A method is proposed for the estimation of camera focal distance based on vanishing points and inertial sensors. Visual and inertial sensing are two sensory modalities that can be explored to give robust solutions on image segmentation and recovery of 3D structure from images, increasing the capabilities of autonomous vehicles and enlarging the application potential of vision systems. In this paper we show that using just one vanishing point, obtained from two parallel lines belonging to some levelled plane, and using the camera's attitude taken from the inertial sensors, the unknown scaling factor f in the camera's perspective projection can be estimated. The quality of the estimation of f depends on the quality of the vanishing point used and the noise level in the accelerometer data. Nevertheless it provides a reasonable estimate for a completely uncalibrated camera. The advantage over using two vanishing points is that the best (i.e. most stable) vanishing point can be chosen, and that in indoor environments the vanishing point can sometimes be obtained from the scene without placing any specific calibration target.
Notes: Times Cited: 1
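A sketch of the estimate described above, assuming a pinhole camera with square pixels and known principal point (cx, cy): the vanishing point (u, v) of two parallel lines on a levelled plane back-projects to a horizontal 3D direction, which must therefore be orthogonal to the gravity direction g measured in the camera frame; solving g . d = 0 for the focal length f gives a one-line formula (the sign depends on the axis conventions adopted).

    def focal_from_vanishing_point(u, v, cx, cy, g):
        """f from one vanishing point of a levelled plane and gravity g."""
        gx, gy, gz = g
        # d ~ ((u-cx)/f, (v-cy)/f, 1) and g . d = 0
        return -(gx * (u - cx) + gy * (v - cy)) / gz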
1998
J Dias, I Fonseca, J Lobo (1998)  New perspectives on mobile robot navigation with visual and inertial information   1998 5TH INTERNATIONAL WORKSHOP ON ADVANCED MOTION CONTROL - PROCEEDINGS 261-266  
Abstract: Advanced sensor systems, exploring high integrity and multiple sensorial modalities, have been significantly increasing the capabilities of autonomous vehicles and enlarging their application potential. The article describes two relevant sensors for mobile robot navigation - active vision systems and inertial sensors. Vision and inertial sensing are two sensory modalities that can be explored for navigation. This article presents our results on the use and integration of those two modalities. In a first example we present a computational solution for the problem of visual based guidance of a moving observer, by detecting the orientation of the cameras set that maximises the value of visual information. The algorithm explores the geometric properties of log-polar mapping. The second example relies on the integration of inertial and visual information to detect the regions in the scene where a mobile platform can be driven: in our case, the ground plane. The solution is based on information about the scene that could be obtained during a process of visual fixation, complemented by the information provided by inertial sensors. The tests were performed with a mobile platform equipped with one active vision system and inertial sensors. The paper presents our recent results on simulation of visual behaviours for navigation.
Notes: Times Cited: 0
A Valejo, J Lobo, J Dias (1998)  Short-range DGPS for mobile robots with wireless ethernet links   1998 5TH INTERNATIONAL WORKSHOP ON ADVANCED MOTION CONTROL - PROCEEDINGS 334-339  
Abstract: For outdoor mobile robot applications the satellite based GPS system is available for position estimation. Differential GPS can add to the precision but requires a fixed receiver at a known position and some communication link. Typically differential GPS data is sent over dedicated radio links or broadcast. For short-range applications, where multiple robots move around an outdoor workspace, a wireless ethernet link with multiple access points, supporting multiple robots, is a good and cost-effective solution. The net can be used, amongst other things, for sending the differential data from a local fixed receiver to the robots. In the article we present a client/server model that enables the mobile robot to sign up with the server and receive the DGPS data over the Internet, allowing the calculation of a more precise position. This provides a simple and flexible method of implementing DGPS corrections. Some field tests are presented that show that this simple approach can provide good results, without being an expensive solution.
Notes: Times Cited: 0
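A minimal sketch of the client/server idea: the fixed reference station streams its differential correction packets to every mobile robot that signs up over the wireless ethernet link. Plain TCP and the framing below are illustrative assumptions, not the paper's actual protocol.

    import socket, threading

    def serve_corrections(get_correction, host="0.0.0.0", port=2101):
        """Accept robot clients and stream each one correction packets;
        get_correction() is assumed to block until fresh bytes arrive."""
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen()
        while True:
            conn, _ = srv.accept()          # a robot signs up
            threading.Thread(target=_stream, args=(conn, get_correction),
                             daemon=True).start()

    def _stream(conn, get_correction):
        try:
            while True:
                conn.sendall(get_correction())
        finally:
            conn.close()

On the robot side, a matching client simply connects, reads the correction bytes, and feeds them to its GPS receiver to compute the corrected position.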
J Lobo, J Dias (1998)  Ground plane detection by fusing visual and inertial information   1998 5TH INTERNATIONAL WORKSHOP ON ADVANCED MOTION CONTROL - PROCEEDINGS 175-179  
Abstract: In mobile systems the position and attitude of the active vision system's cameras can be hard to determine. Inertial sensors coupled to the active vision system can provide valuable information, as happens with the vestibular system in humans and other animals. In this article, we present our integrated inertial and vision systems. The active vision system has a set of stereo cameras capable of vergence, with a common baseline, pan and tilt, and an implemented process of visual fixation. An inertial system prototype, based on low-cost sensors, was built. The inertial sensor data is used to segment the ground plane in the images. It is used to keep track of the gravity vector, allowing the identification of the vertical in the images. By performing visual fixation of a ground plane point, and knowing the 3D vector normal to level ground, we can determine the ground plane. The image can therefore be segmented, and the ground plane along which the robot can move identified.
Notes: Times Cited: 0
J Lobo, J Dias (1998)  Ground plane detection using visual and inertial data fusion   1998 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS - PROCEEDINGS, VOLS 1-3 912-917  
Abstract: Active vision systems can be used in robotic systems for navigation. The active vision system provides data on the robot's environment. In mobile systems the position and attitude of the cameras relative to the world can be hard to determine. Inertial sensors coupled to the active vision system can provide valuable information to aid the image processing task. In humans and other animals the vestibular system plays a similar role. In this article, we explain our recent steps in the integration of inertial data with an active vision system. The active vision system has a set of stereo cameras capable of vergence, with a common baseline, pan and tilt. A process of visual fixation has already been implemented, enabling symmetric vergence on any selected point. An inertial system prototype, based on low-cost sensors, was built. It is used to keep track of the gravity vector, allowing the identification of the vertical in the images. By performing visual fixation of a ground plane point, and knowing the 3D vector normal to level ground, we can determine the ground plane. The image can therefore be segmented, and the ground plane along which the robot can move identified. For on-the-fly visualisation of the segmented images and the detected points a VRML viewer is used.
Notes: Times Cited: 0
J Lobo, J Dias (1998)  Integration of inertial information with vision   IECON '98 - PROCEEDINGS OF THE 24TH ANNUAL CONFERENCE OF THE IEEE INDUSTRIAL ELECTRONICS SOCIETY, VOLS 1-4 1263-1267  
Abstract: Active vision systems can be used in robotic systems for navigation. The active vision system provides data on the robot's environment. In mobile systems the position and attitude of the cameras relative to the world can be hard to determine. Inertial sensors coupled to the active vision system can provide valuable information to aid the image processing task. In humans and other animals the vestibular system plays a similar role. In this article, we explain our recent steps in the integration of inertial data with an active vision system. The active vision system has a set of stereo cameras capable of vergence, with a common baseline, pan and tilt. A process of visual fixation has already been implemented, enabling symmetric vergence on any selected point. An inertial system prototype, based on low-cost sensors, was built. It is used to keep track of the gravity vector, allowing the identification of the vertical in the images. By performing visual fixation of a ground plane point, and knowing the 3D vector normal to level ground, we can determine the ground plane. The image can therefore be segmented, and the ground plane along which the robot can move identified. For on-the-fly visualisation of the segmented images and the detected points a VRML viewer is used.
Notes: Times Cited: 5
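The three 1998 papers above share the same geometric core: after fixating a ground point, its 3D position P is known from the verged stereo geometry, and the inertial sensors give the gravity direction, i.e. the normal of level ground, so the ground plane is fully determined. A sketch with hypothetical inputs:

    import numpy as np

    def ground_plane(P, g):
        """Plane (n, d) with n . X + d = 0 through fixation point P,
        with normal along the measured gravity direction g."""
        n = np.asarray(g) / np.linalg.norm(g)
        return n, -float(n @ P)

    def on_plane(X, n, d, tol=0.02):
        """Mask of 3D points within tol of the plane, e.g. floor points."""
        return np.abs(X @ n + d) < tol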
1997
J Lobo, J Dias (1997)  Integration of inertial information with vision towards robot autonomy   ISIE '97 - PROCEEDINGS OF THE IEEE INTERNATIONAL SYMPOSIUM ON INDUSTRIAL ELECTRONICS, VOLS 1-3 825-830  
Abstract: Reconstructing 3D data from images acquired by cameras is a difficult task. The problem becomes harder if the goal is to recover the dynamics of the 3D world from the image flow. However, it is known that humans integrate and combine the information from different sensorial systems to perceive the world. For example, the human vision system has close links with the vestibular system to perform everyday tasks. A computational approach for sensorial data integration, inertial and vision, is presented for a mobile robot equipped with an active vision system and inertial sensors. The inertial information is a different sensorial modality and, in this article, we explain our initial steps to combine this information with other sensorial systems, namely vision. Some of the benefits of using inertial information for navigation and dynamic visual processing are described in the article. During the development of these studies a low-cost inertial system prototype was developed. A brief description of low-cost inertial sensors and their integration in an inertial system prototype is also given. The set of sensors used in the prototype includes three piezoelectric vibrating gyroscopes, a tri-axial capacitive accelerometer and a dual axis clinometer. As a first approach the clinometer is used to track the camera's pan and tilt, relative to a plane normal to the gravity vector and parallel to the ground floor. This provides the orientation data that, combined with a process of visual fixation, enables the identification of the ground plane or others parallel to it. An algorithm that segments the image, identifying the floor along which the vehicle can move, is thus obtained.
Notes: Times Cited: 1
Conference papers

2009
J Lobo, J F Ferreira, J Dias (2009)  Robotic Implementation of Biological Bayesian Models Towards Visuo-inertial Image Stabilization and Gaze Control   443-448  
Abstract: Robotic implementations of gaze control and image stabilization that rely on fusing inertial and visual sensing modalities have been previously proposed. They are bioinspired in the sense that humans and other biological systems also combine the two sensing modalities for the same goal. In this work we build upon these previous results and, with the contribution of psychophysical studies, attempt a more biomimetic approach to the robotic implementation. Since Bayesian models have been successfully used to explain psychophysical experimental findings, we propose a robotic implementation using Bayesian inference.
Notes: Times Cited: 0
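In the spirit of the Bayesian models invoked above: with Gaussian likelihoods, fusing a visual and an inertial estimate of the same quantity (say, a rotation rate for image stabilization) reduces to a precision-weighted average, as in standard cue-combination accounts in psychophysics. A sketch with illustrative names, not the paper's actual model:

    def fuse_gaussian(mu_vis, var_vis, mu_in, var_in):
        """Posterior mean and variance of two independent Gaussian cues."""
        w_vis, w_in = 1.0 / var_vis, 1.0 / var_in
        var = 1.0 / (w_vis + w_in)
        return var * (w_vis * mu_vis + w_in * mu_in), var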
J Prado, J Lobo, J Dias (2009)  POSTER : Robotic Visual and Inertial Gaze Control using Human Learning   27-30  
Abstract: Humans make use of inertial and vision cues to determine ego-motion. Bayesian models can be used to capture this human behaviour for use in a robot. An environment may be composed of an infinite number of variables, and humans deal with only some of them each time a motor decision needs to be taken.
Notes: Times Cited: 0
2006
L Mirisola, J Lobo, J Dias (2006)  Stereo vision 3D map registration for airships using vision-inertial sensing   102-107  
Abstract: A depth map registration method is proposed in this article, and experimental results are presented for long three-dimensional map sequences obtained from a moving observer. In vision based systems used in mobile robotics, the perception of self-motion and of the structure of the environment is essential. Inertial and earth field magnetic pose sensors can provide valuable data about camera ego-motion, as well as absolute references for structure feature orientations. In this work we explore the fusion of stereo techniques with data from the inertial and magnetic sensors, enabling registration of 3D maps acquired by a moving observer. The article reviews the camera-inertial calibration used, other works on registering stereo point clouds from aerial images, as well as related problems such as robust image matching. The map registration approach is presented and validated with experimental results on ground outdoor environments.
Notes: Times Cited: 0
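A sketch of how the absolute attitude references simplify this registration: with each point cloud rotated into the common world frame given by the inertial and magnetic sensors, only a translation remains between maps, recoverable in closed form from matched points (correspondences assumed found elsewhere, e.g. by the robust image matching reviewed in the paper).

    import numpy as np

    def register_translation(P_world, Q_world):
        """Translation aligning matched, attitude-rectified clouds P onto Q."""
        return Q_world.mean(axis=0) - P_world.mean(axis=0)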