PhD Contract
INVETT Research Group (Intelligent Vehicles and Traffic Technologies), Universidad de Alcalá, Madrid, Spain
After obtaining a BSc in Technical Engineering in Computer Systems in 2014, I worked for three years at a consulting firm. Wanting to change my working life, in 2017 I asked for a leave of absence and enrolled in the MSc in Artificial Intelligence at the Polytechnic University of Madrid, where I discovered my passion for research.
During the master's degree, two teammates from the amateur soccer team I belong to, Raúl Quintero and Sergio Álvarez, told me about the INVETT (INtelligent VEhicles and Traffic Technologies) research group and the possibility of continuing my studies by pursuing a Ph.D. there.
After completing my MSc, going through the relevant interviews, and applying for enrollment, I started working for the INVETT research group in 2018. Thus, I began my Ph.D. in Information and Communication Technologies under the supervision of Dr. Miguel Angel Sotelo and Dr. Ignacio Parra. My thesis focuses on semantic road segmentation and intersection detection in the city using computer vision. During this stage, my work has been funded through project contracts by the Spanish Ministry of Research and Science, the European Union, and the Community of Madrid. Since 2019, I have served as a reviewer for international conferences such as the IEEE Conference on Intelligent Transportation Systems.
I am currently working on my thesis.
PhD in Information and Communications Technologies
Universidad de Alcalá, Madrid, Spain
Master in Artificial Intelligence
Universidad Politécnica, Madrid, Spain
Bachelor's degree in Ingeniería Técnica en Informática de Sistemas (Technical Engineering in Computer Systems), comparable to a computer science degree
Universidad de Alcalá, Madrid, Spain
My research interests mainly focus on environment perception for ADAS, Intelligent Transportation Systems, and Autonomous Vehicles. For environment perception, I have experience using GPS, color and greyscale cameras, monocular and stereo camera systems, and single-beam and multi-beam LIDARs.
A recent trend in the car industry is the installation of ADAS and different types of sensors in today's cars. I consider this a promising field of research, but I am also interested in augmented reality and computer vision applications for sports.
DRIVERTIVE is a driverless cooperative vehicle developed at the University of Alcalá, intended for autonomous operation in urban areas and on highways. The DRIVERTIVE team won the Prize for the Best Team with Full Automation at the Grand Cooperative Driving Challenge (GCDC 2016) held in Helmond, The Netherlands, on 28-29 May 2016. (2015-2017).
We define an Assistive Pedestrian Crossing as a pedestrian crossing able to interact with users with disabilities and provide an adaptive response to increase, maintain or improve their functional capabilities while crossing. Thus, the infrastructure should be able to locate pedestrians with special needs as well as to identify their specific disability. User location is obtained by means of a stereo-based pedestrian detection system. Disability identification is performed by means of an RFID-based anonymous procedure in which pedestrians are only required to wear a portable, passive RFID tag. (2014-2016).
Automatic vehicle model detection is a still-unresolved task, and the need for a full vehicle identification approach is becoming more relevant due to the increased demand for effectiveness and security. Current traffic surveillance applications, speed and access control platforms, automatic tollgate systems, etc., rely on License Plate Recognition (LPR) systems, which provide a unique but weak identifier for each detected vehicle: the license plate. A more detailed description of the different parameters of a vehicle would enhance current vehicle identification systems. Besides the license plate, the vehicle colour, plate colour, car make and, finally, the car model are representative variables of a vehicle. (2013-2015).
Company: Orbital Aerospace. Description: Development of a C++ API to connect to, configure, and capture images from a FLIR camera using the GigE Vision protocol. (2013).
The main goal of this project is to design, implement and test an intelligent transportation system able to manage a fleet of autonomous vehicles in a dedicated area to meet on-demand transportation needs. This goal requires solving important issues in the global coordination of a vehicle fleet and, in the autonomous vehicles area, problems such as joining and leaving traffic at roundabouts, sensor fusion for improved positioning, and backup positioning systems for greater reliability. It will also be necessary to develop functionalities to detect the system's users: pedestrians who want to be transported, accident victims who need an ambulance, or even valuable items that need to be moved. The specific goals are:
1. Develop a supervisor control system capable of integrating both infrastructure and vehicle information in order to manage the traffic in a dedicated area using wireless communications.
2. Develop fully-autonomous vehicles able to drive safely in dedicated areas to move persons or valuable items under real conditions.
3. Develop an infrastructure sensor and actuation (traffic lights) network able to provide the supervisor control system with the needed data and operative control over the manually driven cars. In summary, the project works at the frontier of knowledge in the intelligent transportation systems field and, thanks to the available facilities, we aim to run an open, public demonstration event to show the results of the project, demonstrating the capacity of Spanish research institutions to go beyond the state of the art in Intelligent Transportation Systems. (2012-2016).
Company: Euroconsult. Description: 2D/3D railway geometry monitoring for imperfection detection. For this project, a high-resolution LIDAR, cameras and an IMU sensor are installed on a train, and the geometry of the tunnel and rails is processed using the Point Cloud Library and OpenCV. (2012-2013).
The driverless public transportation systems currently operating in some airports and train stations are restricted to dedicated roads and have serious trouble dynamically avoiding obstacles in their trajectory. In this project, an electric autonomous mini-bus was used during the demonstration event of the 2012 IEEE Intelligent Vehicles Symposium, which took place in Alcalá de Henares (Spain). The demonstration consisted of a 725-metre route defined by a list of latitude-longitude points (waypoints). The mini-bus was capable of driving autonomously from one waypoint to another using a GPS sensor. Furthermore, the vehicle is equipped with a multi-beam Laser Imaging Detection and Ranging (LIDAR) sensor for surrounding reconstruction and obstacle detection. When an obstacle is detected in the planned path, the route is modified to avoid the obstacle and continue to the end of the mission. On the demonstration day, a total of 196 attendees had the opportunity to get a ride in the vehicle. A total of 28 laps were successfully completed in full autonomous mode on a private circuit located at the National Institute for Aerospace Research (INTA), Spain. In other words, the system completed 20.3 km of driverless navigation and obstacle avoidance. Funded by CSIC. (2012).
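The waypoint-to-waypoint navigation described above can be illustrated with a minimal sketch. Everything below (function names, the reached-waypoint radius) is hypothetical and is not the project's code; it only shows the kind of distance and bearing computation a GPS waypoint follower needs:

```python
import math

def distance_and_bearing(lat1, lon1, lat2, lon2):
    """Distance (m) and bearing (deg, 0 = north) from one GPS fix to another.

    Uses the equirectangular approximation, which is adequate for the
    short hops between consecutive waypoints on a 725 m circuit."""
    R = 6371000.0  # mean Earth radius, metres
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    x = math.radians(lon2 - lon1) * math.cos((phi1 + phi2) / 2.0)
    y = phi2 - phi1
    dist = R * math.hypot(x, y)
    bearing = math.degrees(math.atan2(x, y)) % 360.0
    return dist, bearing

def next_waypoint(position, waypoints, reached_radius=2.0):
    """Drop waypoints already within reached_radius metres and return
    the next one to steer towards (None when the mission is done)."""
    while waypoints:
        d, _ = distance_and_bearing(*position, *waypoints[0])
        if d > reached_radius:
            return waypoints[0]
        waypoints.pop(0)
    return None
```

A full implementation would use proper geodesic formulas and fuse the GPS fix with odometry; this sketch only conveys the control loop's geometry.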
In this project we built a robotic research platform for developing high-level applications using the Robot Operating System (ROS). It is a differential traction platform equipped with odometry and distance sensors (ultrasound and infrared), designed to work indoors. The embedded boards run ROS modules to control the motors and to perceive the information from the sensors, so perception is completely transparent to the remote control station. A modular design was chosen to increase functionality and autonomy. In addition, we designed a 3D model in the Gazebo simulator that can be used as a prior before building the actual application. (2012-2013).
One of the challenges still open in traffic sign recognition is discarding detected signs that do not pertain to the host road. The position of each detected traffic sign is obtained from a stereo pair of cameras, and signs whose position is far from the vehicle's lane are discarded. However, in some scenarios the 3D relative position is not enough to discard signs that do not apply to the host road; in those cases, information from a vehicle-to-infrastructure (V2I) communication system is proposed as a solution, so the wireless V2I communication system acts as a support for the traffic sign recognition system. Financed by the Regional Government of Madrid. (2011).
The objective of Robocity2030-II is to develop an innovative integration of Service Robot applications, in an effort to increase the quality of life of citizens in metropolitan areas. This means that the human is now at the centre of things and Service Robots are developed from, for and to the benefit of humans. To do so, the project brings together and coordinates the research of six leading Service Robot groups in the Community of Madrid, with around 70 R+D projects in robotics over the past five years, nearly a third of which are European. (2010-2013).
In this project, a real-time vision-based blind spot warning system was developed, specially designed for motorcycle detection in both daytime and nighttime conditions. Motorcycles are fast-moving, small vehicles that frequently remain unseen by other drivers, mainly in the blind-spot area. In fact, although in recent years the number of fatal accidents has decreased overall, motorcycle accidents have increased by 20%. The risks are primarily linked to the intrinsic characteristics of this mode of travel: motorcycles are fast-moving vehicles, light, unstable and fragile. These features make motorcycle detection a difficult but challenging task from the computer vision point of view. In this project, we developed a daytime and nighttime vision-based motorcycle and car detection system for the blind spot area using a single camera installed on the side mirror. On the one hand, daytime vehicle detection is carried out using optical flow features and Support Vector Machine (SVM) classification. On the other hand, nighttime vehicle detection is based on headlight detection. The proposed system warns the driver about the presence of vehicles in the blind area, including information about the position and type of vehicle. Extensive experiments were carried out on 172 minutes of sequences recorded in real traffic scenarios in both daytime and nighttime conditions, in the context of the Valencia MotoGP Grand Prix 2009. (2010).
GUIADE's aim is the development of an autonomous public transport fleet based on a multi-modal perception of the environment, using information collected by the vehicles from the environment as well as from the infrastructure. Financed by the Spanish Ministry of Science and Innovation (MICINN). (2008-2011).
Company: 3M. Description: This project is a complement to VISUALISE. The goal is to install an RFID antenna in the VISUALISE vehicle and read each traffic sign's RFID tag. A matching process is then applied to add the retroreflection measurement taken by VISUALISE to the traffic sign's history. The information for every traffic sign (position, installation date, type of reflective sheeting, etc.) is stored in a database to improve the road maintenance company's quality of service. (2008-2010).
Company: Euroconsult. Description: VISUALISE is a high-performance unit for the dynamic auscultation of traffic signs on roads. It allows the automatic determination of traffic sign condition with regard to night visibility, carrying out sign retroreflection measurements at regular traffic speed. The main technological innovation of this equipment is that test data acquisition is performed dynamically: the system is installed in a vehicle that circulates at regular traffic speed, and valid data can be obtained at speeds of up to 110 km/h. (2008-2010).
Disclaimer: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
IEEE material: Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.
Stereo-based object detection systems can be greatly enhanced thanks to the use of passive UHF RFID technology. By combining tag localization with its identification capability, new features can be associated with each detected object, extending the set of potential applications. The main problem consists in the association between RFID tags and objects due to the intrinsic limitations of RSSI-based localization approaches. In this paper, a new directional RSSI-distance model is proposed, taking into account the angle between the object and the antenna. The parameters of the model are automatically obtained by means of a stereo-RSSI automatic calibration process. A robust data association method is presented to deal with complex outdoor scenarios in medium-sized areas with a measurement range of up to 15 m. The proposed approach is validated in crosswalks with pedestrians wearing portable passive RFID tags.
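For readers unfamiliar with RSSI-distance models, a minimal sketch follows. The functional form and every parameter below are invented for illustration; the paper's directional model and its stereo-RSSI calibration are more elaborate:

```python
import math

def rssi_model(distance_m, angle_deg, p0=-45.0, n=2.0, k=6.0):
    """Hypothetical directional RSSI model: log-distance path loss plus a
    cosine-shaped angular attenuation. p0 = RSSI at 1 m on boresight (dBm),
    n = path-loss exponent, k = maximum angular attenuation (dB).
    In the paper such parameters come out of an automatic calibration."""
    path_loss = p0 - 10.0 * n * math.log10(distance_m)
    angular_loss = k * (1.0 - math.cos(math.radians(angle_deg)))
    return path_loss - angular_loss

def invert_distance(rssi, angle_deg, p0=-45.0, n=2.0, k=6.0):
    """Recover distance from an RSSI reading once the angle is known
    (e.g. supplied by the stereo detection system)."""
    angular_loss = k * (1.0 - math.cos(math.radians(angle_deg)))
    return 10.0 ** ((p0 - angular_loss - rssi) / (10.0 * n))
```

The point of conditioning on the angle is visible here: ignoring the angular term biases the recovered distance whenever the tag is off boresight.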
Stereo-based object detection systems can be greatly enhanced thanks to the use of wireless identification technology. By combining tag localization with its identification capability, new features can be associated with each detected object, extending the set of potential applications. The main problem consists in the association between wireless tags and objects due to the intrinsic limitations of Received Signal Strength Indicator (RSSI)-based localization approaches. In this paper, an experimental comparison between two specific technologies is presented: passive UHF Radio Frequency IDentification (RFID) and Bluetooth Low Energy (BLE). An automatic calibration process is used to model the relationship between RSSI and distance values. A robust data association method is presented to deal with complex outdoor scenarios in medium-sized areas with a measurement range of up to 15 m. The proposed approach is validated in crosswalks with pedestrians wearing portable passive RFID tags and active BLE beacons.
Assistive technology usually refers to systems used to increase, maintain, or improve functional capabilities of individuals with disabilities. This idea is here extended to transportation infrastructures, using pedestrian crossings as a specific case study. We define an Assistive Pedestrian Crossing as a pedestrian crossing able to interact with users with disabilities and provide an adaptive response to increase, maintain or improve their functional capabilities while crossing. Thus, the infrastructure should be able to locate the pedestrians with special needs as well as to identify their specific disability. In this paper, user location is obtained by means of a stereo-based pedestrian detection system. Disability identification is proposed by means of a RFID-based anonymous procedure from which pedestrians are only required to wear a portable and passive RFID tag. Global nearest neighbor is applied to solve data association between stereo targets and RFID measurements. The proposed assistive technology is validated in a real crosswalk, including different complex scenarios with multiple RFID tags.
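The global nearest neighbor association step mentioned in the abstract can be sketched as follows. This brute-force version is illustrative only (the cost matrix and gate value are hypothetical); exhaustive search is feasible because a crosswalk scene contains only a handful of targets:

```python
from itertools import permutations

def global_nearest_neighbor(cost, gate=None):
    """Global nearest neighbor by exhaustive search: pick the assignment
    of rows (stereo targets) to columns (RFID tags) minimizing total cost.
    cost[i][j] could be the distance between stereo target i and the
    position implied by tag j's RSSI. Assumes len(cost) <= len(cost[0]).
    Pairs whose cost exceeds `gate` are left unassigned."""
    n, m = len(cost), len(cost[0])
    best, best_total = None, float("inf")
    for perm in permutations(range(m), n):
        total = sum(cost[i][perm[i]] for i in range(n))
        if total < best_total:
            best_total, best = total, list(zip(range(n), perm))
    if gate is not None:
        best = [(i, j) for i, j in best if cost[i][j] <= gate]
    return best
```

For larger problems the Hungarian algorithm would replace the permutation loop; the global criterion (minimum total cost) is what distinguishes GNN from greedy per-target matching.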
In this paper, a comparative analysis of decision tree-based classifiers is presented. Two different approaches are compared: the first is a specific classifier for each type of scene; the second is a general classifier for every type of scene. Both approaches are trained with a set of features that covers texture, color, shadows, vegetation and other 2D features. In addition to the 2D features, 3D features are taken into account, such as normals, curvatures and heights with respect to the ground plane. Several tests are run on five different classifiers to find the best parameter configuration and to obtain the importance of each feature in the final classification. In order to compare the results of this paper with the state of the art, the system has been tested on the public KITTI Benchmark dataset.
This paper addresses the problem of curb detection for ADAS or autonomous navigation in urban scenarios. The algorithm is based on clouds of 3D points and is evaluated using 3D information from a pair of stereo cameras and a LIDAR. Curbs are detected based on road surface curvature. The curvature estimation requires a dense point cloud, so the density of the LIDAR cloud is augmented using Iterative Closest Point (ICP) over the previous scans. The proposed algorithm can deal with curbs of different curvatures and heights, from as low as 3 cm, in a range of up to 20 m (provided that the curbs are connected in the curvature image). The curb parameters are modeled using straight lines and compared to the ground truth using the lateral error as the key performance indicator. The ground-truth sequences were manually labeled on urban images from the KITTI dataset and made publicly available for the scientific community.
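As background on the curvature cue, a common way to measure local flatness in a point cloud is the PCA-based "surface variation". The sketch below is a generic stand-in for the curvature images used in the paper, not the paper's algorithm:

```python
import numpy as np

def surface_variation(neighborhood):
    """Surface variation of a 3D point neighborhood: the ratio
    lambda_0 / (lambda_0 + lambda_1 + lambda_2), with lambda_0 the
    smallest eigenvalue of the neighborhood's covariance matrix.
    It is ~0 on a flat road surface and grows at height discontinuities
    such as curbs, which is the kind of cue a curb detector thresholds."""
    pts = np.asarray(neighborhood, dtype=float)  # shape (N, 3)
    cov = np.cov(pts.T)
    eig = np.sort(np.linalg.eigvalsh(cov))
    return eig[0] / eig.sum()
```

Computed over a sliding neighborhood of the (ICP-densified) cloud, such a scalar field yields an image-like map in which connected high-variation regions can be traced as curb candidates.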
In this paper, a stereo- and infrastructure-based pedestrian detection system is presented to deal with infrastructure-based pedestrian safety measures as well as to assess pedestrian behaviour modelling methods. Pedestrian detection is performed by region growing over temporal 3D density maps, which are obtained by means of stereo reconstruction and background modelling. 3D tracking allows the pedestrian position to be correlated with the different pedestrian crossing regions (waiting and crossing areas). As an example of an infrastructure safety system, a blinking luminous traffic sign is switched on to warn drivers about the presence of pedestrians in the waiting and crossing regions. The detection system provides accurate results even in nighttime conditions: an overall detection rate of 97.43% with one false alarm per 10 minutes. In addition, the proposed approach is validated for use in pedestrian behaviour modelling, applying logistic regression to model the probability that a pedestrian will cross or wait. Some of the predictor variables are obtained automatically by using the pedestrian detection system; others still need to be labelled under manual supervision. A sequential feature selection method showed that time-to-collision and pedestrian waiting time (both variables collected automatically) are the most significant parameters when predicting pedestrian intent. An overall predictive accuracy of 93.10% is obtained, which clearly validates the proposed methodology.
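The logistic-regression intent model can be sketched with the two automatically collected predictors named above. The weights and their signs below are invented purely for illustration; the paper fits the actual coefficients on labelled sequences:

```python
import math

def crossing_probability(ttc_s, wait_s, w0=-1.0, w_ttc=0.8, w_wait=0.3):
    """Hypothetical logistic model of pedestrian intent: probability of
    crossing as a function of time-to-collision (s) and accumulated
    waiting time (s), the two predictors the sequential feature
    selection found most significant. Weights are made-up placeholders."""
    z = w0 + w_ttc * ttc_s + w_wait * wait_s
    return 1.0 / (1.0 + math.exp(-z))
```

The value of the logistic form is that each fitted weight has a direct odds-ratio interpretation, which is why it suits behaviour modelling better than a black-box classifier here.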
This paper addresses a framework for road curb and lane detection in the context of urban autonomous driving, with particular emphasis on unmarked roads. Based on a 3D point cloud, the 3D parameters of several curb models are computed using curvature features and Conditional Random Fields (CRF). Information regarding obstacles is also computed from the 3D point cloud, including vehicles and urban elements such as lampposts, fences, walls, etc. In addition, a gray-scale image provides the input for computing lane markings whenever they are present and visible in the scene. A high-level decision-making system yields accurate information regarding the number and location of drivable lanes, based on curbs, lane markings, and obstacles. Our algorithm can deal with curbs of different curvatures and heights, from as low as 3 cm, in a range of up to 20 m. The system has been successfully tested on images from the KITTI dataset in real traffic conditions, containing different numbers of lanes, marked and unmarked roads, as well as curbs of quite different heights. Although preliminary results are promising, further research is needed in order to deal with intersection scenes where no curbs are present and lane markings are absent or misleading.
This paper describes a real-time vision-based blind spot warning system that has been specially designed for motorcycle detection in both daytime and nighttime conditions. Motorcycles are fast-moving, small vehicles that frequently remain unseen by other drivers, mainly in the blind-spot area. In fact, although in recent years the number of fatal accidents has decreased overall, motorcycle accidents have increased by 20%. The risks are primarily linked to the intrinsic characteristics of this mode of travel: motorcycles are fast-moving vehicles, light, unstable and fragile. These features make motorcycle detection a difficult but challenging task from the computer vision point of view. In this paper we present a daytime and nighttime vision-based motorcycle and car detection system for the blind spot area using a single camera installed on the side mirror. On the one hand, daytime vehicle detection is carried out using optical flow features and Support Vector Machine (SVM) classification. On the other hand, nighttime vehicle detection is based on headlight detection. The proposed system warns the driver about the presence of vehicles in the blind area, including information about the position and type of vehicle. Extensive experiments were carried out on 172 minutes of sequences recorded in real traffic scenarios in both daytime and nighttime conditions, in the context of the Valencia MotoGP Grand Prix 2009.
At present, the topic of automated vehicles is one of the most promising research areas in the field of Intelligent Transportation Systems (ITS). The use of automated vehicles for public transportation also contributes to reductions in congestion levels and to improvements in traffic flow. Moreover, electric public autonomous vehicles are environmentally friendly, provide better air quality and contribute to energy conservation. The driverless public transportation systems currently operating in some airports and train stations are restricted to dedicated roads and have serious trouble dynamically avoiding obstacles in their trajectory. In this paper, an electric autonomous mini-bus is presented. All datasets used in this article were collected during the experiments carried out in the demonstration event of the 2012 IEEE Intelligent Vehicles Symposium, which took place in Alcalá de Henares (Spain). The demonstration consisted of a 725-metre route defined by a list of latitude-longitude points (waypoints). The mini-bus was capable of driving autonomously from one waypoint to another using a GPS sensor. Furthermore, the vehicle is equipped with a multi-beam Laser Imaging Detection and Ranging (LIDAR) sensor for surrounding reconstruction and obstacle detection. When an obstacle is detected in the planned path, the route is modified to avoid the obstacle and continue to the end of the mission. On the demonstration day, a total of 196 attendees had the opportunity to get a ride in the vehicle. A total of 28 laps were successfully completed in full autonomous mode on a private circuit located at the National Institute for Aerospace Research (INTA), Spain. In other words, the system completed 20.3 km of driverless navigation and obstacle avoidance.
This paper describes an automatic system that assesses the thermal insulation properties of the different components of a building's envelope by combining laser data with thermal images. Sensor data are obtained from a moving vehicle equipped with a GPS sensor. Range data are integrated to obtain the 3D structure of the building facade and combined with thermal images to separate components such as walls, window frames and window glasses. Thermal leakage is found by detecting irregularities in the thermal measurements of each building component separately (window glasses, window frames and walls).
There is clear evidence that investment in intelligent transportation system technologies brings major social and economic benefits. Technological advances in the area of automatic systems in particular are becoming vital for the reduction of road deaths. We here describe our approach to the automation of one of the riskiest autonomous manœuvres involving vehicles: overtaking. The approach is based on a stereo vision system responsible for detecting any preceding vehicle and triggering the autonomous overtaking manœuvre. To this end, a fuzzy-logic based controller was developed to emulate how humans overtake. Its input is information from the vision system and from a positioning-based system consisting of a differential global positioning system (DGPS) and an inertial measurement unit (IMU). Its output is the generation of action on the vehicle's actuators, i.e., the steering wheel and the throttle and brake pedals. The system has been incorporated into a commercial Citroën car and tested on the private driving circuit at the facilities of our research center, CAR, with different preceding vehicles (a motorbike, a car, and a truck), with encouraging results.
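To give a flavour of fuzzy control for readers unfamiliar with it, here is a toy one-input controller with triangular memberships and weighted-average defuzzification. It is not the paper's rule base (which takes vision and DGPS/IMU inputs); the membership ranges and output singletons are made up:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b,
    falling back to zero at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_steering(lateral_error_m):
    """Toy Mamdani-style controller: lateral error LEFT -> steer RIGHT,
    ZERO -> hold, RIGHT -> steer LEFT. Output is a steering command in
    degrees, defuzzified as the weighted average of output singletons."""
    mu_left = tri(lateral_error_m, -2.0, -1.0, 0.0)
    mu_zero = tri(lateral_error_m, -1.0, 0.0, 1.0)
    mu_right = tri(lateral_error_m, 0.0, 1.0, 2.0)
    num = mu_left * 10.0 + mu_zero * 0.0 + mu_right * (-10.0)
    den = mu_left + mu_zero + mu_right
    return num / den if den > 0 else 0.0
```

The appeal for emulating human driving is that each rule ("if the car has drifted left, steer gently right") reads exactly like the behaviour it encodes, and overlapping memberships blend rules smoothly instead of switching abruptly.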
In this paper, a real-time free space detection system is presented using a medium-cost lidar sensor and a low-cost camera. The extrinsic relationship between the two sensors is obtained through an off-line calibration process. The lidar provides measurements corresponding to 4 horizontal layers with a vertical resolution of 3.2 degrees. These measurements are integrated over time according to the relative motion of the vehicle between consecutive laser scans. A special case is considered for Spanish speed humps, since these are usually detected as an obstacle. In Spain, speed humps are directly related to raised zebra crossings, so they should have white stripes painted on them. Accordingly, the conditions required to detect a speed hump are to detect a slope shape on the road and a zebra crossing at the same time. The first condition is evaluated using the lidar sensor and the second using the camera.
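The two-cue decision rule described above (a slope from the lidar AND zebra stripes from the camera) can be sketched as follows; the thresholds and data layout are hypothetical, chosen only to make the logic concrete:

```python
def has_slope(profile, min_rise=0.05):
    """profile: list of (distance_m, height_m) road samples along the
    lidar scan direction. A hump shows as a rise followed by a fall."""
    heights = [h for _, h in profile]
    peak = max(heights)
    return peak - heights[0] >= min_rise and peak - heights[-1] >= min_rise

def has_stripes(row, min_transitions=4):
    """row: a binarized image row (0/1) taken across the road surface;
    painted zebra stripes produce many dark/light transitions."""
    transitions = sum(1 for a, b in zip(row, row[1:]) if a != b)
    return transitions >= min_transitions

def is_speed_hump(profile, row):
    """Spanish raised zebra crossings: require BOTH cues, as in the text,
    so a plain hill or a flat zebra crossing alone does not trigger."""
    return has_slope(profile) and has_stripes(row)
```

Requiring the conjunction is the key design choice: either cue alone produces false positives (bridges and ramps for the slope, flat crossings for the stripes), while their intersection is specific to raised zebra crossings.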
This paper presents the results of a set of extensive experiments carried out in daytime and nighttime conditions in real traffic using an enhanced or extended Floating Car Data (xFCD) system that includes a stereo vision sensor for detecting the local traffic ahead. The detection component combines monocular approaches previously developed by our group with new stereo vision algorithms that add robustness to the detection and increase the accuracy of the relative distance and speed measurements. Besides the stereo pair of cameras, the vehicle is equipped with a low-cost GPS and an electronic device for CAN Bus interfacing. The xFCD system has been tested on a 198-minute sequence recorded in real traffic scenarios with different weather and illumination conditions, which represents the main contribution of this paper. The results are promising and demonstrate that the system is ready to be used as a source of traffic state information.
This paper describes a new approach for improving the estimation of the global position of a vehicle in complex urban environments by means of visual odometry and map fusion. The visual odometry system is based on compensating for the heteroscedasticity in the 3D input data using a weighted nonlinear least squares system. RANdom SAmple Consensus (RANSAC) based on the Mahalanobis distance is used for outlier removal. The motion trajectory information is used to keep track of the vehicle position on a digital map during GPS outages. The final goal is autonomous vehicle outdoor navigation in large-scale environments and the improvement of current vehicle navigation systems based only on standard GPS. This research is oriented towards the development of traffic collective systems aiming at vehicle-infrastructure cooperation to improve dynamic traffic management. We provide examples of estimated vehicle trajectories and map fusion using the proposed method and discuss the key issues for further improvement.
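As background, the RANSAC outlier-removal step can be sketched on a toy 2D line-fitting problem. A plain residual threshold stands in here for the Mahalanobis distance used in the paper, and all names and values are illustrative:

```python
import random

def fit_line(p, q):
    """Line through two points as (slope, intercept); assumes p.x != q.x."""
    m = (q[1] - p[1]) / (q[0] - p[0])
    return m, p[1] - m * p[0]

def ransac_line(points, iters=200, thresh=0.2, seed=0):
    """Minimal RANSAC: repeatedly fit a model to a random minimal sample
    and keep the model supported by the most inliers. Outliers (e.g. 3D
    points on moving objects, in the visual odometry setting) never get
    to bias the winning model."""
    rng = random.Random(seed)
    best_model, best_count = None, 0
    for _ in range(iters):
        p, q = rng.sample(points, 2)
        if p[0] == q[0]:
            continue  # degenerate sample, skip
        m, b = fit_line(p, q)
        count = sum(1 for x, y in points if abs(y - (m * x + b)) < thresh)
        if count > best_count:
            best_model, best_count = (m, b), count
    return best_model, best_count
```

In the paper's setting the model is a rigid motion rather than a line, and the inlier test weighs each residual by its covariance (the Mahalanobis distance), which is exactly the heteroscedasticity compensation the abstract refers to.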
This paper presents vision-based road surface classification in the context of infrastructure inspection and maintenance, proposed as a stage for improving the performance of a distress detection system. High-resolution road images are processed to distinguish among surfaces arranged according to the different materials used to build roads and their degree of granulation and striation. A multi-class Support Vector Machine (SVM) classification system using mainly Local Binary Pattern (LBP), Gray-Level Co-occurrence Matrix (GLCM) and Maximally Stable Extremal Regions (MSER) derived features is described. The different texture analysis methods are compared in terms of accuracy and computational load. Experiments with real application images show a significant improvement in the distress detection system's performance when several feature extraction methods are combined.
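Of the texture features listed, LBP is the simplest to illustrate. The sketch below computes basic 8-neighbour LBP codes only; the paper builds histograms of such codes and combines them with GLCM and MSER features before the SVM:

```python
import numpy as np

def lbp_codes(img):
    """Basic 8-neighbour Local Binary Pattern: each interior pixel gets
    an 8-bit code, one bit per neighbour whose intensity is >= the
    centre pixel, walking clockwise from the top-left neighbour.
    Histograms of these codes are a compact texture descriptor."""
    img = np.asarray(img, dtype=float)
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    centre = img[1:h - 1, 1:w - 1]
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offs):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neigh >= centre).astype(np.uint8) << bit
    return codes
```

Because the code depends only on intensity ordering, not absolute values, LBP histograms are robust to the illumination changes typical of outdoor road imagery, which is part of why they suit surface classification.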
This paper describes a new approach for improving the estimation of a vehicle's motion trajectory in complex urban environments by means of visual odometry. A new strategy for compensating for the heteroscedasticity in the 3D input data using a weighted non-linear least squares system is presented. A Matlab simulator is used to analyze the estimation error and validate the new solution. The obtained results are discussed and compared to the previous system. The final goal is autonomous vehicle outdoor navigation in large-scale environments and the improvement of current vehicle navigation systems based only on standard GPS. This research is oriented towards the development of traffic collective systems aiming at vehicle-infrastructure cooperation to improve dynamic traffic management. We provide examples of estimated vehicle trajectories using the proposed method and discuss the key issues for further improvement.
This paper presents a complete vision-based vehicle detection system for Floating Car Data (FCD) enhancement in the context of Vehicular Ad hoc NETworks (VANETs). Three cameras (side, forward and rear looking cameras) are installed onboard a vehicle in a fleet of public buses. Thus, a more representative local description of the traffic conditions (extended FCD) can be obtained. Specifically, the vision modules detect the number of vehicles contained in the local area of the host vehicle (traffic load) and their relative velocities. Absolute velocities (average road speed) and global positioning are obtained after combining the outputs provided by the vision modules with the data supplied by the CAN Bus and the GPS sensor. This information is transmitted by means of a GPRS/UMTS data connection to a central unit which merges the extended FCD in order to maintain an updated map of the traffic conditions (traffic load and average road speed). The presented experiments are promising in terms of detection performance and computational costs. However, significant effort is further necessary before deploying a system for large-scale real applications.
The goal of this paper is to study a noisy WiFi range-only sensor and its application in the development of localization and mapping systems. Moreover, the paper compares several localization and mapping techniques. These techniques have been applied successfully with other technologies, such as ultra-wide band (UWB), but we demonstrate that these systems can be applied correctly even with a much noisier sensor. We use two trilateration techniques and a particle filter to develop the localization and mapping systems based on the range-only sensor. Experimental results and conclusions are presented.
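One standard trilateration technique of the kind mentioned can be sketched via linearized least squares: subtracting the first anchor's range equation from the others cancels the quadratic terms and leaves a linear system in the unknown position. The anchor layout below is hypothetical; this is a generic method, not necessarily the paper's exact formulation:

```python
import numpy as np

def trilaterate(anchors, ranges):
    """Range-only 2D localization by linearized least squares.

    Each anchor i gives (x - xi)^2 + (y - yi)^2 = ri^2; subtracting the
    first equation from the rest yields, for i >= 1:
        2(xi - x0) x + 2(yi - y0) y
            = (xi^2 - x0^2) + (yi^2 - y0^2) + r0^2 - ri^2.
    Needs >= 3 non-collinear anchors; with more, the least-squares solve
    averages out noisy (e.g. WiFi) range measurements."""
    anchors = np.asarray(anchors, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    x0, y0, r0 = anchors[0, 0], anchors[0, 1], ranges[0]
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (r0 ** 2 - ranges[1:] ** 2
         + anchors[1:, 0] ** 2 - x0 ** 2
         + anchors[1:, 1] ** 2 - y0 ** 2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos
```

A particle filter, the third technique mentioned, would instead keep a cloud of position hypotheses and reweight them by how well each explains the incoming ranges, which degrades more gracefully under the heavy WiFi noise.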
Computer vision systems used in road maintenance, whether related to signs or to the road itself, are playing a major role in many countries because of increased investment in public works of this kind. These systems can collect a wide range of information automatically and quickly, with the aim of improving road safety. In this context, the suitability of the information contained on the road signs located above the road, typically known as traffic panels, is vital for correct and safe use by the road user. This paper describes the first steps of a system under development that will be able to make an inventory of, and check the reliability of, the information contained on traffic panels, and whose final aim is to form part of an automatic visual inspection system for signs and panels.
I taught the practical part of different subjects in engineering degrees for 5 years. During the period 2012-2016, I taught C/C++ programming in the Telecommunications Engineering degree. In addition, I taught computer vision for 3 years in the Electronics and Industrial Automation Engineering degree at the University of Alcalá.
The objective of the course is the study of computer vision and image acquisition systems for industrial applications. The programme of the course includes camera configuration and image acquisition, camera calibration, motion detection, object detection, segmentation algorithms, image filtering and pattern recognition.
The objective of the course is the in-depth study of structured programming using the C programming language. The programme of the course includes: a review of basic concepts about pointers, advanced use of pointers, advanced management of functions, creation and manipulation of files, and dynamic data structures and algorithms.
This dataset was recorded using a Fire-i camera at a resolution of 640 x 480 @ 30 fps. Two code examples are available for using the videos: the first plays a video using mplayer, and the second reads the video and shows the frames using the OpenCV library. Chessboard pattern images are provided for camera calibration.
When using this dataset in your research, please cite us: C. Fernández, D. F. Llorca, M. A. Sotelo, I. G. Daza, A. M. Hellín, S. Alvarez, Real-time Vision-based blind spot warning system: experiments with motorcycles in daytime/nighttime conditions, International Journal of Automotive Technology, Vol. 14, Issue 1, 113-122 (2013). Day 1 and day 2 include daytime sequences in urban environments, highways and roundabouts. The day 3 folder contains highway sequences recorded after the Valencia MotoGP Grand Prix 2009. Finally, day 4 includes daytime and nighttime sequences in urban environments, highways and roundabouts.
This software is written in C/C++ and its GUI is designed using Qt. Furthermore, the labelling application requires the OpenCV library to run.
This software is written in C/C++ and its GUI is designed using Qt. Furthermore, the labelling application requires the Point Cloud Library (PCL) to run.
Room E-202
Dpto. Automatica
Escuela Politecnica. Campus Universitario
Ctra. Madrid-Barcelona, Km. 33,600
28805 Alcalá de Henares (Madrid), Spain