The SYNTHIA Dataset: A Large Collection of Synthetic Images for Semantic Segmentation of Urban Scenes
Our paper has been accepted at CVPR 2016 (spotlight presentation!).
The data subset SYNTHIA-Rand used in our CVPR paper, a dataset with more than 13,000 random driving images, is now available.
German Ros, Laura Sellart, Joanna Materzynska, David Vazquez, Antonio M. Lopez; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 3234-3243
The future of autonomous cars: understanding the city with the use of videogames
Researchers of the Computer Vision Center in Barcelona have created a new virtual world with the aim of teaching autonomous driving vehicles to see and comprehend a city.
Currently, autonomous vehicles, such as the Google Car or Tesla models, need to develop a “core intelligence” that allows them to visually identify and recognize different elements, such as roads, sidewalks, buildings, and pedestrians. In short: to see and understand a road as humans do. The project is led by researcher Germán Ros together with Dr. Antonio M. López, both from the Computer Vision Center in Barcelona.
As Germán Ros puts it: “These vehicles need artificial intelligence (AI) to understand what is happening around them. This is achieved by building artificial systems that simulate the structure and functioning of human neuronal connections. Our new simulator, SYNTHIA, is a huge step forward in this process.”
SYNTHIA (short for ‘System of synthetic images’) accelerates and improves the way artificial intelligence systems learn to understand the city and its elements, a significant advance on one of the major challenges in this field. The data generated by the simulator will be released openly to the scientific community in Las Vegas at the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). With this release, the researchers aim to spur scientific progress in areas such as artificial intelligence and autonomous driving.
Until now, the main limitation in the development of artificial intelligence has been the large volume of data and human work required for AI systems to learn complex visual concepts under diverse conditions (for example, the difference between the road and the sidewalk on a rainy day): a tedious and expensive process requiring many hours of human supervision.
SYNTHIA is therefore a revolution. It uses a virtual simulator to generate training data for artificial intelligence simply and automatically, with no human intervention. Thanks to this advance, the typical limitations of human work (time and errors) are left behind, making the process much cheaper and opening the door to the development of more sophisticated and safer systems for autonomous driving.
*SYNTHIA: http://synthia-dataset.net/
More info: acanet@cvc.uab.es
93 581 30 73
The Virtual/Augmented Reality for Visual Artificial Intelligence (VARVAI2016) workshop will be held at ECCV 2016 in Amsterdam. Follow this link for more information.
The TASK-CV: Transferring and Adapting Source Knowledge in Computer Vision workshop will be held at ECCV 2016 in Amsterdam. Follow this link for more information.
We have changed the download procedure. When you request a sequence, you will receive an email with a link to a txt file containing the download links for that sequence. The links point to .rar file parts, which you can download directly or with your favorite download manager. Once downloaded, extract just the first part and it will automatically extract the remaining parts.
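The steps above can be sketched in a short shell script. This is a minimal sketch, not an official helper: the file name `links.txt` and the archive name shown in the comments are assumptions, standing in for whatever the email actually delivers.

```shell
# Hypothetical sketch of the download procedure described above.
# Assumes the emailed link list has been saved as links.txt, one URL per line.

# download_all reads one URL per line from the given file and fetches each part.
download_all() {
  while read -r url; do
    curl -L -O "$url"   # -L follows redirects, -O keeps the remote file name
  done < "$1"
}

# Usage, after saving the emailed link list:
#   download_all links.txt
#
# Extraction: unrar only needs the first part; it finds the rest automatically:
#   unrar x SYNTHIA-SEQS.part1.rar   # archive name is an assumption
```

Any download manager that accepts a list of URLs works just as well; the loop simply automates fetching each `.rar` part in order.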
The dataset SYNTHIA-SF used in our BMVC’17 paper is now available: a dataset with more than 2,000 driving images with Cityscapes-compatible semantic labels.
The paper ‘Slanted Stixels: Representing San Francisco’s Steepest Streets’ has been awarded the Best Industrial Paper Award at the British Machine Vision Conference 2017 in London. This paper is the result of work produced by CVC, UAB and Daimler, more specifically by the authors Daniel Hernández, Lukas Schneider, Dr. Antonio Espinosa, Dr. David Vázquez, Dr. Uwe Franke, Dr. Marc Pollefeys and Dr. Juan C. Moure. The paper was presented in an oral session on 5 September at BMVC 2017; it introduces a novel compact scene representation based on stixels that infers both geometric and semantic information. Congratulations to all the authors!