Overview
========

----------

**UASOL** is a new dataset for outdoor depth estimation from single and stereo RGB images. The dataset has been acquired from the point of view of a pedestrian. Currently, the most novel approaches take advantage of deep-learning-based techniques, which have proven to outperform traditional state-of-the-art computer vision methods. Nonetheless, these methods require large amounts of reliable ground truth data. Although several datasets that could be used for depth estimation already exist, almost none of them are outdoor oriented from an egocentric point of view. Our dataset provides a large number of high-definition pairs of color frames and corresponding depth maps captured from a human perspective. In addition, the proposed dataset also features human interaction and great variability of data.

![][1] ![][2] ![][3]

***Figure 1.-** Images of the UASOL dataset. Left image with the corresponding depth (the depth image has been colored to provide a clearer perspective of the details).*

How to download the dataset
------------------------------------------

----------

To download a complete scene from the dataset you have to download all of its .zXY and .zip files; the easiest way is to download the entire scene folder, or just the Images folder. Once all the files have been downloaded, just double-click the .zip file (on Windows) or use the unzip command (on Linux).

**NOTE:** *If any of the .zXY files is missing, the data cannot be extracted, since part of the archive would be missing.*

How is this dataset structured?
------------------------------------------

----------

The dataset is divided into folders, one per sequence. Each folder contains a log file, a manifest file and another folder with the actual RGB color pairs and the corresponding depth maps. The directory tree of the dataset is as follows:

    Dataset:
        Sequence 1
            Log.txt
            Complete.JSON
            Images
                img_0_depth.png
                img_Y_depth.png
                ...
                img_left0_color.png
                img_leftY_color.png
                ...
                img_right0_color.png
                img_rightY_color.png
                ...
        Sequence 1 GC-Net
            img_0_depth.png
            img_Y_depth.png
            ...
            img_left0_color.png
            img_leftY_color.png
            ...
            img_right0_color.png
            img_rightY_color.png
            ...
        Sequence 2
            ...
        ...

The Log file
------------------------------------------

----------

The TXT file named "log" (log.txt) stores the camera settings. This data was obtained using the ZED API. A log file is provided for each sequence. The information provided is listed below.

For each camera:

 - Optical center along x axis (pixels)
 - Optical center along y axis (pixels)
 - Focal length along x axis (pixels)
 - Focal length along y axis (pixels)
 - Vertical field of view after stereo rectification (angle in degrees)
 - Horizontal field of view after stereo rectification (angle in degrees)
 - Diagonal field of view after stereo rectification (angle in degrees)
 - Distortion factor of the right cam before calibration
 - Distortion factor of the right cam after calibration

For each sequence:

 - Confidence threshold
 - Depth min and max range values (millimeters)
 - Resolution of the images (pixels)
 - Camera FPS
 - Frame count

A short sketch showing how the camera intrinsics can be combined with a depth map is given after the manifest description below.

The Manifest File
------------------------------------------

----------

The "manifest" file (complete.json) packs the core information for each sequence. The information provided is listed below:

 - Filename of the left color image
 - Filename of the right color image
 - Filename of the depth map provided by the GC-NET method
 - Translation matrix (3x1)
 - Orientation matrix (3x1)
 - M matrix (4x4), which contains the rotation and translation
 - Timestamp (ms)
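As a quick illustration, the sketch below loads the manifest of one sequence and walks through its per-frame entries. The file path and, especially, the JSON key names used here (`left_image`, `depth_image`, `M`, `timestamp`) are hypothetical placeholders; inspect `complete.json` once to find the actual field names before adapting the snippet.

```python
import json
import numpy as np

# Minimal sketch: read the per-sequence manifest and iterate over its frames.
# CAUTION: the key names below are assumptions, not the documented schema;
# open Complete.JSON and adjust them to the actual fields.
with open("Sequence 1/Complete.JSON") as f:
    manifest = json.load(f)                      # assumed: one entry per stereo frame

for entry in manifest:
    left = entry["left_image"]                   # left color image filename (assumed key)
    depth = entry["depth_image"]                 # GC-Net depth map filename (assumed key)
    pose = np.asarray(entry["M"], dtype=float).reshape(4, 4)  # 4x4 rotation + translation (assumed key)
    print(entry["timestamp"], left, depth, pose[:3, 3])       # timestamp, filenames, translation part
```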
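Likewise, the intrinsics reported in log.txt, together with a depth map, are enough to back-project pixels into 3-D camera coordinates through the standard pinhole model, and the 4x4 M matrix from the manifest can then move those points into the sequence reference frame. The sketch below is only illustrative: the intrinsic values are placeholders to be replaced by the ones in log.txt, and the assumption that the depth PNGs encode millimeters in 16 bits should be checked against the actual files (the log only states that the depth range is expressed in millimeters).

```python
import numpy as np
from PIL import Image

# Intrinsics copied by hand from log.txt; the numbers here are placeholders.
fx, fy = 700.0, 700.0        # focal length along x / y (pixels)
cx, cy = 640.0, 360.0        # optical center along x / y (pixels)

# Depth map of one frame. ASSUMPTION: 16-bit PNG storing depth in millimeters;
# verify this encoding before relying on the result.
depth_mm = np.asarray(Image.open("Sequence 1/Images/img_0_depth.png"), dtype=np.float32)
Z = depth_mm / 1000.0        # millimeters -> meters

# Pinhole back-projection: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy.
v, u = np.indices(Z.shape)
points_cam = np.stack([(u - cx) * Z / fx, (v - cy) * Z / fy, Z], axis=-1).reshape(-1, 3)

# Optionally express the points in the sequence frame using the 4x4 M matrix
# taken from complete.json (identity used here as a stand-in).
M = np.eye(4)
points_seq = points_cam @ M[:3, :3].T + M[:3, 3]
print(points_seq.shape)
```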
The GPS Data
------------------------------------------

----------

Every frame of every sequence has a GPS coordinate assigned, which can be visualized on a real map using the gps2csv.py script contained in the **Code Repository**. The result is shown below:

![][4]

***Figure 2.-** Images of the GPS data provided by the UASOL dataset.*

--------

**[Code Repository][5]**
-------

-----

**[Project Website][6]**
-------

-----

## Any Question? ##

Send us an email at: zbauer@dccia.ua.es

  [1]: https://files.osf.io/v1/resources/64532/providers/osfstorage/5c3b0dece8da0e0019564b01?mode=render
  [2]: https://files.osf.io/v1/resources/64532/providers/osfstorage/5c3b0e477cf3f5001ab5b871?mode=render
  [3]: https://files.osf.io/v1/resources/64532/providers/osfstorage/5c3b0f3c8047080018fe32fc?mode=render
  [4]: https://files.osf.io/v1/resources/64532/providers/osfstorage/5c5418e121303d001a2fbc83?mode=render
  [5]: https://bitbucket.org/rovitlib/uasol-utils
  [6]: http://www.rovit.ua.es/dataset/uasol/