Light-field imaging provides full spatio-angular information about the real world by capturing light rays from various directions. This enables image-processing algorithms that deliver immersive user experiences such as virtual reality (VR). To evaluate and develop reconstruction algorithms, a precise and dense light-field dataset of the real world that can serve as ground truth is desirable. Fraunhofer IIS presents a dataset comprising two scenes captured by an accurate industrial robot with an outward-looking color camera attached to its arm. The arm moves along a cylindrical path covering a field of view of 125 degrees with an angular step size of 0.01 degrees. The images are pre-processed in several steps. At a resolution of 5168×3448 pixels, the disparity between two adjacent views is less than 1.6 pixels, and the parallax between foreground and background objects is less than 0.6 pixels. The dataset is based on the paper "Non-planar inside-out dense light-field dataset and reconstruction pipeline" by Faezeh Sadat Zakeri, Ahmed Durmush, Matthias Ziegler, Michel Bätz, and Joachim Keinert.
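The capture geometry quoted above implies a specific number of camera positions along the cylindrical path. The sketch below derives that count from the stated field of view and angular step; the resulting view total is an inference from these numbers, not a figure given in the source.

```python
# Derive the number of views on the cylindrical path from the
# figures quoted in the dataset description (assumed, not stated).
fov_deg = 125.0   # angular extent covered by the robot arm
step_deg = 0.01   # angular step between adjacent camera positions

num_steps = round(fov_deg / step_deg)  # 12500 angular increments
num_views = num_steps + 1              # fence-post count: 12501 positions

print(num_views)
```

Under these assumptions the robot would record on the order of 12,501 densely spaced views, which is consistent with the sub-pixel disparity between adjacent views reported above.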