Integration of 3D environment models generated from sections of an image sequence based on the consistency of the estimated camera trajectories
松本 拓 ; 羽成 敏秀 ; 川端 邦明 ; 八代 大*; 中村 啓太*
Matsumoto, Taku; Hanari, Toshihide; Kawabata, Kuniaki; Yashiro, Hiroshi*; Nakamura, Keita*
This paper describes a method for integrating Three-Dimensional (3D) environment models reconstructed from image sequences, with the aim of reducing the computation time of photogrammetry-based 3D environment modeling, which estimates camera poses and simultaneously reconstructs a 3D environment model from images. Such modeling is time-consuming when many images are used, because corresponding points between the images must be found by feature matching. We therefore assume that the computation time can be reduced by dividing an image sequence into sections and reconstructing a 3D environment model from each section, since fewer images are then processed at a time. However, integrating the resulting 3D environment models is difficult: their scales may differ, and the overlapping regions between the models are small. In this paper, we propose a method that integrates the 3D environment models based on the camera trajectories corresponding to the images shared between adjacent sections. To integrate the models, transformation parameters are calculated from the poses along the camera trajectories of the two models, and the transformed camera trajectory is then aligned by coarse and fine registration. Compared with 3D environment modeling that processes the whole image sequence in a single batch, the proposed method reduced the computation time while reconstructing a comparable integrated model.
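Because each section is reconstructed only up to an arbitrary scale, the transformation between two partial models must include scale as well as rotation and translation. A minimal sketch of how such parameters could be estimated from the positions of the cameras shared by both sections, assuming a least-squares similarity alignment in the style of Umeyama's method (the function name and data here are illustrative, not the paper's implementation):

```python
import numpy as np

def estimate_similarity_transform(src, dst):
    """Estimate scale s, rotation R, and translation t that map the
    Nx3 camera positions `src` onto `dst` in a least-squares sense
    (similarity alignment, Umeyama-style)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    xs, xd = src - mu_s, dst - mu_d          # center both point sets
    cov = xd.T @ xs / len(src)               # cross-covariance matrix
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1                         # avoid reflections
    R = U @ S @ Vt                           # optimal rotation
    var_src = (xs ** 2).sum() / len(src)     # variance of source points
    s = np.trace(np.diag(D) @ S) / var_src   # optimal scale
    t = mu_d - s * R @ mu_s                  # optimal translation
    return s, R, t
```

Applying the recovered `s`, `R`, `t` to one model's points and trajectory would give the coarse alignment, which could then be refined by a fine registration step such as ICP, as the abstract describes.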