High-speed 3D modeling for nuclear reactor environment based on feature extraction results from video images (Contract research); FY2023 Nuclear Energy Science & Technology and Human Resource Development Project
Collaborative Laboratories for Advanced Decommissioning Science; Sapporo University*
The Collaborative Laboratories for Advanced Decommissioning Science (CLADS), Japan Atomic Energy Agency (JAEA), conducted the Nuclear Energy Science & Technology and Human Resource Development Project (hereafter referred to as "the Project") in FY2023. The Project aims to contribute to solving problems in the nuclear energy field, represented by the decommissioning of the Fukushima Daiichi Nuclear Power Station (1F) of Tokyo Electric Power Company Holdings, Inc. (TEPCO). For this purpose, intelligence was collected from all over the world, and basic research and human resource development were promoted by closely integrating and collaborating on knowledge and experience in various fields beyond the barriers of conventional organizations and research fields. Responsibility for the Project was transferred from the Ministry of Education, Culture, Sports, Science and Technology to JAEA beginning with the proposals adopted in FY2018. On this occasion, JAEA established a new research framework in which JAEA-academia collaboration is reinforced and medium-to-long-term research and development and human resource development contributing to the decommissioning are implemented stably and continuously.

Among the proposals adopted in FY2023, this report summarizes the research results of "High-speed 3D modeling for nuclear reactor environment based on feature extraction results from video images" conducted in FY2023. The present study aims to develop, within a specified time, a 3D model of a workspace that maximizes the amount of information, based on features extracted from video taken during surveys of the primary containment vessel and the interior of the reactor building as part of the decommissioning of 1F. In FY2023, we verified effective shooting conditions for photogrammetry-based 3D reconstruction and a deep-learning-based method for extracting features that can generate 3D reconstruction results from a small amount of data within a specified time. In addition, we applied segmentation to the point cloud data extracted from the video and classified the points into parts with instance labels.
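To illustrate the general idea of assigning instance labels to parts of a point cloud, the following minimal sketch (not the study's actual deep-learning segmentation pipeline; the clustering method, function name, and parameters are assumptions for illustration) groups a synthetic point cloud into instances by naive Euclidean connected-component clustering:

```python
import numpy as np

def label_instances(points, radius=0.5):
    """Assign an instance label to each point by grouping points that are
    connected through neighbors closer than `radius` (naive connected
    components). A simple stand-in for learned instance segmentation."""
    n = len(points)
    labels = -np.ones(n, dtype=int)  # -1 means "not yet labeled"
    current = 0
    for seed in range(n):
        if labels[seed] != -1:
            continue
        # breadth-first expansion from the seed point
        queue = [seed]
        labels[seed] = current
        while queue:
            j = queue.pop()
            dist = np.linalg.norm(points - points[j], axis=1)
            for k in np.where((dist < radius) & (labels == -1))[0]:
                labels[k] = current
                queue.append(k)
        current += 1
    return labels

# two well-separated synthetic clusters of 50 points each
rng = np.random.default_rng(0)
pts = np.vstack([rng.random((50, 3)) * 0.2,
                 rng.random((50, 3)) * 0.2 + 5.0])
labels = label_instances(pts)
print(np.unique(labels))  # → [0 1]
```

In a real pipeline, each labeled group would correspond to a structural part of the reactor environment; here the separation threshold alone determines the instances.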