Search Results: Records 1-20 displayed on this page of 23

JAEA Reports

High-speed 3D modeling for nuclear reactor environment based on feature extraction results from video images (Contract research); FY2023 Nuclear Energy Science & Technology and Human Resource Development Project

Collaborative Laboratories for Advanced Decommissioning Science; Sapporo University*

JAEA-Review 2025-033, 71 Pages, 2025/11

JAEA-Review-2025-033.pdf:4.48MB

The Collaborative Laboratories for Advanced Decommissioning Science (CLADS), Japan Atomic Energy Agency (JAEA), conducted the Nuclear Energy Science & Technology and Human Resource Development Project (hereafter referred to as "the Project") in FY2023. The Project aims to contribute to solving problems in the nuclear energy field, as represented by the decommissioning of the Fukushima Daiichi Nuclear Power Station (1F) of Tokyo Electric Power Company Holdings, Inc. (TEPCO). For this purpose, knowledge was gathered from around the world, and basic research and human resource development were promoted by closely integrating knowledge and experience from various fields beyond the barriers of conventional organizations and research fields. Sponsorship of the Project was transferred from the Ministry of Education, Culture, Sports, Science and Technology to JAEA beginning with the proposals newly adopted in FY2018. On this occasion, JAEA constructed a new research framework in which JAEA-academia collaboration is reinforced and medium-to-long-term research, development, and human resource development contributing to the decommissioning are implemented stably and continuously. Among the proposals adopted in FY2023, this report summarizes the research results of "High-speed 3D modeling for nuclear reactor environment based on feature extraction results from video images" conducted in FY2023. The present study aims to develop, within a specified time, a 3D model of a workspace that maximizes the amount of information, based on features extracted from video taken when surveying the primary containment vessel and the inside of the reactor building as part of the decommissioning of 1F. In FY2023, we verified a method for extracting effective shooting conditions for photogrammetry-based 3D reconstruction, and a deep-learning-based method for extracting feature values that can generate 3D restoration results from a small amount of data within a specified time.
In addition, we applied segmentation to point cloud data extracted from video and classified the points into parts with instance labels.
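As a hedged illustration of the instance-labeling step, the sketch below groups a point cloud into parts by single-linkage clustering with a distance threshold; the function name and threshold are assumptions for illustration, not the report's actual method.

```python
import numpy as np

def instance_labels(points, radius=0.5):
    """Assign an instance label to each 3D point by grouping points whose
    pairwise distance is below `radius` (single-linkage flood fill)."""
    n = len(points)
    labels = -np.ones(n, dtype=int)
    current = 0
    for seed in range(n):
        if labels[seed] != -1:
            continue
        stack = [seed]
        labels[seed] = current
        while stack:
            i = stack.pop()
            d = np.linalg.norm(points - points[i], axis=1)
            for j in np.where((d < radius) & (labels == -1))[0]:
                labels[j] = current
                stack.append(j)
        current += 1
    return labels

# Two well-separated clusters of points
cloud = np.array([[0, 0, 0], [0.1, 0, 0], [0.2, 0.1, 0],
                  [5, 5, 5], [5.1, 5, 5]])
labels = instance_labels(cloud, radius=0.5)
```

Each connected group of nearby points receives its own label, which is the kind of part-level instance labeling the abstract refers to.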

Journal Articles

Method for creating large datasets for deep learning to improve image depth accuracy

Murayama, Masahiro*; Harazono, Yuki*; Ishii, Hirotake*; Shimoda, Hiroshi*; Taruta, Yasuyoshi

E-Journal of Advanced Maintenance (Internet), 17(2), p.15 - 24, 2025/08

Journal Articles

Deep learning-based bubble detection with Swin Transformer

Uesawa, Shinichiro; Yoshida, Hiroyuki

Journal of Nuclear Science and Technology, 61(11), p.1438 - 1452, 2024/11

Times Cited Count: 4; Percentile: 75.59 (Nuclear Science & Technology)

We developed a deep learning-based bubble detector with a Shifted window Transformer (Swin Transformer) to detect and segment individual bubbles among overlapping bubbles. To verify the performance of the detector, we calculated its average precision (AP) with different numbers of training images. The mask AP increased with the number of training images when there were fewer than 50 images, but remained constant beyond 50 images. The AP for the Swin Transformer and the ResNet were almost the same when there were more than 50 training images; however, when few training images were used, the AP of the Swin Transformer was higher than that of the ResNet. Furthermore, as the void fraction increased, the AP of the Swin Transformer decreased similarly to that of the ResNet; however, for few training images, the AP of the Swin Transformer was higher than that of the ResNet at all void fractions. Moreover, we confirmed that the detector trained with synthetic bubble images was able to segment overlapping and deformed bubbles in a bubbly flow experiment. Thus, we verified that the new bubble detector with the Swin Transformer provides higher AP than the detector with the ResNet for fewer training images.
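The comparison above hinges on average precision. A minimal sketch of how AP is computed from scored detections (all-point integration of the precision-recall curve; the paper's COCO-style mask AP is more involved):

```python
import numpy as np

def average_precision(scores, is_true_positive, num_gt):
    """AP from detection scores: sort by confidence, accumulate
    precision/recall, and integrate precision over recall."""
    order = np.argsort(scores)[::-1]
    tp = np.array(is_true_positive, dtype=float)[order]
    cum_tp = np.cumsum(tp)
    precision = cum_tp / (np.arange(len(tp)) + 1)
    recall = cum_tp / num_gt
    ap, prev_r = 0.0, 0.0
    for p, r in zip(precision, recall):
        ap += p * (r - prev_r)   # area of each recall increment
        prev_r = r
    return ap

scores = [0.9, 0.8, 0.7, 0.6]
hits = [True, True, False, True]   # whether each detection matched a bubble
ap = average_precision(scores, hits, num_gt=3)
```

With this toy input the AP is 11/12: perfect precision over the first two-thirds of recall, then a penalty for the false positive before the last hit.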

Journal Articles

Feasibility study on noise reduction from images using deep learning to improve spatial awareness in remote operation

Tanifuji, Yuta; Hanari, Toshihide; Kawabata, Kuniaki

Proceedings of International Topical Workshop on Fukushima Decommissioning Research 2024 (FDR2024) (Internet), 3 Pages, 2024/10

Journal Articles

Development of a surrogate system of a plant dynamics simulation model and an abnormal situation identification system for nuclear power plants using deep neural networks

Seki, Akiyuki; Yoshikawa, Masanori; Nishinomiya, Ryota*; Okita, Shoichiro; Takaya, Shigeru; Yan, X.

Nuclear Technology, 210(6), p.1003 - 1014, 2024/06

Times Cited Count: 1; Percentile: 27.40 (Nuclear Science & Technology)

Two types of deep neural network (DNN) systems have been constructed to assist the safe operation of a nuclear power plant. One is a surrogate system (SS) that can estimate physical quantities of a nuclear power plant in a computational time several orders of magnitude shorter than that of a physical simulation model. The other is an abnormal situation identification system (ASIS) that can estimate the state of the disturbance causing an anomaly from physical quantities of a nuclear power plant. Both systems are trained and tested using data obtained from the analytical code for in-core and plant dynamics (ACCORD), which reproduces the steady and dynamic behavior of the actual High Temperature Engineering Test Reactor (HTTR) under various scenarios. The DNN models are built by tuning the main hyperparameters. Through these procedures, both systems are shown to perform with a high degree of accuracy.

Journal Articles

Machine learning models for sub-grid-scale (SGS) term

Asahi, Yuichi; Maeyama, Shinya*; Fujii, Keisuke*

Keisan Kogaku Koenkai Rombunshu (CD-ROM), 28, 4 Pages, 2023/05

We have developed a deep-learning model to surrogate the effect of small-scale fluctuations on large-scale fluctuations. We have constructed sub-grid-scale (SGS) models based on the Mori-Zwanzig projection operator method and neural networks. We have performed large eddy simulations (LESs) of the Kuramoto-Sivashinsky turbulence with these SGS models. We have demonstrated that the time-averaged energy spectrum of the LESs agrees with that of the direct numerical simulation (DNS).
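The SGS term such a model learns to predict can be illustrated by filtering a resolved field: for the advective nonlinearity u u_x, the target is tau = filter(u u_x) - filter(u) d/dx filter(u). A hedged numpy sketch, where the top-hat filter and spectral derivative are illustrative choices, not the paper's Mori-Zwanzig construction:

```python
import numpy as np

def box_filter(u, width=5):
    # Periodic top-hat filter: local average over `width` neighbors,
    # applied via circular convolution in Fourier space
    kernel = np.ones(width) / width
    return np.real(np.fft.ifft(np.fft.fft(u) * np.fft.fft(kernel, len(u))))

def ddx(u, L=2 * np.pi):
    # Spectral derivative on a periodic domain of length L
    k = np.fft.fftfreq(len(u), d=L / len(u)) * 2 * np.pi
    return np.real(np.fft.ifft(1j * k * np.fft.fft(u)))

# "DNS" field with a small-scale component at mode 8
x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
u = np.sin(x) + 0.3 * np.sin(8 * x)

# SGS term: filtered nonlinearity minus nonlinearity of the filtered field
ubar = box_filter(u)
tau = box_filter(u * ddx(u)) - ubar * ddx(ubar)
```

Because filtering and the quadratic nonlinearity do not commute, tau is nonzero; a neural-network SGS model is trained to reproduce this residual from the resolved field alone.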

Journal Articles

Attention-based time series analysis for data-driven anomaly detection in nuclear power plants

Dong, F.*; Chen, S.*; Demachi, Kazuyuki*; Yoshikawa, Masanori; Seki, Akiyuki; Takaya, Shigeru

Nuclear Engineering and Design, 404, p.112161_1 - 112161_15, 2023/04

Times Cited Count: 34; Percentile: 99.36 (Nuclear Science & Technology)

Journal Articles

CityTransformer; A Transformer-based model for contaminant dispersion prediction in a realistic urban area

Asahi, Yuichi; Onodera, Naoyuki; Hasegawa, Yuta; Shimokawabe, Takashi*; Shiba, Hayato*; Idomura, Yasuhiro

Boundary-Layer Meteorology, 186(3), p.659 - 692, 2023/03

Times Cited Count: 4; Percentile: 33.26 (Meteorology & Atmospheric Sciences)

We develop a Transformer-based deep learning model to predict the plume concentrations in an urban area under uniform flow conditions. Our model has two distinct input layers: Transformer layers for sequential data and convolutional layers, as in convolutional neural networks (CNNs), for image-like data. The model can predict the plume concentration from realistically available data, such as time series monitoring data at a few observation stations, the building shapes, and the source location. It is shown that the model gives reasonably accurate predictions orders of magnitude faster than CFD simulations. It is also shown that exactly the same model can be applied to predict the source location, again with reasonable prediction accuracy.
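The two-branch input idea can be sketched as follows; the layer sizes, input shapes, and fusion by concatenation are assumptions for illustration, not the published architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d_valid(img, kern):
    # Minimal 2D valid convolution for the image-like branch
    H, W = img.shape
    kh, kw = kern.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kern)
    return out

def self_attention(x):
    # Single-head self-attention for the time-series branch
    d = x.shape[1]
    scores = x @ x.T / np.sqrt(d)
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    return w @ x

# Hypothetical shapes: 16x16 building-shape map, 10 time steps x 4 stations
building_map = rng.normal(size=(16, 16))
station_series = rng.normal(size=(10, 4))

img_feat = conv2d_valid(building_map, rng.normal(size=(3, 3))).ravel()
seq_feat = self_attention(station_series).ravel()
fused = np.concatenate([img_feat, seq_feat])   # fed to a prediction head
```

The key design point the abstract describes is that each input modality gets the layer type suited to it (attention for sequences, convolution for maps) before the features are fused.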

Journal Articles

AMR-Net: Convolutional neural networks for multi-resolution steady flow prediction

Asahi, Yuichi; Hatayama, Sora*; Shimokawabe, Takashi*; Onodera, Naoyuki; Hasegawa, Yuta; Idomura, Yasuhiro

Proceedings of 2021 IEEE International Conference on Cluster Computing (IEEE Cluster 2021) (Internet), p.686 - 691, 2021/10

Times Cited Count: 3; Percentile: 65.14 (Computer Science, Hardware & Architecture)

We develop a convolutional neural network model to predict multi-resolution steady flow. Based on the state-of-the-art image-to-image translation model pix2pixHD, our model can predict the high-resolution flow field from a set of patched signed distance functions. By patching the high-resolution data, the memory requirements of our model are reduced compared to pix2pixHD.
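A minimal sketch of the input representation and patching described above, assuming a simple circular obstacle and non-overlapping tiles (both illustrative choices, not the paper's setup):

```python
import numpy as np

def signed_distance(grid_pts, center, radius):
    # SDF of a circle: negative inside the obstacle, positive outside
    return np.linalg.norm(grid_pts - center, axis=-1) - radius

def patchify(field, patch):
    # Split an (H, W) field into non-overlapping (patch, patch) tiles,
    # keeping per-sample memory bounded as the resolution grows
    H, W = field.shape
    tiles = field.reshape(H // patch, patch, W // patch, patch)
    return tiles.transpose(0, 2, 1, 3).reshape(-1, patch, patch)

n = 64
ys, xs = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n),
                     indexing="ij")
pts = np.stack([xs, ys], axis=-1)
sdf = signed_distance(pts, center=np.array([0.0, 0.0]), radius=0.5)
patches = patchify(sdf, patch=16)   # (16, 16, 16): 16 tiles of 16x16
```

Each tile can then be translated to a flow-field patch independently, which is how patching caps the memory footprint relative to feeding the full high-resolution field.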

Journal Articles

Multi-resolution steady flow prediction with convolutional neural networks

Asahi, Yuichi; Hatayama, Sora*; Shimokawabe, Takashi*; Onodera, Naoyuki; Hasegawa, Yuta; Idomura, Yasuhiro

Keisan Kogaku Koenkai Rombunshu (CD-ROM), 26, 4 Pages, 2021/05

We develop a convolutional neural network model to predict multi-resolution steady flow. Based on the state-of-the-art image-to-image translation model pix2pixHD, our model can predict the high-resolution flow field from the signed distance function. By patching the high-resolution data, the memory requirements of our model are reduced compared to pix2pixHD.

Journal Articles

A Learning data collection using a simulator for point cloud based identification system

Tanifuji, Yuta; Kawabata, Kuniaki

Proceedings of International Workshop on Nonlinear Circuits, Communications and Signal Processing (NCSP 2020) (Internet), p.246 - 249, 2020/02

Journal Articles

Development of a GUI-based operation system for building a 3D point cloud classifier

Tanifuji, Yuta; Kawabata, Kuniaki; Hanari, Toshihide

Proceedings of 2019 IEEE Region Ten Conference (TENCON 2019) (Internet), p.36 - 40, 2019/10

Journal Articles

A Structure discrimination method by deep learning with point cloud data

Tanifuji, Yuta; Kawabata, Kuniaki

Proceedings of International Topical Workshop on Fukushima Decommissioning Research (FDR 2019) (Internet), 4 Pages, 2019/05

Oral presentation

Generating observation-guided ensembles for data assimilation with denoising diffusion probabilistic model

Asahi, Yuichi; Hasegawa, Yuta; Onodera, Naoyuki; Shimokawabe, Takashi*; Shiba, Hayato*; Idomura, Yasuhiro

no journal

We present a data assimilation (DA) method using pseudo-ensembles generated by a denoising diffusion probabilistic model. Since the model is trained against noisy and sparse observation data, the method can produce reasonable ensembles consistent with observations. The method shows better performance than a well-established DA method when the simulation model is imperfect.
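One way ensembles feed a DA update is the stochastic ensemble Kalman filter (EnKF) analysis step sketched below. This is a standard textbook update used purely for illustration: here the ensemble is a random perturbation of the truth, whereas the presented method would supply diffusion-model samples instead.

```python
import numpy as np

rng = np.random.default_rng(1)

def enkf_update(ensemble, obs, obs_idx, obs_err):
    """Stochastic EnKF analysis: nudge each member toward a perturbed
    observation using the ensemble covariance."""
    X = ensemble                            # (members, state_dim)
    HX = X[:, obs_idx]                      # observed components
    A = X - X.mean(axis=0)
    HA = HX - HX.mean(axis=0)
    m = X.shape[0]
    P_hh = HA.T @ HA / (m - 1) + obs_err**2 * np.eye(len(obs_idx))
    P_xh = A.T @ HA / (m - 1)
    K = P_xh @ np.linalg.inv(P_hh)          # Kalman gain (state_dim, n_obs)
    perturbed = obs + obs_err * rng.normal(size=HX.shape)
    return X + (perturbed - HX) @ K.T

truth = np.array([1.0, 2.0, 3.0, 4.0])
ens = truth + rng.normal(scale=1.0, size=(50, 4))   # stand-in ensemble
obs_idx = [0, 2]                                    # sparse observations
analysis = enkf_update(ens, truth[obs_idx], obs_idx, obs_err=0.1)
```

With accurate observations (obs_err much smaller than the ensemble spread), the analysis collapses tightly onto the observed components, which is why ensemble quality drives DA performance.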

Oral presentation

Targeting exa-scale systems; Performance portability and scalable data analysis

Asahi, Yuichi; Maeyama, Shinya*; Bigot, J.*; Garbet, X.*; Grandgirard, V.*; Obrejan, K.*; Padioleau, T.*; Fujii, Keisuke*; Shimokawabe, Takashi*; Watanabe, Tomohiko*; et al.

no journal

We will demonstrate a performance-portable implementation of a kinetic plasma code on CPUs and on NVIDIA and AMD GPUs. We will also discuss the performance portability of the code with C++ parallel algorithms. Deep-learning-based surrogate models for fluid simulations will also be demonstrated.

Oral presentation

Targeting exa-scale systems; Performance portability and scalable data analysis

Asahi, Yuichi; Maeyama, Shinya*; Bigot, J.*; Garbet, X.*; Grandgirard, V.*; Obrejan, K.*; Padioleau, T.*; Fujii, Keisuke*; Shimokawabe, Takashi*; Watanabe, Tomohiko*; et al.

no journal

We will demonstrate a performance-portable implementation of a kinetic plasma code on CPUs and on NVIDIA and AMD GPUs. We will also discuss the performance portability of the code with C++ parallel algorithms. Deep-learning-based surrogate models for fluid simulations will also be demonstrated.

Oral presentation

Development of AI system to support safety operation of nuclear power plants

Seki, Akiyuki; Yoshikawa, Masanori; Okita, Shoichiro; Takaya, Shigeru; Yan, X.

no journal

Oral presentation

Development of a deep learning model for predicting plume concentrations in the urban area

Asahi, Yuichi; Onodera, Naoyuki; Hasegawa, Yuta; Idomura, Yasuhiro

no journal

We have developed a convolutional neural network (CNN) model to predict the plume concentrations in an urban area under uniform flow conditions. By combining Transformer or multilayer perceptron (MLP) layers with the CNN model, our model can predict the plume concentrations from the building shapes, the release points of the plume, and time series data at observation stations. It is also shown that exactly the same model can be applied to predict the source location, again with reasonable prediction accuracy.

Oral presentation

Integrating deep learning-based object detection and optical character recognition for automatic extraction of link information from piping and instrumentation diagrams

Dong, F.*; Chen, S.*; Demachi, Kazuyuki*; Hashidate, Ryuta; Takaya, Shigeru

no journal, , 

Piping and instrumentation diagrams (P&IDs) contain information about the piping and process equipment together with the instrumentation and control devices, which is essential to the design and management of nuclear power plants (NPPs). P&IDs contain abundant complex objects, and the distribution of these objects and their link information is imbalanced across different diagrams. Therefore, the content of P&IDs is generally extracted and analyzed manually, which is time-consuming and error-prone. To address these issues efficiently, we integrate state-of-the-art deep learning-based object detection and optical character recognition (OCR) models to automatically extract link information from P&IDs. In addition, we propose a novel image pre-processing approach using sliding windows to detect small, low-resolution objects. The performance of the proposed approach was evaluated experimentally, and the results demonstrate that it is capable of extracting link information from P&IDs of NPPs.
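The sliding-window pre-processing can be sketched as follows; the window size, stride, and the coordinate-mapping helper are assumptions for illustration, not the presented pipeline:

```python
import numpy as np

def sliding_windows(h, w, win, stride):
    """Top-left corners of overlapping windows covering an h x w page.
    Overlap keeps small symbols intact at window borders; the final
    window in each direction is snapped to the page edge."""
    ys = list(range(0, h - win + 1, stride))
    xs = list(range(0, w - win + 1, stride))
    if ys[-1] != h - win:
        ys.append(h - win)
    if xs[-1] != w - win:
        xs.append(w - win)
    return [(y, x) for y in ys for x in xs]

def to_global(box, origin):
    # Map a window-local box (y0, x0, y1, x1) back to page coordinates
    oy, ox = origin
    y0, x0, y1, x1 = box
    return (y0 + oy, x0 + ox, y1 + oy, x1 + ox)

# Hypothetical 1000x1400 diagram scanned with 512-pixel windows
wins = sliding_windows(1000, 1400, win=512, stride=384)
local_box = (10, 20, 50, 60)          # a symbol detected inside one window
global_box = to_global(local_box, wins[3])
```

Each window is run through the detector at full resolution, so small symbols occupy more pixels than they would in a downscaled whole-page pass; detections are then mapped back to page coordinates and deduplicated across overlaps.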
