Search Results: Records 1-20 of 59 displayed on this page

Journal Articles

Hybrid data assimilation methods for nuclear-data-induced uncertainties

Maruyama, Shuhei; Yamamoto, Akio*; Endo, Tomohiro*

Journal of Nuclear Science and Technology, 14 Pages, 2025/00

 Times Cited Count:0

Journal Articles

Quantifying uncertainty induced by scattering angle distribution using maximum entropy method

Maruyama, Shuhei; Yamamoto, Akio*; Endo, Tomohiro*

Annals of Nuclear Energy, 205, p.110591_1 - 110591_13, 2024/09

 Times Cited Count:1 Percentile:0.00(Nuclear Science & Technology)

Journal Articles

Neutron importance estimation via new recursive Monte Carlo method for deep penetration neutron transport

Tuya, D.; Nagaya, Yasunobu

Nuclear Science and Engineering, 198(5), p.1021 - 1035, 2024/05

 Times Cited Count:1 Percentile:25.62(Nuclear Science & Technology)

In Monte Carlo neutron transport calculations for local response or deep penetration problems, some estimate of an importance function is generally required to improve their efficiency. In this work, a new recursive Monte Carlo (RMC) method, partly based on the original RMC method, has been developed for estimating an importance function for local variance-reduction (i.e., source-detector type) problems. The new RMC method has been applied to two sample problems with varying degrees of neutron penetration, namely a one-dimensional iron slab problem and a three-dimensional concrete-air problem. Biased Monte Carlo calculations with variance reduction parameters based on the importance functions obtained by the new RMC method have been performed to estimate the detector responses in these problems. The obtained results agree with those of the reference unbiased Monte Carlo calculations. Furthermore, the biased calculations offered an increase in efficiency on the order of 1 to 10$$^{4}$$ in terms of the figure of merit (FOM). The results also indicated that the efficiency gain increased as the neutron penetration became deeper.
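The FOM gain from importance-based biasing can be illustrated with a toy deep-penetration problem. The sketch below is a hypothetical stand-in (plain exponential-transform importance sampling on a purely absorbing slab), not the recursive Monte Carlo method of the paper; the slab thickness and stretching factor are assumed.

```python
# Hypothetical toy (not the paper's RMC method): importance sampling for a
# deep-penetration transmission estimate, scored with the figure of merit
# FOM = 1 / (R^2 * T), where R is the relative standard error and T the CPU time.
import time
import numpy as np

rng = np.random.default_rng(0)
tau = 7.0            # slab optical thickness in mean free paths; exact answer exp(-tau)
n = 200_000

def fom(scores, elapsed):
    mean = scores.mean()
    rel_err = scores.std(ddof=1) / np.sqrt(len(scores)) / mean
    return 1.0 / (rel_err**2 * elapsed), mean, rel_err

# Analog game: sample free paths from Exp(1); score 1 if the particle crosses the slab.
t0 = time.perf_counter()
analog = (rng.exponential(1.0, n) > tau).astype(float)
print("analog (FOM, mean, rel.err):", fom(analog, time.perf_counter() - t0))

# Biased game: stretch the path distribution toward the detector and carry the
# statistical weight f(x)/g(x), a crude stand-in for an importance-based scheme.
k = tau              # stretching factor (assumed tuning choice)
t0 = time.perf_counter()
x = rng.exponential(k, n)
weight = k * np.exp(-x * (1.0 - 1.0 / k))
biased = np.where(x > tau, weight, 0.0)
print("biased (FOM, mean, rel.err):", fom(biased, time.perf_counter() - t0))

print("exact transmission:", np.exp(-tau))
```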

Journal Articles

A Preliminary uncertainty analysis of PWR depletion numerical test problem on OECD/NEA/NSC LWR-UAM benchmark phase II based on JENDL-5

Fujita, Tatsuya

Proceedings of Best Estimate Plus Uncertainty International Conference (BEPU 2024) (Internet), 14 Pages, 2024/05

An uncertainty analysis of the PWR depletion test problem of the OECD/NEA/NSC LWR-UAM benchmark Phase II was performed with JENDL-5 as a preliminary investigation. Random sampling was used to quantify the uncertainties of k-infinity and nuclide inventories: the cross sections (XSs), fission product yields (FPYs), decay constants, and decay branch ratios were randomly perturbed, and multiple SERPENT 2.2.1 calculations were performed. XSs in the ACE file were perturbed by the ACE file perturbation tool using FRENDY with the 56-group covariance matrix generated by NJOY2016.72. The perturbation of the independent FPYs was evaluated using the FPY covariance matrix provided in JENDL-5, and the perturbed cumulative FPYs were reconstructed from the relationship between independent and cumulative FPYs. The decay constant was perturbed independently for each nuclide. To perturb the decay branch ratios, a covariance matrix was generated by applying the generalized least squares method, and the branch ratios were randomly perturbed based on this covariance matrix in the same manner as the independent FPYs. In general, the influence of the decay data was an order of magnitude smaller than the influences of the XS and FPY uncertainties. For the uncertainties of k-infinity and the transuranic nuclide inventories, the influence of the XS uncertainty was dominant, and those of the FPY and decay data uncertainties were one or a few orders of magnitude smaller. On the other hand, for the uncertainties of the fission product (FP) nuclide inventories, the influence of the FPY uncertainty was comparable to or larger than that of the XS uncertainty, and the relative importance of the XS and FPY uncertainties differed from one FP nuclide to another. In future studies, the influence of the XS uncertainty of the FP nuclides will be discussed, because such data were not prepared in JENDL-5 and were not considered in the present paper.
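The random-sampling workflow described above can be sketched generically: draw correlated perturbations from a covariance matrix, push each sample through the (here surrogate) calculation, and take sample statistics. The covariance model, sensitivity profile, and nominal k-infinity below are assumptions for illustration, not values from the paper.

```python
# Minimal sketch of random-sampling-based uncertainty propagation (hypothetical toy
# model, not the SERPENT/FRENDY workflow of the paper): draw correlated relative
# perturbations of group cross sections and propagate them through a surrogate model.
import numpy as np

rng = np.random.default_rng(1)
n_groups = 56

# Assumed relative covariance: 2% standard deviation with exponential correlation.
std = np.full(n_groups, 0.02)
corr = np.exp(-np.abs(np.subtract.outer(np.arange(n_groups), np.arange(n_groups))) / 10.0)
chol = np.linalg.cholesky(np.outer(std, std) * corr)

def k_inf(rel_perturbation):
    """Surrogate response: stand-in for a perturbed-ACE-file transport calculation."""
    sensitivity = np.linspace(0.05, -0.05, n_groups)   # assumed sensitivity profile
    return 1.30 * (1.0 + sensitivity @ rel_perturbation)

n_samples = 500
samples = np.array([k_inf(chol @ rng.standard_normal(n_groups)) for _ in range(n_samples)])
print(f"k-infinity = {samples.mean():.5f} +/- {samples.std(ddof=1):.5f} (1 sigma)")
```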

Journal Articles

A Comparative study of efficient sampling techniques for uncertainty quantification due to cross-section covariance data

Fujita, Tatsuya

Proceedings of International Conference on Physics of Reactors (PHYSOR 2024) (Internet), p.718 - 727, 2024/04

The convergence of the k-infinity uncertainty during random-sampling-based uncertainty quantification was compared among several efficient sampling techniques. The k-infinity uncertainty was evaluated by statistically processing repeated SERPENT 2.2.1 calculations using perturbed ACE files based on JENDL-5 cross-section covariance data. The present paper focuses on antithetic sampling (AS), Latin hypercube sampling (LHS), control variates (CV), and combinations of them. In a PWR-UO$$_{2}$$ fuel assembly geometry without nuclide depletion, AS and LHS showed more efficient convergence than nominal sampling without any efficient sampling technique, as discussed in past studies. Although a stand-alone application of CV did not have a large impact on the convergence of the k-infinity uncertainty, its performance was improved in combination with AS, as discussed in a past study. In addition, a new combined approach of LHS and CV (CV+LHS) was proposed in the present paper. CV+LHS improved the convergence of the k-infinity uncertainty and was more efficient than CV+AS; the main reason is that LHS enhances the convergence of the mean value of the alternative parameters used in CV. Consequently, this study proposed the new combined approach CV+LHS and confirmed its efficiency for random-sampling-based uncertainty quantification in the PWR-UO$$_{2}$$ fuel assembly geometry. Its applicability to nuclide-depletion calculations will be confirmed in future studies.
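As a generic illustration of the techniques compared above (on a cheap analytic surrogate rather than SERPENT calculations), the following sketch estimates a mean with plain Monte Carlo, AS, LHS, and CV and compares the scatter of the estimates over repeated trials; the test function and the control variate are assumptions.

```python
# Toy comparison of plain Monte Carlo, antithetic sampling (AS), Latin hypercube
# sampling (LHS), and control variates (CV) for estimating E[g(U)], U ~ Uniform(0,1)^d.
import numpy as np
from scipy.stats import qmc

rng = np.random.default_rng(2)
d, n, n_rep = 4, 128, 200

def g(u):
    # assumed smooth surrogate response with a mild nonlinearity
    return np.exp(0.3 * u.sum(axis=1)) + 0.1 * np.sin(2.0 * np.pi * u[:, 0])

plain, anti, lhs, cv = [], [], [], []
for rep in range(n_rep):
    plain.append(g(rng.random((n, d))).mean())

    u_half = rng.random((n // 2, d))                       # AS: pair u with 1 - u
    anti.append(0.5 * (g(u_half).mean() + g(1.0 - u_half).mean()))

    lhs.append(g(qmc.LatinHypercube(d=d, seed=rng).random(n)).mean())

    u = rng.random((n, d))                                 # CV: h(u) = sum(u), E[h] = d/2
    y, h = g(u), u.sum(axis=1)
    beta = np.cov(y, h)[0, 1] / np.var(h, ddof=1)
    cv.append(y.mean() - beta * (h.mean() - d / 2.0))

for name, est in (("plain MC", plain), ("antithetic", anti),
                  ("LHS", lhs), ("control variates", cv)):
    print(f"{name:16s} scatter of the mean estimate: {np.std(est, ddof=1):.5f}")
```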

Journal Articles

Application of quasi-Monte Carlo and importance sampling to Monte Carlo-based fault tree quantification for seismic probabilistic risk assessment of nuclear power plants

Kubo, Kotaro; Tanaka, Yoichi; Hakuta, Yuto*; Arake, Daisuke*; Uchiyama, Tomoaki*; Muramatsu, Ken

Mechanical Engineering Journal (Internet), 10(4), p.23-00051_1 - 23-00051_17, 2023/08

The significance of probabilistic risk assessments (PRAs) of nuclear power plants against external events was re-recognized after the Fukushima Daiichi Nuclear Power Plant accident. Regarding seismic PRA, handling correlated failures of systems, components, and structures (SSCs) is very important because this type of failure negatively affects the redundancy of accident mitigation systems. The Japan Atomic Energy Research Institute initially developed a fault tree quantification methodology named the direct quantification of fault tree using Monte Carlo simulation (DQFM) to handle correlated SSC failures in a detailed and realistic manner. The methodology quantifies the top-event occurrence probability by Monte Carlo sampling of correlated uncertainties in seismic responses and capacities. The usefulness of DQFM has already been demonstrated; however, improving its computational efficiency would allow risk analysts to perform a larger number of analyses. We therefore applied quasi-Monte Carlo and importance sampling to the DQFM calculation of a simplified seismic PRA and examined their effects. Specifically, the conditional core damage probability of a hypothetical pressurized water reactor was analyzed under some assumptions. Applying quasi-Monte Carlo sampling accelerates the convergence of the results at intermediate and high ground motion levels by an order of magnitude over Monte Carlo sampling. The application of importance sampling allows us to obtain a statistically significant result at a low ground motion level, which cannot be obtained through Monte Carlo or quasi-Monte Carlo sampling. These results indicate that the applications provide a notable acceleration of computation and raise the potential for the practical use of DQFM in risk-informed decision-making.
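The core idea of DQFM, evaluating the fault-tree logic sample by sample on correlated response and capacity draws, can be sketched on a tiny hypothetical tree; the fragility parameters, correlation matrix, and gate structure below are invented for illustration and are not the paper's model.

```python
# Minimal DQFM-style sketch (toy model, not the JAEA code): quantify a fault-tree
# top event by sampling correlated component failures and evaluating the gate logic
# directly, instead of computing minimal cut set probabilities analytically.
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# Assumed lognormal seismic responses/capacities for three components (A, B, C),
# with correlated responses (e.g., components on the same floor).
median_resp = np.array([0.6, 0.6, 0.5])
median_cap = np.array([1.0, 1.0, 1.2])
beta_resp, beta_cap = 0.3, 0.4                # log-standard deviations
corr = np.array([[1.0, 0.8, 0.3],
                 [0.8, 1.0, 0.3],
                 [0.3, 0.3, 1.0]])             # response correlation matrix

z = rng.multivariate_normal(np.zeros(3), corr, size=n)
response = median_resp * np.exp(beta_resp * z)
capacity = median_cap * np.exp(beta_cap * rng.standard_normal((n, 3)))
fails = response > capacity                    # component failure indicators

# Top event: (A AND B) OR C -- evaluated sample by sample, as in DQFM
top = (fails[:, 0] & fails[:, 1]) | fails[:, 2]
print(f"top event probability ~ {top.mean():.4f} +/- {top.std(ddof=1)/np.sqrt(n):.4f}")
```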

Journal Articles

Sensitivity coefficient evaluation of an accelerator-driven system using ROM-Lasso method

Katano, Ryota; Yamamoto, Akio*; Endo, Tomohiro*

Nuclear Science and Engineering, 196(10), p.1194 - 1208, 2022/10

 Times Cited Count:1 Percentile:14.76(Nuclear Science & Technology)

In this study, we propose the ROM-Lasso method, which enables efficient evaluation of the sensitivity coefficients of neutronics parameters to cross sections. In the proposed method, the vector of sensitivity coefficients is expanded in subspace bases, the so-called active subspace (AS), based on the idea of reduced-order modeling (ROM). The expansion coefficients are then evaluated by Lasso linear regression between the cross sections and the neutronics parameters obtained by random sampling. Because the method uses only forward calculations, it can be applied in cases where the adjoint method is difficult to apply. In addition, the AS is an effective subspace that can represent the vector of sensitivity coefficients with a lower number of dimensions; the number of unknowns is thus reduced from the original number of input parameters, and the calculation cost is dramatically reduced compared to Lasso regression without the AS. In this paper, we conducted ADS burnup calculations as a verification and showed how the AS bases are obtained and the applicability of the proposed method.
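A reduced illustration of the regression step, estimating a sparse sensitivity vector from forward random-sampling data alone with a lasso penalty, is given below; the active-subspace expansion that gives ROM-Lasso its efficiency is omitted, and all dimensions and values are assumed.

```python
# Hypothetical sketch of sensitivity-coefficient estimation by penalized linear
# regression on random samples (forward calculations only).
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(4)
n_inputs, n_samples = 200, 60                        # more inputs than samples

true_sens = np.zeros(n_inputs)
true_sens[:10] = rng.normal(0.0, 0.5, 10)            # assumed sparse "true" sensitivities

dx = rng.normal(0.0, 0.01, (n_samples, n_inputs))    # relative cross-section perturbations
dy = dx @ true_sens + rng.normal(0.0, 1e-4, n_samples)  # perturbed neutronics parameter

model = Lasso(alpha=1e-5, fit_intercept=False, max_iter=50_000).fit(dx, dy)
err = np.linalg.norm(model.coef_ - true_sens) / np.linalg.norm(true_sens)
print(f"relative error of the recovered sensitivity vector: {err:.3f}")
```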

Journal Articles

A Scoping study on the use of direct quantification of fault tree using Monte Carlo simulation in seismic probabilistic risk assessments

Kubo, Kotaro; Fujiwara, Keita*; Tanaka, Yoichi; Hakuta, Yuto*; Arake, Daisuke*; Uchiyama, Tomoaki*; Muramatsu, Ken*

Proceedings of 29th International Conference on Nuclear Engineering (ICONE 29) (Internet), 8 Pages, 2022/08

After the Fukushima Daiichi Nuclear Power Plant accident, the importance of conducting probabilistic risk assessments (PRAs) of external events, especially seismic activities and tsunamis, was recognized. The Japan Atomic Energy Agency has been developing a computational methodology for seismic PRA called the direct quantification of fault tree using Monte Carlo simulation (DQFM). When appropriate correlation matrices are available for the seismic responses and capacities of components, DQFM makes it possible to consider the effect of correlated failures of components connected through AND and/or OR gates in fault trees, which is practically difficult when analytical solutions or multidimensional numerical integrations are used to obtain minimal cut set probabilities. The usefulness of DQFM has already been demonstrated. Nevertheless, a reduction of its computational time would allow the large number of analyses required in PRAs conducted by regulators and/or operators. We therefore performed scoping calculations using three different approaches, namely quasi-Monte Carlo sampling, importance sampling, and parallel computing, to improve the calculation efficiency. These approaches were applied when calculating the conditional core damage probability of a simplified PRA model of a pressurized water reactor with the DQFM method. The results indicated that quasi-Monte Carlo sampling works well at the assumed medium and high ground motion levels, that importance sampling is suitable for the assumed low ground motion level, and that parallel computing enables practical uncertainty and importance analyses. The combined implementation of these improvements in a PRA code is expected to provide a significant acceleration of computation and offers the prospect of the practical use of DQFM in risk-informed decision-making.
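To complement the DQFM sketch after the earlier abstract, the role of importance sampling at a low ground motion level can be shown on a one-component toy: shift the sampling distribution onto the rare failure region and reweight by the likelihood ratio. The lognormal margin parameters are assumed.

```python
# Toy sketch of importance sampling for a small failure probability, the regime
# that defeats plain Monte Carlo at low ground motion levels (hypothetical
# lognormal fragility margin, not the paper's PRA model).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
n = 50_000
mu, sigma = 2.0, 0.45            # assumed mean and std of ln(capacity/response)

# Plain MC: failures are events with ln-margin < 0, far rarer than 1/n here.
m = rng.normal(mu, sigma, n)
p_mc = (m < 0.0).mean()

# Importance sampling: sample from a distribution shifted onto the failure region
# and reweight each sample by the likelihood ratio f(m)/g(m).
m_is = rng.normal(0.0, sigma, n)
lr = np.exp((2.0 * m_is * mu - mu**2) / (2.0 * sigma**2))
p_is = np.where(m_is < 0.0, lr, 0.0)

print("exact     :", norm.cdf(-mu / sigma))
print("plain MC  :", p_mc)
print("importance:", p_is.mean(), "+/-", p_is.std(ddof=1) / np.sqrt(n))
```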

Journal Articles

Dynamic probabilistic risk assessment of nuclear power plants using multi-fidelity simulations

Zheng, X.; Tamaki, Hitoshi; Sugiyama, Tomoyuki; Maruyama, Yu

Reliability Engineering & System Safety, 223, p.108503_1 - 108503_12, 2022/07

 Times Cited Count:20 Percentile:82.18(Engineering, Industrial)

Journal Articles

Adjoint-weighted correlated sampling for $$k$$-eigenvalue perturbation in Monte Carlo calculation

Tuya, D.; Nagaya, Yasunobu

Annals of Nuclear Energy, 169, p.108919_1 - 108919_9, 2022/05

 Times Cited Count:4 Percentile:53.26(Nuclear Science & Technology)

Estimating the effect of a perturbation in a fissile system on its $$k$$-eigenvalue requires a special technique, called perturbation theory, when the perturbation is small. In this study, we develop an adjoint-weighted correlated sampling (AWCS) method based on the exact perturbation theory, without any approximation, by combining the correlated sampling (CS) method with the iterated-fission-probability (IFP) based adjoint-weighting method. Because the CS method provides very small uncertainties for small perturbations and the IFP-based adjoint weighting is well suited to continuous-energy Monte Carlo calculations, the developed AWCS method offers a new rigorous approach for perturbation calculations. The results obtained with the AWCS method for verification problems involving density perturbations of Godiva and a simplified STACY showed good agreement with the reference calculations.
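The variance advantage of correlated sampling for small perturbations can be seen on a toy estimator; the surrogate tally and perturbation size below are assumptions, and the adjoint weighting of the AWCS method is not reproduced.

```python
# Toy illustration of correlated sampling (not the AWCS method itself): estimating a
# small difference between an unperturbed and a perturbed quantity with the same
# random histories gives a much smaller variance than two independent runs.
import numpy as np

rng = np.random.default_rng(6)
n = 100_000
eps = 1e-3                        # assumed small density-like perturbation

def response(x, perturbation=0.0):
    # surrogate tally: stand-in for a k-eigenvalue estimate
    return np.exp(-(1.0 + perturbation) * x)

x = rng.exponential(1.0, n)
correlated = response(x, eps) - response(x)                               # same histories
independent = response(rng.exponential(1.0, n), eps) - response(rng.exponential(1.0, n))

for name, d in (("correlated", correlated), ("independent", independent)):
    print(f"{name:11s} delta = {d.mean():+.2e} +/- {d.std(ddof=1)/np.sqrt(n):.2e}")
```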

Journal Articles

Proposal and application of ROM-Lasso method for sensitivity coefficient evaluation

Katano, Ryota; Yamamoto, Akio*; Endo, Tomohiro*

Proceedings of International Conference on Physics of Reactors 2022 (PHYSOR 2022) (Internet), p.2032 - 2041, 2022/05

We have proposed the ROM-Lasso method to perform an efficient evaluation of the sensitivity coefficients of ADS core parameters to cross sections without major modification of the core analysis system. In the ROM-Lasso method, the sensitivity coefficient vector is expanded in subspace bases, the so-called active subspace (AS), and the effective number of unknowns is reduced. The expansion coefficients are then determined by penalized linear regression against the core parameters obtained by random sampling, and the sensitivity coefficient vector is estimated. Owing to the AS, the required number of core calculations is dramatically reduced. In this work, we take the sensitivity coefficient evaluation of the coolant void reactivity at the end of the cycle as an example and demonstrate how the estimation accuracy depends on the number of samples and on the AS.
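For readers unfamiliar with the active-subspace idea itself, here is a generic, hypothetical illustration: the dominant eigenvectors of the averaged gradient outer product span the few directions along which the response actually varies. How the paper constructs AS bases without adjoint calculations is not reproduced, and the low-rank test function is invented.

```python
# Hypothetical active-subspace illustration (not the paper's construction).
import numpy as np

rng = np.random.default_rng(7)
n_inputs, n_grad_samples, rank = 100, 50, 3

# Assumed low-rank structure: the response depends on 3 hidden directions only.
W_true = np.linalg.qr(rng.normal(size=(n_inputs, rank)))[0]

def gradient(x):
    # gradient of f(x) = g(W_true.T x) for a smooth g; surrogate for core-parameter gradients
    t = W_true.T @ x
    return W_true @ np.array([2.0 * t[0], np.cos(t[1]), 3.0 * t[2]**2])

C = np.zeros((n_inputs, n_inputs))
for _ in range(n_grad_samples):
    grad = gradient(rng.normal(size=n_inputs))
    C += np.outer(grad, grad) / n_grad_samples

eigvals, eigvecs = np.linalg.eigh(C)
print("largest eigenvalues:", np.round(eigvals[-5:][::-1], 3))
# The top-3 eigenvectors approximately span the same subspace as W_true.
```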

Journal Articles

Quasi-Monte Carlo sampling method for simulation-based dynamic probabilistic risk assessment of nuclear power plants

Kubo, Kotaro; Jang, S.*; Takata, Takashi*; Yamaguchi, Akira*

Journal of Nuclear Science and Technology, 59(3), p.357 - 367, 2022/03

 Times Cited Count:7 Percentile:62.34(Nuclear Science & Technology)

Dynamic probabilistic risk assessment (PRA), which handles epistemic and aleatory uncertainties by coupling thermal-hydraulics simulation and probabilistic sampling, enables a more realistic and detailed analysis than conventional PRA. However, these improvements incur enormous calculation costs. One solution is to select an appropriate sampling method. In this paper, we applied the Monte Carlo, Latin hypercube, grid-point, and quasi-Monte Carlo sampling methods to the dynamic PRA of a station blackout sequence in a boiling water reactor and compared the methods. The results indicated that the quasi-Monte Carlo sampling method handles the uncertainties most effectively in the assumed scenario.
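For a flavor of why the choice of sampling scheme matters, the following toy compares the four schemes named above on a cheap analytic integrand; the function and sample sizes are assumptions, standing in for the thermal-hydraulics response.

```python
# Sketch comparing Monte Carlo, Latin hypercube, grid-point, and quasi-Monte Carlo
# (Sobol) estimates of a mean over unit-scaled uncertain inputs.
import numpy as np
from scipy.stats import qmc

rng = np.random.default_rng(8)
d, n = 3, 256            # n is a power of two for the Sobol sequence

def response(u):
    # assumed smooth surrogate of a simulation output
    return 10.0 * np.exp(u.sum(axis=1)) / np.e**d

exact = 10.0 * ((np.e - 1.0) / np.e) ** d

mc = response(rng.random((n, d))).mean()
lhs = response(qmc.LatinHypercube(d=d, seed=1).random(n)).mean()
sobol = response(qmc.Sobol(d=d, scramble=True, seed=1).random(n)).mean()

# grid-point: tensor grid with roughly the same budget (6^3 = 216 points)
g1 = (np.arange(6) + 0.5) / 6
grid = np.array(np.meshgrid(g1, g1, g1)).reshape(3, -1).T
grid_est = response(grid).mean()

for name, est in (("Monte Carlo", mc), ("LHS", lhs), ("Sobol QMC", sobol), ("grid", grid_est)):
    print(f"{name:12s} estimate {est:.4f}   |error| {abs(est - exact):.4f}")
```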

Journal Articles

The Analysis for Ex-Vessel debris coolability of BWR

Matsumoto, Toshinori; Iwasawa, Yuzuru; Ajima, Kohei*; Sugiyama, Tomoyuki

Proceedings of Asian Symposium on Risk Assessment and Management 2020 (ASRAM 2020) (Internet), 10 Pages, 2020/11

The probability of ex-vessel debris coolability under the wet cavity strategy is analyzed. The first step is an uncertainty analysis with the severe accident analysis code MELCOR to obtain the melt conditions: five uncertain parameters related to the core degradation and melt transfer process were chosen, input parameter sets were generated by Latin hypercube sampling (LHS), and the analyses were conducted to obtain the melt conditions. The second step is the analysis of the melt behavior under water with the JASMINE code. The probabilistic distributions of the parameters were determined from the MELCOR results, and fifty-nine parameter sets were generated by LHS. The depth of the water pool was set to 0.5, 1.0, and 2.0 m. The debris heights were compared with the criterion used to judge the debris coolability. As a result, the success probability of debris cooling was obtained through this sequence of calculations. The technical difficulties of this evaluation method are also discussed.
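The LHS generation step can be sketched as follows; the parameter names and ranges are placeholders for illustration, not the distributions actually derived from the MELCOR results.

```python
# Sketch of generating Latin hypercube parameter sets over assumed ranges, in the
# spirit of the two-step MELCOR/JASMINE sampling described above.
import numpy as np
from scipy.stats import qmc

names = ["melt_mass_t", "melt_superheat_K", "release_rate_kg_s", "pool_subcooling_K", "breach_diameter_m"]
lower = np.array([50.0, 10.0, 100.0, 0.0, 0.1])     # placeholder lower bounds
upper = np.array([250.0, 200.0, 2000.0, 50.0, 1.0])  # placeholder upper bounds

sampler = qmc.LatinHypercube(d=len(names), seed=42)
unit = sampler.random(n=59)                          # 59 parameter sets, as in the second step
samples = qmc.scale(unit, lower, upper)

for row in samples[:3]:
    print(dict(zip(names, np.round(row, 2))))
```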

Journal Articles

Estimation of uncertainty in lead spallation particle multiplicity and its propagation to a neutron energy spectrum

Iwamoto, Hiroki; Meigo, Shinichiro

Journal of Nuclear Science and Technology, 57(3), p.276 - 290, 2020/03

 Times Cited Count:2 Percentile:17.88(Nuclear Science & Technology)

This paper presents an approach to uncertainty estimation of spallation particle multiplicity of lead ($$^{\rm nat}$$Pb), primarily focusing on proton-induced spallation neutron multiplicity ($$x_{pn}$$) and its propagation to a neutron energy spectrum. The $$x_{pn}$$ uncertainty is estimated from experimental proton-induced neutron-production double-differential cross sections (DDXs) and model calculations with the Particle and Heavy Ion Transport code System (PHITS). Uncertainties in multiplicities for $$(n,xn)$$, $$(p,xp)$$, and $$(n,xp)$$ reactions are then inferred from the estimated $$x_{pn}$$ uncertainty and the PHITS calculation. Using these uncertainties, uncertainty in a neutron energy spectrum produced from a thick $$^{\rm nat}$$Pb target bombarded with 500 MeV proton beams, measured in a previous experiment, is quantified by a random sampling technique, and propagation to the neutron energy spectrum is examined. Relatively large uncertainty intervals (UIs) were observed outside the lower limit of the measurement range, which is prominent in the backward directions. Our findings suggest that a reliable assessment of spallation neutron energy spectra requires systematic DDX experiments for detector angles and incident energies below 100 MeV as well as neutron energy spectrum measurements at lower energies below $$\sim$$1.4 MeV with an accuracy below the quantified UIs.

Journal Articles

Estimation of sensitivity coefficient based on lasso-type penalized linear regression

Katano, Ryota; Endo, Tomohiro*; Yamamoto, Akio*; Tsujimoto, Kazufumi

Journal of Nuclear Science and Technology, 55(10), p.1099 - 1109, 2018/10

 Times Cited Count:4 Percentile:33.30(Nuclear Science & Technology)

In this study, we propose the penalized regression "adaptive smooth-lasso" for estimating the sensitivity coefficients of neutronics parameters. The proposed method estimates the sensitivity coefficients from the variations of the microscopic cross sections and of the neutronics parameter obtained by random sampling. Because the method utilizes only forward calculations, it can be applied to complex reactor analyses for which the application of the adjoint method is difficult. We propose a penalty term that takes into account the characteristics of the sensitivity coefficients of a neutronics parameter with respect to the microscopic multi-group cross sections. Through verification calculations, we show that the proposed method achieves high accuracy at a lower computational cost than the random-sampling-based methods proposed in previous studies.
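As a rough illustration of the idea (the paper's exact penalty may differ), a lasso-type estimator for the group-wise sensitivity coefficients $$S_g$$ that also enforces smooth variation across neighboring energy groups can be written as

$$\hat{S} = \arg\min_{S} \sum_{i=1}^{N} \Bigl( \Delta y_i - \sum_{g} S_g \, \Delta\sigma_{i,g} \Bigr)^{2} + \lambda_1 \sum_{g} w_g \left| S_g \right| + \lambda_2 \sum_{g} \left( S_{g+1} - S_g \right)^{2},$$

where $$\Delta\sigma_{i,g}$$ is the sampled relative perturbation of the group-$$g$$ cross section in sample $$i$$, $$\Delta y_i$$ the corresponding change of the neutronics parameter, $$w_g$$ adaptive weights, and $$\lambda_1, \lambda_2$$ regularization parameters.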

Journal Articles

Characteristics of radiocesium contamination of dry riverbeds due to the Fukushima Daiichi Nuclear Power Plant accident assessed by airborne radiation monitoring

Azami, Kazuhiro*; Otagaki, Takahiro*; Ishida, Mutsushi; Sanada, Yukihisa

Landscape and Ecological Engineering, 14(1), p.3 - 15, 2018/01

 Times Cited Count:2 Percentile:10.14(Biodiversity Conservation)

Journal Articles

Bayesian optimization analysis of containment-venting operation in a Boiling Water Reactor severe accident

Zheng, X.; Ishikawa, Jun; Sugiyama, Tomoyuki; Maruyama, Yu

Nuclear Engineering and Technology, 49(2), p.434 - 441, 2017/03

 Times Cited Count:5 Percentile:39.65(Nuclear Science & Technology)

Journal Articles

Bayesian optimization analysis of containment venting operation in a BWR severe accident

Zheng, X.; Ishikawa, Jun; Sugiyama, Tomoyuki; Maruyama, Yu

Proceedings of 13th Probabilistic Safety Assessment and Management Conference (PSAM-13) (USB Flash Drive), 10 Pages, 2016/10

Journal Articles

Calculation of prompt neutron decay constant with Monte Carlo differential operator sampling

Nagaya, Yasunobu

Proceedings of Joint International Conference on Mathematics and Computation, Supercomputing in Nuclear Applications and the Monte Carlo Method (M&C + SNA + MC 2015) (CD-ROM), 9 Pages, 2015/04

A new method to calculate the prompt neutron decay constant ($$\alpha$$) with the Monte Carlo method is proposed. It is based on the conventional $$\alpha$$-$$k$$ search algorithm, but no iteration is required for the $$\alpha$$ search. The $$k$$-eigenvalue is expressed as a truncated Taylor series in $$\alpha$$; the differential coefficients are calculated with differential operator sampling, which is one of the Monte Carlo perturbation techniques. To examine the applicability of the proposed method, verification has been performed for simple geometries of a bare fast system (Godiva) and an unreflected thermal system (STACY). Comparisons have been made with a pulsed neutron source (PNS) simulation and with the direct calculation from the definition of the $$\alpha$$ value. The results with the proposed method show good agreement with the reference PNS simulation.
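In outline (an illustrative reconstruction; the expansion point and truncation order are assumptions), the method replaces the iterative $$\alpha$$-$$k$$ search by expanding

$$k(\alpha) \simeq k_0 + \frac{\partial k}{\partial \alpha}\bigg|_{0} \, \alpha + \frac{1}{2} \frac{\partial^{2} k}{\partial \alpha^{2}}\bigg|_{0} \, \alpha^{2},$$

where $$k_0$$ and the differential coefficients are tallied in a single run by differential operator sampling, and then solving $$k(\alpha) = 1$$ directly for the prompt neutron decay constant $$\alpha$$.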

Journal Articles

Influence of multidimensionality on convergence of sampling in protein simulation

Metsugi, Shoichi

Journal of the Physical Society of Japan, 74(6), p.1865 - 1870, 2005/06

 Times Cited Count:0 Percentile:0.00(Physics, Multidisciplinary)

We study the problem of the convergence of sampling in protein simulation, which originates from the multidimensionality of the protein's conformational space. Since several important physical quantities are given by second moments of dynamical variables, we attempt to obtain the simulation time necessary for their sufficient convergence. We perform a molecular dynamics simulation of a protein and the subsequent principal component (PC) analysis as a function of the simulation time T. As T increases, PC vectors with smaller amplitudes of variation are identified and their amplitudes are equilibrated before vectors with larger amplitudes of variation are identified and equilibrated. This sequential identification and equilibration mechanism makes protein simulation a useful method despite its intrinsically multidimensional nature.
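The sequential-convergence picture can be mimicked with a synthetic trajectory whose slow modes carry the largest amplitudes; this is an invented toy, not the protein MD simulation of the paper.

```python
# Toy sketch of trajectory PCA as a function of simulation length T: synthetic
# Ornstein-Uhlenbeck coordinates stand in for conformational degrees of freedom,
# with the large-amplitude modes assumed to relax most slowly.
import numpy as np

rng = np.random.default_rng(9)
n_steps, dt = 100_000, 1.0
taus = np.array([10.0, 100.0, 1000.0, 5000.0])   # assumed relaxation times
sigmas = np.array([0.2, 0.5, 1.0, 2.0])          # assumed mode amplitudes (std dev)

# Generate independent OU processes and mix them into "observed" coordinates.
x = np.zeros((n_steps, len(taus)))
noise = rng.standard_normal((n_steps, len(taus)))
for t in range(1, n_steps):
    x[t] = x[t-1] * (1.0 - dt / taus) + sigmas * np.sqrt(2.0 * dt / taus) * noise[t]
mix = np.linalg.qr(rng.normal(size=(4, 4)))[0]
traj = x @ mix.T

# PC variances estimated from increasingly long stretches of the trajectory:
# the small, fast modes converge first; the large, slow ones need far longer T.
for T in (1_000, 10_000, 100_000):
    eig = np.sort(np.linalg.eigvalsh(np.cov(traj[:T].T)))[::-1]
    print(f"T = {T:6d}   PC variances: {np.round(eig, 3)}")
```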
