Search Results: Records 1-20 of 107

Journal Articles

Dynamics of radiocaesium within forests in Fukushima; Results and analysis of a model inter-comparison

Hashimoto, Shoji*; Tanaka, Taku*; Komatsu, Masabumi*; Gonze, M.-A.*; Sakashita, Wataru*; Kurikami, Hiroshi; Nishina, Kazuya*; Ota, Masakazu; Ohashi, Shinta*; Calmon, P.*; et al.

Journal of Environmental Radioactivity, 238-239, p.106721_1 - 106721_10, 2021/11

Times Cited Count: 0, Percentile: 0 (Environmental Sciences)

This study analysed the performance of models for radiocaesium migration, mainly in an evergreen coniferous forest in Fukushima, through an inter-comparison among models from several research teams. The exercise included two countermeasure scenarios, removal of the soil-surface litter and forest renewal, as well as a specific konara oak forest scenario in addition to the evergreen forest scenario. All the models reproduced the time evolution of radiocaesium inventories and concentrations in each forest component, such as the leaves and the organic soil layer. However, the variation between models grew in long-term predictions beyond 50 years after the fallout, indicating that continuous field monitoring and model verification/validation are necessary.

Journal Articles

Acceleration of fusion plasma turbulence simulation on Fugaku and Summit

Idomura, Yasuhiro; Ina, Takuya*; Ali, Y.*; Imamura, Toshiyuki*

Dai-34-Kai Suchi Ryutai Rikigaku Shimpojiumu Koen Rombunshu (Internet), 6 Pages, 2020/12

A new communication-avoiding (CA) Krylov solver with an FP16 (half-precision) preconditioner is developed for a semi-implicit finite difference solver in the Gyrokinetic Toroidal 5D full-f Eulerian code GT5D. In the solver, the bottleneck of global collective communication is resolved using a CA Krylov subspace method, and halo data communication is reduced by the FP16 preconditioner, which improves the convergence property. The FP16 preconditioner is designed based on the physics properties of the operator and is implemented using the new support for FP16 SIMD operations on A64FX. The solver is also ported to GPUs, and the performance of ITER-size simulations with $$\sim 0.1$$ trillion grids is measured on Fugaku (A64FX) and Summit (V100). The new solver accelerates GT5D by $$2\sim 3\times$$ over the conventional non-CA solver, and excellent strong scaling is obtained up to 5,760 CPUs/GPUs on both Fugaku and Summit.
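
To illustrate the mixed-precision structure described above, the following sketch keeps the operator and residual in FP64 while storing and applying the preconditioner in FP16. The diagonal Jacobi preconditioner and the Richardson iteration are illustrative stand-ins assumed for this sketch only; they are not the physics-based preconditioner or the CA-Krylov solver of GT5D.

    import numpy as np

    # FP64 operator: a diagonally dominant tridiagonal model problem.
    n = 1024
    A = (np.diag(4.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1))
    b = np.random.default_rng(0).standard_normal(n)

    # Preconditioner stored in half precision to cut memory traffic
    # (and, on A64FX, to use FP16 SIMD operations).
    M16 = (1.0 / np.diag(A)).astype(np.float16)

    x = np.zeros(n)
    bnorm = np.linalg.norm(b)
    for _ in range(200):
        r = b - A @ x                        # FP64 residual
        rn = np.linalg.norm(r)
        if rn < 1e-8 * bnorm:
            break
        # Scale the residual into FP16 range, apply the FP16 preconditioner,
        # and promote the correction back to FP64 before the update.
        z = rn * (M16 * (r / rn).astype(np.float16)).astype(np.float64)
        x += z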

Journal Articles

Acceleration of fusion plasma turbulence simulations using the mixed-precision communication-avoiding Krylov method

Idomura, Yasuhiro; Ina, Takuya*; Ali, Y.*; Imamura, Toshiyuki*

Proceedings of International Conference for High Performance Computing, Networking, Storage, and Analysis (SC 2020) (Internet), p.1318 - 1330, 2020/11

The multi-scale full-$$f$$ simulation of the next-generation experimental fusion reactor ITER based on a five-dimensional (5D) gyrokinetic model is one of the most computationally demanding problems in fusion science. In this work, the Gyrokinetic Toroidal 5D Eulerian code GT5D is accelerated by a new mixed-precision communication-avoiding (CA) Krylov method. The bottleneck of global collective communication on accelerated computing platforms is resolved using a CA Krylov method. In addition, a new FP16 preconditioner, which is designed using the new support for FP16 SIMD operations on A64FX, reduces both the number of iterations (halo data communication) and the computational cost. The performance of the proposed method for ITER-size simulations with 0.1 trillion grids on 1,440 CPUs/GPUs on Fugaku and Summit shows 2.8x and 1.9x speedups, respectively, over the conventional non-CA Krylov method, and excellent strong scaling is obtained up to 5,760 CPUs/GPUs.

Journal Articles

Communication-avoiding Krylov solvers for extreme scale nuclear CFD simulations

Idomura, Yasuhiro; Ina, Takuya*; Ali, Y.*; Imamura, Toshiyuki*

Proceedings of Joint International Conference on Supercomputing in Nuclear Applications + Monte Carlo 2020 (SNA + MC 2020), p.225 - 230, 2020/10

A new communication-avoiding (CA) Krylov solver with an FP16 (half-precision) preconditioner is developed for a semi-implicit finite difference solver in the Gyrokinetic Toroidal 5D full-f Eulerian code GT5D. In the solver, the bottleneck of global collective communication is resolved using a CA Krylov subspace method, while halo data communication is reduced by improving the convergence property with the FP16 preconditioner. The FP16 preconditioner is designed based on the physics properties of the operator and is implemented using the new support for FP16 SIMD operations on A64FX. The solver is ported to Fugaku (A64FX) and Summit (V100), which respectively show $$\sim$$63x and $$\sim$$29x speedups in socket performance compared to the conventional non-CA Krylov solver on JAEA-ICEX (Haswell).

Journal Articles

Non-invasive imaging of radiocesium dynamics in a living animal using a positron-emitting $$^{127}$$Cs tracer

Suzui, Nobuo*; Shibata, Takuya; Yin, Y.-G.*; Funaki, Yoshihito*; Kurita, Keisuke; Hoshina, Hiroyuki*; Yamaguchi, Mitsutaka*; Fujimaki, Shu*; Seko, Noriaki*; Watabe, Hiroshi*; et al.

Scientific Reports (Internet), 10, p.16155_1 - 16155_9, 2020/10

Times Cited Count: 0, Percentile: 0.01 (Multidisciplinary Sciences)

Journal Articles

Communication avoiding multigrid preconditioned conjugate gradient method for extreme scale multiphase CFD simulations

Idomura, Yasuhiro; Onodera, Naoyuki; Yamada, Susumu; Yamashita, Susumu; Ina, Takuya*; Imamura, Toshiyuki*

Supa Kompyuteingu Nyusu, 22(5), p.18 - 29, 2020/09

A communication-avoiding multigrid preconditioned conjugate gradient method (CAMGCG) is applied to the pressure Poisson equation in the multiphase CFD code JUPITER, and its computational performance and convergence properties are compared against conventional Krylov methods. The CAMGCG solver has robust convergence properties regardless of the problem size, and achieves both communication reduction and convergence improvement, leading to a higher performance gain than CA Krylov solvers, which achieve only the former. The CAMGCG solver is applied to extreme-scale multiphase CFD simulations with 90 billion DOFs, and its performance is compared against the preconditioned CG solver. In this benchmark, the number of iterations is reduced to $$\sim 1/800$$, and a $$\sim 11.6\times$$ speedup is achieved while maintaining excellent strong scaling up to 8,000 nodes on the Oakforest-PACS.
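
As an illustration of the multigrid-preconditioned CG structure, the sketch below wraps a two-level geometric V-cycle (damped-Jacobi smoothing, Galerkin coarse operator, direct coarse solve) around a standard CG loop for a 1D Poisson model problem. This is a minimal sketch under those assumptions; the communication-avoiding machinery and the 3D multiphase setting of CAMGCG are not reproduced.

    import numpy as np

    def poisson1d(n):
        # 1D Poisson matrix, a stand-in for the pressure Poisson operator.
        return (np.diag(2.0 * np.ones(n))
                - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1))

    def prolongation(nf):
        # Linear interpolation from (nf - 1) // 2 coarse points to nf fine points.
        nc = (nf - 1) // 2
        P = np.zeros((nf, nc))
        for j in range(nc):
            i = 2 * j + 1                    # fine index of coarse point j
            P[i, j] = 1.0
            P[i - 1, j] = 0.5
            P[i + 1, j] = 0.5
        return P

    def mg_preconditioner(A):
        P = prolongation(A.shape[0])
        R = 0.5 * P.T                        # full-weighting restriction
        Ac = R @ A @ P                       # Galerkin coarse-grid operator
        d = np.diag(A)

        def apply(r, n_smooth=2, w=2.0 / 3.0):
            z = np.zeros_like(r)
            for _ in range(n_smooth):        # damped-Jacobi pre-smoothing
                z += w * (r - A @ z) / d
            z += P @ np.linalg.solve(Ac, R @ (r - A @ z))  # coarse correction
            for _ in range(n_smooth):        # post-smoothing
                z += w * (r - A @ z) / d
            return z

        return apply

    def mgcg(A, b, M, tol=1e-10, max_iter=200):
        # Standard preconditioned CG with the V-cycle as preconditioner M.
        x = np.zeros_like(b)
        r = b - A @ x
        z = M(r)
        p = z.copy()
        rz = r @ z
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rz / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            if np.linalg.norm(r) < tol * np.linalg.norm(b):
                break
            z = M(r)
            rz_new = r @ z
            p = z + (rz_new / rz) * p
            rz = rz_new
        return x

    n = 127                                  # 2**7 - 1, so coarsening is clean
    A = poisson1d(n)
    x = mgcg(A, np.ones(n), mg_preconditioner(A))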

Journal Articles

Implementation and performance evaluation of a communication-avoiding GMRES method for stencil-based code on GPU cluster

Matsumoto, Kazuya*; Idomura, Yasuhiro; Ina, Takuya*; Mayumi, Akie; Yamada, Susumu

Journal of Supercomputing, 75(12), p.8115 - 8146, 2019/12

Times Cited Count: 1, Percentile: 26.51 (Computer Science, Hardware & Architecture)

A communication-avoiding generalized minimum residual method (CA-GMRES) is implemented on a hybrid CPU-GPU cluster, targeting acceleration of the iterative linear system solver in the gyrokinetic toroidal five-dimensional Eulerian code GT5D. In addition to CA-GMRES, we implement and evaluate a modified variant (M-CA-GMRES), proposed in our previous study, which reduces the amount of floating-point calculation. This study demonstrates that the beneficial features of CA-GMRES are its minimal number of collective communications and its highly efficient computation based on dense matrix-matrix operations. The performance evaluation is conducted on the Reedbush-L GPU cluster, which contains four NVIDIA Tesla P100 GPUs per compute node. The results show that M-CA-GMRES is 1.09x, 1.22x and 1.50x faster than CA-GMRES, the generalized conjugate residual method (GCR), and GMRES, respectively, when 64 GPUs are used.
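
The communication pattern that distinguishes CA methods from standard GMRES can be sketched in a few lines: s matrix-vector products are chained without intervening reductions (the matrix-powers kernel), and the resulting block is then orthogonalized in one shot, which on a distributed machine corresponds to a single collective reduction per s steps. A toy sketch assuming a dense model operator and the monomial basis (the actual GT5D and M-CA-GMRES kernels are not reproduced):

    import numpy as np

    def ca_basis(A, v, s):
        # Matrix-powers kernel: builds [v, Av, ..., A^s v] back to back with
        # no global reductions in between. The monomial basis used here grows
        # ill-conditioned as s increases; production CA solvers use Newton or
        # Chebyshev bases to keep it tame.
        V = np.empty((A.shape[0], s + 1))
        V[:, 0] = v
        for j in range(s):
            V[:, j + 1] = A @ V[:, j]
        return V

    # One block QR replaces s rounds of Gram-Schmidt inner products; on a
    # distributed machine this is the single collective reduction per s steps.
    rng = np.random.default_rng(0)
    n, s = 256, 4
    A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1))
    Q, R = np.linalg.qr(ca_basis(A, rng.standard_normal(n), s))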

Journal Articles

GPU acceleration of communication avoiding Chebyshev basis conjugate gradient solver for multiphase CFD simulations

Ali, Y.*; Onodera, Naoyuki; Idomura, Yasuhiro; Ina, Takuya*; Imamura, Toshiyuki*

Proceedings of 10th Workshop on Latest Advances in Scalable Algorithms for Large-Scale Systems (ScalA 2019), p.1 - 8, 2019/11

Times Cited Count: 6, Percentile: 99.17

Iterative methods for solving large linear systems are common components of computational fluid dynamics (CFD) codes. The preconditioned conjugate gradient (P-CG) method is one of the most widely used iterative methods. However, in the P-CG method, global collective communication is a crucial bottleneck, especially on accelerated computing platforms. To resolve this issue, communication-avoiding (CA) variants of the P-CG method are becoming increasingly important. In this paper, the P-CG and preconditioned Chebyshev basis CA CG (P-CBCG) solvers in the multiphase CFD code JUPITER are ported to the latest V100 GPUs. All GPU kernels are highly optimized to achieve about 90% of the roofline performance, the block Jacobi preconditioner is redesigned to exploit the high computing power of GPUs, and the remaining bottleneck of halo data communication is hidden by overlapping communication and computation. The overall performance of the P-CG and P-CBCG solvers is determined by the competition between the CA properties of the global collective communication and the halo data communication, indicating the importance of inter-node interconnect bandwidth per GPU. The developed GPU solvers are accelerated up to 2x compared with the former CPU solvers on KNLs, and excellent strong scaling is achieved up to 7,680 GPUs on Summit.

Journal Articles

Communication avoiding multigrid preconditioned conjugate gradient method for extreme scale multiphase CFD simulations

Idomura, Yasuhiro; Ina, Takuya*; Yamashita, Susumu; Onodera, Naoyuki; Yamada, Susumu; Imamura, Toshiyuki*

Proceedings of 9th Workshop on Latest Advances in Scalable Algorithms for Large-Scale Systems (ScalA 2018) (Internet), p.17 - 24, 2018/11

Times Cited Count: 2, Percentile: 77.41

A communication-avoiding (CA) multigrid preconditioned conjugate gradient method (CAMGCG) is applied to the pressure Poisson equation in the multiphase CFD code JUPITER, and its computational performance and convergence properties are compared against CA Krylov methods. In the JUPITER code, the CAMGCG solver has robust convergence properties regardless of the problem size, and achieves both communication reduction and convergence improvement, leading to a higher performance gain than CA Krylov solvers, which achieve only the former. The CAMGCG solver is applied to extreme-scale multiphase CFD simulations with $$\sim 90$$ billion DOFs, and it is shown that, compared with a preconditioned CG solver, the number of iterations is reduced to $$\sim 1/800$$ and a $$\sim 11.6\times$$ speedup is achieved while maintaining excellent strong scaling up to 8,000 nodes on the Oakforest-PACS.

Journal Articles

Ce substitution and reduction annealing effects on electronic states in Pr$$_{2-x}$$Ce$$_x$$CuO$$_4$$ studied by Cu $$K$$-edge X-ray absorption spectroscopy

Asano, Shun*; Ishii, Kenji*; Matsumura, Daiju; Tsuji, Takuya; Ina, Toshiaki*; Suzuki, Kensuke*; Fujita, Masaki*

Journal of the Physical Society of Japan, 87(9), p.094710_1 - 094710_5, 2018/09

Times Cited Count: 7, Percentile: 63.33 (Physics, Multidisciplinary)

Journal Articles

Development of a water purifier for radioactive cesium removal from contaminated natural water by radiation-induced graft polymerization

Seko, Noriaki*; Hoshina, Hiroyuki*; Kasai, Noboru*; Shibata, Takuya; Saiki, Seiichi*; Ueki, Yuji*

Radiation Physics and Chemistry, 143, p.33 - 37, 2018/02

Times Cited Count: 8, Percentile: 77.16 (Chemistry, Physical)

Journal Articles

Application of a preconditioned Chebyshev basis communication-avoiding conjugate gradient method to a multiphase thermal-hydraulic CFD code

Idomura, Yasuhiro; Ina, Takuya*; Mayumi, Akie; Yamada, Susumu; Imamura, Toshiyuki*

Lecture Notes in Computer Science 10776, p.257 - 273, 2018/00

A preconditioned Chebyshev basis communication-avoiding conjugate gradient method (P-CBCG) is applied to the pressure Poisson equation in the multiphase thermal-hydraulic CFD code JUPITER, and its computational performance and convergence properties are compared against a preconditioned conjugate gradient (P-CG) method and a preconditioned communication-avoiding conjugate gradient (P-CACG) method on the Oakforest-PACS, which consists of 8,208 KNLs. The P-CBCG method reduces the number of collective communications while maintaining robust convergence properties. Compared with the P-CACG method, the improved robustness enables an order of magnitude more communication-avoiding steps. It is shown that the P-CBCG method is $$1.38\times$$ and $$1.17\times$$ faster than the P-CG and P-CACG methods, respectively, at 2,000 processors.
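
The Chebyshev basis that gives P-CBCG its robustness can be sketched directly from its three-term recurrence: given an estimated eigenvalue interval [lmin, lmax], the basis vectors T_j(A)r stay far better conditioned than monomial vectors A^j r, so larger communication-avoiding step counts s are possible before orthogonality is lost. A minimal sketch, assuming the spectral bounds are given (in practice they come from, e.g., Gershgorin bounds or a few Lanczos steps); preconditioning and the CG update itself are omitted:

    import numpy as np

    def chebyshev_basis(A, r, s, lmin, lmax):
        # Basis vectors S[:, j] = T_j(A) r from the shifted and scaled
        # Chebyshev three-term recurrence over [lmin, lmax].
        theta = 0.5 * (lmax + lmin)          # center of the spectrum estimate
        delta = 0.5 * (lmax - lmin)          # half-width
        S = np.empty((len(r), s + 1))
        S[:, 0] = r
        S[:, 1] = (A @ S[:, 0] - theta * S[:, 0]) / delta
        for j in range(1, s):
            S[:, j + 1] = (2.0 * (A @ S[:, j] - theta * S[:, j]) / delta
                           - S[:, j - 1])
        return S

    n, s = 256, 12
    A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1))
    r = np.random.default_rng(0).standard_normal(n)
    S = chebyshev_basis(A, r, s, lmin=0.0, lmax=4.0)  # Gershgorin bounds of A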

Journal Articles

Large-scale simulations on molten material behavior in severe accidents of nuclear reactors

Yamashita, Susumu; Ina, Takuya*; Idomura, Yasuhiro; Yoshida, Hiroyuki

Dai-31-Kai Suchi Ryutai Rikigaku Shimpojiumu Koen Rombunshu (DVD-ROM), 7 Pages, 2017/12

no abstracts in English

Journal Articles

Application of a communication-avoiding generalized minimal residual method to a gyrokinetic five dimensional Eulerian code on many core platforms

Idomura, Yasuhiro; Ina, Takuya*; Mayumi, Akie; Yamada, Susumu; Matsumoto, Kazuya*; Asahi, Yuichi*; Imamura, Toshiyuki*

Proceedings of 8th Workshop on Latest Advances in Scalable Algorithms for Large-Scale Systems (ScalA 2017), p.7_1 - 7_8, 2017/11

A communication-avoiding generalized minimal residual (CA-GMRES) method is applied to the gyrokinetic toroidal five-dimensional Eulerian code GT5D, and its performance is compared against the original code with a generalized conjugate residual (GCR) method on the JAEA ICEX (Haswell), the Plasma Simulator (FX100), and the Oakforest-PACS (KNL). The CA-GMRES method has $$\sim 3.8\times$$ higher arithmetic intensity than the GCR method and is thus suitable for future exa-scale architectures with limited memory and network bandwidths. In the performance evaluation, it is shown that, compared with the GCR solver, its computing kernels are accelerated by $$1.47\times$$ to $$2.39\times$$, and the cost of data-reduction communication is reduced from $$5\%\sim 13\%$$ to $$\sim 1\%$$ of the total cost at 1,280 nodes.

Journal Articles

A Numerical simulation method for molten material behavior in nuclear reactors

Yamashita, Susumu; Ina, Takuya; Idomura, Yasuhiro; Yoshida, Hiroyuki

Nuclear Engineering and Design, 322, p.301 - 312, 2017/10

Times Cited Count: 14, Percentile: 90.26 (Nuclear Science & Technology)

In recent years, significant attention has been paid to precisely determining the relocation of molten materials in reactor pressure vessels of boiling water reactors (BWRs) during severe accidents. To address this problem, we have developed the computational fluid dynamics code JUPITER, based on thermal-hydraulic equations and multiphase simulation models. Although the Poisson solver had previously been a performance bottleneck in the JUPITER code, this is resolved by a new hybrid parallel Poisson solver, whose strong scaling extends up to $$\sim$$200k cores on the K computer. As a result of the improved computational capability, the problem size and physical models are dramatically expanded. A series of verification and validation studies is enabled, whose results agree with previous numerical simulations and experiments. These physical and computational capabilities of JUPITER enable us to investigate molten material behavior in reactor-relevant situations.

Journal Articles

Quadruple-precision BLAS using Bailey's arithmetic with FMA instruction; Its performance and applications

Yamada, Susumu; Ina, Takuya*; Sasa, Narimasa; Idomura, Yasuhiro; Machida, Masahiko; Imamura, Toshiyuki*

Proceedings of 2017 IEEE International Parallel & Distributed Processing Symposium Workshops (IPDPSW) (Internet), p.1418 - 1425, 2017/08

Times Cited Count: 3, Percentile: 72.66

no abstracts in English

Journal Articles

Optimization of fusion kernels on accelerators with indirect or strided memory access patterns

Asahi, Yuichi*; Latu, G.*; Ina, Takuya; Idomura, Yasuhiro; Grandgirard, V.*; Garbet, X.*

IEEE Transactions on Parallel and Distributed Systems, 28(7), p.1974 - 1988, 2017/07

Times Cited Count: 4, Percentile: 50.79 (Computer Science, Theory & Methods)

High-dimensional stencil computations from fusion plasma turbulence codes involving complex memory access patterns, namely the indirect memory access in a semi-Lagrangian scheme and the strided memory access in a finite-difference scheme, are optimized on accelerators such as GPGPUs and Xeon Phi coprocessors. On both devices, the Array of Structure of Array (AoSoA) data layout is preferable for contiguous memory accesses. It is shown that effective local cache usage, achieved by improving spatial and temporal data locality, is critical on Xeon Phi. On GPGPUs, texture memory usage improves the performance of the indirect memory accesses in the semi-Lagrangian scheme. Thanks to these optimizations, the fusion kernels on accelerators become 1.4x to 8.1x faster than those on Sandy Bridge (CPU).
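
The AoSoA layout mentioned above can be made concrete with array shapes alone: values with ncomp components are grouped into tiles of SIMD width V so that each component is contiguous within a tile and maps onto one vector load. A minimal numpy sketch of the three layouts (V = 8 and ncomp = 3 are arbitrary choices for illustration; the sketch demonstrates shapes, not performance):

    import numpy as np

    n, ncomp, V = 32, 3, 8

    # AoS: components of one element are adjacent, shape [n][ncomp].
    aos = np.arange(n * ncomp, dtype=np.float64).reshape(n, ncomp)

    # SoA: each component is one long contiguous array, shape [ncomp][n].
    soa = np.ascontiguousarray(aos.T)

    # AoSoA: shape [n // V][ncomp][V]; component c of tile t is the
    # contiguous run aosoa[t, c, :], which maps onto one SIMD vector load.
    aosoa = np.ascontiguousarray(aos.reshape(n // V, V, ncomp).transpose(0, 2, 1))

    assert np.array_equal(aosoa[1, 2], aos[V:2 * V, 2])  # tile 1, component 2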

Journal Articles

Left-preconditioned communication-avoiding conjugate gradient methods for multiphase CFD simulations on the K computer

Mayumi, Akie; Idomura, Yasuhiro; Ina, Takuya; Yamada, Susumu; Imamura, Toshiyuki*

Proceedings of 7th Workshop on Latest Advances in Scalable Algorithms for Large-Scale Systems (ScalA 2016) (Internet), p.17 - 24, 2016/11

The left-preconditioned communication-avoiding conjugate gradient (LP-CA-CG) method is applied to the pressure Poisson equation in the multiphase CFD code JUPITER. The arithmetic intensity of the LP-CA-CG method is analyzed and is dramatically improved by loop splitting for the inner product operations and the three-term recurrence operations. Two LP-CA-CG solvers are developed, one with block Jacobi preconditioning and one with underlap preconditioning. It is shown that on the K computer the LP-CA-CG solver with block Jacobi preconditioning is faster, because the performance of local point-to-point communications scales well, whereas the convergence property degrades with underlap preconditioning. The LP-CA-CG solver shows good strong scaling up to 30,000 nodes, where it achieves higher performance than the original CG solver by reducing the cost of global collective communications by 69%.
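
To make the arithmetic-intensity argument concrete, the sketch below contrasts a two-sweep formulation of a CG-style vector update plus inner product with a single-sweep version in which the update and the reduction share one pass over the vectors, so each vector is streamed from memory once. Note this loop-fusion illustration is only a stand-in for the paper's actual loop splitting of the LP-CA-CG kernels; the explicit Python loop shows the structure a compiled fused kernel would have.

    import numpy as np

    def update_then_dot(r, z, p, beta):
        # Two sweeps over memory: one for the update, one for the dot product.
        p_new = z + beta * p                 # sweep 1 over z and p
        rz = float(r @ z)                    # sweep 2 over r and z
        return p_new, rz

    def update_with_dot(r, z, p, beta):
        # Single sweep: update and reduction share one pass, so each element
        # of z is loaded once. (A real kernel does this in a compiled loop
        # with SIMD; the explicit Python loop only shows the structure.)
        n = len(r)
        p_new = np.empty(n)
        rz = 0.0
        for i in range(n):
            zi = z[i]
            p_new[i] = zi + beta * p[i]
            rz += r[i] * zi
        return p_new, rz

    rng = np.random.default_rng(0)
    r, z, p = rng.standard_normal((3, 1000))
    assert np.allclose(update_then_dot(r, z, p, 0.5)[1],
                       update_with_dot(r, z, p, 0.5)[1])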

Journal Articles

Computational challenges towards Exa-scale fusion plasma turbulence simulations

Idomura, Yasuhiro; Asahi, Yuichi; Ina, Takuya; Matsuoka, Seikichi

Proceedings of 24th International Congress of Theoretical and Applied Mechanics (ICTAM 2016), p.3106 - 3107, 2016/08

Turbulent transport in fusion plasmas is one of the key issues in ITER. To address this issue via the five-dimensional (5D) gyrokinetic model, a novel computing technique is developed, and the strong scaling of the Gyrokinetic Toroidal 5D Eulerian code GT5D is improved up to $$\sim 0.6$$ million cores on the K computer. The computing technique consists of multi-dimensional/multi-layer domain decomposition, overlap of communication and computation, and optimization of computing kernels for multi-core CPUs. This computing power enabled us to study ITER-relevant issues such as the plasma-size scaling of turbulent transport. Towards next-generation burning plasma turbulence simulations, the physics model is extended to include kinetic electrons and multi-species ions, and the computing kernels are further optimized for the latest many-core architectures.

Journal Articles

Technical estimation for mass production of highly-concentrated $$^{\rm 99m}$$Tc solution from $$^{99}$$Mo to be obtained by ($$n,\gamma$$) reaction; A Preliminary study using inactive Re instead of $$^{\rm 99m}$$Tc

Tanase, Masakazu*; Fujisaki, Saburo*; Ota, Akio*; Shiina, Takayuki*; Yamabayashi, Hisamichi*; Takeuchi, Nobuhiro*; Tsuchiya, Kunihiko; Kimura, Akihiro; Suzuki, Yoshitaka; Ishida, Takuya; et al.

Radioisotopes, 65(5), p.237 - 245, 2016/05

no abstracts in English
