Verification and Uncertainty Analysis of Fuel Codes Using Distributed Computing


D. Evens
R. Rock

Abstract

In recent years, nuclear safety analysis computer codes have been held to increasingly high standards of quality assurance. At the same time, best-estimate-with-uncertainty analysis is taking a more prominent role, displacing to some extent the idea of limit-consequence analysis. These activities have placed ever-increasing demands on available computing resources. A recent project at Ontario Hydro developed the capability to use the workstations on our Windows NT LAN as a distributed batch queue; the application developed is called SheepDog. This paper reports on the challenges and opportunities met in this project, as well as the experience gained in applying this method to verification and uncertainty analysis of fuel codes.

SheepDog has been applied to uncertainty analysis, using a broadly CSAU-like method, of fuel behaviour during postulated accident scenarios at a nuclear power station. For each scenario, several hundred cases were selected according to a Latin Hypercube scheme and used to construct a response surface surrogate for the codes. Residual disparities between the code predictions and the response surfaces led to the suspicion that there were discontinuities in the predictions of the analysis codes. This led to the development of "stress testing", which refers to two procedures: coarsely scanning several input parameters in combination, and finely scanning individual input parameters. Either procedure requires several hundred code runs, so SheepDog was applied to complete stress testing in a reasonable time. The results are examined for continuity, smoothness, and the physical reasonableness of trends and interactions. In several cases, this analysis uncovered previously unknown errors in the analysis codes and pinpointed the part of the code that needed to be modified.

The challenges involved include the following: the usual choices of development language and environment had to be made; there was a significant learning curve for building Windows NT service programs; activity by the distributed jobs was not permitted to interfere with the work of the local workstation user; the workstation component of the system had to be exceptionally robust; and the codes to be run had to be ported to Windows NT. The opportunities include the following: the amount of computing power available is quite large; the marginal cost of utilising this computing power is very small, since it makes use of unused cycles on existing hardware; the system is easily scalable; and the system is easily customisable. Results of the verification efforts will be discussed in another paper.
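As an illustration of the sampling and surrogate-fitting steps described in the abstract, the following sketch draws a Latin Hypercube sample over a few hypothetical input parameters, fits a quadratic response surface by least squares, and inspects the residuals for signs of discontinuous code behaviour. The parameter bounds, the number of cases, and the stand-in for the fuel code are assumptions for illustration only, not the actual analysis inputs.

```python
# Minimal sketch of Latin Hypercube sampling plus a quadratic response surface fit.
# Parameter names, ranges, and the stand-in "code" are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def latin_hypercube(n_samples, bounds):
    """Draw a Latin Hypercube sample; bounds is a list of (low, high) pairs."""
    dim = len(bounds)
    # one stratum per sample in each dimension, independently permuted per dimension
    u = (rng.permuted(np.tile(np.arange(n_samples), (dim, 1)), axis=1).T
         + rng.random((n_samples, dim))) / n_samples
    lows = np.array([b[0] for b in bounds])
    highs = np.array([b[1] for b in bounds])
    return lows + u * (highs - lows)

def quadratic_design_matrix(x):
    """Full quadratic basis: constant, linear, squares, and pairwise interactions."""
    n, d = x.shape
    cols = [np.ones(n)]
    cols += [x[:, i] for i in range(d)]
    cols += [x[:, i] * x[:, j] for i in range(d) for j in range(i, d)]
    return np.column_stack(cols)

# Several hundred cases over three hypothetical uncertain parameters.
bounds = [(0.0, 1.0), (500.0, 1500.0), (0.1, 10.0)]
x = latin_hypercube(400, bounds)

# Stand-in for running the fuel code on each case; in the actual work each row
# would be dispatched as a batch job and the code output collected.
y = np.array([xi[0] ** 2 + 0.001 * xi[1] + np.log(xi[2]) for xi in x])

# Fit the response surface and examine the residual disparities.
a = quadratic_design_matrix(x)
coef, *_ = np.linalg.lstsq(a, y, rcond=None)
residuals = y - a @ coef
print("max |residual|:", np.abs(residuals).max())
```

Unusually large or clustered residuals in the last step are what would motivate the stress-testing scans sketched below.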

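The following sketch illustrates, under assumed parameter names and ranges, the two stress-testing procedures mentioned in the abstract: a coarse scan over combinations of several inputs, and a fine scan of one input at a time with a check for suspiciously large jumps between neighbouring cases. The run_code() placeholder stands in for a single execution of the analysis code.

```python
# Hypothetical sketch of the two "stress testing" procedures: coarse combined scans
# and fine single-parameter scans, each checked for large jumps in the output.
import itertools
import numpy as np

def run_code(params):
    # Placeholder for executing the analysis code with the given inputs.
    return params["power"] ** 2 + 0.001 * params["temperature"]

def coarse_scan(grids):
    """Run the code over every combination of a few values per parameter."""
    names = list(grids)
    results = []
    for values in itertools.product(*(grids[n] for n in names)):
        params = dict(zip(names, values))
        results.append((params, run_code(params)))
    return results

def fine_scan(base, name, values, jump_tol):
    """Vary one parameter finely and flag unexpectedly large jumps in the output."""
    outputs = [run_code(dict(base, **{name: v})) for v in values]
    jumps = np.abs(np.diff(outputs))
    return [(name, values[i], jumps[i]) for i in np.flatnonzero(jumps > jump_tol)]

# Example usage with illustrative parameter names and ranges.
combos = coarse_scan({"power": np.linspace(0, 1, 5),
                      "temperature": np.linspace(500, 1500, 5)})
flags = fine_scan({"power": 0.5, "temperature": 1000.0},
                  "power", np.linspace(0, 1, 400), jump_tol=0.01)
print(len(combos), "coarse cases;", len(flags), "suspicious jumps in fine scan")
```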
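Finally, the distributed-batch-queue idea behind SheepDog can be illustrated conceptually: idle workstations pull queued cases and run them without interfering with the local user. SheepDog itself is a Windows NT service program; the thread pool, queue, and run_case placeholder below are illustrative assumptions, not its actual implementation.

```python
# Conceptual sketch of a distributed batch queue: worker agents drain a shared
# queue of cases. Each thread stands in for one workstation agent.
import queue
import threading
import time

def run_case(case_name):
    """Placeholder for launching the analysis code on one input file, e.g. as a
    below-normal-priority process so it does not disturb the local user."""
    time.sleep(0.01)
    return f"{case_name}: done"

def worker(job_queue, results, lock):
    while True:
        try:
            case = job_queue.get_nowait()
        except queue.Empty:
            return
        outcome = run_case(case)
        with lock:
            results.append(outcome)

jobs = queue.Queue()
for i in range(1, 9):
    jobs.put(f"case_{i:03d}.inp")

results, lock = [], threading.Lock()
threads = [threading.Thread(target=worker, args=(jobs, results, lock)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(results), "cases completed")
```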