Theory Weekly Highlights for May 2000

May 26, 2000

The “processor group factorization” subgridding method, which allows GYRO to scale beyond 256 processors, has been implemented in the GYRO microturbulence code. The method gives a factor-of-two performance increase on 45 STELLA processors and will be tested on up to 512 T3E processors. It is applicable in general to the massively parallel solution of multidimensional PDEs, not just the gyrokinetic-Maxwell equations.
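As a rough illustration of the idea only (not GYRO's actual implementation), the sketch below factors an MPI world communicator into processor subgroups so that operations coupling only the data owned by one subgroup can use subgroup-local collectives; the group count and all names are assumptions made for the example.

    # Illustrative sketch: factoring a world communicator into processor
    # subgroups, in the spirit of the method described above (not GYRO's code).
    from mpi4py import MPI

    world = MPI.COMM_WORLD
    rank = world.Get_rank()
    size = world.Get_size()

    n_groups = 8                          # assumed group count for illustration
    assert size % n_groups == 0           # require an even factorization
    group_size = size // n_groups

    color = rank // group_size            # which subgroup this rank joins
    key = rank % group_size               # rank ordering within the subgroup
    sub = world.Split(color, key)         # collective call made by every rank

    # Operations that couple only data owned by one subgroup can now use
    # subgroup-local collectives, cutting communication cost as the total
    # processor count grows.
    local_sum = sub.allreduce(rank, op=MPI.SUM)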

May 19, 2000

A factor of 2 to 3 improvement in the computational speed of the GLF23 transport model has been obtained by reducing the calculations performed where all of the drift waves are stable. This model is the state of the art for predicting transport in tokamak discharges. The speedup will make it easier to explore new transport regimes with the model.
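The kind of short-circuit involved can be sketched as follows; the function names, flux layout, and stability test are assumptions for illustration only, not GLF23's actual coding.

    import numpy as np

    def model_fluxes(growth_rates, quasilinear_flux):
        # If every drift-wave mode is stable (non-positive growth rate),
        # the turbulent fluxes vanish, so skip the expensive flux evaluation.
        if np.all(growth_rates <= 0.0):
            return np.zeros(3)            # e.g. particle, ion heat, electron heat
        return quasilinear_flux(growth_rates)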

Automatic between-shot analysis of DIII-D data is now being directed by the MDSplus dispatching system, marking the first production-scale operation of the Unix port of this code. The dispatching system greatly simplifies the triggering of between-shot analysis codes based on the acquisition of requisite data. For example, EFIT analysis is dispatched to the new EFIT Linux cluster immediately following acquisition of the magnetics data. The new system will be easier to maintain and provides more flexibility in scheduling analysis jobs. It will also allow between-shot analysis to be load-balanced across the LSF cluster for better utilization of CPU power.
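Stripped of MDSplus specifics, the dispatching idea can be sketched as a simple data-dependency trigger; everything below (the class, method names, and job table) is hypothetical and is not the MDSplus dispatcher API.

    # Generic sketch of data-triggered dispatching; NOT the MDSplus API.
    class Dispatcher:
        def __init__(self):
            self.jobs = []                # (name, required data events, action)
            self.seen = set()             # data events received so far this shot
            self.done = set()             # jobs already dispatched this shot

        def register(self, name, requires, action):
            self.jobs.append((name, set(requires), action))

        def event(self, name):
            """Called when a data-acquisition phase completes."""
            self.seen.add(name)
            for job, requires, action in self.jobs:
                if job not in self.done and requires <= self.seen:
                    self.done.add(job)
                    action()              # e.g. submit the analysis job to a queue

    # Hypothetical usage: launch EFIT once the magnetics data are acquired.
    d = Dispatcher()
    d.register("EFIT", {"magnetics"}, lambda: print("submit EFIT to cluster"))
    d.event("magnetics")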

May 12, 2000

Long-time nonlinear electrostatic toroidal ITG simulations, using the GYRO full gyrokinetic code, now run in only a few hours on stella.gat.com. Many visualization utilities, including automated MPEG animation of the turbulent density evolution, have been developed. A new processor subgridding algorithm has also been invented and will allow GYRO to scale to more than 256 processors.
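A minimal sketch of automating such an animation is given below, assuming a precomputed density history on disk and a working ffmpeg installation; the file name, array shape, and plotting choices are illustrative, not GYRO's actual visualization utilities.

    import numpy as np
    import matplotlib.pyplot as plt
    from matplotlib.animation import FFMpegWriter

    density = np.load("density_xyt.npy")       # assumed shape: (ntime, nx, ny)

    fig, ax = plt.subplots()
    image = ax.imshow(density[0], origin="lower")
    ax.set_title("turbulent density evolution")

    writer = FFMpegWriter(fps=10)               # requires ffmpeg on the system
    with writer.saving(fig, "density.mpg", dpi=100):
        for frame in density:
            image.set_data(frame)               # update the image in place
            writer.grab_frame()                 # append the current figure as a frame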

May 05, 2000

A new model for pellet ablation and grad-B mass redistribution has been developed, using a kinetic approach to obtain the proper pressure boundary condition at the cloud-plasma end face. The parallel dynamics and the pressure (or beta) relaxation rate of the cloud are controlled by conservation of the parallel momentum stress across the interface, where an electrostatic potential barrier exists within a thin sheath. The total parallel momentum stress comes from the cold fluid cloud particles, with scalar cloud pressure, and from the momentum-flux moments evaluated from the distribution functions of each of the five hot kinetic species. The improved boundary condition will be used in the 1-D parallel-expansion Lagrangian code and in a future 2-D code development at GA.
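Schematically, and with the notation here assumed rather than taken from the report, the interface condition balances the parallel momentum flux of the cold cloud fluid against the kinetic momentum-flux moments of the hot species, something like

\[
p_c + m_c n_c u_\parallel^2 \;=\; \sum_{s=1}^{5} \int m_s v_\parallel^2 \, f_s(\mathbf{v}) \, d^3v ,
\]

where the left-hand side is the scalar cloud pressure plus the cloud ram pressure, and the right-hand side sums the parallel momentum-flux moments of the five hot kinetic species, each distribution f_s evaluated with the electrostatic sheath potential barrier taken into account.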



Disclaimer
These highlights are reports of research work in progress and are accordingly subject to change or modification.