GW

Parameters needed to set up a GW calculation for electronic energy levels of molecules and the band structure of materials (currently, only 2D materials have been tested). For the GW algorithm for molecules, see [https://doi.org/10.1021/acs.jctc.0c01282]. For 2D materials, see [http://arxiv.org/abs/2306.16066].

Keyword descriptions

SECTION_PARAMETERS: logical = F

Lone keyword: T

Controls the activation of the GW calculation.
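
A minimal sketch of where this section might live in a CP2K input file; the enclosing section path (&PROPERTIES / &BANDSTRUCTURE) follows recent CP2K versions and is an assumption that should be checked against your release:

    &DFT
      &PROPERTIES
        &BANDSTRUCTURE
          &GW
            ! presence of the section activates the GW calculation
            ! (section parameter defaults to T when the section is given)
          &END GW
        &END BANDSTRUCTURE
      &END PROPERTIES
    &END DFT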

EPS_FILTER: real = 1.00000000E-008

Usage: EPS_FILTER 1.0E-6

Determines a threshold for the DBCSR-based sparse matrix multiplications. This threshold largely determines the accuracy and timing of low-scaling GW calculations: a lower filter value means higher numerical precision, but higher computational cost.
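
As an illustrative sketch, convergence with respect to the filter can be checked by tightening it and comparing the resulting energy levels or band structures (the values below are examples, not recommendations from this page):

    &GW
      EPS_FILTER 1.0E-8   ! default; rerun with 1.0E-9 and compare to verify convergence
    &END GW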

GROUP_SIZE_TENSOR: integer = -1

Usage: GROUP_SIZE_TENSOR 16

Specifies the number of MPI processes in a group that performs tensor operations. For '-1', the group size is chosen automatically, which should be good enough for most cases.
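
An illustrative sketch; the grouping arithmetic in the comment is an assumption about how the total number of ranks is divided:

    &GW
      GROUP_SIZE_TENSOR 16   ! e.g. with 64 MPI ranks, assumed to form 4 tensor groups of 16
    &END GW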

KPOINTS_CHI_EPS_W: integer[3] = -1 -1 -1

Usage: KPOINTS_CHI_EPS_W N_x N_y N_z

Monkhorst-Pack k-point mesh of size N_x, N_y, N_z for the polarizability χ, the dielectric function ε and the screened Coulomb interaction W. For negative values, e.g. KPOINTS_CHI_EPS_W -1 -1 -1, the k-point mesh is chosen automatically, which should be good enough in almost all cases. Only even k-point meshes are implemented. For non-periodic directions α, choose N_α = 1. The N_x, N_y, N_z mesh and an N_x+2, N_y+2, N_z+2 mesh are used for extrapolating the k-point integration of the screened Coulomb interaction.
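
A sketch for a 2D material that is periodic in x and y (the mesh size is illustrative; recall that only even meshes are implemented for periodic directions):

    &GW
      KPOINTS_CHI_EPS_W 4 4 1   ! even 4x4 mesh in the periodic x/y directions, N_z = 1 (non-periodic)
    &END GW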

MEMORY_PER_PROC: integer = 2

Usage: MEMORY_PER_PROC 16

Specify the available memory per MPI process. Set this number as accurately as possible to get good performance: if MEMORY_PER_PROC is set lower than the memory actually available per MPI process, performance will suffer; if it is set higher than the memory actually available per MPI process, the program might run out of memory. You can calculate MEMORY_PER_PROC as follows: get the memory per node of your machine, mem_per_node (for example, from a supercomputer website; typically between 100 GB and 2 TB), and get the number of MPI processes per node, n_MPI_proc_per_node (for example, from your run script; if you use Slurm, the number behind '--ntasks-per-node' is the number of MPI processes per node). Then calculate MEMORY_PER_PROC = mem_per_node / n_MPI_proc_per_node (typically between 2 GB and 50 GB). Unit of keyword: gigabyte (GB).
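
A worked example with assumed machine numbers: a node with 256 GB of memory running 16 MPI processes per node ('--ntasks-per-node=16' in Slurm) gives 256 GB / 16 = 16 GB per process:

    &GW
      MEMORY_PER_PROC 16   ! assumed: 256 GB per node / 16 MPI ranks per node = 16 GB
    &END GW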

NUM_TIME_FREQ_POINTS: integer = 30

Usage: NUM_TIME_FREQ_POINTS 30

Number of discrete points for the imaginary-time grid and the imaginary-frequency grid. The more points, the more precise the calculation. Typically, 10 points are good for 0.1 eV precision of band structures and molecular energy levels, 20 points for 0.03 eV precision, and 30 points for 0.01 eV precision; see Table I in [https://doi.org/10.1021/acs.jctc.0c01282]. The GW computation time increases roughly linearly with NUM_TIME_FREQ_POINTS.
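
As a sketch, targeting the precision levels quoted above:

    &GW
      NUM_TIME_FREQ_POINTS 30   ! ~0.01 eV precision; 10 (~0.1 eV) or 20 (~0.03 eV) for cheaper runs
    &END GW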

SIZE_LATTICE_SUM_V: integer = 5

Usage: SIZE_LATTICE_SUM_V 4

Determines the number of cells R included in the lattice sum of the Coulomb matrix, V_PQ(k) = sum_R < P, cell 0 | 1/r | Q, cell R >. The default SIZE_LATTICE_SUM_V 5 gives excellent convergence; this parameter does not need to be touched.
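
Putting the keywords together, a hedged sketch of a complete &GW section for a 2D material; all numerical values are illustrative assumptions carried over from the examples above:

    &GW
      EPS_FILTER           1.0E-8   ! default sparse-multiplication filter
      NUM_TIME_FREQ_POINTS 30       ! ~0.01 eV precision
      MEMORY_PER_PROC      16       ! assumed: 256 GB node / 16 ranks per node
      KPOINTS_CHI_EPS_W    4 4 1    ! even mesh in periodic x/y, N_z = 1
    &END GW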