Static problems
3.1.1 The structure of the code

Introduction

The computer code of all numerical experiments shares the same structure. This structure is described on this page using the code of the Method of manufactured solutions (mms/) numerical experiment as an example. The other pages in this book that document the various numerical experiments describe how the code of a particular experiment differs from the structure discussed on this page. It is highly recommended to read this page before experimenting with the code.

The root directory.

The figure below illustrates a listing of the root directory of the static numerical experiments, static/.

Below on this page we will refer to this directory (static/) as the root/ directory.

Most of the directories in the root/ directory contain numerical experiments. There are, however, three special directories: doc/, backup/, and shared/.

The first special directory, doc/, contains the CHANGELOG.txt file.

The second special directory, backup/, is where automatic backups are saved. Each time the user executes

user@computer:root/mms/build/Release$ ./clean

in the build/Release/, build/Debug/, or gmsh/ directory of any experiment, the root/shared/scripts/backup_shared script is called. This script selectively backs up the directory of the experiment into the backup/ directory. See the root/shared/scripts/backup_shared script to find out which files are selected for the backup. The list of files to back up can easily be extended by modifying the find command at the end of the backup_shared script. The name of a backup file contains: (i) the name of the numerical experiment, (ii) the date of the backup, and (iii) the time of the backup. The listing below illustrates how the backup/ directory might look.

The user needs to empty the backup directory from time to time, as its size grows each time a clean command is executed. The backup option can be disabled by changing the second line of the root/shared/scripts/backup_shared file from

DO_BACKUP=1

to

DO_BACKUP=0

Note that no spaces are allowed around the equals sign, "=".

The third special directory, shared/, contains the code shared by multiple numerical experiments. The backup_shared script discussed above is an example of such shared code. Another example is the StaticScalarSolver::Solver class template, which is used in most of the numerical experiments. The code of this solver is located in the file root/shared/include/static_scalar_solver.hpp. The code in the shared/ directory is organized into three subdirectories as shown in the figure below.

The four scripts in the root/ directory, i.e., clean, setup, build, and run-all, clean up all the numerical experiments, build the debug and release versions of the executable files of all numerical experiments, and run the release versions. The setup script deletes the links in all directories of the numerical experiments and creates them anew. That is, the following chain of commands executed in the root/ directory cleans, links, builds, and runs all numerical experiments.

user@computer:root$ ./clean
user@computer:root$ ./setup
user@computer:root$ ./build
user@computer:root$ ./run-all

Note that these scripts need to know which items in the root/ directory are directories of experiments. They do so by making a list of the experiment directories:

list_dirs=`ls -I backup -I shared -I doc -I clean -I build -I setup -I run-all -I README.md`

This line can be translated into English as: any item in the current directory is a directory of an experiment unless it is named backup, shared, doc, clean, build, setup, run-all, or README.md. Therefore, arbitrary files and directories must not be placed into the root/ directory. Any new item will be interpreted as the directory of a numerical experiment that has a particular structure (discussed below) and contains code that can be compiled and run without errors. If you need to add a new item to the root/ directory which is not a new numerical experiment, modify the list_dirs statement in all four scripts like so:

list_dirs=`ls -I backup -I shared -I doc -I clean -I build -I setup -I run-all -I README.md -I new_item`

Of course, you are free to save any files and directories in the root/ directory. In that case, however, the four scripts will not work as intended. This is not a big problem: you can always execute these four operations (clean, link, build, and run) in the directory of the numerical experiment you are interested in. There is no particular need to execute these four operations in batch.

A directory of a numerical experiment

We will consider the structure of the mms/ directory as an example. The figure below illustrates the content of the directory after cleanup.

There are two files in the mms/ directory, backup and CMakeLists.txt, and one link, compile_commands.json.

The backup script backs up the mms/ directory by calling the backup_shared script discussed above.

The CMakeLists.txt file contains configuration information and instructions for the cmake build system. Some macro definitions are specified in CMakeLists.txt. If you cannot find a macro definition in the C++ code, have a look at CMakeLists.txt. For example, the macro definitions DIMENSION__ and HYPERCUBE__ in the code of the mms/ experiment are specified in the last four lines of CMakeLists.txt:

...
set(PRJ_NAME "mms")
...
set(TARGET_SQUARE "${PRJ_NAME}-square")
set(TARGET_CIRCLE "${PRJ_NAME}-circle")
set(TARGET_CUBE "${PRJ_NAME}-cube")
set(TARGET_SPHERE "${PRJ_NAME}-sphere")
...
target_compile_options(${TARGET_SQUARE} PRIVATE -DDIMENSION__=2 -DHYPERCUBE__=1)
target_compile_options(${TARGET_CIRCLE} PRIVATE -DDIMENSION__=2 -DHYPERCUBE__=0)
target_compile_options(${TARGET_CUBE} PRIVATE -DDIMENSION__=3 -DHYPERCUBE__=1)
target_compile_options(${TARGET_SPHERE} PRIVATE -DDIMENSION__=3 -DHYPERCUBE__=0)

This code instructs cmake to generate four executable files: two for the two-dimensional version of the experiment (mms-square, mms-circle) and two for the three-dimensional version (mms-cube, mms-sphere). The second word in the file names, such as "square" in "mms-square", denotes the shape of the problem domain.
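
To illustrate how these compile-time definitions might be consumed, consider the hypothetical snippet below. It is only a sketch of the idea; the actual dispatch in the mms/ sources may be organized differently.

#include <iostream>

// A minimal sketch: the macros passed by cmake select the dimension and the
// shape of the problem domain at compile time.
int main()
{
#if HYPERCUBE__ == 1
  const char *shape = (DIMENSION__ == 2) ? "square" : "cube";
#else
  const char *shape = (DIMENSION__ == 2) ? "circle" : "sphere";
#endif

  std::cout << DIMENSION__ << "D problem on a " << shape << " domain\n";
  return 0;
}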

The compile_commands.json link is not essential for the code of the numerical experiment. It provides information on the exact compiler calls for all translation units. This is only needed to supply an editor (vim with the YouCompleteMe plugin installed, for example) with the information required for auto-completion and quick navigation through the code. Otherwise the link is not used; it is not a part of the code. The compile_commands.json file itself is located in the build/Debug directory. It is generated each time the ./build command is executed in the build/Debug directory and deleted each time the ./clean command is executed there. If the compile_commands.json link is broken, i.e., shown in red (which is not good), change to build/Debug and execute the following:

user@computer:mms/build/Debug$ ./build

The color of the link will change and the data will be available to the editor.

Next, let us consider the directories in mms/.

The C++ header and source files specific for the mms/ experiment are located in the include/ and src/ directories.

There are two build configurations: debug and release. Accordingly, there are two directories, Debug and Release, in mms/build and in mms/bin. To build the code in the debug configuration one needs to change into the mms/build/Debug/ directory and execute the following.

user@computer:root/mms/build/Debug$ ./build

As a result, the script root/shared/scripts/build_debug will be executed. The executable files mms-square, mms-circle, mms-cube, and mms-sphere will be written into the mms/bin/Debug/ directory. One needs to change into this directory to start debugging:

user@computer:root/mms/build/Debug$ cd ../../bin/Debug
user@computer:root/mms/bin/Debug$ gdb -quiet -tui mms-cube

The data files generated by the program will be saved into the Data/ directory. Executing the clean command in the mms/build/Debug/ directory will: (i) back up the mms/ directory into root/backup/ as discussed above, (ii) remove all the files generated by the build script, (iii) remove all the data files generated by the executable files, and (iv) set up the mms/bin/Debug/Data/ directory and its subdirectories. Building and executing the files in the release configuration is done similarly in the mms/build/Release and mms/bin/Release directories. Note that the executable files generated in the release configuration run significantly faster. Their debug counterparts, however, contain the symbols for debugging. Moreover, the Assert macros of deal.II work only in the debug configuration. For this reason, the error messages are far more informative in the debug configuration. I strongly recommend using both the debug and the release configurations.
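
To illustrate the difference, here is a generic deal.II-style sanity check (an illustration, not a quote from the mms/ code): Assert is compiled out when deal.II is built in release mode, while AssertThrow remains active in both configurations.

#include <deal.II/base/exceptions.h>

using namespace dealii;

void check_mesh_is_loaded(const unsigned int n_active_cells)
{
  // Active only in the debug configuration; on failure it reports the file,
  // the line, and the violated condition.
  Assert(n_active_cells > 0, ExcMessage("The mesh has not been loaded."));

  // Active in both configurations.
  AssertThrow(n_active_cells > 0, ExcMessage("The mesh has not been loaded."));
}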

In all numerical experiments Gmsh is used for creating meshes, see the discussion on this page. The gmsh/ directory contains the master geo files, and the scripts ./build and ./clean. Executing the ./build script generates a set of refined (cloned) versions of the master meshes in the data/ directory. The executable files located in the mms/bin/ directory use the mesh files located in the mms/gmsh/data/ directory. If the mesh files are missing, rebuild them by executing ./build in the gmsh/ directory.

The simulation code.

Finally, let us consider the simulation code itself. The class diagram below illustrates the structure of the code of the mms/ experiment.

The class Constants::Physics contains various constants, such as the permittivity of free space. The SettingsMMS class augments these constants with the input data specific to the mms/ numerical experiment. The ID of the boundary to which the Dirichlet boundary condition must be applied is an example of such input data.
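
A minimal sketch of this idea is shown below. The member names are hypothetical (except bid_dirichlet, which is used later on this page); see the headers of the mms/ experiment for the actual class.

#include <deal.II/base/types.h>

// Hypothetical sketch: physical constants augmented with input data
// specific to the mms/ experiment.
class SettingsMMS
{
public:
  // Permittivity of free space, F/m.
  const double ep_0 = 8.8541878128e-12;

  // ID of the boundary to which the Dirichlet condition is applied.
  const dealii::types::boundary_id bid_dirichlet = 1;
};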

The class template Constants::QuadratureTableScalar maps the degree of the finite elements to the number of quadrature points when the finite elements are scalar. There are four tables that do the mapping. Two tables are for two-dimensional problems: one is used for assembling the system of linear equations, the other for computing the error norms. There are two similar tables for three-dimensional problems. This class template is the place where the number of quadrature points used in different circumstances can be adjusted simultaneously for all experiments. The class template Constants::QuadratureTableVector serves the exact same purpose when the finite elements are vector-valued.
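
The idea behind these tables can be sketched as follows. The class and member names below are hypothetical and the mapping rules are assumptions; the actual tables live in the shared/ directory and may use different numbers of points.

// Hypothetical sketch: map the finite-element degree to the number of
// quadrature points used for assembly and for computing error norms.
template<int dim>
class QuadratureTableScalarSketch
{
public:
  explicit QuadratureTableScalarSketch(unsigned int fe_degree)
    : fe_degree(fe_degree)
  {}

  unsigned int points_for_assembly() const { return fe_degree + 1; } // assumed rule
  unsigned int points_for_norms() const { return fe_degree + 2; }    // assumed rule

private:
  const unsigned int fe_degree;
};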

The class Misc::MainOutputTable implements the convergence tables that are printed on the screen and saved into files.

The class BatchMMS contains the main loop of the program.

The mms/ experiment computes the electric scalar potential \(\Phi\), electric field \(\vec{E}\), and displacement \(\vec{D}\). Then the numerically computed fields are compared to their exact closed-form analytical expressions. The class templates ExactSolutionMMS_PHI, ExactSolutionMMS_E, and ExactSolutionMMS_D implement the exact expressions.
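
In deal.II terms such an exact solution is a class derived from Function<dim>, so that it can be evaluated at arbitrary points and, as shown further below, also serve as a Dirichlet boundary function. A minimal sketch (the class name and the closed-form expression are assumptions; the actual manufactured solution of the mms/ experiment may differ):

#include <deal.II/base/function.h>
#include <deal.II/base/point.h>
#include <cmath>

// Hypothetical sketch of an exact solution for the potential Phi.
template<int dim>
class ExactSolutionPHI_Sketch : public dealii::Function<dim>
{
public:
  virtual double value(const dealii::Point<dim> &p,
                       const unsigned int /*component*/ = 0) const override
  {
    return std::cos(p[0]) * std::cos(p[1]); // assumed closed-form expression
  }
};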

The class template SolverMMS inherits from StaticScalarSolver::Solver, which solves the general static scalar boundary value problem. To make the boundary value problem specific to the mms/ experiment, one needs to communicate to StaticScalarSolver::Solver the coefficients and the right-hand sides of the boundary value problem specific to the mms/ experiment, i.e., \(\epsilon\), \(\rho_f\), \(\eta\), \(\gamma\), \(\sigma\), and \(\kappa_f\). The coefficients and the right-hand sides that belong to the PDE and to the natural boundary and interface conditions (that is, all of them but \(\eta\)) are communicated to the solver by implementing the value_list(...) methods of the following class templates:

StaticScalarSolver::TheCoefficient - \(\epsilon\)
StaticScalarSolver::PdeRhs - \(\rho_f\)
StaticScalarSolver::Gamma - \(\gamma\)
StaticScalarSolver::RobinRhs - \(\sigma\)
StaticScalarSolver::FreeSurfaceCharge - \(\kappa_f\)

The definitions of these class templates are located in shared/include/static_scalar_input.hpp. These definitions are shared by all experiments. The implementation of their value_list(...) member functions is specific to each numerical experiment and is located in the src/ directory of that experiment. In the mms/ experiment, for example, it is located in mms/src/static_scalar_input.cpp.
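
For example, an implementation of value_list(...) for the coefficient \(\epsilon\) could look roughly like the sketch below. The signature mirrors the StaticVectorSolver examples shown further down this page; the template arguments <2, 1> (dimension, stage) and the value of the coefficient are assumptions.

// Hypothetical sketch of an entry in mms/src/static_scalar_input.cpp
// (assumes static_scalar_input.hpp from shared/include/ is included):
// a homogeneous coefficient over the whole problem domain.
template<>
void StaticScalarSolver::TheCoefficient<2, 1>::value_list(
  const std::vector<Point<2>> &r,
  types::material_id /*mid*/,
  unsigned int /*cuid*/,
  std::vector<double> &values) const
{
  Assert(r.size() == values.size(),
         ExcDimensionMismatch(r.size(), values.size()));

  for (unsigned int i = 0; i < values.size(); i++)
    values[i] = 1.0; // assumed: epsilon = 1 everywhere
}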

The situation is a bit different with \(\eta\). The Dirichlet boundary condition is essential (Section 4.1.5 Dichotomy of boundary and interface conditions). It is communicated to StaticScalarSolver::Solver in a different way. The class template SolverMMS overrides the virtual function fill_dirichlet_stack of StaticScalarSolver::Solver

template<int dim>
class SolverMMS : public SettingsMMS, public Solver<dim>
{
  ...
  const ExactSolutionMMS_PHI<dim> exact_solution;
  ...
  virtual void fill_dirichlet_stack() override final;
  ...
};

and fills the stack:

template<int dim>
void SolverMMS<dim>::fill_dirichlet_stack()
{
  Solver<dim>::dirichlet_stack = {{bid_dirichlet, &exact_solution}};
}

The mms/ experiment has only one boundary with the Dirichlet condition. In general, the stack above can have many pairs of boundary IDs and the corresponding boundary functions. The stack of the Dirichlet boundary conditions is a data member of the class template StaticScalarSolver::Solver:

template<int dim, int stage = 1>
class Solver
{
  ...
protected:
  std::map<types::boundary_id, const Function<dim> *> dirichlet_stack;
  ...
};
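
For instance, a problem with two Dirichlet boundaries could fill the stack as in the hypothetical snippet below. The boundary IDs 1 and 3 and the Function<dim> objects phi_on_boundary_1 and phi_on_boundary_3 are made up for illustration; both IDs are odd, in line with the convention explained below.

// Hypothetical: two Dirichlet boundaries, each with its own boundary function.
Solver<dim>::dirichlet_stack = {{1, &phi_on_boundary_1},
                                {3, &phi_on_boundary_3}};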

The StaticScalarSolver::Solver applies the Dirichlet boundary conditions when setting up the matrices:

template<int dim, int stage>
void Solver<dim, stage>::setup()
{
  ...
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wunused-but-set-variable"
  for (auto item : dirichlet_stack)
  {
    Assert(item.first % 2 == 1, ExcInternalError());
  }
#pragma GCC diagnostic pop
  VectorTools::interpolate_boundary_values(
    MappingQ<dim>(mapping_degree),
    dof_handler,
    dirichlet_stack,
    constraints);
}
...

By convention the ID of a boundary to which the Dirichlet condition is applied must be odd.

As discussed above, one of the functions of the SolverMMS class template is to fill the stack with the Dirichlet boundary conditions. It has two other functions: creating the mesh and implementing the solver of the system of linear equations. These two functions are implemented similarly, i.e., by overriding the respective virtual functions

template<int dim>
class SolverMMS : public SettingsMMS, public Solver<dim>
{
  ...
  virtual void make_mesh() override final;
  virtual void fill_dirichlet_stack() override final;
  virtual void solve() override final;
};

Note that creating a mesh is not just loading a mesh from a file. Creating a mesh also involves assigning boundary IDs, material IDs, user IDs, and manifolds to mesh components. These IDs eventually end up in the objects that describe the coefficients and the right-hand sides of the boundary value problem, so they can be used for implementing complex problem domains. See, for example, how the surface free-current density, \(\kappa_f\), is distributed on an interface in the sch/ experiment. Note also that the boundary IDs can be assigned in the geo file. So, if you cannot find the place in the program where the boundary IDs are assigned, look in the geo files located in the gmsh/ directory. In the file mms/gmsh/square.geo, for example, the boundary IDs are assigned as follows:

Physical Line(1) = {2, 3};
Physical Line(2) = {1, 4};

Here sides nr. 2 and nr. 3 of the square get boundary ID 1, and boundary ID 2 is assigned to sides nr. 1 and nr. 4.
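
In deal.II, loading such a mesh and assigning the remaining IDs could look roughly like the sketch below. The file name and the material ID are made up for illustration; the actual make_mesh() of the mms/ experiment may also attach manifolds and assign user IDs.

#include <deal.II/grid/tria.h>
#include <deal.II/grid/grid_in.h>
#include <fstream>

using namespace dealii;

// Hypothetical sketch: read a Gmsh-generated mesh and assign material IDs.
// Boundary IDs defined via Physical Line / Physical Surface in the geo file
// are picked up from the msh file automatically.
void make_mesh_sketch(Triangulation<2> &triangulation)
{
  GridIn<2> grid_in;
  grid_in.attach_triangulation(triangulation);

  std::ifstream input("../../gmsh/data/square.msh"); // assumed file name
  grid_in.read_msh(input);

  for (const auto &cell : triangulation.active_cell_iterators())
    cell->set_material_id(1); // example: a single material everywhere
}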

The class template StaticScalarSolver::PdeRhsCvp in the diagram above deserves a few words. The abbreviation Cvp stands for "current vector potential". Normally, the function fed to the right-hand side of the div-grad partial differential equation, such as \(\rho_f\), is a scalar. When solving for the two-dimensional current vector potential, the function on the right-hand side of the partial differential equation is a vector, i.e., \(\vec{J}_f\). Therefore, there is a need for two class templates that implement the right-hand side of the div-grad equation. One must implement the scalar function \(\rho_f\) when solving

\[ - \vec{\nabla} \cdot \big( \epsilon \vec{\nabla} \Phi \big)= \rho_f \]

and another must implement a two-dimensional vector function \(\vec{J}_f\) when solving

\[ -\vec{\nabla} \cdot \big(\vec{\nabla} T \big)= \vec{\nabla}\overset{S}{\times} \vec{J}_f \]

These two class templates are StaticScalarSolver::PdeRhs and StaticScalarSolver::PdeRhsCvp, respectively. If an object of StaticScalarSolver::Solver is constructed with the parameter type_of_pde_rhs=2 or type_of_pde_rhs=3 in a two-dimensional problem (dim=2), StaticScalarSolver::PdeRhsCvp is used to derive the values of the function on the right-hand side of the PDE. If type_of_pde_rhs=1, StaticScalarSolver::PdeRhs is used in both two- and three-dimensional problems.

SolverMMS computes the electric potential, \(\Phi\). The computed potential is then converted into the electric field, \(\vec{E}\), by an object derived from the class template StaticScalarSolver::ProjectPHItoE. This is a wrapper class template derived from StaticScalarSolver::ProjectHgradToHcurl. Similarly, the displacement, \(\vec{D}\), is computed by feeding the numerically computed potential to an object derived from the class template StaticScalarSolver::ProjectPHItoD. The latter is a wrapper class template derived from StaticScalarSolver::ProjectHgradToHdiv. All projectors are shared between the numerical experiments. They are located in the shared/include/ directory.
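
In other words, assuming the standard constitutive relation of electrostatics, the two projectors evaluate

\[ \vec{E} = - \vec{\nabla} \Phi, \qquad \vec{D} = \epsilon \vec{E}, \]

with \(\vec{E}\) projected into the H(curl) function space and \(\vec{D}\) into the H(div) function space.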

All numerical experiments more or less share the structure of the code discussed above. The solvers and projectors can differ from experiment to experiment. The main components, however, remain the same. All numerical experiments have class templates that implement the solver, the exact solution, the global settings, the value_list(...) member functions that make the boundary value problem specific to the experiment, etc.

Finally, let us discuss the "stage" template parameter, see the diagram above. Multiple solvers and projectors can be used in the same experiment. In the mms-vt-i/ experiment, for example, there are two solvers and two projectors. All of them belong to the StaticVectorSolver namespace. They share the same input class templates, which are declared in the shared/include/static_vector_input.hpp file. The implementation of the input specific to the experiment is in the mms-vt-i/src/static_vector_input.cpp file. Consequently, there is only one set of the coefficients and the right-hand sides of the boundary value problem for all four solvers and projectors. The two solvers in the mms-vt-i/ experiment, however, solve different boundary value problems, so there must be at least two sets of coefficients and right-hand sides. This is what the stage template parameter is for. Each solver and its input can be instantiated with a unique value of the stage parameter. This allows each solver to have its own set of input parameters. In the mms-vt-i/ numerical experiment, for example, the two solvers are instantiated at stage=0 and stage=2. In the mms-vt-i/include/solver.hpp file:

class SolverMMSVTI_T : public SettingsMMSVTI, public Solver1<3,0>
{...};
...
class SolverMMSVTI_A : public SettingsMMSVTI, public Solver2<3,2>
{...};

This allows them to have different implementations of the coefficient in the mms-vt-i/src/static_vector_input.cpp:

template<>
void StaticVectorSolver::TheCoefficient<3, 0>::value_list(
  const std::vector<Point<3>> &r,
  types::material_id mid,
  unsigned int cuid,
  std::vector<double> &values) const
{
  Assert(r.size() == values.size(),
         ExcDimensionMismatch(r.size(), values.size()));

  for (unsigned int i = 0; i < values.size(); i++)
    values[i] = 1.0;
}
...
template<>
void StaticVectorSolver::TheCoefficient<3, 2>::value_list(
  const std::vector<Point<3>> &r,
  types::material_id mid,
  unsigned int cuid,
  std::vector<double> &values) const
{
  Assert(r.size() == values.size(),
         ExcDimensionMismatch(r.size(), values.size()));

  for (unsigned int i = 0; i < values.size(); i++)
    values[i] = permeability(r.at(i)[0], r.at(i)[1], mu_0);
}

The stage template parameter makes it possible to have as many inputs in one numerical experiment as needed. Consequently, we can solve as many different boundary value problems in one numerical experiment as we please.