Resolution

Settings of the solver

The settings of the solver can be customized by passing parameters at the initialization of BEMSolver:

solver = cpt.BEMSolver(engine=..., method=...)

Note

As a shortcut, assuming you want to use the default engine, the green_function argument can be used to directly change the implementation of the Green function:

solver = cpt.BEMSolver(green_function=..., method=...)

which is equivalent to

solver = cpt.BEMSolver(engine=BasicMatrixEngine(green_function=…), method=…)

Method

The argument method (default value: "indirect") controls the approach employed to solve for the velocity potential. Two methods are implemented:

  1. direct method (also known as “potential formulation”, among other names) with method="direct",

  2. indirect method (also known as “source formulation”), by default and with method="indirect".

The direct method appears to be slightly more accurate on some test cases (especially when thin plates are involved) but is only implemented for the computation of the forces on the floating body without forward speed. Any other post-processing (e.g. free surface elevation) and forward speed currently require the indirect method.

Since v2.3, the method is a parameter of BEMSolver. For backward compatibility, it can also be passed to solve(), solve_all() and fill_dataset(), in which case it overrides the general setting of the solver.

Engine

A class that builds the interaction matrix, deriving from MatrixEngine. A single engine is built-in, but others with additional features can be found in other packages.

BasicMatrixEngine (Default)

Capytaine’s default engine, which should be a good compromise between robustness, complexity and speed.

The object can be initialized with the following options:

green_function

See below for details.

linear_solver (Default: 'lu_decomposition')

This option sets the solver used for the linear systems arising in the resolution of the BEM problem. Passing a string makes the code use one of the predefined solvers. Three of them are available:

  • 'lu_decomposition' is a direct linear solver with caching of the LU decomposition. It is the default as it is the most robust method while providing predictable computation time.

  • 'lu_decompositon_with_overwrite' is the same as 'lu_decomposition', but it overwrites the matrix data when computing the LU decomposition, which reduces the RAM usage of the method but might be less robust in non-standard workflows.

  • 'gmres' is the GMRES iterative solver, which is faster in many situations but fails completely in some others (for instance, in the presence of irregular frequencies). Its RAM usage is lower than that of 'lu_decomposition' and similar to that of 'lu_decompositon_with_overwrite'.

Alternatively, any function taking as arguments a matrix and a vector and returning a vector can be given to the solver:

import numpy as np
import capytaine as cpt

def my_linear_solver(A, b):
    """A dumb solver for testing."""
    return np.linalg.inv(A) @ b

my_bem_solver = cpt.BEMSolver(
    engine=cpt.BasicMatrixEngine(linear_solver=my_linear_solver)
)

This option can be used for instance to apply a custom preconditioning to the iterative solver.
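For example, a custom solver wrapping SciPy's GMRES with a simple Jacobi (diagonal) preconditioner might be sketched as follows. The function name and the choice of preconditioner are illustrative, not part of Capytaine's API:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def jacobi_preconditioned_gmres(A, b):
    """Solve A x = b with GMRES, preconditioned by the inverse diagonal of A."""
    A = np.asarray(A)
    diag = np.diag(A)
    # The preconditioner is applied as a linear operator v -> D^-1 v
    M = LinearOperator(A.shape, matvec=lambda v: v / diag, dtype=A.dtype)
    x, info = gmres(A, b, M=M, atol=1e-10)
    if info != 0:
        raise RuntimeError(f"GMRES did not converge (info={info})")
    return x
```

Such a function can then be passed as the linear_solver of BasicMatrixEngine, in the same way as my_linear_solver above.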

Green function

A class used to evaluate the Green function, deriving from AbstractGreenFunction. The following classes are available:

Delhommeau (Default)

The method implemented in Nemoh (see [Del87] and [Del89]). See the documentation for details on the available options.

The floating_point_precision argument accepts the values "float64" (default) and "float32". The latter uses less RAM, so it might be preferable for very large meshes.

In Capytaine (and Nemoh), the integrals of the wave term \(\mathcal{G}(r, z)\) and of its derivative \(\frac{\partial \mathcal{G}}{\partial r}\) are approximated using surrogate models, which take the form of a tabulation of these functions' values on a grid of \((r, z)\), precomputed at the initialization of the program. A third-order Lagrange polynomial interpolation is employed to obtain values between the tabulated points.
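The interpolation step can be illustrated in isolation with a generic third-order Lagrange interpolation through four tabulated points. This is a sketch of the principle, not Capytaine's actual implementation:

```python
import numpy as np

def lagrange_interpolation_3(x_nodes, y_nodes, x):
    """Evaluate at x the third-order Lagrange polynomial passing through
    the four tabulated points (x_nodes[i], y_nodes[i])."""
    result = 0.0
    for i in range(4):
        term = y_nodes[i]
        for j in range(4):
            if j != i:
                term *= (x - x_nodes[j]) / (x_nodes[i] - x_nodes[j])
        result += term
    return result

# Tabulate a known function on a coarse grid and interpolate between the nodes
x_nodes = np.array([0.0, 0.1, 0.2, 0.3])
y_nodes = np.exp(x_nodes)
approx = lagrange_interpolation_3(x_nodes, y_nodes, 0.15)
```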

In version 1 of Capytaine (as in version 2 of Nemoh), the tabulation ranges of \(r\) and \(z\) are set to \([0, 100]\) with \(328\) discretization values and \([-16, 0]\) with \(46\) discretization values, respectively. In more recent versions, these can be user-defined with the following options:

import capytaine as cpt

# Legacy (versions 1.x)
gf = cpt.Delhommeau(tabulation_nr=328, tabulation_rmax=100,
                    tabulation_nz=46, tabulation_zmin=-16,
                    tabulation_nb_integration_points=251,
                    tabulation_grid_shape="legacy",
                    finite_depth_method="legacy",
                    finite_depth_prony_decomposition_method="fortran",
                    gf_singularities="high_freq",
                    floating_point_precision="float64")

# Default in Capytaine 2.1
gf = cpt.Delhommeau(tabulation_nr=676, tabulation_rmax=100,
                    tabulation_nz=372, tabulation_zmin=-251,
                    tabulation_nb_integration_points=1001,
                    tabulation_grid_shape="scaled_nemoh3",
                    finite_depth_method="legacy",
                    finite_depth_prony_decomposition_method="fortran",
                    gf_singularities="high_freq",
                    floating_point_precision="float64")

# Default in Capytaine 2.2 and 2.2.1
gf = cpt.Delhommeau(tabulation_nr=676, tabulation_rmax=100,
                    tabulation_nz=372, tabulation_zmin=-251,
                    tabulation_nb_integration_points=1001,
                    tabulation_grid_shape="scaled_nemoh3",
                    finite_depth_method="legacy",
                    finite_depth_prony_decomposition_method="fortran",
                    gf_singularities="low_freq",
                    floating_point_precision="float64")

# Default in Capytaine 2.3 and 2.3.1
gf = cpt.Delhommeau(tabulation_nr=676, tabulation_rmax=100,
                    tabulation_nz=372, tabulation_zmin=-251,
                    tabulation_nb_integration_points=1001,
                    tabulation_grid_shape="scaled_nemoh3",
                    finite_depth_method="newer",
                    finite_depth_prony_decomposition_method="python",
                    gf_singularities="low_freq",
                    floating_point_precision="float64")

In version 2.1, the default numbers of \(r\) and \(z\) values were increased to \(676\) and \(372\), respectively. While the range of \(r\) is kept the same, the \(z\) range has been extended to \([-251, 0]\). The option tabulation_grid_shape is used to switch between the new distribution of points inspired by Nemoh version 3 and the "legacy" approach. The tabulation_nb_integration_points option controls the accuracy of the precomputed tabulation points themselves.

In version 2.2, the way singularities are extracted from the infinite depth Green function before integration has changed. The "low_freq" variant is expected to be more accurate at low frequencies and near the free surface. The former variant is still available by setting the gf_singularities parameter as in the above example.

In version 2.3, improved variants for the computation of the finite depth Green function were introduced.

The first time the class is initialized with a given set of parameters, some tabulated data are precomputed and stored on disk. The default location is an OS-dependent cache directory. The location at which the data are stored can be configured by passing tabulation_cache_dir to Delhommeau or by setting the environment variable CAPYTAINE_CACHE_DIR.
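For instance, the cache location can be set through the environment variable before starting Python (the path below is only an example):

```shell
export CAPYTAINE_CACHE_DIR=/path/to/my/cache
```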

LiangWuNoblesseGF

The infinite depth Green function from the following papers:

[1] H. Wu, C. Zhang, Y. Zhu, W. Li, D. Wan, F. Noblesse, A global approximation to the Green function for diffraction radiation of water waves, Eur. J. Mech. B Fluids 65 (2017) 54-64.

[2] H. Liang, H. Wu, F. Noblesse, Validation of a global approximation for wave diffraction-radiation in deep water, Appl. Ocean Res. 74 (2018) 80-86.

Please cite them if you use this implementation.

FinGreen3D

The finite depth Green function from the following paper, as implemented in HAMS:

Yingyi Liu, Shigeo Yoshida, Changhong Hu, Makoto Sueyoshi, Liang Sun, Junliang Gao, Peiwen Cong, Guanghua He. A reliable open-source package for performance evaluation of floating renewable energy systems in coastal and offshore regions. Energy Conversion and Management, 174 (2018): 516-536.

Please cite this paper if you use this implementation.

HAMS_GF

This class is a thin wrapper around the two implementations above, using one or the other depending on the water depth.

Advanced users can write their own class to evaluate the Green function. See the example in the Example scripts section.

Solving the problem

Once the solver has been initialized, it can be used to solve problems with the solve() method:

result = solver.solve(problem, keep_details=False)

The optional argument keep_details (default value: True) controls whether the source and potential distributions should be saved in the result object. These data are necessary for some post-processing such as the computation of the Kochin function or the reconstruction of the free surface elevation. However, when only the force on the body is of interest, they can be discarded to save space in memory.

A list of problems can be solved at once in an optimal order with:

list_of_results = solver.solve_all(list_of_problems, keep_details=False)

where solve_all() accepts the same optional keyword arguments as solve().

When using solve_all(), a single problem raising an error does not interrupt the whole resolution. Instead, the error is displayed in the log and the corresponding result is replaced by a FailedDiffractionResult or a FailedRadiationResult.

Progress bar

The methods solve_all() and fill_dataset() display by default an animated progress bar while solving. This behavior can be turned off by giving the optional argument progress_bar=False to either method or by setting the environment variable CAPYTAINE_PROGRESS_BAR to False. This might be useful in testing environments and CI.

Timer

The solver BEMSolver keeps track of the time spent in some steps of the resolution. Results are stored in the timer attribute and can also be accessed via timer_summary().

RAM usage

At the beginning of a batch of computations, the solver computes the estimated RAM usage of the resolutions, taking the parallelization into account. The estimation is displayed at the INFO log level (off by default) if it is low, and at the WARNING log level (on by default) if it is higher than 8 GB. Expect the resolution to fail if the RAM usage is higher than the available RAM.

If the optional dependency psutil is installed, the actual RAM usage is measured and displayed at the end of the computation at the INFO log level.

Parallelization

Capytaine includes two kinds of parallelization.

                                                   joblib              OpenMP
  Single resolution (solve)                        —                   ✓
  Batch resolution (solve_all and fill_dataset)    ✓ (if installed)    ✓

Single problem with OpenMP

When solving a single problem, matrix constructions and linear algebra operations (using BLAS or MKL depending on your installation) can be parallelized by OpenMP. This feature is installed and enabled by default. The number of threads used can be controlled by the environment variable OMP_NUM_THREADS, as well as MKL_NUM_THREADS (for the linear algebra when using Intel’s MKL library, usually distributed with conda). Note that these environment variables must be set before the start of the Python interpreter. Alternatively, the number of threads can be changed dynamically with the threadpoolctl library (see also GH 47).

Batch resolution with joblib

When solving several independent problems, they can be solved in parallel. This feature (new in version 1.4) requires the optional dependency joblib to be installed. The methods solve_all() and fill_dataset() take an optional keyword argument n_jobs which controls the number of jobs to run in parallel during the batch resolution. Since joblib may disturb user feedback (logging and error reporting), it is currently disabled by default.

When n_jobs=1 (the default) or joblib is not installed, no parallel batch resolution happens (although OpenMP parallelization might still be enabled).

When n_jobs=-1, all CPU cores are used (and joblib should automatically disable the OpenMP parallelization).

The two parallelization layers (OpenMP and joblib) serve different purposes. If you have a relatively small mesh but study a large number of sea states, you should use the joblib parallelization. On the other hand, if your mesh is large or your available RAM is low, it might be beneficial to turn off the joblib parallelization and use only OpenMP.