Wrong assembled global PETSc matrix or vector on envinf test machines
The global assembly of the PETSc matrix and vector on the envinf test machines exhibits incorrect behavior.
Two benchmarks can be used to reproduce the issue:
- computing with 1 partition (mpirun -np 1): use ThermoRichardsFlow/TaskCDECOVALEX2023/Decovalex-0-TRF.prj.
- computing with 3 partitions (mpirun -np 3): use ThermoRichardsFlow/TaskCDECOVALEX2023/WithPicardNonLinearSolverAndPETSc/Decovalex-0-TRF.prj.
The branch for the test is update_petsc_test.
Comparing the matrices and vectors obtained on my laptop (Arch Linux) with those obtained on envinf2 (Arch Linux) shows the issue. The results obtained on my laptop are correct.
For test 1, the Newton method is used, and the computation is forced to stop after computeResidual. The computed xdot vector is output for comparison. All entries of the rhs vector obtained on envinf2 are double those of the negated b vector after calling
LinAlg::axpy(res, -1.0, b);
which is wrong because the initial value of res is zero.
For test 2, the Picard method is used, and the computation is forced to stop after the global assembly. The computed A matrix is output before and after applying the known values for comparison. The output A matrix from envinf2 is wrong, even though all local matrices and vectors, DOF tables, and global vectors from both computers are identical. The wrong computation happens in the call to LinAlg::aypx in computeA.
The same PETSc version is used on both computers for the tests.
The bug may be in the axpy calculation, or somewhere in the MatrixVectorProvider.