The overall charge of the system is 0
:-) GROMACS - gmx grompp, 2025.3 (-:
Executable: /home/sxh/gromacs-2025.3/build/bin/gmx_mpi
Data prefix: /home/sxh/gromacs-2025.3 (source tree)
Working dir: /mnt/f/AI4S/Gromacs/LPME/mxene-datcha
Command line:
gmx_mpi grompp -f system.mdp -c system.gro -p system.top -o system.tpr -maxwarn 2
Ignoring obsolete mdp entry 'title'
Ignoring obsolete mdp entry 'ns_type'
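The two "Ignoring obsolete mdp entry" notes mean grompp silently drops `title` and `ns_type`: with the Verlet cutoff scheme, neighbour searching is always grid-based, so both lines can simply be deleted from `system.mdp`. A minimal sketch of the cleaned EM section (the `emtol` and `nsteps` values match the log below; everything else in the real file is unknown):

```
; system.mdp — obsolete entries removed
; title   = ...     <- delete: ignored by modern GROMACS
; ns_type = grid    <- delete: Verlet scheme always uses grid searching
integrator = steep      ; steepest-descent minimization (per the mdrun output)
emtol      = 0.1        ; matches "Tolerance (Fmax) = 1.00000e-01"
nsteps     = 100000     ; matches "Number of steps = 100000"
```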
Setting the LD random seed to -269216301
Generated 15 of the 15 non-bonded parameter combinations
Generating 1-4 interactions: fudge = 0.5
Generated 15 of the 15 1-4 parameter combinations
Excluding 3 bonded neighbours molecule type 'Ti3C2OH2'
Analysing residue names:
There are: 1 Other residues
Analysing residues not classified as Protein/DNA/RNA/Water and splitting into groups...
Number of degrees of freedom in T-Coupling group rest is 48597.00
The integrator does not provide an ensemble temperature, there is no system ensemble temperature
The largest distance between excluded atoms is 0.667 nm between atom 11102 and 15848
Calculating fourier grid dimensions for X Y Z
Using a fourier grid of 144x80x40, spacing 0.111 0.116 0.112
Estimate for the relative computational load of the PME mesh part: 0.37
This run will generate roughly 12 Mb of data
Back Off! I just backed up system.tpr to ./#system.tpr.2#
GROMACS reminds you: "Don't You Wish You Never Met Her, Dirty Blue Gene?" (Captain Beefheart)
:-) GROMACS - gmx mdrun, 2025.3 (-:
Executable: /home/sxh/gromacs-2025.3/build/bin/gmx_mpi
Data prefix: /home/sxh/gromacs-2025.3 (source tree)
Working dir: /mnt/f/AI4S/Gromacs/LPME/mxene-datcha
Command line:
gmx_mpi mdrun -deffnm system
Back Off! I just backed up system.log to ./#system.log.2#
Reading file system.tpr, VERSION 2025.3 (single precision)
GPU-aware MPI was not detected, will not use direct GPU communication. Check the GROMACS install guide for recommendations for GPU-aware support. If you are certain about GPU-aware support in your MPI library, you can force its use by setting the GMX_FORCE_GPU_AWARE_MPI environment variable.
1 GPU selected for this run.
Mapping of GPU IDs to the 1 GPU task in the 1 rank on this node:
PP:0
PP tasks will do (non-perturbed) short-ranged interactions on the GPU
PP task will update and constrain coordinates on the CPU
Using 1 MPI process
Using 32 OpenMP threads
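The GPU-aware MPI warning is harmless here (a single rank on one GPU, so there is no inter-GPU traffic), but if the MPI library is known to be CUDA-aware, direct GPU communication can be forced with the environment variable the log itself names. A sketch, assuming the run is relaunched from the same shell:

```shell
# Only set this if you are certain the MPI library is GPU-aware;
# the log warns that GROMACS could not detect support automatically.
export GMX_FORCE_GPU_AWARE_MPI=1
echo "$GMX_FORCE_GPU_AWARE_MPI"    # confirm the variable is exported
# gmx_mpi mdrun -deffnm system     # then rerun mdrun in this shell
```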
Back Off! I just backed up system.trr to ./#system.trr.2#
Back Off! I just backed up system.edr to ./#system.edr.2#
Steepest Descents:
Tolerance (Fmax) = 1.00000e-01
Number of steps = 100000
Energy minimization has stopped, but the forces have not converged to the
requested precision Fmax < 0.1 (which may not be possible for your system).
It stopped because the algorithm tried to make a new step whose size was too
small, or there was no change in the energy since last step. Either way, we
regard the minimization as converged to within the available machine
precision, given your starting configuration and EM parameters.
Double precision normally gives you higher accuracy, but this is often not
needed for preparing to run molecular dynamics.
writing lowest energy coordinates.
Back Off! I just backed up system.gro to ./#system.gro.3#
Steepest Descents converged to machine precision in 20 steps,
but did not reach the requested Fmax < 0.1.
Potential Energy = 1.9421158e+07
Maximum force = 7.0130646e+01 on atom 6927
Norm of force = 2.3182152e+01
GROMACS reminds you: "Yeah, uh uh, Neil's Head !" (Neil)
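Steepest descent stalled at machine precision with a maximum force of about 70 kJ mol⁻¹ nm⁻¹, far above the requested 0.1, and the final potential energy is large and positive, which often indicates remaining atomic overlaps. Two common follow-ups are to accept a looser tolerance (an emtol of 0.1 is far stricter than GROMACS's default of 1000) or to chain a conjugate-gradient stage after steepest descent. A hedged mdp sketch for such a second stage (all values are illustrative assumptions, not taken from this run):

```
; em2.mdp — optional second minimization stage (illustrative values)
integrator = cg        ; conjugate gradient often converges further than steep
emtol      = 100.0     ; a looser, usually attainable force target
emstep     = 0.01      ; initial step size in nm
nsteps     = 50000
```

Rerunning grompp with this file on the minimized `system.gro` then gives a fresh `.tpr` for the second stage.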