Hi everyone, I have a question. I am running VASP on my own machine (OS: Ubuntu 16.04, VASP 5.4.4). The system is a Cu(111) slab, 4 layers thick, with the bottom two layers fixed, and I am doing a structure relaxation of the slab.
Recently, every job I submit fails with: Error reading item 'VCAIMAGES' from file INCAR. I would like to ask which part is causing the problem. Thanks.
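From what I have read, this kind of "Error reading item" failure often comes from hidden characters or DOS line endings in the INCAR rather than from any visible tag, so it may be worth checking the file itself first (this is only a guess on my part, and dos2unix is needed only if CRLF endings actually show up):

# Show non-printing characters: CRLF line endings appear as "^M$",
# and a UTF-8 BOM shows up as "M-oM-;M-?" at the start of the file.
cat -A INCAR
# Report the detected file type and line-terminator style.
file INCAR
# If CRLF endings are present, convert them to LF in place.
dos2unix INCAR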
The input files are as follows:
(1)INCAR
system=Cu111
ISTART=0
ICHARG=2
ENCUT=400
EDIFF=1E-4
EDIFFG=-0.01
NSW=200
IBRION=2
ISIF=2
ISMEAR=0
SIGMA=0.1
LWAVE=.FALSE.
LCHARG=.FALSE.
PREC=High
(2)KPOINTS
Automatic Cu-111
0
Monkhorst-Pack
4 4 1
0 0 0
(3)POSCAR
Cu(111)
1.0
5.1366000175 0.0000000000 0.0000000000
-2.5683000088 4.4484261043 0.0000000000
0.0000000000 0.0000000000 21.2910003662
Cu
16
Selective Dynamics
Cartesian
+0.0000000000 +0.0000000000 +0.0000000000 F F F
+2.5683257585 +1.4827939267 +2.0969505409 F F F
-0.0000257524 +2.9656324400 +4.1941142047 T T T
+0.0000000000 +0.0000000000 +6.2910652140 T T T
-1.2841500044 +2.2242131233 +0.0000000000 F F F
+1.2841758748 +3.7070069299 +2.0969505409 F F F
+1.2841243727 +0.7414191966 +4.1941142047 T T T
-1.2841500044 +2.2242131233 +6.2910652140 T T T
+1.2841500044 +2.2242131233 +0.0000000000 F F F
-1.2841243754 +3.7070069299 +2.0969505409 F F F
+3.8524241452 +0.7414191966 +4.1941142047 T T T
+1.2841500044 +2.2242131233 +6.2910652140 T T T
+2.5683000088 +0.0000000000 +0.0000000000 F F F
+0.0000256316 +1.4827939267 +2.0969505409 F F F
+2.5682740201 +2.9656324400 +4.1941142047 T T T
+2.5683000088 +0.0000000000 +6.2910652140 T T T
(4)POTCAR: PAW_GGA Cu
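As a sanity check (not a suspected cause), the potential actually contained in the POTCAR can be listed via its TITEL line; for this run there should be exactly one entry, matching the single Cu species in POSCAR:

# Print the title line(s) of the potential(s) in POTCAR.
grep TITEL POTCAR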
Running it produces the following error:
running on 4 total cores
distrk: each k-point on 4 cores, 1 groups
distr: one band on 1 cores, 4 groups
using from now: INCAR
vasp.5.4.4.18Apr17-6-g9f103f2a35 (build Apr 26 2019 19:12:43) complex
POSCAR found type information on POSCAR Cu
POSCAR found : 1 types and 16 ions
scaLAPACK will be used
LDA part: xc-table for Ceperly-Alder, standard interpolation
POSCAR, INCAR and KPOINTS ok, starting setup
Fatal error in PMPI_Alltoallv: Other MPI error, error stack:
PMPI_Alltoallv(665).............: MPI_Alltoallv(sbuf=0xc2b3f00, scnts=0xbe90da0, sdispls=0xbe90d00, MPI_INTEGER, rbuf=0xc2c5b20, rcnts=0xbe90ea0, rdispls=0xbe90e00, MPI_INTEGER, comm=0xc4000003) failed
MPIR_Alltoallv_impl(416)........: fail failed
MPIR_Alltoallv(373).............: fail failed
MPIR_Alltoallv_intra(226).......: fail failed
MPIR_Waitall_impl(221)..........: fail failed
PMPIDI_CH3I_Progress(623).......: fail failed
pkt_RTS_handler(317)............: fail failed
do_cts(662).....................: fail failed
MPID_nem_lmt_dcp_start_recv(302): fail failed
dcp_recv(165)...................: Internal MPI error! Cannot read from remote process
Two workarounds have been identified for this issue:
1) Enable ptrace for non-root users with:
echo 0 | sudo tee /proc/sys/kernel/yama/ptrace_scope
2) Or, use:
I_MPI_SHM_LMT=shm
I tried running: echo 0 | sudo tee /proc/sys/kernel/yama/ptrace_scope, but it made no difference.
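For the second workaround, my understanding is that I_MPI_SHM_LMT is an Intel MPI environment variable, so it has to be set in the environment of (or passed to) the MPI launcher. A minimal sketch, assuming Intel MPI's mpirun and a VASP binary named vasp_std (the binary name here is just a placeholder):

# Workaround 2: steer Intel MPI's shared-memory large-message
# transfer away from the ptrace-based path.
export I_MPI_SHM_LMT=shm
mpirun -np 4 vasp_std

# Or pass the variable for a single run only:
mpirun -genv I_MPI_SHM_LMT shm -np 4 vasp_std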