
maoxinxina


[Help] Error during a GPAW calculation

The following error appears during a GPAW calculation:
ImportError: numpy.core.multiarray failed to import
--------------------------------------------------------------------------
mpirun has exited due to process rank 9 with PID 25960 on
node a530 exiting improperly. There are two reasons this could occur:

1. this process did not call "init" before exiting, but others in
the job did. This can cause a job to hang indefinitely while it waits
for all processes to call "init". By rule, if one process calls "init",
then ALL processes must call "init" prior to termination.

2. this process called "init", but exited without calling "finalize".
By rule, all processes that call "init" MUST call "finalize" prior to
exiting or it will be considered an "abnormal termination"

This may have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
How should I go about fixing this?
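The "numpy.core.multiarray failed to import" message usually points at the numpy that gpaw-python itself picks up (e.g. a version/ABI mismatch), not at the input script. A minimal diagnostic sketch, assuming it is launched with the same mpirun and gpaw-python as the failing job (the file name check_numpy.py is made up for illustration):

# check_numpy.py -- run as, e.g.:  mpirun -np 4 gpaw-python check_numpy.py
# Prints which interpreter and which numpy each rank actually imports,
# so a stale or mismatched numpy installation shows up immediately.
import sys
import numpy
print(sys.executable)
print(numpy.__version__)
print(numpy.__file__)

If the numpy path or version printed here differs from the one the GPAW build was compiled against, that mismatch is the likely cause of the import error.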

pwl//


Quoted reply:
Post #6: Originally posted by maoxinxina at 2017-03-23 11:20:41
It's already solved. In GPAW calculations, when the system is fairly large, the k-point mesh has to be made smaller for the calculation to run normally. ...

I also run into this problem when calculating the energy of a gas molecule, but that system is very small and the k-points are just the gamma point. I don't know how to set the parameters.
Post #7 | 2018-07-27 12:18:28
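For a small gas-phase molecule, a gamma-point-only calculation can be requested explicitly. A minimal sketch, assuming an ASE + GPAW setup like the one in the traceback above; the N2 molecule, vacuum size, and file names are illustrative, not the poster's actual input:

# Gamma-point-only GPAW setup for an isolated molecule (illustrative values).
# Note: in very old ASE releases such as 3.9.x the molecule builder lived in
# ase.structure rather than ase.build.
from ase.build import molecule
from gpaw import GPAW

atoms = molecule('N2')
atoms.center(vacuum=5.0)          # molecule in a box with 5 A of vacuum
calc = GPAW(mode='fd',            # real-space finite-difference mode
            kpts=(1, 1, 1),       # gamma point only
            txt='n2.txt')
atoms.calc = calc                 # older ASE versions: atoms.set_calculator(calc)
print(atoms.get_potential_energy())

For an isolated, non-periodic molecule a 1x1x1 (gamma-only) mesh is all that is needed; if the error persists with this setup, the cause is more likely memory or the installation rather than the k-point sampling.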

charleslian


Post #2 | 2017-03-16 11:06:45

maoxinxina


I installed the one you suggested, but it still reports an error; it now looks like this:

An MPI process has executed an operation involving a call to the
"fork()" system call to create a child process.  Open MPI is currently
operating in a condition that could result in memory corruption or
other system errors; your MPI job may hang, crash, or produce silent
data corruption.  The use of fork() (or system() or other calls that
create child processes) is strongly discouraged.  

The process that invoked fork was:

  Local host:          a264 (PID 28715)
  MPI_COMM_WORLD rank: 4

If you are *absolutely sure* that your application will successfully
and correctly survive a call to fork(), you may disable this warning
by setting the mpi_warn_on_fork MCA parameter to 0.
--------------------------------------------------------------------------
[a264:28710] 23 more processes have sent help message help-mpi-runtime.txt / mpi_init:warn-fork
[a264:28710] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
rank=20 L00: Traceback (most recent call last):
rank=20 L01:   File "C6N7-C3N3.py", line 64, in <module>
rank=20 L02:     atom.get_potential_energy()
rank=20 L03:   File "/public/home/users/zkchu/program/python-tools/ase-3.9.1/lib/python2.7/site-packages/ase/atoms.py", line 640, in get_potential_energy
rank=20 L04:     energy = self._calc.get_potential_energy(self)
rank=20 L05:   File "/public/home/users/dicp004/gpaw/lib/python2.7/site-packages/gpaw/aseinterface.py", line 50, in get_potential_energy
rank=20 L06:     self.calculate(atoms, converge=True)
rank=20 L07:   File "/public/home/users/dicp004/gpaw/lib/python2.7/site-packages/gpaw/paw.py", line 251, in calculate
rank=20 L08:     self.set_positions(atoms)
rank=20 L09:   File "/public/home/users/dicp004/gpaw/lib/python2.7/site-packages/gpaw/paw.py", line 329, in set_positions
rank=20 L10:     self.wfs.initialize(self.density, self.hamiltonian, spos_ac)
rank=20 L11:   File "/public/home/users/dicp004/gpaw/lib/python2.7/site-packages/gpaw/wavefunctions/fdpw.py", line 71, in initialize
rank=20 L12:     basis_functions, density, hamiltonian, spos_ac)
rank=20 L13:   File "/public/home/users/dicp004/gpaw/lib/python2.7/site-packages/gpaw/wavefunctions/fdpw.py", line 108, in initialize_wave_functions_from_basis_functions
rank=20 L14:     lcaobd.mynbands)
rank=20 L15:   File "/public/home/users/dicp004/gpaw/lib/python2.7/site-packages/gpaw/wavefunctions/fd.py", line 250, in initialize_from_lcao_coefficients
rank=20 L16:     kpt.psit_nG = self.gd.zeros(self.bd.mynbands, self.dtype)
rank=20 L17:   File "/public/home/users/dicp004/gpaw/lib/python2.7/site-packages/gpaw/grid_descriptor.py", line 199, in zeros
rank=20 L18:     return self._new_array(n, dtype, True, global_array, pad)
rank=20 L19:   File "/public/home/users/dicp004/gpaw/lib/python2.7/site-packages/gpaw/grid_descriptor.py", line 224, in _new_array
rank=20 L20:     return np.zeros(shape, dtype)
rank=20 L21: MemoryError
GPAW CLEANUP (node 20): <type 'exceptions.MemoryError'> occurred.  Calling MPI_Abort!
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 20 in communicator MPI_COMM_WORLD
with errorcode 42.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
gpaw-python: c/extensions.h:36: gpaw_malloc: Assertion `p != ((void *)0)' failed.
gpaw-python: c/extensions.h:36: gpaw_malloc: Assertion `p != ((void *)0)' failed.
gpaw-python: c/extensions.h:36: gpaw_malloc: Assertion `p != ((void *)0)' failed.
gpaw-python: c/extensions.h:36: gpaw_malloc: Assertion `p != ((void *)0)' failed.
--------------------------------------------------------------------------
mpirun has exited due to process rank 20 with PID 28731 on
node a264 exiting improperly. There are two reasons this could occur:

1. this process did not call "init" before exiting, but others in
the job did. This can cause a job to hang indefinitely while it waits
for all processes to call "init". By rule, if one process calls "init",
then ALL processes must call "init" prior to termination.

2. this process called "init", but exited without calling "finalize".
By rule, all processes that call "init" MUST call "finalize" prior to
exiting or it will be considered an "abnormal termination"

This may have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
Post #3 | 2017-03-20 16:20:48

adormer


[Answer] Helpful reply

You probably ran out of memory; try running with more cores.
Post #4 | 2017-03-22 04:04:54
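The MemoryError is raised while allocating the wavefunction array psit_nG on each rank, so using more MPI ranks (or a coarser grid) shrinks the per-rank allocation. A hedged sketch of the relevant GPAW keywords; the values below are illustrative assumptions, not a known fix for this particular system:

# Spread the real-space grid over more ranks so each one stores a smaller
# piece of psit_nG; launch with more processes, e.g.:
#   mpirun -np 48 gpaw-python job.py
# (all values below are illustrative)
from gpaw import GPAW

calc = GPAW(mode='fd',
            h=0.20,                      # a coarser grid spacing also lowers memory
            parallel={'domain': None,    # None = let GPAW pick the domain split
                      'band': 1},        # no band parallelization
            txt='out.txt')

In short, the per-rank memory is roughly (grid points per rank) x (number of bands per rank), so either requesting more ranks, coarsening the grid, or reducing the number of bands brings the allocation back within the node's memory.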