Views: 4131  |  Replies: 1

sic029

Iron Worm (forum newcomer)

[Help] qsub submission of parallel siesta fails — help needed

Hi everyone. I'd like to ask for help with a problem I ran into running programs on our cluster. Thanks in advance.
[node21:10714] *** An error occurred in MPI_Comm_rank
[node21:10714] *** on communicator MPI_COMM_WORLD
[node21:10714] *** MPI_ERR_COMM: invalid communicator
[node21:10714] *** MPI_ERRORS_ARE_FATAL: your MPI job will now abort
--------------------------------------------------------------------------
mpirun has exited due to process rank 3 with PID 10711 on
node node21 exiting improperly. There are two reasons this could occur:

1. this process did not call "init" before exiting, but others in
the job did. This can cause a job to hang indefinitely while it waits
for all processes to call "init". By rule, if one process calls "init",
then ALL processes must call "init" prior to termination.

2. this process called "init", but exited without calling "finalize".
By rule, all processes that call "init" MUST call "finalize" prior to
exiting or it will be considered an "abnormal termination"

This may have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------
[node21:10707] 7 more processes have sent help message help-mpi-errors.txt / mpi_errors_are_fatal
[node21:10707] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages


бàÒëµÄ¼ÆËã³ÌÐòsiesta£¬ÓÃqsub jobÌá½»ÉÏÈ¥ºÜ¿ì½áÊøÌáʾµÄÐÅÏ¢£¬ÄÜ·ñ°ïæÕï¶ÏÒ»ÏÂÇé¿ö¡£ÔÚÁíÍâÒ»¸ö¼¯ÈºÉϱàÒëºóÖ±½ÓÓÃmpirun -np 4 siesta¿ÉÒÔ˳ÀûÖ´Ðе쬲»ÖªµÀΪºÎÔÚм¯ÈºÓÃqsub³öÏÖÕâ¸öÎÊÌ⣬Õâ¸öм¯Èº²»ÈýøÈëµ½×ӽڵ㣬ËùÒÔ±ØÐëÒª½â¾öÕâ¸öÎÊÌâ²ÅÐУ¬¶àлÁË¡£

I can't tell where the problem is. lammps and vasp, both compiled for parallel runs in this same environment, work fine; only siesta submitted through qsub refuses to run, even though the same parallel siesta build runs smoothly on the compute nodes of the other cluster with mpirun -np 4 siesta. I'm stuck.

Oh, and I did already try mpirun on the login node — please take a look. It seems the administrator has restricted things there as well:
mpirun -np 4 siesta
libibverbs: Warning: RLIMIT_MEMLOCK is 32768 bytes.
    This will severely limit memory registrations.
libibverbs: Warning: RLIMIT_MEMLOCK is 32768 bytes.
    This will severely limit memory registrations.
libibverbs: Warning: RLIMIT_MEMLOCK is 32768 bytes.
    This will severely limit memory registrations.
libibverbs: Warning: RLIMIT_MEMLOCK is 32768 bytes.
    This will severely limit memory registrations.
--------------------------------------------------------------------------
The OpenFabrics (openib) BTL failed to initialize while trying to
allocate some locked memory.  This typically can indicate that the
memlock limits are set too low.  For most HPC installations, the
memlock limits should be set to "unlimited".  The failure occured
here:

  Local host:    manage1
  OMPI source:   btl_openib_component.c:1115
  Function:      ompi_free_list_init_ex_new()
  Device:        mlx4_0
  Memlock limit: 32768

You may need to consult with your system administrator to get this
problem fixed.  This FAQ entry on the Open MPI web site may also be
helpful:

    http://www.open-mpi.org/faq/?category=openfabrics#ib-locked-pages
--------------------------------------------------------------------------
--------------------------------------------------------------------------
WARNING: There was an error initializing an OpenFabrics device.

  Local host:   manage1
  Local device: mlx4_0
--------------------------------------------------------------------------
[manage1:16214] *** An error occurred in MPI_Comm_rank
[manage1:16214] *** on communicator MPI_COMM_WORLD
[manage1:16214] *** MPI_ERR_COMM: invalid communicator
[manage1:16214] *** MPI_ERRORS_ARE_FATAL: your MPI job will now abort
--------------------------------------------------------------------------
mpirun has exited due to process rank 0 with PID 16212 on
node manage1 exiting improperly. There are two reasons this could occur:

1. this process did not call "init" before exiting, but others in
the job did. This can cause a job to hang indefinitely while it waits
for all processes to call "init". By rule, if one process calls "init",
then ALL processes must call "init" prior to termination.

2. this process called "init", but exited without calling "finalize".
By rule, all processes that call "init" MUST call "finalize" prior to
exiting or it will be considered an "abnormal termination"

This may have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------
[manage1:16211] 3 more processes have sent help message help-mpi-btl-openib.txt / init-fail-no-mem
[manage1:16211] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
[manage1:16211] 3 more processes have sent help message help-mpi-btl-openib.txt / error in device init
[manage1:16211] 3 more processes have sent help message help-mpi-errors.txt / mpi_errors_are_fatal

The compute nodes cannot be reached interactively; access to them is completely locked down.
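Since the compute nodes can't be reached interactively, one way to see the limits a batch job actually gets is to submit a throwaway job that does nothing but print them. A minimal sketch (the job name is made up; adjust ppn/walltime to whatever the site allows):

```shell
#!/bin/bash
# Diagnostic PBS job: print the resource limits the scheduler hands us.
# (Hypothetical job name "limits-test"; output lands in the job's .o file.)
#PBS -N limits-test
#PBS -l nodes=1:ppn=1
#PBS -j oe

cd $PBS_O_WORKDIR
echo "running on: $(hostname)"
echo "locked memory limit (kB): $(ulimit -l)"
ulimit -a          # full listing of all resource limits
```

If the locked-memory line printed from the compute node is a small value like 32768 instead of unlimited, the memlock limit is not reaching the batch environment, which matches the openib error above.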

The cluster uses PBS with the GridView job manager. My submission script is:
====================================
#!/bin/bash
#PBS -N test
#PBS -l nodes=1:ppn=8
#PBS -j oe
#PBS -l walltime=24:00:00

cd $PBS_O_WORKDIR
NP=`cat $PBS_NODEFILE | wc -l`
source /public/software/mpi/openmpi1.5.4-intel.sh
mpirun -machinefile $PBS_NODEFILE -np $NP \
    /home/sw/siesta/siesta-3.1/Obj/siesta < fe.fdf | tee output
====================================

Thanks in advance for any help, fellow forum members.

redsnowolf

Silver Worm (somewhat known)

[Answer] Accepted helpful reply

I ran into a similar problem with vasp a couple of days ago and only just solved it. The error was:

The OpenFabrics (openib) BTL failed to initialize while trying to
allocate some locked memory.  This typically can indicate that the
memlock limits are set too low.  For most HPC installations, the
memlock limits should be set to "unlimited".  The failure occured
here:

  Local host:    node21
  OMPI source:   btl_openib_component.c:1055
  Function:      ompi_free_list_init_ex_new()
  Device:        mlx4_0
  Memlock limit: 65536

You may need to consult with your system administrator to get this
problem fixed.  This FAQ entry on the Open MPI web site may also be
helpful:

    http://www.open-mpi.org/faq/?category=openfabrics#ib-locked-pages
--------------------------------------------------------------------------
--------------------------------------------------------------------------
WARNING: There was an error initializing an OpenFabrics device.
Entries 15, 16, and 17 at that FAQ URL explain this quite clearly. In my case, ulimit -a on every node showed a normal locked-memory limit, yet the jobs still failed with the memory-registration error. The FAQ points to two likely causes: the locked-memory limit configured on the system is not being applied at login, or the job scheduler is not passing a large enough limit down to the application processes. In the end I restarted the PBS scheduler daemon on every node, and the problem went away.
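For reference, the system-wide fix the Open MPI FAQ describes is raising the memlock limit on every compute node, which needs root. A sketch of the relevant lines (the daemon restart matters because pbs_mom, and hence every job it spawns, inherits whatever limits were in force when the daemon started):

```
# /etc/security/limits.conf on each compute node (root required);
# restart the pbs_mom daemon afterwards so jobs inherit the new limit
*   soft   memlock   unlimited
*   hard   memlock   unlimited
```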
Alternatively, you can add ulimit -l unlimited just before the mpirun line in your script and resubmit with qsub to see if that helps.
Hope this information is useful to the OP.
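Folding that suggestion into the OP's script, the result would look like this (paths copied from the original script; whether the limit can actually be raised from inside the job depends on the hard limit the site has set):

```shell
#!/bin/bash
#PBS -N test
#PBS -l nodes=1:ppn=8
#PBS -j oe
#PBS -l walltime=24:00:00

cd $PBS_O_WORKDIR
NP=`cat $PBS_NODEFILE | wc -l`
source /public/software/mpi/openmpi1.5.4-intel.sh

# Try to lift the locked-memory limit before launching MPI.
# This only succeeds if the hard limit permits it; otherwise the
# system-wide fix (limits.conf + daemon restart) is needed.
ulimit -l unlimited

mpirun -machinefile $PBS_NODEFILE -np $NP \
    /home/sw/siesta/siesta-3.1/Obj/siesta < fe.fdf | tee output
```

Note that with nodes=1 all ranks run on the same host, so the ulimit call in the script applies to every process mpirun starts there.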
#2  2012-09-15 14:24:17