Views: 3124  |  Replies: 6

xiaowu787

Muchong member (regular writer)


[Discussion] Latest GAMESS build went through

GAMESS×îа²×°Í¨¹ý£­£­²»ÖªÓÐûÓÐÒÅ©£¬²âÊÔ½á¹ûÓÐЩÎÊÌ⣬Çë¸ßÊÖÖ¸µã£¬Ð»Ð»
CODE:
.........
.o zheev.o zmatrx.o

Choices for some optional plug-in codes are
   Using qmmm.o, Tinker/SIMOMM code is not linked.
   Using vbdum.o, neither VB program is linked.
   Using neostb.o, Nuclear Electron Orbital code is not linked.

Message passing libraries are ../ddi/libddi.a -L/home/u06/lammps/mpich2/lib -lmpich -lrt -lpthread

Other libraries to be searched are /home/u06/lammps/intel/composer_xe_2011_sp1.6.233/mkl/lib/intel64/libmkl_intel_lp64.a /home/u06/lammps/intel/composer_xe_2011_sp1.6.233/mkl/lib/intel64/libmkl_sequential.a /home/u06/lammps/intel/composer_xe_2011_sp1.6.233/mkl/lib/intel64/libmkl_core.a

Linker messages (if any) follow...

The linking of GAMESS to binary gamess.00.x was successful.
0.285u 0.192s 0:04.00 11.7%     0+0k 0+0io 8pf+0w

Why were these not linked in?
CODE:
Choices for some optional plug-in codes are
   Using qmmm.o, Tinker/SIMOMM code is not linked.
   Using vbdum.o, neither VB program is linked.
   Using neostb.o, Nuclear Electron Orbital code is not linked.

Test results
CODE:
[u06@pc07 gamess]$ mpirun -np 2 ./gamess.00.x
YOU MUST ASSIGN GENERIC NAME INPUT WITH A SETENV.
EXECUTION OF GAMESS TERMINATED -ABNORMALLY- AT Mon Sep 26 19:38:07 2011
STEP CPU TIME =     0.00 TOTAL CPU TIME =        0.0 (    0.0 MIN)
TOTAL WALL CLOCK TIME=        0.0 SECONDS, CPU UTILIZATION IS 100.00%
DDI Process 0: error code 911
application called MPI_Abort(MPI_COMM_WORLD, 911) - process 0
rank 0 in job 46  pc07_50155   caused collective abort of all ranks
  exit status of rank 0: return code 143

Test result: "DDI Process 0: error code 911", cause unknown
CODE:
[u06@pc07 GAMESS]$ mpirun -np 2 gamess.00.x >exam01
DDI Process 0: error code 911
application called MPI_Abort(MPI_COMM_WORLD, 911) - process 0
[u06@pc07 GAMESS]$
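The abort above is not an MPI problem. gamess.00.x was launched directly, so none of the environment variables that the rungms script normally exports were set, which is exactly what "YOU MUST ASSIGN GENERIC NAME INPUT WITH A SETENV" complains about; DDI then reports the generic abort code 911. A minimal sketch of what rungms does (the paths below are hypothetical placeholders; a real run should simply go through rungms):

```shell
# GAMESS locates its files through environment variables that rungms sets.
# Without them it prints the SETENV error and DDI aborts with code 911.
# Hypothetical paths, for illustration only:
export INPUT=/home/u06/scr/exam01.F05        # the generic name 'INPUT'
export ERICFMT=/home/u06/gamess/ericfmt.dat  # ERI format table
export PUNCH=/home/u06/scr/exam01.dat        # punch output file
# then relaunch (not executed here):
#   mpirun -np 2 ./gamess.00.x
echo "INPUT=$INPUT"
```

The second test below fails the same way for the same reason; redirecting stdout to a file just hides the SETENV message.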

Oh, I have not followed this for a while. Has GAMESS switched its parallel execution to MPI now?
#2 | 2011-09-28 12:42:24

xiaowu787

Muchong member (regular writer)


Quoted reply:
#2: Originally posted by snoopyzhao at 2011-09-28 12:42:24:
Oh, I have not followed this for a while. Has GAMESS switched its parallel execution to MPI now?

When installing I chose mpi rather than sockets. Compiling with compddi did not produce ddikick.x, only libddi.a. Looking inside compddi, the comments seem to say that ddikick.x is built only for the sockets target, yet rungms needs ddikick.x, and at run time the program complains that ddikick.x cannot be found. What is going on? Experts, please advise.
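That matches how DDI is organized: the sockets target builds the stand-alone kicker ddikick.x, while the mpi target builds only libddi.a, which is linked into gamess.00.x and launched through mpiexec (the rungms mpi branch quoted in the next post). A small sketch of the decision, using a mock directory since the real build tree is not available here:

```shell
# For an MPI build of DDI there is no ddikick.x; the launcher is mpiexec.
# Mock tree for illustration (replace with your real gamess/ddi directory):
DDI=$(mktemp -d)
touch "$DDI/libddi.a"                 # what compddi produced for target=mpi
if [ -e "$DDI/ddikick.x" ]; then
    echo "sockets build: launch with ddikick.x"
else
    echo "mpi build: launch with mpiexec (rungms mpi branch)"
fi
rm -rf "$DDI"
```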
#3 | 2011-09-28 15:56:31

xiaowu787

Muchong member (regular writer)


The program was compiled with MPICH2, but I do not quite understand how the contents of rungms should be modified; experts, please advise on which parts need changing. The default in rungms here is Intel MPI.

    rungms:
        Here we use two constant node names, compute-0-0 and compute-0-1,
        each of which is assumed to be SMP (ours are 8-ways):

        Each user must set up a file named ~/.mpd.conf containing
        a single line: "secretword=GiantsOverDodgers" which is
        set to user-only access permissions "chmod 600 ~/.mpd.conf".
        The secret word shouldn't be a login password, but can be
        anything you like: "secretword=VikingsOverPackers" is just
        as good.

if ($TARGET == mpi) then
   #
   #     Run outside of the batch scheduler Sun Grid Engine (SGE)
   #     by faking SGE's host assignment file: $TMPDIR/machines.
   #     This script can be executed interactively on the first
   #     compute node mentioned in this fake 'machines' file.
   set TMPDIR=$SCR
   #              perhaps SGE would assign us two node names...
   echo "compute-0-1"  > $TMPDIR/machines
   echo "compute-0-2" >> $TMPDIR/machines
   #              or if you want to use these four nodes...
   #--echo "compute-0-0"  > $TMPDIR/machines
   #--echo "compute-0-1" >> $TMPDIR/machines
   #--echo "compute-0-2" >> $TMPDIR/machines
   #--echo "compute-0-3" >> $TMPDIR/machines
   #
   #      besides the usual three arguments to 'rungms' (see top),
   #      we'll pass in a "processors per node" value.  This could
   #      be a value from 1 to 8 on our 8-way nodes.
   set PPN=$4
   #
   #  Allow for compute process and data servers (one pair per core)
   #
   @ NPROCS = $NCPUS + $NCPUS
   #
   #  MPICH2 kick-off is guided by two disk files (A and B).
   #
   #  A. build HOSTFILE, saying which nodes will be in our MPI ring
   #
   setenv HOSTFILE $SCR/$JOB.nodes.mpd
   if (-e $HOSTFILE) rm $HOSTFILE
   touch $HOSTFILE
   #
   if ($NCPUS == 1) then
             # Serial run must be on this node itself!
      echo `hostname` >> $HOSTFILE
      set NNODES=1
   else
             # Parallel run gets node names from SGE's assigned list,
             # which is given to us in a disk file $TMPDIR/machines.
      uniq $TMPDIR/machines $HOSTFILE
      set NNODES=`wc -l $HOSTFILE`
      set NNODES=$NNODES[1]
   endif
   #           uncomment these if you are still setting up...
   #--echo '------------'
   #--echo HOSTFILE $HOSTFILE contains
   #--cat $HOSTFILE
   #--echo '------------'
   #
   #  B. the next file forces explicit "which process on what node" rules.
   #
   setenv PROCFILE $SCR/$JOB.processes.mpd
   if (-e $PROCFILE) rm $PROCFILE
   touch $PROCFILE
   #
   if ($NCPUS == 1) then
      @ NPROCS = 2
      echo "-n $NPROCS -host `hostname` /home/mike/gamess/gamess.$VERNO.x" >> $PROCFILE
   else
      @ NPROCS = $NCPUS + $NCPUS
      if ($PPN == 0) then
             # when our SGE is just asked to assign so many cores from one
             # node, PPN=0, we are launching compute processes and data
             # servers within our own node...simple.
         echo "-n $NPROCS -host `hostname` /home/mike/gamess/gamess.$VERNO.x" >> $PROCFILE
      else
             # when our SGE is asked to reserve entire nodes, 1<=PPN<=8,
             # the $TMPDIR/machines contains the assigned node names
             # once and only once.  We want PPN compute processes on
             # each node, and of course, PPN data servers on each.
             # Although DDI itself can assign c.p. and d.s. to the
             # hosts in any order, the GDDI logic below wants to have
             # all c.p. names before any d.s. names in the $HOSTFILE.
             #
             # thus, lay down a list of c.p.
         @ PPN2 = $PPN + $PPN
         @ n=1
         while ($n <= $NNODES)
            set host=`sed -n -e "$n p" $HOSTFILE`
            set host=$host[1]
            echo "-n $PPN2 -host $host /home/mike/gamess/gamess.$VERNO.x" >> $PROCFILE
            @ n++
          end
   endif
   endif
   #           uncomment these if you are still setting up...
   #--echo PROCFILE $PROCFILE contains
   #--cat $PROCFILE
   #--echo '------------'
   #
   echo "MPICH2 will be running GAMESS on $NNODES nodes."
   echo "The binary to be kicked off by 'mpiexec' is gamess.$VERNO.x"
   echo "MPICH2 will run $NCPUS compute processes and $NCPUS data servers."
   if ($PPN > 0) echo "MPICH2 will be running $PPN of each process per node."
   #
   #  Next sets up MKL usage
   setenv LD_LIBRARY_PATH /opt/intel/mkl/10.0.3.020/lib/em64t
   #  force old MKL versions (version 9 and older) to run single threaded
   setenv MKL_SERIAL YES
   #
   setenv LD_LIBRARY_PATH /opt/mpich2/gnu/lib:$LD_LIBRARY_PATH
   set path=(/opt/mpich2/gnu/bin $path)
   #
   echo The scratch disk space on each node is $SCR
   chdir $SCR
   #
   #  Now, at last, we can actually launch the processes, in 3 steps.
   #  a) bring up a 'ring' of MPI demons
   #
   set echo
   mpdboot --rsh=ssh -n $NNODES -f $HOSTFILE
   #
   #  b) kick off the compute processes and the data servers
   #
   mpiexec -configfile $PROCFILE < /dev/null
   #
   #  c) shut down the 'ring' of MPI demons
   #
   mpdallexit
   unset echo
   #
   #    HOSTFILE is passed to the file erasing step below
   rm -f $PROCFILE
endif
#4 | 2011-09-29 09:20:52

xiaowu787

Muchong member (regular writer)


Please advise on the MPI-related changes to rungms, thanks!! The lab's mpd has been running normally the whole time, so there is no need to start the mpd processes again; I really do not know how this part should be modified.
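If an mpd ring is already up for this user, the mpdboot/mpdallexit pair in the rungms mpi branch can be skipped. One way (a sketch, assuming MPICH2's MPD tools are on the PATH) is to probe for a running ring with mpdtrace before booting:

```shell
# mpdtrace exits 0 when an mpd ring is reachable, non-zero otherwise.
# If a ring is already up, rungms only needs the mpiexec step; comment
# out mpdboot and mpdallexit there, or guard them like this:
if mpdtrace >/dev/null 2>&1; then
    echo "mpd ring already running; skipping mpdboot/mpdallexit"
else
    echo "no mpd ring; mpdboot required before mpiexec"
fi
```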

[u06@pc07 tests]$ ../rungms exam01.inp
----- GAMESS execution script -----
This job is running on host pc07
under operating system Linux at Thu Sep 29 10:50:47 CST 2011
Available scratch disk space (Kbyte units) at beginning of the job is
Filesystem           1K-blocks       Used Available Use% Mounted on
store:/data          2536545984 1253686304 1154010560  53% /home
cp exam01.inp /home/u06/lammps/gamess/GAMESS/gamess/u06/exam01.F05
unset echo
setenv ERICFMT /home/u06/lammps/gamess/GAMESS/gamess/u06/ericfmt.dat
setenv MCPPATH /home/u06/lammps/gamess/GAMESS/gamess/u06/mcpdata
setenv EXTBAS /dev/null
setenv NUCBAS /dev/null
.......

setenv GMCDIN /home/u06/lammps/gamess/GAMESS/gamess/u06/exam01.F97
setenv GMC2SZ /home/u06/lammps/gamess/GAMESS/gamess/u06/exam01.F98
setenv GMCCCS /home/u06/lammps/gamess/GAMESS/gamess/u06/exam01.F99
unset echo
Intel MPI (iMPI) will be running GAMESS on 1 nodes.
The binary to be kicked off by 'mpiexec' is gamess.00.x
iMPI will run 1 compute processes and 1 data servers.
The scratch disk space on each node is /home/u06/lammps/gamess/GAMESS/gamess/u06
/home/u06/lammps/mpich2/bin/mpdroot: open failed for root's mpd conf filempiexec_pc07 (__init__ 1208): forked process failed; status=255
----- accounting info -----
Files used on the master node pc07 were:
-rw-r--r-- 1 u06 usbfs 1136 09-29 10:20 /home/u06/lammps/gamess/GAMESS/gamess/u06/exam01.F05
-rw-r--r-- 1 u06 usbfs    5 09-29 10:20 /home/u06/lammps/gamess/GAMESS/gamess/u06/exam01.nodes.mpd
-rw-r--r-- 1 u06 usbfs   66 09-29 10:20 /home/u06/lammps/gamess/GAMESS/gamess/u06/exam01.processes.mpd
Thu Sep 29 10:50:49 CST 2011
0.204u 0.084s 0:01.71 16.3%     0+0k 0+0io 18pf+0w
[u06@pc07 tests]$
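The failing line, "mpdroot: open failed for root's mpd conf file", suggests MPD could not read its configuration file. As the comments quoted from rungms above say, each user needs a ~/.mpd.conf containing a secret word, with user-only permissions; a sketch (the secret word is arbitrary, and the example word is the one from the rungms comments):

```shell
# Create the per-user MPD configuration file that MPICH2's MPD process
# manager expects.  The secret word is any string, not a login password.
CONF="$HOME/.mpd.conf"
echo "secretword=GiantsOverDodgers" > "$CONF"
chmod 600 "$CONF"      # MPD refuses the file unless it is user-only
ls -l "$CONF"
```

If the file already exists with the right permissions, the other thing to check is that rungms picks up the same MPICH2 installation (here /home/u06/lammps/mpich2) in both PATH and LD_LIBRARY_PATH.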

[ Last edited by xiaowu787 on 2011-9-29 at 10:20 ]
#5 | 2011-09-29 10:16:42

爱上英语

New member (newcomer)


What is this? I cannot make sense of it. Could you send me an installation tutorial?

#7 | 2016-12-15 21:49:44