Views: 3088  |  Replies: 6

xiaowu787

Muchong (Regular Writer)


[Discussion] Latest GAMESS installation succeeded

GAMESS×îа²×°Í¨¹ý£­£­²»ÖªÓÐûÓÐÒÅ©£¬²âÊÔ½á¹ûÓÐЩÎÊÌ⣬Çë¸ßÊÖÖ¸µã£¬Ð»Ð»
CODE:
.........
.o zheev.o zmatrx.o

Choices for some optional plug-in codes are
   Using qmmm.o, Tinker/SIMOMM code is not linked.
   Using vbdum.o, neither VB program is linked.
   Using neostb.o, Nuclear Electron Orbital code is not linked.

Message passing libraries are ../ddi/libddi.a -L/home/u06/lammps/mpich2/lib -lmpich -lrt -lpthread

Other libraries to be searched are /home/u06/lammps/intel/composer_xe_2011_sp1.6.233/mkl/lib/intel64/libmkl_intel_lp64.a /home/u06/lammps/intel/composer_xe_2011_sp1.6.233/mkl/lib/intel64/libmkl_sequential.a /home/u06/lammps/intel/composer_xe_2011_sp1.6.233/mkl/lib/intel64/libmkl_core.a

Linker messages (if any) follow...

The linking of GAMESS to binary gamess.00.x was successful.
0.285u 0.192s 0:04.00 11.7%     0+0k 0+0io 8pf+0w

Why were these not included?
CODE:
Choices for some optional plug-in codes are
   Using qmmm.o, Tinker/SIMOMM code is not linked.
   Using vbdum.o, neither VB program is linked.
   Using neostb.o, Nuclear Electron Orbital code is not linked.

Test results
CODE:
[u06@pc07 gamess]$ mpirun -np 2 ./gamess.00.x
YOU MUST ASSIGN GENERIC NAME INPUT WITH A SETENV.
EXECUTION OF GAMESS TERMINATED -ABNORMALLY- AT Mon Sep 26 19:38:07 2011
STEP CPU TIME =     0.00 TOTAL CPU TIME =        0.0 (    0.0 MIN)
TOTAL WALL CLOCK TIME=        0.0 SECONDS, CPU UTILIZATION IS 100.00%
DDI Process 0: error code 911
application called MPI_Abort(MPI_COMM_WORLD, 911) - process 0
rank 0 in job 46  pc07_50155   caused collective abort of all ranks
  exit status of rank 0: return code 143

Test result: DDI Process 0: error code 911, and I do not know why.
CODE:
[u06@pc07 GAMESS]$ mpirun -np 2 gamess.00.x >exam01
DDI Process 0: error code 911
application called MPI_Abort(MPI_COMM_WORLD, 911) - process 0
[u06@pc07 GAMESS]$
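
For what it is worth, the abort is consistent with the first message in the output: gamess.00.x was launched directly with mpirun, so none of the file assignments that rungms normally exports were in place, and GAMESS stops because the generic name INPUT was never assigned with a setenv. A minimal sketch of the idea (csh; the paths are taken from this thread's own listings and are placeholders, and in practice one simply launches through rungms, which writes a setenv for every Fxx unit):
CODE:
setenv ERICFMT /home/u06/lammps/gamess/GAMESS/gamess/u06/ericfmt.dat
setenv INPUT   /home/u06/lammps/gamess/GAMESS/gamess/u06/exam01.F05
setenv PUNCH   /home/u06/lammps/gamess/GAMESS/gamess/u06/exam01.dat
#   ...plus the remaining Fxx assignments that rungms generates...
mpirun -np 2 ./gamess.00.x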

snoopyzhao
Oh, I have not followed this for a while; has GAMESS switched its parallel execution over to MPI now?
Floor 2, 2011-09-28 12:42:24

xiaowu787

Muchong (Regular Writer)


Quoted reply:
Floor 2: Originally posted by snoopyzhao at 2011-09-28 12:42:24:
Oh, I have not followed this for a while; has GAMESS switched its parallel execution over to MPI now?

When installing I chose mpi rather than sockets. Compiling with compddi did not produce ddikick.x, only libddi.a. I read through compddi, and its comments seem to say that only the sockets option produces ddikick.x, yet rungms expects ddikick.x, so at run time the program complains that ddikick.x cannot be found. What is going on? Experts, please advise.
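
For context, the choice described above is made near the top of compddi; in the stock script it looks roughly like the following (variable name and comments paraphrased from memory of compddi, so verify against your copy):
CODE:
#   communication layer selection in compddi:
#   'sockets' builds libddi.a plus the ddikick.x launcher;
#   'mpi' builds only libddi.a, and jobs are then started through
#   mpiexec/mpirun from rungms instead of ddikick.x
set COMM = mpi

With COMM set to mpi, the fix is not to hunt for ddikick.x but to run through the mpi branch of rungms (set TARGET=mpi there), which launches via mpiexec, as the script quoted on floor 4 below shows.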
Floor 3, 2011-09-28 15:56:31

xiaowu787

Muchong (Regular Writer)


I compiled the program with MPICH2, but I do not really understand the changes needed inside rungms; experts, please take a look at which parts need modifying. The default in rungms is Intel MPI.

    rungms:
        Here we use two constant node names, compute-0-0 and compute-0-1,
        each of which is assumed to be SMP (ours are 8-ways):

        Each user must set up a file named ~/.mpd.conf containing
        a single line: "secretword=GiantsOverDodgers" which is
        set to user-only access permissions "chmod 600 ~/.mpd.conf".
        The secret word shouldn't be a login password, but can be
        anything you like: "secretword=VikingsOverPackers" is just
        as good.
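
In shell terms, that one-time setup is just the two commands the paragraph above describes:
CODE:
echo "secretword=GiantsOverDodgers" > ~/.mpd.conf
chmod 600 ~/.mpd.conf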

if ($TARGET == mpi) then
   #
   #     Run outside of the batch scheduler Sun Grid Engine (SGE)
   #     by faking SGE's host assignment file: $TMPDIR/machines.
   #     This script can be executed interactively on the first
   #     compute node mentioned in this fake 'machines' file.
   set TMPDIR=$SCR
   #              perhaps SGE would assign us two node names...
   echo "compute-0-1"  > $TMPDIR/machines
   echo "compute-0-2" >> $TMPDIR/machines
   #              or if you want to use these four nodes...
   #--echo "compute-0-0"  > $TMPDIR/machines
   #--echo "compute-0-1" >> $TMPDIR/machines
   #--echo "compute-0-2" >> $TMPDIR/machines
   #--echo "compute-0-3" >> $TMPDIR/machines
   #
   #      besides the usual three arguments to 'rungms' (see top),
   #      we'll pass in a "processors per node" value.  This could
   #      be a value from 1 to 8 on our 8-way nodes.
   set PPN=$4
   #
   #  Allow for compute process and data servers (one pair per core)
   #
   @ NPROCS = $NCPUS + $NCPUS
   #
   #  MPICH2 kick-off is guided by two disk files (A and B).
   #
   #  A. build HOSTFILE, saying which nodes will be in our MPI ring
   #
   setenv HOSTFILE $SCR/$JOB.nodes.mpd
   if (-e $HOSTFILE) rm $HOSTFILE
   touch $HOSTFILE
   #
   if ($NCPUS == 1) then
             # Serial run must be on this node itself!
      echo `hostname` >> $HOSTFILE
      set NNODES=1
   else
             # Parallel run gets node names from SGE's assigned list,
             # which is given to us in a disk file $TMPDIR/machines.
      uniq $TMPDIR/machines $HOSTFILE
      set NNODES=`wc -l $HOSTFILE`
      set NNODES=$NNODES[1]
   endif
   #           uncomment these if you are still setting up...
   #--echo '------------'
   #--echo HOSTFILE $HOSTFILE contains
   #--cat $HOSTFILE
   #--echo '------------'
   #
   #  B. the next file forces explicit "which process on what node" rules.
   #
   setenv PROCFILE $SCR/$JOB.processes.mpd
   if (-e $PROCFILE) rm $PROCFILE
   touch $PROCFILE
   #
   if ($NCPUS == 1) then
      @ NPROCS = 2
      echo "-n $NPROCS -host `hostname` /home/mike/gamess/gamess.$VERNO.x" >> $PROCFILE
   else
      @ NPROCS = $NCPUS + $NCPUS
      if ($PPN == 0) then
             # when our SGE is just asked to assign so many cores from one
             # node, PPN=0, we are launching compute processes and data
             # servers within our own node...simple.
         echo "-n $NPROCS -host `hostname` /home/mike/gamess/gamess.$VERNO.x" >> $PROCFILE
      else
             # when our SGE is asked to reserve entire nodes, 1<=PPN<=8,
             # the $TMPDIR/machines contains the assigned node names
             # once and only once.  We want PPN compute processes on
             # each node, and of course, PPN data servers on each.
             # Although DDI itself can assign c.p. and d.s. to the
             # hosts in any order, the GDDI logic below wants to have
             # all c.p. names before any d.s. names in the $HOSTFILE.
             #
             # thus, lay down a list of c.p.
         @ PPN2 = $PPN + $PPN
         @ n=1
         while ($n <= $NNODES)
            set host=`sed -n -e "$n p" $HOSTFILE`
            set host=$host[1]
            echo "-n $PPN2 -host $host /home/mike/gamess/gamess.$VERNO.x" >> $PROCFILE
            @ n++
          end
       endif
   endif
   #           uncomment these if you are still setting up...
   #--echo PROCFILE $PROCFILE contains
   #--cat $PROCFILE
   #--echo '------------'
   #
   echo "MPICH2 will be running GAMESS on $NNODES nodes."
   echo "The binary to be kicked off by 'mpiexec' is gamess.$VERNO.x"
   echo "MPICH2 will run $NCPUS compute processes and $NCPUS data servers."
   if ($PPN > 0) echo "MPICH2 will be running $PPN of each process per node."
   #
   #  Next sets up MKL usage
   setenv LD_LIBRARY_PATH /opt/intel/mkl/10.0.3.020/lib/em64t
   #  force old MKL versions (version 9 and older) to run single threaded
   setenv MKL_SERIAL YES
   #
   setenv LD_LIBRARY_PATH /opt/mpich2/gnu/lib:$LD_LIBRARY_PATH
   set path=(/opt/mpich2/gnu/bin $path)
   #
   echo The scratch disk space on each node is $SCR
   chdir $SCR
   #
   #  Now, at last, we can actually launch the processes, in 3 steps.
   #  a) bring up a 'ring' of MPI demons
   #
   set echo
   mpdboot --rsh=ssh -n $NNODES -f $HOSTFILE
   #
   #  b) kick off the compute processes and the data servers
   #
   mpiexec -configfile $PROCFILE < /dev/null
   #
   #  c) shut down the 'ring' of MPI demons
   #
   mpdallexit
   unset echo
   #
   #    HOSTFILE is passed to the file erasing step below
   rm -f $PROCFILE
endif
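
Judging from the script above, the site-specific lines a MPICH2 user would need to edit are: the hard-coded node names echoed into $TMPDIR/machines, the /home/mike/gamess/gamess.$VERNO.x path in the $PROCFILE lines, and the MKL/MPICH2 install paths near the end. A sketch with this machine's paths substituted (taken from the logs elsewhere in this thread, so treat them as examples, not defaults):
CODE:
#  single-node run on this box (hostname pc07)
echo "pc07" > $TMPDIR/machines
#  point the $PROCFILE entries at wherever gamess.00.x actually lives
#  (the path below is assumed from this thread's listings):
echo "-n $NPROCS -host `hostname` /home/u06/lammps/gamess/gamess.$VERNO.x" >> $PROCFILE
#  MKL and MPICH2 paths for this install
setenv LD_LIBRARY_PATH /home/u06/lammps/intel/composer_xe_2011_sp1.6.233/mkl/lib/intel64
setenv LD_LIBRARY_PATH /home/u06/lammps/mpich2/lib:$LD_LIBRARY_PATH
set path=(/home/u06/lammps/mpich2/bin $path)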
Floor 4, 2011-09-29 09:20:52

xiaowu787

Muchong (Regular Writer)


Please advise on the MPI-related changes to rungms, thank you!! The lab's mpd ring has been up and running the whole time, so there is no need to start an mpd process again; I honestly do not know how this part should be modified.

[u06@pc07 tests]$ ../rungms exam01.inp
----- GAMESS execution script -----
This job is running on host pc07
under operating system Linux at Thu Sep 29 10:50:47 CST 2011
Available scratch disk space (Kbyte units) at beginning of the job is
Filesystem            1K-blocks       Used  Available Use% Mounted on
store:/data          2536545984 1253686304 1154010560  53% /home
cp exam01.inp /home/u06/lammps/gamess/GAMESS/gamess/u06/exam01.F05
unset echo
setenv ERICFMT /home/u06/lammps/gamess/GAMESS/gamess/u06/ericfmt.dat
setenv MCPPATH /home/u06/lammps/gamess/GAMESS/gamess/u06/mcpdata
setenv EXTBAS /dev/null
setenv NUCBAS /dev/null
.......

setenv GMCDIN /home/u06/lammps/gamess/GAMESS/gamess/u06/exam01.F97
setenv GMC2SZ /home/u06/lammps/gamess/GAMESS/gamess/u06/exam01.F98
setenv GMCCCS /home/u06/lammps/gamess/GAMESS/gamess/u06/exam01.F99
unset echo
Intel MPI (iMPI) will be running GAMESS on 1 nodes.
The binary to be kicked off by 'mpiexec' is gamess.00.x
iMPI will run 1 compute processes and 1 data servers.
The scratch disk space on each node is /home/u06/lammps/gamess/GAMESS/gamess/u06
/home/u06/lammps/mpich2/bin/mpdroot: open failed for root's mpd conf file
mpiexec_pc07 (__init__ 1208): forked process failed; status=255
----- accounting info -----
Files used on the master node pc07 were:
-rw-r--r-- 1 u06 usbfs 1136 09-29 10:20 /home/u06/lammps/gamess/GAMESS/gamess/u06/exam01.F05
-rw-r--r-- 1 u06 usbfs    5 09-29 10:20 /home/u06/lammps/gamess/GAMESS/gamess/u06/exam01.nodes.mpd
-rw-r--r-- 1 u06 usbfs   66 09-29 10:20 /home/u06/lammps/gamess/GAMESS/gamess/u06/exam01.processes.mpd
Thu Sep 29 10:50:49 CST 2011
0.204u 0.084s 0:01.71 16.3%     0+0k 0+0io 18pf+0w
[u06@pc07 tests]$
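
The failing line above is mpdboot trying to bring up a fresh daemon ring (and stumbling over an mpd conf file), even though, per the note at the top of this floor, a ring is already running on the lab machines. Two hedged suggestions to try: first, make sure ~/.mpd.conf exists for this user with mode 600, as the floor 4 excerpt describes; second, if the existing ring is usable from this account, the ring boot and teardown in rungms can simply be skipped so that mpiexec attaches to the running daemons:
CODE:
#  in the $TARGET == mpi branch of rungms, with a persistent mpd ring
#  already up (a guess; verify against your site's MPICH2 setup):
#--mpdboot --rsh=ssh -n $NNODES -f $HOSTFILE
mpiexec -configfile $PROCFILE < /dev/null
#--mpdallexit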

[ Last edited by xiaowu787 on 2011-9-29 at 10:20 ]
Floor 5, 2011-09-29 10:16:42

爱上英语

Xinchong (Forum Newcomer)


What is all this? I cannot make sense of it; could you send me an installation tutorial?

(Sent from the Muchong iOS client)
Floor 7, 2016-12-15 21:49:44