| Views: 3124 | Replies: 6 |
[Discussion]
Latest GAMESS installed successfully

Installed the latest GAMESS and the build went through — I'm not sure whether I missed anything. The test results show some problems: several tests stop with "DDI Process 0: error code 911", and I don't know why. (For some reason the test results I tried to attach don't seem to have been added to the post.) Pointers from the experts would be appreciated, thanks.
snoopyzhao
Oh, I haven't been following this for a while — has GAMESS switched to MPI for its parallel runs now?

Floor 2 · 2011-09-28 12:42:24
Floor 3 · 2011-09-28 15:56:31
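For the question above: the communication layer is fixed when the DDI library is compiled, so whether a given build runs over TCP/IP sockets or an MPI library depends on how the DDI build was configured. A quick way to check an existing installation — a sketch only, assuming the usual source layout under ~/gamess; adjust paths to your own tree — is to look at the DDI compile script and at the banner rungms prints when a job starts:

    # which transport was the DDI layer set up for?
    grep -inE 'sockets|mpi' ~/gamess/ddi/compddi | head -20

    # the rungms banner at job start also names the launcher, e.g.
    # "Intel MPI (iMPI) will be running GAMESS ..." or "MPICH2 will be running GAMESS ..."
    grep -i 'will be running GAMESS' exam01.log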
I compiled the program with MPICH2, but I don't really understand how the contents of rungms should be modified — could an expert please point out which parts need to change? rungms defaults to Intel MPI. The relevant section of rungms reads:

Here we use two constant node names, compute-0-0 and compute-0-1, each of which is assumed to be SMP (ours are 8-ways). Each user must set up a file named ~/.mpd.conf containing a single line, "secretword=GiantsOverDodgers", which is set to user-only access permissions by "chmod 600 ~/.mpd.conf". The secret word shouldn't be a login password, but can be anything you like: "secretword=VikingsOverPackers" is just as good.

    if ($TARGET == mpi) then
       #
       #   Run outside of the batch scheduler Sun Grid Engine (SGE)
       #   by faking SGE's host assignment file: $TMPDIR/machines.
       #   This script can be executed interactively on the first
       #   compute node mentioned in this fake 'machines' file.
       set TMPDIR=$SCR
       #   perhaps SGE would assign us two node names...
       echo "compute-0-1" >  $TMPDIR/machines
       echo "compute-0-2" >> $TMPDIR/machines
       #   or if you want to use these four nodes...
       #--echo "compute-0-0" >  $TMPDIR/machines
       #--echo "compute-0-1" >> $TMPDIR/machines
       #--echo "compute-0-2" >> $TMPDIR/machines
       #--echo "compute-0-3" >> $TMPDIR/machines
       #
       #   besides the usual three arguments to 'rungms' (see top),
       #   we'll pass in a "processors per node" value.  This could
       #   be a value from 1 to 8 on our 8-way nodes.
       set PPN=$4
       #
       #   Allow for compute processes and data servers (one pair per core)
       #
       @ NPROCS = $NCPUS + $NCPUS
       #
       #   MPICH2 kick-off is guided by two disk files (A and B).
       #
       #   A. build HOSTFILE, saying which nodes will be in our MPI ring
       #
       setenv HOSTFILE $SCR/$JOB.nodes.mpd
       if (-e $HOSTFILE) rm $HOSTFILE
       touch $HOSTFILE
       #
       if ($NCPUS == 1) then
          #  Serial run must be on this node itself!
          echo `hostname` >> $HOSTFILE
          set NNODES=1
       else
          #  Parallel run gets node names from SGE's assigned list,
          #  which is given to us in a disk file $TMPDIR/machines.
          uniq $TMPDIR/machines $HOSTFILE
          set NNODES=`wc -l $HOSTFILE`
          set NNODES=$NNODES[1]
       endif
       #   uncomment these if you are still setting up...
       #--echo '------------'
       #--echo HOSTFILE $HOSTFILE contains
       #--cat $HOSTFILE
       #--echo '------------'
       #
       #   B. the next file forces explicit "which process on what node" rules.
       #
       setenv PROCFILE $SCR/$JOB.processes.mpd
       if (-e $PROCFILE) rm $PROCFILE
       touch $PROCFILE
       #
       if ($NCPUS == 1) then
          @ NPROCS = 2
          echo "-n $NPROCS -host `hostname` /home/mike/gamess/gamess.$VERNO.x" >> $PROCFILE
       else
          @ NPROCS = $NCPUS + $NCPUS
          if ($PPN == 0) then
             #  when our SGE is just asked to assign so many cores from one
             #  node, PPN=0, we are launching compute processes and data
             #  servers within our own node...simple.
             echo "-n $NPROCS -host `hostname` /home/mike/gamess/gamess.$VERNO.x" >> $PROCFILE
          else
             #  when our SGE is asked to reserve entire nodes, 1<=PPN<=8,
             #  the $TMPDIR/machines contains the assigned node names
             #  once and only once.  We want PPN compute processes on
             #  each node, and of course, PPN data servers on each.
             #  Although DDI itself can assign c.p. and d.s. to the
             #  hosts in any order, the GDDI logic below wants to have
             #  all c.p. names before any d.s. names in the $HOSTFILE.
             #
             #  thus, lay down a list of c.p.
             @ PPN2 = $PPN + $PPN
             @ n=1
             while ($n <= $NNODES)
                set host=`sed -n -e "$n p" $HOSTFILE`
                set host=$host[1]
                echo "-n $PPN2 -host $host /home/mike/gamess/gamess.$VERNO.x" >> $PROCFILE
                @ n++
             end
          endif
       endif
       #   uncomment these if you are still setting up...
       #--echo PROCFILE $PROCFILE contains
       #--cat $PROCFILE
       #--echo '------------'
       #
       echo "MPICH2 will be running GAMESS on $NNODES nodes."
       echo "The binary to be kicked off by 'mpiexec' is gamess.$VERNO.x"
       echo "MPICH2 will run $NCPUS compute processes and $NCPUS data servers."
       if ($PPN > 0) echo "MPICH2 will be running $PPN of each process per node."
       #
       #   Next sets up MKL usage
       setenv LD_LIBRARY_PATH /opt/intel/mkl/10.0.3.020/lib/em64t
       #   force old MKL versions (version 9 and older) to run single threaded
       setenv MKL_SERIAL YES
       #
       setenv LD_LIBRARY_PATH /opt/mpich2/gnu/lib:$LD_LIBRARY_PATH
       set path=(/opt/mpich2/gnu/bin $path)
       #
       echo The scratch disk space on each node is $SCR
       chdir $SCR
       #
       #   Now, at last, we can actually launch the processes, in 3 steps.
       #       a) bring up a 'ring' of MPI daemons
       #
       set echo
       mpdboot --rsh=ssh -n $NNODES -f $HOSTFILE
       #
       #       b) kick off the compute processes and the data servers
       #
       mpiexec -configfile $PROCFILE < /dev/null
       #
       #       c) shut down the 'ring' of MPI daemons
       #
       mpdallexit
       unset echo
       #
       #   HOSTFILE is passed to the file erasing step below
       rm -f $PROCFILE
    endif
Floor 4 · 2011-09-29 09:20:52

Please advise on how rungms should be modified for MPI, thanks! The lab's mpd has been running normally all along, so there is no need to start another mpd process — I really don't know how this part should be changed.

    [u06@pc07 tests]$ ../rungms exam01.inp
    ----- GAMESS execution script -----
    This job is running on host pc07
    under operating system Linux at Thu 29 Sep 2011 10:50:47 CST
    Available scratch disk space (Kbyte units) at beginning of the job is
    Filesystem       1K-blocks        Used   Available  Use%  Mounted on
    store:/data     2536545984  1253686304  1154010560   53%  /home
    cp exam01.inp /home/u06/lammps/gamess/GAMESS/gamess/u06/exam01.F05
    unset echo
    setenv ERICFMT /home/u06/lammps/gamess/GAMESS/gamess/u06/ericfmt.dat
    setenv MCPPATH /home/u06/lammps/gamess/GAMESS/gamess/u06/mcpdata
    setenv EXTBAS /dev/null
    setenv NUCBAS /dev/null
    .......
    setenv GMCDIN /home/u06/lammps/gamess/GAMESS/gamess/u06/exam01.F97
    setenv GMC2SZ /home/u06/lammps/gamess/GAMESS/gamess/u06/exam01.F98
    setenv GMCCCS /home/u06/lammps/gamess/GAMESS/gamess/u06/exam01.F99
    unset echo
    Intel MPI (iMPI) will be running GAMESS on 1 nodes.
    The binary to be kicked off by 'mpiexec' is gamess.00.x
    iMPI will run 1 compute processes and 1 data servers.
    The scratch disk space on each node is /home/u06/lammps/gamess/GAMESS/gamess/u06
    /home/u06/lammps/mpich2/bin/mpdroot: open failed for root's mpd conf file
    mpiexec_pc07 (__init__ 1208): forked process failed; status=255
    ----- accounting info -----
    Files used on the master node pc07 were:
    -rw-r--r-- 1 u06 usbfs 1136 09-29 10:20 /home/u06/lammps/gamess/GAMESS/gamess/u06/exam01.F05
    -rw-r--r-- 1 u06 usbfs    5 09-29 10:20 /home/u06/lammps/gamess/GAMESS/gamess/u06/exam01.nodes.mpd
    -rw-r--r-- 1 u06 usbfs   66 09-29 10:20 /home/u06/lammps/gamess/GAMESS/gamess/u06/exam01.processes.mpd
    Thu 29 Sep 2011 10:50:49 CST
    0.204u 0.084s 0:01.71 16.3% 0+0k 0+0io 18pf+0w
    [u06@pc07 tests]$

[ Last edited by xiaowu787 on 2011-9-29 at 10:20 ]
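The failure in this log — "mpdroot: open failed for root's mpd conf file" followed by "forked process failed; status=255" — is MPICH2's mpd launcher refusing to start because it cannot read its configuration file, which is exactly the ~/.mpd.conf requirement quoted from rungms above. Below is a minimal sketch of the usual fix, plus a way to confirm that the lab's permanently running ring is visible so the mpdboot/mpdallexit calls in rungms can simply be skipped. The mpd* commands are MPICH2's process-manager tools; the secret word is an arbitrary example.

    # one-line per-user mpd configuration, as described in the rungms comments
    echo "secretword=AnythingYouLike" > ~/.mpd.conf    # any word, not a real password
    chmod 600 ~/.mpd.conf
    # (if your MPICH2 runs mpd as root, the same line goes into /etc/mpd.conf instead)

    # check that the already-running ring is reachable before submitting jobs
    mpdtrace        # lists the hosts currently in the ring
    mpdlistjobs     # shows whatever the ring is currently managing

    # with a permanent ring, comment out the ring start/stop inside rungms:
    #   mpdboot --rsh=ssh -n $NNODES -f $HOSTFILE
    #   mpdallexit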
Floor 5 · 2011-09-29 10:16:42
Floor 7 · 2016-12-15 21:49:44
神威杰 · Floor 6 · 2013-10-25 17:35
Could you send me an installation guide?