
Views: 3040  |  Replies: 6

xiaowu787



[Discussion] Latest GAMESS installed successfully

The latest GAMESS has been installed and linked successfully -- I'm not sure whether anything was missed. The test results show some problems; advice from an expert would be appreciated, thanks.
CODE:
.........
.o zheev.o zmatrx.o

Choices for some optional plug-in codes are
   Using qmmm.o, Tinker/SIMOMM code is not linked.
   Using vbdum.o, neither VB program is linked.
   Using neostb.o, Nuclear Electron Orbital code is not linked.

Message passing libraries are ../ddi/libddi.a -L/home/u06/lammps/mpich2/lib -lmpich -lrt -lpthread

Other libraries to be searched are /home/u06/lammps/intel/composer_xe_2011_sp1.6.233/mkl/lib/intel64/libmkl_intel_lp64.a /home/u06/lammps/intel/composer_xe_2011_sp1.6.233/mkl/lib/intel64/libmkl_sequential.a /home/u06/lammps/intel/composer_xe_2011_sp1.6.233/mkl/lib/intel64/libmkl_core.a

Linker messages (if any) follow...

The linking of GAMESS to binary gamess.00.x was successful.
0.285u 0.192s 0:04.00 11.7%     0+0k 0+0io 8pf+0w

Why weren't these linked in?
CODE:
Choices for some optional plug-in codes are
   Using qmmm.o, Tinker/SIMOMM code is not linked.
   Using vbdum.o, neither VB program is linked.
   Using neostb.o, Nuclear Electron Orbital code is not linked.

Test results
CODE:
[u06@pc07 gamess]$ mpirun -np 2 ./gamess.00.x
YOU MUST ASSIGN GENERIC NAME INPUT WITH A SETENV.
EXECUTION OF GAMESS TERMINATED -ABNORMALLY- AT Mon Sep 26 19:38:07 2011
STEP CPU TIME =     0.00 TOTAL CPU TIME =        0.0 (    0.0 MIN)
TOTAL WALL CLOCK TIME=        0.0 SECONDS, CPU UTILIZATION IS 100.00%
DDI Process 0: error code 911
application called MPI_Abort(MPI_COMM_WORLD, 911) - process 0
rank 0 in job 46  pc07_50155   caused collective abort of all ranks
  exit status of rank 0: return code 143

Test result -- DDI Process 0: error code 911; I don't know why.
CODE:
[u06@pc07 GAMESS]$ mpirun -np 2 gamess.00.x >exam01
DDI Process 0: error code 911
application called MPI_Abort(MPI_COMM_WORLD, 911) - process 0
[u06@pc07 GAMESS]$
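In both runs above, gamess.00.x is being launched directly with mpirun, so none of the generic file names that GAMESS expects (INPUT, PUNCH, and so on) have been assigned with setenv, which is what appears to trigger the 911 abort. Those assignments are normally made by the rungms front-end script before it calls mpiexec. A minimal sketch of the usual invocation, where exam01, the version number 00, and the core count 2 are just placeholder arguments:
CODE:
# run from the GAMESS install directory; rungms assigns INPUT and the other
# generic names, then launches gamess.00.x itself
./rungms exam01 00 2 >& exam01.log &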


爱上英语




What is this? I can't follow it -- could you send me an installation tutorial?

Post #7 · 2016-12-15 21:49:44

snoopyzhao
Oh, I haven't been following this for a while -- has GAMESS switched to MPI for parallel runs now?
Post #2 · 2011-09-28 12:42:24

xiaowu787



Quoted reply:
Post #2: Originally posted by snoopyzhao at 2011-09-28 12:42:24:
Oh, I haven't been following this for a while -- has GAMESS switched to MPI for parallel runs now?

When installing I chose mpi rather than sockets. Compiling with compddi did not produce ddikick.x; it produced libddi.a instead. Looking at compddi, it seems that only the sockets build produces ddikick.x, yet rungms needs ddikick.x, so at run time the program complains that ddikick.x cannot be found. What is going on? Could an expert please advise?
Post #3 · 2011-09-28 15:56:31
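For what it's worth, an MPI build of DDI is launched through mpiexec rather than ddikick.x, so rungms has to take its MPI branch (the "if ($TARGET == mpi)" block quoted in the next post) instead of the sockets branch that calls ddikick.x. A minimal sketch, assuming the stock rungms, which selects the launch method with a TARGET variable near the top of the script (usually defaulting to sockets):
CODE:
#  choose the DDI launch method in rungms to match the compddi build;
#  the MPI build produces libddi.a and no ddikick.x
set TARGET=mpi        # was: set TARGET=sockets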

xiaowu787



I compiled the program with MPICH2, but I don't really understand how rungms needs to be modified. Could an expert point out which parts have to be changed? The default in rungms is Intel MPI. (A sketch of the site-specific lines is given after the script excerpt below.)

    rungms:
        Here we use two constant node names, compute-0-0 and compute-0-1,
        each of which is assumed to be SMP (ours are 8-ways):

        Each user must set up a file named ~/.mpd.conf containing
        a single line: "secretword=GiantsOverDodgers" which is
        set to user-only access permissions "chmod 600 ~/.mpd.conf".
        The secret word shouldn't be a login password, but can be
        anything you like: "secretword=VikingsOverPackers" is just
        as good.
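As a one-time setup step for the mpd ring described in the comment above, the ~/.mpd.conf file could be created like this (the secret word is the arbitrary example value from the comment):
CODE:
echo "secretword=GiantsOverDodgers" > ~/.mpd.conf
chmod 600 ~/.mpd.conf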

if ($TARGET == mpi) then
   #
   #     Run outside of the batch scheduler Sun Grid Engine (SGE)
   #     by faking SGE's host assignment file: $TMPDIR/machines.
   #     This script can be executed interactively on the first
   #     compute node mentioned in this fake 'machines' file.
   set TMPDIR=$SCR
   #              perhaps SGE would assign us two node names...
   echo "compute-0-1"  > $TMPDIR/machines
   echo "compute-0-2" >> $TMPDIR/machines
   #              or if you want to use these four nodes...
   #--echo "compute-0-0"  > $TMPDIR/machines
   #--echo "compute-0-1" >> $TMPDIR/machines
   #--echo "compute-0-2" >> $TMPDIR/machines
   #--echo "compute-0-3" >> $TMPDIR/machines
   #
   #      besides the usual three arguments to 'rungms' (see top),
   #      we'll pass in a "processors per node" value.  This could
   #      be a value from 1 to 8 on our 8-way nodes.
   set PPN=$4
   #
   #  Allow for compute process and data servers (one pair per core)
   #
   @ NPROCS = $NCPUS + $NCPUS
   #
   #  MPICH2 kick-off is guided by two disk files (A and B).
   #
   #  A. build HOSTFILE, saying which nodes will be in our MPI ring
   #
   setenv HOSTFILE $SCR/$JOB.nodes.mpd
   if (-e $HOSTFILE) rm $HOSTFILE
   touch $HOSTFILE
   #
   if ($NCPUS == 1) then
             # Serial run must be on this node itself!
      echo `hostname` >> $HOSTFILE
      set NNODES=1
   else
             # Parallel run gets node names from SGE's assigned list,
             # which is given to us in a disk file $TMPDIR/machines.
      uniq $TMPDIR/machines $HOSTFILE
      set NNODES=`wc -l $HOSTFILE`
      set NNODES=$NNODES[1]
   endif
   #           uncomment these if you are still setting up...
   #--echo '------------'
   #--echo HOSTFILE $HOSTFILE contains
   #--cat $HOSTFILE
   #--echo '------------'
   #
   #  B. the next file forces explicit "which process on what node" rules.
   #
   setenv PROCFILE $SCR/$JOB.processes.mpd
   if (-e $PROCFILE) rm $PROCFILE
   touch $PROCFILE
   #
   if ($NCPUS == 1) then
      @ NPROCS = 2
      echo "-n $NPROCS -host `hostname` /home/mike/gamess/gamess.$VERNO.x" >> $PROCFILE
   else
      @ NPROCS = $NCPUS + $NCPUS
      if ($PPN == 0) then
             # when our SGE is just asked to assign so many cores from one
             # node, PPN=0, we are launching compute processes and data
             # servers within our own node...simple.
         echo "-n $NPROCS -host `hostname` /home/mike/gamess/gamess.$VERNO.x" >> $PROCFILE
      else
             # when our SGE is asked to reserve entire nodes, 1<=PPN<=8,
             # the $TMPDIR/machines contains the assigned node names
             # once and only once.  We want PPN compute processes on
             # each node, and of course, PPN data servers on each.
             # Although DDI itself can assign c.p. and d.s. to the
             # hosts in any order, the GDDI logic below wants to have
             # all c.p. names before any d.s. names in the $HOSTFILE.
             #
             # thus, lay down a list of c.p.
         @ PPN2 = $PPN + $PPN
         @ n=1
         while ($n <= $NNODES)
            set host=`sed -n -e "$n p" $HOSTFILE`
            set host=$host[1]
            echo "-n $PPN2 -host $host /home/mike/gamess/gamess.$VERNO.x" >> $PROCFILE
            @ n++
         end
      endif
   endif
   #           uncomment these if you are still setting up...
   #--echo PROCFILE $PROCFILE contains
   #--cat $PROCFILE
   #--echo '------------'
   #
   echo "MPICH2 will be running GAMESS on $NNODES nodes."
   echo "The binary to be kicked off by 'mpiexec' is gamess.$VERNO.x"
   echo "MPICH2 will run $NCPUS compute processes and $NCPUS data servers."
   if ($PPN > 0) echo "MPICH2 will be running $PPN of each process per node."
   #
   #  Next sets up MKL usage
   setenv LD_LIBRARY_PATH /opt/intel/mkl/10.0.3.020/lib/em64t
   #  force old MKL versions (version 9 and older) to run single threaded
   setenv MKL_SERIAL YES
   #
   setenv LD_LIBRARY_PATH /opt/mpich2/gnu/lib:$LD_LIBRARY_PATH
   set path=(/opt/mpich2/gnu/bin $path)
   #
   echo The scratch disk space on each node is $SCR
   chdir $SCR
   #
   #  Now, at last, we can actually launch the processes, in 3 steps.
   #  a) bring up a 'ring' of MPI daemons
   #
   set echo
   mpdboot --rsh=ssh -n $NNODES -f $HOSTFILE
   #
   #  b) kick off the compute processes and the data servers
   #
   mpiexec -configfile $PROCFILE < /dev/null
   #
   #  c) shut down the 'ring' of MPI daemons
   #
   mpdallexit
   unset echo
   #
   #    HOSTFILE is passed to the file erasing step below
   rm -f $PROCFILE
endif
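Regarding which lines need editing: the site-specific values in this excerpt are the node names written into $TMPDIR/machines, the /home/mike/gamess path in the $PROCFILE lines, and the MKL and MPICH2 paths near the end. A sketch of those lines adapted to the library paths shown in the link output earlier in this thread; the GAMESS install directory and the MPICH2 bin directory below are assumptions, not tested values:
CODE:
#  node list: use the real hostname(s) instead of compute-0-1 / compute-0-2
echo `hostname` > $TMPDIR/machines

#  $PROCFILE entries: point at the local binary (install path is a placeholder)
echo "-n $NPROCS -host `hostname` /path/to/gamess/gamess.$VERNO.x" >> $PROCFILE

#  MKL and MPICH2 runtime paths, matching the libraries used at link time
setenv LD_LIBRARY_PATH /home/u06/lammps/intel/composer_xe_2011_sp1.6.233/mkl/lib/intel64
setenv LD_LIBRARY_PATH /home/u06/lammps/mpich2/lib:$LD_LIBRARY_PATH
set path=(/home/u06/lammps/mpich2/bin $path)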
Post #4 · 2011-09-29 09:20:52