Views: 3002 | Replies: 6

[Discussion] Latest GAMESS build completed

The latest GAMESS build went through -- I'm not sure whether I missed anything, but some of the test results have problems; could an expert please advise, thanks. Some tests fail with "DDI Process 0: error code 911", and I don't know why.
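A quick way to survey which exam jobs actually failed is to grep each log for the normal-termination banner that GAMESS prints on success. This is a generic sketch, not something from the thread: `check_exam` is a hypothetical helper name, and the log-file naming is an assumption.

```shell
# check_exam LOGFILE -- report whether a GAMESS exam log ended normally.
# "EXECUTION OF GAMESS TERMINATED NORMALLY" is the banner GAMESS prints
# at the end of a successful run; its absence flags jobs that died early,
# e.g. with "DDI Process 0: error code 911".
check_exam() {
    if grep -q "EXECUTION OF GAMESS TERMINATED NORMALLY" "$1"; then
        echo "$1: ok"
    else
        echo "$1: FAILED"
    fi
}

# Typical use after running the test set, assuming one log per exam job:
#   for log in exam*.log; do check_exam "$log"; done
```

This only detects jobs that aborted; comparing computed energies against the reference values in the exam outputs still has to be done separately.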
Floor 2 | 2011-09-28 12:42:24
Floor 3 | 2011-09-28 15:56:31
I compiled with MPICH2, but I don't really understand the modifications needed inside rungms; could an expert please advise which parts have to change? The default in rungms is Intel MPI. The relevant rungms section reads:

rungms: Here we use two constant node names, compute-0-0 and compute-0-1, each of which is assumed to be SMP (ours are 8-way). Each user must set up a file named ~/.mpd.conf containing a single line, "secretword=GiantsOverDodgers", which is set to user-only access permissions ("chmod 600 ~/.mpd.conf"). The secret word shouldn't be a login password, but can be anything you like: "secretword=VikingsOverPackers" is just as good.

if ($TARGET == mpi) then
   #
   #  Run outside of the batch scheduler Sun Grid Engine (SGE)
   #  by faking SGE's host assignment file: $TMPDIR/machines.
   #  This script can be executed interactively on the first
   #  compute node mentioned in this fake 'machines' file.
   set TMPDIR=$SCR
   #      perhaps SGE would assign us two node names...
   echo "compute-0-1"  > $TMPDIR/machines
   echo "compute-0-2" >> $TMPDIR/machines
   #      or if you want to use these four nodes...
   #--echo "compute-0-0"  > $TMPDIR/machines
   #--echo "compute-0-1" >> $TMPDIR/machines
   #--echo "compute-0-2" >> $TMPDIR/machines
   #--echo "compute-0-3" >> $TMPDIR/machines
   #
   #  besides the usual three arguments to 'rungms' (see top),
   #  we'll pass in a "processors per node" value.  This could
   #  be a value from 1 to 8 on our 8-way nodes.
   set PPN=$4
   #
   #  Allow for compute processes and data servers (one pair per core)
   @ NPROCS = $NCPUS + $NCPUS
   #
   #  MPICH2 kick-off is guided by two disk files (A and B).
   #
   #  A. build HOSTFILE, saying which nodes will be in our MPI ring
   #
   setenv HOSTFILE $SCR/$JOB.nodes.mpd
   if (-e $HOSTFILE) rm $HOSTFILE
   touch $HOSTFILE
   #
   if ($NCPUS == 1) then
      #  Serial run must be on this node itself!
      echo `hostname` >> $HOSTFILE
      set NNODES=1
   else
      #  Parallel run gets node names from SGE's assigned list,
      #  which is given to us in a disk file $TMPDIR/machines.
      uniq $TMPDIR/machines $HOSTFILE
      set NNODES=`wc -l $HOSTFILE`
      set NNODES=$NNODES[1]
   endif
   #      uncomment these if you are still setting up...
   #--echo '------------'
   #--echo HOSTFILE $HOSTFILE contains
   #--cat $HOSTFILE
   #--echo '------------'
   #
   #  B. the next file forces explicit "which process on what node" rules.
   #
   setenv PROCFILE $SCR/$JOB.processes.mpd
   if (-e $PROCFILE) rm $PROCFILE
   touch $PROCFILE
   #
   if ($NCPUS == 1) then
      @ NPROCS = 2
      echo "-n $NPROCS -host `hostname` /home/mike/gamess/gamess.$VERNO.x" >> $PROCFILE
   else
      @ NPROCS = $NCPUS + $NCPUS
      if ($PPN == 0) then
         #  when our SGE is just asked to assign so many cores from one
         #  node, PPN=0, we are launching compute processes and data
         #  servers within our own node...simple.
         echo "-n $NPROCS -host `hostname` /home/mike/gamess/gamess.$VERNO.x" >> $PROCFILE
      else
         #  when our SGE is asked to reserve entire nodes, 1<=PPN<=8,
         #  the $TMPDIR/machines contains the assigned node names
         #  once and only once.  We want PPN compute processes on
         #  each node, and of course, PPN data servers on each.
         #  Although DDI itself can assign c.p. and d.s. to the
         #  hosts in any order, the GDDI logic below wants to have
         #  all c.p. names before any d.s. names in the $HOSTFILE.
         #
         #      thus, lay down a list of c.p.
         @ PPN2 = $PPN + $PPN
         @ n=1
         while ($n <= $NNODES)
            set host=`sed -n -e "$n p" $HOSTFILE`
            set host=$host[1]
            echo "-n $PPN2 -host $host /home/mike/gamess/gamess.$VERNO.x" >> $PROCFILE
            @ n++
         end
      endif
   endif
   #      uncomment these if you are still setting up...
   #--echo PROCFILE $PROCFILE contains
   #--cat $PROCFILE
   #--echo '------------'
   #
   echo "MPICH2 will be running GAMESS on $NNODES nodes."
   echo "The binary to be kicked off by 'mpiexec' is gamess.$VERNO.x"
   echo "MPICH2 will run $NCPUS compute processes and $NCPUS data servers."
   if ($PPN > 0) echo "MPICH2 will be running $PPN of each process per node."
   #
   #  Next sets up MKL usage
   setenv LD_LIBRARY_PATH /opt/intel/mkl/10.0.3.020/lib/em64t
   #  force old MKL versions (version 9 and older) to run single-threaded
   setenv MKL_SERIAL YES
   #
   setenv LD_LIBRARY_PATH /opt/mpich2/gnu/lib:$LD_LIBRARY_PATH
   set path=(/opt/mpich2/gnu/bin $path)
   #
   echo The scratch disk space on each node is $SCR
   chdir $SCR
   #
   #  Now, at last, we can actually launch the processes, in 3 steps.
   #     a) bring up a 'ring' of MPI daemons
   #
   set echo
   mpdboot --rsh=ssh -n $NNODES -f $HOSTFILE
   #
   #     b) kick off the compute processes and the data servers
   #
   mpiexec -configfile $PROCFILE < /dev/null
   #
   #     c) shut down the 'ring' of MPI daemons
   #
   mpdallexit
   unset echo
   #
   #  HOSTFILE is passed to the file-erasing step below
   rm -f $PROCFILE
endif
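The ~/.mpd.conf requirement described in the script's comments is easy to get wrong: MPICH2's mpd refuses to start unless the file exists and is readable only by its owner. A minimal sketch of setting it up, where "ChangeMe123" is a placeholder secret word, not a value from the thread:

```shell
# Create the per-user MPD configuration file mentioned in rungms,
# then lock it down; mpd rejects the file if it is group/world readable.
echo "secretword=ChangeMe123" > "$HOME/.mpd.conf"
chmod 600 "$HOME/.mpd.conf"
```

Every user who launches GAMESS through this rungms needs their own copy, since mpd reads the file from the launching user's home directory.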
Floor 4 | 2011-09-29 09:20:52
Please advise on the MPI-related changes to rungms, thanks!! The lab's mpd has been running normally the whole time, so there is no need to start another mpd process, but I really don't know how to modify this part.

[u06@pc07 tests]$ ../rungms exam01.inp
----- GAMESS execution script -----
This job is running on host pc07
under operating system Linux at Thu Sep 29 10:50:47 CST 2011
Available scratch disk space (Kbyte units) at beginning of the job is
Filesystem   1K-blocks        Used   Available  Use%  Mounted on
store:/data  2536545984  1253686304  1154010560   53%  /home
cp exam01.inp /home/u06/lammps/gamess/GAMESS/gamess/u06/exam01.F05
unset echo
setenv ERICFMT /home/u06/lammps/gamess/GAMESS/gamess/u06/ericfmt.dat
setenv MCPPATH /home/u06/lammps/gamess/GAMESS/gamess/u06/mcpdata
setenv EXTBAS /dev/null
setenv NUCBAS /dev/null
.......
setenv GMCDIN /home/u06/lammps/gamess/GAMESS/gamess/u06/exam01.F97
setenv GMC2SZ /home/u06/lammps/gamess/GAMESS/gamess/u06/exam01.F98
setenv GMCCCS /home/u06/lammps/gamess/GAMESS/gamess/u06/exam01.F99
unset echo
Intel MPI (iMPI) will be running GAMESS on 1 nodes.
The binary to be kicked off by 'mpiexec' is gamess.00.x
iMPI will run 1 compute processes and 1 data servers.
The scratch disk space on each node is /home/u06/lammps/gamess/GAMESS/gamess/u06
/home/u06/lammps/mpich2/bin/mpdroot: open failed for root's mpd conf file
mpiexec_pc07 (__init__ 1208): forked process failed; status=255
----- accounting info -----
Files used on the master node pc07 were:
-rw-r--r-- 1 u06 usbfs 1136 09-29 10:20 /home/u06/lammps/gamess/GAMESS/gamess/u06/exam01.F05
-rw-r--r-- 1 u06 usbfs    5 09-29 10:20 /home/u06/lammps/gamess/GAMESS/gamess/u06/exam01.nodes.mpd
-rw-r--r-- 1 u06 usbfs   66 09-29 10:20 /home/u06/lammps/gamess/GAMESS/gamess/u06/exam01.processes.mpd
Thu Sep 29 10:50:49 CST 2011
0.204u 0.084s 0:01.71 16.3% 0+0k 0+0io 18pf+0w
[u06@pc07 tests]$

[ Last edited by xiaowu787 on 2011-9-29 at 10:20 ]
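Since a persistent mpd ring is already running in the lab, one plausible workaround (my suggestion, not something confirmed in the thread) is to stop rungms from booting and tearing down its own ring: comment out the mpdboot and mpdallexit calls in the MPI section and keep only the mpiexec step, so the job attaches to the existing daemons. The "open failed for root's mpd conf file" message also hints that the mpiexec being found first in $PATH belongs to an mpd installation expecting a root-owned /etc/mpd.conf, so checking `which mpd mpiexec` is worthwhile. A sketch of the edited launch section of rungms:

```shell
#     a) bring up a 'ring' of MPI daemons -- skipped here, because a
#        persistent ring is already running on these nodes
#--mpdboot --rsh=ssh -n $NNODES -f $HOSTFILE
#
#     b) kick off the compute processes and the data servers
mpiexec -configfile $PROCFILE < /dev/null
#
#     c) shut down the 'ring' -- skipped, leave the shared ring running
#--mpdallexit
```

This assumes the existing ring spans the nodes named in $HOSTFILE; if it does not, the mpdboot line has to stay.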
Floor 5 | 2011-09-29 10:16:42
Floor 7 | 2016-12-15 21:49:44
Quick reply:
神威杰 (Floor 6) | 2013-10-25 17:35 — to xiaowu787 (+1 coin): Thanks for participating.
Could you send me an installation tutorial?