Views: 3039 | Replies: 6
[Discussion]
Latest GAMESS installed successfully
The latest GAMESS has been installed and compiled successfully, though I'm not sure whether anything was missed. Some of the test results are problematic; could an expert please advise? Thanks. How come these weren't included in the test results? The failing tests end with: DDI Process 0: error code 911 — no idea why.
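To pin down which tests actually fail, one quick check is to run every bundled exam input and grep the logs for GAMESS's normal-termination banner. A minimal sketch, assuming the stock layout (rungms one directory above tests/) and a C shell:

    #!/bin/csh
    #  run each bundled exam input serially, one log per job
    cd ~/gamess/tests
    foreach f (exam*.inp)
       ../rungms $f >& $f:r.log
    end
    #  list the logs that never reached normal termination, i.e. the failures
    grep -L 'TERMINATED NORMALLY' exam*.log

Knowing whether error 911 hits every exam or only some of them narrows the diagnosis considerably.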
I compiled the program with MPICH2, but I don't really understand the changes needed inside rungms (its default is Intel MPI). Could an expert please take a look and point out which places need modifying?

The relevant header comments in rungms:

    Here we use two constant node names, compute-0-0 and compute-0-1,
    each of which is assumed to be SMP (ours are 8-ways).
    Each user must set up a file named ~/.mpd.conf containing a single line:
       secretword=GiantsOverDodgers
    which is set to user-only access permissions: "chmod 600 ~/.mpd.conf".
    The secret word shouldn't be a login password, but can be anything you
    like: "secretword=VikingsOverPackers" is just as good.

And the MPICH2 section itself:

    if ($TARGET == mpi) then
    #
    #      Run outside of the batch scheduler Sun Grid Engine (SGE)
    #      by faking SGE's host assignment file: $TMPDIR/machines.
    #      This script can be executed interactively on the first
    #      compute node mentioned in this fake 'machines' file.
       set TMPDIR=$SCR
    #          perhaps SGE would assign us two node names...
       echo "compute-0-1" >  $TMPDIR/machines
       echo "compute-0-2" >> $TMPDIR/machines
    #          or if you want to use these four nodes...
    #--echo "compute-0-0" >  $TMPDIR/machines
    #--echo "compute-0-1" >> $TMPDIR/machines
    #--echo "compute-0-2" >> $TMPDIR/machines
    #--echo "compute-0-3" >> $TMPDIR/machines
    #
    #      besides the usual three arguments to 'rungms' (see top),
    #      we'll pass in a "processors per node" value.  This could
    #      be a value from 1 to 8 on our 8-way nodes.
       set PPN=$4
    #
    #      Allow for compute processes and data servers (one pair per core)
       @ NPROCS = $NCPUS + $NCPUS
    #
    #      MPICH2 kick-off is guided by two disk files (A and B).
    #
    #      A. build HOSTFILE, saying which nodes will be in our MPI ring
    #
       setenv HOSTFILE $SCR/$JOB.nodes.mpd
       if (-e $HOSTFILE) rm $HOSTFILE
       touch $HOSTFILE
    #
       if ($NCPUS == 1) then
    #         Serial run must be on this node itself!
          echo `hostname` >> $HOSTFILE
          set NNODES=1
       else
    #         Parallel run gets node names from SGE's assigned list,
    #         which is given to us in a disk file $TMPDIR/machines.
          uniq $TMPDIR/machines $HOSTFILE
          set NNODES=`wc -l $HOSTFILE`
          set NNODES=$NNODES[1]
       endif
    #       uncomment these if you are still setting up...
    #--echo '------------'
    #--echo HOSTFILE $HOSTFILE contains
    #--cat $HOSTFILE
    #--echo '------------'
    #
    #      B. the next file forces explicit "which process on what node" rules.
    #
       setenv PROCFILE $SCR/$JOB.processes.mpd
       if (-e $PROCFILE) rm $PROCFILE
       touch $PROCFILE
    #
       if ($NCPUS == 1) then
          @ NPROCS = 2
          echo "-n $NPROCS -host `hostname` /home/mike/gamess/gamess.$VERNO.x" >> $PROCFILE
       else
          @ NPROCS = $NCPUS + $NCPUS
          if ($PPN == 0) then
    #            when our SGE is just asked to assign so many cores from one
    #            node, PPN=0, we are launching compute processes and data
    #            servers within our own node...simple.
             echo "-n $NPROCS -host `hostname` /home/mike/gamess/gamess.$VERNO.x" >> $PROCFILE
          else
    #            when our SGE is asked to reserve entire nodes, 1<=PPN<=8,
    #            the $TMPDIR/machines contains the assigned node names
    #            once and only once.  We want PPN compute processes on
    #            each node, and of course, PPN data servers on each.
    #            Although DDI itself can assign c.p. and d.s. to the
    #            hosts in any order, the GDDI logic below wants to have
    #            all c.p. names before any d.s. names in the $HOSTFILE.
    #
    #                 thus, lay down a list of c.p.
             @ PPN2 = $PPN + $PPN
             @ n=1
             while ($n <= $NNODES)
                set host=`sed -n -e "$n p" $HOSTFILE`
                set host=$host[1]
                echo "-n $PPN2 -host $host /home/mike/gamess/gamess.$VERNO.x" >> $PROCFILE
                @ n++
             end
          endif
       endif
    #       uncomment these if you are still setting up...
    #--echo PROCFILE $PROCFILE contains
    #--cat $PROCFILE
    #--echo '------------'
    #
       echo "MPICH2 will be running GAMESS on $NNODES nodes."
       echo "The binary to be kicked off by 'mpiexec' is gamess.$VERNO.x"
       echo "MPICH2 will run $NCPUS compute processes and $NCPUS data servers."
       if ($PPN > 0) echo "MPICH2 will be running $PPN of each process per node."
    #
    #      Next sets up MKL usage
       setenv LD_LIBRARY_PATH /opt/intel/mkl/10.0.3.020/lib/em64t
    #      force old MKL versions (version 9 and older) to run single threaded
       setenv MKL_SERIAL YES
    #
       setenv LD_LIBRARY_PATH /opt/mpich2/gnu/lib:$LD_LIBRARY_PATH
       set path=(/opt/mpich2/gnu/bin $path)
    #
       echo The scratch disk space on each node is $SCR
       chdir $SCR
    #
    #      Now, at last, we can actually launch the processes, in 3 steps.
    #      a) bring up a 'ring' of MPI daemons
    #
       set echo
       mpdboot --rsh=ssh -n $NNODES -f $HOSTFILE
    #
    #      b) kick off the compute processes and the data servers
    #
       mpiexec -configfile $PROCFILE < /dev/null
    #
    #      c) shut down the 'ring' of MPI daemons
    #
       mpdallexit
       unset echo
    #
    #       HOSTFILE is passed to the file erasing step below
       rm -f $PROCFILE
    endif
#4 | 2011-09-29 09:20:52
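Two practical notes on the excerpt above. First, as the script's own comments say, mpdboot will not start unless each user has a one-line ~/.mpd.conf with user-only permissions; the secret word is arbitrary (anything except a real password):

    echo "secretword=GiantsOverDodgers" > ~/.mpd.conf
    chmod 600 ~/.mpd.conf

Second, this MPICH2 branch reads a fourth command-line argument into PPN ("processors per node") on top of the usual three (job name, version, NCPUS), so a run on two 8-way nodes would presumably be kicked off along these lines (exam01 here is just a placeholder job name):

    ../rungms exam01 00 16 8 >& exam01.log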
Please advise on the MPI-related changes to rungms, thanks!! The lab's mpd ring has been running normally all along, so there is no need to start another mpd process; I really don't know how this part should be modified.

    [u06@pc07 tests]$ ../rungms exam01.inp
    ----- GAMESS execution script -----
    This job is running on host pc07
    under operating system Linux at Thu Sep 29 10:50:47 CST 2011
    Available scratch disk space (Kbyte units) at beginning of the job is
    Filesystem      1K-blocks        Used  Available Use% Mounted on
    store:/data    2536545984  1253686304 1154010560  53% /home
    cp exam01.inp /home/u06/lammps/gamess/GAMESS/gamess/u06/exam01.F05
    unset echo
    setenv ERICFMT /home/u06/lammps/gamess/GAMESS/gamess/u06/ericfmt.dat
    setenv MCPPATH /home/u06/lammps/gamess/GAMESS/gamess/u06/mcpdata
    setenv EXTBAS /dev/null
    setenv NUCBAS /dev/null
    .......
    setenv GMCDIN /home/u06/lammps/gamess/GAMESS/gamess/u06/exam01.F97
    setenv GMC2SZ /home/u06/lammps/gamess/GAMESS/gamess/u06/exam01.F98
    setenv GMCCCS /home/u06/lammps/gamess/GAMESS/gamess/u06/exam01.F99
    unset echo
    Intel MPI (iMPI) will be running GAMESS on 1 nodes.
    The binary to be kicked off by 'mpiexec' is gamess.00.x
    iMPI will run 1 compute processes and 1 data servers.
    The scratch disk space on each node is /home/u06/lammps/gamess/GAMESS/gamess/u06
    /home/u06/lammps/mpich2/bin/mpdroot: open failed for root's mpd conf file
    mpiexec_pc07 (__init__ 1208): forked process failed; status=255
    ----- accounting info -----
    Files used on the master node pc07 were:
    -rw-r--r-- 1 u06 usbfs 1136 09-29 10:20 /home/u06/lammps/gamess/GAMESS/gamess/u06/exam01.F05
    -rw-r--r-- 1 u06 usbfs    5 09-29 10:20 /home/u06/lammps/gamess/GAMESS/gamess/u06/exam01.nodes.mpd
    -rw-r--r-- 1 u06 usbfs   66 09-29 10:20 /home/u06/lammps/gamess/GAMESS/gamess/u06/exam01.processes.mpd
    Thu Sep 29 10:50:49 CST 2011
    0.204u 0.084s 0:01.71 16.3% 0+0k 0+0io 18pf+0w
    [u06@pc07 tests]$

[ Last edited by xiaowu787 on 2011-9-29 at 10:20 ]
#5 | 2011-09-29 10:16:42
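The key line here is mpdroot: mpiexec is falling back to the root helper, which tries to open a system-wide, root-owned mpd configuration file (for MPICH2's mpd that is normally /etc/mpd.conf, if I recall its defaults correctly) and fails. Since the lab already keeps an mpd ring running, one workaround sketch is to strip the ring management out of rungms and keep only the launch step, in whichever branch rungms is actually taking (the log above shows the Intel MPI one, which in this era also managed an mpd ring the same way):

    #  sketch: reuse the existing mpd ring instead of booting a private one
    #  set echo
    #  mpdboot --rsh=ssh -n $NNODES -f $HOSTFILE   <-- comment out
    mpiexec -configfile $PROCFILE < /dev/null
    #  mpdallexit                                  <-- comment out
    #  unset echo

Alternatively, create the root-owned mpd.conf that mpdroot complains about (as root, a single secretword= line, mode 600). Either way, run mpdtrace first to confirm the running ring is actually visible to your account.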