Views: 2521 | Replies: 10
wmy8802217  木虫 (正式写手)
[Help] VASP 5.3.3 parallel build fails (4 people have participated)
Parallel build with mpich2, what should I do?

scala.o: In function `scala_mp_ppotrf_trtri_':
scala.f90:(.text+0x1ba): undefined reference to `blacs_gridinfo_'
scala.f90:(.text+0x1dc): undefined reference to `numroc_'
scala.f90:(.text+0x205): undefined reference to `numroc_'
scala.f90:(.text+0x63d): undefined reference to `blacs_gridinfo_'
scala.f90:(.text+0x679): undefined reference to `pzpotrf_'
scala.f90:(.text+0x79b): undefined reference to `pztrtri_'
scala.f90:(.text+0x7ed): undefined reference to `blacs_gridinfo_'
scala.f90:(.text+0x80f): undefined reference to `numroc_'
scala.f90:(.text+0x838): undefined reference to `numroc_'
scala.o: In function `scala_mp_pdssyex_zheevx_':
scala.f90:(.text+0x14cb): undefined reference to `pzheevx_'
scala.f90:(.text+0x191f): undefined reference to `blacs_gridinfo_'
scala.f90:(.text+0x1940): undefined reference to `numroc_'
scala.f90:(.text+0x1967): undefined reference to `numroc_'
scala.f90:(.text+0x1f50): undefined reference to `blacs_gridinfo_'
scala.f90:(.text+0x1f71): undefined reference to `numroc_'
scala.f90:(.text+0x1f95): undefined reference to `numroc_'
scala.o: In function `scala_mp_pssyex_cheevx_':
scala.f90:(.text+0x2d34): undefined reference to `pzheevx_'
scala.f90:(.text+0x318b): undefined reference to `blacs_gridinfo_'
scala.f90:(.text+0x31ac): undefined reference to `numroc_'
scala.f90:(.text+0x31d3): undefined reference to `numroc_'
scala.f90:(.text+0x3923): undefined reference to `blacs_gridinfo_'
scala.f90:(.text+0x3944): undefined reference to `numroc_'
scala.f90:(.text+0x396b): undefined reference to `numroc_'
scala.o: In function `scala_mp_pssyex_cheevx_single_':
scala.f90:(.text+0x4733): undefined reference to `pcheevx_'
scala.f90:(.text+0x4c44): undefined reference to `blacs_gridinfo_'
scala.f90:(.text+0x4c65): undefined reference to `numroc_'
scala.f90:(.text+0x4c89): undefined reference to `numroc_'
scala.f90:(.text+0x5332): undefined reference to `blacs_gridinfo_'
scala.f90:(.text+0x5353): undefined reference to `numroc_'
scala.f90:(.text+0x537a): undefined reference to `numroc_'
scala.o: In function `scala_mp_distri_':
scala.f90:(.text+0x5c32): undefined reference to `blacs_gridinfo_'
scala.f90:(.text+0x5c50): undefined reference to `numroc_'
scala.f90:(.text+0x5c70): undefined reference to `numroc_'
scala.o: In function `scala_mp_distri_single_':
..............

Below is the makefile:

.SUFFIXES: .inc .f .f90 .F
#-----------------------------------------------------------------------
# Makefile for Intel Fortran compiler for Pentium/Athlon/Opteron
# based systems
# we recommend this makefile for both Intel as well as AMD systems
# for AMD based systems appropriate BLAS (libgoto) and fftw libraries are
# however mandatory (whereas they are optional for Intel platforms)
# For Athlon we recommend
# ) to link against libgoto (and mkl as a backup for missing routines)
# ) odd enough link in libfftw3xf_intel.a (fftw interface for mkl)
# feedback is greatly appreciated
#
# The makefile was tested only under Linux on Intel and AMD platforms
# the following compiler versions have been tested:
# - ifc.7.1  works stable, somewhat slow but reliably
# - ifc.8.1  fails to compile the code properly
# - ifc.9.1  recommended (both for 32 and 64 bit)
# - ifc.10.1 partially recommended (both for 32 and 64 bit)
#            tested build 20080312 Package ID: l_fc_p_10.1.015
#            the gamma only mpi version can not be compiled
#            using ifc.10.1
# - ifc.11.1 partially recommended (some problems with Gamma only and intel fftw)
#            Build 20090630 Package ID: l_cprof_p_11.1.046
# - ifort.12.1 strongly recommended (we use this to compile vasp)
#            Version 12.1.5.339 Build 20120612
#
# it might be required to change some of the library pathways, since
# LINUX installations vary a lot
#
# Hence check ***ALL*** options in this makefile very carefully
#-----------------------------------------------------------------------
#
# BLAS must be installed on the machine
# there are several options:
# 1) very slow but works:
#    retrieve the lapackage from ftp.netlib.org
#    and compile the blas routines (BLAS/SRC directory)
#    please use g77 or f77 for the compilation. When I tried to
#    use pgf77 or pgf90 for BLAS, VASP hung up when calling
#    ZHEEV (however this was with lapack 1.1, now I use lapack 2.0)
# 2) more desirable: get an optimized BLAS
#
#    the two most reliable packages around are presently:
# 2a) Intel's own optimised BLAS (PIII, P4, PD, PC2, Itanium)
#     http://developer.intel.com/software/products/mkl/
#     this is really excellent, if you use Intel CPU's
#
# 2b) probably fastest SSE2 (4 GFlops on P4, 2.53 GHz, 16 GFlops PD,
#     around 30 GFlops on Quad core)
#     Kazushige Goto's BLAS
#     http://www.cs.utexas.edu/users/kgoto/signup_first.html
#     http://www.tacc.utexas.edu/resources/software/
#
#-----------------------------------------------------------------------

# all CPP processed fortran files have the extension .f90
SUFFIX=.f90

#-----------------------------------------------------------------------
# fortran compiler and linker
#-----------------------------------------------------------------------
#FC=ifort
# fortran linker
#FCL=$(FC)

#-----------------------------------------------------------------------
# whereis CPP ?? (I need CPP, can't use gcc with proper options)
# that's the location of gcc for SUSE 5.3
#
# CPP_ = /usr/lib/gcc-lib/i486-linux/2.7.2/cpp -P -C
#
# that's probably the right line for some Red Hat distribution:
#
# CPP_ = /usr/lib/gcc-lib/i386-redhat-linux/2.7.2.3/cpp -P -C
#
# SUSE X.X, maybe some Red Hat distributions:

CPP_ = ./preprocess <$*.F | /usr/bin/cpp -P -C -traditional >$*$(SUFFIX)

# this release should be fpp clean
# we now recommend fpp as preprocessor
# if this fails go back to cpp
CPP_=fpp -f_com=no -free -w0 $*.F $*$(SUFFIX)

#-----------------------------------------------------------------------
# possible options for CPP:
# NGXhalf      charge density   reduced in X direction
# wNGXhalf     gamma point only reduced in X direction
# avoidalloc   avoid ALLOCATE if possible
# PGF90        work around some for some PGF90 / IFC bugs
# CACHE_SIZE   1000 for PII,PIII, 5000 for Athlon, 8000-12000 P4, PD
# RPROMU_DGEMV use DGEMV instead of DGEMM in RPRO (depends on used BLAS)
# RACCMU_DGEMV use DGEMV instead of DGEMM in RACC (depends on used BLAS)
# tbdyn        MD package of Tomas Bucko
#-----------------------------------------------------------------------

#CPP    = $(CPP_) -DHOST=\"LinuxIFC\" \
#         -DCACHE_SIZE=12000 -DPGF90 -Davoidalloc -DNGXhalf \
#         -DRPROMU_DGEMV -DRACCMU_DGEMV

#-----------------------------------------------------------------------
# general fortran flags (there must be a trailing blank on this line)
# byterecl is strictly required for ifc, since otherwise
# the WAVECAR file becomes huge
#-----------------------------------------------------------------------

FFLAGS = -FR -names lowercase -assume byterecl 

#-----------------------------------------------------------------------
# optimization
# we have tested whether higher optimisation improves performance
# -axK  SSE1 optimization, but also generate code executable on all mach.
#       xK improves performance somewhat on XP, and a is required in order
#       to run the code on older Athlons as well
# -xW   SSE2 optimization
# -axW  SSE2 optimization, but also generate code executable on all mach.
# -tpp6 P3 optimization
# -tpp7 P4 optimization
#-----------------------------------------------------------------------

# ifc.9.1, ifc.10.1 recommended
OFLAG=-O2 -ip

OFLAG_HIGH = $(OFLAG)
OBJ_HIGH =
OBJ_NOOPT =
DEBUG  = -FR -O0
INLINE = $(OFLAG)

#-----------------------------------------------------------------------
# the following lines specify the position of BLAS and LAPACK
# we recommend to use mkl, that is simple and most likely
# fastest on Intel based machines
#-----------------------------------------------------------------------

# mkl path for ifc 11 compiler
#MKL_PATH=$(MKLROOT)/lib/em64t

# mkl path for ifc 12 compiler
MKL_PATH=$(MKLROOT)/lib/intel64

MKL_FFTW_PATH=$(MKLROOT)/interfaces/fftw3xf/

# BLAS
# setting -DRPROMU_DGEMV -DRACCMU_DGEMV in the CPP lines usually speeds up program execution
# BLAS= -Wl,--start-group $(MKL_PATH)/libmkl_intel_lp64.a $(MKL_PATH)/libmkl_intel_thread.a $(MKL_PATH)/libmkl_core.a -Wl,--end-group -lguide
# faster linking and available from at least version 11
BLAS= -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread

# LAPACK, use vasp.5.lib/lapack_double
#LAPACK= ../vasp.5.lib/lapack_double.o

# LAPACK from mkl, usually faster and contains scaLAPACK as well
LAPACK= /opt/intel/composer_xe_2013_sp1.1.106/mkl/lib/intel64/libmkl_intel_lp64.a

# here a tricky version, link in libgoto and use mkl as a backup
# also needs a special line for LAPACK
# this is the best thing you can do on AMD based systems !!!!!!
#BLAS = -Wl,--start-group /opt/libs/libgoto/libgoto.so $(MKL_PATH)/libmkl_intel_thread.a $(MKL_PATH)/libmkl_core.a -Wl,--end-group -liomp5
#LAPACK= /opt/libs/libgoto/libgoto.so $(MKL_PATH)/libmkl_intel_lp64.a

#-----------------------------------------------------------------------

#LIB = -L../vasp.5.lib -ldmy \
#      ../vasp.5.lib/linpack_double.o $(LAPACK) \
#      $(BLAS)

# options for linking, nothing is required (usually)
LINK = 

#-----------------------------------------------------------------------
# fft libraries:
# VASP.5.2 can use fftw.3.1.X (http://www.fftw.org)
# since this version is faster on P4 machines, we recommend to use it
#-----------------------------------------------------------------------

#FFT3D = fft3dfurth.o fft3dlib.o

# alternatively: fftw.3.1.X is slightly faster and should be used if available
#FFT3D = fftw3d.o fft3dlib.o /opt/libs/fftw-3.1.2/lib/libfftw3.a

# you may also try to use the fftw wrapper to mkl (but the path might vary a lot)
# it seems this is best for AMD based systems
#FFT3D = fftw3d.o fft3dlib.o $(MKL_FFTW_PATH)/libfftw3xf_intel.a
#INCS = -I$(MKLROOT)/include/fftw

#=======================================================================
# MPI section, uncomment the following lines until
# general rules and compile lines
# presently we recommend OPENMPI, since it seems to offer better
# performance than lam or mpich
#
# !!! Please do not send me any queries on how to install MPI, I will
# certainly not answer them !!!!
#=======================================================================
#-----------------------------------------------------------------------
# fortran linker for mpi
#-----------------------------------------------------------------------

FC=mpif90
FCL=$(FC)

#-----------------------------------------------------------------------
# additional options for CPP in parallel version (see also above):
# NGZhalf      charge density   reduced in Z direction
# wNGZhalf     gamma point only reduced in Z direction
# scaLAPACK    use scaLAPACK (recommended if mkl is available)
# avoidalloc   avoid ALLOCATE if possible
# PGF90        work around some for some PGF90 / IFC bugs
# CACHE_SIZE   1000 for PII,PIII, 5000 for Athlon, 8000-12000 P4, PD
# RPROMU_DGEMV use DGEMV instead of DGEMM in RPRO (depends on used BLAS)
# RACCMU_DGEMV use DGEMV instead of DGEMM in RACC (depends on used BLAS)
# tbdyn        MD package of Tomas Bucko
#-----------------------------------------------------------------------

#-----------------------------------------------------------------------

CPP    = $(CPP_) -DMPI -DHOST=\"LinuxIFC\" -DIFC \
         -DCACHE_SIZE=4000 -DPGF90 -Davoidalloc -DNGZhalf \
         -DMPI_BLOCK=8000 -Duse_collective -DscaLAPACK \
         -DRPROMU_DGEMV -DRACCMU_DGEMV

#-----------------------------------------------------------------------
# location of SCALAPACK
# if you do not use SCALAPACK simply leave this section commented out
#-----------------------------------------------------------------------

# usually simplest link in mkl scaLAPACK
#BLACS= -lmkl_blacs_openmpi_lp64
#SCA= $(MKL_PATH)/libmkl_scalapack_lp64.a $(BLACS)

#-----------------------------------------------------------------------
# libraries
#-----------------------------------------------------------------------

LIB = -L../vasp.5.lib -ldmy \
      ../vasp.5.lib/linpack_double.o \
      $(SCA) $(LAPACK) $(BLAS)

#-----------------------------------------------------------------------
# parallel FFT
#-----------------------------------------------------------------------

# FFT: fftmpi.o with fft3dlib of Juergen Furthmueller
FFT3D = fftmpi.o fftmpi_map.o fft3dfurth.o fft3dlib.o

# alternatively: fftw.3.1.X is slightly faster and should be used if available
#FFT3D = fftmpiw.o fftmpi_map.o fftw3d.o fft3dlib.o /opt/libs/fftw-3.1.2/lib/libfftw3.a

# you may also try to use the fftw wrapper to mkl (but the path might vary a lot)
# it seems this is best for AMD based systems
#FFT3D = fftmpiw.o fftmpi_map.o fftw3d.o fft3dlib.o $(MKL_FFTW_PATH)/libfftw3xf_intel.a
#INCS = -I$(MKLROOT)/include/fftw

#-----------------------------------------------------------------------
# general rules and compile lines
#-----------------------------------------------------------------------

BASIC=  symmetry.o symlib.o lattlib.o random.o

SOURCE= base.o mpi.o smart_allocate.o xml.o \
        constant.o jacobi.o main_mpi.o scala.o \
        asa.o lattice.o poscar.o ini.o mgrid.o xclib.o vdw_nl.o xclib_grad.o \
        radial.o pseudo.o gridq.o ebs.o \
        mkpoints.o wave.o wave_mpi.o wave_high.o spinsym.o \
        $(BASIC) nonl.o nonlr.o nonl_high.o dfast.o choleski2.o \
        mix.o hamil.o xcgrad.o xcspin.o potex1.o potex2.o \
        constrmag.o cl_shift.o relativistic.o LDApU.o \
        paw_base.o metagga.o egrad.o pawsym.o pawfock.o pawlhf.o rhfatm.o hyperfine.o paw.o \
        mkpoints_full.o charge.o Lebedev-Laikov.o stockholder.o dipol.o pot.o \
        dos.o elf.o tet.o tetweight.o hamil_rot.o \
        chain.o dyna.o k-proj.o sphpro.o us.o core_rel.o \
        aedens.o wavpre.o wavpre_noio.o broyden.o \
        dynbr.o hamil_high.o rmm-diis.o reader.o writer.o tutor.o xml_writer.o \
        brent.o stufak.o fileio.o opergrid.o stepver.o \
        chgloc.o fast_aug.o fock_multipole.o fock.o mkpoints_change.o sym_grad.o \
        mymath.o internals.o npt_dynamics.o dynconstr.o dimer_heyden.o dvvtrajectory.o vdwforcefield.o \
        nmr.o pead.o subrot.o subrot_scf.o \
        force.o pwlhf.o gw_model.o optreal.o steep.o davidson.o david_inner.o \
        electron.o rot.o electron_all.o shm.o pardens.o paircorrection.o \
        optics.o constr_cell_relax.o stm.o finite_diff.o elpol.o \
        hamil_lr.o rmm-diis_lr.o subrot_cluster.o subrot_lr.o \
        lr_helper.o hamil_lrf.o elinear_response.o ilinear_response.o \
        linear_optics.o \
        setlocalpp.o wannier.o electron_OEP.o electron_lhf.o twoelectron4o.o \
        mlwf.o ratpol.o screened_2e.o wave_cacher.o chi_base.o wpot.o \
        local_field.o ump2.o ump2kpar.o fcidump.o ump2no.o \
        bse_te.o bse.o acfdt.o chi.o sydmat.o dmft.o \
        rmm-diis_mlr.o linear_response_NMR.o wannier_interpol.o linear_response.o

vasp: $(SOURCE) $(FFT3D) $(INC) main.o
	rm -f vasp
	$(FCL) -o vasp main.o $(SOURCE) $(FFT3D) $(LIB) $(LINK)
makeparam: $(SOURCE) $(FFT3D) makeparam.o main.F $(INC)
	$(FCL) -o makeparam $(LINK) makeparam.o $(SOURCE) $(FFT3D) $(LIB)
zgemmtest: zgemmtest.o base.o random.o $(INC)
	$(FCL) -o zgemmtest $(LINK) zgemmtest.o random.o base.o $(LIB)
dgemmtest: dgemmtest.o base.o random.o $(INC)
	$(FCL) -o dgemmtest $(LINK) dgemmtest.o random.o base.o $(LIB)
ffttest: base.o smart_allocate.o mpi.o mgrid.o random.o ffttest.o $(FFT3D) $(INC)
	$(FCL) -o ffttest $(LINK) ffttest.o mpi.o mgrid.o random.o smart_allocate.o base.o $(FFT3D) $(LIB)
kpoints: $(SOURCE) $(FFT3D) makekpoints.o main.F $(INC)
	$(FCL) -o kpoints $(LINK) makekpoints.o $(SOURCE) $(FFT3D) $(LIB)

clean:
	-rm -f *.g *.f *.o *.L *.mod ; touch *.F

main.o: main$(SUFFIX)
	$(FC) $(FFLAGS)$(DEBUG) $(INCS) -c main$(SUFFIX)
xcgrad.o: xcgrad$(SUFFIX)
	$(FC) $(FFLAGS) $(INLINE) $(INCS) -c xcgrad$(SUFFIX)
xcspin.o: xcspin$(SUFFIX)
	$(FC) $(FFLAGS) $(INLINE) $(INCS) -c xcspin$(SUFFIX)

makeparam.o: makeparam$(SUFFIX)
	$(FC) $(FFLAGS)$(DEBUG) $(INCS) -c makeparam$(SUFFIX)

makeparam$(SUFFIX): makeparam.F main.F
#
# MIND: I do not have a full dependency list for the include
# and MODULES: here are only the minimal basic dependencies
# if one structure is changed then touch_dep must be called
# with the corresponding name of the structure
#
base.o: base.inc base.F
mgrid.o: mgrid.inc mgrid.F
constant.o: constant.inc constant.F
lattice.o: lattice.inc lattice.F
setex.o: setexm.inc setex.F
pseudo.o: pseudo.inc pseudo.F
mkpoints.o: mkpoints.inc mkpoints.F
wave.o: wave.F
nonl.o: nonl.inc nonl.F
nonlr.o: nonlr.inc nonlr.F

$(OBJ_HIGH):
	$(CPP)
	$(FC) $(FFLAGS) $(OFLAG_HIGH) $(INCS) -c $*$(SUFFIX)
$(OBJ_NOOPT):
	$(CPP)
	$(FC) $(FFLAGS) $(INCS) -c $*$(SUFFIX)

fft3dlib_f77.o: fft3dlib_f77.F
	$(CPP)
	$(F77) $(FFLAGS_F77) -c $*$(SUFFIX)

.F.o:
	$(CPP)
	$(FC) $(FFLAGS) $(OFLAG) $(INCS) -c $*$(SUFFIX)
.F$(SUFFIX):
	$(CPP)
$(SUFFIX).o:
	$(FC) $(FFLAGS) $(OFLAG) $(INCS) -c $*$(SUFFIX)

# special rules
#-----------------------------------------------------------------------
# these special rules have been tested for ifc.11 and ifc.12 only

fft3dlib.o : fft3dlib.F
	$(CPP)
	$(FC) -FR -lowercase -O2 -c $*$(SUFFIX)
fft3dfurth.o : fft3dfurth.F
	$(CPP)
	$(FC) -FR -lowercase -O1 -c $*$(SUFFIX)
fftw3d.o : fftw3d.F
	$(CPP)
	$(FC) -FR -lowercase -O1 $(INCS) -c $*$(SUFFIX)
fftmpi.o : fftmpi.F
	$(CPP)
	$(FC) -FR -lowercase -O1 -c $*$(SUFFIX)
fftmpiw.o : fftmpiw.F
	$(CPP)
	$(FC) -FR -lowercase -O1 $(INCS) -c $*$(SUFFIX)
wave_high.o : wave_high.F
	$(CPP)
	$(FC) -FR -lowercase -O1 -c $*$(SUFFIX)

# the following rules are probably no longer required (-O3 seems to work)
wave.o : wave.F
	$(CPP)
	$(FC) -FR -lowercase -O2 -c $*$(SUFFIX)
paw.o : paw.F
	$(CPP)
	$(FC) -FR -lowercase -O2 -c $*$(SUFFIX)
cl_shift.o : cl_shift.F
	$(CPP)
	$(FC) -FR -lowercase -O2 -c $*$(SUFFIX)
us.o : us.F
	$(CPP)
	$(FC) -FR -lowercase -O2 -c $*$(SUFFIX)
LDApU.o : LDApU.F
	$(CPP)
	$(FC) -FR -lowercase -O2 -c $*$(SUFFIX)
mywai520  铁杆木虫 (著名写手)
[Answer] Accepted helping reply
★ ★ Thanks for participating, help index +1
fzx2008: coins +2, thanks for the guidance  2014-05-07 19:35:29
At first glance I can't spot the error. This depends heavily on your math libraries and environment variables; my guess is that an error like this comes from pulling in the wrong libraries. You'll have to track down the specifics yourself. I suggest you look at the makefile in this thread and then check the paths of the library routines: http://muchong.com/bbs/viewthread.php?tid=7255487
#2  2014-05-06 22:44:46
liliangfang  荣誉版主 (著名写手)
#3  2014-05-07 07:44:18
chuanghua304  禁虫 (职业作家)
★ ★ Thanks for participating, help index +1
fzx2008: coins +2, thanks for the reply!  2014-05-07 19:35:50
[This post's content has been blocked]
#4  2014-05-07 09:34:23
wmy8802217  木虫 (正式写手)
#5  2014-05-07 11:19:05
wmy8802217  木虫 (正式写手)
Does this count? I used mpich3 to run the serially compiled vasp in parallel... but it looks like each of the two cores just runs the whole calculation on its own = =

wmy@wmy:~ /usr/local/mpich/bin/mpiexec -n 2 vasp
 vasp.5.2.2 15Apr09 complex
 vasp.5.2.2 15Apr09 complex
 POSCAR found :  2 types and    4 ions
 POSCAR found :  2 types and    4 ions
 LDA part: xc-table for Pade appr. of Perdew
 LDA part: xc-table for Pade appr. of Perdew
 WARNING: stress and forces are not correct
 POSCAR, INCAR and KPOINTS ok, starting setup
 WARNING: stress and forces are not correct
 POSCAR, INCAR and KPOINTS ok, starting setup
 WARNING: small aliasing (wrap around) errors must be expected
 FFT: planning ...(  1  )
 WARNING: small aliasing (wrap around) errors must be expected
 FFT: planning ...(  1  )
 reading WAVECAR
 reading WAVECAR
 charge-density read from file: BiH
 charge-density read from file: BiH
 entering main loop
       N       E                  dE           d eps       ncg   rms        rms(c)
 entering main loop
       N       E                  dE           d eps       ncg   rms        rms(c)
 DAV:   1     0.422598100495E+02   0.42260E+02  -0.20528E+03  1636  0.444E+02
 DAV:   1     0.422598100495E+02   0.42260E+02  -0.20528E+03  1636  0.444E+02
 DAV:   2    -0.711140028514E+01  -0.49371E+02  -0.46922E+02  2608  0.904E+01
 DAV:   2    -0.711140028514E+01  -0.49371E+02  -0.46922E+02  2608  0.904E+01
 DAV:   3    -0.129856833239E+02  -0.58743E+01  -0.58397E+01  2060  0.407E+01
 DAV:   3    -0.129856833239E+02  -0.58743E+01  -0.58397E+01  2060  0.407E+01

top output:
Cpu(s): 95.4%us, 4.2%sy, 0.0%ni, 0.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem:  3716592k total, 3262492k used, 454100k free, 148292k buffers
Swap: 3856380k total, 168k used, 3856212k free, 1258312k cached
  PID USER PR NI VIRT RES  SHR  S %CPU %MEM TIME+   COMMAND
27817 wmy  20  0 478m 216m 8584 R  177  6.0 6:17.11 vasp
27816 wmy  20  0 478m 216m 8600 R  166  6.0 6:14.28 vasp
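The duplicated output is the classic symptom: mpiexec started two processes, but a serial build (or one linked against a different MPI) is unaware of its sibling ranks, so each copy runs the full job. A minimal sketch of that behaviour, with a plain shell function standing in for the serial vasp binary:

```shell
# a stand-in for a serial binary that knows nothing about MPI ranks
serial_prog() { printf 'POSCAR found :  2 types and    4 ions\n'; }

# "mpiexec -n 2" on a non-MPI program effectively just launches two
# independent copies, so every output line appears twice:
out=$(for rank in 1 2; do serial_prog; done)
printf '%s\n' "$out"
```

A genuinely parallel build would print each setup line once and split the work across the ranks instead.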

#6  2014-05-07 11:41:24
wmy8802217  木虫 (正式写手)
When I build with that enabled, I get this error instead:

ld: k1om architecture of input file `/opt/intel/composer_xe_2013_sp1.1.106/mkl/lib/intel64/libmkl_scalapack_lp64.a(cmmtadd.o)' is incompatible with i386:x86-64 output
ld: k1om architecture of input file `/opt/intel/composer_xe_2013_sp1.1.106/mkl/lib/intel64/libmkl_scalapack_lp64.a(cmmtcadd.o)' is incompatible with i386:x86-64 output
.....

#7  2014-05-07 11:48:48
yuraining  木虫 (小有名气)
[Answer] Accepted helping reply
★ ★ Thanks for participating, help index +1
fzx2008: coins +2, thanks for the reply!  2014-05-07 19:36:07
Change
#SCA= $(MKL_PATH)/libmkl_scalapack_lp64.a $(BLACS)
to:
SCA= $(MKL_PATH)/libmkl_scalapack_lp64.a -lmkl_blacs_openmpi_lp64
You may also need:
INCS = -I$(MKLROOT)/include/fftw
I have run into this error before: vasp 5.3 needs the scaLAPACK from this math library. Have a look at the official wiki: http://cms.mpi.univie.ac.at/wiki/index.php/Installing_VASP
#8  2014-05-07 13:04:55
wmy8802217  木虫 (正式写手)
rm -f vasp
/usr/local/mpich/bin/mpif90 -mkl -o vasp main.o base.o mpi.o smart_allocate.o xml.o constant.o jacobi.o main_mpi.o scala.o asa.o lattice.o poscar.o ini.o mgrid.o xclib.o vdw_nl.o xclib_grad.o radial.o pseudo.o gridq.o ebs.o mkpoints.o wave.o wave_mpi.o wave_high.o spinsym.o symmetry.o symlib.o lattlib.o random.o nonl.o nonlr.o nonl_high.o dfast.o choleski2.o mix.o hamil.o xcgrad.o xcspin.o potex1.o potex2.o constrmag.o cl_shift.o relativistic.o LDApU.o paw_base.o metagga.o egrad.o pawsym.o pawfock.o pawlhf.o rhfatm.o hyperfine.o paw.o mkpoints_full.o charge.o Lebedev-Laikov.o stockholder.o dipol.o pot.o dos.o elf.o tet.o tetweight.o hamil_rot.o chain.o dyna.o k-proj.o sphpro.o us.o core_rel.o aedens.o wavpre.o wavpre_noio.o broyden.o dynbr.o hamil_high.o rmm-diis.o reader.o writer.o tutor.o xml_writer.o brent.o stufak.o fileio.o opergrid.o stepver.o chgloc.o fast_aug.o fock_multipole.o fock.o mkpoints_change.o sym_grad.o mymath.o internals.o npt_dynamics.o dynconstr.o dimer_heyden.o dvvtrajectory.o vdwforcefield.o nmr.o pead.o subrot.o subrot_scf.o force.o pwlhf.o gw_model.o optreal.o steep.o davidson.o david_inner.o electron.o rot.o electron_all.o shm.o pardens.o paircorrection.o optics.o constr_cell_relax.o stm.o finite_diff.o elpol.o hamil_lr.o rmm-diis_lr.o subrot_cluster.o subrot_lr.o lr_helper.o hamil_lrf.o elinear_response.o ilinear_response.o linear_optics.o setlocalpp.o wannier.o electron_OEP.o electron_lhf.o twoelectron4o.o mlwf.o ratpol.o screened_2e.o wave_cacher.o chi_base.o wpot.o local_field.o ump2.o ump2kpar.o fcidump.o ump2no.o bse_te.o bse.o acfdt.o chi.o sydmat.o dmft.o rmm-diis_mlr.o linear_response_NMR.o wannier_interpol.o linear_response.o -L../vasp.5.lib -ldmy ../vasp.5.lib/linpack_double.o /opt/intel/composer_xe_2013.4.183/mkl/lib/intel64/libmkl_scalapack_lp64.a /opt/intel/composer_xe_2013.4.183/mkl/lib/intel64/libmkl_blacs_openmpi_lp64.a -L/opt/intel/composer_xe_2013.4.183/mkl/lib/intel64 -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread -L/opt/intel/composer_xe_2013.4.183/mkl/lib/intel64 -lmkl_blas95_lp64 -limf -lm

The above is from make.log. It seems that with

MKL_PATH=/opt/intel/composer_xe_2013.4.183/mkl/lib/intel64
BLAS= -L$(MKL_PATH) -lmkl_blas95_lp64
LAPACK= -L$(MKL_PATH) -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread
SCA= $(MKL_PATH)/libmkl_scalapack_lp64.a $(MKL_PATH)/libmkl_blacs_openmpi_lp64.a

it gets stuck at this point and won't go further. The errors are:

/opt/intel/composer_xe_2013.4.183/mkl/lib/intel64/libmkl_blacs_openmpi_lp64.a(zgebs2d_.o):../../../../scalapack/BLACS/SRC/MPI/zgebs2d_.c:(.text+0x28e): more undefined references to `ompi_mpi_byte' follow
/opt/intel/composer_xe_2013.4.183/mkl/lib/intel64/libmkl_blacs_openmpi_lp64.a(igebr2d_.o): In function `igebr2d_':
../../../../scalapack/BLACS/SRC/MPI/igebr2d_.c:(.text+0x108): undefined reference to `ompi_mpi_int'

What should I do? 0.0

#9  2014-05-07 19:09:37
pangrui1985  铜虫 (小有名气)
#10  2014-05-07 20:57:55












