| Views: 4567 | Replies: 19 |
pinwei2014
Copper Worm (Newcomer)
[Help]
Error when building VASP 5.3 in parallel, asking for expert advice (1 person has participated so far)
This is what I get after running make:

./preprocess <fftmpi.F | /usr/bin/cpp -P -C -traditional >fftmpi.f90 -DMPI -DHOST=\"LinuxIFC\" -DIFC -DCACHE_SIZE=4000 -DPGF90 -Davoidalloc -DNGZhalf -DMPI_BLOCK=8000 -Duse_collective -DscaLAPACK
mpif90 -FR -lowercase -O1 -c fftmpi.f90
fftmpi.f90:75.14:
    USE prec
              1
Fatal Error: File 'prec.mod' opened at (1) is not a GFORTRAN module file
make: *** [fftmpi.o] Error 1

Is this because my fftw is not installed properly? My Makefile is as follows:

.SUFFIXES: .inc .f .f90 .F
#-----------------------------------------------------------------------
# Makefile for Intel Fortran compiler for Pentium/Athlon/Opteron
# based systems
# we recommend this makefile for both Intel as well as AMD systems
# for AMD based systems appropriate BLAS (libgoto) and fftw libraries are
# however mandatory (whereas they are optional for Intel platforms)
# For Athlon we recommend
#  ) to link against libgoto (and mkl as a backup for missing routines)
#  ) odd enough link in libfftw3xf_intel.a (fftw interface for mkl)
# feedback is greatly appreciated
#
# The makefile was tested only under Linux on Intel and AMD platforms
# the following compiler versions have been tested:
#  - ifc.7.1  works stable somewhat slow but reliably
#  - ifc.8.1  fails to compile the code properly
#  - ifc.9.1  recommended (both for 32 and 64 bit)
#  - ifc.10.1 partially recommended (both for 32 and 64 bit)
#             tested build 20080312 Package ID: l_fc_p_10.1.015
#             the gamma only mpi version can not be compiles
#             using ifc.10.1
#  - ifc.11.1 partially recommended (some problems with Gamma only and intel fftw)
#             Build 20090630 Package ID: l_cprof_p_11.1.046
#  - ifort.12.1 strongly recommended (we use this to compile vasp)
#             Version 12.1.5.339 Build 20120612
#
# it might be required to change some of library path ways, since
# LINUX installations vary a lot
#
# Hence check ***ALL*** options in this makefile very carefully
#-----------------------------------------------------------------------
#
# BLAS must be installed on the machine
# there are several options:
# 1) very slow but works:
#    retrieve the lapackage from ftp.netlib.org
#    and compile the blas routines (BLAS/SRC directory)
#    please use g77 or f77 for the compilation. When I tried to
#    use pgf77 or pgf90 for BLAS, VASP hang up when calling
#    ZHEEV (however this was with lapack 1.1 now I use lapack 2.0)
# 2) more desirable: get an optimized BLAS
#
# the two most reliable packages around are presently:
# 2a) Intels own optimised BLAS (PIII, P4, PD, PC2, Itanium)
#     http://developer.intel.com/software/products/mkl/
#     this is really excellent, if you use Intel CPU's
#
# 2b) probably fastest SSE2 (4 GFlops on P4, 2.53 GHz, 16 GFlops PD,
#     around 30 GFlops on Quad core)
#     Kazushige Goto's BLAS
#     http://www.cs.utexas.edu/users/kgoto/signup_first.html
#     http://www.tacc.utexas.edu/resources/software/
#
#-----------------------------------------------------------------------

# all CPP processed fortran files have the extension .f90
SUFFIX=.f90

#-----------------------------------------------------------------------
# fortran compiler and linker
#-----------------------------------------------------------------------
#FC=ifort
# fortran linker
#FCL=$(FC)

#-----------------------------------------------------------------------
# whereis CPP ?? (I need CPP, can't use gcc with proper options)
# that's the location of gcc for SUSE 5.3
#
#  CPP_ = /usr/lib/gcc-lib/i486-linux/2.7.2/cpp -P -C
#
# that's probably the right line for some Red Hat distribution:
#
#  CPP_ = /usr/lib/gcc-lib/i386-redhat-linux/2.7.2.3/cpp -P -C
#
# SUSE X.X, maybe some Red Hat distributions:

CPP_ = ./preprocess <$*.F | /usr/bin/cpp -P -C -traditional >$*$(SUFFIX)

# this release should be fpp clean
# we now recommend fpp as preprocessor
# if this fails go back to cpp
#CPP_=fpp -f_com=no -free -w0 $*.F $*$(SUFFIX)

#-----------------------------------------------------------------------
# possible options for CPP:
# NGXhalf        charge density reduced in X direction
# wNGXhalf       gamma point only reduced in X direction
# avoidalloc     avoid ALLOCATE if possible
# PGF90          work around some for some PGF90 / IFC bugs
# CACHE_SIZE     1000 for PII,PIII, 5000 for Athlon, 8000-12000 P4, PD
# RPROMU_DGEMV   use DGEMV instead of DGEMM in RPRO (depends on used BLAS)
# RACCMU_DGEMV   use DGEMV instead of DGEMM in RACC (depends on used BLAS)
# tbdyn          MD package of Tomas Bucko
#-----------------------------------------------------------------------
#CPP    = $(CPP_) -DHOST=\"LinuxIFC\" \
#         -DCACHE_SIZE=12000 -DPGF90 -Davoidalloc -DNGXhalf \
#         -DRPROMU_DGEMV -DRACCMU_DGEMV

#-----------------------------------------------------------------------
# general fortran flags (there must a trailing blank on this line)
# byterecl is strictly required for ifc, since otherwise
# the WAVECAR file becomes huge
#-----------------------------------------------------------------------

FFLAGS = -FR -names lowercase -assume byterecl

#-----------------------------------------------------------------------
# optimization
# we have tested whether higher optimisation improves performance
# -axK  SSE1 optimization, but also generate code executable on all mach.
#       xK improves performance somewhat on XP, and a is required in order
#       to run the code on older Athlons as well
# -xW   SSE2 optimization
# -axW  SSE2 optimization, but also generate code executable on all mach.
# -tpp6 P3 optimization
# -tpp7 P4 optimization
#-----------------------------------------------------------------------

# ifc.9.1, ifc.10.1 recommended
OFLAG=-O2 -ip
OFLAG_HIGH = $(OFLAG)
OBJ_HIGH =
OBJ_NOOPT =
DEBUG = -FR -O0
INLINE = $(OFLAG)

#-----------------------------------------------------------------------
# the following lines specify the position of BLAS and LAPACK
# we recommend to use mkl, that is simple and most likely
# fastest in Intel based machines
#-----------------------------------------------------------------------

# mkl path for ifc 11 compiler
#MKL_PATH=$(MKLROOT)/lib/em64t

# mkl path for ifc 12 compiler
MKL_PATH=$(MKLROOT)/lib/intel64
MKL_FFTW_PATH=$(MKLROOT)/interfaces/fftw3xf/

# BLAS
# setting -DRPROMU_DGEMV -DRACCMU_DGEMV in the CPP lines usually speeds up program execution
# BLAS= -Wl,--start-group $(MKL_PATH)/libmkl_intel_lp64.a $(MKL_PATH)/libmkl_intel_thread.a $(MKL_PATH)/libmkl_core.a -Wl,--end-group -lguide

# faster linking and available from at least version 11
BLAS=-L/opt/intel/composer_xe_2013_sp1.1.106/mkl/lib/intel64 -lmkl_intel_lp64 -lmkl_core -lmkl_sequential -lpthread

# LAPACK, use vasp.5.lib/lapack_double
#LAPACK= ../vasp.5.lib/lapack_double.o

# LAPACK from mkl, usually faster and contains scaLAPACK as well
LAPACK= $(MKL_PATH)/libmkl_intel_lp64.a

BLACS= /home/lbg/BLACS/LIB/blacs_MPI-LINUX-0.a
SCA= /home/lbg/scalapack-2.0.2/libscalapack.a $(BLACS)

# here a tricky version, link in libgoto and use mkl as a backup
# also needs a special line for LAPACK
# this is the best thing you can do on AMD based systems !!!!!!
#BLAS = -Wl,--start-group /opt/libs/libgoto/libgoto.so $(MKL_PATH)/libmkl_intel_thread.a $(MKL_PATH)/libmkl_core.a -Wl,--end-group -liomp5
#LAPACK= /opt/libs/libgoto/libgoto.so $(MKL_PATH)/libmkl_intel_lp64.a

#-----------------------------------------------------------------------

#LIB = -L../vasp.5.lib -ldmy \
#      ../vasp.5.lib/linpack_double.o $(LAPACK) \
#      $(BLAS)

# options for linking, nothing is required (usually)
LINK =

#-----------------------------------------------------------------------
# fft libraries:
# VASP.5.2 can use fftw.3.1.X (http://www.fftw.org)
# since this version is faster on P4 machines, we recommend to use it
#-----------------------------------------------------------------------

#FFT3D = fftmpi.o fftmpi_map.o fft3dfurth.o fft3dlib.o /home/lbg/fftw-3.3.3/lib/libfftw3_mpi.a

# alternatively: fftw.3.1.X is slighly faster and should be used if available
#FFT3D = fftw3d.o fft3dlib.o /opt/libs/fftw-3.1.2/lib/libfftw3.a

# you may also try to use the fftw wrapper to mkl (but the path might vary a lot)
# it seems this is best for AMD based systems
#FFT3D = fftw3d.o fft3dlib.o $(MKL_FFTW_PATH)/libfftw3xf_intel.a
#INCS = -I$(MKLROOT)/include/fftw

#=======================================================================
# MPI section, uncomment the following lines until
#                 general rules and compile lines
# presently we recommend OPENMPI, since it seems to offer better
# performance than lam or mpich
#
# !!! Please do not send me any queries on how to install MPI, I will
# certainly not answer them !!!!
#=======================================================================
#-----------------------------------------------------------------------
# fortran linker for mpi
#-----------------------------------------------------------------------

FC=mpif90
FCL=$(FC)

#-----------------------------------------------------------------------
# additional options for CPP in parallel version (see also above):
# NGZhalf        charge density reduced in Z direction
# wNGZhalf       gamma point only reduced in Z direction
# scaLAPACK      use scaLAPACK (recommended if mkl is available)
# avoidalloc     avoid ALLOCATE if possible
# PGF90          work around some for some PGF90 / IFC bugs
# CACHE_SIZE     1000 for PII,PIII, 5000 for Athlon, 8000-12000 P4, PD
# RPROMU_DGEMV   use DGEMV instead of DGEMM in RPRO (depends on used BLAS)
# RACCMU_DGEMV   use DGEMV instead of DGEMM in RACC (depends on used BLAS)
# tbdyn          MD package of Tomas Bucko
#-----------------------------------------------------------------------
#-----------------------------------------------------------------------

CPP = $(CPP_) -DMPI -DHOST=\"LinuxIFC\" -DIFC \
      -DCACHE_SIZE=4000 -DPGF90 -Davoidalloc -DNGZhalf \
      -DMPI_BLOCK=8000 -Duse_collective -DscaLAPACK
##    -DRPROMU_DGEMV -DRACCMU_DGEMV

#-----------------------------------------------------------------------
# location of SCALAPACK
# if you do not use SCALAPACK simply leave this section commented out
#-----------------------------------------------------------------------

# usually simplest link in mkl scaLAPACK
#BLACS= -lmkl_blacs_openmpi_lp64
#SCA= $(MKL_PATH)/libmkl_scalapack_lp64.a $(BLACS)

#-----------------------------------------------------------------------
# libraries
#-----------------------------------------------------------------------

LIB = -L../vasp.5.lib -ldmy \
      ../vasp.5.lib/linpack_double.o \
      $(SCA) $(LAPACK) $(BLAS)

#-----------------------------------------------------------------------
# parallel FFT
#-----------------------------------------------------------------------

# FFT: fftmpi.o with fft3dlib of Juergen Furthmueller
FFT3D = fftmpi.o fftmpi_map.o fft3dfurth.o fft3dlib.o /home/lbg/fftw-3.3.3/lib/libfftw3_mpi.a

# alternatively: fftw.3.1.X is slighly faster and should be used if available
#FFT3D = fftmpiw.o fftmpi_map.o fftw3d.o fft3dlib.o /opt/libs/fftw-3.1.2/lib/libfftw3.a

# you may also try to use the fftw wrapper to mkl (but the path might vary a lot)
# it seems this is best for AMD based systems
#FFT3D = fftmpiw.o fftmpi_map.o fftw3d.o fft3dlib.o $(MKL_FFTW_PATH)/libfftw3xf_intel.a
#INCS = -I$(MKLROOT)/include/fftw

#-----------------------------------------------------------------------
# general rules and compile lines
#-----------------------------------------------------------------------

BASIC= symmetry.o symlib.o lattlib.o random.o

SOURCE= base.o mpi.o smart_allocate.o xml.o \
        constant.o jacobi.o main_mpi.o scala.o \
        asa.o lattice.o poscar.o ini.o mgrid.o xclib.o vdw_nl.o xclib_grad.o \
        radial.o pseudo.o gridq.o ebs.o \
        mkpoints.o wave.o wave_mpi.o wave_high.o spinsym.o \
        $(BASIC) nonl.o nonlr.o nonl_high.o dfast.o choleski2.o \
        mix.o hamil.o xcgrad.o xcspin.o potex1.o potex2.o \
        constrmag.o cl_shift.o relativistic.o LDApU.o \
        paw_base.o metagga.o egrad.o pawsym.o pawfock.o pawlhf.o rhfatm.o hyperfine.o paw.o \
        mkpoints_full.o charge.o Lebedev-Laikov.o stockholder.o dipol.o pot.o \
        dos.o elf.o tet.o tetweight.o hamil_rot.o \
        chain.o dyna.o k-proj.o sphpro.o us.o core_rel.o \
        aedens.o wavpre.o wavpre_noio.o broyden.o \
        dynbr.o hamil_high.o rmm-diis.o reader.o writer.o tutor.o xml_writer.o \
        brent.o stufak.o fileio.o opergrid.o stepver.o \
        chgloc.o fast_aug.o fock_multipole.o fock.o mkpoints_change.o sym_grad.o \
        mymath.o internals.o npt_dynamics.o dynconstr.o dimer_heyden.o dvvtrajectory.o vdwforcefield.o \
        nmr.o pead.o subrot.o subrot_scf.o \
        force.o pwlhf.o gw_model.o optreal.o steep.o davidson.o david_inner.o \
        electron.o rot.o electron_all.o shm.o pardens.o paircorrection.o \
        optics.o constr_cell_relax.o stm.o finite_diff.o elpol.o \
        hamil_lr.o rmm-diis_lr.o subrot_cluster.o subrot_lr.o \
        lr_helper.o hamil_lrf.o elinear_response.o ilinear_response.o \
        linear_optics.o \
        setlocalpp.o wannier.o electron_OEP.o electron_lhf.o twoelectron4o.o \
        mlwf.o ratpol.o screened_2e.o wave_cacher.o chi_base.o wpot.o \
        local_field.o ump2.o ump2kpar.o fcidump.o ump2no.o \
        bse_te.o bse.o acfdt.o chi.o sydmat.o dmft.o \
        rmm-diis_mlr.o linear_response_NMR.o wannier_interpol.o linear_response.o

vasp: $(SOURCE) $(FFT3D) $(INC) main.o
    rm -f vasp
    $(FCL) -o vasp main.o $(SOURCE) $(FFT3D) $(LIB) $(LINK)
makeparam: $(SOURCE) $(FFT3D) makeparam.o main.F $(INC)
    $(FCL) -o makeparam $(LINK) makeparam.o $(SOURCE) $(FFT3D) $(LIB)
zgemmtest: zgemmtest.o base.o random.o $(INC)
    $(FCL) -o zgemmtest $(LINK) zgemmtest.o random.o base.o $(LIB)
dgemmtest: dgemmtest.o base.o random.o $(INC)
    $(FCL) -o dgemmtest $(LINK) dgemmtest.o random.o base.o $(LIB)
ffttest: base.o smart_allocate.o mpi.o mgrid.o random.o ffttest.o $(FFT3D) $(INC)
    $(FCL) -o ffttest $(LINK) ffttest.o mpi.o mgrid.o random.o smart_allocate.o base.o $(FFT3D) $(LIB)
kpoints: $(SOURCE) $(FFT3D) makekpoints.o main.F $(INC)
    $(FCL) -o kpoints $(LINK) makekpoints.o $(SOURCE) $(FFT3D) $(LIB)

clean:
    -rm -f *.g *.f *.o *.L *.mod ; touch *.F

main.o: main$(SUFFIX)
    $(FC) $(FFLAGS)$(DEBUG) $(INCS) -c main$(SUFFIX)
xcgrad.o: xcgrad$(SUFFIX)
    $(FC) $(FFLAGS) $(INLINE) $(INCS) -c xcgrad$(SUFFIX)
xcspin.o: xcspin$(SUFFIX)
    $(FC) $(FFLAGS) $(INLINE) $(INCS) -c xcspin$(SUFFIX)

makeparam.o: makeparam$(SUFFIX)
    $(FC) $(FFLAGS)$(DEBUG) $(INCS) -c makeparam$(SUFFIX)

makeparam$(SUFFIX): makeparam.F main.F
#
# MIND: I do not have a full dependency list for the include
# and MODULES: here are only the minimal basic dependencies
# if one strucuture is changed then touch_dep must be called
# with the corresponding name of the structure
#
base.o: base.inc base.F
mgrid.o: mgrid.inc mgrid.F
constant.o: constant.inc constant.F
lattice.o: lattice.inc lattice.F
setex.o: setexm.inc setex.F
pseudo.o: pseudo.inc pseudo.F
mkpoints.o: mkpoints.inc mkpoints.F
wave.o: wave.F
nonl.o: nonl.inc nonl.F
nonlr.o: nonlr.inc nonlr.F

$(OBJ_HIGH):
    $(CPP)
    $(FC) $(FFLAGS) $(OFLAG_HIGH) $(INCS) -c $*$(SUFFIX)
$(OBJ_NOOPT):
    $(CPP)
    $(FC) $(FFLAGS) $(INCS) -c $*$(SUFFIX)

fft3dlib_f77.o: fft3dlib_f77.F
    $(CPP)
    $(F77) $(FFLAGS_F77) -c $*$(SUFFIX)

.F.o:
    $(CPP)
    $(FC) $(FFLAGS) $(OFLAG) $(INCS) -c $*$(SUFFIX)
.F$(SUFFIX):
    $(CPP)
$(SUFFIX).o:
    $(FC) $(FFLAGS) $(OFLAG) $(INCS) -c $*$(SUFFIX)

# special rules
#-----------------------------------------------------------------------
# these special rules have been tested for ifc.11 and ifc.12 only

fft3dlib.o : fft3dlib.F
    $(CPP)
    $(FC) -FR -lowercase -O2 -c $*$(SUFFIX)
fft3dfurth.o : fft3dfurth.F
    $(CPP)
    $(FC) -FR -lowercase -O1 -c $*$(SUFFIX)
fftw3d.o : fftw3d.F
    $(CPP)
    $(FC) -FR -lowercase -O1 $(INCS) -c $*$(SUFFIX)
fftmpi.o : fftmpi.F
    $(CPP)
    $(FC) -FR -lowercase -O1 -c $*$(SUFFIX)
fftmpiw.o : fftmpiw.F
    $(CPP)
    $(FC) -FR -lowercase -O1 $(INCS) -c $*$(SUFFIX)
wave_high.o : wave_high.F
    $(CPP)
    $(FC) -FR -lowercase -O1 -c $*$(SUFFIX)

# the following rules are probably no longer required (-O3 seems to work)
wave.o : wave.F
    $(CPP)
    $(FC) -FR -lowercase -O2 -c $*$(SUFFIX)
paw.o : paw.F
    $(CPP)
    $(FC) -FR -lowercase -O2 -c $*$(SUFFIX)
cl_shift.o : cl_shift.F
    $(CPP)
    $(FC) -FR -lowercase -O2 -c $*$(SUFFIX)
us.o : us.F
    $(CPP)
    $(FC) -FR -lowercase -O2 -c $*$(SUFFIX)
LDApU.o : LDApU.F
    $(CPP)
    $(FC) -FR -lowercase -O2 -c $*$(SUFFIX)
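Editor's note for readers hitting the same error: "'prec.mod' is not a GFORTRAN module file" usually means the .mod files in the build directory were written by a different Fortran compiler than the one mpif90 is now calling, either because mpif90 wraps gfortran while this makefile assumes ifort, or because stale module files from an earlier gfortran attempt are still lying around. A minimal diagnostic sketch, assuming OpenMPI wrapper compilers; the VASP source path is illustrative, not from the original post:

# Check which compiler the MPI wrapper really invokes; for this makefile it should be ifort
mpif90 --version
mpif90 --showme               # OpenMPI: prints the underlying compiler command line

# If it reports GNU Fortran, either rebuild OpenMPI with FC=ifort (see further down
# in this thread) or, for a quick test, override the wrapped compiler (OpenMPI only):
export OMPI_FC=ifort

# Then discard objects/modules left behind by the previous compiler and rebuild
cd /path/to/vasp.5.3          # illustrative path
make clean                    # the makefile's clean target removes *.o and *.mod
make

If mpif90 already wraps ifort, then a leftover prec.mod from an earlier gfortran run is the likely culprit and make clean alone may be enough; the error is not related to fftw.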
chuanghua304
Banned user (Professional Writer)
#2, 2014-02-28 15:42:08
chuanghua304
Banned user (Professional Writer)
[The content of this post has been hidden]
#19, 2014-10-27 22:08:20
pinwei2014
Copper Worm (Newcomer)
- Assists: 0 (Kindergarten)
- Gold coins: 85.3
- Posts: 30
- Online: 17.3 hours
- Member ID: 2970810
- Registered: 2014-02-16
- Field: Surface science and engineering of metallic materials
#3, 2014-03-03 16:36:34
chuanghua304
Banned user (Professional Writer)
[The content of this post has been hidden]
#4, 2014-03-03 16:40:50
pinwei2014
Copper Worm (Newcomer)
OK, the verification passed. But now it reports:

Missing optional pre-requisites
-- Intel(R) VTune(TM) Amplifier XE 2013 Update 4: Ptrace protection is active. Product may fail to collect analysis data.
-- Intel(R) VTune(TM) Amplifier XE 2013 Update 4: Power analysis is not enabled
-- Intel(R) Inspector XE 2013 Update 4: Ptrace protection is active. Product may fail to collect analysis data.
-- Intel(R) Advisor XE 2013 Update 2: Ptrace protection is active. Product may fail to collect analysis data.

Can I skip these, or do I have to install all of them?
#5, 2014-03-03 16:51:20
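Editor's note: these entries are listed as "Missing optional pre-requisites", so they concern the bundled analysis tools (VTune, Inspector, Advisor), not the compiler or MKL, and they are not needed to build VASP. The "Ptrace protection is active" warning typically refers to the kernel's Yama ptrace restriction; a hedged sketch of how to check and, only if you later want to use those profilers, temporarily relax it (assumes a Linux kernel with the Yama LSM):

cat /proc/sys/kernel/yama/ptrace_scope       # 1 = restricted (triggers the warning), 0 = classic ptrace
sudo sysctl -w kernel.yama.ptrace_scope=0    # temporary; set back to 1 when done profiling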
chuanghua304
Banned user (Professional Writer)
[The content of this post has been hidden]
#6, 2014-03-03 22:47:12
pinwei2014
Copper Worm (Newcomer)
OK, that part works now, but when installing Openmpi the final make install step failed again:

libtool: install: error: relink `libmpi_cxx.la' with the above command before installing it
make[3]: *** [install-libLTLIBRARIES] Error 1
make[3]: Leaving directory `/home/lbg/openmpi-1.6.4/ompi/mpi/cxx'
make[2]: *** [install-am] Error 2
make[2]: Leaving directory `/home/lbg/openmpi-1.6.4/ompi/mpi/cxx'
make[1]: *** [install-recursive] Error 1
make[1]: Leaving directory `/home/lbg/openmpi-1.6.4/ompi'
make: *** [install-recursive] Error 1
#7, 2014-03-04 16:10:01
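Editor's note: this libtool relink failure during make install often shows up when the install step runs in an environment where the compiler's runtime libraries are no longer on the linker's search path (for example a fresh shell, or sudo without the Intel environment loaded), so libtool cannot relink libmpi_cxx.la. A hedged sketch of a clean rebuild of OpenMPI 1.6.4 with the Intel compilers; the compilervars.sh path and install prefix are illustrative, only the source path comes from the log above:

# Load the Intel compiler environment in the SAME shell used for make and make install
source /opt/intel/composer_xe_2013_sp1.1.106/bin/compilervars.sh intel64   # path may differ

cd /home/lbg/openmpi-1.6.4
make distclean                                # drop the partially built tree
./configure --prefix=$HOME/openmpi-intel \
            CC=icc CXX=icpc F77=ifort FC=ifort
make -j4
make install                                  # user-writable prefix, so no sudo is needed

# Put the new wrappers first in PATH before building VASP
export PATH=$HOME/openmpi-intel/bin:$PATH
export LD_LIBRARY_PATH=$HOME/openmpi-intel/lib:$LD_LIBRARY_PATH

Installing into a prefix you own avoids the sudo environment problem entirely, and building the wrappers against ifort also prevents the gfortran/prec.mod mismatch from the first post.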
pinwei2014
Copper Worm (Newcomer)
#8, 2014-03-04 19:57:09
chuanghua304
Banned user (Professional Writer)
[The content of this post has been hidden]
#9, 2014-03-04 20:13:55
chuanghua304
Banned user (Professional Writer)
[The content of this post has been hidden]
#10, 2014-03-04 20:24:14












回复此楼
linhuincu