Views: 2820 | Replies: 6
[Discussion]
[Help] VASP installation fails: make: *** [fftmpi_map.o] Error 1 [SOLVED]
I have been installing VASP recently, and the build always fails at the end:

guo@ubuntu:~/vasp.4.6$ make
./preprocess
mpif90 -FR -lowercase -assume byterecl -O3 -ip -ftz -c fftmpi_map.f90
fftmpi_map.f90(77): error #6460: This is not a field name that is defined in the encompassing structure.   [NODE_ME]
      NODE_ME=C%NODE_ME
----------------^
fftmpi_map.f90(78): error #6460: This is not a field name that is defined in the encompassing structure.   [IONODE]
      IONODE =C%IONODE
----------------^
fftmpi_map.f90(97): error #6460: This is not a field name that is defined in the encompassing structure.   [NCPU]
      NC=C%NCPU+1
-----------^
fftmpi_map.f90(142): error #6460: This is not a field name that is defined in the encompassing structure.   [MPI_COMM]
      CALL MPI_barrier( C%MPI_COMM, ierror )
----------------------------^
fftmpi_map.f90(327): error #6460: This is not a field name that is defined in the encompassing structure.   [NCPU]
      DO I=MAP%PTRI(COMM%NCPU+1)+1,NZERO
----------------------------^
fftmpi_map.f90(383): error #6460: This is not a field name that is defined in the encompassing structure.   [NCPU]
      DO I=MAP%PTR(COMM%NCPU+1)+1,NZERO
---------------------------^
compilation aborted for fftmpi_map.f90 (code 1)
make: *** [fftmpi_map.o] Error 1
guo@ubuntu:~/vasp.4.6$ cd ..
guo@ubuntu:~$ which mpicc
/usr/local/bin/mpicc
guo@ubuntu:~$ source .bashrc
guo@ubuntu:~$ source .bashrc
guo@ubuntu:~$ cd vasp.4.6
guo@ubuntu:~/vasp.4.6$ make
./preprocess
mpif90 -FR -lowercase -assume byterecl -O3 -ip -ftz -c fftmpi_map.f90
(the same six #6460 errors as above, then:)
compilation aborted for fftmpi_map.f90 (code 1)
make: *** [fftmpi_map.o] Error 1

My makefile:

.SUFFIXES: .inc .f .f90 .F
#-----------------------------------------------------------------------
# Makefile for Intel Fortran compiler for Pentium/Athlon/Opteron
# based systems
# we recommend this makefile for both Intel as well as AMD systems
# for AMD based systems appropriate BLAS and fftw libraries are
# however mandatory (whereas they are optional for Intel platforms)
#
# The makefile was tested only under Linux on Intel and AMD platforms
# the following compiler versions have been tested:
#  - ifc.7.1  works stable somewhat slow but reliably
#  - ifc.8.1  fails to compile the code properly
#  - ifc.9.1  recommended (both for 32 and 64 bit)
#  - ifc.10.1 partially recommended (both for 32 and 64 bit)
#             tested build 20080312 Package ID: l_fc_p_10.1.015
#             the gamma only mpi version can not be compiled
#             using ifc.10.1
#
# it might be required to change some of the library paths, since
# LINUX installations vary a lot
# Hence check ***ALL*** options in this makefile very carefully
#-----------------------------------------------------------------------
#
# BLAS must be installed on the machine
# there are several options:
# 1) very slow but works:
#    retrieve the lapackage from ftp.netlib.org
#    and compile the blas routines (BLAS/SRC directory)
#    please use g77 or f77 for the compilation. When I tried to
#    use pgf77 or pgf90 for BLAS, VASP hung up when calling
#    ZHEEV (however this was with lapack 1.1 now I use lapack 2.0)
# 2) more desirable: get an optimized BLAS
#
# the two most reliable packages around are presently:
# 2a) Intel's own optimised BLAS (PIII, P4, PD, PC2, Itanium)
#     http://developer.intel.com/software/products/mkl/
#     this is really excellent, if you use Intel CPU's
#
# 2b) probably fastest SSE2 (4 GFlops on P4, 2.53 GHz, 16 GFlops PD,
#     around 30 GFlops on Quad core)
#     Kazushige Goto's BLAS
#     http://www.cs.utexas.edu/users/kgoto/signup_first.html
#     http://www.tacc.utexas.edu/resources/software/
#
#-----------------------------------------------------------------------

# all CPP processed fortran files have the extension .f90
SUFFIX=.f90

#-----------------------------------------------------------------------
# fortran compiler and linker
#-----------------------------------------------------------------------
#FC=ifort
# fortran linker
#FCL=$(FC)

#-----------------------------------------------------------------------
# whereis CPP ?? (I need CPP, can't use gcc with proper options)
# that's the location of gcc for SUSE 5.3
#
#  CPP_ = /usr/lib/gcc-lib/i486-linux/2.7.2/cpp -P -C
#
# that's probably the right line for some Red Hat distribution:
#
#  CPP_ = /usr/lib/gcc-lib/i386-redhat-linux/2.7.2.3/cpp -P -C
#
#  SUSE X.X, maybe some Red Hat distributions:

CPP_ = ./preprocess <$*.F | /usr/bin/cpp -P -C -traditional >$*$(SUFFIX)

#-----------------------------------------------------------------------
# possible options for CPP:
# NGXhalf       charge density   reduced in X direction
# wNGXhalf      gamma point only reduced in X direction
# avoidalloc    avoid ALLOCATE if possible
# PGF90         work around some PGF90 / IFC bugs
# CACHE_SIZE    1000 for PII,PIII, 5000 for Athlon, 8000-12000 P4, PD
# RPROMU_DGEMV  use DGEMV instead of DGEMM in RPRO (depends on used BLAS)
# RACCMU_DGEMV  use DGEMV instead of DGEMM in RACC (depends on used BLAS)
# tbdyn         MD package of Tomas Bucko
#-----------------------------------------------------------------------

#CPP   = $(CPP_) -DHOST=\"LinuxIFC\" \
#        -DCACHE_SIZE=12000 -DPGF90 -Davoidalloc -DNGXhalf \
#        -DRPROMU_DGEMV -DRACCMU_DGEMV
#
#-----------------------------------------------------------------------
# general fortran flags (there must be a trailing blank on this line)
# byterecl is strictly required for ifc, since otherwise
# the WAVECAR file becomes huge
#-----------------------------------------------------------------------

FFLAGS = -FR -lowercase -assume byterecl 

#-----------------------------------------------------------------------
# optimization
# we have tested whether higher optimisation improves performance
# -axK  SSE1 optimization, but also generate code executable on all mach.
#       xK improves performance somewhat on XP, and a is required in order
#       to run the code on older Athlons as well
# -xW   SSE2 optimization
# -axW  SSE2 optimization, but also generate code executable on all mach.
# -tpp6 P3 optimization
# -tpp7 P4 optimization
#-----------------------------------------------------------------------

# ifc.9.1, ifc.10.1 recommended
OFLAG=-O3 -ip -ftz

OFLAG_HIGH = $(OFLAG)
OBJ_HIGH =
OBJ_NOOPT =
DEBUG  = -FR -O0
INLINE = $(OFLAG)

#-----------------------------------------------------------------------
# the following lines specify the position of BLAS and LAPACK
# VASP works fastest with the libgoto library
# so that's what we recommend
#-----------------------------------------------------------------------

# mkl.10.0
# set -DRPROMU_DGEMV -DRACCMU_DGEMV in the CPP lines
#BLAS=-L/opt/intel/mkl100/lib/em64t -lmkl -lpthread

# even faster for VASP Kazushige Goto's BLAS
# http://www.cs.utexas.edu/users/kgoto/signup_first.html
# parallel goto version sometimes requires -libverbs
#BLAS=-L/public/intel/mkl/lib/em64t -lguide -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lmkl -lpthread

BLAS=-L/opt/intel/Compiler/11.1/072/mkl/lib/32 -lguide -lmkl_intel -lmkl_sequential -lmkl_core -lpthread

# LAPACK, simplest use vasp.5.lib/lapack_double
LAPACK= ../vasp.4.lib/lapack_double.o

# use the mkl Intel lapack
#LAPACK= -lmkl_lapack

#-----------------------------------------------------------------------

#LIB = -L../vasp.4.lib -ldmy \
#      ../vasp.4.lib/linpack_double.o $(LAPACK) \
#      $(BLAS)

# options for linking, nothing is required (usually)
LINK =

#-----------------------------------------------------------------------
# fft libraries:
# VASP.5.2 can use fftw.3.1.X (http://www.fftw.org)
# since this version is faster on P4 machines, we recommend to use it
#-----------------------------------------------------------------------

#FFT3D = fft3dfurth.o fft3dlib.o

# alternatively: fftw.3.1.X is slightly faster and should be used if available
#FFT3D = fftw3d.o fft3dlib.o /opt/libs/fftw-3.1.2/lib/libfftw3.a

#=======================================================================
# MPI section, uncomment the following lines until
#  general rules and compile lines
# presently we recommend OPENMPI, since it seems to offer better
# performance than lam or mpich
#
# !!! Please do not send me any queries on how to install MPI, I will
# certainly not answer them !!!!
#=======================================================================
#-----------------------------------------------------------------------
# fortran linker for mpi
#-----------------------------------------------------------------------

FC=mpif90
FCL=$(FC)

#-----------------------------------------------------------------------
# additional options for CPP in parallel version (see also above):
# NGZhalf       charge density   reduced in Z direction
# wNGZhalf      gamma point only reduced in Z direction
# scaLAPACK     use scaLAPACK (usually slower on 100 Mbit Net)
# avoidalloc    avoid ALLOCATE if possible
# PGF90         work around some PGF90 / IFC bugs
# CACHE_SIZE    1000 for PII,PIII, 5000 for Athlon, 8000-12000 P4, PD
# RPROMU_DGEMV  use DGEMV instead of DGEMM in RPRO (depends on used BLAS)
# RACCMU_DGEMV  use DGEMV instead of DGEMM in RACC (depends on used BLAS)
# tbdyn         MD package of Tomas Bucko
#-----------------------------------------------------------------------

CPP    = $(CPP_) -DMPI -DHOST=\"LinuxIFC\" -DIFC \
         -DCACHE_SIZE=4000 -DPGF90 -Davoidalloc -DNGZhalf \
         -DMPI_BLOCK=8000
#        -DRPROMU_DGEMV -DRACCMU_DGEMV

#-----------------------------------------------------------------------
# location of SCALAPACK
# if you do not use SCALAPACK simply leave that section commented out
#-----------------------------------------------------------------------

#BLACS=$(HOME)/archives/SCALAPACK/BLACS/
#SCA_=$(HOME)/archives/SCALAPACK/SCALAPACK

#SCA= $(SCA_)/libscalapack.a \
#     $(BLACS)/LIB/blacsF77init_MPI-LINUX-0.a $(BLACS)/LIB/blacs_MPI-LINUX-0.a $(BLACS)/LIB/blacsF77init_MPI-LINUX-0.a

SCA=

#-----------------------------------------------------------------------
# libraries for mpi
#-----------------------------------------------------------------------

LIB = -L../vasp.4.lib -ldmy \
      ../vasp.4.lib/linpack_double.o $(LAPACK) \
      $(SCA) $(BLAS)

#LIB = -L../vasp.4.lib -ldmy \
#      ../vasp.4.lib/linpack_double.o $(LAPACK) \
#      $(SCA) $(BLAS)

# FFT: fftmpi.o with fft3dlib of Juergen Furthmueller
FFT3D = fftmpi.o fftmpi_map.o fft3dfurth.o fft3dlib.o

# alternatively: fftw.3.1.X is slightly faster and should be used if available
#FFT3D = fftmpiw.o fftmpi_map.o fftw3d.o fft3dlib.o

#-----------------------------------------------------------------------
# general rules and compile lines
#-----------------------------------------------------------------------

BASIC=  symmetry.o symlib.o lattlib.o random.o

SOURCE= base.o mpi.o smart_allocate.o xml.o \
        constant.o jacobi.o main_mpi.o scala.o \
        asa.o lattice.o poscar.o ini.o setex.o radial.o \
        pseudo.o mgrid.o mkpoints.o wave.o wave_mpi.o $(BASIC) \
        nonl.o nonlr.o dfast.o choleski2.o \
        mix.o charge.o xcgrad.o xcspin.o potex1.o potex2.o \
        metagga.o constrmag.o pot.o cl_shift.o force.o dos.o elf.o \
        tet.o hamil.o steep.o \
        chain.o dyna.o relativistic.o LDApU.o sphpro.o paw.o us.o \
        ebs.o wavpre.o wavpre_noio.o broyden.o \
        dynbr.o rmm-diis.o reader.o writer.o tutor.o xml_writer.o \
        brent.o stufak.o fileio.o opergrid.o stepver.o \
        dipol.o xclib.o chgloc.o subrot.o optreal.o davidson.o \
        edtest.o electron.o shm.o pardens.o paircorrection.o \
        optics.o constr_cell_relax.o stm.o finite_diff.o \
        elpol.o setlocalpp.o

vasp: $(SOURCE) $(FFT3D) $(INC) main.o
        rm -f vasp
        $(FCL) -o vasp main.o $(SOURCE) $(FFT3D) $(LIB) $(LINK)
makeparam: $(SOURCE) $(FFT3D) makeparam.o main.F $(INC)
        $(FCL) -o makeparam $(LINK) makeparam.o $(SOURCE) $(FFT3D) $(LIB)
zgemmtest: zgemmtest.o base.o random.o $(INC)
        $(FCL) -o zgemmtest $(LINK) zgemmtest.o random.o base.o $(LIB)
dgemmtest: dgemmtest.o base.o random.o $(INC)
        $(FCL) -o dgemmtest $(LINK) dgemmtest.o random.o base.o $(LIB)
ffttest: base.o smart_allocate.o mpi.o mgrid.o random.o ffttest.o $(FFT3D) $(INC)
        $(FCL) -o ffttest $(LINK) ffttest.o mpi.o mgrid.o random.o smart_allocate.o base.o $(FFT3D) $(LIB)
kpoints: $(SOURCE) $(FFT3D) makekpoints.o main.F $(INC)
        $(FCL) -o kpoints $(LINK) makekpoints.o $(SOURCE) $(FFT3D) $(LIB)

clean:
        -rm -f *.g *.f *.o *.L *.mod ; touch *.F

main.o: main$(SUFFIX)
        $(FC) $(FFLAGS)$(DEBUG) $(INCS) -c main$(SUFFIX)
xcgrad.o: xcgrad$(SUFFIX)
        $(FC) $(FFLAGS) $(INLINE) $(INCS) -c xcgrad$(SUFFIX)
xcspin.o: xcspin$(SUFFIX)
        $(FC) $(FFLAGS) $(INLINE) $(INCS) -c xcspin$(SUFFIX)

makeparam.o: makeparam$(SUFFIX)
        $(FC) $(FFLAGS)$(DEBUG) $(INCS) -c makeparam$(SUFFIX)

makeparam$(SUFFIX): makeparam.F main.F
#
# MIND: I do not have a full dependency list for the include
# and MODULES: here are only the minimal basic dependencies
# if one structure is changed then touch_dep must be called
# with the corresponding name of the structure
#
base.o: base.inc base.F
mgrid.o: mgrid.inc mgrid.F
constant.o: constant.inc constant.F
lattice.o: lattice.inc lattice.F
setex.o: setexm.inc setex.F
pseudo.o: pseudo.inc pseudo.F
poscar.o: poscar.inc poscar.F
mkpoints.o: mkpoints.inc mkpoints.F
wave.o: wave.F
nonl.o: nonl.inc nonl.F
nonlr.o: nonlr.inc nonlr.F

$(OBJ_HIGH):
        $(CPP)
        $(FC) $(FFLAGS) $(OFLAG_HIGH) $(INCS) -c $*$(SUFFIX)
$(OBJ_NOOPT):
        $(CPP)
        $(FC) $(FFLAGS) $(INCS) -c $*$(SUFFIX)

fft3dlib_f77.o: fft3dlib_f77.F
        $(CPP)
        $(F77) $(FFLAGS_F77) -c $*$(SUFFIX)

.F.o:
        $(CPP)
        $(FC) $(FFLAGS) $(OFLAG) $(INCS) -c $*$(SUFFIX)
.F$(SUFFIX):
        $(CPP)
$(SUFFIX).o:
        $(FC) $(FFLAGS) $(OFLAG) $(INCS) -c $*$(SUFFIX)

# special rules
#-----------------------------------------------------------------------
# these special rules are cumulative (that is once failed
#   in one compiler version, stays in the list forever)
# -tpp5|6|7 P, PII-PIII, PIV
# -xW use SIMD (does not pay off on PII, since fft3d uses double prec)
# all other options do not affect the code performance since -O1 is used

fft3dlib.o : fft3dlib.F
        $(CPP)
        $(FC) -FR -lowercase -O2 -c $*$(SUFFIX)
fft3dfurth.o : fft3dfurth.F
        $(CPP)
        $(FC) -FR -lowercase -O1 -c $*$(SUFFIX)
fftw3d.o : fftw3d.F
        $(CPP)
        $(FC) -FR -lowercase -O1 -c $*$(SUFFIX)
wave_high.o : wave_high.F
        $(CPP)
        $(FC) -FR -lowercase -O1 -c $*$(SUFFIX)
radial.o : radial.F
        $(CPP)
        $(FC) -FR -lowercase -O1 -c $*$(SUFFIX)
symlib.o : symlib.F
        $(CPP)
        $(FC) -FR -lowercase -O1 -c $*$(SUFFIX)
symmetry.o : symmetry.F
        $(CPP)
        $(FC) -FR -lowercase -O1 -c $*$(SUFFIX)
wave_mpi.o : wave_mpi.F
        $(CPP)
        $(FC) -FR -lowercase -O1 -c $*$(SUFFIX)
wave.o : wave.F
        $(CPP)
        $(FC) -FR -lowercase -O1 -c $*$(SUFFIX)
dynbr.o : dynbr.F
        $(CPP)
        $(FC) -FR -lowercase -O1 -c $*$(SUFFIX)
asa.o : asa.F
        $(CPP)
        $(FC) -FR -lowercase -O1 -c $*$(SUFFIX)
broyden.o : broyden.F
        $(CPP)
        $(FC) -FR -lowercase -O2 -c $*$(SUFFIX)
us.o : us.F
        $(CPP)
        $(FC) -FR -lowercase -O1 -c $*$(SUFFIX)
LDApU.o : LDApU.F
        $(CPP)
        $(FC) -FR -lowercase -O2 -c $*$(SUFFIX)

Please help me figure out what is going wrong.

[ Last edited by zzy870720z on 2011-8-1 at 17:19 ]
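Not a definitive diagnosis, but error #6460 on members like C%NODE_ME often points at the preprocessing step rather than the compiler itself: if the CPP pass that turns fftmpi_map.F into fftmpi_map.f90 runs without -DMPI in effect, the MPI-only fields of the communicator type never make it into the generated source, and ifort then complains about every reference to them. The toy sketch below illustrates that mechanism only; demo.F, the type, and the field names are stand-ins mimicking the error messages, not the real VASP sources.

```shell
# Toy illustration: how the makefile's CPP pipeline decides whether
# MPI-only fields survive into the generated .f90 file.
cat > demo.F <<'EOF'
#ifdef MPI
      TYPE communic
        INTEGER NODE_ME, IONODE, NCPU, MPI_COMM
      END TYPE communic
#endif
EOF

# Mirrors the makefile's CPP_ line (cpp -P -C -traditional on stdin).
# With -DMPI (the parallel CPP line) the type definition survives:
cpp -P -C -traditional -DMPI < demo.F > demo_mpi.f90
# Without -DMPI the whole block vanishes from the generated source:
cpp -P -C -traditional < demo.F > demo_serial.f90

grep -c NODE_ME demo_mpi.f90             # fields present
grep -c NODE_ME demo_serial.f90 || true  # fields gone -> #6460 at compile time
```

If the fftmpi_map.f90 left in the build tree shows the second pattern (no communicator fields), a `make clean`, deleting any leftover *.f90 files, and rebuilding with the -DMPI CPP line active would be worth trying.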
Floor 7, 2012-03-23 13:50:21
Floor 2, 2011-03-29 16:31:38
zzy870720z (coins +1): Thanks for the hint. 2011-03-29 16:48:19
chuyong (coins +8): 2011-03-30 21:05:26
Take a look at this; I installed it myself just recently: http://muchong.com/bbs/viewthread.php?tid=2956358
Floor 3, 2011-03-29 16:47:21
ellsaking (coins +1): My feeling is that either the library path wasn't set, or the fft library isn't installed. 2011-03-30 09:25:11
chuyong (coins +10): 2011-03-30 21:06:00
Maybe try adding the path to the libfftw3xf_intel.a file, like this?

FFT3D = fftmpi.o fftmpi_map.o fft3dfurth.o fft3dlib.o \
        /opt/intel/composerxe-2011.2.137/mkl/interfaces/fftw3xf/libfftw3xf_intel.a
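For reference, that suggestion slots into the MPI FFT section of the makefile as sketched below. This is a guess, not a confirmed fix: the composerxe-2011.2.137 path is the one quoted above and must match your own MKL install. As far as I can tell, libfftw3xf_intel.a only supplies the FFTW3 interface, so with the Furthmueller objects (fftmpi.o/fft3dfurth.o) it sits unused at link time; it only matters if you switch to the fftmpiw.o/fftw3d.o variant.

```makefile
# Sketch only; the MKL path is specific to the poster's machine.
# Built-in Furthmueller FFT, with the MKL fftw3 wrapper appended:
FFT3D = fftmpi.o fftmpi_map.o fft3dfurth.o fft3dlib.o \
        /opt/intel/composerxe-2011.2.137/mkl/interfaces/fftw3xf/libfftw3xf_intel.a

# Variant that actually calls the FFTW interface (uncomment to use):
#FFT3D = fftmpiw.o fftmpi_map.o fftw3d.o fft3dlib.o \
#        /opt/intel/composerxe-2011.2.137/mkl/interfaces/fftw3xf/libfftw3xf_intel.a
```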
Floor 4, 2011-03-29 17:35:48












