Views: 745 | Replies: 6
This thread has been archived. Only replies matching the current filter are shown.
fuzp (铁杆木虫 / regular contributor)
[Discussion] [Help] VASP compilation error
System: Xeon 5420 + 8 GB RAM, CentOS 5.2 x86_64, ifort + Intel MKL + MPICH + libgoto. The makefile was adapted mostly following http://hi.baidu.com/%C7%A3%B3%CC ... abe351092302a6.html, as below:

```makefile
.SUFFIXES: .inc .f .f90 .F
#-----------------------------------------------------------------------
# all CPP processed fortran files have the extension .f90
SUFFIX=.f90
# SUFFIX=.f77

#-----------------------------------------------------------------------
# fortran compiler and linker
#-----------------------------------------------------------------------
FC=ifort
# fortran linker
FCL=$(FC)

#-----------------------------------------------------------------------
# whereis CPP ?? (I need CPP, can't use gcc with proper options)
# that's the location of gcc for SUSE 5.3
#
# CPP_ = /usr/lib/gcc-lib/i486-linux/2.7.2/cpp -P -C
#
# that's probably the right line for some Red Hat distribution:
#
# CPP_ = /usr/lib/gcc-lib/i386-redhat-linux/2.7.2.3/cpp -P -C
#
# SUSE X.X, maybe some Red Hat distributions:

CPP_ = ./preprocess <$*.F | /usr/bin/cpp -P -C -traditional >$*$(SUFFIX)

#-----------------------------------------------------------------------
# possible options for CPP:
# NGXhalf       charge density reduced in X direction
# wNGXhalf      gamma point only reduced in X direction
# avoidalloc    avoid ALLOCATE if possible
# IFC           work around some IFC bugs
# CACHE_SIZE    1000 for PII,PIII, 5000 for Athlon, 8000-12000 P4
# RPROMU_DGEMV  use DGEMV instead of DGEMM in RPRO (depends on used BLAS)
# RACCMU_DGEMV  use DGEMV instead of DGEMM in RACC (depends on used BLAS)
#-----------------------------------------------------------------------
CPP = $(CPP_) -DHOST=\"LinuxIFC\" \
      -Dkind8 -DNGXhalf -DIFC -DCACHE_SIZE=16000 -DPGF90 -Davoidalloc \
#     -DRPROMU_DGEMV -DRACCMU_DGEMV

#-----------------------------------------------------------------------
# general fortran flags (there must a trailing blank on this line)
#-----------------------------------------------------------------------
FFLAGS = -FR -lowercase -assume byterecl 

#-----------------------------------------------------------------------
# optimization
# we have tested whether higher optimisation improves performance
# -axK  SSE1 optimization, but also generate code executable on all mach.
#       xK improves performance somewhat on XP, and a is required in order
#       to run the code on older Athlons as well
# -xW   SSE2 optimization
# -axW  SSE2 optimization, but also generate code executable on all mach.
# -tpp6 P3 optimization
# -tpp7 P4 optimization
#-----------------------------------------------------------------------
OFLAG=-O3 -xT -tpp7
OFLAG_HIGH = $(OFLAG)
OBJ_HIGH =
OBJ_NOOPT =
DEBUG = -FR -O0
INLINE = $(OFLAG)

#-----------------------------------------------------------------------
# the following lines specify the position of BLAS and LAPACK
# on P4, VASP works fastest with the libgoto library
# so that's what I recommend
#-----------------------------------------------------------------------
# Atlas based libraries
#ATLASHOME= $(HOME)/archives/BLAS_OPT/ATLAS/lib/Linux_P4SSE2/
#BLAS= -L$(ATLASHOME) -lf77blas -latlas

# use specific libraries (default library path might point to other libraries)
#BLAS= $(ATLASHOME)/libf77blas.a $(ATLASHOME)/libatlas.a

# use the mkl Intel libraries for p4 (www.intel.com)
# mkl.5.1
# set -DRPROMU_DGEMV -DRACCMU_DGEMV in the CPP lines
#BLAS=-L/opt/intel/mkl/lib/32 -lmkl_p4 -lpthread

# mkl.5.2 requires also to -lguide library
# set -DRPROMU_DGEMV -DRACCMU_DGEMV in the CPP lines
#BLAS=-L/opt/intel/mkl/lib/32 -lmkl_p4 -lguide -lpthread

# even faster Kazushige Goto's BLAS
# http://www.cs.utexas.edu/users/kgoto/signup_first.html
# BLAS= /opt/libs/libgoto/libgoto_p4_512-r0.6.so
# BLAS= /opt/intel/cmkl/8.0/lib/em64t/libmkl_blas95.a
# BLAS= /opt/libs/libgoto/libgoto_p4_512-r0.6.so
# BLAS= /opt/intel/cmkl/8.0/lib/32/libmkl_blas95.a
# BLAS= -L/opt/intel/cmkl/8.0/lib/32 -lmkl_blas95 -lguide -l
# BLAS= -L/opt/intel/cmkl/8.0/lib/32 -L/usr/lib -lmkl_p4 -lguide -lpthread
# BLAS= /home/pleu/usr/lib/libgoto_prescott64p-r1.00.so
# BLAS= -L/share/apps/lib -lgoto_prescott32p-r1.00 -L/opt/intel/cmkl/8.0/lib/32 -lpthread
BLAS= /lib64/libgoto_penrynp-r1.26.so -L/opt/intel/cmkl/9.1/lib/em64t -L/usr/lib -lmkl_p4 -lguide -lpthread

# LAPACK, simplest use vasp.4.lib/lapack_double
LAPACK= ../vasp.4.lib/lapack_double.o -L/opt/intel/cmkl/9.1/lib/em64t/ -lmkl_lapack

# use atlas optimized part of lapack
#LAPACK= ../vasp.4.lib/lapack_atlas.o -llapack -lcblas

# use the mkl Intel lapack
# LAPACK= -lmkl_lapack
# LAPACK= /opt/intel/cmkl/8.0/lib/32/libmkl_lapack.a
# LAPACK= -L/opt/intel/cmkl/8.0/lib/32/ -lmkl_lapack

#-----------------------------------------------------------------------
LIB = -L../vasp.4.lib -ldmy \
      ../vasp.4.lib/linpack_double.o \
      $(LAPACK) \
      $(BLAS)

# options for linking (for compiler version 6.X, 7.1) nothing is required
# LINK = -L/opt/intel/fc/9.0/lib -lsvml
LINK = -L/opt/intel/fce/10.1.018/lib/ -lsvml
# compiler version 7.0 generates some vector statments which are located
# in the svml library, add the LIBPATH and the library (just in case)
# LINK = -L/opt/intel/compiler70/ia32/lib/ -lsvml

#-----------------------------------------------------------------------
# fft libraries:
# VASP.4.6 can use fftw.3.0.X (http://www.fftw.org)
# since this version is faster on P4 machines, we recommend to use it
#-----------------------------------------------------------------------
#FFT3D = fft3dfurth.o fft3dlib.o
FFT3D = fftw3d.o fft3dlib.o /usr/local/lib/libfftw3.a
# /opt/libs/fftw-3.0.1/lib/libfftw3.a

#=======================================================================
# MPI section, uncomment the following lines
#
# one comment for users of mpich or lam:
# You must *not* compile mpi with g77/f77, because f77/g77
# appends *two* underscores to symbols that contain already an
# underscore (i.e. MPI_SEND becomes mpi_send__). The pgf90/ifc
# compilers however append only one underscore.
# Precompiled mpi version will also not work !!!
#
# We found that mpich.1.2.1 and lam-6.5.X to lam-7.0.4 are stable
# mpich.1.2.1 was configured with
# ./configure -prefix=/usr/local/mpich_nodvdbg -fc="pgf77 -Mx,119,0x200000" \
#   -f90="pgf90 " \
#   --without-romio --without-mpe -opt=-O \
#
# lam was configured with the line
# ./configure -prefix /opt/libs/lam-7.0.4 --with-cflags=-O -with-fc=ifc \
#   --with-f77flags=-O --without-romio
#
# please note that you might be able to use a lam or mpich version
# compiled with f77/g77, but then you need to add the following
# options: -Msecond_underscore (compilation) and -g77libs (linking)
#
# !!! Please do not send me any queries on how to install MPI, I will
# certainly not answer them !!!!
#=======================================================================
#-----------------------------------------------------------------------
# fortran linker for mpi: if you use LAM and compiled it with the options
# suggested above, you can use the following line
#-----------------------------------------------------------------------
FC=/usr/local/bin/mpif90
# FCL=$(FC)
# FFLAGS=

#-----------------------------------------------------------------------
# additional options for CPP in parallel version (see also above):
# NGZhalf    charge density reduced in Z direction
# wNGZhalf   gamma point only reduced in Z direction
# scaLAPACK  use scaLAPACK (usually slower on 100 Mbit Net)
#-----------------------------------------------------------------------
CPP = $(CPP_) -DMPI -DHOST=\"LinuxIFC\" -DIFC \
      -Dkind8 -DNGZhalf -DCACHE_SIZE=16000 -DPGF90 -Davoidalloc \
      -DMPI_BLOCK=500 \
      -DRPROMU_DGEMV -DRACCMU_DGEMV

#-----------------------------------------------------------------------
# location of SCALAPACK
# if you do not use SCALAPACK simply uncomment the line SCA
#-----------------------------------------------------------------------
BLACS=$(HOME)/archives/SCALAPACK/BLACS/
SCA_=$(HOME)/archives/SCALAPACK/SCALAPACK
SCA= $(SCA_)/libscalapack.a \
     $(BLACS)/LIB/blacsF77init_MPI-LINUX-0.a \
     $(BLACS)/LIB/blacs_MPI-LINUX-0.a $(BLACS)/LIB/blacsF77init_MPI-LINUX-0.a
SCA=

#-----------------------------------------------------------------------
# libraries for mpi
#-----------------------------------------------------------------------
LIB = -L../vasp.4.lib -ldmy \
      ../vasp.4.lib/linpack_double.o $(LAPACK) \
      $(SCA) $(BLAS)

# FFT: fftmpi.o with fft3dlib of Juergen Furthmueller
#FFT3D = fftmpi.o fftmpi_map.o fft3dlib.o
# fftw.3.0.1 is slighly faster and should be used if available
FFT3D = fftmpiw.o fftmpi_map.o fft3dlib.o /usr/local/lib/libfftw3.a

#-----------------------------------------------------------------------
# general rules and compile lines
#-----------------------------------------------------------------------
BASIC= symmetry.o symlib.o lattlib.o random.o

SOURCE= base.o mpi.o smart_allocate.o xml.o \
        constant.o jacobi.o main_mpi.o scala.o \
        asa.o lattice.o poscar.o ini.o setex.o radial.o \
        pseudo.o mgrid.o mkpoints.o wave.o wave_mpi.o $(BASIC) \
        nonl.o nonlr.o dfast.o choleski2.o \
        mix.o charge.o xcgrad.o xcspin.o potex1.o potex2.o \
        metagga.o constrmag.o pot.o cl_shift.o force.o dos.o elf.o \
        tet.o hamil.o steep.o \
        chain.o dyna.o relativistic.o LDApU.o sphpro.o paw.o us.o \
        ebs.o wavpre.o wavpre_noio.o broyden.o \
        dynbr.o rmm-diis.o reader.o writer.o tutor.o xml_writer.o \
        brent.o stufak.o fileio.o opergrid.o stepver.o \
        dipol.o xclib.o chgloc.o subrot.o optreal.o davidson.o \
        edtest.o electron.o shm.o pardens.o paircorrection.o \
        optics.o constr_cell_relax.o stm.o finite_diff.o \
        elpol.o setlocalpp.o

INC=

vasp: $(SOURCE) $(FFT3D) $(INC) main.o
	rm -f vasp
	$(FCL) -o vasp $(LINK) main.o $(SOURCE) $(FFT3D) $(LIB)
makeparam: $(SOURCE) $(FFT3D) makeparam.o main.F $(INC)
	$(FCL) -o makeparam $(LINK) makeparam.o $(SOURCE) $(FFT3D) $(LIB)
zgemmtest: zgemmtest.o base.o random.o $(INC)
	$(FCL) -o zgemmtest $(LINK) zgemmtest.o random.o base.o $(LIB)
dgemmtest: dgemmtest.o base.o random.o $(INC)
	$(FCL) -o dgemmtest $(LINK) dgemmtest.o random.o base.o $(LIB)
ffttest: base.o smart_allocate.o mpi.o mgrid.o random.o ffttest.o $(FFT3D) $(INC)
	$(FCL) -o ffttest $(LINK) ffttest.o mpi.o mgrid.o random.o smart_allocate.o base.o $(FFT3D) $(LIB)
kpoints: $(SOURCE) $(FFT3D) makekpoints.o main.F $(INC)
	$(FCL) -o kpoints $(LINK) makekpoints.o $(SOURCE) $(FFT3D) $(LIB)

clean:
	-rm -f *.g *.f *.o *.L *.mod ; touch *.F

main.o: main$(SUFFIX)
	$(FC) $(FFLAGS) $(DEBUG) $(INCS) -c main$(SUFFIX)
xcgrad.o: xcgrad$(SUFFIX)
	$(FC) $(FFLAGS) $(INLINE) $(INCS) -c xcgrad$(SUFFIX)
xcspin.o: xcspin$(SUFFIX)
	$(FC) $(FFLAGS) $(INLINE) $(INCS) -c xcspin$(SUFFIX)
makeparam.o: makeparam$(SUFFIX)
	$(FC) $(FFLAGS)$(DEBUG) $(INCS) -c makeparam$(SUFFIX)
makeparam$(SUFFIX): makeparam.F main.F

#
# MIND: I do not have a full dependency list for the include
# and MODULES: here are only the minimal basic dependencies
# if one strucuture is changed then touch_dep must be called
# with the corresponding name of the structure
#
base.o: base.inc base.F
mgrid.o: mgrid.inc mgrid.F
constant.o: constant.inc constant.F
lattice.o: lattice.inc lattice.F
setex.o: setexm.inc setex.F
pseudo.o: pseudo.inc pseudo.F
poscar.o: poscar.inc poscar.F
mkpoints.o: mkpoints.inc mkpoints.F
wave.o: wave.inc wave.F
nonl.o: nonl.inc nonl.F
nonlr.o: nonlr.inc nonlr.F

$(OBJ_HIGH):
	$(CPP)
	$(FC) $(FFLAGS) $(OFLAG_HIGH) $(INCS) -c $*$(SUFFIX)
$(OBJ_NOOPT):
	$(CPP)
	$(FC) $(FFLAGS) $(INCS) -c $*$(SUFFIX)
fft3dlib_f77.o: fft3dlib_f77.F
	$(CPP)
	$(F77) $(FFLAGS_F77) -c $*$(SUFFIX)
.F.o:
	$(CPP)
	$(FC) $(FFLAGS) $(OFLAG) $(INCS) -c $*$(SUFFIX)
.F$(SUFFIX):
	$(CPP)
$(SUFFIX).o:
	$(FC) $(FFLAGS) $(OFLAG) $(INCS) -c $*$(SUFFIX)

# special rules
#-----------------------------------------------------------------------
# these special rules are cummulative (that is once failed
# in one compiler version, stays in the list forever)
# -tpp5|6|7 P, PII-PIII, PIV
# -xW use SIMD (does not pay of on PII, since fft3d uses double prec)
# all other options do no affect the code performance since -O1 is used
#-----------------------------------------------------------------------
fft3dlib.o : fft3dlib.F
	$(CPP)
	$(FC) -FR -lowercase -O1 -tpp7 -xT -prefetch- -unroll0 -vec_report3 -c $*$(SUFFIX)
fft3dfurth.o : fft3dfurth.F
	$(CPP)
	$(FC) -FR -lowercase -O1 -c $*$(SUFFIX)
radial.o : radial.F
	$(CPP)
	$(FC) -FR -lowercase -O1 -c $*$(SUFFIX)
symlib.o : symlib.F
	$(CPP)
	$(FC) -FR -lowercase -O1 -c $*$(SUFFIX)
symmetry.o : symmetry.F
	$(CPP)
	$(FC) -FR -lowercase -O1 -c $*$(SUFFIX)
dynbr.o : dynbr.F
	$(CPP)
	$(FC) -FR -lowercase -O1 -c $*$(SUFFIX)
broyden.o : broyden.F
	$(CPP)
	$(FC) -FR -lowercase -O2 -c $*$(SUFFIX)
us.o : us.F
	$(CPP)
	$(FC) -FR -lowercase -O1 -c $*$(SUFFIX)
wave.o : wave.F
	$(CPP)
	$(FC) -FR -lowercase -O0 -c $*$(SUFFIX)
LDApU.o : LDApU.F
	$(CPP)
	$(FC) -FR -lowercase -O2 -c $*$(SUFFIX)
```

[ Last edited by wuchenwf on 2009-6-22 at 21:41 ]
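A side note on the flags above: with the ifort 10.1 installation implied by the LINK path, the `-tpp7` processor switch has been dropped and only produces the `command line remark #10148: option '-tp' not supported` seen later in this thread; `-xT` alone already targets Core-based CPUs such as the Xeon 5420. A sketch of the optimization block without it (the same applies to the hand-written `-tpp7` in the `fft3dlib.o` special rule):

```makefile
# ifort 10.x no longer accepts -tpp7; -xT alone generates SSSE3 code
# suitable for Core-based Xeons (e.g. the 5420 mentioned above)
OFLAG      = -O3 -xT
OFLAG_HIGH = $(OFLAG)
```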
fuzp · Floor 5 · 2009-01-06 17:13:14
fuzp:
Compilation fails with the messages below. I couldn't find the cause anywhere online. Could anyone advise?

```
/usr/local/bin/mpif90 -FR -lowercase -assume byterecl -O3 -xT -tpp7 -c mpi.f90
ifort: command line remark #10148: option '-tp' not supported
fortcom: Error: mpi.f90, line 75: This module file was not generated by any release of this compiler.   [PREC]
      USE prec
----------^
fortcom: Error: mpi.f90, line 86: A kind type parameter must be a compile-time constant.   [Q]
      COMPLEX(q),SAVE :: ZTMP_m(NZTMP)
--------------^
fortcom: Error: mpi.f90, line 87: A kind type parameter must be a compile-time constant.   [Q]
      REAL(q),SAVE :: DTMP_m(NDTMP)
-----------^
fortcom: Error: mpi.f90, line 259: This name does not have a type, and must have an explicit type.   [MPI_SUCCESS]
      IF ( ierror /= MPI_success ) &
---------------------^
fortcom: Error: mpi.f90, line 284: This name does not have a type, and must have an explicit type.   [MPI_COMM_WORLD]
      call MPI_abort(MPI_comm_world , 1, ierror )
---------------------^
fortcom: Error: mpi.f90, line 292: This module file was not generated by any release of this compiler.   [PREC]
      USE prec
----------^
fortcom: Error: mpi.f90, line 302: This name does not have a type, and must have an explicit type.   [MPI_COMM_WORLD]
      call MPI_abort(MPI_comm_world , 1, ierror )
---------------------^
fortcom: Error: mpi.f90, line 331: This name does not have a type, and must have an explicit type.   [MPI_INTEGER]
      call MPI_send( ivec(1), n, MPI_integer, node-1, 200, &
---------------------------------^
fortcom: Error: mpi.f90, line 332: This is not a field name that is defined in the encompassing structure.   [MPI_COMM]
      & COMM%MPI_COMM, ierror )
--------------------------^
fortcom: Error: mpi.f90, line 333: This name does not have a type, and must have an explicit type.   [MPI_SUCCESS]
      IF ( ierror /= MPI_success ) &
---------------------^
fortcom: Error: mpi.f90, line 329: A specification expression object must be a dummy argument, a COMMON block object, or an object accessible through host or use association   [MPI_STATUS_SIZE]
      INTEGER status(MPI_status_size), ierror
---------------------^
fortcom: Error: mpi.f90, line 329: This name does not have a type, and must have an explicit type.   [MPI_STATUS_SIZE]
      INTEGER status(MPI_status_size), ierror
---------------------^
fortcom: Error: mpi.f90, line 346: This module file was not generated by any release of this compiler.   [PREC]
      USE prec
----------^
fortcom: Error: mpi.f90, line 351: This derived type name has not been declared.   [COMMUNIC]
      TYPE(communic) COMM
-----------^
fortcom: Error: mpi.f90, line 356: This name does not have a type, and must have an explicit type.   [MPI_INTEGER]
      call MPI_recv( ivec(1), n, MPI_integer , node-1, 200, &
---------------------------------^
fortcom: Error: mpi.f90, line 357: This name does not have a type, and must have an explicit type.   [COMM]
      & COMM%MPI_COMM, status, ierror )
---------------------^
fortcom: Error: mpi.f90, line 357: This is not a field name that is defined in the encompassing structure.   [MPI_COMM]
      & COMM%MPI_COMM, status, ierror )
--------------------------^
fortcom: Error: mpi.f90, line 358: This name does not have a type, and must have an explicit type.   [MPI_SUCCESS]
      IF ( ierror /= MPI_success ) &
---------------------^
fortcom: Error: mpi.f90, line 354: A specification expression object must be a dummy argument, a COMMON block object, or an object accessible through host or use association   [MPI_STATUS_SIZE]
      INTEGER status(MPI_status_size), ierror
---------------------^
fortcom: Error: mpi.f90, line 354: This name does not have a type, and must have an explicit type.   [MPI_STATUS_SIZE]
      INTEGER status(MPI_status_size), ierror
---------------------^
fortcom: Error: mpi.f90, line 390: This is not a field name that is defined in the encompassing structure.   [NCPU]
      IF (n==0 .OR. COMM%NCPU==1 ) THEN
-------------------------^
fortcom: Error: mpi.f90, line 400: This name does not have a type, and must have an explicit type.   [NITMP]
      DO j = 1, n, NITMP
-------------------^
fortcom: Error: mpi.f90, line 401: The intrinsic data types of the arguments must be the same.   [MIN]
      ichunk = MIN( n-j+1 , NITMP)
-------------------------------^
fortcom: Error: mpi.f90, line 403: This name does not have a type, and must have an explicit type.   [ITMP_M]
      call MPI_allreduce( ivec(j), ITMP_m(1), ichunk, MPI_integer, &
--------------------------------------^
fortcom: Error: mpi.f90, line 403: This name does not have a type, and must have an explicit type.   [MPI_INTEGER]
      call MPI_allreduce( ivec(j), ITMP_m(1), ichunk, MPI_integer, &
---------------------------------------------------------^
fortcom: Error: mpi.f90, line 404: This name does not have a type, and must have an explicit type.   [MPI_SUM]
      & MPI_sum, COMM%MPI_COMM, ierror )
-----------------------------^
fortcom: Error: mpi.f90, line 404: This is not a field name that is defined in the encompassing structure.   [MPI_COMM]
      & MPI_sum, COMM%MPI_COMM, ierror )
-------------------------------------------^
fortcom: Error: mpi.f90, line 405: This name does not have a type, and must have an explicit type.   [MPI_SUCCESS]
      IF ( ierror /= MPI_success ) &
------------------------^
fortcom: Error: mpi.f90, line 387: A specification expression object must be a dummy argument, a COMMON block object, or an object accessible through host or use association   [MPI_STATUS_SIZE]
      INTEGER ierror, status(MPI_status_size), ichunk
-----------------------------^
fortcom: Error: mpi.f90, line 387: This name does not have a type, and must have an explicit type.   [MPI_STATUS_SIZE]
      INTEGER ierror, status(MPI_status_size), ichunk
-----------------------------^
fortcom: Severe: Too many errors, exiting
compilation aborted for mpi.f90 (code 1)
make: *** [mpi.o] Error 1
```
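For what it's worth, the very first error (`This module file was not generated by any release of this compiler. [PREC]`) usually points at a stale `prec.mod` left behind by an earlier build with a different compiler or compiler version (or an mpif90 wrapper built around a different Fortran compiler than ifort); all the later `MPI_*` errors cascade from the failed `USE prec`. Running the makefile's `clean` target before rebuilding removes the leftover module files. A small self-contained demonstration of what that recipe deletes, in a scratch directory with made-up file names:

```shell
# Replay the makefile's clean recipe (rm -f *.g *.f *.o *.L *.mod ; touch *.F)
# on dummy files, to show that stale .mod and .o files are removed.
mkdir -p /tmp/vasp_clean_demo
cd /tmp/vasp_clean_demo
touch prec.mod mpi.o mpi.f90 main.F     # pretend leftovers from a failed build
rm -f *.g *.f *.o *.L *.mod ; touch *.F # the clean recipe
ls                                      # prints: main.F  mpi.f90
```

Note that the recipe's `*.f` glob does not match the generated `.f90` files, so those survive a `clean`; the important part here is that every `.mod` and `.o` is gone before the next `make vasp`.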
Floor 2 · 2009-01-05 22:08:34
wuchenwf (荣誉版主 / honorary moderator, first-principles board)
Floor 3 · 2009-01-05 22:15:45
fegg7502 (+4 coins): thank you very much!
fuzp (+4 coins): Thanks!
I made a few simple changes; give this one a try.

```makefile
.SUFFIXES: .inc .f .f90 .F
#-----------------------------------------------------------------------
# all CPP processed fortran files have the extension .f90
SUFFIX=.f90
# SUFFIX=.f77

#-----------------------------------------------------------------------
# fortran compiler and linker
#-----------------------------------------------------------------------
#FC=ifort
# fortran linker
#FCL=$(FC)

#-----------------------------------------------------------------------
# whereis CPP ?? (I need CPP, can't use gcc with proper options)
# that's the location of gcc for SUSE 5.3
#
# CPP_ = /usr/lib/gcc-lib/i486-linux/2.7.2/cpp -P -C
#
# that's probably the right line for some Red Hat distribution:
#
# CPP_ = /usr/lib/gcc-lib/i386-redhat-linux/2.7.2.3/cpp -P -C
#
# SUSE X.X, maybe some Red Hat distributions:

CPP_ = ./preprocess <$*.F | /usr/bin/cpp -P -C -traditional >$*$(SUFFIX)

#-----------------------------------------------------------------------
# possible options for CPP:
# NGXhalf       charge density reduced in X direction
# wNGXhalf      gamma point only reduced in X direction
# avoidalloc    avoid ALLOCATE if possible
# IFC           work around some IFC bugs
# CACHE_SIZE    1000 for PII,PIII, 5000 for Athlon, 8000-12000 P4
# RPROMU_DGEMV  use DGEMV instead of DGEMM in RPRO (depends on used BLAS)
# RACCMU_DGEMV  use DGEMV instead of DGEMM in RACC (depends on used BLAS)
#-----------------------------------------------------------------------
#CPP = $(CPP_) -DHOST=\"LinuxIFC\" \
#      -Dkind8 -DNGXhalf -DIFC -DCACHE_SIZE=16000 -DPGF90 -Davoidalloc \
#      -DRPROMU_DGEMV -DRACCMU_DGEMV

#-----------------------------------------------------------------------
# general fortran flags (there must a trailing blank on this line)
#-----------------------------------------------------------------------
FFLAGS = -FR -lowercase -assume byterecl 

#-----------------------------------------------------------------------
# optimization
# we have tested whether higher optimisation improves performance
# -axK  SSE1 optimization, but also generate code executable on all mach.
#       xK improves performance somewhat on XP, and a is required in order
#       to run the code on older Athlons as well
# -xW   SSE2 optimization
# -axW  SSE2 optimization, but also generate code executable on all mach.
# -tpp6 P3 optimization
# -tpp7 P4 optimization
#-----------------------------------------------------------------------
OFLAG=-O3 -xT
OFLAG_HIGH = $(OFLAG)
OBJ_HIGH =
OBJ_NOOPT =
DEBUG = -FR -O0
INLINE = $(OFLAG)

#-----------------------------------------------------------------------
# the following lines specify the position of BLAS and LAPACK
# on P4, VASP works fastest with the libgoto library
# so that's what I recommend
#-----------------------------------------------------------------------
# Atlas based libraries
#ATLASHOME= $(HOME)/archives/BLAS_OPT/ATLAS/lib/Linux_P4SSE2/
#BLAS= -L$(ATLASHOME) -lf77blas -latlas

# use specific libraries (default library path might point to other libraries)
#BLAS= $(ATLASHOME)/libf77blas.a $(ATLASHOME)/libatlas.a

# use the mkl Intel libraries for p4 (www.intel.com)
# mkl.5.1
# set -DRPROMU_DGEMV -DRACCMU_DGEMV in the CPP lines
#BLAS=-L/opt/intel/mkl/lib/32 -lmkl_p4 -lpthread

# mkl.5.2 requires also to -lguide library
# set -DRPROMU_DGEMV -DRACCMU_DGEMV in the CPP lines
#BLAS=-L/opt/intel/mkl/lib/32 -lmkl_p4 -lguide -lpthread

# even faster Kazushige Goto's BLAS
# http://www.cs.utexas.edu/users/kgoto/signup_first.html
# BLAS= /opt/libs/libgoto/libgoto_p4_512-r0.6.so
# BLAS= /opt/intel/cmkl/8.0/lib/em64t/libmkl_blas95.a
# BLAS= /opt/libs/libgoto/libgoto_p4_512-r0.6.so
# BLAS= /opt/intel/cmkl/8.0/lib/32/libmkl_blas95.a
# BLAS= -L/opt/intel/cmkl/8.0/lib/32 -lmkl_blas95 -lguide -l
# BLAS= -L/opt/intel/cmkl/8.0/lib/32 -L/usr/lib -lmkl_p4 -lguide -lpthread
# BLAS= /home/pleu/usr/lib/libgoto_prescott64p-r1.00.so
# BLAS= -L/share/apps/lib -lgoto_prescott32p-r1.00 -L/opt/intel/cmkl/8.0/lib/32 -lpthread
BLAS= /lib64/libgoto_penrynp-r1.26.so -L/opt/intel/cmkl/9.1/lib/em64t -L/usr/lib -lmkl_p4 -lguide -lpthread

# LAPACK, simplest use vasp.4.lib/lapack_double
LAPACK= ../vasp.4.lib/lapack_double.o -L/opt/intel/cmkl/9.1/lib/em64t/ -lmkl_lapack

# use atlas optimized part of lapack
#LAPACK= ../vasp.4.lib/lapack_atlas.o -llapack -lcblas

# use the mkl Intel lapack
# LAPACK= -lmkl_lapack
# LAPACK= /opt/intel/cmkl/8.0/lib/32/libmkl_lapack.a
# LAPACK= -L/opt/intel/cmkl/8.0/lib/32/ -lmkl_lapack

#-----------------------------------------------------------------------
#LIB = -L../vasp.4.lib -ldmy \
#      ../vasp.4.lib/linpack_double.o \
#      $(LAPACK) \
#      $(BLAS)

# options for linking (for compiler version 6.X, 7.1) nothing is required
# LINK = -L/opt/intel/fc/9.0/lib -lsvml
LINK = -L/opt/intel/fce/10.1.018/lib/ -lsvml
# compiler version 7.0 generates some vector statments which are located
# in the svml library, add the LIBPATH and the library (just in case)
# LINK = -L/opt/intel/compiler70/ia32/lib/ -lsvml

#-----------------------------------------------------------------------
# fft libraries:
# VASP.4.6 can use fftw.3.0.X (http://www.fftw.org)
# since this version is faster on P4 machines, we recommend to use it
#-----------------------------------------------------------------------
#FFT3D = fft3dfurth.o fft3dlib.o
#FFT3D = fftw3d.o fft3dlib.o /usr/local/lib/libfftw3.a
# /opt/libs/fftw-3.0.1/lib/libfftw3.a

#=======================================================================
# MPI section, uncomment the following lines
#
# one comment for users of mpich or lam:
# You must *not* compile mpi with g77/f77, because f77/g77
# appends *two* underscores to symbols that contain already an
# underscore (i.e. MPI_SEND becomes mpi_send__). The pgf90/ifc
# compilers however append only one underscore.
# Precompiled mpi version will also not work !!!
#
# We found that mpich.1.2.1 and lam-6.5.X to lam-7.0.4 are stable
# mpich.1.2.1 was configured with
# ./configure -prefix=/usr/local/mpich_nodvdbg -fc="pgf77 -Mx,119,0x200000" \
#   -f90="pgf90 " \
#   --without-romio --without-mpe -opt=-O \
#
# lam was configured with the line
# ./configure -prefix /opt/libs/lam-7.0.4 --with-cflags=-O -with-fc=ifc \
#   --with-f77flags=-O --without-romio
#
# please note that you might be able to use a lam or mpich version
# compiled with f77/g77, but then you need to add the following
# options: -Msecond_underscore (compilation) and -g77libs (linking)
#
# !!! Please do not send me any queries on how to install MPI, I will
# certainly not answer them !!!!
#=======================================================================
#-----------------------------------------------------------------------
# fortran linker for mpi: if you use LAM and compiled it with the options
# suggested above, you can use the following line
#-----------------------------------------------------------------------
FC=/usr/local/bin/mpif90
FCL=$(FC)
# FFLAGS=

#-----------------------------------------------------------------------
# additional options for CPP in parallel version (see also above):
# NGZhalf    charge density reduced in Z direction
# wNGZhalf   gamma point only reduced in Z direction
# scaLAPACK  use scaLAPACK (usually slower on 100 Mbit Net)
#-----------------------------------------------------------------------
CPP = $(CPP_) -DMPI -DHOST=\"LinuxIFC\" -DIFC \
      -Dkind8 -DNGZhalf -DCACHE_SIZE=16000 -DPGF90 -Davoidalloc \
      -DMPI_BLOCK=500 \
      -DRPROMU_DGEMV -DRACCMU_DGEMV

#-----------------------------------------------------------------------
# location of SCALAPACK
# if you do not use SCALAPACK simply uncomment the line SCA
#-----------------------------------------------------------------------
BLACS=$(HOME)/archives/SCALAPACK/BLACS/
SCA_=$(HOME)/archives/SCALAPACK/SCALAPACK
SCA= $(SCA_)/libscalapack.a \
     $(BLACS)/LIB/blacsF77init_MPI-LINUX-0.a \
     $(BLACS)/LIB/blacs_MPI-LINUX-0.a $(BLACS)/LIB/blacsF77init_MPI-LINUX-0.a
SCA=

#-----------------------------------------------------------------------
# libraries for mpi
#-----------------------------------------------------------------------
LIB = -L../vasp.4.lib -ldmy \
      ../vasp.4.lib/linpack_double.o $(LAPACK) \
      $(SCA) $(BLAS)

# FFT: fftmpi.o with fft3dlib of Juergen Furthmueller
FFT3D = fftmpi.o fftmpi_map.o fft3dlib.o
# fftw.3.0.1 is slighly faster and should be used if available
#FFT3D = fftmpiw.o fftmpi_map.o fft3dlib.o /usr/local/lib/libfftw3.a

#-----------------------------------------------------------------------
# general rules and compile lines
#-----------------------------------------------------------------------
BASIC= symmetry.o symlib.o lattlib.o random.o

SOURCE= base.o mpi.o smart_allocate.o xml.o \
        constant.o jacobi.o main_mpi.o scala.o \
        asa.o lattice.o poscar.o ini.o setex.o radial.o \
        pseudo.o mgrid.o mkpoints.o wave.o wave_mpi.o $(BASIC) \
        nonl.o nonlr.o dfast.o choleski2.o \
        mix.o charge.o xcgrad.o xcspin.o potex1.o potex2.o \
        metagga.o constrmag.o pot.o cl_shift.o force.o dos.o elf.o \
        tet.o hamil.o steep.o \
        chain.o dyna.o relativistic.o LDApU.o sphpro.o paw.o us.o \
        ebs.o wavpre.o wavpre_noio.o broyden.o \
        dynbr.o rmm-diis.o reader.o writer.o tutor.o xml_writer.o \
        brent.o stufak.o fileio.o opergrid.o stepver.o \
        dipol.o xclib.o chgloc.o subrot.o optreal.o davidson.o \
        edtest.o electron.o shm.o pardens.o paircorrection.o \
        optics.o constr_cell_relax.o stm.o finite_diff.o \
        elpol.o setlocalpp.o

INC=

vasp: $(SOURCE) $(FFT3D) $(INC) main.o
	rm -f vasp
	$(FCL) -o vasp $(LINK) main.o $(SOURCE) $(FFT3D) $(LIB)
makeparam: $(SOURCE) $(FFT3D) makeparam.o main.F $(INC)
	$(FCL) -o makeparam $(LINK) makeparam.o $(SOURCE) $(FFT3D) $(LIB)
zgemmtest: zgemmtest.o base.o random.o $(INC)
	$(FCL) -o zgemmtest $(LINK) zgemmtest.o random.o base.o $(LIB)
dgemmtest: dgemmtest.o base.o random.o $(INC)
	$(FCL) -o dgemmtest $(LINK) dgemmtest.o random.o base.o $(LIB)
ffttest: base.o smart_allocate.o mpi.o mgrid.o random.o ffttest.o $(FFT3D) $(INC)
	$(FCL) -o ffttest $(LINK) ffttest.o mpi.o mgrid.o random.o smart_allocate.o base.o $(FFT3D) $(LIB)
kpoints: $(SOURCE) $(FFT3D) makekpoints.o main.F $(INC)
	$(FCL) -o kpoints $(LINK) makekpoints.o $(SOURCE) $(FFT3D) $(LIB)

clean:
	-rm -f *.g *.f *.o *.L *.mod ; touch *.F

main.o: main$(SUFFIX)
	$(FC) $(FFLAGS) $(DEBUG) $(INCS) -c main$(SUFFIX)
xcgrad.o: xcgrad$(SUFFIX)
	$(FC) $(FFLAGS) $(INLINE) $(INCS) -c xcgrad$(SUFFIX)
xcspin.o: xcspin$(SUFFIX)
	$(FC) $(FFLAGS) $(INLINE) $(INCS) -c xcspin$(SUFFIX)
makeparam.o: makeparam$(SUFFIX)
	$(FC) $(FFLAGS)$(DEBUG) $(INCS) -c makeparam$(SUFFIX)
makeparam$(SUFFIX): makeparam.F main.F

#
# MIND: I do not have a full dependency list for the include
# and MODULES: here are only the minimal basic dependencies
# if one strucuture is changed then touch_dep must be called
# with the corresponding name of the structure
#
base.o: base.inc base.F
mgrid.o: mgrid.inc mgrid.F
constant.o: constant.inc constant.F
lattice.o: lattice.inc lattice.F
setex.o: setexm.inc setex.F
pseudo.o: pseudo.inc pseudo.F
poscar.o: poscar.inc poscar.F
mkpoints.o: mkpoints.inc mkpoints.F
wave.o: wave.inc wave.F
nonl.o: nonl.inc nonl.F
nonlr.o: nonlr.inc nonlr.F

$(OBJ_HIGH):
	$(CPP)
	$(FC) $(FFLAGS) $(OFLAG_HIGH) $(INCS) -c $*$(SUFFIX)
$(OBJ_NOOPT):
	$(CPP)
	$(FC) $(FFLAGS) $(INCS) -c $*$(SUFFIX)
fft3dlib_f77.o: fft3dlib_f77.F
	$(CPP)
	$(F77) $(FFLAGS_F77) -c $*$(SUFFIX)
.F.o:
	$(CPP)
	$(FC) $(FFLAGS) $(OFLAG) $(INCS) -c $*$(SUFFIX)
.F$(SUFFIX):
	$(CPP)
$(SUFFIX).o:
	$(FC) $(FFLAGS) $(OFLAG) $(INCS) -c $*$(SUFFIX)

# special rules
#-----------------------------------------------------------------------
# these special rules are cummulative (that is once failed
# in one compiler version, stays in the list forever)
# -tpp5|6|7 P, PII-PIII, PIV
# -xW use SIMD (does not pay of on PII, since fft3d uses double prec)
# all other options do no affect the code performance since -O1 is used
#-----------------------------------------------------------------------
fft3dlib.o : fft3dlib.F
	$(CPP)
	$(FC) -FR -lowercase -O1 -tpp7 -xT -prefetch- -unroll0 -vec_report3 -c $*$(SUFFIX)
fft3dfurth.o : fft3dfurth.F
	$(CPP)
	$(FC) -FR -lowercase -O1 -c $*$(SUFFIX)
radial.o : radial.F
	$(CPP)
	$(FC) -FR -lowercase -O1 -c $*$(SUFFIX)
symlib.o : symlib.F
	$(CPP)
	$(FC) -FR -lowercase -O1 -c $*$(SUFFIX)
symmetry.o : symmetry.F
	$(CPP)
	$(FC) -FR -lowercase -O1 -c $*$(SUFFIX)
dynbr.o : dynbr.F
	$(CPP)
	$(FC) -FR -lowercase -O1 -c $*$(SUFFIX)
broyden.o : broyden.F
	$(CPP)
	$(FC) -FR -lowercase -O2 -c $*$(SUFFIX)
us.o : us.F
	$(CPP)
	$(FC) -FR -lowercase -O1 -c $*$(SUFFIX)
wave.o : wave.F
	$(CPP)
	$(FC) -FR -lowercase -O0 -c $*$(SUFFIX)
LDApU.o : LDApU.F
	$(CPP)
	$(FC) -FR -lowercase -O2 -c $*$(SUFFIX)
```
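Comparing the two makefiles in this thread, the effective changes are small; everything else is untouched. Roughly, as a unified-diff sketch (line positions approximate, and the serial CPP, LIB, and FFT3D settings are likewise commented out in the second version):

```diff
 # serial section: disabled, so only the MPI settings take effect
-FC=ifort
-FCL=$(FC)
+#FC=ifort
+#FCL=$(FC)
-OFLAG=-O3 -xT -tpp7
+OFLAG=-O3 -xT
 # MPI section
 FC=/usr/local/bin/mpif90
-# FCL=$(FC)
+FCL=$(FC)
-FFT3D = fftmpiw.o fftmpi_map.o fft3dlib.o /usr/local/lib/libfftw3.a
+FFT3D = fftmpi.o fftmpi_map.o fft3dlib.o
```

Dropping `/usr/local/lib/libfftw3.a` falls back to Furthmüller's built-in FFT, which removes the dependency on the locally installed fftw; uncommenting `FCL=$(FC)` in the MPI section makes sure linking also goes through mpif90.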
Floor 4 · 2009-01-05 23:00:07