
Views: 1271  |  Replies: 12
This thread has been archived; only replies matching the current filter conditions are shown.

veryman

[Discussion] [Help] Problem compiling VASP

Compiling vasp.4.lib went without any problems, but compiling vasp.4.6 fails with the following error:

pgf90 -Mfree -Mx,119,0x200000    -O2  -tp p6  -c wave.f
PGF90-F-0000-Internal compiler error. index overflow     4 (wave.f:1316)
PGF90/x86 Linux 7.2-5: compilation aborted
make: *** [wave.o] Error 2

I found a thread about this on this site but could not make sense of it; the link is http://muchong.com/html/200801/679096.html

The FFTW it mentions is already installed. The system is Red Hat Enterprise Linux 4 AS, the makefile used is makefile.linux_pg, and I changed the ATLAS library path: ATLASHOME=/home//ATLAS/
Nothing else has been touched; the compiler is PGI Fortran 7.2-5.
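
One hedged thing to try (not verified on this system): a pgf90 internal compiler error that only shows up with optimization can sometimes be sidestepped by building just the offending file without $(OFLAG). makefile.linux_pg, posted in full further down in this thread, already has an OBJ_NOOPT hook whose rule compiles the listed objects with $(FFLAGS) only, so the change would be a single line:

# hedged sketch, untested here: build wave.o without $(OFLAG) (-O2 -tp p6)
# by listing it under the existing OBJ_NOOPT hook in makefile.linux_pg
OBJ_NOOPT = wave.o

After editing the makefile, run the clean target first (it removes *.f and *.o and touches *.F) so that wave.f is regenerated and recompiled under the new rule.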

[ Last edited by wuchenwf on 2009-6-21 at 21:05 ]

suntao1982

I know how you feel; I have been wrestling with compilation these past few days as well, and it is a real headache. Have you considered switching to a different compiler, e.g. gfortran or Intel Fortran?
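
If you went that route, the pgf90-specific lines in makefile.linux_pg would need replacing rather than tweaking in place. A minimal, unverified sketch for Intel Fortran (the full flag set should really be taken from the makefile.linux_ifc_* template shipped with the vasp.4.6 source, if available):

# unverified sketch: the pgf90 flags (-Mfree, -Mx,119,0x200000, -tp p6) do not
# apply to ifort, so these lines would replace the FC/FFLAGS/OFLAG settings
FC     = ifort
FCL    = $(FC)
FFLAGS = -free        # free-form source, ifort's counterpart of pgf90's -Mfree
OFLAG  = -O2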
#4 | 2008-10-17 22:01:08

veryman

Please help, everyone. Here is the makefile I am using:

.SUFFIXES: .inc .f .f90 .F
#-----------------------------------------------------------------------
# Makefile for Portland Group F90/HPF compiler release 3.0-1, 3.1
# and release 1.7
# (http://www.pgroup.com/ & ftp://ftp.pgroup.com/x86/, you need
#  to order the HPF/F90 suite)
#  we have found no noticeable performance differences between
#  any of the releases, even Athlon or PIII optimisation does
#  not seem to improve performance
#
# The makefile was tested only under Linux on Intel platforms
# (Suse X,X)
#
# it might be required to change some of the library paths, since
# LINUX installations vary a lot
# Hence check ***ALL**** options in this makefile very carefully
#-----------------------------------------------------------------------
#
# Mind that some Linux distributions (Suse 6.1) have a bug in
# libm causing small errors in the error-function (total energy
# is therefore wrong by about 1meV/atom). The recommended
# solution is to update libc.
#
# BLAS must be installed on the machine
# there are several options:
# 1) very slow but works:
#   retrieve the lapackage from ftp.netlib.org
#   and compile the blas routines (BLAS/SRC directory)
#   please use g77 or f77 for the compilation. When I tried to
#   use pgf77 or pgf90 for BLAS, VASP hang up when calling
#   ZHEEV  (however this was with lapack 1.1 now I use lapack 2.0)
# 2) most desirable: get an optimized BLAS
#   for a list of optimized BLAS try
#     http://www.kachinatech.com/~hjjou/scilib/opt_blas.html
#
# the two most reliable packages around are presently:
# 3a) Intels own optimised BLAS (PIII, P4, Itanium)
#     http://developer.intel.com/software/products/mkl/
#   this is really excellent when you use Intel CPU's
#
# 3b) or obtain the atlas based BLAS routines
#     http://math-atlas.sourceforge.net/
#   you certainly need atlas on the Athlon, since the  mkl
#   routines are not optimal on the Athlon.
#
#-----------------------------------------------------------------------

# all CPP processed fortran files have the extension .f
SUFFIX=.f

#-----------------------------------------------------------------------
# fortran compiler and linker
#-----------------------------------------------------------------------
FC=pgf90
# fortran linker
FCL=$(FC)


#-----------------------------------------------------------------------
# whereis CPP ?? (I need CPP, can't use gcc with proper options)
# that's the location of gcc for SUSE 5.3
#
#  CPP_   =  /usr/lib/gcc-lib/i486-linux/2.7.2/cpp -P -C
#
# that's probably the right line for some Red Hat distribution:
#
#  CPP_   =  /usr/lib/gcc-lib/i386-redhat-linux/2.7.2.3/cpp -P -C
#
#  SUSE 6.X, maybe some Red Hat distributions:

CPP_ =  ./preprocess <$*.F | /usr/bin/cpp -P -C -traditional >$*$(SUFFIX)

#-----------------------------------------------------------------------
# possible options for CPP:
# NGXhalf             charge density   reduced in X direction
# wNGXhalf            gamma point only reduced in X direction
# avoidalloc          avoid ALLOCATE if possible
# IFC                 work around some IFC bugs
# CACHE_SIZE          1000 for PII,PIII, 5000 for Athlon, 8000 P4
# RPROMU_DGEMV        use DGEMV instead of DGEMM in RPRO (usually  faster)
# RACCMU_DGEMV        use DGEMV instead of DGEMM in RACC (faster on P4)
#  **** definitely use -DRACCMU_DGEMV if you use the mkl library
#-----------------------------------------------------------------------

CPP    = $(CPP_) -DHOST=\"LinuxPgi\" \
          -Dkind8 -DNGXhalf -DCACHE_SIZE=2000 -DPGF90 -Davoidalloc \
          -DRPROMU_DGEMV  

#-----------------------------------------------------------------------
# general fortran flags  (there must be a trailing blank on this line)
# the -Mx,119,0x200000 is required if you use older pgf90 versions
# on a more recent LINUX installation
# the option will not do any harm on other 3.X pgf90 distributions
#-----------------------------------------------------------------------

FFLAGS =  -Mfree -Mx,119,0x200000  

#-----------------------------------------------------------------------
# optimization,
# we have tested whether higher optimisation improves
# the performance, and found no improvements with -O3-5 or -fast
# (even on Athlon systems, Athlon-specific optimisation worsens performance)
#-----------------------------------------------------------------------

OFLAG  = -O2  -tp p6

OFLAG_HIGH = $(OFLAG)
OBJ_HIGH =
OBJ_NOOPT =
DEBUG  = -g -O0
INLINE = $(OFLAG)


#-----------------------------------------------------------------------
# the following lines specify the position of BLAS  and LAPACK
# what you choose is very system dependent
# P4: VASP works fastest with Intels mkl performance library
# Athlon: Atlas based BLAS are presently the fastest
# P3: no clue
#-----------------------------------------------------------------------

# Atlas based libraries
ATLASHOME= $(HOME)/ATLAS/
BLAS=   -L$(ATLASHOME)  -lf77blas -latlas

# use specific libraries (default library path points to other libraries)
#BLAS= $(ATLASHOME)/libf77blas.a $(ATLASHOME)/libatlas.a

# use the mkl Intel libraries for p4 (www.intel.com)
#BLAS=-L/opt/intel/mkl/lib/32 -lmkl_p4  -lpthread

# LAPACK, simplest use vasp.4.lib/lapack_double
#LAPACK= ../vasp.4.lib/lapack_double.o

# use atlas optimized part of lapack
LAPACK= ../vasp.4.lib/lapack_atlas.o  -llapack -lcblas

# use the mkl Intel lapack
#LAPACK= -lmkl_lapack


#-----------------------------------------------------------------------

LIB  = -L../vasp.4.lib -ldmy \
     ../vasp.4.lib/linpack_double.o $(LAPACK) \
     $(BLAS)

# options for linking (none required)
LINK    =

#-----------------------------------------------------------------------
# fft libraries:
# VASP.4.5 can use FFTW (http://www.fftw.org)
# since the FFTW is very slow for radices 2^n the fft3dlib is used
# in these cases
# if you use fftw3d you need to insert -lfftw in the LIB line as well
# please do not send us any queries related to FFTW (no support)
# if it fails, use fft3dlib
#-----------------------------------------------------------------------

FFT3D   = fft3dfurth.o fft3dlib.o
#FFT3D   = fftw3d+furth.o fft3dlib.o


#=======================================================================
# MPI section, uncomment the following lines
#
# one comment for users of mpich or lam:
# You must *not* compile mpi with g77/f77, because f77/g77            
# appends *two* underscores to symbols that contain already an        
# underscore (i.e. MPI_SEND becomes mpi_send__).  The pgf90
# compiler however appends only one underscore.
# Precompiled mpi version will also not work !!!
#
# We found that mpich.1.2.1 and lam-6.5.X are stable
# mpich.1.2.1 was configured with
#  ./configure -prefix=/usr/local/mpich_nodvdbg -fc="pgf77 -Mx,119,0x200000"  \
# -f90="pgf90 -Mx,119,0x200000" \
# --without-romio --without-mpe -opt=-O \
#
# lam was configured with the line
#  ./configure  -prefix /usr/local/lam-6.5.X --with-cflags=-O -with-fc=pgf90 \
# --with-f77flags=-O --without-romio
#
# lam was generally faster and we found an average communication
# bandwidth of roughly 160 MBit/s (full duplex)
#
# please note that you might be able to use a lam or mpich version
# compiled with f77/g77, but then you need to add the following
# options: -Msecond_underscore (compilation) and -g77libs (linking)
#
# !!! Please do not send me any queries on how to install MPI, I will
# certainly not answer them !!!!
#=======================================================================
#-----------------------------------------------------------------------
# fortran linker for mpi: if you use LAM and compiled it with the options
# suggested above,  you can use the following lines
#-----------------------------------------------------------------------


#FC=mpif77
#FCL=$(FC)

#-----------------------------------------------------------------------
# additional options for CPP in parallel version (see also above):
# NGZhalf               charge density   reduced in Z direction
# wNGZhalf              gamma point only reduced in Z direction
# scaLAPACK             use scaLAPACK (usually slower on 100 Mbit Net)
#-----------------------------------------------------------------------

#CPP    = $(CPP_) -DMPI  -DHOST=\"LinuxPgi\" \
#     -Dkind8 -DNGZhalf -DCACHE_SIZE=2000 -DPGF90 -Davoidalloc -DRPROMU_DGEMV

#-----------------------------------------------------------------------
# location of SCALAPACK
# if you do not use SCALAPACK simply uncomment the line SCA
#-----------------------------------------------------------------------

BLACS=/usr/local/BLACS_lam
SCA_= /usr/local/SCALAPACK_lam

SCA= $(SCA_)/scalapack_LINUX.a $(SCA_)/pblas_LINUX.a $(SCA_)/tools_LINUX.a \
$(BLACS)/LIB/blacsF77init_MPI-LINUX-0.a $(BLACS)/LIB/blacs_MPI-LINUX-0.a $(BLACS)/LIB/blacsF77init_MPI-LINUX-0.a

SCA=

#-----------------------------------------------------------------------
# libraries for mpi
#-----------------------------------------------------------------------

#LIB     = -L../vasp.4.lib -ldmy  \
#      ../vasp.4.lib/linpack_double.o $(LAPACK) \
#      $(SCA) $(BLAS)

# FFT: only option  fftmpi.o with fft3dlib of Juergen Furthmueller

#FFT3D   = fftmpi.o fftmpi_map.o fft3dlib.o

#-----------------------------------------------------------------------
# general rules and compile lines
#-----------------------------------------------------------------------
BASIC=   symmetry.o symlib.o   lattlib.o  random.o   

SOURCE=  base.o     mpi.o      smart_allocate.o      xml.o  \
         constant.o jacobi.o   main_mpi.o  scala.o   \
         asa.o      lattice.o  poscar.o   ini.o      setex.o     radial.o  \
         pseudo.o   mgrid.o    mkpoints.o wave.o      wave_mpi.o  $(BASIC) \
         nonl.o     nonlr.o    dfast.o    choleski2.o    \
         mix.o      charge.o   xcgrad.o   xcspin.o    potex1.o   potex2.o  \
         metagga.o  constrmag.o pot.o      cl_shift.o force.o    dos.o      elf.o      \
         tet.o      hamil.o    steep.o    \
         chain.o    dyna.o     relativistic.o LDApU.o sphpro.o  paw.o   us.o \
         ebs.o      wavpre.o   wavpre_noio.o broyden.o \
         dynbr.o    rmm-diis.o reader.o   writer.o   tutor.o xml_writer.o \
         brent.o    stufak.o   fileio.o   opergrid.o stepver.o  \
         dipol.o    xclib.o    chgloc.o   subrot.o   optreal.o   davidson.o \
         edtest.o   electron.o shm.o      pardens.o  paircorrection.o \
         optics.o   constr_cell_relax.o   stm.o    finite_diff.o \
         elpol.o    setlocalpp.o

INC=

vasp: $(SOURCE) $(FFT3D) $(INC) main.o
        rm -f vasp
        $(FCL) -o vasp $(LINK) main.o  $(SOURCE)   $(FFT3D) $(LIB)
makeparam: $(SOURCE) $(FFT3D) makeparam.o main.F $(INC)
        $(FCL) -o makeparam  $(LINK) makeparam.o $(SOURCE) $(FFT3D) $(LIB)
zgemmtest: zgemmtest.o base.o random.o $(INC)
        $(FCL) -o zgemmtest $(LINK) zgemmtest.o random.o base.o $(LIB)
dgemmtest: dgemmtest.o base.o random.o $(INC)
        $(FCL) -o dgemmtest $(LINK) dgemmtest.o random.o base.o $(LIB)
ffttest: base.o smart_allocate.o mpi.o mgrid.o random.o ffttest.o $(FFT3D) $(INC)
        $(FCL) -o ffttest $(LINK) ffttest.o mpi.o mgrid.o random.o smart_allocate.o base.o $(FFT3D) $(LIB)
kpoints: $(SOURCE) $(FFT3D) makekpoints.o main.F $(INC)
        $(FCL) -o kpoints $(LINK) makekpoints.o $(SOURCE) $(FFT3D) $(LIB)

clean:       
        -rm -f *.f *.o *.L ; touch *.F

main.o: main$(SUFFIX)
        $(FC) $(FFLAGS)$(DEBUG)  $(INCS) -c main$(SUFFIX)
xcgrad.o: xcgrad$(SUFFIX)
        $(FC) $(FFLAGS) $(INLINE)  $(INCS) -c xcgrad$(SUFFIX)
xcspin.o: xcspin$(SUFFIX)
        $(FC) $(FFLAGS) $(INLINE)  $(INCS) -c xcspin$(SUFFIX)

makeparam.o: makeparam$(SUFFIX)
        $(FC) $(FFLAGS)$(DEBUG)  $(INCS) -c makeparam$(SUFFIX)

makeparam$(SUFFIX): makeparam.F main.F
#
# MIND: I do not have a full dependency list for the include
# and MODULES: here are only the minimal basic dependencies
# if one structure is changed then touch_dep must be called
# with the corresponding name of the structure
#
base.o: base.inc base.F
mgrid.o: mgrid.inc mgrid.F
constant.o: constant.inc constant.F
lattice.o: lattice.inc lattice.F
setex.o: setexm.inc setex.F
pseudo.o: pseudo.inc pseudo.F
poscar.o: poscar.inc poscar.F
mkpoints.o: mkpoints.inc mkpoints.F
wave.o: wave.inc wave.F
nonl.o: nonl.inc nonl.F
nonlr.o: nonlr.inc nonlr.F

$(OBJ_HIGH):
        $(CPP)
        $(FC) $(FFLAGS) $(OFLAG_HIGH) $(INCS) -c $*$(SUFFIX)
$(OBJ_NOOPT):
        $(CPP)
        $(FC) $(FFLAGS) $(INCS) -c $*$(SUFFIX)

fft3dlib_f77.o: fft3dlib_f77.F
        $(CPP)
        $(F77) $(FFLAGS_F77) -c $*$(SUFFIX)

.F.o:
        $(CPP)
        $(FC) $(FFLAGS) $(OFLAG) $(INCS) -c $*$(SUFFIX)
.F$(SUFFIX):
        $(CPP)
$(SUFFIX).o:
        $(FC) $(FFLAGS) $(OFLAG) $(INCS) -c $*$(SUFFIX)
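
Since FFTW is reportedly already installed: going by the comments in the FFT section of this makefile, actually switching to the FFTW wrapper would look roughly like the lines below (an untested sketch; the extra -L entry is a placeholder for wherever libfftw was installed):

# untested sketch of the FFTW variant described in the FFT section above;
# per the makefile's own comment, -lfftw must then be added to the LIB line
FFT3D   = fftw3d+furth.o fft3dlib.o
LIB     = -L../vasp.4.lib -ldmy \
     ../vasp.4.lib/linpack_double.o $(LAPACK) \
     $(BLAS) -lfftw
# append e.g. -L/usr/local/lib if libfftw is not on the default library search path

This is presumably unrelated to the wave.f compiler error itself; the FFT choice only affects which FFT routines are linked in.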
#2 | 2008-10-15 19:32:06

sars518

I don't understand this either; hopefully someone more experienced can take a look.
#3 | 2008-10-17 20:10:52

veryman

My CPU is an AMD, so I didn't dare use Intel's compiler, and with gfortran and mpich I couldn't even get as far as this step…
Does anyone know what the thread at http://muchong.com/html/200801/679096.html means by "the file is too large, make it a bit smaller"?
#5 | 2008-10-18 08:04:35