[Discussion]
[Help] VASP compilation error
Error message:
choleski2.o: In function `choleski_orthch_':
choleski2.f:(.text+0x15d8): undefined reference to `ztrtri_'
LDApU.o: In function `ldaplusu_module_ldaplusu_printocc_':
LDApU.f:(.text+0xa4cb): undefined reference to `zheev_'
egrad.o: In function `egrad_egrad_write_efg_':
egrad.f:(.text+0x15f9): undefined reference to `dsyev_'
wavpre.o: In function `mwavpre_wavpre_':
wavpre.f:(.text+0x3116): undefined reference to `zheev_'
wavpre_noio.o: In function `mwavpre_noio_wavpre_noio_':
wavpre_noio.f:(.text+0x3336): undefined reference to `zheevx_'
broyden.o: In function `broyden_broyd_':
broyden.f:(.text+0x5599): undefined reference to `dgegv_'
dynbr.o: In function `brzero_':
dynbr.f:(.text+0x2995): undefined reference to `dgegv_'
rmm-diis.o: In function `rmm_diis_eddrmm_':
rmm-diis.f:(.text+0x29c3): undefined reference to `zhegv_'
mymath.o: In function `mymath_svdvalvec_':
mymath.f:(.text+0xc0f): undefined reference to `dgebrd_'
mymath.f:(.text+0xd01): undefined reference to `dorgbr_'
mymath.f:(.text+0xd46): undefined reference to `dorgbr_'
subrot.o: In function `subrot_eddiag_':
subrot.f:(.text+0x30e5): undefined reference to `zhegv_'
subrot.f:(.text+0x3338): undefined reference to `zheevx_'
davidson.o: In function `david_eddav_':
davidson.f:(.text+0x4e24): undefined reference to `zhegv_'
davidson.f:(.text+0x508d): undefined reference to `zhegv_'
davidson.f:(.text+0x7088): undefined reference to `zheevx_'
rot.o: In function `rot_rotdia_':
rot.f:(.text+0xebbc): undefined reference to `zheev_'
rot.o: In function `rot_rot2_':
rot.f:(.text+0xfbfb): undefined reference to `zheev_'
rot.o: In function `rot_roteta_':
rot.f:(.text+0x112d8): undefined reference to `zheev_'
finite_diff.o: In function `finite_differences_finite_diff_':
finite_diff.f:(.text+0x2a8c): undefined reference to `dsyev_'
finite_diff.o: In function `finite_differences_finite_diff_id_':
finite_diff.f:(.text+0xaa1a): undefined reference to `dsyev_'
finite_diff.o: In function `finite_differences_inv_second_deriv_':
finite_diff.f:(.text+0xf338): undefined reference to `dsyev_'
subrot_cluster.o: In function `subrot_cluster_setup_deg_clusters_':
subrot_cluster.f:(.text+0x11a3): undefined reference to `zheev_'
linear_response.o: In function `mlr_main_lr_skeleton_':
linear_response.f:(.text+0xd42c): undefined reference to `dsyev_'
wave_cacher.o: In function `wave_cacher_eddiag_gw_':
wave_cacher.f:(.text+0x6569): undefined reference to `zgetri_'
wave_cacher.f:(.text+0x6abc): undefined reference to `ztrtri_'
wave_cacher.f:(.text+0x72eb): undefined reference to `zheev_'
wave_cacher.o: In function `wave_cacher_rothalf_':
wave_cacher.f:(.text+0x8c09): undefined reference to `zheev_'
chi_base.o: In function `chi_base_chi_invert_':
chi_base.f:(.text+0x12190): undefined reference to `zheev_'
local_field.o: In function `local_field_rotinv_':
local_field.f:(.text+0x10a4d): undefined reference to `zheev_'
bse.o: In function `bse_calculate_bse_':
bse.f:(.text+0x40be): undefined reference to `zheevx_'
acfdt.o: In function `acfdt_rotln_trace_':
acfdt.f:(.text+0x2312): undefined reference to `dsyev_'
acfdt.f:(.text+0x2386): undefined reference to `zheev_'
acfdt.o: In function `acfdt_rotln_':
acfdt.f:(.text+0x28ce): undefined reference to `zheev_'
chi.o: In function `xi_xi_invert_':
chi.f:(.text+0x1e139): undefined reference to `zgetri_'
chi.f:(.text+0x1e75a): undefined reference to `zgetri_'
chi.o: In function `xi_xi_local_field_':
chi.f:(.text+0x1f7ae): undefined reference to `zgetri_'
chi.f:(.text+0x206ee): undefined reference to `zgetri_'
chi.o: In function `xi_xi_local_field_sym_':
chi.f:(.text+0x2387f): undefined reference to `zgetri_'
chi.o:chi.f:(.text+0x26272): more undefined references to `zgetri_' follow
make: *** [vasp] Error 2
The Makefile is:
.SUFFIXES: .inc .f .f90 .F
#-----------------------------------------------------------------------
# Makefile for Portland Group F90/HPF compiler release 3.0-1, 3.1
# and release 1.7
# (http://www.pgroup.com/ & ftp://ftp.pgroup.com/x86/, you need
# to order the HPF/F90 suite)
# we have found no noticeable performance differences between
# any of the releases; even Athlon or PIII optimisation does
# not seem to improve performance
#
# The makefile was tested only under Linux on Intel platforms
# (Suse X,X)
#
# it might be required to change some of the library paths, since
# Linux installations vary a lot
# Hence check ***ALL**** options in this makefile very carefully
#-----------------------------------------------------------------------
#
# Mind that some Linux distributions (Suse 6.1) have a bug in
# libm causing small errors in the error-function (total energy
# is therefore wrong by about 1meV/atom). The recommended
# solution is to update libc.
#
#
# BLAS must be installed on the machine
# there are several options:
# 1) very slow but works:
# retrieve the lapackage from ftp.netlib.org
# and compile the blas routines (BLAS/SRC directory)
# please use g77 or f77 for the compilation. When I tried to
# use pgf77 or pgf90 for BLAS, VASP hung when calling
# ZHEEV (however this was with lapack 1.1; now I use lapack 2.0)
# 2) most desirable: get an optimized BLAS
# for a list of optimized BLAS try
# http://www.kachinatech.com/~hjjou/scilib/opt_blas.html
#
# the two most reliable packages around are presently:
# 3a) Intels own optimised BLAS (PIII, P4, Itanium)
# http://developer.intel.com/software/products/mkl/
# this is really excellent when you use Intel CPU's
#
# 3b) or obtain the atlas based BLAS routines
# http://math-atlas.sourceforge.net/
# you certainly need atlas on the Athlon, since the mkl
# routines are not optimal on the Athlon.
#
#-----------------------------------------------------------------------
# all CPP processed fortran files have the extension .f
SUFFIX=.f
#-----------------------------------------------------------------------
# fortran compiler and linker
#-----------------------------------------------------------------------
#FC=pgf90
# fortran linker
#FCL=$(FC)
#-----------------------------------------------------------------------
# whereis CPP ?? (I need CPP, can't use gcc with proper options)
# that's the location of gcc for SUSE 5.3
#
# CPP_ = /usr/lib/gcc-lib/i486-linux/2.7.2/cpp -P -C
#
# that's probably the right line for some Red Hat distribution:
#
# CPP_ = /usr/lib/gcc-lib/i386-redhat-linux/2.7.2.3/cpp -P -C
#
# SUSE 6.X, maybe some Red Hat distributions:
CPP_ = ./preprocess <$*.F | /usr/bin/cpp -P -C -traditional >$*$(SUFFIX)
#-----------------------------------------------------------------------
# possible options for CPP:
# NGXhalf charge density reduced in X direction
# wNGXhalf gamma point only reduced in X direction
# avoidalloc avoid ALLOCATE if possible
# IFC work around some IFC bugs
# CACHE_SIZE 1000 for PII,PIII, 5000 for Athlon, 8000 P4
# RPROMU_DGEMV use DGEMV instead of DGEMM in RPRO (usually faster)
# RACCMU_DGEMV use DGEMV instead of DGEMM in RACC (faster on P4)
# **** definitely use -DRACCMU_DGEMV if you use the mkl library
#-----------------------------------------------------------------------
#CPP = $(CPP_) -DHOST=\"LinuxPgi\" \
-Dkind8 -DNGXhalf -DCACHE_SIZE=2000 -DPGF90 -Davoidalloc \
-DRPROMU_DGEMV
#-----------------------------------------------------------------------
# general fortran flags (there must be a trailing blank on this line)
# the -Mx,119,0x200000 is required if you use older pgf90 versions
# on a more recent LINUX installation
# the option will not do any harm on other 3.X pgf90 distributions
#-----------------------------------------------------------------------
FFLAGS = -Mfree
#-----------------------------------------------------------------------
# optimization,
# we have tested whether higher optimisation improves
# the performance, and found no improvements with -O3-5 or -fast
# (even on Athlon systems, Athlon-specific optimisation worsens performance)
#-----------------------------------------------------------------------
OFLAG = -O0 -tp k8-64
OFLAG_HIGH = $(OFLAG)
OBJ_HIGH =
OBJ_NOOPT =
DEBUG = -g -O0
INLINE = $(OFLAG)
#-----------------------------------------------------------------------
# the following lines specify the position of BLAS and LAPACK
# what you chose is very system dependent
# P4: VASP works fastest with Intels mkl performance library
# Athlon: Atlas based BLAS are presently the fastest
# P3: no clue
#-----------------------------------------------------------------------
# Atlas based libraries
#ATLASHOME= $(HOME)/archives/BLAS_OPT/ATLAS/lib/Linux_ATHLONXP_SSE1/
BLAS= /public1/software/pgi/linux86-64/7.1/lib/libblas.a /home/users/jinnzh/soft/BLAS/blas.a
# use specific libraries (default library path points to other libraries)
#BLAS= $(ATLASHOME)/libf77blas.a $(ATLASHOME)/libatlas.a
# use the mkl Intel libraries for p4 (www.intel.com)
#BLAS=-L/opt/intel/mkl/lib/32 -lmkl_p4 -lpthread
# LAPACK, simplest use vasp.5.lib/lapack_double
LAPACK= ../vasp.5.lib/lapack_double.o
# use atlas optimized part of lapack
#LAPACK= ../vasp.5.lib/lapack_atlas.o -llapack -lcblas
# use the mkl Intel lapack
#LAPACK= -lmkl_lapack
#-----------------------------------------------------------------------
#LIB = -L../vasp.5.lib -ldmy \
../vasp.5.lib/linpack_double.o $(LAPACK) \
$(BLAS)
# options for linking (none required)
LINK =
#-----------------------------------------------------------------------
# fft libraries:
# VASP.4.5 can use FFTW (http://www.fftw.org)
# since the FFTW is very slow for radices 2^n the fft3dlib is used
# in these cases
# if you use fftw3d you need to insert -lfftw in the LIB line as well
# please do not send us any queries related to FFTW (no support)
# if it fails, use fft3dlib
#-----------------------------------------------------------------------
#FFT3D = fft3dfurth.o fft3dlib.o
#FFT3D = fftw3d+furth.o fft3dlib.o
#=======================================================================
# MPI section, uncomment the following lines
#
# one comment for users of mpich or lam:
# You must *not* compile mpi with g77/f77, because f77/g77
# appends *two* underscores to symbols that contain already an
# underscore (i.e. MPI_SEND becomes mpi_send__). The pgf90
# compiler however appends only one underscore.
# Precompiled mpi version will also not work !!!
#
# We found that mpich.1.2.1 and lam-6.5.X are stable
# mpich.1.2.1 was configured with
# ./configure -prefix=/usr/local/mpich_nodvdbg -fc="pgf77 -Mx,119,0x200000" \
# -f90="pgf90 -Mx,119,0x200000" \
# --without-romio --without-mpe -opt=-O \
#
# lam was configured with the line
# ./configure -prefix /usr/local/lam-6.5.X --with-cflags=-O -with-fc=pgf90 \
# --with-f77flags=-O --without-romio
#
# lam was generally faster and we found an average communication
# bandwidth of roughly 160 MBit/s (full duplex)
#
# please note that you might be able to use a lam or mpich version
# compiled with f77/g77, but then you need to add the following
# options: -Msecond_underscore (compilation) and -g77libs (linking)
#
# !!! Please do not send me any queries on how to install MPI, I will
# certainly not answer them !!!!
#=======================================================================
#-----------------------------------------------------------------------
# fortran linker for mpi: if you use LAM and compiled it with the options
# suggested above, you can use the following lines
#-----------------------------------------------------------------------
FC=/public1/software/mpi/mpich127-gcc-pgf/bin/mpif90
FCL=$(FC)
#-----------------------------------------------------------------------
# additional options for CPP in parallel version (see also above):
# NGZhalf charge density reduced in Z direction
# wNGZhalf gamma point only reduced in Z direction
# scaLAPACK use scaLAPACK (usually slower on 100 Mbit Net)
#-----------------------------------------------------------------------
CPP = $(CPP_) -DMPI -DHOST=\"LinuxPgi\" \
-Dkind8 -DNGZhalf -DCACHE_SIZE=10000 -DPGF90 -Davoidalloc -DRPROMU_DGEMV
#-----------------------------------------------------------------------
# location of SCALAPACK
# if you do not use SCALAPACK simply uncomment the line SCA
#-----------------------------------------------------------------------
BLACS=/public1/software/pgi/linux86-64/7.1/mpi/mpich/lib/blacsF77init_MPI-LINUX-0.a /public1/software/pgi/linux86-64/7.1/mpi/mpich/lib/blacs_MPI-LINUX-0.a /public1/software/pgi/linux86-64/7.1/mpi/mpich/lib/blacsF77init_MPI-LINUX-0.a
#SCA_=
#SCA= $(SCA_)/scalapack_LINUX.a $(SCA_)/pblas_LINUX.a $(SCA_)/tools_LINUX.a \
$(BLACS)/LIB/blacsF77init_MPI-LINUX-0.a $(BLACS)/LIB/blacs_MPI-LINUX-0.a $(BLACS)/LIB/blacsF77init_MPI-LINUX-0.a
SCA=/public1/software/pgi/linux86-64/7.1/mpi/mpich/lib/libscalapack.a
#-----------------------------------------------------------------------
# libraries for mpi
#-----------------------------------------------------------------------
LIB = -L ../vasp.5.lib -ldmy \
../vasp.5.lib/linpack_double.o $(SCA) \
$(BLACS) $(BLAS)
# FFT: only option fftmpi.o with fft3dlib of Juergen Furthmueller
FFT3D = fftmpi.o fftmpi_map.o fft3dfurth.o fft3dlib.o /home/users/jinnzh/fftw2/lib/libdrfftw_mpi.a /home/users/jinnzh/fftw2/lib/libsfftw_mpi.a /home/users/jinnzh/fftw2/lib/libsrfftw_mpi.a
#-----------------------------------------------------------------------
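One observation (a sketch of a possible cause, not a verified fix): every unresolved symbol in the log above — `zheev_`, `dsyev_`, `zheevx_`, `zhegv_`, `zgetri_`, `ztrtri_`, `dgegv_`, `dgebrd_`, `dorgbr_` — is a LAPACK routine. The MPI `LIB` line actually used for linking pulls in `linpack_double.o`, `$(SCA)`, `$(BLACS)` and `$(BLAS)`, but never `$(LAPACK)`, even though `LAPACK= ../vasp.5.lib/lapack_double.o` is defined further up (and the commented-out serial `LIB` line does include it). Assuming `lapack_double.o` was built successfully in vasp.5.lib, adding `$(LAPACK)` to the link line might resolve these symbols:

```makefile
# Parallel LIB line with $(LAPACK) added. Link order matters with
# static libraries: LAPACK calls BLAS, so keep LAPACK before BLAS.
LIB     = -L ../vasp.5.lib -ldmy \
     ../vasp.5.lib/linpack_double.o $(LAPACK) $(SCA) \
     $(BLACS) $(BLAS)
```

Alternatively, a full LAPACK library (for example the MKL `LAPACK= -lmkl_lapack` variant already commented out in this Makefile) should provide the same symbols. Either way, re-run the link step (e.g. after `make clean`) so the changed `LIB` line takes effect.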