.SUFFIXES: .inc .f .f90 .F
#-----------------------------------------------------------------------
# Makefile for Portland Group F90/HPF compiler release 3.0-1, 3.1
# and release 1.7
# (http://www.pgroup.com/ & ftp://ftp.pgroup.com/x86/, you need
# to order the HPF/F90 suite)
# we have found no noticeable performance differences between
# any of these releases; even Athlon- or PIII-specific optimisation
# does not seem to improve performance
#
# The makefile was tested only under Linux on Intel platforms
# (Suse X,X)
#
# it might be necessary to change some of the library paths, since
# Linux installations vary a lot.
# Hence check ***ALL*** options in this makefile very carefully
#-----------------------------------------------------------------------
#
# Mind that some Linux distributions (Suse 6.1) have a bug in
# libm causing small errors in the error-function (total energy
# is therefore wrong by about 1meV/atom). The recommended
# solution is to update libc.
#
# BLAS must be installed on the machine
# there are several options:
# 1) very slow but works:
#  retrieve the LAPACK package from ftp.netlib.org
#  and compile the BLAS routines (BLAS/SRC directory)
#  please use g77 or f77 for the compilation. When I tried to
#  use pgf77 or pgf90 for BLAS, VASP hung when calling
#  ZHEEV (however, that was with LAPACK 1.1; I now use LAPACK 2.0)
# 2) most desirable: get an optimized BLAS
# for a list of optimized BLAS try
# http://www.kachinatech.com/~hjjou/scilib/opt_blas.html
#
# the two most reliable packages around are presently:
# 3a) Intel's own optimised BLAS (PIII, P4, Itanium)
#  http://developer.intel.com/software/products/mkl/
#  this is really excellent when you use Intel CPUs
#
# 3b) or obtain the atlas based BLAS routines
# http://math-atlas.sourceforge.net/
# you certainly need atlas on the Athlon, since the mkl
# routines are not optimal on the Athlon.
#
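# As a sketch of how option 1) above is wired in (the path and
# library name below are placeholders for wherever you built the
# netlib reference BLAS with g77, not part of the original makefile):

```make
# reference BLAS built from the netlib sources with g77;
# adjust the -L path to your own build directory
#BLAS= -L$(HOME)/netlib/BLAS -lblas
```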
#-----------------------------------------------------------------------
# all CPP processed fortran files have the extension .f
SUFFIX=.f
#-----------------------------------------------------------------------
# whereis CPP ?? (I need CPP, can't use gcc with proper options)
# that's the location of gcc for SUSE 5.3
#
# CPP_ = /usr/lib/gcc-lib/i486-linux/2.7.2/cpp -P -C
#
# that's probably the right line for some Red Hat distribution:
#
# CPP_ = /usr/lib/gcc-lib/i386-redhat-linux/2.7.2.3/cpp -P -C
#
# SUSE 6.X, maybe some Red Hat distributions:
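# On SUSE 6.X the line typically pipes through VASP's preprocess
# wrapper; the wrapper name and the cpp path below are assumptions,
# verify both on your system before uncommenting:

```make
#CPP_ = ./preprocess <$*.F | /usr/bin/cpp -P -C -traditional >$*$(SUFFIX)
```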
#-----------------------------------------------------------------------
# possible options for CPP:
# NGXhalf charge density reduced in X direction
# wNGXhalf gamma point only reduced in X direction
# avoidalloc avoid ALLOCATE if possible
# IFC work around some IFC bugs
# CACHE_SIZE 1000 for PII,PIII, 5000 for Athlon, 8000 P4
# RPROMU_DGEMV use DGEMV instead of DGEMM in RPRO (usually faster)
# RACCMU_DGEMV use DGEMV instead of DGEMM in RACC (faster on P4)
# **** definitely use -DRACCMU_DGEMV if you use the mkl library
#-----------------------------------------------------------------------
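# A combined CPP line might look like the sketch below; the exact
# -D flags depend on your CPU and BLAS (see the list above), and
# the HOST string is an arbitrary label:

```make
# example only: NGXhalf + P4 cache size + DGEMV variants (mkl case)
#CPP    = $(CPP_) $*.F >$*$(SUFFIX) -DHOST=\"LinuxPgi\" \
#         -DNGXhalf -DCACHE_SIZE=8000 -DRPROMU_DGEMV -DRACCMU_DGEMV
```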
#-----------------------------------------------------------------------
# general fortran flags (there must a trailing blank on this line)
# the -Mx,119,0x200000 is required if you use older pgf90 versions
# on a more recent LINUX installation
# the option will not do any harm on other 3.X pgf90 distributions
#-----------------------------------------------------------------------
FFLAGS = -Mfree -Mx,119,0x200000
#-----------------------------------------------------------------------
# optimization,
# we have tested whether higher optimisation improves
# the performance, and found no improvements with -O3-5 or -fast
# (even on Athlon systems, Athlon-specific optimisation worsens performance)
#-----------------------------------------------------------------------
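# A conservative setting consistent with the findings above might
# be the following; -tp selects the target processor and is a
# placeholder to adjust (see the pgf90 manual for valid values):

```make
# plain low optimisation was as fast as -O3/-fast in our tests;
# set -tp to your CPU (p5, p6, athlon, ...)
#OFLAG  = -O2 -tp p6
```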
#-----------------------------------------------------------------------
# the following lines specify the position of BLAS and LAPACK
# what you choose is very system dependent
# P4: VASP works fastest with Intels mkl performance library
# Athlon: Atlas based BLAS are presently the fastest
# P3: no clue
#-----------------------------------------------------------------------
# Atlas based libraries
ATLASHOME= $(HOME)/archives/BLAS_OPT/ATLAS/lib/Linux_ATHLONXP_SSE1/
BLAS= -L$(ATLASHOME) -lf77blas -latlas
# use specific libraries (default library path points to other libraries)
#BLAS= $(ATLASHOME)/libf77blas.a $(ATLASHOME)/libatlas.a
# use the mkl Intel libraries for p4 (www.intel.com)
#BLAS=-L/opt/intel/mkl/lib/32 -lmkl_p4 -lpthread
# LAPACK, simplest use vasp.4.lib/lapack_double
#LAPACK= ../vasp.4.lib/lapack_double.o
# use atlas optimized part of lapack
LAPACK= ../vasp.4.lib/lapack_atlas.o -llapack -lcblas
#-----------------------------------------------------------------------
# fft libraries:
# VASP.4.5 can use FFTW (http://www.fftw.org)
# since FFTW is very slow for radices 2^n, the fft3dlib is used
# in these cases
# if you use fftw3d you need to insert -lfftw in the LIB line as well
# please do not send us any queries related to FFTW (no support)
# if it fails, use fft3dlib
#-----------------------------------------------------------------------
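# The choice is then expressed through an FFT3D line; the object
# names below follow the vasp.4 sources and should be checked
# against your own distribution before use:

```make
# default: the built-in fft3dlib
#FFT3D  = fft3dfurth.o fft3dlib.o
# FFTW variant: also insert -lfftw in the LIB line (see above)
#FFT3D  = fftw3d.o fft3dlib.o
```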
#=======================================================================
# MPI section, uncomment the following lines
#
# one comment for users of mpich or lam:
# You must *not* compile mpi with g77/f77, because f77/g77
# appends *two* underscores to symbols that already contain an
# underscore (i.e. MPI_SEND becomes mpi_send__). The pgf90
# compiler, however, appends only one underscore.
# Precompiled mpi versions will also not work !!!
#
# We found that mpich.1.2.1 and lam-6.5.X are stable
# mpich.1.2.1 was configured with
# ./configure -prefix=/usr/local/mpich_nodvdbg -fc="pgf77 -Mx,119,0x200000" \
# -f90="pgf90 -Mx,119,0x200000" \
# --without-romio --without-mpe -opt=-O
#
# lam was configured with the line
# ./configure -prefix /usr/local/lam-6.5.X --with-cflags=-O --with-fc=pgf90 \
# --with-f77flags=-O --without-romio
#
# lam was generally faster, and we found an average communication
# bandwidth of roughly 160 MBit/s (full duplex)
#
# please note that you might be able to use a lam or mpich version
# compiled with f77/g77, but then you need to add the following
# options: -Msecond_underscore (compilation) and -g77libs (linking)
#
# !!! Please do not send me any queries on how to install MPI, I will
# certainly not answer them !!!!
#=======================================================================
#-----------------------------------------------------------------------
# fortran linker for mpi: if you use LAM and compiled it with the options
# suggested above, you can use the following lines
#-----------------------------------------------------------------------
#FC=mpif77
#FCL=$(FC)
#-----------------------------------------------------------------------
# additional options for CPP in parallel version (see also above):
# NGZhalf charge density reduced in Z direction
# wNGZhalf gamma point only reduced in Z direction
# scaLAPACK use scaLAPACK (usually slower on 100 Mbit Net)
#-----------------------------------------------------------------------
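# For the parallel build the CPP line gains -DMPI and the Z-reduced
# flags in place of the serial NGXhalf settings; a sketch only, with
# the same caveats as the serial example:

```make
# sketch: add -DscaLAPACK only if you actually link scaLAPACK
#CPP    = $(CPP_) $*.F >$*$(SUFFIX) -DMPI -DHOST=\"LinuxPgi\" \
#         -DNGZhalf -DCACHE_SIZE=5000 -DRPROMU_DGEMV
```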
#-----------------------------------------------------------------------
# location of SCALAPACK
# if you do not use SCALAPACK simply uncomment the line SCA
#-----------------------------------------------------------------------
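# The SCALAPACK hookup usually takes the following shape; all paths
# below are placeholders for your own scaLAPACK/BLACS build:

```make
#BLACS= $(HOME)/archives/SCALAPACK/BLACS
#SCA= $(HOME)/archives/SCALAPACK/SCALAPACK/libscalapack.a
# without scaLAPACK leave SCA empty:
#SCA=
```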
makeparam$(SUFFIX): makeparam.F main.F
#
# MIND: I do not have a full dependency list for the include
# and MODULES: here are only the minimal basic dependencies
# if one structure is changed then touch_dep must be called
# with the corresponding name of the structure
#
base.o: base.inc base.F
mgrid.o: mgrid.inc mgrid.F
constant.o: constant.inc constant.F
lattice.o: lattice.inc lattice.F
setex.o: setexm.inc setex.F
pseudo.o: pseudo.inc pseudo.F
poscar.o: poscar.inc poscar.F
mkpoints.o: mkpoints.inc mkpoints.F
wave.o: wave.inc wave.F
nonl.o: nonl.inc nonl.F
nonlr.o: nonlr.inc nonlr.F