ljzhou86
[Discussion] VASP: a spin-polarized calculation will not enter the electronic steps
Why does my VASP build make no progress when spin polarization is switched on (the job never gets going), while the same calculation runs normally with spin polarization off? It is the same system with identical settings; the only difference is that one INCAR sets ISPIN=2 and the other has that line commented out (a minimal INCAR sketch of the two setups follows the makefile excerpt below). The run just sits in this state and never enters the electronic self-consistency loop. I suspect a compilation problem and have tried several Intel MKL libraries in turn, but the problem persists. Does anyone know the cause? The main makefile settings are as follows:

.......
# all CPP processed fortran files have the extension .f90
SUFFIX=.f90

#-----------------------------------------------------------------------
# fortran compiler and linker
#-----------------------------------------------------------------------
#FC=ifort
# fortran linker
#FCL=$(FC)

#-----------------------------------------------------------------------
# whereis CPP ?? (I need CPP, can't use gcc with proper options)
# that's the location of gcc for SUSE 5.3
#
#  CPP_ =  /usr/lib/gcc-lib/i486-linux/2.7.2/cpp -P -C
#
# that's probably the right line for some Red Hat distribution:
#
#  CPP_ =  /usr/lib/gcc-lib/i386-redhat-linux/2.7.2.3/cpp -P -C
#
#  SUSE X.X, maybe some Red Hat distributions:
CPP_ =  ./preprocess <$*.F | /usr/bin/cpp -P -C -traditional >$*$(SUFFIX)
# this release should be fpp clean
# we now recommend fpp as preprocessor
# if this fails go back to cpp
CPP_=fpp -f_com=no -free -w0 $*.F $*$(SUFFIX)

#-----------------------------------------------------------------------
# possible options for CPP:
# NGXhalf      charge density   reduced in X direction
# wNGXhalf     gamma point only reduced in X direction
# avoidalloc   avoid ALLOCATE if possible
# PGF90        work around some for some PGF90 / IFC bugs
# CACHE_SIZE   1000 for PII,PIII, 5000 for Athlon, 8000-12000 P4, PD
# RPROMU_DGEMV use DGEMV instead of DGEMM in RPRO (depends on used BLAS)
# RACCMU_DGEMV use DGEMV instead of DGEMM in RACC (depends on used BLAS)
# tbdyn        MD package of Tomas Bucko
#-----------------------------------------------------------------------
#CPP    = $(CPP_) -DHOST=\"LinuxIFC\" \
#         -DCACHE_SIZE=12000 -DPGF90 -Davoidalloc -DNGXhalf \
#         -DRPROMU_DGEMV -DRACCMU_DGEMV

#-----------------------------------------------------------------------
# general fortran flags  (there must a trailing blank on this line)
# byterecl is strictly required for ifc, since otherwise
# the WAVECAR file becomes huge
#-----------------------------------------------------------------------
#FFLAGS =  -FR -names lowercase -assume byterecl
FFLAGS =  -I/sw/intel/composer_xe_2013_sp1.1.106/mkl/include/fftw -FR -lowercase -assume byterecl

#-----------------------------------------------------------------------
# optimization
# we have tested whether higher optimisation improves performance
# -axK  SSE1 optimization,  but also generate code executable on all mach.
#       xK improves performance somewhat on XP, and a is required in order
#       to run the code on older Athlons as well
# -xW   SSE2 optimization
# -axW  SSE2 optimization,  but also generate code executable on all mach.
# -tpp6  P3 optimization
# -tpp7  P4 optimization
#-----------------------------------------------------------------------
# ifc.9.1, ifc.10.1 recommended
OFLAG=-O2 -ip
OFLAG_HIGH = $(OFLAG)
OBJ_HIGH =
OBJ_NOOPT =
DEBUG  = -FR -O0
INLINE = $(OFLAG)

#-----------------------------------------------------------------------
# the following lines specify the position of BLAS and LAPACK
# we recommend to use mkl, that is simple and most likely
# fastest in Intel based machines
#-----------------------------------------------------------------------
# mkl path for ifc 11 compiler
#MKL_PATH=$(MKLROOT)/lib/em64t
# mkl path for ifc 12 compiler
#MKL_PATH=$(MKLROOT)/lib/intel64
#MKL_FFTW_PATH=$(MKLROOT)/interfaces/fftw3xf/

# BLAS
# setting -DRPROMU_DGEMV -DRACCMU_DGEMV in the CPP lines usually speeds up program execution
# BLAS= -Wl,--start-group $(MKL_PATH)/libmkl_intel_lp64.a $(MKL_PATH)/libmkl_intel_thread.a $(MKL_PATH)/libmkl_core.a -Wl,--end-group -lguide
# faster linking and available from at least version 11
#BLAS= -lguide -mkl

# LAPACK, use vasp.5.lib/lapack_double
#LAPACK= ../vasp.5.lib/lapack_double.o
# LAPACK from mkl, usually faster and contains scaLAPACK as well
#LAPACK= $(MKL_PATH)/libmkl_intel_lp64.a

# here a tricky version, link in libgoto and use mkl as a backup
# also needs a special line for LAPACK
# this is the best thing you can do on AMD based systems !!!!!!
#BLAS = -Wl,--start-group /opt/libs/libgoto/libgoto.so $(MKL_PATH)/libmkl_intel_thread.a $(MKL_PATH)/libmkl_core.a -Wl,--end-group -liomp5
#LAPACK= /opt/libs/libgoto/libgoto.so $(MKL_PATH)/libmkl_intel_lp64.a

#-----------------------------------------------------------------------
#LIB  = -L../vasp.5.lib -ldmy \
#       ../vasp.5.lib/linpack_double.o $(LAPACK) \
#       $(BLAS)
# options for linking, nothing is required (usually)
LINK    =

#-----------------------------------------------------------------------
# fft libraries:
# VASP.5.2 can use fftw.3.1.X (http://www.fftw.org)
# since this version is faster on P4 machines, we recommend to use it
#-----------------------------------------------------------------------
FFT3D   = fft3dfurth.o fft3dlib.o
# alternatively: fftw.3.1.X is slighly faster and should be used if available
#FFT3D   = fftw3d.o fft3dlib.o /opt/libs/fftw-3.1.2/lib/libfftw3.a
# you may also try to use the fftw wrapper to mkl (but the path might vary a lot)
# it seems this is best for AMD based systems
#FFT3D   = fftw3d.o fft3dlib.o $(MKL_FFTW_PATH)/libfftw3xf_intel.a
#INCS = -I$(MKLROOT)/include/fftw

#=======================================================================
# MPI section, uncomment the following lines until
#  general rules and compile lines
# presently we recommend OPENMPI, since it seems to offer better
# performance than lam or mpich
#
# !!! Please do not send me any queries on how to install MPI, I will
# certainly not answer them !!!!
#=======================================================================
#-----------------------------------------------------------------------
# fortran linker for mpi
#-----------------------------------------------------------------------
FC=mpif90
FCL=$(FC)

#-----------------------------------------------------------------------
# additional options for CPP in parallel version (see also above):
# NGZhalf      charge density   reduced in Z direction
# wNGZhalf     gamma point only reduced in Z direction
# scaLAPACK    use scaLAPACK (recommended if mkl is available)
# avoidalloc   avoid ALLOCATE if possible
# PGF90        work around some for some PGF90 / IFC bugs
# CACHE_SIZE   1000 for PII,PIII, 5000 for Athlon, 8000-12000 P4, PD
# RPROMU_DGEMV use DGEMV instead of DGEMM in RPRO (depends on used BLAS)
# RACCMU_DGEMV use DGEMV instead of DGEMM in RACC (depends on used BLAS)
# tbdyn        MD package of Tomas Bucko
#-----------------------------------------------------------------------
CPP     = $(CPP_) -DMPI -DHOST=\"LinuxIFC\" -DIFC \
          -DCACHE_SIZE=4000 -DPGF90 -Davoidalloc -DNGZhalf \
          -DMPI_BLOCK=8000 -Duse_collective -DscaLAPACK
##        -DRPROMU_DGEMV -DRACCMU_DGEMV

#-----------------------------------------------------------------------
# location of SCALAPACK
# if you do not use SCALAPACK simply leave this section commented out
#-----------------------------------------------------------------------
# usually simplest link in mkl scaLAPACK
BLACS= -lmkl_blacs_openmpi_lp64
SCA= /sw/intel/composer_xe_2015.2.164/mkl/lib/intel64/libmkl_scalapack_lp64.a $(BLACS)
#SCA= /sw/intel/composer_xe_2013_sp1.1.106/mkl/lib/intel64/libmkl_scalapack_lp64.a $(BLACS)

BLAS= -L/sw/intel/composer_xe_2015.2.164/mkl/lib/intel64 -lmkl_blacs_openmpi_lp64 -lmkl_intel_lp64 -lmkl_blacs_lp64 -lmkl_intel_thread -liomp5 -lpthread -lmkl_sequential -lmkl_core -i-static
#BLAS= -L/sw/intel/composer_xe_2013_sp1.1.106/mkl/lib/intel64 -lmkl_blacs_openmpi_lp64 -lmkl_intel_lp64 -lmkl_blacs_lp64 -lmkl_intel_thread -liomp5 -lpthread -lmkl_sequential -lmkl_core -i-static

# LAPACK, simplest use vasp.5.lib/lapack_double
#LAPACK= -L/sw/intel/composer_xe_2013_sp1.1.106/mkl/lib -lmkl_intel_lp64 -lmkl_lapack95_ilp64
LAPACK= ../vasp.5.lib/lapack_double.o
#LAPACK= -L/sw/intel/composer_xe_2013_sp1.1.106/mkl/lib/intel64 -lmkl_intel_lp64 -lmkl_lapack95_ilp64
#LAPACK= -L/sw/intel/composer_xe_2015.2.164/mkl/lib/intel64 -lmkl_intel_lp64 -lmkl_lapack95_ilp64

#-----------------------------------------------------------------------
# libraries
#-----------------------------------------------------------------------
LIB     = -L../vasp.5.lib -ldmy \
          ../vasp.5.lib/linpack_double.o \
          $(SCA) $(LAPACK) $(BLAS)

#-----------------------------------------------------------------------
# parallel FFT
#-----------------------------------------------------------------------
# FFT: fftmpi.o with fft3dlib of Juergen Furthmueller
FFT3D   = fftmpi.o fftmpi_map.o fft3dfurth.o fft3dlib.o
# alternatively: fftw.3.1.X is slighly faster and should be used if available
#FFT3D   = fftmpiw.o fftmpi_map.o fftw3d.o fft3dlib.o /opt/libs/fftw-3.1.2/lib/libfftw3.a
# you may also try to use the fftw wrapper to mkl (but the path might vary a lot)
# it seems this is best for AMD based systems
#FFT3D   = fftmpiw.o fftmpi_map.o fftw3d.o fft3dlib.o $(MKL_FFTW_PATH)/libfftw3xf_intel.a
#INCS = -I$(MKLROOT)/include/fftw
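
# Editor's sketch, not part of the original makefile: the active BLAS line
# above links both -lmkl_intel_thread and -lmkl_sequential, and both
# -lmkl_blacs_openmpi_lp64 and -lmkl_blacs_lp64. Mixing MKL threading
# layers, or linking a BLACS that does not match the MPI actually in use,
# is a common cause of MPI jobs that start but then hang. A cleaner line in
# the style of Intel's MKL link-line advisor (sequential MKL, OpenMPI BLACS
# supplied via $(BLACS) above; untested here, so left commented out):
#BLAS= -L/sw/intel/composer_xe_2015.2.164/mkl/lib/intel64 -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread -lm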
#-----------------------------------------------------------------------
# general rules and compile lines
#-----------------------------------------------------------------------
BASIC=   symmetry.o symlib.o lattlib.o random.o
SOURCE=  base.o mpi.o smart_allocate.o xml.o \
......
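
For reference, here is a minimal sketch of the INCAR difference described above. Only the ISPIN tag is from the original post; the MAGMOM line is a hypothetical placeholder showing where initial moments would go:

ISPIN = 1          ! run that works; 1 is also the VASP default if the tag is omitted
!ISPIN = 2         ! the only change in the run that hangs before the first SCF step
!MAGMOM = 16*2.0   ! hypothetical initial moments; VASP defaults to 1.0 per atom when omitted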
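Since the build is the suspect, it may also help to check what the binary actually linked and whether any SCF iteration is ever reached. A sketch, assuming the makefile produces a binary named vasp and the job runs in the current directory:

ldd ./vasp | grep -i mkl    # confirm which MKL / BLACS libraries resolve at run time
mpirun -np 4 ./vasp         # hypothetical small test run
tail -f OSZICAR             # stays empty if the run never enters the SCF loop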