[Discussion] Problems with single-machine parallel VASP

tjpm — 金虫 (regular writer)
On a desktop machine (CentOS 5.3 x86_64, CPU: E4400, RAM: 8 GB) I installed a parallel build of VASP. The Fortran compiler is Intel Fortran (ifort) 11.0, I believe, and MKL is the one bundled with the Intel Fortran installer (from 11.0 on they ship together and can be installed in one go). I wrote the MKL lib path into mkl.conf, dropped that file into /etc/ld.so.conf.d, and ran sudo ldconfig so the dynamic libraries can be found at run time (a minimal sketch of this step is given at the end of this post).

MPICH2 was built with:

export FC=ifort
export F77=ifort
./configure --prefix=/dir

I won't go through the VASP compilation itself. LAPACK and BLAS both come from MKL, and vasp.4.lib was compiled with mpif90. Here is the makefile:

=================================
.SUFFIXES: .inc .f .f90 .F
SUFFIX=.f90

#-----------------------------------------------------------------------
# fortran compiler and linker
#-----------------------------------------------------------------------
#FC=/export/mpi/mpich_intel_up/bin/mpif90
# fortran linker
#FCL=$(FC)

#-----------------------------------------------------------------------
# whereis CPP ?? (I need CPP, can't use gcc with proper options)
# that's the location of gcc for SUSE 5.3
#
# CPP_ = /usr/lib/gcc-lib/i486-linux/2.7.2/cpp -P -C
#
# that's probably the right line for some Red Hat distribution:
#
# CPP_ = /usr/lib/gcc-lib/i386-redhat-linux/2.7.2.3/cpp -P -C
#
# SUSE X.X, maybe some Red Hat distributions:

CPP_ = ./preprocess <$*.F | /usr/bin/cpp -P -C -ansi >$*$(SUFFIX)

#-----------------------------------------------------------------------
# possible options for CPP:
# NGXhalf         charge density   reduced in X direction
# wNGXhalf        gamma point only reduced in X direction
# avoidalloc      avoid ALLOCATE if possible
# IFC             work around some IFC bugs
# CACHE_SIZE      1000 for PII,PIII, 5000 for Athlon, 8000-12000 P4
# RPROMU_DGEMV    use DGEMV instead of DGEMM in RPRO (usually faster)
# RACCMU_DGEMV    use DGEMV instead of DGEMM in RACC (faster on P4)
#-----------------------------------------------------------------------
#CPP    = $(CPP_) -DMPI \

#-----------------------------------------------------------------------
# general fortran flags (there must be a trailing blank on this line)
#-----------------------------------------------------------------------

FFLAGS = -FR -lowercase

#-----------------------------------------------------------------------
# optimization
# we have tested whether higher optimisation improves performance
# -xW    SSE2 optimization
# -axW   SSE2 optimization, but also generate code executable on all mach.
# -tpp7  P4 optimization
# -prefetch
#-----------------------------------------------------------------------

OFLAG      = -O0
OFLAG_HIGH = $(OFLAG)
OBJ_HIGH   =
OBJ_NOOPT  =
DEBUG      = -FR -O0
INLINE     = $(OFLAG)

#-----------------------------------------------------------------------
# the following lines specify the position of BLAS and LAPACK
# on P4, VASP works fastest with Intels mkl performance library
# so that's what I recommend
#-----------------------------------------------------------------------

# Atlas based libraries
#ATLASHOME= $(HOME)/archives/BLAS_OPT/ATLAS/lib/Linux_P4SSE2/
#BLAS= -L$(ATLASHOME) -lf77blas -latlas

# use specific libraries (default library path points to other libraries)
BLAS=-L/opt/intel/Compiler/11.0/069/mkl/lib/em64t -lmkl -lguide -lpthread
LAPACK=-L/opt/intel/Compiler/11.0/069/mkl/lib/em64t -lmkl_lapack

#=======================================================================
# MPI section, uncomment the following lines
#
# one comment for users of mpich or lam:
# You must *not* compile mpi with g77/f77, because f77/g77
# appends *two* underscores to symbols that contain already an
# underscore (i.e. MPI_SEND becomes mpi_send__). The pgf90
# compiler however appends only one underscore.
# Precompiled mpi version will also not work !!!
#
# We found that mpich.1.2.1 and lam-6.5.X are stable
# mpich.1.2.1 was configured with
# ./configure -prefix=/usr/local/mpich_nodvdbg -fc="pgf77 -Mx,119,0x200000" \
#   -f90="pgf90 -Mx,119,0x200000" \
#   --without-romio --without-mpe -opt=-O \
#
# lam was configured with the line
# ./configure -prefix /usr/local/lam-6.5.X --with-cflags=-O -with-fc=pgf90 \
#   --with-f77flags=-O --without-romio
#
# lam was generally faster and we found an average communication
# bandwidth of roughly 160 MBit/s (full duplex)
#
# please note that you might be able to use a lam or mpich version
# compiled with f77/g77, but then you need to add the following
# options: -Msecond_underscore (compilation) and -g77libs (linking)
#
# !!! Please do not send me any queries on how to install MPI, I will
# certainly not answer them !!!!
#=======================================================================
#-----------------------------------------------------------------------
# fortran linker for mpi: if you use LAM and compiled it with the options
# suggested above, you can use the following line
#-----------------------------------------------------------------------

FC=mpif90
FCL=$(FC)

#-----------------------------------------------------------------------
# additional options for CPP in parallel version (see also above):
# NGZhalf     charge density   reduced in Z direction
# wNGZhalf    gamma point only reduced in Z direction
# scaLAPACK   use scaLAPACK (usually slower on 100 Mbit Net)
#-----------------------------------------------------------------------

CPP    = $(CPP_) -DMPI -DHOST=\"LinuxIFC\" -DIFC \
         -Dkind8 -DCACHE_SIZE=4000 -DPGF90 -Davoidalloc \
         -DMPI_BLOCK=500 -DPROC_GROUP=8 \
         -DRPROMU_DGEMV -DRACCMU_DGEMV
#        -DNGZhalf

#-----------------------------------------------------------------------
# location of SCALAPACK
# if you do not use SCALAPACK simply uncomment the line SCA
#-----------------------------------------------------------------------

#BLACS=$(HOME)/archives/SCALAPACK/BLACS/
#SCA_=$(HOME)/archives/SCALAPACK/SCALAPACK

#SCA= $(SCA_)/libscalapack.a \
#     $(BLACS)/LIB/blacsF77init_MPI-LINUX-0.a $(BLACS)/LIB/blacs_MPI-LINUX-0.a $(BLACS)/LIB/blacsF77init_MPI-LINUX-0.a

SCA=

#-----------------------------------------------------------------------
# libraries for mpi
#-----------------------------------------------------------------------

LIB    = -L../vasp.4.lib -ldmy \
         ../vasp.4.lib/linpack_double.o $(LAPACK) \
         $(SCA) $(BLAS)

# FFT: fftmpi.o with fft3dlib of Juergen Furthmueller
FFT3D  = fftmpi.o fftmpi_map.o fft3dlib.o

# fftw.3.0 is slightly faster and should be used if available
#FFT3D  = fftmpiw.o fftmpi_map.o fft3dlib.o /opt/libs/fftw-3.0/lib/libfftw3.a

#-----------------------------------------------------------------------
# general rules and compile lines
#-----------------------------------------------------------------------
BASIC=  symmetry.o symlib.o lattlib.o random.o

SOURCE= base.o     mpi.o      smart_allocate.o xml.o \
        constant.o jacobi.o   main_mpi.o  scala.o \
        asa.o      lattice.o  poscar.o    ini.o  setex.o  radial.o \
        pseudo.o   mgrid.o    mkpoints.o  wave.o wave_mpi.o $(BASIC) \
        nonl.o     nonlr.o    dfast.o     choleski2.o \
        mix.o      charge.o   xcgrad.o    xcspin.o potex1.o potex2.o \
        metagga.o  constrmag.o pot.o      cl_shift.o force.o dos.o elf.o \
        tet.o      hamil.o    steep.o \
        chain.o    dyna.o     relativistic.o LDApU.o sphpro.o paw.o us.o \
        ebs.o      wavpre.o   wavpre_noio.o broyden.o \
        dynbr.o    rmm-diis.o reader.o    writer.o tutor.o xml_writer.o \
        brent.o    stufak.o   fileio.o    opergrid.o stepver.o \
        dipol.o    xclib.o    chgloc.o    subrot.o optreal.o davidson.o \
        edtest.o   electron.o shm.o       pardens.o paircorrection.o \
        optics.o   constr_cell_relax.o    stm.o finite_diff.o \
        elpol.o    setlocalpp.o

INC=

vasp: $(SOURCE) $(FFT3D) $(INC) main.o
	rm -f vasp
	$(FCL) -o vasp $(LINK) main.o $(SOURCE) $(FFT3D) $(LIB)
makeparam: $(SOURCE) $(FFT3D) makeparam.o main.F $(INC)
	$(FCL) -o makeparam $(LINK) makeparam.o $(SOURCE) $(FFT3D) $(LIB)
zgemmtest: zgemmtest.o base.o random.o $(INC)
	$(FCL) -o zgemmtest $(LINK) zgemmtest.o random.o base.o $(LIB)
dgemmtest: dgemmtest.o base.o random.o $(INC)
	$(FCL) -o dgemmtest $(LINK) dgemmtest.o random.o base.o $(LIB)
ffttest: base.o smart_allocate.o mpi.o mgrid.o random.o ffttest.o $(FFT3D) $(INC)
	$(FCL) -o ffttest $(LINK) ffttest.o mpi.o mgrid.o random.o smart_allocate.o base.o $(FFT3D) $(LIB)
kpoints: $(SOURCE) $(FFT3D) makekpoints.o main.F $(INC)
	$(FCL) -o kpoints $(LINK) makekpoints.o $(SOURCE) $(FFT3D) $(LIB)

clean:
	-rm -f *.f *.o *.L ; touch *.F

main.o: main$(SUFFIX)
	$(FC) $(FFLAGS)$(DEBUG) $(INCS) -c main$(SUFFIX)
xcgrad.o: xcgrad$(SUFFIX)
	$(FC) $(FFLAGS) $(INLINE) $(INCS) -c xcgrad$(SUFFIX)
xcspin.o: xcspin$(SUFFIX)
	$(FC) $(FFLAGS) $(INLINE) $(INCS) -c xcspin$(SUFFIX)

makeparam.o: makeparam$(SUFFIX)
	$(FC) $(FFLAGS)$(DEBUG) $(INCS) -c makeparam$(SUFFIX)

makeparam$(SUFFIX): makeparam.F main.F
#
# MIND: I do not have a full dependency list for the include
# and MODULES: here are only the minimal basic dependencies
# if one structure is changed then touch_dep must be called
# with the corresponding name of the structure
#
base.o: base.inc base.F
mgrid.o: mgrid.inc mgrid.F
constant.o: constant.inc constant.F
lattice.o: lattice.inc lattice.F
setex.o: setexm.inc setex.F
pseudo.o: pseudo.inc pseudo.F
poscar.o: poscar.inc poscar.F
mkpoints.o: mkpoints.inc mkpoints.F
wave.o: wave.inc wave.F
nonl.o: nonl.inc nonl.F
nonlr.o: nonlr.inc nonlr.F

$(OBJ_HIGH):
	$(CPP)
	$(FC) $(FFLAGS) $(OFLAG_HIGH) $(INCS) -c $*$(SUFFIX)
$(OBJ_NOOPT):
	$(CPP)
	$(FC) $(FFLAGS) $(INCS) -c $*$(SUFFIX)

fft3dlib_f77.o: fft3dlib_f77.F
	$(CPP)
	$(F77) $(FFLAGS_F77) -c $*$(SUFFIX)

.F.o:
	$(CPP)
	$(FC) $(FFLAGS) $(OFLAG) $(INCS) -c $*$(SUFFIX)
.F$(SUFFIX):
	$(CPP)
$(SUFFIX).o:
	$(FC) $(FFLAGS) $(OFLAG) $(INCS) -c $*$(SUFFIX)

# special rules
#-----------------------------------------------------------------------
# -tpp5|6|7 P, PII-PIII, PIV
# -xW use SIMD (does not pay off on PII, since fft3d uses double prec)
# all other options do not affect the code performance since -O1 is used

fft3dlib.o : fft3dlib.F
	$(CPP)
#	$(F77) -FR -lowercase -O1 -tpp7 -xW -prefetch- -prev_div -unroll0 -e95 -vec_report3 -c $*$(SUFFIX)
	$(F77) -FR -lowercase -O1 -tpp7 -xW -prefetch- -prev_div -unroll0 -vec_report3 -c $*$(SUFFIX)
fft3dfurth.o : fft3dfurth.F
	$(CPP)
	$(F77) -FR -lowercase -O1 -c $*$(SUFFIX)

radial.o : radial.F
	$(CPP)
	$(F77) -FR -lowercase -O1 -c $*$(SUFFIX)

symlib.o : symlib.F
	$(CPP)
	$(F77) -FR -lowercase -O1 -c $*$(SUFFIX)

symmetry.o : symmetry.F
	$(CPP)
	$(F77) -FR -lowercase -O1 -c $*$(SUFFIX)

dynbr.o : dynbr.F
	$(CPP)
	$(F77) -FR -lowercase -O1 -c $*$(SUFFIX)

us.o : us.F
	$(CPP)
	$(F77) -FR -lowercase -O1 -c $*$(SUFFIX)

wave.o : wave.F
	$(CPP)
	$(F77) -FR -lowercase -O0 -c $*$(SUFFIX)

LDApU.o : LDApU.F
	$(CPP)
	$(F77) -FR -lowercase -O2 -c $*$(SUFFIX)
=================================

(Note: in my original post the LAPACK line was missing a slash, "Compiler11.0" instead of "Compiler/11.0"; it is corrected above to match the BLAS line.)

The problem now: I tried mpirun -np 2 vasp and mpirun -np 1 vasp, and the single-machine parallel scaling is not great. On the server the same job takes roughly 7 minutes on 1 CPU and roughly 5 minutes on 2 CPUs. My current guess is that MPICH2 was configured without the shared-memory (shm) option; in an earlier test the generic mpich2 from the Debian repositories and an mpich2 built with the shm option differed quite a bit in performance. Did anyone add this option when installing (see the configure sketch at the end of this post)? And please also check whether there is anything wrong with the VASP makefile.

[ Last edited by tjpm on 2009-5-7 at 18:36 ]
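On the ldconfig step mentioned at the top of the post, this is roughly what I mean (a minimal sketch only; the MKL path is the one from the BLAS/LAPACK lines of the makefile, adjust it to your own installation):

=================================
# write the MKL library directory into its own ld.so.conf.d snippet
echo "/opt/intel/Compiler/11.0/069/mkl/lib/em64t" | sudo tee /etc/ld.so.conf.d/mkl.conf
# refresh the dynamic-linker cache
sudo ldconfig
# check that the MKL libraries are now visible to the dynamic linker
ldconfig -p | grep mkl
=================================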
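And on the shm question itself, a rough sketch of how MPICH2 might be reconfigured with a shared-memory channel. The device names depend on the MPICH2 release (ch3:shm / ch3:ssm on the older 1.0.x series, ch3:nemesis on newer releases, which uses shared memory for intra-node traffic); /opt/mpich2-shm is just a placeholder prefix, not a path from my system:

=================================
export FC=ifort
export F77=ifort
# pick the device that matches your MPICH2 version:
#   older releases: --with-device=ch3:shm  (or ch3:ssm)
#   newer releases: --with-device=ch3:nemesis
./configure --prefix=/opt/mpich2-shm --with-device=ch3:nemesis
make && make install
# rebuild vasp.4.lib and vasp with the mpif90 from this prefix, then re-test:
mpirun -np 2 vasp
=================================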
tjpm — 金虫 (regular writer) replied:
Maybe my luck just turned. I re-ran the test on the server just now: the single-H-atom total-energy example from 《VASP几个计算实例.doc》 inside the downloaded "vasp集合.rar". One core took 4 min 40 s; two cores took 2 min 30 s. Let's discuss the MPICH2 configure options, and also the choice of -DCACHE_SIZE in the CPP line:

CPP    = $(CPP_) -DMPI -DHOST=\"LinuxIFC\" -DIFC \
         -Dkind8 -DCACHE_SIZE=4000 -DPGF90 -Davoidalloc \
         -DMPI_BLOCK=500 -DPROC_GROUP=8 \
         -DRPROMU_DGEMV -DRACCMU_DGEMV
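For CACHE_SIZE I only have the hint from the makefile comment (1000 for PII/PIII, 5000 for Athlon, 8000-12000 for P4); for an E4400 one probably has to try a few values. A crude timing sketch (the test directory is a made-up placeholder; use the H-atom example above):

=================================
# edit -DCACHE_SIZE in the CPP line of the makefile (e.g. 2000, 4000, 8000),
# then rebuild and time the same small job on 1 and 2 cores:
make clean && make
cd ~/test/H_atom    # placeholder path for the H-atom example
time mpirun -np 1 vasp
time mpirun -np 2 vasp
=================================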
#2 | 2009-05-07 19:21:27