zxzj05 — Honorary Moderator (Noted Writer)
[Resource]

[Share] Parallel installation and use of ABINIT
ABINIT parallel installation

With IFORT and MPICH installed, and the paths set in .bashrc, run the following commands in the ABINIT directory:

    ./configure --prefix=/home/abinit FC=mpif90 --enable-mpi=yes
    make
    make install

The bin directory, containing the executables, will then be found under the install prefix /home/abinit.

##########################################

ABINIT parallel usage

Paral_use

HOW TO RUN THE PARALLEL VERSION OF ABINIT ?

Copyright (C) 1999-2008 ABINIT group (XG,DCA)
This file is distributed under the terms of the GNU General Public License; see ~abinit/COPYING or http://www.gnu.org/copyleft/gpl.txt .
For the initials of contributors, see ~abinit/doc/developers/contributors.txt .

The reader should have read the ~abinit/doc/users/abinis_help.html file, and be sufficiently experienced with the use of the sequential code (abinis).

The parallelisation described in this document is the one implemented using MPI. There is another parallelisation, in development, based on OpenMP, for SMP machines. Since that parallelisation is still in development, it is not yet described in detail. Let us simply mention that it uses directives placed in comments, except for the Src_*/rhohxc.f routine, for which the -DOPENMP directive should be used at compile time.

========================================

0) Present implementation of parallelism.
- ABINIT can benefit from parallelism over different k-points and spin polarisations.
- ABINIT can also benefit from parallelism over different bands, in the case of response functions, quite automatically.
- ABINIT can also benefit from parallelism over different bands for ground-state calculations, but the user has to set a few input variables; moreover, the algorithm used is not exactly the same as without band parallelism (and might not be as stable).

There are few limitations to this combined approach (k-points, spin polarisation, bands) for response functions. However, if you are conducting ground-state calculations and using only ONE k-point (which is frequent for molecules in a big box, or for large systems of more than 20 atoms), you will be quite limited, since only the band parallelisation is then available (typically 4-8 processors or less).

The parallelisation over k-points is very efficient in terms of communications (so running it on a cluster of workstations linked by an Ethernet 100 Mbits/sec network is OK). However, it cannot decrease the memory needed on each node with respect to the memory of the same run in sequential mode with mkmem=0. The parallelisation over bands is less efficient, but you might be very happy with it, depending on your problem and your type of machines.

The MPI library of routines has been used to implement the parallelism, so MPI must be available on the machine(s) you want to use.

1) How to make the parallel code

You should set up a makefile_macros file with specific indications for parallelism. Presently, there are examples of such files for:
- SMP machines (DEC/Compaq, CRAY_T3E, SGI Origin, HP, Compaq, Fujitsu, Ultrasparc, Intel)
- clusters running MPI under MPICH (DEC/Compaq, Intel)
- clusters running MPI under LAM (IBM workstations)
They can be found in the appropriate subdirectories of the ~/abinit/Mach_dept_files directory.
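As an illustration of what such a makefile_macros might contain for a cluster running MPICH, here is a minimal sketch. The variable names FC and CPP_FLAGS are assumptions modeled on the configure command and the -DMPI flag mentioned elsewhere in this document; in practice, copy a real example from ~/abinit/Mach_dept_files rather than writing one from scratch:

```
# Hypothetical makefile_macros fragment for MPI under MPICH.
FC = mpif90            # MPI wrapper around the Fortran compiler (e.g. ifort)
CPP_FLAGS = -DMPI      # activates the k-point/spin/perturbation parallelism
```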
You might also have to provide a machine-dependent mpif.h file (or, more precisely, a link from mpif.h in the ~abinit directory to the appropriate mpif.h file). Then, supposing that the library archives have already been made (nothing special about them in the parallel case), you have to issue:

    make abinip

This will make 'abinip', the parallel version of ABINIT.

2) How to use the parallel code.

Different examples are given in the Test_paral directory. The command to be used differs from machine to machine. Usually, you will have to give the number of processors on the command line (alternatively, sometimes this number is specified in an environment variable). Except in the case of ground-state band parallelism (see later), the input file is the same in the sequential and parallel cases. For example (compare with the sequential case):

- on a CRAY machine, the command line might be

    mpprun -n number_of_processors abinip < files >& log
  or
    mpirun -np number_of_processors abinip < files >& log

  where number_of_processors is the number of processors on which the job must be run. The user's path might have to be updated to find the commands mpprun or mpirun.

- on an IBM cluster, under LAM, one first has to boot the cluster, using a command like

    lamboot cluster_file

  where cluster_file is a file that contains the names of the machines belonging to the cluster. Then, one will issue

    mpirun -w -c number_of_processors abinip < files >& log

  Afterwards, one might have to wipe the cluster:

    wipe cluster_file

- on a DEC cluster, under MPICH, one will issue

    mpirun -np number_of_processors -machinefile cluster_file abinip < files >& log

  where cluster_file is a file that contains the names of the machines belonging to the cluster, and number_of_processors is the number of processors on which the job must be run.
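The three launch variants above can be collected in a small wrapper sketch. MACHINE, NP, and the names cluster_file/files/log are illustrative assumptions, and the command line is only printed rather than executed, so the sketch runs even on a machine without MPI installed:

```shell
#!/bin/sh
# Select the MPI start-up command for the current machine type
# (cray, lam, or mpich, as described in the text above).
MACHINE=${MACHINE:-mpich}
NP=${NP:-4}                      # number of processors

case "$MACHINE" in
  cray)  CMD="mpprun -n $NP abinip" ;;
  lam)   CMD="mpirun -w -c $NP abinip" ;;
  mpich) CMD="mpirun -np $NP -machinefile cluster_file abinip" ;;
  *)     echo "unknown machine type: $MACHINE" >&2; exit 1 ;;
esac

# Print the full command line instead of executing it
# (">& log" is the csh-style redirection used in the document).
echo "$CMD < files >& log"
```

In a real job script one would replace the final echo with the command itself, after lamboot on a LAM cluster.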
The user's path might have to be updated to find the command mpirun.

When running on a cluster, one might have to pay attention to the paths of the files named in the "files" file: depending on the way the cluster is set up, one might be forced to use absolute paths instead of relative ones.

The "log" file is actually closed by each processor at the very beginning of the run, and a new file, different for each processor, is created. Its name is derived from the tmp root name, followed by "LOG" and the number of the processor.

In order to use the band parallelism for the ground state, you need to use the following input variables:

    wfoptalg 1
    nbdblock (to be set to the number of processors you would like to use
              to treat one k point, typically 4-8; higher might cause instabilities)

See Test_v3#41 for an example.

3) Optimisation

Compared with the sequential version, there is nearly no additional tuning of speed, memory or disk usage to be done by the user.

- We have already seen how to define the number of processors.
- If you have not set mkmem to 0, the wavefunctions belonging to different k points and/or spin polarisations will be spread over the different processors, which saves a lot of memory. However, if you are already storing the wavefunctions of only one k point in core memory and using files for the other k points (input variable mkmem set to 0), then you will gain nothing from the parallelism in terms of memory.
- One might also pay attention to the input variable "localrdwf", for which the above-mentioned issue is also important, although less critical for cluster machines. Its default is always 0, as it is usually more convenient to work with only one file, read by one processor and then transmitted to the others.

4) Usage of compilation options and keywords for parallelisation

- If the option -DMPI is introduced in the cpp flags, the parallelisation over k-points, spin or perturbations is activated.
- A band/FFT parallelisation can be added to the previous one with the keyword paral_kgb. To be activated, band/FFT parallelisation requires the compilation option -DMPI to be present. The work load is then split over the two dimensions of a band/FFT 3d cartesian grid. By default, this parallelisation is off. It cannot be specified separately for each dataset.

[ Last edited by zxzj05 on 2009-3-21 at 15:54 ]
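Putting sections 2) and 4) together, the parallelisation-related input variables mentioned in this document can be sketched as an input-file fragment. The values are illustrative; only the variable names come from the text, and wfoptalg/nbdblock belong to the ground-state band parallelism of section 2, which is a different scheme from paral_kgb:

```
# Ground-state band parallelism (section 2; see Test_v3#41):
wfoptalg 1
nbdblock 4        # processors per k point; typically 4-8, higher may be unstable

# Band/FFT parallelisation (section 4; needs -DMPI at compile time):
# paral_kgb 1     # off by default; cannot be set per dataset
```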
» ²ÂÄãϲ»¶
¹ú×Ô¿ÆÃæÉÏ»ù½ð×ÖÌå
ÒѾÓÐ7È˻ظ´
ҩѧ383 Çóµ÷¼Á
ÒѾÓÐ4È˻ظ´
286Çóµ÷¼Á
ÒѾÓÐ5È˻ظ´
085601Çóµ÷¼Á
ÒѾÓÐ3È˻ظ´
302Çóµ÷¼Á
ÒѾÓÐ5È˻ظ´
¿¼Ñл¯Ñ§Ñ§Ë¶µ÷¼Á£¬Ò»Ö¾Ô¸985
ÒѾÓÐ5È˻ظ´
328Çóµ÷¼Á£¬Ó¢ÓïÁù¼¶551£¬ÓпÆÑоÀú
ÒѾÓÐ4È˻ظ´
»úеר˶325£¬Ñ°ÕÒµ÷¼ÁԺУ
ÒѾÓÐ5È˻ظ´
²ÄÁÏר˶306Ó¢Ò»Êý¶þ
ÒѾÓÐ6È˻ظ´
»ï°éÃÇ£¬×£ÎÒÉúÈÕ¿ìÀÖ°É
ÒѾÓÐ26È˻ظ´
» ±¾Ö÷ÌâÏà¹ØÉ̼ÒÍÆ¼ö: (ÎÒÒ²ÒªÔÚÕâÀïÍÆ¹ã)