
Published by Marcelo Soares Souza on 20 August 2012, licensed under Creative Commons 3.0 Brazil

A new release candidate of MPICH2 1.6 has been announced (1.5rc1). MPICH is a cross-platform implementation of the MPI message-passing standard for developing distributed-memory applications used in parallel computing.

MPICH2 1.5 (the future 1.6) brings many new features, such as an implementation of the new MPI-3 standard and a new process manager called Hydra.

MPICH is Free Software and is available for several Unix-like systems (including Linux and Mac OS X) as well as for Microsoft Windows. It is also one of the most popular implementations of the MPI standard, used as the basis for other MPI implementations, including IBM MPI (for Blue Gene), Intel MPI, Cray MPI, Microsoft MPI, Myricom MPI, OSU MVAPICH/MVAPICH2, and many others.

Download
http://www.mcs.anl.gov/research/projects/mpich2/downloads/index.php?s=downloads
http://www.cebacad.net/files/mpich2/ubuntu (Ubuntu packages)
http://www.cebacad.net/files/mpich2/debian (Debian packages)
http://www.cebacad.net/files/mpich2/slackware (Slackware packages)


List of changes between the stable release 1.4.1p1 and 1.5rc1

  • OVERALL: Nemesis now supports an "--enable-yield=..." configure option for better performance/behavior when oversubscribing processes to cores.  Some form of this option is enabled by default on Linux, Darwin, and systems that support sched_yield().
  • OVERALL: Added support for Intel Many Integrated Core (MIC) architecture: shared memory, TCP/IP, and SCIF based communication.
  • MPI-3: const support has been added to mpi.h, although it is disabled by default.  It can be enabled on a per-translation unit basis with "#define MPICH2_CONST const".
  • MPI-3: Added support for MPIX_Type_create_hindexed_block.
  • MPI-3: The new MPI-3 nonblocking collective functions are now available as "MPIX_" functions (e.g., "MPIX_Ibcast").
  • MPI-3: The new MPI-3 neighborhood collective routines are now available as "MPIX_" functions (e.g., "MPIX_Neighbor_allgather").
  • MPI-3: The new MPI-3 MPI_Comm_split_type function is now available as an "MPIX_" function.
  • MPI-3: The new MPI-3 tools interface is now available as "MPIX_T_" functions.  This is a beta implementation right now with several limitations, including no support for multithreading.  Several performance variables related to CH3's message matching are exposed through this interface.
  • MPI-3: The new MPI-3 matched probe functionality is supported via the new routines MPIX_Mprobe, MPIX_Improbe, MPIX_Mrecv, and MPIX_Imrecv.
  • MPI-3: The new MPI-3 nonblocking communicator duplication routine, MPIX_Comm_idup, is now supported.  It will only work for single-threaded programs at this time.
  • MPI-3: MPIX_Comm_reenable_anysource support
  • MPI-3: Native MPIX_Comm_create_group support (updated version of the prior MPIX_Group_comm_create routine).
  • MPI-3: MPI_Intercomm_create's internal communication no longer interferes with point-to-point communication, even if point-to-point operations on the parent communicator use the same tag or MPI_ANY_TAG.
  • MPI-3: Eliminated the possibility of interference between MPI_Intercomm_create and point-to-point messaging operations.
  • Build system: Completely revamped build system to rely fully on autotools.  Parallel builds ("make -j8" and similar) are now supported.
  • Build system: rename "./maint/updatefiles" --> "./autogen.sh" and "configure.in" --> "configure.ac"
  • JUMPSHOT: Improvements to Jumpshot to handle thousands of timelines, including performance improvements to slog2 in such cases.
  • JUMPSHOT: Added navigation support to locate chosen drawable's ends when viewport has been scrolled far from the drawable.
  • PM/PMI: Added support for memory binding policies.
  • PM/PMI: Various improvements to the process binding support in Hydra.  Several new pre-defined binding options are provided.
  • PM/PMI: Upgraded to hwloc-1.5rc2
  • PM/PMI: Several improvements to PBS support to natively use the PBS launcher.
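As a concrete illustration of the new MPI-3 nonblocking collectives mentioned above, the sketch below broadcasts an integer with MPIX_Ibcast, the MPIX_-prefixed form shipped in 1.5rc1. This is a minimal example, not from the release notes; it assumes an MPI-3-capable MPICH2 build and would be launched with something like "mpiexec -n 4 ./a.out".

```c
/* Hedged sketch: MPI-3 nonblocking broadcast via the MPIX_ prefix
 * provided by MPICH2 1.5rc1. Requires an MPI-3-capable MPICH2. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, data = 0;
    MPI_Request req;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0)
        data = 42;                       /* value to broadcast from root */

    /* Nonblocking broadcast: returns immediately with a request handle. */
    MPIX_Ibcast(&data, 1, MPI_INT, 0, MPI_COMM_WORLD, &req);

    /* Independent work could overlap with the broadcast here. */

    MPI_Wait(&req, MPI_STATUS_IGNORE);   /* complete the collective */
    printf("rank %d received %d\n", rank, data);

    MPI_Finalize();
    return 0;
}
```

The point of the nonblocking form is the gap between MPIX_Ibcast and MPI_Wait, where computation can overlap with the collective's communication.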
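The const support noted in the changelog is opt-in per translation unit. The fragment below is a hypothetical illustration of how that opt-in would look; the define must precede the inclusion of mpi.h, as the release notes describe.

```c
/* Hedged sketch: enabling const-correct MPI prototypes in MPICH2 1.5.
 * The define must appear before mpi.h is included, per translation unit. */
#define MPICH2_CONST const
#include <mpi.h>

/* With the define active, read-only buffer arguments (e.g. the send
 * buffer of MPI_Send) are declared const in mpi.h, so const-qualified
 * application buffers can be passed without casts. */
int send_value(const int *value, int dest, MPI_Comm comm)
{
    return MPI_Send(value, 1, MPI_INT, dest, 0, comm);
}
```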
Tag(s): Free Software MPICH MPI Development Parallel Computing Beowulf Clusters
