NeSC Research Seminar Series
Speaker: Rosa Filgueira

Dynamic and adaptive optimization techniques to enhance the performance of MPI applications on the HECToR and Eddie clusters
Parallel computation on cluster architectures has become the most common approach to developing high-performance scientific applications. The Message Passing Interface (MPI) is the message-passing library most widely used to provide communication in clusters. The recent popularity of multi-core processors provides a flexible way to increase the computational capability of clusters. Parallel applications are now shifting to multi-core clusters, and new optimization techniques are needed to exploit this architecture.

Although system performance may improve with multi-core processors in a cluster, bottlenecks in other components can still restrict cluster scalability. One of them is I/O performance, as a consequence of the I/O requests initiated by multiple cores. The I/O bottleneck grows when processes issue multiple non-contiguous disk accesses.

Another problem in multi-core clusters is the communication network. The networks used in clusters today are fast and have low latency. However, multi-core systems increase the computational capability even further. As a consequence, the frequency of messages grows, and the available bandwidth and latency become insufficient.

Therefore, my goal is to improve the performance of MPI-based applications executed on multi-core clusters (the HECToR and Eddie clusters) by reducing the overhead of the I/O and communication subsystems. To address these challenges I propose two strategies:

1. Improve the performance of collective I/O operations by using a dynamic I/O aggregator pattern (see the first sketch below).

2. Improve communication operations by using compression techniques together with the MPI profiling interface (PMPI); see the second sketch below. This strategy is implemented at the application level, and it is completely independent of both the application and the MPI implementation in use (XMPI, MPICH2, Open MPI). This allows us to apply it without modifying the source code of the application or of the MPI implementation.
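As an illustration of the first strategy, the sketch below shows how the number of I/O aggregators used for collective buffering can be chosen at runtime through ROMIO's MPI-IO hints ("cb_nodes" and "romio_cb_write" are standard ROMIO hint names). The one-aggregator-per-four-processes policy is a placeholder assumption, not the selection algorithm presented in the talk.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Placeholder policy (an assumption, not the talk's algorithm):
       roughly one aggregator per node on a quad-core cluster. */
    int aggregators = size / 4 > 0 ? size / 4 : 1;
    char cb_nodes[16];
    snprintf(cb_nodes, sizeof cb_nodes, "%d", aggregators);

    MPI_Info info;
    MPI_Info_create(&info);
    MPI_Info_set(info, "cb_nodes", cb_nodes);        /* number of I/O aggregators */
    MPI_Info_set(info, "romio_cb_write", "enable");  /* force collective buffering */

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "out.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, info, &fh);

    /* Each rank contributes one contiguous block; ROMIO merges the
       requests at the aggregators before they reach the file system. */
    int buf[1024];
    for (int i = 0; i < 1024; i++) buf[i] = rank;
    MPI_Offset offset = (MPI_Offset)rank * sizeof buf;
    MPI_File_write_at_all(fh, offset, buf, 1024, MPI_INT, MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    MPI_Info_free(&info);
    MPI_Finalize();
    return 0;
}

The point of a dynamic aggregator pattern is that cb_nodes is computed at runtime from the job's layout rather than fixed in a system-wide configuration.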
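For the second strategy, here is a minimal sketch of PMPI interposition: the wrapper intercepts MPI_Send, compresses the payload (zlib is an assumed choice of compressor, not necessarily the one used in this work), and reaches the real library routine through the PMPI_Send entry point. A matching MPI_Recv wrapper, not shown, would inflate the data on the other side; error handling is kept minimal.

#include <mpi.h>
#include <zlib.h>
#include <stdlib.h>

int MPI_Send(const void *buf, int count, MPI_Datatype datatype,
             int dest, int tag, MPI_Comm comm)
{
    int type_size;
    MPI_Type_size(datatype, &type_size);
    uLong src_len = (uLong)count * (uLong)type_size;

    /* compressBound() is zlib's worst-case size for the compressed data. */
    uLongf dst_len = compressBound(src_len);
    unsigned char *packed = malloc(dst_len);

    if (packed == NULL || compress(packed, &dst_len, buf, src_len) != Z_OK) {
        free(packed);
        /* Fall back to the real, uncompressed send. */
        return PMPI_Send(buf, count, datatype, dest, tag, comm);
    }

    /* Forward the compressed payload as raw bytes through the real
       library routine; the receive-side wrapper (not shown) would
       restore the original count and datatype after decompression. */
    int err = PMPI_Send(packed, (int)dst_len, MPI_BYTE, dest, tag, comm);
    free(packed);
    return err;
}

Because the wrapper defines MPI_Send and reaches the library through the PMPI_ name, it only needs to be linked (or preloaded) into the program, which is what makes the strategy independent of both the application source and the MPI implementation.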

View Presentation Here

Date and time: Friday, 4 November 2011, 10:30
Length: 60 minutes
Location: IFG07A