### Abstract

To exploit parallelism on shared memory parallel computers (SMPCs), it is natural to focus on decomposing the computation, mainly by distributing the iterations of nested Do-loops. In contrast, on distributed memory parallel computers (DMPCs), the decomposition of computation and the distribution of data must both be handled, in order to balance the computation load and to minimize the migration of data. We propose and experimentally validate a method for handling computations and data synergistically to minimize the overall execution time on DMPCs. The method is based on a number of novel techniques, also presented in this article. The core idea is to rank the "importance" of the data arrays in a program and to designate some of them as dominant. The intuition is that the dominant arrays are the ones whose migration would be the most expensive. Using the correspondence between iteration space mapping vectors and the distributed dimensions of the dominant data array in each nested Do-loop, we design algorithms that determine data and computation decompositions at the same time. Given a data distribution, the computation decomposition for each nested Do-loop is determined by either the "owner computes" rule or the "owner stores" rule with respect to the dominant data array. If all temporal dependence relations across iteration partitions are regular, we use tiling to allow pipelining and the overlapping of computation and communication. However, in order to use tiling on DMPCs, we needed to extend the existing techniques for determining tiling vectors and tile sizes, as they were originally suited only for SMPCs. The overall method is illustrated on programs for the 2D heat equation, for Gaussian elimination with pivoting, and for the 2D fast Fourier transform, on both a linear processor array and a 2D processor grid.
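The interplay of block data distribution and the "owner computes" rule described above can be sketched as follows. This is a minimal illustrative example, not code from the paper: it uses a 1D stencil (a simplified analogue of the paper's 2D heat-equation example), and the names `owner` and `my_iterations`, along with the array size and processor count, are assumptions chosen for the sketch.

```python
N = 16          # global array length (illustrative)
P = 4           # number of processors (illustrative)
block = N // P  # block size of the data distribution

def owner(i):
    """Processor that owns element i under a block distribution."""
    return i // block

def my_iterations(p):
    """Owner-computes rule: processor p executes exactly those loop
    iterations that write array elements it owns."""
    return [i for i in range(1, N - 1) if owner(i) == p]

# Each processor's share of a stencil update a[i] = (a[i-1] + a[i+1]) / 2.
# Iterations touching a block boundary read one element owned by a
# neighboring processor, which is what induces communication on a DMPC.
for p in range(P):
    print(p, my_iterations(p))
```

Choosing the distributed dimension of the dominant array, as the paper proposes, amounts to choosing which loop dimension is partitioned this way, so that the most expensive array never migrates.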

| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 1-50 |
| Number of pages | 50 |
| Journal | ACM Transactions on Programming Languages and Systems |
| Volume | 24 |
| Issue number | 1 |
| DOIs | https://doi.org/10.1145/509705.509706 |
| State | Published - Jan 2002 |

### Keywords

- Algorithms
- Computation decomposition
- D.3.4 [Programming Languages]: Processors - compilers
- Data alignment
- Data distribution
- Distributed-memory computers
- Dominant data array
- E.1 [Data Structures]: arrays
- Languages
- Optimization

### ASJC Scopus subject areas

- Computer Graphics and Computer-Aided Design
- Software

### Cite this

**Automatic data and computation decomposition on distributed memory parallel computers.** / Lee, Peizong Z.; Kedem, Zvi Meir.

Research output: Contribution to journal › Article

*ACM Transactions on Programming Languages and Systems*, vol. 24, no. 1, pp. 1-50. https://doi.org/10.1145/509705.509706


TY - JOUR

T1 - Automatic data and computation decomposition on distributed memory parallel computers

AU - Lee, Peizong Z.

AU - Kedem, Zvi Meir

PY - 2002/1

Y1 - 2002/1

N2 - To exploit parallelism on shared memory parallel computers (SMPCs), it is natural to focus on decomposing the computation, mainly by distributing the iterations of nested Do-loops. In contrast, on distributed memory parallel computers (DMPCs), the decomposition of computation and the distribution of data must both be handled, in order to balance the computation load and to minimize the migration of data. We propose and experimentally validate a method for handling computations and data synergistically to minimize the overall execution time on DMPCs. The method is based on a number of novel techniques, also presented in this article. The core idea is to rank the "importance" of the data arrays in a program and to designate some of them as dominant. The intuition is that the dominant arrays are the ones whose migration would be the most expensive. Using the correspondence between iteration space mapping vectors and the distributed dimensions of the dominant data array in each nested Do-loop, we design algorithms that determine data and computation decompositions at the same time. Given a data distribution, the computation decomposition for each nested Do-loop is determined by either the "owner computes" rule or the "owner stores" rule with respect to the dominant data array. If all temporal dependence relations across iteration partitions are regular, we use tiling to allow pipelining and the overlapping of computation and communication. However, in order to use tiling on DMPCs, we needed to extend the existing techniques for determining tiling vectors and tile sizes, as they were originally suited only for SMPCs. The overall method is illustrated on programs for the 2D heat equation, for Gaussian elimination with pivoting, and for the 2D fast Fourier transform, on both a linear processor array and a 2D processor grid.

AB - To exploit parallelism on shared memory parallel computers (SMPCs), it is natural to focus on decomposing the computation, mainly by distributing the iterations of nested Do-loops. In contrast, on distributed memory parallel computers (DMPCs), the decomposition of computation and the distribution of data must both be handled, in order to balance the computation load and to minimize the migration of data. We propose and experimentally validate a method for handling computations and data synergistically to minimize the overall execution time on DMPCs. The method is based on a number of novel techniques, also presented in this article. The core idea is to rank the "importance" of the data arrays in a program and to designate some of them as dominant. The intuition is that the dominant arrays are the ones whose migration would be the most expensive. Using the correspondence between iteration space mapping vectors and the distributed dimensions of the dominant data array in each nested Do-loop, we design algorithms that determine data and computation decompositions at the same time. Given a data distribution, the computation decomposition for each nested Do-loop is determined by either the "owner computes" rule or the "owner stores" rule with respect to the dominant data array. If all temporal dependence relations across iteration partitions are regular, we use tiling to allow pipelining and the overlapping of computation and communication. However, in order to use tiling on DMPCs, we needed to extend the existing techniques for determining tiling vectors and tile sizes, as they were originally suited only for SMPCs. The overall method is illustrated on programs for the 2D heat equation, for Gaussian elimination with pivoting, and for the 2D fast Fourier transform, on both a linear processor array and a 2D processor grid.

KW - Algorithms

KW - Computation decomposition

KW - D.3.4 [Programming Languages]: Processors - compilers

KW - Data alignment

KW - Data distribution

KW - Distributed-memory computers

KW - Dominant data array

KW - E.1 [Data Structures]: arrays

KW - Languages

KW - Optimization

UR - http://www.scopus.com/inward/record.url?scp=0040027475&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=0040027475&partnerID=8YFLogxK

U2 - 10.1145/509705.509706

DO - 10.1145/509705.509706

M3 - Article

AN - SCOPUS:0040027475

VL - 24

SP - 1

EP - 50

JO - ACM Transactions on Programming Languages and Systems

JF - ACM Transactions on Programming Languages and Systems

SN - 0164-0925

IS - 1

ER -