Efficient and correct execution of parallel programs that share memory

Dennis Shasha, Marc Snir

Research output: Contribution to journal › Article

Abstract

In this paper we consider an optimization problem that arises in the execution of parallel programs on shared-memory multiple-instruction-stream, multiple-data-stream (MIMD) computers. A program on such machines consists of many sequential program segments, each executed by a single processor. These segments interact as they access shared variables. Access to memory is asynchronous, and memory accesses are not necessarily executed in the order they were issued. An execution is correct if it is sequentially consistent: It should seem as if all the instructions were executed sequentially, in an order obtained by interleaving the instruction streams of the processors. Our work has implications for the design of multiprocessors; it offers new compiler optimization techniques for parallel languages that support shared variables.
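The abstract's definition of sequential consistency — an execution must look like some interleaving of the processors' instruction streams — can be illustrated with the classic store-buffering litmus test. The sketch below is an illustrative example, not code from the paper: it enumerates every program-order-preserving interleaving of two tiny threads and shows that the outcome `r1 == r2 == 0` never arises under sequentially consistent semantics.

```python
from itertools import chain

# Hypothetical litmus test (not from the paper):
#   Thread A: x = 1; r1 = y        Thread B: y = 1; r2 = x
# Under sequential consistency, (r1, r2) == (0, 0) is impossible: whichever
# load runs last must observe at least one completed store.

A = [("store", "x"), ("load", "y", "r1")]
B = [("store", "y"), ("load", "x", "r2")]

def interleavings(a, b):
    # Yield every merge of a and b that preserves each thread's program order.
    if not a:
        yield list(b); return
    if not b:
        yield list(a); return
    for rest in interleavings(a[1:], b):
        yield [a[0]] + rest
    for rest in interleavings(a, b[1:]):
        yield [b[0]] + rest

def run(schedule):
    # Execute one interleaving atomically, instruction by instruction.
    mem = {"x": 0, "y": 0}
    regs = {}
    for op in schedule:
        if op[0] == "store":
            mem[op[1]] = 1
        else:
            regs[op[2]] = mem[op[1]]
    return regs["r1"], regs["r2"]

outcomes = {run(s) for s in interleavings(A, B)}
print(sorted(outcomes))  # (0, 0) is absent: sequential consistency forbids it
```

On real hardware with asynchronous, reordered memory accesses — the setting the paper addresses — both loads can in fact return 0, which is exactly the kind of non-sequentially-consistent behavior the paper's analysis is designed to rule out cheaply.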

Original language: English (US)
Pages (from-to): 282-312
Number of pages: 31
Journal: ACM Transactions on Programming Languages and Systems
Volume: 10
Issue number: 2
DOIs: 10.1145/42190.42277
State: Published - Apr 1988


ASJC Scopus subject areas

  • Computer Graphics and Computer-Aided Design
  • Software

Cite this

Efficient and correct execution of parallel programs that share memory. / Shasha, Dennis; Snir, Marc.

In: ACM Transactions on Programming Languages and Systems, Vol. 10, No. 2, 04.1988, p. 282-312.

Research output: Contribution to journal › Article

@article{67da50fca50346a9b7021a1cd1d03088,
title = "Efficient and correct execution of parallel programs that share memory",
abstract = "In this paper we consider an optimization problem that arises in the execution of parallel programs on shared-memory multiple-instruction-stream, multiple-data-stream (MIMD) computers. A program on such machines consists of many sequential program segments, each executed by a single processor. These segments interact as they access shared variables. Access to memory is asynchronous, and memory accesses are not necessarily executed in the order they were issued. An execution is correct if it is sequentially consistent: It should seem as if all the instructions were executed sequentially, in an order obtained by interleaving the instruction streams of the processors. Our work has implications for the design of multiprocessors; it offers new compiler optimization techniques for parallel languages that support shared variables.",
author = "Dennis Shasha and Marc Snir",
year = "1988",
month = apr,
doi = "10.1145/42190.42277",
language = "English (US)",
volume = "10",
pages = "282--312",
journal = "ACM Transactions on Programming Languages and Systems",
issn = "0164-0925",
publisher = "Association for Computing Machinery (ACM)",
number = "2",
}

TY  - JOUR
T1  - Efficient and correct execution of parallel programs that share memory
AU  - Shasha, Dennis
AU  - Snir, Marc
PY  - 1988/4
Y1  - 1988/4
N2  - In this paper we consider an optimization problem that arises in the execution of parallel programs on shared-memory multiple-instruction-stream, multiple-data-stream (MIMD) computers. A program on such machines consists of many sequential program segments, each executed by a single processor. These segments interact as they access shared variables. Access to memory is asynchronous, and memory accesses are not necessarily executed in the order they were issued. An execution is correct if it is sequentially consistent: It should seem as if all the instructions were executed sequentially, in an order obtained by interleaving the instruction streams of the processors. Our work has implications for the design of multiprocessors; it offers new compiler optimization techniques for parallel languages that support shared variables.
AB  - In this paper we consider an optimization problem that arises in the execution of parallel programs on shared-memory multiple-instruction-stream, multiple-data-stream (MIMD) computers. A program on such machines consists of many sequential program segments, each executed by a single processor. These segments interact as they access shared variables. Access to memory is asynchronous, and memory accesses are not necessarily executed in the order they were issued. An execution is correct if it is sequentially consistent: It should seem as if all the instructions were executed sequentially, in an order obtained by interleaving the instruction streams of the processors. Our work has implications for the design of multiprocessors; it offers new compiler optimization techniques for parallel languages that support shared variables.
UR  - http://www.scopus.com/inward/record.url?scp=0023994389&partnerID=8YFLogxK
UR  - http://www.scopus.com/inward/citedby.url?scp=0023994389&partnerID=8YFLogxK
U2  - 10.1145/42190.42277
DO  - 10.1145/42190.42277
M3  - Article
AN  - SCOPUS:0023994389
VL  - 10
SP  - 282
EP  - 312
JO  - ACM Transactions on Programming Languages and Systems
JF  - ACM Transactions on Programming Languages and Systems
SN  - 0164-0925
IS  - 2
ER  -