## Piecewise Regular Arrays: Application-Specific Computations

Application-specific regular array processors have been widely used in signal and image processing, multimedia, and communication systems, for example in data compression and HDTV. One of the main problems of application-specific computing is how to map algorithms into hardware. The major achievement of the theory of regular arrays is that an algorithm, represented as a data dependence graph, is embedded in a Euclidean space, where the integer points are the elementary computations and the dependencies between computations are denoted by vectors between points. The process of mapping an algorithm into hardware is then reduced to finding, for the given Euclidean space, a new coordinate system that can be associated with the physical properties of space and time, the so-called space-time. The power of the synthesis method is that it provides a bridge between "abstract" and "physical" representations of algorithms, thus providing a methodological basis for synthesizing computations in space and in time.

This book extends the existing synthesis theory by exploiting the associativity and commutativity of computations. The practical upshot is a controlled increase in the dimensionality of the Euclidean space representing an algorithm. This increase delivers more degrees of freedom in the choice of the space-time mapping and leads, subsequently, to a wider choice of cost-effective application-specific designs.
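The core idea described above can be sketched in a few lines: index points of an algorithm's index space are mapped by an affine transformation whose first row is a schedule (time) and whose second row is a processor allocation (space). The FIR-filter index space, the dependence vectors, and all variable names below are illustrative assumptions, not taken from the book; the validity and injectivity checks reflect the standard conditions on such mappings.

```python
# Hedged sketch: a space-time mapping of a uniform dependence algorithm.
# Assumed example: a 2-D index space for a FIR filter, with uniform
# dependence vectors (1, 0) (input reuse) and (0, 1) (partial-sum flow).
import numpy as np

N, K = 5, 3  # 5 outputs, 3 filter taps (illustrative sizes)
points = [(i, j) for i in range(N) for j in range(K)]
deps = [(1, 0), (0, 1)]  # uniform dependence vectors

# Space-time transformation T: row 0 is the schedule (time step),
# row 1 is the processor allocation (which PE executes the point).
T = np.array([[1, 1],   # t(i, j) = i + j
              [0, 1]])  # p(i, j) = j

# Validity: every dependence must advance strictly in time.
assert all((T[0] @ np.array(d)) > 0 for d in deps)

# Injectivity: no two computations may share the same (time, processor).
space_time = {p: tuple(T @ np.array(p)) for p in points}
assert len(set(space_time.values())) == len(points)

print(space_time[(2, 1)])  # point (2, 1) runs at time 3 on processor 1
```

Here the allocation row projects the 2-D index space onto a 1-D systolic array of K processors; choosing a different schedule or projection trades latency against the number of processing elements, which is exactly the design freedom the synthesis theory enlarges.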


