Issue No. 06 - June 2001 (vol. 50)
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/12.931896
<p><b>Abstract</b>—Two VLSI architectures for the computationally efficient implementation of the elementary 3D geometrical transformations are introduced. The first is based on a single floating-point multiply/add unit, while the second comprises a four-processing-element vector unit. The elementary transformation matrices are sparse, many of their elements being ones and zeros; by exploiting this structure, the proposed architectures avoid full-matrix multiplication in the computation of the transformation matrix. Instead, each matrix product is treated as an update of specific elements, whose new values are obtained by scalar operations in the single-processor architecture or by simple vector operations in the processor array. The floating-point operation count and the number of memory accesses required by a transformation are thus reduced, improving the execution time of the circuit that computes the transformation matrix at minimal hardware cost. Furthermore, a circuit is proposed which, for each sequence of transformations, selects the most appropriate direction for computing the product of the matrices in the corresponding stack of transformation matrices, further reducing the number of floating-point operations compared to the case where the direction of computation is predetermined. The proposed single-processor architecture is suitable for low-cost applications, while the parallel execution scheme of the introduced processor array may be adopted by any four-PE processor with small overhead.</p>
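The operation-saving idea the abstract describes can be illustrated with a minimal sketch (a hypothetical illustration of the general technique, not the paper's actual circuit or algorithm): because a homogeneous translation matrix is mostly ones and zeros, composing it into an accumulated affine 4x4 transform amounts to updating three elements with three scalar additions, rather than performing the 64 multiplications and 48 additions of a full 4x4 matrix product.

```python
def full_matmul(a, b):
    """Plain 4x4 matrix product: 64 multiplies, 48 additions."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translate(tx, ty, tz):
    """Homogeneous 4x4 translation matrix T."""
    return [[1, 0, 0, tx],
            [0, 1, 0, ty],
            [0, 0, 1, tz],
            [0, 0, 0, 1]]

def apply_translation(m, tx, ty, tz):
    """Update-based computation of T*m for an affine m whose bottom
    row is [0, 0, 0, 1]: only three elements of the last column
    change, each by one scalar addition."""
    out = [row[:] for row in m]
    out[0][3] += tx
    out[1][3] += ty
    out[2][3] += tz
    return out

# An accumulated affine transform (bottom row [0, 0, 0, 1]).
m = [[2, 0, 0, 5],
     [0, 3, 0, -1],
     [0, 0, 1, 4],
     [0, 0, 0, 1]]

# The 3-addition update yields the same matrix as the full product.
assert apply_translation(m, 7, -2, 3) == full_matmul(translate(7, -2, 3), m)
```

Analogous element-wise updates exist for scaling (three multiplications on the diagonal rows), which is the kind of structure exploitation the abstract attributes to both proposed architectures.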
Elementary geometrical transformations, VLSI architecture, graphics processor, vector unit.
Konstantina Karagianni, Vassilis Paliouras, George Diamantakos, Thanos Stouraitis, "Operation-Saving VLSI Architectures for 3D Geometrical Transformations", IEEE Transactions on Computers, vol.50, no. 6, pp. 609-622, June 2001, doi:10.1109/12.931896