A Multi-Level Cache Model for Run-Time Optimization of Remote Visualization
September/October 2007 (vol. 13 no. 5)
pp. 991-1003
Remote visualization is an enabling technology that aims to overcome the barrier of physical distance. While many researchers have developed innovative algorithms for remote visualization, previous work has rarely investigated optimal configurations of remote visualization architectures in a systematic way. In this paper, we study caching and prefetching, an important aspect of such architecture design, in order to optimize the fetch time in a remote visualization system. Unlike a processor cache or web cache, a cache for remote visualization is unique and complex. Through actual experimentation and numerical simulation, we have discovered ways to systematically evaluate and search for optimal configurations of remote visualization caches under various scenarios, such as different network speeds, sizes of data for user requests, prefetch schemes, and cache depletion schemes. We have also designed practical infrastructure software that adaptively optimizes the caching architecture of general remote visualization systems when a different application is started or the network condition varies. The lower bound of achievable latency discovered with our approach can aid the design of remote visualization algorithms and the selection of suitable network layouts for a remote visualization system.
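To illustrate the kind of trade-off the abstract describes, the sketch below computes the expected fetch time of a hypothetical two-level cache (client-side and server-side) in front of a remote data source. This is not the paper's actual model; the hit rates and per-level service times are invented parameters, chosen only to show how network speed shifts the cost of a cache miss.

```python
# Hypothetical illustration (not the paper's model): expected fetch time
# for a two-level remote-visualization cache. A request is served by the
# client cache, the server cache, or the remote source, in that order.

def expected_fetch_time(h_client, h_server, t_client, t_server, t_remote):
    """Expected time to satisfy one fetch request.

    h_client / h_server: hit rates at the client and server caches.
    t_client / t_server / t_remote: time (seconds) to serve a request
    from each level. All values here are made-up example parameters.
    """
    miss_client = 1.0 - h_client
    return (h_client * t_client
            + miss_client * h_server * t_server
            + miss_client * (1.0 - h_server) * t_remote)

# A faster network shrinks the penalty of a full cache miss, which in
# turn changes which cache configuration is optimal -- the kind of
# run-time adaptation the paper's infrastructure targets.
slow_net = expected_fetch_time(0.3, 0.5, 0.001, 0.050, 1.200)
fast_net = expected_fetch_time(0.3, 0.5, 0.001, 0.050, 0.200)
```

Sweeping the hit rates (which depend on cache size and prefetch scheme) against measured network times is one simple way to compare candidate cache configurations numerically.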
Index Terms:
Remote visualization, distributed visualization, performance analysis, caching
Citation:
Robert Sisneros, Chad Jones, Jian Huang, Jinzhu Gao, Byung-Hoon Park, Nagiza Samatova, "A Multi-Level Cache Model for Run-Time Optimization of Remote Visualization," IEEE Transactions on Visualization and Computer Graphics, vol. 13, no. 5, pp. 991-1003, Sept.-Oct. 2007, doi:10.1109/TVCG.2007.1046