2016 International Conference on Big Data and Smart Computing (BigComp) (2016)
Hong Kong, China
Jan. 18, 2016 to Jan. 20, 2016
ISSN: 2375-9356
ISBN: 978-1-4673-8795-8
pp: 329-332
Yasunari Takatsuka , Tokyo Institute of Technology, 2-12-1 Ookayama, Meguro-ku, 152-8552 Japan
Hiroya Nagao , Tokyo Institute of Technology, 2-12-1 Ookayama, Meguro-ku, 152-8552 Japan
Takashi Yaguchi , Tokyo Institute of Technology, 2-12-1 Ookayama, Meguro-ku, 152-8552 Japan
Masatoshi Hanai , Tokyo Institute of Technology, 2-12-1 Ookayama, Meguro-ku, 152-8552 Japan
Kazuyuki Shudo , Tokyo Institute of Technology, 2-12-1 Ookayama, Meguro-ku, 152-8552 Japan
ABSTRACT
Using a cache is an effective way to improve access to a database. However, when data in the cache are not updated or invalidated on writes to the database, the likelihood that cached data differ from the data in the database increases over time. We therefore propose adjusting the balance between data freshness and access performance by switching between using and not using the cache, based on the database load or on measured access performance. We also propose and evaluate a way of implementing this switching.
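The following is a minimal sketch, not taken from the paper, of the kind of switching the abstract describes: reads are served from a cache that is never invalidated on writes while the database load is high, and directly from the database otherwise. The database client interface (get, put, current_load) and the load threshold are assumptions made for illustration.

    class FreshnessAwareCache:
        """Sketch: trade data freshness for access performance by switching
        between cache use and non-use based on database load."""

        def __init__(self, db, load_threshold=0.8):
            self.db = db                      # assumed backing database client with get/put/current_load
            self.cache = {}                   # in-process cache; not updated or invalidated on writes
            self.load_threshold = load_threshold

        def put(self, key, value):
            # Writes go only to the database, so cached copies may grow stale over time.
            self.db.put(key, value)

        def get(self, key):
            if self.db.current_load() >= self.load_threshold and key in self.cache:
                # High load: prefer performance and accept possibly stale cached data.
                return self.cache[key]
            # Low load or cache miss: read fresh data from the database and refresh the cache.
            value = self.db.get(key)
            self.cache[key] = value
            return value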
INDEX TERMS
Switches, Distributed databases, Database systems, Writing, Servers, Design methodology
CITATION

Y. Takatsuka, H. Nagao, T. Yaguchi, M. Hanai and K. Shudo, "A caching mechanism based on data freshness," 2016 International Conference on Big Data and Smart Computing (BigComp), Hong Kong, China, 2016, pp. 329-332.
doi:10.1109/BIGCOMP.2016.7425940