A machine learning approach for a scalable, energy-efficient utility-based cache partitioning
Guney, I.A. | Yildiz, A. | Bayindir, I.U. | Serdaroglu, K.C. | Bayik, U. | Kucuk, G.
Conference Object | 2015 | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 9137 LNCS, pp. 409-421
In multi- and many-core processors, a shared Last Level Cache (LLC) is utilized to alleviate the performance problems resulting from long-latency memory instructions. However, an unmanaged LLC may become quite useless when the running threads have conflicting interests: at one extreme, a thread can benefit from every portion of the cache, while at the other, a thread may simply thrash the whole LLC. Recently, a variety of way-partitioning mechanisms have been introduced to improve cache performance, and almost all of these studies utilize the Utility-based Cache Partitioning (UCP) algorithm as their allocation policy. However, the UCP look-ahead algorithm, although it provides a better utility measure than its greedy counterpart, requires very complex hardware circuitry and dissipates a considerable amount of energy at the end of each decision period. In this study, we propose an offline supervised machine learning algorithm that replaces the UCP look-ahead circuitry with a circuitry of almost negligible hardware and energy cost. Depending on the cache and processor configuration, our thorough analysis and simulation results show that the proposed mechanism reduces the overall transistor count by up to 5% and the overall processor energy by up to 5% without introducing any performance penalty. © Springer International Publishing Switzerland 2015
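For context, the UCP look-ahead allocation policy the abstract refers to can be sketched in software. This is a simplified, hypothetical illustration (not the authors' hardware design): it assumes each core's utility curve (misses avoided as a function of ways allocated) is available, e.g. from UMON-style shadow-tag counters, and repeatedly grants ways to the core with the highest marginal utility per way.

```python
def lookahead_partition(utility, total_ways):
    """Sketch of a look-ahead way-partitioning step (simplified).

    utility[c][w] = misses avoided by core c when granted w ways
    (monotone non-decreasing, with utility[c][0] == 0).
    Returns a list giving the number of ways allocated to each core.
    """
    ncores = len(utility)
    alloc = [0] * ncores          # ways granted to each core so far
    remaining = total_ways
    while remaining > 0:
        best = None               # (marginal utility per way, core, ways)
        for c in range(ncores):
            # Look ahead: consider granting 1..remaining extra ways,
            # and score each candidate by its per-way utility gain.
            for k in range(1, remaining + 1):
                gain = (utility[c][alloc[c] + k] - utility[c][alloc[c]]) / k
                if best is None or gain > best[0]:
                    best = (gain, c, k)
        _, winner, ways = best
        alloc[winner] += ways     # grant the winning block of ways
        remaining -= ways
    return alloc

# Example: one cache-friendly core and one thrashing core sharing 4 ways.
print(lookahead_partition([[0, 10, 18, 24, 28], [0, 2, 4, 6, 8]], 4))
```

The inner look-ahead search over block sizes `k` is what distinguishes this policy from the greedy variant (which only ever considers `k = 1`), and it is exactly this repeated scan of utility counters that makes a direct hardware realization costly at each decision period.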