MIT researchers have found a way to enhance multicore processor performance through software-based cache memory management. Today's processor chips manage reads and writes to on-chip cache memory entirely in hardware. In a multicore environment, delays that arise when multiple processor cores access the cache at the same time hurt overall computing performance.
Daniel Sanchez, an assistant professor in MIT’s Department of Electrical Engineering and Computer Science, and his student Nathan Beckmann presented a new system, called Jigsaw, that monitors the computations being performed by a multicore chip and manages cache memory accordingly. They call this technique software-based cache memory management.
In simulations running hundreds of applications on 16- and 64-core chips, Sanchez and Beckmann report that Jigsaw sped up execution by 18 percent on average (and by more in some cases) while reducing energy consumption by up to 72 percent. They also observed that the performance benefit grows as the number of cores on the chip increases.
In multicore processor chips, each core has several small, private caches, as well as a "last-level cache" that is shared by all the cores. “That cache is on the order of 40 to 60 percent of the chip,” Sanchez says. “It is a significant fraction of the area because it’s so crucial to performance. If we didn’t have that cache, some applications would be an order of magnitude slower.”
The last-level cache is split into multiple memory banks distributed across the chip. The researchers' idea is to give each processor core access to the nearest bank whenever possible: accessing a nearby bank takes less time and consumes less energy than accessing one farther away.
Instead of letting data land in memory banks effectively at random, Jigsaw monitors which cores are accessing which data most frequently and, on the fly, calculates the most efficient assignment of data to cache banks. For example, data used exclusively by a single core is stored near that core, whereas data that all the cores access with equal frequency is stored near the center of the chip, minimizing the average distance it has to travel.
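The placement idea above can be sketched as a small optimization: assign each piece of data to the bank that minimizes the access-frequency-weighted distance summed over all cores. This is a hypothetical, simplified model for illustration only; Jigsaw's actual algorithm is far more sophisticated, and all names and numbers here are invented.

```python
def place_data(access_freq, distance):
    """Assign each data item to the cache bank that minimizes total
    frequency-weighted access distance (a toy model of Jigsaw's idea).

    access_freq[core][item] -- how often each core touches each item
    distance[core][bank]    -- distance from each core to each bank
    Returns a dict mapping item index -> chosen bank index.
    """
    n_cores = len(distance)
    n_banks = len(distance[0])
    n_items = len(access_freq[0])
    assignment = {}
    for item in range(n_items):
        # Cost of storing this item in each candidate bank: the sum,
        # over all cores, of (accesses by that core) * (distance).
        cost = [sum(access_freq[c][item] * distance[c][b]
                    for c in range(n_cores))
                for b in range(n_banks)]
        assignment[item] = min(range(n_banks), key=cost.__getitem__)
    return assignment

# Two cores at opposite ends of a chip, three banks: bank 0 is near
# core 0, bank 1 is in the center, bank 2 is near core 1.
freq = [[10, 5],   # core 0 uses item 0 exclusively, item 1 sometimes
        [0, 5]]    # core 1 only uses item 1
dist = [[1, 2, 4],
        [4, 2, 1]]
print(place_data(freq, dist))  # → {0: 0, 1: 1}
```

In this toy example the exclusively-used item 0 lands in the bank nearest core 0, while the equally shared item 1 lands in the central bank, matching the behavior described above.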
Jigsaw also varies the amount of cache space allocated to each type of data, giving more space to data that is reused frequently.
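The intuition behind that capacity decision can be sketched as dividing the cache among data types in proportion to how often each is reused. This is only an illustrative heuristic with invented names; the researchers' actual policy is not described at this level of detail in the article.

```python
def allocate_capacity(reuse_counts, total_ways):
    """Split total_ways cache ways among data types in proportion to
    their reuse counts (illustrative sketch, not Jigsaw's real policy).
    Note: rounding means some ways may go unassigned."""
    total = sum(reuse_counts.values())
    return {name: round(total_ways * count / total)
            for name, count in reuse_counts.items()}

# Frequently reused ("hot") data gets most of a 16-way cache.
print(allocate_capacity({"hot": 900, "warm": 80, "cold": 20}, 16))
# → {'hot': 14, 'warm': 1, 'cold': 0}
```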
The algorithm developed by Sanchez and Beckmann adapts to the number of cores on the chip and to the kinds of data being accessed.
The technique observes the chip's activity over a time window and adjusts the cache configuration based on that activity. The researchers assume that programs will behave over the next 20 milliseconds the way they did over the last 20 milliseconds. “But there’s very strong experimental evidence that programs typically have stable phases of hundreds of milliseconds, or even seconds,” Sanchez says.
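That interval-based assumption can be sketched as follows: at the end of each 20-millisecond window, the counts measured in the window that just finished become the prediction used to recompute the data placement for the next window. The class and method names below are hypothetical, chosen only to illustrate the idea.

```python
import collections

class IntervalPredictor:
    """Toy model of phase-based prediction: the next interval is
    predicted to look like the last one (all names hypothetical)."""

    def __init__(self):
        self.last = collections.Counter()     # finished interval's counts
        self.current = collections.Counter()  # counts being gathered now

    def record(self, core, item):
        # Called on each cache access during the current interval.
        self.current[(core, item)] += 1

    def end_interval(self):
        # At each 20 ms boundary, the finished interval's counts become
        # the prediction used to recompute the data-to-bank assignment.
        self.last, self.current = self.current, collections.Counter()
        return self.last

p = IntervalPredictor()
p.record(0, "A"); p.record(0, "A"); p.record(1, "B")
pred = p.end_interval()
print(pred[(0, "A")])  # → 2 predicted accesses in the next interval
```

Because program phases typically last hundreds of milliseconds or more, a 20-millisecond window of history is usually an accurate predictor of the next window.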