If you haven’t heard of the “memory wall” yet, you probably will soon. First theorized in 1994 by Wulf and McKee, the concept is that central processing units (CPUs) are improving so much faster than memory (RAM) that memory will effectively stand still by comparison. This isn’t going to happen immediately, but if current trends in CPU and RAM performance continue, we could hit the memory wall in the near future.
One study found that CPU speed increased at an average rate of 55% per year from 1986 to 2000, whereas RAM speed increased by just 10% per year. A later study, however, projected a maximum annual increase of only 12.5% in CPU performance from 2000 to 2014; that study was based on Intel processors that were slower than the Core 2 Duo line.
According to Moore’s Law, which states that the number of transistors on a chip doubles roughly every two years, CPUs will eventually become so fast that further gains yield no noticeable difference in computing speed. Once we reach this so-called memory wall, program and app execution time will depend almost entirely on how quickly RAM can deliver data to the CPU. So even if you have an incredibly fast processor in your computer, its performance may be limited by the speed of your RAM.
Intel described this phenomenon back in 2005, saying: “First of all, as chip geometries shrink and clock frequencies rise, the transistor leakage current increases, leading to excess power consumption and heat… Secondly, the advantages of higher clock speeds are in part negated by memory latency, since memory access times have not been able to keep pace with increasing clock frequencies.”
The processor chip maker also noted that certain applications become less efficient as processors continue to evolve – a consequence of the von Neumann bottleneck, in which the shared pathway between the CPU and memory limits throughput. This effectively reduces the gains that clock-frequency increases would otherwise achieve. What’s more, signal-transmission delays continue to grow as feature sizes shrink, further compounding the bottleneck.
But there are some solutions available to combat the memory wall, one of which is caching. By placing small amounts of high-speed memory in multiple levels of cache between the CPU and RAM, computers can bridge the gap between the two. So, just how much of a difference can this make? Some experts suggest caching can offset as much as 53% of the gap between the growth rate of processor speed and the growth rate of RAM speed.
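The effect of cache-friendly memory access is easy to see for yourself. The sketch below (plain Python, chosen for readability) times two traversals of the same matrix: one that walks memory in the order it is laid out, and one that strides across rows on every access. Note one caveat: in an interpreted language, part of the measured gap comes from indexing overhead rather than cache misses alone, so this is an illustration of the idea, not a precise cache benchmark.

```python
import time

# A large square list-of-lists; big enough that a full
# traversal cannot stay resident in the CPU caches.
N = 2000
matrix = [[1] * N for _ in range(N)]

def sum_row_major(m):
    # Walks each row in order, so consecutive accesses touch
    # neighboring memory and reuse the same cache lines.
    total = 0
    for row in m:
        for x in row:
            total += x
    return total

def sum_col_major(m):
    # Walks down each column, jumping between rows on every
    # access, which defeats cache-line reuse and leans on
    # slower main-memory fetches instead.
    total = 0
    for j in range(len(m[0])):
        for i in range(len(m)):
            total += m[i][j]
    return total

start = time.perf_counter()
row_sum = sum_row_major(matrix)
row_time = time.perf_counter() - start

start = time.perf_counter()
col_sum = sum_col_major(matrix)
col_time = time.perf_counter() - start

print(f"row-major: {row_time:.3f}s  column-major: {col_time:.3f}s")
```

Both functions compute the same sum; the only difference is the order in which memory is visited, which is exactly the property multi-level caches are designed to reward.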