
1. INTRODUCTION
1.1 Overview
Multi-level on-chip cache systems have been widely adopted in high-performance
microprocessors. To keep data consistent throughout the memory hierarchy, write-through and
write-back policies are commonly employed. Under the write-back policy, a modified cache
block is copied back to its corresponding lower-level cache only when the
block is about to be replaced. Under the write-through policy, by contrast, all copies of a cache block
are updated immediately after the cache block is modified at the current cache, even if the
block is not about to be evicted. As a result, the write-through policy maintains identical data copies
at all levels of the cache hierarchy throughout most of their lifetime of execution. This feature is
important as CMOS technology is scaled into the nanometre range, where soft errors have
emerged as a major reliability issue in on-chip cache systems.
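
To illustrate the difference between the two policies, the following C sketch contrasts their behaviour on a write hit. The structures and function names are hypothetical and greatly simplified; they are not taken from any particular processor design.

#include <stdint.h>
#include <stdbool.h>

typedef struct {
    uint32_t data;
    bool     dirty;            /* used only by the write-back policy */
} CacheBlock;

typedef struct {
    CacheBlock block;          /* copy of the same block in the lower-level cache */
} LowerLevelCache;

/* Write-back: only the current level is updated; the block is marked dirty
   and copied back to the lower level only when it is about to be replaced. */
void write_back_store(CacheBlock *l1, uint32_t value)
{
    l1->data  = value;
    l1->dirty = true;          /* lower-level copy is stale until eviction */
}

void write_back_evict(CacheBlock *l1, LowerLevelCache *l2)
{
    if (l1->dirty)             /* copy back only on replacement */
        l2->block.data = l1->data;
}

/* Write-through: the lower-level copy is updated immediately on every write,
   so all levels hold identical data for most of the block's lifetime. */
void write_through_store(CacheBlock *l1, LowerLevelCache *l2, uint32_t value)
{
    l1->data       = value;
    l2->block.data = value;    /* copies stay consistent at all times */
}

In the write-through case the lower-level copy never goes stale, which is exactly the property exploited for soft-error tolerance discussed next.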
At the architecture level, an effective solution is to keep data consistent among different
levels of the memory hierarchy to prevent the system from collapsing due to soft errors. Benefiting
from immediate updates, the cache write-through policy is inherently tolerant to soft errors because
the data at all related levels of the cache hierarchy are always kept consistent. Due to this feature,
many high-performance microprocessor designs have adopted the write-through policy.
In this paper, we propose a new cache architecture, referred to as way-tagged cache, to
improve the energy efficiency of write-through cache systems with minimal area overhead and
no performance degradation. Consider a two-level cache hierarchy, where the L1 data cache is
write-through and the L2 cache is inclusive for high performance.
It is observed that all the data residing in the L1 cache will have copies in the L2 cache.
In addition, the locations of these copies in the L2 cache will not change until they are evicted
from the L2 cache. Thus, we can attach a tag to each way in the L2 cache and send this tag
information to the L1 cache when the data is loaded to the L1 cache. By doing so, for all the
data in the L1 cache, we will know exactly the locations (i.e., ways) of
their copies in the L2 cache. During subsequent accesses, when there is a write hit in the L1
cache (which also initiates a write access to the L2 cache under the write-through
policy), we can access the L2 cache in an equivalent direct-mapping manner because the way tag
of the data copy in the L2 cache is available. As this operation accounts for the majority of L2
cache accesses in most applications, the energy consumption of the L2 cache can be reduced
significantly.
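
As a rough illustration of this mechanism, the following C sketch shows an L1 line carrying the way tag recorded at fill time, and a write-through update that then activates only the tagged L2 way. The data structures and names (e.g., l2_way) are assumptions made for illustration, not the actual VLSI architecture described later in the report.

#include <stdint.h>
#include <stdbool.h>

#define L2_WAYS 4

typedef struct {
    uint32_t tag;
    bool     valid;
    uint8_t  l2_way;           /* way tag: which L2 way holds this block's copy */
} L1Line;

typedef struct {
    uint32_t tag;
    bool     valid;
    uint32_t data;
} L2Line;

/* On an L1 fill, the L2 reports which way the block came from, and the L1
   stores that way tag alongside the line. */
void l1_fill(L1Line *l1_line, uint32_t tag, uint8_t l2_way)
{
    l1_line->tag    = tag;
    l1_line->valid  = true;
    l1_line->l2_way = l2_way;
}

/* On an L1 write hit, the write-through update to the L2 enables only the
   tagged way, so the L2 behaves like a direct-mapped cache for this access. */
void l2_write_through(L2Line l2_set[L2_WAYS], const L1Line *l1_line, uint32_t data)
{
    L2Line *line = &l2_set[l1_line->l2_way];   /* single way activated */
    line->data = data;                         /* no search over all L2 ways */
}

Because the way tag identifies the location of the copy directly, the reads of the other L2 ways can be avoided for these accesses, which is where the energy saving comes from.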

1.2 Objective

The objective of this seminar is to improve the energy efficiency of write-through cache
systems with minimal area overhead and no performance degradation, by means of a new cache
architecture referred to as the way-tagged cache.

1.3 Organization of the report

This report is organized as follows. In section 3, we provide a review of related low-
power cache design techniques. In section 5, we present the proposed way-tagged cache. In
section 6, we discuss the detailed VLSI architecture of the way-tagged cache. Section 7 extends
the idea of way tagging to existing cache design techniques to further improve energy efficiency.
The conclusion is provided in section 8.













2. CACHE MEMORY

2.1 Basic cache structure
Processors are generally able to perform operations on operands faster than the access
time of large-capacity main memory. Although semiconductor memory that can operate at
speeds comparable to that of the processor exists, it is not economical to build all of
the main memory from such very high speed semiconductor memory. The problem can be alleviated
by introducing a small block of high-speed memory, called a cache, between the main memory
and the processor.
The idea of cache memories is similar to virtual memory in that some active portion of a
low-speed memory is stored in duplicate in a higher-speed cache memory. When a memory
request is generated, the request is first presented to the cache memory, and if the cache cannot
respond, the request is then presented to main memory.
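
The following C sketch models this access order for a simple direct-mapped cache; the sizes and names are illustrative only and are not part of the report's design.

#include <stdint.h>
#include <stdbool.h>

#define CACHE_LINES 64
#define MEM_WORDS   4096

typedef struct {
    bool     valid;
    uint32_t tag;
    uint32_t data;
} CacheLine;

static CacheLine cache[CACHE_LINES];
static uint32_t  main_memory[MEM_WORDS];

uint32_t read_word(uint32_t addr)
{
    uint32_t index = addr % CACHE_LINES;
    uint32_t tag   = addr / CACHE_LINES;
    CacheLine *line = &cache[index];

    if (line->valid && line->tag == tag)       /* cache responds: fast path */
        return line->data;

    /* cache cannot respond: present the request to main memory and keep a
       duplicate of the word in the cache for later references */
    uint32_t value = main_memory[addr % MEM_WORDS];
    line->valid = true;
    line->tag   = tag;
    line->data  = value;
    return value;
}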
The difference between cache and virtual memory is a matter of implementation; the two
notions are conceptually the same because they both rely on the correlation properties observed
in sequences of address references. Cache implementations, however, are quite different from virtual
memory implementations because of the speed requirements of the cache.
We define a cache miss to be a reference to an item that is not resident in the cache, but is
resident in main memory. The corresponding concept for virtual memory is the page fault, which is
defined to be a reference to a page in virtual memory that is not resident in main memory. For
cache misses, the fast memory is the cache and the slow memory is main memory. For page faults,
the fast memory is main memory, and the slow memory is auxiliary memory.
2.2 Fully associative mapping
Perhaps the most obvious way of relating cached data to the main memory address is to
store both the memory address and the data together in the cache. This is the fully associative
mapping approach. A fully associative cache requires the cache to be composed of associative
memory holding both the memory address and the data for each cached line.
The incoming memory address is simultaneously compared with all stored addresses
using the internal logic of the associative memory, as shown in Fig. 3. If a match is found, the
corresponding data is read out.
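
A software model of this lookup is sketched below. In hardware the comparison is performed in parallel by the associative memory; the loop here merely models that check. All names and sizes are illustrative assumptions.

#include <stdint.h>
#include <stdbool.h>

#define NUM_LINES 8

typedef struct {
    bool     valid;
    uint32_t address;          /* full memory address stored with the data */
    uint32_t data;
} AssocLine;

static AssocLine lines[NUM_LINES];

bool assoc_lookup(uint32_t address, uint32_t *data_out)
{
    for (int i = 0; i < NUM_LINES; i++) {      /* done in parallel in hardware */
        if (lines[i].valid && lines[i].address == address) {
            *data_out = lines[i].data;         /* match found: read data out */
            return true;
        }
    }
    return false;                              /* no match: cache miss */
}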
