CS 315-02 Lecture/Lab — Meeting Summary (Fall 2025)

Date: October 2, 2025
Time: 02:51 PM Pacific
Meeting ID: 868 6589 0521

Quick Recap

  • Greg shared debugging strategies for Project 4, emphasizing incremental development and use of tools like GDB to inspect execution.
  • He introduced cache memory fundamentals: cache types, address mapping, tags, and valid bits.
  • The session covered core cache concepts and implementation details, including spatial locality, block sizes, and mapping strategies.
  • The class reviewed starter code and pseudocode for direct-mapped, set-associative, and fully associative caches.

Next Steps

  • Students: Implement a direct-mapped cache with a block size of 4 words by extending the provided starter code.
  • Students: Implement a set-associative cache, including correct slot selection logic and support for block sizes.
  • Students: Complete the LRU replacement function for the set-associative cache.
  • Greg: Post the recordings from the previous day and today by this evening.
  • Students: Review the cache memory guide for deeper coverage of the discussed concepts.

Summary

Debugging Tips for Project 4

  • Emphasis on incremental implementation and frequent testing.
  • Use of GDB for detailed inspection:
      ◦ Set breakpoints, step through execution, and inspect instruction words and register values.
      ◦ Employ GDB commands to trace behavior and confirm assumptions.
  • Add custom printf logging to observe control flow and state changes.
  • Reinforcement of a hands-on, iterative approach to understanding code execution.
  • Brief continuation of the prior day’s cache implementation discussion.

Cache Memory Simulation Basics

  • Project 4 simulates cache requests for instruction memory only (not data memory).
  • Overview of three cache types, starting with the simplest: the direct-mapped cache.
  • Direct-mapped cache tracks data, valid bits, and tags to uniquely identify addresses.

Direct-Mapped Cache Fundamentals

  • Addresses map to cache slots using word addressing and modular arithmetic.
  • Byte addresses are converted to word addresses; specific bits determine the slot index.
  • Tags and valid bits enable hit/miss detection.
  • Emphasis on the performance importance of caches and understanding underlying algorithms.

Cache Slots and Memory Addressing

  • Relationship between cache size and address breakdown (tag/index):
      ◦ More cache slots reduce contention and shrink the tag (fewer addresses collide per slot).
  • Spatial locality: programs tend to access neighboring instructions/data; caches exploit this behavior.

Cache Organization and Access

  • Exploit spatial locality by fetching and storing multiple words per access (blocks).
  • Memory is divided into blocks (each containing multiple words).
  • Cache structures store these blocks and must track:
      ◦ Block index (which block),
      ◦ Word offset (which word within the block),
      ◦ Tag and valid bits.

Slot Indexing and Cache Management

  • Slot index and word offset are derived from specific bits in the address:
      ◦ Block index selects the slot (or set).
      ◦ Word offset selects the word within the block.
  • On a miss:
      ◦ Read the entire block (multiple words) from memory.
      ◦ Compute the block’s starting address to fetch aligned data.
  • Example workflows demonstrate computing the byte address of a word within a block.

Cache Implementation and Addressing

  • Calculations for word and byte addresses can be done via arithmetic or bitwise methods.
  • Starter code notes:
      ◦ A lean initialization function still sets up the required structures.
      ◦ Configurable as direct-mapped or set-associative.
  • Constraints: up to 4096 slots; maximum block size of 4 words.

Cache Mapping and Replacement Algorithms

  • Mapping strategies:
      ◦ Direct-mapped: each address maps to exactly one slot.
      ◦ Fully associative: any block can go in any slot; requires a replacement policy.
      ◦ Set-associative: addresses map to a set; placement within the set is flexible.
  • Implementation focus:
      ◦ Direct-mapped cache with 4-word blocks.
      ◦ LRU (Least Recently Used) replacement for fully associative and set-associative caches.
  • Pseudocode discussed for set-associative lookup/insert:
      ◦ On hit: update recency.
      ◦ On miss: select the least recently used slot in the set for replacement, then insert and update metadata.