
CS 315-02 Lecture/Lab — Meeting Summary (Fall 2025)

  • Date: Oct 07, 2025
  • Time: 02:50 PM Pacific
  • Meeting ID: 868 6589 0521

Quick Recap

The session prepared students for the upcoming midterm. Greg covered:

  • Key RISC-V concepts: load/store instructions, address calculations, and shift operations
  • Cache fundamentals: direct-mapped and set-associative caches, tag/index/offset, block size, and cache sizes
  • Emulator implementation details relevant to the exam
  • Administrative notes about grading and scheduling

Next Steps

  • Prepare for the Thursday midterm; one-page notes are allowed.
  • Sign up for interactive grading slots when the sign-up sheet is posted in the evening.
  • Arrive on time for interactive grading with code loaded in a terminal.
  • Review past midterms and solutions in the “solutions” section.
  • Review the Fall 2024 midterm for reference.
  • Understand emulator implementation details for the midterm.
  • Follow the RISC-V spec for shift operations in Project 4.
  • Complete Project 4: implement the instruction cache simulation.
  • Rotate graders: students who graded with Greg last time should grade with a TA this time, and vice versa.
  • Read all key concept posts on Campus Wire.

Summary

Midterm Exam Preparation Overview

  • The midterm will be interactive, with two separate sessions (morning and afternoon), each with different problems.
  • Topics emphasized: loads/stores, cache memory, and emulator implementation.
  • Students should understand RISC-V shift operations for both 32-bit and 64-bit instructions and be familiar with the emulator codebase.

Data Loading and Pointer Types

  • Address calculation and memory access were explained for different data sizes: byte, word, and double.
  • Correct pointer types and typecasting are required for different load widths (see the casting sketch after this list).
  • Store instruction formats differ from load instructions; both require careful handling of immediate values and address computation.
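
A minimal sketch of the casting idea, assuming a flat byte-addressed array named mem (a hypothetical stand-in for the course emulator's actual memory structures): the pointer cast selects how many bytes are read or written, and signed types give the sign extension that lb/lw require.

```c
#include <stdint.h>

/* Hypothetical flat emulator memory; the real project's API may differ. */
static uint8_t mem[1024];

int64_t load_byte(uint64_t addr)   {            /* lb: 1 byte, sign-extended */
    return *(int8_t *)&mem[addr];
}
int64_t load_word(uint64_t addr)   {            /* lw: 4 bytes, sign-extended */
    return *(int32_t *)&mem[addr];
}
int64_t load_double(uint64_t addr) {            /* ld: 8 bytes */
    return *(int64_t *)&mem[addr];
}

void store_word(uint64_t addr, int32_t value) { /* sw: cast, then assign */
    *(int32_t *)&mem[addr] = value;
}
```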

RV64I Load–Store Instruction Basics

  • Differences between load and store formats and immediate calculations were reviewed.
  • Target address computation and correct casting for various data widths were demonstrated (immediate decoding is sketched below).
  • Basics of cache organization were introduced, including deriving slot indices from word addresses via modular arithmetic and bit manipulation.
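
A hedged sketch of the immediate decoding behind those format differences; the helper names are illustrative, not the emulator's actual functions. Loads take the I-type immediate from inst[31:20], while stores split the S-type immediate across inst[31:25] and inst[11:7]; either way, the target address is the base register plus the sign-extended 12-bit immediate.

```c
#include <stdint.h>

int64_t i_type_imm(uint32_t inst) {    /* loads: imm[11:0] = inst[31:20] */
    return (int32_t)inst >> 20;        /* arithmetic shift sign-extends */
}

int64_t s_type_imm(uint32_t inst) {    /* stores: immediate split in two fields */
    return ((int32_t)(inst & 0xFE000000) >> 20)  /* imm[11:5] from inst[31:25] */
         | ((inst >> 7) & 0x1F);                 /* imm[4:0]  from inst[11:7]  */
}

uint64_t target_address(uint64_t rs1_value, int64_t imm) {
    return rs1_value + (uint64_t)imm;  /* effective address for the access */
}
```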

Cache Address Mapping Concepts

  • Word-to-byte address conversion: multiply word addresses by 4 to obtain byte addresses.
  • Address mapping to cache slots: addresses with the same slot index map to the same slot and are distinguished by tag bits.
  • Comparison of cache sizes: smaller caches (e.g., 4 slots) have more contention per slot than larger caches (e.g., 16 slots); the mapping arithmetic is sketched below.
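
A small sketch of that mapping arithmetic, assuming a direct-mapped cache of NUM_SLOTS one-word slots (the names and sizes are illustrative):

```c
#include <stdint.h>

#define NUM_SLOTS 4                    /* try 16 to see the index widen */

uint64_t byte_address(uint64_t word_addr) {
    return word_addr * 4;              /* 4 bytes per word */
}

/* Addresses that share a slot index compete for the same slot; the
 * remaining upper bits (the tag) tell them apart. */
uint64_t slot_index(uint64_t word_addr) { return word_addr % NUM_SLOTS; }
uint64_t tag_of(uint64_t word_addr)     { return word_addr / NUM_SLOTS; }
```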

Cache Slot Index Calculation Changes

  • Increasing from 4 to 16 slots increases the slot-index width from 2 bits to 4 bits.
  • Tag width decreases as the index widens: for a fixed address width, more index bits leave fewer bits for the tag.
  • Both algebraic reasoning (byte/word arithmetic) and binary reasoning (bit fields) are important, including for the Project 4 codebase (a width calculation is sketched below).
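
An illustrative check of how the widths trade off, assuming 32-bit word addresses and one-word slots (both assumptions, not lecture specifics):

```c
#include <stdio.h>

int main(void) {
    const int addr_bits = 32;                 /* assumed address width */
    for (int slots = 4; slots <= 16; slots *= 4) {
        int index_bits = 0;
        for (int s = slots; s > 1; s >>= 1)   /* index width = log2(slots) */
            index_bits++;
        printf("%2d slots: index = %d bits, tag = %d bits\n",
               slots, index_bits, addr_bits - index_bits);
    }
    return 0;
}
```

With 4 slots the index is 2 bits and the tag 30 bits; with 16 slots the index grows to 4 bits and the tag shrinks to 28.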

Cache Systems and the Spotty Project

  • Discussion included instruction and data caches, block sizes, and set associativity.
  • Greg referenced work on the Spotty project and plans to analyze related data.
  • Emphasis was placed on computing block starting addresses at the correct boundary to avoid errors.

Technical Discussions on Coding and Cache

  • The session concentrated on cache memory behavior and processor operations, along with coding practices related to these systems.
  • Organizational follow-ups and event registrations were mentioned without final timing details.
  • Personal anecdotes were shared but were tangential to the core technical material.

Understanding L1 iCache Simulation

  • Modern processors typically include multiple cache levels (L1/L2/L3).
  • Project 4 focuses on simulating an L1 instruction cache (iCache) that caches only instruction words to keep scope manageable (one possible layout is sketched after this list).
  • Cache block size trade-offs were discussed (latency vs. spatial locality).
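
One possible layout for such an iCache, sketched under assumed sizes; the real Project 4 structures may differ:

```c
#include <stdint.h>
#include <stdbool.h>

#define NUM_SLOTS       64   /* assumed slot count */
#define WORDS_PER_BLOCK  4   /* assumed block size: larger blocks exploit
                                spatial locality but cost more per miss */

struct cache_slot {
    bool     valid;                    /* has this slot been filled yet?  */
    uint64_t tag;                      /* upper address bits for matching */
    uint32_t block[WORDS_PER_BLOCK];   /* instruction words only          */
};

struct icache {
    struct cache_slot slots[NUM_SLOTS];
};
```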

Cache Block Addressing Concepts

  • Each cache slot can hold a block with multiple words to exploit spatial locality.
  • From a byte address, students should extract three fields (a decoding sketch follows this list):
      ◦ Tag
      ◦ Block (set) index
      ◦ Word offset within the block
  • Cache lookup process:
      ◦ On a hit (valid slot, matching tag): return the word selected by the offset.
      ◦ On a miss: fetch the entire block from memory, place it in the appropriate slot, and then return the requested word.
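
A decoding sketch for those three fields, assuming 16-byte (4-word) blocks and 64 slots; the field widths follow directly from those assumptions:

```c
#include <stdint.h>

#define BLOCK_OFFSET_BITS 4     /* 16 bytes per block -> low 4 bits */
#define INDEX_BITS        6     /* 64 slots -> next 6 bits          */

uint64_t word_offset(uint64_t byte_addr) {
    return (byte_addr >> 2) & 0x3;     /* which of the 4 words in the block */
}
uint64_t slot_index(uint64_t byte_addr) {
    return (byte_addr >> BLOCK_OFFSET_BITS) & ((1u << INDEX_BITS) - 1);
}
uint64_t tag_bits(uint64_t byte_addr) {
    return byte_addr >> (BLOCK_OFFSET_BITS + INDEX_BITS);
}
```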

Cache Block Address Calculation Method

  • To compute a block’s starting word address on a miss:
      ◦ Use modulus and subtraction to align the address down to the block boundary.
      ◦ Read all words in the block from memory and place them in the targeted cache block (see the sketch below).
  • Bitwise methods can achieve the same alignment efficiently.
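
A sketch of the modulus-and-subtraction approach, assuming 4-word blocks and a hypothetical mem_read_word() helper (not the project's real API):

```c
#include <stdint.h>

#define WORDS_PER_BLOCK 4

extern uint32_t mem_read_word(uint64_t word_addr);   /* hypothetical helper */

void fill_block(uint32_t block[WORDS_PER_BLOCK], uint64_t word_addr) {
    /* modulus + subtraction aligns the address down to the block start */
    uint64_t start = word_addr - (word_addr % WORDS_PER_BLOCK);
    for (int i = 0; i < WORDS_PER_BLOCK; i++)
        block[i] = mem_read_word(start + i);         /* copy the whole block */
}
```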

Memory Address Calculation Techniques

  • To find the starting address of a block in a 64-bit address space, either:
      ◦ shift right by the block-offset width, then shift left by the same amount; or
      ◦ mask out the lower offset bits with a hexadecimal mask to zero them.
  • Example (for 16-byte blocks): clear the lower 4 bits to align to the block boundary (both forms are sketched below).
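
Both bitwise forms for that 16-byte example, as a small sketch:

```c
#include <stdint.h>

#define BLOCK_OFFSET_BITS 4   /* 16-byte blocks -> 4 offset bits */

uint64_t align_by_shift(uint64_t addr) {
    /* shifting right then left discards and zeroes the offset bits */
    return (addr >> BLOCK_OFFSET_BITS) << BLOCK_OFFSET_BITS;
}

uint64_t align_by_mask(uint64_t addr) {
    return addr & ~(uint64_t)0xF;   /* clear the low 4 offset bits */
}
```

The two functions return identical results; the mask form simply bakes the shift pair into one AND.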

Set-Associative Cache Concepts

  • Addresses map to sets; each set contains multiple ways (slots).
  • On a miss within a set, use an eviction policy such as LRU (Least Recently Used).
  • Set index and set base calculations determine placement and lookup within the cache (a lookup sketch follows this list).
  • Reminder: grading rotation applies (students switch between Greg and TAs as noted above).
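
A hedged sketch of set-associative lookup with LRU eviction; the set and way counts are assumptions, not the project's actual parameters:

```c
#include <stdint.h>
#include <stdbool.h>

#define NUM_SETS 16
#define NUM_WAYS 4

struct way {
    bool     valid;
    uint64_t tag;
    uint64_t last_used;    /* LRU timestamp; 0 while the way is invalid */
};

struct cache_set { struct way ways[NUM_WAYS]; };

/* On a hit, refresh the LRU timestamp and return the matching way;
 * on a miss, return the least recently used way for the caller to refill. */
struct way *lookup(struct cache_set *set, uint64_t tag,
                   uint64_t now, bool *hit) {
    struct way *victim = &set->ways[0];
    for (int i = 0; i < NUM_WAYS; i++) {
        struct way *w = &set->ways[i];
        if (w->valid && w->tag == tag) {
            w->last_used = now;        /* hit: mark as most recently used */
            *hit = true;
            return w;
        }
        if (w->last_used < victim->last_used)
            victim = w;                /* track the oldest way seen so far */
    }
    *hit = false;                      /* miss: caller evicts and refills */
    return victim;
}
```

Because invalid ways keep last_used at 0, empty slots are chosen as victims before any valid line is evicted.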