Overlays & Virtual Memory in OS

Overlay Memory Management

What are Overlays?

  • Technique for running programs larger than physical memory by swapping code sections in and out of memory
  • The program is manually divided into overlay segments that share the same memory region
  • Example: an encyclopedia divided into volumes - only the volume currently needed stays on the desk

Why Use Overlays?

  1. Run large programs on memory-constrained systems (common from the 1960s through the 1980s)
  2. No hardware memory management unit (MMU) or OS support required
  3. Manual control over memory usage
  4. Still useful today in small embedded systems

Overlay Techniques

1. Static Overlays

  • Fixed structure defined during program development
  • Programmer specifies overlay hierarchy
  • Example:
    // Main overlay manager
    load_overlay("math_functions.ovl");
    result = calculate();
    unload_overlay("math_functions.ovl");
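
How load_overlay() works internally is not shown in these notes; the sketch below is one possible minimal overlay manager in C, assuming each overlay is a flat binary file loaded into a single fixed shared region (the file format, names, and region size are illustrative assumptions):

    /* Minimal overlay manager sketch: all overlays share one fixed region,
     * and loading a new overlay simply overwrites the previous one.
     * The region size and .ovl file layout are assumptions, not from the notes. */
    #include <stdio.h>
    #include <string.h>

    #define OVERLAY_REGION_SIZE (64 * 1024)   /* one region shared by all overlays */

    static unsigned char overlay_region[OVERLAY_REGION_SIZE];
    static char current_overlay[64] = "";     /* name of the overlay now resident */

    int load_overlay(const char *name)
    {
        if (strcmp(current_overlay, name) == 0)
            return 0;                          /* already resident, nothing to do */

        FILE *f = fopen(name, "rb");
        if (!f)
            return -1;                         /* overlay file missing */

        /* The new overlay overwrites whatever occupied the shared region. */
        size_t n = fread(overlay_region, 1, sizeof overlay_region, f);
        fclose(f);

        snprintf(current_overlay, sizeof current_overlay, "%s", name);
        printf("loaded %s (%zu bytes) into the shared overlay region\n", name, n);
        return 0;
    }

    void unload_overlay(const char *name)
    {
        if (strcmp(current_overlay, name) == 0)
            current_overlay[0] = '\0';         /* region is free for the next overlay */
    }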

2. Dynamic Overlays

  • Runtime decision-making for overlay loading
  • Requires overlay supervisor (small resident module)
  • Example: Choose between sorting algorithms based on input size
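
A minimal sketch of that run-time decision, reusing the hypothetical load_overlay() helper sketched above (the overlay names and the size threshold are made up for illustration):

    #include <stddef.h>

    int load_overlay(const char *name);            /* resident overlay supervisor, sketched above */

    /* The supervisor decides at run time which sorting overlay to bring in. */
    void sort_records(int *data, size_t n)
    {
        if (n < 1000)
            load_overlay("insertion_sort.ovl");    /* small input: simple algorithm */
        else
            load_overlay("merge_sort.ovl");        /* large input: O(n log n) algorithm */
        /* ... then call into the routine now resident in the overlay region ... */
        (void)data;
    }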

3. Tree-Structured Overlays

  • Organized as hierarchical tree
  • Parent overlays call child overlays
  • Visual representation:
    Root (Always resident)
    ├─ Math Ops (Overlay 1)
    │  ├─ Trigonometry (Overlay 1A)
    │  └─ Calculus (Overlay 1B)
    └─ I/O Ops (Overlay 2)
       ├─ File Handling (Overlay 2A)
       └─ Network (Overlay 2B)

Example: Text Editor with Overlays

  • Permanent: Core editing functions
  • Overlay 1: Spell check module
  • Overlay 2: Formatting tools
  • Overlay 3: Export/Import converters
  • Only load specific overlay when feature is used

Advantages

  • Enables large programs on small memory systems
  • No special hardware requirements
  • Predictable memory usage

Disadvantages

  • Complex program design: the programmer must plan the overlay structure
  • No memory protection between overlays
  • Manual memory management
  • Poor memory utilization: the shared region must be sized for the largest overlay

Virtual Memory

Core Concept

  • Creates illusion of large contiguous memory using disk storage
  • Key components:
    • Page Table: Maps virtual to physical addresses
    • MMU: Memory Management Unit hardware
    • Swap Space: Disk area for inactive pages
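
A minimal sketch of the translation that the page table enables, assuming 4 KiB pages and a tiny single-level table (page size, table size, and field names are illustrative):

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SIZE   4096u            /* assume 4 KiB pages */
    #define PAGE_SHIFT  12
    #define NUM_PAGES   16               /* toy single-level page table */

    typedef struct {
        uint32_t frame;                  /* physical frame number (PFN) */
        int      valid;                  /* 1 if the page is resident in memory */
    } pte_t;

    static pte_t page_table[NUM_PAGES];

    /* Translate a virtual address; returns 0 and fills *phys on success,
     * -1 on a page fault (no valid mapping). */
    int translate(uint32_t vaddr, uint32_t *phys)
    {
        uint32_t vpn    = vaddr >> PAGE_SHIFT;      /* virtual page number */
        uint32_t offset = vaddr & (PAGE_SIZE - 1);  /* offset within the page */

        if (vpn >= NUM_PAGES || !page_table[vpn].valid)
            return -1;                              /* page fault */

        *phys = (page_table[vpn].frame << PAGE_SHIFT) | offset;
        return 0;
    }

    int main(void)
    {
        page_table[5].frame = 3;         /* VPN 0x5 -> PFN 0x3, as in the TLB example below */
        page_table[5].valid = 1;

        uint32_t phys;
        if (translate(0x5123, &phys) == 0)
            printf("virtual 0x5123 -> physical 0x%x\n", phys);   /* prints 0x3123 */
        return 0;
    }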

Page Fault Handling Process

  1. CPU accesses virtual address
  2. MMU checks TLB (Translation Lookaside Buffer)
  3. If page table entry invalid → Page Fault
  4. OS steps:
     a) Validate the memory reference
     b) Find a free frame (or run page replacement)
     c) Load the page from disk
     d) Update the page table
     e) Resume the faulting instruction

Example Flow:

User Process       OS              Hardware
    |               |                  |
Access Page X → Page Fault → Trap to OS
    |               |←Find Free Frame  |
    |               |←Load Page X      |
    |               | Update Tables →  |
    |←Resume Instruction               |
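
The toy simulation below walks the same path in user space: a reference to a non-resident page triggers fault steps a)-e), with FIFO eviction standing in for page replacement (page counts, frame counts, and the reference trace are illustrative):

    #include <stdio.h>

    #define NUM_PAGES  8
    #define NUM_FRAMES 4

    static int frame_of[NUM_PAGES];        /* page table: -1 means "not in memory" */
    static int page_in_frame[NUM_FRAMES];  /* inverse mapping: -1 means frame free */
    static int next_victim = 0;            /* FIFO pointer for replacement */
    static int disk_reads  = 0;

    static void access_page(int page)
    {
        if (page < 0 || page >= NUM_PAGES) {          /* a) validate the reference */
            printf("invalid reference to page %d\n", page);
            return;
        }
        if (frame_of[page] >= 0)
            return;                                   /* resident: MMU translates, no fault */

        /* --- page fault --- */
        int frame = next_victim;                      /* b) pick a frame (FIFO replacement) */
        next_victim = (next_victim + 1) % NUM_FRAMES;
        if (page_in_frame[frame] >= 0)
            frame_of[page_in_frame[frame]] = -1;      /*    evict the old mapping */

        disk_reads++;                                 /* c) "load the page from disk" */
        page_in_frame[frame] = page;                  /* d) update the page table */
        frame_of[page] = frame;
        printf("page fault: page %d loaded into frame %d\n", page, frame);
    }                                                 /* e) returning resumes the access */

    int main(void)
    {
        for (int i = 0; i < NUM_PAGES; i++)  frame_of[i] = -1;
        for (int i = 0; i < NUM_FRAMES; i++) page_in_frame[i] = -1;

        int trace[] = {1, 2, 3, 4, 1, 5, 1, 2};
        for (int i = 0; i < 8; i++)
            access_page(trace[i]);
        printf("total page faults: %d\n", disk_reads);
        return 0;
    }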

Caching in OS

1. TLB (Translation Lookaside Buffer)

  • Cache for page table entries
  • Typical hit rate: 98-99%
  • Organization:
    TLB Entry   VPN   PFN   Flags
    0           0x5   0x3   R/W
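
A minimal sketch of a fully associative TLB with a linear software lookup (a real TLB searches all entries in parallel in hardware; the entry count and field names are illustrative):

    #include <stdint.h>

    #define TLB_ENTRIES 16

    typedef struct {
        uint32_t vpn;        /* virtual page number */
        uint32_t pfn;        /* physical frame number */
        int      valid;
        int      writable;   /* the "R/W" flag from the entry above */
    } tlb_entry_t;

    static tlb_entry_t tlb[TLB_ENTRIES];

    /* Returns 1 on a TLB hit and fills *pfn; 0 on a miss (fall back to the page table). */
    int tlb_lookup(uint32_t vpn, uint32_t *pfn)
    {
        for (int i = 0; i < TLB_ENTRIES; i++) {
            if (tlb[i].valid && tlb[i].vpn == vpn) {
                *pfn = tlb[i].pfn;
                return 1;    /* hit: the common case (~98-99% of accesses) */
            }
        }
        return 0;            /* miss: walk the page table, then refill the TLB */
    }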

2. Page Cache

  • Caches frequently used disk blocks in memory
  • Uses write-back policy for better performance
  • Managed via CLOCK or LRU algorithms

3. Buffer Cache

  • Stores frequently accessed disk data
  • Different from page cache (blocks vs pages)
  • Example: Directory structures, file metadata

Virtual Memory Techniques

1. Demand Paging

  • Load pages only when needed (“lazy loading”)
  • Example: Program startup loads only core pages
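
On POSIX systems this is exactly what mmap() exposes to user programs: mapping a file only sets up page-table entries, and each page is read in by a fault on first access. A minimal sketch (the file name is illustrative and error handling is omitted for brevity):

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("large_data.bin", O_RDONLY);    /* hypothetical data file */
        struct stat st;
        fstat(fd, &st);

        /* No disk I/O happens here: the kernel just records the mapping. */
        char *data = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);

        /* Touching a byte triggers a page fault that loads only that page. */
        printf("first byte: %d\n", data[0]);

        munmap(data, st.st_size);
        close(fd);
        return 0;
    }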

2. Prefetching

  • Anticipate future page needs
  • Types:
    • Spatial: Load adjacent pages
    • Temporal: Track access patterns
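
From user space, spatial prefetching can be requested explicitly with madvise(MADV_WILLNEED) (POSIX spells it posix_madvise with POSIX_MADV_WILLNEED); a minimal sketch, where the mapping, offset, and window size come from the caller:

    #include <stddef.h>
    #include <sys/mman.h>

    /* Hint that the next `window` bytes of an mmap'd region will be needed soon,
     * so the kernel can read those pages ahead of the first access. */
    void prefetch_ahead(char *mapped, size_t offset, size_t window)
    {
        madvise(mapped + offset, window, MADV_WILLNEED);
    }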

3. Page Replacement Algorithms

Algorithm   Description                     Belady's Anomaly
FIFO        Replace oldest page             Yes
LRU         Least Recently Used             No
OPT         Optimal (theoretical)           No
CLOCK       Circular buffer with use bits   No

Example CLOCK Algorithm:

Frames (page number with reference bit): [1(0)] → [2(1)] → [3(1)] → [4(0)]
Pointer starts at Frame 1
1. Check Frame 1: reference bit is 0 → replace this page
2. If a frame's reference bit is 1: clear it to 0 and advance the pointer to the next frame
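
A minimal sketch of that victim selection in C (the caller owns the reference bits and the hand position; frame contents are managed elsewhere):

    /* Pick the frame to replace using the CLOCK / second-chance rule:
     * a frame with use bit 0 is the victim; a use bit of 1 is cleared
     * and the hand advances. */
    int clock_pick_victim(int ref_bit[], int num_frames, int *hand)
    {
        for (;;) {
            if (ref_bit[*hand] == 0) {
                int victim = *hand;                     /* use bit 0 -> replace this frame */
                *hand = (*hand + 1) % num_frames;
                return victim;
            }
            ref_bit[*hand] = 0;                         /* give the page a second chance */
            *hand = (*hand + 1) % num_frames;           /* advance the pointer */
        }
    }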

4. Working Set Model

  • Tracks the set of pages a process has referenced in the most recent Δ time units
  • Prevents thrashing by limiting a process's resident pages to its active set
  • Formula: WS(t, Δ) = {pages accessed in (t-Δ, t]}
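
A minimal sketch that computes |WS(t, Δ)| from a page-reference trace (the trace, Δ, and the page-number limit are illustrative):

    #include <stdio.h>

    #define MAX_PAGE 64

    /* Count the distinct pages referenced in the window (t-Δ, t],
     * where trace[i] is the page referenced at time i+1. */
    int working_set_size(const int trace[], int t, int delta)
    {
        int seen[MAX_PAGE] = {0};
        int size = 0;
        int start = (t - delta >= 0) ? t - delta : 0;

        for (int i = start; i < t; i++) {
            if (!seen[trace[i]]) {
                seen[trace[i]] = 1;
                size++;
            }
        }
        return size;
    }

    int main(void)
    {
        int trace[] = {1, 2, 1, 3, 4, 4, 2, 5};
        printf("|WS(t=8, delta=4)| = %d\n", working_set_size(trace, 8, 4));  /* pages {4, 2, 5} -> 3 */
        return 0;
    }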

Thrashing in Virtual Memory

  • Occurs when: ∑(Working Sets) > Physical Memory
  • Detection: sustained high page fault rate (e.g., >100 faults/sec)
  • Solution: Adjust process mix using working set principle
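
A minimal sketch of that working-set admission test: a new process is admitted only if the combined working sets still fit in physical memory (all sizes come from the caller):

    /* Return 1 if admitting a process with working set `new_ws` frames keeps
     * the sum of all working sets within the available physical frames;
     * otherwise the OS should suspend or swap out a process instead. */
    int can_admit(const int ws_sizes[], int num_procs, int new_ws, int total_frames)
    {
        int sum = new_ws;
        for (int i = 0; i < num_procs; i++)
            sum += ws_sizes[i];
        return sum <= total_frames;
    }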

Example: Database Server

  • Physical Memory: 16GB
  • Virtual Memory: 64GB (using swap space)
  • Active working set: 12GB (indexes + hot data)
  • Cold data remains on disk until accessed

Comparison: Overlay vs Virtual Memory

Aspect         Overlays             Virtual Memory
Management     Manual               Automatic
Hardware       None required        MMU needed
Transparency   Programmer-visible   Fully transparent
Memory Use     Static allocation    Dynamic allocation
Protection     None                 Full memory protection
Typical Use    Embedded systems     Modern OS