CSCE 313 Lecture 18


« previous | Tuesday, March 27, 2012 | next »

Lecture Notes


Memory Management

Program to Process

  1. Source code
  2. Compiler/Assembler → Object module
  3. linker + other objects → load module (executable)
  4. loader + system library → running in memory (load time)
  5. dynamic library (dynamic linking) (execution/run time)
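Step 5 can be observed directly from a program. Below is a minimal sketch, assuming a glibc Linux system where the math library is available as `libm.so.6` (the helper name `call_cos_dynamically` is made up for illustration): it asks the loader to map a shared library at run time with the POSIX `dlopen`/`dlsym` API, the latest possible binding.

```c
#include <dlfcn.h>  // POSIX dynamic linking API; may need -ldl on older glibc

// Hypothetical helper (not from the notes): resolve and call cos(x) from
// libm at execution time -- step 5 of the pipeline above.
double call_cos_dynamically(double x) {
    // Map the math library into our address space now, at run time.
    void *handle = dlopen("libm.so.6", RTLD_LAZY);  // assumed glibc soname
    if (!handle)
        return -2.0;  // cos never returns -2, so use it as an error value

    // Resolve the symbolic name "cos" to an address in the mapped library.
    double (*cosine)(double) = (double (*)(double))dlsym(handle, "cos");
    double result = cosine ? cosine(x) : -2.0;

    dlclose(handle);
    return result;
}
```

The cast from `void *` to a function pointer is the idiom the POSIX `dlsym` documentation itself uses.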

Naming

There are two hard things in Computer Science: Cache Invalidation and Naming things.

3 Levels of Naming Indirection

  1. Symbolic: known in context or path such as file path, device name, program name, username (convenient to reference)
  2. Logical: label entity, but not actual physical entity (pid, uid, gid, absolute address in program)
  3. Physical: machine-identifiable address of entity such as inode address on disk, entry-point/variable address, PCB, register, etc.

Physical names are difficult for programmers to use, so compilers take care of converting symbolic names into physical names. Names are bound to addresses/values in memory at different times and through different stages:

  1. compile time (.c → .o; early binding)
  2. link time (.o → executable)
  3. run time (executable → memory; late binding)
late-binding
variables get memory space at run-time
early-binding
variables get memory space at compile-time
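The two binding times can be sketched in a few lines of C (the helper name `make_late` is illustrative, not from the notes): a global is bound early, a `malloc`'d block late.

```c
#include <stdlib.h>

// Early binding: space for this variable is reserved in the data segment
// at compile/link time, before the program ever runs.
int early = 42;

// Late binding: space for *p only exists once this call executes at run time,
// and its address is chosen by the allocator on the spot.
int *make_late(void) {
    int *p = malloc(sizeof *p);
    if (p)
        *p = 42;
    return p;
}
```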

Memory Map

  • stack (hi address; grows down; laid out by compiler by calculating offset)
  • heap (constructed by allocator malloc)
  • initialized data
  • code (low address)
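One way to see this map from a running program is to print the address of an object in each region, as in the sketch below (exact numbers vary run to run because of address-space randomization, but on typical systems code sits low and the stack high):

```c
#include <stdio.h>
#include <stdlib.h>

int initialized_data = 1;  // lives in the initialized data segment

void print_memory_map(void) {
    int on_stack = 2;                        // stack frame of this call
    int *on_heap = malloc(sizeof *on_heap);  // heap, handed out by malloc

    printf("code:  %p\n", (void *)print_memory_map);    // code segment
    printf("data:  %p\n", (void *)&initialized_data);   // data segment
    printf("heap:  %p\n", (void *)on_heap);
    printf("stack: %p\n", (void *)&on_stack);

    free(on_heap);
}
```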

Multi-Programming in Memory

Because many objects with different lifetimes share memory, fragmentation can result, which slows allocation down. With dynamic (variable-size) allocation this is specifically external fragmentation: free memory ends up scattered in holes between allocations.

Fixed-size allocation avoids external fragmentation but still leads to internal fragmentation (space wasted inside each fixed slot), and slots cannot be resized or reallocated.

Best-fit
put in free memory "hole" with smallest free space left over
lots of small holes that cannot be used
Worst-fit
put in biggest hole with largest free space left over
makes it difficult to find large holes for large programs
First-fit
put in first place where it fits
creates average-sized holes
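The three placement policies above can be sketched as scans over an array of free-hole sizes (a stand-in for a real free list; the function names are illustrative):

```c
#include <stddef.h>

// First-fit: scan from the front, take the first hole big enough.
int first_fit(const size_t holes[], int n, size_t req) {
    for (int i = 0; i < n; i++)
        if (holes[i] >= req)
            return i;
    return -1;  // no hole fits
}

// Best-fit: take the adequate hole leaving the least space over.
int best_fit(const size_t holes[], int n, size_t req) {
    int best = -1;
    for (int i = 0; i < n; i++)
        if (holes[i] >= req && (best < 0 || holes[i] < holes[best]))
            best = i;
    return best;
}

// Worst-fit: take the adequate hole leaving the most space over.
int worst_fit(const size_t holes[], int n, size_t req) {
    int worst = -1;
    for (int i = 0; i < n; i++)
        if (holes[i] >= req && (worst < 0 || holes[i] > holes[worst]))
            worst = i;
    return worst;
}
```

For holes of sizes {5, 12, 8} and a request of 8, first-fit picks the 12, best-fit the 8, and worst-fit the 12, which is exactly why best-fit tends to leave tiny unusable slivers while worst-fit consumes the big holes.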

Contiguous allocation: each process (or object) occupies one unbroken range of memory, so the OS must track which regions are free. Two common data structures for this:

bitmap
one bit per fixed-size allocation unit marks it used or free
occupies more space
faster
linked list
each node records a used or free segment and points to the next
occupies less space
slower
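The bitmap scheme can be sketched as follows (an illustrative toy, not from the notes, with 64 allocation units packed into one word; allocation is a first-fit scan for a run of free bits):

```c
#include <stdint.h>

#define UNITS 64
static uint64_t bitmap;  // one bit per allocation unit: 1 = used, 0 = free

int bit_is_used(int i) {
    return (int)((bitmap >> i) & 1u);
}

void bit_set_used(int i, int used) {
    if (used)
        bitmap |= (uint64_t)1 << i;
    else
        bitmap &= ~((uint64_t)1 << i);
}

// Find the first run of n consecutive free units (first-fit over the map);
// returns the starting unit, or -1 if no run is long enough.
int bitmap_find_free(int n) {
    int run = 0;
    for (int i = 0; i < UNITS; i++) {
        run = bit_is_used(i) ? 0 : run + 1;  // a used bit breaks the run
        if (run == n)
            return i - n + 1;
    }
    return -1;
}
```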

Fragmentation can be solved with compaction (moving all used memory into a single contiguous region), but copying that much memory takes a long time.
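A toy compaction pass over a byte array (illustrative only; real compaction moves variable-size blocks and must also fix up every pointer into them):

```c
#include <stddef.h>
#include <string.h>

// Compact mem[0..len), where a 0 byte marks free space: slide every used
// byte left so all free space ends up as one contiguous region at the end.
// Returns the number of used bytes, now packed at the front.
size_t compact(unsigned char mem[], size_t len) {
    size_t used = 0;
    for (size_t i = 0; i < len; i++)
        if (mem[i] != 0)
            mem[used++] = mem[i];       // copy each used byte leftward
    memset(mem + used, 0, len - used);  // one big free hole at the end
    return used;
}
```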


Virtual Memory

What if there are more processes than we can fit into RAM?

Swapping & Paging

Memory is stored on disk, so only active memory is cached in RAM while inactive memory is archived.

  • swap-out saves info to disk
  • swap-in loads info back into memory
  1. Provide user with as much virtual memory as needed
  2. Store that much memory on the disk
  3. Cache memory currently being used in real memory (RAM)
  4. Preempt (replace) cached memory when necessary
  5. No user program intervention required; happens automatically

Cacheable units are called pages.

Virtual-to-Real Mapping

  1. Memory requested (a virtual address)
  2. The TLB, a small hardware cache, stores recent virtual-to-physical page mappings
  3. If there is a TLB hit, the physical address is formed immediately
  4. If there is a TLB miss, the lookup falls back to the full page table in memory
  5. If the page table shows the page is not in RAM at all, a page fault occurs and the page is brought in from disk

Perhaps this would best be described in pseudocode...

struct mem_address {
    long page;      // virtual or physical page number
    long offset;    // byte offset within the page
};

struct mem_address physical_address_of(struct mem_address virtual_addr) {
    struct mem_address physical_addr;

    // assume tlb_lookup returns -1 on a miss
    if ((physical_addr.page = tlb_lookup(virtual_addr.page)) != -1) {
        // TLB hit: translation was cached
        physical_addr.offset = virtual_addr.offset;  // offset is unchanged
    } else {
        // TLB miss: consult the full page table, which may in turn
        // page-fault and fetch the page from disk
        physical_addr = pagetable_lookup(virtual_addr);
        tlb_cache(virtual_addr.page, physical_addr.page);  // evict some existing entry for this one
    }

    return physical_addr;
}