CSCE 313 Lecture 18
Tuesday, March 27, 2012
Memory Management
Program to Process
- Source code
- Compiler/Assembler → Object module
- linker + other objects → load module (executable)
- loader + system library → running in memory (load time)
- dynamic library (dynamic linking) (execution/run time)
Naming
"There are two hard things in Computer Science: cache invalidation and naming things." (Phil Karlton)
Three Levels of Naming Indirection
- Symbolic: known in context or path such as file path, device name, program name, username (convenient to reference)
- Logical: label entity, but not actual physical entity (pid, uid, gid, absolute address in program)
- Physical: machine-identifiable address of an entity, such as an inode address on disk, an entry-point/variable address, a PCB, a register, etc.
Physical names are difficult for programmers to use, so the toolchain (compiler, linker, loader) takes care of converting symbolic names into physical ones. Names are bound to addresses/values in memory at different times and through different stages:
- compile time (.c → .o; early binding)
- link time (.o → executable)
- run time (executable → memory; late binding)
- early binding: variables get memory space at compile time
- late binding: variables get memory space at run time
Memory Map
- stack (high addresses; grows downward; laid out by the compiler by calculating offsets)
- heap (managed by the allocator, e.g. malloc)
- initialized data
- code (low addresses)
Multi-Programming in Memory
Because we store many things in memory for different amounts of time, fragmentation can result, which slows things down. Dynamically sized allocation is the usual case, and it leads specifically to external fragmentation.
Fixed-size allocation avoids this but still leads to internal fragmentation, and partitions cannot be resized.
- Best-fit
- put in free memory "hole" with smallest free space left over
- lots of small holes that cannot be used
- Worst-fit
- put in the biggest hole, leaving the largest free space left over
- makes it difficult to find large holes for large programs
- First-fit
- put in first place where it fits
- creates average-sized holes
Contiguous allocation: each process's memory is stored in one contiguous region, so the OS must track which regions of memory are used and which are free.
- bitmap
- each bit records whether a fixed-size allocation unit is in use or free
- occupies more space
- faster
- linked list
- each node describes a used or free region and points to the next
- occupies less space
- slower
Fragmentation can be solved with compaction (moving all used memory into a single contiguous region), but this takes a long time because all the data must be copied and every reference to it updated.
Virtual Memory
What if there are more processes than we can fit into RAM?
Swapping & Paging
Memory is backed by disk: only actively used memory is kept (cached) in RAM, while inactive memory is written out to disk.
- swap-out saves info to disk
- swap-in loads info back into memory
- Provide user with as much virtual memory as needed
- Store that much memory on the disk
- Cache memory currently being used in real memory (RAM)
- Preempt (replace) cached memory when necessary
- No user program intervention required; happens automatically
Cacheable units are called pages.
Virtual-to-Real Mapping
- Memory access is requested using a virtual address
- The TLB (translation lookaside buffer) caches recent virtual-to-physical page mappings
- On a TLB hit, the physical page is returned immediately and the object in physical memory is accessed
- On a TLB miss, the full page table in memory is consulted
- If the page table shows the page is not resident in RAM, a page fault occurs and the page is loaded from disk
Perhaps this would best be described in pseudocode...
/* A valid physical page number is non-negative; tlb_lookup returns -1 on a miss. */
#define TLB_MISS -1

struct mem_address {
    long page;
    long offset;
};

long tlb_lookup(long virtual_page);                      /* physical page, or TLB_MISS */
struct mem_address pagetable_lookup(struct mem_address); /* may trigger a page fault   */
void tlb_cache(long virtual_page, long physical_page);   /* evicts an existing entry   */

struct mem_address physical_address_of(struct mem_address virtual_addr) {
    struct mem_address physical_addr;
    if ((physical_addr.page = tlb_lookup(virtual_addr.page)) != TLB_MISS) {
        // TLB hit: the offset within the page is unchanged
        physical_addr.offset = virtual_addr.offset;
    } else {
        // TLB miss: walk the page table (a page fault occurs here
        // if the page is not resident in RAM)
        physical_addr = pagetable_lookup(virtual_addr);
        tlb_cache(virtual_addr.page, physical_addr.page);
    }
    return physical_addr;
}