4. Memory Management
A 408 postgraduate entrance examination review collection (Operating Systems, Part 4), covering the memory hierarchy, program loading and linking, contiguous allocation storage management, segmented storage management, and more.
Chapter 4: Memory Management
Memory hierarchy
Multi-layer storage system
Multi-layer structure of memory
CPU registers
Main memory
Auxiliary (secondary) storage
Executable memory
General term for registers and main memory
Access is fast: a process can complete an access with a single load or store instruction in a few clock cycles.
Main memory and registers
Cache and disk cache
Program loading and linking
Steps
Compile
Source program → object modules (compiler)
The user source program is compiled by the compiler into several object modules.
Link
Object modules → load module (linker)
The linker combines a set of compiled object modules with the library functions they require to form a complete load module.
Load
Load module → memory (loader)
The load module is loaded into memory by the loader.
Program loading
Absolute loading mode
If it is known at compile time that the program will reside at a specific location in memory, the compiler generates object code with absolute addresses.
Relocatable loading mode
The executable file lists every address unit that needs relocation together with its relative address value. When the user program is loaded into memory, the conversion from logical addresses to physical addresses is performed once and is not repeated afterwards (usually done by software at load time).
Advantages: no hardware support is required, and a limited number of programs can be loaded for multiprogramming.
Disadvantages: a program usually has to occupy a contiguous memory space, cannot be moved once loaded, and is not easy to share.
Dynamic run-time loading mode
After the load module is brought into memory, the dynamic run-time loader does not immediately convert its logical addresses into physical addresses; the conversion is postponed until the program actually executes.
Advantages:
The OS can place a program in discontiguous memory areas, move the program, and use this to achieve sharing.
It can support address references generated during program execution, such as pointer variables (not only the address references present when the executable file is generated).
Disadvantages: Hardware support is required, and OS implementation is complex.
It is the basis of virtual memory.
Program linking
Static linking (lib)
Load-time dynamic linking
Run-time dynamic linking (dll)
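To make the last item concrete, here is a minimal POSIX sketch of run-time dynamic linking with dlopen/dlsym; the library name libm.so.6 and the symbol cos are only examples, and on Linux the program is typically built with -ldl:

```c
#include <dlfcn.h>
#include <stdio.h>

int main(void) {
    /* Load the math library at run time; nothing is linked until now. */
    void *handle = dlopen("libm.so.6", RTLD_LAZY);
    if (!handle) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    /* Resolve the symbol "cos" only when it is actually needed. */
    double (*cosine)(double) = (double (*)(double))dlsym(handle, "cos");
    if (cosine)
        printf("cos(0.0) = %f\n", cosine(0.0));

    dlclose(handle);
    return 0;
}
```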
Contiguous allocation storage management
Contiguous allocation
Single contiguous allocation (e.g., DOS)
Fixed partition allocation (wastes a lot of space)
Dynamic partition allocation
Address mapping and storage protection measures
Base address register: the minimum physical address of the program
Limit register: the logical address range of the program
Physical address = logical address + base address (see the sketch below)
Internal fragmentation: unused space inside an allocated partition.
External fragmentation: free areas between allocated partitions that are hard to use (usually small free partitions).
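A minimal sketch of the base/limit translation described above; the register values and the fault handling are only illustrative:

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical relocation hardware: one base register and one limit register
 * per running process, loaded by the dispatcher from the PCB. */
typedef struct {
    uint32_t base;   /* smallest physical address of the partition */
    uint32_t limit;  /* size of the logical address space          */
} relocation_regs;

/* Translate a logical address; returns -1 on a protection violation. */
int64_t translate(relocation_regs r, uint32_t logical) {
    if (logical >= r.limit)           /* bounds check = storage protection   */
        return -1;                    /* would raise an addressing exception */
    return (int64_t)r.base + logical; /* physical = base + logical           */
}

int main(void) {
    relocation_regs r = { .base = 0x40000, .limit = 0x10000 }; /* example values */
    printf("0x1234 -> %llx\n", (unsigned long long)translate(r, 0x1234));
    printf("violation: %lld\n", (long long)translate(r, 0x20000));
    return 0;
}
```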
Divide the memory into several fixed-sized contiguous partitions. Fixed partitioning is also called static partitioning.
Equal partition sizes: only suitable for concurrent execution of multiple identical programs (processing multiple objects of the same type).
Partition sizes vary: multiple small partitions, a moderate number of medium partitions, and a few large partitions. Based on the size of the program, allocate currently free, appropriately sized partitions.
Advantages: No external fragmentation, easy to implement, low overhead.
Disadvantages:
Internal fragmentation causes waste.
The total number of partitions is fixed, limiting the number of concurrently executing programs.
Rarely used in general-purpose OSes; still found in some control systems.
Dynamic partition creation: when a job is loaded into memory, a contiguous area is allocated to it from available memory, with the partition size exactly equal to the size of the job. In variable partitioning, the size and number of partitions are not fixed; they are divided dynamically according to the sizes and number of jobs.
Dynamic partition allocation algorithms based on sequential search (sketched in code after this list)
First fit algorithm (first fit, FF)
Search the free chain in order and allocate the first partition that satisfies the request; small leftover pieces may be wasted.
The method keeps the search simple and aims to reduce search time.
The free partitions in the free partition table (free area chain) are sorted by address from low to high.
Next fit algorithm (circular first fit, NF)
Unlike first fit, the search does not always restart from the head of the chain.
Free partitions end up more evenly distributed and the search overhead is small.
The search starts from the free area following the one found last time and continues until the first free area that satisfies the request is found; a memory block of the requested size is then allocated to the job.
Best fit algorithm (best fit, BF)
Always allocates the smallest free partition that satisfies the request, so large free areas are seldom broken up.
Each allocation cuts off the smallest possible remainder, which keeps external fragments small (but leaves many tiny, hard-to-use pieces).
The free partitions in the free partition table (free area chain) are sorted by size from small to large; the search starts from the head of the table and the first free partition that satisfies the request is allocated.
Worst fit algorithm (worst fit, WF)
In contrast to best fit, it always allocates from the largest free area, so the remainders are not tiny, but few large free areas remain and fragments still accumulate.
Free partitions are sorted by size from large to small.
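A minimal free-list sketch of first fit and best fit under the ordering assumptions above; the list layout and field names are only illustrative:

```c
#include <stddef.h>

/* Hypothetical node of the free partition chain. */
typedef struct free_block {
    size_t start, size;
    struct free_block *next;
} free_block;

/* First fit: the list is kept in address order; take the first block that fits. */
free_block *first_fit(free_block *head, size_t request) {
    for (free_block *b = head; b != NULL; b = b->next)
        if (b->size >= request)
            return b;               /* caller splits off 'request' bytes */
    return NULL;                    /* no partition large enough */
}

/* Best fit: scan the whole list and remember the smallest block that fits
 * (equivalently, if the list were sorted by size, take the first match). */
free_block *best_fit(free_block *head, size_t request) {
    free_block *best = NULL;
    for (free_block *b = head; b != NULL; b = b->next)
        if (b->size >= request && (best == NULL || b->size < best->size))
            best = b;
    return best;
}
```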
Dynamic partition allocation algorithms based on indexed search
Quick fit algorithm
Buddy system
Hash algorithm
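As one illustration of indexed allocation, a minimal buddy-system sketch: requests are rounded up to a power of two, and the buddy of a block is found by flipping one address bit. The minimum block size and offsets are assumptions for illustration:

```c
#include <stdint.h>

#define MIN_ORDER 4   /* assumed smallest block: 2^4 = 16 bytes */

/* Round a request up to the next power of two; 2^k is the size allocated. */
unsigned order_for(uint32_t request) {
    unsigned k = MIN_ORDER;
    while ((1u << k) < request)
        k++;
    return k;
}

/* The buddy of the block at 'offset' with size 2^k is found by toggling
 * bit k of the offset. When a block is freed and its buddy is also free,
 * the two are merged into a block of size 2^(k+1), and merging repeats. */
uint32_t buddy_of(uint32_t offset, unsigned k) {
    return offset ^ (1u << k);
}
```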
Dynamic relocatable partition allocation
Compaction
Dynamic relocation
The program is loaded with the dynamic run-time loading mode; address translation is performed when each instruction is executed and requires hardware address translation support.
Memory address = relative address + starting address
Dynamic relocation partition allocation algorithm
1. Compact as soon as a partition is released. The system then always has a single contiguous free area and no fragmentation, but this method is very time-consuming.
2. Compact only when the request-allocation module cannot find a free partition large enough for the request. This requires far fewer compactions than the first method, but the management is more complicated.
Advantages: No internal fragmentation.
Disadvantages: external fragmentation.
Swapping (for understanding only)
The system keeps all jobs on external storage and brings only one job at a time into memory to run. When its time slice expires, the job is moved out to the backup queue on external storage to wait, and another job is brought in from the backup queue to run.
Basic paging storage management method
Basic methods of paging storage management
Page
The logical address space of a process is divided into pieces of equal size, called pages.
Frame (physical block)
Memory space is divided into storage blocks of the same size as a page.
Because the last page of a process usually does not fill a whole block, the unused part becomes an unusable fragment, known as intra-page (internal) fragmentation.
Address structure
Page number P | Offset (displacement) W    (32-bit logical address, bits 0–31)
page table
In a paging system, the pages of a process may be stored discretely in any physical blocks of memory. To make sure the process can still run correctly, i.e., that the physical block corresponding to each page can be found in memory, the system establishes a page map table, called the page table for short, for each process.
The role of the page table is to implement the address mapping from page numbers to physical block numbers.
Address translation mechanism
Basic address translation mechanism
Requires two memory accesses per data access
Page tables mostly reside in memory
In order to implement the address conversion function, a page table register (PTR) is set up in the system to store the starting address of the page table and the length of the page table.
When the process is not executing, the starting address and length of the page table corresponding to each process are stored in the PCB of the process. When the process is scheduled, they are loaded into the page table register.
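A minimal sketch of the basic translation mechanism: the page number indexes the page table addressed by the PTR, and the block number is combined with the offset. The page size, table layout, and fault handling are assumptions for illustration:

```c
#include <stdint.h>

#define PAGE_SIZE 4096u               /* assumed 4 KB pages */

/* Hypothetical page table register (PTR) plus the in-memory page table. */
typedef struct {
    uint32_t *table;   /* starting address of the page table   */
    uint32_t  length;  /* number of entries (page table length) */
} page_table_reg;

/* First memory access: read the page table entry.
 * Second memory access (not shown): read the data at the physical address. */
int64_t page_translate(page_table_reg ptr, uint32_t logical) {
    uint32_t p = logical / PAGE_SIZE;          /* page number           */
    uint32_t w = logical % PAGE_SIZE;          /* offset within the page */
    if (p >= ptr.length)
        return -1;                             /* address out of bounds  */
    uint32_t block = ptr.table[p];             /* physical block number  */
    return (int64_t)block * PAGE_SIZE + w;     /* physical address       */
}
```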
Address translation mechanism with fast table
Efficiency is improved; exam calculation questions often come from here.
If the page table is stored in memory, every data access first requires an access to the page table in memory, and only then an access to the physical address that has been formed. The CPU therefore accesses memory twice for each piece of data, cutting the effective processing speed roughly in half.
To speed up address translation, a special cache with a parallel lookup capability, called associative memory or the "fast table" (TLB), is added to the address translation mechanism to hold the page table entries currently being accessed.
The address conversion process is:
1. The CPU presents an effective (logical) address.
2. The address translation mechanism automatically sends the page number to the fast table to check whether the required entry is there.
3. If it is, the physical block number of the page is read directly and sent to the physical address register.
4. If the corresponding page table entry is not found in the fast table, the page table in memory is accessed instead.
5. After the entry is found, the page table entry read from memory is stored into a register unit of the fast table, replacing an old entry.
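A worked example of the kind of calculation this section refers to: the effective access time (EAT) with a fast table. The timings and hit ratio below are hypothetical textbook-style figures:

```c
#include <stdio.h>

int main(void) {
    /* Assumed figures, typical of exam exercises. */
    double mem = 100.0;   /* one memory access, ns           */
    double tlb = 20.0;    /* one fast-table (TLB) lookup, ns */
    double hit = 0.90;    /* TLB hit ratio                   */

    /* Hit:  TLB lookup + one memory access for the data.
     * Miss: TLB lookup + page-table access + data access (two memory accesses). */
    double eat = hit * (tlb + mem) + (1.0 - hit) * (tlb + 2.0 * mem);
    printf("EAT = %.1f ns (vs. %.1f ns with no TLB)\n", eat, 2.0 * mem);
    return 0;   /* prints EAT = 130.0 ns vs. 200.0 ns */
}
```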
Two-level and multi-level page tables
The main reason is that a single page table can become too large and needs to be broken up.
Format: outer page number P1 | outer-page inner address P2 | page offset d
Basic method: page the page table itself. Each of its pages is the same size as a physical block of memory and is numbered, and these pages can themselves be stored discretely in different physical blocks.
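A minimal sketch of splitting a 32-bit logical address for a two-level page table. The 10/10/12 bit split assumes 4 KB pages with 4-byte entries (1024 entries per page-table page) and is only illustrative:

```c
#include <stdint.h>

/* Assumed split: 10 bits outer page number P1, 10 bits inner page number P2,
 * 12 bits page offset d (4 KB pages, 1024 entries per page-table page). */
typedef struct {
    uint32_t p1;   /* index into the outer page table        */
    uint32_t p2;   /* index into one page of the page table  */
    uint32_t d;    /* offset within the final page           */
} split_addr;

split_addr split(uint32_t logical) {
    split_addr s;
    s.p1 = (logical >> 22) & 0x3FF;
    s.p2 = (logical >> 12) & 0x3FF;
    s.d  = logical & 0xFFF;
    return s;
}
```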
Inverted page table
The inverted page table has one entry per physical block (page frame) and is ordered by physical block number; each entry records the page number stored in that frame and the identifier of the process it belongs to.
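A minimal sketch of an inverted-page-table lookup: the table is searched for the (process id, page number) pair, and the index of the matching entry is the physical block number. The linear search and structure are illustrative; practical systems speed this up with hashing:

```c
#include <stdint.h>

/* One entry per physical frame: which process/page currently occupies it. */
typedef struct {
    int32_t  pid;    /* owning process, -1 if the frame is free */
    uint32_t page;   /* page number stored in this frame        */
} ipt_entry;

/* Returns the frame number holding (pid, page), or -1 if not resident. */
int64_t ipt_lookup(const ipt_entry *ipt, uint32_t nframes,
                   int32_t pid, uint32_t page) {
    for (uint32_t f = 0; f < nframes; f++)
        if (ipt[f].pid == pid && ipt[f].page == page)
            return f;              /* frame number = index of the entry */
    return -1;                     /* not in memory */
}
```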
Advantages:
No external fragmentation, and each internal fragment is smaller than one page.
A program does not have to be stored contiguously.
It is easy to change the amount of space a program occupies: as the data generated dynamically while the program runs grows, the address space can grow accordingly.
Disadvantages: the entire program must be loaded into memory.
Segmented storage management method
Why segmentation is introduced
Ease of programming
Information sharing
Dynamic growth
Dynamic linking
In segmented storage management, the address space of a job is divided into several segments, each of which is a complete set of logically related information. Each segment has its own name and is addressed contiguously starting from zero; within the address space, the segments have unequal lengths.
The memory space is dynamically divided into several regions of different lengths, called physical segments; each physical segment is defined by its starting address and length.
Basic principles of segmented systems
Segmentation
Format: segment number | address within the segment
Segment table
The segment table implements the mapping from logical segments to physical memory areas.
Address translation mechanism
Differences from paging
A page is a physical unit of information; a segment is a logical unit.
Page size is fixed and determined by the system; segment length varies and is determined by the user program.
The address space of a paged user program is one-dimensional, while that of a segmented program is two-dimensional (segment number plus offset).
A segment is usually larger than a page, so the segment table is shorter than the page table, which shortens lookup time and speeds up access.
Paging meets a need of system management, whereas segmentation meets a need of the user application. An instruction or an operand may cross a page boundary but not a segment boundary.
Information Sharing
This is the most important advantage of segmentation
Segment-page (segmented paging) storage management method
Basic principle
Format: segment number (S) | page number within the segment (P) | offset within the page (W)
Address translation process
Requires three memory accesses
In a segment-page system, obtaining one instruction or datum requires three memory accesses: the first accesses the segment table in memory to obtain the starting address of the page table; the second accesses the page table in memory to obtain the physical block number of the page, which is then combined with the page offset to form the physical address of the instruction or datum; the third uses this physical address to actually fetch the instruction or datum.
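A minimal sketch of the three-access translation just described; the table layouts, page size, and fault handling are assumptions for illustration:

```c
#include <stdint.h>

#define PAGE_SIZE 4096u   /* assumed 4 KB pages */

typedef struct {                 /* one segment table entry */
    uint32_t *page_table;        /* starting address of this segment's page table */
    uint32_t  pt_length;         /* number of pages in the segment */
} seg_entry;

typedef struct {                 /* contents of the segment table register */
    seg_entry *seg_table;
    uint32_t   seg_length;
} seg_table_reg;

/* Logical address = (segment s, page p within the segment, offset w). */
int64_t seg_page_translate(seg_table_reg str, uint32_t s, uint32_t p, uint32_t w) {
    if (s >= str.seg_length)
        return -1;                              /* segment out of range */
    seg_entry se = str.seg_table[s];            /* access 1: segment table */
    if (p >= se.pt_length)
        return -1;                              /* page out of range */
    uint32_t block = se.page_table[p];          /* access 2: page table */
    return (int64_t)block * PAGE_SIZE + w;      /* access 3 fetches data at this address */
}
```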