Memory space

Introduction

Address space represents the range of addresses that a computer entity may occupy. The object modules produced by assembling or compiling a source program are processed by the link editor into a load module whose addresses are relative, numbered in order from a base address of 0. Relative addresses are also called logical addresses or virtual addresses, and the set of relative addresses in a program is called its logical address space. The relative address space is converted into the absolute address space by an address relocation mechanism; the absolute address space is also called the physical address space. Memory space generally refers to the main memory space (the physical address space) or to the memory that the system allocates to a user program. The system can allocate memory space to a user program in four ways: single continuous allocation, fixed partition allocation, dynamic partition allocation, and dynamic relocation partition allocation.
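As an illustration (not part of the original text, with invented values), a minimal sketch of the relocation described above: the loader patches each relative address in the module by adding the address at which the module is loaded, turning logical addresses into absolute ones.

```c
#include <stdint.h>
#include <stdio.h>

/* Static relocation sketch: the load module is addressed relative to 0
 * (its logical address space). When it is loaded at `load_base`, every
 * relative address is adjusted once to its absolute (physical) value.  */
int main(void)
{
    uint32_t logical[] = { 0x0000, 0x0040, 0x0120 };  /* relative addresses */
    uint32_t load_base = 0x5000;                      /* hypothetical base  */

    for (int i = 0; i < 3; i++) {
        uint32_t physical = load_base + logical[i];
        printf("logical 0x%04X -> physical 0x%04X\n",
               (unsigned)logical[i], (unsigned)physical);
    }
    return 0;
}
```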

Method of allocating memory space

Single continuous allocation

This is the simplest storage management scheme, and it can be used only in single-user, single-task operating systems. Under this scheme, memory is divided into two parts: the system area and the user area. The system area is reserved for the OS and usually occupies the lower part of memory; the user area is all of the memory outside the system area and is provided to the user. Although many early single-user, single-task operating systems were equipped with memory protection mechanisms to prevent user programs from damaging the operating system, several common single-user operating systems of later years, such as CP/M, MS-DOS and RT-11, adopted no memory protection measures at all. This saves hardware, and it is also feasible: in a single-user environment the machine is dedicated to one user, so there is no possibility of interference from other users. The only damage that can occur is the user program destroying the operating system, and the consequences are not serious, since only that user's program is affected and the operating system can easily be reloaded into memory by restarting the system.

Fixed partition allocation

Fixed partition allocation is the simplest storage management scheme that can run multiple programs. The user space of memory is divided into several fixed-size partitions, and each partition holds exactly one job. Dividing the user space into several partitions in this way allows several jobs to run concurrently. When a partition becomes free, a job of suitable size is selected from the backing job queue on external storage and loaded into it; when that job finishes, another job is taken from the backing job queue and loaded into the partition.

Methods for partitioning

The following two methods can be used to divide the user space of the memory into several fixed-size partitions:

(1) Equal partition sizes, that is, all memory partitions are the same size. The disadvantage is a lack of flexibility: when a program is too small it wastes memory space, and when a program is too large no single partition can hold it, so the program cannot run. Nevertheless, this division method is still used when one computer controls multiple identical objects, because those objects require equal amounts of memory. For example, a furnace temperature group-control system uses one computer to control several identical smelting furnaces.

(2) Unequal partition sizes. To overcome the inflexibility of equal-sized partitions, the memory area can be divided into many small partitions, a moderate number of medium partitions, and a few large partitions. A partition of appropriate size can then be allocated according to the size of each program.

Memory allocation

To facilitate memory allocation, the partitions are usually queued by size and a partition usage table is created for them. Each entry records the starting address, size, and status (allocated or not) of one partition. When a user program is to be loaded, the memory allocator searches the table for an unallocated partition large enough for the program, allocates it, and sets the status in that entry to "allocated"; if no sufficiently large partition is found, it refuses to allocate memory to the program.
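A minimal sketch (hypothetical names, not from the original text) of the partition usage table and the lookup just described: each entry records a partition's starting address, size, and status, and the allocator takes the first unallocated partition that is large enough or refuses the request.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* One entry of the partition usage table. */
struct partition {
    size_t start;        /* starting address            */
    size_t size;         /* partition size in bytes     */
    bool   allocated;    /* status: allocated or free   */
};

/* Return the index of the first free partition large enough for `need`
 * bytes and mark it allocated, or -1 if the request must be refused.   */
static int allocate_partition(struct partition *table, int n, size_t need)
{
    for (int i = 0; i < n; i++) {
        if (!table[i].allocated && table[i].size >= need) {
            table[i].allocated = true;
            return i;
        }
    }
    return -1;
}

int main(void)
{
    struct partition table[] = {             /* unequal fixed partitions */
        { 0x2000,  8 * 1024, false },
        { 0x4000, 16 * 1024, false },
        { 0x8000, 64 * 1024, false },
    };
    int idx = allocate_partition(table, 3, 12 * 1024);
    printf("job placed in partition %d\n", idx);   /* partition 1 */
    return 0;
}
```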

Dynamic partition allocation

Dynamic partition allocation, also called variable partition allocation, partitions memory dynamically. Memory is not divided in advance; instead, a partition is created when a process is loaded into memory, sized to fit exactly what the process needs. The size and number of partitions in the system are therefore variable.

Dynamic partitioning works well at first, but over time it leaves many small blocks scattered through memory. As time goes by, more and more fragments accumulate and memory utilization drops accordingly. These small blocks are called external fragments: the storage space lying outside all partitions becomes increasingly fragmented, which is exactly the opposite of the internal fragmentation seen with fixed partitions. External fragmentation can be overcome by compaction, that is, the operating system moves and consolidates processes from time to time. This requires the support of a dynamic relocation register and is relatively time-consuming. Compaction is in fact similar to the disk defragmentation program in Windows, except that the latter compacts external storage space.

When a process is loaded or swapped into main memory, if there are several free blocks large enough, the operating system must decide which block to allocate to the process; this is the placement policy of dynamic partitioning. Consider the following algorithms (a sketch of the first two appears after the list):

First Fit algorithm: Free partitions are linked in increasing order of addresses. When allocating memory, search in order to find the first free partition whose size can meet the requirements.

Best Fit algorithm: free partitions form a chain in order of increasing capacity, and allocation takes the first free partition in that chain that can meet the requirements, i.e., the smallest adequate one.

Worst Fit algorithm: Also known as the Largest Fit algorithm, free partitions are linked in order of decreasing capacity. Find the first free partition that can meet the requirements, that is, select the largest partition.

Next Fit algorithm: also known as the circular first-fit algorithm, it evolved from the first-fit algorithm. The difference is that each allocation continues searching from the position where the previous search ended.
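A sketch of the first two strategies (illustrative only; the names and sizes are invented). A real allocator would also split the chosen partition and keep the free list linked in the order each policy requires; here the free partitions are simply held in an array in address order.

```c
#include <stddef.h>
#include <stdio.h>

/* First fit: scan free partitions in address order and take the first
 * one that is large enough.                                            */
static int first_fit(const size_t *free_size, int n, size_t need)
{
    for (int i = 0; i < n; i++)
        if (free_size[i] >= need)
            return i;
    return -1;
}

/* Best fit: take the smallest free partition that is still large enough
 * (equivalent to scanning a chain kept in increasing order of capacity). */
static int best_fit(const size_t *free_size, int n, size_t need)
{
    int best = -1;
    for (int i = 0; i < n; i++)
        if (free_size[i] >= need &&
            (best < 0 || free_size[i] < free_size[best]))
            best = i;
    return best;
}

int main(void)
{
    size_t free_size[] = { 100, 500, 200, 300, 600 };   /* KB, address order */
    size_t need = 212;
    printf("first fit -> index %d\n", first_fit(free_size, 5, need)); /* 1 (500 KB) */
    printf("best fit  -> index %d\n", best_fit(free_size, 5, need));  /* 3 (300 KB) */
    return 0;
}
```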

Among these methods, the first-fit algorithm is not only the simplest but usually also the best and fastest. The initial version of the UNIX system used the first-fit algorithm to allocate memory space to processes, implemented with an array data structure rather than a linked list. However, first fit leaves many small free partitions in the low-address part of memory, and these partitions must be stepped over on every allocation, which increases the search overhead.

The next-fit algorithm tries to solve this problem, but in practice it often ends up allocating space at the end of memory (because within one scan, the front part of memory, once used and then released, does not take part in allocation again until the scan wraps around), splitting the large block there into small pieces. Its results are usually worse than those of the first-fit algorithm.

Although the best-fit algorithm is called "best", its performance is usually very poor, because each "best" allocation leaves behind a small leftover block that is hard to use, so it generates the most external fragments.

The worst-fit algorithm is the opposite of the best-fit algorithm. It chooses the largest available block, which seems least likely to produce fragments, but by carving up the largest contiguous blocks it quickly leaves no large block available, so its performance is also very poor.

Dynamic relocation partition allocation

In continuous allocation, a system or user program must be loaded into a contiguous memory space. If the system contains only a number of small partitions, then even if their total capacity is greater than the program to be loaded, the program cannot be loaded into memory because the partitions are not adjacent. Such small partitions that cannot be used are called "fractions" or "fragments".

One way to make such a job loadable is to move all the jobs in memory so that they become adjacent, splicing the originally scattered small partitions into one large partition into which the job can then be loaded. This method of merging scattered small partitions into one large partition by moving jobs within memory is called "splicing" or "compaction". Because compaction changes the memory locations of some user programs, those programs cannot execute unless their program and data addresses are modified (translated). For this reason, after each compaction, the moved programs and data must be relocated.
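A minimal, illustrative sketch (hypothetical structures) of compaction: occupied partitions are slid toward low memory in address order so that the free space coalesces into one large block; every moved partition receives a new starting address, which is why relocation is needed afterwards.

```c
#include <stddef.h>
#include <stdio.h>

struct partition {
    size_t start;    /* starting address            */
    size_t size;     /* partition size              */
    int    in_use;   /* 1 = holds a job, 0 = free   */
};

/* Slide all occupied partitions down toward `base` (the table is assumed
 * to be in address order) so that the remaining free space forms a single
 * contiguous block. Returns the start of that free block. A real system
 * would also copy the memory contents and then relocate the moved jobs.   */
static size_t compact(struct partition *p, int n, size_t base)
{
    size_t next = base;
    for (int i = 0; i < n; i++) {
        if (p[i].in_use) {
            p[i].start = next;       /* new start address after compaction */
            next += p[i].size;
        }
    }
    return next;
}

int main(void)
{
    struct partition mem[] = {
        { 0x0000, 0x2000, 1 }, { 0x2000, 0x1000, 0 },
        { 0x3000, 0x3000, 1 }, { 0x6000, 0x2000, 0 },
    };
    size_t free_start = compact(mem, 4, 0x0000);
    printf("single free block begins at 0x%zX\n", free_start);  /* 0x5000 */
    return 0;
}
```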

With dynamic run-time loading, all addresses remain relative after the job is loaded into memory, and the conversion of relative addresses to physical addresses is postponed until the program's instructions are actually executed. So that this address conversion does not slow instruction execution, it must be supported by a hardware address-translation mechanism: a relocation register is added to the system to hold the starting address of the program (or data) in memory. When the program executes, the memory address actually accessed is formed by adding the relative address to the address held in the relocation register.
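A small illustration (invented values) of the address formation just described: the physical address actually accessed is the sum of the relocation register and the relative address, and after compaction moves the program only the relocation register needs to change.

```c
#include <stdint.h>
#include <stdio.h>

/* The relocation register holds the program's current start address in
 * memory; every relative address is added to it when the instruction
 * actually executes.                                                    */
static uint32_t relocation_register = 0x8000;

static uint32_t effective_address(uint32_t relative_addr)
{
    return relocation_register + relative_addr;
}

int main(void)
{
    printf("access 0x%X\n", (unsigned)effective_address(0x0240)); /* 0x8240 */
    relocation_register = 0x3000;      /* program moved by compaction       */
    printf("access 0x%X\n", (unsigned)effective_address(0x0240)); /* 0x3240 */
    return 0;
}
```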

Method of expanding memory space

Virtual memory

Virtual memory is a memory management technique of computer systems. It gives an application the impression that it has a contiguous, complete address space available, while in fact that space is usually split across multiple fragments of physical memory, with some parts temporarily stored on external disk storage and exchanged with main memory when needed. Compared with systems that do not use virtual memory, systems using this technique make it easier to write large programs and use real physical memory (such as RAM) more efficiently.

Virtual storage is realized automatically by the hardware and the operating system, which schedule and manage the stored information. Its working process consists of six steps (a sketch of the lookup appears after the list):

① The logical address with which the CPU accesses main memory is decomposed into a group number a and a within-group address b. Address translation is performed on group number a: using the logical group number a as an index, the address translation table is consulted to determine whether that group of information is currently in main memory.

② If the group is already in main memory, go to ④. If it is not, check whether main memory has a free area; if there is none, a temporarily unused group is transferred out to auxiliary storage so that this group of information can be brought into main memory.

③ Read the required group from auxiliary storage into the free area of main memory, and then record the physical group number of that area together with the logical group number a in the address translation table.

④ Read from the address translation table the physical group number corresponding to the logical group number a.

⑤ Form the physical address from the physical group number and the within-group byte address b.

⑥ Access the required information in main memory at that physical address.
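A minimal sketch (invented names and sizes, not from the original) of the lookup in steps ① to ⑥, using fixed-size groups: the logical address is split into a group number and a within-group address, the translation table is consulted, and a group that is absent would first have to be brought in from auxiliary storage as in steps ② and ③.

```c
#include <stdint.h>
#include <stdio.h>

#define GROUP_SIZE 4096u                 /* bytes per group                   */
#define NUM_GROUPS 8                     /* groups in the logical space       */

/* Address translation table: one entry per logical group number. */
struct xlat_entry {
    int      present;                    /* 1 = group is in main memory       */
    uint32_t physical_group;             /* physical group number if present  */
};

static struct xlat_entry table[NUM_GROUPS] = {
    { 1, 5 }, { 0, 0 }, { 1, 2 }, { 0, 0 },
    { 1, 7 }, { 0, 0 }, { 0, 0 }, { 1, 0 },
};

/* Split the logical address, consult the table, and form the physical
 * address (steps 1, 4, 5). Returns -1 when the group is not in main
 * memory; a real system would then free a group if necessary and read
 * the missing group from auxiliary storage (steps 2, 3) before retrying. */
static int64_t translate(uint32_t logical)
{
    uint32_t a = logical / GROUP_SIZE;   /* group number                      */
    uint32_t b = logical % GROUP_SIZE;   /* within-group byte address         */
    if (a >= NUM_GROUPS || !table[a].present)
        return -1;
    return (int64_t)table[a].physical_group * GROUP_SIZE + b;
}

int main(void)
{
    printf("physical = 0x%llX\n",
           (unsigned long long)translate(0x0123));     /* group 0 -> 0x5123   */
    printf("missing  = %lld\n",
           (long long)translate(0x1004));              /* group 1 absent: -1  */
    return 0;
}
```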

There are three scheduling methods: paging, segmentation, and segment-paging. Page scheduling divides both the logical and the physical address space into fixed-size pages. Main memory is numbered in page order, and each independently addressed program space has its own sequence of page numbers. By scheduling the pages of a program held in auxiliary storage, those pages can be loaded discretely into different page positions in main memory and found again through the corresponding table lookup. The advantages of page scheduling are that the unused remainder inside a page is small, the page table is transparent to the programmer, address translation is fast, and loading is simple; the disadvantage is that a page is not an independent module of the program, which makes it inconvenient to protect programs and data.

Segment scheduling divides the address space according to the logical structure of the program; segment length is arbitrary, and segments are allowed to grow. Its advantages are that it eliminates the fragmentation inside fixed-size units, makes storage protection easy to implement, and is convenient for dynamic linking of programs; its disadvantage is that loading is complicated.

Combining the two methods gives segment-page scheduling: the physical space is divided into pages, the program is divided into segments by module, and each segment is subdivided into pages of the same size as the physical pages. Segment-page scheduling combines the advantages of segment and page scheduling; its disadvantages are higher hardware cost and more complicated software. Most large general-purpose computer systems use segment-page scheduling.
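An illustrative decomposition (field widths invented for this sketch) of a logical address under segment-page scheduling: the segment number selects an entry in the segment table, the page number selects a page within that segment, and the offset selects a byte within the page.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical layout of a 32-bit logical address under segment-page
 * scheduling: 8-bit segment number, 12-bit page number, 12-bit offset. */
#define OFFSET_BITS 12
#define PAGE_BITS   12

int main(void)
{
    uint32_t logical = 0x01A2F4C8;

    uint32_t offset  =  logical & ((1u << OFFSET_BITS) - 1);
    uint32_t page    = (logical >> OFFSET_BITS) & ((1u << PAGE_BITS) - 1);
    uint32_t segment =  logical >> (OFFSET_BITS + PAGE_BITS);

    /* The segment number indexes the segment table to find that segment's
     * page table; the page number indexes the page table to find a frame;
     * frame * page size + offset then gives the physical address.          */
    printf("segment %u, page %u, offset 0x%X\n",
           (unsigned)segment, (unsigned)page, (unsigned)offset);
    return 0;
}
```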

Physical Address Extension

Physical Address Extension (PAE), also rendered as physical location extension, is a feature of x86 processors that allows the CPU to access more than 4 GB of physical memory under a 32-bit operating system.

PAE is supported by the Intel Pentium Pro and later CPUs (including all later models of the Pentium series except the versions of the Pentium M with a 400 MHz bus). Other compatible processors, such as newer Athlon and other AMD CPUs, also support PAE.

With PAE, x86 processors add extra address lines to select the additional memory, so the physical address width increases from 32 bits to 36 bits, raising the maximum physical memory from 4 GB to 64 GB.

The 32-bit virtual (linear) address remains unchanged, so ordinary application software can continue to use 32-bit instructions; if the flat memory model is used, the address space of such software is still limited to 4 GB. The operating system uses page tables to map this 4 GB address space onto the 64 GB of physical memory, and the mapping is generally different for each process. In this way, the additional physical memory is still put to use even though no single program can address it all.
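As a hedged illustration of how 32-bit linear addresses are mapped onto wider physical addresses, the sketch below splits a linear address using the field widths commonly described for PAE (2-bit page directory pointer index, 9-bit page directory index, 9-bit page table index, 12-bit offset); the exact structures are defined by the processor documentation, and the values here are invented.

```c
#include <stdint.h>
#include <stdio.h>

/* Commonly described PAE decomposition of a 32-bit linear address:
 * 2 + 9 + 9 + 12 = 32 bits. The page-table entries are widened to
 * 64 bits, so they can hold frame numbers beyond the 32-bit range,
 * which is how 36-bit (64 GB) physical addresses are reached.       */
int main(void)
{
    uint32_t linear = 0xC07FE123;

    uint32_t offset = linear & 0xFFF;             /* bits 0-11  */
    uint32_t pte    = (linear >> 12) & 0x1FF;     /* bits 12-20 */
    uint32_t pde    = (linear >> 21) & 0x1FF;     /* bits 21-29 */
    uint32_t pdpte  = (linear >> 30) & 0x3;       /* bits 30-31 */

    printf("pdpte %u, pde %u, pte %u, offset 0x%X\n",
           (unsigned)pdpte, (unsigned)pde, (unsigned)pte, (unsigned)offset);
    return 0;
}
```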
