9.16 Consider a system that uses pure demand paging.

a. When a process first starts execution, how would you characterize the page fault rate?
b. Once the working set for a process is loaded into memory, how would you characterize the page fault rate?
c. Assume that a process changed its locality and the size of the new working set is too large to be stored in available free memory. Identify some options system designers could choose from to handle this situation.

a. When a process first starts execution under pure demand paging, the page fault rate is very high. No pages are loaded in advance, so the very first instruction fetch faults, as does the first reference to every other page the process touches. On each fault the OS retrieves the required page from secondary storage, places it in a free frame, and restarts the faulting instruction.

b. Once the working set for a process is loaded into memory, the page fault rate drops sharply. The working set is the set of pages the process is actively referencing at a given time; while it is resident, most page references are satisfied from memory, and faults occur mainly when the process's locality shifts.
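Both behaviors can be seen in a small simulation. The sketch below counts faults under pure demand paging with LRU eviction (one illustrative policy; the reference string and frame count are hypothetical):

```python
from collections import OrderedDict

def demand_paging_faults(references, frames):
    """Count page faults for a reference string under pure demand paging:
    no page is loaded until it is referenced. LRU is used for eviction."""
    memory = OrderedDict()  # resident pages, ordered oldest -> most recent
    faults = 0
    for page in references:
        if page in memory:
            memory.move_to_end(page)        # hit: refresh recency
        else:
            faults += 1                     # miss: demand the page in
            if len(memory) >= frames:
                memory.popitem(last=False)  # evict least recently used
            memory[page] = None
    return faults

# Hypothetical trace: working set {0, 1, 2} referenced repeatedly.
trace = [0, 1, 2] * 11
startup_faults = demand_paging_faults(trace[:3], frames=4)       # cold start: 3
steady_faults = demand_paging_faults(trace, frames=4) - startup_faults  # 0 more
```

Every page faults exactly once at startup (part a); once the working set is resident, no further faults occur (part b).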

c. If a process changes its locality and the size of the new working set is too large to fit into available free memory, system designers have several options:

1. Increase the size of physical memory: If the hardware and budget permit, the system can be given more memory so that the larger working set fits entirely in RAM. This minimizes page faults, but it is not always feasible or cost-effective.

2. Implement a page replacement algorithm: When adding memory is not an option, the system must decide which resident pages to evict to make room for the new working set. Common choices include Least Recently Used (LRU), FIFO, and Clock (a low-overhead approximation of LRU); the Optimal algorithm requires future knowledge of the reference string and is useful only as a benchmark.
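As an illustration of one of these policies, here is a minimal sketch of the Clock (second-chance) algorithm; the reference strings used to exercise it are made up for the example:

```python
def clock_faults(references, nframes):
    """Page-fault count under the Clock (second-chance) replacement policy.
    A circular 'hand' sweeps the frames, clearing reference bits until it
    finds a page whose bit is 0; that page is the victim."""
    frames = [None] * nframes   # resident page in each frame
    ref_bit = [0] * nframes     # second-chance (reference) bits
    hand = 0
    faults = 0
    for page in references:
        if page in frames:
            ref_bit[frames.index(page)] = 1   # hit: mark as recently used
            continue
        faults += 1
        # Sweep: pages with ref_bit set get a second chance.
        while ref_bit[hand]:
            ref_bit[hand] = 0
            hand = (hand + 1) % nframes
        frames[hand] = page                   # replace the victim
        ref_bit[hand] = 1
        hand = (hand + 1) % nframes
    return faults
```

With 3 frames, the string `[0, 1, 2, 0, 1, 2]` faults only on the three cold misses, while `[0, 1, 2, 3]` faults four times because the fourth page forces an eviction.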

3. Use memory-mapped files: Instead of keeping the entire working set resident, parts of it can be backed by memory-mapped files, so that pages are fetched from secondary storage on demand and clean pages can simply be discarded rather than written to a swap area. This effectively lets secondary storage extend physical memory, at the cost of slower access to non-resident pages.
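A brief sketch of the idea using Python's `mmap` module (the scratch file is hypothetical and stands in for a large data region): the mapping makes the whole file addressable, but the OS faults in only the pages that are actually touched.

```python
import mmap
import os
import tempfile

# Hypothetical 1 MB backing file standing in for a large data region.
fd, path = tempfile.mkstemp()
try:
    os.write(fd, b"x" * 1_000_000)
    with mmap.mmap(fd, 0, access=mmap.ACCESS_READ) as m:
        # Only the pages containing these bytes are demand-paged in;
        # the untouched middle of the file can stay on disk.
        first = m[0:1]
        last = m[-1:]
finally:
    os.close(fd)
    os.remove(path)
```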

4. Implement memory compression: Rather than writing victim pages to secondary storage, the system can compress them and keep them in RAM. Because many pages are sparse or highly redundant, compression can effectively enlarge the memory available to the working set, trading some CPU time for avoided I/O.
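The payoff depends on page contents. The sketch below uses `zlib` on a hypothetical 4 KiB page that is mostly zeros (as many heap pages are) to show why compressed pages can be much smaller than their frames:

```python
import zlib

PAGE_SIZE = 4096
# Hypothetical page: a small payload surrounded by zero fill.
page = b"payload" + bytes(PAGE_SIZE - len(b"payload"))

compressed = zlib.compress(page)
ratio = len(compressed) / len(page)   # well under 1.0 for sparse pages
```

A real compressed-memory store (e.g. a zram-style swap device) would use a faster codec, but the space saving it exploits is the same.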

5. Employ swapping: Swapping temporarily moves entire processes, or large parts of them, between main memory and secondary storage. If the new working set cannot fit, the system can swap out other processes, or suspend this one, to free frames. The cost is significant I/O overhead, and if memory remains overcommitted the system can degenerate into thrashing, spending more time moving pages than executing.
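The decision all of these options respond to, whether the new working set fits, can be made concrete with Denning's working-set model, WS(t, Δ): the set of pages referenced in the last Δ references ending at time t. A minimal sketch, using a made-up reference trace whose locality shifts partway through:

```python
def working_set(references, t, delta):
    """WS(t, delta): the distinct pages referenced in the window of the
    last `delta` references ending at (and including) time t."""
    start = max(0, t - delta + 1)
    return set(references[start : t + 1])

# Hypothetical trace: locality {1, 2}, then a shift to {7, 8, 9}.
trace = [1, 2, 1, 2, 1, 2, 7, 8, 9, 7, 8, 9]
old_ws = working_set(trace, 5, 4)    # {1, 2}
new_ws = working_set(trace, 11, 4)   # {7, 8, 9}
# If len(new_ws) exceeds the number of free frames, one of the options
# above (more memory, replacement, compression, swapping) must cover the gap.
```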