I need help with the following:

Consider a system that uses pure demand paging.

a. When a process first starts execution, how could you characterize the page fault rate?

b. Once the working set for a process is loaded into memory, how would you characterize the page fault rate?

c. Assume that a process changes its locality and the size of the new working set is too large to be stored in available free memory. Identify some options system designers could choose from to handle this situation.

You can look up some references, such as:

http://en.wikipedia.org/wiki/Page_replacement_algorithm
to start.

a. When a process first starts execution in a system that uses pure demand paging, the page fault rate is high. Pure demand paging brings a page into memory only when it is first referenced, so the process begins with none of its pages resident, and the first reference to each page (code and data alike) causes a page fault that must be serviced from secondary storage (such as a hard disk). The fault rate is therefore highest at start-up, while the process is still faulting in its working set.

b. Once the working set for a process is loaded into memory, the page fault rate drops sharply. The working set contains the pages the process is currently referencing, so as long as those pages remain resident, most memory references are satisfied without going to secondary storage. Faults then occur mainly when the process shifts to a new locality.
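The behavior in parts a and b can be illustrated with a minimal sketch (not any particular OS implementation): simulate pure demand paging with enough frames to hold the working set, and the fault rate falls to zero once the working set is resident.

```python
def run(references):
    resident = set()  # pages currently in memory
    faults = []       # 1 = page fault, 0 = hit, per reference
    for page in references:
        if page not in resident:
            resident.add(page)  # demand-load the page on first touch
            faults.append(1)
        else:
            faults.append(0)
    return faults

# Working set {0, 1, 2}: the first pass faults on every page (part a),
# later passes hit every time (part b).
refs = [0, 1, 2] * 4
faults = run(refs)
print(faults[:3])   # start-up: [1, 1, 1]
print(faults[3:])   # steady state: all zeros
```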

c. When a process changes its locality and the size of the new working set is too large to be stored in available free memory, system designers have several options to handle this situation:

1. Increase the available physical memory: If the system has the capability to add more physical memory, this would allow more pages to be stored in memory, potentially accommodating the larger working set.

2. Employ effective page replacement policies: The system could rely on replacement policies such as LRU (Least Recently Used) or LFU (Least Frequently Used). These policies evict the pages judged least likely to be referenced in the near future, freeing frames for the new working set.
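As a rough sketch of how such a policy behaves (a toy model, not a kernel implementation), LRU can be simulated with an ordered map that tracks recency. It also shows what happens when the frame count is smaller than the working set, which is exactly the situation in part c:

```python
from collections import OrderedDict

def lru_faults(references, frames):
    memory = OrderedDict()  # page -> None, ordered oldest-to-newest
    faults = 0
    for page in references:
        if page in memory:
            memory.move_to_end(page)        # refresh recency on a hit
        else:
            faults += 1
            if len(memory) >= frames:
                memory.popitem(last=False)  # evict least recently used
            memory[page] = None
    return faults

refs = [0, 1, 2] * 3
print(lru_faults(refs, frames=3))  # 3: only the compulsory faults
print(lru_faults(refs, frames=2))  # 9: every reference faults
```

With three frames for a three-page working set, only the initial compulsory faults occur; with two frames, every single reference faults, which is the thrashing behavior that the options in this list are trying to avoid.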

3. Swap out pages or whole processes to secondary storage: If physical memory is limited, the system can push pages that are no longer part of any working set out to a swap area on secondary storage, or suspend and swap out entire lower-priority processes to free frames. When the process later references a swapped-out page, a page fault brings it back into memory. This allows the system to accommodate larger working sets without requiring additional physical memory, at the cost of extra I/O.

4. Use memory-mapped files: Memory-mapped files allow portions of files to be mapped directly into the virtual address space of a process. This can be useful if the process frequently accesses large amounts of file data that won't fit in physical memory. The system maps the file into memory and demand-loads only the pages of the file that are actually referenced.
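A small example of option 4, using Python's `mmap` module (the file name and sizes here are made up for illustration). The OS demand-loads each page of the mapping the first time it is touched; the rest of the file stays on disk:

```python
import mmap
import os
import tempfile

# Create a file with two 4 KiB regions (4096 is a common page size,
# assumed here for illustration).
path = os.path.join(tempfile.mkdtemp(), "data.bin")
with open(path, "wb") as f:
    f.write(b"A" * 4096 + b"B" * 4096)

with open(path, "rb") as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
        # Touching a byte faults in only the page that contains it.
        print(m[0:1])        # b'A'  (first page)
        print(m[4096:4097])  # b'B'  (second page)
```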

The choice among these options will depend on the specific system constraints, such as available hardware resources and performance requirements. System designers need to carefully evaluate the trade-offs and choose the most suitable solution for handling larger working sets in a system that uses pure demand paging.