SE350 Study
My answers to final practice exams.
final-00
- Are both of the following two context-switch routines useful for hard real-time systems? Justify your answer clearly or provide a counter example. Switch routine 1 always uses exactly 13us to complete the context switch. Switch routine 2 on average requires 666ms, sometimes only needs 422ms, but never uses more than 672ms.
No, only the first context-switch routine is useful. Switch routine 1 is deterministic: it takes exactly 13 us every time, so its cost can be bounded precisely in a worst-case timing analysis. Switch routine 2 is bounded but highly variable, with a worst case of 672 ms; that is orders of magnitude larger and would consume most or all of the deadlines of typical hard real-time systems, which often must respond within microseconds to milliseconds.
- Draw the seven state diagram for process state transitions. Explain the nature of all transitions that leave the Running state in one sentence each.
Running → Ready: the running process's time slice expires (timeout) or it is preempted by a higher-priority process.
Running → Ready/Suspend: the process is preempted and swapped out of main memory (suspended) to free memory.
Running → Blocked: the process requests an event, resource, or I/O operation that it must wait for.
Running → Exit: the process terminates normally or is aborted.
- Explain the difference between user-level threads and kernel-level threads and explain the advantages of user-level threads over kernel-level threads.
With user-level threads (ULTs), all thread management is done in user space by a threads library, so the threads are invisible to the kernel; with kernel-level threads (KLTs), the kernel creates, schedules, and switches the threads itself. The advantages of ULTs: thread switching does not require a mode switch into the kernel, so it has much less overhead; the application can use a scheduling policy tailored to itself; and ULT code is more portable, since it needs no OS-specific kernel support, unlike KLTs.
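A minimal illustration of a user-space context switch using the POSIX ucontext API (a toy sketch, not necessarily how a real ULT library is built): the switch to and from the "thread" is just a library call, with no kernel thread scheduling involved.

```c
#include <stdio.h>
#include <ucontext.h>

/* Toy user-level "thread": the switch is done entirely by library calls
 * in user space; the kernel never schedules a second thread. */

static ucontext_t main_ctx, thread_ctx;
static char thread_stack[16 * 1024];

static void thread_fn(void) {
    printf("user-level thread running\n");
    /* returning resumes main_ctx because of uc_link below */
}

int main(void) {
    getcontext(&thread_ctx);
    thread_ctx.uc_stack.ss_sp   = thread_stack;
    thread_ctx.uc_stack.ss_size = sizeof thread_stack;
    thread_ctx.uc_link          = &main_ctx;   /* where to go when it returns */
    makecontext(&thread_ctx, thread_fn, 0);

    printf("main: switching to user-level thread\n");
    swapcontext(&main_ctx, &thread_ctx);       /* user-space context switch */
    printf("main: back in the original context\n");
    return 0;
}
```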
- Explain the term race condition. Provide a pseudo code example and an execution trace that demonstrates the problem.
A race condition occurs when two or more processes (or threads) access one or more shared variables and the final result depends on the relative order and timing of their execution.
Consider the following scenario, where a = 1 initially:
P1: a = a + 1
P2: a = a * 2
If P1 runs before P2, a = 4; if P2 runs before P1, a = 3. Worse, if the two updates interleave at the machine level (P1 loads a = 1, P2 loads a = 1, P1 stores 2, P2 stores 2), P1's update is lost and a = 2.
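A runnable version of the same race using two POSIX threads (my own sketch; the shared variable and the two functions mirror P1/P2 above):

```c
#include <pthread.h>
#include <stdio.h>

/* Two threads update a shared variable without synchronization. The final
 * value depends on how the read-modify-write sequences interleave, so
 * repeated runs can print different results, and an update can be lost. */

static int a = 1;                     /* shared, unprotected */

static void *p1(void *arg) { a = a + 1; return NULL; }   /* like P1 */
static void *p2(void *arg) { a = a * 2; return NULL; }   /* like P2 */

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, p1, NULL);
    pthread_create(&t2, NULL, p2, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("a = %d\n", a);            /* may be 3, 4, or even 2 */
    return 0;
}
```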
- Specify a set of tasks with their periods and execution times, so the task set is not schedulable with rate-monotonic scheduling but it is schedulable with earliest deadline first scheduling.
In RMS, the task with the shortest period gets the highest (fixed) priority; EDF always runs the ready job whose absolute deadline is nearest.
A: period = 5ms, execution time = 2ms
B: period = 7ms, execution time = 4ms
Utilization = 2/5 + 4/7 ≈ 0.97, which exceeds the two-task RMS bound 2(2^(1/2) − 1) ≈ 0.83 but is at most 1, so EDF can schedule it.
RMS: A runs 0–2, B runs 2–5, then A (released at t = 5) preempts and runs 5–7; B has received only 3 of its 4 ms when its deadline at t = 7 arrives, so it misses.
EDF: A runs 0–2, B runs 2–6 (at t = 5 the new A job has deadline 10, later than B's deadline 7, so B is not preempted), then A runs 6–8; continuing this way, no deadline is ever missed.
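A tiny simulation (my own sketch, 1 ms time steps) that checks this task set under both policies over one hyperperiod; a miss is reported whenever a new job is released while the previous one is unfinished:

```c
#include <stdio.h>

/* Simulate RMS and EDF for A (T=5, C=2) and B (T=7, C=4) over the
 * 35 ms hyperperiod, plus one extra tick so the deadlines at t=35
 * are also checked. */

#define NTASKS 2
#define HYPER  35

typedef struct { int period, exec, remaining, deadline; } Task;

static void run(int use_edf) {
    Task t[NTASKS] = { {5, 2, 0, 0}, {7, 4, 0, 0} };
    for (int time = 0; time <= HYPER; time++) {
        for (int i = 0; i < NTASKS; i++) {
            if (time % t[i].period == 0) {            /* new job released */
                if (t[i].remaining > 0)
                    printf("%s: task %c missed its deadline at t=%d\n",
                           use_edf ? "EDF" : "RMS", 'A' + i, time);
                t[i].remaining = t[i].exec;
                t[i].deadline  = time + t[i].period;  /* deadline = next release */
            }
        }
        /* pick the task to run for this millisecond */
        int pick = -1;
        for (int i = 0; i < NTASKS; i++) {
            if (t[i].remaining == 0) continue;
            if (pick < 0 ||
                (use_edf ? t[i].deadline < t[pick].deadline
                         : t[i].period   < t[pick].period))
                pick = i;
        }
        if (pick >= 0) t[pick].remaining--;
    }
}

int main(void) {
    run(0);   /* RMS: B misses its deadline at t=7 */
    run(1);   /* EDF: no misses are reported       */
    return 0;
}
```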
- N/A
- Buffering: (a) What is the use of buffers when communicating with I/O devices?
Buffers smooth out the speed mismatch between I/O devices and the CPU: because I/O is generally much slower, the OS can perform input transfers ahead of requests and output transfers after a request is made, letting the process keep computing. Buffering in system memory also avoids the problem of a process's own pages having to stay locked in main memory (so the process cannot be swapped out or suspended) for the duration of a slow transfer, which could otherwise contribute to deadlock.
(b) What is the advantage of circular buffers over double buffering? A circular buffer provides more than two slots, so the producer can run several transfers ahead of the consumer instead of strictly alternating between two buffers; this makes it much better at absorbing bursty I/O that double buffering cannot keep up with.
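A minimal single-producer/single-consumer ring buffer sketch (the names rb_put/rb_get and the size are illustrative): with N slots the producer can get up to N items ahead of the consumer, which is exactly what absorbs bursts better than double buffering (N = 2).

```c
#include <stdio.h>

#define RB_SIZE 8

static int slots[RB_SIZE];
static int head  = 0;   /* next slot to write */
static int tail  = 0;   /* next slot to read  */
static int count = 0;   /* slots currently in use */

static int rb_put(int value) {          /* producer side (e.g. device input) */
    if (count == RB_SIZE) return -1;    /* buffer full: producer must wait   */
    slots[head] = value;
    head = (head + 1) % RB_SIZE;
    count++;
    return 0;
}

static int rb_get(int *value) {         /* consumer side (e.g. the process)  */
    if (count == 0) return -1;          /* buffer empty: consumer must wait  */
    *value = slots[tail];
    tail = (tail + 1) % RB_SIZE;
    count--;
    return 0;
}

int main(void) {
    for (int i = 0; i < 5; i++) rb_put(i);   /* a burst of 5 inputs */
    int v;
    while (rb_get(&v) == 0) printf("%d ", v);
    printf("\n");                            /* prints: 0 1 2 3 4 */
    return 0;
}
```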
- No RAID
The disk just served the requests: 94, 100. The read buffer in the disk has requests listed in this chronological order: 184, 38, 150, 160, 90, 18, 39, 58, 55. Describe the order in which the disk executes these requests using the following scheduling algorithms: (a) Last in First Out
55, 58, 39, 18, 90, 160, 150, 38, 184
(b) Shortest Service Time First 90, 58, 55, 39, 38, 18, 150, 160, 184
(c) SCAN 150, 160, 184, 90, 58, 55, 39, 38, 18
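A small sketch of SSTF over this request list (assuming the head sits at 100 after serving 94 and 100, and ignoring arrivals during service); it reproduces the order in (b):

```c
#include <stdio.h>
#include <stdlib.h>

/* Shortest Service Time First: repeatedly pick the pending request
 * closest to the current head position. */
static void sstf(int head, int *req, int n) {
    int done[n];
    for (int i = 0; i < n; i++) done[i] = 0;
    for (int served = 0; served < n; served++) {
        int best = -1;
        for (int i = 0; i < n; i++) {
            if (done[i]) continue;
            if (best < 0 || abs(req[i] - head) < abs(req[best] - head))
                best = i;
        }
        printf("%d ", req[best]);
        head = req[best];
        done[best] = 1;
    }
    printf("\n");   /* prints: 90 58 55 39 38 18 150 160 184 */
}

int main(void) {
    int req[] = {184, 38, 150, 160, 90, 18, 39, 58, 55};
    sstf(100, req, sizeof req / sizeof req[0]);
    return 0;
}
```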
- (was hard for me) SCAN typically performs as well as shortest service time first. Describe a scenario when SCAN performs poorly compared to shortest service time first.
The previous two requests were 94 and 100, so the head sits at 100 moving upward. The pending requests are 99, 101, 98, 120. SCAN services 101, 120, 99, 98 (movement = 1 + 19 + 21 + 1 = 42); SSTF services 101, 99, 98, 120 (movement = 1 + 2 + 1 + 22 = 26).
SCAN commits to its sweep direction, so it passes over the nearby requests just behind the head; SSTF exploits that locality and serves them first.
- Assume a clock page replacement algorithm with a 1-bit use flag, where the OS monitors the movement of the pointer in [increments per ms]. (a) If the rate is below a given lower threshold, what does this mean? [9p] It means the page-fault (and hence page-replacement) rate is low: the pointer barely moves because the pages being referenced are already resident in frames, so memory is not under pressure. (b) If the rate is below this threshold, should the OS increase or decrease the multiprogramming level?
Increase the multiprogramming level: memory is not fully utilized, so more processes can be admitted to improve CPU utilization, even though the page-fault rate will rise somewhat.
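A minimal sketch of the clock algorithm itself, showing what "pointer increments" means (the frame contents, use bits, and the counter are illustrative values):

```c
#include <stdio.h>

/* Clock page replacement with a 1-bit use flag. Each step of the hand,
 * whether it clears a use bit or selects a victim, counts as one
 * pointer increment, which is what the OS above is monitoring. */

#define NFRAMES 4

static int  page[NFRAMES]    = {3, 7, 1, 9};   /* page loaded in each frame */
static int  use_bit[NFRAMES] = {1, 0, 1, 1};
static int  hand = 0;                          /* the clock pointer */
static long pointer_moves = 0;

static int clock_replace(int new_page) {
    for (;;) {
        if (use_bit[hand] == 0) {              /* not used since last pass: victim */
            int victim = hand;
            page[victim]    = new_page;
            use_bit[victim] = 1;
            hand = (hand + 1) % NFRAMES;
            pointer_moves++;
            return victim;
        }
        use_bit[hand] = 0;                     /* give the page a second chance */
        hand = (hand + 1) % NFRAMES;
        pointer_moves++;
    }
}

int main(void) {
    int f = clock_replace(5);
    printf("page 5 placed in frame %d after %ld pointer increments\n",
           f, pointer_moves);
    return 0;
}
```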
- How are the working set size and the principle of locality linked together?
The working set of a process is the set of pages it has referenced during the most recent window of time, and its size is often used to decide how many frames (its resident set) the process should be given. The principle of locality says that a process's references cluster: over a short interval it touches only a small, slowly changing subset of its pages. Because of locality, the working set is small relative to the whole process and changes slowly, so keeping just the working set resident is enough to keep the page-fault rate low.
- What is the main problem with a global replacement, fixed allocation strategy for virtual memory management?
A global-replacement, fixed-allocation strategy is contradictory: global replacement means the victim frame on a page fault may belong to another process, which would grow the faulting process's resident set and shrink the other process's. The resident set sizes are then no longer fixed, so the strategy effectively becomes variable allocation.
final-01
SE350 Final W2012
SE350 Final W2013
SE350 Final W2016
- Easy
- A zombie process is a process that has terminated but still has an entry in the process table because its parent has not yet collected its exit status (e.g., via wait).
- Done
- ??
- (??) Explain two advantages of hierarchic directories in contrast to file lists for individual users.
Better organization: files can be grouped by project, type, or subsystem, and the same file name can be reused in different directories. More efficient search and access: resolving a nested path only examines the directories along that path, instead of scanning one long flat list.
SE350 Final W2017
- Name two important advantages/disadvantages for each of the following I/O handling techniques: programmed I/O, interrupt-driven I/O. [8p]
Programmed I/O: Advantages: easy to implement; gives the CPU precise control over the timing of each transfer. Disadvantages: the CPU busy-waits, repeatedly polling the status of the I/O module; it scales poorly because the CPU is tied up for the full duration of every transfer.
Interrupt-driven I/O: Advantages: faster overall than programmed I/O, since the CPU can run another process while the current one is blocked on I/O; the CPU reacts to the device as soon as the interrupt arrives instead of polling. Disadvantages: interrupts add handling overhead (saving state, running the handler); the CPU must still copy the data itself, e.g. for disk transfers, unlike DMA.
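A sketch of what programmed (polled) I/O looks like in code; the register addresses, bit layout, and names below are hypothetical, not from any real device:

```c
#include <stdint.h>

/* Hypothetical memory-mapped UART registers, for illustration only. */
#define UART_STATUS ((volatile uint32_t *)0x40001000u)
#define UART_DATA   ((volatile uint32_t *)0x40001004u)
#define TX_READY    (1u << 0)

/* Programmed I/O: busy-wait until the device can accept a byte, then
 * write it. The CPU is tied up in the while loop the whole time. */
void uart_putc_polled(char c) {
    while ((*UART_STATUS & TX_READY) == 0)
        ;                                /* CPU busy-waits here */
    *UART_DATA = (uint32_t)c;
}

/* With interrupt-driven I/O the device would instead raise an interrupt
 * when TX_READY becomes set, and the CPU could run another process in
 * the meantime. */
```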
- Explain the rationale for moving from a uni-programming batch system (S1) to a multi-programming time-sharing system (S2). Provide an illustrative example that clarifies the rationale and demonstrates two advantages of S2 over S1. [10p]
Consider the following processes with their respective execution times:
- P1: 100ms
- P2: 30ms
- P3: 5ms
With S1, P3 only executes after P1 and P2 finish (130 ms of waiting). With S2, the processes share the CPU in a round-robin fashion, so P3 starts almost immediately (with a 1 ms time slice it starts after 2 ms) and finishes long before it would under S1. This demonstrates two advantages of S2 over S1: much better response time for short and interactive jobs, and better CPU utilization, since the CPU can run another process whenever the current one blocks on I/O instead of sitting idle. A small comparison sketch follows.
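A small sketch comparing completion times for these three bursts under run-to-completion (S1) and 1 ms round-robin (S2); it ignores context-switch cost and I/O for simplicity:

```c
#include <stdio.h>

#define N 3

int main(void) {
    int burst[N] = {100, 30, 5};          /* P1, P2, P3 in ms */
    int remaining[N], finish_rr[N];
    int time = 0, left = N;

    /* S1 (batch): completion times are just cumulative sums */
    int cum = 0;
    for (int i = 0; i < N; i++) {
        cum += burst[i];
        printf("S1: P%d finishes at %3d ms\n", i + 1, cum);
        remaining[i] = burst[i];
    }

    /* S2 (round-robin with a 1 ms quantum) */
    while (left > 0) {
        for (int i = 0; i < N; i++) {
            if (remaining[i] == 0) continue;
            remaining[i]--;
            time++;
            if (remaining[i] == 0) {
                finish_rr[i] = time;
                left--;
            }
        }
    }
    for (int i = 0; i < N; i++)
        printf("S2: P%d finishes at %3d ms\n", i + 1, finish_rr[i]);
    return 0;
}
```

With these numbers, P3 finishes at 135 ms under S1 but at about 15 ms under S2 (and P2 at 65 ms instead of 130 ms), while P1 only slips from 100 ms to 135 ms.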
- Name and describe at least three processor registers that an operating system must preserve during a context switch between processes. [6p]
Program counter (holds the address of the next instruction to fetch), stack pointer (points to the top of the process's current stack), and program status word (condition codes, interrupt-enable bits, and execution mode).
- Explain the concept and purpose of a working directory for processes.
The working directory of a process is the directory against which its relative path names are resolved. It lets the process refer to the files it operates on with short relative paths instead of full absolute paths, and it gives the process a current location in the file system that it can change as it runs.
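A tiny illustration of how the working directory affects relative paths ("/tmp" and "notes.txt" are made-up names used only for this sketch):

```c
#include <stdio.h>
#include <unistd.h>

int main(void) {
    if (chdir("/tmp") != 0) {            /* change the working directory */
        perror("chdir");
        return 1;
    }
    /* The relative path resolves against the new working directory,
     * so this opens /tmp/notes.txt. */
    FILE *f = fopen("notes.txt", "w");
    if (f == NULL) {
        perror("fopen");
        return 1;
    }
    fputs("resolved relative to the working directory\n", f);
    fclose(f);
    return 0;
}
```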
- Explain a memory placement policy that requires periodic compaction. Justify your answer by describing the policy, the purpose of compaction, and whether the policy tackles internal or external fragmentation. [9p]
Dynamic partitioning is a memory placement policy that requires periodic compaction. In dynamic partitioning, each process is given a partition of exactly the size it requires, created dynamically when the process is loaded. As processes terminate and new ones are loaded, small holes accumulate between the partitions; compaction periodically shifts the resident processes together so that the free memory forms one contiguous block. The policy therefore suffers from, and compaction tackles, external fragmentation (the wasted space lies between partitions, not inside them).
- 6-10: NOT COVERED.
- 11. (TODO: this is gonna take time) Assume a system with three resource types and four active processes. The system uses the banker's algorithm to determine whether a deadlock exists. (a) Create a configuration where all three resource types together have more than 10 resources, and the resulting configuration should lead to a deadlock that involves at least two processes. [4p]
(b) Run the banker's algorithm to detect the deadlock in the system.
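Not an answer to (a)/(b), just a sketch of the deadlock-detection pass the question refers to: mark every process whose outstanding request can be met from the work vector, release its allocation, and repeat; unmarked processes are deadlocked. The matrices are toy values (smaller than the exam's >10-resource requirement) chosen only to show the mechanics; with them, P0 and P1 end up deadlocked on each other.

```c
#include <stdio.h>

#define NPROC 4
#define NRES  3

/* Toy matrices for illustration only. */
static int alloc[NPROC][NRES]   = {{1,0,0},{0,1,0},{0,0,1},{0,0,0}};
static int request[NPROC][NRES] = {{0,1,0},{1,0,0},{0,0,0},{0,0,1}};
static int available[NRES]      = {0,0,1};

static int leq(const int *a, const int *b) {   /* a <= b componentwise? */
    for (int r = 0; r < NRES; r++)
        if (a[r] > b[r]) return 0;
    return 1;
}

int main(void) {
    int work[NRES], marked[NPROC] = {0};
    for (int r = 0; r < NRES; r++) work[r] = available[r];

    int progress = 1;
    while (progress) {
        progress = 0;
        for (int p = 0; p < NPROC; p++) {
            if (!marked[p] && leq(request[p], work)) {
                for (int r = 0; r < NRES; r++) work[r] += alloc[p][r];
                marked[p] = 1;       /* p could finish and release its resources */
                progress = 1;
            }
        }
    }
    for (int p = 0; p < NPROC; p++)
        if (!marked[p]) printf("P%d is deadlocked\n", p);   /* prints P0 and P1 */
    return 0;
}
```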
- (see pdf for question)
(a)
(b)
- Explain the fundamental problem of the following page replacement strategy: global replacement fixed allocation.
In fixed allocation, each process's resident set size (the number of frames it holds in main memory) is fixed. With global replacement, however, the victim page on a fault may be taken from any process, so the faulting process gains a frame while another process loses one, and the resident set sizes change. That is no longer fixed allocation but variable allocation, so the combination is contradictory.