Glossary and Concepts
- Convoy Effect: A situation that arises under First Come, First Serve scheduling when a long-running, CPU-bound job gets the CPU and multiple short-running and/or I/O-bound jobs queue up behind it. Those jobs could otherwise have run concurrently (if I/O bound) or quickly run to completion before the long-running CPU-bound job. Essentially, jobs line up behind the long-running job.
- Interrupt Controller: All of the device IRQ lines are connected to the interrupt controller, which in turn is connected to the CPU via a single interrupt line. The controller uses the ID of each device's IRQ line as a priority indicating the relative importance of its interrupts: the lower the number, the higher the priority.
- Interrupt Vector Table: An array of function pointers, each pointing to the ISR for a specific device or interrupt number. The table is initialized at boot time, and its base address is typically held in a CPU register so the hardware can locate it when an interrupt arrives.
- IRQ: Interrupt Request line. A dedicated line on which a particular device is expected to signal interrupts about its operation. Each device has a wire connected to the CPU (via the interrupt controller), and it is on this line that the device raises its interrupts.
- ISR: Interrupt Service Routine, also called an interrupt handler. The piece of the OS invoked by the CPU in response to an interrupt request from a hardware device. It preempts the currently running process, performs whatever operations the device's interrupt requires, and when it completes the interrupted process is resumed.
- Livelock: A situation in which processes are not blocked but still make no progress, e.g. they loop forever retrying to acquire a lock on a critical section, each repeatedly reacting to the others without ever getting anywhere. Unlike deadlock, the processes are actively running; they just never advance.
- LWP: LightWeight Process is a “thread” of execution that can be seen and scheduled by the kernel.
- PCB: A Process Control Block is a data structure that contains the information the OS keeps about a specific process: process state, PID, program counter, registers, memory limits, list of open files, CPU scheduling information, memory management information, accounting information, I/O status, etc. This is the information that is saved and then restored when a given process's context is switched.
- PID: Process IDentifier. The unique number the OS assigns to each process; it is the handle used to refer to the process in its PCB and in system calls.
- Process: An abstraction that represents the entirety of a running program. This includes threads that have access to shared memory. Essentially, all of the running code and operations under the auspices of a single PID.
- Quantum: The amount of time that a job is allowed to run before it can be interrupted (preempted) by the scheduler.
- Thread: A task within a process; a discrete unit of work in progress. Threads usually share memory with the other threads in the same process and must cooperate with one another so as not to corrupt that memory.
- Trap: A trap is a software-generated interrupt. It can be caused by an error in an instruction (dividing by zero, an illegal memory access) or by a user program requesting a service from the OS via a system call. Traps are synchronous, and the user-space program is paused while the syscall executes. Traps are a subset of interrupts. A small syscall example follows this list.
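As an illustration of the system-call flavor of trap, the sketch below issues a write system call directly through Linux's syscall() wrapper; the call traps into the kernel and the calling thread is paused until the kernel returns. This is Linux/glibc-specific and equivalent to calling write(1, ...).

#include <unistd.h>
#include <sys/syscall.h>

int main(void) {
    // syscall() traps into the kernel; SYS_write is the write system call.
    // The process is suspended until the kernel finishes and returns.
    syscall(SYS_write, 1, "hello from a syscall\n", 21);
    return 0;
}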
A Process’s Memory
The process’s memory is laid out in the following fashion
|--------------|
| stack |
|--------------|
| | |
| v |
| |
| ^ |
| | |
|--------------|
| heap |
|--------------|
| bss |
|--------------|
| data |
|--------------|
| text |
|--------------|
The stack grows downward as function calls are pushed onto it, and the heap grows upward as memory is dynamically allocated for objects. The bss (Block Started by Symbol) segment contains statically allocated variables that are declared but not given an initial value in the program; they are zero-filled at load time. The data segment stores global and static variables that are given initial values in the executable. The text segment stores the program code.
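A rough way to see this layout is to print the address of one object from each region, as in the sketch below. Exact addresses and their ordering vary by OS, compiler, and address-space randomization, so treat the output as indicative only.

#include <stdio.h>
#include <stdlib.h>

int initialized_global = 42;   // data segment: defined with an initial value
int uninitialized_global;      // bss segment: declared but not initialized

int main(void) {
    int local = 0;                             // stack
    int *dynamic = malloc(sizeof *dynamic);    // heap

    printf("text  : %p\n", (void *)main);
    printf("data  : %p\n", (void *)&initialized_global);
    printf("bss   : %p\n", (void *)&uninitialized_global);
    printf("heap  : %p\n", (void *)dynamic);
    printf("stack : %p\n", (void *)&local);

    free(dynamic);
    return 0;
}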
Scheduling Algorithms
- FIFO/FCFS: First In, First Out/First Come, First Serve.
- SJF: Shortest Job First
- STCF: Shortest CPU Time To Completion First
- PRI: Priority Scheduling
- RR: Round Robin
- MQS: Multilevel Queue Scheduling
- MFQ: Multilevel Feedback Queue
Threading and Concurrency
The primary challenge in multi-threaded, or concurrent, processing is ensuring that only one thread reads or mutates shared memory at a time. The overall term for this is synchronization. A number of algorithms have been devised over time to implement mutexes purely in software, but they are very tricky to get right. Having underlying hardware support makes synchronization both easier to implement and more efficient.
One of the ways is to disable interrupts to ensure that critical regions of the code cannot be preempted. This is a naive approach and only works on uniprocessors.
Mutexes
Test And Set
One such hardware instruction enables atomic testing and setting of a variable. The basic idea: assume a given memory location is initialized to 0 ("free"). The instruction atomically sets the location to 1 and returns the value that was there before. A return value of 1 means another process already holds the lock; a return value of 0 means we just acquired it. We then execute the critical section of code and write 0 back to release the lock.
With this primitive we can build a simple spin-lock mutex as follows:
//
// Model of the test-and-set atomic instruction. On real hardware the read
// and the write below are guaranteed to complete as one indivisible step;
// the C is only here to illustrate the semantics.
//
int test_and_set(int *mem_loc) {
    int old = *mem_loc;   // read the current value
    *mem_loc = 1;         // unconditionally mark the location as "held"
    return old;           // 0: we took the lock, 1: someone else holds it
}

// ---------------------------------------------------------------------
// Spin-lock mutex built on test_and_set
//
int mutex = 0;            // 0 = free, 1 = held

void acquire_mutex(void) {
    // Spin forever until test_and_set returns 0
    while (test_and_set(&mutex) == 1)
        ;
}

// ... execute the critical section of the code ...

void release_mutex(void) {
    mutex = 0;            // mark the lock as free again
}
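Note that a plain C function like the model above is not actually atomic; in real code the primitive would come from the hardware or the language runtime, for example C11's atomic_flag_test_and_set from <stdatomic.h> (or, on x86, an instruction such as XCHG with a memory operand, which is implicitly locked).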
Compare And Swap
Compare-and-swap is also an atomic hardware instruction, and it maps more directly onto the instruction sets of most modern CPUs. The basic idea: we atomically test whether a variable holds an expected value. If it does, we set it to a new value and return successfully; we have acquired the lock. If it does not hold the expected value, the swap fails and we retry until it succeeds. Once we have acquired the lock and finished executing our critical section of code, we write the default "free"/expected value back to release it.
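A minimal sketch of a spin lock built on compare-and-swap, using C11's atomic_compare_exchange_strong from <stdatomic.h> as the CAS primitive; the function names here are illustrative, not part of any particular library.

#include <stdatomic.h>

atomic_int cas_lock = 0;   // 0 = free, 1 = held

void cas_acquire(atomic_int *lock) {
    int expected = 0;
    // CAS: if *lock equals 'expected' (0), atomically set it to 1 and
    // return true. On failure the observed value is written back into
    // 'expected', so reset it before retrying.
    while (!atomic_compare_exchange_strong(lock, &expected, 1)) {
        expected = 0;   // lock was held; spin and try again
    }
}

void cas_release(atomic_int *lock) {
    atomic_store(lock, 0);   // write the "free" value back
}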
Semaphores
With hardware support and the abstraction of a mutex we can build semaphores. A semaphore is a type of lock that allows up to N processes (or threads) to hold a logical lock at the same time. We build upon the mutex by using it to protect the critical section of code in which we increment or decrement the semaphore's counter.
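A minimal counting-semaphore sketch along those lines, built on a POSIX mutex. The mutex only protects the counter; waiting is done by spinning, which a real implementation would replace with sleeping and wakeup (e.g. via a condition variable). The names are illustrative.

#include <pthread.h>

typedef struct {
    pthread_mutex_t lock;   // protects 'count'
    int count;              // how many more holders are allowed in
} counting_sem;

void csem_init(counting_sem *s, int n) {
    pthread_mutex_init(&s->lock, NULL);
    s->count = n;
}

void csem_wait(counting_sem *s) {
    for (;;) {
        pthread_mutex_lock(&s->lock);
        if (s->count > 0) {               // a slot is free: take it
            s->count--;
            pthread_mutex_unlock(&s->lock);
            return;
        }
        pthread_mutex_unlock(&s->lock);   // no slot: release and retry
    }
}

void csem_post(counting_sem *s) {
    pthread_mutex_lock(&s->lock);
    s->count++;                           // give a slot back
    pthread_mutex_unlock(&s->lock);
}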
Condition Variables
A synchronization primitive that provides a queue in which threads waiting for a resource to become available can be parked. A thread tests whether the resource is available and, if so, uses it. If not, it adds itself to the queue of waiting threads. When a thread is finished with the resource it wakes up the next thread in the queue, or no one if the queue is empty. Some semantics also include the ability to broadcast to all waiting threads to wake them up. There are three basic operations that condition variables support:
- Wait: Add a thread to the queue and put it to sleep
- Signal: Remove a single thread from the head of the queue and wake it up
- Broadcast: Remove and wake up all threads waiting on the queue
A condition variable is always used together with a mutex: the caller holds the mutex around its wait/signal calls, and the implementation typically needs a lock of its own to protect the wait queue, so more than one mutex is involved.
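The usual wait/signal pattern, sketched with POSIX condition variables; the 'available' counter standing in for the shared resource is illustrative.

#include <pthread.h>

pthread_mutex_t m  = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t  cv = PTHREAD_COND_INITIALIZER;
int available = 0;   // the shared "resource" being waited on

void consume(void) {
    pthread_mutex_lock(&m);
    // Re-test the predicate in a loop: pthread_cond_wait releases the mutex,
    // sleeps on the queue, and re-acquires the mutex when woken, but the
    // condition may no longer hold by then (Mesa semantics).
    while (available == 0)
        pthread_cond_wait(&cv, &m);
    available--;                  // use the resource
    pthread_mutex_unlock(&m);
}

void produce(void) {
    pthread_mutex_lock(&m);
    available++;                  // make a resource available
    pthread_cond_signal(&cv);     // wake one waiter; _broadcast wakes them all
    pthread_mutex_unlock(&m);
}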
Monitors
A higher level synchronization abstraction that enables programmers to encapsulate code in a structure that will guarantee that only a single thread of execution will run the code in the monitor at a time.
Not only code but also data can be protected by the monitor: that data can only be accessed from within the monitor, guaranteeing mutual exclusion for both the execution of the code and the access to the data inside it.
Access to monitors is typically brokered by synchronized wait and signal queues.
There are multiple types of monitor semantics:
- Mesa Semantics
- Hoare Semantics
- Brinch Hansen Semantics
One of the most commonly seen implementations of this sort of monitor is the synchronized block (or method) in Java.
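C has no built-in monitors, but the idea can be emulated by bundling the protected data with a lock and only exposing operations that take the lock around every access. A small sketch, with illustrative names:

#include <pthread.h>

typedef struct {
    pthread_mutex_t lock;   // the implicit monitor lock
    long balance;           // protected data: only touched while holding lock
} account_monitor;

void account_deposit(account_monitor *a, long amount) {
    pthread_mutex_lock(&a->lock);     // "enter" the monitor
    a->balance += amount;             // only one thread runs this at a time
    pthread_mutex_unlock(&a->lock);   // "exit" the monitor
}

long account_read(account_monitor *a) {
    pthread_mutex_lock(&a->lock);
    long b = a->balance;
    pthread_mutex_unlock(&a->lock);
    return b;
}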
Memory barriers
A memory barrier (or fence) is an instruction that enforces an ordering constraint: memory operations issued before the barrier are guaranteed to become visible before those issued after it. They are needed because both compilers and CPUs may reorder loads and stores.
https://en.wikipedia.org/wiki/Memory_barrier
Terms
- Race Condition: A bug in which the outcome of a computation depends on the timing or interleaving of threads/processes accessing shared state without proper synchronization.
- Deadlocks: Two or more processes each hold a resource while waiting for a resource held by another, so none of them can ever proceed.
- Livelocks: Processes keep actively running (retrying, backing off) but make no forward progress; see Livelock in the glossary above.
- Barriers: A synchronization point that every participating thread must reach before any of them is allowed to continue.
fork() and exec(): fork() creates a new child process that is a copy of the caller (it returns 0 in the child and the child's PID in the parent); exec() replaces the calling process's image with a new program while keeping the same PID. The common pattern is to fork() and then exec() in the child.
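The classic fork/exec/wait pattern, sketched below; the program being exec'd (ls -l) is just an example.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();               // duplicate the calling process
    if (pid < 0) {
        perror("fork");
        exit(1);
    }
    if (pid == 0) {
        // Child: replace this process image with a new program
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");             // only reached if exec fails
        exit(1);
    }
    // Parent: wait for the child to finish
    int status;
    waitpid(pid, &status, 0);
    printf("child %d exited\n", (int)pid);
    return 0;
}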
Links
- Difference between monitors and semaphores
- https://cseweb.ucsd.edu/classes/sp17/cse120-a/applications/ln/lecture2.html
- https://cseweb.ucsd.edu/classes/sp17/cse120-a/applications/ln/lecture3.html
- https://cseweb.ucsd.edu/classes/sp17/cse120-a/applications/ln/lecture4.html
- https://cseweb.ucsd.edu/classes/sp17/cse120-a/applications/ln/lecture5.html
- https://cseweb.ucsd.edu/classes/sp17/cse120-a/applications/ln/lecture6.html
- https://cseweb.ucsd.edu/classes/sp17/cse120-a/applications/ln/lecture7.html
- https://cseweb.ucsd.edu/classes/sp17/cse120-a/applications/ln/lecture8.html
- https://cseweb.ucsd.edu/classes/sp17/cse120-a/applications/ln/lecture9.html