Memory-management information: page table / segment table pointers
Based on slides © OS Concepts by Silberschatz, Galvin and Gagne, 2008
Additional material by Diana Palsetia
CIT 595
Ready Queue Representation

Dispatcher
Dispatcher module gives control of the CPU to the
process selected by the scheduler
This involves:
switching context
switching to user mode
jumping to the proper location in the user program to restart
that program
Dispatch latency
Time it takes for the dispatcher to stop one process and start
another running
Source: Operating System Concepts (7th Ed.) – Silberschatz, Galvin and Gagne
Priority
Allow more important processes to run ahead of less important ones
Scheduling Algorithm Metrics
CPU utilization: fraction of time the CPU is kept busy
Throughput: number of processes completed per unit of time

Scheduling Scheme: FCFS
First Come First Served (FCFS): the process that requests the CPU first is allocated the CPU first
Example (CPU burst times: P1 = 24, P2 = 3, P3 = 3)
Arrival order P1, P2, P3 (Gantt chart: P1 runs 0–24, P2 runs 24–27, P3 runs 27–30):
Waiting time for P1 = 0; P2 = 24; P3 = 27
Average waiting time: (0 + 24 + 27)/3 = 17
Arrival order P2, P3, P1:
Waiting time for P1 = 6; P2 = 0; P3 = 3
Average waiting time: (6 + 0 + 3)/3 = 3
Much better than the previous case
Convoy effect: all processes wait for the one big process to get off the CPU
Scheduling Algorithm: Shortest Job First (SJF)
The process with the shortest execution time takes priority over others
Associate with each process the length of its next CPU execution time
Achieved by guessing the run time of the process for its next execution

SJF with Preemption Example
Process   Arrival Time   Next CPU Ex. Time
P1        0.0            7
P2        2.0            4
P3        4.0            1
P4        5.0            4
SJF Advantages and Disadvantages

Estimating the Next CPU Execution Time
Example of Exponential Averaging

Scheduling Scheme: Priority
A priority number (integer) is associated with each process
Problem: starvation – low-priority processes may never execute
Solution: Aging – as time progresses, increase the priority of the process
User Threads
Implemented by a library at the user level
Supports thread creation, scheduling, and management with no support from the kernel
Advantage
Fast and easy to create (no overhead of system calls)
Disadvantage
If one thread blocks then all block

Kernel Threads
The OS has its own threads, also known as LWPs (Light Weight Processes)
An LWP can be viewed as a "virtual CPU" to which the scheduler of the threads library schedules user-level threads
Advantage
If one thread blocks then the OS can schedule another thread of the application to execute
Disadvantage
Slower to create and manage, as creation and management happen via system calls
Thread Models: One-to-One Model
Thread Scheduling: Contention Scope
In the many-to-one and many-to-many models, the thread library schedules user-level threads to run on LWPs
Known as process-contention scope (PCS), since the scheduling competition is within the process
A kernel thread scheduled onto an available CPU uses system-contention scope (SCS)
Competition is among all threads in the system
In Pthreads, scope can be set via the thread attribute object (pthread_attr_t) with pthread_attr_setscope(..) and queried with pthread_attr_getscope(..)
By default the scope used is PCS
pthread.h also provides the macros PTHREAD_SCOPE_PROCESS and PTHREAD_SCOPE_SYSTEM

Thread Scheduling: Scheme
Most user-level thread libraries provide getting/setting of scheduling policies
Example Pthreads API:
int pthread_attr_setschedpolicy(pthread_attr_t *attr, int policy);
int pthread_attr_getschedpolicy(const pthread_attr_t *attr, int *policy);
policy -> SCHED_FIFO, SCHED_RR, and SCHED_OTHER