Notes 0004

Process Management

(read chapters 4 and 5 of the dinosaur book)

Overall Picture

Systems usually consist of several processes cooperating to accomplish some task. Operating-system processes execute system code; user processes execute user code.

A process is a program in execution. It needs resources like memory, CPU time, IO, etc.

Traditionally, a process involved only a single thread of execution, but now most operating systems are capable of executing several threads per process.

From lesson 0003: Operating systems are responsible for creation of both user and system processes, scheduling of processes, and the provision of mechanisms for synchronization, communication, and deadlock handling.

Processes

A process is a program in execution. Whatever your CPU executes is referred to as a process. A program is NOT a process. A program is a passive file (it doesn't do anything). A process is that same program when it is executing (running).

Internally, an operating system keeps information about every process. Usually, the information consists of the following: text section (the binary code that the CPU is executing), program counter (where in the code we are currently executing), CPU registers, stack, and data section.

While a 'program' may have several processes associated with it, they're still considered separate processes.

Process States

The state of a process is mostly defined by its current activity.

New: A new process that was just created.
Running: Process' instructions are being executed.
Waiting: Process is waiting for some event to occur.
Ready: Process is waiting to be assigned to a processor.
Terminated: A process has finished execution.

[refer to figure 4.1 on page 97 for a state diagram]

Process Control Block

Operating systems represent a process internally by a Process Control Block (PCB) - also called a task control block.

In essence, it is just a structure that contains information such as: process state, program counter, CPU register values, scheduling information, memory management information, accounting information, and IO status information.

Not all of the information is used all the time. CPU register values are not saved while the process is executing; they are only saved when a task switch occurs (so that the process can seamlessly resume without losing anything).

Various operating systems may have various needs though.
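
As a rough illustration, here is what a much-simplified PCB might look like as a C struct. The field names are hypothetical (no real kernel's PCB looks exactly like this); they just mirror the list of information above.

    /* A hypothetical, much-simplified PCB; field names are illustrative. */
    enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

    struct pcb {
        int             pid;              /* process identifier */
        enum proc_state state;            /* one of the five states above */
        unsigned long   program_counter;  /* saved PC while not running */
        unsigned long   registers[16];    /* saved CPU registers (count is arch-specific) */
        void           *page_table;       /* memory-management information */
        long            cpu_time_used;    /* accounting information */
        int             open_files[16];   /* IO status: open file descriptors */
        struct pcb     *next;             /* link for whatever queue the process is on */
    };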

Process Scheduling

The point of multiprogramming is to have some process running at all times so as to maximize CPU utilization. The point of time-sharing is to switch the CPU among processes so frequently that users can interact with each program while it is running. A system with 1 CPU can only have 1 running process; the rest must wait for their turn.

As processes enter the system, they are put on the 'job queue'; this consists of all jobs in the system. Processes that reside in main memory and are ready to execute are put on the 'ready queue'.

An operating system may have many such queues. When a process is allocated CPU time, it executes for a while, and then either exits, is interrupted, or waits for some IO. In the case of IO, many processes may be waiting for the same device, so the process is made to wait and is placed on the device queue.

A common representation of scheduling is a queuing diagram (figure 4.5, page 101).

In essence, as soon as a process enters the system, it is put on a queue. When it becomes ready to execute, it is put on another queue. When it gets some interrupt, it is again put on a queue; when it does some IO, it is put on a queue; when its CPU time expires, it is, once again, put on a queue.

[your class project is surely to use some sort of a queue]
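
For example, a bare-bones FIFO ready queue could be built by linking PCBs together through their queue pointer. This sketch assumes the hypothetical struct pcb from the PCB section above; a real scheduler would order the queue by priority, etc.

    /* Enqueue a process at the tail of the ready queue. */
    struct pcb *ready_head = NULL, *ready_tail = NULL;

    void enqueue_ready(struct pcb *p) {
        p->state = READY;
        p->next = NULL;
        if (ready_tail) ready_tail->next = p;
        else            ready_head = p;
        ready_tail = p;
    }

    /* Dequeue from the head; NULL means nothing is ready to run. */
    struct pcb *dequeue_ready(void) {
        struct pcb *p = ready_head;
        if (p) {
            ready_head = p->next;
            if (!ready_head) ready_tail = NULL;
            p->next = NULL;
        }
        return p;
    }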

Schedulers

Schedulers are processes that maintain these queues. They pick out processes from the queue based on their priority, etc.

There may be several schedulers. A long-term scheduler may be responsible for scheduling non-ready processes (processes that may reside on the hard drive), while a short-term scheduler (or CPU scheduler) may be responsible for scheduling ready processes that are already loaded in main memory and are ready to execute.

Obviously, the long-term scheduler brings processes into memory and hands them over to the CPU scheduler. The long-term scheduler controls the number of processes that the CPU scheduler is juggling, and maintains the degree of multiprogramming (the number of processes in memory).

Processes may be IO bound or CPU bound. IO-bound processes spend a lot of their time doing IO, while a CPU-bound process doesn't do much IO but uses a lot of CPU. A long-term scheduler should pick a relatively good mix of IO-bound and CPU-bound processes so that the system is better utilized.

It is not required to have many separate schedulers, but there needs to be some mechanism to handle basic scheduling. More complex systems may have more advanced scheduling; simpler systems might have 1 main CPU scheduler, etc.

Context Switch

When the CPU switches to another process, the operation is known as a context switch. The context is the current state of the current process. Switching involves copying registers, etc., to the Process Control Block (PCB), saving it, loading the PCB of the process you want to run, and letting it run.

Context Switching happens as a result of interrupts.
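
In C-like pseudocode, the switch boils down to something like the sketch below. This is conceptual only: save_cpu_state() and load_cpu_state() are imaginary helpers standing in for the register save/restore that a real kernel does in assembly, and struct pcb is the hypothetical one sketched earlier.

    void save_cpu_state(struct pcb *p);   /* imaginary; assembly in a real kernel */
    void load_cpu_state(struct pcb *p);   /* imaginary; assembly in a real kernel */

    struct pcb *current;                  /* the process now on the CPU */

    void context_switch(struct pcb *next) {
        save_cpu_state(current);          /* copy registers, PC, etc. into its PCB */
        current->state = READY;           /* or WAITING, if it blocked on IO */
        next->state = RUNNING;
        current = next;
        load_cpu_state(next);             /* restore registers, PC; next now runs */
    }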

Operations on Processes

Process Creation

A process may create a new process via various system calls. On Unix, the system call is fork(). Creating a process involves creating a new PCB for that process, scheduling the process for execution, etc.

Depending on policy, a process may inherit resources from its parent, or it may acquire its own resources from the operating system. The process may be bound to the parent or totally independent.
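
A minimal sketch of fork() in C: the call returns twice, giving 0 in the child and the child's pid in the parent, which is how the two copies tell themselves apart.

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        pid_t pid = fork();             /* create a new process */
        if (pid < 0) {
            perror("fork");             /* creation failed */
            return 1;
        }
        if (pid == 0) {
            /* child: often calls exec() to run a new program; here it just prints */
            printf("child: pid %d\n", getpid());
            return 0;
        }
        printf("parent: created child %d\n", pid);
        wait(NULL);                     /* wait for the child to finish */
        return 0;
    }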

Process Termination

A process terminates by signaling the OS that it's finished. On Unix, it is accomplished via the exit() system call. The operating system de-allocates memory, resources, etc., that were allocated for that process.

On some systems, when a parent process terminates, the operating system also terminates all child processes.
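
Continuing the Unix example: the child exits with a status code, and the parent collects it via wait(), after which the OS can free the child's PCB and resources.

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        pid_t pid = fork();
        if (pid == 0)
            exit(42);                    /* child tells the OS it is finished */

        int status;
        waitpid(pid, &status, 0);        /* parent waits for the child to terminate */
        if (WIFEXITED(status))
            printf("child exited with status %d\n", WEXITSTATUS(status));
        return 0;
    }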

Cooperating Processes

Processes can be either independent or cooperating. An independent process neither shares memory nor communicates with other processes. A cooperating process can affect the state of other processes by sharing memory, sending signals, etc.

Let's take a look at a classic inter-process communication problem, the producer-consumer problem. The idea is that an operating system may have many processes that need to communicate. Imagine a program that prints output. Somewhere internally, that program needs to send data to the printer driver.

In this contrived example, the program that produces the data is called the 'producer' and the program that consumes the data is called the 'consumer'. The goal is to allow them to communicate.

The first major restriction we must place on the system is that the consumer cannot consume items that have not yet been produced. We need to design our system in such a way that the consumer doesn't ruin anything if it asks for something that doesn't exist yet.

We can view this producer-consumer model in many ways. There usually needs to be some buffer between the two (one process writes things someplace; the other process reads them from there); if there is no buffer, communication is severely limited.

In one approach, we can make an unbounded buffer; the consumer still has to wait for things to appear in the buffer, but the producer just sticks things in without worrying about the buffer filling up (or about the speed at which the consumer consumes the things).

In another approach, we have a bounded-buffer, where a consumer waits for things to be published to the buffer, and the producer waits if the buffer is full.

We could imagine a queue-like structure, where we have an array with 2 pointers, one for insertion and another for removal [see page 109 of the book].
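
Sketched in C, the book's circular-array scheme looks roughly like this: the buffer is empty when in == out, and "full" when the next insertion would catch up to out, so one slot is always left unused. The busy-wait loops stand in for the waiting described above; a real system would block instead of spinning.

    #define BUFFER_SIZE 8

    int buffer[BUFFER_SIZE];
    int in = 0, out = 0;     /* insertion and removal indexes */

    void produce(int item) {
        while ((in + 1) % BUFFER_SIZE == out)
            ;                             /* buffer full: producer waits */
        buffer[in] = item;
        in = (in + 1) % BUFFER_SIZE;
    }

    int consume(void) {
        while (in == out)
            ;                             /* buffer empty: consumer waits */
        int item = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        return item;
    }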

Process Communication

Operating systems usually provide a number of facilities to allow processes to communicate. One common approach is to pass messages between the two processes. This is the basis of the entire Internet: a program passes a message to another program (possibly executing on another machine).

In message passing, you can have direct or indirect communication. Direct communication implies that you're directly communicating with the other program. Indirect implies that you communicate through a third party, such as leaving a message in a mailbox for the other process to pick up.

Communication can also be either blocking or non-blocking (synchronous or asynchronous).

Blocking send: the sender blocks until the message is received.
Non-blocking send: the process sends the message and resumes - it doesn't wait for the receiver to get the message.
Blocking receive: a receiver blocks until a message is available.
Non-blocking receive: a receiver checks whether a message is available; if it is, it gets it; otherwise, it returns nothing.

Various operating systems have various ways for their processes to communicate with each other.
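
On Unix, one simple message-passing mechanism is the pipe. In this sketch the child's read() is a blocking receive: it blocks until the parent writes something into the pipe.

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        int fds[2];
        if (pipe(fds) < 0) { perror("pipe"); return 1; }

        pid_t pid = fork();
        if (pid == 0) {                   /* child: the receiver */
            close(fds[1]);                /* close the unused write end */
            char buf[64];
            ssize_t n = read(fds[0], buf, sizeof(buf) - 1);   /* blocks */
            if (n > 0) { buf[n] = '\0'; printf("child got: %s\n", buf); }
            close(fds[0]);
            return 0;
        }
        close(fds[0]);                    /* parent: the sender */
        const char *msg = "hello from parent";
        write(fds[1], msg, strlen(msg));
        close(fds[1]);
        wait(NULL);                       /* reap the child */
        return 0;
    }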

Client - Server Systems

A socket is an endpoint of communication. Network communication can be used by software to communicate with programs on different computers, though it is usually a lot less efficient than system-specific methods of inter-process communication.
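
For a flavor of the socket API, here is a minimal TCP client in C. The server address and port (localhost, 7000) are made up for illustration; the sketch assumes some hypothetical echo server is listening there.

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <arpa/inet.h>

    int main(void) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);     /* create an endpoint */
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port = htons(7000);                  /* assumed port */
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("connect");
            return 1;
        }
        write(fd, "hello", 5);                        /* send a message */
        char buf[64];
        ssize_t n = read(fd, buf, sizeof(buf) - 1);   /* wait for the reply */
        if (n > 0) { buf[n] = '\0'; printf("got: %s\n", buf); }
        close(fd);
        return 0;
    }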

Remote Procedure Calls - RMI - etc.

Some systems also provide ways for programs to seamlessly communicate with processes on other computers via remote procedure calls. These make it seem like the call is local, even though the call is using the network.

Distributed Systems

This is a bit overboard for an operating systems class and would best be left for a networking class (or a class on distributed systems).

THREADS Overview

Threads are sometimes referred to as lightweight processes (LWP). They are the basic unit of CPU utilization; they contain a thread ID, a program counter, a register set, and a stack. The code, data, and operating system resources like files, etc., are shared among threads.

A traditional (heavyweight) process (described earlier) has a single thread of control.

A process that has many threads can do many things at once. It can execute many parts of its code at the same time.

[refer to figure 5.1 on page 130]

The motivation for threads is that for many programs, it is more important to be able to do several things at once (possibly slower) than to do 1 thing at a time (usually faster).

Imagine a web-server, which receives client requests and replies to clients. If it used 1 thread, all clients would have to wait until the server completely finished with the 1 currently executing client. On a multi-threaded server, many clients can interact with the server at the same time (each possibly a bit slower than a single client would be, but overall the server can satisfy more clients than by serving them one by one).

Another motivation is better utilization of multiprocessor machines, where each thread can be executing on its own CPU.

User and Kernel Threads

User threads are threads managed above the kernel by some thread library or user program.

Kernel threads are threads managed by the operating system kernel. The kernel handles thread creation, scheduling, etc. Kernel threads are usually more efficient than user threads. Most operating systems now support kernel threads.

Multithreading Models

You can mix and match user and kernel threads. You can map many user threads onto 1 kernel thread (many-to-one). You can have one kernel thread for each user thread (one-to-one). And there is the many-to-many model, where many user threads are mapped onto many kernel threads.

Threading Issues

Various system calls behave differently when using many threads. In fact, there are usually several versions of the same system call: one to be used in a single-threaded environment, another for when you have many threads. Threaded system/library calls are usually safer and do not use any static memory (since several threads may be calling the same function at the same time).

Another major issue is thread cancellation (or killing a thread). There are two major ways to shut down a thread: one is to simply kill it; the other is to let it know to shut itself down (and wait for it).

Just stopping a thread may not be very safe, since the thread may be doing something critical (like updating a file), so telling the thread to terminate itself is usually safer.
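
One common way to "tell the thread to terminate" is a shared flag that the worker polls at points where stopping is safe. Here is a sketch using Pthreads and a C11 atomic; Pthreads also has pthread_cancel() for the kill-it approach.

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <unistd.h>

    atomic_bool stop_requested = false;

    void *worker(void *arg) {
        (void)arg;
        while (!atomic_load(&stop_requested)) {
            usleep(100000);                          /* stand-in for a unit of real work */
        }
        puts("worker: cleaning up and exiting");     /* safe, orderly shutdown */
        return NULL;
    }

    int main(void) {
        pthread_t t;
        pthread_create(&t, NULL, worker, NULL);
        sleep(1);
        atomic_store(&stop_requested, true);   /* ask the thread to stop */
        pthread_join(t, NULL);                 /* and wait for it */
        return 0;
    }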

There are also some issues related to signaling. When a process gets a signal, how should it be handled?

Should it be handled in the thread to which the signal applies?
Should it be delivered to every thread?
Should it be delivered to a few specific threads?
Should there be a specific thread designated to receive all signals?

These are all design issues, which are handled in different ways in various operating systems.

Thread Creation

While threads are lightweight and are easier (faster) to create than processes (because they share code and data), they still take considerable time to create. To alleviate this problem, systems can maintain thread pools: threads that sit waiting for things to do; once they get something to do, they do it, and then go back to waiting. The idea is that you don't have to 'create' many threads; you just use the threads already in the pool.
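
A bare-bones pool might look like the following Pthreads sketch: a fixed set of workers blocks on a condition variable until tasks show up in a shared queue. The names and sizes are made up, and error handling is omitted for brevity.

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    typedef struct task {
        void (*fn)(void *);               /* the work to perform */
        void *arg;
        struct task *next;
    } task_t;

    static task_t *head = NULL, *tail = NULL;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  ready = PTHREAD_COND_INITIALIZER;

    void submit(void (*fn)(void *), void *arg) {
        task_t *t = malloc(sizeof *t);
        t->fn = fn; t->arg = arg; t->next = NULL;
        pthread_mutex_lock(&lock);
        if (tail) tail->next = t; else head = t;
        tail = t;
        pthread_cond_signal(&ready);      /* wake one waiting worker */
        pthread_mutex_unlock(&lock);
    }

    void *worker(void *unused) {
        (void)unused;
        for (;;) {
            pthread_mutex_lock(&lock);
            while (!head)
                pthread_cond_wait(&ready, &lock);   /* sleep until work appears */
            task_t *t = head;
            head = t->next;
            if (!head) tail = NULL;
            pthread_mutex_unlock(&lock);
            t->fn(t->arg);                /* run the task */
            free(t);
        }
        return NULL;
    }

    void print_task(void *arg) { printf("task %ld\n", (long)arg); }

    int main(void) {
        pthread_t workers[4];             /* the pool: 4 pre-created threads */
        for (int i = 0; i < 4; i++)
            pthread_create(&workers[i], NULL, worker, NULL);
        for (long i = 0; i < 10; i++)
            submit(print_task, (void *)i);
        sleep(1);                         /* crude: give the tasks time to run */
        return 0;
    }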

Thread Libraries

There are various thread libraries for various systems. Some thread libraries may have thread controls not found in other libraries; for example, there is no explicit way for a thread to exit under Windows, etc.

Pthreads - POSIX threads are standard threads available on POSIX compatible systems (most UNIX boxes). They're fairly easy to use from C programs, and the API more or less reflects the functionality you'd find in other more specific libraries.
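
For instance, creating and joining a Pthread takes only a couple of calls (compile with something like cc prog.c -lpthread; the exact flag varies by system):

    #include <pthread.h>
    #include <stdio.h>

    void *say_hello(void *arg) {
        printf("hello from thread %d\n", *(int *)arg);
        return NULL;                      /* returning ends the thread */
    }

    int main(void) {
        pthread_t tid;
        int id = 1;
        pthread_create(&tid, NULL, say_hello, &id);   /* start the thread */
        pthread_join(tid, NULL);                      /* wait for it to finish */
        return 0;
    }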

Java Threads - Easy to set up and use, etc. [some examples in class]

© 2006, Particle