Notes 0003

Operating System Structure

(read chapter 3 of the dinosaur book)

Operating systems are usually fairly complex. To deal with that complexity, they're usually broken up into separate components that perform specific tasks. These components interoperate to get a job done. Some are high-level components, others are low-level.

Process Management

A process can be thought of as a program in execution. It needs certain resources to run - CPU time, memory, IO, files, etc. A process is not a program, though: a program is a passive entity, while a process is an active entity - the process is the thing that actually executes.

An operating system is usually responsible for:

Creating and deleting both user and system processes.
Suspending and resuming processes.
Providing mechanisms for process synchronization.
Providing mechanisms for process communication.
Providing mechanisms for deadlock handling.

Main Memory Management

Main memory is a large array of bytes that is directly addressable by the CPU. It is the RAM and ROM of your computer (there is usually a lot more RAM than ROM); ROM stores the BIOS and bootstrap code. Parts of the address space may also be mapped to devices (memory-mapped IO and DMA buffers) and used for IO. (A sketch of how a program asks the OS for memory follows the list below.)

An operating system is usually responsible for:

Keeping track of which parts of memory are currently being used and by whom.
Deciding which processes are to be loaded into memory when memory space becomes available.
Allocating and deallocating memory space as needed.
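
As a rough sketch of what "allocating memory" looks like from a user program's point of view (assuming a Unix-like system where mmap() is available - the exact flags vary a bit between systems), a program can ask the OS to map a page of memory into its address space:

    #include <stdio.h>
    #include <sys/mman.h>

    int main(void) {
        /* ask the OS to map one page (4096 bytes here) of anonymous
           memory into this process's address space */
        size_t len = 4096;
        char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) {
            perror("mmap");
            return 1;
        }

        p[0] = 'x';        /* the memory is now usable */
        munmap(p, len);    /* give it back to the OS */
        return 0;
    }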

File Management

Operating systems deal with various hardware devices for data storage, including hard drives, tape drives, CD-ROMs, floppies, networks, etc. Each of these devices has its own structure, components, and quirks. To make it simple to interact with such a varied array of devices, most operating systems create the abstraction of files.

Files are data (programs in files are just data of some format) which are managed by the computer and manipulatable by the user. The operating system hides the hardware specifics and just presents 'files' to the user, no matter where or how they are stored.

Before a process can get at the data in a file, it needs to load the file data into memory (so the CPU can access the data directly). A lot of the functionality of the OS deals with reading and writing files to/from main memory, and with managing the file system in general.

An operating system is usually responsible for:

Creating and deleting files.
Creating and deleting directories.
Supporting primitives for manipulating files and directories.
Mapping files onto secondary storage.
Backing up files on stable (nonvolatile) storage media.

I/O-System Management

Computers have many input and output devices, and controlling them is the job of the operating system. In many cases, devices have their own drivers, but then the operating system must control those device drivers.

In general, a user program should not be able to do IO without the assistance of the operating system, since that would shift the burden of managing IO onto user code instead of the OS or drivers, where it belongs.

Secondary-Storage Management

Hard drives operate at a level far lower than that of files. An operating system needs low-level interfaces for managing secondary storage (non-RAM storage). This includes:

Free-space management.
Storage allocation.
Disk scheduling.

Usually, the file system uses these types of low level interfaces to create files.

Networking

An operating system is also responsible for managing the network connections on the computer where it is running.

Protection System

Most modern operating systems are multi-user and provide some sort of protection from malicious users and their programs.

A modern operating system should not allow an unprivileged program to access restricted memory, overwrite the operating system, erase protected files, or take over the computer.

Some of this protection (like memory protection) is partly handled by the CPU once the OS sets it up to do so.
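
As a small illustration (assuming a Unix-like system where /etc/shadow is readable only by root), an unprivileged program that tries to open a protected file is simply refused by the kernel:

    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void) {
        /* as a normal user, this open() is refused by the kernel's
           permission checks (typically errno == EACCES) */
        int fd = open("/etc/shadow", O_RDONLY);
        if (fd < 0)
            printf("open failed: %s\n", strerror(errno));
        else
            close(fd);   /* apparently running as root */
        return 0;
    }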

Command Interpreter System

Each operating system should have some sort of a user interface for the user to enter commands (or otherwise let the operating system know what the user wants done).

On many operating systems (Unix, DOS), this is accomplished via a command shell, where the user literally types in commands. On others (Windows, Mac), a GUI environment has largely taken over.

The key is that internally, they still accomplish the same thing: allow the user to interact with the system.
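
As a sketch of the idea (a toy, not a real shell - note that it does no argument parsing), a minimal Unix-style command interpreter is just a loop that reads a command, runs it in a new process, and waits for it to finish:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        char line[256];

        /* toy shell: read a command name, run it, wait, repeat */
        for (;;) {
            printf("> ");
            fflush(stdout);
            if (fgets(line, sizeof line, stdin) == NULL)
                break;
            line[strcspn(line, "\n")] = '\0';      /* strip the newline */
            if (line[0] == '\0')
                continue;

            if (fork() == 0) {
                execlp(line, line, (char *)NULL);  /* run the command */
                perror("exec");
                _exit(1);
            }
            wait(NULL);                            /* wait for it to finish */
        }
        return 0;
    }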

Services

Operating systems also provide several convenience services to make it easier to develop programs for the OS. These include the program loader, the resource allocator, many device drivers, libraries, etc.

A program loader, for example, can take an executable, load it into memory, set up the environment, and get it running. Users don't concern themselves with how their program is executed; it is all handled by the OS.

System Calls

User programs interact with the operating system using system calls.

If a program wants to save a file, it tells the operating system what to do using a system call. If it wants to display something on the screen, again, it tells the operating system what to do via a system call.
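
For instance (a minimal sketch assuming a Unix-like system), "displaying something on the screen" ultimately boils down to the write() system call on the standard output descriptor; the OS does the actual work of getting the bytes to the terminal:

    #include <unistd.h>

    int main(void) {
        /* write() is a thin wrapper around the kernel's write system call */
        const char msg[] = "hello from a system call\n";
        write(STDOUT_FILENO, msg, sizeof msg - 1);
        return 0;
    }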

All operating systems expose many system calls to do all sorts of things that user programs may need.

See Figure 3.2 in the dinosaur book for a list of types of system calls.

A set of system calls can be divided into five major categories: process control, file management, device management, information maintenance, and communication.

Process Control

Controlling processes involves starting up new processes, ending processes, forking new processes, waiting for processes, etc.
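
On Unix-like systems, for example, these show up as calls such as fork(), wait(), and exit(); a minimal sketch (error handling omitted):

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        pid_t pid = fork();                /* create a new process */
        if (pid == 0) {
            printf("child: my pid is %d\n", (int)getpid());
            exit(7);                       /* the child ends itself */
        }

        int status;
        wait(&status);                     /* the parent waits for the child */
        printf("parent: child exited with status %d\n", WEXITSTATUS(status));
        return 0;
    }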

File Management

An operating system must provide system calls to create, delete, rename, read, and write files, as well as to change file attributes, etc.
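
On a Unix-like system these take the form of calls like open(), read(), write(), and close(); a small sketch (the file name is just an example) that creates a file and writes a few bytes into it:

    #include <fcntl.h>
    #include <unistd.h>

    int main(void) {
        /* create (or truncate) a file and write a few bytes into it */
        int fd = open("example.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0)
            return 1;
        write(fd, "some data\n", 10);
        close(fd);
        return 0;
    }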

Device Management

System calls that control devices very often take the same form as file management calls. Operating systems such as Unix let devices be bound to files (writing to a particular file causes a write to the device). [for example, on many Unix systems, the /dev/fd0 file represents the floppy drive]
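
Because devices look like files, the same file system calls work on them. For example (assuming a Linux-like system where /dev/urandom exists), reading random bytes from the kernel is just a read() on a device file:

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void) {
        unsigned char buf[8];

        /* /dev/urandom is a device, but we open and read it like any file */
        int fd = open("/dev/urandom", O_RDONLY);
        if (fd < 0)
            return 1;
        read(fd, buf, sizeof buf);
        close(fd);

        for (size_t i = 0; i < sizeof buf; i++)
            printf("%02x", buf[i]);
        printf("\n");
        return 0;
    }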

Information Maintenance

The operating system must provide the user (and programs) with lots of information about files, time, date, system status, etc. There are many system calls that allow getting and setting this information.
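
On Unix-like systems, examples include getpid(), time(), and uname(); a short sketch:

    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>
    #include <sys/utsname.h>

    int main(void) {
        /* ask the OS about this process, the current time, and the system itself */
        printf("pid:  %d\n", (int)getpid());
        printf("time: %ld\n", (long)time(NULL));

        struct utsname u;
        if (uname(&u) == 0)
            printf("os:   %s %s\n", u.sysname, u.release);
        return 0;
    }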

Communication

There are two primary ways programs can communicate. They can either send messages to each other, or they can share memory.

A process can send a message to another process using operating system facilities. The other process may not even be running on the same computer - it may be on another machine on the network.

Processes can also agree to share some memory for communication. One process may write something to the shared memory, and another process may read it from that same memory. Memory sharing is very common; threads, for example, share memory by default.

Shared memory has some problems with protection (you cannot easily restrict the other program from using the shared memory improperly).
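
A tiny message-passing sketch on a Unix-like system: the parent sends a message to its child through a pipe provided by the OS (a shared-memory version would instead use something like mmap() or shmget()/shmat()):

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        int fds[2];
        pipe(fds);                    /* fds[0] = read end, fds[1] = write end */

        if (fork() == 0) {
            /* child: read whatever the parent sends */
            char buf[64];
            ssize_t n = read(fds[0], buf, sizeof buf - 1);
            if (n > 0) {
                buf[n] = '\0';
                printf("child got: %s\n", buf);
            }
            return 0;
        }

        /* parent: send a message, then wait for the child to finish */
        const char msg[] = "hello child";
        write(fds[1], msg, strlen(msg));
        wait(NULL);
        return 0;
    }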

System Programs

Another vital part of operating systems (besides providing libraries and services) are system programs.

These system programs provide useful functionality to the user. Sometimes they're just disguised system calls (like the 'rm' command to remove files, etc.); at other times they can be whole big applications like word processors or web browsers (again, assuming that everything that comes on the OS CD-ROM is actually part of the OS).

Usually, there is one very important user program that comes with every operating system. Under Unix, it is the command interpreter (or simply the 'shell'), which allows users to type in commands and have them do something useful. Windows comes with a GUI environment, which is sort of like the command shell, only graphical.

System Structure

As stated earlier, operating systems are complex and need to be designed well. The design involves taking the complex operating system functionality and breaking it up into many smaller components, pieces that handle specific tasks.

This breakup makes it easier to write and update operating systems, because dealing with small pieces of code at a time is simpler and usually causes fewer bugs than dealing with a huge monolithic system.

Layering

Some operating systems are built using a layered structure. Layer 0 is the hardware, and the highest layer is the user interface; operating system components sit at layers in between.

The idea behind layering is that lower layers are not aware of higher layers, and thus, can be developed in relative isolation from the higher layers. You can write a hard drive driver without thinking about files, or a file system without thinking about the user interface. Obviously, higher layers need to call lower layers to get things accomplished.

As nice as it sounds, there are serious problems with layering. For one, some components cannot be cleanly assigned to a particular layer, and may actually need to be situated in several layers at once. Another major issue is performance: every high-level call has to travel down through each layer until it reaches the layer that can handle the job.

Windows NT uses a layered approach, where your call to the API gets transferred to lower and lower layers. Hardware Abstraction Layer (HAL) is just one example.

Microkernel

Another system structure that's in common use is the microkernel. A microkernel is, in essence, a small chunk of code that provides the core operating system services and delegates everything else to other programs. All inter-process communication happens through the microkernel.

For example, a user may request to execute some program; the request comes in through some high-level shell and is sent to the kernel (the microkernel), which may then invoke a loader service to load the program, which in turn may invoke a scheduling service to schedule the program for execution, etc. The key thing to note is that the microkernel is intentionally kept small and simple, and that other services handle things which, from some perspective, may seem a core part of the OS (the scheduler, etc.).

Hybrid

In reality, operating system structure is usually a mix of the two extremes. Kernels are kept relatively small (or as small as makes sense practically), and every operating system has some layering and abstraction. For some tasks one approach makes sense; for others it doesn't.

Layering may work fine if you're writing a file and don't want to put all the sector or file system code into the core kernel, but it may not work well for things like video where an added layering step causes significant performance problems.

[Windows NT has video drivers running as part of the kernel to increase performance and decrease layering - which means that any video driver problem can crash the whole operating system]

Virtual Machines

Instead of building an operating system to run on some hardware, some vendors have taken a slightly different route and built operating systems (and software) for virtual machines.

An operating system can be thought of as an environment in which programs can execute, etc., and environments like Java Virtual Machine (and many similar ones), provide just that.

Virtual machines have distinct advantages over 'real' operating systems, since they can define their environment any way they wish. They can make the environment standard without worrying about hardware incompatibilities. Virtual machines also tend to be portable, so in essence, if you write code to run on a virtual machine, your code will be able to run anywhere.

Virtual machines are becoming more and more important nowadays. Java is one example of a popular VM. Microsoft is releasing its Web Services architecture, which is mostly based on interpreted code, so it's very likely that in the not-so-distant future, huge chunks of the Windows operating system will run under a virtual machine.

Design and Implementation

When designing an operating system, the first step is usually to figure out and decide on the type of operating system that is needed/required/wanted. Should the OS just handle batch programs? Should it be able to run several programs at a time? Will it only support one user? Will it communicate with other computers? Who are the users of the system?

The answers to those questions will drive the way you develop the system. If, for example, the system will be used only by low-level developers to easily build elevator controllers (someone presses a button on the 3rd floor, the elevator goes to that floor, etc.), then the system doesn't really need to be as complex as Windows (or even DOS). It just needs to be good enough for the domain where it is to be used.

Another consideration is security, which should be decided on (built in or left out) right from the beginning. If you need security and leave it out, it will be almost impossible to add it back in later; but if you don't need it and implement it anyway, you've wasted your time.

Again, many design decisions don't have easy answers, and most operating systems have major design problems, which hopefully aren't too noticeable to a vast majority of the users.

As the book says, "The specification and design of an operating system is a highly creative task."

Implementation

Various operating systems have been written in various programming languages. Early OSs mostly used assembly language. This allowed easier low-level hardware control (perfect for OS code), but was tedious and error prone to code.

Most modern operating systems use a high-level language like C/C++ for a vast majority of their code. There still needs to be a little bit of assembly for system specific tasks, but the algorithms, etc., all can be implemented using a high level language.

Performance isn't particularly affected by this, though, as the book puts it well: "As is true with other systems, major performance improvements in operating systems are more likely to be the result of better data structures and algorithms than of excellent assembly-language code."

Installation

Operating systems usually cannot just be copied to a machine and be expected to work. They need to be installed properly, with all of their configuration set up correctly. The operating system needs to set up a boot sector on the hard drive so that the system boots it when the computer restarts. It also needs the hard drive to use some specific file system in order to get at the data. There are also various configurations specific to any one particular machine, like your video card and sound card. Some combinations of settings on your machine may never have been tested or tried before, yet the operating system adapts and configures itself to work properly.

Try installing/reinstalling Windows/Linux, and notice all the questions that the operating system asks you. All that information (if not already predefined by the installer) is needed for the system to function properly.

[before Linux installers got simpler, installing Linux was a journey through hell; you needed to specify everything, even the starting and ending blocks of your disk partitions, in order to correctly set up the file system and swap space - all manually, using relatively low-level commands] (now, in many ways, a Red Hat installer is simpler than that of Windows)

© 2006, Particle