Introduction to Operating Systems
Contents
• What is an operating system?
• Simple Batch Systems
• Multiprogramming Batched Systems
• Time-Sharing Systems
• Personal-Computer Systems
• Parallel Systems
• Distributed Systems
• Real-Time Systems
What is an Operating System?
An operating system acts as an intermediary between the user of a computer and the computer hardware. The purpose of an operating system is to provide an environment in which a user can execute programs in a convenient and efficient manner. An operating system is software that manages the computer hardware. The hardware must provide appropriate mechanisms to ensure the correct operation of the computer system and to prevent user programs from interfering with the proper operation of the system.
Computer System Components
1. Hardware – provides basic computing resources (CPU, memory, I/O devices).
2. Operating system – controls and coordinates the use of the hardware among the various application programs for the various users.
3. Application programs – define the ways in which the system resources are used to solve the computing problems of the users (compilers, database systems, video games, business programs).
4. Users (people, machines, other computers).
[Figure: Abstract view of system components]
Simple Batch Systems
Utilisation of computer resources and improvement in programmers' productivity remained major problems. During the time that tapes were being mounted or the programmer was operating the console, the CPU sat idle. The next logical step in the evolution of operating systems was to automate the sequencing of operations involved in program execution and in the mechanical aspects of program development. Jobs with similar requirements were batched together and run through the computer as a group.
For example, suppose the operator received one FORTRAN program, one COBOL program and another FORTRAN program. If he ran them in that order, he would have to set up the FORTRAN environment (loading the FORTRAN compiler tapes), then set up the COBOL environment, and finally the FORTRAN environment again.
If he ran the two FORTRAN programs as a batch, however, he could set up for FORTRAN only once, thus saving operator's time. Batching similar jobs improved utilisation of system resources quite a bit. But there were still problems. For example, when a job stopped, the operator would have to notice that fact by observing the console, determine why the program stopped, and then load the card reader or paper tape reader with the next job and restart the computer. During this transition from one job to the next, the CPU sat idle.
To overcome this idle time, a small program called a resident monitor was created, which is always resident in memory. It automatically sequenced one job to the next. The resident monitor acts according to directives given by the programmer through control cards, which contain information such as markers for a job's beginning and end, and commands for loading and executing programs. These commands belong to a job control language, and they are included with the user program and data.
Here is an example of job control language commands; a sketch of how a resident monitor might act on them follows the list.
$COB – Execute the COBOL compiler
$JOB – First card of a job
$END – Last card of a job
$LOAD – Load program into memory
$RUN – Execute the user program
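To make the sequencing concrete, here is a minimal sketch in C of the kind of loop a resident monitor performs. The card format, command set, and the load/run helpers are hypothetical stand-ins, not a real historical monitor.

```c
/* Minimal sketch of a resident monitor's control-card loop.
   The card format and helpers are invented for illustration. */
#include <stdio.h>
#include <string.h>

static void load_program(const char *name) { (void)name; /* load into memory (stub) */ }
static void run_program(void)              { /* transfer control to user program (stub) */ }

int main(void) {
    char card[82];                            /* one 80-column card image + "\n\0" */
    while (fgets(card, sizeof card, stdin) != NULL) {
        if (strncmp(card, "$JOB", 4) == 0) {
            /* first card of a job: reset state for the next user */
        } else if (strncmp(card, "$LOAD", 5) == 0) {
            load_program(card + 5);           /* load the named program */
        } else if (strncmp(card, "$RUN", 4) == 0) {
            run_program();                    /* execute the user program */
        } else if (strncmp(card, "$END", 4) == 0) {
            /* last card of a job: sequence automatically to the next */
        }
    }
    return 0;
}
```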
With the sequencing of program execution mostly automated by the batch operating system, the speed discrepancy between the fast CPU and comparatively slow input/output devices such as card readers and printers emerged as a major performance bottleneck. Even a slow CPU works in the microsecond range, executing millions of instructions per second. A fast card reader, on the other hand, might read 1200 cards per minute. Thus, the difference in speed between the CPU and its input/output devices may be three orders of magnitude or more, which means the CPU is often waiting for input/output. As an example, an assembler or compiler may be able to process 300 or more cards per second, while a fast card reader may read only 1200 cards per minute. Assembling or compiling a 1200-card program would therefore require only 4 seconds of CPU time but 60 seconds to read. The CPU is idle for 56 out of 60 seconds, or 93.3 per cent of the time; the resulting CPU utilisation is only 6.7 per cent. The situation is similar for output operations. The problem is that while input/output is occurring, the CPU is idle waiting for it to complete; while the CPU is executing, the input/output devices are idle. Over the years, of course, improvements in technology resulted in faster input/output devices, but CPU speed increased even faster. The need, therefore, was to increase throughput and resource utilisation by overlapping input/output and processing operations. Channels, peripheral controllers and later dedicated input/output processors brought major improvements in this direction. The DMA (Direct Memory Access) chip, which transfers an entire block of data from its own buffer to main memory without intervention by the CPU, was a major development: while the CPU is executing, DMA can transfer data between high-speed input/output devices and main memory, interrupting the CPU only once per block.
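The utilisation figures follow directly from the timings in the example; as a quick check, this small C program (with the 4-second and 60-second values taken from the text) reproduces them.

```c
/* Reproduces the worked figures above: 4 s of CPU time
   against 60 s of card reading for a 1200-card program. */
#include <stdio.h>

int main(void) {
    double cpu_time  = 4.0;                        /* seconds of CPU work        */
    double read_time = 60.0;                       /* seconds to read 1200 cards */
    double idle      = read_time - cpu_time;       /* 56 s of idle CPU           */
    printf("CPU idle:        %.1f%%\n", 100.0 * idle / read_time);     /* 93.3 */
    printf("CPU utilisation: %.1f%%\n", 100.0 * cpu_time / read_time); /* 6.7  */
    return 0;
}
```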
Apart from DMA, there are two other approaches to improving system performance by overlapping input, output and processing. These are buffering and spooling.
Buffering is a method of overlapping the input, output and processing of a single job. The idea is quite simple. After data has been read and the CPU is about to start operating on it, the input device is instructed to begin the next input immediately. The CPU and the input device are then both busy. With luck, by the time the CPU is ready for the next data item, the input device will have finished reading it. The CPU can then begin processing the newly read data while the input device starts to read the following data. The same can be done for output: the CPU creates data that is put into a buffer until an output device can accept it. If the CPU is, on average, much faster than an input device, buffering will be of little use: if the CPU is always faster, it always finds an empty buffer and has to wait for the input device. For output, the CPU can proceed at full speed until, eventually, all system buffers are full; then the CPU must wait for the output device. This situation occurs with input/output-bound jobs, where the amount of input/output relative to computation is very high. Since the CPU is faster than the input/output device, the speed of execution is essentially controlled by the input/output device, not by the speed of the CPU.
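As a concrete illustration, here is a minimal double-buffering sketch in C: while the CPU processes one buffer, the "device" fills the other. The device calls are stubs invented for illustration, standing in for a real asynchronous driver.

```c
/* Double buffering: overlap input with processing of a single job. */
#include <stdio.h>
#include <string.h>

#define BUFSIZE 16
static char buf[2][BUFSIZE];

static void start_read(char *dst) {            /* begin next input (stub)     */
    memset(dst, 'x', BUFSIZE);                 /* pretend the device fills it */
}
static void wait_read(void) {}                 /* block until input done (stub) */
static void process(const char *src) {         /* CPU work on one buffer      */
    printf("processing %.4s...\n", src);
}

int main(void) {
    int cur = 0;
    start_read(buf[cur]);                      /* prime the first buffer      */
    for (int block = 0; block < 4; block++) {
        wait_read();                           /* buffer 'cur' is now full    */
        int next = 1 - cur;
        start_read(buf[next]);                 /* device fills the next buffer */
        process(buf[cur]);                     /* while the CPU processes this one */
        cur = next;
    }
    return 0;
}
```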
A more sophisticated form of input/output buffering, called SPOOLING (simultaneous peripheral operation on-line), essentially uses the disk as a very large buffer for reading input and for storing output files.
Buffering overlaps the input, output and processing of a single job, whereas spooling allows the CPU to overlap the input of one job with the computation and output of other jobs; in this respect spooling is better than buffering. Even in a simple system, the spooler may be reading the input of one job while printing the output of a different job.
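A toy spooler sketch in C makes the decoupling visible: jobs deposit output into a disk-backed queue at CPU speed, and a separate "printer" pass drains it later. The file names and queue layout are invented for illustration.

```c
/* Toy output spooler: jobs queue their output files; the printer
   drains the queue independently of the jobs that produced it. */
#include <stdio.h>

#define MAX_JOBS 8

static const char *spool[MAX_JOBS];            /* spool directory entries */
static int nspooled = 0;

static void spool_output(const char *path) {   /* job finished: queue its file */
    if (nspooled < MAX_JOBS)
        spool[nspooled++] = path;
}

static void printer_daemon(void) {             /* drain the queue at printer speed */
    for (int i = 0; i < nspooled; i++)
        printf("printing %s\n", spool[i]);
    nspooled = 0;
}

int main(void) {
    spool_output("job1.out");                  /* jobs run at CPU speed ...   */
    spool_output("job2.out");
    printer_daemon();                          /* ... printing happens later  */
    return 0;
}
```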
Multi-programmed Batched Systems
Buffering and spooling improve system performance by overlapping the input, output and computation of a single job, but both have their limitations. A single user cannot always keep the CPU or I/O devices busy at all times. Multiprogramming offers a more efficient approach to increasing system performance. In order to increase resource utilisation, systems supporting multiprogramming allow more than one job (program) to utilise CPU time at any moment. The more programs compete for system resources, the better resource utilisation will be.
The idea is implemented as follows. The main memory of the system contains more than one program. The operating system picks one of the programs and starts executing it. During execution, program 1 may need to wait for some I/O operation to complete. In a sequential execution environment, the CPU would sit idle.
In a multiprogramming system, the operating system simply switches over to the next program. When that program needs to wait for some I/O operation, it switches over to program 3, and so on. If there is no new program left in main memory, the CPU passes control back to the previous programs.
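The following toy simulation in C illustrates the switching behaviour: whenever the running job blocks for I/O, the dispatcher moves on to the next ready job instead of letting the CPU idle. The job table and the dispatch loop are invented for illustration.

```c
/* Toy multiprogramming dispatch: switch jobs on I/O waits. */
#include <stdio.h>

enum state { READY, WAITING, DONE };

struct job { const char *name; enum state st; int bursts_left; };

int main(void) {
    struct job jobs[] = {
        { "job1", READY, 2 }, { "job2", READY, 1 }, { "job3", READY, 2 },
    };
    int n = 3, running = 1;
    while (running) {
        running = 0;
        for (int i = 0; i < n; i++) {
            if (jobs[i].st == WAITING) jobs[i].st = READY; /* its I/O finished */
            if (jobs[i].st != READY) continue;
            running = 1;
            printf("%s runs a CPU burst\n", jobs[i].name);
            if (--jobs[i].bursts_left == 0)
                jobs[i].st = DONE;                /* job completed         */
            else
                jobs[i].st = WAITING;             /* blocks for I/O; switch */
        }
    }
    return 0;
}
```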
Multiprogramming has traditionally been employed to increase the resource utilisation of a computer system and to support multiple simultaneous interactive users (terminals). Compared to an operating system which supports only sequential execution, a multiprogramming system requires some form of CPU and memory management strategies.
Time-Sharing Systems
A time-sharing system is a form of multiprogrammed operating system which operates in an interactive mode with a quick response time. The user types a request to the computer through a keyboard; the computer processes it, and a response (if any) is displayed on the user's terminal. A time-sharing system allows many users to share the computer resources simultaneously. Since each action or command in a time-shared system takes a very small fraction of time, only a little CPU time is needed for each user. As the CPU switches rapidly from one user to another, each user is given the impression that he has his own computer, while it is actually one computer shared among many users. Most time-sharing systems use time-slice (round-robin) scheduling of the CPU, sketched below. Memory management in a time-sharing system provides for the protection and separation of user programs. The input/output management of a time-sharing system must be able to handle multiple users (terminals), although the processing of terminal interrupts is not time-critical owing to the relatively slow speed of terminals and users. As in most multiuser environments, allocation and deallocation of devices must be performed in a manner that preserves system integrity and provides good performance.
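Here is a minimal round-robin sketch in C: each user process gets a fixed quantum before the CPU moves on to the next. The process table and the one-tick quantum are invented for illustration.

```c
/* Round-robin (time-slice) scheduling across interactive users. */
#include <stdio.h>

struct proc { const char *user; int ticks_left; };

int main(void) {
    struct proc procs[] = { { "alice", 3 }, { "bob", 2 }, { "carol", 4 } };
    int n = 3, remaining = n;
    const int quantum = 1;                       /* one tick per turn */
    while (remaining > 0) {
        for (int i = 0; i < n; i++) {
            if (procs[i].ticks_left <= 0) continue;
            printf("%s runs for %d tick(s)\n", procs[i].user, quantum);
            procs[i].ticks_left -= quantum;      /* quantum expires: switch */
            if (procs[i].ticks_left <= 0) remaining--;
        }
    }
    return 0;
}
```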
The words multiprogramming, multiprocessing and multitasking are often confused. There are, of course, distinctions between these similar but distinct terms. The term multiprogramming refers to the situation in which a single CPU divides its time between more than one job. Time sharing is a special case of multiprogramming, where a single CPU serves a number of users at interactive terminals. In multiprocessing, multiple CPUs perform more than one job at one time.
Multiprogramming and multiprocessing are not mutually exclusive. Some mainframes and supermini computers have multiple CPUs, each of which can juggle several jobs. The term multitasking describes any system that runs, or appears to run, more than one application program at a time.
An effective multitasking environment must provide many services, both to the user and to the application programs it runs. The most important of these are resource management, which divides the computer's time, memory and peripheral devices among competing tasks, and interprocess communication, which lets tasks coordinate their activities by exchanging information.
Personal-Computer Systems
Personal computers – computer systems dedicated to a single user.
• I/O devices – keyboards, mice, display screens, small printers.
• PC operating systems were neither multi-user nor multi-tasking.
• The goal of PC operating systems was to maximize user convenience and responsiveness instead of maximizing CPU and I/O utilization.
Examples: Microsoft Windows and Apple Macintosh
Parallel Operating Systems
These systems are used to interface multiple networked computers to complete tasks in parallel. The architecture of the software is often a UNIX-based platform, which allows it to coordinate distributed loads between multiple computers in a network. Parallel operating systems are able to use software to manage all of the different resources of the computers running in parallel, such as memory, caches, storage space, and processing power. Parallel operating systems also allow a user to directly interface with all of the computers in the network. A parallel operating system works by dividing sets of calculations into smaller parts and distributing them between the machines on a network.
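To illustrate the divide-and-distribute idea on a small scale, here is a sketch in C that splits one calculation into parts, using POSIX threads on a single machine as a stand-in for machines on a network. The worker count and the toy computation are invented for illustration.

```c
/* Split a calculation into parts, compute them in parallel,
   then combine the partial results. */
#include <pthread.h>
#include <stdio.h>

#define N 1000000
#define NWORKERS 4

static double partial[NWORKERS];

static void *worker(void *arg) {
    int id = *(int *)arg;                 /* which slice of the range */
    long lo = (long)N * id / NWORKERS;
    long hi = (long)N * (id + 1) / NWORKERS;
    double s = 0.0;
    for (long i = lo; i < hi; i++)
        s += 1.0 / (i + 1);               /* toy computation          */
    partial[id] = s;
    return NULL;
}

int main(void) {
    pthread_t tid[NWORKERS];
    int ids[NWORKERS];
    for (int i = 0; i < NWORKERS; i++) {
        ids[i] = i;
        pthread_create(&tid[i], NULL, worker, &ids[i]);
    }
    double total = 0.0;
    for (int i = 0; i < NWORKERS; i++) {
        pthread_join(tid[i], NULL);
        total += partial[i];              /* combine the partial results */
    }
    printf("sum = %f\n", total);
    return 0;
}
```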
Sharing memory allows the operating system to run very quickly, but a purely shared-memory design is usually not as powerful. When using distributed shared memory, processors have access both to their own local memory and to the memory of other processors; this distribution may slow the operating system, but it is often more flexible and efficient.
Distributed Operating System
A distributed operating system is one that looks to its users like an ordinary centralized operating system but runs on multiple independent CPUs. The key concept here is transparency: the use of multiple processors should be invisible to the user. Another way of expressing the same idea is to say that the user views the system as a virtual uniprocessor, not as a collection of distinct machines. In a true distributed system, users are not aware of where their programs are being run or where their files reside; these matters are handled automatically and efficiently by the operating system.
Distributed operating systems have many aspects in common with centralized ones, but they also differ in certain ways. Distributed operating systems, for example, often allow programs to run on several processors at the same time, thus requiring more complex processor scheduling algorithms (scheduling refers to the set of policies and mechanisms built into the operating system that controls the order in which work is completed) in order to achieve maximum utilisation of CPU time.
Fault tolerance is another area in which distributed operating systems differ. Distributed systems are considered to be more reliable than uniprocessor-based systems: they continue to perform even if a certain part of the hardware is malfunctioning. This additional feature supported by the distributed operating system has enormous implications for its design.
Real-Time Systems
A primary objective of real-time systems is to provide quick response times; user convenience and resource utilisation are of secondary concern. In a real-time system each process is assigned a certain level of priority according to the relative importance of the events it processes. The processor is normally allocated to the highest-priority process among those which are ready to execute, and higher-priority processes usually preempt the execution of lower-priority processes. This form of scheduling, called priority-based preemptive scheduling, is used by a majority of real-time systems and is sketched below.
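The sketch below shows the core of priority-based preemptive dispatch in C: the ready process with the highest priority always gets the CPU, and a newly ready higher-priority process preempts the current one. The process table and priority values are invented for illustration.

```c
/* Priority-based preemptive dispatch: highest-priority ready
   process runs; a higher-priority arrival preempts it. */
#include <stdio.h>

struct proc { const char *name; int priority; int ready; };

/* pick the highest-priority ready process, or -1 if none */
static int dispatch(struct proc *p, int n) {
    int best = -1;
    for (int i = 0; i < n; i++)
        if (p[i].ready && (best < 0 || p[i].priority > p[best].priority))
            best = i;
    return best;
}

int main(void) {
    struct proc procs[] = {
        { "logger",      1, 1 },
        { "sensor_poll", 5, 1 },
        { "alarm",       9, 0 },          /* not ready yet */
    };
    int n = 3;
    int cur = dispatch(procs, n);
    printf("running %s\n", procs[cur].name);   /* sensor_poll */

    procs[2].ready = 1;                   /* high-priority event arrives */
    int next = dispatch(procs, n);
    if (next != cur)
        printf("preempting %s for %s\n",
               procs[cur].name, procs[next].name);
    return 0;
}
```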
Hard real-time system
– Secondary storage limited or absent; data stored in short-term memory or read-only memory (ROM)
– Conflicts with time-sharing systems, not supported by general-purpose operating systems.
Soft real-time system
– Limited utility in industrial control or robotics
– Useful in applications (multimedia, virtual reality) requiring advanced operating-system features.