History of Operating Systems

Operating Systems have evolved tremendously in the last few decades. The first approach to building Operating Systems, taken during the 40s through the early 60s, was to allow only one user and one task at a time. Users had to wait for a task to finish before they could specify another task, or even interact with the computer. In other words, not only were OS's single-user and single-tasking, there was no overlap between computation and I/O.

The next step in the development of OS's was to allow batch processing. Now, multiple "jobs" could be executed in batch mode: a program was loaded, executed, its output was generated, and then the cycle restarted with the next job. Although in this type of processing there was still no interference or communication between programs, some type of protection (from poorly or maliciously written programs, for instance) was clearly needed. Allowing overlap between I/O and computation was the next obvious problem to be addressed. Of course, this new feature brought with it a series of new challenges, such as the need for buffers, interrupt handling, etc.

Although the OS's from this time allowed users to interact with the computer while jobs were being processed, only one task at a time was permitted. Multiprogramming solved this, and it became the Operating System's task to manage the interactions between programs (e.g. which jobs to run at any given time, how to protect one program's memory from the others, etc.). These were complex issues that effectively led to OS failures in the old days. Eventually, this additional complexity began to require that OS's be designed in a scientific manner.

During the 70s, hardware became cheap, but humans (operators, users) were expensive. During this decade, interaction was done via terminals, through which a user could send commands to a mainframe. This was the Unix era. Response time and thrashing became problems to be dealt with, and OS's started to treat programs and data in a more homogeneous way.

During the 80s, hardware became even cheaper. It was then that PCs became widespread, and very simple OS's, such as DOS and the original Mac OS, were used. DOS, for example, was so simple that it didn't have any multiprogramming features.

From the 90s on (until today), hardware has become even cheaper. Processing demands have kept increasing since then, and "real" OS's, such as Windows NT, Mac OS X and Linux, finally became available for PCs. Operating systems are now used in a wide range of systems, from cell phones and car controller computers to huge distributed systems such as those run by Google.
