
Andrey Annenkov

Viktor IVANNIKOV, academician of the Russian Academy of Sciences, director of the Institute for System Programming of the Russian Academy of Sciences, head of the Department of System Programming at the Faculty of Computational Mathematics and Cybernetics of Moscow State University, and chairman of the Russian Association of Free Software (RASPO), told our correspondent about the level of Russian fundamental science and the problems of training personnel for the IT industry, and shared his thoughts on the operating systems of the future.

-- In the minds of the average person, system programming means operating systems, compilers...

Also database management systems (DBMS) and development environments. I won’t say that “systems programming” is an established term. In the 60s, that’s what we called what we were doing, but we were doing what you listed. When Andrei Nikolaevich Tikhonov created the Faculty of Computational Mathematics and Cybernetics (VMiK) at Moscow State University, he created three programming departments. One of them was called the Department of System Programming, headed by Mikhail Romanovich Shura-Bura. In the early 90s, he asked me to replace him in this post. What does the department do? Exactly what you listed.

-- Is another idea correct: that system programming in our country became a hopeless endeavor after the decision was made to create the ES computers, i.e. to copy the IBM System/360?

You are partly right. This applies not only to the ES computers, but also to the SM computers that copied DEC machines. This greatly limited our engineers and narrowed the scope of activity of system programmers. Although I must say that our schools on compilers and operating systems were very strong. Programming technologies also developed greatly, especially for defense applications: those were very large programs, and you had to be able to build them. As for database management systems, little was done there.

-- Why doesn't Russia have its own operating system?

There were research projects, of course. But there is no domestic demand. In recent years, Linux has become increasingly popular, and we have teams participating in this project. The development infrastructure exists, and you can work within it. One must start from reality: the Russian market for operating systems (OS) is small, and there is no reason to develop our own OS.

-- Linux is the main direction of development of operating systems, and we must take a place there - do I understand you correctly?

Yes. An operating system is no small program - some seven million lines. And around it there is application software. A new OS means having to rewrite the applications. What would be the point? Linux can be used industrially; it is quite possible to work in it. For research projects you can build your own operating systems - for example, to teach students on some small OS.

-- Operating systems are now on the cusp of change; there are many attempts to do something innovative here. Do you share this impression? Do we have a chance to do something ourselves in the field of future operating systems?

Yes, such work is going on around the world. It is related, among other things, to the need to create an operating system microkernel that would consist not of seven million lines but, say, of several tens of thousands - and such that this code really contains no errors. It is a formidable task, but for small programs it can be solved.

It also happens that a person simply wants to make an operating system, for aesthetic reasons. Why not? Why shouldn't a person come up with something new? I have encountered this a couple of times: two young men, from different cities, neither of them from Moscow, were building their own operating systems. This is extremely interesting. They certainly gained skill and a kind of inner satisfaction. One of them, from Omsk, is now a student.

-- What is the role of your institute in the industry?

I can't say that I'm happy with this role, but the level is decent. We have many joint grants with scientific institutions in Europe and the USA. Our people speak at the most prestigious international conferences. I have no feeling of inferiority - the game is equal, and in places we win. We demonstrate a good international level.

We also have quite a few contracts with leading IT manufacturers: Intel, Microsoft, IBM, Samsung. All of these projects involve the development of new technologies.

-- What is happening in the industry with personnel?

A difficult situation, a very difficult one. The market requires significantly more people than there are - there simply aren't enough. Both all over the world and here.

Student programming competitions are a specific area of activity. It is like professional sports - similar to college basketball in the US, which is played by professionals, not amateur students. Any programming competition covers a fairly limited set of tasks in a certain area: dynamic programming, working with very large numbers, and so on. The point is to grasp the meaning of a problem quickly and solve it quickly. That requires many hours of daily training, which is exactly what the winners do.

This is, of course, hugely prestigious for the country. But victories in student competitions do not in any way affect professional programming activity. Competitions are one thing; the professional level of programmers is another.

We have always had enough talented people, and programming is a very convenient field in which to express yourself, because it does not require God knows what - just as in poetry or mathematics. There is no need to wait, say, for flight tests, as an aeronautical engineer must. Especially now that there is the Internet. My generation experienced a real hunger for information: it was difficult to get scientific articles and to communicate with colleagues. Today there are no problems with information. The style of work itself has changed. We did everything from scratch; now there is open software, and you can participate in its development. There are many opportunities for self-expression.

-- So why are there not enough people? After all, it is a most attractive field of activity... What is the reason?

There are probably several reasons. One, of course, is the terrible shortage of highly qualified teaching staff. A person who teaches students must have professional experience of his own in the area he teaches. A teacher who merely retells textbooks cannot create a school.

The traditions of our - Soviet, Russian - education are that the teacher conveys personal, even life, experience. This was the strength of our teaching style. I'm not even talking about the Phystech model (Moscow Institute of Physics and Technology. -- Ed.), when they tried to include students in the research process from their junior years.

About a hundred students from Phystech and Moscow State University and forty graduate students work at our institute. But how many of them remain in the profession? About 20% - and that is a good figure. The reason is that the guys start working early. They have only just learned to write simple programs, and they are taken on full time - from the third year. But they still need to study and study!

Some, of course, are forced to do this. In our time, students earned money by tutoring, working in construction brigades, and unloading railway cars. Now there are more opportunities, and it draws you in. A student sees his friend earning, say, a thousand dollars and thinks: why am I worse? I'll go to work too. But you shouldn't do this - you need to study! The money will come in time, and there will be more of it if you spend your time studying now.

-- Could you list the names of the people who had the greatest influence in our country on the development of the field of knowledge in which you work?

In my time there were few programmers, everyone knew each other. I have already mentioned Mikhail Romanovich Shura-Bura and Andrei Nikolaevich Tikhonov. Also Andrei Petrovich Ershov, Svyatoslav Sergeevich Lavrov, Nikolai Nikolaevich Govorun, Lev Nikolaevich Korolev.

I listed the people who received academic positions (Shura-Bura, by the way, did not become an academician). But there were many interesting, incredibly talented people. Eduard Zinovievich Lyubimsky. Igor Borisovich Zadykhailo. They were great programmers.

-- I really want to pick up the book lying on your table with the words "Russian Academy of Sciences" on the cover.

Please. This is a reference book.

-- Aren't seven hundred and fifty members of the Russian Academy of Sciences too many for our country, and how do you assess the current level of our fundamental science?

During the 90s, we lost the military-industrial complex and entire industries. And I'm not sure that it will be possible to make up for it. Of course, there are losses in the Academy of Sciences. I'm not talking about the number of academicians - that's not that important.

We have lost several generations of scientists. I remember the early 90s. Our students and graduate students were leaving. We were leaving. The layer of specialists is very thin, and it has been washed away. But there were unique people... They still remain, but they are already old. The Academy has lost several generations, and this is very serious. What do we see? Lectures are given by 70-75 year old people. Can you imagine what a load this is? The work of a good lecturer is similar to the work of an artist; it is a huge physical and emotional burden. And when lectures to a stream of 200 students are given by 75-year-olds, this is not very healthy.

Let me return to the recent history of the institute. There was no internal demand in the country for our work - I mean new programming technologies. There was no money either. People emigrated or went to banks. That is when I became the director of the institute.

For me, training is one of the sacred cows. It was not for nothing that I named the companies with which we have contracts. These are expensive agreements. They make it possible to pay decent wages to our employees.

But the situation is alarming. And what is alarming is not how many members there are in the academy - even if there were five thousand of them, I wouldn't care. The point is that there are no young people. Although among the members of the academy there are still people it can be proud of.

-- You are the chairman of RASPO (the Russian Association of Free Software). What is happening in the organization today?

RASPO has gained a certain renown. I would like RASPO's activities to be closer to real affairs, to technical matters: for the organizations in RASPO to be engaged in successful software development, or in successful integration, since RASPO includes integrator companies; and for the committees created in RASPO - legal, technical, and so on - to actually work. Now is the period when everyone must understand why he should be in the association and what he can give to it and to society. A grinding-in process is underway, and it is very difficult: a process of developing mutual trust, ethical and professional. I hope the results of RASPO's activities will appear in six months or a year.


Introduction

1. Operating system

1.1 What is an operating system?

1.2 OS structure and functions

1.3 History of OS development

2. Windows Alternatives

2.1 UNIX OS

2.1.1 Development history

2.1.2 Main advantages and disadvantages

2.1.3 Use

2.2 OS/2

2.2.1 History of creation

2.2.2 Main advantages and disadvantages

2.2.3 Use

2.3 MacOS

2.3.1 Development history

2.3.2 Main advantages and disadvantages

2.3.3 Use

Conclusion

List of used literature

Introduction

In our time, information technology is entering daily life ever more firmly, and the computer has already become a familiar part of it. For most people who have had any experience with a computer, the words "icon", "window", "desktop", and "Start menu" have become familiar and understandable, and the logo of a four-color waving flag is no surprise. Many personal computer users are so accustomed to Windows that sometimes they do not even know about the existence of other, alternative operating systems, much less ask themselves the question: "What is an operating system and how does it work?" But such knowledge is not only useful in modern society; it can also help in choosing the most convenient and productive "shell" for your computer. That is why I decided to make a short review of the operating systems that are used today instead of the Windows we all know.

In my work, I mainly used three literary sources. From E. Tanenbaum's textbook "Modern Operating Systems" I took information mainly on the history of the development of operating systems. The book "Operating systems, environments and shells" by Partyka T.L. and Popov I.I. I used to define the concept of an operating system and to characterize the UNIX OS. Finally, the book by V.G. Olifer and N.A. Olifer, "Network Operating Systems", helped me in characterizing the basic functions of an operating system and its structure. Various Internet resources were also used, for example the free Internet encyclopedia Wikipedia.

My essay consists of two main chapters: operating systems, where I tried to explain what an operating system is, how it works, and what it is needed for; and alternatives to Windows, where I directly consider operating systems that are used instead of Windows. It should be noted that, in order not to burden the text with repetition and for simplicity of presentation, I used the words "computer" and "machine" as synonyms to refer to a computer in today's understanding. I considered it appropriate to make footnotes to sources only in the case of exact copying or the taking of special information, such as definitions or classifications. In all other cases, I relied on information from literary or Internet sources, retelling it in my own words and drawing my own conclusions.

My essay is not intended to find out which operating system is better. The purpose of my work is not a comparison, but a review of operating systems. This is what guided me when writing my essay. When characterizing each operating system, I tried to draw attention to its main advantages and disadvantages, the areas of its use today, and draw a conclusion about its competitiveness with Windows.

1. Operating system

1.1 What is an operating system?

First of all, it’s worth understanding what an operating system (OS) is.

An operating system is a set of programs that organizes the computing process on a computer. In simple terms, it is a program designed to hide from the user all the difficulties of "communicating" with the computer. And there are many more difficulties than it seems at first glance. Without the help of the OS, even such a simple operation as writing a file to a hard drive, which we are used to performing by pressing a few keys, would be impossible for the uninitiated. One would need to write into the hard drive's registers the address of the place where we want to save our file, the address in main memory, the number of bytes to save, and the direction of the action - in this case, writing. And that is just to write one file!

I think the importance of the invention of even the very first operating systems becomes clear, because they made it possible to save a person from communicating with the equipment directly, providing the programmer with a more convenient command system.

The OS serves as a link between a person and a computer, providing the user with a simple, file-oriented interface. The action of writing a file to disk then seems far simpler than having to worry about moving the hard disk heads, waiting for them to settle on the right place, and so on.
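To make this concrete, here is a minimal sketch in C using the POSIX calls of UNIX-like systems (the file name and message are arbitrary examples): the program simply names a file and hands the OS the data, and all the register-level work described above stays hidden:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* The OS hides all register-level details: we only name the file. */
    int fd = open("example.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }

    const char msg[] = "Hello, file system!\n";
    /* One call replaces programming the disk controller by hand. */
    if (write(fd, msg, sizeof msg - 1) < 0)
        perror("write");

    close(fd);
    return 0;
}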

This gives only a general idea of the operating system. Next, I propose to consider the OS in more detail.

1.2 OS structure and functions

Most modern operating systems are modular systems (that is, divided into separate functional parts). Of course, there is no single OS architecture, but there are universal approaches to structuring operating systems. The most general approach is to divide all its modules into two groups:

· kernel – modules that perform the main functions of the OS;

· modules that perform auxiliary OS functions.

Kernel modules manage processes, memory, I/O devices, and so on. The functions performed by kernel modules are the most frequently used, so the speed at which they execute determines the performance of the entire system. To ensure high operating speed, most kernel modules reside permanently in random-access memory, i.e., they are resident.

The remaining OS modules (auxiliary) perform useful, but not so mandatory functions, for example, checking the health of computer units, detecting device failures, etc.

It is often very difficult to draw the line between programs included in the OS and ordinary applications. It is generally held that programs running in kernel mode (to which the user has no direct access) are always part of the OS, while auxiliary programs run in user mode (and the user can change them if desired).

The kernel is the driving force of all computing processes, and a crash of the kernel is tantamount to a crash of the entire system, which is why developers pay special attention to the reliability of its code and protect it from free user intervention.

Well, now let's move on to the main functions that the OS performs as a whole. In general, they can be divided into two most important ones: the connection between man and machine and the management of the resources of the machine itself. We have already discussed the importance of the first function above, but it’s worth dwelling on the second in more detail.

Modern computers consist of a processor, memory, timers, disks, a mouse, a network interface, printers, and a huge number of other devices. The function of the OS, then, is the organized and controlled distribution of computer resources among the various programs competing for the right to use them. Indeed, imagine what would happen if three programs running on one computer all simultaneously tried to print their data on the same printer. Most likely, the first few lines on the sheet would come from the first program, the next few from the second, and so on - complete confusion. The OS restores order in such situations: it grants printer access to only one program, saves the output of another in a temporary file, and queues it for printing. Meanwhile the second program continues to work, not noticing that it is not actually sending data to the printer; the OS is, as it were, "deceiving" the program.

This was an example of temporal resource allocation. Spatial allocation is equally important: the OS allocates to each program only a part of a given resource, rather than the whole resource. The most striking example, in my opinion, is the placement of several programs in the computer's RAM. It is hard to imagine how much time command processing would take if each program were given the entire RAM and everyone else waited their turn!
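As an illustration of this "deception," here is a simplified spooler sketch in C with POSIX threads (the job format, the queue size, and printf standing in for a real printer are all assumptions for the example): programs call spool_print() and never touch the printer; a single printer thread owns the device and outputs the queued jobs one at a time:

#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define MAX_JOBS 8

static char queue[MAX_JOBS][64];                /* the spool of queued jobs */
static int head, tail, count;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t nonempty = PTHREAD_COND_INITIALIZER;

/* Any program calls this; it never touches the printer itself. */
static void spool_print(const char *text) {
    pthread_mutex_lock(&lock);
    if (count < MAX_JOBS) {
        strncpy(queue[tail], text, sizeof queue[tail] - 1);
        queue[tail][sizeof queue[tail] - 1] = '\0';
        tail = (tail + 1) % MAX_JOBS;
        count++;
        pthread_cond_signal(&nonempty);
    }
    pthread_mutex_unlock(&lock);
}

/* The only code that ever owns the printer. */
static void *printer_thread(void *arg) {
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        while (count == 0)
            pthread_cond_wait(&nonempty, &lock);
        char job[64];
        strcpy(job, queue[head]);
        head = (head + 1) % MAX_JOBS;
        count--;
        pthread_mutex_unlock(&lock);
        printf("PRINTER: %s\n", job);           /* stands in for real output */
    }
    return NULL;
}

int main(void) {
    pthread_t p;
    pthread_create(&p, NULL, printer_thread, NULL);
    spool_print("report from program 1");
    spool_print("report from program 2");       /* queued, never interleaved */
    sleep(1);                                   /* let the printer drain */
    return 0;
}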

The presence of all these functions once again proves the necessity and importance of operating systems. Without an OS, a computer for the user is just a pile of metal that is impossible to approach.

Based on the main functions of the OS, its development is guided by certain requirements:

· Modularity;

· Possibility of developing the software system;

· Easy to learn;

· Flexibility and adaptability;

· Compatibility of software of various computers within the same hardware platform;

· Minimum human intervention;

· Parametric versatility;

· Functional redundancy (the presence in the system of several programs that implement the same function);

· Functional selectivity (the ability to configure the system for a specific user).

You can easily imagine what a long and interesting path the OS has gone through in its development, and what problems the developers faced in order to satisfy all the requirements presented above.

1.3 History of OS development

Of course, the development of operating systems is closely tied to the development of computers themselves. Early computers had no operating systems, so starting and stopping programs and connecting external devices were all done by hand, and programming was carried out exclusively in machine language. At that time, machines were used more for research purposes than for solving specific practical problems. By the beginning of the 50s, with the introduction of punched cards - special cards onto which the program was transferred - the situation changed somewhat, but in general the maintenance and use of computers remained unacceptably difficult.

The first step towards easier communication with the machine was made in the late 50s with the invention of batch data processing. The idea was to collect a complete batch of tasks (a deck of punched cards), transfer them to magnetic tape, and then use a special program (a prototype of modern operating systems) to launch them sequentially for execution without operator participation. Such processing significantly reduced the time spent on auxiliary actions in organizing the computation itself. People no longer had to run around the hall to deliver the results of data processing: results were now output to the printer offline (i.e., without communication with the host computer). However, there was also a significant drawback: because programmers lost direct access to the computer, correcting errors in programs took much more time.

The next step on the path to modern operating systems was the invention of multitasking. Previously, the main processor could sit idle most of the time waiting for I/O from a magnetic tape or another device. Naturally, this was very wasteful: in commercial data processing, such idling could take up 80% of the working time. The solution was to split memory into several parts, each given a separate task. Now the processor did not wait for an I/O operation to complete but switched to a program that was already ready for execution.

Following multitasking, the time-sharing mode appeared. This mode was designed for multi-terminal systems, where each user works at his own terminal. For example, twenty users could be registered on the system, and if seventeen of them were thinking, drinking coffee, or going about their business, the central processor would be made available to the three users who wanted to work on the machine. In such systems, however, the equipment was used less efficiently - the price of convenience.

All these innovations naturally demanded an OS that could be used on both large and small machines, with many peripheral devices or with few, in the commercial field and in scientific research. Meeting all these requirements was very difficult. The operating systems written then contained millions of lines, were very complex, and contained thousands of errors. Yet they too contributed to progress: some of the techniques used in the first operating systems are still alive and present in modern ones.

By the mid-70s, minicomputers became widespread. Their architecture was greatly simplified and their resources limited, and this was reflected in their operating systems, which became more compact and much closer in concept to modern operating systems. The most common operating system of that time was UNIX, whose history we will consider later.

The real revolution was the arrival of silicon chips in the early 80s and, as a consequence, the appearance of the first personal computers (PCs). From an architectural point of view, PCs were no different from minicomputers, but their cost was much lower. This allowed them to be purchased not only by universities, enterprises, and government agencies, but also by ordinary people. The then-popular UNIX OS was too complex for non-professionals. The task was to create a user-friendly interface, i.e., one intended for a user who knows nothing and does not want to know anything. This is where the well-known MS-DOS (Microsoft Disk Operating System) appeared. It should be noted that initially MS-DOS had a command-line interface, which was not very convenient. Much later, a graphical environment for MS-DOS was created, called Windows, which subsequently became an independent OS. It was Windows that embodied the idea of a graphical interface consisting of windows, icons, various menus, and a mouse.

From the history of OS development it is clear that the main task of an operating system has always remained to provide convenient interaction between man and machine. Modern operating systems seem to cope with this task as well as possible. Yet year after year new versions appear, more advanced and with new capabilities, and the history of operating system development continues.

2. Windows Alternatives

2.1 UNIX OS

2.1.1 Development history

UNIX was originally developed by Ken Thompson, an employee of Bell Laboratories, in 1969 as a multitasking system for minicomputers and mainframes (huge computers the size of a room).

A huge role in UNIX becoming so popular, I believe, was played by the ability to port the system to different computers. Before this, programmers had to rewrite systems for each specific machine, which was, of course, no fun. UNIX solved this problem: it was written in the high-level language C, which made it possible to release a single version of the OS that could then be compiled (translated) on different machines.

In 1974, UNIX was transferred to universities for "educational purposes." Moreover, it was provided with a complete set of source texts, which gave its owners the opportunity to modify it endlessly. This is how UNIX found commercial use and became one of the most common operating systems. The only problem was that each manufacturer added its own non-standard improvements, so for a very long time it was impossible to write a package of programs for UNIX that could run on any version of it. The solution was the creation of the POSIX standard, which incorporates the most common procedures found in most versions of UNIX. This simplified the situation somewhat and brought some unity to the development of UNIX versions.

Today there are a huge number of clones of the UNIX system, including Linux, MINIX, System V, Solaris, and XENIX, but all these operating systems preserve the basic principles of implementing algorithms, data structures, and system calls.

The most interesting of these is the Linux OS. What makes this UNIX clone special is its business model: it is free software. Unlike Windows, Mac OS, and commercial UNIX-like systems, Linux has no geographical development center and no organization that owns the system. Programs for Linux are the result of the work of thousands of projects. Many projects bring together hackers from all over the world who know each other only through correspondence. Anyone can create a project or join an existing one, and, if successful, the results will become known to millions of users. Users take part in testing free software and communicate directly with developers, which makes it possible to find and fix errors and implement new features quickly. This approach accounts for the economic efficiency and popularity of Linux. Today this OS is used in many devices, ranging from mobile phones and routers to unmanned military vehicles.

Based on the diversity of this OS family, we can conclude what an important role UNIX played in the development of operating systems and, without exaggeration, call it historically one of the most important.

2.1.2 Main advantages and disadvantages

The main advantages of UNIX were initially inherent in the idea that was followed during its creation. “The operating system must rely on a small number of non-hardware-specific concepts that collectively provide a mobile application development and execution environment.” Based on this, we can highlight two main “advantages” of the UNIX OS: simplicity and mobility. This is perhaps the main thing that distinguishes it from other operating systems.

By simplicity we mean that UNIX, due to the compactness of the kernel, is undemanding to computer resources (unlike Windows). In addition, UNIX contains a significant number of other advantages.

First, a simplified file model that allows you to create an unlimited number of subdirectories on your hard drive.

Secondly, it rests on only six basic system calls. The first is fork: by performing a fork, a process creates an exact copy of itself, yielding two identical copies. The spawned copy most often then replaces itself with a new program - this is the second basic operation, exec. The remaining four calls - open, close, read, and write - are for accessing files. These six system calls represent the simple operations out of which UNIX is composed (a short sketch of them in action follows below). There are, of course, countless other commands, but knowing these six lets you perform basic operations in a UNIX environment with ease.
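A minimal sketch of these calls, in C (the program launched by the child, /bin/echo, is an arbitrary example): the parent forks, the child replaces itself with a new program via exec, and the parent waits:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();              /* 1: duplicate the current process */
    if (pid < 0) { perror("fork"); exit(1); }

    if (pid == 0) {
        /* 2: the child replaces itself with a new program (exec) */
        execl("/bin/echo", "echo", "hello from the child", (char *)NULL);
        perror("execl");             /* reached only if exec failed */
        exit(1);
    }

    wait(NULL);                      /* the parent waits for the child */
    return 0;
}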

Thirdly, a significant simplification in UNIX was the use of a fairly developed command language in the basic interface of the system. Even today, with the advent of numerous graphical shells (for example, the X Window System), there are many users who prefer the original command-line interface.

UNIX portability means that it can be used on different hardware platforms. In addition, it is possible for several users to run programs from one machine at once, which facilitates the creation of networks. By the way, thanks to this principle of multi-terminality, UNIX played a big role in the development of the Internet.

Of course, the UNIX operating system is not ideal. One can find examples of dozens of other operating systems that are designed more elaborately, provide more powerful programming tools, and so on. The main disadvantages of the system include:

· Real-time mode is not supported (a type of multitasking in which the operating system itself transfers control from one executing program to another);

· Weak resistance to hardware failures;

· Reduced efficiency when solving similar problems;

· Poorly developed means of interaction and synchronization of processes.

In addition, the latest versions of UNIX have been noted to be overloaded.

However, despite all its shortcomings, the UNIX family remains one of the most popular on the market and in the future can be a good competitor to Windows.

2.1.3 Use

Originally created to serve mainframe computers, today UNIX-like operating systems are mainly used to service servers, but there are versions that are quite suitable for home or office use. Also, UNIX, thanks to its powerful ability to combine standard commands, is ideal for creating applications.

UNIX is good for a skilled user, because it requires knowledge of the principles behind the processes occurring in it. It is therefore unlikely to suit "beginners." However, real multitasking and strict memory separation ensure high reliability of the system, and if you need a reliable, flexible OS, UNIX suits you one hundred percent. This is why the UNIX line is so popular these days: in terms of reliability, few modern operating systems can compare with it. It is no coincidence that armed forces and government organizations often give their preference to UNIX-like operating systems.

So, having originated almost as a toy project, today the UNIX family of operating systems is successfully implemented in a variety of areas of activity: from banks and government agencies, to offices and supermarkets.

2.2 OS/2

2.2.1 History of creation

The OS/2 operating system began as a joint development by IBM and Microsoft (1984). However, the project subsequently fell apart: Microsoft remade its version of OS/2 into Windows NT, while OS/2 itself continued to be developed at IBM, which nonetheless never paid enough attention to this operating system. In general, the competition for leadership in the OS market between these companies greatly influenced the further development of the operating systems of both Microsoft and IBM.

OS/2 was originally intended as a replacement for MS-DOS. Even then it was clear that MS-DOS had a number of significant disadvantages related to limited memory and file systems, and could not use the full potential of computers of that time. The concepts under which the new OS was developed were promising: OS/2 was supposed to support preemptive multitasking, virtual memory, graphical user interface and run DOS applications. However, most of these plans were not implemented.

The first version, OS/2 1.0, released in 1987, contained most of the features needed for a multitasking OS. However, it had no graphical interface, and there were no drivers for many popular printers and other devices. In addition, it was quite demanding of computer resources; execution of and interaction with DOS applications was very slow, and sometimes impossible; and at any given time the user could work with only one application, while the remaining processes ran in the background. All these shortcomings prevented OS/2 from "exploding" onto the operating system market the way UNIX had. Most users preferred the familiar, if imperfect, MS-DOS, or switched to Windows 3.1, released by Microsoft around the same time.

I believe IBM simply rushed the release of the first versions of OS/2. Otherwise, this operating system could have competed with the Windows and MS-DOS line.

Of course, with each new version OS/2 became better and better. Already in OS/2 v2.00 (1992) the main shortcomings of the first version were eliminated; moreover, it was the first accessible, working 32-bit operating system for personal computers, which undoubtedly attracted attention to it in the OS market. This was followed by the release of fairly successful network versions of OS/2 (for example, Warp 3, Warp Connect, Warp 4). From this point on, OS/2-like operating systems were developed more and more as network operating systems.

In 1997 there were good reasons to say that OS/2 was living out its days as an operating system: IBM officially announced the withdrawal of OS/2 from the consumer market, the OS/2 development department was disbanded, and users were advised to move to other operating systems. However, seeing that the world was increasingly immersed in business and the Internet, IBM nevertheless returned to supporting OS/2-like systems and in 1999 introduced a new version: Warp 4.5 Server for e-business (Aurora).

Thus, the OS/2 family of systems has very real development prospects and it is at least premature to talk about the disappearance of this OS from the market.

2.2.2 Main advantages and disadvantages

It is quite difficult to single out general advantages of the OS/2 family, because each version has its own pros and cons, which may be absent in subsequent upgrades. However, I think the following can be considered common to all versions:

· powerful support for Internet tools and work in networks (especially for network versions);

· stable operation of the system core, which means reliability.

The main and biggest disadvantage of OS/2 is the very small amount of software and applications written for it. In part, I think, this is due to the policies of IBM itself: at the very beginning of OS/2's development, IBM did not pay enough attention to the system and barely cooperated with software developers. It is also surprising that even today drivers for this system are not available on the official IBM website. In addition, no version of OS/2 comes with source code, i.e., IBM, despite numerous requests from users, deprives them of the opportunity to develop the system independently, as is done in the case of Linux. (In fairness, it is worth noting that a new version of OS/2, called osFree, is currently being prepared for release, and it does promise open source code.) The reason for such a strange attitude of IBM towards its own creation remains a mystery to me.

A relative disadvantage of the system is the rather difficult and confusing process of installing the OS on a computer. Although, for experienced users this is unlikely to be a problem.

Otherwise, OS/2 is a stable system that confidently occupies its (albeit small) niche in the operating system market.

2.2.3 Use

Today, many of the largest corporations in Europe trust OS/2 to manage their computer networks, but it should be noted that OS/2 is not widely used in Russia. OS/2 never enjoyed particular popularity as a home operating system, remaining in the shadow of Windows.

Most often, OS/2 is used on servers, where reliability and performance are required. Thanks to its stability, OS/2 is used in the banking sector as an operating system for ATMs. OS/2 is also convenient where large volumes of information need to be processed, for example at weather stations or in scientific research. Less often, this system is used for application development. It is interesting to note that OS/2 has gained some popularity among gamers, because its rate of application conflicts is significantly lower than that of the Windows line.

So, we have met another alternative to the Windows family. However, I doubt that the OS/2 family can significantly displace Windows in the OS market, at least for today. This is primarily due to the small amount of software for this OS, and hence its low popularity among PC owners. Still, one should not treat OS/2 with disdain and write it off, because once IBM pays enough attention to its development, it will immediately reveal its full potential.

2.3 MacOS

2.3.1 Development history

It is worth saying right away that MacOS is intended for installation on computers manufactured by Apple. The peculiarity of these computers is that both the software and the "internals" of the machine are produced by one company, namely Apple. This approach achieves maximum balance between the software and the hardware it will run on, which in turn virtually eliminates the hardware conflicts we so often encounter with the IBM PC. However, such computers cannot be called ideal: they are monolithic, i.e., it is almost impossible to connect new devices or upgrade old ones. This, I believe, can be a serious drawback for some users, especially those who are used to assembling their computers themselves.

It is important to note that it was the Macintosh (as Apple's computers are called) that was among the first personal computers, and MacOS was the first commercial operating system to offer the user not a command-line interface but the graphical one familiar to us today, with windows, folders, icons, and the mouse pointer. The release of this operating system was a real revolution in the PC world, and many of its techniques became the basis for future operating systems. For example, the Windows GUI is almost identical to the MacOS graphical interface. So we can safely say that MacOS is, in a sense, the progenitor of Windows.

The first version of MacOS was released in 1984 along with the first Macintosh personal computer from Apple. It took up only 216 KB of disk space and worked even after simple copying from one computer to another. But such a product was completely unprotected against counterfeiting, so the developers devoted all further work not only to technical improvement, expanded functionality, and stability, but also to protection. The main disadvantage of the first version was that a single "frozen" program brought down the entire system, i.e., there was no preemptive multitasking. This deficiency was corrected in subsequent versions of the OS. After the first version of MacOS, nine modifications were released, and with each version MacOS became more colorful, more impressive, easier to use, and more reliable.

To date, the latest version of this operating system is Mac OS X, which has absorbed all the best from previous versions and, in my opinion, can rightfully be called one of the most convenient operating systems.

2.3.2 Main advantages and disadvantages

The debate about which is better, the IBM PC platform or the Macintosh, has been going on for a long time. From my point of view, the question of the pros and cons of Macintosh computers, and therefore of the MacOS operating system, is quite relative.

Traditionally, the disadvantages of MacOS include the high price. Yes, indeed, prices for Apple computers are almost twice those of conventional IBM PCs. But for this money you get a beautiful, excellently built computer with its own personality and a modern operating system designed with all the latest technologies and scientific achievements in mind. At the same time, MacOS was created specifically for Macintosh computers, which lets you use the hardware's capabilities one hundred percent, rather than overpaying for new features you may never get to use.

The second disadvantage is the limited model range of Macintosh computers. It turns out that Apple pushes the user into a certain framework: to enjoy all the benefits of MacOS, he simply must buy himself a Macintosh. On the other hand, when you come to the store you will not have to think long about which Macintosh to choose, and the quality of each of them will be at the highest level.

Another unpleasant problem is the closed nature of MacOS, which above all shows in the lack of software for it from third-party developers. Some important software products are still not written for the Macintosh, and gamers will not be able to have much fun, since games are developed first for Windows and only then for MacOS, and some games you will not find at all. But time does not stand still: organizations are appearing that develop software products for MacOS, and well-known software developers are interested in making their products work on Macintosh computers. Most importantly, Apple's latest version of MacOS includes the Boot Camp application, which makes it easy to install the Windows operating system on Macintosh computers and run any software on them.

Among the undoubted advantages of MacOS, I think, is the absence of conflicts between software and hardware, which the same Windows cannot boast of at all, and almost complete protection against viruses, worms, and other evil spirits, because the number of malicious programs capable of infecting MacOS is close to zero. Therefore, I believe this operating system still has more advantages than disadvantages.

The debate about which is better can continue ad infinitum, but if you ask those who took the plunge and purchased a Macintosh computer whether they would agree to exchange it for another, most likely you will get a negative answer. Macintosh people love their computers. This can be explained by the fact that Apple management creates its products primarily for people. Their main strategy is beauty and convenience. In addition, all their developments keep up with the times, and are even a little ahead of it. When you buy a Macintosh computer running MacOS, you can be sure that it will not become obsolete in six months, but will be relevant for a long time.

2.3.3 Use

If we take into account all the advantages of MacOS, the question immediately arises why it is still not as widespread as its main competitor, the well-known Windows OS. The answer follows from the disadvantages listed above: high price, lack of software, limited model range, and so on. Therefore, most users prefer the familiar IBM PC configuration with its equally familiar Windows.

However, despite this, MacOS still gained considerable popularity in the business sphere and among professionals involved in computer graphics and printing.

Based on this, I think the time is not far when Apple computers with the MacOS operating system will become so popular (and they have all the prerequisites for this) that they will compete with Microsoft with its Windows OS.

Conclusion

So, here we are, finishing our review of Windows alternatives. Of course, there are many operating systems besides the ones in my work that can replace Windows; I tried to consider only the most widely used. We can say with certainty that there are no "bad" or "good" ones among them. Each of the operating systems discussed has its pros and cons, and their use depends on the field of application and, accordingly, the tasks assigned to them. Some operating systems are ideal for processing large volumes of information and are reliable, for example the OS/2 line. Others are more accessible, such as Linux. Still others delight with their color and polish, for example MacOS.

Of course, it is hard not to agree that Microsoft's brainchild will long remain the leader on the OS market, especially among "home" operating systems. There are understandable reasons for this: mass distribution, accessibility, ease of use, and so on. Nevertheless, there are quite worthy competitors that are also suitable, among other things, for home use. The most striking of these, in my opinion, is MacOS. This system has its drawbacks, but they all fade against the background of its convenience and reliability. Besides, Windows is not an ideal system either: application conflicts alone say a lot, and Windows' demands on hardware resources cannot be called low.

In any case, when choosing an operating system, you should not be guided by fashion trends. As I already said, you must first of all proceed from the tasks that the OS must perform. After all, as we found out at the very beginning of our work, the operating system is the main link when a person works with a computer. The success of this work, and simply its convenience, can greatly depend on the choice of OS.


List of used literature

1. Olifer V.G., Olifer N.A. Network operating systems. – St. Petersburg: Peter, 2002. – 544 p.

2. Partyka T.L., Popov I.I. Operating systems, environments and shells: Textbook. – M.: FORUM: INFRA-M, 2003. – 400 p.

3. Tanenbaum E. Modern operating systems. 2nd ed. – St. Petersburg: Peter, 2002. – 1040 p.

4. Kuznetsov S. "UNIX is dead, but I'm alive." – Article on the Internet (http://www.citforum.ru/database/articles/art_7.shtml)

5. Purpose and functions of the operating system. – Article on the Internet (http://sapr.mgsu.ru/biblio/ibm/contents/nazn.htm#UNIX)

6. www.maclinks.ru – a site dedicated to MacOS

7. Wikipedia – the free encyclopedia (www.wikipedia.org)



Lecture No. 5

Operating system

1. Purpose and main functions of the operating system.

By the term "operating system" we mean a complex of programs whose function is to control the use and distribution of the resources of a computing system. A computing system has physical resources, that is, resources associated with real hardware (magnetic disks, RAM, processor time). For its successful functioning it also has logical (sometimes called virtual) resources, that is, resources that do not exist as real equipment but are implemented as facilities provided to the user. We will simply call physical and logical resources together the resources of the computing system.

Any operating system (OS) operates with certain entities which, together with the methods of managing them, largely characterize its properties. Such entities may include the concepts of file, process, object, and so on. Each OS has its own set of such entities. For example, in Windows NT such an entity is the concept of an object, and all possible functions are provided through the management of this entity. If we look at UNIX, its key entities are, first of all, the concept of a file, and secondly, the concept of a process.

A process is an entity present in practically every OS. A process is a program that has ownership rights to resources. Consider two programs (that is, the code and the data they use) and all the resources that belong to each of them (these could be: RAM space, data on an external storage device, ownership rights to other resources such as communication lines). If the sets of resources belonging to the two programs coincide, we cannot speak of these programs as two processes - they are one process. If each program has its own set of resources, and these sets may intersect but do not coincide, then we speak of two processes.

In the case where the resource sets of several processes have a non-empty intersection, the question arises of how to use these so-called shared resources. We touched on this in the last lecture: recall the example with the printing device. We may have several processes, each of which has the printing device among its resources and can access it at any moment with a request to print some information. Synchronization of processes, illustrated by the example of the printing device, showed us one of the functions of the OS: managing the functioning of processes. Let us see what is understood by process management.

Process management includes:

1. Management of central processor time.

2. Management of swapping and the input buffer.

3. Management of shared resources.

These are the basic problems of process management. The first is managing the use of CPU time, or, as it is sometimes called, CPU scheduling: deciding at which moment which task or process owns the CPU - that is, on which process the CPU is currently running.

The second is management of swapping and the input buffer. Suppose a large number of people, for example an entire course of students, are sitting at computers and all simultaneously launch tasks as processes. A great many tasks arise in the system (guaranteed more than a hundred), and the computing system cannot accept a hundred tasks for multiprogrammed execution at once - that is too many. In this case a so-called task input buffer, or process input buffer, is formed: a buffer holding the processes that are waiting for the processor to begin executing them. The problem then arises of the order in which processes are selected from this buffer for execution. This is the buffer scheduling problem.
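As a toy illustration of this choice (a sketch in C; the job names and time estimates are invented for the example), here are two possible strategies for selecting the next process from the input buffer, first-come-first-served and shortest-job-first:

#include <stdio.h>

/* Toy input buffer: jobs waiting for the processor. */
struct job { const char *name; int est_time; };

/* FCFS: take the job that arrived first (index 0). */
static int pick_fcfs(const struct job *buf, int n) {
    (void)buf;
    return n > 0 ? 0 : -1;
}

/* SJF: take the job with the smallest estimated run time. */
static int pick_sjf(const struct job *buf, int n) {
    int best = -1;
    for (int i = 0; i < n; i++)
        if (best < 0 || buf[i].est_time < buf[best].est_time)
            best = i;
    return best;
}

int main(void) {
    struct job buffer[] = { {"J1", 9}, {"J2", 2}, {"J3", 5} };
    printf("FCFS picks %s\n", buffer[pick_fcfs(buffer, 3)].name);  /* J1 */
    printf("SJF  picks %s\n", buffer[pick_sjf(buffer, 3)].name);   /* J2 */
    return 0;
}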

Now consider the swap scheduling problem. The processor is handling several processes, and we face the task of freeing real RAM for other tasks. It then becomes necessary to move some of the tasks being processed to an external storage device. By what algorithm do we swap these tasks out? What will the swapping strategy be? We could, for example, swap out every even-numbered task. How to organize swapping more or less profitably - that is the problem.

Third is the management of shared resources. There is a set of resources to which access at certain moments must be organized on behalf of different processes - the same conflict as with the printing device. One of the functions that largely determines the properties of an OS is ensuring the organization of interaction between processes and their use of common resources. The problem in the printer example is easily solved, but if two programs share a common fragment of RAM, managing such a shared resource is a difficult task.
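A minimal sketch of the standard tool for such management - mutual exclusion - is shown below in C with POSIX threads; the shared counter stands in for a shared fragment of RAM, and the iteration counts are arbitrary. The mutex serializes access so that two concurrent activities cannot corrupt the shared data:

#include <pthread.h>
#include <stdio.h>

static long shared = 0;                 /* stands in for a shared RAM fragment */
static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&m);         /* only one thread may enter */
        shared++;                       /* unprotected, updates would be lost */
        pthread_mutex_unlock(&m);
    }
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("shared = %ld\n", shared);   /* always 200000 with the mutex */
    return 0;
}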

Now let us look at the structure of the OS. Almost every OS has the concept of a kernel. The OS kernel is usually its resident part, that is, the part of the OS that does not take part in swapping (it is always present in RAM) and operates in OS mode, or supervisor mode (the specialized mode we talked about in the last lecture). The kernel includes the basic means of managing the main entities specific to the given OS, and may also include a set of programs that control certain physical devices. Kernel functions include, in particular, interrupt handling. Programs that manage resources - physical or logical devices - we will sometimes call drivers. For example, the OS kernel should include a driver for random-access memory.

The OS as a whole can be pictured as a hierarchy of levels of the computing system. The first level mainly consists of drivers for physical devices. The next level is logical control: here, file management drivers may appear in our scheme; they are in fact tied to logical disk management drivers, which in turn are tied to the drivers of the real physical devices, and so on. It is not at all necessary that all OS components operate in supervisor (OS) mode: many components that are logically remote enough from the kernel can work in ordinary user mode. Nor is it necessary that all these OS components run in resident mode; typically, many functions do not require this.

Now let us move on to a more detailed consideration of the main functions of the OS.

Controlling CPU time usage.

Many real operational properties of an OS depend, in fact, on which algorithm for selecting the task to receive CPU control is implemented in it. The choice of algorithm is determined almost entirely by the performance criteria used to evaluate the effectiveness of the OS's operation. We will therefore consider CPU time management against the backdrop of a review of OS types.

First situation. We have a large number of tasks or programs requiring a large amount of the computing system's capacity. These are the so-called computational (number-crunching) tasks: they demand a great deal of computation and few accesses to external devices, and they must all be executed on one computing system. What is the efficiency criterion for the system's operation when executing such a batch of tasks? In other words, which set of parameters can we take and say: if they are high, things are good, and if they are low, things are bad? For such a situation the efficiency criterion of the computing system is the degree of CPU utilization: if the CPU idles little (that is, it rarely sits in standby mode while every process is busy with an exchange, or while the OS itself eats up the time), then we can say the system works efficiently. This is achieved by the following scheduling algorithm.

We launch for processing as many of our tasks as the OS permits (either the maximum possible, or all of them), which is provided by the multiprogramming mode. The CPU time scheduling algorithm in this case is: if the CPU is allocated to one of the processes, that process occupies the CPU until one of the following situations occurs:

1. An access to an external device.

2. Completion of the process.

3. A recorded fact of the process looping.

As soon as one of these situations arises, control is transferred to another process. The number of control transfers from one process to another is minimized: since on every transfer of control the OS must perform a set of certain actions, which is lost time, these losses are kept to a minimum. This mode of OS operation is called batch mode, and an OS operating in this mode is called a batch OS.

Now imagine a situation where a significant number of people are in a computer lab and each of them is editing some text. Each terminal has its own copy of a text editor associated with it. Let us see what happens to the system if we apply the scheduling algorithm stated for the first case. Suppose one of the users has dozed off slightly at the terminal and shows no activity. CPU time will then stay attached to his process, because this process performs no exchanges and does not complete: the editor simply sits ready for work. Meanwhile all the remaining users are forced to wait until the sleeper wakes up. The result is a freeze. This means that an algorithm that is good for the first case is unsuitable for this system even on the most powerful machine.

Therefore, for systems that solve the problem of providing computing services to a large number of users (interactive tasks), other algorithms are applied, based on other performance criteria. For such a system a suitable criterion is the user's waiting time: from the moment he issued a request to perform some action to the moment of the system's response to that request. The more efficiently the system operates, the smaller the average waiting time in the system.

Let us consider the situation for the second case. The system contains some number of processes, and the scheduler's task is to distribute CPU time so that the system's response time to a user request is minimal, or at least guaranteed. The following scheme is proposed. The system uses a certain parameter Δt, called the time quantum (in general, a time slice is a value that may be changed when the system is configured). The whole set of processes under multiprogram processing is divided into two subsets. The first subset consists of those processes that are not yet ready to continue execution: for example, processes that have requested an exchange and are waiting for its result. The second subset consists of processes that are ready for execution. The work proceeds as follows. The process that currently occupies the CPU will own it until one of the following events occurs:

1. An exchange request is issued.

2. The process completes.

3. The time quantum Δt allocated to this process is exhausted.

When one of these events occurs, the OS scheduler selects some process from among those ready for execution and hands it the CPU. How it chooses this process depends on the scheduling algorithm used in the particular OS. For example, the process may be chosen at random. The second way is a kind of sequential traversal of the processes: we first run one of the processes; then, when the CPU becomes free, CPU time is given to the next ready-to-run process in order. A third criterion for selecting the next task may be the length of time a process has gone unserviced by the CPU; in this case the system can choose the process with the longest such time. These algorithms are implemented inside the OS, which means they must be simple, otherwise the system will spend its time working inefficiently on itself (although such systems exist: in particular, the Windows family suffers from this). A sketch of the last selection rule follows below.
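To make that rule concrete, here is a minimal sketch in C; the process structure and its field names are illustrative assumptions for this sketch, not taken from any real kernel:

    #include <stddef.h>

    /* Illustrative process descriptor; the fields are assumptions. */
    struct process {
        int  ready;          /* 1 = ready to run, 0 = blocked on an exchange */
        long last_serviced;  /* moment the process last owned the CPU        */
    };

    /* Pick the ready process that has gone unserviced by the CPU longest. */
    struct process *pick_next(struct process *tbl, size_t n)
    {
        struct process *best = NULL;
        for (size_t i = 0; i < n; i++) {
            if (!tbl[i].ready)
                continue;
            if (best == NULL || tbl[i].last_serviced < best->last_serviced)
                best = &tbl[i];
        }
        return best;         /* NULL means no process is ready */
    }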

This type of OS is called a time-sharing OS. It works in a mode that minimizes the system's response time to a user request. Ideally, because the response time is minimal, the user should be given the illusion that all the system's resources are provided to him alone.

Now let us look at the next problem. Suppose we have an airplane controlled by an autopilot that is performing a descent. Every airplane carries an instrument that measures the altitude from the aircraft to the surface of the earth. The aircraft operates in a mode where its control functions are carried out by a computer according to a given program. So, if we have an autopilot system and the plane is descending, this system must control the flight altitude. The central computer of this aircraft may be solving several problems: it may control the flight altitude, the fuel level in the tanks, various indicators of engine performance, and so on, each of these functions being managed by its own process. Suppose we have a batch OS, and it is carefully monitoring the fuel level in the tanks. Then, obviously, an emergency arises, because the plane keeps descending but the OS does not notice.

Now suppose we have a time-sharing system. One of the qualities of time-sharing systems is a certain inefficiency, due to the fact that the system performs a large number of switches from process to process, and this operation is quite labor-intensive. The same situation: the altitude approaches zero, while the OS is busy rebuilding its bookkeeping tables. This option is not suitable either.

To solve problems of this kind, special scheduling tools are needed. In this case a so-called real-time OS is used, whose main criterion is the guaranteed response time of the system to the occurrence of one or another event from a set of predefined events. That is, the system has a set of events to which it will react in any situation and process within a predetermined time. For our example such an event could be the arrival of data from the altitude sensor. In reality, OSs of this class use fairly simple algorithms: all scheduling rests on this one criterion, i.e. it is guaranteed that an event will be processed within a time not exceeding a certain threshold. But a real-time OS usually has its own specific structure, determined not only by this simple scheduling algorithm but also by an internal reorganization of the system.

Drawing a line under the function of managing CPU time usage and CPU scheduling, I draw attention to two facts. The first fact is that the algorithms implemented in the CPU time scheduling system largely determine the performance properties of the computing system; I deliberately gave examples suggesting the use of different OSs for different purposes. The second fact: we looked at three typical types of OS: batch processing systems, time-sharing systems, and real-time systems. Today we can say that a real-time system is a separate class of OS. Windows will not be trusted to manage any objects for which guaranteed real time is critical; neither will Solaris or Linux, etc., because these systems are not real-time systems.

The first two modes, batch and time-sharing, can be emulated on such general-purpose OSs. In reality, large and serious operating systems are mixed systems: their CPU scheduling contains both algorithms for managing computational tasks and algorithms for managing interactive or debugging tasks, which need only a little CPU.

An example of such a CPU scheduling organization is the following scheme, with the scheduler built on two levels. Suppose the set of tasks may contain, say, computational tasks and interactive tasks. The first level determines the priority between these two classes of tasks and devotes the CPU either to a computational task or to an interactive one; the second level determines what we talked about before, i.e. how to select a task within one class and how to interrupt it. Such a mixed system can work as follows. The first level of scheduling operates by this principle: if at the moment there is not a single interactive task ready for execution (a very realistic situation if the users are editing text), then the CPU is handed to the computational tasks, with one added condition: as soon as at least one interactive task appears, the computational task is interrupted and control is transferred to the block of interactive tasks (a sketch follows below). That is all concerning the first process control function.
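A minimal model of that two-level decision in C; the structure and all names are assumptions made for this sketch:

    #include <stddef.h>

    /* Illustrative two-level scheduler model; all names are assumptions. */
    enum task_class { COUNTING, INTERACTIVE };

    struct task {
        enum task_class cls;
        int ready;               /* 1 if ready to run */
    };

    /* Level 1 picks the class (interactive preempts counting);
       level 2 scans within that class (simplified to first-fit). */
    struct task *schedule(struct task *tbl, size_t n)
    {
        /* level 1: any ready interactive task wins the CPU */
        for (size_t i = 0; i < n; i++)
            if (tbl[i].ready && tbl[i].cls == INTERACTIVE)
                return &tbl[i];
        /* otherwise give the CPU to a computational task */
        for (size_t i = 0; i < n; i++)
            if (tbl[i].ready && tbl[i].cls == COUNTING)
                return &tbl[i];
        return NULL;             /* nothing is ready */
    }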

Managing swapping and the input buffer.

Scheduling algorithms are needed here too, but they are less critical. First note: in real systems the swap buffer (the space on external media to which information is swapped out of RAM) and the process input buffer are often combined. Second note: modern OSs are rather "lazy": swapping is often carried out not in units of process memory blocks; instead the whole process is swapped out. Two questions arise here: what is the criterion for swapping a process out, and what is the criterion for selecting from the buffer the process we need for multiprogram processing? The simplest option is to use the time spent in one state or the other. In the first case, if we have decided to swap some active process out into the swap area, we can take the process that has remained in the processing state for the longest astronomical time. The reverse choice can be symmetrical: we can take from the process input buffer the process that has been sitting there the longest. These are in fact simple and realistic scheduling algorithms, and they can be modified according to various criteria. One such criterion divides all tasks into categories: there may be OS tasks, which are considered first (with the same time-spent estimate applied among them), and then all the other tasks. This model resembles the injustice of life in a society where there are all-powerful people who are permitted everything, and there are people who can wait. A sketch of the residence-time rule follows below.
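A minimal sketch of the residence-time rule in C (the structure and field names are assumptions): the swap-out victim is the resident process that entered its state earliest, and, symmetrically, the candidate taken from the input buffer is the process that has waited there longest.

    #include <stddef.h>

    /* Illustrative descriptor; the fields are assumptions for this sketch. */
    struct proc {
        int  resident;      /* 1 = in RAM being processed, 0 = in input buffer */
        long entered_state; /* moment the process entered its current state    */
    };

    /* With resident == 1 this picks the swap-out victim; with
       resident == 0, the longest waiter in the input buffer. */
    struct proc *oldest(struct proc *tbl, size_t n, int resident)
    {
        struct proc *victim = NULL;
        for (size_t i = 0; i < n; i++) {
            if (tbl[i].resident != resident)
                continue;
            if (victim == NULL || tbl[i].entered_state < victim->entered_state)
                victim = &tbl[i];
        }
        return victim;
    }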

Managing shared resources.

Here we will only state the problem, because we will examine its concrete solutions using the example of the UNIX OS. Suppose there are two processes that operate on a common space of RAM. Shared resources can work in different modes here: it is possible that the two processes really run on different machines but are connected by a common field of operational memory. In that case a memory buffering problem appears, because each machine has its own read/write buffering mechanisms, and an unpleasant situation arises in which the state of physical memory does not correspond to its real contents; certain problems also arise for an OS running across two machines.

The next problem. Let there be two processes running on one machine. There must be certain means that allow synchronizing access to the shared memory, that is, means that create conditions under which each running process's exchange with the operational memory occurs correctly. This means that on every read of information from shared memory it must be guaranteed that every user who started writing something to that memory has already completed the write: there must be synchronization over the exchanges with shared memory.

In reality, when solving problems, shared resources such as common memory are not the only need: one would also like processes that run simultaneously to be able to exert some influence on one another, an effect similar to the interrupt mechanism. To implement this, many operating systems have a means of passing signals between processes: a kind of software emulation of interrupts takes place. One process says: pass a signal to the other process. In the other process, execution is interrupted and control is transferred to some predefined function that must handle the received signal (see the example below). This is the third OS function.
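In UNIX this mechanism is visible through the real signal() and kill() system calls. A minimal example: the receiver registers a predefined handler function and then sleeps until another process interrupts it by sending SIGUSR1 to its pid.

    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    static volatile sig_atomic_t got_it = 0;

    /* The predefined function that handles the received signal. */
    static void handler(int sig)
    {
        (void)sig;
        got_it = 1;
    }

    int main(void)
    {
        signal(SIGUSR1, handler);    /* register the handler           */
        printf("pid %d: waiting for SIGUSR1\n", (int)getpid());
        while (!got_it)
            pause();                 /* sleep until a signal arrives   */
        printf("signal handled, execution resumes\n");
        return 0;
    }

Another process delivers the signal with kill(pid, SIGUSR1), or from the shell with kill -USR1 <pid>.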

I have drawn your attention to those OS functions that affect its operational properties. In fact, any OS also contains a huge set of other functions that ensure the operation of the system.

Lecture No. 6

In the last lecture we said that almost any operating system provides I/O buffering; in fact, it is one of the main functions of an operating system. By analogy with the hardware's fight against the differing access speeds of the various components of a computing system, the operating system introduces software buffering of its own, which likewise smooths access times and provides synchronization in general (the example with the printing device). The smoothing of access consists in the fact that almost every operating system has cache buffers that accumulate accesses to the external storage device (VZU), similarly to hardware buffering when working with operational memory. This allows the operating system to be optimized significantly. A telltale sign of such buffering is the requirement to shut the operating system down before switching the machine off. For example, when working with the MS-DOS operating system you can turn the computer off at any moment, because there is no such buffering. In operating systems like Windows and UNIX it is considered incorrect simply to switch off a machine with a running system: in that case there is a chance of losing some information (since, for example, the moment an exchange is requested and the moment it is actually performed are far from identical). The extent of this buffering determines the real efficiency of the system. When Pentiums first appeared at our faculty, it was discovered that with Windows 95 there is practically no qualitative difference between the system running on a 486 processor or on a Pentium. This suggests that the system's performance is determined not by the processor but by its work with the external device. If we take the UNIX operating system, the difference is noticeable, since there the processor speed has a greater impact on the quality of the system than under Windows 95, because Windows 95 exchanges far more with external media owing to a certain "dumbness" of its algorithms for buffering work with external devices.

2. File system.

We have said that each operating system operates with certain entities, one of which is the process. There is a second entity that is just as important: the concept of a file. The file system is the component of the operating system that provides for the organization of the creation of, storage of, and access to named sets of data. These named sets of data are called files.

Basic file properties

1. A file is an object that has a name and allows one to operate with a sequence of characters whose length depends on the specific operating system.

2. Independence of the file from its location. To work with a specific file it is not necessary to have information about the location of that file on the external device.

3. A set of input/output functions. Almost every operating system unambiguously defines a set of functions that provides exchange with a file. Usually this set consists of the following requests:

1. Opening a file for work. Either an existing file or a new one can be opened. The question may arise: why open a file at all? Opening is a means of centrally declaring to the operating system that a specific process will work with the file. From this information it can already make certain decisions (for example, blocking other processes' access to this file).

2. Reading/writing. Usually exchange with a file can be organized in data blocks. The data block taking part in an exchange has a dual nature. On the one hand, for any computing system the data block sizes that are most effective for exchange are known: these are its software and hardware dimensions. On the other hand, in a real exchange these data blocks can be varied quite arbitrarily by the programmer. Read/write functions usually include the size of the data blocks to be exchanged and the number of such blocks to read or write. The efficiency of real exchanges can depend on the chosen block size. Suppose that for some machine the effective data block size is 256 KB, while you want to carry out exchanges in 128 KB units and issue two requests to read your 128 KB logical blocks. It is quite likely that instead of reading one 256 KB block in a single exchange you will access the same block twice, reading first one half and then the other. There is an element of inefficiency here, though it may be smoothed out by a "smart" operating system; and if the system does not smooth it out, it is your own fault. (A small illustration of these requests follows after this list of properties.)

3. File pointer management. Almost every open file has associated with it the concept of a file pointer. This pointer is similar to the program counter register: at each moment it points to the next relative address within the file with which an exchange can be made. After an exchange with a block, the pointer moves to the position past that block. To organize work with a file one must be able to manage this pointer, so there is a file pointer management function that allows moving the pointer arbitrarily (within the permitted range) over the file. The pointer is a variable accessible to the program, associated with it by the file open function (which creates this variable).

4. Closing the file. This operation can be carried out in two ways: 1) keep the current contents of the file; 2) destroy the file. After a file is closed, all connections with it are severed and it passes into a canonical state.

4. Data protection. Many strategic decisions are repeated both at the hardware level and at the operating system level. If we recall the multiprogramming mode, one of the necessary conditions of its existence is the provision of protection (of memory and data). A file system, like the operating system as a whole, may be single-user; in that case there is no data protection problem, because the person working with the operating system is the owner of all files. Examples of single-user systems are MS-DOS and Windows 95: you can boot the machine and destroy all the files of other users located on the disk, because these systems have no protection. A multi-user system must ensure correct work for many users. MS-DOS can also run in multiprogramming mode, but not correctly enough, because an error in one process can lead to overwriting the operating system or a neighboring process. Many users can also work in Windows 95, but this work too is incorrect, because that operating system does not provide the full set of protection guarantees. So, a multi-user system must ensure the protection of information from unauthorized access. In fact the protection problem is not related to the file system alone: in reality the operating system provides data protection everywhere: files, processes, and the resources belonging to the processes run on behalf of one user. I draw your attention to this fact here because for files this is the most critical point.
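As a small illustration of the four file requests above, here is how they look through C's standard I/O library, one concrete incarnation of open, block-sized read, pointer movement, and close (the file name and block size are assumptions):

    #include <stdio.h>
    #include <stdlib.h>

    #define BLOCK 4096               /* assumed effective block size */

    int main(void)
    {
        char buf[BLOCK];

        FILE *f = fopen("data.bin", "rb");   /* 1: open (hypothetical file) */
        if (f == NULL) {
            perror("fopen");
            return EXIT_FAILURE;
        }

        size_t n = fread(buf, 1, BLOCK, f);  /* 2: read one whole block     */
        printf("read %zu bytes\n", n);

        fseek(f, 2L * BLOCK, SEEK_SET);      /* 3: move the file pointer    */
        n = fread(buf, 1, BLOCK, f);
        printf("read %zu bytes at block 2\n", n);

        fclose(f);                           /* 4: close, keeping contents  */
        return EXIT_SUCCESS;
    }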

Basic properties of file systems.

A file system naturally includes all the properties listed for files, but adds some more. These properties are associated with the structural organization of the file system.

Let us look at some VZU space and see how we can organize the placement of files within it.

1. Single-level organization of files in contiguous segments. The term "single-level" means that the system works with uniquely named files. Within the VZU space a certain area is set aside for storing data, called the directory. The directory has the following structure:

|name |start block |end block |

"Start block" is the relative address in the VZU space at which the file with the given name begins; "end block" determines the last block of that file. The file open function reduces to finding the file name in the directory and determining its beginning and end (in reality the data may occupy slightly less space; more on this later). This action is very simple, and the directory can be kept in the operating system's memory, thereby reducing the number of exchanges. If a new file is created, it is written into free space; similarly to the name directory, there may be a table of free spaces (fragments). A sketch of this directory structure in C is given at the end of this subsection.

Reading and writing take place almost without additional exchanges, since on opening we obtain the range in which the data is placed. Reading proceeds along this block structure and requires no additional information, so exchange is very fast.

What happens when we need to write additional information but there is no free space immediately after the file? In this case the system can do one of two things. First, it may say there is no room and you must act yourself, for example launch a process that moves this file to another location and appends the necessary information; such a move is a rather expensive operation. The second possibility is that the exchange is refused: this means that when opening the file you should have reserved extra space; the file system then checks the size of the free reserve, and if it is insufficient, looks for a free area where the file can be placed.

So we see that this organization is simple and efficient in exchanges, but as soon as a file runs out of room, inefficiency begins. Besides that, during long-term operation of such a file system the disk suffers the same thing that happened to us with RAM: fragmentation, that is, a situation where free fragments exist, but none of them can hold the file. The remedy for fragmentation in such a file system organization is periodic compaction: a long, heavy process, dangerous for the contents of the file system, which presses all files tightly together.

Such an organization may suit a single-user file system, because with a large number of users fragmentation sets in very quickly, and constantly launching compaction is death for the system. On the other hand, the system is simple and requires almost no overhead.
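A minimal sketch of such a directory in C (the names and sizes are assumptions): each entry maps a unique name to a contiguous [start, end] range of blocks, and opening a file is a linear search by name.

    #include <stddef.h>
    #include <string.h>

    #define NAME_LEN 14              /* assumed maximum file name length */

    /* One directory entry: |name |start block |end block| */
    struct dir_entry {
        char     name[NAME_LEN];
        unsigned start_block;
        unsigned end_block;          /* file occupies [start, end]       */
    };

    /* "Opening" reduces to finding the name and returning its extent. */
    const struct dir_entry *dir_open(const struct dir_entry *dir, size_t n,
                                     const char *name)
    {
        for (size_t i = 0; i < n; i++)
            if (strncmp(dir[i].name, name, NAME_LEN) == 0)
                return &dir[i];
        return NULL;                 /* no such file */
    }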

2. File system with block organization of files. The VZU space is divided into blocks (the same blocks that are effective for exchange). In a file system of this type, information is distributed similarly to the distribution of process information in RAM under paged organization. In general, each file name has associated with it a set of numbers of the device blocks holding that file's data; these block numbers come in arbitrary order, i.e. the blocks may be scattered across the device. With such an organization there is no fragmentation, although there may be losses of up to a block (if a file occupies even one byte of a block, the whole block is considered occupied). Hence there is no compaction problem, and this organization can be used with multiple users.

In this case each file is associated with a set of attributes: the file name and the name of the user who accesses the file. Such an organization allows getting away from the uniqueness of names that was required in the previous case: unique names are required only among the files of one user.

Such files can be organized through a directory. The directory structure may be the following: the directory consists of rows, and every i-th row corresponds to the i-th block of the file system. The row says whether this block is free or occupied; if occupied, the row gives the file name (or a reference to it), the user name, and possibly some additional information.

During exchange the system can act in different ways: either, on opening the file, it runs through the whole directory and builds a table mapping the file's logical blocks to their placement on disk, or this mapping is searched for on every exchange.

This organization of the file system is single-level within one user: all files are grouped by their belonging to some user.

3. Hierarchical file system. All the files of the file system are built into a structure called a tree. At the root of the tree sits the so-called root of the file system. If a node of the tree is a leaf, then it is either a file that may contain user data or an empty file directory; tree nodes other than leaves are directory files. Naming in such a hierarchical file system can be arranged in different ways. The first kind is naming a file relative to the nearest directory: for example, the files closest to directory F0 are file F1, which is itself a directory, and file F2. For naming to work in such a system, names must not repeat within one level. On the other hand, since all files are connected in a tree, we can speak of the so-called full file name, composed of all the file names making up the path from the root of the file system to the specific file. The full name of file F3 would be denoted as follows: /F0/F1/F3. This kind of organization is good in that it allows working both with a short file name (if the system assumes we are working in the given directory) and with the full file name. Full file names are paths, and in any tree there is exactly one path from the root to any node, so this solves the problem of name uniqueness. This approach was first used in the Multics operating system, developed at MIT in the late 60s, and this beautiful solution later appeared in many operating systems. In accordance with this hierarchy, each file can carry attributes relating to access rights; both user files and directories can have access rights. The structure of this system is good for organizing multi-user work, owing to the absence of a naming problem, and such a system can grow very well.

4. Personalization and data protection in the operating system. This point, which we will now consider, is both simple and complex: simple because we will deal with it in just a few sentences, complex because there are problems behind it that could be discussed at length.

Personalization is the ability of the operating system to identify a specific user and, in accordance with this, to take certain actions, in particular with respect to data protection.

If we look at our favorite operating system MS-DOS, it had no concept of a user at all, with all the ensuing consequences: it is single-user.

The second level is occupied by operating systems that allow users to register, but treat all users as a single flat set of subjects not connected with one another in any way. Examples are some IBM operating systems for mainframe computers. It is as if a lecturer did not know which of his listeners belongs to which group: everyone sitting before him is simply a user of his course. That is both good and bad: from the standpoint of listening to a course of lectures it is fine, but if the lecturer wants to conduct some kind of quiz it is bad, because he cannot interview everyone in one day; he would have to divide the listeners somehow, and it is not known how. Accordingly, with such one-dimensional personalization all the functions we spoke about (protection in particular) are available, but such a user organization does not provide for forming groups of users. Yet it would be convenient if, say, on our faculty server my laboratory were allocated its own area, and within this laboratory we could grant one another access rights to files, etc.

Hence, similarly to the file system, a hierarchical organization of users appears: we have the notion of "all users" and the notion of a "user group", and the group contains the real users. This hierarchical organization of personalization entails the following. When registering a user, he must first be attached to some group: it could be a laboratory, a department, or a study group. Since users are grouped, it becomes possible to differentiate access rights to user resources; a user can, for example, declare all his resources available to all users of his group. Such a scheme can be multi-level (groups divided into subgroups, etc.) with a corresponding distribution of rights and possibilities. There are operating systems today in which access rights are determined not only by such a hierarchical structure but in more complex ways, i.e. access rights can be added in violation of this hierarchy.

Lecture No. 7

3. UNIX operating system.

We move on to studying the UNIX operating system, since we will examine many of the decisions made in operating systems using this system as the example.

In the mid-60s, AT&T Bell Laboratories carried out research and development on one of the first operating systems in the modern sense of the term: the Multics operating system. Multics had the properties of a time-sharing, multi-user operating system, and it also proposed the main decisions on organizing file systems, in particular the hierarchical tree-structured file system. Some time after that development, the UNIX operating system appeared. One version of this system's history holds that the company had a spare PDP-7 computer with very underdeveloped software, and a machine was wanted that would allow organizing comfortable work for the user, in particular the processing of textual information. A now-famous group, Ken Thompson and Dennis Ritchie, began developing a new operating system. Another version of the story says they were supposedly implementing a certain game, and the means available to them turned out to be inconvenient, so they decided to play with this machine. The result was the UNIX operating system.

The peculiarity of this system was that it was the first piece of system software written in a language other than machine language (assembler). For the purpose of writing system software, in particular the UNIX operating system, work was also carried out that started from the BCPL language. From it the language B was formed, which operated with machine words; a further abstraction from machine words was NB, and finally the C language. In 1973 the UNIX operating system (its original version) was rewritten in C, and it turned out that about 90% of the operating system was written in a high-level language independent of the machine architecture, while 10% of the system was written in assembler; those ten percent included the most time-critical parts of the operating system.

So, the first important and revolutionary result was the use of a high-level language. This fact caused debate, because no one believed it could last: a high-level language had always been associated with substantial inefficiency. But the C language was designed in such a way that it allowed, on the one hand, writing quite efficient programs, and on the other, translating them into efficient code.

The first feature of C that made it efficient was work with pointers. The second property: in assembly-language programming we often use side effects, for example the effect whereby the result of evaluating some expression is not only the final stored value but also some intermediate values, which can be stored along the way for later use. These capabilities became available in C, because the concept of an expression in C is much broader than in the languages available at that time; in particular, the assignment operation appeared in place of the assignment statement, which made it possible to program side effects. These properties determined the "survivability" of the language, its suitability for programming system components, and the ability to translate its code optimally for various architectures.

From a professional (canonical) point of view, C is a terrible language. The main requirement placed on programming languages is to ensure programming safety: language facilities should minimize the likelihood of introducing errors into a program. The obvious facilities of this kind in modern languages include the following. First, strict type checking (you cannot, for example, add an integer variable to a real one without first converting the type of one of them to the type of the other), whereas C performs type conversions by default. Another criminal matter is control over access to program memory (if a memory cell stores a real number, we should not be able to interpret it in any other way): pointers provide the possibility of uncontrolled use of such values, and moreover through a pointer one can "deceive" a function regarding the correspondence of actual parameters to formal parameters, etc. The third property is control over the interaction of modules: many errors appear when a function is declared with one set of formal parameters and is called with a different set (the differences may be both in the number of parameters and in their types). In C you can always trick the program: pass a parameter of one type instead of a formal parameter of another type, or pass one parameter instead of ten. This leads to errors.

These are three areas in which the C language does not meet the requirements of safety. Experience shows, however, that it is the bad (from this point of view) languages that are the most tenacious.

So, 1973 is the year of the appearance of the UNIX operating system written in C. What main properties did this system have? The first property is its conception of files. The main object the operating system works with is the file. From the point of view of UNIX, a file is an external device; a file is a directory containing information about the files within it; and so on: everything it can work with is a file.

The second property is the special structure of the operating system. In contrast to previous operating systems, in which every command was "hardwired" inside the system and could not be modified in any way, UNIX solves the command problem very elegantly. First, UNIX declares a standard interface for passing parameters from the outside into a process. Second, all commands are implemented as files. This means new commands can be freely added to the system, and commands can be removed or modified. That is, the UNIX system is open and can easily be extended.

Let us begin examining the specific properties of this operating system.

File system. File organization. Working with files.

The UNIX file system is a multi-user hierarchical file system. We drew its structure in the last lecture: it can be represented by a tree whose root is the so-called root directory. The nodes other than the leaves of the tree are directories; the leaves can be either files (in the traditional sense) or empty directories. The system defines the concept of a file name: the name associated with a set of data within the directory to which the file belongs. In addition there is the concept of a full name: the unique path from the root of the file system to the specific file. Names of files located in different directories are allowed to coincide.

In fact, the file system is not entirely tree-like: there is the possibility of breaking the hierarchy by means that allow associating several (full) names with the same file contents. In that sense it is hierarchical rather than strictly tree-like.

The UNIX operating system uses a three-level hierarchy of users (all users are divided into groups), whose general structure was discussed in the previous lecture. In this connection every file of the file system has two attributes. The first attribute is the so-called owner of the file: it ties the file to one specific user, its owner. You become the owner of a file by default when you create it, and there is also a command that allows changing the owner of a file. The second attribute is the attribute connected with protecting access to the file (more on this later).

Access to each file is regulated for three categories of users. The first category is the owner of the file, who can, generally speaking, do whatever he wants with it. The second category is the group to which the file's owner belongs (its rights may differ from the owner's). The third category is all remaining users, outside the relevant group. For each of these three categories three actions are regulated: reading from the file, writing to the file, and executing the file. Each file specifies whether a user of the given category may perform the corresponding action (see the example below).
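In modern UNIX these three categories and three actions appear as nine permission bits. A minimal sketch using the real POSIX calls stat() and chmod() (the file name is hypothetical):

    #include <stdio.h>
    #include <sys/stat.h>

    int main(void)
    {
        struct stat st;
        if (stat("report.txt", &st) != 0) {   /* hypothetical file */
            perror("stat");
            return 1;
        }

        /* owner / group / others, each with read, write, execute bits */
        printf("owner: r=%d w=%d x=%d\n",
               !!(st.st_mode & S_IRUSR), !!(st.st_mode & S_IWUSR),
               !!(st.st_mode & S_IXUSR));
        printf("group: r=%d w=%d x=%d\n",
               !!(st.st_mode & S_IRGRP), !!(st.st_mode & S_IWGRP),
               !!(st.st_mode & S_IXGRP));
        printf("other: r=%d w=%d x=%d\n",
               !!(st.st_mode & S_IROTH), !!(st.st_mode & S_IWOTH),
               !!(st.st_mode & S_IXOTH));

        /* rw- r-- ---: owner reads and writes, group reads, others nothing */
        chmod("report.txt", S_IRUSR | S_IWUSR | S_IRGRP);
        return 0;
    }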

Working with files

Let us look at the structure of the file system on the disk. For any computing system the concept of the system external storage device (SVZU) is defined: the SVZU is the device to which the machine's hardware bootstrap loader turns in order to start the operating system. Almost any computer has a range of its memory address space located in so-called read-only memory (ROM). The ROM holds a small program (the hardware loader) which, on power-up or a hard reboot of the machine, reads a fixed block of the SVZU, places it in RAM, and transfers control to a fixed address within the data just read. This block of data is taken to be the so-called software loader. Note that the hardware loader does not depend on the operating system, whereas the software loader is a component of the operating system: it already knows where the data needed to start the operating system is located.

In any system it is customary to divide the VZU space into data areas called blocks. The logical block size is a fixed attribute of the operating system; in UNIX the block size is determined by a parameter that varies with the version of the system. For definiteness we will say that the logical block of the VZU is 512 bytes.

Let us picture the SVZU address space as a sequence of blocks. Block zero of the SVZU is the bootstrap block: the block in which the software loader resides. The placement of this block in block zero of the SVZU is fixed by the hardware, because the hardware loader always turns to block zero. This is the only component of the file system that depends on the equipment.

The next block is the superblock of the file system. It contains operational information about the current state of the file system, as well as information about the file system's settings. In particular, the superblock holds the number of so-called index descriptors (inodes) in the file system; it also holds the number of blocks making up the file system, information about free file blocks and free inodes, and other data characterizing the time and date of modification and other special parameters.

|Boot block |Superblock |Inode area |File blocks |Save area |

Following the superblock is the area (space) of index descriptors (inodes). An inode is a special file system data structure placed in one-to-one correspondence with each file. The size of the inode space is determined by a file system generation parameter, according to the number of inodes indicated in the superblock. Each inode contains, among other things, a security code field and the file length in bytes; the full list of inode fields is given in Lecture No. 8.

The next file system space is the file blocks. This is the space on the system device holding all the information stored in files and about files that did not fit into the preceding blocks of the file system.

The last data area is implemented differently in different systems, but behind the file system blocks lies the save (swap) area.

This is the conceptual structure of the file system. Now let us consider the superblock. Of greatest interest in the superblock are the last two fields: the information about free file blocks and about free inodes. In the UNIX file system the influence of two factors is noticeable. The first factor is that the file system was developed in the days when a hard disk of 5-10 MB capacity was considered very large, and this shaped the internal structure of the external device space. The second factor is the file system's drive to optimize access, the optimality criterion being the number of exchanges with the external device that the file system performs to satisfy its needs.

The list (array) of free file blocks consists of fifty elements and occupies 100 bytes. Into this buffer of fifty elements the numbers of free blocks of the file block space are written. The free block numbers occupy the elements from the second through the forty-ninth; the first element of the array contains the index of the last valid entry in the array, and the zero element contains the number of the block in which this list is continued, and so on.

If some process needs an additional free block to extend its file, the system uses the block number index (NB) to select an element of the array (a copy of the superblock is always present in RAM, so in almost all such actions the system does not need to access the SVZU), and that block is granted to the corresponding file for its extension, the NB pointer being adjusted accordingly. If a file shrinks, or a whole file is deleted, the freed block numbers are written into the array of free blocks, and the NB pointer is likewise corrected.

Since the array is fifty elements long, two critical situations are possible. The first: on freeing new blocks, their numbers can no longer be placed in the array, because it is already full. In that case one free block is taken from the file system (and removed from the list of free blocks), and the filled array of free block numbers is copied into that block. After this the NB pointer is reset to zero, and the zero element of the array is set to the number of the block we chose to copy the array into. As a result, with blocks constantly being freed, a chained list is formed that holds the numbers of absolutely all free blocks of the file system.

The second critical situation is when a free block is needed but the array's contents are exhausted. The system then acts as follows: if the zero element of the list is zero, this means the entire file system space is exhausted, and a message to that effect is issued; if it is nonzero, its contents are the address of the array's continuation, and the operating system reads the corresponding block and places the continuation into the array's place in the superblock. (A sketch of both paths is given below.)
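A minimal C model of this bookkeeping under stated assumptions (a toy in-memory "disk", a 50-element array laid out as described above); it models the algorithm from the lecture, not the real UNIX source:

    #include <stdio.h>
    #include <string.h>

    #define NFREE   50
    #define NBLOCKS 1024                      /* toy disk size (assumption) */

    /* Toy in-memory disk: each block can hold one 50-element list segment. */
    static unsigned disk[NBLOCKS][NFREE];

    /* In-RAM copy of the superblock's free-block array (model only):
       [0] = continuation block number, [1] = NB, entries live in [2..49]. */
    static unsigned free_list[NFREE] = { 0, 1 };

    /* Grant one free block; 0 means "file system space exhausted". */
    unsigned alloc_block(void)
    {
        unsigned nb = free_list[1];
        if (nb >= 2) {                        /* take the topmost entry     */
            free_list[1] = nb - 1;
            return free_list[nb];
        }
        if (free_list[0] == 0) {              /* no continuation: empty     */
            fprintf(stderr, "file system space exhausted\n");
            return 0;
        }
        unsigned cont = free_list[0];
        memcpy(free_list, disk[cont], sizeof free_list); /* pull next part  */
        return cont;                          /* the link block is now free */
    }

    /* Return a freed block to the list. */
    void free_block(unsigned blkno)
    {
        unsigned nb = free_list[1];
        if (nb < NFREE - 1) {                 /* room left in the array     */
            free_list[nb + 1] = blkno;
            free_list[1] = nb + 1;
            return;
        }
        memcpy(disk[blkno], free_list, sizeof free_list); /* dump full array */
        free_list[0] = blkno;                 /* freed block becomes the link */
        free_list[1] = 1;                     /* array is empty again       */
    }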

The second array located in the superblock consists of one hundred elements and contains the numbers of free inodes. Working with this array is simple: as long as there is room in it, the numbers of inodes being freed are written into its free slots; when the array is completely full, writing into it stops. If, conversely, the array's contents are exhausted, a process is launched that scans the inode area and refills the array with new values. A situation may occur in which a file must be created, i.e. a new inode is needed, yet there are no elements in the array and the scanning process has found no free inodes either. This is the second situation in which the system is forced to declare a resource exhausted (the first being when the file system runs out of free blocks).

Lecture No. 8

Index descriptors. Inodes occupy several consecutive blocks on the disk. The size of the inode area is determined by a parameter (fixed when the system was installed) that sets the number of inodes for the particular file system; the area's size equals that number multiplied by the size of an index descriptor.

An inode is the UNIX object placed in one-to-one correspondence with the contents of a file, except when the file is a special file associated with an external device. An inode contains the following fields:

1. A field defining the type of the file (directory or not).

2. A security code field.

3. The number of links to this inode from all possible directories of the file system (this arises when the strict file system tree is violated). If the value of this field is zero, the inode is considered free.

4. The length of the file in bytes.

5. Statistics: fields characterizing the date and time of creation, etc.

6. The file block addressing field.

Note that the inode contains no file name, although this object characterizes the contents of the file. Let us see how the addressing of the blocks holding the file's contents is organized. The addressing field contains the numbers of the first ten blocks of the file. If the file is small, all the information about the placement of its blocks sits in the index descriptor itself. If the file exceeds ten blocks, a list structure of sorts comes into play: the eleventh element of the addressing field contains the number of a block, from the file block space, holding 128 references to the blocks of this file. If the file is larger still, the twelfth element of the addressing field is used: it contains the number of a block holding 128 records with the numbers of blocks, each of which contains 128 numbers of file system blocks, i.e. double indirection. If the file is bigger yet, the thirteenth element is used, with triple indirection (analogous to double, with one more level added).

The file size limit (with a 512-byte block) is (128 + 128² + 128³) · 512 bytes = 64 KB + 8 MB + 1 GB, i.e. just over 1 GB. A sketch of the index arithmetic follows below.
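A sketch in C of how a logical block number inside a file maps onto this scheme, pure index arithmetic under the stated assumptions (10 direct entries, 128 references per indirect block); a real implementation would then read the corresponding indirect blocks from disk:

    #include <stdio.h>

    #define NDIRECT 10          /* direct block numbers in the inode   */
    #define NINDIR  128         /* block references per indirect block */

    /* Classify logical block number `lbn` of a file: which addressing
       element of the inode serves it, and how many extra disk reads
       (levels of indirection) are needed to reach the data block. */
    void classify(unsigned long lbn)
    {
        if (lbn < NDIRECT) {
            printf("block %lu: direct, element %lu, 0 extra reads\n", lbn, lbn);
        } else if ((lbn -= NDIRECT) < NINDIR) {
            printf("single indirect (element 11), 1 extra read\n");
        } else if ((lbn -= NINDIR) < (unsigned long)NINDIR * NINDIR) {
            printf("double indirect (element 12), 2 extra reads\n");
        } else if ((lbn -= (unsigned long)NINDIR * NINDIR)
                   < (unsigned long)NINDIR * NINDIR * NINDIR) {
            printf("triple indirect (element 13), 3 extra reads\n");
        } else {
            printf("beyond the maximum file size\n");
        }
    }

    int main(void)
    {
        classify(3);            /* direct            */
        classify(100);          /* single indirect   */
        classify(5000);         /* double indirect   */
        return 0;
    }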

We agreed that throughout our course we would pay attention to speed mismatches and to smoothing them out. First: if the file system did not keep the array of free blocks in the superblock, we would have to search for free blocks somehow on every allocation; that work would be insane, and the file system would collapse at the ideological level. The same applies to the list of free inodes: the search there would be simpler than for free blocks, but there are optimization elements here too. The indirection in addressing file blocks means that the overhead of reading a file's blocks grows in proportion to the file's size. That is, if the file is small there is no overhead at all: when the file is opened, a copy of its inode appears in RAM, and any of the first ten blocks of the file can be accessed without additional trips to the VZU. If we must work with blocks at the first level of indirect addressing, one additional exchange appears, but it already gives access to 128 more blocks; similar reasoning applies to blocks of the second and third order. It might seem bad that exchanges with a large file require many additional exchanges, but the UNIX system is cunning: it uses deeply echeloned buffering of exchanges with the VZU. So if we do incur some overhead at one level, it is compensated at another level by the system's optimization of its interaction with external memory.

File blocks. The size of the file block space is determined unambiguously from the information in the superblock.

The process save area. Although this area is drawn after the file blocks, it can also reside in some file of the file system, or anywhere on another VZU; this depends on the specific implementation of the system. In essence this is the area into which processes are swapped out; it is also used to optimize the launching of the most frequently used processes, via the so-called t-bit of a file (more on this later).

So, we have examined the structure of the file system and its organization on the system device. Like any systems design, the structure of the file system and the algorithms that work with it are simple enough that the overhead of working with them stays within reason. In real work the UNIX file system is clearly more optimal than the Windows NT file system (compare the development dates!), thanks to its simplicity and the optimization that occurs at every step.

Directories

We have said that one of the properties of the UNIX operating system is that all information is placed in files, i.e. there are no special tables used by the operating system, apart from those it creates in RAM while already running. From the file system's point of view a directory is a file that contains data about the files belonging to that directory.

Suppose directory A contains files B, C and D; files B and C may be either plain files or directories, while file D is a directory.

A directory consists of entries containing two fields: the first field is an inode number, the second is the name of the file associated with that inode. Inode numbers (within the inode space) start from one, and the first inode is the descriptor of the root directory. In general a directory may contain several entries referring to one and the same inode, but it may not contain entries with identical names. Thus a name is unique within a directory, but the contents of a file may have an arbitrary number of names associated with it. This creates a certain ambiguity in the definition of a file in the UNIX operating system: a file turns out to be more than just a named set of data: it has an inode and possibly several names (the name being the secondary component).

When a directory is created, two entries are always made in it: an entry for the special file "." (dot), with which the inode of this directory itself is associated, and an entry for the file ".." (two dots), with which the inode (ID) of the parent directory is associated. In our example, let directory A have, say, ID number 7 and directory D ID number 5; let file F have ID No. 10 and file G ID No. 101. Then the directory file D will have the following contents:

|Name |ID | |
|"." |5 |the entry for the directory itself |
|".." |7 |the entry for the parent, directory A |
|"G" |101 | |

What distinguishes a directory file from ordinary user files is the contents of the file type field in its ID. For the root directory, the parent field refers to the root directory itself.

Now let us look schematically at how full names and the directory structure can be used. At every moment of a user's session the system defines a current directory: a directory, together with the whole path from the root to it, which is substituted by default onto all file names that do not begin with the character "/". If the current directory is D, then we can refer simply to files F and G; but if we need to reach file B, we must use either the full name or the special file "..", in this case the construction "../B". Referring to the file ".." means: read the parent ID and use it to reach the contents of directory A; then, in directory file A, find the line with name B, determine file B's ID, and then open the file. This whole operation is rather labor-intensive, but given that files are not opened often, it will not affect the speed of the system. (A sketch of a directory lookup is given below.)
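A minimal sketch of that lookup in C, modelled on the classic UNIX directory entry (a 2-byte inode number plus a 14-byte name in Version 7; the sizes here are assumptions for the model):

    #include <stdio.h>
    #include <string.h>

    #define DIRSIZ 14                   /* classic UNIX name length */

    /* One directory entry: |inode number |file name| */
    struct dirent14 {
        unsigned short d_ino;           /* 0 would mean an unused slot */
        char           d_name[DIRSIZ];
    };

    /* Find `name` in a directory read into memory; return its ID or 0. */
    unsigned short dir_lookup(const struct dirent14 *dir, size_t n,
                              const char *name)
    {
        for (size_t i = 0; i < n; i++)
            if (dir[i].d_ino != 0 &&
                strncmp(dir[i].d_name, name, DIRSIZ) == 0)
                return dir[i].d_ino;
        return 0;
    }

    int main(void)
    {
        /* Directory D from the lecture's example. */
        struct dirent14 d[] = {
            { 5,   "."  },              /* the directory itself    */
            { 7,   ".." },              /* the parent, directory A */
            { 101, "G"  },
        };
        printf("ID of ..: %u\n", dir_lookup(d, 3, ".."));  /* prints 7 */
        return 0;
    }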

We said that the same contents may be associated with several names, i.e. files with the same ID may be open at the same time. A problem arises: how to synchronize work with the contents of a file if it is opened by different processes, or under different names? In UNIX this is solved quite correctly (we will look at it a little later).

Special device files

We already know two kinds of files: directory files and working files that store data. There is a third kind: device files. This variety is characterized by the type recorded in the ID. Device files have no contents: only an ID and a name. The ID records what type of device is associated with the file: a byte-oriented device or a block-oriented one. A byte-oriented device is one with which exchange proceeds byte by byte (for example, a keyboard); a block-oriented device is one with which exchange can proceed in blocks. There is also a field identifying the number of the driver associated with the device (one device can have several drivers, but not vice versa); this field is in fact an index into the driver table for the corresponding class of devices: the system has two such tables, one for block-oriented and one for byte-oriented devices. The ID also defines a certain numeric parameter that can be passed to the driver as clarifying information for its work.

Organization of data exchange with files

Let's first define what is low-level I/O in

system. The UNIX file system defines some special

functions called system calls. System calls

directly access the operating system, that is, it

functions that perform some operating system actions. Implementation

system and library functions (for example, mathematical) at the root

is different. If the library function is loaded into the process body,

who uses this library, then all actions in most cases

will be executed within this process, and the system call will immediately

transfers control to the operating system and it performs what is ordered

action. In UNIX, to provide low-level I/O, i.e.

I/O, which is implemented through system calls, is available

set of functions. Here are the main ones:

1. open - open an existing file. One of the parameters of this function is a string with the file name; it returns a number called a file descriptor. In the body of the user process, and also in the data associated with the process, there is (besides code and data, of course) some service information, in particular the file descriptor table. Like almost all tables in the UNIX system, it is positional, i.e. the descriptor number is the number of the entry in the table. With a file descriptor (FD) the file name and all the attributes needed to work with the file are associated. FD numbers are unique within one process. There is a similar function, creat, which opens a new file.

2. read/write - read/write system calls, whose parameters are the FD number and some attributes that are not so important for our discussion.

3. close - a system call that finishes work with a file; its parameter is the FD number. After this call the FD becomes free, and the work of the process with the file ends.

These are the main system calls that provide I/O (by the way, they add almost no code to your program). Look into the details on your own. I draw your attention to the fact that these are system calls, because I/O can also be done through I/O libraries. For this there is so-called file exchange and the functions fopen, fread, etc. (prefixed with f). These are library functions; internally they call the low-level functions.
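As a minimal sketch of low-level exchange through these calls (the file name data.txt and the copy loop are purely illustrative, not from the lecture):

    #include <fcntl.h>     /* open */
    #include <unistd.h>    /* read, write, close */

    int main(void)
    {
        char buf[512];
        ssize_t n;

        int fd = open("data.txt", O_RDONLY);   /* fd is the file descriptor */
        if (fd == -1)
            return 1;

        /* Copy the file to standard output block by block. */
        while ((n = read(fd, buf, sizeof(buf))) > 0)
            write(1, buf, (size_t)n);          /* 1 is standard output */

        close(fd);                             /* the descriptor becomes free */
        return 0;
    }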

Let us consider the organization of exchange from the system's point of view in the UNIX operating system. When organizing exchange, the operating system divides all data into data associated with processes and data associated with the operating system.

Table of inodes of open files. The first data table associated with the operating system is the table of inodes of open files (TIDOF). This table contains records, each of which holds a copy of the inode of one file open in the system. File blocks are accessed through this copy. Each table record also contains a field holding the number of current opens of the file through this descriptor (a counter). That is, if the same file is opened on behalf of two processes, only one entry is created in TIDOF, but each additional opening of the file increases the counter by one.

File table. The file table (TF) contains information about the name...

Lecture No. 9

We said that the system can work with the contents of a file if and only if the process has registered its wish to work with that file. The fact of such registration is called opening a file. When a file is opened within a process, each file name being opened (it may be an already existing file or a new one) is put into correspondence with a unique integer called a file descriptor (FD). Within a process, FDs are numbered from 0 to k-1. The value k is a tuning parameter of the operating system that determines how many simultaneously open files a process can have. It should be noted that we speak of the number of simultaneously open files (the same is written in any book on UNIX); strictly speaking, however, k is the maximum number of FDs within a process rather than of distinct files, because the same file can be opened twice within a process, producing two FDs. What this leads to we will see a little later, but it is quite legal. After a file is opened, all exchange operations go through the file descriptor (i.e. the name is never specified again). A number of parameters are associated with each file descriptor (more on them a little later).

Let us see how input/output, or rather the handling of low-level exchange, is organized from the operating system's point of view. What follows is the logical scheme of I/O organization; the real scheme is arranged somewhat differently, but that is not so important for us.

All data that the system operates with is divided into two classes. The first kind is data associated with the operating system, i.e. system-wide data. This data includes TIDOF. The size of this table is fixed and determined by the number of simultaneously open FDs. Each entry in the table contains certain information, of which the following will interest us:

1) A copy of the ID of the open file. For any open file, the ID that characterizes its contents is copied and placed into TIDOF. After this, all manipulations with the file (for example, changes to the file's block addressing) are done with the copy of the ID, not with the ID itself on disk. TIDOF resides in RAM, so access to the information in it is fast.

2) A counter of the files currently open through this ID. This means that no matter how many times the file associated with a given ID is opened, the system works with a single copy of that ID.

Now let us move on to the so-called file table (TF). The file table consists of a fixed number of records. Each TF entry corresponds to a file open in the system (or, more precisely, to an FD). In the vast majority of cases this is a one-to-one correspondence; the case where it is not, we will look at below. Each TF record contains the read/write pointer for the file. This means that if the same file is open in two processes, or twice in the same process, each opening has its own pointer, and the pointers do not depend on each other (almost always, with the exception of certain cases). Each TF record also contains a so-called inheritance index, which is an integer.

This is operating-system-level data, i.e. data that describes the state of affairs in the system as a whole.

With each process there is associated a so-called open file table (TOF). The entry number in this table is the FD number. Each row of this table refers to a TF entry, so the information associated with an FD seems to be torn apart: on the one hand, file descriptors are data that is an attribute of the process; on the other hand, the read/write pointer is data that is an attribute of the operating system. This looks illogical, and we will now see why this apparent illogicality exists. To do so, let us briefly consider conceptual issues related to the creation of processes. The UNIX operating system has a fork() function. It is a system call. When it is invoked, an action occurs in the system that may at first seem pointless to most of you: the process in which the function was encountered is copied, i.e. a twin process is created. What this is needed for, I will say a little later.

The creation of a twin process has the following properties. First: the child process formed after the call to fork() has all the files that were open in the parent process. Second: the system provides its own means to tell which process is the parent and which is the child, although in general they are absolutely identical.
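A minimal sketch of how a program tells the twins apart: by the standard UNIX convention, fork() returns the child's process number in the parent and 0 in the child.

    #include <sys/types.h>
    #include <unistd.h>
    #include <stdio.h>

    int main(void)
    {
        pid_t pid = fork();          /* the process is copied here */

        if (pid == 0)
            printf("I am the child\n");
        else if (pid > 0)
            printf("I am the parent of process %d\n", (int)pid);
        else
            return 1;                /* fork failed */
        return 0;
    }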

Suppose there is process No. 1 with open file table No. 1. In this process a file named Name is open, and file descriptor I is assigned to it. This means that the corresponding row of the TOF holds a record with a reference to the TF. In the TF, certain attributes associated with this opening of the file are recorded, among them the read/write pointer, i.e. the pointer through which we work when exchanging information with the file. The TF entry, in turn, refers to TIDOF, which holds a copy of the ID corresponding to the file named Name.

Now suppose this process opens the file named Name once again, and the system assigns it file descriptor J. This opening corresponds to the J-th row of the TOF of the first process, and a new TF entry records this opening of the file Name. For now the inheritance indices in both cases equal one. This new entry has its own read/write pointer associated with this opening. The pointers of file descriptors I and J are independent of each other: reading or writing through descriptor I does not change the pointer of descriptor J. The new TF entry refers to the same inode copy in TIDOF, and the counter there will equal two.
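A minimal sketch of exactly this situation (the file name Name is kept from the text; the file is assumed to exist and be non-empty): the same file is opened twice in one process, and moving one pointer leaves the other untouched.

    #include <fcntl.h>
    #include <unistd.h>
    #include <stdio.h>

    int main(void)
    {
        char c;
        int i = open("Name", O_RDONLY);   /* first opening, FD I */
        int j = open("Name", O_RDONLY);   /* second opening, FD J */
        if (i == -1 || j == -1)
            return 1;

        read(i, &c, 1);                   /* moves only the pointer of I */
        printf("offset of I: %ld, offset of J: %ld\n",
               (long)lseek(i, 0, SEEK_CUR), (long)lseek(j, 0, SEEK_CUR));
        /* prints: offset of I: 1, offset of J: 0 */
        close(i);
        close(j);
        return 0;
    }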

Now suppose process No. 1 calls the fork() function: a copy of the process is formed, both copies continue running from the return of fork(), and TOF No. 2 is associated with the second process. In it, the file Name is likewise open with FD I and FD J. But since the process received its open files by inheritance from the parent, the references from the corresponding rows of the TOF point not to new TF records but to the very ones referenced by the corresponding FDs of the parent. For these processes the read/write pointers of those entries are common: if the pointer is moved in one process, it automatically moves for the other. This is exactly the case where there is no one-to-one correspondence between TF rows and TOF rows. When these references are created, the counter in TIDOF is increased by two more. And from the ID, through its block addressing, access to the blocks of the file is provided. Such an organization of exchange means that exchange with the contents of each file is carried out centrally: ultimately all exchange requests go through one single entry, no matter how many files associated with this ID are open in the system. Hence there are no collisions in which confusion would begin between completed and outstanding exchanges associated with one file.
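A minimal sketch of the inherited case (again with the illustrative name Name): the file is opened before fork(), so parent and child share one TF entry and hence one read/write pointer.

    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        char c;
        int fd = open("Name", O_RDONLY);  /* opened before fork: shared pointer */
        if (fd == -1)
            return 1;

        if (fork() == 0) {                /* child */
            read(fd, &c, 1);              /* advances the common pointer */
            _exit(0);
        }
        wait(0);                          /* let the child finish first */
        read(fd, &c, 1);                  /* reads the SECOND byte, not the first */
        close(fd);
        return 0;
    }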

Whenever a new process is formed, the system a priori sets up the zeroth, first and second file descriptors in its TOF, linking them to predefined files. FD 0 is associated with the standard input file; an external device, usually the keyboard, is tied to it. FD 1 is the standard output file, usually associated with the monitor screen. FD 2 is the standard file for diagnostic messages, also usually associated with the monitor screen.
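A minimal sketch: nothing needs to be opened to use descriptors 0, 1 and 2; they are already set up when the process starts.

    #include <unistd.h>

    int main(void)
    {
        char name[64];
        ssize_t n;

        write(1, "Your name: ", 11);          /* 1 - standard output */
        n = read(0, name, sizeof(name));      /* 0 - standard input */
        if (n <= 0)
            write(2, "no input\n", 9);        /* 2 - diagnostic output */
        return 0;
    }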

Let us consider, as an example, the typical actions performed for certain system calls.

Calling the fork() function. As you know, when this function is called, the system creates a copy of the original process. In doing so, the system duplicates the TOF of the original process into the TOF of the child process, increases by one the inheritance index in the TF rows associated with the files open in the original process, and also increases the counter of opens associated with the corresponding ID in TIDOF.

Calling the open() function. When this function is called, the following happens:

1. From the full name, the directory in which the file is located is determined.

2. The number of the file's ID is determined, and this number is searched for in the TIDOF table.

3. If a record with the given number is found, the number of the corresponding TIDOF row is recorded, and we go to step 5.

4. If no such row is found, a new row corresponding to the new ID is created, and its number is recorded.

5. The reference counter of the TIDOF entry is adjusted. The number of the TIDOF record is written into the TF record, and a reference to the TF record is written into the TOF; as a result, the program is returned the row number of the TOF, i.e. the file descriptor. Given this, the system's actions during I/O operations are obvious.

Interaction with devices. We have already said that all devices served by the UNIX operating system can be classified into two types: byte-oriented and block-oriented devices. Note that the same device may be regarded by the system both as byte-oriented and as block-oriented (an example is RAM). Accordingly, there are block-oriented and byte-oriented drivers. At the last lecture we considered the special files associated with external devices and said that there is a table of drivers for block-oriented devices and a table of drivers for byte-oriented devices; these tables are reached through the driver number stored in the file's ID.

The main feature of the organization of work with block-oriented devices is the ability to buffer exchange. The point is as follows. A pool of buffers is organized in the system's RAM, each buffer one block in size. Each of these buffers can be associated with the driver of one of the physical block-oriented devices.

Let us look at the sequence of actions performed when reading block number N from device number M.

1. Among the buffers of the pool, a search is made for the given block: if a buffer containing the N-th block of the M-th device is found, we record the number of this buffer. In this case no access to the real physical device takes place; the read operation consists of presenting the information from the buffer that was found. We go to step 4.

2. If the search for the buffer fails, a buffer is sought in the pool into which this block can be read and placed. If there is a free buffer (in reality this situation is possible only at system startup), we record its number and go to step 3. If no free buffer is found, we select the buffer that has not been accessed for the longest time. If it carries the flag indicating that information has been written into it, the block held in the buffer is first actually written to the physical device. Then we record its number and likewise go to step 3.

3. The N-th block of device M is read into the buffer that was found.

4. The access-time counter of this buffer is reset to zero, and the counters of the other buffers are incremented by one.

5. The contents of the buffer are handed over as the result of the read.

You can see that there is an optimization here aimed at minimizing real accesses to the physical device, which is quite useful in the operation of the system. Blocks are written according to a similar scheme. This is how buffering is organized for low-level I/O.
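A rough sketch of the read algorithm just described. All names and structures here are illustrative, not real kernel code; initialization and synchronization are elided, and the free-buffer case is folded into the LRU choice.

    #define POOL 64
    #define BLK  512

    struct buf {
        int dev, blk;        /* which block of which device is cached */
        int age;             /* how long since the last access */
        int dirty;           /* written to, but not yet flushed */
        char data[BLK];
    };

    static struct buf pool[POOL];

    /* Stand-ins for the real exchange with the device driver. */
    static void dev_read(int dev, int blk, char *d)  { (void)dev; (void)blk; (void)d; }
    static void dev_write(int dev, int blk, char *d) { (void)dev; (void)blk; (void)d; }

    char *read_block(int dev, int blk)
    {
        int i, victim = 0;

        /* 1. Look for the block already in the pool. */
        for (i = 0; i < POOL; i++)
            if (pool[i].dev == dev && pool[i].blk == blk)
                goto found;

        /* 2. Choose the buffer not accessed for the longest time. */
        for (i = 1; i < POOL; i++)
            if (pool[i].age > pool[victim].age)
                victim = i;
        if (pool[victim].dirty)              /* flush the old contents first */
            dev_write(pool[victim].dev, pool[victim].blk, pool[victim].data);

        /* 3. Read the required block into the chosen buffer. */
        dev_read(dev, blk, pool[victim].data);
        pool[victim].dev = dev;
        pool[victim].blk = blk;
        pool[victim].dirty = 0;
        i = victim;

    found:
        /* 4. Reset this buffer's age, age all the others. */
        for (int k = 0; k < POOL; k++)
            pool[k].age++;
        pool[i].age = 0;

        /* 5. Hand the contents back as the result of the read. */
        return pool[i].data;
    }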

The benefits are obvious. The disadvantage is that the system becomes critically sensitive to unexpected power failures: if the system buffers have not been flushed when execution of the operating system's programs stops abnormally, information may be lost.

The second disadvantage is that buffering separates in time the request to the system for an exchange and the actual exchange. This shortcoming shows itself if a failure occurs during the real physical exchange. Suppose a block must be written: it is written to the buffer, and the process receives a reply from the system that the exchange has completed successfully, but when the system will actually write this block to external storage is unknown. An emergency can then arise because the write may fail, say, due to defects in the medium. The result is a situation in which the process's request for an exchange succeeded (the process was told that everything was written), while in fact the exchange was never performed.

Thus, this system is designed for reliable hardware and correct, professional operating conditions. To combat possible loss of information in emergency situations, the system is sufficiently smart and acts correctly. Namely, the system has a parameter, which can be changed on the fly, that defines the period after which system data is flushed to disk. Second, there is a command that may be available to the user, the sync command, which flushes the data to disk (a sketch of forcing a flush from within a program follows below). Third, the system has a certain redundancy that allows it, in case of loss of information, to perform a set of actions that restore the information; disputed blocks whose membership in files could not be identified are written to a specific place in the file system, where one can try to analyze and restore them manually, or lose something. Our university was one of the first in the country to run the UNIX operating system, and we can already say that problems of system unreliability, in the sense of fatal loss of information, did not arise.
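A minimal sketch of forcing a flush from a program, on systems that provide the fsync system call (the file name log.txt is hypothetical):

    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("log.txt", O_WRONLY | O_CREAT, 0644);
        if (fd == -1)
            return 1;

        write(fd, "record\n", 7);
        fsync(fd);      /* do not return until the block really reaches the disk */
        close(fd);
        return 0;
    }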

Today we started talking about the fact that we have system calls and I/O libraries. Another tool that allows system performance to be optimized is the standard input/output library, associated with the include file stdio.h. Conceptually the exchange is the same as in low-level input/output. The difference is that, where open() returns a file descriptor number, fopen() returns a pointer to a structure of the special type FILE. The second and main difference is that this is a library of functions, and many of its service features are implemented within your address space. In particular, one such service feature is another level of input/output buffering. Its essence is that, at the expense of the process's resources, a buffer can be allocated that works similarly to the buffer pool of the operating system and minimizes the process's accesses to the I/O system calls. Clearly this I/O library is itself implemented through the I/O system calls. This means there is a double buffering factor, though this also increases unreliability.

Double buffering is obviously a useful thing. It allows you to issue reads or writes through library functions with data volumes of half a block or a third of a block; if these portions are contiguous, the library, thanks to its buffering, collects the portions and performs one system call instead of several. That is profitable. The disadvantage is that this buffering is organized within the address space of the process, with all the ensuing consequences: synchronization of exchanges is lost, including the case when other processes work with the same file through this library, because the body of each process has its own buffer accumulating data, and the uniformity present in the scheme we considered no longer works. Nevertheless, the standard I/O library is a handy tool; it also provides means to disable this buffering.
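A minimal sketch of both points (the file name out.txt is hypothetical): stdio normally accumulates small portions in a process-local buffer, while setvbuf and fflush let you control or disable that buffering.

    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("out.txt", "w");   /* a FILE pointer, not an FD number */
        if (f == NULL)
            return 1;

        /* Disable the library-level buffer: every fwrite now turns
           into a write system call immediately. */
        setvbuf(f, NULL, _IONBF, 0);

        fwrite("half a block\n", 1, 13, f);
        fflush(f);                         /* explicit flush (a no-op here) */
        fclose(f);
        return 0;
    }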

Lecture No. 10

In the last lecture we discussed the following points related to the organization of the functioning of the file system. This was the system-level organization of low-level exchange. We found that, through its organization of data, the UNIX operating system uses fairly simple and transparent means to solve the problems of possible conflicts when the same file is opened several times. We saw that all openings of one and the same file (by file we mean not the name but the contents) ultimately come down to work with a single copy of the ID. We found that almost all openings of files associated with the same ID give the processes the ability to work with their own read/write file pointers, except when a file was obtained in a process by inheritance, i.e. the file was passed from the parent process to the child process through the fork() function.

We found that the system divides the devices it supports into two classes: block-oriented and byte-oriented. The same device can be both byte-oriented and block-oriented at the same time; this depends both on the device itself and on the availability of drivers, the programs that control the device. An example of such a device is RAM.

We reviewed the principles of organizing low-level exchange for block-oriented devices, and in this context became acquainted with the buffering tools used in UNIX. Their essence is that, by analogy with the hardware read/write buffers of main memory, the operating system creates software means that minimize the number of accesses to the physical device. This mechanism has always distinguished the UNIX OS. It should be noted here that exchange buffering can be multi-level: a further level may appear because the device can have its own hardware buffers, implemented similarly to the buffers of random-access memory.

We also talked about the fact that besides low-level I/O, with which the functions provided by system calls (open, read, write, etc.) are associated, there are high-level means of access: the standard input/output library stdio.h, whose inclusion makes it possible to use yet another level of buffering for organizing exchanges (an optimization of accesses to the system calls), a level associated with the process, i.e. the buffering is done at the expense of the process's resources. We assessed what is good and what is bad about this. Obviously, buffering reduces the number of exchanges with a slow external device, and the more such levels there are, the fewer exchanges take place. The bad thing, however, is that buffering reduces the reliability of the system: for example, if the power is unexpectedly cut off, all buffers lose their information; the moment of the exchange request is far from the real exchange, so unpleasant situations are possible. But despite these shortcomings, experience shows that fatal losses of information happen rarely.

I would like to draw your attention to how thoroughly UNIX economizes on accesses to external storage. The superblock is located in RAM, and real operations on superblock information go not to the disk but to RAM, although here the same problem with sudden power-off occurs. When a file is opened, we work with its ID, and we found out that work with the ID is carried out through a copy of it located in software tables in RAM. This means there is almost no overhead for small files, and the overhead is small when working with huge files. It turns out that almost the entire infrastructure supporting the operation of the file system works thanks to deep, multi-layered buffering.

File attributes

We have talked about the organization of the system's users; it has a hierarchical three-level structure. Any user belongs to a group. In accordance with the hierarchy of users, a hierarchy of file protection and user rights is defined. The concept of a file owner is defined: the initial owner of a file is the user (more precisely, the user process) that created the file. The "file owner" attribute can be changed with the chown command. Each file has protection attributes tied to the hierarchy. There are the rights of the file owner to certain actions on the file: permission to read, to write, and to execute. Besides the rights associated with the user level, each file has rights associated with the group level: these are the rights of all users of the group to which the file owner belongs, except the owner himself (i.e. the rights of the owner and of his group are distinct). The third category of protection is everyone else: the rights held by all users of the system except the owner and his group. The system has a command to change access rights, chmod.
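A minimal sketch (the file name report.txt and the chosen rights are hypothetical): the same three categories, owner, group and everyone else, expressed through the chmod system call.

    #include <sys/stat.h>

    int main(void)
    {
        /* Owner: read and write; group: read; all others: nothing. */
        if (chmod("report.txt", S_IRUSR | S_IWUSR | S_IRGRP) == -1)
            return 1;
        return 0;
    }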

Besides access attributes, each file can have such characteristics as, in particular, the so-called t-bit and s-bit, which are also set by certain commands. Already knowing the structure of the file system, we understand that a file may in principle be highly fragmented. Moreover, the file may be large, and when a large file is opened, overhead arises from accessing distant blocks of the file, so opening a file is a lengthy process. To optimize this, the system provides the ability to mark executable files with the t-bit. After that, the following happens: if an executable file marked with the t-bit is invoked, then on its first invocation during a system session the body of the file is copied to the save area. On every subsequent invocation of the file, the directory of the save area is examined first, and if the needed file is there, it is loaded not from external storage but from this area. In other words, this is one more way to minimize accesses to external storage. Usually the ability to set the t-bit is the prerogative of the system administrator, who himself selects the processes (and, correspondingly, files) to be marked with the t-bit. Usually those processes that are used most often are marked with it (if, for example, a programming workshop is being run, it makes sense to mark the compiler file with the t-bit).

We will look at the s-bit somewhat superficially, and will return to it later. There is the following problem. Everything in the system belongs to somebody, because all means, all commands (except some built-in ones) are ultimately files, and files have owners. Some of these commands need to access certain system files. A problem arises: on the one hand, there must be protection against unauthorized access to such a file; on the other hand, all the commands potentially carry the rights of all categories of users. What to do? It is possible to mark some files with the s-bit. The owner of an s-bit file remains unchanged, but when the file is launched, the owner of the process that launched it is granted the data-access rights of the owner of the executable file.

Suppose there is an executable named file, and when it runs it works with the file file2, which contains confidential information. Say, file updates file2, which holds information about all registered users, and in particular file can change a user's password in the system. If I run file on my own behalf, one of two situations can arise: either I cannot work with file2, which holds the account information, because it is closed to everyone else; or file2 is open to everyone, and then there is no protection at all.

This is where the s-bit works. The essence of its operation is as follows. The owner of the executable file is the user ROOT. Suppose a user named MASH wants to run this file. If MASH runs the file and there is no s-bit, it turns out that the owner of the file is ROOT while the owner of the process is MASH; in that case the files inaccessible to the user MASH are inaccessible to her process, and MASH cannot change her password in the system. The s-bit allows the rights of the file's owner (ROOT) to be extended to the owner of the process launched from this file (MASH): for the duration of the process's run, it has access to all the files that were accessible to the owner of the file (ROOT).
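A minimal sketch of how the s-bit (and, analogously, the t-bit) could be set on an executable; in the real system only the administrator would do this, and the file name here is hypothetical.

    #include <sys/stat.h>

    int main(void)
    {
        /* S_ISUID is the s-bit, S_ISVTX the t-bit; owner rwx, others execute. */
        if (chmod("file", S_ISUID | S_IRWXU | S_IXGRP | S_IXOTH) == -1)
            return 1;
        return 0;
    }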

The next question: how are access rights interpreted for directories (since directories are also files)? Read permission on a directory means that entering the directory and opening the files in it is allowed. Write permission provides the ability to create and destroy files in the directory. Execute permission is the ability to search in the directory (for example, with the ls command).

The file system from the user's point of view.

Let us look at the structure of the file system from the user's point of view. We will consider this structure for a generalized operating system, since the actual structure may vary. In the root directory there is a file named unix. This is the file that is launched by the boot loader and that forms the kernel of the operating system.

The ETC directory. This directory contains standard system data files and commands that provide a certain level of control over the functioning of the system.

1. The passwd file. All users are registered in the system through this file. This means that if a user can work, then the passwd file has a line marked with the user's name, containing a set of data separated by a delimiter character. In particular, a line of the passwd file holds the number of the group to which the user belongs, and may hold the encoded password for the user's login into the system. Encoded means that the system applies an unambiguous mapping of the character sequence into some code, and it is this image of the password that is stored. Modern UNIX systems keep passwords in a separate protected database (although the passwd file is still present), because the passwd file is usually open for reading, the transformation algorithm is also usually known, and it is possible to guess a password.

The same line also has a field characterizing the last name, first name and patronymic of the user; a field indicating the user's status; and a field indicating the home directory. The line also indicates (or may indicate) which command interpreter the user will work with. There may be some further parameters.
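A minimal sketch of what one line of the passwd file might look like (the user mash and all field values are hypothetical); the fields are separated by colons: name, password mark, user number, group number, full name, home directory, command interpreter.

    mash:x:1001:100:Maria Shishkina:/home/mash:/bin/sh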

2. The rc file. This file contains, in text form, a set of commands that are executed when the operating system boots. For example, at boot time the operating system may start a process that checks the integrity of the file system.

Note that the UNIX operating system, with a few exceptions, keeps all its system information in ordinary text files. This information is easy to view and easy to correct. At the time, this was a revolutionary step.

3. The same directory holds commands that allow user passwords to be changed (the executable file passwd), allow local file systems to be mounted into the file system and those same local systems to be unmounted, and allow the process of testing and correcting the file system to be started. This process checks the file system against a certain set of characteristics: for example, the set of free files, combined with the set of occupied files, should give all of them.

The BIN directory. This directory holds the overwhelming majority of the standard system commands available to the user.

The MNT directory. This is the directory to which local file systems can be mounted. Until now we assumed that the file system resides on a single device, but in reality this is not so: there is the main file system on the system device, and there can be an arbitrary (within reasonable limits) number of local file systems that are mounted into the system with a special command. The root of such a local file system becomes the MNT directory.

The DEV directory. This directory holds the files associated with the drivers of specific external devices, such as the console driver, the line printer driver, etc. You remember that the files associated with external device drivers carry, in the ID associated with their name, the attribute saying that this is a device file, as well as references in the ID to the corresponding driver tables. These files have no contents.

The USR directory. This directory has a LIB subdirectory, which usually holds the libraries implementing groups of functions provided to the user, including the C compiler with its support libraries.

There is also a BIN subdirectory (/USR/BIN), which holds the system administrator's additional home-grown commands, since placing them in the /BIN directory is considered bad form.

The INCLUDE subdirectory. Do you remember what an include line looks like: #include <file.h>. Such a line tells the C preprocessor to take the file from the /USR/INCLUDE directory. This directory has its own subdirectories; of interest to us is the SYS subdirectory (/USR/INCLUDE/SYS). It holds the include files associated with system facilities; in particular, signal.h is a list of the signals that two processes can exchange.

So, we have finished describing the file system, and we can draw conclusions. The UNIX file system is hierarchical and multi-user. The UNIX file system has deep, multi-tiered buffering of exchanges with real devices. The UNIX file system is the informational basis for the functioning of the operating system. It is an extensible file system that nevertheless preserves its integrity: there is always exactly one path from its root to any node (or leaf). In terms of the logical organization of files, the UNIX file system has a clear and transparent structure. This imposes certain demands on system administration, since there are problems of coordinating access rights to the various components of the file system and problems of placing new information within the file system.
