John von Neumann's principles of computer design. Computer architecture

The foundations of the theory of computer architecture were laid by the outstanding American mathematician John von Neumann. He became involved in the creation of the world's first vacuum-tube computer, ENIAC, in 1944, when its design had already been chosen. In the course of this work, during numerous discussions with his colleagues H. Goldstine and A. Burks, von Neumann put forward the idea of a fundamentally new computer. In 1946 the scientists set out their principles of computer construction in the now classic article "Preliminary Discussion of the Logical Design of an Electronic Computing Instrument." Half a century has passed since then, but the provisions put forward in it remain relevant today.

The article convincingly substantiates the use of the binary system for representing numbers (it is worth recalling that earlier computers stored the numbers they processed in decimal form). The authors demonstrated the advantages of the binary system for technical implementation and the convenience and simplicity of performing arithmetic and logical operations in it. Computers later came to process non-numeric kinds of information as well – text, graphics, sound and others – but binary coding of data remains the informational basis of any modern computer.

Another truly revolutionary idea, whose importance is difficult to overestimate, is the "stored program" principle proposed by von Neumann. Initially, the program was set by installing jumpers on a special patch panel. This was a very labor-intensive task: for example, changing the program of the ENIAC machine took several days (while the calculation itself could not last more than a few minutes before the vacuum tubes failed). Von Neumann was the first to realize that a program could be stored as a sequence of zeros and ones, in the same memory as the numbers it processes. The absence of a fundamental difference between program and data makes it possible for a computer to form a program for itself based on the results of its own calculations.

Von Neumann not only put forward the fundamental principles of the logical structure of a computer, but also proposed a structure that was reproduced throughout the first two generations of computers. The main blocks, according to von Neumann, are the control unit (CU) and the arithmetic-logic unit (ALU) (usually combined into the central processor), memory, external memory, and input and output devices. Note that external memory differs from input and output devices in that data is placed in it in a form convenient for the computer but inaccessible to direct human perception. Thus, a magnetic disk drive belongs to external memory, while a keyboard is an input device and a display and printer are output devices.

In modern computers the control unit and the arithmetic-logic unit are combined into a single unit – the processor, which transforms information coming from memory and external devices (this includes fetching instructions from memory, encoding and decoding them, performing various operations, including arithmetic ones, and coordinating the operation of the computer's units). The functions of the processor are discussed in more detail below.

Memory (the storage device) stores information (data) and programs. The storage in modern computers is "multi-tiered": it includes random-access memory (RAM), which holds the information the computer is working with at the given moment (the executable program, part of the data it needs, some control programs), and external storage devices of much larger capacity than RAM but with significantly slower access (and a significantly lower cost per byte of stored information). The classification of memory devices does not end with RAM and external storage: specific functions are also performed by ultra-fast memory (registers and cache), read-only memory (ROM), and other subtypes of computer memory.

In a computer built according to this scheme, instructions are read from memory and executed sequentially. The number (address) of the memory cell from which the next program instruction will be fetched is indicated by a special device – the program counter in the control unit. Its presence is also one of the characteristic features of the architecture in question.

The fundamentals of the architecture of computing devices developed by von Neumann proved so fundamental that the literature calls it the "von Neumann architecture." The vast majority of computers today are von Neumann machines. The only exceptions are certain kinds of parallel-computing systems in which there is no program counter, the classical concept of a variable is not implemented, and there are other significant departures from the classical model (examples are dataflow and reduction computers).

Apparently, a significant departure from the von Neumann architecture will come with the development of the idea of fifth-generation machines, in which information processing is based not on calculations but on logical inference.

Von Neumann's principles

The principle of memory homogeneity - Commands and data are stored in the same memory and are externally indistinguishable there. They can be recognized only by the way they are used: the same value in a memory cell can serve as data, as a command, or as an address, depending solely on how it is accessed. This makes it possible to perform the same operations on commands as on numbers and, accordingly, opens up a number of possibilities. Thus, by cyclically changing the address part of a command, one can access successive elements of a data array. This technique is called command modification and is not recommended from the standpoint of modern programming. More useful is another consequence of the homogeneity principle: the instructions of one program can be produced as the result of executing another program. This possibility underlies translation - the conversion of program text from a high-level language into the language of a specific computer.

The principle of addressing - Structurally, the main memory consists of numbered cells, and any cell is available to the processor at any time. Binary codes of commands and data are divided into units of information called words and stored in memory cells, and to access them the numbers of the corresponding cells - addresses - are used.

Principle program control- All calculations provided for by the algorithm for solving the problem must be presented in the form of a program consisting of a sequence of control words - commands. Each command prescribes some operation from a set of operations implemented by the computer. Program commands are stored in sequential memory cells of the computer and are executed in a natural sequence, that is, in the order of their position in the program. If necessary, using special commands, this sequence can be changed. The decision to change the order of execution of program commands is made either based on an analysis of the results of previous calculations, or unconditionally.

Binary coding principle - According to this principle, all information, both data and commands, is encoded with binary digits 0 and 1. Each type of information is represented by a binary sequence and has its own format. A sequence of bits in a format that has a specific meaning is called a field. In numeric information, there is usually a sign field and a significant digits field. In the command format, two fields can be distinguished: the operation code field and the addresses field.
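To make the instruction-format idea concrete, here is a minimal sketch of packing and unpacking such a command word with bit operations. The 4-bit operation-code field and 12-bit address field are assumptions chosen only for illustration, not the format of any particular real machine:

```python
# Minimal sketch: packing and unpacking a toy 16-bit instruction word.
# The 4-bit opcode / 12-bit address split is an assumption for illustration.

OPCODE_BITS = 4
ADDR_BITS = 12

def encode(opcode: int, address: int) -> int:
    """Pack an opcode and an address into one 16-bit command word."""
    assert 0 <= opcode < (1 << OPCODE_BITS)
    assert 0 <= address < (1 << ADDR_BITS)
    return (opcode << ADDR_BITS) | address

def decode(word: int) -> tuple[int, int]:
    """Split a command word back into its opcode and address fields."""
    return word >> ADDR_BITS, word & ((1 << ADDR_BITS) - 1)

word = encode(0b0011, 0x0A5)   # hypothetical opcode 3, operand address 0x0A5
print(f"{word:016b}")          # 0011000010100101
print(decode(word))            # (3, 165)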

In 1946, J. von Neumann, H. Goldstine and A. Burks, in a joint article, set out new principles for the construction and operation of computers. The first two generations of computers were subsequently built on the basis of these principles. Later generations saw some changes, although von Neumann's principles remain relevant today.

In fact, Neumann managed to summarize the scientific developments and discoveries of many other scientists and formulate something fundamentally new on their basis.

Von Neumann's principles

  1. Use of the binary number system in computers. Its advantage over the decimal number system is that the devices can be made quite simple, and arithmetic and logical operations in the binary system are also quite simple.
  2. Program control of the computer. The operation of the computer is controlled by a program consisting of a set of commands, which are executed sequentially one after another. The creation of a machine with a stored program was the beginning of what we today call programming.
  3. Computer memory is used to store not only data but also programs. Both program commands and data are encoded in the binary number system, i.e. they are recorded in the same way. Therefore, in certain situations the same actions can be performed on commands as on data.
  4. Computer memory cells have addresses that are numbered sequentially. At any time, you can access any memory cell by its address. This principle opened up the possibility of using variables in programming.
  5. Possibility of conditional jump during program execution. Despite the fact that commands are executed sequentially, programs can implement the ability to jump to any section of code.

The most important consequence of these principles is that the program was no longer a permanent part of the machine (as it is, for example, in a calculator). It became possible to change the program easily, while the hardware itself remains unchanged and very simple.

By comparison, the program of the ENIAC computer (which did not have a stored program) was determined by special jumpers on the panel. It could take more than one day to reprogram the machine (set jumpers differently). And although programs for modern computers can take years to write, they work on millions of computers after a few minutes of installation on the hard drive.

How does a von Neumann machine work?

A von Neumann machine consists of a storage device (memory), an arithmetic-logic unit (ALU), a control unit (CU), and input and output devices.

Programs and data are entered into memory from the input device through the arithmetic-logic unit. All program commands are written to adjacent memory cells, while the data to be processed can be held in arbitrary cells. In any program, the last command must be the halt command.

A command consists of an indication of what operation is to be performed (out of the operations the given hardware can perform), the addresses of the memory cells holding the data on which the operation is to be performed, and the address of the cell where the result is to be written (if it needs to be stored in memory).

The arithmetic logic unit performs the operations specified by the instructions on the specified data.

From the arithmetic logic unit, the results are output to memory or an output device. The fundamental difference between a memory and an output device is that in a memory, data is stored in a form convenient for processing by a computer, and it is sent to output devices (printer, monitor, etc.) in a way that is convenient for a person.

The control unit controls all parts of the computer. From the control device, other devices receive signals “what to do”, and from other devices the control unit receives information about their status.

The control unit contains a special register (cell) called the "program counter". After the program and data are loaded into memory, the address of the program's first instruction is written to the program counter. The control unit reads from memory the contents of the cell whose address is held in the program counter and places it in a special device – the command register. The control unit determines the operation of the command, locates in memory the data whose addresses are specified in the command, and controls the command's execution. The operation itself is performed by the ALU or the computer's hardware.

After each command is executed, the program counter is incremented by one and thus points to the next command of the program. When it is necessary to execute a command that is not the next one in order but is separated from the current one by some number of addresses, a special jump command contains the address of the cell to which control must be transferred.
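The cycle just described can be illustrated with a minimal sketch of a toy von Neumann machine. Everything in it – the instruction names (LOAD, ADD, SUB, STORE, JNZ, HALT), the memory layout and the sample program, which multiplies 5 by 3 through repeated addition – is invented purely for illustration; instructions and data share one memory list, and the jump command simply overwrites the program counter:

```python
# Toy von Neumann machine: instructions and data live in the same memory list,
# the program counter (pc) selects the next word to fetch and execute.

memory = [
    ("LOAD", 9),    # 0: acc <- result
    ("ADD", 8),     # 1: acc <- acc + 5
    ("STORE", 9),   # 2: result <- acc
    ("LOAD", 10),   # 3: acc <- counter
    ("SUB", 11),    # 4: acc <- acc - 1
    ("STORE", 10),  # 5: counter <- acc
    ("JNZ", 0),     # 6: if acc != 0, jump back to instruction 0
    ("HALT", 0),    # 7: stop
    5,              # 8: the addend
    0,              # 9: the result
    3,              # 10: the loop counter
    1,              # 11: the constant one
]

pc, acc = 0, 0
while True:
    op, addr = memory[pc]      # fetch the word the program counter points to
    pc += 1                    # by default, step to the next cell
    if op == "LOAD":
        acc = memory[addr]
    elif op == "ADD":
        acc += memory[addr]
    elif op == "SUB":
        acc -= memory[addr]
    elif op == "STORE":
        memory[addr] = acc
    elif op == "JNZ":
        if acc != 0:
            pc = addr          # the jump overwrites the program counter
    elif op == "HALT":
        break

print("5 * 3 =", memory[9])    # 5 * 3 = 15
```

Note that the conditional jump (JNZ) is exactly the mechanism described above: it changes the natural order of execution by loading a new address into the program counter.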

· use of the binary system to represent numbers. Von Neumann's work demonstrated the advantages of the binary system for technical implementation and the convenience of performing arithmetic and logical operations in it. Later, computers began to process non-numeric kinds of information as well: text, graphics, sound, etc. Binary coding is the basis of the modern computer.

· the stored-program principle. A program written in binary codes must be stored in the same memory as the data it processes.

· the principle of addressability. Commands and data are placed in memory cells that are accessed by address. The address of a cell is its number; the location of information in RAM is likewise encoded as a binary code.

In a computer, according to the von Neumann principle, instructions are sequentially read from memory and executed. The number (address) of the next memory cell from which the program command is extracted is generated and stored in a special program counter device.

In accordance with von Neumann's principles, a computer must contain the following devices:

· The arithmetic-logic unit (ALU) is designed to process encoded information and can perform arithmetic and logical operations;

· The control device (CU) organizes the execution of programs;

· Memory, or the storage device, stores programs and data. Computer memory consists of a number of numbered cells, each of which may contain data being processed or program instructions;

· External devices for the input and output of information provide the forward and feedback links between the computer and the outside world.

Let's consider the composition and purpose of the main PC blocks (Fig. 2).

Fig. 2. Block diagram of a personal computer

Microprocessor (MP). This is the central unit of the PC, designed to control the operation of all blocks of the machine and to perform arithmetic and logical operations on information.

The microprocessor includes:

§ control unit (CU) – generates and supplies to all blocks of the machine, at the required moments, the control signals (control pulses) determined by the specifics of the operation being performed and by the results of previous operations; it forms the addresses of the memory cells used by the current operation and passes these addresses to the corresponding computer blocks; the control unit receives its reference pulse train from the clock generator;

§ arithmetic-logic unit (ALU) – designed to perform all arithmetic and logical operations on numeric and symbolic information (some PC models use an additional mathematical coprocessor to speed up these operations);

§ microprocessor memory (MPM) – serves for the short-term storage, recording and retrieval of the information used directly in calculations during the next few machine cycles. The MPM is built on registers and is used to ensure the machine's high speed, because main memory (RAM) does not always provide the write, search and read speeds needed for the efficient operation of a fast microprocessor. Registers are high-speed memory cells of various lengths (in contrast to main-memory cells, which have a standard length of 1 byte and lower speed);

§ microprocessor interface system – implements the interfacing and communication with the other PC devices; it includes the internal MP interface, buffer storage registers, and control circuits for the input/output ports (I/O ports) and the system bus. An interface is a set of means for coupling and communication between computer devices that ensures their effective interaction. An I/O port is interface circuitry that allows another PC device to be connected to the microprocessor.

Clock generator. It generates a sequence of electrical impulses; the frequency of the generated pulses determines the clock frequency of the machine.

The time interval between adjacent pulses defines the duration of one cycle of the machine's operation, or simply the machine cycle.

The frequency of the clock pulse generator is one of the main characteristics of a personal computer and largely determines the speed of its operation, because each operation in the machine is performed in a certain number of clock cycles.
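As a rough illustration of this relationship, the cycle time and the time of a multi-cycle operation can be computed directly from the clock frequency; the 3 GHz clock and the 4-cycle operation below are invented figures, not values from the text:

```python
# Illustrative numbers only: relate clock frequency to cycle time
# and to the duration of an operation that takes several cycles.

frequency_hz = 3_000_000_000      # a hypothetical 3 GHz clock
cycle_time_s = 1 / frequency_hz   # one clock cycle ~ 0.33 ns

cycles_per_operation = 4          # assume the operation needs 4 cycles
operation_time_s = cycles_per_operation * cycle_time_s

print(f"cycle time:     {cycle_time_s * 1e9:.2f} ns")
print(f"operation time: {operation_time_s * 1e9:.2f} ns")
```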

System bus. This is the main interface system of the computer, providing the interfacing and communication of all its devices with one another.

The system bus includes:

§ data code bus (KShD), containing the wires and interface circuits for the parallel transfer of all bits of the numeric code (machine word) of an operand;

§ address code bus (KShA), containing the wires and interface circuits for the parallel transfer of all bits of the address code of a main-memory cell or of an input/output port of an external device;

§ instruction code bus (KShI), containing the wires and interface circuits for transferring instructions (control signals, pulses) to all blocks of the machine;

§ power bus, containing the wires and interface circuits for connecting the PC's blocks to the power-supply system.

The system bus provides three directions of information transfer:

1) between the microprocessor and main memory;

2) between the microprocessor and the input/output ports of external devices;

3) between the main memory and the I/O ports of external devices (in direct memory access mode).

All blocks – or rather their I/O ports – are connected to the bus in the same way, through corresponding unified connectors: either directly or through controllers (adapters). The system bus is controlled by the microprocessor, either directly or, more often, through an additional chip – the bus controller, which generates the main control signals. Information exchange between external devices and the system bus is performed using ASCII codes.

Main memory. It is designed for storing information and for promptly exchanging it with the other units of the machine. Main memory contains two types of storage devices: read-only memory (ROM) and random-access memory (RAM).

ROM serves to store unchangeable (permanent) program and reference information; it allows the information stored in it only to be read quickly (the information in ROM cannot be changed).

RAM is designed for the operational recording, storage and reading of information (programs and data) directly involved in the information and computing process currently being performed by the PC. The main advantages of RAM are its high speed and the ability to access each memory cell individually (direct address access to a cell). Its disadvantage is that it cannot retain information after the machine's power is switched off (volatility).

External memory. It belongs to the PC's external devices and is used for long-term storage of any information that may ever be needed for solving problems. In particular, all of a computer's software is kept in external memory. External memory includes various types of storage devices, but the most common, present on almost any computer, are hard disk drives (HDD) and floppy disk drives (FDD).

The purpose of these drives is to store large amounts of information, to record it, and to deliver the stored information on request to random-access memory. Hard disk drives and floppy disk drives differ only in design, in the volume of information stored, and in the time needed to search for, write and read information.

Storage devices on cassette magnetic tape (streamers), optical disk drives (CD-ROM – Compact Disc Read-Only Memory), and others are also used as external memory devices (see subsection 4.4).

Power supply. This is a block containing autonomous and network power supply systems for a PC.

Timer. This is the machine's internal electronic clock, which provides, when necessary, automatic recording of the current moment in time (year, month, hours, minutes, seconds and fractions of a second). The timer is connected to an autonomous power source – a battery – and continues to run when the machine is disconnected from the mains.

External devices (EDs). These are the most important components of any computing system. Suffice it to say that by cost, external devices sometimes account for 50–80% of an entire PC. The possibility and effectiveness of using PCs in control systems and in the national economy as a whole largely depend on the composition and characteristics of the external devices.

The external devices of a PC ensure the machine's interaction with its environment: users, controlled objects and other computers. They are very diverse and can be classified by a number of criteria. By purpose, the following types of devices can be distinguished:

§ external storage devices (ESD), i.e. the PC's external memory;

§ user dialog tools;

§ information input devices;

§ information output devices;

§ means of communication and telecommunications.

User dialogue devices include video monitors (displays), less often console typewriters (printers with keyboards), and speech input/output devices.

A video monitor (display) is a device for showing information input to and output from a PC (see subsection 4.5).

Speech input/output devices belong to the rapidly developing means of user dialogue. Speech input devices are various microphone-and-acoustic systems, "sound mice", for example, with sophisticated software that can recognize letters and words pronounced by a person, identify them, and encode them.

Speech output devices are various speech synthesizers that convert digital codes into letters and words reproduced through loudspeakers or speakers connected to the computer.

Input devices include:

§ keyboard – a device for manually entering numeric, text and control information into a PC (see subsection 4.5);

§ graphics tablets (digitizers) – for the manual input of graphic information and images by moving a special pointer (pen) across the tablet; as the pen moves, the coordinates of its location are read automatically and entered into the PC;

§ scanners (reading devices) – for automatically reading typewritten texts, graphs, pictures and drawings from paper media and entering them into a PC; in the scanner's encoding device, in text mode, the characters read are compared with reference outlines by special programs and converted into ASCII codes, while in graphics mode the graphs and drawings read are converted into sequences of two-dimensional coordinates (see subsection 4.5);

§ manipulators (pointing devices): the joystick (a lever), the mouse, the trackball (a ball in a frame), the light pen, etc. – for entering graphic information on the display screen by controlling the movement of the cursor across the screen and then encoding the cursor's coordinates and entering them into the PC;

§ touch screens – for entering individual image elements, programs or commands from the screen into a PC.

Information output devices include:

§ printers – printing devices for recording information on paper (see subsection 4.5);

§ plotters – for outputting graphic information (graphs, diagrams, drawings) from a PC onto paper; there are vector plotters, which draw images with a pen, and raster plotters: thermographic, electrostatic, inkjet and laser. By design, plotters are divided into flatbed and drum types. The main characteristics of all plotters are roughly the same: plotting speed of 100–1000 mm/s; the best models can render colour images and halftones; laser plotters have the highest resolution and image clarity, but they are also the most expensive.

Communication and telecommunication devices are used for communication with other automation equipment (interface adapters, digital-to-analog and analog-to-digital converters, etc.) and for connecting the PC to communication channels, to other computers and to computer networks (network interface cards, junction units, data-transmission multiplexers, modems).

In particular, the network adapter shown in Fig. 4.2 is an external interface of the PC and serves to connect it to a communication channel for exchanging information with other computers and for working as part of a computer network. In global networks, the functions of the network adapter are performed by a modulator-demodulator (modem; see Chapter 7).

Many of the devices mentioned above belong to a conventionally defined group – multimedia devices.

Multimedia is a complex of hardware and software that allows a person to communicate with a computer using a variety of natural media: sound, video, graphics, text, animation, etc.

Multimedia facilities include speech input and output devices; the already widespread scanners (since they allow printed texts and drawings to be entered into the computer automatically); high-quality video and sound cards; video-capture cards (video grabbers), which take images from a video recorder or video camera and enter them into the PC; and high-quality audio and video reproduction systems with amplifiers, speakers and large video screens. Perhaps with even greater justification, multimedia also includes external high-capacity optical-disk storage devices, often used for recording audio and video information.

CDs are widely used, for example, in the study of foreign languages, traffic rules, accounting, legislation in general and tax legislation in particular, all accompanied by text and drawings, speech and animation, music and video. In a purely home setting, CDs can be used to store audio and video recordings, i.e. instead of audio and video cassettes. And, of course, the large number of computer games distributed on CD should be mentioned.

Thus, CD-ROM provides access to huge volumes of information recorded on CDs that are diverse both in terms of functionality and playback environment.

Additional circuits. Along with typical external devices, additional integrated-circuit boards that expand and improve the functionality of the microprocessor can be connected to the system bus and to the PC's MP: a mathematical coprocessor, a direct-memory-access controller, an input/output coprocessor, an interrupt controller, etc.

The mathematical coprocessor is widely used for the accelerated execution of operations on binary floating-point numbers and on binary-coded decimal numbers, and for calculating certain transcendental functions, including trigonometric ones. The mathematical coprocessor has its own instruction set and works in parallel (in time) with the main MP, but under the latter's control. Operations are accelerated tens of times. Later MP models, beginning with the 80486 DX, include the coprocessor in their own structure.

The direct-memory-access controller frees the MP from direct control of magnetic disk drives, which significantly increases the effective performance of the PC. Without this controller, data exchange between external storage and RAM passes through an MP register; with it, the data is transferred directly between external storage and RAM, bypassing the MP.

The I/O coprocessor, by working in parallel with the MP, significantly speeds up the execution of I/O procedures when several external devices (display, printer, hard and floppy disk drives, etc.) are being serviced; it frees the MP from processing I/O procedures, including the implementation of direct-memory-access mode.

The interrupt controller plays a vital role in a PC.

An interrupt is a temporary suspension of the execution of one program in order to promptly execute another program that is more important (higher priority) at that moment.

Interruptions occur constantly when the computer is running. Suffice it to say that all information input/output procedures are performed using interrupts, for example, timer interrupts occur and are serviced by the interrupt controller 18 times per second (naturally, the user does not notice them).

The interrupt controller services interrupt procedures: it receives an interrupt request from external devices, determines the priority level of the request, and issues an interrupt signal to the MP. The MP, on receiving this signal, suspends execution of the current program and proceeds to execute a special interrupt-service program for the device that made the request. After the service program completes, the interrupted program resumes. The interrupt controller is programmable.

Computer architecture and von Neumann principles

The term "architecture" is used to describe the principle of operation, configuration and interconnection of the main logical nodes of a computer. Architecture is a multi-level hierarchy of hardware and software from which a computer is built.

The foundations of the theory of computer architecture were laid by the outstanding American mathematician John von Neumann. The first computer, ENIAC, was created in the USA in 1946. The group of its creators included von Neumann, who proposed the basic principles of computer construction: the transition to the binary number system for representing information and the stored-program principle.

It was proposed to place the calculation program in the computer's storage device, which would allow commands to be executed automatically and, as a result, increase the computer's speed. (Recall that previously all computers stored the numbers being processed in decimal form, and programs were specified by installing jumpers on a special patch panel.) Von Neumann was the first to realize that a program could also be stored as a set of zeros and ones, in the same memory as the numbers it processes.

Basic principles of computer construction:

1. Any computer consists of three main components: a processor, memory, and input-output (I/O) devices.

2. The information with which the computer works is divided into two types:

    a set of processing commands (programs); data to be processed.

3. Both commands and data are entered into memory (RAM) – the stored-program principle.

4. The processing is controlled by the processor, whose control unit (CU) selects commands from RAM and organizes their execution, and the arithmetic-logical unit (ALU) performs arithmetic and logical operations on the data.


5. Input/output devices (I/O) are connected to the processor and RAM.

Von Neumann not only put forward the fundamental principles of the logical structure of computers, but also proposed a structure that was reproduced during the first two generations of computers.

Fig. 1. Architecture of a computer built on the von Neumann principles (the diagram shows the random-access memory (RAM) and external storage device (ESD) blocks; the legend indicates the direction of information flows and the direction of control signals from the processor to the other computer nodes)

The fundamentals of the architecture of computing devices developed by von Neumann proved so fundamental that the literature calls it the "von Neumann architecture." The vast majority of computers today are von Neumann machines.

The emergence of the third generation of computers was due to the transition from transistors to integrated circuits, which led to an increase in processor speed. Now the processor was forced to idle, waiting for information from slower input/output devices, and this reduced the efficiency of the entire computer as a whole. To solve this problem, special circuits were created to control the operation of external devices, or simply controllers.

The architecture of modern personal computers is based on the bus-modular principle. Information exchange between the computer's devices is carried out over the system bus (also called the system backbone).

A bus is a cable consisting of many conductors. Over one group of conductors – the data bus – the information being processed is transmitted; over another – the address bus – the addresses of memory cells or external devices accessed by the processor; the third part of the backbone – the control bus – carries control signals (for example, a signal that a device is ready for operation, a signal to start the device's operation, and so on).

How does the system bus work? We have already said that one and zero bits exist only in the heads of programmers. For a processor, only the voltages at its contacts are real. Each pin corresponds to one bit, and the processor only needs to distinguish between two voltage levels: yes/no, high/low. Therefore, the address for a processor is a sequence of voltages on special contacts called the address bus. You can imagine that after voltages are set on the contacts of the address bus, voltages appear on the contacts of the data bus, encoding the number stored at the specified address. This picture is very rough because it takes time to retrieve data from memory. To avoid confusion, the operation of the processor is controlled by a special clock generator. It produces pulses that divide the processor's work into separate steps. The unit of processor time is one clock cycle, that is, the interval between two pulses of the clock generator.

The voltages appearing on the processor's address bus are called the physical address. In real mode the processor works only with physical addresses. The processor's protected mode, by contrast, is interesting because the program works with logical addresses, which the processor invisibly converts into physical ones. The Windows system uses the processor's protected mode. Modern operating systems and programs require so much memory that the processor's protected mode has become far more "real" than its real mode.

The system bus is characterized by its clock frequency and bit width. The number of bits transmitted on the bus simultaneously is called the bus width. The clock frequency characterizes the number of elementary data-transfer operations per second. The bus width is measured in bits, the clock frequency in megahertz.


Any information transmitted from the processor to other devices over the data bus is accompanied by an address transmitted over the address bus. This may be the address of a memory cell or the address of a peripheral device. The bus width must be sufficient to transmit the address of a memory cell. Thus the address-bus width limits the amount of RAM a computer can have: it cannot be greater than 2^n cells, where n is the bus width. It is also important that the performance of all devices connected to the bus be matched: it makes no sense to have a fast processor and slow memory, or a fast processor and memory but a slow hard drive.
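As a small worked illustration of these two characteristics (the 32-bit address bus, 64-bit data bus and 100 MHz clock are assumed figures, not values from the text), the addressable memory and the peak transfer rate can be computed directly:

```python
# Illustrative: an n-bit address bus can distinguish 2**n addresses,
# and peak bus throughput is roughly (width in bits / 8) * clock frequency.

address_bits = 32
addressable_bytes = 2 ** address_bits            # 4 GiB for a 32-bit address bus
print(addressable_bytes / 2**30, "GiB addressable")

data_bus_bits = 64                               # assumed data-bus width
bus_clock_hz = 100_000_000                       # assumed 100 MHz bus clock
peak_bytes_per_s = data_bus_bits // 8 * bus_clock_hz
print(peak_bytes_per_s / 10**6, "MB/s peak transfer rate")
```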

Fig. 2. Diagram of a computer built on the bus (backbone) principle

Modern computers implement the open-architecture principle, which allows the user to assemble the computer configuration he needs and, if necessary, to upgrade it.

A computer's configuration is the actual set of components that make up the computer. The open-architecture principle allows the composition of the computer's devices to be changed: additional peripheral devices can be connected to the information backbone, and some device models can be replaced by others.

At the physical level, a peripheral device is connected to the backbone through a special block – a controller (also called an adapter, board or card). Special connectors on the motherboard – slots – are provided for installing controllers.

Software control of a peripheral device's operation is carried out through a program – a driver – which is a component of the operating system. Since there is a huge variety of devices that can be installed in a computer, each device usually comes with its own driver, which interacts directly with that device.

The computer communicates with external devices through ports – special connectors on its back panel. Ports are divided into serial and parallel. Serial ports (COM ports) are used to connect pointing devices and modems and to transmit small amounts of information over long distances. Parallel ports (LPT ports) are used to connect printers and scanners and to transmit large amounts of information over short distances. More recently, universal serial ports (USB) have appeared, to which a wide range of devices can be connected.

Turing machine

A Turing machine (TM) is an abstract executor (an abstract computing machine). It was proposed by Alan Turing in 1936 to formalize the concept of an algorithm.

A Turing machine is an extension of the finite-state machine and, according to the Church–Turing thesis, is capable of imitating any executor (by specifying suitable transition rules) that implements a step-by-step computation process in which each step is sufficiently elementary.

The structure of a Turing machine

A Turing machine includes a tape unbounded in both directions (Turing machines with several infinite tapes are also possible), divided into cells, and a control device (also called the read-write head), which can be in one of a set of states. The number of possible states of the control device is finite and precisely specified.

The control device can move left and right along the tape and can read and write symbols of some finite alphabet into the cells. A special blank symbol is distinguished; it fills all the cells of the tape except those (finitely many) in which the input data is written.

The control device operates according to transition rules, which represent the algorithm realized by the given Turing machine. Each transition rule instructs the machine, depending on the current state and the symbol observed in the current cell, to write a new symbol into that cell, to move to a new state, and to move one cell to the left or right. Some states of a Turing machine can be marked as terminal; entering any of them means the end of the work, the halting of the algorithm.

A Turing machine is called deterministic if each combination of state and tape symbol in the table corresponds to at most one rule. If there is a "tape symbol – state" pair for which two or more instructions exist, such a Turing machine is called non-deterministic.

Description of a Turing machine

A specific Turing machine is defined by listing the elements of the alphabet A, the set of states Q, and the set of rules by which the machine operates. The rules have the form q_i a_j → q_i1 a_j1 d_k (if the head is in state q_i and the letter a_j is written in the observed cell, then the head goes to state q_i1, a_j1 is written into the cell instead of a_j, and the head makes the move d_k, which has three options: one cell to the left (L), one cell to the right (R), or stay in place (N)). For every possible configuration there is exactly one rule (a non-deterministic Turing machine may have more). There are no rules only for the final state: once the machine reaches it, it stops. In addition, the final and initial states, the initial configuration on the tape, and the position of the machine's head must be specified.

Example of a Turing machine

Consider a TM for multiplying numbers in the unary number system. A rule written as "q_i a_j → q_i1 a_j1 R/L/N" should be understood as follows: q_i is the state in which the rule applies, a_j is the symbol in the cell under the head, q_i1 is the state to go to, a_j1 is what must be written into the cell, and R/L/N is the move command.
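The rule table of the multiplication machine itself is not reproduced here, but the rule format above can be illustrated with a minimal simulator sketch. The example machine it runs is a deliberately simple, made-up one (it merely appends one '1' to a unary number), not the multiplication machine discussed in the text:

```python
# Minimal Turing-machine simulator in the spirit of the rule format above:
# (state, symbol) -> (new state, new symbol, move).

def run(tape, rules, state="q0", halt="qH", blank="_"):
    cells = dict(enumerate(tape))          # sparse tape, unbounded in both directions
    head = 0
    while state != halt:
        symbol = cells.get(head, blank)
        state, new_symbol, move = rules[(state, symbol)]
        cells[head] = new_symbol
        head += {"R": 1, "L": -1, "N": 0}[move]
    lo, hi = min(cells), max(cells)
    return "".join(cells.get(i, blank) for i in range(lo, hi + 1)).strip(blank)

# Example machine (unary successor): move right over the 1s,
# write a 1 on the first blank cell, and stop.
rules = {
    ("q0", "1"): ("q0", "1", "R"),
    ("q0", "_"): ("qH", "1", "N"),
}
print(run("111", rules))   # '1111'  (3 + 1 in unary)
```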

Computer architecture by John von Neumann

The von Neumann architecture is the well-known principle of storing commands and data together in computer memory. Computing systems of this kind are often called "von Neumann machines," although the correspondence between these concepts is not always unambiguous. In general, when people speak of the von Neumann architecture, they mean the principle of keeping data and instructions in a single memory.

Von Neumann principles


The principle of memory homogeneity

Commands and data are stored in the same memory and are externally indistinguishable in memory. They can only be recognized by the method of use; that is, the same value in a memory cell can be used as data, as a command, and as an address, depending only on the way it is accessed. This allows you to perform the same operations on commands as on numbers, and, accordingly, opens up a number of possibilities. Thus, by cyclically changing the address part of the command, it is possible to access successive elements of the data array. This technique is called command modification and is not recommended from the standpoint of modern programming. More useful is another consequence of the principle of homogeneity, when instructions from one program can be obtained as a result of the execution of another program. This possibility underlies translation - the translation of program text from a high-level language into the language of a specific computer.

The addressability principle

Structurally, the main memory consists of numbered cells, and any cell is available to the processor at any time. Binary codes of commands and data are divided into units of information called words and stored in memory cells, and to access them the numbers of the corresponding cells - addresses are used.

Program control principle

All calculations provided for by the algorithm for solving the problem must be presented in the form of a program consisting of a sequence of control words - commands. Each command prescribes some operation from a set of operations implemented by the computer. Program commands are stored in sequential memory cells of the computer and are executed in a natural sequence, that is, in the order of their position in the program. If necessary, using special commands, this sequence can be changed. The decision to change the order of execution of program commands is made either based on an analysis of the results of previous calculations, or unconditionally.

Processor types

A microprocessor is a device consisting of one or more large-scale integrated circuits (LSIs) that performs the functions of a computer's processor. A classic computing device consists of an arithmetic unit (AU), a control unit (CU), a storage device (memory) and an input-output device.

Intel Celeron 400, Socket 370, in a plastic PPGA package, top view.

There are processors of various architectures.

CISC (Complex Instruction Set Computing) is a processor design concept characterized by the following set of properties:

· a large number of commands of different formats and lengths;

· introduction of a large number of different addressing modes;

· complex instruction encoding.

A CISC processor has to deal with more complex instructions of unequal length. A single CISC instruction can execute faster, but processing multiple CISC instructions in parallel is more difficult.

Making it easier to write and debug programs in assembler comes at the cost of cluttering the microprocessor with additional units. To improve performance, the clock frequency and the degree of integration must be raised, which requires better fabrication technology and, as a result, makes production more expensive.


RISC (Reduced Instruction Set Computing) is a processor with a reduced instruction set. The instruction system is simplified: all commands have the same format with simple encoding. Memory is accessed with load and store commands; the remaining commands are of the register-register type. A command entering the CPU is already divided into fields and requires no additional decoding.

Part of the chip is freed up to accommodate additional components. The degree of integration is lower than in the previous architectural variant, so lower clock frequencies are permissible while maintaining high performance. A command takes up less room in RAM, and the CPU is cheaper. The two architectures are not software-compatible, and debugging RISC programs is more difficult. This technology can be implemented in a way that is software-compatible with CISC technology (for example, superscalar technology).

Because RISC instructions are simple, fewer logic gates are needed to execute them, which ultimately reduces the cost of the processor. However, most of today's software has been written and compiled specifically for Intel CISC processors. To use the RISC architecture, current programs must be recompiled and sometimes rewritten.

Clock frequency

The clock frequency is an indicator of the speed at which the central processor executes commands.
A clock cycle is the period of time required to perform an elementary operation.

In the recent past, the clock speed of a central processor was identified directly with its performance, that is, the higher the clock speed of the CPU, the more productive it is. In practice, we have a situation where processors with different frequencies have the same performance, because they can execute a different number of instructions in one clock cycle (depending on the core design, bus bandwidth, cache memory).
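A quick back-of-the-envelope comparison makes this concrete: throughput can be estimated as the clock frequency multiplied by the average number of instructions completed per clock cycle (IPC). Both processors and all of their numbers below are invented for illustration:

```python
# Invented numbers: two hypothetical CPUs with different clocks and different
# average instructions per clock (IPC).  Throughput ~ frequency * IPC.

cpus = {
    "CPU A": {"freq_ghz": 3.8, "ipc": 1.0},
    "CPU B": {"freq_ghz": 2.5, "ipc": 1.6},
}

for name, c in cpus.items():
    giga_instr_per_s = c["freq_ghz"] * c["ipc"]
    print(f"{name}: {giga_instr_per_s:.1f} billion instructions/s")

# CPU B comes out ahead despite the lower clock: 4.0 vs 3.8 billion instructions/s.
```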

The processor clock speed is proportional to the system bus frequency (see below).

Bit depth

The processor's bit width is a value that determines the amount of information the central processor can process in one clock cycle.

For example, if the processor is 16-bit, this means that it is capable of processing 16 bits of information in one clock cycle.

I think everyone understands that the higher the processor bit depth, the larger volumes of information it can process.

Typically, the higher the processor capacity, the higher its performance.

Currently, 32-bit and 64-bit processors are used. The bit width of a processor does not mean that it is obliged to execute commands of exactly that size.

Cache memory

First of all, let's answer the question, what is cache memory?

Cache memory is a high-speed computer memory designed for temporary storage of information (code of executable programs and data) needed by the central processor.

What data is stored in cache memory?

Most frequently used.

What is the purpose of cache memory?

The fact is that RAM performance is much lower compared to CPU performance. It turns out that the processor is waiting for data to arrive from RAM - which reduces the performance of the processor, and therefore the performance of the entire system. Cache memory reduces processor latency by storing data and code of executable programs that were accessed most frequently by the processor (the difference between cache memory and computer RAM is that the speed of cache memory is tens of times higher).
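The benefit can be estimated with a simple average-access-time formula, t_avg = hit_rate * t_cache + (1 - hit_rate) * t_RAM; the access times and hit rates below are assumed figures chosen only to show the shape of the effect:

```python
# Illustrative average-access-time estimate:
#   t_avg = hit_rate * t_cache + (1 - hit_rate) * t_ram

t_cache_ns = 1.0       # assumed cache access time
t_ram_ns = 60.0        # assumed RAM access time

for hit_rate in (0.0, 0.80, 0.95, 0.99):
    t_avg = hit_rate * t_cache_ns + (1 - hit_rate) * t_ram_ns
    print(f"hit rate {hit_rate:.0%}: average access ~ {t_avg:.1f} ns")
```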

Cache memory, like regular memory, has a capacity. The higher the cache memory capacity, the larger volumes of data it can work with.

There are three levels of cache memory: cache memory first (L1), second (L2) and third (L3). The first two levels are most often used in modern computers.

Let's take a closer look at all three levels of cache memory.

The first-level cache is the fastest and most expensive memory.

L1 cache is located on the same chip as the processor and operates at the CPU frequency (hence the fastest performance) and is used directly by the processor core.

The capacity of the first level cache is small (due to its high cost) and is measured in kilobytes (usually no more than 128 KB).

The L2 cache is high-speed memory that performs the same functions as the L1 cache. The difference between L1 and L2 is that the latter is slower but larger (from 128 KB to 12 MB), which is very useful for resource-intensive tasks.

The L3 cache is located on the motherboard. L3 is significantly slower than L1 and L2, but faster than RAM. Naturally, the volume of L3 is greater than that of L1 and L2. Level-3 cache is found in very powerful computers.

Number of Cores

Modern processor manufacturing technologies make it possible to place more than one core in a single package. Having several cores significantly increases the processor's performance, but it does not mean that n cores give an n-fold increase in performance. In addition, a problem with multi-core processors is that today relatively few programs are written to take advantage of several cores.

The multi-core processor, first of all, allows you to implement the multitasking function: distributing the work of applications between the processor cores. This means that each individual core runs its own application.
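As a minimal sketch of this idea (the worker function and its inputs are invented for illustration, and Python's multiprocessing module is used here simply as one convenient way of putting independent tasks on different cores):

```python
# Minimal sketch: distributing independent tasks over several cores.

from multiprocessing import Pool, cpu_count

def worker(n: int) -> int:
    # stand-in for a CPU-heavy task
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    inputs = [10**6, 2 * 10**6, 3 * 10**6, 4 * 10**6]
    with Pool(processes=cpu_count()) as pool:   # one worker process per core
        results = pool.map(worker, inputs)      # tasks run on different cores
    print(results)
```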

Motherboard structure

Before choosing a motherboard, you need to at least superficially consider its structure. Although it is worth noting here that the location of the sockets and other parts of the motherboard do not play a special role.

The first thing you should pay attention to is the processor socket. This is a small square recess with a fastener.

For those familiar with the term "overclocking" (speeding up a computer), it is worth paying attention to the presence of a double heatsink. Motherboards often lack one, so those who intend to overclock their computer in the future should make sure this element is present on the board.

Elongated PCI-Express slots are designed for video cards, TV tuners, audio and network cards. Video cards require high bandwidth and use PCI-Express X16 connectors. For other adapters, PCI-Express X1 connectors are used.

Expert advice! PCI slots with different bandwidths look almost the same. It is worth examining the connectors especially carefully and reading the labels underneath them to avoid unpleasant surprises at home when installing a video card.

The smaller connectors are designed for RAM modules. They are usually colored black or blue.

The board's chipset is usually hidden under the heatsink. This element is responsible for the joint operation of the processor and other parts of the system unit.

The small square connectors on the edge of the board are used for connecting hard disks. On the other side are the connectors for input and output devices (USB, mouse, keyboard, etc.).

Manufacturer

Many companies produce motherboards, and it is almost impossible to single out the best or the worst among them. A board from any company can turn out to be of high quality; often even little-known manufacturers offer good products.

The secret is that all boards are equipped with chipsets from two companies: AMD and Intel. Moreover, the differences between the chipsets are insignificant and play a role only when solving highly specialized problems.

Form factor

In the case of motherboards, size matters. The standard ATX form factor is found in most home computers. The large size, and, consequently, the presence of a wide range of slots, allows you to improve the basic characteristics of the computer.

The smaller mATX version is less common. Possibilities for improvement are limited.

There is also mITX. This form factor is found in budget office computers. Improving performance is either impossible or makes no sense.

Often processors and boards are sold as a set. However, if the processor was purchased previously, it is important to ensure that it is compatible with the board. By looking at the socket, the compatibility of the processor and motherboard can be determined instantly.

Chipset

The connecting link of all components of the system is the chipset. Chipsets are manufactured by two companies: Intel and AMD. There is not much difference between them. At least for the average user.

Standard chipsets consist of a north bridge and a south bridge. The newest Intel models consist only of the north bridge. This was not done to cut costs, and it does not in any way reduce the chipset's performance.

The most modern Intel chipsets consist of a single bridge, since most of the controllers – including the DDR3 RAM controller, PCI-Express 3.0 and some others – are now located in the processor.

AMD analogues are built on a traditional two-bridge design. For example, the 900 series is equipped with a southbridge SB950 and a northbridge 990FX (990X, 970).

When choosing a chipset, you should start from the capabilities of the north bridge. Northbridge 990FX can support simultaneous operation of 4 video cards in CrossFire mode. In most cases, such power is excessive. But for fans of heavyweight games or those who work with demanding graphic editors, this chipset will be the most suitable.

The slightly stripped-down version of the 990X can still support two video cards at the same time, but the 970 model works exclusively with one video card.

Motherboard layout

The motherboard comprises:

· data processing subsystem;

· power supply subsystem;

· auxiliary (service) blocks and units.

The main components of the motherboard data processing subsystem are shown in Fig. 1.3.14.

1 – processor socket; 2 – front-side bus; 3 – north bridge; 4 – clock generator; 5 – memory bus; 6 – RAM connectors; 7 – IDE (ATA) connectors; 8 – SATA connectors; 9 – south bridge; 10 – IEEE 1394 connectors; 11 – USB connectors; 12 – Ethernet network connector; 13 – audio connectors; 14 – LPC bus; 15 – Super I/O controller; 16 – PS/2 port;

17 – parallel port; 18 – serial ports; 19 – Floppy Disk connector;

20 – BIOS; 21 – PCI bus; 22 – PCI connectors; 23 – AGP connectors or PCI Express;

24 – internal bus; 25 – AGP/PCI Express bus; 26 – VGA connector

FPM (Fast Page Mode) is a type of dynamic memory.
Its name corresponds to the principle of operation, since the module allows faster access to data that is on the same page as the data transferred during the previous cycle.
These modules were used on most 486-based computers and early Pentium-based systems around 1995.

EDO (Extended Data Out) modules appeared in 1995 as a new type of memory for computers with Pentium processors.
This is a modified version of FPM.
Unlike its predecessors, EDO begins fetching the next block of memory at the same time it sends the previous block to the CPU.

SDRAM (Synchronous DRAM) is a type of random-access memory that works fast enough to be synchronized with the processor's clock, eliminating wait states.
The microcircuits are divided into two blocks of cells so that while accessing a bit in one block, preparations are in progress for accessing a bit in another block.
If the time to access the first piece of information was 60 ns, all subsequent intervals were reduced to 10 ns.
From 1996, most Intel chipsets began to support this type of memory module, and it remained very popular until 2001.

SDRAM can operate at 133 MHz, which is almost three times faster than FPM and twice as fast as EDO.
Most computers with Pentium and Celeron processors released in 1999 used this type of memory.

DDR (Double Data Rate) was a development of SDRAM.
This type of memory module first appeared on the market in 2001.
The main difference between DDR and SDRAM is that instead of doubling the clock speed to speed things up, these modules transfer data twice per clock cycle.
Now this is the main memory standard, but it is already beginning to give way to DDR2.

DDR2 (Double Data Rate 2) is a newer variant of DDR that should theoretically be twice as fast.
DDR2 memory first appeared in 2003, and chipsets supporting it appeared in mid-2004.
This memory, like DDR, transfers two sets of data per clock cycle.
The main difference between DDR2 and DDR is its ability to operate at a much higher clock frequency, thanks to design improvements.
But the modified operating scheme, which makes it possible to achieve high clock frequencies, at the same time increases delays when working with memory.

DDR3 SDRAM (Double Data Rate Synchronous Dynamic Random Access Memory, third generation) is a type of random-access memory used in computing as main memory and video memory.
It replaced DDR2 SDRAM memory.

DDR3 has a 40% reduction in energy consumption compared to DDR2 modules, which is due to the lower (1.5 V, compared to 1.8 V for DDR2 and 2.5 V for DDR) supply voltage of the memory cells.
Reducing the supply voltage is achieved through the use of a 90-nm (initially, later 65-, 50-, 40-nm) process technology in the production of microcircuits and the use of Dual-gate transistors (which helps reduce leakage currents).

DIMMs with DDR3 memory are not mechanically compatible with the same DDR2 memory modules (the key is located in a different location), so DDR2 cannot be installed in DDR3 slots (this is done to prevent the mistaken installation of some modules instead of others - these types of memory are not the same according to electrical parameters).

RAMBUS (RIMM)

RAMBUS (RIMM) is a type of memory that appeared on the market in 1999.
It is based on traditional DRAM but with a radically changed architecture.
The RAMBUS design makes memory access more intelligent, allowing pre-access to data while slightly offloading the CPU.
The main idea used in these memory modules is to receive data in small packets but at a very high clock speed.
For example, SDRAM can transfer 64 bits of information at 100 MHz, and RAMBUS can transfer 16 bits at 800 MHz.
These modules did not become successful as Intel had many problems with their implementation.
RDRAM modules were used in the Sony PlayStation 2 and Nintendo 64 game consoles.
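
The trade-off of narrow-but-fast can be checked with the figures quoted above (64 bits at 100 MHz for SDRAM versus 16 bits at 800 MHz for RAMBUS); this is only peak arithmetic, not measured throughput.

```python
# Peak-bandwidth comparison using the figures from the text:
# SDRAM moves 64 bits per transfer at 100 MHz, RAMBUS moves 16 bits at 800 MHz.

def peak_mb_s(bits_per_transfer: int, clock_mhz: float) -> float:
    return bits_per_transfer / 8 * clock_mhz   # bytes per transfer * MHz = MB/s

print(f"SDRAM : {peak_mb_s(64, 100):.0f} MB/s")   # 800 MB/s
print(f"RAMBUS: {peak_mb_s(16, 800):.0f} MB/s")   # 1600 MB/s
```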

RAM stands for Random Access Memory: memory that is accessed by address. Successive accesses may use any addresses in any order, so any address (or "cell") can be reached independently of the others.

Static memory (SRAM) is memory built from static switching elements. It stores information as long as power is supplied. Typically, at least six transistors are required to store one bit in an SRAM cell. SRAM is used in small systems (up to a few hundred KB of RAM) and wherever access speed is critical, such as the cache inside processors or on motherboards.
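
The six-transistor figure explains why SRAM stays small. A back-of-the-envelope estimate, counting only the storage cells and ignoring decoders, sense amplifiers and tag storage:

```python
# Transistor count for SRAM storage alone, assuming the 6T cell mentioned above.

TRANSISTORS_PER_BIT = 6

def sram_transistors(capacity_kib: int) -> int:
    bits = capacity_kib * 1024 * 8
    return bits * TRANSISTORS_PER_BIT

for kib in (32, 256, 512):
    print(f"{kib:4d} KiB of 6T SRAM ~ {sram_transistors(kib):,} transistors")
# 512 KiB already needs roughly 25 million transistors just for the cells,
# which is why SRAM is reserved for small, speed-critical memories.
```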

Dynamic memory (DRAM) originated in the early 70s. It is based on capacitive elements: we can think of DRAM as an array of capacitors controlled by switching transistors. Only one transistor-capacitor pair is needed to store one bit, so DRAM offers a much higher capacity than SRAM (and is cheaper).
DRAM is organized as a rectangular array of cells. To access a cell, we need to select the row and column in which that cell is located. Typically this is implemented so that the high part of the address selects a row and the low part of the address selects a cell within that row (the "column"). Historically (because of the slow speeds and small IC packages of the early 70s), the address is supplied to the DRAM chip in two phases, with the row address and the column address sharing the same lines. First the chip receives the row address, and a few nanoseconds later the column address is transmitted on the same lines. The chip then reads the data and drives it onto its outputs. During a write cycle, the data is received by the chip together with the column address. Several control lines are used to control the chip:
  • RAS (Row Address Strobe) - latches the row address and also activates the entire chip;
  • CAS (Column Address Strobe) - latches the column address;
  • WE (Write Enable) - indicates that the access being performed is a write;
  • OE (Output Enable) - opens the buffers used to transfer data from the memory chip to the "host" (processor).
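
A tiny sketch of the two-phase row/column addressing described above. The 4096 x 1024 array geometry is an assumption chosen for the example; real chips differ.

```python
# Splitting a flat cell address into the row part (sent with RAS) and the
# column part (sent with CAS), for an assumed 4096 x 1024 DRAM array.

ROWS, COLS = 4096, 1024          # assumed array geometry

def split_address(addr: int) -> tuple[int, int]:
    """Split a flat cell address into (row, column); high bits form the row."""
    row, col = divmod(addr, COLS)
    assert row < ROWS, "address out of range"
    return row, col

addr = 1_234_567
row, col = split_address(addr)
print(f"address {addr} -> row {row} (sent with RAS), column {col} (sent with CAS)")
```
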
FP DRAM

Since each access to classic DRAM requires transferring two addresses, it became too slow for 25 MHz machines. FP (Fast Page) DRAM is a variant of classic DRAM in which the row address does not need to be transferred on every access. As long as the RAS line is active, the row remains selected, and individual cells in that row can be selected by sending only the column address. So, while the memory cells themselves are the same, the average access time is lower because only one address-transfer phase is needed in most cases.

EDO (Extended Data Out) DRAM is a variant of FP DRAM. In FP DRAM, the column address must remain valid during the entire data-transfer period: the data output buffers are active only while the CAS signal is asserted, so the data must be read from the memory data bus before the next column address is sent to the chip. EDO memory keeps the data in its output buffers even after the CAS signal returns to the inactive state and the column address is removed. The address of the next column can therefore be transmitted in parallel with reading the data, which allows successive accesses to overlap. While EDO memory cells are just as fast as FP DRAM cells, sequential access can be faster, so EDO should be somewhat faster than FP, especially for bulk access (as in graphics applications).

Video RAM can be based on any of the DRAM architectures listed above. In addition to the "normal" access mechanism described above, VRAM has one or two special serial ports, which is why it is often referred to as dual-port or triple-port memory. The serial ports contain registers that can hold the contents of an entire row. A whole row of the memory array can be transferred into such a register (or vice versa) in a single access cycle, and the data can then be read from or written to the serial register in chunks of any length. Because the register is made up of fast static cells, access to it is very fast, usually several times faster than access to the memory array. In the most typical application, VRAM serves as the screen memory buffer: the parallel port (the standard interface) is used by the processor, while the serial port streams pixel data to the display (or reads data from a video source).

WRAM is a proprietary memory architecture developed by Matrox and (who else, let me remember... - Samsung?, MoSys?...). It is similar to VRAM, but allows faster access by the host. WRAM was used on Matrox's Millenium and Millenium II graphics cards (but not on the modern Millenium G200).

SDRAM is a complete redesign of DRAM, introduced in the 90s. The "S" stands for Synchronous, since SDRAM implements a fully synchronous (and therefore very fast) interface. Inside, an SDRAM chip contains (usually two) DRAM arrays. Each array has its own page register, which is (a bit) like the serial-access register in VRAM. SDRAM works much more intelligently than regular DRAM. The entire circuit is synchronized with an external clock signal, and on each clock tick the chip receives and executes a command transmitted on the command lines. The command-line names are the same as in classic DRAM chips, but their functions are only loosely related to the originals. There are commands for transferring data between the memory array and the page registers, and for accessing data in the page registers. Access to a page register is very fast - modern SDRAMs can transfer a new word of data every 6..10 ns.

Synchronous Graphics RAM (SGRAM) is a variant of SDRAM designed for graphics applications. The hardware structure is almost identical, so in most cases SDRAM and SGRAM are interchangeable (see the Matrox G200 cards - some use SD, others SG). The difference lies in the functions performed by the page register. SGRAM can write multiple locations in a single cycle (which allows very fast color fills and screen clearing), and it can write only selected bits of a word (the bits are chosen by a bit mask stored in the interface circuit). SGRAM is therefore faster in graphics applications, although it is not physically faster than SDRAM in "normal" use. The additional features of SGRAM are used by graphics accelerators; I think the screen-clearing and Z-buffer capabilities in particular are very useful.
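
The write-per-bit idea behind those masked writes can be expressed as plain bit arithmetic; this is only a sketch of the concept, not the chip's actual circuitry.

```python
# Masked (write-per-bit) update: only the bit positions selected by the mask
# are replaced with new data, the rest of the stored word is left untouched.

def masked_write(stored: int, data: int, mask: int) -> int:
    """Return the new word: masked bits come from `data`, the rest from `stored`."""
    return (stored & ~mask) | (data & mask)

old  = 0b1010_1010
new  = 0b0000_1111
mask = 0b0011_0011          # only these bit positions may change
print(bin(masked_write(old, new, mask)))   # 0b10001011
```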

RAMBUS (RDRAM)

RAMBUS (a trademark of RAMBUS, Inc.) began development in the 80s, so it is not new. Modern RAMBUS technology combines old but very good ideas with today's memory production techniques. RAMBUS is based on a simple idea: take any good DRAM, build a static buffer into the chip (as in VRAM and SGRAM), and provide a special, electronically configurable interface running at 250..400 MHz. The interface is at least twice as fast as SDRAM, and while random access times are typically slower, sequential access is very, very fast. Remember that when 250 MHz RDRAMs were introduced, most DRAMs ran at 12..25 MHz. RDRAM requires a special interface and very careful physical placement on the PCB. Most RDRAM chips look very different from other DRAMs: all the signal lines are on one side of the package (so that they are the same length), with only 4 power lines on the other side. RDRAMs are used in graphics cards based on Cirrus 546x chips, and we will soon see RDRAMs used as main memory in PCs.

Hard drive design.

A hard drive contains a set of platters, most often metal disks coated with a magnetic material (gamma ferrite oxide, barium ferrite, chromium dioxide, etc.), mounted together on a spindle (shaft, axis).

The disks themselves (approximately 2 mm thick) are made of aluminum, brass, ceramic or glass.

Both surfaces of each disk are used for recording. A drive typically contains 4-9 platters. The spindle rotates at a high constant speed (3600-7200 rpm).
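
One consequence of the spindle speed is rotational latency: on average the drive must wait half a revolution for the wanted sector to come under the head. A small sketch using the speeds quoted above:

```python
# Average rotational latency is the time for half a revolution.

def avg_rotational_latency_ms(rpm: int) -> float:
    seconds_per_rev = 60.0 / rpm
    return seconds_per_rev / 2 * 1000   # half a revolution, in milliseconds

for rpm in (3600, 5400, 7200):
    print(f"{rpm} rpm -> average rotational latency {avg_rotational_latency_ms(rpm):.2f} ms")
# 3600 rpm -> 8.33 ms, 7200 rpm -> 4.17 ms
```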

The rotation of the disks and the radial movement of the heads are driven by two electric motors.

Data is written or read using write/read heads, one on each surface of the disk. The number of heads is equal to the number of working surfaces of all disks.

Information is recorded on the disk in strictly defined places: concentric circles called tracks. The tracks are divided into sectors, and one sector holds 512 bytes of information.

Data exchange between RAM and the hard disk drive is carried out in whole clusters. A cluster is a chain of consecutive sectors (1, 2, 3, 4, ...).
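
Because a file always occupies a whole number of clusters, the tail of its last cluster is wasted ("slack"). A sketch of that arithmetic: the 512-byte sector comes from the text, while the 8-sectors-per-cluster figure is only an example.

```python
# Cluster arithmetic: whole clusters per file, plus the wasted slack space.
import math

SECTOR_BYTES = 512
SECTORS_PER_CLUSTER = 8                      # assumed cluster size (4 KiB)
CLUSTER_BYTES = SECTOR_BYTES * SECTORS_PER_CLUSTER

def clusters_and_slack(file_bytes: int) -> tuple[int, int]:
    clusters = math.ceil(file_bytes / CLUSTER_BYTES)
    slack = clusters * CLUSTER_BYTES - file_bytes
    return clusters, slack

for size in (100, 5000, 40960):
    clusters, slack = clusters_and_slack(size)
    print(f"{size:6d}-byte file -> {clusters} cluster(s), {slack} bytes of slack")
```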

A dedicated motor uses an actuator arm to position the read/write heads over a given track (moving them in the radial direction).

As the disk rotates, the desired sector passes under the head. All heads move simultaneously, so they read information from identically numbered tracks on the different platters.

The set of tracks with the same number on all the platters of a hard drive is called a cylinder.
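
The cylinder/head/sector scheme leads to the classic CHS arithmetic shown below. The geometry values are illustrative, not those of any specific drive.

```python
# Classic cylinder/head/sector (CHS) arithmetic with an example geometry.

CYLINDERS, HEADS, SECTORS_PER_TRACK = 1024, 16, 63
SECTOR_BYTES = 512

capacity = CYLINDERS * HEADS * SECTORS_PER_TRACK * SECTOR_BYTES
print(f"capacity: {capacity / 2**20:.0f} MiB")          # ~504 MiB

def chs_to_lba(c: int, h: int, s: int) -> int:
    """Convert a CHS triple (sectors are numbered from 1) to a linear block address."""
    return (c * HEADS + h) * SECTORS_PER_TRACK + (s - 1)

print(chs_to_lba(0, 0, 1))    # 0 - the very first sector on the disk
print(chs_to_lba(2, 5, 17))   # 2347 - a sector further in
```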

The read/write heads fly just above the surface of the platter. The closer a head can be brought to the surface without touching it, the higher the permissible recording density.

Hard drive interfaces.

IDE (ATA, Advanced Technology Attachment) is a parallel interface for connecting drives; after SATA appeared it was renamed PATA (Parallel ATA). It was previously used to connect hard drives, but was supplanted by SATA and is now used mainly to connect optical drives.

SATA (Serial ATA) is a serial interface for data exchange with drives; a 7-pin connector is used for the data connection. Like PATA, the original revision is dated and is now used mostly for optical drives. The first SATA standard (SATA150) provided a throughput of 150 MB/s (1.2 Gbit/s of effective data).

SATA 2 (SATA300). The SATA 2 standard doubled the throughput to 300 MB/s (2.4 Gbit/s of effective data) with a 3 Gbit/s line rate. SATA and SATA 2 devices are compatible with each other, although for some models the mode must be set manually by moving jumpers.

SATA 3, although according to the specification it is more correct to call it SATA 6Gb/s. This standard doubled the data transfer speed to 6 Gbit/s (600 MB/s). Other useful additions include improved NCQ (Native Command Queuing) control and commands for continuous data transfer for a high-priority process. Although the interface was introduced in 2009, it is not yet particularly popular among manufacturers and is not often found in stores. Besides hard drives, this standard is used in SSDs (solid-state drives). It is worth noting that in practice the SATA versions hardly differ in real transfer speed for hard disks: the sustained read/write speed of a hard disk does not exceed about 100 MB/s, so the extra bandwidth mainly benefits transfers between the controller and the drive's cache.
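
The relationship between the Gbit/s line rates and the MB/s figures above comes from the 8b/10b encoding used on the SATA link (10 bits on the wire carry 8 bits of data). A short conversion sketch:

```python
# Converting SATA line rate to effective throughput, accounting for 8b/10b encoding.

def sata_effective_mb_s(line_rate_gbit_s: float) -> float:
    data_gbit_s = line_rate_gbit_s * 8 / 10       # strip the 8b/10b overhead
    return data_gbit_s * 1000 / 8                 # Gbit/s -> MB/s

for name, rate in (("SATA", 1.5), ("SATA 2", 3.0), ("SATA 3 (6Gb/s)", 6.0)):
    print(f"{name:15s}: {sata_effective_mb_s(rate):.0f} MB/s")
# Prints 150, 300 and 600 MB/s, matching the figures quoted above.
```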

SCSI (Small Computer System Interface) - a standard used in servers where increased data transfer speed is required.

SAS (Serial Attached SCSI) is the generation that replaced the SCSI standard, using serial data transmission. Like SCSI, it is used in servers and workstations. SAS controllers are backward compatible with SATA drives (a SATA drive can be attached to a SAS controller, but not the other way around).

CF (Compact Flash) – Interface for connecting memory cards, as well as for 1.0 inch hard drives. There are 2 standards: Compact Flash Type I and Compact Flash Type II, the difference is in thickness.

FireWire (IEEE 1394) is an alternative to USB 2.0 that is often faster in practice. It is used to connect portable hard drives. FireWire 400 supports speeds up to 400 Mbit/s, but the real read/write throughput is lower, topping out at about 40 MB/s.

Types of video cards

Modern computers (and laptops) come with various types of video cards, on which the performance of graphics programs, video playback and so on directly depends.

There are currently three basic types of adapters in use, and they can also be combined with each other.

Let's take a closer look at the types of video cards:

  • integrated;
  • discrete;
  • hybrid;
  • two discrete;
  • Hybrid SLI.

An integrated graphics card is the inexpensive option. It has no video memory or GPU of its own: through the chipset, graphics are processed by the central processor, and system RAM is used in place of video memory. Such an arrangement significantly reduces the performance of the computer in general and of graphics processing in particular.

Integrated graphics are often used in budget PC or laptop configurations. They are adequate for office applications and for viewing and editing photos and video, but playing modern games is impossible; only older titles with minimal system requirements are available.

Computer