What is a data center? (Data processing center)

Most published information covers already implemented data center projects or the advanced technologies used to build them. For some reason, questions about the rationale for choosing a data center, about competently drawing up the technical specifications for it, and about making effective use of all the capabilities built into a data center get lost. To the best of my ability, I will try to cover these issues in more detail.

Scope of the document and list of issues to be considered

This document is intended to provide a set of necessary information for specialists involved in the creation and operation of data centers, server rooms and computer rooms.

The document covers:

  • Problems that arise during the design, construction and operation of data centers, and possible solutions to these problems
  • Recommendations on the use of modern standards, together with a brief description of them
  • The main design errors and operational problems, their consequences, and possible ways to troubleshoot and resolve them
  • The rules for creating successful IT projects (covered separately)
  • The most important requirements for the main elements of a data center, with an explanation, where possible, of the reasons behind these requirements and the consequences of not meeting them
  • The main trends in data center construction, along with some statistics on foreign and Russian data centers

Of course, this is not a comprehensive document; within the framework of a single document it is impossible to cover all the issues that arise at the stages of justification, design, commissioning and operation. Therefore, I will try, where possible, to highlight the key points of the entire data center life cycle and to pay special attention to issues that, in my opinion, are least well described in the literature and on the Internet. The fact that some issues have not received proper coverage does not make them less important, especially since some of them, as will be shown below, are hushed up for very specific reasons.

First, let me clarify the circle of specialists at whom this document is aimed. These are specialists from organizations that do not yet have a data center but want to build one; specialists who have decided to build a data center but do not know what to look for when writing the technical specifications or how to choose a partner; and specialists who have already built a data center but, while operating it, are trying to maintain the declared characteristics and reduce costs. The document will probably also be of interest to equipment suppliers and data center developers, at least in terms of understanding their clients' problems.

Although the document considers most of the issues that arise when justifying the choice of a data center and during its design, construction and operation, it will not contain instructions on the choice of particular equipment, nor on the mandatory use of particular technologies. The fact is that new equipment, solutions and technologies appear every year, often distinguished, in essence, by minor changes or by the implementation of long-known solutions at a new technical level. Remember: "Knowledge of a few principles frees us from knowledge of many particulars." Based on this, I will try, first of all, to talk about the principles of designing and operating complex computing systems, which apply perfectly to a data center.

In order to discuss the problems of building and operating a data center, you need to define some terms and understand what a data center is. Therefore, first I will try to define the term “data center” itself.

Definition of the term "data center"

Recently, it has become very fashionable to talk about creating data centers. Almost every self-respecting company declares that one of its specializations is the construction of data centers (data processing centers). Typically, such companies point to positive reviews, completed projects, and so on.

Let's first try to figure out what a data center is, how it differs from just a good server room, and what properties allow a data center to be called a data center. We will also try to understand what kind of work requires special attention when building a data center and where you can save money without losing quality. Analyzing all this will not only help create a better data center but will also be useful when building other data storage and processing facilities.

If you turn to Wikipedia, a data processing center (DPC), or data storage and processing center, is a specialized building for placing (hosting) server and communication equipment and connecting subscribers to Internet channels. Another name for it is "data center" (from the English data center).

Comment: if the term "data center" appears in the text, it means that a document is being quoted or retold in which exactly that term is used, rather than the term "data processing center".

In fact, such an interpretation does not, to put it mildly, reveal the whole essence of what a data center is. Much closer in meaning is this interpretation: "A data center is a building (or part of one) in which integrated solutions for storing, processing and distributing data are deployed, with an IT infrastructure that allows it to provide its functions to defined criteria."

In any case, when defining a data center, one should not emphasize the presence of hosting and Internet access: they may well be present, but their absence is not critical for a data center. In the form given above, the refined formulation corresponds most closely to the concept of a data center set out in the TIA-942 standard, although, in my opinion, the wording "A data center is a building, a part of one, or a group of buildings, in which..." would be even more accurate, because when a data center is implemented with duplicated subsystems it may well turn out to be geographically distributed between several buildings. Sometimes it is also remembered that operating a data center requires developing a set of organizational procedures and constantly training staff. But the main thing is simply to understand that a data center is not only a building, but also a set of engineering solutions, and beyond that, the provision of the necessary services and the availability of qualified personnel.

Historically, data centers (the name "data center" appeared in Russia later) grew out of the large server rooms owned by IT companies in the 1990s. This qualitative change was facilitated by the emergence of client-server technology, new cabling standards, and hierarchical storage management. The main features of data centers had taken shape by 2000, when data centers came into great demand for hosting Internet servers for organizations that lacked the capability to support them in-house, as well as for running the growing databases of various organizations in their computer centers.

Currently, in St. Petersburg alone there are more than 30 data centers. In fact, there are even more, because some organizations have built infrastructures that fit the concept of a data center.

Regarding the TIA-942 standard, it should be noted that the document deals in detail (mainly in the form of statements of requirements) with the construction of the engineering subsystems, but as soon as you try to choose a specific design for building a data center to perform specific tasks, questions arise. The TIA-942 standard introduces the concept of TIER levels: it considers four levels associated with varying degrees of readiness (in TIA-942 terminology) of the data center's facility infrastructure. Higher levels not only correspond to higher availability but also entail higher infrastructure costs. In essence, the TIA-942 standard divides (classifies) data centers only by level of reliability (it is sometimes written that it does so by level of availability, but that term, although close, is still narrower than "reliability").

Data center classification

The very concept of a data center is rather uninformative on its own: data centers differ not only in size but also in the tasks assigned to them and in the level (quality) at which they are able to provide their basic functions. Moreover, depending on their orientation, different data centers may regard different functions as their main ones.

On closer inspection, quite a few criteria can be identified by which data centers can be divided. It is essentially these criteria that will be decisive in the operation of a data center, or that will carry a set of properties allowing a certain group of data centers to be singled out.

Data centers can be divided by:

  • Purpose: public versus non-public (the term "corporate" is more often used) data centers;
  • Reliability of data storage (more precisely, the combination of reliability and availability).

There are also two separate groups: disaster-resistant data processing centers (DPCs) and "trash data centers". The name comes from the English word "trash" (garbage); these are usually small data centers in which cooling is implemented only through natural air exchange.

Such "trash" data centers for the most part do not fully meet the requirements for data centers, but they are less expensive and more environmentally friendly, and renting server racks in them is significantly cheaper.

The division into public and non-public data centers is clear enough, and the approach to designing them differs. When building a data center for itself, an organization knows quite well which of the basic properties it needs and where it can save money; hence the possibility of fulfilling the requirements selectively. With public data centers everything is somewhat more complicated: if they want to obtain certification in order to increase the number of their clients, then at a minimum they will have to follow all the mandatory recommendations.

If we talk about reliability, we need to start with the term "mean time between failures". It is by no means a given that a system will stop functioning after one of its elements fails. If, upon the transition of one of the system's elements from a working to a non-working state, the system as a whole becomes inoperable, we speak of a failure. If the system nevertheless remains operational, we speak of a fault. The moments and frequency of occurrence of faults and failures are described by the methods of probability theory and are not considered in this document. The only thing to remember is that only by analyzing the reliability diagram of the system, and having numerical data on the time between failures for each of its component parts, can we speak about the availability or performance level of the entire system.

The percentage of time during the year that the system is up and/or down (%UPTIME and DOWNTIME) are directly related: downtime is the total time the system is out of service over the year. These terms are often used when discussing the various levels (Tiers) of a data center, but quoting exact figures for the different levels is not correct, because the spread of fault-tolerance indicators among data centers of the same level can be large. In the appropriate place in this document it will be shown that all the figures characterizing downtime at the various data center levels should be taken with a grain of salt and cannot really be relied upon. In short, the list of the most characteristic features of the various data center levels can be summarized in a simple table.

Data center class (level): most characteristic features

Tier I (basic level, low fault tolerance). Susceptible to disruptions of normal operation from both planned and unplanned actions. It has power distribution and computer cooling systems, but may or may not have raised floors, a UPS or a generator. Even if UPS units or generators are present, they are single-module systems with many single points of failure. Every year the infrastructure has to be completely shut down to perform scheduled maintenance and preventive repairs, and urgent needs may require more frequent outages. Operational errors or spontaneous failures of facility infrastructure components will cause interruptions in the normal operation of the data center. Typical user: medium and small businesses; a data center serving internal company processes. Building type: shared with other tenants ("with neighbors"). Power inputs: one.

Tier II (with redundancy). Has redundant components and is somewhat less susceptible to disruptions from planned and unplanned actions than a basic data center. There is a raised floor, a UPS and generators, but the design is rated N+1 ("Need plus One"), which means a single distribution path throughout the facility. Maintenance and repair of the critical power path and other parts of the facility infrastructure require shutting down data processing. Typical user: medium and small businesses; the data center operates in "5x8" mode. Building type: shared with other tenants. Power inputs: one.

Tier III (allows concurrent maintenance). Allows any planned activity on the facility infrastructure to be carried out without disrupting the normal operation of the computer-room equipment. Planned activities include preventive and scheduled maintenance, repair and replacement of components, addition or removal of components affecting capacity, testing of components and systems, and so on. Sufficient capacity and distribution must be available to carry the load on one path while repairs or testing are performed on the other. Unplanned actions, such as operational errors or spontaneous failures of facility infrastructure components, will still cause interruptions in the normal operation of the data center. Tier III facilities are often designed with the prospect of upgrading to Tier IV. Typical user: companies serving both internal and external customers in 7x24 mode. Building type: freestanding. Power inputs: one active, one standby.

Tier IV (high fault tolerance). Has multiple active power distribution and cooling paths and provides an increased degree of fault tolerance thanks to the presence of two paths. Provides multiple paths for supplying power to all types of computing and telecommunications equipment and requires all such equipment to have multiple power inputs; the equipment continues to operate when one power input is disconnected. The facility infrastructure allows any planned activity to be carried out without disrupting the operation of the critical load, and it can withstand at least one unplanned worst-case failure (or event) without impacting the mission-critical load. Has two separate UPS systems, each with N+1 redundancy. Typical user: global companies providing their services 24x365. Building type: freestanding. Power inputs: two active.

As an example, here is the correspondence between availability and the time the system is out of service per year. I will not tie the levels to specific numbers because, as I said above, the spread of availability figures within a single level can be quite large.

Reliability solutions, in order of increasing availability (%UPTIME) and decreasing downtime per year (DOWNTIME):

  • Without redundancy, without a generator, without a backup input
  • Without redundancy or a generator, but with a backup input
  • With partial "cold" redundancy, without a generator but with a backup input
  • With "hot" redundancy of the most important parts and "cold" redundancy of almost everything else, with a generator and a backup input
  • With "hot" redundancy of the most critical parts and "cold" redundancy of almost everything else, with the generator and the backup input also in hot standby
  • 99.999% availability (5.26 minutes of downtime per year): full redundancy of everything, always two paths (connections), often with duplication
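
To show how these availability percentages convert into annual downtime (the 5.26 minutes in the last row, or the per-tier figures quoted later in this document), here is a minimal Python sketch; the percentages are just the sample values discussed in the text, not figures prescribed by any standard.

```python
HOURS_PER_YEAR = 365 * 24  # 8760 hours

def downtime_per_year(availability_percent: float) -> float:
    """Annual downtime, in hours, for a given availability percentage."""
    return HOURS_PER_YEAR * (1.0 - availability_percent / 100.0)

for availability in (99.671, 99.749, 99.982, 99.995, 99.999):
    hours = downtime_per_year(availability)
    print(f"{availability:7.3f}% uptime -> {hours:6.2f} h ({hours * 60:6.1f} min) of downtime per year")
```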

An entry such as "without redundancy" does not mean that, in the event of a failure, you have to wait for a replacement unit to be ordered and delivered by the supplier. Keeping a calculated stock of spare parts, and thereby reducing the MTTR (mean time to repair), also significantly affects downtime.
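
To illustrate why a spare-parts stock matters, recall the standard steady-state availability formula for a single element: A = MTBF / (MTBF + MTTR). Below is a minimal sketch, with purely assumed MTBF and MTTR values, comparing "wait for the supplier to deliver a replacement" against "take a spare from the on-site stock":

```python
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability of a single element: MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

MTBF = 50_000.0  # assumed mean time between failures, hours

# Assumed repair times: two weeks waiting for the supplier vs. four hours with a spare on site
for label, mttr in (("unit ordered from the supplier", 14 * 24.0), ("spare part kept on site", 4.0)):
    a = availability(MTBF, mttr)
    downtime_hours = (1.0 - a) * 8760.0
    print(f"{label:30s}: MTTR = {mttr:5.1f} h -> availability {a * 100:.4f}%, ~{downtime_hours:.1f} h of downtime per year")
```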

One more important note: the overall level of a data center cannot be higher than the lowest level of any of its component parts. On the other hand, you need to remember that not all recommendations in the standards are mandatory, and if you know exactly what their violation affects and how, you can usually save some money when building a data center.

Example

Developers striving to improve the energy efficiency of a data center (measured as the ratio of total facility power to IT equipment power) have long pushed for the ability to raise operating temperatures. The idea is sound: in practice, the service life of most computing equipment in a data center is 3-4 years, although it should be noted that the equipment responsible for power supply is usually replaced less frequently, given proper maintenance. After this period, either the equipment is replaced, or the most critical applications are moved to new equipment. Raising the room temperature by a few degrees has little real effect on the likelihood of equipment failure during this period, but it significantly reduces cooling losses, thereby increasing energy efficiency. There are now trends, for some classes of data centers, toward further increasing the permissible temperature.
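
The energy-efficiency ratio mentioned here (total facility power divided by IT equipment power) is commonly known as PUE, Power Usage Effectiveness. Below is a minimal sketch with assumed power figures, showing how trimming cooling losses (for example, by allowing a higher room temperature) improves the ratio:

```python
def pue(it_power_kw: float, cooling_kw: float, other_losses_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return (it_power_kw + cooling_kw + other_losses_kw) / it_power_kw

IT_LOAD_KW = 500.0  # assumed IT equipment load

# Assumed cooling consumption before and after raising the permissible room temperature
print("PUE before:", round(pue(IT_LOAD_KW, cooling_kw=350.0, other_losses_kw=75.0), 2))  # -> 1.85
print("PUE after: ", round(pue(IT_LOAD_KW, cooling_kw=250.0, other_losses_kw=75.0), 2))  # -> 1.65
```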

Therefore, it is very important to know why the standards contain certain requirements, and what will happen if you deviate from the standard in one direction or another. All this can be sorted out only by analyzing the requirements for the individual parts of the data center. It is also necessary to understand which standards regulate the requirements for data center components, whether they contradict each other, and whether these standards are worth following at all. The next chapter is therefore devoted to the standards and their requirements.

Standard requirements for data center components

First, you need to decide on the requirements: which standards need to be followed and, most importantly, what will happen if they are slightly "violated", for better or for worse. At the very beginning of the chapter I will express a somewhat seditious thought: you need to know the standards so that, when necessary, you can deviate from them. More precisely, it is reasonable to set some of the requirements for your specific data center higher or lower than the standard requirements for the data center class you have chosen. Having written this line, I realized that I now surely have to name this "smart" standard whose requirements must be followed when developing a data center. But no, it is not that simple. Documents proudly bearing the word "Standard" in their title are in fact most often the generalized experience of the group of experts who created them.

Their recommendations are not directly tied to availability (%UPTIME) or downtime (DOWNTIME) figures. Following the requirements of the standards really does improve these indicators, but by how much is a mystery shrouded in darkness: it is practically impossible to take into account all the factors that influence the decrease or increase of these indicators, and even more so to obtain data for all the equipment you specifically use in your data center. So what should you do? First of all, having prioritized the requirements for the data center you are creating, take one of the standards as a basis and then follow its requirements as closely as possible.

In my opinion, the search for a standard that suits you should start with the previously mentioned TIA-942, "Telecommunications Infrastructure Standard for Data Centers". The first version of the standard was published in 2005. It details requirements for structures, power supply, heat removal, safety, redundancy, maintenance and commissioning procedures.

In June 2010, Building Industry Consulting Service International Inc. (BICSI) published a new standard, BICSI 002-2010: Data Center Design and Implementation Best Practices. It reflects the growing complexity of building computer centers and the need for companies and organizations to understand the requirements for power, mechanical loads and telecommunications when designing computing center infrastructure.

Which standard is better to use? What are their differences? And how do you get certified? After all, there are standards from other organizations as well. For example, the main distinguishing feature of certification against the Uptime Institute standards is that certified specialists from that organization must verify on site that the requirements set out in its standards have been implemented. In mid-2010, the Uptime Institute released another standard, "Operational Sustainability", regulating requirements for the operations service; it was precisely these requirements that were lacking in TIA-942. And although by jointly fulfilling the requirements of TIA-942 and Operational Sustainability it is already possible to formulate the requirements for a data center quite accurately, in practice builders of new data centers more often refer to the TIA-942 standard. The fact is that each of the standards was compiled by a different organization, and they differ from each other in many details. Moreover, according to Uptime Institute specialists, their division into availability levels is in no way functionally related to the TIA-942 levels; they evaluate the ability of computer centers to maintain functionality in the face of failures and accidents. To avoid confusion, Uptime Institute experts suggest denoting availability levels in their interpretation by the Roman numerals I, II, III and IV.

Certifying a data center is quite difficult. According to the Uptime Institute website (http://uptimeinstitute.com), as of the end of May 2012 only 1 data center actually provides Tier IV (i.e. not only the documentation and the constructed building with its technical facilities, but also the level of operation); certification of the constructed facility for Tier IV has been carried out for 6 data centers, and certification of design documentation for Tier IV data centers has been obtained for 22 facilities. There are currently no Russian data centers among the Tier IV ones. There are also not very many Tier III data centers: only 4 data centers fully meet the Tier III requirements for Operational Sustainability, and none of them are Russian. Documentation and premises correspond to Tier III in 5 Russian data centers (4 Design Documents and 1 Constructed Facility).

During 2012, the TIA-942-A standard will be published, incorporating the changes and additions of the addenda TIA-942-1 and TIA-942-2. Unfortunately, the new version of the standard has changed greatly. The new TIA-942-A standard will address only the topic of cabling and will not be as comprehensive as TIA-942 was; that is, it will mostly regulate only the construction of cabling systems. The section on energy efficiency will likely address this topic only from the perspective of the cabling system and the use of fiber optics as a "green" medium.

The following is a list of the main changes included in the current draft of TIA-942-A (according to the developers' preliminary statements).

  • TIA-942-A is aligned with the TIA-568-C series of standards with respect to the topology, terminology and environmental classifications presented in 568-C.0, as well as the component specifications presented in TIA-568-C.2 and C.3;
  • The addenda TIA-942-1 and TIA-942-2 are included in the TIA-942-A standard;
  • Grounding information has been moved from TIA-942-A to TIA-607-B;
  • Administration information will be moved to the TIA-606-B standard;
  • Most of the information related to telecommunications cabinets and server racks, and to the separation of power and telecommunications cabling, will be moved to the TIA-569-C standard;
  • External cabling information has been moved to TIA-758-B;
  • The 100-meter length limitation on horizontal fiber optic cabling has been lifted;
  • Category 3 and Category 5e cables should no longer be used in horizontal cabling systems. The working draft of the standard allows the use of Category 6 and Category 6A balanced twisted-pair cables in horizontal cabling; Category 6 and Category 6A can also be used in backbone cabling;
  • The use of OM3 and OM4 multimode fiber optic cables (multimode optical fiber with a core/cladding diameter of 50/125 μm, optimized for operation with laser-based light sources at a wavelength of 850 nm) has been approved for horizontal and backbone cabling; OM1 and OM2 cable types are no longer permitted;
  • For one- and two-fiber connections, LC-type fiber optic connectors should be used; for multi-fiber connections, MPO-type connectors;
  • The data center topology includes an intermediate distribution area (IDA);
  • A section on energy efficiency has been added to the standard;
  • The terms "equipment outlet" (EO) and "external network interface" (ENI), borrowed from the international standard ISO/IEC 24764, have been added.

The "Operational Sustainability" standard essentially complements TIA-942, especially in terms of data center operation.

The Operational Sustainability standard describes the requirements for ensuring the sustainable operation of data centers and minimizing the associated risks. As is known, the earlier, widely used "Tier Standard: Topology" regulated the technical parameters of a data center necessary to achieve a certain level of reliability. The peculiarity of the new standard is that it takes into account the human factor in the sustainable operation of the data center. And this is of great importance, since the share of operational errors associated with this factor reaches 70%, of which slightly more than 40% are associated with errors by operations-service managers. To minimize these errors, it is necessary to conduct targeted work with personnel, improve their qualifications, and take measures to retain qualified staff.

If we consider the standards from BICSI, we can see that its approach to assessing sustainability levels differs from that of other organizations.

BICSI 002-2010 defines its own system for assessing sustainability levels. According to the association, the developers of the standard set themselves the goal of ensuring that data centers are designed and built with the long-term perspective of their operation in mind. The main sections of the document are:

  • Data center layout
  • Site selection
  • Architectural solutions
  • Building construction
  • Electrical systems
  • Mechanical systems
  • Firefighting
  • Safety
  • Building automation systems
  • Telecommunications
  • Information Technology
  • Commissioning
  • Operation and Maintenance
  • Design Process
  • Reliability

Summing up on data center construction standards: the developers of the general data center standards do not contradict one another in their requirements and cross-references when it comes to the basic data center levels. Commercial data centers, due to their specific nature, should satisfy (and preferably be certified against) all the requirements of the standard they took as a basis. Not all recommendations affect the main quality of a data center, namely ensuring a given level of availability, so non-commercial data centers may in some cases ignore some requirements. Moreover, certification is not only expensive; it also does not directly affect the actual performance of the data center. Even after a data center has been commissioned, it is still possible to make changes, not only at the maintenance (support) level but also at other levels, in an attempt to meet the requirements of one of the standards and obtain certification.

The Uptime Institute in its time identified four levels associated with varying degrees of readiness of the data center equipment infrastructure. Although these levels are related to availability, it is probably more correct to speak of TIER levels, even though the term "Tier" itself simply translates as "level". It was not without reason that I explained the concept of "level" above without giving numerical figures for data center availability at each level: numerical values have been obtained only from the analysis of completed projects. Here is some data from a document developed by the Uptime Institute, published in its newsletter "Industry Standard Tier Classifications Define Site Infrastructure Performance".

Parameter (values for data center levels 1, low fault tolerance, through 4, high fault tolerance):

  • Building type: with neighbors / with neighbors / freestanding / freestanding
  • Number of power inputs: 1 / 1 / one active, one standby / two active
  • Initial power, W per m²: 215-323 / 430-537 / 430-645 / 537-860
  • Maximum power, W per m²: 215-323 / 430-537 / 1075-1615 / 1615+
  • Uninterrupted air conditioning: no / no / possibly / yes
  • Raised floor height, m: 0.3 / 0.45 / 0.75-0.9 / 0.75-0.9
  • Raised floor load, kg per m²: 415 / 488 / 732 / 732+ (1000+ according to the 2005 standard)
  • Total duration of failures per year: 28.8 h / 22 h / 1.6 h / 0.4 h
  • Data center availability: 99.671% / 99.749% / 99.982% / 99.995%
  • Commissioning period, months: 3 / 3-6 / 15-20 / 15-20
  • Year a project of this level was first implemented: 1965 / 1970 / 1985 / 1995

General conclusion on the use of standards:

  • The use of the TIA-942 standard, with the latest additions (for example, the "Operational Sustainability" standard), should be considered fundamental;
  • The new TIA-942-A standard (approved April 24, 2012) addresses only the topic of cabling systems and will no longer be as comprehensive as TIA-942 was;
  • When building a data center, you should be guided not only by the standards but also by common sense, which allows you to save significantly without compromising the qualities you need most;
  • Certification is more of a necessity for a commercial data center; a corporate data center may well do without it. Of course, if the data center was created on the basis of standards, then all deviations from the recommendations must be justified;
  • After reading the standards and, most importantly, understanding which one to take as a basis and which requirements to emphasize in further development, you cannot assume that your work with standards is finished. Before moving on to the next stage, it is imperative to re-read the good old, though now largely forgotten, GOST 34 series. It does not matter that they have not been updated for many years; they contain a detailed treatment of the pre-design stages. They do not contain the now-familiar words "business processes" or "process approach", but they do have the concept of an "information model", which quite correctly replaces them. These documents will help you especially at the technical specification stage. Of course, you need to apply them creatively rather than follow literally every recommendation, but you should read them carefully.

Procedure for building a data center

Oddly enough, the greatest contribution to the success or failure of a future project is made by its initial stages. According to world statistics for the IT industry, only one project out of three becomes successful. And if you take a more rigorous approach and judge the success of a project by:

  • the ability to perform the stated functions with the required quality
  • complete the work within the planned time
  • not exceeding the original project budget
  • absence of emergency work at various stages of the project
  • no need to immediately begin work on modernizing the project.

then the picture becomes even bleaker: probably no more than 20% of projects will fall under the definition of "successful".

There are many reasons for a project to fail: the wrong policy of the project management (and it is precisely a policy, since resolving controversial issues most often means finding compromises), lack of proper support from the head of the organization, poor elaboration of the technical specifications and, as a result, a large amount of unplanned work, weak participation of specialists from the organization for which the project is being carried out, and all sorts of force majeure circumstances.

If the possibility of failure hangs over almost every project, what about the cheerful statements from many companies about dozens of successful projects? First, we need to put everything in its place by defining the term "project".

A project (according to Wikipedia) is a unique (as opposed to routine operations) activity that has a beginning and an end in time, aimed at achieving a predetermined result or goal, creating a specific, unique product or service, under given constraints on resources and time, as well as on quality and the acceptable level of risk. Perhaps this definition can be simplified for greater specificity: a project is a set of related tasks, activities or work aimed at achieving a planned goal, which is usually unique and non-repetitive in character. The main thing is that a project is always unique (at least for the people carrying it out). Therefore, everything the performers describe as a successful project is in fact a successful implementation, i.e. the deployment of a ready-made solution. The percentage of successful implementations is significantly higher than that of successful projects.

And while for programmers writing any complex program is always a project, in the field of infrastructure construction implementations are also possible. It is quite difficult to draw the line where an implementation turns into a project. For example, if a small hardware and software complex is created to automate some remote site, and this is not the first time the developer has done this, and the number of differences from previously created complexes, both in the hardware and in the set of installed programs, is minimal, then this is an implementation, and it has a fairly high chance of success. If differences appear in the form of a significant amount of new hardware, the installation of new complex software, or new requirements that cannot be met within the framework of previous solutions, then the creation of such a hardware and software complex becomes a project. That is, at the start of the work the project performer is always in a state where the goals are defined, the solutions are uncertain, and the successful solution of the problem is in question. Let me explain why I have dwelt in such detail on what seems to be a purely terminological issue.

The fact is that there are two approaches to performing and evaluating work: the Developer's approach and the Customer's approach.

When implementing a task for the Customer, the Developer tries to:

  1. Apply a solution the Developer has already implemented before;
  2. If that is not possible, apply a solution tested by other companies (most often a solution recommended by the hardware or software manufacturer);
  3. Lower the Customer's requirements and, if possible, reduce them to the same standard solutions;
  4. If the previous point fails, negotiate more time to complete the work or more lenient acceptance requirements;
  5. At the acceptance stage, concentrate on the strengths of the completed project and conceal mistakes and shortcomings;
  6. Complete the project quickly and move on to a new one, or, at the very least, secure an outsourcing contract.

The Customer's approach is primarily characterized by:

  1. An attempt to get as much as possible from the Developer and for less money;
  2. Attempts during the development of the project to change or clarify the points of the original technical specifications;
  3. During acceptance, try to obtain as much documentation as possible and find the developer’s errors;
  4. Try, at the Developer's expense, not only to correct errors identified during the acceptance process, but also to make further changes to the project.

Therefore, the Contractor always prefers an implementation to the development of a full project, which has a significantly lower chance of success. The above is, of course, most relevant when the project is being developed by a third-party organization. In fact, when ordering a truly complex project (and the construction of a data center is one of them) from a third-party company, the participation of the Customer's specialists is absolutely necessary, at least in the initial stages of the project. Indeed, no one knows the requirements for the data center being created as well as the Customer's own specialists.

At a minimum, the Customer must be able to control the execution of the project: have information about the timing of each stage and the progress of its implementation, and not only take part in the acceptance of the project but also participate in writing the test program. Only in this case are a sufficiently precise formulation of the technical specifications, prompt resolution of emerging issues, and comprehensive verification of the results possible.

There are two options for executing a data center construction project: the first involves completing the project on your own, while the second assigns these responsibilities to a third-party contractor. These schemes are rarely found in their pure form: almost always, the construction of such systems is joint work between the Contractor (or several Contractors) and the Customer, and everything comes down to the question of who will lead the project. It would seem that no one but the Contractor should be given such rights, but... Participation in writing the technical specifications by both the Customer (since only the Customer knows all the requirements for its data center) and the Contractor (since without the Contractor the Customer may well write specifications that no one can implement at all) allows the parties, during the discussion, to develop a fairly accurate idea of the system to be created and of the software to be used. That is, the specialists who participate in writing the technical specifications become, by the time it is completed, the most competent people with respect to the specific requirements of a project carried out for a specific customer.

Let me immediately answer possible questions about joint writing of the technical specifications. When developing large projects, the Customer can single-handedly write only a preliminary version of the technical specifications, which is suitable only for holding a competition to find a Contractor. The jointly written technical specifications, with disputed issues settled between the Contractor and the Customer, will serve as the main document for the acceptance of the data center, since the "Program and test methodology" will be written on its basis.

Therefore, one of the main mistakes the Customer makes is removing from the work the specialists who took part in writing the technical specifications, and involving only narrow specialists, from time to time, in the preliminary and detailed design to resolve particular issues. Specialists involved in the implementation of large projects should be concentrated in the Customer's integrated projects department, and it is they who should bring in, when necessary, specialists in individual areas. In this case, the specialists of the integrated projects department will be aware of all the "subtle" points of the project, and the project itself will have a greater chance of successful completion. These same specialists must also participate in the acceptance of the work on the Customer's side, because by constantly monitoring the progress of the work they will be aware of all its problems.

A note regarding the work that falls within the competence of the integrated projects department.

It is wrong to think that the workload of the integrated projects department will be limited to participation in large projects, of which the Customer usually does not have very many. Big projects do not exist on their own: typically, each project requires its own expansion, integration with various subsystems, and modification in connection with newly emerging tasks. It is in solving these issues that the integrated-projects specialists will come in handy. And the above applies not only to large projects: it should be understood that only the introduction of individual products that do not affect a large number of the Customer's employees can be carried out bypassing the integrated projects department.

If we turn to the experience of implementing large projects, we will notice that large organizations (for example, banks), or those whose specialization is related to IT, themselves manage projects to create their own data centers.

Summing up the stages of justification and preparation of technical specifications

From the above we can conclude:

  1. When talking about creating a data center, you must first prioritize the requirements that it will have to satisfy.
  2. After prioritizing, you need to take as a basis one of the standards, the requirements of which you will follow. (I would recommend using TIA-942, but we must not forget that it does not consider operational issues.)
  3. All deviations from the standard, for better or worse, must be justified.
  4. To draw up the technical specifications, you need to involve your own integrated projects department (or create one), because on your side you need people who are personally interested in the successful implementation of the project and who will oversee all work with the Contractor.

As you may have noticed, in this part I have considered the issues that arise before the technical specifications are written and stressed that the specifications must be written together with the Contractor, but I have said nothing about choosing the Contractor. The fact is that choosing a Contractor is a separate and responsible task. Very briefly, the choice is usually divided into two stages:

  1. Determining the circle of applicants to solve the problem of building your specific data center.
  2. Analysis of material presented by companies and clarification of issues during personal meetings.

It is usually easiest to select several companies that have implemented successful projects in this area and provide them with the preliminary technical specifications (such preliminary specifications can be compiled by the Customer's own specialists). The candidates for data center construction are then asked to prepare a short document briefly describing all the subsystems of the data center and the process of its operation. Usually, based on the completeness of the issues considered, the soundness of the decisions and the results of personal communication, the choice of Contractor becomes obvious. And I will add on my own behalf: if during a personal meeting they promise you everything, and cheaply (in any case, significantly cheaper than the others), that is a reason not to believe them and to double-check the reality and quality of the projects the company has completed.

In addition, truly complex data center construction projects often require the involvement of other companies to implement some of the subsystems. In this case, you must immediately agree that one of the companies acts as the system integrator for the project and that you will resolve all technical and other issues with it. There is nothing worse than "piecemeal" implementation of a project; otherwise, when trouble comes, everything will be just like Raikin's immortal monologue: "Do you have any complaints about the buttons?"

The era of computers is already more than 50 years old, and their life-support infrastructure is just as old. The first computer systems were very difficult to use and maintain; they required a special, integrated infrastructure to function.

A bewildering array of cables connected the various subsystems, and many technical solutions still used today were developed to organize them: equipment racks, raised floors, cable trays, and so on. In addition, cooling systems were required to prevent overheating. And since the first computers were military, security issues and access restrictions came first. Later, computers became smaller, cheaper and less demanding, and penetrated a wide variety of industries. At the same time, the need for special infrastructure disappeared, and computers began to be placed anywhere.

The revolution occurred in the 90s, after the spread of the client-server model. Those computers that began to be considered servers began to be placed in separate rooms with a prepared infrastructure. The names of these rooms in English sounded like Computer room, Server room, Data Center, while in the Soviet Union we called them “Computer rooms” or “Computing Centers”. After the collapse of the USSR and the popularization of English terminology, our computer centers turned into “server” and “Data Processing Centers” (DPC). Are there fundamental differences between these concepts, or is it just a matter of terminology?

The first thing that comes to mind is scale: if it is small, it is a server room; if it is large, it is a data center. Or: if it houses only the company's own servers, it is a server room; and if server hosting services are provided to third-party companies, it is a data center. Is that so? For the answer, let's turn to the standards.

Standards and criteria

The most common standard currently describing the design of data centers is the American TIA 942. Unfortunately, there is no Russian equivalent; the Soviet SN 512-78 is long and hopelessly outdated (even though a 2000 edition exists) and can be consulted only for general approaches.

The TIA 942 standard itself states that the purpose of its creation is to formulate requirements and guidelines for the design and installation of a data center or computer room. Let's assume that the data center is something that meets the requirements of TIA 942, and the server room is just some kind of room with servers.

So, the TIA 942 standard classifies 4 levels (TIERs) of data centers and names a number of parameters by which this classification can be carried out. As an example, I decided to check whether my server room, built along with the plant three years ago, is a real data center.

As a small digression, I’ll point out that the plant produces stamped parts for the automotive industry. We produce body parts for companies such as Ford and GM. The enterprise itself is small (total staff of about 150 people), but with a very high level of automation: the number of robots is comparable to the number of workers in the workshop. The main difference between our production can be called the Just-In-Time work rhythm, that is, we cannot afford delays, including due to the fault of IT systems. IT is business critical.

The server room was designed to meet the needs of the plant; it was not intended to provide services to third-party companies; therefore, certification for compliance with any standards was not required. However, since our plant is a member of a large international holding, design and construction were carried out taking into account internal corporate standards. And these standards are, at least partially, based on international ones.

The TIA 942 standard is very extensive and describes in detail approaches to the design and construction of data centers. In addition, the appendix contains a large table with more than two hundred parameters for compliance with the four data center levels. Naturally, it is not practical to consider all of them within this topic, and some of them, for example "Separate parking for visitors and employees", "Thickness of the concrete slab at ground level" and "Proximity to airports", have little direct bearing on the classification of data centers, and even less on their difference from a server room. Therefore, we will consider only the parameters I regard as most important.

Basic parameters for data center classification

The standard establishes criteria in two categories, mandatory and recommended. Mandatory criteria are indicated by the word "shall", recommended ones by the words "should", "may" and "desirable".
The first and most important criterion is the level of operational availability. According to TIA 942, a data center of the highest, fourth, level must have 99.995% availability (i.e. no more than about 26 minutes of downtime per year). Then, in descending order, 99.982%, 99.749% and 99.671% for the first level, which already corresponds to 28 hours of downtime per year. The criteria are quite strict, but what counts as data center downtime? Only downtime of the entire data center due to the failure of one of the life-support systems is considered; the downtime of individual servers does not affect the operational availability of the data center. And if so, then interruptions in the power supply system are rightly considered the most likely cause of failure.

Our server room has a powerful APC UPS with N+1 redundancy and an additional battery cabinet, which is capable of maintaining the operation of not only servers, but also all computers in the enterprise for up to 7 hours (why do we need running servers if there is no one to connect to them). Over three years of operation there have never been any failures, so according to this parameter we can claim the highest TIER 4.

Speaking of power supply, the third and fourth classes of data centers require a second power input. We don't have one, so the maximum is second class. The standard also classifies power consumption per square meter of area. Strange parameter, never thought about it. I measured it: I have 6 kW per 20 square meters, that is, 300 W per square meter (only the first level). Although it is possible that I think incorrectly: the standard states that a good data center must have free space for scaling. That is, it turns out that the greater the “scaling margin”, the lower the level of the data center, but it should be the other way around. Here we have the lowest rating, but we still meet the standard.
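
For what it is worth, that back-of-the-envelope check can be written as a small sketch; the 6 kW and 20 m² figures come from the paragraph above, and the per-tier ranges are the initial-power values from the Uptime Institute table quoted earlier in this document.

```python
# Initial power density ranges, W per m2, per tier (from the table quoted earlier)
TIER_RANGES = {1: (215, 323), 2: (430, 537), 3: (430, 645), 4: (537, 860)}

density = 6_000 / 20  # 6 kW of load spread over 20 m2 of floor space -> 300 W/m2
matching = [tier for tier, (low, high) in TIER_RANGES.items() if low <= density <= high]
print(f"{density:.0f} W/m2 falls within the initial-power range of tier(s): {matching}")  # -> [1]
```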

For me, an important parameter is the connection point for external telecommunication systems. We interact online with clients to receive orders and ship components; therefore, a lack of communication can lead to a stop in our clients’ conveyor belts. And this will not only negatively affect our reputation, but will also lead to serious fines. It’s interesting that the standard itself talks about duplicating communication input points, but the appendix says nothing about this (although it states that at levels above the first, all subsystems must be redundant). We use two connection channels with automatic routing in case of failure in one of them, plus a backup GPRS router with manual connection. Here again we meet the highest requirements.

A significant part of the standard is devoted to cable networks and systems: distribution points, the main and vertical subsystems of the overall data center cabling system, and the cabling infrastructure. After reading several parts of this section, I realized that I would either have to learn it by heart or let it go and concentrate on more important things. At a superficial glance (Category 6 twisted pair, separation of active equipment from passive), we still comply with the standard, although I am not sure about such parameters as the distance between cabinets, the bend angles of the trays, and the correct separation of routes for low-current cables, optics and power. We will assume that here we partially meet the requirements.

Air conditioning systems: there are air conditioners, there is redundancy, and we can even say there is a cold and a hot aisle (though there is only one, due to the size of the room). But the cooling is not distributed under the raised floor, as recommended, but delivered directly into the work area. And we do not control humidity, which by the standard is an omission. Let's mark this as a partial match.

A separate part is devoted to raised floors. The standard regulates both the height and the load on them. Moreover, the higher the class of the data center, the higher and more powerful the false floors should be. We have them, and in terms of height and loads they correspond to the second class of data centers. But my opinion is that the presence of false floors should not be a criterion, much less a characteristic of a data center. I was in the data center of the WestCall company, where they initially abandoned false floors, placing all the trays under the ceiling. Air conditioning is done with cold and hot aisles. The building is separate, the premises are large, and specific services are provided. That is, a good, “real” data center, but it turns out that without false floors it formally does not meet the standard.
The next important point is the security system. Large data centers are guarded almost like safe deposit boxes in a bank, and getting there is a whole procedure, starting from approval at different levels and ending with changing clothes and shoe covers. Ours is simpler, but everything is there: physical security is provided by a private security company, which also guards the plant itself, and the access control system ensures that only authorized employees enter the premises. Let's put a plus sign.

And finally, a gas fire extinguishing system. The main and reserve cylinders, sensors in the room itself, under the floor and above the ceiling and a control system - everything is there. By the way, an interesting point. When companies want to show off their data center, the first thing they show is the fire extinguishing system. Probably because this is the most unusual element of a data center, not found almost anywhere except data centers, and the rest of the equipment simply looks like cabinets of different colors and sizes.

The main difference, in my opinion, between the two upper data center levels and the lower ones is that they must be located in a separate building. It would seem that this is the sacred meaning of the difference between a server room and a data center: if it is housed in a separate building, it is a data center. But no, the standard says that the first two levels are also data centers.

I finally found a parameter by which my server room does not qualify as a data center: the size of the entrance door. According to the standard, it should be at least 1.0×2.13 m, and preferably 1.2×2.13 m, but we have an ordinary door: 0.9×2.0 m. This is a minus, but treating the size of the entrance door as the criterion that distinguishes a data center from a server room is not serious.

Almost a real data center!

So what do we have? A small server room at a factory meets almost all the requirements of the standard for a data center, albeit with minor reservations. The only major discrepancy is the size of the entrance door, and the absence of a separate building leaves the server room no chance for the top tiers. This means that the assumption that a data center is necessarily large while a server room is always small is incorrect, as is the second assumption, that a data center necessarily serves many client companies. From all this it follows that a server room is essentially just a synonym for a data center.

The concept of a data center appeared when they began to sell hosting services, renting racks and hosting servers. At that time, the concept of a server room was devalued by a negligent attitude towards infrastructure due to the unpretentiousness of PCs and the low cost of downtime. And, in order to show that the provider has everything built for convenient and trouble-free operation, and they are able to guarantee the quality of the service, they introduced the concept of data centers, and then the standards for their construction. Given the trends of centralization, globalization and virtualization, I think that the concept of a server room will soon disappear or turn into a designation for a telecommunications hub.

I believe our President is counting on approximately the same thing with the police law. The concept of “police” has been devalued, and it is too late to create new rules for them. Whether it will be possible to build competent standards for the new structure - we'll see in the near future.

In the modern sense, a data center, or data processing center (DPC), is a comprehensive organizational and technical solution designed to create a high-performance and fault-tolerant information infrastructure. In a narrower sense, a data center is a room designed to house equipment for processing and storing data and providing connection to fast communication channels. In order to more fully reveal the essence of the concept of a data center, let's start with the history of its origin.

In principle, the computing centers familiar to many from the EC series machines that became widespread in our country 30 years ago are, in a certain sense, the ancestors of modern data centers. What today's data centers and the old computing centers have in common is the idea of resource consolidation. Those computing centers also had quite complex subsystems for providing the environment that computer equipment requires - cooling, power, security and so on - many of which are still used in modern data centers.

With the proliferation of PCs in the mid-1980s, there was a trend towards dispersal of computing resources - desktop computers did not require special conditions, and consequently, less and less attention was paid to providing a special environment for computing. However, with the development of client-server architecture in the late 90s, the need arose to install servers in special premises - server rooms. It often happened that servers were placed in the area of ​​old computer centers. Around this time, the term “data center” emerged, applied to specially designed computer rooms.

The heyday of data centers came with the dot-com boom. Companies that needed fast Internet access and business continuity began to design special premises providing increased reliability and security of data processing and transmission - Internet Data Centers. Since all modern data centers provide Internet access, the first word of the name was eventually dropped. Over time, a separate field of study emerged that deals with optimizing the construction and operation of data centers.

At the beginning of the 21st century, many large companies, both abroad and in our country, came to the need to implement a data center - for some, ensuring business continuity became paramount, for others, data center solutions turned out to be very effective due to savings in operating costs. Many large companies have found that a centralized computing model provides the best TCO.

Over the past decade, many large IT companies have acquired an entire network of data centers. For example, the oldest global operator Cable & Wireless in 2002 bought the American company Digital Island, owner of 40 data centers around the world, and the European operator Interoute acquired the operator and hosting provider PSINet in 2005, connecting 24 data centers to its pan-European network.

The practice of applying risk-based approaches to doing business stimulates the use of data centers. Companies have begun to realize that investing in the uninterrupted operation of critical IT systems is much cheaper for many types of businesses than the possible damage from data loss as a result of a failure. The implementation of data centers is also facilitated by the adoption of laws requiring mandatory backup of IT systems, the emergence of recommendations for the use of an IT infrastructure outsourcing model, and the need to protect businesses from natural and man-made disasters.

Individual data centers have begun to occupy ever larger territories. For example, information recently appeared that Google intends to build a large data center in Iowa with an area of 22.3 hectares, spending $600 million on it; it is due to start operating in the spring of 2009.

In Russia, the construction of data centers (in the modern sense of the term) began at the end of the last century - the beginning of the new century. One of the first large Russian data centers was the Sberbank Center. Today, many commercial structures (primarily financial organizations and large telecom operators) have their own data centers.

At the same time, reputable Russian Internet companies already have several data centers each. For example, in September of this year it was reported that Yandex had opened a new (its fourth) data center with 3 thousand servers (occupied area - 2 thousand sq.m, supplied power - 2 MW). The new complex is equipped with precision cooling systems capable of removing up to 10 kW per rack, uninterruptible power supplies and diesel generators. The data center is connected to Yandex's Moscow optical ring, which links its other data centers and offices, as well as to M9 and M10, the traditional traffic exchange points with providers.

At about the same time, the Russian operator Synterra announced the start of one of the largest projects (not only by Russian but also by European standards) - the construction of a national network of its own data centers, called “40x40”. By creating large data centers at the nodes of its broadband network in most regions of Russia, the operator intends to turn them into points for attracting local customers and selling its entire range of services.

By mid-2009, newly created data centers will be opened in 44 centers of the constituent entities of the Federation. The first will be Moscow, St. Petersburg, Kazan, Samara and Chelyabinsk. The operator plans that the first 20 sites will be put into operation by the end of 2008, the rest by mid-2009. The integrators of the project are Croc, Technoserv A/S and Integrated Service Group (ISG).

The area of each data center, depending on the needs of the region, will range from 500 to 1,000 sq.m of raised floor space and accommodate 200-300 equipment racks. Two network rings with a total channel capacity of 4x10 Gbit/s are to be connected to each data center, providing customers with a high level of redundancy and service availability.

The “40x40” project is aimed at a wide range of clients who need to outsource IT infrastructure throughout the country - telecom operators, corporate network operators, content and application developers, IP-TV operators and television companies, as well as government agencies responsible for implementing national ICT programs.

In our country, not only commercial but also government agencies, such as the Ministry of Internal Affairs, the Ministry of Emergency Situations and the Federal Tax Service, have their own data centers.

According to IDC, the number of data centers in the United States will reach 7 thousand by 2009 as companies transfer distributed computing systems to centralized ones.

Along with the construction of new data centers, the problem of modernizing old ones is on the agenda. According to Gartner, by 2009, 70% of data center equipment will no longer meet operational and performance requirements unless appropriate upgrades are made. The average time to update computer equipment in a data center is approximately three years. The data center infrastructure is designed taking into account a service life of about 15 years.

Purpose and structure of the data center

Depending on their purpose, modern data centers can be divided into corporate ones, which operate within a specific company, and data centers that provide services to third-party users.

For example, a bank may have a data center where information on its customers' transactions is stored; it usually does not provide services to third-party users. Even if a data center provides no such services, it can be spun off into a separate organizational unit of the company and provide the company with access to information services on the basis of an SLA. Many large companies have data centers of one kind or another, and international companies may have dozens of them.

A data center can also be used to provide professional IT outsourcing services on commercial terms.

All data center systems consist of the IT infrastructure itself and engineering infrastructure, which is responsible for maintaining optimal conditions for the functioning of the system.

IT infrastructure

A modern data processing center (DPC) includes a server complex, a data storage system, an operations management system and an information security system, all integrated with each other and connected by a high-performance LAN (Fig. 1).

Fig. 1. IT infrastructure of a modern data center

Let's consider the organization of a server complex and data storage system.

Data center server complex

The most promising model of a server complex is a model with a multi-level architecture, in which several groups of servers are distinguished (see Fig. 1):

  • resource servers, or information resource servers, are responsible for storing data and providing it to application servers; for example, file servers;
  • application servers perform data processing in accordance with the business logic of the system; for example, servers running SAP R/3 modules;
  • information presentation servers provide the interface between users and application servers; for example, web servers;
  • service servers ensure the operation of the other data center subsystems; for example, backup system management servers.

Servers of different groups face different requirements depending on their operating conditions. In particular, information presentation servers handle a large flow of short requests from users, so they must scale well horizontally (by increasing the number of servers) to ensure load distribution.

For application servers the requirement of horizontal scalability remains, but it is not critical. What they need is sufficient vertical scalability (the ability to increase the number of processors, the amount of RAM and the number of I/O channels) to process multiplexed user requests and execute the business logic of the tasks being solved.

Storage systems

The most promising solution for organizing a data storage system is SAN (Storage Area Network) technology, which provides fault-tolerant server access to storage resources and reduces the total cost of ownership of the IT infrastructure thanks to flexible online management of server access to storage resources.

The storage system consists of information storage devices, servers, a management system and communication infrastructure that provides physical communication between the elements of the storage network (Fig. 2).

Fig. 2. Data storage system based on SAN technology

This architecture allows for uninterrupted and secure data storage and data exchange between storage network elements.

The SAN concept is based on the ability to connect any server to any data storage device operating over the Fibre Channel (FC) protocol. The technical basis of a SAN is formed by fiber optic links, FC HBAs and FC switches, currently providing transfer speeds of 200 MB/s.

The use of SAN as the transport basis of a data storage system allows for dynamic reconfiguration (adding new devices, changing configurations of existing ones and their maintenance) without stopping the system, and also ensures rapid regrouping of devices in accordance with changing requirements and rational use of production space.

The high speed of data transfer over the SAN (200 MB/s) allows changing data to be replicated in real time to a backup center or to remote storage. Convenient SAN administration tools make it possible to reduce the number of maintenance personnel, which lowers the cost of maintaining the storage subsystem.

Adaptive engineering infrastructure of the data center

In addition to the hardware and software complex itself, the data center must provide external conditions for its operation. The equipment located in the data center must operate around the clock under certain environmental parameters, which require a number of reliable support systems to maintain.

A modern data center has more than a dozen different subsystems, including main and backup power, low-current, power and other types of wiring, climate control systems, fire safety, physical security, etc.

It is quite difficult to ensure optimal climatic conditions for equipment. It is necessary to remove a large amount of heat generated by computer equipment, and its volume increases as the power of systems and the density of their layout increase. All this requires optimization of air flows, as well as the use of cooling equipment. According to IDC, this year the cost of supplying data centers with electricity and cooling will exceed the cost of computer equipment itself.

The listed systems are interconnected, so an optimal solution can be found only by considering the infrastructure as a whole rather than its individual components.

Design, construction and operation of a data center is a very complex and labor-intensive process. There are many companies offering the necessary equipment, both computing and auxiliary, but an individual solution cannot be built without the help of integrators. A number of large domestic system integrators, such as IBS, Croc and Open Technologies, as well as specialized companies such as DataDome and IntelinePro, are engaged in building data centers in Russia.

Data center and IT outsourcing

According to IDC, the global market for data center hosting services alone is growing very quickly and will reach $22-23 billion by 2009.

The most comprehensive IT outsourcing service is information systems outsourcing. It is provided under a long-term agreement, under which the service provider receives full control of the client’s entire IT infrastructure or a significant part of it, including the equipment and software installed on it. These are projects with broad involvement of the contractor, which assume responsibility for systems, network and individual applications included in the IT infrastructure. Typically, IT infrastructure outsourcing is formalized through long-term contracts that last more than a year.

To create their own IT infrastructure from scratch, companies need large sums of money and highly paid specialists. Renting data center infrastructure reduces TCO by sharing resources between clients, provides access to the latest technologies and makes it possible to deploy offices quickly with room to expand. For many companies, the reliability and continuity of equipment and network operation is today a critical factor for the business. Outsourcing of IT infrastructure ensures a high level of reliability at a limited cost, offering clients the chance to rent server racks and rack space for their own equipment (colocation), rent a dedicated server, licensed software and data transmission channels, and obtain technical support.

The customer is freed from many procedures: technical support and administration of equipment, organizing round-the-clock security of premises, monitoring network connections, data backup, anti-virus software scanning, etc.

The data center can also provide outsourced application management services. This gives customers access to certified specialists, which guarantees a high level of service for software products and makes it possible to move from one software product to another with minimal financial costs.

In application outsourcing mode, data center clients can obtain outsourcing of email systems, Internet resources, data storage systems or databases.

By outsourcing backup of their corporate systems, customers reduce the risk of losing critical information through the use of professional systems for restoring IT functionality, and in the event of an accident they gain the opportunity to insure information risks.

Typically, data center customers are offered several levels of business continuity. In the simplest case, this is placement of backup systems in a properly protected data center. There may also be an option in which the client additionally rents the hardware and software used for redundancy. The most complete version of the service involves developing a full-scale Disaster Recovery Plan (DRP), which includes an audit of the customer's information systems, risk analysis, development of the recovery plan itself, creation and maintenance of backup systems, as well as the provision of equipped office space to continue work if an accident strikes the main office.

Examples of commercial data centers

Stack Data Network Data Centers

The Stack Data Network unites three data centers built taking into account foreign experience.

Two of them (Stack data center and M1 data center) with a total capacity of 700 racks are located in Moscow, and the third (PSN data center) with a capacity of 100 racks is 100 km from the capital.

There are partnership agreements with a number of European data centers on the possibility of using their resources through the Stack Data Network.

Stack Data Network data centers provide a business continuity service - disaster recovery, as well as high-quality hosting: a collocation service - server placement (Fig. 3) and a dedicated server service (Fig. 4).

Fig. 3. Stack data center: server placement (colocation)

Fig. 4. Stack data center: dedicated server rental

The data centers have autonomous power supply systems with uninterruptible power supplies and powerful diesel generator sets (Fig. 5), climate control and air conditioning systems (Fig. 6), round-the-clock monitoring of the condition of infrastructure elements and gas fire extinguishing systems. To ensure reliability, all life support systems are made redundant according to the N+1 scheme. A special security regime is maintained through several access perimeters with individual plastic magnetic cards, a biometric access control system, video surveillance and motion sensors.

Fig. 5. Stack data center: diesel generator

Fig. 6. Stack data center: Liebert air conditioner

The Stack Data Network data centers have a 24-hour operations service (duty operators and specialists), covering the life support systems as well. Round-the-clock monitoring covers the life support systems, telecommunications and server equipment, networks and the state of communication channels. The data centers are connected to the main telecommunications hubs in Moscow and interconnected by their own redundant fiber-optic lines.

Sun Microsystems Introduces New Data Center in a Box Concept

The process of creating traditional data centers is very expensive and lengthy. To speed it up, Sun Microsystems proposed a solution called Blackbox.

The Blackbox system is mounted in a standard-length shipping container that can accommodate up to 120 SunFire T2000 servers or up to 250 SunFire T1000 servers (2 thousand cores in total), or up to 250 SunFire x64 servers (1 thousand cores), as well as storage systems whose capacity can reach 1.5 PB on hard drives and up to 2 PB on tape. Up to 30 thousand Sun Ray terminals can be connected to the container.

The system runs Solaris 10.

The equipment is placed very tightly in the container; there is simply no room for air circulation. Because of this, air cooling is extremely ineffective, so water cooling is used.

According to SUN, placing equipment inside a shipping container can reduce the unit cost of computing power per unit area by five times compared to a conventional data center.

The Blackbox solution is at least an order of magnitude cheaper than a traditional data center organization, while it provides multiple acceleration of the installation process.

It should be noted that such a center cannot be deployed everywhere, since not every building can accommodate such a container. Sales of the solution began this year.

Data center IBS DataFort

In 2001, IBS and Cable & Wireless announced the start of providing comprehensive ASP services to Russian and foreign companies as part of the joint DATA FORT project, built around a data center. A little later DATA FORT began to operate on its own, and in 2003 IBS announced the launch of its own data center, run by an IBS subsidiary, IBS DataFort. The IBS DataFort data center is focused on serving clients with critical requirements for confidentiality and data protection; it provides a high degree of data availability, modern hardware and software, reliable power supply, high-speed data transmission channels and a high level of technical support. The perimeter has enhanced security (Fig. 7).

Fig. 7. Protected area of the IBS DataFort data center

Inside the building there is a technical module with an area of ​​more than 130 sq.m., a two-story backup office with an area of ​​about 150 sq.m., and an operator station. To prevent the risk of floods and fires, the technical module of the data center is built from steel sandwich panels and raised half a meter above the floor level (Fig. 8).

Fig. 8. Technical module of the IBS DataFort data center

The technical module is a fireproof, earthquake-resistant structure equipped with a high-strength raised floor, waterproofing and grounding systems. The module is designed for 1,500 rack-mounted servers placed in 19-inch industrial APC racks.

The data center is equipped with an automatic gas fire extinguishing system built on Fire Eater and Shrack equipment using Inergen gas, light and sound alarms (warning of a gas discharge and requiring personnel to leave the data center premises), as well as an effective smoke removal system (Fig. 9).

Fig. 9. IBS DataFort data center fire extinguishing systems

The climate control system (Fig. 10) consists of industrial air conditioners that automatically maintain a set temperature of around 22±0.5 °C and humidity of 50±5%, connected according to the N+1 scheme (if one of the air conditioners fails, the design parameters of the system as a whole are not violated). Fresh air from outside is supplied through a dedicated unit that prevents dust from entering the data center.

Fig. 10. IBS DataFort data center climate control system

IBS DataFort specializes in comprehensive IT outsourcing services, taking over all functions of the customer’s IT departments, and offers the following types of services:

  • outsourcing of IT infrastructure - hosting the customer’s equipment or leasing data center infrastructure, ensuring the functionality of corporate information systems;
  • application management - skilled administration and management of various applications;
  • outsourcing of IT personnel - provision of qualified specialists to solve various IT problems;
  • ensuring business continuity - organizing fault-tolerant solutions for restoring information systems after accidents and failures;
  • IT consulting and audit - audit and inventory services in the field of IT, as well as the construction of industrial technologies for operating IT systems;
  • functional outsourcing - management of individual IT functions according to agreed standards and approved service levels.

- Do you have a data center?
- Yes, we are building for 100 racks.
- And we are building for 200.
- And we are at 400 with an independent gas power plant.
- And we have 500 with water cooling and the ability to remove up to 10 kW of heat from one rack.
- And we watch the market and are surprised.

The situation on the Moscow (and Russian in general) data center market looked deplorable two years ago. The overall shortage of data centers, and of space in the existing ones, meant that a 3-4 kW rack that cost about 700 USD in 2004-2005 had risen to 1,500-2,000 USD by 2007. Trying to meet the growing demand, many operators and system integrators launched “construction projects of the century” aiming to create the best and largest data centers. As a result, about 10 data centers in Moscow are currently at the stage of opening and initial filling, and several more are at the design stage. Telenet, i-Teco, Dataline, Hosterov, Agava, Masterhost, Oversun, Synterra and a number of other companies opened their own data centers at the turn of 2008 and 2009.

The desire to invest in large-scale telecommunications projects was explained not only by fashion but also by a number of economic reasons. For many companies, building their own data center was a forced measure: the incentive, in particular for hosting providers and large Internet resources, was the constantly growing cost of infrastructure. However, not all companies assessed their capabilities correctly: for example, the data center of one large hosting provider has turned into a long-term construction project that has now dragged on for two years, and another hosting provider, which built a data center outside the Moscow Ring Road, has been trying for six months to sell it to at least a few large clients.

Investment projects launched at the height of the crisis also cannot boast a stable influx of clients. Quite often the price per rack, put into the business plan at 2,000 USD, has to be lowered to 1,400-1,500 USD in the current economic conditions, pushing the project's break-even point back by years.

Several grandiose projects for the construction of data centers with thousands of racks with gas power plants outside the Moscow Ring Road remained unrealized. One of these projects, in particular, was “buried” by a fixed-line operator (due to the company’s takeover by a larger player).

Thus, to date, only those data centers that were built more than three years ago and which were filled during the shortage years of 2004-2007 have paid off. In a crisis - in conditions of an excess of free space in data centers - the construction of more and more new data centers, it would seem, looks like sheer madness.

However, not everything is so bad: even in a crisis, subject to certain conditions, you can and should create your own data center. The main thing is to understand a number of nuances.

What makes a company create its own data center?

There is only one motive - business security and risk minimization. The risks are as follows. Firstly, commercial data centers in Moscow correspond to levels 1-2, which means permanent problems with power supply and cooling. Secondly, commercial data centers categorically refuse to cover losses from downtime and lost profit: ask about the maximum fine or penalty you can count on in case of downtime - as a rule, it does not exceed 1/30 of the monthly rent per day of downtime.

And thirdly, you are not able to control the real state of affairs in a commercial data center:

  • it is a commercial organization that must make a profit from its activities and sometimes economizes even at the expense of the quality of its services;
  • you take on all the risks of a third-party company, for example a power cutoff (even a short one) over an outstanding debt;
  • the commercial data center may terminate its contract with you at any time.

Data center economics

It is very important to estimate the costs of construction and operation in advance, correctly and completely, and for this you need to determine the class of the data center you are going to build. Below is the estimated cost of building a turnkey site, per 5 kW rack (excluding the cost of electricity).

Level 1 - from 620 thousand rubles
Level 2 - from 810 thousand rubles
Level 3 - from 1,300 thousand rubles
Level 4 - from 1,800 thousand rubles

The cost of operating a data center depends on many factors. Let's list the main ones.

  1. Cost of electricity = electricity consumed by the IT load + about 30% for heat removal + transmission and conversion losses (from 2 to 8%) - and that is assuming all cost-cutting measures are in place, such as loss reduction and proportional cooling (which in some cases, alas, is impossible). A worked example follows the list.
  2. The cost of renting premises - from 10 thousand rubles per sq.m.
  3. The cost of servicing air conditioning systems is approximately 15-20% of the cost of the air conditioning system per year.
  4. The cost of servicing power systems (UPS, diesel generator set) is from 5 to 9% of the cost.
  5. Rental of communication channels.
  6. Payroll of the maintenance service.
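To make the structure of these costs concrete, here is a minimal sketch in Python that applies the electricity formula from item 1 above. The rack count, per-rack load and tariff are illustrative assumptions, not figures from this article; only the ~30% cooling overhead and the 2-8% loss range come from the list.

```python
# Rough monthly electricity estimate, using the cost structure from item 1:
# IT load + ~30% for heat removal + 2-8% transmission/conversion losses.
# All input figures below are illustrative assumptions.

racks = 40                 # assumed number of racks
kw_per_rack = 5.0          # assumed load per rack, kW
tariff_rub_per_kwh = 4.0   # assumed electricity tariff, RUB/kWh

it_load_kw = racks * kw_per_rack                 # useful IT load
cooling_kw = 0.30 * it_load_kw                   # ~30% for heat removal
losses_kw = 0.05 * (it_load_kw + cooling_kw)     # 2-8% losses; take 5%

total_kw = it_load_kw + cooling_kw + losses_kw
monthly_kwh = total_kw * 24 * 30
monthly_cost_rub = monthly_kwh * tariff_rub_per_kwh

print(f"Total draw: {total_kw:.0f} kW")
print(f"Monthly consumption: {monthly_kwh:,.0f} kWh")
print(f"Monthly electricity cost: {monthly_cost_rub:,.0f} RUB")
```

With these assumptions a 40-rack room draws about 270 kW and consumes roughly 200 thousand kWh per month; substitute your own tariff and load to get a first-order budget figure.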

What does a data center consist of?

There are a number of formalized requirements and standards that must be met when constructing a data center: after all, the reliability of its operation is critically important.

Currently, the international classification of data center readiness (reliability) levels, from 1 to 4, is widely used (see the table below). Each level assumes a certain degree of availability of data center services, ensured by different approaches to redundancy of the power, cooling and channel infrastructure. The lowest (first) level assumes availability of 99.671% of the time per year (i.e. up to 28.8 hours of downtime), and the highest (fourth) level implies availability of 99.995%, i.e. roughly 25 minutes of downtime per year.
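The availability percentages translate into downtime hours by simple arithmetic, as the short Python sketch below shows. The Level 1 and Level 4 figures are quoted above; the Level 2 and Level 3 values are back-calculated from the downtime hours in the table that follows, so treat them as approximations.

```python
# Convert an availability figure into allowable downtime per year.
hours_per_year = 24 * 365

levels = [("Level 1", 0.99671), ("Level 2", 0.99749),
          ("Level 3", 0.99982), ("Level 4", 0.99995)]

for name, availability in levels:
    downtime_h = (1 - availability) * hours_per_year
    print(f"{name}: {availability:.3%} availability -> "
          f"{downtime_h:.1f} h (~{downtime_h * 60:.0f} min) downtime per year")
```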

Data center reliability level parameters

Parameter | Level 1 | Level 2 | Level 3 | Level 4
Cooling and power delivery paths | One | One | One active and one standby | Two active
Component redundancy | N | N+1 | N+1 | 2(N+1)
Division into several autonomous blocks | No | No | No | Yes
Hot-swappable maintenance | No | No | Yes | Yes
Building | Part or floor | Part or floor | Freestanding | Freestanding
Staff | None | At least one engineer per shift | At least two engineers per shift | More than two engineers, 24-hour duty
Capacity usable for critical load, % of N | 100 | 100 | 90 | 90
Auxiliary areas, % | 20 | 30 | 80-90 | 100+
Raised floor height, cm | 40 | 60 | 100-120 | 100-120
Raised floor load, kg/sq.m | 600 | 800 | 1200 | 1200
Utility voltage | 208-480 V | 208-480 V | 12-15 kV | 12-15 kV
Points of failure | Many + operator errors | Many + operator errors | Few + operator errors | None + operator errors
Allowable downtime per year, h | 28.8 | 22 | 1.6 | 0.4
Time to create infrastructure, months | 3 | 3-6 | 15-20 | 15-20
Year the first data center of this class was created | 1965 | 1975 | 1980 | 1995

This classification by levels was proposed by Ken Brill back in the 1990s. The first data center of the highest readiness level is believed to have been built in 1995 by IBM for UPS as part of the Windward project.

In the USA and Europe there is an established set of requirements and standards governing the construction of data centers. For example, the American standard TIA-942 and its European counterpart EN 50173-5 set out requirements:

  • to the location of the data center and its structure;
  • to cable infrastructure;
  • to reliability, specified by infrastructure levels;
  • to the external environment.

In Russia no up-to-date requirements for organizing data centers have been developed, so it can be assumed that the applicable ones are the standards for building computer rooms and the SNiP building codes for technological premises, most of whose provisions were written back in the late 1980s.

So, let’s focus on the three “pillars” of data center construction, which are the most significant.

Power

How to build a reliable and stable power supply system, avoid future operational failures and prevent equipment downtime? The task is not simple, requiring careful and scrupulous study.

The main mistakes are usually made at the design stage of the data center power supply system. Later on they can lead to failures of the power supply system for the following reasons:

  • overload of power lines, as a result - failure of electrical equipment and sanctions from energy regulatory authorities for exceeding the consumption limit;
  • serious energy losses, which reduces the economic efficiency of the data center;
  • limitations in the scalability and flexibility of power supply systems associated with the load capacity of power lines and electrical equipment.

The power supply system in the data center must meet the modern needs of technical sites. In the data center classification proposed by Ken Brill, the power supply requirements look as follows (a sizing sketch follows the list):

  • Level 1 - it is enough to provide protection against current surges and voltage stabilization, this can be solved by installing filters (without installing a UPS);
  • Level 2 - requires installation of a UPS with bypass with N+1 redundancy;
  • Level 3 - parallel operating UPSs with N+1 redundancy are required;
  • Level 4 - parallel UPS systems with 2(N+1) redundancy.
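To see what these redundancy schemes mean in terms of installed hardware, here is a minimal sizing sketch in Python. The IT load and the rating of a single UPS module are illustrative assumptions; the N+1 and 2(N+1) formulas follow the list above.

```python
import math

# How many UPS modules the N+1 and 2(N+1) schemes require for a given load.
# Input figures are illustrative assumptions.
it_load_kw = 200.0      # assumed IT load to protect
ups_module_kw = 60.0    # assumed rating of one UPS module

n = math.ceil(it_load_kw / ups_module_kw)   # modules needed just to carry the load

print(f"Base requirement N = {n} modules")
for scheme, modules in [("Level 2/3, N+1", n + 1), ("Level 4, 2(N+1)", 2 * (n + 1))]:
    print(f"{scheme}: {modules} modules, {modules * ups_module_kw:.0f} kW installed")
```

With these assumptions, moving from N+1 to 2(N+1) doubles the module count from 5 to 10, which is one reason a Level 4 site costs so much more than a Level 2 one.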

Today on the market you can most often find data centers with a second-level power supply, less often - a third (but in this case, the cost of placement usually increases sharply, and is not always justified).

According to our estimates, a data center with full redundancy usually costs 2.5 times more than a simple one, so it is extremely important to decide at the pre-project stage which category the site should correspond to. Both underestimating and overestimating the importance of the permissible downtime parameter hit the company's budget equally hard: financial losses are possible in both cases, either from downtime and failures of critical systems or from money simply thrown away.

It is also very important to monitor how electricity consumption will be recorded.

Cooling

Properly organized heat removal is an equally complex and important task. Very often the total heat output of the room is taken and, based on it alone, the capacity and number of precision air conditioners are calculated. This approach, although very common, cannot be called correct, since it leads to additional costs and losses in cooling efficiency. Errors in cooling calculations are the most common of all, as evidenced by the operations staff of almost every data center in Moscow hosing down heat exchangers with fire hoses on a hot summer day.

The quality of heat removal is affected by the following points.

Architectural features of the building. Unfortunately, not all buildings have a regular rectangular shape with constant ceiling heights. Height changes, walls and partitions, structural features and exposure to solar radiation can all lead to additional difficulties in cooling certain areas of the data center. Therefore, the calculation of cooling systems should be carried out taking into account the characteristics of the room.

Height of ceilings and raised floors. Here everything is simple: while extra raised floor height does no harm, ceilings that are too high lead to stagnation of hot air (it then has to be removed by additional means), and ceilings that are too low impede the movement of hot air towards the air conditioners. With a low raised floor (less than 500 mm), cooling efficiency drops sharply.

Temperature and humidity in the data center. As a rule, for normal operation of the equipment it is necessary to maintain a temperature of 18-25 degrees Celsius and a relative humidity of 45-60%. Doing so protects the equipment from shutdowns due to overcooling, failures caused by condensation at high humidity, static electricity at low humidity, and overheating.
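As a rough illustration of why per-rack heat load translates directly into airflow and cooling capacity, the sketch below applies the standard sensible-heat relation for air (P = ρ · cp · flow · ΔT). The per-rack load and the cold-aisle/hot-aisle temperature rise are illustrative assumptions.

```python
# Rough airflow needed to remove the heat of one rack, from the
# sensible-heat relation: P = rho * cp * flow * dT.
# The rack load and temperature rise below are illustrative assumptions.

rho = 1.2        # air density, kg/m^3
cp = 1005.0      # specific heat of air, J/(kg*K)

rack_load_w = 5000.0    # assumed heat output of one 5 kW rack, W
delta_t = 12.0          # assumed cold-aisle/hot-aisle temperature rise, K

flow_m3_per_s = rack_load_w / (rho * cp * delta_t)
print(f"Required airflow per 5 kW rack: {flow_m3_per_s * 3600:.0f} m^3/h")
```

Roughly 1,200 m³/h per 5 kW rack is a lot of air, which is why raised floor height, ceiling height and aisle layout matter as much as the nameplate capacity of the air conditioners.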

Communication channels

It would seem that such an “insignificant” component as communication channels cannot cause any difficulties, which is precisely why neither those who rent a data center nor those who build one pay enough attention to them. But what is the use of flawless, uninterrupted operation of the data center if the equipment is unreachable, i.e. for all practical purposes does not exist? Make a note: fiber optic lines must be fully duplicated, and more than once. By “duplicated” we mean not only having two fiber optic cables from different operators, but also that they must not run through the same duct.

It is important to understand that a truly developed communications infrastructure requires significant one-time costs and is by no means cheap to operate. It should be regarded as one of the very tangible components of the cost of a data center.

Own data center “one-two-three”

Here it is worth making a small digression to discuss an alternative way of creating your own data center. We are talking about BlackBox - a mobile data processing center built into a shipping container. Simply put, BlackBox is a data center located not in a dedicated room inside a building, but in a kind of trailer (see picture).

BlackBox can be brought into full working order in a month, i.e. 6-8 times faster than a traditional data center. At the same time, there is no need to adapt the company building's infrastructure to it (creating special fire safety, cooling and security systems, and so on). And most importantly, it does not require a separate room: you can place it on the roof or in the yard. All that is really needed is a water supply for cooling, an uninterruptible power supply system and an Internet connection.

The BlackBox itself costs about half a million dollars. It should be noted that BlackBox is a fully configured (though customizable) virtualization data center housed in a standard shipping container.

Two companies have already received this container for preliminary testing. These are the Linear Accelerator Center (Stanford, USA) and... the Russian company Mobile TeleSystems. The most interesting thing is that MTS launched BlackBox faster than the Americans.

Overall, the BlackBox comes across as a very well thought out and reliable design, although there are of course some shortcomings worth mentioning.

An external power supply of at least 300 kW is required. Here we run into the construction or reconstruction of a transformer substation, installation of a main switchboard and laying of a cable route. It is not so simple: design work, coordination and approval of the project at all levels, equipment installation...

UPSs are not included in the delivery package. Again there is design work, choosing a supplier, and fitting out a room for the UPS with its own air conditioning (batteries are very sensitive to temperature).

The purchase and installation of a diesel generator set will also be required. Without it, the problem with redundancy cannot be solved, and this is another round of approvals and permits (the average delivery time for such units is from 6 to 8 months).

Cooling - an external source of cold water is required. You will have to design, order, wait, install and launch a redundant chiller system.

Summary: in about six months you will essentially have built a small data center, except that instead of a hangar (premises) you buy a container with servers - balanced and thought out to the smallest detail, with a set of convenient options and software for managing all this equipment - and install it within a month.

The more data centers, the..?

Currently, the largest data center for which information is available in open sources is the Microsoft facility in Ireland, in which the corporation plans to invest more than $500 million. Experts say the money will be spent on creating the first computing center in Europe to support various Microsoft network services and applications.

Construction of the facility in Dublin, with a total area of 51.09 thousand sq.m and housing tens of thousands of servers, began in August 2007 and should be completed (according to the company's own forecasts) in mid-2009.

Unfortunately, the available information about the project says little, because what matters is not the area but the power consumption. Based on this parameter, we propose the following data center classification.

  • “Home data center” - a data center for an enterprise that needs serious computing power. Power - up to 100 kW, which allows up to 400 servers to be housed.
  • "Commercial Data Center". This class includes operator data centers, the racks in which are rented. Power - up to 1500 kW. Accommodates up to 6500 servers.
  • "Internet Data Center" - a data center for an Internet company. Power - from 1.5 MW, accommodates 6,500 servers or more.

I will take the liberty of suggesting that when building a data center with a capacity of more than 15 MW, “economies of scale” inevitably come into play. An error of 1.5-2 kW in a 40 kW “family” data center will most likely go unnoticed; a megawatt-scale mistake would be fatal to the business.

In addition, one can reasonably assume that in this situation the law of diminishing returns is at work (as a consequence of economies of scale). It manifests itself in the need to combine large areas, enormous electrical power, proximity to main transport routes and the laying of railway tracks (more economically feasible than delivering a huge volume of cargo by road). The cost of developing all this, per rack or per unit, will be seriously above average for the following reasons: firstly, the scarcity of sites with 10 MW or more of supplied power at a single point (such a “diamond” has to be grown artificially); secondly, the need to construct a building or group of buildings large enough to house the data center.

But suppose you have managed to find about 5 MW of power (which is already a great stroke of luck), with two redundant inputs from different substations, feeding a building of regular rectangular shape with a ceiling height of 5 m and a total area of 3.5 thousand sq.m, with no height differences, walls or partitions, and with about 500 sq.m of adjacent territory... Then, of course, it is possible to achieve the minimum cost per rack - and there will be roughly 650 of them.

The figures here are based on a consumption of 5 kW per rack - the de facto standard today: higher per-rack consumption inevitably leads to difficulties with heat removal and, as a result, a serious increase in the cost of the solution, while lower consumption does not bring the expected savings but only increases the rental component and requires developing more floor space than is actually needed (which also hurts the project budget). A quick check of these figures follows.
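As a rough cross-check of the ~650-rack figure, the sketch below takes the 5 MW of available power, subtracts an assumed 10% reserve for auxiliary loads (lighting, office, security and the like), and applies the ~30% cooling overhead and a few percent of losses used in the cost breakdown earlier. The reserve and loss percentages are assumptions, chosen within the ranges quoted in this article.

```python
import math

# How many 5 kW racks a 5 MW site can support once cooling overhead,
# losses and an assumed reserve for auxiliary loads are taken out.
available_kw = 5000.0
kw_per_rack = 5.0
cooling_overhead = 0.30   # ~30% of IT load for heat removal
losses = 0.05             # transmission/conversion losses, within 2-8%
other_loads = 0.10        # assumed reserve for lighting, office, security

it_budget_kw = available_kw * (1 - other_loads) / ((1 + cooling_overhead) * (1 + losses))
racks = math.floor(it_budget_kw / kw_per_rack)

print(f"IT power budget: {it_budget_kw:.0f} kW")
print(f"Racks at 5 kW each: about {racks}")
print(f"Floor area per rack at 3,500 sq.m: {3500 / racks:.1f} sq.m")
```

Under these assumptions the site supports on the order of 650-660 racks at roughly 5 sq.m of floor space each, which agrees with the estimate above.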

But we must not forget that the main thing is the compliance of the data center with the assigned tasks. In each individual case, it is necessary to seek a balance, based on the input data that we have. In other words, you will have to find a compromise between the distance from the main highways and the availability of free electrical power, ceiling height and room area, full redundancy and the project budget.

Where to build?

It is believed that a data center should be located in a separate building, usually without windows, equipped with the most modern video surveillance and access control systems. The building must have two independent electrical inputs (from two different substations). If the data center has several floors, the floors must withstand high loads (from 1,000 kg per sq.m). The interior should be divided into sealed compartments with their own microclimate (temperature of 18-25 degrees Celsius and humidity of 45-60%). Cooling of server equipment should be provided by precision air conditioning systems, and power backup by both uninterruptible power supplies and diesel generator sets, which are usually located next to the building and keep all electrical systems of the data center running in an emergency.

Particular attention should be paid to the automatic fire extinguishing system, which must, on the one hand, exclude false alarms, and on the other, respond to the slightest signs of smoke or the appearance of an open flame. A serious step forward in the field of data center fire safety is the use of nitrogen fire extinguishing systems and the creation of a fire-safe atmosphere.

The network infrastructure must also provide maximum redundancy for all critical nodes.

How to save?

You can start economizing on the quality of power in the data center and end with savings on finishing materials, but building such a “budget” data center looks strange, to say the least. What is the point of investing in a risky project and jeopardizing the core of your business?! In a word, saving requires a very balanced approach.

Nevertheless, there are several very costly budget items that are not only possible, but also necessary to optimize.

1. Telecommunication cabinets versus racks. Installing cabinets in the data center - that is, racks where each has side walls plus front and back doors - makes no sense, for three reasons:

  • the cost may differ by an order of magnitude;
  • the load on the floors is higher on average by 25-30%;
  • cooling capacity is lower (even taking into account the installation of perforated doors).

2. Structured cabling (SCS). Again, there is no point in entangling the entire data center in optical patch cords and buying the most expensive, most powerful switches if you do not intend to install equipment in all racks at once. The average time for a commercial data center to fill up is one and a half to two years - a whole era at the current pace of microelectronics. One way or another the cabling will have to be redone: either you will misjudge the required number of ports, or the lines will be damaged during operation.

Under no circumstances build a single centralized “copper” cross-connect - you will go broke on cable. It is much cheaper and smarter to install a telecommunications rack next to each row and run 1-2 copper patch panels from it to each rack. If a rack then needs to be connected, throwing an optical patch cord to the required row is a matter of minutes. No serious investment is needed at the initial stage, and the necessary scalability is ensured along the way.

3. Power. Yes, you may not believe it, but the most important thing in powering a data center is efficiency. Choose electrical equipment and uninterruptible power supply systems carefully! From 5 to 12% of the cost of data center ownership can be saved by minimizing losses, such as conversion losses in the UPS (2-8%; older generations of UPS have lower efficiency) and losses from smoothing harmonic distortion with a harmonic filter (4-8%). Losses can also be reduced by installing reactive power compensators and by shortening the power cable route. A small cost sketch follows.
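To see how a few percent of conversion losses turn into money, the sketch below compares an older UPS with a newer, more efficient one. The protected load, the tariff and the two efficiency figures (92% and 97%, i.e. 8% and 3% losses, within the 2-8% range mentioned above) are illustrative assumptions.

```python
# Annual cost of UPS conversion losses for an older vs. a newer unit.
# Load, tariff and efficiencies are illustrative assumptions.

it_load_kw = 300.0          # assumed protected IT load
tariff_rub_per_kwh = 4.0    # assumed electricity tariff
hours_per_year = 24 * 365

for label, efficiency in [("older UPS, 92% efficient", 0.92),
                          ("newer UPS, 97% efficient", 0.97)]:
    loss_kw = it_load_kw / efficiency - it_load_kw   # power burned in conversion
    cost_rub = loss_kw * hours_per_year * tariff_rub_per_kwh
    print(f"{label}: {loss_kw:.1f} kW lost, ~{cost_rub:,.0f} RUB per year")
```

With these assumptions the difference comes to several hundred thousand rubles per year for a 300 kW load, so the choice of UPS generation is far from a cosmetic detail.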

Conclusion

What conclusions can be drawn? How to choose from all the variety the solution that suits you? This is certainly a complex and non-trivial question. Let us repeat: in each specific case it is necessary to carefully weigh all the pros and cons, avoiding meaningless compromises - learn to correctly assess your own risks.

On the one hand, when costs must be cut, outsourcing IT services can be one way to save. In that case it is optimal to use commercial data centers, obtaining a full range of telecommunications services without investing in construction. However, for large specialized companies and the banking sector, with the onset of instability and the freezing of commercial data center construction, the question of building their own data center becomes acute...

On the other hand, the crisis phenomena that we have observed over the past year have sadly affected the economy as a whole, but at the same time serve as an accelerator for the development of more successful companies. Rents for premises have dropped significantly. The decline in the hype around electric capacity has made room for more efficient consumers. Equipment manufacturers are ready for unprecedented discounts, and labor prices have fallen by more than a third.

In a word, if you were planning to build your own data center, now, in our opinion, is the time.
