How to choose the right PC?

If you are not particularly versed in how a computer works, this material will explain how to choose the right computer and help you make the right choice when buying a ready-made one. Computers come in different builds for different purposes, so first you need to decide what set of features you need: what you are going to buy the computer for and how you will use it.

How to choose the right computer in 2016

If you need a computer for home use, the set of requirements can still vary. If you plan to actively use the computer for games that demand high performance, the approach to the purchase will differ from the requirements for a computer that will only be used to access the Internet, communicate, or visit informative sites. We will consider the requirements for the various ways of using a computer; knowing what kinds of computers exist will only improve your chances of choosing the most suitable one.

Technology changes at a tremendous pace, and every six months requirements grow along with improvements in computer components. Is it possible to keep up with such progress? It is quite difficult and usually not necessary. There is another way to improve your computer: an upgrade.

Its essence is to replace some obsolete computer parts with more powerful and modern ones that have better technical specifications. Therefore, when choosing a computer, it is also important to take into account the possibility of a future upgrade and pick a configuration that will allow one to be carried out later. The main items in a computer's configuration are the following:

  1. Motherboard;
  2. Processor;
  3. Video card;
  4. RAM;
  5. Hard drive or solid-state drive;
  6. Power supply and case.

These main components are installed in the case of the system unit and have different technical parameters, which are decisive when choosing a computer for the home, the office, or gaming. A complete list of the system unit's configuration can be found in the article on the composition of the system unit. This introductory information will help you not only gain basic knowledge about how a computer is built, but also choose a computer for your home in 2016.

How to choose a home computer

First you need to decide what you need a PC for. If you only need it to communicate via Skype or spend time on VK, Odnoklassniki and other social networks, it is right to choose a system unit with average specifications: additional tasks may come up later, and you will then have some technical headroom. There should always be a reserve, as programs improve and grow heavier, demanding ever more from a personal computer. Such is the policy of software manufacturers in the Microsoft ecosystem, on which its users depend.

If you plan to install a TV tuner in your computer and watch TV programs with it, or connect it to a widescreen TV to watch movies, then you need a good video card in the system unit, capable of handling high resolutions.

Since the video card depends on the motherboard it sits on, their capabilities must match: the motherboard should not limit the capabilities of the video card. Accordingly, the motherboard is chosen taking into account the technical requirements of all the components that will be connected to it, so that each of them can work at its full potential.

When choosing a home computer, if you know you will use a program such as Photoshop, keep in mind that it requires a fair amount of RAM to work correctly. Accordingly, select this component with that in mind.

If you want to save money on the build, that is quite possible, since RAM is the easiest part of a PC to upgrade: you can always swap the modules for faster ones or add a second module to the first (modern motherboards accept up to four RAM modules). Two to four gigabytes of RAM may be enough for these needs.

A 400-watt power supply will be enough; it will fully cover the basic needs of a home computer, with some margin to spare. A dual-core processor is likewise sufficient for home use.

Choosing a computer for home use is not difficult at all; you just need to spend some time learning the basics of what it consists of and match that against your requirements. Undoubtedly, modern USB 3.0 ports will only be a plus, both for higher data transfer speeds and for convenience. It all comes down to your budget and whether you can afford a few extras or, on the contrary, urgently need them. This is your personal choice, determined by your means.

How to choose a PC for gaming

The optimal hardware configuration (as computer components are called in slang) for gaming has a number of peculiarities. Since all advanced games demand high performance from a personal computer, you will not be able to save much. That dependence on performance dictates buying parts that will let you play games with all sorts of requirements on your computer.

The processor, video card and RAM will demand your attention when choosing a good gaming computer. Otherwise a game may not launch at all, or you will have to play with heavy stuttering in the graphics and in the computer itself, which simply will not keep up with what the game requires. Of course, no one wants to buy something that will not work properly and will not meet the tasks set for it. So be prepared to spend money on the components discussed above.

However, these components depend directly on the motherboard they are paired with. You can save on the motherboard; the main thing is that its specifications leave some headroom for upgrading the overall configuration or individual components later, should the need arise. It will only be a plus if the video card is equipped with several video outputs, since, as practice shows, they are sure to come in handy.

It is best to choose a quad-core processor, since some games require high performance. When choosing a processor, understand that both AMD and Intel have their advantages and disadvantages. In a gaming video card, the memory type matters, as it affects the card's speed: modern video cards use GDDR5 memory, and the faster the memory, the better it handles game graphics.

There should be enough RAM for gaming; at least four gigabytes is best for the computer to run games correctly. If two memory modules are installed, dual-channel RAM mode is activated automatically. Ideally the modules should match as closely as possible in manufacturer and frequency. For gaming, a power supply of 550 watts or more is suitable.

The optical drive for a computer of this kind is chosen as DVD-R or DVD-RW. The "R" designation means the drive can record to write-once discs, while "RW" means it can also rewrite erasable discs. You can save a little on this device if there is no particular need for it.

How do you choose the right computer when it is sold pre-assembled? Study all the specifications of the components it consists of; they are always given in the product description.

How to choose a computer for the office

To choose a computer for work, you need to determine the requirements of the programs that will be used on it. If these are accounting programs, database storage, or reporting documents, you can buy a PC with minimal specifications, since such programs do not demand much from a computer.

However, work varies, and some office tasks may require entire servers to process and store information or to run heavy network programs. In any case, a personal computer for the office will cost much less than a gaming one. There is no need to plan for a high-performance graphics card in the build, just as you do not need a high-performance processor, when the work is with text documents.

So how do you choose a computer when only office programs will be used and nothing more? Take the minimum set of technical characteristics that still covers the needs of the programs used in the work.

When choosing a video card for an office computer, you can limit yourself to two to four gigabytes of memory. The video card can be built into the motherboard or be a separate board. The main thing is that its performance is sufficient for the resolution of the purchased monitor, with no distortion of the video signal displayed on the screen. Therefore, when purchasing a system unit for office work, it is best to buy the monitor at the same time, fully checking the operation of the whole system and of the video card in particular.

The monitor is also selected based on the work it will serve. Accounting programs do not require a high screen resolution, so budget models can be preferred. If any design projects will be carried out, you need a monitor with a high screen resolution and correct color reproduction.

A single hard drive is enough if only lightweight accounting documents are stored on it. Its capacity can be 250 gigabytes: even with the most modern operating system installed, the remaining space is quite enough for text documents and the most modern office software. If heavier programs are needed for work and the saved files will be significantly larger, then it is right to buy a 320 or 500 gigabyte hard drive; that amount of storage will be enough for work.

When you come to the store with friends or acquaintances to choose a build matching your parameters, it is fair to say that no ready-made build will meet your needs exactly: you can only pick an approximate one, or ask the store to assemble a computer from scratch if it provides such a service. The second option may come closer to the specifications of the planned build. There is a third option: buy all the components yourself and assemble the computer on your own. This option suits technical gourmets or, conversely, lovers of exploring the unknown.
Whatever system unit you choose, with whatever filling, do not forget to purchase the input devices that are mandatory for a computer, a mouse and a keyboard; as for audio speakers or a printer, that is up to you.

Knowing these parameters, you can choose a system unit on your own. In the store, the price tag of a system unit always lists the components inside it. With the technical specifications you need in hand, you can pick a personal computer for the office yourself, even without consulting the seller.

However, if the seller offers help and expresses a desire to advise you on which computer best suits your needs, you will already be prepared and will have a definite idea of what a PC consists of and which set of parts you need to solve your known tasks.

In order not to overpay, it is better to be at least a minimally informed buyer, and better still to know exactly how to choose the right computer.

Source: http://procomputer.su/problema-vybora/109-kak-vybrat-kompyuter-pravilno

Bad HDD Blocks: Causes and Types

Source: http://www.3dnews.ru/storage/badblock

So, a bad block is usually understood as a specific section of the disk with which normal operation is not guaranteed or is impossible at all. Such areas may contain various information: user data, or service information (also called servo, evidently from the Latin servire or the English serve). In the latter case the consequences can range very widely in severity. The best option, of course, is for such an area to contain nothing at all (although you will most likely never have to deal with bads in such areas). These sectors can appear for various reasons; in one case they can be restored, in another they cannot; in one case certain methods of treatment are needed, in another, reassignment. But first, let us dispel a few fairly common myths.

Myth one: there are no bads on modern hard drives. Not true: they do occur. By and large, the technology is the same as years ago, only improved and refined, but still not ideal (and an ideal one is unlikely ever to be created on the basis of magnetic recording).

Myth two: for hard drives equipped with SMART, this is not relevant (read: there can be no bads). Also not so: it is just as relevant as for hard drives without SMART (if any still exist). The concept of a bad sector is close and dear to SMART, as should be clear from the relevant publications on this technology (links at the end). The difference is that SMART has taken over most of the concerns about such sectors that were previously left to the user. It can easily happen that the user knows nothing and never finds out about the bads on his drive, unless of course the situation is pathological. I have heard from users that sellers sometimes use this to justify refusing a warranty exchange of hard drives on which bads have surfaced. The seller, of course, is wrong: SMART is not omnipotent, and no one has abolished bads yet.

To understand bads and their varieties, let us delve just a little into how information is stored on a hard drive. Two points need clarifying.

1. The unit with which the hard drive operates at the low level is the sector. The physical space on the disk corresponding to a sector holds not only the data itself, but also service information: identification fields and a checksum for them, the data and a control code for it, an error recovery code, and so on (this is not standardized and depends on the manufacturer and model). By the presence of identification fields, two types of recording are distinguished: with and without identifier fields. The former is older and has lost ground to the latter; it will become clear later why I note this. It is also important that there are means of error control (which, as we will see, can themselves become a source of errors).

2. When working with old hard drives, you had to enter their physical parameters, indicated on the label, into the BIOS, and to address a data block uniquely you had to specify the cylinder number, the sector number on the track, and the head number. Such work with the disk depended entirely on its physical parameters. This was inconvenient and tied the developers' hands in many matters. A way out was needed and was found in address translation: it was decided to address data on the drive with a single parameter and to assign the job of determining the corresponding actual physical address to the hard disk controller. This gave tremendous freedom and compatibility.
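To make the single-parameter addressing concrete, here is a minimal Python sketch of the classic CHS-to-LBA formula that such translation rests on; the geometry figures are arbitrary examples, not taken from any particular drive:

    def chs_to_lba(cylinder: int, head: int, sector: int,
                   heads_per_cylinder: int, sectors_per_track: int) -> int:
        # Classic conversion; sectors on a track are numbered from 1.
        return ((cylinder * heads_per_cylinder + head) * sectors_per_track
                + (sector - 1))

    # A legacy geometry of 16 heads and 63 sectors per track:
    print(chs_to_lba(cylinder=2, head=4, sector=1,
                     heads_per_cylinder=16, sectors_per_track=63))  # -> 2268

With one linear block number, the host no longer cares about cylinders and heads; the controller does the reverse calculation internally.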

The actual physical layout of the drive became unimportant; it only matters that the number of logical blocks reported to the BIOS does not exceed the actual number. The creation of such a translator is of great importance for the question of bad sectors too, and here is why. The handling of bad sectors on old hard drives was far from perfect: it was done by means of the file system. The disk shipped with a sticker listing the addresses of the defective blocks found by the manufacturer. The user entered this data into the FAT manually, thus excluding those blocks from use by the operating system.

Platter manufacturing technology was imperfect then and is imperfect now. There is no way to create an ideal surface without a single bad block, contrary to the common belief that a hard drive leaves the factory without them. As disk capacities grew, so did the number of bad sectors present on leaving the factory, and clearly the procedure of registering them in the FAT by hand could only scale so far; a way was needed to mark bads without knowing which file system would be used. The invention of the translator solved these problems. A special protected area was allocated on the hard drive to store the translator, which establishes a correspondence between each logical block of a continuous chain and a real physical address.

If a bad block was found on the surface, it was simply skipped, and that logical block was assigned the address of the next available physical block. The translator is read from the disk when it powers on. It was (and is) created at the factory, and it is precisely for this reason, not because the manufacturer uses some kind of super technology, that new disks appear to contain no bad blocks. The physical parameters were hidden (they varied too much anyway, since the firms had a free hand with their own low-level formats, and the user did not care), defects were marked at the factory, and versatility increased. As good as a fairy tale.
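As a rough illustration of what the translator does, here is a toy Python model; the names and numbers are invented for the example, and real translators are vendor-specific internal structures:

    def build_translator(total_physical: int, factory_defects: set) -> list:
        # A continuous chain of logical blocks that simply skips
        # the physical blocks marked defective at the factory.
        return [p for p in range(total_physical) if p not in factory_defects]

    translator = build_translator(total_physical=10, factory_defects={3, 7})
    print(translator[3])  # -> 4: logical block 3 lands on physical block 4

The operating system sees an unbroken run of logical blocks; the holes exist only on the physical side.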

Now back to the bads and their varieties. Depending on the nature of their origin, they can be divided into two large groups: logical and physical.

Physical and logical defects

Surface defects can be associated with the gradual wear of the magnetic coating of the platters. They can be caused by the tiniest dust particles that seep through the filter: accelerated inside the drive to colossal speeds, they carry enough kinetic energy to damage the surface of the platters (most likely, though, they will roll off the platter under centrifugal force and be caught by the internal filter, although they may have time to do some damage first). They can result from mechanical damage on impact, when small particles are knocked out of the surface and then, in turn, knock out other particles, so that the process grows like an avalanche (such particles also roll off the platters under centrifugal force, but much more slowly, since they are heavier and held by magnetic forces; this also threatens collisions with the head hovering at a very low height, causing it to heat up and degrade: the signal becomes distorted and reading errors follow). I have also heard (I have no statistics of my own) that smoking at the computer can do the same thing, since tobacco tar can penetrate the hard drive's air filter (on drives that have one), making the heads stick to the platters (damaging both surface and heads), or simply settling on the surface and degrading performance, and so on.

Such sectors are unsuitable for use and should be excluded from it. Restoring them is impossible either at home or in a service center; it is already good if the information can at least be recovered from them. The speed of this kind of surface degradation varies. If the number of bads does not grow, or grows very slowly, there is no reason for serious fear (although a backup is still worth making); if the growth is fast, the disk will have to be replaced, and in a hurry at that. With this type of bads, blocks can be reassigned to the reserve surface, which makes sense in the absence of progression. But that is a topic for later, and it concerns the data area. As already noted, service information is also stored on the platters, and in use it too can be destroyed. That can be far more painful than ordinary bads in user data.

The fact is that servo information is actively used during operation: servo marks stabilize the rotation speed of the platters and hold the head over a given cylinder regardless of external influences. Minor damage to servo information can go unnoticed. Severe damage to the servo format can make part of the disk, or the entire disk, inaccessible. Since servo information is used by the drive's firmware and is critical to normal operation, things are much more complicated with it. Some hard drives allow failing servo tracks to be disabled. Restoring them is possible only at the factory, using special, expensive and complex equipment (estimate the cost of such a repair for an out-of-warranty hard drive and you will see why it is fair to call this type of bads incorrigible).

Physical bads also include bad sectors caused by malfunctions of the electronics or mechanics of the drive: broken heads, serious mechanical damage from an impact such as a jammed actuator coil or platters, or displaced platters. The remedies here vary and depend on the specific situation. If, for example, a head has failed (such bads appear because an attempt is made to access a surface that cannot be accessed, which does not at all mean that anything is wrong with the surface itself), then the head can often be disabled, or replaced in a specialized service center, though the cost of that operation makes you think seriously about whether it is worthwhile (in most cases the answer is no), unless, of course, we are talking about recovering extremely valuable information (but that is another conversation).

In general, this type of damage tends to be catastrophic. That is, as we can see, physical bads are not treated; only a certain "mitigation" of their presence is possible. With logical bad sectors the situation is simpler: some of them are curable, and in most cases they arise from recording errors. The following categories can be distinguished:

1. The simplest case: file system errors. The sector is marked in the FAT as bad, but in fact it is not. Some viruses once used this technique when they needed a hidden spot on a small hard drive that could not be reached by simple means. The technique is no longer relevant, since it is not hard to hide a couple of megabytes (or even a couple of tens of megabytes) in the depths of Windows. Besides, someone could simply have played a joke on an unlucky user (there were such programs). In any case, the file system is a fragile thing, and this is treated very easily and with no consequences at all.

2. Unrecoverable logical bads, typical of old hard drives that record with identifier fields. If you have such a disk, you may well encounter them. They arise from an incorrectly recorded physical address for the sector, a checksum error in it, and so on; as a result, the sector cannot be addressed. Strictly speaking they are recoverable, but only at the factory. Since, as I have already said, recording without identifier fields is now used, this variety can be considered irrelevant.

3. Correctable logical bads, a type of bad block that is not so rare, especially on some kinds of drives. They originate mainly from disk write errors. Reading such a sector fails because the ECC code in it does not match the data, and writing is usually impossible too, since the space is pre-checked before writing, and because problems have already been found there, the write is rejected. That is, the block cannot be used even though the surface it occupies is physically in perfect order. Defects of this kind can sometimes be caused by errors in the drive's firmware, or provoked by software or hardware (for example, a power failure or fluctuation, or the head flying at an unacceptable height during a write). But if the contents of the sector and its ECC code can be brought back into agreement, such bads disappear without a trace. Moreover, the procedure is not complicated, and the means for performing it are widely available and, on the whole, harmless.
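Since a successful write is what forces the drive to store fresh data together with a matching ECC, one common home remedy is simply overwriting the offending sector, classically with dd or a vendor utility. Purely as an illustration of the idea, here is a rough Python sketch; the device path and sector number are invented, root privileges are needed, and it destroys the sector's contents, so it is only for a disk whose data you have already written off:

    import os

    SECTOR_SIZE = 512
    device, bad_lba = "/dev/sdX", 123456  # hypothetical values

    fd = os.open(device, os.O_WRONLY)
    try:
        os.lseek(fd, bad_lba * SECTOR_SIZE, os.SEEK_SET)
        # If the write goes through, the drive records fresh data with
        # a matching ECC, and the "bad" block passes without a trace.
        os.write(fd, b"\x00" * SECTOR_SIZE)
        os.fsync(fd)
    finally:
        os.close(fd)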

4. Bad blocks of this type owe their appearance to the peculiarities of production technology: no two devices are absolutely identical, and some of their parameters will certainly differ. When hard drives are prepared at the factory, a set of parameters is determined for each one that ensures the best functioning of that particular unit, the so-called adaptives. These parameters are saved, and if they somehow mysteriously become damaged, the result may be a completely inoperable disk, unstable operation, or a large number of bad sectors appearing and disappearing in one place or another. At home nothing can be done about this, but everything can be reconfigured at the factory or in a service center.

As you can see, only two types of logical bad blocks can actually be treated at home. The others can, if necessary, be replaced from the reserve, but not cured, and with the last type nothing can be done at home at all. How and what to do in the first two cases, we will discuss next time.

SMART – internal HDD condition assessment technology

Source: http://www.3dnews.ru/storage/smart/

Introduction

Today I would like to talk a little more about the SMART technology mentioned in the previous article on the criteria for choosing a hard drive, and also to examine the question of bad sectors appearing when the surface is checked with special programs, and of exhausting the reserve surface used for their reassignment: a question raised on the forum after the last article.

To begin with, as always, a brief historical digression. The reliability of a hard drive (and of any storage device in general) has always been of the utmost importance. And the point is not at all its cost, but the value of the information it takes with it to the other world when it dies, and, for business users, the lost profit associated with downtime when hard drives fail, even if the information survives. Naturally, you want to know about such unpleasant moments in advance. Even ordinary household-level reasoning suggests that monitoring the state of a device in operation can hint at them; it only remains to implement this observation in the hard drive itself.

The engineers of the blue giant (IBM, that is) were the first to take up this task. In 1995 they proposed a technology that monitors several critical parameters of the drive and attempts to predict its failure based on the collected data: Predictive Failure Analysis (PFA). The idea was picked up by Compaq, which later created its own technology, IntelliSafe; Seagate, Quantum and Conner also participated in its development. That technology likewise monitored a number of disk characteristics, compared them with acceptable values, and reported to the host system if danger appeared. This was a huge step forward, if not in the reliability of hard drives themselves, then at least in reducing the risk of losing the information on them. The first attempts were successful and showed the need to develop the technology further. S.M.A.R.T. (Self-Monitoring, Analysis and Reporting Technology), based on IntelliSafe and PFA, then emerged from an association of all the major hard drive manufacturers (incidentally, PFA still exists as a set of technologies for monitoring and analyzing various subsystems of IBM servers, including the disk subsystem, whose monitoring is based precisely on SMART).

So, SMART is a technology for internal evaluation of the state of a disk and a mechanism for predicting a possible hard disk failure. It is important to note that the technology does not, in principle, solve emerging problems; it can only warn about a problem that has already arisen or is expected in the near future.

At the same time, it must be said that the technology cannot predict absolutely all possible problems, and that is logical: no technology can predict the failure of the electronics from a power surge, or damage to heads and surfaces from an impact. Only those problems are predictable that are associated with the gradual deterioration of some characteristic, the uniform degradation of some component.

Stages of technology development

SMART technology has gone through three stages in its development. The first generation monitored a small number of parameters and provided for no independent actions by the drive; everything was launched only by commands over the interface. There is no specification describing the standard completely, and therefore there was not and is not a clear definition of which parameters should be monitored. Their choice, and the permissible level of their degradation, was left entirely to the hard drive manufacturers (which is natural, since the manufacturer knows best what exactly should be monitored in its drives, all hard drives being so different). The software, written for this reason mostly by third-party companies, was not universal and could report an impending failure in error (the confusion arose because different manufacturers stored different parameters under the same identifier). There were many complaints that pre-failure states were detected extremely rarely (human nature: everyone wants everything at once; somehow it never occurred to anyone to complain about sudden disk failures before the introduction of SMART). The situation was aggravated by the fact that in most cases the minimum requirements for SMART to function were not met (more on this later). Statistics show that fewer than 20% of failures were predicted. The technology at this stage was far from perfect, but it was a revolutionary step forward.

Not much is known about the second stage of SMART development, SMART II. Basically the same problems were observed as with the first. The innovations were background surface checking, performed automatically by the disk during idle time, and error logging; the list of monitored parameters was expanded (again depending on the model and manufacturer). Statistics show that the share of predicted failures reached 50%.

The modern stage is represented by SMART III technology. We will dwell on it in more detail and try to understand in general terms how it works, what it does and why it is needed.

We already know that SMART monitors the drive's main characteristics, which are called attributes. Which parameters to monitor is determined by the manufacturer. Each attribute has a value, called Value, usually ranging from 0 to 100 (although it can go up to 200 or 255); it reflects the reliability of that particular attribute relative to some reference value determined by the manufacturer. A high Value indicates no change in the parameter or, depending on the attribute, its slow deterioration; a low Value indicates rapid degradation or a possible imminent failure. In other words, the higher the Value, the better. Some monitoring programs also display Raw or Raw Value: the value of the attribute in the drive's internal format (which again differs between models and manufacturers), as it is stored in the drive. For an ordinary user it is not very informative; the Value calculated from it is of greater interest. For each attribute the manufacturer also determines the minimum value at which failure-free operation is guaranteed: the Threshold. If an attribute drops below its Threshold, a malfunction or complete failure is very likely. It remains only to add that attributes are either critical or non-critical. A critical parameter falling beyond its Threshold effectively means failure; a non-critical one falling beyond it indicates a problem, but the disk can keep working (although perhaps with some deterioration in characteristics such as performance).
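A toy model of this Value/Threshold logic, with invented field names (real attribute structures are vendor-specific), might look like this in Python:

    from dataclasses import dataclass

    @dataclass
    class SmartAttribute:
        name: str
        value: int       # normalized Value, e.g. 0..100 (up to 200/255)
        threshold: int   # vendor-defined minimum for failure-free work
        critical: bool   # critical attributes mean failure below Threshold

    def assess(attr: SmartAttribute) -> str:
        if attr.value <= attr.threshold:
            return "failure likely" if attr.critical else "degraded, still works"
        return "ok"

    print(assess(SmartAttribute("Spin Up Time", value=97, threshold=21,
                                critical=True)))  # -> ok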

The most frequently monitored critical attributes are:

Raw Read Error Rate – the rate of errors when reading data from the disk, the origin of which lies in the disk hardware.

Spin Up Time – the time it takes the platter pack to spin up from rest to operating speed. When the normalized Value is calculated, the actual time is compared with a reference value set at the factory. A non-maximum but non-deteriorating value, with Spin Up Retry Count at its maximum Value (Raw equal to 0), does not mean anything bad: the difference from the reference time can have any number of causes, a weak power supply for example.

Spin Up Retry Count – the number of retries at spinning the platters up to operating speed after a failed first attempt. A non-zero Raw value (and, correspondingly, a non-maximum Value) indicates problems in the mechanical part of the drive.

Seek Error Rate – the error rate when positioning the head assembly. A high Raw value indicates problems, which may be damaged servo marks, excessive thermal expansion of the platters, mechanical trouble in the positioning unit, and so on. A consistently high Value indicates that everything is fine.

Reallocated Sector Count – the number of sector remapping operations. SMART in modern drives can analyze a sector for stability on the fly and, if it is judged a failure, reassign it. We will discuss this in more detail below.

Of the non-critical, informational attributes, the following are usually monitored:
Start/Stop Count – the total number of spindle starts/stops. The disk motor is only guaranteed to endure a certain number of on/off cycles, and this number is chosen as the Threshold. The first disk models with a 7200 rpm spindle had unreliable motors that could endure only a small number of cycles and failed quickly.
Power On Hours – the number of hours spent powered on, with the rated mean time between failures (MTBF) chosen as the threshold. Given the usual, rather implausible MTBF figures, the parameter is unlikely ever to reach the critical threshold, and even then the disk will not necessarily fail.
Drive Power Cycle Count – the number of complete disk on/off cycles. Together with the previous attribute, it can be used to estimate, for example, how much a disk was used before purchase.
Temperature – simple and clear: the readings of the built-in temperature sensor are stored here. Temperature has a huge impact on disk life (even within acceptable limits).
Current Pending Sector Count – the number of sectors that are candidates for replacement. They have not yet been declared bad, but reading them differs from reading a stable sector; these are the so-called suspicious or unstable sectors.
Uncorrectable Sector Count – the number of uncorrected errors when accessing a sector. Possible causes are mechanical failures or surface damage.
UDMA CRC Error Rate – the number of errors arising in data transmission over the external interface. These can be caused by poor-quality cables or abnormal operating modes.
Write Error Rate – the rate of errors when writing to the disk. It can serve as an indicator of the quality of the surface and of the drive's mechanics.
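On a present-day system, these attributes can be read with smartctl from the smartmontools package (a modern utility, mentioned here only for illustration; it is not something this article's tools relied on). Below is a small Python sketch that pulls the attribute table and flags anything at or below its Threshold; the device path is an example, and the parsing assumes smartctl's usual column layout:

    import subprocess

    out = subprocess.run(["smartctl", "-A", "/dev/sda"],
                         capture_output=True, text=True).stdout

    for line in out.splitlines():
        parts = line.split()
        # Attribute rows start with a numeric ID and have >= 10 columns:
        # ID NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW
        if len(parts) >= 10 and parts[0].isdigit():
            name, value, thresh = parts[1], int(parts[3]), int(parts[5])
            mark = "!!" if value <= thresh else "  "
            print(f"{mark} {name:<28} Value={value:3d} Threshold={thresh:3d}")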

All errors and parameter changes are recorded in the SMART logs; this capability appeared as early as SMART II. All the parameters of the logs, their purpose, size and number, are determined by the manufacturer of the hard drive. For the moment we are only interested in the fact that they exist, without the details. The information stored in the logs is used to analyze the drive's state and make forecasts.

Without going into details, SMART's work is simple: during operation, all errors and suspicious events are tracked and reflected in the corresponding attributes. In addition, starting with SMART II, many drives have self-diagnostic functions. SMART tests can be launched in two modes: off-line, in which the test actually runs in the background, since the drive stands ready to accept and execute a command at any moment, and exclusive, in which the arrival of a command terminates the test.

Three types of self-diagnostic tests are documented: background data collection (Off-line collection), a shortened test (Short Self-test), and an extended test (Extended Self-test). The last two can run in both background and exclusive modes. The set of checks included in them is not standardized.

Their execution can take anywhere from seconds to minutes and hours. If you are not accessing the disk yet it makes sounds as if under load, it is most likely busy with self-examination. All data collected in such tests is likewise stored in the logs and attributes.
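For reference, on today's drives these same documented tests can also be launched by hand, for example with smartctl (again a modern utility used purely as an illustration; the device path is an example):

    import subprocess

    # Start the shortened self-test; the drive keeps serving commands.
    subprocess.run(["smartctl", "-t", "short", "/dev/sda"], check=True)

    # ...after the couple of minutes smartctl announces, read the log:
    subprocess.run(["smartctl", "-l", "selftest", "/dev/sda"], check=True)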

Oh those bad sectors…

Now back to the question of bad sectors, which started all this. SMART III has a mechanism for reassigning bad sectors transparently to the user. It works quite simply: when a sector reads unstably, or a read error occurs, SMART enters the sector into the list of unstable ones and increments their counter (Current Pending Sector Count). If on a repeated access the sector reads without problems, it is thrown out of that list. If not, then, given the opportunity, in the absence of accesses to the disk, the drive starts an independent check of the surface, of suspicious sectors first of all. If a sector is judged bad, it is remapped to a sector from the reserve surface (and the Reallocated Sector Count grows accordingly). Such background remapping means that on modern hard drives, bad sectors are almost never visible when the surface is checked with service programs. At the same time, with a large number of bad sectors, remapping cannot go on indefinitely. The first limit is obvious: the size of the reserve surface (this is the case I had in mind). The second is less so: modern hard drives keep two defect lists, the P-list (Primary, factory) and the G-list (Growth, formed during operation), and with a large number of reassignments it may turn out that there is no room left in the G-list to record a new one. This situation can be recognized by a high count of remapped sectors in SMART. Even then all is not lost, but that is beyond the scope of this article.
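Schematically, the remapping logic just described can be modeled like this (a toy Python sketch; the real algorithm lives in the drive's firmware and is vendor-specific):

    def on_read(sector, read_ok, pending):
        if not read_ok:
            pending.add(sector)        # Current Pending Sector Count grows
        else:
            pending.discard(sector)    # stable again -> no longer suspect

    def background_scan(pending, spare_pool, g_list, verify):
        # Runs when the drive is idle; checks suspicious sectors first.
        for sector in list(pending):
            if not verify(sector):
                if not spare_pool:
                    return              # reserve surface exhausted
                # Record the remap; Reallocated Sector Count grows.
                g_list[sector] = spare_pool.pop()
                pending.discard(sector)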

So, using SMART data, without even taking the disk to a workshop, you can say fairly accurately what is happening to it. There are various add-on technologies for SMART that allow you to determine the state of the disk even more precisely, and the cause of a failure almost reliably. We will talk about them in a separate article.

You need to know that buying a drive with SMART is not enough to stay aware of all the problems occurring with it. The disk can monitor its condition without outside help, but it cannot warn you by itself when danger approaches: something is needed that will issue a warning based on the SMART data.
One option is the BIOS, which, with the corresponding option enabled, checks the status of SMART drives at boot. And if you want to monitor the state of the disk constantly, you need some kind of monitoring program; then you can see the information in a detailed and convenient form.
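As a stand-in for such a monitoring program, here is a minimal Python loop polling the overall SMART verdict via smartctl; the device path, the polling interval and the "PASSED" string it looks for are assumptions about a typical modern setup:

    import subprocess, time

    while True:
        res = subprocess.run(["smartctl", "-H", "/dev/sda"],
                             capture_output=True, text=True)
        if "PASSED" not in res.stdout:
            print("WARNING: SMART health check did not pass!")
        time.sleep(3600)  # once an hour is plenty for a home machine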

[Screenshot: SmartMonitor from HDD Speed running under DOS]
[Screenshot: SIGuardian running under Windows]

We will also talk about these programs in a separate article. And this is what I meant when I said that, at first, the necessary requirements for operating hard drives with SMART were not met.