SAS controllers and expanders: fast and agile SAS RAID controllers from Adaptec

Today's file server or web server cannot do without a RAID array: only this mode of operation provides the required throughput and speed from the storage subsystem. Until recently, the only hard drives suitable for such work were SCSI drives with spindle speeds of 10-15 thousand rpm, and a separate SCSI controller was required to run them. Data transfer rates over SCSI reached 320 MB/s, but SCSI is an ordinary parallel interface, with all the shortcomings that entails.

Quite recently a new disk interface appeared, called SAS (Serial Attached SCSI). Today, many companies already have controllers for this interface in their product lines, with support for all RAID levels. In this mini-review we will look at two representatives of the new family of SAS controllers from Adaptec: the 8-port ASR-4800SAS and the 4+4-port ASR-48300 12C.

Introduction to SAS

What kind of interface is SAS? In essence, SAS is a hybrid of SATA and SCSI that combines the advantages of both. SATA is a serial interface with independent read and write channels, and each SATA device is connected to its own channel. SCSI has a very efficient and reliable enterprise data transfer protocol, but its drawback is the parallel interface with a bus shared by multiple devices. SAS is thus free of the disadvantages of SCSI, has the advantages of SATA, and provides speeds of up to 300 MB/s per channel. The diagram below gives a rough idea of how SCSI and SAS connection topologies differ.

The full-duplex nature of the interface also reduces latency, since there is no switching between the read and write channels.

An interesting and useful feature of Serial Attached SCSI is that the interface supports both SAS and SATA drives, and both types can be connected to one controller at the same time. SAS drives, however, cannot be connected to a SATA controller: first, they require special SCSI commands (the Serial SCSI Protocol, SSP), and second, they are physically incompatible with the SATA connector. Each SAS disk is connected to its own port, yet it is possible to attach more disks than the controller has ports - this capability is provided by SAS expanders.

The fundamental difference between a SAS drive connector and a SATA one is the additional data port: each Serial Attached SCSI drive has two SAS ports, each with its own ID. The technology thus provides redundant paths to the drive, which increases reliability.

SAS cables differ slightly from SATA ones; the necessary cabling is usually included with a SAS controller. As with SCSI, drives of the new standard can be connected not only inside the server case but also outside it, for which special cables and hardware are provided. Hot-swap drives are connected through special boards - backplanes - which carry all the connectors and ports needed to attach drives and controllers.

As a rule, the backplane is housed in a special enclosure with sled-mounted disks; such an enclosure holds the RAID array and provides its cooling. If one or more disks fail, the faulty HDD can be replaced quickly, and the replacement does not interrupt the array's operation: just swap the drive and the array is fully operational again.

Adaptec SAS Adapters

Adaptec has presented two rather interesting RAID controller models for our consideration. The first is a budget-class device for building RAID in inexpensive entry-level servers: the eight-port (four internal plus four external) ASR-48300 12C. The second, the ASR-4800SAS, is far more advanced and designed for more serious tasks, with eight internal SAS channels on board. Let's take a closer look at each of them, starting with the simpler and cheaper model.

Adaptec ASR-48300 12C

The ASR-48300 12C controller is designed for building small RAID arrays of levels 0, 1 and 10 - that is, the main types of disk arrays can be built with it. The model ships in an ordinary cardboard box decorated in blue and black; the front of the package shows a stylized controller flying out of a computer, meant to evoke thoughts of how fast a machine with this device inside will be.

The delivery set is minimal but includes everything needed to get started with the controller. The kit contains:

  • ASR-48300 12C controller
  • Low-profile bracket
  • Storage Manager software disc
  • Brief manual
  • Connecting cable, SFF8484 to 4x SFF8482 with power connectors, 0.5 m

The controller is designed for the PCI-X 133 MHz bus, which is widespread in server platforms. The adapter provides eight SAS ports, but only four of them are implemented as an internal SFF8484 connector for drives inside the case; the remaining four channels are routed outside through an SFF8470 connector, so some of the disks must be connected externally - for example, an external box with four disks inside.

With an expander, the controller can work with up to 128 disks in an array. In addition, the controller is capable of working in a 64-bit environment and supports the corresponding commands. The card can be installed in a low-profile 2U server if the included low-profile bracket is fitted. The general characteristics of the board are as follows.

Advantages

Cost-effective Serial Attached SCSI controller with Adaptec HostRAID™ technology for high-performance storage of critical data.

Client needs

Ideal for supporting entry-level, mid-range and workgroup server applications that require high-performance storage and reliable protection - for example, backup, web content, e-mail, databases and file sharing.

System environment - Department and workgroup servers

System bus interface type - PCI-X 64 bit/133 MHz, PCI 33/66

External Connections – One x 4 Infiniband/Serial Attached SCSI (SFF8470)

Internal Connections – One 32 pin x 4 Serial Attached SCSI (SFF8484)

System requirements - IA-32, AMD-32, EM64T and AMD-64 servers

32/64-bit PCI 2.2 or 32/64-bit PCI-X 133 connector

Warranty - 3 years

RAID levels—Adaptec HostRAID 0, 1, and 10

Key RAID Features

  • Boot array support
  • Automatic recovery
  • Management with Adaptec Storage Manager software
  • Background initialization

Board dimensions - 6.35cm x 17.78cm (including external connector)

Operating temperature - 0° to 50° C

Power dissipation - 4 W

Mean Time Between Failures (MTBF) - 1,692,573 hours at 40 °C

Adaptec ASR-4800SAS

The number-4800 adapter is more advanced functionally and is positioned for faster servers and workstations. It supports almost every kind of RAID array: everything available on the junior model, plus RAID 5, 50 and JBOD, as well as the Adaptec Advanced Data Protection Suite with RAID 1E, 5EE, 6, 60 and Copyback Hot Spare, and an optional Snapshot Backup feature - for tower servers and high-density rack-mount servers.

The model comes in a package similar to the junior model's, with the same "aviation"-style design.

The kit contains almost the same items as the junior card:

  • ASR-4800SAS controller
  • Full-length bracket
  • Driver disc and complete manual
  • Storage Manager software disc
  • Brief manual
  • Two cables, SFF8484 to 4x SFF8482 with power connectors, 1 m each

The controller supports the PCI-X 133 MHz bus; there is also a functionally similar model, the 4805, which uses a PCI-E x8 bus instead. The adapter provides the same eight SAS ports, but all eight are implemented internally, so the board carries two SFF8484 connectors (one per bundled cable). There is also an external SFF8470 connector for four channels; when it is used, one of the internal connectors is disabled.

As with the junior device, the number of disks can be expanded up to 128 using expanders. The main difference between the ASR-4800SAS and the ASR-48300 12C, however, is the former's 128 MB of DDR2 ECC memory used as a cache, which speeds up work with the disk array and optimizes handling of small files. An optional battery module is available to retain the cache contents when power is removed. The general characteristics of the board are as follows.

Benefits - High-performance storage connectivity and data protection for servers and workstations

Customer Needs - Ideal for supporting server and workgroup applications that require high levels of read/write performance at all times, such as streaming video applications, web content, video on demand, fixed content, and reference data storage.

  • System environment - Departmental and workgroup servers and workstations
  • System bus interface type - Host interface PCI-X 64-bit/133 MHz
  • External Connections – SAS connector one x4
  • Internal Connections - Two x4 SAS connectors
  • Data transfer speed - Up to 3 Gbit/s per port
  • System requirements - Intel or AMD architecture with a free 64-bit 3.3 V PCI-X slot
  • Supports EM64T and AMD64 architectures
  • Warranty - 3 years
  • Standard RAID levels - RAID 0, 1, 10, 5, 50
  • Standard RAID Features - Hot spare, RAID level migration, online capacity expansion, optimized disk utilization, S.M.A.R.T. and SNMP support, plus features from the Adaptec Advanced Data Protection Suite, including:
  1. Hot Space (RAID 5EE)
  2. Striped Mirror (RAID 1E)
  3. Dual Drive Failure Protection (RAID 6)
  4. Copyback Hot Spare
  • Additional RAID Features - Snapshot Backup
  • Board dimensions - 24cm x 11.5cm
  • Operating temperature - 0 to 55 degrees C
  • Mean Time Between Failures (MTBF) - 931,924 hours at 40 °C
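MTBF figures quoted in hours are easier to compare when converted to years. A small illustrative sketch (the hour values are the ones from the two spec lists above):

```python
# Convert the quoted MTBF figures from hours to years for easier comparison.
HOURS_PER_YEAR = 24 * 365  # 8760 hours, ignoring leap years

def mtbf_years(hours: float) -> float:
    """Convert an MTBF quoted in hours to years."""
    return hours / HOURS_PER_YEAR

for model, hours in [("ASR-48300 12C", 1_692_573), ("ASR-4800SAS", 931_924)]:
    print(f"{model}: {mtbf_years(hours):.0f} years")
```

Roughly 193 years for the junior card and 106 for the senior one; as always with MTBF, these are statistical figures for large populations of devices, not a promised lifetime for a single board.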

Testing

Testing such adapters is not easy, and we have not yet accumulated much experience with SAS. We therefore decided to compare the speed of hard drives with a SAS interface against SATA drives. For this we used our 73 GB, 15,000 rpm Hitachi HUS151473VLS300 SAS drives with a 16 MB buffer and the 150 GB, 10,000 rpm WD Raptor WD1500ADFD (SATA150) with a 16 MB buffer - a direct comparison of two fast drives with different interfaces on the two controllers. The disks were tested in HD Tach, with the following results.

Adaptec ASR-48300 12C

Adaptec ASR-4800SAS

As expected, the HDD with the SAS interface proved faster than the SATA one, even though for the comparison we took the fastest SATA disk, the WD Raptor, which can easily compete with many 15,000 rpm SCSI drives. As for the differences between the controllers, they are minimal. The older model certainly provides more features, but the need for them arises only in corporate use; those enterprise features are the additional RAID levels and the on-board cache memory. An ordinary home user is unlikely to put 8 hard drives in a redundant RAID array inside a home PC, even a heavily modded one - more likely, four drives will go into a 0+1 array and the rest will hold data. This is where the ASR-48300 12C comes in handy; moreover, some enthusiast motherboards do have a PCI-X slot. The model's advantages for home use are its relatively affordable price of $350 (compared to the cost of eight hard drives) and its ease of use (insert and connect). Also of particular interest are 10,000 rpm 2.5-inch hard drives, which consume less power, run cooler, and take up less space.

Conclusions

This review is unusual for our site and is aimed largely at gauging reader interest in reviews of specialized hardware. Today we looked at two interesting RAID controllers from Adaptec, a well-known and proven manufacturer of server equipment; it is also our first attempt at an analytical article.

As for today's heroes, the Adaptec SAS controllers, we can say that both products are a success. The junior model, the ASR-48300 at $350, may well find a home in a high-performance home computer, and even more so in an entry-level server (or a computer playing that role). It has all the prerequisites: convenient Adaptec Storage Manager software, support for 8 to 128 disks, and the basic RAID levels.

The older model is designed for serious tasks and can certainly be used in inexpensive servers, but only when there are particular requirements for small-file performance and storage reliability, because the card supports all enterprise-class redundant RAID levels and carries 128 MB of fast DDR2 cache with ECC. The controller costs $950.

ASR-48300 12C

Pros of the model

  • Availability
  • Supports from 8 to 128 drives
  • Ease of use
  • Stable work
  • Adaptec Reputation
  • PCI-X slot - the only thing keeping it from wider popularity is the lack of the more common PCI-E

ASR-4800SAS

Pros of the model

  • Stable work
  • Manufacturer's reputation
  • Good functionality
  • Upgrade options available (software and hardware)
  • A PCI-E version is available
  • Ease of use
  • Supports from 8 to 128 drives
  • 8 internal SAS channels

Cons of the model

  • Not well suited to the budget and home segments

Tests of RAID 6, 5, 1 and 0 arrays with Hitachi SAS-2 drives

The days when a decent professional 8-port RAID controller cost serious money are apparently gone. Solutions for the Serial Attached SCSI (SAS) interface have now appeared that are very attractive in price, functionality and performance. This review is about one of them.

LSI MegaRAID SAS 9260-8i controller

We have already written about the second-generation SAS interface with its 6 Gbit/s transfer rate and about the very cheap 8-port HBA, the LSI SAS 9211-8i, intended for entry-level storage systems built on the simplest RAID arrays of SAS and SATA drives. The LSI MegaRAID SAS 9260-8i is a class above: it carries a more powerful processor with hardware handling of level 5, 6, 50 and 60 arrays (ROC technology - RAID On Chip), as well as a sizeable 512 MB of on-board SDRAM for effective data caching. This controller also supports SAS and SATA at 6 Gbit/s, and the adapter itself is designed for PCI Express x8 2.0 (5 GT/s per lane), which in theory is almost enough to satisfy eight high-speed SAS ports. And all this at a retail price of around $500 - only a couple of hundred more than the budget LSI SAS 9211-8i. The manufacturer itself, by the way, places this solution in the MegaRAID Value Line, that is, among its economical offerings.
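The "almost enough" claim is easy to check on the back of an envelope. Both PCIe 2.0 and SAS-2 use 8b/10b encoding, so 10 transferred bits carry 8 bits of payload (a rough sketch; protocol overheads beyond line coding are ignored):

```python
# Compare PCIe x8 2.0 payload bandwidth with the aggregate of eight
# 6 Gbit/s SAS-2 ports. Both links use 8b/10b line coding.

def usable_gbytes_per_s(lanes: int, gtransfers_per_s: float) -> float:
    # lanes * GT/s * 8/10 (coding efficiency) / 8 (bits -> bytes)
    return lanes * gtransfers_per_s * 8 / 10 / 8

pcie = usable_gbytes_per_s(8, 5.0)  # eight PCIe 2.0 lanes at 5 GT/s
sas = usable_gbytes_per_s(8, 6.0)   # eight SAS-2 ports at 6 Gbit/s

print(f"PCIe x8 2.0: {pcie:.1f} GB/s; eight SAS-2 ports: {sas:.1f} GB/s")
```

4.0 GB/s of host bandwidth against 4.8 GB/s of aggregate port bandwidth: slightly short in theory, but ample for eight mechanical drives.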

The LSI MegaRAID SAS 9260-8i 8-port SAS controller and its SAS2108 processor with DDR2 memory

The LSI SAS 9260-8i board has a low profile (MD2 form factor), is equipped with two internal Mini-SAS 4X connectors (each allows up to 4 SAS drives to be connected directly, or more via expanders), is designed for the PCI Express x8 2.0 bus, and supports RAID levels 0, 1, 5, 6, 10, 50 and 60, dynamic SAS functionality, and much more. The controller can be installed both in 1U and 2U rack servers (mid- and high-end class) and in ATX and Slim-ATX cases (workstations). RAID support is provided in hardware by the integrated LSI SAS2108 processor (a PowerPC core at 800 MHz), complemented by 512 MB of DDR2-800 memory with ECC support. LSI promises processor throughput of up to 2.8 GB/s for reading and up to 1.8 GB/s for writing. The adapter's rich functionality includes Online Capacity Expansion (OCE) and Online RAID Level Migration (RLM) (growing a volume and changing array type on the fly), SafeStore Encryption Services and Instant Secure Erase (on-disk encryption and secure data deletion), support for solid-state drives (SSD Guard technology), and many other features. An optional battery module is available for this controller (with it, the maximum operating temperature should not exceed +44.5 °C).

LSI SAS 9260-8i controller: main technical characteristics
System interface - PCI Express x8 2.0 (5 GT/s), Bus Master DMA
Disk interface - SAS-2, 6 Gbit/s (supports SSP, SMP, STP and SATA protocols)
Number of SAS ports - 8 (two x4 Mini-SAS SFF8087 connectors); up to 128 drives via expanders
RAID support - levels 0, 1, 5, 6, 10, 50, 60
CPU - LSI SAS2108 ROC (PowerPC @ 800 MHz)
Built-in cache memory - 512 MB ECC DDR2-800
Power consumption - no more than 24 W (+3.3 V and +12 V power from the PCIe slot)
Operating/storage temperature range - 0…+60 °C / −45…+105 °C
Form factor, dimensions - MD2 low-profile, 168×64.4 mm
MTBF - more than 2 million hours

Manufacturer's warranty

In a white-and-orange box with a frivolously smiling, toothy lady's face on the cover (apparently the better to attract bearded system administrators and stern system builders) we find the controller board, brackets for installing it in ATX and Slim-ATX cases, two four-drive cables with a Mini-SAS connector on one end and regular SATA connectors (without power) on the other (for attaching up to 8 drives to the controller), and a CD with PDF documentation and drivers for numerous versions of Windows, Linux (SuSE and RedHat), Solaris and VMware.


Contents of the boxed version of the LSI MegaRAID SAS 9260-8i controller (the MegaRAID Advanced Services hardware key is available on request)

With a special hardware key (supplied separately), the LSI MegaRAID SAS 9260-8i controller gains the LSI MegaRAID Advanced Services software technologies: MegaRAID Recovery, MegaRAID CacheCade, MegaRAID FastPath and LSI SafeStore Encryption Services (their examination is beyond the scope of this article). In particular, MegaRAID CacheCade can raise the performance of a traditional hard-drive array by adding a solid-state drive to the system as a second-level cache for the HDD array (analogous to a hybrid drive), in some cases providing up to a 50-fold increase in disk-subsystem performance. Also of interest is MegaRAID FastPath, which reduces the SAS2108 processor's I/O processing latency (by disabling HDD-oriented optimizations) and thus speeds up an array of several solid-state drives connected directly to the SAS 9260-8i ports.

Configuring, tuning and servicing the controller and its arrays is more conveniently done in the proprietary manager within the operating system (the controller's own BIOS Setup menu is not rich in settings - only basic functions are available). In the manager, a few mouse clicks are enough to build any array and set its operating policies (caching and so on) - see the screenshots.

Examples of screenshots from the Windows manager while configuring RAID arrays of level 5 (above) and level 1 (below).

Testing

To get acquainted with the basic performance of the LSI MegaRAID SAS 9260-8i (without the MegaRAID Advanced Services hardware key and its related technologies), we used five high-performance SAS drives: 300 GB Hitachi Ultrastar 15K600 HUS156030VLS600 units with a 15,000 rpm spindle speed and SAS-2 (6 Gbit/s) support.


Hitachi Ultrastar 15K600 hard drive without top cover

This lets us test all the basic array levels - RAID 6, 5, 10, 0 and 1 - and not only with the minimum number of disks for each, but also "with room to grow", that is, with a disk added on the second of the ROC chip's two 4-channel SAS ports. Note that the hero of this article has a simplified sibling, the 4-port LSI MegaRAID SAS 9260-4i, built on the same components; our 4-disk array tests therefore apply equally to it.

The maximum sequential read/write speed of payload data for the Hitachi HUS156030VLS600 is about 200 MB/s (see the graph). The average random-access time for reads is 5.4 ms (per the specifications), and the built-in buffer is 64 MB.


Hitachi Ultrastar 15K600 HUS156030VLS600 sequential read/write speed chart

The test system was based on an Intel Xeon 3120 processor, a motherboard with the Intel P45 chipset, and 2 GB of DDR2-800 memory. The SAS controller was installed in a PCI Express x16 v2.0 slot. Testing was carried out under Windows XP SP3 Professional and Windows 7 Ultimate SP1 x86 (clean US versions), since their server counterparts (Windows 2003 and 2008, respectively) would not run some of the benchmarks and scripts we used. The tests were AIDA64, ATTO Disk Benchmark 2.46, Intel IOmeter 2006, Intel NAS Performance Toolkit 1.7.1, C'T H2BenchW 4.13/4.16, HD Tach RW 3.0.4.0, and Futuremark's PCMark Vantage and PCMark05. Tests were run both on unpartitioned volumes (IOmeter, H2BenchW, AIDA64) and on formatted partitions. In the latter case (for NASPT and PCMark), results were taken both at the physical beginning of the array and at its middle (arrays of the maximum available capacity were divided into two equal logical partitions). This gives a more adequate assessment, since the fastest initial sections of a volume, where most reviewers run their file benchmarks, often do not reflect the situation on the rest of the disk, which is also used very actively in real work.

All tests were run five times and the results averaged. We will examine our updated methodology for evaluating professional disk solutions in more detail in a separate article.

It remains to add that for this testing we used controller firmware version 12.12.0-0036 and driver version 4.32.0.32. Write and read caching was enabled for all arrays and disks. Perhaps the use of more recent firmware and drivers spared us the oddities noticed in early tests of this same controller elsewhere; in any case, we observed no such incidents. We also do not use the FC-Test 1.0 script in our suite: its results are of dubious reliability (in certain cases even our colleagues were inclined to describe them as confusion, vacillation and unpredictability), and we have repeatedly seen it behave inconsistently on some file patterns (in particular, sets of many small files under 100 KB).

The charts below show the results for 8 array configurations:

  1. RAID 0 of 5 disks;
  2. RAID 0 of 4 disks;
  3. RAID 5 of 5 disks;
  4. RAID 5 of 4 disks;
  5. RAID 6 of 5 disks;
  6. RAID 6 of 4 disks;
  7. RAID 1 of 4 disks;
  8. RAID 1 of 2 disks.
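Since redundancy costs differ by level, it is worth keeping in mind how much usable space each of these eight configurations actually offers. A rough sketch (not from the article) using the 300 GB drives tested here, and treating the four-disk "RAID 1" as the stripe-of-mirrors it really is:

```python
# Usable capacity of the eight tested configurations, assuming 300 GB drives.

def usable_gb(level: str, n: int, disk_gb: int = 300) -> int:
    # Disks consumed by redundancy: none for RAID 0, one parity disk for
    # RAID 5, two for RAID 6, and half the set for mirrored levels.
    overhead = {"0": 0, "5": 1, "6": 2, "1": n // 2}[level]
    return (n - overhead) * disk_gb

for level, n in [("0", 5), ("0", 4), ("5", 5), ("5", 4),
                 ("6", 5), ("6", 4), ("1", 4), ("1", 2)]:
    print(f"RAID {level} of {n} disks: {usable_gb(level, n)} GB usable")
```

The span runs from 1500 GB (five-disk RAID 0, no fault tolerance) down to 300 GB (a simple mirror), with the five-disk RAID 6 giving 900 GB while surviving any two drive failures.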

By a four-disk RAID 1 array (see the screenshot above), LSI evidently means a stripe-of-mirrors array, usually referred to as RAID 10 (our test results confirm this).

Test results

To avoid overloading the review page with a countless series of diagrams, sometimes uninformative and tiring (a sin some of our more rabid colleagues are prone to :)), we have collected the detailed results of some tests in a table. Those who wish to analyze the fine details (for example, to check behavior in the tasks most critical to them) can do so on their own. We will focus on the most important and telling results, and on the averages.

First, let's look at the results of “purely physical” tests.

The average random-access time for reads on a single Hitachi Ultrastar 15K600 HUS156030VLS600 is 5.5 ms. When the disks are organized into arrays, this figure changes slightly: it decreases (thanks to effective caching in the LSI SAS9260 controller) for the "mirror" arrays and increases for all the others. The largest increase (about 6%) is seen for the level-6 arrays, since there the controller accesses the largest number of disks at once (three for RAID 6, two for RAID 5 and one for RAID 0, as the accesses in this test are 512-byte blocks, far smaller than the arrays' stripe size).

Random access during writes (again in 512-byte blocks) is much more interesting. For a single disk this parameter is about 2.9 ms (without host-controller caching), but in arrays on the LSI SAS9260 we see a dramatic drop, thanks to good write caching in the controller's 512 MB SDRAM buffer. Interestingly, the sharpest effect is on the RAID 0 arrays (random write access time falls by almost an order of magnitude compared to a single drive)! This should benefit such arrays in a number of server tasks. At the same time, even on the arrays with XOR calculations (that is, with a heavy load on the SAS2108 processor), random writes cause no obvious performance degradation - again thanks to the powerful controller cache. Naturally, RAID 6 is slightly slower here than RAID 5, but the difference is essentially insignificant. The one surprise in this test was the simple "mirror", which showed the slowest random write access (perhaps a "feature" of this controller's microcode).

The linear (sequential) read and write speed graphs (in large blocks) hold no surprises for any of the arrays (read and write are almost identical when the controller's write caching is enabled), and they all scale with the number of disks participating in the "useful" process in parallel. That is, for a five-disk RAID 0 the speed is five times that of a single disk (reaching 1 GB/s!), for a five-disk RAID 5 it is quadrupled, for RAID 6 it is tripled, for a four-disk RAID 1 it is doubled, and a simple mirror duplicates the single-disk graphs. This pattern is clearly visible, in particular, in the maximum speed of reading and writing real, large (256 MB) files in large blocks (256 KB to 2 MB), which we illustrate with a diagram from ATTO Disk Benchmark 2.46 (results for Windows 7 and XP are almost identical).
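The scaling rule described above can be sketched numerically. A rough model (our own, not from the article), using the 200 MB/s single-drive figure quoted earlier: a striped array streams from all of its "data" disks in parallel, so its linear speed is roughly the single-disk speed multiplied by the number of disks carrying payload data.

```python
# Expected sequential throughput per array, as (data disks) x (single-disk speed).
SINGLE_MBPS = 200  # quoted maximum for the Hitachi 15K600

def data_disks(level: str, n: int) -> int:
    # Disks that stream payload in parallel: all for RAID 0, all but the
    # parity disk(s) for RAID 5/6, one side of each mirror pair for RAID 1/10.
    return {"0": n, "5": n - 1, "6": n - 2, "1": n // 2}[level]

for level, n in [("0", 5), ("5", 5), ("6", 5), ("1", 4), ("1", 2)]:
    print(f"RAID {level} of {n}: ~{data_disks(level, n) * SINGLE_MBPS} MB/s")
```

The predictions (1000, 800, 600, 400 and 200 MB/s) line up with the "quintupled/quadrupled/tripled/doubled" pattern seen in the ATTO results.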

The only unexpected deviation from the overall picture was file reading on the five-disk RAID 6 array (the results were double-checked); for reads in 64 KB blocks, however, this array does reach the expected 600 MB/s, so let's chalk this up as a "feature" of the current firmware. Note also that when writing real files the speed is slightly higher thanks to caching in the large controller buffer, and the gap versus reading grows as the array's actual linear speed falls.

As for the interface speed, usually measured by buffered writes and reads (repeated accesses to the same disk address), here we must note that it came out nearly identical for almost all arrays because the controller cache was enabled for them (see the table). The write figure for all participants was approximately 2430 MB/s. The PCI Express x8 2.0 bus theoretically gives 40 Gbit/s, or 5 GB/s; in terms of payload the theoretical ceiling is lower, about 4 GB/s, which confirms that in our case the controller really was working on a version 2.0 PCIe bus. The 2.4 GB/s we measured is therefore evidently the real bandwidth of the controller's on-board memory (DDR2-800 with a 32-bit data bus, as the configuration of the ECC chips on the board shows, theoretically gives up to 3.2 GB/s). For reads, caching is not as all-encompassing as for writes, so the "interface" speed measured by utilities is usually below the read speed of the controller's cache (typically 2.1 GB/s for the level 5 and 6 arrays), and in some cases it "falls" to the buffer read speed of the drives themselves (about 400 MB/s for a single drive, see the graph above) multiplied by the number of disks read "in sequence" in the array (exactly the RAID 0 and 1 cases in our results).
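The 3.2 GB/s ceiling mentioned for the on-board cache follows directly from the memory configuration. A one-line check (standard DDR bandwidth arithmetic, not a figure from LSI):

```python
# Theoretical peak bandwidth of the controller cache: DDR2-800 on a 32-bit bus.
bus_bytes = 32 // 8            # 32-bit data bus -> 4 bytes per transfer
transfers_per_s = 800_000_000  # DDR2-800 performs 800 million transfers/s
peak_gbs = bus_bytes * transfers_per_s / 1e9
print(f"Theoretical cache bandwidth: {peak_gbs:.1f} GB/s")
```

3.2 GB/s in theory, against the 2.4 GB/s measured: a plausible real-world efficiency for SDRAM once refresh and controller overheads are accounted for.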

Well, we have sorted out the "physics" to a first approximation; time to move on to the "lyrics", that is, to tests of real-world applications. Along the way it will be interesting to find out whether array performance in complex user tasks scales as linearly as it does when reading and writing large files (see the ATTO diagram just above). The inquisitive reader, I hope, has already guessed the answer.

As the "salad" course of our "lyrical" part we serve the desktop-by-nature disk tests from the PCMark Vantage and PCMark05 suites (under Windows 7 and XP, respectively), plus a similar application "track" test, H2BenchW 4.13, from the reputable German magazine C'T. Yes, these tests were originally created to evaluate hard drives for desktops and low-cost workstations. They replay on the disk the typical tasks of an advanced personal computer: working with video, audio and Photoshop, antivirus scanning, games, swap files, application installation, file copying and writing, and so on. Their results therefore should not be taken here as the ultimate truth, since other tasks run more often on multi-disk arrays. Still, given that the manufacturer positions this RAID controller for relatively low-cost solutions as well, this class of test task can fairly characterize a certain share of the applications that will actually run on such arrays (the same video work, professional graphics processing, OS and application swapping, file copying, antivirus, and so on). So the importance of these three comprehensive benchmarks in our overall suite should not be underestimated.

In the popular PCMark Vantage, on average (see the chart), we observe a remarkable fact: the performance of this multi-disk solution is almost independent of the array type used! Within limits, this conclusion also holds for all the individual test tracks (task types) in the PCMark Vantage and PCMark05 suites (see the table for details). This may mean either that the controller's firmware algorithms (cache and disk handling) barely account for the specifics of this class of application, or that the bulk of these tasks execute within the controller's own cache memory - most likely a combination of the two. In the latter case, though, the average performance is not that impressive: compare these figures with the results for some "desktop" (chipset-based) 4-disk RAID 0 and RAID 5 arrays and for inexpensive single SSDs on a SATA 3 Gbit/s bus (see our earlier review). In the PCMark tests, the LSI SAS9260 arrays are less than twice as fast as a simple chipset 4-disk RAID 0 built on drives half as fast as the Hitachi Ultrastar 15K600 used here, while even far-from-the-fastest budget single SSDs definitely outperform them all! The PCMark05 disk test paints a similar picture (see the table; a separate diagram for it would add nothing).

A similar picture (with some reservations) for the LSI SAS9260 arrays emerges in the other "track" application benchmark, C'T H2BenchW 4.13. Here only the two slowest arrays (by design) - the four-disk RAID 6 and the simple "mirror" - noticeably lag behind the rest, whose performance evidently reaches the "sufficient" level at which the bottleneck is no longer the disk subsystem but the efficiency of the SAS2108 processor and controller cache on these complex sequences of requests. The encouraging point here is that the performance of LSI SAS9260 arrays in tasks of this class is almost independent of the array type used (RAID 0, 5, 6 or 10), so more reliable configurations can be used without sacrificing final performance.

However, the good times do not last forever: change the tests and check how the arrays handle real files on an NTFS file system, and the picture changes dramatically. In Intel NASPT 1.7, many of whose "preset" scenarios are directly relevant to the tasks typical of computers equipped with an LSI MegaRAID SAS9260-8i controller, the standing of the arrays is similar to what we saw in the ATTO test when reading and writing large files: performance grows in proportion to the arrays' "linear" speed.

In this chart we show the average across all NASPT tests and patterns; detailed results are in the table. We should stress that we ran NASPT both under Windows XP (as most reviewers usually do) and under Windows 7 (which, because of certain quirks of this test, is done less often). The point is that Windows 7 (and its "big brother" Windows 2008 Server) uses more aggressive file-caching algorithms than XP. In addition, Windows 7 copies large files mostly in 1 MB blocks, whereas XP generally operates in 64 KB blocks. As a result, the scores of the Intel NASPT "file" test differ significantly between Windows XP and Windows 7: under the latter they are much higher, sometimes more than twice as high! Incidentally, we also compared NASPT results (and those of the other tests in our suite) under Windows 7 with 1 GB and with 2 GB of installed system memory (there are reports that with more system memory Windows 7 caches disk operations more aggressively and NASPT scores grow even further), but within measurement error we found no difference.

We leave the debate about which OS (in terms of caching policies, etc.) disks and RAID controllers should be tested under to the discussion thread of this article. We believe that drives and the solutions built on them need to be tested under conditions as close as possible to the real situations in which they operate. That is why, in our opinion, the results we obtained under both operating systems are of equal value.

But let's return to the average performance chart in NASPT. As you can see, the difference between the fastest and slowest arrays tested here is on average just under threefold. That is not the fivefold gap of large-file reads and writes, but it is still very noticeable. The arrays rank almost exactly in proportion to their linear speed, which is good news: it means the LSI SAS2108 processor handles the data quickly enough, creating almost no bottlenecks even for actively used RAID 5 and 6 arrays.

To be fair, NASPT does contain patterns (2 of 12) where the picture matches PCMark and H2BenchW, that is, all tested arrays perform almost identically: Office Productivity and Dir Copy to NAS (see table). This is especially obvious under Windows 7, although the trend of "convergence" (relative to the other patterns) is visible under Windows XP as well. Conversely, PCMark and H2BenchW contain patterns where array performance does grow in proportion to linear speed. So things are not as simple and unambiguous as some might like.

At first, I wanted to discuss a chart with general array performance indicators averaged over all application tests (PCMark+H2BenchW+NASPT+ATTO), that is, this one:

However, there is nothing special to discuss here: we see that the behavior of arrays on the LSI SAS9260 controller in tests that emulate the operation of certain applications can vary dramatically depending on the scenarios used. Therefore, it is better to draw conclusions about the benefits of a particular configuration based on exactly what tasks you are going to perform. And another professional test can significantly help us with this - synthetic patterns for IOmeter, emulating a particular load on the data storage system.

Tests in IOmeter

In this case, we will omit the discussion of the numerous patterns that painstakingly measure speed as a function of access block size, percentage of write operations, percentage of random accesses, and so on; that is, in effect, pure synthetics offering little practical information and mostly of theoretical interest. We have already clarified the main practical points regarding the "physics" above. It is more important to focus on patterns that emulate real work: servers of various types, as well as file operations.

To emulate servers such as File Server, Web Server and DataBase (a database server), we used the well-known patterns once proposed by Intel and StorageReview.com. In all cases we tested the arrays at command queue depths (QD) from 1 to 256, doubling the depth at each step.

In the "Database" pattern, which uses random accesses in 8 KB blocks across the array's entire volume, arrays without parity (that is, RAID 0 and 1) show a significant advantage at command queue depths of 4 and above, while all parity arrays (RAID 5 and 6) demonstrate very similar performance despite the twofold difference between them in linear access speed. The explanation is simple: all parity arrays showed similar average random access times (see the chart above), and it is this parameter that chiefly determines performance in this test. Interestingly, the performance of all arrays grows almost linearly with queue depth up to QD=128, and only at QD=256 does a hint of saturation appear in some cases. The maximum performance of parity arrays at QD=256 was about 1100 IOps (operations per second), meaning the LSI SAS2108 processor spends under 1 ms per 8 KB request (roughly 10 million single-byte XOR operations per second for RAID 6; naturally, the processor also handles I/O and cache management in parallel).
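The arithmetic behind that estimate is easy to check. The sketch below uses only the figures quoted above (1100 IOps, 8 KB blocks) to reproduce the per-request latency and the approximate byte throughput the XOR engine must sustain:

```python
# Sanity check of the controller-load estimate (figures from the text above).
iops = 1100               # parity arrays at QD=256 in the Database pattern
block = 8 * 1024          # 8 KB per random request

latency_ms = 1000 / iops          # average service time per request
parity_bytes = iops * block       # bytes the XOR engine touches per second

print(f"{latency_ms:.2f} ms per request")        # ~0.91 ms, i.e. under 1 ms
print(f"{parity_bytes / 1e6:.1f} MB of parity data per second")  # ~9 MB/s
```

Nine million bytes per second of parity traffic lands near the article's rough "10 million single-byte XOR operations" figure for RAID 6.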

In the file server pattern, which uses blocks of various sizes with random reads and writes across the entire array, we see a picture similar to DataBase, except that here the five-disk parity arrays (RAID 5 and 6) are noticeably faster than their 4-disk counterparts and demonstrate almost identical performance (about 1200 IOps at QD=256)! Apparently, adding a fifth disk on the second of the controller's two 4-channel SAS ports somehow optimizes the computational load on the processor (at the expense of I/O operations?). It may be worth comparing the speed of 4-disk arrays with the drives connected in pairs to the controller's different Mini-SAS connectors, to identify the optimal configuration for arrays on the LSI SAS9260, but that is a task for another article.

In the web server pattern, where, according to its creators, there are no disk writes at all (and hence no XOR calculations on writes), the picture gets even more interesting. All three five-disk arrays in our set (RAID 0, 5 and 6) show identical performance here, despite the noticeable differences between them in linear read speed and in parity computation! The same three arrays in their 4-disk form are likewise identical to each other, and only RAID 1 (and 10) falls out of the overall picture. Why this happens is hard to say. Perhaps the controller has very efficient "lucky drive" selection algorithms (picking whichever of the five or four drives delivers the requested data first), which in the case of RAID 5 and 6 raises the probability of data arriving from the platters early, letting the processor prepare the necessary calculations in advance (recall the deep command queue and the large DDR2-800 buffer). That could ultimately compensate for the XOR latency and equalize their chances with "plain" RAID 0. In any case, the LSI SAS9260 deserves nothing but praise for its extremely high results in the Web Server pattern for parity arrays (about 1700 IOps for the 5-disk arrays at QD=256). Unfortunately, the fly in the ointment is the very low performance of the two-disk "mirror" in all of these server patterns.

The Web Server pattern is echoed by our own pattern, which emulates random reading of small (64 KB) files within the entire array space.

Again the results group together: all 5-disk arrays are identical in speed and lead our "race"; the 4-disk RAID 0, 5 and 6 are likewise indistinguishable from each other; and only the "mirrors" stand apart (incidentally, the 4-disk "mirror", that is, RAID 10, turns out faster than all the other 4-disk arrays, apparently thanks to the same "lucky drive" selection algorithm). We emphasize that this grouping holds only at large command queue depths; at a small queue (QD=1-2) the situation and the leaders can be entirely different.

Everything changes when servers work with large files. With today's "heavier" content and new "optimized" operating systems such as Windows 7 and 2008 Server, working with megabyte-sized files and 1 MB data blocks is becoming ever more important. In this situation our new pattern, which emulates random reading of 1 MB files across the entire array (details of the new patterns will be described in a separate article on methodology), comes in handy for a fuller assessment of the server potential of the LSI SAS9260 controller.

As we can see, the 4-disk "mirror" (RAID 10) leaves its rivals no hope here, clearly dominating at every command queue depth. Its performance also initially grows linearly with queue depth, but at QD=16 it saturates (at about 200 MB/s). A little "later" (at QD=32) saturation sets in for the slower arrays of this test, among which the "silver" and "bronze" go to the RAID 0 arrays, while the parity arrays end up as outsiders, losing even to the two-disk RAID 1, which proves unexpectedly good. This leads us to conclude that even on reads, the computational XOR load on the LSI SAS2108 processor when working with (randomly located) large files and blocks is very burdensome for it, and for RAID 6, where it effectively doubles, sometimes even prohibitive: the performance of those solutions barely exceeds 100 MB/s, that is, 6-8 times lower than at linear reading! "Redundant" RAID 10 is clearly the more profitable choice here.

When randomly writing small files, the picture again differs strikingly from what we saw before.

The performance of the arrays here hardly depends on the depth of the command queue (evidently the huge cache of the LSI SAS9260 controller and the sizeable caches of the drives themselves are at work), but it changes radically with the array type! The undisputed leaders are the arrays "simple" for the processor, RAID 0, and "bronze", more than twofold behind the leader, goes to RAID 10. All parity arrays, together with the two-disk "mirror", formed a very tight group (detailed in a separate chart below the main one), losing threefold to the leaders. Yes, this is definitely a heavy load on the controller's processor, but frankly, we did not expect such a "failure" from the SAS2108. At times even a software RAID 5 on a "chipset" SATA controller (with Windows caching and parity computed by the PC's central processor) manages to work faster... Still, the controller steadily delivers "its" 440-500 IOps; compare this with the average write access time chart at the beginning of the results section.
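To see why small parity writes weigh so heavily on the processor, here is a minimal illustrative RAID 5-style sketch (not the controller's actual code): every small update costs extra XOR passes over the data, and the same XOR restores a lost block:

```python
# Illustrative RAID 5-style parity arithmetic (not the controller's algorithm).
def xor_blocks(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# One stripe: three data blocks plus one parity block.
data = [b"\x11" * 4, b"\x22" * 4, b"\x33" * 4]
parity = data[0]
for blk in data[1:]:
    parity = xor_blocks(parity, blk)

# A small read-modify-write update of block 1: two extra XOR passes per write.
new_blk = b"\xaa" * 4
parity = xor_blocks(xor_blocks(parity, data[1]), new_blk)
data[1] = new_blk

# Losing block 1 is recoverable: XOR the parity with the surviving blocks.
rebuilt = parity
for i, blk in enumerate(data):
    if i != 1:
        rebuilt = xor_blocks(rebuilt, blk)
assert rebuilt == new_blk   # parity stayed consistent after the update
```

For RAID 6 a second, independent parity syndrome is computed on top of this, which is why the text describes its write-time computational load as roughly doubled.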

The transition to random writing of large 1 MB files raises the absolute speeds (for RAID 0, almost to its random-read figures for such files, that is, 180-190 MB/s), but the overall picture stays almost the same: parity arrays are many times slower than RAID 0.

Curiously, RAID 10's performance here decreases slightly as the command queue deepens; no other array shows this effect. The two-disk "mirror" again looks modest.

Now let's look at patterns in which files are read and written to disk in equal quantities. Such loads are typical, in particular, for some video servers or during active copying/duplication/backup of files within one array, as well as in the case of defragmentation.

First - 64 KB files randomly throughout the array.

Some similarity with the results of the DataBase pattern is obvious here, although the absolute speeds of the arrays are three times higher and some performance saturation is already noticeable even at QD=256. The larger share of write operations (compared with the DataBase pattern) makes the parity arrays and the two-disk "mirror" clear outsiders, significantly slower than the RAID 0 and 10 arrays.

When switching to 1 MB files this pattern is generally preserved, although absolute speeds are approximately tripled, and RAID 10 becomes as fast as a 4-disk stripe, which is good news.

The last pattern in this article will be the case of sequential (as opposed to random) reading and writing of large files.

And here many arrays manage to accelerate to a very decent 300 MB/s or so. The more-than-twofold gap between the leader (RAID 0) and the outsider (the two-disk RAID 1) remains (note that at linear reading OR writing alone this gap is fivefold!), but RAID 5 made it into the top three and the remaining XOR arrays are not hopelessly behind, which is encouraging. After all, judging by the list of applications for this controller that LSI itself provides (see the beginning of the article), many of its target tasks use exactly this type of access to the arrays, and that is definitely worth bearing in mind.

In conclusion, I will provide a final diagram in which the indicators of all the IOmeter test patterns mentioned above are averaged (geometrically for all patterns and command queues, without weighting coefficients). It is curious that if the averaging of these results within each pattern is carried out arithmetically with weighting coefficients of 0.8, 0.6, 0.4 and 0.2 for command queues 32, 64, 128 and 256, respectively (which conditionally takes into account the drop in the share of operations with high depth of the command queue in the overall operation of drives), then the final (for all patterns) normalized array performance index will coincide within 1% with the geometric mean.
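The two averaging schemes mentioned here are easy to illustrate. The snippet below uses hypothetical IOps figures (not our measurements) just to show the mechanics of the plain geometric mean versus the weighted arithmetic mean with the 0.8/0.6/0.4/0.2 coefficients:

```python
import math

# Hypothetical IOps for one pattern at four queue depths (illustrative only).
qd_results = {32: 800, 64: 950, 128: 1050, 256: 1100}
weights = {32: 0.8, 64: 0.6, 128: 0.4, 256: 0.2}

# Plain geometric mean over all queue depths (no weighting).
geo = math.prod(qd_results.values()) ** (1 / len(qd_results))

# Weighted arithmetic mean that discounts deep command queues.
wavg = sum(weights[qd] * v for qd, v in qd_results.items()) / sum(weights.values())

print(f"geometric mean: {geo:.0f} IOps, weighted arithmetic mean: {wavg:.0f} IOps")
```

With real measurement data, as noted above, the two indices happened to coincide within 1%; with these made-up numbers they differ by a few percent, which shows the agreement is a property of the data, not of the formulas.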

So, the "average temperature across the hospital" in our IOmeter patterns shows that there is no escaping "physics and mathematics": RAID 0 and 10 are the clear leaders. No miracle happened for the parity arrays: although the LSI SAS2108 processor demonstrates decent performance in some cases, overall such arrays cannot reach the level of a simple "stripe". It is interesting, though, that the 5-disk configurations clearly gain over the 4-disk ones; in particular, the 5-disk RAID 6 is definitely faster than the 4-disk RAID 5, even though in terms of "physics" (random access time and linear speed) they are virtually identical. The two-disk "mirror" also disappointed (on average it is equivalent to a 4-disk RAID 6, even though the mirror does not require two XOR calculations per bit of data). Then again, a simple "mirror" is obviously not the target array for a fairly powerful 8-port SAS controller with a large cache and a powerful on-board processor. :)

Pricing information

The 8-port LSI MegaRAID SAS 9260-8i controller in its full retail package is offered at around $500, which can be considered quite attractive. Its simplified 4-port analogue is even cheaper. A more accurate current average retail price in Moscow, relevant at the time you read this article:

LSI SAS 9260-8i: $571
LSI SAS 9260-4i: $386

Conclusion

Summing up, we will not risk giving one-size-fits-all recommendations for the 8-port LSI MegaRAID SAS9260-8i controller. Everyone should decide for themselves whether to use it and which arrays to configure with it, strictly based on the class of tasks to be run. In some cases (on some tasks) this inexpensive "mega-monster" can show outstanding performance even on double-parity arrays (RAID 6 and 60), yet in other situations the speed of its RAID 5 and 6 clearly leaves much to be desired, and the only (almost universal) salvation is a RAID 10 array, which can be organized with nearly equal success on cheaper controllers. That said, it is often precisely thanks to the processor and cache memory of the SAS9260-8i that a RAID 10 array runs no slower than a stripe of the same number of disks while providing high reliability. What you should definitely avoid with the SAS9260-8i are the two-disk "mirror" and the 4-disk RAID 6 and 5: these are clearly suboptimal configurations for this controller.

Thanks to Hitachi Global Storage Technologies
for the hard drives provided for testing.

If your computer has a couple of drives, connecting them is simple. But if you want a lot of disks, peculiarities arise. The lead photo shows the SAS cable from AliExpress, already mentioned in passing before, that was so unexpectedly warmly received by the community. Thank you, comrades. I will try to cover a topic that is potentially useful to a slightly wider, though still specific, circle. I'll start with this cable and the essentials, but only as a starting point; the pieces of the puzzle have to be gathered in different places.
A warning right away: the text is dense and rather heavy. You certainly don't have to force yourself to read and understand all of it. There are lots of pictures!

Some will say: nine bucks for a dumb cable? What can you do: such things are used extremely rarely in everyday life, and for industrial items the production runs are smaller and the prices higher. For a complex SAS cable they can charge you a hundred or two without blinking an eye. So the Chinese undercut even further :)

Delivery and packaging

Ordered May 6, 2017, received May 17: fast as a rocket. There was a tracking number.

An ordinary gray bag with another one inside: quite sufficient, the goods are not fragile.

Specification

Female-to-male SFF-8482 SAS 29-pin cable
Length: 50 cm
Net weight: 66 g

Seller's picture

The real appearance, as you can see, differs



For the extra plastic, the seller received 4 stars instead of 5, but it does not affect performance.

About SAS and SATA connectors

So what is SFF-8482 and how do you use it? Firstly, it is the most common connector on SAS devices, for example on my tape drive



And an SFF-8482 plug fits perfectly onto a SATA drive (but not vice versa)


Compare: with SATA there is a gap between the data and power segments, while on SAS it is filled with plastic. That is why a SATA connector will not fit onto a SAS device.

Of course, this makes sense. SAS and SATA signalling is different, and a SATA controller cannot work with a SAS device. A SAS controller can handle both (although there is advice not to mix them under certain circumstances, which are unlikely to arise at home)

SAS controllers and expanders

So what, the reader will ask. What do I gain from this compatibility? SATA controllers are enough for me!

Quite true! If that is enough for you, you can stop reading here. But the question was: what to do when there are a LOT of disks?

This is what a simple SAS controller from my spares looks like: a DELL H200.


Mine is flashed as an HBA, that is, the OS sees every disk individually

And this is an ancient HP SAS RAID controller

On both we see internal connectors (called SFF-8087 or, more often, miniSAS) and one external connector, SFF-8088

How many drives can be connected to one miniSAS port? It depends. With a dumb cable, four, that is, eight for such a controller. The cable from my spares looks like this

miniSAS at one end, four SATA connectors at the other (plus one more connector, more on it below)

But you can take a miniSAS-to-miniSAS cable and connect it to an expander, that is, a port multiplier. Then the controller can handle up to 256 (two hundred and fifty-six) disks. And the channel speed is certainly enough for dozens of disks.
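A back-of-envelope calculation supports the "enough for dozens of disks" claim. The sketch below assumes a 4-lane first-generation 3 Gb/s link with 8b/10b line coding; the per-disk throughput figures are rough assumptions, not measurements:

```python
# Back-of-envelope bandwidth budget for one 4-lane miniSAS uplink.
lanes = 4
lane_gbps = 3.0                  # first-generation SAS: 3 Gbit/s per lane
encoding = 8 / 10                # 8b/10b line coding: 8 data bits per 10 line bits
link_mbps = lanes * lane_gbps * encoding * 1000 / 8   # usable MB/s, ~1200

hdd_seq = 150                    # fast HDD, sequential MB/s (optimistic guess)
hdd_rand = 2                     # typical HDD throughput under random I/O, MB/s

print(f"uplink: {link_mbps:.0f} MB/s")
print(f"disks at full sequential speed: {link_mbps / hdd_seq:.0f}")
print(f"disks under random I/O: {link_mbps / hdd_rand:.0f}")
```

Even at full sequential speed the uplink feeds around eight drives, and under random I/O (the common case for many-disk boxes) it could in principle keep hundreds busy.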
The expander as a separate card looks, for example, like my Chenbro

Or it can be soldered onto a disk backplane; then as little as one miniSAS channel (or possibly more) runs into the cage. These are the cables.


Agree, cable management is somewhat simplified :)

Cages

Of course, disks can work just fine without special cages. But sometimes cages are useful.

This is what an old-model Supermicro SATA cage looks like. You can find one for 1000 rubles, but more likely for 5+ thousand.


Its drive tray


View from the inside; you can see the SATA connectors.


A SAS cage is even better: fewer wires. If it is SCSI or FC, you will not be able to use it. I took one 19" FC enclosure for testing; it proved good for nothing, though the non-ferrous scrap in it was worth almost what I paid for it.


Rear view: we see 4 SATA connectors, 2 Molex, and the same port that was on the cable. It is designed to drive the disk-activity LEDs.

This is what one of the simplest cages looks like (there are many different but similar models)


They are no longer sold, so the details are unimportant. Just a piece of metal with shock absorbers and a fan in front.

This is what it looked like in 2013:


The cardboard crutch at the bottom and the third cage were there only temporarily, to transfer data from 2 TB disks to 4 TB ones. Since then the setup has been working 24/7.

I have SAS+SATA

More precisely, it worked until I needed to connect the tape drive. At first I plugged in a second SAS controller and bought a miniSAS-to-SFF-8482 cable, something like this

And turned it on. Everything worked, but in 24/7 operation every watt costs money. I went looking for SFF-8482-to-SATA adapters, but the solution turned out to be even simpler. Remember that a SATA drive plugs straight into a SAS SFF-8482 connector?

Now I remember it too, but back then I was stuck for a couple of months :) Then I removed the extra controller, moved one of the drives to a chipset SATA port and the other three to SFF-8482. I had to redo the power wiring: I had a Molex-to-SATA splitter and had to buy a Molex-to-several-Molex one on AliExpress. Like this


Everything is fine.

And the tape drive moved to another building over the cable reviewed here. But that is a separate story, and frankly, I am getting tired :)

Where is the best place to look for all this?

Prices for new server hardware are prohibitive for home use. So: used gear, including spares from decommissioned equipment.
Cables can be found locally, or for comparable money on eBay. On AliExpress the odds are lower, but there are exceptions: I bought mine there.
Controllers: primarily on eBay, ideally from Europe. From the USA is possible and much cheaper if you can sort out shipping. You can also find them at home, on Avito (retail is expensive). Buying in China is very risky: there are many complaints about fakes assembled from rejects. Sometimes they work, sometimes they don't, and you can't prove anything to anyone.
Cages are wiser to look for locally. For the simplest cages there is even the option of buying new. Simple cages without electronics can be had in China, in Europe, and at flea markets. Cages with expanders: see the paragraph about controllers.

IMPORTANT: Getting confused here is easier than getting lost in the woods. Consult the forums. SAS comes in different generations: 3, 6 and 12 Gb/s. Some controllers are built so they can be used with desktop hardware, others are not, and still others will not run anywhere except a motherboard from their native manufacturer. And so on.



On my trunk I'm MikeMac

PS If this was all Captain Obvious material for you, I apologize for the wasted time.
If it was impenetrable, my sincere apologies too. The balance is hard to strike; everyone has their own needs, goals and starting knowledge.


For more than 20 years, the parallel bus interface has been the most common communication protocol for most digital storage systems. But as the need for throughput and system flexibility has grown, the shortcomings of the two most common parallel interface technologies, SCSI and ATA, have become apparent. The lack of compatibility between parallel SCSI and ATA interfaces (different connectors, cables and command sets) increases the cost of system maintenance, research and development, training, and qualification of new products.

Today, parallel technologies still satisfy users of modern corporate systems from a performance standpoint, but growing demands for higher speeds, greater data integrity, smaller physical size and broader standardization challenge the ability of the parallel interface to keep up cost-effectively with rapidly rising CPU and hard drive performance. Moreover, in times of austerity it is increasingly hard for enterprises to fund the development and maintenance of different backplane connector types for server cases and external disk arrays, compatibility testing of heterogeneous interfaces, and inventorying of heterogeneous I/O connections.

The use of parallel interfaces poses a number of other problems as well. Parallel transmission over a wide ribbon cable is subject to crosstalk, which can add interference and cause signal errors; to avoid this trap you must lower the signal rate, limit the cable length, or both. Terminating parallel signals also brings certain difficulties: each line has to be terminated separately, usually by the last drive on the bus, to prevent the signal from reflecting off the end of the cable. Finally, the bulky cables and connectors of parallel interfaces make these technologies unsuitable for new compact computing systems.

Introducing SAS and SATA

Serial technologies such as Serial ATA (SATA) and Serial Attached SCSI (SAS) overcome the architectural limitations of traditional parallel interfaces. These technologies take their name from their signalling method: all information is transmitted serially, in a single stream, in contrast to the multiple streams used in parallel technologies. The main advantage of a serial interface is that data moving in a single stream travels much faster than over a parallel interface.

Serial technologies combine many bits of data into packets and then transmit them over a cable at speeds up to 30 times faster than parallel interfaces.

SATA extends the capabilities of traditional ATA technology, allowing data transfer to disk drives at 1.5 Gbit/s and higher. Thanks to its low cost per gigabyte of capacity, SATA will remain the dominant disk interface in desktop PCs, entry-level servers and network storage systems where cost is the main consideration.

SAS technology, the successor to parallel SCSI, builds on the proven functionality of its predecessor and promises to significantly expand the capabilities of today's enterprise storage systems. SAS offers a number of advantages that traditional storage solutions cannot provide. In particular, SAS allows up to 16,256 devices to be connected to one port and provides a reliable point-to-point serial connection at speeds of up to 3 Gb/s.

Additionally, with its smaller connector, SAS provides full dual-port connectivity for both 3.5" and 2.5" drives (previously available only on 3.5" Fibre Channel drives). This is a very useful feature when a large number of redundant drives must fit into a compact system, such as a low-profile blade server.

SAS improves drive addressing and connectivity through hardware expanders, which allow large numbers of drives to be connected to one or more host controllers. Each expander supports connections to up to 128 physical devices, which may be other host controllers, other SAS expanders, or disk drives. The scheme scales well and permits enterprise-scale topologies that easily support multi-node clustering for automatic failover and even load distribution.
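The quoted limits fit together arithmetically: with a 128-phy fan-out expander and 128-phy edge expanders, each of which spends one phy on its uplink, a simple idealized count reproduces the 16,256-device figure cited above (real topologies have additional constraints):

```python
# Idealized SAS fan-out arithmetic (topology details simplified).
ports_per_expander = 128                   # physical links (phys) per expander
drives_per_edge = ports_per_expander - 1   # one phy reserved for the uplink

edge_expanders = 128                       # edge expanders behind one fan-out expander
total_devices = edge_expanders * drives_per_edge

print(total_devices)  # 16256
```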

One of the biggest benefits of the new serial technology is that the SAS interface will also be compatible with lower-cost SATA drives, allowing system designers to use both types of drives in the same system without incurring additional costs to support two different interfaces. Thus, SAS, the next generation of SCSI technology, overcomes the current limitations of parallel technologies in terms of performance, scalability and data availability.

Multiple levels of compatibility

Physical Compatibility

The SAS connector is universal and is compatible with SATA in form factor. This allows both SAS and SATA drives to be directly connected to the SAS system, allowing the system to be used either for mission-critical applications that require high performance and fast data access, or for more cost-effective applications with a lower cost per gigabyte.

The SATA command set is a subset of the SAS command set, allowing compatibility between SATA devices and SAS controllers. However, SAS drives cannot work with a SATA controller, so they are equipped with special keys on the connectors to eliminate the possibility of incorrect connection.

Additionally, the physical similarity of the SAS and SATA interfaces allows the use of a new universal SAS backplane that supports both SAS and SATA drives, eliminating the need for two different backplanes for SCSI and ATA drives. This design compatibility benefits both backplane manufacturers and end users by reducing hardware and engineering costs.

Protocol Compatibility

SAS technology includes three protocols, each used to carry a different type of data over the serial interface depending on the device being accessed. The first, Serial SCSI Protocol (SSP), carries SCSI commands; the second, SCSI Management Protocol (SMP), carries control information for expanders; the third, SATA Tunneled Protocol (STP), sets up connections that carry SATA commands. Thanks to these three protocols, the SAS interface is fully compatible with existing SCSI applications, management software and SATA devices.

This multi-protocol architecture, combined with the physical compatibility of SAS and SATA connectors, makes SAS technology the universal link between SAS and SATA devices.

Benefits of Compatibility

SAS and SATA compatibility provides a number of benefits to system designers, builders, and end users.

Thanks to SAS and SATA compatibility, system designers can use the same backplanes, connectors and cabling. Upgrading a system from SATA to SAS effectively comes down to swapping disk drives, whereas for users of traditional parallel interfaces moving from ATA to SCSI means replacing backplanes, connectors, cables and drives. Other cost benefits of serial technology interoperability include simplified certification and asset management.

VAR resellers and system builders can easily and quickly reconfigure custom systems by simply installing the appropriate disk drive into the system. There is no need to work with incompatible technologies and use special connectors and different cable connections. Moreover, the added flexibility to balance price and performance will allow VAR resellers and system builders to better differentiate their products.

For end users, SATA and SAS compatibility means a new level of flexibility in choosing the optimal price-performance ratio. SATA drives are the better solution for low-cost servers and storage systems, while SAS drives provide maximum performance, reliability and compatibility with management software. The ability to upgrade from SATA to SAS drives without purchasing a new system significantly simplifies the purchasing decision, protects the investment in the system and reduces total cost of ownership.

Joint development of SAS and SATA protocols

On January 20, 2003, the SCSI Trade Association (STA) and the Serial ATA (SATA) II Working Group announced a collaboration to ensure system-level compatibility of SAS technology with SATA disk drives.

The collaboration between the two organizations, together with the joint efforts of storage vendors and standards committees, aims to provide more precise interoperability guidelines that will help system designers, IT professionals and end users tune their systems for optimal performance and reliability and a lower total cost of ownership.

The SATA 1.0 specification was approved in 2001, and today there are SATA products on the market from various manufacturers. The SAS 1.0 specification was approved in early 2003, and the first products should hit the market in the first half of 2004.

In this article we'll talk about what allows a hard drive to be connected to a computer, namely the hard drive interface. More precisely, about hard drive interfaces, because a great many technologies have been invented for connecting these devices over their history, and the abundance of standards in this area can confuse an inexperienced user. However, first things first.

Hard drive interfaces (or, strictly speaking, external drive interfaces, since they serve not only hard drives but also other drive types, such as optical drives) are designed to exchange information between these storage devices and the motherboard. Interfaces affect many of the operating characteristics of drives and their performance no less than the drives' physical parameters do. In particular, the interface determines such parameters as the speed of data exchange between the drive and the motherboard, the number of devices that can be connected to the computer, the ability to create disk arrays, support for hot plugging and for NCQ and AHCI technologies, and so on. The interface also determines which cable, cord or adapter you will need to connect the drive to the motherboard.

SCSI - Small Computer System Interface

The SCSI interface is one of the oldest interfaces designed for connecting storage devices to personal computers. The standard appeared in the early 1980s. One of its developers was Alan Shugart, also known as the inventor of the floppy disk drive.

Appearance of the SCSI interface on the board and the cable connecting to it

The SCSI standard (the abbreviation is traditionally pronounced "scuzzy") was originally intended for use in personal computers, as the very name of the format suggests: Small Computer System Interface, or a system interface for small computers. In practice, however, drives of this type were used mainly in top-class personal computers and, later, in servers. The reason was that, despite its successful architecture and rich command set, the interface was technically complex to implement and too expensive for mass-market PCs.

Nevertheless, the standard had a number of features unavailable to other interface types. For example, a Small Computer System Interface cable can be up to 12 m long, and data transfer speeds reached 640 MB/s.

Like the IDE interface that appeared a little later, the SCSI interface is parallel: data is transmitted over a bus with many parallel conductors. This feature became one of the factors limiting the standard's development, so a more advanced serial standard, SAS (Serial Attached SCSI), was developed to replace it.

SAS - Serial Attached SCSI

This is what the SAS server disk interface looks like

Serial Attached SCSI was developed as an improvement on the rather old Small Computer System Interface for connecting hard drives. Although Serial Attached SCSI retains the main strengths of its predecessor, it offers many advantages of its own. Among them, the following are worth noting:

  • Point-to-point connections: each device gets a dedicated link instead of sharing a common bus.
  • The serial communication protocol used by SAS allows for fewer signal lines to be used.
  • There is no need for bus termination.
  • Virtually unlimited number of connected devices.
  • Higher throughput (up to 12 Gbit/s). Future implementations of the SAS protocol are expected to support data transfer rates of up to 24 Gbit/s.
  • Possibility of connecting drives with Serial ATA interface to the SAS controller.
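The per-lane throughput figures quoted above and in the introduction can be checked with simple arithmetic. SAS links up to 12 Gbit/s use 8b/10b encoding, so 10 bits on the wire carry one byte of payload; that is why a 3 Gbit/s lane delivers about 300 MB/s. (Note that the 24G generation switches to a different encoding, so this simple formula only applies up to 12 Gbit/s.)

```python
# Back-of-the-envelope check of SAS per-lane payload bandwidth,
# assuming 8b/10b encoding (valid for the 3/6/12 Gbit/s generations).

def sas_payload_mb_s(line_rate_gbit: float) -> float:
    """Usable payload bandwidth of one lane, in MB/s, under 8b/10b encoding."""
    line_bits_per_s = line_rate_gbit * 1e9
    data_bytes_per_s = line_bits_per_s / 10  # 10 line bits carry 1 data byte
    return data_bytes_per_s / 1e6

for rate in (3, 6, 12):
    print(f"{rate} Gbit/s lane -> {sas_payload_mb_s(rate):.0f} MB/s")
# 3 Gbit/s lane -> 300 MB/s
# 6 Gbit/s lane -> 600 MB/s
# 12 Gbit/s lane -> 1200 MB/s
```

These are raw link figures; protocol overhead and the drive's own media rate reduce real-world throughput further.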

As a rule, Serial Attached SCSI systems are built on the basis of several components. The main components include:

  • Target devices. This category includes the actual drives or disk arrays.
  • Initiators are chips designed to generate requests to target devices.
  • Data delivery system - the cables connecting target devices and initiators.
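The relationship between these components, including how an expander lets one controller port fan out to several drives, can be modeled in a few lines. The class and field names below are invented for illustration; they are not from any real SAS library.

```python
# A minimal, illustrative data model of a SAS domain: an initiator's ports
# connect either directly to a target or to an expander that fans out
# to several targets.
from dataclasses import dataclass, field

@dataclass
class Target:
    """An end device: a SAS/SATA drive or a disk array."""
    name: str

@dataclass
class Expander:
    """Fans one upstream link out to many downstream targets."""
    targets: list = field(default_factory=list)

@dataclass
class Initiator:
    """Controller chip that issues requests; each port is one cable link."""
    ports: list = field(default_factory=list)  # each entry: Target or Expander

    def reachable_drives(self) -> list:
        drives = []
        for dev in self.ports:
            if isinstance(dev, Target):
                drives.append(dev.name)
            else:  # an expander: everything behind it is reachable too
                drives.extend(t.name for t in dev.targets)
        return drives

# One direct drive plus an expander with three drives: four drives on two ports.
hba = Initiator(ports=[
    Target("disk0"),
    Expander(targets=[Target("disk1"), Target("disk2"), Target("disk3")]),
])
print(hba.reachable_drives())  # ['disk0', 'disk1', 'disk2', 'disk3']
```

This is exactly the mechanism mentioned in the introduction: expanders are what allow a controller to address more drives than it has physical ports.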

Serial Attached SCSI connectors come in different shapes and sizes, depending on the type (external or internal) and the SAS version. Below are the SFF-8482 internal connector and the SFF-8644 external connector designed for SAS-3:

On the left is an internal SAS SFF-8482 connector; on the right is an external SAS SFF-8644 connector with cable.

A few examples of the appearance of SAS cables and adapters: an HD Mini SAS cable and a SAS to Serial ATA adapter cable.

On the left is the HD Mini SAS cable; on the right is an adapter cable from SAS to Serial ATA.

Firewire - IEEE 1394

Today you can often find hard drives with a Firewire interface. Although virtually any type of peripheral device can be connected to a computer via Firewire, so it cannot be called a specialized interface intended exclusively for hard drives, Firewire has a number of features that make it extremely convenient for this purpose.

FireWire - IEEE 1394 - view on a laptop

The Firewire interface was developed in the mid-1990s. Development was begun by the well-known company Apple, which needed its own bus, distinct from USB, for connecting peripheral equipment, primarily multimedia devices. The specification describing the operation of the Firewire bus is called IEEE 1394.

Firewire is one of the most commonly used high-speed serial external bus formats today. The main features of the standard include:

  • Possibility of hot connection of devices.
  • Open bus architecture.
  • Flexible topology for connecting devices.
  • Data transfer speeds vary widely – from 100 to 3200 Mbit/s.
  • The ability to transfer data between devices without a computer.
  • The ability to organize local networks over the bus.
  • Power transmission via bus.
  • A large number of connected devices (up to 63).

To connect hard drives (usually in external enclosures) via the Firewire bus, the SBP-2 standard is typically used, which carries the Small Computer System Interface command set. Firewire devices can also be attached to a regular USB port, but this requires a special adapter that converts between the two protocols.

IDE - Integrated Drive Electronics

The abbreviation IDE is undoubtedly familiar to most personal computer users. The IDE hard drive interface standard was developed by a well-known hard drive manufacturer, Western Digital. The advantage of IDE over the other interfaces of the time, in particular the Small Computer System Interface and the ST-506 standard, was that there was no need to install a hard drive controller on the motherboard. The IDE standard placed the controller on the drive itself, leaving only a host adapter on the motherboard for connecting IDE drives.

IDE interface on motherboard

This innovation improved the operating parameters of IDE drives, since the distance between the controller and the drive itself was reduced. In addition, moving the controller inside the hard drive case somewhat simplified both motherboards and the manufacture of the drives themselves, since the technology gave manufacturers freedom to organize the drive's logic optimally.

The new technology was initially called Integrated Drive Electronics. Subsequently, a standard was developed to describe it, called ATA. This name is derived from the last part of the name of the PC/AT family of computers by adding the word Attachment.

To connect a hard drive, or another device supporting Integrated Drive Electronics technology such as an optical drive, to the motherboard, a special IDE cable is used. Since ATA is a parallel interface (hence it is also called Parallel ATA or PATA), that is, one that transmits data simultaneously over several lines, its data cable has many conductors (usually 40; later versions of the protocol allowed an 80-conductor cable). A typical data cable of this standard is flat and wide, although round cables also exist. The power cable for Parallel ATA drives has a 4-pin connector and is connected to the computer's power supply.

Below are examples of IDE cable and round PATA data cable:

Appearance of the interface cable: on the left - flat, on the right in a round braid - PATA or IDE.

Thanks to the comparatively low cost of Parallel ATA drives, the ease of implementing the interface on the motherboard, and the simplicity of installing and configuring PATA devices, drives of the Integrated Drive Electronics type long dominated the hard drive market for budget personal computers, pushing out devices with other interface types.

However, the PATA standard also has a number of disadvantages. First of all, there is the limit on the length of a Parallel ATA data cable: no more than about 46 cm (18 inches). In addition, the parallel organization of the interface imposes restrictions on the maximum data transfer speed. The PATA standard also lacks many of the advanced features of other interface types, such as hot plugging of devices.

SATA - Serial ATA

View of the SATA interface on the motherboard

The SATA (Serial ATA) interface, as its name suggests, is an improvement on ATA. The improvement consists, first of all, in converting the traditional parallel ATA (Parallel ATA) into a serial interface. But the differences between the Serial ATA standard and the traditional one do not end there: along with the change from parallel to serial data transmission, the data and power connectors also changed.

Below is the SATA data cable:

Data cable for SATA interface

This made it possible to use a much longer cable (up to 1 m) and to increase the data transfer speed. The downside, however, was that the PATA devices present on the market in huge quantities before the advent of SATA could no longer be connected directly to the new connectors. True, most new motherboards still carried the old connectors and supported older devices. But the reverse operation, connecting a new type of drive to an old motherboard, usually causes far more problems, and the user typically needs a Serial ATA to PATA adapter. The power cable adapter usually has a relatively simple design.

Serial ATA to PATA power adapter:

On the left is a general view of the cable; on the right is an enlarged view of the PATA and Serial ATA connectors.

The situation is more complicated with a device for connecting a serial interface drive to a parallel interface connector: an adapter of this type is typically built around a small converter chip.

Appearance of a universal bidirectional adapter between SATA - IDE interfaces

Currently, the Serial ATA interface has practically replaced Parallel ATA, and PATA drives can now be found mainly in fairly old computers. Another feature of the new standard that ensured its wide popularity was its support for NCQ (Native Command Queuing) technology.

Type of adapter from IDE to SATA

It is worth saying a little more about NCQ technology. Its main value is that it brings to SATA ideas long implemented in the SCSI protocol. In particular, NCQ allows a drive to accept several read/write commands at once and reorder them internally so that they are serviced in the most efficient sequence. NCQ can therefore significantly improve drive performance, especially in hard drive arrays.
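The idea behind this reordering can be shown with a toy model: the drive holds a queue of pending commands, identified by their logical block addresses (LBAs), and services them in an order that minimizes head travel rather than strictly first-come-first-served. The greedy nearest-first strategy below is a deliberate simplification of what real drive firmware does.

```python
# Toy illustration of NCQ-style command reordering: given the current head
# position and a queue of pending LBAs, service the nearest request first.

def reorder_nearest_first(head_lba: int, queued_lbas: list) -> list:
    """Return the service order that greedily minimizes each seek distance."""
    pending = list(queued_lbas)
    order = []
    pos = head_lba
    while pending:
        nxt = min(pending, key=lambda lba: abs(lba - pos))  # closest request
        pending.remove(nxt)
        order.append(nxt)
        pos = nxt  # head is now over the block just serviced
    return order

# Arrival order would force long seeks back and forth; the reordered
# sequence sweeps outward from the head position instead.
queue = [900, 10, 500, 40]
print(reorder_nearest_first(50, queue))  # [40, 10, 500, 900]
```

Without reordering, the head would travel from LBA 50 out to 900, back to 10, out to 500 and back to 40; with it, total head travel is much smaller, which is where NCQ's performance gain comes from.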

Type of adapter from SATA to IDE

To use NCQ, the technology must be supported both by the hard drive and by the motherboard's host adapter. Almost all adapters that support AHCI also support NCQ, as do some older proprietary adapters. Finally, NCQ also requires support from the operating system.

eSATA - External SATA

The eSATA (External SATA) format deserves separate mention: it seemed promising at the time but never became widespread. As the name suggests, eSATA is a variant of Serial ATA designed exclusively for connecting external drives. The eSATA standard offers external devices most of the capabilities of standard, internal Serial ATA, in particular the same signals and commands and the same high speed.

eSATA connector on a laptop

However, eSATA also differs in some ways from the internal bus standard that gave rise to it. In particular, eSATA supports a longer data cable (up to 2 m) and imposes higher power requirements on drives. In addition, eSATA connectors differ slightly from standard Serial ATA connectors.

Compared to other external buses such as USB and Firewire, however, eSATA has one significant drawback. While those buses can power a device over the bus cable itself, an eSATA drive requires a separate power connection. So despite its relatively high data transfer speed, eSATA is not very popular today as an interface for connecting external drives.

Conclusion

Information stored on a hard drive cannot become useful to the user or accessible to application programs until it is accessed by the computer's processor. Hard drive interfaces provide the means of communication between these drives and the motherboard. Today there are many different types of hard drive interfaces, each with its own advantages, disadvantages and characteristics. We hope the information in this article proves useful to the reader, because the choice of a modern hard drive is determined not only by its internal characteristics, such as capacity, cache size, access time and spindle speed, but also by the interface for which it was developed.



