Storwize V7000 Performance (English)
1. A comparison of storage performance
IBM DS4700 vs. IBM DS5300 vs. IBM Storwize V7000
2. 1. Configuration of the tested storage systems:
IBM DS4700: 1GB RAM per controller, 15K RPM 3.5" HDD (ST3300657FC, 146GB)
IBM DS5300: 8GB RAM per controller, 15K RPM 3.5" HDD (MBA3147FD, 300GB)
IBM Storwize V7000: 8GB RAM per controller, 10K RPM 2.5" HDD (Savvio 10K.3, 300GB)
Features of the HDDs used in the tested storage systems:

Feature                | ST3300657FC          | MBA3147FD                  | Savvio 10K.3
Average latency        | 2 ms                 | 2 ms                       | 3 ms
Platters               | 2                    | 2                          | 2
Areal density          | 112.8 Gbits/sq. in.  | 225 Gbits/sq. in. (avg)    | 252 Gbits/sq. in. (avg)
Power                  | 14.8 W (ready)       | 13.8 W (avg operating)     | 4.4 W (avg operating)
As can be seen from the table above, 10K RPM 2.5" SFF HDDs lose to 15K RPM 3.5" HDDs in performance, but have substantially lower power consumption. In single-disk tests, the 15K RPM 3.5" HDDs deliver about 40% more I/O operations per second (IOPS). The question in our case is how this affects the final performance of the storage systems: will we observe the same 40% difference in IOPS? Let us proceed directly to the test.
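The single-disk gap can be sanity-checked from the drives' mechanics. The sketch below estimates random IOPS from rotational latency plus average seek time; the seek times are typical vendor figures for these drive classes (an assumption, not a measurement from this test). This simple model gives a smaller gap than the ~40% seen in single-disk tests, since real results also reflect areal density and drive caching.

```python
# Rough single-disk random-IOPS estimate from rotational latency and seek time.
# Average rotational latency = half a revolution: 60000 / RPM / 2 (in ms).

def rotational_latency_ms(rpm: int) -> float:
    return 60000 / rpm / 2

def est_iops(rpm: int, avg_seek_ms: float) -> float:
    # One random I/O costs roughly one average seek plus rotational latency.
    return 1000 / (avg_seek_ms + rotational_latency_ms(rpm))

# Assumed typical average seek times: ~3.5 ms for 15K 3.5", ~4.0 ms for 10K 2.5".
iops_15k = est_iops(15000, 3.5)   # ~182 IOPS
iops_10k = est_iops(10000, 4.0)   # ~143 IOPS
print(f"15K: {iops_15k:.0f} IOPS, 10K: {iops_10k:.0f} IOPS, "
      f"advantage: {100 * (iops_15k / iops_10k - 1):.0f}%")
```

Note that the 2 ms and 3 ms latency figures in the table above match the half-revolution formula exactly.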
2. When comparing the performance of R10 RAID groups, the following configuration was used:
IBM DS4700: one R10 RAID group (8+8)
IBM DS5300: one R10 RAID group (8+8)
IBM Storwize V7000: one R10 RAID group (8+8)
3. 2.1. Results of testing with a load of 64 I/O processes for block sizes 4KB, 8KB, 32KB, 64KB and 256KB at 100% random and a load ratio of R/W = 80/20 are shown in diagram 2.1A. The corresponding response times of the disk subsystem during testing are shown in diagram 2.1B.
[Diagram 2.1A: IOPS vs. block size (4K, 8K, 32K, 64K, 256K); R10, 64 I/O threads, 100% random, R/W=80/20; series: IBM DS4700 (16x 15K 3.5" HDD), IBM DS5300 (16x 15K 3.5" HDD), IBM Storwize V7000 (16x 10K 2.5" HDD)]
[Diagram 2.1B: latency (ms) vs. block size; same load and configurations]
4. 2.2. Results of testing with a load of 512 I/O processes for block sizes 4KB, 8KB, 32KB, 64KB and 256KB at 100% random and a load ratio of R/W = 80/20 are shown in diagram 2.2A. The corresponding response times of the disk subsystem during testing are shown in diagram 2.2B.
[Diagram 2.2A: IOPS vs. block size (4K, 8K, 32K, 64K, 256K); R10, 512 I/O threads, 100% random, R/W=80/20; series: IBM DS4700 (16x 15K 3.5" HDD), IBM DS5300 (16x 15K 3.5" HDD), IBM Storwize V7000 (16x 10K 2.5" HDD)]
[Diagram 2.2B: latency (ms) vs. block size; same load and configurations]
5. 3. When comparing the performance of R5 RAID groups, the following configuration was used:
IBM DS4700: one R5 RAID group (15+1)
IBM DS5300: one R5 RAID group (15+1)
IBM Storwize V7000: one R5 RAID group (15+1)
3.1. Results of testing with a load of 64 I/O processes for block sizes 4KB, 8KB, 32KB, 64KB and 256KB at 100% random and a load ratio of R/W = 80/20 are shown in diagram 3.1A. The corresponding response times of the disk subsystem during testing are shown in diagram 3.1B.
[Diagram 3.1A: IOPS vs. block size (4K, 8K, 32K, 64K, 256K); R5, 64 I/O threads, 100% random, R/W=80/20; series: IBM DS4700 (16x 15K 3.5" HDD), IBM DS5300 (16x 15K 3.5" HDD), IBM Storwize V7000 (16x 10K 2.5" HDD)]
[Diagram 3.1B: latency (ms) vs. block size; same load and configurations]
6. 3.2. Results of testing with a load of 512 I/O processes for block sizes 4KB, 8KB, 32KB, 64KB and 256KB at 100% random and a load ratio of R/W = 80/20 are shown in diagram 3.2A. The corresponding response times of the disk subsystem during testing are shown in diagram 3.2B.
[Diagram 3.2A: IOPS vs. block size (4K, 8K, 32K, 64K, 256K); R5, 512 I/O threads, 100% random, R/W=80/20; series: IBM DS4700 (16x 15K 3.5" HDD), IBM DS5300 (16x 15K 3.5" HDD), IBM Storwize V7000 (16x 10K 2.5" HDD)]
[Diagram 3.2B: latency (ms) vs. block size; same load and configurations]
7. 4. When comparing the performance of R6 RAID groups, the following configuration was used:
IBM DS4700: does not support this RAID level
IBM DS5300: one R6 RAID group (14+2)
IBM Storwize V7000: one R6 RAID group (14+2)
4.1. Results of testing with a load of 64 I/O processes for block sizes 4KB, 8KB, 32KB, 64KB and 256KB at 100% random and a load ratio of R/W = 80/20 are shown in diagram 4.1A. The corresponding response times of the disk subsystem during testing are shown in diagram 4.1B.
[Diagram 4.1A: IOPS vs. block size (4K, 8K, 32K, 64K, 256K); R6, 64 I/O threads, 100% random, R/W=80/20; series: IBM DS5300 (16x 15K 3.5" HDD) and IBM Storwize V7000 (16x 10K 2.5" HDD); IBM DS4700 not shown, as it does not support R6]
[Diagram 4.1B: latency (ms) vs. block size; same load and configurations]
8. 4.2. Results of testing with a load of 512 I/O processes for block sizes 4KB, 8KB, 32KB, 64KB and 256KB at 100% random and a load ratio of R/W = 80/20 are shown in diagram 4.2A. The corresponding response times of the disk subsystem during testing are shown in diagram 4.2B.
[Diagram 4.2A: IOPS vs. block size (4K, 8K, 32K, 64K, 256K); R6, 512 I/O threads, 100% random, R/W=80/20; series: IBM DS5300 (16x 15K 3.5" HDD) and IBM Storwize V7000 (16x 10K 2.5" HDD); IBM DS4700 not shown, as it does not support R6]
[Diagram 4.2B: latency (ms) vs. block size; same load and configurations]
As these results show, with the same number of HDDs the IBM Storwize V7000 delivers a better result* than the older IBM DS4700 on all RAID levels, and trails the IBM DS5300 by 5-15% in IOPS on average, while in a third of the tests it shows a lower response time.
* The significantly worse IOPS and response-time results of the IBM DS4700 at the 256KB block size are presumably due to the lower areal density of the HDDs it uses.
9. 5. It is well known that the random-write performance of an R5 or R6 RAID group is significantly lower than that of R10. But in a typical database workload the share of write operations is, as a rule, many times smaller than that of reads. What will the difference be for a test pattern simulating a typical DBMS load? For ease of comparison, we gather the IBM Storwize V7000 results on one chart (5A) and the IBM DS5300 results on another (5B).
As we can see, with the pattern used, the R5 vs. R10 performance difference is 20-25%, while the difference in usable capacity is almost twofold.
Also worth noting is the very small performance difference between R5 and R6 in this pattern, despite a significant difference in the resiliency of the RAID groups: R5 can survive the failure of any single disk without data loss, while R6 can survive the simultaneous failure of any two drives.
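The capacity and write-penalty arithmetic behind this comparison can be sketched as follows. The per-write backend-I/O penalties (2 for R10, 4 for R5, 6 for R6) are the textbook values for these RAID levels, not measurements from this test:

```python
# Usable capacity and theoretical backend load for the tested RAID layouts.
# write_penalty = backend disk I/Os generated per frontend random write.
layouts = {
    "R10 (8+8)": {"data_disks": 8,  "write_penalty": 2},
    "R5 (15+1)": {"data_disks": 15, "write_penalty": 4},
    "R6 (14+2)": {"data_disks": 14, "write_penalty": 6},
}

read_share, write_share = 0.8, 0.2  # the R/W = 80/20 test pattern

for name, p in layouts.items():
    # Backend I/Os per frontend I/O for the 80/20 mix.
    backend_per_io = read_share + write_share * p["write_penalty"]
    print(f'{name}: usable capacity {p["data_disks"]} drives, '
          f'{backend_per_io:.1f} backend I/Os per frontend I/O')

# Usable capacity: R5 offers 15/8 ~ 1.9x the space of R10 ("almost doubled"),
# while the 80/20 backend-load model predicts R10 ahead of R5 by ~33%
# (1.6 / 1.2), in the same ballpark as the 20-25% observed in the test.
```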
[Diagram 5A: IOPS vs. block size (4K, 8K, 32K, 64K, 256K) for IBM Storwize V7000 in R6, R5 and R10; 512 I/O threads, 100% random, R/W=80/20]
[Diagram 5B: the same comparison for IBM DS5300 in R6, R5 and R10]
10. 6. Let us estimate the data-center space efficiency of each storage system, using as an example the kind of performance that can presumably be obtained from 18U** of rack-mount cabinet space.
** for IBM DS5300 - 19U
[Diagram 6A: IOPS vs. block size (4K, 8K, 32K, 64K, 256K); R10, 512 I/O threads, 100% random, R/W=80/20; series: IBM DS4700 (96x 15K 3.5" HDD, total 18U), IBM DS5300 (80x 15K 3.5" HDD, total 19U), IBM Storwize V7000 (216x 10K 2.5" HDD, total 18U)]
[Diagram 6B: the same comparison for R5]
As can be seen from the charts above, for the tested storage systems the 2.5" 10K RPM SFF disks use data-center space much more efficiently than the 3.5" 15K RPM LFF disks: by more than a factor of 2.
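The density side of this claim follows directly from the section-6 configurations:

```python
# Drive density per rack unit for the configurations in section 6.
configs = {
    "IBM DS4700":         {"drives": 96,  "rack_units": 18},
    "IBM DS5300":         {"drives": 80,  "rack_units": 19},
    "IBM Storwize V7000": {"drives": 216, "rack_units": 18},
}

for name, c in configs.items():
    print(f'{name}: {c["drives"] / c["rack_units"]:.1f} drives per U')

# V7000: 12.0 drives/U vs. DS4700: 5.3 drives/U, i.e. a 2.25x density
# advantage for the 2.5" SFF drives, consistent with "more than a factor of 2".
```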
11. 6.1. Let us calculate the power consumption for each of the configurations from section 6 and plot it in diagram 6.1A. From that data, we calculate the average density in watts per rack unit, shown in diagram 6.1B.
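A drive-only lower bound on this power draw can be derived from the per-drive figures in the HDD table in section 1 (as in that table, a "ready" figure is used for the ST3300657FC and "average operating" figures for the other two). Controller, enclosure and fan power are not included, so the totals in diagram 6.1A will be higher:

```python
# Drive-only power draw for the section-6 configurations, using the per-drive
# wattage from the HDD feature table in section 1.
configs = {
    "IBM DS4700 (96x ST3300657FC)":     {"drives": 96,  "watts_per_drive": 14.8, "rack_units": 18},
    "IBM DS5300 (80x MBA3147FD)":       {"drives": 80,  "watts_per_drive": 13.8, "rack_units": 19},
    "IBM Storwize V7000 (216x Savvio)": {"drives": 216, "watts_per_drive": 4.4,  "rack_units": 18},
}

for name, c in configs.items():
    total = c["drives"] * c["watts_per_drive"]
    print(f'{name}: {total:.1f} W total, '
          f'{total / c["rack_units"]:.1f} W/U (drives only)')
```

Notably, despite holding 2.25x as many drives per U, the V7000 configuration draws the least drive power of the three.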
[Diagram 6.1A: total power consumption in watts for each configuration: IBM DS4700 (96x 15K 3.5" HDD, total 18U), IBM DS5300 (80x 15K 3.5" HDD, total 19U), IBM Storwize V7000 (216x 10K 2.5" HDD, total 18U)]
[Diagram 6.1B: average power density in watt/unit for the same systems]
12. 6.2. Let us evaluate a derived efficiency figure for each storage system by the criterion of maximum/minimum IOPS per unit*** (diagram 6.2A) and maximum/minimum IOPS per watt*** (diagram 6.2B). By these criteria, the advantage of storage with SFF HDDs is more than obvious.
*** Only HDD media were used in the tested and calculated configurations; with SSDs, IOPS/unit and IOPS/watt would be much higher.
[Diagram 6.2A: maximum and minimum IOPS/unit for IBM DS4700 (15K 3.5" HDD), IBM DS5300 (15K 3.5" HDD) and IBM Storwize V7000 (10K 2.5" HDD)]
[Diagram 6.2B: maximum and minimum IOPS/watt for the same systems]
13. Appendix
- All tested storage systems were connected to the same LPAR running AIX 6.1 TL6 SP4
- The JFS2 file system was used
- The size of the tested area was 640GB (the total size of the test files)
- The nstress package, specifically the ndisk64 utility, was used to generate the test load
- The NMON and NMON Analyser utilities were used to collect and process storage response-time information, respectively