Friday, October 21, 2005

Read/write performance on the volumes

Hi Greg,

I can't offer any suggestions, but I am interested in knowing how you
are measuring the read/write performance on the volumes. An HDS tool, or
something more common?

Thanks, V

> Hello,

> Just gathering some quick info. I am at a customer site installing a new
> HDS 9990. Here's a high-level config overview:

> HDS 9990 (OPEN-V 40 GB LUNs)
> HDS 9200 (Legacy Array)
> Sun Fire v880
> Brocade 4100's (2 Fabrics)
> QLogic 2 Gb cards (375-3102) to the new SAN
> JNI FCE2-6412 cards to the old HDS 9200 array
> MPXIO enabled and configured for Round Robin
> VxVM 4.1
> Oracle 9i

> During this phased implementation, we are in the data migration stage.
> We are mirroring the old storage from the HDS 9200 to the new LUNs on
> the TS (9990).
> Once the mirroring is complete, we will break off the plexes from the
> old array and be fully migrated to the new Hitachi.

> The customer decided not to break the mirrors yet. We have noticed a
> decrease in both read and write performance on all the volumes on the
> host. I would expect a slight decrease in write performance; however,
> we are seeing up to a 1/5 millisecond increase in read time as well on
> each of the volumes. My assumption is that the double writes to two
> different (types of) LUNs are impacting our reads.

> Suggestions?

Reply

Hi

I am using vxstat -g <diskgroup> -i 1 (the -i flag sets the polling
interval, in this case one second). The output looks like this:

              OPERATIONS         BLOCKS       AVG TIME(ms)
TYP  NAME     READ  WRITE     READ   WRITE    READ  WRITE
vol  ora00   39364    388  4931856    6208     1.8    0.1
vol  ora00   39585    389  4950704    6224     1.9    0.0
vol  ora00   39571    391  4954960    6256     1.8    0.1
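Not something from the original exchange, but as a sketch of how that
vxstat output could be post-processed programmatically to flag slow
intervals. The sample lines, volume name, and the 2.0 ms threshold below
are illustrative assumptions, not measurements from the post:

```python
# Sketch: parse vxstat-style "vol" lines and flag samples whose average
# read latency exceeds a threshold. All sample values are made up.

SAMPLE = """\
vol ora00 39364 388 4931856 6208 1.8 0.1
vol ora00 39585 389 4950704 6224 1.9 0.0
vol ora00 39571 391 4954960 6256 2.3 0.1
"""

def parse_vxstat(text):
    """Yield (name, read_ops, write_ops, read_ms, write_ms) per sample."""
    for line in text.splitlines():
        parts = line.split()
        if len(parts) == 8 and parts[0] == "vol":
            yield (parts[1], int(parts[2]), int(parts[3]),
                   float(parts[6]), float(parts[7]))

def slow_reads(text, threshold_ms=2.0):
    """Return only the samples whose average read time is over threshold."""
    return [s for s in parse_vxstat(text) if s[3] > threshold_ms]

print(slow_reads(SAMPLE))  # [('ora00', 39571, 391, 2.3, 0.1)]
```

In practice you would feed it live output, e.g. from
`vxstat -g <diskgroup> -i 1` via a pipe, rather than a canned string.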

As for Solaris LUN metrics, I generally use iostat -xnpz 1, which gives
me the disk and tape I/O data while excluding devices whose stats are
all zeros. It's a lot of information, so I grep out what I am looking
for, for example: iostat -xnpz 1 | grep c5t0d0.
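One caveat with the plain grep approach is that it drops iostat's header
rows, so the columns become hard to read. As an illustrative sketch (the
sample output below is made up, not from the original post), a filter
that keeps any line mentioning the device or the word "device" preserves
the headers:

```python
# Sketch: filter iostat -xnpz-style output for one device while keeping
# the header rows that a bare grep would discard. Sample text is made up.

SAMPLE = """\
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
   12.0    3.0  512.0   24.0  0.0  0.1    0.0    4.2   0   5 c5t0d0
    1.0    0.0    8.0    0.0  0.0  0.0    0.0    1.1   0   0 c6t0d0
"""

def filter_device(text, device):
    """Keep header lines plus the rows for the requested device."""
    keep = [line for line in text.splitlines()
            if device in line or "device" in line]
    return "\n".join(keep)

print(filter_device(SAMPLE, "c5t0d0"))
```

The same effect in the shell would be something like
`iostat -xnpz 1 | egrep 'device|c5t0d0'`.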

Thanks,

Greg
