IDS9.4/AIX5.2 performance degradation

#1 | 30th January, 12:59 | dalibor krleza


Hello everybody!

Well, we are experiencing the following problem:
1) Before migrating to the new system (server and operating system), we
had an IDS 7.31 database installed on the Caldera operating system.

2) We migrated to IDS 9.4 FC4 on AIX 5.2, running on an
IBM pSeries server (p630).

All database configuration settings were copied to the new server.

Since then we have been experiencing significant performance
degradation, roughly 4x slower than before.

Any ideas?

Dalibor.


#2 | 1st February, 22:52 | neil truby


Update statistics?

#3 | 1st February, 22:52 | art s. kagel


Several. Like Neil and Keith, I'm going to recommend that you start by
running the recommended suite of UPDATE STATISTICS commands as outlined in the
Performance Guide. If you've already updated stats, please post the
command(s) or a sample of the commands you used.
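
For example, the suite looks something like this (a sketch only, run from
dbaccess; the table and column names are placeholders for your own schema):

UPDATE STATISTICS MEDIUM DISTRIBUTIONS ONLY;         -- data distributions for all tables
UPDATE STATISTICS HIGH FOR TABLE orders (order_id);  -- repeat for the lead column of each index
UPDATE STATISTICS FOR PROCEDURE;                     -- refresh stored query plans for SPL routines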

If you want help tuning the beast, post the following (not as attachments!):

System description: # CPUs, disk farm (RAID levels, # spindles, local or SAN,
etc.),
contents of your ONCONFIG file,
elapsed time since onstat -z was last run (or since startup if it was never run),
output from the following (a small collection script follows the list):
onstat -p
onstat -P
onstat -d
onstat -D
onstat -g glo
onstat -g iov
onstat -g iof
onstat -F
onstat -R
onstat -l
onstat -T
onstat -g dsc
onstat -g dic
onstat -g prc
onstat -g env
onstat -g seg
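
Something like this shell snippet (assuming your Informix environment -
INFORMIXDIR, INFORMIXSERVER, PATH - is already set) will capture the whole
set into one file for posting:

for opt in -p -P -d -D "-g glo" "-g iov" "-g iof" -F -R -l -T \
           "-g dsc" "-g dic" "-g prc" "-g env" "-g seg"; do
    echo "=== onstat $opt ==="   # label each section of the report
    onstat $opt                  # unquoted on purpose: "-g xxx" splits into two arguments
done > onstat_report.txt 2>&1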

Art S. Kagel

#4 | 1st February, 22:53 | beers41


If you send a copy of your onstat -p output and what is in your ONCONFIG, I
may be able to help you. We run AIX all the time. I have 64- and 32-bit
versions of IDS 9.4 on AIX 5.2 singing my praises on pSeries boxes.
(druck1@eckerd.com)

#5 | 5th February, 23:06 | ben thompson


I may have missed it in your long post, but I didn't see anything about
disc layouts or the number of CPUs. I also couldn't see any output from
"onstat -g iof" or "onstat -g iov". I have commented on your ONCONFIG anyway.

Personally I think your log size is very small, although others may have
more of an idea about this than me.

Try setting this to 0 unless you need the stats.

Try setting this to 1 or -1.

Comment this out and use VPCLASS - see 9.40 performance guide.

Comment out NOAGE, AFF_* and use VPCLASS - see 9.40 performance guide.


Comment this out and use VPCLASS - see the 9.40 performance guide and the sketch below.
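
For illustration, the VPCLASS form looks something like this (the num values
are placeholders; choose them per the guide and your CPU count):

VPCLASS cpu,num=4,noage    # replaces NUMCPUVPS, NOAGE and the AFF_* parameters
VPCLASS aio,num=4          # replaces NUMAIOVPS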


How many discs are used by IDS for data? Generally speaking, this
parameter should be set to the number of discs you use. Any RAID discs
that appear as one disc to the OS count as 1. The 9.40 performance guide has more information.

You have a lot of shared memory segments. Try increasing this to more
like 80000, rather than 8000. Then check how many segments you have with
onstat -g seg. Ideally you want just one, but shared memory should not be
bigger than it needs to be.

Read up on LRUS in the 9.40 performance guide.

You may get more throughput if you set these values higher as long as
checkpoint times do not get too high.


Try 0 and see if this helps. It may or may not.

#6 | 5th February, 23:06 | dalibor krleza


Oops, sorry... here it is:

***ONSTAT -g iov***

IBM Informix Dynamic Server Version 9.40.FC4 -- On-Line -- Up 06:16:20 -- 943248 Kbytes

AIO I/O vps:
class/vp s io/s totalops dskread dskwrite dskcopy wakeups io/wup errors
kio 0 i 125.3 21921 21000 921 0 63403 0.3 0
kio 1 i 267.7 46843 46291 552 0 90272 0.5 0
msc 0 i 0.1 18 0 0 0 18 1.0 0
aio 0 i 0.2 27 9 0 0 27 1.0 0
aio 1 i 0.0 0 0 0 0 0 0.0 0
aio 2 i 0.0 0 0 0 0 0 0.0 0
aio 3 i 0.0 0 0 0 0 0 0.0 0
aio 4 i 0.0 0 0 0 0 0 0.0 0
aio 5 i 0.0 0 0 0 0 0 0.0 0
aio 6 i 0.0 0 0 0 0 0 0.0 0
aio 7 i 0.0 0 0 0 0 0 0.0 0
aio 8 i 0.0 0 0 0 0 0 0.0 0
aio 9 i 0.0 0 0 0 0 0 0.0 0
aio 10 i 0.0 0 0 0 0 0 0.0 0
aio 11 i 0.0 0 0 0 0 0 0.0 0
aio 12 i 0.0 0 0 0 0 0 0.0 0
aio 13 i 0.0 0 0 0 0 0 0.0 0
aio 14 i 0.0 0 0 0 0 0 0.0 0
aio 15 i 0.0 0 0 0 0 0 0.0 0
aio 16 i 0.0 0 0 0 0 0 0.0 0
aio 17 i 0.0 0 0 0 0 0 0.0 0
aio 18 i 0.0 0 0 0 0 0 0.0 0
aio 19 i 0.0 0 0 0 0 0 0.0 0
aio 20 i 0.0 0 0 0 0 0 0.0 0
aio 21 i 0.0 0 0 0 0 0 0.0 0
aio 22 i 0.0 0 0 0 0 0 0.0 0
aio 23 i 0.0 0 0 0 0 0 0.0 0
pio 0 i 0.0 0 0 0 0 0 0.0 0
lio 0 i 0.0 0 0 0 0 0 0.0 0

***ONSTAT -g iof***

IBM Informix Dynamic Server Version 9.40.FC4 -- On-Line -- Up 06:16:34 -- 943248 Kbytes

AIO global files:
gfd pathname totalops dskread dskwrite io/s
3 /dev/rlvinfx01 8 4 4 0.0
4 /dev/rlvinfx03 42 0 42 0.2
5 /dev/rlvinfx05 771 0 771 4.1
6 /dev/rlvinfx01 194 46 148 1.0
7 /dev/rlvinfx03 260 100 160 1.4
8 /dev/rlvinfx05 212 51 161 1.1
9 /dev/rlvinfx01 2332 2126 206 12.3
10 /dev/rlvinfx03 67252 67169 83 355.8
11 /dev/rlvinfx05 1893 1853 40 10.0
12 /dev/rlvinfx02 892 822 70 4.7
13 /dev/rlvinfx04 0 0 0 0.0
14 /dev/rlvinfx06 0 0 0 0.0

#7 | 5th February, 23:07 | ben thompson


That looks like quite an impressive amount of I/O, and all on one raw
device? You seem to be using some sort of logical volume manager, which I
can't help you with, having never got my hands on such juicy kit. Have
you tried reading the 9.40 performance guide and applying the
changes recommended earlier to your setup?

Ben.

#8 | 5th February, 23:07 | beers41


The 9.4 FC4 version is a 64-bit version. You have nowhere near the
resources allocated to run in this ballpark. Also, do you know for
certain that the p630 has been set up in 64-bit mode? That mode is an
option that has to be turned on in AIX 5.2. I start my 64-bit systems
with a minimum of 6 GB of memory (6144152 in SHMVIRTSIZE). You are no longer
bound by 2 GB limitations on this release, in either memory or dbspace size.
Think BIG. If the database is small and you have enough physical
memory, you may now be able to load the whole DB into memory. Give the
database as much as you can spare.
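
A quick way to check the kernel mode from the shell on AIX 5.2 (both are
standard AIX commands):

bootinfo -y    # is the hardware 64-bit capable?
bootinfo -K    # is the running kernel in 32-bit or 64-bit mode?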

Also, what chips and disks are you running? If you are using any of the
POWER4 chips at 1.1 GHz or better, you are in really good shape. I have
found you can add additional CPU VPs without incident because these
chips are fast with a capital "F". Though not officially supported, I
have tested 2 and 3 CPU VPs per CPU and still had major power to
spare. (OK, so I really, really like the POWER4 chips.) If you are
running at 750 MHz or less, you can still set up for 64-bit, "BUT"
your application will run ... differently. You may encounter "choke
points" and have to spend more time tuning.

If you are running on EMC storage, or at minimum IBM SSA disk arrays,
you need not worry about I/O bottlenecks. But you really do have to
rethink a 9.4 engine layout, because most limitations are gone. Open
it up and let it run.

Increase your PHYSFILE to 32768 and LOGSIZE to 10240. One cleaner
is not going to get it done either; revisit that topic in the admin guide.
The calculation is based on your disk layout.
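
In ONCONFIG terms, that is something like (both values are in KB):

PHYSFILE 32768    # total physical log size
LOGSIZE  10240    # size of each logical log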

You may also benefit from more temp dbspaces. I usually allocate 5.

#9 | 5th February, 23:07 | art s. kagel


I did not see the post that Ben is replying to at all; I am commenting on
what's below. But if you send me the full posting, with the SYSTEM DESCRIPTION
and the onstats I requested, via email, I'll look it all over. Ben's comments
are mostly well taken. See below.
Art S. Kagel

I do not like using SERVERNUM zero. It works but almost always causes
problems when you try to bring up a second server. FWIW.

RESIDENT 1 can be a big performance improvement!
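
That is, in the ONCONFIG:

RESIDENT 1    # pin the resident shared memory segment so the OS cannot page it out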


I don't remember if there is any issue with using NOAGE on AIX, but this can
also improve performance if the OS is aggressive about lowering the priority
of long-running processes. Solaris and especially HP-UX are very aggressive;
IBM's AIX is less so, but it can still help, especially if you notice that
performance is better for a day or so after you bounce the engine.

Looking at the onstat -g iov output in your other post, I see that only one AIO
VP is normally used. Since you have KAIO enabled, and it would seem that all
chunks are RAW devices, you do not need all 24 AIO VPs that are being
automatically allocated for you because you did not set a value here. Set
this to 4 to 6 (better, take Ben's advice and use a VPCLASS aio entry instead
to set the number of AIO VPs).


This may be the reason why you are seeing so many chunk writes and FG writes.
See below for my recommendation for LRUS, and set CLEANERS >= LRUS to improve
LRU cleaning (and, as Ben says, if you have many disks set it >= the number of
disks or chunks to improve checkpoint performance).


I did not see the onstat -g seg output, but assuming Ben is correct, it would
be VERY good to fold the extra virtual segments into the size of the initial
segment, as AIX does not do well with multiple segments. Also, if memory
serves, AIX has some kind of minimum allocation such that segments < 256MB are
rounded up internally; it's a waste to make smaller ones.


Given the BR of 9.51 that I calculated from your original post, I'd increase
this to at least 16 (avoid 32, 64, and 96; these specific values have caused
performance degradation in previous versions, and I do not know if that's been
fixed, since Informix & IBM have never acknowledged the bug). Don't forget to
match or exceed the new LRUS value when setting CLEANERS; see the sketch below.
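
In the ONCONFIG that might look like (illustrative values):

LRUS     16
CLEANERS 16    # match or exceed LRUS; with many disks, raise toward the number of disks/chunks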


With values this low you should not be seeing mostly chunk writes and FG
writes, as you are. The only reason I can see would be the single
CLEANER, or it may be that you just do not have enough BUFFERS configured. If
I get to see onstat -P and onstat -p output, and the time since stats were
zeroed, I can better determine whether you need more buffers.


You are using the defaults, and since your RAU is 99.99% it's not a real
worry, but the defaults may be stressing your I/O system and delaying
writes, especially given your VERY LOW read cache percentage (62%). Read cache
should ideally be in the high 90% range.
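
For reference, the read and write cache rates are the two %cached figures near
the top of onstat -p (reads first, then writes). To measure them over a
representative window:

onstat -z            # zero the counters
# ... run a representative workload ...
onstat -p | head     # the read %cached should ideally be in the high 90s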