You Asked
Mr. Kyte,
I am working on databases where the most important concern is insert rate. The databases are on 32-bit RHEL (free buffer waits, of course). We are currently inserting at a rate of approximately 4 MB/sec into tables (standard DDL below) that are range partitioned by date (time span = 10 minutes, converted into seconds from 1.1.1970). Table size is approximately 200 GB. Extents are currently a uniform size of 10 MB (2 MB for indexes) using LMT and ASSM. Only one session inserts into each table. Inserts are bulk inserts of 5,000 rows at a time via OCI. Each table has approximately 6 local indexes that cannot be dropped and/or rebuilt. The average row size is 200-400k. There are no PKs or FKs. The database is in NOARCHIVELOG mode, as there is currently no recovery requirement.
Outside of improving the disk layout, disk performance, using ASM, etc. (which are all in the works, especially for redo), what advice would you give regarding block size, subpartitioning, extent management, and so on, for the tables to increase the insert rate?
Thanks.
Keith
Table DDL:
CREATE TABLE sample_table_ddl
(
row1 RAW(8),
TIMETAG NUMBER NOT NULL,
row3 NUMBER,
row4 NUMBER,
row5 NUMBER,
row6 NUMBER,
row7 NUMBER,
row8 NUMBER,
row9 NUMBER,
row10 NUMBER,
row11 NUMBER,
row12 NUMBER,
row13 NUMBER,
row14 NUMBER,
row15 VARCHAR2(33 BYTE),
row16 NUMBER,
row17 NUMBER,
row18 NUMBER,
row19 NUMBER,
row20 NUMBER,
row21 NUMBER,
row22 NUMBER,
row23 NUMBER,
row24 NUMBER,
row25 NUMBER,
row26 NUMBER,
row27 NUMBER,
row28 NUMBER,
row29 NUMBER,
row30 NUMBER,
row31 NUMBER,
row32 NUMBER,
row33 NUMBER,
row34 NUMBER,
row35 NUMBER,
row36 VARCHAR2(25 BYTE),
row37 VARCHAR2(25 BYTE),
row38 VARCHAR2(25 BYTE),
row39 VARCHAR2(25 BYTE),
row40 NUMBER,
row41 NUMBER,
row42 NUMBER,
row43 NUMBER,
row44 NUMBER,
row45 NUMBER,
row46 NUMBER,
row47 NUMBER,
row48 NUMBER,
row49 NUMBER,
row50 NUMBER,
row51 NUMBER,
row52 NUMBER,
row53 VARCHAR2(9 BYTE),
row54 RAW(3),
row55 VARCHAR2(7 BYTE),
row56 NUMBER,
row57 VARCHAR2(25 BYTE),
row58 NUMBER,
row59 NUMBER,
row60 NUMBER,
row61 NUMBER,
row62 NUMBER,
row63 NUMBER,
row64 VARCHAR2(5 BYTE),
row65 NUMBER,
row66 NUMBER,
row67 NUMBER,
row68 RAW(8),
row69 NUMBER,
row70 NUMBER,
row71 NUMBER,
row72 NUMBER,
row73 NUMBER
)
TABLESPACE DATA_CDR
PARTITION BY RANGE (TIMETAG)
(
PARTITION P1257533000 VALUES LESS THAN (1257533000)
LOGGING
NOCOMPRESS
TABLESPACE DATA,
PARTITION P1257533600 VALUES LESS THAN (1257533600)
LOGGING
NOCOMPRESS
TABLESPACE DATA,
PARTITION P1257534200 VALUES LESS THAN (1257534200)
LOGGING
NOCOMPRESS
TABLESPACE DATA
)
NOCOMPRESS
NOCACHE
NOPARALLEL
MONITORING;
and Tom said...
If only one thing at a time inserts into a table, partitioning will not make it any faster; partitioning can only make inserts faster when it removes contention. In fact, partitioning makes an insert take slightly longer in single-user mode, since we have to figure out "where to put the data" before putting it. I assume you are partitioning so as to make it easy to purge old data.
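Purging by dropping a partition is a fast, largely metadata-only operation compared with a bulk DELETE, and with only local indexes there is no global index maintenance to pay for. A sketch of the rolling-window maintenance this partitioning scheme enables (the new partition name and boundary value are illustrative, following the naming pattern in the DDL above):

```sql
-- Purge the oldest 10-minute window; local indexes are dropped with it.
ALTER TABLE sample_table_ddl DROP PARTITION P1257533000;

-- Add a new partition at the leading edge for the next time range.
ALTER TABLE sample_table_ddl
  ADD PARTITION P1257534800 VALUES LESS THAN (1257534800)
  TABLESPACE DATA;
```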
NOARCHIVELOG mode is not relevant here, since the ARCH process would do its work in the background and not really impact the performance of the inserts. You are generating redo because, as far as I can tell, you are using conventional path inserts.
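To see for yourself how much redo the conventional path inserts generate, you can sample the session-level 'redo size' statistic before and after a batch (a sketch; assumes the session has SELECT access to V$MYSTAT and V$STATNAME):

```sql
-- Run before and after a 5,000-row bulk insert in the same session;
-- the difference is the redo generated by that batch.
SELECT s.value AS redo_bytes
  FROM v$mystat  s
  JOIN v$statname n ON n.statistic# = s.statistic#
 WHERE n.name = 'redo size';
```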
Extent management isn't going to be a big deal either; locally managed tablespaces can add extents blindingly fast.
What might be of use would be direct path loads and larger bulk inserts. Direct path always writes above the high-water mark of the table (I'm assuming you don't delete, you always truncate/drop old partitions; if so, direct path is perfectly OK for you space-wise). It will also take the newly inserted data, create a mini index of just the new rows, and merge that into the existing indexes in bulk (more efficient index maintenance). A direct path load can skip redo and undo generation for the TABLE (but not the indexes); the direct path index maintenance, however, will tend to generate less redo/undo in general.
The OCI direct path load interface, http://docs.oracle.com/cd/B19306_01/appdev.102/b14250/oci12obn.htm#sthref1920, would be something to consider.
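At the SQL level, the same direct-path behavior can be illustrated with the APPEND hint (a sketch only; the OCI direct path API linked above is the programmatic equivalent for the existing bulk loader, and `staging_table` here is a hypothetical source):

```sql
-- Direct-path insert: loads above the high-water mark and performs
-- bulk index maintenance instead of row-by-row updates.
INSERT /*+ APPEND */ INTO sample_table_ddl
SELECT * FROM staging_table;

-- The loaded segment cannot be queried in this session until commit.
COMMIT;
```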