Datapump schema export from max_string_size Standard, import to max_string_size Extended => ORA-01450: maximum key length (6398) exceeded

Question and Answer

Chris Saxon

Thanks for the question, Kim Berg.

Asked: March 25, 2020 - 9:20 am UTC

Last updated: March 25, 2020 - 1:33 pm UTC

Version: 12.1.0.2 + 19.3

Viewed 1000+ times

You Asked

Hi, Team of Oracle Masters

I have a schema in a source database version 12.1.0.2 with charset AL32UTF8 and max_string_size Standard.
Some tables have columns VARCHAR2(2000 CHAR), which in principle could mean up to 8000 bytes, but here the maximum is 4000 bytes, which DATA_LENGTH in user_tab_columns reflects.
So in principle some text with lots of 4-byte UTF-8 characters might not fit even at fewer than 2000 characters, but in practice that is not a problem for normal text with only a few multi-byte characters.
Several of these columns are indexed, which is no problem because of the 4000-byte limit.
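(For reference, the declared character limit versus the stored byte limit can be checked in the dictionary; this is just an illustrative query, assuming one of the affected tables is called T:)

select column_name, data_type, char_length, char_used, data_length
from   user_tab_columns
where  table_name = 'T'
and    data_type = 'VARCHAR2';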

I am datapump exporting this schema and importing it in a target database version 19.3 with charset AL32UTF8 and max_string_size Extended.
Because of the Extended max_string_size, these VARCHAR2(2000 CHAR) columns created by the import now have a DATA_LENGTH of 8000!
(In principle good, since now there is no risk of a 1942 character string not being able to fit ;-)

The problem then is that these columns cannot be indexed anymore.
I get "ORA-01450: maximum key length (6398) exceeded" in the datapump import log.

Documentation states it's impossible to change max_string_size from Extended to Standard, so I can't change the target database.
Also I don't really want to do this, as other schemas in this database most likely would benefit from using Extended.
But for this one schema I'd prefer to keep the DATA_LENGTH of these columns low enough (4000 bytes) to allow them to be indexed.

If I change the columns to VARCHAR2(4000 BYTE), the indexes would be allowed.
But the database would then allow UTF strings longer than 2000 characters, right? (As long as most characters are 1-byte UTF.)
That could potentially break the front end application.

I'm considering VARCHAR2(4000 BYTE) with a check constraint LENGTHC(col) <= 2000.
Would you see a downside to this?

Would there be a way of transforming these columns during datapump import?
Or is the best way simply to ignore the ORA-01450 errors, alter the columns after the import, and then create the indexes?

Any other ideas or tips that might be beneficial?

Thanks...


Cheerio
/Kim

and Chris said...

But the database would then allow UTF strings longer than 2000 characters, right?

Yes indeed it would:

create table t (
  c1 varchar2(2000 char),
  c2 varchar2(4000 byte)
);

insert into t values ( 'x', rpad ( 'x', 4000, 'x' ) );

select lengthc ( c2 ) from t;

LENGTHC(C2)   
          4000 


I'm considering VARCHAR2(4000 BYTE) with a check constraint LENGTHC(col) <= 2000.
Would you see a downside to this?


Can't think of anything offhand; as the existing app is already limited to 2,000 characters, there shouldn't be functional issues. My main concern would be the "surprise factor", i.e. this is something unusual, so it may catch out developers/DBAs in the future.
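If you go that route, a minimal sketch (table, column, and constraint names are made up; any existing rows must already be within 2,000 characters for the constraint to validate):

alter table my_table modify ( my_col varchar2(4000 byte) );

alter table my_table add constraint my_col_max_2000_chars
  check ( lengthc ( my_col ) <= 2000 );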

Or is the best way simply to ignore the ORA-01450 errors, alter the columns after the import, and then create the indexes?

That is an option ;)
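If you script that route, one hedged sketch is to keep the indexes out of the first import, narrow the columns, then pull the index DDL from the dump file and run it (directory, schema, dump file, and object names below are placeholders):

-- OS shell: import everything except the indexes
impdp system schemas=kim_schema directory=dp_dir dumpfile=kim_schema.dmp exclude=index

-- SQL: narrow the affected columns; this works because the imported
-- values are at most 4000 bytes each
alter table my_table modify ( my_col varchar2(4000 byte) );

-- OS shell: write the index DDL to a script, then run that script
impdp system directory=dp_dir dumpfile=kim_schema.dmp include=index sqlfile=create_indexes.sql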

You can get around the problem somewhat by using SUBSTR or STANDARD_HASH in the index definition. The optimizer can use a STANDARD_HASH-based index for equality predicates without you having to change the query:

create index i2 
  on t ( standard_hash ( c2 ) );
  
insert into t values ( 'c1', 'c2' );

set serveroutput off
select * from t
where  c2 = 'c2';

select * 
from   dbms_xplan.display_cursor(format => 'BASIC LAST +PREDICATE');

----------------------------------------------------                        
| Id  | Operation                           | Name |                        
----------------------------------------------------                        
|   0 | SELECT STATEMENT                    |      |                        
|*  1 |  TABLE ACCESS BY INDEX ROWID BATCHED| T    |                        
|*  2 |   INDEX RANGE SCAN                  | I2   |                        
----------------------------------------------------                        
                                                                            
Predicate Information (identified by operation id):                         
---------------------------------------------------                         
                                                                            
   1 - filter(("C2"='c2' AND INTERNAL_FUNCTION("T"."C1")))                  
   2 - access("T"."SYS_NC00003$"=HEXTORAW('6B1F53303A732CCC8C6AAE6640399    
              827C15250E3'))     


You can read more about this in the docs https://docs.oracle.com/en/database/oracle/oracle-database/20/sqlrf/CREATE-INDEX.html#GUID-1F89BBC0-825F-4215-AF71-7588E31D8BFE
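Note STANDARD_HASH only helps equality predicates; hash values don't preserve ordering, so range or LIKE predicates can't use that index. A SUBSTR-based index is a possible alternative sketch, but then queries have to reference the same expression themselves, e.g.:

create index i3
  on t ( substr ( c2, 1, 1000 ) );

select *
from   t
where  substr ( c2, 1, 1000 ) = 'c2';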

Any other ideas or tips that might be beneficial?

The problem is that the data for an index key must fit within the index block size minus some overhead. So there is another workaround:

Put the indexes in a tablespace with a larger blocksize!

create index i 
  on t ( c1 );
  
ORA-01450: maximum key length (6398) exceeded

sho parameter db_16K_cache_size

NAME              TYPE        VALUE 
----------------- ----------- ----- 
db_16k_cache_size big integer 16M   

sho parameter max_string

NAME            TYPE   VALUE    
--------------- ------ -------- 
max_string_size string EXTENDED 

create tablespace tblsp_16k 
  datafile 'tblsp_16k' size 1M
  blocksize 16384;

create index i 
  on t ( c1 )
  tablespace tblsp_16k;
  
select tablespace_name 
from   user_indexes
where  index_name = 'I';

TABLESPACE_NAME   
TBLSP_16K       


Note you need to set the appropriate cache size to a non-zero value. And this also increases the "surprise factor" for the database. I'd prefer one of the function-based index methods over this.
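(For completeness, the 16K cache in the example above was enabled with something like this before creating the tablespace; the size is only an example:)

alter system set db_16k_cache_size = 16m;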


Comments

Thanks a lot

A reader, March 25, 2020 - 12:04 pm UTC

Thanks :-)

I didn't think of the tablespace with larger blocksize.

We'll discuss what suits us best. I understand the point that using "tricks" to make things look like they did before can lead to a nasty surprise in the future.

Cheerio
/Kim

Chris Saxon
March 25, 2020 - 1:33 pm UTC

Thanks - at this point I think it comes down to which "trick" has the least downsides for you!
