Data Pump API to refresh top 10 partitions

Connor McDonald

Thanks for the question, NISHANT.

Asked: March 25, 2020 - 6:22 pm UTC

Last updated: May 09, 2025 - 2:19 am UTC

Version: 12.1

Viewed 1000+ times

You Asked

I have a requirement where I have to refresh tables from PROD to DEV. But the refresh needs to be done in such a manner that partitioned tables are refreshed with their latest 10 partitions, while non-partitioned tables are refreshed in full.

I am using the DBMS_DATAPUMP API to do this over a network link. I was able to pull 2 partitions successfully for a single table, but when I specify multiple data_filter lines, the second table never gets refreshed. Also, where do I incorporate the non-partitioned tables?

Please have a look at the pattern of the code I am using. For detecting partitions, I plan to write code that fetches the partition names and then appends them to the job, but if you have a better suggestion please share it.
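Something like this is what I have in mind for detecting the latest partitions over the link (just a rough sketch; FETCH FIRST needs 12.1 or later, and the owner is the schema from the commented-out filter below):

select partition_name
from   dba_tab_partitions@dbpump_test_link
where  table_owner = 'FDW_PUMP_TEST'
and    table_name  = 'TAB1'
order by partition_position desc
fetch first 10 rows only;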

=====================================================================
set scan off
set serveroutput on
set escape off

DECLARE
    h1            NUMBER;                 -- Data Pump job handle
    errorvarchar  VARCHAR2(100) := 'ERROR';
    tryGetStatus  NUMBER := 0;

    percent_done  NUMBER;                 -- percentage of job complete
    job_state     VARCHAR2(30);           -- to keep track of job state
    js            ku$_JobStatus;          -- the job status from get_status
    sts           ku$_Status;             -- the status object returned by get_status
BEGIN
    h1 := dbms_datapump.open(operation => 'IMPORT', job_mode => 'TABLE',
                             version => 'COMPATIBLE', remote_link => 'DBPUMP_TEST_LINK');
    tryGetStatus := 1;

    dbms_datapump.set_parallel(handle => h1, degree => 1);

    dbms_datapump.add_file(handle => h1,
        filename  => 'POC-' || to_char(sysdate, 'yyyy-mm-dd-hh24_mi_ss') || '.LOG',
        directory => 'DATAPUMP', filetype => 3);

    dbms_datapump.set_parameter(handle => h1, name => 'KEEP_MASTER', value => 1);

    --dbms_datapump.metadata_filter(handle => h1, name => 'SCHEMA_LIST', value => '''FDW_PUMP_TEST''');

    -- restrict the partitioned tables to the wanted partitions
    dbms_datapump.data_filter(handle => h1, name => 'PARTITION_LIST',
        value => 'SYS_P26447,SYS_P27786', table_name => 'TAB1');

    dbms_datapump.data_filter(handle => h1, name => 'PARTITION_LIST',
        value => 'SYS_P34155,SYS_P34666', table_name => 'TAB2');

    dbms_datapump.set_parameter(handle => h1, name => 'INCLUDE_METADATA', value => 0);
    dbms_datapump.set_parameter(handle => h1, name => 'ESTIMATE', value => 'BLOCKS');
    dbms_datapump.set_parameter(handle => h1, name => 'TABLE_EXISTS_ACTION', value => 'TRUNCATE');

    dbms_datapump.start_job(handle => h1, skip_current => 0, abort_step => 0);

    errorvarchar := 'NO_ERROR';
    DBMS_OUTPUT.PUT_LINE('NO_ERROR');
    DBMS_OUTPUT.PUT_LINE(TO_CHAR(SYSDATE, 'DD-MON-YYYY HH24:MI:SS'));

    -- poll the job until it completes or stops
    percent_done := 0;
    job_state    := 'UNDEFINED';
    WHILE (job_state != 'COMPLETED') AND (job_state != 'STOPPED') LOOP
        dbms_datapump.get_status(h1,
            dbms_datapump.ku$_status_job_error +
            dbms_datapump.ku$_status_job_status +
            dbms_datapump.ku$_status_wip, -1, job_state, sts);
        js := sts.job_status;

        -- if the percentage done changed, display the new value
        IF js.percent_done != percent_done THEN
            dbms_output.put_line('*** Job percent done = ' || to_char(js.percent_done));
            percent_done := js.percent_done;
        END IF;
    END LOOP;

    DBMS_OUTPUT.PUT_LINE(TO_CHAR(SYSDATE, 'DD-MON-YYYY HH24:MI:SS'));
EXCEPTION
    WHEN OTHERS THEN
        DBMS_OUTPUT.PUT_LINE('LANDED IN EXCEPTION');
        DBMS_OUTPUT.PUT_LINE(SQLERRM);
        -- detach only if the job was opened but never started successfully
        IF (errorvarchar = 'ERROR') AND (tryGetStatus = 1) THEN
            DBMS_DATAPUMP.DETACH(h1);
        END IF;
        RAISE;
END;
/


=====================================================================


Regards
Nishant

and Chris said...

Are you getting any errors? If so what exactly?

I see you have system-generated names for the partitions (SYS_Pnnnn). Are the partition names the same in both environments? If not, this could be the cause of the problem...
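A quick way to compare them over the same link the job uses (a sketch, assuming the link and table names from your block, run as the schema owner):

select partition_name
from   user_tab_partitions
where  table_name = 'TAB2'
minus
select partition_name
from   user_tab_partitions@dbpump_test_link
where  table_name = 'TAB2';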

You don't need to do anything special to get the non-partitioned tables.


Comments

Here is the answer, with another question

Nishant Kariya, March 27, 2020 - 6:19 am UTC

Thanks, team, for responding quickly. To answer your questions:

-> No, I don't get any errors; the code runs fine.
-> TAB1 has the same partition names in both environments and its data is fetched successfully.
-> Whereas TAB2 also has the same partition names in both environments, but it doesn't pull any data.

Let me describe what I did as a prerequisite for my POC. I created a blank schema on the lower environment, i.e. on DEV, and first imported all the metadata from PROD, so the structure was copied exactly. The requirement is to refresh the database each month but keep only limited data on DEV. This means that every time a refresh is initiated, the tables in DEV should be truncated, and the latest 10 partitions, along with the dimensions, should be refreshed from PROD without touching any code.
Now, I understand that when I go live with this and operate on the true data schemas, the system partition names will never be the same in both environments. But here I am doing a POC and thus have the same partition names in both environments (because I imported the blank structure first).

Also, one question on the way I am designing this code:
should it be one PL/SQL block incorporating all partitioned and non-partitioned tables, or a cursor loop in which this PL/SQL block is executed for each table?

I have used Data Pump a lot, but via the command-line EXPDP and IMPDP. From that experience, I would say I should write the code once to incorporate everything.

Note that I have given TAB1 and TAB2 in DATA_FILTER without specifying NAME_LIST or NAME_EXPR separately. So when we say we can also incorporate other non-partitioned tables in this code (let's say TAB3), where should they be mentioned?

Should it be multiple DATA_FILTER lines for the partitioned tables and NAME_LIST for the non-partitioned tables?

Chris Saxon
March 27, 2020 - 8:40 am UTC

Have you got a log file for the export? What does that show?

If you want to specify a list of tables, use NAME_EXPR, e.g.:

dbms_datapump.metadata_filter(handle => h1, name => 'NAME_EXPR', value => 'IN(''T1'',''T2'',...)'); 
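
So a combined job could look something like this (a sketch with your table names; TAB3 stands in for a non-partitioned table): name every table in the metadata filter, then restrict only the partitioned ones with per-table data filters. A table without a data filter is copied in full:

dbms_datapump.metadata_filter(handle => h1, name => 'NAME_EXPR',
    value => 'IN (''TAB1'', ''TAB2'', ''TAB3'')');

-- partitioned tables: keep only the wanted partitions
dbms_datapump.data_filter(handle => h1, name => 'PARTITION_LIST',
    value => 'SYS_P26447,SYS_P27786', table_name => 'TAB1');
dbms_datapump.data_filter(handle => h1, name => 'PARTITION_LIST',
    value => 'SYS_P34155,SYS_P34666', table_name => 'TAB2');

-- TAB3 has no data filter, so all of its rows are transferred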


One PL/SQL block incorporating all partitioned and non-partitioned tables, or a cursor loop in which this PL/SQL block is executed for each table?

I'm not sure what you're getting at here?

Refresh composite partition table in UAT from PROD using exchange partition

Kevin, May 08, 2025 - 1:59 am UTC

I have a requirement to refresh large composite (range-hash) partitioned tables in the UAT environment from the PROD environment on a scheduled or on-demand basis.

We recently refreshed the entire partitioned table in UAT with the production copy using transportable tablespaces.

Now the user wants to periodically sync only the partitions changed since the last refresh with production data (not the complete table), on demand or on a schedule going forward.

Both source and target tables use the same range-hash weekly partitioning scheme.

Here are the steps I think are needed to refresh the selected partitions each time:

1. Identify the partitions that need to be exported. I mean, what logic should I follow to pick the partitions since the last export?

2. Data Pump export of only those partitions with data.

3. Transfer those Data Pump export dumps to the UAT environment.

4. Import those partitions with PARTITION_OPTIONS=DEPARTITION.

5. Alter table exchange partition with the DEPARTITION tables...

Each partition has 16 hash subpartitions defined in the subpartition template; partitions and subpartitions are explicitly named and the names are the same in all environments.

Can you please show a step-by-step example of how to perform the partition exchange in this case?

This will be very helpful. I appreciate your suggestions and best practices for doing this refresh.

Thank you.
Chris Saxon
May 08, 2025 - 12:37 pm UTC

I mean, what logic should I follow to pick the partitions since the last export?

¯\_(ツ)_/¯ You tell us - how are you identifying what data needs to be imported??

Presumably, there's a date column on the source table. Record the highest date for the rows when the export runs. Use this date as the starting point for the rows to export next time round. Repeat.
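
If new data only ever lands in new weekly partitions, an alternative is to work from partition positions instead of dates (a sketch, assuming a hypothetical refresh_log table and that partitions are never dropped from the low end, which would shift the positions):

-- after each refresh, record the highest partition position copied (hypothetical table)
insert into refresh_log ( table_name, last_partition_position, refreshed_at )
select 'BIG_TAB', max(partition_position), sysdate
from   dba_tab_partitions@prod_link
where  table_owner = 'APP'
and    table_name  = 'BIG_TAB';

-- next run: export only the partitions added since then
select partition_name
from   dba_tab_partitions@prod_link
where  table_owner = 'APP'
and    table_name  = 'BIG_TAB'
and    partition_position > ( select max(last_partition_position)
                              from   refresh_log
                              where  table_name = 'BIG_TAB' );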

The rest of the process sounds good. After the import completes, swap the data in using (sub)partition exchange:
ALTER TABLE tab_name EXCHANGE SUBPARTITION subpartition_name
      WITH TABLE imported_subpart_tab_name INCLUDING INDEXES;
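
Putting steps 2-5 together with hypothetical names (big_tab, weekly partition p_w18, subpartitions p_w18_sp1 .. p_w18_sp16; with DEPARTITION, each imported (sub)partition becomes a standalone table named after the table and (sub)partition), the flow could look something like:

# PROD: export just the chosen partition
expdp app/... directory=dp_dir dumpfile=p_w18.dmp tables=big_tab:p_w18

# UAT: promote each (sub)partition in the dump to its own table
impdp app/... directory=dp_dir dumpfile=p_w18.dmp partition_options=departition

-- UAT: swap each standalone table into the matching subpartition;
-- repeat for all 16 subpartitions of every imported partition
alter table big_tab exchange subpartition p_w18_sp1
    with table big_tab_p_w18_sp1 including indexes without validation;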

Refresh composite partition table in UAT

Kevin, May 09, 2025 - 12:56 am UTC

Hi Chris,
Thank you for sharing the details and confirming the steps. This is very helpful.



Connor McDonald
May 09, 2025 - 2:19 am UTC

Glad we could help.

More to Explore

Data Pump

All of the database utilities including Data Pump are explained in the Utilities guide.