Fastest way to restore test schema to known state

Question and Answer

Tom Kyte

Thanks for the question, Luke.

Asked: June 22, 2011 - 11:05 am UTC

Last updated: January 31, 2012 - 2:57 pm UTC

Version: 11.2.0

Viewed 1000+ times

You Asked

Hi Tom - I work in Java, and we run a large number of Java testcases every night, one after another. All the tests execute against the same schema, and in order to make sure the initial data is known, each test drops and recreates all the objects in the schema and repopulates the default data. We're also using an ORM to generate many of the DDL commands for us. Our schema contains about 150 tables with no more than a few hundred rows in each table. It takes about 30 minutes to get the schema and test data set up. Aside from the schema preparation, most of our testcases execute within a few seconds, so I'm looking for a way to speed this up, as it limits how many tests we're able to run each night.

I am wondering if there is a common way of quickly reverting a schema to a known state. Many of the testcases use multiple transactions, so we can't simply roll back. I've tried using exp/imp, but I'm wondering if there is a better option I'm missing. My understanding of flashback is that it affects the entire DB and not just a single schema, in which case it would not be an option for me. Thanks for any suggestions.


and Tom said...

depending on the amount of changes, you might be able to use the flashback table command. flashback table t1, t2, t3, t4, .... to <point in time>;

http://docs.oracle.com/cd/E11882_01/server.112/e17118/statements_9012.htm#SQLRF01802
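A minimal sketch of that approach (the table names and restore point name here are illustrative, not from the original thread):

```sql
-- FLASHBACK TABLE requires row movement to be enabled on each table
ALTER TABLE t1 ENABLE ROW MOVEMENT;
ALTER TABLE t2 ENABLE ROW MOVEMENT;

-- Capture the known-good state once, after the schema is populated
CREATE RESTORE POINT before_tests;

-- ... run the nightly test suite ...

-- Revert the tables to the captured point in time
FLASHBACK TABLE t1, t2 TO RESTORE POINT before_tests;
```

Note that flashback table relies on undo data, so the retention must cover the duration of the test run; it also does not undo DDL, which matters given the ORM-generated drops and creates described in the question.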


Short of that, another very viable approach would be to set up the test tablespace and "transport". When you wanted to restore the schema data you would simply drop that tablespace including contents and datafiles and then re-attach the old data files (transport them in).

http://docs.oracle.com/cd/E11882_01/server.112/e17120/tspaces013.htm
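A rough sketch of the transport cycle (tablespace, file, and dump file names are illustrative assumptions):

```sql
-- One-time setup: export the known-good state
ALTER TABLESPACE test_data READ ONLY;
-- expdp system transport_tablespaces=test_data dumpfile=test_meta.dmp
-- then copy the tablespace's datafiles to a safe location

-- Each reset: drop the modified copy and re-attach the saved files
DROP TABLESPACE test_data INCLUDING CONTENTS AND DATAFILES;
-- copy the saved datafiles back into place, then:
-- impdp system transport_datafiles='/u01/oradata/test_data01.dbf' \
--       dumpfile=test_meta.dmp
```

The one-time export cost is paid once; the repeated reset is just the drop plus the metadata import, which is what makes this attractive for a nightly cycle.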


Comments

Another option...

Kashif, June 22, 2011 - 12:37 pm UTC

Another option, though not necessarily the best one, is to simply create a copy of these 150 tables once (perhaps in a separate schema). Then each night you reload your actual schema's tables from this backup schema.

I like the transportable tablespace approach though the best.
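The reload-from-a-backup-schema idea above might be sketched like this (the schema and table names are illustrative):

```sql
-- One-time: snapshot the known-good data into a backup schema
CREATE TABLE backup_owner.t1 AS SELECT * FROM test_owner.t1;

-- Each reset: wipe the working table and reload from the snapshot
TRUNCATE TABLE test_owner.t1;
INSERT /*+ APPEND */ INTO test_owner.t1
SELECT * FROM backup_owner.t1;
COMMIT;
```

With 150 tables this would need to be scripted in a loop over the data dictionary, and as Tom notes below, it costs more work and resources per reset than transporting.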

Alex, June 22, 2011 - 12:48 pm UTC

I think a tablespace point-in-time recovery would be good for this, and fewer steps than transportable tablespaces. However, I have found them to be slow, even with very small tablespaces.

Flashback is the greatest thing since sliced bread, and would be helpful as long as there's no DDL happening along the way.
Tom Kyte
June 22, 2011 - 12:51 pm UTC

it would be a lot more work/resources to do that.

The transporting would be fewer steps in general - they would transport out once and then repeatedly transport back in - just two commands...

although yes, TSPITR (tablespace point in time recovery) would work fine as well.

Alex, June 22, 2011 - 1:05 pm UTC

Ok yes you're right saving it off once is a good idea. I was thinking the RMAN TT version and the similar process it takes to build an auxiliary instance like TSPITR and how that wouldn't buy you much.

schema refresh

Michael, June 23, 2011 - 6:58 am UTC

tom,

we also have to refresh our test databases in a circular fashion. we need to refresh the whole schema but not only the data.

the simplest way is "drop user cascade" + networked data pump import, which is not appropriate for large datasets.

so we use tts for large datasets:
1. drop user cascade
2. create user
3. transport tablespace + segments
4. networked data pump import of all non-segment objects
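As a hedged sketch of those four steps (user, tablespace, link, and file names are illustrative assumptions):

```sql
-- 1-2: drop and recreate the test user
DROP USER test_owner CASCADE;
CREATE USER test_owner IDENTIFIED BY pw DEFAULT TABLESPACE test_data;
GRANT CONNECT, RESOURCE TO test_owner;

-- 3: plug the saved segments back in via transportable tablespace import
-- impdp system transport_datafiles='/u01/oradata/test_data01.dbf' \
--       dumpfile=test_meta.dmp

-- 4: pull code and other non-segment objects over a database link
-- impdp system network_link=src_db schemas=test_owner \
--       exclude=table_data
```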

question: is there a shorter method for that?
Tom Kyte
June 23, 2011 - 8:13 am UTC

I'd just segment out this schema into its own database and restore a backup of the necessary data (just restore system, undo and the tablespaces you need for testing).

schema refresh

Michael, June 23, 2011 - 8:26 am UTC

good idea but I'm afraid we can't do this.

+ we have several such schemas
+ we can't run a partial database for each of them
+ development requires permanent test databases
+ development requires test to be equal to prod (cross references,...)

it would be nice to extend TTS to somewhat like "transportable schema", a combination of TTS + data pump of non-segment objects.

To Michael

Alex, June 23, 2011 - 11:54 am UTC

Maybe try datapump (without the TT option)? That will capture your code for you. If your schema is huge, let's say over 50 GB, your approach is probably as good as you're going to get.
Tom Kyte
June 23, 2011 - 11:55 am UTC

that is what he is currently doing :)

Alex, June 23, 2011 - 12:37 pm UTC

D'oh. Skimmed over that too quickly.

Makes me wonder if it makes more sense for Oracle to store user defined code in a user tablespace.

Thanks.

Luke, June 24, 2011 - 12:55 pm UTC

Transportable tablespaces and flashback both sound like promising approaches - I'll give them a try asap. Thanks Tom and everyone else for the suggestions.

Flashback database and ordering of rows on disk

Lise, January 26, 2012 - 8:12 am UTC

Hi,
I read your feature on testing in the Nov/Dec magazine issue, and found it very useful.
I only have one question, please. If I use flashback database to restore the database rather than a database backup, would I risk ending up with changes in how the rows on disk are ordered?
I am planning to use flashback database when performance testing, you see, to speed up the 'database reset'.

Thanks
Lise
Tom Kyte
January 26, 2012 - 10:39 am UTC

flashback database is a binary operation: the bits and bytes of the flashed-back data are identical to what they used to be. it is not a 'logical' flashback, it is a physical flashback.

the rows will be in the same place.
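A minimal reset cycle using flashback database might look like this (the restore point name is an illustrative assumption; flashback logging must be enabled, and the flashback itself runs with the database mounted, not open):

```sql
-- One-time, with flashback logging enabled:
CREATE RESTORE POINT before_perf_test GUARANTEE FLASHBACK DATABASE;

-- Each reset, connected AS SYSDBA:
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
FLASHBACK DATABASE TO RESTORE POINT before_perf_test;
ALTER DATABASE OPEN RESETLOGS;
```

Because this is a physical, whole-database rewind, it is well suited to the performance-testing scenario above, but it cannot be scoped to a single schema, which is why it didn't fit the original questioner's shared-database setup.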

A reader, January 26, 2012 - 11:25 am UTC

Would Workspace Manager be a viable solution to this?
Tom Kyte
January 31, 2012 - 2:57 pm UTC

it would accomplish the goal, but at a huge performance penalty - and possible functionality penalty.

workspace manager is 99% transparent.

it would add a measurable overhead to the runtime of their transactions - and if they were testing performance - it would be impossible to really do (other than know "it will run faster than this in real life")
