

Question and Answer

Tom Kyte

Thanks for the question, Tony.

Asked: March 29, 2009 - 6:26 pm UTC

Last updated: April 22, 2013 - 7:35 pm UTC

Version: 10.2.0

Viewed 100K+ times!

You Asked

Hi TOM,

Can you specify the correct way to find the latest time a DML was committed on a table?

I have looked at DBA_TAB_MODIFICATIONS and ora_rowscn per table as a guide. Neither addresses this question accurately.

Please advise.

Thanks in advance,

TonyF

and Tom said...

If you do not want to use ora_rowscn, you would need to enable auditing - either using the AUDIT command, or via fine grained auditing (DBMS_FGA), or via a custom trigger.


We, by default, do not track this.
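
For illustration, the two auditing routes mentioned above might look roughly like this (schema, object, and policy names are just placeholders, and standard auditing also needs the AUDIT_TRAIL initialization parameter set):

-- standard auditing of DML on one table
audit insert, update, delete on scott.emp by access;

-- or a fine grained auditing policy via DBMS_FGA
begin
  dbms_fga.add_policy(
    object_schema   => 'SCOTT',
    object_name     => 'EMP',
    policy_name     => 'EMP_DML_AUDIT',
    statement_types => 'INSERT,UPDATE,DELETE' );
end;
/

-- captured activity then shows up in DBA_AUDIT_TRAIL and DBA_FGA_AUDIT_LOG respectively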


ora_rowscn would actually track this accurately. I don't know why you say it would not.

ops$tkyte%ORA10GR2> select max(ora_rowscn), scn_to_timestamp(max(ora_rowscn)) from scott.emp;

MAX(ORA_ROWSCN)
---------------
SCN_TO_TIMESTAMP(MAX(ORA_ROWSCN))
---------------------------------------------------------------------------
       77782885
05-FEB-30 11.40.58.000000000 AM


ops$tkyte%ORA10GR2> update scott.emp set ename = ename where rownum = 1;

1 row updated.

ops$tkyte%ORA10GR2> commit;

Commit complete.

ops$tkyte%ORA10GR2> select max(ora_rowscn), scn_to_timestamp(max(ora_rowscn)) from scott.emp;

MAX(ORA_ROWSCN)
---------------
SCN_TO_TIMESTAMP(MAX(ORA_ROWSCN))
---------------------------------------------------------------------------
       81997663
30-MAR-09 08.05.43.000000000 AM



Comments

Finding latest DML time on a table

Tony Fernandez, March 30, 2009 - 3:05 pm UTC

Tom,

ora_rowscn will not work because I am trying to locate tables that are not used anymore, not even by select statements.

A select statement does not change ora_rowscn, so it does not change max(ora_rowscn) on a table, which translated to a timestamp may point to, say, May 2003.

Say a table was created or last modified 5 years ago and has had no DML since, yet an application may still be selecting from it on a daily basis, so we do not want to drop that table.

Thanks,
Tom Kyte
March 30, 2009 - 5:56 pm UTC

wow, you would sort of think "you would know what tables your APPLICATIONS actually use" wouldn't you....


when you say "latest time a DML was COMMITTED on a table", I naturally think "modification"

In any case, you won't be able to answer this, we don't track it unless and until you tell us to. So, since you did not enable auditing 5 years ago, you won't know what activity has (or has not) taken place in the last 5 years.


I don't think you want to erase any tables, not if your approach is as you state. You cannot just look for a table that hasn't been accessed, you need to sort of understand what that table is - what it does - why it exists. Then and only then can you get rid of it.


Finding latest DML time on a table

Tony Fernandez, March 30, 2009 - 3:08 pm UTC

I see my initial question was incorrectly worded. It should have asked for the latest time a table was accessed, including select statements.

I look forward to your advice.

Thanks,
Tom Kyte
March 30, 2009 - 5:58 pm UTC


My advice would be to go to your application developers, the people that maintain your application infrastructure, and ask them for an inventory of tables that need be present - they should have this well documented, and if not - now you have a reason to tell them "well, get on it"

row_scn

A reader, March 30, 2009 - 6:11 pm UTC


Finding latest DML time on a table

Tony Fernandez, March 30, 2009 - 7:50 pm UTC

Dear Tom,

I appreciate your prompt and to-the-point replies.

I have talked to the developers (I was one long ago, so I understand their concerns), and we decided to turn on auditing for, say, a couple of months. After that we will sit down and go line by line and locate the tables that in fact need to go, using their input and what the audit says. There are about 300 tables involved. This is the first step in the process of optimizing and cleaning the DB.

I bought your book "Effective Oracle by Design", which I believe will be critical in this ongoing task.

Best regards,

Tony Fernandez
Tom Kyte
March 30, 2009 - 9:43 pm UTC

this still sounds backwards

shouldn't they actually know what their code

a) does
b) uses


?? Why not go to the code, line by line, and document the dependencies - if you are going line by line, don't do it to find out what to drop - do it to document the dependencies (what you need) and then from that you can determine what to drop.

And that way, you know where the objects are used - and by how many different points in the code.
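
For the piece of this that lives inside the database itself (stored PL/SQL, views), the dictionary can give a head start - it will not, of course, see SQL issued from application code outside the database. A rough example (SCOTT is a placeholder schema):

-- which stored code and views reference each table owned by SCOTT
select referenced_name as table_name,
       owner, name, type
  from dba_dependencies
 where referenced_owner = 'SCOTT'
   and referenced_type  = 'TABLE'
 order by referenced_name, owner, name;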

the developers and the black box

Duke Ganote, March 31, 2009 - 7:42 am UTC

Like anyone else, developers can inherit some wretched, poorly-documented stuff. A test environment and (hopefully automated) test scenarios can be helpful in such cases.
Tom Kyte
March 31, 2009 - 9:05 am UTC

right, but still the approach should be

a) review code,
b) document dependencies
c) derive list of objects NEEDED


rather than

a) list all objects
b) scratch them off list when you hit them the first time in code
c) end up with a list of what is not needed


If you do the first one, you end up with two things

1) a list of dependencies, you know where each object is accessed and in how many places. Making changes - easier now. Understanding application - easier now

2) a list of objects you can MINUS from your existing list of all objects - to find the ones you can consider mothballing.

Finding latest DML time on a table

Tony Fernandez, March 31, 2009 - 9:47 am UTC

I fully agree that we should go with a list of what is used and their dependencies. This way we understand more and document the code more appropriately.

The caveat in my particular case is that the documentation is old and out of date, and the developers are distributed across the planet.

I was convinced by the approach of turning on "AUDIT_TRAIL = DB, EXTENDED" for a few months and capturing the actual SQL statements touching my tables. I see some extra performance load, but I tried it on test data and it seems fine.
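
For reference, what I am planning looks roughly like this (the table name is just an example, and the parameter change needs a restart):

-- capture the SQL text along with each audit record
alter system set audit_trail=db,extended scope=spfile;
-- (restart the instance for this to take effect)

-- audit the statements of interest, per table
audit select, insert, update, delete on scott.some_table by access;

-- review what was captured
select obj_name, action_name, sql_text, timestamp
  from dba_audit_trail
 where owner = 'SCOTT';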

Can you shed light on the dangers of the path I am proposing to take?

Cheers,

Tom Kyte
April 01, 2009 - 7:10 am UTC

... The caveat in my particular case is that the documentation is old and out of date, and the
developers are distributed across the planet.
...


but you said

... After that we will sit down and go line by
line and locate the tables that in fact need to go, using their input and what the audit says.
...

so, that is the perfect time to change the order of operation. DO go line by line. DO update documentation. Then figure out what you can drop. Not the other way around.


If you are going to go line by line, don't do it wrong.



Why bother with the actual sql, isn't it enough to know "table T was queried"?

Dangers on the path

Duke Ganote, March 31, 2009 - 12:41 pm UTC

Tom's proposing the "white box" approach: open up the code and see everything it may do.

The 'run auditing' or 'run test scenarios' are "black box" approaches. 'Run auditing' may miss some possible combination of events that happen only rarely. 'Run known test scenarios' may test capabilities that only happen once a year or so, or have not happened yet in production.

Black box approaches may result in "false negatives": you never *seem* to use something that is really necessary.

http://en.wikipedia.org/wiki/Type_I_and_type_II_errors
http://en.wikipedia.org/wiki/Black_box_testing

End Result

Tony Fernandez, July 01, 2009 - 6:00 pm UTC

Tom,

Going line by line in the code was not even close to feasible. Developers change, code changes, and it ended up being a political game. No one signed up for it. The end result is that these obsolete tables still remain in the system; it was never cleaned of these redundant objects.

I ended up focusing instead on tuning the SQL in the application and in the stored procedures. I can proudly say that some costs have been brought down from 5 or 6 digits in the number of bytes to 1 digit, which brought timings down from minutes to 0.02 seconds on average for one particular query. There were also other things in need of review, like some queries not returning accurate data because the wrong join type was used, inner instead of outer and vice versa. That was also a success. I left that company and left the cleaning task to another dedicated professional; I hope he or she has better luck convincing people to review their code line by line and provide a list of objects currently used. Wow.

OK.

In my new company, I will tackle another tuning and optimizing exercise. It is a small database, 30GB, compared to the 400TB I have seen before. Discussions are happening now to decide whether materialized views should be used. My first reaction was maybe not, given the refresh cost that has to be paid at some point. I will report back on what gets used, since it is a trade-off question. I am sure I will ping you all again for help when I need it.

Thanks to you and your team for all the invaluable knowledge you dispense time after time.

With all respect,

Tony Fernandez

Finding latest DML time on a Schema.

A reader, November 22, 2011 - 7:05 am UTC

Hi Tom,

Is it possible to find the latest time a DML was committed on a schema?
Like the first answer in this thread, I can use scn_to_timestamp(max(ora_rowscn)) on all the tables in this schema,
or is there a better, more optimal method?

The schema has 100+ tables, so I would have to run 100+ queries to find the latest modification done on the schema.
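
Something like this rough sketch is what I had in mind (run as the schema owner; external and temporary tables would need to be filtered out):

declare
  l_scn number;
  l_max number := 0;
  l_ts  timestamp;
begin
  for t in (select table_name from user_tables)
  loop
    execute immediate
      'select nvl(max(ora_rowscn), 0) from "' || t.table_name || '"'
      into l_scn;
    if l_scn > l_max then
      l_max := l_scn;
    end if;
  end loop;
  -- scn_to_timestamp raises ORA-08181 for SCNs older than the mapping it retains
  select scn_to_timestamp(l_max) into l_ts from dual;
  dbms_output.put_line('max scn = ' || l_max || ', time = ' || l_ts);
end;
/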
Tom Kyte
November 22, 2011 - 8:48 am UTC

enable auditing pops into my head immediately. That would be the best approach.
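
One way to put object auditing in place across a whole schema (just a sketch, with SCOTT as a placeholder) is to generate the commands from the dictionary and run the resulting statements:

select 'audit insert, update, delete on ' || owner || '.' || table_name || ' by access;'
  from dba_tables
 where owner = 'SCOTT';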


Select max(ora_rowscn) from huge table taking long time

sachin, April 17, 2013 - 4:32 am UTC

Hi Tom,

We have a table of size 25GB and have enabled replication on it using GoldenGate. Every hour the query below runs to check for modified data. It does a full table scan and takes 10 minutes to complete. Is there a way to avoid the full table scan?

Select max(ora_rowscn) from table;

Regards,
Sachin
Tom Kyte
April 22, 2013 - 7:35 pm UTC

why are you running that query??

think about it - no, you cannot avoid the full scan really. the ora_rowscn is an attribute of a block or a row. it is not an attribute that can be indexed or anything.

If you kept your "own" attribute there - like a timestamp column - you'd be able to index that and get the max timestamp fast (but remember - it'll slow down every single modification maintaining the index...)
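
A minimal sketch of that approach (T is a hypothetical table name; note that deletes are not captured this way, since the deleted row takes its timestamp with it):

alter table t add last_mod timestamp;

create or replace trigger t_last_mod
before insert or update on t
for each row
begin
  :new.last_mod := systimestamp;
end;
/

create index t_last_mod_idx on t(last_mod);

-- max(last_mod) can now be answered with an index min/max scan instead of a full table scan
select max(last_mod) from t;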
