Question and Answer

Connor McDonald

Thanks for the question, Alex.

Asked: September 28, 2005 - 6:54 pm UTC

Last updated: March 18, 2022 - 4:52 am UTC

Version: 8.1.7

Viewed 100K+ times!

You Asked

Hi Tom,

I was wondering if there are some thing you don't like about Oracle?
I personally wish more things came setup, out of the box such as autotrace, intermedia text, scott schema etc. Any thoughts on this?

-Alex

and Tom said...

Well, Oracle Text does (the DBA just needed to have selected it). The Scott schema does too (ditto; the DBA just needs to have installed the demos).

Autotrace - yes, it would be nice if it were there. HOWEVER, on a production system it shouldn't necessarily be there, so..... maybe that is good.

What would I change....

Actually, many of those things I would change are being addressed - a better sqlplus that is still sqlplus (coming, code name "raptor", previewed at OOW)...

htmldb - that was a big one.

Most of the things I would change are "political", the DBA/Developer relationship (or lack thereof). The (sometimes) developer approach of "I don't need to learn the database"

There is so much in there - what I would change is people's knowledge level of what is actually available.

I would appreciate better error messages sometimes - I can usually figure them out, but.... ;)

I'll publish this to see what others add - maybe some have been addressed and we can point it out - others, I'll file away for the days when I'm asked "what would you add...."


Rating

  (690 ratings)

Comments

Things we DON'T like

Mike Friedman, September 29, 2005 - 10:11 pm UTC

1. Lack of proxy capability within the DB - we should be able to log onto a non-privileged account, authenticate ourselves, and then have our connection switched to a selected privileged account.

2. Merge statement should take multiple actions just like Insert

3. The current move away from PL/SQL and transactional interfaces to many components needs to be reversed. For example, there is no way to make use of CDM SDK direct from PL/SQL while maintaining transaction atomicity, etc.

Tom Kyte
September 30, 2005 - 8:55 am UTC

1) that is not within the database. We have that ability in the database, sqlplus doesn't "do it"

2) it does - when matched, when not matched. You can insert and update and delete from a table in a single MERGE?

Don't know what you mean.

3) that seems to be a contradictory statement? You say "don't go components" but then say "we need interface to components"? (frankly, I don't know what the CDM SDK is..)
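For reference, since 10g a single MERGE can take all three actions. A sketch only, with hypothetical tables SRC and TGT keyed on ID:

```sql
-- Hypothetical staging table SRC and target table TGT, both with (id, val)
merge into tgt
using src
on (tgt.id = src.id)
when matched then
   update set tgt.val = src.val
   delete where src.val is null      -- optionally purge matched rows
when not matched then
   insert (id, val) values (src.id, src.val);
```

The DELETE clause applies only to rows that matched and were just updated, which is why it can reference the new SRC values.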

Wow! You Let me Build my Wishlist?

Arup Nanda, September 29, 2005 - 11:48 pm UTC

Oh, I could go on forever. Here are some:

1. SQL*Plus with a built-in tabular format output; useful for rows that span across many lines. No, Grid Control SQL Output just doesn't cut it; it's a static output of the entire table. It doesn't have the powerful SQL*Plus commands.

2. An unloader utility that reads at the block level and dumps data as plaintext ASCII much faster than the SQL*Plus or Pro*C programs can do.

3. I miss SQL*Forms 3.0 (if anyone is old enough to remember that), which allowed me to do quick and dirty data entry screens for data manipulations. And, sweet oh sweet, I could edit the INP files for quick modifications. Master-detail forms were a snap.

4. Separation of roles where the DBA can see the data files, dictionary, etc., but not the actual data. A separate role would allow the data to be seen and privileges to be granted. A third account type would have access to audit trails only. I understand this is being addressed in Data Vault.

5. PL/SQL compilation where the decision to invalidate dependent objects such as triggers, functions, etc. due to table alters is controlled by the DBA. When a table column's data type is modified, the dependent objects are immediately invalidated; however, sometimes the changes to the table do not affect the contents of the object and it need not be invalidated. This would help immensely in keeping up the uptime of the system.

6. Batch commits, where I would be able to do a mass update but ask Oracle to commit only at a certain frequency. I could do this with cursors, but a simple SQL option would be very useful - something similar to the commit frequency setting in the COPY command.

7. Conditional shared pool manipulations. The shared pool contains SQL statements that should have been aged out eons ago, but they linger about and cause the ORA-4031 error. How about allowing the DBA to age out certain SQLs, either explicitly or via some pattern matching, or both?

8. Ability to use different userids in unix for the oracle software owner and the user who owns the logs such as the alert log, listener log, CRS log, etc. This would allow a separation of roles - the DBA can perform database maintenance but a different user owns the logs, so the oracle software owner can't delete them. The logging itself could be performed by a root-owned process, installed during the execution of root.sh.

... there is more, but I leave it to others to say how they feel.
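Item 6 (batch commits) can be approximated today with BULK COLLECT and periodic commits. A sketch only, against a hypothetical BIG_TABLE; note that fetching across commits risks ORA-01555, and restarting safely after a failure is left unsolved:

```sql
declare
   cursor c is select rowid rid from big_table where processed = 'N';
   type rid_t is table of rowid index by pls_integer;
   l_rids rid_t;
begin
   open c;
   loop
      fetch c bulk collect into l_rids limit 1000;
      exit when l_rids.count = 0;
      forall i in 1 .. l_rids.count
         update big_table set processed = 'Y' where rowid = l_rids(i);
      commit;   -- commit every 1,000 rows; ORA-01555 risk on the open cursor
   end loop;
   close c;
end;
/
```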

Tom Kyte
September 30, 2005 - 8:59 am UTC

(1, 2) => raptor

(3) => htmldb!


4, You can create your own DBA role "my_dba" and grant it select on the dictionary objects you want -- but not the "select any table" and such. Meaning - you can accomplish this today?

6, I disagree, that would be excessively dangerous. How do you restart safely such an operation?


7, I'd sort of like a "don't share" hint :) for applications that don't bind - don't have their sql even go there....
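Tom's point 4 - a home-grown restricted DBA role - might look like this sketch (role and grantee names are hypothetical; run as a suitably privileged user):

```sql
create role my_dba;

-- Grant only the dictionary views this role actually needs...
grant select on dba_data_files  to my_dba;
grant select on dba_tablespaces to my_dba;
grant select on dba_objects     to my_dba;

-- ...and deliberately omit SELECT ANY TABLE, so application data
-- stays out of reach.
grant my_dba to some_admin_user;
```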




Things we don't like

Jim Weatherly, September 30, 2005 - 12:30 am UTC

1. Cross-platform data file incompatibility. With the size of databases these days export/import takes too long. Transportable tablespaces require 10g to cross platforms and have to go through an extra RMAN process--and as of 9.2.0.5 the process is still buggy.

2. Single oracle.exe on 32-bit windows. Context switching among threads may be faster, but the limits of addressable memory render this less important, even with the /3GB switch. 64-bit Windows should address this, but not without upgrading hardware.

3. Inability to bypass redo for statements that don't need recovery (for example, the insert /*+ append */ into the Hits table for Tom's pagination solution). Always writing above the high watermark can eat up space quickly and like a temporary tablespace, I don't care if I can't recover it. Create a new kind of tablespace of type unrecoverable and stick segments you don't care about here. Just a thought.

4. Lackluster management tools. While grid control is promising, it has taken a long time to get there. After paying the price for the best database on the planet it's annoying to have to pay such a premium for the tools to manage it. Also, more stable Intelligent Agents would be nice.

5. Lack of dynamically expandable OCFS. It would be nice to be able to resize an OCFS drive on the fly as with other file systems. ASM on raw devices may make this moot, but not on 9iR2.

Tom Kyte
September 30, 2005 - 9:01 am UTC

1) 10gR2 has full database transports across platforms with the same endianness now.

2) multi-process would be horrendous on windows - it would not perform even a little bit.


...

Bob B, September 30, 2005 - 12:38 am UTC

I'd like to concur and extend Tom's point further. As the database becomes more "self-managing" it becomes more user friendly. Also, providing better Oracle supplied tools (e.g. Raptor) to use Oracle also improves the user friendliness. The end result will (hopefully) be a better learning curve for Oracle users. HtmlDB was also another step in the same direction.

If you know how to use it, Oracle can do some amazing things. By making Oracle easier to use for more people, you increase the number of people who know how to use it. More people kicking a$$ with Oracle will lead to more kick-a$$ applications. This opinion is a takeoff on the Creating Passionate Users blog (see Tom's blog for the url).

Amen

Tony, September 30, 2005 - 12:51 am UTC

<quote>
1. SQL*Plus with a built-in tabular format output; useful for rows that span
across many lines. No, Grid Control SQL Output just doesn't cut it; it's a
static output of the entire table. It doesn't have the powerful SQL*Plus
commands.

2. An unloader utility that reads at the block level and dumps data as plaintext
ASCII much faster than the SQ*Plus or Pro*C programs can do.
</quote>

1. I couldn't agree more. Occasionally I need to look at some very wide tables (135 columns) and SQL*Plus doesn't really cut it. I use SQL*Plus for 98% of my work; the other 2% is JDeveloper, just to see these wide tables. I wish I could just stay in my terminal window.

2. Not only something that is easy and fast but provides some good error handling.

Peeves and Wants

Billy, September 30, 2005 - 2:05 am UTC

My biggest pet peeve is that grants assigned via roles do not work in named PL/SQL (despite working in anonymous PL/SQL) - which means those grants have to be explicitly granted to the user/schema. I have never really seen a sensible explanation for this in any Metalink material. Tom, have you perhaps addressed this here? (I had a brief look some time ago and did not see anything specific from you on this.)
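The behaviour Billy describes is easy to reproduce; a sketch with hypothetical schema and object names:

```sql
-- As user A (owner of table T): grant via a role held by user B
grant select on a.t to some_role;

-- As user B: an anonymous block works, because roles are enabled...
begin
   for r in (select count(*) n from a.t) loop null; end loop;
end;
/

-- ...but the same code in a definer-rights procedure fails to compile
-- (PL/SQL: ORA-00942: table or view does not exist), because roles
-- are disabled during compilation of named PL/SQL.
create or replace procedure p is
begin
   for r in (select count(*) n from a.t) loop null; end loop;
end;
/
```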

On the Wanted List, no-one yet mentioned O-O (or O-R) in Oracle. I'm nuts about ADTs. It is one of the best features of Oracle.. and one of the most ignored.

But I want loads more. In an ADT I want to be able to define private, protected and public properties and methods. I want to be able to define Getters and Setters for properties. Most of all, I want to be able to call a parent constructor in a child class's constructor without having to resort to ugly hacks. Went into some detail on this in Usenet: http://groups.google.com/group/comp.databases.oracle.server/browse_frm/thread/c62736aa46a3c46b/de3e8541b26d2364

The PL/SQL team should have a close look at Delphi and C# features and think of implementing some of the most useful ones into PL/SQL.


Tom Kyte
September 30, 2005 - 9:12 am UTC

(I'm nuts the other way on ADT's in Oracle personally ;)



I did address this role thing in my book Expert one on one Oracle:

<quote>
What happens when we compile a Definer rights procedure

When we compile the procedure into the database, a couple of things happen with regards to privileges. We will list them here briefly and then go into more detail:

- All of the objects the procedure statically accesses (anything not accessed via dynamic SQL) are verified for existence. Names are resolved via the standard scoping rules as they apply to the definer of the procedure.
- All of the objects it accesses are verified to ensure that the required access mode will be available. That is, if an attempt to UPDATE T is made - Oracle will verify the definer or PUBLIC has the ability to UPDATE T without use of any ROLES.
- A dependency between this procedure and the referenced objects is set up and maintained. If this procedure SELECTS FROM T, then a dependency between T and this procedure is recorded.

If, for example, I have a procedure P that attempted to 'SELECT * FROM T', the compiler will first resolve T into a fully qualified reference. T is an ambiguous name in the database - there may be many T's to choose from. Oracle will follow its scoping rules to figure out what T really is; any synonyms will be resolved to their base objects and the schema name will be associated with the object as well. It does this name resolution using the rules for the currently logged in user (the definer). That is, it will look for an object owned by this user called T and use that first (this includes private synonyms), then it will look at public synonyms and try to find T and so on.

Once it determines exactly what T refers to - Oracle will determine if the mode in which we are attempting to access T is permitted. In this case, if we as the definer of the procedure either owns the object T or has been granted SELECT on T directly or PUBLIC was granted SELECT, the procedure will compile. If we do not have access to an object called T by a direct grant - the procedure P will fail compilation. So, when the object (the stored procedure that references T) is compiled into the database, Oracle will do these checks - and if they "pass", Oracle will compile the procedure, store the binary code for the procedure and set up a dependency between this procedure and this object T. This dependency is used to invalidate the procedure later - in the event something happens to T that necessitates the stored procedures recompilation. For example, if at a later date - we REVOKE SELECT ON T from the owner of this stored procedure - Oracle will mark all stored procedures this user has that are dependent on T, that refer to T, as INVALID. If we ALTER T ADD some column, Oracle can invalidate all of the dependent procedures. This will cause them to be recompiled automatically upon their next execution.

What is interesting to note is not only what is stored but what is not stored when we compile the object. Oracle does not store the exact privilege that was used to get access to T. We only know that procedure P is dependent on T. We do not know if the reason we were allowed to see T was due to:

- A grant given to the definer of the procedure (grant select on T to user)
- A grant to public on T (grant select on T to public)
- The user having the SELECT ANY TABLE privilege

The reason it is interesting to note what is not stored is that a REVOKE of any of the above will cause the procedure P to become invalid. If all three privileges were in place when the procedure was compiled, a revoke of ANY of them will invalidate the procedure - forcing it to be recompiled before it is executed again. Since all three privileges were in place when we created the procedure - it will compile successfully (until we revoke all three that is). This recompilation will happen automatically the next time that the procedure is executed.

Now that the procedure is compiled into the database and the dependencies are all setup, we can execute the procedure and be assured that it knows what T is and that T is accessible. If something happens to either the table T or to the set of base privileges available to the definer of this procedure that might affect our ability to access T -- our procedure will become invalid and will need to be recompiled.

This leads into why ROLES are not enabled during the compilation and execution of a stored procedure in Definer rights mode. Oracle is not storing exactly WHY you are allowed to access T - only that you are. Any change to your privileges that might cause access to T to go away will cause the procedure to become invalid and necessitate its recompilation. Without roles - that means only 'REVOKE SELECT ANY TABLE' or 'REVOKE SELECT ON T' from the Definer account or from PUBLIC. With roles - it greatly expands the number of times we would invalidate this procedure. If some role that was granted to some role that was granted to this user was modified, this procedure might go invalid, even if we did not rely on that privilege from that role. ROLES are designed to be very fluid when compared to GRANTS given to users as far as privilege sets go. For a minute, let's say that roles did give us privileges in stored objects. Now, most any time anything was revoked from ANY ROLE we had, or any role any role we have has (and so on -- roles can and are granted to roles) -- many of our objects would become invalid. Think about that, REVOKE some privilege from a ROLE and suddenly your entire database must be recompiled! Consider the impact of revoking some system privilege from a ROLE, it would be like doing that to PUBLIC is now, don't do it, just think about it (if you do revoke some powerful system privilege from PUBLIC, do it on a test database). If PUBLIC had been granted SELECT ANY TABLE, revoking that privilege would cause virtually every procedure in the database to go invalid. If procedures relied on roles, virtually every procedure in the database would constantly become invalid due to small changes in permissions. Since one of the major benefits of procedures is the 'compile once, run many' model - this would be disastrous for performance.

Also consider that roles may be

- Non-default: If I have a non-default role and I enable it and I compile a procedure that relies on those privileges, when I log out I no longer have that role -- should my procedure become invalid -- why? Why not? I could easily argue both sides.
- Password Protected: if someone changes the password on a ROLE, should everything that might need that role be recompiled? I might be granted that role but not knowing the new password - I can no longer enable it. Should the privileges still be available? Why or Why not? Again, arguing either side of this is easy. There are cases for and against each.

The bottom line with respect to roles in procedures with Definer rights are:

- You have thousands or tens of thousands of end users. They don't create stored objects (they should not). We need roles to manage these people. Roles are designed for these people (end users).
- You have far fewer application schemas (things that hold stored objects). For these we want to be explicit as to exactly what privileges we need and why. In security terms this is called the concept of 'least privileges'; you want to specifically say what privilege you need and why you need it. If you inherit lots of privileges from roles you cannot do that effectively. We can manage to be explicit since the number of development schemas is SMALL (but the number of end users is large)...
- Having the direct relationship between the definer and the procedure makes for a much more efficient database. We recompile objects only when we need to, not when we might need to. It is a large efficiency enhancement.
</quote>



Raptor?

Iru, September 30, 2005 - 2:48 am UTC

Hello to everyone.
I don't know about you, but I would very much like to hear more about this 'Raptor'.

Please Tom?
Thanks a zillion for being there.

Raptor

Tarry Singh, September 30, 2005 - 3:03 am UTC

See here, this guy has been to the OOW as well: http://www.oratransplant.nl/2005/09/20/openworld-project-raptor-%e2%80%93-watch-out-toad/ I don't know about cool; it's just another tool, and I guess they want the Raptor to devour the Frog (TOAD).

I'm curious though, I'd like to see it work.

Tom Kyte
September 30, 2005 - 9:20 am UTC

(you beat me to it!)

I don't like Oracle's DESC functionality all that much

Hans Wijte, September 30, 2005 - 3:16 am UTC

Hi Tom,

In my opinion the DESC command lacks several useful features when used to describe a table or a view.
When it is a table I'd like to know its PK columns and which columns are FK columns; from DESC it is also not clear whether the described object is a table or a view, and you have to know in advance who owns the described object.

However, I found a very nifty procedure created by a Swiss
programmer (Rene Nyffenegger) at this URL
http://www.adp-gmbh.ch/
which does everything described above.

Regards
Hans

Thanks Tarry

Iru, September 30, 2005 - 3:17 am UTC

Thanks a lot. From what I've seen there, it looks a lot like they want to smash the frog, yes. Maybe a lot of people out there who won't use Toad because of some limitations it has would use Raptor because it's from Oracle itself. I think that is what Larry may have thought of.

Regards.

Oracle is improving all the time, but...

mobra, September 30, 2005 - 3:47 am UTC

... I wish that the following could soon be addressed:

1. The 32K limit on strings in PL/SQL.

2. The "table or view does not exist" error message should include the name of the invalid table/view. This would be very helpful in a huge query. Similar for non-existent columns, and value errors (string to number conversions, etc.).

Wishlist

Marc Blum, September 30, 2005 - 4:32 am UTC

- BEFORE COMMIT TRIGGER
- disable asynchronous commits
- robust timestamp arithmetic
- robust daylight saving support

Tom Kyte
September 30, 2005 - 9:34 am UTC

timestamp and daylight saving? what is missing in 9i and above?

I do not like COMPRESS=Y as default (EXP-ort)

Kim Anthonisen, September 30, 2005 - 5:05 am UTC

Hi Tom

I am a really big fan of Oracle, but I have never understood why COMPRESS=Y is the default. I have never used it, but have felt the pain from it several times, when people (including myself) have forgotten to set it to N during export.

Tom Kyte
September 30, 2005 - 9:36 am UTC

because EXP is a tool as old as dirt and to change a default is something they are very hesitant to do.

EXPDP got to do it over.

wishlist

A reader, September 30, 2005 - 5:29 am UTC

asynchronous rollback

Tom Kyte
September 30, 2005 - 9:38 am UTC

That can be achieved by this C code:


*((char **)0) = "go away";


:)

beauty Insert

Alvaro, September 30, 2005 - 8:57 am UTC

How about an alternative insert syntax? I know it is not ANSI SQL, but...

We can do
UPDATE tTable a
SET a.col1 = '1'
, a.col2= 2
.
.
.
, a.coln = 'n';

but we can not do
INSERT INTO tTable a
SET a.col1 = '1'
, a.col2 = 2
.
.
.
,a.coln = 'n';

We must read complex and large insert like:
INSERT INTO tTable
(col1, col2, ... , coln)
VALUES
('1',
2,
.
.
.
, 'n')

This way, we cannot associate each destination column with its value (who is who), unless we add comments or something else.

Alex, September 30, 2005 - 9:01 am UTC

Well this thread sure blew up pretty quick. I didn't put too much thought into the things I don't care for, I just wanted to provoke some thought. However, you brought up some things I missed. (Statspack was another thing I was thinking of, but maybe the DBA can enable too?)

Good call on the error messages. I agree there. I had not heard of the new and improved sqlplus, that's a great idea.

I am just getting a feel for the DBA/developer relationship, this is my first job, coming up on two years. It's tough though, trying to explain the importance of preparedStatements without sounding like a database Nazi or using confusing (to them) technical terminology.

I had a crazy thought about binds while on the subject of Oracle improvements: I wonder why it is not considered syntactically incorrect to write code in Oracle without binds where they are necessary. I've heard and seen enough to know that code without binds is just wrong. Is it nuts to suggest this? I know it's possible to overbind, so maybe it's unrealistic to ask the compiler to know when they should be used and when not? Although if we as people know, I would think we could program the rules and circumstances. It's just some insane idea I was thinking of. I just think that code open to SQL injection is serious enough that it shouldn't even compile.

Looking forward to reading your new book.

Tom Kyte
September 30, 2005 - 10:56 am UTC

... (Statspack was another thing I was
thinking of, but maybe the DBA can enable too?)...

yes, they are the ones that would either

a) include it
b) or not


.... I wonder why it is not considered syntactically incorrect to write code in
Oracle without binds where they are necessary.....

there is never a case where they *have* to be used. It is that they had better be used if you want more than one user ;)


It is impossible for us to detect SQL that has been sql injected, to the database it is just more code to run.

Von, September 30, 2005 - 9:16 am UTC

I wish oracle had an easy way of grabbing the source code
from the database for procs/functions/packages

Tom Kyte
September 30, 2005 - 11:02 am UTC

------------- getcode.sql --------------------------
set feedback off
set heading off
set termout off
set linesize 1000
set trimspool on
set verify off
spool &1..sql
prompt set define off
select decode( type||'-'||to_char(line,'fm99999'),
'PACKAGE BODY-1', '/'||chr(10),
null) ||
decode(line,1,'create or replace ', '' ) ||
text text
from user_source
where name = upper('&&1')
order by type, line;
prompt /
prompt set define on
spool off
set feedback on
set heading on
set termout on
set linesize 100
-------------------------------------------------------





-------------------- getallcode.sql -------------------
set termout off
set heading off
set feedback off
set linesize 50
spool xtmpx.sql
select '@getcode ' || object_name
from user_objects
where object_type in ( 'PROCEDURE', 'FUNCTION', 'PACKAGE' )
/
spool off
spool getallcode_INSTALL
select '@' || object_name
from user_objects
where object_type in ( 'PROCEDURE', 'FUNCTION', 'PACKAGE' )
/
spool off
set heading on
set feedback on
set linesize 130
set termout on
@xtmpx.sql
------------------------------------------------------


dave, September 30, 2005 - 9:55 am UTC

[quote]
I wish oracle has an easy way of grabbing the source_code
from the database for procs/functions/packages
[/quote]


there is, it is called user/dba_source

David Aldridge http://oraclesponge.blogspot.com, September 30, 2005 - 10:03 am UTC

I hope that everyone who has posted an enhancement request here has also officially raised it on MetaLink ... ?

Tom Kyte
September 30, 2005 - 11:10 am UTC

(i've sent this link onto the raptor guys - many of the ideas would fall in there)...

but yes, metalink with a business case is the official way, this is just ideas being thrown out there. They'll get read and all, but no "promises" :)

Vipin, September 30, 2005 - 10:17 am UTC

I would like to have something like db2look (a db2 utility which dumps the whole database definition really quickly) in Oracle. I know we have exp/imp, but the output is not user friendly - the user should not have to format the output before using it.

Backup/restore could be made easier.

Finally, I am waiting eagerly for an improved sqlplus - with the capability to display wide output without wrapping at the end.

After venturing briefly into the db2 & sqlserver worlds, I can safely say oracle is a much better product than the other two, in my opinion.



Tom Kyte
September 30, 2005 - 11:25 am UTC

(have you looked at rman? it really isn't that "complex")


dbms_metadata.get_ddl is fairly useful these days..
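For instance, pulling DDL straight from SQL*Plus (object and schema names are placeholders):

```sql
set long 100000 pagesize 0

-- DDL for a table
select dbms_metadata.get_ddl( 'TABLE', 'EMP', 'SCOTT' ) from dual;

-- DDL for a package specification
select dbms_metadata.get_ddl( 'PACKAGE_SPEC', 'MY_PKG', 'SCOTT' ) from dual;
```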

wishlist

A reader, September 30, 2005 - 10:38 am UTC

that the quality of asktom stops decreasing at this enormous speed

Cols_as_rows

Bhavesh, September 30, 2005 - 10:44 am UTC

hi tom,
I just need cols_as_rows() function built-in in oracle...
;-)
Bhavesh

Benthic Software Golden32

A reader, September 30, 2005 - 11:21 am UTC

<quote>
1. I couldn't agree more. Occasionally I need to look at some very wide
tables (135 columns) and SQL*Plus doesn't really cut it. I use SQL*Plus for
98% of my work, the other 2% is JDeveloper just to see these wide tables. I
wish I could just stay in my terminal window.
</quote>

Benthic Software has a tool called Golden32. It's not just another query tool. It has multiple-tab capability (one table per tab) and keeps queried data even after you switch tabs...

Not only can you run queries, but you can also:
- Create database objects (tables, stored procedures/functions/packages, collections and so on)
- Run PL/SQL stored procedures/functions and anonymous blocks
- Automatically export data to Excel spreadsheet, CSV, or XML (no tool configuration required)
- Do table data import/export (CSV format) from the tool
- Edit table data within the tool (within the data output grid itself)
- Use the SQL builder, if you don't know SQL, to point-and-click build SQL statements with the mouse
- From the SQL builder, query/search user, all_objects and dict objects to view tables, views, stored procedures, triggers and functions. Or simply click on a table_name and it pulls up all corresponding columns
- Do searches of particular table columns

I started using the tool back in 1999, and for 35 dollars it's the best db tool for price/functionality I ever used.

And there's a free 30-day trial...

Just google benthicsoftware and you will find the website!




PL/SQL improvement

Menon, September 30, 2005 - 11:22 am UTC

I wish the following two cases would not compile - too often I have been bitten by this.

scott@MYORA92> create or replace function f (x number )
2 return number
3 is
4 l_ret number;
5 begin
6 if x = 1then
7 l_ret := 2;
8 elsif x = 2then
9 l_ret := 3;
10 else
11 l_ret := -1;
12 end if;
13 end f;
14 /

Function created.

create or replace function f (x number )
return number
is
l_ret number;
begin
if x = 1then
l_ret := 2;
elsif x = 2then
l_ret := 3;
else
l_ret := -1;
end if;
end f;
/
show errors;

scott@MYORA92> create or replace package pkg
2 as
3 g_const varchar2(1) := '22';
4 end pkg;
5 /
Package created.

Similarly, one of the favorite peeves with PL/SQL - not having a good way of getting a stack trace - is taken care of in 10g.

a native stragg would be nice too

A reader, September 30, 2005 - 11:23 am UTC

I often need to display values from multiple rows as a single column value. Judging from the number of times people ask how to do this on askTom and the OTN SQL/PLSQL discussion forums, it seems many others need this too.

Your "stragg" function does the trick, but a native function would be better. I'm thinking of something like an aggregate version of the single-row function CONCAT, with a parameter for a separator string. The aggregate COLLECT function comes close, but it only returns a nested table. We need a COLLECT that returns a VARCHAR2. Barring that, a native way to cast a nested table into a VARCHAR2 string could be used with COLLECT to achieve acceptable results.


Alex, September 30, 2005 - 11:27 am UTC

"It is impossible for us to detect SQL that has been sql injected, to the database it is just more code to run. "

Just to clarify, Tom: I didn't mean the injected code itself; I meant the code that leaves your app open to injection, such as an execute immediate without a USING clause. But I realize that doesn't make sense, because what if we wanted to code a literal? Then we don't want to bind there.

Tom Kyte
September 30, 2005 - 12:01 pm UTC

right, and I use sys_context to "bind" all of the time - because I don't know at compile time how many binds I'll actually have!
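A sketch of that sys_context technique (context and package names are hypothetical; creating a context requires the CREATE ANY CONTEXT privilege). The SQL text stays constant no matter how many values you plug in:

```sql
-- One-time setup: a context tied to the only package allowed to set it
create or replace context my_ctx using my_ctx_pkg;

create or replace package my_ctx_pkg as
   procedure set_val( p_name in varchar2, p_val in varchar2 );
end;
/
create or replace package body my_ctx_pkg as
   procedure set_val( p_name in varchar2, p_val in varchar2 ) is
   begin
      dbms_session.set_context( 'my_ctx', p_name, p_val );
   end;
end;
/

-- At run time: the dynamic SQL is a fixed string, so it is shared,
-- and the value travels via the context instead of concatenation
declare
   l_cnt number;
begin
   my_ctx_pkg.set_val( 'ename', 'KING' );
   execute immediate
     'select count(*) from emp where ename = sys_context(''my_ctx'',''ename'')'
     into l_cnt;
end;
/
```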

Re: PL/SQL improvement

Menon, September 30, 2005 - 11:28 am UTC

Sorry I repeated the function f twice - Just ignore the second one.
Interestingly, notice that the function compiled even though I said
"if x = 1then" - no space between the 1 and then.

That was not the error I wanted the compiler to catch though.

Tom Kyte
September 30, 2005 - 12:02 pm UTC

that was the error I thought you wanted - but you mean "a function that forgets to return a value"

A reader, September 30, 2005 - 11:33 am UTC

Some utility or function to automatically calculate the bucket size for histograms, depending on the data distribution

Anto

Tom Kyte
September 30, 2005 - 12:02 pm UTC

done, dbms_stats does that.
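For example, SIZE AUTO in METHOD_OPT lets dbms_stats choose bucket counts from the data distribution and column usage (table name is a placeholder):

```sql
begin
   dbms_stats.gather_table_stats(
      ownname    => user,
      tabname    => 'EMP',
      method_opt => 'for all columns size auto' );  -- Oracle picks buckets
end;
/
```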

Anto, September 30, 2005 - 11:35 am UTC

Another one - To get the 10046 trace(as a single file) for parallel queries,

Anto

A reader, September 30, 2005 - 11:47 am UTC

Another one - sorry for not putting it all together in one place

Making the ORA-600 and ORA-7445 errors more specific and detailed - splitting them into sub-categories with separate error numbers.

Anto

Tom Kyte
September 30, 2005 - 12:06 pm UTC

by definition these are "oh crud" errors - they are not anticipated. It is hard to make the unanticipated more specific and detailed; you don't really know what happened (that's the point of those two).

what I don't like

Andrew, September 30, 2005 - 12:14 pm UTC

PL/SQL and Oracle SQL should have an option to disable implicit type casting - make them strongly typed.

select into...

Mariano, September 30, 2005 - 12:17 pm UTC

I wish that in PL/SQL we could test whether a value exists in a table without SELECT .. INTO .., just:
if (select x from v = true) (or whatever) then
...
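Today that test has to be spelled out as an explicit lookup; a common idiom (table and column names assumed) is:

```sql
declare
    l_exists number;
begin
    select count(*)
      into l_exists
      from v
     where x = 1
       and rownum = 1;     -- stop scanning at the first hit

    if l_exists = 1 then
        null;              -- the value exists
    end if;
end;
/
```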



Restartable Batch Process

Bob B, September 30, 2005 - 1:11 pm UTC

create table batches(
batch_id number primary key,
db_procedure_name varchar2(100) not null,
driving_table varchar2(100) not null,
starting_scn number not null,
last_rowid rowid -- not sure if this is a legitimate datatype

)

create sequence batch_id_seq;

create or replace package batch_pkg as
procedure create_batch(
p_db_procedure_name in batches.db_procedure_name%type,
p_driving_table in batches.driving_table%type,
p_starting_scn in batches.starting_scn%type,
p_last_rowid in batches.last_rowid%type,
p_batch_id out batches.batch_id%type
);

procedure execute_batch(
p_batch_id in batches.batch_id%type
);

procedure commit_batch(
p_last_rowid in batches.last_rowid%type
);

end batch_pkg;

"create_batch"
Insert the batch into the batches table.

"execute_batch"
Create a query from the driving_table as of the starting_scn and add a "rowid > last_rowid" when the last_rowid is not null. Select out the rowid and all columns of the driving table, order by rowid, and set a private global variable "g_batch_id" (exists only in the package body). It will then open this query as a ref cursor and pass it as a parameter to the db_procedure_name defined for the batch.

"commit_batch"
Called by the procedure doing the batch process to commit. The procedure updates the batch with the last_rowid and then commits.

Each batch process would need a wrapper to call the create_batch and execute_batch routines and then a batch_processor that would take a ref cursor as parameter, process it and use "batch_pkg.commit_batch" whenever it wants to commit.

There are some issues that I can think of off the top of my head and I'm sure there are a ton more:

1) Row movement may affect this
2) Must have enough rollback to flashback to the given scn.
This could be a problem when running the batch process without any failures. It is worsened if it does have to be restarted (the longer it went without failing, the worse it is)
3) Performance might be an issue when ordering by rowid for larger result set

Tom Kyte
September 30, 2005 - 2:21 pm UTC

1) yes it could
2) correct
3) why order?

I'd rather use rowid RANGES - use between low and high
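A sketch of that rowid-range idea (segment name assumed): carve the table's extents into non-overlapping rowid pairs, then process each chunk with "where rowid between :lo and :hi" - no ORDER BY needed.

```sql
select dbms_rowid.rowid_create( 1, o.data_object_id, e.relative_fno,
                                e.block_id, 0 )                     lo_rid,
       dbms_rowid.rowid_create( 1, o.data_object_id, e.relative_fno,
                                e.block_id + e.blocks - 1, 32767 )  hi_rid
  from dba_extents e, user_objects o
 where e.segment_name = 'BIG_TABLE'    -- the driving table
   and e.owner        = user
   and o.object_name  = 'BIG_TABLE';
```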

Some more requests

Sergei Agalakov, September 30, 2005 - 1:16 pm UTC

I would like to lift a restriction of 30 characters for object names, I have situations when I can't create a good descriptive name just 30 characters long.

I do have problems with the inability to see individual procedures/functions of a PL/SQL package in errors and profiler information. When I see profiler information saying that 95% of the time was consumed in package PCK1, and the package has 100 procedures and functions, it doesn't help much. I do see a time distribution by line number in the package, but again I can't reliably combine them and tell whether the most time-consuming lines 1125, 1201 and 1245 belong to just one procedure in the package or to three different ones.
It would be nice to have something like a DBA_SUBROUTINES view showing the first/last line of every procedure in a package, plus the names and parameters of those procedures. Then at last the profiler would be able to report that 80% of the time was consumed by the PCK1.PRC1 procedure, 12% in PCK1.PRC2 and 3% in PCK1.PRC3, instead of the current report that 95% of the time was spent somewhere in PCK1. The same applies to Oracle types.

Encryption of network traffic should not be an Advanced Security option on top of Enterprise Edition - it is a must-have, very basic security measure. A simple test: enable SQL*Net tracing, send the command 'ALTER USER SCOTT IDENTIFIED BY TIGER', and then take a look at the trace file - there you have the new password for the SCOTT guy.

Oracle doesn't have a thin .NET data provider, and Oracle's .NET data provider doesn't support Oracle objects. All of that exists for JDBC - but does Oracle really believe that all applications for MS Windows have to be written in Java?


Jon Roberts, September 30, 2005 - 1:18 pm UTC

1. PL/SQL development tool with tight integration with version control like VSS (I sure hope you say Raptor).

2. OCA (Open Client Adapter) brought back. It was nice when building Oracle Reports that I could open a connection with SQLPlus and run the exact SQL that I am going to run in the report. With Reports now, you have to use JDBC which means more configuring on the client and the server than the old days with OCA. Or you could use HS to connect to a different database but that has its problems too because now I have an additional database to manage.

With Oracle Forms, you have to use HS to connect to a different database. With Forms 6i, I could use OCA to connect to a different database.

Discoverer too was nice with the OCA and now I have to create the EUL in an Oracle database only. I see the point but it was nice to be able to install the EUL in a SQL Server database when it was more appropriate.

3. Ref Cursor support in Discoverer.

4. Oracle Text support in Discoverer.

5. Better patching in Application Server. When a new release comes out, many times it means installing a new version where in the database world, Oracle supplies a patch.
1.0.2.2.0 couldn't be patched to 1.0.2.2.2.
1.0.2.2.2 couldn't be patched to 9.0.2
9.0.2 couldn't be patched to 9.0.4.
9.0.4 couldn't be patched to 10.1.2.0.
10.1.2.0 can't be patched to 10.1.2.0.2.

I understand why 9iAS R1 couldn't be patched to 9iAS R2 as it was a major change but there should have been patches for things like 9.2.0.2 to 9.2.0.4.

6. A real Access killer version of HTML DB. The current version is great for building apps but a lot of people use Access to just load tables and analyze data. You must be a developer in HTML DB before you can create a table and load data to it which means more management on top of the database someone is already managing.

It would be nice if I could just grant a user rights in the database, let the user log into the web application with the database username and password, and then based on database security, be able to create and load tables into the database.


Tom Kyte
September 30, 2005 - 2:24 pm UTC

1) i wont....



Anto, September 30, 2005 - 1:42 pm UTC

Some option in SQL*Plus to return only 'n' rows of output for every query issued thereafter in the session (instead of putting rownum in each query).

Anto, September 30, 2005 - 1:45 pm UTC

Some dictionary view(s) to see all the SQL executed by each user after instance startup, and the 'actual' explain plans those queries used.

My wish list...

Philip Moore, September 30, 2005 - 2:13 pm UTC

Hi Tom,

My wish list is as follows:

1) Star-transformation with bind variables - the reason I ask for this is that we have a datawarehouse with Siebel Analytics (which you guys now own) as our primary reporting tool. We have thousands of users issuing relatively similar SQL queries, often differing only by literals in the WHERE clause. Our shared pool is getting killed with unique SQL due to the fact that Analytics does not bind - as it shouldn't since this is a DSS (a "seconds-per-query" system). But I think the mentality should shift by Oracle putting a "blind_star_transformation" parameter or something of the like that allows the very powerful star-join technique to take place - with Binds. I realize that in some cases, it might not be optimal - but having such a feature would allow to prove the case for/against blindly star-transforming queries with binds. With such a feature - I could force Siebel Analytics to bind by using cursor-sharing, until you get ahold of them, Tom ;)

2) Cascading Updates to child tables - I know you've put code on asktom for doing this - but it is a tricky workaround to the mutating table error. Why not build it in the database? My argument is that my $200 Axxess database can do it, why not my $40,000 (per cpu) Oracle database do it? (please be gentle here, Tom) :)

3) PIVOT features built into SQL - without the use of Analytical functions. You have a handy pivot routine (in Expert Oracle Signature Edition) - but why not make this a built in feature of the RDBMS? Again - my $200 Axxess database can do it, why not my $40,000 (per cpu) Oracle database do it?

4) Why doesn't Oracle mow my grass for me, my $200 Axxess database does, why not Oracle? Just kidding...

5) Hmm, nevermind - 10G R2 has taken care of a lot of my wants :)

Thanks as always Tom, you da man...

Sincerely,

Philip

Tom Kyte
September 30, 2005 - 2:36 pm UTC



2) because it would just be *wrong* ;)



sqlldr log file in XML

Alex, September 30, 2005 - 2:28 pm UTC

1. How about an XML format for the sqlldr log output?

The current report format is difficult to parse.

2. Execution Plan - REMOTE path (via db link); this should be further expanded.

3. An easier to program and understand DBMS_JOB package



Tom Kyte
September 30, 2005 - 2:37 pm UTC



3) dbms_scheduler?

easier to program, but a lot more feature rich so harder to understand.

(i thought dbms_job was fairly "straight forward", the hardest thing was the interval and the scheduler has that fairly well knocked down I think)

David Aldridge, September 30, 2005 - 2:39 pm UTC

"4) Why doesn't Oracle mow my grass for me, my $200 Axxess database does, why not Oracle?"

Or dig trenches and move piles of wood ...

:D

Tom Kyte
October 01, 2005 - 8:08 pm UTC

lots of wood, I hurt very much right now, but a good hurt :) (except for that piece of wood that banged off my kneecap, that wasn't a good hurt, that was more like a blinding shot of pain....)

Thought I could move it all in a day - I was wrong, next weekend is yet another weekend.... Almost done.

Lev, September 30, 2005 - 2:51 pm UTC

Hi Tom,

It looks like you opened Pandora’s Box.

1) I'd like to be able to kill a session without orakill (in windows). Something like:
alter session ... kill now
and all resources (locks especially) are freed

2) More detailed error messages, probably as an xml document - application will be able to analyze it and heal itself.

Thanks,
Lev


Tom Kyte
October 01, 2005 - 8:12 pm UTC

1) that is alter system kill session -- that releases the resources such as locks.

It won't release the SESSION entirely until the client says something to us (else we would be overwhelmed with "I'm getting 3113s all of the time, what's up with that?")
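For reference, a sketch of the existing command (the SID and serial# below are made up - you'd look them up first):

```sql
select sid, serial#
  from v$session
 where username = 'SCOTT';

alter system kill session '123,4567';
-- locks are released right away; the killed client sees ORA-00028
-- ("your session has been killed") on its next round trip
```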



Love Pandora's box - I've sent this URL to more than one person. They'll see the feedback and take it into consideration.


Let it never be said that I've said "this is perfect, we are done"....

Anto, September 30, 2005 - 2:51 pm UTC

To execute all operations in the least possible time and the most efficient way - (doing away with statistics collection, varying the values of init.ora parameters, etc. ;-)).

It should automatically set the best values for the various init.ora parameters for any system - maybe after the system has been up for a decent amount of time.

Tom Kyte
October 01, 2005 - 8:13 pm UTC

I believe we are getting there for many types of systems - there will likely always be exceptions.

For some systems - all you need to tell us is how much memory to use (pga/sga targets) and that is about it - today (10gr1/10gr2).

Anto, September 30, 2005 - 2:55 pm UTC

Never saw a thread so 'hot' other than the "Undocumented secrets for super-sizing your PGA" and "Predictive reorganization" for obvious reasons ;-)

Tom Kyte
October 01, 2005 - 8:14 pm UTC

Seems like controversy brings out everyone :)

but yes, this is the 2nd "hottest" thread right now....

sql loader control file generator

Jim, September 30, 2005 - 4:15 pm UTC

Nice little tool (sort of like how htmldb does it) that would generate the SQL*Loader control file. Allow a browse to the file and show the beginning of the file. Make a guess at the parse, but allow changes, etc. Then save the control file. It could be in Java (not a requirement) so it wouldn't have to be rewritten for each platform.

Oh, yes. The problem was chained rows and IO queue length. Thanks for the possibilities of things to look for. (We spoke briefly at the Oracle Book Store Wed. at OOW.)

Tom Kyte
October 01, 2005 - 8:28 pm UTC

I remember you -- we were guessing :) Glad to hear we guessed right....

bulk collect into

Art, September 30, 2005 - 4:35 pm UTC

I would like to be able to

select a, b, c
bulk collect into my_table_type index by d
from my_table

where d would set my_table_type's index
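Today you would bulk collect into a nested table and re-key it yourself; a sketch (table and column names are illustrative):

```sql
declare
    type rec_t is record ( a number, b number, c number, d number );
    type keyed_t is table of rec_t index by pls_integer;  -- keyed by d
    type list_t  is table of rec_t;

    l_rows  list_t;
    l_keyed keyed_t;
begin
    select a, b, c, d
      bulk collect into l_rows
      from my_table;

    for i in 1 .. l_rows.count loop
        l_keyed( l_rows(i).d ) := l_rows(i);   -- the manual "index by d"
    end loop;
end;
/
```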


I wish ....

Enayet, September 30, 2005 - 4:43 pm UTC

I wish ... I am hesitant to spell it out ... that Oracle had another nifty version for DW. I have often heard that Oracle underperforms in the DW arena.

I wish Oracle were as easy as the .NET + SQL Server duo; Oracle should be sweet as pie, so developers (.NET and Java dudes) would take to it the way VB developers took to SQL Server. There should be a native API (like ADO) to connect to Oracle.

I wish Larry would 'walmart-ize' Oracle as the de-facto database; one could buy Oracle from Best Buy :))



Tom Kyte
October 01, 2005 - 8:29 pm UTC

... Often I heard, Oracle underperform in DW arena. ...


don't hear things, experience them. If it were so, well, we'd be hurting big time.




I wish,,,,

sns, September 30, 2005 - 4:52 pm UTC

When I do deletes and inserts on my table every day (I couldn't partition it and truncate the partition), I'd prefer Oracle to have a facility to "exchange" the records. That way the delete from my base table is not required, and the insert from the incremental table can be avoided too.

Something like this:
exchange base_table with inc_table on base_table.column_name = inc_table.column_name;

I know partitions can be exchanged with other tables.

boom......




Tom Kyte
October 01, 2005 - 8:30 pm UTC

we call this "merge"???
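The "exchange" described above is roughly what MERGE already does (table and column names below are assumed):

```sql
merge into base_table b
using inc_table i
   on ( b.column_name = i.column_name )
 when matched then
      update set b.val = i.val              -- replace the existing record
 when not matched then
      insert ( b.column_name, b.val )       -- or bring in the new one
      values ( i.column_name, i.val );
```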

rough interface

Alberto Dell'Era, September 30, 2005 - 5:34 pm UTC

Instrumentation:
(a) we have bind variables peeking but "explain plan" and "autotrace" don't
peek;
(b) we have session cached cursors but we can't distinguish them in
v$open_cursors;
(c) we have dbms_xplan to wonderfully format plans but until 10gR2,
"explain plan" and "autotrace" didn't use the same rich-information format;
(e) we have parallel execution, but getting the plans of slave sql is an art
in itself;
(f) we have tkprof, but its plan dumps don't use dbms_xplan rich format;
(etcetera)

DBA interface:
(g) to rebuild a non-parallel index in parallel, we must "alter index rebuild
parallel" it, then set it back to noparallel - why can't we simply specify
the parallel degree to use for the rebuild? Same for "alter table move".
(h) why does an insert /*+ append */ into an *empty* table have to maintain indexes
(basically) row-by-row, instead of rebuilding them at the end, thus saving
tons of redo, CPU etc.? I'm tired of setting them UNUSABLE ...
(etcetera)

That's the hardest part of working with the Oracle database for me.

I think that I spend 40% of my time coping with limitations similar to the ones above, when implementing/testing a design, diagnosing a problem, explaining Oracle to juniors ...

In other words - working with Oracle is 40% learning about the dashboard, and the rest about the engine, suspensions, driving techniques - in my car, the ratio is something around 1%.

Tom Kyte
October 01, 2005 - 8:46 pm UTC

(e) have you seen the slave sql in 10g? totally redone from 9i?

(h) it doesn't - it does a MERGE of the index - the append builds the index off to the side and merges with an empty index in this case.

Right on Tom!

Menon, September 30, 2005 - 6:46 pm UTC

"that was the error I thought you wanted - but you mean "a function that forgets
to return a value"

Yup - left that detail out so the reader could appreciate how easy it is to miss.



A real whoami

Todd, September 30, 2005 - 6:54 pm UTC

A built-in function that would return the name of the current function/procedure, even within packages.

schema.funcproc_name
or
schema.package_name.funcproc_name

Hard to believe we're at 10GR2 and still don't have this yet...

Tom Kyte
October 01, 2005 - 8:51 pm UTC

hey, whoami isn't imaginary ;)
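The closest thing available today is parsing the call stack - a sketch, which also shows exactly the gap being complained about:

```sql
create or replace procedure whoami_demo
as
begin
    -- prints object names (e.g. "procedure SCOTT.WHOAMI_DEMO") and line
    -- numbers; for packaged code it shows only the PACKAGE name, not the
    -- procedure inside it - you still have to map the line number back
    -- to the subroutine yourself
    dbms_output.put_line( dbms_utility.format_call_stack );
end;
/
```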

Obvious one....

David Rydzewski, September 30, 2005 - 7:01 pm UTC

new init.ora setting

FAST=TRUE

Dave

Tom Kyte
October 01, 2005 - 8:51 pm UTC

;)

wishlist

Yelena, September 30, 2005 - 7:53 pm UTC

Statement and row level database triggers on SELECT.

Tom Kyte
October 01, 2005 - 8:51 pm UTC

what would you do with the row level trigger?

Whishlist keeps growing.. keeps growing

Raj Kathamuthu, September 30, 2005 - 9:46 pm UTC

* To be able to rename a schema (just got the rename-tablespace wish granted in 10gR2..).

* Making HTML_DB able to work with other web servers.. (or have some kind of a PL/SQL container in the middle to make it true 3-tier environment)

* Wouldn't it be nice if OID is self contained like any other LDAP servers without requiring a DB instance? Some security admins have rules (I'd say blind rules).. like
<1> There shouldn't be any databases hosted in the application layer..
<2> Web layer shouldn't directly talk to DB layer..
<..>
Sometimes it's hard to convince them and get the waiver..
Just wondering though.. storing hierarchical data in a relational structure (in the case of OID), what exactly are we gaining?



Tom Kyte
October 01, 2005 - 8:59 pm UTC

OID - it'll never happen, you know what you need for LDAP.....


umm, it is called a database, you can call it whatever you like but it is A DATABASE...

Hierarchical data goes well into "relational databases", it is where most of it is anyway.

You gain security, scalability, concurrency, backup, RECOVERY, replication, etc etc etc - without actually having to reinvent it all over again.



Enhancement Requests

A reader, September 30, 2005 - 11:55 pm UTC

How about an area in OTN where enhancement requests for all products can be entered, queried, and voted on (one vote per OTN member). This way we don't have to fill up poor Tom's forum with our wish list rants. ;-)

Tom Kyte
October 01, 2005 - 9:00 pm UTC

metalink - it does that. voting, no, but the only voting is done by customer advisory boards.

What I would like?

Joris, October 01, 2005 - 3:06 am UTC

1) Macros in plsql. Like #define in C.
2) Bash shell style command history and tab-autocomplete in sqlplus.
3) Ability to flush individual statements from the sqlarea. Sometimes a statement with a bad plan sits there because the first time it was executed the values in the binds were unusual.

Tom Kyte
October 01, 2005 - 9:03 pm UTC

1) there is conditional compilation in 10gr2 - like #ifdef, but no macros.....

3) agreed.
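A quick sketch of the 10gR2 conditional compilation mentioned in 1) - an inquiry directive plays the role of the preprocessor symbol:

```sql
alter session set plsql_ccflags = 'debug:true';

create or replace procedure cc_demo
as
begin
    $if $$debug $then                        -- like #ifdef DEBUG
        dbms_output.put_line( 'debug build' );
    $else
        null;
    $end
end;
/
```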

Improved plsql namespaces

Tom, October 01, 2005 - 6:05 am UTC

Couple of wishes which come from doing a lot of plsql work in the database.

Better Namespaces
-----------------
I would like to be able to actually have package hierarchies so I can create packages

testapp.crm.model.accountapi
testapp.crm.model.contactapi

Obviously this wouldn't be a small change :-) but pretty much all languages have the ability to subdivide code into namespaces like this. Unfortunately the best we can do in Oracle is separate schemas with all the related grant management etc.

Check Constraint Error Messages
-------------------------------
Easy one to implement I would think. I want to type the following

alter table widgets add constraint cost_chk check (cost < 100) error "Widget cost must be less than $100";

so I can catch check constraint violations and return a friendly message to the user.

Where do you go in Metalink to log enhancement requests?


Tom Kyte
October 01, 2005 - 9:04 pm UTC

it is done via the regular ITAR process..


I like that last idea myself.
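Until something like that exists, the usual workaround is to map the constraint name to a friendly message in the error handler - a sketch using the widgets example above:

```sql
begin
    insert into widgets ( cost ) values ( 150 );
exception
    when others then
        if sqlcode = -2290                        -- check constraint violated
           and instr( sqlerrm, 'COST_CHK' ) > 0   -- which constraint?
        then
            raise_application_error( -20001,
                'Widget cost must be less than $100' );
        else
            raise;
        end if;
end;
/
```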

hierarchical queries to xml

arnaud, October 01, 2005 - 8:07 am UTC

turn the result of a "connect by" query into xml

A reader, October 01, 2005 - 12:22 pm UTC

sometimes i don't like the support provided by metalink :o)

And now to lose some friends.

Niall Litchfield, October 01, 2005 - 12:42 pm UTC

OEM. It's just horrid. I'm yet to be convinced that 10g's is better than 9i's rather than just a backwards step. OK, I might not lose too many friends for that one, but

BINDS

I really don't see why

select cols from tab where pk = 10;

should be radically worse than

select cols from tab where pk = :b1;

I know why it is, I just wish it wasn't.


Tom Kyte
October 01, 2005 - 9:10 pm UTC

Niall,

you won't lose friends by expressing lucid opinions based on your experiences...


But on the bind thing - you and I see opposites I think. Maybe - when I come to the UKOUG - you can sit in on my "all about binds" session and I'll describe "why"


the first time around, "where pk = 10" is not any different from "where pk = :b1"

Even on the 1000th time, "where pk = 10" is not any different from "where pk = :b1"

However, you don't tend to submit "where pk = 10" 1000 times, you tend to do it once and it must be compiled (hard parsed)


so, it is the difference between:

where pk = 10
where pk = 11
....
where pk = 10000
where pk = 10001
...
where pk = .............


and

where pk = :b1



Think about a developer writing a SUBROUTINE for each unique set of inputs and calling them:

subroutine_1
subroutine_2
subroutine_3
.....


that is the logical equivalent.
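You can watch the effect in the shared pool; the commented lines show what a query like the following typically returns in each case:

```sql
select sql_text
  from v$sql
 where sql_text like 'select cols from tab where pk =%';

-- with literals: one cursor per distinct value, each hard parsed
--   select cols from tab where pk = 10
--   select cols from tab where pk = 11
--   ...
-- with a bind: a single shared cursor, hard parsed once
--   select cols from tab where pk = :b1
```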

rman improvements

Chris Crisante, October 01, 2005 - 1:42 pm UTC

I have liked the improvments in 9i and 10g. Some of the things that were available in mainframe DBMSs I have supported have slowly made their way into Oracle.

One useful feature still missing is Change Accumulation. This would be an RMAN function to go through the archivelogs for a time period and accumulate all the physical changes for each block. A subsequent restore could then use an incremental level 0 or full backup, and recovery would apply the 'change accumulation' file (and possibly whatever logs accumulated after the change accumulation utility had run). This would reduce the need for incremental level '>0' backups. It could also be done using only archive logs, with no need to touch the DB datafiles, and possibly even run on a different machine. And if you have a DB with multiple updates to the same blocks, it could speed up recovery (if a block changed every day, only one block image would need to be applied during recovery).

I also don't like much about the installation, cost and licensing of Oracle DB. Especially I dislike the way we must license individual 'options' such as partitioning, RAC, Parallel Query, OLAP, etc. I can understand some things such as Advanced Security or OLAP being add-ons. But Partitioning?? Parallel Query?? These should be a basic part of any DBMS these days. There is already the additional cost for 'Enterprise' edition which should cover the features such as Partitioning that most "enterprises" would need.

When I watch a sporting event these days, there is usually an Oracle commercial talking about the "low-cost" Grid - I see about 16 processors on the screen and do the math in my head: processor licenses, partitioning licenses, RAC licenses, Parallel Query licenses, licensing the hot standby site at the exact same cost as production even though there are no users, etc, etc, etc.








Tom Kyte
October 01, 2005 - 9:15 pm UTC

well - a physical standby does change accumulation and since you can backup from standby and restore to production - you sort of have that concept today.


PQ is a feature of Enterprise edition, not an option (hasn't been for a while). The license options are based on market - what do the others have or not have as well drives this.

Queue DDL statements.

Sai, October 01, 2005 - 3:36 pm UTC

Hi,

My wishlist is to have the ability to run DDL commands like "alter table add column ..." on a busy table and let them queue along with the other DML statements hitting the table. I know dbms_redefinition does it, but how about recreating a 300G table every now and then with each application release and taking the performance hit while the rebuild is going on?

This is along the lines of what an online index rebuild does: it queues with DML statements to get a lock on the table and starts recording changes in a journal table.

We have already filed enhancement request for this.

Tom, do you see any downside of having this feature?

Tom Kyte
October 01, 2005 - 9:17 pm UTC

well, it could lead to a logjam.

In order to queue in there, you prevent further updates from starting (they block on the ALTER), but if the alter is blocked long enough - bam, the system sort of stops dead.

That is one potential downside - and since an add column with a default could take a really, really long time.....

that would be another.

Standby <> Change Accum

Chris, October 01, 2005 - 9:42 pm UTC

You have a point in theory I guess, but in practice:

1. Standby requires another physical copy of the database, whereas a 'change accum' would not require that disk space - just sufficient sort space to go through the logs. Sort of like an incremental backup requiring fewer tapes than a full backup (in most cases).

2. Running a standby DB on a remote system requires a full production license for that system. Double the cost. A 'change accum' function could be run on the same system as the primary DB (or maybe on an offsite system configured in a less costly way to offload the work).

Other DBMS's i have used had both of these functions (standby and change accum) and we could use whichever was a better fit for us. It would be nice if Oracle did the same.

As for the licensing rant, i was just venting. Of course Oracle has the right to charge whatever the market will bear. But it is an inconvenience of having to go ask for more money from mgmt or users when we figure out, say, that partitioning would be helpful.

But I do wonder how well Grid computing is catching on with the average company when each processor in the Grid requires a processor license?

Tom Kyte
October 02, 2005 - 10:35 am UTC

Agreed, I was just pointing out that while not identical, you can achieve a very similar goal.

It is not double the cost, standby machines are typically a fraction of the size.


Another "accum" approach is the improved incrementals in 10g -- where we catch a backup up on disk.


Each processor in a NON-GRID environment requires a license too. At the end of the day, you either have 16 cpus in a single big machine or 16 cpus in 4 small very cheap machines. You still have "16"

Reduce/Eliminate parsing of REF CURSORS

A reader, October 01, 2005 - 10:19 pm UTC

It would be nice to have an "option" that would prevent parsing of the same SQL statement in a reference cursor. While I use them in specific places for specific problems with bind variables and session_cache_cursors, executions from multiple sessions can hit the library cache latch pretty hard.

Having the option to specify that a reference cursor is weak, strong, or static would be really nice!

just a couple of things

Mark Wooldridge, October 01, 2005 - 11:42 pm UTC

Well there are probably others but this is all I can think of now.

1. insert into tab values my_rec overriding col1 with myseq.nextval;

1.b assign user object = user rec, mapping the columns where the names are the same. ability to also assign rec = obj.

2. parametric search indexing similar to Verity's where you can refine a result search using predefined parametric values. Implemented this with an IOT which is maintained using a PL/SQL api for insert/update on the table. Just would like to see this more integrated with Oracle text.

Robert, October 02, 2005 - 12:23 am UTC

>>1. PL/SQL development tool with tight integration with version control like VSS <<

the new and free JDeveloper ?

>>
create or replace function f (x number )
return number
is
l_ret number;
begin
if x = 1then
l_ret := 2;
elsif x = 2then
l_ret := 3;
else
l_ret := -1;
end if;
end f;
<<

Wow! I haven't written standalone functions in a while -
I didn't even know this was compilable.

to Robert

Muhammad Riaz Shahid, October 02, 2005 - 1:07 am UTC

Why do you think this code is not compilable ?

Tom Kyte
October 02, 2005 - 10:42 am UTC

the lack of spaces around the "then"s looks "not valid"


 
ops$tkyte@ORA9IR2> select*from dual where 1>0and 5<6;
 
D
-
X


sort of like that. 

Alberto Dell'Era, October 02, 2005 - 6:56 am UTC

On:
http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:48704116042682#48858384840202
The point I was trying to make is that a lot of little "issues" or "inconveniences" scattered here and there, when added together, make a big issue and a lot of wasted time for everyone.

The "devil is in the details" is another way to put it.

I think that when re-engineering the engine, brakes and suspensions, the Oracle Engineers should walk the extra mile and apply their top-notch Engineering skills to the dashboard and stick-shift controls also (perhaps with the help of an ergonomic specialist).

E.g. implementing the "bind variable peeking" feature took a lot of time for sure - why not walk the extra mile and modify "explain plan" also, which would take, I bet, just 1% more time?

That's where I waste a lot of time - on details.

And when mentoring juniors, I usually start a mentoring session with a smiling-face junior ("great, today I can learn how to drive the Ferrari of the databases!!"), and very often see his/her face switch to disillusioned mode ("gosh, I expected the Ferrari cockpit to be a tad better"). Speaking about *perceived* quality.

Tom Kyte
October 02, 2005 - 10:56 am UTC

but the binds don't exist for an explain plan?  applications do not "bind" to explain plans ;)  

ops$tkyte@ORA9IR2> explain plan for select * from dual where :x= :y;
Explained.

no binds were defined, declared, anything - the concept has never existed.



As for your last paragraph, not that it matters - but have you seen the cockpits of the high end cars? 

Alberto Dell'Era, October 02, 2005 - 10:35 am UTC

> (h) it doesn't - it does a MERGE of the index - the append
> builds the index off to the side and merges with an empty
> index in this case.

Ok, but why merging with an empty index (and thus generating undo) instead of replacing the empty index with the "off to the side" one ? Or just rebuilding the index after the rows have been inserted above the table HWM (thus "kernelizing" the classical manual SET UNUSABLE-INSERT APPEND-REBUILD sequence - avoiding the UNUSABLE which is not necessary).

Inserting into an empty table is not infrequent - staging tables, local copies of remote tables, indexed GTTs, etc

Furthering the override above

Bob B, October 02, 2005 - 11:18 am UTC

Or just a way to attach a sequence to a table.column as a default. If the column isn't provided, it uses the sequence number.

Tom Kyte
October 02, 2005 - 12:27 pm UTC

well, I know it is not a sequence, but if all you want is a "surrogate key":

ops$tkyte@ORA10G> create table t ( x raw(16) default sys_guid(),
  2                   y varchar2(32) default sys_guid(),
  3                                   z varchar2(5) );
 
Table created.
 
ops$tkyte@ORA10G>
ops$tkyte@ORA10G>
ops$tkyte@ORA10G> insert into t (z) values ( 'hello' );
 
1 row created.
 
ops$tkyte@ORA10G> select * from t;
 
X                                Y                                Z
-------------------------------- -------------------------------- -----
0227D36794A61D44E040007F010061CC 0227D36794A71D44E040007F010061CC hello
 


(using RAW would be best - but some people are 'afraid' of raw...) 

Alberto Dell'Era, October 02, 2005 - 12:25 pm UTC

>ops$tkyte@ORA9IR2> explain plan for select * from dual where :x= :y;
>Explained.

I know, it assumes that :x and :y are varchar2 and doesn't peek at the bind variables values.

Wouldn't you be happy if "explain plan" peeked at the values (and TYPES) of the :x and :y bind variables (including the LENGTH of varchar2 variables that can influence the plan) and so generated and explained the *real* plan ?

Or - how many people actually know about that "explain plan" limitation ? Even if they know, it's Yet Another Thing to remember - add all them together and it makes for a pretty cluttered dashboard. Fix these little "inconveniences" and you'll get a much more pleasant driving experience, and enjoy the top-notch engine, instead of fighting with the cockpit controls.

Tom Kyte
October 02, 2005 - 5:35 pm UTC

(there is nothing to peek at is my point...)

I didn't define any binds, you don't define binds, you don't even have the syntax to bind to a explain plan...

I hear you - but it would take almost a new command and a change in code in all existing applications.

sys_guid()

Bob B, October 02, 2005 - 1:05 pm UTC

I didn't know sys_guid() could be used. The only problem I see is that the raw is always going to take 16 bytes, whereas a number will take between 1 and 8(?) bytes. This isn't much for a single table, but using it across an application and with RI it could add up quickly.

Another advantage of being able to bind a sequence to a table.column would be the ability to easily tell what sequences are used for which tables. Plus, the database engine could use this information to optimize insert into select, merge, and other bulk operations.

Tom Kyte
October 02, 2005 - 5:36 pm UTC

1 and 22 bytes....
numbers are 1 to 22 bytes in length.


I don't see how this would optimize anything?

I wish I could. . .

Andrew, October 02, 2005 - 3:47 pm UTC

a) Set an initialization parameter that would return something like "ORA-01002: fetch out of sequence" for any transaction that attempts to fetch across a commit -- not just on a FOR UPDATE cursor. We have a standard that we do not do this, but it would sure be nice to have the database enforce it.

b) See the values of bind variables for other sessions. Something like v$session_binds
sid,
serial#,
sql_address,
sql_hash_value,
name,
value

There are probably others, but those are the only ones that come to mind right now.

New undocumented ora.init parameter

Andrew, October 02, 2005 - 4:52 pm UTC

First step:
_db_optimization = SO_SO
Second step:
_db_optimization = OK
Third step:
_db_optimization = PRETTY_GOOD
Fourth step:
_db_optimization = VERY_VERY_FAST
Fifth step:
_db_optimization = TRULY_AMAZING

After each iteration run some benchmarks and give them to the management. Now that's something that every DBA would appreciate!

about the exchange row facility,

sns, October 02, 2005 - 6:10 pm UTC

Oh my god. I have been using this facility for the last 2 years and I forgot about this. The reason I forgot is that in my current working place there are lots of ETL transactions that do DELETE and INSERT. Nobody has tried MERGE as an alternative. Maybe they are scared of the performance, since some tables have more than 100 columns. Can we use the "parallel" hint while doing MERGE?

I'd like to work in a place where management gives full liberty to make use of all the features the database has. But bureaucracy sometimes prohibits that.


Thanks,

Tom Kyte
October 03, 2005 - 7:17 am UTC

you can do a parallel merge, direct path the insert part....


100 columns? so?
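A hedged sketch of what that looks like - parallel DML must be enabled at the session level, and the append hint direct-paths the insert branch (table and column names are invented for the example):

```sql
alter session enable parallel dml;

merge /*+ parallel(t) append */ into target_tab t
using staging_tab s
on (t.id = s.id)
when matched then
  update set t.payload = s.payload
when not matched then
  insert (id, payload) values (s.id, s.payload);

commit;
```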

SQL Loader dinosaur should evolve

Andrew, October 02, 2005 - 10:50 pm UTC

1. SQL Loader control and shell run files should really be generated through a GUI so that people like me don't have to look at the docs literally every time (or dig up an example of an older SQL Loader process). OWB now has some functionality there, but why not copy that functionality somewhere else and expand it further...

2. One thing that I really loathe about SQL Loader is that you have to call it from the command line and cannot do it straight from PL/SQL. Does anyone really feel like creating shell scripts and, even worse, extracting error info from log files?! It shouldn't be that hard to create PL/SQL and Java APIs for it, with nice error codes. External tables are usually a good alternative, but some functionality is missing (e.g. loading into multiple tables - very often found in many legacy and even non-legacy environments).

Tom Kyte
October 03, 2005 - 7:20 am UTC



2) multi-table inserts? you can load into as many tables as you like, even easier than sqlldr. You can use complex when clauses with OR's, AND's, whatever sql function you like
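For reference, a sketch of the multi-table insert Tom mentions, loading an external table into several targets with WHEN clauses (all object names here are invented for the example):

```sql
insert first
  when rec_type = 'A' then
    into table_a (id, payload) values (id, payload)
  when rec_type = 'B' or rec_type = 'C' then
    into table_bc (id, payload) values (id, payload)
  else
    into table_other (id, payload) values (id, payload)
select id, rec_type, payload
  from ext_stage;  -- an external table over the flat file
```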

Re: What I would like?

Egor Starostin, October 02, 2005 - 11:39 pm UTC

> 2) Bash shell style command history and tab-autocomplete in sqlplus.
You know, it's already here. Just install rlwrap to enable these features. :)


Standby is a fraction of cost?

chris crisante, October 03, 2005 - 1:23 am UTC

Sorry about this - I know this thread isn't about licensing. But.. in reply to your points

"It is not double the cost, standby machines are typically a fraction of the size."

- I meant double the cost of Oracle licenses, not double the cost of the machine. According to a very official looking document I have - I could fax it to you - if we implement a Hot Standby which we intend to use for DRP (The usual reason for a Standby at our shop), we must license that standby machine at the same level as the production machine. We can't buy a 'small' Oracle license (eg 10 named user, etc) for our DRP machine even though nobody will (hopefully) ever sign on to that machine. And every time we add capacity to a production machine, we are obligated to add the same license for DRP. We have argued this point unsuccessfully for years.

"Each processor in a NON-GRID environment requires a license too. At the end of the day, you either have 16 cpus in a single big machine or 16 cpus in 4 small very cheap machines. You still have "16" "

Agreed, 4*4=16*1. But in my experience there is usually the choice of upgrading CPUs vs adding more CPUs. Sometimes we feel 'more CPUs' might perform better than 'faster CPUs' for a particular app, but we can rarely use that approach because the licenses cost more than the hardware.

The Grid approach makes a lot of sense. It would be nice to manage CPUs as a 'low-cost' commodity much like disk. But to date, the licensing costs have made that approach hard to sell. It's not just Oracle, most software vendors base their costs on the number of CPUs in the machine. Hopefully that will change someday soon.



Tom Kyte
October 03, 2005 - 7:31 am UTC

that is not my understanding on how it works, but I am not a license person.

The database license guide
http://www.oracle.com/corporate/pricing/specialtopics.html

states that the same *metric* must be used (named user vs processor) but does not say you have to license 16 cpus on a standby that only has 4. You cannot license named user on production and cpu on standby, you have to do cpu/cpu or named user/named user.

Cost: Oracle standby vs DB2

Enayet, October 03, 2005 - 6:12 am UTC

Just to add my recent experience about standby licensing:

Oracle does charge for standby licensing, which is disheartening. My argument is: a standby is an inactive database, so why should we pay for that?

I know IBM/DB2 does not charge any extra for their standby. By the same token, Oracle argues that their standby is a HOT standby and can be opened as a read-only DB anytime for reporting purposes.

But my personal opinion: this is just overkill. A standby should be free of charge; no money should be paid for such an idle database. It does not make a valid business case!

I have worked with Veritas, EMC, IBM and Microsoft. They all agreed not to charge any money for the software (even for hardware, IBM and EMC agreed to opt for a premium payment) until activated for production usage; except Oracle. That beats me!


dave, October 03, 2005 - 6:19 am UTC

Taking Oracle's point of view here: a true standby database is a live database - it is accepting redo and processing that data. With a logical standby you can actively use that database for queries and reporting (and you can with a physical standby as well, if you open it read only).

To say it is idle is really false, because it is doing a lot of work on your behalf to keep your data in sync.

If you don't want that, consider a cold failover: one where, if the first machine fails, you can remount the disks onto the second server and then start up the instance. If that database is active for more than 10 days per year then you need to buy a license, otherwise it is 'free'.

my wishlist

Markus, October 03, 2005 - 6:58 am UTC

1. a "switch user" command like the "su" command on unix. this would be a tremendous help as we - the dba's - would not have to know all user passwords any longer. (tom, i know your script where you temporarily change a user password and set it back afterwards. but this is not feasible in every case).

2. comment opportunities for nearly all database objects, like the already existing table and column comments. it is very common that objects, starting with users, roles and associated privileges, are created/granted by someone, maybe also documented somewhere "off-line", but after a while no one really knows what these objects were initially created for. providing the ability for object comments would make the database schema more "self-documenting", e.g. looking at DBA_USERS would not only show what exists but also why it exists.

Tom Kyte
October 03, 2005 - 7:40 am UTC

1) but why is a DBA having to even log in as another user? what are the use cases (i can already create a table, index, procedure, etc etc etc "for" you, why do I need to "be" you?)



ad "my wishlist", "why a dba have to log in..."

Markus, October 03, 2005 - 8:12 am UTC

hi tom,

as to why a DBA has to log in as another user:

e.g. to maintain private database links (staff and organisation change, and sometimes new staff is not aware how to do this, or it is not clear at the moment who will be responsible for this in future,...).

e.g. to work and try something in a specific user's security domain directly. sometimes users call and simply claim "it is", where i, as dba, want to validate that "it is" for real. this holds especially true when using security contexts and logon triggers,...

regards,
markus



Tom Kyte
October 03, 2005 - 11:11 am UTC

for private dblinks, I've not heard/experienced a need for a DBA to do that personally - that seems fishy.....

AJB, October 03, 2005 - 8:45 am UTC

What about enforcing the use of APIs if required.

If you have table(s) that should only be maintained via api calls then prohibit other update methods (single dml statements).

This should probably be configurable as it wouldn't always be needed or desirable but would be a good option to protect data from rogue updates.

Tom Kyte
October 03, 2005 - 11:15 am UTC

GRANT and REVOKE are awesome for that?

I mean, with n-tier proxy authentication, secure application contexts and fine grained access control - you have that (since 8i). You can make it so that only your application can muck with the data


Column dependency.

MBPP, October 03, 2005 - 8:50 am UTC

One nice feature to implement would be to have a dictionary view to show where a particular column is referenced, not only table.

Alex, October 03, 2005 - 9:08 am UTC

Tom,

Since I submitted this thread I've gotten about 50 emails from you saying "a reply has been posted here....etc". It seems I am getting one every time someone posts a followup here. I was thinking you might like to check into that. Thanks.



Tom Kyte
October 03, 2005 - 11:17 am UTC

You asked to be notified when the question is updated when you submitted it?

Would you like for me to update that and turn it off for you?

To: AJB from England

Andrew, October 03, 2005 - 10:06 am UTC

That facility is already available, thus;
a) create a schema to own all objects. The password is known only to the DBAs.
b) DO NOT grant any privileges (other than maybe select) on these tables to anyone.
c) create the API procs ( in the same schema as the tables ) as necessary.
d) Grant execute on these API procs to the appropriate users/roles.

Now no one can manipulate the data at all, except through one of the procs you have created.
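A minimal sketch of steps a) to d) - the schema objects and procedure below are illustrative only:

```sql
-- a) as the owning schema, whose password only the DBAs know
create table emp_data ( empno number primary key, sal number );

-- c) the API lives in the same schema as the table
create or replace procedure give_raise( p_empno in number, p_pct in number )
as
begin
  update emp_data
     set sal = sal * ( 1 + p_pct / 100 )
   where empno = p_empno;
end;
/

-- b) no grants on the table itself...
-- d) ...only execute on the API
grant execute on give_raise to app_role;
```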


AJB, October 03, 2005 - 11:04 am UTC

Thanks Andrew - I know you can limit access in this way, but I'm looking for an extra level of protection to stop developers (the people maintaining the APIs) from bypassing existing APIs.
I know you would expect these people to be aware of existing APIs and how to use them, but in complex applications this isn't always the case, especially if the APIs aren't as fully documented as they should be.


Tom Kyte
October 03, 2005 - 11:30 am UTC

n-tier proxy authentication, you can set it up so that only the application can make changes.
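For reference, the basic proxy setup looks something like this (the account names are invented; the bracketed connect syntax is the sqlplus proxy-connect form):

```sql
-- allow the middle-tier account to connect on behalf of SCOTT
alter user scott grant connect through appserver;
```

The application then connects as `appserver[scott]/appserver_password` and operates in SCOTT's security domain without ever knowing SCOTT's password.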

Your Top 5 or Top 10 suggestions

Anto, October 03, 2005 - 11:28 am UTC

Hi Tom,

Before you decide to close down this thread, please pick your Top 5 or Top 10 suggestions from the list given by various users of oracle



Tom Kyte
October 03, 2005 - 8:22 pm UTC

There are precisely 2 threads out of 27,231 that I've regretfully ever "closed down". 2... There is a commonality between those two threads not shared by this one. It won't be shut down.

I was asked to get the top ten from here and send them along (this thread has generated interest), I'll share that all with you after I do it.

Mark, October 03, 2005 - 11:29 am UTC

"
Followup:
1) but why is a DBA having to even log in as another user? what are the use
cases (i can aleady create a table, index, procedure, etc etc etc "for" you, why do I need to "be" you?)"

What about jobs? As a DBA, can you schedule a job for a user? Or run a job as a user? I have not used 10g, so maybe you can there, but in 9i this was a problem.


I would like to see power and flexibility like the Concurrent Manager in Oracle Apps: the ability to group jobs, prioritize them, and serialize them. I would also like to see Enterprise Manager have a job management screen.

Thanks..


Virtual column?

David Rydzewski, October 03, 2005 - 11:34 am UTC

Something to the effect of

alter table add new_column varchar2(50) new segments only;

In other words, altering a table to add a new column but only having it affect newly added partitions. SQL against older partitions would yield a null for this column. Once older partitions are aged out, the column is no longer considered "virtual."

Could be useful in supporting change in a data warehousing environment that can't afford down time.

Dave

A reader, October 03, 2005 - 11:34 am UTC

Oh is that how that works? Sorry I thought that was to notify me when YOU responded to it. This is the first time I asked a question and it's been published. If it's not too much trouble you can kill that, this turned out to be a hot one.

Tom Kyte
October 03, 2005 - 8:29 pm UTC

done...

Above post is Alex

A reader, October 03, 2005 - 11:36 am UTC


column specific errors

Igor, October 03, 2005 - 11:39 am UTC

Hi,

It would be nice, when doing large INSERTs (I mean many columns), if column-specific error messages were more informative. So if you get an error when attempting to insert NULL into a not null column, or to insert alphabetic characters where digits are expected, it would be clear which column is problematic.
Also it would be nice if view column and value mappings were made much clearer.


Tom Kyte
October 03, 2005 - 8:31 pm UTC

ops$tkyte@ORA10GR2> insert into t values ( null );
insert into t values ( null )
                       *
ERROR at line 1:
ORA-01400: cannot insert NULL into ("OPS$TKYTE"."T"."X")



for nulls that's been true for a couple of versions. 

Add a storage option for LOBS

Jim, October 03, 2005 - 1:55 pm UTC

Add a storage option for LOBs to store them in gzip or a similar compressed format. Maybe this would be limited to binary LOBs (BLOBs). The BLOB would be uncompressed on retrieval. The compressed format would not allow function manipulation (instr, etc.) or where clause usage; it would just compress the data.

Sqlplus Enhancement

Mathew Butler, October 03, 2005 - 2:11 pm UTC

How about a facility in sqlplus to enhance the SQL buffer and enable the aliasing of ad-hoc SQL. It seems I type similar things within a session many times. It would be great to be able to alias the last 10 queries that you executed and then be able to access these in the buffer.

Mat.

parameter request

Tom Jordan, October 03, 2005 - 4:16 pm UTC

I would request an "exclude" parameter for imp/exp. My fingers are cramped from listing out all (1000 - 1) tables for an export.

Sure I can script it with all_tables etc., I'm just saying...

-- Tom

Tom Kyte
October 03, 2005 - 8:59 pm UTC

data pump.......

A reader, October 03, 2005 - 5:29 pm UTC

I guess it is there in expdp in 10g

Raptorize

Tarry Singh, October 03, 2005 - 5:42 pm UTC

"(you beat me to it!)"

heh heh

init parameter request

A reader, October 03, 2005 - 6:26 pm UTC

Hi,

a nice parameter would be
implicit_type_conversion=false

My requests

Gary, October 03, 2005 - 8:49 pm UTC

Something similar was set up on
http://www.oracleplsqlprogramming.com/IC/
but it had a tendency to throw MySQL errors when submitting suggestions :)

My list.
1. A couple of mentions for disabling implicit type conversion. I'd like that to extend to explicit type conversions without format masks, at least for TO_DATE.
2. DESC for a specific procedure/function in a package.
3. Extended explain syntax to add an optional USING clause (similar to EXECUTE IMMEDIATE) to allow the equivalent of bind variable peeking.
4. Something that is sort of like a function, but only involves SQL so it doesn't require a context switch.
For example, to strip all the non-numeric characters from a string, you would have
CREATE OR REPLACE ????? strip_nonnum (p_in varchar2)
returns
translate(p_in,' '||translate(p_in,' 1234567890',' '),' ');
and your SQL is simply SELECT strip_nonnum(col_name)....

[Thinking about it, it would be nice to have STRIP and KEEP SQL functions that discard or keep characters from strings rather than using TRANSLATE. I hate the fact that TRANSLATE fundamentally deals with arrays of characters rather than strings, but treats a zero-length array as a null string. CONCAT is quite happy with the idea of a zero-length string.]
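The TRANSLATE idiom in item 4 works like this: the inner call strips the digits out of the input, leaving the list of characters to discard, and the outer call discards exactly those characters (shown here with a literal for illustration):

```sql
select translate( 'abc123def45',
                  ' ' || translate('abc123def45', ' 1234567890', ' '),
                  ' ' ) as digits_only
  from dual;
-- digits_only: 12345
```

The leading space is needed because TRANSLATE treats an empty "from" list as NULL, which is exactly the wart the comment above complains about.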


Tom Kyte
October 03, 2005 - 9:15 pm UTC

2) oh, so you want to go back to 7.3 :) (that was the way it used to work....)

3) nice, in line with Alberto above...

4) I like that.



User Friendliness

Dhimant, October 04, 2005 - 12:54 am UTC

I guess anybody who has worked with MS SQL Server 7.0 and above can easily understand the ease of use and flexibility Enterprise Manager in SQL Server offers. Java based consoles [OEM] are good and platform independent, but slooooooooooooooooooooow. Oracle has fantastic manuals, but they need to be coupled ***with*** sqlplus, at least for SQL and PL/SQL. A lightweight database modelling tool should also be made part of Oracle Enterprise Edition. Forget everything else; I think just add the SQL and PL/SQL documentation with sqlplus. Do it, and do it asap.

regds.

Tom Kyte
October 04, 2005 - 1:42 pm UTC

Forget
everything else I think just add the SQL and PL/SQL Documentation with sqlplus.


what do you mean by that?

Idle musings

Connor, October 04, 2005 - 1:15 am UTC

Most of these can be done via other means - so this is more of a "convenience" list.

1) "on commit trigger"

For doing those transactional level validation things - sort
of like deferred constraints on steroids.

2) "v$sql_children"

child of v$sql showing statistics for each execution of an SQL (say at stats_level = all). Sometimes 5 executions of the same child cursor have very different execution characteristics.

3) "10046 level 16"

logical IO's dumped to trace file. (I know there are other means of doing this but be nice to have it all under 10046)

4) SQL to PLSQL call stack

SQL's called from PL/SQL can be linked to the executing package without having to trace/profiler/x$kglrd

5) Extended unique constraint

alter table T add constraint C unique ( upper(col) )

rather than having to use 'create index'

6) scott/tiger

$> ls -l /oracle/10gr2/sqlplus/demo/demobld.sql
/oracle/10gr2/sqlplus/demo/demobld.sql: No such file or directory

I don't *want* those new example schemas :-)

7) rman "duplicate as copy"

rman dup appears to always use a backup or take a new one. Why not just backup from the live db straight into the duplicated instance. The 'backup as copy' gets me closer in 10g

8) parent-id shown with dbms_xplan

The indenting is nice, but the explicit link between id to parent id is more concrete imho.

9) lob segments named better by default

rather than "SYS_L....". why not do it like materialised views etc - use the first 'n' chars of the column name with a sequence# suffix. You could always fall back to the default if a clash occurred.

10) smart date based add-partition

If you've got a range partitioned table and each range is a day/week/month/year, then it's not too much of a stretch to allow a ddl such as:

alter table T add partition default range;

11) /*+ cursor_sharing_force */ hint

Trying to convince java developers to rewrite their string-concatenated SQL is a big ask. But getting them to add the hint to the string is easy. Allows selective cursor sharing without having to toggle 'alter session' calls throughout the code.

12) dbms_stats('execute-for-n-secs')

Rather than estimate/compute etc, simply a "please calculate stats until n seconds have elapsed". Under the scenes it could be similar to the auto mode - start with small sample and keep extending until time runs out.

13) aux_stats$ entries per instance

14) alter sequence reset to n;

15) online table move

"alter table T move online";

Spare the dbms_redefinition mumbo jumbo :-)

16) transactional truncate (under certain conditions)

So we could do:

truncate
insert /*+ append */
rollback

I'm thinking the truncate simply sets a seg header flag which forces all new data above the hwm. If you commit, we move the start block to the old hwm.

17) usage counts in v$object_usage

18) alter (context) index rebuild online;

and no its *not* already there when the manual says: "At the very beginning or very end of this process, DML might fail."

19) Removal of anything related to XML :-)

Cheers
Connor

Tom Kyte
October 04, 2005 - 2:08 pm UTC

5) yes....


thanks for the list - nice ideas. I will be (after this settles a bit, getting there soon I think) pick sort of a top-n to be sent on up the chain.

For "Queue DDL statements"

Connor, October 04, 2005 - 1:49 am UTC

You might want to play with "ddl_wait_for_lock"

Tom Kyte
October 04, 2005 - 2:08 pm UTC

well, now you are just getting pushy ;)

If not already mentioned ...

Clemens, October 04, 2005 - 3:08 am UTC

... how about these:

- NULL-Strings <> '', as a session or init parameter - that would help a lot (and keep away those NVL constructs).

- Trigger "numbering" (as seen for example in firebird), so we can have several "insert for each row" triggers on the same table with a fixed execution order - could be handy if you automate some insert triggers for sequence numbers or have to add new features to existing applications.

- Sequence/Id-Columns, as seen in postgres. You can still create your own sequences, but it keeps away the unavoidable insert-triggers for that purpose.

- Removing the varchar2 barrier - LOBs are nice, but why not open up the "character" datatype as seen in other DBs (even firebird manages char columns up to 32K in length).

- A "step"-clause in the for-loop would be nice (as seen in other languages [2]).

- Case statements with a selector which allows extended compares, such as IN, LIKE, etc. Searched case statements are nice, but couldn't there be a more relaxed variant for the selector type? (as seen in other languages [2]).

- Mixed case object names combined with case insensitive DML. All upper case (such as in desc statements) is harder to read (in my opinion), and "<objectname>" requires " "-syntax on selects, etc., if I remember correctly. Could be a session or init parameter.


- Already mentioned: constraints with their own error messages (I think Sybase ASE has that already built in).

- Already mentioned: the 30 character limit on (all) object names. Tables/columns are fine (for me), but for constraints (such as foreign keys), where I usually prefer to include both table names and columns for easier identification, it's a drag, especially if you use 3rd-party modelling tools.

C.

[1] My first post here, but I have been a reader of this, in my opinion, very valuable "knowledge base" for a couple of years now.
[2] I'm referring to languages such as Visual Basic (gambas if you're on the *[IU]X side).

I wish...

Vladimir Andreev, October 04, 2005 - 6:06 am UTC

Removal of LONG columns from the dictionary views (or adding new LOB columns with the same data and keeping the LONG columns for backward compatibility - sort of what happened to V$SESSION_EVENT with regard to TIME_WAITED and TIME_WAITED_MICRO)

Cheers,
Flado

I would like...

Frank, October 04, 2005 - 7:57 am UTC

... to be able to see dependencies between packaged functions and procedures (i.e. at function-level; not at package level), as well as to be able to query for parameters of (private) procedures/functions


Tkprof stuff

Jim, October 04, 2005 - 8:26 am UTC

1. Finer resolution on CPU times.
2. Format bind values.

group by syntax

Graham Halsey, October 04, 2005 - 9:27 am UTC

I may be way behind the times (8i!), but why can I do this:

"select table_name,tablespace_name from user_tables order by 2,1;"

but not this:

"select tablespace_name,count(*) from user_tables group by 1;"

i.e. why is ORDER BY and GROUP BY handled differently when it comes to referring to fields in the SELECT list numerically?

Tom Kyte
October 04, 2005 - 4:30 pm UTC

SQL standard.... It permits order by <ordinal column position>, but not so for group by.

Finer grained dependencies on packages ...

Dushan, October 04, 2005 - 9:46 am UTC

I wish I could see dependency like:

package1.procedure1 calls package2.procedure2

It would be nice for documentation issues.

Syntax Re-write..

Pet, October 04, 2005 - 10:10 am UTC

I would like PL/SQL syntax to match up with other languages (java probably..)

:= should become =
= should become ==
if() then end if should become if(){} else {}
...

it's tough every time to switch coding styles between PL/SQL and Java. I guess most developers have the same problem..


Tom Kyte
October 04, 2005 - 4:33 pm UTC

that'll not be happening.

Make java change. := predates java by a couple of years and a quite a few languages :)

Sorry, but I disagree ...

Clemens, October 04, 2005 - 10:21 am UTC

@Pet: Java is Java and PL/SQL is PL/SQL (or ADA). No need to turn PL/SQL into something different. But if Oracle would offer the use of different stored-procedure languages (as, for example, postgres does) ... my personal favorite then would be LUA. ;-)

C.

Tom Kyte
October 04, 2005 - 4:41 pm UTC

we do allow for different languages

plsql
C
java

and in 10gr2 - CLR languages on windows

wish list

Georg, October 04, 2005 - 11:00 am UTC

1. Improved data dictionary security
Why is it possible to modify internal views (like DBA_USERS)? I wish a new security layer would make such internal objects "read-only". Or even have internal checksums to be able to detect manipulations...

2. From the other DB-world: TOP N Query and Identity
I know, it's easy to build it in Oracle - but...
Why not just
create table <t-name> (id number identity using sequence <seq-name>)

and SELECT TOP N ... is really much more user friendly than using inline views.

3. "NULL or unique" constraint.
Business case: you have a form with an email field. The user doesn't have to fill it in - but if he does, the email address has to be unique. So there are many NULL values and unique not null values.

4. already mentioned: 30 char limit and reset a sequence value


Tom Kyte
October 04, 2005 - 4:42 pm UTC

3) that already exists?? a unique constraint on a NULLABLE column is always "null or unique"
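A quick illustration of Tom's point (the table is hypothetical):

```sql
create table t_email ( email varchar2(50) unique );

insert into t_email values ( null );       -- succeeds
insert into t_email values ( null );       -- succeeds too; NULL keys are not compared
insert into t_email values ( 'a@b.com' );  -- succeeds
insert into t_email values ( 'a@b.com' );  -- fails: ORA-00001 unique constraint violated
```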



Arrays as IN arguments to SQL Statements

Steve G, October 04, 2005 - 11:08 am UTC

I wish PL/SQL would allow arrays passed in to be used as an IN argument to a SQL clause, such as:

p_array_demo(p_int in int_array)
is
begin
select *
from temp
where pk in (p_int);
end;

That would be real helpful.

Tom Kyte
October 04, 2005 - 4:46 pm UTC

if you use collections - you can do that.
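A sketch of the collection approach - the type must be created at the SQL level (not inside PL/SQL) for the TABLE() operator to work; names follow Steve's example, and the loop body is a placeholder:

```sql
create type int_array as table of number;
/

create or replace procedure p_array_demo( p_int in int_array )
is
begin
  for r in ( select *
               from temp
              where pk in ( select column_value
                              from table( p_int ) ) )
  loop
    null;  -- process each matching row here
  end loop;
end;
/
```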




The Installer

C Wayne Huling, October 04, 2005 - 11:23 am UTC

If this is available, I have not found it: running the Universal Installer in text mode. Having to run it across 10 base T (really, the Government installation won't upgrade!) through the firewall is a killer!

Tom Kyte
October 04, 2005 - 4:50 pm UTC

there is no character mode, there is

a) gui mode
b) silent mode (you ran the gui and created a response file to replay later)

Access of tracefile to developers

Jagjeet Singh, October 04, 2005 - 11:43 am UTC

I would like to have a feature which can provide the trace output directly to the sql*plus client.

Developers can issue "alter session set sql_trace=true", but to get the resulting trace file they need access to the database server, or the dba needs to set something up.

I would like to have a global temporary table for this output.


Delink Full Table Scan I/O from db_file_multiblock_read_count & db_block_size parameters

Anto, October 04, 2005 - 11:49 am UTC

Do all Full Table Scans at the maximum I/O rate possible for the system, irrespective of the settings for db_file_multiblock_read_count & db_block_size



Tom Kyte
October 04, 2005 - 4:53 pm UTC

10gr2 - "self tuning" setting for that.

or just set db_file_multiblock_read_count high?

Some estimate of time remaining

Anto, October 04, 2005 - 12:21 pm UTC

Some way of determining the time remaining for operations like Nested Loops (based on the statistics available to the optimizer), even if it is not accurate (if stats are not up to date)

Similar to what we have for long operations like full table scans, fast full index scans / hash joins etc.

How about a column name

Ian, October 04, 2005 - 12:45 pm UTC

It would be nice to be told the column name when you get an ORA-01401: inserted value too large for column.

Tom Kyte
October 04, 2005 - 5:15 pm UTC

ops$tkyte@ORA10G> select * from v$version;
 
BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.1.0.4.0 - Prod
PL/SQL Release 10.1.0.4.0 - Production
CORE    10.1.0.4.0      Production
TNS for Linux: Version 10.1.0.4.0 - Production
NLSRTL Version 10.1.0.4.0 - Production
 
ops$tkyte@ORA10G> @test
 
Table dropped.
 
 
Table created.
 
insert into t values ('xxx','xx')
                      *
ERROR at line 1:
ORA-12899: value too large for column "OPS$TKYTE"."T"."X" (actual: 3, maximum:
2)
 
 
insert into t values ('xx','xxx')
                           *
ERROR at line 1:
ORA-12899: value too large for column "OPS$TKYTE"."T"."Y" (actual: 3, maximum:
2)
 

ASM - even more intelligence

Jim, October 04, 2005 - 1:05 pm UTC

Automatic Storage management is nice but it goes on the assumption that raw IO times and available size of each mount point are equal. Two issues here:

1. If you have two mount points with unequal disk access times (e.g. one with 10,000 RPM disks and one with 15,000 RPM disks) and you use them as one ASM file system, you are going to have unusual performance characteristics. Yes, the advice is that you shouldn't do that, or that you should have two file systems (one for the 10K disks and one for the 15K disks). However, the reality is that over time it becomes difficult to do that.

2. Mount points of uneven size. ASM really wants mount points that are the same size; it wants to spread the data evenly over the mount points. Again, on a brand spanking new system you would want to set things up this way. As things age, the data bits age into their 40's and their waistlines expand, taking up more room. (just kidding) So another mount point gets added that is much larger. Now you have a situation that is similar to situation #1: potentially uneven access.


It would be nice to "environmentally tune" ASM. It should know what the IO patterns are and what the disk characteristics are over time. (how fast different types of reads and writes occur over time.) It could then make a better allocation of data over the mount points to optimize data access. It would have to take into account segment access, frequency and volume also.

This would not be a trivial piece of work and you might not want ASM to always be that smart. But I envision that a simple parameter could activate it or not. drum roll please.... _fast=true

(grin)

Tom Kyte
October 04, 2005 - 5:20 pm UTC

1) that would be a "don't put them into the same diskgroup" yes. but that would be true OF ANY volume manager.

2) no it doesn't? it just wants disk, as much as you want to give it. I don't understand that comment?



What to send up the chain

Bob B, October 04, 2005 - 2:21 pm UTC

I was reading on the "Creating Passionate Users" site about 37 signals and their sort of mantra on customer requests. If something's really important, people will keep bringing it up. So far, the big one I'm seeing is increasing the length of object names. I think many situations have come up where a (slightly) longer name for a constraint or index would make it self-documenting. Or maybe leave object names as they are and allow all objects to be commented on? *shrug*
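
For what it's worth, tables and columns can already be commented on today; constraints and indexes are indeed left out, which is part of the complaint. A quick illustration:

```sql
-- COMMENT ON works for tables and columns, but there is no
-- COMMENT ON CONSTRAINT or COMMENT ON INDEX.
comment on table emp is 'Employee master';
comment on column emp.empno is 'Surrogate primary key';

select comments from user_tab_comments where table_name = 'EMP';
```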

Improve sqlplus editing

Jack, October 04, 2005 - 4:21 pm UTC

Add editing and history functionality, similar to Korn shell editing, to sqlplus. I'm fortunate in that I work on HP-UX, which has the ied utility. I go into sqlplus using the "ied sqlplus" command, which gives me shell-like editing and history functions. It's much easier to use than the existing sqlplus editing commands, and I can retrieve commands from history using ESC-k.

Tom Kyte
October 04, 2005 - 8:25 pm UTC

(that is rlwrap to the rest of us ;)

Continuing gripes

Bill C., October 04, 2005 - 4:38 pm UTC

First, kudos to Oracle for making the DB much better and not screwing up too much in the process. I don't know about you, but it seems that up until 8.1.6, my life revolved around waiting for the next patchset. But not since then. 8.1.7.4 was pretty good. 9.2.0.7 is looking pretty stable as well. I especially congratulate the team that finally allowed me to use records and arrays of records in DML and bulk DML. I had been waiting on pins and needles for that since '94.

Now, for a few gripes, some of which should have been improved eons ago.

1) Losing track of the point of error if you dared to use error handling. I understand this is sort-of fixed in 10g if you parse the output of the new backtrace function. But it still shouldn't be so complex to get the current line number, or the line number of the offending code when throwing an error. It should be a built-in, like
l_line := DBMS_UTIL.current_line_num;
OR
l_line := DBMS_UTIL.point_of_error(stacklevel => 1);
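
A sketch of the 10g workaround mentioned above. dbms_utility.format_error_backtrace returns a string like 'ORA-06512: at "SCOTT.P", line 4', which still has to be parsed by hand to extract the line number:

```sql
create or replace procedure p
as
begin
    raise_application_error( -20001, 'something broke' );
exception
    when others then
        -- 10g: prints the error stack with line numbers, e.g.
        -- ORA-06512: at "SCOTT.P", line 4
        dbms_output.put_line( dbms_utility.format_error_backtrace );
        raise;
end;
/
```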


2) Not being able to programmatically determine the name of the currently executing routine. Sorry Tom, last time I tried "whoami" deep within some packaged calls, it just didn't cut it. Having to parse strings to get that info is ridiculous. It should be a built-in, like
l_pkg_nm := SELF.package_name;
l_fqn := SELF.fully_qualified_name;
l_routine := SELF.current_routine;
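
The string parsing being complained about typically looks like this today (a sketch using owa_util.who_called_me, which itself just parses dbms_utility.format_call_stack):

```sql
declare
    l_owner  varchar2(30);
    l_name   varchar2(30);
    l_line   number;
    l_type   varchar2(30);
begin
    -- reports the OWNER, unit NAME and LINE of the caller, but only
    -- the package name, not the individual packaged procedure
    owa_util.who_called_me( l_owner, l_name, l_line, l_type );
    dbms_output.put_line( l_owner||'.'||l_name||' at line '||l_line );
end;
/
```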

3) Being constrained by an old-world 30 char limit to object names. I can live with uppercase only, but I really need 128 or 256 for identifiers.

4) No alphanumeric datatype somewhere between the extremes of 4000 and 4GB. Even lowly Access has this requirement nailed. Can't we have a datatype, or simply expand VARCHAR2, to allow text up to, say 1MB in a column? While we're on datatypes, I type VARCHAR2 so many times in my workday, it would help my weary fingers to introduce a new name for VARCHAR2, say VARC, or STR, which transparently compiles (and so is backwards compatible) to VARCHAR2 under the covers.

5) Everything else is being pushed and encouraged to dump LONG and LONG RAW in favor of CLOB and BLOB, why not Oracle internal tables? They cause me a good deal of grief when attempting certain impact analysis and comparisons across links. I have to jump through a lot of hoops to get around them. (Thanks to Vladimir for the reminder).

6) Dependent item Recycle Bin. Too often I or one of the DBAs gets a little too busy or rattled and drop a table, inadvertently losing grants or triggers that we forgot to check for first. Of course there are processes and tools to prevent that, but it would be sweet if there were a built-in facility to either a) given a table name, produce a drop script that captures the recreate scripts for the automatically-dropped dependent items, or b) a recycle bin that would contain the recreate DDL for items that disappeared when the table dropped. On a similar note, if option (a) were attempted, I'd love to see it generate the commands required to drop and recreate all the dependent foreign keys pointing to the table being dropped.
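
In the meantime, dbms_metadata (9i and up) can capture dependent DDL before the drop. A sketch (each call raises ORA-31608 when no dependents of that type exist, so real code would need to handle that):

```sql
set long 100000
select dbms_metadata.get_dependent_ddl( 'TRIGGER',        'T' ) from dual;
select dbms_metadata.get_dependent_ddl( 'OBJECT_GRANT',   'T' ) from dual;
select dbms_metadata.get_dependent_ddl( 'REF_CONSTRAINT', 'T' ) from dual;
```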

Anto, October 04, 2005 - 5:17 pm UTC

" Do all Full Table Scans at the maximum I/O rate possible for the system,
irrespective of the settings for db_file_multiblock_read_count & db_block_size


Followup:
10gr2 - "self tuning" setting for that.

or just set db_file_multiblock_read_count high?
"

We don't want to reduce the cost associated with Full table scan , thereby favoring more Full Table Scans - The cost should be the same irrespective of this init.ora. But whenever a full table scan is chosen by the optimizer, it has to be done at the max. i/o rate possible for the system, irrespective of the values for db_block_size and
db_file_multiblock_read_count

Tom Kyte
October 04, 2005 - 8:38 pm UTC

but then the cost (he says) should be set appropriately for that Max IO size - meaning - you already got it, if you want to use the max IO size - use it, the cost will be the cost.

I'm confused?

No Longer Names !!!!!

Andrew, October 04, 2005 - 6:16 pm UTC

30 characters is plenty long enough for me. If the developers I support could make longer names they would, and that means I would have to type in names up to 128 chars. NO THANKS! Those of you who use the point-n-click tools to manage your databases do not care how long the names are, but us command line commandos do. In fact, our standards limit table names to 25 characters, so there is enough room to add _PK, _IDX1, etc. We use short name conventions when we make compound names. If you had a longer name space, all you would do is end up complaining that you need more, because of the compounding of longer names into even longer compound names.

I hate the idea of trying to read columns of data about tables, constraints, etc where the names are 128 char wide and wrapped into two or three lines. Ouch.


Compress subpartition?

Quadro, October 04, 2005 - 8:09 pm UTC

It's 2005 and 10G Release 2.

And we still can't compress subpartitions!

Tom Kyte
October 04, 2005 - 9:18 pm UTC

sure you can???  You cannot compress an "individual" subpartition, but subpartitions can be compressed (at the partition level, all of them are - or not)


ops$tkyte@ORA9IR2> CREATE TABLE t1
  2  (
  3    dt  date,
  4    x   int,
  5    y   varchar2(25)
  6  )
  7  PARTITION BY RANGE (dt)
  8  subpartition by hash(x) subpartitions 8
  9  (
 10    PARTITION part1 VALUES LESS THAN (to_date('13-mar-2003','dd-mon-yyyy')) ,
 11    PARTITION part2 VALUES LESS THAN (to_date('14-mar-2003','dd-mon-yyyy')) ,
 12    PARTITION junk VALUES LESS THAN (MAXVALUE)
 13  )
 14  /

Table created.

ops$tkyte@ORA9IR2> CREATE TABLE t2
  2  (
  3    dt  date,
  4    x   int,
  5    y   varchar2(25)
  6  )
  7  COMPRESS
  8  PARTITION BY RANGE (dt)
  9  subpartition by hash(x) subpartitions 8
 10  (
 11    PARTITION part1 VALUES LESS THAN (to_date('13-mar-2003','dd-mon-yyyy')) ,
 12    PARTITION part2 VALUES LESS THAN (to_date('14-mar-2003','dd-mon-yyyy')) ,
 13    PARTITION junk VALUES LESS THAN (MAXVALUE)
 14  )
 15  /

Table created.

ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> insert /*+ APPEND */ into t1
  2  select to_date('12-mar-2003','dd-mon-yyyy')+mod(rownum,4), rownum, rpad('*',25,'*')
  3    from all_objects;

30733 rows created.

ops$tkyte@ORA9IR2> insert /*+ APPEND */ into t2
  2  select to_date('12-mar-2003','dd-mon-yyyy')+mod(rownum,4), rownum, rpad('*',25,'*')
  3    from all_objects;

30733 rows created.

ops$tkyte@ORA9IR2> commit;

Commit complete.

ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> exec dbms_stats.gather_table_stats( user, 'T1' );

PL/SQL procedure successfully completed.

ops$tkyte@ORA9IR2> exec dbms_stats.gather_table_stats( user, 'T2' );

PL/SQL procedure successfully completed.

ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select table_name, blocks, empty_blocks from user_tables where table_name in ( 'T1', 'T2' );

TABLE_NAME                         BLOCKS EMPTY_BLOCKS
------------------------------ ---------- ------------
T1                                    268            0
T2                                    128            0



table is full of subpartitions and they are compressed... 

About subpartition compression

Quadro, October 05, 2005 - 1:34 am UTC

Ok, thank you Tom, it's my fault - I should be more specific.

Consider: you are loading data into a table partition during the month. At the end of the month you do alter table move partition compress - to compress its contents.

With list subpartitions you cannot do that.

SQL> create table test
  2  (
  3   dt date,
  4   lst number
  5  ) partition by range (dt)
  6  subpartition by list (lst)
  7  subpartition template
  8  (
  9   subpartition t1 values (1),
 10   subpartition t2 values (2),
 11   subpartition t3 values (3)
 12  )
 13  (
 14   partition test values less than (maxvalue)
 15  );

Table created.

SQL> insert /*+ append */ into test select created, mod(rownum,3)+1 from all_objects;

51816 rows created.

SQL> commit;

Commit complete.

SQL> alter table test move partition test compress;
alter table test move partition test compress
                                *
ERROR at line 1:
ORA-14257: cannot move partition other than a Range or Hash partition


SQL> alter table test move subpartition test_t1 compress;
alter table test move subpartition test_t1 compress
                                           *
ERROR at line 1:
ORA-14160: this physical attribute may not be specified for a table
subpartition 

Tom Kyte
October 05, 2005 - 7:17 am UTC

just alter table test move subpartition??  The subpartition already has the compressed attribute.  It is inherited from the TABLE or PARTITION attribute


ops$tkyte@ORA9IR2> CREATE TABLE t1
  2  (
  3    dt  date,
  4    x   int,
  5    y   varchar2(25)
  6  )
  7  PARTITION BY RANGE (dt)
  8  subpartition by hash(x) subpartitions 8
  9  (
 10    PARTITION part1 VALUES LESS THAN (to_date('13-mar-2003','dd-mon-yyyy')) ,
 11    PARTITION part2 VALUES LESS THAN (to_date('14-mar-2003','dd-mon-yyyy')) ,
 12    PARTITION junk VALUES LESS THAN (MAXVALUE)
 13  )
 14  /

Table created.

ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> insert /*+ APPEND */ into t1
  2  select to_date('12-mar-2003','dd-mon-yyyy')+mod(rownum,4), rownum, rpad('*',25,'*')
  3    from all_objects;

30702 rows created.

ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> exec dbms_stats.gather_table_stats( user, 'T1' );

PL/SQL procedure successfully completed.

ops$tkyte@ORA9IR2> select table_name, blocks, empty_blocks from user_tables where table_name = 'T1';

TABLE_NAME                         BLOCKS EMPTY_BLOCKS
------------------------------ ---------- ------------
T1                                    268            0

ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> alter table t1 compress;

Table altered.

ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> begin
  2          for x in ( select * from user_tab_subpartitions where table_name = 'T1' )
  3          loop
  4                  execute immediate 'alter table t1 move subpartition ' || x.subpartition_name;
  5          end loop;
  6  end;
  7  /

PL/SQL procedure successfully completed.

ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> exec dbms_stats.gather_table_stats( user, 'T1' );

PL/SQL procedure successfully completed.

ops$tkyte@ORA9IR2> select table_name, blocks, empty_blocks from user_tables where table_name = 'T1';

TABLE_NAME                         BLOCKS EMPTY_BLOCKS
------------------------------ ---------- ------------
T1                                    128            0
 

Followup ...

Clemens, October 05, 2005 - 3:13 am UTC

@Tom:

we do allow for different languages
plsql
C
java --> yes, but you still need PL/SQL as foundation. But that's fine with me. Now to the python, perl, ... whatever integration. ;-)

and in 10gr2 - CLR languages on windows

So what about mono support on the *[IU]X platforms?

@Andrew from m

Short column/table names, why not. But what about finding "artificial" names for objects such as foreign keys, where it would be nice to know the relationships between tables _and_ columns just by reading the name?

As a side note: I'm using both "point'n'click" (for DB model design) and command line tools, and I'm well aware of the misuse of long names. Which brings us to another issue:



How about tab completion (such as in bash, zsh, etc.) in sqlplus for object names, etc.? Don't think that rlwrap helps you there. [1]

C.

[1] Workaround for me right now is vim with dbext plugin.

Re: Queue DDL statements.

Sai, October 05, 2005 - 4:16 am UTC

Connor,

Thanks for your tip. It seems this feature "DDL_WAIT_FOR_LOCKS" was added in 10g release 1. But, I can't get it to work, any clues?

SQL> alter session set DDL_WAIT_FOR_LOCKS=true;

Session altered.

SQL> alter table test add sai_ddl number;
alter table test add sai_ddl number
*
ERROR at line 1:
ORA-00054: resource busy and acquire with NOWAIT specified


SQL> show rel
release 1001000200

Thanks,
 Sai. 

Tom Kyte
October 05, 2005 - 7:37 am UTC

It is an unimplemented feature that was erroneously documented in 10gr1 and removed in 10gr2 (from the docs)

Alberto Dell'Era, October 05, 2005 - 5:18 am UTC

> It seems this feature "DDL_WAIT_FOR_LOCKS" was added in 10g
> release 1. But, I can't get it to work, any clues?

metalink - doc id 660901.999 "DDL_WAIT_FOR_LOCKS=TRUE has no effect" seems to offer a possible interpretation:

<quote>
Got response from the TAR and Oracle told me the following:

"It has been confirmed that this is a documentation bug. This feature has not yet been implemented and it is not being implemented in the next release of
Oracle either."
</quote>

ddl_wait_for_locks

Connor, October 05, 2005 - 5:36 am UTC

bug 4254209

From the description:

"The parameter "ddl_wait_for_locks" is not used for anything in the code"

Charming...

Cheers
Connor

ddl_wait_for_Locks

Connor, October 05, 2005 - 5:40 am UTC

Sorry - should have also added some assistance.

You can simulate this with something like the below (untested):

declare
    n number := dbms_utility.get_time;
begin
    loop
        begin
            execute immediate 'alter table ...';
            exit;
        exception
            when others then
                if sqlcode = -54 then
                    -- ORA-00054 resource busy: keep retrying for 60 seconds
                    if dbms_utility.get_time - n > 60*100 then
                        raise;
                    else
                        dbms_lock.sleep(0.1);
                    end if;
                else
                    raise;
                end if;
        end;
    end loop;
end;

which will repeatedly try the DDL in 1/10th second intervals for up to 60 seconds.

hth
Connor

Actually for me, most things have already been mentioned

Bob, October 05, 2005 - 6:58 am UTC

I actually like most of Oracle!
But the things I'd change would be to simplify some of the things that I've needed to do regularly within commands or packages.

alter table t rebuild online tablespace x;

instead of the dbms_redefinition package. Have it auto-rebuild the indexes in place. Simplify it to just do that.
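
For comparison, the online rebuild that single command would replace looks roughly like this today (a sketch of 9i/10g dbms_redefinition, assuming an interim table INT_T has already been created in the target tablespace):

```sql
begin
    dbms_redefinition.can_redef_table(    user, 'T' );
    dbms_redefinition.start_redef_table(  user, 'T', 'INT_T' );
    -- indexes, grants, triggers etc. must be recreated, or (in 10g)
    -- copied over with dbms_redefinition.copy_table_dependents
    dbms_redefinition.finish_redef_table( user, 'T', 'INT_T' );
end;
/
```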

SQLPlus grid results. 90% of the reason people use Toad or PLSQL developer is just to issue queries when developing. It's way easier to look at a table with 50 columns in those tools than sqlplus.

30 char limit for object names. Too small today.

re: improved SQL*Plus

A reader, October 05, 2005 - 8:05 am UTC

If all these wishes for an enhanced SQL*Plus get implemented, maybe the program could then be renamed "UFI" (user friendly interface)

;-)

oops


I'll add my vote

Heath, October 05, 2005 - 9:31 am UTC

Just agreeing with the earlier comments that Raptor would recognize the fact that database developers have their PL/SQL source code in a version control system, too. Let us check those files out, edit the text file locally, and then be able to compile and debug based on that local file. Expecting the user to just pull the source from the data dictionary and edit it doesn't cut it. In a collaborative development environment, I can't trust the source in the database. I do trust the code in the source code system.

For Robert, the free JDeveloper doesn't allow this either. It expects you to edit from source in the database. (If I'm wrong, feel free to tell me how to do this.)

Please let me not have to develop using PLSQL Developer with its substandard debugger simply because that product allows me to work with local files. I love the JDeveloper PL/SQL debugger and would be pleased to be able to use the JDeveloper IDE (or similar) to do my development as well.

db_file_multiblock_read_count parameter

Anto, October 05, 2005 - 10:33 am UTC

"Do all Full Table Scans at the maximum I/O rate possible for the system,
irrespective of the settings for db_file_multiblock_read_count & db_block_size



Followup:
10gr2 - "self tuning" setting for that.

or just set db_file_multiblock_read_count high?
"

We don't want to reduce the cost associated with Full table scan , thereby
favoring more Full Table Scans - The cost should be the same irrespective of
this init.ora. But whenever a full table scan is chosen by the optimizer, it has
to be done at the max. i/o rate possible for the system, irrespective of the
values for db_block_size and
db_file_multiblock_read_count


Followup:
but then the cost (he says) should be set appropriately for that Max IO size -
meaning - you already got it, if you want to use the max IO size - use it, the
cost will be the cost.

I'm confused?

"

Hi Tom,

Here is my understanding of the parameter - db_file_multiblock_read_count

Correct me if I am wrong.

If db_file_multiblock_read_count is set too high (say to 128 or 256 or more), more and more SQL statements that were using indexes before will start using full table scans, since increasing the value of this parameter reduces the cost of a FTS.

If the parameter is set too low (say to 4, 8 or 16), then more and more SQL statements that were doing full table scans before will start using indexes, since the low value makes the cost of a FTS high. But even with a low db_file_multiblock_read_count there will be a few statements that still use a full table scan, and they will do it at a maximum I/O rate of db_file_multiblock_read_count * db_block_size, which will be low. Ideally, those statements should do the FTS at the maximum I/O rate possible for the system instead.

Tom Kyte
October 05, 2005 - 11:40 am UTC

but the cost of the full scan at the MAX IO SIZE is the cost of the full scan at the MAX IO SIZE.

What you are really asking for is a parameter to set the "full scan cost", not a parameter to muck with the IO size.


You really DO reduce the cost of the fts by increasing the multi-block read count see -- you really do, so you either want the cost we are going to use or not.


you set it to the max io size (settings above that don't count, it doesn't care if you set it really high, it adjusts) and that -- that is the COST of a full scan.


You have two different concepts going here. Perhaps you want to look at system statistics (so we know the cost of multi versus single block IO) and the optimizer_index_* parameters.
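
A sketch of the system statistics approach (9i and up): gather timings over a representative workload so the optimizer learns the real relative cost of multiblock versus single block reads:

```sql
exec dbms_stats.gather_system_stats( 'START' );
-- ... run a representative workload for a while ...
exec dbms_stats.gather_system_stats( 'STOP' );

-- the gathered values (sreadtim, mreadtim, mbrc, ...) end up here
select sname, pname, pval1 from sys.aux_stats$;
```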

Good thread here

Jim Tommaney, October 05, 2005 - 10:49 am UTC

1.) If this can be done and not impact concurrency:

create sequence seq_1;
alter table t_1
add constraint sk_1 sequence (c_1 )
references seq_1{.last_number};

Anyone who's dealt with thousands of sequences and hundreds of developers will appreciate the documentation aspect of this.

2.) I agree with the non-context-switching-function-like-thing suggested by Gary from Sydney. This would go a long way toward cleaning up some really tangled string manipulation.

3.) Cursor_sharing (nn ) approach, get sharing behavior when the number distinct literal values goes above the parameter value.

NAME DESCRIPTION
cursor_sharing_threshold re-write literal value to bind when literal exhibits > nn distinct values


Thanks - Jim T


Yikes

Scot, October 05, 2005 - 10:57 am UTC

This is a great thread, and there are some good suggestions here. But also some really bad ones, I trust that close scrutiny will be paid to all comments :)


Tom Kyte
October 05, 2005 - 11:46 am UTC

;) indeed, I have my own filters.

User, October 05, 2005 - 11:34 am UTC

I would like to get an error message like "Object does not exist" while creating a synonym, if the object does not exist.
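
To illustrate the complaint: the synonym creates cleanly even when its target doesn't exist, and the error only surfaces at use time:

```sql
create synonym s for no_such_table;   -- succeeds, no warning
select * from s;                      -- fails only now, with ORA-00980
```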

column specific errors (part2)

Igor, October 05, 2005 - 11:38 am UTC

Yes, sorry, I wrote that off the top of my head. Actually, a 1401 with a lot of values is sometimes not easy to track down:
SQL> insert into t values ( 'ABC' );
insert into t values ( 'ABC' )
                        *
ERROR at line 1:
ORA-01401: inserted value too large for column

 

more on binds

Niall Litchfield, October 05, 2005 - 11:39 am UTC

wow - this thread grew a bit in the last few days...

Not sure if this belongs here or in a blog entry.

"Niall,

you won't lose friends by expressing lucid opinions based on your experiences... "

Hey, I know that. Just overstating again.

"But on the bind thing - you and I see opposites I think. Maybe - when I come to
the UKOUG - you can sit in on my "all about binds" session "

It's on my agenda already. I don't think we are actually on opposites (at least as far as understanding goes).

"
the first time around, "where pk = 10" is not any different from "where pk =
:b1"

Even on the 1000th time, "where pk = 10" is not any different from "where pk =
:b1"

However, you don't tend to submit "where pk = 10" 1000 times, you tend to do it
once and it must be compiled (hard parsed)


so, it is the difference between:

where pk = 10
where pk = 11
....
where pk = 10000
where pk = 10001
...
where pk = .............


and

where pk = :b1



Think about a developer writing a SUBROUTINE for each unique set of inputs and
calling them: "

What you describe is how it *is* in Oracle; given how Oracle works, binds are just so, so important. The fact is, though, that developers, and more particularly SQL generators, have a real hard time getting binding right. I guess what I am saying is that, given the nature of apps, I'd like Oracle to behave like CURSOR_SHARING=FORCE by default (only without the bugs :( ), i.e. that writing code like

'select col from tab where pk = '||x (i.e. string concatenation)

resulted in nice straightforward sharable code. Microsoft's SQL server has done this for a while now.

I guess what I am probably saying is that developers and SQL generators don't and won't bind (and maybe we can't teach them better), so I'd like it if Oracle didn't suffer so much as a result.
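
The behavior being asked for can at least be switched on today, per session or instance wide (a sketch; FORCE rewrites literals into system generated binds):

```sql
alter session set cursor_sharing = force;

select * from emp where empno = 7369;
select * from emp where empno = 7499;

-- both now share one cursor in v$sql, rewritten as something like:
--   select * from emp where empno = :"SYS_B_0"
```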

Incidentally, I agree with Connor: for some reason I can easily see people persuading developers to insert hints to make up for their lack of proper coding. I will never get that mentality.

Tom Kyte
October 05, 2005 - 11:53 am UTC

MS SQL Server doesn't have the concept of shared sql at all. It cannot be doing it "nicely" :)

exp/imp specifics & LOBs

Igor, October 05, 2005 - 11:45 am UTC

Even in 9.2, after quite a few releases with LOBs, when you change the tablespace during import, LOB storage is not remapped automatically the way "normal" storage for tables and indexes is. So you have to use an indexfile, edit it, create the objects, import, etc.




Tom Kyte
October 05, 2005 - 11:54 am UTC

data pump. have you looked at that in 10g.

Follow up for Tom and/or Connor

Robert, October 05, 2005 - 11:48 am UTC

Hello Tom or Connor,

Connor said the following in his list of suggestions...

=========================
3) "10046 level 16"

logical IO's dumped to trace file. (I know there are other means of doing this
but be nice to have it all under 10046)
=========================

Could you please explain exactly how we can trace logical I/O's (8i and 9i).

Thanks!

Robert.

not using binds in MS SQL Server

Jim, October 05, 2005 - 12:03 pm UTC

MS actually would prefer you use binds for MS SQL Server. (I suspect they would prefer you not use binds for Oracle <grin>.) If you look at the SQL Server API, it does prefer that you use binds. Years ago a colleague of mine wrote a driver for MS SQL Server (prior to ODBC existing) and used binds. It was a lot faster than not using binds.

The code generators that don't use binds have lazy or ignorant developers. The recommendation of the use of binds hasn't changed since at least Oracle 6, probably farther back than the birth of some of the developers!

Anto, October 05, 2005 - 1:37 pm UTC

If we have enabled system stats, will the optimizer use the mbrc from that or will it be taken from db_file_multiblock_read_count of init.ora( if this parameter is set to a lower value in init.ora) ?

Tom Kyte
October 05, 2005 - 1:51 pm UTC

system statistics tell the optimizer how expensive multi-block reads are and single block reads are.


db_file_multiblock_read_count is used to perform the IO.

system stats do not "set" the multiblock read count, they report how long the multi block reads take.

Follow up on Connor's comments

Robert, October 05, 2005 - 2:45 pm UTC

Tom,

Connor said the following in his list of suggestions...

=========================
3) "10046 level 16"

logical IO's dumped to trace file. (I know there are other means of doing this
but be nice to have it all under 10046)
=========================

Could you please explain exactly how we can trace logical I/O's (8i and 9i).

Thanks!

Robert.



Tom Kyte
October 05, 2005 - 3:19 pm UTC

via undocumented events.

all about tracing

Alberto Dell'Era, October 05, 2005 - 3:02 pm UTC

1) having in the STAT lines (and so in tkprof) the CBO estimates of the cardinalities, cost, temp space etc - basically what's in v$sql_plan.

Getting the real plan estimates by using "explain plan" is currently next to impossible - you have to reproduce the actual environment at parsing time (hard or impossible, e.g. think about workarea_size_policy=auto), and you would have to know the bind variable values at bind variable peeking time (hard or impossible, not in the trace ... oh yes, I was about to forget, explain plan doesn't peek ;).
The only feasible way is to look at v$sql_plan, but the plan may have been invalidated or changed in the meanwhile, you should identify the correct child cursor, etc.

2) having an API to dump the STAT lines at will, not only at cursor closing time (that may never happen, think about PL/SQL cached cursors, or JDBC 3.0 cursor caching, for app using a connection pool or simply never closing the connection).

3) I "vote" for Connor's idea to have "logical IO's dumped to trace file". Excellent for statements that have the accessed blocks almost always cached in the buffer cache, eg that join a table to some lookup tables.

4) nice to have: tkprof using the same format of dbms_xplan.

Tom Kyte
October 05, 2005 - 3:23 pm UTC

1) what about workarea_size_policy auto??? that it is on and the pga set to "x" is what decides the plan. It is rather deterministic.

binds are in the trace if you ask for them. and you would explain the query with the binds replaced by literals.

I find it is in general not near to impossible - but almost always obtainable - usually very easily obtainable (it would only be otherwise if the application did something to change its environment).....




Alberto Dell'Era, October 05, 2005 - 4:36 pm UTC

> 1) what about workarea_size_policy auto???

And how can you be sure to be able to reproduce the PGA utilization at the moment of parsing ? Maybe the statement got parsed three days ago at 02:12 am ? Maybe at that time a series of batch session started, and allocated "most" of the PGA ?

> binds are in the trace if you ask for them.

What if the plan got parsed before you enabled tracing, or was parsed by another session 1 millisecond before you (soft) parsed in your traced session?

> you would explain the query with the binds replaced by literals.

And can you be 100% sure that the CBO will generate the same plan in that case? Say you don't know that the 9i Thin driver doesn't peek; you replace the binds with literals and get a different plan.

But even assuming that this isn't an issue, and that you could theoretically reproduce the PGA utilization, the bind variable values at parsing (execution) time, and the environment: how can you be sure that you won't make a mistake? Will you trust your generated plan 100%?

If that were in the STAT lines - no uncertainty, you get the CBO estimates, in zero time. No errors, no mistakes possible - everything is there to see and diagnose.

Tom Kyte
October 05, 2005 - 8:37 pm UTC

1) and do you have an example where it makes a difference? :)

what matters is

a) workarea was auto
b) pga aggregate target was set

Have you seen different plans for the same query? What changes is the amount of memory given to a workarea....


true about the binds.



But - if the plans were "not" the same - it would be extremely apparent as well (and fairly rare).


The stat lines are reality - the guesses - that only comes from explain plans. You would not want to do an explain plan per parse/execute.

A couple more suggestions

Justin, October 05, 2005 - 5:15 pm UTC

1) The ability to grant privileges to a particular user or role for all objects of a particular type in a particular schema, i.e.

GRANT SELECT ANY TABLE IN schema_owner TO read_only_role
GRANT EXECUTE ANY PROCEDURE IN schema_owner TO application_role

You can get the same effect today with a DDL trigger that grants privileges on the newly created objects, but having Oracle automatically handle this sort of maintenance would be nice.
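
The DDL trigger workaround is not entirely trivial, because a DDL trigger cannot issue the grant itself (it would raise ORA-30511); the usual sketch defers the grant via dbms_job (schema_owner and read_only_role are placeholder names here):

```sql
create or replace trigger auto_grant
after create on schema_owner.schema
declare
    l_job number;
begin
    if ora_dict_obj_type = 'TABLE' then
        -- the job runs just after the triggering transaction commits
        dbms_job.submit( l_job,
            'execute immediate ''grant select on schema_owner.' ||
            ora_dict_obj_name || ' to read_only_role'';' );
    end if;
end;
/
```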

2) A SQL*Loader "create table" option. If the data files are on the server, I can obviously use an external table, but if I'm doing a load from a client machine, it would be nice to avoid having to duplicate information in my control file into my CREATE TABLE statement.

3) You can't MERGE into a workspace enabled table (Workspace Manager).

Events for these already exist

A reader, October 05, 2005 - 6:32 pm UTC

"2) having an API to dump the STAT lines at will, not only at cursor closing time"
"3) I "vote" for Connor's idea to have "logical IO's dumped to trace file"."

Separate events for both of these already exist. You can set multiple events in the same session and they will write into the same trace file (not that you would do this in any production environment).

Tom Kyte
October 05, 2005 - 8:38 pm UTC

but events don't necessarily count - I talk about two of them (10046 and 10053), but wish I didn't have to - and with 10g, I can stop talking about 10046...

Followup on IAN's comments

Madhava Reddy, October 05, 2005 - 9:37 pm UTC

The column name is not being specified for number datatype:

madhava@MADHAVA>drop table test
2 ;

Table dropped.

madhava@MADHAVA>create table test(x number(3,2),y varchar2(2));

Table created.

madhava@MADHAVA>insert into test(x,y) values(123.23,'xx');
insert into test(x,y) values(123.23,'xx')
*
ERROR at line 1:
ORA-01438: value larger than specified precision allowed for this column


madhava@MADHAVA>insert into test(x,y) values(1,'test');
insert into test(x,y) values(1,'test')
*
ERROR at line 1:
ORA-12899: value too large for column "MADHAVA"."TEST"."Y" (actual: 4, maximum: 2)


madhava@MADHAVA>begin
2 insert into test(x,y) values(123.23,'xx');
3 end;
4 /
begin
*
ERROR at line 1:
ORA-01438: value larger than specified precision allowed for this column
ORA-06512: at line 2


madhava@MADHAVA>begin
2 insert into test(x,y) values(1,'test');
3 end;
4 /
begin
*
ERROR at line 1:
ORA-12899: value too large for column "MADHAVA"."TEST"."Y" (actual: 4, maximum: 2)
ORA-06512: at line 2

Will this be done in the next release?

"but events don't necessarily count"

Connor, October 05, 2005 - 9:53 pm UTC

which is precisely why 'alter session' is a terrible terrible privilege to grant to anyone in a production system

Re: Follow up on Connor's comments

Sai, October 05, 2005 - 10:52 pm UTC

Robert,

To get the dump of logical I/O's performed for any sql statement, you can use 10200 event in Oracle 8i. But you can check v$bh to see what buffer blocks are cached and x$bh to see which buffers were touched when.

Connor, is that the event you meant to get logical I/O dump?

Thanks,
Sai.

"but events don't necessarily count"

A reader, October 06, 2005 - 12:10 am UTC

How often are you going to do this? If there were a reason that 10046 is not enough, then maybe in development or even in a performance test lab.

I just see the need for this as being sooooo remote, I ask why having this as an event is not "good enough"??? 99% of the time, the database does what it was meant to do and it does it extremely well. I am just happy that Oracle even gives you the ability to enable an event that will dump out this detailed information (granted, it would be nice if Oracle was a little more public about some of the events outside 10046 and 10053).
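For reference, setting a couple of events in one session looks roughly like this (event numbers and levels here are the commonly circulated, undocumented ones - illustrative, not official):

```sql
-- extended SQL trace including binds and waits
alter session set events '10046 trace name context forever, level 12';
-- optimizer decision trace for hard parses in this session
alter session set events '10053 trace name context forever, level 1';

-- ... run the statements of interest; both dumps go to the same trace file ...

alter session set events '10046 trace name context off';
alter session set events '10053 trace name context off';
```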

Just my $.02

Tom Kyte
October 06, 2005 - 7:41 am UTC

should there be a reason 10046 is not enough?

Absolutely - every developer should be using it every day and in every way. dbms_monitor in 10g takes care of it for us in a documented, accessible, safe fashion.


Events are not documented, will not be documented as far as I know. That is why they are not good enough.

online rename datafiles

steen bartholdy, October 06, 2005 - 5:48 am UTC

I would like to:
alter database rename file X to Y *online*;
preferably for all files: datafiles (for capacity reasons),
and for ease of use, controlfiles too!
Backup online,
create a copy online,
but switch online - naaah,
or have I missed something here?


Tom Kyte
October 06, 2005 - 7:53 am UTC

alter database rename file X to Y *online* ;

that doesn't make sense??


'switch online'?? don't understand.

Adding constraints/Columns etc.

Bipul, October 06, 2005 - 6:56 am UTC

My 2c.

1. Whenever a table structure is changed [e.g. adding a new column], this invalidates all the dependent stored PL/SQL and also all the dependent cursors in the shared pool. In an OLTP environment, where thousands of transactions/queries take place every minute across hundreds of sessions, this causes some issues. For example, most of the sessions will go into "library cache pin" or "library cache lock" waits and the database appears to hang. I have noticed this from 8.1.7 to 10.1.0.4. This is not very helpful in providing 24x7 uptime. Oracle support haven't provided any workaround for this so far.

2. Adding a constraint locks the parent table. Again, this causes issues in high transactional OLTP system. The lock time varies with the size of parent table. Often this results in some "timeouts".

It would be useful if Oracle could address these issues as part of the "high availability" features.

The other bit I would like to see improved is Oracle support. I don't think [this is completely based on my experience] that Oracle Support puts a lot of effort into recreating and diagnosing customer issues. Often they tell us to recreate it in our environment, whereas the problem can be easily recreated anywhere, i.e. in Oracle support's own environment. At times it seems like they are after "closing the ticket" rather than "fixing customer issues". [The last statement assumes that asktom.com is not part of Oracle support. In my opinion, to date this is the most useful site for fixing Oracle issues, gaining insight into them, and learning to do things so that we don't bump into issues.]

Thanks
bipul




Tom Kyte
October 06, 2005 - 7:59 am UTC

1) because upgrading an application is a hard nut to crack - think about it, you are patching or upgrading/updating an application - you are CHANGING it.

2) same comment - you are upgrading.




As for support - I agree with their approach. If you call and say "hey, my 500,000 line program fails - why" - they can do nothing.

If you call and say "hey, run this and what do you see", they can do something.

Developing a test case in most all cases is paramount - you see me do it all of the time, you see me demand it all of the time as well.

no, this is not part of support as it is.

cursor_sharing

Igor, October 06, 2005 - 6:58 am UTC

And sometimes it would be handy to have "finer grained" cursor_sharing that could be handled when no application code could be changed and say "Hey, this part I want with FORCE, rest is OK". Based on tables or something similar...

BLOBs

Igor, October 06, 2005 - 7:06 am UTC

"data pump. have you looked at that in 10g."

OK, but the systems that I touch are still mostly 9i.
And correct me if I'm wrong - BLOB was, I think, introduced in 8.0, and in 9.2 this relatively basic thing still does not work.

Also, in an exp/imp context it is a bit unusual that I could have problems exporting with exp 9.2.0.5 from a 9.2.0.4 database.
I know the Metalink compatibility matrix exists and all that, but it's a bit counter-intuitive.



Tom Kyte
October 06, 2005 - 8:01 am UTC

exp/imp have no techniques for remapping tablespaces - unless you have the relatively simple "single segment object", the facility just didn't exist.

And I was just pointing out that what you ask for was added a couple of years ago....

CBO decision

Igor, October 06, 2005 - 7:09 am UTC

Sometimes it would be nice to have the CBO say how "far" it is from changing the execution plan, or whether the second-best plan is pretty close...

Alberto Dell'Era, October 06, 2005 - 7:53 am UTC

> 1) and do you have an example where it makes a difference? :)

Not specific to pga_aggregate_target; anyway I've seen many plans in tkprof that I wasn't able to reproduce using explain plan. Call it my fault, perhaps I missed some "alter session" of the app, perhaps we were using 9i thin drivers and I wasn't clever enough to know that 9i thin drivers don't peek, whatever ...

Having the estimated cardinalities, cost, temp space, etc in the STAT lines would leave no space to guessing, and would mean, *at the very least*, that you wouldn't need to spend time to reproduce the environment for explain plan.

> You would not want to do an explain plan per parse/execute.

You need not - the info is already there, in the kernel structures read by v$sql_plan, since hard parse time. You would just need to dump it into the trace file.

Followup on your comment re: constraints/structure change

Bipul, October 06, 2005 - 8:51 am UTC

<TK>
1) because upgrading an application is a hard nut to crack - think about it, you
are patching or upgrading/updating an application - you are CHANGING it.

</TK>

Upgrading an application is slightly different. We plan it/get some downtime etc. But in reality what happens is

1. Managers/designers and stakeholders read about a development methodology called RAD [rapid application development].
2. They want to develop something using RAD very quickly with few tables. Get application online, and keep adding features to it. Adhoc, whenever they feel like.
3. Some of the enhancement/features require change in existing table, which has lots of rows, used by many stored procedures/cursors etc. And then we [DBA] hit this problem when we have to modify the table or add constraints etc.

I think we [the IT industry in general] are spending less time on design.

<TK>
As for support - I agree with their approach. If you call and say "hey, my
500,000 line program fails - why" - they can do nothing.

If you call and say "hey, run this and what do you see", they can do something.

Developing a test case in most all cases is paramount - you see me do it all of
the time, you see me demand it all of the time as well.
</TK>

I agree that developing a test case is important, but sometimes the problem is blindingly obvious and you don't need a few GB of data to reproduce it. For example, the "library cache pin"/"library cache lock" issue in a high-transaction OLTP database can be recreated in support's own environment.

I should add that sometimes [at least once that I have noticed] Oracle changes functionality without putting that in the release notes. For example, XDB in 10g doesn't support unicode characters > 255, and it does in 9i. This wasn't listed anywhere in the Oracle documentation. Support suggested converting the database character set to UTF-8. Sure, we can do that - that's pretty easy for a > 1TB database with loads of VARCHAR and CLOB columns that requires 24x7 uptime!

Anyway, this thread is not about griping about support issues, so I should stick to product features.

How about the ability to rename a schema? This would be useful in upgrades - a sample upgrade process could look like:

1. Clone schema,
2. Upgrade the cloned schema
3. Rename the original schema to something else.
4. Rename the cloned schema to the original schema name.

It will save us having to restore the original schema if the upgrade fails.
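A sketch of what that flow might look like if a (purely hypothetical) RENAME SCHEMA command existed - none of this syntax is valid in any Oracle release, the schema and script names are made up:

```sql
-- hypothetical syntax, for illustration only
create schema app_v2 as copy of app;      -- 1. clone the schema
@upgrade/run_all.sql app_v2               -- 2. upgrade the clone
rename schema app to app_old;             -- 3. set the original aside
rename schema app_v2 to app;              -- 4. swap the upgraded clone in
-- on failure at step 2: just drop app_v2, no restore of app needed
```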


Thanks
bipul

Tom Kyte
October 06, 2005 - 9:26 am UTC

as for the upgrade -

IF this is a mission_critical_system
AND downtime is to be avoided
THEN
you need to have a 'mainframe' mentality, for lack of a better term.
END IF



surprised no one else mentioned a schema rename yet ;)

Support

Connor, October 06, 2005 - 9:09 am UTC

Don't want to turn this into an anti-support rant because

i) compared with other product support knowledge bases, I rate Metalink as the benchmark by which all others pale

ii) being a support person must be a thankless job

but as a user of support for around 12 years, I would contend the quality has dropped significantly over the past few years.

Hopefully my book content lends credibility to my statement that I'm a real stickler for providing small, easy-to-run, cut-and-paste test cases - something I do in my TARs - but even then there is so much drive by support analysts to

a) close your TAR asap (I assume they're measured on turnaround time)

b) resist any possibility that what you've described is a bug.

Cheers
Connor

Is it possible to move datafiles online ?

steen bartholdy, October 06, 2005 - 9:20 am UTC

I believe that in order to move a datafile you have to :
tablespace Tab offline;
alter database rename file X to Y;
tablespace Tab online;
If this is true then I want to move datafiles without taking the tablespace offline.
And I would like to move controlfiles online as well.
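For the record, the offline sequence being complained about looks roughly like this (tablespace and file names are illustrative, and the file copy happens at the OS level):

```sql
alter tablespace tab offline;

-- copy /u01/oradata/x.dbf to /u02/oradata/x.dbf at the OS level here

alter database rename file '/u01/oradata/x.dbf'
                        to '/u02/oradata/x.dbf';

alter tablespace tab online;
```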


Tom Kyte
October 06, 2005 - 11:50 am UTC

and like I said - to me that doesn't make sense - you have to MOVE the file, meaning the file would be in two locations at once.....

Hints

DaPi, October 06, 2005 - 9:21 am UTC

Any chance of a warning diagnostic from SQL*Plus for incorrectly formatted hints? (Yes, I know, better not to use them, but....) Currently you don't know if the hint made no difference or if you got the format wrong.

Excellent DaPi

A reader, October 06, 2005 - 9:45 am UTC

Of course you only mean the good hints

Like it Dapi

padders, October 06, 2005 - 10:06 am UTC

Perhaps a small section in the DBMS_XPLAN.DISPLAY output (as for predicates) to indicate what hints were applied to which operation etc.

Split user / schema, anyone?

Clemens, October 06, 2005 - 10:38 am UTC

> Tom: surprised no one else mentioned a schema rename yet ;)

I wouldn't mind if you could create (and drop) users without corresponding schemas (sometimes you only need them for logon/security purposes), as seen in Postgres. It gets a bit messy if you've got lots of users and you want to select a certain schema in one of those point'n'click tools.

C.

A reader, October 06, 2005 - 10:38 am UTC

The quality of Oracle support has indeed gone down over the years. The final answer from support in most cases is: upgrade to the next version. In some cases, they don't even care to find out the exact bug we are hitting.

Metalink is quite good in itself, and if I don't get the answer there, I would rather ask on asktom.oracle.com than raise a TAR with Oracle Support.

Context Switch

AndyN, October 06, 2005 - 10:58 am UTC

If I could change something about Oracle it would be the removal of the context switch between the PL/SQL engine and the SQL engine. I have never quite understood why this has to be there. If the two engines were integrated together, couldn't it be removed?

Tom Kyte
October 06, 2005 - 12:04 pm UTC

plsql is a separate vm, just like java.

we did have a request above that seems neat -- sort of a "SQL function" capability.

the overhead of calling sql from plsql, or plsql from sql, is fairly small - the problem is, many people do it 1,000's of times and it ADDS up.

Alberto Dell'Era, October 06, 2005 - 12:30 pm UTC

A schema SYNONYM would be dandy, to be able to repoint a schema as we do with tables.

create or replace schema synonym interface for schema1;

Users point to the "interface" schema and get redirected to schema1. Much more transparent than "alter session set current_schema=schema1"; it could be retrofitted to existing applications with no modifications.

Or, the ability to be able to repoint a set of synonyms atomically and consistently:

create or replace synonym t for schema1.t, u for schema1.u ..

Tom Kyte
October 06, 2005 - 12:48 pm UTC

yeah, let me chime in...


a create schema that is atomic (as it is) but supports everything....

an alter schema to go with it.


so that a group of DDL statements can be done together as an atomic unit of work.

Alberto - you just made this thread TIE that other recent thread and have dethroned it! This is now the current "hot" one :)

Anto, October 06, 2005 - 2:08 pm UTC

Nice to know that a purely technical thread has come to the top. ;-)

Never in his wildest dreams would Alex have thought that his thread would become the hottest one in less than a week.

A reader, October 06, 2005 - 2:21 pm UTC

Some dictionary view to list all the queries currently being parsed by the optimizer - or does V$SQL or some other view already have this info?

Even though parsing typically takes only msecs, there are cases where we had to wait for seconds or even minutes (for the parsing to complete) before seeing the explain plan. This happens when optimizer_max_permutations is retained at its 8i default value of 80,000 even after you upgrade the database to 9i. The problem disappears if you reduce optimizer_max_permutations by 1, to 79,999.



Alberto Dell'Era, October 06, 2005 - 3:52 pm UTC

> you just made this thread TIE that other recent thread

If technical threads draw more attention than political ones, there's still hope for the [Oracle] world ...

Tom Kyte
October 06, 2005 - 6:27 pm UTC

Hey, the other thread *started* technical.

Alberto Dell'Era, October 06, 2005 - 6:26 pm UTC

"alter table exchange SEGMENT including indexes"

Some uses:

1) you want to delete 90% of rows in a table - you just insert-select the other 10% in a twin table (or CTAS it), index, analyze, then exchange the segment(s) back.

Similar to CTAS - drop old - rename, but without having to regrant privileges, recreate triggers, or invalidate dependent cursors and packages.

2) segment reorganization - "alter table move" invalidates indexes, the technique above allows you to (re)build them on the "new" table while allowing concurrent reads of the "old" table.

You may also, say, reorder physically the rows of the table to decrease the clustering factor of one index.

3) 24x7 read interface tables

We currently use this technique:
a) readers read from t
b) we prepare the new data image in the t_part, a table with a single partition identical to t
c) we exchange the table with the partition

NB I've tested that if a reader has a cursor opened on t, it can continue its fetching across the "exchange partition" - it reads the "old" image of the data (perfect read consistency!).

The "exchange segment" would definitely make it easier to manage (no need to know about partitioning, same script to create the twin table, no need to maintain both t and t_part, etc).
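A minimal sketch of the (a)-(c) technique described above, with made-up table names - t_part is the single-partition twin that gets swapped in:

```sql
-- t_part: a twin of t with one catch-all partition
create table t_part (x number, y varchar2(30))
partition by range (x)
(partition pmax values less than (maxvalue));

-- b) prepare the new data image out of sight of the readers
insert into t_part select x, y from staging_source;
-- build indexes and gather stats on t_part here

-- c) swap: readers with cursors already open on t keep fetching
--    the old image of the data across the exchange
alter table t_part exchange partition pmax with table t
  including indexes without validation;
```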

Tom Kyte
October 06, 2005 - 7:15 pm UTC

1) that would be an online redefine with a where clause - but it (as anything) will invalidate cursors, have to - plans are different, different data objects to query.

(so, online redefine with where clause is what you are asking for)

2) that is online redefine as it exists

3) careful, that is imperfect read consistency at best, strange things will happen when the space is reused.


commit other session's transactions

Bobcat, October 06, 2005 - 10:50 pm UTC

How about the ability to (force) commit other session's transactions? We can force rollback other session's transaction by killing the session, but no "force commit" capability.

Tom Kyte
October 07, 2005 - 8:21 am UTC

never, I would never even consider running that one up the chain. No way.

How dangerous is that.

Just extending...

Connor, October 06, 2005 - 11:08 pm UTC

To extend my initial one of "transactional truncate" and the later comments by others about atomic schema operations, I suppose we could generalise this to: a transactional DDL facility. It's something that some other databases can do. It would be nice to be able to do (e.g.)

Task:
- create table
- insert into table select * from ...
- create index on table

and if the 'create index' fails, we rollback to the table never having existed. It would make scripting of application installs very easy


Tom Kyte
October 07, 2005 - 8:23 am UTC

Re: Just extending.

Sai, October 07, 2005 - 4:50 am UTC

I disagree; transactional DDL rollback could compromise database integrity and introduce more failure paths. How often are application installs done compared to the total number of database transactions?

Imagine a scenario where session 1 is running a transactional DDL script and another session is trying to modify data in a table which is being resolved through a public synonym for this user, but the changes start taking place in a table with the same name just created by session 1. This may be avoided by not auto-committing data dictionary modifications after every DDL command, but it's like asking for something to be done the hard way, which can be very easily achieved by other means.

Tom Kyte
October 07, 2005 - 8:35 am UTC

No it wouldn't.

It would increase database integrity and reduce failure paths.




Imagine a scenario where by every night you load two tables with data. You use two schemas to do this (I have just such a situation).

Now, after the data is loaded and validated and indexed and so on in schema1 - you need to repoint two synonyms or recreate two views to point to schema1 (from schema2 where yesterdays data was!)


create or replace synonym t1 for schema1.t1;
create or replace synonym t2 for schema1.t2;


We need those to be atomic, if a failure were to occur in between - that would be "bad", we'd be half pointing to yesterday and half pointing to today.


There are many other examples as well. I'd love to have the ability to run a couple of related DDL statements together as an atomic unit of work - we already have the basic capability to do so:

</code> http://docs.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_6014.htm#sthref5662 <code>

needs to be expanded.
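For those who haven't seen it, the CREATE SCHEMA statement referenced above runs a group of statements as one atomic unit today - it's just limited to CREATE TABLE, CREATE VIEW and GRANT. Something like (table and view names invented for illustration):

```sql
create schema authorization scott
   create table dept_stage
      (deptno number, dname varchar2(14))
   create view dept_v as
      select deptno, dname from dept_stage
   grant select on dept_v to public;
-- if any one of the three statements fails, none of them take effect
```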

How cool is this!

Vladimir Andreev, October 07, 2005 - 9:33 am UTC

That CREATE SCHEMA is so cool! Pity indeed that it only supports tables, views and grants.

My new thing learned about Oracle for today :-)
Is it new in 10gR2?
Never mind, I'll check the docs myself.

Thanks!
Flado

Tom Kyte
October 07, 2005 - 8:15 pm UTC

it goes way back.

A reader, October 07, 2005 - 9:53 am UTC

So many suggestions in here that you need not conduct a brainstorming session again in Oracle corp. before releasing 11? for the new features to be included

Tom Kyte
October 07, 2005 - 8:36 pm UTC

yes, when it settles down (as it is beginning to) I will be developing a list to send on - I have already been asked to

Doh!

Vladimir Andreev, October 07, 2005 - 10:08 am UTC

Turns out CREATE SCHEMA has been around since at least Oracle8i, maybe even 7.3, since it is not mentioned as new in the "Getting to Know Oracle8i" guide:
</code> http://download-west.oracle.com/docs/cd/A87860_01/doc/server.817/a76962/toc.htm#top <code>

It's still cool, though :-)

Cheers,
Flado

David Aldridge http://oraclesponge.blogspot.com, October 07, 2005 - 10:43 am UTC

>> There are many other examples as well. I'd love to have the ability to run a couple of related DDL statements together as an atomic unit of work - we already have the basic capability to do so: <<

On a similar note I submitted an enhancement request for a new partitioned table operation in which a non-partitioned table can be added to a partitioned table as a new partition, kind of like:

i) add partition
ii) exchange partition with non-partitioned table
iii) drop non-partitioned table

... but as a single operation for the purpose of simplicity of recovery from some kind of failure. However this ability to rollback beyond DDL commit points would also be a great advantage.

More about commit other session's transactions

Bobcat, October 07, 2005 - 12:40 pm UTC

It should certainly be a privilege limited to the DBA role. But I really don't see how dangerous it can be, considering that we can rollback a transaction by killing the session.

Tom Kyte
October 07, 2005 - 9:56 pm UTC

rolling back is always safe (as if never happened)

commiting from a distance is ALWAYS dangerous - you have no clue "where they are"

Alberto Dell'Era, October 07, 2005 - 3:01 pm UTC

> 3) careful, that is imperfect read consistency at best,
> strange things will happen when the space is reused.

Ok, but readers' selects end in at most 10 minutes, and after exchanging the partitions, we keep the "old" data, we don't truncate - next refresh happens after 24 hours.

Do you think we're safe here, since we don't "reuse space" while it's being read - or were you thinking about something else ?

TIA

Tom Kyte
October 08, 2005 - 8:47 am UTC

you are *likely* safe if the space isn't reused, that is what I was thinking - if the space is reused....

although there are cases of dropped tables where even when the space isn't reused - there can be issues.

how about alter table read-only

andrew, October 07, 2005 - 9:06 pm UTC

How about an easy way to prevent any of insert/update/delete and maybe even select to a table?

alter table EMP enable/disable insert|update|delete|select(?);

Trying to control it through tablespaces, triggers, grants etc all have their limitations.

Tom Kyte
October 08, 2005 - 8:49 am UTC

what limitation they have that this would not?



Re:how about alter table read-only

Sai, October 07, 2005 - 9:34 pm UTC

You can achieve it through triggers. You can also control DML on any table through object-level privileges for other users.
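A sketch of the trigger approach (the table and the error message are illustrative; this blocks row-level DML but not, of course, SELECT or DDL):

```sql
create or replace trigger emp_read_only
   before insert or update or delete on emp
begin
   raise_application_error(-20001, 'EMP is currently read-only');
end;
/
```

To re-enable DML you would just disable or drop the trigger, which is exactly the create/drop churn the original poster calls messy.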

Alberto Dell'Era, October 08, 2005 - 12:18 pm UTC

Continuing on:
</code> http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:48704116042682#49216618020078

> 1) online redefine with where clause is what you are asking for

Yes, please consider adding the request for a "where clause" on online reorganization into the Wish List.

Better than what I proposed, online redefine automates the index cloning too - cool.

And it should be very easy to implement too - probably all the Oracle developers need to do is add the user-provided where clause to the MV.

Eh, with a bit of additional effort, the developers could also add to dbms_redefinition the automatic management of FKs as well, automating this technique you demonstrated here:
http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:4636779130376#6783811821076 <code>
> 2) that is online redefine as it exists

Correct, thanks - with the automatic management of FKs - would be heaven.

> 3) careful, that is imperfect read consistency at best

Ok, assuming that we don't reuse space and that your worries don't apply (no issues so far from production for my implementations of the "24x7 exchange table"), that's the only thing that would benefit from "exchange SEGMENT" :)

Two additional considerations on "exchange SEGMENT" - I think that
a) it would be easy to implement (a partition is a table in disguise, so the code is probably already there and already tested to destruction in production)
b) it would be a command very simple and very easy to understand, the main characteristics of good tools/bricks.

optional row migration

A reader, October 08, 2005 - 5:19 pm UTC

Allow row movement on non-partitioned heap tables. When creating a heap table, if row movement is specified, then don't migrate rows by leaving a forwarding address. Instead, if the row has to move, immediately move the row to a new address and update any indexes that are pointing to it.

Tom Kyte
October 08, 2005 - 5:50 pm UTC

you mean update all indexes....

I'm not sure I like this. why? because it would "hide" the fact that you are migrating tons of rows. I'd rather

a) find out I had the problem
b) implement a fix for the problem

rather than masking the problem, hiding it - but negatively impacting the performance of modifications without being able to see it very easily.

IOT physical guesses

A reader, October 08, 2005 - 6:08 pm UTC

When creating an index on an IOT, have a "no physical guesses" clause. If "no physical guesses" is requested, then the index won't contain physical guesses.

Physical guesses may be appropriate for DW. However, they're not as useful for busy OLTP. For OLTP databases, dispensing with the physical guesses will make indexes smaller and more efficient.

re: optional row migration

A reader, October 08, 2005 - 6:45 pm UTC

> I'd rather
>
> a) find out I had the problem
> b) implement a fix for the problem

Sometimes there is no good fix to prevent row migration, such as when the initial and final row sizes have a wide variability. Quite common in our database.

> rather than masking the problem, hiding it - but
> negatively impacting the performance of modifications
> without being able to see it very easily.

In general, rows are updated much less frequently than they are read. I'd rather pay the one-time migration price during the update, rather than incur it every time the row is read thereafter.

Again, non-row migration would be optional. The default setting would remain row migration.



Tom Kyte
October 08, 2005 - 8:32 pm UTC

still not convinced it'll make "my" top ten but I hear you.

My wishes - new features for Oracle Database 11g

Zlatko Sirotic, October 09, 2005 - 3:09 pm UTC


A) PL/SQL

1. References between transient objects (instances of objects types) and (then) garbage collector

2. Generic classes (templates, generics) like in Eiffel, C++, Java 1.5
(PL/SQL was modeled after ADA 83, and ADA 83 has generic packages)

3. Multiple inheritance like in Eiffel (inner classes like in Java - no, please)

4. Design By Contract like in Eiffel (C++ / Java 1.4 assert is not enough)


B) Database

Better support for business rules in database

We need "What Not How: The Business Rules Approach to Application Development" (book by C.J.Date).
Rules Manager, a new feature for Oracle Database 10g Release 2, seems like good direction, but Oracle has Oracle Business Rules in Application Server too.
My motto is: "Thou shall not do in the Middle Tier, what thou could have done in the Data Tier" (Toon Koppelaars in "A first JDeveloper project: Choices made, lessons learned", Oracle World 2002).


Regards,
Zlatko Sirotic


... one mo' time ...

Clemens, October 10, 2005 - 4:06 am UTC

Keep coming back to this thread ... ;-)

I did an install [1] of 10g1/g2 SE on one of my linux boxes this weekend and did notice some things.

- why the memory check on 10g2 (with 1GB memory), what's the big difference to 10g1? Yes, I know about the "what's new" chapter, but 1GB???
- why the "bloat" in 10g2. ~ 860 MB on 10g1 vs. 1.2 GB on 10g2
- library hell: libaio for example, was not needed in 10g1, now it screws up my installation process on 10g2. Same goes for hard coded lib paths such as libstdc.so.5 [2]

These are all support issues. What I'm questioning is the concept of bloated software, where everything plus the metaphorical coffee machine is included. Maybe 11g could become more "lean and mean", as 10g1 was/is compared to 9i, at least judging from the size of the install CDs. Reduced memory consumption for small/medium servers would also be nice, since we're at the dawn of server virtualization/consolidation (again) - that is, if Oracle is interested in the mass market of small/medium sized companies.

Aside from this, some more ideas on PL/SQL:

- Synonyms on Public Functions/Procedures inside Packages, anyone? (or did I miss that in the newer releases?)
- Already mentioned: variable IN lists with arrays. Table/Pipelined Functions is nice, but that one would save some coding.
- ALTER PROCEDURE/FUNCTION/PACKAGE MONITOR - so you could find dead/not used code in large applications.

C.

[1] The reason for my installation was to try out the c't db challenge - a web shop with 1G/10G/100G data.

[2] I do only recall the xxxx_so.5 and /usr/lib path, even with LD_LIBRARY_PATH it did not consider the /usr/lib/libstdc++v3 into its relink stage.

perl inside the database

Stefan, October 10, 2005 - 9:43 am UTC

I'd love to have a perl interpreter inside the database, to be able to do something like

CREATE OR REPLACE PERL SOURCE ....


Stefan

Compressed table

Anto, October 10, 2005 - 9:54 am UTC

a) How about table indexes remaining valid after compressing a table - or is that already available in 10g?

b) I remember that, at least in Oracle 8, the optimizer did not take table stats into account for remote tables accessed through a dblink when deriving the explain plan for distributed queries. Is that still the case?



Tom Kyte
October 10, 2005 - 11:17 am UTC

a) with partitioned tables - yes...


ops$tkyte@ORA10G> create table t (object_id, object_name)
  2  partition by range(object_id)
  3  (
  4  partition all_data values less than(MAXVALUE)
  5  )
  6  as
  7  select object_id, object_name from all_objects;
 
Table created.
 
ops$tkyte@ORA10G>
ops$tkyte@ORA10G> create index t_idx on t(object_id);
 
Index created.
 
ops$tkyte@ORA10G>
ops$tkyte@ORA10G> select object_name, object_type, status from user_objects;
 
OBJECT_NAME                    OBJECT_TYPE         STATUS
------------------------------ ------------------- ----------
T                              TABLE PARTITION     VALID
 
T                              TABLE               VALID
 
T_IDX                          INDEX               VALID
 
 
ops$tkyte@ORA10G> alter table t compress;
 
Table altered.
 
ops$tkyte@ORA10G> alter table t move partition all_data update indexes;
 
Table altered.
 
ops$tkyte@ORA10G> select object_name, object_type, status from user_objects;
 
OBJECT_NAME                    OBJECT_TYPE         STATUS
------------------------------ ------------------- ----------
T                              TABLE PARTITION     VALID
 
T                              TABLE               VALID
 
T_IDX                          INDEX               VALID
 



b) the cbo always had the ability to use remote stats?  not sure what you mean. 

Anto, October 10, 2005 - 12:01 pm UTC

a) "the update indexes" option for "alter table .. move" works only for partitioned tables or is it available for ordinary tables as well ?

b) I thought that CBO in Oracle 8, never takes into account statistics on remote tables, for distributed or remote queries. That is the impression I got after opening a TAR, for some related issue, way back in 1999. So I was wrong.

thanks

Tom Kyte
October 10, 2005 - 12:12 pm UTC

a) as stated "with partitioned tables - yes.."....




anto, October 10, 2005 - 12:05 pm UTC

for (b) we were using Oracle Apps that time and hence we may have been using RBO instead of CBO

Thanks for confirming this



Re: how about alter table read-only

andrew, October 10, 2005 - 3:46 pm UTC

> How about an easy way to prevent any of
> insert/update/delete and maybe even
> select to a table?
>
> alter table EMP enable/disable insert|update|delete|select(?);
>
> Trying to control it through tablespaces, triggers, grants etc all have their
> limitations.
>
> Followup:
> what limitation they have that this would not?

Altering the tablespace containing the table to read-only is hardly an acceptable way to control individual tables.
Creating (and dropping) triggers - or adding logic to triggers to do some lookup - is messy and has overhead.
Grants are fine - as long as the app doesn't connect as the table owner.

Just a simple alter table command would be clean and any novice could understand it.

Oracle optimizer, Stats and DB Links

Gary, October 10, 2005 - 7:47 pm UTC

"I remember at least in Oracle 8, Oracle optimizer does not take table stats into account for remote tables accessed through dblink, for deriving the explain plan of distributed queries. Is that the case still now ?"

Tables work fine. Views can be more complex because they don't have stats themselves and (at least up to 9iR2) the remote db doesn't do much in the way of 'resolving' the view to guess stats. Also, the remote db won't tell the local db about any indexes on the tables underlying the view. [Frankly, it would be pretty difficult for Oracle to do any of this. To be comprehensive about it, it would need to tell the originating database the view query, stats and index information for all the tables and non-oracle functions involved etc. And for that to really work, you'd probably need some closer tie-in between the versions of Oracle running on both databases.]

I've got some simple scripts demonstrating the issues :
http://igor-db.blogspot.com/2005/09/views-db-links-and-cbo.html
and
http://igor-db.blogspot.com/2005/09/views-db-links-cbo-and-indexes.html


some of my my wishes ...

Mauro Papandrea, October 11, 2005 - 3:17 am UTC

1) SQL: currently it is not possible to manage db links for other users.
I would like something like this:
create/drop database link ... OWNED BY scott, together with a CREATE/DROP ANY DATABASE LINK privilege

2) the possibility to use column aliases everywhere, not only in the order by clause; it is a simple matter of search/replace that should be very easy to implement and would improve readability very much

3) a RESERVED_PROCESSES init parameter to be used only for the system account (or, even better, with an additional privilege): when, due to some error, the maximum number of processes is reached, even admins are no longer allowed to connect and the db gets kind of frozen

4) the possibility of seeing the values of bind variables on the fly, and the environments of other sessions (there is no way to see if a session has altered some parameters)

5) as of 9i it is possible to remove datafiles when dropping tablespaces; however, "including contents" must be used, and I cannot understand why: it is potentially dangerous.
I'd rather remove this restriction

6) Sqlplus copy command supporting new datatypes

7) update cascade

8) In Metalink, searching with "exact sentence" retrieves many docs where that sentence does not appear

9) better testing: for instance, the master script generated by dbca is written for the Bourne shell but with mixed C-shell syntax: had it ever been run even once, it would have been easily fixed. Also, 9.2.0.7 does not allow you to drop a user, and this is really a big one ...

Best regards



Silly me ...

Clemens, October 11, 2005 - 10:56 am UTC

- ALTER PROCEDURE/FUNCTION/PACKAGE MONITOR - so you could find dead/not used code in large applications.

=> is already solved through:

SELECT object_name, timestamp
FROM dba_objects
WHERE object_type LIKE 'PACKAGE%'
ORDER BY timestamp;

C.

Tom Kyte
October 11, 2005 - 3:40 pm UTC

that doesn't tell you when it was last used.

Re: Clemens comments

andrew, October 11, 2005 - 12:19 pm UTC

Well if it's a dynamic call, then monitoring on the package/function/proc/sequence would still be useful.

Read locks in Oracle!

Brian Camire, October 11, 2005 - 1:13 pm UTC

Seriously, it would be helpful to have a more flexible row locking mechanism -- sometimes SELECT...FOR UPDATE is too restrictive.

Specifically, it would be helpful to have a way to lock a row that blocks other transactions that try to update it, but does not (unlike SELECT...FOR UPDATE) block other transactions that try to lock it in the same (or a "compatible") way.

It would be like saying:

"I'm depending on these rows not to change for the rest of this transaction. It's OK to let other transactions do the same, but not to let other transactions update these rows."

It might look like SELECT...IN <lock mode> MODE, where <lock mode> might correspond (in meaning, if not name) to the lock modes supported by LOCK TABLE.

SELECT...IN EXCLUSIVE MODE would be synonymous with SELECT...FOR UPDATE.

I know you can emulate this using DBMS_LOCK, but it would be easier if it were more declarative.
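For what it's worth, the DBMS_LOCK emulation mentioned above might look roughly like this - a sketch only, where the lock name, the one-lock-per-row convention and the error handling are all made-up assumptions, not anything Oracle provides out of the box:

```sql
declare
  l_handle  varchar2(128);
  l_status  integer;
begin
  -- map the row of interest to a named user lock (naming scheme is made up)
  dbms_lock.allocate_unique('EMP_ROW_7369', l_handle);

  -- S (share) mode: many transactions can hold it at once,
  -- but a cooperating writer requesting X (exclusive) mode will block
  l_status := dbms_lock.request(lockhandle        => l_handle,
                                lockmode          => dbms_lock.s_mode,
                                release_on_commit => true);
  if l_status <> 0 then
    raise_application_error(-20001, 'lock request failed with status ' || l_status);
  end if;
  -- ... proceed, knowing cooperating updaters will wait until commit ...
end;
/
```

Note this only works if updaters cooperate by requesting the same lock in X mode before updating - a plain UPDATE is not blocked - which is exactly why a declarative SELECT...IN SHARE MODE would be nicer.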

Dependencies

anto, October 11, 2005 - 2:01 pm UTC

To show the dependencies(referenced_name in user_dependencies) even if the objects are used within execute immediate statements in procedures/packages

Tom Kyte
October 11, 2005 - 3:52 pm UTC

that is impossible. think about it.


execute immediate string;


execute immediate string1 || ' emp ' || string2;


and if the string is "immutable" - so it could be done - well, then one asks "why dynamic sql"???

sql 'macros'

lh, October 11, 2005 - 3:00 pm UTC

1) In order to write modular code it is sometimes useful to encapsulate complex SQL expressions (e.g. replace, upper, translate stacks or case statements) into
stored functions or packages.

Referencing these functions is, however, slower than using the (unmodular) straight SQL expression (there has already been discussion about the context switch between the PL/SQL and SQL engines).

In the C language there is syntax for macros. Something like that, so that one could:
create or replace macro example (p_param) is
replace(upper(substr(p_param,1,20)),'XXX','YYY');

and one could use this in SQL statements:
select
field1,
field2,
example(field3)
from
table1
;

2) Enhancing ORA-600 messages so that they would give more information - something like the number of the bug that is causing the error message.
Wouldn't this be a great and beneficial feature ?
(perhaps some minor difficulties in implementing...)

Tom Kyte
October 11, 2005 - 3:54 pm UTC

1) already noted above..

2) if we knew the bug, we would not have thrown the error - as the error is due to a bug. Very much chicken and egg here.

separation of application server and database server

lh, October 11, 2005 - 3:11 pm UTC

For license, maintenance and security reasons it is sometimes useful to dedicate a server just to the database.

Applications would run on different machines, connecting to the database using sql*net.

However, the application/client can use the utl_file package, which writes files on the database server. Also, alter session set sql_trace = true produces trace files on the database server.

Could these files somehow be directed to the calling client's machine? Automatic writing over sqlnet?

Tom Kyte
October 11, 2005 - 3:54 pm UTC

that would be very virus like.

Mount the file systems to the server and you can do this, but you have to take the initiative to do that.

sql 'macros'

lh, October 11, 2005 - 3:33 pm UTC

When reading more of these suggestions I found that Gary has already suggested the same kind of functionality.

So I am not alone in missing something like this.

Wish list

Pet, October 11, 2005 - 4:22 pm UTC

Built-in scheduler and event-based job kick-off (enhance DBMS_JOB; for example, start a PL/SQL procedure when files arrive in an OS directory), so that we don't need to use cron.
Built-in basic workflow (data goes from one table to another, waits for user action, and returns to the same table with an updated status, or to other tables).
Autoformat of results and a row limit in the SQL*Plus client (like set ROWLIMIT 50, so that all subsequent queries stop after 50 rows) (it is very hard to cancel a query in the SQL*Plus client).
Named SQL (type a select statement and save it under a name in the database, instead of saving it in an OS file) so that you can re-use the same statement later; the namespace of named SQL should be the current DB schema.


Tom Kyte
October 12, 2005 - 6:38 am UTC

someone would have to POLL the directory - so just schedule the job to run periodically and scan the directory (I've a java stored procedure on this site to get a list of files in a directory).


I think AQ is built in basic workflow (to build your workflows with)




something between shutdown immediate and abort.

nowhere man, October 11, 2005 - 4:59 pm UTC

shutdown immediate defer long transaction rollback.

Nice to have...

Chris, October 11, 2005 - 9:44 pm UTC

A few wishes off the top of my head...

a. Being able to join database tables to pl/sql tables in pl/sql code? A friend of mine swears blind he saw this listed as a feature in 10g (but he can't find it now).

b. I would like to be able to 'drop tablespace including datafiles' - with an exception being raised if there were any contents. At the moment you can only specify 'including contents and datafiles' together - which can be dangerous.

c. The ability to drop individual datafiles from a tablespace (if empty).

d. User-defined pfile/spfile parameters. For example, this could be used to automatically suppress certain features in a database if it were a copy of production. This would save having to remember to set values in a database table after a refresh of a test database from production.

Eg: '!my_special_parameter = 123'

From the previously mentioned features in this thread I'd like to add my vote to:

1. Defaulting a column to sequence.nextval (or sequence.currval).

2. Constraints with error messages associated (not just check constraints either). It might also be useful if you could include parameters in the message which would be replaced by the value that a column was being set to.

Eg: '$1 is not a valid cost code'

3. Allowing comments on everything (OK - boring but handy).

4. Rename user / schema. This would be good for testing - to be able to swap back and forward between alternate versions of data quickly.

5. Allowing in-line SELECTs in PL/SQL such as:

v_variable := NVL((SELECT ...), 'Unknown');

IF EXISTS(SELECT NULL FROM emp WHERE ...)

IF NVL((SELECT emp_id FROM emp ...), 0)

6. ON COMMIT triggers.
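Regarding wish 1, the usual workaround in current releases is a before-insert trigger reading the sequence - sketched here with made-up table/sequence names:

```sql
create sequence emp_seq;

create or replace trigger emp_bi
before insert on emp
for each row
begin
  -- only assign a value when the caller did not supply one
  if :new.empno is null then
    select emp_seq.nextval into :new.empno from dual;
  end if;
end;
/
```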


Tom Kyte
October 12, 2005 - 7:09 am UTC

a) you can join to collections - and have been able to since 8i....

b) if you use OMF.... that happens already with just drop tablespace.

c) 10gr2 does that.


for previous poster(s)

Connor, October 12, 2005 - 5:59 am UTC

a) select on plsql table

select * from table(plsql_table_type)

is already there.

b) alter table read only

Isn't this just:

create or replace trigger emp_read_only
before insert or update or delete on emp
declare
  fake_privs_msg exception;
  pragma exception_init(fake_privs_msg, -942);
begin
  raise fake_privs_msg;
end;
/

I'm not sure what the "overhead" of such a trigger is

Cheers
Connor
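Spelling out point (a) a bit more: the collection has to be a SQL-level type (created with CREATE TYPE, not a PL/SQL index-by table). A sketch with made-up names:

```sql
create or replace type num_list as table of number;
/
-- join the collection to a real table just like any other row source
select e.*
  from emp e,
       table(num_list(7369, 7499)) ids
 where e.empno = ids.column_value;
```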

Bitmap Join Index - make the more robust (and fix some bugs)...

Philip Moore, October 12, 2005 - 10:16 am UTC

Hi Tom,

I was just thinking of one of the features that I believed to be very promising when I first heard of it - Bitmap Join Indexes. After some experience, however, I find them to be a little less useful.

The first thing is that a join-back is still required if I reference the indexed dimension table column in the SELECT clause (oddly enough, it works fine if I only mention the column in the "WHERE" clause).

Example:
philip@ORA9iR2 -> DROP TABLE products CASCADE CONSTRAINTS
2 /

Table dropped.

philip@ORA9iR2 -> DROP TABLE facts
2 /

Table dropped.

philip@ORA9iR2 -> CREATE TABLE products( product_id VARCHAR2(15)
2 , product_name VARCHAR2(30)
3 , CONSTRAINT product_pk PRIMARY KEY (product_id)
4 )
5 /

Table created.

philip@ORA9iR2 -> CREATE TABLE facts ( product_id VARCHAR2(15)
2 , sales_date DATE
3 , sale_in_dollars NUMBER
4 , CONSTRAINT facts_pk PRIMARY KEY (product_id, sales_date)
5 , CONSTRAINT facts_fk_products FOREIGN KEY (product_id) REFERENCES products (product_id)
6 )
7 /

Table created.

philip@ORA9iR2 -> CREATE BITMAP INDEX facts_product_bjix ON facts (products.product_name)
2 FROM products
3 , facts
4 WHERE products.product_id = facts.product_id
5 /

Index created.

philip@ORA9iR2 -> EXECUTE DBMS_STATS.GATHER_TABLE_STATS (ownname=>USER, tabname=>'PRODUCTS');

PL/SQL procedure successfully completed.

philip@ORA9iR2 -> EXECUTE DBMS_STATS.GATHER_TABLE_STATS (ownname=>USER, tabname=>'FACTS');

PL/SQL procedure successfully completed.

philip@ORA9iR2 -> SET autotrace traceonly EXPLAIN ;
philip@ORA9iR2 -> -- Here the index is used appropriately...
philip@ORA9iR2 -> SELECT SUM(facts.sale_in_dollars)
2 FROM facts
3 , products
4 WHERE facts.product_id = products.product_id
5 AND products.product_name = 'Tom Kyte''s BBQ'
6 /

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=1 Bytes=22)
1 0 SORT (AGGREGATE)
2 1 TABLE ACCESS (BY INDEX ROWID) OF 'FACTS' (Cost=2 Card=1 Bytes=22)
3 2 BITMAP CONVERSION (TO ROWIDS)
4 3 BITMAP INDEX (SINGLE VALUE) OF 'FACTS_PRODUCT_BJIX'



philip@ORA9iR2 -> /* Oracle already did the work to join the FACTS and PRODUCTS table as well as catalog each unique
DOC> entry of PRODUCT_NAME (for storage in the Bitmap Join Index).
DOC> So why the join back to the PRODUCTS table? Why not just use the bitmap join index entry
DOC> to bring back the PRODUCT_NAME? */
philip@ORA9iR2 -> SELECT products.product_name
2 , SUM(facts.sale_in_dollars)
3 FROM facts
4 , products
5 WHERE facts.product_id = products.product_id
6 AND products.product_name = 'Tom Kyte''s BBQ'
7 GROUP BY products.product_name
8 /

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=3 Card=1 Bytes=48)
1 0 SORT (GROUP BY NOSORT) (Cost=3 Card=1 Bytes=48)
2 1 NESTED LOOPS (Cost=3 Card=1 Bytes=48)
3 2 TABLE ACCESS (BY INDEX ROWID) OF 'FACTS' (Cost=2 Card=1 Bytes=22)
4 3 BITMAP CONVERSION (TO ROWIDS)
5 4 BITMAP INDEX (SINGLE VALUE) OF 'FACTS_PRODUCT_BJIX'
6 2 TABLE ACCESS (BY INDEX ROWID) OF 'PRODUCTS' (Cost=1 Card=1 Bytes=26)
7 6 INDEX (UNIQUE SCAN) OF 'PRODUCT_PK' (UNIQUE)

--------------------

The second thing is that if I have two bitmap join indexes on the fact table (which is partitioned), I can't exchange that partition - this bug has been around since bitmap join index inception, and it even exists in Oracle 10g Release 2 (I verified it there)!!!

Example:
philip@ORA9iR2 -> DROP TABLE products CASCADE CONSTRAINTS
2 /

Table dropped.

philip@ORA9iR2 -> DROP TABLE geographies CASCADE CONSTRAINTS
2 /

Table dropped.

philip@ORA9iR2 -> DROP TABLE facts
2 /

Table dropped.

philip@ORA9iR2 -> DROP TABLE facts_swap
2 /

Table dropped.

philip@ORA9iR2 -> CREATE TABLE products( product_id VARCHAR2(15)
2 , product_name VARCHAR2(30)
3 , CONSTRAINT products_pk PRIMARY KEY (product_id)
4 )
5 /

Table created.

philip@ORA9iR2 -> CREATE TABLE geographies ( geography_id VARCHAR2(15)
2 , geography_name VARCHAR2(30)
3 , CONSTRAINT geographies_pk PRIMARY KEY (geography_id)
4 )
5 /

Table created.

philip@ORA9iR2 -> -- Here's our partitioned fact table...
philip@ORA9iR2 -> CREATE TABLE facts ( product_id VARCHAR2(15)
2 , geography_id VARCHAR2(15)
3 , sales_date DATE
4 , sale_in_dollars NUMBER
5 , CONSTRAINT facts_pk PRIMARY KEY (product_id, geography_id, sales_date) USING INDEX LOCAL
6 , CONSTRAINT facts_fk_products FOREIGN KEY (product_id) REFERENCES products (product_id)
7 , CONSTRAINT facts_fk_geographies FOREIGN KEY (geography_id) REFERENCES geographies (geography_id)
8 )
9 PARTITION BY RANGE (sales_date)
10 (PARTITION p_only VALUES LESS THAN (MAXVALUE))
11 /

Table created.

philip@ORA9iR2 -> CREATE BITMAP INDEX facts_product_bjix ON facts (products.product_name)
2 FROM products
3 , facts
4 WHERE products.product_id = facts.product_id
5 LOCAL
6 /

Index created.

philip@ORA9iR2 -> CREATE BITMAP INDEX facts_geography_bjix ON facts (geographies.geography_name)
2 FROM geographies
3 , facts
4 WHERE geographies.geography_id = facts.geography_id
5 LOCAL
6 /

Index created.

philip@ORA9iR2 -> -- Now for the "swap table"...
philip@ORA9iR2 -> CREATE TABLE facts_swap ( product_id VARCHAR2(15)
2 , geography_id VARCHAR2(15)
3 , sales_date DATE
4 , sale_in_dollars NUMBER
5 , CONSTRAINT facts_swap_pk PRIMARY KEY (product_id, geography_id, sales_date)
6 , CONSTRAINT facts_swap_fk_products FOREIGN KEY (product_id) REFERENCES products (product_id)
7 , CONSTRAINT facts_swap_fk_geographies FOREIGN KEY (geography_id) REFERENCES geographies (geography_id)
8 )
9 /

Table created.

philip@ORA9iR2 -> CREATE BITMAP INDEX facts_swap_product_bjix ON facts_swap (products.product_name)
2 FROM products
3 , facts_swap
4 WHERE products.product_id = facts_swap.product_id
5 /

Index created.

philip@ORA9iR2 -> CREATE BITMAP INDEX facts_swap_geography_bjix ON facts_swap (geographies.geography_name)
2 FROM geographies
3 , facts_swap
4 WHERE geographies.geography_id = facts_swap.geography_id
5 /

Index created.

philip@ORA9iR2 -> -- Drum Roll:
philip@ORA9iR2 -> ALTER TABLE facts
2 EXCHANGE PARTITION p_only WITH TABLE facts_swap
3 INCLUDING INDEXES WITH VALIDATION
4 /
EXCHANGE PARTITION p_only WITH TABLE facts_swap
*
ERROR at line 2:
ORA-14098: index mismatch for tables in ALTER TABLE EXCHANGE PARTITION


philip@ORA9iR2 -> -- Now drop the Bitmap Join Indexes...
philip@ORA9iR2 -> DROP INDEX facts_swap_product_bjix
2 /

Index dropped.

philip@ORA9iR2 -> DROP INDEX facts_swap_geography_bjix
2 /

Index dropped.

philip@ORA9iR2 -> DROP INDEX facts_product_bjix
2 /

Index dropped.

philip@ORA9iR2 -> DROP INDEX facts_geography_bjix
2 /

Index dropped.

philip@ORA9iR2 -> -- Now retry the swap...
philip@ORA9iR2 -> ALTER TABLE facts
2 EXCHANGE PARTITION p_only WITH TABLE facts_swap
3 INCLUDING INDEXES WITH VALIDATION
4 /

Table altered.

-------------------------

Finally - could you kindly share your top ten thus far?

Thanks.

Phil

standard proc for compiling invalid objects in a schema

Anto, October 12, 2005 - 10:30 am UTC

What about an Oracle-supplied procedure for compiling all invalid objects in a schema based on dependencies (similar to what we have in TOAD - compile all invalid objects in a schema), rather than going through the pain of writing our own procedures?

Tom Kyte
October 12, 2005 - 2:06 pm UTC

utlrp/utlirp.sql exist for the database.

me, i just let them be, they'll take care of themselves.

A reader, October 12, 2005 - 3:05 pm UTC

What about those developers who don't have access to those files ?

Anto

Tom Kyte
October 13, 2005 - 9:35 am UTC

me, i just let them be, they'll take care of themselves.

dave, October 12, 2005 - 6:41 pm UTC

dbms_utility.compile_schema()

Tom Kyte
October 13, 2005 - 10:23 am UTC

that is true, I forgot about that one - it had issues in some 8i releases with a connect by loop in a view - but that should have been fixed in the latest 817 releases.
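For completeness, a call would look something like this (the schema name is just an example):

```sql
begin
  -- recompile all invalid objects owned by SCOTT
  dbms_utility.compile_schema(schema => 'SCOTT');
end;
/
```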


Clemens, October 13, 2005 - 4:47 am UTC

- ALTER PROCEDURE/FUNCTION/PACKAGE MONITOR - so you could find dead/not used
code in large applications.

=> is already solved through:

SELECT object_name, timestamp
FROM dba_objects
WHERE object_type LIKE 'PACKAGE%'
ORDER BY timestamp;

C.


Followup:
that doesn't tell you when it was last used.

------------------------------------------------------
Ok, so I wasn't that far off with my "wish". Just being curious, what does Oracle mean by

http://download-west.oracle.com/docs/cd/B14117_01/server.101/b10755/statviews_1102.htm#sthref1294

TIMESTAMP VARCHAR2(20)
Timestamp for the _specification_ of the object (character data)

Regards
C.

Tom Kyte
October 13, 2005 - 10:40 am UTC

for timestamp checking with remote PL/SQL (Forms/Reports). "Specification" here means, for example, the package specification.

An Oracle newbie's experience

Andy Mackie, October 13, 2005 - 7:14 am UTC

As someone who has recently moved to Oracle from "the other DBMS" (SQL Server), here's my experience:

1. Development tools - impossible to use Oracle without 3rd party tools such as Toad or PL/SQL Developer. iSQL*Plus, or SQL*Plus and notepad ? Need I say more...

2. 30 character limit for names - AAGGHH!

3. Can't select CLOB columns across a database link ("ORA-22992: cannot use LOB locators selected from remote tables") - need to resort to workarounds like creating temp.tables.

4. XML Schema evolution - extremely limited (discussion of copyEvolve limitations at http://forums.oracle.com/forums/message.jspa?messageID=1051365 )

5. DHCP environments - when installing on developer workstations, a fixed IP address is required, needing a loopback adapter setup.

6. UPPERCASE COLUMN NAMES ONLY UNLESS YOU "Start putting everything in quotes". Jeez.

7. Enterprise Manager (either web or Java UI) - sloooowww...lacks the slickness of modern GUI's.

8. Extremely heavy footprint. Just how many GB's of RAM do I need on my workstation to run Oracle and my development tools at the same time ?

9. Case sensitivity mixture - data comparisons are case sensitive, but the SQL itself is case insensitive.

My initial impression is that Oracle may have a good database engine, but the interface between me and the database leads to an extremely frustrating experience. Everything just seems so much more difficult to achieve. Oracle really needs to improve its usability significantly.

Grants for all tables of a schema

Kim Berg Hansen, October 13, 2005 - 9:04 am UTC

A small enhancement I just wish I had today:

GRANT SELECT ON SCOTT.* TO JONES

At present I can do two things only:
GRANT SELECT ANY TABLE TO JONES
or
GRANT SELECT <one! table> TO JONES

Sure I can quickly whip up a script to GRANT SELECT on all Scott's tables to Jones, but when Scott creates a new table, that has to be explicitly granted too.

My reason is that I would like to create some users who have a sort of "READ ONLY" access to a particular schema - including any new tables that will come.

Similarly it would be nice with something like:
GRANT EXECUTE ON SCOTT.* TO JONES

And there are probably other grants that would be nice to be able to do on a schema-level rather than just either globally or object-level.


linker error : _sqlcxt module

sohli, October 13, 2005 - 10:16 am UTC

hey
i m a newbie
i didnt get the proper idea of what
u were saying could u please explain in details
thank you


Tom Kyte
October 13, 2005 - 11:04 am UTC

nt rly

to Andy in England

Jim, October 13, 2005 - 11:42 am UTC

Items 6, 8, 9: In 10g you can have case-insensitive searches. Also, you can make the database take up less RAM (SQL Server takes all available RAM unless you specifically limit it). And in other databases the tools automatically quote table and column names, so they rely on mixed-case object names. I like not having to quote every darn thing just to query things. I don't want to have mixed-case object names. Do you really want an Employee table that is different from the EMPLOYEE table? UGH.

GRANT SELECT ON SCOTT.* TO JONES

Alex, October 13, 2005 - 11:48 am UTC

A small enhancement I just wish I had today:

GRANT SELECT ON SCOTT.* TO JONES


Have you tried CREATE TRIGGER on schema level for handling DDL events ?
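A sketch of that idea - an AFTER CREATE trigger on the schema that grants each new table to JONES. All names are illustrative, and since DDL cannot be executed inside the triggering transaction itself, the grant is submitted as a job to run just afterwards:

```sql
create or replace trigger scott_auto_grant
after create on scott.schema
declare
  l_job binary_integer;
begin
  if ora_dict_obj_type = 'TABLE' then
    -- submit a job to issue the grant once the CREATE commits
    dbms_job.submit(l_job,
      'execute immediate ''grant select on scott."' ||
      ora_dict_obj_name || '" to jones'';');
  end if;
end;
/
```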



For Alex

Andrew, October 13, 2005 - 12:06 pm UTC

A little generic, but you get the idea...

BEGIN
FOR x IN ( SELECT owner,
object_name,
DECODE(object_type, 'TABLE' , 'select',
'SEQUENCE', 'select',
'VIEW', 'select',
'execute') AS privs,
DECODE (owner, 'SCOTT', 'jones') AS app_user
FROM dba_objects
WHERE object_type IN ('TABLE', 'PACKAGE', 'PROCEDURE',
'FUNCTION', 'SEQUENCE', 'VIEW')
AND owner IN ('SCOTT') )
LOOP
BEGIN
EXECUTE IMMEDIATE 'grant ' || x.privs || ' on ' || x.owner ||
'.' || x.object_name || ' to ' || x.app_user ;

EXECUTE IMMEDIATE 'create or replace synonym '|| x.app_user||'.'||x.object_name||
' for ' ||x.owner||'.'||x.object_name ;
END;
END LOOP;
END;
/


A reader, October 13, 2005 - 12:06 pm UTC

What about those developers who don't have access to those files ?

Anto


Followup:
me, i just let them be, they'll take care of themselves.

You mean the invalid objects will take care of themselves (will become valid, if they can) the next time someone runs them?

Not if there are 2 or 3 levels of dependencies.
If some object A is dependent on object B which is invalid (due to some object C being invalid), there is the need to compile in the order C, B, A.



Tom Kyte
October 13, 2005 - 1:17 pm UTC

yes, they will become valid - the database will "fix them" all by itself.

Not if there are 2 or 3 levels of dependencies .

Kamal Kishore, October 13, 2005 - 12:57 pm UTC

Is this what you mean?

SQL> create table t as select * from dual ;

Table created.

SQL>
SQL> create or replace procedure proc_c is
  2  begin
  3    for rec in (select * from t)
  4    loop
  5      null ;
  6    end loop ;
  7  end ;
  8  /

Procedure created.

SQL> create or replace procedure proc_b is
  2  begin
  3    proc_c ;
  4  end ;
  5  /

Procedure created.

SQL> create or replace procedure proc_a is
  2  begin
  3    proc_b ;
  4  end ;
  5  /

Procedure created.

SQL> select object_name, status from user_objects where object_name LIKE 'PROC%' ;

OBJECT_NAME                     STATUS
------------------------------- -------
PROC_A                          VALID
PROC_B                          VALID
PROC_C                          VALID

3 rows selected.

SQL> rename t to t1 ;

Table renamed.

SQL> rename t1 to t ;

Table renamed.

SQL> select object_name, status from user_objects where object_name LIKE 'PROC%' ;

OBJECT_NAME                     STATUS
------------------------------- -------
PROC_A                          INVALID
PROC_B                          INVALID
PROC_C                          INVALID

3 rows selected.

SQL> exec proc_a ;

PL/SQL procedure successfully completed.

SQL> select object_name, status from user_objects where object_name LIKE 'PROC%' ;

OBJECT_NAME                     STATUS
------------------------------- -------
PROC_A                          VALID
PROC_B                          VALID
PROC_C                          VALID

3 rows selected.

SQL>
 

2 or 3 or more levels of dependency

A reader, October 13, 2005 - 1:42 pm UTC

What if the objects involved are materialized views and not procedures ? Will Oracle fix all the dependent MVs in the correct order ?

Anto

Tom Kyte
October 13, 2005 - 4:16 pm UTC

define "fix", what is "broken" about them

Join Conditions

A reader, October 13, 2005 - 2:37 pm UTC

I hate having to explicitly write join conditions
that effectively duplicate foreign key information
already stored in the database. In 90% of the queries
I write the join conditions are a redundant expression
of pre-defined foreign key constraints.

For example:

create table table_a
( a_id number,
a_name varchar2(10),
primary key(a_id)
);

create table table_b
( b_id number,
a_id_fk number references table_a,
b_name varchar2(10),
primary key(b_id)
);

create table table_c
( c_id number,
b_id_fk number references table_b,
c_name varchar2(10)
);

-- instead of this

select
a.a_name, b.b_name, c.c_name
from
table_a a, table_b b, table_c c
where
a.a_id = b.a_id_fk and
b.b_id = c.b_id_fk and
...
;

-- or this ugly ANSI SQL construct

select
a.a_name, b.b_name, c.c_name
from
table_a a inner join
table_b b inner join
table_c c
on ( b.b_id = c.b_id_fk )
on ( a.a_id = b.a_id_fk )
where
...
;

-- something like this would be nice;
-- it is fast and easy to write and others
-- can clearly follow the logic with minimal effort

select
a.a_name, b.b_name, c.c_name
from
table_a a,
table_b b,
table_c c
join
a inner join b,
b inner join c
where
...
;

Oracle invested huge amounts to develop a complex cost based
optimizer to pre-process queries. I'm surprised that, after
all these decades, they haven't invested a small fraction of
that amount to make their parser smart enough to know how to join
two tables together simply using constraint information already
available in its data dictionary.

When you have a situation where you need joins that don't
follow the pre-defined foreign key constraints, the
existing syntax will do. In a well designed database,
these cases should definitely be the minority however.

I'm aware of the "USING (column_name)" clause. This
construct comes close, but it has a few deficiencies
that cancel out its usefulness as a time-saving syntax
construct:

1. you can't use it when the primary key and foreign
key column names differ

2. it precludes the use of shortcuts like "a.*" in the
SELECT clause

3. you have to remember to treat the join columns
differently when you reference them in other
parts of the SELECT clause; i.e. if you are in
the habit of prefixing all your column names with
table aliases you have to remember not to do this
for columns mentioned in the USING clause.



Andy

DaPi, October 13, 2005 - 2:37 pm UTC

"5. DHCP environments - when installing on developer workstations, requires a
fixed IP address, needing loopback adapters setup."

Can't you get round this by using IPC (Inter-Process Connect/Communication) ?

DHCP - yes

Tom Fox, October 13, 2005 - 4:59 pm UTC

Sure, you can use DHCP. I run 9i and 10gR2 on my box with a DHCP address and no loopback installed. I get errors during the installation wondering about my hostname and IP, but I hit continue and it functions normally. Of course, I don't need other people on the network to connect to me so this is fine.

Things to add..

Rob, October 13, 2005 - 4:59 pm UTC

1. Would love the ability to display user sessions and be able to sort them by the amount of archive logging they have generated.

2. Would love a utility to BLAST updates to multi-million-row tables extremely fast. I have tried a couple of approaches using hints, updating the stats on the object, etc. I just want a great "method" that saves me the time of having to play with the different methods to find the fastest.

3. Built-in virus scanning of uploaded attachments in the database. (Can I say Mcafee?)

4. Cross database-link explain plans with more detail.

Those would be my first requests off the top of my head.

RMAN-like commands in sqlplus would be helpful too

Yegna, October 13, 2005 - 7:02 pm UTC

Like in RMAN if I could do the following in sqlplus that would be great..

List tablespace (to list all the tablespaces with size)
list constraints for table schema.tname (to list the constraints of the table)
..
..

What I don't like...

Toon Koppelaars, October 17, 2005 - 4:01 am UTC

Fully agree with Zlatko a few posts ago.

What I don't like about Oracle (or any other DBMS out there) is the lack of more support for data integrity constraints (aka business rules). We need much more than just CHECK, PRIMARY KEY, UNIQUE and FOREIGN KEY.

Support for the CREATE ASSERTION ANSI SQL construct would be EXTREMELY useful and thus a good start. Although I know this is also EXTREMELY difficult to implement... ;-)


...and another wish

A reader, October 17, 2005 - 1:17 pm UTC

I would like an additional variable to be available in an exception block (like we already have SQLERRM and SQLCODE) - to give the line number within the module that caused the branch to the exception block.

You can currently look at the call stack / exception stack but that will only give you a line number within the exception block as opposed to the one that sent you there.



Tom Kyte
October 17, 2005 - 1:37 pm UTC

10g added that

dbms_utility.FORMAT_ERROR_BACKTRACE
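A quick sketch of what that gives you (procedure name made up) - the backtrace reports the line where the error originated, not the line of the handler:

```sql
create or replace procedure p is
begin
  raise no_data_found;   -- error originates here
end;
/
set serveroutput on
begin
  p;
exception
  when others then
    -- prints something like: ORA-06512: at "SCOTT.P", line 3
    dbms_output.put_line(dbms_utility.format_error_backtrace);
end;
/
```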

re-raise original exception

Alberto Dell'Era, October 19, 2005 - 2:30 pm UTC

In Java we can do:

catch (Exception e) {
// log infos about exception etc
throw e;
}

Would be nice to have something similar in pl/sql, currently you can do:

dellera@ORACLE10> declare
2 l_dummy varchar2(1);
3 begin
4 select dummy into l_dummy from dual where 1=0;
5 exception when others then
6 raise;
7 end;
8 /
declare
*
ERROR at line 1:
ORA-01403: no data found
ORA-06512: at line 6

Would be nice to have something like "raise original_exception;", that gives "ORA-06512: at line 4" instead (and the full original stack trace in general of course).

Using dbms_utility.FORMAT_ERROR_BACKTRACE would "flatten" the stack into the message string; it's not the same (besides being a tad verbose).

Change(s) Needed - Idle Time Parameter

Magesh, October 19, 2005 - 4:58 pm UTC

The IDLE_TIME parameter, when set in the database: when a user leaves his or her session inactive beyond the idle time limit, for whatever reason, Oracle waits for the user to return and submit an action to the database before completely cleaning up the session. It would be nice if it cleaned up automatically instead of marking the session "Sniped". The user, knowingly or unknowingly, violated the IDLE_TIME limit and the session is gone anyway. What use is it to the database to keep it in "Sniped" status?

Tom Kyte
October 19, 2005 - 5:02 pm UTC

the end user would get "3113 end of file on communication channel" upon return.

What do you do about 3113's?
You call support....



A Simple Way to list dependencies may be

A reader, October 19, 2005 - 5:11 pm UTC

When we have to make a change to an object, say change a data type or a name, I want to know all the 20-30 sub-programs that are using it, so I know in advance how much work is going to be required or what needs to change. I know you can do it through Enterprise Manager, but I am talking about some simple way, probably from sqlplus or something. Even a system table which stores the object and its dependencies would do. But I would really love to do something like

SQL> LIST DEPENDENCY EMP; or something like it. 

Tom Kyte
October 19, 2005 - 7:42 pm UTC

(connect by query on dba_dependencies can do this....)
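
Such a query might look like this (a sketch only; SCOTT.EMP stands in for the object being changed, and cyclic dependencies may require adding NOCYCLE):

```sql
-- walk everything that depends, directly or indirectly, on SCOTT.EMP
select lpad(' ', 2 * level) || type || ' ' || owner || '.' || name as dependent
  from dba_dependencies
 start with referenced_owner = 'SCOTT'
        and referenced_name  = 'EMP'
connect by prior owner = referenced_owner
       and prior name  = referenced_name
       and prior type  = referenced_type;
```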

how about better support for binds in explain plan

Sean D Stuber, October 19, 2005 - 11:45 pm UTC

In particular, allow date bind variables when doing a plan.

Clearly Oracle can do it, the statements execute and have plans generated when they run, why can't that same logic (or nearly the same) be applied when just running the plan?

I'd be willing to have the functionality toggled through an

alter session xplan_allow_date_binds=true

in order to allow "default" behavior to work (or rather, fail) the current way; but still give me the option to generate a plan from the real application sql without tinkering with it first.


empty strings and nulls

Raymond Allo, October 20, 2005 - 12:48 am UTC

As a DBA I am involved in many Java and PHP projects. Most developers (specifically the Java ones) complain about how Oracle deals with empty strings and nulls. NULLs are too confusing somehow and create all this extra work for them. (OK guys, it is what it is, so get over it, just put it in a class.) Having said that, Oracle seems to be the only database with this issue. DB2, Postgres, MySQL, SQL Server, etc. handle empty strings as empty strings.
An init.ora parameter like empty_strings=true/false would be nice, thank you.
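
For reference, the behavior being complained about is easy to demonstrate (a sketch; any Oracle session will do):

```sql
-- Oracle treats the empty string literal as NULL
select 'empty is null' as result from dual where '' is null;   -- returns one row

-- and length('') is NULL, not 0, so NVL supplies the -1
select nvl(length(''), -1) as len from dual;
```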

DBMS_OUTPUT.PUT_LINE

A reader, October 20, 2005 - 4:03 am UTC

1)
A few books on PL/SQL say that we should be able to call DBMS_OUTPUT.PUT_LINE with a boolean variable, but it is not overloaded for that usage.
2) Command-line history of SQL statements would be very helpful in SQL*Plus.


Tom Kyte
October 20, 2005 - 8:13 am UTC

1) I'm sorry that some books are wrong? are they Oracle books (eg: something we can actually fix?)

2) raptor.

gathering global statistics for partitioned table

lh, October 20, 2005 - 5:24 am UTC

Have I understood it correctly: if a SQL statement references only one partition, it will use the statistics for that partition; otherwise it will use global statistics.

Generating global statistics can be a costly operation for large partitioned tables, especially when using histograms.

Some kind of 'cumulative' gathering of statistics could be beneficial. Calculating statistics for a partition would, e.g., update the global statistics.

Tom Kyte
October 20, 2005 - 8:18 am UTC

if the optimizer can determine a) that a single partition will be accessed and b) which partition that is then partition stats for that partition will be used.

Else, global table statistics are used.


calculation of local stats for all partitions DOES generate global stats HOWEVER if you have 100 partitions and column X has 1000 distinct values in each partition, tell me how many distinct values of X does the table have?

a) 1,000
b) 100,000
c) some number between 1,000 and 100,000


and for bonus points

q) is X skewed, if so how?


you cannot roll up partition stats and answer those....

DBMS_OUTPUT.PUT_LINE

A reader, October 20, 2005 - 12:48 pm UTC

Hi tom,
does oracle support printing boolean values using DBMS_OUTPUT.PUT_LINE?


Tom Kyte
October 20, 2005 - 4:49 pm UTC

nope.

ops$tkyte@ORA9IR2> exec dbms_output.put_line( true );
BEGIN dbms_output.put_line( true ); END;

      *
ERROR at line 1:
ORA-06550: line 1, column 7:
PLS-00306: wrong number or types of arguments in call to 'PUT_LINE'
ORA-06550: line 1, column 7:
PL/SQL: Statement ignored

 

gathering global statistics for partitioned table

lh, October 21, 2005 - 5:54 am UTC

I had in mind a situation where new data is constantly added to the table. New partitions are created and statistics are gathered for them after they have a 'representative' volume.

Column statistics should also use histograms. This is the key point. These might be accumulated, with some accuracy, into global statistics.
Histograms have some of the required information.
This would be an estimate, agreed. This could be (yet) another option in the dbms_stats package when gathering statistics for a named partition.

Another way to solve the basic problem: if a SQL statement uses several partitions (with local indexes), the optimizer would use those partitions' statistics, not the global statistics.

(I have a few times had problems using partitioned tables with local indexes. Statistics were gathered for the partitions,
but SQL statements were referencing several partitions.
Especially queries with indexed date columns in where clauses sometimes performed poorly, because stale global statistics did not account for values beyond the old statistics' range.
The table had hundreds of millions of rows and gathering global statistics on that machine takes a long time. We are doing it now, because we couldn't modify that third-party application. But I am wondering if this could be solved otherwise.)

Tom Kyte
October 21, 2005 - 8:28 am UTC

stats should be gathered for them from the get go - else the global stats are not rolled up.


it is not that histograms would be estimated - rather it is the fact that there isn't a way to roll them up as described.


100 partitions
10000 distinct values of X in each

do you have 10,000 distinct X's or 1,000,000 distinct X's or something in between??

The optimizer EITHER

a) uses global stats because it cannot figure out the SINGLE partition that will be accessed at parse time

b) uses local stats because it CAN figure out the SINGLE partition that will be accessed.

execute immediate @ db-link

A reader, October 21, 2005 - 8:16 am UTC

would be handy in some cases
(instead of dbms_sql@db-link)

booleans in dbms_output

A reader, October 21, 2005 - 9:57 am UTC

I generally use following syntax :
exec dbms_output.put_line(case 1>2 when true then 'true' when false then 'false' else 'what?' end);

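
Until such an overload exists, a tiny wrapper can fill the gap (a hypothetical helper, not part of DBMS_OUTPUT):

```sql
create or replace procedure put_bool( p_val in boolean )
as
begin
    -- map the three possible boolean states to text
    dbms_output.put_line( case p_val when true  then 'TRUE'
                                     when false then 'FALSE'
                                     else 'NULL' end );
end;
/
```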


My votes

Peter Nybo Rasmussen, October 21, 2005 - 10:29 am UTC

My votes goes to the following (all said by others before):

1) Multi-table check constraints or assertions as defined by ANSI SQL (don't see why this would be "extremely difficult to implement" as stated by Toon Koppelaars).
2) Don't treat empty strings as null (a way to disable this, big design mistake).
3) No implicit type conversions (a way to disable this).
4) Up-to-date size limits for varchar2 (currently 4000 bytes) and identifiers (currently 30 characters).
5) Autoincrement/identity columns (implicit sequences to avoid trigger code).
6) Mixed case object names (for better readability) without having to quote, combined with case insensitive DML.
7) Extended explain syntax to allow the equivalent of bind variable peeking.


Not technical but...

Steve, October 21, 2005 - 11:22 am UTC

Having AskTom occasionally open to accept questions during UK working hours!

;-)

Tom Kyte
October 21, 2005 - 11:39 am UTC

I'll be in Europe for the next two weeks :)

malcolm, October 21, 2005 - 11:45 am UTC

GPL the source code. Hahaha!

And another vote for the option to compile pl/sql with strong typing. (We don't like being treated like VB programmers...)



For Peter Nybo Rasmussen

Andrew, October 21, 2005 - 12:49 pm UTC

1) Multi-table check constraints or assertions as defined by ANSI SQL . . .
- seems reasonable to me

2) Don't treat empty strings as null (a way to disable this, big design mistake).
- length of empty string = 0
- length of null string = 0
-- what is the difference between an empty string and a null string? If the length of an empty string > 0, then it is not empty, thus not null.

3) No implicit type conversions (a way to disable this).
- good coding practices generally dictate explicit type conversions, but to eliminate implicit conversion would break lots and lots of apps. Besides, just because your coders will not follow your standards (you have a standard requiring explicit type conversions, right?) does not mean you should use the database to enforce it.


4) Up-to-date size limits for varchar2 (currently 4000 bytes) and identifiers (currently 30 characters).
- Longer varchar2 is fine with me, but identifiers at 30 are fine too. This enhancement is usually requested by people who like to mash object names together to make even longer names. If it were increased to 60 or 128 or whatever, all that would happen is that there would be complaints from people that they could not mash their 120-char names together to make 600-char names. Give us a break; someone has to type in those long names every time their code references them.

5) Autoincrement/identity columns (implicit sequences to avoid trigger code).
- Actually, separate sequences are more flexible. Take for instance a bi-directional replication system. We set the sequence to start at 1 on site A and to start at 2 on site B, and we increment by 2 on each site. Voila, no data collisions. Managing that with autoincrement would mean the code (DDL at least) would have to be different on each database.
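
That scheme is just two CREATE SEQUENCE statements (a sketch; ORDER_ID_SEQ is a hypothetical name):

```sql
-- site A generates odd values
create sequence order_id_seq start with 1 increment by 2;

-- site B generates even values
create sequence order_id_seq start with 2 increment by 2;
```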

6) Mixed case object names (for better readability) without having to quote, combined with case insensitive DML.
- You can put mixed case object names in your apps all you want; Oracle does not care. DML is case insensitive. update emp set salary = 10; will work just fine. So will UPDATE EMP SET SALARY = 10; and so will Update Emp Set Salary = 20;. As for dictionary content readability, learn to use underscores -- they take the place of spaces in object names AndMakeReadabilityMuchEasierThanMixedCase.

7) Extended explain syntax to allow the equivalent of bind variable peeking.
- That is nice. But also, having to diagnose problems others are having in their sessions, it would be nice to peek at the bind variables for other sessions too.

Just my US$.02


Chuck Jolley, October 21, 2005 - 1:16 pm UTC

Quote:
AndMakeReadabilityMuchEasierThanMixedCase.

That's no problem. Just make it TblAndMakeReadabilityMuchEasierThanMixedCase and it becomes easy to read.
Ask any SQLServer/VB programmer ;)




Those naughty NULL's

DaPi, October 21, 2005 - 2:09 pm UTC

Andrew said:
- length of empty string = 0
- length of null string = 0
-- what is the difference between empty string and null string?

I would say that the length of a NULL string is unknown, so should be NULL.

The difference is between:
a message '' received with a good check-sum => empty string
a message received with bad checksum, unknown message => NULL

I'm sure Fabian Pascal can do it better.


Nice thread

Andrew Max, October 21, 2005 - 2:30 pm UTC

Well, my strong belief is that one should be a smart Oracle guru to feel 100% free to criticize something about Oracle and say "Hey there, I would like you to add this feature! Why is Oracle still missing it???". Otherwise, we may end up just beating the air...

But anyway -- there is one intricate (and very simple) thing I still can't get.

Why in SQL*Plus we can, for example:

SQL> variable NumV number

... but can not:

SQL> variable DateV DATE
Usage: VAR[IABLE] [ <variable> [ NUMBER | CHAR | CHAR (n [CHAR|BYTE]) |
                    VARCHAR2 (n [CHAR|BYTE]) | NCHAR | NCHAR (n) |
                    NVARCHAR2 (n) | CLOB | NCLOB | REFCURSOR ] ]

So -- VARCHAR2, REFCURSOR and even CLOB are allowed whereas DATE is not.
But why???

I really have no clues so I'm afraid it's a kinda stupid question.
Well, I am not guru. So I realize that this may be just "beating the air..." :)

Best regards,
Andrew. 

Tom Kyte
October 22, 2005 - 9:44 am UTC

they just didn't "implement it".

applications generally don't *bind* dates - they bind strings holding characters that, when a specific format is applied to them, can be converted into the 7-byte internal date format (or 11 bytes for timestamps with fractional seconds)

Compiling invalid objects in the correct order

Marco Coletti, October 21, 2005 - 2:33 pm UTC

Yet another sample of "selecting DDL commands".

This query is a bit slow, but it should give you a list of DDL statements in (one of) the correct execution order.

The basic idea here is first to find the set INVOBJS_INDEPENDENT_INVOBJS of invalid objects which do not refer to other invalid objects, then from this set start a hierarchical query which walks down the dependency tree; the resulting set is then aggregated by object and ordered by max level to take care of objects which appear more than once in the dependency hierarchy.

with INVOBJS_INDEPENDENT_INVOBJS as (
  select OBJECT_NAME, OBJECT_TYPE
    from USER_OBJECTS
   where STATUS = 'INVALID'
  minus
  select d.NAME, d.TYPE
    from USER_DEPENDENCIES d,
         (select OBJECT_NAME, OBJECT_TYPE from USER_OBJECTS where STATUS = 'INVALID') o
   where d.REFERENCED_NAME = o.OBJECT_NAME
     and d.REFERENCED_TYPE = o.OBJECT_TYPE
)
select 'alter '||rpad(decode(OBJECT_TYPE,'PACKAGE BODY','PACKAGE',OBJECT_TYPE),14,' ')||
       rpad(OBJECT_NAME,31,' ')||'compile'||
       decode(OBJECT_TYPE,'PACKAGE BODY',' body')||';' COMPILE_COMMAND
  from (
        select 0 L, OBJECT_NAME, OBJECT_TYPE, NULL REFERENCED_NAME, NULL REFERENCED_TYPE
          from INVOBJS_INDEPENDENT_INVOBJS
        union all
        select LEVEL L, NAME, TYPE, REFERENCED_NAME, REFERENCED_TYPE
          from USER_DEPENDENCIES
       connect by prior NAME = REFERENCED_NAME and prior TYPE = REFERENCED_TYPE
         start with (REFERENCED_NAME, REFERENCED_TYPE) in
                    (select OBJECT_NAME, OBJECT_TYPE from INVOBJS_INDEPENDENT_INVOBJS)
       )
 group by OBJECT_NAME, OBJECT_TYPE
 order by max(L);

COMPILE_COMMAND
----------------------------------------------------------------
alter VIEW V_OM_CDN compile;
alter VIEW V_OM_ROD compile;
alter PACKAGE CMKANALISI compile body;
alter VIEW V_OM_ALICE compile;
alter PROCEDURE ALLINEA_CPC compile;
alter VIEW TV_CPC_VIEW compile;
alter VIEW V_RPT_SMART compile;
alter FUNCTION FAMIGLIA_ROD compile;
alter PROCEDURE OM_MERGE_ROD compile;
alter PROCEDURE SOGLIE_PER_SISTEMA compile;
alter PROCEDURE SLAMONITOR_WRAP_OM compile;
alter PROCEDURE SLAMONITOR_OM_WRAP compile;
alter PROCEDURE OM_MERGE_WHOLESALE compile;
alter PROCEDURE ALLINEA_CPC_ORESTE compile;
alter PROCEDURE SLAMONITOR_OM_DAI compile;
alter PROCEDURE SLAMONITOR_OM_CPC compile;
alter PROCEDURE SLAMONITOR_CPC_OM compile;
alter PROCEDURE SLAMONITOR_ALL_OM compile;
alter PROCEDURE SMART_BUS_SIEBEL_CLEAN compile;
alter VIEW TV_TD_INNOVATIVA_VIEW compile;
alter PROCEDURE SVECCHIA_OL_IN_CATENA compile;
alter VIEW TV_ELABORAZIONE_VIEW compile;
alter VIEW TV_CPC_VIEW_X_PORZIO compile;
alter VIEW V_RPT_WHOLESALE_NEW compile;
alter PROCEDURE INFASA_OL_IN_CATENA compile;
alter PROCEDURE CARICA_OL_IN_CATENA compile;
alter PROCEDURE SP_OM_DELETE_TABLE compile;
alter VIEW TV_STATO_ORDINATIVI_ALTA_VIEW compile;
alter PROCEDURE ANDREA_AGG_CPC_DOPPIA_ATTIVAZ compile;
alter VIEW TV_CONFIGURAZIONE_CIRCUITO_V compile;
alter PROCEDURE SP_DEL_DUPLICATI_TD_INNOV compile;
alter VIEW TV_STATO_ORDINATIVI_VIEW compile;
alter VIEW V_RPT_TD_INNOVATIVA_NEW compile;
alter PACKAGE TEST_TIZIANO compile body;
alter PROCEDURE PR_TOSC_UNICA compile;
alter PROCEDURE SLAMONITOR_CPC compile;
alter PROCEDURE PULITURA_DB_BAM compile;
alter PROCEDURE SLAMONITOR_WRAP compile;
alter PACKAGE TEST_TIZIANO_BE compile body;
alter PROCEDURE OM_MERGE_SOSPESI compile;
alter PROCEDURE SLAMONITOR_RUN_1 compile;
alter PROCEDURE SLAMONITOR_RUN_2 compile;

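
On 10g, the supplied UTL_RECOMP package offers an alternative that works out the dependency order for you (a sketch; SCOTT is a placeholder schema name):

```sql
-- serial recompilation of all invalid objects in a schema
exec utl_recomp.recomp_serial( schema => 'SCOTT' )

-- or spread the work across job slaves
exec utl_recomp.recomp_parallel( threads => 4, schema => 'SCOTT' )
```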

Chuck Jolley, October 21, 2005 - 2:49 pm UTC

I'm for null strings having a length of 0.

It just seems inconsistent to have it work the way it does.

If you say, well, what about null numbers etc.:
Null strings are different.
Null itself is a string.

I just can't think of a case where a program would treat a null string differently from a 0-length string as a practical matter.



NULL's

DaPi, October 21, 2005 - 3:00 pm UTC

It's not what a program would do, it's what you want your data to mean. e.g. in a survey:

Q: What is your boat's name?
A: It doesn't have a name. (empty string)

Q: What is your boat's name?
A: . . . AGHHHHH! my house is on fire . . . . . (NULL string)





Enhancement Request

djb, October 21, 2005 - 3:46 pm UTC

I realize this thread is getting long and I may have missed this, but I'd like something like a dynamic (or conditional) foreign key constraint, like this:

create table T1 (
T1F1 VARCHAR2(30) primary key
)
/

create table T2 (
T2F1 VARCHAR2(30) primary key
)
/

create table TMAIN (
F1 number(1),
F2 VARCHAR2(30)
)
/

alter table TMAIN
add foreign key (F2)
when (F1 = 1) references (T1.T1F1)
when (F1 = 2) references (T2.T2F1)
when (F1 = 3) references (NOTHING)
/

I worked with a proprietary database that did this sort of thing (it was not SQL-based) and this feature was great.
When the value of F1 changed, the DB checked F2 against the appropriate table. To change values, you simply updated both at the same time.

I'd be curious to know what others think of this (including Tom).

Tom Kyte
October 22, 2005 - 10:29 am UTC

Never, never never.

I would hang up my hat.


You have three attributes here, not a single column multiplexed to mean one of three things.

re: nulls -vs- empty strings

Chuck Jolley, October 21, 2005 - 3:59 pm UTC

But Oracle is not consistent about it.
Is '' a null value or an empty string?
If it's not an empty string then what is?

"My boat is on fire" should raise a null pointer error,
Not be an acceptable answer to "What is the name of your boat?"

SQL*Plus: Release 8.1.7.0.0 - Production on Fri Oct 21 14:45:30 2005

(c) Copyright 2000 Oracle Corporation.  All rights reserved.

SQL> @connect chajol@tax_test
Enter password:
Connected.
chajol@tax.experior>create table blah(id number, string varchar2(16) default '' not null);

Table created.

Elapsed: 00:00:00.43
chajol@tax.experior>insert into blah(id) values(1);
insert into blah(id) values(1)
            *
ERROR at line 1:
ORA-01400: cannot insert NULL into ("CHAJOL"."BLAH"."STRING")


chajol@tax.experior>alter table blah modify(string default 'Hello World');

Table altered.

chajol@tax.experior>insert into blah(id) values(1);

1 row created.

chajol@tax.experior>select * from blah;

        ID STRING
---------- ----------------
         1 Hello World


chajol@tax.experior>

 

Improved sqlplus parameter handling..

Jmv, October 21, 2005 - 5:01 pm UTC

For instance, if &1, &2, et al. are not defined on the command line, is there a way to have a default automatically assigned? Similar to the ${var:=value} functionality in ksh.
Or to see the number of arguments from the command line.
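
There is a classic workaround using COLUMN ... NEW_VALUE to define &1 when it was not supplied, so NVL can apply a default (a sketch; the SCOTT default is hypothetical):

```sql
-- a no-rows query whose column is aliased "1" defines &1 as empty
-- when it is undefined, without prompting the user for it
set termout off
column 1 new_value 1
select null "1" from dual where 1 = 2;
set termout on

-- now &1 is safe to reference; supply a default via NVL
select nvl('&1', 'SCOTT') as schema_to_use from dual;
```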

Sorry, missed that

Andrew Max, October 22, 2005 - 10:59 am UTC

Aah... my bad! Completely forgot about that 7-byte SQLT_DAT datatype. 
Indeed, client applications generally do not bind this datatype.

> applications generally don't *bind* dates - they bind strings which hold characters ...

... you meant something like this, I believe?

SQL> create table dates (x date);

Table created.

SQL> variable some_string varchar2(48)
SQL>
SQL> exec :some_string := '22.10.05'

PL/SQL procedure successfully completed.

SQL> insert into dates values (:some_string);

1 row created.

Well, I do not code clients and am involved mostly in server-side logic programming (SQL, PL/SQL, a bit of OCI). But I realize that if someone has decided to bind strings containing dates, he (or she) must be aware of this:

SQL> alter session set NLS_DATE_FORMAT = 'DD/MON/YYYY';

Session altered.

SQL> insert into dates values (:some_string);
insert into dates values (:some_string)
                           *
ERROR at line 1:
ORA-01843: not a valid month


SQL>  exec :some_string := '22/OCT/2005'

PL/SQL procedure successfully completed.

SQL> insert into dates values (:some_string);

1 row created.

Anyway -- agree with you, DATE for the SQL*Plus VARIABLE command makes no real sense because most other client apps never use SQLT_DAT. Sorry for that silly question. ;)

Thanks and best regards,
Andrew. 

Some extra about statistics

Ofir Manor, October 22, 2005 - 11:12 am UTC

There are many great suggestions here.
Some ideas about statistics:
1. I'd like to be able to do ALTER TABLE xxx DEFAULT STATISTICS METHOD FOR ALL INDEXED COLUMNS...
It would allow us to use gather_schema_stats and easily customize the way it analyzes a few selected tables that need some special care. Also, it might be useful to be able to customize the default estimate percent (or default to compute), etc.
2. I wish that when specifying parallelism for gather_schema_stats, each parallel slave would analyze a different table or index. So, if my schema has 200 tables and 800 indexes and I run with a degree of 4, I'll have 4 processes that each analyze a different object (starting with the biggest and the unanalyzed ones, maybe, so they finish more or less at the same time). Again, otherwise we might need to work with our own scripts to create that parallelism.
Well, 1+2 mean: find out why people don't use gather_schema_stats (or gather_database_stats) on many production systems and use custom scripts instead, then make the built-in way easy and flexible enough that people will use it out of the box (using db control..).
3. I'd like the optimizer to use a new kind of statistics - COLUMN CORRELATION. For example, when a query has table t (1 million rows) as the driving table and the two predicates that filter rows are WHERE t.a = :x and t.b = :y, and both A and B are NOT NULL with 200 distinct values, how many rows will return from this part of the query? Oracle will guess 25 (a million divided by 200, twice). It is a good guess if the columns are not correlated, but if they are strongly correlated, a better guess might be a few thousand! So the rest of the plan might be really off. I'd like to be able to ask the database to collect correlation statistics on selected columns (or maybe AUTO - let it decide) and use them when appropriate.
In many cases, when tuning a nasty SQL, the DBA can quickly find where the cardinality estimate goes totally wrong, but can't fix it - so let us (SQL profiles can help, but are not good for "families" of similar SQLs and are not "pre-emptive").

For Andrew from Michigan, USA

Clemens, October 24, 2005 - 5:54 am UTC

Quote:
2) Don't treat empty strings as null (a way to disable this, big design
mistake).
- length of empty string = 0
- length of null string = 0
-- what is the difference between empty string and null string? If length of
empty string > 0, then it is not empty, thus not null.

=> The difference between "empty string = string length=0 entered" and "null string = no data at all entered" may be small, but it's there.

Quote:
3) No implicit type conversions (a way to disable this).
- good coding practices generally dictate explicit type conversions, but to
eliminate it would bust lots and lots of apps. Besides, just because your
coders will not follow your standards ( you have a standard requiring explicit
type conversions, right), does not mean you should use the database to enforce
it.

=> Why not? Could be a database/session option to keep everyone happy.

Quote:
4) Up-to-date size limits for varchar2 (currently 4000 bytes) and identifiers
(currently 30 characters).
- Longer varchar2 is fine with me, but identifiers at 30 is fine too. This
enhancement is usually requested by people who like to mash object names
together to make even longer names. If it were to increase to 60 or 128 or??
all that would happen is that there would be complaints that from people that
they could not mash their 120 char names together to make 600 char names. Give
us a break, someone has to type in those long names every time their code
references them

=> So what? Using your argument, why not allow only an 8-character identifier limit? On the other hand, why is there no tab completion for object names yet? And what about concatenated names in foreign keys, etc. that have to be given artificial names because of that 30-character limit?

Quote:
5) Autoincrement/identity columns (implicit sequences to avoid trigger code).
- Actually seperate sequences are more flexible. Take for instance a bi-directional replication system. We set the sequence to start at 1 on site a and to start at 2 on site b, now we increment by 2 on each site. Voila, no data collisions. Managing that with autoincrement would mean the code (DDL at least) would have to be different on each database.

=> Take a look at Postgres, it's all there: sequences connected to ID columns. Saves some trigger coding (and maintenance) and gives you the best of both worlds.
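
For comparison, the trigger coding being referred to looks roughly like this in Oracle today (a sketch; T and T_SEQ are hypothetical names):

```sql
create sequence t_seq;

create table t ( id number primary key, val varchar2(30) );

-- fill in the key only when the caller did not supply one
create or replace trigger t_bi
before insert on t
for each row
when (new.id is null)
begin
    select t_seq.nextval into :new.id from dual;
end;
/
```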

Quote:
6) Mixed case object names (for better readability) without having to quote,
combined with case insensitive DML.
- You can put mixed case object names in your apps all you want, oracle does not care. DML is case insensitive. update emp set salary = 10; will work just
fine. So will UPDATE EMP SET SALARY = 10; and so will Update Emp Set Salary =
20;. As for dictionary contnt readability, learn to use underscores -- They
take the palce of spaces in object names
AndMakeReadabilityMuchEasierThanMixedCase.

=> Doesn't tackle the whole problem, where DESC and other commands will return the information in Uppercase only. Underscores are nice, but underscores + mixed case are even nicer.

Just my €.02
C.

Is raptor "still sqlplus..."

Stewart W. Bryson, October 24, 2005 - 4:05 pm UTC

"Actually, many of those things I would change are being addressed - a better
sqlplus that is still sqlplus (coming, code name "raptor", previewed at OOW)..."

What I've read about Raptor is that it is more akin to jDeveloper than sqlplus. I don't want another Toad... I want a more powerful sqlplus, and anything that I have to use a JDBC connection with just doesn't seem like sqlplus to me.



Tom Kyte
October 25, 2005 - 1:19 am UTC

why does jdbc make it "not seem like sqlplus"?????



Deferred constraints

Toon Koppelaars, October 25, 2005 - 10:05 am UTC

"What I don't like about Oracle", here is another one:

cod>create table test(i number check (i>0) initially deferred);

Table created.

cod>insert into test values(0);

1 row created.

cod>commit;
commit
*
ERROR at line 1:
ORA-02091: transaction rolled back
ORA-02290: check constraint (CBO_WS.SYS_C006049) violated


cod>select * from test;

no rows selected

I would like for Oracle to NOT rollback the transaction. Just have the commit fail with the 2290 error.
I know ansi/sql standard describes the 2091 error: give us a switch then, to prevent the rollback.


Tom Kyte
October 26, 2005 - 7:24 am UTC

all you need to do is:

SQL> set constraints all immediate;


that'll verify the constraints - and if one fails - no rollback happens. You can fix it and then commit when it finally succeeds.


Commit must end the transaction one way or the other. 
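
The flow looks like this (a sketch, continuing Toon's TEST table from above):

```sql
insert into test values (0);

-- check the deferred constraint now, before committing;
-- this fails with ORA-02290 but the transaction survives
set constraints all immediate;

-- fix the offending row and try again
update test set i = 1 where i = 0;
set constraints all immediate;
commit;
```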

How can one have a commit fail

Jim, October 25, 2005 - 11:53 am UTC

Not sure how one could have a commit fail and not issue the rollback. You issue commit and you get an error, now what? You can't commit, it failed, just leave the transaction unfinished?

Toon Koppelaars, October 25, 2005 - 12:49 pm UTC

Yes, and then issue a correcting:
update test set i=1 where i=0;
and then commit successfully.


Tom Kyte
October 26, 2005 - 8:49 am UTC

set constraints.....

Is raptor "still sqlplus..."

Stewart W. Bryson, October 25, 2005 - 2:36 pm UTC

It seems that Raptor is going to be more like Toad... a very GUI tool much like the tools one would use to administer SQL-Server.

What I want is a more powerful sqlplus... one that is still very command-line oriented, but makes some of the more painful things easier: statement recall and review, formatting output, spooling to spreadsheet-like output, etc. And it would be fine with me if there were a GUI client built around it... as long as at its very heart it is still sqlplus.

Perhaps my comparing it to Toad is unfair... but that's what the screenshots looked like. And I know that in Toad, all sqlplus commands are ignored. Is the JDBC connection that Raptor uses going to allow commands such as "SHUTDOWN IMMEDIATE", "RECOVER DATABASE" or "ALTER SYSTEM KILL SESSION..."?

Your new book is excellent... by the way.





Not a review

Huy, October 25, 2005 - 4:46 pm UTC

_Maximum length for identifiers is too low.
_Case-insensitive passwords.

my5cent

Edgar, October 26, 2005 - 9:08 am UTC

1.
On most sites, range partitioning is used on date type columns to maintain "windows of historical data".
Then, to automate partition maintenance, people write some code to create partitions for new months and truncate/drop partitions of "obsolete" data.
Each site has its own implementation of what is really a common task.
2.
Replication using fast refresh - no built-in functionality which preserves rows in target table from deletes after truncate/drop partition on master table.
3.
Physical standby with read-only mode windows for reporting:
no possibility to use temporary tables.
4.
Express Server killed by Oracle - it was out-of-the-box SOLUTION for datamarts.
Now replaced by analytical sql, m-views - just a bricks for developers.
5.
JDBC - unable to get metaData on PreparedStatement before execution
6.
SQL optimizer hints - it is sometimes hard to get them to work, because syntax checking of hints is not implemented and the reasons why they don't work are not reported.
7.
The alert log file has an inconvenient structure - hard to process with scripts, to figure out all the lines which correspond to the same "bad event", and the dates of "bad events" (compare with the Windows Event Log, for ideas).

ORA-12053

Kim, October 26, 2005 - 9:21 am UTC

- Not the most descriptive error message.

Background (10g R1):
We are trying to figure out *why* it is not allowed to
use nested join MV and fast refresh, but only nested
aggregated MV.
It works when we use group by for all columns, so why
not without ???

ALERT.LOG structure

Andrew Max, October 26, 2005 - 9:30 am UTC

Hi Edgar,

> "Alert log file has inconvenient structure - hard to process with scripts, figure out all lines witch corresponds to the same "bad event", dates of "bad events" (compare with Windows EventLog, for ideas)..."

Have you read this, for example?
</code> http://asktom.oracle.com/~tkyte/alert_arch.html <code>

Tom demonstrated there how to make use of SQL and external tables to mine data from ALERT.LOG file.
Similar smart examples can be found in his new book, "Expert Oracle Database Architecture".

Best regards,
Andrew.

Reply to Edgar

Stewart W. Bryson, October 26, 2005 - 9:57 am UTC

"Replication using fast refresh - no built-in functionality which preserves rows
in target table from deletes after truncate/drop partition on master table."

Materialized views are just that... instantiated views of a source object or objects. They cannot contain more data than the source... it's just not what they are designed for. They are no longer Oracle's preferred replication platform... they are more of a data warehousing/data segmentation tool now.

For replication needs such as those mentioned above, use Streams. Streams allows you to define what transactions you would like to forward. Only want updates and inserts but no deletes? Fine... configure it that way. You do or do not want DDL to be forwarded? It can do either one. Want replication for an entire database or entire schema (instead of having to configure MV's for every single table)? Oracle's got you covered.

varchar2 max length

Alex, October 27, 2005 - 1:35 pm UTC

Wish to remove these annoying length restrictions on VARCHAR2 - 4000 in SQL and 32767 in PL/SQL. They are very very inconvenient.

Of course, we might use CLOB, but due to its significant performance impact, using CLOBs seems to be overkill for our 50-100K strings.

So, having even 4Mb length varchar2 would be REALLY helpful.
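For what it's worth, CLOBs can already be assigned and concatenated like strings in PL/SQL, well past the VARCHAR2 limits (a quick sketch; whether the performance is acceptable for 50-100K strings still needs measuring in your environment):

```sql
declare
  l_doc clob := rpad('x', 32767, 'x');  -- start at the PL/SQL varchar2 ceiling
begin
  -- a CLOB grows freely beyond 32767 bytes, unlike a PL/SQL varchar2
  l_doc := l_doc || l_doc;
  dbms_output.put_line('length = ' || dbms_lob.getlength(l_doc));
end;
/
```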

A reader, October 28, 2005 - 3:57 pm UTC

Tom,

Now that this thread has "steadied down" can you pick your top 5 or top 10 ?

Tom Kyte
October 29, 2005 - 10:48 am UTC

I will do that - this week or next weekend..... Yes.



One thing i don't like in SQL

Michel Cadot, October 29, 2005 - 1:39 pm UTC

Hi,

I already posted this question in forum and newsgroup but never received an answer.
Why in the following query, Oracle is not able to resolve a.dummy?

SQL> select ( select dummy
 2           from ( select dummy
 3                  from dual b
 4                  where b.dummy = a.dummy
 5                )
 6         ) dummy
 7  from dual a
 8  /
               where b.dummy = a.dummy
                               *
ERROR at line 4:
ORA-00904: "A"."DUMMY": invalid identifier

The purpose of this question is we often have to optimize the following kind of query:

select ..., 
       (select ... from tabB B1
        where B1.col1 = A.col1
          and B1.col2 = (select max(B2.col2) from taB B2
                         where B2.col1 = A.col1))
from tabA A
where ...

The classical way to limit to one access to tabB is to use an analytic function like:

select ..., 
       (select ... 
        from (select ..., row_number() over (order by col2 desc) rn
              from tabB B
              where B.col1 = A.col1)
         where rn = 1)
from tabA A
where ...

but as we add 1 level in subqueries, A.col1 becomes out of scope.
Why is this the case? Is there any theoretical SQL limit on this? Or is it just an Oracle optimizer limit?

Regards
Michel
 

Tom Kyte
October 29, 2005 - 2:05 pm UTC

correlation names go "one level", no more, no less.

let us say b2.col2 is a DATE:

select ....,
substr(
(select MAX( to_char(b2.col2,'yyyymmddhh24miss')|| '..your dotdotdot..' )
from tabB b1
where b1.col1 = a.col1), 15 )
from tabA a
where .....



similar for number, varchar.....


Michel Cadot, October 30, 2005 - 6:13 am UTC

Thanks Tom, that's what i finally did but this is not really pretty and especially when the "dotdotdot" is complex (say multiple columns of different types and so).

Do you know the reason of this limit of "one level"?
I hoped there would be no limit, something like variables scope in PL/SQL or C...

Regards
Michel

Tom Kyte
October 31, 2005 - 2:39 am UTC

I do not know the reasoning "why", only that "it is" and how to work around it.

another USD$.02

Andrew, October 31, 2005 - 11:08 am UTC

I think it would be really useful if the dba_tables view broke out chain_cnt so that chain_cnt reflected the number of rows that require more than one block to store, and a new column, migrate_cnt, reflected the number of rows moved because there was insufficient free space in the block to allow the row to grow.

Linux Installation Headaches

Basil, October 31, 2005 - 12:08 pm UTC

The headache of installing Oracle under Linux ranks up there as one of the biggest misfeatures of Oracle, at least as far as installing 9.2.0.4 under RHEL 3 (a supported platform) is concerned. Procedure appears to be: Get software, unpack software, apply a patch, start installer, watch it fail, remember to set THREADS_FLAG to native, and run the installer again. Of course, Google and Metalink must be consulted on top of the install document at each step.

I can't figure out why the patch isn't already applied to the product when I get it, nor why I need Metalink access to fix something that prevents the product from installing in the first place, nor why the runInstaller script can't check the environment and set THREADS_FLAG appropriately before running the Java installer. It's enough to make me take the Windows performance penalty just to be free of the installation hassle. My most recent Oracle install led me to see why MySQL is so popular: rpm -i and the software is there.

Tom Kyte
November 01, 2005 - 4:57 am UTC

XE, rpm -i

automatic length in constant varchar2 in PL/SQL

Russell H, November 01, 2005 - 4:39 pm UTC

Something like

declare
some_text constant varchar2() := 'Hello World!';
...

would be nice. PL/SQL could automatically set the length.

What don't you like about Oracle?

Nestor Torres, November 01, 2005 - 5:29 pm UTC

What I don't like about Oracle is that it sells Oracle e-Business Suite 11i for Windows and then you subsequently find that in order for it to work (install), you need to have the MKS Toolkit, which is $500.00 software. Good job! What is a student who wants to learn the Oracle e-Business Suite 11i supposed to do? You buy the trial version and then Oracle expects you to have the MKS Toolkit. They should at least provide a workaround for those of us who can't invest in the MKS Toolkit. I don't think this is a good way to secure a loyal following. Here I am with a worthless set of DVDs. That is what I don't like about Oracle.

Tom Kyte
November 02, 2005 - 5:01 am UTC

well, I cannot imagine a student downloading and installing the e-business suite and making any sense out of it personally.

Suggestion: excluded tables on "imp"

David Weigel, November 02, 2005 - 11:40 am UTC

We sell an application with Oracle, and once in a while we can't figure out problems without the customer bundling up the database and sending it to us. Their database schemas are very similar, not necessarily identical, but one thing I can count on is that there are two tables (say A and B) that make up 60% of the size and are rarely useful in solving the problem.

When loading the database, I have to do "IMP SHOW=Y" and cull out all but the two tables and build a "IMP TABLES=(N1,N2,N3,N4...N200)". What I'd like to be able to do is "IMP EXCLUDE=(A,B)".

Tom Kyte
November 03, 2005 - 5:19 am UTC

data pump has infinitely more sophisticated filters.

You can give them a "fine grained access control" policy of "where 1=0", ask them to put that in place - do the export (zero rows come out) and then drop the policy (i've done that in the past)
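The "where 1=0" policy trick could look roughly like this (the schema, table and policy names are made up for illustration):

```sql
-- Hypothetical sketch of hiding a table's rows during export via FGAC.
create or replace function no_rows_pred (
  p_schema in varchar2,
  p_object in varchar2
) return varchar2
as
begin
  return '1=0';   -- predicate appended to every query: no rows visible
end;
/

begin
  dbms_rls.add_policy(
    object_schema   => 'APP',
    object_name     => 'BIG_TABLE_A',
    policy_name     => 'HIDE_FOR_EXPORT',
    function_schema => 'APP',
    policy_function => 'NO_ROWS_PRED' );
end;
/

-- run the export here: BIG_TABLE_A comes out with zero rows...

begin
  dbms_rls.drop_policy( 'APP', 'BIG_TABLE_A', 'HIDE_FOR_EXPORT' );
end;
/
```

With Data Pump the same goal can often be met more directly with a filter on the expdp command line, e.g. EXCLUDE=TABLE:"IN ('A','B')".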

Snipped Sessions are a pain

Bill Beilstein, November 02, 2005 - 11:41 am UTC

A major PITA for every DBA I have ever talked to (outside of Oracle) is that when a session is killed by the DBA or by a resource limit, you have to run a cron job to kill the OS process to release the connection. The requirement that the user get one more swipe at the apple before it disconnects is just silly!!! What happens to the user who went on vacation for a week? While I know that this behavior is near and dear to the people who work for Oracle, provide a database parameter so that the rest of the world can chop the session when it is killed. Something like:

KILL_SESSION_IMMEDIATELY = TRUE

Tom Kyte
November 03, 2005 - 6:20 am UTC

well, it is not entirely silly - the client application will generate an ora-3113, end of file on communication channel. A fairly obscure and "scary" message.

But I hear you.

datatype text

A reader, November 02, 2005 - 3:18 pm UTC

There is one thing I have always liked when using other databases: the text datatype. With Oracle I have to use a "normalization table"/rows table to contain it, which is more work.

Why did Oracle choose not to include a text datatype? Obviously you are fully capable of doing it, so there must be a good reason behind the decision not to.

Thanks,
Marcus

Tom Kyte
November 03, 2005 - 6:35 am UTC

clob?

What do you think about this?

D., November 03, 2005 - 11:33 am UTC

"Oracle's password system is putting its users in danger
...it seems that the passwords of Oracle DB users can easily be uncovered because of a weak defense mechanism, putting in danger the corporate secrets of the companies using this software. The latest critical analysis of the problem clearly showed deficiencies in the technologies the company uses. Researchers uncovered a password within a few minutes. Joshua Right from the SANS Institute and Carlos Sida from Royal Holloway College (London), who explored this problem, sharply criticized the issue at an IT-security conference held in Los Angeles. According to them, the main problem is the weak hashing mechanism and the uppercase transformation, which narrows the number of possible combinations. They also say that Oracle was notified about this issue in July, but there has been no official reaction..."

Sorry for the poor quality of the translation and possibly inaccurate name transliteration - I translated this article from another language. Also, I can't show the origin of the article because it was lost during e-mailing.

What I want to ask is this: is there some truth in this article, or is it just old evil Microsoft trying to lay its dirty hands on Oracle's share of the market?


Tom Kyte
November 04, 2005 - 2:38 am UTC

</code> http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:258815248980#50574772893798 <code>

has my comments on that....

would you send me your /etc/shadow file? If not, I won't send you my dba_users table either. That is the bottom line.

Ok

D., November 04, 2005 - 5:15 am UTC

This really makes sense - the first thing to protect is the hashed passwords, so nobody can "brute-force" them.
Well, but now I'm interested in another question:
when I'm logging in... with SQL*Plus, for example, what is sent over the network - plain text, a hash of the password, or is the whole thing somehow encrypted? Can some evil guy steal the password or its hash this way, or am I missing something?..

Tom Kyte
November 04, 2005 - 8:47 am UTC

passwords are sent encrypted (since 7.0).

however, beware of commands like "alter user x identified by y" - as sql is sent cleartext unless you have encrypted the network traffic (ASO)

David Aldridge, November 04, 2005 - 11:33 am UTC

How about the "password" syntax? I'm guessing that it encrypts the passwords?

SQL> password applications
Changing password for applications
New password:
Retype new password:
Password changed 

Tom Kyte
November 04, 2005 - 5:28 pm UTC

set

TRACE_DIRECTORY_CLIENT=/tmp
TRACE_level_client=16


in your sqlnet.ora (in unix .sqlnet.ora if you like in your home directory) and you'll see that you cannot see the password.



donna, November 07, 2005 - 5:30 am UTC

I'd like to get rid of jInitiator. Why flog all these web services capabilities and yet still need to install a client application?!

Schema Level Permissions

Bill, November 07, 2005 - 3:45 pm UTC

I have always wanted the ability to grant permissions at the schema level. For example if you want user AAA to be able to select user BBB's tables, you have to grant select on every table in BBB's schema. Wouldn't it be nice if you could simply say

grant select to AAA;
or
grant select on schema {ALL|TABLES|VIEWS|SEQUENCES} to AAA;
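Until such a schema-level grant exists, one common workaround is to generate the per-table grants dynamically (a sketch; the AAA/BBB schema names are from the example above):

```sql
begin
  for t in ( select table_name
               from dba_tables
              where owner = 'BBB' )
  loop
    execute immediate
      'grant select on BBB."' || t.table_name || '" to AAA';
  end loop;
end;
/
```

Tables created in BBB later are not covered automatically; the block has to be re-run (or scheduled) after DDL.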

Hints

andrew boocock, November 10, 2005 - 5:47 am UTC

Hi Tom,
I guess this is the best topic for my thoughts on hints.

I got a call yesterday from another company site; the DBA had issues with a slow-running piece of 3rd-party-supplied code. The code was hinted. I advised removal of the hints as the first step, the plan changed to something more appropriate, and the runtime came down from 4+ hours to 8 minutes.

I'd imagine we've all seen this situation before, and for a basic SQL script this was easily changeable. However, I've also been in the situation where compiled code has been provided with embedded hints. There are ways and means to change the behaviour without recompiling; the one I remember was changing the degree of parallelism on the base table to override a RULE hint, which led to an inlist iterator being used rather than a concatenation.

Drum roll,

How about Oracle providing an initialisation parameter IGNORE_HINTS=TRUE/FALSE?

Thinking about how Oracle goes about parsing code, hints are identified and the directive followed. If the Oracle binary could check the new parameter and ignore the hints, we would have another tool to help us out, particularly with compiled code.

Clearly, on the occasions when we do need a hint to get us around some bad design or whatever, the parameter could be set via alter session set IGNORE_HINTS=FALSE or whatever.

I'm assuming this would be quite simple to accomplish - basically another if statement. If doable, providing a backport patch would help many of us still running 9i or indeed 8i.

Regards

andrew

Tom Kyte
November 11, 2005 - 11:42 am UTC

(shhh, there is but it is not documented in the current releases)

You know what I'm going to ask next

andrew boocock, November 12, 2005 - 4:22 am UTC

That's good to know. I assume at best this will be an
underscore parameter.

I guess you can't tell me so maybe I'll have to trawl through all the _underscore params looking for something with *hint* or *optimizer* in. Unless you can whisper it !

BTW is the documentation going to include this anytime soon?

Thanks

Andrew

Tom Kyte
November 12, 2005 - 10:58 am UTC

run a 10053 trace and you'll see them all (the optimizer related parameters) in the trace file.

undocumented things are undocumented, don't expect them to become "documented" anytime soon. If and only if they decide to make this a "feature" would it become documented.

comparing nulls

A reader, November 13, 2005 - 1:41 am UTC

It would be nice to have an equality operator that treated two null values as equal to each other. E.g. I'd like to be able to write something like

where col1 == col2

instead of

where nvl( col1, 'x' ) = nvl( col2, 'x' )

where "x" is some character that I pray never actually appears in my column data.


Tom Kyte
November 13, 2005 - 10:32 am UTC

how about:

decode( col1, col2, 'equal, even if null', 'not equal, even if null' )

where decode( col1, col2, 1, 0 ) = 1

for example....
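To see the decode trick in action, a trivial demo against dual:

```sql
-- "=" never treats NULL as equal to NULL: the predicate is not true,
-- so no row survives the WHERE clause
select count(*) as cnt from dual where null = null;   -- cnt = 0

-- decode, by contrast, matches two NULLs:
select decode(null, null, 'equal', 'not equal') as r from dual;   -- 'equal'
```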

My 2 cents

Mike Wilson, November 13, 2005 - 5:42 pm UTC

I've been working with Oracle for 10+ years now (yes, I know I'm still a newbie) but if I had to pick the single greatest thing I don't like about Oracle it would be:
*) Command-line history in SQL*Plus using up arrow on Unix platforms.

Yes, I know you can download a third-party patch to get this to work on Linux, but why, oh why, hasn't this "feature" ever just worked on install? I can't count the number of times I've hit the up arrow in SQL*Plus and then sworn quietly, repeatedly under my breath at a client site. It would have saved me literally man-weeks over the past 10 years.

Will Raptor include command-line up-arrow history out of the box?

Defaulting to vi rather than ed would have also been a nice touch once we made it to the 21st century but at least you can configure this with a startup script ...

Tom Kyte
November 13, 2005 - 5:44 pm UTC

raptor is more gui than command line - but with a full history.

(i agree with you, rlwrap has been "life changing" :)

To andrew

oraboy, November 14, 2005 - 12:16 pm UTC

Well, you know, when experts like Tom say not to use undocumented parameters, I suppose you realize the meaning and good intention behind it.

And here comes the whisper... (just thought I could save you some search time, as you sound very determined to find that secret parameter).

I hope you don't use it in Prod... but for testing in pre-prod, or at session level to verify where the CBO's default plan is superior to the one hinted by developers (90% of the time)... you can try this:

SQL>explain plan for 
  2   select /*+ no_index(t test_t_idx) */ * from test_t t where owner='SYS';

Explained.

SQL>select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------

--------------------------------------------------------------------
| Id  | Operation            |  Name       | Rows  | Bytes | Cost  |
--------------------------------------------------------------------
|   0 | SELECT STATEMENT     |             |   327 |  6867 |     2 |
|   1 |  TABLE ACCESS FULL   | TEST_T      |   327 |  6867 |     2 |
--------------------------------------------------------------------

SQL>alter session set events '10143 trace name context forever'; 

SQL>explain plan for 
  2    select /*+ no_index(t test_t_idx) */ * from test_t t where owner='SYS';

Explained.

SQL>select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------

---------------------------------------------------------------------------
| Id  | Operation                   |  Name       | Rows  | Bytes | Cost  |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |             |       |       |       |
|   1 |  TABLE ACCESS BY INDEX ROWID| TEST_T      |       |       |       |
|   2 |   INDEX RANGE SCAN          | TEST_T_IDX  |       |       |       |
---------------------------------------------------------------------------

SQL>alter session set events '10143 trace name context off';

Session altered.

SQL>explain plan for 
  2     select /*+ no_index(t test_t_idx) */ * from test_t t where owner='SYS';

Explained.

SQL>select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------

--------------------------------------------------------------------
| Id  | Operation            |  Name       | Rows  | Bytes | Cost  |
--------------------------------------------------------------------
|   0 | SELECT STATEMENT     |             |   327 |  6867 |     2 |
|   1 |  TABLE ACCESS FULL   | TEST_T      |   327 |  6867 |     2 |
--------------------------------------------------------------------

Again, please be informed that Tom never advocates using undocumented parameters, and that's the strong foundation I have learnt from him. But if you need to demo how bad the application is (if developers have hinted every SQL in it), by all means you can use this as your quickest way in test or dev.

hope that helps

and I really hope Tom doesn't mind me posting this (but hey, it's not a secret per se - I found it in Metalink :) )

Tom Kyte
November 14, 2005 - 2:15 pm UTC

as long as everyone knows "not supported, use at your own risk, caveat emptor, get support permission to use in production, etc etc etc"



Thing I "don't like"

Andrew Max, November 15, 2005 - 12:57 pm UTC

Hi Tom,

I'd like to point out one more "what-you-don’t-like" thing here. 
It’s about ORA-30005 error. Please consider:

SQL> create table Employee as
  2  select 1 EmpNo, 'John Smith' EmpName from dual
  3  union all
  4  select 2, 'Peter Pen' from dual;

Table created.

SQL> select * from Employee
  2    where EmpNo = 1
  3    FOR UPDATE WAIT 5;

     EMPNO EMPNAME
---------- ----------
         1 John Smith

So far so good. We just asked Oracle: "hey, give me that row, and if it's already locked then please wait, but no longer than 5 seconds".

However, *why can we not* do it this way???

SQL> variable Wait_Time number
SQL>
SQL> exec :Wait_Time := 5

PL/SQL procedure successfully completed.

SQL> select * from Employee
  2    where EmpNo = 1
  3    FOR UPDATE WAIT :WAIT_TIME;
  FOR UPDATE WAIT :WAIT_TIME
                  *
ERROR at line 3:
ORA-30005: missing or invalid WAIT interval

In other words, we cannot use bind variables to specify the wait timeout -- only an integer constant. While it may seem obvious, this is where I got a bit confused. I believe that allowing not only constants but also bind variables here would be a nice feature.

Suppose, for example, we want to implement a procedure:

SQL> create procedure UpdateEmp (p_EmpNo in number, p_Wait in number) is
  2     cursor Cur is
  3     select null from Employee where EmpNo = p_EmpNo for update wait p_Wait;
  4  begin
  5     open Cur;
  6     -- Row locked -- do some updates ...
  7     close Cur;
  8  end;
  9  /

Warning: Procedure created with compilation errors.

SQL> show errors
Errors for PROCEDURE UPDATEEMP:

LINE/COL ERROR
-------- -----------------------------------------------------------------
3/4      PL/SQL: SQL Statement ignored
3/68     PL/SQL: ORA-30005: missing or invalid WAIT interval

Of course, it failed for obvious reasons. But if it were implemented, it would be possible for end-users to call this procedure passing different wait intervals to it.
As for me, it makes sense: some "impatient" user would call it like this:

exec UpdateEmp (1, 1)

while the other (say, more "patient") would be able to issue:

exec UpdateEmp (1, 10).

Furthermore, as far as I understand, implementing this feature would not affect much of anything (I mean, the query parsing strategy -- there's no need to change anything radically). But maybe I'm just wrong?

So what do you think about this?

Thanks and best regards,
Andrew. 

to Andrew Max

Oraboy, November 15, 2005 - 3:12 pm UTC

create or replace procedure updateemp(n number) as
  row_locked exception;
  pragma exception_init(row_locked, -54);  -- ORA-00054: resource busy, NOWAIT specified
  attempts number := 0;
begin
  loop
    declare
      cursor cur_emp is select ... for update nowait;
    begin
      open cur_emp;
      -- lock acquired: your other application code goes here
      close cur_emp;
      exit;
    exception
      when row_locked then
        attempts := attempts + 1;
    end;
    exit when attempts > n;
  end loop;
end updateemp;

-- if you really want to wait 'n' seconds (and not just make n attempts), simply add a dbms_lock.sleep(1) inside the loop to sleep one second between attempts

PL/SQL gives all the power you want





Include ALM?

Oraboy, November 15, 2005 - 4:02 pm UTC

This is something that just came to my mind... a sort of 'nice to have' feature.

9 out of 10 developers don't understand concurrency issues, and things that work fine in isolation obviously get blown away when exposed to many concurrent users. (And yes, they simply blame it on Oracle and move away - but that is a different story.)

Now, just imagine: some application is already in production... getting beaten to death... no one has a clue. I am thinking, if there were an option to capture everything happening in production so that it could be rerun in a non-prod environment, that would be great (to test any system after changes).

What I mean is, say we experience problems around 10 am every morning. If there were some way to record all user activity from, say, 8 am to 11 am and transfer it to another database (like dbms_stats and a stat table), we could simply rerun this activity (load) to check whether the modified code makes things any better.

I am not sure whether this sounds more like LoadRunner or a volume-testing tool (which I don't have any experience with), but I am looking for something within the database (maybe room for a new acronym: ALM -> Automatic Load Management, or ALLM -> Automatic Load Management Monitor).



Tom Kyte
November 16, 2005 - 8:22 am UTC

that is load runner - I'm not sure if we'd be going into that area anytime soon.

Neat idea though.

Oraboy, thanks but...

Andrew Max, November 15, 2005 - 4:19 pm UTC

Hi Oraboy,

And thanks a lot for your comments.
But there's a small but very crisp difference between the FOR UPDATE WAIT approach and the one you posted above.

As far as I know, SELECT ... FOR UPDATE WAIT *enqueues* the session's request to lock the row(s). This means that if some row of interest is locked, and session #1 issued its request at some point before session #2, it's guaranteed that session #1 will be able to lock that row when it's freed, while session #2 continues to wait.

Consider:

Blocker> select EmpNo from Employee
2 where EmpNo = 1 for update;

EMPNO
----------
1

Session #1> select to_char(sysdate, 'HH24:MI:SS') cur_time from dual;

CUR_TIME
--------
23:49:03

Session #1> select EmpNo from Employee
2 where EmpNo = 1 for update wait 60;
-- Session #1 began to wait...

Session #2> select to_char(sysdate, 'HH24:MI:SS') cur_time from dual;

CUR_TIME
--------
23:49:18

Session #2> select EmpNo from Employee
2 where EmpNo = 1 for update wait 60;
-- Session #2 began to wait...

Blocker> commit;

Commit complete.

Session #1> ...

EMPNO
----------
1

Session #2> ... continue to wait, and in less than minute:
select EmpNo from Employee
*
ERROR at line 1:
ORA-30006: resource busy; acquire with WAIT timeout expired

If we were using "retry counter" logic with LOOP, NOWAIT and dbms_lock.sleep, the ordering would be unpredictable and quite *random*. Suppose some session began to sleep in dbms_lock.sleep and, 0.001 sec later, the row of interest was unlocked. Then our sleeping session may have no luck: some third session can "steal" the row immediately.

Is this important? Well, it depends...
And I believe sometimes it’s important.

Thanks again for your comments.

Cheers,
Andrew.

create or replace table

Baker, November 16, 2005 - 10:46 am UTC

I would like to see the keywords "or replace" added to the create table syntax, instead of using the keyword "truncate".

There are times where I use a drop table in a script to wipe out temp tables that I don't want hanging around the database, followed by a create table. Of course, I end up getting the "table does not exist" error. If the syntax were changed to "create or replace table", then I could eliminate the drop table and thus remove the error message.



Tom Kyte
November 16, 2005 - 6:02 pm UTC

begin
execute immediate 'drop table t';
exception
when others then null;
end;
/


is one method
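A slightly stricter variant of the same idea swallows only ORA-00942 ("table or view does not exist"), so any other error still surfaces:

```sql
begin
  execute immediate 'drop table t';
exception
  when others then
    -- ignore only "table or view does not exist"
    if sqlcode != -942 then
      raise;
    end if;
end;
/
```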

To Andrew Max

Oraboy, November 16, 2005 - 11:11 am UTC

Andrew - 
Again, what you are trying to do (serialize) is already available; it's just a matter of how to write the code.

Since you said "I believe sometimes it's important", I am putting up this reply (I don't mean to showcase my PL/SQL knowledge).

SQL>desc test_t
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------
 C1                                                 NUMBER
 C2                                                 TIMESTAMP(6)

SQL>ed
Wrote file afiedt.buf

  1  Create or replace procedure Update_test_t( p_c1 in number, p_willing_maxwait in number ) as
  2   cur_c1 sys_refcursor;
  3   lv_c1 number;
  4  begin
  5    open cur_c1 for 'select * from test_t where c1='||p_c1 ||' for update wait '||p_willing_maxwait;
  6    loop
  7     exit when cur_c1%notfound;
  8     fetch cur_c1 into lv_c1;
  9     update test_t set c2=systimestamp where c1=lv_c1;
 10    end loop;
 11  exception
 12    when others then
 13     dbms_output.put_line('Error '||SQLERRM(SQLCODE));
 14* end;
 15  /

Procedure created.

----
se1 ==> Session1
se2 ==> Session2


T1:
----
Se1>exec Update_test_t(2,2);

PL/SQL procedure successfully completed.

se2>set time on
10:41:30 se2>exec Update_test_t(2,2); -- waited 2 seconds..no luck
Error ORA-30006: resource busy; acquire with WAIT timeout expired

PL/SQL procedure successfully completed.

10:41:38 se2>
10:41:49 se2>exec Update_test_t(2,5); -- waited 5 seconds..no luck
Error ORA-30006: resource busy; acquire with WAIT timeout expired

PL/SQL procedure successfully completed.

10:42:22 se2>exec Update_test_t(2,25);
***WAITING ***

 -- flip to Session1 and release the lock
 Se1>commit;
 Commit complete.
 -- swictch to session2 , its done
PL/SQL procedure successfully completed.

checking the update
====================
SQL>select * from test_t;

        C1 C2
---------- -----------------------------
         1
         2 16-NOV-05 10.42.28.018745 AM <-- session 2's time
         3
         4
         5
         6
         7
         8

8 rows selected.

hope that answers your need..
 
That's probably the last answer from me, as I feel this is moving away from the intended topic (things I don't like, or would like to have, in Oracle) and I don't want to earn the wrath of the other Oracle gurus here.

Sorry Tom, for overstepping in your thread.



 

Tom Kyte
November 16, 2005 - 6:09 pm UTC

Andrew is correct about the enqueue waits though.

Think about it like this....


You get into a line, a queue. You are blocked (waiting for the chance to buy tickets).

If you get out of line every three seconds, you may NEVER be able to buy the ticket (because once you get out of line, you lose your place, you go to the end of the queue).

If you stay in line - you don't lose your place.

The last reply to Oraboy...

Andrew Max, November 16, 2005 - 12:53 pm UTC

Hi Oraboy,

Thanks for your response but I still have a problem with it.

> open cur_c1 for 'select * from test_t where c1='|| p_c1 ||' for update wait' || p_willing_maxwait;

Huh?... Not sure I would use this. If such code is "rarely" executed -- well, maybe... But re-executing it with many different p_c1 and p_willing_maxwait values would force Oracle to hard-parse each brand new statement again and again, littering the shared pool and killing scalability.

"Parse as little as possible" -- this is Tom’s mantra, and I agree 100% with it.

Consider:

SQL> select /* TAG */ * from Emp where EmpNo = 1
  2   for update wait 1;

EMPNO EMPNAME
----- ----------
    1 John Smith

SQL> select /* TAG */ * from Emp where EmpNo = 1
  2   for update wait 2;

EMPNO EMPNAME
----- ----------
    1 John Smith

SQL> select /* TAG */ * from Emp where EmpNo = 1
  2   for update wait 3;

EMPNO EMPNAME
----- ----------
    1 John Smith

SQL> column SQL_TEXT format a65
SQL> set numwidth 4
SQL>
SQL> select s.SQL_TEXT, s.CHILD_NUMBER CH
  2   from v$sql s
  3  where s.SQL_TEXT like 'select /* TAG */%';

SQL_TEXT                                                            CH
----------------------------------------------------------------- ----
select /* TAG */ * from Emp where EmpNo = 1  for update wait 3       0
select /* TAG */ * from Emp where EmpNo = 1  for update wait 2       0
select /* TAG */ * from Emp where EmpNo = 1  for update wait 1       0

Just look at that: three different SQL areas, each parsed and optimized separately -- say goodbye to scalability and performance. Bind variables really rock, and I would prefer to parse a query once, use binds, and re-execute it many times rather than resorting to dynamic SQL.

But unfortunately we can not use binds to specify WAIT timeout ...

Thank you again, and agree with you: our discussion is surely off-topic and I'd better stop here ... ;)

Cheers,
Andrew.

P.S. Tom, sorry for spamming this thread. 

LIKE operator

RAVI, November 16, 2005 - 5:56 pm UTC

Hi Tom,
I am not sure whether my request fits here or not; I apologize if I have posted in the wrong place.
Below are two SQL statements. In the first I am using the LIKE operator and passing an absolute value, but I want to pass that value using a variable whose value changes. In that case, how would I substitute the %? If I put it in the variable, it loses its special meaning.
In the second SQL statement I did it using the substr function, but it is a bit slow. Is there any other way to do it? Can you please suggest a workaround?

SQL> select count(*) from patient_enrollment where 
group_id like '6257%';

  COUNT(*)
----------
   1164589
Elapsed: 00:00:04.22

SQL> var client_id varchar2(15);
SQL> exec :client_id := 6257;

PL/SQL procedure successfully completed.
SQL> select count(*) from patient_enrollment where 
substr(group_id,0,length(:client_id)) = :client_id;

  COUNT(*)
----------
   1164589

Elapsed: 00:00:33.19

 

Tom Kyte
November 16, 2005 - 6:29 pm UTC

but in this case how would I substitute %? If I have it
in the variable, it loses its special meaning.

I don't know what that means?



but it seems you want: where group_id like :client_id||'%';
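And if the bound value could itself contain a literal % or _, the ESCAPE clause keeps those characters from acting as wildcards - a sketch along those lines (table, column and bind names taken from the example above):

```sql
-- escape any % or _ inside the bind, then append the real wildcard
select count(*)
  from patient_enrollment
 where group_id like replace( replace( :client_id, '_', '\_' ),
                              '%', '\%' ) || '%' escape '\';
```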



To Ravi

oraboy, November 16, 2005 - 6:04 pm UTC

Will this work?

SQL> select count(*) from patient_enrollment where 
group_id like :client_id||'%' ;
 

that works

Ravi, November 16, 2005 - 10:04 pm UTC

Thank you Tom, and thank you oraboy. Sorry for confusing you, but that's what I was looking for.

Thank you,
Ravi

May be.....

Muhammad Ibrahim, November 16, 2005 - 11:32 pm UTC

1) nice to have some of the useful scripts/functions/code provided by you and others inside some Oracle packages/APIs like dbms_utility, so that they would be well tested performance-wise, everyone in the world would use the same functionality and of course they would be easy to use!

eg: scripts like:
owa_sylk
mail_pkg
spell_number / spell out numbers to words
who_called_me and who_am_i
whoami
tkprof
runstats etc...

2) include leap_year in the date functions
eg: leap_year(year), like next_day()

3) more exception handling
eg: no_data_found for implicit cursors.

4) utl_file could support storing files on the client side too.

5) greatest/least as SQL functions like max/min.

6) some more mathematical/scientific functions like sqrt etc. as SQL functions.


Regards,
Ibrahim.

Tom Kyte
November 17, 2005 - 8:03 am UTC

1) many of them are (the who functions for example... tkprof, mail_pkg is utl_mail)

3) ? no_data_found is there for implicit cursors

4) that would be called a "virus". plsql is running in the *server*, only the client application itself can touch the client filesystem.

5) ? they are?? not sure what you mean.

6) ? it is???

sqlplus

A reader, November 17, 2005 - 7:25 am UTC

Hi Tom, I think you should improve SQL*Plus.
Options:
1. Fuse SQL*Plus with Perl
2. A Perl command line that includes the SQL*Plus command line, or at least the more important commands

I'm not thinking of a Windows program; the command line is still nice. But it needs more commands and options.

to Muhammad Ibrahim

Peter, November 17, 2005 - 8:05 am UTC

You can get square roots like this!

SQL> select exp(ln(144)/2) from dual;

EXP(LN(144)/2)
--------------
            12

 

Tom Kyte
November 17, 2005 - 8:22 am UTC

ops$tkyte@ORA10GR2> select sqrt( 5 ) from dual;

   SQRT(5)
----------
2.23606798


it is already (and has been) a builtin. 

Sqrt using LN and EXP

Bob B, November 17, 2005 - 8:16 am UTC

Try getting the sqrt of 0 with that ;-)

Tom Kyte
November 17, 2005 - 8:23 am UTC

ops$tkyte@ORA10GR2> select sqrt(0) from dual;

   SQRT(0)
----------
         0
 

Dave, November 17, 2005 - 8:38 am UTC

It was the other comment he said couldn't be done

SQL> select exp(ln(0)/2) from dual;
select exp(ln(0)/2) from dual
              *
ERROR at line 1:
ORA-01428: argument '0' is out of range

 

Tom Kyte
November 17, 2005 - 8:48 am UTC

I understand, I'm just following up with the right way to get the sqrt.

Alberto Dell'Era, November 17, 2005 - 8:55 am UTC

The power of IEEE floating-point arithmetic "infinity":

SQL> select exp(ln( cast(0 as binary_double) )/2) from dual;

EXP(LN(CAST(0ASBINARY_DOUBLE))/2)
---------------------------------
                                0 

nice

A reader, November 17, 2005 - 10:08 am UTC

however
select exp(ln( cast(-1 as complex_number) )/2) from dual;
still does not work

Muhammad Ibrahim, November 17, 2005 - 10:11 am UTC

3) no_data_found for implicit cursors:
I know, and I am following your trick, like

l_found := false;
for r_something in ( select ... )
loop
    l_found := true;
end loop;
if not l_found then
    -- nothing found, do whatever...
end if;

Instead of this, exception handling for when no_data_found is raised by the implicit cursor would be nice, just like the exception handling for normal PL/SQL blocks.

5) sorry, I meant it the other way round: we cannot use min/max in PL/SQL code -- or is it possible? So normally I use greatest/least in PL/SQL code.

6) sqrt -- I know it is there; I meant that, like this function, there could be some other useful mathematical functions, perhaps more useful on the business side for commercial applications. Maybe some ready-made formulas! Tom, this is just a thought :-)

Tom Kyte
November 18, 2005 - 7:23 am UTC

3) that doesn't make sense to me - it is not an error or anything to have zero rows. ;)


5) min/max are aggregates - greatest/least are comparable equivalents for scalars...

6) sorry, I was looking for concrete examples - there are a wealth of math functions in the database - from analytics with linear regression to the new matrix packages in 10gr2 and so on...
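To illustrate the aggregate-versus-scalar distinction from point 5 - a minimal sketch:

```sql
-- MAX is an aggregate: it operates on a set of rows, inside SQL
select max(sal) from emp;

-- GREATEST compares scalar values, and works fine inside PL/SQL too
declare
    l_big number;
begin
    l_big := greatest( 10, 42, 7 );   -- 42
    dbms_output.put_line( l_big );
end;
/
```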

Oracle load testing

canuck, November 17, 2005 - 11:52 am UTC

This may be of interest to the reader who was looking for a load testing tool (free, open source): http://hammerora.sourceforge.net/about.htm

"Hammerora is used to parse Oracle trace files and convert them into Oratcl programs that can then be replayed against the Oracle database with multiple users.

The tool came from the desire to simulate a number of users against the database as an aid to learning. Thereby a small home-based learning/developing system could simulate more closely a production environment. It has grown into a tool that can load test an Oracle database from the smallest development system through to large RAC based clusters.

TCL has been used as it provides high performance OCI interaction with an Oracle database without the inconvenience of needing to recompile generated programs. Each generated program has strict TCL/Oratcl compliance and can therefore be run independently of Hammerora at the command line TCL Shell. The power and flexibility of TCL means that the potential functionality is unlimited. The TCL Threads package is used to establish multiple connections against a Oracle 8i,9i or 10g Database."

Alex, November 17, 2005 - 1:08 pm UTC

Whoo-hoo we're up over 300 posts now. What do I win for submitting the record breaking question ;)

What do you win ...

A reader, November 18, 2005 - 8:39 am UTC

A cookie!!! ... for your machine (boooo)

Alex, November 18, 2005 - 5:29 pm UTC

I'm just glad I didn't get banned for causing Tom all kinds of extra work....

import show=y

Andrew, November 25, 2005 - 12:17 pm UTC

Tom,

If import's show option could indicate one DDL per line, that would make it much more usable - either one per line or some other tag that would make parsing it deterministic. I know dbms_metadata extracts DDL, but export/import has the DDLs in the correct order for recreation. I've taken out the quotes and tried adding the lines together, but it takes a full scrub of the file each time.

export from rman backups

Gianluca, December 04, 2005 - 7:13 pm UTC

a nice feature I found on DB2 for z/OS: the ability to unload (export, in Oracle terminology) data from a COPY (an RMAN backup, in Oracle terminology).

On z/OS, we use it for these useful things (consider that we keep 2 months of daily full backups over GDG files on tapes):
- we avoid daily exports (saving time and disks/tapes), performing them "on request", usually for moving data from production to test or similar requests from developers
- we can run an export from the past (yes, you could save exports as well as RMAN backups, I know, but that means extra time and space)
- we can export on another system (e.g. a test machine), avoiding CPU and I/O load, using the RMAN files
- we can have consistent exports even with lots of activity during the run (a full backup always gives you a consistent image thanks to archived logs)
- you can restore the content of a single table in a multi-table tablespace from RMAN backups without restoring the entire tablespace in a commodity instance (tools like BMC do that for you, but you still need a complete tablespace recovery in a dedicated instance)


Is the new MetaLink built using HTMLDB?

A reader, December 19, 2005 - 10:32 am UTC

It looks familiar. It is also slow, but hopefully they will work the kinks out.

Oracle stress test,

sns, December 19, 2005 - 7:15 pm UTC

Tom,

I am using this thread for my general question.

Tom,

We have a situation that the application with web server and Oracle 9i database needs a stress test. The database is about 10 GB (not a big one), but the application is pretty complex (like logging time for employees and so on).

The application along with the database is delivered to customer.

From our end we need to simulate that 500 users are logging into the database , run the reports, do DML activities on certain tables (assuming all the 500 could do DML activities) and so on.

I need your suggestion on how to go about this. If you have already done something like this could you please share your ideas (in detail if possible)?

Thanks

Tom Kyte
December 20, 2005 - 8:25 am UTC

... I am using this thread for my general question. ...


why? why would you do that - especially after reading:

Please limit this to a comment or question relevant to the other text on this page.

In order to permit NEW questions on this site, I am taking the firm stance that anything deemed a 'new question' asked here will be ignored.

Please think about what you are going to put into here and unless it is very relevant to the existing thread -- a true FOLLOWUP or REVIEW or CLARIFICATION, please do not post it.

This will allow me to accept new questions (I took 2,000 less in 2004 than 2003, but only because I followed up to 3,000 'reviews' that were actually new questions). This is by popular demand -- we need 'fresh blood' in there.


???


there are hundreds of stress test tools, google will likely be hugely helpful.

http://www.google.com/search?q=oracle+stress+test+tool&start=0&ie=utf-8&oe=utf-8&client=firefox-a&rls=org.mozilla:en-US:official


got it,

sns, December 20, 2005 - 4:51 pm UTC

Tom,

Actually I searched the site for "stress tool" and couldn't find one, and had to force my question into this thread.

I will make sure this does not happen again.

Regards,

Followup

Michael Friedman, December 23, 2005 - 11:25 pm UTC

1. Lack of proxy capability within the DB - we should be able to log onto a non-privileged account, authenticate ourselves, and then have our connection switched to a selected privileged account.

TK: That is not within the database. We have that ability in the database, sqlplus doesn't "do it"

Well, it's not just SQL*Plus - this should be core PL/SQL functionality. There is an undocumented system privilege, "become user". It's used by IMP/EXP. I would like a similar capability.

--

2. Merge statement should take multiple actions just like Insert

TK: it does - when matched, when not matched. you can insert and update and delete from a table ???

You should be able to merge into multiple tables at the same time just like with Insert.

--

3. The current move away from PL/SQL and transactional interfaces to many components needs to be reversed. For example, there is no way to make use of CDM SDK direct from PL/SQL while maintaining transaction atomicity, etc.

TK: that seems to be a contradictory statement? You say "don't go components" but then say "we need interface to components"? (frankly, I don't know what the CDM SDK is..)

Sorry... that's CMSDK.

I don't care how you build things as long as I get transactional PL/SQL APIs to them. How can I get a document in Oracle Forms and then save it to CM SDK in the same transaction as I save the other form data? I can't... Oracle seems to be turning its back on its core strength - an amazingly functional and reliable transactional platform.

--

Here's a fourth complaint...

Why the heck is there a BITAND operator but no BITOR, BITXOR, or BITNOT?

I'm using UTL_RAW now to replace what should be core DB functionality.



Tom Kyte
December 24, 2005 - 9:04 am UTC

1) you have it, it is called n-tier proxy authentication - that is your interface to it. That is how you and I can perform this exact feature. It does and has existed since 8i of the database.

2) that would be entirely ambiguous - I disagree very much with that idea. If I merge into "t1 and t2" using "t3" and I find a row in t2 that matches - but not in t1 - should I insert it? If not, why; if so, why? And if a row matches in both t1 and t2 - should both be allowed to be updated, and so on?

and if I don't have a match in either t1 or t2 - into which table should the row go? and why?

Whereas it makes sense to be able to take a "set" and insert any given row into any number of tables (we control completely what table(s) the given row goes into) - it does not make sense to me to merge into multiple tables.

We can use multiple tables as the "input" to build the set - but merging into multiple tables, for me, raises more questions than answers.
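A sketch of the n-tier proxy authentication mentioned in point 1 (the account names here are hypothetical): the middle tier authenticates as its own account and then opens sessions on behalf of the end user through OCI or JDBC.

```sql
-- allow end user SCOTT to be proxied through the APP_SERVER account
alter user scott grant connect through app_server;

-- and to take it away again
alter user scott revoke connect through app_server;
```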



Raptor

Joe, December 29, 2005 - 3:16 am UTC

Raptor is now available for download.

http://www.oracle.com/technology/products/database/project_raptor/index.html

Cheers!


Hmmm...

Dave, January 06, 2006 - 12:20 pm UTC

Preamble- For the most part Oracle rocks. It has kept me gainfully employed for a number of years. Glad I didn't bet on Ingres in 1996...

My "What don't you like about Oracle?" list...

1. <rant> Silly bugs in released products. I understand the esoteric critters, but there were a number of releases and patches that went out profoundly QC-challenged. I remember a flawed 8.x patch that ruined two days of my life and was not revoked until two weeks after my TAR fingered it.
2. No courier font in the metalink forum posts.

What I'd like to see...
1. A raptor or sqlplus utility to show side by side
a. Expected return set for each explain plan stage (already there) vs the actual rowcounts , timings, I/Os for each nesting/hash stage (event 10046 without manually processing traces)
b. If the CBO is "surprised" by a cardinality allow it to learn and update by the actual results or additional stat gathering.
c. Feedback if the query's plan would have changed and allow a rerun of the new plan.

2. An EM utility showing horizontal bars representing the start/stop intervals for every active and inactive SQL statement (for span "X", subject to a reasonable number). The x-axis would be time and the y-axis the hash value. This concurrency visualization can be quite useful but is a pain to do manually.

Let me know if I need to rtdm...

Regards,
-Dave


Tom Kyte
January 06, 2006 - 2:24 pm UTC

what I'd like....

1.b - is there, 10g SQL Profiles
1.c - not sure what you meant?



Forgot one

Dave, January 06, 2006 - 1:32 pm UTC

Tom,
Forgot one.

The ability to specify and use a RAM-only landing location for a source temp set using Oracle's "WITH temp AS" SQL structure. This would help with that class of problems where you want to use a large SQL source set a number of times/ways (where inline views and global temp tables might be awkward) and you want to avoid re-pulling the direct reads for every UNION set underneath.

There may be an existing elegant solution I'm unaware of. I'm sure you'll let me know...

Regards,
Dave

multi-merge

Michael Friedman, January 06, 2006 - 7:22 pm UTC

TK: Whereas it makes sense to be able to take a "set" and insert any given row into
any number of tables (we control completely what table(s) the given row goes
into) - it does not make sense to me to merge into multiple tables.

Why not? Why can't we use the same methods we use to control which tables the insert rows go to to control which tables the merge rows go to? You could even use almost the same syntax.



Tom Kyte
January 07, 2006 - 9:30 am UTC

how would that be any more efficient than "multiple merges"?

Think about the activity that would take place - you would just be saying "hey, if x=5, merge into T1, if x=6 merge into T2, if x=7 merge into T3" - meaning you need three separate joins in the first place - nothing at all would be gained (beyond enormously complex syntax) by doing this in a single DML statement.


Rename an Oracle Directory

Michael, January 15, 2006 - 10:42 am UTC

It would be nice if we could rename an Oracle directory, or alter it to point to a different location. I know the rename command does not support this, but I can't imagine why. I cannot find any reference to a command to rename, or even alter, a directory. We are on 9.2.0.5 on Solaris. Logged in as sys:

SQL> create directory pooper as '/u02/ftpdata/davox/pooper';

Directory created.

SQL> rename directory pooper to popper;
rename directory pooper to popper
                 *
ERROR at line 1:
ORA-00946: missing TO keyword


SQL> rename pooper to popper;
rename pooper to popper
*
ERROR at line 1:
ORA-04043: object POOPER does not exist

etc.

alternatively, it would be okay to do the following:

alter directory accts as '/f06/ftpdata/davox/accts'; assuming you had already created a unix "accts" directory in that location on the server. (the current directory location is /u02).

We have about 20 directories on our development server that we want to relocate to a new file system to better mimic production. (We did not have the ability to do this when the database was originally set up). Seems right now I would have to drop and recreate all of the directories, redo grants, etc. 

If we had any references in any code to the old path (it would be unusual), we would be more than happy to replace that code with the production version, which would almost always work right out of the box because the development database directories and external tables would be named (and located) just like in production.
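For what it's worth, CREATE OR REPLACE DIRECTORY can repoint an existing directory object at a new path without dropping it - and, as far as I know, the existing grants survive the REPLACE - a sketch using the path above:

```sql
-- repoint the existing ACCTS directory object at the new file system;
-- grants on the directory object are preserved
create or replace directory accts as '/f06/ftpdata/davox/accts';
```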



 

Merge with Conditional Insert

Koshal, January 19, 2006 - 9:22 am UTC

It is great that we have the merge statement in Oracle.
One of the things I would like to have is the ability to filter the records. Normally, out of the 100 records that I process, 10 records I would like to update,
89 records I would like to insert, and one record I would like to
ignore/put in an error log table.

MERGE INTO t1
USING t2
on (t1.col1=t2.col1)
WHEN MATCHED THEN
UPDATE
SET t1.col2 =t2.col2 /* where t1.col>10 */
WHEN NOT MATCHED THEN
/* case when t1.col<20 */
INSERT (col1,col2,col3)
values (t2.col1,t2.col2,t2.col3) /*where t1.col<20 */

/* else
insert into err_log(err_message) values('invalid values '||t1.col) end ; */

Something like this is more common in business application.
In this way the entire routine can be accomplished by a single SQL Statement.
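For what it's worth, 10g added exactly this kind of filtering to MERGE (conditional update and insert, plus an optional DELETE clause), and 10gR2's DML error logging covers the error-table case - a sketch using the tables above (the error table would first be created with dbms_errlog.create_error_log):

```sql
merge into t1
using t2
on (t1.col1 = t2.col1)
when matched then
    update set t1.col2 = t2.col2
     where t2.col2 > 10                     -- 10g: conditional update
when not matched then
    insert (col1, col2, col3)
    values (t2.col1, t2.col2, t2.col3)
     where t2.col2 < 20                     -- 10g: conditional insert
log errors into err$_t1 reject limit unlimited;   -- 10gR2: rejected rows land here
```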

Merge

koshal, January 19, 2006 - 10:06 am UTC

1. It would be nice if we had merge with a RETURNING clause, and insert (within merge) with a select clause/where clause.
2. It would be nice if we could query a PL/SQL table the same way we query a table/temporary table.
3. It would be better if we had isnumber()/isnull() functions rather than using translate etc. or PL/SQL functions. This is especially useful when you are migrating from legacy systems to Oracle, wherever data cleansing is required.
4. Ability to automatically analyze (mark as needing analysis) if the table size has grown by X%.
5. A better v$session_longops, at least in the case of querying a single table. It needs to show the estimated time for the whole query, not a part.
6. Ability to have a default value for a column referring to another column. In the example below, in 99% of cases I would insert a value for y, but wherever I don't, it should take the default value x-2:
create table t1(x number, y number default x-2)

Thanks
Koshal
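For item 3, a common workaround until such builtins exist is a tiny PL/SQL function that simply attempts the conversion - a sketch:

```sql
create or replace function is_number( p_str in varchar2 ) return varchar2
as
    l_num number;
begin
    l_num := to_number( p_str );   -- raises VALUE_ERROR on non-numeric input
    return 'Y';
exception
    when value_error then
        return 'N';
end;
/
```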


Alex, January 20, 2006 - 11:18 am UTC

This shouldn't compile:

SQL> declare
  2     type rc is ref cursor;
  3     ref_cur rc;
  4     v_dummy number;
  5  begin
  6     open ref_cur for
  7       select 1
  8       into v_dummy
  9       from dual;
 10  end;
 11  /

PL/SQL procedure successfully completed.

And this will run successfully up until 10g, then it will throw a run-time error. 

Problem With Metalink

A reader, January 28, 2006 - 1:33 pm UTC

Tom, Sorry to post the question here.
I am unable to login to Metalink from the time it moved to HTMLDB.
Any idea or solution. I cannot report this problem to Oracle as Metalink itself is not available.

Any browser setting issue ??

Tom Kyte
January 29, 2006 - 8:07 am UTC

metalink is, has been, and will be available - please try again, it has been working from the very time it was ported over (and given that you pay for metalink, I cannot imagine you would be paying for something you cannot access - doesn't make sense??)

That, and I have no idea what you are referring to here as far as a problem goes... no idea what problem you might be having.

Use of multi-merge

Mike Friedman, January 29, 2006 - 11:33 am UTC

I asked for multi-merge functionality just like the existing multi-insert.

Tom responded:

----
how would that be any more efficient than "multiple merges"

Think about the activity that would take place - you would just be saying "hey, if x=5, merge into T1, if x=6 merge into T2 if x=7 merge into T3" - meaning you need three separate joins in the first place - nothing at all would be gained (beyond enourmously complex syntax) by doing this in a singl DML statement.
---

Same benefit as with multi-insert:

1. You are guaranteed that the result from the driving query remains the same for each merge
2. You do not need to run the driving query three times - an important issue if it is slow
3. By only specifying the driving query once instead of three times, you reduce the opportunity for errors and produce clearer and more transparent code


Can i use hints in the query clause of export?

ravinder matte, February 07, 2006 - 6:49 pm UTC

Thanks

Tom Kyte
February 08, 2006 - 1:46 am UTC

nope, you provide a where clause, hints don't go there (well, ok, if you have a subquery in the where, you could hint that query block)

Oracle front-end?

Ashutosh, February 08, 2006 - 12:40 pm UTC

After working on Oracle from versions 8 through 10g (and on Forms 3.0, incidentally) I believe I can make this statement: Oracle lacks a solid front-end for its database. Even with Forms 3.0, I remember that the extremely tiresome and intimidating key-sequences discouraged a new user from adopting it as a standard front-end, but I guess that is history now. This lack of a sound front-end from Oracle Corp. has spawned numerous third-party vendors offering "easy-to-use" front-end software for Oracle databases. This usually translates into an additional investment for the user - in addition to what he/she paid for an Oracle license. I wish the Oracle database came bundled with an all-encompassing front-end application, for all platforms, that would totally obviate the need for third-party software. Something that encompasses the functionality of PL/SQL Developer & TOAD, two of the most popular "front-ends" around. How many third-party vendors are there who provide "easy-to-use" front-end software for MS SQL Server? Hardly any, to my knowledge. Can people at Oracle take a cue from this fact?

Tom Kyte
February 09, 2006 - 4:17 am UTC

Have you ever looked at htmldb? It's been around for a while.

Alter table unmigrate

A reader, February 22, 2006 - 1:43 pm UTC

A utility to un-migrate migrated rows. The utility does not make indexes unusable, as "alter table move" does. The utility shares the following characteristics with online segment shrink:

Segment shrink is an online, in-place operation.
DML operations and queries can be issued during the data movement phase of segment shrink.
Indexes are maintained during the shrink operation and remain usable after the operation is complete.

Unmigrating rows is an online, in-place operation.
DML operations and queries can be issued during the operation.
Indexes are maintained during the operation and remain usable after the operation is complete.

Syntax:
ALTER TABLE employees SHRINK SPACE;
ALTER TABLE employees UNMIGRATE ROWS;

New features in next release

Anil Pant, March 08, 2006 - 6:24 am UTC

Tom,
Maybe you have been asked these questions many times, but please take some time to answer my queries.

(1) Every release of Oracle has some new features. I was wondering what you think the new features in the next release will be?

(2) Say after 10 years, or after 2-3 releases from now, what new features will be available? Agreed, some of the core functionality will remain intact, but other than that will there be any significant change? I am asking from a developer's and DBA's perspective.

(3) Every release eases the work of the DBA (I'm talking about LMTs, PCTFREE and PCTUSED). How do you see that in future releases?

(4) 10 years back there were DBMS packages like dBase, FoxPro and Clipper which were used to solve business needs (I am referring to small and medium size companies). Since these were not able to handle the amount of data and had other performance issues, RDBMSs have taken over from them. I do agree there are many companies which still run on these DBMSs, but the number has come down.
Do you see something of this sort happening with RDBMSs? By the way, please don't feel bad.

Thanks


Tom Kyte
March 09, 2006 - 12:17 pm UTC

1) no comment right now, bit early.


2) see #1 :)

3) more of it happening - as we add more stuff for you to do, we had better make the mundane tasks more automated.

4) no, not really. I see them becoming more important - for the same reasons you listed that made the others go away.

My single largest beef

Asher, March 09, 2006 - 3:51 pm UTC

Is in direct relation to PL/SQL's error handling. Quite honestly... when compared to most of the newer languages that have come out, Oracle's error handling capabilities are left lacking.

In order to generate a named exception with a text message you have to: A: declare the exception, B: use a pragma to assign a number to the exception, and C: then use a separate procedure - in which you cannot directly reference your named error - to get the number out of it. Quite honestly and simply, I'd love to be able to write 'My_Custom_Exception Exception(-20000)' and cut out the pragma exception_init, and then be able to do a raise My_Custom_Exception, 'Useful application error' (or raise_application_error(My_Custom_Exception, 'Useful application error')).

It is quite honestly one of my largest annoyances with PL/SQL. I don't want to have to memorize all of my error codes, or write database tables & functions to store them all away. I'd also like access to more than a mere 1000 different error numbers for raise_application_error, as in a large installation going through 1000 error codes would be exceedingly easy.

My final annoyance is the lack of ability to programmatically access the call stack. It may seem like a fairly simple thing, but on at least one occasion (the one I've been doing most of my work on over the past couple of months) it would have been useful to be able to read it so it could be displayed. I'm writing procedures which are accessed through the OAS, and if one of those procedures has a problem it can sometimes be a pain in the ass to diagnose.

As a result I've resorted to committing the ultimate evil (at least in your eyes, not that I can really fault them) and I've put a when others clause on many of my procs which does an htp.p with the error code and message, because it's the best I can manage to do.
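For reference, a sketch of the two-step declaration being complained about, plus the call-stack access that does exist - DBMS_UTILITY.FORMAT_CALL_STACK returns the current PL/SQL call stack as a string:

```sql
declare
    my_custom_exception exception;
    pragma exception_init( my_custom_exception, -20000 );  -- the two-step dance
begin
    raise_application_error( -20000, 'Useful application error' );
exception
    when my_custom_exception then
        dbms_output.put_line( sqlerrm );
        -- the call stack is programmatically accessible:
        dbms_output.put_line( dbms_utility.format_call_stack );
end;
/
```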

Alberto Dell'Era, March 09, 2006 - 4:52 pm UTC

desiderata: having an all_ind_modifications view that records modifications on indexes (the same thing that all_tab_modifications does for MONITORING tables).

I just have this huge DWH table that gets MERGEd, 99% updates and 1% inserts - I would be happy to skip the analyze of the immutable PK (and of other indexes that aren't defined on the updated columns and so aren't "stale") most of the time.

Another example with GLOBAL indexes - when I drop a partition using UPDATE GLOBAL INDEXES, maybe 1% of the global index gets updated - I'd like to analyze them only when needed.

Check whether a particular query ran in parallel or in serial

Anto, March 27, 2006 - 12:16 pm UTC

Hi,

Some column in v$sql (or in any other dictionary view) to check whether any past query (select) against the database ran in parallel or not - or is this already available in some other dictionary view?

thanks
Anto

Tom Kyte
March 27, 2006 - 3:16 pm UTC

v$sql_plan would be the place to peek - what plans are in there that represent parallel plans.
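A sketch of such a peek - the OTHER_TAG column in v$sql_plan flags parallel steps (the exact tag values should be verified against your release):

```sql
-- statements whose cached plan contains parallel (PX) steps
select distinct p.hash_value, p.address
  from v$sql_plan p
 where p.other_tag like '%PARALLEL%';   -- e.g. PARALLEL_TO_SERIAL,
                                        -- PARALLEL_COMBINED_WITH_PARENT
```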

A reader, March 28, 2006 - 10:15 am UTC

Hi,

Suppose a query that would have run in parallel under normal circumstances ran in serial because parallel_max_servers hit its limit.

Will this be reflected in v$sql_plan? I guess not, since that just shows the plan which would be used, not the actual one used.

thx
Anto

Tom Kyte
March 28, 2006 - 4:10 pm UTC

v$sql_plan shows the actual plan, always the real actual plan - but it would still be a "parallel", it just would have used less execution servers.

A reader, March 28, 2006 - 10:18 am UTC

Does V$SQL_PLAN contain the plans of all the SQLs since the instance restart, or is it like V$SQL, which contains only the latest SQLs from the past few hours or so?

thx
Anto

Tom Kyte
March 28, 2006 - 4:10 pm UTC

things can age out of there, definitely.

it will not have the plans of EVERY query ever executed in that instance since startup in general.

A reader, March 29, 2006 - 6:04 pm UTC

Hi,

Let me rephrase the question in another way :)

Suppose

SELECT * FROM V$sysstat a where a.NAME like '%Parallel operations downgraded%'

shows there are 1300 operations which were downgraded to serial('Parallel operations downgraded to serial') due to lack of parallel processes or whatever reason it may be

Is there any way we can find out which were these 1300 operations ?

thanks
Anto



Tom Kyte
March 29, 2006 - 7:38 pm UTC

no, because the v$ views are instance wide, not statement by statement executed.

A reader, March 29, 2006 - 6:19 pm UTC

>> v$sql_plan shows the actual plan, always the real actual plan

thanks for this info, but I remember you saying in some other thread that columns like TEMP_SPACE in this V$SQL_PLAN contains an estimate of the temp. tablespace usage anticipated, not the actual temp. space used by the query - is that right ?

Tom Kyte
March 29, 2006 - 7:38 pm UTC

that is correct, but that is the query plan that was used.

limit on parameters

satya, March 30, 2006 - 11:15 am UTC

Is there a limit on the number of parameters that can be passed into a procedure? We are using 8i and 9i in prod.

Tom Kyte
March 31, 2006 - 11:27 am UTC

not that I've ever hit - and I tested over 1,000 at one point (I would never actually DO THAT - I used 1,000 as an unreasonable upper bound)

Broken link above

3360, March 30, 2006 - 1:53 pm UTC

http://forums.oracle.com/forums/thread.jspa?messageID=996304

Should have & # 996304 on the end with no spaces, but it keeps getting chopped off. This is a bit better

select * except col1 from t;

Jack, April 03, 2006 - 7:40 pm UTC

Wouldn't it be nice if we could select all but a few columns from a table by listing only the ones we don't want? For example:

create view restricted_emp_vw
as
select * except salary
from emp_tb
;

Or this:

create table dedupe_tb
as
select * except my_row_number
from (
select a.*,
row_number () over
(
partition by id
order by date_added desc
) my_row_number
from dupe_tb a
)
where my_row_number = 1
;
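Until such syntax exists, the select list can at least be generated from the dictionary rather than typed by hand - a sketch (table and column names from the example above):

```sql
-- generate the column list for everything except SALARY, then paste it
-- into the CREATE VIEW statement
select column_name
  from user_tab_columns
 where table_name = 'EMP_TB'
   and column_name <> 'SALARY'
 order by column_id;
```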



Easier unusable indexes

Stewart W. Bryson, April 24, 2006 - 2:25 pm UTC

Tom, tell me if this seems conceivable:

It would be great if I could issue a DDL statement against a partitioned table that would prepare all LOCAL indexes to immediately become unusable if a DML statement were going to touch their partition. Something like: "alter table set local indexes to unusable on dml;". That way, Oracle could just set all the indexes for a particular partition to unusable as soon as a DML statement started to work on that partition, but leave the other partitions of the local indexes usable. After the load was done, I could simply issue: "alter table rebuild local unusable indexes".

That would really change things for ETL developers. Not as big of a paradigm shift as the DML logging clause in 10gR2 or the MERGE in 9i... but still big.

"indirect index"

just nobody, April 25, 2006 - 1:32 pm UTC

If a table has a primary key then it would sometimes be beneficial to have a secondary index that does not use a rowid to point to the row, but holds the primary key value instead. Since the index would contain the primary key by definition, there would be no need for any "extra pointer to the row".

Just like Index organized tables have.

There are some cases where an actual reference to the table is not needed, only the information contained in the index.

Tom Kyte
April 25, 2006 - 2:30 pm UTC

we call that a concatenated index

create index on t( primary_key_col1, primary_key_col2, ...,
another_column, another_column2 );



followup to earlier response

Jeff, April 25, 2006 - 4:58 pm UTC

Tom, I have a question regarding one of your earlier responses to a comment that I thought sounded useful.

"6. Batch commits. When I should be able to do a mass update but ask Oracle to commit at a certain frequency only. I could do this by cursors; but a simple SQL will be very useful. Something very similar to the commit frequency setting in COPY command."

and you said

"6, I disagree, that would be excessively dangerous. How do you restart safely such an operation?"

Why is it dangerous?
This could be very useful for not blowing out all your rollback space when you're doing huge DML operations.

What if this 'feature' existed and we used it as such to update 50 million rows of a 100 million row table.

update emp set id=10 where status='X' commit every 10000;

If the update fails for some reason after updating just 100,000 rows, why couldn't we just restart the update later?

update emp set id=10 where status='X' and id != 10 commit every 10000;


Maybe I'm over-simplifying things, but I'm not seeing the danger.

Tom Kyte
April 25, 2006 - 11:57 pm UTC

because IN GENERAL updates could be:

update emp set sal = sal*1.1 where status = 'X';


Now what?

batch commits - not such a good idea

John, April 25, 2006 - 5:28 pm UTC

Jeff,

In your example, no problems, but what about this one:

update emp set sal=sal*1.04 where status = 'X' commit every 100000 rows

What happens when that crashes?

indirect index

just nobody, April 26, 2006 - 1:39 am UTC

Just trying to clarify my original idea:
In heap organized table your index entry would be

(column1, column2, primarykey_column1, primarykey_column2, rowid)

Now if (primarykey_column1, primarykey_column2) together form the primary key, they already define the row in use. There is no need for the rowid. Naturally it is faster to fetch the row from a heap table using the rowid, but there are situations where this kind of index is built to support queries and the actual row is never referenced.

IOT tables have this kind of index for their non-primary-key indexes. Could this be brought to heap-organized tables as well?

Tom Kyte
April 26, 2006 - 7:57 am UTC

I don't see this micro optimization that would not be used in most all general cases (an index that is never used to access the table) as being useful or implemented myself.

Yes, but they are not ALWAYS dangerous.

Jeff, April 26, 2006 - 9:39 am UTC

"because IN GENERAL updates could be:

update emp set sal = sal*1.1 where status = 'X';

Now what?"


The answer is simple. You don't use the 'commit every' option in that scenario.
If we're going to say NO because it COULD BE dangerous, then there are dozens of other things that we shouldn't have in the database because they could be dangerous. How about truncate? That could be really dangerous if used incorrectly.

I'm just saying that having the option available would be very useful, assuming you use it correctly.


Tom Kyte
April 27, 2006 - 4:06 am UTC

but if you permit it, people will do it. No, I don't see this as being a good idea personally.

more on batch commits...

Jeff, April 26, 2006 - 10:17 am UTC

Better yet, if you know you will need batch commits, just modify the table and add a 'flag' column.

update emp
set sal = sal*1.1, bulk_update = 'sal x 1.1'
where status = 'X'
and bulk_update != 'sal x 1.1'
commit every 10000;

If the statement fails, just re-run.


Tom Kyte
April 27, 2006 - 4:10 am UTC


begin
loop
update emp set sal = sal * 1.1, bulk_update = 'sal x 1.1'
where status = 'X'
and bulk_update <> 'sal x 1.1'
and rownum <= 10000;
exit when sql%rowcount = 0;
commit;
end loop;
end;




stateless packages

a writer, April 27, 2006 - 11:48 am UTC

When using a J2EE architecture there is usually no meaningful state in a database package. Each connection is taken from a connection pool, packages are called, SQL statements are executed, and then the connection is released back to the pool.

If one could define these packages as 'stateless' then there would not be (at least not so many of) these 'existing state of package has been discarded' error messages and failing calls.


Tom Kyte
April 27, 2006 - 3:27 pm UTC

Scary - I have no clue what you are referring to - our latest discussion has been about transactional integrity. What are you discussing?

Jeff: “update … commit every n“

Gabe, April 27, 2006 - 12:37 pm UTC

<quote>
Why is it dangerous?
This could be very useful for not blowing out all your rollback space when you're doing huge DML operations.
</quote>

Except for the rollback space concern (which can be mitigated without this new syntax) … what would be the other advantages?

Otherwise it seems to be just syntactic sugar.

And the existing transactional model is thoroughly impacted … you now have a DML statement which commits any pending changes for the existing/current transaction … you have a DML statement which in itself represents many transactions. Forget about restart … how do you rollback this entire statement?

So unless one changes the entire transactional system, the only option is to make this kind of "statement" something other than a "DML statement" … which would be a program … which you can do right now with a pl/sql procedure taking the update as a string and batch_size as a second parameter, dynamically executing the string||' and rownum <= '||batch_size, and committing until the job is finished. Document that it commits prior work, that it may commit part way through, that reversing the modifications is the end-user's responsibility; sql injection maybe, dynamic sql with hard parses, etc.

gabe@XE> create or replace procedure run_batch
2 ( sql_stmt varchar2
3 ,batch_size number
4 )
5 authid current_user
6 as
7 stmt varchar2(32000);
8 sfty number := 10000;
9 n number := 0;
10 begin
11 commit;
12 loop
13 execute immediate sql_stmt||' and rownum <= '||batch_size;
14 exit when sql%rowcount = 0;
15 commit;
16 n := n + 1;
17 exit when n > sfty;
18 end loop;
19 commit;
20 end;
21 /

Procedure created.

Elapsed: 00:00:00.06
gabe@XE> select status,count(*) from x group by status;

S COUNT(*)
- ----------
X 12064

Elapsed: 00:00:00.01
gabe@XE> exec run_batch('update x set sal=-sal, status=''A'' where status!=''A''',1000);

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.19
gabe@XE> select status,count(*) from x group by status;

S COUNT(*)
- ----------
A 12064

Elapsed: 00:00:00.01

So sure, in theory they could implement something like 'update … commit every n' in the database, but that will not be a SQL DML statement any more.

stateless packages

a writer, April 28, 2006 - 1:41 am UTC

I was trying to discuss things I don't like about Oracle.

I don't like the idea that package state is kept in situations where, from the application's point of view, there is no 'state'.
I would like to get rid of these 'existing state of package has been discarded' error messages (at least most of them).



Tom Kyte
April 28, 2006 - 2:12 am UTC

umm, but like I was saying


Oracle keeps the state of packages because the programmers who wrote them included "state".

The developer of the package created a package, wrote a package - that HAS A STATE.

If the developer did not use package global variables - then the package WOULD NOT HAVE A STATE.

If the developer codes a package that does use package global variables - the developer has explicitly told Oracle "maintain state"

Oracle does what it is told (usually). In this case, the developer asked us to maintain a state.

IF YOU DO NOT WANT a state, thou shall not use package global variables and no state will exist.

Still not getting your point, unless and until you write a package that says "hey, I've got a state - maintain it" - we won't. It is entirely up to you.
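
One related knob, for packages that need globals within a single call but no state across calls, is PRAGMA SERIALLY_REUSABLE. A sketch (the package name is made up): its state is reset after every server call, so the 'existing state has been discarded' error does not arise for such packages (note they cannot be referenced from SQL or triggers):

```sql
create or replace package stateless_pkg
is
    pragma serially_reusable;
    g_counter number := 0;   -- re-initialized on every server call
    procedure bump;
end;
/
create or replace package body stateless_pkg
is
    pragma serially_reusable;
    procedure bump
    is
    begin
        g_counter := g_counter + 1;
    end;
end;
/
```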

well aware of the cursor approach...

Jeff, May 01, 2006 - 1:51 pm UTC

no one said it couldn't be done, the cursor approach was already mentioned in the previous quote.

"Except for the rollback space concern..."
Gabe, that IS the concern.

All I was saying is that a 'commit every' feature would be nice to have, and is not dangerous if implemented properly.

Tom Kyte
May 02, 2006 - 3:05 am UTC

it is - I disagree with you, and we'll have to agree to disagree. There are many features I wish we could "pull" from the database due to the abuse heaped upon them by developers just "not getting it" and using them entirely inappropriately, such as:

o autonomous transactions
o when OTHERS in plsql
o TRIGGERS

I will not promote the addition of features that I know will be used entirely inappropriately (especially when I cannot get features that looked useful, but turned out to be used wrong more often than correctly, retracted!)


That "commit every N records" is like having a "lets trash data integrity and generate ora-1555's as easily as we can" feature (eg: the commit every N records is an ora-1555 just *waiting* to happen) - you are

o reading table A

o modifying table A

o committing bits of work on table A, allowing the undo to be overwritten - in fact THIS IS YOUR STATED GOAL/DESIRE - to overwrite this undo

o but your query READING table A needs those bits of undo you are overwriting in order to get a consistent read on table A


so, not only would you have to DEAL with the fact that this statement might fail, you have to deal with the fact that the probability of this statement failing is VERY HIGH.

Records, Collections and Forall

Chris Seymour, May 03, 2006 - 9:55 pm UTC

I came across an annoying thing today that to the best of my knowledge is still not allowed in any version (even 10G). If this is actually allowed in 10G now I apologize.

I was trying to use forall with a pl/sql table that was made up of records. Unfortunately I found out the hard way that you cannot reference the record's fields individually; you can only use forall if you insert the entire record in one shot. This assumes, however, that your columns match the record's field order exactly. I would love to be able to do something like this:

declare
type r_project is record
(project_id number,
task_id number,
parent_task_id number);

type tb_project is table of r_project
index by binary_integer;

t_project tb_project;

begin
--Fill array with values (assume this step is already done)
forall i in t_project.first..t_project.last
insert into t
(col1,
col2,
col3,
col4,
creation_date,
created_by,
last_update_date)
values
(t_project(i).project_id,
t_project(i).task_id,
t_project(i).parent_task_id,
'Y',
sysdate,
fnd_global.user_id,
sysdate);
end;

This hits a PLS "unimplemented feature" error. The workaround (so I'm told) is to create a separate pl/sql table for each column. It would be nice to be able to use a record and table in the format above.
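
Until that restriction is lifted, the usual workaround looks like the following sketch: parallel scalar collections, one per field (the table name T and its columns here are assumptions):

```sql
declare
    type num_tab is table of number index by binary_integer;
    t_project_id  num_tab;
    t_task_id     num_tab;
    t_parent     num_tab;
begin
    -- assume all three collections are filled with matching indexes
    forall i in 1 .. t_project_id.count
        insert into t (col1, col2, col3)
        values (t_project_id(i), t_task_id(i), t_parent(i));
end;
```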

Regarding REUSE SETTINGS

Giridhar, May 09, 2006 - 1:27 am UTC

Hi Tom,
In order to recompile a pl/sql procedure or function
with the same settings (whatever were defined at the time of earlier compilation) in oracle 10G, we need to use
REUSE SETTINGS. Otherwise, oracle will take the current session settings.

Won't this new clause force users to modify existing scripts/programs to use REUSE SETTINGS if we have to take advantage of the new features?

Suppose I want to use the new compiler-warnings feature. I recompiled a few of my objects using the new settings.

If I have some scripts which just recompile without REUSE SETTINGS, the objects would be recompiled with the session's default values.

Wouldn't it be nicer to have the new keyword be "RESET SETTINGS" for when I want to change to the current session settings, and to simply reuse the previous compilation settings by default?

Thanks tom..


Tom Kyte
May 09, 2006 - 7:45 am UTC

This will force no one to do nothing.

If you want to use a feature, you shall modify your scripts to support the use of such a feature.

Else, you won't.

(not sure why you'd be recompiling manually anyway, I never do, just let it "happen")

small fry...

Robbert, May 17, 2006 - 12:05 pm UTC

Hello Tom,

My wishes are actually pretty small compared to the wishes above:
1) DBMS_OUTPUT.FLUSH, or something like that, that flushes the dbms_output buffer. In the meantime I use inserts into a dummy table, but that means committing repeatedly.
2) A public view variation on DBA_DEPENDENCIES that lists whether the tables used are source or target tables. This would allow lineage insight into larger batch jobs.
2_sub_A) A similar public view on the OWB repository.
2_sub_B) DBA_DEPENDENCIES listing the actual procedures/functions rather than just PACKAGE BODY.

cheers,
Robbert

Tom Kyte
May 18, 2006 - 9:52 am UTC

1) you would have to use a pipelined function - control has to return to the CLIENT application, because the client fetches the data and prints it.

So, instead of:

exec some_procedure;

you would

select * from table( some_function );

and your function would "pipe row( 'your message' );". The flush size would be the client array fetch size.
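
That suggestion might look like the following sketch (the type and function names here are made up):

```sql
create or replace type msg_tab as table of varchar2(4000);
/
create or replace function do_work_with_feedback
return msg_tab pipelined
as
begin
    for i in 1 .. 3
    loop
        -- ... perform one unit of work here ...
        pipe row ('finished step ' || i);
    end loop;
    return;
end;
/
```

Then, in sqlplus, messages appear as the rows are fetched:

set arraysize 1
select * from table( do_work_with_feedback );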




Follow-up to an old post

Michel Cadot, May 18, 2006 - 3:18 pm UTC

Hi Tom,

I come back to my post above:
http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:48704116042682#50678380017237

It now works in 10.2:

SQL> select ( select dummy
  2           from ( select dummy
  3                  from dual b
  4                  where b.dummy = a.dummy
  5                )
  6         ) dummy
  7  from dual a
  8  /

D
-
X

1 row selected.

I didn't check how many levels we can have.

Regards
Michel 

Correction

Michel Cadot, May 22, 2006 - 3:01 am UTC


The previous query works in 10.2.0.1 but no longer works in 10.2.0.2.
I opened an SR to find out which is the correct behaviour.

Regards
Michel



pipelined function vs dbms_output flush

Robbert, May 23, 2006 - 5:40 am UTC

Hello Tom,

your suggestion to use a pipelined function has definite merit if the purpose of the code is to generate all this output.
I, however, tend to use dbms_output for debugging, usually between "if l_debug = 1 then ... end if".
In that case you're really just waiting for the output to see where it goes wrong.
The flush would do 2 things:
1 - print the output to screen, even though the procedure is still running.
2 - clear the buffer so that you don't get a buffer overflow when looping through more data.

Tom Kyte
May 23, 2006 - 7:36 am UTC

there is nothing to "flush to"; stored procedures run from START to FINISH. All dbms_output does is fill in "an array", and later the client makes yet another call to the database to GET the array and print it.


If you want "real time" sort of feedback for debugging, I suggest always to use UTL_FILE.

That way you can generate a trace file (in production for example) and have a dba mail it to you for debugging.

That way you can use "tail -f" in test for "real time" feedback.
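
A minimal sketch of such a trace routine (the directory object DEBUG_DIR and file name are assumptions; the directory must be created and granted separately):

```sql
create or replace procedure debug_log( p_msg in varchar2 )
as
    l_file utl_file.file_type;
begin
    -- append one timestamped line per call, then close so "tail -f" sees it
    l_file := utl_file.fopen( 'DEBUG_DIR', 'debug.log', 'a' );
    utl_file.put_line( l_file, to_char(sysdate,'hh24:mi:ss') || ' ' || p_msg );
    utl_file.fclose( l_file );
end;
```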

UTL_FILE for debugging.

Robbert, May 23, 2006 - 7:55 am UTC

Good tip.

Thanks!

Tom Kyte
May 23, 2006 - 7:56 am UTC

Beroetz, May 29, 2006 - 3:36 am UTC

Isn't this strange?
Consider the following scripts:

ORA9i2 > CREATE TABLE Table1 (a INT PRIMARY KEY, b INT)
OK, done in 1,456 seconds

ORA9i2 > CALL Dbms_Wm.EnableVersioning('Table1')
OK, Done in 1,996 seconds

ORA9i2 > RENAME Table1 TO Table2
OK, Done in 0 seconds
Oracle renamed only the WM-created view, not the table itself!!! Now I can do all kinds of DML operations on the new Table2 view. But:

ORA9i2 > CALL Dbms_Wm.DisableVersioning('Table1')
Error: ORA-00942: table or view does not exist

ORA9i2 > CALL Dbms_Wm.DisableVersioning('Table2')
Error: ORA-20132: table 'Table2' is not version-enabled
So in this situation I can't disable the versioning of the table!!!

I have to rename back the view to disable the versioning:
ORA9i2 > RENAME Table2 TO Table1
OK, Done in 0 seconds
ORA9i2 > CALL Dbms_Wm.DisableVersioning('Table1')
OK, Done in 0.845 seconds

1) Is it a bug? Why does Oracle let me rename the system-created view?
2) Is there a way to rename the version-enabled table without such unpleasant consequences?

Best Regards!

Tom Kyte
May 29, 2006 - 7:41 am UTC

You are not supposed to perform DDL directly on the workspace manager tables, this is documented in the workspace manager guide. There is in general nothing "preventing" you from doing this (just like there is in general nothing preventing you from modifying the data dictionary)

Workspace manager provides an API to perform DDL.

http://docs.oracle.com/docs/cd/B19306_01/appdev.102/b14253/long_intro.htm#sthref196

Subquery name resolution

Michel Cadot, June 09, 2006 - 7:33 am UTC

Hi,

Following up my previous post:
http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:48704116042682#64283169182746

The test I made "succeeded" because of a bug in 10.2.0.1 exclusively on Windows XP. It's bug n°5240169.
Currently the "correct" behaviour is a limit of one level in the name resolution.

Regards
Michel

Idle Time parameter.

Suren Dharanikota, June 13, 2006 - 9:24 pm UTC

Tom,

I have seen this specific issue ":N" number of times now. And I am sure you can use a "Bind Variable" for this issue in your forum. ;-)

The last time I talked to you, you mentioned that there is an "Enhancement Request" for this feature. I am not sure in which release it will come up.

Anyway, when an application is poorly written (and it is of course poorly written 90% of the time), it doesn't close its connections gracefully after the transaction is completed. Even with the idle_time parameter set, the process remains at the server level with a "SNIPED" status at the database level. It would be nice if Oracle just terminated the connection at the server level.

1) Why should Oracle care about waiting for the application to give a handshake? The "idle time" parameter should always take precedence.

2) Once Oracle starts throwing end-of-communication-channel errors at the application, they will definitely fix their app :).

3) OR ... Oracle should provide a clean-up script/utility which terminates all processes with "SNIPED" status at the database level and their corresponding server processes.

Thanks,
Suren

Tom Kyte
June 14, 2006 - 8:40 am UTC

I would hope it would NOT close the connections - that would be really bad.



Merge and Lock

Branka, June 14, 2006 - 6:18 pm UTC

Should I lock whole table before I start merge?

Tom Kyte
June 15, 2006 - 8:26 am UTC

no, not in general.

Merge and Lock

Branka, June 15, 2006 - 9:24 am UTC

Can you tell me more about it?
I have a "batch" program that inserts data into a table that users use at the same time. It runs 3 times a day.
The merge needs 2-3 minutes to execute. Will users be able to update / delete data in that table while I am doing the merge?

Tom Kyte
June 15, 2006 - 9:29 am UTC

sure, merge will only lock the data it modifies, other users may modify the rest of the data as they wish.

Merge and Lock

Branka, June 15, 2006 - 9:49 am UTC

Thanks a lot.
Just to clarify.
Since the merge modifies the whole table, I understand now that, from the user's perspective, it will modify rows one by one, so there is a good chance that the merge and a user will not try to modify the same row at the same time.

Tom Kyte
June 15, 2006 - 4:43 pm UTC

merge does not modify "entire table" unless you have a set that will match every row.

If you have a set that will cause every row to be updated - sure, you may certainly use the lock table command to ensure you can do the merge "as fast as possible".


but - to say "merge modify whole table" is just not accurate.

Is Raptor Working for everybody?

Maverick, June 15, 2006 - 4:22 pm UTC

I don't know about others, but when I tried Raptor a couple of months back it took 4 minutes to start and bring up its GUI. It takes the same time every time I start Raptor.

I am not sure if it was just my PC or if everybody is facing this problem.

I am very much frustrated by TOAD (with its access-violation messages), and when I heard about Raptor I was glad Oracle was bringing out something to replace it. But when I used it, my feelings changed and I stopped using it immediately. It was that slow. Worse, my VSS associated all SQLs with Raptor. Now when I open code directly from VSS (I know we don't have to), it opens with Raptor and takes a sweet 4 minutes to show the code.

I thought of pointing it out, since everyone is talking about improvements and changes to the Oracle database and its tools.

Thanks,

Tom Kyte
June 16, 2006 - 6:36 pm UTC

try it again.

Merge and Lock

Branka, June 15, 2006 - 6:35 pm UTC

Thank you one more time.
I was thinking about the specific problem that I am working on, where I do modify the whole table.


Merge changes pending

Rakesh Bansal, June 16, 2006 - 4:22 pm UTC

My Wish List

(1) Ability to also print the column name when Oracle throws the error "Inserted value too large for column" - something like "Inserted value too large for column EMP_NAME, ... ". It is very difficult to pinpoint the column when dealing with a table having 200 columns where the values are not hardcoded but passed as variables.

(2) Merge is still throwing error "Unstable set of rows". Remember TOM, you filed a bug on 21st Feb'05 when I raised this issue in this forum.

(3) SQL*Loader with an insert-update facility, the same as the MERGE statement or the DB2 Import utility.

Tom Kyte
June 16, 2006 - 7:23 pm UTC

1) wish granted

ops$tkyte@ORA10GR2> create table t ( x varchar2(1), y varchar2(1) );

Table created.

ops$tkyte@ORA10GR2> insert into t values ( '1', '1' );

1 row created.

ops$tkyte@ORA10GR2> insert into t values ( '1x', '1' );
insert into t values ( '1x', '1' )
                       *
ERROR at line 1:
ORA-12899: value too large for column "OPS$TKYTE"."T"."X" (actual: 2, maximum:
1)


ops$tkyte@ORA10GR2> insert into t values ( '1', '1y' );
insert into t values ( '1', '1y' )
                            *
ERROR at line 1:
ORA-12899: value too large for column "OPS$TKYTE"."T"."Y" (actual: 2, maximum:
1)


2) give me a pointer - I believe the "bug" was it should have thrown it BOTH TIMES (and I believe you never wanted it - so no wish granted)

3) you have it - we call it "external tables" - wish granted. 
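
For wish (3), an external-table-plus-MERGE sketch (the directory object, file name, table and column names here are all assumptions):

```sql
-- An external table over a flat file, then MERGE for insert-or-update.
create table emp_ext (
    empno  number,
    ename  varchar2(30)
)
organization external (
    type oracle_loader
    default directory data_dir
    access parameters (
        records delimited by newline
        fields terminated by ','
    )
    location ('emp.dat')
);

merge into emp e
using emp_ext x
on (e.empno = x.empno)
when matched then update set e.ename = x.ename
when not matched then insert (empno, ename) values (x.empno, x.ename);
```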

Yes very useful thread

Khalid, June 17, 2006 - 11:20 am UTC

Here's what I wish:

I wish pl/sql gave me the ability to skip an iteration in a loop, like the break and continue statements in the C language, without using a goto statement.

for rec1 in c1 loop
-- How do I skip an iteration?
for rec2 in c2 loop
end loop;
end loop;

Tom Kyte
June 17, 2006 - 4:04 pm UTC

I've always used "if" myself (finding continue to be a glorified goto - but very very DIFFERENT from break).


eg: instead of:

for rec1 in c1
loop
if some_condition
then
continue; -- to skip over rest of code
end if;

...... rest of code ...
end loop;


you code:

for rec1 in c1
loop
if NOT(some_condition)
then
..... rest of code ....
end if;
end loop;

same net effect, no "goto" like processing.
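
(For readers on later releases: 11g added CONTINUE and CONTINUE WHEN to PL/SQL, so the skip-an-iteration wish was eventually granted. A small sketch:)

```sql
begin
    for i in 1 .. 5
    loop
        continue when mod(i, 2) = 0;   -- skip even iterations (11g and later)
        dbms_output.put_line( i );
    end loop;
end;
```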


Thanks, but

Khalid, June 17, 2006 - 6:16 pm UTC

First of all thanks for the response.

Yes, 'break' is much different from 'continue'. I have an equivalent of sorts for break in 'exit', but I don't have an equivalent of 'continue', which is what I should have said, but I ended up saying something else (my communication needs some improvement there).

Your solution of using a NOT (some_condition) is what I did this morning also, but I had a 'for rec2 in c2' as shown below. Not a problem, but sometimes difficult to read when there are large blocks nested.

I don't look upon continue as a glorified GOTO; rather, it is a neat way in which I can pre-empt all the extra processing upon a certain condition and forget about dealing with it.

for rec1 in c1
loop
if NOT(some_condition)
then
..... rest of code ....
for rec2 in c2; loop
.... c2 processing
end loop;
end if;
end loop;


Tom Kyte
June 18, 2006 - 12:11 pm UTC

...
I don't look upon continue as a glorified GOTO, rather I got this neat way in
which I can pre-empt from all the extra processing upon a certain condition, and
forget about dealing with it.
.....

but that is "goto" :) virtually by definition. Skip over this stuff, goto this place and pick up again.


"for rec2 in..." is just part of .... rest of code .....


(in fact - .... rest of code might make a nice subroutine!)

cost of procedure in explain plan

A reader, June 23, 2006 - 11:52 am UTC

There should be a method to get the cost of a whole procedure/package in an explain plan, rather than looking at all the SQLs individually.

Tom Kyte
June 23, 2006 - 1:24 pm UTC

the cost of a procedure.... hmmm, think about it...


tell me how to 'cost' this:


if ( something )
then
select
if ( something from select is 'x' )
then
update
else
insert
delete
select
end if
else
select
if ......



if stored procedures didn't have any sort of procedural processing, then maybe - just maybe - you could "add up all of the costs of all of the sql since they would all be executed"

But then you would have a pretty good "random number that is not useful to use in any sense of the word for anything"



Don't likes.....

A reader, June 23, 2006 - 4:00 pm UTC

max(decode...) group functions are related to block size !!!

Code refactoring is needed.

10g Enterprise Manager (aka Database Control) is web based !! a stateless connection

why does Oracle promote that? what is the advantage?

Tom Kyte
June 24, 2006 - 11:24 am UTC

stateless rocks and rolls, I would not have it any other way. Waste of resources to do "client server"



I have always wanted

Lior, June 24, 2006 - 6:41 pm UTC

to be able to connect internally
without the need to specify the environment variable
ORACLE_SID..

just like connecting using a listener in sqlplus

> scott/tiger@ORCL

using a special character (% for example) :

> scott/tiger%ORCL

Stateless !!

A reader, June 26, 2006 - 1:12 pm UTC

Is that the reason to make Database control a web-based application ?

You will have to separately run apache or whatever web server on the box and, more than that, manage it !!

If there is a large tablespace creation, what if the connection is gone? (Sure, the tablespace will be created, no doubt about it, but Database Control might not show the correct result !!) And Oracle is trying to make DBAs totally dependent(??) on "web-clicking" over time !!!





Tom Kyte
June 26, 2006 - 2:35 pm UTC

No, there are many other reasons (like the zero client install - very nice, my personal favorite)

I'd rather manage a single instance of something serving an application than manage some egregious number of clients that would need installation, configuration, patching, and upgrading. Not only that, but I could only access the administrative system from a terminal that had, well, the software installed. How painful is that.


Let's see -

one install, versus N client installs
one thing to manage, versus N things to manage
stateless (always a bonus)
zero client install - access from anywhere, very nice.


I don't understand the attitude here. Why is clicking in a web browser more insidious than clicking in some thick client?!


You have sqlplus just like me as well - you don't have to use anything you don't want to use.


thanks

A reader, June 29, 2006 - 12:00 pm UTC

For some reason, I have the perception that web-clicking is "not reliable"; maybe it's just me !!

sqlplus is "the great one" among all of these, but if the industry is moving, you have to move.

Tom Kyte
June 29, 2006 - 1:11 pm UTC

I buy most everything online, I do my banking online, I buy/sell stock online.

They are all clicks. I worry more about the reliability of those clicks than I do clicks in enterprise manager for the most part. And they have worked for me so far.

Stock?

A reader, June 29, 2006 - 5:15 pm UTC

Tom
I buy/sell stock online.....
How do you get time for all this? I'm sure you must be doing very well at this also.
Regards,

Tom Kyte
June 29, 2006 - 5:19 pm UTC

well, I don't do it *often* :)

Expanding Ask Tom to "AskToms"

A reader, June 29, 2006 - 5:51 pm UTC

Hi Tom,

I know it is just "you" behind this site ..

Do you have any idea of extending this site to "AskToms" .. meaning many people working together to answer the questions ... :-) as a professional group ...

Did this ever strike your mind before, or is this something that I am proposing for the first time (I bet not)..

Well the point is ... if you have any such ideas ... there are so many of your followers out there (including me) who do things the "AskTom" way :-) ... please do remember to bring them into the AskToms group !!

Thanks !

Tom Kyte
June 29, 2006 - 6:54 pm UTC

see the "other resources" tab above - they are already out there.

My 2 Cents

TRS, June 30, 2006 - 12:02 am UTC

I hate the defaults in the init.ora parameter file for the "out of the box" installation. I have had to 'correct' these defaults over and over and over again for a 'simple' Oracle installation.... Gawd!

PLSQL table meta data

Andrew Markiewicz, July 28, 2006 - 4:03 pm UTC

I wish that I could interrogate a plsql table structure programmatically so it would be possible to iterate over the attributes and write more generic reusable code that uses plsql tables.

In Perl this would be something such as:
foreach $col (keys %plsqlTab) {
...
}

Then $col is set to each attribute name and the corresponding value can be found with $plsqlTab{$col}. In Perl that is.

I find it very frustrating that each use of the pl/sql table requires each attribute to be explicitly named.

For example:
TYPE rec is RECORD(
fld1 varchar2(30)
,fld2 date
);
TYPE tab IS TABLE of rec index by BINARY_INTEGER;

xTab tab;
xTab2 tab;
idx integer;
...
idx := xTab.FIRST;
while idx is not null
loop
-- It would be convenient to be able to iterate
-- over the attributes programmatically instead
-- of requring a line for each attribute.
--
xTab(idx).fld1 := xTab2(idx).fld1;
xTab(idx).fld2 := xTab2(idx).fld2;

idx := xTab.NEXT(idx);
end loop;
...


Why? Is there something about the pl/sql table data structure that makes it impossible to interrogate its structure at run time?

This becomes even more of an issue when trying to publish pl/sql procedures that return a pl/sql table as a web service. JDeveloper will generate the code for you, but it looks at the current pl/sql data structure and generates a separate explicit line for each attribute. When the pl/sql table changes, the web service must be maintained. It would be better to iterate through the attributes programmatically and avoid that code maintenance.

Tom Kyte
July 28, 2006 - 8:59 pm UTC

plsql is a strongly typed language. How would you do this in C, or Java, or Ada, or PL/I, or Cobol, or ......

Compiled vs interpretted

Andrew Markiewicz, August 01, 2006 - 9:51 am UTC

Is it due to strongly typed variables or to the degree of compilation?

In Java we could use the java.lang.reflect.Field.getFields method to return an array of the attributes of a class at runtime. From my understanding that is because it is run in a JVM, is partially interpreted, and reflection offers the metadata needed.

Similarly, plsql is compiled to pcode which is also interpreted at runtime. Does that offer the possibility of providing the metadata needed, similar to the way Java can?

Tom Kyte
August 01, 2006 - 10:29 am UTC

you have the anydata, anytype, any**** types for "introspection" if you like as well.

SQL PLus improvements

Alex, August 01, 2006 - 11:02 am UTC

I had been using Oracle long before SQL Server - my current organization uses both.

I wish SQL*Plus had many of the conveniences that come with the MS SQL Query Analyzer tool:

1 execute only a mouse-highlighted portion of a procedure

2 ease of copying result sets (presented in a grid) and pasting them into other software like Excel

3 color-coded keywords (some kind of IDE, if you like)

4 separate grids for result sets and messages

5 Finally, why does the SQL*Plus result set take a long time to display on screen or spool to file?

Also, why does Control-C sometimes not halt the display?



Tom Kyte
August 01, 2006 - 6:49 pm UTC

5) eh? no way. not unless you have a query that takes long to run. plus is about the fastest thing going.


</code> http://www.oracle.com/technology/products/database/sql_developer/index.html <code>

get sql developer.

to: Alex

A reader, August 01, 2006 - 11:22 am UTC

SQL*Developer was developed to provide those features for Oracle users:
1 execute only a mouse-highlighted portion of a procedure
2 ease of copying result sets (presented in a grid) and pasting them into other
software like Excel
3 color-coded keywords (some kind of IDE, if you like)
4 separate grids for result sets and messages
</code> http://www.oracle.com/technology/products/database/sql_developer/index.html <code>


Andrew Markiewicz, August 01, 2006 - 3:10 pm UTC

SYS.ANYDATA doesn't seem to work with converting a plsql table.

-> declare
2 vdata SYS.ANYDATA;
3
4 type myrec is record(
5 fld1 varchar2(10)
6 ,fld2 date
7 );
8 type mytab is table of myrec
9 index by binary_integer;
10
11 tab mytab;
12
13
14 begin
15 tab(1).fld1 := 'hi';
16 tab(1).fld2 := sysdate;
17 tab(2).fld1 := 'there';
18 tab(2).fld2 := sysdate;
19
20 -- vdata := sys.anydata.convertObject(tab);
21 vdata := sys.anydata.convertCollection(tab);
22 -- vdata := sys.anydata.convertRaw(tab);
23
24 end;
25 /
declare
*
ERROR at line 1:
ORA-22370: incorrect usage of method ConvertCollection
ORA-06512: at "SYS.ANYDATA", line 0
ORA-06512: at line 21

The convertObject and convertRaw methods do not compile due to a parameter type mismatch.

Thanks for your responses Tom. If Oracle could provide access to the metadata of plsql tables somehow, that would be a huge improvement. But for now, it's still a wish.

Tom Kyte
August 02, 2006 - 10:41 am UTC

plsql types (boolean, records, table of records, index by tables) are only ever going to be visible to .... plsql really.


index by tables of scalars - we can bind to them from 3gls but that is about it.

Use an object type instead.
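To illustrate Tom's suggestion: a SQL object type collection (unlike a PL/SQL record table) is visible to SYS.ANYDATA. A minimal sketch, with illustrative type names:

```sql
-- SQL types are known to the SQL engine, so ANYDATA can wrap them
create type myrec_t as object ( fld1 varchar2(10), fld2 date );
/
create type mytab_t as table of myrec_t;
/

declare
    vdata sys.anydata;
    tab   mytab_t := mytab_t( myrec_t( 'hi',    sysdate ),
                              myrec_t( 'there', sysdate ) );
begin
    -- this succeeds where the PL/SQL index-by table raised ORA-22370
    vdata := sys.anydata.convertCollection( tab );
    dbms_output.put_line( vdata.gettypename );
end;
/
```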

Database Links / Triggers

A reader, August 02, 2006 - 11:05 am UTC

I've just noticed this thread is still going strong, had a re-read of it and thought of some more things I don't like about Oracle (i.e. features I wish Oracle had).

I also agree the ability of a privileged account (eg: DBA) to maintain the DB links of "normal" users would be handy (as suggested by "Mauro Papandrea from Italy"). We can do it for most other objects.

In addition I'd like to...

1. Be able to ALTER [PUBLIC] DATABASE LINK [CONNECT TO ...] [USING ...] etc.

2. From a privileged account (eg: DBA) be able to see what database links other sessions have open - like a DBA level of v$dblink.

3. The ability of the privileged account to close these database links - like a DBA version of "alter session close database link <linkname>;" or a "DBMS_SYSTEM.CLOSE_DATABASE_LINK(session, db_link)".

4. I'd like the ability to schedule automatic timed tasks to happen within a session, but not via dbms_job which will run in a separate session.

Eg: Some housekeeping SQL / procedure to run every nnn minutes in a session.

Maybe this could be done via some new sort of trigger or profile setting.

It would obviously have to be sensitive to whether the session was already doing something (maybe skip, or wait until it's inactive).

5. I'd like a DBA-level version of the session_roles view, so a privileged account can tell what roles another account currently has set.

6. I'd like HSODBC not to give up the ghost when you try and access a remote (non-Oracle) database table that has a column name > 30 characters long! Currently this stops you accessing any of the columns in the table - presumably HSODBC does a "describe" to itself of the whole table. Being able to access the columns with names less than or equal to 30 characters would be nice. Being able to alias the long column would be even nicer - eg: select their_exceedingly_long_mysql_version_4_column_name as foo from some_table@mysql;

That'll do for now ;-)



Logging DML and Select statements on Schema Level triggers

Vijay Ekambaram, September 06, 2006 - 8:37 am UTC

I would like to log select, insert, delete, and update statements for a specific need in a database - by writing only one schema-level trigger (working for all or selected tables).

Is that possible in Oracle now, or will it be in the future?


Tom Kyte
September 06, 2006 - 3:31 pm UTC

dbms_fga can do that on selects in 9i, and all dml in 10g

but it is "table" - not schema - nor do I see it being "schema"

your top 10 list

A reader, September 08, 2006 - 6:28 am UTC

Single (global) namespace for ...

Tak Tang, September 29, 2006 - 9:39 am UTC

... savepoint identifiers.


We all know that global variables are bad (I still have nightmares about Forms 3). The ability to declare variables locally enables us to hide our implementation, which in turn provides abstraction and encapsulation, and promotes re-use.

I particularly admired the way exceptions could be declared locally.

So why can't we declare savepoints locally?


Why would anyone use a savepoint in a procedure? Well, in this article :-
</code> http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:1302671053281 <code>
you demonstrate how to make a procedure transactionally atomic using a savepoint. Without the savepoint/rollback, the procedure has an external dependency that requires the caller (at some level) to detect the exception and respond with a rollback to remain transactionally consistent. You could not, for example, respond to the exception by taking a recourse action which ultimately ends in a commit. Of course, you could issue a savepoint before calling the procedure and issue the rollback in the event of an exception, but I think it is cleaner to package this inside the procedure to make it 'atomic' and self-contained. K.I.S.S. applies to APIs too. You never know when a busy programmer comes along, does not realise the procedure issues multiple DML operations, proceeds with the recourse action and commits! Transactionally atomic procedures are defensive programming.

The downside is that the global namespace means that the programmer must be aware of all savepoints, or risk a procedure deep in the callstack overwriting a savepoint issued earlier on, breaking transactional consistency.

Locally defined, properly scoped savepoints would make this trivial.
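The pattern under discussion, sketched with illustrative names - note how the savepoint name lives in a single global namespace, which is exactly the risk described above:

```sql
-- "transactionally atomic" procedure: undo only its own work on error
create or replace procedure do_work
as
begin
    savepoint do_work;   -- global namespace: any callee reusing the
                         -- name DO_WORK silently overwrites this savepoint
    insert into t1 values ( 1 );
    insert into t2 values ( 2 );
exception
    when others then
        rollback to do_work;   -- the caller's prior work is untouched
        raise;
end;
/
```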


Now for the flames ...

Tak

language translation

Sarayu, October 13, 2006 - 2:25 pm UTC

hi tom,
can we expect to be able to do language translation in Oracle in the near future? Something like:
SELECT DBMS_LANG_TRAN('Text to be translated', FROM_LANGUAGE, TO_LANGUAGE) FROM dual;

This is for the case where I store my data in English and want to send data in whatever language the end user wants.

As so many new features are added in every release of Oracle, I just want to know if this would be possible.


Thanks a lot tom,
Sarayu K.S.


Tom Kyte
October 13, 2006 - 7:13 pm UTC

I hope not.

That stuff rarely works (ok for a first cut but someone needs to read it and verify it - and that person could have done the translation). You could definitely build a web service (probably one exists already) and call it.


I remember someone telling me just recently about a "translation" - they were talking databases and said "fields", which the translator rendered as "meadow-like place" - the literal translation. Not too good. Very confusing :)

Alberto Dell'Era, October 14, 2006 - 4:43 am UTC

Nothing like a *book* I read that translated "silicon" into "silicone" ... so every page had a reference or two to a "silicone computer", probably for submarine data centers :)

Another thing I miss the most

Lior, November 03, 2006 - 8:14 pm UTC

(and other fellow workers)
is the ability to rename a schema/user...

Something like :
alter user scott rename to david;
or
alter schema scott rename to david;

We are just too lazy to export/import :)

able to lock function while developing

oracleo, January 27, 2007 - 10:30 pm UTC

When two developers are working on the same function, they tend to overwrite each other's changes as they compile.

A facility whereby a developer can lock a function while he or she is modifying it, so that a message pops up if a second user tries to modify it.



dictionary table to select compressed subpartitions

ht, February 22, 2007 - 2:22 pm UTC

Hi Tom,
I'm getting great results by compressing subpartitions via the:

alter table <tablename> compress;
alter table <tablename> move subpartition <spname> update indexes;

Question 1:
Is there a dictionary table that shows me which subpartitions are actually compressed? I'm periodically compressing subpartitions in no particular order and would like to avoid compressing subpartitions multiple times.

Question 2:
Is there a parallel option to updating indexes while "moving" and, if so, is there a capacity "penalty" during/after the operation?

Thanks,
ht
Tom Kyte
February 22, 2007 - 7:45 pm UTC

"compress" is an attribute - there is no flag set other than "this segment supports compressed data"

we don't know if the data is or is not compressed really until we read it.

only to the degree that you can parallelize the DDL itself (move can be parallelized)
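That said, the compression *attribute* is exposed in the dictionary (worth verifying against your release), which may be enough to avoid re-processing the same subpartitions:

```sql
-- shows which subpartitions have the compression attribute enabled;
-- it does not prove the rows inside were actually stored compressed
select subpartition_name, compression
  from user_tab_subpartitions
 where table_name = 'T';   -- 'T' is illustrative
```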

thanks

ht, February 22, 2007 - 8:12 pm UTC


Use prior column aliases in the same SELECT clause

Brian Camire, February 23, 2007 - 11:49 am UTC

I think someone already mentioned this on the thread already, but it would be helpful to be able to use prior column aliases in the same SELECT clause, as in something along the lines of:
SELECT
    1 AS DERIVED_COLUMN1,
    DERIVED_COLUMN1 + 1 AS DERIVED_COLUMN2
FROM
    DUAL

Unfortunately, this results in:
    DERIVED_COLUMN1 + 1 AS DERIVED_COLUMN2
    *
ERROR at line 3:
ORA-00904: "DERIVED_COLUMN1": invalid identifier

I know you can work around this:

1. By using inline views, but this gets messy if you have many "levels" of derived columns because you need a nested inline view for each "level",

2. By using the MODEL clause, but this seems like overkill (and unnecessarily verbose).

Does anyone have any other suggestions?
Tom Kyte
February 26, 2007 - 10:42 am UTC

write indented code so that #1 is not an issue.

or consider "with"



with data
as
(select 1 as derived_column1
   from dual
),
... other with subqueries if you want....
select derived_column1,
       derived_column1+1 as derived_column2
  from data
/




It is all a matter of the rules of scoping of SQL.

ALTER TABLE MOVE with REBUILD INDEX

A reader, February 25, 2007 - 12:24 am UTC

"ALTER TABLE ... MOVE" would be nice if it had a REBUILD INDEX(ES)" clause. This would allow a complete re-org of a TABLE and its INDEXES in one hit. At the moment you end up having an interim period where the indexes are marked as "unusable" after the MOVE - until the REBUILD indexes have been done. This would then mean queries against the table would not be impacted whilst the re-org is taking place (of course updates would still have to wait - which is still preferable to them failing with an "index unusable" error). I know there is always DBMS_REDEF - but this can be a bit of a "slege hammer to crack a walnut".





Re:Use prior column aliases in the same SELECT clause

Brian Camire, February 26, 2007 - 2:26 pm UTC

Thanks for your feedback, Tom.

I assume that you said "write indented code so that #1 is not an issue" because you were interpreting "messy" literally -- as in the visual appearance (formatting) of the code. However, I meant "messy" in the logical sense. If you have N levels of derived columns, you need N levels of nested inline views. This means you need to take the extra, arguably "artificial", step (say, compared to the MODEL clause) of grouping the derived columns into levels. I know this is "par for the course", but once you see how the problem can be formulated more "naturally" using the MODEL clause, you start asking, "Is there an easier way?" Of course, indenting helps the "visual manifestation of the logical mess", but when you have lots of levels, there's only so much it can help...

In any case, I like your WITH suggestion. I haven't used WITH much, and didn't realize you could reference a prior subquery, as in something like this:
WITH
DATA1 AS 
(
    SELECT 
        1 AS DERIVED_COLUMN1 
    FROM
        DUAL
),
DATA2 AS 
(
    SELECT 
        DERIVED_COLUMN1 + 1 AS DERIVED_COLUMN2 
    FROM
        DATA1
)
SELECT 
    DERIVED_COLUMN2 + 1 AS DERIVED_COLUMN3 
FROM 
    DATA2


At least with this approach you only need 1 level of indents instead of N.
Tom Kyte
February 26, 2007 - 3:33 pm UTC

we sort of have to follow the SQL standard for scoping rules - there is not too much we can "do" beyond that.

Improvements to Oracle

Kurt Arthur, February 27, 2007 - 11:13 am UTC

I am normally just a lurker in these parts, but I can think
of a few improvements I'd like to see. Whilst the database
engine itself is just about the best on the planet, I can
think of a few improvements elsewhere:

1. Some sort of a "lint" for PL/SQL to find things like
unused variables and to warn against dynamic SQL without
bind variables, etc. If it ever got sufficiently
sophisticated, it could even make recommendations as to
how to use built in SQL to replace PL/SQL logic. I know
these issues can (and should) be found during code
reviews, but how often do development teams actually
perform them?

2. A minor enhancement to PL/SQL overloading to allow two
functions with the same signature but different return
values to exist. The PL/SQL compiler should be able to
determine which method to invoke based on the call to it.
IMHO, the following should execute instead of returning a
PLS-00307 error:

      declare
        nval number         := null ;
        cval varchar2( 10 ) := null ;

        function foo( inval varchar2 ) return number   is
        begin
          return( 0 ) ;
        end foo ;

        function foo( inval varchar2 ) return varchar2 is
        begin
          return( 'string' ) ;
        end foo ;
      begin

        nval := foo( 'in' ) ;
        cval := foo( 'in' ) ;
      end ;
      /


3. Some sort of "link editor" for PL/SQL. Packages can grow
to thousands of lines, which can be extremely unwieldy to
edit and darned near impossible to work on in a
collaborative manner. I currently simulate this by
separating methods into different files and combining
them via "make" into something that can be compiled by
Oracle, but I know this can be improved.

4. IMHO, it sure seems like Oracle is pushing other
languages (especially Java) at the expense of PL/SQL. I
think they need to remember what a competitive advantage
PL/SQL is and recommit company to it.

5. My last suggestion doesn't apply to the database or its
core technologies, but instead to Oracle Financials.
IMHO, this is one of the most poorly designed
applications I've ever had to work with due to a
miserable database design, wretched programming and an
inscrutable client user interface. This leads to a large
slow application that frustrates users and implementors
alike.

I know it probably won't happen, but I want you (Tom) on
the team that's merging OA with PeopleSoft, etc. And I
want you to have complete veto power over the new
application's design!

Thanks for letting me vent!
Tom Kyte
February 27, 2007 - 11:23 am UTC

well, I just shared this comment with the guy that "owns" pl/sql - so feedback received...

Re: Improvements to Oracle

Kurt Arthur, February 27, 2007 - 3:57 pm UTC

Thank you!

SQL Plus

Tim Kessler, March 04, 2007 - 3:45 pm UTC

I would like to see a polling option in SQL*Plus that continually executes a statement for x seconds at y intervals, or until a keystroke is typed. This would be good for monitoring/admin functions, and also for generating test volumes of data throughput during testing.

SQL and PL/SQL

Joel, March 09, 2007 - 10:21 am UTC

I'd like to see Oracle implement a LIMIT clause for a SELECT query like several other systems do. I know that the same functionality is possible by subselecting and using ROWNUM, but LIMIT is cleaner and more straightforward.
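For reference, the ROWNUM workaround mentioned above looks something like this (a sketch against the classic EMP demo table):

```sql
-- ROWNUM workaround for a "top 5" query: the inline view orders first,
-- then ROWNUM limits the ordered result
select *
  from ( select * from emp order by sal desc )
 where rownum <= 5;
```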

I'd like to be able to use conditionals in PL/SQL queries without having to use dynamic SQL, i.e.,

open rs for
 select * from emp
 if v_empno != 0 then
   and empno = v_empno
 end if
 order by ename;


instead of having to build a string through concatenations and bind variables. These are very difficult to correctly build and maintain, especially if there are multiple optional input parameters.
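A common static-SQL workaround for optional parameters is to fold the conditional into the predicate itself (a sketch, assuming v_empno is a numeric parameter where 0 means "not supplied"):

```sql
-- each optional parameter becomes an OR'ed predicate; no dynamic SQL needed
open rs for
    select *
      from emp
     where ( v_empno = 0 or empno = v_empno )
     order by ename;
```

This stays easy to maintain with several optional inputs, though very selective predicates may still plan better as dynamic SQL.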

SQL Plus

roberto, March 14, 2007 - 5:17 pm UTC

I found out the hard way (maybe there's a way around it) that, for instance, when selecting a char(1) field, SQL*Plus displays only 1 character of the heading too. I've tried column formatting, aliases, set wrap, etc... nothing... please change this!
Tom Kyte
March 15, 2007 - 7:37 am UTC

not sure what you tried, but column formatting has always worked

ops$tkyte%ORA10GR2> create table t ( a_long_name char(1) );

Table created.

ops$tkyte%ORA10GR2> insert into t values ( 'x' );

1 row created.

ops$tkyte%ORA10GR2> select * from t;

A
-
x


ops$tkyte%ORA10GR2> column a_long_name format a15
ops$tkyte%ORA10GR2> select * from t;

A_LONG_NAME
---------------
x

Very curious reviewer name, isn't it?

Michel Cadot, March 15, 2007 - 8:37 am UTC


SQL Plus

roberto, March 15, 2007 - 1:06 pm UTC

Sorry, I meant if you have a "select * from t" where you don't know beforehand that you have a char(1) field coming. Is there a way to set the column format to a15 (for instance) for all char fields without having to name them one by one?

Thanks,
Roberto

Followup:
not sure what you tried, but column formatting has always worked


ops$tkyte%ORA10GR2> create table t ( a_long_name char(1) );

Table created.

ops$tkyte%ORA10GR2> insert into t values ( 'x' );

1 row created.

ops$tkyte%ORA10GR2> select * from t;

A
-
x


ops$tkyte%ORA10GR2> column a_long_name format a15
ops$tkyte%ORA10GR2> select * from t;

A_LONG_NAME
---------------
x


Tom Kyte
March 15, 2007 - 1:12 pm UTC

nope, sqlplus - like many generic, simple "dump my data out" sort of tools describes the query and determines the appropriate column width from that.
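One scripted workaround (a sketch; the table name T and width a15 are illustrative) is to generate the COLUMN commands from the dictionary before running the real query:

```sql
-- generate "column X format a15" for every character column of T,
-- spool it to a script, then run the script before the real query
set pagesize 0 feedback off
spool fmt.sql
select 'column ' || column_name || ' format a15'
  from user_tab_columns
 where table_name = 'T'
   and data_type in ('CHAR','VARCHAR2');
spool off
set pagesize 14 feedback on
@fmt.sql
select * from t;
```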

to Michel Cadot, referring to your last comment

A reader, March 16, 2007 - 6:40 am UTC

Michel, do you think it's pronounced French or Flemish ?

How about a yum repository for install/update for Linux

Nick Pierpoint, March 16, 2007 - 6:47 am UTC

I love being able to install XE just with rpm:

    rpm -ivh oracle.rpm


Very nice.

But what I'd really love is to install and patch our full-blown Oracle database in the same way - ideally against a yum repository to make things even easier.

Just think:

    yum update oracle


and you've just applied the latest patch set. OK - granted - a slightly scary proposition, but those Oracle folks are clever and get paid the big bucks - I'm sure they could make it work.

Hey - it could update automatically on the overnight yum update (now that's scary).

[Tom - Thanks for a great site.]

--
Nick.

great idea

A reader, March 16, 2007 - 8:28 am UTC

great idea, Nick

do you remember the very first Oracle version for Linux?
I think it was 1997 (1998?) when I attended an Oracle training course
and they distributed CDs with "Oracle for Linux" on them (was it 7.0.1.6 or even 7.2?), with about 15 or 20 pages of installation instructions included (and also at least a two-page disclaimer).
That was quite a sensation for me; before that, Oracle was always saying "Linux? No chance we'll ever support those hackers."

Alexander, March 21, 2007 - 10:25 am UTC

This is starting to get on my nerves: when I grant a user DBA to do some development and they still run into permission problems, I have to waste a lot of time tracking down what specific privilege needs to be granted. I'm having this problem right now and I don't know what's missing, so I can't give an example.

And before you say you shouldn't be doing that, that you should only grant exactly what they need to get their job done - I'd say I would do that if I didn't have to grant 7000 individual privileges to get them up and running.
Tom Kyte
March 21, 2007 - 11:26 am UTC

why don't you peek at the sql reference which has a pre-requisite section that outlines the needed grants in most all cases?

Alex, March 21, 2007 - 12:17 pm UTC

"When I grant a user DBA to do some development and they still run into permission problems"

My advice is never to do that, not even in development. Assuming that you are going to narrow down the privileges going into test/production later, some of what used to work in dev might not work in test/production.

You need to start looking into ROLES to group your 7000+ privileges into a dozen or so categories of users.

Things I wish Oracle had

Charles Lanfair, March 22, 2007 - 2:55 pm UTC

Tables and columns have 'comment on' capability. Don't I wish that other Oracle objects had the same capability. How often I've had to scratch my head, thinking 'Why did/What is this [role, user, link] for?' - especially those ones created in the middle of the night that aren't always as semantically rich as they could be. And no, I don't want to track down some piece of .doc, .xls, or er1 any more than I want to update it. This info should be in the database - whaddyathink?

Pro*C

Rodrigo, April 04, 2007 - 3:51 pm UTC

What would I like to see?
Updating the Pro*C compiler so it recognizes features present in 9i and above (MERGE springs to mind).

thanks
-Rodrigo

Nickle and Dime

Balbir Singh, April 05, 2007 - 12:53 am UTC

Bundling all these sexy features in the database and charging people for using them. You should have a choice of which features you want and be able to install them separately. I would not like to have my database carrying all the baggage I can't use or don't want to use.

Another feature (limitation) that really bugs me is having the same name for schema and user. I have gotten used to it over time, but every once in a while it reminds me of the importance of starting anything with a solid foundation, because it becomes impossible to change the silliest of design features.

Java and the whole JVM system is a great boon for many reasons. I wish the database and this OTHER SYSTEM were not too tightly coupled together. Database should be just for storing and manipulating the data. I don't know why the JVM couldn't reside like Oracle Application Server, a little bit separated from the database.

There are of course many other gripes that I have, but I am too lazy to list all of them...

Enterprise Mgr, Job, RMAN catalog credentials

Khalid, April 17, 2007 - 10:21 pm UTC

EM 10gr2,
When I add an RMAN job to the Enterprise Mgr job library, I cannot specify any credentials for my catalog. Therefore, I am forced to either:
1) Use the nocatalog mode, OR
2) Put connect rman/<passwd>@catalog_db in my RMAN script, for everyone to see

It appears this is more of a design flaw in the EM job screens - they forgot about the catalog info. I hope I am wrong, in which case, how do I do that?




I was so wrong

k, April 17, 2007 - 11:44 pm UTC

Hey, I am not afraid to say I was wrong! I am a big man (5'8" YMMV). Yes, it is possible to specify catalog info in RMAN - I just found that out. It has to be specified on the "backup and recovery settings" page, not on the job creation page. I still think this is a design flaw.

About comments in database objects

Marco Coletti, April 18, 2007 - 8:40 am UTC

To Charles Lanfair.

Tables/columns have got comment capability because you can't embed documentation in the data without somewhat corrupting it.
For procedures/functions/packages/triggers/views/mviews, you can easily embed any comment in the code itself.
Roles/users/links - I think they don't usually need any comment, but have you considered maintaining your own table?
I.e. you can create a table like the following

create table USER_OBJ_COMMENTS
as
select
substr(OBJECT_NAME,1,30) OBJECT_NAME
,OBJECT_TYPE
,cast(null as varchar2(4000)) MYCOMMENT
from USER_OBJECTS;

then add any missing object like roles and users.

upgrading

Aashar, May 06, 2007 - 8:54 am UTC

Upgrading is very confusing. It should be like other software, i.e. upgrade in the same location.

Regards,

"The Path to Oracle Fusion"

Zlatko Sirotic, May 19, 2007 - 4:11 am UTC

Very interesting extract from a very interesting paper
"The Path to Oracle Fusion", Dr. Paul Dorsey, Dulcian, Inc.
http://www.dulcian.com/
Conferences, papers & presentations => Papers and Presentations organized by topic => J2EE Development


"OO Architects seem to be running Project Fusion

OO architects view things differently from database-centric architects.
OO people tend to think of the database as persistent storage. They do not lean towards server side code.
For Project Fusion, there was actually discussion surrounding the feasibility of changing all PL/SQL to Java in the middle tier.
The main reason the idea was rejected was that there exist millions of lines of PL/SQL code. Rewriting it all would have taken too long.
The answer SHOULD have been that it was a stupid idea in the first place.
Complex, data-centric code can run hundreds (if not thousands) of times faster in the database.

Our experience across several projects has been that refactoring code from middle tier-centric to database-centric
(place as much as possible in server-side code) results, on average, in the following advantages:
1) more than a 50% reduction of the total lines of required code
2) more than 10 times improvement in performance
3) more than a 99% reduction in network traffic
4) more than a 50% reduction in server-side activity
(a few well written routines consume fewer resources than thousands of inefficient retrieves)

Many Oracle Corporation developers recognize that server-side development is critical to the success of Project Fusion
(including Thomas Kurian), but there are others who would make Project Fusion database-independent, if given the opportunity.

What this means for the rest of us is that we will have great tools for placing logic in the middle tier,
but probably only basic support for server-side objects.
For example, if you want to use a server-side function that returns an object collection, there is no direct support.
You have to wrap the function in a view. Writing the view is not a difficult task,
but it does demonstrate the lukewarm support for server-side development.

It also means that Oracle's tutorials do not demonstrate good server-side development practices.
Everything highlights what can be done using a middle tier-centric approach.
As a result, I believe that the tutorials don't demonstrate the best practices that effectively use the database."


Regards,
Zlatko Sirotic

Modifying foreign key constraints

Arun Mathur, June 04, 2007 - 9:48 am UTC

Tom,

The other day a developer pointed out an error where his delete operation did not cascade down to the child tables. I simply removed the constraint and re-declared it, adding the "on delete cascade" clause (9i database server). I could be wrong, but I don't think I could have achieved this via the modify constraint clause. It would be very useful if this option were available; however, there may be an underlying reason for it not to be?

Regards,
Arun

Tom Kyte
June 06, 2007 - 9:37 am UTC

Not that I am aware of - it just "isn't an implemented feature"

Constraint Violations

Saket, June 05, 2007 - 9:53 am UTC

I am working on an Oracle Forms 10g application. I was really wondering if Oracle provides a facility for showing a meaningful message when a constraint is violated, like "Employee Code already exists." instead of "ORA-00001: unique constraint violated...".

Does Oracle ever think of adding a message string to constraint definitions, so that whenever the constraint is violated a more meaningful message could be displayed instead of the "ora-00001 constraint violated" kind of stuff?


Tom Kyte
June 06, 2007 - 1:03 pm UTC

maintain your own table of constraint_name to "my meaningful message"

upon error, you get constraint name
you can query table
and display whatever error message you like in your application.
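A sketch of that approach (table and constraint names are illustrative):

```sql
-- map constraint names to user-friendly messages
create table constraint_messages
( constraint_name varchar2(30) primary key,
  message         varchar2(200)
);

insert into constraint_messages
values ( 'EMP_CODE_UK', 'Employee Code already exists.' );

-- on ORA-00001, the application parses the constraint name out of
-- SQLERRM, looks it up here, and shows the friendly message instead
```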


What I would like to see...

JigneshK, June 06, 2007 - 11:34 am UTC

1. Tkprof/SQLLDR should be bundled with Client installation
2. Ability to create trace files on client machine (just like spool files) so that Developers do not go to DBA for trace files / access to oracle server folders.
3. Something similar to "SET MARKUP HTML" for CSV/TSV files as still many people use sqlplus to generate CSV spool files for data unload.
4. PLSQL compiler should be intelligent enough to throw exception for syntax errors in DYNAMIC SQL.
5. Traditional exp/imp to return unix status FALSE if the import / export was not successful (even when specifying ignore=y)
6. enhancement to dbms_job to accept total no of tries (instead of max 16 tries)
7. Parallel execution of Procedures/Packages within SQLPlus. (i.e. without programming it with DBMS_JOBS or Unix & command)
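Regarding item 3, the usual interim workaround is to build the CSV in the SELECT itself (a sketch; the EMP columns are illustrative):

```sql
-- poor man's CSV unload from sqlplus
set pagesize 0 linesize 1000 trimspool on feedback off
spool emp.csv
select empno || ',' || ename || ',' || to_char( hiredate, 'YYYY-MM-DD' )
  from emp;
spool off
```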

I hate LONG datatypes.

Perer Cooper, June 14, 2007 - 1:01 pm UTC

I hate LONG datatypes.

Oracle advises not using them as it is an obsolete type that will be discontinued, but continues to use them in the data dictionary.
That's hypocrisy!

This datatype is sheer horror! One can do almost nothing with it in SQL.

For example:
SELECT search_condition
FROM user_constraints
WHERE search_condition='ename IS NOT NULL'

...fails miserably.

There is no way to CAST it, nor to use TO_LOB to process a LONG in a select statement (it works only in INSERTs, phew!). You cannot use DBMS_METADATA_UTIL.long2varchar either, because it requires a rowid (which is not available in the data dictionary views), not to mention execute permissions on that exotic package.

Of course a PL/SQL procedure can help some, but that requires write permission on the database to create/replace the procedure, and causes context switches out of the SQL engine when the procedure is used. Sloooowww.

One can process the LONG datatype in the client via OCI, but if we need a client to do such basic things, why do we need SQL and DBMS at all !?

Instead of writing:
Count(CASE WHEN search_condition='ename' THEN 1 END)
in some query, and transferring just 1 result from server to client, the user is forced to transfer hundreds of records over the network and process them on the client side to accomplish the same thing!

By keeping LONG datatypes in the data dictionary, Oracle forces users to use them.

I hate Oracle for such hypocrisy!

And don't tell me "no one needs to parse the data dictionary himself", because that is pure BS. I can give you several examples of such necessity offhand.

Oracle should have a global setting "DISCARD_LEGACY_CRUD" to allow it to break backward compatibility for those who don't care about it.
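For completeness, the PL/SQL workaround the poster alludes to can be as small as this (a sketch; the function name is illustrative, and the usual caveats about context switches apply):

```sql
-- PL/SQL can fetch a LONG into a local variable even though SQL cannot
create or replace function get_search_condition( p_name in varchar2 )
return varchar2
as
    l_cond long;   -- up to 32K in PL/SQL
begin
    select search_condition into l_cond
      from user_constraints
     where constraint_name = p_name;
    return substr( l_cond, 1, 4000 );
end;
/

-- now the LONG is usable in SQL predicates, e.g.:
-- select constraint_name from user_constraints
--  where get_search_condition( constraint_name ) = 'ename IS NOT NULL';
```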

Build multiple indexes at the same time on a table

Jim, June 25, 2007 - 3:11 pm UTC

Allow Oracle to build multiple indexes at the same time on the same table. Then the Oracle engine could choose to scan the underlying table once while slaves build the individual indexes (and sorts etc.)

Something along the lines of
exec dbms_indexdef.begin('MYUSER','MY_TABLE');
create unique index my_uniqueindex on MY_TABLE(...);
create index my2ndindex on MY_TABLE(...);
...
exec dbms_indexdef.build_indexes('MYUSER','MY_TABLE');


This would be especially valuable in a datawarehouse environment.



Alexander the ok, July 03, 2007 - 9:35 am UTC

Having seen a Java programmer poorly design something and use improper data types for the 10 millionth time has prompted me to ask this question:

Why does Oracle implicitly convert? It seems like it allows people the freedom to use incorrect data types.
Tom Kyte
July 03, 2007 - 10:36 am UTC

because the ANSI SQL STANDARD dictates that we do.


and who ...

A reader, July 03, 2007 - 11:23 am UTC

... dictates the ANSI SQL STANDARD ?

mostly Oracle, Microsoft, ...
Tom Kyte
July 05, 2007 - 8:39 am UTC

and what do you base that premise on?

fact is, the standard started way back in the 80's (that would be - well, way before Microsoft, for example, even *had* a database with which they might have been interested in influencing stuff...) and basic language elements like this were decided by committee - not by a company.

Because Microsoft made a C compiler and had much vested interest in exploiting C as a development platform - do you think they drove the C programming standard hard and fast? If so, why did they 'invent' C# and other variants rather than adopting standards based languages like Java? Could it be in part due to the fact that languages implemented by committee many times end up having features you don't like, don't wish to implement and do not incorporate features you (the individual company) feel are relevant?

PLSQL Generics and better collections

Tom, July 04, 2007 - 3:10 pm UTC

I'd like to see much better collection support in PLSQL and the ability to use generics in user defined collections.

In java, you have arrays, vectors, hashtables, etc, etc, etc, and they all take generics. In plsql, you get index by tables and varrays and that's your lot. You can write your own, but you have to recode for every type you want to store in the collection since there is no true generic support (sys.anydata doesn't work with plsql types).

What would be amazing would be if the built in collections in plsql were extended to support generics and also to support more operations on these generics - such as equals (to compare two generic arrays), contains (checks whether a generic array contains a specific object), etc.

I wish I could empty a schema with one statement

Stefano Trallori, July 05, 2007 - 12:31 pm UTC

Something like...

SQL> purge schema schemaname;

;)

Re: I wish I could empty a schema with one statement

Stew Ashton, July 08, 2007 - 2:49 pm UTC


Stefano,

How about
drop user schemaname cascade
?
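If the user itself (and its grants and privileges) must survive, a do-it-yourself purge is a short anonymous block - a rough sketch, shown here for tables only; other object types would need their own loops:

```sql
-- Drop every table owned by the current user; run as the schema owner.
begin
    for x in ( select table_name from user_tables )
    loop
        execute immediate
            'drop table "' || x.table_name || '" cascade constraints';
    end loop;
end;
/
purge recyclebin;   -- 10g and up: actually release the space
```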

How about autonomous transaction?

Rahul, July 11, 2007 - 4:43 pm UTC

Tom,

You said:

quote

Followup May 23, 2006 - 7am US/Eastern:

there is nothing to "flush to", stored procedures run from START to FINISH. All dbms_output does
it fills in "an array" and later a client makes yet another call to the database to GET the array
and print it.


If you want "real time" sort of feedback for debugging, I suggest always to use UTL_FILE.

quote


How about using an autonomous transaction (with commit) for debugging? Then, when you are done, you can comment out the procedure which inserts into the logging table.

Rahul
Tom Kyte
July 11, 2007 - 10:28 pm UTC

so, how does that help you debug? (besides being really slow)

dbms_application_info, rather use that - no, i would not use autonomous transaction to "debug", way too much overhead.
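The dbms_application_info approach looks like this (a sketch; the module/action names are invented):

```sql
-- Instrument a long-running routine; another session can watch the
-- progress in v$session without any logging-table overhead.
begin
    dbms_application_info.set_module( module_name => 'nightly_load',
                                      action_name => 'phase 1' );
    -- ... do phase 1 work ...
    dbms_application_info.set_action( action_name => 'phase 2' );
    -- ... do phase 2 work ...
    dbms_application_info.set_module( null, null );  -- finished
end;
/

-- from a monitoring session:
select module, action from v$session where username = 'MYUSER';
```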

dbms_pipe

Sokrates, July 12, 2007 - 9:34 am UTC

is great for debugging:
PL/SQL routine writes into pipe
and another session is reading from it
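A minimal sketch of that technique (the pipe name is made up; note that dbms_pipe messages are non-transactional, which is what makes them useful for real-time debugging):

```sql
-- Session being debugged: push a message into a named pipe.
declare
    l_status integer;
begin
    dbms_pipe.pack_message( 'reached step 3' );
    l_status := dbms_pipe.send_message( 'debug_pipe' );
end;
/

-- Watching session: wait (here up to 60 seconds) for a message.
declare
    l_status integer;
    l_msg    varchar2(4000);
begin
    l_status := dbms_pipe.receive_message( 'debug_pipe', timeout => 60 );
    if l_status = 0 then
        dbms_pipe.unpack_message( l_msg );
        dbms_output.put_line( l_msg );
    end if;
end;
/
```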

64 bit hsodbc

Pete V, July 19, 2007 - 11:37 pm UTC

I wish Oracle would port hsodbc to 64 bit already.

Here I am with a brand spanking new server running 64 bit Enterprise Linux, 8GB ram, SAS raid etc, but I can't connect the database to legacy SQL Server databases because hsodbc has not been ported to 64 bit.

If anyone has been successful at getting the 32 bit hsodbc to run under 64 bit Linux please speak up?

Thanks

Wanted features

JP, July 20, 2007 - 4:23 pm UTC

Hi, just some "simple" features that would be nice to have:
- In memory tables and indexes, with data persistent through sessions, but without ACID, commit or rollback, or any kind of transaction overhead/control.
- archive logs just for selected tables, indexes, etc.
- hierarchical organization of procedures/packages, like a filesystem tree.
- transactional sequences, i.e., if rolled back, the sequence returns to its original value. Even if this feature implies locking all processes using the same sequence.

thanks

JP
Tom Kyte
July 21, 2007 - 9:10 am UTC

transactional sequences - you can do your own in five seconds. they would be the worst thing ever as they necessarily imply SERIALIZATION.
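The five-second do-it-yourself version - a sketch with made-up names - is a one-row counter table; the uncommitted UPDATE is what serializes everyone on the same counter, illustrating the point:

```sql
create table my_seq ( id number not null );
insert into my_seq values ( 0 );
commit;

-- Call from PL/SQL only (a function doing DML cannot be called
-- from a query). The increment rolls back with the transaction,
-- but blocks every other caller until commit/rollback.
create or replace function next_id return number
as
    l_id number;
begin
    update my_seq set id = id + 1 returning id into l_id;
    return l_id;
end;
/
```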

Oracle Forms (.inp files)

Rahul Dutta, July 26, 2007 - 3:25 pm UTC

Hi Tom,
Is there any tool/script that scans the code of Oracle Forms 3.0 (.inp files) and returns a report with the details of the database interaction inside the form? I need to know what tables are accessed by each form, and the statements (insert, update, delete stuff) as well.

Thanks and Regards
Rahul Dutta
Tom Kyte
July 27, 2007 - 9:13 am UTC

no idea - try otn.oracle.com -> discussion forums.


wow, version 3, 1980's

SQL Loader

Keyur, July 27, 2007 - 11:10 pm UTC

I think we need to have an "OR" operator in the SQL*Loader WHEN clause.

Move an index from one schema to another without dropping and recreating it in the other schema.

A built-in explanation in AWR/Statspack of most wait and latch events.

Thanks

~
Tom Kyte
July 28, 2007 - 9:26 am UTC

external tables.....

sqlldr via sql - all of the power of the where clause.


moving an index to another schema? what use is that - I cannot imagine truly the need for moving an INDEX from schema A to B.


the Oracle Reference has such a builtin feature already....

sql loader

keyur, July 28, 2007 - 10:51 pm UTC

sql FROM sql loader slows down performs and it has to go through database. IT there would be OR it rejects at first point.
Tom Kyte
July 29, 2007 - 8:08 am UTC

what??!?!?!

those sentences make no sense whatsoever.

SQL LOADER

Keyur, July 31, 2007 - 10:26 am UTC

If we had OR, it would reject that row at the first point, instead of passing it to the database, executing a function to validate the data, and only then rejecting the row. That is more costly, compared to just having OR in the WHEN clause.

I had this kind of problem while loading data conditionally, when the VET_IND column contained either 'A' or 'I'. I had to run the load twice: once for WHEN VET_IND = 'A', and then the same load again for VET_IND = 'I'. If I had an "IN" operator I would simply run the load with WHEN VET_IND IN ('A','I'), or if I had OR I would say VET_IND = 'A' OR VET_IND = 'I'.

I am scanning 3 million rows 2 times, which means scanning 6 million rows to insert 2 million rows, because only 2 million rows have 'A' or 'I' in that column.

That's why I think it would be better to have an IN or OR operator in the SQL*Loader WHEN clause.

Thanks

~Keyur


Tom Kyte
August 02, 2007 - 10:00 am UTC

I doubt you'll see sqlldr have much of anything added to it, given the power of SQL and external tables.


the database "isn't slow", instead of worrying about hypothetical problems (as you are), give it a try - see what you see.
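The external-table approach for the VET_IND case above would look something like this (the directory, file and column names are invented):

```sql
create or replace directory load_dir as '/u01/app/loads';

create table emp_ext (
    empno   number,
    vet_ind varchar2(1)
)
organization external (
    type oracle_loader
    default directory load_dir
    access parameters (
        records delimited by newline
        fields terminated by ','
    )
    location ( 'emp.dat' )
);

-- one pass over the file, full power of SQL in the predicate:
insert /*+ append */ into emp ( empno, vet_ind )
select empno, vet_ind
  from emp_ext
 where vet_ind in ( 'A', 'I' );
```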

multi-query optimization

Saurabh, October 12, 2007 - 6:13 am UTC

Hi Tom,

I have a couple of questions...

Is there any multi-query optimization feature available/expected in Oracle newer versions?

I imagine them as useful in batch scenarios where I am accessing lots of data (i.e. we pretty much do physical reads for every query). What is your take on this? Kindly justify.

What I mean is: an application could just submit a query batch and Oracle would internally figure out which queries to run concurrently, or which queries to run in a particular order. Maybe the batch I submit has queries which have no interdependencies... even better, the database could handle dependency issues, or allow me to specify them in a particular way.

The database may choose to do cross-query optimization:
-- by avoiding unnecessary physical reads
-- by reusing a temp segment created for some other query, as it would know there is another query in the batch which will benefit from this temp area.

Basically, I would like to get most from my database.


Tom Kyte
October 14, 2007 - 9:16 pm UTC

there is parallel query - execute a single query in parallel.

it would make no sense to have a "query batch" like that, the client itself is serial - it needs to get some outputs and do stuff.

basically, you run a query, get outputs, make decisions based on the outputs and run other queries. We have no idea what your logic is, we would never be able to figure out what things are "not dependent" on other things.

Online - Webbase software

Nazmul Hoque, November 08, 2007 - 7:22 am UTC

Hi Tom,

I want to let you know that I have developed an application using Oracle 8.1.7 as the database and Developer 6i for forms and reports. It is working fine on my local LAN in the office. Now we want to use the same software from outside, accessed through an internet URL/link. Is it possible to run the software from an outside site?

Please let us know if it is possible.

Hoque
Dhaka - Bangladesh
Tom Kyte
November 08, 2007 - 10:11 am UTC

sure it is. It is all just a network. The 'internet' is not different really, the same protocols and all are happening.

However, the presence of firewalls and other security measures may well block important traffic, so you'll have to sort through that.

A reader, November 08, 2007 - 11:47 am UTC

For databases where the number of users is small, SQL Server is the best for the price.
Tom Kyte
November 09, 2007 - 11:51 am UTC

ummm, no - not really.

XE is best for price (it being free)
SE1 and SE are as "costly" as sqlserver

beat free first though - for price.

For small db sqlserver is not cheaper than Oracle

Jim, November 08, 2007 - 12:01 pm UTC

Hmm, for a small db you are saying SQL Server is better than Oracle Express because it is cheaper? Since Oracle Express is free, is MS paying people to download SQL Server?

Review wasn't related to the original post!

A reader, November 08, 2007 - 9:26 pm UTC

The review about accessing the app on the internet wasn't related to the original post, was it? I am wondering why you took exception to this review, especially considering the new warning message that you have included on this site.

Just curious, that is it.

Tom Fox, November 09, 2007 - 11:34 am UTC

@Jim:

SQL Server = Oracle
SQL Server Express = Oracle Express

Plus, since most SMBs can purchase Windows Small Business Server which includes SQL Server, they get it for practically nothing.
Tom Kyte
November 09, 2007 - 12:22 pm UTC

SQL Server = Oracle Express pretty much too - what do they have that Express doesn't really.

And SE is not expensive - practically nothing applies all around.

Descending Indexes

George Robinson, November 09, 2007 - 11:50 am UTC

What I do not like about Oracle is that it treats the descending indexes differently than ascending indexes.


The notable differences are:

1) Descending indexes are always created as Function Based Indexes (FBI). Ascending indexes are not.

2) Because of #1 , it is illegal to specify an index in hints BY ITS COLUMN NAMES (unless a system/internal column name is used).

3) Descending indexes cannot be used for partitioning (unless a system/internal column name is used).
e.g. PARTITION BY name, dob ORDER BY name, dob DESC, id, will not use an index even if one exists on columns: name, dob DESC, id. Contrastingly, ascending indexes can be used for partitioning without problems.

4) I have yet to check if an index containing at least one descending column can be read backwards by Oracle (as in hint INDEX_DESC). If not - it is another thing I wouldn't like about Oracle.
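Points #1 and #2 are easy to demonstrate (a sketch; the exact system-generated column name will differ):

```sql
create table t ( name varchar2(30), dob date, id number );
create index t_idx on t ( name, dob desc, id );

select column_name
  from user_ind_columns
 where index_name = 'T_IDX'
 order by column_position;
-- the DESC column shows up as something like SYS_NC00004$,
-- not as DOB - hence no hinting by the real column name
```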

On the bright side, Tom seems to be convinced that in future versions, descending indexes will be treated the same way as ascending indexes.

On the dim side, other databases I know are not any better with descending indexes (of course I do not know them all).

C'mon DB coders! Do a better job next time. More orthogonal thinking and less IFs...,


Regards,
George
Tom Kyte
November 09, 2007 - 12:23 pm UTC

.... On the bright side, Tom seems to be convinced that in future versions,
descending indexes will be treated the same way as ascending indexes. ...

George - your leaps are astounding to me. I never in any way shape or form said anything remotely similar to that.



George Robinson, November 09, 2007 - 2:17 pm UTC

Tom,

I was basing my leap on your followup in the following thread. If I read you wrongly, feel free to delete my post and accept my apologies.

Regards,
George


http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:10445280893164

Tom wrote:
"
'hey guys, it is 8i, this is new -- you can use DESC in an index and it means something.
In the future, when you are using 13i, this will be a big YAWN, but right now they are "special"'
"
Tom Kyte
November 09, 2007 - 4:09 pm UTC

well in context, with the other two sentences:


...
I noted them as a special kind of index in the book.

In a 9i or 10i book -- I would not as they would be "mainstream" by then.
.....

They are Yawn factors now, they've existed for many releases and I would think most are aware of them.

free is less expensive than not free

Jim, November 09, 2007 - 2:41 pm UTC

@Tom Fox,
You make my point. With MS you have to buy Small Business Server and then pay something > 0 to get SQL Server. Versus, the poster had stated that for small databases you should use SQL Server because it is cheaper than Oracle. FUD - no, it isn't. For small databases, Oracle Express meets those needs and is free, which is cheaper than not free (paying for SQL Server and Small Business Server).

ORA messages during database upgrades

Björn Olsen, November 09, 2007 - 8:36 pm UTC

With a 1000+ tables database, I would appreciate more descriptive ORA error messages when e.g. adding an already existing column. If you try to add the column "STATUS" (datatype VARCHAR2), and a "STATUS" column already exist but with the NUMBER datatype, it would have been great if this was informed in the ORA-01430 message. Today the only feedback is "column being added already exists in table".

I've started to develop a package that hopefully can be used to improve my database upgrades. The idea is to create all new tables with only the first table column by using the package procedure "CREATE_TABLE", and then add the rest of the table columns one by one by using an "ADD_COLUMN" procedure. Then I can add checks to verify whether the column already exists or not (or whether an already existing column differs in datatype from what I am actually trying to add). All DDL operations within the package are executed via the EXECUTE IMMEDIATE command. This will also make the spool files from the database upgrade much more readable, as the dbms_output in the package will return more information to the user, all on one line.

My package prototype seems to cover my needs, but if this can lead to any unwanted side effects regarding the table segment (or anything else), your comment is very much appreciated.

Br,
Björn
Tom Kyte
November 10, 2007 - 11:41 am UTC

Well, I cannot believe this is a real problem. If you are seriously stating that "we just try to add stuff" like this - well, I'm not sure if I would call that a 'database upgrade' or 'lack of any sort of software configuration management'.

I have to honestly say - you are the first person I've ever heard a request like this from.

This is scary actually, I cannot believe this is a problem you hit frequently (or even occasionally). I cannot imagine version 2.0 of my application trying to create a status column for a version 1.0 table - and a) having it already exist, b) having it already exist with a different data type.

This is not an application upgrade - I don't know what it is, but it isn't an upgrade.

ORA messages during database upgrades

Björn, November 10, 2007 - 7:38 pm UTC

Adding an already existing column was just an example (and maybe a bad example)... There can be several reasons why a database model may differ from one installation to another, even if they are supposed to be identical because they have the same version number. This could be due to poor configuration management (lack of control when having several code tracks, etc.), partial failure of a previous database upgrade, DDLs performed manually due to some type of maintenance task, etc.

What I would like to achieve, is to have the possibility to add a script to the database upgrade, where the script contains all historical statements for one particular table. E.g.:

CREATE TABLE test01...
ALTER TABLE test01 ADD column02...
ALTER TABLE test01 ADD column03...

Multiple runs of the statements above will result in the following ORA messages (in addition to recurrence of the current statement, a blank line and the text "ERROR at line 1:"):
- ORA-00955: name is already used by an existing object
- ORA-01430: column being added already exists in table

The idea is to ensure that the table will be identical in all installations without filling the spool files from the upgrade with lots of ORA messages, which may lead to more important ORA error messages being overlooked by the operator.

Below you will find a sample output from the package:

--create the table with only one column
begin
p_ddl.create_table(
p_table_name=>'table01'
,p_column_name=>'column01'
,p_column_data_type=>'number'
,p_column_data_precision=>9);
end;
/
INFO: Table TABLE01 already exists. No actions needed.

--add the second column in the table
begin
p_ddl.add_col(
p_table_name=>'table01'
,p_column_name=>'column02'
,p_column_data_type=>'varchar2'
,p_column_data_length=>10);
end;
/
INFO: An identical column (TABLE01.COLUMN02) already exists. No actions needed.

--add the third column in the table (a new column for this release)
begin
p_ddl.add_col(
p_table_name=>'table01'
,p_column_name=>'column03'
,p_column_data_type=>'varchar2'
,p_column_data_length=>5);
end;
/
INFO: The column "TABLE01.COLUMN03" was successfully added.

So the main purpose with the package is to improve the readability of the spool files from our database upgrades as they tend to contain lots of the ORA error messages listed above (procedures for adding constraints and indexes will also be implemented in the package). And, it will include the check for detecting any mismatch in column data type as well :-)

Then back to the original question: can you think of any negative side effects for creating the table with only one column in the CREATE TABLE statement, and then add the rest of the columns one by one?

Br,
Björn

Tom Kyte
November 10, 2007 - 7:48 pm UTC

...
Adding an already existing column was just an example (and maybe a bad
example)... There can be several reasons for why a database model may differ
from one installation to another one, even if they are supposed to be identical
as they have the same version number. This could be due to poor configuration
management (lack of control when having several code tracks, etc), partly
failure of a previous database upgrade, DDLs performed manually due to some
type of maintenance task, etc.
.....

if this is "true", then your upgrade is an actual APPLICATION and should not be a loose collection of scripts, it has to be really "smart" to handle and deal with this - and be able to back out changes - whatever. So, it should be a program, NOT a sqlplus script by any means.


sorry - but a simple scripting environment like sqlplus - not going to cut it, you need a PROGRAM here.

adding one column after the other is going to be fine.

Tom Fox, November 10, 2007 - 9:41 pm UTC

**SQL Server = Oracle Express pretty much too - what do they have that Express doesn't really.

Greater than 4GB of data space... :) That's about all I can think of. Speaking of which, why 4GB? 4GB is so tiny these days, especially on 500GB and 1TB drives. Why not allow 20GB or more?

As for the cost, that's relative. It depends on the business revenue and such, but yeah, it seems affordable.

**@Tom Fox,
**You make my point. With MS you have to buy small business server and then pay something > 0 to get
**sqlserver.

You can just buy SQL Server if you wish, but most SMBs buy a server and SBS comes with it. You don't buy SBS and then turn around and buy SQL Server. SQL Server is there, ready to use, for free.

**which is cheaper than not free. (paying for sqlserver and small business server. )

Even with Oracle, you have to get an OS. Sure you can use Linux, which is free, but I don't know many SMBs whose app will run on Linux. Also, many companies want support for it all. I'm sure you can find a consultant to support an app on XE, but don't expect Oracle to.

I'd also like to note your point is correct too. People should buy what they can support (and what their app requires). There's no end-all be-all solution to anything.

SqlServer Vs Oracle

A reader, November 13, 2007 - 6:49 pm UTC

I am the reader who made that statement :

For small databases, where no. of users is small, sqlserver is the best for the price

I am not sure about the exact pricing of various oracle/sqlserver installations - but I guess I am not wrong in assuming that Oracle rdbms is far costlier than sqlserver - on comparable editions

I have worked in Oracle for about 12 years and on sqlserver(including SSIS) for the past 6 months. SQLSERVER works very well for small sized databases where no. of users are small. In fact I guess it performs better than Oracle due to lesser overheads when compared to Oracle. But Oracle scores over Sqlserver for large datawarehouse or when the number of concurrent users is large.

But so long as one need not bother about locking/blocking/deadlock etc and when only very few jobs are running simultaneously at any point of time, sqlserver scores over Oracle
Tom Kyte
November 16, 2007 - 2:46 pm UTC

... but I guess I am not wrong in assuming that Oracle rdbms is far costlier than
sqlserver - on comparable editions ...

no, you guessed wrong, you are wrong.

if you believe you are correct, please back it up with facts. Actual quotable dollar amounts.


... In fact I guess it performs better than Oracle due to
lesser overheads when compared to Oracle. ...

you guessed incorrectly there as well.

Oracle Vs Sql Server

A reader, November 13, 2007 - 7:01 pm UTC

And now it is a fight between

Oracle and MS leaving others - far behind as far as RDBMS market shares goes ...

Any idea what is the current market share of Oracle and Sqlserver ?

Updates and Patches

A reader, November 14, 2007 - 10:32 am UTC

I've been involved with Oracle in one way or another for about 10 years or so. In all that time when you got an upgrade or patch from Oracle, it seemed to be rock solid. Lately, they seem to be going the way of Microsoft and letting the users test the patches.

We recently installed a patch to 10g and it bombed all of our Java out (most of which I wrote). So I got about half-a-million calls about why my stuff was not working. Turned out the patch was bad, and the only database where the code still worked was the one where the patch failed.

It would be most helpful if Oracle had tested the patch before sending it out so we didn't have to go through the pain of testing it ourselves.

And that feels good to get off my chest...

"What don't you like about Oracle?"

Kaido Vaikla, November 17, 2007 - 9:37 am UTC

Hi Tom,

A few things I would like to see in the Oracle database:
- log file rotation
- a role for developers, like there is in htmldb/apex
- an additional option to shut down Oracle, for example:
sql> shutdown immediate after 15sec shutdown abort
i.e., if the database does not come down after X minutes (or seconds), do a shutdown abort

br
Kaido

Two things: 1) Directory Naming 2) Moving the ORACLE_HOME

Khalid, November 18, 2007 - 12:26 pm UTC

1) Directory Naming
It doesn't look like directory naming can be implemented without purchasing a license for App Servers; I will be happy if I am wrong.
2) No easy way to relocate $ORACLE_HOME (symbolic links are not easy to deal with); again, I will be happy if I am wrong.
Tom Kyte
November 21, 2007 - 12:50 pm UTC

stating that 'symbolic links are not easy to deal with' - does not compute.

Licensing for OID

Doug Floyd, November 18, 2007 - 5:46 pm UTC

OID can be used free of charge as long as it is only used for name resolution:

http://download-west.oracle.com/docs/cd/B14117_01/license.101/b13552/editions.htm#CJAHFHBJ

A reader, November 26, 2007 - 4:22 pm UTC

http://www.gartner.com/press_releases/asset_152619_11.html

Stamford,CT - that is where I am from as well :)

But that shows only till 2005.

Maybe their previous versions were far inferior - but with the introduction of SQL Server 2005 and SSIS, I guess MS is reducing the gap between themselves and Oracle?

Responding to followup

Khalid, December 06, 2007 - 1:25 pm UTC

on 1) Directory Naming
Thanks for pointing me to the link; we still need to get in touch with our Oracle rep to enquire about our specific licensing deal with Oracle.

2) Moving the ORACLE_HOME, Responding to Tom's comment:
Tom, as I was not clear about what I was saying, let me rephrase completely. Suppose I have allocated about 5g on a file-system /u01 for a database OH (about 2G), then I install an EM agent in a separate home, but on the same file-system, then I install other app-server related components such as OID, and sooner or later that 5G is not enough. so I want to relocate all Homes to another file-system /u01_big. What I am saying is if I shut down all the Oracle related databases and services, and just copy everything from /u01 to /u01_big and startup everything, and it will not just start working, will it?

About symbolic links, I don't have a very good defense for my comment, but I will try: If I make a symbolic link "ln -s /u01_big /u01/app/oracle" and then copy everything to /u01_big, for one I always have to maintain that symbolic link, and it sort of gets messy from there on maintaining all that ... (I know it still doesn't compute, but whatever ...)





Tom Kyte
December 10, 2007 - 10:26 am UTC

of course not - most anything and everything will break if you MOVE it.

what is involved in "maintaining" a symbolic link? It is not like it gets lonely and wants company or anything - it is just "there"???? You don't "maintain" anything.

Responding to Khalid

Todd Pierce, December 10, 2007 - 11:13 am UTC

If you want to do this.
After the move remount /u01_big at /u01.

Then startup should work.
Tom Kyte
December 10, 2007 - 11:23 am UTC

sure, there are a thousand ways to get file systems "larger" - most of them today would just let you add more disk even (assuming some sort of RAID is in place)

What I'd like to see...

A reader, December 10, 2007 - 12:20 pm UTC

Readonly user for particular schema...

In a live environment, most of the time support staff (or less database-technical people) get read-only access to a particular schema (i.e., by creating another READONLY schema and assigning a role to it, or by granting SELECT on all tables to that READONLY schema).

However, particularly in the maintenance phase or with new changes, DBAs often forget to grant SELECT on new objects to the READONLY user.

Just imagine if we could assign a READONLY schema at the time of creating a new SCHEMA, and let Oracle manage the SELECT grants for all new objects on our behalf...
like create user FOO identified by FOO ...
readonly schema FOO_READONLY
default tablespace foo_tablespace
blah.. blah...
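Until such a feature exists, one workaround is a schema-level DDL trigger that queues the grant (all names here are hypothetical; the grant is submitted as a job because DDL cannot be issued directly inside the trigger):

```sql
create or replace trigger foo_auto_grant
after create on foo.schema
declare
    l_job number;
begin
    if ora_dict_obj_type = 'TABLE' then
        dbms_job.submit( l_job,
            'execute immediate ''grant select on foo."' ||
            ora_dict_obj_name || '" to foo_readonly'';' );
    end if;
end;
/
```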

Metalink and support leaves something to be desired

Edward Stoever, December 13, 2007 - 10:38 am UTC

My biggest problems with Oracle are:

1. Metalink could be better organized. If you are looking for a patch, but you don't know the specific patch number, good luck to you.

2. Some people in the support arena are excellent. They find solutions and don't drop the ball. Others allow support tickets to go on and on and on and on and on....

3. And my all-time biggest complaint is patches that fix patches - ones you don't realize need to be applied until after you have already applied the one that didn't work. I have seen this on Linux and Solaris, and it drives me crazy (5240469 and 5117016 are examples). Finding these things is difficult both for the users AND the Oracle support people. It's a messy way to fix software.

Dropping datafiles from tablespaces

Maheswaran Sampath, April 23, 2008 - 3:32 am UTC

Dropping of individual data files from tablespaces is totally unsupported as of now.

Suppose we have created a tablespace with three 1 GB data files, and while using the tablespace we realize that its size is never going to cross 1 GB. There should be a way to drop the two extra data files alone and migrate their data into the single data file that will be left behind.


Tom Kyte
April 28, 2008 - 9:13 am UTC

Dropping of individual data files from tablespaces is totally unsupported as of
now.


that is an absolutely false statement.


Now, if the datafiles CONTAIN SOME DATA, you cannot drop them (obvious isn't it?)

And in order to fix the problem you describe (assuming the two files DO contain data), it is rather trivial to fix:

a) create new, correctly sized tablespace
b) move segment(s) into it
c) drop old
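In SQL, steps a/b/c would look roughly like this (tablespace, file and segment names are made up; moved indexes must be rebuilt, as a move invalidates them):

```sql
-- a) create a new, correctly sized tablespace
create tablespace small_ts
    datafile '/u01/oradata/db/small01.dbf' size 1g;

-- b) move the segment(s) into it
alter table my_table move tablespace small_ts;
alter index my_table_pk rebuild tablespace small_ts;

-- c) drop the old, oversized tablespace
drop tablespace big_ts including contents and datafiles;
```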

protecting the library cache (the shared pool)

rc, May 03, 2008 - 3:59 pm UTC

You write in your book 'expert Oracle database architecture':

'A single small application that does not use bind variables will cause the SQL of other well-designed applications to get discarded from the shared pool over time. That will cause the well-designed applications to have to hard-parse their SQL all over again as well. You only need one bad apple to spoil the entire barrel.'

It would be very nice if it was possible to prevent this small 'wrong' application spoiling the shared pool.

alter session set caching_in_shared_pool = false;

In an after logon trigger the DBA can decide which applications can cache in the shared pool.
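Short of such a switch, the closest existing knob is cursor_sharing, which a logon trigger can set just for the offending application - a sketch (the account name is invented; this rewrites literals into binds rather than blocking caching outright):

```sql
create or replace trigger fix_bad_app
after logon on database
begin
    if sys_context( 'userenv', 'session_user' ) = 'BAD_APP' then
        execute immediate 'alter session set cursor_sharing = force';
    end if;
end;
/
```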

The #1 Thing About The Oracle Database Which Must Change

A reader, May 16, 2008 - 1:41 pm UTC

Is the 30 character length restriction on object naming...

PERIOD


PLEASE hear this plea for help Oracle developers...
How can the most robust database in the world limit me in the good, clear, long names I want to use for variables, tables, procedures, indexes and constraints?!


Honestly, I do not see how it is possible that they have not adjusted this limit...


Tom Kyte
May 19, 2008 - 3:11 pm UTC

You would like every 3rd party utility, product, tool ever invented to

fail.

It is a rather huge change and will not be undertaken without a ton of thought - this is a massive, huge change that would destabilize virtually *everything out there ever made to date* in a way that would likely break it - not just make it sometimes not work correctly.

30 chars vs longnames

Duke Ganote, May 20, 2008 - 10:24 am UTC

But Oracle already provides for Java class and method names, which can easily exceed 30 characters. For these, Oracle creates a unique "short name" for the Java code element and uses that name for SQL- and PL/SQL-related access. Why not allow developers to create both long and short name?


Tom Kyte
May 20, 2008 - 11:44 am UTC

yes, and java was new and the names in the dictionary are mangled, useless.


Hopefully - you understand how destabilizing this would be to everything. Do you really want this:

C:\DOCUME~1\TKYTE>


the nice old 8.3 names - popping up in the dictionary.


use comments - they are long and available for extended commentary if you like.


Another one..

Arindom, June 12, 2008 - 2:43 pm UTC

We do have a 'direct path load' available to developers.
Why not a 'direct path read'? (In 10.2.0.3 I noticed the 'hidden' parameter _serial_direct_read, but could not get it to work; the full table scan still did a db file scattered read.)
Tom Kyte
June 12, 2008 - 2:54 pm UTC

direct path read is available and will be used by the database anytime it feels like it.

Usually with parallel query, the direct path read reads from disk into the pga bypassing the cache.

But we'll use it when we want to - it is our runtime decision to make.



the difference between the results of a direct path load and a conventional path load is huge, enormous, large. I don't see how you could compare a direct path load vs conventional path load to a db file scattered read vs direct path read. The analogy just isn't there. A direct path load does something fundamentally different data-wise (what you end up with on disk is very different); a direct path read returns the same exact results - there is nothing but maybe (maybe) a performance difference.

In order to do a direct path read - we have to make sure that segment is checkpointed to disk (we are bypassing the cache, the cache has committed changes we need to see that are not on disk) - parallel query does that, normally this is not something you want to have happen before most queries on most systems.

Alexander the ok, June 26, 2008 - 10:11 am UTC

The create materialized view privilege must be granted directly, so even if I have the DBA role I still can't create a mv.....
Tom Kyte
October 16, 2011 - 10:05 pm UTC


if exists ( object_name )

Drew, July 02, 2008 - 2:23 pm UTC

.
Tom Kyte
June 17, 2011 - 3:49 pm UTC


lack of command history

Ravi raj Kumar., July 07, 2008 - 9:41 am UTC

Hi,
Oracle is a great database.
But here are two problems that Oracle people are facing:
1) SQL*Plus does not have support for command history.
2) The "with" clause is not 100% reliable. In particular, it is not giving proper results when used with the "UNION" clause.

I strongly believe that Oracle will take care of these issues.

cheers
Ravi raj Kumar.

Metalink Documents

A reader, July 07, 2008 - 9:43 am UTC

Access to the Metalink documents is limited to corporate users only (or rather, access is restricted to employees of companies which purchased Oracle products).

Though Oracle can restrict opening Service Requests etc. to authorised users, I think the Metalink documents should be available to all for free access... And then maybe you'd get fewer dumb questions ;-)

Alter Table..

A reader, July 09, 2008 - 7:40 am UTC

How About syntax like

ALTER TABLE [TABLENAME] MOVE [OTHER OPTIONS] [REBUILD INDEXES];

To get table moved and indexes rebuilt in one statement ? ?
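For comparison, a sketch of what is required today (assuming a table T owned by the current user): the move marks the table's indexes UNUSABLE, so each one must be rebuilt afterwards.

```sql
alter table t move;

-- rebuild every index the move marked UNUSABLE
begin
  for x in ( select index_name
               from user_indexes
              where table_name = 'T'
                and status = 'UNUSABLE' )
  loop
    execute immediate 'alter index ' || x.index_name || ' rebuild';
  end loop;
end;
/
```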

Changing File Permissions

Rajesh, August 15, 2008 - 1:59 pm UTC

Hi Tom,

I am familiar with creating a file from Oracle through UTL_FILE. But the default permission is rw-r--r--.
Could you suggest any way to change the permission of the file to rwxrwxrwx from Oracle?

Thanks
Rajesh
Tom Kyte
August 20, 2008 - 8:45 am UTC

read about umask

utl_file doesn't create it with any set defaults, the OS does. and you control that with umask - but the DBA won't likely "like" you changing the default...

so, you might need to write a small java stored procedure to explicitly set the file permissions if you really really need them to be rwx for all (x????)
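A sketch of what Tom suggests - a small Java stored procedure wrapping java.io.File. The class and procedure names here are made up, File.setWritable/setExecutable need a Java 6 (or later) database JVM, and the schema would also need file permissions granted via dbms_java.grant_permission:

```sql
create or replace and compile java source named "FilePerms" as
import java.io.File;
public class FilePerms {
    // make the file readable/writable/executable for everyone
    public static void openUp( String path ) {
        File f = new File( path );
        f.setReadable( true, false );
        f.setWritable( true, false );
        f.setExecutable( true, false );
    }
}
/

create or replace procedure set_rwx( p_path in varchar2 )
as language java name 'FilePerms.openUp(java.lang.String)';
/
```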

Explain plan showing table aliases

Mattlt, August 20, 2008 - 12:13 pm UTC

Explain plan should list both the table_name and the alias of that table.  

When the same table is joined multiple times in a query, it's sometimes difficult to distinguish which access path is being used by which "view" of the table.  

Simple example, which one's which???

  1  select a.dummy,b.dummy
  2* from dual a, dual b
SQL> /

D D
- -
X X


Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE
   1    0   NESTED LOOPS
   2    1     TABLE ACCESS (FULL) OF 'DUAL'
   3    1     TABLE ACCESS (FULL) OF 'DUAL'
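For what it's worth, a sketch of what 10g and later can do here: DBMS_XPLAN can annotate each plan line with its query block and table alias via the +alias format modifier.

```sql
explain plan for
  select a.dummy, b.dummy from dual a, dual b;

-- the "Query Block Name / Object Alias" section maps each plan
-- line id to the alias (A or B) it belongs to
select * from table( dbms_xplan.display( null, null, 'typical +alias' ) );
```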


Rajesh, August 28, 2008 - 8:11 am UTC

Hi Tom,

A quick question to you... I am creating a merge statement with Table A and Table B.
Table A has 800 columns and B has some 500 columns.

Merge into A /* 800 columns */
using (select **500 columns** from B)
when matched then
update
when not matched then
insert

Now we have 800 columns in A and 500 columns in B... In this merge,
Oracle handles 800 + 500 = 1300 columns.

When I tried the above merge statement, it did not work.
And I heard something like this:

Oracle can only handle a maximum of 1000 columns. If this is true,
how can I handle 1300 columns in Oracle? Any ideas?

Please suggest.

Thanks
Rajesh
Tom Kyte
August 29, 2008 - 10:13 pm UTC

you'll need to do an


update ( select a.c1 a_c1, b.c1 b_c1 ....
          from a, b
         where a.x = b.x)
set a_c1 = b_c1, a_c2 = b_c2...

insert into a
(x,c1,...c500)
select x,c1,....c500
from b
where b.x NOT IN (select a.x from a);


for that.
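A concrete sketch of that approach, with two columns standing in for the 500 shared ones, and assuming x is declared unique on B (so A is key-preserved in the join view). One caveat: if a.x can be NULL, the NOT IN form above inserts nothing, so NOT EXISTS is the safer phrasing:

```sql
update ( select a.c1 a_c1, b.c1 b_c1,
                a.c2 a_c2, b.c2 b_c2
           from a, b
          where a.x = b.x )
   set a_c1 = b_c1,
       a_c2 = b_c2;

insert into a ( x, c1, c2 )
select b.x, b.c1, b.c2
  from b
 where not exists ( select null from a where a.x = b.x );
```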


about data flow in oracle

Sachin Kuchhal, September 01, 2008 - 2:31 am UTC

Hi Tom

I want to know how data flows in Oracle.

Because that is the main thing: if a person doesn't know how data flows in Oracle, they don't really know anything.

Please provide me some help with that.

Looking forward to hearing from you
Tom Kyte
September 01, 2008 - 12:58 pm UTC

It is the topic of an entire book. There is for free the concepts guide (otn.oracle.com has that under documentation). If you like the way I explain things, my version of it is called Expert Oracle Database Architecture.

clustered partition tables

Amir Cohen, September 19, 2008 - 1:09 pm UTC

it would be nice to have the ability to create partitioned tables where each partition will be stored in a multi table cluster...

Session disconnects after an interval of time

DK, October 24, 2008 - 4:24 am UTC

Hi Tom,

My Oracle session disconnects after some time if it is left idle. While I have Oracle DBA privileges, I do not know which parameter was set by the earlier DBA that causes the session to disconnect automatically.

Can you please let me know what parameter needs to be reset, so that the session remains connected.

Thanks,
DK
Tom Kyte
October 24, 2008 - 1:19 pm UTC

depends on the error you are getting. This could be a network policy (time out inactive connection after X units of time).

It could be a database profile - query:

ops$tkyte%ORA9IR2> select *
  2    from dba_profiles
  3   where profile = ( select profile
  4                       from dba_users
  5                      where username = user )
  6    and resource_name = 'IDLE_TIME'
  7  /

PROFILE                        RESOURCE_NAME                    RESOURCE
------------------------------ -------------------------------- --------
LIMIT
----------------------------------------
DEFAULT                        IDLE_TIME                        KERNEL
UNLIMITED
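If it does turn out to be the profile, the limit is set (by a DBA who wants this behavior) along these lines - a sketch; note that enforcement also requires the resource_limit init parameter to be true:

```sql
-- disconnect sessions idle for more than 30 minutes
alter profile default limit idle_time 30;

-- or put it back to the default shown above
alter profile default limit idle_time unlimited;
```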


@DK re: Session disconnects after an interval of time

Stew Ashton, October 26, 2008 - 3:55 am UTC


On the network side, could there be a firewall between you and the database server? Firewalls will silently drop packets on connections that have been idle too long.
Tom Kyte
October 26, 2008 - 10:20 am UTC

... This could be a network policy (time out inactive connection after X units of time). ...

not so much "drop packets" - which would just cause a re-transmit, but a network policy that times out and kills a connection.

"drop packets"

Stew Ashton, October 26, 2008 - 12:14 pm UTC


We're talking about different things, each possibly relevant. The firewalls I have to deal with do not kill the connection, they silently drop the packets, and yes the packets are re-transmitted and after 9 minutes TCP gives up and the client gets ORA-3113.

Having learned this the hard way I mentioned it just in case, even though the question as phrased doesn't mention this precise behavior.


Session disconnects after sometime

A reader, November 05, 2008 - 3:47 am UTC

What parameters need to be changed if we want the Oracle session to be connected always.
Tom Kyte
November 11, 2008 - 12:23 pm UTC

well, first and foremost, if your SECURITY TEAM set this up, they might be a bit peeved if you subvert their RULE. You might want to talk to your sysadmins about "what changed, what did you do and why, and would you be OK if I break it"


And then you can set sqlnet's expire_time - dead client detection - on the server. The server will then "ping" the clients from time to time, keeping the connection alive by making it appear as if you are using it.


http://docs.oracle.com/docs/cd/B28359_01/network.111/b28317/sqlnet.htm#NETRF209


but you might be violating your system's security protocol in doing so.
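The setting is a server-side sqlnet.ora entry, e.g.:

```
# server-side sqlnet.ora: probe clients every 10 minutes
SQLNET.EXPIRE_TIME = 10
```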

One select, two result cursors

rc, December 16, 2008 - 12:51 pm UTC

This blog http://www.data-miners.com/blog/2008/01/mapreduce-and-sql-aggregations.html got me thinking.

With grouping sets you 'almost' create two distinct result sets with one select statement. Maybe this idea can be expanded to other situations?

You full table scan a big table once and produce two or more cursors, based on select statements with different where clauses, different numbers of columns, and different grouping sets.

This way you can prevent reading the data twice from disc.
Tom Kyte
December 29, 2008 - 11:20 am UTC

it is a single result set however.

Basically - you want to

a) understand what data you need to return
b) understand the capabilities of SQL, in order to be able to phrase the question in the most intelligent way possible.


There are thousands of SQL 'things' - from analytics, to group by cube, rollup, grouping sets, to partitioned outer joins, to connect by, to ..............


Knowledge of the data you need PLUS Knowledge of the SQL you have at your disposal - that is the way to get data efficiently.



You want to think in SETS and have a complete knowledge of SQL - in that fashion, you'll end up packing as much as you can into a single sql statement - having the server perform the least amount of overall work to achieve your goal.


Restrictions on TABLE function

Tony, December 16, 2008 - 4:16 pm UTC

Oracle's wonderful features like the TABLE function always come with so many restrictions - like not being queryable over a DB link. I developed views using a table function but abandoned them when I came to know that the views can't be accessed over a DB link.


Enhance Pl/Sql - More into "SELECT INTO"

Parthiban Nagarajan, December 18, 2008 - 11:01 pm UTC

Hi Tom

Thanks for you and "Alex" for giving an opportunity to tell about something that I want from Oracle.

FORALL enhancements, the error logging clause and BULK_EXCEPTIONS make a programmer's life easier. The addition of the continue statement is really a great thing. We have the exponentiation operator **, which the all-time language legend "C" lacks. We should also steal some other operators like increment (++) and decrement (--) to make ourselves wealthy.

Similarly....
You always suggest wrapping SQL statements that we use frequently within packages. For example, getting ENAME from EMP. The advantage this gives us is code reusability and centralised access to the business logic. But that SELECT INTO a variable makes for some spaghetti code, doesn't it? (Do you like spaghetti? ;)

package emp_pkg
  function get_ename(
    p_emp_no emp.emp_no%type
  )return emp.ename%type
  is
    v_ename emp.ename%type;
  begin
    select ename
      into v_ename
      from emp
     where emp_no = p_emp_no;
    return v_ename;
  exception
    when no_data_found then return null;
  end get_ename;
end emp_pkg;


I hope the following code could convey that this function is intended only to get ename, and that we won't do any other business logic there (logging exceptions, ...). This will also force the programmer to return a value (preventing ORA-6503). And sometimes programmers may write number(5,2) and raise a numeric or value error - but this enhanced select into asks the programmer not to worry about those things... ;). Here we'd have at most a single frequent/favourite/complex SQL select into statement in the block, with no declaration section. Is it similar to a view? I don't know how it would behave when used with bulk collect and/or pipelined functions....

  function get_ename(
    p_emp_no emp.emp_no%type
  )return emp.ename%type
  is
  begin
    select ename
    [ when { others | no_data_found or too_many_rows ... }
      then <some_value>
      when { others | no_data_found or too_many_rows ... }
      then <null> ]
      into return
      from emp
     where emp_no = p_emp_no;
  end get_ename;


Probably you won't appreciate as it handles exceptions without the exception block.... Please comment on / correct this one and apply your art of experience to make the code (perhaps the select into) more beautiful.

Thanks.

Tom Kyte
December 29, 2008 - 2:00 pm UTC

... You always suggest wrapping SQL statements that we use frequently within packages. For example, getting ENAME from EMP....

where have I ever said that?


I am so so so very much on record as being 100% (or more) dead set against any such thing. I hate - despise - 'getter' and 'setter' functions. They lead to the worst code ever in a relational database. They are horrible. Please do not put such words in my mouth.

package emp_pkg
  function get_ename(
    p_emp_no emp.emp_no%type
  )return emp.ename%type
  is
    v_ename emp.ename%type;
  begin
    select ename
      into v_ename
      from emp
     where emp_no = p_emp_no;
    return v_ename;
  exception
    when no_data_found then return null;
  end get_ename;
end emp_pkg;


I *hate* that code, 100%. It would not survive in my design.

calling remote procedures and UDTs

rc, December 19, 2008 - 11:38 am UTC

It would be nice if it was possible to call a procedure over a database link that returns a user defined type.

Cycle Sequence in SQL

Rajesh, December 31, 2008 - 1:51 pm UTC

Hi Tom,

One question to you. We have rank/dense rank function in oracle which generates autonumbers in a sequence based on a column in table. Is there any function or way to generate a cycle of numbers (for eg. from 0 to 3 and again 0 to 3) based on a column ?

Thanks!

Tom Kyte
January 05, 2009 - 10:15 am UTC

... (for eg. from 0 to 3 and again 0
to 3) based on a column ?

...

draw a "picture" of that, not sure what you are looking for...


but mod( dense_rank() .... - 1, 4 ) would make it be 0, 1, 2, 3, 0, 1, 2, 3, ...
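A sketch against scott.emp, assuming the cycle should follow salary order (the -1 shifts dense_rank's 1-based numbering down so the cycle starts at 0):

```sql
select ename, sal,
       mod( dense_rank() over ( order by sal ) - 1, 4 ) cycle_no
  from scott.emp
 order by sal;
```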

Wordwrap

Rajesh, January 05, 2009 - 1:46 pm UTC

Hi Tom,

I am passing a huge text as a parameter value from Oracle to UNIX. Actually, the procedure call is from Unix. The text looks good in Oracle before passing, but after passing to the UNIX shell I noticed word wraps: the text breaks onto a new line every 72 characters. How do I avoid this, and could you please clarify why this is happening?

Thanks!
Tom Kyte
January 05, 2009 - 1:51 pm UTC

I have no idea how you pass anything from Oracle (a database) to Unix (an operating system).

give example

Wordwrap

Rajesh, January 05, 2009 - 3:02 pm UTC

Hi Tom,

My Bad. I am using the following query in sqlplus to concatenate five different strings of length more than 4000 each.

select 'UXCMMD: '||nvl(:RTRNSTRNG1,'0')
||nvl(:RTRNSTRNG2,'')
||nvl(:RTRNSTRNG3,'')
||nvl(:RTRNSTRNG4,'')
||nvl(:RTRNSTRNG5,'') from dual

As the concatenated string will be of length > 20000 chars,
I got the following error..

ORA-01489: result of string concatenation is too long

I then used to_clob function as below..

select 'UXCMMD: '||to_clob(nvl(:RTRNSTRNG1,'0'))
||to_clob(nvl(:RTRNSTRNG2,''))
||to_clob(nvl(:RTRNSTRNG3,''))
||to_clob(nvl(:RTRNSTRNG4,''))
||to_clob(nvl(:RTRNSTRNG5,'')) from dual

The strings were concatenated, but they got wrapped at every 80 chars. It could be the default linesize of sqlplus, but I am not sure. Could you please clarify how to remove this word wrapping? I tried replace(string,chr(10),'') and replace(string,chr(13),'') as well; it didn't work. I presume this word wrap happens due to the to_clob function, but I can't remove to_clob as I may get ORA-01489. Please help me resolve this.

Thanks!
Tom Kyte
January 05, 2009 - 4:30 pm UTC

read about:

set linesize
set long
column cname format a<length>


eg:

set linesize 21000
set long 21000
column data format a21000

select to_clob(rpad('*',4000,'*')) ||
       to_clob(rpad('x',4000,'x')) ||
       to_clob(rpad('y',4000,'y')) ||
       to_clob(rpad('z',4000,'z')) data
  from dual
/



Query compare

Rakesh Bansal, January 06, 2009 - 9:08 am UTC

I wish Oracle had the option below.
Query 1 is taking more time; I tuned it to some extent. Now my problem is that I need to verify that both queries give the same output. I wish Oracle provided an option to check whether two queries are identical in output. Then I would only need to look at the performance part. I just wish.
Tom Kyte
January 06, 2009 - 10:31 am UTC

it would take a semantic equivalence sort of test - you cannot compare outputs, you have to rip the queries apart and reverse engineer the question itself.

You never need to just look for performance, you have to understand the question being asked.


Think of a sql statement like a 100,000 line source code file - because it is very much like that, it is a program, not a simple line of code.


We do have the ability to take your query and suggest ways it might be rewritten equivalently - but using different approaches. However, that comes with the grave warning that "you - you must understand the SQL that was generated - you must verify that the sql is in fact equivalent"

Contd. - Enhance Pl/Sql - More into "SELECT INTO"

Parthiban Nagarajan, January 08, 2009 - 5:47 am UTC

Hi Tom

Sorry for the mistake! Yes, your approach is: if it's possible in SQL, do it in SQL; if not, try PL/SQL, then Java or C. But you also advise using packages and keeping code in the back-end. Somewhere, I misunderstood. Another Oracle Magazine columnist, PL/SQL evangelist Steven Feuerstein, advocates setters/getters. At least 60% of Java is setters/getters - why should it not be in PL/SQL? Though I don't like setters/getters, I'd like to hear your reasons. (I think you are suggesting storing the frequent SQLs as views. Am I right?)
Tom Kyte
January 08, 2009 - 9:36 am UTC

I am adamantly, totally, and 100% against "setters/getters"

Stop thinking procedural code, think in sets, you are using a relational database. You are not using VSAM files.


The massive performance impact of getter/setter functions is the reason.


If you have:

package emp_pkg
function get_ename(
p_emp_no emp.emp_no%type
)return emp.ename%type


you probably have get_address, get_mgr, get_this, get_that, get_the_other_thing and you probably have sql that does


select empno, get_ename(empno), get_address(empno)
from emp_hist
where ....


and you end up executing a TON of silly little queries over and over - making it impossible for the optimizer to do its job (you would always be doing the least efficient do-it-yourself nested loops join - always, it will always require more work, more cpu, more latching, more IO, more of everything)


Use SQL
do not be afraid of SQL


I'm afraid that Steven Feuerstein disagrees on this point.


I'm not afraid, in fact I actively promote, packing as much as you can into a single sql statement. I am not afraid to say "hey, learn this language, learn SQL".

Why so many patches ?

Sanji, January 14, 2009 - 5:59 pm UTC

Hello Tom,

My professional life has revolved around Oracle in the past 11 years and this site has been my source of inspiration ever since.

I have appreciated Oracle all this time and continue to do so..

I do, though, feel that Oracle has started going the Microsoft way in terms of security vulnerability. I'd love to be wrong here in my perception.

Is Oracle that vulnerable that we need to patch our Oracle environments every quarter ?

Your take....

Rgds
Sanji

Alexander, February 06, 2009 - 11:08 am UTC

Flashback tablespace would be pretty awesome.
Tom Kyte
February 06, 2009 - 4:03 pm UTC

well, there is tspitr (tablespace point in time recovery) and it would not be difficult to code

create or replace procedure flashback_tablespace( p_tsname in varchar2 )


basically construct a flashback table command with all of the tables in that tablespace.

but beware, it would of course jump over tablespaces in many cases - indexes, lob segments, overflow segments, table partitions in other tablespaces, etc...

Alexander, February 06, 2009 - 4:11 pm UTC

I would envision it working like tspitr, where you list the dependent/related tablespaces and it would give an error if it found an index or whatever in a different one.

I like the proc idea, but there's also the potential for some of the tables to be aged out of undo, and since it would be generating DDL statements, we would want that to be an atomic operation and not have some tables left out of whack with the others.

Tom Kyte
February 06, 2009 - 4:21 pm UTC

and the flashback table command supports that.


flashback table dept, emp to ..
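A fuller sketch of that multi-table form (row movement must be enabled on each table first; the 15-minute window here is made up for illustration):

```sql
alter table scott.dept enable row movement;
alter table scott.emp  enable row movement;

flashback table scott.dept, scott.emp
  to timestamp systimestamp - interval '15' minute;
```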

Alexander, February 06, 2009 - 4:39 pm UTC

Ok great I did not know you could do that thanks.

Alexander, February 11, 2009 - 9:54 am UTC

Tom, can you help me figure out what's wrong with my proc? I'm stuck; I can't reproduce the run time error with the SQL I am generating.

CREATE OR REPLACE procedure ORACLE.FLASHBACK_USER (p_user in varchar2, p_time in timestamp)
authid current_user
is 
l_sql varchar2(500);
l_result varchar2(500);
begin

for x in (select table_name, owner from sys.dba_tables where owner = p_user)
loop
execute immediate 'alter table ' || x.owner || '.' || x.table_name || ' enable row movement';

end loop;

l_sql := 'select case when rownum = 1 then ''flashback table '' || owner || ''.''  || table_name' 
        || ' when rownum  = (select count(*) from sys.dba_tables where owner =  ' || p_user 
        || ') then owner || ''.'' || table_name ||'
        || ' '' to timestamp to_timestamp (' || p_time || ', mm/dd/yyyy hh24:mi:ss);'''  
        || ' else owner || ''.'' || table_name end ' 
        || 'from (select table_name, owner from sys.dba_tables ' 
        || 'where owner = ' || p_user || ' order by table_name);';
      
dbms_output.put_line(l_sql);
            
execute immediate l_sql into l_result;
execute immediate l_result;

for y in (select table_name, owner from sys.dba_tables where owner = p_user)
loop
execute immediate 'alter table ' || y.owner || '.' || y.table_name || ' disable row movement;';
end loop;
end;
/

SQL>  exec oracle.flashback_user('SCOTT', TO_TIMESTAMP ('02/09/2009 20:00:00', 'MM/DD/YYYY HH24:MI:SS'));
select case when rownum = 1 then 'flashback table ' || owner || '.'  ||
table_name when rownum  = (select count(*) from sys.dba_tables where owner =
SCOTT) then owner || '.' || table_name || ' to timestamp to_timestamp (09-FEB-09
08.00.00.000000 PM, mm/dd/yyyy hh24:mi:ss);' else owner || '.' || table_name end
from (select table_name, owner from sys.dba_tables where owner = SCOTT order by
table_name);
BEGIN oracle.flashback_user('SCOTT', TO_TIMESTAMP ('02/09/2009 20:00:00', 'MM/DD/YYYY HH24:MI:SS')); END;

*
ERROR at line 1:
ORA-00911: invalid character
ORA-06512: at "ORACLE.FLASHBACK_USER", line 24
ORA-06512: at line 1


When I run the query, it works OK (even in this horrible format). The only strange thing is that in the output SCOTT does not show up as a string, even though the parameter is.

SQL> select case when rownum = 1 then 'flashback table ' || owner || '.'  ||
  2  table_name when rownum  = (select count(*) from sys.dba_tables where owner =
  3  'SCOTT') then owner || '.' || table_name || ' to timestamp to_timestamp (09-FEB-09
  4  08.00.00.000000 PM, mm/dd/yyyy hh24:mi:ss);' else owner || '.' || table_name end "Flashback sql"
  5  from (select table_name, owner from sys.dba_tables where owner = 'SCOTT' order by
  6  table_name);

Flashback sql
----------------------------------------------------------------------------------------------------------------------------------------------
flashback table SCOTT.BONUS
SCOTT.DEPT
SCOTT.EMP
SCOTT.SALGRADE to timestamp to_timestamp (09-FEB-09
08.00.00.000000 PM, mm/dd/yyyy hh24:mi:ss);

Tom Kyte
February 11, 2009 - 12:58 pm UTC

ops$tkyte%ORA11GR1> declare
  2          l_n number;
  3  begin
  4          execute immediate
  5          'select 1 from dual semi_colons_are_not_sql;' into l_n;
  6  end;
  7  /
declare
*
ERROR at line 1:
ORA-00911: invalid character
ORA-06512: at line 4


ops$tkyte%ORA11GR1> declare
  2          l_n number;
  3  begin
  4          execute immediate
  5          'select 1 from dual semi_colons_are_not_sql' into l_n;
  6  end;
  7  /

PL/SQL procedure successfully completed.




';' means something to sqlplus

';' is not part of a sql statement

Alexander, February 11, 2009 - 2:22 pm UTC

Thanks, you're right.

I completely missed something else though. I don't know how I can handle this

ERROR at line 1:
ORA-01422: exact fetch returns more than requested number of rows
ORA-06512: at "ORACLE.FLASHBACK_USER", line 25
ORA-06512: at line 1


execute immediate l_sql into l_result;


l_sql returns multiple rows; what's the best way to solve this? I tried experimenting with bulk collect, but it doesn't look like you can execute an assoc array the way you can a string.
Tom Kyte
February 11, 2009 - 3:06 pm UTC

you would use a cursor

declare
  l_cursor sys_refcursor;
begin
  open l_cursor for 'your dynamic sql';
  fetch l_cursor into l_c1, l_c2, l_c3, .....;
  loop
      exit when (l_cursor%notfound);
      process row
      fetch l_cursor into l_c1, l_c2, l_c3, .....;
  end loop;
  close l_cursor;
end;




To Alexander

Reader, February 12, 2009 - 12:17 am UTC

I would also suggest that when using dynamic SQL you need to use bind variables. Search this site on the importance of bind variable.
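A minimal sketch of that advice applied to the ref cursor pattern above - the value is bound rather than concatenated into the SQL text:

```sql
declare
    l_cursor  sys_refcursor;
    l_name    user_tables.table_name%type;
begin
    open l_cursor for
        'select table_name from user_tables where table_name like :pat'
        using '%EMP%';
    loop
        fetch l_cursor into l_name;
        exit when l_cursor%notfound;
        dbms_output.put_line( l_name );
    end loop;
    close l_cursor;
end;
/
```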

Alexander, February 19, 2009 - 11:25 am UTC

You know what would be really nice: the ability to grant create/alter/drop on specific schemas to specific users, to avoid having to grant overkill system privileges. Many times I encounter situations where a vendor product creates objects in multiple schemas (in a perfect world this would not happen, but it does). It would be nice to be able to grant on just its schemas. E.g.

grant create table to x, y;
Tom Kyte
February 19, 2009 - 1:44 pm UTC

I agree, it would be useful to have:

grant create table in schemas(a,b,c) to x;
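For the record, what exists today is the all-or-nothing version the review is complaining about:

```sql
-- today's blunt instrument: lets x create tables in EVERY schema
grant create any table to x;
```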


A reader, March 02, 2009 - 9:26 am UTC

Raise an error when adding a number to NULL, as the result is NULL and doesn't make any sense anyway. For example:

declare
l_dummy number;
begin
l_dummy := l_dummy + 1; -- typical error
end;






Tom Kyte
March 03, 2009 - 3:30 pm UTC

it is the defined behavior of NULLs to return NULL when operated on.

Wouldn't it really be annoying otherwise, in general?


ops$tkyte%ORA10GR1> select comm, comm+1 from scott.emp;

      COMM     COMM+1
---------- ----------

       300        301
       500        501

      1400       1401




         0          1





14 rows selected.



Now, if you have some procedural code that CANNOT allow one to be added to null


ops$tkyte%ORA10GR1> create or replace package utils
  2  as
  3          subtype myNumber is number NOT NULL;
  4  end;
  5  /

Package created.

ops$tkyte%ORA10GR1>
ops$tkyte%ORA10GR1> create or replace function add_em( x in utils.myNumber, y in utils.myNumber ) return number
  2  as
  3  begin
  4          return x+y;
  5  end;
  6  /

Function created.

ops$tkyte%ORA10GR1> show errors
No errors.
ops$tkyte%ORA10GR1>
ops$tkyte%ORA10GR1> exec dbms_output.put_line( add_em( 100, 1 ) );
101

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR1> exec dbms_output.put_line( add_em( null, 1 ) );
BEGIN dbms_output.put_line( add_em( null, 1 ) ); END;

                                    *
ERROR at line 1:
ORA-06550: line 1, column 37:
PLS-00567: cannot pass NULL to a NOT NULL constrained formal parameter
ORA-06550: line 1, column 7:
PL/SQL: Statement ignored


PRAGMA for transaction owners

Matthew McPeak, March 03, 2009 - 6:46 pm UTC

Since this seems to be an Oracle wish-list, I'd like to see a PL/SQL pragma for transaction ownership. Like: "PRAGMA transaction_owner".

A PL/SQL block with this pragma would cause an error if:
1) A commit or rollback is issued in any other block called (directly or indirectly) by the block.
2) It is called (directly or indirectly) from any block that has the same pragma. I.e., one transaction owner cannot call another.
3) It exits without committing or rolling back (like autonomous_transaction)

It'd be really great if code with potential violations wouldn't even compile. (Java has spoiled me).

The idea here is to eliminate situations where you think you've got your transaction control down, but fail to realize that another developer stuck a commit into something that you are calling.

Do you guys think it could be useful, or even done?

Tom Kyte
March 03, 2009 - 9:40 pm UTC

I've more than once asked for commit and rollback to be abolished from the language entirely. Reasoning: 99.9999999999999999% of developers don't "get" what a transaction is, or does, or when you should commit or when you should rollback.

We can disable commit in a procedure....
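That setting is real - a sketch:

```sql
-- after this, a COMMIT issued from inside a stored procedure raises
-- ORA-00034: cannot COMMIT in current PL/SQL session
alter session disable commit in procedure;
```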

oracle

A reader, March 03, 2009 - 10:42 pm UTC

<<Reasoning: 99.9999999999999999% of developers don't "get" what a transaction is, or does, or when you should commit or when you should rollback. >>

I am surprised that you called these "developers".

If a developer does not know what a transaction is, he is in the wrong business. Although I agree with you:

this field is filled with incompetent people that fit your criteria above.

Tom Kyte
March 04, 2009 - 1:03 pm UTC

It is not so much that they are incompetent - they are taught, physically taught the following type of "facts"

o "databases are black boxes"

o if you check 'transactional' in hibernate - all is taken care of, you don't need to think about it

o The application is the end goal

o commit frequently to save resources and make things faster (a sql-server'ism for sure)

o we might want to change the database, so don't bother learning too much about it; you cannot use its features anyway.

o don't join, sort or filter in the database, we can do it faster in the client

o etc......




I learned differently, starting with this book:

http://www.amazon.com/exec/obidos/redirect?tag=asktom03-20&path=tg%2Fdetail%2F-%2F1558601902

You know what you are taught.

Disagree with your statement :

A reader, March 04, 2009 - 9:15 am UTC

"Reasoning: 99.9999999999999999% of developers don't "get" what a transaction is, or does, or when you should commit or when you should rollback."
Tom Kyte
March 04, 2009 - 1:31 pm UTC

That's ok, you don't have to. I just report what I see.


oracle

sam, March 04, 2009 - 4:07 pm UTC

Tom:

Do you think they were taught wrong intentionally to serve a specific purpose?

Just curious, how many books have you read in your lifetime.

it seems you have more knowledge than a Cray supercomputer. There must be a lot of reading/learning/practice to get to that point.

I sometimes wonder how you remember all of it.
Tom Kyte
March 04, 2009 - 4:24 pm UTC

It is a cascading thing, you teach what you were taught.


I got very lucky, I had an excellent, patient mentor in the beginning
http://asktom.oracle.com/Misc/what-about-mathematics.html

99.9999999999999999% of developers

Sokrates, March 04, 2009 - 5:04 pm UTC

...

assuming there are 7 billion people in the world (which is an exaggeration) and all of them call themselves "developer" (which is an enormous exaggeration), how many then get what a transaction is, or does ?

sql > select 7 * power(10, 9) * (100 - 99.9999999999999999) / 100 from dual;

7*POWER(10,9)*(100-99.9999999999999999)/100
-------------------------------------------
                                 ,000000007


so, most likely, not even you !
:-)
Tom Kyte
March 04, 2009 - 8:31 pm UTC

but we all do understand what hyperbole is :)

Reviewer

A reader, March 11, 2009 - 1:38 am UTC

they are not teaching us the DBMS part which is inportant for us b'coz in our college DBMS is more important tha SQL and PLSQL
Tom Kyte
March 11, 2009 - 3:03 am UTC

spelling should be a priority as well - because, communication is extremely important in our industry.


How can you do "DBMS" in a RDBMS without SQL?

A reader, March 11, 2009 - 8:50 am UTC

"I agree, it would be useful to have:
grant create table in schemas(a,b,c) to x;"

So, since you agree, any chance this will be available in the next Oracle release? :)

A reader, April 06, 2009 - 2:34 pm UTC

Well in sqlplus, if there is a magic way to automatically change "form" to "from", that will be very useful.
And I believe many would agree with this....
Tom Kyte
April 13, 2009 - 10:53 am UTC

hah, I *never* type form

oh, wait, probably every other time I do that - indeed, that would be neat :)

Problem with Insert

KU, April 30, 2009 - 3:16 am UTC

Hi Tom,

I am trying to do a direct insert.
insert /*+append parallel(post,12)*/
into table_name
select --- from <<joins>>

This query would result in inserting around 4 million records.

Ideally it should not take more than 3 hours. But when I am running it, it is taking more than 20 hrs. Could you please let me know what we need to check.

The select statement executes very fast.

Is this something regarding rollback segments or buffer size.

Thanks a lot for your help.

KU
Tom Kyte
April 30, 2009 - 10:11 am UTC

what is "very fast"

4 million records is somewhere between 4mb and 4petabytes (4 million is not useful by itself).

is the table in question indexed?

have you compared the plan of the "fast select" with the plan of the "slow insert" to see if they are the same?

are you sure you want to do parallel?

have you traced this?
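For the tracing step, a minimal sketch (assuming 10g or later, and that the insert can be run from your own session; tkprof is then used on the resulting trace file):

```sql
-- enable extended SQL trace (10046 with wait events) for this session
alter session set timed_statistics = true;
exec dbms_monitor.session_trace_enable( waits => true, binds => false )

-- run the slow statement here, e.g.
-- insert /*+ append */ into table_name select ... from <<joins>>;

exec dbms_monitor.session_trace_disable
-- then format the trace file with tkprof and compare the actual
-- insert plan/waits against the plan of the "fast select"
```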


Alexander, May 01, 2009 - 4:27 pm UTC

Correct me if I'm wrong, but I don't think you can restore a tablespace from one database to another (at least not the same way you can restore online), e.g.

sql 'alter tablespace xxx offline';
restore tablespace xxx;
recover tablespace xxx;
sql 'alter tablespace xxx online';


This would be an amazing addition, so you could back up one tablespace on database A, copy it, and restore it into database B while both databases are up.
Tom Kyte
May 02, 2009 - 12:02 pm UTC

we call that a transportable tablespace and yes, you can do that since 10g.

http://docs.oracle.com/docs/cd/B19306_01/backup.102/b14191/ontbltrn.htm#CACFBHIE
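The basic flow, sketched here with hypothetical tablespace and file names (10g and later; the expdp/impdp lines run at the OS prompt, not in sqlplus):

```sql
-- on the SOURCE database: make the tablespace read only
alter tablespace xxx read only;

-- at the OS on the source host, export just the metadata:
--   expdp system directory=dpump_dir dumpfile=xxx.dmp transport_tablespaces=xxx

-- copy the datafile(s) and the dump file to the target host, then at the OS there:
--   impdp system directory=dpump_dir dumpfile=xxx.dmp
--         transport_datafiles='/u01/oradata/targ/xxx01.dbf'

-- finally, on the source (and target, once plugged in):
alter tablespace xxx read write;
```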

A reader, May 09, 2009 - 6:44 am UTC

Mr Tom;

Is there any sql query that shows the which user or session has the highest process?

I can see the total amount of processes from :
select count(*)from v$process;

I also want to know if user or session uses the most of these processes?

Tom Kyte
May 11, 2009 - 4:41 pm UTC

define what you mean by "the highest process"

Ok, it sort of seems you want to know "what user has the most processes 'owned' by them"

Given that the relationship between a process and a user session is many to many, this is a confusing request (why do you need this???)


A session can use many processes in its life (shared server hops from process to process)

A process can hold zero, one or more sessions *concurrently*.

A process can service many sessions serially (shared server again).

So, a session can be associated with 0, 1 or more processes over its life.
And a process can be associated with 0, 1 or more sessions over its life.

ops$tkyte%ORA11GR1> select nvl(username,'the database itself'), count(distinct paddr) from v$session group by username;

NVL(USERNAME,'THEDATABASEITSEL COUNT(DISTINCTPADDR)
------------------------------ --------------------
OPS$TKYTE                                         1
the database itself                              18




that'll give you a snapshot of "right now", but it seems rather "not useful" in general - why do you want this metric?

Back to client/server for forms

Mark Bell, May 22, 2009 - 6:23 am UTC

I don't like the fact that upgrading to web forms (from 4.5) has made users of our system much more aware of network latency (especially users in different countries from the servers).

Thinking about it I realise this is because the old 4.5 forms were executing a query and caching several rows of results, which the user could then scroll through very quickly. Now, every press of 'next record' means a wait while the app server is asked to display the next record - this makes the app seem much slower to users.

Are there any plans to build an applet version of a forms client so that user response times are similar to the old client/server days?
Seems to me that this would be great (if it is possible) or is a forms client just too big to be an applet?

- Mark
Tom Kyte
May 23, 2009 - 10:27 pm UTC

you are just upgrading from 4.5 today? 4.5 was some 14 years ago....

forms is not the focus of future development - it will not disappear, but it is not the "wave of the future" as it is. I do not see any major architectural changes in its future.

foreign key join

Aalbert, August 07, 2009 - 6:19 am UTC

Just like a reader from Mississauga said, what I'd like to see is a construct for easy foreign key joins. I agree with him that 'JOIN USING' doesn't cut it for a number of reasons.
My experience is that most joins are between tables that have a foreign key relationship. And most of the other joins are just shortcutting somewhere, probably because you didn't normalize properly.

So I'm proposing what I'd like to call the 'FOREIGN KEY JOIN'.
SELECT *
FROM
  master
  LEFT OUTER JOIN slave USING FOREIGN KEY

Should any ambiguity arise as to what key to use, just specify the key to use.
SELECT *
FROM
  master
  LEFT OUTER JOIN slave USING FOREIGN KEY fk_master_slave

tokenizer + number generating table

rc, August 20, 2009 - 4:34 pm UTC

1.
It would be nice if there was a fast tokenizer in PL/SQL and SQL that can split varchar2's and clob's into parts. You should not only be able to split comma separated strings but also on other separators like ; or a space.

2.
I like the connect by level <= n method but a fast sys table that generates numbers would be better.

select level l
from dual connect
by level <= 1000000

is a little slow. (I think because it uses recursion?)
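Until such a built-in exists, a pipelined table function can serve as a number generator without CONNECT BY; a minimal sketch (type and function names are made up here):

```sql
create or replace type num_tab as table of number;
/
create or replace function gen_numbers( n in number )
  return num_tab
  pipelined
as
begin
  -- pipe each number out as it is produced, no recursion,
  -- no need to materialize the whole set first
  for i in 1 .. n loop
    pipe row ( i );
  end loop;
  return;
end;
/
-- usage:
select column_value l from table( gen_numbers( 1000000 ) );
```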







SELECT FROM (A PL/SQL Cursor)

Parthiban Nagarajan, August 25, 2009 - 1:50 am UTC

Hi Tom

In Pl/Sql, all SQL statements are treated as cursors (either implicit / explicit).
In an inline-view, we do have a SELECT inside a FROM clause
i.e., a cursor or a result-set inside a FROM clause.
Hence, how about having a Pl/Sql feature to

SELECT ... FROM (<cursor_variable>) alias_name
[WHERE alias_name. ...]

Is it looking meaningful to you?

;) Thanks in Advance
Tom Kyte
August 25, 2009 - 10:01 am UTC

It would be true to say:

In all languages, all APIs, even tools like sql*plus, all SQL statements are treated as cursors




... SELECT ... FROM (<cursor_variable>) alias_name
[WHERE alias_name. ...]

Is it looking meaningful to you?
....

no, because a cursor is a cursor, not a table - not a row source

Alexander, August 25, 2009 - 4:55 pm UTC

I think there should be a truncate table permission (Sybase does) instead of having to grant drop any table to truncate your OWN tables.
Tom Kyte
August 25, 2009 - 8:43 pm UTC

all enhancement requests are done via metalink...

For rc - 20 Aug 2009

A reader, August 25, 2009 - 9:25 pm UTC

SQL> create table somestrings ( n number, str varchar2(4000) );

Table created.

SQL> insert into somestrings values ( 1, 'nothingtosplit' );

1 row created.

SQL> insert into somestrings values ( 2, 'a b     cdef g h ij' );

1 row created.

SQL> insert into somestrings values ( 3, null );

1 row created.

SQL> insert into somestrings values ( 4, '              xyzt               ' );

1 row created.

SQL> commit;

Commit complete.

SQL> select n, substr ( s, b + 1, e - b - 1) r from
  2  ( select s.n n,
  3           s.s s,
  4           instr ( s.s, ' ', 1, c.c ) b,
  5           instr ( s.s, ' ', 1, c.c + 1 ) e
  6  from
  7  ( select n, ' ' || str || ' ' s from somestrings ) s,
  8  ( select level c from dual connect by  level <= 2000 ) c
  9  where instr ( s.s, ' ', 1, c.c + 1 ) > 0 )
 10  where trim ( substr ( s, b + 1, e - b - 1) ) is not null
 11  order by n, b;

         N R
---------- ------------------------------------------------------------------
         1 nothingtosplit
         2 a
         2 b
         2 cdef
         2 g
         2 h
         2 ij
         4 xyzt

8 rows selected.


There's a varchar2 version of a string tokenizer, splitting on space only. Use translate to allow a set of delimiters.

Not sure this would perform for millions of strings, but seemed surprisingly fast for a quick and dirty spellcheck on approx 50,000

Things I don't like in Oracle:date datatype

lh, September 10, 2009 - 2:34 pm UTC

Column of type date currently doesn't hold date; it holds date and time.

When applications refer to this column in where clause
in style "date_column = to_date('20090910','yyyymmdd')"
this doesn't give correct answers if column holds also time part (eg. some accidentally populated date_column with sysdate function).

This can cause some hard to find problems and erroneous application behavior.
(Yes, I know that this can be prevented by having constraint with
condition: date_column = trunc(date_column). )

Could there be datatypes:
- timestamp , date and accurate
- date , holding date and time
- puredate, holding just date


Tom Kyte
September 14, 2009 - 11:39 am UTC

... When applications refer to this column in where clause
in style "date_column = to_date('20090910','yyyymmdd')"
this doesn't give correct answers if column holds also time part (eg. some
accidentally populated date_column with sysdate function).
...

wow, I'm flabbergasted. Not at the date type, but at the expectation here.


A date is a date, dates include times

you: Ok, it is agreed, we will meet - you set the date
me: Monday, Sept 14th, 2009 at 15:00


If the application is to store just a DAY, the application would store just sept 14th, 2009.

.. This can cause some hard to find problems and erraneous application behavior. ...

only if the person writing the code doesn't have a clue as to the data they are querying - but then I'd say that is true of ANYTHING.


there is a puredate if you want one

......,
dt date check(trunc(dt,'d')=dt),
......



Things I don't like in Oracle: between operator with dates and timestamps

lh, September 10, 2009 - 2:41 pm UTC

Many people like to use between operator in where clause when
comparing dates and timestamps.

However it very cumbersome (and errorprone) to have
code like
date_column between to_date('20090910','YYYYMMDD') and
to_date('20090910235959','YYYYMMDDHH24MISS').

This can be overcome by not using between; instead use >= and < operators; but between is somewhat natural to people.

Could there be some easy way to express the upper limit ?
Tom Kyte
September 14, 2009 - 11:40 am UTC

... However it very cumbersome (and errorprone) ...

cumbersome? not really

errorprone? it is SOFTWARE, all of it is errorprone unless you do it right.


I prefer now to use >= and < since timestamps are out there (and 59 doesn't work)
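Spelled out, the half-open form looks like this; unlike BETWEEN ... 235959, it also catches fractional seconds if the column is ever a timestamp:

```sql
select *
  from t
 where date_column >= to_date( '20090910', 'yyyymmdd' )
   and date_column <  to_date( '20090910', 'yyyymmdd' ) + 1;
```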

Things I don't like in Oracle: lack of 'conditional' indexes

lh, September 10, 2009 - 3:05 pm UTC

Recently I have tried to make big third party application to work with acceptable performance.

The problem is that one cannot touch the code. Only have some index changes, if the application provider accepts them.

This is very generic application with lots of indexes.
Typically indexes are consisting of only one column.
And most of the rows don't even have any value in that column.

During the (very frustrating) performance project I have sometimes wondered if one could have 'conditional' indexes.
Index would only have entry for the row if first column of the index would have not null value.
Then one could have more columns in the index without growing it enormously. (The application is built using surrogate keys. Typically the joins reference only one column plus the surrogate key from a table. If the index on this column could also hold the surrogate key, the system would work faster.)

This kind of index: index(column1, decode(column1,null,null,column2)) could be made with function based indexes. But because one cannot change the code, one cannot make the index 'work'.

If one could somehow express this index structure so that
optimizer could utilize this index when referencing to column1.


yes, I know: this is minor thing.
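For what it's worth, a sketch of the function-based construct described above (hypothetical names): a b*tree index simply does not store entirely-NULL keys, and because the leading column is a plain column, unchanged code can still drive off it:

```sql
-- rows where column1 is null produce an all-null key and are not indexed
create index t_cond_idx
    on t ( column1, case when column1 is not null then column2 end );

-- unchanged application code can still use the leading column:
--   select ... from t where column1 = :x;

-- but having column2 answered from the index alone would require the
-- query to reference the same expression - which is exactly the
-- "cannot touch the code" problem described above
```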

Sridhar, October 02, 2009 - 12:55 pm UTC

One thing I would like to see in Oracle is
the use of cursors within SQL query joins. For example, I should be able to declare a cursor within my PLSQL program (or stored at the database level) and just use that cursor in any SQL query as a table.

A little bit more whining...

rc, October 06, 2009 - 1:32 pm UTC

This example http://blogs.oracle.com/datawarehousing/2009/10/in-database_map-reduce.html is great but it would be even better with a built-in tokenizer.

Implicit grants

Nenad, November 11, 2009 - 11:35 am UTC

Hi,
We are trying to stop the use of generic accounts (eg system) and those of
application owners. We prefer that DBAs connect as themselves to create
objects in object owning schemas. It makes for a simple separation and an
easier to follow audit trail and is a company standard.

We have come unstuck on creation of roles. When a DBA creates a role they are
implicitly granted it. This is in violation of our separation of duty standards.
I realize the dba doesn't need the roles, as he is a dba, but we end up with
unauthorised grants.

eg

sys@DEVP4 > create user new_dba identified by new_dba;

User created.

sys@DEVP4 > grant dba,create session to new_dba;

Grant succeeded.

sys@DEVP4 > conn new_dba/new_dba;

Connected.

new_dba@DEVP4 > create role new_role;

Role created.

new_dba@DEVP4 >select *
2 from dba_role_privs
3 where granted_role='NEW_ROLE';

GRANTEE GRANTED_ROLE ADM DEF
---------------- --------------- --- ---
NEW_DBA NEW_ROLE YES YES

We keep our system and application owner schemas locked. We don't want to have
to unlock them to create roles.

What can we do here?

Thanks
Tom Kyte
November 15, 2009 - 12:34 pm UTC

you could create a stored procedure that created the role.

ops$tkyte%ORA10GR2> create user a identified by a;

User created.

ops$tkyte%ORA10GR2> create user b identified by b;

User created.

ops$tkyte%ORA10GR2> grant create procedure, create session, create role to a;

Grant succeeded.

ops$tkyte%ORA10GR2> grant dba to b ;

Grant succeeded.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> connect a/a
Connected.
a%ORA10GR2> create or replace procedure create_role_as_a( p_role in varchar2 )
  2  as
  3  begin
  4          execute immediate 'create role ' || dbms_assert.SIMPLE_SQL_NAME(p_role);
  5          dbms_output.put_line( 'Role ' || p_role || ' created.' );
  6  end;
  7  /

Procedure created.

a%ORA10GR2> connect b/b
Connected.
b%ORA10GR2> exec a.create_role_as_a( 'new_role' );
Role new_role created.

PL/SQL procedure successfully completed.

b%ORA10GR2>
b%ORA10GR2>
b%ORA10GR2> connect /
Connected.
ops$tkyte%ORA10GR2> select * from dba_role_privs where granted_role='NEW_ROLE';

GRANTEE                        GRANTED_ROLE                   ADM DEF
------------------------------ ------------------------------ --- ---
A                              NEW_ROLE                       YES YES

ops$tkyte%ORA10GR2>

<b>but remember, DBA is too privileged for most people anyway, B didn't need execute on the procedure (dba has execute any), B can just grant themselves the role later if they want:</b>

ops$tkyte%ORA10GR2> connect b/b
Connected.
b%ORA10GR2> grant new_role to b;

Grant succeeded.

b%ORA10GR2> select * from dba_role_privs where granted_role='NEW_ROLE';

GRANTEE                        GRANTED_ROLE                   ADM DEF
------------------------------ ------------------------------ --- ---
B                              NEW_ROLE                       NO  YES
A                              NEW_ROLE                       YES YES


adderek, November 17, 2009 - 7:02 pm UTC

Hi Tom,
List of my wishes/problems:

1. DDL: Trashbin/flashback after truncate.
2. DB itself: Fix the limit on the object (table) name when partitioning: a sub-partition name might become really long (ex. MyTableAlmostMaxLength_part1_subpart1).
3. PL/SQL: Better nesting of the code: There can be a sub-sub-sub-sub-procedure inside a huge (thousands of lines) package body. Navigation through the code is difficult. In JAVA we can create directories for every group of classes.
4. PL/SQL: Finally block.
5. PL/SQL: Built-in code versioning with optional storage on the filesystem.
6. PL/SQL: Option (PRAGMA?) to compile PL/SQL as case sensitive (table/column/... names, variables, etc.).
7. PL/SQL: EXECUTE IMMEDIATE with named bound variables.
8. PL/SQL: RAISE that would allow error handling with re-raising the exception without losing the current exception stack.
9. PL/SQL: Ability to read current call stack as collection instead of VARCHAR2.
10. PL/SQL: Ability to auto-compile code (and its dependencies) when called. Currently it does the compilation of a single unit of the code without executing until everything is compiled.
11. PL/SQL: Built-in logging mechanism like LOG4J - I am currently using my own PL/SQL code and there are some free JAVA implementations as well.
12. Some built-in mechanism for code unit-testing.
13. Oracle SQL Developer: Better handling of awkward types like CLOBs.
14. Built-in description for the objects (something like longer and more descriptive names).
15. SQL BOOLEAN type - to have TRUE/FALSE inside a column instead of having CHAR(1) with 'Y'/'N'. At some point it often evolves into having full alphabet and then huge tables of combinations for several columns - which is unmaintainable.
16. PL/SQL: Better named blocks: To have "PROCEDURE p","BEGIN p","EXCEPTIONS p","END p".
17. ENUM type - to limit possible values that someone might insert into a column (which can be currently done with ex. dictionary table as FK).
18. Warning on every use of hard-coded formats ('01/02/03' for 2002-03-01 date... '1234^567' for 1234+(567/1000) ).
19. Per-user schema with shared tables (or some other objects).
20. Cross-schema dependencies. Let user A create a type T. Let user B create table with A.T column. Try to drop type A.T from the user A... not possible because user A has no access to data of the user B.
21. PL/SQL: Ability to trace package call stack by sub-sub-sub-procedure's name instead of "my_pkg.my_proc line 99999".

Tom Kyte
November 23, 2009 - 2:39 pm UTC

2) don't know how you would "fix" that?

3) don't know what you mean by that. You can use directories outside of the database all you want, we don't care. You can organize your source code in the file system the same way you do for java with your plsql stuff - nothing stops you?

5) I think that will always be outside of the realm of the database. source code control is fairly specialized - branching, merging, diffing, etc - it takes a lot more than just storing each intermediate copy of a procedure, a lot more.

6) never see that happening, ever. That was the one thing I really hated about C

10) thank goodness. since you of course compile only when installing and never again - never day to day, minute to minute anyway. I'm not sure what you mean by that one at all?

11) just like log4j is a free one - a utility. There are dozens out there ( google log4plsql for example)

13) what is it missing, that is way too vague

16) you have that already. <<label>>

19) don't know what that means?
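For point 16, the existing `<<label>>` syntax already gives named blocks, including a matching name on END:

```sql
declare
  n number := 0;
begin
  <<outer_block>>
  begin
    <<inner_block>>
    begin
      n := n + 1;
    end inner_block;   -- the END can repeat the block's label
  end outer_block;
end;
/
```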


What I would like

Zdrawko, December 23, 2009 - 9:15 am UTC

Things that may be useful in my (humble) opinion:

1) Ability to create index on partition of table (not the whole table)
2) Or create index without instant collection of statistics (that takes place in newer Oracle versions).

Why? Because sometimes I do not have time to wait until statistics for all partitions (e.g. with billions of rows) are gathered, and I know that I need to process only data from a specific partition.

3) Extended sqlplus input (Because of "Input is too long (>2499 char)")
Tom Kyte
December 31, 2009 - 12:01 pm UTC

1) exists in 11g Release 2.

http://antognini.ch/2009/11/zero-size-unusable-indexes-and-the-query-optimizer/

2) have you measured any sort of impact from this? the statistics are computed during the index build - not after, not a re-read, as part of the processing.

Hmmm

A reader, January 04, 2010 - 9:28 am UTC

so, in order to
create an index on partition of table (not the whole table),
you first have to create an index on the whole table and then mark all unwanted partitions unusable
- that's surely only of some use, especially when the table is fairly large and you only want the index on a small amount of the table
Tom Kyte
January 04, 2010 - 12:17 pm UTC

create an EMPTY index. You saw one example, you didn't think to try other methods? Or ask "how can we do this to index just one partition from the get go"?


Here is his example modified:

ops$tkyte%ORA11GR2> CREATE TABLE t (
  2    id NUMBER NOT NULL,
  3    d DATE NOT NULL,
  4    n NUMBER NOT NULL,
  5    pad VARCHAR2(4000) NOT NULL
  6  )
  7  PARTITION BY RANGE (d) (
  8    PARTITION t_jan_2009 VALUES LESS THAN (to_date('2009-02-01','yyyy-mm-dd')),
  9    PARTITION t_feb_2009 VALUES LESS THAN (to_date('2009-03-01','yyyy-mm-dd')),
 10    PARTITION t_mar_2009 VALUES LESS THAN (to_date('2009-04-01','yyyy-mm-dd')),
 11    PARTITION t_apr_2009 VALUES LESS THAN (to_date('2009-05-01','yyyy-mm-dd')),
 12    PARTITION t_may_2009 VALUES LESS THAN (to_date('2009-06-01','yyyy-mm-dd')),
 13    PARTITION t_jun_2009 VALUES LESS THAN (to_date('2009-07-01','yyyy-mm-dd')),
 14    PARTITION t_jul_2009 VALUES LESS THAN (to_date('2009-08-01','yyyy-mm-dd')),
 15    PARTITION t_aug_2009 VALUES LESS THAN (to_date('2009-09-01','yyyy-mm-dd')),
 16    PARTITION t_sep_2009 VALUES LESS THAN (to_date('2009-10-01','yyyy-mm-dd')),
 17    PARTITION t_oct_2009 VALUES LESS THAN (to_date('2009-11-01','yyyy-mm-dd')),
 18    PARTITION t_nov_2009 VALUES LESS THAN (to_date('2009-12-01','yyyy-mm-dd')),
 19    PARTITION t_dec_2009 VALUES LESS THAN (to_date('2010-01-01','yyyy-mm-dd'))
 20  );

Table created.

ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> INSERT INTO t
  2  SELECT rownum, to_date('2009-01-01','yyyy-mm-dd')+rownum/274, mod(rownum,11), rpad('*',100,'*')
  3  FROM dual
  4  CONNECT BY level <= 100000;

100000 rows created.

ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> CREATE INDEX i ON t (d) LOCAL UNUSABLE;

Index created.

ops$tkyte%ORA11GR2> alter index i rebuild partition t_dec_2009;

Index altered.

ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> SELECT partition_name, bytes
  2  FROM user_segments
  3  WHERE segment_name = 'I'
  4  AND segment_type = 'INDEX PARTITION'
  5  ORDER BY partition_name;

PARTITION_NAME                      BYTES
------------------------------ ----------
T_DEC_2009                         262144

ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> SELECT count(d)
  2  FROM t
  3  WHERE d BETWEEN to_date('2009-12-01 23:00:00','yyyy-mm-dd hh24:mi:ss')
  4              AND to_date('2009-12-02 01:00:00','yyyy-mm-dd hh24:mi:ss');

  COUNT(D)
----------
        23

ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> SELECT * FROM table(dbms_xplan.display_cursor(format=>'basic +partition'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
EXPLAINED SQL STATEMENT:
------------------------
SELECT count(d) FROM t WHERE d BETWEEN to_date('2009-12-01
23:00:00','yyyy-mm-dd hh24:mi:ss')             AND to_date('2009-12-02
01:00:00','yyyy-mm-dd hh24:mi:ss')

Plan hash value: 370423021

--------------------------------------------------------
| Id  | Operation               | Name | Pstart| Pstop |
--------------------------------------------------------
|   0 | SELECT STATEMENT        |      |       |       |
|   1 |  SORT AGGREGATE         |      |       |       |
|   2 |   PARTITION RANGE SINGLE|      |    12 |    12 |
|   3 |    INDEX RANGE SCAN     | I    |    12 |    12 |
--------------------------------------------------------


17 rows selected.

ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> SELECT count(d)
  2  FROM t
  3  WHERE d BETWEEN to_date('2009-11-01 23:00:00','yyyy-mm-dd hh24:mi:ss')
  4              AND to_date('2009-11-02 01:00:00','yyyy-mm-dd hh24:mi:ss');

  COUNT(D)
----------
        23

ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> SELECT * FROM table(dbms_xplan.display_cursor(format=>'basic +partition'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
EXPLAINED SQL STATEMENT:
------------------------
SELECT count(d) FROM t WHERE d BETWEEN to_date('2009-11-01
23:00:00','yyyy-mm-dd hh24:mi:ss')             AND to_date('2009-11-02
01:00:00','yyyy-mm-dd hh24:mi:ss')

Plan hash value: 2993254470

--------------------------------------------------------
| Id  | Operation               | Name | Pstart| Pstop |
--------------------------------------------------------
|   0 | SELECT STATEMENT        |      |       |       |
|   1 |  SORT AGGREGATE         |      |       |       |
|   2 |   PARTITION RANGE SINGLE|      |    11 |    11 |
|   3 |    TABLE ACCESS FULL    | T    |    11 |    11 |
--------------------------------------------------------


17 rows selected.

ops$tkyte%ORA11GR2> SELECT count(d)
  2  FROM t
  3  WHERE d BETWEEN to_date('2009-11-30 23:00:00','yyyy-mm-dd hh24:mi:ss')
  4              AND to_date('2009-12-01 01:00:00','yyyy-mm-dd hh24:mi:ss');

  COUNT(D)
----------
        23

ops$tkyte%ORA11GR2> SELECT * FROM table(dbms_xplan.display_cursor(format=>'basic +partition'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
EXPLAINED SQL STATEMENT:
------------------------
SELECT count(d) FROM t WHERE d BETWEEN to_date('2009-11-30
23:00:00','yyyy-mm-dd hh24:mi:ss')             AND to_date('2009-12-01
01:00:00','yyyy-mm-dd hh24:mi:ss')

Plan hash value: 302469636

-------------------------------------------------------------
| Id  | Operation                 | Name    | Pstart| Pstop |
-------------------------------------------------------------
|   0 | SELECT STATEMENT          |         |       |       |
|   1 |  SORT AGGREGATE           |         |       |       |
|   2 |   VIEW                    | VW_TE_2 |       |       |
|   3 |    UNION-ALL              |         |       |       |
|   4 |     PARTITION RANGE SINGLE|         |    12 |    12 |
|   5 |      INDEX RANGE SCAN     | I       |    12 |    12 |
|   6 |     PARTITION RANGE SINGLE|         |    11 |    11 |
|   7 |      TABLE ACCESS FULL    | T       |    11 |    11 |
-------------------------------------------------------------


21 rows selected.



thank you

Sokrates, January 04, 2010 - 12:24 pm UTC

most useful

Agent

Wild Bill Steve, January 04, 2010 - 12:58 pm UTC

The agent is tough to work with. Fix it up. Make the logs more human-readable. Make install/uninstall of OEM easier. Make TSPITR easier!

extent create date info in dictionary

JayMag, January 06, 2010 - 10:45 am UTC

I would love to have a timestamp/create-date on extents. This would allow for an easy way to determine segment/tablespace growth over time. Seems like it would be pretty simple for oracle to implement since they add a row for every extent they create anyway.

The only other way is to start an OEM report (you won't have historical growth over time from before the report started) OR you create your own table and/or sql code.
Tom Kyte
January 06, 2010 - 1:17 pm UTC

extents managed in the dictionary are a thing of the past, we manage them in tiny bitmaps - I don't see this ever happening.

However, since 10g - we have been monitoring and reporting on segment growth as part of AWR. You don't need an OEM report, the data is stored in DBA_ tables - accessible to you from sqlplus.
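A sketch of such a query against the AWR history views (assumes AWR is licensed and populated; a real query would also restrict on dbid/instance_number, this is simplified):

```sql
-- per-snapshot space growth for segments, from AWR
select o.owner,
       o.object_name,
       sn.end_interval_time,
       s.space_allocated_delta
  from dba_hist_seg_stat  s,
       dba_hist_snapshot  sn,
       dba_objects        o
 where s.snap_id = sn.snap_id
   and s.obj#    = o.object_id
   and s.space_allocated_delta <> 0
 order by o.owner, o.object_name, sn.end_interval_time;
```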


tablespace growth

JayMag, January 06, 2010 - 3:40 pm UTC

Well, aren't you limited to growth over time by the AWR retention period? Also, just looking at an AWR report itself I don't see any "tablespace growth" area.

Either I am missing something or I just don't see any tables that I can leverage that will tell me overall tablespace growth over time for let's say ... the last 2 years.

I personally have no knowledge of a company that keeps their AWR retention policy to anything over 100 days. I could only imagine the size of the sysaux tablespace with 400+ days of retention.

dba_hist_tbspc_space_usage is limited by retention policy.

still seems to me that a simple created_dt column in dba_extents would really make this a very simple task.

If I am not referencing the correct internal tables could you possibly provide the table/view names that are not affected by retention policy that would give growth over time information?

I am guessing you will advise to set retention policy VERY high which I would say would need a good deal of space to store 1-3 years worth of retention data.
Tom Kyte
January 06, 2010 - 3:59 pm UTC

... dba_hist_tbspc_space_usage is limited by retention policy....

really? My dba_hist_tbspc_space_usage has observations going back two years - to when I installed my 10gr2 install.

not all tables are purged.

ops$tkyte%ORA10GR2> select snap_id, rtime from dba_hist_tbspc_space_usage where snap_id = (select min(snap_id) from dba_hist_tbspc_space_usage) or snap_id = (select max(snap_id) from dba_hist_tbspc_space_usage) order by snap_id
  2  ;

   SNAP_ID RTIME
---------- -------------------------
     11453 02/03/2008 18:27:26
     11453 02/03/2008 18:27:26
     11453 02/03/2008 18:27:26
     11453 02/03/2008 18:27:26
     11453 02/03/2008 18:27:26
     11453 02/03/2008 18:27:26
     18369 01/06/2010 16:00:57
     18369 01/06/2010 16:00:57
     18369 01/06/2010 16:00:57
     18369 01/06/2010 16:00:57
     18369 01/06/2010 16:00:57
     18369 01/06/2010 16:00:57
     18369 01/06/2010 16:00:57
     18369 01/06/2010 16:00:57
     18369 01/06/2010 16:00:57

15 rows selected.


sys%ORA10GR2> select snap_id, startup_time, BEGIN_INTERVAL_TIME, END_INTERVAL_TIME
  2  from dba_hist_snapshot
  3  where snap_id = (select min(snap_id) from dba_hist_snapshot) or snap_id = (select max(snap_id) from dba_hist_snapshot);

   SNAP_ID
----------
STARTUP_TIME
---------------------------------------------------------------------------
BEGIN_INTERVAL_TIME
---------------------------------------------------------------------------
END_INTERVAL_TIME
---------------------------------------------------------------------------
     11453
03-FEB-08 06.16.18.000 PM
03-FEB-08 06.16.18.000 PM
03-FEB-08 06.27.23.734 PM

     18369
05-JAN-10 10.42.13.000 AM
06-JAN-10 03.00.56.096 PM
06-JAN-10 04.00.57.395 PM





... still seems to me that a simple created_dt column in dba_extents would really
make this a very simple task.
...

until you understand that dba_extents is built on a magical x$ table that converts datafile header bitmaps of 0's and 1's into rows - sure, it would seem easy. In the olden days of 7.x when extents were managed in uet$ and fet$ - I would maybe agree - but today - not quite possible, feasible or practical. Unless you want to get back to dictionary managed extents.


also, a simple maneuver like

alter table t move;

would blow away your information - as would shrink and other reorgs - if you did it at the extent, not segment, level.

space over time

JayMag, January 06, 2010 - 4:30 pm UTC

Thank you for the information.
I believe I was manually removing snapshots and changing stats history retention to reduce the SYSAUX tablespace size.

I need to change my thinking regarding SYSAUX growth and let it grow as needed.

I appreciate all your replies. Thanks!

Limit Updates

Julio Caragiulo, January 29, 2010 - 10:28 am UTC

I would like to see an 'update' option of 'limit row_count', as
in (Oracle's) MySQL 'update' command.
Tom Kyte
January 29, 2010 - 4:23 pm UTC

that is just "where rownum <= ?"

it is already there - it has been there longer than MySQL has even existed.
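As a hedged sketch (the table and column names are made up for illustration), the MySQL-style limited update looks like this in Oracle:

```sql
-- updates at most 100 matching rows; which 100 is effectively
-- arbitrary, just as with MySQL's LIMIT clause on UPDATE
update t
   set status = 'PROCESSED'
 where status = 'PENDING'
   and rownum <= 100;
```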

Limit Updates

Julio Caragiulo, February 01, 2010 - 9:43 am UTC

Thank You Sir.

Why, I asked some of the greatest SQL experts in the country
that same question and none could answer. Wellllll maybe not
the country, more like within 25 feet of my desk.... :)

Julio

Datapump

Kaido, March 05, 2010 - 2:35 pm UTC

Hi,
During tonight's import, I found that the status display:

Job: SYS_IMPORT_FULL_01
Operation: IMPORT
Mode: FULL
State: EXECUTING
Bytes Processed: 56,726,598,112
Percent Done: 99
Current Parallelism: 4
Job Error Count: 0

would be more useful if the current date/time were present as well.
br
Kaido

dbua leaves me in such a bad state

Khalid Rahim, March 15, 2010 - 10:44 pm UTC

Hi Tom, haven't visited this site in a while (but I last saw you in Baltimore a few months back).

Here was the problem and a wish:

1) I was trying to upgrade from 8.1.7.4 to 10204. Normally I don't trust dbua, but today I did.
2) dbua failed because of ORA-12560, so I fixed that issue and tried to run it again.
3) It detected that I reran it and refused to run (it actually asked me to delete a non-existent file and retry)
4) Anyway so I tried to do a manual upgrade, but the database wouldn't startup because the 8i logfiles were too small (apparently 10g uses larger logfiles)
5) So I did "startup mount" and dropped the NON-CURRENT logfiles and recreated them bigger, but I couldn't shift the CURRENT logfile to make it NON-CURRENT because I need to open the database first. (catch-22 redefined).
6) The only way for me to do this is to get back to 8i, resize the logs to something > 8192M and do an upgrade again.

So my wish list for dbua is:

It should ask me whether I wish to restore to a point in time before I started, OR at least run "?/rdbms/admin/utlu102i.sql" and refuse to proceed until all the bad stuff is fixed and acknowledged.

I can't call support because this db is not work related, it's mine. Anyway, I don't mean to ask a new question, but this is one of the things I don't like about Oracle.

Thanks,
Khalid


Tom Kyte
March 16, 2010 - 9:58 am UTC

8192M?? we do not require 8GB log files. No way, no how. I'm not aware of anything like that.


dbua is not a restore tool - the first step in the process of upgrading - before running dbua - is "backup", and then restore would be the way to restore to a point in time before you started.


If you are upgrading to 10.2.0.4 - you can call support because the only way to get to 10.2.0.4 is to be using a supported licensed copy.

Decode

Claude, March 16, 2010 - 1:44 pm UTC

What I don't like and would like to see in a future version of Oracle is the ability to use decode outside of a select statement.

For example:

val:= decode(myvar, 1, 'x', 2, 'y', 'z');

It seems more natural and convenient to use decode here instead of case.
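For comparison, the CASE expression that PL/SQL supports today covers the same ground (myvar and val are the hypothetical names from the comment above):

```sql
-- simple CASE expression: compares myvar against each WHEN value
val := case myvar
         when 1 then 'x'
         when 2 then 'y'
         else 'z'
       end;
```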

Not 8192M

A reader, March 17, 2010 - 12:40 am UTC

my mistake, as a matter of fact I think 10gr2 requires a minimum of 4M logs, but I meant to resize to > 8M (not 8192).

Also agree, I don't care if dbua doesn't restore, but it should do a pre-analysis of my 8i and give fair warning
Tom Kyte
March 17, 2010 - 8:23 am UTC

it is os dependent - it is based on OS blocks:

...
ERROR at line 1:
ORA-00336: log file size 2048 blocks is less than minimum 8192 blocks
....



You had a mistake in your entire configuration - else you would not have gotten the tns error you did, there was nothing to 'check'.

that - and DBUA does do a pre-requisite check. You really do give insufficient information to say what exactly happened here.

...
3) It detected that I reran it and refused to run it (it actually asked me to delete a non-existant
file and retry)

......

that sounds particularly fishy to me, as a) I haven't heard of anything like it, b) it didn't run the first time, it never got started, it never got in touch with the database. so it sounds like it did get started at some time and was aborted.


and then you tried to force feed a manual upgrade on a partially upgraded database starting from the "word go" - overriding any pre-requisite checks.

Not sure what to say - I cannot reproduce (for the reason that I don't have 8i anymore) but the timeline here seems at least partially "missing"

I have to explain this

A reader, March 17, 2010 - 9:33 am UTC

Tom,

Ok, let me try to reconstruct a timeline, best as I can remember, but I don't remember everything that happened, so there may be gaps. (Thankfully I was able to revert back and restart, so all is ok now.)

Well you said "overriding any pre-requisite checks. " you're absolutely correct about that and that was the real problem but it was unintended, so let me explain that.

Timeline:

1) Here I have an old 8.1.7.0 db sitting on Windows that was badly configured, I wanted to upgrade to 10204
2) I started off by installing 10201 in a new Home
3) OUI detected that I have an 8i db sitting on that machine, and OUI asked me, do you want us to upgrade that 8i after we are finished installing the software?
4) I should have declined, because I had not prepared that 8i db for upgrade at all. I really meant to run utlu102i.sql first, and I also had not changed the compatible parameter, among many other significant things
5) So I basically hit the Yes button and went away, and realized too late I shouldn't have done that. I had the support doc, "Complete Checklist for Manual Upgrades to 10gR2 [ID 316889.1]", the intended doc that I wanted to follow, sitting right in front of me ...

Delete a Missing file?
Yes, in fact it did happen
"... If you believe this is not a Rerun operation, remove the below file and invoke DBUA again.
... ?/cfgtoollogs/dbua/logs/Welcome<... something ...>" - something like that - but when I went to look for it, that file was MIA





Alexander, April 02, 2010 - 10:33 am UTC

Do you know if Oracle has any plans to change the behavior of users owning objects and being able to drop objects without a privilege? We get so much heat from auditors and security folk claiming applications that run under accounts that own the objects being considered "elevated", and are required to be flagged and get special exceptions. Exceptions that managers freak out about.
Tom Kyte
April 05, 2010 - 1:12 pm UTC

Excellent, I love that approach. 100% for it, wish everyone had such a policy.

what is wrong with it again?

It should freak managers out. I wish more managers would be security aware - we'd have far fewer bind variable issues, since not using binds is the #1 cause of sql injection bugs.


what are you looking for here exactly as far as functionality goes?

(of course you need no privileges to drop your own objects, you OWN them for goodness sake!)

Alexander, April 05, 2010 - 1:28 pm UTC

The process for identifying security risks is ok. But I question what they flag as a risk, this being one example.

No one cares that the ID owns the object, they see it as an ID that can drop objects. With no specific privilege either. I don't suppose there's a way to change that behavior. Just like you can select/update/insert/delete if you own the table.
Tom Kyte
April 05, 2010 - 10:15 pm UTC

but it is a risk, even before the auditors came around.

it has always been a very very excellent idea to put the tables in a set of schema(s) and the code in a separate and distinct set of schemas and use explicit grants on the tables to the application schema to limit what can and cannot be done to them.

If you own it - you own it, it is like having an ANY privilege - extremely powerful.

If you do not own it- you only get to do to it what you've been granted to do to it.

This was true in 1992, it is still true in 2010, it'll still be true in 2020.
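A hedged sketch of that separation, with invented schema and table names:

```sql
-- DATA_OWNER holds the tables; APP_USER runs the application code.
-- APP_USER gets only the privileges it demonstrably needs:
grant select, insert, update on data_owner.orders    to app_user;
grant select                 on data_owner.customers to app_user;

-- DROP, TRUNCATE and ALTER are simply unavailable to APP_USER,
-- because it does not own the objects.
```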

Always useful

Chuck Jolley, April 05, 2010 - 1:47 pm UTC

We lock the accounts of the container schemas.
Would that make them happy?

Mine

Galen Boyer, April 05, 2010 - 7:01 pm UTC

1) sqlplus should have the shell concept of parent/child, so settings in one
script do not affect the calling script.

2) Allow multiple specs to be supported by a single body. There are
definitely times I want to expose a procedure to one process but not
another but use it to implement the needs of both processes.

3) Private methods in bodies of objects.

4) A plsql warning if WHEN OTHERS not followed by a RAISE. (I think
I've seen this one though)

5) A BOOLEAN type in SQL (runs and hides :-))

6) An on-commit trigger.

7) Some session compilation settings to not allow certain operations,
for example, no commits are allowed to be compiled during compilation
until the compilation setting is changed, or, no execute immediate
allowed ...

8) A DEFAULT object, akin to NULL. Let the DEFAULT be defined at
columnar level and outside the range allowed by the datatype of the
column. This is returned when table is queried. IS DEFAULT and IS
NOT DEFAULT are tests (akin to IS NULL and IS NOT NULL). This would
allow the app to insert DEFAULT and then post operation we would know
when the app inserted the default. With NULL, either the user didn't
give us anything or the code inserted NULL. With an actual default,
we don't know whether the app introduced it or the client entered it.

9) Undocumented parameter, _kyte_has_seen_and_gives_his_blessing = YES


Tom Kyte
April 05, 2010 - 10:27 pm UTC

#4 is done in 11g :)

Alexander, April 06, 2010 - 8:02 am UTC

Oh good that's what we've been doing as a remediation. I hadn't made up my mind if that was stupid or not (keeping tables etc in a locked schema and granting to another ID). Good to know we're taking the right approach.

Alexander, April 06, 2010 - 9:20 am UTC

I forgot to make the point I argue about: a schema account vs someone with read/write - what's the big difference? You have access to the data with the application account, does it really matter if you can modify objects on top of that? I think you can be just as destructive with write permissions.

That's why I don't see your view on the schema being extremely powerful. True, you can modify objects, but anything sensitive would be in the data, which any application (account) needs access to.
Tom Kyte
April 12, 2010 - 7:17 pm UTC

I think you can be just as destructive with write permissions.

I don't think so, you are able to do things like "drop" and "truncate" and "alter" - which can seriously damage a lot of stuff immediately without much record (other than "it happened").


Having the multiple schemas is extremely powerful and flexible. It gives you a degree of security you simply DO NOT HAVE right now. You can grant the MINIMUM privileges to the MANY application schemas that should exist - some will have read, some will have read and insert, some will have read, insert and update and so on.

And you'll be able to see who has what.
And you'll be able to document why they have that.


A bad practice: put everything into one schema.

A very good practice: use lots of schemas with MINIMAL privileges on objects to put walls between bits of code. It lets you actually physically segregate and lock things down

maybe

Chuck Jolley, April 06, 2010 - 9:39 am UTC

I didn't say it was right.
I said it was what we do. ;)

We are client server, not web based, so we have no "apps" user.
Our objects are divided up in schemas more or less by business process.
The "owners" of the container schemas are locked simply because there is no reason for them to be unlocked.

If the auditors are writing you up because the owner of an object can delete it then it seems to me you have a perception/political problem, not a security problem.


Tom Kyte
April 12, 2010 - 7:19 pm UTC

If the auditors are writing you up because the owner of an object can delete it
then it seems to me you have a perception/political problem, not a security
problem.


I disagree with your view of it - and I agree with the auditors. I'd much rather have the tables (data) managed separate and distinct from the code. Why should application developers be able to drop/alter/truncate/whatever the tables? They are just applications - they are replaceable. The data - not so. Privileged operations like 'drop' should be accessible ONLY to those with the need to drop - the application, it does not have that need.

RE: perception/political vs "separating code from data"

Duke Ganote, April 06, 2010 - 10:57 am UTC

As Tom wrote before about separating code from data:

<quote>
it promotes encapsulation, modularization, least privilege.

Maintaining the grants is a POSITIVE aspect, you'll have to state why you need such and such a privilege, it can be recorded, documented. Auditors can inspect this and see what access you have to the data.
</quote>
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:1064629200346545081

"This was true in 1992, it is still true in 2010, it'll still be true in 2020."

Maybe I misunderstand...

Chuck Jolley, April 13, 2010 - 12:28 pm UTC

I was under the impression that the auditors were objecting to the fact that the owner of a schema has the privilege of dropping objects in that schema.
Aren't schema and owners basically the same thing?
I thought that privilege was innate to what a schema/owner is.
How could the owner not have that privilege?

Tom Kyte
April 14, 2010 - 8:02 am UTC

they were upset that the application was in the same schema as the data - and the application owner is the data owner as well.

application owners should own applications.
data owners - data

and data owners grant as little as possible to application schemas


so the "evil" developers cannot "accidentally" (or maliciously) do something to the data - like drop it.

Ahh...

Chuck Jolley, April 13, 2010 - 12:39 pm UTC

Now that I've re-read Alexander's question twice I see that I misunderstood the auditor's objections.

Multiple Table Indexes

Ed, April 19, 2010 - 9:55 am UTC

A simple way to index columns from multiple tables. I'm not talking about using ctx_ddl or anything similar. I mean just using regular index creation syntax and it works like a regular index, like:

create index ix_tbl1tbl2 on table_1(col_1), table_2(col_2);

Bitmaps Join Indexes might work...

Dan, April 19, 2010 - 2:50 pm UTC

http://download.oracle.com/docs/cd/B14117_01/server.101/b10743/schema.htm#sthref1033

Bitmap join indexes allow you to build an index across tables. Depending on your situation, they might work...
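For what it's worth, a bitmap join index sketch with made-up fact/dimension tables (indexing the fact table by a dimension column) looks like this:

```sql
-- index SALES rows by the region of the matching CUSTOMERS row;
-- SALES.CUST_ID must be a FK to the CUSTOMERS primary key
create bitmap index ix_sales_cust_region
    on sales ( customers.region )
  from sales, customers
 where sales.cust_id = customers.cust_id;
```

As noted above, this fits data warehouses rather than OLTP: bitmap indexes make concurrent single-row DML expensive.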

re: Bitmaps Join Indexes might work

Ed, April 19, 2010 - 4:00 pm UTC

@Dan from FL, USA

True, but I'd like to see this work for regular indexes so that it would work for an OLTP system.

Better sql*plus?

Xenofon, May 03, 2010 - 4:51 am UTC

Hi Tom,

so many people - including yourself - have wished for an improved sql*plus in this thread. Raptor has proved not to be the sql*plus successor.

So has the plan been forgotten? Does anybody in Oracle development still have it in the agenda?
Tom Kyte
May 06, 2010 - 11:39 am UTC

what do you mean by sql developer has proved to not be the sql plus successor?

sqlplus is modified/improved with each release. For example:

http://asktom.oracle.com/Misc/new-thing-about-sqlplus.html

sqlplus improvements...

Xenofon, May 07, 2010 - 3:46 am UTC

Well, you had mentioned the project "raptor", which became SQL Developer. So "raptor" did not quite become the awaited sql*plus improvement - even though SQL Developer includes a wannabe sqlplus imitation...

Some people in this thread have made suggestions for improvements. You are of course also right, I was not aware of the recent improvements. I think a lot of people working on those UNIX/LINUX machines would very much appreciate a command history.

I would also appreciate a more transparent and straightforward way to exit sqlplus, you know, especially when you integrate many sqlplus calls with sql-scripts inside your shell-scripts.

Perhaps a way to add some conditional flow control in sqlplus would be helpful too, though I know nobody wants sqlplus to become a full blown shell-script language itself.

Frankly, there is not much more to wish from sqlplus; it's a great tool, and we are all fans of sqlplus here, I think. Just some small but, over the years, amazingly persistent weaknesses would need to be wiped off.

utf8 preserving mode

Xenofon, May 10, 2010 - 7:34 am UTC

I just want to add one more wish of mine for sql*plus:

We are also using it in order to populate our schema with test-data for development purposes. But the command prompt version for Windows does not preserve simple utf8 strings, like the special characters of the German language: ß, ä, ü etc.

Tom Kyte
May 10, 2010 - 7:05 pm UTC

tell it to Windows, not a sqlplus issue at all - you need the right character set support for your Windows console window.

Xenofon:

Sokrates, May 11, 2010 - 5:11 am UTC

C:\Dokumente und Einstellungen\Sokrates>set NLS_LANG=german_germany.we8pc850


does a great job for me !

SQL > select 'ÄäÖöÜüß' from dual;

'ÄÄÖÖÜÜß'
--------------
ÄäÖöÜüß

at last! many thanks!

Xenofon, May 11, 2010 - 8:51 am UTC

Great! I've been looking for long for this! Thanks!

Alexander, May 12, 2010 - 1:45 pm UTC

Maybe this exists already, but I haven't found it....

The ability to get explain plans using autotrace without executing the statement. E.g., the equivalent to

explain plan for <statement>;
select * from table( dbms_xplan.display );

Mahesh

Mahesh, May 13, 2010 - 1:09 am UTC

connect to sqlplus
SQL>set autotrace traceonly explain;
SQL>select count(*) from table1
Execution Plan
----------------------------------------------------------
Plan hash value: 3207558960

---------------------------------------------------------------------
| Id  | Operation          | Name   | Rows  | Cost (%CPU)| Time     |
---------------------------------------------------------------------
|   0 | SELECT STATEMENT   |        |     1 | 31355   (1)| 00:06:17 |
|   1 |  SORT AGGREGATE    |        |     1 |            |          |
|   2 |   TABLE ACCESS FULL| TABLE1 |  2529K| 31355   (1)| 00:06:17 |
---------------------------------------------------------------------

SQL>set autotrace off

To Mahesh

Alexander, May 13, 2010 - 8:47 am UTC

traceonly explain runs the sql....I tested it with updates.

me again

Mahesh, May 14, 2010 - 1:32 am UTC

-- Need plan table. This on 10.2.0.4.0 64bit
delete from plan_table;
commit;
drop table t1;
create table t1 ( n number);
insert into t1 values(10);
commit;
select * from t1;
explain plan for update t1 set n =1;
select * from t1;
select * from plan_table where plan_id= ( select max(plan_id) from plan_table);

drop table

A reader, May 18, 2010 - 10:12 am UTC

when I drop the table, it says "Table dropped." but does not include the name of the table that is dropped.

$ sqlplus scott/tiger
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> create table x ( id number);

Table created.

SQL> drop table x;

Table dropped.

SQL>


-----------

but when I run a script with 100 lines of code, I need to know which table was dropped, and I don't want to go back and search in the script !! what do you think ? have you experienced this issue ?

Tom Kyte
May 24, 2010 - 9:28 am UTC

you are in 11gr2 - so you can read:

http://asktom.oracle.com/Misc/new-thing-about-sqlplus.html

and there you go.

N, May 21, 2010 - 5:55 pm UTC

^ Maybe you want to use:

set echo on


at the start of your script

It's hard to find who's locking "my" row

A reader, August 19, 2010 - 5:44 am UTC

We have an application that does pessimistic locking.
Now, if the user indicates that he wants to change a row, it's locked with SELECT FOR UPDATE NOWAIT.
If the row is already lock we get an ORA-00054 - fine.
Unfortunately, it's hard for the application (that has one session) to find out who (= which session) is currently locking that row.
Tom Kyte
August 19, 2010 - 2:54 pm UTC

correct, but tell me, what could anyone do with that information sitting at their terminal? The world is a big place, even if I knew that session 1234 had "my" row - what could I do about it?

TIght Integration with Java

Vaij, August 20, 2010 - 10:08 pm UTC

Life would be easy for many developers if Oracle facilitated tight integration with Java like that of Oracle Forms and Reports.

This would address a lot of the SQL-statement generation, connectivity, transaction etc. issues that currently have to be done 'manually'.

I think Oracle Marketing folks would also see a point here ;)

Also: a parameter in the database that would raise the object name length restriction to up to 128 characters.
Tom Kyte
August 25, 2010 - 12:57 pm UTC

hah, can you imagine a java developer seriously using a tool like forms to quickly build an application - leaving the developer to just implement tiny snippets of java code here and there. I have a feeling your hard core java developer would scoff at such an idea. No more frameworks, no more design patterns - just really quick and functional applications with minimal amounts of code....



External table problem

A Reader, August 25, 2010 - 10:21 pm UTC

Hi,

Oracle external tables, although a very good feature, are still not practical enough for unloading huge data. For example, on one warehouse system our requirement was to unload from a data warehouse and load into a central data warehouse. We had to use the external table ORACLE_DATAPUMP access driver to unload data. Unfortunately, another requirement was that you cannot transmit a file above 30MB. I would really appreciate it if I could specify a file size in the external table definition, and specify its file name with some kind of wild card like file_America%u - so when I start unloading, a file is created as file_America001, and after it reaches its size limit the driver would create a file file_America002.

Oracle data loading and unloading utilities are still too basic compared to Teradata
Tom Kyte
August 26, 2010 - 1:24 pm UTC

... another requirement was that you cannot transmit file above
30MB. ..

Hmm, let us see, because of some silly requirement probably drafted in the 1990's (early 1990's) you think we are not 'flexible'. Files measured in megabytes??? You cannot be seriously building a 'data warehouse' then, maybe a 'data pup tent', but not a warehouse.


but in any case, you can easily use datapump.

http://docs.oracle.com/docs/cd/E11882_01/server.112/e10701/dp_export.htm#i1006927
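Data Pump export can already split a dump into fixed-size pieces via the FILESIZE parameter together with the %U substitution variable in the file name - a hedged sketch with hypothetical directory/table names:

```
expdp scott/tiger directory=dp_dir tables=big_table \
      dumpfile=file_america%U.dmp filesize=30m
```

Each dump file is closed at 30MB and the next one (file_america01.dmp, file_america02.dmp, ...) is started automatically.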

That's just so right...

Michael, August 26, 2010 - 6:14 am UTC

As Tom writes:

hah, can you imagine a java developer seriously using a tool like forms to quickly build an application - leaving the developer to just implement tiny snippets of java code here and there. I have a feeling your hard core java developer would scoff at such an idea. No more frameworks, no more design patterns - just really quick and functional applications with minimal amounts of code....

No more black (java) magic, no more volatile GUI fancies, no more OR-Mapping ... that would be just too easy. (And perhaps for some consultants also too maintainable: don't forget about the next customer order. Complex and inscrutable java designs may very well be part of the business concept.)

Moreover, far too many java guys (including the so-called "architects") don't even understand the difference between 3GL (like Java) and 4GL (like Forms/Reports/APEX).

I fear that Oracle itself is moving into that (3GL) direction as Forms/Reports are not seriously evolved and promoted - at least, so it seems to me...
But, on the other hand, fortunately, there is APEX :-)

Oracle Order By

Syed Khurram, August 30, 2010 - 2:45 am UTC

Hi,

Oracle, no doubt, is the leading DBMS among relational databases. However, a couple of things I don't like about Oracle:
1- In SQL you cannot refer to an alias-based column in the where clause (besides using an inner query). To do this, you have to write out the whole column formula again instead of its alias name.

2- Oracle removed the link for bug/problem reporting. I have something to ask for which help is not available, but neither the Oracle site (without an SI) nor asktom is allowing me to post a question.

Finally ,Keep the good work going.

Regards

Tom Kyte
September 09, 2010 - 9:04 am UTC

1)

SELECT *
from (select a+b/2 X, z||y || x A from t )
where x > 50 and a like :x and ......

lets you do that - this is not an "oracle" deficiency, it is the language specification for SQL that limits the scope. The selection (projection) happens logically after the "where clause" (selection).


2) huh??? I have no idea what you mean. The support site is up and running; asktom is serviced by me (and me alone) on an "as I have time" basis. Always has been.

Venkat, September 03, 2010 - 9:18 am UTC

I am not a very experienced Oracle developer, but I want to mention one thing: it would be great if we had some way to generate a series in SQL - say, if I want to generate a sequence of 1,2,3,..n (i.e. a query which returns n rows containing 1 to n). I don't know if anything is there already or not. In PostgreSQL's psql we do have generate_series(); I want to have something similar to that functionality.
Tom Kyte
September 09, 2010 - 7:07 pm UTC

ops$tkyte%ORA11GR2> select rownum from dual connect by level <= 5;

    ROWNUM
----------
         1
         2
         3
         4
         5

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> with data ( r )
  2  as
  3  (select 1 r from dual
  4   union all
  5   select r+1 from data
  6    where r < 5)
  7  select * from data;

         R
----------
         1
         2
         3
         4
         5




@Venkat

Michel Cadot, September 09, 2010 - 10:19 am UTC


Yes there is - or rather, there are many ways, explained in many forums, for a long time now.
It is called a row generator.
Here's one link (among hundreds, including here):
http://www.orafaq.com/forum/t/95011/102589/

Regards
Michel

Loading XML to relational table

Stew Ashton, September 11, 2010 - 4:09 am UTC


Not trying to shoot down XML DB in general, just talking about loading relational data from input that is formatted XML rather than CSV.

There is more than one way to do it and none of them good.

DBMS_XMLSAVE: offers good function but is written in java, takes a lot of memory and doesn't handle large volumes.

DBMS_XMLSTORE: small subset of DBMS_XMLSAVE function. If you use the default insert (without specifying columns), it *uses literals instead of bind variables*. If you specify the columns, it uses bind variables but *does not bulk bind*.

Loading the XML as a CLOB and using XMLTABLE is ssllooww..

The OTN forum recommends writing an external Java program to do the load :(

This is one of those common and annoying problems that many people are wasting their time over because Oracle's implementations are legion and none of them industrial strength.

An ORACLE_XML_LOADER access driver with burned-in functionality would be a godsend.

(As an aside, my current approach is to use xsltproc as an external table pre-processor to parse the input and format it as CSV.)

Sessions Timeout - Forced disconnect

Vijay S, December 21, 2010 - 11:26 am UTC

Forcing a session off the database after a defined interval while accessing the database through tools like sqlplus, sql developer, TOAD, PL/SQL Developer etc.

A lot of application support people gain access to the production database and remain connected to it forever. It would be good if there were a feature by which a user session could be forced out/disconnected/logged off from the database automatically after a defined interval.

The moment the support people connect to the database, the clock should start ticking, and after a pre-defined interval they should be forced off the database, and they should not be allowed to re-connect, as they were allowed to use the account only for that pre-defined time (maybe a special type of user).

An approach like fine grained access:

UserA owns the application data, and UserB is the application user which has full access (insert/update/delete) to the data. If anyone other than the application connects to the database using the UserB account, then those sessions should be forced out of the database after a pre-defined interval.
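Part of this exists today via database profiles, provided resource limits are enabled - a hedged sketch (the profile and user names are invented):

```sql
-- resource limits in profiles are only enforced when this is TRUE
alter system set resource_limit = true;

create profile support_profile limit
  connect_time 60    -- total minutes before the session is terminated
  idle_time    15;   -- minutes of inactivity allowed

alter user support_user profile support_profile;
```

What profiles cannot do is tell "the application" apart from "a person borrowing the application's account" - that part of the wish would still need something else.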


Alexander, March 14, 2011 - 11:59 am UTC

The way Oracle licenses in a virtual environment, e.g. if you install Oracle on a VM slice you must license everything on the entire frame. I hope that changes, our fortune 100 company is moving away from them because of it.

People don't care if its a better product when it's that much extra $.
Tom Kyte
March 14, 2011 - 12:15 pm UTC

it depends on how you virtualize. If you use soft partitioning - whereby the resources are actually available - then you license to the physical hardware on the machine. If you use hard partitioning - whereby the resources are not available, you license for what you have allocated

http://www.orafaq.com/wiki/Oracle_licensing

so, it depends.

Alexander, March 14, 2011 - 1:15 pm UTC

That's good to know, thank you.

ps, I'm still getting email notifications from when I started this thread in '05 :) Would you mind putting the kibosh on that for me please?
Tom Kyte
March 14, 2011 - 2:24 pm UTC

should be done

My two cents

Brian, March 14, 2011 - 10:13 pm UTC

Why can't a GROUP BY refer to columns positionally like an ORDER BY?

SELECT CASE WHEN EMP_STATUS = 'ACTIVE' THEN 'Y' ELSE 'N' END,COUNT(*)
FROM EMPLOYEE
GROUP BY 1

Can't do that - instead, the GROUP BY has to spell out the entire first column formula again. Shouldn't it be obvious to the interpreter what GROUP BY 1 means?



Tom Kyte
March 15, 2011 - 7:58 am UTC

Because the SQL standard, which we didn't write, says it isn't to be done that way.

you do not have to spell out the entire column again if you don't want to:

  1  select flag, count(*)
  2    from (select case when dummy = 'X' then 'Y' else 'N' end flag from dual)
  3*  group by flag
ops$tkyte%ORA11GR2> /

F   COUNT(*)
- ----------
Y          1

ops$tkyte%ORA11GR2> 



append hint

anonymous, May 03, 2011 - 1:08 am UTC

Oracle 11g has the new hint append_values.
Could there be a table-level 'APPEND' option, meaning that all inserts to the table would behave with this semantics?

In data warehousing systems there are tables which are always truncated and then have rows inserted into them. It would be nice to have this append functionality defined at the table level. Other systems also have some of these tables.
Tom Kyte
May 04, 2011 - 12:18 pm UTC

If you want a direct path load, you have to tell us that. There is no "auto-direct" capability.


The append_values is to be used with large ARRAY inserts of hundreds/thousands of rows at a time. It would be a really bad idea to use it in general - if you use it for single row inserts, you would get a SINGLE ROW per block! (direct pathing never uses any space that is already being used)
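A hedged sketch of the intended use, with a hypothetical table t(x):

```sql
declare
  type numtab is table of number index by pls_integer;
  l_data numtab;
begin
  for i in 1 .. 10000 loop
    l_data(i) := i;
  end loop;

  -- one direct-path insert of the whole array, written above the
  -- segment's high-water mark
  forall i in 1 .. l_data.count
    insert /*+ append_values */ into t (x) values ( l_data(i) );

  commit;  -- direct path load: commit before the session queries t again
end;
/
```

Used with a single-row insert instead of an array, the same hint would waste a block per row, which is Tom's point above.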

A CBO that can admit when it's wrong...

Matthew McPeak, May 04, 2011 - 12:56 pm UTC

One thing I keep waiting for and haven't seen yet is an optimizer that can admit when it's made a mistake.

When the CBO starts running a plan and sees that it got 10 million rows out of a step that was supposed to return only 10 rows, why can't it stop and say "hold on... I need to reoptimize this query and find a better way of doing this based on this new information"? Wouldn't that be better than stubbornly going on and doing 10 million unique index scans in the following step to complete the query?

There could be a threshold or something... we wouldn't want the CBO to waste time over-thinking simple queries.

Tom Kyte
May 04, 2011 - 2:10 pm UTC

Well, it actually does that in a way in 11g. It won't stop an in-process query, but it will 'self-correct'.

There is cardinality feedback

http://asktom.oracle.com/Misc/something-new-i-learned-about-estimated.html

There is adaptive cursor sharing

http://www.oracle.com/technetwork/articles/sql/11g-sqlplanmanagement-101938.html

to name two ways the optimizer admits "i did it wrong"

regarding 'append hint '

visitor, June 03, 2011 - 2:40 am UTC

The same kind of argument could be made against the 'cache' attribute on tables. It could also cause unwanted effects when used in the wrong places.

It is sometimes difficult and cumbersome to add hints to programs; tools do not support them well, developers find them hard, etc.
Sometimes it would be nice to have the same effects handled in database definitions.

This feature would be nice in data warehouse applications
when data is often put to 'staging' tables and then
transformed to actual datawarehouse tables.
I had in mind these staging tables.

Somebody has once said: 'It depends'.

Seconding "beauty insert" comment

Kevin Kirkpatrick, June 09, 2011 - 11:05 am UTC

It wasn't replied to, so I'd like to "bump" and elaborate on the "beauty insert" comment earlier in the thread.

Like that commenter, I'd really like to have an INSERT syntax that allowed for direct alignment of "target column" with "desired value". The following hypothetical syntax is probably far from perfect, but would be backwards compatible and would give the flexibility to code INSERTs in what I believe to be a much cleaner format.

For INSERT/VALUES:

INSERT INTO MY_TABLE [ALL] VALUES (column_name => expression [, column_name => expression]*)

> The values list would operate basically as a named parameter list of a procedure call.

For INSERT/SELECT:
INSERT INTO MY_TABLE MATCH COLUMNS
[ALL] select_expression

In both cases, the optional "ALL" keyword indicates that values must be specified for all columns of the target table.

So, given table

CREATE TABLE MY_TABLE (COL1 VARCHAR2(1), COL2 VARCHAR2(1));

The following examples show the flexibility of the syntax:

INSERT INTO MY_TABLE VALUES (COL1 => 'X', COL2 => 'Y');
-- valid

INSERT INTO MY_TABLE ALL VALUES (COL1 => 'X', COL2 => 'Y');
-- valid

INSERT INTO MY_TABLE VALUES (COL1 => 'X');
-- Valid (COL2 would be assigned NULL (or DEFAULT))

INSERT INTO MY_TABLE VALUES (COL1 => 'X', 'Y');
-- error, can't mix named values syntax with positional values syntax

INSERT INTO MY_TABLE ALL VALUES (COL1 => 'X');
-- error, no value specified for COL2


INSERT INTO MY_TABLE MATCH COLUMNS ALL
SELECT 'X' AS COL1, 'Y' AS COL2 FROM DUAL;
-- valid

INSERT INTO MY_TABLE MATCH COLUMNS
SELECT 'X' AS COL1 FROM DUAL;
-- valid, col2 defaults to NULL

INSERT INTO MY_TABLE MATCH COLUMNS
SELECT 'X' AS COL1, 'Y' AS COL3 FROM DUAL;
-- errors: COL3 not in MY_TABLE

INSERT INTO MY_TABLE MATCH COLUMNS ALL
SELECT 'X' AS COL1 FROM DUAL;
-- errors: COL2 not specified

Any thoughts on the usefulness of this syntax (or reasons Oracle "shouldn't go there")?
Tom Kyte
June 09, 2011 - 12:10 pm UTC

My first real programming language, PL/I, had a similar construct.

when assigning records to each other, you could:


EMP_rec_new = emp_rec, BY NAME ;

that would assign to emp_rec_new only the elements having the same names - even if the names were in different places in the structure.


I miss PL/I sometimes - it had a lot of really cool things in it as far as languages go.

I could see this syntax being useful, however, unless it finds its way into the sql standard - I'm doubtful...
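For what it's worth, the closest existing syntax today is to list the target columns explicitly. It is still positional rather than named, but it makes the column/value pairing visible and lets omitted columns default (using MY_TABLE as defined above):

```sql
-- standard syntax available today
insert into my_table (col1, col2) values ('X', 'Y');

-- omitted columns get NULL (or their declared DEFAULT)
insert into my_table (col1) values ('X');
```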

Alexander, June 15, 2011 - 12:15 pm UTC

What would be great is a space advisor for memory/space growth. It would be helpful to be able to query some tables (an AWR type of thing) and find out how much the database or tablespaces grew between two points in time, and also to be able to predict future growth. DBAs probably get asked this a lot (I do): "budget season is here, how much storage do we need for next year?" etc.

Alexander, June 17, 2011 - 3:35 pm UTC

That's not bad, I was looking for something more high level, not by individual object. Also it looks like it only forecasts out a couple of days.

What don't you like about Oracle ?

morticia, June 20, 2011 - 6:48 am UTC

The documentation (imagine a smiley-face-type-of-thing here).

raising hell

mike, July 05, 2011 - 1:40 pm UTC

I know it's pretty much already been said, but I too would like:

raise(true)

or

reraise

So I don't need to:

exception
when others then
send_error_email;
raise_application_error(-20000, '', true);

... in order to let the exception continue on its merry way without losing the error stack. BACKTRACE works fine within any logic I code myself (for example in send_error_email), but I need the full stack to appear in user_scheduler_job_run_details.additional_info, and that's the only way I found to do this. I hate using raise_application_error when I want neither a custom error number nor a custom error message. I suppose the issue will resolve itself when we upgrade to 11gR2 and get email notifications, as that's the only thing I need to intercept all unhandled exceptions for (so far). But it sure would be nice.
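A sketch of the two options being compared (send_error_email is the commenter's hypothetical notification helper):

```sql
create or replace procedure demo_job as
begin
  raise no_data_found;  -- stand-in for real work that fails
exception
  when others then
    send_error_email;   -- hypothetical helper
    raise;              -- re-raises ORA-01403 itself, but the reported
                        -- error location now points at this handler
    -- alternatively, the workaround from the comment above:
    -- raise_application_error(-20000, '', true);  -- TRUE = keep error stack
end;
/
```

A bare RAISE re-raises the current exception unchanged, while raise_application_error(-20000, '', true) wraps it in an ORA-20000 but preserves the original error on the stack, which is what surfaces in the scheduler views.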

questions

samantha, July 11, 2011 - 3:54 pm UTC

I am new to Oracle and I joined up because I really needed to ask a question about combining numbers.
Every time I come to this site the caption says Tom cannot answer any questions right now. Grrr, annoying or what!
Tom Kyte
July 13, 2011 - 1:29 pm UTC

well, first - you do not "join" anything to come here... You just come.

second, I am one person - I take questions when I can.

third - probably, what you want to ask has already been asked or something close to what you wanted to ask has. You can search - and upon finding something "similar", you can followup (like you did here) and ask for a bit more detail...

Combinating Numbers

samantha, July 13, 2011 - 9:26 am UTC

Tom, I would really like to see Oracle give some help on combining numbers with or without repetitions.
I am trying to find a way to combine the numbers 1 to 11 into sets of 3 without repetitions.

Thank you for reading
Tom Kyte
July 15, 2011 - 8:36 am UTC

ops$tkyte%ORA11GR2> with data as (select level l from dual connect by level <= 11)
  2  select *
  3    from data d1, data d2, data d3;

         L          L          L
---------- ---------- ----------
         1          1          1
         1          1          2
         1          1          3
         1          1          4
         ....
        11         11          7
        11         11          8
        11         11          9
        11         11         10
        11         11         11

1331 rows selected.
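If "without repetitions" means 3-element combinations where order does not matter, adding inequality predicates to the same cross join trims the 1331 ordered triples down to C(11,3) = 165:

```sql
with data as (select level l from dual connect by level <= 11)
select d1.l, d2.l, d3.l
  from data d1, data d2, data d3
 where d1.l < d2.l   -- keep only strictly increasing triples,
   and d2.l < d3.l;  -- so each combination appears exactly once

-- 165 rows selected.
```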

RE: "combine numbers 1 to 11 into sets of 3 without repetitions"

Duke Ganote, July 18, 2011 - 7:31 am UTC

@samantha

Can you be more precise in your requirements? It is permutations, combinations, with/without replacement...?

http://en.wikipedia.org/wiki/Permutation

My suggestions

Sarath, July 26, 2011 - 5:57 pm UTC

Hi Tom,

1. I would like to see a new function for below case without the NULL keyword.

NVL2(column, NULL, <somevalue>)

So a function like NOTNULL(column, <somevalue>) which gives an opposite functionality of NVL.

2. UPDATE STATEMENT with 'WITH CLAUSE'. So that the code rewrite / PL/SQL blocks can be avoided.

WITH SQ AS ( <QUERY> )
UPDATE TARGET
SET COL1 = ( SELECT COL1 FROM SQ )
WHERE KEY = ( SELECT KEY FROM SQ );

Also with DELETE to make it consistent.

3. A way to simplify the following query

SELECT DECODE(FACT, 'ONE', 1, 'TWO', 2, 3) AS FACT,
SUM(DECODE(DR_CR, 'Y', VAL, 'N', -VAL, 0)) AS SUM,
SUM(DECODE(DR_CR, 'Y', VAL, 'N', -VAL, 0)) * DECODE(FACT, 'ONE', 1, 'TWO', 2, 3) AS TOTAL
FROM TAB
GROUP BY DECODE(FACT, 'ONE', 1, 'TWO', 2, 3);

to

SELECT DECODE(FACT, 'ONE', 1, 'TWO', 2, 3) AS FACT,
SUM(DECODE(DR_CR, 'Y', VAL, 'N', -VAL, 0)) AS SUM,
@FACT * @SUM AS TOTAL
FROM TAB
GROUP BY @FACT;

which is much more readable. So basically, a way to refer to already selected columns.

Even though we can achieve this by using an inner sub-query, I feel this can be a good addition, when the queries are complex.

SELECT FACT, SUM, FACT * SUM AS TOTAL
FROM (
SELECT DECODE(FACT, 'ONE', 1, 'TWO', 2, 3) AS FACT,
SUM(DECODE(DR_CR, 'Y', VAL, 'N', -VAL, 0)) AS SUM
FROM TAB
GROUP BY DECODE(FACT, 'ONE', 1, 'TWO', 2, 3));

Although out of context, I have a question regarding the performance of the first query.
Does Oracle compute the DECODE(FACT, 'ONE', 1, 'TWO', 2, 3) function twice, or does it take it from some cache?

4. INCREASE TABLE VARCHAR2 LIMIT FROM 4000 to 32K to synchronise with PL/SQL

5. For the CONNECT BY PRIOR clause, add a selectable function similar to LEVEL which can indicate a loop and stop further joining, rather than throwing an exception.

6. MULTIPLE columns from an inner joined sub-query in the SELECT part.

SELECT CHILD.KEY,
(SELECT PARENT.CODE, PARENT.NAME
FROM PARENT
WHERE PARENT.KEY = CHILD.PARENT_KEY)
FROM CHILD

This should give KEY, CODE and NAME, but gives an exception now.

Thanks,
Sarath
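Regarding point 6, an aside: the usual workaround today is an outer join, which returns any number of parent columns in one pass (PARENT and CHILD as in the example above):

```sql
-- equivalent of the desired multi-column scalar sub-query
select child.key, parent.code, parent.name
  from child
  left outer join parent
    on parent.key = child.parent_key;
```

The outer join preserves the scalar sub-query's behavior of returning NULLs when no parent row matches.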

Adding to suggestions

Sarath, Oracle Employee, July 28, 2011 - 4:05 pm UTC

Hi Tom,

In the 6th question above, how does performance get impacted if we select the same values as separate sub-queries, as below?

(For the questions above, code rewrite is not the issue, but performance is.)

SELECT CHILD.KEY,
(SELECT PARENT.CODE
FROM PARENT
WHERE PARENT.KEY = CHILD.PARENT_KEY),
(SELECT PARENT.NAME
FROM PARENT
WHERE PARENT.KEY = CHILD.PARENT_KEY)
FROM CHILD

This gives KEY, CODE and NAME, but needs two separate scalar sub-queries.

7. Multi-table UPDATE and DELETE as we do INSERT ALL

Say you want to update/delete from multiple tables using a similar result set.

This is achievable using FORALL in PL/SQL but I need a similar thing in SQL.

8. Also, adding to the point where I want to refer to an alias in the same select statement: it would be useful when we use analytic functions.

If it were present, we could do recursion in SQL, applying a function to the last function's column.

For eg.

SELECT NVL( COL, LAG(COL, 1) OVER(KEY) ) AS PREV_COL,
LAG(@PREV_COL) OVER() AS CURR_COL
FROM TABLE;

Thanks,
Sarath

user alias

Ravi B, August 16, 2011 - 1:57 pm UTC

Hi Tom,

For example we have SCHEMA1, SCHEMA2, SCHEMA3, SCHEMA4, etc. Is there a way we can create "SCHEMA" and point it to any of the schemas (1..4)? I am not a SQL Server person but someone said there is a concept of a user alias in SQL Server. Could we do that in Oracle (11g R2)?

Thanks!
Tom Kyte
August 16, 2011 - 5:40 pm UTC

alter session set current_schema = schema1;

would do that - if you "select * from t" after issuing that - it would be "select * from schema1.t"

user alias

Ravi B, August 16, 2011 - 6:15 pm UTC

Hi Tom,

Am I doing something wrong here?

$ sqlplus system/******

SQL*Plus: Release 10.2.0.4.0 - Production on Tue Aug 16 16:12:18 2011

Copyright (c) 1982, 2007, Oracle.  All Rights Reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL>
SQL> create user test1 identified by test1;

User created.

SQL> grant connect,resource to test1;

Grant succeeded.

SQL> create user test2 identified by test2;

User created.

SQL> grant connect,resource to test2;

Grant succeeded.

SQL> exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

$ sqlplus test1/test1

SQL*Plus: Release 10.2.0.4.0 - Production on Tue Aug 16 16:13:04 2011

Copyright (c) 1982, 2007, Oracle.  All Rights Reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> create table t( n number);

Table created.

SQL> exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

$ sqlplus test2/test2

SQL*Plus: Release 10.2.0.4.0 - Production on Tue Aug 16 16:13:23 2011

Copyright (c) 1982, 2007, Oracle.  All Rights Reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL>
SQL> alter session set current_schema =TEST1;

Session altered.

SQL> select * from t;
select * from t
              *
ERROR at line 1:
ORA-00942: table or view does not exist

Tom Kyte
August 16, 2011 - 8:17 pm UTC

you didn't grant select on T to test2.
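That is, as a sketch of the missing step (current_schema only changes name resolution, not privileges):

```sql
-- as TEST1 (the table owner):
grant select on t to test2;

-- then, as TEST2:
alter session set current_schema = TEST1;
select * from t;   -- now resolves to TEST1.T and is permitted
```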

user alias

Ravikanth Beevanapalli, August 17, 2011 - 11:06 am UTC

Thanks Tom!

oracle

A reader, August 23, 2011 - 7:50 pm UTC

I like Oracle but I don't like the CEO's beard. You need to enforce a clean-shaven policy for executives with beards that have large gaps.

http://www.bizjournals.com/seattle/news/2011/08/23/oracle-bid-seen-coming-after-hp-drops.html?ana=yfcpc

itanium

sam, August 30, 2011 - 8:12 pm UTC

Tom:

Is Oracle planning to stop support for servers with Itanium chips?

http://allthingsd.com/20110830/oracle-to-court-hp-was-sneaky-when-we-made-that-deal/?reflink=ATD_yahoo_ticker

Interesting - SAP stealing Oracle source code and HP making sneaky deals!
Tom Kyte
August 31, 2011 - 9:07 am UTC

It is well documented that this is happening - yes. Just as other companies are not coding for it anymore. When something becomes commercially "not viable", software stops being written for it.

It is possible to store an empty string in Oracle 10g

Nitiket, October 14, 2011 - 12:32 pm UTC

It is possible to store an empty string in Oracle 9i

Simply run these one after other

CREATE TABLE abcdef
AS
SELECT SUBSTR('  myname',0,1) nitiket
FROM dual;


SELECT *
FROM abcdef
WHERE nitiket =' ';


SELECT length(nitiket)
FROM abcdef;


length returned by this query is 1

Tom Kyte
October 14, 2011 - 11:35 pm UTC

you didn't need to do that, you could just insert a space.

You are saving a 1 character string that contains a space. It is not an empty string, it is a string with 1 character.

ops$tkyte%ORA11GR2> create table t ( msg varchar2(30), str varchar2(5) );

Table created.

ops$tkyte%ORA11GR2> insert into t values ( 'empty string', '' );

1 row created.

ops$tkyte%ORA11GR2> insert into t values ( 'substr', SUBSTR('  myname',0,1) );

1 row created.

ops$tkyte%ORA11GR2> insert into t values ( 'one character string', ' ' );

1 row created.

ops$tkyte%ORA11GR2> insert into t values ( 'two character string', '  ' );

1 row created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> select msg, str, dump(str) d from t;

MSG                            STR   D
------------------------------ ----- ----------------------------------------
empty string                         NULL
substr                               Typ=1 Len=1: 32
one character string                 Typ=1 Len=1: 32
two character string                 Typ=1 Len=2: 32,32



your substr trick is no different from ' ', only harder to type

you cannot store strings in the database at all

A reader, October 15, 2011 - 2:04 am UTC

datatype STRING does not exist
Tom Kyte
October 16, 2011 - 10:04 pm UTC

sure it does, but only to contemplative, thinking people.

In computer technology fields, the concept of a string is rather well understood - when compared to numbers and dates that is.


datatype STRING does not exist

Martin Vajsar, October 15, 2011 - 6:25 am UTC

Nor does C have a string datatype. Are you going to tell me I cannot process strings in C?

RE: "dont like the CEO beard" on August 23, 2011 - 7pm Central time zone

Duke Ganote, October 15, 2011 - 7:20 am UTC

Regarding Larry's picture here:
http://www.bizjournals.com/seattle/news/2011/08/23/oracle-bid-seen-coming-after-hp-drops.html

I think he looks like Leonard Nimoy in the Star Trek episode "Mirror, Mirror". :)

What don't you like about Oracle

Kirill Loifman, November 02, 2011 - 4:18 am UTC

Hi Tom

Here are a few tiny but valuable things I suggest to fix or add to future Oracle versions. Please correct me if they are already implemented in 11gR2.

• Ability to prevent a schema user to perform following tasks:
- dropping own objects (probably introduce drop sys rights)
- creating indexes on own tables
- some others..
So the idea that schema should not have any rights by default.

• Ability to create a private DB link in different schema

• More info about the user required:
- It would also be nice to be able to distinguish a schema from an end DB user, maybe with another field
- New dba_users fields: comments (to put some info about a user), last logon date, etc.

• Possibility for a DBA user to create a materialized view in a different schema (if a schema user has no extra rights, like create table)

• Possibility to grant read access ONLY on PL/SQL source code to different schema (probably select or read privilege on procs, triggers, views, etc.).

• Default possibility to grant read-only and read/write access on all the schema objects to another user (probably “system roles” can be introduced like datareader / datawriter that can be granted for every schema user, similar to MS SQL). Basically to avoid manual grants with a system trigger implementation. It should be standard DB functionality.

• Improve mechanism of redirecting a user to a different schema:
- Implement a concept of “default schema”. Just add an additional field in dba_users “default_schema” & extend create/alter user command & implement standard mechanism of redirecting on logon. So again this is to have a standard functionality for system trigger workaround.
- Add a “set schema” command to SQL*Plus (alter session… is a pain)
- Improve “alter session set current_schema” behaviour since it does not work for all the cases properly

• I would add another system privilege: create temporary table, since create table is a very powerful right for a schema user and many of them use temp tables.

• Possibility to do incomplete recovery of one schema / tablespace on the same DB instance.

I explained some of them in more details here:
http://www.dadbm.com/2011/10/oracle-database-restrictions-workarounds-daily-dba-work/

Thanks

Kirill Loifman

Explain via query return?

Tomas, November 04, 2011 - 10:24 am UTC

Why does EXPLAIN PLAN write into a table? Why does it not return the plan as the result of the query, like EXPLAIN does e.g. on Postgres? If I want to get the explain plan using an app, how can I be sure that the table is there, that someone else did not issue some other explain in the meantime, etc.?
Tom Kyte
November 07, 2011 - 10:06 am UTC

Because that is the way explain plan works, you can explain 100 queries and then start analyzing the output.

You can name your output in the plan table if you are sharing it, if you read the syntax for explain plan:

http://docs.oracle.com/docs/cd/E11882_01/server.112/e26088/statements_9010.htm#SQLRF01601

you can issue "explain plan set statement_id = 'something you like' for select ..."

You can even tell us what table to put the output into. You can use a global temporary table which would obviate the "what if someone else did something" issue altogether.

Truth be told however, you shouldn't use explain plan at all - because it can lie to you.

Explain plan does a hard parse right here, right now. It does not bind peek. It assumes all binds will be varchar2s. It uses the current session's optimizer settings. It would dynamically sample right then and there. Hence, it will often give you a plan very different from what was actually used.

I recommend you use dbms_xplan, here are two examples - fully documented as well:

ops$tkyte%ORA11GR2> set serveroutput off
ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> select ename, empno from scott.emp;
ENAME           EMPNO
---------- ----------
SMITH            7369
ALLEN            7499
WARD             7521
JONES            7566
MARTIN           7654
BLAKE            7698
CLARK            7782
SCOTT            7788
KING             7839
TURNER           7844
ADAMS            7876
JAMES            7900
FORD             7902
MILLER           7934

14 rows selected.

ops$tkyte%ORA11GR2> select * from table(dbms_xplan.display_cursor);
PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------
SQL_ID  23vqj763zcu4x, child number 0
-------------------------------------
select ename, empno from scott.emp

Plan hash value: 3956160932

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |       |       |     3 (100)|          |
|   1 |  TABLE ACCESS FULL| EMP  |    14 |   140 |     3   (0)| 00:00:01 |
--------------------------------------------------------------------------


13 rows selected.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> column sql_id new_val S_ID
ops$tkyte%ORA11GR2> select sql_id, sql_text from v$sql where sql_text = 'select ename, empno from scott.emp';
SQL_ID
-------------
SQL_TEXT
-------------------------------------------------------------------------------
23vqj763zcu4x
select ename, empno from scott.emp


1 row selected.

ops$tkyte%ORA11GR2> select * from table(dbms_xplan.display_cursor( '&S_ID' ) );
old   1: select * from table(dbms_xplan.display_cursor( '&S_ID' ) )
new   1: select * from table(dbms_xplan.display_cursor( '23vqj763zcu4x' ) )
PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------
SQL_ID  23vqj763zcu4x, child number 0
-------------------------------------
select ename, empno from scott.emp

Plan hash value: 3956160932

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |       |       |     3 (100)|          |
|   1 |  TABLE ACCESS FULL| EMP  |    14 |   140 |     3   (0)| 00:00:01 |
--------------------------------------------------------------------------


13 rows selected.




That gets the *real* plan actually used by the optimizer at run time.

review

lh, November 14, 2011 - 5:23 am UTC

It could be rather interesting to do some research to find out how many suggestions/complaints have been made on this topic and how many of them have been implemented in some way in current versions.

Unfortunately I don't have the necessary knowledge to do this.
(The idea for this came from the fact that I learned today about "maintaining statistics on large partitioned tables",
http://blogs.oracle.com/optimizer/entry/maintaining_statistics_on_large_partitioned_tables
and years ago I tried to start a discussion about a similar issue.)


some ideas

lh, November 14, 2011 - 5:32 am UTC

1)
Is there an easy way to find out which user is logged in when using proxy accounts?

(By the way, proxy accounts are a useful and little-known way to implement personal accounts. There might be a need for an article describing them more thoroughly.)


2)
Implementing primary key constraints on combination of several different indexes.

e.g. in some warehouse implementations it would be nice to have primary key constraint based on several columns and all these columns would have bit map indexes.
Tom Kyte
November 15, 2011 - 8:31 am UTC

1)
scott%ORA11GR2> connect ops$tkyte[scott]/foobar;
Connected.
scott%ORA11GR2> select user, sys_context( 'userenv', 'proxy_user' ) from dual;

USER
------------------------------
SYS_CONTEXT('USERENV','PROXY_USER')
-------------------------------------------------------------------------------
SCOTT
OPS$TKYTE




2) in a data warehouse, if you 'cleanse' your data - you don't need a true primary key constraint, you can put one in place using "rely disable novalidate"

http://asktom.oracle.com/Misc/stuck-in-rut.html

If however, you want the database to enforce it - you'll need a normal b*tree index on the columns involved.
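A sketch of such a non-enforced constraint (table and column names hypothetical):

```sql
-- RELY DISABLE NOVALIDATE: the optimizer may trust uniqueness
-- for query rewrite, but the database neither enforces it on new
-- rows (DISABLE) nor checks existing rows (NOVALIDATE)
alter table sales_fact
  add constraint sales_fact_pk
  primary key (day_id, prod_id, cust_id)
  rely disable novalidate;
```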

'Delayed row fetch'

lh, November 15, 2011 - 12:41 am UTC

Tables a(f1,f3,f4,f5,f6) primary key (f1), index (f3,f1) exists
b(f1,f2) primary key (f1,f2)
c(f2,f7,f8,f9) primary key (f2)


Table c has column f7, which is skewed; only a few rows with the value 'Y' exist.



Now we have query
select
  a.*,
  c.*
from
  a, b, c
where
  a.f3 = 'x'
  and a.f1 = b.f1
  and b.f2 = c.f2
  and c.f7 = 'Y';

Now, in all scenarios I have managed to produce with this syntax:
first the index for table a is accessed, then table a,
then the index for table b, then the index for table c, and finally table c.


I would like the optimizer to delay fetching of the actual rows,
so that this would operate more like:

first the index for table a is accessed, then the index for table b,
then the index for table c, then table c, and finally table a.

The idea being that the actual fetching of rows would be delayed until the other restrictive criteria have been evaluated and have thus reduced the number of rows to be fetched.

Lack of command 'likeliness'

lh, November 18, 2011 - 1:49 am UTC

Almost all SQL commands and the Oracle add-ons to them have the same 'likeness'. I mean that if you know how some keyword works in one command, it will do the same thing in another command. Which is a great feature for those of us whose memory is not what it used to be.

However this is not true of all commands. E.g.
grant privilege to someuser;
is 'cumulative': new privileges are added to someuser.
However the proxy user type command
ALTER USER applicationaccount
GRANT CONNECT THROUGH personalaccount
WITH ROLE privilege;
is 'replacing'. In order to have three roles given
to user personalaccount when connecting through applicationaccount, all of these roles must be mentioned in the command.

I find this confusing and error-prone.
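To make the 'replacing' behavior concrete (account and role names hypothetical), each ALTER ... GRANT CONNECT THROUGH must restate the full role list:

```sql
-- first grant: role_a only
alter user applicationaccount
  grant connect through personalaccount with role role_a;

-- to ADD role_b, role_a must be repeated,
-- otherwise it is silently dropped from the grant
alter user applicationaccount
  grant connect through personalaccount with role role_a, role_b;
```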

cursor_sharing and exec

lh, November 29, 2011 - 5:54 am UTC

alter session set cursor_sharing works for SQL statements,
but it seems not to work for exec commands in SQL*Plus.

(e.g. in a SQL*Plus script there is a procedure which gets its parameters from OS shell parameters. They are different every time. I don't know how to make them behave like bind variables.

variable result number;
alter session set cursor_sharing=force;
exec sp_procedure('$1',$2,:result);
)


Tom Kyte
November 29, 2011 - 11:43 am UTC

it only does sql, not plsql.
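One possible workaround, sketched with the hypothetical sp_procedure above: move the shell parameters into SQL*Plus bind variables, so the procedure call text never changes and shares a single cursor:

```sql
variable p1 varchar2(100)
variable p2 number
variable result number

-- only these tiny assignment blocks contain the shell literals;
-- the procedure call below has constant text
exec :p1 := '$1'
exec :p2 := $2
exec sp_procedure(:p1, :p2, :result)
```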

Can't do explicit conversion with TO_NUMBER

Stew Ashton, December 02, 2011 - 10:29 am UTC


You can explicitly convert any number of any precision / scale with
TO_CHAR(num, 'TM', 'nls_numeric_characters=''. ''')
but TO_NUMBER doesn't allow 'TM'. The best you can do is
TO_NUMBER(num, '999D999', 'nls_numeric_characters=''. ''')
but if the "num" column has both huge and tiny amounts you can't put enough "9"s before and after the "D".

The only workaround I found is
TO_NUMBER(translate(num, ',.', substr(1/10,1,1)||substr(1/10,1,1) ))
Generally speaking, explicit conversions are a pain because they have to be done for each column; I would welcome some proprietary method for defining conversions at the statement level. Wouldn't this be nice:
SELECT /*+ nls_numeric_characters='. ' */ to_char(num) as num from t;

datapump, character length semantics

lh, December 19, 2011 - 4:24 am UTC

I would like datapump import to have an option something like
remap_character_semantics=char (possibly byte),
which would create the imported tables with the desired character length semantics.

Cardinality feedback

Prashant, December 29, 2011 - 5:00 am UTC

Hi Tom,

It's not that I don't like the cardinality feedback feature, but I feel I am missing something when it comes to explaining certain behavior, and I would really like to understand it better and whether there are any tips. I have also read Cardinality Feedback - Frequently Asked Questions [ID 1344937.1].

Problem statement:
We had a query that was deployed to production on 3rd Dec that runs as part of a daily batch. It ran for a longer duration on 3rd Dec (which was OK) since it was doing a historical load. The subsequent runs were faster (the only exception being 10th Dec). We didn't notice it then.

However since the past 4 days (26th Dec onwards) the run times have increased (30 mins vs 40 sec earlier).

Since we have AWR for the last few days I identified that it was using the first cursor (see below) till 25th Dec. From 26th onwards it started using the second cursor.
I think (not confirmed since I used DBA_HIST_SQL_PLAN) that the first cursor got initialized on 5th Dec and the second on 10th Dec.

The execution plans for both are different which explain the difference in execution times.

SQL_ID CHILD_NUMBER USE_FEEDBACK_STATS
------------- ---------------------- ------------------
0ug5y67u7ypdm 0 Y
0ug5y67u7ypdm 1 N


I am a little confused as to how I can tune this query, because it appears that it probably optimized the query a number of times before it settled on an execution plan. However, this plan is sub-optimal.

We don't want to disable the parameter _OPTIMIZER_USE_FEEDBACK, nor apply the hint opt_param('_optimizer_use_feedback','false'). What are our options (can we age the cursor out of the shared pool), or perhaps create extended stats?

I can tune the query and will try to get the cardinality right, but was not keen on doing it now since this code is deployed to production. The users are curious to understand this behavior. Most folks don't understand the cardinality feedback feature and it's quite difficult to explain. I had egg on my face once when I was tuning a certain query and it auto-tuned itself because of this feature. I had a tough time explaining that to my customers, and they were anxious to know when it kicks in, whether there are side effects (response times degrading), etc. (and now we have this sub-optimal behavior).


Thanks
Prashant


Tom Kyte
December 29, 2011 - 11:25 am UTC

I'm not saying I'm a total fan of cardinality feedback either for two reasons:

a) it is automagic, just kicks in
b) it isn't persistent. If you shutdown/startup or the cursor ages out - it would start all over again from scratch.

(can we age out the cursor from the shared pool),


yes, dbms_shared_pool.purge can do that.
http://docs.oracle.com/cd/E11882_01/appdev.112/e10577/d_shared_pool.htm#CHDCBEBB


This one sounds like it might be a candidate for a sql profile.
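A sketch of the purge mentioned above, using the sql_id from the question (the address/hash values must be looked up at run time):

```sql
-- find the cursor's address and hash value
select address || ',' || hash_value as name
  from v$sqlarea
 where sql_id = '0ug5y67u7ypdm';

-- purge it as a cursor ('C'); substitute the NAME value returned above
exec dbms_shared_pool.purge('<address>,<hash_value>', 'C')
```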

Force parse from pl/sql ?

James R., December 29, 2011 - 1:51 pm UTC

The problem we constantly have: PL/SQL automatically binds variables into cursors, BUT all subsequent runs are stuck with the "first" execution plan.

I understand why this happens and am totally on board with bind-bind-bind. But compared with using a bad plan, that extra parse is time well spent.

It would be really nice to have a /*+ REPARSE */ hint available that forces a hard parse. Let the developer decide if this query typically benefits from looking at the literals.

Do you see value in such a hint?

Tom Kyte
December 29, 2011 - 5:08 pm UTC

Not in general, as people would use and abuse it. In the EXCEEDINGLY rare case where you want this, use dynamic SQL and literals (beware SQL injection). Or use dynamic SQL with a hint that has sysdate embedded in it, with binds for everything else.

And you'll do this maybe once or twice in your career.
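A sketch of that dynamic-SQL-with-sysdate approach (table, column, and variable names hypothetical): the changing comment makes the statement text unique, forcing a hard parse per run, while the real inputs stay as binds:

```sql
declare
  l_cur   sys_refcursor;
  l_start date := date '2012-01-01';  -- hypothetical inputs
  l_end   date := sysdate;
begin
  -- the embedded timestamp comment makes the SQL text unique,
  -- so each run gets its own hard parse (and fresh plan)
  open l_cur for
    'select /* ' || to_char(sysdate, 'yyyymmddhh24miss') || ' */ *
       from orders
      where order_date between :b1 and :b2'
    using l_start, l_end;
  -- ... fetch and process ...
  close l_cur;
end;
/
```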

Adaptive Cursor Sharing

Marat Tolgambayev, December 29, 2011 - 6:45 pm UTC

Also, there is the "Adaptive Cursor Sharing" feature, which helps with this problem:
http://docs.oracle.com/cd/E11882_01/server.112/e16638/optimops.htm#sthref872
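
As a rough illustration (assuming 11g, where V$SQL exposes these columns), after running the same statement with skewed bind values you can check whether ACS has produced bind-aware child cursors; `&sql_id` is a placeholder:

```sql
-- sketch: has adaptive cursor sharing created extra child cursors?
select sql_id, child_number,
       is_bind_sensitive, is_bind_aware,
       plan_hash_value, executions
  from v$sql
 where sql_id = '&sql_id'
 order by child_number;
```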

Cardinality Feedback

Prashant, December 29, 2011 - 10:29 pm UTC

Thanks Tom for your response (cardinality feedback above)

Do you have any suggestions on how best to explain this automagic behavior to customers? I find it challenging for the reasons below:
(1) They have hired me to do a job for them, yet I am not confident about when this kicks in, and when it does I can't guarantee the impact (+ve or -ve).
(2) It's very difficult to reproduce this in dev/stage environments (in this instance the other environments don't show this behavior, maybe because the same number of runs wasn't carried out there).
(3) I believe Oracle will keep improving their new features and this could be the way forward. So if we were forced to disable this feature (I am not saying we will be) after seeing a large number of such anomalies, their confidence in the feature would be lost and it's unlikely they would want it enabled again.

Tom, you have suggested SQL profiles not baselines. Any reason?


Thanks
Prashant

Tom Kyte
December 30, 2011 - 10:49 am UTC

to explain it, maybe point them to an article on it, such as this:

http://asktom.oracle.com/Misc/something-new-i-learned-about-estimated.html


A baseline freezes a plan, it presumes you know the best plan ever.

A sql profile gives the optimizer better information (better estimated cardinalities) and allows it to pick the best plan.


If you want to freeze the plan - use a baseline, in fact what you might want to use is sql plan management and query plan evolution.

http://www.oracle.com/technetwork/database/focus-areas/bi-datawarehousing/twp-sql-plan-management-11gr2-133099.pdf
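
If freezing a plan is what is wanted, a minimal sketch of capturing the current cursor-cache plan as a SQL plan baseline might look like the following (the sql_id and plan_hash_value are placeholders):

```sql
-- hedged sketch: load an existing plan from the cursor cache as a baseline
declare
   l_loaded pls_integer;
begin
   l_loaded := dbms_spm.load_plans_from_cursor_cache(
                  sql_id          => 'a5ks9fhw2v9s1',   -- placeholder
                  plan_hash_value => 272002086 );       -- placeholder
   dbms_output.put_line( l_loaded || ' plan(s) loaded' );
end;
/
```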

Re: reparse

James R., January 03, 2012 - 11:16 am UTC

"Once or twice in your career"-- you're funny Tom.

It's not exceedingly rare to have an infrequently run procedure that processes a drastically different number of rows depending on the input dates. It's likely one of the main problems that these newer features (SPM, ACS, etc) are targeted at.

Dynamic sql is a terrible workaround for all the same reasons you're not a fan of it.

A /*+reparse*/ hint will be abused. True. Every feature and hint has the potential for abuse, so I'm not sure that should be the only reason against it. Maybe /*+rePEEK*/ has less potential for abuse?

I understand it's a hugely difficult problem to solve by programming alone, which is why I think a hint makes sense.
ACS is good, but it learns by selecting a bad plan and even that info is aged out. SPM... admittedly I'm not an expert with that, but it seems to target plans alone and not bind/plan combinations.

It would be nice to have something that is:

1) Predictably good and predictably testable when using queries that are highly sensitive to bind variables. A full parse fits the bill here. ACS is a step in the right direction.

2) Doesn't require a DBA or some administrative superuser to manage plans behind the scenes.

3) Not a bad workaround like converting a bunch of otherwise fine plsql queries into dynamic sql.

Tom Kyte
January 03, 2012 - 12:20 pm UTC

It's not exceedingly rare to have an infrequently run procedure that processes a drastically different number of rows depending on the input dates. It's likely one of the main problems that these newer features (SPM, ACS, etc) are targeted at.

SPM says "i've got a single plan" - so, no, that is not targeted at it.

ACS - adaptive cursor sharing - absolutely is, and is the way to most likely go about this. We end up with the *minimum* number of shareable cursors we can get - not a parse every single time, but as few parses as possible.


A /*+reparse*/ hint will be abused. True

Honestly, it scares the crap out of me, totally. I don't usually speak that way, but hey - it quite simply would scare the crap out of me.

As much as triggers, autonomous transactions, when others already do.




the fact is - since precompilers and sqlj/jsql are pretty much out of vogue - all SQL coming from any language other than PLSQL is dynamic sql - so, it isn't that big of a deal for them, they are already building it in a string.

And for the exceedingly rare cases where you want to do this in plsql - the difference between:

ops$tkyte%ORA11GR2> create or replace procedure p( p_empno in number )
  2  is
  3      cursor c is select * from emp where empno = p_empno;
  4      l_rec   c%rowtype;
  5  begin
  6      open c;
  7      loop
  8           fetch c into l_rec;
  9           exit when c%notfound;
 10           dbms_output.put_line( l_rec.ename );
 11      end loop;
 12  end;
 13  /

Procedure created.

ops$tkyte%ORA11GR2> exec p(7788);
SCOTT

PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2> create or replace procedure p( p_empno in number )
  2  is
  3      cursor c is select * from emp where empno = p_empno;
  4      l_rec   c%rowtype;
  5      l_cursor sys_refcursor;
  6  begin
  7      open l_cursor for 'select /* ' || to_char(sysdate,'yyyymmddhh24miss') ||
  8                                '*/ * from emp where empno = :x' using p_empno;
  9      loop
 10           fetch l_cursor into l_rec;
 11           exit when l_cursor%notfound;
 12           dbms_output.put_line( l_rec.ename );
 13      end loop;
 14  end;
 15  /

Procedure created.

ops$tkyte%ORA11GR2> exec p(7788);
SCOTT

PL/SQL procedure successfully completed.


is not huge code wise.

You have to keep the dynamic sql select list and what I call your 'template' cursor consistent, and you have to change but one line of code. Hopefully that is enough of an impediment for most people not to do it, and it is not too onerous on those that know when they really, really want to do it - and it is exceedingly rare - mostly only in warehousing (where the query is typically not in plsql in the first place).

Note: you could have used literals instead of binds in that 2nd example - it would be six of one, half a dozen of the other. Bind peeking would have kicked in, making it parse as if it had literals - without the sql injection scare. And using binds actually makes the code easier - think about dates, strings with quotes, etc....

Thank you

Prashant, January 03, 2012 - 11:52 pm UTC


SQL issue in reports

Tony Fernandez, January 04, 2012 - 8:51 am UTC

Dear Tom,

We are experiencing a challenge with about 100 reports that now produce different results than they did before the latest deployment of our internal software.

More specifically, the SQLs involved produce a different number of rows when run in SQLPLUS (where the count is correct) than when run in TOAD.

I thought it would be unnecessary to post the SQLs, since I believe this is more about the deployment itself.

The deploy involved moving from the .net 2005 to the .net 2008 framework; everything else remains constant. There were no deployments to our production database objects (packages, procedures, functions or anything else).

So I thought we would post a question and see if, from a high-level perspective, there is a known issue where SQLs produce a different number of rows when executed through SQLPLUS versus TOAD. If we can identify why this happens, I am hoping we can isolate the issue and resolve the reports producing different results.

We appreciate your feedback,

Tom Kyte
January 04, 2012 - 9:38 am UTC

.. More specifically, the SQLs involved when run on SQLPLUS produce different
number of rows ( which is correct ) than when run on TOAD. ..

it is unclear to me which produce the "wrong" results.


So I thought we post a question and see if from a high level perspective there
is a specific issue where SQLs produce different number of rows when executed
through SQLPLUS or TOAD.


that should never happen.

I would compare the plans used by each to see if they differ - if they are the same, there should be NO difference in output (except for sort order, unless there is a deterministic order by clause).


if you run a query and then immediately query dbms_xplan.display_cursor:


ops$tkyte%ORA11GR2> set serveroutput off
ops$tkyte%ORA11GR2> select * from dual;

D
-
X

ops$tkyte%ORA11GR2> select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------
SQL_ID  a5ks9fhw2v9s1, child number 0
-------------------------------------
select * from dual

Plan hash value: 272002086

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |       |       |     2 (100)|          |
|   1 |  TABLE ACCESS FULL| DUAL |     1 |     2 |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------


13 rows selected.


You'll have to test that in toad - if toad runs some query after your query - that won't work.
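
One hedged way to compare the two clients without relying on either tool: after running the report from each, look the statement up in V$SQL. Identical SQL text parses to one SQL_ID, and differing plans show up as separate child cursors; the MODULE column usually distinguishes SQL*Plus from TOAD. The sql_text filter below is illustrative:

```sql
-- sketch: did the two clients really use the same plan?
select sql_id, child_number, plan_hash_value,
       parsing_schema_name, module, executions
  from v$sql
 where sql_text like 'SELECT  nt.distributor_id%';
```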

SQL issue in reports

Tony Fernandez, January 04, 2012 - 2:55 pm UTC

Tom,

Sorry for the long SQL.  The first part below shows the correct execution in SQLPLUS.  It returns 15 rows, and that is correct.

Further down, the same SQL executed in TOAD and it returns 13 rows only, the plan is attached also.

One last thing: in TOAD, if we activate the where clause that is commented out (the one using TRIM) and deactivate the one next to it, TOAD produces correct results. Also, the columns being trimmed always have all 13 characters filled in a VARCHAR2(13) column, so TRIM should not have any impact.

Thanks for your help,

Dumps follow:
SQL> ed
Wrote file afiedt.buf

  1  SELECT  nt.distributor_id
  2         ,ts.distributor_id
  3         ,case when ts.ACCOUNT_TYPE_NAME in ( 'Price Audits','Other') then 2 else 1 end as grp
  4         ,decode(ts.account_type_id,1, 4,2, 1, 3, 3,4, 2, 6, 6,7, 5, 8, 8,9, 9, 10, 7,11,10) as account_type_id
  5         ,ts.ACCOUNT_TYPE_NAME AS category
  6         ,round(SUM(decode(fy.COMPASS_FISCAL_MONTH, 1, nt.REBATE_AMOUNT, 0)), 2) p1
  7         ,round(SUM(decode(fy.COMPASS_FISCAL_MONTH, 2, nt.REBATE_AMOUNT, 0)), 2) p2
  8         ,round(SUM(decode(fy.COMPASS_FISCAL_MONTH, 3, nt.REBATE_AMOUNT, 0)), 2) p3
  9         ,round(SUM(decode(fy.COMPASS_FISCAL_MONTH, 4, nt.REBATE_AMOUNT, 0)), 2) p4
 10         ,round(SUM(decode(fy.COMPASS_FISCAL_MONTH, 5, nt.REBATE_AMOUNT, 0)), 2) p5
 11         ,round(SUM(decode(fy.COMPASS_FISCAL_MONTH, 6, nt.REBATE_AMOUNT, 0)), 2) p6
 12         ,round(SUM(decode(fy.COMPASS_FISCAL_MONTH, 7, nt.REBATE_AMOUNT, 0)), 2) p7
 13         ,round(SUM(decode(fy.COMPASS_FISCAL_MONTH, 8, nt.REBATE_AMOUNT, 0)), 2) p8
 14         ,round(SUM(decode(fy.COMPASS_FISCAL_MONTH, 9, nt.REBATE_AMOUNT, 0)), 2) p9
 15         ,round(SUM(decode(fy.COMPASS_FISCAL_MONTH, 10,nt.REBATE_AMOUNT, 0)), 2) p10
 16         ,round(SUM(decode(fy.COMPASS_FISCAL_MONTH, 11,nt.REBATE_AMOUNT, 0)), 2) p11
 17         ,round(SUM(decode(fy.COMPASS_FISCAL_MONTH, 12,nt.REBATE_AMOUNT, 0)), 2) p12
 18         ,round(SUM(nt.REBATE_AMOUNT), 2) pfy
 19    FROM  nt_data nt
 20         ,customer_history ch
 21         ,( SELECT  DISTINCT term_set_id
 22                   ,term_set_name
 23                   ,owner_id distributor_id
 24                   ,owner_name distributor_name
 25                   ,CONTROL_NUMBER
 26                   ,account_type_id
 27                   ,ACCOUNT_TYPE_NAME
 28             FROM  ts_info_history_or
 29            WHERE  program_class_code = 'NONTRANS') ts
 30          ,( SELECT  period_identifier
 31                    ,compass_fiscal_month
 32                    ,compass_fiscal_year
 33               FROM  calculation_period
 34              WHERE  compass_fiscal_year = ( SELECT compass_fiscal_year
 35                                               FROM calculation_period
 36                                              WHERE period_identifier = :period)
 37                AND period_identifier <= :period) fy
 38     WHERE (nt.invoice_period_id, nt.customer_id) IN (SELECT period_id, customer_id
 39                                                        FROM customer_history
 40                                                       WHERE org_group_id    = 'FO-2002-34055'
 41                                                         AND org_level2_id   = 'FO-2002-30919'
 42                                                         AND period_id in ( SELECT period_identifier
 43                                                                              FROM calculation_period
 44                                                                             WHERE compass_fiscal_year = ( SELECT compass_fiscal_year
 45                                                                                                             FROM calculation_period
 46                                                                                                            WHERE period_identifier = :period)
 47                                                        AND period_identifier <= :period) )
 48       and nt.INVOICE_PERIOD_ID = fy.period_identifier
 49       and nt.transaction_status_id               != 11
 50     --  and trim(nt.distributor_id)                      = trim(ts.distributor_id)
 51       and  nt.distributor_id  =  ts.distributor_id
 52       and nt.CONTROL_NUMBER                       = ts.CONTROL_NUMBER
 53       and nt.invoice_period_id                    = ch.period_id
 54       and nt.customer_id                          = ch.customer_id
 55       and ts.account_type_id = 2
 56  GROUP BY  nt.distributor_id
 57           ,ts.distributor_id
 58           ,(case when ts.ACCOUNT_TYPE_NAME in ( 'Price Audits','Other') then 2 else 1 end)
 59           ,decode(ts.account_type_id,1, 4,2, 1, 3, 3,4, 2, 6, 6,7, 5, 8, 8,9, 9, 10, 7,11,10)
 60           ,ts.ACCOUNT_TYPE_NAME
 61* ORDER BY 1,2,3
 62  
SQL> /

DISTRIBUTOR_I DISTRIBUTOR_I        GRP ACCOUNT_TYPE_ID CATEGORY                                                             P1         P2         P3         P4         P5         P6         P7         P8         P9        P10        P11        P12        PFY                                          
------------- ------------- ---------- --------------- ------------------------------------------------------------ ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ----------                                          
DO-2001-00346 DO-2001-00346          1               1 Cold Beverages                                                        0       6244          0          0          0          0          0          0          0          0          0          0       6244                                          
DO-2004-00151 DO-2004-00151          1               1 Cold Beverages                                                  1825.81          0   32491.72          0          0          0          0          0          0          0          0          0   34317.53                                          
DO-2004-00203 DO-2004-00203          1               1 Cold Beverages                                                        0          0    34328.1          0          0          0          0          0          0          0          0          0    34328.1                                          
DO-2004-00403 DO-2004-00403          1               1 Cold Beverages                                                  2742.13     3937.8    3664.38          0          0          0          0          0          0          0          0          0   10344.31                                          
DO-2006-00575 DO-2006-00575          1               1 Cold Beverages                                                  4116.54          0          0          0          0          0          0          0          0          0          0          0    4116.54                                          
DO-2008-00269 DO-2008-00269          1               1 Cold Beverages                                                        0          0    1104.75          0          0          0          0          0          0          0          0          0    1104.75                                          
DO-2010-00031 DO-2010-00031          1               1 Cold Beverages                                                589004.73  807569.69   602780.6          0          0          0          0          0          0          0          0          0 1999355.02                                          
DO-2010-00070 DO-2010-00070          1               1 Cold Beverages                                                534433.55  480018.34  506153.12          0          0          0          0          0          0          0          0          0 1520605.01                                          
DO-2010-00095 DO-2010-00095          1               1 Cold Beverages                                                109057.09  122780.98  113967.76          0          0          0          0          0          0          0          0          0  345805.83                                          
DO-2010-00097 DO-2010-00097          1               1 Cold Beverages                                                 35291.34   11500.22   20792.65          0          0          0          0          0          0          0          0          0   67584.21                                          
DO-2010-00098 DO-2010-00098          1               1 Cold Beverages                                                      873        918        882          0          0          0          0          0          0          0          0          0       2673                                          

DISTRIBUTOR_I DISTRIBUTOR_I        GRP ACCOUNT_TYPE_ID CATEGORY                                                             P1         P2         P3         P4         P5         P6         P7         P8         P9        P10        P11        P12        PFY                                          
------------- ------------- ---------- --------------- ------------------------------------------------------------ ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ----------                                          
DO-2010-00100 DO-2010-00100          1               1 Cold Beverages                                                    61.32     261.17     213.16          0          0          0          0          0          0          0          0          0     535.65                                          
DO-2010-00101 DO-2010-00101          1               1 Cold Beverages                                                   3871.5    3210.75    3171.17          0          0          0          0          0          0          0          0          0   10253.42                                          
DO-2010-00423 DO-2010-00423          1               1 Cold Beverages                                                   154.35      78.45      109.3          0          0          0          0          0          0          0          0          0      342.1                                          
DO-2010-00450 DO-2010-00450          1               1 Cold Beverages                                                      174        110        130          0          0          0          0          0          0          0          0          0        414                                          

15 rows selected.

SQL> select * from table(dbms_xplan.display_cursor);

                                                                                                                                                                                                                                                                                                            
Plan hash value: 2425987433                                                                                                                                                                                                                                                                                 
                                                                                                                                                                                                                                                                                                            

PLAN_TABLE_OUTPUT                                                                                                                                                                                                                                                                                           
----------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                   | Name               | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |                                                                                                                                                                          
----------------------------------------------------------------------------------------------------------------------------------                                                                                                                                                                          
|   0 | SELECT STATEMENT                            |                    |       |       |  8100 (100)|          |       |       |                                                                                                                                                                          
|   1 |  SORT GROUP BY                              |                    |     1 |   127 |  8100   (1)| 00:01:54 |       |       |                                                                                                                                                                          
|   2 |   NESTED LOOPS                              |                    |     1 |   127 |  8098   (1)| 00:01:54 |       |       |                                                                                                                                                                          
|   3 |    NESTED LOOPS                             |                    |     1 |   109 |  8098   (1)| 00:01:54 |       |       |                                                                                                                                                                          
|   4 |     MERGE JOIN                              |                    |     1 |    98 |  8097   (1)| 00:01:54 |       |       |                                                                                                                                                                          
|   5 |      SORT JOIN                              |                    | 83825 |  5239K|  7236   (1)| 00:01:42 |       |       |                                                                                                                                                                          
|   6 |       NESTED LOOPS                          |                    |       |       |            |          |       |       |                                                                                                                                                                          
|   7 |        NESTED LOOPS                         |                    | 83825 |  5239K|  7236   (1)| 00:01:42 |       |       |                                                                                                                                                                          
|   8 |         VIEW                                | VW_NSO_1           |  1915 | 34470 |  2274   (3)| 00:00:32 |       |       |                                                                                                                                                                          
|   9 |          HASH UNIQUE                        |                    |  1915 |   100K|            |          |       |       |                                                                                                                                                                          
|  10 |           NESTED LOOPS                      |                    |       |       |            |          |       |       |                                                                                                                                                                          
|  11 |            NESTED LOOPS                     |                    |  1915 |   100K|  2273   (3)| 00:00:32 |       |       |                                                                                                                                                                          
|* 12 |             MAT_VIEW ACCESS BY INDEX ROWID  | CALCULATION_PERIOD |     6 |    48 |     2   (0)| 00:00:01 |       |       |                                                                                                                                                                          
|* 13 |              INDEX RANGE SCAN               | PK_CALP            |   155 |       |     1   (0)| 00:00:01 |       |       |                                                                                                                                                                          
|  14 |              MAT_VIEW ACCESS BY INDEX ROWID | CALCULATION_PERIOD |     1 |     8 |     1   (0)| 00:00:01 |       |       |                                                                                                                                                                          
|* 15 |               INDEX UNIQUE SCAN             | PK_CALP            |     1 |       |     0   (0)|          |       |       |                                                                                                                                                                          
|  16 |             PARTITION LIST AND              |                    | 93437 |       |    66   (7)| 00:00:01 |KEY(AP)|KEY(AP)|                                                                                                                                                                          
|* 17 |              INDEX RANGE SCAN               | FK_CSHIST_CALP     | 93437 |       |    66   (7)| 00:00:01 |KEY(AP)|KEY(AP)|                                                                                                                                                                          
|* 18 |            TABLE ACCESS BY LOCAL INDEX ROWID| CUSTOMER_HISTORY   |   345 | 15870 |   906   (2)| 00:00:13 |     1 |     1 |                                                                                                                                                                          
|* 19 |         INDEX RANGE SCAN                    | NU_NT_DAT_PD_CUST  |     9 |       |     2   (0)| 00:00:01 |       |       |                                                                                                                                                                          
|* 20 |        MAT_VIEW ACCESS BY INDEX ROWID       | NT_DATA            |    44 |  2024 |     7   (0)| 00:00:01 |       |       |                                                                                                                                                                          
|* 21 |      SORT JOIN                              |                    |    76 |  2584 |   860   (1)| 00:00:13 |       |       |                                                                                                                                                                          
|  22 |       VIEW                                  |                    |    76 |  2584 |   859   (1)| 00:00:13 |       |       |                                                                                                                                                                          
|  23 |        HASH UNIQUE                          |                    |    76 | 10336 |   859   (1)| 00:00:13 |       |       |                                                                                                                                                                          
|* 24 |         MAT_VIEW ACCESS FULL                | TS_INFO_HISTORY_OR |    76 | 10336 |   858   (1)| 00:00:13 |       |       |                                                                                                                                                                          
|* 25 |     MAT_VIEW ACCESS BY INDEX ROWID          | CALCULATION_PERIOD |     1 |    11 |     1   (0)| 00:00:01 |       |       |                                                                                                                                                                          
|* 26 |      INDEX UNIQUE SCAN                      | PK_CALP            |     1 |       |     0   (0)|          |       |       |                                                                                                                                                                          
|  27 |      MAT_VIEW ACCESS BY INDEX ROWID         | CALCULATION_PERIOD |     1 |     8 |     1   (0)| 00:00:01 |       |       |                                                                                                                                                                          
|* 28 |       INDEX UNIQUE SCAN                     | PK_CALP            |     1 |       |     0   (0)|          |       |       |                                                                                                                                                                          
|  29 |    PARTITION LIST AND                       |                    |     1 |    18 |     0   (0)|          |KEY(AP)|KEY(AP)|                                                                                                                                                                          
|* 30 |     INDEX UNIQUE SCAN                       | PK_CSHIST          |     1 |    18 |     0   (0)|          |KEY(AP)|KEY(AP)|                                                                                                                                                                          
----------------------------------------------------------------------------------------------------------------------------------                                                                                                                                                                          
                                                                                                                                                                                                                                                                                                            


***---***---***Following is erroneous result executed from TOAD ( Query returns 13 rows )***---***---***

 
Plan hash value: 2623385812
 
-----------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                    | Name               | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
-----------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                             |                    |       |       |   991 (100)|          |       |       |
|   1 |  SORT GROUP BY                               |                    |     1 |   127 |   991   (2)| 00:00:14 |       |       |
|   2 |   NESTED LOOPS                               |                    |     1 |   127 |   989   (2)| 00:00:14 |       |       |
|*  3 |    HASH JOIN                                 |                    |     1 |   109 |   989   (2)| 00:00:14 |       |       |
|   4 |     JOIN FILTER CREATE                       | :BF0000            |       |       |            |          |       |       |
|   5 |      NESTED LOOPS                            |                    |       |       |            |          |       |       |
|   6 |       NESTED LOOPS                           |                    |     1 |    75 |   129   (5)| 00:00:02 |       |       |
|   7 |        MERGE JOIN CARTESIAN                  |                    |     6 |   174 |   116   (6)| 00:00:02 |       |       |
|*  8 |         MAT_VIEW ACCESS BY INDEX ROWID       | CALCULATION_PERIOD |     1 |    11 |     2   (0)| 00:00:01 |       |       |
|*  9 |          INDEX RANGE SCAN                    | PK_CALP            |     3 |       |     1   (0)| 00:00:01 |       |       |
|  10 |          MAT_VIEW ACCESS BY INDEX ROWID      | CALCULATION_PERIOD |     1 |     8 |     1   (0)| 00:00:01 |       |       |
|* 11 |           INDEX UNIQUE SCAN                  | PK_CALP            |     1 |       |     0   (0)|          |       |       |
|  12 |         BUFFER SORT                          |                    |    10 |   180 |   114   (6)| 00:00:02 |       |       |
|  13 |          VIEW                                | VW_NSO_1           |    10 |   180 |   114   (6)| 00:00:02 |       |       |
|  14 |           HASH UNIQUE                        |                    |    10 |   540 |            |          |       |       |
|  15 |            NESTED LOOPS                      |                    |       |       |            |          |       |       |
|  16 |             NESTED LOOPS                     |                    |    10 |   540 |   113   (6)| 00:00:02 |       |       |
|* 17 |              MAT_VIEW ACCESS BY INDEX ROWID  | CALCULATION_PERIOD |     1 |     8 |     2   (0)| 00:00:01 |       |       |
|* 18 |               INDEX RANGE SCAN               | PK_CALP            |     3 |       |     1   (0)| 00:00:01 |       |       |
|  19 |               MAT_VIEW ACCESS BY INDEX ROWID | CALCULATION_PERIOD |     1 |     8 |     1   (0)| 00:00:01 |       |       |
|* 20 |                INDEX UNIQUE SCAN             | PK_CALP            |     1 |       |     0   (0)|          |       |       |
|  21 |              PARTITION LIST AND              |                    |  4751 |       |    67   (8)| 00:00:01 |KEY(AP)|KEY(AP)|
|* 22 |               INDEX RANGE SCAN               | FK_CSHIST_CALP     |  4751 |       |    67   (8)| 00:00:01 |KEY(AP)|KEY(AP)|
|* 23 |             TABLE ACCESS BY LOCAL INDEX ROWID| CUSTOMER_HISTORY   |    18 |   828 |   111   (6)| 00:00:02 |     1 |     1 |
|* 24 |        INDEX RANGE SCAN                      | NU_NT_DAT_PD_CUST  |     1 |       |     2   (0)| 00:00:01 |       |       |
|* 25 |       MAT_VIEW ACCESS BY INDEX ROWID         | NT_DATA            |     1 |    46 |     3   (0)| 00:00:01 |       |       |
|  26 |     VIEW                                     |                    |    19 |   646 |   859   (1)| 00:00:13 |       |       |
|  27 |      HASH UNIQUE                             |                    |    19 |  2584 |   859   (1)| 00:00:13 |       |       |
|  28 |       JOIN FILTER USE                        | :BF0000            |    19 |  2584 |   858   (1)| 00:00:13 |       |       |
|* 29 |        MAT_VIEW ACCESS FULL                  | TS_INFO_HISTORY_OR |    19 |  2584 |   858   (1)| 00:00:13 |       |       |
|  30 |    PARTITION LIST AND                        |                    |     1 |    18 |     0   (0)|          |KEY(AP)|KEY(AP)|
|* 31 |     INDEX UNIQUE SCAN                        | PK_CSHIST          |     1 |    18 |     0   (0)|          |KEY(AP)|KEY(AP)|
-----------------------------------------------------------------------------------------------------------------------------------
 
   - cardinality feedback used for this statement
 

Tom Kyte
January 04, 2012 - 3:49 pm UTC

how are you binding period?

are you binding *exactly* the same in both environments - describe in detail.

what is the datatype of period_identifier ?

what is the bind type of :period?


SQL issue in reports

Tony Fernandez, January 04, 2012 - 4:04 pm UTC

period variable in TOAD is bound when executed, TOAD claims for the value.

IN SQLPLUS I do "variable period number;" followed by "exec :period := 155;" then execute SQL

Tom Kyte
January 04, 2012 - 5:50 pm UTC

is toad binding a string or a number? I don't know what "toad claims for the value" means?

what is the datatype of the database column itself?


if you hard code (remove binds from the problem altogether), does it still happen?

SQL issue in reports

Tony Fernandez, January 04, 2012 - 4:14 pm UTC

In TOAD I make it a number and the result is still wrong, so please ignore the binding report of the explain plan.

And the TRIM statement that makes it right in TOAD is in lines 50 and 51 of the initial SQL.

Tom Kyte
January 04, 2012 - 5:50 pm UTC

see above.

SQL issue in reports

A reader, January 05, 2012 - 6:42 am UTC

Tony, it is just impossible to answer such questions without knowing the definitions of all involved DB objects.

As for TOAD - how is it related to your Net-based software?
TOAD is not Net-based AFAIK.


SQL issue in reports

Tony Fernandez, January 05, 2012 - 9:17 am UTC

Tom,

From your questions:

is toad binding a string or a number?
TOAD is binding a number; when prompted at execution it gives the chance to select what kind of variable, and NUMBER is the one selected.

what is the datatype of the database column itself?
The column is also a number, and the table is range partitioned by "period_id", which is also a number.


if you hard code (remove binds from the problem altogether), does it still happen?
If value is hard-coded instead, then query works fine.
Tom Kyte
January 05, 2012 - 10:15 am UTC

what is the plan when hard-coding.

can you get sql_traces from sqlplus and from toad. I'd be interested in the raw trace data that looks like this:

PARSING IN CURSOR #7796688 len=31 dep=0 uid=629 oct=3 lid=629 tim=1325780016562463 hv=2021375686 ad='371432c0' sqlid='bywy2a1w7rgq6'
select * from dual where 1 = :x
END OF STMT
PARSE #7796688:c=2000,e=2144,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,plh=0,tim=1325780016562460
=====================
BINDS #7796688:
Bind#0
oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
oacflg=03 fl2=1000000 frm=00 csi=00 siz=24 off=0
kxsbbbfp=0076d650 bln=22 avl=02 flg=05
value=1
EXEC #7796688:c=20997,e=25873,p=1,cr=3,cu=0,mis=0,r=0,dep=0,og=1,plh=3752461848,tim=1325780016588469



Just the parse and bind and initial execute of the queries from each environment - just to see if toad and plus are doing something different.


we already know the plans are different - that is the root problem - we need to figure out why.



and silly question - you are 100% sure that plus and toad are connected to the same exact database right?
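For anyone following along, one way to generate such a raw trace from each tool is dbms_monitor (the supported tracing interface since 10g); the BINDS# sections of the resulting file show the bind datatype (oacdty) and value:

```sql
-- Run this once in SQL*Plus and once in TOAD, then diff the two trace files.
exec dbms_monitor.session_trace_enable( binds => true );

variable period number
exec :period := 155;
-- ... execute the problem query here ...

exec dbms_monitor.session_trace_disable;

-- On 11g, the trace file location can be found with:
select value from v$diag_info where name = 'Default Trace File';
```

(`variable` and `exec` are SQL*Plus commands; in TOAD you would supply the bind through its prompt, which is exactly the difference being investigated here.)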


SQL issue in reports

Tony Fernandez, January 09, 2012 - 9:18 am UTC

Tom,

I wanted to loop back to you in regards to this issue. The DBA says "The 11.0.3 patch addresses the bind variable issue." and the patch was also applied to production.

We do not see the binding of the variable error anymore.

Thank you kindly for your suggestions.

read_only hint

lh, January 23, 2012 - 4:00 am UTC

Hi

I would like to have a 'read_only' hint to be used with functions in select statements.

e.g.
Insert into tablex (... )
select /*+ READ_ONLY */
*
from
table(pipelined_function());


If I have understood correctly functions in sql-statements do not have read consistent view of data.

This would help concurrency problems in long running statements.

Could same effects be done otherwise ?



Tom Kyte
January 23, 2012 - 6:15 am UTC

... If I have understood correctly functions in sql-statements do not have read
consistent view of data. ....


that is not 100% correct.

every sql statement in oracle is read consistent.

HOWEVER

unless you use read only or serializable isolation, the sql statements are read consistent with respect to themselves (each sql statement - since they are started at different points in time - see the database at a different commit point by default).


The solution for you is to use read only or serializable isolation. Or, to use flashback query with a fixed SCN in your pipelined_function

Any one of those three things would cause all sql to be executed as of the same point in time.
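To sketch the first two options (table names here are purely illustrative): a read only transaction makes every statement up to the COMMIT share one snapshot, while flashback query pins individual statements to one SCN captured up front:

```sql
-- Option 1: read only isolation - must be the first statement of the transaction
set transaction read only;
select count(*) from orders;       -- both queries see the database
select sum(amount) from payments;  -- as of the same commit point
commit;

-- Option 2: capture one SCN and flash every query back to it,
-- e.g. inside the pipelined function
declare
  l_scn number := dbms_flashback.get_system_change_number;
  l_cnt number;
begin
  select count(*) into l_cnt from orders as of scn l_scn;
end;
/
```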

- zero downtime upgrades

A reader, January 25, 2012 - 12:57 pm UTC

Hi Tom - what is your opinion on the fact that we still in 2012 don't have true zero-downtime (rolling) version upgrades?

Having upgraded from 11.2.0.2 to 11.2.0.3 I've found that 'out of the box' there are at least 2 solid hours of full DB unavailability even in a RAC cluster. Without GoldenGate or similar solution, there is at least the short switchover full outage to contend with.

I know there is the capability for zero perceived downtime if you are willing to use GoldenGate, or Logical Standby with application changes to support short switchover outages, or maybe some other vendor's solutions.

But what I am talking about is the ability to upgrade from 11g to 12g one cluster node at a time. Or even the ability to go from 11.2.0.3 to 11.2.0.4 one node at a time. This was being promised as 'coming soon' for a while now.

Clusterware seems to be there but not the DB.

I know it's not an easy task - having two or more nodes with different ideas about what the catalog looks like, etc. But it's possible - our company has a product from another vendor which is a multi node cluster where the DBMS has not had a full outage in 8-9 years and at least 4 major version upgrades.

Is there just not enough demand for such a feature?

Tom Kyte
January 26, 2012 - 10:26 am UTC

Hi Tom - what is your opinion on the fact that we still in 2012 don't have true
zero-downtime (rolling) version upgrades?


we've had near zero down time upgrades since 10.2?

https://www.google.com/search?q=rolling+upgrade+oracle+data+guard


there are at least 2 solid hours

that is absolutely not true - where did you come up with that number?


But what I am talking about is the ability to upgrade from 11g to 12g one
cluster node at a time. Or even the ability to go from 11.2.0.3 to 11.2.0.4
one node at a time. This was being promised as 'coming soon' for a while now.


where was that promised? You understand how immensely complex and error prone that is - and we have the data to worry about here. It is something being thought about - yes, but it is a hugely complex undertaking.

what is the other product? I'd be interested in seeing such a thing and comparing it to what we have.

RE: read_only hint

Duke Ganote, January 26, 2012 - 8:06 am UTC

I'd recommend that the reader of that question examine his source of "understanding"; which is clearly not Oracle. The Oracle documentation is very clear:

"Read consistency, as supported by Oracle, does the following:

"• Guarantees that the set of data seen by a statement is consistent with respect to a single point in time and does not change during statement execution (statement-level read consistency)

"• Ensures that readers of database data do not wait for writers or other readers of the same data

"• Ensures that writers of database data do not wait for readers of the same data

"• Ensures that writers only wait for other writers if they attempt to update identical rows in concurrent transactions

"The simplest way to think of Oracle's implementation of read consistency is to imagine each user operating a private copy of the database, hence the multiversion consistency model."


That's straight from "Oracle® Database Concepts - 10g Release 2 (10.2)"

http://docs.oracle.com/cd/B19306_01/server.102/b14220/intro.htm#i61063

re: read_only hint

lh, January 27, 2012 - 3:49 am UTC

Hi,

First I have to thank Tom for his answer. I haven't used isolation levels and didn't understand that they would solve the problem.

Please see

http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:27330770500351

and search for keyword function.

re: read_only

lh, January 27, 2012 - 4:15 am UTC

Hi,

The problem (and strength) of these kinds of web sites is that there are readers from all over the world.
However, English is not the native language for all of us, and it is sometimes difficult to clearly express one's thoughts and comments.


Originally I tried to ask a question about the problem Tom answered in the HOWEVER part: how to get a read consistent view of data across several
statements when a read_only transaction cannot be used. As the above mentioned link points out, default read consistency works at the statement level.

A reader, January 30, 2012 - 1:46 pm UTC

"we've had near zero-time upgrades since 10.2" - Agreed, as noted in my post the near-zero-time options are there.

But they are not optimal. They all require a "logical" copy of the DB on another machine - Logical Standby or a second DB with application synchronization. So out of the box, without GoldenGate or similar, the minimum possible outage is a pair of Data Guard switchovers, which requires a full outage of about a minute or so, AND requires your applications to a) have a standby cluster able to handle the workload and b) be able to tolerate a full outage of a minute or two.

"absolutely not true - where did you get that number" - I got it from the 50+ 11.2.0.1 to 11.2.0.2 upgrades I've done over the last 6 months. The DBUA changes the DB to non-clustered mode and does a full DB shutdown, and the catalog upgrade scripts take from 45 minutes (fastest) to 3.5 hours (slowest). There are several other steps (timezone updates, etc.) that extend the time. Running the upgrade manually via scripts is not significantly different in run time.

As I tried to note in my question, you can have a 1 minute outage with Logical Standby (if the DB can use it) or even zero downtime with GoldenGate (if the app schema qualifies). But that's not the same as an out-of-the-box no-downtime rolling upgrade, because only the critical apps can afford that. On the other hand, all apps get the benefit of rolling upgrades for clusterware, but need to take an outage for the DB upgrade.

I do understand how complex the undertaking is. For starters, each node has to recognize that there is an upgrade underway and worry about whether this is the first, second, third, or last node upgraded, etc. There is similar logic now for the OCR during an upgrade, but the logic for a full DB with catalog and data is intimidating. There was never a documented, in-writing commitment, but it was mentioned most times the question was asked. For example, an IOUG meeting discussing 10g to 11g differences mentioned that in all likelihood 11.1 to 11.2 and all future upgrades would be rolling. By the way, you were in the room :)

The "other product" is IBM mainframe multi-node parallel sysplex running DB2 and IMS. It is active-active (similar to RAC), and hasn't had a full 'datasharing' group down for scheduled outage in many years. Rolling version upgrades have been available from the mid 90s. I've worked on both.



CASE statements

Parthiban Nagarajan, February 04, 2012 - 9:28 am UTC

Hi Tom

Clemens from Vienna, Austria already mentioned about this on October 4, 2005

It would be so nice if the CASE construct worked like this:

CASE p_varname
WHEN IS NULL THEN
WHEN NOT IN (1, 2, 3) THEN
WHEN LIKE '%text%' THEN
...
END;
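As a side note, the searched form of CASE (which PL/SQL supports today) already covers each of those tests, at the cost of repeating the operand in every WHEN:

```sql
declare
  p_varname varchar2(30) := 'some text';
begin
  case
    when p_varname is null then
      dbms_output.put_line('was null');
    when p_varname not in ('1', '2', '3') then
      dbms_output.put_line('not in the list');
    when p_varname like '%text%' then
      dbms_output.put_line('matched the pattern');
  end case;  -- branches are tested top to bottom; the first true WHEN wins
end;
/
```

Here the NOT IN branch fires before the LIKE branch is ever tested - the same top-down semantics the proposed shorthand would need.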

It is really great that the following requests were implemented after being posted on this page.
(1) Optional inserts/updates with where clause in MERGE
(2) Fixing the execution order of triggers.

Thanks and regards

A reader, February 23, 2012 - 7:34 am UTC

Hi Tom,

any comments to this
http://tonyhasler.wordpress.com/2011/08/22/why-is-acid-important/ ?

Regards,
Oleksandr

PL/SQL still lacks FINALLY!

Michael, March 20, 2012 - 10:22 am UTC

Hi Tom,

i like PL/SQL very much - but i miss FINALLY, e.g.:

declare
  cursor l_cur is ...;
begin
  open l_cur;
  ...
exception
  when some_exception then ...
finally
  if l_cur%isopen then close l_cur; end if;
end;

In Oracle Magazine Nov/Dec 2011 you wrote:
"In fact, I believe this so strongly that when Oracle Database 11g Release 1 was still on the drawing board and I was permitted to submit three requests for new features in PL/SQL, I jumped at the chance."

Hope FINALLY was included in your list ;-)
It would really help us to write better and more secure code.
Tom Kyte
March 20, 2012 - 3:22 pm UTC

well, easy enough to have


declare
  cursor l_cur is ...;
begin
BEGIN
  open l_cur;
  ...
exception
  when some_exception then ...
END;
EXCEPTION
when others then
  if l_cur%isopen then close l_cur; end if;
  RAISE;
end;

PL/SQL still lacks FINALLY... And it's not that easy to mimic it...

Michael, March 21, 2012 - 3:21 am UTC

Well, no...
Your proposal does not clean up when everything goes OK...

Unfortunately it's not that easy to mimic FINALLY with the existing language elements of PL/SQL. But, of course, it's doable...

Let's see:

declare
  cursor l_cursor is ...;
  ...
  -- This is our cleanup code... could be much bigger.
  -- And we don't want to duplicate code!
  procedure cleanup is
  begin
    if l_cursor%isopen then close l_cursor; end if;
    ...
  end;
begin
  begin
    open l_cursor;
    ...
  exception
    when some_exception then ...
  end;
  cleanup; -- In case it went OK (or exception was handled)
exception
  when others then
    cleanup; -- In case of (unhandled) exception/error
    raise; -- Not to be forgotten! (would be unnecessary with FINALLY)
end;

I think this code demonstrates quite well why it would be a good idea to add FINALLY to PL/SQL. It would make writing robust error handling much easier.

On PL/SQL and FINALLY

Stew Ashton, March 28, 2012 - 4:31 am UTC


1) I agree that with FINALLY you can put code in one place that has to be duplicated today (or put in a module that is called twice).

2) I can't find the link right now, but I read that Steven Feuerstein was given the same chance as Tom to list three things to add to PL/SQL, and Steven asked for the FINALLY clause.

3) Exceptions raised during DECLARE processing are a problem. Consider:
SQL> set serveroutput on
SQL>
declare
  l_num number := 'A';
  procedure cleanup is
  begin
    dbms_output.put_line('>>>Cleanup done.');
  end cleanup;
begin
  dbms_output.put_line('>>>Main processing.');
  cleanup();
exception when others then
  cleanup();
  dbms_output.put_line('>>>Exception handled.');
end;
/
Error starting at line 3 in command:
declare
  l_num number := 'A';
  procedure cleanup is
  begin
    dbms_output.put_line('>>>Cleanup done.');
  end cleanup;
begin
  dbms_output.put_line('>>>Main processing.');
  cleanup();
exception when others then
  cleanup();
  dbms_output.put_line('>>>Exception handled.');
end;
Error report:
ORA-06502: PL/SQL: numeric or value error: character to number conversion error
ORA-06512: at line 2
06502. 00000 -  "PL/SQL: numeric or value error%s"
*Cause:    
*Action:

SQL>
declare
  procedure cleanup is
  begin
    dbms_output.put_line('>>>Cleanup done.');
  end cleanup;
begin
  declare
    l_num number := 'A';
  begin
    dbms_output.put_line('>>>Main processing.');
    cleanup();
  end;
exception when others then
  cleanup();
  dbms_output.put_line('>>>Exception handled.');
end;
/
anonymous block completed
>>>Cleanup done.
>>>Exception handled.
In order to intercept every possible exception, you have to enclose everything, including the DECLARE section.

But that means you have to declare the cleanup procedure outside of the original scope.

But that means the cleanup procedure doesn't have access to variables DECLAREd in the original scope.

So you have to DECLARE those variables in the enclosing scope,

which means any exceptions they raise will not be handled by the cleanup procedure!

Remember the novel Catch-22 ?

Missing: SQL:2008 "FETCH FIRST" for result set pagination

Stew Ashton, March 28, 2012 - 4:53 am UTC


The SQL:2008 standard allows for result set pagination through OFFSET and FETCH FIRST n ROWS ONLY. This would have several advantages over the current double ROWNUM technique:

1) FETCH FIRST would always be done after the ORDER BY, whereas too often people now do the ORDER BY after the WHERE ROWNUM..., which is of course a bug.

2) The optimizer could plan for a smaller result set without needing the FIRST_ROWS(n) hint.

3) The optimizer could make more efficient plans. For example, if the required rows could be identified from indexes only, there would be no need to access the table for the first OFFSET-1 rows.

4) This would be a new opportunity to promote result set pagination in SQL, whereas currently incredible amounts of processing power and memory are used to return big result sets and paginate in the middle tier.
Tom Kyte
March 28, 2012 - 9:13 am UTC

you can avoid issue #1 with row_number()

2) that already happens with rownum/row_number - COUNT STOPKEY, for example, is the top-n optimization

3) nice idea


4) agree


a new release of the database is pending ;) let us see what happens.
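For reference, the "double ROWNUM" technique being discussed looks like this against the SCOTT demo table, alongside the SQL:2008 form (which did arrive in the next release, Oracle 12c):

```sql
-- classic top-n pagination: sort first, then filter by rownum (rows 11..20)
select *
  from ( select a.*, rownum rn
           from ( select ename, sal
                    from emp
                   order by sal desc ) a
          where rownum <= 20 )   -- COUNT STOPKEY kicks in here
 where rn > 10;

-- the SQL:2008 equivalent, available from 12c onward
select ename, sal
  from emp
 order by sal desc
offset 10 rows fetch next 10 rows only;
```

Note how the inner query must be ordered before either ROWNUM filter is applied - doing the ORDER BY outside the ROWNUM predicate is exactly the bug described in point 1 above.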

regarding FINALLY

djb, March 28, 2012 - 1:37 pm UTC

I'd like to see PL/SQL have a construct like this:
try
...
catch
...
finally
...
end

which is similar to Java. I think the benefit of adding these keywords is that the existing BEGIN/EXCEPTION/END block would not need to change.

NDS with unknown inputs

Christopher Long, March 29, 2012 - 11:07 am UTC

My biggest request for Oracle would have to do with what I call the Achilles' heel of Native Dynamic SQL: its inability to directly handle an unknown number of inputs. SQL Server's implementation of dynamic SQL solves this problem nicely with a parameterized interface where every parameter does not have to appear in the final SQL. I find using sys_context to be a cumbersome work-around - especially when dealing with SQL pulled from the statement cache - direct binds are much easier to deal with.

- Chris
Tom Kyte
March 29, 2012 - 11:56 am UTC

two solutions for this:

a) http://www.oracle.com/technetwork/issue-archive/2009/09-jul/o49asktom-090487.html

no sys_context there

b) dbms_sql.to_cursor_number - dbms_sql.bind_variable


ops$tkyte%ORA11GR2> declare
  2          l_cursor   sys_refcursor;
  3          l_cursor#  number;
  4          l_rc       number;
  5          l_query    long;
  6          l_rec      scott.emp%rowtype;
  7  begin
  8          l_query := 'select * from scott.emp where ename like :x';
  9  
 10          l_cursor# := dbms_sql.open_cursor;
 11          dbms_sql.parse( l_cursor#, l_query, dbms_sql.native );
 12  
 13          dbms_sql.bind_variable( l_cursor#, ':x', '%A%' );
 14          l_rc := dbms_sql.execute( l_cursor# );
 15  
 16          l_cursor := dbms_sql.to_refcursor( l_cursor# );
 17  
 18          loop
 19                  fetch l_cursor into l_rec;
 20                  exit when l_cursor%notfound;
 21                  dbms_output.put_line( l_rec.ename );
 22          end loop;
 23          close l_cursor;
 24  end;
 25  /
ALLEN
WARD
MARTIN
BLAKE
CLARK
ADAMS
JAMES

PL/SQL procedure successfully completed.




You can open the query using an API that lets you bind bit by bit and then convert to 'native dynamic sql' for the rest of the job.

NDS with unknown inputs

Christopher Long, March 29, 2012 - 1:37 pm UTC

I don't disagree that there are other ways to do it, but I find them all very clunky, to say the least. When NDS first appeared I wrote a wrapper package to handle variable binds in a way that makes both the PL/SQL and SQL code much 'nicer', in my humble opinion. If there were a way, I would love to send it to you. As it's ~3,000 lines, I'm not going to paste it here. But I essentially use an array of varchar2's, and the user adds their bind values (all of which must be strings, just like sys_context) to the array. I have been using it ever since, but I still yearn for the day when I can throw it all away. Here's a small example (that only really uses a single bind in the end, but you get the idea) of its usage:

procedure g_sp_close_accts
(
i_cust_id in varchar2:= null ,
i_cust_acct_id in varchar2:= null ,
i_cust_acct_ext_id in varchar2:= null
)
is

-- ---------------
-- Local Variables
-- ---------------
l_v_sql varchar2(32767) ;
l_v_bind_values_va pkg_util_nds.g_typ_bindvalues_va :=
pkg_util_nds.g_typ_bindvalues_va() ;
begin

-- ------------------
-- Build the base sql
-- ------------------
l_v_sql := '
update -- pkg_test.g_sp_close_accts
t_cust_acct
set
actv_flg = ''N''
where
actv_flg = ''Y'' ';

-- ----------------
-- Handle i_cust_id
-- ----------------
if (i_cust_id is not null) then
l_v_sql := l_v_sql || 'and
cust_id = :cust_id ';
pkg_util_nds.g_sp_addbindvalue(l_v_bind_values_va, i_cust_id );

-- ---------------------
-- Handle i_cust_acct_id
-- ---------------------
elsif (i_cust_acct_id is not null) then
l_v_sql := l_v_sql || 'and
cust_acct_id = :cust_acct_id ';
pkg_util_nds.g_sp_addbindvalue(l_v_bind_values_va, i_cust_acct_id );

-- -------------------------
-- Handle i_cust_acct_ext_id
-- -------------------------
elsif (i_cust_acct_ext_id is not null) then
l_v_sql := l_v_sql || 'and
cust_acct_ext_id = :cust_acct_ext_id ';
pkg_util_nds.g_sp_addbindvalue(l_v_bind_values_va, i_cust_acct_ext_id );
end if;

-- ---------------------
-- Execute the statement
-- ---------------------
pkg_util_nds.g_sp_executeimmediate
(
l_v_sql ,
l_v_bind_values_va
);

end g_sp_close_accts;


Pkg_util_nds.g_sp_ExecuteImmediate (or g_sp_OpenRefCur) then checks the number of entries in the array and does a large multi-nested IF to call EXECUTE IMMEDIATE (or OPEN/FOR) with the appropriate number of entries in the USING clause.

While the code within pkg_util_nds is necessarily large and ugly, and has an arbitrary upper limit of 100 binds, the PL/SQL that calls it and the SQL that is generated are both very clean and much easier to use and understand, in my opinion, than the other possible workarounds. But again, these are all workaround. It would be really nice if none of them were necessary at all.

Anyway, thank you as always for your time, patience, and expertise.

- Chris
Tom Kyte
March 29, 2012 - 3:03 pm UTC

As it's ~3,000 lines,

in 11g, it would be about 10 lines. I think I'll do it for grins...

ops$tkyte%ORA11GR2> create or replace package nds_cursor
  2  authid current_user
  3  as
  4      type binds is table of varchar2(4000) index by varchar2(31);
  5  
  6      function open_it( l_query in varchar2, l_binds in binds ) return sys_refcursor;
  7  end;
  8  /

Package created.


ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> create or replace package body nds_cursor
  2  as
  3  
  4  function open_it( l_query in varchar2, l_binds in binds ) return sys_refcursor
  5  is
  6      l_cursor number   := dbms_sql.open_cursor;
  7      l_idx    varchar2(31);
  8      l_rc     number;
  9  begin
 10      dbms_sql.parse( l_cursor, l_query, dbms_sql.native );
 11  
 12      l_idx := l_binds.first;
 13      while (l_idx is not null)
 14      loop
 15          dbms_sql.bind_variable( l_cursor, l_idx, l_binds(l_idx) );
 16          l_idx := l_binds.next(l_idx);
 17      end loop;
 18      l_rc := dbms_sql.execute( l_cursor );
 19      return dbms_sql.to_refcursor( l_cursor );
 20  end;
 21  
 22  end;
 23  /

Package body created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> variable x refcursor
ops$tkyte%ORA11GR2> declare
  2      l_binds nds_cursor.binds;
  3  begin
  4      l_binds( ':x' ) := 'A%';
  5      l_binds( ':y' ) := 'B%';
  6  
  7      :x := nds_cursor.open_it
  8            ( 'select empno, ename
  9                 from scott.emp
 10                where ename like :x
 11                   or ename like :y',
 12              l_binds );
 13  end;
 14  /

PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2> print x

     EMPNO ENAME
---------- ----------
      7499 ALLEN
      7698 BLAKE
      7876 ADAMS

ops$tkyte%ORA11GR2> 



(note: the authid current_user is important - to avoid sql injection issues)

Suhweeeeet!!!

Christopher Long, March 29, 2012 - 2:48 pm UTC

<seriously salivating for 11g now>

Awesome!

Thanks as always!

your feature wishlist

Sokrates, March 30, 2012 - 2:31 am UTC

I remember when I first attended a talk of yours (May 2008, Munich): you were talking about
dbms_sql.to_refcursor and
dbms_sql.to_cursor_number
being at the very top of your personal wishlist of features finally implemented in 11g (I'm sure you already had Christopher Long's question in mind then).

Since we have them now - what are the top features for future releases on your wishlist now ?
Tom Kyte
March 30, 2012 - 7:11 am UTC

you'll see them coming soon... I cannot really talk about things like that right now - but soon.

Statistics on standby databases

Sri, April 05, 2012 - 4:56 pm UTC

Tom,

I searched your site (& of course the documentation too) and didn't find any information on how statistics are maintained in a physical standby database in 10gR2. I would expect the statistics gathering window to be the same as primary, but what about the statistics themselves - do they get replicated from primary, or does it gather its own?

I recently (after a switchover) found my physical standby db to have more accurate statistics than the primary DB used to have. This would make perfect sense if physical standby databases gathered stats in the same window as primary but were more efficient because they are not open for use (no writes) and hence gather more accurate stats - but I need to confirm.

Thanks,
Sri
Tom Kyte
April 06, 2012 - 1:52 pm UTC

A physical standby is a bit for bit, byte for byte replica of the source database.

The entire data dictionary, everything, every byte - is there.

I recently (after a switchover) found my physical standby db to have more
accurate statistics than what it used to be in the primary DB.


that is not physically possible. The standby database is bit for bit, byte for byte the *same* as production. It is a recovered backup.


Thank you much!

Sri, April 08, 2012 - 9:15 am UTC

Thank you much Tom!

As always your expert opinion is greatly appreciated.

Thanks,
Sri

Zero revision information in Oracle docs

Stew Ashton, May 06, 2012 - 4:24 am UTC


I subscribe to "Oracle Database New/Updated Books". 37 books revised this weekend. Absolutely no clue what got changed.

A list or summary of what got changed would have great added value. As it is, nada.
Tom Kyte
May 06, 2012 - 3:26 pm UTC

I'll ask the documentation folks about that.

"Join elimination" and "subquery refactoring"

anonymous, May 09, 2012 - 7:08 am UTC

It looks like "join elimination" does not work for select statements used in WITH clauses (subquery refactoring).

Two nice features seem not to work well together.
Tom Kyte
May 10, 2012 - 3:24 am UTC

It seems like that might be an incorrect statement:


ops$tkyte%ORA11GR2> create table emp as select * from scott.emp;

Table created.

ops$tkyte%ORA11GR2> create table dept as select * from scott.dept;

Table created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> alter table dept add constraint dept_pk primary key(deptno);

Table altered.

ops$tkyte%ORA11GR2> alter table emp  add constraint emp_pk primary key(empno);

Table altered.

ops$tkyte%ORA11GR2> alter table emp  add constraint emp_fk_dept foreign key(deptno) references dept(deptno);

Table altered.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> set autotrace traceonly explain
ops$tkyte%ORA11GR2> select ename
  2    from emp, dept
  3   where emp.deptno = dept.deptno;

Execution Plan
----------------------------------------------------------
Plan hash value: 3956160932

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |    14 |   280 |     3   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| EMP  |    14 |   280 |     3   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("EMP"."DEPTNO" IS NOT NULL)

Note
-----
   - dynamic sampling used for this statement (level=2)

ops$tkyte%ORA11GR2> with data as
  2  (
  3  select ename
  4    from emp, dept
  5   where emp.deptno = dept.deptno
  6  )
  7  select * from data;

Execution Plan
----------------------------------------------------------
Plan hash value: 3956160932

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |    14 |   280 |     3   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| EMP  |    14 |   280 |     3   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("EMP"."DEPTNO" IS NOT NULL)

Note
-----
   - dynamic sampling used for this statement (level=2)

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> with data1 as (select * from emp where deptno is not null),
  2       data2 as (select * from dept)
  3  select ename
  4    from data1, data2
  5   where data1.deptno = data2.deptno;

Execution Plan
----------------------------------------------------------
Plan hash value: 3956160932

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |    14 |   280 |     3   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| EMP  |    14 |   280 |     3   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("EMP"."DEPTNO" IS NOT NULL)

Note
-----
   - dynamic sampling used for this statement (level=2)

ops$tkyte%ORA11GR2> set autotrace off

join elimination, correction

anonymous, May 10, 2012 - 2:40 am UTC

Correction.
I noticed this behavior in a complex query and erroneously assumed at first that it was due to subquery refactoring.

It actually seems to be about the same table appearing in two places in the query.

Is this the expected behavior?

(Oracle version was 11.2.0.2, EE)

CREATE TABLE DEMO1
(
F1 NUMBER NOT NULL,
F2 NUMBER NOT NULL,
F3 NUMBER
)
;

CREATE TABLE DEMO2
(
F1 NUMBER NOT NULL,
F2 NUMBER NOT NULL,
F4 VARCHAR2(10 CHAR)
)
;

CREATE UNIQUE INDEX PK_DEMO2 ON DEMO2 (F1);
ALTER TABLE DEMO2 ADD ( CONSTRAINT PK_DEMO2 PRIMARY KEY (F1));

CREATE TABLE DEMO3
(
F2 NUMBER NOT NULL,
F5 VARCHAR2(10 CHAR)
)
;
CREATE UNIQUE INDEX PK_DEMO3 ON DEMO3 (F2);
ALTER TABLE DEMO3 ADD ( CONSTRAINT PK_DEMO3 PRIMARY KEY (F2));

ALTER TABLE DEMO1 ADD (CONSTRAINT FK_DEMO1_F1 FOREIGN KEY (F1) REFERENCES DEMO2 (F1));
ALTER TABLE DEMO1 ADD (CONSTRAINT FK_DEMO1_F2 FOREIGN KEY (F2) REFERENCES DEMO3 (F2));
ALTER TABLE DEMO2 ADD (CONSTRAINT FK_DEMO2_F2 FOREIGN KEY (F2) REFERENCES DEMO3 (F2));

CREATE OR REPLACE FORCE VIEW DEMO_V1 as
SELECT d1.f1,
d1.f2,
d1.f3,
d2.f4,
d3.f5,
d3_2.f5 f5_2
FROM demo1 d1
INNER JOIN demo2 d2 ON d1.f1 = d2.f1
INNER JOIN demo3 d3 on d1.f2 = d3.f2
INNER JOIN demo3 d3_2 on d2.f2 = d3.f2;

CREATE OR REPLACE FORCE VIEW DEMO_V2 as
SELECT d1.f1,
d1.f2,
d1.f3,
d2.f4,
d3.f5
FROM demo1 d1
INNER JOIN demo2 d2 ON d1.f1 = d2.f1
INNER JOIN demo3 d3 on d1.f2 = d3.f2;

select f1,f2 from demo_v2;
-- elimination seems to work correctly

select f1,f2 from demo_v1;
-- elimination does not seem to occur

Tom Kyte
May 10, 2012 - 8:47 am UTC

it looks like a likely optimizer bug:

ops$tkyte%ORA11GR2> create or replace view v1
  2  as
  3  SELECT d1.f1,
  4         d1.f2,
  5             d1.f3,
  6             d2.f4,
  7             d3.f5
  8    FROM demo1 d1
  9         INNER JOIN demo2 d2 ON d1.f1 = d2.f1
 10         INNER JOIN demo3 d3 on d1.f2 = d3.f2;

View created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> set autotrace traceonly explain
ops$tkyte%ORA11GR2> select *
  2    from v1;

Execution Plan
----------------------------------------------------------
Plan hash value: 3289554365

-------------------------------------------------------------------------------------------
| Id  | Operation                      | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |          |     1 |    79 |     2   (0)| 00:00:01 |
|   1 |  NESTED LOOPS                  |          |       |       |            |          |
|   2 |   NESTED LOOPS                 |          |     1 |    79 |     2   (0)| 00:00:01 |
|   3 |    NESTED LOOPS                |          |     1 |    59 |     2   (0)| 00:00:01 |
|   4 |     TABLE ACCESS FULL          | DEMO1    |     1 |    39 |     2   (0)| 00:00:01 |
|   5 |     TABLE ACCESS BY INDEX ROWID| DEMO3    |     1 |    20 |     0   (0)| 00:00:01 |
|*  6 |      INDEX UNIQUE SCAN         | PK_DEMO3 |     1 |       |     0   (0)| 00:00:01 |
|*  7 |    INDEX UNIQUE SCAN           | PK_DEMO2 |     1 |       |     0   (0)| 00:00:01 |
|   8 |   TABLE ACCESS BY INDEX ROWID  | DEMO2    |     1 |    20 |     0   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   6 - access("D1"."F2"="D3"."F2")
   7 - access("D1"."F1"="D2"."F1")

Note
-----
   - dynamic sampling used for this statement (level=2)

ops$tkyte%ORA11GR2> select f1, f2
  2    from (
  3  SELECT d1.f1,
  4         d1.f2,
  5             d1.f3,
  6             d2.f4,
  7             d3.f5
  8    FROM demo1 d1
  9         INNER JOIN demo2 d2 ON d1.f1 = d2.f1
 10         INNER JOIN demo3 d3 on d1.f2 = d3.f2
 11             );

Execution Plan
----------------------------------------------------------
Plan hash value: 970289426

---------------------------------------------------------------------------
| Id  | Operation         | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |       |     1 |    26 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| DEMO1 |     1 |    26 |     2   (0)| 00:00:01 |
---------------------------------------------------------------------------

Note
-----
   - dynamic sampling used for this statement (level=2)

ops$tkyte%ORA11GR2> set autotrace off


can you please file this test case with support - skip the one in your example that "works", it is not relevant and only confuses the issue.

Supplement to previous

anonymous, May 10, 2012 - 7:43 am UTC

SQL> set autotrace traceonly explain
SQL>
SQL>
SQL>
SQL> select f1,f2 from demo_v2;

Execution Plan
----------------------------------------------------------
Plan hash value: 2757767156

---------------------------------------------------------------------------
| Id  | Operation         | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |       |     1 |    26 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| DEMO1 |     1 |    26 |     2   (0)| 00:00:01 |
---------------------------------------------------------------------------

Note
-----
   - dynamic sampling used for this statement (level=2)

SQL> select f1,f2 from demo_v1;

Execution Plan
----------------------------------------------------------
Plan hash value: 1576436373

-------------------------------------------------------------------------------------------
| Id  | Operation                     | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |          |     1 |    52 |     3   (0)| 00:00:01 |
|   1 |  MERGE JOIN CARTESIAN         |          |     1 |    52 |     3   (0)| 00:00:01 |
|   2 |   NESTED LOOPS                |          |       |       |            |          |
|   3 |    NESTED LOOPS               |          |     1 |    52 |     2   (0)| 00:00:01 |
|   4 |     TABLE ACCESS FULL         | DEMO1    |     1 |    26 |     2   (0)| 00:00:01 |
|*  5 |     INDEX UNIQUE SCAN         | PK_DEMO2 |     1 |       |     0   (0)| 00:00:01 |
|*  6 |    TABLE ACCESS BY INDEX ROWID| DEMO2    |     1 |    26 |     0   (0)| 00:00:01 |
|   7 |   BUFFER SORT                 |          |     1 |       |     3   (0)| 00:00:01 |
|   8 |    INDEX FULL SCAN            | PK_DEMO3 |     1 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------


Predicate Information (identified by operation id):
---------------------------------------------------

   5 - access("D1"."F1"="D2"."F1")
   6 - filter("D1"."F2"="D2"."F2")

Note
-----
   - dynamic sampling used for this statement (level=2)


concatenated string is longer than 4000 cha

sonu, May 21, 2012 - 12:00 am UTC

Hi Tom,

I am using the following query to concatenate 5 columns and generate a tab-delimited spool file. This spool file will have around 2 million records.

select to_clob(ref1) || chr(9) ||
to_clob(ref2) || chr(9) ||
to_clob(ref3) || chr(9) ||
to_clob(ref4) || chr(9) ||
to_clob(ref5) data
from table

Everything is working fine and the output comes out properly, but it takes a long time (around 2 hours) to generate. Very few rows (100 out of 2 million) have a result longer than 4000 characters. We are using Oracle 11g and have to generate this data every day. Is there any other way to concatenate and generate the output more quickly?

Thanks

Tom Kyte
May 21, 2012 - 8:30 am UTC

I doubt I would be using "spool", sqlplus is a decent ad-hoc query tool for casual use. Given it has limited (like no) error handling capabilities and is not a programming environment....

I'm pretty sure I'd be using a 3gl language, array fetching ref1.. ref5 using an array fetch size of about 500, and printing them
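Staying inside the database, the same array-fetching idea can be sketched in PL/SQL with BULK COLLECT ... LIMIT and UTL_FILE. This is only an illustration: the directory object DATA_DIR, the file name, and the table name T are made up, and it assumes each individual column fits in a VARCHAR2 (only the concatenated line exceeds 4000 characters, which a 32767-byte line size accommodates):

```sql
declare
    -- hypothetical names: directory object DATA_DIR, table T with columns REF1..REF5
    type vc_tab is table of varchar2(4000);
    l_r1 vc_tab; l_r2 vc_tab; l_r3 vc_tab; l_r4 vc_tab; l_r5 vc_tab;
    -- open with the maximum line size so concatenated lines over 4000 chars fit
    l_file utl_file.file_type := utl_file.fopen( 'DATA_DIR', 'out.txt', 'w', 32767 );
    cursor c is select ref1, ref2, ref3, ref4, ref5 from t;
begin
    open c;
    loop
        -- array fetch ~500 rows at a time, as suggested above
        fetch c bulk collect into l_r1, l_r2, l_r3, l_r4, l_r5 limit 500;
        for i in 1 .. l_r1.count
        loop
            utl_file.put_line( l_file,
                l_r1(i) || chr(9) || l_r2(i) || chr(9) || l_r3(i) ||
                chr(9) || l_r4(i) || chr(9) || l_r5(i) );
        end loop;
        exit when c%notfound;  -- test AFTER processing the partial last batch
    end loop;
    close c;
    utl_file.fclose( l_file );
end;
/
```

The same pattern (cursor, fetch N rows, write, repeat) carries over directly to any 3GL with an Oracle driver.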

Timestamp arithmetic

Mike, May 21, 2012 - 6:43 am UTC

I have never really understood why Oracle implemented the 'interval' datatype.

Subtraction of 2 dates yields a result in days. I can then use simple multiplication if I need my answer in hours, seconds, milliseconds etc.

I don't understand why timestamp wasn't implemented the same way. Subtraction of two timestamps yields a result of type 'interval', which in its raw format is of use to neither man nor beast. You have to write your own function to pick the individual pieces out of it and convert it to something you can actually use.


Tom Kyte
May 21, 2012 - 8:36 am UTC

ANSI compliance. We did not create the specification for the datatype, the ANSI committee did.

We just implemented it to be "compliant" with the current standard.

The interval datatype and timestamp arithmetic drives me nuts sometimes :)

add months to a month ending on the 31st.... ugh.

ops$tkyte%ORA11GR2> select to_timestamp( '31-oct-2012', 'dd-mon-yyyy' ) + interval '1' month from dual;
select to_timestamp( '31-oct-2012', 'dd-mon-yyyy' ) + interval '1' month from dual
                                                    *
ERROR at line 1:
ORA-01839: date not valid for month specified


ops$tkyte%ORA11GR2> select to_timestamp( '30-oct-2012', 'dd-mon-yyyy' ) + interval '1' month from dual;

TO_TIMESTAMP('30-OCT-2012','DD-MON-YYYY')+INTERVAL'1'MONTH
---------------------------------------------------------------------------
30-NOV-12 12.00.00.000000000 AM



I personally find them hard to use as well.
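The "write your own function" workaround the reviewer mentions usually boils down to EXTRACT on each field of the interval. A minimal sketch (the function name is made up):

```sql
create or replace function interval_to_seconds( p_int in interval day to second )
return number
as
begin
    -- flatten an INTERVAL DAY TO SECOND into a single number of seconds;
    -- EXTRACT(SECOND ...) keeps the fractional part
    return extract( day    from p_int ) * 86400
         + extract( hour   from p_int ) * 3600
         + extract( minute from p_int ) * 60
         + extract( second from p_int );
end;
/
```

For example, elapsed seconds between two timestamps would then be `select interval_to_seconds( systimestamp - some_ts ) from t;`.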

Alexander, May 22, 2012 - 12:26 pm UTC

This has been driving me insane since I started with Oracle: the impact that filling up the archivelog directory has on your database instance. No one can log in, in many cases the entire database freezes up, and you have to restart.

I had this happen to me the other day using ASM, and I couldn't even get a prompt back to try and rm * them.

Why does Oracle return that archiver error message when trying to connect instead of allowing connections? Queries should work (unless block cleanouts need to take place?), but in either case I think connections should be allowed, and if they generate redo then they wait.

In general, I wish the impact could be minimized to some degree.

Tom Kyte
May 22, 2012 - 2:34 pm UTC


If you are restarting, you are doing something seriously wrong.

You don't even need to log in - just *free space*, it'll start automagically.


Auditing cannot work, smon cannot work, pmon cannot work, nothing can work - there are ALWAYS things going on - log into an "idle" instance as the only user and just watch that SCN increase. Stop the ability of the database to perform transactions and you just *stopped the database*


For: "concatenated string is longer than 4000 cha "

anonymous, May 24, 2012 - 2:00 am UTC

If you are using SQL*Plus, have you tried
- set flush off
- set arraysize 'as big as you can get'
- set trimspool on
and other possible settings?
I have noticed that operating-system-specific operations in writing the spool file can take quite a lot of time if you have a long linesize.
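Put together, a typical fast-spool setup for a script like the one above might look like this (the spool path is just an example; SET TERMOUT OFF only takes effect when the commands run from a script file):

```sql
set termout off       -- don't paint rows on screen while spooling (script files only)
set feedback off
set heading off
set pagesize 0
set trimspool on      -- strip trailing blanks up to linesize
set arraysize 500     -- fetch rows in larger batches
set long 100000       -- allow CLOB output past 4000 characters
set longchunksize 32767

spool /tmp/out.txt
-- the concatenation query goes here
spool off
```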


Oracle future versions

Mahesh, May 24, 2012 - 8:20 am UTC

Have you come across this? The TransRelational Model:

http://bookboon.com/en/textbooks/it-programming/go-faster


PS: I did not find any thread discussing future versions of the Oracle DB, something like a 'wish list' for future releases, so I posted here; you can move this to an appropriate thread if one exists, or to a new one.

one question

venkat, June 06, 2012 - 4:56 am UTC

When I run

select user from dual;


how can I know if the user is an application owner, table owner, or package owner?

What's the difference between an application owner, a table owner, and a package owner?
Tom Kyte
June 06, 2012 - 5:48 am UTC

http://docs.oracle.com/cd/E11882_01/server.112/e26088/functions227.htm#SQLRF06156

it returns a constant for any given session.


What is an "application owner"? I believe you might mean "schema", that is just the "owner". A schema may own a table, a package, a procedure and so on.

USER just returns the name of the thing that logged in.

join elimination/continue

anonymous, June 07, 2012 - 2:00 am UTC

I filed a service request about this more than three weeks ago.

It looks like a black hole: nothing happens. The only feedback has been a few questions that look like they are just playing for time. The last comment from support, more than a week ago, said there would shortly be an update.
Nobody answers my questions (e.g. whether they can reproduce this behavior themselves).

Tom: if you think this truly is an optimizer bug and somebody should look into it, please file a service request yourself. Support probably will not ignore you.

While looking into this myself, I found
'Bug 6487603 JOIN elimination does not occur', which is reported as corrected and seems to be related to this.
Tom Kyte
June 07, 2012 - 6:57 am UTC

wait a minute, in looking at this further, I think it might not be an issue.


The first view does this sort of join:

d1 -> d2 -> d3
      |
      +  -> d3


the second view does this:

d1 -> d2 -> d3



It may be an implementation restriction due to the join tree currently.

Answer me this: why are you joining to D3 twice by the same exact join key?

join elimination

anonymous, June 07, 2012 - 8:39 am UTC

Hi,


I must apologize for my error in constructing the test case from the actual production problem.
The basic idea here is that there are two references to the same table (e.g. an address table, one for the
billing address and one for the delivery address), but they come from two separate tables.

drop table demo1;
drop table demo2;
drop table demo3;


CREATE TABLE   DEMO1
(
  F1  NUMBER NOT NULL,
  F2  NUMBER NOT NULL,
  F3  NUMBER
)
;

CREATE TABLE   DEMO2
(
  F1  NUMBER                                    NOT NULL,
  F2  NUMBER    NOT NULL,
  F4  VARCHAR2(10 CHAR)
)
;

CREATE UNIQUE INDEX   PK_DEMO2 ON   DEMO2 (F1);
ALTER TABLE   DEMO2 ADD ( CONSTRAINT PK_DEMO2 PRIMARY KEY (F1));

CREATE TABLE   DEMO3
(
  F2  NUMBER                                    NOT NULL,
  F5  VARCHAR2(10 CHAR)
)
;
CREATE UNIQUE INDEX   PK_DEMO3 ON   DEMO3 (F2);
ALTER TABLE   DEMO3 ADD ( CONSTRAINT PK_DEMO3 PRIMARY KEY (F2));

ALTER TABLE   DEMO1 ADD (CONSTRAINT FK_DEMO1_F1  FOREIGN KEY (F1) REFERENCES   DEMO2 (F1));
ALTER TABLE   DEMO1 ADD (CONSTRAINT FK_DEMO1_F2  FOREIGN KEY (F2) REFERENCES   DEMO3 (F2));
ALTER TABLE   DEMO2 ADD (CONSTRAINT FK_DEMO2_F2  FOREIGN KEY (F2) REFERENCES   DEMO3 (F2));

CREATE OR REPLACE FORCE VIEW   DEMO_V1 as
   SELECT d1.f1,
          d1.f2,
          d1.f3,
          d2.f4,
          d3.f5,
          d3_2.f5 f5_2
     FROM demo1 d1 
          INNER JOIN demo2 d2 ON d1.f1 = d2.f1
          INNER JOIN demo3 d3 on d1.f2 = d3.f2
          INNER JOIN demo3 d3_2 on d2.f2 = d3_2.f2;
--
-- please notice corrected join condition on d3_2
--

CREATE OR REPLACE FORCE VIEW   DEMO_V2 as
   SELECT d1.f1,
          d1.f2,
          d1.f3,
          d2.f4,
          d3.f5
     FROM demo1 d1 
          INNER JOIN demo2 d2 ON d1.f1 = d2.f1
          INNER JOIN demo3 d3 on d1.f2 = d3.f2;

SQL> select f1,f2 from demo_v1;

Execution Plan
----------------------------------------------------------
Plan hash value: 1271946016

-------------------------------------------------------------------------------
| Id  | Operation          | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |          |     1 |    52 |     2   (0)| 00:00:01 |
|   1 |  NESTED LOOPS      |          |     1 |    52 |     2   (0)| 00:00:01 |
|   2 |   TABLE ACCESS FULL| DEMO1    |     1 |    26 |     2   (0)| 00:00:01 |
|*  3 |   INDEX UNIQUE SCAN| PK_DEMO2 |     1 |    26 |     0   (0)| 00:00:01 |
-------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("D1"."F1"="D2"."F1")

Note
-----
   - dynamic sampling used for this statement (level=2)
   - automatic DOP: skipped because of IO calibrate statistics are missing

SQL> select f1,f2 from demo_v2;

Execution Plan
----------------------------------------------------------
Plan hash value: 2757767156

---------------------------------------------------------------------------
| Id  | Operation         | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |       |     1 |    26 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| DEMO1 |     1 |    26 |     2   (0)| 00:00:01 |
---------------------------------------------------------------------------

Note
-----
   - dynamic sampling used for this statement (level=2)
   - automatic DOP: skipped because of IO calibrate statistics are missing


Thank you for taking notice on this problem.

Tom Kyte
June 07, 2012 - 10:19 am UTC

it is the double join to D3 that is causing the issue as I stated above.

I'm not 100% sure if you still feel there is an issue here or not?

regarding "cursor_sharing and exec "

anonymous, June 19, 2012 - 1:37 am UTC

try call.

Alexander, June 28, 2012 - 8:24 am UTC

Really wish Oracle RAC would send a notification of some kind when it's evicting a node so it doesn't appear like a server is randomly disappearing off the network and then have system administrators trying to figure out what happened.
Tom Kyte
June 29, 2012 - 9:20 am UTC

well, it can? there are messages written to the alert logs - these are easily monitored.

are you using any monitoring tools like OEM - something that can send you messages?

Alexander, June 29, 2012 - 10:47 am UTC

What alert log messages? There are the standard "shutting down" stuff, but by then it's basically too late and the node is coming down so you don't see that until it comes back up.

We don't really use OEM, not a fan of it here. We do monitor the alert log for new messages though via shell scripts.

I suppose we could setup something similar for the cluster logs etc, but it would be a lot of work (plus all the noise in those logs we have to worry about...) for something that I feel should be a baked in notification of some kind.
Tom Kyte
July 02, 2012 - 6:43 am UTC

the trace files should be on a disk that is available to everyone - every node in the cluster (that is how you diagnose what is going on).

Sorry, if you don't use the tools - I cannot really help you. OEM does what you are asking for.

I suppose we could setup something similar for the cluster logs etc, but it
would be a lot of work (plus all the noise in those logs we have to worry
about...) for something that I feel should be a baked in notification of some
kind.


there is, you just told us you don't want to use it.

One more to the wishlist

Maya, August 06, 2012 - 6:01 am UTC

I would like to see a 'continue' statement for loops in PL/SQL (as in C). Yes, we are getting on without it, but CONTINUE seems a neater way than using another IF block. CONTINUE would be a logical extension to the EXIT statement.

@Maya, continue statement.

Shannon Severance, August 20, 2012 - 12:44 pm UTC

partition wise parallel aggregation

anonymous, September 07, 2012 - 5:23 am UTC

I would like to have partition wise parallel aggregation.
(e.g. if group by would have partition key in it,
aggregation would be done on partition level )


partition wise parallel aggregation

anonymous, September 07, 2012 - 5:55 am UTC

Hi

In my previous comment I had in mind problems I have run into.
These problems are described pretty well (and certainly better than I could describe them) in

http://www.pythian.com/news/34543/interval-partitioning-and-parallel-query-limit-access-paths/

plsql collection using Varchar2 subscripts

Rajeshwaran, Jeyabal, October 22, 2012 - 4:18 am UTC

Tom,

Is it possible for Oracle to come up with a single SQL statement to populate a collection (collections using VARCHAR2 index subscripts), as specified below?

I was looking something like this.

select empno
bulk collect into t2 index by ename
from emp;


I know you will definitely say "we will populate only a small number of elements into the collection, so why is this looping a big deal" - but the SQL above looks much handier than a looping construct - would you agree?


rajesh@ORA11G>
rajesh@ORA11G> declare
  2     type t1 is table of number index by varchar2(30);
  3     t2 t1;
  4  begin
  5     for x in (select empno,ename from emp)
  6     loop
  7             t2(x.ename) := x.empno;
  8     end loop;
  9  end;
 10  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.10
rajesh@ORA11G>

Tom Kyte
October 23, 2012 - 12:21 pm UTC

currently we do not support that construct; you can only bulk collect into an associative array that uses a number as the index, and we populate that number from 1..N.

Datapump

lh, January 15, 2013 - 5:04 am UTC

Hi

I like a lot about Datapump. Using its parallel features is fast and convenient.

What I don't like is that the parallel option is not implemented for all phases of an import operation, e.g.:
- the constraint-definition phase executes each constraint one by one. With big tables, this can take a lot of time.
- updating optimizer statistics for partitioned tables can take a lot of time.

lh
Tom Kyte
January 15, 2013 - 12:30 pm UTC

you can only add one constraint at a time - this isn't a datapump thing. A constraint addition will take out a lock and only one session can hold that lock. If we did it in "parallel" - it would still serialize.


RE: Datapump

Stephen, January 15, 2013 - 2:32 pm UTC

The unfortunate thing about this lack of parallelism for constraints is that when multiple tables are imported, the constraints for the different tables could be created by parallel processes. The only constraints I can see that need serialization are the primary keys.

Once those are created, it seems the other constraints could be parallelized by table.
Tom Kyte
January 16, 2013 - 11:31 am UTC

they are looking to do that in the future...

it is trickier than it sounds..

think about referential integrity - cannot create foreign keys until unique or primary key constraint exists on parent object.

Datapump enhancements

lh, January 16, 2013 - 2:23 am UTC

Hi,


When Datapump was originally released, the parallel option worked so that whole tables were handled in separate processes when using database-link import.
Index creation did not use parallel processes; it was done sequentially in one process.

In later versions Datapump has been enhanced so that
- indexes are created using parallel index creation
- partitioned tables are transferred by partition, not as whole tables, so the parallel option works better in environments
where the size distribution of tables is skewed. The need for temporary tablespace has also decreased.

Currently, enabling database constraints is done in a single process that executes each constraint serially.
If the tables are big, this can take a lot of time.

What I would like is for the constraint-enabling process to be done in parallel somehow.
Alternatives might be:
- tables are sorted in decreasing size order, and each Datapump worker takes the next table from that list and enables the constraints
for that table. This should be done in two phases: first enabling check and primary key constraints for all tables, and after that enabling foreign key constraints.
- the ALTER TABLE command enhanced so that enabling a constraint can use parallel processes the way index creation does. This feature could be useful in other cases too.


Updating optimizer statistics also seems to take quite a long time. I was wondering if parallelism would help, e.g. parallel processes handling different tables/partitions?


lh

partitioning and TDE... not in EE (sigh...)

Stuart, February 14, 2013 - 1:56 pm UTC

Yet another vote for including partitioning and TDE in the Enterprise Edition license.

Partitioning has been around for 10+ years now, and I believe the landscape has well and truly changed since then. I would bet that the majority of customers with an Enterprise Edition license would benefit highly from partitioning, but many don't use it because of the price.

As for transparent data encryption, well, at least the option to encrypt the database (functionality that is bundled as part of SQL Server Standard Edition).

These two would make a tremendous difference in my world.

Any chance of a review?

Hierarchical query in a view.

anonymous, April 11, 2013 - 3:03 pm UTC

Maybe this is more a problem of the SQL language.

However: I don't like that it seems impossible to have an efficient hierarchical query in a view.

Assume we have a view containing a hierarchical query, e.g. against managers and employees. I would like to query this view with someone's id in the where clause and get back that person and all the employees under him or her.
Because the hierarchy is large, the query should start with the person in question, not from the 'root' person -
e.g. somehow pass the parameter value used on the view into the select statement's START WITH.

Recursive subquery factoring doesn't seem to help.
Tom Kyte
April 22, 2013 - 2:58 pm UTC

search this site for parameterized views
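One common "parameterized view" technique discussed on this site uses an application context: a small package sets the parameter into a context, and the view reads it back with SYS_CONTEXT inside START WITH. A sketch, assuming the SCOTT-style EMP table; the context and package names are made up:

```sql
-- hypothetical names: context EMP_CTX, package EMP_CTX_PKG
create or replace context emp_ctx using emp_ctx_pkg;

create or replace package emp_ctx_pkg
as
    procedure set_start_empno( p_empno in number );
end;
/
create or replace package body emp_ctx_pkg
as
    procedure set_start_empno( p_empno in number )
    is
    begin
        -- only this package may write into the EMP_CTX namespace
        dbms_session.set_context( 'emp_ctx', 'start_empno', to_char(p_empno) );
    end;
end;
/
create or replace view emp_tree
as
select empno, ename, mgr, level lvl
  from emp
 start with empno = to_number( sys_context( 'emp_ctx', 'start_empno' ) )
connect by prior empno = mgr;
```

A session would then call `exec emp_ctx_pkg.set_start_empno(7566)` and `select * from emp_tree;` to get that manager and everyone under them, with the CONNECT BY starting at the requested node rather than the root.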

Handling of date columns and trunc-function

anonymous, April 26, 2013 - 7:19 am UTC

Hi,


This wish is for handling date columns containing time part.

E.g. in reporting systems, probably due to the tools used, it is quite typical to see where-clause
constructs like
trunc(date_column) between to_date('01032013','ddmmyyyy') and to_date('07032013','ddmmyyyy')

Use of the trunc function messes with index usage and partition pruning.

Because the date column is used in where clauses both 'plain' and wrapped in trunc, function-based
indexes or functions in the partition definition are not feasible.

Educating the guys writing these statements would be the preferred method.
In practice this isn't working very well, and nobody wants to touch existing code.

What I would like is for the optimizer to understand that trunc(date_column) is just another
representation of date_column, use the knowledge that
trunc(date_column) <= date_column, trunc(date_column) + 1 > date_column, and trunc(date_column) > date_column - 1,

and rewrite the statement to something like
date_column >= to_date('01032013','ddmmyyyy')
and date_column < to_date('07032013','ddmmyyyy') +1

The same goes for truncation to other levels.


Wishes for speeding up refresh groups

anonymous, May 02, 2013 - 8:14 am UTC

Hi


We have a refresh group containing several big materialized views.
These views are refreshed by using incremental (fast) refresh.

Source tables contain data for about 100 days. Older rows are deleted
in a daily batch operation that uses parallel deletes.
The refresh process can handle normal updates pretty well, but it cannot cope
with massive deletions from the source tables. It takes several hours to 'catch up'
with the updates.

What I would like to have is
- some kind of parallel incremental refresh, so deletions would be done faster in the target database.
In normal operation it is essential that transaction consistency is preserved, so we have to do replication in a refresh group.
- some kind of simple way to push the high-water mark down for materialized views, so the normal replication
process would not have to scan empty blocks after deletions have been processed. Shrinking the materialized view log would be a little overkill
because the space will be needed again the next day.


As a question: when a materialized view log is created, the default value for pctfree is 60. In order to reduce the number of
blocks to be scanned, is it feasible to decrease pctfree and reorganize the materialized view log? Only one materialized view is registered
against each table in the source database.








enabling constraints

anonymous, August 02, 2013 - 10:38 am UTC

Hi

One can create constraints
- one by one
- many at the same time, but if any one of them fails, they all fail.

There are some Oracle tools (Datapump, data masking) which create constraints. They do it using the 'one by one' method, and that can take time.

Could there be some option for the command that creates many constraints at the same time, such that if one of the constraints fails it is ignored but the others are created?
This could reduce execution time, because the table would be scanned only once and not once per constraint.
Tom Kyte
August 02, 2013 - 7:27 pm UTC

we do not have the ability to enable many constraints at the same time - they all run different queries to validate the data.

in short, we'd scan the table/index every time anyway, even if there were a single statement with multiple constraints.

LISTAGG Sucks

Goyyala, August 02, 2013 - 5:21 pm UTC

LISTAGG seems to be an excellent feature, but when I use it I get the error 'ORA-01489: result of string concatenation is too long'. That's why Oracle sucks.

Tom Kyte
August 02, 2013 - 7:41 pm UTC

have a nice day.
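For what it's worth, 12.2 later added an ON OVERFLOW clause to LISTAGG that avoids ORA-01489 by truncating the result (a sketch; emp/ename are just the demo schema):

```sql
-- 12.2+: truncate instead of raising ORA-01489; appends an ellipsis
-- marker and, with WITH COUNT, the number of omitted values
select listagg(ename, ',' on overflow truncate '...' with count)
         within group (order by ename) as names
from   emp;
```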

enabling constraint

A reader, August 05, 2013 - 12:33 pm UTC

e.g. creating multiple check constraints in a single table scan pass might be useful.
Also multiple foreign keys. This might be a useful enhancement.

It is frustrating to wait for full table scans of big tables when using Oracle tools.

PL/SQL Functions Defined in the SQL WITH Clause

anonymous, August 06, 2013 - 9:18 am UTC

Oracle 12c has feature 'PL/SQL Functions Defined in the SQL WITH Clause'

I have always regarded it as one of the great features of PL/SQL that you store code once in the database and can then use it 'anywhere', and changes to the stored code are instantly in use.

If I have understood correctly, 'PL/SQL Functions Defined in the SQL WITH Clause' contradicts this, because each statement containing such PL/SQL has its own copy of the code. If the function changes, then you have to change all the statements.

Could there be some way to define PL/SQL code in the WITH clause and say that the definition should be taken from some stored function, so the performance benefits of having PL/SQL in the SQL statement are achieved but this 'write once/use anywhere' is retained?
Tom Kyte
August 08, 2013 - 4:52 pm UTC

pragma udf

yes.

http://docs.oracle.com/cd/E16655_01/appdev.121/e17622/langelems001.htm#LNPLS1851
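A minimal sketch of PRAGMA UDF (the function and table names are hypothetical): the function remains stored once and shared, but is compiled so that calling it from SQL is cheaper:

```sql
create or replace function double_it( p_x number ) return number
as
        pragma udf;   -- compile for efficient invocation from SQL
begin
        return p_x * 2;
end;
/
-- still 'write once, use anywhere' - any statement can call it
select double_it( empno ) from emp;
```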


however that said - being able to embed a snippet of plsql in sql is very cool. Suppose I want to write a script, and I'd like you to be able to run it. And assume that it would be very nice for me to be able to include a tiny bit of PLSQL in it.

Now, in order to run my script - you'd need create procedure. you'd need to install my code. then you could run my script.

Having a reusable bit of code as a standalone function - great.

being able to incorporate a tiny bit of plsql when you need it - WITHOUT create procedure (so anyone can use it, without having to install code) - really great.


for example, case in point:


ops$tkyte%ORA12CR1> with
  2  function get_partition_high_value( p_tname in varchar2, p_pname in varchar2 ) return date
  3  as
  4          l_date date;
  5          l_high_value varchar2(32000);
  6  begin
  7          select high_value into l_high_value
  8            from user_tab_partitions
  9           where table_name = p_tname
 10             and partition_name = p_pname;
 11
 12          if (l_high_value = 'MAXVALUE')
 13          then
 14                  l_date := to_date( '1-jan-9999', 'dd-mon-yyyy' );
 15          else
 16                  execute immediate 'select ' || l_high_value || ' from dual' into l_date;
 17          end if;
 18          return l_date;
 19  end;
 20  select partition_name, high_value, get_partition_high_value('T',partition_name)
 21    from user_tab_partitions
 22   where table_name = 'T';
 23  /

PARTITION_ HIGH_VALUE                                    GET_PARTI
---------- --------------------------------------------- ---------
JUNK       MAXVALUE                                      01-JAN-99
PART1      TO_DATE(' 2003-03-13 00:00:00', 'SYYYY-MM-DD  13-MAR-03
           HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')

PART2      TO_DATE(' 2003-03-14 00:00:00', 'SYYYY-MM-DD  14-MAR-03
           HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')





now, as a DBA, I can run that on a plethora of databases - without having to worry about the snippet of code being installed.


or I might be in an environment where it would be darn handy to have a tiny formatting function - to remove repetitive code from my query - but for whatever reason "I'm not allowed to install code". Perhaps I'm using a database link to a database I cannot create code in.

Erix, August 06, 2013 - 7:21 pm UTC

Things I don't like about Oracle:

1) Table name length limit. It is still 30 characters.

2) Procedure/function/package name length limit. Very often it is not possible to give a meaningful description to a routine using only 30 characters.

3) The requirement to specify a varchar2 length. Why do I need to specify it? Why can't Oracle figure it out automatically?
You could add a procedure like set_length(l_variable, 1000) for performance reasons, but for simple operations like l_var := l_a || l_b it should be automatic.

4) 32K limit for varchar2. It should be removed together with point 3. See points about CLOB.

5) Clobs. Inconsistency with them.
-- For example, this works:
DECLARE
l_clob1 CLOB;
BEGIN
l_clob1 := 'A';
END;

-- This works as well:
DECLARE
l_clob1 CLOB;
BEGIN
select 'A' into l_clob1 from dual;
END;

-- This fails if you don't call dbms_lob.createtemporary :
DECLARE
l_clob1 CLOB;
l_clob2 CLOB;
BEGIN
l_clob1 := 'A';
--dbms_lob.createtemporary(lob_loc => l_clob2, cache => FALSE);
dbms_lob.append(dest_lob => l_clob2, src_lob => l_clob1);
END;

6) Clobs. Performance

-- try to run this.
DECLARE
c_max constant number := 50000;
TYPE t_varchar2_tab IS TABLE OF VARCHAR2(100) INDEX BY VARCHAR2(100);
TYPE t_clob_tab IS TABLE OF CLOB INDEX BY VARCHAR2(100);
l_tab1 t_varchar2_tab;
l_tab2 t_clob_tab;
l_start NUMBER;
BEGIN
l_start := dbms_utility.get_time;
FOR nn IN 1 .. c_max LOOP
l_tab1(nn) := 'DATA:' || nn;
END LOOP;
dbms_output.put_line(round((dbms_utility.get_time - l_start) / 100, 2) ||
' seconds for varchar2');
l_start := dbms_utility.get_time;
FOR nn IN 1 .. c_max LOOP
l_tab2(nn) := 'DATA:' || nn;
END LOOP;
dbms_output.put_line(round((dbms_utility.get_time - l_start) / 100, 2) ||
' seconds for clob');
END;

-- results from Oracle 12C in Windows,
-- but it's pretty much the same on Oracle 10 and 11 as well.
,05 seconds for varchar2
16,51 seconds for clob


I, personally, would :
a) Remove length limit from VARCHAR2 and requirement to specify the length.
b) Get rid of CLOB at all and replace it with VARCHAR2 from step a.

7) PL/SQL and OOP. Nice and needed feature, but very limited. Here's why:
a) You must have at least one attribute in a type specification. This is weird;

b) You cannot use parameters in member procedures/functions which are in format table.column%type. You need to use number, varchar2(100) etc.
-- this works
CREATE OR REPLACE TYPE t_test AS OBJECT
(
-- attribute -- why do I need at least one?
l_dummy NUMBER,

MEMBER PROCEDURE dummy(pi_dummy IN VARCHAR2)
)

-- this doesn't
CREATE OR REPLACE TYPE t_test AS OBJECT
(
l_dummy NUMBER,

MEMBER PROCEDURE dummy(pi_dummy IN user_tables.TABLE_NAME%type)
)


c) You cannot use associative arrays as members or parameters in PL/SQL OOP:
--
CREATE OR REPLACE PACKAGE test_type IS
TYPE t_tab IS TABLE OF VARCHAR2(100) INDEX BY BINARY_INTEGER;
END test_type;

-- this doesn't work
CREATE OR REPLACE TYPE t_test AS OBJECT
(
l_tab test_type.t_tab,
MEMBER PROCEDURE dummy(pi_dummy IN VARCHAR2)
)

-- this one doesn't work as well
CREATE OR REPLACE TYPE t_test AS OBJECT
(
l_dummy NUMBER,
MEMBER PROCEDURE dummy(pi_dummy IN test_type.t_tab)
)

d) You cannot have private procedures/functions in a type body. They must be in the type specification as well.

e) I kinda understand those limitations, because it is possible to store objects in a database. That would be fine, but this really cripples PL/SQL OOP functionality.




Tom Kyte
August 08, 2013 - 5:06 pm UTC

1) stay tuned...

2) see #1

3) that is a bad idea. the length of a datatype is an integrity constraint. integrity constraints are a good thing (tm), not a bad thing. If you truly don't care about the length, use CLOB (don't - really don't - but that is what you are effectively asking for)

4) if you don't want a 32k length and you don't want to specify a length - well, then clob.

5) I don't see how that is inconsistent, you have a NULL in your clob, you are trying to append to a NULL pointer some stuff. That doesn't work in any language that I'm aware of.

Now, if you are comparing this to varchar's, then your code should have been:

ops$tkyte%ORA11GR2> DECLARE
  2          l_clob1 CLOB;
  3          l_clob2 CLOB;
  4  BEGIN
  5         l_clob1 := 'A';
  6             l_clob2 := l_clob2 || l_clob1;
  7             dbms_output.put_line( l_clob2 );
  8  END;
  9  /
A

PL/SQL procedure successfully completed.



that is concatenating NULL with 'A' and assigning that to a clob. just like you would do with strings.

6) if you want unlimited string size.... yes there is a performance ding for setting up a data field that has basically unlimited size.



Erix, August 11, 2013 - 4:21 pm UTC

Thanks for the answers. I agree about constraints, but I don't agree that the size limit/requirement cannot/shouldn't be removed from varchar2.
Also, I don't believe that CLOB cannot be as fast as varchar2 :)

Here's some more for CLOB.

DECLARE
  l_clob1 CLOB;
  l_clob2 CLOB;
  l_clob3 CLOB;
BEGIN
  l_clob1 := rpad('X', 32000); -- this works
  --l_clob1 := rpad('X', 34000); -- this doesn't -- ORA-06502: PL/SQL: numeric or value error: character string buffer too small

  l_clob2 := 'A'||l_clob1;
  l_clob3 := REPLACE(l_clob1, 'A', l_clob2); -- this works

  l_clob2 := 'A'||l_clob1||l_clob1;
  l_clob3 := REPLACE(l_clob1, 'A', l_clob2); -- this doesn't -- ORA-22828: input pattern or replacement parameters exceed 32K size limit
END;


If rpad/lpad works with CLOBs < 32K just because they are internally converted to varchar2, but doesn't with CLOBs > 32K, I call that an inconsistency.
The same goes for replace.

It is very annoying that you might get those errors only during runtime.


And one more

type t_tab1 is table of varchar2(100) index by binary_integer;  -- works
type t_tab2 is table of varchar2(100) index by varchar2(10);    -- works
type t_tab3 is table of varchar2(100) index by number(10);      -- doesn't -- PLS-00315: Implementation restriction: unsupported table index type

If index by varchar2(10) works, why doesn't index by number(10)? You can always convert number(10) to varchar2(10) - that is what I do, and Oracle could do this internally as well.
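The conversion workaround described above might look like this (a sketch; the names are made up):

```sql
declare
        type t_tab is table of varchar2(100) index by varchar2(10);
        l_tab t_tab;
        l_key number(10) := 42;
begin
        -- convert the numeric key to a string key on the way in and out
        l_tab( to_char(l_key) ) := 'some data';
        dbms_output.put_line( l_tab( to_char(l_key) ) );
end;
/
```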

PS
Don't get me wrong. I like Oracle. These are things that confuse and sometime make me angry and maybe some of them will be updated/fixed/changed in the future :)

Alexander, October 09, 2013 - 6:41 pm UTC

This probably isn't that relevant here because it's not a technical limitation of Oracle. But since you said these things are sometimes seen internally to the company I figured what the heck....

One word: licensing.

Oracle really needs to work on this in several areas. One example is charging extra for most of the neat features even when you already buy the super-expensive Enterprise Edition. Microsoft does it right: you buy the big daddy version, you get all the features with it. There are some exceptions, like say RAC - that's really a different product altogether.

Another example is their virtualization licensing model: charging for every core on the machine even if only a small percentage of your VMs is used for Oracle. Again, Microsoft lets you license just your VMs.

These things make it very hard to convince people to use Oracle. My argument becomes "Oracle offers all these amazing features, competitor X doesn't"*

*If you're willing to pay through the nose for it"

Instead of "Oracle offers these features that the competition does not. End of story."

I work for a Fortune 100 company and they are moving away from Oracle and it's very frustrating. But at the same time, they have a good case.

SQL Developer GUI

Vinay, October 15, 2013 - 9:44 am UTC

Oracle SQL Developer is a huge and bloated piece of software, and they can't get a simple thing like autocomplete of schema object names right. Performance is terrible, and it is a pity that Oracle, with such a great database, has chosen to give us such a useless GUI tool.

Maybe you need to look at taking over AllRoundAutomations and start packaging their excellent PL/SQL Developer product along with the Oracle DB.

user defined query transformation

asktom fan, November 01, 2013 - 8:31 pm UTC

What I would like to have is so-called user-defined query transformation. That would allow us (DBAs or developers) to tell the optimizer to execute SQL ID bbbbbbbbbbbbbbb when it sees SQL ID aaaaaaaaaaaaaaa.

The reason I want this is that very often we see bad SQL generated by a third-party application. We cannot touch its code, change its code, or hint its code. Regardless of what we do, we cannot get a good plan for its SQL, as it is simply poorly written, bad SQL. We can definitely write a better SQL to get the same result. However, we cannot "inject" the good SQL into the third-party application.

If Oracle allowed user-defined query transformation, it would be really cool.


user defined query transformation

asktom fan, November 04, 2013 - 4:38 pm UTC

Hi Tom,

Thanks for the reply.

While I am not a great fan of DBMS_ADVANCED_REWRITE due to its limitations especially with bind variables, I am really excited with the 12c new feature: SQL Translation Framework.

Kerry also wrote a nice blog about this feature:

http://kerryosborne.oracle-guy.com/2013/07/sql-translation-framework/

I will sure test this thing out.

Again, thanks for your service to the Oracle community!

A reader, November 06, 2013 - 9:07 pm UTC

What I would change is the PL/SQL developer tool - I'd make it something like TOAD.
Tom Kyte
November 07, 2013 - 3:42 pm UTC

we call that sqldeveloper.

I think you mean SQL Developer

Graeme King, November 06, 2013 - 10:20 pm UTC

but anyway I do agree. In my opinion TOAD is and always has been a great product.

Things I don't like about Oracle - SQL Developer. I remember Tom Kyte used to call Windows 'Windoze' and I think he was expressing his frustration at a poorly designed and 'slow' product that couldn't really hack it in the professional space - hello SQL Developer.

Michael, November 07, 2013 - 5:43 pm UTC

Oracle error reporting is horrible. Some simple information related to an error message (which keyword? what user? which constraint, etc.) would make trouble shooting better.

Oracle documentation is always a step short. I'm always left asking a couple of key questions that need answering.

A reader, November 08, 2013 - 6:13 pm UTC

I will create a better OERR utility for error messaging.

SQL Developer

A reader, November 11, 2013 - 11:11 pm UTC

This is undoubtedly the worst thing about Oracle. Some people may love SQL*Plus, but a majority of us like to work with GUIs.

The best tool out there is PL/SQL Developer by Allround Automations. Beautifully designed, fast and fun to use. Toad comes second. SQL Developer comes a very very distant third.

SQL Developer is a bulky hog. It takes 1-2 seconds to give me autocomplete hints for table names. Unbelievable. And when it does work, usually the "autocomplete" just gets in the way.

And it is soooo sluggish. Slow dinosaur-like responses. And once you open a SQL Developer window, everything is one single session. So if you run a statement in one window, you cannot do anything till the statement completes. You will have to open a new instance of the sluggish and gigantic SQL Developer to get any work done when the query is running. And it just "hangs" if you try clicking anything in the window when a query is running, especially a DDL statement.

This is very disappointing, coming from Oracle. I hope this feedback is passed on to the decision makers w.r.t. the SQL Developer product, and either improve the product drastically or junk it.

so true - unfortunately

Graeme King, November 12, 2013 - 3:43 pm UTC

Your criticism is spot on. SQL Developer is a mediocre product at best and is probably still slower today than TOAD was when I first started using it in 1999 on Windows 98.

If this is Oracle and Java's flagship database tool application (write once, run anywhere - remember that?) then they can keep it. Give me something fast and nimble written in C or C++ any day.

A reader, November 13, 2013 - 4:23 pm UTC

Personally, I am very happy with SQL Developer. Having a single program I can run on multiple platforms is very handy.

>> It takes 1-2 seconds to give me autocomplete hints for table names.

That's configurable. Check the Preferences

>> And once you open a SQL Developer window, everything is one single session. So if you run a statement in one window, you cannot do anything till the statement completes.

That's not correct. Create an "Unshared" connection using Alt-F10 or Ctrl+Shift+N.

Great points

Graeme King, November 13, 2013 - 5:41 pm UTC

Thanks for your insight. I must admit my ignorance of the autocomplete 'speed' settings. The 'unshared' connections information is very helpful too, although I wish that was better 'advertised', e.g. I don't see it as a setting in the connections window or menu options anywhere. I could be missing it though.

The ability to run on something other than Windows is cool, although I would imagine that it is a tiny user base - since 99% of developers and DBAs should not be working on the server directly and the world is still full of Windows clients. They should build it as a browser-based version if they want it to be universally accessible (including mobile).

A reader, November 14, 2013 - 3:19 pm UTC

>> They should build it as a browser based version if they
want it to be universally accessible (including mobile).

They do - it's called Enterprise Manager.

Enterprise Manager is not a developers tool

Graeme King, November 14, 2013 - 3:51 pm UTC

It's for 'Managing the Enterprise'. SQL Developer is for 'Developing SQL' - broadly speaking of course...

but if you can seriously use it for developing SQL and PL/SQL packages then go for it...

such threads are useless

A reader, November 20, 2013 - 2:36 am UTC

Such threads have no significance, since Oracle will do what they like.

The biggest problem with Oracle products is the bugs. One of Oracle's leading products in bugs is Oracle Data Integrator - it's full of them. I have filed more than 3 bugs on just simple import/export of projects. I have barely been able to save my job due to tight timelines and the Metalink support, which is now totally useless.

They don't let you file bugs, and if you are able to prove that you are hitting a bug, they don't give you a workaround and become inactive. Then after one or two weeks they call you back, hope that your problem is resolved, and close the bug.

What a strategy.

New type of constraint

anonymous, December 04, 2013 - 8:16 am UTC

I would like to have a new type of constraint.

Many tables have a form:
KEY,
VALID_FROM,
VALID_TO,
DATA1,
DATA2


I would like Oracle to understand that at any single point in time there is at most one row that is valid.

Referential integrity would be enhanced to also refer to this kind of structure, either via some fixed column from the referencing table or via a general reference, so that e.g. sysdate could be used in SQL statements.

This would reduce errors, and would enable table/join elimination to understand the join structure and remove unneeded joins based on it.

Join elimination is a nice feature - I do like it. But it is still not complete.
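For what it's worth, 12c introduced Temporal Validity, which covers part of this wish - it declares the validity period and lets SQL filter on a point in time - although, as far as I know, it does not enforce non-overlapping periods per key. A sketch (names hypothetical):

```sql
-- declare the validity period on the table
create table t (
        key        number,
        data1      varchar2(30),
        valid_from date,
        valid_to   date,
        period for valid_time (valid_from, valid_to)
);

-- rows valid right now, without hand-writing the range predicate
select * from t as of period for valid_time sysdate;
```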


why Oracle is moving away from Mac

A reader, December 31, 2013 - 9:05 am UTC

Tom,

The Mac user base has increased considerably, especially since Windows 8 is now not really an enterprise operating system. Why is Oracle not considering Mac OS X as the alternative to Windows? I know you don't like Windows, but that preference is not apparent from your products.

Hearsay

Graeme King, January 02, 2014 - 10:28 pm UTC

When did Windows 8 stop becoming an enterprise O/S?
http://en.wikipedia.org/wiki/Windows_8_editions

I doubt that Mac OS X Server is a significant player in the Oracle database server OS market (although I would be interested in seeing evidence to the contrary). As for the client side, no doubt the Mac user base has increased, but SQL Developer runs just as badly on Macs as it does on Windows.

I can't speak for Tom Kyte but I would guess that he has mellowed a great deal in terms of newer versions of Windows and its ability to run Oracle as an enterprise product. In fact it's pretty good at it.

I believe hardware and OS choices will become less of a factor as the move to a service/utility model (like SaaS) increases.

What don't you like about Oracle - sql developer Tool it sucks

A reader, January 21, 2014 - 8:10 am UTC

Just wanted to get users' views on this terrible garbage free tool from Oracle called SQL Developer.

Just how user-enemy can a tool be? So difficult to use.
Just try finding the DDLs, for example.

Compare it with TOAD and SQL Developer is trash.


I have been a great fan of Oracle products

I really don't know how someone from Oracle can develop such an un-intuitive waste of time tool.

Please learn from TOAD and adopt some TOAD features.
This product is the result of some confused minds at Oracle, a company which has come up with such great innovations in DB technology.

Please stop the torture and stop this tool, or make it unpoisionous.
Thoughts?

Hilarious

Graeme King, February 04, 2014 - 8:15 pm UTC

>> 'terrible garbage', 'user-enemy', 'trash', 'un-intuitive waste of time', 'confused minds', 'stop the torture', 'make it unpoisionous'

Man, that is what I call a RANT! :) You really don't like it, do you?

>> thoughts?

Don't use this 'free tool' if it upsets you that much.

full outer join

anonymous, March 14, 2014 - 11:55 am UTC

I would like Oracle to have an extension for full outer joins.

Currently the join column has to be referenced in the select list as

nvl(table1.join_column, table2.join_column) join_column

If this select statement is used in a view and join_column is used as a selection criterion (= where clause) when referencing the view, there seem to be performance problems, because the selection criterion is not pushed through the nvl.

The optimizer understanding the use of nvl in these situations?
An Oracle-specific function for referencing the columns used in full outer joins?
Something clever?
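One rewrite that can sometimes work around this today (a sketch with hypothetical names, not a general recommendation) is to decompose the full outer join so the join column is a plain base column in each branch, leaving a predicate on the view pushable:

```sql
-- equivalent to t1 full outer join t2, but join_column is a plain
-- base column in each branch, so a filter on v.join_column can be
-- pushed into both branches
create or replace view v as
select t1.join_column, t1.a, t2.b
from   t1 left join t2 on t2.join_column = t1.join_column
union all
select t2.join_column, null as a, t2.b
from   t2
where  not exists (select null from t1
                   where t1.join_column = t2.join_column);
```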

To "SQL Developer is terrible garbage" guy

Mark, May 29, 2014 - 3:51 pm UTC

"Just try finding the DDLS for example."
Are you talking about generating a table's DDL? If so, you just right-click the table name then click "Export DDL". Ta-da. You can even select multiple tables if you'd like.

Predicate information in SQL Monitor report

Rajeshwaran, Jeyabal, April 09, 2020 - 7:26 am UTC

The output produced by SQL Monitor report is really helpful.

Currently we use dbms_sqltune.report_sql_monitor(type=>'text') to get the textual version of the SQL Monitor report - very lightweight compared to the "active" format. However, the predicate information section is missing from the textual version, while it is available in the "active" format.

It would be very helpful if the textual version supported the predicate information; that would help us to isolate which predicates are access predicates and which are filters while we navigate through the execution plan details.


Connor McDonald
April 22, 2020 - 4:26 am UTC

Activity % in SQL Monitor

Narendra, April 22, 2020 - 8:06 pm UTC

This is not exactly a wish, but after 11.2.0.4 the SQL Monitor report in text format does not always show the activity % and wait breakdown details. I believe this has been logged as a bug, but it is so useful and I get annoyed when I cannot see it.

Also, it is almost 20c... log access and filter predicates in AWR... please.
They are already available in v$ views - I'm sure it's not a big change to store them in AWR - but it will be so much better to have them in display_awr.

Activity % in SQL Monitor

Rajeshwaran, Jeyabal, April 23, 2020 - 1:01 pm UTC

....
This is not exactly a wish, but after 11.2.0.4 the SQL Monitor report in text format does not always show the activity % and wait breakdown details. I believe this has been logged as a bug, but it is so useful and I get annoyed when I cannot see it.
....


This is on 12.2 - the activity % and wait breakdown details are still available in the SQL Monitor (text) report.

SQL Plan Monitoring Details (Plan Hash Value=714537045)
================================================================================================================================================================================================================
| Id   |              Operation               |       Name        |  Rows   | Cost |   Time    | Start  | Execs |   Rows   | Read  | Read  |  Mem  | Activity |           Activity Detail           | Progress |
|      |                                      |                   | (Estim) |      | Active(s) | Active |       | (Actual) | Reqs  | Bytes |       |   (%)    |             (# samples)             |          |
================================================================================================================================================================================================================
|    0 | SELECT STATEMENT                     |                   |         |      |           |        |     1 |          |       |       |     . |          |                                     |          |
|    1 |   SORT AGGREGATE                     |                   |       1 |      |           |        |     1 |          |       |       |     . |          |                                     |          |
|    2 |    PX COORDINATOR                    |                   |         |      |           |        |    17 |          |    76 | 608KB |     . |          |                                     |          |
|    3 |     PX SEND QC (RANDOM)              | :TQ10000          |       1 |      |           |        |    16 |          |       |       |     . |          |                                     |          |
| -> 4 |      SORT AGGREGATE                  |                   |       1 |      |       439 |     +0 |    16 |        0 |       |       |     . |    26.14 | Cpu (1832)                          |          |
| -> 5 |       OPTIMIZER STATISTICS GATHERING |                   |     12M | 149K |       439 |     +0 |    16 |     475M |       |       | 768KB |    23.43 | Cpu (1642)                          |          |
| -> 6 |        PX BLOCK ITERATOR             |                   |     12M | 149K |       437 |     +2 |    16 |     475M |       |       |     . |          |                                     |          |
| -> 7 |         TABLE ACCESS STORAGE FULL    | XXXXXXXXXXXXXXXXX |     12M | 149K |       439 |     +0 |   129 |     475M | 25828 |  24GB |  48MB |    50.43 | Cpu (3516)                          |     100% |
|      |                                      |                   |         |      |           |        |       |          |       |       |       |          | cell single block physical read (3) |          |
|      |                                      |                   |         |      |           |        |       |          |       |       |       |          | cell smart table scan (15)          |          |
================================================================================================================================================================================================================



....
Also, it is almost 20c.....log access and filter predicates in AWR.....please
They are already available in v$ views...am sure not a big change to store them in AWR....but will be so much better to have them in display_awr
....


Yes, Frank has raised it in the Database Ideas forum - go and vote for it to get it delivered in a future Oracle database release.

https://community.oracle.com/ideas/2494

But there is a workaround for that - Frank discusses it in his blog with a neat demo.

https://blog.dbi-services.com/awr-dont-store-explain-plan-predicates/

Hope this would be helpful.
Connor McDonald
April 28, 2020 - 3:06 am UTC

Here's a potential workaround for getting SQL Monitor predicate information from the text version of the report
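(The video that followed is no longer available. One common alternative - not necessarily what the video showed - is to pull the predicate section for the monitored statement from the cursor cache with DBMS_XPLAN; the sql_id below is a placeholder:)

```sql
select *
from   table( dbms_xplan.display_cursor( 'your_sql_id', null,
                                         'BASIC +PREDICATE' ) );
```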



Video not available??

Narendra, April 25, 2020 - 6:59 pm UTC

Hello Connor,

Looks like the video is no longer available?? At least that is what I get.

@Rajeshwaran,

The activity data gets displayed as long as it is in v$ views (even in 12.1.0.2) but not after it is flushed to disk.
Connor McDonald
April 28, 2020 - 3:09 am UTC

You caught us in-between the old video and the more specific newer version

Oracle is shockingly NOT user-friendly

Sirin, March 17, 2022 - 1:56 pm UTC

I am a computer science student in Oslo, Norway. I have completed several courses on Python and Java/OOP, and was looking for certifications. So I wanted to check out Oracle, of course. I would be more than happy to embrace a code challenge, but I got tired of just trying to navigate and get started. Both the webpage and the courses take too much time to load and navigate. I have to wait for some time. I cannot skip to the end of the course, because I just want to take the exam (free Java course).

It was an annoying experience. I have issued a "ticket" and haven't got any help yet - all in all very disappointing and annoying.

By the way, I am at the city centre campus of Oslo University, and have a good connection and a high-performance laptop. My first experience with Oracle is not positive, and I think I have had enough.
Connor McDonald
March 18, 2022 - 4:52 am UTC

Can you provide the link that was slow? We'll try to track it down and see what the issue was.