
Question and Answer

Connor McDonald

Thanks for the question, Alex.

Asked: September 28, 2005 - 6:54 pm UTC

Last updated: March 18, 2022 - 4:52 am UTC

Version: 8.1.7

Viewed 100K+ times!

You Asked

Hi Tom,

I was wondering if there are some things you don't like about Oracle?
I personally wish more things came set up out of the box, such as autotrace, interMedia Text, the Scott schema, etc. Any thoughts on this?

-Alex

and Tom said...

Well, Oracle Text does come out of the box (the DBA just needed to have selected it). The Scott schema does too (ditto; the DBA just needs to have installed the demos).

Autotrace - yes, it would be nice if it were there. HOWEVER, on a production system it shouldn't necessarily be there, so..... maybe that is good.
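For reference, enabling autotrace is a one-time DBA task; a minimal sketch, assuming the demo user scott (any user works):

```sql
-- Run once as SYS: the plustrce.sql script creates the PLUSTRACE role
-- (? expands to ORACLE_HOME in SQL*Plus)
@?/sqlplus/admin/plustrce.sql
GRANT plustrace TO scott;

-- From then on, any granted session can do:
SET AUTOTRACE ON
SELECT COUNT(*) FROM emp;
```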

What would I change....

Actually, many of the things I would change are being addressed - a better sqlplus that is still sqlplus (coming; code name "Raptor", previewed at OOW)...

htmldb - that was a big one.

Most of the things I would change are "political": the DBA/developer relationship (or lack thereof), and the (sometimes) developer approach of "I don't need to learn the database."

There is so much in there - what I would change is people's knowledge level of what is actually available.

I would appreciate better error messages sometimes - I can usually figure them out, but.... ;)

I'll publish this to see what others add - maybe some have been addressed and we can point it out - others, I'll file away for the days when I'm asked "what would you add...."


Rating

  (690 ratings)

Is this answer out of date? If it is, please let us know via a Comment

Comments

Things we DON'T like

Mike Friedman, September 29, 2005 - 10:11 pm UTC

1. Lack of proxy capability within the DB - we should be able to log onto a non-privileged account, authenticate ourselves, and then have our connection switched to a selected privileged account.

2. Merge statement should take multiple actions just like Insert

3. The current move away from PL/SQL and transactional interfaces to many components needs to be reversed. For example, there is no way to make use of CDM SDK direct from PL/SQL while maintaining transaction atomicity, etc.

Tom Kyte
September 30, 2005 - 8:55 am UTC

1) that is not within the database. We have that ability in the database, sqlplus doesn't "do it"

2) it does - when matched, when not matched. You can insert and update and delete from a table???

I don't know what you mean.

3) that seems to be a contradictory statement? You say "don't go components" but then say "we need an interface to components"? (Frankly, I don't know what the CDM SDK is.)
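On point 2, the "multiple actions" Tom refers to can be sketched like this (10g syntax; emp_stage and its terminated flag are hypothetical names):

```sql
-- One MERGE can update, delete, and insert in a single pass
MERGE INTO emp e
USING emp_stage s
ON (e.empno = s.empno)
WHEN MATCHED THEN
  UPDATE SET e.sal = s.sal
  DELETE WHERE (s.terminated = 'Y')   -- applies to rows after the update
WHEN NOT MATCHED THEN
  INSERT (empno, ename, sal) VALUES (s.empno, s.ename, s.sal);
```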

Wow! You Let me Build my Wishlist?

Arup Nanda, September 29, 2005 - 11:48 pm UTC

Oh, I could go on forever. Here are some:

1. SQL*Plus with a built-in tabular format output; useful for rows that span across many lines. No, Grid Control SQL Output just doesn't cut it; it's a static output of the entire table. It doesn't have the powerful SQL*Plus commands.

2. An unloader utility that reads at the block level and dumps data as plaintext ASCII much faster than SQL*Plus or Pro*C programs can.

3. I miss SQL*Forms 3.0 (if anyone is old enough to remember that), which allowed me to do quick and dirty data entry screens for data manipulation. And, sweet oh sweet, I could edit the INP files for quick modifications. Master-detail forms were a snap.

4. Separation of roles, where the DBA can see the data files, dictionary, etc., but not the actual data. A separate role would allow the data to be seen and privileges granted. A third account type would have access to audit trails only. I understand this is being addressed in Data Vault.

5. PL/SQL compilation where the DBA controls whether dependent objects such as triggers, functions, etc. are invalidated by table alters. When a table column's data type is modified, the dependent objects are immediately invalidated; however, sometimes a change to the table does not affect the behavior of the dependent object and it need not be invalidated. This would help immensely in keeping up the uptime of the system.

6. Batch commits: I should be able to do a mass update but ask Oracle to commit only at a certain frequency. I can do this with cursors, but a simple SQL option would be very useful - something similar to the commit frequency setting in the COPY command.

7. Conditional shared pool manipulation. The shared pool contains SQL statements that should have been aged out eons ago but linger about and cause ORA-4031 errors. How about allowing the DBA to age out certain SQL, either explicitly or via some pattern matching, or both?

8. Ability to use different Unix userids for the oracle software owner and the user who owns the logs such as the alert log, listener log, CRS log, etc. This would allow separation of roles - the DBA can perform database maintenance, but a different user owns the logs, so the oracle software owner can't delete them. The logging itself could be performed by a root-owned process, installed during the execution of root.sh.

... there is more, but I leave it to others to say how they feel.
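Wish 6 can be approximated today; a sketch, with hypothetical table and column names, and with the usual caveat that committing across an open cursor risks ORA-01555:

```sql
DECLARE
  CURSOR c IS SELECT rowid rid FROM big_table WHERE needs_update = 'Y';
  TYPE rid_t IS TABLE OF ROWID;
  l_rids rid_t;
BEGIN
  OPEN c;
  LOOP
    FETCH c BULK COLLECT INTO l_rids LIMIT 1000;
    EXIT WHEN l_rids.COUNT = 0;
    FORALL i IN 1 .. l_rids.COUNT
      UPDATE big_table SET needs_update = 'N' WHERE rowid = l_rids(i);
    COMMIT;  -- commit every 1000 rows, much like COPY's COPYCOMMIT setting
  END LOOP;
  CLOSE c;
END;
/
```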

Tom Kyte
September 30, 2005 - 8:59 am UTC

(1, 2) => raptor

(3) => htmldb!


4, You can create your own DBA role "my_dba" and grant it select on the dictionary objects you want -- but not the "select any table" and such. Meaning - you can accomplish this today?

6, I disagree, that would be excessively dangerous. How do you restart safely such an operation?


7, I'd sort of like a "don't share" hint :) for applications that don't bind - don't have their sql even go there....
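Tom's point 4 ("you can accomplish this today") might be sketched as follows; my_dba and some_admin_user are hypothetical names:

```sql
-- A hand-rolled restricted DBA role: dictionary access, but no access to
-- application data, because SELECT ANY TABLE is deliberately never granted
CREATE ROLE my_dba;
GRANT SELECT_CATALOG_ROLE TO my_dba;      -- the dictionary views
GRANT CREATE SESSION, ALTER SYSTEM TO my_dba;
GRANT my_dba TO some_admin_user;
```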




Things we don't like

Jim Weatherly, September 30, 2005 - 12:30 am UTC

1. Cross-platform data file incompatibility. With the size of databases these days export/import takes too long. Transportable tablespaces require 10g to cross platforms and have to go through an extra RMAN process--and as of 9.2.0.5 the process is still buggy.

2. The single oracle.exe on 32-bit Windows. Context switching among threads may be faster, but the limits of addressable memory make this less important, even with the /3GB switch. 64-bit Windows should address this, but not without upgrading hardware.

3. Inability to bypass redo for statements that don't need recovery (for example, the insert /*+ append */ into the Hits table for Tom's pagination solution). Always writing above the high watermark can eat up space quickly and like a temporary tablespace, I don't care if I can't recover it. Create a new kind of tablespace of type unrecoverable and stick segments you don't care about here. Just a thought.

4. Lackluster management tools. While grid control is promising, it has taken a long time to get there. After paying the price for the best database on the planet it's annoying to have to pay such a premium for the tools to manage it. Also, more stable Intelligent Agents would be nice.

5. Lack of dynamically expandable OCFS. It would be nice to be able to resize an OCFS drive on the fly as with other file systems. ASM on raw devices may make this moot, but not on 9iR2.
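Point 3 can be partially had today: a NOLOGGING segment combined with direct-path insert generates minimal redo. A sketch, with hypothetical table names:

```sql
ALTER TABLE hits NOLOGGING;

-- direct-path load above the high watermark, minimal redo on the NOLOGGING segment
INSERT /*+ APPEND */ INTO hits SELECT * FROM hits_stage;
COMMIT;
-- caveat: the loaded data is unrecoverable from backups taken before the load
```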

Tom Kyte
September 30, 2005 - 9:01 am UTC

1) 10gR2 has full database transports across platforms with the same endianness now.

2) multi-process would be horrendous on windows - it would not perform even a little bit.


...

Bob B, September 30, 2005 - 12:38 am UTC

I'd like to concur and extend Tom's point further. As the database becomes more "self-managing", it becomes more user-friendly. Providing better Oracle-supplied tools (e.g. Raptor) also improves the user-friendliness. The end result will (hopefully) be a gentler learning curve for Oracle users. HTML DB was another step in the same direction.

If you know how to use it, Oracle can do some amazing things. By making Oracle easier to use for more people, you increase the number of people who know how to use it. More people kicking a$$ with Oracle will lead to more kick-a$$ applications. This opinion is a takeoff from the Creating Passionate Users blog (see Tom's blog for the URL).

Amen

Tony, September 30, 2005 - 12:51 am UTC

<quote>
1. SQL*Plus with a built-in tabular format output; useful for rows that span
across many lines. No, Grid Control SQL Output just doesn't cut it; it's a
static output of the entire table. It doesn't have the powerful SQL*Plus
commands.

2. An unloader utility that reads at the block level and dumps data as plaintext
ASCII much faster than the SQ*Plus or Pro*C programs can do.
</quote>

1. I couldn't agree more. Occasionally I need to look at some very wide tables (135 columns) and SQL*Plus doesn't really cut it. I use SQL*Plus for 98% of my work; the other 2% is JDeveloper, just to see these wide tables. I wish I could just stay in my terminal window.

2. Not only something that is easy and fast, but that provides some good error handling.

Peeves and Wants

Billy, September 30, 2005 - 2:05 am UTC

My biggest pet peeve is that grants assigned via roles do not work in named PL/SQL (despite working in anonymous PL/SQL) - which means those grants have to be explicitly granted to the user/schema. I have never really seen a sensible explanation for this in any Metalink material. Tom, have you perhaps addressed this here? (I had a brief look some time ago and did not see anything specific from you on this.)

On the Wanted List: no one has yet mentioned O-O (or O-R) in Oracle. I'm nuts about ADTs. It is one of the best features of Oracle... and one of the most ignored.

But I want loads more. In an ADT I want to be able to define private, protected and public properties and methods. I want to be able to define getters and setters for properties. Most of all, I want to be able to call a parent constructor in a child class's constructor without having to resort to ugly hacks. I went into some detail on this in Usenet: http://groups.google.com/group/comp.databases.oracle.server/browse_frm/thread/c62736aa46a3c46b/de3e8541b26d2364

The PL/SQL team should have a close look at Delphi and C# features and think of implementing some of the most useful ones into PL/SQL.


Tom Kyte
September 30, 2005 - 9:12 am UTC

(I'm nuts the other way on ADT's in Oracle personally ;)



I did address this role thing in my book Expert one on one Oracle:

<quote>
What happens when we compile a Definer rights procedure

When we compile the procedure into the database, a couple of things happen with regards to privileges. We will list them here briefly and then go into more detail:

- All of the objects the procedure statically accesses (anything not accessed via dynamic SQL) are verified for existence. Names are resolved via the standard scoping rules as they apply to the definer of the procedure.
- All of the objects it accesses are verified to ensure that the required access mode will be available. That is, if an attempt to UPDATE T is made, Oracle will verify that the definer or PUBLIC has the ability to UPDATE T without the use of any ROLES.
- A dependency between this procedure and the referenced objects is set up and maintained. If this procedure SELECTs FROM T, then a dependency between T and this procedure is recorded.

If, for example, I have a procedure P that attempts to 'SELECT * FROM T', the compiler will first resolve T into a fully qualified reference. T is an ambiguous name in the database - there may be many T's to choose from. Oracle will follow its scoping rules to figure out what T really is; any synonyms will be resolved to their base objects, and the schema name will be associated with the object as well. It does this name resolution using the rules for the currently logged in user (the definer). That is, it will look for an object owned by this user called T and use that first (this includes private synonyms), then it will look at public synonyms and try to find T, and so on.

Once it determines exactly what T refers to, Oracle will determine if the mode in which we are attempting to access T is permitted. In this case, if we as the definer of the procedure either own the object T, or have been granted SELECT on T directly, or PUBLIC was granted SELECT, the procedure will compile. If we do not have access to an object called T by a direct grant, the procedure P will fail compilation. So, when the object (the stored procedure that references T) is compiled into the database, Oracle will do these checks - and if they "pass", Oracle will compile the procedure, store the binary code for the procedure, and set up a dependency between this procedure and this object T. This dependency is used to invalidate the procedure later, in the event something happens to T that necessitates the stored procedure's recompilation. For example, if at a later date we REVOKE SELECT ON T from the owner of this stored procedure, Oracle will mark all stored procedures this user has that are dependent on T, that refer to T, as INVALID. If we ALTER T ADD some column, Oracle can invalidate all of the dependent procedures. This will cause them to be recompiled automatically upon their next execution.

What is interesting to note is not only what is stored but what is not stored when we compile the object. Oracle does not store the exact privilege that was used to get access to T. We only know that procedure P is dependent on T. We do not know if the reason we were allowed to see T was due to:

- A grant given to the definer of the procedure (grant select on T to user)
- A grant to public on T (grant select on T to public)
- The user having the SELECT ANY TABLE privilege

The reason it is interesting to note what is not stored is that a REVOKE of any of the above will cause the procedure P to become invalid. If all three privileges were in place when the procedure was compiled, a revoke of ANY of them will invalidate the procedure - forcing it to be recompiled before it is executed again. Since all three privileges were in place when we created the procedure - it will compile successfully (until we revoke all three that is). This recompilation will happen automatically the next time that the procedure is executed.

Now that the procedure is compiled into the database and the dependencies are all setup, we can execute the procedure and be assured that it knows what T is and that T is accessible. If something happens to either the table T or to the set of base privileges available to the definer of this procedure that might affect our ability to access T -- our procedure will become invalid and will need to be recompiled.

This leads into why ROLES are not enabled during the compilation and execution of a stored procedure in Definer rights mode. Oracle is not storing exactly WHY you are allowed to access T - only that you are. Any change to your privileges that might cause access to T to go away will cause the procedure to become invalid and necessitate its recompilation. Without roles, that means only 'REVOKE SELECT ANY TABLE' or 'REVOKE SELECT ON T' from the Definer account or from PUBLIC. With roles, it greatly expands the number of times we would invalidate this procedure. If some role that was granted to some role that was granted to this user was modified, this procedure might go invalid, even if we did not rely on that privilege from that role. ROLES are designed to be very fluid when compared to GRANTS given to users as far as privilege sets go. For a minute, let's say that roles did give us privileges in stored objects. Now, almost any time anything was revoked from ANY ROLE we had, or from any role any role we have has (and so on - roles can be and are granted to roles), many of our objects would become invalid. Think about that: REVOKE some privilege from a ROLE and suddenly your entire database must be recompiled! Consider the impact of revoking some system privilege from a ROLE; it would be like doing that to PUBLIC now. Don't do it, just think about it (if you do revoke some powerful system privilege from PUBLIC, do it on a test database). If PUBLIC had been granted SELECT ANY TABLE, revoking that privilege would cause virtually every procedure in the database to go invalid. If procedures relied on roles, virtually every procedure in the database would constantly become invalid due to small changes in permissions. Since one of the major benefits of procedures is the 'compile once, run many' model, this would be disastrous for performance.

Also consider that roles may be

- Non-default: If I have a non-default role, and I enable it and compile a procedure that relies on its privileges, when I log out I no longer have that role. Should my procedure become invalid? Why? Why not? I could easily argue both sides.
- Password protected: if someone changes the password on a ROLE, should everything that might need that role be recompiled? I might be granted that role, but without knowing the new password I can no longer enable it. Should the privileges still be available? Why or why not? Again, arguing either side of this is easy. There are cases for and against each.

The bottom line with respect to roles in procedures with Definer rights is:

- You have thousands or tens of thousands of end users. They don't create stored objects (they should not). We need roles to manage these people. Roles are designed for these people (end users).
- You have far fewer application schemas (things that hold stored objects). For these we want to be explicit about exactly what privileges we need and why. In security terms this is the concept of 'least privilege': you want to specifically say what privilege you need and why you need it. If you inherit lots of privileges from roles, you cannot do that effectively. We can manage to be explicit since the number of development schemas is SMALL (but the number of end users is large).
- Having the direct relationship between the definer and the procedure makes for a much more efficient database. We recompile objects only when we need to, not when we might need to. It is a large efficiency enhancement.
</quote>
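The behavior Billy describes can be reproduced with a small sketch. Assume user A owns table t, and user B holds a role r that has SELECT on t:

```sql
-- In B's session, this anonymous block works (roles are enabled):
BEGIN
  FOR x IN (SELECT * FROM a.t) LOOP NULL; END LOOP;
END;
/

-- ...but this definer-rights procedure fails to compile with ORA-00942,
-- because roles are disabled when named PL/SQL is compiled:
CREATE OR REPLACE PROCEDURE p IS
BEGIN
  FOR x IN (SELECT * FROM a.t) LOOP NULL; END LOOP;
END;
/

-- The fix is the direct grant Billy mentions, issued by A:
-- GRANT SELECT ON t TO b;
```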



Raptor?

Iru, September 30, 2005 - 2:48 am UTC

Hello to everyone.
I don't know about you, but I would very much like to hear more about this 'Raptor'.

Please Tom?
Thanks a zillion for being there.

Raptor

Tarry Singh, September 30, 2005 - 3:03 am UTC

See here - this guy has been to OOW as well: http://www.oratransplant.nl/2005/09/20/openworld-project-raptor-%e2%80%93-watch-out-toad/ I don't know about cool; it's just another tool, and I guess they want the Raptor to devour the Frog (TOAD).

I'm curious though, I'd like to see it work.

Tom Kyte
September 30, 2005 - 9:20 am UTC

(you beat me to it!)

I don't like Oracle's DESC functionality all that much

Hans Wijte, September 30, 2005 - 3:16 am UTC

Hi Tom,

In my opinion the DESC command lacks several useful features when used to describe a table or a view.
When it is a table, I'd like to know its PK columns and which columns are FK columns; from DESC it is also not clear whether the described object is a table or a view, and you have to know in advance who owns the described object.

However, I found a very nifty procedure created by a Swiss programmer (Rene Nyffenegger) at this URL: http://www.adp-gmbh.ch/ which does everything described above.

Regards
Hans
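Part of what DESC omits can be pulled from the dictionary directly; a rough sketch for the primary-key columns of a table, run as the owner:

```sql
SELECT cc.column_name
FROM   user_constraints  c,
       user_cons_columns cc
WHERE  c.constraint_name = cc.constraint_name
AND    c.constraint_type = 'P'          -- 'R' would list the FK constraints
AND    c.table_name      = 'EMP'
ORDER  BY cc.position;
```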

Thanks Tarry

Iru, September 30, 2005 - 3:17 am UTC

Thanks a lot. From what I've seen there, it looks a lot like they want to smash the frog, yes. Maybe a lot of people out there who won't use Toad because of some limitations it has would use Raptor because it's from Oracle itself. I think that is what Larry may have been thinking of.

Saludos.

Oracle is improving all the time, but...

mobra, September 30, 2005 - 3:47 am UTC

... I wish that the following could soon be addressed:

1. The 32K limit on strings in PL/SQL.

2. The "table or view does not exist" error message should include the name of the invalid table/view. This would be very helpful in a huge query. Similar for non-existent columns, and value errors (string to number conversions, etc.).

Wishlist

Marc Blum, September 30, 2005 - 4:32 am UTC

- BEFORE COMMIT TRIGGER
- disable asynchronous commits
- robust timestamp arithmetic
- robust daylight saving support

Tom Kyte
September 30, 2005 - 9:34 am UTC

timestamp and daylight saving? what is missing in 9i and above?

I do not like COMPRESS=Y as default (EXP-ort)

Kim Anthonisen, September 30, 2005 - 5:05 am UTC

Hi Tom

I am a really big fan of Oracle, but I have never understood why COMPRESS=Y is the default. I have never used it, but I have felt the pain from it several times, when people (including myself) have forgotten to set it to N during export.

Tom Kyte
September 30, 2005 - 9:36 am UTC

because EXP is a tool as old as dirt, and changing a default is something they are very hesitant to do.

EXPDP got to do it over.

wishlist

A reader, September 30, 2005 - 5:29 am UTC

asynchronous rollback

Tom Kyte
September 30, 2005 - 9:38 am UTC

That can be achieved by this C code:


*((char **)0) = "go away";


:)

A prettier INSERT

Alvaro, September 30, 2005 - 8:57 am UTC

How about an alternative insert syntax? I know it is not ANSI SQL, but...

We can do:

UPDATE tTable a
   SET a.col1 = '1',
       a.col2 = 2,
       ...
       a.coln = 'n';

but we cannot do:

INSERT INTO tTable a
   SET a.col1 = '1',
       a.col2 = 2,
       ...
       a.coln = 'n';

We must read complex and large inserts like:

INSERT INTO tTable
  (col1, col2, ... , coln)
VALUES
  ('1', 2, ... , 'n');

This way, we cannot associate each destination column with its value (who is who) unless we add comments or something else.
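One way to keep each column next to its value today is a PL/SQL record-based insert (a 9iR2 feature; tTable and its columns are the poster's hypothetical names):

```sql
DECLARE
  l_row tTable%ROWTYPE;
BEGIN
  l_row.col1 := '1';   -- each assignment pairs the column with its value
  l_row.col2 := 2;
  l_row.coln := 'n';
  INSERT INTO tTable VALUES l_row;
END;
/
```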

Alex, September 30, 2005 - 9:01 am UTC

Well, this thread sure blew up pretty quickly. I didn't put too much thought into the things I don't care for; I just wanted to provoke some thought. However, you brought up some things I missed. (Statspack was another thing I was thinking of, but maybe that's something the DBA can enable too?)

Good call on the error messages. I agree there. I had not heard of the new and improved sqlplus, that's a great idea.

I am just getting a feel for the DBA/developer relationship; this is my first job, coming up on two years. It's tough, though, trying to explain the importance of PreparedStatements without sounding like a database Nazi or using technical terminology that is confusing to them.

I had a crazy thought about binds while on the subject of Oracle improvements: I wonder why it is not considered syntactically incorrect to write code in Oracle without binds where they are necessary. I've heard and seen enough to know by now that code without binds is just wrong. Is it nuts to suggest this? I know it's possible to over-bind, so maybe it's unrealistic to ask the compiler to know when they should be used and when not? Although if we as people know, I would think we could program the rules and circumstances. It's just an insane idea I was thinking of. I just think that code open to SQL injection is serious enough that it shouldn't even compile.

Looking forward to reading your new book.

Tom Kyte
September 30, 2005 - 10:56 am UTC

... (Statspack was another thing I was
thinking of, but maybe the DBA can enable too?)...

yes, they are the ones that would either

a) include it
b) or not


.... I wonder why it is not considered syntactically incorrect to write code in
Oracle without binds where they are necessary.....

there is never a case where they *have* to be used. It is that they had better be used if you want more than one user ;)


It is impossible for us to detect SQL that has been injected; to the database it is just more code to run.
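The distinction Alex is after might be sketched like this (a fragment from a hypothetical procedure with parameter p_ename and local l_sal):

```sql
-- Vulnerable: the input is glued into the SQL text itself, so to the
-- database any injected SQL is, as Tom says, just more code to run
EXECUTE IMMEDIATE
  'SELECT sal FROM emp WHERE ename = ''' || p_ename || ''''
  INTO l_sal;

-- Safe: the input arrives as a bind value and is never parsed as SQL
EXECUTE IMMEDIATE
  'SELECT sal FROM emp WHERE ename = :x'
  INTO l_sal USING p_ename;
```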

Von, September 30, 2005 - 9:16 am UTC

I wish Oracle had an easy way of grabbing the source code from the database for procs/functions/packages.

Tom Kyte
September 30, 2005 - 11:02 am UTC

------------- getcode.sql --------------------------
set feedback off
set heading off
set termout off
set linesize 1000
set trimspool on
set verify off
spool &1..sql
prompt set define off
select decode( type||'-'||to_char(line,'fm99999'),
'PACKAGE BODY-1', '/'||chr(10),
null) ||
decode(line,1,'create or replace ', '' ) ||
text text
from user_source
where name = upper('&&1')
order by type, line;
prompt /
prompt set define on
spool off
set feedback on
set heading on
set termout on
set linesize 100
-------------------------------------------------------





-------------------- getallcode.sql -------------------
set termout off
set heading off
set feedback off
set linesize 50
spool xtmpx.sql
select '@getcode ' || object_name
from user_objects
where object_type in ( 'PROCEDURE', 'FUNCTION', 'PACKAGE' )
/
spool off
spool getallcode_INSTALL
select '@' || object_name
from user_objects
where object_type in ( 'PROCEDURE', 'FUNCTION', 'PACKAGE' )
/
spool off
set heading on
set feedback on
set linesize 130
set termout on
@xtmpx.sql
------------------------------------------------------


dave, September 30, 2005 - 9:55 am UTC

[quote]
I wish oracle has an easy way of grabbing the source_code
from the database for procs/functions/packages
[/quote]


there is, it is called user/dba_source

David Aldridge http://oraclesponge.blogspot.com, September 30, 2005 - 10:03 am UTC

I hope that everyone who has posted an enhancement request here has also officially raised it on MetaLink ... ?

Tom Kyte
September 30, 2005 - 11:10 am UTC

(I've sent this link on to the Raptor guys - many of the ideas would fall in there)...

but yes, Metalink with a business case is the official way; this is just ideas being thrown out there. They'll get read and all, but no "promises" :)

Vipin, September 30, 2005 - 10:17 am UTC

I would like to have something like db2look (a DB2 utility which dumps the whole database definition really quickly) in Oracle. I know we have exp/imp, but the output is not user friendly - the user should not have to format the output before using it.

Backup/restore could be made easier.

Finally, I am waiting eagerly for an improved sqlplus - with the capability to display wide output without wrapping at the end.

After venturing briefly into the DB2 and SQL Server worlds, I can safely say Oracle is a much better product than the other two, in my opinion.



Tom Kyte
September 30, 2005 - 11:25 am UTC

(have you looked at rman? it really isn't that "complex")


dbms_metadata.get_ddl is fairly useful these days..
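For the db2look-style dump the poster wants, dbms_metadata already gets close; for example, to see the DDL for one table:

```sql
SET LONG 100000 PAGESIZE 0
SELECT dbms_metadata.get_ddl( 'TABLE', 'EMP', 'SCOTT' ) FROM dual;
```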

wishlist

A reader, September 30, 2005 - 10:38 am UTC

that the quality of asktom stops decreasing at this enormous speed

Cols_as_rows

Bhavesh, September 30, 2005 - 10:44 am UTC

hi tom,
I just need a cols_as_rows() function built into Oracle...
;-)
Bhavesh

Benthic Software Golden32

A reader, September 30, 2005 - 11:21 am UTC

<quote>
1. I couldn't agree more. Occasionally I need to look at some very wide
tables (135 columns) and SQL*Plus doesn't really cut it. I use SQL*Plus for
98% of my work, the other 2% is JDeveloper just to see these wide tables. I
wish I could just stay in my terminal window.
</quote>

Benthic Software has a tool called Golden32. It's not just another query tool. It has multiple-tab capability (one table per tab), and it keeps queried data even after you switch tabs...

Not only can you run queries, but you can also:
- Create database objects (tables, stored procedures/functions/packages, collections, and so on)
- Run PL/SQL stored procedures/functions and anonymous blocks
- Automatically export data to Excel spreadsheets, CSV, or XML (no tool configuration required)
- Do table data import/export (CSV format) from the tool
- Edit table data within the tool (within the data output grid itself)
- If you don't know SQL, the tool comes with a SQL builder that lets you point and click to build a SQL statement with the mouse
- From the SQL builder, query/search user, all_objects and dict objects to view tables, views, stored procedures, triggers and functions; or simply click on a table_name to pull up all corresponding columns
- Search particular table columns

I started using the tool back in 1999, and for 35 dollars it's the best DB tool for price/functionality I have ever used.

And there's a free 30-day trial...

Just google benthicsoftware and you will find the website!




PL/SQL improvement

Menon, September 30, 2005 - 11:22 am UTC

I wish the following two cases would not compile - too often I have been bitten by this.

scott@MYORA92> create or replace function f (x number )
2 return number
3 is
4 l_ret number;
5 begin
6 if x = 1then
7 l_ret := 2;
8 elsif x = 2then
9 l_ret := 3;
10 else
11 l_ret := -1;
12 end if;
13 end f;
14 /

Function created.

create or replace function f (x number )
return number
is
l_ret number;
begin
if x = 1then
l_ret := 2;
elsif x = 2then
l_ret := 3;
else
l_ret := -1;
end if;
end f;
/
show errors;

scott@MYORA92> create or replace package pkg
2 as
3 g_const varchar2(1) := '22';
4 end pkg;
5 /
Package created.

Similarly, one of my favorite peeves with PL/SQL - not having a good way of getting a stack trace - is taken care of in 10g.

a native stragg would be nice too

A reader, September 30, 2005 - 11:23 am UTC

I often need to display values from multiple rows as a single column value. Judging from the number of times people ask how to do this on askTom and the OTN SQL/PLSQL discussion forums, it seems many others need this too.

Your "stragg" function does the trick, but a native function would be better. I'm thinking of something like an aggregate version of the single-row function CONCAT, with a parameter for a separator string. The aggregate COLLECT function comes close, but it only returns a nested table. We need a COLLECT that returns a VARCHAR2. Barring that, a native way to cast a nested table into a VARCHAR2 string could be used with COLLECT to achieve acceptable results.


Alex, September 30, 2005 - 11:27 am UTC

"It is impossible for us to detect SQL that has been sql injected, to the database it is just more code to run. "

Just to clarify Tom, I didn't mean the injected code itself, I meant the code that leaves your app open to injection such as an execute immediate without a USING clause. But I realize that doesn't make sense because what if we wanted to code a literal? Then we don't want to bind there.

Tom Kyte
September 30, 2005 - 12:01 pm UTC

right, and I use sys_context to "bind" all of the time - because I don't know at compile time how many binds I'll actually have!
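Tom's sys_context technique might look like this fragment (ctx is a hypothetical application context, created with CREATE CONTEXT ctx USING some_pkg; set_context must be called from that designated package):

```sql
l_sql := 'SELECT * FROM emp WHERE 1 = 1';
IF p_ename IS NOT NULL THEN
  -- the value travels through the context, so it behaves as a bind:
  -- the SQL text stays constant no matter what the value is
  dbms_session.set_context( 'ctx', 'ename', p_ename );
  l_sql := l_sql || ' AND ename = SYS_CONTEXT( ''ctx'', ''ename'' )';
END IF;
OPEN l_cursor FOR l_sql;
```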

Re: PL/SQL improvement

Menon, September 30, 2005 - 11:28 am UTC

Sorry, I repeated the function f twice - just ignore the second one.
Interestingly, notice that the function compiled even though I wrote
"if x = 1then" - no space between the 1 and the then.

That was not the error I wanted the compiler to catch though.

Tom Kyte
September 30, 2005 - 12:02 pm UTC

that was the error I thought you wanted - but you mean "a function that forgets to return a value"

A reader, September 30, 2005 - 11:33 am UTC

Some utility or function to automatically calculate the number of histogram buckets, depending on the data distribution.

Anto

Tom Kyte
September 30, 2005 - 12:02 pm UTC

done, dbms_stats does that.
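What Tom is referring to, sketched: the SIZE AUTO setting lets dbms_stats pick bucket counts from the data distribution (and column usage) itself.

```sql
BEGIN
  dbms_stats.gather_table_stats(
    ownname    => user,
    tabname    => 'EMP',
    method_opt => 'FOR ALL COLUMNS SIZE AUTO' );
END;
/
```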

Anto, September 30, 2005 - 11:35 am UTC

Another one - to get the 10046 trace (as a single file) for parallel queries.

Anto

A reader, September 30, 2005 - 11:47 am UTC

Another one - sorry for not putting it all together in one place

Making the ORA-600 and ORA-7445 errors more specific and detailed - splitting them into subcategories with separate error numbers.

Anto

Tom Kyte
September 30, 2005 - 12:06 pm UTC

by definition these are "oh crud" errors - they are not anticipated. It is hard to make the unanticipated more specific and detailed; you don't really know what happened (that's the point of those two).

what I don't like

Andrew, September 30, 2005 - 12:14 pm UTC

PL/SQL and Oracle SQL should have an option to disable implicit type casting - make them strongly typed.

select into...

Mariano, September 30, 2005 - 12:17 pm UTC

I wish that in PL/SQL we could test whether a value exists in a table without SELECT ... INTO, just:
if (select x from v = true) (or whatever) then
...
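Until something like that exists, the usual workaround is a single-row aggregate over an EXISTS subquery - a sketch, with the table v and column x taken from the wish above:

```sql
declare
  l_found pls_integer;
begin
  -- count(*) over dual with EXISTS always returns exactly one row (0 or 1)
  select count(*)
    into l_found
    from dual
   where exists (select null from v where x = 'SOME_VALUE');

  if l_found = 1 then
    dbms_output.put_line('value exists');
  end if;
end;
/
```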



Restartable Batch Process

Bob B, September 30, 2005 - 1:11 pm UTC

create table batches(
batch_id number primary key,
db_procedure_name varchar2(100) not null,
driving_table varchar2(100) not null,
starting_scn number not null,
last_rowid rowid -- not sure if this is a legit datatype
)

create sequence batch_id_seq;

create or replace package batch_pkg as
procedure create_batch(
p_db_procedure_name in batches.db_procedure_name%type,
p_driving_table in batches.driving_table%type,
p_starting_scn in batches.starting_scn%type,
p_last_rowid in batches.last_rowid%type,
p_batch_id out batches.batch_id%type
);

procedure execute_batch(
p_batch_id in batches.batch_id%type
);

procedure commit_batch(
p_last_rowid in batches.last_rowid%type
);

end batch_pkg;

"create_batch"
Insert the batch into the batches table.

"execute_batch"
Create a query from the driving_table as of the starting_scn and add a "rowid > last_rowid" when the last_rowid is not null. Select out the rowid and all columns of the driving table, order by rowid, and set a private global variable "g_batch_id" (exists only in the package body). It will then open this query as a ref cursor and pass it as a parameter to the db_procedure_name defined for the batch.

"commit_batch"
Called by the procedure doing the batch process to commit. The procedure updates the batch with the last_rowid and then commits.

Each batch process would need a wrapper to call the create_batch and execute_batch routines and then a batch_processor that would take a ref cursor as parameter, process it and use "batch_pkg.commit_batch" whenever it wants to commit.

There are some issues that I can think of off the top of my head and I'm sure there are a ton more:

1) Row movement may affect this
2) Must have enough rollback to flashback to the given scn.
This could be a problem when running the batch process without any failures. It is worsened if it does have to be restarted (the longer it went without failing, the worse it is)
3) Performance might be an issue when ordering by rowid for larger result sets

Tom Kyte
September 30, 2005 - 2:21 pm UTC

1) yes it could
2) correct
3) why order?

I'd rather use rowid RANGES - use between low and high
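A sketch of the rowid-range idea (segment name assumed): dbms_rowid.rowid_create can turn the extent map into low/high rowid pairs, and each batch then processes `where rowid between :lo and :hi` - no ordering needed.

```sql
-- one lo/hi rowid pair per extent of the (assumed) driving table
select dbms_rowid.rowid_create(1, o.data_object_id,
                               e.relative_fno, e.block_id, 0)         lo_rid,
       dbms_rowid.rowid_create(1, o.data_object_id,
                               e.relative_fno,
                               e.block_id + e.blocks - 1, 32767)      hi_rid
  from dba_extents e, user_objects o
 where e.segment_name = 'BIG_TABLE'
   and e.owner        = user
   and o.object_name  = 'BIG_TABLE';
```

Each range can be processed (and committed) independently, which also makes the job restartable per range.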

Some more requests

Sergei Agalakov, September 30, 2005 - 1:16 pm UTC

I would like to lift a restriction of 30 characters for object names, I have situations when I can't create a good descriptive name just 30 characters long.

I do have problems with the inability to see individual procedures/functions of a PL/SQL package in errors and profiler information. When I see profiler information that 95% of the time was consumed in package PCK1, and this package has 100 procedures and functions - it doesn't help much. I do see a time distribution by line number in the package, but again I can't reliably combine them and tell whether the most time-consuming lines 1125, 1201 and 1245 belong to just one procedure in the package or to three different procedures.
It would be nice to have something like a DBA_SUBROUTINES view where you can see the first/last line of every procedure in a package, plus the names and parameters of those procedures. Then at last the profiler would be able to report that 80% of the time was consumed by the PCK1.PRC1 procedure, 12% in PCK1.PRC2 and 3% in PCK1.PRC3, instead of the current "95% of the time was spent somewhere in PCK1". The same applies to Oracle types.

Encryption of network traffic should not be an Advanced Security option on top of Enterprise Edition - it is a must-have, very basic security measure. A simple test: enable sqlnet tracing, send the command 'ALTER USER SCOTT IDENTIFIED BY TIGER', and then take a look at the trace file - there you have the new password for the SCOTT guy.

Oracle doesn't have a thin .NET data provider, and Oracle's .NET data provider doesn't support Oracle objects. All of that exists for JDBC - but does Oracle really believe that all applications for MS Windows have to be written in Java?


Jon Roberts, September 30, 2005 - 1:18 pm UTC

1. PL/SQL development tool with tight integration with version control like VSS (I sure hope you say Raptor).

2. OCA (Open Client Adapter) brought back. It was nice when building Oracle Reports that I could open a connection with SQLPlus and run the exact SQL that I am going to run in the report. With Reports now, you have to use JDBC which means more configuring on the client and the server than the old days with OCA. Or you could use HS to connect to a different database but that has its problems too because now I have an additional database to manage.

With Oracle Forms, you have to use HS to connect to a different database. With Forms 6i, I could use OCA to connect to a different database.

Discoverer too was nice with the OCA and now I have to create the EUL in an Oracle database only. I see the point but it was nice to be able to install the EUL in a SQL Server database when it was more appropriate.

3. Ref Cursor support in Discoverer.

4. Oracle Text support in Discoverer.

5. Better patching in Application Server. When a new release comes out, it often means installing a new version, whereas in the database world Oracle supplies a patch.
1.0.2.2.0 couldn't be patched to 1.0.2.2.2.
1.0.2.2.2 couldn't be patched to 9.0.2
9.0.2 couldn't be patched to 9.0.4.
9.0.4 couldn't be patched to 10.1.2.0.
10.1.2.0 can't be patched to 10.1.2.0.2.

I understand why 9iAS R1 couldn't be patched to 9iAS R2 as it was a major change but there should have been patches for things like 9.2.0.2 to 9.2.0.4.

6. A real Access killer version of HTML DB. The current version is great for building apps but a lot of people use Access to just load tables and analyze data. You must be a developer in HTML DB before you can create a table and load data to it which means more management on top of the database someone is already managing.

It would be nice if I could just grant a user rights in the database, let the user log into the web application with the database username and password, and then based on database security, be able to create and load tables into the database.


Tom Kyte
September 30, 2005 - 2:24 pm UTC

1) i wont....



Anto, September 30, 2005 - 1:42 pm UTC

An option in SQL*Plus to return only 'n' rows of output for every query issued thereafter in the session (instead of adding rownum to each query).

Anto, September 30, 2005 - 1:45 pm UTC

Some dictionary view(s) to see all the SQL executed by each user since instance startup, and the 'actual' execution plans those queries used.

My wish list...

Philip Moore, September 30, 2005 - 2:13 pm UTC

Hi Tom,

My wish list is as follows:

1) Star-transformation with bind variables - the reason I ask for this is that we have a datawarehouse with Siebel Analytics (which you guys now own) as our primary reporting tool. We have thousands of users issuing relatively similar SQL queries, often differing only by literals in the WHERE clause. Our shared pool is getting killed with unique SQL due to the fact that Analytics does not bind - as it shouldn't since this is a DSS (a "seconds-per-query" system). But I think the mentality should shift by Oracle putting a "blind_star_transformation" parameter or something of the like that allows the very powerful star-join technique to take place - with Binds. I realize that in some cases, it might not be optimal - but having such a feature would allow to prove the case for/against blindly star-transforming queries with binds. With such a feature - I could force Siebel Analytics to bind by using cursor-sharing, until you get ahold of them, Tom ;)

2) Cascading Updates to child tables - I know you've put code on asktom for doing this - but it is a tricky workaround to the mutating table error. Why not build it into the database? My argument is that my $200 Axxess database can do it, so why can't my $40,000 (per cpu) Oracle database do it? (please be gentle here, Tom) :)

3) PIVOT features built into SQL - without the use of analytic functions. You have a handy pivot routine (in Expert Oracle Signature Edition) - but why not make this a built-in feature of the RDBMS? Again - my $200 Axxess database can do it, so why can't my $40,000 (per cpu) Oracle database do it?

4) Why doesn't Oracle mow my grass for me, my $200 Axxess database does, why not Oracle? Just kidding...

5) Hmm, nevermind - 10G R2 has taken care of a lot of my wants :)

Thanks as always Tom, you da man...

Sincerely,

Philip

Tom Kyte
September 30, 2005 - 2:36 pm UTC



2) because it would just be *wrong* ;)



sqlldr log file in XML

Alex, September 30, 2005 - 2:28 pm UTC

1 how about XML format for sqlldr log output.

The current report format is difficult to parse.

2 Execution Plan - REMOTE path (via db link); this should be further expanded.

3 An easier to program and understand DBMS_JOB package



Tom Kyte
September 30, 2005 - 2:37 pm UTC



3) dbms_scheduler?

easier to program, but a lot more feature rich so harder to understand.

(i thought dbms_job was fairly "straight forward", the hardest thing was the interval and the scheduler has that fairly well knocked down I think)
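A sketch of a scheduler submission (job and procedure names are hypothetical) - the calendar syntax is what replaces the old dbms_job interval expressions:

```sql
begin
  dbms_scheduler.create_job(
    job_name        => 'NIGHTLY_PURGE',         -- hypothetical
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'PURGE_PKG.RUN',         -- hypothetical
    repeat_interval => 'freq=daily; byhour=2',  -- 2am every day
    enabled         => true);
end;
/
```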

David Aldridge, September 30, 2005 - 2:39 pm UTC

"4) Why doesn't Oracle mow my grass for me, my $200 Axxess database does, why not Oracle?"

Or dig trenches and move piles of wood ...

:D

Tom Kyte
October 01, 2005 - 8:08 pm UTC

lots of wood, I hurt very much right now, but a good hurt :) (except for that piece of wood that banged off my kneecap, that wasn't a good hurt, that was more like a blinding shot of pain....)

Thought I could move it all in a day - I was wrong, next weekend is yet another weekend.... Almost done.

Lev, September 30, 2005 - 2:51 pm UTC

Hi Tom,

It looks like you opened Pandora's Box.

1) I'd like to be able to kill a session without orakill (on Windows). Something like:
alter session ... kill now
and all resources (locks especially) are freed.

2) More detailed error messages, probably as an xml document - application will be able to analyze it and heal itself.

Thanks,
Lev


Tom Kyte
October 01, 2005 - 8:12 pm UTC

1) that is alter system kill session -- that releases the resources such as locks.

It won't release the SESSION entirely until the client says something to us (else we would be overwhelmed with "I'm getting 3113s all of the time, what's up with that?")
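For reference, a sketch of the command (sid and serial# come from v$session):

```sql
-- locks and other resources are released at kill time
alter system kill session '123,4567';

-- 10g: return control immediately rather than waiting
-- for the session to notice it has been killed
alter system kill session '123,4567' immediate;
```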



Love Pandora's box - I've sent this url to more than one person; they'll see the feedback and take it into consideration.


Let it never be said that I've said "this is perfect, we are done"....

Anto, September 30, 2005 - 2:51 pm UTC

To execute all operations in the least possible time and in the most efficient way (so we can do away with statistics collection, varying the values of init.ora parameters, etc. ;-)).

It has to automatically set the best values for the various init.ora parameters, for any system - maybe after the system has been UP for a decent amount of time

Tom Kyte
October 01, 2005 - 8:13 pm UTC

I believe we are getting there for many types of systems - there will likely always be exceptions.

For some systems - all you need to tell us is how much memory to use (pga/sga targets) and that is about it - today (10gr1/10gr2).

Anto, September 30, 2005 - 2:55 pm UTC

Never saw a thread so 'hot' other than the "Undocumented secrets for super-sizing your PGA" and "Predictive reorganization" for obvious reasons ;-)

Tom Kyte
October 01, 2005 - 8:14 pm UTC

Seems like controversy brings out everyone :)

but yes, this is the 2nd "hottest" thread right now....

sql loader control file generator

Jim, September 30, 2005 - 4:15 pm UTC

A nice little tool (sort of like how htmldb does it) that would generate the SQL*Loader control file. Allow a browse to the file, and show the beginning of the file. Make a guess at the parse, but allow changes etc. Then save the control file. It could be in Java (not a requirement) so it wouldn't have to be rewritten for each platform.

Oh, yes. The problem was chained rows and IO queue length. Thanks for the possibilities of things to look for. (We spoke briefly at the Oracle Book Store Wed. at OOW.)

Tom Kyte
October 01, 2005 - 8:28 pm UTC

I remember you -- we were guessing :) Glad to hear we guessed right....

bulk collect into

Art, September 30, 2005 - 4:35 pm UTC

I would like to be able to

select a, b, c
bulk collect into my_table_type index by d
from my_table

where d would set my_table_type's index
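Until such syntax exists, the usual workaround is to bulk collect into a dense collection and re-key it by hand - a sketch, with my_table and column d taken from the post:

```sql
declare
  type t_rows is table of my_table%rowtype index by pls_integer;
  type t_by_d is table of my_table%rowtype index by varchar2(100);
  l_rows t_rows;
  l_by_d t_by_d;
begin
  select * bulk collect into l_rows from my_table;

  -- second pass: index the rows by column d
  for i in 1 .. l_rows.count loop
    l_by_d(l_rows(i).d) := l_rows(i);
  end loop;
end;
/
```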


I wish ....

Enayet, September 30, 2005 - 4:43 pm UTC

I wish .. I am hesitant to spell it out... that Oracle had another nifty version for DW. I have often heard that Oracle underperforms in the DW arena.

I wish Oracle were as easy as the .NET + SQL Server duo; Oracle should be sweet as pie, so developers (.NET and Java dudes) can take to it the way VB developers took to SQL Server. There should be a native API (like ADO) to connect to Oracle.

I wish Larry would 'walmart-ize' Oracle as the de-facto database; one could buy Oracle from Best Buy :))



Tom Kyte
October 01, 2005 - 8:29 pm UTC

... Often I heard, Oracle underperform in DW arena. ...


don't hear things, experience them. If it were so, well, we'd be hurting big time.




I wish,,,,

sns, September 30, 2005 - 4:52 pm UTC

When I do deletes and inserts on my table every day (I couldn't partition it and truncate the partition), I'd prefer Oracle to have a facility to "exchange" the records. That way the delete from my base table is not required, and the insert from the incremental table can be avoided too.

Something like this:
exchange base_table with inc_table on base_table.column_name = inc_table.column_name.

I know partitions can be exchanged with other tables.

boom......




Tom Kyte
October 01, 2005 - 8:30 pm UTC

we call this "merge"???
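A sketch of the merge equivalent of that wish (any column beyond column_name is assumed for illustration):

```sql
merge into base_table b
using inc_table i
   on (b.column_name = i.column_name)
 when matched then
   update set b.val = i.val
 when not matched then
   insert (column_name, val)
   values (i.column_name, i.val);
```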

rough interface

Alberto Dell'Era, September 30, 2005 - 5:34 pm UTC

Instrumentation:
(a) we have bind variables peeking but "explain plan" and "autotrace" don't
peek;
(b) we have session cached cursors but we can't distinguish them in
v$open_cursors;
(c) we have dbms_xplan to wonderfully format plans but until 10gR2,
"explain plan" and "autotrace" didn't use the same rich-information format;
(e) we have parallel execution, but getting the plans of slave sql is an art
in itself;
(f) we have tkprof, but its plan dumps don't use dbms_xplan rich format;
(etcetera)

DBA interface:
(g) to rebuild a non-parallel index in parallel, we must "alter index rebuild
parallel" it, then set it back to noparallel - why can't we simply specify
the parallel degree to use for the rebuild? Same for "alter table move".
(h) why does an insert /*+ append */ into an *empty* table have to maintain indexes
(basically) row-by-row, instead of rebuilding them at the end, thus saving
tons of redo, cpu etc? I'm tired of setting them UNUSABLE ...
(etcetera)

That's the hardest part of working with the Oracle database for me.

I think that I spend 40% of my time coping with limitations similar to the ones above, when implementing/testing a design, diagnosing a problem, explaining Oracle to juniors ...

In other words - working with Oracle is 40% learning about the dashboard, and the rest about the engine, suspensions, driving techniques - in my car, the ratio is something around 1%.

Tom Kyte
October 01, 2005 - 8:46 pm UTC

(e) have you seen the slave sql in 10g? totally redone from 9i?

(h) it doesn't - it does a MERGE of the index - the append builds the index off to the side and merges with an empty index in this case.

Right on Tom!

Menon, September 30, 2005 - 6:46 pm UTC

"that was the error I thought you wanted - but you mean "a function that forgets
to return a value"

Yup - I left that detail out so the reader could appreciate how easy it is to miss.



A real whoami

Todd, September 30, 2005 - 6:54 pm UTC

A built-in function that would return the name of the current function/procedure, even within packages.

schema.funcproc_name
or
schema.package_name.funcproc_name

Hard to believe we're at 10GR2 and still don't have this yet...

Tom Kyte
October 01, 2005 - 8:51 pm UTC

hey, whoami isn't imaginary ;)

Obvious one....

David Rydzewski, September 30, 2005 - 7:01 pm UTC

new init.ora setting

FAST=TRUE

Dave

Tom Kyte
October 01, 2005 - 8:51 pm UTC

;)

wishlist

Yelena, September 30, 2005 - 7:53 pm UTC

Statement and row level database triggers on SELECT.

Tom Kyte
October 01, 2005 - 8:51 pm UTC

what would you do with the row level trigger?

Whishlist keeps growing.. keeps growing

Raj Kathamuthu, September 30, 2005 - 9:46 pm UTC

* To be able to rename a schema (just got the rename-tablespace wish granted in 10gR2..).

* Making HTML_DB able to work with other web servers.. (or have some kind of a PL/SQL container in the middle to make it true 3-tier environment)

* Wouldn't it be nice if OID is self contained like any other LDAP servers without requiring a DB instance? Some security admins have rules (I'd say blind rules).. like
<1> There shouldn't be any databases hosted in the application layer..
<2> Web layer shouldn't directly talk to DB layer..
<..>
Sometimes it's hard to convince them and get the waiver..
Just wondering though.. by storing hierarchical data in a relational structure (in the case of OID), what exactly are we gaining?



Tom Kyte
October 01, 2005 - 8:59 pm UTC

OID - it'll never happen, you know what you need for LDAP.....


umm, it is called a database, you can call it whatever you like but it is A DATABASE...

Hierarchical data goes well into "relational databases", it is where most of it is anyway.

You gain security, scalability, concurrency, backup, RECOVERY, replication, etc etc etc - without actually having to reinvent it all over again.



Enhancement Requests

A reader, September 30, 2005 - 11:55 pm UTC

How about an area in OTN where enhancement requests for all products can be entered, queried, and voted on (one vote per OTN member)? This way we don't have to fill up poor Tom's forum with our wish-list rants. ;-)

Tom Kyte
October 01, 2005 - 9:00 pm UTC

metalink - it does that. Voting, no - the only voting is done by customer advisory boards.

What I would like?

Joris, October 01, 2005 - 3:06 am UTC

1) Macros in plsql. Like #define in C.
2) Bash shell style command history and tab-autocomplete in sqlplus.
3) Ability to flush individual statements from the sqlarea. Sometimes a statement with a bad plan sits there because the first time it was executed the values in the binds were unusual.

Tom Kyte
October 01, 2005 - 9:03 pm UTC

1) there is conditional compilation in 10gr2 - like #ifdef, but no macros.....

3) agreed.
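A sketch of the 10gR2 conditional-compilation feature (the flag name is hypothetical):

```sql
create or replace procedure p as
begin
  $if $$debug_on $then
    dbms_output.put_line('debug build');
  $else
    null;
  $end
end;
/

-- flip the flag and recompile, #ifdef style:
alter procedure p compile plsql_ccflags = 'debug_on:true';
```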

Improved plsql namespaces

Tom, October 01, 2005 - 6:05 am UTC

Couple of wishes which come from doing a lot of plsql work in the database.

Better Namespaces
-----------------
I would like to be able to actually have package hierarchies so I can create packages

testapp.crm.model.accountapi
testapp.crm.model.contactapi

Obviously this wouldn't be a small change :-) but pretty much all languages have the ability to subdivide code into namespaces like this. Unfortunately the best we can do in Oracle is separate schemas with all the related grant management etc.

Check Constraint Error Messages
-------------------------------
Easy one to implement I would think. I want to type the following

alter table widgets add constraint cost_chk check (cost < 100) error "Widget cost must be less than $100";

so I can catch check constraint violations and return a friendly message to the user.
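Until such syntax exists, a common workaround is to map the constraint name out of ORA-02290 in the error handler - a sketch based on the widgets example above:

```sql
begin
  insert into widgets (cost) values (500);
exception
  when others then
    -- ORA-02290 is "check constraint violated"; the message names the constraint
    if sqlcode = -2290 and instr(sqlerrm, 'COST_CHK') > 0 then
      raise_application_error(-20001,
        'Widget cost must be less than $100');
    else
      raise;
    end if;
end;
/
```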

Where do you go in Metalink to log enhancement requests?


Tom Kyte
October 01, 2005 - 9:04 pm UTC

it is done via the regular ITAR process..


I like that last idea myself.

hierarchical queries to xml

arnaud, October 01, 2005 - 8:07 am UTC

turn the result of a "connect by" query into xml

A reader, October 01, 2005 - 12:22 pm UTC

sometimes i don't like the support provided by metalink :o)

And now to lose some friends.

Niall Litchfield, October 01, 2005 - 12:42 pm UTC

OEM. It's just horrid. I'm yet to be convinced 10g is better than 9i rather than just a backwards step. OK, I might not lose too many friends for that one, but

BINDS

I really don't see why

select cols from tab where pk = 10;

should be radically worse than

select cols from tab where pk = :b1;

I know why it is, I just wish it wasn't.


Tom Kyte
October 01, 2005 - 9:10 pm UTC

Niall,

you won't lose friends by expressing lucid opinions based on your experiences...


But on the bind thing - you and I see opposites I think. Maybe - when I come to the UKOUG - you can sit in on my "all about binds" session and I'll describe "why"


the first time around, "where pk = 10" is not any different from "where pk = :b1"

Even on the 1000th time, "where pk = 10" is not any different from "where pk = :b1"

However, you don't tend to submit "where pk = 10" 1000 times, you tend to do it once and it must be compiled (hard parsed)


so, it is the difference between:

where pk = 10
where pk = 11
....
where pk = 10000
where pk = 10001
...
where pk = .............


and

where pk = :b1



Think about a developer writing a SUBROUTINE for each unique set of inputs and calling them:

subroutine_1
subroutine_2
subroutine_3
.....


that is the logical equivalent.
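In code, that analogy looks like this - a sketch with table and variable names assumed; only the second form shares one cursor across all values:

```sql
-- one hard parse per distinct value - a new "subroutine" each time:
execute immediate
  'select ename from emp where empno = ' || l_empno into l_ename;

-- one shared statement, soft parsed after the first execution:
execute immediate
  'select ename from emp where empno = :b1' into l_ename using l_empno;
```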

rman improvements

Chris Crisante, October 01, 2005 - 1:42 pm UTC

I have liked the improvements in 9i and 10g. Some of the things that were available in mainframe DBMSs I have supported have slowly made their way into Oracle.

One useful feature still missing is Change Accumulation. This would be an RMAN function to go through the archivelogs for a time period and accumulate all the physical changes for each block. A subsequent restore could then use an incremental level 0 or full backup, and recovery would apply the 'change accumulation' file (and possibly whatever logs accumulated after the change accum utility had run). This would reduce the need for incremental level '>0' backups. It could also be done using only archive logs, with no need to use the DB datafiles, and possibly even run on a different machine. And if you have a DB with multiple updates to the same blocks, it could speed up recovery (if a block changed every day, only one block image would need to be applied during recovery).

I also don't like much about the installation, cost and licensing of the Oracle DB. I especially dislike the way we must license individual 'options' such as partitioning, RAC, Parallel Query, OLAP, etc. I can understand some things such as Advanced Security or OLAP being add-ons. But Partitioning?? Parallel Query?? These should be a basic part of any DBMS these days. There is already the additional cost for the 'Enterprise' edition, which should cover the features such as Partitioning that most "enterprises" would need.

When I watch a sporting event these days, there is usually an Oracle commercial talking about the "low-cost" Grid - I see about 16 processors on the screen and do the math in my head. Processor licenses, partitioning licenses, RAC licenses, Parallel Query licenses, license the hot standby site at the exact same cost as production even though there are no users, etc, etc, etc.








Tom Kyte
October 01, 2005 - 9:15 pm UTC

well - a physical standby does change accumulation and since you can backup from standby and restore to production - you sort of have that concept today.


PQ is a feature of Enterprise edition, not an option (hasn't been for a while). The license options are based on market - what do the others have or not have as well drives this.

Queue DDL statements.

Sai, October 01, 2005 - 3:36 pm UTC

Hi,

My wishlist is to have the ability to run some DDL commands like "alter table add column...." on a busy table and let it queue along with other DML statements hitting this table. I know dbms_redifinition does it, but how about recreating a 300G table every now and then with each application release and take the performance hit whie the rebuild is going on.

This is along the lines of what online index rebuild does: it queues with DML statements to get a lock on the table and starts recording changes happening in a journal table.

We have already filed enhancement request for this.

Tom, do you see any downside of having this feature?

Tom Kyte
October 01, 2005 - 9:17 pm UTC

well, it could lead to a logjam.

In order to queue in there, you prevent further updates from starting (they block on the ALTER), but if the alter is blocked long enough - bamm, the system sort of stops dead.

that is one potential downside - and since an add column with a default could take a really, really long time.....

that would be another.

Standby <> Change Accum

Chris, October 01, 2005 - 9:42 pm UTC

You have a point in theory i guess, but in practise:

1. Standby requires another physical copy of the database, whereas a 'change accum' would not require this disk space. Just sufficient sort space to go through the logs. Sort of like an incremental backup requires less tapes than a full backup (in most cases).

2. Running a standby DB on a remote system requires a full production license for that system. Double the cost. A 'change accum' function could be run on the same system as the primary DB (or maybe on an offsite system configured in a less costly way to offload the work).

Other DBMSs I have used had both of these functions (standby and change accum) and we could use whichever was a better fit for us. It would be nice if Oracle did the same.

As for the licensing rant, i was just venting. Of course Oracle has the right to charge whatever the market will bear. But it is an inconvenience of having to go ask for more money from mgmt or users when we figure out, say, that partitioning would be helpful.

But I do wonder how well Grid computing is catching on with the average company when each processor in the Grid requires a processor license?

Tom Kyte
October 02, 2005 - 10:35 am UTC

Agreed, I was just pointing out that while not identical, you can achieve a very similar goal.

It is not double the cost, standby machines are typically a fraction of the size.


Another "accum" approach is the improved incrementals in 10g -- where we catch a backup up on disk.


Each processor in a NON-GRID environment requires a license too. At the end of the day, you either have 16 cpus in a single big machine or 16 cpus in 4 small very cheap machines. You still have "16"

Reduce/Eliminate parsing of REF CURSORS

A reader, October 01, 2005 - 10:19 pm UTC

It would be nice to have an "option" that would prevent parsing of the same SQL statement in a reference cursor. While I use them in specific places for specific problems with bind variables and session_cache_cursors, executions from multiple sessions can hit the library cache latch pretty hard.

Having the option to specify that a reference cursor is weak, strong, or static would be really nice!

just a couple of things

Mark Wooldridge, October 01, 2005 - 11:42 pm UTC

Well there are probably others but this is all I can think of now.

1. insert into tab values my_rec overriding col1 with myseq.nextval;

1.b assign user object = user rec, mapping the columns where the names are the same. ability to also assign rec = obj.

2. parametric search indexing similar to Verity's where you can refine a result search using predefined parametric values. Implemented this with an IOT which is maintained using a PL/SQL api for insert/update on the table. Just would like to see this more integrated with Oracle text.

Robert, October 02, 2005 - 12:23 am UTC

>>1. PL/SQL development tool with tight integration with version control like VSS <<

the new and free JDeveloper ?

>>
create or replace function f (x number )
return number
is
l_ret number;
begin
if x = 1then
l_ret := 2;
elsif x = 2then
l_ret := 3;
else
l_ret := -1;
end if;
end f;
<<

Wow! I haven't written standalone functions in a while -
didn't even know this was compilable.

to Robert

Muhammad Riaz Shahid, October 02, 2005 - 1:07 am UTC

Why do you think this code is not compilable ?

Tom Kyte
October 02, 2005 - 10:42 am UTC

the lack of spaces in the thens looks "not valid"


 
ops$tkyte@ORA9IR2> select*from dual where 1>0and 5<6;
 
D
-
X


sort of like that. 

Alberto Dell'Era, October 02, 2005 - 6:56 am UTC

On:
http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:48704116042682#48858384840202
The point I was trying to make is that a lot of little "issues" or "inconveniences" scattered here and there, when added together, make a big issue and a lot of wasted time for everyone.

The "devil is in the details" is another way to put it.

I think that when re-engineering the engine, brakes and suspensions, the Oracle Engineers should walk the extra mile and apply their top-notch Engineering skills to the dashboard and stick-shift controls also (perhaps with the help of an ergonomic specialist).

E.g. implementing the "bind variable peeking" feature took a lot of time for sure - why not walk the extra mile and modify "explain plan" too, which would take, I bet, just 1% more time?

That's where I waste a lot of time - on details.

And when mentoring juniors, I usually start a mentoring session with a smiling-face junior ("great, today I can learn how to drive the Ferrari of the databases!!"), and very often see his/her face switch to disillusioned mode ("gosh, I expected the Ferrari cockpit to be a tad better"). Speaking about *perceived* quality.

Tom Kyte
October 02, 2005 - 10:56 am UTC

but the binds don't exist for an explain plan?  applications do not "bind" to explain plans ;)  

ops$tkyte@ORA9IR2> explain plan for select * from dual where :x= :y;
Explained.

no binds were defined, declared, anything - the concept has never existed.



As for your last paragraph, not that it matters - but have you seen the cockpits of the high end cars? 

Alberto Dell'Era, October 02, 2005 - 10:35 am UTC

> (h) it doesn't - it does a MERGE of the index - the append
> builds the index off to the side and merges with an empty
> index in this case.

Ok, but why merge with an empty index (and thus generate undo) instead of replacing the empty index with the "off to the side" one? Or just rebuild the index after the rows have been inserted above the table HWM (thus "kernelizing" the classical manual SET UNUSABLE-INSERT APPEND-REBUILD sequence - avoiding the UNUSABLE, which is not necessary).

Inserting into an empty table is not infrequent - staging tables, local copies of remote tables, indexed GTTs, etc

Furthering the override above

Bob B, October 02, 2005 - 11:18 am UTC

Or just a way to attach a sequence to a table.column as a default. If the column isn't provided, it uses the sequence number.

Tom Kyte
October 02, 2005 - 12:27 pm UTC

well, I know it is not a sequence, but if all you want is a "surrogate key":

ops$tkyte@ORA10G> create table t ( x raw(16) default sys_guid(),
  2                   y varchar2(32) default sys_guid(),
  3                                   z varchar2(5) );
 
Table created.
 
ops$tkyte@ORA10G>
ops$tkyte@ORA10G>
ops$tkyte@ORA10G> insert into t (z) values ( 'hello' );
 
1 row created.
 
ops$tkyte@ORA10G> select * from t;
 
X                                Y                                Z
-------------------------------- -------------------------------- -----
0227D36794A61D44E040007F010061CC 0227D36794A71D44E040007F010061CC hello
 


(using RAW would be best - but some people are 'afraid' of raw...) 

Alberto Dell'Era, October 02, 2005 - 12:25 pm UTC

>ops$tkyte@ORA9IR2> explain plan for select * from dual where :x= :y;
>Explained.

I know, it assumes that :x and :y are varchar2 and doesn't peek at the bind variables values.

Wouldn't you be happy if "explain plan" peeked at the values (and TYPES) of the :x and :y bind variables (including the LENGTH of varchar2 variables that can influence the plan) and so generated and explained the *real* plan ?

Or - how many people actually know about that "explain plan" limitation ? Even if they know, it's Yet Another Thing to remember - add all them together and it makes for a pretty cluttered dashboard. Fix these little "inconveniences" and you'll get a much more pleasant driving experience, and enjoy the top-notch engine, instead of fighting with the cockpit controls.

Tom Kyte
October 02, 2005 - 5:35 pm UTC

(there is nothing to peek at is my point...)

I didn't define any binds, you don't define binds, you don't even have the syntax to bind to a explain plan...

I hear you - but it would take almost a new command and a change in code in all existing applications.

sys_guid()

Bob B, October 02, 2005 - 1:05 pm UTC

I didn't know sys_guid() could be used. The only problem I see is that the raw is always going to take 16 bytes, whereas a number will take between 1 and 8(?) bytes. This isn't much for a single table, but using it across an application and with RI it could add up quickly.

Another advantage of being able to bind a sequence to a table.column would be the ability to easily tell what sequences are used for which tables. Plus, the database engine could use this information to optimize insert into select, merge, and other bulk operations.

Tom Kyte
October 02, 2005 - 5:36 pm UTC

1 and 22 bytes....
numbers are 1 to 22 bytes in length.


I don't see how this would optimize anything?

I wish I could. . .

Andrew, October 02, 2005 - 3:47 pm UTC

a) Set an initialization parameter that would return something like "ORA-01002: fetch out of sequence" for any transaction that attempts to fetch across commit -- not just on a FOR UPDATE cursor. We have a standard that we do not do this, but it would sure be nice to be able to have the database enforce it.

b) See the values of bind variables for other sessions. Something like v$session_binds
sid,
serial#,
sql_address,
sql_hash_value,
name,
value

There are probably others, but those are the only ones that come to mind right now.

New undocumented ora.init parameter

Andrew, October 02, 2005 - 4:52 pm UTC

First step:
_db_optimization = SO_SO
Second step:
_db_optimization = OK
Third step:
_db_optimization = PRETTY_GOOD
Fourth step:
_db_optimization = VERY_VERY_FAST
Fifth step:
_db_optimization = TRULY_AMAZING

After each iteration run some benchmarks and give them to the management. Now that's something that every DBA would appreciate!

about the exchange row facility,

sns, October 02, 2005 - 6:10 pm UTC

Oh my god. I have been using this facility for the last 2 years and I forgot about this. The reason I forgot is that in my current workplace, there are lots of ETL transactions that do DELETE and INSERT. Nobody has tried MERGE as an alternative. Maybe they are scared of the performance, since some tables have more than 100 columns. Can we use a "parallel" hint while doing MERGE?

I'd like to work in a place where management gives full liberty to make use of all the features the database has. But bureaucracy sometimes prohibits that.


Thanks,

Tom Kyte
October 03, 2005 - 7:17 am UTC

you can do a parallel merge, direct path the insert part....


100 columns? so?

SQL Loader dinosaur should evolve

Andrew, October 02, 2005 - 10:50 pm UTC

1. SQL Loader control and shell run files should really be generated through a GUI, so that people like myself don't have to look at the docs literally every time (or look at an example of an older SQL Loader process). OWB now has some functionality there, but why not copy that functionality somewhere else and expand it further...

2. One thing that I really loathe about SQL Loader is that you have to call it from the command line and cannot do it straight from PL/SQL. Does anyone really feel like creating shell scripts and, even worse, extracting error info from log files?! It shouldn't be that hard to create PL/SQL and Java APIs for it, with nice error codes. External tables are usually a good alternative, but some functionality is missing (e.g. loading into multiple tables - very often found in many legacy and even non-legacy environments).

Tom Kyte
October 03, 2005 - 7:20 am UTC



2) multi-table inserts? you can load into as many tables as you like, even easier than sqlldr. You can use complex when clauses with OR's, AND's, whatever sql function you like
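
A sketch of the multi-table insert Tom refers to, with an external table standing in for the sqlldr input (all table and column names here are invented for illustration):

```sql
-- route rows from one source into several targets with WHEN clauses;
-- EXT_EMP_LOAD is assumed to be an external table over the flat file
insert first
  when deptno = 10 then
    into dept10_emps ( empno, ename ) values ( empno, ename )
  when deptno = 20 then
    into dept20_emps ( empno, ename ) values ( empno, ename )
  else
    into other_emps ( empno, ename ) values ( empno, ename )
select empno, ename, deptno
  from ext_emp_load;
```

INSERT ALL works the same way when a row may match more than one WHEN clause.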

Re: What I would like?

Egor Starostin, October 02, 2005 - 11:39 pm UTC

> 2) Bash shell style command history and tab-autocomplete in sqlplus.
You know, it's already here. Just install rlwrap to enable these features. :)
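
For example, assuming rlwrap is installed on the machine:

```shell
# wrap sqlplus with readline: up-arrow history, line editing, completion
rlwrap sqlplus scott/tiger
```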


Standby is a fraction of cost?

chris crisante, October 03, 2005 - 1:23 am UTC

Sorry about this - I know this thread isn't about licensing. But.. in reply to your points

"It is not double the cost, standby machines are typically a fraction of the size."

- I meant double the cost of Oracle licenses, not double the cost of the machine. According to a very official looking document I have - I could fax it to you - if we implement a Hot Standby which we intend to use for DRP (The usual reason for a Standby at our shop), we must license that standby machine at the same level as the production machine. We can't buy a 'small' Oracle license (eg 10 named user, etc) for our DRP machine even though nobody will (hopefully) ever sign on to that machine. And every time we add capacity to a production machine, we are obligated to add the same license for DRP. We have argued this point unsuccessfully for years.

"Each processor in a NON-GRID environment requires a license too. At the end of the day, you either have 16 cpus in a single big machine or 16 cpus in 4 small very cheap machines. You still have "16" "

Agreed, 4*4=16*1. But in my experience there is usually the choice of upgrading CPUs vs adding more CPUs. Sometimes we feel 'more CPUs' might perform better than 'faster CPUs' for a particular app, but we can rarely use that approach because the licenses cost more than the hardware.

The Grid approach makes a lot of sense. It would be nice to manage CPUs as a 'low-cost' commodity much like disk. But to date, the licensing costs have made that approach hard to sell. It's not just Oracle, most software vendors base their costs on the number of CPUs in the machine. Hopefully that will change someday soon.



Tom Kyte
October 03, 2005 - 7:31 am UTC

that is not my understanding on how it works, but I am not a license person.

The database license guide
http://www.oracle.com/corporate/pricing/specialtopics.html

states that the same *metric* must be used (named user vs processor) but does not say you have to license 16 cpus on a standby that only has 4. You cannot license named user on production and cpu on standby, you have to do cpu/cpu or named user/named user.

Cost: Oracle standby vs DB2

Enayet, October 03, 2005 - 6:12 am UTC

Just to add my recent experience about standby licensing:

Oracle does charge for standby licensing, which is disheartening. My argument is: the standby is an inactive database, so why should we pay for it?

I know IBM/DB2 does not charge any extra for their standby. By the same token, Oracle argues that their standby is a HOT standby and can be turned into a read-only DB anytime for reporting purposes.

But in my personal opinion, this is just overkill. A standby should be free of charge; paying for such an idle database does not make a valid business case!

I have worked with Veritas, EMC, IBM and Microsoft. They all agreed not to charge any money for the software until it is activated for production usage (even for hardware, IBM and EMC agreed to opt out for a premium payment); all except Oracle. That beats me!


dave, October 03, 2005 - 6:19 am UTC

Taking Oracle's point of view here: a true standby database is a live database - it is accepting redo and processing that data. With a logical standby you can actively use that database for queries and reporting (and you can with a physical standby as well, if you open it read only).

To say it is idle is really false, because it is doing a lot of work on your behalf to keep your data in sync.

If you don't want that, consider a cold failover - one where, if the first machine fails, you can remount the disks onto the second server and then start up the instance. If that database is active for more than 10 days per year then you need to buy a license; otherwise it is 'free'.

my wishlist

Markus, October 03, 2005 - 6:58 am UTC

1. a "switch user" command like the "su" command on unix. this would be a tremendous help as we - the dba's - would not have to know all user passwords any longer. (tom, i know your script where you temporarily change a user's password and set it back afterwards. but this is not feasible in every case).

2. comment opportunities for nearly all database objects, like the already existing table and column comments. it is very common that objects - starting with users, roles and associated privileges - are created/granted by someone, maybe also documented somewhere "off-line", but after a while no one really knows what these objects were initially created for. providing the ability to comment on objects would make the database schema more "self-documenting", eg. looking at DBA_USERS could show not only what is there but also why it is.

Tom Kyte
October 03, 2005 - 7:40 am UTC

1) but why is a DBA having to even log in as another user? what are the use cases? (i can already create a table, index, procedure, etc etc etc "for" you, why do I need to "be" you?)



ad "my wishlist", "why a dba have to log in..."

Markus, October 03, 2005 - 8:12 am UTC

hi tom,

as to why a DBA has to log in as another user:

eg. to maintain private database links (staff and organisation change, and sometimes new staff is not aware how to do this, or it is not entirely clear at the moment who will be responsible for this in future,...).

eg. to work and try something in a specific user's security domain directly. sometimes users call and simply claim "it is" where i, as dba, want to validate that "it is" for real. this holds especially true when using security contexts and logon triggers,...

regards,
markus



Tom Kyte
October 03, 2005 - 11:11 am UTC

for private dblinks, I've not heard/experienced a need for a DBA to do that personally - that seems fishy.....

AJB, October 03, 2005 - 8:45 am UTC

What about enforcing the use of APIs if required.

If you have table(s) that should only be maintained via api calls then prohibit other update methods (single dml statements).

This should probably be configurable as it wouldn't always be needed or desirable but would be a good option to protect data from rogue updates.

Tom Kyte
October 03, 2005 - 11:15 am UTC

GRANT and REVOKE are awesome for that?

I mean, with n-tier proxy authentication, secure application contexts and fine grained access control - you have that (since 8i). You can make it so that only your application can muck with the data

Column dependency.

MBPP, October 03, 2005 - 8:50 am UTC

One nice feature to implement would be to have a dictionary view to show where a particular column is referenced, not only table.

Alex, October 03, 2005 - 9:08 am UTC

Tom,

Since I submitted this thread I've gotten about 50 emails from you saying "a reply has been posted here....etc". It seems I am getting one for every time someone posts a followup here. I was thinking you might like to check into that. Thanks.



Tom Kyte
October 03, 2005 - 11:17 am UTC

You asked to be notified when the question is updated when you submitted it?

Would you like for me to update that and turn it off for you?

To: AJB from England

Andrew, October 03, 2005 - 10:06 am UTC

That facility is already available, thus;
a) create a schema to own all objects. The password is known only to the DBAs.
b) DO NOT grant any privileges (other than maybe SELECT) on these tables to anyone.
c) create the API procs ( in the same schema as the tables ) as necessary.
d) Grant execute on these API procs to the appropriate users/roles.

Now no one can manipulate the data at all, except through one of the procs you have created.
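
The steps above can be sketched roughly as follows (all schema, table and procedure names are invented for illustration):

```sql
-- as a DBA: a dedicated owner schema holds both the tables and the API
create user app_owner identified by some_password;
grant create session, create table, create procedure to app_owner;

-- as APP_OWNER: the table gets no direct grants at all
create table emp_data ( empno number primary key, sal number );

-- the API procedure runs with definer's rights, so callers touch
-- EMP_DATA only through it
create or replace procedure give_raise( p_empno number, p_amount number )
as
begin
  update emp_data set sal = sal + p_amount where empno = p_empno;
end;
/
grant execute on give_raise to app_user;  -- but no grants on EMP_DATA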


AJB, October 03, 2005 - 11:04 am UTC

Thanks Andrew - I know you can limit access in this way, but I'm looking for an extra level of protection to stop developers (the people maintaining the apis) from bypassing existing APIs.
I know you would expect these people to be aware of existing APIs and how to use them, but in complex applications this isn't always the case especially if the APIs aren't as fully documented as they should be.


Tom Kyte
October 03, 2005 - 11:30 am UTC

n-tier proxy authentication, you can set it up so that only the application can make changes.

Your Top 5 or Top 10 suggestions

Anto, October 03, 2005 - 11:28 am UTC

Hi Tom,

Before you decide to close down this thread, please pick your Top 5 or Top 10 suggestions from the list given by various users of oracle



Tom Kyte
October 03, 2005 - 8:22 pm UTC

There are precisely 2 threads out of 27,231 that I've regretfully ever "closed down". 2... There is a commonality between those two threads not shared by this one. It won't be shut down.

I was asked to get the top ten from here and send them along (this thread has generated interest), I'll share that all with you after I do it.

Mark, October 03, 2005 - 11:29 am UTC

"
Followup:
1) but why is a DBA having to even log in as another user? what are the use
cases (i can aleady create a table, index, procedure, etc etc etc "for" you, why do I need to "be" you?)"

What about jobs? As a DBA, can you schedule a job for a user? Or run a job as a user? I have not used 10g, so maybe, but in 9i this was a problem.


I would like to see the power and flexibility like the Concurrent manager in Oracle Apps. Have the ability to group jobs, prioritize them, and serialize them. I also would like to see Enterprise Manager have a job management screen.

Thanks..


Virtual column?

David Rydzewski, October 03, 2005 - 11:34 am UTC

Something to the effect of

alter table add new_column varchar2(50) new segments only;

In other words, altering a table to add a new column, but only having it affect new partitions added. SQL against older partitions would yield a null for this column. Once older partitions are aged out, the column is no longer considered "virtual."

Could be useful in supporting change in a data warehousing environment that can't afford down time.

Dave

A reader, October 03, 2005 - 11:34 am UTC

Oh is that how that works? Sorry I thought that was to notify me when YOU responded to it. This is the first time I asked a question and it's been published. If it's not too much trouble you can kill that, this turned out to be a hot one.

Tom Kyte
October 03, 2005 - 8:29 pm UTC

done...

Above post is Alex

A reader, October 03, 2005 - 11:36 am UTC


column specific errors

Igor, October 03, 2005 - 11:39 am UTC

Hi,

It would be nice when doing large INSERTs (I mean many columns) if the column-specific error messages were more informative. So if you get an error when attempting to insert NULL into a not null column, or to insert alpha characters where digits are expected, it would be clear which column is problematic.
Also, it would be nice if view column and value mappings were made much clearer.


Tom Kyte
October 03, 2005 - 8:31 pm UTC

ops$tkyte@ORA10GR2> insert into t values ( null );
insert into t values ( null )
                       *
ERROR at line 1:
ORA-01400: cannot insert NULL into ("OPS$TKYTE"."T"."X")



for nulls that's been true for a couple of versions. 

Add a storage option for LOBS

Jim, October 03, 2005 - 1:55 pm UTC

Add a storage option for LOBs to store them in gzip, zip or a similar compressed format. Maybe this would be limited to binary (BLOBs). The BLOB would be uncompressed on return. The compressed format would not allow function manipulation (instr, etc.) or where clause usage. It would just compress the data.

Sqlplus Enhancement

Mathew Butler, October 03, 2005 - 2:11 pm UTC

How about a facility in sqlplus to enhance the SQL buffer and enable the aliasing of ad-hoc SQL in sqlplus. It seems I type similar things within a session many times. It would be great to be able to alias the last 10 queries that you executed and then be able to access these in the buffer.

Mat.

parameter request

Tom Jordan, October 03, 2005 - 4:16 pm UTC

I would request an "exclude" parameter for imp/exp. My fingers are cramped from listing out all (1000 - 1) tables for an export.

Sure I can script it with all_tables etc., I'm just saying...

-- Tom

Tom Kyte
October 03, 2005 - 8:59 pm UTC

data pump.......

A reader, October 03, 2005 - 5:29 pm UTC

I guess it is there in expdp in 10g

Raptorize

Tarry Singh, October 03, 2005 - 5:42 pm UTC

"(you beat me to it!)"

heh heh

init parameter request

A reader, October 03, 2005 - 6:26 pm UTC

Hi,

a nice parameter would be
implicit_type_conversion=false

My requests

Gary, October 03, 2005 - 8:49 pm UTC

Something similar was set up on
http://www.oracleplsqlprogramming.com/IC/
but it had a tendency to throw MySQL errors when submitting suggestions :)

My list.
1.A couple of mentions for disabling implicit type conversion. I'd like that to extend to explicit type conversions without format masks, at least for TO_DATE.
2.DESC for a specific procedure/function in a package
3.Extended explain syntax to add an optional USING clause (similar to EXECUTE IMMEDIATE) to allow the equivalent of bind variable peeking.
4.Something that is sort of like a function, but only involves SQL so doesn't require a context switch.
For example, you want to strip all the non-numeric characters from a string, you have
CREATE OR REPLACE ????? strip_nonnum (p_in varchar2)
returns
translate(p_in,' '||translate(p_in,' 1234567890',' '),' ');
and your SQL is simply SELECT strip_nonnum(col_name)....

[Thinking about it, it would be nice to have STRIP and KEEP SQL functions that discard or keep characters from strings rather than using TRANSLATE. I hate the fact that TRANSLATE fundamentally deals with arrays of characters rather than strings, but treats a zero-length array as a null string. CONCAT is quite happy with the idea of a zero-length string.]
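
Until such a construct exists, the nested-TRANSLATE trick from item 4 can be wrapped in an ordinary PL/SQL function (at the cost of the context switch Gary wants to avoid); this is just a sketch using his expression verbatim:

```sql
create or replace function strip_nonnum( p_in varchar2 ) return varchar2
deterministic
as
begin
  -- inner translate keeps only the NON-digit characters of p_in;
  -- outer translate then removes exactly those from p_in, leaving digits.
  -- caveat: space is the padding character, so spaces in p_in survive.
  return translate( p_in, ' ' || translate( p_in, ' 1234567890', ' ' ), ' ' );
end;
/
-- select strip_nonnum('a1b2c3') from dual;  -- 123
```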


Tom Kyte
October 03, 2005 - 9:15 pm UTC

2) oh, so you want to go back to 7.3 :) (that was the way it used to work....)

3) nice, in line with Alberto above...

4) I like that.



User Friendliness

Dhimant, October 04, 2005 - 12:54 am UTC

I guess anybody who has worked with MS SQL Server 7.0 and above can easily understand the ease of use and flexibility Enterprise Manager in SQL Server offers. Java based consoles [OEM] are good and platform independent, but slooooooooooooooooooooow. Oracle has fantastic manuals, but they need to be coupled ***with*** sqlplus, at least for SQL and PL/SQL. A lightweight database modelling tool should also be made part of Oracle Enterprise Edition. Forget everything else - I think just adding the SQL and PL/SQL documentation to sqlplus would do it. Do it and do it asap.

regds.

Tom Kyte
October 04, 2005 - 1:42 pm UTC

Forget
everything else I think just add the SQL and PL/SQL Documentation with sqlplus.


what do you mean by that?

Idle musings

Connor, October 04, 2005 - 1:15 am UTC

Most of these can be done via other means - so this is more of a "convenience" list.

1) "on commit trigger"

For doing those transactional level validation things - sort
of like deferred constraints on steroids.

2) "v$sql_children"

child of v$sql showing statistics for each execution of an SQL (say at stats_level = all). Sometimes 5 executions of the same child cursor have very different execution characteristics.

3) "10046 level 16"

logical IO's dumped to trace file. (I know there are other means of doing this but be nice to have it all under 10046)

4) SQL to PLSQL call stack

SQL's called from PL/SQL can be linked to the executing package without having to trace/profiler/x$kglrd

5) Extended unique constraint

alter table T add constraint C unique ( upper(col) )

rather than having to use 'create index'

6) scott/tiger

$> ls -l /oracle/10gr2/sqlplus/demo/demobld.sql
/oracle/10gr2/sqlplus/demo/demobld.sql: No such file or directory

I don't *want* those new example schemas :-)

7) rman "duplicate as copy"

rman dup appears to always use a backup or take a new one. Why not just backup from the live db straight into the duplicated instance. The 'backup as copy' gets me closer in 10g

8) parent-id shown with dbms_xplan

The indenting is nice, but the explicit link between id to parent id is more concrete imho.

9) lob segments named better by default

rather than "SYS_L....", why not do it like materialised views etc - use the first 'n' chars of the column name with a sequence# suffix. You could always fall back to the default if a clash occurred.

10) smart date based add-partition

If you've got a range partitioned table and each range is a day/week/month/year, then it's not too much of a stretch to allow a DDL such as:

alter table T add partition default range;

11) /*+ cursor_sharing_force */ hint

Trying to convince java developers to rewrite their string-concatenated SQL is a big ask. But getting them to add the hint to the string is easy. Allows selective cursor sharing without having to toggle 'alter session' calls throughout the code.

12) dbms_stats('execute-for-n-secs')

Rather than estimate/compute etc, simply a "please calculate stats until n seconds have elapsed". Under the scenes it could be similar to the auto mode - start with small sample and keep extending until time runs out.

13) aux_stats$ entries per instance

14) alter sequence reset to n;

15) online table move

"alter table T move online";

Spare the dbms_redefinition mumbo jumbo :-)

16) transactional truncate (under certain conditions)

So we could do:

truncate
insert /*+ append */
rollback

I'm thinking the truncate simply sets a seg header flag which forces all new data above the hwm. If you commit, we move the start block to the old hwm.

17) usage counts in v$object_usage

18) alter (context) index rebuild online;

and no, it's *not* already there when the manual says: "At the very beginning or very end of this process, DML might fail."

19) Removal of anything related to XML :-)

Cheers
Connor

Tom Kyte
October 04, 2005 - 2:08 pm UTC

5) yes....


thanks for the list - nice ideas. I will be (after this settles a bit, getting there soon I think) pick sort of a top-n to be sent on up the chain.

For "Queue DDL statements"

Connor, October 04, 2005 - 1:49 am UTC

You might want to play with "ddl_wait_for_lock"

Tom Kyte
October 04, 2005 - 2:08 pm UTC

well, now you are just getting pushy ;)

If not already mentioned ...

Clemens, October 04, 2005 - 3:08 am UTC

... how about these:

- NULL-Strings <> '', as session or init-Parameter, that would help a lot (and keep away those NVL-constructs).

- Trigger "numbering" (as for example seen in firebird), so we can have for example several "insert for each row"-triggers on the same table, with a fixed execution order - could be handy if you automate some insert triggers for sequence no's or have to add new features to existing applications.

- Sequence/Id-Columns, as seen in postgres. You can still create your own sequences, but it keeps away the unavoidable insert-triggers for that purpose.

- Removing the varchar2 barrier - lobs are nice, but why not open up the "character" datatype as seen in other DBs (even firebird manages char-columns up to 32K length).

- A "step"-clause in the for-loop would be nice (as seen in other languages [2]).

- Case statements with selector which allows extended compares, such as IN, LIKE, etc. Searched case statements are nice, but couldn't there be more relaxed variant for the selector-type? (as seen in other languages [2]).

- Mixed case object names combined with case insensitive DML. All upper case (such as in desc-statements) is harder to read (in my opinion), "<objectname>" requires " "-syntax on selects, etc. if I remember correctly. Could be a session or init-parameter.


- Already mentioned: constraints with their own error messages (I think Sybase ASE has that already built in).

- Already mentioned: the 30 character limit on (all) object names. Tables/columns are fine (for me), but for constraints (such as foreign keys), where I usually prefer to include both table names and columns for easier identification, it's a drag - especially if you use 3rd-party modelling tools.

C.

[1] My first post here, but I'm a reader of this, in my opinion very valuable "knowledge base", for a couple of years now.
[2] I'm referring to languages such as Visual Basic (gambas if you're on the *[IU]X side).

I wish...

Vladimir Andreev, October 04, 2005 - 6:06 am UTC

Removal of LONG columns from the dictionary views (or adding new LOB columns with the same data and keeping the LONG columns for backward compatibility - sort of what happened to V$SESSION_EVENT with regard to TIME_WAITED and TIME_WAITED_MICRO)

Cheers,
Flado

I would like...

Frank, October 04, 2005 - 7:57 am UTC

... to be able to see dependencies between packaged functions and procedures (i.e. at function-level; not at package level), as well as to be able to query for parameters of (private) procedures/functions


Tkprof stuff

Jim, October 04, 2005 - 8:26 am UTC

1. Finer resolution on CPU times.
2. Format bind values.

group by syntax

Graham Halsey, October 04, 2005 - 9:27 am UTC

I may be way behind the times (8i!), but why can I do this:

"select table_name,tablespace_name from user_tables order by 2,1;"

but not this:

"select tablespace_name,count(*) from user_tables group by 1;"

i.e. why is ORDER BY and GROUP BY handled differently when it comes to referring to fields in the SELECT list numerically?

Tom Kyte
October 04, 2005 - 4:30 pm UTC

SQL standard.... It permits order by <ordinal column position>, but not so for group by.

Finer grained dependencies on packages ...

Dushan, October 04, 2005 - 9:46 am UTC

I wish I could see dependency like:

package1.procedure1 calls package2.procedure2

It would be nice for documentation issues.

Syntax Re-write..

Pet, October 04, 2005 - 10:10 am UTC

I would like PL/SQL syntax to match up with other languages (java probably..)

:= should become =
= should become ==
if() then end if should become if(){} else {}
...

it's tough every time to switch coding style between PL/SQL and Java. I guess most developers have the same problem..


Tom Kyte
October 04, 2005 - 4:33 pm UTC

that'll not be happening.

Make java change. := predates java by a couple of years and quite a few languages :)

Sorry, but I disagree ...

Clemens, October 04, 2005 - 10:21 am UTC

@Pet: Java is Java and PL/SQL is PL/SQL (or ADA). No need to turn PL/SQL into something different. But if Oracle would offer the usage of different SP-Languages (like for example postgres) ... my personal favorite then would be LUA. ;-)

C.

Tom Kyte
October 04, 2005 - 4:41 pm UTC

we do allow for different languages

plsql
C
java

and in 10gr2 - CLR languages on windows

wish list

Georg, October 04, 2005 - 11:00 am UTC

1. Improved data dictionary security
Why is it possible to modify internal views (like DBA_USERS)? I wish a new security layer would make such internal objects "read-only". Or even having any internal checksums to be able to detect manipulations...

2. From the other DB-world: TOP N Query and Identity
I know, it's easy to build it in Oracle - but...
Why not just
create table <t-name> (id number identity using sequence <seq-name>)

and SELECT TOP N ... is really much more user friendly than using inline views.

3. "NULL or unique" constraint.
Business case: you have a form with an email field. The user doesn't have to fill it in - but if he does, the email address has to be unique. So there are many NULL values and unique not null values.

4. already mentioned: 30 char limit and reset a sequence value


Tom Kyte
October 04, 2005 - 4:42 pm UTC

3) that already exists?? a unique constraint on a NULLABLE column is always "null or unique"
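
A quick sqlplus-style sketch of Tom's point (table and data invented for illustration):

```sql
create table t ( email varchar2(100) unique, name varchar2(30) );

insert into t values ( null, 'a' );        -- succeeds
insert into t values ( null, 'b' );        -- succeeds: NULLs don't collide
insert into t values ( 'x@y.com', 'c' );   -- succeeds
insert into t values ( 'x@y.com', 'd' );   -- fails: ORA-00001 unique constraint violated
```

Entirely-NULL keys are simply not entered in the unique index, so any number of them can coexist.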



Arrays as IN arguments to SQL Statements

Steve G, October 04, 2005 - 11:08 am UTC

I wish plsql would allow arrays passed in to be used as an IN argument to a sql clause such as:

p_array_demo(p_int in int_array)
is
begin
select *
from temp
where pk in (p_int);
end;

That would be real helpful.

Tom Kyte
October 04, 2005 - 4:46 pm UTC

if you use collections - you can do that.
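
A sketch of the collection approach Tom means, reusing Steve's names (INT_ARRAY must be a SQL-level type, not a PL/SQL index-by table; TEMP and PK are assumed from his example):

```sql
create or replace type int_array as table of number;
/
create or replace procedure p_array_demo( p_int in int_array )
as
begin
  -- TABLE() turns the collection into a row source usable in SQL
  for r in ( select *
               from temp
              where pk in ( select column_value
                              from table( cast( p_int as int_array ) ) ) )
  loop
    null;  -- process each matching row here
  end loop;
end;
/
```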




The Installer

C Wayne Huling, October 04, 2005 - 11:23 am UTC

If this is available, I have not found it: running the Universal Installer in text mode. Having to run it across 10 base T (really, the Government installation won't upgrade!) through the firewall is a killer!

Tom Kyte
October 04, 2005 - 4:50 pm UTC

there is no character mode, there is

a) gui mode
b) silent mode (you ran the gui and created a response file to replay later)

Access of tracefile to developers

Jagjeet Singh, October 04, 2005 - 11:43 am UTC

I would like to have a feature which can provide the trace output directly to the sql*plus client.

A developer can issue "alter session set sql_trace=true",
but to get the resulting file they need access to the database server, or a dba needs to set something up.

I would like to have a global temporary table for this output.


Delink Full Table Scan I/O from db_file_multiblock_read_count & db_block_size parameters

Anto, October 04, 2005 - 11:49 am UTC

Do all full table scans at the maximum IO rate possible for the system, irrespective of the settings for db_file_multiblock_read_count & db_block_size.



Tom Kyte
October 04, 2005 - 4:53 pm UTC

10gr2 - "self tuning" setting for that.

or just set db_file_multiblock_read_count high?

Some estimate of time remaining

Anto, October 04, 2005 - 12:21 pm UTC

Some way of determining the remaining time for operations like nested loops (based on the statistics available to the optimizer), even if it is not accurate (if stats are not up to date).

Similar to what we have for long operations like Full table scan, fast full index scan / hash join etc

How about a column name

Ian, October 04, 2005 - 12:45 pm UTC

It would be nice to be told the column name when you get an ORA-01401: inserted value too large for column.

Tom Kyte
October 04, 2005 - 5:15 pm UTC

ops$tkyte@ORA10G> select * from v$version;
 
BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.1.0.4.0 - Prod
PL/SQL Release 10.1.0.4.0 - Production
CORE    10.1.0.4.0      Production
TNS for Linux: Version 10.1.0.4.0 - Production
NLSRTL Version 10.1.0.4.0 - Production
 
ops$tkyte@ORA10G> @test
 
Table dropped.
 
 
Table created.
 
insert into t values ('xxx','xx')
                      *
ERROR at line 1:
ORA-12899: value too large for column "OPS$TKYTE"."T"."X" (actual: 3, maximum:
2)
 
 
insert into t values ('xx','xxx')
                           *
ERROR at line 1:
ORA-12899: value too large for column "OPS$TKYTE"."T"."Y" (actual: 3, maximum:
2)
 

ASM - even more intelligence

Jim, October 04, 2005 - 1:05 pm UTC

Automatic Storage management is nice but it goes on the assumption that raw IO times and available size of each mount point are equal. Two issues here:

1. If you have two mount points with unequal disk access times (eg one with 10,000 RPM disks and one with 15,000 RPM disks) and you use them as one ASM file system, you are going to have unusual performance characteristics. Yes, the advice is that you shouldn't do that, or that you should have two file systems (one for the 10K disks and one for the 15K disks). However, the reality is that over time it becomes difficult to do that.

2. Mount points of uneven size. ASM really wants mount points that are the same size. It wants to spread the data evenly over the mount points. Again, on a brand spanking new system you would want to set things up this way. As things age, the data bits age into their 40's and their waistlines expand, taking up more room. (just kidding) So another mount point gets added that is much larger. Now you have a situation that is similar to situation #1. Potentially uneven access.


It would be nice to "environmentally tune" ASM. It should know what the IO patterns are and what the disk characteristics are over time. (how fast different types of reads and writes occur over time.) It could then make a better allocation of data over the mount points to optimize data access. It would have to take into account segment access, frequency and volume also.

This would not be a trivial piece of work and you might not want ASM to always be that smart. But I envision that a simple parameter could activate it or not. drum roll please.... _fast=true

(grin)

Tom Kyte
October 04, 2005 - 5:20 pm UTC

1) that would be a "don't put them into the same diskgroup" yes. but that would be true OF ANY volume manager.

2) no it doesn't? it just wants disk, as much as you want to give it. I don't understand that comment?



What to send up the chain

Bob B, October 04, 2005 - 2:21 pm UTC

I was reading on the "Creating Passionate Users" site about 37signals and their mantra, of sorts, on customer requests: if something's really important, people will keep bringing it up. So far, the big one I'm seeing is increasing the length of object names. I think many situations have come up where a (slightly) longer name for a constraint or index would make it self-documenting. Or maybe leave object names as they are and allow all objects to be commented on? *shrug*
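
For what it's worth, part of that last wish already exists: tables and columns (though not constraints or indexes) can carry comments via COMMENT ON, retrievable from the dictionary. A quick sketch, with ORDERS as a made-up table:

comment on table orders is 'One row per customer order';
comment on column orders.status is 'N=new, S=shipped, X=cancelled';
select comments from user_tab_comments where table_name = 'ORDERS';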

Improve sqlplus editing

Jack, October 04, 2005 - 4:21 pm UTC

Add editing and history functionality, similar to Korn shell editing, to sqlplus. I'm fortunate in that I work on HP-UX, which has the ied utility. I go into sqlplus using the "ied sqlplus" command, which gives me shell-like editing and history functions. It's much easier to use than the existing sqlplus editing commands, and I can retrieve commands from history using ESC-k.

Tom Kyte
October 04, 2005 - 8:25 pm UTC

(that is rlwrap to the rest of us ;)

Continuing gripes

Bill C., October 04, 2005 - 4:38 pm UTC

First, kudos to Oracle for making the DB much better and not screwing up too much in the process. I don't know about you, but it seems that up until 8.1.6, my life revolved around waiting for the next patchset. But not since then. 8.1.7.4 was pretty good. 9.2.0.7 is looking pretty stable as well. I especially congratulate the team that finally allowed me to use records and arrays of records in DML and bulk DML. I had been waiting on pins and needles for that since '94.

Now, for a few gripes, some of which should have been improved eons ago.

1) Losing track of the point of error if you dared to use error handling. I understand this is sort-of fixed in 10g if you parse the output of the new backtrace function. But it still shouldn't be so complex to get the current line number, or the line number of the offending code when throwing an error. It should be a built-in, like
l_line := DBMS_UTIL.current_line_num;
OR
l_line := DBMS_UTIL.point_of_error(stacklevel => 1);
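
In 10g the line number can at least be dug out of DBMS_UTILITY.FORMAT_ERROR_BACKTRACE, though it takes string parsing rather than a clean built-in. A rough, untested sketch:

declare
   l_bt   varchar2(4000);
   l_line number;
begin
   raise_application_error( -20001, 'something failed' );
exception
   when others then
      l_bt := dbms_utility.format_error_backtrace;
      -- the backtrace ends with something like: ORA-06512: at line 5
      l_line := to_number( rtrim( substr( l_bt, instr( l_bt, 'line ', -1 ) + 5 ), chr(10) ) );
      dbms_output.put_line( 'point of error: line ' || l_line );
end;
/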


2) Not being able to programmatically determine the name of the currently executing routine. Sorry Tom, last time I tried "whoami" deep within some packaged calls, it just didn't cut it. Having to parse strings to get that info is ridiculous. It should be a built-in, like
l_pkg_nm := SELF.package_name;
l_fqn := SELF.fully_qualified_name;
l_routine := SELF.current_routine;
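
10gR2's conditional compilation gets partway there: the $$PLSQL_UNIT and $$PLSQL_LINE inquiry directives report the current unit name and line number - although for a packaged routine you still get the package name, not the individual procedure. A sketch (WHO_TEST is just a throwaway example):

create or replace procedure who_test
as
begin
   -- inquiry directives, new in 10g Release 2
   dbms_output.put_line( 'in ' || $$PLSQL_UNIT || ' at line ' || $$PLSQL_LINE );
end;
/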

3) Being constrained by an old-world 30 char limit to object names. I can live with uppercase only, but I really need 128 or 256 for identifiers.

4) No alphanumeric datatype somewhere between the extremes of 4000 bytes and 4GB. Even lowly Access has this requirement nailed. Can't we have a datatype - or simply expand VARCHAR2 - to allow text up to, say, 1MB in a column? While we're on datatypes: I type VARCHAR2 so many times in my workday, it would help my weary fingers to introduce a shorter name for VARCHAR2, say VARC or STR, which transparently compiles (and so is backwards compatible) to VARCHAR2 under the covers.

5) Everyone else is pushed and encouraged to dump LONG and LONG RAW in favor of CLOB and BLOB - why not Oracle's own internal tables? They cause me a good deal of grief when attempting certain impact analysis and comparisons across links. I have to jump through a lot of hoops to get around them. (Thanks to Vladimir for the reminder).

6) Dependent item Recycle Bin. Too often I or one of the DBAs gets a little too busy or rattled and drop a table, inadvertently losing grants or triggers that we forgot to check for first. Of course there are processes and tools to prevent that, but it would be sweet if there were a built-in facility to either a) given a table name, produce a drop script that captures the recreate scripts for the automatically-dropped dependent items, or b) a recycle bin that would contain the recreate DDL for items that disappeared when the table dropped. On a similar note, if option (a) were attempted, I'd love to see it generate the commands required to drop and recreate all the dependent foreign keys pointing to the table being dropped.
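
On 6) - DBMS_METADATA can be pressed into service for this today: before the drop, capture the dependent DDL and squirrel it away. An untested sketch against a hypothetical EMP table (each call raises an error if no dependent objects of that type exist, so real code would trap that):

select dbms_metadata.get_dependent_ddl( 'TRIGGER',        'EMP' ) from dual;
select dbms_metadata.get_dependent_ddl( 'OBJECT_GRANT',   'EMP' ) from dual;
select dbms_metadata.get_dependent_ddl( 'REF_CONSTRAINT', 'EMP' ) from dual;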

Anto, October 04, 2005 - 5:17 pm UTC

" Do all Full Table Scans at the maxium io rate possible for the system,
irrespective of the settings for db_file_multiblock_read_count & db_block_size


Followup:
10gr2 - "self tuning" setting for that.

or just set db_file_multiblock_read_count high?
"

We don't want to reduce the cost associated with Full table scan , thereby favoring more Full Table Scans - The cost should be the same irrespective of this init.ora. But whenever a full table scan is chosen by the optimizer, it has to be done at the max. i/o rate possible for the system, irrespective of the values for db_block_size and
db_file_multiblock_read_count

Tom Kyte
October 04, 2005 - 8:38 pm UTC

but then the cost (he says) should be set appropriately for that Max IO size - meaning - you already got it, if you want to use the max IO size - use it, the cost will be the cost.

I'm confused?

No Longer Names !!!!!

Andrew, October 04, 2005 - 6:16 pm UTC

30 characters is plenty long enough for me. If the developers I support could make longer names they would, and that means I would have to type in names up to 128 chars. NO THANKS! Those of you who use the point-n-click tools to manage your databases do not care how long the names are, but us command-line commandos do. In fact, our standards limit table names to 25 characters, so there is enough room to add _PK, _IDX1, etc. We use short name conventions when we make compound names. If you had a longer name space, all you would do is end up complaining that you need more, because of the compounding of longer names into even longer compound names.

I hate the idea of trying to read columns of data about tables, constraints, etc where the names are 128 char wide and wrapped into two or three lines. Ouch.


Compress subpartition?

Quadro, October 04, 2005 - 8:09 pm UTC

It's 2005 and 10G Release 2.

And we still can't compress subpartitions!

Tom Kyte
October 04, 2005 - 9:18 pm UTC

sure you can???  You cannot compress an "individual" subpartition, but subpartitions can be compressed (at the partition level, all of them are - or not)


ops$tkyte@ORA9IR2> CREATE TABLE t1
  2  (
  3    dt  date,
  4    x   int,
  5    y   varchar2(25)
  6  )
  7  PARTITION BY RANGE (dt)
  8  subpartition by hash(x) subpartitions 8
  9  (
 10    PARTITION part1 VALUES LESS THAN (to_date('13-mar-2003','dd-mon-yyyy')) ,
 11    PARTITION part2 VALUES LESS THAN (to_date('14-mar-2003','dd-mon-yyyy')) ,
 12    PARTITION junk VALUES LESS THAN (MAXVALUE)
 13  )
 14  /

Table created.

ops$tkyte@ORA9IR2> CREATE TABLE t2
  2  (
  3    dt  date,
  4    x   int,
  5    y   varchar2(25)
  6  )
  7  COMPRESS
  8  PARTITION BY RANGE (dt)
  9  subpartition by hash(x) subpartitions 8
 10  (
 11    PARTITION part1 VALUES LESS THAN (to_date('13-mar-2003','dd-mon-yyyy')) ,
 12    PARTITION part2 VALUES LESS THAN (to_date('14-mar-2003','dd-mon-yyyy')) ,
 13    PARTITION junk VALUES LESS THAN (MAXVALUE)
 14  )
 15  /

Table created.

ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> insert /*+ APPEND */ into t1
  2  select to_date('12-mar-2003','dd-mon-yyyy')+mod(rownum,4), rownum, rpad('*',25,'*')
  3    from all_objects;

30733 rows created.

ops$tkyte@ORA9IR2> insert /*+ APPEND */ into t2
  2  select to_date('12-mar-2003','dd-mon-yyyy')+mod(rownum,4), rownum, rpad('*',25,'*')
  3    from all_objects;

30733 rows created.

ops$tkyte@ORA9IR2> commit;

Commit complete.

ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> exec dbms_stats.gather_table_stats( user, 'T1' );

PL/SQL procedure successfully completed.

ops$tkyte@ORA9IR2> exec dbms_stats.gather_table_stats( user, 'T2' );

PL/SQL procedure successfully completed.

ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select table_name, blocks, empty_blocks from user_tables where table_name in ( 'T1', 'T2' );

TABLE_NAME                         BLOCKS EMPTY_BLOCKS
------------------------------ ---------- ------------
T1                                    268            0
T2                                    128            0



table is full of subpartitions and they are compressed... 

About subpartition compression

Quadro, October 05, 2005 - 1:34 am UTC

Ok, thank you Tom, it's my fault - I should be more specific.

Consider: you load data into a table partition during the month. At the end of the month you do alter table move partition compress - to compress its content.

With list subpartitions you cannot do that.

SQL> create table test
  2  (
  3   dt date,
  4   lst number
  5  ) partition by range (dt)
  6  subpartition by list (lst)
  7  subpartition template
  8  (
  9   subpartition t1 values (1),
 10   subpartition t2 values (2),
 11   subpartition t3 values (3)
 12  )
 13  (
 14   partition test values less than (maxvalue)
 15  );

Table created.

SQL> insert /*+ append */ into test select created, mod(rownum,3)+1 from all_objects;

51816 rows created.

SQL> commit;

Commit complete.

SQL> alter table test move partition test compress;
alter table test move partition test compress
                                *
ERROR at line 1:
ORA-14257: cannot move partition other than a Range or Hash partition


SQL> alter table test move subpartition test_t1 compress;
alter table test move subpartition test_t1 compress
                                           *
ERROR at line 1:
ORA-14160: this physical attribute may not be specified for a table
subpartition 

Tom Kyte
October 05, 2005 - 7:17 am UTC

just alter table test move subpartition??  The subpartition already has the compressed attribute.  It is inherited from the TABLE or PARTITION attribute


ops$tkyte@ORA9IR2> CREATE TABLE t1
  2  (
  3    dt  date,
  4    x   int,
  5    y   varchar2(25)
  6  )
  7  PARTITION BY RANGE (dt)
  8  subpartition by hash(x) subpartitions 8
  9  (
 10    PARTITION part1 VALUES LESS THAN (to_date('13-mar-2003','dd-mon-yyyy')) ,
 11    PARTITION part2 VALUES LESS THAN (to_date('14-mar-2003','dd-mon-yyyy')) ,
 12    PARTITION junk VALUES LESS THAN (MAXVALUE)
 13  )
 14  /

Table created.

ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> insert /*+ APPEND */ into t1
  2  select to_date('12-mar-2003','dd-mon-yyyy')+mod(rownum,4), rownum, rpad('*',25,'*')
  3    from all_objects;

30702 rows created.

ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> exec dbms_stats.gather_table_stats( user, 'T1' );

PL/SQL procedure successfully completed.

ops$tkyte@ORA9IR2> select table_name, blocks, empty_blocks from user_tables where table_name = 'T1';

TABLE_NAME                         BLOCKS EMPTY_BLOCKS
------------------------------ ---------- ------------
T1                                    268            0

ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> alter table t1 compress;

Table altered.

ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> begin
  2          for x in ( select * from user_tab_subpartitions where table_name = 'T1' )
  3          loop
  4                  execute immediate 'alter table t1 move subpartition ' || x.subpartition_name;
  5          end loop;
  6  end;
  7  /

PL/SQL procedure successfully completed.

ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> exec dbms_stats.gather_table_stats( user, 'T1' );

PL/SQL procedure successfully completed.

ops$tkyte@ORA9IR2> select table_name, blocks, empty_blocks from user_tables where table_name = 'T1';

TABLE_NAME                         BLOCKS EMPTY_BLOCKS
------------------------------ ---------- ------------
T1                                    128            0
 

Followup ...

Clemens, October 05, 2005 - 3:13 am UTC

@Tom:

we do allow for different languages
plsql
C
java --> yes, but you still need PL/SQL as foundation. But that's fine with me. Now to the python, perl, ... whatever integration. ;-)

and in 10gr2 - CLR languages on windows

So what about mono support on the *[IU]X platforms?

@Andrew from m

Short column/table names, why not. But we end up inventing "artificial" names for objects such as foreign keys, where it would be nice to know the relationship between the tables _and_ columns just by reading the name.

As a side note: I'm using both "point'n'click" (for DB model design) and command line tools, and I'm well aware of the misuse of long names. Which brings us to another issue:



How about tab completion (such as in bash, zsh, etc.) in sqlplus for object names, etc.? I don't think that rlwrap helps you there. [1]

C.

[1] Workaround for me right now is vim with dbext plugin.

Re: Queue DDL statements.

Sai, October 05, 2005 - 4:16 am UTC

Connor,

Thanks for your tip. It seems this feature "DDL_WAIT_FOR_LOCKS" was added in 10g release 1. But, I can't get it to work, any clues?

SQL> alter session set DDL_WAIT_FOR_LOCKS=true;

Session altered.

SQL> alter table test add sai_ddl number;
alter table test add sai_ddl number
*
ERROR at line 1:
ORA-00054: resource busy and acquire with NOWAIT specified


SQL> show rel
release 1001000200

Thanks,
 Sai. 

Tom Kyte
October 05, 2005 - 7:37 am UTC

It is an unimplemented feature that was erroneously documented in 10gr1 and removed in 10gr2 (from the docs)

Alberto Dell'Era, October 05, 2005 - 5:18 am UTC

> It seems this feature "DDL_WAIT_FOR_LOCKS" was added in 10g
> release 1. But, I can't get it to work, any clues?

metalink - doc id 660901.999 "DDL_WAIT_FOR_LOCKS=TRUE has no effect" seems to offer a possible interpretation:

<quote>
Got response from the TAR and Oracle told me the following:

"It has been confirmed that this is a documentation bug. This feature has not yet been implemented and it is not being implemented in the next release of
Oracle either."
</quote>

ddl_wait_for_locks

Connor, October 05, 2005 - 5:36 am UTC

bug 4254209

From the description:

"The parameter "ddl_wait_for_locks" is not used for anything in the code"

Charming...

Cheers
Connor

ddl_wait_for_Locks

Connor, October 05, 2005 - 5:40 am UTC

Sorry - should have also added some assistance.

You can simulate this with something like the below (untested):

declare
   n number := dbms_utility.get_time;
begin
   loop
      begin
         execute immediate 'alter table ...';
         exit;   -- the DDL succeeded
      exception
         when others then
            if sqlcode = -54 then                       -- ORA-00054: resource busy
               if dbms_utility.get_time - n > 60*100 then
                  raise;                                -- give up after ~60 seconds
               else
                  dbms_lock.sleep(0.1);                 -- wait 1/10th second, retry
               end if;
            else
               raise;
            end if;
      end;
   end loop;
end;

which will repeatedly try the DDL in 1/10th second intervals for up to 60 seconds.

hth
Connor

Actually for me, most things have already been mentioned

Bob, October 05, 2005 - 6:58 am UTC

I actually like most of Oracle!
But the things I'd change would be to simplify some of the things that I've needed to do regularly within commands or packages.

alter table t rebuild online tablespace x;

instead of the DBMS_REDEFINITION package. Have it auto rebuild the indexes in place. Simplify it to just do that.

SQLPlus grid results. 90% of the reason people use Toad or PLSQL developer is just to issue queries when developing. It's way easier to look at a table with 50 columns in those tools than sqlplus.

30 char limit for object names. Too small today.

re: improved SQL*Plus

A reader, October 05, 2005 - 8:05 am UTC

if all these wishes for an enhanced SQL*Plus get implemented, maybe the program could then be renamed to "UFI" (user friendly interface)

;-)

oups


I'll add my vote

Heath, October 05, 2005 - 9:31 am UTC

Just agreeing with the earlier comments - Raptor should recognize the fact that database developers have their PL/SQL source code in a version control system, too. Let us check those files out, edit the text file locally, and then be able to compile and debug based on that local file. Expecting the user to just pull the source from the data dictionary and edit it doesn't cut it. In a collaborative development environment, I can't trust the source in the database. I do trust the code in the source code system.

For Robert, the free JDeveloper doesn't allow this either. It expects you to edit from source in the database. (If I'm wrong, feel free to tell me how to do this.)

Please let me not have to develop using PLSQL Developer with its substandard debugger simply because that product allows me to work with local files. I love the JDeveloper PL/SQL debugger and would be pleased to be able to use the JDeveloper IDE (or similar) to do my development as well.

db_file_multiblock_read_count parameter

Anto, October 05, 2005 - 10:33 am UTC

"Do all Full Table Scans at the maxium io rate possible for the system,
irrespective of the settings for db_file_multiblock_read_count & db_block_size



Followup:
10gr2 - "self tuning" setting for that.

or just set db_file_multiblock_read_count high?
"

We don't want to reduce the cost associated with Full table scan , thereby
favoring more Full Table Scans - The cost should be the same irrespective of
this init.ora. But whenever a full table scan is chosen by the optimizer, it has
to be done at the max. i/o rate possible for the system, irrespective of the
values for db_block_size and
db_file_multiblock_read_count


Followup:
but then the cost (he says) should be set appropriately for that Max IO size -
meaning - you already got it, if you want to use the max IO size - use it, the
cost will be the cost.

I'm confused?

"

Hi Tom,

Here is my understanding of the parameter - db_file_multiblock_read_count

Correct me if I am wrong.

If this parameter db_file_multiblock_read_count is set too high - say 128 or 256 or more - more and more SQLs which were using indexes before will start using full table scans, since we are reducing the cost of an FTS by increasing the value of this parameter.

If this parameter is set too low - say 4, 8 or 16 - then more and more SQLs (which were using FTS before) will start using indexes, since the cost of an FTS is high due to the low value of this parameter. But even with a low value for db_file_multiblock_read_count, there will be a few SQLs which still use a full table scan, at a max I/O rate of db_file_multiblock_read_count * db_block_size (which will be low, due to the low value of db_file_multiblock_read_count). Ideally those SQLs which are still using FTS should do the FTS at the maximum I/O rate possible for that system, instead of at db_file_multiblock_read_count * db_block_size.

Tom Kyte
October 05, 2005 - 11:40 am UTC

but the cost of the full scan at the MAX IO SIZE is the cost of the full scan at the MAX IO SIZE.

What you are really asking for is a parameter to set the "full scan cost", not a parameter to muck with the IO size.


You really DO reduce the cost of the fts by increasing the multi-block read count see -- you really do, so you either want the cost we are going to use or not.


you set it to the max io size (settings above that don't count, it doesn't care if you set it really high, it adjusts) and that -- that is the COST of a full scan.


You have two different concepts going here. Perhaps you want to look at system statistics (so we know the cost of multi versus single block IO) and the optimizer_index_* parameters.
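
For reference, the system statistics Tom mentions are gathered along these lines (the 30-minute window below is arbitrary - pick a representative workload period):

-- snapshot over a fixed interval, in minutes
exec dbms_stats.gather_system_stats( gathering_mode => 'interval', interval => 30 )

-- or bracket the workload manually
exec dbms_stats.gather_system_stats( 'start' )
-- ... run representative workload ...
exec dbms_stats.gather_system_stats( 'stop' )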

Good thread here

Jim Tommaney, October 05, 2005 - 10:49 am UTC

1.) If this can be done and not impact concurrency:

create sequence seq_1;
alter table t_1
add constraint sk_1 sequence (c_1 )
references seq_1{.last_number};

Anyone who's dealt with thousands of sequences and hundreds of developers will appreciate the documentation aspect of this.

2.) I agree with the non-context-switching-function-like-thing suggested by Gary from Sydney. This would go a long way toward cleaning up some really tangled string manipulation.

3.) A cursor_sharing(nn) approach: get sharing behavior when the number of distinct literal values goes above the parameter value.

NAME DESCRIPTION
cursor_sharing_threshold re-write literal value to bind when literal exhibits > nn distinct values


Thanks - Jim T


Yikes

Scot, October 05, 2005 - 10:57 am UTC

This is a great thread, and there are some good suggestions here. But also some really bad ones, I trust that close scrutiny will be paid to all comments :)


Tom Kyte
October 05, 2005 - 11:46 am UTC

;) indeed, I have my own filters.

User, October 05, 2005 - 11:34 am UTC

I would like to get an error message like "Object does not exist" while creating a synonym, if the object does not exist.

column specific errors (part2)

Igor, October 05, 2005 - 11:38 am UTC

Yes, sorry, I wrote that off the top of my head. Actually a 1401 with a lot of values is sometimes not easy to track down:
SQL> insert into t values ( 'ABC' );
insert into t values ( 'ABC' )
                        *
ERROR at line 1:
ORA-01401: inserted value too large for column

 

more on binds

Niall Litchfield, October 05, 2005 - 11:39 am UTC

wow - this thread grew a bit in the last few days...

Not sure if this belongs here or in a blog entry.

"Niall,

you won't lose friends by expressing lucid opinions based on your experiences... "

Hey, I know that. Just overstating again.

"But on the bind thing - you and I see opposites I think. Maybe - when I come to
the UKOUG - you can sit in on my "all about binds" session "

It's on my agenda already. I don't think we are actually on opposites (at least as far as understanding goes).

"
the first time around, "where pk = 10" is not any different from "where pk =
:b1"

Even on the 1000th time, "where pk = 10" is not any different from "where pk =
:b1"

However, you don't tend to submit "where pk = 10" 1000 times, you tend to do it
once and it must be compiled (hard parsed)


so, it is the difference between:

where pk = 10
where pk = 11
....
where pk = 10000
where pk = 10001
...
where pk = .............


and

where pk = :b1



Think about a developer writing a SUBROUTINE for each unique set of inputs and
calling them: "

What you describe is how it *is* in Oracle; given how Oracle works, binds are just so, so important. The fact is, though, that developers, and more particularly SQL generators, have a real hard time getting binding right. I guess what I am saying is that, given the nature of apps, I'd like Oracle to behave like CURSOR_SHARING=FORCE by default (only without the bugs :() - i.e. that writing code like

select col from tab where pk = '||x (i.e. string concatenation)

resulted in nice straightforward sharable code. Microsoft's SQL server has done this for a while now.

I guess what I am probably saying is that developers and sql generators don't and won't bind (and we maybe can't teach them better) I'd like it if Oracle didn't suffer so much as a result.
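
The difference under discussion, in PL/SQL terms - the concatenated version generates a distinct statement (and a hard parse) per value, while the bound version shares one cursor. A sketch, with T as a stand-in table:

declare
   l_cnt number;
begin
   for i in 1 .. 1000
   loop
      -- literals: 1,000 distinct statements, 1,000 hard parses
      execute immediate 'select count(*) from t where pk = ' || i into l_cnt;
      -- bind: one shared statement, hard parsed once
      execute immediate 'select count(*) from t where pk = :b1' into l_cnt using i;
   end loop;
end;
/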

Incidentally, I agree with Connor: for some reason I can easily see people persuading developers to insert hints to make up for their lack of proper coding. I never will get that mentality.







Tom Kyte
October 05, 2005 - 11:53 am UTC

MS SQL Server doesn't have the concept of shared sql at all. It cannot be doing it "nicely" :)

exp/imp specifics & LOBs

Igor, October 05, 2005 - 11:45 am UTC

Even in 9.2, after quite a few releases with LOBs, when you change tablespace during import, LOB storage is not handled automatically, as "normal" storage for tables and indexes is. So you have to use indexfile, edit, create, import, etc.




Tom Kyte
October 05, 2005 - 11:54 am UTC

data pump. have you looked at that in 10g.

Follow up for Tom and/or Connor

Robert, October 05, 2005 - 11:48 am UTC

Hello Tom or Connor,

Connor said the following in his list of suggestions...

=========================
3) "10046 level 16"

logical IO's dumped to trace file. (I know there are other means of doing this
but be nice to have it all under 10046)
=========================

Could you please explain exactly how we can trace logical I/O's (8i and 9i).

Thanks!

Robert.

not using binds in MS SQL Server

Jim, October 05, 2005 - 12:03 pm UTC

MS actually would prefer you use binds for MS SQL Server. (I suspect they would prefer you not use binds for Oracle <grin>.) If you look at the SQL Server API, it does prefer you use binds. Years ago a colleague of mine wrote a driver for MS SQL Server (prior to ODBC existing) and used binds. It was a lot faster than not using binds.

The code generators that don't use binds have lazy or ignorant developers. The recommendation to use binds hasn't changed since at least Oracle 6 - probably farther back than the birth of some of the developers!

Anto, October 05, 2005 - 1:37 pm UTC

If we have enabled system stats, will the optimizer use the mbrc from that or will it be taken from db_file_multiblock_read_count of init.ora( if this parameter is set to a lower value in init.ora) ?

Tom Kyte
October 05, 2005 - 1:51 pm UTC

system statistics tell the optimizer how expensive multi-block reads are and single block reads are.


db_file_multiblock_read_count is used to perform the IO.

system stats do not "set" the multiblock read count, they report how long the multi block reads take.

Follow up on Connor's comments

Robert, October 05, 2005 - 2:45 pm UTC

Tom,

Connor said the following in his list of suggestions...

=========================
3) "10046 level 16"

logical IO's dumped to trace file. (I know there are other means of doing this
but be nice to have it all under 10046)
=========================

Could you please explain exactly how we can trace logical I/O's (8i and 9i).

Thanks!

Robert.



Tom Kyte
October 05, 2005 - 3:19 pm UTC

via undocumented events.

all about tracing

Alberto Dell'Era, October 05, 2005 - 3:02 pm UTC

1) having in the STAT lines (and so in tkprof) the CBO estimates of the cardinalities, cost, temp space etc - basically what's in v$sql_plan.

Getting the real plan estimates by using "explain plan" is currently near to impossible - you have to reproduce the actual environment at parsing time (hard or impossible, e.g. think about workarea_size_policy=auto), and you would have to know the bind variable values at bind-variable-peeking time (hard or impossible, not in the trace ... oh, yes, I was about to forget, explain plan doesn't peek ;).
The only feasible way is to look at v$sql_plan, but the plan may have been invalidated or changed in the meanwhile, you should identify the correct child cursor, etc.

2) having an API to dump the STAT lines at will, not only at cursor closing time (that may never happen, think about PL/SQL cached cursors, or JDBC 3.0 cursor caching, for app using a connection pool or simply never closing the connection).

3) I "vote" for Connor's idea to have "logical IO's dumped to trace file". Excellent for statements that have the accessed blocks almost always cached in the buffer cache, eg that join a table to some lookup tables.

4) nice to have: tkprof using the same format of dbms_xplan.

Tom Kyte
October 05, 2005 - 3:23 pm UTC

1) what about workarea_size_policy auto??? that it is on and the pga set to "x" is what decides the plan. It is rather deterministic.

binds are in the trace if you ask for them. and you would explain the query with the binds replaced by literals.

I find it is in general not near to impossible - but almost always obtainable, usually very easily obtainable (it would only be hard if the application did something to change its environment).....




Alberto Dell'Era, October 05, 2005 - 4:36 pm UTC

> 1) what about workarea_size_policy auto???

And how can you be sure to be able to reproduce the PGA utilization at the moment of parsing ? Maybe the statement got parsed three days ago at 02:12 am ? Maybe at that time a series of batch session started, and allocated "most" of the PGA ?

> binds are in the trace if you ask for them.

What if the plan got parsed before you enabled tracing, or was parsed by another session a millisecond before you (soft) parsed in your traced session ?

> you would explain the query with the binds replaced by literals.

And can you be 100% sure that the CBO will generate the same plan in that case ? Say you don't know that the 9i Thin Driver doesn't peek - you replace the binds with literals and get a different plan.

But even assuming that this isn't an issue - that you could theoretically reproduce the PGA utilization, the bind variable values at parsing (execution) time, and the environment - how can you be sure that you won't make a mistake ? Will you trust your generated plan 100% ?

If that were in the STAT lines - no uncertainty, you get the CBO estimates, in zero time. No errors, no mistakes possible - everything is there to see and diagnose.

Tom Kyte
October 05, 2005 - 8:37 pm UTC

1) and do you have an example where it makes a difference? :)

what matters is

a) workarea was auto
b) pga aggregate target was set

Have you seen different plans for the same query? What changes is the amount of memory given to a workarea....


true about the binds.



But - if the plans were "not" the same - it would be extremely apparent as well (and fairly rare).


The stat lines are reality - the guesses - that only comes from explain plans. You would not want to do an explain plan per parse/execute.

A couple more suggestions

Justin, October 05, 2005 - 5:15 pm UTC

1) The ability to grant privileges to a particular user or role for all objects of a particular type in a particular schema, i.e.

GRANT SELECT ANY TABLE IN schema_owner TO read_only_role
GRANT EXECUTE ANY PROCEDURE IN schema_owner TO application_role

You can get the same effect today with a DDL trigger that grants privileges on the newly created objects, but having Oracle automatically handle this sort of maintenance would be nice.

2) A SQL*Loader "create table" option. If the data files are on the server, I can obviously use an external table, but if I'm doing a load from a client machine, it would be nice to avoid having to duplicate information in my control file into my CREATE TABLE statement.

3) You can't MERGE into a workspace enabled table (Workspace Manager).
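
Back on 1) - the DDL trigger workaround needs a twist, since a trigger cannot issue the grant (it is DDL) directly; the usual dodge is to queue it via dbms_job so it runs just after the commit. An untested sketch, with READ_ONLY_ROLE made up:

create or replace trigger grant_new_tables
after create on schema
declare
   l_job number;
begin
   if ora_dict_obj_type = 'TABLE'
   then
      -- grants are DDL, so defer them to a job rather than issuing them here
      dbms_job.submit( l_job,
         'execute immediate ''grant select on "' || ora_dict_obj_name ||
         '" to read_only_role'';' );
   end if;
end;
/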

Events for these already exist

A reader, October 05, 2005 - 6:32 pm UTC

"2) having an API to dump the STAT lines at will, not only at cursor closing time"
"3) I "vote" for Connor's idea to have "logical IO's dumped to trace file"."

Separate events for both of these already exist. You can set multiple events in the same session and they will write into the same trace file (not that you would do this in any production environment).

Tom Kyte
October 05, 2005 - 8:38 pm UTC

but events don't necessarily count - I talk about two of them (10046 and 10053), but wish I didn't have to - and with 10g, I can stop talking about 10046...

Followup on IAN's comments

Madhava Reddy, October 05, 2005 - 9:37 pm UTC

The column name is not being specified for number datatype:

madhava@MADHAVA>drop table test
2 ;

Table dropped.

madhava@MADHAVA>create table test(x number(3,2),y varchar2(2));

Table created.

madhava@MADHAVA>insert into test(x,y) values(123.23,'xx');
insert into test(x,y) values(123.23,'xx')
*
ERROR at line 1:
ORA-01438: value larger than specified precision allowed for this column


madhava@MADHAVA>insert into test(x,y) values(1,'test');
insert into test(x,y) values(1,'test')
*
ERROR at line 1:
ORA-12899: value too large for column "MADHAVA"."TEST"."Y" (actual: 4, maximum: 2)


madhava@MADHAVA>begin
2 insert into test(x,y) values(123.23,'xx');
3 end;
4 /
begin
*
ERROR at line 1:
ORA-01438: value larger than specified precision allowed for this column
ORA-06512: at line 2


madhava@MADHAVA>begin
2 insert into test(x,y) values(1,'test');
3 end;
4 /
begin
*
ERROR at line 1:
ORA-12899: value too large for column "MADHAVA"."TEST"."Y" (actual: 4, maximum: 2)
ORA-06512: at line 2

Will this be done in the next release?

"but events don't necessarily count"

Connor, October 05, 2005 - 9:53 pm UTC

which is precisely why 'alter session' is a terrible terrible privilege to grant to anyone in a production system

Re: Follow up on Connor's comments

Sai, October 05, 2005 - 10:52 pm UTC

Robert,

To get a dump of the logical I/Os performed by any SQL statement, you can use the 10200 event in Oracle 8i. You can also check v$bh to see which buffer blocks are cached, and x$bh to see which buffers were touched when.

Connor, is that the event you meant to get logical I/O dump?

Thanks,
Sai.

"but events don't necessarily count"

A reader, October 06, 2005 - 12:10 am UTC

How often are you going to do this? Should there be a reason that 10046 is not enough, then maybe in development or even in a performance test lab.

I just see the need for this as being sooooo remote, I ask why having this as an event is not "good enough"??? 99% of the time, the database does what it was meant to do and it does it extremely well. I am just happy that Oracle even gives you the ability to enable an event that will dump out this detailed information (granted, it would be nice if Oracle were a little more public about some of the events outside 10046 and 10053).

Just my $.02

Tom Kyte
October 06, 2005 - 7:41 am UTC

should there be a reason 10046 is not enough?

Absolutely - every developer should be using it every day and in every way. dbms_monitor in 10g takes care of it for us in a documented, accessible, safe fashion.


Events are not documented, will not be documented as far as I know. That is why they are not good enough.
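For completeness, the documented 10g interface mentioned above looks like this (a sketch for the current session):

```sql
-- documented, supported replacement for 'event 10046, level 12'
begin
   dbms_monitor.session_trace_enable(waits => true, binds => true);
end;
/

-- ... run the workload to be traced ...

begin
   dbms_monitor.session_trace_disable;
end;
/
```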

online rename datafiles

steen bartholdy, October 06, 2005 - 5:48 am UTC

I would like to:
alter database rename file X to Y *online* ;
preferably for all file types: datafiles (for capacity) and, for ease of use, controlfiles too!
You can back up online and create a copy online, but switch online? naaah.
Or have I missed something here?


Tom Kyte
October 06, 2005 - 7:53 am UTC

alter database rename file X to Y *online* ;

that doesn't make sense??


'switch online'?? don't understand.

Adding constraints/Columns etc.

Bipul, October 06, 2005 - 6:56 am UTC

My 2c.

1. Whenever a table structure is changed [e.g. a new column is added], this invalidates all the dependent stored PL/SQL and also all the dependent cursors in the shared pool. In an OLTP environment, where hundreds of sessions run thousands of transactions/queries every minute, this causes some issues. For example, most of the sessions will go into "library cache pin" or "library cache lock" waits and the database appears to hang. I have noticed this from 8.1.7 to 10.1.0.4. This is not very helpful in providing 24x7 uptime. Oracle support haven't provided any workaround for this so far.

2. Adding a constraint locks the parent table. Again, this causes issues in a high-transaction OLTP system. The lock time varies with the size of the parent table. Often this results in some "timeouts".

It would be useful if Oracle could address these issues as part of the "high availability" features.
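One partial mitigation for (2) that exists today, sketched below (table and constraint names are made up): adding a foreign key ENABLE NOVALIDATE keeps the lock on the parent brief, since existing rows are not checked at creation time; the constraint can be validated afterwards without blocking DML.

```sql
-- new rows are checked immediately; existing rows are not,
-- so the DDL completes (and releases its locks) quickly
alter table child add constraint child_parent_fk
   foreign key (parent_id) references parent (id)
   enable novalidate;

-- later: validate the existing rows without blocking concurrent DML
alter table child modify constraint child_parent_fk validate;
```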

The other bit I would like to see improved is Oracle support. I don't think [this is completely based on my experience] that Oracle Support puts a lot of effort into recreating and diagnosing customer issues. Often they tell us to recreate it in our environment, whereas the problem can be easily recreated anywhere, i.e. in Oracle support's own environment. At times, it seems like they are after "closing the ticket" rather than "fixing customer issues". [The last statement assumes that asktom.com is not part of Oracle support. In my opinion, to date this is the most useful site for fixing Oracle issues, gaining insight into issues, and learning to do things so that we don't bump into issues.]

Thanks
bipul




Tom Kyte
October 06, 2005 - 7:59 am UTC

1) because upgrading an application is a hard nut to crack - think about it, you are patching or upgrading/updating an application - you are CHANGING it.

2) same comment - you are upgrading.




As for support - I agree with their approach. If you call and say "hey, my 500,000 line program fails - why" - they can do nothing.

If you call and say "hey, run this and what do you see", they can do something.

Developing a test case in most all cases is paramount - you see me do it all of the time, you see me demand it all of the time as well.

no, this is not part of support as it is.

cursor_sharing

Igor, October 06, 2005 - 6:58 am UTC

And sometimes it would be handy to have "finer grained" cursor_sharing for cases when no application code can be changed, to say "Hey, this part I want with FORCE, the rest is OK". Based on tables or something similar...

BLOBs

Igor, October 06, 2005 - 7:06 am UTC

"data pump. have you looked at that in 10g."

OK, but the systems that I touch are "still" mostly 9i.
And correct me if I'm wrong, but BLOB was, I think, introduced in 8.0, and in 9.2 this relatively basic thing does not work.

Also, in the exp/imp context it is a bit unusual that I could have problems exporting with exp 9.2.0.5 from a 9.2.0.4 database.
I know the Metalink compatibility matrix exists and all that, but it's a bit "counter-intuitive".



Tom Kyte
October 06, 2005 - 8:01 am UTC

exp/imp has no technique for remapping tablespaces - unless you have the relatively simple "single segment object" case, the facility just didn't exist.

And I was just pointing out that what you ask for was added a couple of years ago....

CBO decision

Igor, October 06, 2005 - 7:09 am UTC

Sometimes it would be nice to have the CBO say how "far" it is from changing the execution plan, or whether the second-best plan is pretty close...

Alberto Dell'Era, October 06, 2005 - 7:53 am UTC

> 1) and do you have an example where it makes a difference? :)

Not specific to pga_aggregate_target; anyway I've seen many plans in tkprof that I wasn't able to reproduce using explain plan. Call it my fault, perhaps I missed some "alter session" of the app, perhaps we were using 9i thin drivers and I wasn't clever enough to know that 9i thin drivers don't peek, whatever ...

Having the estimated cardinalities, cost, temp space, etc in the STAT lines would leave no space to guessing, and would mean, *at the very least*, that you wouldn't need to spend time to reproduce the environment for explain plan.

> You would not want to do an explain plan per parse/execute.

You need not - the info is already there, in the kernel structures read by v$sql_plan, since hard parse time - you would just need to dump it into the trace file.

Followup on your comment re: constraints/structure change

Bipul, October 06, 2005 - 8:51 am UTC

<TK>
1) because upgrading an application is a hard nut to crack - think about it, you
are patching or upgrading/updating an application - you are CHANGING it.

</TK>

Upgrading an application is slightly different. We plan it, get some downtime, etc. But in reality what happens is:

1. Managers/designers and stakeholders read about a development methodology called RAD [rapid application development].
2. They want to develop something using RAD very quickly with a few tables, get the application online, and keep adding features to it. Ad hoc, whenever they feel like it.
3. Some of the enhancements/features require changes to an existing table, which has lots of rows and is used by many stored procedures/cursors etc. And then we [DBAs] hit this problem when we have to modify the table or add constraints etc.

I think we [the IT industry in general] are spending less time on design.

<TK>
As for support - I agree with their approach. If you call and say "hey, my
500,000 line program fails - why" - they can do nothing.

If you call and say "hey, run this and what do you see", they can do something.

Developing a test case in most all cases is paramount - you see me do it all of
the time, you see me demand it all of the time as well.
</TK>

I agree that developing a test case is important, but sometimes the problem is bleedingly obvious and you don't need a few GB of data to reproduce it. For example, the "library cache pin"/"library cache lock" issue in a high-transaction OLTP database can be recreated in support's environment.

I should add that sometimes [at least once that I have noticed] Oracle changes functionality without putting that in the release notes. For example, XDB in 10g doesn't support unicode characters > 255, and it does in 9i. This wasn't listed anywhere in the Oracle documentation. Support suggested converting the database character set to UTF-8. Sure, we can do that; that's pretty easy for a > 1TB database with loads of VARCHAR and CLOB columns that requires 24x7 uptime!

Anyway, I think this thread is not about griping about support issues. So I should stick to product features.

How about the ability to rename a schema? This would be useful in upgrades; a sample upgrade process could then look like:

1. Clone the schema.
2. Upgrade the cloned schema.
3. Rename the original schema to something else.
4. Rename the cloned schema to the original schema's name.

It would save us restoring the original schema if the upgrade fails.


Thanks
bipul

Tom Kyte
October 06, 2005 - 9:26 am UTC

as for the upgrade -

IF this is a mission_critical_system
AND downtime is to be avoided
THEN
you need to have a 'mainframe' mentality, for lack of a better term.
END IF



surprised no one else mentioned a schema rename yet ;)

Support

Connor, October 06, 2005 - 9:09 am UTC

Don't want to turn this into an anti-support rant because

i) compared with other products' support knowledge bases, I rate Metalink as the benchmark against which all others pale

ii) being a support person must be a thankless job

but as a user of support for around 12 years, I would contend the quality has dropped significantly over the past few years.

I'm a real stickler for providing small, easy-to-run, cut-and-paste test cases (hopefully my book content lends credibility to that statement) - something I do in my TARs - but even then there is so much drive by support analysts to

a) close your TAR asap (I assume they're measured on turnaround time)

b) resist any possibility that what you've described is a bug.

Cheers
Connor

Is it possible to move datafiles online ?

steen bartholdy, October 06, 2005 - 9:20 am UTC

I believe that in order to move a datafile you have to :
alter tablespace tab offline;
alter database rename file 'X' to 'Y';   (having copied the file at the OS level)
alter tablespace tab online;
If this is true then I want to move datafiles without taking the tablespace offline.
And I would like to move controlfiles online as well.


Tom Kyte
October 06, 2005 - 11:50 am UTC

and like I said - to me that doesn't make sense - you have to MOVE the file, meaning the file is in two locations.....

Hints

DaPi, October 06, 2005 - 9:21 am UTC

Any chance of a warning diagnostic from SQL*Plus for incorrectly formatted hints? (Yes, I know, better not to use them, but....) Currently you don't know if the hint made no difference or if you got the format wrong.

Excellent DaPi

A reader, October 06, 2005 - 9:45 am UTC

Of course you only mean the good hints

Like it Dapi

padders, October 06, 2005 - 10:06 am UTC

Perhaps a small section in the DBMS_XPLAN.DISPLAY output (as for predicates) to indicate what hints were applied to which operation etc.
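For context, the output such a section would extend (a sketch, assuming the SCOTT demo schema and its PK_EMP index):

```sql
explain plan for
   select /*+ index(e pk_emp) */ *
     from scott.emp e
    where e.empno = 7788;

-- DBMS_XPLAN.DISPLAY already shows a Predicate Information section;
-- a "hints applied per operation" section could sit alongside it
select * from table(dbms_xplan.display);
```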

Split user / schema, anyone?

Clemens, October 06, 2005 - 10:38 am UTC

> Tom: surprised no one else mentioned a schema rename yet ;)

I wouldn't mind if you could create (and drop) users without corresponding schemas (sometimes you only need them for logon/security purposes), as seen in Postgres. It gets a bit messy if you've got lots of users and you want to select a certain schema in one of those point'n'click tools.

C.

A reader, October 06, 2005 - 10:38 am UTC

The quality of Oracle support has indeed gone down over the years. The final answer from support in most cases is "upgrade to the next version". In some cases, they don't even care to find out the exact bug we are hitting.

Metalink is quite good in itself, and if I don't get the answer there, I would rather ask on asktom.oracle.com than raise a TAR with Oracle Support.

Context Switch

AndyN, October 06, 2005 - 10:58 am UTC

If I could change something about Oracle it would be the removal of the context switch between the PL/SQL engine and the SQL engine. I have never quite understood why this has to be there. If the two engines were integrated together, couldn't this be removed?

Tom Kyte
October 06, 2005 - 12:04 pm UTC

plsql is a separate vm, just like java.

we did have a request above that seems neat -- sort of a "sql function" capability.

the overhead of calling sql from plsql or plsql from sql is fairly small - the problem is, many people try to do it 1,000's of times and it ADDS up.

Alberto Dell'Era, October 06, 2005 - 12:30 pm UTC

A schema SYNONYM would be dandy, to be able to repoint a schema as we do with tables.

create or replace schema synonym interface for schema1;

Users point to the "interface" schema and get redirected to schema1. More transparent than "alter session set current_schema=schema1"; it could be retrofitted to existing applications with no modifications.

Or, the ability to be able to repoint a set of synonyms atomically and consistently:

create or replace synonym t for schema1.t, u for schema1.u ..

Tom Kyte
October 06, 2005 - 12:48 pm UTC

yeah, let me chime in...


a create schema that is atomic (as it is) but supports everything....

an alter schema to go with it.


so that a group of DDL statements can be done together as an atomic unit of work.

Alberto - you just made this thread TIE that other recent thread and have dethroned it! This is now the current "hot" one :)

Anto, October 06, 2005 - 2:08 pm UTC

Nice to know that a purely technical thread has come to the top. ;-)

Never in his wildest dreams would Alex have thought that his thread would become the hottest one in less than a week

A reader, October 06, 2005 - 2:21 pm UTC

Some dictionary view to list all the queries currently being parsed by the optimizer - or does V$SQL, or some other view, have this info?

Even though parsing typically takes only milliseconds, there are cases where we had to wait seconds or even minutes (for the parsing to complete) to see the explain plan. This happens when optimizer_max_permutations is retained at its 8i default value of 80,000 even after you upgrade the database to 9i. The problem disappears if you reduce optimizer_max_permutations by 1, to 79,999.



Alberto Dell'Era, October 06, 2005 - 3:52 pm UTC

> you just made this thread TIE that other recent thread

If technical threads draw more attention than political ones, there's still hope for the [Oracle] world ...

Tom Kyte
October 06, 2005 - 6:27 pm UTC

Hey, the other thread *started* technical.

Alberto Dell'Era, October 06, 2005 - 6:26 pm UTC

"alter table exchange SEGMENT including indexes"

Some uses:

1) you want to delete 90% of the rows in a table - you just insert-select the other 10% into a twin table (or CTAS it), index, analyze, then exchange the segment(s) back.

Similar to CTAS / drop old / rename, but without having to regrant privileges, recreate triggers, or invalidate dependent cursors and packages.

2) segment reorganization - "alter table move" invalidates indexes, the technique above allows you to (re)build them on the "new" table while allowing concurrent reads of the "old" table.

You may also, say, reorder physically the rows of the table to decrease the clustering factor of one index.

3) 24x7 read interface tables

We currently use this technique:
a) readers read from t
b) we prepare the new data image in t_part, a table with a single partition, identical to t
c) we exchange the table with the partition

NB I've tested that if a reader has a cursor open on t, it can continue fetching across the "exchange partition" - it reads the "old" image of the data (perfect read consistency!).

The "exchange segment" would definitely make it easier to manage (no need to know about partitioning, same script to create the twin table, no need to maintain both t and t_part, etc).
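Steps (a) to (c) above can be sketched like this (all names made up; the twin tables must match column-for-column and index-for-index):

```sql
-- t_part: a single-partition twin of T holding the new data image
create table t_part (x number, y varchar2(30))
partition by range (x)
(partition p_all values less than (maxvalue));

insert /*+ append */ into t_part select * from t_new_image;  -- hypothetical source
create index t_part_idx on t_part (x) local;

-- swap the new image (and its index) into T, near-instantaneously
alter table t_part exchange partition p_all with table t
   including indexes without validation;
```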

Tom Kyte
October 06, 2005 - 7:15 pm UTC

1) that would be an online redefine with a where clause - but it (as anything) will invalidate cursors, have to - plans are different, different data objects to query.

(so, online redefine with where clause is what you are asking for)

2) that is online redefine as it exists

3) careful, that is imperfect read consistency at best, strange things will happen when the space is reused.


commit other session's transactions

Bobcat, October 06, 2005 - 10:50 pm UTC

How about the ability to (force) commit other session's transactions? We can force rollback other session's transaction by killing the session, but no "force commit" capability.

Tom Kyte
October 07, 2005 - 8:21 am UTC

never, I would never even consider running that one up the chain. No way.

How dangerous is that.

Just extending...

Connor, October 06, 2005 - 11:08 pm UTC

To extend my initial one of "transactional truncate" and the later comments by others about atomic schema operations, I suppose we could generalise this to: a transactional DDL facility. It's something that some other databases can do. It would be nice to be able to do (e.g.)

Task:
- create table
- insert into table select * from ...
- create index on table

and if the 'create index' fails, we roll back to the table never having existed. It would make scripting of application installs very easy.


Tom Kyte
October 07, 2005 - 8:23 am UTC

Re: Just extending.

Sai, October 07, 2005 - 4:50 am UTC

I disagree; transactional DDL rollback could compromise database integrity and introduce more failure paths. How many application installs are done, compared to the total number of database transactions?

Imagine a scenario where session 1 is running a transactional DDL script and another session is trying to modify data in a table which is resolved through a public synonym for this user, but the changes start taking place in a table with the same name just created by session 1. This might be avoided by not auto-committing data dictionary modifications after every DDL command, but it's like asking for something to be done the hard way, which can very easily be achieved by other means.

Tom Kyte
October 07, 2005 - 8:35 am UTC

No it wouldn't.

It would increase database integrity and reduce failure paths.




Imagine a scenario where by every night you load two tables with data. You use two schemas to do this (I have just such a situation).

Now, after the data is loaded and validated and indexed and so on in schema1 - you need to repoint two synonyms or recreate two views to point to schema1 (from schema2, where yesterday's data was!)


create or replace synonym t1 for schema1.t1;
create or replace synonym t2 for schema1.t2;


We need those to be atomic, if a failure were to occur in between - that would be "bad", we'd be half pointing to yesterday and half pointing to today.


There are many other examples as well. I'd love to have the ability to run a couple of related DDL statements together as an atomic unit of work - we already have the basic capability to do so:

http://docs.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_6014.htm#sthref5662

needs to be expanded.
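For anyone who has not seen it, the CREATE SCHEMA statement linked above already runs a small set of DDLs as one unit - they all succeed, or none take effect:

```sql
-- all three statements succeed or fail together
create schema authorization scott
   create table t1 (x number primary key)
   create view  v1 as select x from t1
   grant select on v1 to public;
```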

How cool is this!

Vladimir Andreev, October 07, 2005 - 9:33 am UTC

That CREATE SCHEMA is so cool! Pity indeed that it only supports tables, views and grants.

My new thing learned about Oracle for today :-)
Is it new in 10gR2?
Never mind, I'll check the docs myself.

Thanks!
Flado

Tom Kyte
October 07, 2005 - 8:15 pm UTC

it goes way back.

A reader, October 07, 2005 - 9:53 am UTC

So many suggestions in here that you won't need to conduct another brainstorming session in Oracle Corp. before releasing 11, to decide which new features to include

Tom Kyte
October 07, 2005 - 8:36 pm UTC

yes, when it settles down (as it is beginning to) I will be developing a list to send on - already have been asked to

Doh!

Vladimir Andreev, October 07, 2005 - 10:08 am UTC

Turns out CREATE SCHEMA has been around since at least Oracle8i, maybe even 7.3, for it is not mentioned in the "Getting to Know Oracle8i" guide:
http://download-west.oracle.com/docs/cd/A87860_01/doc/server.817/a76962/toc.htm#top

It's still cool, though :-)

Cheers,
Flado

David Aldridge http://oraclesponge.blogspot.com, October 07, 2005 - 10:43 am UTC

>> There are many other examples as well. I'd love to have the ability to run a couple of related DDL statements together as an atomic unit of work - we already have the basic capability to do so: <<

On a similar note I submitted an enhancement request for a new partitioned table operation in which a non-partitioned table can be added to a partitioned table as a new partition, kind of like:

i) add partition
ii) exchange partition with non-partitioned table
iii) drop non-partitioned table

... but as a single operation for the purpose of simplicity of recovery from some kind of failure. However this ability to rollback beyond DDL commit points would also be a great advantage.

More about commit other session's transactions

Bobcat, October 07, 2005 - 12:40 pm UTC

It should certainly be a privilege limited to the DBA role. But I really don't see how dangerous it can be, considering that we can roll back a transaction by killing the session.

Tom Kyte
October 07, 2005 - 9:56 pm UTC

rolling back is always safe (as if it never happened)

committing from a distance is ALWAYS dangerous - you have no clue "where they are"

Alberto Dell'Era, October 07, 2005 - 3:01 pm UTC

> 3) careful, that is imperfect read consistency at best,
> strange things will happen when the space is reused.

Ok, but readers' selects end in at most 10 minutes, and after exchanging the partitions, we keep the "old" data, we don't truncate - next refresh happens after 24 hours.

Do you think we're safe here, since we don't "reuse space" while it's being read - or were you thinking about something else ?

TIA

Tom Kyte
October 08, 2005 - 8:47 am UTC

you are *likely* safe if the space isn't reused, that is what I was thinking - if the space is reused....

although there are cases of dropped tables where even when the space isn't reused - there can be issues.

how about alter table read-only

andrew, October 07, 2005 - 9:06 pm UTC

How about an easy way to prevent any of insert/update/delete and maybe even select to a table?

alter table EMP enable/disable insert|update|delete|select(?);

Trying to control it through tablespaces, triggers, grants etc all have their limitations.

Tom Kyte
October 08, 2005 - 8:49 am UTC

what limitation do they have that this would not?



Re:how about alter table read-only

Sai, October 07, 2005 - 9:34 pm UTC

You can achieve it through triggers. You can also control DML on any table through object-level privileges for other users.
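A sketch of the trigger approach (the EMP table name is just for illustration; note it adds a little overhead to every statement):

```sql
-- statement-level trigger: rejects all DML against EMP
create or replace trigger emp_read_only
before insert or update or delete on emp
begin
   raise_application_error(-20001, 'EMP is currently read-only');
end;
/
```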

Alberto Dell'Era, October 08, 2005 - 12:18 pm UTC

Continuing on:
http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:48704116042682#49216618020078

> 1) online redefine with where clause is what you are asking for

Yes, please consider adding the request for a "where clause" on online reorganization to the Wish List.

Better than what I proposed, online redefine automates the index cloning too - cool.

And it should be very easy to implement too - probably all the Oracle developers need to do is add the user-provided where clause to the MV.

Eh, with a bit of additional effort, the developers could also add automatic management of FKs to dbms_redefinition as well, automating this technique you demonstrated here:
http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:4636779130376#6783811821076
> 2) that is online redefine as it exists

Correct, thanks - with the automatic management of FKs - would be heaven.

> 3) careful, that is imperfect read consistency at best

Ok, assuming that we don't reuse space and that your worries don't apply (no issues so far from production for my implementations of the "24x7 exchange table"), that's the only thing that would benefit from "exchange SEGMENT" :)

Two additional considerations on "exchange SEGMENT" - I think that
a) it would be easy to implement (a partition is a table in disguise, so the code is probably already there, and already tested to destruction in production)
b) it would be a very simple command, very easy to understand - the main characteristics of good tools/bricks.

optional row migration

A reader, October 08, 2005 - 5:19 pm UTC

Allow row movement on non-partitioned heap tables. When creating a heap table, if row movement is specified, then don't migrate rows by leaving a forwarding address. Instead, if the row has to move, immediately move the row to a new address and update any indexes that are pointing to it.

Tom Kyte
October 08, 2005 - 5:50 pm UTC

you mean update all indexes....

I'm not sure I like this. why? because it would "hide" the fact that you are migrating tons of rows. I'd rather

a) find out I had the problem
b) implement a fix for the problem

rather than masking the problem, hiding it - but negatively impacting the performance of modifications without being able to see it very easily.

IOT physical guesses

A reader, October 08, 2005 - 6:08 pm UTC

When creating an index on an IOT, have a "no physical guesses" clause. If "no physical guesses" is requested, then the index won't contain physical guesses.

Physical guesses may be appropriate for DW. However, they're not as useful for busy OLTP. For OLTP databases, dispensing with the physical guesses will make indexes smaller and more efficient.

re: optional row migration

A reader, October 08, 2005 - 6:45 pm UTC

> I'd rather
>
> a) find out I had the problem
> b) implement a fix for the problem

Sometimes there is no good fix to prevent row migration, such as when the initial and final row sizes have a wide variability. Quite common in our database.

> rather than masking the problem, hiding it - but
> negatively impacting the performance of modifications
> without being able to see it very easily.

In general, rows are updated much less frequently than they are read. I'd rather pay the one-time migration price during the update, rather than incur it every time the row is read thereafter.

Again, non-row migration would be optional. The default setting would remain row migration.



Tom Kyte
October 08, 2005 - 8:32 pm UTC

still not convinced it'll make "my" top ten but I hear you.

My wishes - new features of Oracle Database 11

Zlatko Sirotic, October 09, 2005 - 3:09 pm UTC

A) PL/SQL

1. References between transient objects (instances of object types) and (then) a garbage collector

2. Generic classes (templates, generics) as in Eiffel, C++, Java 1.5
(PL/SQL was modeled after Ada 83, and Ada 83 has generic packages)

3. Multiple inheritance as in Eiffel (inner classes as in Java - no, please)

4. Design By Contract as in Eiffel (the C++ / Java 1.4 assert is not enough)


B) Database

Better support for business rules in database

We need "What Not How: The Business Rules Approach to Application Development" (the book by C.J. Date).
Rules Manager, a new feature of Oracle Database 10g Release 2, seems like a good direction, but Oracle also has Oracle Business Rules in the Application Server.
My motto is: "Thou shalt not do in the Middle Tier what thou could have done in the Data Tier" (Toon Koppelaars in "A first JDeveloper project: Choices made, lessons learned", Oracle World 2002).


Regards,
Zlatko Sirotic


... one mo' time ...

Clemens, October 10, 2005 - 4:06 am UTC

Keep coming back to this thread ... ;-)

I did an install [1] of 10g1/g2 SE on one of my linux boxes this weekend and did notice some things.

- why the memory check in 10g2 (with 1GB of memory)? What's the big difference from 10g1? Yes, I know about the "what's new" chapter, but 1GB???
- why the "bloat" in 10g2? ~860 MB for 10g1 vs. 1.2 GB for 10g2
- library hell: libaio, for example, was not needed in 10g1, but now screws up my installation process on 10g2. The same goes for hard-coded lib paths such as libstdc.so.5 [2]

These are all support issues. What I'm questioning is the concept of bloated software, where everything plus the metaphorical coffee machine is included. Maybe 11g could become more "lean and mean", as 10g1 was/is compared to 9i, at least judging by the size of the install CDs. Reduced memory consumption for small/medium servers would also be nice, since we're at the dawn of server virtualization/consolidation (again) - that is, if Oracle is interested in the mass market of small and medium-sized companies.

Aside from this, some more ideas on PL/SQL:

- Synonyms on public functions/procedures inside packages, anyone? (Or did I miss that in the newer releases?)
- Already mentioned: variable IN lists with arrays. Table/pipelined functions are nice, but that one would save some coding.
- ALTER PROCEDURE/FUNCTION/PACKAGE MONITOR - so you could find dead/unused code in large applications.
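The workaround alluded to in the second point can be sketched like this (9i+; the SCOTT.EMP demo table and the literal values are just for illustration - in practice you would bind a single collection):

```sql
-- one collection stands in for a variable-length IN list;
-- sys.odciNumberList is a predefined SQL collection type
select *
  from scott.emp
 where empno in (select column_value
                   from table(sys.odciNumberList(7369, 7499, 7521)));
```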

C.

[1] The reason for my installation was to try out the c't db challenge - a web shop with 1G/10G/100G data.

[2] I only recall the xxxx_so.5 and the /usr/lib path; even with LD_LIBRARY_PATH it did not take /usr/lib/libstdc++v3 into account in its relink stage.

perl inside the database

Stefan, October 10, 2005 - 9:43 am UTC

I'd love to have a perl interpreter inside the database, to be able to do something like

CREATE OR REPLACE PERL SOURCE ....


Stefan

Compressed table

Anto, October 10, 2005 - 9:54 am UTC

a) How about table indexes remaining valid after compressing a table - or is that already available in 10g?

b) I remember at least in Oracle 8, Oracle optimizer does not take table stats into account for remote tables accessed through dblink, for deriving the explain plan of distributed queries. Is that the case still now ?



Tom Kyte
October 10, 2005 - 11:17 am UTC

a) with partitioned tables - yes...


ops$tkyte@ORA10G> create table t (object_id, object_name)
  2  partition by range(object_id)
  3  (
  4  partition all_data values less than(MAXVALUE)
  5  )
  6  as
  7  select object_id, object_name from all_objects;
 
Table created.
 
ops$tkyte@ORA10G>
ops$tkyte@ORA10G> create index t_idx on t(object_id);
 
Index created.
 
ops$tkyte@ORA10G>
ops$tkyte@ORA10G> select object_name, object_type, status from user_objects;
 
OBJECT_NAME                    OBJECT_TYPE         STATUS
------------------------------ ------------------- ----------
T                              TABLE PARTITION     VALID
 
T                              TABLE               VALID
 
T_IDX                          INDEX               VALID
 
 
ops$tkyte@ORA10G> alter table t compress;
 
Table altered.
 
ops$tkyte@ORA10G> alter table t move partition all_data update indexes;
 
Table altered.
 
ops$tkyte@ORA10G> select object_name, object_type, status from user_objects;
 
OBJECT_NAME                    OBJECT_TYPE         STATUS
------------------------------ ------------------- ----------
T                              TABLE PARTITION     VALID
 
T                              TABLE               VALID
 
T_IDX                          INDEX               VALID
 



b) the cbo always had the ability to use remote stats?  not sure what you mean. 

Anto, October 10, 2005 - 12:01 pm UTC

a) Does the "update indexes" option for "alter table ... move" work only for partitioned tables, or is it available for ordinary tables as well?

b) I thought that the CBO in Oracle 8 never took into account statistics on remote tables for distributed or remote queries. That was the impression I got after opening a TAR for some related issue way back in 1999. So I was wrong.

thanks

Tom Kyte
October 10, 2005 - 12:12 pm UTC

a) as stated "with partitioned tables - yes.."....




anto, October 10, 2005 - 12:05 pm UTC

for (b), we were using Oracle Apps at that time, and hence we may have been using the RBO instead of the CBO

Thanks for confirming this



Re: how about alter table read-only

andrew, October 10, 2005 - 3:46 pm UTC

> How about an easy way to prevent any of
> insert/update/delete and maybe even
> select to a table?
>
> alter table EMP enable/disable insert|update|delete|select(?);
>
> Trying to control it through tablespaces, triggers, grants etc all have their
> limitations.
>
> Followup:
> what limitation they have that this would not?

Altering the tablespace containing the table to read-only is hardly an acceptable way to control individual tables.
Creating (and dropping) triggers - or adding logic to triggers to do some lookup - is messy and has overhead.
Grants are fine - as long as the app doesn't connect as the table owner.

Just a simple alter table command would be clean and any novice could understand it.

Oracle optimizer, Stats and DB Links

Gary, October 10, 2005 - 7:47 pm UTC

"I remember at least in Oracle 8, Oracle optimizer does not take table stats into account for remote tables accessed through dblink, for deriving the explain plan of distributed queries. Is that the case still now ?"

Tables work fine. Views can be more complex because they don't have stats themselves and (at least up to 9iR2) the remote db doesn't do much in the way of 'resolving' the view to guess stats. Also, the remote db won't tell the local db about any indexes on the tables underlying the view. [Frankly, it would be pretty difficult for Oracle to do any of this. To be comprehensive about it, it would need to tell the originating database the view query, stats and index information for all the tables and non-oracle functions involved etc. And for that to really work, you'd probably need some closer tie-in between the versions of Oracle running on both databases.]

I've got some simple scripts demonstrating the issues :
http://igor-db.blogspot.com/2005/09/views-db-links-and-cbo.html
and
http://igor-db.blogspot.com/2005/09/views-db-links-cbo-and-indexes.html


some of my wishes ...

Mauro Papandrea, October 11, 2005 - 3:17 am UTC

1) SQL: currently it is not possible to manage db links for other users.
I would like something like this:
create/drop database link ... OWNED BY scott, with a CREATE/DROP ANY DATABASE LINK privilege

2) the possibility to use aliases everywhere, not only in the ORDER BY clause; it is a simple matter of search/replace that should be very easy to implement and would improve readability very much

3) a RESERVED_PROCESS init parameter reserved for the SYSTEM account (or for an additional privilege, even better): when the maximum number of processes is reached due to some error, even admins are no longer allowed to connect and the db gets kind of frozen

4) the possibility of seeing the values of bind variables on the fly, and the environments of other sessions (there is no way to see if a session has altered some parameters)

5) as of 9i it is possible to remove datafiles when dropping tablespaces, but "including contents" must be used; I cannot understand why: it is potentially dangerous.
I'd rather remove this restriction

6) Sqlplus copy command supporting new datatypes

7) update cascade

8) In Metalink, searching with "exact sentence" retrieves many docs where that sentence does not appear

9) better testing: for instance, the master script generated by dbca is written for the Bourne shell but with mixed C-shell syntax - had it ever been run even once, it would have been easy to fix. Also, 9.2.0.7 does not allow dropping a user, and that is really a big one ...

Best regards



Silly me ...

Clemens, October 11, 2005 - 10:56 am UTC

- ALTER PROCEDURE/FUNCTION/PACKAGE MONITOR - so you could find dead/not used code in large applications.

=> is already solved through:

SELECT object_name, timestamp
FROM dba_objects
WHERE object_type LIKE 'PACKAGE%'
ORDER BY timestamp;

C.

Tom Kyte
October 11, 2005 - 3:40 pm UTC

that doesn't tell you when it was last used.

Re: Clemens comments

andrew, October 11, 2005 - 12:19 pm UTC

Well if it's a dynamic call, then monitoring on the package/function/proc/sequence would still be useful.

Read locks in Oracle!

Brian Camire, October 11, 2005 - 1:13 pm UTC

Seriously, it would be helpful to have a more flexible row locking mechanism -- sometimes SELECT...FOR UPDATE is too restrictive.

Specifically, it would be helpful to have a way to lock a row that blocks other transactions that try to update it, but does not (unlike SELECT...FOR UPDATE) block other transactions that try to lock it in the same (or a "compatible") way.

It would be like saying:

"I'm depending on these rows not to change for the rest of this transaction. It's OK to let other transactions do the same, but not to let other transactions update these rows."

It might look like SELECT...IN <lock mode> MODE, where <lock mode> might correspond (in meaning, if not name) to the lock modes supported by LOCK TABLE.

SELECT...IN EXCLUSIVE MODE would be synonymous with SELECT...FOR UPDATE.

I know you can emulate this using DBMS_LOCK, but it would be easier if it were more declarative.
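A rough sketch of the DBMS_LOCK emulation mentioned above - the lock name, key value, and timeout are illustrative; in practice the lock name would be derived from the row's primary key:

```sql
declare
  l_handle varchar2(128);
  l_status integer;
begin
  -- derive a named user lock from the row's key (name is illustrative)
  dbms_lock.allocate_unique('EMP_ROW_7369', l_handle);

  -- S mode is compatible with other S holders but blocks X requests,
  -- giving the "shared read lock" semantics described above
  l_status := dbms_lock.request(lockhandle        => l_handle,
                                lockmode          => dbms_lock.s_mode,
                                timeout           => 10,
                                release_on_commit => true);
  if l_status <> 0 then
    raise_application_error(-20001, 'row lock failed, status = ' || l_status);
  end if;
end;
/
```

Other sessions requesting the same handle in S mode succeed immediately; an X-mode request waits until all S holders commit.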

Dependencies

anto, October 11, 2005 - 2:01 pm UTC

To show the dependencies(referenced_name in user_dependencies) even if the objects are used within execute immediate statements in procedures/packages

Tom Kyte
October 11, 2005 - 3:52 pm UTC

that is impossible. think about it.


execute immediate string;


execute immediate string1 || ' emp ' || string2;


and if the string is "immutable" - so it could be done, well, then one asks "why dynamic sql"???

sql 'macros'

lh, October 11, 2005 - 3:00 pm UTC

1) In order to write modular code it is sometimes useful to encapsulate complex sql-functions (e.g replace, upper, translate stacks or case statements) into
stored functions or packages.

Referencing these functions is, however, slower than a (less modular) straight SQL expression (there has already been discussion about the context switch between the PL/SQL engine and SQL).

The C language has syntax for macros. Something like that, so that one could:
create or replace macro example (p_param) is
replace(upper(substr(p_param,1,20)),'XXX','YYY');

and one could use this in SQL statements:
select
field1,
field2,
example(field3)
from
table1
;

2) Enhancing ORA-600 messages so that they give more information - something like the number of the bug causing the error.
Wouldn't that be a great and beneficial feature?
(perhaps some minor difficulties in implementing...)

Tom Kyte
October 11, 2005 - 3:54 pm UTC

1) already noted above..

2) if we knew the bug, we would not have thrown the error - the error is due to a bug. Very much chicken and egg here.

separation of application server and database server

lh, October 11, 2005 - 3:11 pm UTC

For license, maintenance and security reasons it is sometimes useful to dedicate a server just to the database.

Applications would run on different machines, connecting to the database using SQL*Net.

However, the application/client can use the UTL_FILE package, which writes on the database server. Also, "alter session set sql_trace = true" produces trace files on the database server.

Could these files somehow be directed to the calling client's machine? Automatic writing over SQL*Net?

Tom Kyte
October 11, 2005 - 3:54 pm UTC

that would be very virus like.

Mount the file systems to the server and you can do this, but you have to take the initiative to do that.

sql 'macros'

lh, October 11, 2005 - 3:33 pm UTC

When reading more of these suggestions I found that Gary has already suggested same kind of functionality.

So I am not alone in missing something like this.

Wish list

Pet, October 11, 2005 - 4:22 pm UTC

Built-in scheduler and event-based job kickoff (enhance DBMS_JOB - for example, start a PL/SQL procedure when files arrive in an OS directory), so that we don't need to use cron.
Built-in basic workflow (data goes from one table to another, waits for a user action, and returns to the same table with an updated status, or moves to other tables).
Autoformat of results and a row limit in the SQL*Plus client (like SET ROWLIMIT 50, so that all subsequent queries stop after 50 rows) - it is very hard to cancel a query in the SQL*Plus client.
Named SQL (type a select statement and save it under a name in the database, instead of in an OS file) so you can re-use the same statement later; the namespace of named SQL should be the current schema.


Tom Kyte
October 12, 2005 - 6:38 am UTC

someone would have to POLL the directory - so just schedule the job to run periodically and scan the directory (I've a java stored procedure on this site to get a list of files in a directory).


I think AQ is built in basic workflow (to build your workflows with)
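A sketch of the periodic-scan approach described above, using DBMS_JOB. The procedure name process_incoming_files is hypothetical - it would stand in for code that lists and processes the directory's files:

```sql
declare
  l_job number;
begin
  dbms_job.submit(job       => l_job,
                  what      => 'process_incoming_files;',  -- hypothetical procedure
                  next_date => sysdate,
                  interval  => 'sysdate + 5/1440');        -- re-run every 5 minutes
  commit;  -- the job becomes visible to the job queue on commit
end;
/
```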




something between shutdown immediate and abort.

nowhere man, October 11, 2005 - 4:59 pm UTC

shutdown immediate defer long transaction rollback.

Nice to have...

Chris, October 11, 2005 - 9:44 pm UTC

A few wishes off the top of my head...

a. Being able to join database tables to pl/sql tables in pl/sql code? A friend of mine swears blind he saw this listed as a feature in 10g (but he can't find it now).

b. I would like to be able to 'drop tablespace including datafiles' - with an exception being raised if there were any contents. At the moment you can only specify 'including contents and datafiles' together - which can be dangerous.

c. The ability to drop individual datafiles from a tablespace (if empty).

d. User-defined pfile/spfile parameters. For example, these could be used to automatically suppress certain features of a database if it were a copy of production. This would save having to remember to set values in a database table after a refresh of a test database from production.

Eg: '!my_special_parameter = 123'

From the previously mentioned features in this thread I'd like to add my vote to:

1. Defaulting a column to sequence.nextval (or sequence.currval).

2. Constraints with error messages associated (not just check constraints either). It might also be useful if you could include parameters in the message which would be replaced by the value that a column was being set to.

Eg: '$1 is not a valid cost code'

3. Allowing comments on everything (OK - boring but handy).

4. Rename user / schema. This would be good for testing - to be able to swap back and forward between alternate versions of data quickly.

5. Allowing in-line SELECTs in PL/SQL such as:

v_variable := NVL((SELECT ...), 'Unknown');

IF EXISTS(SELECT NULL FROM emp WHERE ...)

IF NVL((SELECT emp_id FROM emp ...), 0)

6. ON COMMIT triggers.


Tom Kyte
October 12, 2005 - 7:09 am UTC

a) you can join to collections - and have been able to since 8i....

b) if you use OMF.... that happens already with just drop tablespace.

c) 10gr2 does that.
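On wish 5, the closest thing available today inside a SQL statement is a scalar subquery; a sketch against the standard SCOTT schema:

```sql
-- NVL around a scalar subquery: 'Unknown' when no matching row exists
select e.ename,
       nvl((select d.dname
              from dept d
             where d.deptno = e.deptno), 'Unknown') as dname
  from emp e;
```

In PL/SQL proper you still need a SELECT INTO with a NO_DATA_FOUND handler to get the same effect.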


for previous poster(s)

Connor, October 12, 2005 - 5:59 am UTC

a) select on plsql table

select * from table(plsql_table_type)

is already there.

b) alter table read only

Isn't this just:

create or replace trigger emp_read_only
before insert or update or delete on emp  -- EMP used as an example table
declare
  fake_privs_msg exception;
  pragma exception_init(fake_privs_msg, -942);
begin
  raise fake_privs_msg;
end;
/

I'm not sure what the "overhead" of such a trigger is

Cheers
Connor

Bitmap Join Index - make the more robust (and fix some bugs)...

Philip Moore, October 12, 2005 - 10:16 am UTC

Hi Tom,

I was just thinking of a feature that I believed to be very promising when I first heard of it - Bitmap Join Indexes. After some experience, however, I find them a little less useful.

The first thing is that a join-back is still required if I reference the indexed dimension table column in the SELECT clause (oddly enough, it works fine if I only mention the column in the WHERE clause).

Example:
philip@ORA9iR2 -> DROP TABLE products CASCADE CONSTRAINTS
2 /

Table dropped.

philip@ORA9iR2 -> DROP TABLE facts
2 /

Table dropped.

philip@ORA9iR2 -> CREATE TABLE products( product_id VARCHAR2(15)
2 , product_name VARCHAR2(30)
3 , CONSTRAINT product_pk PRIMARY KEY (product_id)
4 )
5 /

Table created.

philip@ORA9iR2 -> CREATE TABLE facts ( product_id VARCHAR2(15)
2 , sales_date DATE
3 , sale_in_dollars NUMBER
4 , CONSTRAINT facts_pk PRIMARY KEY (product_id, sales_date)
5 , CONSTRAINT facts_fk_products FOREIGN KEY (product_id) REFERENCES products (product_id)
6 )
7 /

Table created.

philip@ORA9iR2 -> CREATE BITMAP INDEX facts_product_bjix ON facts (products.product_name)
2 FROM products
3 , facts
4 WHERE products.product_id = facts.product_id
5 /

Index created.

philip@ORA9iR2 -> EXECUTE DBMS_STATS.GATHER_TABLE_STATS (ownname=>USER, tabname=>'PRODUCTS');

PL/SQL procedure successfully completed.

philip@ORA9iR2 -> EXECUTE DBMS_STATS.GATHER_TABLE_STATS (ownname=>USER, tabname=>'FACTS');

PL/SQL procedure successfully completed.

philip@ORA9iR2 -> SET autotrace traceonly EXPLAIN ;
philip@ORA9iR2 -> -- Here the index is used appropriately...
philip@ORA9iR2 -> SELECT SUM(facts.sale_in_dollars)
2 FROM facts
3 , products
4 WHERE facts.product_id = products.product_id
5 AND products.product_name = 'Tom Kyte''s BBQ'
6 /

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=1 Bytes=22)
1 0 SORT (AGGREGATE)
2 1 TABLE ACCESS (BY INDEX ROWID) OF 'FACTS' (Cost=2 Card=1 Bytes=22)
3 2 BITMAP CONVERSION (TO ROWIDS)
4 3 BITMAP INDEX (SINGLE VALUE) OF 'FACTS_PRODUCT_BJIX'



philip@ORA9iR2 -> /* Oracle already did the work to join the FACTS and PRODUCTS table as well as catalog each unique
DOC> entry of PRODUCT_NAME (for storage in the Bitmap Join Index).
DOC> So why the join back to the PRODUCTS table? Why not just use the bitmap join index entry
DOC> to bring back the PRODUCT_NAME? */
philip@ORA9iR2 -> SELECT products.product_name
2 , SUM(facts.sale_in_dollars)
3 FROM facts
4 , products
5 WHERE facts.product_id = products.product_id
6 AND products.product_name = 'Tom Kyte''s BBQ'
7 GROUP BY products.product_name
8 /

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=3 Card=1 Bytes=48)
1 0 SORT (GROUP BY NOSORT) (Cost=3 Card=1 Bytes=48)
2 1 NESTED LOOPS (Cost=3 Card=1 Bytes=48)
3 2 TABLE ACCESS (BY INDEX ROWID) OF 'FACTS' (Cost=2 Card=1 Bytes=22)
4 3 BITMAP CONVERSION (TO ROWIDS)
5 4 BITMAP INDEX (SINGLE VALUE) OF 'FACTS_PRODUCT_BJIX'
6 2 TABLE ACCESS (BY INDEX ROWID) OF 'PRODUCTS' (Cost=1 Card=1 Bytes=26)
7 6 INDEX (UNIQUE SCAN) OF 'PRODUCT_PK' (UNIQUE)

--------------------

The second thing is that if I have two bitmap join indexes on the fact table (which is partitioned), I can't exchange a partition - this bug has been around since bitmap join index inception, and even exists in Oracle 10g Release 2 (I verified it there)!!!

Example:
philip@ORA9iR2 -> DROP TABLE products CASCADE CONSTRAINTS
2 /

Table dropped.

philip@ORA9iR2 -> DROP TABLE geographies CASCADE CONSTRAINTS
2 /

Table dropped.

philip@ORA9iR2 -> DROP TABLE facts
2 /

Table dropped.

philip@ORA9iR2 -> DROP TABLE facts_swap
2 /

Table dropped.

philip@ORA9iR2 -> CREATE TABLE products( product_id VARCHAR2(15)
2 , product_name VARCHAR2(30)
3 , CONSTRAINT products_pk PRIMARY KEY (product_id)
4 )
5 /

Table created.

philip@ORA9iR2 -> CREATE TABLE geographies ( geography_id VARCHAR2(15)
2 , geography_name VARCHAR2(30)
3 , CONSTRAINT geographies_pk PRIMARY KEY (geography_id)
4 )
5 /

Table created.

philip@ORA9iR2 -> -- Here's our partitioned fact table...
philip@ORA9iR2 -> CREATE TABLE facts ( product_id VARCHAR2(15)
2 , geography_id VARCHAR2(15)
3 , sales_date DATE
4 , sale_in_dollars NUMBER
5 , CONSTRAINT facts_pk PRIMARY KEY (product_id, geography_id, sales_date) USING INDEX LOCAL
6 , CONSTRAINT facts_fk_products FOREIGN KEY (product_id) REFERENCES products (product_id)
7 , CONSTRAINT facts_fk_geographies FOREIGN KEY (geography_id) REFERENCES geographies (geography_id)
8 )
9 PARTITION BY RANGE (sales_date)
10 (PARTITION p_only VALUES LESS THAN (MAXVALUE))
11 /

Table created.

philip@ORA9iR2 -> CREATE BITMAP INDEX facts_product_bjix ON facts (products.product_name)
2 FROM products
3 , facts
4 WHERE products.product_id = facts.product_id
5 LOCAL
6 /

Index created.

philip@ORA9iR2 -> CREATE BITMAP INDEX facts_geography_bjix ON facts (geographies.geography_name)
2 FROM geographies
3 , facts
4 WHERE geographies.geography_id = facts.geography_id
5 LOCAL
6 /

Index created.

philip@ORA9iR2 -> -- Now for the "swap table"...
philip@ORA9iR2 -> CREATE TABLE facts_swap ( product_id VARCHAR2(15)
2 , geography_id VARCHAR2(15)
3 , sales_date DATE
4 , sale_in_dollars NUMBER
5 , CONSTRAINT facts_swap_pk PRIMARY KEY (product_id, geography_id, sales_date)
6 , CONSTRAINT facts_swap_fk_products FOREIGN KEY (product_id) REFERENCES products (product_id)
7 , CONSTRAINT facts_swap_fk_geographies FOREIGN KEY (geography_id) REFERENCES geographies (geography_id)
8 )
9 /

Table created.

philip@ORA9iR2 -> CREATE BITMAP INDEX facts_swap_product_bjix ON facts_swap (products.product_name)
2 FROM products
3 , facts_swap
4 WHERE products.product_id = facts_swap.product_id
5 /

Index created.

philip@ORA9iR2 -> CREATE BITMAP INDEX facts_swap_geography_bjix ON facts_swap (geographies.geography_name)
2 FROM geographies
3 , facts_swap
4 WHERE geographies.geography_id = facts_swap.geography_id
5 /

Index created.

philip@ORA9iR2 -> -- Drum Roll:
philip@ORA9iR2 -> ALTER TABLE facts
2 EXCHANGE PARTITION p_only WITH TABLE facts_swap
3 INCLUDING INDEXES WITH VALIDATION
4 /
EXCHANGE PARTITION p_only WITH TABLE facts_swap
*
ERROR at line 2:
ORA-14098: index mismatch for tables in ALTER TABLE EXCHANGE PARTITION


philip@ORA9iR2 -> -- Now drop the Bitmap Join Indexes...
philip@ORA9iR2 -> DROP INDEX facts_swap_product_bjix
2 /

Index dropped.

philip@ORA9iR2 -> DROP INDEX facts_swap_geography_bjix
2 /

Index dropped.

philip@ORA9iR2 -> DROP INDEX facts_product_bjix
2 /

Index dropped.

philip@ORA9iR2 -> DROP INDEX facts_geography_bjix
2 /

Index dropped.

philip@ORA9iR2 -> -- Now retry the swap...
philip@ORA9iR2 -> ALTER TABLE facts
2 EXCHANGE PARTITION p_only WITH TABLE facts_swap
3 INCLUDING INDEXES WITH VALIDATION
4 /

Table altered.

-------------------------

Finally - could you kindly share your top ten thus far?

Thanks.

Phil

standard proc for compiling invalid objects in a schema

Anto, October 12, 2005 - 10:30 am UTC

What about an Oracle-supplied procedure for compiling all invalid objects in a schema based on dependencies (similar to what we have in TOAD - compile all invalid objects in a schema), rather than going through the pain of writing our own procedures?

Tom Kyte
October 12, 2005 - 2:06 pm UTC

utlrp/utlirp.sql exist for the database.

me, i just let them be, they'll take care of themselves.

A reader, October 12, 2005 - 3:05 pm UTC

What about those developers who don't have access to those files ?

Anto

Tom Kyte
October 13, 2005 - 9:35 am UTC

me, i just let them be, they'll take care of themselves.

dave, October 12, 2005 - 6:41 pm UTC

dbms_utility.compile_schema()

Tom Kyte
October 13, 2005 - 10:23 am UTC

that is true, I forgot about that one - it had issues in some 8i releases with a CONNECT BY loop in a view, but that should have been fixed in the latest 817 releases.
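For completeness, a minimal example of the call just mentioned (the schema name is illustrative):

```sql
-- recompiles the schema's invalid objects; the database resolves the order
begin
  dbms_utility.compile_schema(schema => 'SCOTT');
end;
/
```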


Clemens, October 13, 2005 - 4:47 am UTC

- ALTER PROCEDURE/FUNCTION/PACKAGE MONITOR - so you could find dead/not used
code in large applications.

=> is already solved through:

SELECT object_name, timestamp
FROM dba_objects
WHERE object_type LIKE 'PACKAGE%'
ORDER BY timestamp;

C.


Followup:
that doesn't tell you when it was last used.

------------------------------------------------------
Ok, so I wasn't that far off with my "wish". Just being curious, what does Oracle mean by

http://download-west.oracle.com/docs/cd/B14117_01/server.101/b10755/statviews_1102.htm#sthref1294

TIMESTAMP VARCHAR2(20)
Timestamp for the _specification_ of the object (character data)

Regards
C.

Tom Kyte
October 13, 2005 - 10:40 am UTC

for timestamp checking with remote PL/SQL (Forms/Reports). "Specification" means, for example, the package specification.

An Oracle newbie's experience

Andy Mackie, October 13, 2005 - 7:14 am UTC

As someone who has recently moved to Oracle from "the other DBMS" (SQL Server), here's my experience:

1. Development tools - impossible to use Oracle without 3rd party tools such as Toad or PL/SQL Developer. iSQL*Plus, or SQL*Plus and notepad ? Need I say more...

2. 30 character limit for names - AAGGHH!

3. Can't select CLOB columns across a database link ("ORA-22992: cannot use LOB locators selected from remote tables") - need to resort to workarounds like creating temp.tables.

4. XML Schema evolution - extremely limited (discussion of copyEvolve limitations at http://forums.oracle.com/forums/message.jspa?messageID=1051365 ).

5. DHCP environments - when installing on developer workstations, requires a fixed IP address, needing loopback adapters setup.

6. UPPERCASE COLUMN NAMES ONLY UNLESS YOU "Start putting everything in quotes". Jeez.

7. Enterprise Manager (either web or Java UI) - sloooowww...lacks the slickness of modern GUI's.

8. Extremely heavy footprint. Just how many GB's of RAM do I need on my workstation to run Oracle and my development tools at the same time ?

9. Case sensitivity mixture - data comparisons are case sensitive, but the SQL itself is case insensitive.

My initial impression is that Oracle may have a good database engine, but the interface between me and the database leads to an extremely frustrating experience. Everything just seems so much more difficult to achieve. Oracle really needs to improve its usability significantly.

Grants for all tables of a schema

Kim Berg Hansen, October 13, 2005 - 9:04 am UTC

A small enhancement I just wish I had today:

GRANT SELECT ON SCOTT.* TO JONES

At present I can do two things only:
GRANT SELECT ANY TABLE TO JONES
or
GRANT SELECT <one! table> TO JONES

Sure I can quickly whip up a script to GRANT SELECT on all Scott's tables to Jones, but when Scott creates a new table, that has to be explicitly granted too.

My reason is that I would like to create some users who have a sort of "READ ONLY" access to a particular schema - including any new tables that will come.

Similarly it would be nice with something like:
GRANT EXECUTE ON SCOTT.* TO JONES

And there are probably other grants that would be nice to be able to do on a schema-level rather than just either globally or object-level.


linker error : _sqlcxt module

sohli, October 13, 2005 - 10:16 am UTC

hey
i m a newbie
i didnt get the proper idea of what
u were saying could u please explain in details
thank you


Tom Kyte
October 13, 2005 - 11:04 am UTC

nt rly

to Andy in England

Jim, October 13, 2005 - 11:42 am UTC

Items 6,8,9. In 10G you can have case insensitive searches. Also you can make the database take up less RAM. (SQL server takes all RAM available unless you specifically limit it.) Also in other databases the tools automatically quote the table and column names so they rely on mixed case object names. I like not having to quote every darn thing just to query things. I don't want to have mixed case object names. Do you really want an Employee table that is different from the EMPLOYEE table? UGH.

GRANT SELECT ON SCOTT.* TO JONES

Alex, October 13, 2005 - 11:48 am UTC

A small enhancement I just wish I had today:

GRANT SELECT ON SCOTT.* TO JONES


Have you tried CREATE TRIGGER on schema level for handling DDL events ?
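A sketch of that approach, assuming the SCOTT/JONES example from the earlier post. Note that DDL cannot be issued directly from inside a DDL trigger, so the grant is deferred via a job:

```sql
create or replace trigger scott.grant_new_tables
after create on scott.schema
declare
  l_job number;
begin
  if ora_dict_obj_type = 'TABLE' then
    -- defer the grant: issuing DDL directly inside a DDL trigger is not allowed
    dbms_job.submit(l_job,
        'execute immediate ''grant select on scott."' ||
        ora_dict_obj_name || '" to jones'';');
  end if;
end;
/
```

Any table SCOTT creates is then granted to JONES shortly after the creating transaction commits.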



For Alex

Andrew, October 13, 2005 - 12:06 pm UTC

A little generic, but you get the idea...

BEGIN
  FOR x IN ( SELECT owner,
                    object_name,
                    DECODE(object_type, 'TABLE',    'select',
                                        'SEQUENCE', 'select',
                                        'VIEW',     'select',
                                                    'execute') AS privs,
                    DECODE(owner, 'SCOTT', 'jones') AS app_user
               FROM dba_objects
              WHERE object_type IN ('TABLE', 'PACKAGE', 'PROCEDURE',
                                    'FUNCTION', 'SEQUENCE', 'VIEW')
                AND owner IN ('SCOTT') )
  LOOP
    EXECUTE IMMEDIATE 'grant ' || x.privs || ' on ' || x.owner ||
                      '.' || x.object_name || ' to ' || x.app_user;

    EXECUTE IMMEDIATE 'create or replace synonym ' || x.app_user || '.' ||
                      x.object_name || ' for ' || x.owner || '.' || x.object_name;
  END LOOP;
END;
/


A reader, October 13, 2005 - 12:06 pm UTC

What about those developers who don't have access to those files ?

Anto


Followup:
me, i just let them be, they'll take care of themselves.

You mean the invalid objects will take care of themselves (will become valid, if they can) the next time someone runs them?

Not if there are 2 or 3 levels of dependencies.
If some object A is dependent on object B, which is invalid (due to some object C being invalid), they need to be compiled in the order C, B, A.



Tom Kyte
October 13, 2005 - 1:17 pm UTC

yes, they will become valid - the database will "fix them" all by itself.

Not if there are 2 or 3 levels of dependencies .

Kamal Kishore, October 13, 2005 - 12:57 pm UTC

Is this what you mean?

SQL> create table t as select * from dual ;

Table created.

SQL>
SQL> create or replace procedure proc_c is
  2  begin
  3    for rec in (select * from t)
  4    loop
  5      null ;
  6    end loop ;
  7  end ;
  8  /

Procedure created.

SQL> create or replace procedure proc_b is
  2  begin
  3    proc_c ;
  4  end ;
  5  /

Procedure created.

SQL> create or replace procedure proc_a is
  2  begin
  3    proc_b ;
  4  end ;
  5  /

Procedure created.

SQL> select object_name, status from user_objects where object_name LIKE 'PROC%' ;

OBJECT_NAME                     STATUS
------------------------------- -------
PROC_A                          VALID
PROC_B                          VALID
PROC_C                          VALID

3 rows selected.

SQL> rename t to t1 ;

Table renamed.

SQL> rename t1 to t ;

Table renamed.

SQL> select object_name, status from user_objects where object_name LIKE 'PROC%' ;

OBJECT_NAME                     STATUS
------------------------------- -------
PROC_A                          INVALID
PROC_B                          INVALID
PROC_C                          INVALID

3 rows selected.

SQL> exec proc_a ;

PL/SQL procedure successfully completed.

SQL> select object_name, status from user_objects where object_name LIKE 'PROC%' ;

OBJECT_NAME                     STATUS
------------------------------- -------
PROC_A                          VALID
PROC_B                          VALID
PROC_C                          VALID

3 rows selected.

SQL>
 

2 or 3 or more levels of dependency

A reader, October 13, 2005 - 1:42 pm UTC

What if the objects involved are materialized views and not procedures ? Will Oracle fix all the dependent MVs in the correct order ?

Anto

Tom Kyte
October 13, 2005 - 4:16 pm UTC

define "fix", what is "broken" about them

Join Conditions

A reader, October 13, 2005 - 2:37 pm UTC

I hate having to explicitly write join conditions
that effectively duplicate foreign key information
already stored in the database. In 90% of the queries
I write the join conditions are a redundant expression
of pre-defined foreign key constraints.

For example:

create table table_a
( a_id number,
  a_name varchar2(10),
  primary key(a_id)
);

create table table_b
( b_id number,
a_id_fk number references table_a,
b_name varchar2(10)
);

create table table_c
( c_id number,
b_id_fk number references table_b,
c_name varchar2(10)
);

-- instead of this

select
  a.a_name, b.b_name, c.c_name
from
  table_a a, table_b b, table_c c
where
  a.a_id = b.a_id_fk and
  b.b_id = c.b_id_fk and
  ...
;

-- or this ugly ANSI SQL construct

select
a.a_name, b.b_name, c.c_name
from
table_a a inner join
table_b b inner join
table_c c
on ( b.b_id = c.b_id_fk )
on ( a.a_id = b.a_id_fk )
where
...
;

-- something like this would be nice;
-- it is fast and easy to write and others
-- can clearly follow the logic with minimal effort

select
a.a_name, b.b_name, c.c_name
from
table_a a,
table_b b,
table_c c
join
a inner join b,
b inner join c
where
...
;

Oracle invested huge amounts to develop a complex cost based
optimizer to pre-process queries. I'm surprised that, after
all these decades, they haven't invested a small fraction of
that amount to make their parser smart enough to know how to join
two tables together simply using constraint information already
available in its data dictionary.

When you have a situation where you need joins that don't
follow the pre-defined foreign key constraints, the
existing syntax will do. In a well designed database,
these cases should definitely be the minority however.

I'm aware of the "USING (column_name)" clause. This
construct comes close, but it has a few deficiencies
that cancel out its usefulness as a time-saving syntax
construct:

1. you can't use it when the primary key and foreign
key column names differ

2. it precludes the use of shortcuts like "a.*" in the
SELECT clause

3. you have to remember to treat the join columns
differently when you reference them in other
parts of the SELECT clause; i.e. if you are in
the habit of prefixing all your column names with
table aliases you have to remember not to do this
for columns mentioned in the USING clause.
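To illustrate point 3, a small example of the USING clause against the standard SCOTT schema - the join column must appear unqualified everywhere:

```sql
select deptno,        -- no alias prefix allowed on the USING column
       d.dname,
       e.ename
  from emp e join dept d using (deptno)
 where deptno = 10;   -- writing e.deptno here raises an error
```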



Andy

DaPi, October 13, 2005 - 2:37 pm UTC

"5. DHCP environments - when installing on developer workstations, requires a
fixed IP address, needing loopback adapters setup."

Can't you get round this by using IPC (Inter-Process Connect/Communication) ?

DHCP - yes

Tom Fox, October 13, 2005 - 4:59 pm UTC

Sure, you can use DHCP. I run 9i and 10gR2 on my box with a DHCP address and no loopback installed. I get errors during the installation wondering about my hostname and IP, but I hit continue and it functions normally. Of course, I don't need other people on the network to connect to me so this is fine.

Things to add..

Rob, October 13, 2005 - 4:59 pm UTC

1. Would love the ability to display user sessions and sort them by the amount of archive logging they have generated.

2. Would love a utility to BLAST updates to multi-million-row tables extremely fast. I have used a couple of methods - hints, updating the stats on the object, etc. Just a great "method" that saves me the time of having to play with the different approaches to find the fastest.

3. Built-in virus scanning of uploaded attachments in the database. (Can I say Mcafee?)

4. Cross database-link explain plans with more detail.

Those would be my first requests off the top of my head.

RMAN-like commands in sqlplus would be helpful too

Yegna, October 13, 2005 - 7:02 pm UTC

Like in RMAN if I could do the following in sqlplus that would be great..

List tablespace (to list all the tablespaces with size)
list constraints for table schema.tname (to list the constraints of the table)
..
..

What I don't like...

Toon Koppelaars, October 17, 2005 - 4:01 am UTC

Fully agree with Zlatko a few posts ago.

What I don't like about Oracle (or any other DBMS out there) is the lack of more support for data integrity constraints (aka business rules). We need much more than just CHECK, PRIMARY KEY, UNIQUE and FOREIGN KEY.

Support for the CREATE ASSERTION ANSI SQL construct would be EXTREMELY useful and thus a good start. Although I know it is also EXTREMELY difficult to implement... ;-)


...and another wish

A reader, October 17, 2005 - 1:17 pm UTC

I would like an additional variable to be available in an exception block (like we already have SQLERRM and SQLCODE) - to give the line number within the module that caused the branch to the exception block.

You can currently look at the call stack / exception stack but that will only give you a line number within the exception block as opposed to the one that sent you there.



Tom Kyte
October 17, 2005 - 1:37 pm UTC

10g added that

dbms_utility.FORMAT_ERROR_BACKTRACE
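A quick sketch of using it (10g), reporting the line that actually raised the error before re-raising:

```sql
declare
  l_dummy varchar2(1);
begin
  select dummy into l_dummy from dual where 1=0;  -- raises NO_DATA_FOUND here
exception
  when others then
    -- prints the error stack including the raising line number
    dbms_output.put_line(dbms_utility.format_error_backtrace);
    raise;
end;
/
```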

re-raise original exception

Alberto Dell'Era, October 19, 2005 - 2:30 pm UTC

In Java we can do:

catch (Exception e) {
// log infos about exception etc
throw e;
}

Would be nice to have something similar in pl/sql, currently you can do:

dellera@ORACLE10> declare
2 l_dummy varchar2(1);
3 begin
4 select dummy into l_dummy from dual where 1=0;
5 exception when others then
6 raise;
7 end;
8 /
declare
*
ERROR at line 1:
ORA-01403: no data found
ORA-06512: at line 6

Would be nice to have something like "raise original_exception;", that gives "ORA-06512: at line 4" instead (and the full original stack trace in general of course).

Using dbms_utility.FORMAT_ERROR_BACKTRACE would "flatten" the stack into the message string, it's not the same (besides being a tad verbose).

Change(s) Needed - Idle Time Parameter

Magesh, October 19, 2005 - 4:58 pm UTC

When the IDLE_TIME limit is set in the database and a user leaves a session inactive beyond it, for whatever reason, Oracle waits for the user to return and submit another action before completely cleaning up the session. It would be nice if it cleaned up automatically instead of marking the session as SNIPED. The user, knowingly or unknowingly, violated the IDLE_TIME limit and the session is gone anyway. What use is it to the database to keep it in SNIPED status?
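For context, the limit under discussion is set through a resource profile; a minimal sketch (profile name hypothetical, and RESOURCE_LIMIT must be enabled for it to take effect):

```sql
-- IDLE_TIME is in minutes; sessions idle longer become candidates for sniping
create profile short_idle limit idle_time 30;
alter user scott profile short_idle;
alter system set resource_limit = true;
```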

Tom Kyte
October 19, 2005 - 5:02 pm UTC

the end user would get "3113 end of file on communication channel" upon return.

What do you do about 3113's?
You call support....



A Simple Way to list dependencies may be

A reader, October 19, 2005 - 5:11 pm UTC

When we have to make a change to an object, say change a data type or name, I want to know all the 20-30 sub-programs that are using it, so I know in advance how much work is going to be required or what needs to change. I know you can do it through Enterprise Manager, but I am talking about some simple way, probably from sqlplus or something. Perhaps even a system table which would store the object and its dependencies would do. But I would really love to do something like

SQL> LIST DEPENDENCY EMP; or something like it.

Tom Kyte
October 19, 2005 - 7:42 pm UTC

(connect by query on dba_dependencies can do this....)
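For example, a hedged sketch of such a connect-by query (adjust the object name, type, and owner filters to taste):

```sql
-- who depends, directly or transitively, on table EMP?
select lpad(' ', 2*(level-1)) || owner || '.' || name || ' (' || type || ')' who
  from dba_dependencies
 start with referenced_name = 'EMP'
        and referenced_type = 'TABLE'
connect by prior owner = referenced_owner
       and prior name  = referenced_name
       and prior type  = referenced_type;
```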

how about better support for binds in explain plan

Sean D Stuber, October 19, 2005 - 11:45 pm UTC

In particular, allow date bind variables when doing a plan.

Clearly Oracle can do it, the statements execute and have plans generated when they run, why can't that same logic (or nearly the same) be applied when just running the plan?

I'd be willing to have the functionality toggled through an

alter session xplan_allow_date_binds=true

in order to allow "default" behavior to work (or rather, fail) the current way; but still give me the option to generate a plan from the real application SQL without tinkering with it first.


empty strings and nulls

Raymond Allo, October 20, 2005 - 12:48 am UTC

As a DBA I am involved in many Java and PHP projects. Most developers (specifically the Java ones) complain about how Oracle deals with empty strings and nulls. Null is too confusing somehow and creates all this extra work for them. (OK guys, it is what it is, so get over it; just put it in a class.) Having said that, Oracle seems to be the only database with this issue: DB2, Postgres, MySQL, SQL Server, etc. handle empty strings as empty strings.
An init.ora parameter like empty_strings=true/false would be nice, thank you.

DBMS_OUTPUT.PUT_LINE

A reader, October 20, 2005 - 4:03 am UTC

1)
A few PL/SQL books say that we should be able to pass a boolean variable to DBMS_OUTPUT.PUT_LINE, but it is not overloaded for that usage.
2) Command-line history of SQL statements in SQL*Plus would be very helpful.


Tom Kyte
October 20, 2005 - 8:13 am UTC

1) I'm sorry that some books are wrong? are they Oracle books (eg: something we can actually fix?)

2) raptor.

gathering global statistics for partitioned table

lh, October 20, 2005 - 5:24 am UTC

Have I understood it correctly: if a sql statement references only one partition, it will use the statistics for that partition; otherwise it will use global statistics.

Generating global statistics can be a costly operation for large partitioned tables, especially when using histograms.

Some kind of 'cumulative' gathering of statistics could be beneficial: calculating statistics for a partition would, e.g., also update the global statistics.

Tom Kyte
October 20, 2005 - 8:18 am UTC

if the optimizer can determine a) that a single partition will be accessed and b) which partition that is then partition stats for that partition will be used.

Else, global table statistics are used.


calculation of local stats for all partitions DOES generate global stats, HOWEVER if you have 100 partitions and column X has 1000 distinct values in each partition, tell me how many distinct values of X does the table have?

a) 1,000
b) 100,000
c) some number between 1,000 and 100,000


and for bonus points

q) is X skewed, if so how?


you cannot roll up partition stats and answer those....

DBMS_OUTPUT.PUT_LINE

A reader, October 20, 2005 - 12:48 pm UTC

Hi tom,
does oracle support printing boolean values using DBMS_OUTPUT.PUT_LINE?


Tom Kyte
October 20, 2005 - 4:49 pm UTC

nope.

ops$tkyte@ORA9IR2> exec dbms_output.put_line( true );
BEGIN dbms_output.put_line( true ); END;

      *
ERROR at line 1:
ORA-06550: line 1, column 7:
PLS-00306: wrong number or types of arguments in call to 'PUT_LINE'
ORA-06550: line 1, column 7:
PL/SQL: Statement ignored

 

gathering global statistics for partitioned table

lh, October 21, 2005 - 5:54 am UTC

I had in mind a situation where new data is constantly added to a table. New partitions are created and statistics are gathered for them after they have a 'representative' volume.

Column statistics should also use histograms; this is the key point. These might be accumulated, with some loss of accuracy, into global statistics.
Histograms have some of the required information.
This would be an estimate, agreed. It could be (yet) another option in the dbms_stats package when gathering statistics for a named partition.

Another way to solve the basic problem: if a sql statement uses several partitions (with local indexes), the optimizer would use those partitions' statistics, not global statistics.

(I have a few times had problems using partitioned tables with local indexes. Statistics were gathered for the partitions,
but sql statements were referencing several partitions.
Queries which had indexed date columns in where clauses sometimes performed poorly because the old global statistics did not reflect values beyond their old range.
The table had hundreds of millions of rows and gathering global statistics on that machine takes a long time. We are now doing it, because we couldn't modify that third-party application, but I am wondering if this could be solved otherwise.)

Tom Kyte
October 21, 2005 - 8:28 am UTC

stats should be gathered for them from the get go - else the global stats are not rolled up.


it is not that histograms would be estimated - rather it is the fact that there isn't a way to roll them up as described.


100 partitions
10000 distinct values of X in each

do you have 10,000 distinct X's or 1,000,000 distinct X's or something in between??

The optimizer EITHER

a) uses global stats because it cannot figure out the SINGLE partition that will be accessed at parse time

b) uses local stats because it CAN figure out the SINGLE partition that will be accessed.

execute immediate @ db-link

A reader, October 21, 2005 - 8:16 am UTC

would be handy in some cases
(instead of dbms_sql@db-link)

booleans in dbms_output

A reader, October 21, 2005 - 9:57 am UTC

I generally use the following syntax:
exec dbms_output.put_line(case 1>2 when true then 'true' when false then 'false' else 'what?' end);
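Wrapping that in a small helper keeps call sites tidy; a hypothetical sketch (procedure name is made up):

```sql
create or replace procedure put_bool( p in boolean ) as
begin
  -- PL/SQL simple CASE accepts a boolean selector
  dbms_output.put_line( case p when true  then 'TRUE'
                               when false then 'FALSE'
                               else 'NULL' end );
end;
/
```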



My votes

Peter Nybo Rasmussen, October 21, 2005 - 10:29 am UTC

My votes goes to the following (all said by others before):

1) Multi-table check constraints or assertions as defined by ANSI SQL (don't see why this would be "extremely difficult to implement" as stated by Toon Koppelaars).
2) Don't treat empty strings as null (a way to disable this, big design mistake).
3) No implicit type conversions (a way to disable this).
4) Up-to-date size limits for varchar2 (currently 4000 bytes) and identifiers (currently 30 characters).
5) Autoincrement/identity columns (implicit sequences to avoid trigger code).
6) Mixed case object names (for better readability) without having to quote, combined with case insensitive DML.
7) Extended explain syntax to allow the equivalent of bind variable peeking.
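For point 5, the sequence-plus-trigger boilerplate that identity columns would eliminate looks roughly like this (names hypothetical):

```sql
create sequence t_seq;

create table t ( id number primary key, val varchar2(30) );

create or replace trigger t_bi
before insert on t
for each row
begin
  -- simulate an autoincrement column
  if :new.id is null then
    select t_seq.nextval into :new.id from dual;
  end if;
end;
/
```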


Not technical but...

Steve, October 21, 2005 - 11:22 am UTC

Having AskTom occasionally open to accept questions during UK working hours!

;-)

Tom Kyte
October 21, 2005 - 11:39 am UTC

I'll be in Europe for the next two weeks :)

malcolm, October 21, 2005 - 11:45 am UTC

GPL the source code. Hahaha!

And another vote for the option to compile pl/sql with strong typing. (We don't like being treated like VB programmers...)



For Peter Nybo Rasmussen

Andrew, October 21, 2005 - 12:49 pm UTC

1) Multi-table check constraints or assertions as defined by ANSI SQL . . .
- seems reasonable to me

2) Don't treat empty strings as null (a way to disable this, big design mistake).
- length of emptry string = 0
- length of null string = 0
-- what is the difference between empty string and null string? If length of enpty string > 0, then it is not empty, thus not null.

3) No implicit type conversions (a way to disable this).
- good coding practices generally dictate explicit type conversions, but eliminating implicit conversion would bust lots and lots of apps. Besides, just because your coders will not follow your standards (you have a standard requiring explicit type conversions, right?) does not mean you should use the database to enforce it.


4) Up-to-date size limits for varchar2 (currently 4000 bytes) and identifiers (currently 30 characters).
- Longer varchar2 is fine with me, but identifiers at 30 is fine too. This enhancement is usually requested by people who like to mash object names together to make even longer names. If it were increased to 60 or 128 or whatever, all that would happen is that people would complain that they could not mash their 120-char names together to make 600-char names. Give us a break; someone has to type in those long names every time their code references them.

5) Autoincrement/identity columns (implicit sequences to avoid trigger code).
- Actually, separate sequences are more flexible. Take for instance a bi-directional replication system. We set the sequence to start at 1 on site A and to start at 2 on site B, then increment by 2 on each site. Voila, no data collisions. Managing that with autoincrement would mean the code (DDL at least) would have to be different on each database.

6) Mixed case object names (for better readability) without having to quote, combined with case insensitive DML.
- You can put mixed-case object names in your apps all you want; Oracle does not care. DML is case insensitive: update emp set salary = 10; will work just fine. So will UPDATE EMP SET SALARY = 10; and so will Update Emp Set Salary = 20;. As for dictionary content readability, learn to use underscores; they take the place of spaces in object names AndMakeReadabilityMuchEasierThanMixedCase.

7) Extended explain syntax to allow the equivalent of bind variable peeking.
- That is nice. But also, having to diagnose problems others are having in their sessions, it would be nice to peek at the bind variables for other sessions too.

Just my US$.02
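The offset-sequence scheme in point 5 above can be sketched as follows (sequence name hypothetical):

```sql
-- on site A: generates 1, 3, 5, ...
create sequence site_seq start with 1 increment by 2;

-- on site B: generates 2, 4, 6, ...
create sequence site_seq start with 2 increment by 2;
```

Each site draws from a disjoint set of key values, so bi-directionally replicated inserts cannot collide.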


Chuck Jolley, October 21, 2005 - 1:16 pm UTC

Quote:
AndMakeReadabilityMuchEasierThanMixedCase.

That's no problem. Just make it TblAndMakeReadabilityMuchEasierThanMixedCase and it becomes easy to read.
Ask any SQLServer/VB programmer ;)




Those naughty NULL's

DaPi, October 21, 2005 - 2:09 pm UTC

Andrew said:
- length of empty string = 0
- length of null string = 0
-- what is the difference between empty string and null string?

I would say that the length of a NULL string is unknown, so should be NULL.

The difference is between:
a message '' received with a good check-sum => empty string
a message received with bad checksum, unknown message => NULL

I'm sure Fabian Pascal can do it better.


Nice thread

Andrew Max, October 21, 2005 - 2:30 pm UTC

Well, my strong belief is that one should be a smart Oracle guru to feel 100% free to criticize something about Oracle and say "Hey there, I would like you to add this feature! Why is Oracle still missing it???". Otherwise, we may end up just beating the air...

But anyway -- there is one intricate (and very simple) thing I still can't get.

Why in SQL*Plus we can, for example:

SQL> variable NumV number

... but can not:

SQL> variable DateV DATE
Usage: VAR[IABLE] [ <variable> [ NUMBER | CHAR | CHAR (n [CHAR|BYTE]) |
                    VARCHAR2 (n [CHAR|BYTE]) | NCHAR | NCHAR (n) |
                    NVARCHAR2 (n) | CLOB | NCLOB | REFCURSOR ] ]

So -- VARCHAR2, REFCURSOR and even CLOB are allowed whereas DATE is not.
But why???

I really have no clue, so I'm afraid it's kind of a stupid question.
Well, I am not a guru. So I realize that this may be just "beating the air..." :)

Best regards,
Andrew. 

Tom Kyte
October 22, 2005 - 9:44 am UTC

they just didn't "implement it".

applications generally don't *bind* dates - they bind strings holding characters that, when a specific format is applied, can be converted into the 7-byte internal date format (or 11 bytes for timestamps with fractional seconds)
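So the usual client-side pattern is to bind the string and convert it explicitly with a format mask, independent of NLS_DATE_FORMAT; a sketch (assuming a table DATES with a single DATE column X):

```sql
variable d varchar2(20)
exec :d := '22-10-2005'

-- explicit mask: no reliance on the session's NLS_DATE_FORMAT
insert into dates ( x ) values ( to_date( :d, 'DD-MM-YYYY' ) );
```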

Compiling invalid objects in the correct order

Marco Coletti, October 21, 2005 - 2:33 pm UTC

Yet another sample of "selecting DDL commands".

This query is a bit slow, but it should give you a list of DDL statements in (one of) the correct execution orders.

The basic idea is first to find the set INVOBJS_INDEPENDENT_INVOBJS of invalid objects which do not refer to other invalid objects, then from this set start a hierarchical query which walks down the dependency tree; the resulting set is then aggregated by object and ordered by max level to take care of objects which appear more than once in the dependency hierarchy.

with INVOBJS_INDEPENDENT_INVOBJS as (
select OBJECT_NAME, OBJECT_TYPE
from USER_OBJECTS where STATUS = 'INVALID'
minus
select d.NAME, d.TYPE
from
USER_DEPENDENCIES d,
(select OBJECT_NAME, OBJECT_TYPE from USER_OBJECTS where STATUS = 'INVALID') o
where d.REFERENCED_NAME = o.OBJECT_NAME
and d.REFERENCED_TYPE = o.OBJECT_TYPE)
select
'alter '||rpad(decode(OBJECT_TYPE,'PACKAGE BODY','PACKAGE',OBJECT_TYPE),14,' ')||
rpad(OBJECT_NAME,31,' ')||'compile'||
decode(OBJECT_TYPE,'PACKAGE BODY',' body')||';' COMPILE_COMMAND
from (
select 0 L, OBJECT_NAME, OBJECT_TYPE, NULL REFERENCED_NAME, NULL REFERENCED_TYPE
from INVOBJS_INDEPENDENT_INVOBJS
union all
select LEVEL L, NAME, TYPE, REFERENCED_NAME, REFERENCED_TYPE
from USER_DEPENDENCIES
connect by prior NAME = REFERENCED_NAME and prior TYPE = REFERENCED_TYPE
start with (REFERENCED_NAME, REFERENCED_TYPE) in (select OBJECT_NAME, OBJECT_TYPE from INVOBJS_INDEPENDENT_INVOBJS))
group by OBJECT_NAME,OBJECT_TYPE
order by max(L);

COMPILE_COMMAND
----------------------------------------------------------------
alter VIEW V_OM_CDN compile;
alter VIEW V_OM_ROD compile;
alter PACKAGE CMKANALISI compile body;
alter VIEW V_OM_ALICE compile;
alter PROCEDURE ALLINEA_CPC compile;
alter VIEW TV_CPC_VIEW compile;
alter VIEW V_RPT_SMART compile;
alter FUNCTION FAMIGLIA_ROD compile;
alter PROCEDURE OM_MERGE_ROD compile;
alter PROCEDURE SOGLIE_PER_SISTEMA compile;
alter PROCEDURE SLAMONITOR_WRAP_OM compile;
alter PROCEDURE SLAMONITOR_OM_WRAP compile;
alter PROCEDURE OM_MERGE_WHOLESALE compile;
alter PROCEDURE ALLINEA_CPC_ORESTE compile;
alter PROCEDURE SLAMONITOR_OM_DAI compile;
alter PROCEDURE SLAMONITOR_OM_CPC compile;
alter PROCEDURE SLAMONITOR_CPC_OM compile;
alter PROCEDURE SLAMONITOR_ALL_OM compile;
alter PROCEDURE SMART_BUS_SIEBEL_CLEAN compile;
alter VIEW TV_TD_INNOVATIVA_VIEW compile;
alter PROCEDURE SVECCHIA_OL_IN_CATENA compile;
alter VIEW TV_ELABORAZIONE_VIEW compile;
alter VIEW TV_CPC_VIEW_X_PORZIO compile;
alter VIEW V_RPT_WHOLESALE_NEW compile;
alter PROCEDURE INFASA_OL_IN_CATENA compile;
alter PROCEDURE CARICA_OL_IN_CATENA compile;
alter PROCEDURE SP_OM_DELETE_TABLE compile;
alter VIEW TV_STATO_ORDINATIVI_ALTA_VIEW compile;
alter PROCEDURE ANDREA_AGG_CPC_DOPPIA_ATTIVAZ compile;
alter VIEW TV_CONFIGURAZIONE_CIRCUITO_V compile;
alter PROCEDURE SP_DEL_DUPLICATI_TD_INNOV compile;
alter VIEW TV_STATO_ORDINATIVI_VIEW compile;
alter VIEW V_RPT_TD_INNOVATIVA_NEW compile;
alter PACKAGE TEST_TIZIANO compile body;
alter PROCEDURE PR_TOSC_UNICA compile;
alter PROCEDURE SLAMONITOR_CPC compile;
alter PROCEDURE PULITURA_DB_BAM compile;
alter PROCEDURE SLAMONITOR_WRAP compile;
alter PACKAGE TEST_TIZIANO_BE compile body;
alter PROCEDURE OM_MERGE_SOSPESI compile;
alter PROCEDURE SLAMONITOR_RUN_1 compile;
alter PROCEDURE SLAMONITOR_RUN_2 compile;


Chuck Jolley, October 21, 2005 - 2:49 pm UTC

I'm for null strings having a length of 0.

It just seems inconsistent to have it work the way it does.

If you say, well, what about null numbers etc.:
null strings are different.
Null itself is a string.

I just can't think of a case where a program would, as a practical matter, treat a null string differently from a 0-length string.



NULL's

DaPi, October 21, 2005 - 3:00 pm UTC

It's not what a program would do, it's what you want your data to mean. e.g. in a survey:

Q: What is your boat's name?
A: It doesn't have a name. (empty string)

Q: What is your boat's name?
A: . . . AGHHHHH! my house is on fire . . . . . (NULL string)





Enhancement Request

djb, October 21, 2005 - 3:46 pm UTC

I realize this thread is getting long and I may have missed this, but I'd like something like a dynamic (or conditional) foreign key constraint, like this:

create table T1 (
T1F1 VARCHAR2(30) primary key
)
/

create table T2 (
T2F1 VARCHAR2(30) primary key
)
/

create table TMAIN (
F1 number(1),
F2 VARCHAR2(30)
)
/

alter table TMAIN
add foreign key (F2)
when (F1 = 1) references (T1.T1F1)
when (F1 = 2) references (T2.T2F1)
when (F1 = 3) references (NOTHING)
/

I worked with a proprietary database that did this sort of thing (it was not SQL-based) and this feature was great.
When the value of F1 changed, the DB checked F2 against the appropriate table. To change values, you simply updated both at the same time.

I'd be curious to know what others think of this (including Tom).

Tom Kyte
October 22, 2005 - 10:29 am UTC

Never, never never.

I would hang up my hat.


You have three attributes here, not a single column multiplexed to mean one of three things.

re: nulls -vs- empty strings

Chuck Jolley, October 21, 2005 - 3:59 pm UTC

But Oracle is not consistent about it.
Is '' a null value or an empty string?
If it's not an empty string then what is?

"My boat is on fire" should raise a null pointer error,
Not be an acceptable answer to "What is the name of your boat?"

SQL*Plus: Release 8.1.7.0.0 - Production on Fri Oct 21 14:45:30 2005

(c) Copyright 2000 Oracle Corporation.  All rights reserved.

SQL> @connect chajol@tax_test
Enter password:
Connected.
chajol@tax.experior>create table blah(id number, string varchar2(16) default '' not null);

Table created.

Elapsed: 00:00:00.43
chajol@tax.experior>insert into blah(id) values(1);
insert into blah(id) values(1)
            *
ERROR at line 1:
ORA-01400: cannot insert NULL into ("CHAJOL"."BLAH"."STRING")


chajol@tax.experior>alter table blah modify(string default 'Hello World');

Table altered.

chajol@tax.experior>insert into blah(id) values(1);

1 row created.

chajol@tax.experior>select * from blah;

        ID STRING
---------- ----------------
         1 Hello World


chajol@tax.experior>

 

Improved sqlplus parameter handling..

Jmv, October 21, 2005 - 5:01 pm UTC

For instance, if &1, &2, et al. are not defined on the command line, is there a way to have a default automatically assigned? Similar to the ${var:=value} functionality in ksh.
Or a way to see the number of arguments from the command line.
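A commonly cited SQL*Plus idiom approximates the default-assignment part; the zero-row query with COLUMN ... NEW_VALUE defines the substitution variable so the script doesn't prompt, and NVL supplies the default (behavior worth verifying on your release):

```sql
-- default.sql: run as "@default" or "@default SCOTT"
set termout off
column 1 new_value 1
select '' "1" from dual where rownum = 0;  -- defines &1 as blank if not passed
set termout on

select nvl('&1', 'SCOTT') "chosen argument" from dual;
```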

Sorry, missed that

Andrew Max, October 22, 2005 - 10:59 am UTC

Aah... my bad! Completely forgot about that 7-byte SQLT_DAT datatype. 
Indeed, client applications generally do not bind this datatype.

> applications generally don't *bind* dates - they bind strings which hold characters ...

... did you mean this I believe?

SQL> create table dates (x date);

Table created.

SQL> variable some_string varchar2(48)
SQL>
SQL> exec :some_string := '22.10.05'

PL/SQL procedure successfully completed.

SQL> insert into dates values (:some_string);

1 row created.

Well, I do not code clients and am involved mostly in server-side programming (SQL, PL/SQL, a bit of OCI). But I realize that anyone who decides to bind strings containing dates must be aware of this:

SQL> alter session set NLS_DATE_FORMAT = 'DD/MON/YYYY';

Session altered.

SQL> insert into dates values (:some_string);
insert into dates values (:some_string)
                           *
ERROR at line 1:
ORA-01843: not a valid month


SQL>  exec :some_string := '22/OCT/2005'

PL/SQL procedure successfully completed.

SQL> insert into dates values (:some_string);

1 row created.

Anyway -- agreed, DATE for the SQL*Plus VARIABLE command makes no real sense because most other client apps never use SQLT_DAT. Sorry for that silly question. ;)

Thanks and best regards,
Andrew. 

Some extra about statistics

Ofir Manor, October 22, 2005 - 11:12 am UTC

There are many great suggestions here.
Some ideas about statistics:
1. I'd like to be able to do ALTER TABLE xxx DEFAULT STATISTICS METHOD FOR ALL INDEXED COLUMNS...
It would allow us to use gather_schema_stats and easily customize the way it analyzes a few selected tables that need some special care. Also, it might be useful to be able to customize the default estimate percent (or default to compute), etc.
2. I wish that when specifying parallelism for gather_schema_stats, each parallel slave would analyze a different table or index. So, if my schema has 200 tables and 800 indexes, and I run with a degree of 4, I'll have 4 processes each analyzing a different object (starting with the biggest and the unanalyzed, maybe, so they finish at more or less the same time). Again, otherwise we might need to work with our own scripts to create that parallelism.
Well, 1+2 mean: find out why people don't use gather_schema_stats (or gather_database_stats) on many production systems and use custom scripts instead, then make the built-in way easy and flexible enough that people will use it out of the box (using db control..)
3. I'd like the optimizer to use a new kind of statistics: COLUMN CORRELATION. For example, when a query has table t (1 million rows) as the driving table, and the two predicates that filter rows are WHERE t.a = :x and t.b = :y, and both A and B are NOT NULL with 200 distinct values each, how many rows will return from this part of the query? Oracle will guess 25 (a million divided twice by 200). That is a good guess if the columns are not correlated, but if they are strongly correlated a better guess might be a few thousand, so the rest of the plan might be really off. I'd like to be able to ask the database to collect correlation statistics on selected columns (or maybe AUTO: let it decide) and use them when appropriate.
In many cases, when tuning a nasty SQL, the DBA can quickly find where the cardinality estimate goes totally wrong, but can't fix it - so let us (SQL profiles can help, but are not good for "families" of similar SQLs, and are not "pre-emptive").

For Andrew from Michigan, USA

Clemens, October 24, 2005 - 5:54 am UTC

Quote:
2) Don't treat empty strings as null (a way to disable this, big design
mistake).
- length of empty string = 0
- length of null string = 0
-- what is the difference between empty string and null string? If length of
empty string > 0, then it is not empty, thus not null.

=> The difference between "empty string = string length=0 entered" and "null string = no data at all entered" may be small, but it's there.

Quote:
3) No implicit type conversions (a way to disable this).
- good coding practices generally dictate explicit type conversions, but to
eliminate it would bust lots and lots of apps. Besides, just because your
coders will not follow your standards ( you have a standard requiring explicit
type conversions, right), does not mean you should use the database to enforce
it.

=> Why not? Could be a database/session option to keep everyone happy.

Quote:
4) Up-to-date size limits for varchar2 (currently 4000 bytes) and identifiers
(currently 30 characters).
- Longer varchar2 is fine with me, but identifiers at 30 is fine too. This
enhancement is usually requested by people who like to mash object names
together to make even longer names. If it were to increase to 60 or 128 or??
all that would happen is that there would be complaints that from people that
they could not mash their 120 char names together to make 600 char names. Give
us a break, someone has to type in those long names every time their code
references them

=> So what? Using your argument, why not allow only an 8-character identifier limit? On the other hand, why is there no tab completion for object names yet? And what about concatenated names in foreign keys, etc., that have to be given artificial names because of that 30-character limit?

Quote:
5) Autoincrement/identity columns (implicit sequences to avoid trigger code).
- Actually seperate sequences are more flexible. Take for instance a bi-directional replication system. We set the sequence to start at 1 on site a and to start at 2 on site b, now we increment by 2 on each site. Voila, no data collisions. Managing that with autoincrement would mean the code (DDL at least) would have to be different on each database.

=> Take a look at Postgres; it's all there: sequences connected to ID columns. Saves some trigger coding (and maintenance) and gives you the best of both worlds.

Quote:
6) Mixed case object names (for better readability) without having to quote,
combined with case insensitive DML.
- You can put mixed case object names in your apps all you want, oracle does not care. DML is case insensitive. update emp set salary = 10; will work just
fine. So will UPDATE EMP SET SALARY = 10; and so will Update Emp Set Salary =
20;. As for dictionary contnt readability, learn to use underscores -- They
take the palce of spaces in object names
AndMakeReadabilityMuchEasierThanMixedCase.

=> That doesn't tackle the whole problem, where DESC and other commands will return the information in uppercase only. Underscores are nice, but underscores + mixed case are even nicer.

Just my €.02
C.

Is raptor "still sqlplus..."

Stewart W. Bryson, October 24, 2005 - 4:05 pm UTC

"Actually, many of those things I would change are being addressed - a better
sqlplus that is still sqlplus (coming, code name "raptor", previewed at OOW)..."

What I've read about Raptor is that it is more akin to jDeveloper than sqlplus. I don't want another Toad... I want a more powerful sqlplus, and anything that I have to use a JDBC connection with just doesn't seem like sqlplus to me.



Tom Kyte
October 25, 2005 - 1:19 am UTC

why does jdbc make it "not seem like sqlplus"?????



Deferred constraints

Toon Koppelaars, October 25, 2005 - 10:05 am UTC

"What I don't like about Oracle", here is another one:

cod>create table test(i number check (i>0) initially deferred);

Table created.

cod>insert into test values(0);

1 row created.

cod>commit;
commit
*
ERROR at line 1:
ORA-02091: transaction rolled back
ORA-02290: check constraint (CBO_WS.SYS_C006049) violated


cod>select * from test;

no rows selected

I would like for Oracle to NOT rollback the transaction. Just have the commit fail with the 2290 error.
I know ansi/sql standard describes the 2091 error: give us a switch then, to prevent the rollback.


Tom Kyte
October 26, 2005 - 7:24 am UTC

all you need to do is:

SQL> set constraints all immediate;


that'll verify the constraints - and if one fails - no rollback happens. You can fix it and then commit when it finally succeeds.


Commit must end the transaction one way or the other. 
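Against the deferred-constraint example above, that looks roughly like this sketch:

```sql
insert into test values ( 0 );

-- validates deferred constraints NOW; on failure it raises ORA-02290
-- but the transaction survives
set constraints all immediate;

update test set i = 1 where i = 0;   -- fix the offending row
set constraints all immediate;       -- now validation succeeds
commit;
```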

How can one have a commit fail

Jim, October 25, 2005 - 11:53 am UTC

Not sure how one could have a commit fail and not issue the rollback. You issue commit and you get an error, now what? You can't commit, it failed, just leave the transaction unfinished?

Toon Koppelaars, October 25, 2005 - 12:49 pm UTC

Yes, and then issue a correcting:
update test set i=1 where i=0;
and then commit successfully.


Tom Kyte
October 26, 2005 - 8:49 am UTC

set constraints.....

Is raptor "still sqlplus..."

Stewart W. Bryson, October 25, 2005 - 2:36 pm UTC

It seems that Raptor is going to be more like Toad... a very GUI tool much like the tools one would use to administer SQL-Server.

What I want is a more powerful sqlplus... one that is still very command-line oriented, but makes some of the more painful things easier: statement recall and review, formatting output, spooling to spreadsheet-like output, etc. And it would be fine with me if there were a GUI client built around it... but at its very heart it was still sqlplus.

Perhaps my comparing it to Toad is unfair... but that's what the screenshots looked like. And I know in Toad, all sqlplus commands are ignored. Is the JDBC connection that Raptor uses going to allow commands such as "SHUTDOWN IMMEDIATE", "RECOVER DATABASE" or "ALTER SYSTEM KILL SESSION..."?

Your new book is excellent... by the way.





Not a review

Huy, October 25, 2005 - 4:46 pm UTC

_Maximum length for identifiers is too low.
_Case-insensitive passwords.

my5cent

Edgar, October 26, 2005 - 9:08 am UTC

1.
On most sites, range partitioning is used on date columns to maintain "windows of historical data".
Then, to automate partition maintenance, people write some code to create partitions for new months and truncate/drop partitions of "obsolete" data.
Each site has its own implementation of this common task.
2.
Replication using fast refresh - no built-in functionality which protects rows in the target table from deletes after a truncate/drop partition on the master table.
3.
Physical standby with read-only mode windows for reporting:
no possibility to use temporary tables.
4.
Express Server killed by Oracle - it was an out-of-the-box SOLUTION for datamarts.
Now replaced by analytical sql and m-views - just bricks for developers.
5.
JDBC - unable to get metadata on a PreparedStatement before execution.
6.
SQL optimizer hints - it is sometimes hard to get them to work, because syntax checking of hints is not implemented, and the reasons why they don't work are not reported.
7.
The alert log file has an inconvenient structure - hard to process with scripts, to figure out all lines which correspond to the same "bad event", or the dates of "bad events" (compare with the Windows Event Log, for ideas).

ORA-12053

Kim, October 26, 2005 - 9:21 am UTC

- Not the most descriptive error message.

Background (10g R1):
We are trying to figure out *why* a nested join MV is not allowed with fast refresh, but only a nested aggregate MV.
It works when we use group by on all columns, so why
not without???

ALERT.LOG structure

Andrew Max, October 26, 2005 - 9:30 am UTC

Hi Edgar,

> "Alert log file has inconvenient structure - hard to process with scripts, figure out all lines which correspond to the same "bad event", dates of "bad events" (compare with Windows EventLog, for ideas)..."

Have you read this, for example?
</code> http://asktom.oracle.com/~tkyte/alert_arch.html <code>

Tom demonstrated there how to make use of SQL and external tables to mine data from ALERT.LOG file.
Similar smart examples can be found in his new book, "Expert Oracle Database Architecture".

Best regards,
Andrew.

Reply to Edgar

Stewart W. Bryson, October 26, 2005 - 9:57 am UTC

"Replication using fast refresh - no built-in functionality which preserves rows
in target table from deletes after truncate/drop partition on master table."

Materialized views are just that... instantiated views of a source object or objects. They cannot contain more data than the source... it's just not what they are designed for. They are no longer Oracle's preferred replication platform... they are more of a data warehousing/data segmentation tool now.

For replication needs such as those mentioned above, use Streams. Streams allows you to define what transactions you would like to forward. Only want updates and inserts but no deletes? Fine... configure it that way. You do or do not want DDL to be forwarded? It can do either one. Want replication for an entire database or entire schema (instead of having to configure MV's for every single table)? Oracle's got you covered.

varchar2 max length

Alex, October 27, 2005 - 1:35 pm UTC

Wish to remove these annoying length restrictions on VARCHAR2 - 4000 in SQL and 32767 in PL/SQL. They are very very inconvenient.

Of course, we might use CLOB, but due to its significant performance impact, using CLOBs seems to be overkill for our 50-100K strings.

So, having even a 4MB-length VARCHAR2 would be REALLY helpful.

A reader, October 28, 2005 - 3:57 pm UTC

Tom,

Now that this thread has "steadied down" can you pick your top 5 or top 10 ?

Tom Kyte
October 29, 2005 - 10:48 am UTC

I will do that - this week or next weekend..... Yes.



One thing i don't like in SQL

Michel Cadot, October 29, 2005 - 1:39 pm UTC

Hi,

I already posted this question in forum and newsgroup but never received an answer.
Why, in the following query, is Oracle not able to resolve a.dummy?

SQL> select ( select dummy
 2           from ( select dummy
 3                  from dual b
 4                  where b.dummy = a.dummy
 5                )
 6         ) dummy
 7  from dual a
 8  /
               where b.dummy = a.dummy
                               *
ERROR at line 4:
ORA-00904: "A"."DUMMY": invalid identifier

The purpose of this question is that we often have to optimize the following kind of query:

select ..., 
       (select ... from tabB B1
        where B1.col1 = A.col1
          and B1.col2 = (select max(B2.col2) from taB B2
                         where B2.col1 = A.col1))
from tabA A
where ...

The classical way to limit to one access to tabB is to use an analytic function like:

select ..., 
       (select ... 
        from (select ..., row_number() over (order by col2 desc) rn
              from tabB B
              where B.col1 = A.col1)
         where rn = 1)
from tabA A
where ...

but as we add one level of subquery, A.col1 goes out of scope.
Why is this the case? Is there any theoretical SQL limit on this? Or is it just an Oracle optimizer limit?

Regards
Michel
 

Tom Kyte
October 29, 2005 - 2:05 pm UTC

correlation names go "one level", no more, no less.

let us say b2.col2 is a DATE:

select ....,
       substr(
         (select max( to_char(b2.col2,'yyyymmddhh24miss') || '..your dotdotdot..' )
            from tabB b2
           where b2.col1 = a.col1), 15 )
  from tabA a
 where .....



similar for number, varchar.....


Michel Cadot, October 30, 2005 - 6:13 am UTC

Thanks Tom, that's what I finally did, but it is not really pretty, especially when the "dotdotdot" is complex (say, multiple columns of different types and so on).

Do you know the reason of this limit of "one level"?
I hoped there would be no limit, something like variable scope in PL/SQL or C...

Regards
Michel

Tom Kyte
October 31, 2005 - 2:39 am UTC

I do not know the reasoning "why", only that "it is" and how to work around it.

another USD$.02

Andrew, October 31, 2005 - 11:08 am UTC

I think it would be really useful if the dba_tables view broke out chain_cnt such that chain_cnt reflects the number of rows that require more than one block to store, and a new column, migrate_cnt, reflects the number of rows moved due to insufficient free space in the block to allow the row to grow.

Linux Installation Headaches

Basil, October 31, 2005 - 12:08 pm UTC

The headache of installing Oracle under Linux ranks up there as one of the biggest misfeatures of Oracle, at least as far as installing 9.2.0.4 under RHEL 3 (a supported platform) is concerned. Procedure appears to be: Get software, unpack software, apply a patch, start installer, watch it fail, remember to set THREADS_FLAG to native, and run the installer again. Of course, Google and Metalink must be consulted on top of the install document at each step.

I can't figure out why the patch isn't already applied to the product when I get it, nor why I have to have Metalink access to get something that prevents the product from installing in the first place, nor why the runInstaller script can't check the environment and set THREADS_FLAG appropriately before running the Java installer. It's enough to make me take the Windows performance penalty just to be free of the installation hassle. My most recent Oracle install led me to see why MySQL is so popular: rpm -i and the software is there.

Tom Kyte
November 01, 2005 - 4:57 am UTC

XE, rpm -i

automatic length in constant varchar2 in PL/SQL

Russell H, November 01, 2005 - 4:39 pm UTC

Something like

declare
some_text constant varchar2() := 'Hello World!';
...

would be nice. PL/SQL could automatically set the length.

What don't you like about Oracle?

Nestor Torres, November 01, 2005 - 5:29 pm UTC

What I don't like about Oracle is that it sells Oracle e-Business Suite 11i for Windows and you subsequently find that in order for it to work (install) you need the MKS Toolkit, which is $500.00 software. Good job! What is a student who wants to learn the Oracle e-Business Suite 11i supposed to do? You buy the trial version and then Oracle expects you to have the MKS Toolkit. They should at least provide a workaround for those of us who can't invest in the MKS Toolkit. I don't think this is a good way to secure a loyal following. Here I am with a worthless set of DVDs. That is what I don't like about Oracle.

Tom Kyte
November 02, 2005 - 5:01 am UTC

well, I cannot imagine a student downloading and installing the e-business suite and making any sense out of it personally.

Suggestion: excluded tables on "imp"

David Weigel, November 02, 2005 - 11:40 am UTC

We sell an application with Oracle, and once in a while we can't figure out problems without the customer bundling up the database and sending it to us. Their database schemas are very similar, not necessarily identical, but one thing I can count on is that there are two tables (say A and B) that make up 60% of the size and are rarely useful in solving the problem.

When loading the database, I have to do "IMP SHOW=Y", cull out the two tables, and build an "IMP TABLES=(N1,N2,N3,N4...N200)". What I'd like to be able to do is "IMP EXCLUDE=(A,B)".

Tom Kyte
November 03, 2005 - 5:19 am UTC

data pump has infinitely more sophisticated filters.

You can give them a "fine grained access control" policy of "where 1=0", ask them to put that in place - do the export (zero rows come out) and then drop the policy (i've done that in the past)
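For the record, Data Pump's filter for exactly this case is the EXCLUDE parameter. A parameter-file sketch (directory, dump file name, and credentials are placeholders; A and B are the table names from the example above):

```
# exclude.par -- Data Pump export parameter file (names are placeholders)
DIRECTORY=DATA_PUMP_DIR
DUMPFILE=customer_db.dmp
EXCLUDE=TABLE:"IN ('A','B')"
```

The customer would then run something like `expdp system parfile=exclude.par`, and the two large tables never leave their site.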

Snipped Sessions are a pain

Bill Beilstein, November 02, 2005 - 11:41 am UTC

A major PITA for every DBA I have ever talked to (outside of Oracle) is that when a session is killed by the DBA or by a resource limit, you have to run a cron job to kill the OS process to release the connection. The requirement that the user get one more swipe at the apple before it disconnects is just silly!!! What happens to the user who went on vacation for a week? While I know this behavior is near and dear to the people who work for Oracle, provide a database parameter so that the rest of the world can chop the session when it is killed. Something like:

KILL_SESSION_IMMEDIATELY = TRUE

Tom Kyte
November 03, 2005 - 6:20 am UTC

well, it is not entirely silly - the client application will generate an ora-3113, end of file on communication channel. A fairly obscure and "scary" message.

But I hear you.
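Worth noting: there is also a DISCONNECT SESSION clause that terminates the dedicated server process as well, which addresses much of this complaint without a cron job. A sketch (the sid and serial# values are placeholders):

```sql
-- Find the session first...
select sid, serial# from v$session where username = 'SCOTT';

-- ...then disconnect it, killing the server process instead of
-- merely marking the session KILLED:
alter system disconnect session '123,456' immediate;
```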

datatype text

A reader, November 02, 2005 - 3:18 pm UTC

There is one thing I always liked when using other databases: the TEXT datatype. With Oracle I have to use a "normalization" table of rows to contain it, which is more work.

Why did Oracle choose not to include a TEXT datatype? Obviously you are fully capable of doing it, so there must be a good reason behind the decision not to.

Thanks,
Marcus

Tom Kyte
November 03, 2005 - 6:35 am UTC

clob?

What do you think about this?

D., November 03, 2005 - 11:33 am UTC

"Oracle password-system is putting its users in danger
...it seems, that passwords of Oracle DB users, can be easily uncovered, because of its weak defense mechanism, putting in danger corporative secrets of companies, using this software. The last critical analyze of problem showed clearly observed insufficiencies of technologies, used by company. Explorers of the problem uncovered the password during few minutes. Joshua Right from SANS Institute and Carlos Sida from Royal Colledge Holloway (London), exploring this problem sharply criticize this issue on the conference dedicated to IT-security, that was held in Los Angeles. According to their words, the main problem is weak hashing-mechanism and upcase transformation, that narrows ammount of possible combinations. Also, they say, that Oracle was notified about this issue in July, but there was no official reaction..."

Sorry for the poor quality of the translation and possibly inaccurate name transliteration - I translated this article from another language. Also, I can't show the origin of the article because it was lost during e-mailing.

What I want to ask is this: is there some truth in this article, or is it just old evil Microsoft trying to lay its dirty hands on Oracle's share of the market?


Tom Kyte
November 04, 2005 - 2:38 am UTC

http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:258815248980#50574772893798

has my comments on that....

would you send me your /etc/shadow file? If not, I won't send you my dba_users table either. That is the bottom line.

Ok

D., November 04, 2005 - 5:15 am UTC

This really makes sense - the first thing to protect is the hashed passwords, so nobody can brute-force them.
Well, but now I'm interested in another question:
when I log in... with SQL*Plus, for example, what is sent over the network - plain text, a hash of the password, or is the whole thing somehow encrypted? Can some evil guy steal the password or its hash this way, or am I missing something?..

Tom Kyte
November 04, 2005 - 8:47 am UTC

passwords are sent encrypted (since 7.0).

however, beware of commands like "alter user x identified by y" - the SQL itself is sent in cleartext unless you have encrypted the network traffic (ASO)

David Aldridge, November 04, 2005 - 11:33 am UTC

How about the "password" syntax? I'm guessing that it encrypts the passwords?

SQL> password applications
Changing password for applications
New password:
Retype new password:
Password changed 

Tom Kyte
November 04, 2005 - 5:28 pm UTC

set

TRACE_DIRECTORY_CLIENT=/tmp
TRACE_level_client=16


in your sqlnet.ora (in unix .sqlnet.ora if you like in your home directory) and you'll see that you cannot see the password.



donna, November 07, 2005 - 5:30 am UTC

I'd like to get rid of JInitiator. Why flog all these web-services capabilities and yet still need to install a client application?!

Schema Level Permissions

Bill, November 07, 2005 - 3:45 pm UTC

I have always wanted the ability to grant permissions at the schema level. For example if you want user AAA to be able to select user BBB's tables, you have to grant select on every table in BBB's schema. Wouldn't it be nice if you could simply say

grant select to AAA;
or
grant select on schema {ALL|TABLES|VIEWS|SEQUENCES} to AAA;
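Until such a grant exists, the usual workaround is to generate the individual grants from the dictionary. A minimal sketch (schema and grantee names taken from the example above; note it must be re-run when BBB creates new tables):

```sql
-- Generate "grant select" for every table in BBB's schema.
begin
  for t in (select table_name from dba_tables where owner = 'BBB')
  loop
    execute immediate 'grant select on BBB."' || t.table_name || '" to AAA';
  end loop;
end;
/
```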

Hints

andrew boocock, November 10, 2005 - 5:47 am UTC

Hi Tom,
I guess this is the best topic for my thoughts on hints.

I got a call yesterday from another company site; the DBA had issues with a slow-running piece of 3rd-party-supplied code. The code was hinted. I advised removing the hints as a first step; the plan changed to something more appropriate and the runtime came down from 4+ hours to 8 minutes.

I'd imagine we've all seen this situation before, and for a basic SQL script this was easily changeable. However, I've also been in situations where compiled code was provided with embedded hints. There are ways and means to change the behaviour without recompiling; the one I remember was changing the degree of parallelism on the base table to defeat a RULE hint, which led to an inlist iterator being used rather than a concatenation.

Drum roll,

How about Oracle providing an initialization parameter, IGNORE_HINTS=TRUE/FALSE?

Thinking about how Oracle goes about parsing code: hints are identified and the directive followed. If the Oracle binary could check the new parameter and ignore the hints, we would have another tool to help us out, particularly with compiled code.

Clearly, on the occasions when we need a hint to get around some bad design or whatever, the parameter could be set via alter session set IGNORE_HINTS=FALSE or the like.

I'm assuming this would be quite simple to accomplish - basically another if statement. If doable, providing a backport patch would help many of us still running 9i, or indeed 8i.

Regards

andrew

Tom Kyte
November 11, 2005 - 11:42 am UTC

(shhh, there is but it is not documented in the current releases)

You know what I'm going to ask next

andrew boocock, November 12, 2005 - 4:22 am UTC

That's good to know. I assume at best this will be an
underscore parameter.

I guess you can't tell me, so maybe I'll have to trawl through all the _underscore params looking for something with *hint* or *optimizer* in it. Unless you can whisper it!

BTW is the documentation going to include this anytime soon?

Thanks

Andrew

Tom Kyte
November 12, 2005 - 10:58 am UTC

run a 10053 trace and you'll see them all (the optimizer related parameters) in the trace file.

undocumented things are undocumented, don't expect them to become "documented" anytime soon. If and only if they decide to make this a "feature" would it become documented.

comparing nulls

A reader, November 13, 2005 - 1:41 am UTC

It would be nice to have an equality operator that treated two null values as equal to each other. E.g. I'd like to be able to write something like

where col1 == col2

instead of

where nvl( col1, 'x' ) = nvl( col2, 'x' )

where "x" is some character that I pray never actually appears in my column data.


Tom Kyte
November 13, 2005 - 10:32 am UTC

how about:

decode( col1, col2, 'equal, even if null', 'not equal, even if null' )

where decode( col1, col2, 1, 0 ) = 1

for example....
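A quick sanity check of the decode approach -- decode treats two NULLs as a match, which is exactly what the original poster wanted. A sketch:

```sql
-- decode(col1, col2, ...) takes the "equal" branch when both are NULL:
select decode(x.col1, x.col2, 'equal', 'not equal') result
  from (select cast(null as varchar2(10)) col1,
               cast(null as varchar2(10)) col2
          from dual) x;

-- RESULT
-- ---------
-- equal
```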

My 2 cents

Mike Wilson, November 13, 2005 - 5:42 pm UTC

I've been working with Oracle for 10+ years now (yes, I know I'm still a newbie) but if I had to pick the single greatest thing I don't like about Oracle it would be:
*) Command-line history in SQL*Plus using up arrow on Unix platforms.

Yes, I know you can download a third-party patch to get this to work in Linux but why, oh, why hasn't this "feature" ever just worked on install? I can't count the number of times I've hit the up arrow in SQL*Plus and then sworn quietly, repeatedly under my breath at a client site. It would have saved me literally man weeks in the past 10 years.

Will Raptor include command-line up-arrow history out of the box?

Defaulting to vi rather than ed would have also been a nice touch once we made it to the 21st century but at least you can configure this with a startup script ...

Tom Kyte
November 13, 2005 - 5:44 pm UTC

raptor is more gui than command line - but with a full history.

(i agree with you, rlwrap has been "life changing" :)
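The startup-script fix for the default editor mentioned above is a one-liner; placing it in the site glogin.sql (path shown is the usual location, adjust to your install) or a local login.sql makes "ed" go away for good:

```sql
-- In $ORACLE_HOME/sqlplus/admin/glogin.sql, or a login.sql in
-- the working directory / SQLPATH:
define _EDITOR = vi
```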

To andrew

oraboy, November 14, 2005 - 12:16 pm UTC

Well, you know, when experts like Tom say not to use undocumented parameters, I suppose you realize the meaning and good intention behind it.

And here comes the whisper... (just thought I could save your search time, as you sound very determined to find that secret parameter).

I hope you don't use it in Prod - but for testing in pre-prod, or at session level to verify where the CBO's default plan is superior to the one hinted by developers (90% of the time), you can try this:

SQL>explain plan for 
  2   select /*+ no_index(t test_t_idx) */ * from test_t t where owner='SYS';

Explained.

SQL>select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------

--------------------------------------------------------------------
| Id  | Operation            |  Name       | Rows  | Bytes | Cost  |
--------------------------------------------------------------------
|   0 | SELECT STATEMENT     |             |   327 |  6867 |     2 |
|   1 |  TABLE ACCESS FULL   | TEST_T      |   327 |  6867 |     2 |
--------------------------------------------------------------------

SQL>alter session set events '10143 trace name context forever'; 

SQL>explain plan for 
  2    select /*+ no_index(t test_t_idx) */ * from test_t t where owner='SYS';

Explained.

SQL>select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------

---------------------------------------------------------------------------
| Id  | Operation                   |  Name       | Rows  | Bytes | Cost  |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |             |       |       |       |
|   1 |  TABLE ACCESS BY INDEX ROWID| TEST_T      |       |       |       |
|   2 |   INDEX RANGE SCAN          | TEST_T_IDX  |       |       |       |
---------------------------------------------------------------------------

SQL>alter session set events '10143 trace name context off';

Session altered.

SQL>explain plan for 
  2     select /*+ no_index(t test_t_idx) */ * from test_t t where owner='SYS';

Explained.

SQL>select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------

--------------------------------------------------------------------
| Id  | Operation            |  Name       | Rows  | Bytes | Cost  |
--------------------------------------------------------------------
|   0 | SELECT STATEMENT     |             |   327 |  6867 |     2 |
|   1 |  TABLE ACCESS FULL   | TEST_T      |   327 |  6867 |     2 |
--------------------------------------------------------------------

Again, please be informed that Tom never advocates using undocumented parameters, and that's a strong foundation I have learnt from him. But if you need to demo how bad an application is (if developers have hinted every SQL), by all means use this as your quickest way in test or dev.

hope that helps

And I really hope Tom doesn't fault me for posting this (* but hey, this is not a secret per se - I found it on Metalink :) )

Tom Kyte
November 14, 2005 - 2:15 pm UTC

as long as everyone knows "not supported, use at your own risk, caveat emptor, get support permission to use in production, etc etc etc"



Thing I "don't like"

Andrew Max, November 15, 2005 - 12:57 pm UTC

Hi Tom,

I'd like to point out one more "what-you-don't-like" thing here.
It's about the ORA-30005 error. Please consider:

SQL> create table Employee as
  2  select 1 EmpNo, 'John Smith' EmpName from dual
  3  union all
  4  select 2, 'Peter Pen' from dual;

Table created.

SQL> select * from Employee
  2    where EmpNo = 1
  3    FOR UPDATE WAIT 5;

     EMPNO EMPNAME
---------- ----------
         1 John Smith

So far so good. We just asked Oracle: "hey, give me that row, and if it's already locked then please wait, but no longer than 5 seconds".

However, *why can we not* do it this way???

SQL> variable Wait_Time number
SQL>
SQL> exec :Wait_Time := 5

PL/SQL procedure successfully completed.

SQL> select * from Employee
  2    where EmpNo = 1
  3    FOR UPDATE WAIT :WAIT_TIME;
  FOR UPDATE WAIT :WAIT_TIME
                  *
ERROR at line 3:
ORA-30005: missing or invalid WAIT interval

In other words, we cannot use bind variables to specify the wait timeout -- only an integer constant. While it may seem obvious, this is where I got a bit confused. I believe that allowing not only constants but also bind variables here would be a nice feature.

Suppose, for example, we want to implement a procedure:

SQL> create procedure UpdateEmp (p_EmpNo in number, p_Wait in number) is
  2     cursor Cur is
  3     select null from Employee where EmpNo = p_EmpNo for update wait p_Wait;
  4  begin
  5     open Cur;
  6     -- Row locked -- do some updates ...
  7     close Cur;
  8  end;
  9  /

Warning: Procedure created with compilation errors.

SQL> show errors
Errors for PROCEDURE UPDATEEMP:

LINE/COL ERROR
-------- -----------------------------------------------------------------
3/4      PL/SQL: SQL Statement ignored
3/68     PL/SQL: ORA-30005: missing or invalid WAIT interval

Of course, it failed for obvious reasons. But if it were implemented, it would be possible for end-users to call this procedure passing different wait intervals to it.
As for me, it makes sense: some "impatient" user would call it like this:

exec UpdateEmp (1, 1)

while the other (say, more "patient") would be able to issue:

exec UpdateEmp (1, 10).

Furthermore, as far as I understand, implementing this feature would not affect much of anything (I mean the query-parsing strategy -- there's no need to change anything radically). But maybe I'm just wrong?

So what do you think about this?

Thanks and best regards,
Andrew. 

to Andrew Max

Oraboy, November 15, 2005 - 3:12 pm UTC

create or replace procedure updateemp(n number) as
  l_count pls_integer := 0;
  resource_busy exception;
  pragma exception_init(resource_busy, -54);  -- ORA-00054, raised by NOWAIT
begin
  loop
    declare
      cursor cur_emp is select ... for update nowait;
    begin
      open cur_emp;
      -- lock acquired: your other application code here
      close cur_emp;
      exit;
    exception
      when resource_busy then
        l_count := l_count + 1;
        exit when l_count > n;
    end;
  end loop;
end updateemp;

-- if you really want to wait 'n' seconds (and not just try n attempts), simply add a dbms_lock.sleep to sleep one second within the inner loop

PL/SQL gives all the power you want





Include ALM?

Oraboy, November 15, 2005 - 4:02 pm UTC

This is something what just came to my mind..sort of 'nice to have' feature.

9 out of 10 developers don't understand concurrency issues, and things that work fine in isolation get blown away when exposed to many concurrent users (and yes, they simply blame it on Oracle and move on - but that's a different story).

Now, just think: some application is already there in production... getting beaten to death... no one has a clue. I am thinking, if there were an option to capture all production activity so it could be rerun in a non-prod environment, that would be great (to test any system after changes).

What I mean is: say we experience problems around 10 am every morning. If there were some way to record all user activity from, say, 8 am to 11 am, and transfer it to another database (like dbms_stats and a stat table), we could simply rerun that activity (load) to check whether the modified code makes things any better.

I am not sure whether this sounds more like LoadRunner or a volume-testing tool (with which I have no experience), but I am looking for something within the database (maybe room for a new acronym: ALM -> Automatic Load Management, or ALLM -> Automatic Load Management Monitor).



Tom Kyte
November 16, 2005 - 8:22 am UTC

that is load runner - I'm not sure if we'd be going into that area anytime soon.

Neat idea though.

Oraboy, thanks but...

Andrew Max, November 15, 2005 - 4:19 pm UTC

Hi Oraboy,

And thanks a lot for your comments.
But there's a small but very crisp difference between the FOR UPDATE WAIT approach and the one you posted above.

As far as I know, SELECT ... FOR UPDATE WAIT *enqueues* the session's request to lock the row(s). This means that if some row of interest is locked and session #1 issued its request at some point before session #2, it's guaranteed that session #1 will be able to lock that row when it's freed, while session #2 will continue to wait.

Consider:

Blocker> select EmpNo from Employee
2 where EmpNo = 1 for update;

EMPNO
----------
1

Session #1> select to_char(sysdate, 'HH24:MI:SS') cur_time from dual;

CUR_TIME
--------
23:49:03

Session #1> select EmpNo from Employee
2 where EmpNo = 1 for update wait 60;
-- Session #1 began to wait...

Session #2> select to_char(sysdate, 'HH24:MI:SS') cur_time from dual;

CUR_TIME
--------
23:49:18

Session #2> select EmpNo from Employee
2 where EmpNo = 1 for update wait 60;
-- Session #2 began to wait...

Blocker> commit;

Commit complete.

Session #1> ...

EMPNO
----------
1

Session #2> ... continue to wait, and in less than minute:
select EmpNo from Employee
*
ERROR at line 1:
ORA-30006: resource busy; acquire with WAIT timeout expired

If we were using "retry counter" logic with LOOP, NOWAIT and dbms_lock.sleep, the ordering would be unpredictable and quite *random*. Suppose some session began to sleep on dbms_lock.sleep and 0.001 sec later the row of interest was unlocked. Then our sleeping session may have no luck: some third session can "steal" the row immediately.

Is this important? Well, it depends...
And I believe sometimes it's important.

Thanks again for your comments.

Cheers,
Andrew.

create or replace table

Baker, November 16, 2005 - 10:46 am UTC

I would like to see the keywords "or replace" added to the create table syntax, rather than having to drop and recreate.

There are times when I use a drop table in a script to wipe out temp tables that I don't want hanging around the database, followed by a create table. Of course, I end up getting the "table does not exist" error. If the syntax were changed to "create or replace table", then I could eliminate the drop table and thus remove the error message.



Tom Kyte
November 16, 2005 - 6:02 pm UTC

begin
execute immediate 'drop table t';
exception
when others then null;
end;
/


is one method
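A slightly tighter variant of the same idea traps only "table does not exist" (ORA-00942) instead of swallowing every error, so an unexpected failure (privileges, locks) still surfaces:

```sql
declare
  table_missing exception;
  pragma exception_init(table_missing, -942);  -- ORA-00942
begin
  execute immediate 'drop table t';
exception
  when table_missing then null;  -- ignore only "table or view does not exist"
end;
/
```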

To Andrew Max

Oraboy, November 16, 2005 - 11:11 am UTC

Andrew - 
Again, what you are trying to do (serialize) is already available; it is just a matter of how you write the code.

Since you said "I believe sometimes it's important", I am posting this reply (I don't mean to showcase my PL/SQL knowledge):

SQL>desc test_t
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------
 C1                                                 NUMBER
 C2                                                 TIMESTAMP(6)

SQL>ed
Wrote file afiedt.buf

  1  Create or replace procedure Update_test_t( p_c1 in number, p_willing_maxwait in number ) as
  2   cur_c1 sys_refcursor;
  3   lv_c1 number;
  4  begin
  5    open cur_c1 for 'select * from test_t where c1='||p_c1 ||' for update wait '||p_willing_maxwait;
  6    loop
  7     exit when cur_c1%notfound;
  8     fetch cur_c1 into lv_c1;
  9     update test_t set c2=systimestamp where c1=lv_c1;
 10    end loop;
 11  exception
 12    when others then
 13     dbms_output.put_line('Error '||SQLERRM(SQLCODE));
 14* end;
 15  /

Procedure created.

----
se1 ==> Session1
se2 ==> Session2


T1:
----
Se1>exec Update_test_t(2,2);

PL/SQL procedure successfully completed.

se2>set time on
10:41:30 se2>exec Update_test_t(2,2); -- waited 2 seconds..no luck
Error ORA-30006: resource busy; acquire with WAIT timeout expired

PL/SQL procedure successfully completed.

10:41:38 se2>
10:41:49 se2>exec Update_test_t(2,5); -- waited 5 seconds..no luck
Error ORA-30006: resource busy; acquire with WAIT timeout expired

PL/SQL procedure successfully completed.

10:42:22 se2>exec Update_test_t(2,25);
***WAITING ***

 -- flip to Session1 and release the lock
 Se1>commit;
 Commit complete.
 -- switch to session 2, it's done
PL/SQL procedure successfully completed.

checking the update
====================
SQL>select * from test_t;

        C1 C2
---------- -----------------------------
         1
         2 16-NOV-05 10.42.28.018745 AM <-- session 2's time
         3
         4
         5
         6
         7
         8

8 rows selected.

hope that answers your need..
 
That's probably the last answer from me, as I feel this is moving away from the intended topic (things I don't like, or would like to have, in Oracle) and I don't want to earn the wrath of the other Oracle gurus here.

Sorry Tom, for overstepping in your thread.



 

Tom Kyte
November 16, 2005 - 6:09 pm UTC

Andrew is correct about the enqueue waits though.

Think about it like this....


You get into a line, a queue. You are blocked (waiting for the chance to buy tickets).

If you get out of line every three seconds, you may NEVER be able to buy the ticket (because once you get out of line, you lose your place, you go to the end of the queue).

If you stay in line - you don't lose your place.

The last reply to Oraboy...

Andrew Max, November 16, 2005 - 12:53 pm UTC

Hi Oraboy,

Thanks for your response but I still have a problem with it.

> open cur_c1 for 'select * from test_t where c1='|| p_c1 ||' for update wait' || p_willing_maxwait;

Huh?... Not sure I would use this. If such code is executed "rarely" -- well, maybe... But re-executing this with many different p_c1 and p_willing_maxwait values would force Oracle to hard-parse each brand-new statement again and again, littering the shared pool and killing scalability.

"Parse as little as possible" -- this is TomÂ’s mantra, and I agree 100% with it.

Consider:

SQL> select /* TAG */ * from Emp where EmpNo = 1
  2   for update wait 1;

EMPNO EMPNAME
----- ----------
    1 John Smith

SQL> select /* TAG */ * from Emp where EmpNo = 1
  2   for update wait 2;

EMPNO EMPNAME
----- ----------
    1 John Smith

SQL> select /* TAG */ * from Emp where EmpNo = 1
  2   for update wait 3;

EMPNO EMPNAME
----- ----------
    1 John Smith

SQL> column SQL_TEXT format a65
SQL> set numwidth 4
SQL>
SQL> select s.SQL_TEXT, s.CHILD_NUMBER CH
  2   from v$sql s
  3  where s.SQL_TEXT like 'select /* TAG */%';

SQL_TEXT                                                            CH
----------------------------------------------------------------- ----
select /* TAG */ * from Emp where EmpNo = 1  for update wait 3       0
select /* TAG */ * from Emp where EmpNo = 1  for update wait 2       0
select /* TAG */ * from Emp where EmpNo = 1  for update wait 1       0

Just look at that: different SQL areas, different execution plans -- say goodbye to scalability and performance. Bind variables really rock, and I would prefer to parse a query once, use binds, and re-execute it many times rather than resort to dynamic SQL.

But unfortunately, we cannot use binds to specify the WAIT timeout...

Thank you again, and I agree with you: our discussion is surely off-topic and I'd better stop here... ;)

Cheers,
Andrew.

P.S. Tom, sorry for spamming this thread. 

LIKE operator

RAVI, November 16, 2005 - 5:56 pm UTC

Hi Tom,
I am not sure whether my request fits here or not; I apologize if I have posted it in the wrong place.
Below are two SQL statements. In the first I am using the LIKE operator and passing an absolute value, but I want to pass that value using a variable whose value changes. In that case, how would I substitute the %? If I have it in the variable, it loses its special meaning.
In the second SQL statement I did it using the substr function, but it is a bit slow. Is there any other way to do it? Can you please suggest a workaround?

SQL> select count(*) from patient_enrollment where 
group_id like '6257%';

  COUNT(*)
----------
   1164589
Elapsed: 00:00:04.22

SQL> var client_id varchar2(15);
SQL> exec :client_id := 6257;

PL/SQL procedure successfully completed.
SQL> select count(*) from patient_enrollment where 
substr(group_id,0,length(:client_id)) = :client_id;

  COUNT(*)
----------
   1164589

Elapsed: 00:00:33.19

 

Tom Kyte
November 16, 2005 - 6:29 pm UTC

but in this case how would I include the %? If I have it
in the variable, it loses its special meaning.

i don't know what that means?



but it seems you want: where group_id like :client_id||'%';



To Ravi

oraboy, November 16, 2005 - 6:04 pm UTC

Will this work?

SQL> select count(*) from patient_enrollment where 
group_id like :client_id||'%' ;
 

that works

Ravi, November 16, 2005 - 10:04 pm UTC

Thank you Tom, and thank you oraboy. Sorry for the confusion, but that's what I was looking for.

Thank you,
Ravi

May be.....

Muhammad Ibrahim, November 16, 2005 - 11:32 pm UTC

1) It would be nice to have some of the useful scripts/functions/code provided by you and others packaged inside Oracle packages/APIs like dbms_utility, so that they would be well tested performance-wise, everyone in the world would use the same functionality, and of course they would be easy to use!

eg: scripts like:
owa_sylk
mail_pkg
spell_number / spell out numbers to words
who_called_me and who_am_i
whoami
tkprof
runstats etc...

2) include leap_year in the date functions
eg: leap_year(year), like next_day()

3) more exception handling
eg: no_data_found for implicit cursors.

4) utl_file could support storing files on the client side too.

5) greatest/least as SQL functions, like max/min.

6) some more mathematical/scientific functions like sqrt etc. as SQL functions.


Regards,
Ibrahim.

Tom Kyte
November 17, 2005 - 8:03 am UTC

1) many of them are (the who functions for example... tkprof, mail_pkg is utl_mail)

3) ? no_data_found is there for implicit cursors

4) that would be called a "virus". plsql is running in the *server*, only the client application itself can touch the client filesystem.

5) ? they are?? not sure what you mean.

6) ? it is???

sqlplus

A reader, November 17, 2005 - 7:25 am UTC

Hi Tom, I think you should improve SQL*Plus.
Options:
1. Fuse SQL*Plus with Perl
2. A Perl command line that includes the SQL*Plus command line, or at least its most important commands

I'm not thinking of a Windows program; the command line is still nice, but it could use more commands and options.

to Muhammad Ibrahim

Peter, November 17, 2005 - 8:05 am UTC

You can get square roots like this!

SQL> select exp(ln(144)/2) from dual;

EXP(LN(144)/2)
--------------
            12

 

Tom Kyte
November 17, 2005 - 8:22 am UTC

ops$tkyte@ORA10GR2> select sqrt( 5 ) from dual;

   SQRT(5)
----------
2.23606798


it is already (and has been) a builtin. 

Sqrt using LN and EXP

Bob B, November 17, 2005 - 8:16 am UTC

Try getting the sqrt of 0 with that ;-)

Tom Kyte
November 17, 2005 - 8:23 am UTC

ops$tkyte@ORA10GR2> select sqrt(0) from dual;

   SQRT(0)
----------
         0
 

Dave, November 17, 2005 - 8:38 am UTC

It was the other comment he said couldn't be done

SQL> select exp(ln(0)/2) from dual;
select exp(ln(0)/2) from dual
              *
ERROR at line 1:
ORA-01428: argument '0' is out of range

 

Tom Kyte
November 17, 2005 - 8:48 am UTC

I understand, I'm just following up with the right way to get the sqrt.

Alberto Dell'Era, November 17, 2005 - 8:55 am UTC

The power of IEEE floating-point arithmetic "infinity":

SQL> select exp(ln( cast(0 as binary_double) )/2) from dual;

EXP(LN(CAST(0ASBINARY_DOUBLE))/2)
---------------------------------
                                0 

nice

A reader, November 17, 2005 - 10:08 am UTC

however
select exp(ln( cast(-1 as complex_number) )/2) from dual;
still does not work

Muhammad Ibrahim, November 17, 2005 - 10:11 am UTC

3) no_data_found for implicit cursors -
I know, and I am following your trick:

for r_something in ( select ... )
loop
    l_found := true;
end loop;
if not l_found then
    -- nothing found, do whatever...
end if;

Instead of this, exception handling for the implicit cursor with WHEN NO_DATA_FOUND - just like exception handling in normal PL/SQL blocks - would be nicer.

5) Sorry, I meant it the other way round. We cannot use min/max in PL/SQL code - or is it possible? So normally I use greatest/least in PL/SQL code.

6) sqrt - I know it is there; I meant that, like this function, maybe some other useful mathematical functions could be added - ones more useful on the business side, in commercial applications. Maybe some ready-made formulas! Tom, this is just a thought :-)

Tom Kyte
November 18, 2005 - 7:23 am UTC

3) that doesn't make sense to me - it is not an error or anything to have zero rows. ;)


5) min/max are aggregates - greatest/least are comparable equivalents for scalars...

6) sorry, I was looking for concrete examples - there are a wealth of math functions in the database - from analytics with linear regression to the new matrix packages in 10gr2 and so on...
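The scalar/aggregate distinction in point 5 can be sketched as follows (a minimal example; it assumes the demo EMP table is installed):

```sql
-- GREATEST/LEAST are scalar functions, directly usable in PL/SQL
-- expressions; MIN/MAX are aggregates that operate over rows in SQL.
DECLARE
    l_big     NUMBER;
    l_max_sal NUMBER;
BEGIN
    l_big := GREATEST( 3, 7, 5 );               -- scalar comparison
    SELECT MAX(sal) INTO l_max_sal FROM emp;    -- aggregate over rows
    DBMS_OUTPUT.PUT_LINE( l_big || ' ' || l_max_sal );
END;
/
```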

Oracle load testing

canuck, November 17, 2005 - 11:52 am UTC

This may be of interest to the reader who was looking for a load testing tool (free, open source): </code> http://hammerora.sourceforge.net/about.htm <code>

"Hammerora is used to parse Oracle trace files and convert them into Oratcl programs that can then be replayed against the Oracle database with multiple users.

The tool came from the desire to simulate a number of users against the database as an aid to learning. Thereby a small home-based learning/developing system could simulate more closely a production environment. It has grown into a tool that can load test an Oracle database from the smallest development system through to large RAC based clusters.

TCL has been used as it provides high performance OCI interaction with an Oracle database without the inconvenience of needing to recompile generated programs. Each generated program has strict TCL/Oratcl compliance and can therefore be run independently of Hammerora at the command line TCL Shell. The power and flexibility of TCL means that the potential functionality is unlimited. The TCL Threads package is used to establish multiple connections against a Oracle 8i,9i or 10g Database."

Alex, November 17, 2005 - 1:08 pm UTC

Whoo-hoo we're up over 300 posts now. What do I win for submitting the record breaking question ;)

What do you win ...

A reader, November 18, 2005 - 8:39 am UTC

A cookie!!! ... for your machine (boooo)

Alex, November 18, 2005 - 5:29 pm UTC

I'm just glad I didn't get banned for causing Tom all kinds of extra work....

import show=y

Andrew, November 25, 2005 - 12:17 pm UTC

Tom,

If import's SHOW option could emit one DDL statement per line, that would make it much more usable - either one per line or some other tag that would make parsing it deterministic. I know dbms_metadata extracts DDL, but export/import has the DDL in the correct order for re-creation. I've taken out the quotes and tried joining the lines together, but it takes a full scrub of the file each time.

export from rman backups

Gianluca, December 04, 2005 - 7:13 pm UTC

A nice feature I found in DB2 on z/OS: the ability to unload (export, in Oracle terminology) data from a COPY (an RMAN backup, in Oracle terminology).

On z/OS we use it for these useful things (consider that we keep 2 months of daily full backups over GDG files on tape):
- we avoid daily exports (saving time and disks/tapes), performing them "on request", usually for moving data from production to test or similar requests from developers
- we can run an export from the past (yes, you could save exports as well as RMAN backups, I know, but that means extra time and space)
- we can export on another system (e.g. a test machine), avoiding CPU and I/O load, using the RMAN files
- we can have consistent exports even with lots of activity during the run (a full backup always gives you a consistent image thanks to the archived logs)
- you can restore the content of a single table in a multi-table tablespace from RMAN backups without restoring the entire tablespace into a scratch instance (tools like BMC's do that for you, but you still need a complete tablespace recovery in a dedicated instance)


Is the new MetaLink built using HTMLDB?

A reader, December 19, 2005 - 10:32 am UTC

It looks familiar. It is also slow, but hopefully they will work the kinks out.

Oracle stress test,

sns, December 19, 2005 - 7:15 pm UTC

Tom,

I am using this thread for my general question.

Tom,

We have a situation where an application with a web server and an Oracle 9i database needs a stress test. The database is about 10 GB (not a big one), but the application is pretty complex (time logging for employees and so on).

The application along with the database is delivered to customer.

From our end we need to simulate that 500 users are logging into the database , run the reports, do DML activities on certain tables (assuming all the 500 could do DML activities) and so on.

I need your suggestion on how to go about this. If you have already done something like this could you please share your ideas (in detail if possible)?

Thanks

Tom Kyte
December 20, 2005 - 8:25 am UTC

... I am using this thread for my general question. ...


why? why would you do that - especially after reading:

Please limit this to a comment or question relevant to the other text on this page.

In order to permit NEW questions on this site, I am taking the firm stance that anything deemed a 'new question' asked here will be ignored.

Please think about what you are going to put into here and unless it is very relevant to the existing thread -- a true FOLLOWUP or REVIEW or CLARIFICATION, please do not post it.

This will allow me to accept new questions (I took 2,000 less in 2004 than 2003, but only because I followed up to 3,000 'reviews' that were actually new questions). This is by popular demand -- we need 'fresh blood' in there.


???


there are hundreds of stress test tools, google will likely be hugely helpful.

</code> http://www.google.com/search?q=oracle+stress+test+tool&start=0&ie=utf-8&oe=utf-8&client=firefox-a&rls=org.mozilla:en-US:official <code>


got it,

sns, December 20, 2005 - 4:51 pm UTC

Tom,

Actually, I searched the site for "stress tool", couldn't find one, and had to force my question into this thread.

I will make sure this does not happen again.

Regards,

Followup

Michael Friedman, December 23, 2005 - 11:25 pm UTC

1. Lack of proxy capability within the DB - we should be able to log onto a non-privileged account, authenticate ourselves, and then have our connection switched to a selected privileged account.

TK: That is not within the database. We have that ability in the database, sqlplus doesn't "do it"

Well, it's not just SQL*Plus - this should be core PL/SQL functionality. There is an undocumented system privilege, "become user", used by IMP/EXP. I would like a similar capability.

--

2. Merge statement should take multiple actions just like Insert

TK: it does - when matched, when not matched. you can insert and update and delete from a table ???

You should be able to merge into multiple tables at the same time just like with Insert.

--

3. The current move away from PL/SQL and transactional interfaces to many components needs to be reversed. For example, there is no way to make use of CDM SDK direct from PL/SQL while maintaining transaction atomicity, etc.

TK: that seems to be a contradictory statement? You say "don't go components" but then say "we need interface to components"? (frankly, I don't know what the CDM SDK is..)

Sorry... that's CMSDK.

I don't care how you build things as long as I get transactional PL/SQL APIs to them. How can I get a document in Oracle Forms and then save it to CM SDK in the same transaction as I save the other form data? I can't... Oracle seems to be turning its back on its core strength - an amazingly functional and reliable transactional platform.

--

Here's a fourth complaint...

Why the heck is there a BITAND operator but no BITOR, BITXOR, or BITNOT?

I'm using UTL_RAW now to replace what should be core DB functionality.



Tom Kyte
December 24, 2005 - 9:04 am UTC

1) you have it - it is called n-tier proxy authentication, and that is your interface to it. That is how you and I can perform this exact feature. It has existed since 8i of the database.
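As a sketch of the proxy-authentication setup (the account names here are hypothetical; the ALTER USER syntax shown is the 9i-and-later form - in 8i the feature was exposed through OCI):

```sql
-- Allow proxy_user to open sessions on behalf of app_user without
-- knowing app_user's password:
ALTER USER app_user GRANT CONNECT THROUGH proxy_user;

-- From SQL*Plus 10g and later, the proxy connection can be opened as:
--   CONNECT proxy_user[app_user]/proxy_password
```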

2) that would be entirely ambiguous - I disagree very much with that idea. If I merge into "t1 and t2" using "t3" and I find a row in t2 that matches - but not in t1 - should I insert it? If not, why; if so, why? And if a row matches in both t1 and t2 - should both be allowed to be updated, and so on?

And if I don't have a match on either t1 or t2 - into which table should the row go? And why?

Whereas it makes sense to be able to take a "set" and insert any given row into any number of tables (we control completely what table(s) the given row goes into) - it does not make sense to me to merge into multiple tables.

We can use multiple tables as the "input" to build the set - but merging into multiple tables, for me, raises more questions than answers.



Raptor

Joe, December 29, 2005 - 3:16 am UTC

Raptor is now available for download.

</code> http://www.oracle.com/technology/products/database/project_raptor/index.html <code>

Cheers!


Hmmm...

Dave, January 06, 2006 - 12:20 pm UTC

Preamble- For the most part Oracle rocks. It has kept me gainfully employed for a number of years. Glad I didn't bet on Ingres in 1996...

My "What don't you like about Oracle?" list...

1. <rant> Silly bugs in released products. I understand the esoteric critters, but a number of releases and patches went out profoundly QC-challenged. I remember a flawed 8.x patch that ruined two days of my life and was not revoked until two weeks after my TAR fingered it. </rant>
2. No courier font in the metalink forum posts.

What I'd like to see...
1. A raptor or sqlplus utility to show side by side
a. Expected return set for each explain plan stage (already there) vs the actual rowcounts , timings, I/Os for each nesting/hash stage (event 10046 without manually processing traces)
b. If the CBO is "surprised" by a cardinality allow it to learn and update by the actual results or additional stat gathering.
c. Feedback if the query's plan would have changed and allow a rerun of the new plan.

2. An EM utility showing horizontal bars representing the start/stop intervals for every active and inactive SQL statement (for span "X", subject to a reasonable number). The x-axis would be time and the y-axis the hash value. This concurrency visualization can be quite useful but is a pain to do manually.

Let me know if I need to rtfm...

Regards,
-Dave


Tom Kyte
January 06, 2006 - 2:24 pm UTC

what I'd like....

1.b - is there, 10g SQL Profiles
1.c - not sure what you meant?



Forgot one

Dave, January 06, 2006 - 1:32 pm UTC

Tom,
Forgot one.

The ability to specify and use a RAM-only landing location for a source temp set using Oracle's "WITH temp AS" SQL structure. This would help with that class of problems where you want to use a large SQL source set a number of times/ways (where inline views and global temp tables might be awkward) and you want to avoid re-pulling the direct reads for every UNION set underneath.

There may be an existing elegant solution I'm unaware of. I'm sure you'll let me know...

Regards,
Dave
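One workaround sometimes used for this (a sketch only - the MATERIALIZE hint is undocumented, and it spools the set to temporary segments rather than guaranteeing a RAM-only location):

```sql
-- Build the common source set once, then reuse it in each branch,
-- instead of re-running the expensive query per UNION member:
WITH src AS (
    SELECT /*+ materialize */ deptno, SUM(sal) total_sal
      FROM emp
     GROUP BY deptno
)
SELECT 'high' label, deptno FROM src WHERE total_sal >  10000
UNION ALL
SELECT 'low'  label, deptno FROM src WHERE total_sal <= 10000;
```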

multi-merge

Michael Friedman, January 06, 2006 - 7:22 pm UTC

TK: Whereas it makes sense to be able to take a "set" and insert any given row into
any number of tables (we control completely what table(s) the given row goes
into) - it does not make sense to me to merge into multiple tables.

Why not? Why can't we use the same methods we use to control which tables the insert rows go to to control which tables the merge rows go to? You could even use almost the same syntax.



Tom Kyte
January 07, 2006 - 9:30 am UTC

how would that be any more efficient than "multiple merges"?

Think about the activity that would take place - you would just be saying "hey, if x=5, merge into T1, if x=6 merge into T2, if x=7 merge into T3" - meaning you need three separate joins in the first place. Nothing at all would be gained (beyond enormously complex syntax) by doing this in a single DML statement.


Rename an Oracle Directory

Michael, January 15, 2006 - 10:42 am UTC

It would be nice if we could rename an Oracle directory, or alter it to point to a different location. I know the RENAME command does not support this, but I can't imagine why. I cannot find any reference to a command to rename or even alter a directory. We are on 9.2.0.5 on Solaris. Logged in as sys:

SQL> create directory pooper as '/u02/ftpdata/davox/pooper';

Directory created.

SQL> rename directory pooper to popper;
rename directory pooper to popper
                 *
ERROR at line 1:
ORA-00946: missing TO keyword


SQL> rename pooper to popper;
rename pooper to popper
*
ERROR at line 1:
ORA-04043: object POOPER does not exist

etc.

alternatively, it would be okay to do the following:

alter directory accts as '/f06/ftpdata/davox/accts'; assuming you had already created a unix "accts" directory in that location on the server. (the current directory location is /u02).

We have about 20 directories on our development server that we want to relocate to a new file system to better mimic production. (We did not have the ability to do this when the database was originally set up). Seems right now I would have to drop and recreate all of the directories, redo grants, etc. 

If we had any references in any code to the old path (it would be unusual), we would be more than happy to replace that code with the production version, which would almost always work right out of the box because the development database directories and external tables would be named (and located) just like in production.
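For what it's worth, one possibility worth testing here (a sketch, reusing the directory name from the example above) is CREATE OR REPLACE DIRECTORY, which repoints an existing directory object at a new path and - since the object is replaced rather than dropped - should leave existing grants in place:

```sql
-- Repoint the existing ACCTS directory object at the new file system:
CREATE OR REPLACE DIRECTORY accts AS '/f06/ftpdata/davox/accts';
```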



 

Merge with Conditional Insert

Koshal, January 19, 2006 - 9:22 am UTC

It is great that we have the MERGE statement in Oracle.
One of the things I would like to have is the ability to filter the records. Typically, out of the 100 records that I process, 10 records I would like to update,
89 records I would like to insert, and one record I would like to
ignore or put in an error log table.

MERGE INTO t1
USING t2
on (t1.col1=t2.col1)
WHEN MATCHED THEN
UPDATE
SET t1.col2 =t2.col2 /* where t1.col>10 */
WHEN NOT MATCHED THEN
/* case when t1.col<20 */
INSERT (col1,col2,col3)
values (t2.col1,t2.col2,t2.col3) /*where t1.col<20 */

/* else
insert into err_log(err_message) values('invalid values '||t1.col) end ; */

Something like this is quite common in business applications.
That way the entire routine could be accomplished by a single SQL statement.
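As it happens, the 10g MERGE syntax comes close to the commented-out wish above - each branch accepts a WHERE condition (and WHEN MATCHED additionally accepts a DELETE clause). A sketch using the same tables:

```sql
-- 10g: conditional update and conditional insert in one MERGE.
MERGE INTO t1
USING t2
   ON (t1.col1 = t2.col1)
 WHEN MATCHED THEN
   UPDATE SET t1.col2 = t2.col2
    WHERE t1.col3 > 10
 WHEN NOT MATCHED THEN
   INSERT (col1, col2, col3)
   VALUES (t2.col1, t2.col2, t2.col3)
    WHERE t2.col3 < 20;
```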

Merge

koshal, January 19, 2006 - 10:06 am UTC

1. It would be nice if we had MERGE with a RETURNING clause, and INSERT (within MERGE) with a select clause/where clause.
2. It would be nice if we could query a PL/SQL table the same way we query a table/temporary table.
3. It would be better if we had isnumber()/isnull() functions rather than using TRANSLATE etc. or PL/SQL functions. This is especially useful when you are migrating from legacy systems to Oracle, wherever data cleansing is required.
4. The ability to automatically analyze (mark as needing analysis) a table if its size has grown by X%.
5. A better v$session_longops, at least in the case of a query on a single table. It needs to show the estimated time for the whole query, not just a part of it.
6. The ability to have the default value for a column refer to another column. In the example below, in 99% of cases I would insert a value for y, but wherever I don't, it should take the default value x-2:
create table t1(x number, y number default x-2)

Thanks
Koshal
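Point 2 above is partly possible today when a SQL-level collection type is used instead of a PL/SQL index-by table (a sketch; the type name is hypothetical, and in 10g the CAST that 9i required can be dropped):

```sql
-- A nested table type declared at the SQL level can be queried
-- with the TABLE() operator, unlike a PL/SQL index-by table:
CREATE TYPE num_tab AS TABLE OF NUMBER;
/
SELECT column_value FROM TABLE( num_tab(1, 2, 3) );
```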


Alex, January 20, 2006 - 11:18 am UTC

This shouldn't compile:

SQL> declare
  2     type rc is ref cursor;
  3     ref_cur rc;
  4     v_dummy number;
  5  begin
  6     open ref_cur for
  7       select 1
  8       into v_dummy
  9       from dual;
 10  end;
 11  /

PL/SQL procedure successfully completed.

And this will run successfully up until 10g, then it will throw a run-time error. 

Problem With Metalink

A reader, January 28, 2006 - 1:33 pm UTC

Tom, Sorry to post the question here.
I am unable to login to Metalink from the time it moved to HTMLDB.
Any idea or solution. I cannot report this problem to Oracle as Metalink itself is not available.

Any browser setting issue ??

Tom Kyte
January 29, 2006 - 8:07 am UTC

metalink is, has been and will be available - please try again, it has been working from the very time it was ported over (and given that you pay for metalink, I cannot imagine you would be paying for something you cannot access - doesn't make sense??)

That and I have no idea what you are referring to here as far as a problem goes... no idea what problem you might be having.

Use of multi-merge

Mike Friedman, January 29, 2006 - 11:33 am UTC

I asked for multi-merge functionality just like the existing multi-insert.

Tom responded:

----
how would that be any more efficient than "multiple merges"

Think about the activity that would take place - you would just be saying "hey, if x=5, merge into T1, if x=6 merge into T2, if x=7 merge into T3" - meaning you need three separate joins in the first place. Nothing at all would be gained (beyond enormously complex syntax) by doing this in a single DML statement.
---

Same benefit as with multi-insert:

1. You are guaranteed that the result from the driving query remains the same for each merge
2. You do not need to run the driving query three times - an important issue if it is slow
3. By specifying the driving query once instead of three times, you reduce the opportunity for errors and produce clearer, more transparent code


Can i use hints in the query clause of export?

ravinder matte, February 07, 2006 - 6:49 pm UTC

Thanks

Tom Kyte
February 08, 2006 - 1:46 am UTC

nope, you provide a where clause, hints don't go there (well, ok, if you have a subquery in the where, you could hint that query block)

Oracle front-end?

Ashutosh, February 08, 2006 - 12:40 pm UTC

After working on Oracle from versions 8 through 10g (and on Forms 3.0, incidentally) I believe I can make this statement: Oracle lacks a solid front-end for its database. Even with Forms 3.0, I remember that the extremely tiresome and intimidating key sequences discouraged new users from adopting it as a standard front-end, but I guess that is history now. This lack of a sound front-end from Oracle Corp. has spawned numerous third-party vendors offering "easy-to-use" front-end software for Oracle databases. This usually translates to an additional investment for the user - on top of what he/she paid for an Oracle license. I wish the Oracle database came bundled with an all-encompassing front-end application, for all platforms, that would totally obviate the need for third-party software - something that encompasses the functionality of PL/SQL Developer and TOAD, two of the most popular "front-ends" around. How many third-party vendors provide "easy-to-use" front-end software for MS SQL Server? Hardly any, to my knowledge. Can people at Oracle take a cue from this fact?

Tom Kyte
February 09, 2006 - 4:17 am UTC

Have you ever looked at htmldb? It's been around for a while.

Alter table unmigrate

A reader, February 22, 2006 - 1:43 pm UTC

A utility to un-migrate migrated rows. The utility does not make indexes unusable, as "alter table move" does. The utility shares the following characteristics with online segment shrink:

Segment shrink is an online, in-place operation.
DML operations and queries can be issued during the data movement phase of segment shrink.
Indexes are maintained during the shrink operation and remain usable after the operation is complete.

Unmigrating rows is an online, in-place operation.
DML operations and queries can be issued during the operation.
Indexes are maintained during the operation and remain usable after the operation is complete.

Syntax:
ALTER TABLE employees SHRINK SPACE;
ALTER TABLE employees UNMIGRATE ROWS;

New features in next release

Anil Pant, March 08, 2006 - 6:24 am UTC

Tom,
Maybe you have been asked these questions many times, but please take some time to answer my queries.

(1) Every release of Oracle has some new features. I was wondering what you think the new features in the next release will be?

(2) Say after 10 years, or 2-3 releases from now, what new features will be available? Agreed, some of the core functionality will remain intact, but other than that will there be any significant change? I am asking from a developer's and DBA's perspective.

(3) Every release eases the work of the DBA (talking about LMTs, PCTFREE and PCTUSED). How do you see that continuing in future releases?

(4) 10 years back there were DBMS packages like dBase, FoxPro and Clipper which would solve the business needs (I am referring to small and medium size companies). Since these were not able to handle the amount of data, and had other performance issues, RDBMSs took over from them. I do agree there are many companies which still run on these DBMSs, but the number has come down.
Do you see something of this sort happening with RDBMSs? By the way, please don't feel bad.

Thanks


Tom Kyte
March 09, 2006 - 12:17 pm UTC

1) no comment right now, bit early.


2) see #1 :)

3) more of it happening - as we add more stuff for you to do, we had better make the mundane tasks more automated.

4) no, not really. I see them becoming more important - for the very reasons you listed that the others have gone away.

My single largest beef

Asher, March 09, 2006 - 3:51 pm UTC

It is in direct relation to PL/SQL's error handling. Quite honestly, when compared to most of the newer languages that have come out, Oracle's error handling capabilities are lacking.

In order to generate a named exception with a text message you have to A: declare the exception, B: use a pragma to assign a number to the exception, and C: then use a separate procedure - in which you cannot directly reference your named exception - to get the number out of it. Quite honestly and simply, I'd love to be able to write 'My_Custom_Exception Exception(-20000)' and cut out the pragma exception_init, and then be able to do raise My_Custom_Exception, 'Useful application error' (or raise_application_error(My_Custom_Exception, 'Useful application error')).

It is quite honestly one of my largest annoyances with PL/SQL. I don't want to have to memorize all of my error codes, or write database tables and functions to store them all away. I'd also like access to more than a mere 1000 different error numbers for raise_application_error, as in a large installation going through 1000 error codes would be exceedingly easy.

My final annoyance is the lack of ability to programmatically access the call stack. It may seem like a fairly simple thing, but on at least one occasion (the one I've been doing most of my work on over the past couple of months) it would have been useful to read it so it could be displayed. I'm writing procedures which are accessed through the OAS, and if one of those procedures has a problem it can sometimes be a pain to diagnose.

As a result I've resorted to committing the ultimate evil (at least in your eyes, not that I can really fault them) and I've put a WHEN OTHERS clause on many of my procs which does an htp.p with the error code and message, because it's the best I can manage to do.
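The declare/pragma/raise pattern complained about above looks like this in full (a minimal sketch):

```sql
-- Today's three-step dance: declare, bind a number with a pragma,
-- then raise via raise_application_error using the raw number.
DECLARE
    my_custom_exception EXCEPTION;
    PRAGMA EXCEPTION_INIT( my_custom_exception, -20000 );
BEGIN
    RAISE_APPLICATION_ERROR( -20000, 'Useful application error' );
EXCEPTION
    WHEN my_custom_exception THEN
        -- the named handler does catch the ORA-20000 raised above
        DBMS_OUTPUT.PUT_LINE( SQLERRM );
END;
/
```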

Alberto Dell'Era, March 09, 2006 - 4:52 pm UTC

Desiderata: having an all_ind_modifications view that records modifications to indexes (the same thing that all_tab_modifications does for MONITORING tables).

I just have this huge DWH table that gets MERGEd, 99% updates and 1% inserts - would be happy to skip the analyze of the immutable PK (and other indexes that aren't defined on the updated columns and so aren't "stale") most of the time.

Another example with GLOBAL indexes - when I drop a partition using UPDATE GLOBAL INDEXES, maybe 1% of the global index gets updated - I'd like to analyze them only when it's needed.

Check whether a particular query ran in parallel or in serial

Anto, March 27, 2006 - 12:16 pm UTC

Hi,

Is there a column in v$sql (or in any other dictionary view) to check whether a past query (select) against the database ran in parallel or not - or is this already available in some other dictionary view?

thanks
Anto

Tom Kyte
March 27, 2006 - 3:16 pm UTC

v$sql_plan would be the place to peek - what plans are in there that represent parallel plans.
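A sketch of that peek (hedged - the exact PX operation names vary by version, and in 10g v$sql_plan also carries a sql_id column):

```sql
-- Statements whose cached plans contain parallel-execution steps:
SELECT DISTINCT p.address, p.hash_value
  FROM v$sql_plan p
 WHERE p.operation LIKE 'PX%';
```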

A reader, March 28, 2006 - 10:15 am UTC

Hi,

Suppose a parallel query - one that would have run in parallel under normal circumstances - ran serially because parallel_max_servers had been reached.

Will this be reflected in v$sql_plan? I guess not, since that just shows the plan that would be used, not the actual one used.

thx
Anto

Tom Kyte
March 28, 2006 - 4:10 pm UTC

v$sql_plan shows the actual plan, always the real actual plan - but it would still be a "parallel" plan; it just would have used fewer execution servers.

A reader, March 28, 2006 - 10:18 am UTC

Does v$sql_plan contain the plans of all the SQL since the instance restart, or is it like v$sql, which contains only the latest SQL from the past few hours or so?

thx
Anto

Tom Kyte
March 28, 2006 - 4:10 pm UTC

things can age out of there, definitely.

it will not have the plans of EVERY query ever executed in that instance since startup in general.

A reader, March 29, 2006 - 6:04 pm UTC

Hi,

Let me rephrase the question in another way :)

Suppose

SELECT * FROM V$sysstat a where a.NAME like '%Parallel operations downgraded%'

shows there are 1300 operations which were downgraded to serial ('Parallel operations downgraded to serial') due to a lack of parallel processes or whatever the reason may be.

Is there any way we can find out which these 1300 operations were?

thanks
Anto



Tom Kyte
March 29, 2006 - 7:38 pm UTC

no, because the v$ views are instance wide, not statement by statement executed.

A reader, March 29, 2006 - 6:19 pm UTC

>> v$sql_plan shows the actual plan, always the real actual plan

thanks for this info, but I remember you saying in some other thread that columns like TEMP_SPACE in this V$SQL_PLAN contains an estimate of the temp. tablespace usage anticipated, not the actual temp. space used by the query - is that right ?

Tom Kyte
March 29, 2006 - 7:38 pm UTC

that is correct, but that is the query plan that was used.

limit on parameters

satya, March 30, 2006 - 11:15 am UTC

Is there a limit on the number of parameters that can be passed into a procedure? We are using 8i and 9i in prod.

Tom Kyte
March 31, 2006 - 11:27 am UTC

not that I've ever hit - I tested over 1,000 at one point (I would never actually DO THAT; I used 1,000 as an unreasonable upper bound)

Broken link above

3360, March 30, 2006 - 1:53 pm UTC

</code> http://forums.oracle.com/forums/thread.jspa?messageID=996304 <code>

Should have & # 996304 on the end with no spaces, but it keeps getting chopped off. This is a bit better

select * except col1 from t;

Jack, April 03, 2006 - 7:40 pm UTC

Wouldn't it be nice if we could select all but a few columns from a table by listing only the ones we don't want? For example:

create view restricted_emp_vw
as
select * except salary
from emp_tb
;

Or this:

create table dedupe_tb
as
select * except my_row_number
from (
select a.*,
row_number () over
(
partition by id
order by date_added desc
) my_row_number
from dupe_tb a
)
where my_row_number = 1
;



Easier unusable indexes

Stewart W. Bryson, April 24, 2006 - 2:25 pm UTC

Tom, tell me if this seems conceivable:

It would be great if I could issue a DDL statement against a partitioned table that would prepare all LOCAL indexes to immediately become unusable if a DML statement were going to touch that partition. Something like: "alter table set local indexes to unusable on dml;" That way, oracle could just set all the indexes for a particular partition to unusable as soon as a DML statement started to work on that partition, but leave the other partitions of the local indexes as usable. After the load was done, I could simply issue: "alter table rebuild local unusable indexes".

That would really change things for ETL developers. Not as big of a paradigm shift as the DML logging clause in 10gR2 or the MERGE in 9i... but still big.

"indirect index"

just nobody, April 25, 2006 - 1:32 pm UTC

If a table has a primary key, then it would sometimes be beneficial to have a secondary index that does not use the rowid to point to the row, but stores the primary key value instead. Since the index would have this primary key by definition, there would be no need for any "extra pointer to the row".

Just like Index organized tables have.

There are some cases where actual reference to table is not needed. Only information contained in index.

Tom Kyte
April 25, 2006 - 2:30 pm UTC

we call that a concatenated index

create index t_idx on t( primary_key_col1, primary_key_col2, ...,
                         another_column, another_column2 );



followup to earlier response

Jeff, April 25, 2006 - 4:58 pm UTC

Tom, I have a question regarding one of your earlier responses to a comment that I thought sounded useful.

"6. Batch commits. When I should be able to do a mass update but ask Oracle to commit at a certain frequency only. I could do this by cursors; but a simple SQL will be very useful. Something very similar to the commit frequency setting in COPY command."

and you said

"6, I disagree, that would be excessively dangerous. How do you restart safely such an operation?"

Why is it dangerous?
This could be very useful for not blowing out all your rollback space when you're doing huge DML operations.

What if this 'feature' existed and we used it as such to update 50 million rows of a 100 million row table.

update emp set id=10 where status='X' commit every 10000;

If the update fails for some reason after updating just 100,000 rows, why couldn't we just restart the update later?

update emp set id=10 where status='X' and id != 10 commit every 10000;


Maybe I'm over-simplifying things, but I'm not seeing the danger.

Tom Kyte
April 25, 2006 - 11:57 pm UTC

because IN GENERAL updates could be:

update emp set sal = sal*1.1 where status = 'X';


Now what?

batch commits - not such a good idea

John, April 25, 2006 - 5:28 pm UTC

Jeff,

In your example, no problems, but what about this one:

update emp set sal=sal*1.04 where status = 'X' commit every 100000 rows

What happens when that crashes?

indirect index

just nobody, April 26, 2006 - 1:39 am UTC

Just trying to clarify my original idea:
In heap organized table your index entry would be

(column1, column2, primarykey_column1, primarykey_column2, rowid)

Now if (primarykey_column1, primarykey_column2) together form the primary key, they already identify the row in use, so there is no need for the rowid. Naturally it is faster to fetch the row from a heap table using the rowid, but there are situations where this kind of index exists only to support queries and the actual row is never referenced.

IOT tables have this kind of index for non-primary-key indexes. Could this also be brought to heap-organized tables?

Tom Kyte
April 26, 2006 - 7:57 am UTC

I don't see this micro-optimization - one that would not be used in most general cases (an index that is never used to access the table) - as being useful, or ever being implemented, myself.

Yes, but they are not ALWAYS dangerous.

Jeff, April 26, 2006 - 9:39 am UTC

"because IN GENERAL updates could be:

update emp set sal = sal*1.1 where status = 'X';

Now what?"


The answer is simple. You don't use the 'commit every' option in that scenario.
If we're going to say NO because it COULD BE dangerous, then there are dozens of other things that we shouldn't have in the database because they could be dangerous. How about truncate? That could be really dangerous if used incorrectly.

I'm just saying that having the option available would be very useful, assuming you use it correctly.


Tom Kyte
April 27, 2006 - 4:06 am UTC

but if you permit it, people will do it. No, I don't see this as being a good idea personally.

more on batch commits...

Jeff, April 26, 2006 - 10:17 am UTC

Better yet, if you know you will need batch commits, just modify the table and add a 'flag' column.

update emp
set sal = sal*1.1, bulk_update = 'sal x 1.1'
where status = 'X'
and bulk_update != 'sal x 1.1'
commit every 10000;

If the statement fails, just re-run.


Tom Kyte
April 27, 2006 - 4:10 am UTC


begin
    loop
        update emp set sal = sal * 1.1, bulk_update = 'sal x 1.1'
         where status = 'X'
           and bulk_update <> 'sal x 1.1'
           and rownum <= 10000;
        exit when sql%rowcount = 0;
        commit;
    end loop;
end;
/




stateless packages

a writer, April 27, 2006 - 11:48 am UTC

When using a J2EE architecture there is usually no meaningful state in a database package. Each connection is taken from a connection pool, packages are called, SQL statements are executed, and then the connection is released back to the connection pool.

If one could define these packages as 'stateless' then there would not be (at least not so many of) these 'existing state of package has been discarded' error messages and failing calls.


Tom Kyte
April 27, 2006 - 3:27 pm UTC

Scary - I have no clue what you are referring to - our latest discussion has been about transactional integrity. What are you discussing?

Jeff: "update ... commit every n"

Gabe, April 27, 2006 - 12:37 pm UTC

<quote>
Why is it dangerous?
This could be very useful for not blowing out all your rollback space when you're doing huge DML operations.
</quote>

Except for the rollback space concern (which can be mitigated without this new syntax)... what would be the other advantages?

Otherwise it seems to be just syntactic sugar.

And the existing transactional model is thoroughly impacted... you now have a DML statement which commits any pending changes for the existing/current transaction... you have a DML statement which in itself represents many transactions. Forget about restart... how do you roll back this entire statement?

So unless one changes the entire transactional system, the only option is to make this kind of "statement" something other than a "DML statement"... which would be a program... which you can do right now with a pl/sql procedure taking the update as a string and batch_size as a second parameter, dynamically executing the string||' and rownum < '||batch_size and committing until the job is finished. Document that it commits prior work, document that it may commit part way through, document that reversing the modifications is the end-user's responsibility, sql injection maybe, dynamic sql with hard parses, etc.

gabe@XE> create or replace procedure run_batch
2 ( sql_stmt varchar2
3 ,batch_size number
4 )
5 authid current_user
6 as
7 stmt varchar2(32000);
8 sfty number := 10000;
9 n number := 0;
10 begin
11 commit;
12 loop
13 execute immediate sql_stmt||' and rownum <= '||batch_size;
14 exit when sql%rowcount = 0;
15 commit;
16 n := n + 1;
17 exit when n > sfty;
18 end loop;
19 commit;
20 end;
21 /

Procedure created.

Elapsed: 00:00:00.06
gabe@XE> select status,count(*) from x group by status;

S COUNT(*)
- ----------
X 12064

Elapsed: 00:00:00.01
gabe@XE> exec run_batch('update x set sal=-sal, status=''A'' where status!=''A''',1000);

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.19
gabe@XE> select status,count(*) from x group by status;

S COUNT(*)
- ----------
A 12064

Elapsed: 00:00:00.01

So sure, in theory they could implement something like 'update Â… commit every n' in the database, but that will not be a SQL DML statement any more.

stateless packages

a writer, April 28, 2006 - 1:41 am UTC

I was trying to discuss things I don't like about Oracle.

I don't like the idea that package state is kept in situations where, from the application's point of view, there is no 'state'.
I would like to get rid of these 'existing state of package has been discarded' error messages (at least most of them).



Tom Kyte
April 28, 2006 - 2:12 am UTC

umm, but like I was saying


Oracle keeps the state of packages whose programmers included "state".

The developer of the package created a package, wrote a package - that HAS A STATE.

If the developer did not use package global variables - then the package WOULD NOT HAVE A STATE.

If the developer codes a package that does use package global variables - the developer has explicitly told Oracle "maintain state"

Oracle does what it is told (usually). In this case, the developer asked us to maintain a state.

IF YOU DO NOT WANT a state, thou shall not use package global variables and no state will exist.

Still not getting your point, unless and until you write a package that says "hey, I've got a state - maintain it" - we won't. It is entirely up to you.
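Tom's distinction can be sketched in two package specs (the names here are illustrative): only the first, with a package-level variable, has session state that can be discarded.

```sql
-- Has state: g_counter is a package global, so each session keeps its own
-- copy and recompiling the package invalidates that state (ORA-04068).
create or replace package stateful_pkg as
  g_counter number := 0;
  procedure bump;
end stateful_pkg;
/

-- No state: nothing but subprograms, so there is no "existing state of
-- package" to discard when it is recompiled.
create or replace package stateless_pkg as
  procedure do_work(p_id in number);
end stateless_pkg;
/
```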

well aware of the cursor approach...

Jeff, May 01, 2006 - 1:51 pm UTC

no one said it couldn't be done, the cursor approach was already mentioned in the previous quote.

"Except for the rollback space concern..."
Gabe, that IS the concern.

All I was saying is that a 'commit every' feature would be nice to have, and is not dangerous if implemented properly.

Tom Kyte
May 02, 2006 - 3:05 am UTC

it is - I disagree with you, and we'll have to agree to disagree. There are many features I wish we could "pull" from the database due to the abuse heaped upon them by developers just "not getting it" and using them entirely inappropriately, such as:

o autonomous transactions
o when OTHERS in plsql
o TRIGGERS

I will not promote the addition of features that I obviously know will be used entirely inappropriately (not when I cannot get features that looked useful - but turn out to be used wrong more than they are used correctly - retracted!)


That "commit every N records" is like having a "let's trash data integrity and generate ora-1555's as easily as we can" feature (eg: the commit every N records is an ora-1555 just *waiting* to happen - you are

o reading table A

o modifying table A

o committing bits of work on table A, allowing the undo to be overwritten - in fact THIS IS YOUR STATED GOAL/DESIRE - to overwrite this undo

o but your query READING table A needs those bits of undo you are overwriting in order to get a consistent read on table A


so, not only would you have to DEAL with the fact that this statement might fail, you have to deal with the fact that the probability of this statement failing is VERY HIGH.

Records, Collections and Forall

Chris Seymour, May 03, 2006 - 9:55 pm UTC

I came across an annoying thing today that, to the best of my knowledge, is still not allowed in any version (even 10G). If this is actually allowed in 10G now, I apologize.

I was trying to use forall with a pl/sql table that was made up of records. Unfortunately I found out the hard way that you cannot specify the elements individually; you can only use forall if you insert the entire record in one shot. That assumes, however, that your columns match the record order exactly. I would love to be able to do something like this:

declare
type r_project is record
(project_id number,
task_id number,
parent_task number);

type tb_project is table of r_project
index by binary_integer;

t_project tb_project;

begin
--Fill array with values (assume this step is already done)
forall i in t_project.first..t_project.last
insert into table
(col1,
col2,
col3,
col4,
creation_date,
created_by,
last_update_date)
values
(t_project(i).project_id,
t_project(i).task_id,
t_project(i).parent_task,
'Y',
sysdate,
fnd_global.user_id,
sysdate);
end;
/

This hits a PLS "unimplemented feature" error. The workaround (so I'm told) is to create a separate pl/sql table for each column. It would be nice to be able to use a record and table in the format above.
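A sketch of that workaround - one collection per column instead of a collection of records (the table and column names here are illustrative, and the collections are assumed to be filled in lock-step):

```sql
declare
  type num_tab is table of number index by binary_integer;
  t_project_id  num_tab;
  t_task_id     num_tab;
  t_parent_task num_tab;
begin
  -- assume the three collections are populated identically here
  forall i in 1 .. t_project_id.count
    insert into project_tasks (project_id, task_id, parent_task)
    values (t_project_id(i), t_task_id(i), t_parent_task(i));
  commit;
end;
/
```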

Regarding REUSE SETTINGS

Giridhar, May 09, 2006 - 1:27 am UTC

Hi Tom,
In order to recompile a pl/sql procedure or function
with the same settings (whatever were defined at the time of earlier compilation) in oracle 10G, we need to use
REUSE SETTINGS. Otherwise, oracle will take the current session settings.

Won't this new clause force users to modify existing scripts/programs to use REUSE SETTINGS if we want to take advantage of the new features?

Suppose I want to use the new compiler warnings feature, and I recompile a few of my objects with the new settings.

If I have some scripts which just recompile without REUSE SETTINGS, the objects would be recompiled with the session's default values.

Wouldn't it be nicer to have the new keyword be "RESET SETTINGS" for when I want to change to the current session settings, and to simply reuse the previous compilation settings by default?

Thanks tom..


Tom Kyte
May 09, 2006 - 7:45 am UTC

This will force no one to do anything.

If you want to use a feature, you shall modify your scripts to support the use of such a feature.

Else, you won't.

(not sure why you'd be recompiling manually anyway, I never do, just let it "happen")

small fry...

Robbert, May 17, 2006 - 12:05 pm UTC

Hello Tom,

My wishes are actually pretty small compared to the wishes above:
1) DBMS_OUTPUT.FLUSH, or something like that, that flushes out the dbms_output. In the meantime I use inserts into a dummy table, but that means committing repeatedly.
2) A public view variation on DBA_DEPENDENCIES that lists whether the tables used are source or target tables. This would give lineage insight into larger batch jobs.
2_sub_A) A similar public view on the OWB repository.
2_sub_B) DBA_DEPENDENCIES listing actual procedures/functions besides PACKAGE BODY.

cheers,
Robbert

Tom Kyte
May 18, 2006 - 9:52 am UTC

1) you would have to use a pipelined function - control has to return to the CLIENT application, because the client fetches the data and prints it.

So, instead of:

exec some_procedure;

you would

select * from table( some_function );

and your function would "pipe row( 'your message' );". The flush size would be the client array fetch size.
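A minimal sketch of that approach (the work inside the loop is a placeholder; SYS.ODCIVARCHAR2LIST is used here simply as a convenient built-in collection type):

```sql
create or replace function long_job
  return sys.odcivarchar2list pipelined
as
begin
  for i in 1 .. 5 loop
    -- ... do one unit of real work here ...
    pipe row('finished step ' || i);  -- shown when the client fetches
  end loop;
  return;
end long_job;
/

-- select * from table(long_job);
```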




Follow-up to an old post

Michel Cadot, May 18, 2006 - 3:18 pm UTC

Hi Tom,

I come back to my post above:
http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:48704116042682#50678380017237

It now works in 10.2:

SQL> select ( select dummy
  2           from ( select dummy
  3                  from dual b
  4                  where b.dummy = a.dummy
  5                )
  6         ) dummy
  7  from dual a
  8  /

D
-
X

1 row selected.

I didn't check how many levels we can have.

Regards
Michel 

Correction

Michel Cadot, May 22, 2006 - 3:01 am UTC


The previous query works in 10.2.0.1 but no longer works in 10.2.0.2.
I opened an SR to find out which is the correct behaviour.

Regards
Michel



pipelined function vs dbms_output flush

Robbert, May 23, 2006 - 5:40 am UTC

Hello Tom,

your suggestion to use a pipelined function has definite merit if the function of the code is generate all this output.
I however tend to use dbms_output for debugging, usually between "if l_debug = 1 then ... end if".
In this case you're actually just waiting for the output to see where it goes wrong.
The flush would do 2 things:
1 - print the output to the screen, even though the procedure is still running.
2 - clear the buffer so that you don't get a buffer overflow when looping through more data.

Tom Kyte
May 23, 2006 - 7:36 am UTC

there is nothing to "flush to"; stored procedures run from START to FINISH. All dbms_output does is fill in "an array", and later a client makes yet another call to the database to GET the array and print it.


If you want "real time" sort of feedback for debugging, I suggest always to use UTL_FILE.

That way you can generate a trace file (in production for example) and have a dba mail it to you for debugging.

That way you can use "tail -f" in test for "real time" feedback.
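A minimal debug logger along those lines (DEBUG_DIR is an assumed directory object that must already exist, and the owner needs write access to it):

```sql
create or replace procedure debug_msg(p_msg in varchar2)
as
  l_file utl_file.file_type;
begin
  -- append to the trace file so "tail -f" picks up each line as it arrives
  l_file := utl_file.fopen('DEBUG_DIR', 'debug.log', 'a');
  utl_file.put_line(l_file, to_char(sysdate, 'hh24:mi:ss') || ' ' || p_msg);
  utl_file.fclose(l_file);
end debug_msg;
/
```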

UTL_FILE for debugging.

Robbert, May 23, 2006 - 7:55 am UTC

Good tip.

Thanks!

Tom Kyte
May 23, 2006 - 7:56 am UTC

Beroetz, May 29, 2006 - 3:36 am UTC

Isn't this strange?
Consider the following scripts:

ORA9i2 > CREATE TABLE Table1 (a INT PRIMARY KEY, b INT)
OK, done in 1,456 seconds

ORA9i2 > CALL Dbms_Wm.EnableVersioning('Table1')
OK, Done in 1,996 seconds

ORA9i2 > RENAME Table1 TO Table2
OK, Done in 0 seconds
Oracle has renamed only the WM-created view, not the table itself!!! Now I can do all kinds of DML operations on the new Table2 view. But:

ORA9i2 > CALL Dbms_Wm.DisableVersioning('Table1')
Error: ORA-00942: table or view does not exist

ORA9i2 > CALL Dbms_Wm.DisableVersioning('Table2')
Error: ORA-20132: table 'Table2' is not version-enabled
So in this situation I can't disable the versioning of the table!!!

I have to rename back the view to disable the versioning:
ORA9i2 > RENAME Table2 TO Table1
OK, Done in 0 seconds
ORA9i2 > CALL Dbms_Wm.DisableVersioning('Table1')
OK, Done in 0.845 seconds

1) Is it a bug? Why does Oracle let me rename the system-created view?
2) Is there a way to rename the version-enabled table without such unpleasant consequences?

Best Regards!

Tom Kyte
May 29, 2006 - 7:41 am UTC

You are not supposed to perform DDL directly on the workspace manager tables, this is documented in the workspace manager guide. There is in general nothing "preventing" you from doing this (just like there is in general nothing preventing you from modifying the data dictionary)

Workspace manager provides an API to perform DDL.

</code> http://docs.oracle.com/docs/cd/B19306_01/appdev.102/b14253/long_intro.htm#sthref196 <code>

Subquery name resolution

Michel Cadot, June 09, 2006 - 7:33 am UTC

Hi,

Following up my previous post:
</code> http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:48704116042682#64283169182746 <code>

The test I made "succeeded" because of a bug in 10.2.0.1 exclusively on Windows XP. It's bug n°5240169.
Currently the "correct" behaviour is a limit of one level in the name resolution.

Regards
Michel

Idle Time parameter.

Suren Dharanikota, June 13, 2006 - 9:24 pm UTC

Tom,

I have seen this specific issue ":N" number of times now, and I am sure you can use a "bind variable" for it in your forum. ;-)

The last time I talked to you, you mentioned that there is an "Enhancement Request" for this feature. I am not sure in which release it will come up.

Anyway, when an application is poorly written (and it is of course poorly written 90% of the time), it doesn't close its connections gracefully after the transaction is completed. Even with the idle_time parameter set, the process remains at the server level with a "SNIPED" status at the database level. It would be nice if Oracle just terminated the connection at the server level.

1) Why should Oracle care, waiting for the application to give a handshake? The "idle time" parameter should always take precedence.

2) Once Oracle starts throwing end-of-communication-channel errors at the application, they will definitely fix their app :).

3) OR ... Oracle should provide a clean-up script/utility which terminates all processes with "SNIPED" status at the database level and their corresponding server processes.

Thanks,
Suren

Tom Kyte
June 14, 2006 - 8:40 am UTC

I would hope it would NOT close the connections - that would be really bad.



Merge and Lock

Branka, June 14, 2006 - 6:18 pm UTC

Should I lock whole table before I start merge?

Tom Kyte
June 15, 2006 - 8:26 am UTC

no, not in general.

Merge and Lock

Branka, June 15, 2006 - 9:24 am UTC

Can you tell me more about it?
I have a "batch" program that inserts data into a table that users use at the same time. It runs 3 times a day.
The merge needs 2-3 minutes to execute. Will users be able to update/delete data in that table while I am doing the merge?

Tom Kyte
June 15, 2006 - 9:29 am UTC

sure, merge will only lock the data it modifies, other users may modify the rest of the data as they wish.

Merge and Lock

Branka, June 15, 2006 - 9:49 am UTC

Thanks a lot.
Just to clarify.
Since the merge modifies the whole table, I understand now that from the user's perspective it will modify rows one by one, so there is a good chance that the merge and a user will not try to modify the same row at the same time.

Tom Kyte
June 15, 2006 - 4:43 pm UTC

merge does not modify "entire table" unless you have a set that will match every row.

If you have a set that will cause every row to be updated - sure, you may certainly use the lock table command to ensure you can do the merge "as fast as possible".


but - to say "merge modify whole table" is just not accurate.

Is Raptor Working for everybody?

Maverick, June 15, 2006 - 4:22 pm UTC

I don't know about others, but when I tried Raptor a couple of months back it took 4 minutes to start and bring up its GUI[??]. It takes the same time every time I start Raptor.

I am not sure if it was just my PC or if everybody is facing this problem.

I am very frustrated by TOAD [with its access violation messages], and when I heard about Raptor I was glad Oracle was bringing something to replace TOAD. But when I used it my feelings changed and I stopped using it immediately. It was that slow. Worse, my VSS associated all SQLs with Raptor. Now when I open code directly from VSS [I know we don't have to], it opens with Raptor and takes a sweet 4 minutes to show the code.

I thought I'd point it out, since everyone is talking about improvements and changes to the Oracle database and its tools.

Thanks,

Tom Kyte
June 16, 2006 - 6:36 pm UTC

try it again.

Merge and Lock

Branka, June 15, 2006 - 6:35 pm UTC

Thank you one more time.
I was thinking about specific problem that I am working on, where I do modify whole table.


Merge changes pending

Rakesh Bansal, June 16, 2006 - 4:22 pm UTC

My Wish List

(1) The ability to print the column name when Oracle throws the error "Inserted value too large for column" - something like "Inserted value too large for column EMP_NAME, ...". It is very difficult to pinpoint the column when dealing with a table having 200 columns where the values are not hardcoded but passed as variables.

(2) Merge is still throwing error "Unstable set of rows". Remember TOM, you filed a bug on 21st Feb'05 when I raised this issue in this forum.

(3) SQL*Loader with an insert-update facility, the same as the Merge statement or the DB2 Import utility.

Tom Kyte
June 16, 2006 - 7:23 pm UTC

1) wish granted

ops$tkyte@ORA10GR2> create table t ( x varchar2(1), y varchar2(1) );

Table created.

ops$tkyte@ORA10GR2> insert into t values ( '1', '1' );

1 row created.

ops$tkyte@ORA10GR2> insert into t values ( '1x', '1' );
insert into t values ( '1x', '1' )
                       *
ERROR at line 1:
ORA-12899: value too large for column "OPS$TKYTE"."T"."X" (actual: 2, maximum:
1)


ops$tkyte@ORA10GR2> insert into t values ( '1', '1y' );
insert into t values ( '1', '1y' )
                            *
ERROR at line 1:
ORA-12899: value too large for column "OPS$TKYTE"."T"."Y" (actual: 2, maximum:
1)


2) give me a pointer - I believe the "bug" was it should have thrown it BOTH TIMES (and I believe you never wanted it - so no wish granted)

3) you have it - we call it "external tables" - wish granted. 
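For readers unfamiliar with the external-table route in (3), a rough sketch (the directory object DATA_DIR, the file name and the columns are all assumptions for illustration):

```sql
create table emp_ext (
  empno number,
  ename varchar2(30)
)
organization external (
  type oracle_loader
  default directory data_dir
  access parameters (
    records delimited by newline
    fields terminated by ','
  )
  location ('emp.csv')
);

-- insert-or-update from the file, MERGE-style:
-- merge into emp e
-- using (select * from emp_ext) x on (e.empno = x.empno)
-- when matched then update set e.ename = x.ename
-- when not matched then insert (empno, ename) values (x.empno, x.ename);
```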

Yes very useful thread

Khalid, June 17, 2006 - 11:20 am UTC

Here's what I wish:

I wish pl/sql gave me the ability to skip an iteration in the loop, such as break, and continue statement in the C language. without using a goto statement?

for rec1 in c1 loop
-- How do I skip an iteration?
for rec2 in c2 loop
end loop;
end loop;

Tom Kyte
June 17, 2006 - 4:04 pm UTC

I've always used "if" myself (finding continue to be a glorified goto - but very very DIFFERENT from break).


eg: instead of:

for rec1 in c1
loop
if some_condition
then
continue; -- to skip over rest of code
end if;

...... rest of code ...
end loop;


you code:

for rec1 in c1
loop
if NOT(some_condition)
then
..... rest of code ....
end if;
end loop;

same net effect, no "goto" like processing.
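For what it's worth, PL/SQL did eventually get a CONTINUE statement (in 11g), so on later releases the originally wished-for form is valid:

```sql
begin
  for i in 1 .. 10 loop
    if mod(i, 2) = 0 then
      continue;  -- skip the rest of this iteration
    end if;
    dbms_output.put_line(i);
  end loop;
end;
/
```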


Thanks, but

Khalid, June 17, 2006 - 6:16 pm UTC

First of all thanks for the response.

Yes, 'break' is much different from 'continue'. I have a rough equivalent of break in 'exit', but I don't have an equivalent of 'continue' - that is what I should have said, but I ended up saying something else (my communication needs some improvement there).

Your solution of using NOT (some_condition) is what I did this morning as well, but I had a 'for rec2 in c2' as shown below. Not a problem, but sometimes difficult to read when large blocks are nested.

I don't look upon continue as a glorified GOTO, rather I got this neat way in which I can pre-empt from all the extra processing upon a certain condition, and forget about dealing with it.

for rec1 in c1
loop
if NOT(some_condition)
then
..... rest of code ....
for rec2 in c2 loop
.... c2 processing
end loop;
end if;
end loop;


Tom Kyte
June 18, 2006 - 12:11 pm UTC

...
I don't look upon continue as a glorified GOTO, rather I got this neat way in
which I can pre-empt from all the extra processing upon a certain condition, and
forget about dealing with it.
.....

but that is "goto" :) virtually by definition. Skip over this stuff, goto this place and pick up again.


"for rec2 in..." is just part of .... rest of code .....


(in fact - .... rest of code might make a nice subroutine!)

cost of procedure in explain plan

A reader, June 23, 2006 - 11:52 am UTC

There should be a method to get the cost of a whole procedure/package in the explain plan, rather than looking at all the SQLs individually.

Tom Kyte
June 23, 2006 - 1:24 pm UTC

the cost of a procedure.... hmmm, think about it...


tell me how to 'cost' this:


if ( something )
then
select
if ( something from select is 'x' )
then
update
else
insert
delete
select
end if
else
select
if ......



if stored procedures didn't have any sort of procedural processing, then maybe - just maybe - you could "add up all of the costs of all of the sql since they would all be executed"

But then you would have a pretty good "random number that is not useful to use in any sense of the word for anything"



Don't likes.....

A reader, June 23, 2006 - 4:00 pm UTC

max(decode...) group functions being related to block size!!

Code refactoring is needed.

10g Enterprise Manager (aka Database Control) is web based!! A stateless connection.

Why does Oracle promote that? What is the advantage?

Tom Kyte
June 24, 2006 - 11:24 am UTC

stateless rocks and rolls, I would not have it any other way. Waste of resources to do "client server"



I have always wanted

Lior, June 24, 2006 - 6:41 pm UTC

to be able to connect internally
without the need to specify the environment variable
ORACLE_SID..

just like connecting using a listener in sqlplus

> scott/tiger@ORCL

using a special character (% for example) :

> scott/tiger%ORCL

Stateless !!

A reader, June 26, 2006 - 1:12 pm UTC

Is that the reason to make Database control a web-based application ?

You will have to separately run Apache or whatever web server on the box and, more than that, manage it!!

If there is a large tablespace creation, what if the connection is gone? (Sure, the tablespace will be created, no doubt about it, but Database Control might not show the correct result!!) And Oracle is trying to make DBAs totally dependent(??) on "web-clicking" over time!!!





Tom Kyte
June 26, 2006 - 2:35 pm UTC

No, there are many other reasons (like the zero client install - very nice, my personal favorite)

I'd rather manage a single instance of something serving an application than have to manage some egregious number of clients that would need installation, configuration, patching, upgrading. Not only that, but I could only access the administrative system from a terminal that had, well, the software installed. How painful is that?


Let's see -

one install, versus N client installs
one thing to manage, versus N things to manage
stateless (always a bonus)
zero client install - access from anywhere, very nice.


I don't understand the attitude here. Why is clicking in a web browser more insidious than clicking in some thick client?!


You have sqlplus just like me as well - you don't have to use anything you don't want to use.


thanks

A reader, June 29, 2006 - 12:00 pm UTC

For some reason I have the perception that web-clicking is "not reliable" - maybe it's just me!!

sqlplus is "the greatest" among all of these, but if the industry is moving, you have to move.

Tom Kyte
June 29, 2006 - 1:11 pm UTC

I buy most everything online, I do my banking online, I buy/sell stock online.

They are all clicks. I worry more about the reliability of those clicks than I do clicks in enterprise manager for the most part. And they have worked for me so far.

Stock?

A reader, June 29, 2006 - 5:15 pm UTC

Tom
I buy/sell stock online.....
How do you find time for all this? I'm sure you must be doing very well at this also.
Regards,

Tom Kyte
June 29, 2006 - 5:19 pm UTC

well, I don't do it *often* :)

Expanding Ask Tom to "AskToms"

A reader, June 29, 2006 - 5:51 pm UTC

Hi Tom,

I know it is just "you" behind this site.

Do you have any idea of extending this site to "AskToms" - meaning many people working together to answer the questions :-) as a professional group?

Did this ever strike your mind before, or am I proposing it for the first time (I bet not)?

Well, the point is: if you have any such ideas, there are so many of your followers out there (including me) who do things the "AskTom" way :-) - please do remember to bring them into the AskToms group!!

Thanks !

Tom Kyte
June 29, 2006 - 6:54 pm UTC

see the "other resources" tab above - they are already out there.

My 2 Cents

TRS, June 30, 2006 - 12:02 am UTC

I hate the defaults in the init.ora parameter file for the "out of the box" installation. I have had to, over and over and over again, "correct" these defaults for a "simple" Oracle installation.... Gawd!

PLSQL table meta data

Andrew Markiewicz, July 28, 2006 - 4:03 pm UTC

I wish that I could interrogate a plsql table structure programmatically so it would be possible to iterate over the attributes and write more generic reusable code that uses plsql tables.

In Perl this would be something such as:
foreach $col (keys %plsqlTab) {
...
}

Then $col is set to each attribute name and the corresponding value can be found with $plsqlTab{$col}. In Perl that is.

I find it very frustrating that each use of the plsql table requires every attribute to be explicitly named.

For example:
TYPE rec is RECORD(
fld1 varchar2(30)
,fld2 date
);
TYPE tab IS TABLE of rec index by BINARY_INTEGER;

xTab tab;
xTab2 tab;
idx integer;
...
idx := xTab.FIRST
while idx is not null
loop
-- It would be convenient to be able to iterate
-- over the attributes programmatically instead
-- of requring a line for each attribute.
--
xTab(idx).fld1 := xTab2(idx).fld1;
xTab(idx).fld2 := xTab2(idx).fld2;

idx := xTab.NEXT(idx);
end loop;
...


Why? Is there something about the plsql table data structure that makes it impossible to interrogate its structure at run time?

This becomes even more of an issue when trying to publish plsql procedures that return a plsql table as a web service. JDeveloper will generate the code for you, but it looks at the current plsql data structure and generates a separate line explicit for each attribute. When the plsql table changes, the web service must be maintained. It would be better to programmatically iterate through the attributes and not have that code maintenance.

Tom Kyte
July 28, 2006 - 8:59 pm UTC

plsql is a strongly typed language. How would you do this in C, or Java, or Ada, or PL/I, or Cobol, or ......

Compiled vs interpreted

Andrew Markiewicz, August 01, 2006 - 9:51 am UTC

Is it due to strong typed variables or degree of compilation?

In Java we could use the java.lang.Class.getFields method to return an array of the fields of a class at runtime. From my understanding that is possible because Java runs in a JVM, is partially interpreted, and reflection offers the metadata needed.

Similarly, plsql is compiled to pcode which is also interpreted at runtime. Does that offer the possibility of providing metadata similar to the way Java does?

Tom Kyte
August 01, 2006 - 10:29 am UTC

you have the anydata, anytype, any**** types for "introspection" if you like as well.

SQL*Plus improvements

Alex, August 01, 2006 - 11:02 am UTC

I had been using Oracle long before SQL Server - my current organization uses both.

I wish SQL*Plus had many of the conveniences that come with the MS SQL Analyzer tool:

1. execute only the mouse-highlighted portion of a procedure

2. ease of copying result sets (presented in a grid) and pasting them into other software like Excel

3. color-coded keywords (some kind of IDE, if you like)

4. separation of grids for result sets and messages

5. Finally, why does a SQL*Plus result set take a long time to display on screen or spool to file?

Also, why does Control-C sometimes not halt the display?



Tom Kyte
August 01, 2006 - 6:49 pm UTC

5) eh? no way. not unless you have a query that takes long to run. plus is about the fastest thing going.


http://www.oracle.com/technology/products/database/sql_developer/index.html

get sql developer.
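For what it's worth, when SQL*Plus output does feel slow, the display settings are usually the bottleneck rather than the tool itself. A minimal sketch using standard SQL*Plus SET commands (the spool file name is illustrative):

```sql
-- fetch more rows per round trip and stop painting rows to the
-- terminal while spooling; both typically speed up large extracts
set arraysize 500
set trimspool on     -- drop trailing blanks from spooled lines
set termout off      -- suppress screen output while the script spools
spool results.txt
select * from all_objects;
spool off
set termout on
```

Note that TERMOUT OFF only takes effect when the statements are run from a script file, not when typed interactively.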

to: Alex

A reader, August 01, 2006 - 11:22 am UTC

SQL*Developer was developed to provide those features for Oracle users:
1. execute only the mouse-highlighted portion of a procedure
2. ease of copying result sets (presented in a grid) and pasting them into other software like Excel
3. color-coded keywords (some kind of IDE if you like)
4. separation of grids for result sets and messages
http://www.oracle.com/technology/products/database/sql_developer/index.html


Andrew Markiewicz, August 01, 2006 - 3:10 pm UTC

SYS.ANYDATA doesn't seem to work for converting a PL/SQL table.

-> declare
2 vdata SYS.ANYDATA;
3
4 type myrec is record(
5 fld1 varchar2(10)
6 ,fld2 date
7 );
8 type mytab is table of myrec
9 index by binary_integer;
10
11 tab mytab;
12
13
14 begin
15 tab(1).fld1 := 'hi';
16 tab(1).fld2 := sysdate;
17 tab(2).fld1 := 'there';
18 tab(2).fld2 := sysdate;
19
20 -- vdata := sys.anydata.convertObject(tab);
21 vdata := sys.anydata.convertCollection(tab);
22 -- vdata := sys.anydata.convertRaw(tab);
23
24 end;
25 /
declare
*
ERROR at line 1:
ORA-22370: incorrect usage of method ConvertCollection
ORA-06512: at "SYS.ANYDATA", line 0
ORA-06512: at line 21

Use of the convertObject and convertRaw methods does not compile due to a parameter type mismatch.

Thanks for your responses Tom. If Oracle could provide access to the metadata of plsql tables somehow, that would be a huge improvement. But for now, it's still a wish.

Tom Kyte
August 02, 2006 - 10:41 am UTC

plsql types (boolean, records, table of records, index by tables) are only ever going to be visible to .... plsql really.


index by tables of scalars - we can bind to them from 3gls but that is about it.

Use an object type instead.
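To illustrate Tom's suggestion: a sketch of the same data reworked as SQL object/collection types (type and attribute names are illustrative), which ANYDATA can wrap because the types live in the data dictionary rather than only inside PL/SQL:

```sql
create or replace type myrec_t as object (
    fld1 varchar2(10),
    fld2 date
);
/
create or replace type mytab_t as table of myrec_t;
/

declare
    vdata sys.anydata;
    tab   mytab_t := mytab_t( myrec_t('hi',    sysdate),
                              myrec_t('there', sysdate) );
begin
    -- succeeds here, unlike with an index-by table of records,
    -- because mytab_t is a schema-level collection type
    vdata := sys.anydata.convertCollection(tab);
    dbms_output.put_line(vdata.getTypeName());
end;
/
```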

Database Links / Triggers

A reader, August 02, 2006 - 11:05 am UTC

I've just noticed this thread is still going strong, had a re-read of it and thought of some more things I don't like about Oracle (i.e. features I wish Oracle had).

I also agree the ability of a privileged account (eg: DBA) to maintain the DB links of "normal" users would be handy (as suggested by "Mauro Papandrea from Italy"). We can do it for most other objects.

In addition I'd like to...

1. Be able to ALTER [PUBLIC] DATABASE LINK [CONNECT TO ...] [USING ...] etc.

2. From a privileged account (eg: DBA) be able to see what database links other sessions have open - like a DBA level of v$dblink.

3. The ability for the privileged account to close these database links - like a DBA version of "alter session close database link <linkname>;" or a "DBMS_SYSTEM.CLOSE_DATABASE_LINK(session, db_link)".

4. I'd like the ability to schedule automatic timed tasks to happen within a session, but not via dbms_job which will run in a separate session.

Eg: Some housekeeping SQL / procedure to run every nnn minutes in a session.

Maybe this could be done via some new sort of trigger or profile setting.

It would obviously have to be sensitive to whether the session was already doing something (maybe skip, or wait until it's inactive).

5. I'd like a DBA-level version of the session_roles view, so a privileged account can tell what roles another account currently has set.

6. I'd like HSODBC not to give up the ghost when you try and access a remote (non-Oracle) database table that has a column name > 30 characters long! Currently this stops you accessing any of the columns in the table - presumably HSODBC does a "describe" to itself of the whole table. Being able to access the columns with names less than or equal to 30 characters would be nice. Being able to alias the long column would be even nicer - eg: select their_exceedingly_long_mysql_version_4_column_name as foo from some_table@mysql;

That'll do for now ;-)



Logging DML and Select statements on Schema Level triggers

Vijay Ekambaram, September 06, 2006 - 8:37 am UTC

I would like to log the SELECT, INSERT, DELETE, and UPDATE statements for a specific need in our database - just by writing a single schema-level trigger (working for all or selected tables).

Is that possible in Oracle now, or will it be in the future?


Tom Kyte
September 06, 2006 - 3:31 pm UTC

dbms_fga can do that on selects in 9i, and all dml in 10g

but it is "table" - not schema - nor do I see it being "schema"
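For reference, a sketch of the table-level policy Tom describes (the schema, table, and policy names are made up; the statement_types parameter is what extends auditing to DML in 10g):

```sql
begin
    dbms_fga.add_policy(
        object_schema   => 'SCOTT',
        object_name     => 'EMP',
        policy_name     => 'EMP_ACCESS',
        statement_types => 'SELECT,INSERT,UPDATE,DELETE' );
end;
/

-- audited statements are then visible in the FGA audit trail
select db_user, sql_text
  from dba_fga_audit_trail
 where policy_name = 'EMP_ACCESS';
```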

your top 10 list

A reader, September 08, 2006 - 6:28 am UTC

Single (global) namespace for ...

Tak Tang, September 29, 2006 - 9:39 am UTC

... savepoint identifiers.


We all know that global variables are bad (I still have nightmares about Forms 3). The ability to declare variables locally enables us to hide our implementation, which in turn gives us abstraction and encapsulation, and promotes re-use.

I particularly admired the way exceptions could be declared locally.

So why can't we declare savepoints locally?


Why would anyone use a savepoint in a procedure? Well, in this article:
http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:1302671053281
you demonstrate how to make a procedure transactionally atomic using a savepoint. Without the savepoint/rollback, the procedure has an external dependency: the caller (at some level) must detect the exception and respond with a rollback to remain transactionally consistent. You could not, for example, respond to the exception by taking a recourse action that ultimately ends in a commit. Of course, you could issue a savepoint before calling the procedure and roll back to it in the event of an exception, but I think it is cleaner to package this inside the procedure so that it is atomic and self-contained. K.I.S.S. applies to APIs too. You never know when a busy programmer will come along, not realise the procedure issues multiple DML operations, proceed with the recourse action, and commit! Transactionally atomic procedures are defensive programming.
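The pattern being described looks something like this sketch (the procedure, savepoint, and table names are made up):

```sql
create or replace procedure do_work
as
begin
    savepoint do_work_sp;       -- global name: any procedure deeper in
                                -- the call stack could silently reuse it
    insert into t1 values (1);
    insert into t2 values (2);
exception
    when others then
        -- undo only this procedure's work, then re-raise so the
        -- caller still sees the failure
        rollback to do_work_sp;
        raise;
end;
/
```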

The downside is that the global namespace means the programmer must be aware of all savepoints, or risk a procedure deep in the call stack overwriting a savepoint issued earlier on, breaking transactional consistency.

Locally defined, properly scoped savepoints would make this trivial.


Now for the flames ...

Tak

language translation

Sarayu, October 13, 2006 - 2:25 pm UTC

Hi Tom,
Can we expect to be able to do language translation in Oracle in the near future? Something like:
SELECT DBMS_LANG_TRAN('Text to be translated', FROM_LANGUAGE, TO_LANGUAGE) FROM dual;

This is for when I store my data in English and want to send it in whatever language the end user wants.

As so many new features are added in every release of Oracle, I just want to know if this would be possible.


Thanks a lot tom,
Sarayu K.S.


Tom Kyte
October 13, 2006 - 7:13 pm UTC

I hope not.

That stuff rarely works (ok for a first cut but someone needs to read it and verify it - and that person could have done the translation). You could definitely build a web service (probably one exists already) and call it.


I remember someone telling me just recently about a "translation" - they were talking databases, and the translator rendered "fields" as "meadow-like place" - the literal translation. Not too good. Very confusing :)

Alberto Dell'Era, October 14, 2006 - 4:43 am UTC

Nothing like a *book* I read that translated "silicon" into "silicone" ... so every page had a reference or two to a "silicone computer", probably for submarine data centers :)

Another thing I miss the most

Lior, November 03, 2006 - 8:14 pm UTC

(and other fellow workers)
is the ability to rename a schema/user...

Something like :
alter user scott rename to david;
or
alter schema scott rename to david;

We are just too lazy to export/import :)

able to lock function while developing

oracleo, January 27, 2007 - 10:30 pm UTC

When two developers are working on the same function, they tend to overwrite each other's changes as they compile.

A facility where a developer could lock a function while modifying it would help, so that a message pops up if a second user tries to modify it.



dictionary table to select compressed subpartitions

ht, February 22, 2007 - 2:22 pm UTC

Hi Tom,
I'm getting great results compressing subpartitions via:

alter table <tablename> compress;
alter table <tablename> move subpartition <spname> update indexes;

Question 1:
Is there a dictionary table that shows me which subpartitions are actually compressed? I'm periodically compressing subpartitions in no particular order and would like to avoid compressing subpartitions multiple times.

Question 2:
Is there a parallel option to updating indexes while "moving" and, if so, is there a capacity "penalty" during/after the operation?

Thanks,
ht
Tom Kyte
February 22, 2007 - 7:45 pm UTC

"compress" is an attribute - there is no flag set other than "this segment supports compressed data"

we don't know if the data is or is not compressed really until we read it.

only to the degree that you can parallelize the DDL itself (move can be parallelized)
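That said, the attribute itself is visible in the dictionary. A sketch (the table name is illustrative) that lists subpartitions whose compression attribute is still DISABLED, bearing in mind Tom's point that the attribute describes the segment setting, not whether each existing row is actually stored compressed:

```sql
select partition_name, subpartition_name, compression
  from user_tab_subpartitions
 where table_name = 'MY_TABLE'
   and compression = 'DISABLED'
 order by partition_name, subpartition_name;
```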

thanks

ht, February 22, 2007 - 8:12 pm UTC


Use prior column aliases in the same SELECT clause

Brian Camire, February 23, 2007 - 11:49 am UTC

I think someone already mentioned this on the thread, but it would be helpful to be able to use prior column aliases in the same SELECT clause, as in something along the lines of:
SELECT
    1 AS DERIVED_COLUMN1,
    DERIVED_COLUMN1 + 1 AS DERIVED_COLUMN2
FROM
    DUAL

Unfortunately, this results in:
    DERIVED_COLUMN1 + 1 AS DERIVED_COLUMN2
    *
ERROR at line 3:
ORA-00904: "DERIVED_COLUMN1": invalid identifier

I know you can work around this:

1. By using inline views, but this gets messy if you have many "levels" of derived columns because you need a nested inline view for each "level",

2. By using the MODEL clause, but this seems like overkill (and unnecessarily verbose).

Does anyone have any other suggestions?
Tom Kyte
February 26, 2007 - 10:42 am UTC

write indented code so that #1 is not an issue.

or consider "with"



with data
as
(select 1 as derived_column1 
   from dual
),
... other with subqueries if you want....
select derived_column1, 
       derived_column1+1 as derived_column2
  from data
/




It is all a matter of the rules of scoping of SQL.

ALTER TABLE MOVE with REBUILD INDEX

A reader, February 25, 2007 - 12:24 am UTC

"ALTER TABLE ... MOVE" would be nice if it had a "REBUILD INDEX(ES)" clause. This would allow a complete re-org of a table and its indexes in one hit. At the moment you end up with an interim period where the indexes are marked "unusable" after the MOVE, until the index rebuilds have been done. With such a clause, queries against the table would not be impacted whilst the re-org takes place (of course updates would still have to wait - which is still preferable to them failing with an "index unusable" error). I know there is always DBMS_REDEFINITION, but that can be a bit of a sledgehammer to crack a walnut.





Re:Use prior column aliases in the same SELECT clause

Brian Camire, February 26, 2007 - 2:26 pm UTC

Thanks for your feedback, Tom.

I assume that you said "write indented code so that #1 is not an issue" because you were interpreting "messy" literally -- as in the visual appearance (formatting) of the code. However, I meant "messy" in the logical sense. If you have N levels of derived columns, you need N levels of nested inline views. This means you need to take the extra, arguably "artificial", step (say, compared to the MODEL clause) of grouping the derived columns into levels. I know this is "par for the course", but once you see how the problem can be formulated more "naturally" using the MODEL clause, you start asking, "Is there an easier way?" Of course, indenting helps the "visual manifestation of the logical mess", but when you have lots of levels, there's only so much it can help...

In any case, I like your WITH suggestion. I haven't used WITH much, and didn't realize you could reference a prior subquery, as in something like this:
WITH
DATA1 AS 
(
    SELECT 
        1 AS DERIVED_COLUMN1 
    FROM
        DUAL
),
DATA2 AS 
(
    SELECT 
        DERIVED_COLUMN1 + 1 AS DERIVED_COLUMN2 
    FROM
        DATA1
)
SELECT 
    DERIVED_COLUMN2 + 1 AS DERIVED_COLUMN3 
FROM 
    DATA2


At least with this approach you only need 1 level of indents instead of N.
Tom Kyte
February 26, 2007 - 3:33 pm UTC

we sort of have to follow the SQL standard for scoping rules - there is not too much we can "do" beyond that.

Improvements to Oracle

Kurt Arthur, February 27, 2007 - 11:13 am UTC

I am normally just a lurker in these parts, but whilst the
database engine itself is just about the best on the planet,
I can think of a few improvements elsewhere:

1. Some sort of a "lint" for PL/SQL to find things like
unused variables and to warn against dynamic SQL without
bind variables, etc. If it ever got sufficiently
sophisticated, it could even make recommendations as to
how to use built-in SQL to replace PL/SQL logic. I know
these issues can (and should) be found during code
reviews, but how often do development teams actually
perform them?

2. A minor enhancement to PL/SQL overloading to allow two
functions with the same parameter signature but different
return types to exist. The PL/SQL compiler should be able
to determine which function to invoke based on the context
of the call. IMHO, the following should execute instead of
raising a PLS-00307 error:

      declare
        nval number         := null ;
        cval varchar2( 10 ) := null ;

        function foo( inval varchar2 ) return number   is
        begin
          return( 0 ) ;
        end foo ;

        function foo( inval varchar2 ) return varchar2 is
        begin
          return( 'string' ) ;
        end foo ;
      begin

        nval := foo( 'in' ) ;
        cval := foo( 'in' ) ;
      end ;
      /


3. Some sort of "link editor" for PL/SQL. Packages can grow
to thousands of lines, which can be extremely unwieldy to
edit and darned near impossible to work on in a
collaborative manner. I currently simulate this by
separating methods into different files and combining
them via "make" into something that can be compiled by
Oracle, but I know this can be improved.

4. IMHO, it sure seems like Oracle is pushing other
languages (especially Java) at the expense of PL/SQL. I
think they need to remember what a competitive advantage
PL/SQL is and recommit the company to it.

5. My last suggestion doesn't apply to the database or its
core technologies, but instead to Oracle Financials.
IMHO, this is one of the most poorly designed
applications I've ever had to work with due to a
miserable database design, wretched programming and an
inscrutable client user interface. This leads to a large
slow application that frustrates users and implementors
alike.

I know it probably won't happen, but I want you (Tom) on
the team that's merging OA with PeopleSoft, etc. And I
want you to have complete veto power over the new
application's design!

Thanks for letting me vent!
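Suggestion 1 above is partially addressed by the compile-time warnings introduced in 10g. A sketch (10g or later assumed; the procedure is a made-up example):

```sql
alter session set plsql_warnings = 'ENABLE:ALL';

create or replace procedure p
as
begin
    return;
    -- the compiler can flag the next line as unreachable
    dbms_output.put_line('never reached');
end;
/
-- "show errors" in SQL*Plus then reports PLW- warnings,
-- such as PLW-06002: unreachable code
show errors procedure p
```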
Tom Kyte
February 27, 2007 - 11:23 am UTC

well, I just shared this comment with the guy that "owns" pl/sql - so feedback received...

Re: Improvements to Oracle

Kurt Arthur, February 27, 2007 - 3:57 pm UTC

Thank you!

SQL Plus

Tim Kessler, March 04, 2007 - 3:45 pm UTC

I would like to see a polling option in SQL*Plus that continually executes a statement for x seconds at a y-second interval, or until a keystroke is typed. This would be good for monitoring/admin functions, and also for generating test volumes of data throughput during testing.

SQL and PL/SQL

Joel, March 09, 2007 - 10:21 am UTC

I'd like to see Oracle implement a LIMIT clause for a SELECT query like several other systems do. I know that the same functionality is possible by subselecting and using ROWNUM, but LIMIT is cleaner and more straightforward.

I'd like to be able to use conditionals in PL/SQL queries without having to use dynamic SQL, i.e.,

open rs for
 select * from emp
 if v_empno != 0 then
   and empno = v_empno
 end if
 order by ename;


instead of having to build a string through concatenations and bind variables. These are very difficult to correctly build and maintain, especially if there are multiple optional input parameters.
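For the second wish, one common static-SQL workaround (a sketch using the poster's variable names) is to write the optional predicate so that it disables itself when the parameter is not supplied:

```sql
open rs for
    select *
      from emp
     where (v_empno = 0 or empno = v_empno)
     order by ename;
```

With several optional parameters this stays maintainable, at the cost of a single plan that must cater for both the filtered and unfiltered cases.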

SQL Plus

robe, March 14, 2007 - 5:17 pm UTC

I found out the hard way (maybe there's a way around it) that when selecting a char(1) field, for instance, SQL*Plus displays only one character of the heading too. I've tried column formatting, aliases, set wrap, etc... nothing... please change this!