
Question and Answer

Tom Kyte

Thanks for the question, Karthick.

Asked: February 19, 2008 - 12:29 am UTC

Last updated: July 12, 2012 - 6:36 pm UTC

Version: 10g

Viewed 10K+ times

You Asked

http://www.toadworld.com/Education/StevenFeuersteinsPLSQLExperience/TrainingandPresentations/BestPracticeProgramming/tabid/172/Default.aspx#Best%20Practice%20PL/SQL

This site links to a presentation by Steven Feuerstein on PL/SQL practices. He has a lot of good stuff to share - I like things like "drink more water". OK, let me get to the point. In most of his presentations, the important topic he covers is

STOP USING SO MUCH SQL IN PL/SQL.

He asks us to consider SQL as a service and not to scatter it all over the PL/SQL. But all along I have been considering SQL as an integral part of PL/SQL, and since SQL is the only access point to the data, I thought it needed to be used extensively.

Generalizing the SQL and putting it in some utility, or keeping it as a separate layer and just calling it (which would be reusable), does not look like a good idea to me.

But some of the points that he makes are very true. For example, the data structure is a constantly changing thing, and if we put our SQL all over the place then it's tough to maintain.

You must have written tons and tons of code. My question is: how do you go about using SQL in PL/SQL?

Do you consider SQL a service and try to keep it as a separate layer, or is it bound up with your processes?



and Tom said...

(use this link to the page http://tinyurl.com/2zn277 instead of the toadworld one above)


He and I disagree very much on this point.

http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:47458244605081

...
But some of the points that he says are very true. Like the data structure is constantly changing thing and if we put our SQL all over the place then it's tough to maintain.
....

this I don't understand. Why would fully using SQL (or not) change that?

In fact, by packing all I can into a single sql statement - I'll eradicate huge blocks of procedural code - less code = less bugs = less to maintain.....


PL/SQL is a procedural extension to SQL, SQL is the core - it all starts there.

Rating

  (75 ratings)


Comments

Karthick Pattabiraman, February 20, 2008 - 1:15 am UTC

I am not sure if I am missing the point. What I thought he was saying is to keep SQL as a separate layer and treat it as a service.

Say I have my process like this

BEGIN
.
.
Update status...
.
.
Do stuff...
.
.
Update status...
.
.
Do stuff...
.
.
Update status...
.
.
END

What I understand from his presentation is that he wants to keep the status update out of this process and treat it as a service. What I take from this is to keep the status update in a separate package and call it wherever necessary.

At first sight this looks like a good idea, because when I get a performance issue in the status update SQL I can look for it in one place.

But the problem is that segregating the SQL and putting it in one place to improve reusability and maintainability may sound nice, but the SQL layer that he is talking about might end up being lots of unrelated and irrelevant queries.

From what I understand, he was not telling us to use PL/SQL instead of SQL. He was telling us to use LESS SQL in PL/SQL.

So again my question is:

Is keeping the SQL somewhere outside the core process worthwhile? Have you done anything like that?

Tom Kyte
February 20, 2008 - 8:22 am UTC

he likes small, simple, non-complex, don't do too much in sql approaches. I disagree with that.

I do not see any problem with inlining the SQL in a procedure - in fact, that is basically the only way to do it - the sql will be in a procedure. I see no reason to segregate it in general. It belongs right there.

... From what I understand, he was not telling us to use pl/sql instead of SQL. He was
telling us to use LESS SQL in pl/sql. ...

please rationalize that.

If you use less sql in plsql, are you not using plsql instead of sql.


Where do Tom and I really disagree?

Steven Feuerstein, February 20, 2008 - 8:26 am UTC

Tom wrote that he and I "very much disagree" on "this point" - now, I am not exactly sure which point you think we disagree on. I looked at your mantra (you included a link implying that this would highlight our disagreement):

1. Do it in a single SQL statement if at all possible.
2. If you cannot, then do it in PL/SQL (as little PL/SQL as possible!).
3. If you cannot do it in PL/SQL, try a Java Stored Procedure (extremely rarely necessary with Oracle9i and above.)
4. If you cannot do it in Java, do it in a C external procedure. (when raw speed, or 3rd party API written in C is needed.)
5. If you cannot do it in a C external routine, you might want to seriously think about why it is you need to do it...

and I have no disagreement here at all.

So here is a quick summary of my position regarding SQL:

It is a form of hard-coding at least as problematic as hard-coded literal values. Every SQL statement you write hard-codes or freezes your business model at that point in time, and it will change. It is, therefore, extremely important to avoid repetition of SQL statements; such repetition makes your code extremely hard to debug, optimize, maintain and enhance.

If application developers write a NEW query every time they need data, if they write a NEW insert statement every time they want to push data into a table, repetition will occur, hard-coding will occur, and you will have a mess.

Consequently, my mantra is:

* Application developers should not include SQL statements directly in their application-specific logic.
* Instead, at the application level (whether in PL/SQL or Java), you should call pre-built API programs to do the SQL for you (generally, functions to retrieve data, procedures to change data).
* Generate as much of the API as you can, and write whatever cannot be generated - ONCE.
* Do NOT use PL/SQL when it is not necessary. In other words, if you can write a single SQL statement to get the job done, do it! "Native" SQL is almost always faster and simpler than a PL/SQL-SQL combination.
* But after you write that wonderful SQL statement, put it inside a PL/SQL subprogram so that it can be reused.
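Steven's last two points can be sketched as a small PL/SQL function. This is a hypothetical illustration, not code from the thread - the employees table, its columns, and the function name are all assumed. The SELECT is written once, inside a subprogram, and every caller reuses it:

```sql
-- Hypothetical sketch of "write the SQL once, wrap it in PL/SQL".
-- All names here (employees, employee_id, last_name) are assumed.
create or replace function employee_last_name (
   p_employee_id in employees.employee_id%type
) return employees.last_name%type
is
   l_last_name employees.last_name%type;
begin
   select last_name
     into l_last_name
     from employees
    where employee_id = p_employee_id;
   return l_last_name;
end employee_last_name;
/
```

Callers then invoke employee_last_name(:id) rather than repeating the query.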

So, Tom, where do we disagree?

Thanks, Steven Feuerstein
www.ToadWorld.com/SF
Tom Kyte
February 20, 2008 - 9:49 am UTC

I disagree, especially on data retrieval

This micro-modularization of code - whereby every select is hidden in a getter type of function - leads to code that calls PLSQL all over the place: instead of using a join, the developers call that function from sql or from a loop.

I've seen this approach taken to the illogical extreme, whereby the SQL looks like this after a couple of days:


select a, b, get_name(pk), get_address(pk), get_zip_code(pk), get_this(pk),
       get_that(pk), get_the_other_thing(pk)
  from table;



instead of just joining and doing it right (the recursive sql goes through the roof in something like that)
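For contrast, a sketch of the join Tom is advocating. The lookup tables (names, addresses, zips) and their columns are hypothetical stand-ins for whatever the getter functions above hide:

```sql
-- One set-based statement instead of a per-row PL/SQL call for each column;
-- all lookup table and column names are assumed for illustration.
select t.a,
       t.b,
       n.name,
       a.address,
       z.zip_code
  from t
  join names     n on n.pk = t.pk
  join addresses a on a.pk = t.pk
  join zips      z on z.pk = t.pk;
```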

I find in my experience that the probability of the exact same SQL statement being used in two places is pretty low (unless the code was horrifically bad to begin with - meaning they do their own nested loop joins via:

for x in ( select * from t1 )
loop
   for y in (select * from t2 where t2.key = X.key )
   loop
       for z in (.....



therefore, taking every sql statement and factoring it out of the code into some API that will be called by one person (well, should be called by one person - in reality it would be called inappropriately by many people, because it either selects too much information or not enough information for them, but they'll use it because they were told to reuse it) just doesn't pay off.
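The nested cursor loops above collapse into one set-based query. A sketch using the t1/t2 tables from the fragment - the third table t3 and its join key are assumed, since the original trails off:

```sql
-- Set-based rewrite of the nested cursor loops; t3 and its key column
-- are assumed for the elided third loop.
begin
   for rec in ( select *
                  from t1
                  join t2 on t2.key = t1.key
                  join t3 on t3.key = t2.key )
   loop
      null;  -- process each joined row here
   end loop;
end;
/
```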

My mantra is:

SQL is ok in code, it belongs inline, for me it makes the code a lot more readable.

I do not like "getter and setter" routines - too hard to employ bulk processing in the right places; they make "thinking in sets" almost impossible and force you down a strictly procedural route - which is bad.



I have no problem inlining the SQL...

I *hate* table APIs (TAPIs). TAPIs lead to VSAM processing in plsql - every time I see it. "Oh no, we use tapi's, we cannot access the tables, that is why we call function after function after function"

I *love* Transactional API's. Maybe I'll start calling them Xapi's. They encapsulate a transaction in a bit of code. Here we agree - no SQL in the client application, just call stored procedures - but those stored procedures in my world would be XAPI's - well tuned and developed explicitly to do that one thing.
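A transactional API in the sense Tom describes might look like this - a hypothetical sketch (the accounts and transfer_log tables, columns, and procedure name are all assumed): one procedure encapsulates the whole business transaction rather than one table.

```sql
-- Hypothetical XAPI sketch: one procedure = one business transaction.
-- All table, column, and procedure names are assumed for illustration.
create or replace procedure transfer_funds (
   p_from_account in accounts.account_id%type,
   p_to_account   in accounts.account_id%type,
   p_amount       in number
)
is
begin
   update accounts
      set balance = balance - p_amount
    where account_id = p_from_account;

   update accounts
      set balance = balance + p_amount
    where account_id = p_to_account;

   insert into transfer_log ( from_account, to_account, amount, logged_at )
   values ( p_from_account, p_to_account, p_amount, sysdate );
end transfer_funds;
/
```

The client application calls transfer_funds and never touches the tables; compare this with a TAPI, which would expose one insert/update procedure per table and leave the transaction logic to the caller.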


http://asktom.oracle.com/pls/ask/search?p_string=tapi

we disagree on the use of TAPIs - I'm not a fan of them at all; I actively discourage their use.



Karthick Pattabiraman, February 20, 2008 - 8:49 am UTC

Thanks for the response, Steven. I am glad you saw this thread.

All this started when I went through your presentation; at first it struck me as a great idea and I wanted to implement it in my application.

But in a large application, if you are going to package each distinct SQL statement, keep it as a service, and call it, then it's going to be a mess. A considerably big application can have thousands of distinct SQL statements. Putting them all into a separate layer and calling them from there is going to be a hectic job, isn't it? Maintainability can also become a problem.

"... From what I understand, he was not telling us to use pl/sql instead of SQL. He was
telling us to use LESS SQL in pl/sql. ...

please rationalize that.

If you use less sql in plsql, are you not using plsql instead of sql."

You got me here tom :-)

I posted this thread to see if anyone has taken such an approach and benefited from it. If so, they can share their experiences, which can help us build better applications.

Tom Kyte
February 20, 2008 - 9:50 am UTC

comments above...

Value of Table APIs

A reader, February 20, 2008 - 11:22 am UTC

Tom and others,

For any technique or piece of advice, it is always possible to come up with "horror stories" based on extreme or inappropriate application of the technique.

You can definitely take table APIs to extremes that become cumbersome or problematic.

To me, it is the principle that matters, and the principle is that you want to avoid repetition as much as possible in your code. Repetition leads to code that is very hard to debug, maintain, enhance.

I find it odd that there would be such easy acceptance of the need to "hide" literal values in one's code, but so much resistance to hiding SQL.

Sure you can take it to extremes. So don't! Don't hide EVERY SQL statement - hide only the ones you think are likely to be needed/written over and over again.

Fine, don't create table APIs - they really only make the most sense in an OLTP kind of environment, for example, not data warehousing. Create transactional APIs, as Tom names them, but HIDE THE SQL, HIDE THE BUSINESS RULES.

Tom, you like to inline SQL statements. Doesn't this reduce the possibility that developers will ever be able to take advantage of features like the function result cache? Expose those queries in dozens of places in an application (and, c'mon, you must acknowledge that this sort of thing does happen!) and you will most likely never have the bandwidth to search them all out and change them to functions with the result_cache clause!
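The feature SF is referring to can be sketched like this - a minimal, hypothetical example (the countries table and all names are assumed; RESULT_CACHE requires Oracle 11g, which postdates the 10g version in the question):

```sql
-- Hypothetical 11g function result cache sketch; table and names assumed.
-- Repeated calls with the same country id are served from the cache,
-- which is invalidated when the underlying table changes.
create or replace function get_country_name (
   p_country_id in countries.country_id%type
) return countries.country_name%type
   result_cache
is
   l_name countries.country_name%type;
begin
   select country_name
     into l_name
     from countries
    where country_id = p_country_id;
   return l_name;
end get_country_name;
/
```

SF's point is that only queries already wrapped in a function can gain the `result_cache` clause with a one-line change.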

SF
Tom Kyte
February 20, 2008 - 12:58 pm UTC

I see no value whatsoever in a table api. none.

what is the difference between:

insert into t ( a, b, c ) values ( ?, ?, ? );

and

t_pkg.do_insert( a=>?, b=>?, c=>? );

if T changes, all of your calls will likely have to change. You might think "I can handle errors in there" - but you cannot. Suppose I needed to do "try to insert, if duplicate then update" logic. Well, if the TAPI decided "dup val on insert" is to be handled one way - you would be stuck.
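The "try to insert, if duplicate then update" logic Tom mentions is a single statement in plain SQL - a sketch reusing the t(a, b, c) example above and assuming column a is the key:

```sql
-- Upsert in one MERGE statement; assumes column a is the key of t.
merge into t
using ( select :a as a, :b as b, :c as c from dual ) src
   on ( t.a = src.a )
when matched then
   update set t.b = src.b, t.c = src.c
when not matched then
   insert ( a, b, c ) values ( src.a, src.b, src.c );
```

A TAPI that had already decided how to handle dup_val_on_index could not be bent into this shape by the caller.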

... I find it odd that there would be such easy acceptance of the need to "hide" literal values in one's code, but so much resistance to hiding SQL. ..

I do not know what you mean by that.


... they really only make the most sense in an OLTP kind of environment, ...

that is where they are most horrific. OLTP is where you need finely tuned SQL, and if everything is a TAPI you have a big old VSAM system (it happens every time I see tapi's in place)



Hide the SQL - absolutely.
XAPIs

but not TAPIs


I wish that insert/update/delete - and even, to a degree, SELECT - could not exist outside of PLSQL. The best client applications only have 'sql' that starts with begin or declare (or call).



... Tom, you like to inline SQL statements. Doesn't this... ...

No, not at all - they are in XAPIs, not TAPIs. TAPIs would cause a feature like the server result cache to be defeated, because the queries are mostly all single-table, single-row queries. They are not big, beefy, set-oriented things (because the developers were locked away from accessing the tables and writing the SQL that would be best - shoehorned into using TAPIs to access everything)



And as I said above, I frankly do not see the level of "query duplication" you are worried about - I just do not run into it. I see *unique* sql executed from many locations - each using the correct and proper predicate and projection. Each selecting just from the tables they should - not the tables someone else deemed to be correct.

I have the same issue with views. To me, there should be one level of views (no views of views of views of views) and each view is probably very special purpose, used perhaps in one place.


So no, I don't see the proliferation of the same SQL everywhere - especially if you use XAPIs that bundle all of the SQL needed to perform some operation together.



What's the goal?

Chen Shapira, February 20, 2008 - 12:16 pm UTC

Reading the post and the comments, it is quite obvious that SF and TK are optimizing for different situations with very different goals in mind.

TK seems to be thinking 100% performance. You want every query and every report to run as quickly and efficiently as possible. Using the most optimized approach, not wasting a single cpu cycle when it can be avoided. This approach resonates well with the DB crowd, since optimizing specific queries is much of what we do all day.

SF seems to be looking at a situation where the main problem is not tables with 10 million rows, but pl/sql packages containing 10 million lines of code. Performance takes second place to any technique that will allow better, easier and less buggy way to maintain this monster. So, you hide queries away where you can reuse them and test them separately, and worry about the performance impact of it later.

Reading the debate is like reading a debate between Assembly and Java developers: "But all these objects make everything so SLOW!" "But how can you maintain all your code when you have no encapsulation?" It's not really a debate at all. The requirements and the environments just differ too much.

Tom Kyte
February 20, 2008 - 1:07 pm UTC

... TK seems to be thinking 100% performance. You want every query and every report
to run as quickly and efficiently as possible. ...

absolutely FALSE. I'm thinking modular, reusable code - at the business transaction level.

Thinking at the table level is so "Oracle Forms", so "VSAM" - I don't go there. It is not only non-performant, but to me it leads to way overly procedural code that joins data all by itself - every time; it happens over and over.

I don't see it at the level of Assembly and Java - not at all. I see it as a fundamentally different approach.

XAPIS - transactional apis versus TAPIS - table apis.


XAPIS "hide sql" as well as anything else - and lead to massive reuse of code - but they cannot be generated by a script passed a tablename.

Because a transaction rarely if ever involves accessing a single table.


and I just don't see the massive duplication of the same sql all over the place.

And - for things like "we need to have a column that is first_name || ' ' || last_name - but next week will change that to last_name || ', ' || first_name" - I say "ah the power of a view"


I prefer views to hide things that might change, and a set of procedures that are transactional APIs.
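Tom's first_name/last_name example, sketched as a view (the person table and its columns are hypothetical):

```sql
-- Hide the name-formatting rule behind a view; names are assumed.
create or replace view person_v as
select p.person_id,
       p.first_name,
       p.last_name,
       p.first_name || ' ' || p.last_name as display_name
  from person p;

-- Next week's change is one line in one place:
--   p.last_name || ', ' || p.first_name as display_name
```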

Probably my first 5 stars ever ...

nameless, February 20, 2008 - 2:23 pm UTC

I didn't think you'd ever address this head-on. "Says something about me I guess".

Interesting Discussion

SeánMacGC, February 20, 2008 - 3:52 pm UTC

One aspect that hasn't been mentioned here (apart from the dup val), but which I'd say is relevant, is fully exploiting the declarative constraint leverage of the DBMS (a range that could be profitably extended by the vendors, e.g. the ANSI CREATE ASSERTION).

It further weakens the case, it seems to me, for Table APIs -- if cr*p is prevented from getting into the database in the first instance, then as Tom has demonstrated, calling a TAPI procedure for this adds little more than a level of unnecessary abstraction.

I understand what you're saying, Steven, but I think that any benefits are tenuous at best. As for modifications and maintenance - whether it be an INSERT, DELETE, or UPDATE - there's equal effort involved in maintaining the straight DML and maintaining a persistent stored module (procedure or function).

hard coded literals ...

Sokrates, February 20, 2008 - 4:03 pm UTC

very interesting discussion, this XAPI vs. TAPI
interestingly, in my experience, most developers stand on the side of SF, saying:
yeah, reuse every SQL statement as much as possible, build an access layer around every SQL statement - this is good practice, it helps you keep clean and structured code, and it helps you maintain your code.
I have always stood on the side of TK here and never saw the point. I have plenty of hard-coded SQL and don't even avoid hard-coded literals. Never saw a point here.
And never had an issue with it.

PL/SQL's reason to exist

David Weigel, February 20, 2008 - 5:14 pm UTC

Culling the SQL out of PL/SQL and isolating it in special procedures is kinda like buying a car because you like the stereo. For me, the whole point of using PL/SQL is because it's so darn good at doing SQL-y things and integrating those with the business logic. If I had to create all those TAPIs, I suppose I'd use PL/SQL for them, but then I'd write the rest of the application logic in some other language and use it to call the PL/SQL TAPIs.

when to use PLSQL API

Jakub Illner, February 21, 2008 - 12:49 pm UTC


Thank you very much for this very interesting topic. However it seems to me the opinions are too black and white - there are times when PLSQL API's are good and times when they are wrong.

Any API is usually created in order to a) enforce business logic (for developers, users), b) hide the complexity of underlying objects, and c) encapsulate a function repeated in multiple places. And, of course, APIs should not be used blindly, regardless of the impact on performance and manageability. Note that in this sense a database view is an API, a table function is an API, and PL/SQL can be used to define an API. Each of them should be used when appropriate.

I would not use PLSQL Table API just to encapsulate basic select/insert/update/delete operations; however I would use PLSQL API to encapsulate a "business" select where each user/developer must use exactly the same select to get the required results. And yes, there are situations when you have to choose between optimal performance and business logic encapsulation.

Consider the following scenario: you are loading financial data warehouse with a lot of balances, transactions etc. each day. Most "amounts" must be converted from the foreign currency to the functional currency which is used by reporting.

The currency conversion table can look like this

create table currency_conversion_rates (
  from_currency_code  varchar2(3) not null   -- source currency such as USD
, to_currency_code    varchar2(3) not null   -- target currency such as EUR
, conversion_date     date not null          -- date of the rate
, conversion_type     varchar2(10) not null  -- multiple rate types allowed, we use STD for conversion
, conversion_rate     number not null        -- rate to be applied during conversion
);

alter table currency_conversion_rates
add constraint currency_conversion_rates_pk
primary key (
  from_currency_code
, to_currency_code
, conversion_date
, conversion_type
);


Now we are loading transactions from

create table source_transactions (
  booking_date        date not null
, valuta_date         date not null
, currency_code       varchar2(30) not null
, amount_fx           number
);


into the target, which has almost the same structure; only when loading we have to calculate the amount converted to the functional currency.

create table target_transactions (
  booking_date        date not null
, valuta_date         date not null
, currency_code       varchar2(30) not null
, amount_fx           number
, amount_fc           number
);


We can do this operation using inline select, e.g.

insert into target_transactions (
  booking_date
, valuta_date
, currency_code
, amount_fx
, amount_fc
) select
  s.booking_date
, s.valuta_date
, s.currency_code
, s.amount_fx
, (select s.amount_fx * r.conversion_rate
   from currency_conversion_rates r
   where r.conversion_date = s.valuta_date
   and r.from_currency_code = s.currency_code
   and r.to_currency_code = 'EUR' -- our functional currency
   and r.conversion_type = 'STD'  -- conversion type to be applied
  ) as amount_fc
from source_transactions s
;


Or we can write a PL/SQL function to perform the conversion. The inline select is better from a performance point of view (though the PL/SQL can be optimized by not converting transactions already in the functional currency, "remembering" the previous conversion rate, etc.), but in this case I would opt for the PL/SQL even if the performance is worse, because:

- it enforces the rule to always use the STD conversion type
- it enforces the rule to always use EUR as the functional currency
- it allows a flexible change to another functional currency if necessary
- it prevents users/developers from reversing the from/to currencies
- it could handle cases where we have currencies without rates or with gaps

The moral of this review is that it does not make sense to have rules like "do not use a PL/SQL API for selects", since such rules are too generic - sometimes they are valid and sometimes not. It is necessary to use common sense and choose whichever option is best in the given case.
Tom Kyte
February 21, 2008 - 4:54 pm UTC

... there are times when PLSQL API's are good and times when they are wrong. ...

no, PLSQL apis are UNIVERSALLY good and should really be the ONLY way to modify data in the database.

but, you can write good API's and bad ones.

But using a plsql API is always the way to go.


... however I would use PLSQL API to encapsulate a "business" select where each user/developer must use exactly the same select to get the required results. ...

then you and I are in exact agreement.

I like XAPIs - transactional, business-level APIs

never tapi's (table api's)

Response to David Weigel

Stewart Bryson, February 21, 2008 - 1:33 pm UTC

That is an excellent point David. It is difficult to understand what benefit PL/SQL brings as a language when the PL and the SQL are not commingled throughout.

Response to Jakub Illner

Stewart Bryson, February 21, 2008 - 1:48 pm UTC

"- it enforces the rule to always use the STD conversion type
- it enforces the rule to always use EUR as the functional currency
- it allows a flexible change to another functional currency if necessary
- it prevents users/developers from reversing the from/to currencies
- it could handle cases where we have currencies without rates or with gaps"

I don't see how PL/SQL enforces anything here. Views can do the same thing... just do "select * from view", where the view has the correct currency attributes "hard-coded" in it and the correct data formatting already done. Also, I don't understand why well-commented code (such as "-- hey, always use EUR here") doesn't do the same thing.

Your use of row-by-slow-row PL/SQL in the data warehousing scenario is the most egregious of all. Choosing not to use inline SQL eliminates the benefit of direct-path loading (insert /*+ APPEND */), parallelism, data segment compression of fact tables, the MERGE statement, multi-table inserts, extent-trimming for better space usage, drastic reduction of UNDO, elimination of REDO if desired, etc., etc., etc.

Besides, how often are you going to load TARGET_TRANSACTIONS in a data warehouse? Is forcing the use of specific attributes ONCE such an inconvenience?

Tom Kyte
February 21, 2008 - 5:04 pm UTC

you know what, I didn't read the end of that as closely as I should have.

I see they are proposing procedural slow-by-slow code over a bulky single statement.

I violently disagree with that approach.

It is not "plsql vs sql", it is "do it in sql, until you cannot and then use as little plsql as you can"


just to add to Stewart's comment ...

nameless, February 21, 2008 - 2:19 pm UTC

Jakub:

Your example just goes to prove Tom's point ... you shouldn't use the scalar subquery approach and you certainly shouldn't use any pl/sql ... you should just join.
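The join nameless is suggesting, sketched against Jakub's tables above. An outer join is assumed here so that rows with no matching rate are preserved rather than silently lost - a design choice, not something from the thread:

```sql
-- Set-based load with a join instead of a scalar subquery or a PL/SQL
-- function; tables and columns come from Jakub's example above.
insert into target_transactions (
  booking_date, valuta_date, currency_code, amount_fx, amount_fc
)
select s.booking_date,
       s.valuta_date,
       s.currency_code,
       s.amount_fx,
       s.amount_fx * r.conversion_rate as amount_fc
  from source_transactions s
  left join currency_conversion_rates r
    on  r.from_currency_code = s.currency_code
    and r.to_currency_code   = 'EUR'
    and r.conversion_date    = s.valuta_date
    and r.conversion_type    = 'STD';
```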

Mindsets

SeánMacGC, February 22, 2008 - 4:43 am UTC

It largely comes back to something Tom touched on initially: processing data in sets (SQL). Until that record-by-excruciatingly-slow-record procedural mindset is overcome, the box will not be peered beyond.
Tom Kyte
February 22, 2008 - 7:21 am UTC

and a TAPI forces you to think in VSAM... because you cannot join two stored procedures together.

well, actually, unfortunately you can - you end up with lots of plsql called from sql, tons of recursive sql, and a machine 15 times larger than it should be, with everyone saying "these databases are slow, I'm pulling my data into the middle tier where we can be fast"

RE: Mindsets

Duke Ganote, February 22, 2008 - 9:23 am UTC

And it is a mindset. Assembling a data set is like assembling a set of cars. Row-by-row means hand-constructing each car one-by-one, repeatedly scurrying back to the distant part bin to fetch each part.

Set-based is like moving the part bins to where you need them, bolting all the fenders first, etc. and letting the supervisor (optimizer) do some operations in parallel. Just so much faster because it's done with an eye to efficiency.

A good SQL coder puts together instructions that will allow the optimizer to do things in bulk, as efficiently as possible. PL/SQL coders are like craftsmen trying to build each record, one at a time.

I worked with one such coder who was stunned to see the results when I re-worked one of his procs. My single SQL statement would do in seconds what his cursor-and-loop did in nearly an hour. Particularly with 10gR2's DML error logging, there's just no excuse for slow-by-slow.
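The 10gR2 DML error logging Duke mentions looks roughly like this - a sketch with assumed table names (target_t, staging_t); the error table is created once with the DBMS_ERRLOG package:

```sql
-- One-time setup; creates an error table named err$_target_t by default:
--   exec dbms_errlog.create_error_log('TARGET_T')

-- Bulk load that logs bad rows instead of aborting the whole statement;
-- table names are assumed for illustration.
insert /*+ append */ into target_t
select *
  from staging_t
   log errors into err$_target_t ('nightly load')
   reject limit unlimited;
```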

The Get and the Set service

Mohamed Houri, February 22, 2008 - 10:22 am UTC

Let me give you a story of my experience

I worked on a project where we decided to use GetClientInformation to select the client information coming from several tables, and SetClientInformation to update or create one or many pieces of this client information. We decided not to do any direct select or update without using GetClientInformation and SetClientInformation. In this context we used an input/output object type parameter which maps all the client information. Each piece of information has a flag within the object type that indicates whether the SetClientInformation service has to update it or not. GetClientInformation and SetClientInformation can be triggered from an Oracle Forms 6i screen, from a web screen, and from inside the database during UNIX overnight jobs.

Since object types are not supported in Oracle Forms 6i, we were forced to add a supplementary layer of plsql code between the Forms client and the GetClientInformation/SetClientInformation service.

What do you think I learned from this experience?

I learned that a simple update of one piece of client information (located, for example, in table t1) requires filling in this information in the corresponding mapping value of the object type, together with its flag = 'Y', in order for it to be updated by SetClientInformation.

Where 3 lines of code as simple as the following give us what we need:

update t1
set t1.a = :input_value
where t1.client = :input_client_id

with our method we need to:

Call GetClientInformation + display the client data + change the data within the screen + map the changed data into the object type + call SetClientInformation to update this data in table t1 (more than 300 lines of code). Several times our SetClientInformation completed successfully but t1.a was not correctly updated - sometimes because its flag had not been set correctly, sometimes because of something wrong in GetClientInformation, etc...

Also, when we need to select only one piece of client information, like

select infor1
from t1
where client = :input_client_id

we were obliged to call GetClientInformation(object type in out) + loop through the content of the object type to find our infor1 from table t1.

I am not against encapsulating an update or insert in a reusable procedure, but neither am I in favour of saying "do not do updates or selects directly in your code". With the Get and Set services we got more problems than the added value these services were supposed to bring. This is not theory; this is real-life experience.

Mohamed
Tom Kyte
February 22, 2008 - 12:15 pm UTC

there was a better answer in the middle

You tried the "end all be all mother of all API calls"

You went down to assembly language as a fix.

You didn't try "let's think about what we actually do, what APIs we need; let's build a set of APIs that actually *make sense*. There will probably be many entry points here to set bits of various client information - but that is OK, because if we need the mother of all API calls sometimes, we can just layer that in (it calls the individual ones), but most of the time we call the simple ones"

Building a useful API involves thought, design, thought, and some design.

SQL all the way

Matt, February 22, 2008 - 11:42 am UTC

I started with Oracle at version 6, and the constant throughout the releases has been using set-based SQL wherever possible. It performs and it scales.

It's only fairly recently (9i onwards) that Oracle has boosted PL/SQL performance so that SF is able to mount a challenge to the SQL throne without getting laughed out of the joint.

It's interesting that Mr SF makes most of his money with PL/SQL - not that this would sway his thoughts in any way, I'm sure.

PL/SQL is great for wrapping my SQL!

Tom Kyte
February 22, 2008 - 12:20 pm UTC

I would disagree; PL/SQL has always been the way to go - it is what replication was written in, in 7.1.6 for example, and it runs the Oracle Applications transactions.

But, I can still blow away slow by slow code with a nice bit of set based SQL.


I don't think that because Steve works with PLSQL and makes a living around PLSQL he is pushing PLSQL. The one thing - and pretty much the only thing - we don't see eye to eye on is TAPI vs XAPI. I like to use sql as complex as can be (using the full feature set) where it makes sense to use it - he has a different approach.

I don't see anything evil here. Steve is doing some really good stuff out there, really really good stuff.

Bit harsh

Matt, February 22, 2008 - 12:39 pm UTC

OK, it was a bit harsh - coming towards the end of a long week...

It is frustrating, however, when our developers come back from courses armed with a big chunk of new bulk collects and really fast pl/sql tricks that are now available in the new 25x database...

We end up batting back lots of code because, at the end of the day, nothing is as fast as insert into <table> select <stuff> from <table>.

Modularisation of code is great - at the right level, and if we happen to reuse a bit of SQL so be it.
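The contrast Matt is drawing can be sketched in a few lines. The table names `src` and `tgt` are invented for illustration:

```sql
-- Row-by-row ("slow by slow"): a procedural loop doing one insert per row.
begin
    for rec in ( select id, val from src ) loop
        insert into tgt ( id, val ) values ( rec.id, rec.val );
    end loop;
end;
/

-- The set-based equivalent: one statement, one pass, no procedural code.
insert into tgt ( id, val )
select id, val
  from src;
```

The second form is what "insert into &lt;table&gt; select &lt;stuff&gt; from &lt;table&gt;" refers to above; it lets the database do the work in bulk.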



question on the use of views

James Su, February 22, 2008 - 2:56 pm UTC

hi Tom,
can you elaborate more on this point:
To me, there should be one level of views (no views of views of views of views) and each view is probably very special purpose, used perhaps in one place.

If a complex select can be split into layers of views, isn't it more readable? And what's wrong if we use it in many places?

Thank you.
Tom Kyte
February 22, 2008 - 3:12 pm UTC

... If a complex select can be split into layers of views, isn't it more readable? ...

absolutely NOT. Think about it: in order to read and digest what you are really, truly looking at, you need to get N printouts.

I want a single statement to have, everything right there, self contained.

You have WITH subquery factoring.
You have inline views.

They can be used to make things more readable - use them.
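For example, instead of stacking views on views, the layers can live inside the one self-contained statement. The table and column names below (`emp`, `dept`) are just the classic demo schema, used for illustration:

```sql
-- One statement, layers visible in one place via subquery factoring.
with dept_totals as (
    select deptno, sum(sal) as total_sal
      from emp
     group by deptno
)
select d.dname, t.total_sal
  from dept d
  join dept_totals t on t.deptno = d.deptno;
```

Everything a reader needs to understand the query is right there, in one printout.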


The problem with views is - I create a view, it selects the 15 columns from three tables I need. You are forced in some shops to reuse that view. Well, you only needed six columns from two of the tables - and need to join them to a fourth. So, now, you have this extra table which isn't changing your result, but is not needed by you. Oh, and this extra table, we aggregated it - so it performs a lot of work.

Views to me are a special purpose thing, they were created to solve a specific problem, return a specific answer. anywhere you have a need for that SPECIFIC answer - reuse it, use it over and over.

But don't build yet another view on top of it, feel free to build another view next to it, but not on top of it. That is how views get the bad rap of "views are slow, their performance is bad" - that is because people compare a view that selects tons of stuff they don't need and performs many operations they didn't want - to a single query that gets precisely what they require.
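A small sketch of the reuse trap described above, with an invented schema for illustration:

```sql
-- A "reusable" view that joins and aggregates three tables.
create or replace view cust_order_summary
as
select c.cust_id, c.name, c.region,
       o.order_id, o.order_date,
       sum(i.qty * i.price) as order_total   -- aggregation many callers never need
  from customers c
  join orders    o on o.cust_id  = c.cust_id
  join items     i on i.order_id = o.order_id
 group by c.cust_id, c.name, c.region, o.order_id, o.order_date;

-- A caller who only needs two customer columns joined to a fourth table
-- still pays for orders, items and the GROUP BY if forced through the view.
-- The direct query does only the work required:
select c.cust_id, c.name, p.phone
  from customers c
  join phones    p on p.cust_id = c.cust_id;
```

Comparing the view-based query to the direct one is exactly how views get the "views are slow" reputation.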


Confessions of an old TAPI addict...

Bill Coulam, February 23, 2008 - 1:03 am UTC

I love SF's good advice, and have followed a good chunk of it for years.

However, this topic highlighted something I hadn't even noticed, until now. Table APIs used to be a part of me, so much so, that a presentation I gave at RMOUG (and will give again at UTOUG and IOUG) contains a plug for table APIs...and I don't even use them anymore! Grumble, grumble... gotta go revise it before I forget.

Over the last 12 years of different application architectures that all use PL/SQL heavily, I now see that I have almost no table APIs in my base PL/SQL framework. In fact, I've got only one, the one that wraps the insert to the logging table within an anonymous transaction and fills in a few columns transparently from application context. This is used by the LOGS, MAIL, IO and EXCP packages, so it made sense to centralize. But that's it. One left!

How did this happen? Organically.

I used to dutifully write (and later generate) tons of generic table API routines in a package per table in anticipation of them being reused by developers. Despite my best intentions and cooperative developers, they were never sufficient for the business logic, queries and screens. So over the years, what happened is I stopped writing over-engineered table APIs (doing something you don't yet need is a sin in the Xtreme Programming dogma), and gradually started writing views and routines that return ref cursors which match the requirements of the screen, report or automated thingy at hand. BTW, in case anyone has difficulty testing routines with ref cursors, PL/SQL Developer from Allround makes it a joy.

Something else I don't seem to use much of are public cursors (in the package spec). Not sure if I'm overlooking a valuable use of PL/SQL, or if it's another example of a less-than-useful feature of PL/SQL being weeded out organically.

bill c.
Tom Kyte
February 24, 2008 - 11:27 am UTC

...
Something else I don't seem to use much of are public cursors (in the package
spec). Not sure if I'm overlooking a valuable use of PL/SQL, or if it's another
example of a less-than-useful feature of PL/SQL being weeded out organically.
.....

I think the latter, globals are in general not a good thing and global cursors especially. One thing I like about cursors in plsql is (with the exception of ref cursors, dbms_sql cursors and global package cursors) they close automagically when control leaves that block - when they go out of scope.

And they close in a nice way, meaning, they are logically closed in plsql, but kept open by the plsql engine in a pool - so that if they are reopened in plsql, they are already opened (performance = good, scalability = excellent, downside = none really). If the pool fills up, plsql will silently "really close" some of the cursors - but only when forced to.
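A tiny sketch of the scoping behaviour described above (using the demo `emp` table for illustration):

```sql
-- A block-scoped cursor: when control leaves the block, PL/SQL closes it
-- for us -- logically. The engine keeps it cached in its cursor pool, so a
-- re-open later in the session is cheap.
declare
    cursor c is select ename from emp;
    l_name emp.ename%type;
begin
    open c;
    fetch c into l_name;
    -- even without an explicit CLOSE, going out of scope closes the cursor
    close c;
end;
/
```

Contrast this with ref cursors, DBMS_SQL cursors and global package cursors, which stay open until you close them yourself.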

Best of everything.

karthick, February 23, 2008 - 6:35 am UTC

Just a few weeks back I came to know about SF. I was checking out presentations on YouTube and found this

http://www.youtube.com/watch?v=nLZlR3nSrsc

That was interesting, so I went to his site.

http://www.toadworld.com/SF

He has got a lot of great stuff. Any PL/SQL developer should go through it.

I am not sure if he has any forum like Tom does. If anyone knows, do let me know.

To Karthick

Muhammad Riaz Shahid, February 23, 2008 - 8:52 am UTC

Not exactly as Tom does here, but he has a blog:

http://feuerthoughts.blogspot.com/

TAPI, XAPI, whatever - the main thing is stop writing so much SQL.

Steven Feuerstein, February 25, 2008 - 9:17 am UTC

Great discussion! So, just a few comments:

* I loved that comment about how Steven makes his money from PL/SQL, so what do you expect? Ha! It is true, I generate almost all my income from PL/SQL, but I don't think I am a PL/SQL bigot. I agree with Tom et al - if you can write it in "straight" SQL, do so. It is almost always faster and easier. But then hide that SQL behind a PL/SQL API and have people access the SQL through that (or views or...)

* Or whatever. Because I really don't care much if you have found table APIs to be helpful or a hindrance. I do use table APIs and like them a lot; I am sorry to hear that others have not seen the same value, but I sure am not going to bang all of our heads against the wall trying to convince you!

* I also use transaction level APIs. I use APIs all OVER the place. The principle of "information hiding" is universally powerful and applicable, at many levels.

* The main thing is to follow the principle: avoid repetition; avoid hard-coding; make it easy to manage, optimize and upgrade your SQL statements - because they will be undergoing near constant change.

Regards, SF

PS - http://feuerthoughts.blogspot.com is my personal blog. My more tech-focused blog is available at http://www.ToadWorld.com/SF

please, anything but <i>whatever</i>

nameless, February 25, 2008 - 4:42 pm UTC

Steven:

TAPI, XAPI, whatever ...

It is not whatever because you're not just another programmer with an opinion. These ad-hoc terms coined here, TAPIs and XAPIs, do actually carry some weight, that of being APIs, interfaces.

The question is if there is any benefit of having the XAPIs work through TAPIs. And if there are, are there any caveats, any downside? They say, nothing is for free.

Personally, I didn't doubt for one second that you would implement, and promote the need for, the transaction-level API as well. But elevating TAPIs from mere easy-to-generate snippets of code to an actual API, an architectural artifact, is still a problem. What's frustrating - for me - about debating with the proponents of TAPIs is that they invariably end up referring to your books, as if the mere mention of your name should seal the argument. And on this particular subject there hasn't been actual value demonstrated ... on balance. If you've found value, fine; but if you are to promote the approach, then show how others can find value too while mitigating its obvious drawbacks.

You're right ... "minimizing hard-coding" (ultimately unavoidable), "reuse", "modularization", "encapsulation" ("hide" is such a wrong word to use) are programming tenets with universal applicability. Hence, to argue the benefits for the existence of a particular layer takes more than just that. Else, any layer is a good layer.

Good intentions do not necessarily lead to good things ... plenty of insightful quotes out there about good intentions.

If a middle-tier architect does it with their own Java API it is questionable ... if a database architect does it with their PL/SQL API it is questionable. I can remark as well ... PL/SQL, Java, whatever. Whoever they are and whatever their other accomplishments, this is not good, claims of good intentions notwithstanding.

And the thing is, it has been invented and re-invented to death. Loosely speaking, TAPIs are not unlike the entity beans and XAPIs are not unlike the session beans. Insisting on the session beans to interface through the entity beans was the craze of the initial EJB framework and for the same good intentions. And it didn't work, not as promised anyway, and they have, organically, moved on (upped the EJB spec, as they always do); now the session beans are going through DAOs - some chance of getting it right if they employ our PL/SQL XAPIs rather than just mapping one DAO per table - or ORMs like Hibernate and Toplink - if they still believe a relational database is a necessary evil better kept out of sight ("hiding" is good, they'll argue).

Employing TAPIs within XAPIs like entity beans within session beans has the same detrimental effect of choking the relational database, way too early, of its set-based strength. In some hands, it can lead to a workable, acceptable solution but that doesn't necessarily mean the approach is right and suitable for mass consumption.

For a database centric application, taking those good intentions to their logical conclusion should yield: a database model is the ultimate mechanism for reuse and encapsulation of business rules, declarative is always better than procedural. The database model is the holy grail. No one is arguing for writing more SQL than it is necessary; they are arguing for writing less procedural code because its need is questionable and more often than not, detrimental.

The big problem with TAPIs, whatever the implementation language, is that they fragment one's carefully designed database model. These fragments become glued together through nested for loops and/or SQL statements heavily loaded with PL/SQL calls (almost the same thing). Either the TAPI is not really an interface ... or it is an interface to a co-located, but now virtually distributed, database ... and distributed databases are always difficult to work with.

If I were to chip away at Tom's mantra then not mentioning the database model would be it.

1. Do it in a single SQL statement if at all possible

Before moving on to the next step, one should maybe reflect on why isn't it possible. In many, many, many cases it is because the database model is incomplete or incorrect. The defaulting to PL/SQL seems to me like a "when others" abandonment. But that's probably just me .... believing that some things which should go without saying should, in fact, be said once in a while.

If Oracle is a religion and hence maybe in need of evangelists then, well, so be it ... but raising churches, like any other standing structure, still follows the laws of science. Preferences for elements of style are fine, but architectural blueprints are not whatever.

Some bricklayer.

Constant change?

3360, February 25, 2008 - 8:55 pm UTC

> * The main thing is to follow the principle: avoid
> repetition; avoid hard-coding; make it easy to
> manage, optimize and upgrade your SQL statements - because
> they will be undergoing near constant change.

I disagree with the observation that SQL code undergoes constant change, or that it should be a primary design consideration. Many applications execute the same business logic time and time again, sometimes with very few changes. Those changes can be made and scheduled along with the other code changes required to add new functionality. Is it a good idea for all the code of an application to be frozen at version 1.0?

There are some cases, too many in my opinion, where applications are written to accommodate dynamically changing requirements - but something always needs changing somewhere anyway, and those applications don't work at all well either.

Requirements tend to change a lot more before they are gotten right than after.

Personally, I find that a query which is all in one place (for reuse, a view can be used) is better than one defined as a cursor in a package spec somewhere.

Micro-modularisation...

Tony Andrews, March 28, 2008 - 11:47 am UTC

> This micro-modularization of code - whereby every select is hidden in a getter type of function leads to code that calls PLSQL all over the place - instead of using a join, the developers call that function from sql or from a loop.

Agreed. I work with a very large system where a TAPI approach was mandated originally, and every time I want to select the current address of a person or whatever I have to find the appropriate TAPI package (easy enough though there are over a thousand), search through the various similar "get_address" functions and work out which one (if any) does what I need. Often none does exactly, so I have to add a new one (can't change any of the existing ones because the impact analysis and regression testing would be prohibitive). We actually have XAPIs, but our XAPIs have to call the TAPIs...

TAPI's, XAPI's and User Defined Types

rc, May 24, 2008 - 4:14 am UTC

If you use a XAPI instead of calling one or more TAPI's what kind of parameters should one use? User defined types or combinations of numbers, dates, varchar2s?

In my C# middle layer I have objects so if I wanted to call a XAPI I would want to use these objects as parameters to call a PL/SQL procedure. That means I have to provide a mapping between a .NET class and an user defined type also known as an oracle object also known as an abstract data type.

How many developers that develop XAPI's use user defined types for the 'in parameter'? What is the best way?

If one uses user defined types one could insert a master and one or more details in the database with one call to a pl/sql procedure.


Tom Kyte
May 24, 2008 - 7:05 pm UTC

... what kind of parameters
should one use? User defined types or combinations of numbers, dates,
varchar2s?
....


answer is Yes.
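In other words, both work. One hypothetical way to do the master-and-details-in-one-call that rc describes is an object type plus a collection type as the IN parameter. Every name below (types, tables, sequence, procedure) is invented for illustration:

```sql
-- Schema-level types, so the client driver (e.g. ODP.NET) can bind them.
create or replace type order_line_t as object
( product_id number, qty number, price number );
/
create or replace type order_line_tab as table of order_line_t;
/

create or replace procedure create_order
( p_cust_id in number,
  p_lines   in order_line_tab )
as
    l_order_id number;
begin
    -- master row
    insert into orders ( order_id, cust_id )
    values ( order_seq.nextval, p_cust_id )
    returning order_id into l_order_id;

    -- all detail rows in one set-based statement, not a row-at-a-time loop
    insert into order_items ( order_id, product_id, qty, price )
    select l_order_id, product_id, qty, price
      from table( p_lines );
end;
/
```

The alternative - plain scalar parameters (numbers, dates, varchar2s), possibly with associative arrays for the details - avoids the .NET-to-object-type mapping at the cost of a wider parameter list; hence "the answer is Yes": both are legitimate.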


xapi

A reader, May 25, 2008 - 11:57 am UTC


Field experience with TAPI

Oleksandr Alesinskyy, May 26, 2008 - 7:33 am UTC

I have on hand an old system (developed not by me) completely based on the TAPI methodology (and many other "nice" SF advices, like the error handling strategy) - I have never seen a system more cumbersome to support and maintain (and I have been developing software for a good 28 years).


SQL vs. Procedural

A reader, February 17, 2009 - 8:40 pm UTC

Why is it so difficult to get people to understand that relational databases are born to process in sets? I have used Teradata extensively, and pretty much everyone in the Teradata world understands that set processing via SQL beats the pants off cursor-based/procedural programming. I just don't understand why it is so hard to get a large section of Oracle users to understand that set processing is how you get top performance (with the added benefit of writing as little code as possible). My guess is that a lot of Oracle users get introduced to PL/SQL very early and start thinking it is the answer to everything - the "I have a hammer and every problem is a nail" syndrome...


I have seen so many shops where they complain that Oracle is slow, and when I look at their code it is almost always row-based processing. The moment I convert that to set-based processing they see orders-of-magnitude improvements in processing times. Then they say "but this requires a lot of rollback and temp space" and I say: so?? When a Java app requires tonnes of memory, nobody asks why it needs so much memory. The moment you say the database needs rollback and temp space, they start questioning it.

In the Teradata world you typically leave about 25% of storage to Spool (temp space). That kind of temp allocation is almost unheard of in the Oracle world - and I fail to understand why that is.

Tom Kyte
February 17, 2009 - 9:28 pm UTC

my other 'guess' (not so much a guess as an observation) is a lot of Oracle users are not Oracle users. They are coders that want to code pretty user interfaces with neat graphics and cool visualizations - and then are told "but you have to actually use a database".

self fulfilling prophecy...


SQL

A reader, February 17, 2009 - 10:59 pm UTC


Sometimes procedural code beats sql

Muhammet, February 17, 2009 - 11:41 pm UTC

Some of problems is better solved in procedural code.
Let's say I have a table like:

T
ID   quantity   quantity_on_hand
 1         10                  0
 2        -10                  0
 3         50                  0
 4        100                  0
 5          0                110

I want to insert new records to table
by substraction the the quantity
which have lower_ids from quantity_on_hand with the last id to make it 0.
(ID,quntity,quantity_on_hand)==> (6,100,0) and (7,10,0)
It is easier to write in pl/sql,
How about with SQL
Tom Kyte
February 18, 2009 - 7:32 am UTC

you need to write a slightly better specification.

I cannot develop psuedo code for this, let alone sql or procedural code.


No clue what:

... I want to insert new records to table
by substraction the the quantity
which have lower_ids from quantity_on_hand with the last id to make it 0.
(ID,quntity,quantity_on_hand)==> (6,100,0) and (7,10,0)
It is easier to write in pl/sql,
....

means, no idea what the logic is to be.


so, as a quick challenge to you - code that procedure, do the entire set of logic, then I'll tell you how wrong your code is (I'm betting I have a better than 50/50 chance your code will be incorrect..... :) )


SQL vs Procedural

SeánMacGC, February 18, 2009 - 6:52 am UTC

And another reason why Teradata (for example) programmers are more oriented towards set processing is that the procedural option is so rudimentary and restricted when compared to PL/SQL, that oftentimes it just isn't possible to use anything other than SQL. That's not a bad thing at all, and it is exactly where all SQL programmers should begin, but... it's great to have the power and flexibility of PL/SQL when it's needed -- not everything can be achieved through SQL.

A reader, February 18, 2009 - 7:27 am UTC

SeánMacGC from Dublin -
Your observation about Teradata and its rudimentary procedural language is spot on. But it also has a lot to do with the philosophy that databases are built on set theory, and that they are especially good at it when you execute using large/massive parallelism. Teradata actively preaches that philosophy and everybody buys into it.

What we need is a shift in thought process in the Oracle world. Everyone that touches Oracle needs to learn the mantra: try to do it in SQL first, understanding that SQL is not at odds with PL/SQL. If you need the SQL to be executed repeatedly, then by all means make it a procedure/function and bind-variabilize it - just don't turn it into a loop because you put it in a procedure/function. If it can't be done in a single SQL statement, then try to break it up and do it in a few steps, still using the SQL method. If you still can't do it, then that is why you have PL/SQL. I am not against PL/SQL at all. In fact I use it quite frequently, but I know where it makes sense and where it doesn't. Unfortunately a lot of folks don't think that way, and I see it all the time.
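The "break it up but stay set-based" step might look like the sketch below: stage an intermediate result, then finish with another single statement. All table and column names are invented for illustration:

```sql
-- Step 1: stage an intermediate, set-based result.
create global temporary table stage_gtt
( cust_id number, total number )
on commit delete rows;

insert into stage_gtt ( cust_id, total )
select cust_id, sum(amount)
  from payments
 group by cust_id;

-- Step 2: apply it, again with a single set-based statement.
merge into customers c
using stage_gtt s
on ( c.cust_id = s.cust_id )
when matched then
    update set c.lifetime_value = s.total;
```

Two statements, zero loops - the procedural layer, if any, just sequences the steps.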

Oracle needs to change their training courses/materials and actively push this methodology. With Exadata, Oracle will be entering the world of very large warehouses, and the power of Exadata comes from massive sequential I/O, which is synonymous with SQL and large scans. It would be almost criminal to have the power of Exadata and not use the raw power of SQL - maybe it should be a crime :-)

I will get off my soap box now but I just had to get it out. I see way too much nonsense and too little common sense now-a-days..

Keep up the good work Tom. There are many people who do listen to you and learn from you.

The explanation and PL/SQL solution of the problem

A reader, February 18, 2009 - 10:07 am UTC



RULES ==>
quantity_on_hand can not bigger than sum of positive quantity
which has lower id's

ID is primary key
all of columns has 'not null' constraint

Algorithm, or PL/SQL procedure:
****
create or replace procedure do_load
as
    l_remain_quantity number;
begin
    select quantity_on_hand
      into l_remain_quantity
      from t
     where id = (select max(id) from t);

    if l_remain_quantity = 0 then
        insert into t values (t_seq.nextval, 0, 0);
        return;
    end if;

    for stock_rec in (select * from t where quantity > 0 order by id desc)
    loop
        if stock_rec.quantity > l_remain_quantity then
            insert into t values (t_seq.nextval, l_remain_quantity, 0);
            exit;
        else
            insert into t values (t_seq.nextval, stock_rec.quantity, 0);
            l_remain_quantity := l_remain_quantity - stock_rec.quantity;
        end if;
    end loop;
end;

example
******************
table T
ID   quantity   quantity_on_hand
 1         10                  0
 2        -10                  0
 3         50                  0
 4        100                  0
 5          0                110

after do_load procedure
********************
table T
ID   quantity   quantity_on_hand
 1         10                  0
 2        -10                  0
 3         50                  0
 4        100                  0
 5          0                110
 6        100                  0
 7         10                  0


I tested my code and it is working, but it could be wrong,
because I did not originally solve the problem. My friend
has the PL/SQL solution and asked me whether the problem can be
solved with SQL easily and efficiently.

Note => the above PL/SQL code is mine,
and I did it in 10 minutes,
but I could not do it with SQL :(

Tom Kyte
February 18, 2009 - 3:57 pm UTC

It took me less than 2 seconds to find a bug.

Who else sees it? It punched me in the nose, ouch. It is a huge one.


that, and the rule:

RULES ==>
quantity_on_hand can not bigger than sum of positive quantity
which has lower id's


does not seem to be implemented by the code? As I read the 'rule' it says:

you cannot insert a record unless the quantity_on_hand value is less than or equal to the sum of the quantity column when quantity is greater than zero.

I don't see that logic implemented at all.

Please try to be more precise.


Your 'rule' as I read it would be:

lock table t;
insert into t (id, quantity, quantity_on_hand)
select t_seq.nextval, p_qty, p_qoh
from t
having p_qoh <= sum( case when qty > 0 then qty end );


and my implementation of your stated rule, as stated (I could not figure out your 10 minutes of code, what it was doing) should give you a clue as to the very large bug in your implementation :)


A reader, February 18, 2009 - 11:39 am UTC

All you need is a case statement. You can use an inline view or a scalar subquery to get the quantity on hand of the max(id) and then use a case statement to evaluate if the quantity is greater than remain quantity. Something like below. I have not run this so there may be errors but it gives you the general idea...

select
case when quantity > l_remain_quantity then
else..
from
(select id, quantity, (select quantity_on_hand from t where id = (select(max(id) from t)) l_remain_quantity from t where quantity > 0)




Tom Kyte
February 18, 2009 - 4:02 pm UTC

still missing something....

think *concurrency*

A reader, February 18, 2009 - 11:41 am UTC

the order of id column
is important also :)

My code beats your code :)

Muhammet, February 19, 2009 - 3:13 am UTC

yes, in pl/sql rule 1 is not implemented
because the way application work says it.

You are right, I did not think about concurrency,
since I forgot :(
The other reason I forgot is that
this procedure will be called only once, after
each January 1, for each material_id column (which I did not state,
for the simplicity of the question). It will be a dbms job like that.
So I do not think concurrency is a big problem :)




table T
ID   quantity     quantity_on_hand
1    10                   0
2   -10                   0
3    50                   0
4   100                   0
5     0                 110

I will take the quantity values from top to bottom by ID
and subtract.

Algoritm.
1. Take qoh which have max(id)  ==> 110
2. if qoh > quantity which is positive and firstly begin with the top id 
       then     insert into t (t_seq.nextval, qoh,0) 
         assign qoh = qoh - quantity
   else if qoh < quantity
          insert into t (t_seq.nextval, quantity,0)
          return
   end if;
3. repeat step 2 for lower id's 


  in this case   qoh    q      remain
                 110 - 100 ==> 10  since quantity_on_hand bigger insert into t (t_seq.nextval, 100,0) 
                  10 - 100==> since quantity_on_hand smaller  ==> insert into t values (t_seq.nextval,10,0)
 

                   

Unfourtanelty your sql code does not work correct either :(

(select * from T  where quantity >0 order by ID desc)

The order of the quantity values is important. It will start with max(ID) down to min(ID) when
subtracting from qoh.
 
How did you implement that in this SQL?
select t_seq.nextval, p_qty, p_qoh 
from t 
having p_qoh <= sum( case when qty > 0 then qty end ); 


For my sample after I execute your code
SQL> insert into t (id, quantity, quantity_on_hand)
  2  select t_seq.nextval, quantity, quantity_on_hand
  3  from t
  4  having quantity_on_hand <= sum( case when quantity> 0 then quantity_on_hand end );
select t_seq.nextval, quantity, quantity_on_hand
             *
ERROR at line 2:
ORA-02287: sequence number not allowed here

Your code does not even work :(

Tom Kyte
February 19, 2009 - 8:16 am UTC

... yes, in pl/sql rule 1 is not implemented
because the way application work says it. ...

huh?


... So I do not think concurrency is a big problem :) ...

wrong, you need to lock the table, not doing so means your code is wrong. It has a serious bug, it would take a very simple operation done at the same time to cause "data integrity to disappear"


... Unfourtanelty your sql code does not work correct either :( ...

sure it does in general (we couldn't test it, no creates, no inserts, no test case, but we could get it to work), it does exactly what the stated requirement was - but - and this is important - you did not state the requirement you are trying to satisfy. Read your rule 1 above, read your code from above. Does your code even REMOTELY resemble doing what 'rule 1' says???? No, it does not.



... 3. repeat step 2 for lower id's


in this case qoh q remain
110 - 100 ==> 10 since quantity_on_hand bigger insert into t
(t_seq.nextval, 100,0)
10 - 100==> since quantity_on_hand smaller ==> insert into t
values (t_seq.nextval,10,0)
...

that doesn't make sense. I see qoh=110, I see qty=100. Ok, we have 10.

but you said:

... 2. if qoh > quantity which is positive and firstly begin with the top id
then insert into t (t_seq.nextval, qoh,0)
...


so, we should be inserting qoh, not qty.



If this is how you write specifications - we'll never be able to generate correct code OR sql.


If you get it concisely and precisely - I'll be more than happy to show you the single SQL that does it.

But you have to actually write the specification so a coder would be able to generate code. We are not there yet.

a little correction

Muhammet, February 19, 2009 - 3:20 am UTC


in this case qoh q remain
110 - 100 ==> 10 since quantity_on_hand bigger insert into t (t_seq.nextval,
100,0)
10 - 50==> since quantity_on_hand smaller ==> insert into t values
(t_seq.nextval,10,0)


Tom Kyte
February 19, 2009 - 8:16 am UTC

start over, write the specification carefully.

Another correction

Muhammet, February 19, 2009 - 3:23 am UTC

It will be last correction I hope.

Algoritm.
1. Take qoh which have max(id) ==> 110

2. if qoh > quantity which is positive and firstly begin with the top id
then insert into t (t_seq.nextval, qoh,0)
assign qoh = qoh - quantity
else if qoh < quantity
insert into t (t_seq.nextval, qoh,0)
return
end if;
3. repeat step 2 for lower id's

Tom Kyte
February 19, 2009 - 8:16 am UTC

see above, write the specification from start to finish - clearly, so one could generate code from it.

Concurrency is a problem

Muhammet, February 19, 2009 - 3:31 am UTC

I will admit
concurrency will be a problem,
since there are other transactions besides this one
on that table.
It is my confession :(


Best regards,



Your SQL without the sequence

muhammet, February 19, 2009 - 3:46 am UTC

Your SQL without the sequence does not work either :)
SQL> select quantity, quantity_on_hand
  2   from t
  3   having quantity_on_hand <= sum( case when quantity> 0 then quantity_on_hand end );
 having quantity_on_hand <= sum( case when quantity> 0 then quantity_on_hand end )
        *
ERROR at line 3:
ORA-00979: not a GROUP BY expression

Tom Kyte
February 19, 2009 - 8:30 am UTC

like I said Muhammet - give us a clean specification that means something - and the test case to go with it and we'll see.


Here is the first example after I created a table and finished the thought:

ops$tkyte%ORA10GR2> create table t ( id number, qty number, qoh number );

Table created.

ops$tkyte%ORA10GR2> create sequence s;

Sequence created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> insert into t (id,qty,qoh) values ( s.nextval, 10, 0 );

1 row created.

ops$tkyte%ORA10GR2> insert into t (id,qty,qoh) values ( s.nextval, -10, 0 );

1 row created.

ops$tkyte%ORA10GR2> insert into t (id,qty,qoh) values ( s.nextval, 50, 0 );

1 row created.

ops$tkyte%ORA10GR2> insert into t (id,qty,qoh) values ( s.nextval, 100, 0 );

1 row created.

ops$tkyte%ORA10GR2> commit;

Commit complete.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> variable qoh number
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> exec :qoh := 110;

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> insert into t ( id, qty, qoh )
  2  select s.nextval, 0, :qoh
  3    from (select null
  4            from t
  5                  having :qoh <= sum(case when qty > 0 then qty end )
  6             )
  7  /

1 row created.

ops$tkyte%ORA10GR2> exec :qoh := 999999;

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> insert into t ( id, qty, qoh )
  2  select s.nextval, 0, :qoh
  3    from (select null
  4            from t
  5                  having :qoh <= sum(case when qty > 0 then qty end )
  6             )
  7  /

0 rows created.

A reader, February 19, 2009 - 4:20 am UTC

begin
   select  quantity_on_hand
   into l_remain_quantity
   from T 
  where id=(select max(id) from T);
<= This is where my session is going to change the data and issue a commit
  if l_remain_quantity = 0 then
    insert into t values (t_seq.nextval, 0, 0);
    return;
   end if;

<= And here is where your session picks up a different set of data
    for  stock_rec in (select * from T  where quantity >0 order by ID desc)  loop

      if ( stock_rec.quantity > l_remain_quantity) then
         insert into t values (t_seq.nextval, l_remain_quantity, 0);
          exit;
     else
        insert into t values (t_seq.nextval, stock_rec.quantity, 0);
        l_remain_quantity := l_remain_quantity - stock_rec.quantity;
     end if;
   end loop;

end;

Tom Kyte
February 19, 2009 - 8:31 am UTC

* specification *, text, a description of precisely what process needs to happen. stop writing code, just describe the process.

Muhammet

A reader, February 19, 2009 - 9:22 am UTC

table T
ID quantity quantity_on_hand
1 10 0
2 -10 0
3 50 0
4 100 0
5 0 110



1. Firstly find the quantity_on_hand value from
table which have maximum ID (5)




2. find a single quantity column (a single value, not sum) which has positive value and begin with
the quantity which have top id (4). (In next iteration use lower id, procedural iteration).

3. If quantity_on_hand is bigger than quantity for that specific row,
insert a row into table T with the next sequence value, quantity, and 0;
else
insert a row into table T with the next sequence value, quantity_on_hand, and 0,
and finish the procedure.

4. Repeat step 3 for the single positive-quantity row with the next lower id value.


In my example

step 1. The quantity_on_hand with the maximum id is 110.
step 2. The quantity that is positive and has the top id value is 100.
step 3. quantity_on_hand is bigger than quantity, so
insert into t (t_seq.nextval, 100, 0)
and assign quantity_on_hand = 110 - 100 = 10.

step 4. Repeat step 2 for quantity = 50 and id = 3.
step 3. quantity_on_hand < quantity (10, 50), so
insert into t (t_seq.nextval, 10, 0)
and finish the procedure.

the inserted rows are
(6,100,0)
(7,10,0)

Am I clear now?
I hope I am clear.
I pray to God :)
It is a bit hard for me to write in English.
Sorry for that.

Note ==> I did not write the message before this one.
Tom Kyte
February 19, 2009 - 1:31 pm UTC

ops$tkyte%ORA10GR2> with max_row
  2  as
  3  (select * from (select * from t order by id desc) where rownum = 1 ),
  4  qty_data
  5  as
  6  (select *
  7     from
  8  (
  9  select id, qty, sum(qty) over (order by id desc) sum_qty
 10     from t
 11    where id < (select id from max_row)
 12      and qty > 0
 13  )
 14  where sum_qty-qty < (select qoh from max_row)
 15  )
 16  select s.nextval,
 17         case when qty_data.sum_qty < max_row.qoh then qty_data.qty else max_row.qoh-(qty_data.sum_qty-qty_data.qty) end qty,
 18             0 qoh
 19    from qty_data, max_row
 20  /

   NEXTVAL        QTY        QOH
---------- ---------- ----------
         6        100          0
         7         10          0



that just fell out of your specification....

And if I were to comment it, it would be more understandable than the existing code....
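For readers without an Oracle instance handy, the analytic query above can be replayed almost verbatim in SQLite (3.25+ for window functions). This is a sketch, not the original environment: Oracle's sequence s.nextval is simulated as max(id) plus a row_number, but the shape of the query (max_row, qty_data, the capped final slice) follows Tom's SQL.

```python
# Sketch: Tom's set-based allocation transcribed to SQLite so it can be run
# anywhere. The sequence s.nextval is simulated; everything else mirrors the
# original query: find the "max id" row, walk positive-qty rows in descending
# id order, and cap the last slice at the remaining qoh.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    create table t (id integer primary key, qty integer, qoh integer);
    insert into t values (1, 10, 0), (2, -10, 0), (3, 50, 0),
                         (4, 100, 0), (5, 0, 110);
""")

rows = conn.execute("""
    with max_row as (
      select * from t order by id desc limit 1
    ),
    qty_data as (
      select * from (
        select id, qty, sum(qty) over (order by id desc) as sum_qty
          from t
         where id < (select id from max_row)
           and qty > 0
      )
      where sum_qty - qty < (select qoh from max_row)
    )
    select (select max(id) from t)
             + row_number() over (order by qty_data.id desc) as new_id,
           case when qty_data.sum_qty < max_row.qoh
                then qty_data.qty
                else max_row.qoh - (qty_data.sum_qty - qty_data.qty)
           end                                               as qty,
           0                                                 as qoh
      from qty_data, max_row
     order by new_id
""").fetchall()

print(rows)   # -> [(6, 100, 0), (7, 10, 0)]
```

Same two rows as the Oracle run above, produced by one statement with no loop.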


A little fix for step 3

A reader, February 19, 2009 - 9:33 am UTC

Sorry, the new step 3 will be:

3. If quantity_on_hand is bigger than quantity for that specific row,
insert a row into table T with the next sequence value, quantity, and 0,
subtract quantity from quantity_on_hand, and assign the result to quantity_on_hand;
else
insert a row into table T with the next sequence value, quantity_on_hand, and 0,
and finish the procedure.

Nobel for you

A reader, February 19, 2009 - 11:35 am UTC

Tom,

If there is a Nobel prize for patience you should get it. How can you keep your cool with this guy who sticks a totally irrelevant question in between a great discussion and cannot even provide any specifications for what he wants (or at least creates and inserts for his tables)? Reading these repeated posts with bits and pieces of what he is trying to do gave me a headache already. Hats off to your patience!

Wonderful

Muhammet, February 19, 2009 - 3:04 pm UTC

Nothing to say except that the solution is perfect.
Thanks a lot. There is always something to learn.
And I am very sorry for these parasite messages.
Since this was my first time sending messages to this forum, I was a bit excited and in a hurry; I wrote whatever was in my mind to the screen without thinking of others.
Lastly, I have learned a great deal from your thoughts and knowledge.
Thanks a lot for that also.

A slight misunderstanding

A reader, February 20, 2009 - 2:16 am UTC

Tom,

in reply to my post of February 19, 2009 - 4am US/Eastern ( http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:672724700346558185#1515925300346507751 ) you said: "* specification *, text, a description of precisely what process needs to happen. stop writing code, just describe the process.".

It was, however, not _my_ problem. I was referring to your comment on http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:672724700346558185#1512992100346945752, where you said
"It took me less than 2 seconds to find a bug.

Who else sees it? It punched me in the nose, ouch. It is a huge one.
..
"

The bold commenting in the code block is what I deem to be the bug.

Strange though: when I posted the reply there was no other comment after the posting I referred to. Now, revisiting the forum a day later, there are a bunch of postings in between.

Anyway, a big thumbs up for Asktom. I keep learning every time I visit this site.

procedural...declarative...it depends

A reader, February 22, 2009 - 4:17 am UTC

sometimes you just need one row of a table and several rows of its detail (master detail relation).

to get the master row, a table API is used (a get function),
simpler and shorter than select into bla bla.

then to get the corresponding details rows, another table API is used....

some processing is made, the corresponding rows are updated.

in this case, table APIs are used to implement transaction APIs...

anything wrong with the above?
hmmm...
can it be done using full declarative style..?
maybe.....maybe not...
maybe yes but then it will not be easy for the programmers to understand.

derivation of values when selecting can be built into that TAPI.
can it also be done using declarative style?
maybe ....but then...not as many programmers in THAT company understand the code as easily..

point of the story?

it depends...

as long as the goals are reached....as long as the way it is done is the most comfortable for those who are doing it..

what's wrong with that?


for some other type of processing, that includes a lot of numbers of rows, or for reporting purpose...
declarative is the way to go...

Yusan
Tom Kyte
February 22, 2009 - 7:09 am UTC

disagree all of the way around

first of all - procedural vs declarative? Where does that come from? This is "doing it slow by slow versus doing it right in SQL".




How could

... to get the master row, a table API is used (a get function),
simpler and shorter than select into bla bla.
...

be "more simple"????? come on, be realistic. Either

a) you know how to code sql or
b) you don't


The only way - REPEAT - the only way that

... to get the master row, a table API is used (a get function),
simpler and shorter than select into bla bla.
...

could be true is if you have a fear of sql, period. You have just described the easiest thing to do with SQL - get a single row by primary key; it does not get easier.



Table API's - useless, slow, inefficient, wrong, nothing good about them.

Transactional API's - good, fast, efficient, the right way.


... as long as the goals are reached....as long as the way it is done is the most
comfortable for those who are doing it..
....

wrong - so wrong. If the most comfortable way is the least efficient, slowest, most error prone, hardest to tune way - but it makes YOU FEEL GOOD - it is still *wrong*.


You are a developer, a coder; it is not so much about what feels comfortable as it is about writing good code.

You just reminded me of an article I just read and plan on blogging about soon
http://sob.apotheon.org/?p=935

This paragraph in particular:

... What I'm saying is that people are the proximate cause - people who don't know what the hell they're doing. These people accept all the marketing for OOP, which states that it'll make your code more modular and more stable and just generally better. It won't. It will help you do good things for your code if you apply it intelligently, and if you don't use it even when you shouldn't. ...


You keep going with your slow by slow; I (or someone like me) will still have to keep coming along behind you - ripping out your code and replacing it with the right way.


doing it slow by slow versus doing it right in SQL? IT DEPENDS!!

A reader, February 22, 2009 - 8:27 am UTC

I have no fear of SQL. I write a lot of SQL for reporting purposes, to calculate summaries etc.

But when it comes to doing transactions, to ME it is easier and faster using PLSQL, where the rows are delivered using a function.... it is shorter and TO ME EASIER to understand.
All these PLSQL APIs are mainly generated based on the table information in the data dictionary.

I also have PLSQL layers that emulate "objects", so that
information about those objects (attributes) can be obtained using a function (a get function).

slow by slow because it is procedural or object oriented?
not necessarily...depending on the processing needed and the performance goal.
can it be faster if done in pure SQL? that may be...

assembler is faster than .... is faster than ...
pure sql is faster than a plsql api is faster than ...

BUT,
as long as the performance goal is reached,
why bother whether things can be faster if the way it is done is the EASIEST for THOSE WHO ARE INVOLVED to code and to maintain?


Yusan


Tom Kyte
February 22, 2009 - 4:37 pm UTC

It is certainly not easier nor faster - if you know how to write sql.

It is definitely not faster nor easier to write procedural code for the database to process. What you always (repeat ALWAYS) end up with, with these silly little table APIs (get me a row, get me that row, look up this first name, look up this address), is billions of tiny inefficient SQL queries - repeated over and over and over and over again and again and again - each doing 4 or 5 IOs - adding up to hundreds or thousands - when all of the data you needed could have been requested in a SINGLE sql statement that takes 10's of IOs.
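That statement-count arithmetic is easy to demonstrate. Here is a small sketch, with SQLite standing in for Oracle and an invented customers/orders schema, counting the statements issued by the row-at-a-time "get me a row" style versus a single join:

```python
# Sketch: row-at-a-time lookups issue one query per row (1 + N statements
# for N rows), while a join asks for everything in ONE statement.
# The schema here is invented purely for the demo.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    create table customers (cust_id integer primary key, name text);
    create table orders    (order_id integer primary key, cust_id integer);
    insert into customers values (1, 'Ann'), (2, 'Bob');
    insert into orders values (10, 1), (11, 2), (12, 1);
""")

# TAPI style: one lookup per order row.
tapi_result, statements = [], 1   # 1 for the driving query itself
for order_id, cust_id in conn.execute("select order_id, cust_id from orders"):
    statements += 1
    name = conn.execute("select name from customers where cust_id = ?",
                        (cust_id,)).fetchone()[0]
    tapi_result.append((order_id, name))

# Set-based style: one statement, period.
join_result = conn.execute("""
    select o.order_id, c.name
      from orders o join customers c on c.cust_id = o.cust_id
     order by o.order_id
""").fetchall()

print(statements)    # -> 4 statements the slow-by-slow way
print(join_result)   # -> same answer, from 1 statement
```

With 3 rows the difference is trivial; with a million rows the slow-by-slow side issues a million and one statements.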


If you make the statement

... TO ME EASIER
to understand.
...

then I can say with 100% degree of confidence - you do not know sql as well as you should.


I see it time after time after time. Slow by slow procedural code - hard to understand BY ANYONE after 2 or 3 months, doing things inefficiently - always needing to be tuned.


do not go down the path of assembler is faster - first of all it is NOT, not without knowledge, not without experience. You can write crappy assembler, you can write good assembler.


But you know what, you could not write assembler that can outdo what can be done in a single sql statement - for the simple reason that I can pack more into one line of the NON-PROCEDURAL LANGUAGE that is sql - than you can code in your lifetime in the procedural thing that is assembler.

We are talking set based vs procedural - I don't care WHAT language you choose as your procedural language - pick any one. We are talking SET based vs SLOW by SLOW procedural. Do not try to confuse the message by picking a different procedural language, that is just a red herring.



.... as long as the performance goal is reached,
....

for the simple reason that "it never is" using this approach - maybe in single user developer prototypes - but in real life, as you scale up to hundreds or thousands of users, it fails, time after time after time.


And for every N cases you have where it seems to have worked, I'll give you N^2 cases where it did not - this is an approach to be avoided - not to be stressed.


Set based - learn it and you'll never look back. It is a lot more than summing up an account balance (simple aggregation) - really - really and truly learn it. You'll never look back and you'll wonder how you got anything done in the past

Yusan

A reader, February 22, 2009 - 11:39 am UTC

It may perform now, but what about in the future? I'm working on an apex solution which has been written in PL/SQL - plenty of function calls, plenty of cursors, loops, if statements.

It's fine for their current requirements, processing one client at a time via the Apex UI - however the business now wants to use this same code to run batch processing, and it could be up to a million records. They haven't got a hope in hell, and we will have to rewrite 100,000s of lines of code.


Tom Kyte
February 22, 2009 - 5:38 pm UTC

and please - to the entire world - do not take that as a criticism of APEX.

You can (in ANY development environment) write really bad code
You can (in ANY development environment) write really good code


In plsql (any language) you can write slow by slow code, in any language, you can incorporate and use the power of set based processing (distinctly non-procedural).



Apologies for not making that clear

A reader, February 23, 2009 - 3:02 am UTC

Apex just happens to be the example in our case, it could have been any UI, Forms, Java etc..

The process was requested by the business to enter one person, apply a series of business rules, and pop out the results the other end. The analysts' requirement specs were written as a series of flowcharts with conditional branching. The development team basically coded the requirement spec with no thought to sets of data, and we're hitting the same tables over and over again.

If they'd approached it from a SQL set based perspective we could have just added a batch_people table and joined it in to the existing SQL. The business hoped we could just add another big loop around the outside of the process iterating through each person calling the existing code.

So 10 seconds to process one person, 10 * 100,000 seconds to process 100,000 people, and they want to process 1,000,000 eventually.
Tom Kyte
February 23, 2009 - 3:18 am UTC

no worries, I just wanted to point out that "the choice of a development platform/language is not what made you slow by slow", "slow by slow was made by choice, by the choice of the developer"

I was just afraid that someone would see:

I'm working on an apex solution which has been
written in PL/SQL, plenty of function calls, plenty of cursors, loops, if statements.


and leap from there to "apex must be only capable of slow by slow processing" - which is not accurate.

So, no worries, I just was trying to short circuit any incorrect leaps of thought there.

code reuse

A reader, May 07, 2009 - 2:39 pm UTC

In our application, the following query repeats in a lot of places (either as an insert or a select).
What is the best practice to achieve code reusability?

In other words, how can I implement transaction-oriented APIs on this?


INSERT INTO temp_table (col_id, t_code, col_3)
SELECT col_id, t_code, col_3
  FROM table1
 WHERE col_pk = :1
   AND col_2 = :2
   AND col3 = :3
UNION
( SELECT col_id, t_code, col_3
    FROM table2
   WHERE col_pk = :1
     AND col_2 = :2
     AND col3 = :3
  MINUS
  SELECT col2_id AS col_id, t_code, col_3
    FROM table1
   WHERE col_pk = :1
     AND col_2 = :2
     AND col3 = :3
);

Tom Kyte
May 11, 2009 - 1:51 pm UTC

put it into a packaged procedure and call the procedure from multiple places.


Use common sense and modular coding practices. A transactional API does not mean "all of the SQL required for a transaction shall be in this one procedure". Common sense will mandate that you break things up logically - for code reuse, for code maintainability, for understandability.

You never want a subroutine to be bigger than your screen; it would never all fit in a single procedure anyway.
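As one possible reading of this advice, here is a sketch of the "one shared routine, many callers" idea, with a plain Python function standing in for a PL/SQL packaged procedure and SQLite standing in for Oracle (MINUS becomes EXCEPT in SQLite; the schema and sample data are improvised from the poster's column names):

```python
# Sketch: the repeated INSERT ... SELECT lives in exactly one routine, and
# every call site passes binds instead of re-stating the SQL. Schema/data
# are invented; the query keeps the poster's A union (B minus A') shape.
import sqlite3

def load_temp_table(conn, pk, c2, c3):
    """The single, shared home of the repeated statement."""
    conn.execute("""
        insert into temp_table (col_id, t_code, col_3)
        select col_id, t_code, col_3
          from table1
         where col_pk = :pk and col_2 = :c2 and col3 = :c3
        union
        select * from (
            select col_id, t_code, col_3
              from table2
             where col_pk = :pk and col_2 = :c2 and col3 = :c3
            except
            select col2_id, t_code, col_3
              from table1
             where col_pk = :pk and col_2 = :c2 and col3 = :c3
        )
    """, {"pk": pk, "c2": c2, "c3": c3})

conn = sqlite3.connect(":memory:")
conn.executescript("""
    create table temp_table (col_id, t_code, col_3);
    create table table1 (col_id, t_code, col_3, col_pk, col_2, col3, col2_id);
    create table table2 (col_id, t_code, col_3, col_pk, col_2, col3);
    insert into table1 values (100, 'A', 'x', 1, 1, 1, 200);
    insert into table2 values (100, 'A', 'x', 1, 1, 1), (200, 'B', 'y', 1, 1, 1);
""")

load_temp_table(conn, 1, 1, 1)   # any other caller reuses the same routine
rows = sorted(conn.execute("select * from temp_table").fetchall())
print(rows)   # -> [(100, 'A', 'x'), (200, 'B', 'y')]
```

If the statement ever changes, it changes in one place, which is Tom's point about code reuse and maintainability.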

No bigger than your screen

Kim Berg Hansen, May 12, 2009 - 8:20 am UTC

...
You never want a subroutine to be bigger than your screen, it would never all fit in a single procedure anyway.
...
He he.. I often have single SQL statements bigger than my screen :-)

My background is a very procedural mindset, but I'm learning the power of "set-based" thinking.
For me the "with" clause has been a great lifesaver many times when converting slow-by-slow to set-based processing.

(Just my "tuppence" :-)

Tom Kyte
May 13, 2009 - 10:33 am UTC

I have a really big screen with lots of pixels :)

http://asktom.oracle.com/Misc/it-has-been-while.html

Love the discussion

A reader, May 14, 2009 - 6:27 am UTC

This is one of the best discussions I've read so far. Fantastic!
I am 100% with Tom when he says do it in SQL if you can. Unfortunately, you do come across developers who just do not want to see reason. I remember at my last job, I'd written a simple select using analytics to solve a problem. Except the senior lead decided that it was too difficult to understand analytics and he (and the rest of the team) couldn't be bothered with learning it. So he re-wrote the entire thing in PLSQL! Thankfully I left soon after, with some hair still left on my head!

Code Reuse

A reader, June 22, 2009 - 12:18 pm UTC

Tom -

I am using a generic view (with sys_context) to achieve code reusability. Is there any downside to using sys_context for this kind of operation?
Tom Kyte
June 22, 2009 - 3:38 pm UTC

do you mean like a parameterized view?

it is a valid, workable approach - suggested on this site a few times...

http://asktom.oracle.com/pls/ask/search?p_string=%22parameterized+view%22

Zahir M, June 22, 2009 - 9:31 pm UTC

Yes, I meant "Parameterized View". Thanks.

when to separate the logic from SQL

A reader, September 16, 2009 - 1:50 am UTC

Hi Tom,
Do you recommend separating the logic from the SQL if it is used by more than one SQL statement?

For example, let's say we have customer, customertype and order tables.

customer table:
customer_ID,
customer_type_id,
name

CustomerType table
customer_type_id,
customer_type
discount

Order table
order_id,
customer_id,
order_details

Let's say we have to develop 2 reports:
1. a report to list all customers
2. a report to list all orders
Both reports need to display the discount field, which has some logic.
We can do this in a single SQL statement:
select customer_id, name, customer_type,
       case when customer_type = 'VIP' then discount * 2
            else discount
        end discount
  from customer c, customertype ct
 where c.customer_type_id = ct.customer_type_id

Similarly,
select order_id, order_details, customer_id,
       case when customer_type = 'VIP' then discount * 2
            else discount
        end discount
  from order o, customer c, customertype ct
 where o.customer_id = c.customer_id
   and c.customer_type_id = ct.customer_type_id


thanks.

Tom Kyte
September 16, 2009 - 10:09 am UTC

prior to 11g, you would
create or replace view customerType_v 
as
select customer_type_id, customer_type, discount,
       case when customer_type = 'VIP' then discount * 2 
            else discount 
        end reported_discount
  from customerType;


and use it. Keep the logic in SQL.


In 11g, you would

alter table customerType add (
  reported_discount as ( case when customer_type = 'VIP' then discount * 2
                              else discount
                          end )
);



add a virtual column.
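A runnable sketch of the pre-11g view approach, transcribed to SQLite so it can be executed anywhere (the sample customerType data is invented). The point is that the discount logic lives in exactly one place, and every report simply selects reported_discount:

```python
# Sketch: the CASE logic is defined once, in the view; both the customer
# report and the order report would select reported_discount from it.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    create table customerType (customer_type_id integer primary key,
                               customer_type text, discount integer);
    insert into customerType values (1, 'VIP', 5), (2, 'REGULAR', 3);

    -- the logic lives in exactly one place
    create view customerType_v as
    select customer_type_id, customer_type, discount,
           case when customer_type = 'VIP' then discount * 2
                else discount
            end as reported_discount
      from customerType;
""")

rows = conn.execute("""
    select customer_type, reported_discount
      from customerType_v
     order by customer_type_id
""").fetchall()
print(rows)   # -> [('VIP', 10), ('REGULAR', 3)]
```

Change the discount rule in the view (or, in 11g, in the virtual column) and every report picks it up without touching any query.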

Segregation into camps?

Jens, September 16, 2009 - 8:31 am UTC

I think the "bricklayer" made a lot of sense in his rather long post. In a lot of shops it seems very important to "hide" the database and data model from the programmers.
They have special "Application DBAs" that write views and APIs that the "frontend programmers" can use, and they claim they can do so without database knowledge...
In the java world they have view objects that "hide" the database by introducing another layer of abstraction. In .net they have different terms for the same thing.
And they introduce one "framework" after the other...

Our shop has a different approach:
Each programmer is responsible for his application, every bit of it, including all the sql needed.
Ok, so maybe we have a lot of inefficient sql run against the database, but that doesn't matter.
What matters is, the developer knows the requirements (he collected and digested it), the data model (he designed it himself) and the code (he wrote it).
The downside of this is that we do spend a fair amount of time training new developers. And it's not so easy to find developers "offshore" with the entire skillset needed...
The positive:
Developers taking ownership of their apps, and develop a great understanding of the business process the application supports.

I may have gone a bit off topic here, if so:
Sorry for that...

PS: I have seen some mentions of "hiding literals". That's not why we use bind variables. Not for hiding anything. Just to help the database avoid parse work.
And of course, to help prevent sql-injection..

Stovepipes and loss of Data Integrity

Rick Davis, September 16, 2009 - 11:30 am UTC

Jens, I agree mostly with what you said, but disagree strongly with your statement: "the data
model (he designed it himself)". Does this programmer have an enterprise-wide view/concept of the data? Doesn't data belong to the enterprise? Without an enterprise-wide view of data you end up with a bunch of application/data stove pipes and risk duplication of data and the inherent loss of integrity. Someone must provide data architecture/design with the enterprise in mind and how this new data fits into the enterprise picture. Applications come and go, data is forever. A quick search on this site will provide more info on this topic. Regards
Tom Kyte
September 16, 2009 - 5:36 pm UTC

yes, agreed, developer should be in the entire loop, but they should not own the entire stack from the bottom up.

they cannot in general, not unless everyone makes standalone apps that share no data - but that never happens...

Why is the difference

Galen Boyer, September 16, 2009 - 11:59 am UTC

> What is the difference between:
>
> insert into t ( a, b, c ) values ( ?, ?, ? );
>
> and
>
> t_pkg.do_insert( a=>?, b=>?, c=>? );

1) If you add a column, the compiler will help you out with #2 because
you can change its API and all callers of that API will then break,
which tells you what to fix.

2) If you want to take care of auditing columns in a specific way, all
the time, #2 allows you to make sure that happens outside of a trigger
in one place, #1 means that auditing code has to happen in all places
where you would put that code.

3) When t changes, you can test #2 to a very detailed level "insert
happened level" and then test the callers of it from a "did the insert
get embodied correctly in the outer code". With #1, you have to test
each occurrence at the detailed level and then test it from a
surrounding code level, ie, unit testing is much more implementable
from #2.

4) #2, used as an interface to applications allows one to generate the
low-level ObjectRelation layer stuff. This gets quite useful as the app gets
larger.

But, at the end of the day, I consider #2 an XAPI, used for
applications and row-by-row inserting needs. I do not consider #2
something that should be considered if one needs to do set
based-processing, ie, retrieve me a page of data for display, or, ETL
needs to insert records, lets run those through the TAPI. What does
happen is that #2 becomes a standard for all things database, and then
that proliferates to function calls for everything sql and set
logic/thinking does start to go out the window. But, I do like TAPI
for row level operations coming from APPs as well as other PL/SQL row
level needs.
Tom Kyte
September 16, 2009 - 5:41 pm UTC

1) if you add a column, you have the dependent code (user dependencies) and can figure out what code needs to be reviewed since you have to do an impact analysis first anyway. You cannot just "add a parameter to a stored procedure" like that.

This is a really horrible reason :(

2) This is one time whereby a trigger would make sense.


3) huh? I don't know what you mean. You have to test the effect of changing the call to t_pkg.do_insert in exactly, precisely the same way you would have to for the insert - I don't see any semantic difference in code between the two.

both are sql statements.
both take a set of parameters.
both must be tested for correct use after changing.

4) again, huh? what is the difference if the OR layer does

prepareStmt( some_string );
bind(a)
bind(b)
bind(c)
executeStmt()

vs


prepareStmt( some_string );
bind(a)
bind(b)
bind(c)
executeStmt()


they are the same?


XAPI - rocks.

TAPI - doesn't.

Re reply: On Stovepipes and loss of Data Integrity

Jens, September 17, 2009 - 3:17 am UTC

Rick/Tom,
I didn't describe completely our organisation, maybe I should have ;-)

In each of the main areas of application development, like Visual/Usability Design, Security, Database Design etc., we have appointed one person to own that area. So all designs are run by this person for review. And developers do *NOT* work in silos restricted to "their" application, on the contrary.
Project management is of course also involved in tying everything together across applications/business functions etc.
One of the reasons for our (relative) success is EXACTLY the fact that we create/add value from the data by being the bridge between the various departmental "silos".
And seeing that:
Hey, if we combine Application A's data with Application B's (we may have to add a table or two), then Department C will no longer have to manually collect those 10 excel spreadsheets each month..

I am not saying we have the perfect model, but we are getting better and better.

Jens

TAPI - I guess they are good if used properly

Karthick, September 17, 2009 - 3:28 am UTC

Never say Never, Never say Always, You always say ;)

I think TAPI can be a good choice if used properly.

If you have 1000 tables in your application and you create 1000 TAPIs, then that could be a bad thing.

But at the same time, if you identify tables like master tables and create TAPIs for them to perform INSERT, UPDATE, DELETE and SELECT, I think that's not a very bad idea. By doing that I have successfully separated my SQL from the presentation layer.

And guess what - once I define the structure of these TAPIs I can automate their generation, by which I don't have to write any code in the future.

So my thought is that an application would benefit greatly from using both TAPI and XAPI. Leaving one for the other would not be a wise decision.
Tom Kyte
September 17, 2009 - 10:54 am UTC

... By doing that i have successfully separated my SQL from the
presentation layer. ....

big deal? so what? so like I said, instead of coding

prepareStmt( some_string );
bind(a)
bind(b)
bind(c)
executeStmt()

you will code:

prepareStmt( some_string );
bind(a)
bind(b)
bind(c)
executeStmt()


I am utterly failing to see HOW that makes a difference.


AND, more to the point, why is the front end doing sql at all - why does it need to insert into a single table - why would it not invoke an XAPI to take a set of inputs and take the database from one consistent state to the next????

I'm not saying the front end should call sql (I've written that it would be good if insert/update/delete/select/merge were forbidden outside of PLSQL), but it should not call a little procedure that does a single sql statement, you have provided nothing for the front end.



Okay, lets extend the question

Galen Boyer, September 17, 2009 - 10:59 am UTC

Let me extend your question, so maybe my points can be clearer.

> What is the difference between:

1)
> a_pkg
> BEGIN
> ...
> insert into t ( a, b, c ) values ( ?, ?, ? );
> ...
> END;
>
> b_pkg
> BEGIN
> ...
> insert into t ( a, b, c ) values ( ?, ?, ? );
> ...
> END;
>
> c_pkg
> BEGIN
> ...
> insert into t ( a, b, c ) values ( ?, ?, ? );
> ...
> END;
>

2)
> t_pkg
> BEGIN
> ...
> insert into t ( a, b, c ) values ( ?, ?, ? );
> ...
> END;
>
> a_pkg
> BEGIN
> ...
> t_pkg.do_insert( a=>?, b=>?, c=>? );
> ...
> END;
>
> b_pkg
> BEGIN
> ...
> t_pkg.do_insert( b=>?, a=>?, c=>? );
> ...
> END;
>
> c_pkg
> BEGIN
> ...
> t_pkg.do_insert( c=>?, b=>?, a=>? );
> ...
> END;
>

--------------------
1) You have to line up the column name and the inserted value 3 times.

2) You have to line up the column name and the inserted value once;
note the parameter calls are in different orders and it still works. Just less
chance for an error, especially if attribute a and attribute b are
the same datatype.

--------------------
1) You have to test all iterations of data values that might happen
(consider it to be a 100 row table) TIMES 3.

2) You have to test all iterations of data values that might happen
(consider it to be a 100 row table) once. Yes, of course, you have to
test a_pkg, b_pkg and c_pkg, but those tests can look at each package
from a different point of view. You can trust t_pkg fully, as well as
hand it off to someone else, whereas you cannot trust a_pkg, b_pkg
and c_pkg fully until the suite of tests performed against t_pkg is
performed 3 times, once each for a_pkg, b_pkg and c_pkg.

--------------------
1) Anything that needs to be added later, any logic that might need to
be amended to inserting to table t, needs to be added in 3 places.

2) Anything that needs to be added later, any logic that might need to
be amended to inserting to table t, needs to be added in 1 place.
Call this code insulation, if you will.

1) When coding this change to table t and the three packages, I have
to have the "dependency" list somewhere and refer to it. And, when I
want to review whether I made my changes correctly, the compiler
cannot help me, I just have to constantly look at each occurrence of
my changes. The compiler helps me know each statement compiled
correctly, but it certainly will not help me if I misalign column
names with their inserted values, or whether I added the amended logic
everywhere.

2) When coding this change to table t and the four packages, I first
fix package t_pkg, then, the compiler can start helping me out. I can
cross-reference the compiler's output to the "dependency" list once
and show that the compiler's output and the dependency list give me
the same information. From that point in time, I can then depend on
the compiler to tell me a lot as to whether I coded this correctly. I
can change a,b and c_pkg and the compiler can be relied upon to let me
know if my insert to t is happening correctly. I can already start
being 100% certain that my code is going to work, because the compiler
is telling me so.
Tom Kyte
September 17, 2009 - 1:43 pm UTC

Correct - what is the difference from the point of view of the invoker? what is the difference - please - do tell me?


1) You have to line up the column name and the inserted value 3 times.


huh??? no clue what you mean there. If you mean "match the column list to the values list", I'm not buying that. I rarely, if ever, see people using named notation either - what we'd end up with would be the same.


1) You have to test all iterations of data values that might happen
(consider it to be a 100 row table) TIMES 3.


I don't know what that means either.

1) Anything that needs to be added later, any logic that might need to
be amended to inserting to table t, needs to be added in 3 places.


I'm back to TRANSACTIONAL API.


1) When coding this change to table t and the three packages, I have
to have the "dependency" list somewhere and refer to it. And, when I
want to review whether I made my changes correctly, the compiler
cannot help me, I just have to constantly look at each occurrence of
my changes. The compiler helps me know each statement compiled
correctly, but it certainly will not help me if I misalign column
names with their inserted values, or whether I added the amended logic
everywhere.


Not buying that either

coder in java codes

prepare( begin p( a=>?,b=>?,c=>? ); end; )
bind(1, a)
bind(2, b)
bind(3, c)

prepare( begin p( a=>?,b=>?,c=>? ); end; )
bind(1, b)
bind(2, a)
bind(3, c)


same difference, you have the same chance of error here - they bind by position. Named notation with 100 columns isn't going to fix that.


I can already start
being 100% certain that my code is going to work, because the compiler
is telling me so.


No, it isn't. If you think so, you might have code to go back and look at.


again - I'm for XAPI
I've done the TAPI stuff
I've been there
I've used it
I won't do it ever again, never.

INSERT syntax improvements

Duke Ganote, September 18, 2009 - 7:45 am UTC

Chasing that rabbit...I liked Chet Justice's suggested INSERT syntax of named parameters:
http://www.oraclenerd.com/2009/06/oracles-new-insert-syntax.html

"Wouldn't it be cool if you could do something like this though?

INSERT INTO my_table
( id => seq.nexval,
create_date => SYSDATE,
update_date => SYSDATE,
col1 => 'A',
col2 => 'SOMETHING',
...
Tom Kyte
September 18, 2009 - 12:31 pm UTC

I agree with that. Yes.

Well, I have no problem with that

Galen Boyer, September 18, 2009 - 9:23 am UTC

Tom,

Sounds good. I have no problem with hearing you being totally against
something. Your opinion on this will resonate the next time I
architect a system from scratch, but I've used it successfully in the
past, so I do believe it has merit. My guess is that you are treating
this a lot like "WHEN OTHERS". That clause is very useful for one
thing, logging errors autonomously, but there is so much risk in not
typing "RAISE" after it that you just yell and scream at the top of your
internet lungs, "DO NOT USE WHEN OTHERS. I WISH ORACLE DID NOT EVEN
HAVE IT!!!!". In the same way, I'm guessing that when you've seen TAPI
you've seen a procedural hodgepodge of piles of poorly performing sql, and
you've seen it enough that you just yell, "NEVER!!!".

I can respect that for sure. You've yelled it enough, I
may not use the TAPI again.
Tom Kyte
September 18, 2009 - 12:39 pm UTC

... My guess is that you are treating
this a lot like "WHEN OTHERS". ...

you got it, 100%. If I was to say, even once, 'well, ok, I guess that is ok', then that turns into "he said it is a good practice, a best practice even" and TAPIs get used inappropriately all over the place.

The Asktom humour

Associate, September 18, 2009 - 5:10 pm UTC

"If I was to say once 'well, ok, i guess that is ok', then that turns into "he said it is a good practice, a best practice even" "

Brilliant!

A reader, September 24, 2009 - 3:10 pm UTC

PLSQL support was a very good improvement to Oracle.
Tom Kyte
September 29, 2009 - 8:15 am UTC

yes? your point is what exactly?


Enjoying the discussion, but what's so bad about TAPIs?

Jack, December 19, 2009 - 12:24 pm UTC

Consider a banking application that requires a graphical interface and must provide the following functionality:

1) show balance for account x
2) credit n dollars to account x
3) debit n dollars from account x
4) transfer n dollars from account x to account y


I am considering writing the application in one or two programming languages. The first will be something other than PL/SQL (Java, C++, Python, or whatever). This is required for the interface. The second (optional) is PL/SQL.

I know my application logically requires only three SQL statements:

A) select balance
     from account_tb
    where acctnum = p_acctnum;

B) update account_tb
      set balance = balance - p_amount
    where acctnum = p_acctnum;

C) update account_tb
      set balance = balance + p_amount
    where acctnum = p_acctnum;


I need to make some choices about how to organize my application. I want it to be as simple and clear as possible for all the normal reasons (e.g. readability, maintainability, etc.). I am considering three approaches.

The first is to use only the non-PL/SQL language. In this approach, the text strings of the SQL will exist in the non-PL/SQL code and will be submitted to the database in a manner such as:

balance = db_submit_sql(sql_a, acctnum_bind)

OR

db_submit_sql(sql_c, acctnum_bind, amount_bind)
db_commit()

OR

db_submit_sql(sql_b, acctnum1_bind, amount_bind)
db_submit_sql(sql_c, acctnum2_bind, amount_bind)
db_commit()



The second approach is to use PL/SQL for the application's one multi-statement transaction, the transfer (XAPI). Here is the PL/SQL:

create procedure transfer_pr (
    p_from_acctnum in number,
    p_to_acctnum   in number,
    p_amount       in number
)
as
begin
    update account_tb
       set balance = balance - p_amount
     where acctnum = p_from_acctnum;

    update account_tb
       set balance = balance + p_amount
     where acctnum = p_to_acctnum;
end;
/

Now in the non-PL/SQL language I do everything as before, except I do the transaction as follows:

db_execute_plsql(transfer_pr,acctnum1_bind,acctnum2_bind,amount_bind)


This approach has some drawbacks in the areas of readability and maintenance. One, I have to look in two places, reading two languages, to see all of the application's SQL (increases mental overhead in reading the application). Also, my application now has five SQL statements instead of three. Statements B and C are duplicated (may one day become out of sync).


The third approach is to put all of the SQL in PL/SQL. The PL/SQL application will have four procedures:

get_balance_pr (p_acctnum in number, p_balance out number);
debit_pr (p_acctnum in number, p_amount in number);
credit_pr (p_acctnum in number, p_amount in number);
transfer_pr (p_from_acctnum in number, p_to_acctnum in number, p_amount in number);


The SQL statements duplicated in approach 2 will be factored out by modifying the transfer procedure as follows:

create procedure transfer_pr (
    p_from_acctnum in number,
    p_to_acctnum   in number,
    p_amount       in number
)
as
begin
    debit_pr(p_from_acctnum, p_amount);
    credit_pr(p_to_acctnum, p_amount);
end;
/

The non-PL/SQL language will do all database access using db_execute_plsql(). SQL text will not exist in the non-PL/SQL language.

Approach 3 solves the drawbacks of approach 2. Approach 1 is simplest in that it doesn't involve PL/SQL (avoids a whole extra language). Approach 3 seems more organized in that it puts all the SQL in the database where it might be tuned and maintained by someone who doesn't know the non-PL/SQL language.

Approach 3 uses both transaction API (XAPI) and table API (TAPI) as I understand the terms. What is wrong with TAPIs in this case?
Tom Kyte
December 20, 2009 - 8:53 am UTC

what's good about them? They just mimic "insert, update, delete", leaving the transaction in the hands of the coder - who is typically woefully underskilled to know what to do or how to do it correctly.

That is my problem - I wish insert/update/delete, and in most cases even select, were outlawed outside of PL/SQL. Stored procedures take the database from one consistent state to the next, in a performant, correct, standard fashion (everyone does the same thing the same way every time).


If this is true:


I know my application logically requires only three SQL statements:

A) select balance
     from account_tb
    where acctnum = p_acctnum;

B) update account_tb
      set balance = balance - p_amount
    where acctnum = p_acctnum;

C) update account_tb
      set balance = balance + p_amount
    where acctnum = p_acctnum;


then your XAPI will be three things and three things only. That they happen to have one sql statement apiece is fine and dandy. This is NOT a TAPI however, because if (C) for example changes to something else, you would update the XAPI to do it.

If this were a TAPI, then you would have two entry points only - you would lose the fact there are 'three things' you do and the TAPI would permit modification of any/all columns.




....
Now in the non-PL/SQL language I do everything as before, except I do the
transaction as follows:

db_execute_plsql(transfer_pr,acctnum1_bind,acctnum2_bind,amount_bind)


This approach has some drawbacks in the areas of readability and maintenance.
.....


You were only kidding when you wrote that, right? I mean - what could be clearer than:

....
please do the transfer transaction right here
....


I don't even know your 3gl language and I could read your code snippet and see what it was doing??? I don't see any readability or maintenance negatives, I see EXCESSIVELY readable code that is EASY to maintain (since you probably won't call transfer from just one place but ultimately dozens of places as you maintain and enhance your code)


... Approach 1 is simplest in that
it doesn't involve PL/SQL (avoids a whole extra language). ...

if you ask me, approach 1 is the hardest, since developing SQL access in PL/SQL is trivial and in the other languages, harder.


...Approach 3 seems
more organized in that it puts all the SQL in the database where it might be
tuned and maintained by someone who doesn't know the non-PL/SQL language.
....

bingo :)


It does not use TAPI however, you have a defined transaction piece of functionality called "debit_pr"

If it were a tapi, it would be:


account_pkg.read_row( p_from_acctnum, l_some_record, lock_record => true );
l_some_record.balance := l_some_record.balance - p_amount;
account_pkg.update_row( p_from_acctnum, l_some_record ); -- update entire record

account_pkg.read_row( p_to_acctnum, l_some_record, lock_record => true );
l_some_record.balance := l_some_record.balance + p_amount;
account_pkg.update_row( p_to_acctnum, l_some_record ); -- update entire record


see how that would look sort of like you are doing ALL of the work - in fact, you are. It is not any more useful (I would say less useful) than just using SQL at that point.


There is no reason NOT to use modular code (option 3 is just an exercise in modular code: small subroutines that each do one small thing). I do it that way.

It is NOT a TAPI at all though
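The contrast Tom draws can be sketched end to end. Below is a minimal illustration with Python's sqlite3 standing in for Oracle: the XAPI style exposes one transfer entry point that owns the transaction, while the TAPI style makes the caller read, modify, and write whole rows itself. All function and table names here are illustrative, not any real API.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account_tb (acctnum INTEGER PRIMARY KEY, balance REAL)")
conn.execute("INSERT INTO account_tb VALUES (1, 100.0), (2, 50.0)")

# XAPI style: one named piece of transaction functionality.
def transfer(conn, from_acct, to_acct, amount):
    try:
        conn.execute("UPDATE account_tb SET balance = balance - ? WHERE acctnum = ?",
                     (amount, from_acct))
        conn.execute("UPDATE account_tb SET balance = balance + ? WHERE acctnum = ?",
                     (amount, to_acct))
        conn.commit()    # both updates or neither: the transaction lives here
    except Exception:
        conn.rollback()
        raise

# TAPI style: generic row access; the caller does all of the work itself.
def read_row(conn, acctnum):
    return conn.execute("SELECT balance FROM account_tb WHERE acctnum = ?",
                        (acctnum,)).fetchone()[0]

def update_row(conn, acctnum, balance):          # "update entire record"
    conn.execute("UPDATE account_tb SET balance = ? WHERE acctnum = ?",
                 (balance, acctnum))

transfer(conn, 1, 2, 30.0)                       # XAPI: one readable call

bal = read_row(conn, 2)                          # TAPI: the transfer logic leaks
update_row(conn, 2, bal - 10.0)                  # into the caller, and nothing
update_row(conn, 1, read_row(conn, 1) + 10.0)    # groups these into a transaction
conn.commit()                                    # unless the caller remembers to

print(conn.execute("SELECT balance FROM account_tb ORDER BY acctnum").fetchall())
# [(80.0,), (70.0,)]
```

The TAPI path works, but the knowledge of what a "transfer" is lives in every caller instead of in one named routine, which is the objection above.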

TAPI for best fit of survey data

Jim, November 26, 2010 - 11:06 pm UTC

This is a very interesting discussion, apart from that thing with the poorly specced qoh.

I just watched some SF videos on how to break my "addiction" to SQL. One thing I learned was the (11g) result cache hint. He put it into his TAPI and will have to remember to put it in again when he adds a column to his constantly changing data model and regenerates his packages. I'll be putting it into some of my XAPIs on Monday. For example, it can go into a function which selects from just one table - Permitted_Countries - and resides in the Customer XAPI (but is not exposed in the package spec). That's how XAPI guys roll!

I did almost get convinced by TAPIs when he showed how to update lots of rows in a table, using the TAPI functions and saving lots of coding. It's true that using the TAPI is quicker than defining a collection, fetching into the collection, iterating through the collection whilst changing attributes on various records and finally processing a bulk update. But if you have a healthy relationship with SQL you could write
update emp set col=?? where dept = ?? and ??;
and leave work early to concentrate on more enjoyable addictions.
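The contrast Jim describes can be sketched briefly. Below, Python's sqlite3 stands in for Oracle; the emp table and the "raise salaries in a department" task are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (empno INTEGER PRIMARY KEY, deptno INTEGER, sal REAL)")
conn.executemany("INSERT INTO emp VALUES (?, ?, ?)",
                 [(1, 10, 1000.0), (2, 10, 2000.0), (3, 20, 3000.0)])

# TAPI style: fetch each row, change it in the client, write it back.
rows = conn.execute("SELECT empno, sal FROM emp WHERE deptno = 10").fetchall()
for empno, sal in rows:
    conn.execute("UPDATE emp SET sal = ? WHERE empno = ?", (sal * 1.5, empno))

# SQL style: one set-based statement does the same work in one round trip.
conn.execute("UPDATE emp SET sal = sal * 1.5 WHERE deptno = 20")

print(conn.execute("SELECT empno, sal FROM emp ORDER BY empno").fetchall())
# [(1, 1500.0), (2, 3000.0), (3, 4500.0)]
```

Both reach the same result, but the set-based statement avoids the fetch, the loop, and the per-row writes entirely.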
Tom Kyte
November 28, 2010 - 3:02 pm UTC

qoh? no clue what you are talking about. quantity on hand?




Say no to TAPIS

Dan, February 02, 2011 - 9:26 am UTC

The Oracle developer community at my previous employer was almost evangelical about table encapsulation tiers (TAPIs).

They also had access to a tool from Quest known as QNXO, which allowed the wanton generation of TAPIs. So, without any thought about how the tables would be used, there was suddenly a TAPI for over 300 tables.

The tables were joined together as necessary, procedurally, in what was known as a PL/SQL Object Tier... Slow... WTF...

About 95% of the TAPIs were never used, and 90% of the code was refactored to increase performance (it was very set intensive). We used a novel concept, known as SQL, to achieve this.

nico, July 12, 2012 - 3:33 pm UTC

Let us be honest. We have an OLTP application with very few inserts, and the updates are really simple. Most of them perform various tasks - validate some input, log something - before the actual insert, and then afterwards populate some other tables too. For that I'm a big fan of putting this in a PL/SQL package/procedure. Some might call it TAPI, others XAPI.

At first it might seem there's no benefit in putting a simple insert into a procedure, but I sure would prefer to change that ONE simple procedure if I later decide to add an extra step to the insert process, rather than rewriting 100 insert statements all over the place. Select statements are a different matter. Mostly these are not simple one-table statements, so I DON'T create TAPIs for them, but I DO put my SQL in PL/SQL packages and NOT IN THE CLIENT application. The database is where SQL ought to be, and it's way easier to maintain if it's in the database.
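The "change one procedure, not 100 insert statements" argument can be sketched as follows, with Python and sqlite3 standing in for a PL/SQL package; the table names and the added validation/audit steps are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id INTEGER PRIMARY KEY, amount REAL)")
conn.execute("CREATE TABLE audit_log (order_id INTEGER, note TEXT)")

def insert_order(conn, order_id, amount):
    """The single entry point every caller uses for this insert.

    The validation and the audit write below could be added years after
    the fact: one routine changes, not every call site.
    """
    if amount <= 0:                       # validation step, added in one place
        raise ValueError("amount must be positive")
    conn.execute("INSERT INTO orders VALUES (?, ?)", (order_id, amount))
    conn.execute("INSERT INTO audit_log VALUES (?, ?)",  # audit step, likewise
                 (order_id, "created"))

insert_order(conn, 1, 99.50)
print(conn.execute("SELECT COUNT(*) FROM audit_log").fetchone()[0])  # 1
```

Callers never learn that the insert now does three things instead of one, which is the maintainability win being described.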
Tom Kyte
July 12, 2012 - 6:36 pm UTC

perfection