

Question and Answer

Chris Saxon

Thanks for the question, Ivan.

Asked: December 06, 2001 - 12:21 pm UTC

Last updated: September 11, 2018 - 10:08 am UTC

Version: 8.1.6

Viewed 50K+ times!

You Asked

Often I am sent only descriptions of the tables I need to create, plus some business logic to put into stored packages, but no flat files with which to load test data.
That is the reason I am thinking of a dynamic procedure which (depending on the column types in the table) would insert a number of test records.

I am thinking of using a PL/SQL table to hold the information from:
select column_id,column_name, data_type
from user_tab_columns
where table_name = upper(TableName)

to put it in something like:

create or replace procedure
create_test( TableName in varchar2, nRecords in number )
as
ExcStmt VARCHAR2(2000);
TestChar VARCHAR2(1);
TestDate DATE;
TestNum NUMBER;
i number;
…
Begin
TestChar := 'A';
TestDate := to_DATE('01012000', 'DDMMYYYY');
TestNum := 0;

for i in 1 .. nRecords
loop
for j in 1 .. max(column_id)
loop
-- preparing data for insert record
if data_type(column_id) = 'NUMBER' then
-- somehow
column_name(column_id) := TestNum + 1;
elsif data_type(column_id) = 'DATE' then
column_name(column_id) := TestDate + 1;
else
column_name(column_id) := TestChar;
end if;
ExcStmt := 'INSERT INTO '||TableName|| ... --record
EXECUTE IMMEDIATE ExcStmt;
end loop;
end loop;
…


and Tom said...

This'll work then. Instead of dbms_random, you could use your own values, but this is as good as anything else:

scott@TKYTE9I.US.ORACLE.COM> create or replace procedure clone( p_tname in varchar2, p_records in number )
2 authid current_user
3 as
4 l_insert long;
5 l_rows number default 0;
6 begin
7
8 execute immediate 'create table clone_' || p_tname ||
9 ' as select * from ' || p_tname ||
10 ' where 1=0';
11
12 l_insert := 'insert into clone_' || p_tname ||
13 ' select ';
14
15 for x in ( select data_type, data_length,
16 rpad( '9',data_precision,'9')/power(10,data_scale) maxval
17 from user_tab_columns
18 where table_name = 'CLONE_' || upper(p_tname)
19 order by column_id )
20 loop
21 if ( x.data_type in ('NUMBER', 'FLOAT' ))
22 then
23 l_insert := l_insert || 'dbms_random.value(1,' || x.maxval || '),';
24 elsif ( x.data_type = 'DATE' )
25 then
26 l_insert := l_insert || 'sysdate+dbms_random.value+dbms_random.value(1,1000),';
27 else
28 l_insert := l_insert || 'dbms_random.string(''A'',' || x.data_length || '),';
29 end if;
30 end loop;
31 l_insert := rtrim(l_insert,',') || ' from all_objects where rownum <= :n';
32
33 loop
34 execute immediate l_insert using p_records - l_rows;
35 l_rows := l_rows + sql%rowcount;
36 exit when ( l_rows >= p_records );
37 end loop;
38 end;
39 /

Procedure created.

scott@TKYTE9I.US.ORACLE.COM> exec clone( 'emp', 5 );

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.01
scott@TKYTE9I.US.ORACLE.COM> select * from clone_emp;

EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ------
4923 BrCyhfdHaQ mGSqkWMvy 8302 07-SEP-02 2765.67 89231.32 27
323 ImuBZCYDrt TdjoflvYE 9613 11-AUG-03 82158.44 34478.25 89
8773 jnTPtKkchC KzUezmTTL 8432 20-MAY-04 29909.7 23860.67 24
7374 aETDfeptSS ObVBgtnAP 9033 28-MAR-03 34012.77 92188.64 45
4530 MKGvauUMOQ GXwcRjGUn 51 07-AUG-04 84510.9 93629.66 79




Comments

Just to add...

Connor, December 07, 2001 - 4:52 pm UTC

Good ol' mod, trunc and rownum are great for creating data - all you need is a nice table to source it from (for example sys.source$).

insert into mytable
select rownum, mod(rownum,100), trunc(rownum/10)
from sys.source$
where rownum < ...

So you can tweak the frequency and distribution of the data - which helps when benchmarking optimizer decisions.
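
A minimal sketch of that idea (the table, column names, and the 100,000 row count are all made up): mod gives a uniform spread over a fixed set of values, while trunc produces runs of repeating values, so mixing the two lets you control both cardinality and clustering:

insert into mytable
select rownum                     id,        -- unique
       mod(rownum, 100)           code,      -- 100 distinct values, uniform
       trunc(rownum / 1000)       batch_no,  -- same value repeats 1000 times
       sysdate - mod(rownum, 365) load_date  -- dates spread over the last year
  from sys.source$
 where rownum <= 100000;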

Tom Kyte
December 07, 2001 - 6:37 pm UTC

definitely true -- there you need to participate in the cloning more closely -- here they seem to want to just say "copy the structure and fill 'er up with regular data please".

I myself use the mod/rownum trick..

100% match with what I need

Ivan Korac, December 10, 2001 - 3:19 am UTC

Even better, you demonstrated to us a very welcome usage of dbms_random.
My compliments


more complicated

Ivan, February 13, 2002 - 4:42 am UTC

Is it possible, when generating data values, to respect foreign key constraints? i.e. for EMP, because of the foreign key to DEPT, the value for the DEPTNO column should be randomly selected from the domain of DEPT.

Tom Kyte
February 21, 2002 - 4:32 pm UTC

Sure, just load up the primary key values of the DEPT table into a plsql table type and use dbms_random to index randomly into the array:

scott@ORA817DEV.US.ORACLE.COM> declare
2 type numArray is table of number index by binary_integer;
3
4 l_deptnos numArray;
5 begin
6 select deptno bulk collect into l_deptnos from dept;
7
8 for i in 1 .. 5
9 loop
 10 dbms_output.put_line( l_deptnos( dbms_random.value( 1, l_deptnos.count ) ) );
11 end loop;
12 end;
13 /
40
10
40
20
30

PL/SQL procedure successfully completed.



problems in clone

Enrique Ortiz, March 18, 2002 - 9:16 am UTC

Hi, I think the clone procedure is great, but I found a problem.

If you define a column as plain NUMBER in the source table, data_precision and data_scale (line 15) return NULL, and the line appended is 'dbms_random.value(1,)'.


Tom Kyte
March 18, 2002 - 9:26 am UTC

Yes, change

rpad( '9',data_precision,'9')/power(10,data_scale) maxval
to
nvl(rpad( '9',data_precision,'9')/power(10,data_scale),9999999999) maxval

in the select and that'll fix that up.
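
The underlying behaviour is easy to confirm: for a column declared as plain NUMBER, the dictionary stores no precision or scale at all (a sketch; the table name T is made up):

create table t ( a number, b number(10,2) );

select column_name, data_precision, data_scale
  from user_tab_columns
 where table_name = 'T'
 order by column_id;

COLUMN_NAME  DATA_PRECISION DATA_SCALE
------------ -------------- ----------
A
B                        10          2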

Test Data

Vijay.S, March 19, 2002 - 12:48 am UTC

GENERATE TEST DATA AUTOMATICALLY...

This will not handle tables that have referencing columns (foreign keys)..

Rgds

Flexibility of insert into select approach..,

Matt, March 20, 2003 - 8:36 pm UTC

I've been using the:

insert into mytable
select rownum, mod(rownum,100), trunc(rownum/10)
from sys.source$
where rownum < ...

approach as well. It seems to work well for simple data setups.

I want to generate data that looks like this:

ID ID_TYPE VERSION
1 1 1
1 1 2
1 1 3
. . .
. . .
. . .
1 1 10
1 2 1
1 2 2
1 2 3
. . .
. . .
. . .
1 2 10
1 3 1
1 3 2
1 3 3
. . .
. . .
. . .
1 3 10
2 1 1
2 1 2
2 1 3
. . .
. . .
. . .
2 1 10

That is, ID is a number, every ID has three ID_TYPE (1,2,3) and each ID,ID_TYPE combo has ten VERSIONS (1,2,...10)

How can I do this in one SQL statement? Is it possible?

I've come up with a select that looks like this:

select a.r
, b.s
, c.t
from (select rownum r from all_objects where rownum < 11 ) a
,(select rownum s from all_objects where rownum <= 4 ) b
,(select rownum t from all_objects where rownum <= 11 ) c
where rownum < 1000
order by 1,2,3

Which I think produces what I want:

r s t

1 1 1
1 1 2
1 2 1
1 2 2
1 3 1
2 1 1
2 1 2
2 2 1
2 3 1

I can then MOD, TRUNC, TO_DATE() +, these values to generate the real data set.

Is there a better way of doing this, or have I just answered my own question?


Tom Kyte
March 20, 2003 - 8:40 pm UTC

your query will take

1..10 "X" 1..4 "X" 1..11

which generates 440 rows -- and comes up with every combination of them...

I do it all of the time. Another technique (somewhat slow, but effective) is:

create or replace procedure gen_data( p_tname in varchar2, p_records in number )
authid current_user
as
l_insert long;
l_rows number default 0;
begin

dbms_application_info.set_client_info( 'gen_data ' || p_tname );
l_insert := 'insert /*+ append */ into ' || p_tname ||
' select ';

for x in ( select data_type, data_length,
nvl(rpad( '9',data_precision,'9')/power(10,data_scale),9999999999) maxval
from user_tab_columns
where table_name = upper(p_tname)
order by column_id )
loop
if ( x.data_type in ('NUMBER', 'FLOAT' ))
then
l_insert := l_insert || 'dbms_random.value(1,' || x.maxval || '),';
elsif ( x.data_type = 'DATE' )
then
l_insert := l_insert ||
'sysdate+dbms_random.value+dbms_random.value(1,1000),';
else
l_insert := l_insert || 'dbms_random.string(''A'',' ||
x.data_length || '),';
end if;
end loop;
l_insert := rtrim(l_insert,',') ||
' from all_objects where rownum <= :n';

loop
execute immediate l_insert using p_records - l_rows;
l_rows := l_rows + sql%rowcount;
commit;
dbms_application_info.set_module( l_rows || ' rows of ' || p_records, '' );
exit when ( l_rows >= p_records );
end loop;
end;
/



it does RANDOM data -- not within bounds like you have. What you did is right if you want all of the rows where r is between 1 and 10 combined with S from 1..4 and so on.
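
On releases that support it, a cartesian product of connect by level row generators does the same job without needing a big source table to drive the rownum trick (a sketch; the bounds follow the ID/ID_TYPE/VERSION layout asked for above):

select a.r id, b.s id_type, c.t version
  from (select level r from dual connect by level <= 10) a,
       (select level s from dual connect by level <= 3) b,
       (select level t from dual connect by level <= 10) c
 order by 1, 2, 3;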

Thanks for that.

Matt, March 20, 2003 - 10:27 pm UTC

Nice procedure.

I have read about using dbms_application_info as you do (I assume that this populates v$session_longops)

"dbms_application_info.set_module( l_rows || ' rows of ' || p_records, '' )"

I guess that this just has the same overhead as a simple insert?

Is this the only mechanism to monitor the progress of transactions like these? ie: It seems to me that you would need to consider this approach when designing the routine rather than being able to retro-fit this monitoring into existing code (which I may have to do soon...).



Tom Kyte
March 21, 2003 - 8:32 am UTC

it has so little overhead -- infinitely less than a simple insert. It modifies memory. It doesn't touch disk, generate undo, or create redo.


...
It seems to me that you would need to consider this approach when designing
the routine rather than being able to retro-fit this monitoring in existing code
.....

Bravo -- well said! I like that statement so much. Yes, you actually need to consider things like this when designing your application. I'm very very big into this -- instrumenting your code to death with tons of stuff like this (just like the database did for us) so you can audit performance, pinpoint issues -- all at the drop of a hat.

How many times I hear "It is slow, why?" -- well, where is it slow -- "I don't know, somewhere in the middle of this really big process" -- ok, turn on your tracing and timings and such so we can see where -- "huh? wadda mean".....
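
On the retro-fit point above: besides set_module and set_client_info, the same package exposes set_session_longops, which feeds v$session_longops and is purpose-built for progress reporting. A minimal sketch (the batch count and op_name are made up):

declare
    l_rindex  binary_integer := dbms_application_info.set_session_longops_nohint;
    l_slno    binary_integer;
begin
    for i in 1 .. 10
    loop
        -- ... load one batch of rows here ...
        dbms_application_info.set_session_longops(
            rindex    => l_rindex,
            slno      => l_slno,
            op_name   => 'gen_data TEST',
            sofar     => i,
            totalwork => 10,
            units     => 'batches' );
    end loop;
end;
/

While this runs, another session can watch SOFAR climb towards TOTALWORK in v$session_longops.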



Test data generation for clustered table.

Matt, March 25, 2003 - 10:50 pm UTC

Is there a speedy way to get bulk data into an index clustered table?

ict is the index clustered table, clustered on C1, C2, C3, C4

insert /*+ append */ into ict
select mrs.C1
, mrs.C2
, mrs.C3
, mrs.C4
, mrs.C5
, 'A' C6
, 999.99 C7
, a.r interval
, 1 C8
, 1 C9
from puser.my_driver mrs
, (select rownum r from all_objects where rownum <= 96) a

I have 360,000 rows in puser.my_driver. These took 1.5 minutes to load. The statement above ran for over an hour before I decided to investigate.

Thanks

Tom Kyte
March 26, 2003 - 7:25 am UTC

so you were loading 34,560,000 rows into this cluster....

no idea what the data looked like (is that hundreds of meg of data or 10s of gig of data)

no idea what the cluster key looks like -- how the cluster was sized...

makes it hard.

Schoolboy Error?

Matt, March 27, 2003 - 9:13 pm UTC

OK, I think my data load is roughly 600MB (average row length of 57 bytes, loading 120,000*96 rows -- the 120,000 comes from a predicate, which I failed to cut and paste, that restricts the data load).

That tablespace that this data was going into was defined thus:

CREATE TABLESPACE TEST_PARTITION_001
DATAFILE '/db/oradata/NDEVT/test_partition_001_N9DEVT_01.dbf' SIZE 512064K
AUTOEXTEND ON NEXT 512K MAXSIZE 1000M
EXTENT MANAGEMENT LOCAL
UNIFORM SIZE 512K
SEGMENT SPACE MANAGEMENT AUTO

But the datafile was then resized to 1000M (which should really have been 1024064K)

The cluster is defined like this:

CREATE CLUSTER ICT_C
("C1" NUMBER(10, 0)
, "C2" NUMBER(4, 0)
, "C3" DATE
, "C4" NUMBER(3, 0)
)
TABLESPACE "TEST_PARTITION_001"
/

CREATE INDEX ICT_C_IDX
ON CLUSTER "ICT_C"
TABLESPACE "TEST_PARTITION_001"
/

CREATE TABLE "ICT"
("C1" NUMBER(10, 0) NOT NULL ENABLE
, "C2" NUMBER(4, 0)
, "C3" DATE NOT NULL ENABLE
, "C4" NUMBER(3, 0) NOT NULL ENABLE
, interval NUMBER(4, 0) NOT NULL ENABLE
, "C5" NUMBER(12, 0) NOT NULL ENABLE
, "C6" VARCHAR2(10)
, "C7" NUMBER(12, 2)
, "C8" NUMBER(5, 0)
, "C9" NUMBER(5, 0))
CLUSTER "ICT_C"
("C1", "C2", "C3", "C4")
/

These are the only objects in this TS.

I am thinking that as I have not sized the objects they are defaulting to 512K extents and then dynamically extending. ie: 1200 dynamic space allocations.

The data dictionary changes (and redo) and the fact that I need the index enabled for the cluster is slowing this load down.

Do you agree?

Is there anything that I can do (apart from understanding LMT/ASSM and sizing the objects correctly!) that could speed this data load up?

Thanks!

Tom Kyte
March 28, 2003 - 8:41 am UTC

no, don't agree.  First, you cannot direct path load clusters (that append is meaningless).  So, this is the same as "insert into t select"

the 512k extents -- not really a big deal.  It is a locally managed tablespace, there are no data dictionary changes here -- it is bitmap updates.

I think it is at least partially due to the fact you haven't sized your cluster (using SIZE on the create cluster) and that the initial load of data isn't sorted by the cluster key.  

Given a value for C1, C2, C3, C4 what would you expect this query:


select sum(vsize(mrs.C1 ))+
      sum(vsize(mrs.C2))+
      sum(vsize(mrs.C3))+
      sum(vsize(mrs.C4))+
      sum(vsize(mrs.C5))+
      sum(vsize('A'))+
      sum(vsize(999.99))+
      sum(vsize(a.r))+
      sum(vsize(1))+
      sum(vsize(1))
from puser.my_driver mrs
   , (select rownum r from all_objects where rownum <= 96) a
where c1 = :c1 and c2 = :c2 and c3 = :c3 and c4 = :c4


Assuming that c1,c2,c3,c4 are unique in mrs -- we would get 96 rows for each cluster key and you said 57 bytes, so that is about 6k/key.  so, if I build a driver table:


ops$tkyte@ORA920> create table my_driver
  2  as
  3  select rownum c1, mod(rownum,9999) c2, created c3, mod(rownum,999) c4, rownum c5
  4    from big_table
  5   where rownum <= 120000
  6  /

Table created.


ops$tkyte@ORA920> CREATE CLUSTER ICT_C
  2    ("C1" NUMBER(10, 0)
  3     , "C2" NUMBER(4, 0)
  4     , "C3" DATE
  5     , "C4" NUMBER(3, 0)
  6    )
  7   size 6192
  8   TABLESPACE "TEST_PARTITION_001"
  9  /

Cluster created.


And it is also related to the "order" of the data.  A cluster differs big time from a HEAP table in that data has a physical location.  If you look at how Oracle itself uses a cluster in the data dictionary -- you'll find that all of the rows for a cluster key more or less go into the database "at the same time".  that is when clusters work best.  During a bulk load of a cluster -- you really want to try and get the data into the table in key order.  It is not that the keys must be sorted -- just that all or most of the rows for a given key go in at about the same time.  This results in the most efficient load as well as most tightly packed cluster possible (least amount of chaining).

I modified your query to help facilitate this.  first, I lost all_objects and just use the simple my_driver table twice.  Additionally -- I used an ordered hint in this very special case to tell Oracle "read the MY_DRIVER table and join the first row to the 96 rows, then read the second row from MY_DRIVER and join to the 96 rows and so on".  So the net result was that the cluster was loaded by key:


ops$tkyte@ORA920> delete from plan_table;

7 rows deleted.

ops$tkyte@ORA920> explain plan for
  2  insert into ict
  3  (c1,c2,c3,c4,c5,c6,c7,interval,c8,c9)
  4  select /*+ ORDERED */ mrs.C1 , mrs.C2 , mrs.C3 , mrs.C4 , mrs.C5 , 'A' C6 , 999.99 C7 , a.r interval , 1 C8 , 1 C9
  5  from my_driver mrs
  6     , (select rownum r from my_driver where rownum <= 96) a;

Explained.

ops$tkyte@ORA920> set echo off

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------------------------------------------

---------------------------------------------------------------------
| Id  | Operation             |  Name       | Rows  | Bytes | Cost  |
---------------------------------------------------------------------
|   0 | INSERT STATEMENT      |             |    11M|   461M|  6240K|
|   1 |  MERGE JOIN CARTESIAN |             |    11M|   461M|  6240K|
|   2 |   TABLE ACCESS FULL   | MY_DRIVER   |   120K|  3398K|    52 |
|   3 |   BUFFER SORT         |             |    96 |  1248 |  6240K|
|   4 |    VIEW               |             |    96 |  1248 |       |
|*  5 |     COUNT STOPKEY     |             |       |       |       |
|   6 |      TABLE ACCESS FULL| MY_DRIVER   |   120K|       |    52 |
---------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   5 - filter(ROWNUM<=96)

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------------------------------------------

Note: cpu costing is off

19 rows selected.

ops$tkyte@ORA920> set timing on
ops$tkyte@ORA920> insert into ict
  2  (c1,c2,c3,c4,c5,c6,c7,interval,c8,c9)
  3  select /*+ ORDERED */ mrs.C1 , mrs.C2 , mrs.C3 , mrs.C4 , mrs.C5 , 'A' C6 , 999.99 C7 , a.r interval , 1 C8 , 1 C9
  4  from my_driver mrs
  5     , (select rownum r from my_driver where rownum <= 96) a;

11520000 rows created.

Elapsed: 00:14:06.33


ops$tkyte@ORA920> select count(*) from user_extents where segment_name = 'ICT_C';

  COUNT(*)
----------
      1884



Now that was 14 minutes on a stock, $800 compaq desktop pc with an icky IDE single disk drive and 512meg of RAM running Linux.  I would assume your server would probably best that.


clusters are used to provide high performance retrievals -- they have necessary overhead on the load in order to do that.  They are more efficient on the buffer cache since all of the rows for a key will be on as few blocks as possible.  Don't be discouraged by a slow load time -- as the payback at query retrieve time can far outweigh any negative effects there.  Pay me once up front and reap the benefits on every query -- or pay me over and over with each query.




 

Simply Superb!!!!

A reader, March 28, 2003 - 1:07 pm UTC


Matt, March 30, 2003 - 11:07 pm UTC

Ok, I understand why I can't direct path load clusters.

I also appreciate that the space management is not such an issue.

I haven't had a chance to re-test my dummy data load with your suggested approach; it all seems to make sense though.

In reality the data loaded would always come in in cluster key order. I've only seen the /*+ ORDERED */ hint in some RBO tuning. It is interesting what you say about the data needing to go in at around the same time. In an earlier load I was using "order by" to generate the data set (before I realised that things were slow).

How does this approach differ from the /*+ ORDERED hint */?

Thinking about this I guess that this is more efficient: Driving the query with my_driver to achieve this is better than a huge sort of the end result set.

I analyzed a sample of live data to get the 57 byte average row length. I also tried your query which came out with 2592 bytes.

I assume that your query produces the bytes size of a set of 96 rows (why the sum() though?)?

This should be roughly 57*96 = 5472, which doesn't add up. Nope, I see it! The cluster key needs to be stored only once. So I assume that I can subtract 95 times the size required for the cluster key from 5472 and get your result. Errr, however, using your vsize() approach I calculate that each cluster key needs 14 bytes (which isn't enough to account for the difference). Either I am mistaken, or else there is something else going on here that I don't yet understand.

Can you shed any light?

Assuming all the above is correct I would need to size my cluster to be 2592 bytes per key, so each cluster entry has enough space for the 96 required rows.

However, I think I am slightly off track. Does the above make sense?


Tom Kyte
March 31, 2003 - 8:02 am UTC

I used ORDERED in this case to have the database read the BIG table with the cluster key and join each row in that table with the 96 rows in the other table.  It gave me all 96 rows for a cluster key all at once.  It did not sort the data, just processed the tables in the from clause in the ORDER i wanted.  If you 

set pagesize 100
set pause on
select ....  (the query with the ordered hint)

you'll see 96 rows with the same cluster key, then 96 rows with the next key and so on.  

The sum was to get the size of the data associated with a cluster key value -- I had where c1 = .. c2 = ... and so on in there.  It would have summed up the vsizes of all the of the columns associated with a cluster key.

Yes, I forgot to subtract out the cluster key in the calculation -- you are correct.


If you load up a couple of representative rows and query:

select avg(vsize(c1)+vsize(c2)+vsize(c3)+vsize(c4)+sz)
from 
( select c1, c2, c3, c4, sum( vsize(c5)+vsize(c6)+ ..... vsize(cN) ) sz
    from t
   group by c1, c2, c3, c4 )

eg:

AVG(VSIZE(C1)+VSIZE(C2)+VSIZE(C3)+VSIZE(C4)+SZ)
-----------------------------------------------
                                     1447.72682

ops$tkyte@ORA920> l
  1  select avg( vsize(c1)+vsize(c2)+vsize(c3)+vsize(c4) + sz )
  2    from (
  3  select c1, c2, c3, c4, sum(vsize(c5)+vsize(c6)+vsize(c7)+vsize(interval)+vsize(c8)+vsize(c9)) sz
  4    from (
  5  select *
  6    from (
  7  select /*+ ORDERED */ mrs.C1 , mrs.C2 , mrs.C3 , mrs.C4 , mrs.C5 , 'A' C6 , 999.99 C7 , a.r interval , 1 C8 , 1 C9
  8    from my_driver mrs
  9    , (select rownum r from my_driver where rownum <= 96) a
 10         )
 11         )
 12   group by c1, c2, c3, c4
 13*        )
ops$tkyte@ORA920> /

AVG(VSIZE(C1)+VSIZE(C2)+VSIZE(C3)+VSIZE(C4)+SZ)
-----------------------------------------------
                                     1447.72682


for the data I generated (YMMV)...

It is less of an issue if this is read mostly data that is bulk loaded -- or if the keys and related rows are all created more or less at the same time (eg: when you do a CREATE TABLE -- you load up a cluster with all of the data at once).  There, right sizing the cluster key size isn't as important since all of the values go in at once.  

If loading out of order -- it is much much more important.
 

Error

A reader, May 20, 2003 - 11:06 pm UTC

SQL>  CREATE TABLE TEST( X VARCHAR2(10));

Table created.

SQL> EXEC GEN_DATA('TEST',2);
insert /*+ append */ into TEST select dbms_random.string('A',10) from all_objects where rownum <= :N

BEGIN GEN_DATA('TEST',2); END;

*
ERROR at line 1:
ORA-00904: invalid column name
ORA-06512: at "JOHNDEV.GEN_DATA", line 33
ORA-06512: at line 1

--I added dbms_output.put_line to see the actual insert statement

The :N should be replaced by p_records, but that is not happening. Can you please guide me in finding out where I'm going wrong? I haven't changed your routine, except for adding the dbms_output.put_line.

Thanks


 

Tom Kyte
May 21, 2003 - 7:51 am UTC

no, it shouldn't -- :n is a bind variable.

it is most likely that dbms_random is not in your database. Did you check that out?


ops$tkyte@ORA817DEV> create or replace procedure gen_data( p_tname in varchar2, p_records in number )
  2  authid current_user
  3  as
  4      l_insert long;
  5      l_rows   number default 0;
  6  begin
  7
  8          dbms_application_info.set_client_info( 'gen_data ' || p_tname );
  9      l_insert := 'insert /*+ append */ into ' || p_tname ||
 10                  ' select ';
 11
 12      for x in ( select data_type, data_length,
 13                  nvl(rpad( '9',data_precision,'9')/power(10,data_scale),9999999999) maxval
 14                   from user_tab_columns
 15                  where table_name = upper(p_tname)
 16                  order by column_id )
 17      loop
 18          if ( x.data_type in ('NUMBER', 'FLOAT' ))
 19          then
 20              l_insert := l_insert || 'dbms_random.value(1,' || x.maxval || '),';
 21          elsif ( x.data_type = 'DATE' )
 22          then
 23              l_insert := l_insert ||
 24                    'sysdate+dbms_random.value+dbms_random.value(1,1000),';
 25          else
 26              l_insert := l_insert || 'dbms_random.string(''A'',' ||
 27                                         x.data_length || '),';
 28          end if;
 29      end loop;
 30      l_insert := rtrim(l_insert,',') ||
 31                    ' from all_objects where rownum <= :n';
 32
 33          dbms_output.put_line( l_insert );
 34      loop
 35          execute immediate l_insert using p_records - l_rows;
 36          l_rows := l_rows + sql%rowcount;
 37                  commit;
 38                  dbms_application_info.set_module( l_rows || ' rows of ' || p_records, '' );
 39          exit when ( l_rows >= p_records );
 40      end loop;
 41  end;
 42  /

Procedure created.

ops$tkyte@ORA817DEV> drop table test;

Table dropped.

ops$tkyte@ORA817DEV> create table test( x varchar2(10) );

Table created.

ops$tkyte@ORA817DEV> exec gen_data('TEST', 2 )
insert /*+ append */ into TEST select dbms_random.string('A',10) from all_objects where rownum <= :n

PL/SQL procedure successfully completed.

ops$tkyte@ORA817DEV> select * from test;

X
----------
\JlLWWKogS
IUnoXAtHOR

ops$tkyte@ORA817DEV> 

Bingo

A reader, May 21, 2003 - 8:57 am UTC

Yes you are right dbms_random was not installed

Now, when I'm trying to create the dbms_random package, I ran into the following question.

Why should I run it logged in as SYS? Can I do it logged in as any other user? And once it is created, how should I grant all the privileges to everyone, and the special admin option to the DBA user?

Usually how does it happen -- does SYS grant all the privs with admin option to a DBA user, who in turn grants them to others?

Thanks

Tom Kyte
May 21, 2003 - 9:39 am UTC

you have to install database packages as SYS or SYSDBA (or internal)


it is just the rules of the game.
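
For the record, on releases where dbms_random is missing, it is created by a script in the rdbms admin directory, run as SYS; historically that script was dbmsrand.sql, but verify the name against your release. A sketch:

$ sqlplus "sys as sysdba"
SQL> @?/rdbms/admin/dbmsrand.sql
SQL> create public synonym dbms_random for sys.dbms_random;
SQL> grant execute on dbms_random to public;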

generating data , joining randomly

Asim Naveed, September 06, 2003 - 1:49 am UTC

Hi,

I have a table locations whose columns are

Country
Region
City
loc_id

I have to generate data for testing. There will be 20 countries; each country will have 4 regions and each region will have 8 cities.

I used the SQL.

INSERT INTO locations (country, region, city, loc_id)
SELECT country, region, city, rownum FROM
(
SELECT 'Country'||rownum country
FROM big_table Where rownum < 21
),
(
SELECT 'Region'||rownum region
FROM big_table Where rownum < 5
),
(
SELECT 'City'||rownum city
FROM big_table Where rownum < 9
)

It works fine. But now I need to assign a random number of cities to each region. Currently it's assigning 8 cities to each region. What I want is: each region will have a random number of cities between 1 and 8, e.g. region1 has 2 cities, region2 has 5 cities, and so on.
Can the last inline view's rows be joined to the others randomly?

Is there a way to do this in SQL only?

Thanks



Tom Kyte
September 06, 2003 - 9:02 am UTC

ops$tkyte@ORA920> INSERT INTO locations (country, region, city, loc_id)
  2  select country, region, city,  loc_id
  3  from (
  4  SELECT country, region, city, rownum loc_id, rr, dbms_random.value(1,9) rand FROM
  5  (
  6  SELECT 'Country'||rownum country
  7  FROM big_table Where rownum < 21
  8  ),
  9  (
 10  SELECT 'Region'||rownum region
 11  FROM big_table Where rownum < 5
 12  ),
 13  (
 14  SELECT 'City'||rownum city, rownum RR
 15  FROM big_table Where rownum < 9
 16  )
 17  )
 18  where rr <= rand;
 
347 rows created.
 
ops$tkyte@ORA920>
ops$tkyte@ORA920> select min(cnt), max(cnt), avg(cnt)
  2    from (
  3  select country, region, count(*) cnt
  4    from locations
  5   group by country, region
  6         );
 
  MIN(CNT)   MAX(CNT)   AVG(CNT)
---------- ---------- ----------
         2          7     4.3375
 

great

Asim Naveed, September 06, 2003 - 5:35 pm UTC


Thank you very much. Although it solved my problem, I need something additional: the cities should be assigned consecutive numbers.

If a region has 3 cities, they should be numbered city1, city2, city3. It cannot be city2, city4, city7.

Is this possible?

Thanks

Tom Kyte
September 06, 2003 - 6:02 pm UTC

ops$tkyte@ORA920> INSERT INTO locations (country, region, city, loc_id)
  2  select country, region, city,  loc_id
  3  from (
  4  SELECT country, region, city, rownum loc_id, rr, dbms_random.value(1,9) 
rand FROM
  5  (
  6  SELECT 'Country'||rownum country
  7  FROM big_table Where rownum < 21
  8  ),
  9  (
 10  SELECT 'Region'||rownum region
 11  FROM big_table Where rownum < 5
 12  ),
 13  (
 14  SELECT 'City'||rownum city, rownum RR
 15  FROM big_table Where rownum < 9
 16  )
 17  )
 18  where rr <= rand;

look at the query 

where rr <= rand

rr = rownum from "city"
rr <= threshold

we are getting the first N cities, not a random N, the FIRST n

you already have it! 

it's not assigning consecutive city numbers

Asim Naveed, September 07, 2003 - 4:01 am UTC

I executed the following SQL on my computer; I added an ORDER BY at the end to show the problem to you.


select country, region, city, loc_id
from (
SELECT country, region, city, rownum loc_id, rr, dbms_random.value(1,9)
rand FROM
(
SELECT 'Country'||rownum country
FROM emp, emp, emp Where rownum < 21
),
(
SELECT 'Region'||rownum region
FROM emp, emp, emp Where rownum < 5
),
(
SELECT 'City'||rownum city, rownum RR
FROM emp, emp, emp Where rownum < 9
)
)
where rr <= rand
order by 1, 2, 3

The results this time on my computer were as follows. (Actually there were many rows; I am showing you only the first few.)



COUNTRY REGION CITY LOC_ID
---------------------------------------------------
Country1 Region1 City1 1
Country1 Region1 City2 81
Country1 Region1 City3 161
Country1 Region1 City5 321
Country1 Region1 City6 401
Country1 Region2 City1 21
Country1 Region2 City2 101
Country1 Region2 City3 181
Country1 Region2 City5 341
Country1 Region2 City7 501
Country1 Region3 City1 41
Country1 Region3 City2 121
Country1 Region3 City4 281
Country1 Region3 City5 361
Country1 Region4 City1 61
Country1 Region4 City2 141
Country1 Region4 City3 221
Country1 Region4 City4 301
Country10 Region1 City1 10
Country10 Region1 City2 90
Country10 Region1 City3 170
Country10 Region1 City4 250
Country10 Region1 City5 330
Country10 Region1 City6 410
Country10 Region2 City1 30
Country10 Region2 City2 110
Country10 Region2 City3 190
Country10 Region2 City8 590


Look at the cities of country1 , region1

Country1 Region1 City1 1
Country1 Region1 City2 81
Country1 Region1 City3 161
Country1 Region1 City5 321
Country1 Region1 City6 401

there are 5 cities but they are numbered as
city1, city2, city3, city5, city6. It should be
city1, city2, city3, city4, city5.

Also Look at the cities of country1 , region2

Country1 Region2 City1 21
Country1 Region2 City2 101
Country1 Region2 City3 181
Country1 Region2 City5 341
Country1 Region2 City7 501

there are 5 cities but they are numbered as
city1, city2, city3, city5, city7. It should be
city1, city2, city3, city4, city5.


Also look at the cities of country10 region2

Country10 Region2 City1 30
Country10 Region2 City2 110
Country10 Region2 City3 190
Country10 Region2 City8 590

They are numbered city1, city2, city3, city8. it should
be city1, city2, city3, city4.

It's actually cross joining all the inline views and then randomly excluding rows from the result, because dbms_random.value will potentially generate a different random number for each row, not for each (country, region).


Thanks.



Tom Kyte
September 07, 2003 - 6:11 pm UTC

yes, i guess that is true -- sorry about that.

you can use row_number() in the outer query instead of rownum in the inner query to build the city name.


ops$tkyte@ORA920LAP> select country, region, 'City' || row_number() over (partition by country, region order by loc_id) city, loc_id
  2    from (
  3  select country, region, loc_id
  4    from (
  5    SELECT country, region, city, rownum loc_id, rr, dbms_random.value(1,9)
  6  rand FROM
  7     (
  8     SELECT 'Country'||rownum country
  9     FROM emp, emp, emp Where rownum < 21
 10     ),
 11     (
 12     SELECT 'Region'||rownum region
 13     FROM emp, emp, emp Where rownum < 5
 14     ),
 15     (
 16     SELECT 'City'||rownum city, rownum RR
 17     FROM emp, emp, emp Where rownum < 9
 18     )
 19     )
 20     where rr <= rand
 21     )
 22  order by 1, 2, 3
 23  /

COUNTRY    REGION     CITY           LOC_ID
---------- ---------- ---------- ----------
Country1   Region1    City1               1
Country1              City2              81
Country1              City3             161
Country1              City4             241
Country1              City5             321

Country1   Region2    City1              21
Country1              City2             101
Country1              City3             181
Country1              City4             261
Country1              City5             421
 

i think i solved it

Asim Naveed, September 07, 2003 - 4:48 am UTC

I just made a little change and now it's working perfectly.
But please give me a better solution if possible
here is the final query



select country, region, 'City'||
ROW_NUMBER() OVER (PARTITION BY country, region order by city) city, loc_id
from (
SELECT country, region, city, rownum loc_id, rr, dbms_random.value(1,9)
rand FROM
(
SELECT 'Country'||rownum country
FROM emp, emp, emp Where rownum < 21
),
(
SELECT 'Region'||rownum region
FROM emp, emp, emp Where rownum < 5
),
(
SELECT 'City'||rownum city, rownum RR
FROM emp, emp, emp Where rownum < 9
)
)
where rr <= rand
order by 1, 2, 3


Now deleting randomly

Asim Naveed, September 10, 2003 - 6:21 am UTC

Thanks for your valuable answers. I have another question
now.

I generated a lot of data in my fact table, which has
several dimensions -- about 12 million records. Now I want
to randomly delete 9 million records from the table.

Thanks again for your help.

Tom Kyte
September 10, 2003 - 7:34 pm UTC

"randomly delete?????"

what possible use could that be. sorry, but I find this an "amazing" request.

no idea what you could be possibly doing.

Randomly delete rows from a table

Asim Naveed, September 11, 2003 - 6:04 am UTC

Actually we are developing a front-end B.I. application.
To test the performance of that application, I have been
assigned the task of generating large volumes of data
automatically in the data warehouse. The data should be
generated in such a way that it looks like real data, not
computer-generated data. For example, if Quarter 1-1998
has sales of 300,000, the Quarter 2-1998 sales should
be around 300,000.

Therefore I generated the data using SELECT .... FROM bigTable ....... and dbms_random.value. I generated
about 12 million rows in my fact table. The data is
generated randomly and it looks like real data, not
computer-generated data. Up to here it is all fine.
But when I started building materialized views for the
aggregations, it took a very long time (because of my computer's small RAM and slow CPU). So I decided to cut the
data down from 12 million to only 3 million rows. But I still
want the data to look random and real.

If I use DELETE FROM sales WHERE time_id = '1'
or DELETE FROM sales WHERE product_id = 4, the data
will show that there are no sales on a particular day
or for a particular product, respectively. So I thought
it would be good to randomly delete 9 million rows.

Maybe the structure of the table can help decide on
a strategy for randomly deleting rows, so here it
is:

SQL> desc sales
 Name                       Null?    Type
-------------------------- -------- -------- 
SALES_ID                   NOT NULL NUMBER(10)
MARKET_ID                  NOT NULL NUMBER(10)
PRODUCT_ID                 NOT NULL NUMBER(10)
TIME_ID                    NOT NULL NUMBER(10)
SALES_CHANNEL_ID           NOT NULL NUMBER(10)
SALESMAN_ID                NOT NULL NUMBER(10)
SALES                      NOT NULL NUMBER(11,2)
SALES_QUANTITY             NOT NULL NUMBER(11,2)
TODATE                     NOT NULL VARCHAR2(10)
CAMPAIGN_COST              NOT NULL NUMBER(11,2)
 

Tom Kyte
September 11, 2003 - 8:43 am UTC

maybe you just drop the table and rerun your data generation program to regenerate it with just 3 million rows.

maybe you use the SAMPLE clause and do a create table as select with a 25% random sample -- drop the big table and rename the small, new table.
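The SAMPLE-clause approach might be sketched like this (table name and percentage are illustrative; SAMPLE picks an approximately 25% random subset of rows):

```sql
-- build a ~25% random sample of SALES in a new table
create table sales_small as
  select * from sales sample (25);

-- swap it in for the original
drop table sales;
rename sales_small to sales;
```

Any indexes, constraints and grants on the original table would have to be recreated after the rename.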



Lookup

Darryl, April 27, 2004 - 3:48 am UTC

Re: Feb 13, 2002 update - looking up a value in a reference table for RI purposes.

How do you substitute a value from the array into the insert statement?

If I code:
l_deptnos( dbms_random.value( 1, l_deptnos.count ) )
in place of a call to dbms_random... in the insert statement I get a run-time error saying:
ORA-00904: "L_DEPTNOS": invalid identifier

The look-up works ok when used as an argument of dbms_output.put_line - but I need help getting it to work inside the insert statement.

Thanks.

Tom Kyte
April 28, 2004 - 11:55 am UTC

ops$tkyte@ORA9IR2> create table t ( x int );
 
Table created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> declare
  2          type numArray is table of number index by binary_integer;
  3
  4          l_deptnos  numArray;
  5                  l_cnt      number;
  6  begin
  7          select deptno bulk collect into l_deptnos from scott.dept;
  8
  9                  l_cnt := l_deptnos.count;
 10          for i in 1 .. 5
 11          loop
 12                   insert into t values ( l_deptnos( dbms_random.value( 1, l_cnt ) ) );
 13          end loop;
 14  end;
 15  /
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2> select * from t;
 
         X
----------
        20
        10
        30
        30
        10



do you have a counter example to work from? 

Lookup

Darryl, April 28, 2004 - 8:47 pm UTC

Thanks, this helped to pinpoint that the problem is related to building the insert statement dynamically:

ora1> create table t ( x int );

Table created.

ora1> declare type numArray is table of number index by binary_integer;
2
3 l_deptnos numArray;
4 l_cnt number;
5 l_insert long;
6 begin
7 select deptno bulk collect into l_deptnos from scott.dept;
8 l_cnt := l_deptnos.count;
9
10 l_insert := 'insert into t' ||
11 ' select' ||
12 ' l_deptnos( dbms_random.value( 1, l_cnt ) )' ||
13 ' from all_objects' ||
14 ' where rownum <= 5'
15 ;
16
17 execute immediate l_insert;
18 commit;
19 end;
20 /
declare type numArray is table of number index by binary_integer;
*
ERROR at line 1:
ORA-00904: "L_DEPTNOS": invalid identifier
ORA-06512: at line 17


Can the lookup array be used in a dynamic statement, or will I have to revert to one-shot insert statements in order to do the lookup?

Thanks,
Darryl.

Tom Kyte
April 29, 2004 - 7:23 am UTC

ahh -- we are comparing elephants to rocks.  And there is nothing to compare!!!


in dynamic sql, you have to BIND inputs.  the plsql variables are not in scope in the execute immediate, we must pass them to that "function" (think of execute immediate as a function -- no function would be able to 'see' your variables)

Also, native dynamic sql can only see SQL types -- never PLSQL types as binds.

And we cannot subscript a nested table in sql.


so, you would fill an array with the values and forall i insert that array.


ops$tkyte@ORA9IR2> declare
  2      type numArray is table of number;
  3      l_deptnos           numArray;
  4      l_deptnos_to_insert numArray := numArray();
  5      l_rows              number := 5;
  6  begin
  7      select deptno bulk collect into l_deptnos from scott.dept;
  8
  9      l_deptnos_to_insert.extend(l_rows);
 10      for i in 1 .. l_rows
 11      loop
 12          l_deptnos_to_insert(i) := l_deptnos( dbms_random.value(1,l_deptnos.count) );
 13      end loop;
 14
 15      forall i in 1 .. l_rows
 16          execute immediate
 17         'insert into t values ( :deptno )' using l_deptnos_to_insert(i);
 18  end;
 19  /
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select * from t;
 
         X
----------
        20
        20
        30
        40
        30
 

 

Thanks for explaining the scope of PLSQL variables

Darryl, April 29, 2004 - 8:02 pm UTC


generate test data

lizhuohua, May 12, 2004 - 11:52 am UTC

Hello Tom,
  I want to generate some test data.
Look at this:
SQL> create table teacher(
  2  tno varchar2(20),
  3  name varchar2(30));

Table created

SQL> create table student(
  2  sno varchar2(20),
  3  name varchar2(30));

Table created

SQL> create table TEACHER_STD
  2  (
  3    TNO VARCHAR2(20),
  4    SNO VARCHAR2(20)
  5  );

Table created

SQL> insert into teacher
  2    select 't'||rownum, object_name
  3      from all_objects
  4     where object_name > 'A' and rownum <= 1000;

1000 rows inserted

SQL> insert into student
  2    select 's'||rownum, object_name
  3      from all_objects
  4     where object_name > 'A' and rownum <= 10000;

10000 rows inserted

SQL> commit;

Commit complete

sql1:
select tno,sno,a.r r1,b.r r2
  from (select tno, round(dbms_random.value(1, 50)) r
          from teacher) a,
       (select sno, round(dbms_random.value(1, 50)) r
          from student) b
 where a.r = b.r;
sql2:
select tno,sno,a.r r1,b.r r2
  from (select tno, round(dbms_random.value(1, 50)) r
          from teacher where rownum<=1000) a,
       (select sno, round(dbms_random.value(1, 50)) r
          from student where rownum<=10000) b
 where a.r = b.r;

Most of the time I get zero rows;
sometimes I get 10,000,000 rows, and in the result
a.r <> b.r, when I use sql1.
(
 The rows I get like this:
    tno    sno    r1    r2
    t1    s1    4    5
    t2    s1    2    3
    t3    s1    3    3
    t4    s1    1    4
    t5    s1    5    1
......
)
Sql2 works as expected (I get about 200,000 rows).

Can you please explain why the first sql doesn't work?

Thanks in advance
Lizhuohua 

Tom Kyte
May 13, 2004 - 8:03 am UTC

select tno,sno,a.r r1,b.r r2
from (select tno, round(dbms_random.value(1, 50)) r
from teacher) a,
(select sno, round(dbms_random.value(1, 50)) r
from student) b
where a.r = b.r;

is the same as:

select a.tno, round(dbms_random.value(1,50)) r1,
       b.sno, round(dbms_random.value(1,50)) r2
  from teacher a, student b
 where round(dbms_random.value(1,50)) = round(dbms_random.value(1,50))
/


that is why -- the odds of the where clause being satisfied are near "zero".


You have attributed to SQL some linear, procedural processing -- that it does not obey. SQL is very set oriented. You can write whatever you want -- but it will rewrite it in an equivalent fashion to process it.

Inline views are not necessarily "materialized off to the side" and used.

However, when you added ROWNUM in there, you changed the game -- I use that technique (if you have Effective Oracle By Design -- i cover it in some depth) to tune a query from time to time. By using ROWNUM, Oracle did materialize the results off to the side -- fixing the dbms_random values for each row -- and then joined.
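A sketch of another way to get the same effect, using subquery factoring (the /*+ materialize */ hint is undocumented but commonly used for exactly this -- fixing each random value once per row before the join; the ROWNUM technique remains the supported route):

```sql
with a as (
  select /*+ materialize */ tno, round(dbms_random.value(1,50)) r
    from teacher
),
b as (
  select /*+ materialize */ sno, round(dbms_random.value(1,50)) r
    from student
)
select tno, sno, a.r r1, b.r r2
  from a, b
 where a.r = b.r;
```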

Thanks

lizhuohua, May 13, 2004 - 10:56 am UTC

Tom,
Thanks for the great explanation.
I think I understand how the server process my sql now.
I have been waiting for your book Effective Oracle By Design to be translated into Chinese for several months ;)

Best Regards

Amazing

Tony, June 04, 2004 - 10:31 pm UTC

Hi, I have two questions

Question1: Suppose I want to insert data into test then

create table test(c1 number(4) primary key,c2 varchar2(20),c3 varchar2(25),c4 date,c5 number(5,3));

SQL> exec clone( 'test',3);
BEGIN clone( 'test',3); END;

*
ERROR at line 1:
ORA-00936: missing expression
ORA-06512: at "SCOTT.CLONE", line 34
ORA-06512: at line 1

while at the very next moment

SQL> exec clone( 'emp', 5 );

PL/SQL procedure successfully completed.

so what's wrong with the test table

Question2:

Now suppose I want to insert some test data into the table within a range, especially for date columns.

 exec clone( 'emp', 5, '1-jul-2004', '31-jul-2004' )
Arguments:

table_name    -- the table to be populated with test data
no_of_rows    -- the number of rows of test data
lower_date    -- the lowest date for the date columns
highest_date  -- the highest date for the date columns

while numeric and character data will be generated randomly.
 

Tom Kyte
June 05, 2004 - 8:47 am UTC

1) ops$tkyte@ORA9IR2> create table test(c1 number(4) primary key,c2 varchar2(20),c3 varchar2(25),c4
  2  date,c5 number(5,3));
 
Table created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> exec gen_data( 'TEST', 5 );
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2> exec clone( 'test', 3 )
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2> select * from clone_test;
 
        C1 C2                   C3                        C4                C5
---------- -------------------- ------------------------- --------- ----------
      1085 IwxPRkQbNUNevniexfek weZuELTMCMMBWRqrsEUGkrTdl 10-MAY-05     85.896
      1656 VasXzANHQbTxdcklDVnr CkhXyNhYqnNhtEZMMOehRLqhx 08-AUG-05     91.128
      8022 RcwCZgUoavqnboKgtJqr PTVheZAbDSofKUtNtBifUCApT 01-FEB-07     10.242
 


worked fine for me, perhaps you tweaked the code??


2) you'd have to modify clone to do so (eg: write code)
 

Any SQL soln for this ?

A reader, June 27, 2004 - 6:06 am UTC

Hello Sir.
I have a table TEST_DATA (
ID NUMBER,
TYPE VARCHAR2 (1),
START_DATE DATE NOT NULL,
END_DATE DATE,
AMT NUMBER);
For every ID I need to generate the dates where

types A and B mismatch on the AMT,

starting with the min start date (1/6/2000) to the max end date (6/22/2004) for that ID.

SELECT * FROM TEST_DATA

ID TYPE START_DATE END_DATE AMT
--------------------------------------- ---- --------------------- --------------------- ---------------------------------------
1 A 7/5/2003 12/27/2003 122
1 A 1/3/2001 12/27/2001 119
1 A 12/28/2002 7/4/2003 122
1 A 12/28/2003 6/22/2004 122
1 A 12/28/2001 12/27/2002 122
1 B 12/29/2001 12/27/2003 122
1 B 1/3/2001 12/28/2001 119
1 B 1/6/2000 1/2/2001 115
1 B 12/28/2003 6/22/2004 122
9 rows selected

so the output must list, for a given id, every date
on which the amounts mismatch.

If one type does not have a range entry (say type "A" in this case), just generate the dates from type "B"'s
start date to end date for that range of "B", listing
the amt for type A as null and whatever the amt is for type B:
the prices of type A as null and what ever the amt for type B

id date amt_for_A amt_for_B
1 1/6/2000 null 115
1 1/7/2000 null 115
: : : :
: : : :
1 1/2/2001 null 115
1 12/28/2001 112 119


CREATE TABLE TEST_DATA (
ID NUMBER,
TYPE VARCHAR2 (1),
START_DATE DATE NOT NULL,
END_DATE DATE,
AMT NUMBER);

INSERT INTO TEST_DATA ( ID, TYPE, START_DATE, END_DATE, AMT ) VALUES (
'1', 'A', TO_Date( '07/05/2003 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'), TO_Date( '12/27/2003 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM')
, '122');
INSERT INTO TEST_DATA ( ID, TYPE, START_DATE, END_DATE, AMT ) VALUES (
'1', 'A', TO_Date( '01/03/2001 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'), TO_Date( '12/27/2001 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM')
, '119');
INSERT INTO TEST_DATA ( ID, TYPE, START_DATE, END_DATE, AMT ) VALUES (
'1', 'A', TO_Date( '12/28/2002 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'), TO_Date( '07/04/2003 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM')
, '122');
INSERT INTO TEST_DATA ( ID, TYPE, START_DATE, END_DATE, AMT ) VALUES (
'1', 'A', TO_Date( '12/28/2003 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'), TO_Date( '06/22/2004 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM')
, '122');
INSERT INTO TEST_DATA ( ID, TYPE, START_DATE, END_DATE, AMT ) VALUES (
'1', 'A', TO_Date( '12/28/2001 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'), TO_Date( '12/27/2002 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM')
, '122');
INSERT INTO TEST_DATA ( ID, TYPE, START_DATE, END_DATE, AMT ) VALUES (
'1', 'B', TO_Date( '12/29/2001 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'), TO_Date( '12/27/2003 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM')
, '122');
INSERT INTO TEST_DATA ( ID, TYPE, START_DATE, END_DATE, AMT ) VALUES (
'1', 'B', TO_Date( '01/03/2001 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'), TO_Date( '12/28/2001 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM')
, '119');
INSERT INTO TEST_DATA ( ID, TYPE, START_DATE, END_DATE, AMT ) VALUES (
'1', 'B', TO_Date( '01/06/2000 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'), TO_Date( '01/02/2001 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM')
, '115');
INSERT INTO TEST_DATA ( ID, TYPE, START_DATE, END_DATE, AMT ) VALUES (
'1', 'B', TO_Date( '12/28/2003 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'), TO_Date( '06/22/2004 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM')
, '122');
COMMIT;



Tom Kyte
June 27, 2004 - 11:26 am UTC

ops$tkyte@ORA9IR2> select id,
  2         start_date+r,
  3         max( decode( type, 'A', amt )) A_amt,
  4         max( decode( type, 'B', amt )) B_amt
  5    from test_data,
  6         (select rownum-1 r from all_objects where rownum<=1000)
  7   where r <= (end_date-start_date+1)
  8   group by id, start_date+r
  9  having max( decode( type, 'A', amt )) <> max( decode( type, 'B', amt ))
 10         or
 11         max( decode( type, 'A', amt )) is null and max( decode( type, 'B', amt )) is not null
 12         or
 13         max( decode( type, 'A', amt )) is not null and max( decode( type, 'B', amt )) is null
 14   order by 2
 15  /
 
        ID START_DAT      A_AMT      B_AMT
---------- --------- ---------- ----------
         1 06-JAN-00                   115
         1 07-JAN-00                   115
         1 08-JAN-00                   115
         1 09-JAN-00                   115
         1 10-JAN-00                   115
         1 11-JAN-00                   115
         1 12-JAN-00                   115

.....
         1 31-DEC-00                   115
         1 01-JAN-01                   115
         1 02-JAN-01                   115
         1 28-DEC-01        122        119
 
364 rows selected.


change rownum<=1000 to be the max "spread" between end/start_date at least (and make sure the table you select it from has at least that many rows!  or use a pipelined function to generate as many rows as you need) 
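The pipelined-function alternative mentioned above could be sketched as follows (the type and function names are made up here; a schema-level collection type is required):

```sql
create or replace type num_tab as table of number;
/
create or replace function gen_rows( p_n in number )
    return num_tab
    pipelined
as
begin
    -- emit the numbers 1 .. p_n, one row at a time
    for i in 1 .. p_n
    loop
        pipe row( i );
    end loop;
    return;
end;
/

-- e.g. 10,000 day offsets, independent of the size of ALL_OBJECTS:
select column_value - 1 r from table( gen_rows(10000) );
```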

Any Sql soln DB version 9.2

A reader, June 27, 2004 - 6:07 am UTC

Sorry forgot the db version

Tuning scenario

anurag, November 16, 2004 - 8:49 am UTC

Hi! Can you guide me, or give me code, to simulate a database with tuning problems like row chaining, row migration, intensive I/O, frequent ORA-01555 errors and more, since I have to appear for an interview next week and I wish to do as much realistic practice as I can.

best regards

Tom Kyte
November 16, 2004 - 12:58 pm UTC

read on - search this site, it is what it is all about.

but, in a week -- forget it, read the concepts guide. if you read that, retain 10% of it, you'll already know 90% more than most

can u refer....

anurag, November 17, 2004 - 11:22 pm UTC

thanks ... can you recommend any useful links or books in this regard for a quick walk-through.

Tom Kyte
November 18, 2004 - 10:23 am UTC

umm

the CONCEPTS GUIDE


free
on otn
a must read
read it, retain 10% of it, you'll know 90% more than most.

Creating test tables on the fly

A reader, July 18, 2005 - 12:01 am UTC

Tom

I remember reading threads on your website wherein you created test tables and populated them at the same time (CTAS).

For example, there was a thread where you inserted a row at a particular position in a table. You first inserted it at the top, then in the middle, and then at the end.

I just can't find that question.

Do you remember it? I recall it having something to do with EXISTS.

Thanks




Tom Kyte
July 18, 2005 - 7:58 am UTC

there are over 29,000 questions and answers in my database.  Sorry but that one is eluding me right now.

but do you mean something like this:

ops$tkyte-ORA9IR2> create table t
  2  ( x int,
  3    a char(2000),
  4    b char(2000),
  5    c char(2000),
  6    d varchar2(2000)
  7  ) tablespace &1;
old   7: ) tablespace &1
new   7: ) tablespace USERS
 
Table created.
 
ops$tkyte-ORA9IR2> pause
 
ops$tkyte-ORA9IR2>
ops$tkyte-ORA9IR2> insert into t (x) values (1);
 
1 row created.
 
ops$tkyte-ORA9IR2> insert into t (x,a) values (2,'x');
 
1 row created.
 
ops$tkyte-ORA9IR2> insert into t (x,a,b,c,d) values (3,'a', 'y', 'z', rpad('x',500,'x') );
 
1 row created.
 
ops$tkyte-ORA9IR2> insert into t (x,d) values (4,rpad('x',725,'x') );
 
1 row created.
 
ops$tkyte-ORA9IR2> pause
 
ops$tkyte-ORA9IR2>
ops$tkyte-ORA9IR2> clear screen
 
ops$tkyte-ORA9IR2> select dbms_rowid.rowid_block_number(rowid), x from t;
                                                                                                    
DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID)          X
------------------------------------ ----------
                                 143          1
                                 143          2
                                 143          4
                                 144          3
                                                                                                    
ops$tkyte-ORA9IR2> commit;
                                                                                                    
Commit complete.

<b>1,2 >>>4<<<<, 3  'out of order', because rows in heaps have no order</b>
                                                                                                    
ops$tkyte-ORA9IR2> pause
                                                                                                    
ops$tkyte-ORA9IR2>
ops$tkyte-ORA9IR2> delete from t where x = 2;
                                                                                                    
1 row deleted.
                                                                                                    
ops$tkyte-ORA9IR2> commit;
                                                                                                    
Commit complete.
                                                                                                    
ops$tkyte-ORA9IR2> insert into t (x) values (5);
                                                                                                    
1 row created.
                                                                                                    
ops$tkyte-ORA9IR2> commit;
                                                                                                    
Commit complete.
                                                                                                    
ops$tkyte-ORA9IR2> pause
                                                                                                    
ops$tkyte-ORA9IR2> clear screen
                                                                                                    
                                                                                                    
ops$tkyte-ORA9IR2> select dbms_rowid.rowid_block_number(rowid), x from t;
                                                                                                    
DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID)          X
------------------------------------ ----------
                                 143          1
                                 143          4
                                 143          5
                                 144          3
                                                                                                    
ops$tkyte-ORA9IR2> pause

<b>now even more "out of order"</b> 

generate insert statements, dynamic record

A reader, January 13, 2006 - 1:24 pm UTC

Hi

I need to write a package or procedure to generate insert statements, for example generate insert statements for EMP table I would use this

select 'insert into emp values ('
|| empno
||', '
|| '''' || ename || ''''
|| ', '
|| '''' || job || ''''
|| ', '
|| mgr
|| ', '
|| '''' || hiredate || ''''
|| ', '
|| sal
|| ', '
|| '''' || nvl(comm, '') || ''''
||', '
|| deptno
|| ');'
from emp;

and this generates

insert into emp values (7369, 'SMITH', 'CLERK', 7902, '19801217 00:00:00', 800, '', 20);
insert into emp values (7499, 'ALLEN', 'SALESMAN', 7698, '19810220 00:00:00', 1600, '300', 30);
insert into emp values (7521, 'WARD', 'SALESMAN', 7698, '19810222 00:00:00', 1250, '500', 30);

I am trying to do this using a PL/SQL procedure which takes a parameter, the table name, and generates the insert statements by querying the data dictionary. I can generate

select 'insert into emp values ('
|| empno
||', '
|| '''' || ename || ''''
|| ', '
|| '''' || job || ''''
|| ', '
|| mgr
|| ', '
|| '''' || hiredate || ''''
|| ', '
|| sal
|| ', '
|| '''' || nvl(comm, '') || ''''
||', '
|| deptno
|| ');'
from emp;

But I want to execute this using dynamic SQL, and I don't know how to output the result set (the insert statements) dynamically, because I don't know a way to store the columns dynamically in a record. Do you know if this is possible?

I have been trying for two days and have almost given up; I am thinking of just generating the select statement, running it, and then using a spool to capture the insert statements :-(


Tom Kyte
January 13, 2006 - 1:43 pm UTC

ugh, I cannot tell you how much I don't like this idea.


see
</code> http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:68212348056 <code>

and you can use describe_columns to get the column names - using that technique I use to create a CSV file, you can create this file.

not only insert

A reader, January 13, 2006 - 2:05 pm UTC

Hi

I am not only going to generate insert statements, I also will have to generate delete and update statements. Also the number of rows used to generate these statements is small, 10 to 20 rows. I will only deal with varchar, date and number datatypes (else == ERROR)

Just wondering why this is such a bad idea?

Tom Kyte
January 13, 2006 - 2:08 pm UTC

all literal SQL - no binds.



ah I am only running once a week

A reader, January 13, 2006 - 5:03 pm UTC

Hi

I am going to run this once a week, or maybe twice, no more. The dynamic record, I guess, is not going to work; I have read all the docs and can't find such a function or trick to do it.

Do you think a better way would be to just generate the query, then run the query + spool? (the ''' quoting is becoming quite a nightmare too :-) )

Tom Kyte
January 15, 2006 - 3:20 pm UTC

you can do this using dbms_sql - it is just code, you can turn all ' into '' as you write the data out.

interesting, the proc to generate insert statements

A reader, January 16, 2006 - 4:26 pm UTC

Interesting idea. I have been looking for a way to generate insert statements for the rows of a specific table; until now I have been doing it with Quest Toad.

How would DBMS_SQL fit into this procedure though, apart from the describe part?

I think this is not that simple, since we would have to deal with NULL values in columns of number, varchar2, char or date data types!

To the reader who asked the question: it would be interesting to see the pseudo code -- mind sharing?

Tom Kyte
January 16, 2006 - 8:56 pm UTC

dbms_sql is used to

o open a select * from T
o describe the column names/datatypes
o using the describe information - create an insert into t ( c1, c2, c3, ... ) statement
loop
o dynamically fetch a row
o retrieve each of the columns into a number, or a date, or a string printing
them out using utl_file, eg you might retrieve a date and print out:

utl_file.put( f, 'to_date(''' || to_char(l_date,'yyyymmddhh24miss') ||
                 ''', ''yyyymmddhh24miss''),' );

(eg: the resulting string looks like:

to_date( '20060101122344', 'yyyymmddhh24miss' ),

very much like the dump_csv routine I pointed to already.


how to deal with NULL

A reader, January 17, 2006 - 3:38 am UTC

Hi

The dump_csv is very useful. I would like to generate insert statements as well, but as the previous poster mentioned, how do we deal with columns which can have NULLs?

With dump_csv we don't have this problem, since ", ," is a NULL -- no problem with SQL*Loader.

Tom Kyte
January 17, 2006 - 8:50 am UTC

you are, well, writing code are you not?

therefore, you can do *anything* at all *you want to do*. The world is your playground....

You should be able to figure out how to print out

a) data when data is not null
b) the word NULL when it is....


Ok, here is your first cut, refine as needed:


create or replace procedure dump_table_to_csv( p_tname    in varchar2,
                                               p_dir      in varchar2,
                                               p_filename in varchar2 )
authid current_user  /* <<<== if you want... runs as "invoker" and runs with ROLES */
is
    l_output      utl_file.file_type;
    l_theCursor   integer default dbms_sql.open_cursor;
    l_columnValue varchar2(4000);
    l_status      integer;
    l_query       varchar2(1000)
                  default 'select * from ' || p_tname;
    l_colCnt      number := 0;
    l_separator   varchar2(2);
    l_descTbl     dbms_sql.desc_tab;
    l_text        long;
begin
    l_output := utl_file.fopen( p_dir, p_filename, 'w' );
    execute immediate
        'alter session set nls_date_format=''yyyymmddhh24miss'' ';
    utl_file.put_line( l_output, 'alter session set nls_date_format = ''yyyymmddhh24miss'';' );
    utl_file.put_line( l_output, 'alter session set cursor_sharing=force;' );

    dbms_sql.parse( l_theCursor, l_query, dbms_sql.native );
    dbms_sql.describe_columns( l_theCursor, l_colCnt, l_descTbl );

    l_text := 'insert into ' || p_tname || ' (';
    for i in 1 .. l_colCnt
    loop
        l_text := l_text || l_separator || '"' || l_descTbl(i).col_name || '"';
        dbms_sql.define_column( l_theCursor, i, l_columnValue, 4000 );
        l_separator := ', ';
    end loop;
    l_text := l_text || ') values (';

    l_status := dbms_sql.execute(l_theCursor);

    while ( dbms_sql.fetch_rows(l_theCursor) > 0 )
    loop
        l_separator := '';
        utl_file.put( l_output, l_text );
        for i in 1 .. l_colCnt
        loop
            dbms_sql.column_value( l_theCursor, i, l_columnValue );
            if ( l_columnValue is not null )
            then
                utl_file.put( l_output, l_separator || '''' || replace(l_columnValue, '''', '''''' ) || '''' );
            else
                utl_file.put( l_output, l_separator || 'NULL' );
            end if;
            l_separator := ', ';
        end loop;
        utl_file.put_line( l_output, ');' );
    end loop;
    dbms_sql.close_cursor(l_theCursor);
    utl_file.fclose( l_output );

    execute immediate 'alter session set nls_date_format=''dd-MON-yy'' ';
exception
    when others then
        execute immediate 'alter session set nls_date_format=''dd-MON-yy'' ';
        raise;
end;
/





drop table emp;
create table emp as select * from scott.emp;
exec dump_table_to_csv( 'EMP', 'MY_DIR', 'tkyte.emp' );
host cat /tmp/tkyte.emp
delete from emp;
commit;
@/tmp/tkyte.emp


question about DBMS_SQL

A reader, January 17, 2006 - 4:28 pm UTC

Hi

I have some doubts about how DBMS_SQL works, in your last example you used this

loop the number of columns
dbms_sql.define_column( l_theCursor, i, l_columnValue, 4000 );
end loop

I don't quite understand what define_column does. I think it's there to assign appropriate variables to store the column values -- but as you loop, aren't you overwriting?

The example I see in the doc define the column using variables such as

DBMS_SQL.DEFINE_COLUMN(source_cursor, 1, id_var);
DBMS_SQL.DEFINE_COLUMN(source_cursor, 2, name_var, 30);
DBMS_SQL.DEFINE_COLUMN(source_cursor, 3, birthdate_var);

then fetch into the variables

DBMS_SQL.COLUMN_VALUE(source_cursor, 1, id_var);
DBMS_SQL.COLUMN_VALUE(source_cursor, 2, name_var);
DBMS_SQL.COLUMN_VALUE(source_cursor, 3, birthdate_var);

It seems obvious to me, but seeing your code, it's as if l_columnValue were an array instead of a varchar2 variable!

Tom Kyte
January 17, 2006 - 4:46 pm UTC

define column is used to tell oracle "this is what I'll be fetching into - would you be so kind to convert whatever you get into this for me"

I'm simply saying "whatever I select * - I'll be retrieving into a varchar2(4000), convert dates and numbers into strings please...."


l_columnValue is a scalar in my code??

is
l_output utl_file.file_type;
l_theCursor integer default dbms_sql.open_cursor;
l_columnValue varchar2(4000);

it is a single varchar2(4000) that I use in a call to column value over and over to retrieve the i'th column from a given row:

for i in 1 .. l_colCnt
loop
dbms_sql.column_value( l_theCursor, i, l_columnValue );
if ( l_columnValue is not null )
then .....






question about DBMS_SQL

A reader, January 17, 2006 - 4:58 pm UTC

Hi

What it´s confusing me is I thought whenever you use define_column a column you need a seperate variable for each column otheriwse how does Oracle then know (later in the code) how to store the variables?

I thought you must declare a variable per dbms_sql.define_column call, as in the documentation example...


Tom Kyte
January 17, 2006 - 5:17 pm UTC

Nope - all that "variable" is used for in define column is to pick WHICH API function actually gets called! (date makes it call the date version - telling it "hey, it'll be a date" and so on)

define tells oracle:

o datatype
o length

nothing more, nothing less. Since PLSQL doesn't have "pointers", it cannot really fetch into the variable you sent to define! That is why you later call column_value to actually *get* the value.
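
To make the pattern concrete, here is a minimal sketch (assuming a table EMP exists and serveroutput is on) that defines every column as a varchar2(4000) and reuses a single scalar buffer, exactly as discussed above:

```
declare
    l_cursor   integer := dbms_sql.open_cursor;
    l_colCnt   number;
    l_descTbl  dbms_sql.desc_tab;
    l_value    varchar2(4000);  -- one scalar buffer, reused for every column
    l_status   integer;
begin
    dbms_sql.parse( l_cursor, 'select * from emp', dbms_sql.native );
    dbms_sql.describe_columns( l_cursor, l_colCnt, l_descTbl );

    -- define: tell Oracle each column will be fetched as a varchar2(4000)
    for i in 1 .. l_colCnt loop
        dbms_sql.define_column( l_cursor, i, l_value, 4000 );
    end loop;

    l_status := dbms_sql.execute( l_cursor );
    while dbms_sql.fetch_rows( l_cursor ) > 0 loop
        for i in 1 .. l_colCnt loop
            -- column_value copies the i'th column into the scalar buffer
            dbms_sql.column_value( l_cursor, i, l_value );
            dbms_output.put_line( l_descTbl(i).col_name || ' = ' || l_value );
        end loop;
    end loop;
    dbms_sql.close_cursor( l_cursor );
end;
/
```

Note that define_column only records the datatype and length; the actual value only arrives in l_value when column_value is called, which is why one scalar can serve all columns.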

after reading the documentation again

A reader, January 17, 2006 - 5:18 pm UTC

Hi

I went through the documentation again and I see that define_column accepts four parameters:

c
ID number of the cursor for the row being defined to be selected.

position
Relative position of the column in the row being defined.
The first column in a statement has position 1.

column
Value of the column being defined.
""""The type of this value determines the type for the column being defined.""""
Is this saying that the value determines the DATATYPE of the column being defined?

column_size
Maximum expected size of the column value, in bytes, for columns of type VARCHAR2, CHAR, and RAW.

What happens if I have defined:

l_columnValue varchar2(100);
dbms_sql.define_column(xxx, l_cnt, l_columnValue, 4000)

Is this an inconsistency, since l_columnValue should have 4000 as the column_size? :-?

--------------

Is this code

loop
dbms_sql.define_column(xxx, l_cnt, l_columnValue, 4000)
end loop

saying

column 1 will be varchar2(4000)
column 2 will be varchar2(4000)
column 3 will be varchar2(4000)
column 4 will be varchar2(4000)

and so on?

Tom Kyte
January 18, 2006 - 7:24 am UTC

yes, it is saying it determines the TYPE - that is the datatype - of the column.


There is no inconsistency, you are saying "I am selecting a varchar2(100), when I fetch - I shall be giving you a string of length 4000 to put the result into"

I am exactly saying "I will be giving you a character string of maximum length 4000 bytes. Please retrieve the data and place it into that variable I send you in the column_value call"

dbms_random documentation

A reader, January 24, 2006 - 1:53 pm UTC

Hi

I was wondering why the dbms_random documentation is so poor. There are 9 procedures and functions, but only 4 are explained in the docs. Why? Any dangerous side effects?

generate testing data by update

Steve, February 23, 2006 - 2:34 pm UTC

Tom,

I have tables t and s. I just want to change the first n rows' col1 of table t
using the first n rows' col1 of table s.

something like

update t
set col1 = (Select col1
from s
where t.rownum = s.rownum )

Thanks!

Tom Kyte
February 23, 2006 - 7:20 pm UTC

well, there isn't really "the first rows" of anything - they are sets.

but...

ops$tkyte@ORA9IR2> create table t1 ( col1 int );

Table created.

ops$tkyte@ORA9IR2> create table t2 ( col1 int );

Table created.

ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> insert into t1 select rownum from all_users;

32 rows created.

ops$tkyte@ORA9IR2> insert into t2 select object_id from all_objects;

30913 rows created.

ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> declare
  2      type rid_array is table of rowid;
  3      l_rid  rid_array;
  4      type int_array is table of int;
  5      l_new  int_array;
  6  begin
  7      select t1.rid, t2.col1
  8        bulk collect into l_rid, l_new
  9        from (select rownum r, rowid rid
 10                from t1 ) t1,
 11             (select rownum r, col1
 12                from t2
 13               where rownum <= (select count(*) from t1)
 14             ) t2
 15       where t1.r = t2.r;
 16
 17      forall i in 1 .. l_rid.count
 18          update t1 set col1 = l_new(i) where rowid = l_rid(i);
 19  end;
 20  /

PL/SQL procedure successfully completed.



If T1 is large - you'll need to use an explicit cursor, use the LIMIT clause on the fetch - and iterate over the set. 
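
A sketch of that LIMIT approach, reusing the same query (the batch size of 1,000 is an arbitrary assumption, tune it to your memory budget):

```
declare
    cursor c is
        select t1.rid, t2.col1
          from (select rownum r, rowid rid from t1) t1,
               (select rownum r, col1
                  from t2
                 where rownum <= (select count(*) from t1)) t2
         where t1.r = t2.r;

    type rid_array is table of rowid;
    type int_array is table of int;
    l_rid  rid_array;
    l_new  int_array;
begin
    open c;
    loop
        -- fetch at most 1,000 rows per round trip
        fetch c bulk collect into l_rid, l_new limit 1000;

        forall i in 1 .. l_rid.count
            update t1 set col1 = l_new(i) where rowid = l_rid(i);

        -- exit AFTER processing, so the (possibly partial) last batch is applied
        exit when c%notfound;
    end loop;
    close c;
end;
/
```

The exit check comes after the FORALL on purpose: the final fetch may return fewer than 1,000 rows and set %NOTFOUND at the same time.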

generate rows based on a column value

Ravi B, January 03, 2012 - 12:29 pm UTC

Hi Tom,

Happy new year!

I have a requirement where I have to generate some test data as follows.

create table test_cpu (id number, speed number, type varchar2(50), num_logical_processors number,num_processors number);

insert into test_cpu values (39970,1401,'Intel(R) Core(TM)2 Duo CPU U9400 @ 1.40GHz',1,1);
insert into test_cpu values (39971,1401,'Intel(R) Core(TM)2 Duo CPU U9400 @ 1.40GHz',2,2);
insert into test_cpu values (39972,1401,'Intel(R) Core(TM)2 Duo CPU U9400 @ 1.40GHz',4,4);

For example, pick a row where num_logical_processors = 4

select * from test_cpu
where id = 39972
and num_logical_processors = 4;

I have to generate 4 different rows (num_logical_processors = 4) with a new column added to the query CPU0, CPU1, CPU2, CPU3.

The result should be like:

CPU0 39972 1401 Intel(R) Core(TM)2 Duo CPU U9400 @ 1.40GHz 4 4
CPU1 39972 1401 Intel(R) Core(TM)2 Duo CPU U9400 @ 1.40GHz 4 4
CPU2 39972 1401 Intel(R) Core(TM)2 Duo CPU U9400 @ 1.40GHz 4 4
CPU3 39972 1401 Intel(R) Core(TM)2 Duo CPU U9400 @ 1.40GHz 4 4

Similarly, I have to replicate the rows for all the rows in the test_cpu table based on num_logical_processors.

Could we have a SQL-only solution, so that I could run this query on Oracle as well as SQL Server?

Thanks for your help!
Tom Kyte
January 04, 2012 - 8:27 am UTC

ops$tkyte%ORA11GR2> with data ( cpu_num, type, num_logical_processors, num_processors )
  2  as
  3  ( select 0 cpu_num, type, num_logical_processors, num_processors
  4      from test_cpu
  5     where num_logical_processors = 4
  6     union all
  7    select cpu_num+1, type, num_logical_processors, num_processors
  8      from data
  9     where cpu_num+1 < num_processors )
 10  select 'CPU' || cpu_num cpu, type, num_logical_processors, num_processors
 11    from data
 12  /

CPU   TYPE                                               NUM_LOGICAL_PROCESSORS NUM_PROCESSORS
----- -------------------------------------------------- ---------------------- --------------
CPU0  Intel(R) Core(TM)2 Duo CPU U9400  @ 1.40GHz                             4              4
CPU1  Intel(R) Core(TM)2 Duo CPU U9400  @ 1.40GHz                             4              4
CPU2  Intel(R) Core(TM)2 Duo CPU U9400  @ 1.40GHz                             4              4
CPU3  Intel(R) Core(TM)2 Duo CPU U9400  @ 1.40GHz                             4              4

ops$tkyte%ORA11GR2> 

generate rows based on a column

Ravi B, January 03, 2012 - 4:02 pm UTC

Tom,

I came up with the following SQL for the above question.

select c2.*,'CPU' || (ROW_NUMBER() over (partition by c2.id order by c2.num_logical_processors) -1) CPU
from (select level-1 RowNumber from dual
connect by level <= 100
) c1,
test_cpu c2
where RowNumber < c2.num_logical_processors;

Would you do this any other way?
Tom Kyte
January 04, 2012 - 8:32 am UTC

I thought you wanted it in sql server?

Not that I'm as good at sql server as I am at Oracle, but last I looked - connect by was something we've had since 1979 - and they haven't ever had. And dual is something we use - they don't.
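
For what it's worth, the recursive WITH from the answer above is ANSI syntax, so a portable variant of the row generator is possible; a sketch that should run on both Oracle (11.2+) and SQL Server with only small tweaks:

```
with n ( i ) as (
    select 0
      from dual            -- SQL Server: omit "from dual"
     union all
    select i + 1
      from n
     where i + 1 < 100
)
select c2.*,
       'CPU' || n.i as cpu -- SQL Server: 'CPU' + cast(n.i as varchar(10))
  from test_cpu c2, n
 where n.i < c2.num_logical_processors;
```

The two dialect differences are the ones Tom points out: Oracle's DUAL table and Oracle's || concatenation operator.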


generate rows based on a column

Ravi B, January 04, 2012 - 12:38 pm UTC

Thanks Tom.

I was stuck trying to form a query for SQL Server, hence I went ahead and first wrote it in Oracle semantics.

I did this for SQL Server as workaround:

select c2.*,'CPU' + CAST((ROW_NUMBER() over (partition by c2.host_ci_id order by c2.num_logical_processors) -1) as varchar) CPU
from (select Row_Number() Over (Order By N1.object_id) As RowNumber
From (select * FROM sys.columns) N1
) c1,
test_cpu c2
where RowNumber < c2.num_logical_processors;

I used SQL Server system table sys.columns to generate sequence numbers.

The one you gave me is a very good one.
I never thought we could do a recursive query with the WITH clause. I can't stop learning from you every single day!

Thank you.

Could you please tell how to randomise the length of the data also

Himani Dudeja, July 07, 2016 - 10:53 am UTC

Hello,

It's a nice procedure, but when I tried it, the length of the characters was the same for each column, especially for dbms_random.string.

Could you please tell how we can randomize the length of the string in each row?

Thanks
Chris Saxon
July 07, 2016 - 2:36 pm UTC

You can use dbms_random.value(lower_length, upper_length) to get differently sized strings:

create table t as
  select cast(
           dbms_random.string('a', dbms_random.value(2, 10)) 
           as varchar2(10)
         ) c1
  from dual connect by level <= 5;

select * from t;

C1         
Aqaktg     
ksgTLbAdc  
bhfN       
yJSceaPjp  
plhSEOceO 


Chris

One small tweak

mayukh mukhopadhyay, September 11, 2018 - 9:56 am UTC

Rather than using execute immediate 'create table clone_' || p_tname || ...,

using execute immediate 'create table clone_temp_tab' ... first, inserting data into the main table from this clone_temp_tab, and then running execute immediate 'drop table clone_temp_tab' might be better, to cover those cases where the main table has a name close to 30 characters.
Chris Saxon
September 11, 2018 - 10:08 am UTC

Or upgrade to 12.2+ where you can have 128 byte long names ;)

More to Explore

PL/SQL demos

Check out more PL/SQL tutorials on our LiveSQL tool.

PL/SQL docs

PL/SQL reference manual from the Oracle documentation library