architecture
oralearner, April 10, 2002 - 2:26 am UTC
1. I wanted to write 1000; by mistake I wrote 100.
But I am still confused about how to start my calculations.
2. 5 star answer.
April 10, 2002 - 8:47 am UTC
100, 1000, 1000000 -- doesn't matter.
N entries a day -- that doesn't compute.
My answer however stands:
There are no rules of thumb here, experience is the best teacher.
I might start with:
1 MB of RAM * number of concurrent users
50 MB shared pool
50 MB buffer cache
and work up from there. It is 100% impossible to say -- the above is sort of a minimal configuration, something to start with.
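As a sketch, that starting point might translate into an init.ora fragment like the following. The values assume roughly 50 concurrent users and are illustrative only; parameter names are the 8i/9i ones:

```ini
# hypothetical init.ora fragment -- a minimal starting configuration, not a recommendation
shared_pool_size = 52428800    # 50 MB shared pool
db_cache_size    = 52428800    # 50 MB buffer cache (9i; use db_block_buffers in 8i)
sort_area_size   = 1048576     # ~1 MB of sort memory per session
processes        = 100         # headroom for ~50 concurrent users plus background processes
```

From here you would measure (buffer cache hit ratios, shared pool reloads, paging at the OS level) and grow the pools as the workload demands.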
Sizing files -- wholly, 100% impossible to generalize; you have to do that based on what you put in there.
architecture
oralearner, April 10, 2002 - 11:54 am UTC
Thank you, Tom.
At least I have a starting clue.
Sometimes the first step seems to be the difficult one.
Is there any manual I can read about this?
April 10, 2002 - 12:25 pm UTC
The admin guide for each platform has rudimentary sizing guidance -- but it won't tell you about SGA sizing. That's experience, knowledge of how it's used (read the Concepts Guide from cover to cover), and testing.
A reader, November 22, 2002 - 2:51 pm UTC
Regarding #2 of the original question:
I like your approach. Can I have your opinion on this?

We have a data warehouse build that loads the tables with data and then recreates indexes, collects statistics, etc. Due to issues with our data feeds, we have to truncate and reload the tables every time.

We are trying to make things faster by running jobs in parallel. We are using parallel direct loads with SQL*Loader. For creating indexes, we have functions that read the index DDL from ut_fil_dir. We are running all these jobs at the OS level.

Will there be any advantage or disadvantage if we schedule the jobs (except the SQL*Loader jobs) inside the database? I assume we could monitor resource usage better inside the database and use parallel processing more effectively. Some DBAs are of the opinion that dbms_job will be slower compared to OS-level scheduling. I can't believe that. What would be your approach to this? Can there be any benefit at all to moving these jobs inside the database? Thank you for your time.
November 22, 2002 - 4:29 pm UTC
In 9i -- I would do it all in the database with external tables and parallel automatic tuning.
You get all of the advantages of parallel direct path loads without having to mess with the OS.
In 8i -- since some of it has to be done "outside" -- from a coordination perspective, it may be easier to do it all outside.
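A sketch of the 9i approach. The directory path, table names, and column layout here are assumptions for illustration, not details from the original post:

```sql
-- hypothetical: replace a SQL*Loader direct load with an external table (9i)
CREATE DIRECTORY load_dir AS '/data/feeds';

CREATE TABLE sales_ext (
   sale_id    NUMBER,
   sale_date  DATE,
   amount     NUMBER
)
ORGANIZATION EXTERNAL (
   TYPE oracle_loader
   DEFAULT DIRECTORY load_dir
   ACCESS PARAMETERS (
      RECORDS DELIMITED BY NEWLINE
      FIELDS TERMINATED BY ','
      (sale_id, sale_date DATE 'YYYY-MM-DD', amount)
   )
   LOCATION ('sales.dat')
)
PARALLEL;

-- parallel direct path load, entirely inside the database
ALTER SESSION ENABLE PARALLEL DML;
INSERT /*+ APPEND */ INTO sales SELECT * FROM sales_ext;
COMMIT;
```

Because the flat file is just a table, the load, index creation, and statistics gathering can all be coordinated from one PL/SQL script instead of a mix of shell scripts and SQL*Loader control files.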
A reader, November 23, 2002 - 9:23 am UTC
Thanks for answering.
I understand about external tables, but it will take some more time for us to get there. What we really wanted to know was whether dbms_job has any advantage or disadvantage compared to OS scheduling from a performance standpoint. I am aware of the convenience of doing it all from either inside the database or outside; I am looking at the performance gain only.

I know that if it is done inside the database, I can monitor resource usage better (without depending on the Unix SAs). But there will be a host of other parameters, like job_queue_processes and buffers, which might affect performance. I wanted to get your opinion specifically on that. I hope I am clear about what I am asking.
Thanks again for your time.
November 23, 2002 - 9:41 am UTC
dbms_job and using the OS will have no performance differences, because at the end of the day the CREATE statements are all about SQL, and that is where the time will be spent.
dbms_job has a limit on the number of jobs it'll run at a time (so does cron, by the way -- it'll stop doing things as the load explodes).
dbms_job has a period (interval); it'll only peek for jobs every N seconds.
There are no performance gains EITHER WAY (there could not be), and no performance hits.
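For reference, queuing one of the rebuild steps through dbms_job might look like this. The `rebuild_indexes` procedure name is hypothetical; the parameters shown are the ones behind the limits mentioned above:

```sql
-- job_queue_processes caps how many jobs run concurrently (the limit noted above);
-- job_queue_interval (init.ora, 8i) controls how often the coordinator peeks for work
ALTER SYSTEM SET job_queue_processes = 10;

-- hypothetical: queue an index-rebuild procedure to run once, as soon as possible
VARIABLE jobno NUMBER
BEGIN
   DBMS_JOB.SUBMIT(
      job       => :jobno,
      what      => 'rebuild_indexes;',   -- assumed stored procedure
      next_date => SYSDATE,              -- eligible to run immediately
      interval  => NULL);                -- NULL interval: run once, then drop
   COMMIT;                               -- the job is not visible to the queue until commit
END;
/
```

The work itself (the CREATE INDEX statements inside the procedure) runs at exactly the same speed either way; the only differences are coordination and monitoring.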