I have installed and configured APEX 20.2 with ORDS 19 and Apache Tomcat 9 on a Windows Server 2016 machine. The database is 12c Enterprise Edition without a container DB. I have an application built for initial registration for an event, and I am expecting 3000 to 5000 users hitting the web application after a newspaper advertisement. The server is on 1 Gbps bandwidth. Since it is open registration, I opted for no-authentication mode, i.e. through APEX_PUBLIC_USER. I request experts to please guide me with regard to the concurrent users/sessions to plan for, and also the storage, since I expect image attachments of about 1 MB for each entry. Thanks in advance.
There is a big difference between 5000 users hitting a website all at the same time and 5000 users hitting a website over the course of (say) 12 hours.
In any event, you should be looking at doing some benchmarking (even just using curl is a good starting point) to see what the server can handle. You can do some rough math to get a starting estimate, e.g.
- APEX/ORDS uses a connection pool, so each connection will equate to a database session.
- If your server has (say) 12 cores, then once you have 12 sessions all *concurrently* on CPU, you'll be close to maxing out the server.
- Let's say each APEX interaction is (say) 25% CPU and 75% IO. This would mean you could cope with (on average) 48 concurrent sessions (equating to 12 of them on CPU at any given instant).
- So *based on the above numbers* you might start with a connection pool size of (say) 40, because more than that has the potential to max out the server, or at least get it close to its capacity.
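The arithmetic in those bullets can be sketched out; the numbers below are the illustrative ones from the example (12 cores, 25% CPU per request), not measurements from any real server:

```python
# Rough pool-sizing arithmetic from the bullets above.
# All figures are illustrative assumptions, not measured values.
cores = 12              # CPU cores available to the database server
cpu_fraction = 0.25     # portion of each APEX request spent on CPU (vs IO)

# If each session is on CPU 25% of the time, then on average
# cores / cpu_fraction sessions will keep every core busy.
max_concurrent_sessions = cores / cpu_fraction   # 48

# Start the pool a little below the theoretical ceiling to leave headroom.
pool_size = int(max_concurrent_sessions * 0.85)  # ~40

print(max_concurrent_sessions, pool_size)        # → 48.0 40
```

Plug in your own core count and CPU/IO split once you've measured a typical page request.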
In the scenario above, if you had more than 40 concurrent APEX requests, then some will wait ... this is a *good* thing, because it's better for them to wait than for your server to collapse under load.
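For reference, the ORDS pool bounds live in its configuration directory; a sketch of the relevant entries in `defaults.xml` (the values below are the illustrative 40 from above, and the minimum is just an assumed starting point):

```xml
<!-- defaults.xml in the ORDS configuration directory (sketch) -->
<entry key="jdbc.InitialLimit">10</entry>
<entry key="jdbc.MinLimit">10</entry>
<entry key="jdbc.MaxLimit">40</entry>
```

Restart ORDS (or Tomcat) after changing these for them to take effect.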
If at a connection pool of 40 you're seeing your server running at (say) 60%, then you could adjust it slightly higher to allow the server to run at 70%, etc. Often heading into the 80%+ mark you'll start to see inconsistent performance (i.e., some people observe a big drop in performance as the CPUs get close to maxed out).
Of course, all of this applies to other facets as well - concurrent disk activity versus storage performance, and the amount of network bandwidth consumed at any given instant. E.g., 1 Gbps is roughly 100 MB/s in practice, which means (very optimistically) 100 people uploading 1 MB each at any instant, not taking into account all the overheads of network communications. So even if you had an insanely powerful server, you wouldn't want a connection pool size that would allow close to that many concurrent uploads.
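The same back-of-envelope treatment works for the bandwidth side; the 1 Gbps link and 1 MB uploads come from the question, and the overhead discount is just a rough assumption:

```python
# Back-of-envelope bandwidth ceiling for the 1 MB registration uploads.
link_gbps = 1
link_mb_per_s = link_gbps * 1000 / 8   # 125 MB/s theoretical maximum
upload_mb = 1                          # size of each registration image

# If every upload took exactly one second, this is the hard ceiling on
# simultaneous 1 MB uploads (before any TCP/HTTP overhead):
concurrent_uploads = link_mb_per_s / upload_mb

print(concurrent_uploads)              # → 125.0 in theory; plan for ~100
```

In practice protocol overhead eats into that, hence treating ~100 as the optimistic planning number.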
It's always best to benchmark a subset of activity, and then come up with some reasonable but conservative starting points.
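If you'd rather script the benchmark than loop curl by hand, a minimal sketch in Python (the URL and concurrency figures are placeholders - point it at a representative APEX page on a *test* instance, never production):

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

def benchmark(url, concurrency=10, requests=100):
    """Fire `requests` GETs at `url` using `concurrency` worker threads
    and return the sorted per-request timings in seconds."""
    def hit(_):
        start = time.perf_counter()
        with urllib.request.urlopen(url, timeout=30) as resp:
            resp.read()                      # drain the body like a browser would
        return time.perf_counter() - start

    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        return sorted(pool.map(hit, range(requests)))

# Example against a hypothetical test-instance APEX page:
# timings = benchmark("http://localhost:8080/ords/f?p=100:1", concurrency=10)
# print(f"median {timings[len(timings) // 2]:.3f}s  max {timings[-1]:.3f}s")
```

Step the concurrency up (10, 20, 40, ...) while watching server CPU, and you'll see roughly where response times start to degrade.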
Some more good info here https://www.slideshare.net/Koppelaars/smartdb-office-hours-connection-pool-sizing-concepts