G'day
Hopefully Phil is still lurking here and might see this - or anyone else who knows about Oracle.
I've always thought the default for the "maximum pooled statements" setting on an Oracle DSN - which is 300 - is way too low to be sensible for a web app (which, let's face it, is what any CF DSN is going to be used for!). I usually find I end up with it around 1000-2000, depending on the size of the app and how busy it is.
I'm adequate at monkeying around with Oracle, and have needed to do a fair bit of that sort of thing in our dev environment, but I am by no means a DBA, so it's all very seat-of-the-pants.
So I'm keen to know what other people have their maximum pooled statements / open cursors setting set to for their Oracle-driven web sites/apps.
One of the reasons I'm asking is that I wonder whether it might be worthwhile to suggest that the default for CF9 be upped to 1000 or so..?
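For anyone wanting to compare notes, this is roughly how I sanity-check the Oracle side of things - a sketch only, assuming SELECT access to the v$ views, and the connection details are placeholders:

```java
// Sketch: check Oracle's open_cursors limit and actual per-session cursor usage.
// Assumes an Oracle JDBC driver on the classpath and SELECT privileges on
// v$parameter / v$open_cursor; the URL and credentials below are placeholders.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class CursorCheck {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCL", "scott", "tiger");
             Statement stmt = conn.createStatement()) {

            // The hard per-session limit the DBA has configured
            try (ResultSet rs = stmt.executeQuery(
                    "SELECT value FROM v$parameter WHERE name = 'open_cursors'")) {
                if (rs.next()) {
                    System.out.println("open_cursors limit: " + rs.getString(1));
                }
            }

            // How many cursors each session actually has open right now
            try (ResultSet rs = stmt.executeQuery(
                    "SELECT sid, COUNT(*) FROM v$open_cursor GROUP BY sid ORDER BY 2 DESC")) {
                while (rs.next()) {
                    System.out.println("SID " + rs.getInt(1) + ": "
                            + rs.getInt(2) + " cursors");
                }
            }
        }
    }
}
```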
Thoughts / comments?
--
Adam
I personally would not set the limit any higher than 300, and I'd consider keeping it even lower.
Each pooled statement / open connection imposes a resource burden on both the CF and the Oracle servers - think of it as "a very crowded restaurant." Most of the patrons are just standing around, but they are nevertheless occupying the restaurant, and the potential exists that every one of them might try to order a burger-and-fries at exactly the same time.
The "number of pooled statements" should be such that your SQL-server can actually handle that number of simultaneous requests, both in terms of active searches and of maintaining the still-open result sets. Beyond that point, incoming web requests should be forced to wait. Set an "occupancy limit" to your proverbial restaurant such that everyone could come in and be timely served at the same time.
Categorically, if you plot response-time graphs for such things (SQL servers, virtual-memory subsystems, and so on), they exhibit a fairly linear performance curve up to a point; beyond that "elbow- or knee-point," degradation becomes exponential in a very, very bad way. I saw this happen in a batch-processing subsystem long ago on a very small mainframe (yeah, I'm makin' these numbers up...):
| Number of Jobs at Once | Completion Time per Job |
|---|---|
| 1 | 30 seconds |
| 3 | 40 seconds |
| 8 | 9 minutes |
| 12 | 4.5 hours |
It was just about that bad. And what I did to (dramatically...) address the problem was to impose simultaneous-job limits on that subsystem. As long as the number of simultaneous jobs was capped at 3, a workload of 12 jobs could be reliably completed in four batches, i.e. about (4 * 40) = 160 seconds, whereas if all 12 jobs tried to run at once, the computer would have had reason to file a lawsuit for abuse.
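What I did back then amounts to what we'd now call a semaphore: let the whole workload queue up, but admit only a fixed number of jobs at a time. A minimal sketch, with made-up numbers to match the table above:

```java
// Sketch: capping simultaneous jobs with a semaphore - the software
// equivalent of the "occupancy limit" above. All numbers are illustrative.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;

public class JobLimiter {
    private static final Semaphore SLOTS = new Semaphore(3); // max 3 jobs at once

    public static void main(String[] args) {
        ExecutorService pool = Executors.newCachedThreadPool();
        for (int i = 1; i <= 12; i++) {
            final int job = i;
            pool.submit(() -> {
                try {
                    SLOTS.acquire();           // wait for a free slot
                    System.out.println("Job " + job + " running");
                    Thread.sleep(1_000);       // stand-in for real work
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                } finally {
                    SLOTS.release();           // free the slot for the next job
                }
            });
        }
        pool.shutdown();
    }
}
```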