I am running ColdFusion 2016 Enterprise on a Windows 2016 Server.
I have a few CFM files that are very database intensive (lots of inserts and deletions).
I am finding that when I execute these scripts, ColdFusion apparently runs them over and over again: each run deletes all records from a table, then attempts to import the data back in. The runs happen concurrently; I haven't yet been able to pin down when it begins restarting the script, but I am working on that.
Does anyone have an idea why it would restart the script multiple times, with all of them running concurrently?
I am connecting to an Oracle Instance.
Message was edited by: Joseph Maitino
CF would not do this by itself. I'm nearly positive you will find that something you control is doing it instead. It may be a scheduled task configuration, an error handler, or something else. With the right diagnostics (a CF monitor, or the request logs, if the right info is being tracked) you can find out. It's too much to lay out in email.
I will just say for now that if the above clue helps, great. Let us know.
If not, share any more info (like the CF update level) and OS. Also, have you confirmed there were zero errors in the last update log? Finally, if this is pressing enough that you can't wait for days of back and forth among folks here, I can offer help directly via remote consulting, satisfaction guaranteed or you don't pay. More at carehart.org/consulting.
It is Windows 2016 over IIS.
I noticed a lot of these in System event viewer for the application pool:
"A worker process with process id of '####' service application pool '$$$$$$$$$$$' has requested a recycle because it reached its virtual memory limit."
I wonder whether, if the pool recycles in the middle of a long-running request, IIS resends the request through the ISAPI connector.
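One way to check that theory from the IIS side is to scan the site's W3C logs for CFM requests that repeat within a short window. This is a hypothetical sketch with made-up sample data; the awk field position ($5 = cs-uri-stem) assumes the #Fields line shown, so adjust it to match your own log definition:

```shell
# Build a small sample log in the default W3C layout (stand-in for u_ex*.log).
cat > sample_iis.log <<'EOF'
#Fields: date time s-ip cs-method cs-uri-stem cs-uri-query
2019-05-01 10:00:03 10.0.0.5 GET /import.cfm -
2019-05-01 10:00:41 10.0.0.5 GET /import.cfm -
2019-05-01 10:02:10 10.0.0.5 GET /status.cfm -
EOF

# Count CFM requests per (minute, URL); print any that occur more than once.
# Repeats you didn't initiate would point at requests being re-sent.
awk '!/^#/ && $5 ~ /\.cfm$/ { key = substr($2, 1, 5) " " $5; seen[key]++ }
     END { for (k in seen) if (seen[k] > 1) print seen[k] "x " k }' sample_iis.log
# prints: 2x 10:00 /import.cfm
```

Run the same awk against the real log files for the affected site to see whether the "restarts" line up with extra inbound requests.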
I changed the virtual memory limit from 41,943,040 KB to 4,194,304,000 KB. For the localhost application pool, I set it to 0 for unlimited.
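For reference, that limit lives under the app pool's recycling settings and can also be inspected and changed from the command line with IIS's standard appcmd tool (the pool name "MyAppPool" below is a placeholder; values are in KB, and 0 disables the limit):

```
:: Show the current virtual memory recycling limit for a pool (in KB).
%windir%\system32\inetsrv\appcmd.exe list apppool "MyAppPool" /text:recycling.periodicRestart.memory

:: Set it to 0 (no limit), as done here for the localhost pool.
%windir%\system32\inetsrv\appcmd.exe set apppool "MyAppPool" /recycling.periodicRestart.memory:0
```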
Let's see if we still have the same issues.
main website isapi_redirect.properties:
Interesting. I suppose that MIGHT cause it. It would require IIS to have held on to the first request, then after the app pool recycled, to have run the request again. In that case, sure, just like a browser user hitting refresh, the first request would STILL be running in CF, despite the new/refresh request.
As for changing those IIS virtual memory limits in the app pool, note first that the value you had (41,943,040 KB) is about 40 GB, and by adding two zeros to it, you've made it about 4 TB. Do you REALLY think you have that much memory on the box? 🙂 Granted, virtual memory can be larger than real memory, but still: if you're hitting the current 40 GB setting, you have different problems to solve. Raising those values (to let the pool grow still larger) would seem only to lead to different problems.
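As a quick sanity check on those two figures (assuming KB here means 1,024 bytes, so dividing by 1,024 twice converts KB to GB):

```shell
kb_old=41943040       # the original virtual memory limit, in KB
kb_new=4194304000     # the new value, after appending two zeros

# Convert each from KB to GB (binary units).
echo "$((kb_old / 1024 / 1024)) GB -> $((kb_new / 1024 / 1024)) GB"
# prints: 40 GB -> 4000 GB
```

So the change took the limit from roughly 40 GB to roughly 4 TB.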
You really want to find out why the app pool was growing so large:
Finally (and especially if you find it's none of those), also check that you have updated your CF/IIS "web server connector". This is the DLL in the folder where you looked at the properties file shown above. Is that DLL from 2016? 2017? 2018? Or 2019? I had asked what update level your CF2016 is running. Once you know that, look up when that update came out (at the ColdFusion (2016 release) Updates page). If the date of the CF update you have is from, say, 2019, but the date of your DLL is from, say, 2016–2018, then you also need to update your web server connector.
To do that, use the "upgrade" button in the web server config tool. It is literally as simple as clicking that button; there is no longer any need to remove and re-add ("recreate") the connector to update it. I'm not saying an outdated connector definitely affects memory use in the app pool (but it could, as the DLL does get loaded into the worker process). Nor am I saying an older one is definitely more susceptible to the problem of an app pool recycle kicking off a duplicate request (but it could be). I'm just saying you should at least rule this out. And of course you'd do it for both of the connectors you have (for the local and main sites/app pools).
Let us know how things go.