We're currently running a Windows server with CF 2018 installed and recently developed a new site. When we push the site to a "live" environment, the server runs well for about 20-30 minutes and then just fails to respond. I do see the CFM and SQL processes slowly increase as the site gets traffic. I'm guessing load just builds up until it gets bottlenecked or something. The site then fails to respond, gives us the fairly standard CFM error, or fails to load any pages and eventually just throws the Tomcat error. The errors I get are "Java heap space" and/or "GC overhead limit exceeded".
When we run the site in a "dev" environment it works fine, but of course in dev it's only one or two people hitting the site. The "live" and "dev" sites are on the same server, just at different URLs, so there is no difference in server settings, just the number of people hitting the site (public vs. private). I do have a couple of CFC objects that run a bunch of functions, so I'm not sure if I need to investigate there.
If anyone may know of something regarding GC or Java heap please let me know. Any help or direction is appreciated.
Thanks.
Before you or anyone here gives even a minute of thought about your app or cf itself, please answer this simple question: what is the max heap size in the cf admin "java and jvm" page? It may be 1024, in both prod and dev, and it's merely the default. That may be fine in your dev env, with virtually no traffic, but it may not suffice for prod. Again, don't even worry about contemplating the implications of that.
Next, what is the total memory in the box? And how much is currently shown as free? You may only need the max heap to be a LITTLE more than it is, but if you could double or triple it, you may find all's well. Could you give it ALL the free memory on the box? Maybe, but that isn't always smart.
Finally, if this cf app previously ran in prod on some other cf server, you could also check what the heap size was there.
Let us know how it all goes.
First off, thank you for the quick response @Charlie Arehart. Greatly appreciated.
The current Minimum JVM Heap Size is 512MB, and the Maximum JVM Heap Size is 512MB. A few years ago I fiddled with these settings, so I'm not sure if they're relevant. I just used the same settings from when we moved from CF11 to CF2018. Not sure if this matters, but this is in the JVM arguments as well:
-XX:MaxMetaspaceSize=192m
-XX:+UseParallelGC
Current memory is 8.0 GB, sitting at 3.6 (see attached). Just a note: this is the only site running on the server. It's pretty much our last CFM site; the only other program on here is SQL Server. It's essentially a dedicated server for this site, so I can tweak it as much as I like.
The dev site and live site are on the same server, so they both use the exact same CFM settings. We did migrate from CF11 to CF2018 a few years ago, so I mostly took notes before smoking CF11. Also, not sure if this matters, but CFM is in lockdown mode, or whatever they call it.
This is kind of a blast from the past. We used to run into similar problems with content-heavy CMS sites, where you had a lot of stuff that didn't change frequently. Anyway, at the time the JVM had three "generations" for objects in the heap: young, tenured and permanent. Things have changed a bit since then, but the basic idea will probably still apply for you. The problem we had was that each generation had its own maximum size, so if we didn't size them correctly we'd run into the same problem. This was also with 32-bit JVMs which were a lot smaller than today's. The way we solved the problem was by first just increasing the total heap size, and if that didn't work we'd find which generation was problematic and increase the size of that generation.
The default heap size in CF used to be 512, which was pretty small. The heap won't get bigger over time, so you need to figure out how big it should be. Or, like me, you could just jack it up to a much larger number if the server can handle that. It can't exceed the available memory of the entire server of course. So, you might go with 1024 or 2048 and see how that works. Even if that doesn't solve the problem, it should take longer to occur and that means you're on the right track. Increase it again until you solve the problem. If you don't have enough memory to do this, get some more and try it again. Memory is pretty cheap.
Dave Watts, Eidolon LLC
Thanks @Dave Watts for the reply. The only thing I wasn't sure of was the JVM arguments. Per my reply to @Charlie Arehart with the JVM settings, I wasn't sure what these JVM arguments meant:
-XX:MaxMetaspaceSize=192m
-XX:+UseParallelGC
Hello,
I agree with other posts here about increasing initial and maximum size for heap values.
You may want to increase -XX:MaxMetaspaceSize=192m, or not define it at all and let the JVM manage sizing for that. On Java 11 I prefer to set it, and to define an initial size as well. Java 17, which CF2018 does not support, offers some JEP fixes which solve Metaspace ballooning when it is not defined. Perhaps you could try:
-XX:MetaspaceSize=356m -XX:MaxMetaspaceSize=640m
The CF2018 default install uses the -XX:+UseParallelGC garbage collector. The default for Java 11 is -XX:+UseG1GC. I know which I prefer, but care needs to be taken. With any recommendation to tune memory spaces or change the collector, you should do some JVM monitoring to know what usage is currently like, then what it's like after altering values.
Regards, Carl.
Biff, our conclusions (Dave's and mine) are the same: raise the max heap size. Sounds like you could make it 2GB (2048MB, if you like). No need to consider or wonder about anything else, seriously.
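For anyone unsure where these values live outside the CF Admin UI: the heap settings end up on the java.args line of ColdFusion's jvm.config file (under cfusion/bin). A sketch of that line after raising the max heap to 2048MB might look like the following; the other arguments shown are just the defaults Biff listed, not a recommendation, and your actual line will contain more flags:

```
# cfusion/bin/jvm.config (excerpt) -- illustrative values only
java.args=-server -Xms512m -Xmx2048m -XX:MaxMetaspaceSize=192m -XX:+UseParallelGC
```

-Xms is the minimum (initial) heap and -Xmx the maximum; editing the CF Admin "Java and JVM" page writes the same values to this file. A CF restart is required either way.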
As for the maxmetaspacesize, it has nothing to do with the heap. If you want to understand it (and why you may not need it at all), I have a blog post on it that Google will readily find.
As for the useparallelgc, that too is the default. Some debate changing it, but I am 100% confident it's inconsequential for your situation.
As I said, raise the heap and call it a day. If you want to wonder later "why it was needed", we can discuss that after solving the more important problem, which can be so easily solved. 🙂
OK, three years later, almost to the day. A similar thing has started happening, and nothing has really changed except an increase in traffic to the site. For some magical reason, ColdFusion and SQL Server have recently gone off the rails: they've started growing in memory size, with CFM eventually going through its motions of GC overhead issues and not responding. About a month ago things were fine with the min heap size at 512 and the max heap size at 2048. Then CFM started crashing a bit more with GC overhead issues, so I raised the min heap size to 1024 and the max heap size to 4096. For the past few days, ColdFusion can't seem to run more than 8 hours without stalling on page timeouts after 30 seconds or hitting GC overhead issues, with SQL Server growing in memory size. I haven't added any new code or SQL queries; the site has mostly just seen an increase in traffic.
It's a Windows server with 16GB of RAM, running ColdFusion 2018 (yeah, I know it's old).
Not sure if this matters, but this is in the JVM arguments as well:
-XX:MaxMetaspaceSize=192m
-XX:+UseParallelGC
I'm totally perplexed... any suggestions would be welcome.
I've attached what Google Analytics is seeing at this moment in time in regards to traffic.
Thank you.
Given that you had the same problem around the same time of year, and it went away, my guess is that you have some memory-heavy code that is only used seasonally.
Look at the web server logs and see if there are any pages that are primarily accessed at this time of year. Then look at the code. You might have a big loop that is adding variables on each iteration, or you might be adding variables to a scope (such as session, application, or server) which are not allowed to be garbage collected until they time out.
Check and see how long your session timeout is - if it is super long (the default is 20 min), then every new visitor or bot request is going to have memory overhead for the duration of the session.
The traffic numbers from Google Analytics aren't crazy high; most applications would handle that with no problem given the amount of heap you have. That's why I think it is more likely a code or session-duration issue. Keep in mind that bot traffic may or may not show up in Google Analytics.
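Pete's point about session duration can be made concrete with some back-of-the-envelope arithmetic. The numbers below (per-session footprint, arrival rate) are invented purely for illustration; real session sizes depend entirely on what your code stores in the session scope:

```python
# Rough steady-state estimate of heap held by sessions.
# Assumptions (invented for illustration): every new visitor or bot
# request creates a session, and each session holds ~50 KB on average.

def steady_state_session_memory_mb(arrivals_per_minute, timeout_minutes,
                                   kb_per_session=50):
    """Little's law: concurrent sessions = arrival rate x session lifetime."""
    concurrent_sessions = arrivals_per_minute * timeout_minutes
    return concurrent_sessions * kb_per_session / 1024  # MB

# 20-minute default timeout vs a 60-minute timeout, at 30 new sessions/min:
print(steady_state_session_memory_mb(30, 20))  # ~29 MB
print(steady_state_session_memory_mb(30, 60))  # ~88 MB
```

The takeaway: tripling the session timeout triples the sessions held live in the heap at any moment, and bots that never reuse a cookie each get their own session for the full timeout.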
Hello @pete_freitag
Thanks for the reply and suggestions. I looked into your suggestion of examining the code and which parts of the website get used. The site is a tourism site, so a lot of it gets utilized all year long. The majority of what the public hits is its event listings and blogs, and those pages aren't anything intense. One thing you did bring up is session timeouts. In my application file I do have the applicationTimeout and sessionTimeout set to 1 hour; could that be too long? You are correct that the hits weren't that intensive in GA, but up until just a few weeks ago nothing really changed except the amount of traffic. They have started advertising on YouTube, TikTok, Facebook and Google. I just became aware of that after looking at the log files that you and @Charlie Arehart suggested in the next reply (which I'll answer). Let me know if you think my application/session timeouts are too long.
Thanks
I have a different take than Pete's helpful thoughts, and mine may well be good news for you, Biff, if it matches your challenge. But I will say first that the reason CF (and the DB) are going off the rails should not be "magical"--nor is it a "mystery", though it does call for a "tour" through appropriate diagnostics. 🙂 (Nod to the Beatles, for any not getting my quoted references.)
TL;DR: First, confirm what OOM errors you currently find in coldfusion-error.log, as they may be different from what you mentioned. Second, don't trust GA: your web server logs may show far more traffic than GA is showing (as I will explain). That logging info will also show the user-agent of requests, and the traffic may well prove to be mostly from bots and other automated agents. You can easily block any you don't want, using that user-agent header, via IIS's "request filtering" and its simple "rules" feature, which returns a 404 to the requestor. Just be careful that blocking a given user-agent really makes sense. Further, be careful if you also have IIS set to send 404's to CF, as these "blocked" requests would still end up talking to CF. The above approach proved to be EXACTLY the problem and solution for a client I helped just yesterday. And it's been happening to a lot of people recently (indeed for weeks and months, and longer). It may not be obvious, but it need not remain hidden or mysterious.
I hope folks reading the TL;DR won't presume "that's not my problem". That's what the client thought as well, as I will explain. I do think that if you'll take just a few minutes here to read along, and then several minutes to do some assessment of your own diagnostics (which I can also help with via screenshare consulting, of course), you may well both FIND your root-cause problem and resolve it.
1) You refer to seeing "gc overhead limit exceeded" indications. That's a good first step.
Was that reported in your app/on-screen, or were you finding it in the CF logs? Either way, do open that coldfusion-error.log, noting first the lines at the top, which tell you how far back in time that log goes. Then go to the bottom and search "up" for this phrase: outofmemory.
Do you find any reporting instead "java heap"? Or perhaps "metaspace"? The former would be CF hitting that heap limit (the "overhead limit" is more a warning in advance of that), while the latter would be about that maxmetaspacesize you referred to in your first note. As Carl noted (and as I discuss in the blog post I referred to), most people would do better just to REMOVE that argument. I explain why in the blog post.
And if instead you are getting "java heap" errors, you could keep "chasing the rabbit" of increasing the heap, which MAY help. But I sense you're wanting to "find and resolve the root cause". You may even be thinking "something's wrong with ColdFusion", but I would suggest otherwise.
2) Indeed, you mentioned in passing (in this note today) that "it's just seen an increase in traffic mostly".
That's in fact where I would have wanted to turn your attention. Indeed that's what I was thinking of when I last replied that first day you wrote (which was 2 years ago rather than 3). I'd added that "If you want to wonder later "why it [the heap increase] was needed", we can discuss that after solving the more important problem, which can be so easily solved."
You never responded again, nor did you mark any reply as an answer, so the discussion lay dormant until now. What I am sharing now is what I would have proposed then, if you'd been interested. And since others may find this thread (especially now that you've revived it), and to help you now that it's happening "again", I'm elaborating.
3) So you've shared your GA chart, though you only shared a brief window of time in that screenshot (30 mins). Further, it's focused on the count of active users (and "views"), rather than request rates (let alone durations).
Still, we can't tell: is your conclusion that traffic IS up? Or perhaps that it doesn't seem to have risen enough to cause problems? Either way, I would propose that GA is not the best place to be looking for this sort of problem. It has classically under-reported traffic to servers like CF.
Why? Because it has classically been based on the browser receiving the page (and its html) requested and then executing the little bit of GA js code you would have put into the site. (And of course, it ONLY tracks pages that DO have that little bit of js code.) But the problem may be that your server is being pounded by traffic from automated agents (search-engine bots, bad guys grabbing your data, attackers, or more recently AI bots). Those typically do NOT execute any js on the page. They just say "give me the next", and "give me the next".
They could be generating 5x, 10x, or 100x the traffic of regular people--and that rate could have increased for you recently. And I'm not just talking theoretical possibilities: I have helped many people find and resolve this problem, many times, in recent days, weeks, months, and years.
In fact, just yesterday I helped someone in this same boat. CF was crashing, Task Manager showed it using high CPU and memory--and they'd previously tried raising the memory on the box and the CF heap, which only forestalled the problem. They didn't have GA, but indeed their contention was that "this is an internal server that no outsider should be hitting". So I asked if they had anything internal beyond Task Manager to monitor things--especially perhaps FusionReactor, or the PMT, or any sort of CF monitor. Like many, they did not. That was not a show-stopper, as I will explain.
(If they did have either of those tools, they are the best at tracking "what's going on in CF, specifically". And FR even creates a great request log tracking every CF request--and ONLY cf requests--including their start time, duration, number of queries, their duration, the requesting IP, its user agent, and several more valuable metrics. Sadly, neither CF itself nor the PMT offers such a log: the PMT stores its data in an ElasticSearch DB, though the Tomcat underlying CF can be configured to create a request log.)
4) So instead I recommended to them what I now recommend for you: look at your web server logs. You mention being on Windows, so your web server is likely IIS, as it was for the other folks. I guided them to find where those logs are (the location is specified within IIS itself, but the default is c:/inetpub/logs/logfiles), and within that is a folder for each site, named by its IIS "site id" number (again, you can find that from the IIS "sites" section).
They had multiple sites and so multiple folders. We looked at each (rather than presume to know "this site is all we care about"). I recommend you do that also.
And within each site's log folder, I had them sort the list of files by date modified. As you may know, IIS logs are stored by day, rolling over at midnight (by default)--though the IIS log lines are tracked in GMT time. In their case the server was in US Eastern time, so we'd subtract 4 hours to find the equivalent local time in the logs.
4a) Anyway, the first thing I had them look at was whether any of the recent days' logs were larger than those of previous days or weeks (when there was "no problem"). Even across multiple folders, they didn't see much that stood out--so some might have been inclined to think that a waste of time. But I'll say it's often been VERY clear that SOME days' logs were indeed FAR larger than other days or recent weeks--and it may be in ONE site's folder that was unexpected.
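Sorting the folder by size in Explorer works fine for this; a tiny script can do the same across a folder. This is just a sketch, and the example path in the comment (including the W3SVC1 site-id folder) is the IIS default Charlie mentions, not something known about your server:

```python
# List a site's IIS log files largest-first, to spot unusually heavy days.
from pathlib import Path

def log_sizes(log_dir):
    """Return (filename, size_in_MB) tuples for *.log files, largest first."""
    files = [(p.name, p.stat().st_size / (1024 * 1024))
             for p in Path(log_dir).glob("*.log")]
    return sorted(files, key=lambda t: t[1], reverse=True)

# Example usage (path is the IIS default; adjust the site-id folder):
#   for name, mb in log_sizes(r"C:\inetpub\logs\LogFiles\W3SVC1"):
#       print(f"{name}: {mb:.1f} MB")
```

Since IIS names logs by date (u_exYYMMDD.log by default), a day whose file is several times larger than its neighbors jumps right out.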
4b) Moving on, and before giving up on the value of these web server logs, I had them open the most recent one. (Again, remember that midnight there would have been 8pm the night before.) I wanted to just take a look to see if we might readily spot some unusual nature of traffic.
I explained first that a real challenge with web server logs (as compared to FR's request logs) is that a web server log tracks EVERY request made to the server: if a CF page serves up html to the browser, that browser then processes it and makes several (perhaps dozens of) requests back to the server, such as for js files, css files, image files, and so on, which can make it more challenging to "weed through the logs" to focus only on CF requests (and their rate). Further, some CF requests are made without even naming .cfm or .cfc as the file extension.
Still, a "beneficial" side-effect of how bots work is that they tend to again just say "give me this url", then "give me that url"..."I don't care about your silly images or js or css. I just want your content!"
5) And sure enough, on the first screen of their logs (showing about 40 log lines in their editor), it was nothing but cf page request after cf page request. NO requests for images, NO requests for js or css. Just one call for a CF page after another.
And the requests were clearly just going through their site asking for a url that named one product and category after another (of course, other sites might track any possible kind of content). Often sites have lists of categories for display, and within categories lists of items, and features for paging through them. That's like honey to a bear for the bots (or bad guys). They just trawl through them asking for page after page.
6) Then I pointed out how the IIS logs (and most web server logs) also track the "user-agent" making the request, which might be a "real browser", though legit bots often do identify themselves.
And indeed we saw on that one screen alone that there were 5 different bots in that timeframe of seconds: requests from googlebot, bingbot, amazonbot, dotbot, and ahrefsbot. But most were in fact from facebook. There were clearly NO requests from any REAL browsers on that screen, nor as we paged down. And remember, this was 8pm their time--but we found it was true pretty much all the time. I've just as often found chatgpt or other AI bots doing the same thing.
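Eyeballing the log works, but tallying user-agents makes the pattern undeniable. A sketch of doing that over IIS's W3C-format log lines follows; the sample lines and field list are invented for illustration, and the parser reads the #Fields directive rather than assuming fixed columns, since IIS field order is configurable:

```python
# Count requests per user-agent in an IIS W3C-format log.
# Note: IIS replaces spaces in the user-agent with '+', so whitespace
# splitting keeps each field intact.
from collections import Counter

def count_user_agents(lines):
    fields, counts = [], Counter()
    for line in lines:
        if line.startswith("#Fields:"):
            fields = line.split()[1:]   # e.g. ['date', 'time', ..., 'cs(User-Agent)']
        elif line.startswith("#") or not line.strip():
            continue                    # skip other directives and blanks
        elif fields:
            cols = line.split()
            if len(cols) == len(fields):
                counts[cols[fields.index("cs(User-Agent)")]] += 1
    return counts

# Invented sample lines, mimicking the traffic described above:
sample = [
    "#Fields: date time cs-uri-stem cs(User-Agent) sc-status",
    "2025-06-01 00:00:01 /events.cfm Mozilla/5.0+(compatible;+AhrefsBot/7.0) 200",
    "2025-06-01 00:00:02 /blog.cfm Mozilla/5.0+(compatible;+AhrefsBot/7.0) 200",
    "2025-06-01 00:00:03 /index.cfm facebookexternalhit/1.1 200",
]
for agent, n in count_user_agents(sample).most_common():
    print(n, agent)
```

Pointing this at a real day's log (reading the file line by line) quickly shows whether a handful of bot user-agents dominates the request count.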
Finally, note also that by default the IIS logs track the request duration as the last column--and indeed these were taking several seconds, even dozens when things were bad, as they were in this random log we'd opened at their 8pm. Clearly it seemed we'd found the culprit(s).
7) So "what to do"? Some people respond thinking "we need to block those IP addresses", but others know that's a fool's errand. Those bot frameworks (and many bad guys or thieves) are sophisticated enough to spread their load over many IPs--which may well change day to day.
And I showed how to block them instead by useragent. But I warned first that if someone in your org WANTS that bot traffic, then you can't "just block it". More on that in a moment.
In their case, though, remember they said this was an internal server/site that they didn't think had ANY incoming outside traffic. (While we could turn our attention to that, addressing things from a firewall or other level, they just needed a quick solution because like you their CF was crashing constantly. We had confirmed in this 20 mins of work and discussion that THIS was their unexpected root cause problem.)
7a) So they were indeed interested in a solution that could block the traffic from THOSE bots (those "user-agents").
And for that I showed how easily we could use IIS to handle it. At either the site or server level there is a "request filtering" feature (among the buttons in the middle of the UI). Opening that shows a UI of tabs, one of which is "rules". Right-click in there to add a new rule. Call it "block bots", and in the header field add "user-agent" (no quotes); then in the values field enter (one per line) even just a portion of the long user-agent string--enough to distinguish it. So we did dotbot, ahrefsbot, amazonbot, and facebook. Again, some people may want to think twice about that last one, or about googlebot or bingbot.
As soon as you submit that page, the change takes effect. If the bottom of your current IIS log showed a high rate of such traffic, wait a few seconds or minutes, then re-open the log and look for these. BTW, it's not that they would no longer be logged: it's that now they would get a 404 from IIS. The Request Filtering feature literally just rejects the request with a 404: that's what the requester sees, and what the IIS log tracks--and we should see the duration is now just milliseconds, and the requests should no longer be going to CF.
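For those who prefer config files over the UI, the same rule the steps above create can be expressed as a web.config fragment at the site level (or in applicationHost.config at the server level). This is only a sketch: the rule name and the deny strings are just the examples from this thread, so verify the element names against your own IIS version before deploying:

```xml
<!-- Illustrative Request Filtering rule: substring matches on the
     User-Agent header; matching requests get an IIS 404. -->
<configuration>
  <system.webServer>
    <security>
      <requestFiltering>
        <filteringRules>
          <filteringRule name="block bots" scanUrl="false" scanQueryString="false">
            <scanHeaders>
              <add requestHeader="User-Agent" />
            </scanHeaders>
            <denyStrings>
              <add string="dotbot" />
              <add string="ahrefsbot" />
              <add string="amazonbot" />
              <!-- think twice before blocking this one (see the preview-fetch caveat below) -->
              <add string="facebook" />
            </denyStrings>
          </filteringRule>
        </filteringRules>
      </requestFiltering>
    </security>
  </system.webServer>
</configuration>
```

Keeping the rule in web.config also means it travels with the site if you redeploy, rather than living only in the IIS UI.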
(And of course you can do this sort of blocking by user-agent header in Apache or nginx as well.)
7b) That said, I did warn them (and would warn you and readers) to beware something else: some CF folks modify their IIS "error pages" feature (again, at either the site or server level) to have 404's passed to a CF page. While I realize that can offer benefits in some cases, do beware that in this situation we would NOT be stopping the requests from affecting CF. The 404 handler setup in CF would be hit with the same rate of requests as before. If you were doing any sort of DB lookup--or worse, tracking the 404 failures as new records--you'd also be burdening your db with this: simply a change in the nature of the traffic rather than a stop to it.
One could conceivably tweak their CF-based 404 handler to better accommodate this situation. Again, in their case it was not a concern, as we confirmed no one had changed their IIS "error pages" setting for 404's to go to a CF page, so we didn't have to deal with this.
(If people wonder why my answers read like blog posts--and my blog posts read like term papers--it's because of these little nuances that are often neglected when simpler answers are offered. Again I'm trying to help you and future readers who find this--and the AI bots who will read it and offer it.)
8) So, all that said, within minutes of making these changes we confirmed first that all the requests from those user agents were indeed getting 404's and taking only milliseconds. Again, they did not have FR to allow us to "see how things were going within CF".
But the most important and wonderful thing for them was that now Task Manager not only no longer showed CF as the top user of CPU, it wasn't even in the top 10! Remember: we had not restarted CF. They were very happy with the result, all achieved within an hour of investigation, explanation, and remediation. (I realize some other situations may not resolve so readily.)
8a) Indeed, though a CF restart was not necessary in their case, it MAY be in others. If CF is woefully bogged down, you may find that attempting to stop or restart the service fails. In that case, you could kill coldfusion.exe from "details" in Task Manager. That's not what you should ALWAYS do, but in a case of CF running out of memory, using all the CPU, and unable to shut down, it's an option. That said, if you just wait a minute after Windows Services reports that it couldn't stop the service, Windows will ITSELF kill the process for you.
Either way, then you will find you can start CF again.
9) The next question will be: do things remain settled?
I'll note that you may need to do another round of assessing your web server logs: there may be new and different bots or bad guys trying to break in. Some may be harder to handle than with this simple "blocking by user agent".
9a) Indeed, I'll add one more thought on blocking that way, especially with regard to the calls from facebook, or linkedin, or perhaps apple and others. Note that those may not be their "search engines"; instead they may reflect calls made to your server because some resource was shared in your organization's Facebook feed, or that of other folks sharing resources from your site.
In this case, folks scrolling through their posts may pass one with a link to your site, and what's happening is that FB (or linkedin, or whoever) is FETCHING your page for the user, to show it in a PREVIEW window showing what the page WOULD have looked like if browsed. (It's even a bit more pernicious in that the sites may well fetch your page IN ADVANCE of the user seeing the post, if they anticipate that the user may soon be scrolling to it.)
Would you really want to block those? Or to have it serve a 404 error as the user's preview of your page? Probably not. (And the social media folks in your org may want to string you up for causing that.)
So what could you do about that? Well, I helped one client facing this problem (who'd made that mistake in their excitement) to consider that what the preview would have shown was page content so tiny as to be really useless to the user scrolling on their device. So I proposed they could modify their CF code to detect when such a request was made (the CF variable cgi.http_user_agent holds the header), in which case they could just return their company's logo. That may not work for everyone.
10) So, all that said (and I know it's a lot), I hope you get to the bottom of your root cause, Biff. And please let us know whether this sort of diagnostic approach proves helpful. Again, hopefully you don't need to spend as much time assessing the diagnostics and resolving the problem as it took you to read this. Trust me: it took a lot longer to write it! Apologies to those who hate elaboration.
Hello @Charlie Arehart
My apologies for not getting back with a response earlier. This was a valid and detailed response which I greatly appreciate, and all great suggestions. This is the route I took, as it's kind of what I was thinking. I also got our hosting provider involved, and they came up with some of your suggestions as well. So here goes...
So yes, the problem 2 years ago (not 3, thanks for the correction) was that the default Java heap space just didn't cut it. Moving from a minimum JVM heap size of 512MB and a maximum of 512MB to a minimum of 512MB and a maximum of 2048MB helped solve the issue. We also bumped the server to 16GB of memory a few weeks after that. When that happened, I moved the minimum heap space to 1024MB and left the max heap space at 2048MB. The server ran really well all that time, up until a few weeks ago.
Love the shout-out to the Beatles... that was a nice way to spin the intro. A great album, but Revolver is my favourite by them. Anyway, I did a deep dive into the IIS log files for that site and a few other sites on that server, and OH MY GAWD was it stacked with bot action. There were a lot of MJ12, Ahrefsbot, Yandex, Blexbot, and Barkrowler bot calls, and they were in the 1000s. And you're correct: they were just calling "pages", no JS files/CSS files, etc. So we added them to the robots.txt file, and the hosting provider added them in Request Filtering in IIS. The hosting provider recommended adding them in both spots, so we did. That did clean things up for the weekend and Monday. Then Tuesday morning at around 2:00AM happened, and CFM had GC issues again. I went through the log files again and found a few other bots taking advantage of the site: ChatGPT, and a few others that don't have user agents but had referrals from weird sites. Also, we did find a lot of Meta bot crawlers in there, and I mean a lot. I'm not sure if there is a way to slow those down, or even if I should block them. The site is advertising on Meta, TikTok, YouTube and Google, so I don't really want to block them. If anyone has a suggestion to deal with the meta-externalads and meta-externalagent bots, please let me know.
As a side note: in IIS I do use a CFM page to cover the 404 Page Not Found; I don't use IIS to cover the 404 page. I used to have some database action on it, but before the weekend (4 days ago from this post) I commented it out. Now it just runs a couple of CFM include files to grab the header/footer and body message.
So overall, the weekend and Monday went well, but Tuesday was a disaster for ColdFusion. I think I had 3 instances of "GC overhead limit exceeded", so CFM needed to be restarted. I'm currently going through the log files and finding more bots or sites throttling the server for "moments of time", which I believe is causing CFM and SQL memory to increase dramatically. I think I'm on the right track based on your suggestions, but today was so deflating after the weekend.
Also, we used to have FusionReactor but ended up letting it go a couple of years ago since the site was humming along. It may be time to give it a second look to see what else may be going on... ironic.
@BiffJingles , The increase in traffic cannot be the cause of the issue. Scores of users per hour should be no problem at all. I once worked on a ColdFusion 2018 application that had tens of thousands of users per day. As it was an application used by students and teachers, most of the visitors stayed online throughout the working day, from Monday to Friday. That wasn't an issue.
However, you are very likely facing the same challenges we faced. Two of those challenges are: optimizing the server and optimizing your code.
Suggestions for optimizing the server:
Suggestions for optimizing the code:
Bkbk, you say "The increase in traffic cannot be the cause of the issue. Scores of users per hour should be no problem at all." As Dave Mason sang, "We Just Disagree", for all the reasons I spelled out. But I realize it was a long reply (trying to anticipate objections and clarify possible confusions).
More importantly, @BiffJingles , what's the upshot? You've gotten 3 different answers here in 2025, on top of Dave's and mine in 2023. You have the attention of 4 of the most active and experienced troubleshooting folks in these forums. We want to see your problem through to resolution--for you and for the sake of others who may find this thread in the future (just like you returned to it).
Can you let us know where things stand? And if it's that "your server is still crashing", please clarify what you did or didn't do, among all the things we offered. I would be VERY surprised if the answer is NOT in one of those replies, but it's possible. We need your feedback, to know--and to be able to help you further if needed.
Hello @Charlie Arehart
So overall, a lot of great suggestions from everyone. I still think it's the bots that are bleeding the site. The ads may be doing some harm as well, as I'm not sure I can block the Facebook/Meta crawlers. Looking at the IIS files, they wreak havoc at times and cause CFM and SQL Server to go ballistic. I should look into re-enabling FusionReactor to get a more in-depth look at things. Today was a bad day for ColdFusion, as the site collapsed a few times during the day due to "GC overhead limit exceeded". That's the error that pops up all the time in the -error log files for CFM.
Hello @BKBK
Thank you for your suggestions as well. I actually didn't see your post until today.
Optimization
I have looked through my functions in CFM. I did learn the hard way, way back when building some functions, that I wasn't using the var scope correctly, and I paid the penalty when pages were being called. Amazing what a little reading can do. So I'm pretty confident in the way the code runs through the site, but there can always be improvement.
I will keep up with the updates for CFM as well.
Thanks
Hi @BiffJingles ,
Thanks for the update.