Help: Error 500.0 - Java Heap Space

Inspiring · July 29, 2019 · Answered · 1 reply · 841 views

Occasionally, we run into this error while running queries with 1 million+ records. We would like to make sure we have optimum settings in place to minimise the possibility of this happening. I'm a coder, used to working with a specialist server guy who took care of server-related issues. However, it looks like I finally need to get properly up to speed on all this, because I no longer have that luxury.

I've included some details of our setup below. Can someone suggest some basic first steps to making sure things are configured as well as possible?

Java Settings in CF Administrator:

Minimum JVM Heap Size: 2048 MB

Maximum JVM Heap Size: 3072 MB

JVM Arguments:

-server -Dsun.rmi.dgc.client.gcInterval=86400000 -Dsun.rmi.dgc.server.gcInterval=86400000 -XX:+UseParNewGC -XX:NewSize=96m -XX:MaxNewSize=256m -XX:SurvivorRatio=6 -XX:MaxPermSize=256m -XX:PermSize=256m -Dcoldfusion.home={application.home} -Dorg.eclipse.jetty.util.log.class=org.eclipse.jetty.util.log.JavaUtilLog -Duser.language=en -Dcoldfusion.rootDir={application.home} -Dcoldfusion.libPath={application.home}/lib -Dorg.apache.coyote.USE_CUSTOM_STATUS_MSG_IN_HEADER=true -Dcoldfusion.jsafe.defaultalgo=FIPS186Random

System Information:

Version: 11,0,05,303668
Tomcat Version: 7.0.68.0
Edition: Enterprise
Operating System: Windows Server 2012
OS Version: 6.2
Update Level: C:/ColdFusion11/cfusion/lib/updates/chf11000013.jar
Adobe Driver Version: 5.1.3 (Build 000094)
JVM Details
Java Version: 1.7.0_51
This topic has been closed for replies.

1 reply

Charlie Arehart
Community Expert
Correct answer
July 29, 2019

Well, 2g heap may well be too small for a query of a million records. Depends on the number of cols retrieved and their size. (And someone might first question if that is a wise/necessary query, but I'll assume for now that you have your reasons.)
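To see why the column count and field sizes matter so much, here is a rough back-of-envelope sketch. All the numbers in it are assumptions (20 columns, ~40 characters per field, ~48 bytes of per-String object overhead are illustrative, not measured from your query):

```java
// Back-of-envelope only: the real per-row overhead inside a CF query object is larger.
public class HeapEstimate {

    // rows x cols x (UTF-16 bytes per field + assumed per-String object overhead)
    static long estimateBytes(long rows, int cols, int avgCharsPerField) {
        long perField = 2L * avgCharsPerField + 48; // ~2 bytes/char plus rough header/array overhead
        return rows * cols * perField;
    }

    public static void main(String[] args) {
        // 1 million rows, 20 columns, ~40 characters per field (all hypothetical)
        long bytes = estimateBytes(1_000_000, 20, 40);
        System.out.printf("~%.0f MB%n", bytes / (1024.0 * 1024.0)); // prints ~2441 MB
    }
}
```

Even with those modest assumptions the result set alone lands around 2.4 GB, past a 2 GB heap before CF's own baseline usage is counted.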

So a first step to the answer you seek (what heap size is optimal) is to ask how much memory is on this *system*, and how much is typically free (outside of CF, I mean). Could you double or triple the CF heap max? Does that make the error go away? If so, you may be done.
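For the mechanics: the Administrator's "Minimum/Maximum JVM Heap Size" fields write the -Xms/-Xmx flags into cfusion/bin/jvm.config, so the change can be made in either place (not both). The numbers below are purely illustrative, and only make sense if the box actually has that much RAM free; keep the rest of your existing java.args line intact:

```
# cfusion/bin/jvm.config -- Admin "Minimum/Maximum JVM Heap Size" map to -Xms/-Xmx
java.args=-server -Xms2048m -Xmx6144m -XX:MaxPermSize=256m
```

Restart the ColdFusion service after editing the file by hand, and keep a backup copy: a typo in jvm.config can prevent CF from starting.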

Better would be to have something monitoring CF (not just heap use, but requests and much more), because I doubt your request in question is ALL that this CF processes, right? Other factors would influence what the "best" heap size is.
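Short of a full monitoring tool, even a crude periodic heap snapshot is informative. This is a minimal standalone sketch using the standard JMX memory bean, the same figures a real monitor would chart over time:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class HeapLogger {
    public static void main(String[] args) {
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        // used = live objects plus garbage not yet collected;
        // committed = memory currently reserved from the OS for the heap
        System.out.printf("heap used: %d MB, committed: %d MB%n",
                heap.getUsed() >> 20, heap.getCommitted() >> 20);
    }
}
```

The same two calls could be made from a scheduled CFML task and written to a log, which gives a rough picture of heap growth around the big query without installing anything.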

Also, FWIW, you have added many non-standard JVM args, which others may have suggested. You might consider removing them to see if that helps. I've seen it happen.

This is another benefit of monitoring: to watch how things go based on changes made. Since you're on cf11 enterprise, you do have the cf server monitor available.

That said, check whether it may well be *contributing* to the heap problem; yes, it can happen. Check the CF Admin "Monitoring Settings" page (in Enterprise only) to see if the monitor's "start memory monitoring" option is enabled. If it is, turn it off and see if that stops the errors.

Finally, another monitoring option is FusionReactor, especially BECAUSE many are scared off of the CF Enterprise Server Monitor. It has many advantages over the CFSM, and has a 14-day free trial, if you need better monitoring for this or other problems. And those on CF2018 also have Adobe's new Performance Monitoring Toolset (PMT).

But let us know if the other suggestions above get you going. And besides having back and forth here, there are "server guys" available to help by the hour or less, for such challenges. See my list of them at cf411.com/cftrouble.

/Charlie (troubleshooter, carehart.org)
Inspiring
July 29, 2019

Thanks Charlie, I actually hired you to work on a different system several years ago :-) Still as caring and helpful as ever, I see. Thanks so much for that input. It's given me a great place to start, and you're still first on my list of server guys should I need some hands-on help. Thanks again!

Paul

Inspiring
July 29, 2019

And yeah, I inherited this system and it is one mother of a query. I'm going to have to do something about it eventually.