Upen1
Inspiring
May 21, 2012
Question

Heap out of Memory Issue


Hi, I am using cfthread in my application, and the thread uses a Java library to download data from different servers. Generally we download 50-60 MB of data, or 50,000-60,000 images, in small batches by looping.

Most of the time I face an out-of-memory error for the heap.

Below are my JVM settings.

java.args=-Duser.timezone=America/Chicago -server -Xmx2048m -Xms2048m -Dsun.io.useCanonCaches=false -XX:MaxPermSize=256m -XX:PermSize=256m -XX:+UseParallelGC -Dsun.rmi.dgc.client.gcInterval=600000 -Dcoldfusion.fckupload=true -Dsun.rmi.dgc.server.gcInterval=600000 -Xbatch -Dcoldfusion.rootDir={application.home}/ -Djava.security.policy={application.home}/servers/cfusion/cfusion-ear/cfusion-war/WEB-INF/cfusion/lib/coldfusion.policy -Djava.security.auth.policy={application.home}/servers/cfusion/cfusion-ear/cfusion-war/WEB-INF/cfusion/lib/neo_jaas.policy

Can anyone please suggest what JVM-related changes I should make to optimize heap memory use?

Or, how can I detect whether there is some other issue (a memory leak) in my application?

This topic has been closed for replies.


Brainiac
May 21, 2012

Best to get an idea of what the CF JVM is doing. You can do that many ways: JVM logging; the JDK tools jconsole and jvisualvm; the CF Server Monitor (CF8, 9, 10); CF Server Manager (CF9, 10), both in a limited way; CF JRun metrics (CF7, 8, 9); CF Tomcat metrics (CF10); and perhaps FusionReactor and SeeFusion have some tools as well. In your case I think JVM logging would be best: analyse what is happening in the CF JVM, then, knowing what is occurring, apply a change and monitor again. How to enable JVM logging, and tools to help with reading and understanding the log, follow below.
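For a quick programmatic peek before setting up logging, the java.lang.management MXBeans (standard since Java 5, so available in any JVM that CF runs on) report heap usage and collector activity. A minimal plain-Java sketch (class name is mine, just for illustration):

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class HeapPeek {
    public static void main(String[] args) {
        // Current heap occupancy versus the -Xmx ceiling.
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = mem.getHeapMemoryUsage();
        System.out.printf("Heap used: %d MB of %d MB max%n",
                heap.getUsed() / (1024 * 1024), heap.getMax() / (1024 * 1024));

        // How often each collector has run and how long it has spent in total.
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("Collector %s: %d collections, %d ms total%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
    }
}
```

The same beans are what jconsole and jvisualvm read remotely, so numbers here will match what you see in those tools.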

Some Questions:

CF version (I suspect 9.0.n, but you do not say) and edition?

Java version that CF is using, e.g. 1.6.0_24?

RAM available?

Operating system: and are CF and Java 64-bit?

Probably no bearing, but Windows or Linux? IIS or Apache?

A sample of the log error message that shows the heap has a problem?

JVM logging:

Add these, without any carriage returns or line feeds, to your JVM args. Copy or back up your jvm.config before applying the change. CF needs a restart for the changes to apply.

-XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintHeapAtGC -verbose:gc -Xloggc:cfjvmGC.log

This creates a log file at ColdFusion\runtime\bin\cfjvmgc.log

(or in JRun4\bin\ in the case of a multiserver install).

Use the GCViewer tool to graphically examine the "cfjvmgc.log" contents:

http://www.tagtraum.com/gcviewer.html

I would not like to suggest a change without some details on the actual error and a look at the JVM logs.

Once you have some details, there are some things that come to mind:

- Setting the initial heap size the same as the maximum can lead to a fragmented heap, though it might be OK.

- It could be a non-heap memory area that is filling, e.g. Perm or the Code Cache.

- Set the New generation size (e.g. -Xmn184m) so the JVM does not make a poor guess at it.

- Garbage collecting every 10 minutes (your RMI gcInterval settings) is OK; you are at least trying to keep the heap evacuated for now.

- You could try a different GC routine other than UseParallelGC.

HTH, Carl.

Brainiac
May 23, 2012

Amendments and additions to the earlier post that apply to CF10.

The JVM log file "cfjvmgc.log" will be in ColdFusion10\cfusion\bin, or in ColdFusion10\<instance>\bin if you added a new instance via Enterprise Manager > Instance Manager.

This ServerStats extension could help resolve JVM heap issues: there is no need to restart CF10, as with JVM logging, or to attach JDK tools like jconsole, to get a look at memory heap and CPU usage. Ref:

http://blogs.coldfusion.com/post.cfm/cfb-extension-for-server-stats-using-websocket


Hope that’s helpful for readers, Carl.

Brainiac
May 23, 2012

While monitoring my application log, I found some error messages like:

"java.lang.OutOfMemoryError: GC overhead limit exceeded".

"java.lang.OutOfMemoryError: Java heap space at org.apache.xerces.dom.DeferredDocumentImpl.createChunk(Unknown Source)"

Is it due to the garbage collector?


Too much time is being spent in garbage collection. That could be because of the heap sizing (Xms/Xmx), non-heap sizing (PermSize/MaxPermSize), or the suitability of the garbage collector routine (UseParallelGC) for the workload. The warning can be disabled by adding the option -XX:-UseGCOverheadLimit to the JVM args; however, I would prefer to fix the problem, which will be causing some slow application response, rather than simply turn off the warning.

I do not have enough details to recommend which JVM arg setting to alter, since frequent GCs that are not releasing memory might be due to multiple issues. JVM logs, if enabled and their details analysed, could assist in finding a solution. If you suspect the matter is heap related, you could set the New generation space, which is part of the heap (heap = Old + New, where New = Eden + 2 Survivor spaces); the JVM args would then look like eg 1. If you suspect the Permanent generation is not big enough, see eg 2. If you suspect GC routine suitability, another set of JVM args again.

1)

java.args=-Duser.timezone=America/Chicago -server -Xmx2048m -Xms2048m -Xmn184m -Dsun.io.useCanonCaches=false ...etc

2)

java.args=-Duser.timezone=America/Chicago -server ...etc -XX:PermSize=256m -XX:MaxPermSize=512m ...etc
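As background on why GC can run hard yet free almost nothing: the collector can only reclaim objects that are no longer reachable. If the batch loop keeps every downloaded image referenced, for example in an array that lives for the whole request or in a long-lived scope, each GC cycle finds little to free and you get "GC overhead limit exceeded". A minimal plain-Java sketch of the pattern (class name and sizes are made up for illustration):

```java
import java.util.ArrayList;
import java.util.List;

public class LeakSketch {
    // A collection that is never cleared keeps every batch reachable,
    // so the collector runs often but reclaims little.
    static final List<byte[]> retained = new ArrayList<>();

    // Approximate heap in use, in MB.
    static long usedMb() {
        Runtime rt = Runtime.getRuntime();
        return (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
    }

    public static void main(String[] args) {
        long before = usedMb();
        for (int i = 0; i < 50; i++) {
            retained.add(new byte[1024 * 1024]); // stand-in for one downloaded image
        }
        System.out.println("Used before: " + before + " MB, after: " + usedMb() + " MB");
        retained.clear(); // dropping the references is what lets the heap be reclaimed
    }
}
```

In CFML terms, the fix is to let each batch go out of scope (or explicitly empty the holding variable) before starting the next batch, so the collector can actually evacuate the heap between batches.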

As I recall, there was a problem (leak) with UseParallelGC in Java 1.6.0_17 that was fixed in 1.6.0_21 (?) onwards, so it is perhaps no surprise you're getting better uptime after a Java update; though with the GC overhead limit being hit, you are not far from a full-heap problem.
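On your second error message: the stack trace ends in Xerces' DeferredDocumentImpl, the deferred DOM builder, which means a large XML document is being parsed into an in-memory DOM tree, and the whole tree must fit in the heap at once. If the downloads involve big XML feeds, a streaming parser keeps memory roughly flat regardless of document size. A sketch using the standard StAX API (in the JDK since Java 6; the tiny inline document just stands in for a large feed):

```java
import java.io.StringReader;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;

public class StreamCount {
    public static void main(String[] args) throws Exception {
        String xml = "<items><item/><item/><item/></items>"; // stand-in for a large feed
        XMLStreamReader r = XMLInputFactory.newInstance()
                .createXMLStreamReader(new StringReader(xml));
        int items = 0;
        // Walk events one at a time; only the current element is held in memory.
        while (r.hasNext()) {
            if (r.next() == XMLStreamConstants.START_ELEMENT
                    && "item".equals(r.getLocalName())) {
                items++;
            }
        }
        System.out.println("items = " + items); // prints items = 3
    }
}
```

Whether that applies depends on where the Xerces parse is coming from (your own code, the download library, or CF's XML functions), so treat it as a direction to investigate rather than a confirmed fix.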

You have an Enterprise licence. Are you able to get any useful diagnosis from running the CF Server Monitor?

Regards, Carl.