Question

CF 2023 Thread-27 IOCP Handler Consuming Excessive CPU - Bug Report

  • October 2, 2025
  • 4 replies
  • 845 views

Each time we restart the instance, even with no user traffic at all, CPU will eventually jump to 24-25% and stay there.

In some cases this doubles, making the site on that instance slow and unresponsive.

  1. Problem Statement
  • ColdFusion 2023 consuming 20-24% CPU constantly with zero user activity
  • Issue started late September 2025, no configuration changes
  • Restart temporarily fixes it, CPU climbs back within hours
  • Server: Windows Server, CF 2023, Java 17.0.6, 4 cores, 6GB heap
  2. Thread Dump Evidence (CRITICAL)

"Thread-27" #95 daemon prio=5 os_prio=0 cpu=2798078.13ms

State: RUNNABLE

at sun.nio.ch.Iocp.getQueuedCompletionStatus(Native Method)

at sun.nio.ch.Iocp$EventHandlerTask.run(Iocp.java:323)

This thread has consumed nearly 2.8 million milliseconds (about 47 minutes) of CPU time in IOCP polling.

  3. Network Evidence: The netstat output shows CLOSE_WAIT connections to Akamai CDN that never close, accumulating over time.
  4. What I've Ruled Out
  • GC is healthy (gc.log shows normal behavior, 5-15ms pauses)
  • No active requests (PMT shows zero activity)
  • No application errors (logs clean)
  • Removed CF WebSocket configuration (helped reduce threads from 270 to 90, but CPU issue persists)
  • All scheduled tasks complete successfully
  • No Windows updates since October 2024.
  5. HTTP Log: Shows that all application-level HTTP requests complete successfully with proper timeouts.
  6. My Analysis: Thread-27 is CF's Windows I/O Completion Port (IOCP) handler thread. It is stuck in a polling loop instead of blocking, consuming CPU constantly. This appears to be a bug in how CF 2023's NIO implementation interacts with Windows IOCP on Java 17. The thread dump shows three similar threads, all consuming CPU cycles, but I don't know what is causing these stuck I/O threads.
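To quantify the CLOSE_WAIT buildup over time, a small pipeline like the following can tally leaked connections by remote endpoint. The addresses and PID in the sample file are fabricated placeholders; on the Windows server you would feed in the live `netstat -ano` output instead:

```shell
# Tally CLOSE_WAIT connections by remote endpoint from saved netstat output.
# The sample file below is illustrative; on the server you would capture
#   netstat -ano > netstat_sample.txt
cat <<'EOF' > netstat_sample.txt
TCP 10.0.0.5:49731 23.205.106.12:443 CLOSE_WAIT 4312
TCP 10.0.0.5:49732 23.205.106.12:443 CLOSE_WAIT 4312
TCP 10.0.0.5:49740 23.205.106.33:443 ESTABLISHED 4312
EOF
# Field 3 is the remote address; count CLOSE_WAIT rows per remote endpoint.
grep CLOSE_WAIT netstat_sample.txt | awk '{print $3}' | sort | uniq -c | sort -rn
```

Running this periodically after a restart would show whether the CLOSE_WAIT count grows in step with the CPU climb.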

Any help greatly appreciated.

Thanks

Forrest

    4 replies

    Paolo Olocco
    Participating Frequently
    October 5, 2025

    I'm not familiar with Akamai. Throughout your application, how do you make HTTP calls: with new http() or cfhttp()?

    Participating Frequently
    October 6, 2025

    It's an old codebase, so it mostly uses cfhttp tags. Do you think the cfhttp calls may not be closing or timing out? We call a lot of external APIs. Thanks
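If cfhttp calls without timeouts are the suspicion, one quick way to audit an old codebase is to grep for cfhttp tags that lack a timeout attribute. The directory and sample file below are hypothetical stand-ins; point the grep at your real source tree (and note it only catches single-line tags):

```shell
# Create a tiny sample .cfm file to demonstrate; in practice, run the grep
# against your actual source tree (webroot/ is a placeholder path).
mkdir -p webroot
cat <<'EOF' > webroot/api_calls.cfm
<cfhttp url="https://example.com/feed" method="get">
<cfhttp url="https://example.com/feed" method="get" timeout="10">
EOF
# List cfhttp calls, then filter out the ones that DO specify a timeout:
grep -rniE '<cfhttp ' webroot | grep -viE 'timeout="?[0-9]'
```

Only the first tag (no timeout) survives the filter, giving a worklist of calls to review.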

    BKBK
    Community Expert
    October 6, 2025

    @forrest_3294 , Did you see my last suggestion? 
    I repeat: 

    • it’s the JVM that’s invoking the Windows API GetQueuedCompletionStatus() in a loop.


    Hence, you will likely solve the problem by installing Java SE 17.0.16, and running ColdFusion 2023 on it. This solution is less risky than updating ColdFusion, which you're hesitant to do. One added bonus is that you need to update the Java version anyway (for obvious security reasons).

     

    Should the Java update cause any problems, which is highly unlikely, it's as simple as ABC to revert to the original state. All you then have to do is revert to the original java.home setting in /[CF_INSTANCE]/bin/jvm.config.
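For reference, the swap amounts to changing one line in jvm.config. The paths below are hypothetical examples, not actual install locations:

```
# Excerpt of /[CF_INSTANCE]/bin/jvm.config (paths are illustrative)
# Point CF at the newly installed JDK:
java.home=C:/Java/jdk-17.0.16

# To roll back, restore the original line and restart the instance, e.g.:
# java.home=C:/ColdFusion2023/jre
```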

    Participating Frequently
    October 3, 2025

    This morning it is using over 60% with no active site users at all, almost immediately after a restart of the instance. I have dealt with sites with heavy traffic and slow DB queries, etc., but with zero traffic, what would cause the ColdFusion process to continually use 60% CPU? Normally, at least for me, if CPU locks at a certain level there is an issue with heap or CF running out of memory. That is not the case here. Something is running and I can't seem to find what it is. Some days it will run all day with no CPU spike. Others it will spike at 24%, then later it doubles almost exactly. So I suspect there is one particular action or script that triggers it, but I sure can't find it.

     

    I have checked cfthread and jdbc activity during spikes and PMT again shows nothing out of the ordinary.

    Participating Frequently
    October 3, 2025

    No, we have not updated CF 2023. The last time we tried updating via the CF Admin panel, it messed up all or most of the installed packages and we had to restore. Afraid to risk it since this is a production server.

    BKBK
    Community Expert
    October 3, 2025

    What is your present ColdFusion update level? It is likely that the issue you're facing will go away when you update ColdFusion.

     

    There are ways to mitigate the risk of updating ColdFusion 2023 on the production server:

    1.  Best practice requires that you have a test environment. It should run ColdFusion 2023 on the same operating system as the production environment, and should be set up similarly.
    2.  As you already know, a ColdFusion update may be uninstalled, reverting to a previous update.

     

    So you just have to install Update 16 on the test environment -- and test it. Depending on how urgently you want to resolve this issue and on how much risk you're willing to take, you may:

    •  Test within a matter of days, or even hours, then install Update 16 on production. Choose to install when it's quietest on the server. Rely on the fact that you can uninstall the update.
    •  Test Update 16 over an extended period, exploring every possible eventuality. 
    •  Hire a ColdFusion consultant to assist you.
    BKBK
    Community Expert
    October 3, 2025

    I think that, at this point, the single most important question/suggestion is: Have you upgraded ColdFusion 2023 to the latest update level, namely, Update 16?

    Charlie Arehart
    Community Expert
    October 2, 2025

    It's reasonable to conclude you've found a bug, but there may instead be an environmental contributor that's unique to your setup. (FWIW, I've not heard of this, but perhaps Adobe or someone else has.)

     

    And that said, you've done a great job identifying some key diagnostics, but I'd press for more. 

     

    In addition to pmt tracking of requests, see its tracking of cfthread threads (created by cfthread within cfml). Sometimes those can be running amok, often unnoticed. 

     

    Along the same lines, have you checked the pmt's tracking of jdbc activity? One reason I ask is that such cfthread threads could be doing queries, which entail network I/O.

     

    And while you saw NO requests running at the time you looked, you may still want to confirm whether there was a spike in request activity, anytime between your cf restart and now. 

     

    Further, while you may not see requests running amok, it may not be their NUMBER/FREQUENCY but their NATURE. Sadly the pmt doesn't log every request (like FusionReactor does), but your web server logs would, of course. It can just be like looking for a needle in a haystack, since the web server logs track ALL requests regardless of type. (Note that cf can be configured to log each cf request, which at least limits the log to only those.) 
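One way to hunt for that needle in the web server logs is to tally hits per URI; a request that fires on a schedule often stands out by count. The log lines below are fabricated placeholders in a simplified W3C-style layout; adjust the field number to your actual log format:

```shell
# Fabricated sample log; in practice, point awk at your real web server log.
cat <<'EOF' > access_sample.log
2025-10-02 08:00:01 10.0.0.5 GET /api/sync.cfm 200
2025-10-02 08:00:02 10.0.0.5 GET /api/sync.cfm 200
2025-10-02 08:00:05 10.0.0.5 GET /index.cfm 200
EOF
# Field 5 is the URI stem in this sample layout; count hits per URI:
awk '{print $5}' access_sample.log | sort | uniq -c | sort -rn
```

Comparing these tallies for a quiet day versus a spike day can reveal the NATURE as well as the frequency of the traffic.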

     

    As for your thread dump, a few more thoughts. First, let's clarify that the "Thread-27" using the cpu is NOT related to the cfthread threads I asked about above. Those would be "cfthread-NN", with NN being a number.

     

    Further, a thread dump is a point-in-time resource. That tracking of total time (since cf came up) is rarely valuable, though it may well be useful in this one case. What would be MORE useful is to see thread use over time, and more specifically what Java objects are being used in threads over time. That's generically referred to as "profiling".

     

    And while the pmt can be told to profile a request or a cfthread thread, it can be told to profile ALL threads of ALL types. That can be useful, beyond what you found. This is something FusionReactor can do very easily, and JVM tools can do it as well (though for many folks, setting those up for a prod system is often too challenging).
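Short of a full profiler, one low-overhead approximation of "thread use over time" is to take two thread dumps a known interval apart (e.g. with `jcmd <pid> Thread.print`, whose headers include the same per-thread `cpu=` counters shown in the dump quoted earlier) and diff the counters. The dump headers below are fabricated samples in that same shape:

```shell
# Save two thread dumps a known interval apart; on the server this would be
#   jcmd <pid> Thread.print > dump1.txt ; sleep 30 ; jcmd <pid> Thread.print > dump2.txt
# The header lines below are fabricated samples for demonstration.
cat <<'EOF' > dump1.txt
"Thread-27" #95 daemon prio=5 os_prio=0 cpu=2798078.13ms elapsed=3100.00s
"cfthread-3" #120 daemon prio=5 os_prio=0 cpu=150.00ms elapsed=3000.00s
EOF
cat <<'EOF' > dump2.txt
"Thread-27" #95 daemon prio=5 os_prio=0 cpu=2828078.13ms elapsed=3130.00s
"cfthread-3" #120 daemon prio=5 os_prio=0 cpu=151.00ms elapsed=3030.00s
EOF
# Diff the per-thread cpu= counters: threads with a large delta are the ones
# actually burning CPU during the interval, regardless of their lifetime totals.
awk -F'"' '/^"/{
  name=$2; match($0, /cpu=[0-9.]+/); ms=substr($0, RSTART+4, RLENGTH-4)
  if (FILENAME=="dump1.txt") first[name]=ms
  else printf "%s +%.0fms\n", name, ms - first[name]
}' dump1.txt dump2.txt
```

In this sample, Thread-27 gains +30000ms over a 30-second window (one core fully pegged), while the cfthread barely moves; that delta, not the lifetime total, is what identifies the spinning thread.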

     

    Anyway, let us know if you learn more, or if you have thoughts on what I've offered or that others may. 

    /Charlie (troubleshooter, carehart.org)