What is the maximum capacity that an Indy TCP server can hold?
#1
On my machine each port can carry 1024 connections. When my Indy server has about 600 clients, which means 600 threads, everything is fine, no problems at all. But when it goes to 800+, the app becomes unresponsive and is unable to accept new connections.

What is the maximum number of connections that Indy can hold?
#2
If you have so many concurrent connections, you should be using a thread pool, not a thread for every connection.

Indy has no limit. The limit is imposed by RAM and sometimes by the OS (non-server OSes often have artificial TCP connection limits).
#3
(12-12-2018, 06:36 PM)kudzu Wrote: If you have so many concurrent connections, you should be using a thread pool, not a thread for every connection.

A thread pool won't help if there are hundreds/thousands of clients connected concurrently. Indy still uses a 1-thread-per-client threading model. Pooling only helps to cache and reuse threads that are sitting idle between connections, not to allow a single thread to service multiple clients at a time. Indy does not support that. For that, you would have to use platform APIs directly to utilize non-blocking/overlapped socket operations manually.
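For reference, wiring a thread pool scheduler into a TIdTCPServer looks roughly like this. This is a minimal sketch: the TEchoServer class name, the PoolSize of 50, and port 6000 are illustrative values I'm assuming, and the echo handler is placeholder logic.

Code:
uses
  IdContext, IdTCPServer, IdSchedulerOfThreadPool;

type
  TEchoServer = class
  private
    FServer: TIdTCPServer;
    procedure HandleClient(AContext: TIdContext);
  public
    constructor Create;
    destructor Destroy; override;
  end;

constructor TEchoServer.Create;
var
  Pool: TIdSchedulerOfThreadPool;
begin
  inherited Create;
  FServer := TIdTCPServer.Create(nil);
  // FServer owns the pool, so freeing the server frees the pool too.
  Pool := TIdSchedulerOfThreadPool.Create(FServer);
  Pool.PoolSize := 50;          // idle threads kept cached between connections (arbitrary)
  FServer.Scheduler := Pool;    // server now reuses pooled threads
  FServer.DefaultPort := 6000;  // arbitrary port for this sketch
  FServer.OnExecute := HandleClient;
  FServer.Active := True;
end;

destructor TEchoServer.Destroy;
begin
  FServer.Active := False;
  FServer.Free;
  inherited;
end;

procedure TEchoServer.HandleClient(AContext: TIdContext);
begin
  // Placeholder per-connection logic: echo one line back to the client.
  AContext.Connection.IOHandler.WriteLn(AContext.Connection.IOHandler.ReadLn);
end;

Keep in mind this only smooths out thread creation/destruction churn; each connected client still occupies a thread for as long as it stays connected.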

(12-12-2018, 06:36 PM)kudzu Wrote: Indy has no limit. The limit is imposed by RAM and sometimes by the OS (non-server OSes often have artificial TCP connection limits).

True. On the other hand, you may be able to squeeze in more threads concurrently if you lower your app's default thread stack size, for instance. Delphi's default size is 1MB per thread.
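For illustration, in a Delphi program the EXE's default stack reservation can be lowered with the $M directive; threads that are created with a stack size of 0 (the default) take their reservation from the EXE header instead of the usual 1MB. The 256KB figure below is an assumed value, not a recommendation; pick one large enough for your deepest call chains.

Code:
program IndyServerApp;  // hypothetical project name

// The $M directive records the stack sizes in the EXE header: the first
// number is the initially committed size, the second the total reservation.
{$M 16384, 262144}  // 16KB committed, 256KB reserved per thread (assumed values)

begin
  // ... usual server startup ...
end.

This mainly pays off in a 32-bit process, where roughly 2GB of user address space divided by 1MB reservations caps you near 2000 threads no matter how much RAM is installed.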

#4
Something else to consider if you are using Windows:

Tune the Windows socket configuration (refer to https://docs.microsoft.com/en-us/previou...echnet.10))
        One way is to increase the dynamic port range. The default maximum is 5000, and you can raise it up to 65534.
        HKLM\System\CurrentControlSet\Services\Tcpip\Parameters\MaxUserPort is the key to use.

The second thing you can do is reduce the time a connection spends in the TIME_WAIT state once it gets there.
        The default is 4 minutes, but you can set this as low as 30 seconds (30 is the minimum value).
        HKLM\System\CurrentControlSet\Services\Tcpip\Parameters\TCPTimedWaitDelay is the key to use.
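For convenience, both registry values above can be applied in one go with a .reg file along these lines. This is a sketch; the numbers match the values quoted above, and a reboot is typically needed before they take effect.

Code:
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
; Upper bound of the dynamic (ephemeral) port range: 65534 (0xFFFE)
"MaxUserPort"=dword:0000fffe
; TIME_WAIT duration in seconds: 30 (0x1E) instead of the 4-minute default
"TcpTimedWaitDelay"=dword:0000001e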


Altering these two Windows settings greatly assisted my situation, and they might be worth looking at.

-Allen
#5
(12-21-2018, 04:01 PM)bluewwol Wrote: Tune the Windows socket configuration (refer to https://docs.microsoft.com/en-us/previou...echnet.10))
        One way is to increase the dynamic port range. The default maximum is 5000, and you can raise it up to 65534.
        HKLM\System\CurrentControlSet\Services\Tcpip\Parameters\MaxUserPort is the key to use.

That only applies to ephemeral ports, which are not used on the server side; a server accepts every client on its single listening port, so it does not consume one port per connection. You would use this fix on the client side if it were making a lot of outbound connections and running out of available ports.

(12-21-2018, 04:01 PM)bluewwol Wrote: The second thing you can do is reduce the time a connection spends in the TIME_WAIT state once it gets there.
        The default is 4 minutes, but you can set this as low as 30 seconds (30 is the minimum value).
        HKLM\System\CurrentControlSet\Services\Tcpip\Parameters\TCPTimedWaitDelay is the key to use.

Note that on the server side, a socket connection goes into TIME_WAIT only if the server is the one closing the connection. If the client closes the connection, TIME_WAIT is not used. See the state diagram in the Winsock Programmer’s FAQ.

#6
And in Linux environments, is the performance better compared to Windows?

