Server "connection" object/variable when disconnecting from server side
Thank you very much for your swift reply! To me, you are much more famous than the character! Apart from Indy itself, I have read many useful answers you gave to other users, and they (hopefully) taught me a lot. Thank you for those as well! I am actually glad that I get to thank you, even in this way!

I don't know how to quote you directly in the text (I am not familiar with these forum functions), for which I apologize; I simply inserted quote tags and pasted your answers. Also, English is not my native tongue and I don't use it regularly for communication, so please excuse my inevitable mistakes and odd expressions.


Quote:Port exhaustion would not affect the server, only clients.  And if clients are running out of ports, they are making too many connections in a short amount of time, so you should be reusing connections instead of dropping them.


I'm trying to build a system that will comprise quite a lot of servers with different roles, all communicating with each other; reusing connections would mean too many open TCP connections across that network, and I am trying to keep it scalable. I also considered UDP for communication within the network(s), and if it becomes necessary I will move to that model, since packets are rarely lost locally. Until then, I am (most likely clumsily) trying to get as much freedom and reliability out of TCP as I can. I found that server-side disconnection avoids port clogging on the other servers, which also act as clients for many other servers in this system, and this way a thread does not have to wait on a particular connection for an answer before it can be used. I am also avoiding tunnelling the client requests through a single shared connection with an asynchronous receive mechanism; I would rather switch to UDP in that case, as it is almost the same programming effort.
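To make the trade-off concrete, here is a minimal sketch in plain Python sockets (not Indy/Delphi; the echo server and message contents are my own invention) of the one-request-per-connection pattern I am using. Each request opens a fresh connection, so each request consumes a fresh ephemeral port that lingers in TIME_WAIT after the close; that is what puts pressure on the client side's port range under heavy load.

```python
# Sketch only: one-request-per-connection, with the SERVER closing each
# connection after replying (as in the design discussed in this thread).
import socket
import threading

def serve_once_per_conn(server_sock, n):
    # Accept n clients; echo one request each, then the server disconnects.
    for _ in range(n):
        conn, _addr = server_sock.accept()
        with conn:
            data = conn.recv(1024)
            conn.sendall(b"reply:" + data)

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
srv.listen()
port = srv.getsockname()[1]
t = threading.Thread(target=serve_once_per_conn, args=(srv, 3))
t.start()

def request_on_new_conn(payload):
    # One request per connection: each call burns a fresh ephemeral port.
    with socket.create_connection(("127.0.0.1", port)) as s:
        s.sendall(payload)
        return s.recv(1024)

replies = [request_on_new_conn(b"req%d" % i) for i in range(3)]
t.join()
srv.close()
print(replies)
```

A reused connection would instead loop the sendall/recv pair inside a single `create_connection` block, which is the alternative suggested above when port exhaustion bites.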


Quote:That is a very usual design if you are not sending multiple requests per connection.  In a 1-request-per-connection scenario, can you include the disconnect request in the initial data request (like HTTP does), or just disconnect blindly without waiting for the client to request it?


I use only one request per connection. Even if, internally, from the application's point of view, both the request and the answer are complex, they are sent and received as a single request and a single answer.
I have tried many scenarios, using many threads simultaneously, and also many configurations. This is what I have found out:
1. Apart from the port-exhaustion problem, if the client disconnects, the operation is at least 10 times faster than if the server disconnects, in any of the scenarios below. But because of port exhaustion, the client never disconnects in my application; it never performs the disconnection itself, it only asks for it.
2. If the server blindly disconnects, in most scenarios this is 4-10 times slower than if the server waits for the client to ask for the disconnection (scenario no. 3), which makes scenario no. 2 about 100 times slower than scenario no. 1. It is also prone to more exceptions, as the disconnect may interrupt the data transmission, and when I read it (ReadBytes) the fact that the connection is no longer available can surface in many different ways. If the server and the client are on the same machine (or even part of the same application), a single client thread can run at most 3 requests per second (on a 4.4 GHz Intel processor with 16 GB of RAM). If they are on different machines, it depends: over a wireless connection the number is about 12, and over a wired one it is around 8 (less). If the server and client are on different virtual machines, the latter figure drops by 30%.
3. If the server waits for a request from the client before disconnecting, this step eliminates the transmission errors but adds some extra traffic. Even so, the speed is 4-10 times better than in case no. 2. The only scenario in which this is slower is a wireless connection on either the client or the server side, but that is not important here. So this is the model I use.

So if I include the disconnect request in the initial request, as you suggested, we get case no. 2. To speed things up by an order of magnitude, I have to send a separate request to the server (in which the client asks for the disconnection), subsequent to the first request for data; after receiving it, the server disconnects without replying in any way.
So the fastest and most reliable sequence I have found so far is: the client connects, the client sends the data request, the client receives the reply, the client sends a disconnect request, and the server performs the actual disconnection.
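The sequence above can be sketched end to end in plain Python sockets (not Indy; the framing and the `b"DISC"` marker are assumptions purely for illustration): the client sends one data request, reads the reply, then sends a disconnect request, and the server closes the connection without answering it.

```python
# Sketch of: connect -> data request -> reply -> disconnect request,
# with the SERVER performing the actual close, no reply to the request.
import socket
import threading

def server_loop(srv):
    conn, _addr = srv.accept()
    with conn:
        request = conn.recv(1024)           # the single data request
        conn.sendall(b"answer:" + request)  # the single reply
        conn.recv(1024)                     # the disconnect request (b"DISC")
        # leaving the `with` block: the server performs the disconnection

srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen()
port = srv.getsockname()[1]
t = threading.Thread(target=server_loop, args=(srv,))
t.start()

c = socket.create_connection(("127.0.0.1", port))
c.sendall(b"query")
reply = c.recv(1024)
c.sendall(b"DISC")           # ask the server to disconnect; no reply expected
peer_close = c.recv(1024)    # empty bytes: the peer closed gracefully
c.close()
t.join()
srv.close()
print(reply, peer_close)
```

The final `recv` returning empty bytes is how the client observes the server's graceful close without any extra reply traffic.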

Quote:What if the client disconnects before the queued request is processed?  Do you remove the request from the queue?  Or let it fail when it tried to write back to a dead connection?

In my model, the client does not disconnect; it never performs the disconnection itself. Only the servers perform the disconnections. The client only asks the server to "disconnect this connection", and the server does so without sending any reply to the client.
Anyway, the requests are processed very quickly, so the server has no time to notice a broken connection. But in that eventuality, the send will fail and the request should simply be transmitted once more. Internally, the servers may cache that request's result and spare themselves the time to process it again, but that is not important from the connection's point of view.
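The "transmit once more on failure" idea is just a bounded retry around the send; a minimal sketch (plain Python; `max_attempts` and the stand-in `flaky_send` transport are my own assumptions, not part of the real system):

```python
def send_with_retry(send, request, max_attempts=3):
    # Resend the request when the connection turns out to be dead;
    # give up and re-raise after the last attempt.
    for attempt in range(1, max_attempts + 1):
        try:
            return send(request)
        except ConnectionError:
            if attempt == max_attempts:
                raise

calls = {"n": 0}

def flaky_send(req):
    # Stand-in transport that fails twice, then succeeds.
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("dead connection")
    return "ok:" + req

result = send_with_retry(flaky_send, "query")
print(result)  # succeeds on the third attempt
```

With server-side result caching, as mentioned above, the retried request can be answered from the cache instead of being reprocessed.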

Quote:Indy does not reuse Connection objects.  A new Connection object is created before a client is accepted, and is then destroyed after that client disconnects.  Most likely, you are storing a pointer to a Connection object that has been destroyed before you are able to access it, so that memory address is invalid, or worse has been reused for a new object elsewhere (a different Connection, or something else completely unrelated), either way causing undefined behavior in your code.

This is interesting, because what I see is this: I pick up the connection variable in the OnExecute event and store it in the queue list along with a tag and the peer coordinates; when it is retrieved milliseconds later, it no longer has the same attributes (it may point to a different IP and port, or be invalid). I could not find any error in my code, and the connection variable is untouched. But I will dig deeper and report the results. I will strip this part down and run further tests. Thank you again.

Quote:Don't store a pointer to the Connection object.  Store a pointer to the Context object instead, and then validate that object is still present in the server's Contexts list before using its Connection.  Alternatively, store per-Context identification info (client ID, etc) and then search the Contexts list for that ID when needed.

This may be slow, as I would be locking the list many times while searching for a particular connection.
I also had the same inconsistency problem when using/storing the Context object itself: the Connection it pointed to had the same problems, and I could not find their cause in my code, but I will try harder. The AContext.Connection was different. I am talking about 5% of cases under fast processing, with about 100-2900 client threads and at least 32-64 queue-processing threads.

Quote:Either way, you have a race condition, if the client disconnects while you are using the Connection.  Unless you keep the server's Contexts list locked while sending responses, which I don't suggest. Worst case, you may have to just store the underlying SOCKET itself and write to it directly, and just let the OS fail if the SOCKET is closed.  That will work on Windows, at least, where SOCKETs are unique kernel objects.  Not so much on Posix systems, where sockets are file descriptors that can be reused.

The clients never disconnect, though a connection may drop due to network conditions. I do not intend to keep that list locked; as I have read in many of your other posts, it slows the server down, and that is logical. I have to build something that works both on Windows and on POSIX, so I want to avoid working at the socket level as much as possible.
I have tested only on Windows so far, on quite fast Intel processors and networks.

I will be back with details. I will try to strip down and isolate the queue, the server, and the clients to see what I did wrong, and hopefully help others with this.

Thank you, again!


Messages In This Thread
RE: Server "connection" object/variable when disconnecting from server side - by noname007 - 10-30-2020, 09:28 AM


