Wednesday, June 27, 2018

C++ Sockets and stress testing

I've written a very simple socket server in C++ (MinGW) using the usual calls:

socket( PF_INET, SOCK_STREAM, 0 )...
setsockopt( s, SOL_SOCKET, SO_REUSEADDR, (const char*) &OptVal, sizeof( OptVal ) )...
bind( s, ( struct sockaddr * ) &ServerAddress, sizeof( ServerAddress ) )...
listen( s, 10 )...
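
For completeness, the whole setup boils down to something like this. This is only a minimal sketch: the function name, the port (5000), and the omitted error handling are placeholders, and on MinGW/Windows the Winsock DLL has to be initialized with WSAStartup first (build with -lws2_32):

#include <winsock2.h>

// Minimal sketch of the listening-socket setup (port 5000 is a placeholder).
SOCKET CreateListeningSocket()
{
    WSADATA WsaData;
    WSAStartup( MAKEWORD( 2, 2 ), &WsaData );   // required once per process on Windows/MinGW

    SOCKET s = socket( PF_INET, SOCK_STREAM, 0 );

    int OptVal = 1;                             // Winsock expects a char* for the option value
    setsockopt( s, SOL_SOCKET, SO_REUSEADDR, (const char*) &OptVal, sizeof( OptVal ) );

    struct sockaddr_in ServerAddress = {};
    ServerAddress.sin_family      = AF_INET;
    ServerAddress.sin_addr.s_addr = htonl( INADDR_ANY );
    ServerAddress.sin_port        = htons( 5000 );

    bind( s, (struct sockaddr*) &ServerAddress, sizeof( ServerAddress ) );
    listen( s, 10 );                            // backlog of 10 pending connections
    return s;
}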

Multiple client connections are handled with

select( s, &FileDescriptorClient, NULL, NULL, &tv )...
accept( Server->GetSocketHandle(), (struct sockaddr*) &ClientAddress, &Length )...
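
Roughly, one iteration of the server loop looks like the sketch below. The names (ServeOnce, ListenSocket, Clients) and the 50 ms timeout are my placeholders; Clients stands for the std::vector of connections mentioned further down. Two Winsock details worth noting: the first argument to select() is ignored on Windows, and the fd_set has to be rebuilt with the listener and every client before each select() call:

#include <winsock2.h>
#include <vector>

// Sketch: one iteration of the select/accept loop.
void ServeOnce( SOCKET ListenSocket, std::vector<SOCKET>& Clients )
{
    fd_set ReadSet;
    FD_ZERO( &ReadSet );
    FD_SET( ListenSocket, &ReadSet );           // watch the listener for new connections
    for ( SOCKET c : Clients )                  // ...and every accepted client for data
        FD_SET( c, &ReadSet );

    struct timeval tv = { 0, 50000 };           // 50 ms timeout (placeholder)
    select( 0, &ReadSet, NULL, NULL, &tv );     // first argument is ignored on Windows

    if ( FD_ISSET( ListenSocket, &ReadSet ) )
    {
        struct sockaddr_in ClientAddress;
        int Length = sizeof( ClientAddress );
        SOCKET NewClient = accept( ListenSocket, (struct sockaddr*) &ClientAddress, &Length );
        Clients.push_back( NewClient );         // store the new connection
    }
}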

Everything looked very good and pretty... until I decided to stress test my server.

My first test was a very simple client that did only one thing: connect and disconnect in an endless loop, as fast as possible. Although this test was extremely simple, it failed immediately.

It wasn't too big a surprise to me that the server would choke on so many rapidly toggling connections, so I added a Sleep(5) (milliseconds) in the client before each connect and disconnect, and everything was OK. For the moment.
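
For reference, the stress client is essentially the following sketch (the loopback address and port are placeholders; the commented-out Sleep(5) is the throttle that makes the problem disappear):

#include <winsock2.h>
#include <windows.h>

// Hypothetical stress client: connect/disconnect as fast as possible.
int main()
{
    WSADATA WsaData;
    WSAStartup( MAKEWORD( 2, 2 ), &WsaData );

    struct sockaddr_in ServerAddress = {};
    ServerAddress.sin_family      = AF_INET;
    ServerAddress.sin_addr.s_addr = inet_addr( "127.0.0.1" );  // placeholder address
    ServerAddress.sin_port        = htons( 5000 );             // placeholder port

    for ( ;; )                                  // endless connect/disconnect loop
    {
        // Sleep( 5 );                          // the throttle that makes everything "work"
        SOCKET s = socket( PF_INET, SOCK_STREAM, 0 );
        connect( s, (struct sockaddr*) &ServerAddress, sizeof( ServerAddress ) );
        closesocket( s );
    }
}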

My questions are:

  • How do I handle these reconnects correctly?
  • And what is the proper way to stress test a socket server application?

Right now the procedure is as follows:

  1. client: connects to server using connect(...)
  2. server: the new connection is recognized by select(...) and accept(...). Every new connection is stored in a std::vector.
  3. client: disconnects from server using closesocket(...) (MinGW...)
  4. server: recv(...) reads 0 bytes, which means the client has disconnected from the server (see the sketch after this list)
  5. server: performs a closesocket(...) and removes the connection from the std::vector
  6. goto 1
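
Sketched out, steps 4 and 5 look roughly like this on the server side. Again the names are mine: Clients is the std::vector from step 2, and ReadSet is the fd_set that select(...) filled in:

#include <winsock2.h>
#include <vector>

// Sketch of steps 4 and 5: detect disconnects and drop dead connections.
void HandleClients( std::vector<SOCKET>& Clients, fd_set& ReadSet )
{
    for ( size_t i = 0; i < Clients.size(); )
    {
        if ( FD_ISSET( Clients[ i ], &ReadSet ) )
        {
            char Buffer[ 512 ];
            int Received = recv( Clients[ i ], Buffer, sizeof( Buffer ), 0 );
            if ( Received <= 0 )                        // 0 = orderly disconnect, SOCKET_ERROR = abort
            {
                closesocket( Clients[ i ] );            // step 5
                Clients.erase( Clients.begin() + i );   // remove from the vector
                continue;                               // don't advance i after erase
            }
            // ... otherwise process Received bytes of data ...
        }
        ++i;
    }
}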

As already mentioned: this only works when I throttle the client with sleeps. As soon as I reduce the sleep times, the server starts missing step 4 (the disconnects) and stockpiles open connections over time.

What am I missing?
