Tuesday, 22 January 2019

How to test or simulate timeout errors using file_get_contents to a service that responds quickly

I am using PHP in an all-day-running shell script launched daily from cron (Ubuntu 16.04). I use it to periodically fetch thumbnail images from a web camera and write them to disk. Sometimes (sporadically and unpredictably) it times out. I want to force this timeout to happen so I can test my mitigation approach.

I've tried setting the timeout in an options array passed to stream_context_create(), e.g. $stream_opts['http']['timeout'] = 3, then $context = stream_context_create($stream_opts), and passing that context when fetching the thumbnail:

$thm = file_get_contents($thm_uri, FALSE, $context, 0, $thm_max_size);
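Put together, the approach looks roughly like this (a sketch of what I described above; $thm_uri and $thm_max_size are defined elsewhere in my script, and the FALSE-check is how I detect a failed fetch):

```php
<?php
// Sketch: fetch a thumbnail with a 3 s stream timeout.
// $thm_uri and $thm_max_size are assumed to be defined elsewhere.
$stream_opts = [
    'http' => [
        'timeout' => 3, // seconds before the read is abandoned
    ],
];
$context = stream_context_create($stream_opts);

// false: do not search include_path; 0: start reading at offset 0.
$thm = file_get_contents($thm_uri, false, $context, 0, $thm_max_size);

if ($thm === false) {
    // file_get_contents() returns FALSE on timeout or other failure.
    error_log('thumbnail fetch failed for ' . $thm_uri);
}
```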

I've also added ini_set("default_socket_timeout", 5) at the top of my script, as suggested in an answer to a similar Stack Overflow question (which I can't find right now).
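That is, at the top of the script (note that ini values are strings, and this setting applies to any network stream opened without an explicit per-context timeout):

```php
<?php
// Fallback socket timeout for streams without their own 'timeout'
// context option; 5 seconds here.
ini_set('default_socket_timeout', '5');
```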

I'm testing it by causing a network issue: unplugging a LAN cable in the path between the image-fetching computer and the web cam. For example, if the file_get_contents() call is due at the :00 second of each minute, I unplug the LAN cable from about 5 s before that until about 55 s after.

Indeed, the webcam thumbnail does not show up on disk at the expected time (:01 or :02 s after the minute). However, shortly after I reconnect the LAN cable, the now-overdue thumbnail shows up on disk, and the thumbnail for the next minute quickly shows up too.

Why does it keep trying for so long? Both the stream timeout (3 s) and the socket timeout (5 s) should be well expired by the time 50+ s of LAN outage has elapsed. Why does the file_get_contents() still succeed? :-(
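(For anyone else wanting to reproduce a read timeout without unplugging cables: a hypothetical, self-contained sketch, not part of my script, is to listen on a local port but never send a response, so the HTTP read should fail once the 'timeout' context option expires.)

```php
<?php
// Listen on an ephemeral loopback port but never answer requests;
// the client connects, sends its GET, then hits the read timeout.
$server = stream_socket_server('tcp://127.0.0.1:0', $errno, $errstr);
$addr = stream_socket_get_name($server, false); // e.g. "127.0.0.1:54321"

$context = stream_context_create(['http' => ['timeout' => 1]]);
$start = microtime(true);
$thm = @file_get_contents("http://$addr/", false, $context);

var_dump($thm);                      // bool(false)
var_dump(microtime(true) - $start);  // roughly the 1 s timeout
```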

P.S. In case it matters, the LAN cable is at a switch-to-switch section of the LAN, so I don't think the DHCP clients on the web cam or the fetching computer should notice a link outage and re-acquire their addresses.
