[03:58:13] TimStarling: hm.. so.. we don't actually use an async client, it all no-ops as it's specific to persistent
[03:58:39] rather we switch options on the main sync client, and the next sync command then waits for any previous async commands, is that right?
[03:58:49] I guess the trade-off is in 2x connection setup
[03:59:23] we switch options on the main sync client, but switching options turns out to close any open connection
[03:59:24] I'm looking at it from the angle of why we don't set OPT_NOREPLY for the async client in general, instead of only on one method and then switching back
[03:59:58] * Krinkle is in SF
[04:00:01] a future patch will remove acquireAsyncClient
[04:01:05] look at the constructor, there are two clients if the persistent option is true, which it isn't in production
[04:01:35] so there is only one client in production, with mode switching, and the current mode switching is causing connections to be closed
[04:01:56] I'm curious if there's any use case for supporting persistent clients. not just memc. it seems like an outdated idea that was once thought of for performance, but neither small scale nor large scale installs actually use it (idem for e.g. mysql conns and mw/wordpress etc.)
[04:02:56] Does it close connections such that it still pointlessly waits for the async commands to complete, or does it close them quickly?
[04:03:54] it will pointlessly wait
[04:05:08] I mean for OPT_NO_BLOCK etc., before my patch
[04:05:21] OPT_NOREPLY can be safely switched at any time
[04:05:47] really it is a strange design choice to make it a mode rather than a parameter, since it is simply a parameter to the underlying server commands
[04:07:14] the whole async implementation in MemcachedPeclBagOStuff master is wrong and will be replaced once I have cleared my to-do list
[04:07:40] Right, so you're thinking of using only the NOREPLY option on the formerly-async-client commands, and not the other async client options
[04:07:52] yes
[04:09:28] setOptions() does not replace existing options, right? Whatever we set before still gets inherited, I think.
[04:10:10] yes, setOptions() is just a loop of setOption()
[04:15:10] writing with OPT_NOREPLY can block once the TCP window fills up, but it is still a vast improvement over OPT_NO_BLOCK, which will just call poll() in that case anyway
[04:17:16] There's surprisingly little documentation about these options, at least not in the places I looked (php.net, the php-memcached-dev repo, libmemcached at https://awesomized.github.io/libmemcached/libmemcached ). The buffering seems to be purely client-side, i.e. don't send anything until we implicitly or explicitly close the connection or send a retrieval command, and then send it all at once.
[04:17:50] I'm guessing it's implied, though I couldn't find it right away, that in that mode the retrieval command will just loop through all the head-of-line storage commands until it finds the one response it cares about
[04:18:38] The no_block option seems to be separate from buffering, but I couldn't really find what it means. The libmemcached docs imply to me that no_block is obsolete and the default in libmemcached clients now, except for something called SO_LINGER that it sets.
[04:19:29] anyway, using only no-reply and a single connection seems to have only upsides and no downsides, and is much easier to reason about
[04:19:44] nice find :)
[04:21:34] awesomized is a fork, I don't think we're using it?
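(For illustration only: a minimal sketch of the single-client, NOREPLY-only approach discussed above, assuming the standard php-memcached API. The host/port and the wrapper function name are placeholders, not MediaWiki's actual BagOStuff code.)

```php
<?php
// One sync client; no second "async" client with OPT_NO_BLOCK / OPT_BUFFER_WRITES.
$client = new Memcached();
$client->addServer( '127.0.0.1', 11211 );

/**
 * Hypothetical fire-and-forget delete: with OPT_NOREPLY the server is told
 * not to send a response, so the client does not wait for one. Per the
 * discussion above, OPT_NOREPLY can be toggled at any time without the
 * connection being closed, unlike the buffered/no-block mode switches.
 */
function deleteNoReply( Memcached $client, string $key ): void {
    $client->setOption( Memcached::OPT_NOREPLY, true );
    $client->delete( $key );
    // Switch back so subsequent commands get their replies as usual.
    $client->setOption( Memcached::OPT_NOREPLY, false );
}

deleteNoReply( $client, 'example-key' );
```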
[04:22:18] I have been reading the old libmemcached.org source, which hasn't been changed for 8 years or so
[04:22:42] it matches what I have locally
[04:30:07] ack, I assumed for now that the options haven't changed that much. the github fork is easier to browse and has published docs
[14:43:41] Krinkle: AFAIK one big drawback of using persistent connections, in the MW/PHP context at least, is/was the difficulty of managing transaction state and other per-connection state in a shared-nothing environment, compared with other runtimes such as Java where it is possible to encapsulate that in a pool implementation
[14:44:19] I'm curious too whether there is a measurable benefit in the common case where the remote server (replica, cache) is in the local DC and "close" latency-wise
[14:46:07] that being said, in the non-shared-nothing case a connection pool can probably act as a rudimentary circuit breaker by effectively imposing a ceiling on the # of threads allowed to execute in parallel per box, causing clients to fail fast if the pool is exhausted
[16:21:14] Naming conflict here won't be confusing at all: https://deno.com/blog/fresh-is-stable
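(Referring back to the persistent-connection discussion at 14:43: a minimal sketch, assuming the standard php-memcached constructor. With a persistent_id, the underlying connection is reused across requests within a PHP worker, which is where the per-connection-state concerns in a shared-nothing runtime come from. Host/port and the pool name are placeholders.)

```php
<?php
// Per-request client: connection is set up and torn down with the request.
$plain = new Memcached();
$plain->addServer( '127.0.0.1', 11211 );

// Persistent client: the instance (and its connection, options, etc.) is
// reused by later requests in the same worker, so leftover per-connection
// state is inherited unless it is explicitly reset.
$persistent = new Memcached( 'shared-pool' );
if ( !count( $persistent->getServerList() ) ) {
    // Only add servers once; a reused instance already has its server list.
    $persistent->addServer( '127.0.0.1', 11211 );
}
```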