[10:53:42] hello folks!
[10:54:58] I am checking model.py for articlequality, I am a little confused about how it works
[10:55:03] does it use the mediawiki api?
[11:38:15] I am asking because I don't see in model.py the same mwapi request workflow, and I am confused about how it retrieves the data
[11:38:18] * elukey lunch
[14:17:18] ok, self-answering - we currently pass {"article_text": "I am the text of a page. I have a word"} as a feature from the client, so there is no api call to fetch the wikitext or similar from the mw api
[14:17:22] this bit was not clear to me :)
[14:37:29] so it is weird that for enwiki-damaging I don't see traffic coming through the egress gateway
[14:38:11] not only istio metrics, even kserve ones
[14:38:49] it doesn't make sense
[14:40:54] yeah I was testing codfw, good job Luca
[14:40:57] * elukey cries in a corner
[14:42:09] everything works, updating codfw too :)
[14:49:37] ml-serve-codfw should be in sync!
[15:07:02] Morning all!
[15:07:11] Today I have a LOOONG survey to fill out
[15:14:11] morning!
[15:38:39] o/
[15:41:06] elukey: that is correct, the articlequality predictor takes `article_text`; the transformer will be the one that hits the mw api (or feature store)
[15:45:20] accraze: perfect, it should be configured (as we do now for editquality) with the egress gw's endpoint + host header, and then it should work :)
[15:45:39] I am currently checking https://www.envoyproxy.io/docs/envoy/latest/configuration/upstream/cluster_manager/cluster_circuit_breakers#config-cluster-manager-cluster-circuit-breakers
[15:46:00] seems really nice, I don't think we use anything like that in the other k8s clusters
[15:46:30] the idea is to apply back pressure on our side if the traffic towards the mw api gets too heavy
[15:46:37] whoa
[15:47:27] istio ingress/egress gateways are basically envoy proxies, so we have a lot of nice things to use :)
[15:49:14] TIL our new team member will get an Apple M1 laptop. Anyone see any major issues with that?
[15:49:51] chrisalbon: running revscoring was tough on osx, but it's doable with a vm or docker etc.
[15:50:11] err macOS
[15:50:35] but most of our work is either on ml-sandbox or other boxes so it should be fine
[15:50:36] I could never get it to run without a vm or Docker on macOS
[15:50:42] Yeah, that was my thought
[15:50:49] okay cool
[15:50:51] 10 cores!
[15:50:56] dang!
[15:51:42] the kserve stack will probably be able to run locally on that
[15:52:26] Apple M1 Max chip with 10-core CPU, 32-core GPU, and 16-core Neural Engine
[15:52:26] 64GB unified memory
[15:52:26] 1TB SSD storage
[15:52:26] 16-inch Liquid Retina XDR display
[15:52:26] Three Thunderbolt 4 ports, HDMI port, SDXC card slot, MagSafe 3 port
[15:52:26] 140W USB-C Power Adapter
[15:52:26] Backlit Magic Keyboard with Touch ID - US English
[15:52:27] Accessory Kit
[15:52:47] wait what is the neural engine?
[15:53:11] Apple has been including a specific neural chip in its latest machines
[15:53:26] to do more edge ML
[15:53:42] I've never used it though
[15:54:35] "enables up to 15x faster machine learning performance"
[15:54:37] waaat
[15:54:59] ok im kinda interested
[15:58:39] yeah, you can't use it for training (yet) BUT I think the idea is that Apple and developers can put complex neural networks into their desktop applications and websites, which Apple's neural chip will run locally
[15:58:47] rather than hogging the CPU inefficiently
[16:03:05] wow interesting, i do think we'll see some neat stuff as edge AI/ML matures
[16:05:44] it also opens up some interesting questions about privacy etc. too
[16:08:09] 64G of ram :D
[16:13:49] ^ might be more than all of our laptops combined lol
[16:14:17] the kserve stack will def be able to run locally
[18:57:18] i'll file a ticket for that pipeline issue we talked about during the tech meeting
[18:59:55] <3
[19:00:00] * elukey afk!
[19:33:55] 10Lift-Wing, 10Machine-Learning-Team (Active Tasks): Fix pipeline image publishing workflow - https://phabricator.wikimedia.org/T297823 (10ACraze)
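Editor's note: a minimal sketch of the client call elukey describes at [14:17:18] - the client ships the wikitext itself, so the articlequality predictor never calls the MediaWiki API. Only the payload shape comes from the log; the endpoint URL and model name below are assumptions for illustration.

```python
# Hypothetical client call: the payload carries the article text directly,
# exactly as described at [14:17:18]. The host and model path are made up.
import requests

payload = {"article_text": "I am the text of a page. I have a word"}
resp = requests.post(
    "https://inference.example.org/v1/models/enwiki-articlequality:predict",
    json=payload,
    timeout=10,
)
print(resp.json())
```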
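For the transformer/predictor split accraze describes at [15:41:06], here is a hedged sketch of what a KServe transformer that fetches the wikitext could look like. It assumes the client sends a `rev_id`, that recent kserve releases expose `kserve.Model` (older ones called it `KFModel`), and that the `mwapi` library is used; the class name and wiki host are illustrative.

```python
# Sketch: the transformer resolves rev_id -> wikitext via the MediaWiki
# API, then hands the predictor the {"article_text": ...} shape it expects.
import kserve
import mwapi


class ArticleQualityTransformer(kserve.Model):
    def __init__(self, name: str, predictor_host: str):
        super().__init__(name)
        self.predictor_host = predictor_host
        self.session = mwapi.Session(
            "https://en.wikipedia.org", user_agent="lift-wing-sketch"
        )

    def preprocess(self, inputs: dict) -> dict:
        # Fetch the revision content for the requested rev_id.
        rev_id = inputs["rev_id"]
        doc = self.session.get(
            action="query",
            prop="revisions",
            revids=[rev_id],
            rvprop=["content"],
            rvslots=["main"],
            formatversion=2,
        )
        page = doc["query"]["pages"][0]
        text = page["revisions"][0]["slots"]["main"]["content"]
        # The predictor only ever sees article_text, never the mw api.
        return {"article_text": text}
```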
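The egress-gateway wiring from [15:45:20] boils down to: point the HTTP client at the gateway's endpoint and set the Host header so istio can route the request to the right upstream. A sketch under those assumptions - the gateway address below is made up, not the real service name:

```python
# Hypothetical in-cluster call routed through the istio egress gateway.
# The Host header tells the gateway which external upstream we want.
import requests

EGRESS_GATEWAY = "http://istio-egressgateway.istio-system.svc.cluster.local:8080"

resp = requests.get(
    f"{EGRESS_GATEWAY}/w/api.php",
    params={"action": "query", "format": "json", "meta": "siteinfo"},
    headers={"Host": "en.wikipedia.org"},
    timeout=10,
)
print(resp.json()["query"]["general"]["sitename"])
```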
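On the circuit breakers discussed between [15:45:39] and [15:46:30]: in Istio, Envoy's cluster circuit breakers are surfaced through a DestinationRule's connectionPool and outlierDetection settings, which is one way to get the back-pressure behaviour elukey mentions. An illustrative sketch only - the host and every number here are assumptions, not a production config:

```yaml
# Hypothetical DestinationRule mapping onto Envoy's cluster circuit breakers.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: mwapi-circuit-breaker
spec:
  host: en.wikipedia.org
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100          # Envoy cluster max_connections
      http:
        http1MaxPendingRequests: 50  # max_pending_requests
        http2MaxRequests: 100        # max_requests
    outlierDetection:
      consecutive5xxErrors: 5
      interval: 30s
      baseEjectionTime: 60s
```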
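On the Neural Engine thread around [15:58:39]: applications typically reach the ANE by shipping a model in Core ML format and letting the OS schedule it across CPU, GPU, and Neural Engine. A hedged coremltools sketch; the API usage is from memory and may differ between versions, so treat it as illustrative rather than authoritative.

```python
# Hypothetical conversion of a tiny PyTorch model to Core ML, letting the
# runtime pick the best compute device (including the Neural Engine).
import coremltools as ct
import torch


class TinyNet(torch.nn.Module):
    def forward(self, x):
        return torch.relu(x).sum(dim=1)


traced = torch.jit.trace(TinyNet().eval(), torch.rand(1, 8))
mlmodel = ct.convert(
    traced,
    inputs=[ct.TensorType(shape=(1, 8))],
    compute_units=ct.ComputeUnit.ALL,  # CPU, GPU, or ANE as available
)
mlmodel.save("tiny.mlmodel")
```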