[00:12:47] @lucaswerkmeister - oh, that's encouraging. And sorry about the paste.
[00:21:49] from the way it formats the image name... maybe that unknown part is a date when the image was created, or a base image it came from... but just guessing
[02:15:51] so I got it to work and run.
[02:15:52] My next challenge is perhaps quota-related. My tool is a search API over a complete dump of Project Gutenberg. To set up the (Elasticsearch) index, I need to download a 10G archive, unpack it (to about 29G), and then ingest it into Elasticsearch.
[02:16:02] However, while unpacking, my webservice shell got killed with this message:
[02:16:03] 3m44s Warning Evicted pod/shell-1764813224 The node was low on resource: ephemeral-storage. Threshold quantity: 18934068771, available: 18268728Ki. Container shell-1764813224 was using 31600236Ki, request is 0, has larger consumption of ephemeral-storage.
[02:16:38] Where do I request an increase of this limit?
[02:16:39] (This ingestion will be very infrequent, probably no more than once every six months or so.)
[02:19:02] to clarify: after ingestion, I don't need any of those files, so the 39G can be recovered. Only the ES index would remain.
[07:29:43] <59029414> Stuck in debt?
[07:29:43] <59029414> Need cash?
[07:29:45] <59029414> Want more?
[07:29:46] <59029414> Can't afford what you want?
[07:29:48] <59029414> Write to @underdeadside
[07:29:49] <59029414> She has something to discuss.
[08:07:43] Beware, this is spam
[08:40:41] abartov: unsure if this applies to the Gutenberg dump, but if the files are easy to parse (e.g. one line per document) you can try to write an import script that works from standard input and converts every line to an Elasticsearch bulk import. This way you would not need to store the files locally, you would just pipe them: curl -s dump_url | my_import_script.py https://elastic_host:9243/my_index/_bulk
[14:38:06] I filed T411790 for the 'unknown' issue
[14:38:06] T411790: jobs-api lists running buildservice images as "unknown" - https://phabricator.wikimedia.org/T411790
[14:52:52] Hi folks,
[14:53:13] hoping someone can help my single braincell get my app running
[14:55:11] !ask
[14:55:12] Hi, how can we help you? Just ask your question.
[14:57:42] So my app is deployed, running webservice says "Your job is already running", logs look good, but the default holding page is still displayed. It's a Vue app built with Vite.
[14:58:02] not sure where to begin investigating
[14:59:43] which tool?
[15:00:00] https://centralnotice-banner-creator.toolforge.org/
[15:00:19] is it listening on the right port?
[15:00:56] is the source available?
[15:01:41] HouseOfM: doing `toolforge webservice logs` shows an error which is making the app crash
[15:02:49] Thank you, I don't know how I didn't see that. Sorry for wasting your time
[15:40:52] !log toolforge deleting toolsbeta-test-k8s-etcd-27 and replacing with a Bullseye node for cluster consistency T361237
[15:40:53] andrewbogott: Unknown project "toolforge"
[15:40:53] andrewbogott: Did you mean to say "tools.toolforge" instead?
[15:40:54] T361237: [infra] Upgrade Toolforge K8s etcd nodes to Bookworm - https://phabricator.wikimedia.org/T361237
[15:41:07] !log toolsbeta deleting toolsbeta-test-k8s-etcd-27 and replacing with a Bullseye node for cluster consistency T361237
[15:41:11] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Toolsbeta/SAL
[16:49:47] @abartov: you can tune how much of your RAM and CPU quota the `webservice shell` container gets with the `--cpu` and `--mem` CLI arguments. I would suggest starting with something like `--cpu 1 --mem 2G`. I think 6G is the max for a single container, but maybe it is 8G?
[16:50:26] `kubectl describe quota` is a way to see what is being used at the moment
[18:26:58] is this about memory, though? I interpreted 'ephemeral-storage' as being about disk space, somehow. (re @abartov: However, while unpacking, my webservice shell got killed with this message: 3m44s Warning Evicted pod/she...)
[18:37:15] @abartov: my apologies. I saw the general topic of a pod being killed for quota and didn't pay close attention to which quota. I don't remember ever hearing of a way to change the ephemeral-storage limits. You should however be able to access the slower NFS-backed storage for your large file needs. Try adding `--mount all` and then the `$TOOL_DATA_DIR` directory should be the tool's NFS home dir.
[18:37:16] https://wikitech.wikimedia.org/wiki/Help:Toolforge/Building_container_images#Using_NFS_shared_storage
[21:11:41] The 2025 Developer Satisfaction Survey is live. See https://w.wiki/GVdi for more info.
[22:17:04] thanks, bd808! I'll try that now.
[23:50:06] huh. Well, I was able to unzip the large archives onto $TOOL_DATA_DIR, but once I logged out from the buildservice shell (to downgrade my ES client gem, as Toolforge runs ES 7.x), I discovered the 29G of plaintext files were *gone*.
[23:50:29] and now I am reminded of that phrase *ephemeral* storage. Er, just how ephemeral is it?
[23:50:46] NFS isn't (re @abartov: and now I am reminded of that phrase *ephemeral* storage. Er, just how ephemeral is it?)
[23:51:02] I mean, I will now start over and hopefully get my data ingested while the files are still around, but I'd like to understand why those files seem to have evaporated.
[23:51:33] you could test with an empty file and an interactive shell instead of 29G? just `touch` and `ls`
[23:52:24] yeah, that works. huh. strange.
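
The streaming import suggested in the 08:40:41 message could look roughly like the sketch below. This is a minimal, unverified sketch, not the script abartov actually wrote: it assumes a dump with one document per line on stdin, and the index name, document shape (a single `text` field), batch size, and bulk URL are all placeholders to be replaced with the real dump's details.

```python
#!/usr/bin/env python3
"""Stream a line-per-document dump from stdin into an Elasticsearch index.

Usage (matching the suggestion above, names hypothetical):
  curl -s dump_url | ./my_import_script.py https://elastic_host:9243/my_index/_bulk
"""
import json
import sys
import urllib.request

BATCH = 500  # documents per bulk request; tune to taste


def flush(bulk_url, lines):
    """POST one NDJSON bulk body to Elasticsearch."""
    if not lines:
        return
    body = "\n".join(lines) + "\n"  # the _bulk API requires a trailing newline
    req = urllib.request.Request(
        bulk_url,
        data=body.encode("utf-8"),
        headers={"Content-Type": "application/x-ndjson"},
    )
    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)
        if result.get("errors"):
            print("bulk request reported item errors", file=sys.stderr)


def main():
    bulk_url = sys.argv[1]
    pending = []
    for line in sys.stdin:
        line = line.strip()
        if not line:
            continue
        # Each bulk item is an action line followed by a document line;
        # the index is already named in the URL, so the action is empty.
        pending.append(json.dumps({"index": {}}))
        pending.append(json.dumps({"text": line}))  # assumed document shape
        if len(pending) >= 2 * BATCH:
            flush(bulk_url, pending)
            pending = []
    flush(bulk_url, pending)


if __name__ == "__main__":
    main()
```

If the cluster requires authentication, an Authorization header would also need to be added to each request; the sketch omits that.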
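
For the NFS route suggested at 18:37:15, the unpacking step can be pointed straight at `$TOOL_DATA_DIR` so neither the 10G archive nor the ~29G of extracted text touches the container's ephemeral disk. A minimal sketch, assuming the shell was started with `--mount all` as described, and assuming (hypothetically) that the dump is a gzipped tarball fetched over HTTP:

```python
#!/usr/bin/env python3
"""Stream a remote archive directly onto NFS-backed tool storage."""
import os
import tarfile
import urllib.request

DUMP_URL = "https://example.org/gutenberg-dump.tar.gz"  # hypothetical URL
dest = os.path.join(os.environ["TOOL_DATA_DIR"], "gutenberg")
os.makedirs(dest, exist_ok=True)

# "r|gz" is tarfile's streaming mode: it reads the HTTP response
# sequentially and extracts members as they arrive, so the archive
# itself is never written to local (ephemeral) disk.
with urllib.request.urlopen(DUMP_URL) as resp:
    with tarfile.open(fileobj=resp, mode="r|gz") as tar:
        tar.extractall(path=dest)
```

Whether the real Gutenberg dump is a tarball, and whether streaming extraction is preferable to simply downloading into `$TOOL_DATA_DIR` first, is not settled in the discussion above; this only illustrates the mechanism.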