[08:47:11] !log wikidata-dev queripulator shut down instance, I’m pretty sure it’s not needed anymore
[08:47:13] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Wikidata-dev/SAL
[09:14:33] !log wikidata-dev reference:island: shut down instance, pretty sure it’s not needed anymore
[09:14:35] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Wikidata-dev/SAL
[09:15:53] !log wikidata-dev reference-island: shut down instance, pretty sure it’s not needed anymore [re-log with correct name]
[09:15:54] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Wikidata-dev/SAL
[09:16:36] !log wikidata-dev fedprops-opennext: shut down instance, MediaWiki was broken anyway (1.43 needs PHP 7.4.3+, 7.2 installed) and I believe the instance isn’t needed anymore
[09:16:37] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Wikidata-dev/SAL
[09:18:50] !log wikidata-dev deleted ssr-termbox.wmflabs.org proxy, backend (http://172.16.4.123:3030) was already gone (no known instance with IP .123 and no obvious existing instance that would fit)
[09:18:51] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Wikidata-dev/SAL
[09:27:27] !log wikidata-dev wb-reconcile: shut down instance, I don’t think it’s needed at the moment (was running MediaWiki 1.35)
[09:27:29] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Wikidata-dev/SAL
[15:25:19] taavi: re the list, what happens for VMs that are currently shutdown?
[15:25:46] JJMC89: they also get migrated to a g4 flavor, but will stay shut down
[15:26:04] ok, thanks
[15:38:41] JJMC89: but also it will save taavi a bit of trouble if you delete things that can be deleted :)
[15:41:09] I don't want it deleted - it is a dev/test instance that I only boot when I need to use it
[15:44:20] I thought there was some documentation on wikitech about what to do and avoid when running MediaWiki instances in Cloud VPS, but I can’t find it now
[15:44:28] does anyone know what link I’m talking about?
[15:44:43] I think it had something like, spammers are very fast at finding open-registration instances, make sure to close down registration
[15:45:12] (and I think there was also a messagebox you could copy+paste to the wiki’s main page about the cloud VPS privacy policy or something like that)
[15:51:40] there's https://wikitech.wikimedia.org/wiki/Wikitech:Cloud_Services_Terms_of_use_(May_2023)#8._What_should_you_do_to_avoid_confusing_users_when_running_a_beta_or_test_wiki? but I don't think that's the page you mean
[15:52:40] or maybe you're thinking of https://wikitech.wikimedia.org/wiki/Help:Toolforge/Rules #5?
[15:53:10] JJMC89: that’s probably the message box I remembered, though
[15:53:17] and taavi: maybe that was it, yeah
[15:53:44] feels like it could be bundled on one page and linked from e.g. https://wikitech.wikimedia.org/wiki/Help:MediaWiki-Vagrant_in_Cloud_VPS
[15:54:07] (“spam*bots* are very good at finding and flooding wikis” so that’s why I couldn’t find it by searching for “spammers” ^^)
[15:54:44] ooh, and I didn’t know https://gitlab.wikimedia.org/toolforge-repos/mwdemo
[16:02:20] !log lucaswerkmeister@tools-bastion-13 tools.wd-image-positions deployed 1b8e7b6b05 (l10n updates: it, nl)
[16:02:23] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Tools.wd-image-positions/SAL
[16:24:56] taavi: regarding the reboots, for the dwl project we would like to request it happening at 17:19 UTC, the day doesn't matter
[16:28:16] and we would like a notice so we can check afterwards
[16:29:25] maybe we can even get a date?
[16:32:37] gifti: that time is outside of taavi's working hours.
[16:32:57] ah
[16:36:57] would 08:19 UTC fit better?
[17:01:33] gifti: I think your best bet is to create a subtask of T364457
[17:01:33] T364457: Migrate eqiad1 hypervisors to Neutron OVS agent - https://phabricator.wikimedia.org/T364457
[17:04:28] not of T367723?
[17:04:29] T367723: Migrate WMCS managed projects to g4 flavors - https://phabricator.wikimedia.org/T367723
[17:06:58] ah, no, that makes no sense
[17:08:40] Is there a guide to do the bookworm migration thingy ?
[17:09:41] @sohom_datta, migration is very specific to the thing you're migrating, so it's hard to write a general guide. Basically 1) make a new bookworm VM 2) move whatever it is you're doing over there 3) delete the old buster VM
[17:09:50] Obviously step 2 is nontrivial :D
[17:10:20] It might make sense to have a step 1.5) move all the files and such on your buster VM onto a cinder volume for easy transport to a new VM
[17:11:35] I think we do have all our data in a volume, so that's a plus 😁 (for videocuttool)
[17:12:50] yeah, then you may be already ready!
[17:13:48] In some cases you might need a quota adjustment in order to create the new VMs while the old ones are running -- if you need that create a ticket here https://phabricator.wikimedia.org/project/view/2880/ and then let us know on irc so it gets filled quickly
[18:16:39] !log admin temporarily removing all ovs hosts from the 'ceph' aggregate so the scheduler will stop putting linuxbridge hosts on ovs hosts and breaking them
[18:16:43] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Admin/SAL
[18:26:24] some issue with grafana.wmcloud.org getting {"traceID":""} instead of a dashboard
[18:27:46] I've also seen a couple toolforge jobs exit with "Exit code was '255'. With reason 'Unknown'."
[18:34:57] JJMC89: the toolforge jobs issue might've been me, I abruptly broke some k8s workers.
[18:35:00] should be all back now
[18:38:03] opened T367803
[18:38:04] T367803: grafana.wmcloud.org down - https://phabricator.wikimedia.org/T367803
[19:11:38] JJMC89: fixed I think
[20:03:21] !log admin replaced ovs hosts in the 'ceph' aggregate
[20:03:24] Logged the message at https://wikitech.wikimedia.org/wiki/Nova_Resource:Admin/SAL
[20:05:12] what is the host name of the machine that actually handles the webproxies? Like what you get by setting up webproxies in Horizon, is that "cloudweb" ?
[20:06:00] I want to debug from that shell why one instance works as backend for a webproxy and another does not
[20:16:02] mutante: proxy-0[34].project-proxy.eqiad1.wikimedia.cloud
[20:18:23] andrewbogott: :) thanks!
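(For reference, a minimal sketch of the buster → bookworm move outlined at 17:09–17:10 above: new VM, shuffle data across on a Cinder volume, retire the old VM. All names here are placeholders I made up — `mytool-buster`, `mytool-bookworm`, `mytool-data`, the image and flavor names, the `/dev/sdb` device and `/srv/mytool` path — and the `openstack` commands assume a shell with working credentials for the project; the same steps can equally be done through Horizon.)

```bash
# Create a transfer volume and attach it to the old buster VM (placeholder names/sizes).
openstack volume create --size 20 mytool-data
openstack server add volume mytool-buster mytool-data

# Inside the old VM: format the volume (first use only), mount it, and copy the data over.
sudo mkfs.ext4 /dev/sdb            # device name depends on how the volume attached
sudo mount /dev/sdb /mnt
sudo rsync -a /srv/mytool/ /mnt/

# Create the replacement VM from a bookworm image on a g4 flavor (names are placeholders),
# then move the volume from the old VM to the new one.
openstack server create --image debian-12-bookworm --flavor g4.cores2.ram4.disk20 mytool-bookworm
openstack server remove volume mytool-buster mytool-data
openstack server add volume mytool-bookworm mytool-data

# Once everything is verified working on the new VM, delete the old buster VM.
openstack server delete mytool-buster
```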
[20:26:53] thanks andrew - looks to be working well now
[21:35:32] Hi
[21:36:47] How do I forward messages from the cvn-sw-spam channel in IRC to the Telegram channel
[21:39:17] I think the documentation you need is https://wikitech.wikimedia.org/wiki/Tool:Bridgebot#Telegram ?
[21:42:45] Will I need to edit this file to do that?
[21:42:45] https://gitlab.wikimedia.org/toolforge-repos/bridgebot/-/blob/main/etc/bridgebot.toml (re @lucaswerkmeister: I think the documentation you need is https://wikitech.wikimedia.org/wiki/Tool:Bridgebot#Telegram ?)
[21:44:05] Do I need to edit this file?
[21:44:06] https://gitlab.wikimedia.org/toolforge-repos/bridgebot/-/blob/main/etc/bridgebot.toml
[22:17:27] andrewbogott: is it possible puppetmaster.cloudinfra ran out of disk ?
[22:17:40] I see a server error with No space left on device
[22:17:53] I think that’s the right configuration, but you should file a task anyway (re @GergesShamon: Do I need to edit this file? https://gitlab.wikimedia.org/toolforge-repos/bridgebot/-/blob/main/etc/bridgebot.toml)
[22:17:54] and afaict I am using the central one, not project local
[22:18:06] mutante: it's possible, let me check
[22:18:29] thanks!
[22:19:15] hm https://phabricator.wikimedia.org/T366357 seems not to be fixed
[22:23:05] puppetmaster/common.pp already has a timer to delete those reports
[22:23:16] it deletes everything older than 14 days
[22:23:25] maybe that is not aggressive enough here
[22:24:17] or puppetserver isn't puppetmaster/common ?
[22:24:43] not sure
[22:24:48] reports on a puppet 7 server are in /var/lib/puppetserver/reports
[22:25:03] I do remember this issue happening in prod and then we added that timer I think
[22:25:10] but it should not be generating reports at all
[22:26:17] hm, maybe the service just needed restarting to pick up the config change
[22:26:35] systemctl status puppet_report_cleanup
[22:35:08] Unit puppet_report_cleanup.service could not be found
[22:35:16] so that problem is likely to show up on all the new puppetservers
[22:35:23] (except not this one 'cause it doesn't make reports anymore)
[22:44:03] ACK, I guess that is something we could just copy over from master to server class
[22:44:26] I am not sure about the "does the compiler rely on those"
[22:52:47] as far as I know the compiler only relies on facts, which are stored elsewhere
[23:03:47] gotcha
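(For reference, the report cleanup discussed at 22:23–22:35 typically amounts to pruning old agent reports from disk. A minimal sketch of the kind of command such a timer would run, assuming the /var/lib/puppetserver/reports path and 14-day retention mentioned above; the actual puppet_report_cleanup unit defined in puppetmaster/common.pp may differ in path, pattern, or retention.)

```bash
# Delete report YAML files older than 14 days, then clean up any per-node
# directories left empty. Path and retention follow the conversation above
# and are not necessarily what the real timer uses.
find /var/lib/puppetserver/reports -type f -name '*.yaml' -mtime +14 -delete
find /var/lib/puppetserver/reports -mindepth 1 -type d -empty -delete
```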