[07:16:01] sweet
[07:44:16] theory: from a point of view, maintaining the container images like that is not that different from the old tools-images repo, no?
[07:45:09] some aspects are similar yes
[07:45:22] semantically, it is very similar, and prone to the same limitations, like version bumps and deprecations
[07:45:25] some are not (ex. not being supported by WMCS, yet at least)
[07:45:47] so I wonder if an alternative is to offer just the git repo, and for each tool account to build their own
[07:48:01] I'm not sure how time-compatible building would be though, as in how long can you rebuild an old version of redis without trouble (ex. deps not existing anymore, ...)
[07:48:52] For these kinds of services, I would lean on pre-built images, as the redis interface (or mariadb/memcached) does not change that often and is usually backwards compatible
[07:49:04] ok
[07:49:15] with language runtimes you have many more issues as they change more often (especially if they come with libs)
[07:50:04] I'd be happy to be convinced otherwise though if you want to try it out
[08:12:41] nah, that's ok. But I think it may be the perfect timing to reflect on it, before wide adoption
[08:12:54] in other news, maintain-kubeusers did this yesterday:
[08:13:01] https://www.irccloud.com/pastebin/CCvZ5gcG/
[08:13:18] a couple of accounts `elcdb` and `lastactiveadms` were apparently deactivated
[08:13:32] I guess/hope that this was OK, the result of a human action :-)
[08:25:02] https://disabled-tools.toolforge.org/ <- they show up there, though it should not delete them yet I think
[08:25:30] it should delete them only after 40 days of being disabled
[08:26:07] well, the homedir wasn't really deleted, only the `k8s.disabled` flag file was created
[08:26:16] but `delete` is the name of the operation in the reconciliation loop
[08:26:40] the rest of the resources, yes, they are deleted. Disabled accounts don't have access to k8s, so that should be expected, no?
[08:27:36] sounds ok to me yes, very confusing names though
[09:25:38] > aggregate_instance_extra_specs:network-agent='ceph'
[09:25:45] I wonder what I was thinking about yesterday when doing this
[09:26:49] :-)
[09:29:10] `ceph net-agent list`
[09:29:13] xd
[09:30:48] hmm is rabbitmq happy in codfw1dev?
[09:32:50] arturo: there's a bunch of duplicated resourcequotas on toolsbeta, duplicated as in each tool namespace, there's `tool-` and `` resourcequotas
[09:33:16] is that you testing something?
[09:33:23] yeah, we should clean up a bunch of stuff after the recent maintain-kubeusers work
[09:33:45] in tools there's also some, but not all
[09:34:00] we briefly had a version of maintain-kubeusers deployed that would create resourcequotas with the wrong name
[09:35:01] also, there are old configmaps
[09:35:03] hmm, okok, so I guess that the more restrictive version will apply, so we should clean up before any new change will take effect, right?
[09:35:16] old: maintain-kubeusers, new: maintain-kubeusers-
[09:35:33] not the config, the resourcequota itself
[09:35:53] https://www.irccloud.com/pastebin/kHBPQffS/
[09:36:36] that makes me wonder, what does toolforge jobs look for?
[09:37:10] the same as the namespace name, it seems
[09:37:32] well, kubernetes does, I think
[09:37:49] it just gets the first in the namespace xd
[09:37:52] resource_quota = tool_account.k8s_cli.get_objects("resourcequotas")[0]
[09:38:38] oh, ok. But for the actual resource scheduling calculation, I think k8s uses the namespace-named one
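A minimal sketch of the kind of cleanup discussed above, assuming `kubectl` access with admin rights on the Toolforge control plane; the `tool-` namespace prefix comes from the log, everything else is illustrative. Something along these lines is presumably what the cleanup script in the toolforge-deploy MR mentioned a bit later automates.

```bash
#!/usr/bin/env bash
# List tool namespaces that have more than one ResourceQuota and print the
# extra ones, i.e. the quotas whose name does not match the namespace itself
# (those are the ones created by mistake by the buggy maintain-kubeusers).
set -euo pipefail

for ns in $(kubectl get ns -o name | grep '^namespace/tool-' | cut -d/ -f2); do
  quotas=$(kubectl -n "$ns" get resourcequota -o jsonpath='{.items[*].metadata.name}')
  if [ "$(wc -w <<<"$quotas")" -gt 1 ]; then
    for q in $quotas; do
      # the "good" quota is the one named like the namespace
      [ "$q" != "$ns" ] && echo "extra quota in $ns: $q"
    done
  fi
done
```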
[09:39:18] ah, okok
[09:39:36] hmm, how did that work then?
[09:40:02] ahhh, okok, the ones without 'tool-' are the ones that were created by mistake
[09:40:03] xd
[09:40:12] yeah
[09:46:52] hmm, it seems to apply both resourcequotas, not only the one with the same name as the namespace
[09:47:07] ok
[09:47:14] so the cleanup is definitely needed
[09:48:18] yep, it applies both
[09:48:33] yep, cleanup is needed, I'll do
[09:49:46] thanks
[10:13:23] quick review (the cleanup script is there) https://gitlab.wikimedia.org/repos/cloud/toolforge/toolforge-deploy/-/merge_requests/323
[10:14:49] I'll send another patch cleaning up the specific overrides after
[10:36:11] +1'd
[10:45:14] arturo: dhinus: after a round of rabbitmq restarts it seems like the new flavors are working fine in codfw1dev. a g3 flavor gets you a linuxbridge agent host, a g4 one gets you an OVS host, and both types will have working network connectivity in the same VLAN-based network
[10:45:26] next up i will set up the opentofu env in eqiad1 and expand the docs for it
[10:45:40] great
[10:45:57] and then we can add the new flavors in eqiad1, and start moving the first hypervisors there to OVS
[10:46:07] sounds good
[10:46:18] awesome
[10:50:23] taavi: were the flavour changes related to the rabbitmq restarts/
[10:50:24] ?
[10:51:06] dcaro: i don't know why rabbit was unhappy, but i can't imagine why adding a flavor could have caused that
[10:51:41] ack, I was curious to know how it would do so if it did xd
[11:26:43] quick review https://gitlab.wikimedia.org/repos/cloud/toolforge/toolforge-deploy/-/merge_requests/324
[11:27:09] cleaning up custom quotas that are lower than the new defaults (deployments 16, services 16)
[11:34:11] dcaro: +1'd
[11:34:40] thanks
[11:45:30] * dcaro lunch
[13:15:07] `apt-get update` fails on tools-sgebastion-10 `E: The repository 'http://mirrors.wikimedia.org/debian buster-backports Release' no longer has a Release file.`
[13:15:16] what's the fix here?
[13:15:53] (context: deploying the latest envvars-cli)
[13:21:30] Hmm, that host is an old host that we want to get rid of asap (replaced by the newer bookworm bastions)
[13:21:38] and that's one of the reasons xd
[13:22:17] maybe we can change the repo to the archived url? (iirc you used that somewhere)
[13:25:42] yes, I can look into that
[13:26:30] is this a puppet thing or can the file be edited on the host directly?
[13:27:10] or just drop the repo entirely, it's not like we're going to have any new package updates from there
[13:27:54] cteam: heads up: I'm modifying all existing flavors in eqiad1 to pin them to the linuxbridge agent. this should be a no-op but if you see any scheduling weirdness this might be why
[13:28:27] taavi: ack, thanks for the heads up
[13:28:47] blancadesal: you can try doing it manually and then running puppet xd (I would do that)
[13:29:41] dcaro: you mean to check if puppet overrides it?
[13:31:14] yep
[13:31:31] if it does not, then that's ok, if it does, then we need to remove it from puppet :)
[13:31:42] ack
[13:45:25] taavi: is the idea that then you'll force migration of all VMs and they'll pick up the new pin when they move?
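For reference, a hedged sketch of the two options discussed above for the buster-backports failure on tools-sgebastion-10: point the entry at the Debian archive, or drop it. The exact sources file is a guess (check /etc/apt/sources.list and /etc/apt/sources.list.d/ first); buster-backports now lives on archive.debian.org, whose Release files are past their Valid-Until date, hence the extra apt option.

```bash
# Exact filename is an assumption.
SOURCES=/etc/apt/sources.list.d/debian-backports.list

# Option 1: repoint buster-backports at the archive.
sudo sed -i 's|http://mirrors.wikimedia.org/debian buster-backports|http://archive.debian.org/debian buster-backports|' "$SOURCES"
# archive.debian.org Release files are typically expired, so this may be needed:
sudo apt-get -o Acquire::Check-Valid-Until=false update

# Option 2: drop the entry entirely; no new packages will ever come from it.
sudo sed -i '/buster-backports/d' "$SOURCES"
sudo apt-get update
```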
[13:47:19] the api gateway changes to move auth there are ready for review https://gitlab.wikimedia.org/repos/cloud/toolforge/api-gateway/-/merge_requests/23, I'm happy to go over it if anyone wants to review it (no rush, whenever)
[13:49:04] dcaro: puppet did not overwrite the edit, apt-get works again
[13:49:13] \o/
[13:49:53] andrewbogott: the idea is that each VM will be shut down, moved from the g3.$FOO flavor to the equivalent g4.$FOO flavor (and as a part of that re-scheduled to an OVS-capable host), the database bits will be flipped to enable the OVS network driver on it, and started up again
[13:51:06] OK, so when you said you were modifying flavors, you mean you were creating new g4 flavors
[13:52:56] no, the change I just did was to prevent any VMs using the existing flavors from getting scheduled on OVS hosts, so that I can re-image a few cloudvirts to the new agent. and once I have a few cloudvirts on OVS I can create the g4 flavors
[13:54:00] ok ok, now I'm following :)
[14:12:56] last time I get to ask this for a while: is anyone interested in running the toolforge meeting that's in about an hour? https://wikitech.wikimedia.org/wiki/Portal:Toolforge/Admin/Monthly_meeting
[14:15:33] :), I can run it if you don't want to run this one, but I'm happy if you run it one last time before you return after the long break (ahem...)
[14:17:13] I'd also leave the honour to taavi for one last time (but maybe not last last? :P)
[14:17:19] sure, I can do it :-)
[15:04:48] topranks: I'm seeing some issues with healthchecks between the ceph nodes, is anything happening that you know of?
[15:06:56] no, I'm doing some work elsewhere in the DC but shouldn't affect cloudsw
[15:06:56] any particular nodes firing alerts?
[15:06:56] looking
[15:06:58] took a quick glance at the librenms graphs, no sudden jump in usage or anything suggesting links saturated etc
[15:07:24] soo, now that we have an incident response process.. I'm declaring this an incident. starting a doc
[15:07:32] ok
[15:07:36] I had an open ssh session on tools-bastion-12, but trying to `become stashbot` has hung it. Feels like NFS sadness.
[15:19:05] this shows many pings lost in the last 6+ hours
[15:19:05] https://grafana-rw.wikimedia.org/d/613dNf3Gz/wmcs-ceph-eqiad-performance?orgId=1&from=now-12h&to=now&editPanel=122
[15:19:05] https://docs.google.com/document/d/1epuk7WD48vQKhHsjNEjKDOLN9UU1CloeWYKxQ3rRKyo/edit
[15:19:05] https://www.irccloud.com/pastebin/8MqMIbW0/
[15:19:05] thanks taavi
[15:19:05] topranks: ^ those are failing
[15:19:05] dcaro: is there anything anyone can do to help you?
[15:19:05] just trying to map which links are down, running that command on some other cloudcephosd nodes might help figure out which switch/router is not letting the packets flow
[15:19:05] we are in this meeting room https://meet.google.com/rbm-qwmf-qgf?authuser=1
[15:19:05] dcaro: hmm, jumbos are the issue then?
[15:19:05] looks like it, regular pings work between both
[15:19:05] anything changed? the ping started working
[15:19:05] https://www.irccloud.com/pastebin/qg0lDPD5/
[15:19:05] andrewbogott: can I task you with sending a notification to cloud-announce@?
[15:19:05] yes, I'm trying to find the incident response docs :)
[15:19:05] taavi: you mean about ceph right?
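The failing checks above are jumbo-frame pings between ceph hosts; a hedged version of that test, assuming the usual 9000-byte MTU on the ceph cluster network (8972 = 9000 minus 20 bytes of IP header and 8 bytes of ICMP header; the target hostname is illustrative):

```bash
# Regular pings work, so test specifically with a full-size, unfragmentable packet.
# -M do  forbid fragmentation, -s 8972  payload size that fills a 9000-byte MTU
ping -c 3 -M do -s 8972 cloudcephosd1001.eqiad.wmnet

# For comparison, a small packet that fits any MTU along the path:
ping -c 3 -s 56 cloudcephosd1001.eqiad.wmnet
```

If the small ping succeeds and the jumbo one is dropped, some hop in between is mangling or refusing jumbo frames, which matches what was seen here.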
[15:19:05] RECOVERY - BGP status on lsw1-f3-eqiad.mgmt is OK: BGP OK - up: 22, down: 0, shutdown: 0 https://wikitech.wikimedia.org
[15:19:05] andrewbogott: wikitech.wikimedia.org/wiki/Wikimedia_Cloud_Services_team/Incident_Response_Process
[15:19:05] ty
[15:19:05] andrewbogott: yeah, since it seems like the approximate impact is "cloud vps is down"
[15:19:05] yeah even stranger
[15:19:05] https://www.irccloud.com/pastebin/b1Lk3fH5/
[15:19:05] oh, that sounds to me like some paths are broken somewhere
[15:19:05] maybe some ongoing DCops work on site?
[15:19:05] indeed
[15:24:17] https://phabricator.wikimedia.org/P64632
[15:24:17] topranks: fwiw, we're in https://meet.google.com/rbm-qwmf-qgf
[15:24:17] and https://docs.google.com/document/d/1epuk7WD48vQKhHsjNEjKDOLN9UU1CloeWYKxQ3rRKyo/edit is the incident doc
[15:24:17] the graphs for inter-cloudswitch traffic here https://grafana-rw.wikimedia.org/d/613dNf3Gz/wmcs-ceph-eqiad-performance?orgId=1 seem suspicious, the traffic goes down
[15:24:17] slow ops are going down again, did anyone make any changes?
[15:26:37] https://www.irccloud.com/pastebin/MCQyvEGY/
[15:26:54] from root@cloudcephmon1001:~# ceph health detail
[15:27:27] from that it seems that F4 might have been the one dropping the jumbos
[15:29:03] the NFS alert is back to green
[15:30:03] I shut the links from E4 and F4 to D5
[15:30:18] and moved the instance vlan over to the links from C5 too
[15:30:34] that seems better at first glance but not sure what's going on still
[15:30:38] https://www.irccloud.com/pastebin/O9IuDEvG/
[15:30:52] when did you do that? just now?
[15:30:57] https://usercontent.irccloud-cdn.com/file/kYyhPdTh/image.png
[15:34:04] topranks: we now think that the issue began around 01:45 UTC, 16 or so hours ago. Did you do anything interesting then?
[15:34:13] seems the issue was errors on a link, possibly a bad optic
[15:34:15] https://usercontent.irccloud-cdn.com/file/hflAB32G/image.png
[15:34:21] I can confirm that now everything seems to be flowing through c8
[15:34:23] https://usercontent.irccloud-cdn.com/file/mD7GQ8rT/image.png
[15:34:25] that link is currently manually disabled
[15:34:28] yeah
[15:34:55] bad optics would also match the fact that pings have been getting lost since yesterday
[15:35:35] yeah, well, also it's getting worse, which sort of matches your ping loss graph
[15:36:41] yep, everything seems to be working now
[15:39:29] topranks: any special template I should use for the task to replace the optics?
[15:40:05] repeating here from the meet: I am declaring the active incident closed. (the optics still need to be replaced, yes)
[15:41:33] i'll write the public doc tomorrow I think
[15:42:02] taavi: thanks
[15:42:21] thank you for the quick fix, topranks
[15:43:49] taavi: ok thanks, sorry this was v. bad timing for me as I was in the middle of an upgrade of another device
[15:43:56] so apologies for the poor comms
[15:44:20] and yeah one of those nasty ones where it's not a binary UP/DOWN problem but slowly deteriorating
[15:44:38] hence no automatic failover etc, the devices see the link as UP despite the error count increasing
[15:45:14] hm, think we need to restart nfs clients?
[15:45:59] andrewbogott: do we have any alerts for nfs /client/ failures?
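A hedged sketch of the kind of client-side check behind the "many processes stuck on IO" alert that comes up just after this; the threshold is made up, only the idea (count processes in uninterruptible sleep) is from the log:

```bash
# Count processes in state D (uninterruptible sleep); on these hosts that
# almost always means they are stuck waiting on an NFS call.
stuck=$(ps -eo state= | grep -c '^D' || true)
echo "processes in D state: ${stuck}"

# Illustrative threshold only: a handful of D-state processes is normal,
# a large sustained count usually means the client needs a (hard) reboot.
if [ "${stuck}" -gt 10 ]; then
  echo "probably an NFS-stuck client"
fi
```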
[15:46:06] https://usercontent.irccloud-cdn.com/file/JOOM8Bvr/image.png
[15:46:15] right
[15:46:16] the alert is something like 'too many processes in D state'
[15:46:19] should I start the reboot cookbook?
[15:46:30] I don't see it though
[15:46:36] yeah it's not firing
[15:46:53] taavi: I think so, yes. Assuming that the nfs server itself is healthy...
[15:47:06] which it must be since /some/ clients are working fine
[15:47:08] (e.g. the bastion)
[15:47:20] so yeah, taavi, please reboot workers.
[15:48:12] ok, doing
[15:48:20] taavi: should be ok yes, that will force pods to get restarted
[15:49:12] topranks: what links did you disable exactly? To put in the hardware task
[15:49:43] I can put the detail there, just looking over a few other things here
[15:50:12] it was cloudsw1-d5-eqiad et-0/0/53 to cloudsw1-f4-eqiad et-0/0/54
[15:50:37] I had disabled others but brought them back up since, once we confirmed the one that was causing problems
[15:51:15] ack
[15:52:12] cookbook is running now
[15:52:22] apparently the wm-bot that SAL relies on is not back yet
[15:52:46] xd
[15:53:15] we should fix that (not depend on it at least)
[15:54:08] topranks: just curious, is there conventional wisdom about what happens /physically/ to explain 'bad optics'? Like, e.g. does the glass degrade because it's sitting against an LED and gets too hot?
[15:55:27] yeah it's mostly heat related alright
[15:55:46] I'm not fully up on all the details, they're made with certain tolerances and a certain expected lifetime
[15:55:57] I'm checking the remaining alert "No Puppet resources found on instance tools-redis-6 on project tools", Redis is working anyway
[15:56:17] but in reality that can vary and some can start performing badly like this long before you'd expect
[15:56:28] sometimes the receiver can get burnt out if the incoming signal is too hot
[15:56:50] makes sense
[15:57:05] In general SFPs are quite a feat of engineering, getting the electronics and optical parts into a small space and dealing with the heat produced is very tricky
[15:57:05] restarting the mon service on cloudcephmon1002 released the last 2 slow ops (forcing them to rerun on cloudcephmon1003)
[15:59:01] topranks: I started T367199, please feel free to fill it in, I'm not sure what's needed for a cable change xd
[15:59:02] T367199: hw troubleshooting: bad fiber cable between cloudsw1-d5-eqiad port et-0/0/53 to cloudsw1-f4-eqiad port et-0/0/54 - https://phabricator.wikimedia.org/T367199
[16:16:35] dcaro: thanks
[16:17:06] topranks: got a question: an error counted on one side of the link will not show up in the other side's error count, right?
[16:17:29] fwiw I am about to re-enable that interface, I've disabled all the BGP connections that run over it, which will mean it is not used, but it will enable us to test across the link with ping to verify things when we are testing the optics
[16:17:34] dcaro: that can vary
[16:17:50] more often than not you only see errors on one side
[16:18:10] and those errors can be because of a failing module where we see the errors
[16:18:33] or can be a failing module at the far end, i.e. the transmit from the other side is going bad
[16:18:35] ack, that might explain why our charts did not show errors
[16:18:48] (I only added one side of each link 🤦‍♂️)
[16:18:53] which charts?
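Since errors often only show on one end, a hedged example of checking both sides of the suspect link by hand, assuming JunOS on the cloudsw switches (switch and interface names are taken from the log, the .mgmt FQDNs are an assumption):

```bash
# Run the same counter check on both ends of the D5 <-> F4 link.
ssh cloudsw1-d5-eqiad.mgmt.eqiad.wmnet 'show interfaces et-0/0/53 extensive | match "error"'
ssh cloudsw1-f4-eqiad.mgmt.eqiad.wmnet 'show interfaces et-0/0/54 extensive | match "error"'
```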
[16:18:56] ah ok
[16:19:44] topranks: okok, I'm ready for the re-enabling
[16:19:57] I boldly added an "Action items & follow-up tasks" section to the end of the incident doc template :) (and to today's doc)
[16:20:06] should be fine - I'll do my own checks but just in the interest of full disclosure :)
[16:20:27] dhinus: thanks, I might add stuff there, though I might refine it tomorrow
[16:20:52] no rush :)
[16:22:31] dcaro: ok link back up but not being selected
[16:23:38] looking good so far
[16:26:54] yep all good
[16:32:25] tools-services-05 is stuck on nfs, I'll reboot manually
[16:33:30] that was fast :), back online
[16:34:18] btw. the alerts about stuck processes started coming in ~16 min ago
[16:36:08] * dhinus off
[16:37:12] dcaro: you mean like 'Kubernetes worker tools-k8s-worker-nfs-37 has many processes stuck on IO (probably NFS)'?
[16:37:27] yep
[16:38:05] hopefully those are on workers that taavi hasn't rebooted yet
[16:38:59] yep, though I'm finding some non-workers also getting stuck, like tools-services (manually restarted)
[16:39:31] or tools-sgebastion-10
[16:39:54] oh, I just logged into that one
[16:40:00] so we might want to extend the alerts to those
[16:40:12] I'm waiting for a shell there :)
[16:40:16] console works though
[16:40:17] * andrewbogott a bit surprised that sgebastions still exist
[16:40:43] the reboot cookbook is currently on worker-nfs-37, everything above that has been rebooted and everything below has not
[16:40:48] it's the last one, waiting for some fixes on how to use the toolforge api from within toolforge
[16:41:02] I'm going to do a hard reboot on sgebastion-10 unless you're already doing that
[16:41:09] go ahead please
[16:41:30] ok, done
[16:41:34] some users are still logged in
[16:41:50] T360488 is the main reason for login-buster.toolforge.org still existing I think.
[16:41:51] T360488: Missing Perl packages on dev.toolforge.org for anomiebot workflows - https://phabricator.wikimedia.org/T360488
[16:42:01] yep
[16:42:55] taavi: meaning that e.g. tools-k8s-worker-nfs-19 has rebooted? Because it still shows 'Kubernetes worker tools-k8s-worker-nfs-19 has many processes stuck on IO (probably NFS)' as of 4 minutes ago :(
[16:43:09] has not
[16:43:13] I was a bit sad not to find `make` on the new bastions over the weekend when I wanted to make a little deployment helper. I got over it though.
[16:43:20] oh, it's counting down
[16:43:23] yes
[16:43:25] reading fail
[16:43:40] everything with a number that is above 3736 has been rebooted
[16:44:02] bd808: `make` seems like a reasonable thing to install there
[16:44:15] some of those still show alerts but they're old so likely just need to refresh the check
[16:44:39] you can follow https://grafana.wmcloud.org/d/3jhWxB8Vk/toolforge-general-overview?orgId=1&var-cluster=tools&from=now-30m&to=now for the latest data
[16:45:12] also, very happy right now that I've spent the time to make the reboot cookbook trivially handle nodes that are fully hung up / unresponsive / etc by hard rebooting them (instead of crashing when running 'reboot' gets stuck)
[16:46:05] Yeah, I wish we didn't need that cookbook but we clearly do
[16:47:34] tools-k8s-worker-nfs-51 is reporting choppy stats so the alert does not clear, looking
[16:48:10] the host looks ok
[16:48:36] taavi: :nod: I figured that others would find gnumake uncontroversial for bastion install.
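A hedged sketch of the hard-reboot fallback described above, using the plain OpenStack CLI from a control node (project and server names are from the log, everything else is illustrative of what the cookbook does, not its actual code):

```bash
export OS_PROJECT_NAME=tools   # assumption: admin credentials already sourced

# Try a clean (soft) reboot first; a guest wedged on NFS may never complete it.
openstack server reboot tools-sgebastion-10

# If the guest stays stuck, reset it at the hypervisor level instead.
openstack server reboot --hard tools-sgebastion-10
```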
[16:48:41] I think at this point I would rather have some kind of fix for T360818 that would let folks make custom bastions for themselves in the form of a container.
[16:48:43] T360818: Consider adding `kubectl`, `webservice`, and `toolforge` binaries to shell container images - https://phabricator.wikimedia.org/T360818
[16:49:33] T363027 might be most of what is needed for that
[16:49:33] T363027: [builds-builder] Support adding repositories for Apt buildpack - https://phabricator.wikimedia.org/T363027
[16:49:40] bd808: +1 from me
[16:50:28] for adding custom repos, we might want to control that a bit and only allow certain repos instead (ex, toolforge, etc.)
[16:52:38] dcaro: yeah, if we are worried about arbitrary package install that could be reasonable. I guess I'm not entirely clear if that is a thing we worry about really in an environment where we let people go wild with language-specific package managers (pip, composer, npm)
[16:53:24] By that I don't mean that we don't care about secure defaults, but we also let people point a lot of foot guns at their own feet
[16:53:45] yep, if there's a will there's a way, all we can do is make it hard to do the wrong thing
[16:56:05] I think if I was in the meeting we had during the Berlin offsite again to talk about buildpacks and other container creation I would ask more questions about why specifically we like buildpacks more than direct Dockerfile usage.
[16:56:43] I know why it was seen as good at a long ago point, but I don't know if we all share those same concerns today
[16:57:45] for people that don't know the difference between a container and a regular process, not having to deal with any of that is easier (ex, if you only have code using standard code packaging, buildpacks are a breeze compared to dockerfiles)
[16:58:39] same as using k8s directly vs using toolforge: if you know k8s you can do lots of things that you'd have a hard time doing with toolforge
[16:58:58] but if you don't know k8s, toolforge will allow you to do many things way easier than if you had to learn k8s
[16:59:24] it's a matter of who they are targeted at
[16:59:54] (besides any technical struggle, that would most probably be solvable)
[17:00:59] with "code packaging" I mean lang-specific package managers (pip/npm/...)
[17:04:06] dcaro: Valerie has swapped the optics and ping tests over the link are testing clean
[17:04:15] that was quick!
[17:04:29] unless there is reason not to I will re-enable BGP, and then observe / run ping tests between hosts that were failing earlier?
[17:04:51] unfortunately sometimes it's only in the presence of a significant amount of traffic that an issue shows its head
[17:05:05] but if there is any sign of a problem I will immediately roll back
[17:05:18] dcaro: ha yes she was on site we were lucky :)
[17:05:54] let's try, I'm checking on my side
[17:06:17] ok cool, proceeding
[17:07:37] I'm a bit suspicious though of leaving it enabled, as it might start failing after some usage right? (if it's temp related) so might fail in the middle of the night for me (and with andrewbogott being the only one awake)
[17:07:44] I see lost pings
[17:07:45] nope we had problems
[17:07:48] I shut it again
[17:08:01] just three though
[17:08:05] https://www.irccloud.com/pastebin/pSwmWwZB/
[17:08:10] okok
[17:10:26] there were a few more lost pings yep :/
[17:10:45] https://www.irccloud.com/pastebin/Wf1XZ9Jd/
[17:11:48] topranks: is there anything else we can try? does Valerie have other hardware to test? (was only one side changed or both?)
[17:12:15] we changed it in D5, we're trying the other end of the link now
[17:12:19] maybe it's the cable and not the SFP
[17:12:28] ack, okok
[17:12:32] always hard to say if it's the receiver on one end or the laser on the other
[17:12:42] could be the cable, but usually the cable just breaks all of a sudden
[17:12:52] doesn't slowly start getting worse like this
[17:12:55] yep, fiberglass is fragile
[17:13:03] it cracks
[17:13:15] yeah exactly, and the link just goes down
[17:13:38] typically when you see a small number of errors gradually increasing over time it's the electronics
[17:14:06] hmm... now that you mention it, at $previous I don't remember a cable ever breaking, but SFPs would fail quite often
[17:14:36] (cables might be set up broken already, but working and then breaking, not sure that happened in the ~2 years I was around)
[17:15:13] besides excavators digging in the wrong place xd
[17:17:38] haha yeah - it's rare in the datacentre
[17:17:44] out in the WAN?? happens all the time
[17:17:48] submarines, diggers
[17:17:56] and of course the worst offenders - squirrels!
[17:18:06] those guys just love chewing right through the fiber :P
[17:18:23] crunchy :)
[17:18:44] Valerie swapped the other side - again ping looking clean but I remain dubious
[17:18:48] can we try again?
[17:18:58] sure, ready
[17:20:10] ok done
[17:20:23] so far it looks better
[17:20:42] yeah same
[17:21:48] I'll have to go in a bit
[17:22:04] yeah no probs
[17:22:08] that does look better
[17:22:21] is it back at full traffic?
[17:22:30] what I'll do is leave it for a few mins to monitor, but disable BGP again before I clock off
[17:22:53] dcaro: no I only brought back up prod-realm IPv4, so there is an IPv6 BGP session and a cloud vrf BGP session too which are still disabled
[17:22:58] ack
[17:23:00] should be a significant level though
[17:23:34] zero errors reported on the link though
[17:23:42] earlier they started counting up immediately
[17:23:52] yep, promising
[17:24:12] I'll be around for another hour or so, I'll keep monitoring it and at any sign of a problem I'll disable it
[17:24:28] but hopefully that's it fixed
[17:25:12] ack, I'll clock off then, andrewbogott should be around, if anything happens don't hesitate to page me (though taking the link off again seems like it would do the trick, worst case scenario)
[17:25:27] thanks a lot for the quick fixes!
[17:25:30] have a good evening!
[17:25:47] ok np ttyl!
[17:25:53] https://www.irccloud.com/pastebin/F8c6oo93/
[17:26:07] (thank Valerie on my behalf too)
[17:26:17] will do :)