[09:29:01] mutante: thank you for the decom! I noticed only because I needed a fleetwide puppet run to decom icinga mgmt checks, all good now! cc sobanski thanks for the follow up
[11:46:51] nemo-yiannis && hnowlan service regen-zoom-level-tilerator-regen.service is failing on both DCs because it can't connect to cassandra
[11:46:58] which we are saying to
[11:47:28] is there some configuration we need to do to tell it that we do not have cassandras anymore?
[11:47:39] effie: ah I'm guessing we can remove that or at least configure that off
[11:48:10] cassandra is finally down I reckon ?
[11:48:29] yep, all gone. Been disabled for months though
[11:48:33] I'm not certain which parts of regen are necessary if any
[11:48:48] yeah both timers should go away with tilerator/cassandra deprecation
[11:49:19] nemo-yiannis: is there something we should do so the service will stop failing ?
[11:49:32] `regen-zoom-level-tilerator-regen` and `regen-zoom-level-${title}`
[11:49:38] looks like we could remove all of this? https://gerrit.wikimedia.org/r/plugins/gitiles/operations/puppet/+/refs/heads/production/modules/tilerator/manifests/regen.pp
[11:49:39] effie: We can disable the timers
[11:50:12] ok, one is already disabled
[11:50:15] I will fire a CR if so - main moving part is notify-tilerator-regen and that explicitly contacts tilerator
[11:50:19] * nemo-yiannis checking if tegola-based pregeneration is affected by this
[11:50:52] I think there's a different trigger for that but would appreciate verification
[11:51:03] yeah, just checked, it's a different cron
[11:51:21] so we can disable regen-zoom-level-tilerator-regen?
[11:53:02] yeah i think so
[11:53:02] nemo-yiannis ^ ?
[11:53:22] hnowlan: I will prepare a patch
[11:54:42] oh sorry, I rushed ahead and did one too :P
[11:55:06] I have not submitted mine yet, I will +1 you
[11:55:47] Thank you <3
[11:56:11] https://gerrit.wikimedia.org/r/c/operations/puppet/+/865053/
[11:57:07] I am running pcc
[11:57:37] same :D
[12:09:27] port changes for thumbor in prod so we can run it alongside the metal instances 😬 https://gerrit.wikimedia.org/r/c/operations/deployment-charts/+/865054
[12:15:35] Oh you need your limits merged too
[12:16:14] ty!
[12:37:04] serviceops, MW-on-K8s: Helmfile apply failing on deploy server - https://phabricator.wikimedia.org/T324553 (jnuche)
[12:47:12] serviceops, MW-on-K8s: Helmfile apply failing on deploy server - https://phabricator.wikimedia.org/T324553 (Clement_Goubert) p: Triage→High a: Clement_Goubert
[12:54:07] serviceops, MW-on-K8s: Helmfile apply failing on deploy server - https://phabricator.wikimedia.org/T324553 (jnuche)
[12:56:47] serviceops, MW-on-K8s: Helmfile apply failing on deploy server - https://phabricator.wikimedia.org/T324553 (Clement_Goubert) It's missing environment variables that are set in `/etc/profile.d/kube-env.sh` and the timer doesn't have access to them, or the `mwdebug` user because they're set only for login...
[13:33:27] I'm still hoping for a review of my spark-operator chart one day: https://gerrit.wikimedia.org/r/c/operations/deployment-charts/+/855674
[13:34:13] Unfortunately, now that I've added the CRDs to the chart, the `helm-lint` output has grown to 5.8 MB in size.
[13:34:47] oh damn
[14:25:11] serviceops, WMDE-GeoInfo-FocusArea, WMDE-TechWish-Maintenance, Maps (Kartotherian), and 2 others: Migrate our draft charts to newer scaffolding - https://phabricator.wikimedia.org/T324471 (awight) Open→Resolved a: awight→None
[14:33:46] This one's ours
[14:33:47] 2022-11-11 10:37:54 +icinga-wm PROBLEM - IPMI Sensor Status on restbase1018 is CRITICAL: Sensor Type(s) Temperature, Power_Supply Status: Critical [PS Redundancy = Critical, Status = Critical] https://wikitech.wikimedia.org/wiki/Dc-operations/Hardware_Troubleshooting_Runbook%23Power_Supply_Failures
[14:33:56] I'll check it out
[14:37:07] erm, host's 6 years old, that means it'll be slated for replacement, or do we still switch PSUs on old hosts?
[14:57:21] serviceops, DC-Ops, ops-eqiad: hw troubleshooting: PSU failure for restbase1018.eqiad.wmnet - https://phabricator.wikimedia.org/T324572 (Clement_Goubert) p: Triage→Low
[14:57:44] serviceops, DC-Ops, ops-eqiad: hw troubleshooting: PSU failure for restbase1018.eqiad.wmnet - https://phabricator.wikimedia.org/T324572 (Clement_Goubert) ipmi-sel log: ` cgoubert@restbase1018:~$ sudo ipmi-sel ID | Date | Time | Name | Type | Event 1 | No...
[14:58:03] serviceops, DC-Ops, ops-eqiad: hw troubleshooting: PSU failure for restbase1018.eqiad.wmnet - https://phabricator.wikimedia.org/T324572 (Clement_Goubert) racadm getsel log: ` ------------------------------------------------------------------------------- Record: 16 Date/Time: 10/18/2022 14:55:...
[15:11:26] serviceops, MW-on-K8s, Patch-For-Review: Helmfile apply failing on deploy server - https://phabricator.wikimedia.org/T324553 (Clement_Goubert) We now set the correct environment variables in the timer definition, so the next run should be ok. This however does not fix the problem for the user `mwpre...
[15:29:02] serviceops, Data-Engineering, Event-Platform Value Stream (Sprint 05): Flink Kubernetes Operator Helm chart - https://phabricator.wikimedia.org/T324576 (Ottomata)
[15:34:15] serviceops, Data-Engineering, Event-Platform Value Stream (Sprint 05): Flink Kubernetes Operator Helm chart - https://phabricator.wikimedia.org/T324576 (Ottomata)
[15:51:52] serviceops, Data-Engineering, Event-Platform Value Stream (Sprint 05): Flink Kubernetes Operator Helm chart - https://phabricator.wikimedia.org/T324576 (JMeybohm) > This will mean adapting the upstream helm chart to fit in our deployment-charts repo with template conventions there. I would lean towa...
[15:53:14] serviceops, Data-Engineering, Event-Platform Value Stream (Sprint 05): Flink Kubernetes Operator Helm chart - https://phabricator.wikimedia.org/T324576 (Ottomata) OH! If that is possible/okay then that will be much much easier. Alright, I'll try that first and we'll see how that goes. CC @BTullis...
[15:56:06] serviceops, Data-Engineering, Event-Platform Value Stream (Sprint 05): Flink Kubernetes Operator Helm chart - https://phabricator.wikimedia.org/T324576 (JMeybohm) >>! In T324576#8447323, @Ottomata wrote: > OH! If that is possible/okay then that will be much much easier. Alright, I'll try that first...
[16:02:54] serviceops, Data-Engineering, Event-Platform Value Stream (Sprint 05), Patch-For-Review: Flink Kubernetes Operator Helm chart - https://phabricator.wikimedia.org/T324576 (BTullis) Oh, right. Yes that might have made it a bit easier, but I'm not 100% sure. The thing about the work I've done on the...
[16:04:35] serviceops, Data-Engineering, Event-Platform Value Stream (Sprint 05), Patch-For-Review: Flink Kubernetes Operator Helm chart - https://phabricator.wikimedia.org/T324576 (Ottomata) Def will need specialized RBAC for Flink, but if we don't have to add all our common templates stuff, ideally we can...
[16:11:25] _joe_: Did I say I would merge and deploy the eventgate modules change and then forget to do so?
[16:11:32] If so, apologies.
[16:12:09] <_joe_> btullis: np :) also you won't freak out when you find whitespace changes when deploying eventgate next time
[16:12:43] Nice. Thanks.
[16:12:58] <_joe_> just don't deploy right now before puppet has run :)
[16:13:38] OK, I'll do it in a bit. Run on deploy1002 you mean?
[16:24:45] <_joe_> you don't need to do it now btw
[16:25:07] <_joe_> I was just saying "don't do it while the values we import from puppet and the chart are out of sync"
[16:33:36] serviceops, Data-Engineering, Event-Platform Value Stream (Sprint 05), Patch-For-Review: Flink Kubernetes Operator Helm chart - https://phabricator.wikimedia.org/T324576 (Ottomata) https://nightlies.apache.org/flink/flink-kubernetes-operator-docs-main/docs/operations/helm/#watching-only-specific-...
[16:52:17] inflatador: dcausse the flink operator Nonces are cool: https://nightlies.apache.org/flink/flink-kubernetes-operator-docs-main/docs/custom-resource/job-management/#application-restarts-without-spec-change
[16:55:28] ottomata: yes, gitops ftw :)
[16:56:27] dcausse: have you read through the flink operator docs? especially these ones around job lifecycle management? from what i can tell everything you need for rdf-streaming-updater is here and will be much more automated than with the existing session cluster stuff?
[16:56:37] wondering if there is anything missing
[16:57:44] ottomata: I think it has everything yes, the only bit I wanted to test is what happens for a k8s cluster upgrade
[16:57:47] <_joe_> i prefer checksum-based nonces :P
[16:58:27] <_joe_> dcausse: we do k8s cluster upgrades by tearing down and up a cluster, so i guess we'd have to save any state somewhere
[16:58:35] dcausse: meaning when everything is shut down, including operator?
[16:59:00] i.e. are there bits in the config maps we want to save or do we want to undeploy/deploy the job
[16:59:02] <_joe_> ottomata: we destroy all resources, build the cluster from scratch / helmfile
[16:59:27] <_joe_> so if anything needs to be saved persistently, it needs to be handled somehow
[16:59:40] aye, dcausse i think there shouldn't be any uncommitted configmaps stuff that would need to be saved, if it needs to be saved it should be in the helmfile i think
[16:59:45] <_joe_> ideally, flink would save its state to an object storage or better a database
[17:00:09] oh, hm i see, like, the latest savepoint location in the config map
[17:00:09] hm
[17:00:12] it does but still uses k8s configmaps to store some pointers
[17:00:15] right
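For reference, a minimal sketch of the Flink Kubernetes HA setup being discussed: durable job state goes to object storage, while the HA metadata ("the pointers") lives in ConfigMaps that a full teardown/restore would have to handle. Key names follow the Flink Kubernetes HA documentation (linked a bit further down, at 17:18); the cluster id and storage URLs are placeholder values, not actual settings.

```yaml
# Sketch only — placeholder cluster id and bucket paths, not production values.
kubernetes.cluster-id: example-flink-app
high-availability: org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory
high-availability.storageDir: s3://example-bucket/flink/ha      # durable JobManager/HA metadata
state.checkpoints.dir: s3://example-bucket/flink/checkpoints    # placeholder
state.savepoints.dir: s3://example-bucket/flink/savepoints      # placeholder
```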
[17:00:39] ... could use zookeeper instead :p
[17:00:45] meh :)
[17:01:35] right hm, i guess on full k8s shutdown we'd have to basically do the manual recovery process: https://nightlies.apache.org/flink/flink-kubernetes-operator-docs-main/docs/custom-resource/job-management/#manual-recovery
[17:01:46] get the value of the latest savepoint path before shutdown
[17:01:54] and set initialSavepointPath in the helmfile values
[17:02:19] with the k8s operator you can stop-with-a-savepoint and then redeploy from this savepoint
[17:02:31] yes, but you have to manually tell it which one, right?
[17:03:17] this is what I wanted to test out basically, have a clear procedure for this scenario
[17:04:12] aye. _joe_ by "save its state to an object storage or better a database", i gather you mean not using configmaps at all to do this?
[17:04:27] the main state is saved externally, but the pointer to that state is in a dynamic configmap
[17:13:25] <_joe_> ottomata: you're using your infrastructure database (k8s etcd) to save some application state. It has that kind of smell...
[17:14:02] _joe_: makes sense, there are 2 HA options: k8s configmap (smelly?) and zookeeper. dcausse aside from dependency on zookeeper, what's so bad about it?
[17:14:05] <_joe_> I mean it can work ofc, and we mostly just need to add the configmap saving/restoring
[17:14:48] <_joe_> ottomata: yeah zk would be a more sound option but also we might get free of zk at some point with the next kafka upgrade, so... :P
[17:15:29] <_joe_> now ideally it would also support etcd natively and we could use etcd instead of zk, but I don't think that's supported
[17:15:35] well, kafka would be free! but doesn't mean flink can't be bound.
[17:15:38] oh hm, that would be nice.
[17:16:06] <_joe_> uhm https://cwiki.apache.org/confluence/display/FLINK/FLIP-144%3A+Native+Kubernetes+HA+for+Flink
[17:17:56] ya i think that is what we are talking about
[17:17:58] https://issues.apache.org/jira/browse/FLINK-12884
[17:18:03] it uses configmaps
[17:18:29] https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/ha/kubernetes_ha/
[17:18:45] now this is what you were just suggesting: https://issues.apache.org/jira/browse/FLINK-11105
[17:18:48] abandoned...
[17:19:46] "this ticket could be closed directly since FLIP-144[1] is a more appropriate way and could cover all the functionalities."
[17:19:51] not according to _joe_ !
[17:20:24] Flink HA Etcd design doc: https://docs.google.com/document/d/12-gIZDuT4IOWG7gmwSqNFsOHuGlkdRHge0ahJ7M311Y/edit#
[17:20:36] <_joe_> ottomata: yeah hence my "uhm"
[17:21:10] <_joe_> ottomata: anyways, don't get blocked on this
[17:21:17] <_joe_> if we just have to save-restore a configmap
[17:21:24] ya ya, just googling around to see if that was an option.
[17:21:25] <_joe_> I think we're already doing it
[17:21:53] yes...but, what's especially annoying about that is that even for 'stateless' streaming apps that are just kafka -> kafka, flink still has state. >:(
[17:22:03] it uses its own state management for kafka offsets, instead of kafka itself.
[17:22:42] ottomata: hm... but your app can start fresh no?
[17:23:02] hmm, but it does commit offsets to kafka for lag monitoring purposes, annnnd i think it might use those kafka offsets if there are no offsets in flink state... so maybe it's okay
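To make the manual-recovery procedure discussed above (17:01–17:02) concrete, here is a sketch of the `job` section of a FlinkDeployment as the operator docs describe it: `upgradeMode: savepoint` has the operator suspend with a savepoint and restore from it, and `initialSavepointPath` is what you would pin to the last known savepoint after a full cluster teardown. The jar path, parallelism and savepoint URL are placeholders, not values from this discussion.

```yaml
# Sketch of a FlinkDeployment `job` section; all values are placeholders.
job:
  jarURI: local:///opt/flink/usrlib/example-job.jar   # placeholder artifact
  parallelism: 3
  upgradeMode: savepoint      # suspend with a savepoint and restore from it on spec changes
  state: running
  # After a full teardown, point the redeploy at the last savepoint taken:
  initialSavepointPath: s3://example-bucket/flink/savepoints/savepoint-0-abc123   # placeholder
```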
[17:23:29] dcausse: ideally it would start from committed offset, even after cluster upgrade. and since 'stateless', ideally it would be an unmanaged start
[17:23:35] you might get duplicates, as kafka offsets are lazily committed but yeah that might be acceptable for your usecase?
[17:23:52] well, we have to check to be sure it will start from the kafka offset (with no flink state)
[17:24:16] otherwise we'll get the whole 7 day backlog duplicated (if reset = earliest) or miss a bunch (if reset = latest)
[17:24:35] anyway...tbd
[17:24:47] 7 days that'd be bad yes :)
[17:25:10] btullis: thinking about namespaces and roles: if you manually create the CRDs, what is stopping us from deploying an operator for each application in the same namespace?
[17:25:13] (not that we should...)
[17:25:33] then the operator wouldn't need any perms to alter anything outside of its own namespace
[17:25:39] cc inflatador: ^ too :)
[17:26:09] our base flink (and/or spark) chart would just also include the operator definitions
[17:27:28] maybe a small waste of resources, but i don't think the operator is resource heavy
[17:29:47] ottomata: Yes, could do. There are some naming clash gotchas outlined here, but nothing that couldn't be worked around: https://github.com/GoogleCloudPlatform/spark-on-k8s-operator/blob/master/docs/user-guide.md#running-multiple-instances-of-the-operator-within-the-same-k8s-cluster
[17:30:38] Their main issue with it is the configuration of the webhook, but we could do it our way anyway, rather than their way.
[17:32:45] <_joe_> btullis: one option could be to try to contribute our improvements upstream
[17:32:55] <_joe_> guarded by feature flags
[17:33:40] <_joe_> I am dubious because I've been burned in the past by third-party puppet/ansible/etc modules, but it would be great if we could work with upstreams and keep using the same charts as they do
[17:37:29] _joe_: yes, I like your thinking. I was under the impression that common_templates & template modules were The Way that we had to do it and therefore I would have to adapt upstream to our way of doing things.
[17:37:34] the flink operator helm chart role and namespace options look pretty good. have to see if they can do exactly what we want built in.
[17:38:53] if we were to do single operator, i think what we want is an operator deployed in one namespace, that has permissions to create FlinkDeployments in any namespace. These deployments will create JobManager pods. Then, those JobManager pods will have permissions to create TaskManager pods... hopefully only in the same namespace as their JobManager pod
[17:39:14] _joe_: Unfortunately, I feel I'm in a bit of a chicken and egg situation because apart from minikube I can't run anything anywhere at the moment.
[17:39:41] so, a Q for serviceops (i'll ask this in the task) is: is that okay? would we be allowed to have flink-operator in one namespace create resources in another namespace... if restricted appropriately?
[17:39:42] <_joe_> btullis: I would consider operators "infrastructure"
[17:40:18] <_joe_> ottomata: uhm, I'd 301 that to akosiaris and jayme but my first instinctual response would be "ideally no"
[17:40:41] https://nightlies.apache.org/flink/flink-kubernetes-operator-docs-main/docs/operations/rbac/#role-based-access-control-model
[17:41:02] aye, ya, will ask on task
[17:41:29] I'd say "depends" :)
[17:42:11] <_joe_> that's another way of saying it yeah :)
[17:42:45] serviceops, Patch-For-Review: Productionise mc20[38-55] - https://phabricator.wikimedia.org/T293012 (jijiki) p: Triage→High
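For reference, the namespace-scoped mode of the upstream flink-kubernetes-operator chart (the RBAC model linked just above) is driven by a values entry along these lines; the namespace names here are placeholders, not actual deployments.

```yaml
# Sketch of upstream flink-kubernetes-operator chart values; namespaces are placeholders.
watchNamespaces:
  - rdf-streaming-updater
  - another-flink-app
# With watchNamespaces set, the operator only watches the listed namespaces and the chart
# can scope its RBAC (Roles/RoleBindings) to them rather than creating cluster-wide bindings.
```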
[17:42:49] but I've not read the backlog here as well ... so maybe don't count on me :)
[17:43:26] no worries jayme ill try to sum it all up in a task when i get a better grasp on it :)
[17:43:37] <_joe_> jayme: oh I was just going with the "least possible permissions" principle
[17:43:38] ok
[17:44:18] qq, is a ServiceAccount used in the per service/helmfile-based namespaces we use now?
[17:44:39] hm i guess not, because there is nothing in that namespace that is creating k8s resources, so no need
[17:46:13] ottomata: please include/add a task regarding the savepoint/configmap discussion somewhere as well. We will need to iterate on that as it is one of the bigger pain points we (as in serviceops) have with the current setup
[17:47:19] <_joe_> PSA: you will need to rebase any patch or it will fail CI
[17:47:28] <_joe_> for deployment-charts
[17:47:28] For what it's worth, I've currently got the spark-operator configured to run in one namespace, but with permission to write to pods in one other namespace. But locked down so it can only do so in this one additional namespace.
[17:48:08] <_joe_> btullis: that's quite different and I'd say ok :)
[17:48:40] <_joe_> but I get why ottomata was looking for wider permissions. I'm nervous when I see "any namespace", because now that includes mediawiki
[17:49:22] <_joe_> Also; depending on the goals of a flink pipeline, they might fit better in the dse cluster than in the wikikube one
[17:50:12] <_joe_> there I'd be less worried myself but yeah, in general I'd say that we should grant operators the least permissions possible
[17:51:13] _joe_: Thanks for the clarification. When you say...
[17:51:14] > I would consider operators "infrastructure"
[17:51:14] How would you say that this would affect how I might proceed? I still feel a bit stuck, whether I use our templates or a minimally modified upstream version.
[17:51:32] well, i suppose it doesn't need any namespace, but it would be nice to not have to redeploy the flink operator to update the namespaces it is allowed to be deployed to.
[17:51:35] although... maybe that is not so bad
[17:52:04] <_joe_> btullis: I haven't looked much at the spark operator, but I think with e.g. cert-manager we more or less imported upstream I think
[17:52:15] or, i wonder, if we could use https://nightlies.apache.org/flink/flink-kubernetes-operator-docs-main/docs/operations/configuration/#dynamic-operator-configuration
[17:52:16] it "should" be possible to run only one flink-operator and only give it permission to a specific list of namespaces to run jobs in.
[17:52:21] ?
[17:52:31] <_joe_> jayme: yeah
[17:53:30] Re, flink, yep, I think the operator can watch multiple namespaces. https://gerrit.wikimedia.org/r/c/operations/deployment-charts/+/865100/1/charts/flink-kubernetes-operator/conf/flink-conf.yaml#36
[17:53:47] jayme: that is possible, https://nightlies.apache.org/flink/flink-kubernetes-operator-docs-main/docs/operations/helm/#watching-only-specific-namespaces
[17:54:59] but, it would be nice if users only had to make new helmfile services, not new charts, to get their flink apps deployed. Since (IIUC) each helmfile service is in its own namespace, the flink operator would need perms to create resources in that namespace.
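Roughly, the kind of per-namespace grant being discussed here is a Role in the application namespace, bound to the operator's ServiceAccount in its own namespace. Everything below is hypothetical: the names and the rule list are illustrative, not the output of any existing chart.

```yaml
# Sketch of a per-namespace grant for the operator; all names are hypothetical.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: flink-operator
  namespace: example-flink-app        # an application (watched) namespace
rules:
  - apiGroups: ["flink.apache.org"]
    resources: ["flinkdeployments", "flinkdeployments/status"]
    verbs: ["*"]
  - apiGroups: [""]
    resources: ["pods", "services", "configmaps"]
    verbs: ["get", "list", "watch", "create", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: flink-operator
  namespace: example-flink-app
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: flink-operator
subjects:
  - kind: ServiceAccount
    name: flink-operator              # the operator's ServiceAccount...
    namespace: flink-operator         # ...in the operator's own namespace
```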
[17:55:00] ottomata: yes, that. Absolutely preferable against running multiple operators
[17:55:09] cool
[17:55:19] ya easy enough for now to have a limited hardcoded list of namespaces
[17:55:43] just not sure about operator restarts when amending the list, and also it'd be nice if any manual steps there weren't necessary
[17:55:49] <_joe_> ottomata: yeah we'd have to change some permissions for some role
[17:56:02] <_joe_> or add a serviceaccount to the namespace where you deploy flink
[17:56:04] well, we would have to have one chart that can be used to deploy flink apps anyways
[17:56:10] <_joe_> yes
[17:56:15] <_joe_> that's where I was getting to
[17:56:38] that would have to be deployed as a release (helmfile.d/service/...) which needs a namespace created
[17:56:40] ya, one chart for flink apps, but many helmfile/services
[17:56:44] right.
[17:56:47] <_joe_> you'd have to grant the operator access to a suitable cluster role first
[17:56:50] release. i guess that's the term i'm looking for
[17:56:51] <_joe_> in admin_ng
[17:57:14] _joe_: I've moved you from CC to reviewer on my spark-operator chart, which is based on common_templates and modules. https://gerrit.wikimedia.org/r/c/operations/deployment-charts/+/855674
[17:57:19] so more or less no change to how we deploy other stuff, apart from that we don't deploy "Deployment" objects but rather "FlinkJobCRDThings"
[17:58:01] If you (or jayme: or anyone) think I should restart it based on the upstream, let me know.
[17:58:58] jayme: yes the process should be the same for the user, they would be configuring a FlinkDeployment (CR), example: https://github.com/apache/flink-kubernetes-operator/blob/main/examples/flink-python-example/python-example.yaml
[18:00:37] btullis: I don't really have an opinion other than the generic one I uttered in the flink task earlier. If there is a *solid* one that we don't need to modify all over the place, I think it's okay to import that. But that really depends on quality, size and how much of it we use I guess
[18:00:41] Sorry folks, I didn't mean to sound at all crotchety there. It looks a bit like that when I read it back.
[18:01:13] * jayme not received that way
[18:02:26] ottomata: yes. that FlinkDeployment we would need to wrap and generalize as a chart more or less which is then what the "service owner" deploys
[18:02:46] aye
[18:02:55] jayme: Yeah, when we last talked about this there was quite a lot of talk of 'trimming the fat' from the spark-operator, but maybe I interpreted that to mean the helm chart as well as the image.
[18:04:25] Anyway, got to go for now. Catch you fine folk tomorrow.
[18:04:39] yeah... sorry I wasn't really paying attention to the recent changes as I was all tangled up in other things and a cold :/
[18:04:53] thanks yall :) lots of learning today
[18:06:06] I gtg for today - good luck :)
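For reference, a trimmed sketch in the spirit of the upstream example CR linked above (17:58): the kind of object a wrapper chart would render per service, which the operator then turns into JobManager/TaskManager pods. The name, image, jar path and resource figures are placeholders.

```yaml
# Sketch of a FlinkDeployment CR; name, image, jarURI and resources are placeholders.
apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
metadata:
  name: example-streaming-app
spec:
  image: docker-registry.example/flink-app:latest   # placeholder image
  flinkVersion: v1_16
  serviceAccount: flink
  flinkConfiguration:
    taskmanager.numberOfTaskSlots: "2"
  jobManager:
    resource:
      memory: "2048m"
      cpu: 1
  taskManager:
    resource:
      memory: "2048m"
      cpu: 1
  job:
    jarURI: local:///opt/flink/usrlib/example-job.jar   # placeholder
    parallelism: 2
    upgradeMode: savepoint
```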
[18:10:59] godog: gotcha. I figured you needed it for the "mgmt interface alerts" task :) nice that you got that resolved btw
[19:47:24] serviceops, Data-Engineering, Event-Platform Value Stream (Sprint 05), Patch-For-Review: Flink Kubernetes Operator Helm chart - https://phabricator.wikimedia.org/T324576 (Ottomata)
[20:48:28] serviceops, Data-Engineering, Event-Platform Value Stream (Sprint 05), Patch-For-Review: Flink on Kubernetes Helm charts - https://phabricator.wikimedia.org/T324576 (Ottomata)
[21:12:57] serviceops, Data-Engineering, Event-Platform Value Stream (Sprint 05), Patch-For-Review: Flink on Kubernetes Helm charts - https://phabricator.wikimedia.org/T324576 (Ottomata) flink-kubernetes-operator helm chart RBAC works in one of two modes: - cluster scoped - meaning the operator can manag...
[21:52:06] serviceops, DC-Ops, SRE, ops-eqiad: Q1:rack/setup/install kubernetes102[34] - https://phabricator.wikimedia.org/T313873 (ops-monitoring-bot) Cookbook cookbooks.sre.hosts.reimage was started by cmjohnson@cumin1001 for host kubernetes1024.eqiad.wmnet with OS bullseye
[23:05:00] serviceops, DC-Ops, SRE, ops-eqiad, Patch-For-Review: Q1:rack/setup/install kubernetes102[34] - https://phabricator.wikimedia.org/T313873 (ops-monitoring-bot) Cookbook cookbooks.sre.hosts.reimage started by cmjohnson@cumin1001 for host kubernetes1024.eqiad.wmnet with OS bullseye executed with...
[23:06:20] serviceops, DC-Ops, SRE, ops-eqiad, Patch-For-Review: Q1:rack/setup/install kubernetes102[34] - https://phabricator.wikimedia.org/T313873 (ops-monitoring-bot) Cookbook cookbooks.sre.hosts.reimage was started by cmjohnson@cumin1001 for host kubernetes1023.eqiad.wmnet with OS bullseye
[23:09:01] serviceops, DC-Ops, SRE, ops-eqiad, Patch-For-Review: Q1:rack/setup/install kubernetes102[34] - https://phabricator.wikimedia.org/T313873 (ops-monitoring-bot) Cookbook cookbooks.sre.hosts.reimage was started by cmjohnson@cumin1001 for host kubernetes1024.eqiad.wmnet with OS bullseye