2020-06-21T00:04:13 ChrisWi, I suspect it will be related to this.... https://progress.opensuse.org/news/103
2020-06-21T02:57:06 *** okurz_ is now known as okurz
2020-06-21T04:39:13 -heroes-bot- PROBLEM: PSQL locks on mirrordb2.infra.opensuse.org - POSTGRES_LOCKS CRITICAL: DB postgres total locks: 134 * total waiting locks: 63 ; See https://monitor.opensuse.org/icinga/cgi-bin/extinfo.cgi?type=2&host=mirrordb2.infra.opensuse.org&service=PSQL%20locks
2020-06-21T04:45:25 -heroes-bot- PROBLEM: PSQL locks on mirrordb1.infra.opensuse.org - POSTGRES_LOCKS CRITICAL: DB postgres total waiting locks: 4 ; See https://monitor.opensuse.org/icinga/cgi-bin/extinfo.cgi?type=2&host=mirrordb1.infra.opensuse.org&service=PSQL%20locks
2020-06-21T04:55:25 -heroes-bot- RECOVERY: PSQL locks on mirrordb1.infra.opensuse.org - POSTGRES_LOCKS OK: DB postgres total=20 ; See https://monitor.opensuse.org/icinga/cgi-bin/extinfo.cgi?type=2&host=mirrordb1.infra.opensuse.org&service=PSQL%20locks
2020-06-21T04:59:13 -heroes-bot- RECOVERY: PSQL locks on mirrordb2.infra.opensuse.org - POSTGRES_LOCKS OK: DB postgres total=2 ; See https://monitor.opensuse.org/icinga/cgi-bin/extinfo.cgi?type=2&host=mirrordb2.infra.opensuse.org&service=PSQL%20locks
2020-06-21T06:33:24 -heroes-bot- PROBLEM: PSQL locks on mirrordb1.infra.opensuse.org - POSTGRES_LOCKS CRITICAL: DB postgres total locks: 68 ; See https://monitor.opensuse.org/icinga/cgi-bin/extinfo.cgi?type=2&host=mirrordb1.infra.opensuse.org&service=PSQL%20locks
2020-06-21T07:41:44 -heroes-bot- PROBLEM: SSH on metrics.infra.opensuse.org - SSH CRITICAL - OpenSSH_7.9 (protocol 2.0) version mismatch, expected OpenSSH_7.2 ; See https://monitor.opensuse.org/icinga/cgi-bin/extinfo.cgi?type=2&host=metrics.infra.opensuse.org&service=SSH
2020-06-21T08:44:17 goodmorning
2020-06-21T08:50:31 -heroes-bot- PROBLEM: SSH on gcc-stats.infra.opensuse.org - SSH CRITICAL - OpenSSH_8.3 (protocol 2.0) version mismatch, expected OpenSSH_8.1 ; See https://monitor.opensuse.org/icinga/cgi-bin/extinfo.cgi?type=2&host=gcc-stats.infra.opensuse.org&service=SSH
2020-06-21T09:15:20 does anyone know what's going on with the network?
2020-06-21T11:03:22 mail lists are down. Know anything?
2020-06-21T11:05:04 Web login to the OBS is also down, though osc still works.
2020-06-21T11:40:54 lists aren't really down, but no mail is getting in :-)
2020-06-21T11:42:34 Anton told me his mails bounced. I tested, and mine did not get through.
2020-06-21T11:45:25 robin_listas: nothing is bouncing, but the main suse mx has a problem with domain resolution.
2020-06-21T11:46:16 also, mails are not being forwarded, probably queueing at the suse MX
2020-06-21T11:48:23 Correction: he said they bounced yesterday.
2020-06-21T12:19:35 *** lurchi_ is now known as lurchi__
2020-06-21T12:20:37 *** lurchi__ is now known as lurchi_
2020-06-21T12:28:38 if anyone has any contact to suse-it - mx2.suse.de[195.135.220.15] said: 450 4.1.8 : Sender address rejected: Domain not found
2020-06-21T13:11:18 -heroes-bot- RECOVERY: Elastic-Engine on water.infra.opensuse.org - HTTP OK: HTTP/1.1 200 OK - 420 bytes in 0.003 second response time ; See https://monitor.opensuse.org/icinga/cgi-bin/extinfo.cgi?type=2&host=water.infra.opensuse.org&service=Elastic-Engine
2020-06-21T13:39:46 *** lurchi_ is now known as lurchi__
2020-06-21T14:03:22 Sorry if this is already reported but the main opensuse web is down and it's not reported on https://status.opensuse.org/
2020-06-21T14:04:15 nevermind. stupid browser
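For the 450 4.1.8 "Sender address rejected: Domain not found" errors quoted above, the first thing to check is whether the sender's envelope domain actually resolves from the rejecting MX's point of view. A minimal sketch with dig - the domain and the resolver address are placeholders, not values from this log:

```sh
# NXDOMAIN on either of these would explain the 450 4.1.8 rejection:
dig +short MX example-sender.org
dig +short A example-sender.org

# Query the resolver that mx2.suse.de itself uses (address is a placeholder)
# to tell a missing zone apart from a resolver-local problem:
dig @192.0.2.53 example-sender.org MX
```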
2020-06-21T14:10:38 pjessen, likely this https://progress.opensuse.org/news/103 the severing of the umbilical cord to MF... ;)
2020-06-21T14:29:17 *** lurchi__ is now known as lurchi_
2020-06-21T14:43:41 -heroes-bot`- PROBLEM: PostgreSQL standby on mirrordb1.infra.opensuse.org - POSTGRES_HOT_STANDBY_DELAY CRITICAL: DB mb_opensuse2 (host:mirrordb2) 368937192 and 2 seconds ; See https://monitor.opensuse.org/icinga/cgi-bin/extinfo.cgi?type=2&host=mirrordb1.infra.opensuse.org&service=PostgreSQL%20standby
2020-06-21T14:59:56 malcolmlewis: yeah, my guess too -
2020-06-21T15:06:08 if anyone has any contact to suse-it - mx2.suse.de[195.135.220.15] said: 450 4.1.8 : Sender address rejected: Domain not found
2020-06-21T15:24:28 *** lurchi_ is now known as lurchi__
2020-06-21T15:24:32 *** lurchi__ is now known as lurchi_
2020-06-21T15:40:58 klein, kl_eisbaer: ^^^
2020-06-21T15:41:40 completely unrelated - https://progress.opensuse.org/issues/68032 has a funny[tm] story about a wiki login problem
2020-06-21T15:42:19 I would open a ticket with them, but their ticket system is currently unreachable ...
2020-06-21T15:42:52 looks to me like the whole DMZ is currently down
2020-06-21T15:42:58 great :-/
2020-06-21T15:44:03 re poo#68032 - it looks like a database field lost its auto_increment flag 3 months ago (no idea why) and since then new users weren't able to log in to en.o.o
2020-06-21T15:44:08 yep: time to have a break...
2020-06-21T15:44:18 any idea what could have happened to that field?
2020-06-21T15:44:26 3 months ago?
2020-06-21T15:44:37 maybe a leftover from the migration?
2020-06-21T15:44:44 but this is just a wild guess
2020-06-21T15:44:46 yes, 2020-03-22 22:04:04.
2020-06-21T15:45:49 and FYI: I'm currently hunting a possible bug where chrony does not remove /var/run/chrony-helper/lock cleanly on shutdown - and refuses to start because of the lock file ...
2020-06-21T15:45:50 the move to the openSUSE galera cluster was (according to git log) 2020-01-30 and therefore probably unrelated
2020-06-21T15:46:13 hm, ok
2020-06-21T15:46:23 did you set the auto-increment again already?
2020-06-21T15:46:35 yes, I did
2020-06-21T15:46:38 does it affect only the en wiki or all of them?
2020-06-21T15:46:47 good question, I still have to check that
2020-06-21T15:47:10 is there a quick way to check it for all wikis, or do I have to verify "show create table user" for each of them?
2020-06-21T15:47:25 hm...
2020-06-21T15:47:31 you can look in the backup
2020-06-21T15:48:08 mybackup.infra.opensuse.org => /backup/20200621/ for example
2020-06-21T15:48:50 either use xzless, xzgrep or run xz -d (but watch the filesystem space)
2020-06-21T15:49:25 xzgrep looks like the best idea ;-)
2020-06-21T15:50:19 BTW: I guess I found one of the reasons why the forums-DB did not work on the galera cluster: a lot of tables still use MyISAM ...
2020-06-21T15:50:47 * kl_eisbaer wonders if I can migrate that on the fly ...
2020-06-21T15:51:41 according to https://forum.vbulletin.com/forum/vbulletin-5-connect/vbulletin-5-connect-questions-problems-troubleshooting/vbulletin-5-support-issues-questions/4391197-change-language-table-from-myisam-to-innodb this should work...
2020-06-21T15:51:58 in theory yes - but given how old our forums software is, I'd be careful ;-)
2020-06-21T15:53:25 hey: I got a fresh installation just a few weeks ago! :-)
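A sketch of the on-the-fly MyISAM-to-InnoDB conversion discussed above, assuming the MariaDB command-line client and a forums schema named vbulletin (the schema name is a guess, not taken from the log):

```sh
# Generate one ALTER statement per remaining MyISAM table in the forums schema.
mysql -N -e "SELECT CONCAT('ALTER TABLE ', table_schema, '.', table_name, ' ENGINE=InnoDB;')
             FROM information_schema.tables
             WHERE table_schema = 'vbulletin' AND engine = 'MyISAM';" > convert_to_innodb.sql

# Review the generated statements, then apply them; each ALTER rewrites the
# whole table, so expect locking proportional to table size.
less convert_to_innodb.sql
mysql < convert_to_innodb.sql
```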
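And for the earlier question about checking user.user_id on all wikis at once: the dumps under /backup/20200621/ can be inspected without unpacking them, roughly like this (the file-name pattern is an assumption - adjust it to the actual dump names):

```sh
# Print the user_id column definition from each wiki dump; a healthy wiki's
# line ends in AUTO_INCREMENT, a broken one (like en.o.o before the fix) does not.
for dump in /backup/20200621/*wiki*.sql.xz; do
    printf '%s:\n' "$dump"
    xzcat "$dump" | grep -m1 '`user_id` int'
done
```

The fix already applied on en.o.o is not shown in the log; presumably it was an ALTER TABLE ... MODIFY ... AUTO_INCREMENT on that column.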
2020-06-21T15:54:39 well, a fresh installation of the old version (because the upgrade caused more harm than good) :-/
2020-06-21T15:55:17 good news about the wikis - "only" en.o.o lost auto_increment for user.user_id
2020-06-21T15:56:05 hm: that sounds a bit scary to me. If I attack a database server, I usually touch all databases it contains ;-)
2020-06-21T15:57:23 to start with: if I'd attack a server, then I wouldn't change something in a way that makes getting in harder - blocking creation of user accounts (for everybody who didn't log in on en.o.o before) doesn't sound like a good idea ;-)
2020-06-21T15:58:09 well: at least this leaves out all new possible intruders :-)
2020-06-21T15:58:19 lol
2020-06-21T15:59:30 want to hear some good news? baloo is on 15.1 :-)
2020-06-21T15:59:42 nice :-)
2020-06-21T15:59:50 and metrics.o.o as well
2020-06-21T16:00:09 the list of machines that need a zypper dup is getting shorter
2020-06-21T16:00:36 the not-so-good news is that mlmmj fails to build in Tumbleweed - no idea how hard fixing it is
2020-06-21T16:01:05 I've just got boosters and sarabi left on my list
2020-06-21T16:02:15 right, elections2 (as sarabi replacement) is on my TODO list - but TODO lists have the tendency to become longer, not shorter :-(
2020-06-21T16:02:22 didn't a lot of people already try to migrate everything to mailman3?
2020-06-21T16:03:15 indeed, the mlmmj build failure should give lcp some extra motivation for mailman3 ;-)
2020-06-21T16:04:28 JFYI: https://lists.opensuse.org/cgi-bin/mailgraph.cgi
2020-06-21T16:04:41 ...if mail still worked, we might even see a bit more ;-)
2020-06-21T16:04:52 ;-)
2020-06-21T16:05:03 it's almost ready tbh
2020-06-21T16:05:10 I just have to make the archiver stuff work, because that's a thing
2020-06-21T16:05:21 I don't particularly enjoy setting up django
2020-06-21T16:06:34 we could stay with mhonarc for the archives if that makes things easier ;-)
2020-06-21T16:07:10 JFYI: all forum tables are now innodb.
2020-06-21T16:07:20 * kl_eisbaer goes out for a break with the family
2020-06-21T16:07:35 (before someone notices some broken forum stuff ;-)
2020-06-21T16:08:01 ;-)
2020-06-21T17:00:45 cboltz: it might, but I hate compromise
2020-06-21T17:01:19 on another note, I set up matrix fully, with the exception of the database stuff
2020-06-21T17:01:53 I will let kl_eisbaer handle that when he isn't busy with the family though, it's not that important ;)
2020-06-21T17:03:51 if we don't wanna do the 8448 port, I could switch to 443 and have it fully working
2020-06-21T17:04:21 I still need to have openid connect metadata, but I don't know how to request it
2020-06-21T17:15:12 for openid connect, I'd guess "open a ticket and assign it to bmwiedemann"
2020-06-21T17:15:55 and if port 443 is good enough, that makes things slightly easier (no need to adjust the firewall etc.)
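On the 443-vs-8448 question: Matrix can serve both the client API and federation on plain 443, without anyone having to type a port, by publishing two small delegation files under /.well-known/matrix/ on the opensuse.org domain. A minimal sketch - the matrix host name and the document root are assumptions for illustration, not taken from the log:

```sh
# Served by whatever webserver answers for https://opensuse.org/
mkdir -p /srv/www/htdocs/.well-known/matrix

# Tells other homeservers where to federate (no port 8448 needed):
cat > /srv/www/htdocs/.well-known/matrix/server <<'EOF'
{ "m.server": "matrix.opensuse.org:443" }
EOF

# Tells clients which base URL to use, so users only enter "opensuse.org" at login:
cat > /srv/www/htdocs/.well-known/matrix/client <<'EOF'
{ "m.homeserver": { "base_url": "https://matrix.opensuse.org" } }
EOF

# Quick check from the outside:
curl -s https://opensuse.org/.well-known/matrix/server
curl -s https://opensuse.org/.well-known/matrix/client
```

With the server file in place, federation traffic arrives on 443 of the named host, so no extra firewall rule for 8448 would be needed, and the client file is what spares users from typing a port when logging in - which is the concern raised next.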
2020-06-21T17:27:07 yeah, it's just gonna be harder to explain to the users, since they will have to use the port when logging in
2020-06-21T17:27:53 that sounds like a good argument to use 8448 ;-)
2020-06-21T17:28:29 yeah
2020-06-21T17:28:40 oh
2020-06-21T17:28:56 since the public bridges were just broken last week, I moved everything to our instance
2020-06-21T17:29:29 and it has been working quite well, apart from a few issues like running out of space because of logs taking up gigabytes of space
2020-06-21T17:29:57 oh
2020-06-21T17:30:04 I moved media_store into /data, which is 60GB iirc, and that should solve that issue
2020-06-21T17:30:56 hopefully the additional few GB are enough to make the logs not fill up the machine
2020-06-21T17:31:41 also that database thing, I can't really move the databases to the external server, so when running out of space postgres also crashed, which wasn't particularly great
2020-06-21T17:32:42 indeed, database crashes sound scary...
2020-06-21T17:37:04 yeah
2020-06-21T19:34:19 *** lurchi_ is now known as lurchi__
2020-06-21T19:42:49 *** lurchi__ is now known as lurchi_
2020-06-21T21:35:41 *** lurchi_ is now known as lurchi__
2020-06-21T21:54:52 *** lurchi__ is now known as lurchi_
2020-06-21T22:19:57 *** lurchi_ is now known as lurchi__
2020-06-21T22:25:16 *** lurchi__ is now known as lurchi_
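Regarding the logs filling up the Matrix VM: a quick way to see what is actually growing, plus a hint on capping Synapse's own logs. The paths below assume a default Synapse layout and are guesses, not taken from the log:

```sh
# What is actually eating the disk?
df -h / /data
du -sh /var/log/matrix-synapse /var/lib/matrix-synapse/media_store /var/lib/pgsql 2>/dev/null

# Synapse's log config (the log.yaml referenced from homeserver.yaml) is a
# standard Python logging config; pointing its file handler at
# logging.handlers.RotatingFileHandler with e.g. maxBytes: 104857600 and
# backupCount: 3 keeps the logs at roughly 400 MB instead of growing unbounded.
```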