2021-05-24T12:56:56 *** tigerfoot is now known as ioda-net
2021-05-24T13:58:06 *** ldevulder_ is now known as ldevulder
2021-05-24T14:01:51 *** ioda-net is now known as tigerfoot
2021-05-24T15:31:03 *** tigerfoot is now known as nimportequoimong
2021-05-24T15:38:14 *** nimportequoimong is now known as tigerfoot
2021-05-24T16:41:17 FYI: syncing Leap 15.3 on widehat, provo-mirror and mirror.linux-schulserver.de atm.
2021-05-24T16:43:58 kl_eisbaer: hoi
2021-05-24T16:44:19 pjessen: hiho!
2021-05-24T16:44:55 I got a message from Lubos, that he is currently just waiting for the final Leap 15.3 FTP tree from OBS
2021-05-24T16:45:23 So my thinking was: better make sure that widehat, pontifex and the mirror in provo have the latest stuff :-)
2021-05-24T16:45:47 I know that the current widehat has just ~400G space left, but I think this should be ok
2021-05-24T16:46:02 aha, okay. yeah, sounds like a good idea. I was looking at widehat because repopush was going so sloooooowly
2021-05-24T16:46:25 and then I noticed it was full
2021-05-24T16:46:29 FYI: the new widehat is synced via pontifex at the moment. Just have a look in the open screen session there (screen -ls; screen -r)
2021-05-24T16:46:43 what did you delete ?
2021-05-24T16:46:51 some leap_15.0 stuff.
2021-05-24T16:46:54 * kl_eisbaer does not want to fill it up again ;-)
2021-05-24T16:47:00 ok.
2021-05-24T16:47:26 I've already added it to the excludes in the repopush
2021-05-24T16:47:36 I'll leave out 15.0 - and just re-sync distribution/leap/15.1 - 15.3 - ok ?
2021-05-24T16:47:59 maybe we should think about getting rid of 15.0 on pontifex as well ?
2021-05-24T16:48:14 This would drop it from most other mirrors as well.
2021-05-24T16:48:21 I was going to leave out 15.1 too, but if you've cleared out enough, there's no urgent need
2021-05-24T16:49:22 New widehat: /dev/vdc    size: 42T  used:  26T  avail:  17T  60% /srv
2021-05-24T16:49:24 :D
2021-05-24T16:50:00 But I am on vacation next week, so there will probably be nobody who can drive the new machine to QSC and mount it.
2021-05-24T16:50:00 wow. what is the 'new' widehat ?
2021-05-24T16:50:16 ah, you're building a new one ?
2021-05-24T16:50:34 jip.
2021-05-24T16:50:38 nice!
2021-05-24T16:51:25 AMD EPYC 7502P; 256G RAM, and quite some discs
2021-05-24T16:51:59 We even got two 8T SSDs (or NVMe's? - I need to check) sponsored by one of our community members
2021-05-24T16:52:20 sounds good to me - diskspace is what we really want.
2021-05-24T16:52:39 So jdsn could build up a nice bcache-powered RAID with 76T space
2021-05-24T16:53:26 Even if it is just one machine, I think we have enough power and capacity in it to run quite some virtual machines on it.
2021-05-24T16:53:45 easily.
2021-05-24T16:53:59 I'm thinking of a setup similar to the one in the NUE datacenter, just with one machine at the moment.
2021-05-24T16:54:25 Meaning: one internal (private) network and one external one, where we can make use of the 6 public IPs we have
2021-05-24T16:54:51 I'm sure it'll come in useful.
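[As an illustration of the 15.0 exclude and the 15.1-15.3 re-sync discussed above - a minimal sketch only. The rsync module, host name and local path are assumptions, not the actual repopush configuration.]

  # run the sync detached in screen, so it can be checked later with "screen -ls" / "screen -r"
  # upstream module and local mirror path are placeholders
  screen -dmS leap-sync \
    rsync -aH --delete-after \
      --exclude '15.0/' \
      rsync://pontifex.opensuse.org/opensuse-full/distribution/leap/ \
      /srv/mirror/distribution/leap/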
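[And a rough sketch of the bcache-powered RAID mentioned for the new widehat, assuming the HDDs go into an md RAID6 as backing store and the two 8T SSDs into an md RAID1 as cache set. Device names, RAID levels and filesystem are guesses, not the layout jdsn actually built.]

  mdadm --create /dev/md0 --level=6 --raid-devices=8 /dev/sd[a-h]                # HDD backing array
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1   # SSD cache mirror
  make-bcache -B /dev/md0                          # register backing device -> /dev/bcache0
  make-bcache -C /dev/md1                          # register cache set
  bcache-super-show /dev/md1 | grep cset.uuid      # UUID needed for attaching
  echo <cset-uuid> > /sys/block/bcache0/bcache/attach
  echo writeback > /sys/block/bcache0/bcache/cache_mode
  mkfs.xfs /dev/bcache0 && mount /dev/bcache0 /srv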
2021-05-24T16:55:20 serving files doesn't require a lot of cpu or ram, even with apache
2021-05-24T16:55:21 one is currently assigned to widehat, one is assigned to ns1.opensuse.org - so we should have enough free to run another haproxy and some machines behind it
2021-05-24T16:56:16 I'm thinking of another MX, a "static" machine (for all the static webpages) and at least a postgresql server, so we can have widehat migrated to pontifex.o.o - just in case ;-)
2021-05-24T16:56:47 The good thing: the QSC datacenter is very close to the SUSE one, so there is not much delay.
2021-05-24T16:56:51 my own public mirror is running on old hardware, 8G with dual-core @ 2.4ghz
2021-05-24T16:57:25 It would be cool if you (for example) could set up the 3rd MX once we have the machine in the QSC DC. :-)
2021-05-24T16:57:32 QSC - I thought it was further away?
2021-05-24T16:57:51 Jip: mine is a VM with just 2 CPUs and 2G RAM :)
2021-05-24T16:58:17 mx3 - i'll be happy to
2021-05-24T16:58:26 the "new widehat" VM is currently a bit overpowered. But it is more or less the same setup as the current physical machine has.
2021-05-24T16:59:07 as long as it has plenty of space, so we don't need to worry about repos growing and growing.
2021-05-24T16:59:11 I think if we can offload the postgresql DB to another VM, we can free up quite some resources on the new widehat. But at the moment, I'd just like to get it into the "sync chain" and move it over.
2021-05-24T16:59:31 42T should be enough for some time, I think.
2021-05-24T16:59:56 We can use our monitoring to check the trend, but I think 42T should be reached in 3 years at the earliest.
2021-05-24T17:00:21 ...and even then we hopefully have some space left on the underlying RAID.
2021-05-24T17:01:52 My final goal would be to have redundant setups in Provo and in the QSC datacenter, so we can switch over easily.
2021-05-24T17:02:14 This would also allow us to provide Geo-based services in the future.
2021-05-24T17:02:34 Might be a bit tricky with DB-based services, but everything else should be easy to provide
2021-05-24T17:02:46 BTW: there is also room for an MX4 in Provo :D
2021-05-24T17:03:29 I'm not sure about mailman3 or another matrix server, but at least we have the possibility now.
2021-05-24T17:04:13 and if we can (for example) even just provide read-only wiki pages, that would already be much better than now.
2021-05-24T17:05:08 how about disk-space in Provo?
2021-05-24T17:05:24 At the moment, I just need to ask... :-)
2021-05-24T17:05:29 wow
2021-05-24T17:05:41 bandwidth?
2021-05-24T17:05:43 There are some old (unmaintained) arrays
2021-05-24T17:06:05 that's the tricky part: they told me that they have 1G down- and upstream in Provo.
2021-05-24T17:06:23 But when I start speedtest, I end up with way less for upstream
2021-05-24T17:06:44 at QSC, we have the full 1G
2021-05-24T17:06:58 pushing out repos to provo is about 5 hours behind
2021-05-24T17:07:30 jip, I think the bandwidth is one of the reasons. But SUSE-IT exchanged quite some hardware - and now also the admins.
2021-05-24T17:07:51 So it's not easy to find someone who is willing to help and already has enough knowledge for debugging.
2021-05-24T17:08:25 But I hope I can tackle this next month, when the new admins are settled and have some basic understanding
2021-05-24T17:08:49 I'm pretty certain it is the downstream bw - we have one push mirror in Sweden, that one is fine. gwdg too of course
2021-05-24T17:09:58 Jip. But maybe it also has something to do with the two different providers for IPv6 in Provo.
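[A quick way to put numbers on the Provo bandwidth question above - purely a sketch. The mirror URL and test file are placeholders, and speedtest only measures the local uplink, not the path between QSC and Provo.]

  # compare effective download rate from the Provo mirror over IPv4 and IPv6
  curl -4 -o /dev/null -w 'ipv4: %{speed_download} bytes/s\n' https://provo-mirror.opensuse.org/path/to/some-large.rpm
  curl -6 -o /dev/null -w 'ipv6: %{speed_download} bytes/s\n' https://provo-mirror.opensuse.org/path/to/some-large.rpm
  # generic up/downstream test of the local uplink
  speedtest-cli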
I have a blind spot there, as I can not check the new hardware any longer.
2021-05-24T17:12:00 ok - time for another tea here. "let the sync begin!" :D