2019-11-05T02:13:36 *** maxlin has quit IRC (Remote host closed the connection)
2019-11-05T02:15:29 *** maxlin has joined #opensuse-admin
2019-11-05T03:38:47 *** okurz_ has joined #opensuse-admin
2019-11-05T03:41:24 *** okurz has quit IRC (Ping timeout: 265 seconds)
2019-11-05T03:41:24 *** okurz_ is now known as okurz
2019-11-05T05:00:35 *** srinidhi has joined #opensuse-admin
2019-11-05T06:04:09 *** srinidhi has quit IRC (Disconnected by services)
2019-11-05T06:12:29 *** srinidhi has joined #opensuse-admin
2019-11-05T06:13:14 *** srinidhi has quit IRC (Client Quit)
2019-11-05T06:13:56 *** srinidhi has joined #opensuse-admin
2019-11-05T06:19:01 *** srinidhi has quit IRC (Remote host closed the connection)
2019-11-05T06:19:28 *** srinidhi has joined #opensuse-admin
2019-11-05T07:30:39 *** jadamek has joined #opensuse-admin
2019-11-05T07:38:05 *** moozaad has joined #opensuse-admin
2019-11-05T07:50:53 *** maxlin has quit IRC (Quit: Konversation terminated!)
2019-11-05T07:55:51 *** maxlin has joined #opensuse-admin
2019-11-05T08:00:54 *** ldevulder__ is now known as ldevulder
2019-11-05T10:32:37 *** matthias_bgg has joined #opensuse-admin
2019-11-05T11:41:03 *** ldevulder_ has joined #opensuse-admin
2019-11-05T11:44:44 *** ldevulder has quit IRC (Ping timeout: 265 seconds)
2019-11-05T11:47:52 *** srinidhi has quit IRC (Read error: Connection reset by peer)
2019-11-05T11:51:28 *** cboltz has joined #opensuse-admin
2019-11-05T12:22:50 *** tigerfoot has quit IRC (Remote host closed the connection)
2019-11-05T12:40:06 *** srinidhi has joined #opensuse-admin
2019-11-05T12:48:24 *** srinidhi has quit IRC (Read error: Connection reset by peer)
2019-11-05T14:06:08 *** srinidhi has joined #opensuse-admin
2019-11-05T14:50:27 *** srinidhi has quit IRC (Ping timeout: 264 seconds)
2019-11-05T14:57:19 *** srinidhi has joined #opensuse-admin
2019-11-05T15:05:37 *** ancorgs has quit IRC (Ping timeout: 240 seconds)
2019-11-05T15:14:39 *** ldevulder_ is now known as ldevulder
2019-11-05T15:17:58 *** srinidhi has quit IRC (Read error: Connection reset by peer)
2019-11-05T15:36:06 *** srinidhi has joined #opensuse-admin
2019-11-05T15:58:44 *** srinidhi has quit IRC (Disconnected by services)
2019-11-05T16:35:10 *** ancorgs has joined #opensuse-admin
2019-11-05T17:01:56 *** srinidhi has joined #opensuse-admin
2019-11-05T17:06:58 *** srinidhi has quit IRC (Quit: Leaving.)
2019-11-05T17:13:06 *** moozaad has quit IRC (Quit: Konversation terminated!)
2019-11-05T17:34:52 *** robin_listas has joined #opensuse-admin
2019-11-05T18:27:13 *** matthias_bgg has quit IRC (Quit: Leaving)
2019-11-05T18:47:32 Hi.
2019-11-05T18:47:46 we have meeting tonight?
2019-11-05T18:50:45 tuanpembual: yes, we do
2019-11-05T18:50:53 ~ 10 more minutes or so
2019-11-05T18:52:48 okey.
2019-11-05T18:56:47 *** oreinert has joined #opensuse-admin
2019-11-05T18:57:02 *** jdsn has joined #opensuse-admin
2019-11-05T18:58:46 *** okurz[m] has joined #opensuse-admin
2019-11-05T19:00:26 cboltz: time to start, i guess ;-)
2019-11-05T19:00:33 yes ;-)
2019-11-05T19:00:37 Hi everybody, and welcome to the heroes meeting!
2019-11-05T19:01:17 Here listening
2019-11-05T19:01:23 Hi
2019-11-05T19:01:29 the topics are on https://progress.opensuse.org/issues/57602
2019-11-05T19:01:35 *** mcaj_away has joined #opensuse-admin
2019-11-05T19:01:57 besides the usual topics, we have the planning for the meeting in Nuremberg, and the disk space on rsync.o.o
2019-11-05T19:01:58 Hi All
2019-11-05T19:02:21 hi mcaj_away
2019-11-05T19:02:32 can you please explain how you can type and be away at the same time? ;-)
2019-11-05T19:03:08 well multitasking you know ^^
2019-11-05T19:03:08 and while you think about that -
2019-11-05T19:03:09 * kbabioch always knew that martin is a bot :-)
2019-11-05T19:03:12 cboltz: mcaj_away is doing magic all day long - that's nothing special
2019-11-05T19:03:17 does someone from the community have any questions?
2019-11-05T19:03:30 hi, I have a question
2019-11-05T19:03:42 agraul: ask ;-)
2019-11-05T19:03:48 although this might fit the "review old tickets" phase better
2019-11-05T19:04:05 It is about upgrading software-o-o's VM from 42.3 to 15.1
2019-11-05T19:05:01 I am a bit afraid to just zypper dup "on my own", could someone who knows both openSUSE and SUSE's networks stand by when I do it?
2019-11-05T19:05:02 *** pjessen has joined #opensuse-admin
2019-11-05T19:05:41 in case something breaks and a new VM is needed (and the proxy in front might need to be reconfigured)
2019-11-05T19:05:57 agraul: speaking from experience, this will break something and might need attention / changes ...
2019-11-05T19:06:26 agraul: in 99% of cases zypper dup works, and in the worst case, if it does not boot, we need to look at the cluster
2019-11-05T19:06:39 hello again!
2019-11-05T19:06:46 welcome thomic ;-)
2019-11-05T19:06:49 *** bmwiedemann2 has joined #opensuse-admin
2019-11-05T19:06:50 thomic: Hi !
2019-11-05T19:06:57 agraul: I did that last month. It was broken after the reboot. I could not use sudo. :D
2019-11-05T19:06:57 Good evening
2019-11-05T19:07:05 hi everybody who just joined ;-)
2019-11-05T19:07:12 cboltz helped me out. :)
2019-11-05T19:07:14 all my ssh keys expired and this IRC VM will die after 10 years .. my provider told me
2019-11-05T19:07:17 :D
2019-11-05T19:07:30 so yes... I have some fun to catch up on before being active-active again
2019-11-05T19:07:36 hi @thomic
2019-11-05T19:07:48 at least I can listen and give clever hints from my old white beard :D
2019-11-05T19:08:32 hello everyone
2019-11-05T19:08:33 hi
2019-11-05T19:09:19 tuanpembual: sounds like the usual "sssd needs a restart". Done, please try again ;-)
2019-11-05T19:10:18 agraul: I know it's more work, but I'd recommend avoiding doing the upgrade on the live instance
2019-11-05T19:10:38 I know how to fix the "sssd needs a restart" problem, but we need to edit all sssd services and add a dependency on the network there ...
2019-11-05T19:11:05 mcaj_away: could that be done as a package update? or with salt?
2019-11-05T19:11:12 mcaj_away: shouldn't the sssd config come from salt?
2019-11-05T19:11:13 =)
2019-11-05T19:11:13 mcaj_away: maybe submitting the fix as a maintenance update would make more sense?
2019-11-05T19:11:18 iirc
2019-11-05T19:11:25 cboltz: what about creating a snapshot before the live upgrade? it will most likely work, and if it does not we can easily rewind
2019-11-05T19:11:35 well ... the settings in /etc yes, but not for systemd
2019-11-05T19:11:54 cboltz: that is fine for me. how should I proceed once a new VM is ready and the openSUSE reverse proxy in front needs to be adjusted?
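For context, a rough sketch of the upgrade path being discussed, assuming a libvirt-managed VM and repo files that carry the release version in their URLs; the domain and snapshot names are illustrative, and openSUSE officially recommends upgrading one release at a time (42.3 -> 15.0 -> 15.1), so an intermediate stop may be needed:

    # take a snapshot first, so a failed dup can be rewound (bmwiedemann2's suggestion)
    virsh snapshot-create-as software-o-o pre-15.1-upgrade
    # on the VM: point the repos at the new release, then run the distribution upgrade
    sed -i 's/42\.3/15.1/g' /etc/zypp/repos.d/*.repo
    zypper ref && zypper dup
    reboot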
2019-11-05T19:12:33 cboltz: it looks almost like a bug, but I did not find time to report it yet
2019-11-05T19:12:59 mcaj_away: then please do that - or "report" it with a SR ;-)
2019-11-05T19:13:09 that's much better than deploying a workaround with salt
2019-11-05T19:14:02 ok... but back to 42.3: we should also update all 15.0 to 15.1 ... since November the distro is EoL
2019-11-05T19:14:41 no objections on that (and I know that I still have to do that update for the wiki)
2019-11-05T19:14:59 agraul: we can give you something like $host-test on the haproxy for your testing and then just switch it as soon as you say you are fine with the result
2019-11-05T19:15:50 Well, there are some VMs with *special* packages where 42.3 is the last working distro ... and there we have a problem ...
2019-11-05T19:16:15 15.0 still has 3 weeks of life left
2019-11-05T19:16:23 mcaj_away: are the details documented somewhere?
2019-11-05T19:16:33 mcaj_away: oh, good point - please tell Tomas that he's either blind or lazy ;-)
2019-11-05T19:16:47 you could even keep software-test.o.o, agraul, for the future (like redeploying the old VM after the switchover) in case you'd like to do something crazy like "staging" before killing the software.o.o live instance with a new software patch =)
2019-11-05T19:17:31 thomic: where is the fun in that :D, you are right though, that would be a better setup than what we have now
2019-11-05T19:18:22 agraul: yup, it saves our friends from being pinged by cboltz on sunday afternoon because "something is so slow" :D
2019-11-05T19:18:29 I wonder if it would make sense to use btrfs+snapper for some of these
2019-11-05T19:18:46 bmwiedemann2: as soon as it is stable =)
2019-11-05T19:19:08 I thought it is just 50% slower - and takes 4x as much disk space
2019-11-05T19:19:49 you can do that, as I don't need to discuss a higher budget for more disk space anymore :D but be prepared that you might run out of disk
2019-11-05T19:20:13 would be mostly for OS+conf - not data
2019-11-05T19:20:52 bmwiedemann2: our default VM root disk was usually 10GB or 20GB max
2019-11-05T19:20:58 if it fits there :) alright
2019-11-05T19:21:10 but if it fills this up very quickly, I wouldn't recommend to change this
2019-11-05T19:21:26 as that small block storage is what keeps the backend fast and smooth
2019-11-05T19:21:42 thomic: get bmwiedemann2 a $host-test instance too, so he can create a POC machine
2019-11-05T19:21:43 would indeed be tight.
2019-11-05T19:22:10 jdsn: don't get me wrong - VM creation is on the EngInfra todo list
2019-11-05T19:22:25 as far as I'm correctly informed?
2019-11-05T19:22:36 ok :)
2019-11-05T19:22:39 kbabioch mcaj_away ^^ is that still the case?
2019-11-05T19:23:03 it is, no self-service unfortunately ... maybe some openstack cloud in the future ;-)
2019-11-05T19:23:33 kbabioch: haha good one. RedHat offers some products there I heard :P
2019-11-05T19:23:46 lol
2019-11-05T19:24:28 we got the warez below the counter :-P
2019-11-05T19:24:31 mcaj_away: to explain my sarcastic note - I found an open pull request for helios (a *big* one) to make it work with django 1.11, added it as a patch, and now have the web interface working on 15.1 on a local test VM. I still need some time for celeryd (for background jobs), but that should be the boring part.
2019-11-05T19:24:47 well, there were plans of splitting up the atrejus into openSUSE (or whatever the future name of the project with the green geeko will be) and SUSE
2019-11-05T19:25:16 I don't know if this is still the plan?
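The network dependency mcaj_away mentions could look roughly like this as a systemd drop-in (a minimal sketch; the drop-in file name is made up, and the proper fix would go through a maintenance update, as discussed above):

    # on each affected VM, or deployed via salt
    mkdir -p /etc/systemd/system/sssd.service.d
    cat > /etc/systemd/system/sssd.service.d/wait-for-network.conf <<'EOF'
    [Unit]
    Wants=network-online.target
    After=network-online.target
    EOF
    systemctl daemon-reload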
2019-11-05T19:26:47 no such plan and/or capacity for such a plan short term
2019-11-05T19:27:03 we could start pushing the tyres for some public-cloud sponsors
2019-11-05T19:27:08 like hetzner or something?
2019-11-05T19:27:18 would that be an idea at least?
2019-11-05T19:27:38 * kbabioch is NOT objecting, but not going to drive this
2019-11-05T19:28:00 yup sure... well I would need to check if my time allows that minor "side project"
2019-11-05T19:28:31 but anyways, we need some kind of "public cloud" to move forward i guess
2019-11-05T19:29:06 any ideas / objections from the crowd^^?
2019-11-05T19:29:06 thomic: does it have to be public cloud? or is a VM self service sufficient?
2019-11-05T19:29:29 but maybe we should get back to the agenda and be more specific ... because we're talking mostly about long term goals / philosophical stuff ;-)
2019-11-05T19:29:32 I think we can make some ideas about the future, but back to the present... we have a big problem with (big) data on widehat ATM... and we need to:
2019-11-05T19:29:32 a] fix it for now
2019-11-05T19:29:32 b] plan for the future, because the 19 TB of disks will be full again in less than a month ...
2019-11-05T19:30:04 jdsn: well, let's face the truth, the cluster in the NUE basement has limited capacity, with growing needs of SUSE public services I guess...
2019-11-05T19:30:30 kbabioch: sorry =) go on and moderate :P
2019-11-05T19:30:40 cboltz, is moderating ;-)
2019-11-05T19:30:48 thomic: OTOH hardware gets more compact + powerful every year
2019-11-05T19:31:01 thomic: lets discuss later
2019-11-05T19:31:22 bmwiedemann2: - what jdsn says, I can explain myself later
2019-11-05T19:31:52 mcaj_away: ... good point ... just to let all know, there is no free slot anymore, all 8 disk slots are filled
2019-11-05T19:31:58 2x1.5TB system disk
2019-11-05T19:32:13 6x4TB RAID5 for download.o.o
2019-11-05T19:32:20 iirc?
2019-11-05T19:32:36 s/download.o.o/rsync.o.o/g
2019-11-05T19:33:32 there should be a monitoring node now in the datacenter - right kbabioch?
2019-11-05T19:33:43 right
2019-11-05T19:33:48 6x4 is about the 19 TB size we see.
2019-11-05T19:33:58 yes, it is
2019-11-05T19:33:59 well, for now there is another machine that is not doing much and can be used as ipmi backdoor ... but not used for much more (currently)
2019-11-05T19:34:16 ok, just thinking, is this 1U?
2019-11-05T19:34:20 1u
2019-11-05T19:34:23 not much disk space
2019-11-05T19:34:24 :/
2019-11-05T19:34:51 as QSC does not monitor what we put there and we can ask for as many ports as we like, I guess we could put some storage there
2019-11-05T19:35:11 if SUSE would sponsor something like an old Quantum/DotHill
2019-11-05T19:35:34 what we maybe need is:
2019-11-05T19:35:34 a] new machine(s) with a lot of disks
2019-11-05T19:35:34 b] a backend storage with 100TB RAW capacity ...
2019-11-05T19:35:34 and there we are fine
2019-11-05T19:35:35 well, actually i've registered it and they did check up on it ... so we cannot just put anything there
2019-11-05T19:35:46 mcaj_away: why 100TB?
2019-11-05T19:35:58 "ready for the future"
2019-11-05T19:36:03 ah ok
2019-11-05T19:36:06 =)
2019-11-05T19:36:20 yes... and not fight with disk space every half a year
2019-11-05T19:36:29 I know of a machine with 12 3.5" slots that is not doing that much...
2019-11-05T19:36:32 well kbabioch, at least they are not "too strict" with their own rules
2019-11-05T19:36:52 bmwiedemann2: is it in support?
2019-11-05T19:37:06 because we fixed it earlier this year, quickfixed it again and again
2019-11-05T19:37:07 maybe not. but also not that old
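For the record, the RAID5 arithmetic behind the sizes above: usable capacity is (n-1) disks' worth, so 6x4 TB gives 5x4 TB = 20 TB, roughly 18.2 TiB after unit conversion, which matches the ~19 TB mentioned in the log.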
2019-11-05T19:37:24 and it would be time to have a final solution for rsync.o.o
2019-11-05T19:37:43 bmwiedemann2: but sooner out of support than when we would ask for sponsoring a new one
2019-11-05T19:37:49 I mean, a lot of production depends on that... afaik
2019-11-05T19:38:11 oh, as a temporary fix - I agree
2019-11-05T19:38:13 I would like to see a device like this https://www.thomas-krenn.com/en/products/rack-server/4u-server/intel-dual-cpu/4u-intel-dual-cpu-ri2424-scalable.html with all 24 disks ...
2019-11-05T19:38:25 (at www.thomas-krenn.com)
2019-11-05T19:38:29 mcaj_away: I disagree a bit...
2019-11-05T19:38:36 im not going to install anything unsupported there ... also putting a lot of disks there might be a challenge ... dont want to go there regularly to replace disks ..
2019-11-05T19:39:02 If you would invest in new machines now, you would like to have something like 2 servers at least, and maybe two hosts syncing
2019-11-05T19:39:13 so that if one dies, rsync.o.o is not completely down =)
2019-11-05T19:39:27 kbabioch: well, other ideas?
2019-11-05T19:39:41 HA is not trivial either.
2019-11-05T19:40:14 kbabioch: and you usually keep some spare disks in the machine - to activate them on demand - so you only drive there if like 4 disks are dead
2019-11-05T19:40:34 always keep 2 hot standby drives
2019-11-05T19:40:55 in the 24bay machine we can even keep 6 ;)
2019-11-05T19:41:04 well, not sure if we're realistic here ... up until now we had a budget of essentially 0 eur/usd for this hardware ...
2019-11-05T19:41:17 we used some old / out of support suse hardware
2019-11-05T19:41:30 what about just adding a U2 backend storage there?
2019-11-05T19:41:31 so we can talk all day long about some 24 bay machines and spare disks ... but not sure if it is going to happen
2019-11-05T19:42:01 kbabioch: but we should create an idea about what we need to ask for, right?
2019-11-05T19:42:09 yes
2019-11-05T19:42:21 so I see value in the current discussion
2019-11-05T19:42:34 what if the heroes request it? and maybe not EngInfra?
2019-11-05T19:42:39 like via Gerald etc.
2019-11-05T19:42:39 but then we should also consider paying qsc for their service ... and then we don't have to hope for qsc to be nice enough to take another server of ours
2019-11-05T19:42:44 maybe there is budget then
2019-11-05T19:43:13 because right now we are really relying on "qsc being nice guys" whenever we change anything there
2019-11-05T19:43:25 kbabioch: as long as they appear as a sponsor on our page, we have a deal
2019-11-05T19:43:46 well, they are happy to have us kbabioch =)
2019-11-05T19:44:07 where exactly are they listed -> https://en.opensuse.org/Sponsors
2019-11-05T19:44:08 :-)?
2019-11-05T19:44:47 but to get some progress here ... let's agree on what we need / want to have ... and then see how we can get there ...
2019-11-05T19:45:17 https://mirrors.opensuse.org/ search for QSC kbabioch
2019-11-05T19:45:27 I think we as Heroes should send an email / message to the board that the situation with disk space is critical and we need a] a new machine, b] storage, c] an agreement with QSC and so on
2019-11-05T19:45:29 so, basically any objections to having 1 (or 2) nodes for http/rsync/ftp/whatever ... and a storage backend?
2019-11-05T19:46:18 https://www.opensuse.org/ kbabioch and there at the bottom we still have the "old" ipexchange logo
2019-11-05T19:46:48 how reliable is that storage backend compared to the nodes? (just wondering, I never needed such big hardware - and want to avoid creating a SPOF)
2019-11-05T19:47:21 cboltz: the current machine also is a SPOF :)
2019-11-05T19:47:33 I know, but we want to improve things, right? ;-)
2019-11-05T19:47:46 yes, but the immediate issue is the space
2019-11-05T19:47:49 not the SPOF
2019-11-05T19:47:50 if you have a support contract, it works well...
2019-11-05T19:47:56 if we can fix both, great
2019-11-05T19:48:33 speaking of immediate / short term ... is there anything we can do?
2019-11-05T19:48:49 because we also have a problem right now ... and finding budget / ordering hardware ... will take months
2019-11-05T19:48:53 cboltz: so reading between the lines: would you feel better with standard hardware that we can in the worst case fix ourselves?
2019-11-05T19:48:54 (if at all)
2019-11-05T19:49:53 jdsn: no need to read between the lines - I don't have experience with big servers or storage hardware, so it was just a "silly question" ;-)
2019-11-05T19:50:12 I'd also really appreciate having our data stored on disks that we can replace with normal things (e.g. SATA or SAS HDDs) without needing a support contract for 100KEUR
2019-11-05T19:50:16 or we ask for something like a CDN sponsor? but the costs are not manageable
2019-11-05T19:50:30 if the experts tell me that they feel comfortable with the storage, then everything is fine
2019-11-05T19:50:33 they would be like 2500-5000 USD for a CDN - i just checked
2019-11-05T19:50:44 per month
2019-11-05T19:51:19 cboltz: well, the contract is expensive, so I would also second bmwiedemann2's view
2019-11-05T19:51:30 ok, wait a second
2019-11-05T19:51:39 thomic: sounds like we could instead buy a new server and several disks each year...
2019-11-05T19:51:40 but if there is a sponsor for a storage, I would be fine with it as well
2019-11-05T19:52:09 1st of all - if you put a Storage-Machine there (like a Quantum QXS) you always have more than 1 controller, you always have 2 controllers connected to the disks
2019-11-05T19:52:38 second, storage systems always have redundant (failover) disks, so you don't need to go there, and they usually take "normal SATA disks"
2019-11-05T19:52:49 We are not talking about NetApp here...
2019-11-05T19:53:11 and what if we get a storage + support contract + QSC datacenter support to change disks every time one fails?
2019-11-05T19:53:38 the big advantage I see with having 2 x 1U virtualization servers + 1 x 2/3U storage system is, you can cross-connect via fibrechannel or iSCSI, and if one of the virt hosts fails, the other one can take over
2019-11-05T19:53:49 alternative suggestion - move widehat to NUE, and upgrade or get a 2nd uplink?
2019-11-05T19:54:12 pjessen: that is not a possibility, as there is no second fibre afaik
2019-11-05T19:54:25 get another one? it's only 3K/annum
2019-11-05T19:54:28 and that is an even longer discussion you would start
2019-11-05T19:54:44 pjessen: you know, in the building, opening streets, etc...
2019-11-05T19:54:55 and I guess fibre is managed by SUSE-IT now
2019-11-05T19:55:01 thomic: I like your idea with 2x 1U plus 1x 2/3U
2019-11-05T19:55:05 so even with that discussion you go down a long road
2019-11-05T19:55:27 surely not - even here in the darkest of Switzerland, I can have a new 1Gbit fibre in less than a week.
2019-11-05T19:55:39 klein: you would not need a QSC hands-on service, as to be honest, we changed disks in the DotHill 2-3 times a year maximum, I guess we can afford that time
2019-11-05T19:55:56 anyway, it was just meant as an alternative.
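On the hot-standby idea from above (19:40): with Linux software RAID this would look roughly as follows (a sketch with illustrative device names; the actual box may well use a hardware controller instead):

    # 6 active disks in RAID5 plus 2 hot spares that mdadm rebuilds onto automatically
    mdadm --create /dev/md0 --level=5 --raid-devices=6 --spare-devices=2 /dev/sd[b-i]
    # when a disk fails, a spare takes over; the dead disk is then dropped with:
    mdadm /dev/md0 --fail /dev/sdb --remove /dev/sdb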
2019-11-05T19:55:57 pjessen: yes, switzerland
2019-11-05T19:56:02 welcome to germany
2019-11-05T19:56:03 pjessen: I'd guess that in germany it will take you a week to fill out the needed forms ;-)
2019-11-05T19:56:05 (we recently changed multiple disks within a couple of days in our dothill :-))
2019-11-05T19:56:11 haha
2019-11-05T19:56:22 offtopic )))
2019-11-05T19:56:38 so let's have something like a vote, cboltz?
2019-11-05T19:57:02 kbabioch: yes, ok, maybe that was needed, because nobody checked it for months ;) - mcaj_away how are your experiences?
2019-11-05T19:57:21 mine are, we are changing 2 spare/hot standby disks every 4-6 months?
2019-11-05T19:57:32 ok, cboltz is sleeping
2019-11-05T19:57:43 so let's have a vote on this?
2019-11-05T19:57:44 just out of curiosity, would that be a temporary solution, or even a long term alternative? https://www.hetzner.de/dedicated-rootserver/sx132
2019-11-05T19:57:56 (at www.hetzner.de)
2019-11-05T19:58:01 I'm not sure if we need a vote ;-)
2019-11-05T19:58:04 depends on the luck ... I had 2 broken disks in 4 years on the dothill for example, and like 4 disks on the netapp in the same time frame
2019-11-05T19:58:15 jdsn: I suggested that more than one time :)
2019-11-05T19:58:37 Option 1: Set up one big machine with 24 disks and a lot of spare disks?
2019-11-05T19:58:39 thomic: and....?
2019-11-05T19:58:52 jdsn: technically it's a perfect solution ... but we need to find someone to pay for it ;-)
2019-11-05T19:58:54 Option 2: Set up 2 small virt machines and a storage machine?
2019-11-05T19:59:07 kbabioch: I can ask ;)
2019-11-05T19:59:08 Option 3: Rent hardware somewhere like Hetzner?
2019-11-05T19:59:29 or offer to become a sponsor ;)
2019-11-05T19:59:38 kbabioch: We will rent a server and we'll make Melissa pay for it!
2019-11-05T19:59:40 :D
2019-11-05T19:59:48 * kbabioch is all in
2019-11-05T19:59:51 thomic: +1
2019-11-05T20:00:05 so Option 3 is our favourite?
2019-11-05T20:00:12 thomic: please define the "make"
2019-11-05T20:00:25 the nice thing here would be - we could even have 2 servers at some point in the future
2019-11-05T20:00:30 i like option 3 the most, yes ...
2019-11-05T20:00:31 maybe we can check the Serverbörse
2019-11-05T20:00:34 I vote for the Hetzner option.
2019-11-05T20:00:34 or 3b) (with sponsoring offer)
2019-11-05T20:00:36 and have a better price
2019-11-05T20:00:49 jdsn: Hetzner is not our best friend, let's say it like this
2019-11-05T20:00:58 oh.
2019-11-05T20:01:03 jdsn: option 3 only has 1gbit network - 10GBit might be available with extra cost / per TB used
2019-11-05T20:01:08 so let's change it
2019-11-05T20:01:21 bmwiedemann2: this is what we have now as well :D
2019-11-05T20:01:22 widehat only has 1gbit anyway?
2019-11-05T20:01:26 yay right
2019-11-05T20:01:58 I see
2019-11-05T20:02:06 looking at the price tag, the Hetzner option looks quite good - maybe even cheaper than buying the needed hardware
2019-11-05T20:02:27 yeah, the price is really good
2019-11-05T20:02:32 Last time hetzner "sponsored us" was back when $somebody was pointing from the download.o.o mirrorbrain net directly to their publicly visible customer mirrors without telling them :D - I guess that caused "a bit of traffic on their side"
2019-11-05T20:02:42 and as a sponsor the price might even be perfect :)
2019-11-05T20:02:44 and we have hands-on support
2019-11-05T20:03:01 that's why the name opensuse is a bit burned in-house
2019-11-05T20:03:30 well, if they run a public mirror... ;-)
2019-11-05T20:03:42 cboltz: it was in the wiki, as a mirror for customers... to be fair here
2019-11-05T20:03:47 cboltz: they changed their mirror to private
2019-11-05T20:03:52 yes...
2019-11-05T20:03:57 anyone willing to ask nicely and/or has some contacts there?
2019-11-05T20:04:03 kbabioch: yes
2019-11-05T20:04:06 both
2019-11-05T20:04:22 are we asking hetzner to sponsor us?
2019-11-05T20:04:36 asking doesn't hurt, i guess ;-)
2019-11-05T20:04:39 pjessen: yes
2019-11-05T20:04:45 ok, got it.
2019-11-05T20:05:00 just for the record
2019-11-05T20:05:03 https://www.hetzner.de/dedicated-rootserver/matrix-sx
2019-11-05T20:05:15 (at www.hetzner.de)
2019-11-05T20:05:20 we could ask for two sx62 instead of 1 sx132
2019-11-05T20:05:23 for now
2019-11-05T20:05:30 as 40TB is enough for us by now
2019-11-05T20:05:37 and upgrade later
2019-11-05T20:05:46 enough for how long?
2019-11-05T20:06:02 furthermore we could finally have widehat.o.o and rsync.o.o on two different hosts, to split http and rsync traffic
2019-11-05T20:06:03 ok, yea, if they let us upgrade any time - sure
2019-11-05T20:06:09 jdsn: well we have 20TB now
2019-11-05T20:06:17 let's say 20TB is +2 years
2019-11-05T20:06:20 also, 4x10 TB would become 30TB with RAID5
2019-11-05T20:06:39 at least we can upgrade later to a SX132
2019-11-05T20:06:45 so it's like 913,92 EUR per year ...
2019-11-05T20:06:45 with those sizes, I would seriously recommend raid6.
2019-11-05T20:06:46 ok, I have yet to see the statistics about the growth rate
2019-11-05T20:06:47 if we deploy the machine from salt
2019-11-05T20:06:52 that should be easy
2019-11-05T20:07:07 jdsn: ask rudi
2019-11-05T20:07:13 he can provide you with clear numbers
2019-11-05T20:07:19 it's like 500GB per month
2019-11-05T20:07:23 rough number
2019-11-05T20:07:28 pjessen: only for you (in Danish)
2019-11-05T20:08:00 thomic: that would be 6TB/y - I think it is less
2019-11-05T20:08:32 ok, it's 21:10 already and we have more topics ... can we discuss what we want to do short-term ... and for the long term we will wait for jdsn to get in touch with hetzner ...
2019-11-05T20:09:25 I would like to have a short-term fix + one option for the long term + an action item with someone responsible.
2019-11-05T20:09:28 cboltz: ^^
2019-11-05T20:09:59 i think we have one possible option for the long term, with action item / responsibility (jdsn) ...
2019-11-05T20:10:09 for the (very) short term, would it be possible to abuse a part of the system disk? 2x1.5 TB should have some space left for packages ;-)
2019-11-05T20:10:11 what about the option to replace widehat's 6x4TB disks with 6x12 TB?
2019-11-05T20:10:38 unless you can quickly get the budget for it, also not really short term :-/
2019-11-05T20:10:39 cboltz: with the risk that you shred your system disks to death
2019-11-05T20:10:45 ~2KEUR
2019-11-05T20:10:53 keep in mind, they are not new like the 4TB disks I put in
2019-11-05T20:11:38 yes, I know
2019-11-05T20:12:01 cboltz: if we are brave we could also run with a degraded RAID5 and gain an extra 4TB :)
2019-11-05T20:12:07 you need to see if the controller supports 12TB disks, bmwiedemann2
2019-11-05T20:12:17 is there anything we can delete (like we did in the past) ... not really good, but the only way out of this with the current setup?
2019-11-05T20:12:48 klein: what feature does the controller need for that?
We don't even need to boot off these
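Rough numbers behind this exchange: 4x10 TB in RAID5 yields (4-1)x10 = 30 TB usable, while RAID6 (as pjessen recommends) yields (4-2)x10 = 20 TB. At the quoted ~500 GB/month the data grows by about 6 TB/year, so 30 TB over the current ~19 TB in use would last on the order of two years.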
2019-11-05T20:12:54 jdsn: well, the data there is not critical in the sense of "doesn't exist anywhere else"
2019-11-05T20:13:28 but running without any redundancy might be asking for too much :-/
2019-11-05T20:13:29 kbabioch: sure, that's why I mentioned it, but still it's risky, because if one more disk dies, the whole service dies
2019-11-05T20:13:57 from the past on the server I see that we deleted home repos ...
2019-11-05T20:13:59 coming back to my question: anything (home projects, etc. pp.) we can delete for the time being?
2019-11-05T20:14:13 I would highly!!! recommend not to exchange anything in the existing machine
2019-11-05T20:14:26 better to break up the RAID than to exchange something
2019-11-05T20:14:45 the system is old, I brought it back to life with a lot of fun
2019-11-05T20:14:57 and I wouldn't recommend putting new disks in there, it is wasted money
2019-11-05T20:15:05 as the 800 euros for the 4TB disks was
2019-11-05T20:15:12 but back then, we had a dead machine
2019-11-05T20:15:18 at least now we have a machine
2019-11-05T20:16:30 kbabioch: you can smash the home repos
2019-11-05T20:16:43 and the resync will take at least 1 month or so via the slow lines
2019-11-05T20:16:55 so you gain 1 month to get the hetzner thing running
2019-11-05T20:17:00 if not, delete home repos again
2019-11-05T20:17:11 not the best solution, but it helps
2019-11-05T20:17:12 yeah, this will be taking more time / iterations i guess :-/
2019-11-05T20:17:25 but remember, people will start complaining
2019-11-05T20:17:36 as on their rsync targets, home repos will be deleted as well
2019-11-05T20:17:37 shouldn't we also add these deleted dirs/repos to an --exclude line in the rsync that copies the files over there?
2019-11-05T20:17:37 we can have a monthly cron job for that :D
2019-11-05T20:17:42 thomic: there will be more complaints if the server dies
2019-11-05T20:18:05 there are customers building in OBS... syncing to their private mirror via rsync.o.o, specifically their home repo
2019-11-05T20:18:21 and they always complain when it goes down
2019-11-05T20:18:28 (i mean the home repos)
2019-11-05T20:18:33 as they use it for production
2019-11-05T20:18:37 well, more people will complain if nothing works anymore :-)
2019-11-05T20:18:42 yes... i see
2019-11-05T20:18:47 i did it in the past
2019-11-05T20:18:48 it works
2019-11-05T20:18:56 but be prepared for whining people
2019-11-05T20:19:07 klein: 21:17:37< klein> shouldn't we also add these deleted dirs/repos to an --exclude line in the rsync that copies the files over there?
2019-11-05T20:19:10 nope
2019-11-05T20:19:20 you don't want to touch the rsync-fun now :D
2019-11-05T20:19:41 ok :-)
2019-11-05T20:19:49 dont destroy two things at the same time
2019-11-05T20:20:10 mcaj_away: well, it's a very bad behaviour
2019-11-05T20:20:15 we could also be more subtle with home: and do find -mtime +30 -delete or such
2019-11-05T20:20:18 as scanner.o.o is seeing home repos this second
2019-11-05T20:20:24 and the other second it's gone
2019-11-05T20:20:35 wild idea: instead of deleting repos, freeze. Till a solution is implemented. :-?
2019-11-05T20:20:58 robin_listas: nay, people expect to get the latest updates from mirrors
2019-11-05T20:21:03 removing home will give us back 6tb: 6.1T home:/
2019-11-05T20:21:03 robin_listas: how exactly do you want to freeze the OBS?
2019-11-05T20:21:13 kbabioch: you can do it!
2019-11-05T20:21:18 asking...
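The two cleanup ideas floated here, as a hedged sketch (the rsync source and paths are guesses, not taken from the server):

    # keep the deleted repos from coming back on the next sync from the origin
    rsync -a --delete --exclude='home:/' rsync://origin.example/opensuse/ /srv/pub/opensuse/
    # or, more subtle, as suggested: drop only home: files untouched for 30+ days
    find /srv/pub/opensuse/repositories/home\:/ -type f -mtime +30 -delete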
2019-11-05T20:21:24 the problem is that we're just mirroring what obs is doing
2019-11-05T20:21:28 we cannot freeze obs
2019-11-05T20:21:32 sure, we just tell the guys to freeze OBS
2019-11-05T20:21:43 until somebody pays for the rsync.o.o replacement
2019-11-05T20:21:49 that might give that project some new drive
2019-11-05T20:21:49 :D
2019-11-05T20:22:06 Or ask devs to brake deving as much as they can
2019-11-05T20:22:07 indeed ;-)
2019-11-05T20:22:44 what about a *really bad* idea ... take an external disk with like 10TB capacity, plug it into a USB 3.0 port and move the home repos there?
2019-11-05T20:22:57 thomic: yes
2019-11-05T20:23:04 well, actually a good thing to do is to talk to the devops guys about OBS repos that waste a lot of space
2019-11-05T20:23:13 * cboltz wonders if that old server has USB 3.0
2019-11-05T20:23:16 they do a clean-up round from time to time
2019-11-05T20:23:21 if you ask for it
2019-11-05T20:23:27 mcaj_away: even this would only be a mid-term solution ... i.e. we have to buy and order it ...
2019-11-05T20:23:39 mcaj_away: AHHHHHHHHHHHHHHHHHHHHHHHHHH!
2019-11-05T20:23:58 it's like it's there but just slow ....
2019-11-05T20:24:20 "hey what's that USB disk in your data center for?" - "ah never mind, just holds some very important data"
2019-11-05T20:24:32 no usb 3 controller on that machine
2019-11-05T20:24:38 i guess so^^^
2019-11-05T20:24:42 it is freaking old
2019-11-05T20:24:47 just checked, there isnt
2019-11-05T20:24:49 BTW what about excluding 42.3 repos ...
2019-11-05T20:24:51 :D
2019-11-05T20:24:57 and have fun transferring 6tb over usb 2.0 :-)
2019-11-05T20:25:08 free pcie slot? *duckandrun*
2019-11-05T20:25:13 thomic: we could attach two disks, and make a RAID1 *g,d&r*
2019-11-05T20:25:50 ok, but on a serious note ... let's do: a.) delete home (for now, as we are at 100% capacity) ... and b.) ask hetzner for sponsoring ... otherwise we won't finish with this topic
2019-11-05T20:26:09 any (strong) objections / pragmatic suggestions?
2019-11-05T20:26:10 jdsn: https://www.seedhost.eu/dedicated-seedboxes.php
2019-11-05T20:26:15 maybe you can ask there as well^^
2019-11-05T20:26:40 yes, let's do it like that
2019-11-05T20:27:00 yes, and c) if we don't get it sponsored, find someone at SUSE to pay for it
2019-11-05T20:27:59 kbabioch: but we do not need to delete all of home
2019-11-05T20:27:59 I would force them to pay for it
2019-11-05T20:28:05 just to get a bit of pain as well
2019-11-05T20:28:13 after years of ignoring the topic
2019-11-05T20:28:16 but hey :D
2019-11-05T20:28:18 just my 2 cents
2019-11-05T20:28:29 bmwiedemann2: not sure what will happen if we remove files "randomly" (i.e. only old and/or new ones)
2019-11-05T20:28:49 jdsn: ? are you writing a letter somewhere in etherpad or so? i would contribute and send it to seedboxes.eu?
2019-11-05T20:28:51 but im happy with whatever buys us some time and gives us back some of the 6tb
2019-11-05T20:29:26 AFAIK mirrorbrain should handle it even if you delete just a single file somewhere
2019-11-05T20:29:29 thomic: I am using my contact to talk to them
2019-11-05T20:29:38 can you selectively delete 42.3 and older home repos, for instance?
2019-11-05T20:30:14 yes, that should be possible
2019-11-05T20:30:25 thomic: seedhost? seedboxes?
2019-11-05T20:32:19 robin_listas: yes, find home: -path \*/openSUSE_Leap_42.\?/\* -delete or so
2019-11-05T20:32:27 jdsn: yes =)
2019-11-05T20:32:30 linode.com has a datacenter in Frankfurt, maybe they want to sponsor us if hetzner doesn't
2019-11-05T20:32:44 klein: do you have a contact there?
2019-11-05T20:33:16 I have had a vm on US linode for... maybe 5+ years... but have no contact
2019-11-05T20:33:17 -> find /srv/pub/opensuse/repositories/home\:/ -name 'openSUSE_Leap_42.3'
2019-11-05T20:33:19 we can ask
2019-11-05T20:33:23 that's what i can offer
2019-11-05T20:33:30 kbabioch: go for it :D
2019-11-05T20:33:31 maybe I can open a ticket, and see what happens
2019-11-05T20:33:38 I have a contact at https://vpsfree.cz/
2019-11-05T20:33:41 or even the version from bmwiedemann2, to get rid of everything 42
2019-11-05T20:33:47 well, we should coordinate our sponsoring efforts ...
2019-11-05T20:33:55 not run around in headless chicken mode
2019-11-05T20:34:00 yeah, I don't like to do that
2019-11-05T20:34:08 kbabioch: I wonder how many openSUSE_1* are left in there
2019-11-05T20:34:31 malaka! just delete the home repos for now... people will complain anyways.
2019-11-05T20:34:34 maybe in the end we can have more cloudhat servers from different sponsors ^^
2019-11-05T20:34:53 those who use home repos for production have 42.3 in their production, as well as windows xp
2019-11-05T20:34:58 you can't change people
2019-11-05T20:35:31 hey, no Greek swearing plz
2019-11-05T20:35:43 https://etherpad.opensuse.org/p/rsyncsponsors
2019-11-05T20:36:36 pjessen: :-D
2019-11-05T20:36:47 ok, removing repos now ... takes a while
2019-11-05T20:37:17 it will buy us some time, but let's make sure to investigate the other options ...
2019-11-05T20:37:27 when it's done, please report back how much disk space those 42.x repos took ;-)
2019-11-05T20:37:27 we should *really* move on to other topics, though
2019-11-05T20:37:37 agree
2019-11-05T20:37:42 like the heroes meeting
2019-11-05T20:37:43 :-)
2019-11-05T20:37:48 exactly ;-)
2019-11-05T20:38:46 I have little to report, I have fixed the repopusher, but otherwise october has been very busy with other stuff
2019-11-05T20:39:06 forums - no progress. I think I need someone to push MFIT.
2019-11-05T20:40:02 I know this joke is getting old, but - maybe ask TSP for a flight to Provo, and take some safety boots with you?
2019-11-05T20:40:36 cboltz: oooh, nice idea! I'll file a TSP request right away. :-)
2019-11-05T20:41:46 RobertW and Renato are onsite in Provo - so maybe they could use their safety boots?
2019-11-05T20:42:30 sounds good, can you please ask them?
2019-11-05T20:42:50 I don't know those names, can someone send me their addresses please?
2019-11-05T20:43:57 I have a little update too. Testing the ichain plugin on https://progress-test.opensuse.org. I need to test the login status. I have logged in on connect, and opened progress-test.o.o. My status is logged in, but I cannot access any page. Still using the old backup db (201904xx), running the db on the same machine as progress-test.o.o. Will try another test case.
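A slightly safer variant of the find commands quoted above: measure first, then delete only the directories matching the EoL release (still a sketch, using the same path as in the log):

    # check how much space the 42.3 home repos actually hold
    find /srv/pub/opensuse/repositories/home\:/ -type d -name 'openSUSE_Leap_42.3' -exec du -sh {} +
    # then remove them; -prune keeps find from descending into what it just deleted
    find /srv/pub/opensuse/repositories/home\:/ -type d -name 'openSUSE_Leap_42.3' -prune -exec rm -rf {} +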
2019-11-05T20:45:02 (intermediate report: we've gained 0.6 tb by removing all of the 42.3 home repos)
2019-11-05T20:50:16 tuanpembual: progress-test.o.o looks quite good to me - I can access overview pages (like the user list or ticket list), but trying to view a ticket gives an internal error
2019-11-05T20:50:51 so even if there are still problems, you are making progress :-)
2019-11-05T20:51:42 maybe the ticket links are still using the original url: progress.o.o
2019-11-05T20:52:05 I'll debug it by looking at the DB dump.
2019-11-05T20:52:17 no, it links to progress-test
2019-11-05T20:52:43 the log should (hopefully) tell you why it errors out
2019-11-05T20:52:58 sure.
2019-11-05T20:54:14 hmm, do we have some breakage in the VPN?
2019-11-05T20:54:30 it worked for me ~2 hours ago, but now I can't connect anymore :-(
2019-11-05T20:54:52 same for me.
2019-11-05T20:56:01 the most relevant message is probably AUTH: Received control message: AUTH_FAILED
2019-11-05T20:56:10 The same for me :(
2019-11-05T20:56:18 sudo: unknown uid 1366800077: who are you ?
2019-11-05T20:56:46 sounds like we might have a problem with FreeIPA...
2019-11-05T20:57:13 (the initial connection and cert validation work, the failing part is probably the username/password check)
2019-11-05T20:57:37 I will check the VM on atreju ...
2019-11-05T20:57:39 just a second
2019-11-05T20:57:48 cboltz: ah, I was wondering about that
2019-11-05T20:59:41 yes ... the sssd service is down: Active: failed (Result: exit-code)
2019-11-05T20:59:58 great :-/
2019-11-05T21:00:08 does the log indicate why it failed?
2019-11-05T21:00:29 fixed, but there was no reboot ... so I'm not sure why that happened
2019-11-05T21:01:01 feel free to investigate it ...
2019-11-05T21:02:50 hmm, /var/log/sssd/sssd_infra.opensuse.org.log looks interesting[tm]
2019-11-05T21:03:16 it's 22:00 CET for me, time for dog beer ...
2019-11-05T21:03:52 I know it's late, but - should we do some planning for the offsite meeting?
2019-11-05T21:04:17 like topics to discuss, maybe workshops etc.?
2019-11-05T21:04:20 how many people will join?
2019-11-05T21:04:28 what i need to know: where did you go the last time for dinner on friday? and when?
2019-11-05T21:04:32 i need to make some reservations
2019-11-05T21:04:42 we are 10-15 people as of now
2019-11-05T21:05:54 what about Zeit und Raum?
2019-11-05T21:05:58 kbabioch: you mean, you don't want to book the same location twice?
2019-11-05T21:06:10 im also fine booking the same location
2019-11-05T21:06:14 i was not part of this event last time
2019-11-05T21:06:18 so im asking for some guidance here
2019-11-05T21:06:38 then maybe ask for recommendations/wishes :)
2019-11-05T21:07:20 I can probably look up where we went in my mail archive, but OTOH - if you know a nice place, just make a reservation there
2019-11-05T21:07:36 or one of the top 10 ^^ ? ( https://theculturetrip.com/europe/germany/articles/the-top-10-bars-in-nuremberg-germany/ )
2019-11-05T21:07:38 ok ... and time-wise?
2019-11-05T21:07:48 (at theculturetrip.com)
2019-11-05T21:07:50 pjessen and oreinert will arrive at ~18:00 (or so)
2019-11-05T21:08:07 what about dinner at 19:00?
2019-11-05T21:08:28 fine with me ...
2019-11-05T21:08:58 should be fine.
2019-11-05T21:09:16 okay, will let you know via mail
2019-11-05T21:09:20 I'm also fine with 19:00
2019-11-05T21:09:30 is there anything else? has everyone (outside of suse) been contacted and is set up?
2019-11-05T21:09:54 for the suse employees ... you'll need to speak to your manager and book a hotel via bcd when you are remote
2019-11-05T21:10:18 otherwise we have a dinner on friday ... and two days of sessions ...
2019-11-05T21:10:39 any topics that we definitely want to discuss and that are not on the agenda yet?
2019-11-05T21:10:48 also, is any preparation needed / recommended?
2019-11-05T21:11:39 is the SUSE guest network still restricted to http and https? If yes, I can recommend a nice VPN on port 443 somewhere ;-)
2019-11-05T21:11:52 i'll add some stuff for the agenda, via email or the list
2019-11-05T21:12:18 cboltz: to be honest ... not sure, I'm not using it much :-/
2019-11-05T21:13:10 I need to go ... So CU next week on Friday in Nuremberg ;)
2019-11-05T21:13:42 when I was last there (in May), my VPN tunnel was quite helpful ;-) and I'd be surprised if it had changed since then
2019-11-05T21:14:48 but maybe that's only a technical detail - traditionally, we rarely use computers during the heroes meeting ;-)
2019-11-05T21:15:22 *** mcaj_away has left #opensuse-admin
2019-11-05T21:17:24 ok ... seems like there is nothing else for the heroes meeting
2019-11-05T21:17:29 i will make a reservation for friday
2019-11-05T21:18:21 maybe also for saturday?
2019-11-05T21:18:52 yup, makes sense. and during the day ... we will have to order something (e.g. pizza)
2019-11-05T21:19:24 sounds like a plan :-)
2019-11-05T21:22:39 so - thanks everybody for joining the meeting today, and see you in Nuremberg soon!
2019-11-05T21:23:43 thanks all, and good morning.
2019-11-05T21:27:16 regarding sssd:
2019-11-05T21:27:40 👋
2019-11-05T21:27:42 it looks like an update got installed today, and restarting sssd (as part of that update) failed
2019-11-05T21:28:34 systemctl status sssd -> https://paste.opensuse.org/69658991
2019-11-05T21:30:03 we had some other services fail after an update, too. Seems to be rather common these days
2019-11-05T21:30:05 I don't know if it is of interest, but a kernel update for 15.1 was announced an hour ago.
2019-11-05T21:30:13 cboltz: I don't know who is managing the FreeIPA system, but now that Fedora 31 is available, when the FreeIPA box is upgraded to F31, you should be able to use ipsilon with it again
2019-11-05T21:30:20 since Ipsilon in F31 is now Python 3
2019-11-05T21:31:15 I'm the maintainer of the `ipsilon` package in Fedora, so if there are issues, let me know and I can help resolve them
2019-11-05T21:31:15 robin_listas: I think there could be another kernel update next week.
2019-11-05T21:32:25 (sorry I missed earlier today... I've been in a SUSE Experts Days thing all day today...)
2019-11-05T21:34:17 Eighth_Doctor: you don't want to know which version of fedora our freeipa instance is running on :-/
2019-11-05T21:34:26 looks like that sssd update broke sssd on half of our Leap 15 VMs :-(
2019-11-05T21:34:37 I'll restart it everywhere using salt
2019-11-05T21:34:39 kbabioch: oh no
2019-11-05T21:34:53 automatic updates + no monitoring & testing ... what could possibly go wrong ;-)
2019-11-05T21:34:59 kbabioch: do we need to stage an intervention here? :(
2019-11-05T21:35:28 Eighth_Doctor: yes
2019-11-05T21:35:38 it's F24 ...
2019-11-05T21:35:44 will be lots of fun to upgrade it
2019-11-05T21:35:49 oh god
2019-11-05T21:35:54 probably dumping everything and re-installing will be easier ;-)
2019-11-05T21:35:56 well, if I can help, I'd be happy to
2019-11-05T21:36:47 well, do you happen to know if we can upgrade from f24 to f31 (one at a time)?
2019-11-05T21:37:07 or has the migration train already left?
2019-11-05T21:37:14 not even sure if the mirrors still have the packages, etc.
2019-11-05T21:37:54 ah, i was wrong ... it's f25 :-)
2019-11-05T21:38:04 VERSION="25 (Cloud Edition)"
2019-11-05T21:52:23 *** oreinert has quit IRC (Quit: oreinert)
2019-11-05T21:54:21 tuanpembual: sudo should work again (actually since 20 minutes ago)
2019-11-05T21:57:38 *** bmwiedemann2 has quit IRC (Ping timeout: 240 seconds)
2019-11-05T21:57:57 I don't think it is a good idea to upgrade F24>F31; we should create a new instance, install FreeIPA, and migrate the data
2019-11-05T22:01:20 that, or switch to Æ-DIR ;-) (also a possible topic for the meeting in Nuremberg)
2019-11-05T22:01:51 given all the fun we had with sssd, it can only get better
2019-11-05T22:02:11 migrating to aedir is a bigger project
2019-11-05T22:02:18 upgrading freeipa is "easy"
2019-11-05T22:02:22 but we can discuss this
2019-11-05T22:03:03 I'm quite sure Michael would love to do that migration on the server side (data migration from FreeIPA)
2019-11-05T22:03:40 and on the client side, doing the changes with salt shouldn't be too hard
2019-11-05T22:05:14 personally, I'd even consider moving as much as possible out of LDAP (like plaintext files for DNS), but that's another topic ;-)
2019-11-05T22:05:57 I would like it if we discussed centralized auth tools
2019-11-05T22:07:07 we need something that is not RHEL to use at openSUSE, or we can enforce salt being the only thing that manages users (and enforce some rules)
2019-11-05T22:08:50 and of course, move DNS management out of ldap
2019-11-05T22:18:18 kbabioch: any idea why gitlab randomly gives 500 errors while running the tests?
2019-11-05T22:59:49 *** jadamek2 has joined #opensuse-admin
2019-11-05T23:03:14 *** jadamek has quit IRC (Ping timeout: 240 seconds)
2019-11-05T23:58:14 *** boombatower has quit IRC (Quit: Konversation terminated!)
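Closing note: the "restart it everywhere using salt" step from 21:34 could look roughly like this from the salt master (a sketch; the grain-based targeting is a guess, not taken from the log):

    # restart sssd on all Leap 15.x minions and check that it came back
    salt -G 'osrelease:15*' service.restart sssd
    salt -G 'osrelease:15*' service.status sssd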