2019-12-03T00:06:24 *** cboltz has quit IRC () 2019-12-03T03:08:30 *** okurz_ has joined #opensuse-admin 2019-12-03T03:09:39 *** okurz has quit IRC (Ping timeout: 265 seconds) 2019-12-03T03:09:39 *** okurz_ is now known as okurz 2019-12-03T03:16:29 *** srinidhi has joined #opensuse-admin 2019-12-03T06:43:07 *** ldevulder_ has joined #opensuse-admin 2019-12-03T06:46:17 *** ldevulder has quit IRC (Ping timeout: 252 seconds) 2019-12-03T07:10:50 *** ldevulder__ has joined #opensuse-admin 2019-12-03T07:14:13 *** ldevulder_ has quit IRC (Ping timeout: 265 seconds) 2019-12-03T07:33:56 *** srinidhi has quit IRC (Quit: Leaving.) 2019-12-03T07:34:31 *** srinidhi has joined #opensuse-admin 2019-12-03T07:40:18 *** srinidhi has quit IRC (Ping timeout: 245 seconds) 2019-12-03T07:48:29 *** ldevulder_ has joined #opensuse-admin 2019-12-03T07:52:02 *** ldevulder__ has quit IRC (Ping timeout: 268 seconds) 2019-12-03T08:00:46 *** matthias_bgg has joined #opensuse-admin 2019-12-03T08:05:00 *** jadamek has joined #opensuse-admin 2019-12-03T08:08:17 *** ldevulder_ is now known as ldevulder 2019-12-03T08:10:09 *** ldevulder_ has joined #opensuse-admin 2019-12-03T08:13:40 *** ldevulder has quit IRC (Ping timeout: 265 seconds) 2019-12-03T08:31:06 *** srinidhi has joined #opensuse-admin 2019-12-03T08:58:23 *** ldevulder_ is now known as ldevulder 2019-12-03T09:39:17 *** srinidhi has quit IRC (Ping timeout: 240 seconds) 2019-12-03T09:41:57 *** sysrich_ has joined #opensuse-admin 2019-12-03T09:42:17 *** sysrich has quit IRC (Ping timeout: 240 seconds) 2019-12-03T09:42:17 *** adamm has quit IRC (Ping timeout: 240 seconds) 2019-12-03T09:43:34 *** adamm has joined #opensuse-admin 2019-12-03T09:52:12 *** srinidhi has joined #opensuse-admin 2019-12-03T10:02:45 *** cboltz has joined #opensuse-admin 2019-12-03T10:11:38 *** srinidhi has quit IRC (Ping timeout: 276 seconds) 2019-12-03T10:40:59 *** srinidhi has joined #opensuse-admin 2019-12-03T10:50:16 *** srinidhi has quit IRC (Ping timeout: 265 seconds) 2019-12-03T11:16:14 *** srinidhi has joined #opensuse-admin 2019-12-03T12:10:27 *** matthias_bgg has quit IRC (Read error: Connection reset by peer) 2019-12-03T12:10:56 *** matthias_bgg has joined #opensuse-admin 2019-12-03T12:15:47 *** srinidhi has quit IRC (Ping timeout: 250 seconds) 2019-12-03T12:22:39 *** cboltz has quit IRC () 2019-12-03T12:26:33 *** darix has left #opensuse-admin ("All rights reversed") 2019-12-03T12:37:37 *** srinidhi has joined #opensuse-admin 2019-12-03T12:55:26 *** srinidhi has quit IRC (Ping timeout: 276 seconds) 2019-12-03T13:09:23 *** darix has joined #opensuse-admin 2019-12-03T13:09:49 *** darix has left #opensuse-admin 2019-12-03T13:13:28 *** jadamek2 has joined #opensuse-admin 2019-12-03T13:17:38 *** jadamek has quit IRC (Ping timeout: 268 seconds) 2019-12-03T14:54:14 *** srinidhi has joined #opensuse-admin 2019-12-03T14:57:25 *** Son_Goku has joined #opensuse-admin 2019-12-03T15:18:02 *** srinidhi has quit IRC (Ping timeout: 265 seconds) 2019-12-03T15:37:50 *** srinidhi has joined #opensuse-admin 2019-12-03T16:04:30 *** Son_Goku is now known as Conan_Kudo 2019-12-03T16:04:37 *** Conan_Kudo is now known as Son_Goku 2019-12-03T16:04:46 *** Son_Goku has quit IRC (Quit: "真実はいつも一つ!" -- 工藤新一) 2019-12-03T16:20:51 *** matthias_bgg has quit IRC (Read error: Connection reset by peer) 2019-12-03T16:21:27 *** matthias_bgg has joined #opensuse-admin 2019-12-03T16:41:54 *** ldevulder_ has joined #opensuse-admin 2019-12-03T16:45:02 *** ldevulder has quit IRC (Ping timeout: 265 seconds) 2019-12-03T16:45:02 *** srinidhi has quit IRC (Quit: Leaving.) 2019-12-03T16:47:19 *** srinidhi has joined #opensuse-admin 2019-12-03T17:37:36 *** cboltz has joined #opensuse-admin 2019-12-03T17:52:10 *** kl_eisbaer has joined #opensuse-admin 2019-12-03T17:59:56 hi all 2019-12-03T18:01:13 hi 2019-12-03T18:01:31 you are quite early - the meeting is in an hour ;-) 2019-12-03T18:03:56 :) wrong timezone again. 2019-12-03T18:17:21 *** matthias_bgg has quit IRC (Ping timeout: 265 seconds) 2019-12-03T18:55:59 *** oreinert has joined #opensuse-admin 2019-12-03T18:59:39 *** jdsn has quit IRC (Remote host closed the connection) 2019-12-03T19:01:20 hi everybody, and welcome to the heroes meeting ;-) 2019-12-03T19:01:34 our usual topics are listed on https://progress.opensuse.org/issues/59121 2019-12-03T19:01:39 *** jdsn has joined #opensuse-admin 2019-12-03T19:01:40 good evening 2019-12-03T19:02:08 does someone from the community have any questions? 2019-12-03T19:03:48 doesn't look so, so let's continue with the status reports 2019-12-03T19:04:09 as discussed in Nuremberg, let's try to limit this to the reports, and have discussions afterwards 2019-12-03T19:04:17 who wants to start? 2019-12-03T19:04:25 I can share some information about widehat 2019-12-03T19:04:40 we got a 'verbal' appoval from SUSE that they will "most likely" find the budget for a new widehat machine 2019-12-03T19:04:47 so we can replace the old machine soon 2019-12-03T19:05:10 we also have a configuration that should fit for the next few years 2019-12-03T19:05:44 an interim solution with the move to a hetzner server is not really needed as a short downtime for the replacement should not be an issue 2019-12-03T19:06:22 questions? 2019-12-03T19:06:53 no, just a thank you ;-) 2019-12-03T19:07:18 Is Hetzner still an option for an additional mirror? 2019-12-03T19:07:43 if they offer to sponsor a server long term, why not 2019-12-03T19:07:51 but until now I did not hear back from them 2019-12-03T19:07:58 ok, thanks 2019-12-03T19:08:04 I had my wish forwarded to Martin Hetzner himself 2019-12-03T19:08:23 s/my/our/ 2019-12-03T19:08:47 Nothing much from me, it's a busy time of year. Spent time setting myself up, and familiarising myself with the way things work. No real accomplishments, just some wiki edits. And I will continue with that until next time. 2019-12-03T19:09:30 While I'm also familiarising myself (again) with the setup, I already have something... 2019-12-03T19:09:40 = Duplicate IP addresses in infra.opensuse.org network: = 2019-12-03T19:09:40 caasp-worker1.infra.opensuse.org. 300 IN A 192.168.47.47 2019-12-03T19:09:40 helloworld.infra.opensuse.org. 300 IN A 192.168.47.47 2019-12-03T19:09:40 aedir1.infra.opensuse.org. 300 IN A 192.168.47.57 2019-12-03T19:09:40 osc-collab-future.infra.opensuse.org. 300 IN A 192.168.47.57 2019-12-03T19:09:40 aedir2.infra.opensuse.org. 300 IN A 192.168.47.58 2019-12-03T19:09:40 mailman-test.infra.opensuse.org. 300 IN A 192.168.47.58 2019-12-03T19:09:41 Someone should fix this... 2019-12-03T19:10:14 I did not check if those affected machines are currently online - but IF they are, someone (probably the one who set them up) should change their IPs 2019-12-03T19:10:32 = status.opensuse.org = 2019-12-03T19:10:46 Both machines are now running 15.1 and the latest stable Cachet code 2019-12-03T19:10:54 https://progress.opensuse.org/projects/opensuse-admin-wiki/wiki/Statusopensuseorg 2019-12-03T19:10:54 https://progress.opensuse.org/projects/opensuse-admin-wiki/wiki/Status1opensuseorg 2019-12-03T19:10:54 https://progress.opensuse.org/projects/opensuse-admin-wiki/wiki/Status2opensuseorg 2019-12-03T19:11:02 is the documentation (updated) 2019-12-03T19:11:10 = Documentation in general = 2019-12-03T19:11:16 https://progress.opensuse.org/projects/opensuse-admin-wiki/wiki/Machines 2019-12-03T19:11:16 => currently lists ~90 (!) machines 2019-12-03T19:11:40 This brings me to a topic we might need to discuss / make a decision... 2019-12-03T19:11:48 Q: FreeIPA allows to define Hosts. This would currently help to get a short overview of available machines and their functions. In addition, it allows to store machines MAC addresses, public SSH keys and to define roles (functional roles as well as sudoers for example) and group them. 2019-12-03T19:11:48 This would not make the wiki obsolete (as the description field in FreeIPA does not allow wiki semantic), but could give a good, first overview. 2019-12-03T19:11:48 Backside: people need to add/maintain information at least in 3 different systems: 2019-12-03T19:11:48 * FreeIPA 2019-12-03T19:11:48 * Admin-Wiki 2019-12-03T19:11:49 * Salt 2019-12-03T19:11:49 Fact: there is currently not one single page which gives an overview about machines and their 2019-12-03T19:11:51 *** mstroeder has joined #opensuse-admin 2019-12-03T19:12:17 So I would love to have this discussed either here or via mailing list. 2019-12-03T19:12:23 (but I have more... ;-) 2019-12-03T19:12:32 = Monitoring cleanup = 2019-12-03T19:12:32 Removed/ fixed some machines. 2019-12-03T19:12:32 Question: what about the machines that are currently NOT monitored at all? 2019-12-03T19:12:32 * aedir{1,2} 2019-12-03T19:12:32 * caasp*/kubic (16 machines) 2019-12-03T19:12:33 * ci-opensuse 2019-12-03T19:12:33 * narwal (6 machines) 2019-12-03T19:12:34 * pinot 2019-12-03T19:12:34 * ses-admin 2019-12-03T19:12:35 What about test machines in general? 2019-12-03T19:12:52 *** kbabioch has joined #opensuse-admin 2019-12-03T19:13:10 As I have no access to those machines, I can not really do anything here regarding monitoring. Only monitoring the available services 2019-12-03T19:13:31 So if some admin of those machines feels now trapped: please ping me 2019-12-03T19:13:34 = openSUSE:infrastructure repo = 2019-12-03T19:13:34 * Started with cleanup - and fixing packages 2019-12-03T19:13:34 + updated etherpad-lite to 1.7.5 (waiting for someone to deploy) 2019-12-03T19:13:34 + abuild-online-update is replaced with suse-online-update -> this requires adaptions on machines with the old package 2019-12-03T19:13:34 + adjusted repositories (enabled 15.2 and removed some old repos like SLE_12_SP3) -> might affect some machines that should either see an update or a migration 2019-12-03T19:13:34 2019-12-03T19:13:34 * started to work on Leap 15.2 images 2019-12-03T19:13:35 * Leap 15.1 image deployment is currently challenging: 2019-12-03T19:13:35 + need to wait for dracut to run into timeout 2019-12-03T19:13:36 + chroot into installed system 2019-12-03T19:13:36 + run grub2-mkconfig -o /boot/grub2/grub2.cfg 2019-12-03T19:13:37 + reboot 2019-12-03T19:13:54 = Security issues popped up during scan = 2019-12-03T19:13:54 * most obvious problems fixed 2019-12-03T19:13:54 + SSL ciphers enhanced 2019-12-03T19:13:54 + TLS 1.2 enforced 2019-12-03T19:13:54 * status1&2 upgraded 2019-12-03T19:13:54 * daffy1&2 upgraded 2019-12-03T19:13:54 Real old machines (SLE11): 2019-12-03T19:13:55 * boosters 2019-12-03T19:13:55 * narwal{,2} 2019-12-03T19:13:56 * redmine 2019-12-03T19:13:56 * community 2019-12-03T19:13:57 Still some 42.3 machines online: 2019-12-03T19:14:08 = Salt = 2019-12-03T19:14:09 What is the common procedere for Salt? 2019-12-03T19:14:09 I'm asking, because I see some long hanging merge requests. Wouldn't it be a good idea to have some arrangements like: 2019-12-03T19:14:09 * emergency updates fixing something that is already broken => direct 2019-12-03T19:14:09 * stuff that is interesting only for machines that the requester maintains => direct 2019-12-03T19:14:09 * stuff that nobody was able to review for more than 2 months => direct 2019-12-03T19:14:09 And in turn: 2019-12-03T19:14:10 * stuff that tends to break existing stuff => request 2019-12-03T19:14:10 * stuff that affects other machines where the submitter != machine-admin => request 2019-12-03T19:14:11 aedir{1,2} are VMs for Æ-DIR PoC (see progress #39872) 2019-12-03T19:14:28 * kl_eisbaer is done with status report 2019-12-03T19:15:03 mstroeder: that was my guess :-) - but I want to know if those machines should be monitored? 2019-12-03T19:15:26 IMHO every machine which provides an external visible service should be monitored. But this is just my personal wish. 2019-12-03T19:15:43 wow, that was a lot! - and I have a feeling that we can fill the meeting with discussing your questions ;-) 2019-12-03T19:15:50 but before we do that - more status reports? 2019-12-03T19:15:51 But even this "wish" leaves room for questions, as some test-instances are visible externally. 2019-12-03T19:16:17 aedir{1,2} are still not in production use. But feel free to monitor them. Because of conflicts with Python3 modules I had to disable salt on aedir1 though. 2019-12-03T19:16:23 I have update 2019-12-03T19:16:26 :D 2019-12-03T19:16:41 go ahead ;-) 2019-12-03T19:16:57 progress-test.o.o get some fixed plugins. some broken error page 500 have fixed. 2019-12-03T19:17:05 https://progress-test.opensuse.org/ 2019-12-03T19:17:12 mstroeder: I would leave the final decision up to you (especially as you should also provide the information of "what to monitor" 2019-12-03T19:17:50 please look for this. Next plan, if acceptable, please move to next step, using real db. 2019-12-03T19:17:59 tuanpembual: one word: wow :D 2019-12-03T19:18:41 backward, it still use manual installation. no salt stuff yet. 2019-12-03T19:19:13 for now, still using local mariadb. more detail at https://progress.opensuse.org/issues/27720 2019-12-03T19:19:20 thanks @kl_eisbaer 2019-12-03T19:19:36 tuanpembual: well, first it would be good to have a secure and up-to date installation. Salt (and other stuff) can IMHO follow later... 2019-12-03T19:20:17 tuanpembual: did you test the ticket system as well? Means: if you sent Emails, do they end up in the right queue ? 2019-12-03T19:20:38 mail is working. 2019-12-03T19:20:49 but I will test make new ticket now 2019-12-03T19:20:50 :D 2019-12-03T19:20:51 perfect1 :-) 2019-12-03T19:21:59 https://progress-test.opensuse.org/projects/opensuse-admin/files => the images are missing (looks like they are stored locally somewhere) 2019-12-03T19:22:49 tuanpembual: do you mind to create a project in gitlab, where you can put scripts and other stuff - and which can be used to file issues ? 2019-12-03T19:23:27 I'd argue that scripts should be hosted in the salt repo and listed as "file.managed" ;-) 2019-12-03T19:23:38 sure. I have some notes about installation and other stuff 2019-12-03T19:23:55 cboltz: is Salt unable to get sources / files from more than one repo? 2019-12-03T19:24:42 we can use "git.cloned", but IMHO that only makes sense for repos with lots of files 2019-12-03T19:25:09 if we are talking about a few scripts (which get managed by us anyway), using an external repo sounds like superfluous overhead IMHO 2019-12-03T19:25:24 had successfully create new ticket. and an email arrived at my inbox 2019-12-03T19:25:43 kl_eisbaer: whats the goal of having multiple repos? 2019-12-03T19:26:02 jdsn: for me the goal is to keep things separated that are separated 2019-12-03T19:26:20 pushing everything into one single repo ends up in a mess sooner or later. 2019-12-03T19:26:35 does not matter if it is openSUSE:infrastructure or any git/svn repo 2019-12-03T19:27:00 What - for example - has my issue report above to do with our salt repository? 2019-12-03T19:27:28 ok, I am unsure about the level of separateness - but salt even works without git, so the source does not matter - its just a matter of taste how to integrate the other files 2019-12-03T19:27:54 instead, if a repo is clearly defined to host one special tool / machine scripts, I see the benefit for the maintainer to work independenly 2019-12-03T19:28:21 jdsn: jip: salt can work with plain files - it doesn't matter where they come from. 2019-12-03T19:28:27 kl_eisbaer: ... which can potentially break stuff if the does not think about the other repo :) 2019-12-03T19:28:41 right 2019-12-03T19:28:54 kl_eisbaer: I understand your reasons, but OTOH I'd like to avoid having 100 repos - keeping an overview of everything would be a nightmare 2019-12-03T19:29:07 On the other side, I would love to give people as much freedom as possible. - and the one who breaks stuff should be able to fix it as well ;-) 2019-12-03T19:29:31 so, what next plan to do for new redmine? 2019-12-03T19:29:34 in good times, "he" breaks only stuff "he" maintains anyway 2019-12-03T19:29:45 kl_eisbaer: what freedom do we take if the files are in the same repo? you can still work independent 2019-12-03T19:29:51 or what am I missing 2019-12-03T19:29:52 ? 2019-12-03T19:30:02 cboltz: do you really have an overview right now? 2019-12-03T19:30:28 jdsn: everyone has to work under the conditions the whole team has defined for this repo 2019-12-03T19:30:36 see my question about the infra/salt repo above. 2019-12-03T19:30:39 maybe not 100% (because for example I'm not a LDAP expert), but in general I'd say that I have a quite good overview 2019-12-03T19:30:50 kl_eisbaer: I think this topic deserves to be detailled eg. in an etherpad so we all are on the same page 2019-12-03T19:30:50 Do you really want to have a merge request hanging for more than a year? 2019-12-03T19:30:58 you obviously know more than we do 2019-12-03T19:31:26 jdsn: no, I do not know. I just have my personal feelings and my personal experience - like everyone of us has 2019-12-03T19:31:51 I just see it very often that a very restricted master branch tends to move contributors away 2019-12-03T19:32:13 kl_eisbaer: what are the reasons? lets address them! 2019-12-03T19:32:15 ...and I posted the solution that my team is running above 2019-12-03T19:32:34 The main reason is that it takes ages before a merge request get reviewed - or even accepted 2019-12-03T19:32:51 but that is independent of the number of repos 2019-12-03T19:33:12 kl_eisbaer: IMHO we "just" need to adjust our policy how merge requests get handled - allow to self-merge simple and/or urgent things 2019-12-03T19:33:16 even more, one shared repo is less work to look at, so the MRs get reviewed faster 2019-12-03T19:33:22 If I fix a typo somewhere, if I change stuff that clearly is maintained only by myself - why do I need to wait weeks or months before my changes get into the master branch? 2019-12-03T19:33:38 but thats a topic for the MR policy that you already proposed - easier merging 2019-12-03T19:33:39 and also define a "timeout" which allows to merge without a formal review 2019-12-03T19:33:57 jip. This is my proposal for "team repos" like the Salt one 2019-12-03T19:34:40 but if tuanpembual has written some scripts to make the usage of progress more conveniant for each of us - why should he required to push them into the salt repo ? 2019-12-03T19:34:41 shall we try that, and see how we get along with it for a few months? 2019-12-03T19:34:58 (sorry, tuanpembual, just taking you as example here) 2019-12-03T19:35:06 kl_eisbaer: because they may be part of the system configuration 2019-12-03T19:35:15 without these scripts the host is incomplete 2019-12-03T19:35:45 but if we find a nice way to reference other sources we can try that too 2019-12-03T19:36:04 if it's for a single project or machine, we could also package the scripts and just install that 2019-12-03T19:36:05 I just see no problem to require someone to move some scripts to an existing repo 2019-12-03T19:36:13 jdsn: you are right. But if I am currently working on those scripts, I will not push them into a repository where I have to wait for days (or even hours) before I can proceed 2019-12-03T19:36:17 if the merging is easy, it should not matter 2019-12-03T19:36:39 but thats the same topic again -> easier merging 2019-12-03T19:36:50 If we can agree that those scripts (or stuff that clearly belongs only into a dedicated area) can be directly pushed, I'm in :-) 2019-12-03T19:36:55 *** mstroeder has quit IRC (Ping timeout: 250 seconds) 2019-12-03T19:37:01 again: shall we try Lars' proposal of easier merging? 2019-12-03T19:37:07 ...and similar to issue tracking and wiki usage. 2019-12-03T19:38:05 we'll "only" need to give more people write access to master - not really a problem, and indeed worth a try 2019-12-03T19:38:23 are there other opinions? will we have a voting on that? 2019-12-03T19:38:27 I'd still propose to handle everything via merge requests (even if you self-accept them) 2019-12-03T19:38:28 I'm even happy to enhance the README with the rules posted above ;-) 2019-12-03T19:39:03 cboltz: looks like the OBS approach ;-) 2019-12-03T19:39:04 cboltz: yea, that creates some more visibility, ok 2019-12-03T19:39:12 *** mstroeder has joined #opensuse-admin 2019-12-03T19:39:14 reason: a MR sends out a mail to everybody (who subscribed), so you might get reviews "for free" 2019-12-03T19:39:27 (in worst case, you'll have to do another MR with the proposed improvements ;-) 2019-12-03T19:39:46 "MR + self-accept in special cases" => +1 2019-12-03T19:39:51 don't forget that an important part of PRs is to allow others to keep track of what's happening 2019-12-03T19:40:38 besides, isn't it possible to get salt to run a change without committing it first? (possible noob question) 2019-12-03T19:40:44 +1 as long as you define special cases :) 2019-12-03T19:41:45 also, Google commits *all* of their software in a single repository, so why can't we do that, too? 2019-12-03T19:41:54 oreinert: IIRC there's a way to somehow specify the git branch to use, but I'd have to look it up 2019-12-03T19:42:34 (obviously you'll first need to commit to that branch ;-) 2019-12-03T19:42:37 * kl_eisbaer is normally "trying out" things directly from the saltmaster. ;-) 2019-12-03T19:42:47 but this depends on the setup 2019-12-03T19:42:58 personally, I have some test VMs on my laptop and can test things on them 2019-12-03T19:43:51 to me is sounds like kl_eisbaer wants to fire off a rapid succession of commits/PRs while developing, and that's not really what you're supposed to do. PRs is for the final things (or as close to it as you can get), also to reduce load on reviewers. 2019-12-03T19:44:18 well: I'm a fan of "release often".... 2019-12-03T19:44:39 sure - but that's not the same as "release during development" 2019-12-03T19:44:51 oreinert: so I have to admit that you are probably right with this 2019-12-03T19:45:08 as long as you don't have one MR followed by two "fix previous MR" MRs ;-) I'm fine with "release often" 2019-12-03T19:45:32 if we can't make (local) salt changes and run/test them without committing and pushing to the repo (maybe also via a PR) then the process is wrong, I'd argue. 2019-12-03T19:45:46 I am more from the DevOPS approach - and YES, this sometimes breaks things. But on the other side, this gives some possibility for fast development 2019-12-03T19:46:04 sure, "fix my previous mistake" PRs are normal. :-) 2019-12-03T19:47:02 yeah, no problem as long as we have (on average) more "$foo" MRs than "fix previous MR for $foo" ;-) 2019-12-03T19:47:07 My experience with this is just that people tend to hold their changes back (because they need some love/beautify) - and suddenly notice that others already did "quick and dirty" what they wanted to achieve 2019-12-03T19:48:05 you shouldn't be that shy ;-) 2019-12-03T19:48:42 improvements in small steps are always welcome (and maybe even easier to review than one big MR including 20 of those steps) 2019-12-03T19:48:49 +1 2019-12-03T19:48:56 My current feeling is just that we sometimes outbrake ourselves, when we wait for "someone" who clicks on the "merge" button - more than one year later.... 2019-12-03T19:49:37 I agree completely 2019-12-03T19:49:51 i assume you mean it feels like a year waiting for approval? :-) 2019-12-03T19:49:55 *** mstroeder has quit IRC (Ping timeout: 250 seconds) 2019-12-03T19:49:57 If I find the time to work on openSUSE stuff, I just don't want to get stopped because some rules require that someone reviews my commits at 03:00 night 2019-12-03T19:50:39 oreinert: well - threre are indeed merge requests that started over a year ago - just in the salt repo 2019-12-03T19:51:05 ...and this is something that I do not understand 2019-12-03T19:51:18 if the change is a) small and trivial or b) only affecting "your" VM, I see no problem with self-accepting the MR 2019-12-03T19:51:29 cboltz: thanks. 2019-12-03T19:51:48 cboltz: and I would only extend this rule for "emergency updates" 2019-12-03T19:51:48 the obvious disadvantage is that you don't have someone to blame for not noticing the breakage it causes in the review, but that will be your choice ;-) 2019-12-03T19:51:56 kl_eisbaer: I remember we talked about them in Nürnberg. They are special, if I remember correctly - potentially harmful, and noone quite seems to know what the impact of merging is. I assume most PRs by far will not be like that. 2019-12-03T19:52:08 example: the given NTP servers are down and all hosts should get a replacement immediately 2019-12-03T19:52:13 agreed, emergency updates are another obvious category for self-merge 2019-12-03T19:52:42 i don't really mind direct commits for small changes without PR either 2019-12-03T19:53:10 Once I figured out what changed in the notify mechanism of the IRC-Bot, we could even think about pushing merge requests topics here 2019-12-03T19:53:13 as long as it's tracked in a VCS, I'm fine (instead of hacking directly on the box) 2019-12-03T19:53:54 oreinert: me as well (especially as a VCS has this nice "way-back-machine" interface ;-) 2019-12-03T19:54:05 I'd prefer MRs for everything - even if you self-merge within seconds, it will still send out some mails (which pushing to production directly doesn't) 2019-12-03T19:54:19 cboltz: +1 2019-12-03T19:54:23 cboltz: ...and this is IMHO a good compromize 2019-12-03T19:54:54 cboltz: +1 2019-12-03T19:55:12 *** mstroeder has joined #opensuse-admin 2019-12-03T19:56:01 ok, so on the technical side, we'll just need to give more people permissions to (self)accept MRs ;-) 2019-12-03T19:56:46 and on the practical side, I'm sure everybody has enough common sense to judge if a MR qualifies for one of the self-merge categories 2019-12-03T19:57:50 anything else on this topic, or can we switch to the next one? (+ define "next one" - any preferences?) 2019-12-03T19:57:51 kl_eisbaer: please define these categories, cause IMHO intrusive changes should get more than one voting 2019-12-03T19:58:14 jdsn: I'm on it... 2019-12-03T19:58:22 define them in the README I mean 2019-12-03T19:58:23 ok thanks 2019-12-03T19:58:42 but dont self-merge these changes :) 2019-12-03T19:58:51 give us a chance to review :) 2019-12-03T19:59:52 jdsn: argh! now you have me :-) 2019-12-03T20:00:58 3... 2... 1... merged, you had your chance *g,d&r* 2019-12-03T20:02:15 should we switch to the next topic? 2019-12-03T20:02:23 I'd propose documentation / machine list etc. which Lars brought up 2019-12-03T20:03:45 kl_eisbaer: I noticed some of the machines you added in the wiki are not in the heroes network - was adding them intentional? 2019-12-03T20:04:16 cboltz: it was just a DNS dump from FreeIPA : "dig -AXFR @127.0.0.1 infra.opensuse.org" 2019-12-03T20:04:44 cboltz: as this DNS domain (and the opensuse.org one) is maintained by the heroes, I see no reason to hide something ;-) 2019-12-03T20:05:07 Instead, I see it as requirement that the heroes KNOW what is running inside these domains 2019-12-03T20:05:37 agreed 2019-12-03T20:05:50 maybe we should add a comment saying "SUSE network" to the machines not in the heroes network? 2019-12-03T20:06:28 cboltz: that's the problem I described above... 2019-12-03T20:06:41 IMHO we need such documentation - but I'm unsure WHERE... 2019-12-03T20:06:59 there's nothing like a "wrong place for documentation" 2019-12-03T20:07:01 jdsn: https://gitlab.infra.opensuse.org/infra/salt/merge_requests/287 - Feuer frei! :-) 2019-12-03T20:07:09 the typical problem is "no documentation at all" 2019-12-03T20:07:15 cboltz: yes, but there is "too many places for outdated documentation" 2019-12-03T20:07:21 So we have: 2019-12-03T20:07:37 * FreeIPA (where we can add the machines and do other, crazy things with them) 2019-12-03T20:07:41 * progress wiki 2019-12-03T20:07:44 * Salt 2019-12-03T20:07:49 *** mstroeder has quit IRC (Ping timeout: 252 seconds) 2019-12-03T20:08:26 in general, I'd like to have the "quick overview" in salt (pillar/id/*) - which obviously only works for machines we have in salt 2019-12-03T20:08:27 If you look into FreeIPA, you will notice that there are currently 33 hosts listed 2019-12-03T20:08:50 for a) more details and b) machines not in salt (because they are in the SUSE network), the wiki is fine 2019-12-03T20:08:59 *** mstroeder has joined #opensuse-admin 2019-12-03T20:09:02 each machine with an IP address assigned, sometimes even MAC addresses or SSL/SSH certs 2019-12-03T20:09:17 ...and the possibility to define (for example) sudoer roles... 2019-12-03T20:10:07 I'm not sure if I like to have more things in FreeIPA - I try to avoid logging in there whenever possible ;-) 2019-12-03T20:10:08 If there are no objections, I am fine if we go with the wiki for now 2019-12-03T20:10:31 so - managing membership of a "$whatever-admins" group in FreeIPA is fine 2019-12-03T20:10:39 Maybe the new redmine allows to use some kind of API to update the list automatically 2019-12-03T20:10:46 but deploying the actual sudo permissions for this group should IMHO stay in salt 2019-12-03T20:11:17 ideally we should replace that list with ls pillar/id/ ;-) 2019-12-03T20:11:18 cboltz: ...and where is the documentation about this? :-) 2019-12-03T20:11:33 /dev/brain ;-) 2019-12-03T20:11:41 cboltz: don't get me wrong: I'm fine with your approach of avoiding FreeIPA 2019-12-03T20:12:10 but we should write this down (at least) in the wiki, to avoid that people start using FreeIPA for things that "we" did not agree upon 2019-12-03T20:12:39 that indeed makes sense 2019-12-03T20:13:23 we probably have some more things which are only documented in /dev/brain and missing in the wiki 2019-12-03T20:13:44 whenever you miss something in the wiki, feel free to document it 2019-12-03T20:13:52 I know that this is kind of a German approach to ask for more written guidance, but I guess it helps newbees here 2019-12-03T20:14:30 even what you write is wrong - I monitor the wiki changes, and will help to get those things fixed 2019-12-03T20:15:00 but adding them myself is sometimes hard because I'm "betriebsblind" ;-) 2019-12-03T20:15:09 cboltz: you are giving me more and more the impression that we don't need any additional monitoring at all, as long as we have you :-) 2019-12-03T20:15:24 even if it *only* helps newbees, it helps attract newbees 2019-12-03T20:15:29 lol 2019-12-03T20:15:54 I hope at least that the https://progress.opensuse.org/projects/opensuse-admin-wiki/wiki/Machines list will get correct during the next weeks 2019-12-03T20:16:09 yeah, sounds doable 2019-12-03T20:16:29 question: do we want to have a) all machines there or b) only machines we don't have in salt? 2019-12-03T20:16:42 cboltz: btw: I do not see any benefit from your "see also pillar/id/" comments there. Especially, as you do not link to the correct pillar/id/ directly 2019-12-03T20:16:59 cboltz: I would vote for a "quick overview" page in the wiki 2019-12-03T20:17:12 I know that b) means we'll have to look at two places, but OTOH it avoids having an outdated copy in the wiki 2019-12-03T20:17:29 as not everyone is familar with all this crazy IT stuff we always mention (like Salt, Pillars, Git and so on) 2019-12-03T20:18:08 I think we should find a way to attract newbees - and to get a quick overview, if we want to know something about any machine 2019-12-03T20:18:11 well, we should document how to clone the salt repo, and that people should look at the files in pillar/id/ 2019-12-03T20:18:28 even if you don't understand the detailed structure of those files, I'm sure the machine info is human-readable 2019-12-03T20:18:36 I even link to the progress wiki pages about a machine from monitor.opensuse.org 2019-12-03T20:19:39 good point - should we host a copy of the salt git repo on monitor.o.o, alias pillar/id/ into the docroot and link to the pillar/id/ files instead? 2019-12-03T20:19:44 cboltz: what about a simple table (like now), just extended with a short description about the machine and a link to the Salt Piller, if it exists 2019-12-03T20:20:20 cboltz: I'm fine with that - but this might open security problems, as the webserver is reachable from the outside 2019-12-03T20:21:00 So - if we want to link to the pillars in monitoring, we can even think about opening gitlab to the outside 2019-12-03T20:21:37 The wiki pages are also public - but people have to log in in redmine and get the correct access rights there 2019-12-03T20:21:44 that, or only rsync pillar/id/* to monitor.o.o to limit the possible damage 2019-12-03T20:22:07 cboltz: what about my approach and leave the wiki where it is now 2019-12-03T20:22:19 just with the two small extensions 2019-12-03T20:22:29 this would result in "everything on one page" 2019-12-03T20:22:58 ...and if the machine is in Salt, you (hehe) can add links to the salt pillars 2019-12-03T20:23:13 ...and if not, we can use additional wiki pages to provide a bit more information about the machine 2019-12-03T20:23:23 that's indeed an option 2019-12-03T20:23:51 can you do that for a few machines so that we see a practical example? 2019-12-03T20:23:52 Otherwise we could also add each and every machine (even if not reachable for us) into Salt 2019-12-03T20:24:17 cboltz: ok - taken as action item for me: enhance https://progress.opensuse.org/projects/opensuse-admin-wiki/wiki/Machines 2019-12-03T20:24:35 I like that idea - grepping salt is easier than reading the wiki (at least for me ;-) 2019-12-03T20:24:52 cboltz: I don't want to store much information in the wiki 2019-12-03T20:25:10 but I want to have one central point - and from there correct links to additional information 2019-12-03T20:25:14 +1 2019-12-03T20:25:48 This might help, for example, if an important machine is down - and you want to get quick information about it. 2019-12-03T20:26:16 * kl_eisbaer just thinks about "helloworld.infra.opensuse.org" or "login3.infra.opensuse.org" 2019-12-03T20:27:00 sounds like we have different definitions of getting quick information (hey, grep $whatever pillar/id/* _is_ fast!) ;-) 2019-12-03T20:27:25 cboltz: for this, you need to have access to YOUR checked out repo. This is some luxury I do not always have... 2019-12-03T20:27:57 ok, good argument 2019-12-03T20:28:03 as I already said, I'm not against your way - just start with a few machines so that we see how it will look, and can give feedback 2019-12-03T20:28:03 so my alternative would be to visit gitlab, behind a firewall, reachable only via vpn ... 2019-12-03T20:28:35 ... which sounds even more problematic if you don't even have the git checkout available... 2019-12-03T20:28:50 But it's already 21:30 - so what about another topic? :-D 2019-12-03T20:29:30 I think we now know what we need to do for the documentation, so - yes ;-) 2019-12-03T20:29:54 monitoring cleanup sounds like an easier one to me 2019-12-03T20:30:01 I like to get rid of the old SLE11 machines as soon as possible. So these might be my next targets... But there are even some 42.3 machines, that should see some "zypper dup" 2019-12-03T20:30:27 ...and I'm wondering if we really need 6 narwal machines? 2019-12-03T20:30:52 only narwal{5,6,7} are used, all setup with salt 2019-12-03T20:31:07 so we can shut down the old narwal machines, perfect! 2019-12-03T20:31:09 narwal{,2,3} are old machines and waiting for someone to shut them down ;-) 2019-12-03T20:31:21 jdsn? 2019-12-03T20:31:32 or should I get the honor to pull the plug? 2019-12-03T20:31:41 :D 2019-12-03T20:32:03 whoever is faster ;-) 2019-12-03T20:32:27 cboltz: can I add the new machines into monitoring? 2019-12-03T20:32:30 however, please (I'm afraid: manually) sync haproxy.cfg from anna to elsa - elsa might still reference the old narwals :-/ 2019-12-03T20:32:50 ok 2019-12-03T20:32:51 yes, please - the interesting thing to monitor is obviously port 80 2019-12-03T20:33:15 static.o.o is the most important domain to monitor 2019-12-03T20:33:38 if you want to monitor all domains narwal* serve, see pillar/id/narwal{5,6,7}* for the domain list 2019-12-03T20:33:41 that will not change - but full fs_/ is something I like to have pro-actively monitored... 2019-12-03T20:34:13 right, that's something we should monitor on all machines 2019-12-03T20:34:24 cboltz: that's something we have haproxy for ;-) 2019-12-03T20:35:06 ;-) 2019-12-03T20:35:19 FYI: narwal{,2,3} are gone from haproxy 2019-12-03T20:35:36 so it's really just the decomissioning 2019-12-03T20:36:11 did you also sync the haproxy.cfg to elsa? 2019-12-03T20:36:49 cboltz: "csync2 -xv" is your friend - even running a haproxy test before reloading the haproxy on else 2019-12-03T20:37:04 good to know, thanks 2019-12-03T20:37:20 I know, old school - but I managed nearly all my HA setups with this simple tool 2019-12-03T20:37:28 I'll happily salt haproxy.cfg - as soon as the keepalived MRs are merged and get rid of that "salt timebomb" 2019-12-03T20:37:52 (currently we have a hand-modified keepalived config, and salt highstate would revert those changes) 2019-12-03T20:38:11 well... ;-) 2019-12-03T20:38:40 but this leaves us IMHO just with boosters, redmine (progress) and community running SLE11 2019-12-03T20:38:51 redmine is WIP 2019-12-03T20:39:08 some people started to work on community stuff (doc.o.o) as well 2019-12-03T20:39:20 so the only machine currently left seems to be boosters 2019-12-03T20:39:40 which is running ... 2019-12-03T20:40:03 *grmbl*: CONNECT 2019-12-03T20:40:16 ...ok: here the solution is simple: "poweroff" 2019-12-03T20:40:22 yeah, but at least "only" connect (to my knownledge) 2019-12-03T20:40:41 there is a vhost for travel-support-program - but I don't know if this one is still used 2019-12-03T20:41:10 at the moment, travel support is on connect.o.o/travel-support/ 2019-12-03T20:42:01 seems so. But this just means that the idea of putting that stuff behind a .htaccess file should produce the needed attention for people to start reacting 2019-12-03T20:42:19 we already have a new VM for travel support, I "just" need some time to move it there 2019-12-03T20:42:34 ok - so no real road-blocker for the .htaccess file 2019-12-03T20:42:52 I will get in contact with ancor about the travel-support stuff 2019-12-03T20:43:23 cboltz: can you ping the membership commitee and tell them that we will restrict access to connect in a few days ? 2019-12-03T20:43:53 I already was in contact with ancor and forced ;-) him to do quite some things (like updating to the latest gems etc.) 2019-12-03T20:43:58 They should work as before - just need to know a common username/password to get behind the .htaccess stuff 2019-12-03T20:44:13 so deploying and moving the (AFAIK sqlite) database are the only steps left 2019-12-03T20:44:35 cboltz: ^^ can you inform them? 2019-12-03T20:44:53 I can meanwhile prepare an anouncement for the community 2019-12-03T20:45:13 good idea, maybe people want to move some things to their wiki user page 2019-12-03T20:46:01 and yes, I can send the membership commitee a mail with a quick summary, and tell them that you'll send a public announcement 2019-12-03T20:46:38 how will applying for membership work? 2019-12-03T20:46:52 a) ask someone for the .htaccess password, and continue as usual 2019-12-03T20:46:54 Should we use one single htaccess-account or create one for every mc-member? 2019-12-03T20:47:11 b) membership commitee can apply for membership in someone's name? 2019-12-03T20:47:21 well: I would say: sent an Email to a mailing list with your application 2019-12-03T20:47:23 (not sure if the software allows b) ) 2019-12-03T20:47:32 and for the commitee, it's just one additional log-in 2019-12-03T20:47:58 need to check, but I guess they can just set a checkbox 2019-12-03T20:48:11 worst case would be that we get some more ELGG admins :-)) 2019-12-03T20:48:34 as admins can set every button - even the "b" one 2019-12-03T20:49:29 any other questions/topics? 2019-12-03T20:49:31 ok, then please check if admins can add membership without someone having clicked the "request membership" button ;-) 2019-12-03T20:49:41 will do 2019-12-03T20:50:00 well, another machine for the monitoring - pinot 2019-12-03T20:50:12 it runs apache for countdown.o.o 2019-12-03T20:50:16 just open a ticket, please 2019-12-03T20:50:29 and I consider to also move doc.o.o there (unless someone thinks it should be a separate VM) 2019-12-03T20:50:41 ok, will do 2019-12-03T20:51:33 Last topic on my list is the openSUSE:infrastructure repository. 2019-12-03T20:51:52 In general, there is not much to say about - it just needs some love ;-) 2019-12-03T20:52:04 agreed ;-) 2019-12-03T20:52:07 But I did 2 interesting changes, I like to share 2019-12-03T20:52:18 1) upgrade of etherpad-lite to 1.7.5 2019-12-03T20:52:30 here I'm looking for the admin of the current etherpad.opensuse.org machine 2019-12-03T20:52:43 2) I replaced abuild-online-update with suse-online-update 2019-12-03T20:53:08 which might need some changes on some machines - but these should get notified in the monitoring 2019-12-03T20:53:08 *** mstroeder has quit IRC (Read error: Connection reset by peer) 2019-12-03T20:53:10 for 1), search /dev/null for that admin ;-) 2019-12-03T20:53:18 (in other words: you just volunteered ;-) 2019-12-03T20:53:26 perfect :-/ 2019-12-03T20:53:39 "you just won another machine..." 2019-12-03T20:54:17 ;-) 2019-12-03T20:54:34 well, at least you know etherpad, and probably know how to fix it if the upgrade breaks something 2019-12-03T20:54:42 not really. 2019-12-03T20:54:50 I just packaged the current version :-) 2019-12-03T20:55:18 but this machine is really just running etherpad, so it seems 2019-12-03T20:55:32 right 2019-12-03T20:55:35 might be a perfect candidate for consolidation (pinot? har, har) 2019-12-03T20:56:02 But let me gather some experience before we do this. 2019-12-03T20:56:25 doc.o.o fits better there ;-) (needs apache for MultiViews, while we use nginx for most other things) 2019-12-03T20:56:29 ...or even move that into a kubernetes/caasp cluster, which already needs half the amount of machines in the infra.opensuse.org network 2019-12-03T20:56:49 for etherpad, you just need the haproxy in front 2019-12-03T20:57:09 btw: who - beside Theo - is maintaining this clusters? 2019-12-03T20:57:33 check the open MR for caasp, you'll find some names there ;-) 2019-12-03T20:57:52 *** mstroeder has joined #opensuse-admin 2019-12-03T20:57:52 *** mstroeder has quit IRC (Remote host closed the connection) 2019-12-03T20:57:58 or check pillar/id/caasp* 2019-12-03T20:57:58 you mean these other, old-aged MRs? 2019-12-03T20:58:18 I was even wondering why they have dedicated (but empty) projects in gitlab 2019-12-03T20:58:24 the MR for caasp is "only" a few weeks old ;-) 2019-12-03T20:59:17 once I get the Leap 15.1 image to work (thanks, dracut), I was already wondering if we want to build some docker/pod stuff as well. 2019-12-03T20:59:24 ...but this is something for Christmas time... 2019-12-03T20:59:53 I wonder what's wrong with the 15.1 image - I'm sure it built successfully in the past 2019-12-03T21:00:08 it builds and works in general 2019-12-03T21:00:31 just after the initial deployment, dracut hangs, as it still wants to use /dev/loop0 2019-12-03T21:01:12 doing some "recovery" in a chroot via grub2 brings the machine up permanently - but I think this should be fixed.... 2019-12-03T21:01:23 *** mstroeder has joined #opensuse-admin 2019-12-03T21:01:38 hmm, I never had this problem in my test VMs (but I'm using an "old" copy of the image, not a recently downloaded one) 2019-12-03T21:01:58 I guess I will start from scratch with the 15.1 template as base 2019-12-03T21:02:08 *** robin_listas has joined #opensuse-admin 2019-12-03T21:02:30 I won't stop you ;-) 2019-12-03T21:03:18 That's all I have so far. 2019-12-03T21:04:37 There are only some minor things left from the security scan. But the only thing we should check is the setup of the mail servers running inside the internal LAN 2019-12-03T21:04:49 the default configuration is very open... 2019-12-03T21:05:18 that's anyway something to get salted 2019-12-03T21:05:43 it is already ;-) 2019-12-03T21:06:09 ok - so it's just some additional tuning of the setup. 2019-12-03T21:06:17 (IIRC "only" package install etc. the relayhost setting, not the whole main.cf) 2019-12-03T21:06:51 JFYI: I plan to run some scans via openVAS again next year, so have a good overview 2019-12-03T21:08:52 1.7GB sqlite database for etherpad...! Looks like we should do some cleanup ;-) 2019-12-03T21:09:12 looks like people actually use it ;-) 2019-12-03T21:10:35 if cleanup means "under the hood" (like "optimize table"), go ahead 2019-12-03T21:10:53 but I wouldn't delete old pads 2019-12-03T21:11:08 I would put this into a real database ... 2019-12-03T21:11:26 good idea 2019-12-03T21:11:34 but first: migrate to current version 2019-12-03T21:11:47 a copy command is way easier than a DB dump ;-) 2019-12-03T21:11:56 ;-) 2019-12-03T21:13:25 another topic - you mentioned some duplicate IPs 2019-12-03T21:13:32 yes 2019-12-03T21:13:36 .57 and .58 are aedir1 and aedir2 (just logged in to verify) 2019-12-03T21:13:50 this also means you should change (or drop?) osc-collab-future and mailman-test 2019-12-03T21:14:15 (actually dropping them shouldn't be a problem - the fact that you end up on aedir VMs, and nobody complained, shows that these names aren't used in practise) 2019-12-03T21:14:22 yes, but I have currently no idea if those machines exist (at least as templates) 2019-12-03T21:14:35 cboltz: feel free to do so :-) 2019-12-03T21:14:48 AFAIK I don't have permissions to change DNS entries 2019-12-03T21:15:15 Hmm, could these IP conflicts be the cause of my problems with zypper repos on aedir1/2? 2019-12-03T21:16:53 I'm quite sure we don't have two _running_ VMs with the same IP (that would cause other problems, for example my ssh login wouldn't have ended up on the aedir* VMs 2019-12-03T21:17:19 so the conflict is "just" a superfluous A record with a strange name pointing to the aedir* VMs 2019-12-03T21:17:24 which shouldn't do any harm 2019-12-03T21:17:29 well: I just don't know if the other two VMs are currently just off 2019-12-03T21:18:27 that's something you'll probably need to check on the atreju bare metal level 2019-12-03T21:18:28 cboltz: should be in "Network Services" tab 2019-12-03T21:18:40 JFYI: etherpad updated 2019-12-03T21:18:51 I can access and read it, but don't have write permission 2019-12-03T21:19:01 hm... 2019-12-03T21:19:11 can you please log out and in again? 2019-12-03T21:19:28 so you just gave me additional permissions on the ldap level? 2019-12-03T21:19:48 Let's say I found an "add" button :-) 2019-12-03T21:21:47 seems to work, thanks! 2019-12-03T21:22:02 that's what I call a "quick fix" :-) 2019-12-03T21:22:13 actually - damn, now I also have to do DNS changes! 2019-12-03T21:23:25 * kl_eisbaer thought you were doing them via Salt already... 2019-12-03T21:23:51 no, sadly DNS is managed in LDAP instead of plaintext zone files ;-) 2019-12-03T21:24:10 well, you can edit them on the command line 2019-12-03T21:24:51 I know (darix showed me an example recently), but for now I prefer the web interface 2019-12-03T21:25:03 "Mausschubser!" 2019-12-03T21:25:11 much easier to learn ;-) 2019-12-03T21:25:35 I need back to go. 2019-12-03T21:25:51 cboltz: I will ping you latter about migration 2019-12-03T21:25:53 and since we want to change the DNS setup anyway, there's not really a point to learn the soon-old commandline syntax ;-) 2019-12-03T21:26:03 tuanpembual: thanks for your work! Much appreciated! 2019-12-03T21:26:28 tuanpembual: whenever you see me online ;-) 2019-12-03T21:26:32 sure kl_eisbaer, glad I can do some help for openSUSE 2019-12-03T21:26:37 good morning 2019-12-03T21:27:18 very welcome! 2019-12-03T21:27:22 it's close to "good night" here, but that's timezone fun ;-) 2019-12-03T21:28:37 *** mstroeder has quit IRC (Remote host closed the connection) 2019-12-03T21:28:51 FYI: I deleted osc-collab-future.infra.o.o and mailman-test.infra.o.o from DNS 2019-12-03T21:29:20 that only leaves caasp-worker1 vs. helloworld who have the same IP 2019-12-03T21:29:43 Maybe jdsn can have a look 2019-12-03T21:29:57 that would be welcome, because I could only guess 2019-12-03T21:30:15 I am off tomorrow 2019-12-03T21:30:23 so on Thu I can check it 2019-12-03T21:30:39 perfect. A day more or less should not harm 2019-12-03T21:31:49 any other topic? 2019-12-03T21:31:49 *** mstroeder has joined #opensuse-admin 2019-12-03T21:32:05 * kl_eisbaer don't things so 2019-12-03T21:32:17 s/think/ 2019-12-03T21:32:53 ok, then let's close the meeting 2019-12-03T21:32:58 thanks everybody for joining 2019-12-03T21:33:26 cboltz: thanks for leading! 2019-12-03T21:33:44 thanks 2019-12-03T21:33:49 also thanks for all the things you all did since we met in Nuremberg - I haven't seen that much activity for a while :-) 2019-12-03T21:34:54 cboltz: don't worry, I will cool down. Just want to get into it again... ;-) 2019-12-03T21:35:16 you don't _have to_ cool down ;-) 2019-12-03T21:36:49 cboltz: hey, it's getting cold outside :-) 2019-12-03T21:37:20 I know, I'm outside several hours per day ;-) 2019-12-03T21:40:55 *** mstroeder has quit IRC (Quit: Leaving) 2019-12-03T21:52:43 ok - time to say good night here! 2019-12-03T21:52:45 CU! 2019-12-03T21:52:51 good night! 2019-12-03T21:52:54 ...and enjoy the new etherpad ;-) 2019-12-03T21:53:20 *** jdsn has left #opensuse-admin ("Konversation terminated!") 2019-12-03T21:53:52 bye - it was quite a ride 2019-12-03T21:56:29 kl_eisbaer: looks like we skipped a few ;-) versions - the new version looks quite different, and much better :-) 2019-12-03T22:09:02 *** kl_eisbaer has left #opensuse-admin 2019-12-03T22:24:29 *** oreinert has quit IRC (Quit: Konversation terminated!) 2019-12-03T22:54:14 *** jadamek2 has quit IRC (Quit: Leaving) 2019-12-03T23:49:26 *** cboltz has quit IRC ()