14:00:15 #startmeeting Cross Community CI
14:00:15 Meeting started Wed Jan 31 14:00:15 2018 UTC. The chair is fdegir. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:15 Useful Commands: #action #agreed #help #info #idea #link #topic.
14:00:15 The meeting name has been set to 'cross_community_ci'
14:00:23 hello everyone
14:00:33 as hwoarang says, it is that time of the week again
14:00:39 #topic Rollcall
14:00:39 hello
14:00:52 hi hw_wutianwei
14:00:55 #info Tapio Tallgren
14:00:59 #info Manuel Buil
14:01:03 #info Tianwei Wu
14:01:06 #info Joe Kidder
14:01:34 #info Fabien Andrieux
14:01:40 #info David Blaisonneau
14:01:46 let's start with the first topic and others can join on the way
14:02:05 agenda was on https://etherpad.opnfv.org/p/xci-meetings
14:02:13 but etherpad seems to be down
14:02:22 anyone else having trouble opening it?
14:02:26 #info Markos Chandras
14:02:45 fdegir: yes, no etherpad
14:02:51 I can't open it
14:02:52 aricg: bramwelt: ^
14:03:14 it was pretty similar to previous week's agenda
14:03:22 and the first topic is
14:03:23 "502 Bad Gateway"
14:03:26 #topic Functest Healthcheck Status
14:03:41 some of you might know that we run CI within VMs
14:03:55 #info Dimitrios Markou
14:04:04 #info Periyasamy Palanisamy
14:04:07 meaning that all the VMs (opnfv, controller, compute) get created in distro VMs to ensure we always have a clean environment
14:04:14 and to increase the number of resources we have
14:04:34 but we are seeing strange failures with functest healthcheck when running things this way
14:05:12 to be more precise, the snaps testcase test_add_remove_volume fails attaching/detaching a volume from an instance
14:05:30 we have been trying to find the root cause but have failed to do so
14:05:56 because of this, we haven't been able to bump the SHAs and will not do that until we either find the problem and fix it
14:06:06 or switch back to running things on the host directly
14:06:13 fdegir: the volume does not attach to the instance?
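[A hedged sketch of the manual check discussed above: create a volume, attach it to a running instance, then detach it (the detach is the step that fails in the nested-VM setup), plus a nested-virtualization sanity check on the virtual host. The server/volume names are illustrative placeholders, not the actual snaps test resources, and the script assumes a sourced openrc.]

```shell
#!/bin/sh
# Sketch of the failing healthcheck step, reproduced by hand.
# SERVER/VOLUME are placeholder names, not the real snaps resources.
set -u

SERVER="healthcheck-vm"    # assumed pre-existing instance
VOLUME="healthcheck-vol"

if command -v openstack >/dev/null 2>&1; then
  openstack volume create --size 1 "$VOLUME"
  openstack server add volume "$SERVER" "$VOLUME"
  # the step that fails in the nested-VM CI runs is the detach:
  openstack server remove volume "$SERVER" "$VOLUME"
  openstack volume delete "$VOLUME"
else
  echo "openstack CLI not found; steps shown for reference only"
fi

# Nested virtualization sanity check on an Intel KVM virtual host;
# the file only exists when the kvm_intel module is loaded.
NESTED_FILE=/sys/module/kvm_intel/parameters/nested
if [ -r "$NESTED_FILE" ]; then
  echo "nested: $(cat "$NESTED_FILE")"
else
  echo "kvm_intel not loaded; nested state unknown"
fi
```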
14:06:24 if using ceph, it may pass these failures about the volume
14:06:25 fdegir, is it working on a stable branch ?
14:06:30 which results in a reduced number of CI resources and, more importantly, removes the ability to run things in an always-clean environment
14:06:56 mbuil: when I tried, I was able to attach but unable to remove the volume
14:07:07 ok
14:07:10 hw_wutianwei: can you give some more details please?
14:07:43 david_Orange: it works on latest but not when we run things within a VM
14:08:02 these testcases pass in compass4nfv.
14:08:25 david_Orange: when we have nested virtualization
14:08:30 we are not using ceph though
14:08:41 fdegir, ok
14:09:21 fwiw here is the traceback from detaching the volume
14:09:23 http://paste.opensuse.org/27139788
14:09:35 from the nova-compute.log on the compute node
14:10:03 In my opinion, it is related to storage.
14:10:13 it's not so much about detaching the volume as about neutron, but i am still trying to decipher things
14:10:58 so, long story short, we will continue looking into this for 1 or 2 more days
14:11:13 #info Victor Morales
14:11:15 and if we can't come to a solution, we will change how we run things in CI and stop using VMs
14:11:22 Storage tends to be a bottleneck. Do you see high storage load on the virtual host?
14:11:29 which in turn will result in longer queue times for patch verification
14:11:51 libvirtError... seems like a nested issue
14:12:13 when I create an instance manually, I am able to attach a volume
14:12:20 but I fail to remove the volume
14:12:29 and I have a different error compared to hwoarang
14:13:05 so that was all about this topic
14:13:29 if anyone is willing to give it a try, just ping hwoarang and me and we can summarize how you can reproduce the issue
14:13:53 moving to the next high prio topic
14:13:57 #topic Baremetal Status
14:14:28 testing is one thing to enable patchset verification and scenario promotion post-merge jobs
14:14:46 and baremetal is the next one that should happen once a scenario gets promoted
14:15:24 fdegir, on my side i am still working on it
14:15:31 mbuil is going to start working on this for baremetal
14:15:42 with the help from david_Orange :)
14:15:49 fdegir, actually in the DC to debug a pxe issue
14:16:07 david_Orange: you mean you still need to update your patches?
14:16:40 mbuil, yes, sorry i had a long week and could not focus on that
14:17:35 mbuil, fdegir: will this baremetal use pdf/idf as source or the actual fixed ip config
14:17:36 ?
14:17:43 david_Orange: ok, no worries :). Should I wait then?
14:17:45 david_Orange: pdf/idf
14:18:16 mbuil, yes, please, i will keep you in touch when we test the baremetal infra deployment
14:18:31 the focus on OSA
14:18:37 then focus on OSA
14:19:03 thanks mbuil david_Orange
14:19:16 if someone else wants to work on it with a different scenario as well (such as os-nosdn-nofeature, etc.), just ping me and I can find a pod
14:19:50 but that person needs to wait for david_Orange as well, unless he or she prefers going the hard way and doing things from scratch
14:20:14 moving to the usual topics
14:20:22 i really hope i can patch 'till the end of week
14:20:28 #topic Scenario/Feature Status: os-odl-sfc
14:20:38 mbuil: mardim: any update for sfc?
14:21:41 fdegir: not really. All bugs which appeared when moving to the new SHAs are fixed
14:22:14 We are ready to make that move, although we would prefer to move to newer SHAs where those fixes are included. Apart from that, I have nothing
14:22:58 mbuil: we will hopefully move to the new SHAs by next week
14:23:20 thx mbuil
14:23:24 #topic Scenario/Feature Status: os-odl-bgpvpn
14:23:30 fdegir, nothing from my side, I am occupied with Tacker problems
14:23:39 peri: I've seen the blueprint got merged
14:23:45 peri: anything more?
14:24:00 yes, I have tested this scenario with ODL nitrogen
14:24:26 but looks like there is some issue with this version. raised a bug in netvirt https://jira.opendaylight.org/browse/NETVIRT-1071
14:25:16 but it works fine with the ODL version (i.e. BGP peering). so i request you to review the upstream patches
14:25:32 i meant the ODL carbon version
14:25:39 ok
14:25:59 so there is progress, which is good, and at the same time a bug has been identified
14:26:50 yes, but we have to get the upstream patches reviewed and merged
14:26:57 it's been pending for long
14:27:16 https://review.openstack.org/#/c/522598/ and https://review.openstack.org/#/c/523907/
14:28:07 sorry, these went under the radar because of the zuul -1
14:28:14 the gate job on /c/522598 is failing continuously. need someone's help to figure out the reason
14:28:35 as hwoarang says, zuul seems to be pretty shaky lately
14:28:54 I've been following zuul status and there is at least 1 issue every day
14:29:19 but it works for the review https://review.openstack.org/#/c/538933/
14:29:22 peri: zuul + xenial was massively broken yesterday
14:29:28 peri: try again. That Ubuntu gate was failing for me but yesterday evening it worked
14:29:30 please recheck both
14:29:38 if you are using twitter, you can follow the news :)
14:29:38 https://twitter.com/openstackinfra?lang=en
14:30:05 peri: I suppose that's all
14:30:20 yes, that's all :)
14:30:21 fdegir: we should use Twitter too
14:30:25 thanks peri
14:30:32 mbuil: why not :)
14:30:39 a bot to post openstackinfra tweets here :)
14:30:39 but we first need to test things, you know
14:31:28 we have our own spambot
14:31:31 OPNFV-Gerrit-Bot: ping?
14:31:37 moving on
14:31:43 #topic Scenario/Feature Status: k8-nosdn-nofeature
14:31:59 hw_wutianwei: I see you've been sending a few more patches lately
14:32:05 #link https://gerrit.opnfv.org/gerrit/#/c/50213/
14:32:26 fdegir: yep
14:32:49 it finished and passed the CI verify. I think it can be merged.
14:32:56 hw_wutianwei: +1
14:33:08 thanks to the people who reviewed this patch and gave feedback, especially hwoarang
14:33:23 and mbuil
14:33:37 yep, we can get it in
14:33:46 hw_wutianwei: what would be the next step?
14:33:58 hw_wutianwei: opensuse/centos support or ?
14:33:59 should we create a kubernetes verify job?
14:34:07 fdegir: yep
14:34:10 a xenial job would be nice
14:34:11 hw_wutianwei: that I can take care of
14:34:17 I will support centos
14:34:23 hw_wutianwei: I mean the job stuff
14:34:24 first
14:34:25 i will do the suse bit
14:34:31 good
14:34:34 hwoarang: thanks
14:34:43 so please keep an eye on releng patches in the coming days
14:34:50 creating jobs for k8s
14:34:57 or adding it to existing jobs
14:35:23 thanks hw_wutianwei for taking this till the end
14:35:31 yeah, good job
14:35:41 one more thing, could you give me a centos vm?
14:35:49 fdegir: ^^
14:36:20 hw_wutianwei: we have one node that's not used by anyone but the OS installation failed on it
14:36:29 electrocucaracha: did you try reinstalling the os on pod21-jump again?
14:36:52 fdegir: nope, but I can try it today
14:37:04 electrocucaracha: that would be good
14:37:13 hw_wutianwei: we put ubuntu 16.04 on the nodes by default
14:37:21 fdegir, electrocucaracha: It doesn't matter, I will try to find one
14:37:48 hw_wutianwei: we need to fix the node anyway, so electrocucaracha can try again, and if it works, you get that one
14:38:05 fdegir: ok, thank you
14:38:21 electrocucaracha: I'll ping you tomorrow if you don't ping me earlier
14:38:34 fdegir: ok
14:38:36 moving to the next features
14:38:39 Taseer: around?
14:39:14 #topic Scenario/Feature Status: congress/blazar/masakari
14:39:30 I think the status for these is the same as last week
14:39:40 the last patch for congress is still under review
14:40:02 the blazar blueprint got accepted and Taseer is working on the role
14:40:24 the masakari team hasn't started and will not start working on it until queens is out
14:41:01 fandrieu: should we talk about vpp?
14:41:14 yes
14:41:23 can give you a quick status
14:41:24 #topic Scenario/Feature Status: vpp
14:41:29 please go ahead
14:41:53 Almost there. vpp and the networking_vpp agent are deployed and configured.
14:42:05 Still have to figure out a few plumbing issues.
14:42:21 Also have a question for you guys
14:42:36 yes
14:43:04 I created an os-nosdn-vpp scenario for that. I realized later that it would be the os-nosdn-vpp scenario with the vpp ml2 plugin
14:43:14 What would be the right way to integrate?
14:43:53 I meant os-nosdn-nofeature with vpp
14:43:53 have you looked at this?
14:43:54 https://wiki.opnfv.org/pages/viewpage.action?pageId=12390152#OnboardingProjects/ScenariostoXCI-StructuringtheWorktoOnboardtoXCI
14:44:39 Yes. Not clear to me though
14:44:49 fandrieu: ok, will come to that after a question
14:44:59 isn't this scenario os-nosdn-fdio?
14:45:27 rather than os-nosdn-vpp or os-nosdn-nofeature
14:45:28 might be. fdio is the project. vpp is the vswitch technology
14:45:53 I think vpp integration is done under the scenario os-nosdn-fdio
14:46:12 OK. will move to it then
14:46:18 we can check that, and if that's the case, we have examples for you which you can take a look at to see how it can be done
14:46:25 about what needs to be done
14:46:33 first, you need a blueprint in upstream osa
14:46:45 proposing vpp integration
14:47:01 #info Jack Morgan
14:47:03 and in parallel to that, you can start upstreaming the ansible roles
14:47:23 and in opnfv, you do the opnfv scenario stuff similar to sfc or bgpvpn
14:47:53 ok. clearer now.
14:47:54 I'll send you an email with examples and include others who went through this and can help you with it
14:48:17 the crucial part in this is to put your opnfv scenario role in the fds repo, not in the xci repo
14:48:17 yep. kept the blueprint in mind but did not write it yet
14:48:45 That was my next question
14:49:00 For the sake of development I worked in xci/scenarios till now
14:50:44 the scenario will end up in the /scenarios/os-nosdn-fdio/role/os-nosdn-fdio folder
14:50:56 please check sfc in this file https://gerrit.opnfv.org/gerrit/gitweb?p=releng-xci.git;a=blob;f=xci/opnfv-scenario-requirements.yml
14:51:13 will do. thanks
14:51:47 and you can ask how the scenarios are plugged into xci when you start looking into it
14:51:58 hwoarang is the mastermind behind the mechanism
14:52:19 thanks fandrieu, and you will get a long mail from me
14:52:46 fdegir: will try to respond with a long mail as well :)
14:52:57 please do :)
14:53:00 #topic Kolla Wars
14:53:07 david_Orange: electrocucaracha: anything to say?
14:53:20 fdegir, this is not a war :D
14:53:34 I have rebased the latest changes on the draft
14:53:50 I think the aio flavor is working on ubuntu and centos
14:53:50 david_Orange: I like using the term, don't blame me
14:53:52 blame Ray
14:53:52 yes, change topic to Kolla for the win! (FTW)
14:54:02 fdegir, i know :)
14:54:22 jmorgan1: that's not as fun, sorry
14:54:31 agreed
14:54:40 fdegir, on my side i started to write up my view on pdf/idf integration (that will also impact the next baremetal test)
14:54:56 fdegir, i will send it soon
14:55:04 david_Orange electrocucaracha: before diving into details, can you come up with a short/brief info about this?
14:55:15 as asked last week
14:55:27 so everyone knows what the intentions are
14:55:49 from last week's meeting: david_Orange will come up with a high level overview/plan and present it to the team to collect feedback
14:55:51 as i said, i have prepared a short text to explain how we can do that and set some roles that can be reused
14:56:13 it wasn't just about pdf/idf
14:56:34 fdegir: do you mean the main goal of that effort?
14:56:36 it was general, like why kolla, what are the intentions and so on
14:56:39 electrocucaracha: yes
14:56:55 no, but mainly about how pdf/idf can impact today's steps
14:57:16 fdegir, humm i did not get that :)
14:57:38 pdf/idf and other impacts come after talking about the goals of having kolla in xci
14:57:42 fdegir: I think the main idea is to support an additional OpenStack installer which offers a containerized way
14:58:10 electrocucaracha: david_Orange: we have 3 minutes left, so if you can come up with this type of overview, we can talk about it next week
14:58:51 I take the silence as "yes, we can do that"
14:58:54 fdegir, ok
14:59:02 fdegir, SIR YES SIR !
14:59:07 fdegir, is it better ?
14:59:08 david_Orange: :)
14:59:14 fdegir: yes, I think I can elaborate better next week
14:59:21 thanks
14:59:29 #topic AOB
14:59:33 before we end today's meeting
14:59:39 anyone wants to add anything?
14:59:43 yes,
14:59:43 electrocucaracha, please feel free to ping me to talk about that
14:59:57 the patch that adds proxy support is ready for reviews
15:00:07 I am working on offline mode for CentOS. Not sure what will be done with that
15:00:19 ttallgren: offline mode?
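[For context on the proxy-support patch mentioned above, a minimal hedged sketch of the environment a proxied XCI run might need. The proxy host and the exact variables the deploy scripts consume are assumptions here, not taken from the patch under review.]

```shell
#!/bin/sh
# Sketch: standard proxy environment before kicking off a deployment.
# proxy.example.com is a placeholder, not a real host.
set -u

export http_proxy="http://proxy.example.com:8080"
export https_proxy="$http_proxy"
# keep local and management-network traffic off the proxy
export no_proxy="localhost,127.0.0.1,192.168.122.1"

# then the usual entry point would follow, e.g.:
#   ./xci-deploy.sh
echo "proxy configured: ${http_proxy}"
```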
15:00:20 This one https://gerrit.opnfv.org/gerrit/#/c/45383/
15:00:46 Installing XCI from locally installed repos (rpm + git)
15:00:56 I'm using these values for non-proxy https://gist.github.com/electrocucaracha/3b83004b0233b99c480546d895939839#file-run_jenkins_test-sh
15:01:10 ttallgren: that's interesting because this is one of the things many people are asking upstream osa for
15:01:15 enabling offline deployments
15:01:21 ttallgren: that needs work in upstream osa and there is a spec for it
15:01:32 better talk to the author of the spec, otherwise i smell duplication
15:01:42 ttallgren: do you mean having a local mirror (rpm/git)?
15:01:48 hwoarang: can you put the link here if you have it?
15:01:54 jmorgan1: Yes
15:02:32 before I end the meeting
15:02:40 http://git.openstack.org/cgit/openstack/openstack-ansible-specs/tree/specs/queens/python-build-install-simplification.rst
15:03:02 an XCI update will be given to the opnfv community tomorrow during the weekly tech discussion meeting
15:03:02 that's for wheel packages but it's still part of the offline installation
15:03:03 https://wiki.opnfv.org/display/PROJ/Weekly+Technical+Discussion
15:03:14 offline is not just about rpms
15:03:15 please join if you have time
15:03:34 thank you all and talk to you next week
15:03:36 #endmeeting