15:03:27 #startmeeting neutron_northbound
15:03:27 Meeting started Mon May 15 15:03:27 2017 UTC. The chair is yamahata. Information about MeetBot at http://ci.openstack.org/meetbot.html.
15:03:27 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:03:27 The meeting name has been set to 'neutron_northbound'
15:03:33 #topic agenda bashing and roll call
15:03:40 #info yamahata
15:03:48 #info rajivk
15:03:50 #link https://wiki.opendaylight.org/view/NeutronNorthbound:Meetings agenda page
15:04:10 There is no update on the agenda page.
15:04:26 Is there any additional topic?
15:04:33 no
15:04:46 I see, move on.
15:04:51 #topic Announcements
15:05:03 Last week there was the OpenStack summit, and also OpenDaylight day.
15:05:25 The videos are already available. They were quite fast to upload the recordings.
15:05:45 any new updates from the summit?
15:05:54 At OpenDaylight day, the topics focused on the Nirvana stack, which AT&T is trying to promote.
15:06:06 hi
15:06:09 There is no new info for OpenDaylight.
15:06:10 sorry for being late
15:06:29 Right now we're discussing announcements.
15:06:54 mkolesni: Do you have any topic in addition to the usual topics?
15:06:57 yes i see
15:07:03 no, just talk about patches
15:07:46 The ODL Carbon release is being delayed.
15:07:58 is there an ETA?
15:08:36 #link https://wiki.opendaylight.org/view/Simultaneous_Release:Carbon_Release_Plan carbon release plan
15:09:27 RC0 was cut and now it's in the test phase.
15:10:03 so 1 month postponed?
15:10:33 Yes. For the detailed schedule, please refer to the discussion on the ODL release mailing list.
15:11:38 Hopefully it will be released before the ODL developer design forum, but we will see.
15:11:55 I thought it's end of month?
15:12:11 #link https://lists.opendaylight.org/mailman/listinfo/release
15:12:15 hi
15:12:16 mkolesni: right.
15:12:22 manjeets, hi
15:12:53 any other announcement?
15:13:01 hello
15:13:50 There seem to be no other announcements; move on.
15:13:51 #topic action items from last meeting
15:13:57 I suppose there are no items.
15:14:13 #topic carbon/nitrogen planning
15:14:50 We'll discuss Nitrogen planning at the ODL DDF. Especially, we need to communicate about incompatible changes.
15:15:16 I talked with Sam to have a time slot to discuss it at the DDF.
15:15:26 is there something incompatible planned?
15:15:41 A yang model update to drop tenant-id.
15:16:06 Also, the status member will become operational.
15:16:23 Those are the incompatible ones. Other updates will be compatible.
15:16:52 ah, in terms of API, the status change didn't change the API
15:17:11 afaik it's only additions?
15:17:57 Basically right.
15:18:12 In some cases, an API-incompatible change is inevitable.
15:18:15 so only the tenant id, which we should be ready for, afaik
15:19:05 In the case of status, we will communicate with dependent projects and see their response.
15:19:17 It may be delayed to post-Nitrogen.
15:19:32 ok
15:20:28 Anything else? Otherwise let's move on to patches/bugs.
15:21:16 let's move on
15:21:16 #topic patches/bugs
15:21:25 https://review.openstack.org/#/c/456965/2/networking_odl/tests/functional/base.py
15:21:44 mkolesni, I added this but didn't get a reply from you
15:22:10 I observed that in the functional tests, the delete test was always passing no matter whether the resource got created or not
15:22:49 manjeets, shouldn't you be getting an error code then?
15:23:14 no, it sent None
15:23:36 manjeets, per the HTTP spec the response should be 410 or 404 in case the resource doesn't exist
15:23:49 so if it's not there, that's what I'd expect
15:25:22 ok, I haven't touched it for a few weeks; I'll recheck, but I remember the create was not happening and this test was passing
15:25:30 if it doesn't return that, then we need to decide if that's a bug or not
15:26:26 perhaps you can add a case to see that the correct error code is returned in case of deleting a non-existent resource?
15:26:58 mkolesni, that's a good idea
15:27:04 I'll add a case for that
15:27:27 ok, great, then we can be sure the correct error is thrown
15:27:39 thanks
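
(Editor's note: below is a minimal sketch of the test case agreed on above, checking that deleting a never-created resource yields an HTTP error rather than silently passing. The URL, resource id, and test name are hypothetical placeholders; a real case would live in the networking-odl functional test base rather than use raw requests.)

    import requests

    # Hypothetical endpoint and resource id, for illustration only.
    BASE_URL = "http://127.0.0.1:8181/controller/nb/v2/neutron"
    MISSING_ID = "00000000-0000-0000-0000-000000000000"

    def test_delete_missing_network_returns_error():
        # Per the HTTP spec point above, deleting a resource that does
        # not exist should return 404 (Not Found) or 410 (Gone), never
        # a silent success.
        resp = requests.delete("%s/networks/%s" % (BASE_URL, MISSING_ID))
        assert resp.status_code in (404, 410), resp.status_code
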
15:28:01 mkolesni, for qos the driver was not getting registered properly, and the resource I believe got created on the neutron side
15:28:02 yamahata, can we talk about https://review.openstack.org/453581 ?
15:28:20 mkolesni: sure, of course.
15:28:43 So what happens if two dependent resources are updated?
15:28:53 e.g. network and port, sg and sg rule.
15:29:02 I basically don't have improvements there, but I noticed that although it works, there are now many more deadlocks in the db
15:29:29 so I've been trying to track it down for the last week, but to no avail
15:29:34 more deadlocks with your patch? or without the patch?
15:30:26 with the patch, some deadlocks occur when inserting the dependencies in the db
15:30:45 I wasn't able to figure out why though
15:30:49 Oh. Is the Galera db backend used?
15:30:56 Sure, we need to track it down.
15:31:12 basically it seems to happen when the parent resource is being updated while child dependencies get inserted
15:31:46 The dependency calculation would widen the window.
15:31:57 it's not awful since retries basically fix everything, but it's less than ideal
15:33:31 I see. Let's investigate it further.
15:33:32 regarding the race you were talking about, did you see yamamoto's comment?
15:33:35 https://review.openstack.org/#/c/453581/9/networking_odl/journal/journal.py
15:33:42 please take a look later
15:33:48 Sure, will do
15:33:55 #action yamahata look at yamamoto's comment
15:34:08 can we talk about https://review.openstack.org/444648 ?
15:34:16 the singleton patch?
15:34:23 sure. Please go ahead.
15:34:44 yes
15:35:28 what is your position on this?
15:35:41 For now, we should have only a single timer in the neutron server.
15:35:56 you mean globally per host?
15:36:32 The rpc worker processes shouldn't run the timer; the main process should run a single timer.
15:36:40 Maybe it can be a neutron worker.
15:36:52 ok, but what if that process dies?
15:37:06 677016
15:37:25 it could be problematic
15:37:43 can I make a suggestion?
15:37:54 Is the number wrong? 677016?
15:38:08 the process death means neutron server death.
15:38:13 huh, no, it's just an OTP token accidentally pressed :$
15:38:38 Oh, I can open the patch now.
15:38:58 since the processes fork, I think it could be possible that only one dies for whatever reason
15:39:33 so we could be in a problem
15:39:58 unless it's not possible, but I'm not familiar with all possible OS behaviors, so we need to tread carefully
15:40:11 anyway, I'd like to make a suggestion..
15:40:37 I think this patch does no harm, while for scale it does mitigate a problem that at least we hit in our testing
15:40:53 so I think as such we can merge this and of course continue planning an enhanced solution
15:41:18 the issue you're seeing is a timer issue? or another issue?
15:41:47 Anyway, having multiple timers within the neutron server would be a scalability issue.
15:42:01 the issue we had was that when we had 56 cores on the machine, the cloud came to a halt because of so many threads
15:42:05 So we can have a single timer within the neutron server and see the outcome.
15:42:28 yes, I agree, but I don't think this should stop this patch from going in; rather, build on top of it
15:42:48 We can have the threadpool patch for more flexibility.
15:42:57 this will at least limit the timers to one per neutron process (after the fork)
15:43:06 then after that we can further limit it
15:43:07 For example, an rpc worker can have only one journal thread.
15:43:17 but we can have more for the main process
15:43:41 ok, sure, but this patch doesn't limit that
15:44:13 all it does is make sure there's one of this object per process; then we can have a thread pool of whatever else we like
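
(Editor's note: a minimal sketch of the per-process singleton pattern the patch is described as implementing. The class name is hypothetical; this is illustrative, not the actual change.)

    import threading

    class JournalPeriodicProcessor(object):
        # Double-checked locking keeps at most one instance, and hence
        # at most one timer object, per process.
        _instance = None
        _lock = threading.Lock()

        def __new__(cls):
            if cls._instance is None:
                with cls._lock:
                    if cls._instance is None:
                        cls._instance = super(
                            JournalPeriodicProcessor, cls).__new__(cls)
            return cls._instance

Since each forked worker keeps its own copy of _instance, this caps timers at one per process rather than one per server, which is exactly the distinction debated next.
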
15:44:19 You're against the threadpool patch, giving -2.
15:44:35 we can discuss that as well right now
15:45:07 We can have the singleton patch and then threadpool support for more flexibility.
15:45:53 sure, that sounds good, as long as the thread pool is not increasing the number of timers per process
15:46:39 hmm, do you want to have at least one timer per process, i.e. all rpc workers and the main process?
15:47:22 for now there will be one, as I see the thread pool patch didn't change that
15:48:02 Or are you okay with a single timer within the neutron server?
15:48:15 i.e. a single timer among the main process and rpc workers.
15:48:41 I think the thread pool patch just increases the capacity of available threads per event happening, right?
15:48:58 Right.
15:49:07 i.e. there will be one timer per process, so for a 4-core machine there will be 9 timers, iiuc
15:49:19 then later we can plan how many timers we want
15:49:33 We don't have to create timers. We can have only a single timer among the processes within the neutron server.
15:49:40 obviously too many is not good, but a limit of 1 per machine could be problematic as well
15:50:02 Why is 1 timer per machine problematic?
15:50:05 but I think both these patches can continue, and the timer count can be addressed later on
15:50:13 The timer is only for rescuing unprocessed journal entries.
15:50:43 the timer is generally for sync
15:50:53 so it's either the backlog from connectivity loss
15:50:56 or full sync
15:51:08 so just 1 might be too little
15:51:34 I see, but we don't have to have 1 per process.
15:51:37 also, if that process dies but others don't, it could be a problem, I guess, but that's just a theory which I'm not sure is possible or not
15:51:47 We can control the number of timers.
15:52:04 sure, but what I think is that should be a different patch
15:52:06 Process death means neutron death. It's another issue.
15:52:27 i.e. no reason to stall these patches for that fix
15:53:00 You'd like to have a single timer per process?
15:54:14 I'd like to think about it further and come up with a proposal
15:54:29 but in the meantime I don't think these patches need to wait
15:54:37 I removed the -2 from the thread pool patch
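
(Editor's note: a sketch of the arrangement the discussion converges on, where a single periodic timer per process feeds a bounded thread pool, so worker capacity grows without multiplying timers. All names and values here are illustrative assumptions, not the threadpool patch itself.)

    import threading
    from concurrent.futures import ThreadPoolExecutor

    INTERVAL = 10  # seconds; illustrative value

    def sync_pending_rows():
        """Placeholder for the journal-draining work (hypothetical)."""

    def start_timer(pool):
        # One timer per process hands work to the pool; resizing the
        # pool changes thread capacity without adding more timers.
        def tick():
            pool.submit(sync_pending_rows)
            threading.Timer(INTERVAL, tick).start()
        tick()

    start_timer(ThreadPoolExecutor(max_workers=4))
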
15:54:45 Ok. we have 5 min left.
15:54:57 any other patches to discuss?
15:55:03 rajivk: ?
15:55:04 none from me
15:55:30 yeah, I requested a review for lbaas
15:55:30 From me: the dhcp port issue will be discussed on the mailing list.
15:55:44 #action yamahata reply to dhcp port discussion on mailing list
15:56:04 #link https://review.openstack.org/#/c/449432/ lbaas review
15:56:19 any other patches?
15:56:24 yeah, https://review.openstack.org/#/c/459970/
15:56:36 rajivk_ did you check the jenkins failure?
15:56:39 is the python27 test case broken?
15:57:09 I checked it, but I could not find the reason
15:57:21 I requested yamahata to have a look at them.
15:57:22 hmm.. ok
15:57:30 I see.
15:57:41 #action yamahata and others look at https://review.openstack.org/#/c/459970/
15:57:44 If it's ok, I would look at it tomorrow.. hope I can help
15:57:53 Also, I noticed the patch https://review.openstack.org/#/c/464111/
15:58:01 Fix floatingip status not same when create and unAssociate
15:58:21 This is a good fix, so we should follow it up. A neutron fix might be necessary.
15:58:45 any other patches to discuss?
15:59:11 okay.
15:59:12 #topic open mike
15:59:17 anything else to discuss?
16:00:17 I would like to have more work.
16:00:18 seems nothing.
16:00:28 If someone needs any help, please let me know.
16:00:39 rajivk: please go ahead, and feel free to take over pending patches.
16:00:57 Sometimes I have uploaded patches but don't have time to follow up.
16:01:03 In that case, please take them over.
16:01:04 yamahata, thanks
16:01:14 I want to know more about your rpc specs.
16:01:34 What is the plan for that? Maybe I can contribute to that with you.
16:01:52 rajivk: cool. The plan is to implement rpc from ODL; the main use case is the dhcp port.
16:02:13 yeah, are you working on it?
16:02:19 The goal is to allow rpc. It doesn't add new rpc.
16:02:28 Not yet. Now we're discussing with the dhcp folks.
16:03:06 ok, I will go through your rpc specs and raise my concerns if I have any.
16:03:13 great
16:03:32 anything else?
16:03:35 no
16:03:44 thank you everyone.
16:04:08 #topic cookies
16:04:13 #endmeeting