> Thanks for getting this discussion kicked off!
> The model we have been pursuing is a little different than the one you
> propose below. A rough sketch (in less detail than your proposal),
> picking up where we diverge:
> 5. The user constructs an RSpec (likely with help from some user tools)
> that describes the whole slice they want to build, including the PG
> and OF nodes and link(s) between them
> 6. If the user *knows* the path they want to use on I2, they fill that in
> - if they don't know or don't care, they pass their request RSpec to a
> slice embedding service (SES), which fills out these details for them.
> Note that the SES isn't actually setting anything up on the user's
> behalf; it is just *annotating* the request RSpec to fill in details
> for the user
> 7. The user submits the RSpec to the appropriate AMs (in this case PG,
> OF, and I2) - it's trivial to extract the right set of AMs from the
> RSpec if the SES added any (for example, if it added nodes in the I2
> aggregate)
It hadn't occurred to me that all the aggregates would see the full
RSpec. This sounds like probably a Good Thing.
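To make sure I follow: once the SES has annotated the request, any party can pull the set of AMs straight out of the RSpec. A minimal sketch of what I have in mind, with purely hypothetical element and attribute names (I'm not assuming the real schema here):

```python
# Hypothetical sketch: pull the set of aggregate managers out of an
# annotated request RSpec.  The element and attribute names used here
# ("node", "component_manager_id") are assumptions, not the real schema.
import xml.etree.ElementTree as ET

RSPEC = """
<rspec>
  <node component_manager_id="urn:publicid:IDN+pg+authority+cm"/>
  <node component_manager_id="urn:publicid:IDN+of+authority+cm"/>
  <node component_manager_id="urn:publicid:IDN+i2+authority+cm"/>
  <node component_manager_id="urn:publicid:IDN+pg+authority+cm"/>
</rspec>
"""

def extract_ams(rspec_xml):
    """Return the distinct component managers referenced by the RSpec."""
    root = ET.fromstring(rspec_xml)
    return {n.get("component_manager_id") for n in root.iter("node")}

print(sorted(extract_ams(RSPEC)))
```

If that's roughly right, then the user (or a tool acting for them) really does only need the one annotated document to know where to submit.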
> 8. The AMs communicate in pairwise fashion to agree on things like
> VLAN#s for the interconnection point. They can do this because the
> request RSpec that each AM sees includes 'external references' to the
> other AMs in the topology.
As I said to Max, I'm a little concerned about the complexity in the
'pairwise fashion' negotiation. See below.
> 9. The AMs, using the negotiated VLAN#s, etc., individually create their
> slivers and inform the user when they're ready.
> We've implemented up to step 7. While 'pairwise fashion' in step 8
> sounds like it would scale poorly, consider that the number of places
> with physical interconnects between aggregates is likely to be small
> and fairly static, bounding the number of pairs that have to agree.
> And, in fact, in the medium term, what's likely to happen is that
> there are a couple of backbone AMs, with other aggregates hanging off
> of them. This results in a very simple, easy-to-manage structure:
> backbones act as 'masters' for selecting VLAN#s for their attached
> aggregates; the only place where real negotiation has to occur is at
> connection points between backbones.
How do you handle the situation when it's not obvious who the 'master'
is? E.g., when you have multiple backbones in a slice (we already have
2)? What about when the two ends try to assign the same VLAN to
different slices? Maybe I'm being unreasonably conservative. Can you
describe the 'pairwise' protocol? Is there code?
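To make those questions concrete, here is the simplest scheme I can imagine for one interconnect: a deterministic tie-break to pick the 'master', and acceptance only when the tag is free on both sides. Everything here is hypothetical (my names, my rules), not a guess at your actual protocol:

```python
# Hypothetical sketch of a pairwise VLAN agreement at one interconnect.
# None of this is the actual protocol -- it's the simplest scheme I can
# imagine, written down to make the questions above concrete.

class Aggregate:
    def __init__(self, urn, free_vlans):
        self.urn = urn
        self.free = set(free_vlans)   # VLAN tags still unused on this port

    def accept(self, tag):
        # The non-master side accepts a tag only if it's free locally;
        # this is where the 'same VLAN, different slices' collision
        # would have to be caught.
        if tag in self.free:
            self.free.discard(tag)
            return True
        return False

def agree(a, b):
    """Break the 'master' tie deterministically (lowest URN), then have
    the master offer its free tags in order until the other side accepts."""
    master, slave = sorted((a, b), key=lambda agg: agg.urn)
    for tag in sorted(master.free):
        if slave.accept(tag):
            master.free.discard(tag)
            return tag
    raise RuntimeError("no common free VLAN between %s and %s" % (a.urn, b.urn))

pg = Aggregate("urn:pg", {100, 101, 102})
i2 = Aggregate("urn:i2", {101, 102, 200})
print(agree(pg, i2))   # master is urn:i2; first common free tag is 101
```

Even in this toy version, two backbones both believing they are master, or two negotiations racing for the same tag, need an answer; that's the complexity I'm worried about.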
Can you comment on my point to Max about keeping a sliver from sending
data out of its aggregate until the end-to-end slice is stitched? Do
you think this is a concern?
Finally, I suggest that the VLAN information gets logged in the
clearinghouse. This seems like important operational information that
will be needed to help diagnose faulty slices and to assign
accountability for packets leaving an aggregate.
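For concreteness, the kind of record I'd want the clearinghouse to keep per agreed VLAN might look like the following; the field names are mine, not any agreed schema:

```python
# Hypothetical sketch of the clearinghouse record I'd want for each
# negotiated interconnect VLAN.  Field names are mine, not any agreed
# schema -- just enough to diagnose a faulty slice or attribute packets.
import json
import time

def vlan_log_record(slice_urn, am_a, am_b, vlan_tag):
    """One clearinghouse entry per agreed interconnect VLAN."""
    return {
        "slice": slice_urn,
        "endpoints": sorted([am_a, am_b]),  # the two AMs that agreed
        "vlan": vlan_tag,
        "timestamp": time.time(),           # when agreement was reached
    }

rec = vlan_log_record("urn:slice:demo", "urn:pg", "urn:i2", 101)
print(json.dumps(rec))
```

With something like this on file, an operator seeing unexpected traffic on a tag can map it back to a slice and a pair of responsible AMs.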