The model we have been pursuing is a little different than the one you
propose below. A rough sketch (in less detail than your proposal),
picking up where we diverge:
5. The user constructs an RSpec (likely with help from some user tools)
that describes the whole slice they want to build, including the PG
and OF nodes and link(s) between them
6. If the user *knows* the path they want to use on I2, they fill that in
- if they don't know or don't care, they pass their request RSpec to a
slice embedding service (SES), which fills out these details for them.
Note that the SES isn't actually setting anything up on the user's
behalf; it is just *annotating* the request RSpec to fill in details
for the user
7. The user submits the RSpec to the appropriate AMs (in this case PG,
OF, and I2) - it's trivial to extract the right set of AMs from the
RSpec if the SES added any (for example, if it added nodes in the I2
backbone).
8. The AMs communicate in pairwise fashion to agree on things like
VLAN#s for the interconnection point. They can do this because the
request RSpec that each AM sees includes 'external references' to the
other AMs in the topology.
9. The AMs, using the negotiated VLAN#s, etc., individually create their
slivers and inform the user when they're ready.
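To make step 7 concrete, here is a rough Python sketch of how a user
tool could extract the set of AMs to contact from an annotated request
RSpec. The element and attribute names (and the URNs) are illustrative
placeholders, not the actual RSpec schema:

```python
# Sketch: pull the set of aggregate managers out of an annotated
# request RSpec (step 7). Schema and URNs are illustrative only.
import xml.etree.ElementTree as ET

RSPEC = """
<rspec>
  <node client_id="pc1" component_manager_id="urn:publicid:IDN+emulab.net+authority+cm"/>
  <node client_id="of1" component_manager_id="urn:publicid:IDN+openflow+authority+cm"/>
  <node client_id="i2-a" component_manager_id="urn:publicid:IDN+internet2+authority+cm"/>
  <link client_id="link0">
    <interface_ref client_id="pc1:if0"/>
    <interface_ref client_id="i2-a:if0"/>
  </link>
</rspec>
"""

def aggregate_managers(rspec_xml):
    """Return the unique set of AM URNs referenced by the request RSpec."""
    root = ET.fromstring(rspec_xml)
    return {n.get("component_manager_id")
            for n in root.iter("node")
            if n.get("component_manager_id")}

print(sorted(aggregate_managers(RSPEC)))
```

Since the SES annotates nodes with the AM that owns them, the user-side
tool never needs out-of-band knowledge of which AMs are involved.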
We've implemented up to step 7. While 'pairwise fashion' in step 8
sounds like it would scale poorly, consider that the places where
physical interconnects exist between aggregates are likely to be few
in number and fairly static, bounding the number of pairs that have to
agree. And, in fact, in the medium term,
what's likely to happen is that there are a couple of backbone AMs, with
aggregates hanging off of them. This results in a very simple,
easy-to-manage structure: backbones act as 'masters' for selecting
VLAN#s for their attached aggregates; the only place where real
negotiation has to occur is at connection points between backbones.
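A minimal sketch of that structure, for concreteness. The VLAN pools,
the min() selection rule, and the second backbone are all assumptions
made up for the example, not part of any implementation:

```python
# Sketch of 'backbone as master' VLAN selection: a backbone assigns
# VLANs to attached aggregates unilaterally; negotiation happens only
# where two backbones meet. Pools and selection rule are illustrative.

def pick_vlan(free_pool):
    """A backbone unilaterally picks a VLAN for an attached aggregate."""
    vlan = min(free_pool)
    free_pool.discard(vlan)
    return vlan

def negotiate(free_a, free_b):
    """Real negotiation happens only at a backbone-backbone link:
    agree on a tag that is free on both sides of the interconnect."""
    common = free_a & free_b
    if not common:
        raise RuntimeError("no common VLAN at the interconnect")
    return min(common)

# Backbone A (say, I2) assigns VLANs to its attached aggregates:
backbone_a_free = {100, 101, 102}
vlan_for_pg = pick_vlan(backbone_a_free)
vlan_for_of = pick_vlan(backbone_a_free)

# A second backbone negotiates only at the backbone-backbone link:
backbone_b_free = {101, 102, 200}
cross_backbone_vlan = negotiate(backbone_a_free, backbone_b_free)
print(vlan_for_pg, vlan_for_of, cross_backbone_vlan)
```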
Note that this pretty much follows what Max suggested.
It is probably possible to implement a service that drives this process
for the user, to hide some of the details from them, and that service is
probably pretty straightforward.
Thus spake Aaron Falk on Mon, Feb 01, 2010 at 09:19:42AM -0500:
> 5. The researcher (via the GENI Aggregate Manager API) requests a
> sliver containing hosts connected by a topology on the ProtoGENI
> cluster. The AM allocates the topology and hosts but does not yet
> connect them to the outside world.
> 6. The above step is applied to the campus OpenFlow network.
> 7. The researcher now requests an I2 sliver providing Ethernet
> connectivity between the ProtoGENI cluster and the OpenFlow
> network. The I2 AM allocates the topology but does not yet
> connect it to the outside world. At this point, three disconnected
> slivers have been established.
> 8. The researcher now provides his slice credentials to a stitching
> manager service, S, with two requests: stitch his PG and I2
> slivers and stitch his I2 and OF slivers. S, using a
> pre-established rule, determines the sort order for stitching is
> PG, OF, I2, meaning that for the PG-I2 VLAN, PG is contacted first
> and for the I2-OF VLAN, OF is contacted first.
> 9. S contacts the ProtoGENI AM, forwarding the slice credentials and
> the request to connect the sliver to Internet2. The PG AM, using
> local policy determined by the ProtoGENI administrator, assigns a
> VLAN connecting the ProtoGENI cluster to Internet2 to this slice.
> The PG-I2 VLAN identifying information is provided to S. Even
> though the mapping has been determined, the PG switch is
> configured to drop traffic on the allocated VLANs until there is
> confirmation that all the stitching required by the slice is
> complete. This is to avoid the possibility of traffic being injected
> into a partially configured network.
> 10. S now contacts the I2 AM providing the slice credentials and the
> PG-I2 VLAN identifying information. The I2 AM prepares the mapping
> between the I2 internal network and the PG-I2 VLAN. However, as
> within PG, the I2 switch is configured to drop traffic on the
> allocated VLANs until there is confirmation that all the stitching
> required by the slice is complete.
> 11. The previous two steps are repeated with OF and I2, starting with
> OF (as stated in step 8). At this point, S knows the identifying
> information for all the stitching VLANs assigned to this slice.
> This information is stored for operations and forensic use. S
> also has confirmation that the stitching has been completed.
> 12. S sends an indication to PG, OF, and I2 that the end-to-end
> network is configured. Now the rules to drop traffic on the
> assigned VLANs are removed and each switch is configured to
> translate VLAN traffic between the assigned stitching VLAN and the
> internal network. Each network sends a confirmation back to S.
> 13. S tells the researcher the end-to-end network is in place.
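For comparison with our model, the quoted drop-then-enable workflow
driven by S can be sketched roughly as follows. The FakeAM class and
its methods are my own illustration of the AM-side interface S would
need; no real GENI API is implied:

```python
# Sketch of the stitching-manager (S) workflow in steps 8-13: every
# switch drops traffic on its stitching VLAN until S confirms that all
# stitching for the slice is done, then S enables translation.

class FakeAM:
    def __init__(self, name):
        self.name = name
        self._next_vlan = 1000
        self.enabled = False          # drop traffic until told otherwise

    def allocate_stitch_vlan(self, credentials, peer):
        """Assign a VLAN toward `peer` per local policy; keep dropping."""
        vlan = self._next_vlan
        self._next_vlan += 1
        return vlan

    def map_stitch_vlan(self, credentials, peer, vlan):
        """Prepare the mapping to a VLAN chosen by the peer; keep dropping."""
        pass

    def enable(self, credentials):
        """End-to-end path confirmed: translate instead of drop (step 12)."""
        self.enabled = True

def stitch(credentials, pairs):
    """pairs: (first_am, second_am) tuples, already ordered by the
    pre-established rule from step 8 (PG before I2, OF before I2)."""
    vlans = {}
    for first, second in pairs:
        vlan = first.allocate_stitch_vlan(credentials, second.name)
        second.map_stitch_vlan(credentials, first.name, vlan)
        vlans[(first.name, second.name)] = vlan   # kept for forensic use
    # Only after every pair is stitched does S lift the drop rules:
    for first, second in pairs:
        first.enable(credentials)
        second.enable(credentials)
    return vlans

pg, of, i2 = FakeAM("PG"), FakeAM("OF"), FakeAM("I2")
vlans = stitch("slice-cred", [(pg, i2), (of, i2)])
print(vlans, pg.enabled, of.enabled, i2.enabled)
```

Note the two-phase shape: the per-pair VLAN assignment is independent
of the global enable, which is exactly what lets S guarantee no traffic
enters a partially configured network.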