Hi Roland,

On 11/9/10 03:10 PM, Roland Bless wrote:
> Hi Sunay,
>
> On 09.11.2010 20:07, Sunay Tripathi wrote:
>> So looking at the acid tests so far and the VN principles, it seems like we need to tighten the isolation case a bit more. Specifically, just putting a Virtual Output Queue (VoQ) per VN on each link to provide isolation is not cutting it. The isolation (which translates into per-packet latency and bandwidth) needs to be at the VN fabric level rather than at the individual link level. Basically, the VN should mirror a non-virtualized physical network of the same capacity, i.e., a 1 Gbps VN on a 10 Gbps network should see the same or better behavior than if it were on a physical 1 Gbps switch fabric by itself. This does require network elements like switches and routers to do more work.
>
> I don't agree with this, because it's too restrictive IMHO. A chance for VNets is that they permit the use of a different virtual-link QoS, even if it is not exactly the same QoS that a real physical substrate provides. For instance, consider that an infrastructure provider can exploit some statistical multiplexing gain by multiplexing several virtual links over the same physical link, offering lower QoS (maybe only a statistical guarantee) at lower cost.
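The statistical-multiplexing gain Roland mentions can be made concrete with a back-of-the-envelope sketch. All numbers below are illustrative assumptions, not anything stated in the thread: 20 virtual links of 100 Mbps each are packed onto a 1 Gbps physical link (2:1 overbooking), and each virtual link is modeled as independently active 30% of the time.

```python
from math import comb

# Illustrative assumptions (not from the thread):
# 20 virtual links of 100 Mbps share one 1 Gbps physical link,
# so at most 10 of them can be fully active at the same time.
n_vlinks = 20
capacity_links = 10          # 1 Gbps / 100 Mbps
p_active = 0.3               # chance a virtual link is busy at a given instant

# P(more than `capacity_links` virtual links active simultaneously)
# under an independent on/off traffic model (binomial distribution).
p_overload = sum(
    comb(n_vlinks, k) * p_active**k * (1 - p_active)**(n_vlinks - k)
    for k in range(capacity_links + 1, n_vlinks + 1)
)
print(f"overbooking factor:      {n_vlinks * 100 / 1000:.1f}x")
print(f"probability of overload: {p_overload:.4f}")   # roughly 1.7%
```

Under these toy assumptions the provider sells twice the physical capacity while the link is actually overloaded only about 1.7% of the time, which is exactly the kind of statistical (rather than hard) guarantee being described.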
Yes, but then it is not isolation at the VNet level, only at the individual component level. Coming back to my favorite example: what happens when virtual machines for two different virtual networks try to send packets out of the same egress link? The cumulative egress bandwidth for one VNet on that link may be within its quota, but the traffic for the other VNet causes the link capacity to be exceeded. On an individual ingress-policy basis, the offending VNet might be within its QoS limits, yet collectively the traffic is over quota on the egress link. That means the switching fabric will drop traffic for both VNets (based on RED) and our isolation axiom doesn't hold. I do agree with you that this is much harder to implement (although not impossible). We do want to exploit the benefits of statistical multiplexing — absolutely for it where possible — but we also want to make sure that VNets operating within their set limits are isolated from other, offending VNets.
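The egress-contention scenario can be illustrated with a toy simulation (the packet counts, quotas, and scheduling policies below are my own illustrative assumptions, not a proposed mechanism). VNet A stays within its quota while VNet B oversends; with a single shared egress queue, tail drops hit both VNets, whereas per-VNet queues served round-robin keep A's conforming traffic whole.

```python
import random
from collections import deque

LINK_CAPACITY = 10              # packets the egress link can send per tick (assumption)
ARRIVALS = {"A": 5, "B": 20}    # A is within its 5 pkt/tick quota, B is far over
TICKS = 1000

def shared_fifo(rng):
    """One shared egress queue: B's excess causes drops for A as well."""
    sent = {"A": 0, "B": 0}
    for _ in range(TICKS):
        queue = ["A"] * ARRIVALS["A"] + ["B"] * ARRIVALS["B"]
        rng.shuffle(queue)                 # arrival order is arbitrary
        for vn in queue[:LINK_CAPACITY]:   # the rest are tail-dropped
            sent[vn] += 1
    return sent

def per_vn_queues():
    """One queue per VNet, served round-robin: B's excess cannot starve A."""
    sent = {"A": 0, "B": 0}
    for _ in range(TICKS):
        queues = {vn: deque([vn] * n) for vn, n in ARRIVALS.items()}
        budget = LINK_CAPACITY
        while budget > 0 and any(queues.values()):
            for vn, q in queues.items():
                if q and budget > 0:
                    sent[q.popleft()] += 1
                    budget -= 1
    return sent

fifo = shared_fifo(random.Random(42))
wrr = per_vn_queues()
print("shared FIFO  :", fifo)   # A gets only ~40% of its in-quota traffic through
print("per-VN queues:", wrr)    # A gets its full 5 pkt/tick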
> In my view, a virtual network consists of virtual nodes and virtual links that connect the virtual nodes. So a VNet at the "network layer" provides logical/direct point-to-point connections between the virtual nodes. Which underlying substrate network technology is used to provide the virtual link between two virtual nodes (which are hosted on substrate nodes) may vary widely: it could be a TCP connection, an IP-based tunnel (e.g., L2TP), an MPLS LSP, a dedicated L2 connection, a VLAN, some wavelength on a WDM connection, or even shared memory if the virtual nodes are hosted on the same physical host. Similarly, what is running inside the virtual node is completely independent of the substrate technology; e.g., it could be the same technology or some future networking layer. Coming back to your proposal: I find it valuable to realize a virtual 100 Mbit/s link using an IP tunnel within an EF PHB over a 1 Gbit/s physical network. The DiffServ-based QoS may not be comparable to a dedicated 100 Mbit/s physical link, but it may be good enough for most uses, may span a much larger distance, and may be less expensive to realize.
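As a small illustration of the EF-PHB virtual link idea, an endpoint of such a tunnel can mark its outer packets with the EF codepoint so that DiffServ-configured routers along the path give the link its priority treatment. This is only a sketch of the marking step, not the whole mechanism; EF uses DSCP 46 (RFC 3246), carried in the upper six bits of the former IPv4 TOS byte.

```python
import socket

# EF PHB uses DSCP 46 (RFC 3246); the DSCP occupies the upper six
# bits of the former IPv4 TOS byte, so the byte value is 46 << 2.
EF_DSCP = 46
TOS_VALUE = EF_DSCP << 2          # 0xB8

# Sketch: the sending endpoint of a virtual link marks its tunnel
# traffic (e.g., the outer header of an IP-based tunnel) as EF.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)

# Datagrams sent on `sock` now carry the EF codepoint; routers with an
# EF PHB configured can then approximate the 100 Mbit/s virtual link.
print(hex(TOS_VALUE))
sock.close()
```

Whether the resulting behavior is "good enough" compared to a dedicated physical link is of course exactly the policy question under discussion here.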
Like I said before: promise me 100 Mbps and give me 1 Gbps, and I am happy. Promise me 100 Mbps and give me 50 Mbps, and I am very unhappy. As Robert mentioned, we are talking to several cloud operators who want to deploy VNets for their end customers; some of these requirements came from them. Making the cloud operators happy would be a good use case for the VNRG effort.
>> The other thing is related to management. A VN administrator needs to be able to administer his resources and name space independently.
>
> Independently of what? Independently of what is running as the substrate technology, yes.
Yes, but it is easier said than done. For true virtualization, the VN administrator should be assigned physical resources (which links he can use, bandwidth, etc.) but then be allowed to create virtual machines with his own MAC addresses or to use his own VLAN tags. I think the fundamental question we need to agree on is: what is it that we want to virtualize? Are the L2 resources partitioned, allowing L3 to be virtualized? One proposal for a VNet would be:

1) L2 resources (the substrate) are partitioned, i.e., a VLAN tag or a set of VLAN tags is assigned to each VNet.
2) A set of MAC addresses is assigned to the VNet, or perhaps a MAC address prefix (IP style) is assigned to it.
3) Physical resources are also assigned per VNet where possible (space in the flow table to configure OpenFlow-style flows, bandwidth, etc.).

These can collectively be termed a Virtual Resource Group (VRG), and control over it is handed to the VNet administrator. At this point L3 is fully virtualized (barring a few caveats), and each VNet administrator can deploy his own IP subnets, DHCP servers, virtual routers, etc. The acid test falls out much more easily if we put a stake in the ground as to what is being partitioned and what gets virtualized. There are more complicated scenarios, but if there is some acceptance, we can probably start with the simple scenarios and expand from there.
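The VRG proposal above can be sketched as a small admission check on the substrate: L2 resources are partitioned, so the substrate must refuse to hand the same VLAN tag to two VNets or to over-commit link bandwidth. The class and field names below are purely illustrative, not a proposed API.

```python
from dataclasses import dataclass

# Hypothetical sketch of the Virtual Resource Group (VRG) idea: the
# partitioned L2 resources plus physical quotas handed to one VNet admin.
@dataclass
class VRG:
    vnet_id: str
    vlan_tags: set            # partitioned VLAN tag(s) for this VNet
    mac_prefix: str           # e.g. a locally administered OUI-style prefix
    flow_table_slots: int     # share of switch flow-table space
    bandwidth_mbps: int

class Substrate:
    def __init__(self, link_capacity_mbps):
        self.link_capacity_mbps = link_capacity_mbps
        self.vrgs = {}

    def allocate(self, vrg):
        """Admit a VRG only if its L2 partition does not collide with an
        existing VNet and its bandwidth quota still fits the link."""
        used_tags = set().union(set(), *(v.vlan_tags for v in self.vrgs.values()))
        used_bw = sum(v.bandwidth_mbps for v in self.vrgs.values())
        if vrg.vlan_tags & used_tags:
            return False                    # VLAN partition collision
        if used_bw + vrg.bandwidth_mbps > self.link_capacity_mbps:
            return False                    # over-committing the link
        self.vrgs[vrg.vnet_id] = vrg
        return True

sub = Substrate(link_capacity_mbps=10_000)
a = VRG("vnetA", {100, 101}, "02:00:5e", 1024, 1000)
b = VRG("vnetB", {101, 102}, "02:00:5f", 1024, 1000)   # tag 101 collides
print(sub.allocate(a))   # True
print(sub.allocate(b))   # False: VLAN tag 101 is already partitioned to vnetA
```

Everything inside the VRG (IP subnets, DHCP servers, virtual routers) is then the VNet administrator's business, which is exactly the partition-L2 / virtualize-L3 split proposed above.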
>> But the issue that is bogging us down is: what is the non-virtualized part that ties entities to a VN and allows the H/W to enforce the virtualization? Is it the MAC address? Is it the VLAN? The problem with VLANs is that most hosts don't support Q-in-Q. Do people have thoughts on this?
>
> As I said: substrate technologies to realize virtual links may vary widely; this also implies different isolation properties.
I think we first need to define what you are terming the substrate. I'm not sure whether there were any discussions f2f, but I don't see much on the mailing list, hence the open-ended questions.

Cheers,
Sunay
Note Well: Messages sent to this mailing list are the opinions of the senders and do not imply endorsement by the IETF.