
Re: [nvrg-bof] Updated charter proposal




Aaron Falk wrote:
> I agree with James.  Virtualization is getting intense interest in
> industry but in a way that is very different from the way projects like
> GENI and FEDERICA are using it.  Operators use virtualization as a way
> of bringing up and managing services.  GENI uses virtualization to allow
> researchers to share deeply-programmable experimental infrastructure. 

I'm not sure these efforts are on the same page at all. Industry
virtualization focuses on the OS (as you note, to virtualize services),
not on virtualizing a network.

Network virtualization has a fairly long history (over a decade); it is
close to the current virtualization efforts in the EU and Japan, and it
is where NVRG was going, IMO.

AFAICT, GENI is exploring the two together, but they are not necessarily
coupled.

Network virtualization without OS virtualization is just as useful,
mapping network resources at the process group level rather than the OS
level.

As to whether experiments and commercial systems require different
things, I could not disagree more. They may have different priorities
about what to develop short-term or what needs more detailed
development (e.g., a debugger isn't a run-time system), but overall
they solve exactly the same problem architecturally.

...
> Off the top of my head, here are a few examples of challenges GENI faces
> related to virtualization:
> 
>     * How is isolation between slices handled?  Too strict a requirement
>       and you lose flexibility; too loose and experiments are
>       influenced by other slices.

Consider the same question in terms of OSs. They're isolated except
where sharing is explicit (shared memory, IPC). Why isn't the same
answer good enough for virtual nets?
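The OS analogy can be made concrete. Here is a minimal Python sketch
(illustrative only; the variable name SLICE_VAR is invented for the
example) of the default answer OSes give: a child process is isolated
from its parent except for state that is shared explicitly, here an
environment variable handed over at spawn time, and the child's
changes never propagate back.

```python
import os
import subprocess
import sys

# SLICE_VAR is a made-up name standing in for any per-slice state.
# The parent explicitly shares a value with the child at spawn time;
# everything else about the two processes stays isolated.
child_script = (
    "import os\n"
    "os.environ['SLICE_VAR'] = 'changed'\n"   # child mutates its own copy
    "print(os.environ['SLICE_VAR'])\n"
)
out = subprocess.run(
    [sys.executable, "-c", child_script],
    capture_output=True, text=True,
    env={**os.environ, "SLICE_VAR": "original"},  # explicit sharing
)
print(out.stdout.strip())               # child saw and changed its copy
print(os.environ.get("SLICE_VAR"))      # parent unaffected (None if unset)
```

The isolation boundary is crossed only where the parent chose to cross
it, which is the same answer one might give for slices: isolated by
default, shared only where the sharing is explicit.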

>     * How is repeatability handled?  The virtual world is dependent on
>       the real infrastructure beneath it.  How much should it be
>       measured/controlled to permit repeatable experiments?
>     * How is visibility handled?  Users may be interested in fine grain
>       information about the real devices in the path.  By definition,
>       this is hidden from users via virtualization.  How is visibility
>       controlled?
>     * How do the three topics above (isolation, repeatability, and
>       visibility) impact experiment design and resource discovery?
>     * Can slices with very diverse goals in the above topics share
>       infrastructure?

If I were to ask the same questions about multiprocessing (process
virtualization), would they make the same (or any) sense?

Virtualization is abstraction. Abstraction means that you may know
process time but not clock time; you may know path properties but not
the physical path. If you don't want to (or can't) ignore these hidden
properties, you're better off not virtualizing, as with many real-time
OSes.
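The process-time point above is easy to demonstrate. A small Python
sketch (simulating descheduling with a sleep): wall-clock time
advances while the process is off the CPU, but the process's own
virtualized view of time, its CPU time, barely moves.

```python
import time

# While descheduled (simulated by sleep), the process consumes no CPU:
# wall-clock time advances, process time does not.
wall_start = time.time()
cpu_start = time.process_time()

time.sleep(0.5)  # stand-in for being descheduled by the OS

wall = time.time() - wall_start
cpu = time.process_time() - cpu_start
print(f"wall clock elapsed: {wall:.2f}s")  # roughly 0.5s
print(f"process CPU time:   {cpu:.2f}s")   # close to 0
```

An experiment that needs the hidden quantity (here, real elapsed time;
in a network, the physical path) is exactly the case where the
abstraction gets in the way.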

Joe


Note Well: Messages sent to this mailing list are the opinions of the senders and do not imply endorsement by the IETF.