
[08/27/2020 @11:02]: Did the slow adoption of IPv6 lead to the cloud?



Background



The Internet Protocol (IP) was specified in 1981 in RFC 791 and was based on a 32-bit address space. Due to the growth of the Internet, the original assignment scheme was found to be wildly inefficient and was leading to premature address exhaustion, so in 1993 RFC 1519 created CIDR (Classless Inter-Domain Routing) to allow more flexible assignment of addresses. Two years later, in 1995, RFC 1883 brought about the next version of IP, known generally as IPv6. One of the key changes in IPv6 is the use of 128 bits to specify addresses.



/cgi-bin/rfc/rfc791.txt


/cgi-bin/rfc/rfc1519.txt


/cgi-bin/rfc/rfc1883.txt




Addressing



Every node wishing to communicate on the Internet needs a unique address. In IP (IPv4) the size of an address is 32 bits, which means the total number of possible addresses is 4,294,967,296, or a little over 4.2 billion. Not all of those addresses are assignable, however. Several address ranges are reserved (for example, the ubiquitously used RFC 1918 private address spaces 192.168.0.0/16, 172.16.0.0/12 and 10.0.0.0/8 set aside 17,891,328 addresses for private use that may not appear on the Internet) and several legacy address ranges remain inefficiently allocated (for example, Apple still owns one of the original class A allocations comprising 16,777,216 public IP addresses), so the practical limit is in fact much lower. With IPv6 the number of assignable addresses is so large it is hard to fathom. Written out, 2¹²⁸ is 340,282,366,920,938,463,463,374,607,431,768,211,456, or roughly 3.4 x 10³⁸. Even with large swaths of reserved space and overhead for internal networking operations, it represents a significant increase in available addresses.



/cgi-bin/rfc/rfc1918.txt


https://www.wolframalpha.com/input/?i=2%5E128
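The arithmetic above is easy to check for yourself. A quick sketch using Python's standard-library ipaddress module (the variable names here are mine, purely for illustration):

```python
import ipaddress

# Total address counts follow directly from the bit widths.
ipv4_total = 2 ** 32
ipv6_total = 2 ** 128
print(f"IPv4 total: {ipv4_total:,}")   # 4,294,967,296
print(f"IPv6 total: {ipv6_total:,}")

# The three RFC 1918 private ranges, summed with the stdlib.
private = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]
reserved = sum(net.num_addresses for net in private)
print(f"RFC 1918 reserves {reserved:,} addresses")  # 17,891,328
```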




The Cloud



      _  _
     ( `   )_
    (    )    `)
   (_   (_ .  _) _)


In its simplest form "the cloud" is a marketing term for two different but related concepts. The first was born out of engineers' penchant for eliding the complexity of a system that didn't need to be worried about (oftentimes the Internet itself) behind a picture of a cloud. This aspect is generally summarized as "someone else's computer," and refers to a highly-automated, self-service, and elastic infrastructure that can be consumed directly by the end user without the intervention of a traditional IT staff (and is often owned by someone other than the end user or their organization). The other is economic, and generally refers to shifting payment for resources from a long-term capital activity (buying hardware every few years) to a more frequent expense activity (usually known as pay-as-you-go or usage-based billing), often leveraging the elastic nature of the first point.


In this writing I'm leaning on the naive version of the first point, the blunt "someone else's computer" definition, if you will.



A client-server problem?



In the beginning computers were rare and expensive. This led to the advent of time-sharing: multi-user systems where several individuals could use the resources of a computer simultaneously. Modern personal computers retain this ability, but it is rarely used outside of providing a layer of security (think of when Windows prompts you for permission to run something as the Administrator, or when macOS asks for your password to do something; in both cases the system needs to run something as a specially empowered user that is different from your own). The way computers were used led to client-server applications, and most of the Internet remains built that way. E-mail, the web, and most of the foundational operating applications of the Internet (DNS, BGP, OSPF...) operate in a client-server model.


As we began to run out of public IP addresses during the Internet's growth in popularity in the late 1990s and early 2000s, a stop-gap measure gained popularity. Known as Network Address Translation (NAT), it is a scheme to "hide" one or more networks of IP addresses (usually RFC 1918 addresses) behind a single public IP address. Originally described in May of 1994 in RFC 1631, NAT rapidly became the standard for residential Internet connections, to the point that it was built into the devices that ISPs provided to their customers to connect their computers to the Internet. (I have somewhat fond memories of using NAT, known at the time in Linux as IP Masquerade, to share a single dial-up connection to the Internet between two computers: one was mine and the other used by my siblings.)



/cgi-bin/rfc/rfc1631.txt
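To make the "hiding" concrete, here is a toy in-memory sketch of the translation-table bookkeeping a NAT gateway performs. Everything here is illustrative (203.0.113.7 is a documentation address standing in for the ISP-assigned IP); real NAT rewrites packets in a router or kernel, not Python objects:

```python
class ToyNat:
    """Toy model of NAT: many private addresses share one public IP."""

    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.next_port = 40000
        self.table = {}    # (private_ip, private_port) -> public_port
        self.reverse = {}  # public_port -> (private_ip, private_port)

    def outbound(self, private_ip, private_port):
        """Rewrite an outgoing connection's source to the public IP."""
        key = (private_ip, private_port)
        if key not in self.table:
            self.table[key] = self.next_port
            self.reverse[self.next_port] = key
            self.next_port += 1
        return (self.public_ip, self.table[key])

    def inbound(self, public_port):
        """Route a reply back in; unsolicited traffic has no mapping."""
        return self.reverse.get(public_port)  # None means: dropped

nat = ToyNat("203.0.113.7")
src = nat.outbound("192.168.1.10", 51000)
print(src)                  # the world sees ('203.0.113.7', 40000)
print(nat.inbound(src[1]))  # reply maps back to ('192.168.1.10', 51000)
print(nat.inbound(9999))    # None: no mapping exists, packet dropped
```

The last line is the whole drawback discussed below: without a pre-existing mapping, nothing on the outside can initiate a connection inward.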



The drawback of NAT is that the systems behind it, lacking a public IP address, cannot receive incoming connections. There is a facility to expose individual IP ports to a system behind the NAT gateway, but it is often tedious to manage and imposes limits on how the port may be used. This made it difficult to run servers on residential Internet connections, which often provided the user with only a single IP address that could change periodically.



Enter IPv6



In the mid-to-late 2000s the impending exhaustion of IP address allocations began to find some urgency in the public consciousness, and a real push was made to widely support the now decade-old IPv6 protocol. IPv6 specifies that Internet connection users should be given what is known as "a sixty-four": a /64 prefix, or 2⁶⁴ addresses. (The reasoning comes from the ability to encode your Ethernet MAC address, along with some other information, directly in the IPv6 address, which makes address assignment easy. This was largely determined to be an awful idea, as it allowed you to be uniquely identified literally anywhere in the world by IP alone; most modern IPv6 stacks implement the privacy extensions of RFC 3041 or RFC 4941 instead.) Either way, it provides each connection with something like 18,446,744,073,709,551,616 possible addresses to use.



/cgi-bin/rfc/rfc3041.txt


/cgi-bin/rfc/rfc4941.txt
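The (now discouraged) MAC-encoding scheme mentioned above, known as modified EUI-64, is simple enough to sketch: flip the universal/local bit in the MAC's first octet and insert ff:fe in the middle. A small illustrative Python version (the function name is mine):

```python
def eui64_interface_id(mac: str) -> str:
    """Derive the modified EUI-64 interface identifier from a MAC."""
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02                             # flip universal/local bit
    eui = octets[:3] + [0xFF, 0xFE] + octets[3:]  # insert ff:fe mid-MAC
    # Group the 8 bytes into four 16-bit hex fields, IPv6-style.
    return ":".join(f"{eui[i] << 8 | eui[i+1]:04x}" for i in range(0, 8, 2))

# Appended to a /64 prefix this forms the full address -- and since the
# MAC never changes, the low 64 bits followed you everywhere.
print(eui64_interface_id("00:11:22:33:44:55"))  # 0211:22ff:fe33:4455
```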




What does this have to do with the cloud?



Ok, so that was a *lot* of background, but I think it is important to understand the world we were in while I posit this. It is of course all conjecture, and probably at best a thought experiment, but it's been on my mind for a while now.


As applications started to appear that allowed users to control real-world objects (somewhat like my silly VFD display), it became necessary to connect a user to a specific device they owned. Generally called the Internet of Things (IoT) these days, many of these applications needed a way to connect the user to their equipment even when the user was not on the same local network as the devices (controlling a smart thermostat or viewing home security cameras while away from home, for example). The solution was to use a third-party server as a mediator: both the device and the client application connect out to a public IP address, which relays information between them, bypassing the need for NAT port forwarding (and solving the potential dynamic IP address problem).



My silly VFD display
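The mediator pattern described above can be sketched as a toy store-and-forward relay. Everything here (the class, client IDs, messages) is illustrative, standing in for the outbound TCP connections a real device and app would each make to the vendor's server:

```python
from queue import Queue

class Relay:
    """Toy cloud relay: both parties connect OUT, so NAT never blocks."""

    def __init__(self):
        self.mailboxes = {}  # client id -> queue of pending messages

    def register(self, client_id):
        """Called when a client makes its outbound connection."""
        self.mailboxes[client_id] = Queue()

    def send(self, to_id, message):
        """Store-and-forward: queue a message for the other party."""
        self.mailboxes[to_id].put(message)

    def poll(self, client_id):
        """Each client fetches messages over its own outbound link."""
        return self.mailboxes[client_id].get()

relay = Relay()
relay.register("thermostat-1234")   # device dials out from behind NAT
relay.register("phone-app")         # so does the user's app
relay.send("thermostat-1234", "set-temp 20C")  # app -> relay -> device
print(relay.poll("thermostat-1234"))           # set-temp 20C
```

Note that the vendor's server sits in the middle of every interaction, which is exactly the property questioned below.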



Interestingly though, if you think about it, IPv6 makes this entirely pointless. There are plenty of addresses for every single device, so you could simply "pair" your application with your device(s) using one of several local-network discovery protocols (UPnP and Zeroconf come to mind), and from that point on your application would be able to connect to your device directly. Security through token or certificate exchange, similar to Bluetooth's PIN pairing, could easily be added, which would mean that in many cases the use of the cloud would be entirely unnecessary.
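A minimal sketch of that token idea, assuming a shared secret exchanged once during local pairing and an HMAC challenge-response afterwards (the names and flow are mine, not any particular protocol's):

```python
import hashlib
import hmac
import secrets

def pair() -> bytes:
    """One-time local pairing: both sides remember the same secret."""
    return secrets.token_bytes(32)

def prove(token: bytes, challenge: bytes) -> bytes:
    """The app answers the device's random challenge using the token."""
    return hmac.new(token, challenge, hashlib.sha256).digest()

def verify(token: bytes, challenge: bytes, response: bytes) -> bool:
    """The device checks the response in constant time."""
    return hmac.compare_digest(prove(token, challenge), response)

token = pair()                       # exchanged once on the local network
challenge = secrets.token_bytes(16)  # fresh challenge per connection
assert verify(token, challenge, prove(token, challenge))
assert not verify(token, challenge, b"\x00" * 32)
print("pairing check passed")
```

With every device directly reachable over IPv6, something like this is all the "account" infrastructure a thermostat would actually need.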



What do you think?



This possibility popped into my head a while back and has been bugging me. What if IPv6 had been deployed earlier, before the Internet got popular enough that NAT had to become the de-facto standard? Would we have seen the rise of the cloud as a middleman, the de-facto standard for how IoT applications are structured? Or would we instead see more decentralized applications, where users remain in control of their data and companies don't have infrastructure that lives in the middle of most of our interactions?


I don't know. It is certainly possible. After all, the central server model only really thrived because there wasn't another option that the layperson had any hope of comprehending or implementing. What do you, dear reader, think? What kind of Internet might we have if most of us had always had a few quintillion addresses to call our own?







🚀 © MMXX-MMXXI matt@going-flying.com
