The Local Cloud

May 2011


distribution unlimited


On February 7th 2007, Google opened their invitation-only beta of Gmail to the public. Although email-in-a-browser was by no means a new service – we had had the likes of Hotmail and Yahoo Mail for over a decade – it was revolutionary in terms of quality. Google's excellent AJAX client, which turned any browser into a fully-featured email client, coupled with virtually unlimited and ever-growing storage for one's messages, created an unprecedented influx of users into the @gmail.com realm.


Fast forward to 2011 and most of the world's email flows through the servers of the top two players, Google and Microsoft. One lets algorithms tirelessly analyse every word written to provide better-targeted advertising. The other happily hands your email over to very-much-alive people in three-letter agencies so that their failing empire can maintain its edge for another year or two. On top of the email disaster, we have the walled-garden ecosystems of Facebook and LinkedIn, and app-delivered content like The New York Times and iTunes. These are places where information stops being a URI – accessible from anywhere by means of a simple address – and becomes just an isolated bit inside a data silo, closely guarded by paywalls and locked behind registration requirements that remove users' ability to access it universally and anonymously. Lastly, the gigantic service aggregations have created two more issues. The first is the gold mine of log files telling the service providers who does what, when and with whom – privacy-violating information of unimaginable depth, available for sale to anyone wishing to profile you as an advertising target and available for free to any agency with enough authority to finish its request for information with “...or else.” The second is that this concentration, coupled with the GDP-swinging consequences of outages, whether accidental or intentional, creates possibilities for abuse of power and control on an unprecedented scale.


The Internet was initially built as an egalitarian inter-network of peers. What it is becoming is a heavily imbalanced client-server model, with clients having to either accept the server's service or have no service at all: either you get all the content you can pay for, or you receive no content whatsoever. Agreeing to the terms of those centralised services means waiving all your rights in favour of the multi-billion corporations. The same corporations that present us with “Don't be evil” on one hand and “If you have something to hide, maybe you shouldn't be doing it in the first place” on the other. These are terms that remove your privacy, take away your fair-use rights, override binding license agreements and remove any duty of care for the data you put in the custody of these entities. The people who used to be the gatekeepers of the Internet – the Systems Administrators, the Operators, the Architects – are to blame for the state of play too. They have gone soft and forgotten that economies of scale and outsourcing cannot replace knowledge and data ownership in the long run. They have become docile and complacent towards the likes of bean-counters and CEOs who see more value in quarterly cost savings than in the long-term sustainability of operations. It became “safer” to propose an outsourced service with contractual SLAs than to accept responsibility for one's own actions, when a simple mistake that would otherwise provide a valuable lesson could become a “career-limiting move” – especially once Gmail became so bloody good. We have arrived at a point where an email address that doesn't end with “@gmail.com” or “@live.com” is a curiosity.


Not all is lost, however. Organisations like the Electronic Frontier Foundation and people like Tim Berners-Lee and Eben Moglen have been tirelessly trying to raise awareness that the Internet is being taken away from the people and turned into a government- and corporation-serving surveillance and content-delivery machine – one where everything is “free”, provided you're happy to pay for it with spying and targeted advertising.


Contrary to the cloud revolution, rammed down our throats from the top, the counter-cloud revolution must take place from the bottom up. It is essential to empower users again through services local to the user – services that put control over the gold mine of log files back into users' hands and enable them to decide how to interact with their equal peers and share information between themselves, without relying on a central service that remains in the tight grip of those in charge. The recent events of the “Arab Spring” show how powerful an information-interchange-enabled population can be. Paradoxically, the spring movements of 2011 relied on the centralised services of American companies – Twitter and Facebook – to exchange information about civil disobedience and protests. This was only feasible because the “Arab Spring” was perfectly aligned with American foreign policy; it is very easy to imagine that the availability of these services would have been swiftly withdrawn had the opposite been the case.


Despite the asymmetric speed of most residential Internet connections, the bandwidth available to the majority of households in western societies is usually an order of magnitude more than what, only fifteen years ago, was available to local ISPs serving thousands of users. Even taking into account that the nature of the information transferred through those connections has changed from text-based to image-, sound- and video-based, most of our actual communication remains text-like. We chat, email and socialise using words, with an occasional URI (text) or picture (JPEG) thrown in. It is prudent to assume that a modest residential line of 8 Mbps down / 1 Mbps up could perfectly well service the communication needs of an entire family, while leaving enough bandwidth for VoIP and multimedia to flow constantly to the various passive media-consumption devices that fill our digital homes these days. What stops us from taking the leap from the cloud onto a more down-to-earth, local service boils down to three main issues.
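

A quick back-of-the-envelope calculation illustrates the point. The figures below – uplink speed, message sizes and daily volumes – are assumptions chosen for illustration, not measurements:

```python
# Rough capacity check for the claim above; all sizes and volumes are
# assumed, illustrative figures rather than measurements.
UPLINK_BPS = 1_000_000          # assumed 1 Mbit/s residential uplink
SECONDS_PER_DAY = 86_400

daily_capacity_bytes = UPLINK_BPS / 8 * SECONDS_PER_DAY   # ~10.8 GB/day

# Assumed daily outbound text traffic for a family of four:
emails = 200 * 75_000           # 200 emails at ~75 kB each (headers, quoting)
chat   = 5_000 * 500            # 5,000 chat/IM messages at ~0.5 kB each
photos = 20 * 2_000_000         # 20 shared photos at ~2 MB each
daily_use_bytes = emails + chat + photos

print(f"capacity: {daily_capacity_bytes / 1e9:.1f} GB/day")
print(f"usage:    {daily_use_bytes / 1e9:.2f} GB/day")
print(f"fraction: {daily_use_bytes / daily_capacity_bytes:.1%}")
```

Even with these generous estimates, a family's text and photo exchange uses well under one percent of the uplink's daily capacity, leaving the rest for VoIP and media.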


Firstly, in the world of the consumption-centred, client-server Internet, only servers are allocated static addresses. Clients are forced to use dynamic address allocation – their IP address changes on each connection, or roughly every 24 hours. This breaks with the egalitarian nature of the early Internet in favour of “dumb” clients focused on consuming services, and it discourages endpoints capable of proper data exchange with other peers. Unless customers can request a static IP from their ISP (usually at an unreasonable premium), this has to be worked around using dynamic-DNS providers. Fortunately there are many of them, most free or charging a nominal amount (under $10/year is the norm), and when used correctly they can even be combined in a fail-over configuration to remove the dependence on a single point of failure. Such services allow the creation of intermediate host.domain names that can then be used as fixed rendezvous points for services like email, chat and IM, as sketched below. This problem also affects IPv6, owing to ISPs' unwillingness to allocate static addresses to customers.


Secondly, there is the reliability of home connections. Although we are now used to what is perceived as multiple-nines service provision (99.99% availability or more), this needs to be put into perspective. What the IT world considers a pretty shoddy 99.9% availability still means less than ten hours of downtime in an entire year. That may matter a great deal when downtime can affect entire nations or continents, but even 99% uptime for a family of four would be a dramatic improvement over the countless malware struggles most Microsoft Windows users face every week, so this apparent problem can be considered a non-issue. Especially as most Internet protocols were built to cope with unreliable and intermittent connectivity (think of SMTP delivering mail despite connection problems and over slow links) and TCP was designed from the ground up to withstand delays and unreliable networks.
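

To illustrate the dynamic-DNS rendezvous mentioned above: many providers accept a simple authenticated HTTP request carrying the new address. The sketch below assumes such a provider; the update URL, parameter names and the IP-lookup service are hypothetical placeholders, so check your own provider's documented API before relying on anything like this:

```python
#!/usr/bin/env python3
"""Minimal dynamic-DNS updater sketch.  UPDATE_URL, the parameter names
and the IP-lookup service are hypothetical placeholders, not any real
provider's API."""
import urllib.parse
import urllib.request

UPDATE_URL = "https://dyndns.example.net/update"   # placeholder endpoint
HOSTNAME = "home.example.net"                      # the rendezvous name
USERNAME = "alice"
PASSWORD = "secret"

def current_public_ip() -> str:
    # Ask an external lookup service for our public address (placeholder URL).
    with urllib.request.urlopen("https://ip.example.net/plain") as resp:
        return resp.read().decode().strip()

def push_update(ip: str) -> str:
    # Many dynamic-DNS update APIs take a GET request with basic authentication.
    params = urllib.parse.urlencode({"hostname": HOSTNAME, "myip": ip})
    mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
    mgr.add_password(None, UPDATE_URL, USERNAME, PASSWORD)
    opener = urllib.request.build_opener(urllib.request.HTTPBasicAuthHandler(mgr))
    with opener.open(f"{UPDATE_URL}?{params}") as resp:
        return resp.read().decode().strip()

if __name__ == "__main__":
    ip = current_public_ip()
    print(f"update {HOSTNAME} -> {ip}: {push_update(ip)}")
```

Run from cron every few minutes (or from the router's DHCP/PPP hook), a script like this keeps the rendezvous name pointing at whatever address the ISP hands out; two such names at independent providers give the fail-over configuration described above.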


The last and largest hurdle is the actual software stack that would cover users' needs to network efficiently and without over-reliance on a central authority. The pieces of the puzzle are available now, but they lack the required integration (Free software is usually very good at interoperability but somewhat lacking in polish and product integration) or ease of installation and configuration. To address this, the FreedomBox Foundation (http://freedomboxfoundation.org) has been created by Eben Moglen and the EFF, with a mission statement to create “Smart devices whose engineered purpose is to work together to facilitate free communication among people, safely and securely, beyond the ambition of the strongest power to penetrate, they can make freedom of thought and information a permanent, ineradicable feature of the net that holds our souls.” Since FreedomBox is in its early infancy, it may be feasible to begin experiments in this area using a less formal process. Alongside the discussions on the proposed shape of the FreedomBox, a second-track approach of prototyping and testing different software stacks and solutions (call it the Cave Johnson approach – “throwing science at the wall and seeing what sticks”) may be beneficial in the short to medium term, both for the project and for people interested in becoming early adopters. It would have the obvious advantage of using off-the-shelf hardware as a base (old or partly damaged laptops; small, underpowered computers) coupled with currently available software, providing early access to local services for those willing to spearhead the process.


The technical expertise expected of users who create their own local service may be daunting, especially when a working, easy-to-use alternative exists in the regular cloud, and it may require a fair amount of mentoring and patience from more skilled users. But it would benefit the entire user base by creating a critical mass of interconnected nodes offering not only private and secure communications but also distributed, encrypted storage, trust-based peering and decentralisation of service. Since such test deployments would, by definition, be uncoordinated, they would provide valuable insight into any potential interoperability issues when different versions of software run on different nodes of the network. It is critical for early adopters to commit their systems to “production” as soon as possible.


There is no better incentive to fix bugs than relying on a system for one's day-to-day communication. For example, email and instant messaging should be among the first services deployed on any local cloud; this would create a strong pull towards correct, working solutions that interoperate well. On the other hand, it is also important that early adopters have the freedom to scrap unsuccessful attempts and replace them with working solutions. Such local deployments should not be limited to a pre-defined set of software packages if interoperating alternatives are available: no single email server implementation should be preferred over another, as long as both are standards-compliant and work well. In the true spirit of Unix tools, local cloud deployments should avoid package evangelism and focus on functionality instead. It is similarly important to be inclusive of OS versions and types – no operating system should ever be excluded from peering unless it is faulty or incapable of performing the requested function.


Today's generation of Computer Architects and Administrators has grown in the fertile soil of self-provided services. They learned stability and scalability the hard way, running their own systems and tinkering with them to perfection, and we can see them building huge and resilient services today. We need to be careful not to take this ability to tinker, and the freedom to make one's own mistakes, away from the next generation, which is already becoming overly reliant on provided services instead of being able to build and understand them. Amid the exponential explosion of Internet users, we need to let the gifted ones become self-sufficient, as only then will they be capable of carrying on when the last bearded old Unix git draws their final breath.


