GitHub Buried Code In The Arctic To Sleep For 1,000 Years

Although undoubtedly controversial, especially after its acquisition by Microsoft, GitHub most likely hosts much of the world's open source (and some not-so-open) code. Given how much of the world runs on open source software, directly or otherwise, GitHub's repositories hold a large share of our culture's digital infrastructure and history. That's why GitHub has taken it upon itself to preserve that data by burying a snapshot of its entire archive in the Arctic Circle, where it is meant to last a thousand years.

No, GitHub didn't just dig a deep hole near the North Pole to bury DVDs or, worse, magnetic hard drives, though it did come close to that geographical location. To keep the world's open source code safe for hundreds of years, GitHub selected a decommissioned coal mine in Svalbard, Norway, where a chamber beneath meters of permafrost was built exactly for that purpose.

Fortunately, GitHub also chose a more sensible medium for storing those repositories. After taking a snapshot of every active public repository on February 2 this year, GitHub collected 21TB of repository data, which its partner Piql then wrote onto 186 reels of digital photosensitive archival piqlFilm. The reels were packed into boxes and shipped to Norway, where they now sit inside containers meant to keep them safe for 1,000 years.

GitHub would probably have wanted to be there to document the whole journey, but the world is a much different place now than it was when the company announced its archival program last November. Even when it finally became possible to ship the archives to Norway, GitHub had to let local partners handle matters instead. On July 8, 2020, GitHub's snapshot as of February 2, 2020, was safely deposited in the Arctic Code Vault.

While the archival film's journey ended there, the GitHub Archive Program is still in full swing. In particular, the Internet Archive has been making its own full archive of public repositories as of April 13 this year, already amounting to 55TB of data. Unlike the copy resting in literally cold storage, this archive is meant to be available for cloning later this month.
