What is the Interplanetary File System and how does it work?


1 Answer

Best answer

IPFS, key to a decentralized Internet?
Origin and operation of IPFS
How is everything stored then?

  • IPFS, as the Interplanetary File System is known, is a solution proposed by Juan Benet and developed by his company Protocol Labs, and one that could shape the future of the Internet.
  • Just as Git offers a decentralized platform for repositories, IPFS aims to stand out with a similar approach, but focused on file storage and data retrieval.

If you have spent time around computers, you have probably heard of a very particular concept: the Interplanetary File System, or IPFS, which tends to raise plenty of questions among those who come across it for the first time. Precisely for this reason, we will now review what it is and how it works.

IPFS, key to a decentralized Internet?

The first thing to highlight, in this sense, is that the Interplanetary File System, or IPFS, is essentially a distributed peer-to-peer file-sharing network, one that many analysts already point to as among the best positioned to become the basis of a new decentralized web.

This is because data storage on today's Internet mostly revolves around servers. Physical or virtual, on massive server farms or cloud platforms, but always under the control of a single company.

Anyone who wants to access that data has no choice but to establish an HTTPS connection from their browser to the appropriate server. In other words, the server sits at the center of everything that happens. This is a simplification, of course, but it describes the general model on which the Internet works for now.

And while solutions like mirror servers and content delivery networks exist, the locations remain finite. IPFS aims to break with that by working as a decentralized network, somewhat similar to what Git does. Git is one of the most widely used decentralized systems, in its case for repositories. IPFS wants to be the same, but for everything related to file storage and data retrieval.


Origin and operation of IPFS

IPFS was created by Juan Benet and is developed by Protocol Labs, the company he founded for this purpose. As we said, he took the decentralized nature of Git and the distributed, bandwidth-saving techniques of torrenting, and created a mechanism that works across all nodes of the IPFS network.

The IPFS decentralized web is made up of all the computers connected to it, known as nodes. Nodes can store data and make it accessible to anyone who requests it.

If someone requests a file or a web page through your node, a copy of it is cached on that node. As more and more people request that data, more and more cached copies of it exist across the network.
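
To see this retrieval step in practice, here is a minimal Python sketch that asks a node for a file by its identifier. It assumes a local Kubo (go-ipfs) daemon exposing its RPC API on the default port 5001, and the content ID shown is a hypothetical placeholder:

    import requests

    # Hypothetical placeholder; any valid content ID would work here.
    cid = "QmYourContentIdHere"

    # Kubo's RPC API accepts POST requests on port 5001 by default.
    resp = requests.post("http://127.0.0.1:5001/api/v0/cat",
                         params={"arg": cid}, timeout=60)
    resp.raise_for_status()
    print(resp.content[:100])  # first bytes of the retrieved file

Once the node has fetched those bytes, it keeps a cached copy, which is exactly the behavior described above.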

To make this possible, the decentralized web uses content-based routing, a very interesting alternative to the usual location-based web addresses: instead of pointing at where the data lives, an address is derived from the content itself. As an advantage, this reduces latency, the bandwidth required and annoying bottlenecks.
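
The idea behind content-based addressing can be illustrated with a short Python sketch: the address is computed from the data itself, so every node holding the same bytes derives the same address. Real IPFS wraps the hash in a richer content-ID format, so treat this only as a conceptual simplification:

    import hashlib

    # Location addressing: the name says WHERE to look.
    url = "https://example.com/reports/annual.pdf"

    # Content addressing: the name is derived from WHAT the data is.
    data = b"hello, decentralized web"
    address = hashlib.sha256(data).hexdigest()
    print(address)  # identical on every node holding these exact bytes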

On the other hand, moving away from the centralized model means there is no single focal point for attackers to target. In time, that could reassure those who are wary of third parties having access to their private data: nobody else will be able to access your information, even if you share far more than before.

How is everything stored then?

The data is stored in a series of 256 KB chunks, which are called IPFS objects. Files larger than that are split into as many IPFS objects as needed to hold their contents. One IPFS object per file then contains links to all the other IPFS objects that make up that file.
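
As a rough illustration of that chunking scheme, the following Python sketch splits a file into 256 KB pieces, hashes each one, and derives a root identifier from the chunk hashes. Real IPFS builds a Merkle DAG with proper content IDs, so this is only a simplified model:

    import hashlib

    CHUNK_SIZE = 256 * 1024  # 256 KB, the chunk size described above

    def chunk_and_hash(path):
        chunk_hashes = []
        with open(path, "rb") as f:
            while True:
                chunk = f.read(CHUNK_SIZE)
                if not chunk:
                    break
                # Each chunk plays the role of one IPFS object.
                chunk_hashes.append(hashlib.sha256(chunk).hexdigest())
        # A root object links all chunk hashes, representing the whole file.
        root = hashlib.sha256("".join(chunk_hashes).encode()).hexdigest()
        return root, chunk_hashes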

When a file is added to the network, it is given a unique hash-based identifier derived from its contents, called a content ID (CID). This is how it is identified and referenced within the IPFS network, and it is tracked over time.
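
In practice, adding a file and receiving its content ID looks roughly like this. The sketch again assumes a local Kubo daemon on port 5001 and a file named example.txt, both of which are illustrative assumptions:

    import requests

    with open("example.txt", "rb") as f:
        resp = requests.post("http://127.0.0.1:5001/api/v0/add",
                             files={"file": f}, timeout=60)
    resp.raise_for_status()
    info = resp.json()
    print(info["Hash"])  # the content ID (CID) assigned to the file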

Suppose you store a file on your node and someone requests and downloads it directly from that node. The next time a third party asks for it, they can get it from both your node and the second person's node. The more people download the file, the more nodes can serve it, to everyone's advantage.

Garbage collection will periodically remove unused cached objects. However, you can pin files to your node if, for some special reason, you want to keep them. There are even paid pinning services that keep all your files anchored on remote nodes without limits.
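
Pinning and garbage collection can also be driven through the same RPC API. A minimal sketch, assuming a local Kubo daemon and a hypothetical CID you want to protect:

    import requests

    API = "http://127.0.0.1:5001/api/v0"
    cid = "QmYourContentIdHere"  # hypothetical placeholder

    # Pinning protects the object from garbage collection on this node.
    requests.post(f"{API}/pin/add", params={"arg": cid},
                  timeout=60).raise_for_status()

    # Garbage collection then removes only the unpinned cached objects.
    requests.post(f"{API}/repo/gc", timeout=120)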

If something on your website goes viral and generates massive waves of traffic, the pages will be cached on every node that retrieves them. Those cached copies will then help serve further page requests, keeping up with demand even if it grows sharply.


