Easy personal IPFS pinning service with Textile

ipfs Jan 03, 2019

A quick lesson in spinning up your own personal IPFS pinning service in no time with Textile Cafes

UPDATE: We've discontinued Textile Cafes, but pinning data to the IPFS network has never been easier. Textile Buckets now allow you to publish your website, blog, or data on IPFS easily. Buckets are dynamic folders of pinned content. Data you add to a Bucket is available over IPFS and HTTP gateways and every Bucket has a static URL that will reference your changing data. Read more.

The Interplanetary File System (IPFS) is a protocol and network designed to create a content-addressable, peer-to-peer method of storing and sharing hypermedia in a distributed file system. That's quite a mouthful, but it basically boils down to a re-imagining of the Internet, where everyone participates in the storage and sharing of content. Peers connect and share content or data via protocols designed to ensure the actual location of content on the web is less important than the content itself. We've talked about this before, and it has huge implications for the future of the Internet.

As IPFS continues to gain in popularity, the likelihood that any given bit of content added to IPFS will stick around continues to increase. But, it’s important to keep in mind that nodes treat the data they store like a cache, meaning that there is no guarantee that the data will continue to be stored. For important data that we need or want to have around long-term, we can pin it on our local node, telling our IPFS peer that the data is important and shouldn’t be thrown away (aka garbage collected). For example, Textile hosts textile.io on IPFS, but since it’s pretty important for us that it is available at all times, we make sure it is pinned on our own IPFS server peers!
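If you're running a vanilla IPFS node, pinning is a one-liner. Here's a minimal sketch using the standard go-ipfs CLI (the content hash is a placeholder for whatever you want to keep around):

# pin some content so your node won't garbage collect it
ipfs pin add <content-hash>

# confirm it's in your recursive pin set
ipfs pin ls --type=recursive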

That's all well and good for things like static sites that don't change frequently, or data stored by an organization or group of developers, but what about my own personal files? Of course I can have an IPFS peer running on my local machine (I do, and I highly recommend IPFS Desktop for this), but what if I shut down my peer, go offline, or need to run an update? You could certainly use an existing payment-based service such as Pinata, Temporal, Eternum, or Constellation (these are awesome if you don't want to host anything yourself!)… but wouldn't it be nice if I could have my own personal pinning service? Something that would pin my files for me, remotely, with minimal fuss and maximum up-time? Sure it would, so let's set one up!

Textile Cafes

Textile provides encrypted, recoverable, schema-based, and cross-application data storage built on IPFS and libp2p. We like to think of it as a decentralized data wallet with built-in protocols for sharing and recovery, or more simply, an open and programmable iCloud. Just like iCloud has its own storage back-end, Textile uses IPFS to store data across a decentralized, open network of peers.

With the most recent release of Textile, we have several tools (learn more about our command-line tools on our wiki) that make setting up your own personal (or public) pinning service a breeze. To do this, you need to run the Textile daemon in Cafe Mode. This means that the node (which wraps an IPFS peer) will be open for you to connect with, relay messages to other connected ‘apps’, cache larger files (e.g. encrypted images), and even help with peer-to-peer discovery across the network in general. But most importantly, it means you can have your Cafe pin content for you!

Setup

The ‘manual’ (but still pretty easy) way

You can follow these steps to get the Textile command-line tool set up. It should only take a few minutes, depending on network speeds and what tools you already have installed. Once you have that in place, we can set up our Peer to run in Cafe mode. For now, you can run all of these steps on your local machine, but for a better, more permanent solution, you might want to set up a cloud-based machine (check out the easy way 👇 below). For the purposes of this demo, I'll assume you have deployed this on a 'remote' machine, and refer to it as such…
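If you're starting from a blank machine and taking the step-by-step route below, getting a Peer running first looks something like this (the same wallet/init pattern appears again later in this post):

# create a wallet and initialize a peer with its account seed
textile init -s $(textile wallet init | tail -n1)

# start the node and leave it running (e.g., under tmux or a systemd unit)
textile daemon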

With your remote Textile daemon running, we'll update the config file to enable Cafe mode. You can do this manually if you want (edit ~/.textile/repo/textile directly), but the easiest thing to do is to use the textile config command-line tool. So first, set your Cafe.Host.Open entry to true:

textile config Cafe.Host.Open true

Next, you’ll want to figure out the public IP address for your Cafe node. This will enable you to directly access your Peer when you submit your pin requests. It doesn’t really matter how you access this, but if you are running your Peer behind a router or gateway, you might have to setup port forwarding or other ‘tricks’ to make sure your Peer is accessible (if you’re only going to access it over a LAN, you can use its private IP address).
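A quick way to grab and export that address on the remote machine itself (ifconfig.me is just one of many such lookup services, so substitute your favourite):

# look up this machine's public IP and export it for the next command
export IP=$(curl -s https://ifconfig.me)
echo $IP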

textile config Cafe.Host.PublicIP \"$IP\"

You'll also want to change your default Cafe and Gateway addresses from localhost to the default route, enable server mode, and, if you'll be providing the Cafe service over HTTPS via a load balancer, specify a public HTTP URL (leave this out if that sounds like a lot of extra work!):

textile config Addresses.CafeAPI \"0.0.0.0:40601\"
textile config Addresses.Gateway \"0.0.0.0:5050\"
textile config IsServer true
textile config Cafe.Host.HttpURL \"https://mycafe.io\"

You can also do all of the above config setup in one go when you initialize the Peer, using something like the following one-liner (where IP is your exported public IP address and URL is your public URL, if you're so inclined):

textile init -s $(textile wallet init | tail -n1) \
  --server --cafe-open --swarm-ports=4001 \
  --cafe-http-url="$URL" \
  --cafe-public-ip="$IP" \
  --cafe-bind-addr=0.0.0.0:40601 \
  --gateway-bind-addr=0.0.0.0:5050

The ‘automatic’ (and definitely easy) way

Want to deploy a Textile Cafe on a publicly accessible cloud machine such as an Amazon EC2 instance? We've got you covered! Here's a script (you also need this companion installer) that will set up the whole thing for you in one go. All you need is your instance's SSH key, its public IP address, and the Textile release you want to deploy (e.g., v1.0.0), and you're good to go (the script assumes your username is ec2-user, so be prepared to edit as needed)! The only other thing to keep in mind is that your Cafe and Swarm ports need to be open on the remote instance. By default, these are 4001 for the swarm and 40601 for the Cafe:

./init.sh -k key.pem -r 1.0.0.0 -p xx.xxx.x.xxx -u http://xx.xxx.x.xxx:40601
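As for the open-ports requirement mentioned above, on EC2 that usually means adding inbound rules to your instance's security group. A rough sketch with the AWS CLI (the security group ID is a placeholder for your own):

aws ec2 authorize-security-group-ingress --group-id <security-group-id> --protocol tcp --port 4001 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id <security-group-id> --protocol tcp --port 40601 --cidr 0.0.0.0/0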

Quick Test

Regardless of the method used, once you have your Cafe set up, you can test that it is up and running using a tool such as curl:

curl "http://xx.xxx.xx.xxx:40601/health"

You should get a 204 response if all is well. Wanna update/upgrade your Cafe? We have you covered there as well! Here’s the script (and here’s the companion installer).
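If you'd rather script that health check than eyeball it, something along these lines works (same placeholder address as above):

# a healthy Cafe responds with 204 No Content
status=$(curl -s -o /dev/null -w "%{http_code}" "http://xx.xxx.xx.xxx:40601/health")
[ "$status" = "204" ] && echo "Cafe is up" || echo "unexpected status: $status"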

Registering

Now that we have our own personal (remote) Textile Cafe set up, it's time to register our local 'app' with it. For now, we'll just run a vanilla Textile Peer locally, and connect that way. If you don't already have a local peer set up, you can use the same steps as before, this time on your local machine. We'll refer to this vanilla Textile peer as the 'local' Peer, and the above Cafe Peer as the 'remote' Peer from now on.

So if your ‘remote’ Peer isn’t actually remote (i.e., you followed the manual steps above and just ran them on your local machine), then you’ll have to create a second separate Peer by specifying an alternative repository location and port for your Peer’s command API (you’ll also have to add the --api=http://127.0.0.1:41600 flag to the following commands if you’re using an alternative API port):

textile init -s $(textile wallet init | tail -n1) --api-bind-addr=http://127.0.0.1:41600 --swarm-ports=4101 --repo-dir=/full/path/to/repo

With your local Peer ready to go, simply register it with the remote Cafe Peer. You'll need the Cafe Peer's ID, which you can get via textile peer (run on your 'remote' machine). Now, register (locally) with:

textile cafes add <cafe-peer-id>

This will return a JSON response containing an access and refresh JSON Web Token (JWT), which you can use when communicating with your remote pinning service. Textile uses JWTs to ensure that you aren’t just pinning files willy-nilly for anyone who happens upon your Cafe. JSON Web Tokens are an open, industry standard (RFC 7519) method for representing access tokens that assert some number of claims securely between (two) parties.

You can list your registered Cafes via textile cafes ls, and even extract the specific access token component using a tool such as jq (which we introduced in a previous post):

textile cafes ls | jq '.[].access'
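And since these are plain JWTs, you can peek at the claims in the payload segment if you're curious. A rough sketch building on the jq call above (assumes the list layout shown there and a base64 tool that accepts --decode):

# grab the first Cafe's access token and decode its base64url payload segment
token=$(textile cafes ls | jq -r '.[0].access')
payload=$(echo "$token" | cut -d '.' -f 2 | tr '_-' '/+')

# base64url strings may need '=' padding before decoding
while [ $(( ${#payload} % 4 )) -ne 0 ]; do payload="${payload}="; done
echo "$payload" | base64 --decode | jq .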

You can register with multiple Cafes, thus ensuring your data is even more accessible and safe. All you need to do is repeat the above steps with a different remote Cafe. You can deploy your own, or use one of Textile’s Cafes. Now, with your access token in hand, you could ‘manually’ connect to your Cafe’s REST API and start adding/pinning files… but there’s an even easier way!

Adding & Pinning

Once a local peer is registered with a remote Cafe, any files that are added to a Thread will be automagically added and pinned to the remote Cafe. So actually, there's really no additional work required to have a custom remote pinning service set up. Start, Register, Create, Add, Done. And even better still, by adding data to your remote Cafe via Threads, you get the benefit of structured data, encryption tools, shareability, and even recoverability! Those are some nice features. Don't want to structure your Thread data? Just use a simple blob schema and add whatever you want.
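As a concrete example of that last bit, a catch-all backup Thread built on the basic blob schema would look something like this (the --blob flag and the Thread name are assumptions on my part, so check textile threads add --help on your version):

# a Thread that accepts arbitrary files via the built-in blob schema (flag assumed)
textile threads add backups --blob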

To make this magic happen, you need to create a new Thread with a (possibly custom) Schema. This is super easy, and we have a basic guide/demo for this already (plus we’ve mentioned it here and here as well). For the uninitiated, if you wanted to create a private Thread for storing your photos, you’d do something like this (uses our built-in media Schema):

textile threads add photo-thread --media

This will output some information about your new Thread, including its id. Then it’s just a matter of adding your photos one by one, or a whole directory at a time. You can even create photo albums (or albums of any type of data) by using the --group flag:

textile add photos/ --caption="moar pics" --thread=<id> --group

Under the hood, your added files/data will be pushed onto your Cafe's Store Request queue, which is essentially a queue of requests to… you guessed it, store and pin data. The Cafe will process this queue opportunistically, adding and pinning the data that your local Peer has sent. For heavy-traffic applications (e.g., a large album of photos), this queue can take some time to work through, but the Cafe will process the data and notify the local Peer once it's done.

Don’t want to add data files manually like that? Ok, how about a simple one-liner to watch a folder for new files and then automatically add them to a backup thread (requires fswatch and your thread’s id)?

fswatch --event Created --event MovedTo path/to/folder | xargs -I{} textile add {} -t <id>
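And if you want that watcher to survive your terminal session, you could wrap it in nohup (or a proper service); a minimal sketch:

nohup sh -c 'fswatch --event Created --event MovedTo path/to/folder | xargs -I{} textile add {} -t <id>' > watcher.log 2>&1 &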

How cool is that? You've basically just created your own custom Dropbox in less than 15 lines of command-line magic. And with Textile's backup and recovery tools coming down the pipeline, you'll be able to log in from any computer and sync your files from your remote Cafes! All using a completely decentralized/federated network of Peers.

Summary

So after all that, setting up a custom remote pinning service using Textile really amounts to a few simple steps:

  1. Setup a remote Peer running in Cafe mode
  2. Setup a local default Peer
  3. Register your local peer with your remote Cafe
  4. Create a Thread, and start adding data
  5. Relax… your data is now safe and secure on the dweb!

That’s it!

Thanks for following along, and please let us know what you think! You can reach out over Twitter or Slack, or pull us aside the next time you see us at a conference or event. We're happy to provide background on this demo/tutorial, and where we're headed next. In the meantime, don't forget to check out our GitHub repos for code and PRs that showcase our latest APIs. We try to make sure all our development happens out in the open, so you can see things as they develop.
