Tutorial: Setting up an IPFS peer, part I

community Jun 17, 2018

How to quickly (and inexpensively) spin up a cloud-based IPFS peer and start connecting to the distributed web

This post is the first in a series of step-by-step guides on “getting started with the Interplanetary File System (IPFS)”. In this post, we’re going to cover launching an Amazon AWS EC2 instance, installing IPFS and related technologies, initializing and running our IPFS peer node, and connecting to the distributed web. We’re going to assume you haven’t used your free year of AWS access yet. If you have, or use AWS regularly, then you probably don’t need the first part of this tutorial and can skip down to the part about getting IPFS set up. Otherwise, let’s create an account and spin up a (free) instance to get us started.

Getting started with EC2

Before we get going, you might be wondering “should I really be using a centralized cloud-server provider to spin up a decentralized IPFS peer?” The short answer is: “if it helps bootstrap the IPFS ecosystem, I say let’s do it!” The longer answer might be: well, the IPFS community is all about making the transition to a safer, faster, decentralized Internet as seamless as possible. This means you sometimes have to work within the incumbent system before you can completely ‘break free’. Ok, now back to your regular programming…

First, pop on over to aws.amazon.com/free/ and Create a Free Account. When you hit Continue, it’ll take you to a page to fill in your contact information. I set up a Personal account. On the next page, you will need to enter your credit card information. We won’t be setting up anything beyond the free-tier services, but… be warned, you are giving up your credit card information and could be charged for something if you aren’t careful. If you start pinning popular files, you will get dinged for data transfer.

Update: After about 1 month of uptime on AWS, a personal peer node running 24/7 cost me about $8 in AWS fees…

Finally, you’ll need to do a phone verification. Just enter the code on the screen after answering the call, and Amazon will take care of the rest. Now you can select the Basic (free) plan, and you’ll be directed to a page where you can complete the signup and Sign In to the Console. Once logged in, you can go to your username drop-down (upper right of the page), select My Account > Preferences (on the right) and enable Receive Free Tier Usage Alerts and Receive Billing Alerts, just to be on the safe side.

It’s always a good idea to monitor your usage.

Next, we’ll create our basic free-tier AWS EC2 instance. Click on Services > Compute > EC2. You’ll end up on a page listing the Amazon EC2 resources you are currently using. Right now, it’s probably just 1 Security Group. So let’s change that, and add an EC2 instance by clicking on Launch Instance.

Now you have some choices to make. AWS asks you to pick an Amazon Machine Image (AMI). These are basically virtual machine templates that come pre-installed with a (Linux) operating system. I’m going to go with the Ubuntu Server for this demo. By default, this will bring you to the Instance Type page, where you can just stick with the default t2.micro. Go ahead and click Next to Configure Instance Details. We’ll skip this first page (use defaults), and click on Next: Add Storage. The free tier gives us up to 30 GB of EBS storage, so we might as well use it.

Next, skip over Add Tags, and go straight to Configure Security Group. By default, SSH connections on port 22 are enabled from any source. This is fine for now, though restricting access to known IP addresses is a good idea. We’ll add some rules here (and maybe change the group name).

Some Security Group rules to add
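The original screenshot showed the rules to add; as a rough sketch (assuming IPFS’s default ports, which you should double-check against your own config), the inbound rules look something like:

```
Type         Protocol  Port   Source       Purpose
SSH          TCP       22     My IP        Shell access (restrict if you can)
Custom TCP   TCP       4001   0.0.0.0/0    IPFS swarm (peer connections)
Custom TCP   TCP       8080   0.0.0.0/0    IPFS gateway (only if you enable it later)
```

Note that the IPFS API port (5001 by default) should not be opened to the public; it allows full control of your node.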

Now click Review and Launch and then Launch. This will bring up the dialog to create a new key pair. Select Create a new key pair, name it something useful like ipfs, and download the pem file (your browser might automatically append .txt to the downloaded file name, so you’ll want to remove this before subsequent steps). Then click Launch Instances. If you didn’t set up billing alerts previously, you can do that now by clicking on the links.
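As noted above, some browsers tack .txt onto the downloaded key file. Assuming the file landed in your current directory under the name ipfs.pem.txt, a quick defensive rename takes care of it:

```shell
# Some browsers save the key as ipfs.pem.txt; rename it if so
if [ -f ipfs.pem.txt ]; then
  mv ipfs.pem.txt ipfs.pem
fi
```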

While you wait for your instance to launch, you can View Instances, or read up on how to connect to your Linux instance. For the rest of this session, I’m going to assume you have access to a command-line shell (Terminal). If not, the above link has some useful resources for connecting with other types of ssh clients. Unless otherwise stated, you’ll be entering (copy-paste) the following commands into your Terminal.

Connecting to your instance

First, change directories (cd) to the location of the private key file that you created when you launched the instance (mine was in Downloads). Next, use the chmod command to make sure that your private key file isn’t publicly view-able.

cd ~/Downloads
chmod 400 ipfs.pem

Use the ssh command to connect to your instance. You need to specify the private key (.pem) file and use the following connection address: user_name@public_dns_name. For example, if you used an Ubuntu AMI, the user name is ubuntu. You can find the Public DNS (IPv4) name on the Description tab of your instance Dashboard.

You can find your Public DNS on the Description tab.
ssh -i ipfs.pem [email protected]

Et voilà, you have connected securely to your running AWS EC2 instance. Now let’s IPFS-ify it!

Put some IPFS in it

The following IPFS setup is based largely on the excellent post from my colleague Sander Pick.

So first things first: let’s download the go-ipfs binary (wget), unpack it (tar xvfz), remove the downloaded archive (rm), move it (sudo mv) to a directory where our executable programs are located, and then remove (rm -rf) the unpacked folder (since we no longer need it):

wget https://dist.ipfs.io/go-ipfs/v0.4.15/go-ipfs_v0.4.15_linux-amd64.tar.gz
tar xvfz go-ipfs_v0.4.15_linux-amd64.tar.gz
rm go-ipfs_v0.4.15_linux-amd64.tar.gz 
sudo mv go-ipfs/ipfs /usr/local/bin
rm -rf go-ipfs

The next step is to set up (initialize) our IPFS repo. We do this by first specifying where we want to store our repo (in this case, in /data/ipfs), and then initializing the whole thing. First, we’ll add our repo path to our .bash_profile script and source that (run the code contained in the file). Make sure you actually make the required directory (mkdir), and take ownership of it (chown) before trying to initialize the server (init):

echo 'export IPFS_PATH=/data/ipfs' >>~/.bash_profile
source ~/.bash_profile
sudo mkdir -p $IPFS_PATH
sudo chown ubuntu:ubuntu $IPFS_PATH
ipfs init -p server

Ok, we’re almost there. Now let’s configure our repo a bit. We’ll increase our storage capacity (Datastore.StorageMax), and if you want, enable the public gateway. But, you might want to skip this one if allowing people to access files through your gateway makes you nervous:

ipfs config Datastore.StorageMax 20GB
# uncomment if you want direct access to the instance's gateway
# ipfs config Addresses.Gateway /ip4/0.0.0.0/tcp/8080

Keeping things going

Cool. We’re now ready to start our IPFS daemon. But, since this is a cloud compute node, we don’t want to have to start and stop things ‘manually’. If our instance gets restarted, we’d like IPFS to start automatically. So let’s set that up using systemd. First, we’ll set up an ipfs.service:

sudo bash -c 'cat >/lib/systemd/system/ipfs.service <<EOL
[Unit]
Description=ipfs daemon
[Service]
ExecStart=/usr/local/bin/ipfs daemon --enable-gc
Restart=always
User=ubuntu
Group=ubuntu
Environment="IPFS_PATH=/data/ipfs"
[Install]
WantedBy=multi-user.target
EOL'

And then enable our new service,

sudo systemctl daemon-reload
sudo systemctl enable ipfs.service

And start it, and check it out (a sanity check if you will):

sudo systemctl start ipfs
sudo systemctl status ipfs

You should see something like the following, along with the output from the daemon.

● ipfs.service - ipfs daemon
   Loaded: loaded (/lib/systemd/system/ipfs.service; enabled; vendor preset: enabled)
   Active: active (running) since Fri xxxx-xx-xx xx:xx:xx UTC; xx ago
 Main PID: xxxx (ipfs)
    Tasks: 8
   Memory: 8.5M
      CPU: xxxxxx
   CGroup: /system.slice/ipfs.service
           └─1738 /usr/local/bin/ipfs daemon --enable-gc

You can also see that your fellow IPFS peers are connecting and routing through your newly configured node. If you run ipfs swarm peers, you should (eventually) see a nice long list of connected peers.

We are officially connected to the decentralized web.

Ok, so we’ve now created a new EC2 instance, installed IPFS on it, and started running our IPFS daemon. If you opened up your gateway, you should now be able to browse to its address in a browser, and ask for the docs directory: http://ec2-xx-xxx-xxx-xx.us-xxxx-2.compute.amazonaws.com:8080/ipfs/QmS4ustL54uo8FzR9455qaxZwuMiUhyvMcX9Ba8nUH4uVv, or anything else accessible on the decentralized web.

What’s next?

There are lots of additional config options you can try, and port/security configurations you could use to protect your instance. You might also want to enable secure websockets on your instance (we’ll cover this in detail next week), or some of the other IPFS experimental features. In particular, stay tuned for next week’s post, where we’ll cover setting up a reverse proxy with NGINX, enabling a secure gateway, peer-to-peer (p2p) connections with browsers, and so much more!

Until then, why not check out some of our other stories, or sign up for our Textile Photos waitlist to see what we’re building with IPFS, or even drop us a line and tell us what cool distributed web projects you’re working on — we’d love to hear about it!
