Setting up a Node web server on an Amazon EC2 instance

Up until recently, I’d been hosting Leaf on Heroku. I had a pretty good experience with Heroku and would recommend it wholeheartedly to anyone, especially if you’re just getting into the Node space and want to play around a little: it has a free sandbox plan, and it’s really easy to get started. That said, once you get a little more serious, Heroku’s offering costs almost twice as much as Amazon’s. Prices today sit at $34.50 for two “dynos”, which are roughly the equivalent of two EC2 instances, while EC2 instances themselves can be had for as little as $9 a month ($18 for two). On top of the price difference, EC2 instances are almost 100% configurable, so I can poke around and do whatever I please.

From here on out, I’ll assume the reader is familiar with Amazon Web Services and has already spun up an EC2 instance with SSH access. I’m using an Amazon Linux instance, but any of the *nix-based instances will do the trick.


Here’s the main list of things that we need to do to get a Node web server up and running. I’m going to be using Leaf as the example here because that’s what I did for my own EC2 instance. Leaf is hosted on GitHub, so we’ll need Git installed as well to pull down the source.

1) Install Git and pull down the code.
2) Install Node and NPM.
3) Build your application.
4) Redirect traffic to/from the appropriate ports.
5) Set up your web server to run until you say otherwise.
6) Celebrate with champagne.


Install Git and clone the repo

Unsurprisingly, the first step is to install Git.

sudo yum install git

Also unsurprisingly, the second step is to clone your Git repo.

git clone
cd Leaf

And done. So far so good.


Install Node and NPM

This step is a little trickier, but still a piece of cake. Node and NPM aren’t available in the default repositories that yum knows about on a standard Amazon Linux instance, so we need to add a new repository. The repository we’ll be using is the Fedora Extra Packages for Enterprise Linux (EPEL) repository.

sudo rpm --import
sudo rpm -Uvh

Once that’s done, we can go ahead and install Node and NPM almost as usual. We just need to specify the newly added repository.

sudo yum install nodejs npm --enablerepo=epel

Build the application

In my case, there’s no building to do; it’s just a quick install of saved packages.

npm install

This step isn’t required, but now would be a good time to make sure everything looks OK by running your copious unit tests.

npm test

Redirect traffic

Normally, Leaf runs on port 80, but Amazon discourages users from handling traffic on ports below 1024, so we’ll redirect traffic coming into port 80 to another port that your application listens on. In Leaf’s case, that port is 8080, but you can use practically any port you like.

sudo iptables -A PREROUTING -t nat -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 8080

Run your application

Typically, running a Node application is just a matter of throwing a “node” in front of the JavaScript file you’d like to execute. That still works on an EC2 instance, but as soon as you quit your SSH session, the Node process will exit. To get around this, the folks over at Nodejitsu have built a Node utility called Forever which solves exactly this problem. First, install Forever.

npm install forever --save

Once installed, tell Forever to run your script like so, assuming it’s called “server.js”.

./node_modules/forever/bin/forever \
start \
-a \
-l forever.log \
-o out.log \
-e err.log \
server.js

If you ever need to stop your script, you can do so this way:

./node_modules/forever/bin/forever stop server.js


And that’s it. Pretty straightforward stuff. You can check that your application is reachable by navigating to the public DNS name (or public IP) assigned to your Amazon EC2 instance. Piece of cake!

Thanks for reading!


5 thoughts on “Setting up a Node web server on an Amazon EC2 instance”

  1. Thanks for taking the time to write this up! There are a few changes I would recommend:
    1. It’s not just Amazon that discourages using privileged ports. To bind to port 80, your process has to run as root. If someone finds a way to make your process do something you didn’t intend (a remote code execution vulnerability), that’s a problem: if the process has root privileges, the attacker can now do anything they want on your system. If it runs as an unprivileged user, it’s still bad, but much less bad.
    2. The iptables forward is a fine way to forward traffic, but a better way would be to use an Amazon Elastic Load Balancer. They’re included in the Free Tier and you can expose port 80 to the internet, but route traffic to port n (8080, for example) on your instance(s). When you add a second instance for fault tolerance or scaling, the ELB will distribute traffic between the two. If one of them goes down, it will stop sending traffic to that instance. Also, you can have the ELB handle all your SSL for you if you like. Very good in node-land considering node’s shocking TLS performance!
    3. Forever is a handy tool in development, but in production, what happens when this server gets rebooted? Things crash, Amazon needs to migrate instances, etc etc. Amazon’s uptime is pretty good, but at some point your server will go down. With this setup, your process is offline until you ssh in and run your commands. Not good!
    What you want is something that will automatically start your service when the system boots up. I’m an Ubuntu guy, so I’d use upstart. On the amazon flavour of CentOS you’re using, you need an init script in /etc/init.d. They’re gross, but they’re what you want. There’s an example here that will need some updating, but will get you started:
    The best bit about getting that sorted, though, isn’t actually crash tolerance at all! Now that your machine knows how to set itself up as soon as it boots, you can take a snapshot of the disk and clone it at will. Amazon makes this really simple with AMIs. Create an AMI from this machine. When you need more capacity (or when something goes wrong) you can launch a new instance from this AMI and POP! you’ll have a machine already running your server in seconds. Just associate it with your load balancer and it’ll be serving traffic within minutes!

    • Thanks a bunch for your suggestions; they’re all great! I’m particularly tickled by your second suggestion in which you reference the Amazon Elastic Load Balancer. It’s my goal to eventually have more than one node running my app, so that presumably means I’ll have to become intimately familiar with it. I also like the idea of being able to spin up app instances super quickly without any intervention on my part. Thanks again!
