Amazon SQS

First I created a project for the Node.js worker. The first steps for this are identical to those for creating the Hapi.js site that publishes messages to the queue. Go through those three steps for the worker and then I'll continue from there.

…and now on to the security, configuration, and worker-specific parts of this series…

Security Needs

Before getting to the actual worker setup I need to have a role set up in IAM (Identity and Access Management).

Screen 1

Once here click on the Roles section of IAM. Then click on Create New Role.

Screen 2

Next set the role name.

Screen 3

Now select Amazon EC2. This wasn't immediately intuitive, but once I realized that the security item I was looking for is a sub-item under Amazon EC2, things made more sense.

Screen 4

The next odd thing that occurred in this web wizard was that step number 3 is skipped. Again, it took me a second to realize that it's probably an optional step. Whatever the case, a step shouldn't be displayed unless it can actually occur in all paths; otherwise just make it disappear. Anyway, step 4 is where the next action awaits.

Screen 5

Screen 6

In the next step I add the JSON that defines this role. It looks like this in the wizard (and I've included the actual JSON just below the image of the wizard). NOTE: In this screenshot I've named the role one thing, but when I select it below I've actually renamed it to "serverComms". These two are indeed the same role; I just didn't want to go back and redo all the screenshots around a minor rename. :)

Screen 7

  "Version": "2012-10-17",
  "Statement": [
      "Sid": "QueueAccess",
      "Action": [
      "Effect": "Allow",
      "Resource": "*"
      "Sid": "MetricsAccess",
      "Action": [
      "Effect": "Allow",
      "Resource": "*"

Click next and the summary is provided before final creation of the role.

Screen 8

Web Worker Application

The first thing I need to do is get the worker set up in the AWS Management Console. I create a new environment by clicking on Launch New Environment.

Screen 9

Next up is setting the environment tier, type, and configuration. I set these to Worker, Node.js, and Load Balanced.

Screen 10

Then upload the project zip file. I zipped and uploaded this file the same way I did the site that submits messages to the queue. The blog entry is a bit circular here: to see what code I'm uploading, check out the code toward the bottom of this entry, and the finished code here in the github repo.

Screen 11

Now click next through environment info and additional resources. In configuration details the main thing I need to do is select the IAM security role for the instance being created.

Screen 12

Click through the environment variables and on to Worker Details. Here I select the queue that I created in part 1 of this series. Just below that, enter the URL path the worker will expose so the queue can send it messages via POST. I'll get to the code later in this article, but for now I just entered /hi as the endpoint.

Screen 13
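The Worker Details settings encode a simple contract: the SQS daemon POSTs each message body to the configured path, and the message is deleted only when the worker answers with a 200. Here's a runnable sketch of that contract, with an in-memory queue and a stand-in handler replacing the real daemon and worker (all names here are just for illustration):

```javascript
// A sketch of the worker-tier contract: POST each message body to the
// configured path, and only drop messages that were handled with a 200.
function dispatch(queue, postToWorker) {
  var remaining = [];
  queue.forEach(function (msg) {
    var status = postToWorker('/hi', msg.body); // daemon POSTs the body
    if (status !== 200) {
      remaining.push(msg); // non-200: message stays in the queue for retry
    }
  });
  return remaining;
}

var queue = [
  { body: '{ "name": "April" }' },
  { body: 'not json' }
];

var remaining = dispatch(queue, function (path, body) {
  try {
    var payload = JSON.parse(body);
    console.log('worker handled', payload.name, 'at', path);
    return 200; // success: tells the daemon to delete the message
  } catch (e) {
    return 500; // handler failed: the daemon will retry this message
  }
});

console.log(remaining.length + ' message(s) left for retry');
```

The real daemon adds retries, visibility timeouts, and HTTP headers, but this is the shape of what it does against the /hi endpoint configured above.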

Finally, the last step is to review and Launch the worker instance.

Screen 14


At this point I'll still be using hapi.js and good.js, so I install these libraries the same way I did for the site app in part 2 of this series.

npm install hapi --save
npm install good --save
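After those installs, the package.json ends up looking roughly like this (the name, version, and dependency ranges here are assumptions; use whatever npm actually wrote for you):

```json
{
  "name": "worker",
  "version": "0.0.1",
  "main": "server.js",
  "dependencies": {
    "hapi": "7.x",
    "good": "3.x"
  }
}
```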

Now I've set up a server.js as shown below. This API endpoint performs an action, in this case a write to the log, and then simply finishes. This proves out a complete movement of a message from publisher site to queue to answering worker service.

var AWS = require('aws-sdk'),
  awsRegion = 'us-west-2',
  sqs = {},
  Hapi = require('hapi'),
  Good = require('good'),
  queueUri = '';

var server = new Hapi.Server(process.env.PORT || 3000);

server.route({
  method: 'POST',
  path: '/hi',
  handler: function (request, reply) {
    // Configure the AWS SDK from environment variables.
    AWS.config.update({
      accessKeyId: process.env.AWS_ACCESS_KEY_ID,
      secretAccessKey: process.env.AWS_SECRET_KEY,
      region: awsRegion
    });
    sqs = new AWS.SQS();

    // Log the name out of the message body the SQS daemon POSTed to us.
    server.log('response: ', request.payload.name);
    server.log('Starting receive message.', '...a 200 response should be received.');

    // Replying 200 tells SQS the message is complete.
    reply('success');
  }
});

server.pack.register(Good, function (err) {
  if (err) {
    throw err;
  }

  server.start(function () {
    server.log('info', 'Server running at: ' + server.info.uri);
  });
});

In this code, note that Hapi.js takes the request (read more on Hapi.js here) and sticks the body of the request in the payload property. Since AWS SQS sends across JSON in the way I've set it up (see part 1 and part 2), the received message coming in looks like this.

"name": "April"

The code above pulls the name, in this case April, out of the payload. Run this, and when SQS receives input to process it will immediately send the message to the worker, which will then process it. When the worker returns a 200, the message is marked complete and removed from the queue. When I navigate to the nodejs.log in the AWS Beanstalk logs section of the environment, I can see the last few items that I submitted to the queue for processing. The code above responds as shown below in the log.

141119/011034.709, response: , Susan
141119/011034.709, Starting receive message., ...a 200 response should be received.
141119/011034.688, request, http://ip-172-31-33-151:8081: post /hi {} 200 (26ms)
141119/011039.927, response: , April
141119/011039.928, Starting receive message., ...a 200 response should be received.
141119/011039.925, request, http://ip-172-31-33-151:8081: post /hi {} 200 (6ms)
141119/011045.232, response: , Jessica
141119/011045.232, Starting receive message., ...a 200 response should be received.
141119/011045.229, request, http://ip-172-31-33-151:8081: post /hi {} 200 (7ms)

BOOM! All done. A few notes before I end this entry though. With the worker feature of Beanstalk and SQS, there really isn't much code needed on the receiving end of the worker. I merely needed to respond with a 200 to complete the request from the point of view of the SQS worker service; then whatever code I want to act on the message can work with the data received in the body from the queue. More than a few examples out there don't show this, but instead show the manual way of writing code that polls and acts upon the messages in the queue. The Beanstalk worker configuration is dramatically simpler in comparison to that practice. If you do want to read more about manually polling and acting on the data, check out "Using SQS With Node"; it's the only end-to-end example I've seen with Node.js being used. There is also of course the documentation, but it doesn't provide clear-cut examples of good practice around working with the queue and requires a lot of RTFMing, which quite frankly is a TL;DR scenario for doing something like this.
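For comparison, here's a sketch of what that manual approach looks like: long-poll the queue, handle each message, and delete it only after handling succeeds. The sqs object here is an in-memory stub with the same receiveMessage/deleteMessage shape as the aws-sdk SQS client, so the sketch runs without credentials; swap in new AWS.SQS() and a real queue URL for actual use (the URL below is made up).

```javascript
// Manual polling: receive one message, handle it, then delete it.
// Deleting only after handling mirrors the 200-response contract the
// Beanstalk worker tier gives you for free.
function pollOnce(sqs, queueUrl, handle, done) {
  sqs.receiveMessage({
    QueueUrl: queueUrl,
    MaxNumberOfMessages: 1,
    WaitTimeSeconds: 20 // long poll
  }, function (err, data) {
    if (err) { return done(err); }
    var messages = data.Messages || [];
    if (messages.length === 0) { return done(null, 0); }
    var msg = messages[0];
    handle(JSON.parse(msg.Body));
    sqs.deleteMessage({
      QueueUrl: queueUrl,
      ReceiptHandle: msg.ReceiptHandle
    }, function (err) { done(err, 1); });
  });
}

// In-memory stand-in for the SQS client so this runs locally.
var stub = {
  queue: [{ Body: '{ "name": "April" }', ReceiptHandle: 'rh-1' }],
  receiveMessage: function (params, cb) {
    cb(null, { Messages: this.queue.slice(0, 1) });
  },
  deleteMessage: function (params, cb) {
    this.queue = this.queue.filter(function (m) {
      return m.ReceiptHandle !== params.ReceiptHandle;
    });
    cb(null);
  }
};

pollOnce(stub, 'https://sqs.us-west-2.amazonaws.com/123456789012/myQueue',
  function (payload) {
    console.log('handled:', payload.name);
  },
  function (err, count) {
    console.log('processed', count, 'message(s); remaining:', stub.queue.length);
  });
```

Even this simplified version has to manage receipt handles, deletes, and the polling loop itself, which is exactly the plumbing the worker tier replaces.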

Hope this blog post is helpful in getting Node.js working with the worker role. If you have any questions, comments or it appears I’ve missed a step, let me know and I’ll edit this and the related posts to make sure they’re as accurate and as simple to follow as I can get them.