Use Slack's Incoming Webhooks from your Rails app

Incoming Webhooks are the simplest way to post messages from your application into your users' Slack channels. They use plain HTTP requests with JSON payloads to post text messages and rich content alike.
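Because it's just HTTP, you can exercise a webhook from the command line before writing any application code. A quick sketch; the webhook URL in the comment is a made-up placeholder you'd replace with your own:

```shell
# Build the JSON payload Slack expects.
payload='{"text": "Hello, Slack!"}'
echo "$payload"

# Posting it is a single HTTP request (uncomment and use a real URL):
# curl -X POST -H 'Content-Type: application/json' \
#   --data "$payload" \
#   https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXX
```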

If you're building a Slack app with Rails, you probably want to use incoming webhooks to send custom notifications from your app. To do this, we'll authenticate your app with your user's Slack team and extract the incoming webhook URL from the API response.

Embed the "Add to Slack" button

If you haven't already registered your app with Slack, go to the Your Apps page and click "Create New App". Give your app a name and click "Create App".

Create an App

After you've created your app, head over to the Slack Button documentation page and scroll down to the "Add the Slack button" section. There you'll find a form where you can customize the code for embedding your Slack button. Be sure to select your app name from the list. Also be sure the "incoming webhook" option is selected.

Add the Slack Button

Paste the resulting code into the view where you want your user to authenticate their Slack team with your application. You'll most likely want this to occur after the user has already authenticated themselves with your app so they'll be able to log back in and change their preferences.

Create a callback endpoint

When your users click the "Add to Slack" button, they'll be taken to a Slack-hosted page where they'll verify that they want to give you the ability to post to Slack on their behalf. After they confirm, Slack will redirect to an OAuth Redirect URL. This URL will receive a special code from Slack that will grant your app access to Slack's API features, including incoming webhooks.

Before we build the endpoint, add the Slack API gem to your Gemfile. I came across two popular gems at the time of this writing. The one we'll use is the slack-api gem:

# Gemfile
gem 'slack-api'

Run bundle install to download the gem and load it into your app.

Next, define a route in your routes.rb file for our new endpoint:

# config/routes.rb
Rails.application.routes.draw do
  # ...
  get '/auth/callback', to: 'slack#callback'
end

Then, create a corresponding controller in app/controllers:

# app/controllers/slack_controller.rb
class SlackController < ApplicationController
  # If you're using Devise to authenticate your
  # users, you'll want to first ensure the user
  # is signed in.
  before_action :authenticate_user!

  def callback
    client = Slack::Client.new
    response = client.oauth_access(
      client_id: ENV['SLACK_CLIENT_ID'],
      client_secret: ENV['SLACK_CLIENT_SECRET'],
      code: params[:code],
      redirect_uri: "http://localhost:3000/auth/callback"
    )

    if current_user.update_attributes(
      slack_access_token: response['access_token'],
      slack_incoming_webhook_url: response['incoming_webhook']['url']
    )
      redirect_to root_path
    else
      render text: "Oops! There was a problem."
    end
  end
end

First, we create a before_action which authenticates the user before entering the controller action. You'll likely want to know who clicked the "Add to Slack" button so you can save their Slack credentials for later use or removal.

Then, in the action, we create a new Slack::Client object and call the Slack API method oauth.access, which exchanges the temporary code for the Slack access token, the incoming webhook URL, and other metadata associated with the Slack account we just authorized.

You'll want to set the client_id and client_secret values to match the credentials in your Slack app's configuration.

Slack App Credentials

Since we defined the route to our callback as /auth/callback in our routes file, you should use http://localhost:3000/auth/callback (or a different port if you're running Rails elsewhere) as the redirect_uri value. Note that you'll want to make this configurable when you deploy this to production.
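One lightweight way to make it configurable is an environment variable with a local fallback; the SLACK_REDIRECT_URI name here is my own, not something Slack requires:

```ruby
# Read the redirect URI from the environment, falling back to the
# local development URL. SLACK_REDIRECT_URI is a hypothetical name.
def slack_redirect_uri
  ENV.fetch('SLACK_REDIRECT_URI', 'http://localhost:3000/auth/callback')
end
```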

You'll also want to add http://localhost:3000/auth/callback to the redirect URL field in your Slack app config panel:

Slack OAuth Settings

After we call oauth_access, we then update our current_user record's slack_access_token and slack_incoming_webhook_url attributes with the values in the API response. You might want to store them differently in your app, so I've added this purely for illustration. But you'll want to store them somewhere so you're able to access them when we post messages using the incoming webhook.
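If you're storing these on the User model as I am here, you'll need columns for them. A sketch of the migration (the column names match the controller code above; the timestamp in the filename is illustrative):

```ruby
# db/migrate/20160101000000_add_slack_fields_to_users.rb
class AddSlackFieldsToUsers < ActiveRecord::Migration
  def change
    add_column :users, :slack_access_token, :string
    add_column :users, :slack_incoming_webhook_url, :string
  end
end
```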

Send a message using the webhook

We've successfully authorized our Rails app to use the Slack API on behalf of our user. Now let's post a message using the incoming webhooks API!

For demonstration, let's build an endpoint at /post_message which posts the message "Hello, Slack!" into the user's Slack when we visit it.

First, add a route declaration:

# config/routes.rb
Rails.application.routes.draw do
  # ...
  get '/auth/callback', to: 'slack#callback'
  get '/post_message', to: 'slack#post_message'
end

We're going to use the Faraday gem as our HTTP client. Any HTTP client gem will do, since the incoming webhook is just a plain HTTP request. Add it to your Gemfile:

# Gemfile
# ...
gem 'faraday'

And add a new controller action to SlackController:

class SlackController < ApplicationController
  # ...

  def post_message
    conn = Faraday.new(url: current_user.slack_incoming_webhook_url)

    conn.post do |req|
      req.headers['Content-Type'] = 'application/json'
      req.body = { text: "Hello, Slack!" }.to_json
    end

    render text: "Posted to Slack!"
  end
end

First we create a new Faraday connection with the URL we captured in our callback action. Then, we post to the endpoint using a JSON request body. The payload of the request is formatted according to the specification in the Slack Incoming Webhooks documentation. Finally, we render some text to let the user know we posted to Slack.
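Incoming webhook payloads can carry more than text. Here's a sketch of a richer payload using a couple of the optional fields incoming webhooks support; the values are illustrative:

```ruby
require 'json'

# Optional fields let you override the posting name and icon
# per message.
payload = {
  text: "Hello, Slack!",
  username: "my-rails-app",
  icon_emoji: ":robot_face:"
}.to_json
```

You'd pass this string as the request body in place of the simple one above.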

We ought to do more error handling in the event Slack doesn't respond, but I'll leave that as an exercise for the reader.

Assuming everything is wired up, when you point your browser at http://localhost:3000/post_message, you'll find a new message waiting for you in Slack!

I had a tough time sifting through the Slack documentation to find a decent Rails walkthrough, so I hope this guide answers some of your questions.

Send visitor HTML form data to Slack with Formbot

Formbot sends visitor HTML form data to Slack

You're using a static site generator like Middleman or Jekyll. These tools are fantastic for building blogs and marketing sites. But every so often you need to collect some data from your visitors in a form.

There are plenty of form tools on the web (Wufoo comes to mind). But most of them are bloated and made for less technically minded people. All you want is to embed a form in your site and be notified when your visitors fill it out without having to set up a server application.

Almost every time I've built a marketing site for a new product I run into this situation. So this week, I built a little tool called Formbot that's here to help!

Formbot sends the contents of your HTML form fields to any of your Slack channels. Create a custom HTML form with any number of fields, set its action attribute to your Formbot URL, and it'll do the rest.
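A form wired up to Formbot looks like any other HTML form; the action URL below is a placeholder for the one Formbot gives you, and the field names are up to you:

```html
<!-- The action URL is a placeholder, not a real Formbot endpoint. -->
<form action="https://formbot.example.com/forms/your-form-id" method="post">
  <input type="text" name="name" placeholder="Your name">
  <input type="email" name="email" placeholder="Your email">
  <button type="submit">Send</button>
</form>
```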

Want to add a Slack-enabled form to your site? Install Formbot

Why I stopped billing hourly and you should too

If you're like most freelance developers, you bill by the hour. I want to show you why this isn't ideal, and suggest an alternative billing structure to simplify your relationships with your clients.

Imagine you have a client that wants to build a new application. The specification is vague enough that you know you can't offer a waterfall-style fixed-bid estimate. The project might rely on a third party, or might use technologies with which you're not particularly familiar.

In this case, you'd typically bill your client by the hour. This insulates you from risk because you know you'll be paid regardless of the value you deliver. And your client is happy because they know they're only paying you for the time you spend on their project.

But there are a few less-than-ideal things that happen in an hourly billing scenario:

  • Your client questions items on their invoice.
    Your invoice might say it took you 2 hours to "Refactor the XYZ module", but to your client, that doesn't translate into value for their business. Now you have to justify how and why you spent that time, because your client perceives these line items as unnecessary expenses instead of as part of the path toward producing value.

  • You cannot bill for time away from your desk.
    Raise your hand if you stop thinking about your work the moment you leave your desk! I'm sure your hand isn't raised. Mine sure isn't. We programmers spend most of our time thinking in one way or another about how we can improve our chops or solve our clients' problems. This is real time that goes unaccounted for in our billing when we bill by the hour.

  • You don't really bill accurately anyway.
    How many minutes in a given billable hour do you work? How many seconds? Are there moments where you're distracted? The truth is, no one can stay 100% on-task for a duration of time. Creative work especially is conducted in a manner that is sporadic and inconsistent. Billing hourly ignores this.

Is the answer to conduct a comprehensive estimate and then engage your client on a fixed-bid project basis? If your sort of work has predictable timelines and you're comfortable with the possibility of being underpaid, then a fixed-bid engagement might work. But for the rest of us building applications with vague timelines and requirements, fixed bid pricing is too risky.

Re-examining the problems with hourly billing above, there's a common cause among all of them: No one can deliver much value in one hour. So why do we use an hour as the default unit of billable value?

You feel undercompensated for all those minutes of work you inevitably spend away from your desk. Your client feels nickel-and-dimed for tasks that don't appear to contribute value to their business.

Wouldn't it be simpler to not have to think in terms of how many minutes or hours you spent working, and instead focus your attention on doing the work?

We've discussed how fixed-bid billing won't insulate us from risk. Instead of engaging on a fixed-bid basis, let's visit the hour's longer cousins: the day, the week, and the month.

Billing by the day results in the same sort of micromanaging relationship: If you spend an entire day doing a task which doesn't appear to have provided any real business value but does pave the way for the following day's work, it's difficult to effectively justify that cost to your client.

Billing monthly has the opposite problem: When your client receives the invoice, they're less likely to understand the value delivered relative to the fee they've paid. After a whole month, it's difficult to communicate effectively what was done and how it benefited them.

Weekly billing, though... weekly billing is gold:

  • You can invoice for value.
    In one week, you can deliver tangible value that you can describe in a sentence on your invoice ("Delivered Feature X"). Your client will love this, since the value you produce is what they care about anyway.

  • Your deliverables are clear.
    Each week, you can discuss with your client the deliverable you want to make the following week. This puts them in control and gives them a sense of what your fee is buying them.

  • It makes planning simple.
    Because your fee is fixed per week, it makes financial planning for both parties simple. Your client won't be surprised by your bill, and you won't be surprised by their expectations.

Testing ES6 React components with Enzyme's shallow rendering

I ran into a strange issue today when writing some assertions using the Enzyme testing library for React.

Whenever I create a new component, I like to use ES6 class notation and export the class anonymously like this:

// MyChildComponent.js
import React from 'react';

export default class extends React.Component {
  render() {
    return (<div>MyChildComponent</div>)
  }
}

Then, I'll render it in another component like this:

// MyParentComponent.js
import React from 'react';
import MyChildComponent from './MyChildComponent';

export default class extends React.Component {
  render() {
    return (
      <div>
        <MyChildComponent />
      </div>
    )
  }
}

When testing for the presence of MyChildComponent within MyParentComponent in Enzyme, I'll typically produce a test that looks like this:

import { shallow } from 'enzyme';
import { expect } from 'chai';

import MyParentComponent from './MyParentComponent';

describe("<MyParentComponent />", () => {

  const wrapper = shallow(<MyParentComponent />);

  it("renders a MyChildComponent", () => {
    expect(wrapper.find('MyChildComponent')).to.have.length(1);
  });

});

But this fails! It's as if MyChildComponent isn't being rendered at all.

If I dump wrapper.debug() (doc) to the console, I get this output in place of MyChildComponent:

<div>
  <_class />
</div>

It's as if Enzyme doesn't know the component is called MyChildComponent!

Solutions

There are two ways to solve this.

Import the component itself and assert on it instead

Below, we import MyChildComponent and then, in the assertion, use the class constant instead of the string literal "MyChildComponent":

import { shallow } from 'enzyme';
import { expect } from 'chai';

import MyParentComponent from './MyParentComponent';
import MyChildComponent from './MyChildComponent';

describe("<MyParentComponent />", () => {

  const wrapper = shallow(<MyParentComponent />);

  it("renders a MyChildComponent", () => {
    expect(wrapper.find(MyChildComponent)).to.have.length(1);
  });

});

Export the named class from within the child component

As much as we should strive to write code that doesn't repeat itself, this was the solution I ultimately chose. It turns out React is able to determine the class name so long as you define it in the class statement. Modifying MyChildComponent.js to produce a named class and then exporting it allows Enzyme to find it in the string literal assertion:

// MyChildComponent.js
import React from 'react';

class MyChildComponent extends React.Component {
  render() {
    return (<div>MyChildComponent</div>)
  }
}

export default MyChildComponent;

If you can't seem to get an Enzyme assertion to find a component you know is there, make sure Enzyme knows what sort of component it is!

How to set up a test runner for modern JavaScript using Webpack, Mocha, and Chai

We've all been there: You're about to build another front-end feature. You know you want to start unit testing your JavaScript. You know that because React employs one-way data binding, writing tests is easier than in the Backbone MVC days of yore. But the setup... oh my, the setup. It's painful. There are so many tools, so much boilerplate. So you say to yourself, we'll do it next sprint.

But then the regressions start mounting. Your team is frustrated when QA sends back your work and tells you the new thing works, but that you broke 2 old things. And so now you're back to the grind, trying to ship a working build before the end of the week.

We've all been there, but let's put our procrastination to rest once and for all. The truth is, JavaScript testing is more awesome than ever. It might not be as distilled as say, Rails testing. But after reading this guide, you'll be able to go back to your team and proudly say this is the week you start testing your JavaScript.

If you've already read the guide, or just want to play around with some real, working code, I've prepared an example app here: Webpack+Mocha+Chai Example

Tools

Right now, the landscape of tools for testing JavaScript is large. In this guide, we're going to focus on what I've found to be the most productive combination:

  • Mocha to run our tests.
  • Chai to make assertions.
  • Webpack to glue everything together.

Install Packages

I'll assume you're already familiar with npm, have created a package.json file, and are using it in your project. If not, here's a tutorial to get you started. The npm command installs packages you want to use in your application and provides an interface for working with them. We're going to install the packages that will support our tests. Because these packages are for our development use only, we use the --save-dev option when running npm:

npm install --save-dev webpack mocha chai mocha-webpack babel-loader babel-core

Create a Webpack Configuration

Webpack is a module bundler for the web. You might have used Browserify or CommonJS in the past to modularize your JavaScript. Webpack takes this paradigm a step further and lets you produce a dependency for just about any type of file. A full explanation of the tool is outside the scope of this tutorial, but Ryan Christiani has a great Introduction to Webpack tutorial to get you started.

For now, create a file webpack.config.js and fill it with the following:

var webpack = require('webpack');

module.exports = {
    module: {
        loaders: [
            {
                test: /\.js$/,
                exclude: /node_modules/,
                loaders: ['babel']
            }
        ]
    },
    entry: 'index.js',
    resolve: {
        root: [ __dirname, __dirname + '/lib' ],
        extensions: [ '', '.js' ]
    },
    output: {
        path: __dirname + '/output',
        filename: 'app.bundle.js'
    }
};

Configure Babel

Babel is a JavaScript compiler that allows us to use next generation JavaScript (ES6, ES7, etc) in browsers that only support ES5. As you'll see when we begin writing tests, having ES6 import statements and fat arrow function notation (() => { }) will make our tests more readable and require less typing.

You'll notice, in the loaders section above, we're using the babel loader to process our JavaScript. This will allow us to write our application and test code in ES6. However, Babel requires that we configure it with presets, which will tell Babel how it should process our input code.

For our example, we need just one preset: es2015. This tells Babel we want to use the ECMAScript 2015 standard so we can use things like the import and export statements, class declarations, and fat arrow (() => {}) function syntax.

To use the preset, we'll first install its package using npm:

npm install --save-dev babel-preset-es2015

Then, we'll tell Babel to use it by creating a .babelrc file:

{
    "presets": [
        "es2015"
    ]
}

Create the entry file and test Webpack configuration

Our Webpack configuration states that our entry file, the JavaScript module Webpack will run when our bundle is included in the page, is index.js. So let's create that file now. For now, let's just alert "Hello, World!". We're not going to run this code anyway, since we're really just using this entry file to be sure Webpack is configured properly.

// index.js

alert("Hello, World!");

Then we'll create an output directory. This is where we've configured Webpack to write our bundle file:

mkdir output

If we've configured everything properly, running Webpack should spit out our bundle file:

webpack

If the file output/app.bundle.js is present and you can locate our alert("Hello, World!") code in its contents, then you've configured Webpack successfully!

Set up the Mocha runner command

NPM has a scripts configuration option that allows creating macros for running common commands. We'll use this to create a command that will run our test suite on the command line.

In your package.json file, add the following key to the JSON hash:

{
  "scripts": {
    "test": "mocha-webpack --webpack-config webpack.config.test.js \"spec/**/*.spec.js\" || true"
  }
}

For an actual example of this command in a real package.json file, see the package.json file in the example code.

Dang though, that is one hefty command. Let's go through this piece by piece.

First, we're assigning this to the test command. That means that when we run npm run test, NPM will execute the mocha-webpack --webpack-config ... command for us.

The mocha-webpack executable is a module that precompiles your Webpack bundles before running Mocha, which actually runs your tests. Now, mocha-webpack is designed for server-side code, but so far I haven't had any problems using it for client-side JavaScript. Your mileage may vary.

When we call the mocha-webpack command, we pass it the --webpack-config option with the argument webpack.config.test.js. This tells mocha-webpack where to find the Webpack configuration file to use when precompiling our bundle. Notice that the file has a .test suffix and that we haven't created it yet. We'll do that in the next step.

After that, we pass mocha-webpack a glob of our test files. In this case, we're passing it spec/**/*.spec.js, which means we'll run all the test files contained within the spec folder and all folders within it.

And finally, we append || true to the end of the command. This tells NPM not to treat a non-zero exit code from mocha-webpack as a fatal error. Most of the time we run tests, at least one test will fail, producing a non-zero exit status, and without || true NPM prints a lengthy error message after every failing run. This addition cleans up our output a bit so we don't have to read a nagging error message each time. I'm sure the NPM team meant well when they added this message, but I think it's a bit silly we have to resort to this to remove it. If you know a better way, leave a comment!
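You can see the effect of || true in isolation in any shell:

```shell
# A failing command normally propagates its non-zero exit status...
false; echo "without || true: $?"

# ...but appending || true forces the overall status to zero.
false || true; echo "with || true: $?"
```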

Create our test Webpack configuration

Because we're running our tests on the command line and not in the browser, we need to be sure to tell Webpack that our target environment is Node and not browser JavaScript. To do this, we'll create a specialized test Webpack configuration which targets Node in webpack.config.test.js:

var config = require('./webpack.config');
config.target = 'node';
module.exports = config;

I also want to point out how nice it is that Webpack configurations are just plain JavaScript objects. We're able to require our base configuration, set the target property, and then export the modified configuration. This pattern is especially useful when producing production configuration files, but that's a topic for another guide.
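Since configs are plain objects, the same trick works for any variant build. A minimal self-contained sketch of the pattern, with an inline stand-in for the required base config:

```javascript
// Stand-in for require('./webpack.config'); in a real project you'd
// require the base file instead.
var baseConfig = {
  entry: 'index.js',
  output: { filename: 'app.bundle.js' }
};

// Derive a variant config by mutating and re-exporting the base.
function withTarget(config, target) {
  config.target = target;
  return config;
}

var testConfig = withTarget(baseConfig, 'node');
console.log(testConfig.target); // prints "node"
```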

Write a basic test

It's the moment we've been waiting for! We've laid the foundation for testing in our project. Now let's write a basic (failing) test to see Mocha in action!

Create the spec directory in your project if you haven't already. Before we get to testing React components, let's just try our hand at testing a plain old function. Let's call that function sum, and test that it does indeed sum two numbers. I know, it's real exciting. But it'll give us confidence our test setup is working.

Create a file spec/sum.spec.js with the following:

import sum from 'sum';
import { expect } from 'chai';

describe("sum", () => {
    context("when both arguments are valid numbers", () => {
        it("adds the numbers together", () => {
            expect(sum(1,2)).to.equal(3);
        });
    });
});

Let's go over that one line at a time.

First, we import a function called sum from a module called 'sum'. You probably guessed we're going to need to create that file. You guessed right.

Create the file lib/sum.js:

export default function() { }

Note that we're creating the file inside the lib folder. Back when we created our Webpack configuration, we told Webpack to resolve modules in both the root folder and the /lib folder. We use lib because it indicates to other developers that this file is part of our application library code, as opposed to a test, or configuration, or our build system, etc.

Assertion Styles

The second line in our test file imports a function expect from the Chai module. Chai has a couple different assertion styles which dictate how tests will be written. Without going too far into the details, it means your tests could either read like this:

Assert that x is 10.

Or like this:

Expect x to be 10.

Or like this:

x should be 10.

This is largely a matter of developer preference. In my time as a developer, I've seen the Ruby community shift its consensus from assert, toward should, and now toward expect. So let's settle on expect for now.

Run our test suite

Now that we've created our spec/sum.spec.js file, let's go ahead and run our npm run test command:

npm run test

> react-webpack-testing-example@1.0.0 test /Users/teejayvanslyke/src/react-webpack-testing-example
> mocha-webpack --webpack-config webpack.config.test.js "spec/**/*.spec.js" || true

sum
  when both arguments are valid numbers
    1) adds the numbers together


0 passing (7ms)
1 failing

1) sum when both arguments are valid numbers adds the numbers together:
  AssertionError: expected undefined to equal 3
    at Context.<anonymous> (.tmp/mocha-webpack/01b73f0d4e3c95d9c729f459c86e1fc4/01b73f0d4e3c95d9c729f459c86e1fc4-output.js:93:61)

Success! Well, sort of. Our test runs, but it looks like it's failing because we never implemented the sum function. Let's do that now.

Make the test pass

Let's make our sum function take two arguments, a and b. We'll return the result of adding both of them together, like so:

export default function(a, b) { return a + b; }

Now run our test again. It passes!

npm run test

> react-webpack-testing-example@1.0.0 test /Users/teejayvanslyke/src/react-webpack-testing-example
> mocha-webpack --webpack-config webpack.config.test.js "spec/**/*.spec.js" || true

sum
  when both arguments are valid numbers
    ✓ adds the numbers together


1 passing (6ms)

Watch for changes to streamline your workflow

Now that we've written a passing test, we'll want to iterate on our sum.js library. But rather than running npm run test every time we want to check the pass/fail status of our tests, wouldn't it be nice if it ran automatically whenever we modified our code?

Mocha includes a --watch option which does exactly this. When we pass mocha-webpack the --watch option, Mocha will re-run our test suite whenever we modify a file inside our working directory.

To enable file watching, let's add another NPM script to our package.json:

{
  "scripts": {
    "test": "mocha-webpack --webpack-config webpack.config.test.js \"spec/**/*.spec.js\" || true",
    "watch": "mocha-webpack --webpack-config webpack.config.test.js --watch \"spec/**/*.spec.js\" || true"
  }
}

Notice how the watch script just runs the same command as the test script, but adds the --watch option. Now run the watch script:

npm run watch

Your test suite will run, but you'll notice the script doesn't exit. With the npm run watch command still running, add another test to spec/sum.spec.js:

import sum from 'sum';
import { expect } from 'chai';

describe("sum", () => {
    context("when both arguments are valid numbers", () => {
        it("adds the numbers together", () => {
            expect(sum(1,2)).to.equal(3);
        });
    });

    context("when one argument is undefined", () => {
        it("throws an error", () => {
            expect(() => sum(1, undefined)).to.throw("undefined is not a number");
        });
    });
});

Save the file. Mocha will have re-run your suite, and it should now report that your new test fails.
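To make that new test pass, sum would need to validate its arguments. A sketch (written as a plain function here; in the project it would be the default export of lib/sum.js):

```javascript
// Throws when either argument isn't a number, matching the message
// the new test expects; otherwise adds as before.
function sum(a, b) {
  if (typeof a !== 'number' || typeof b !== 'number') {
    throw new Error("undefined is not a number");
  }
  return a + b;
}
```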

Reduce duplication in package.json

In the previous step, we copied and pasted the test script into the watch script. While this works fine, copy and paste should bother every developer just a little bit.

Luckily, mocha-webpack provides a way to specify the default options to the command so we needn't include them in each line of our package.json's scripts section.

Create a new file called mocha-webpack.opts in your project's root directory:

--webpack-config webpack.config.test.js
"spec/**/*.spec.js"

And now, your package.json file can be shortened like this:

{
  "scripts": {
    "test": "mocha-webpack || true",
    "watch": "mocha-webpack --watch || true"
  }
}


Install CtrlP to save time hunting for files in Vim

Vim is my favorite text editor. I've used it exclusively since 2004, having fallen in love with its near-infinite customizability and "one tool, one job" philosophy.

But if there's one feature that's always felt missing, it's a great fuzzy file search. Other text editors like Atom, TextMate, and Sublime offer the user a convenient way to search files by typing partial substrings of the full filename. So if you have a file in lib/foobar/baz.rb, typing foobaz into the fuzzy finder would find the file.

This becomes especially useful in the context of modern JavaScript, where you'll often have file trees that look like this:

reducers/todos.js
actions/todos.js
components/TodoList.js

Using tab completion to resolve these paths works, but it's a lot of keyboard crunching. Not the smoothest approach.

Luckily, CtrlP offers a turnkey solution.

Installation

To install CtrlP, clone it into your ~/.vim/bundle directory:

git clone https://github.com/ctrlpvim/ctrlp.vim.git ~/.vim/bundle/ctrlp.vim

Then, add it to your Vim's runtime path in your ~/.vimrc:

set runtimepath^=~/.vim/bundle/ctrlp.vim

You'll probably also want to tell CtrlP to ignore files matching some paths by setting the wildignore option in your ~/.vimrc:

set wildignore+=*/.git/*,*/.hg/*,*/.svn/*,*/build/*,*/node_modules/*

This tells CtrlP to ignore version control metafiles (Git/Mercurial/SVN), files inside build directories (I use Middleman frequently, and it dumps its output there), and your NPM node_modules directory. If you have other project-specific paths you don't want to show up in your fuzzy search results, add them here.

Usage

To use CtrlP, open Vim in the root directory of the codebase of your choice and press, well, Ctrl+P. A buffer will appear at the bottom of your Vim. Type some characters that are a part of the file you want to find, and you'll see the list of files reduce to those matching your query. Press Return and the selected file will open!

Hopefully CtrlP will improve your workflow like it has improved mine. Reducing the friction between your brain and your fingers is paramount in creating a work environment that enables great work instead of getting in the way. Cheers!

Beware of making database queries in Goroutines

The past couple days I've been struggling to patch an issue in a client's codebase wherein PostgreSQL is reporting the following repeatedly in my error tracker:

pq: sorry, too many clients already
pq: remaining connection slots are reserved for non-replication superuser connections

In an earlier post, I hypothesized that perhaps I wasn't closing connections I'd opened using db.Query. While I did find some instances of this, I found that the actual culprit was opening database connections inside of Goroutines created and run in a for loop:


for _, user := range users {
  go doStuff(user)
}

func doStuff(user User) {
  rows, err := db.Query("SELECT * FROM cars where user_id=$1;", user.Id)
  if err != nil {
    return // handle the error appropriately in real code
  }
  defer rows.Close()
}

The above example would work just fine if not for running doStuff in concurrent goroutines: the queries would execute in series, with each connection closed before the next one opened. But when we tell Go to execute them in parallel, open connections pile up and bad things happen.

So: If Postgres is complaining that you've got too many concurrent connections, think about the architecture of your application. Is there some place where you might be trying to execute queries in parallel? Is there any way you can execute the queries in series? Or perhaps complete your queries ahead of the concurrent processing?

If you've struggled with having too many concurrent open connections in your Go application, I'd love to hear how you overcame the problem.

Too many connections using PostgreSQL with Golang

If you're building a database-backed Golang application using PostgreSQL, you might come across one or both of the following errors:

pq: sorry, too many clients already
pq: remaining connection slots are reserved for non-replication superuser connections

Both of these errors are signs that you've tried opening more database connections than your PostgreSQL server can handle.

It's tempting to go into your PostgreSQL server configuration and increase the number of connections your server will accept. But that will only lead to performance problems, especially if you're running your PostgreSQL server on a smaller instance with less memory and CPU.

More likely, the culprit isn't the database refusing connections, but your Golang code leaking them.

Wherever you open a query connection, you're responsible for checking the error and then deferring a Close() call on the resulting row set:

rows, err := db.Query("SELECT * FROM cars;")
if err != nil {
    return err
}
defer rows.Close()

It's a good bet that somewhere, you're not closing a connection you've opened. Over time, this could result in your database connection pool being consumed by idle connections. Auditing your code for queries where you're not closing the connection afterward will help ensure your application can still connect to its database.

Depending on the size of your application, this process could take a while. But it's a surefire way to get things moving in the right direction.

Alias your common Ruby commands for a faster workflow

If you're a Rubyist, you probably use the likes of rspec, cucumber, rake, and other commands frequently. And it's likely that you might be running them using bundle exec to execute them in the context of your project's bundle. After finding I was spending a lot of time typing each of these commands, I added a few aliases to my shell config to speed up my workflow:

alias rsp='bundle exec rspec'
alias cuc='bundle exec cucumber'
alias rak='bundle exec rake'
alias mm='bundle exec middleman'

Paste these into your ~/.bashrc or ~/.zshrc, restart your shell, and now running an rspec test in the context of your bundle is as simple as:

rsp spec/models/banana_spec.rb

Have other useful aliases? Post them in the comments below!

Using Gulp to generate image thumbnails in a Middleman app

var gulp = require('gulp');
var imageResize = require('gulp-image-resize');

var paths = {
  images: "source/images/**/*"
}

gulp.task('images', function() {

    gulp.src(['source/images/**/*.png', 'source/images/**/*.jpg'])
        .pipe(imageResize({
            width: 538,
            height: 538
        }))
        .pipe(gulp.dest('tmp/dist/assets/images/538x538'));

    gulp.src(['source/images/**/*.png', 'source/images/**/*.jpg'])
        .pipe(imageResize({
            width: 1076,
            height: 1076
        }))
        .pipe(gulp.dest('tmp/dist/assets/images/1076x1076'));

});

gulp.task('watch', function() {
  gulp.watch(paths.images, ['images']);
});

gulp.task('default', ['watch', 'images']);
gulp.task('build', ['images']);