Compile ImageMagick 6.8 on RHEL 6 / CentOS 6

There wasn’t a lot of thorough documentation on installing/compiling the latest ImageMagick on CentOS 6.  Below are the packages and install steps; remember to also install the PHP PECL extensions if needed (the ones from yum should work fine).

If ImageMagick-6.8.5-4.tar.gz is not the latest package (it is at the time of writing), be sure to update that before running.
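The original package list and build commands aren’t reproduced here; a typical build on CentOS 6 would look something like the following (the dependency list and download URL are my assumptions — adjust to the image formats you need and the current release):

```shell
# Build prerequisites (assumed typical set for JPEG/PNG/TIFF/font support)
yum install -y gcc make libjpeg-devel libpng-devel libtiff-devel \
    freetype-devel ghostscript-devel

# Download, unpack, build, and install (update the version if 6.8.5-4 is no longer current)
wget http://www.imagemagick.org/download/ImageMagick-6.8.5-4.tar.gz
tar xzf ImageMagick-6.8.5-4.tar.gz
cd ImageMagick-6.8.5-4
./configure --prefix=/usr/local
make && make install
```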

In case you’re concerned about the last line, we add that so that /usr/local/lib is searched first in case an older ImageMagick is present elsewhere. If you don’t have ImageMagick at all and /usr/local/lib shows up in ldconfig -v, you can skip this step.
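For reference, the library-path step being described might look like this (the ld.so.conf.d filename is arbitrary):

```shell
# Ensure /usr/local/lib is in the linker search path so the new build is found
echo "/usr/local/lib" > /etc/ld.so.conf.d/imagemagick.conf
ldconfig

# Verify which Magick libraries are picked up
ldconfig -v | grep -i magick
```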


Installing Gearman 1.1.6 on RHEL / CentOS 6

I haven’t been able to find a clear set of instructions to get Gearman compiled on CentOS 6. After some trial and error, this is what I came up with to get things going.
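The original commands were lost along the way; a sketch of the kind of build this describes (the dependency list is my assumption — Gearman needs Boost, gperf, and libevent among others on CentOS 6) would be:

```shell
# Assumed dependency set for building Gearman 1.1.6 on CentOS 6
yum install -y gcc gcc-c++ make boost-devel gperf libevent-devel libuuid-devel

# Download gearmand-1.1.6.tar.gz from the project’s release page, then:
tar xzf gearmand-1.1.6.tar.gz
cd gearmand-1.1.6
./configure
make && make install
```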

Bot Blocking Your Site with Apache

If you want to block bots/crawlers/search engines/your mom from reaching your site (based on user agent), there are many ways to go about it, the best (IMO) being directly with Apache.

The below Apache rules set an environment variable if a user agent matches, and the rewrite rule acts based on that. I am redirecting to robots.txt here to remind these bots it exists (thanks for not respecting the rules!). You can redirect to anywhere you want.

Note that BrowserMatchNoCase does simple string matching, and matches substrings. You can have just one rule to match ‘bot’ that will capture anything with ‘bot’ in the name. I am being more explicit here just to play it safe.

You can test with the curl command below:

curl -I -H "User-Agent: botname" localhost

Apache rules:
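The original rules aren’t shown here; a sketch of the configuration described above (the bot names are illustrative — substitute the user agents you actually want to block) might look like:

```apache
# Flag matching user agents (case-insensitive substring match)
BrowserMatchNoCase "badbot" bad_bot
BrowserMatchNoCase "scrapercrawler" bad_bot
BrowserMatchNoCase "examplespider" bad_bot

RewriteEngine On
# Send flagged clients to robots.txt instead of the requested page
RewriteCond %{ENV:bad_bot} =1
RewriteRule .* /robots.txt [R=302,L]
```

This requires mod_setenvif and mod_rewrite to be loaded.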

Apache Digest Password Protection

I was working with a barebones Apache setup recently and needed to enable Digest authentication, but it turned out the necessary modules weren’t loaded. After some research, I came up with the below. I’m largely posting this for my own reference, but figured others could benefit as well.

Make sure the module and password file paths are correct!
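The setup being described might look like the following (module path, directory, and password file location are assumptions — adjust for your distribution):

```apache
# Load the required module (path varies by distribution)
LoadModule auth_digest_module modules/mod_auth_digest.so

<Directory "/var/www/html/protected">
    AuthType Digest
    AuthName "Restricted Area"
    AuthDigestProvider file
    AuthUserFile /etc/httpd/conf/digest_passwords
    Require valid-user
</Directory>
```

You would create the password file with something like htdigest -c /etc/httpd/conf/digest_passwords "Restricted Area" username — note that the realm passed to htdigest must match AuthName exactly.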

Using Nginx with W3 Total Cache and Memcached

When working on a WordPress project recently where speed was a huge concern, I relied on everyone’s favorite reverse proxy, Nginx, to offload the work required by Apache to serve pages. This worked great, but an issue I encountered was invalidating cached items that were just updated (e.g. comments, post edits, etc.) in Nginx. For users to see the updated content, the cache TTL had to pass, which was unacceptable for our purposes.

I was already using W3 Total Cache in WordPress with Memcached storage, and although W3 Total Cache knew to invalidate certain cached items in Memcached, Nginx was not aware of this since it used its own local cache.

Nginx, thankfully, has built-in support for Memcached, and is therefore able to get key/value pairs that were previously set by the application (by default, without patching, it doesn’t write cached items to Memcached, but it can read from it). My hope was to let W3 Total Cache perform all the caching/invalidation, and let Nginx rely on that cache for content.

The problem, though, is that W3 Total Cache stores items in Memcached using nonstandard keys, and stores the value as a PHP array. It also compresses values (via the Memcache PHP library) that are over a particular size. These incompatibilities made working with Nginx impossible, at least without modification.

I took the time to modify W3 Total Cache so that it stores data in a format that Nginx can clearly understand. Once done, it was awesome. Comments appeared immediately, and pages were snappy.

I’ve upped the modified W3 Total Cache plugin here:

You will need to set a key to use (e.g. ‘’). Memcached keys will then look like ‘’. In Nginx, you would set the key accordingly (e.g. set $memcached_key yourprefix$request_uri;, substituting your own key prefix).
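A minimal sketch of the Nginx side of this setup (addresses, port numbers, and the key prefix are assumptions):

```nginx
location / {
    set $memcached_key yourprefix$request_uri;  # prefix is whatever you configured
    memcached_pass 127.0.0.1:11211;             # assumed Memcached address
    default_type text/html;
    # Fall back to Apache/WordPress on a cache miss or Memcached error
    error_page 404 502 504 = @fallback;
}

location @fallback {
    proxy_pass http://127.0.0.1:8080;           # assumed Apache backend
    proxy_set_header Host $host;
}
```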


Running Proxmox behind a single IP address

I ran into the challenge of running all of my VMs and the host node under a single public IP address. Luckily, the host is just pure Debian, and ships with iptables.

What needs to be done is essentially to run all the VMs on a private internal network. Outbound internet access is done via NAT. Inbound access is via port forwarding.

Network configuration

Here’s how it’s done:

Create a virtual interface that serves as the gateway for your VMs:

My public interface (the one with the public IP assigned) is vmbr0. I will then create an alias interface called vmbr0:0 and give it a private IP address in /etc/network/interfaces. Note that this is needed for KVM and OpenVZ bridged interfaces; venet interfaces automagically work.
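A sketch of what that alias stanza in /etc/network/interfaces might look like (the private address is an example — use whatever subnet you like):

```
auto vmbr0:0
iface vmbr0:0 inet static
    address 10.10.10.1
    netmask 255.255.255.0
```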

Create an iptables rule to allow outbound traffic:

There are a few ways to specify this, but the most straightforward is:
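A sketch of that rule, assuming the example 10.10.10.0/24 private subnet and vmbr0 as the public interface:

```shell
# NAT outbound traffic from the private subnet out the public interface
iptables -t nat -A POSTROUTING -s 10.10.10.0/24 -o vmbr0 -j MASQUERADE
```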

In one of your VMs, set the interface IP to an address on your private subnet, set the default gateway to the alias interface’s address, and use the matching subnet mask. Feel free to adjust this as you see fit. Test pinging your public IP address, and perhaps even an external address. If this works, you’re on the right track.

At this point, you have internet access from your VMs, but how do you get to them? For your OpenVZ containers, sure, you could SSH into the host node and ‘vzctl enter’ into a CTID, but that’s probably not what you want. We will need to set iptables rules to dictate which ports point to which servers.

Assuming you want VM 100 to have SSH on port 10022, and let RDP of VM 101 ‘live’ on port 10189, we can do the following:
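A sketch of those forwarding rules (the internal VM addresses are examples, on the assumed 10.10.10.0/24 subnet):

```shell
# SSH to VM 100 via public port 10022
iptables -t nat -A PREROUTING -i vmbr0 -p tcp --dport 10022 -j DNAT --to 10.10.10.100:22

# RDP to VM 101 via public port 10189
iptables -t nat -A PREROUTING -i vmbr0 -p tcp --dport 10189 -j DNAT --to 10.10.10.101:3389
```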

You can add as many of these as you’d like.

Once you have your configuration set up as you please, we will need to make it persistent. If you reboot at this point, all of your iptables rules will be cleared. To prevent this, we simply do:
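The save step referred to here (the rules file path is a common convention, not a requirement):

```shell
iptables-save > /etc/iptables.rules
```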

This step saves the rules to an iptables-readable file. In order to apply them upon boot, you have several options. One of the easier ones is to modify /etc/network/interfaces as such (notice the third line):
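A sketch of such an interfaces stanza, where the pre-up line restores the saved rules before the interface comes up (the public addresses are placeholders):

```
auto vmbr0
iface vmbr0 inet static
    pre-up iptables-restore < /etc/iptables.rules
    address 203.0.113.10
    netmask 255.255.255.0
    gateway 203.0.113.1
```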

At this point, you now have a functioning inbound/outbound setup on your own private LAN.

Assigning public ports to containers

With multiple containers potentially running the same types of services, you can’t simply map the same external port to every container that uses it.  Ports will collide, and you have to figure out the best way to work around that.  The section below details how to perform host-header switching/proxying for websites, but for other services, there aren’t such elaborate solutions.  SIP, for example, runs on port 5060.  If you have two SIP servers (perhaps one for testing, one production), you’ll have to map the ports somehow.

A port-numbering algorithm I came up with is:

(CTID mod 100) x 100 + original port number + 1000

For example, with container 105 that needs SIP: (105 mod 100) x 100 + 5060 + 1000 = 6560, so external port 6560 maps to internal port 5060.

For SSH (port 22) on container 105: (105 mod 100) x 100 + 22 + 1000 = 1522.

Your weights and offsets might need tweaking for your particular purposes; this is just what works for me.
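The algorithm above can be sketched as a small helper (the function name is mine):

```shell
#!/bin/sh
# (CTID mod 100) x 100 + original port number + 1000
map_port() {
    ctid=$1
    port=$2
    echo $(( (ctid % 100) * 100 + port + 1000 ))
}

map_port 105 5060   # SIP on container 105 -> 6560
map_port 105 22     # SSH on container 105 -> 1522
```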

Supporting multiple websites

Now, what if you want to host multiple websites across multiple containers?  One easy way to do this is port forwarding, so that one external port goes to container 101, another goes to container 102, etc., but that’s ugly.  We can instead set up a proxy that takes ALL requests on port 80 and routes them to their appropriate destinations.  Let’s get started.

In this example, we’re going to have a dedicated container for nginx.  I also have a dedicated container for a MySQL instance that’s shared for all of my sites.  This allows the website containers to be very lightweight.

First, create a container using the OS of your choice, and enter it.  I recommend using one of the minimal templates available for download.  View this post for information on how to install templates and create containers.

Here, we’ll be using the Ubuntu 14.04 template.  Once you’re in, you’re now ready to install nginx.
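On Ubuntu, the install itself is just:

```shell
apt-get update
apt-get install -y nginx
```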

You’ll now have a default site, which you’ll probably want to change.  This site will be served for any request NOT matching a site name of anything else nginx serves (e.g., if a request comes in for a hostname nginx doesn’t recognize, the default site would show up).  Either change the default site, or delete it so that another site (the first config file nginx loads, in alphabetical order) is the default.  You can prefix the config filename with something like 000- to ensure it’s the default.  Alternatively, you can specify it in the config file, like listen 80 default_server;.

Now, for each site you want to proxy for, you’ll need a config file, as follows:
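A sketch of such a per-site config (the domain and the container’s internal address are examples):

```nginx
# /etc/nginx/sites-available/example.com
server {
    listen 80;
    server_name example.com www.example.com;

    location / {
        proxy_pass http://10.10.10.102;   # the container hosting this site
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

On Debian/Ubuntu layouts, remember to symlink the file into sites-enabled before restarting.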

Once you’ve created all of the config files, as shown above, simply restart nginx with  service nginx restart .

Now, assuming your nginx container is container 101 with a private IP on the internal network, we can allow worldwide access as such:
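A sketch of that rule, assuming the nginx container sits at 10.10.10.101 (an example address) and vmbr0 is the public interface:

```shell
# Forward all public HTTP traffic to the nginx proxy container
iptables -t nat -A PREROUTING -i vmbr0 -p tcp --dport 80 -j DNAT --to 10.10.10.101:80
```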

Now, once you point DNS, you should be good to go.  If you’d like to test this beforehand, you can update your hosts file, or simply use curl to see if things are looking as expected:
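For example, you can send a request straight to your public IP with a forged Host header (hostname and address are placeholders):

```shell
curl -I -H "Host: example.com" http://203.0.113.10/
```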

I hope that helps!

Using rsync with CIFS or SMB (Windows) Destination

This took me a good bit of reading and trial/error, but here are rsync options that I found to work well with a Windows target (tested with a CIFS mount) from a Linux ext3 filesystem without errors.

Bing Site Search in PHP with Pagination

This is a simple yet powerful way to get Bing’s Site Search API working on your site in PHP. Simply update the $config variables below and you’ll be good to go.

MySQL Backup to FTP and Email Shell Script for Cron v2.2

It’s been a while since I’ve publicly made updates to this script, but I did make some tweaks over the years that I’d like to share.

Here are some changes over the last version:

  • Delete old backups via FTP
  • Backup to multiple FTP servers
  • More efficient backups
  • Add time in filename (allows for multiple backups/day)
  • More verbose/better error detection

I have moved the development of this to the GitHub repository below:

You can view the previous version here.
If there’s a feature that’s missing that you’d like to see, leave a comment below.

PHP Function to Return Lowest MX Record

This is a quick and simple one, but I admittedly started off with a for loop before coming across array_multisort. This turned out to be very useful for an application I’m working on.