Elasticsearch cluster administration notes

If a replica shard is in the INITIALIZING state and the primary shard is healthy, the shard is being replicated from the primary to the replica.  You can use the cat API to see this state:
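
A hedged sketch of that call (the node address is an assumption — point it at any node in your cluster):

```shell
# Node address is an assumption; adjust to your cluster
ES_URL=http://localhost:9200

# List every shard with its state; replicas being rebuilt from their
# primaries show up as INITIALIZING
curl -s "$ES_URL/_cat/shards?v"
```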

Now, we have no clue from this view what progress has been made, if any.  On large shards, it may even look like things are frozen.  How do we gain insight into what is happening?  Well, thankfully, there’s a recovery status API, which is called like so:
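
The cat recovery API reports per-shard recovery progress, including percent complete; the exact invocation below is my reconstruction (node address is a placeholder):

```shell
ES_URL=http://localhost:9200   # adjust to your node

# Show recovery status for every shard, including progress percentages
curl -s "$ES_URL/_cat/recovery?v"
```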


Queue size reached


To increase your queue size, you can add the following to elasticsearch.yml.  Replace “bulk” and “search” with the appropriate thread pool name, along with a reasonable value.  You can find the list of those at http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-threadpool.html .
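
For example (the pool names and queue sizes here are illustrative — size them to your workload):

```yaml
# elasticsearch.yml
threadpool.bulk.queue_size: 500
threadpool.search.queue_size: 2000
```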



Posted in Linux Luvin'

Installing OpenVZ templates in Proxmox

Proxmox has built-in support for installing templates, but they’re pretty outdated and not well-maintained.  openvz.org, on the other hand, regularly updates their templates, and has a wide selection to choose from.

Making these templates available for use in Proxmox is simple.  Grab the links for the templates you’d like from the site above, and download them like so:
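
A sketch, assuming the default “local” storage (the template names here are examples — substitute the links you picked):

```shell
# Default template cache for "local" storage on Proxmox
TEMPLATE_DIR=/var/lib/vz/template/cache

# Example precreated templates from download.openvz.org
wget -P "$TEMPLATE_DIR" \
  http://download.openvz.org/template/precreated/centos-6-x86_64.tar.gz \
  http://download.openvz.org/template/precreated/debian-7.0-x86_64.tar.gz
```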

All of the links were passed to a single wget command, but you could also run a separate wget for each template.

You will now see the templates in your local storage.  Note that if you use alternate storage (e.g. GlusterFS), you’ll need to update the template path above accordingly.

Posted in Linux Luvin'

IMAP Append – Message contains bare newlines

The Cyrus IMAP server (which is used by FastMail, FYI) is pretty picky when it comes to enforcing RFCs.  When performing a recent email migration (from Zoho, which is less picky), I got a boatload of errors along the way.

With each of my messages as individual files (downloaded as “RFC822” in raw IMAP speak), the following cleaned it up:

Although you can use sed as well, the command is not the prettiest due to the way it works with lines.

To be more thorough, this is what I ultimately placed in my PHP-based migration script:
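
The PHP equivalent is the same negative-lookbehind replacement (the function name is my own; apply it to each message body before the IMAP append):

```php
<?php
// Normalize bare LF line endings to CRLF so Cyrus accepts the append
function fix_bare_newlines($message) {
    return preg_replace('/(?<!\r)\n/', "\r\n", $message);
}
```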


Posted in Linux Luvin'

Create multiple Proxmox containers via script

There are times when I need to create a number of OpenVZ containers in Proxmox at once, which would take way too long via the user interface.  There are a number of ways to accomplish this programmatically, but the most straightforward (assuming you have root access) is via pvectl.

Below is an example script which creates identical containers.  Adjust the container IDs to ones that are available, as well as any other parameters you need.  If you need more advanced options, the man page can be found at https://pve.proxmox.com/wiki/Pvectl_manual .
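
A sketch of such a script — the VMIDs, template reference, and resource options are all examples (see the manual above for the full option list):

```shell
#!/bin/sh
# Template reference and container IDs are assumptions; adjust to your node
TEMPLATE=local:vztmpl/centos-6-x86_64.tar.gz

for id in 201 202 203; do
  # Create each container with identical resources
  pvectl create "$id" "$TEMPLATE" \
    -hostname "ct${id}" -memory 1024 -swap 512 -disk 20
done
```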

Each pvectl create call will print its progress as the container is built.



Posted in Linux Luvin'

rsync to multiple hosts in parallel
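
A minimal sketch of the approach, using xargs to fan out one rsync per host (the host names, paths, and parallelism level are all placeholders):

```shell
# Push /srv/app to each host, running at most four rsyncs at a time
HOSTS="web1 web2 web3 web4"

printf '%s\n' $HOSTS | xargs -n1 -P4 -I{} \
  rsync -az --delete /srv/app/ {}:/srv/app/
```

Bump -P to taste; rsync over ssh is mostly network-bound, so a handful of parallel transfers is usually safe.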



Posted in Linux Luvin'

Enable XHProf for WordPress


You’ll now need to have this module loaded in PHP; this varies depending on which handler you use:
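
For mod_php on a RHEL-style box, for example, that means dropping an ini file in place and restarting the handler (the path is distro-dependent, and PHP-FPM setups differ):

```ini
; /etc/php.d/xhprof.ini (location varies by distro and handler)
extension=xhprof.so
```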

cd into wp-content/plugins

In wp-config.php:

In your admin dashboard, enable “WP XHProf Profiler” from the plugins section.

You will have “Profiler output” links at the very bottom of your page that’ll show you XHProf output.

Posted in Tech Corner

Migrating from one Chef server to another

It happens — you’re on a server that just can’t be upgraded any further, and you need more resources.  Or, you need to back up a Chef server.  Or, you need to set up a QA instance.  Or, you need to finally migrate from Chef 10 to Chef 11.  Or, you have one of many other possible reasons, but you need to be able to stand up a new Chef instance, and not have to do a ton of work.  If any of that applies to you, then this post is for you.

In the case where you’re migrating from one Chef server to another (i.e., the old one is going bye-bye), it would be very helpful to have your Chef server be CNAMEd (e.g. chef.company.com -> vm101.iad.company.com) or behind a load balancer/proxy where you can change targets easily.  That way, you won’t need to update the client configs, and it’ll be an easy swap.  Everything should “just work” ™.

First, we’ll make a copy of your knife.rb:
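
Something like this, assuming the standard ~/.chef location (the backup filename is my own convention):

```shell
# Keep the old server's config around as knife-old.rb
cp ~/.chef/knife.rb ~/.chef/knife-old.rb
```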

Now, we’ll need to get access to your new Chef server via knife.  You can do so by logging in as admin, and regenerating and saving a new private key.  You can also create a new user here instead of using admin, but I advise against this, as any user you create will conflict with users of the same name from the old server.  Yes, that means that if you’ve been using ‘admin’ as the main user, you may run into problems (but let’s just hope that you’ve been using per-person accounts).

Now, we’ll update your current knife.rb to reflect the new node information in it:
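
The relevant knife.rb lines end up looking something like this (the URL, user, and key path are examples):

```ruby
# ~/.chef/knife.rb
node_name        'admin'
client_key       "#{ENV['HOME']}/.chef/admin-new.pem"
chef_server_url  'https://chef-new.company.com'
```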

It wouldn’t hurt to check that you have access to the new node by doing a knife user list.

Now, we’ll need to download all of the data from the “old” Chef server.  To do so, we’ll be using the nifty ‘knife backup’ plugin.  To get it installed on OS X, I did:
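
The plugin ships as a gem:

```shell
# Install the knife-backup plugin
GEM=knife-backup
gem install "$GEM"
```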

Now, to finally back things up, we’ll do:
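
The export, pointed at the old server’s config (the directory name is an example):

```shell
# Export everything from the old server into ./chef-backup
BACKUP_DIR=chef-backup

knife backup export -D "$BACKUP_DIR" -c ~/.chef/knife-old.rb
```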

Note that the argument after -D is the destination directory where all of the Chef data will go; this directory will be created for you automatically.  The -c argument tells knife which config file to use; we’ll, of course, be using the “old” server’s config here.  Also, if you only need to back up a certain set of data from your Chef server (e.g. only users and environments), you can specify that.  See the knife backup documentation for details.

Now that we have all the data we need, we’ll need to push it up to the new server.  This works much the same as the export:
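
The restore, this time against the new server (same example directory as the export):

```shell
# Push the exported data up to the new server
BACKUP_DIR=chef-backup

knife backup restore -D "$BACKUP_DIR"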

I left off the -c here because knife.rb is the default config file.

Once everything has been restored, your original user in Chef will now be available (you can verify this via the Chef Server UI).  The amazing thing is that your keys have not changed, and can be used as-is.  Chef Server keeps track of your public keys, so all of your private keys for all nodes/clients are still good.

Now is the time to update your knife.rb to reflect your original user settings.  If you’re running behind a load balancer/proxy, you can simply use your original config as-is after replacing the old server with the new one.  If you’re doing the CNAME/A record route, you can do the same once DNS has propagated.  Otherwise, you can overwrite your new config with your old one, and edit it to reflect the new server’s URL.

If your nodes are pointing to the wrong server in their client.rb, you can use knife ssh with sed to find/replace the server URLs.
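
Roughly like so — the node query, server URLs, and client.rb path are all placeholders:

```shell
# Rewrite the server URL in client.rb on every node
knife ssh 'name:*' \
  "sudo sed -i 's|https://chef-old.company.com|https://chef-new.company.com|' /etc/chef/client.rb"
```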

If you’ll be accessing multiple Chef servers frequently enough, I highly recommend looking at the knife block plugin.  That way, you can switch between different configurations with ease, including those for Berkshelf.

Posted in Linux Luvin'

Change Chef Server settings after installation

Just about every significant aspect of Chef Server is configurable, although the defaults are okay for most.  The configuration options are documented at http://docs.opscode.com/config_rb_chef_server_optional_settings.html .

Note, though, that the chef-server.rb described in the article was nowhere to be found on my server.  Instead, after some digging, I found what I needed at /opt/chef-server/embedded/cookbooks/chef-server/attributes/default.rb.

So, for example, if you want to change nginx’s HTTPS port from 443 to 4443, you’d simply set default['chef_server']['nginx']['ssl_port'] = 4443.

Update:  after a bit more digging, I found that a) updating your Chef server will undo all of these changes, and b) there is a better way.

Simply create the file /etc/chef-server/chef-server.rb and use the attributes from http://docs.opscode.com/config_rb_chef_server_optional_settings.html .  Going with our example above, to change the nginx SSL port, simply insert nginx['ssl_port'] = 4443.

After making all of the changes you need, you’ll have to apply the configuration (which does a chef-solo behind the scenes) with:
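
That is:

```shell
# Re-run the embedded chef-solo and apply /etc/chef-server/chef-server.rb
chef-server-ctl reconfigure
```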

You’ll see a diff of all the changes that cascaded out during the Chef run.

And that’s it.  Hopefully that’ll help you accomplish what you’re looking for.


Posted in Linux Luvin'

Use nginx as a reverse proxy to speed up your WordPress site

Everyone loves speed.  That includes your site’s visitors.  If you run a WordPress site, WP Super Cache is a pretty cool plugin that generates static files from your dynamic content, and serves those to your users instead of dynamically generating the same page for each user, which can really put your database to work.

If your current WordPress installation runs on Apache, and you want to give it an easy dose of speed, there’s a solution for you.  nginx is a very lightweight and performant webserver that not only serves static files fast, but it caches, too.  We can put both of those features to use to make your site faster than ever.

First, download and install WP Super Cache.  Next, you’ll need to install nginx (I use the official nginx repos, but any recent build should be good).  Don’t start it yet.  Take the config file below, and adjust it to your needs (mainly the section at the top).  Pop it into the nginx conf.d directory as mysite.com.conf (adjust the name, of course).  Next, adjust your Apache config’s Listen directive to listen on port 8080.  In your virtualhost config for your site (which you can find with apachectl -S), update the port from 80 to 8080.
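
A sketch of such a config, assuming WP Super Cache’s default supercache path — the server names, document root, and cache sizing are placeholders:

```nginx
# /etc/nginx/conf.d/mysite.com.conf
proxy_cache_path /var/cache/nginx/wp levels=1:2 keys_zone=wpcache:10m max_size=256m;

server {
    listen       80;
    server_name  mysite.com www.mysite.com;
    root         /var/www/mysite.com;

    set $cache_uri $request_uri;

    # Never serve cached pages for POSTs, query strings, logged-in
    # users, or recent commenters (the cookie check)
    if ($request_method = POST)  { set $cache_uri 'nocache'; }
    if ($query_string != "")     { set $cache_uri 'nocache'; }
    if ($http_cookie ~* "comment_author|wordpress_logged_in|wp-postpass") {
        set $cache_uri 'nocache';
    }

    location / {
        # Serve the WP Super Cache static file straight off disk if it
        # exists; otherwise hand the request to Apache
        try_files /wp-content/cache/supercache/$http_host/$cache_uri/index.html @apache;
    }

    location @apache {
        proxy_pass         http://127.0.0.1:8080;
        proxy_set_header   Host $host;
        proxy_set_header   X-Real-IP $remote_addr;
        proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;

        # Short-lived cache in nginx itself for pages Apache generates
        proxy_cache        wpcache;
        proxy_cache_valid  200 15m;
        proxy_no_cache     $http_cookie;
        proxy_cache_bypass $http_cookie;
    }
}
```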

Now, do the following:
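
Roughly:

```shell
# Validate nginx config, restart Apache on its new port, then start nginx
nginx -t && apachectl restart && service nginx start
```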

The above will test the nginx and Apache configs (apachectl will do a configtest before a restart), and start nginx if all is well.

In case the nginx config wasn’t clear: nginx serves the static pages created by WP Super Cache directly from the filesystem if they exist.  Otherwise, the request is proxied to Apache, where the page is generated (and, in most cases, WP Super Cache creates a static file for it) and returned to nginx, which caches the response for 15 minutes.  On a repeat request for the same page, the WP Super Cache file is served from the filesystem if it exists; failing that, nginx serves its own cached copy (if it’s within the 15-minute window); failing that, the request is proxied back to Apache.

Users who are logged in or who have left comments will not be served any cached data.  This is determined via a cookie check.  So, if an anonymous user has a blazing user experience, then comments on a post, they may see slower speeds until the comment cookie expires (typically 5-30 minutes), since each request will be proxied to Apache for processing.  Just something to keep in mind.

If you try this, let me know how it goes!

Posted in Linux Luvin'

Update your Ubuntu/Debian servers with Chef

If you need a hands-off way to update your Ubuntu or Debian servers, Chef’s Knife utility provides an easy way to do this (and parallelize it!).

The following will update your packages in an unattended way.  In other words, all prompts will be suppressed, and defaults will be accepted (e.g. if a new file is delivered with a package, yours will be kept).  You can look at the dpkg --force-conf* flags to change this behavior.  The -C flag denotes the number of parallel processes.
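
A sketch of that command — the search query and concurrency level are examples:

```shell
# Upgrade Ubuntu/Debian nodes, five at a time, with no prompts;
# --force-confold keeps your existing config files
knife ssh 'platform:ubuntu OR platform:debian' -C 5 \
  'sudo DEBIAN_FRONTEND=noninteractive apt-get update &&
   sudo DEBIAN_FRONTEND=noninteractive apt-get -y \
        -o Dpkg::Options::=--force-confold upgrade'
```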

If you don’t want to update all of your servers, update the search command to be more precise.


Posted in Linux Luvin'