In Administration -> Management, add the following to Additional Cron Jobs. Update the schedule as needed using cron syntax.
@hourly killall -SIGUSR1 udhcpc
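For context, SIGUSR1 is the signal udhcpc interprets as "renew the lease now," so this cron entry simply forces a renewal every hour. If you want to confirm the behavior before scheduling it, you can run the same command by hand from the router's shell first (assuming your build includes logread):

killall -SIGUSR1 udhcpc   # ask udhcpc to renew its DHCP lease immediately
logread | grep -i dhcp    # check the syslog ring buffer for the renewal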
You might have a case where multiple APs broadcast the same SSID, and you want to ensure that you connect to a specific one. DD-WRT uses wpa_supplicant, which we can configure out-of-band (outside of the web UI). Note that DD-WRT advertises a similar feature in the UI (using MAC filters), but I have not been able to get it to work reliably.
In Administration -> Commands, save the following as a startup script (don’t forget to change the BSSID!).
bssid="AA:BB:CC:DD:EE:FF"

until pidof "wpa_supplicant"; do
  echo "waiting for wpa_supplicant to start..."
  sleep 3
done

sed -i "3ibssid=$bssid" /tmp/ath0_wpa_supplicant.conf
kill -HUP $(pidof "wpa_supplicant")
There are many reasons why you might not want to have Chef Server in your deployment path. Or, as this example shows, you don’t want to host your own Chef Server. We instead use Hosted Chef, but purely as a cookbook repository. We will not have any nodes communicate directly with Chef Server, so we will not have to worry about the five-node limit for free accounts.
As with any knife bootstrap method, this is not suitable for auto-scaling. I will post an article on auto-scaling with Chef sometime in the future.
Let’s get started.
First, you will need to create a Hosted Chef account. Go ahead and do that while I wait.
Assuming you already have ChefDK installed, we install knife-zero:
chef gem install knife-zero
Save the following as ~/.chef/knife.rb and update as needed:
# See http://docs.chef.io/config_rb_knife.html for more information on knife configuration options
cookbook_path  = '/tmp/berkshelf/cookbooks'
chef_repo_path = File.expand_path('../', cookbook_path)
current_dir    = File.dirname(__FILE__)

log_level       :info
log_location    STDOUT
node_name       '<chef-username>'
client_key      "#{current_dir}/<chef-keyname>.pem"
chef_server_url 'https://api.chef.io/organizations/<chef-organization>'
chef_repo_path  chef_repo_path

# https://knife-zero.github.io/tips/with_cookbook_manager/
knife[:before_bootstrap] = knife[:before_converge] = "berks vendor #{cookbook_path}"

%w(roles environments data_bags).each do |dir|
  FileUtils.ln_sf File.realpath(dir), "#{chef_repo_path}/#{dir}" if Dir.exist?(dir)
end
All of the values in angle brackets need to be updated; they can be found in your Hosted Chef account.
This knife.rb uses Berkshelf to create a chef-repo-style structure from your cookbook directory. If you have roles, environments, or data_bags directories in your cookbook directory, they will be symlinked into place so that the repo follows the expected directory structure; without the symlinks, those features would not work correctly.
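Note that berks vendor expects a Berksfile in your cookbook directory so it knows where to resolve dependencies from. If you don't already have one, a minimal sketch (assuming your dependencies come from Supermarket and your cookbook has a metadata.rb) looks like this:

cat > Berksfile <<'EOF'
source 'https://supermarket.chef.io'
metadata
EOF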
Go into the Chef cookbook directory you’d like to apply to a node. You can apply the cookbook as follows:
knife zero bootstrap --overwrite -r '<recipe or role>' -E <chef environment> root@<hostname or IP>
You will notice that Berkshelf downloads all of your cookbooks from Hosted Chef or Supermarket and installs them in /tmp/berkshelf/cookbooks. The entire /tmp/berkshelf directory is copied to the node, and chef-client is run in local mode using the arguments you passed.
So long as your run_list had no issues, you should see your run complete successfully. Congratulations!
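knife-zero also stores the node object locally after the bootstrap, so you can kick off later runs without bootstrapping again. Something along these lines should work, though check knife zero converge --help for the exact options your version supports:

knife zero converge 'name:<node name>' --ssh-user root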
You may run into the following error when knife connects to a host over SSH using ed25519 keys:

net-ssh requires the following gems for ed25519 support:
  * rbnacl (>= 3.2, < 5.0)
  * rbnacl-libsodium, if your system doesn't have libsodium installed.
  * bcrypt_pbkdf (>= 1.0, < 2.0)
See https://github.com/net-ssh/net-ssh/issues/478 for more information
Gem::MissingSpecError: "Could not find 'rbnacl' (< 5.0, >= 3.2.0) among 263 total gem(s)
Solution:
chef gem install 'rbnacl:<5.0' rbnacl-libsodium 'bcrypt_pbkdf:<2.0'
I recently got a new laptop, and had some issues connecting to hosts I’d been able to connect to on my previous laptop.
After much googling and no luck, I found the following info via brew info openssl:
==> Caveats
A CA file has been bootstrapped using certificates from the SystemRoots
keychain. To add additional certificates (e.g. the certificates added in
the System keychain), place .pem files in
  /usr/local/etc/openssl/certs

and run
  /usr/local/opt/openssl/bin/c_rehash
All I needed to do was place the intermediate certificate in that directory and run that command, and Python stopped giving me: Error in request: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:748).
Note that this is Python 3.6 installed via homebrew.
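For reference, the fix boiled down to something like this, assuming you already have the intermediate certificate saved locally (example-intermediate.pem and example.com below are placeholders):

# drop the intermediate certificate into homebrew openssl's cert directory
cp example-intermediate.pem /usr/local/etc/openssl/certs/

# rebuild the hashed symlinks openssl uses to find certificates
/usr/local/opt/openssl/bin/c_rehash

# verify that homebrew Python is happy again
python3 -c "import urllib.request; urllib.request.urlopen('https://example.com')"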
It turns out that the site operator had only the leaf certificate installed when they should have bundled the intermediate with it, so this shouldn’t have been my problem to fix; it just turned out to be.
YMMV, but good luck!
Allow installation from unknown sources in Fire TV.
Download ADB tools (I used https://github.com/simmac/minimal_adb_fastboot).
Download Kodi for ARMv7 from https://kodi.tv/download.
Then, from your computer (with the Fire TV's IP address in $FIRE_STICK_IP):

adb tcpip 5555
adb connect $FIRE_STICK_IP:5555
adb devices
adb install -r ~/Downloads/kodi-17.4-Krypton-armeabi-v7a.apk
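Once the install finishes, you can sanity-check it without reaching for the remote by launching Kodi over ADB (the package name below is Kodi's standard one, but you can verify it with adb shell pm list packages if in doubt):

adb shell monkey -p org.xbmc.kodi -c android.intent.category.LAUNCHER 1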
To convert a docker-compose.yml into an ECS task definition, pipe it through container-transform:

cat docker-compose.yml | docker run --rm -i micahhausler/container-transform
If you're using the output in CloudFormation, each key in the task definition must be capitalized. You can feed the output to https://github.com/ameir/upkey to do that for you.
First, get the source machine ready to provide the Windows installer to the target machine.
ISO_DIR=~/Downloads
# disable firewall (I’m on macOS; do the equivalent on your OS)
sudo defaults write /Library/Preferences/com.apple.alf globalstate -int 0
cd $ISO_DIR
# launch Samba container to share mounted image
docker run -d -p 139:139 -p 445:445 -v $ISO_DIR:/mount dperson/samba -s 'public;/mount' -u 'user;password'
# mount ISO image locally for use by netboot.xyz
hdiutil mount -mountpoint ./win10/x64/ ./win10.iso
# start local web server
python -m SimpleHTTPServer 8000
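Before moving to the target machine, it's worth confirming that both services are reachable. A quick sanity check from the source machine might look like this (boot.wim is a file every Windows 10 ISO should contain):

smbutil view //user@localhost                                # the 'public' share should be listed
curl -I http://localhost:8000/win10/x64/sources/boot.wim     # should return HTTP 200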
On target machine:
Boot up netboot.xyz via USB or other media. Under “Signature Checks,” disable checking of images (I was unable to get the signature checks to work correctly).
Go to the Windows section, and set the base URL to “http://<source machine IP>:8000/win10”.
Select “Load Microsoft Windows Installer”.
netboot.xyz will download some binaries from the internet, and load the remaining binaries from your source machine over HTTP. You’ll see those requests in the Python server console.
Remove the USB drive after the installer loads, then press Shift + F10 to open a command prompt and run:

wpeinit
net use S: \\<source machine IP>\public\win10 /user:user password   # wasn’t able to do anonymous login (system error 58)
S:\x64\sources\setup.exe
To use your local AWS credentials and a specific profile inside a container:

docker run -v ~/.aws:/home/<user>/.aws -e AWS_PROFILE=<profile name> <container id>
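A quick way to check that the mounted credentials and profile are actually picked up is to run a harmless AWS CLI call in the same container (this assumes the image has the AWS CLI installed and that <user> matches the user the container runs as):

docker run -v ~/.aws:/home/<user>/.aws -e AWS_PROFILE=<profile name> <container id> aws sts get-caller-identity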
rsync is great, but one thing it doesn’t necessarily excel at is speed. Don’t get me wrong, it’s plenty fast in most cases, and there are plenty of opportunities to parallelize transfers and help saturate your pipe, but rsync is ultimately just one-file-at-a-time.
There are tons of articles on how to parallelize rsync, many of them being long shell scripts. Those may be what you’re after in some cases, but if you want something quick and easy, the info in this post should help.
This method assumes that you are in the directory you want to sync from, and that there are multiple files and subdirectories in it. The approach simply lists all of the contents and creates an rsync command for each. For example, if I have 30 subdirectories and 12 files, parallel will create 42 rsync runs. They won’t all run at once, though: by default, parallel runs as many jobs as you have cores on your computer, and you can increase or decrease this as needed.
ls -a1 | tail -n +3 | parallel -u --progress rsync -avzP {} <remote host>:<remote path> --exclude .Trash --exclude '*cache*' --exclude '*node_modules*' --exclude vendor --delete
Don’t forget to adjust the rsync parameters.
Instead of using ls, you can use other techniques to create the list of files and directories to rsync, and pipe it to parallel.
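For example, a find-based variation along these lines limits the sync to top-level directories and caps the number of simultaneous transfers (adjust -j to taste; the rsync flags are the same ones used above):

find . -mindepth 1 -maxdepth 1 -type d -print0 | parallel -0 -j 8 -u --progress rsync -avzP {} <remote host>:<remote path>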