News


Thoughts from the field

Help Wanted

Open Infrastructure Services is hiring! We have a number of exciting projects on the horizon and are looking for talented infrastructure engineers. If you’re experienced with, or have an interest in, cloud infrastructure automation and helping customers migrate their data center workloads to the public cloud, please send an email to [email protected]. We’d love to talk with you about these opportunities!

These projects focus on large enterprise data center migrations to Google Cloud.

Apply Now

Apply online now: Employment Application.


Update ESXi 6.5 to U1 over SSH

While updating ESXi 6.5 to 6.5 U1, I encountered the following error:

[root@esxi:~] esxcli software profile update -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml -p ESXi-6.5.0-20170702001-standard
 [InstallationError]
 [Errno 28] No space left on device
       vibs = VMware_locker_tools-light_6.5.0-0.23.5969300
 Please refer to the log file for more details.

The solution to this problem is to enable swap. I’m running this ESXi host on a single 32GB USB Thumb Drive, so I first had to create a VMFS5 datastore using the process at ESXi 6.5 Single USB Thumb Drive.

Once a datastore exists, enable Swap. Go to Host > System > Swap and activate swap on your datastore of choice. In my case there’s only one.

Enable Swap
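
If you prefer to stay at the shell, ESXi exposes the same setting through esxcli. A minimal sketch, assuming your datastore is named datastore1 (substitute your own datastore name):

# Enable datastore-backed system swap (datastore name is an assumption)
esxcli sched swap system set --datastore-enabled true --datastore-name datastore1

# Verify the swap configuration
esxcli sched swap system get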

Once swap is activated, this process to update ESXi over SSH works flawlessly:

Enable outbound HTTP connections:

esxcli network firewall ruleset set -e true -r httpClient

Perform the update:

esxcli software profile update -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml -p ESXi-6.5.0-20170702001-standard

Lock down HTTP connections after the update:

esxcli network firewall ruleset set -e false -r httpClient

Reboot the host:

reboot
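
After the host comes back up, confirm the new build before moving on:

esxcli system version get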

For future reference, to see a list of available updates:

esxcli software sources profile list -d \
  https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml \
  | awk '/6.5.0/ {print $1}'
ESXi-6.5.0-20170701001s-no-tools
ESXi-6.5.0-20170404001-standard
ESXi-6.5.0-4564106-standard
ESXi-6.5.0-20170104001-standard
ESXi-6.5.0-20171004001-no-tools
ESXi-6.5.0-20170702001-no-tools
ESXi-6.5.0-20170404001-no-tools
ESXi-6.5.0-20170304101-no-tools
ESXi-6.5.0-20171004001-standard
ESXi-6.5.0-20170104001-no-tools
ESXi-6.5.0-4564106-no-tools
ESXi-6.5.0-20170304101-standard
ESXi-6.5.0-20170301001s-standard
ESXi-6.5.0-20170701001s-standard
ESXi-6.5.0-20170304001-standard
ESXi-6.5.0-20170702001-standard
ESXi-6.5.0-20170304001-no-tools
ESXi-6.5.0-20170301001s-no-tools

ESXi 6.5 Single USB Thumb Drive

I have a goal of booting an ESXi host from a single 32GB USB thumb drive. No other internal storage should be required for this firewall application. This is an ideal setup as there are no moving parts or cables to come unplugged. USB thumb drives are cheap and fast these days.

I was able to install ESXi 6.5 onto the USB thumb drive, but nothing shows up as an available data store for virtual machines. There’s a ton of free space on the USB stick. We can make use of this space with some partitioning magic.

ESX Data Stores

I’m doing all of this from a Mac OS X workstation. I’ll use an Ubuntu 16.04 VirtualBox instance to partition the USB stick, and we’ll format the filesystem on the ESXi host itself.

First, install ESXi onto the USB stick.

Shut down the ESXi host, remove the USB stick, and insert it into your Mac. Eject the disk from your Mac so it can be passed through VirtualBox to Ubuntu 16.04:

sudo diskutil list
sudo diskutil eject disk1

To quickly get an Ubuntu desktop up and running, use Vagrant:

mkdir ~/xenial
cd ~/xenial
vagrant init ubuntu/xenial64

Patch the Vagrantfile to get the GUI:

--- Vagrantfile.orig    2017-11-22 16:03:04.000000000 -0800
+++ Vagrantfile 2017-11-22 16:04:49.000000000 -0800
@@ -57,4 +57,8 @@
   #   vb.memory = "1024"
   # end
+  config.vm.provider "virtualbox" do |vb|
+    vb.gui = true
+    vb.memory = "2048"
+  end
   #
   # View the documentation for the provider you are using for more

Bring up the vagrant instance:

vagrant up

Shut down the instance and add USB to the virtual machine:

vagrant ssh -- sudo shutdown -h now

Go into VirtualBox Settings => Ports => Add a USB EHCI controller. Add a filter for the USB thumb drive. This is important; otherwise, the USB thumb drive won’t show up in the Ubuntu VM. If the USB thumb drive doesn’t show up in the GUI, make sure it’s been ejected from Mac OS X using diskutil eject prior to going into VirtualBox settings.
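
These clicks can also be scripted with VBoxManage. A rough sketch; the VM name and USB vendor/product IDs below are placeholders (find yours with VBoxManage list vms and VBoxManage list usbhost):

# Enable the USB EHCI controller on the VM (VM name is a placeholder)
VBoxManage modifyvm "xenial_default" --usbehci on

# Add a filter matching the thumb drive (IDs are placeholders)
VBoxManage usbfilter add 0 --target "xenial_default" --name "usb-stick" \
  --vendorid 0781 --productid 5581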

Install the Ubuntu Desktop:

vagrant ssh -- sudo apt-get install -y --no-install-recommends ubuntu-desktop

Install gparted:

vagrant ssh -- sudo apt-get install -y gparted

Set a password for the user ubuntu:

vagrant ssh -- sudo passwd ubuntu

Reboot again to get the desktop up and running:

vagrant ssh -- sudo shutdown -r now

Log in as ubuntu with the password just set. Open a terminal with Ctrl + Alt + T.

Use sudo gparted to create a new partition in the free space. Make sure to create it as unformatted, not the default of ext4.

Create New Partition

Note the partition number; it should be partition 2, e.g. /dev/sdc2.

Note Partition Number
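
If you’d rather skip the GUI for this step, parted can create the same unformatted partition non-interactively. A sketch, assuming the free space begins at sector 8134656 as shown in the sfdisk dump later in this post:

# Create a GPT partition named "datastore" spanning the free space
sudo parted /dev/sdc -- unit s mkpart datastore 8134656s 100%

# Confirm the partition number assigned to it
sudo parted /dev/sdc print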

Use sudo gdisk /dev/sdc to change the partition type to fb00. The sequence here is:

  1. t
  2. 2
  3. fb00
  4. w
  5. Y

vagrant ssh -- sudo gdisk /dev/sdc
GPT fdisk (gdisk) version 1.0.1

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.

Command (? for help): t
Partition number (1-9): 2
Current type is 'Linux filesystem'
Hex code or GUID (L to show codes, Enter = 8300): fb00
Changed type of partition to 'VMWare VMFS'

Command (? for help): w

Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
PARTITIONS!!

Do you want to proceed? (Y/N): Y
OK; writing new GUID partition table (GPT) to /dev/sdc.

Finally, change partition 2 to partition 10 to avoid issues updating ESXi 6.5. The update process assumes partition 2 has not been created and will error out if present.
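
One shortcut worth knowing: sgdisk (shipped in the same package as gdisk) has a transpose option that swaps two partition table entries, and since slot 10 is empty, transposing simply moves entry 2 into slot 10:

# Swap partition table entries 2 and 10; entry 2 moves into the empty slot 10
sudo sgdisk --transpose=2:10 /dev/sdc

The sfdisk route below accomplishes the same renumbering by hand: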

sudo sfdisk -d /dev/sdc > esxi.txt
cp -p esxi.txt esxi.txt.orig

Change esxi.txt as the following diff shows, moving partition 2 to 10.

--- esxi.txt.orig       2017-11-23 00:13:57.561990531 +0000
+++ esxi.txt    2017-11-23 00:15:35.566968530 +0000
@@ -7,5 +7,4 @@

 /dev/sdc1 : start=          64, size=        8128, type=C12A7328-F81F-11D2-BA4B-00A0C93EC93B, uuid=439DBC97-6DB2-4FD1-BDEA-A01FC9F26A49
-/dev/sdc2 : start=     8134656, size=    51927040, type=AA31E02A-400F-11DB-9590-000C2911D1B8, uuid=7661DD41-6B25-4ACA-9A7E-E68F07361B9E
 /dev/sdc5 : start=        8224, size=      511968, type=EBD0A0A2-B9E5-4433-87C0-68B6B72699C7, uuid=CC0591B2-6658-4A25-91CF-1A9765D239A5
 /dev/sdc6 : start=      520224, size=      511968, type=EBD0A0A2-B9E5-4433-87C0-68B6B72699C7, uuid=CDD8851F-3A51-47AD-80E1-F2D504197A8C
@@ -13,2 +12,3 @@
 /dev/sdc8 : start=     1257504, size=      585696, type=EBD0A0A2-B9E5-4433-87C0-68B6B72699C7, uuid=3119D6C6-3EEC-4970-9289-6128686849EB
 /dev/sdc9 : start=     1843200, size=     5242880, type=9D275380-40AD-11DB-BF97-000C2911D1B8, uuid=7A6D08A3-6E3F-488D-8F3B-36145382BA9F
+/dev/sdc10 : start=     8134656, size=    51927040, type=AA31E02A-400F-11DB-9590-000C2911D1B8, uuid=7661DD41-6B25-4ACA-9A7E-E68F07361B9E

Write the partition table back out to the USB drive:

sudo sfdisk --force /dev/sdc < esxi.txt

Check the partition table and make sure there is a partition 10:

sudo fdisk -l /dev/sdc

Insert the USB thumb drive back into the ESXi host and boot it back up. SSH in as root and check the partition table. There should be no partition 2, and you should see partition 10.

[root@esxi:~] partedUtil getptbl /dev/disks/mpx.vmhba32\:C0\:T0\:L0
gpt
3825 255 63 61457664
1 64 8191 C12A7328F81F11D2BA4B00A0C93EC93B systemPartition 128
5 8224 520191 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0
6 520224 1032191 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0
7 1032224 1257471 9D27538040AD11DBBF97000C2911D1B8 vmkDiagnostic 0
8 1257504 1843199 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0
9 1843200 7086079 9D27538040AD11DBBF97000C2911D1B8 vmkDiagnostic 0
10 8134656 61456383 AA31E02A400F11DB9590000C2911D1B8 vmfs 0

Format the partition with vmkfstools -C vmfs5 -S USB.1:

[root@esxi:~] vmkfstools -C vmfs5 -S USB.1 /dev/disks/mpx.vmhba32\:C0\:T0\:L0:10
create fs deviceName:'/dev/disks/mpx.vmhba32:C0:T0:L0:10', fsShortName:'vmfs5', fsName:'USB.1'
deviceFullPath:/dev/disks/mpx.vmhba32:C0:T0:L0:10 deviceFile:mpx.vmhba32:C0:T0:L0:10
ATS on device /dev/disks/mpx.vmhba32:C0:T0:L0:10: not supported
.
Checking if remote hosts are using this device as a valid file system. This may take a few seconds...
Creating vmfs5 file system on "mpx.vmhba32:C0:T0:L0:10" with blockSize 1048576 and volume label "USB.1".
Successfully created new volume: 5a1614ce-846cd3c8-9b10-0cc47aaaf624

The partition now shows up in the datastore browser after a refresh.

ESX Data Stores
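
If the datastore doesn’t appear after a refresh, a rescan from the shell may help:

# Rescan for new VMFS volumes and list mounted filesystems
esxcli storage filesystem rescan
esxcli storage filesystem list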

Configure swap, a persistent scratch location, and virtual machines on the same USB drive ESXi is booting from, and enjoy!
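
For the persistent scratch piece, one approach is to point ScratchConfig at a directory on the new datastore and reboot. A sketch, assuming the USB.1 volume label from above (the .locker directory name is just a convention):

# Create a scratch directory on the USB-backed datastore
mkdir /vmfs/volumes/USB.1/.locker

# Point the scratch location at it; the change takes effect after a reboot
vim-cmd hostsvc/advopt/update ScratchConfig.ConfiguredScratchLocation string /vmfs/volumes/USB.1/.locker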


Gary Larizza joins Open Infrastructure Services

Gary Larizza Portrait

Open Infrastructure Services is growing! I’m excited to announce that Gary Larizza has joined Open Infrastructure Services as a Principal Consultant. Gary and I first started working together on Puppet back in 2010 in Portland, Oregon. From that first day working together, I was struck by Gary’s attitude and desire to dive deep into the guts of Puppet to figure out what’s really going on. Fast forward to 2017, and I’ve found myself visiting Shit Gary Says on a daily basis while working on Puppet types and providers. Gary’s ability to assimilate complex information, distill it into the essential bits customers value, and communicate those essentials with humor and empathy has been invaluable to me personally. A great example of this depth, humor, and empathy is Seriously, what is this provider doing?

Gary and I both started out working in education in Ohio, though separately from one another. We were brought together in Portland, OR by two Australians from the other side of the world, Nigel Kersten and James Turnbull.

Gary and I share that special bond that only comes from successfully dealing with a classroom full of computers delivered a couple of weeks before school starts with the latest and greatest Mac OS X major release, chock full of changes and new surprises.

Please check out Gary’s post, Some Other Beginning’s End, on this new beginning for both of us.

Contact us with your automation and cloud infrastructure goals. We’d both love to help you handle changes faster and more confidently through automated cloud infrastructure.

Gary is also a Puppet certified consultant and a member of the Puppet Services Delivery Partner Program. We’re here to help you achieve your goals, whether you’re expanding your Puppet investment, upgrading to the latest version and language features, or just getting started.


Puppet Enterprise Node Classification Backup & Restore with ncio

I’m happy to announce ncio, a small command line utility to back up and restore Puppet Enterprise Node Classification data.

A customer recently needed to automate the process of backing up and restoring Puppet Enterprise. Most of the work involved in accomplishing this goal is fairly straightforward. The majority of the Puppet configuration is stored in version control in a Control Repository, and Git is incredibly easy to back up and restore. Most customers I work with don’t mind losing data stored in PuppetDB, because Puppet reports, resource information, and facts are automatically re-populated as nodes check in after the system is restored. The certificates used by Puppet are also fairly straightforward to back up, as they live on the local filesystem.

The only service that was difficult to back up using normal filesystem tools is the Node Classification Service. The node classifier stores critical information, and as a result it needs to be backed up and restored alongside all of these other resources.

The Node Classification v1 API is an excellent mechanism to retrieve and restore node classification data, but the task of using the API is largely left as an exercise for the reader. To help with this common problem, I wrote a small utility called ncio (node classification input/output). If you’d like to easily get a dump of all node classification data in pretty-printed JSON, transform a backup for restoration on a different PE Monolithic Master, or restore a backup, then this tool is for you.
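
To get a feel for what ncio automates, here’s roughly what one of those raw API calls looks like with curl, using the same certificates ncio picks up by default (paths from the help output below); a sketch, not a supported interface:

# Dump all node groups straight from the classifier API (what ncio backup wraps)
curl --cert /etc/puppetlabs/puppet/ssl/certs/pe-internal-orchestrator.pem \
     --key /etc/puppetlabs/puppet/ssl/private_keys/pe-internal-orchestrator.pem \
     --cacert /etc/puppetlabs/puppet/ssl/certs/ca.pem \
     https://localhost:4433/classifier-api/v1/groups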

The tool is distributed and updated on RubyGems in an effort to make it easy to install and upgrade in the future.

Here’s how to get started:

Installation

Installation is straightforward thanks to Puppet Enterprise shipping with the gem command:

$ sudo /opt/puppetlabs/puppet/bin/gem install ncio
Successfully installed ncio-1.1.0
Parsing documentation for ncio-1.1.0
Installing ri documentation for ncio-1.1.0
Done installing documentation for ncio after 0 seconds
1 gem installed

Usage

The command runs best from a PE Monolithic Master. The tool automatically uses SSL certificates which are already present to make setup as easy as possible.

sudo -H -u pe-puppet /opt/puppetlabs/puppet/bin/ncio backup > /var/tmp/backup.json
I, [2016-06-28T19:25:55.507684 #2992]  INFO -- : Backup completed successfully!

Retrying Connections

When automating a backup from cron, it’s recommended to use the --retry-connections global option to make the backup as robust as possible. This option allows ncio to retry in certain situations, e.g. when the puppetserver service is restarting.
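
For example, a nightly root crontab entry might look like this (the backup path is just an example):

# Back up node classification data nightly, retrying while services restart
30 2 * * * /opt/puppetlabs/puppet/bin/ncio --retry-connections backup > /var/tmp/nc_backup_$(date +\%F).json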

Restore Pipelines

The command is designed to send a backup to standard output and restore from standard input, which allows backup and restore pipelines. In this example, a backup is taken on master1, transformed so that it can be restored on master2, and then restored on master2:

export PATH="/opt/puppetlabs/puppet/bin:$PATH"
ncio --uri https://master1.puppet.vm:4433/classifier-api/v1 backup \
 | ncio transform --hostname master1.puppet.vm:master2.puppet.vm \
 | ncio --uri https://master2.puppet.vm:4433/classifier-api/v1 restore

Command overview

Global options:

$ ncio --help
usage: ncio [GLOBAL OPTIONS] SUBCOMMAND [ARGS]
Sub Commands:

  backup     Backup Node Classification resources
  restore    Restore Node Classification resources
  transform  Transform a backup, replacing hostnames

Quick Start: On the host of the Node Classifier service, as root or pe-puppet
(to read certs and keys)

    /opt/puppetlabs/puppet/bin/ncio backup > groups.$(date +%s).json
    /opt/puppetlabs/puppet/bin/ncio restore < groups.1467151827.json

Transformation:

    ncio --uri https://master1.puppet.vm:4433/classifier-api/v1 backup \
     | ncio transform --hostname master1.puppet.vm:master2.puppet.vm \
     | ncio --uri https://master2.puppet.vm:4433/classifier-api/v1 restore

Global options: (Note, command line arguments supersede ENV vars in {}'s)
  -u, --uri=<s>                Node Classifier service uri {NCIO_URI}
                               (default: https://localhost:4433/classifier-api/v1)
  -c, --cert=<s>               White listed client SSL cert {NCIO_CERT}
                               See: https://goo.gl/zCjncC (default:
                               /etc/puppetlabs/puppet/ssl/certs/pe-internal-orchestrator.pem)
  -k, --key=<s>                Client RSA key, must match certificate {NCIO_KEY} (default:
                               /etc/puppetlabs/puppet/ssl/private_keys/pe-internal-orchestrator.pem)
  -a, --cacert=<s>             CA Cert to authenticate the service uri {NCIO_CACERT}
                               (default: /etc/puppetlabs/puppet/ssl/certs/ca.pem)
  -l, --logto=<s>              Log file to write to or keywords STDOUT,
                               STDERR {NCIO_LOGTO} (default: STDERR)
  -s, --syslog, --no-syslog    Log to syslog (default: true)
  -v, --verbose                Set log level to INFO
  -d, --debug                  Set log level to DEBUG
  -r, --retry-connections      Retry API connections, e.g. waiting for the
                               service to come online. {NCIO_RETRY_CONNECTIONS}
  -o, --connect-timeout=<i>    Retry <i> seconds if --retry-connections=true
                               {NCIO_CONNECT_TIMEOUT} (default: 120)
  -e, --version                Print version and exit
  -h, --help                   Show this message

Backup options

$ ncio backup --help
Node Classification backup options:
  -g, --groups, --no-groups    Operate against NC groups.  See: https://goo.gl/QD6ZdW (default: true)
  -f, --file=<s>               File to operate against {NCIO_FILE} or STDOUT, STDERR (default: STDOUT)
  -h, --help                   Show this message

Transform options

$ ncio transform --help
Node Classification transformations
Note: Currently only Monolithic (All-in-one) deployments are supported.

Transformation matches against class names assigned to groups.  Transformation
of hostnames happen against rules assigned to groups and class parameters for
matching classes.

Options:
  -c, --class-matcher=<s>    Regexp matching classes assigned to groups.
                             Passed to Regexp.new() (default: ^puppet_enterprise)
  -i, --input=<s>            Input file path or keywords STDIN, STDOUT, STDERR (default: STDIN)
  -o, --output=<s>           Output file path or keywords STDIN, STDOUT, STDERR (default: STDOUT)
  -h, --hostname=<s+>        Replace the fully qualified domain name on the left with the
                             right, separated with a :
                             e.g --hostname master1.acme.com:master2.acme.com
  -e, --help                 Show this message

Restore options

$ ncio restore --help
Node Classification restore options:
  -g, --groups, --no-groups    Operate against NC groups.
                               See: https://goo.gl/QD6ZdW (default: true)
  -f, --file=<s>               File to operate against {NCIO_FILE} or STDOUT,
                               STDERR (default: STDIN)
  -h, --help                   Show this message

Hopefully you find ncio useful. If so, please let me know! If you run into any issues or would like to see additional features, please open up an issue on the project page.