Numeric Illustration

Using chef-provisioning with the Joyent Smart Data Center

Posted in Uncategorized by elevatorboy on December 4, 2015

Fog gained support for Joyent SDC in 2012

https://github.com/fog/fog/pull/739

 

The chef-provisioning-fog Joyent driver is an official part of chef-provisioning:

https://github.com/chef/chef-provisioning-fog

 

BUT the SDC7 API now has a few changes that affect how you use it:

  1. set the API version to 7.0.0 or higher and pass networks as an array of UUIDs
  2. requests must be key-signed; password-only auth is gone
  3. with 7.3.0 you can pass networks as an array of hashes of network configs (see the sketch after this list)
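
For concreteness, here are the two networks shapes side by side as Ruby literals (the UUIDs here are made up; yours come from your SDC network list):

# pre-7.3.0: networks is just an array of network UUIDs
networks_old = [
  'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee',
  'ffffffff-0000-1111-2222-333333333333'
]

# 7.3.0 and up also accept an array of hashes, which lets you mark which
# nic is primary
networks_new = [
  { :ipv4_uuid => 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee', :primary => true },
  { :ipv4_uuid => 'ffffffff-0000-1111-2222-333333333333' }
]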

Since I was making this an example, I set up a network using the documentation address blocks from RFC 5737:

https://tools.ietf.org/html/rfc5737


If you set up a network for testing this out, make sure you assign the nic_tag to an interface on some CNs: https://docs.joyent.com/private-cloud/networks/nic-tags


Per the docs https://apidocs.joyent.com/cloudapi/#appendix-e-sdc-7-changelog

AND

the sdc-cloudapi code that parses the networking params

https://github.com/joyent/sdc-cloudapi/blob/master/lib/machines.js#L389-L523

You can see

https://github.com/joyent/sdc-cloudapi/blob/master/lib/machines.js#L435

that if the API version is set to 7.3.0 or later, it will validate and use the array-of-hashes format shown in the comments about the networking params:

https://github.com/joyent/sdc-cloudapi/blob/master/lib/machines.js#L405-L413

So what is chef-provisioning actually setting?

chef-provisioning-fog uses the underlying fog driver

https://github.com/fog/fog/blob/master/lib/fog/joyent/compute.rb#L120

https://github.com/chef/chef-provisioning-fog/blob/master/lib/chef/provisioning/fog_driver/providers/joyent.rb

but gives a good hint at settings to put in your knife.rb file

https://github.com/chef/chef-provisioning-fog/blob/master/lib/chef/provisioning/fog_driver/providers/joyent.rb#L44-L54
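
For reference, those same option names drive plain fog directly. A minimal sketch (every value below is a placeholder for your own SDC details):

require 'fog'

# a minimal sketch using the option names from the linked compute.rb;
# the URL, username, and key details are placeholders
compute = Fog::Compute.new(
    :provider        => 'Joyent',
    :joyent_url      => 'https://my-sdc-cloudapi.example.com',
    :joyent_username => 'myUserName',
    :joyent_version  => '7.3.0',
    :joyent_keyname  => 'name of my key in sdc',
    :joyent_keyfile  => '/path/to/my/key/keyfile'
)
puts compute.servers.map(&:name) # lists your machines if auth is working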

 

the actual fog joyent compute code

https://github.com/fog/fog/blob/master/lib/fog/joyent/compute.rb#L120

sets the API version to 6.5 by default.

So you need to bump :joyent_version in your knife.rb file to at least 7.0.0, since SDC is version 7+ now and pre-7 support is going away.

If you want to use the array of hash format, set it to 7.3.0

Also, 6.5 allowed password auth, but 7.0 and up require key-signed auth. Fortunately 6.5 supported key signing as well, so if you specify the right settings in your knife.rb file, it will do the right thing:

https://github.com/fog/fog/blob/master/lib/fog/joyent/compute.rb#L127-L140

driver 'fog:Joyent'
driver_options :compute_options => {
    :joyent_url => 'https://192.168.42.202',
    :joyent_username => 'myUserName',
    :joyent_password => 'myPassWord',
    :joyent_version => '7.3.0',
    :joyent_keyname => 'name of my key in sdc',
    # matching .pub must be in same dir
    :joyent_keyfile => '/path/to/my/key/keyfile', # the priv key
    :joyent_keyphrase => 'password for key file'
}

knife[:ssl_verify_peer] = false # I needed this for my home SDC, which has self-signed certs

Then the provisioner cookbook code can be something like:

machine 'testInstance' do
  tag 'my_tag_is_cool'
  machine_options({
    :bootstrap_options => {
      :package => 'dc_128', # small package for testing
      :image => '842e6fa6-6e9b-11e5-8402-1b490459e334', # happens to be a base-64 image
      :networks => [
        {
          :ipv4_uuid => 'da0c6983-14cf-4fc6-a83e-329cb827f57c', # a uuid of one of my nets
          :primary => true
        },
        {
          :ipv4_uuid => '074384c0-0561-461f-9109-d3a399da38eb' # a uuid of another one of my nets
        }
      ],
      :key_name => 'name of my key in sdc'
    }
  })
end

OR, to use the older but still SDC 7 syntax, set :joyent_version to '7.2.0' and you can specify the networks parameter as just an array of UUIDs, like

:networks => [ 'da0c6983-14cf-4fc6-a83e-329cb827f57c', '074384c0-0561-461f-9109-d3a399da38eb']

like the blur of lane lines late at night

Posted in Uncategorized by elevatorboy on February 8, 2015

Life sometimes just feels like a blur. I just realized how long it's been since I posted anything here. While I've never been a prolific blogger, I always intend to contribute more technical stuff back, since I've gleaned so much from blogs and SE sites, but I'm usually flying from one thing to another.

So we bought a house, traveled the world, remodeled said house doing most of the work ourselves while having our first child (a wonderful baby girl), and we both changed jobs: she transitioned to full-time stay-at-home mom, and I changed companies from Ntrepid to Tealium ~7 months ago.  Both of us have had our sleep patterns rearranged (re-deranged?) a few times. So there's a good reason for the blur effect. Looking back, some of it (like all the evenings spent framing, insulating, drywalling, texturing, and painting) is all fuzzy, like remembrances of the daily commute home.  I know I did it, but when, for how long, and most of the details are not recorded.  Other parts of it are as crisp as it gets (holding my baby girl in my arms for the first time, seeing a lion throw up 8 feet from my rental car, finding out our pending 2.0 release is a boy).

I hate calling life busy because that sounds almost dirty. Busy always sounds like I’d rather do something else, but I can’t because — busy! Life is full, fantastically, wonderfully, and blazingly awesomely full and I love it.

All I have to offer at this point is an aggregating statsd backend that sends the data in JSON. It's my first foray into writing anything in node, and I mostly just adapted some existing code with ideas from some other existing code, but it's something shareable, so here it is.

json_metrics

Some of the application developers instrumented a bunch of their application code (YAY!!) and pumped it to statsd, and I needed something to get that data into Splunk for analytics and alerting.  I wanted:

  1. throttling, so that data didn't come in as fast as the code running on the monitored hosts generated it
  2. structured data, so that Splunk could parse it automatically and do all the sweet Splunky stuff with it.

and while there were already several backends for statsd, none of them at the time met both of my criteria, so this was born.
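
For anyone who hasn't written one, the statsd backend interface is tiny: export an init function and subscribe to the flush event. A bare-bones sketch of the idea (not the actual json_metrics code):

// statsd calls exports.init at startup; 'flush' fires every flushInterval ms
// with the aggregated metrics, which is what provides the throttling
function JsonBackend(startupTime, config, emitter) {
  var self = this;
  emitter.on('flush', function (timestamp, metrics) {
    self.flush(timestamp, metrics);
  });
}

JsonBackend.prototype.flush = function (timestamp, metrics) {
  // one structured JSON document per flush interval -- easy for Splunk to parse
  console.log(JSON.stringify({
    timestamp: timestamp,
    counters: metrics.counters,
    timers: metrics.timers,
    gauges: metrics.gauges
  }));
};

exports.init = function (startupTime, config, events) {
  new JsonBackend(startupTime, config, events);
  return true;
};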

If it's useful to you too, enjoy.

I’ve had my head in the AWS sdk for Ruby, Chef, and just learning Ruby itself for a project so maybe something shareable will come out of that soon as well.


How to export photos from an iDVD project

Posted in Uncategorized by elevatorboy on January 26, 2013

I needed to get a copy of the photos I had used in an iDVD project. I discovered that iDVD doesn't have an export feature that gives this output. You can view the project info and see a list of the photos, but you cannot select the whole list and copy it.

You can however get the listing in another way.

Inside the project container is a plist file called
ProjectData

Unfortunately it is in binary plist format, and it's not so easy to extract the file names from that. However, there is a system utility to convert plist files to and from various formats, including binary and XML. So if I convert it to XML, I can regex out what I want and feed that to cp to make an export.

Say my project was called cool_project. Then the plist file is at:
~/Documents/DVD_Projects/cool_project.dvdproj/Contents/Resources/ProjectData

Convert it to xml with plutil:
plutil -convert xml1 ~/Documents/DVD_Projects/cool_project.dvdproj/Contents/Resources/ProjectData -o ~/output.plist

Now I have an XML plist file that I can more easily work with. Because all I care about is the images and other media, I extracted the iTunes path listings and cleaned them up a bit with regexes in vim, ending with a file containing only a list of file paths with the spaces escaped. Then I fed that list to the following shell command to copy all the pictures to a folder:
mkdir ~/DVD_EXPORT; while read picture; do cp "$picture" ~/DVD_EXPORT/ ; done <photo_list.txt

Also, a few of the references to iTunes audio files were not found, so I fixed their paths, converted the plist back to binary, and put it back in the project folder like so: plutil -convert binary1 output.plist -o output.binary
cp output.binary ~/Documents/DVD_Projects/cool_project.dvdproj/Contents/Resources/ProjectData

UPDATE:
A revised version with some bash command substitution, some sed editing, and a loop that you can put all on one line would go something like: mkdir ~/DVD_EXPORT; for picture in $(plutil -convert xml1 ~/Documents/DVD_Projects/cool_project.dvdproj/Contents/Resources/ProjectData -o - | sed -n 's/.*<string>\(\/Users\/MY_USER_NAME[^<]*\)<\/string>.*/\1/p' | sed 's/ /\\ /g') ; do cp "$picture" ~/DVD_EXPORT/ ; done
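
One caveat with the one-liner: the for loop splits the command substitution on whitespace, so even with the escaped spaces, a path containing a space arrives as two words. A while-read version sidesteps that entirely (same assumptions about the project path and username):

mkdir ~/DVD_EXPORT
plutil -convert xml1 ~/Documents/DVD_Projects/cool_project.dvdproj/Contents/Resources/ProjectData -o - \
  | sed -n 's/.*<string>\(\/Users\/MY_USER_NAME[^<]*\)<\/string>.*/\1/p' \
  | while IFS= read -r picture; do
      cp "$picture" ~/DVD_EXPORT/
    done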

perl-dyndns as a solaris service

Posted in Uncategorized by elevatorboy on October 4, 2011

I’ve switched a few things around at home and now my old way of doing dyndns won’t work for me. So I finally got a script based dyndns update client working. It was a close shave as I was able to download the SunStudeoExpress suite just before Oracle shut down the OpenSolaris pkg repo that I was using (I’m still running 134 but when I update to illumos/openindianna I’ll update this). Anyway, now CPAN works and I was able to build the prereqs for this nifty dyndns client written in perl.

But being a fan of SMF and wanting to do things the “right” way on Solaris, just adding a cron job myself wasn’t enough, so I went ahead and made adding the cron job an smf service.

Here’s the manifest and the method script. I clearly borrowed some ideas from the zfs snapshot service. Again, thanks Tim I still like zfs snapshot more than timeslider.

First, the method script. Don't hate me because I like bash. It's probably technically supposed to be Bourne shell or ksh93 for OpenSolaris. So much for doing it "right".
Method:

#!/bin/bash -x
# I left it in -x mode to have entries in the service log.  This is optional

. /lib/svc/share/smf_include.sh

getproparg() {
  val=`svcprop -p $1 $SMF_FMRI`
  [ -n "$val" ] && echo $val
} 

# just in case
export PATH=/usr/bin:${PATH}

if [ -z "$SMF_FMRI" ]
then
  echo "SMF framework variables are not initialized."
  exit $SMF_EXIT_ERR
fi

PERLDYNDNSBIN='/opt/perl-dyndns/bin/dyndns.pl'
PERL='/usr/bin/perl'

CONFIG_FILE=`getproparg perl-dyndns/config_file`

if [ -z "$CONFIG_FILE" ]
then
  echo "perl-dyndns/config_file property not set"
  exit $SMF_EXIT_ERR_CONFIG
fi

case "$1" in
  'start')
    $PERL $PERLDYNDNSBIN --Config $CONFIG_FILE
    crontab -l > /tmp/saved-crontab.$$
    echo "0 6 * * * $PERL ${PERLDYNDNSBIN} --Config ${CONFIG_FILE}" &gt;&gt; /tmp/saved-crontab.$$
    crontab /tmp/saved-crontab.$$
    retval=$?
    if [[ ! $retval -eq 0 ]]
    then
      echo "WARNING - error adding cronjob"
      rm /tmp/saved-crontab.$$
      exit 1
    fi

    ;; 

  'stop')
    # removing a cron job is essentially just looking for an existing entry,
    # removing it, and reading the leftovers back to crontab
    crontab -l | grep -v "${PERLDYNDNSBIN}" > /tmp/saved-crontab.$$
    crontab /tmp/saved-crontab.$$
    #check_failure $? "Unable to remove cron job!"
    retval=$?
    if [[ ! $retval -eq 0 ]]
    then
      echo "WARNING - error removing cronjob"
      rm /tmp/saved-crontab.$$
      exit 1
    fi

    ;; 

  'refresh')
    echo "not implemented yet, not sure if it can be"
    ;; 

  *)
    echo "I don't understand that option, try one of these:"
    echo "Usage: $0 {start|stop|refresh}"
    exit 1
    ;;
esac 

exit $SMF_EXIT_OK

and now the manifest. I hosed it a few times until I remembered that you have to specify transient for services that you don't want to run as a daemon; otherwise SMF will try to restart it, detect that it's failing a lot, and put it in maintenance.

Manifest:

<?xml version="1.0"?>
<!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">

<service_bundle 
  type="manifest" 
  name="perl-dyndns">

  <service 
    name="application/network/perl-dyndns" 
    type="service" 
    version="1">
    
    <dependency 
      name="network" 
      grouping="require_all" 
      restart_on="none" 
      type="service">
      <service_fmri 
        value="svc:/milestone/network:default"/>
    </dependency>
    
    <exec_method 
      type="method" 
      name="start" 
      exec="/lib/svc/method/perl-dyndns.sh %m" 
      timeout_seconds='0'/>
    <exec_method 
      type="method" 
      name="stop" 
      exec="/lib/svc/method/perl-dyndns.sh %m" 
      timeout_seconds='0'>
    </exec_method>

          <property_group name='startd' type='framework'>
                  <propval name='duration' type='astring' value='transient' />
          </property_group>

    <instance 
      name="config-file" 
      enabled="false">
      <method_context>
        <method_credential 
          user="root" 
          group="root"/>
      </method_context>
      <property_group 
        name="perl-dyndns" 
        type="application">
        <propval 
          name="config_file" 
          type="astring" 
          value="/etc/opt/perl-dyndns/dyndns-dynamic.conf" />
      </property_group>
    </instance>

    <stability 
      value="Evolving"/>

    <template>
      <common_name>
        <loctext xml:lang="C">perl-dydndns</loctext>
      </common_name>
      <description>
        <loctext xml:lang="C">
       Perl dyndns - A Perl Dynamic DNS (DDNS) update client
-----------------------------------------------------
Map dynamic IP address into your.hostname.example.org. A
cross-platform solution for DHCP ISP-connected users to obtain
permanent DNS, MX, and Web hosting service from a DDNS provider (e.g.
dyndns.org). Works anywhere where Perl is installed.

Requirements

        Extra Perl CPAN modules need to be installed before the program
        can be used:

            HTTP::Request::Common
            HTTP::Headers
            LWP::UserAgent
            LWP::Simple
            Sys::Syslog

        You can install these one by one with the perl command:

            perl -MCPAN -e shell
            (then, at the cpan prompt: install Module::Name)

        External commands needed:

            ipconfig            (Under Windows)
            ifconfig            (Under POSIX compliant OS)

        </loctext>
      </description>
      <documentation>
        <doc_link 
          name="Perl Dynamic DNS (DDNS) Update Client"
          uri="https://savannah.nongnu.org/projects/perl-dyndns" />
      </documentation>
    </template>
  </service>
</service_bundle>

Ocarina of my time

Posted in Uncategorized by elevatorboy on October 2, 2011

My father-in-law is a wizard.  Well, with clay, that is.  He's got this cool little workshop he built in his backyard with his throwing wheel and a bunch of tools and glazes, and he putters around in there in the evenings after work sometimes, and gold comes out.  Not the economy-is-in-the-tank-put-10%-of-your-assets-in-it kind, but just the spectacular pottery kind.

 

The newest thing he started doing is making whistles.  But being the craftsman and tinkerer he is, he makes them with sound holes that let you adjust the pitch and he crafts them in the shape of reptilian heads.


He’s always suggesting I take a go at making some pottery, but I never take him up on it.  I saw on his shelf one whistle in particular that was his little prototype of the next level and it had 2 sound holes which enables it to have 3 tones.  OK now I’m into making something with clay.  So I’ve resolved to make myself an ocarina.

 

someday

zpool mirror what ?

Posted in Uncategorized by elevatorboy on October 2, 2011

Recently I came home to find my OpenSolaris server not answering on ssh and when I went to investigate, I found it off and a nasty black scorch mark where some component or other used to occupy the motherboard.


I think this used to be part of a USB header

So I ordered a new motherboard, processor (Core i3, 35W), and 8GB of RAM, and while I was at it got a new case with more fans and enough space for all 4 of my HDDs to sit in bays (one had just been sitting in the bottom of the old case). I plugged the drives back in and it booted right up.

Anyway, I had my friend Brian over after work and we were looking at my zpools.   I have 2 pools: one is the rpool, a 2-disk mirror, and the other is for my home file share use and is similarly a 2-disk mirror.  He noticed that I had created my non-root pool with partitions on the disks instead of handing zpool the whole drive, which precludes zfs from using the disk's write cache.  So I set about to fix this.  The solution seemed to be to remove one device from the storage mirror, add it back as a whole disk, let the mirror resilver, then swap the other drive out and back in as a whole disk, as sketched below.
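
In zpool terms, the plan looked roughly like this (device names are examples from my box; treat it as a sketch, not a recipe, and wait for each resilver to finish before touching the second disk):

zpool detach longstor c6d0s0        # drop the partition half of the mirror
zpool attach longstor c5d0s0 c6d0   # re-add that drive as a whole disk; resilvers
# watch 'zpool status longstor' until the resilver completes, then repeat:
zpool detach longstor c5d0s0
zpool attach longstor c6d0 c5d0     # second drive back in as a whole disk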

It sounded simple.  When I went to do it, I found that I could remove a device from either pool just fine, but when I went to add it back, zpool kept telling me that the device was part of a pool already.  This didn’t match the config it had shown me and the disk address that I had just removed from the mirror, so I tried seeing what it would say about each of the drives in turn to find out which pools it thought things were in.  I found one that it said was not part of any pool, but it wasn’t at the same scsi controller and target that I had set free.

OK… well, it let me add it, but it was at this point that I realized this was probably because my new motherboard had a different SATA controller, and thus the SCSI emulation layer was detecting things differently; even though zpool status showed the old setup it had originally been built with, underneath, the disks were all at different places.  Was I hosed?  After adding that disk back in, the mirror resilvered, and I was able to remove the other disk from that same pool and add it back in as a whole disk, but at this point zpool hesitated.  I was afraid it was going to spew bits, but then it figured out what all the new addresses were supposed to be and started showing all the disks with the correct new addressing.

:~ $ zpool status
  pool: longstor
 state: ONLINE
 scrub: resilver completed after 1h2m with 0 errors on Wed Sep 21 19:50:29 2011
config:

        NAME          STATE     READ WRITE CKSUM
        longstor      ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            c5d0      ONLINE       0     0     0
            c6d0      ONLINE       0     0     0  244G resilvered

errors: No known data errors

  pool: rpool
 state: ONLINE
 scrub: resilver completed after 0h3m with 0 errors on Wed Sep 21 18:49:20 2011
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            c4d0s0    ONLINE       0     0     0
            c7d0s0    ONLINE       0     0     0  5.67G resilvered

errors: No known data errors

ZFS, pretty nifty stuff

PSA – Sanitized for your safety

Posted in Uncategorized by elevatorboy on October 6, 2010

Just a Public Service Announcement.  Sanitize your inputs.  No really, do it!

Favorite comment so far: "they paid $300,000 for a webapp that doesn't scrub its input?"  Yep, but that's Federal Grant Money at work for you.
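
The fix is old and boring: don't build queries out of raw input; bind parameters instead. A generic Ruby sketch (sqlite3 here, but any parameterized API looks the same):

require 'sqlite3'

db = SQLite3::Database.new(':memory:')  # stand-in for the real database
db.execute('CREATE TABLE users (name TEXT)')
name = "Robert'); DROP TABLE users;--"  # hostile user input

# DON'T: interpolate input straight into the SQL string -- injectable
# db.execute("SELECT * FROM users WHERE name = '#{name}'")

# DO: let the driver bind the value as a parameter
db.execute('SELECT * FROM users WHERE name = ?', [name])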

Fixing SVR4 package dependency resolution?

Posted in Solaris by elevatorboy on June 30, 2010

If you’re using or considering using Solaris then you know that for all of its wonderful features, its antiquated package management system can be a cause for dismay & headache. Especially if you’re used to (or ever heard of ) Debian’s Apt, Red Hat’s Yum, Mandriva’s urpmi tools, or any other modern package management tool, you’re wishing that you had something similar to alleviate the dependency problems.

Of course, you may have heard of the Sun Freeware site, installed the pkg-get script, and off you went installing from a third-party repository of packages (not that there's anything necessarily wrong with that). But of course Sun's service contracts aren't going to cover the non-Sun-issued stuff you get there. And perhaps you're just a bit paranoid about 3rd-party stuff ever since you read a transcript of Ken Thompson's Turing Award speech. Maybe you even work somewhere with a policy that doesn't allow this sort of thing. Maybe you just want to be a Sun purist and only use what they distribute. Whatever the reason for needing to work with the Solaris 10 distro packages, what can you do to sort them all out?  I know that pkg is coming someday.  I use it with OpenSolaris and so far I like it.  But for Solaris 10 it's not available.  And you might even want to keep using pkgadd and friends on OpenSolaris.  So what now?

Glenn Brunette’s contribution to taking away some of the difficulty of this is something I consider a God-send and you (if you’re one of the rest of the over 4500 who already downloaded the SPC) may too. It lets you see what packages are in which of Sun’s install levels (Metaclusters), has some advanced dependency querying, and then some.  I found it to be really useful.  I had a project where I needed to build as minimal a system as I could, but have particular utilities for some of the tasks our servers do. It got painful trying to do this by hand, and even with Glen’s cool tool I still had to keep track of which packages I had already run down the dependencies on, aggregate my lists hopefully in the right order, and then make attempt after attempt at snapshotting, installing the packages, and fixing the order, which got messy.  So I went ahead and wrote a wrapper for Glen’s script that took the fuss out of all that.  I made it a wrapper since SPC gave me a good tool to work with, but I wanted even more. Between what the Solaris Package Companion provides natively and what the wrapper extends it to do, you can now:

  • find out, given a particular install level, what you will need to add to it to support package X
  • given an install level and a seed list of other packages that you know will be installed, find out what you need to satisfy dependencies for package Y
  • output this list in an ordering that should fulfill all the dependencies as it goes, so you can just feed it to pkgadd and watch the magic unfold (sketched below)
  • resolve dependencies for packages from the Companion disk (those SFW packages that have a lot of the GNU goodness you are used to)
  • turn on verbosity and watch ASCII text scroll by voluminously (ooohhh, text…)

The script is in all likelihood not perfect, but I tried to do a good job, I had fun (it's recursive), discovered a bug with particular Solaris packages, and increased my understanding of Bash scripting and Unix tempfile-fu in the process. I'm publishing the script here so that anyone who wants something like this can take it for a spin and see if it helps out. I'd also appreciate any feedback you care to offer. I of course offer it with no warranty or implied liability for what you do with it, or what it does to you. Thanks for checking it out, and may your SVR4 package management be less of a headache.

To use it, you will have to follow the instructions on the SPC page to initialize your repo (I believe my script expects a repo based on the packages on the distribution media, not just what you happened to have installed), and set up the variables at the beginning that tell the script where you put the Solaris Package Companion script, where its repo is, and, if you want to check SFW packages, where the SFW distribution CD is mounted, with a path to the packages directory for your architecture.  Then run it with no options to see the usage statement, experiment, and hopefully use it to your advantage to generate minimal system package lists, or to figure out what packages you need to install Oracle 11g R2 on that swanky new Solaris 10 install you just set up…
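
Once pkgrez has produced its ordered list, feeding it to pkgadd is the easy part. A sketch (the media path is an example; -n keeps pkgadd from prompting):

# assume pkgrez wrote a dependency-ordered package list, one name per line
while read -r pkg; do
  pkgadd -n -d /cdrom/cdrom0/Solaris_10/Product "$pkg"
done < pkglist.txt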

Where can you find it?  I put it up over at Google Code.  Version 0.1.2 is available for download, and also in a Mercurial repo.  Linked here: pkgrez.  Enjoy!

Future plans:

  • add capability to create a repo for spc to use so that you can just drop in this and spc, mount your iso or insert it in the drive, and go
  • add the ability to switch between various releases of Solaris 10; thus far I've only used it with u7 and u8

Mike
