Computing


If you need to visualise molecular structures, RasMol is a venerable program that should run on multiple platforms. I was recently asked to help get it working on modern Mac OSX. There are instructions on the RasMol website, but not for modern Mac OSX. Hence, these rough notes – offered in the hope they may help others…

RasMol uses the older X11 windowing system that is no longer provided as part of macOS, so we’ll install the open source XQuartz X11 system from XQuartz.org.
Download the .dmg (disk image file), open it, and run the installer. You’ll then need to log out and log in.

Then download RasMol from its SourceForge download area. You’ll need the file:
RasMol_2_7_5_3_i86_64_OSX_High_Sierra_19Dec18.tar.gz.

Once this has downloaded (into your Downloads folder), you’ll need to open a Terminal, and extract its contents, with the following commands. Note that MyMacBook$ is the prompt provided by the Terminal/OS:


MyMacBook$ cd Downloads
MyMacBook$ tar xzvf RasMol_2_7_5_3_i86_64_OSX_High_Sierra_19Dec18.tar.gz
(Many lines will scroll by)

We’ll put this extracted software somewhere a bit easier to get to:

MyMacBook$ mv RasMol_2_7_5_3_i86_64_OSX_High_Sierra_19Dec18 ~/Applications/RasMol_2_7_5_3

(Now it’s in your personal Applications folder).

We need a launch script, since the one that comes with the software doesn’t seem to work: it can’t find XQuartz’s libraries. So, from the terminal:

MyMacBook$ cd ~/Applications/RasMol_2_7_5_3
MyMacBook$ nano run-rasmol.sh

This puts you into the ‘nano’ text editor, then you must copy and paste:


#!/bin/bash
export DYLD_LIBRARY_PATH=$DYLD_LIBRARY_PATH:/opt/X11/lib
./rasmol_XFORMS_32BIT

Then press Control-X, and press Y then return to save the file. Then:


MyMacBook$ chmod 755 run-rasmol.sh

OK, nearly there.

Only kidding 🙂

Let’s create an alias to let you run RasMol…


MyMacBook$ cd (then press return)
MyMacBook$ nano .bashrc

Again, you’re in the nano editor, so copy and paste this:


#!/bin/bash
alias rasmol='cd ~/Applications/RasMol_2_7_5_3; ./run-rasmol.sh'

Then Control-X, and Y then return to save the file.
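You can check the alias is defined by reloading the file in your current shell – this assumes bash, the default shell here, and the standard output format of bash’s type builtin:

MyMacBook$ source ~/.bashrc
MyMacBook$ type rasmol
rasmol is aliased to `cd ~/Applications/RasMol_2_7_5_3; ./run-rasmol.sh'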

Righty, let’s run XQuartz (via Spotlight [Command-Space]). After a few seconds, you’ll see an X11 terminal window (xterm) appear. This is different from the usual Mac Terminal. You won’t be able to run RasMol from the Mac Terminal: you must use XQuartz, and in the xterm, type:


bash-3.2$ rasmol (then press return)

Then you’ll see the beautiful RasMol window. It’s very different from what you’re used to on macOS, but this is how we used to use graphical programs back in the 80s.

The main RasMol window has its own menu – it’s not in the top menu bar like ‘normal’ Mac programs.

When you close XQuartz, you close ALL the X11 programs you’re running – xterm, rasmol, etc.

To open a file in RasMol, use the File menu, then Open…, then use the old-style file dialog to navigate using the ‘..’ (Parent Folder) directories to find where you’ve stored your RasMol files.
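If you don’t have a molecule file to hand, you can fetch a small test structure from the RCSB Protein Data Bank with curl – the URL below is the RCSB’s standard download service, and 1CRN (crambin) is a conveniently small example:

MyMacBook$ curl -O https://files.rcsb.org/download/1CRN.pdb

You can then open 1CRN.pdb via that File/Open… dialog.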


Since Feb/Mar 2018, I’ve been working on a new phase of one of my old projects: Parachute, a modern toolchain for programming the Transputer, and a Transputer Emulator – cross-platform for Mac OSX, Windows 10 and Linux.

The Transputer architecture is interesting since it was one of the first microprocessors to support multitasking in silicon without needing an operating system to handle task switching. Its high level language, occam, was designed to facilitate safe concurrent programming. Conventional languages do not easily represent highly concurrent programs as their design assumes sequential execution. Java has a memory model and some facilities (monitors, locks, etc.) to make parallel programming possible, but is not inherently concurrent, and reasoning about concurrent code in Java is hard. occam was the first language to be designed to explicitly support concurrent (in addition to sequential) execution, automatically providing communication and synchronisation between concurrent processes. If you’ve used go, you’ll find occam familiar: it’s based on the same foundation.

My first goal is to get a version of eForth running on my emulator (as I’ve long wanted to understand Forth’s internals). The eForth that exists is a) untested by its author and b) only buildable with MASM 6, which is hard to obtain (legally). I’m trying to make this project as open and cross-platform as possible, so first I had to write a MASM-like macro assembler for the Transputer instruction set. This is mostly done now, written in Scala, and just requires a little packaging work to run it on Mac OS X, Linux and Windows.

I’ve written up the history of this project at Parachute History, so won’t repeat myself here…

I’m not yet ready to release this, since it doesn’t build on Windows or Linux yet, and there are a few major elements missing. Getting it running on Windows will require a bit of porting; Linux should be a cinch.

Once I have a cross-platform build of the emulator, I intend to rewrite my host interface to be compatible with the standard iServer (what I have now is a homebrew experimental ‘getting started’ server).

There are quite a few instructions missing from my emulator – mostly the floating point subset, which will be a major undertaking.

The emulator handles all the instructions needed by eForth. eForth itself will need its I/O code modifying to work with an iServer.

Once eForth is running, I have plans for higher-level languages targeting the Transputer…

… but what I have now is:

… to be continued!

Ah, the optimism of the 1st January!

As I reflected on 2018, it became apparent that ‘starting, not finishing’ is a big problem, chez M0CUV. My muse bestows plenty of interesting ideas, but some of them are a bit ambitious. I start, then things grind to a halt. This, coupled with chronic procrastination, means a lot of churn, and a feeling of dissatisfaction, angst, and despair at not being able to find the time to do all this – or to prioritise better. A look back through the log shows a big gap of radio silence from June to October, a page of digital mode contacts, and not a single CW QSO throughout the whole year. On the software side, I hung up my own P2P stack after a baffling test failure wore me down. I do want to return to that… However, I spent most of my hobby development time working on the Parachute project, which, despite being really tricky, is going well. I never thought writing an assembler would be this hard. The devil is in the details.

So, after giving myself a good slap for a lacklustre radio year, 2019’s going to be goal-driven. Hopefully these goals are SMART (Specific/Stretching, Measurable/Meaningful, Agreed-upon/Achievable, Realistic/Rewarding and Time-based/Trackable). There are quite a few, and only the tech/radio-related ones are blogged about here. I’ve been using the Getting Things Done method for a while, and it stresses the importance of defining the Next Action on your activities…

So…

  • 1 blog post/month, at least. Progress updates on projects, etc.
  • 1 CW QSO/month, or 12 in the whole year. This’ll probably be really tough.
  • 1 QSLable QSO/month, or 12 in the whole year, any mode/band. FT8 makes this much easier!
  • Try to contact each month’s special callsign for the Bulgarian Saints award from Bulgarian Club Blagovestnik. I’ve already bagged January’s, LZ1354PM via SSB on 1st Jan 🙂
  • Take the next step on the magnetic loop project: build the frame, house the capacitor. I bought some wood and a food container for this yesterday.
  • Box up the 30m QCX transceiver properly – then use it.
  • Keep up with magazines as they arrive rather than building a pile of them.
  • Fix the current bizarre bug in the Transputer assembler – then ship the first release on Windows, macOS and Ubuntu/Raspbian
  • Convert the Parachute Node server to use the IServer protocol – then write the IO code for eForth.
  • Build a web application with elm. I’m thinking of a web-front-end to WSJT-X, to allow me to operate remotely.

Let’s see how I get on…!

Abstract: Oracle is shutting down Kenai and Java.net on April 28, 2017, and as one of the open source projects I’m a member of was formerly hosted there, we needed to move away. This move comprises source code and mailing lists; this post concerns the former, and is a rough note on how I’m migrating svn repositories to git (hosted on github), with the method, and a script that makes it easier.

It’s an ideal time to migrate your subversion repositories to somewhere else, and since git/github are the de facto/fashion standards these days, you may want to consider converting your entire history to git, and hosting it at github. (Disclaimer, I prefer mercurial and bitbucket, and some of the below could be used there too…)

To convert a svn repo to git, and push to github, there are several stages. I did this on OS X – I’d recommend doing it on some form of UNIX, but you may get results with Windows too.

I’m going to use svn2git, so let’s get that installed:

Following the installation instructions at https://github.com/nirvdrum/svn2git

I made sure I had ruby, ruby gems, svn, git, and git-svn installed (these package names might not be precise for your system; they’re from memory).
Then I can check that git-svn is available:


$ cd /tmp
$ mkdir foo
$ cd foo
$ git init
$ git svn
git-svn - bidirectional operations between a single Subversion tree and git
usage: git svn [options] [arguments]
… blah blah …

So,

$ sudo gem install svn2git

The conversion from svn to git makes a lot of network requests to the svn repo, so to reduce this, let’s “clone” the svn repo onto the local system.
Mostly following the notes at https://journal.paul.querna.org/articles/2006/09/14/using-svnsync/, first, initialise a local repo:


$ mkdir svnconversion
$ cd svnconversion
$ svnadmin create svnrepo

Note that git’s svn code expects the subversion repo it converts to have a filesystem format version between 1 and 4, that is, up to Subversion 1.6. So if you have a version of the svn client that’s more recent than that, you’ll have to use the command:


$ svnadmin create --compatible-version 1.6 svnrepo

(see http://svnbook.red-bean.com/nightly/en/svn.reposadmin.create.html for details)
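If you want to double-check what you’ve got, the repository’s filesystem format number is the first line of db/format – for a 1.6-compatible repo it should be 4, matching the 1–4 range git’s svn code accepts:

$ head -1 svnrepo/db/format
4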


$ ls -l svnrepo
total 16
-rw-r--r-- 1 matt staff 246 1 Sep 22:58 README.txt
drwxr-xr-x 6 matt staff 204 1 Sep 22:58 conf
drwxr-sr-x 15 matt staff 510 1 Sep 22:58 db
-r--r--r-- 1 matt staff 2 1 Sep 22:58 format
drwxr-xr-x 12 matt staff 408 1 Sep 22:58 hooks
drwxr-xr-x 4 matt staff 136 1 Sep 22:58 locks

$ cd svnrepo

Now create the pre-revprop-change hook:

$ echo '#!/bin/sh' > hooks/pre-revprop-change
$ chmod 755 hooks/pre-revprop-change

Let’s prepare to sync the svn repo here:


$ svnsync init file:///tmp/svnconversion/svnrepo https://svn.java.net/svn/name-of-remote-svn-repo

Now let’s do the actual sync. This is what takes the time on large repositories…


$ svnsync --non-interactive sync file:///tmp/svnconversion/svnrepo
# Make tea…
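If the sync is interrupted (network trouble, Ctrl-C), svnsync can leave a lock property behind on revision 0 and refuse to resume; it can be cleared like this – the conversion script later in this post does exactly this before each sync:

$ svn propdel svn:sync-lock --revprop -r 0 file:///tmp/svnconversion/svnrepo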

OK, now we have the “clone” of the svn repo, so let’s convert it to git. The first thing you’ll need is an author mapping file. This converts the short author names used in svn commits into the longer “Name <email@address>” form used by git.

Note there are many possible structures for svn repos, with the ‘standard’ layout having branches/tags/trunk. This page assumes that your svn repo looks like that. If it doesn’t, then see https://github.com/nirvdrum/svn2git where there are many possibilities documented to aid your conversion.

See the svn2git github page for details of how to create this authors.txt file.
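The format is one svn username per line, mapped to a git-style identity – an illustrative entry (the name here is invented):

jrandom = J. Random Hacker <jrandom@example.com>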

Converting to git is as simple as:

$ cd /tmp/svnconversion
$ mkdir gitrepo
$ cd gitrepo
$ svn2git --authors ../authors.txt file:///tmp/svnconversion/svnrepo

Then create a new repository using the GitHub web UI, add it as a remote, and push, mirroring all branches to the remote:


$ git remote add origin https://github.com/your-organisation/your-repo-name.git
$ git push --mirror origin

The following is a script I wrote to make it easier to perform the above steps repeatedly, as I had several repositories to convert. It assumes you have exported the GITORGANISATION environment variable.

#!/bin/bash


usage() {
	echo "svn-to-git-conversion [syncsetup|sync|convert|push] http://url/of/svn/repository local-repo-dir-prefix ~/path/to/authors"
	exit 1
}

PHASE="$1"
# syncsetup, sync, convert or push

SVNURL="$2"
# https://svn.java.net/svn/jxta-c~svn

LOCALREPODIRPREFIX="$3"
SVNREPONAME=${LOCALREPODIRPREFIX}-svn
GITREPONAME=${LOCALREPODIRPREFIX}-git
# prefix of relative folder (eg jxta-c) where repository will be svnsynced to eg jxta-c-svn
# and of relative folder where repository will be converted eg jxta-c-git

AUTHORS="$4"
# path to author mapping file

if [ "$PHASE" != "syncsetup" -a "$PHASE" != "sync" -a "$PHASE" != "convert" -a "$PHASE" != "push" ]
then
	usage
	exit
fi

SVNREPOFILEURL="file://`pwd`/$SVNREPONAME"
echo local svn repository url is $SVNREPOFILEURL

if [ "$PHASE" = "syncsetup" ]
then
	svnadmin create --compatible-version 1.6 $SVNREPONAME
	echo '#!/bin/sh' > $SVNREPONAME/hooks/pre-revprop-change
	chmod 755 $SVNREPONAME/hooks/pre-revprop-change
	svnsync init $SVNREPOFILEURL $SVNURL
fi 

if [ "$PHASE" = "sync" ]
then
	svn propdel svn:sync-lock --revprop -r 0 $SVNREPOFILEURL
	svnsync --non-interactive sync $SVNREPOFILEURL
	echo Users in the SVN repository to be added to the $AUTHORS file:
	svn log --quiet $SVNREPOFILEURL | grep -E "r[0-9]+ \| .+ \|" | cut -d'|' -f2 | sed 's/ //g' | sort | uniq
	echo Top-level structure of the SVN repository: 
	svn ls $SVNREPOFILEURL
fi

if [ "$PHASE" = "convert" ]
then
	mkdir $GITREPONAME
	cd $GITREPONAME
	svn2git --authors $AUTHORS $SVNREPOFILEURL
fi

if [ "$PHASE" = "push" ]
then
	cd $GITREPONAME
	git remote add origin https://github.com/$GITORGANISATION/$GITREPONAME.git
	git push --mirror origin
fi
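For the curious, a full conversion of one repository runs the four phases in order, something like this – the repository name is illustrative (it’s the example from the comments above):

$ export GITORGANISATION=my-github-organisation
$ ./svn-to-git-conversion syncsetup https://svn.java.net/svn/jxta-c~svn jxta-c ~/authors.txt
$ ./svn-to-git-conversion sync https://svn.java.net/svn/jxta-c~svn jxta-c ~/authors.txt
$ ./svn-to-git-conversion convert https://svn.java.net/svn/jxta-c~svn jxta-c ~/authors.txt
$ ./svn-to-git-conversion push https://svn.java.net/svn/jxta-c~svn jxta-c ~/authors.txt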

Some interesting features of cifs-utils in CentOS 7 make mounting Windows shares harder than I’d like, and they need documenting before the next time I (or you) run into them…. I had to read the source for mount.cifs to uncover this, and examine the entrails of mount.cifs with strace… so I hope this helps….

The actual problem I’m having is that I’m trying to mount CIFS shares as non-root users: this appears to be very hard in CentOS 7, but was really easy in earlier versions. Along the way to discovering this, I found many suboptimalities that I thought might be useful to fellow travellers….

You can add an entry to /etc/fstab giving the default settings for the mount point, or specify them on the command line – it’s not mandatory.
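An illustrative fstab entry (server, share and paths made up; noauto stops it mounting at boot):

//SERVER/SHARE  /mount/point  cifs  credentials=/root/credentials.txt,uid=1002,gid=1002,noauto  0 0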

Normally you’d need to give the username, password, and (windows) domain to the mount command, and these are delegated to mount.cifs. However, supplying the password on the command line is not wise from a security perspective, since it’ll be visible via ps. So, you can store the credentials in a file, with appropriate permissions, then give this to mount.cifs. Except that it doesn’t quite work as I expected….

With the credentials.txt file containing:
username=windowsdomain\\windowsusername
domain=windowsdomain
password=windowspassword

(Note, I’ve seen some posts suggesting that the windows domain be prepended to the username as above, and as I’ll show below, this causes problems…)

I used the command:

mount -t cifs //SERVER/SHARE /mount/point -v -o credentials=credentials.txt,uid=1002,gid=1002

With the appropriate Linux UID/GID that should own the files thusly mounted (this is my ‘backupuser’ user). It didn’t work. The first problem was the error:

Credential formatted incorrectly: (null)

… which is code for ‘you have a blank line in your credentials.txt file’. Removing the blank lines, I then got:

mount error(13): Permission denied

I checked permissions on the credentials.txt (0440), the mount point, etc., etc… no, it’s not that. It’s parsing the credentials.txt, but seems not to get the username from it. If you give it a gentle hint with:

mount -t cifs //SERVER/SHARE /mount/point -v -o credentials=credentials.txt,uid=1002,gid=1002,username=windowsusername

It works!

Now, if your credentials.txt has the username without the domain, like:

username=windowsusername
domain=windowsdomain
password=windowspassword

You do not need to give the username when calling mount, so this works:
mount -t cifs //SERVER/SHARE /mount/point -v -o credentials=credentials.txt,uid=1002,gid=1002

So, the rules for credentials files are:

  • No blank lines
  • Don’t add the domain to the username

But as for enabling non-root/suid/user mounts of CIFS shares… setting the suid bit (chmod u+s /sbin/mount.cifs), adding ‘user’ to an /etc/fstab entry, and running it gives the very helpful “mount error(22): invalid argument”. I’ve tried everything I can think of, but it just appears to be something that is no longer possible in CentOS 7.

To get this working requires adding an entry to the /etc/sudoers, like this:


backupuser ALL=NOPASSWD: /sbin/mount.cifs *, /bin/mount *, /bin/umount

Then mount using sudo, along the lines shown below…. I’m not happy about having to jump through these hoops, but there you go…
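For completeness, this is the sort of command backupuser ends up running (same illustrative server, share and options as before):

sudo mount -t cifs //SERVER/SHARE /mount/point -v -o credentials=credentials.txt,uid=1002,gid=1002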

In part 2 of this series, I described the construction of the HF antenna analyser project I’m building, from Beric Dunn’s schematics and Arduino firmware. In this article, I’ll finish some small items of construction, and look at testing and driving the analyser. All resources, pictures and files for this project are available from the project GitHub repository, with driver software available from the driver GitHub repository.

Errata

The Scan LED wasn’t working, and this was because R12 was too large, so I replaced it with a 1 kΩ resistor. Sorted. Also, the SIL headers I’d ordered originally were too small for the pins of the Arduino Micro and DDS module. It took some time to locate suitable replacements, and find a supplier who wasn’t going to charge me £4.95 just for placing an order as a private (hobbyist) customer. Fortunately, I discovered Proto-Pic, a UK supplier that could provide 10-pin and 6-pin SIL headers. I ordered 2 × 10-pin Stackable Arduino Headers (PPPRT-11376) and 6 × 6-pin Stackable Arduino Headers (PPPRT-09280) for £4.78 including P&P. When fitting the 6-pin headers for the Arduino Micro (three per side), you may find that they are quite tight together, so sand down the inner edges a little. The Arduino Micro was still quite a tight fit, but it’s far more secure than it was.

Boxing it up

I cut a few more tracks on the veroboard near the mounting holes so that the metal spacers and screws I found in my spares box wouldn’t short anything out, then started fitting the board into the enclosure, cutting holes as appropriate. I added a switch into the power line… the result looks like this:

And when the Letraset goes on:

Software, Firmware

I’ve made a few changes to Beric’s original firmware (see here), but will keep the commands and output format compatible, so if you’re driving my modified firmware with Beric’s Windows driver, everything should still work.

I use Windows 2000 on an old laptop in the Shack: I couldn’t get it working with the Arduino drivers, so I couldn’t use Beric’s Windows driver software. I needed a Linux or Mac OSX solution, so started writing a Scala GUI driver that would run on Mac, Windows or Linux, and have got this to the point where I need to add serial drivers like RxTx, getting the native libraries packaged, etc., etc.

However, that’s on hold, since I was contacted by Simon Kennedy G0FCU, who reports that he’s built an analyser from my layout which worked first time!! He’s running on Linux, and has passed the transformed scan output into gnuplot to yield a nice graph. I hadn’t considered gnuplot, and the results look far better than I could write quickly.

So, I reused the code I wrote several years ago for serial line/data monitoring, and wrote an analyser driver in C that produces nice graphs via gnuplot. So far it builds on Mac OSX. In the near future I’ll provide downloadable packages for Debian/Ubuntu/Mint, Red Hat/CentOS and hopefully Raspberry Pi.

Testing

The analyser as it stands is not without problems – the first frequency set during a scan usually reports a very high SWR – I don’t think the setting of the DDS frequency after a reset is working reliably. From looking at the DDS data sheet timing diagrams, short delays are needed after resetting, and updating the frequency – these are not in the current firmware…

Also, repeated scans tend to show quite different plots – however, there are points in these repeated plots that are similar, hopefully indicating the resonant frequencies.

Beric mentioned (on the k6bez_projects Yahoo! group) that “With the low powers being used, and the germanium diodes being used, it makes sense to take the square of the detected voltages before calculating the VSWR.”…

Simon pointed out that “the variable VSWR is defined as a double. This means that when REV >= FWD and VSWR is set to 999 it causes an overflow in the println command that multiplies VSWR by 1000 and turns it into an int. Making VSWR a long should fix this.” He also suggested some other changes to the VSWR calculation…

… these are changes I’m testing, and hope to commit soon.

I’ll add some options to the software/firmware to plot the detector voltages over time for a few seconds – an oscilloscope probing the FWD/REV detector output shows some digital noise. I had added an LED-fading effect to show that the board is active, and this exacerbates the noise. This noise makes it through to the VSWR measurement. I’ll try taking the mode of several measurements… Once the DDS is generating the relevant frequency, I’m expecting these voltages to be perfectly stable.

I’m investigating these issues, and hope to resolve them in software/firmware – I hope no changes are needed to the hardware to fix the problems I’m seeing, but can’t rule out shielding the DDS, and/or using shielded cable for the FWD/REV connections between the op-amp and Arduino Micro.

In the next article, I’ll show how to drive the analyser with the driver software, and hopefully resolve the noise issue.

Will M0CUV actually find the resonant frequency of his loft-based 20m dog-leg dipole made from speaker wire? Will the analyser show the tight bandwidth of the 80m loop? Stay tuned! (groan)

73 de Matt M0CUV

I’ve recently been building a small set of CentOS server virtual machines with various settings preconfigured, and packages preinstalled. These were built from the ‘minimal’ CentOS-6.5-x86_64-minimal.iso distribution, as you don’t need a GUI to administer a Linux server. Initially these VMs were built manually, following a build document, but after several additions to the VMs, and documenting these updates in the build document, I decided to automate the whole process. This post describes how I achieved this – I had some problems, hope this helps…

UPDATED: The need to specify an IP address for the remote_host property has been fixed in Packer’s GitHub repo, and should be in a release coming soon!

I decided to use Mitchell Hashimoto’s excellent Packer system. I’m running it on an Ubuntu Linux 12.04 desktop VM. Eventually this will be changed to run under Jenkins, so that changes to the configuration can be checked into source control, and the whole process can be fully automated. Until then, I’ve automated it using Windows 7 as my main system, with VMware Player 6.0.1 running the Ubuntu Linux desktop. I also have an instance of VMware ESXi 5.5.0 running under VMware Player. The Ubuntu VM with Packer creates the new CentOS VMs inside this Nested ESXi. If you haven’t seen the film Inception, now might be a good time to watch it…. Both the Ubuntu and ESXi VMs use bridged networking, and are on the same IP network.

On the ESXi system, I have:

  • installed the VMware Tools for Nested ESXi
  • configured remote SSH access and the ESXi Shell (under Troubleshooting Mode Options) – Packer currently requires SSH access to ESXi, rather than using VMware’s API; this may change in the future
  • enabled discovery of IP address information via ARP packet inspection. This is disabled by default, and is enabled by SSH using esxcli system settings advanced set -o /Net/GuestIPHack -i 1 (a quick way to verify this is shown after this list)
  • allowed Packer to connect to the VNC session of the VM being built, so that it can provide the early boot commands to the CentOS installer (specifically, giving KickStart a specific configuration file, served by a small web server – more on this later). To enable VNC access, I used the vSphere client to visit the server’s Configuration/Security Profile settings, and under Firewall/Properties…, enabled gdbserver (which enables ports in the range VNC requires, 5900 etc.) and also SSH Client and SSH Server (I forget some of the other things I tried… sorry!)
  • configured a datastore called ‘vmdatastore’ which is where I want Packer to build the VMs.
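To verify the GuestIPHack setting stuck, you can read it back over the same SSH session – this is standard esxcli usage rather than anything Packer-specific:

esxcli system settings advanced list -o /Net/GuestIPHack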

On the Ubuntu system, I have a directory containing:

  • The CentOS minimal .ISO
  • A Kickstart file. This was taken from a manual installation’s anaconda-ks.cfg, and modified using a CentOS desktop’s KickStart Configuration tool. See below for its contents.
  • The Packer .JSON script. See below.
  • A script to launch a webserver to serve this directory – Packer needs to get the .ISO and KickStart file over the network, and this is how it’s served. Nothing complex: python has a simple one-line server which I use here.
  • A script to run packer.
  • A script to run on the built VM after the OS has been installed. This isn’t the hard part, so this just echoes something: in reality, this installs the packages I need, configures all kinds of stuff.

So let’s see some scripts. They are all in my ~/packertemplatebuilding directory. The Ubuntu desktop VM’s IP address is 192.168.0.1, and the ESXi VM’s IP address is 192.168.0.2; root SSH access to ESXi is used, and the password is ‘rootpassword’. (Of course these are not the real settings!)

The webserver launching script:

#!/bin/sh
python -m SimpleHTTPServer &
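Before starting a build, it’s worth checking the files are actually reachable over the network – SimpleHTTPServer listens on port 8000 by default, and a HEAD request avoids downloading the .ISO:

$ curl -I http://192.168.0.1:8000/ks.cfg
$ curl -I http://192.168.0.1:8000/CentOS-6.5-x86_64-minimal.iso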

The Packer launch script:

#!/bin/sh
# export PACKER_LOG=enable
packer build base-packer.json

The Packer script – one of the problems I had was that the IP addresses you see in here were initially given as hostnames, and set in DNS. This didn’t work, as Packer (0.5.1) is using Go’s net.ParseIP(string-ip-addr) on the remote_host setting, which yielded the error “Unable to determine Host IP”. Using IP addresses isn’t ideal, but works for me:

{
  "builders": [
    {
      "type": "vmware-iso",
      "iso_url": "http://192.168.0.1:8000/CentOS-6.5-x86_64-minimal.iso",
      "iso_checksum": "0d9dc37b5dd4befa1c440d2174e88a87",
      "iso_checksum_type": "md5",
      "disk_size": "10240",
      "disk_type_id": "thin",
      "http_directory": "~/packertemplatebuilding",
      "remote_host": "192.168.0.2",
      "remote_datastore": "vmdatastore",
      "remote_username": "root",
      "remote_password": "rootpassword",
      "remote_type": "esx5",
      "ssh_username": "root",
      "ssh_password": "rootpassword",
      "ssh_port": 22,
      "ssh_wait_timeout": "250s",
      "shutdown_command": "shutdown -h now",
      "headless": "false",
      "boot_command": [
        "<tab> text ks=http://192.168.0.1:8000/ks.cfg<enter><wait>"
      ],
      "boot_wait": "20s",
      "vmx_data": {
        "ethernet0.networkName": "VM Network",
        "memsize": "2048",
        "numvcpus": "2",
        "cpuid.coresPerSocket": "1",
        "ide0:0.fileName": "disk.vmdk",
        "ide0:0.present": "TRUE",
        "ide0:0.redo": "",
        "scsi0:0.present": "FALSE"
      }
    }
  ],
"provisioners": [
    {
      "type": "shell",
      "script": "ssh-commands.sh"
    }
  ]
}

Note that this need for IP addresses has been fixed and will be in a future Packer release.
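It’s also worth running the template through Packer’s validate command before kicking off a long build – it catches JSON typos and unknown keys cheaply (this was available in the Packer version I was using, as far as I recall):

$ packer validate base-packer.json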

The ssh-commands.sh script:

#!/bin/sh
echo Starting post-kickstart setup

And finally, the Kickstart file ks.cfg – note that the hashed value of the VM’s root password has been redacted. Use the Kickstart Configuration tool to set yours appropriately:

#platform=x86, AMD64, or Intel EM64T
#version=DEVEL
# Firewall configuration
firewall --enabled --ssh --service=ssh
# Install OS instead of upgrade
install
# Use CDROM installation media
cdrom

rootpw  --iscrypted insert-hashed-password-here
authconfig --enableshadow --passalgo=sha512

# System keyboard
keyboard uk
# System language
lang en_GB
# SELinux configuration
selinux --enforcing
# Do not configure the X Window System
skipx
# Installation logging level
logging --level=info

# Reboot after installation
reboot

# System timezone
timezone --isUtc Europe/London
# Network information
network  --bootproto=dhcp --device=eth0 --onboot=on
# System bootloader configuration
bootloader --append="crashkernel=auto rhgb quiet" --location=mbr --driveorder="sda"

# Partition clearing information
zerombr
clearpart --all  --drives=sda

# Disk partitioning information
part /boot --fstype="ext4" --size=500
part pv.008002 --grow --size=1
volgroup vg_centos --pesize=4096 pv.008002
logvol / --fstype=ext4 --name=lv_root --vgname=vg_centos --grow --size=1024 --maxsize=51200
logvol swap --name=lv_swap --vgname=vg_centos --grow --size=3072 --maxsize=3072

%packages --nobase
@core

%end
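If you’d rather generate the hashed root password by hand than via the Kickstart Configuration tool, one way – assuming a Linux box with Python 2 – is the crypt module; the $6$ prefix selects SHA-512, matching the --passalgo=sha512 line above (choose your own salt):

$ python -c 'import crypt; print crypt.crypt("yourpassword", "$6$somesalt")'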

And that’s it! You’ll have to adjust the timings of the various delays in the Packer .JSON file to match your system. Have the appropriate amount of fun!
