Open Source


I’ve just had a short break away from the constant pace of previous work – 16 months’ worth of near-daily updates – during which I caught up on the stack of magazines, articles and books I’d been meaning to get through. So I was away from tech for a while, hence no blog post last month.

The book stack never goes down – see Michael Simmons’ article at https://medium.com/accelerated-intelligence/the-5-hour-rule-if-youre-not-spending-5-hours-per-week-learning-you-re-being-irresponsible-791c3f18f5e6 – if I could take a yearly two-week reading vacation, I would… but back in the real world…

Now I’m ready to start on Parachute 0.0.2, the rework of the node server protocol to be iserver-compatible. This should mean that the emulator could work with other emulators’ iservers, and vice versa. However, the link emulation mechanism would need additional variants, to use the mechanisms used by other emulators – e.g. http://lcm-proj.github.io/, as used by Gavin Crate’s emulator (see https://sites.google.com/site/transputeremulator/Home/multiprocessor-jserver-support).

During this work, I’ll update the Hello World assembly program, and start upgrading the C++ code to C++11 as needed.


TL;DR: Frustration, but the end is in sight.

Parachute is composed of several separate projects, with independent versions, held in separate repositories:

  • the Transputer Emulator itself, written in C++, built using Maven/CMake/Make, which requires building and packaging on macOS, CentOS 7, Ubuntu 16.04 and 18.04, Raspbian Stretch, and Windows 10.
  • the Transputer Macro assembler, written in Scala, built using Maven, which requires building and packaging on macOS, Linux (one cross-platform build for all the above Linux variants), and Windows 10.
  • and eventually there will be the eForth build for Transputer, other languages, documentation, etc.

Getting all this to build has been quite the journey!

I use Maven as the overall build tool, since it gives me sane version management; build capability, via plugins, for all the languages I use; packaging; signing; and deployment to a central repository (I serve all build artefacts via Maven Central).

Each project’s build runs across a set of Jenkins instances, with the master on macOS and nodes on virtual machines plus a physical Raspberry Pi.

Each project deploys a single artefact per target OS, into Maven Central’s staging repository. So there are six build jobs, one on each node, that can sign and deploy on request.

The effect of this is that a single commit can trigger six build jobs for the C++ code, and three for the JVM-based code (since all Linux systems package the same scripts). Deployment is manually chosen at convenient points, with manual closing of the staging repository in Sonatype’s OSSRH service.

The manual deployment choices may be removed once all this is running smoothly. Since I cannot produce all platform-specific artefacts from a single Maven build, I cannot use the Maven Release Plugin.
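
As a rough sketch of what one of those per-node sign-and-deploy jobs boils down to (the profile name here is an assumption for illustration, not the project’s actual configuration), a Jenkins node effectively runs:

# build, sign and deploy this platform’s artefact to the OSSRH staging repository
$ mvn clean deploy -Psign-and-deploy

The staging repository this populates is then closed manually in the OSSRH web UI, as described above.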

Once the emulator and assembler are deployed for all their variants, there is a final build job that composes the Parachute distribution archives, signs them and deploys them to Maven Central via Sonatype OSSRH.

There have been several ‘gotchas’ along the way…

… the GPG signing plugin does not like being run on Jenkins nodes. It gets the config from the master (notably, the GPG home, from which it builds its paths to the various key files). So that had to be parameterised per-node.
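
Recent versions of the maven-gpg-plugin expose the GPG home directory as the gpg.homedir user property, so each node’s Jenkins job can pass its own value; a minimal sketch, with an assumed per-node path:

# point the signing plugin at this node’s keyring rather than the master’s
$ mvn verify -Dgpg.homedir=/home/jenkins/.gnupg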

… getting the latest build environments for C++ on each of the nodes. I’m not using a single version of a single compiler on everything: there’s a variety of clangs (from 3.5.0 to 8.0.0), plus the Microsoft Visual C++ Build Tools.

… Windows. It’s just a world of pain. Everything has to be different.

So this long ‘phase one’ is almost at an end, and I hope to ship the first build very soon.

It would be ‘fun’ to see if I can replicate all the above with a cloud-based build system instead of Jenkins + VMs. However, Windows, macOS and Raspberry Pi will be problematic. Travis CI does not have CentOS or Raspberry Pi hosts; Circle CI does not have Windows, CentOS or Raspberry Pi hosts (Windows is on their roadmap).

Since Feb/Mar 2018, I’ve been working on a new phase of one of my old projects: Parachute, a modern toolchain for programming the Transputer, and a Transputer Emulator – cross-platform for Mac OS X, Windows 10 and Linux.

The Transputer architecture is interesting since it was one of the first microprocessors to support multitasking in silicon, without needing an operating system to handle task switching. Its high-level language, occam, was designed to facilitate safe concurrent programming. Conventional languages do not easily represent highly concurrent programs, as their design assumes sequential execution. Java has a memory model and some facilities (monitors, locks, etc.) to make parallel programming possible, but is not inherently concurrent, and reasoning about concurrent Java code is hard. occam was the first language designed to explicitly support concurrent (in addition to sequential) execution, automatically providing communication and synchronisation between concurrent processes. If you’ve used Go, you’ll find occam familiar: both are built on the same foundation, Hoare’s Communicating Sequential Processes.

My first goal is to get a version of eForth running on my emulator (as I’ve long wanted to understand Forth’s internals). The eForth that exists is a) untested by its author and b) only buildable with MASM 6, which is hard to obtain (legally). I’m trying to make this project as open and cross-platform as possible, so first I had to write a MASM-like macro assembler for the Transputer instruction set. This is mostly done now, written in Scala, and just requires a little packaging work to run on Mac OS X, Linux and Windows.

I’ve written up the history of this project at Parachute History, so I won’t repeat myself here…

I’m not yet ready to release this, since it doesn’t build on Windows or Linux yet, and there are a few major elements missing. Getting it running on Windows will require a bit of porting; Linux should be a cinch.

Once I have a cross-platform build of the emulator, I intend to rewrite my host interface to be compatible with the standard iServer (what I have now is a homebrew experimental ‘getting started’ server).

There are quite a few instructions missing from my emulator – mostly the floating point subset, which will be a major undertaking.

The emulator handles all the instructions needed by eForth. eForth itself will need its I/O code modifying to work with an iServer.

Once eForth is running, I have plans for higher-level languages targeting the Transputer…

… but what I have now is:

… to be continued!

Abstract: Oracle is shutting down Kenai and Java.net on April 28, 2017, and as one of the open source projects I’m a member of was formerly hosted there, we needed to move away. This move comprises source code and mailing lists; this post concerns the former, and is a rough note on how I’m migrating svn repositories to git (hosted on github), with the method, and a script that makes it easier.

It’s an ideal time to migrate your Subversion repositories to somewhere else, and since git/GitHub are the de facto (and fashionable) standards these days, you may want to consider converting your entire history to git and hosting it on GitHub. (Disclaimer: I prefer Mercurial and Bitbucket, and some of the below could be used there too…)

To convert an svn repo to git and push it to GitHub, there are several stages. I did this on OS X; I’d recommend doing it on some form of UNIX, but you may get results on Windows too.

I’m going to use svn2git, so let’s get that installed:

Following the installation instructions at https://github.com/nirvdrum/svn2git, I made sure I had ruby, ruby gems, svn, git and git-svn installed (these package names might not be precise for your system; they’re from memory), so that I can do:


$ cd /tmp
$ mkdir foo
$ cd foo
$ git init
$ git svn
git-svn - bidirectional operations between a single Subversion tree and git
usage: git svn [options] [arguments]
… blah blah …

So,

$ sudo gem install svn2git

The conversion from svn to git makes many network accesses to the svn repo, so to reduce this, let’s “clone” the svn repo onto the local system.
Mostly following the notes at https://journal.paul.querna.org/articles/2006/09/14/using-svnsync/, first, initialise a local repo:


$ mkdir svnconversion
$ cd svnconversion
$ svnadmin create svnrepo

Note that git’s svn code expects the subversion repo it converts to have a filesystem format version between 1 and 4, that is, up to Subversion 1.6. So if you have a version of the svn client that’s more recent than that, you’ll have to use the command:


$ svnadmin create --compatible-version 1.6 svnrepo

(see http://svnbook.red-bean.com/nightly/en/svn.reposadmin.create.html for details)


$ ls -l svnrepo
total 16
-rw-r--r-- 1 matt staff 246 1 Sep 22:58 README.txt
drwxr-xr-x 6 matt staff 204 1 Sep 22:58 conf
drwxr-sr-x 15 matt staff 510 1 Sep 22:58 db
-r--r--r-- 1 matt staff 2 1 Sep 22:58 format
drwxr-xr-x 12 matt staff 408 1 Sep 22:58 hooks
drwxr-xr-x 4 matt staff 136 1 Sep 22:58 locks

$ cd svnrepo

Now create the pre-revprop-change hook:

$ echo '#!/bin/sh' > hooks/pre-revprop-change
$ chmod 755 hooks/pre-revprop-change

Let’s prepare to sync the svn repo here:


$ svnsync init file:///tmp/svnconversion/svnrepo https://svn.java.net/svn/name-of-remote-svn-repo

Now let’s do the actual sync. This is what takes the time on large repositories…


$ svnsync --non-interactive sync file:///tmp/svnconversion/svnrepo
# Make tea…

OK, now we have the “clone” of the svn repo, so let’s convert it to git. The first thing you’ll need is an author mapping file. This converts the short author names used in svn commits into the longer “Name <email>” form used by git.

Note there are many possible structures for svn repos, with the ‘standard’ layout having branches/tags/trunk. This page assumes that your svn repo looks like that. If it doesn’t, then see https://github.com/nirvdrum/svn2git where there are many possibilities documented to aid your conversion.

See the svn2git github page for details of how to create this authors.txt file.
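
For reference, each line of the authors file maps an svn username to a full git author identity; a minimal example (the usernames, names and addresses here are made up):

$ cat ../authors.txt
jsmith = John Smith <jsmith@example.com>
adeveloper = A Developer <adeveloper@example.com>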

Converting to git is as simple as:

$ cd /tmp/svnconversion
$ mkdir gitrepo
$ cd gitrepo
$ svn2git --authors ../authors.txt file:///tmp/svnconversion/svnrepo

Then create a new repository using the GitHub web UI, add it as a remote, and push, mirroring all branches to the remote:


$ git remote add origin https://github.com/your-repo-name.git
$ git push --mirror origin

The following is a script I wrote to make it easier to perform the above steps repeatedly, as I had several repositories to convert. It assumes you have exported the GITORGANISATION environment variable (your GitHub organisation or user name).

#!/bin/bash


usage() {
	echo "svn-to-git-conversion [syncsetup|sync|convert|push] http://url/of/svn/repository local-repo-dir-prefix ~/path/to/authors"
	exit 1
}

PHASE="$1"
# syncsetup, sync, convert or push

SVNURL="$2"
# https://svn.java.net/svn/jxta-c~svn

LOCALREPODIRPREFIX="$3"
SVNREPONAME=${LOCALREPODIRPREFIX}-svn
GITREPONAME=${LOCALREPODIRPREFIX}-git
# prefix of relative folder (eg jxta-c) where repository will be svnsynced to eg jxta-c-svn
# and of relative folder where repository will be converted eg jxta-c-git

AUTHORS="$4"
# path to author mapping file

if [ "$PHASE" != "syncsetup" -a "$PHASE" != "sync" -a "$PHASE" != "convert" -a "$PHASE" != "push" ]
then
	usage
	exit
fi

SVNREPOFILEURL="file://`pwd`/$SVNREPONAME"
echo local svn repository url is $SVNREPOFILEURL

if [ "$PHASE" = "syncsetup" ]
then
	svnadmin create --compatible-version 1.6 $SVNREPONAME
	echo '#!/bin/sh' > $SVNREPONAME/hooks/pre-revprop-change
	chmod 755 $SVNREPONAME/hooks/pre-revprop-change
	svnsync init $SVNREPOFILEURL $SVNURL
fi 

if [ "$PHASE" = "sync" ]
then
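	# Clear any stale svn:sync-lock left behind by a previously interrupted sync.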
	svn propdel svn:sync-lock --revprop -r 0 $SVNREPOFILEURL
	svnsync --non-interactive sync $SVNREPOFILEURL
	echo Users in the SVN repository to be added to the $AUTHORS file:
	svn log --quiet $SVNREPOFILEURL | grep -E "r[0-9]+ \| .+ \|" | cut -d'|' -f2 | sed 's/ //g' | sort | uniq
	echo Top-level structure of the SVN repository: 
	svn ls $SVNREPOFILEURL
fi

if [ "$PHASE" = "convert" ]
then
	mkdir $GITREPONAME
	cd $GITREPONAME
	svn2git --authors $AUTHORS $SVNREPOFILEURL
fi

if [ "$PHASE" = "push" ]
then
	cd $GITREPONAME
	git remote add origin https://github.com/$GITORGANISATION/$GITREPONAME.git
	git push --mirror origin
fi
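
As a usage example, assuming the script above is saved as svn-to-git-conversion and made executable (the repository URL below is the jxta-c example from the script’s comments), converting one repository means running the four phases in order, from the same working directory each time, since the local svn “clone” URL is built from the current directory:

$ export GITORGANISATION=my-github-organisation
$ ./svn-to-git-conversion syncsetup https://svn.java.net/svn/jxta-c~svn jxta-c ~/authors.txt
$ ./svn-to-git-conversion sync https://svn.java.net/svn/jxta-c~svn jxta-c ~/authors.txt
$ ./svn-to-git-conversion convert https://svn.java.net/svn/jxta-c~svn jxta-c ~/authors.txt
$ ./svn-to-git-conversion push https://svn.java.net/svn/jxta-c~svn jxta-c ~/authors.txt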

In part 2 of this series, I described the construction of the HF antenna analyser project I’m building, from Beric Dunn’s schematics and Arduino firmware. In this article, I’ll finish some small items of construction, and look at testing and driving the analyser. All resources, pictures and files for this project are available from the project GitHub repository, with driver software available from the driver GitHub repository.

Errata

The Scan LED wasn’t working because R12 was too large, so I replaced it with a 1 kΩ resistor. Sorted. Also, the SIL headers I’d ordered originally were too small for the pins of the Arduino Micro and DDS module. It took some time to locate suitable replacements, and to find a supplier who wasn’t going to charge me £4.95 just for placing an order as a private (hobbyist) customer. Fortunately, I discovered Proto-Pic, a UK supplier that could provide 10-pin and 6-pin SIL headers. I ordered 2 × 10-pin Stackable Arduino Headers (PPPRT-11376) and 6 × 6-pin Stackable Arduino Headers (PPPRT-09280) for £4.78 including P&P. When fitting the 6-pin headers for the Arduino Micro (three per side), you may find that they sit quite tightly together, so sand down the inner edges a little. The Arduino Micro was still quite a tight fit, but it’s far more secure than it was.

Boxing it up

I cut a few more tracks on the veroboard near the mounting holes so that the metal spacers and screws I found in my spares box wouldn’t short anything out, then started fitting the board into the enclosure, cutting holes as appropriate. I added a switch into the power line… the result looks like this:

And when the LetraSet goes on:

Software, Firmware

I’ve made a few changes to Beric’s original firmware (see here), but will keep the commands and output format compatible, so if you’re driving my modified firmware with Beric’s Windows driver, everything should still work.

I use Windows 2000 on an old laptop in the Shack: I couldn’t get it working with the Arduino drivers, so I couldn’t use Beric’s Windows driver software. I needed a Linux or Mac OS X solution, so I started writing a Scala GUI driver that would run on Mac, Windows or Linux, and have got this to the point where I need to add a serial library such as RxTx, get the native libraries packaged, and so on.

However, that’s on hold, since I was contacted by Simon Kennedy G0FCU, who reports that he’s built an analyser from my layout which worked first time! He’s running on Linux, and has passed the transformed scan output into gnuplot to yield a nice graph. I hadn’t considered gnuplot, and the results look far better than anything I could write quickly.
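
The gnuplot side of that can be very simple. Assuming a two-column file of frequency and VSWR (the file name and column layout here are just for illustration), something like this produces a usable plot:

$ gnuplot -e "set xlabel 'Frequency (Hz)'; set ylabel 'VSWR'; plot 'scan.dat' using 1:2 with lines; pause -1"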

So, I reused the code I wrote several years ago for serial line/data monitoring, and wrote an analyser driver in C that produces nice graphs via gnuplot. So far it builds on Mac OSX. In the near future I’ll provide downloadable packages for Debian/Ubuntu/Mint, Red Hat/CentOS and hopefully Raspberry Pi.

Testing

The analyser as it stands is not without problems: the first frequency set during a scan usually reports a very high SWR, and I don’t think the setting of the DDS frequency after a reset is working reliably. From looking at the DDS data sheet timing diagrams, short delays are needed after resetting and after updating the frequency – these are not in the current firmware…

Also, repeated scans tend to show quite different plots – however, there are points in these repeated plots that are similar, hopefully indicating the resonant frequencies.

Beric mentioned (on the k6bez_projects Yahoo! group) that “With the low powers being used, and the germanium diodes being used, it makes sense to take the square of the detected voltages before calculating the VSWR.”…

Simon pointed out that “the variable VSWR is defined as a double. This means that when REV >= FWD and VSWR is set to 999 it causes an overflow in the println command that multiplies VSWR by 1000 and turns it into an int. Making VSWR a long should fix this.” He also suggested some other changes to the VSWR calculation…

… these are changes I’m testing, and hope to commit soon.

I’ll add some options to the software/firmware to plot the detector voltages over time for a few seconds – an oscilloscope probing the FWD/REV detector output shows some digital noise. I had added an LED-fading effect to show that the board is active, and this exacerbates the noise. This noise makes it through to the VSWR measurement. I’ll try taking the mode of several measurements… Once the DDS is generating the relevant frequency, I’m expecting these voltages to be perfectly stable.

I’m investigating these issues, and hope to resolve them in software/firmware – I hope no changes are needed to the hardware to fix the problems I’m seeing, but can’t rule out shielding the DDS, and/or using shielded cable for the FWD/REV connections between the op-amp and Arduino Micro.

In the next article, I’ll show how to drive the analyser with the driver software, and hopefully resolve the noise issue.

Will M0CUV actually find the resonant frequency of his loft-based 20m dog-leg dipole made from speaker wire? Will the analyser show the tight bandwidth of the 80m loop? Stay tuned! (groan)

73 de Matt M0CUV

I’ve recently been building a small set of CentOS server virtual machines with various settings preconfigured and packages preinstalled. These were built from the ‘minimal’ CentOS-6.5-x86_64-minimal.iso distribution, as you don’t need a GUI to administer a Linux server. Initially these VMs were built manually, following a build document, but after several additions to the VMs, and documenting these updates in the build document, I decided to automate the whole process. This post describes how I achieved this. I had some problems along the way, so I hope this helps…

UPDATED: The need to specify an IP address for the remote_host property has been fixed in Packer’s GitHub repo, and should be in a release coming soon!

I decided to use Mitchell Hashimoto’s excellent Packer system. I’m running it on an Ubuntu Linux 12.04 desktop VM. Eventually this will be changed to run under Jenkins, so that changes to the configuration can be checked into source control and the whole process can be fully automated. Until then, I’ve automated it using Windows 7 as my main system, with VMware Player 6.0.1 running the Ubuntu Linux desktop. I also have an instance of VMware ESXi 5.5.0 running under VMware Player. The Ubuntu VM with Packer creates the new CentOS VMs inside this nested ESXi. If you haven’t seen the film Inception, now might be a good time to watch it… Both the Ubuntu and ESXi VMs use bridged networking, and are on the same IP network.

On the ESXi system, I have:

  • installed the VMware Tools for Nested ESXi
  • configured remote SSH access and the ESXi Shell (under Troubleshooting Mode Options) – Packer currently requires SSH access to ESXi, rather than using VMware’s API; this may change in the future
  • enabled discovery of IP address information via ARP packet inspection. This is disabled by default, and is enabled by SSH using esxcli system settings advanced set -o /Net/GuestIPHack -i 1
  • allowed Packer to connect to the VNC session of the VM being built, so that it can provide the early boot commands to the CentOS installer (specifically, giving KickStart a specific configuration file, served by a small web server – more on this later). To enable VNC access, I used the vSphere client to visit the server’s Configuration/Security Profile settings, and under Firewall/Properties…, enabled gdbserver (which enables ports in the range VNC requires, 5900 etc.) and also SSH Client and SSH Server (I forget some of the other things I tried… sorry!)
  • configured a datastore called ‘vmdatastore’ which is where I want Packer to build the VMs.

On the Ubuntu system, I have a directory containing:

  • The CentOS minimal .ISO
  • A Kickstart file. This was taken from a manual installation’s anaconda-ks.cfg, and modified using a CentOS desktop’s KickStart Configuration tool. See below for its contents.
  • The Packer .JSON script. See below.
  • A script to launch a webserver to serve this directory – Packer needs to get the .ISO and KickStart file over the network, and this is how it’s served. Nothing complex: python has a simple one-line server which I use here.
  • A script to run packer.
  • A script to run on the built VM after the OS has been installed. This isn’t the hard part, so here it just echoes something; in reality it installs the packages I need and configures all kinds of stuff.

So let’s see some scripts. They are all in my ~/packertemplatebuilding directory. The Ubuntu desktop VM’s IP address is 192.168.0.1, and the ESXi VM’s IP address is 192.168.0.2; root SSH access to ESXi is used, and the password is ‘rootpassword’. (Of course these are not the real settings!)

The webserver launching script (Python’s SimpleHTTPServer serves the current directory on port 8000 by default, which is why the .ISO and ks.cfg URLs elsewhere use port 8000):

#!/bin/sh
python -m SimpleHTTPServer &

The Packer launch script:

#!/bin/sh
# export PACKER_LOG=enable
packer build base-packer.json

The Packer script – one of the problems I had was that the IP addresses you see in here were initially given as hostnames, and set in DNS. This didn’t work, as Packer (0.5.1) is using Go’s net.ParseIP(string-ip-addr) on the remote_host setting, which yielded the error “Unable to determine Host IP”. Using IP addresses isn’t ideal, but works for me:

{
  "builders": [
    {
      "type": "vmware-iso",
      "iso_url": "http://192.168.0.1:8000/CentOS-6.5-x86_64-minimal.iso",
      "iso_checksum": "0d9dc37b5dd4befa1c440d2174e88a87",
      "iso_checksum_type": "md5",
      "disk_size": "10240",
      "disk_type_id": "thin",
      "http_directory": "~/packertemplatebuilding",
      "remote_host": "192.168.0.2",
      "remote_datastore": "vmdatastore",
      "remote_username": "root",
      "remote_password": "rootpassword",
      "remote_type": "esx5",
      "ssh_username": "root",
      "ssh_password": "rootpassword",
      "ssh_port": 22,
      "ssh_wait_timeout": "250s",
      "shutdown_command": "shutdown -h now",
      "headless": "false",
      "boot_command": [
        "<tab> text ks=http://192.168.0.1:8000/ks.cfg<enter><wait>"
      ],
      "boot_wait": "20s",
      "vmx_data": {
        "ethernet0.networkName": "VM Network",
        "memsize": "2048",
        "numvcpus": "2",
        "cpuid.coresPerSocket": "1",
        "ide0:0.fileName": "disk.vmdk",
        "ide0:0.present": "TRUE",
        "ide0:0.redo": "",
        "scsi0:0.present": "FALSE"
      }
    }
  ],
"provisioners": [
    {
      "type": "shell",
      "script": "ssh-commands.sh"
    }
  ]
}

Note that this need for IP addresses has been fixed and will be in a future Packer release.

The ssh-commands.sh script:

#!/bin/sh
echo Starting post-kickstart setup

And finally, the Kickstart file ks.cfg. Note that the hashed value of the VM’s root password has been redacted; use the Kickstart Configuration tool to set yours appropriately:

#platform=x86, AMD64, or Intel EM64T
#version=DEVEL
# Firewall configuration
firewall --enabled --ssh --service=ssh
# Install OS instead of upgrade
install
# Use CDROM installation media
cdrom

rootpw  --iscrypted insert-hashed-password-here
authconfig --enableshadow --passalgo=sha512

# System keyboard
keyboard uk
# System language
lang en_GB
# SELinux configuration
selinux --enforcing
# Do not configure the X Window System
skipx
# Installation logging level
logging --level=info

# Reboot after installation
reboot

# System timezone
timezone --isUtc Europe/London
# Network information
network  --bootproto=dhcp --device=eth0 --onboot=on
# System bootloader configuration
bootloader --append="crashkernel=auto rhgb quiet" --location=mbr --driveorder="sda"

# Partition clearing information
zerombr
clearpart --all  --drives=sda

# Disk partitioning information
part /boot --fstype="ext4" --size=500
part pv.008002 --grow --size=1
volgroup vg_centos --pesize=4096 pv.008002
logvol / --fstype=ext4 --name=lv_root --vgname=vg_centos --grow --size=1024 --maxsize=51200
logvol swap --name=lv_swap --vgname=vg_centos --grow --size=3072 --maxsize=3072

%packages --nobase
@core

%end

And that’s it! You’ll have to adjust the timings of the various delays in the Packer .JSON file to match your system. Have the appropriate amount of fun!

Earlier posts discussed the distributed microblogging system I’m building, why I’m writing it, how you would use it, and how it works. In this post, I’ll describe the tools and technologies I’m using to write it, and how you can get involved. It’ll take a long time to write, given the limited time I can devote to it around life, family, study and the day job, so I’d be very happy to receive help!

The software is written in Scala, and its code is currently hosted in a Mercurial repository on Bitbucket. I build it with Maven, write it in IntelliJ IDEA, and use test-driven development as rigorously as possible. It is released under the Apache License, v2.0. Installable software is available for Mac OS X (Snow Leopard or greater), Windows (XP or greater), and Ubuntu Linux (10.04 or greater). I use Software Crafting as my approach; apprentices are always welcome!

The main technologies I’m using are JXTA for the peer-to-peer communications (of which, more later), Play for the client REST API, and Bootstrap, jQuery and HTML5 for the web UI. Storage is handled by an embedded H2 database, with my CommonDb Framework for data access.

The rough architectural plan is that on top of JXTA, I intend to have an anti-corruption messaging/asynchronous RPC layer feeding into the domain model, this being isolated from JXTA. Group membership may be handled by an implementation of the Paxos consensus algorithm. Replication is to be handled by a simple gossip protocol, both for the updates to the directory, and between peers in a message replica set.

Interested in contributing to the project? Contact me via this blog, or via @mattgumbley or @devzendo on Twitter; you can find my mail details on the Contact page.

To be continued…
