New Startup – Traversys

I once asked a friend who had run a few businesses in his spare time how easy it was to start up a company. He told me it was very easy and could cost as little as £10. I was a bit taken aback. I’d always wanted to start a business, and had a few ideas here and there, but I was held back by my fears about all the costs, the legal mumbo jumbo and how to market.

For many years I’ve had ideas for solutions and integrations, which I’ve developed for my employers (sometimes in my own time) to solve a specific problem I was having. I wasn’t asked to do it, and I wasn’t sponsored to do it – I did it because I needed something. The problem is that once you build something for your employer, the intellectual property becomes theirs – despite the fact they never commissioned it (and in some cases don’t even care).

The best I got for my contributions was a pat on the head; the worst was that they were completely ignored or not understood. I can’t say which is more frustrating, but what I can say is that I’m ready to end those frustrations.

Fast forward a few years, and I’ve decided to partner with a long-time friend and start Traversys.

From now on, the tools that I need are going to be built on our own time, with our own hardware, and with free license for us to use them in our employment. Crucially, the IP will be owned by Traversys – and in many cases will be Open Sourced.

My friend was right. Starting a company is actually very easy, but starting one when your business partner lives overseas, and you intend to keep working the day job (which influences and funds your ideas), is not.

The chance of success is there, but more importantly this is about finding a place for our ideas to become tangible products that others may find useful, and be prepared to pay for.

Merry Christmas

Merry Christmas to any faithful readers left.

Apologies for the long absence. If you’ve been following the tweet summaries you will have seen that I’ve been doing a fair bit of travel for my job and haven’t really had time to focus on or update the blog – so much so that I forgot to renew the domain, and for a few days the URL pointed at some advertising page.

I’m back now, with some new toys, so I’m hoping to do another home-baked tutorial on connecting a PC to a TV for media playback.

Also, Happy New Year!

Hacked Again…!

A short while ago this site was hacked by a script kiddie who exploited a backend vulnerability by swapping out the default theme. Harmless, but I spent a good while figuring out what they’d done and how, so I could block it and prevent it happening again.

This time I’ve been hacked by some pro-Palestinian group or something – weird, as I’m heading out to Saudi Arabia in a few days’ time and nowhere does this blog mention anything about the Middle East or politics. I didn’t spend half as much time fixing this one. In fact I must thank the previous hacker for enlightening me – each subsequent hack makes me analyse my security more.

These cyber-jihadis were more capable, in that they had changed my password and default email. Thankfully I googled and quickly found a useful blog post from Mahesh Kukreja on restoring my login. It seems the hacker had exploited a known vulnerability in WordPress that had not been fixed in my installation (despite it being the latest version).

I’ve blocked the IP address and the exploit (using a security logs plugin), as well as applying the fix in my login PHP.

Once I was into my dashboard I quickly checked that nothing else had been touched, reset my password, updated the current theme (which purged their changes) and modified my security settings and .htaccess file.
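One of those .htaccess measures can be as simple as denying the attacker’s address outright. A minimal sketch, assuming Apache 2.2 Order/Deny syntax – the IP here is a documentation placeholder, and htaccess.example stands in for the live .htaccess file:

```shell
# Append a deny rule for a known-bad address (203.0.113.45 is a
# placeholder; htaccess.example stands in for the real .htaccess)
cat >> htaccess.example <<'EOF'
Order Allow,Deny
Allow from all
Deny from 203.0.113.45
EOF

# Show the rule we just added
grep "Deny from" htaccess.example
```

On Apache 2.4 the equivalent would be `Require not ip`, but the 2.2 syntax above matches what most shared hosts ran at the time.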

Since he had been kind enough to leave his email address, I also pinged a quick email to inform him he was a twat. Probably stepped over the line – I’ll learn one day.

Posting from Android

I’m posting this from my Android phone with the WordPress app. Neat huh?

Normal posting will resume as soon as I can find time to sit down and focus.

Geocaching Log Feed Added

Since I am limited to Blogger for publishing our “My Finds” PQ, I’ve used a WordPress widget to add a link to the latest posts here.

You can find them on the Geocaching page.

Blog Your Geocaching Found Logs with Blogger

You may or may not be aware that Google have recently released a command-line tool called Google CL which allows limited updating of some of its primary services from the command line – including Blogger.

I had been working on a script, and looking for a utility to parse the “My Finds” pocket query for uploading to a blog, for a while now, so on hearing this news I set to work to see if I could create an automated script. You can see the results on my old Blogger account, which I have now renamed _TeamFitz_ and repurposed for publishing our Geocaching adventures.

It’s a little bit clunky and could be improved, but the script is now complete and ready for ‘beta’. I’m publishing it here and releasing it under GPL for others to download, copy and modify for their own Geocaching blogs.

A few snags:

  • It will only work with one “Found” log per cache – if you logged the same cache as found twice, it may screw up the parser.
  • Google have an arbitrary limit of 30–40 auto-posts per day, which is entirely fair; beyond that, word verification is turned on, which will prevent CL updates. I have limited the script to parse only 30 posts at a time.

You will need to download and install Google CL. It goes without saying that the script is Linux-only, but if someone wants to adapt it for Windows they are welcome.

I have commented out the “google” upload line for test runs; remove the # to make it active.

Either cut n’ paste the code below, or download the script from YourFileLink. Please comment and post links to your own blogs if you use it, and let me know if there are any bugs I haven’t addressed.

#!/bin/bash
#
# Script to unzip, parse and publish
# Pocket Queries to Blogger
# Created by Wes Fitzpatrick (
# 30-Nov-2009. Please distribute freely under GPL.
#
# Change History
# ==============
# 24-07-2010 - Added integration with Blogger CL
# Notes
# =====
# Setup variables before use.
# Blogger has a limit on posts per day, if it
# exceeds this limit then word verification
# will be turned on. This script has been limited
# to parse 30 logs.
# Blogger does not accept date args from Google CL,
# consequently posts will be dated as current time.
# Bugs
# ====
# * Will break if more than one found log
# * Will break on undeclared "found" types
#   e.g. "Attended"
# * If the script breaks then all temporary files
#   will need to be deleted before rerunning:
#	.out
#	.tmp
#	new
#	all files in /export/pub
#####     Use entirely at your own risk!      #####
##### Do not run more than twice in 24 hours! #####
##### Setup these variables before use #####
PQZIP="$1"
EXCLUDES="excludes.list"   # GC codes already published
PUBLISH="export/pub"       # converted logs awaiting publication
EXPORT="export"            # logs which have been published
BLOG="Your Blog Name"
USER="yourname@gmail.com"
TAGS="Geocaching, Pocket Query, Found Logs"
COUNTER=30                 # stay under Blogger's auto-post limit

set -e

if [ ! "$PQZIP" ]; then
	echo ""
	echo "Please supply a PQ zip file!"
	echo ""
	exit 1
fi

PQ=`echo $PQZIP | cut -f1 -d.`
PQGPX="$PQ.gpx"
GCLIST="$PQ.gclist.tmp"
LOGLIST="$PQ.loglist.tmp"

if [ ! -f "$EXCLUDES" ]; then
	touch "$EXCLUDES"
fi

# Unzip Pocket Query
echo "Unzipping PQ..."
unzip $PQZIP

# Delete header tag and strip carriage returns
echo "		...Deleting Header"
sed -i '/My Finds Pocket Query/d' $PQGPX
sed -i 's/'"$(printf '\015')"'$//g' $PQGPX

# Create list of GC Codes for removing duplicates
echo "		...Creating list of GC Codes"
grep "<name>GC.*</name>" $PQGPX | perl -ne 'm/>([^<>]+?)<\// && print $1."\n"' > $GCLIST

# Make individual gpx files
echo ""
echo "Splitting gpx file..."
echo "	New GC Codes:"
cat $GCLIST | while read GCCODE; do
	# Skip GC codes which have already been published
	if [ ! "`egrep "$GCCODE$" "$EXCLUDES"`" ]; then
		if [ ! "$COUNTER" = "0" ]; then
			echo "      	$GCCODE"
			TMPFILE="$GCCODE.tmp"
			GCFILE="$PUBLISH/$GCCODE.tmp"
			sed -n "/<name>${GCCODE}<\/name>/,/<\/wpt>/p" "$PQGPX" > "$TMPFILE"
			# Split out each log entry, then keep only the "found" log
			grep "<groundspeak:log id=" "$TMPFILE" | cut -f2 -d'"' | sort | uniq > "$LOGLIST"
			cat $LOGLIST | while read LOGID; do
				sed -n "/<groundspeak:log id=\"$LOGID\">/,/<\/groundspeak:log>/p" "$TMPFILE" >> "$LOGID.out"
			done
			FOUNDIT=`egrep -H "<groundspeak:type>(Attended|Found it|Webcam Photo Taken)" *.out | cut -f1 -d: | sort | uniq`
			cp "$FOUNDIT" "$GCFILE"
			rm -f *.out
			URLNAME=`grep "<urlname>.*</urlname>" "$TMPFILE" | perl -ne 'm/>([^<>]+?)<\// && print $1."\n"'`
			echo "      	$URLNAME"
			# Replace some of the XML tags in the temporary split file
			echo "      		...Converting XML labels"
			sed -i '/<groundspeak:short_description/,/groundspeak:short_description>/d' "$TMPFILE"
			sed -i '/<groundspeak:long_description/,/groundspeak:long_description>/d' "$TMPFILE"
			sed -i '/<groundspeak:encoded_hints/,/groundspeak:encoded_hints>/d' "$TMPFILE"
			sed -i 's/<url>/<a href="/g' "$TMPFILE"
			sed -i "s/<\/url>/\">$GCCODE<\/a>/g" "$TMPFILE"
			LINK=`grep "<a href=" "$TMPFILE"`
			OWNER=`grep "groundspeak:placed_by" "$TMPFILE" | cut -f2 -d">" | cut -f1 -d"<"`
			TYPE=`grep "groundspeak:type" "$TMPFILE" | cut -f2 -d">" | cut -f1 -d"<"`
			SIZE=`grep "groundspeak:container" "$TMPFILE" | cut -f2 -d">" | cut -f1 -d"<"`
			DIFF=`grep "groundspeak:difficulty" "$TMPFILE" | cut -f2 -d">" | cut -f1 -d"<"`
			TERR=`grep "groundspeak:terrain" "$TMPFILE" | cut -f2 -d">" | cut -f1 -d"<"`
			COUNTRY=`grep "groundspeak:country" "$TMPFILE" | cut -f2 -d">" | cut -f1 -d"<"`
			STATE=`grep "<groundspeak:state>.*<\/groundspeak:state>" "$TMPFILE" | perl -ne 'm/>([^<>]+?)<\// && print $1."\n"'`
			# Now remove XML from the GC file
			DATE=`grep "groundspeak:date" "$GCFILE" | cut -f2 -d">" | cut -f1 -d"<" | cut -f1 -dT`
			TIME=`grep "groundspeak:date" "$GCFILE" | cut -f2 -d">" | cut -f1 -d"<" | cut -f2 -dT | cut -f1 -dZ`
			sed -i '/groundspeak:log/d' "$GCFILE"
			sed -i '/groundspeak:date/d' "$GCFILE"
			sed -i '/groundspeak:type/d' "$GCFILE"
			sed -i '/groundspeak:finder/d' "$GCFILE"
			sed -i 's/<groundspeak:text encoded="False">//g' "$GCFILE"
			sed -i 's/<groundspeak:text encoded="True">//g' "$GCFILE"
			sed -i 's/<\/groundspeak:text>//g' "$GCFILE"
			# Insert variables into the new GC file
			echo "      		...Converting File"
			sed -i "1i\Listing Name: $URLNAME" "$GCFILE"
			sed -i "2i\GCCODE: $GCCODE" "$GCFILE"
			sed -i "3i\Found on $DATE at $TIME" "$GCFILE"
			sed -i "4i\Placed by: $OWNER" "$GCFILE"
			sed -i "5i\Size: $SIZE (Difficulty: $DIFF / Terrain: $TERR)" "$GCFILE"
			sed -i "6i\Location: $STATE, $COUNTRY" "$GCFILE"
			sed -i "7i\$LINK" "$GCFILE"
			sed -i "8i\ " "$GCFILE"
			touch new
			COUNTER=$((COUNTER - 1))
		else
			echo ""
			echo "			Reached 30 post limit!"
			echo ""
			break
		fi
	fi
done

# Publish the new GC logs to Blogger
if [ -f new ]; then
	echo ""
	echo -n "Do you want to publish to Blogger (y/n)? "
	read ANSWER
	if [ "$ANSWER" = "y" ]; then
		echo ""
		echo "	Publishing to Blogger..."
		echo ""
		egrep -H "Found on [12][0-9][0-9][0-9]-" "$PUBLISH"/* | sort -k 3 | cut -f1 -d: | while read CODE; do
			CACHE=`grep "Listing Name: " "$CODE" | cut -f2 -d:`
			GC=`grep "GCCODE: " "$CODE" | cut -f2 -d:`
			sed -i '/Listing Name: /d' "$CODE"
			sed -i '/GCCODE: /d' "$CODE"
			#google blogger post --blog "$BLOG" --title "$GC: $CACHE" --user "$USER" --tags "$TAGS" "$CODE"
			echo "blogger post --blog $BLOG --title $GC: $CACHE --user $USER --tags $TAGS $CODE"
			mv "$CODE" "$EXPORT"
			echo "		Published: $CODE"
			echo "$GC" >> "$EXCLUDES"
		done
		echo ""
		echo "                  New logs published!"
		echo ""
	else
		echo ""
		echo "                  Not published!"
		echo ""
	fi
else
	echo ""
	echo "			No new logs."
	echo ""
fi

# Clean up temporary files
rm -f *.out
rm -f *.tmp
rm -f "$EXPORT"/*.tmp
rm -f new

Solve my Mystery Geocache

I’m headed to Cork, Ireland for a week and a half, so there’ll be few or no posts – check Twitter @wafitz for updates.

Currently I’m sitting on a car ferry from Fishguard using wifi@sea – via satellite, pretty cool.

In the meantime, we’ve just published a new mystery/puzzle geocache, to be solved at home and then searched for at night following a trail of firetacks.

See if you can solve it here:
GC2A3MP listing
Mystery journal

Site Hacked

Update: edited for clarity now my head is a bit clearer after a 48-hour bout of flu…

Well, it seems some script kiddie decided to target my website whilst I was lying in bed all day yesterday with the flu, completely unaware.

Despite the WP software being completely up to date they found a way in, and I’m still working out the exact method of entry. I’m assuming they somehow obtained my password and got in via my account, but it could be a sophisticated injection – since nothing else seems to have been touched so far.

It seems they were able to replace the current theme with the default, then simply overwrite the index.php with their own HTML. I checked my stats and found some suspicious URL requests which were not in my blacklist – they’re now added.
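That check boils down to grepping the access log for requests that have no business on a small blog. A sketch with an invented log excerpt – the IPs and patterns are illustrative, not the actual requests I found:

```shell
# Invented access-log excerpt for illustration
cat > access.log.example <<'EOF'
203.0.113.7 - - [12/Aug/2010:03:14:01] "GET /index.php HTTP/1.1" 200
203.0.113.9 - - [12/Aug/2010:03:14:05] "GET /wp-admin/theme-editor.php?file=index.php HTTP/1.1" 200
203.0.113.9 - - [12/Aug/2010:03:14:09] "GET /?p=1&cmd=base64_decode HTTP/1.1" 404
EOF

# Requests hitting the theme editor or carrying encoded payloads stand out
SUSPECT=$(grep -cE "theme-editor\.php|base64_decode|eval\(" access.log.example)
echo "$SUSPECT suspicious requests"

rm -f access.log.example
```

Anything that matches is a candidate for the blacklist; the theme-editor request in particular fits the theme-swap entry method.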

I’ve done some security hardening of the website today, with some more stringent measures in place. Though I’m aware there’s no such thing as 100% invulnerability, the purpose is really to make hacking this domain not worth the effort. This is a ‘hobby’ site after all; there’s not much kudos to be gained from pwning this domain – hence my suspicion it was a script kiddie above all else.

Good reminder for frequent backups, I guess.

Doggets, Blackfriars Bridge, London

The beautiful wife!