OpenVPN and default gateway

I rarely have to use a VPN, usually ssh does the trick for what I need. However, for one subject I need to access a server from within a variety of applications, and that server is not exposed outside of the campus network, even though it has a global DNS entry.

I can use Tunnelblick to VPN into the system, and then access the server as if I were on campus. The downside is that with the default settings, all traffic is routed through the VPN, which would either slow everything down or, in this case, prevent anything from getting to the outside internets.

This is not really acceptable: if I am working on a problem and I need to access something I don’t have locally, like some documentation, then I have to disconnect the VPN, look up the data and then reconnect the VPN.

There is a line near the bottom of the .ovpn file that sets up the default gateway:

# Make the VPN the default route.
redirect-gateway def1

It’s somewhat tricky to understand how to fix this - I had to restart a couple of times because I had screwed up the routing table.

You need to replace that line with one like the following:

route <address-inside-network> <netmask> <gateway>

In my case, to access the server, I needed to have:


Restarting the VPN connection then means I can access the server, and the wider internets as well.
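For illustration, with made-up addresses (say a campus network on 10.1.0.0/16, reached through a VPN gateway at 10.8.0.1), the replacement might look like this in the .ovpn file:

```
# Don't make the VPN the default route; only send campus traffic through it.
# (These addresses are examples - substitute your own network's values.)
route 10.1.0.0 255.255.0.0 10.8.0.1
```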

I’ve also written a script that gets a new authentication keyfile, since the one that is provided doesn’t work very well. I’ve even made it so that it will automatically grab a new keyfile when the old one is out of date.

Procedural Programming

A big thing is made about teaching Java, and using Object Oriented techniques. “You can only program using Object Oriented methodologies in Java.”


You can, however, easily fudge up a procedural programming paradigm in Java, and I have noticed it happening more and more.

Static methods appear as procedures and functions.
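To illustrate (the class and method names here are my own invention), this is Java that is procedural top to bottom - no object is ever instantiated:

```java
// Procedural programming in Java: everything is static, nothing is ever "new"ed.
public class Procedural {

    // A "function": takes arguments, returns a value, touches no object state.
    static int sumTo(int n) {
        int total = 0;
        for (int i = 1; i <= n; i++) {
            total += i;
        }
        return total;
    }

    // A "procedure": called purely for its side effect.
    static void report(int n) {
        System.out.println("Sum to " + n + " is " + sumTo(n));
    }

    public static void main(String[] args) {
        report(5);  // prints "Sum to 5 is 15"
    }
}
```

Rename the static methods to Pascal-style procedures and functions and you could almost pass it off as twenty-year-old code.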

In CP2A, this is exactly what is starting to happen. Instead of building on the OO crap, stuff is being done in public static void main(String[] args), which is all procedural.

This is fine, but why bother teaching OO under Java first, and then this after? Just skip the middle man and start with the procedural stuff. I’d rather have done Pascal again… nah, just kidding.

iCal and Attachments

I use iCal for all of my time management needs, and I mostly like it. It allows me to keep my timetable well organised, and with some of the new features, it makes it quick and easy for me to find information as well.

In a recent incarnation, iCal gained the ability to store attachments as well as URLs. In the past, I just used to drag a document from the Finder to the URL area, which created an ugly-looking but still functional link to the document. Thus, for a tutorial, I can have the answers in a PDF document, and easily access it directly from iCal.

Attachments are better, as you can have more than one of them. But there is a problem. Whilst you can have a document as an attachment, if you have a document bundle, it doesn’t work. You can create the attachment, but you cannot open it.

It took me some time to figure this out.

So, you can have many single-file documents as attachments, but a document bundle only works as a URL - and you can only have one of those.

The other issue with attachments is that they are copies. If you have an attachment, and you edit it, it doesn’t edit the original file you dragged there, only the one that is stored with that calendar entry. Thus, URLs are actually more useful, as this creates a link to the original document.

If only you could have multiple URLs stored in one calendar entry or to do item.

On Teaching Programming

Before I begin, I’ll recap my qualifications, and why I think I can write this article, and have it stand as meaning something.

I have spent the last 9 years teaching. I know how to teach. I haven’t always managed to have the best results, but I have a solid understanding of educational theories and principles. I have taught a little bit of programming, somewhat unsuccessfully, although I have written programs of one sort or another consistently over the past 20 years. Most of my programming has not been for commercial purposes, in fact quite a lot of it has been programming for programming’s sake.

This year has so far been a huge eye-opener for me. I returned to study after a 9-year hiatus, and it was 4 years since my previous study in Computer Science/Engineering. As mentioned before, I haven’t exactly done nothing related to Computer Science in that time, but haven’t been in a formal education setting related to programming in about 13 years.

One of the things that stopped me from returning to study was that all three Universities in the city I live in have only taught Java in the introductory courses for the past god-knows-how-many-years. I’d tried a couple of times to learn Java from various books, but always returned to python, or other more productive languages. I’d studied C, and done quite well at that, a long time ago, so it wasn’t that I was afraid of “real” languages, but more that Java just had no appeal for me. I think the first time I tried I gave up at “primitives and objects are totally different.” I certainly remember thinking “You are joking!?” when this came up again at the start of this year when I learned Java.

So, now I’m studying full-time at Flinders University, doing Computer Science. A lot of what I’m about to say may be perceived to be somewhat critical of that institution, but please bear in mind that some of what I’m studying is useful, advanced and interesting. It’s just that some of it isn’t.

The introductory programming topic is all Java. But not even real Java. See, they are using the fantastic IDE called BlueJ, which removes a lot of the complexity of Object Oriented programming. By fantastic, I mean fucking shit. The whole point of programming is that it is somewhat complex, but I’ll get to that later. So this BlueJ thing takes a different approach. Instead of writing code, and seeing how that works, the first stuff you tend to do is graphical, and you instantiate classes by right-clicking on them, and selecting from a menu. In fact, for the first week’s work (or the first day, since I did the course intensively), I don’t think we wrote a line of code at all.

And the students who suffered through this, and the running of the same topic in normal semester last year, are really feeling it now. We are (almost) doing some real programming, and in many cases these students haven’t grasped what I consider to be the basics of programming. They haven’t totally understood selection and iteration, let alone recursion.

I think Object Oriented programming is a great paradigm. I mean, I wrote an Object Oriented chess game (without the artificial intelligence) in less than 10 hours of coding time. That’s a game, complete with GUI, that allows users to click-click to move pieces. It checks validity of moves, redraws the screen, and so on. Doing it procedurally would really suck.

But I went into that with a solid grounding in imperative programming. I learned how to construct loops and selection statements in BASIC back in the 80s. I used to criticise BASIC and Pascal, but I think I'm starting to see the value in having those types of languages - the ones that are really limited and limiting, but allow you to learn in a safe environment. By safe I mean less threatening, because you can still, if you try hard enough, break things.

I think more importantly though, the first language people learn should be interpreted. For starters, it removes the barrier to entry of having to understand the compile/execute cycle. More so, it provides immediate feedback on what you type in.

Back to educational-land. I have studied a significant amount of Psychology, and know one thing. The sooner after an action you receive feedback, the more likely you will take away the lesson from the situation. If you type in a command, it will fail immediately, and you can then try to get it right.

Interpreted languages don’t need to be restricted to the type-in-command, get-feedback style of programming. They can be used in batch mode, but being able to experiment with the code as you go along makes a big difference to learning how stuff works. I’ll repeat the example I used last November. When I had a Commodore 128D (think of a C=64, but with a separate keyboard, a larger case, a floppy drive, and more memory: 128 KB!), I remember at first being stumped by the error message that appeared when you moved the cursor up over the READY. prompt and pressed return.

?OUT OF DATA ERROR

(Apologies for the all caps, that was the way it was back then…)

It wasn’t until I started programming on that machine that the message made sense. Programming had a low cost of entry, since the built-in BASIC interpreter was basically the access point to the OS. When you deal with data structures in C= BASIC, you use a command called READ, which works in conjunction with DATA. Because you could sometimes have a READ without a corresponding DATA statement (or argument), you would run out of data, and the error message shown above would appear.

The first time I saw the error message in one of my own programs, something clicked. I finally understood what the computer meant when I did the whole READY. thing: it had parsed READY. as the statement READ Y, and with no DATA to read, it ran out. But more than that, I realised that computers are in many senses contextually insensitive. The computer had no idea that I wasn’t writing a program. It was inside the BASIC interpreter, therefore it was a program.

That’s the key. With computers, you need to spell stuff out instruction by instruction. In the early days, this was done with setting switches to reflect binary values. As we go on, we abstract this process. Next it was machine code, then assembly language, then higher-level languages.

In some ways, you lose something at each level. What you gain, however, in most cases exceeds that which you lose. I’d hate to try to write anything significant in assembler, let alone machine code. Being able to grasp the full idea that you are working on, being able to fit one concept in a screen, and not having to worry about things that are at a higher or lower level of abstraction enables you to better write bug-free code.

It is possible that the current limit of reasonable abstraction will be extended in the future - natural language processing and diagram-based programming tools may someday become the norm. At this point in time, however, syntax is still important, perhaps even more so than semantics. Programmers must still spell every keyword correctly. Compilers and interpreters aren’t smart enough to determine which else clause goes with which if determinant without some sort of structure, be it braces or indentation. They can’t just guess, and get it right, nor can they make sound judgements based on context.

So, it’s still important for beginning programmers to learn how to structure a loop, or several types of loop. More important, IMHO, than knowing about objects and inheritance. Yet the current trend towards OO as the be-all and end-all of learning coding means that these ideas are given precedence.

I think the saddest reflection of this is that students are neither capable of, nor interested in, advanced programming topics. Of the 120 students at Flinders University doing Computer Programming 2A, only 10 go on to do Programming Language Concepts. And PLC is going to make those 10 much better programmers, because it teaches them about the structure and interpretation of computer programs, not just how to knock together a few classes that kind-of work and get the job done.

But I rant.

New MacBook Pro Battery

My battery performance had turned decidedly poor, so I arranged for a replacement (free of charge) through Apple.

It arrived today.

Here are the last few days’ worth of battery reports from Coconut Battery. Can you tell which is the new battery?


Mercurial with OS X GUI tools.

Before anyone gets excited, this isn’t about the long-awaited Finder plugin that will do for hg repositories what SCPlugin does for svn repositories: adding badges to the icons and allowing operations from within the Finder.

No, this is about using two great tools, SubEthaEdit and Changes, with Mercurial.

Firstly, let’s look at how we can use SubEthaEdit as the editor for commit messages.

SubEthaEdit has a command line tool, which has some useful arguments. Use man see to see them.

The first of the ones I use is -w, which waits until the file has been closed before continuing the execution of the calling program. This is a required argument, as without it your message won’t be committed properly.

The next I use is -r, which causes the application that called see to be brought to the front after closing the file. This is not completely necessary, but saves a mouse-click.

-o new-window means open the file you are planning to edit in a new window. Again, not completely necessary, and irrelevant if you don’t use tabs at all, but I find it helps me to see which file I am editing if it appears ‘new’.

-m allows you to choose a particular mode to edit the file in. I have created a handy little hgCommit mode, so this can be used. I like this idea, as it means that lines that will not be committed are easily distinguished. And you can use a different background colour, so that it’s really clear what you are doing.

Finally, -j allows a custom title addition. Again, this is just a nicety, but I use it nonetheless.

The file ~/.hgrc allows you to have settings for an editor. I find that SubEthaEdit doesn’t quite work right with crontab, so I leave the EDITOR environment variable set to nano. In the [ui] section of the .hgrc file, I have a line that looks like:

editor = see -w -r -o new-window -m hgCommit -j 'Mercurial Commit Message'

My hgCommit mode can also be downloaded, if you wish:

The next hint is using Changes. You’ll probably know this if you have read the Changes Wiki, but you can use the extdiff extension of Mercurial. The bit I missed is that you can also use Changes to merge by default. I wish you could do the same for diff.
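For reference, a minimal sketch of the extdiff setup in ~/.hgrc - the path to chdiff is an assumption, so point it at wherever Changes installed its command line tool:

```
[extensions]
extdiff =

[extdiff]
# defines an "hg chdiff" command that opens the diff in Changes
cmd.chdiff = /usr/bin/chdiff
```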

To make Changes the default merge tool, create a script at /usr/local/bin/chmerge, or somewhere similar.

All that needs to go in this file is:

#!/bin/sh
chdiff --wait "$3" "$1"

And ensure it is executable. Mine is also owned by root, I think.

Then, in your .hgrc, add in the following line after your editor line:

merge = /usr/local/bin/chmerge

My final hint is how to get around errors when you have a remote filesystem mounted under sshfs, and you get an untrusted user or group error when trying to perform a mercurial operation. In my case, the files in question were owned by user 1001/group 1001. I added the following to my .hgrc:

[trusted]
users = 1001
groups = 1001

Making EyeTV 1000x better.

One command:

defaults write com.elgato.eyetv "apple remote menu button behavior" -int 1

This made my day.

Don’t do what I did first, which was:

defaults write com.elgato.eyetv "apple remote menu button behaviour" -int 1

It still frustrates me that the world has settled on the lowest common denominator in spelling: colour and behaviour shouldn’t be spelled the USA way…

Update: this seems to cause a crash when starting Front Row. I’ve had to revert to the old behaviour. Poo.

Input Managers and the Leopard Firewall

I’d figured out some time ago that an Input Manager or two that I was using was interfering to some extent with the Mac OS X Leopard firewall.

When you have the firewall in “Set access for specific services and applications” mode, and you start an application which tries to open a TCP or UDP port, then you get a message like:


When you click one of the buttons, an entry is added to the preferences list:


However, the application’s executable code is checked by the system to see that it is the same application as was run when this choice was approved. So, if you have something like an Input Manager, which alters the executable code as it is run, then this message appears every time you launch the application.

This was a real problem for me, using Inquisitor with Safari. Sure, it’s a great little tool to get the pre-search results in the browser before you press enter, but I decided it wasn’t worth the annoyance of having to click Allow each time I start up the application.

So, if you are having issues with the Firewall dialog appearing each time you start an application, and you haven’t installed a new version, consider removing any unneeded Input Managers. You’ll probably need to remove them all to get it to stop, but that might just be worth it.

Naked Mole Rat

I swear this looks like something I have seen before. I just can’t put my penis on it…


Using Dynamic DNS as (partial) authentication.

One thing that you can do with Apache is limit access to particular hosts or domains. For instance, you can have a process running on a server that handles internal requests as well as external requests, and have the internal site never exposed to the outside internet. This can be done using the Apache Allow and Deny directives.

But, sometimes I need to work remotely, and still have access to the intranet data, such as the company wiki and bugzilla database. But I don’t know which IP addresses I will be using, and whilst I can open it up to allow a range of IP addresses in, this means that someone else could see the data.

So, set up a dynamic DNS for your laptop, and put in an Allow for this DNS entry. Then, you just have to update the address whenever you want to access it - or even better, update it whenever your IP address changes. That means, even if someone comes on to the same IP address after you, as long as you have a new IP address, they won’t be able to get in.
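Putting those two paragraphs together, the relevant httpd.conf fragment might look something like this (the location, network range and hostname are all made up):

```
<Location /wiki>
    Order Deny,Allow
    Deny from all
    # clients on the internal network
    Allow from 192.168.0.0/16
    # my laptop's dynamic DNS name
    Allow from mylaptop.dyndns.example.org
</Location>
```

One caveat: when Allow is given a hostname, Apache matches it by reverse-resolving the connecting client’s IP address and then forward-confirming the result, so check that your dynamic DNS setup actually resolves that way before relying on it.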

This does point out the flaw in the system: if you log off and don’t log back on (or don’t renew your IP address), then that person can access your intranet data. So, you should not use this as a sole means of authentication. Instead, use HTTP authentication or, preferably, some other method of protecting access. But as a lightweight (i.e., no VPN) system, this looks pretty good. It should even work if you are behind a firewall that prevents VPN access. And adding a new user requires a bit of work - creating a new dynamic hostname and adding this to the httpd.conf file, or wherever your server config data is stored.

It strikes me you could use sub-domains to do this, too, and have, or whatever. Then an allow of should allow anyone using a subdomain. I don’t know how you can do subdomains with DynDNS, but it may be possible with some other system. (Or, if you run your own DNS, you could come up with a method of doing it there, which gives you more flexibility. However, if you have a DNS, you can probably stretch to a VPN too).