Combining recurring events with approval

A while ago, I talked about the issues with taking a set of potentially recurring events and merging them into the fewest events that fully describe all of the occurrences. This was a bit of a challenge, but with a TDD process I was able to get something that, as far as my testing tells me, works. Thus, deleting one middle occurrence of a recurring event results in two events being left, and re-creating that occurrence results in the original single event being re-created.

This goes even further, with occurrences that touch or overlap being merged into potentially a new event. This is a requirement of our problem domain, since these events are used to see if a person is available to perform a task, and two adjacent occurrences would otherwise make it hard to determine if the person is indeed available.
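At its core, that merging of touching or overlapping occurrences is the classic interval-merge algorithm. A minimal sketch in Python (occurrences reduced to bare (start, end) pairs; the real events carry dates, times and approval state, so this is an illustration rather than my actual implementation):

```python
def merge_occurrences(occurrences):
    """Merge occurrences that touch or overlap into the fewest intervals.

    Occurrences are (start, end) tuples; touching intervals (one's end
    equals the next's start) are merged, since an adjacent pair should
    read as a single block of availability.
    """
    merged = []
    for start, end in sorted(occurrences):
        if merged and start <= merged[-1][1]:  # overlaps or touches the last
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged
```

For example, merge_occurrences([(9, 12), (12, 14), (15, 16)]) collapses the first two, touching, occurrences into (9, 14) and leaves (15, 16) alone.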

So, it turns out that this is only part of the problem. In addition, changes to availability may require approval by a manager. So, as well as being able to merge objects, we also need to be able to merge pending objects with one another, and mark pending objects as superseding other objects. And, even more fun, if a pending object is created that matches a pending-deletion object, for instance, then that object is restored: the two pending events cancel one another out. Finally, a pending object can also be ‘pseudo-merged’ with an availability, where it becomes the union of the two events, and supersedes the approved event!

This problem is even stickier than the previous part. Having said that, I have it working, although at this stage it is not possible to have recurring pending events. Things were made even more difficult by the way that I handle superseding events. This cannot be just references in the database, as events and occurrences may be deleted and created by saving another event, so a different method of keeping track of which events are superseded is required.

The good news is, this is coming along well. It did occur to me on the way home today that without ‘never ending’ events, this would possibly collapse the problem space down. Occurrences would no longer need to be recreated quite as often, and indeed they may be the sole storage representation: abstract events could then be created on the fly, purely for descriptive purposes. All I need is some way to represent events that don’t have an end date…

…or I could make it that never-ending events must end in 2050. Then, any event that ends that year will be represented to the user as never-ending. This seems like a slippery Y2K-style slope, but if I keep moving the goal-posts, then it might work. And simplify my code immensely. It turns out there are only ~2100 weeks until then, and my events at this stage only occur weekly (it makes the simplification simpler to only have to look for weekly patterns, and that is how people usually determine their schedule).
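That week count is easy to sanity-check, assuming a vantage point of mid-2009 (a hypothetical date; shift it to taste and the figure moves accordingly):

```python
from datetime import date

# From a nominal "today" (hypothetical: mid-2009) until the start of 2050.
weeks_left = (date(2050, 1, 1) - date(2009, 6, 1)).days // 7
print(weeks_left)  # 2117, roughly the ~2100 weeks estimated above
```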

I’m glad we had this little chat.

Combining recurring events

On a project I am working on for work at the moment, we have a need to handle recurring events. These events need to be merged whenever they describe the same thing, or can be simplified in any way. As it turns out, this is quite a tricky problem.

There are really only two simple ways that events can be merged. If two events have the same frequency, and occur on the same day (or on consecutive days according to the period), with the same start and finish times, then they can be merged into one event covering the combined date range.

Similarly, if two events have the same period, occur on all of the same days, and overlap or touch in times, they can be merged into a single event.

From there, it starts to get complicated. Two events that partially overlap may be able to merge parts of themselves, possibly resulting in an extra event being created.

For instance, an event every Tuesday at 9am-12noon that lasts for 10 weeks, and another event every Tuesday 11am-2pm that starts 5 weeks later and also runs for 10 weeks, will result in 3 events: one covering the first 5 weeks from 9-12, one covering the next 5 weeks from 9-2, and one covering the final 5 weeks from 11-2.
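That worked example can be checked mechanically. A toy sketch (weeks numbered from 0, times as plain hours, weekly patterns only, and a finite horizon standing in for the end dates, so this is deliberately far simpler than the real thing): compute the merged busy intervals for each week, then group consecutive weeks sharing a pattern back into events.

```python
from itertools import groupby

def merge_week(intervals):
    """Merge touching/overlapping (start, end) hour intervals within one week."""
    merged = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

def simplify(events, horizon):
    """events: (first_week, n_weeks, start_hour, end_hour) tuples.
    Returns (first_week, n_weeks, intervals) runs of identical weeks."""
    weekly = []
    for week in range(horizon):
        hits = [(s, e) for first, n, s, e in events
                if first <= week < first + n]
        weekly.append(merge_week(hits))
    runs, week = [], 0
    for pattern, group in groupby(weekly):
        n = len(list(group))
        if pattern:  # skip empty weeks
            runs.append((week, n, pattern))
        week += n
    return runs

# Tuesdays 9-12 for 10 weeks, plus Tuesdays 11-2 starting 5 weeks later:
print(simplify([(0, 10, 9, 12), (5, 10, 11, 14)], horizon=20))
# → three events: 5 weeks of 9-12, 5 weeks of 9-14, 5 weeks of 11-14
```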

This on its own is easy to deal with, but as soon as you consider events that have no end date, things get tricky quickly.

Autolinking URLs

I use ecto for posting, which is pretty cool. I’d like it to be able to create links automatically from URLs that I happen to write in. Thus http://www.google.com would automatically become a clickable link, enabling me to just enter a URL in a post. The best method for doing this would have to be a regex:

((?<!(<a href="))(?<!(>))(((https?|ftp)://)|((mailto|aim|svn):)|(file:///)|((?<!://)www\.))[a-zA-Z0-9_$!&%?,#@'/.*+;:=~-]+\w)\.{0}

This will match a URL that isn’t already in an href link. Took me ages to work out. It will also match www.google.com, and not the dot at the end. I’m still working on the code that will create the links. I think a separate regex for each type of URL might be useful.
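Once the matching works, actually creating the links is a job for re.sub with a callable replacement. A cut-down sketch in Python (with a much-simplified pattern that only handles the http(s) and www. forms, not the full expression above):

```python
import re

# Deliberately simplified: only http(s)://… and www.… forms, with a single
# fixed-width lookbehind to skip URLs already inside an href attribute.
URL_RE = re.compile(r"(?<!href=\")\b((?:https?://|www\.)[\w$!&%?,#@'/.*+;:=~-]*\w)")

def autolink(text):
    def to_anchor(match):
        url = match.group(1)
        # Bare www. URLs need a scheme prepended to make a working href.
        href = url if url.startswith("http") else "http://" + url
        return '<a href="%s">%s</a>' % (href, url)
    return URL_RE.sub(to_anchor, text)

print(autolink("see www.google.com for details"))
# → see <a href="http://www.google.com">www.google.com</a> for details
```

The negative lookbehind on href=" is what stops already-linked URLs from being linked a second time, and the trailing \w keeps a sentence-ending dot out of the match.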

Hillegass/Cocoa Programming for Mac OS X

I bought a copy of this fantastic textbook on Friday.

I won’t write a review; suffice it to say it is the book to use to learn how to program on the Mac.

Why GUIs make us dumber.

I’ve been teaching a Year 12 Information Technology course this year. In some ways I’ve enjoyed it (it’s the first time I’ve actually ever been paid for doing something to do with computer programming, and it has helped me to make the decision to go back to Uni and complete my degree in this area), but in lots of ways I really haven’t.

I don’t know how much the students are to blame (and I’m game making a statement like that, since some of them read this blog). I’m thinking more and more that something else is stopping them from succeeding as much as I would have liked. It may be that the group of students I have just aren’t motivated in the way I was as a student, or perhaps even I wasn’t at their age. This may be true, since I was busy dropping out of Uni when I was about 18, after all.

But I was still teaching myself stuff to do with programming at that age. Whether it was Logo or BASIC at school, Pascal, C or C++ at University, or Python, JavaScript and AppleScript since then, I’ve continued to learn stuff related to coding my whole life. I even taught myself Amiga E back when I had an A1200.

Perhaps the title of this post is an overstatement. I’m not sure that GUIs do indeed make us dumber. It’s a little more complicated than that.

Back in the 70s, 80s, and even the early 90s, computers really weren’t that universal. You could do all sorts of things without having to use a computer. Now, computers are everywhere. The idea that everyone would rely on a world-wide computer network for a significant amount of their social interaction was something that was limited to science fiction books. And even they mostly underestimated just how much the internet, and computers, would affect our lives.

Because computers weren’t so widespread, only a small fraction of the population were using them. Those people were likely to be, perhaps I’d suggest with a touch of smugness, smarter than those who weren’t. At the least, they were a different type of people: perhaps the type to whom a regimented, logical way of thought came naturally. A group of people who were prepared, and able, to memorise strange, obscure sequences of characters and numbers.

Because, back then, that was the only way to interact with computers. I mean, anyone could remember:

load "*",8,1

to load and run a game on a Commodore 64. But figuring out why pressing up onto the line labelled READY. and hitting return yielded an arcane “Out Of Data Error” perhaps required more thought. Being able to grasp the concepts behind programming, the various looping and decision structures, came easily to people who saw the world through binary glasses.

Users of computers now don’t have to worry any more about getting a command exactly right. You don’t even have to remember the dir command, or the C64 load "$",8, to get a listing of a directory. You just double-click an icon that looks like a folder, and you get an automatic list of whatever files and folders are inside it.

This has taught users that remembering arcane commands isn’t important. Until you have to write programs yourself.

This is the biggest thing my students struggle with: not understanding that a spelling error or typo can have a totally bizarre impact on the execution of a program (or even on the display or behaviour of a web page, for those that have only ever used DreamWeaver or FrontPage). This ranges from the basest manifestation, that False has a different meaning from false in virtually every programming language, to not recognising that leaving the quotes off a string causes the language to attempt to execute it.

I do blame GUIs for these shortfalls in the skills and knowledge of younger people. When the user interface prevents you from making mistakes in data entry, then you don’t develop the problem-solving skills to overcome errors in other contexts when they do appear.

And computer programming is all about problem solving. Being able to logically approach an error message, interpret the likely causes and fix the issue requires perhaps more skills than I had anticipated these students would have. Would things have been better for my teaching of this subject if I had students more like myself, who are all busy doing Physics-Chemistry-double-Maths? Perhaps. But I think they too would lack some of the skills required.

Learning to Program

I learned programming originally, I think, on an Apple //. We had some of these at my primary school, and I clearly remember doing something like programming, even if it was only Logo, in class time. Or perhaps it wasn’t in class time, but at lunch and recess. I had a C=128 at home, and did lots of programming in Commodore BASIC, most notably the more advanced BASIC that shipped with the 128. Very little of it was to do with PEEKs and POKEs, but there was a little of this more assembly-level stuff.

At high school we did some programming, I think, on the BBC micros. I’ll have to check with the two guys I still keep in contact with, on WoW of all places, but I remember doing some programming on the machines at my first high school.

When I went away to boarding school I ‘moved on’ from computer programming in some senses, and went more into the hard sciences and mathematics. I think a big part of this was me not having any respect for (a) the machines we were forced to use in IT, and (b) the teacher we had for IT, who we also had for Religious Education. (!)

In actuality, I failed year 10 IT. Not because it was hard, but because it was easy. I did all of the work, and then gave it to the girls I fancied to hand in. (Hi Catie, Moose and co!) This wasn’t programming at all, but using computer applications. The stuff that is really boring: learning how to use Excel, Word and the like. And, note, this was pre-Windows.

During this phase, I was into Amiga. I didn’t actually have one, I still only had the C128D, I think. We had a choice between the BBC micros and the dodgy XT PCs. Both of which were limited to monochrome screens. I was, in the boarding house, barely living with the glory (!) of 256 colours, while my friends at home were basking in 4096 colours. And at school, the dull orange on black of the PCs.

I think I got an Amiga 500 while I was still at school. It was awesome, and I loved it. Games were much better on it, and it had a real windowed operating system, that actually multitasked better back then with a slow CPU and 512k of memory than the much more powerful machines in my lab pool do with Windows XP.

My learning of different computer programming languages began in earnest when I started University. Finally, I had proper instruction in several languages, and I was first forced to learn Pascal. We were taught using Turbo Pascal in the PC lab at Uni, but I think I downloaded and installed a Pascal compiler on the VAX/VMS terminals we also had access to, which everyone else only used for email. I used them for reading usenet, and learning how to push the limits of what the sysadmins allowed us to do.

Second semester of Uni was better - we did C. Finally, a real language. Again, Borland was the platform of choice, but I was different. As well as using the VAX/VMS machines, I managed to get myself an account on the Unix server, lux. I didn’t bother attending lectures (the room wasn’t large enough for all of the students, the lecturer wasn’t a good communicator), and I got enough out of the tutorials.

I submitted all of my work to the tutor via email, and obtained, IIRC, 97% for the subject. I think he was impressed that the rest of the drones were using Turbo C, and I was doing it the real way.

All of this coding was done without a windowing environment. Even though I had a lux account, I wasn’t allowed to actually go into the lab that had the X-Windows terminals in it. I did spend a bit of time in at Adelaide Uni, and got some exposure to X-Windows there, but not much.

Third semester, and things started to go a bit shaky for me. The programming subject was called “Data Structures in Pascal”, and I must say I wasn’t so keen on diving back into this toy language. I got hold of the C++ books for the second semester, and started teaching myself a bit of this.

However, all good things come to a close, and with my rather poor performance overall, I was forced to choose something else to do.

After a couple of years, I came across Python. I don’t think it was even while I was still at Uni - I think it was something I picked up on my own while teaching. I do remember printing out all of the 1.5.2 documentation (I think I still have it in a filing cabinet somewhere). I may have started while at Uni in my Education degree, I can’t recall.

Git vs. svn

I’d love to move to Git, even though I’ve really only just started using svn. There’s only one thing holding me back: the fact that all of the tools I use support Subversion, but not Git. I’d probably settle for just one non-CLI tool, like SCPlugin. I’ve even downloaded the source code, and I will have a bit of a look at how that works, and if it is possible to translate that into Git-speak. I really like how Komodo and Xcode will also display the SCM status of a file - it’s just those little extra features that make it kinda easy to remember to commit changes from time to time.

Subversion and OS X

There are a couple of cool things you can do with Subversion and OS X. The first is the Finder plugin, which recognises when you are viewing files that have been checked out of an SVN repository, and puts nice little badges on the icons, so you can see if they are up to date, changed, or not in the repository. This is great because it works regardless of the IDE or editing program(s) you are using. As it turns out, both things I use (Komodo and Xcode) have SVN handling built in (or at least I think Komodo Edit does; I’m still using the Komodo IDE demo on my new machine…).

But sometimes I want to have files that are of a different format, or projects that are not necessarily coding projects, that I still want to have version control over. So, I have a local subversion repository directory (~/.SVN, so I don’t see it in the Finder, but it should still be backed up when I back up my home directory), and I’ve currently got a couple of repositories: one for Jaq’s website, and another for my new project, which will (eventually) add the ability to change an iCal event attendee’s status with a pop-up menu.

The first of these brings up the only limitation of SVN I’ve come across so far. I use xattrs to store metadata in a multi-machine setup (more than one Mac can’t seem to share metadata very well; things like Finder comments aren’t always propagated across network shares). SVN doesn’t seem to store xattrs, which makes the really cool system I made for generating Jaq’s website fairly useless.

Disabling Menus

All of the cool kids are talking about Joel Spolsky’s post about disabling menus.

I agree with Gruber et al.: he’s dead wrong. Disabling a menu is a simple yet powerful method of implicitly informing the user what she can or cannot do at the moment. Hiding menus, à la the Windows/Office “Show only recently used items”, however, is truly evil. That is not even hiding disabled items, just hiding ones that haven’t been used recently. Now, if I use one machine, and then go to another, it doesn’t know which ones I have used recently. Bullshit Incarnate!

Anyway, real users don’t use menus. Other than to learn the keyboard shortcuts.

The Old New Thing had a post way back in 2004, which has a mostly sensible answer: When do you disable an option and when do you remove it? Not strictly dealing with just menu items, but regardless, still logical.

ulimit and time

For a Uni project, I have to write the same algorithm several different ways, in different programming paradigms. I also need to collect data on the execution of said programs. Since some algorithms may take a very long time on large data sets, I need to stop execution after 30 minutes of run time. And, since there need to be between 50 and 64 runs of each data size, I wanted to automate the process.

Using time other-command is the most obvious way to time the execution of any command-line application on any decent operating system. However, the time that comes with OS X is somewhat limited. From the man page:

NAME

time – time command execution

SYNOPSIS

time [-lp] utility

It only has two options, and they aren’t that useful. However, it is possible to use this command to time execution.

Because it outputs its timings to stderr, you need to do some tricky Python to capture it and save it to a file. I used os.popen3(), and then read from the stderr stream. This worked okay, but there isn’t really a way to stop execution after a certain time frame. You can try to use TimeoutFunctionException (I can’t remember where I got it from, but it’s cool), but it doesn’t work in this case, since the os.popenX calls run a sub-process, which continues to run after the exception is raised. Fail.
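For what it’s worth, here is a sketch of how the timeout could work with the subprocess module instead, which hands back the child process itself so it can actually be killed (note: the timeout argument to communicate() is a newer addition than the os.popenX era):

```python
import subprocess

def timed_run(cmd, limit):
    """Run cmd, killing it after `limit` seconds of wall-clock time.

    Returns (stderr, timed_out). Unlike os.popen3(), Popen gives us the
    child process object, so a timeout can actually terminate the child
    rather than leaving it running in the background.
    """
    proc = subprocess.Popen(cmd, stdout=subprocess.DEVNULL,
                            stderr=subprocess.PIPE, text=True)
    try:
        _, stderr = proc.communicate(timeout=limit)
        return stderr, False
    except subprocess.TimeoutExpired:
        proc.kill()
        proc.communicate()  # reap the dead child
        return None, True

# A quick command finishes inside the limit; a slow one is killed:
ok_err, ok_timed_out = timed_run(["sleep", "0.1"], limit=30)
slow_err, slow_timed_out = timed_run(["sleep", "60"], limit=1)
```

Since time(1) writes its measurements to stderr, the captured stderr stream is exactly what needs saving to the data file.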

About this time in my thought process, I came across a better time. GNU time allows you to select the output format (excellent, no need to parse the output quite so much!), and to output to a file instead of stderr/stdout (even better). It also allows you to append to a file instead of overwriting, and some other cool stuff.

But it doesn’t solve the issue of commands running too long. This was a killer: not only does the offending process continue to run, but new processes keep being started on top of it, so the machine clogs to a virtual halt.

Then I came across ulimit. This handy tool can limit the resources a user or process is able to use.

$ ulimit -t X; time -f %U -a -o {DATAFILE} {COMMAND} {ARGS}

This command will limit a command to X seconds of CPU time (or slightly less, since the time command itself uses some). It will then execute COMMAND, with arguments ARGS, and time the execution, appending the run time only, on a separate line, to the file DATAFILE.
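The whole pipeline can also be driven from inside Python, which makes automating the 50-64 runs straightforward: resource.setrlimit is the in-process equivalent of ulimit -t, applied in the child just before it execs the real command. A sketch (note this records wall-clock seconds via time.time(), rather than the %U user time above):

```python
import os
import resource
import subprocess
import tempfile
import time

def limit_cpu(seconds):
    """Build a preexec_fn applying the equivalent of `ulimit -t` in the child."""
    def setter():
        resource.setrlimit(resource.RLIMIT_CPU, (seconds, seconds))
    return setter

def run_batch(cmd, runs, limit, datafile):
    """Run cmd `runs` times, appending one wall-clock time per line to datafile."""
    with open(datafile, "a") as out:
        for _ in range(runs):
            start = time.time()
            subprocess.call(cmd, preexec_fn=limit_cpu(limit))
            out.write("%.3f\n" % (time.time() - start))

# e.g. three quick runs of a trivial command, capped at 5 CPU-seconds each:
datafile = os.path.join(tempfile.mkdtemp(), "times.dat")
run_batch(["sleep", "0"], runs=3, limit=5, datafile=datafile)
```

Because the limit is set in the child only, a runaway run is killed by the kernel without taking the harness down with it, and the loop simply moves on to the next run.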

Note that this is no excuse for not using code profiling. I have already run profile.run() on my code to work out where the slowdowns are in the Python versions, and then optimised them. This is more like the last phase: actual comparisons.
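For completeness, profile.run has a faster sibling, cProfile, that does the same job with less overhead. A minimal illustration against a toy workload (count_squares here is just a hypothetical stand-in for the real code):

```python
import cProfile
import pstats

def count_squares():
    """A toy workload to profile."""
    return sum(i * i for i in range(100000))

prof = cProfile.Profile()
result = prof.runcall(count_squares)  # profile a single call, keep its result

stats = pstats.Stats(prof).sort_stats("cumulative")
# stats.print_stats() dumps per-function timings, worst offenders first
```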