NSSegmentedControl selecting NSTabView

I discovered, quite by accident the other day, that it is possible to use an NSSegmentedControl to control which Tab of an NSTabView is displayed. Here is how to do it.

First of all, it is much easier to change the selected tab if you leave the tabs on to begin with. So, I would suggest building all of the NSTabView’s tabs first. I’ve done five, each with a different control.

View1.png View2.png

Now, you can alter the NSTabView so it doesn’t show the Tabs:

View1Tabless.png TabViewInspector.png

You can now add the NSSegmentedControl, and style it as you wish. I really like the Small Square styling.

SmallSquareNSTabView.png

Now to hook up the connection. There is an action on NSTabView called takeSelectedTabViewItemFromSender:, which can be hooked up to an NSSegmentedControl.

Connection.png

You will need to ensure that the initially selected segment and the initially selected tab are at the same index, which prevents having the control save its value between runs (or you might be able to, if you know more than me).
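The same hookup can also be done in code, if you prefer that to Interface Builder. Here is a minimal sketch in modern Swift — the outlet names and the action method are my own invention, not anything the post above prescribes:

```swift
import Cocoa

class MyWindowController: NSWindowController {
    // Hypothetical outlet names - connect these in Interface Builder.
    @IBOutlet weak var segmentedControl: NSSegmentedControl!
    @IBOutlet weak var tabView: NSTabView!

    override func windowDidLoad() {
        super.windowDidLoad()
        // Keep the initial segment and the initial tab at the same index,
        // as noted above - otherwise they start out of sync.
        segmentedControl.selectedSegment = 0
        tabView.selectTabViewItem(at: 0)
        segmentedControl.target = self
        segmentedControl.action = #selector(segmentChanged(_:))
    }

    // Does the same job as wiring the control to
    // takeSelectedTabViewItemFromSender: in Interface Builder.
    @objc func segmentChanged(_ sender: NSSegmentedControl) {
        tabView.selectTabViewItem(at: sender.selectedSegment)
    }
}
```

This is AppKit UI code, so it only makes sense inside a running Mac app; the Interface Builder connection described above achieves exactly the same thing with no code at all.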

Artificial Intelligence and Ebby

What is AI? What implications does complexity theory have for AI? Give an example of something that is easy for humans but hard for computers. Explain.

There are many definitions of Artificial Intelligence. I think the most amusing is “AI is whatever humans can do, but computers cannot do yet.” - which reflects the idea that truly intelligent machines are not possible - there is always something that separates the machines from us.

There are four main types of systems that could be said to be intelligent - those that think like humans, those that act like humans, those that think rationally, and those that act rationally. (Russell & Norvig). One typical test of an artificially intelligent agent is the Turing Test - where a machine and a human both communicate (using some sort of text-based interface, at this stage) with another human, the judge. If the judge cannot tell which of the candidates is the computer, and which is the human, then the computer is said to have passed the Turing Test.

One task that is generally much harder for machines than humans is language interpretation and acquisition. Steven Pinker, in his book “The Language Instinct”, argues that we are hardwired to develop language complete with the syntactical rules that most languages contain. Indeed, in instances where children were grouped together, they did develop - in one generation - a complete creole that was just as complex as any of their parents’ languages.

Even if we simplify the problem of understanding language to text-based (rather than spoken) communication, it is still a problem that is hard for computers to solve. Indeed, it appears that more progress has been made in speech recognition than in language understanding in recent times. Google now allows an iPhone user to speak words into their device, and generates a search query against its database from the words that were spoken.

If we move back to the definition of AI as a machine thinking just like a human, then we arrive at the question: can a machine ever truly think like a human? Can it ever have a mind? Can it be intelligent? Are these the same thing?

I personally believe that in principle there is nothing preventing a computer, if it were complex enough, from being for all intents and purposes indistinguishable from a human mind. The so-called singularity point - where computing power reaches and then exceeds that of the human mind - is approaching quickly, and may provide us with untapped opportunity. If it walks like a duck and quacks like a duck…

I think that stating outright that a robot or computer cannot think is foolish. Even if I state that I can think (and wouldn’t a computer that was trying to make you believe it was a human do the same!), how do I really know that I do think? Even more to the point, how do I know that you can think? Some years ago, an article appeared in New Scientist about virtual worlds. I’m not talking about World of Warcraft, or even Second Life, I’m talking here about a complete simulation of a universe. Eventually, a civilisation will garner enough computing power to create such a simulation, populated by inhabitants that are self-aware (or programmed to believe they are self-aware, which might well be the same thing!). Once that point has been reached, then said technological society will be able to generate numerous such simulations. And some of those simulations will be able to create simulations, if they are left to run long enough.

So, once one civilisation reaches this tipping point, there will spring into existence a multitude of simulations, and perhaps only one real universe: statistically speaking, are we likely to be that special one universe that hasn’t reached that point yet? We’ve already had rude shocks when we discovered that Earth isn’t at the centre of the universe, or even necessarily that special. Why should this be the special universe?

So, we’ve decided that we are a simulation. Since you and I both think that we can think, clearly simulated beings can think. Thank you for your time.

In all seriousness (and I don’t think that I subscribe to the above argument), I’ll go back to the duck thing. Who cares if an intelligence is “real” or just simulated? As long as it makes the decisions that would be made in a similar way (or perhaps makes better decisions - although we know from Asimov’s stories that the Robots will eventually overthrow us; I for one welcome our new Robot Overlords), who cares? Dismissing an intelligence because it runs on silicon is just siliconism. Think racism or sexism, if you like.

If I assess most people I come across (or virtually all, if you read the YouTube comments), then they are not intelligent. They’d certainly fail the Turing Test when compared to some of the more advanced “AI” systems out there. They’d probably fail at quite a few other tests while they are at it, but that’s another story. I do recall reading some years ago that the amount of intelligence on a planet stays the same - it’s just that the number of people increases, so the average goes down. (I think this one was Herbert, not Asimov, but I can’t recall. It does feel a bit “Foundation”.)

So, is the Turing Test a reasonable test? It certainly centres on language - something that lots of “ppl sm 2 not get now”. But I think it is still valid to some extent. Something that can have a level of success with the Turing Test, be it a human or machine, clearly has some level of intelligence, whether that intelligence is “natural” or simulated. But I’m not sure that it is a really good measure of intelligence.

Other authors have examined a range of “other” intelligences. Damasio, for instance, discusses Emotional Intelligence and the “EQ”, although this for me was somewhat unfulfilling. IQ tests are out of fashion now; whilst they are in many cases culturally biased, this is no different to the bias we have given to language in the Turing Test. All of these tests are based around human intelligence, and the secret here is that we think this is the only type of intelligence - because we don’t know about the other types.

My dog is almost certainly intelligent, to some degree. She remembers things, and when she could see and hear properly, was much more aware of her environment than at times I was. She still has a much better sense of smell than I ever could (or would want, but that’s another matter). She would fail a Turing Test, yet I would say she was smarter than many students I have taught…

Well, this started out as a practice essay for an exam I have tomorrow. Not sure that most of it will be that relevant. But it was fun.

1Password Licenses

I have a couple of licenses for 1Password to give away. Leave a comment with your name and email if you are interested.

Mac only!

Update: all have been given away. I got quite a few responses on Whirlpool, before my thread was shut down.

Pragmatic Programmers discount.

Pragmatic Programmers are having a sale this Friday. Use the coupon code ‘turkey’ to get a 25% discount.

A web-focused Git workflow

A nice post from Joe Maller about web-focused workflow, using Git.

After months of looking, struggling through Git-SVN glitches and letting things roll around in my head, I’ve finally arrived at a web-focused Git workflow that’s simple, flexible and easy to use.

From A web-focused Git workflow

I’m thinking about doing something similar with Mercurial, since that’s my DVCS of choice. Most of the concepts are directly comparable, but the commands are a bit different. When I’m done, I’ll write it up.

(This is more a note to self, than anything else!)

iSync Menu

iSyncMenu.png

This stupid menu keeps appearing. I’ve turned it off several times, but it reappears.

Doesn’t seem to be every time I reboot.

rdar://6384278

Stack Overflow and asking questions.

I have only asked one question on Stack Overflow, and I really only did that to ensure I got the Beta badge.

Most of the questions on there can be solved by googling other sites. I find that to be a better way of solving problems for me, rather than asking a question.

I’m not trying to say I know the answers to everything, it is just that (with Google), one doesn’t need to ask that many questions. One just needs to search. And search clearly.

WOOT!

I seriously, seriously Want One Of These:

51dmPeiGT2L._SL500_AA280_.jpg

(via Twitter @fraserspeirs, available from Amazon)

Emphasis

fail-owned-quotation-marks-correction-sign-fail.jpg

LMAO.

Via FAIL blog.

Whoa, Google. Not sure I like that.

Searching for some info about iSoftPhone today, I noticed something odd in the Google results page:

google-resutls-visited.png

The two sites I had previously visited via Google are shown in the list, along with the date I last visited them.

Now, this is getting a little spooky, since this information isn’t stored on my computer. The fact the links are purple is enough of a reminder for me.

Oh, and I thought of another thing. Those dates are actually wrong. I’m located “in the future” according to Google. I actually visited those sites on Nov 10.