Archive for the ‘Artificial Intelligence’ Category

Making perception primary.

July 28, 2017

I’ve spent a long time wondering about the physical basis of perceptual entities. There are many possible types of perceptual entity: visual, auditory, the perception of time; indeed, every possible form of mental activity. I’ve long been thinking about how the physical nature of the brain can perform physical activities that are then interpreted as mental events. This is a hard problem: how do mental events supervene on physical events? No-one has the answer.

But now I’m wondering if this is the wrong question (and whether that’s why it’s quite so hard). We are very attached to our view of physical reality, whether that’s the physical nature of matter (quarks, electrons, atoms, molecules, or just pieces of stone and wood…) or of energy (sound, music, light, and so on), so we look to physical reality to provide a basis for mental events. We know that physical reality is tricky: the physicists tell us that our everyday view of solid matter is not the only reality, that it’s largely empty space. And we know that light is electromagnetic radiation within a small band of wavelengths.

In fact, all that we directly perceive is mental events. Everything else is provided to us as mental events, whether directly through our senses, less directly through instrumentation that maps something invisible to something sensory, less directly still through processing signals, or simply by reading about it. So let’s start at the other end and make the mental events primary: let’s start by assuming the reality of the mental events, and not try to explain them away as accidental results of some physical process that’s doing something else.

It’s not that I don’t believe there is some physical correlate of mental events (I do: I can’t accept that a mental event has no physical correlate at all; to do so would be to accept the possibility of disembodied mental systems, which for this scientist seems a step too far right now). What I would suggest is that by making the mental events primary, we start to see just how far our “artificial intelligence” systems are from minds. Yes, we can map vectors to vectors, and learn about the deep structure of visual and auditory information; yes, we can build systems that can perform certain types of mathematical reasoning, and create plans. But no, we can’t provide any sort of autonomous volition, not even the volition that an amoeba has when swimming up a concentration gradient of some nutrient. We might be able to recognise the gradient (maybe – actually, that’s still quite hard), but we wouldn’t know that we wanted to swim up it.
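The amoeba example actually sharpens the point: the gradient-following mechanism itself is trivial to program, which is exactly why it isn’t volition. Here is a toy sketch (my own illustration, not from the post, with an invented one-dimensional nutrient field) of an agent that climbs a concentration gradient by finite differences:

```python
def nutrient(x):
    """Hypothetical 1-D concentration field, peaking at x = 5."""
    return -(x - 5.0) ** 2

def climb(x, steps=50, eps=0.01, rate=0.1):
    """Follow the local gradient, estimated by central finite differences."""
    for _ in range(steps):
        grad = (nutrient(x + eps) - nutrient(x - eps)) / (2 * eps)
        x += rate * grad  # step in the direction of increasing concentration
    return x

print(round(climb(0.0), 2))  # converges to the peak at 5.0
```

The few lines above do everything the amoeba’s behaviour does from the outside; what they conspicuously lack is any sense in which the agent *wants* to be at the peak.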

I think we’re a whole lot further from the Singularity than is currently assumed. Yes, we can build awfully clever automata, and make them perform some sparkling recognition tricks, but little more than this.


Artificial Intelligence: are we nearly there yet?

May 2, 2013

Last night I gave a public lecture at my University, with the title above. It went well: there were about 50 people, between about 11 and 75 in age, with some academics, some teachers, and quite a few whom I simply didn’t know. I spoke to my slides for about 45 minutes, then opened the floor to questions: and there really were a lot. I’m happy with the talk. I had been worried about it, for it’s a very different thing to talk to an audience that’s come out in the evening than to lecture to students. But this went well. Pitching it was an issue: how can one present material about artificial intelligence that fits all these people? I tried, and I think I succeeded. I had a very interesting discussion with a 17-year-old lad at the end: I’d been saying that the concept of the AI Singularity was predicated on the concept of abstract intelligence – which is something I really don’t believe in. But he pointed out that there was nothing in my argument to stop an embodied intelligence from building a more intelligent embodied intelligence, and that this could still be at the root of a positive-feedback intelligence loop. I couldn’t fault his logic. So now I’m not sure whether to worry about the Singularity or not! Actually, Jurgen Schmidhuber thinks I should stop worrying and look at what’s already been done!

It took me a little while to work out why I was so pleased to have given the talk: then I remembered going to some public lectures at Glasgow University in the mid-1960s, as a teenager, and really enjoying them. It is good to give something back!

Note: I’ve now written a 1000-word extract on AI, possibly for a newspaper – though it doesn’t mention the Singularity. And now the Deccan Herald has published it!