Archive for the ‘Artificial Intelligence’ Category

On redefining intelligence, and adding volition.

February 20, 2024

Every time we get near a machine that displays intelligence we redefine intelligence. By and large we really don’t like the idea that machines might become as intelligent as we humans are. Or think we are, at any rate!

So here, I’ll try and define a few tasks, and discuss what sorts of intelligence they might need.

Gaming

  • The ability to play noughts and crosses? (simple games; see the sketch after this list)
  • The ability to play chess/go/etc. (perfect-information games)
  • The ability to play poker/backgammon/cards (games of skill and chance) [advantage: purely cognitive, as we are not considering moving chess pieces (easier) or picking up cards off a green baize table cover (harder)].
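As an aside on why noughts and crosses counts as “simple”: the whole game tree is tiny (at most 9! = 362,880 move sequences), so perfect play needs nothing but exhaustive search. A minimal minimax sketch in Python, purely for illustration:

```python
# Minimal sketch: exhaustive minimax for noughts and crosses.
# The whole game tree is small enough to search completely, so
# "perfect play" here requires no learning and no insight.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),      # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),      # columns
         (0, 4, 8), (2, 4, 6)]                 # diagonals

def winner(board):
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Best (score, move) for `player`: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w is not None:
        return (1 if w == player else -1), None
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0, None                          # board full: a draw
    other = 'O' if player == 'X' else 'X'
    best_score, best_move = -2, None
    for m in moves:
        board[m] = player
        score, _ = minimax(board, other)        # opponent's best reply
        board[m] = ' '
        if -score > best_score:                 # their loss is our gain
            best_score, best_move = -score, m
    return best_score, best_move

print(minimax([' '] * 9, 'X'))                  # (0, 0): a draw under perfect play
```

Chess and go differ from this only in scale, but that scale is exactly what makes exhaustive search impossible, and something more interesting necessary.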

Robot-based definitions:

  • The ability to find and pick up an object?
  • The ability to find and pick up an object, and then place it in the appropriate place.
  • The ability to find and pick up an object, and then place it in the appropriate place in a disordered environment.
  • Given a problem, to be able to find objects, manipulate them, work on them, place them, in order to solve the problem.
  • Given a problem, to decide to solve it using available objects and tools: to find those objects, manipulate them, work on them, and place them, in order to solve the problem.

More robotics …

  • The ability of a robot to follow a line on the ground (see the control-loop sketch after this list).
  • The ability to find the way to the door.
  • The ability to maneuver from one place to the door in a cluttered environment?
  • The ability to find the way to the door (in a cluttered environment), and then open it, and exit the room.
  • The ability to find the way to the door (in a cluttered environment), and then open it, and exit the room for a good reason.
  • The ability to find the way to the door (in a cluttered environment), and then open it, and exit the room of its own volition (!)
  • The ability to cross a considerable geographic distance (with a power source).
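Even the first item on that list needs less “intelligence” than one might expect: line following can be a few lines of proportional feedback control. A minimal sketch, assuming a hypothetical robot interface with a line-position sensor and two drive wheels (read_line_offset and set_wheel_speeds are stand-ins, not any real API):

```python
# Minimal sketch of a proportional line follower. The two functions
# below are hypothetical stand-ins for whatever sensor/motor
# interface a real robot would expose.

import time

BASE_SPEED = 0.3   # forward speed (arbitrary units)
KP = 0.8           # proportional gain: how hard to steer back to the line

def read_line_offset():
    """Hypothetical: signed offset of the line from centre, in [-1, 1]."""
    raise NotImplementedError

def set_wheel_speeds(left, right):
    """Hypothetical: command the two drive wheels."""
    raise NotImplementedError

def follow_line():
    while True:
        offset = read_line_offset()                 # positive: line is to the right
        correction = KP * offset
        set_wheel_speeds(BASE_SPEED + correction,   # speed up the left wheel...
                         BASE_SPEED - correction)   # ...to steer towards the line
        time.sleep(0.02)                            # ~50 Hz control loop
```

The distance between this loop and “exit the room of its own volition” is the whole point of the gradation.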

And more down-to-earth robotics…

  • The ability to be useful in a kitchen environment (like a sous chef).
  • The ability to be really useful in a kitchen environment (like a cook or a chef).
  • The ability to take a passive part in caring for a person.
  • The ability to take an active part in caring for a person.

Science/engineering problems

  • The ability to answer textual questions sensibly.
  • The ability to answer technical questions correctly (with reference to available information).
  • The ability to invent/create new solutions to technical problems. (Hard to define, as novelty often lies in the combination of existing answers.)

What do these graded problems tell us?

The problems range from what we now see as simple issues, through problems that current (particularly narrow-AI) systems and robots can manage, to much more difficult activities. Particularly in robotics, where the system interacts with the everyday world, the problems are much harder than in purely cognitive areas. But that ordering is new, and really reflects the availability of huge amounts of computing power.

It suggests that the next big push is on the manipulation side, the part we humans tend to take for granted because these abilities are less specifically human: mobility, manipulating natural objects, navigating the world, finding food and shelter. We seem much nearer to solving the problems of cognition, of solving abstract (or rather, abstracted) problems, than to problem solving in the practical sense. We need to think about the cerebellum as well as the cortex and neocortex.

Some of the problems go much further and require volition. This is different: a stronger version of intelligence that goes beyond the usual machine definition (though not beyond the human definition). AI (and AGI) cannot currently manage this. Yet there are goal-oriented planning systems (and have been for some time), at the currently less fashionable end of AI research. Once you mix these with capable systems (capable in the sense of acting within a real environment), you run the danger of a goal-oriented system performing acts in pursuit of its goal that are dangerous in a very real sense.
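To make the planning half concrete: a goal-oriented planner, stripped to its essentials, is a search over world states, using actions defined by preconditions and effects. A toy STRIPS-style sketch, reusing the door example from the list above (illustrative only, not any particular planner):

```python
# Toy sketch of a goal-oriented (STRIPS-style) planner: breadth-first
# search over world states for a sequence of actions whose combined
# effects achieve the goal. Illustrative only.

from collections import deque

# Each action: (name, preconditions, facts added, facts removed)
ACTIONS = [
    ("go_to_door", {"in_room"},              {"at_door"},   set()),
    ("open_door",  {"at_door"},              {"door_open"}, set()),
    ("exit_room",  {"at_door", "door_open"}, {"outside"},   {"in_room"}),
]

def plan(state, goal):
    """Return a list of action names taking `state` to a superset of `goal`."""
    frontier = deque([(frozenset(state), [])])
    seen = {frozenset(state)}
    while frontier:
        current, steps = frontier.popleft()
        if goal <= current:
            return steps
        for name, pre, add, rem in ACTIONS:
            if pre <= current:
                nxt = frozenset((current - rem) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None  # goal unreachable with these actions

print(plan({"in_room"}, {"outside"}))
# -> ['go_to_door', 'open_door', 'exit_room']
```

The planning itself is old technology; the danger appears when the action set is wired to real actuators rather than toy predicates.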

It is one thing to envision an active caring robot cooking some vegetables for its client, and interacting with knives or kitchen equipment in a complex and self-directed way, and quite another to imagine a robotic soldier seeking out and eliminating the “enemy”. Yet if we were capable of doing the one, we would likely be capable of doing the other.

On turning 65

October 6, 2017

Well, here I am: 65 on the 3rd of October, Tag der deutschen Einheit (German Unity Day) for those in Germany, but no public holiday here in Scotland. And now what?

I’m planning to go down to 20% of full time at the end of this month (it was to be 50%, but I reckoned I’d end up working 100% for 50% of the salary; at least at 20% I can say “no” more easily). The plan is to work on various research projects (on the silicon cochlea, on the neuro-robotics project, and on the contextual learning project, to name three), and to do a little teaching too, but not too much, and, more importantly, to drop all the admin roles (like being in charge of impact, or of research within the Department). But it may not all be so easy.

We’ve lost 2.8 staff out of a small group: 0.8 is me, 1.0 is one staff member who has gone to London, and 1.0 is another who has been appointed to a promoted post at an ancient Scottish university. All quite normal, but unusual for us in that it all happened so close together. So I suspect there may be pressure on me to do more teaching, marking and so on…

But if required, I can resist!

Meanwhile, I’m aware I’m much less busy than I was at this time last year or the year before. Though I’m still officially full time, it feels like rather less than that: I’m only working 35 hours a week, rather than the 50-odd I was usually working. And I can actually write some code again. So far, the main beneficiaries seem to have been editors of journals, because I’ve agreed to review rather more than I usually do, but I’ll need to keep that within limits.

I’m also trying to take up other interests; after all, after 43 years in Computing, there might be other things to do. So I’m learning the clarinet, as well as playing piano with some friends who seem quite interested in getting a few gigs together… watch this space (and SoundCloud too!)

Making perception primary.

July 28, 2017

I’ve spent a long time wondering about the physical basis of perceptual entities. There are lots of possible types of perceptual entity: visual, auditory, the perception of time: indeed, every possible form of mental activity. I’ve always been thinking about how the physical nature of the brain can perform physical activities that are then interpreted as mental events. This is a hard problem: how do mental events supervene on physical events? No-one has the answer.

But now I’m wondering if this is the wrong question (and whether that’s why it’s quite so hard). We are very attached to our view of physical reality, whether that’s the physical nature of matter (quarks, electrons, atoms, molecules, or just pieces of stone and wood…) or of energy (sound, music, light, and so on), so we look to physical reality to provide a basis for mental events. We know that physical reality is tricky: the physicists tell us that our everyday view of solid matter is not the only reality, that it is largely empty space. And we know that light is electromagnetic radiation within a small range of wavelengths.

In fact, all that we directly perceive is mental events. Everything else is provided to us as mental events, whether directly through our senses, or less directly through instrumentation that maps something invisible to something sensory, or less directly still through processing signals, or simply through reading about it. So let’s start at the other end, and make the mental events primary: let’s start by assuming the reality of mental events. Let’s not try to explain them away as accidental results of some physical process that’s doing something else.

It’s not that I don’t believe there is some physical correlate of mental events (I do: I can’t accept that a mental event has no physical correlate at all; to do so would be to accept the possibility of disembodied mental systems, which for this scientist seems a step too far right now). What I would suggest is that by making the mental events primary, we start to see just how far our “artificial intelligence” systems are from minds. Yes, we can map vectors to vectors, and learn about the deep structure of visual and auditory information; yes, we can build systems that can perform certain types of mathematical reasoning, or create plans. But no, we can’t provide any sort of autonomous volition, not even the volition that an amoeba has when swimming up a concentration gradient of some nutrient. We might be able to recognise the gradient (maybe: actually, that’s still quite hard), but we wouldn’t know that we wanted to swim up it.
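The mechanical half of the amoeba’s behaviour is, in fact, easy to write down; it is the wanting that we cannot program. A sketch of pure gradient ascent (the nutrient field here is made up purely for illustration):

```python
# Sketch: mechanical gradient ascent, i.e. the "swimming up the
# gradient" without the "wanting to". The concentration field is
# invented for illustration; it peaks at (3, 1).

def concentration(x, y):
    return -((x - 3.0) ** 2 + (y - 1.0) ** 2)

def gradient_step(x, y, step=0.1, eps=1e-4):
    """Estimate the local gradient numerically and move a little way up it."""
    dx = (concentration(x + eps, y) - concentration(x - eps, y)) / (2 * eps)
    dy = (concentration(x, y + eps) - concentration(x, y - eps)) / (2 * eps)
    return x + step * dx, y + step * dy

x, y = 0.0, 0.0
for _ in range(100):
    x, y = gradient_step(x, y)
print(round(x, 2), round(y, 2))   # ends up near the peak at (3, 1)
```

Nothing in that loop wants anything: it simply executes.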

I think we’re a whole lot further from the Singularity than is currently assumed. Yes, we can build awfully clever automata, and make them perform some sparkling recognition tricks, but little more than this.


Artificial Intelligence: are we nearly there yet?

May 2, 2013

Last night I gave a public lecture, at my University, with the title above. It went well: there were about 50 people, between about 11 and 75 in age, with some academics, some teachers, and quite a few whom I simply didn’t know. I spoke to my slides for about 45 minutes, then opened the floor to questions: and there really were a lot. I’m happy with the talk; I had been worried about it, for it’s a very different thing to be talking to an audience that’s come out in the evening from lecturing to students. But this went well. Pitching it was an issue: how can one present material about artificial intelligence that fits all these people? I tried, and I think I succeeded.

I had a very interesting discussion with a 17-year-old lad at the end: I’d been saying that the concept of the AI Singularity was predicated on the concept of abstract intelligence, which is something I really don’t believe in. But he pointed out that there was nothing in my argument to stop an embodied intelligence from building a more intelligent embodied intelligence, and that this could still be at the root of a positive-feedback intelligence loop. I couldn’t fault his logic. So now I’m not sure whether to worry about the singularity or not! Actually, Jürgen Schmidhuber thinks I should stop worrying and look at what’s already been done!

It took me a little while to work out why I was so pleased to have given the talk: then I remembered going to some public lectures at Glasgow University in the mid-1960s, as a teenager, and really enjoying them. It is good to give something back!

Note: I’ve now written a 1000-word extract on AI, possibly for a newspaper, though it doesn’t mention the singularity. And now the Deccan Herald has published it!