
Revisiting Artificial Intelligence.

November 7, 2024

[This piece was inspired by reading about the University of Maryland’s new Artificial Intelligence Interdisciplinary Institute at Maryland.]

I’ve written and spoken quite a lot about AI: about how its meaning has changed since the term was introduced in 1956, about technical issues, about neuromorphic implementations, and even about the possible dangers of AI. But now I want to write about the new directions that neurobiologically inspired AI is taking.

AI (as it currently stands, as of November 2024) is the result of the convergence of several technologies.

  • the internet itself (allowing free and easy interconnectivity)
  • lots of people and organisations using the internet, and posting vast amounts of data (text, sound, graphics, images, etc.) on it, making it freely available
  • many years of research into adaptive systems such as neural networks
  • and, of course, the electronic technologies that underlie cheap digital computing and communications, the enabling technology for all of the above.

This technical convergence is having huge social effects well outside of technology.

The industrial revolution changed what was valued in people: being strong was no longer important when machines could provide power. Are we coming to a point where intelligence will no longer be important when machines can provide it? And if so, what will matter? What do we actually want from AI? And who wants it, who will control it, and to what end?

It is, of course, not quite that simple. Strength is quite easy to define, but intelligence is much more difficult. Chess-playing machines don’t replace chess players (even if they could) because the interest is in the game itself: the existence of a perfect chess-playing machine would not stop people playing chess. And the nature of intelligence in chess-playing machines is not directly applicable to other problems. We currently have machines that learn statistical patterns and regularities from enormous volumes of data, and we use these in conjunction with generative systems which produce text without any understanding of it. These systems are trained, and this training uses many thousands of computers over a long period of time. It is so expensive that very few companies can undertake it. Accessing or using these trained machines is much less expensive, but relies on the trained system being made available.

Fortunately (or unfortunately) progress in AI is rapid. I am not referring to the use of AI, but to the underlying ideas. And these new ideas will revolutionise the way AI can be used. There is another convergence of technologies at work here: the convergence of neurophysiology and microelectronics. For many years researchers have worked to understand how neurons (brain cells) work and communicate, and this research is starting to produce results that could underpin a much better understanding of real intelligence – and hence allow better approximations of artificial intelligence. Current systems use a very basic concept of the neuron (in neural networks), but new ideas – specifically two-point neurons, modelled on pyramidal neocortical neurons – are arriving. These are much more powerful, for deep mathematical/statistical reasons, and one result of this more powerful algorithm is that less computing power should be needed to make them work. This could enable the democratisation of training AI systems.

Perhaps more importantly, some researchers suggest that the way in which these neurons co-operate (rather than compete) may be critical in making the whole system work. Bill Phillips’ book “The Co-operative Neuron” analyses how this may work in real brains, and it is only a matter of time before the concepts are implemented electronically. This has huge implications, because for the first time we begin to understand the way in which the brain produces the mind. And our electronic technology may be able to recreate this. Such synthetic intelligence could be very different from the relatively unsophisticated systems that we currently call intelligent.
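To make this concrete, here is a minimal sketch (in Python) of how a two-point neuron differs from the standard point neuron used in today’s networks. The specific modulation function is my own illustrative choice, not the published model: the essential idea is simply that apical (contextual) input amplifies or attenuates the cell’s response to its basal (driving) input, rather than just adding to it.

```python
import numpy as np

def point_neuron(drive, w):
    """Standard point neuron: one weighted sum through one nonlinearity."""
    return np.tanh(np.dot(w, drive))

def two_point_neuron(basal, w_basal, apical, w_apical, k=1.0):
    """Toy two-point neuron (an illustrative sketch, not the published model).

    The basal compartment carries the driving (feedforward) signal; the
    apical compartment carries contextual input from other neurons.
    Context modulates the response to the drive multiplicatively: it can
    amplify or suppress the cell's output, but cannot drive it alone.
    """
    r = np.dot(w_basal, basal)       # driving input (basal dendrites)
    c = np.dot(w_apical, apical)     # contextual input (apical tuft)
    gain = 1.0 + k * np.tanh(c)      # gain stays in [0, 2] for k = 1
    return np.tanh(r * gain)

rng = np.random.default_rng(0)
x = rng.normal(size=8)               # driving signal
ctx = rng.normal(size=8)             # context from neighbouring neurons
w_b, w_a = rng.normal(size=8), rng.normal(size=8)
print(point_neuron(x, w_b), two_point_neuron(x, w_b, ctx, w_a))
```

The asymmetry is the point of the sketch: remove the context and the neuron still responds (weakly) to its drive; remove the drive and the context alone produces nothing. That is one way of expressing what co-operation, rather than competition, means here.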

This makes the development of interdisciplinary institutes like the one recently set up at the University of Maryland timely and critical. We urgently need the humanities here. These developments are too important to be left to the technologists.

On redefining intelligence, and adding volition.

February 20, 2024

Every time we get near a machine that displays intelligence, we redefine intelligence. By and large we really don’t like the idea that machines might become as intelligent as we humans are. Or think we are, at any rate!

So here, I’ll try and define a few tasks, and discuss what sorts of intelligence they might need.

Gaming

  • The ability to play noughts and crosses (simple games)
  • The ability to play chess/go/etc. (perfect-information games)
  • The ability to play poker/backgammon/cards (games of skill and chance) [advantage: purely cognitive, as we are not considering moving chess pieces (easier) or picking up cards off a green baize table cover (harder)]

Robot-based definitions:

  • The ability to find and pick up an object.
  • The ability to find and pick up an object, and then place it in the appropriate place.
  • The ability to find and pick up an object, and then place it in the appropriate place in a disordered environment.
  • Given a problem, to be able to find objects, manipulate them, work on them, place them, in order to solve the problem.
  • Given a problem, to decide to solve the problem using available objects & tools, to be able to find objects, manipulate them, work on them, place them, in order to solve the problem.

More robotics …

  • The ability of a robot to follow a line on the ground.
  • The ability to find the way to the door.
  • The ability to manoeuvre from one place to the door in a cluttered environment.
  • The ability to find the way to the door (in a cluttered environment), and then open it, and exit the room.
  • The ability to find the way to the door (in a cluttered environment), and then open it, and exit the room for a good reason.
  • The ability to find the way to the door (in a cluttered environment), and then open it, and exit the room of its own volition (!)
  • The ability to cross a considerable geographic distance (with a power source).

And more down-to-earth robotics…

  • The ability to be useful in a kitchen environment (like a sous chef).
  • The ability to be really useful in a kitchen environment (like a cook or a chef).
  • The ability to take a passive part in caring for a person.
  • The ability to take an active part in caring for a person.

Science/engineering problems

  • The ability to answer textual questions sensibly.
  • The ability to answer technical questions correctly (with reference to available information).
  • The ability to invent/create new solutions to technical problems (hard to define, as novelty is often in the combination of existing answers).

What do these graded problems tell us?

The problems range from what we now see as simple issues, to problems that robots (particularly narrow-AI systems/robots) can do, to much more difficult activities. Particularly in robotics, where the system interacts with the everyday world, the difficulties are much greater than in purely cognitive areas. But that ordering is new, and really reflects the availability of huge amounts of computing power.

It suggests that the next big push is on the manipulation side, the part we humans tend to take for granted precisely because it is less specifically human: mobility, manipulating natural objects, navigating the world, finding food and shelter. We seem much nearer to solving the problems of cognition – abstract (or rather, abstracted) problems – than to solving problems in the practical sense. We need to think about the cerebellum as well as the cortex and neocortex.

Some of the problems go much further and require volition. This is different: a stronger version of intelligence that goes beyond the usual machine definition (though not beyond the human definition). Current AI (and AGI) is not able to manage this successfully. Yet there are goal-oriented planning systems (and have been for some time), at the currently less fashionable end of AI research. Once you mix these with capable systems (capable in the sense of being active within a real environment), you run the danger of a goal-oriented system performing acts in pursuit of that goal that are dangerous in a very real sense.
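For readers who haven’t met them, here is a toy sketch of what a goal-oriented planning system does. The world model and the actions are invented purely for illustration (a miniature version of the door example above); real planners (STRIPS and its descendants) are far more sophisticated, but the principle is the same.

```python
from collections import deque

# STRIPS-style actions: (name, preconditions, facts added, facts removed).
# The action set is invented for illustration only.
ACTIONS = [
    ("goto_door", {"in_room"}, {"at_door"}, set()),
    ("open_door", {"at_door"}, {"door_open"}, set()),
    ("exit_room", {"at_door", "door_open"}, {"outside"}, {"in_room"}),
]

def plan(start, goal):
    """Breadth-first search for a sequence of actions that reaches the goal."""
    frontier = deque([(frozenset(start), [])])
    seen = {frozenset(start)}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:                      # all goal facts hold
            return steps
        for name, pre, add, rem in ACTIONS:
            if pre <= state:                   # preconditions satisfied
                nxt = frozenset((state - rem) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None                                # goal unreachable

print(plan({"in_room"}, {"outside"}))
# -> ['goto_door', 'open_door', 'exit_room']
```

Note that nothing in plan() cares what the actions mean: swap in a less benign action set and the same search will just as happily find a plan for that, which is precisely the worry.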

It is one thing to envision an active caring robot cooking some vegetables for its client, and interacting with knives or kitchen equipment in a complex and self-directed way, and quite another to imagine a robotic soldier seeking out and eliminating the “enemy”. Yet if we were capable of doing the one, we would likely be capable of doing the other.

Artificial Intelligence and its Dangers.

December 27, 2023

An apology

I’ve been really remiss: I haven’t updated this blog for more than two years. But perhaps today (Storm Gerrit outside, with wind and heavy rain) is a good day to get back to it.

Artificial Intelligence

I recently gave a talk on Artificial Intelligence (AI) to a group of retired academics (Stirling University Retired Staff Association, SURSA). As an academic, I worked on related matters for many years, and remain involved in AI. This was a non-technical talk, to academics from many disciplines. It went down well, and led to lots of discussion.

It also led to me talking to quite a variety of people about AI after the talk, as I felt more able to take a broader perspective on AI than I had as a researcher, where I necessarily concentrated on some tightly defined areas. This led me to think more about the possible dangers of AI research in the near future.

What are the dangers of AI?

A great deal has been written about the existential dangers of AI: I’m less than convinced. Firstly, because AI, at least as matters stand, is only intelligent in certain senses. It lacks volition (or will) entirely, which to me means that it’s not about to take over or enslave the human population of the Earth. (I note that intelligence with volition, as is to be found in humans, has indeed taken over and enslaved parts of the animal and plant kingdoms).

Secondly, current AI systems are generally deep convolutional neural networks (DCNNs) mixed with systems for generating text, and sometimes logical inference engines. These are made up of a number of components which, while not new, can take advantage of major advances in computing technology, as well as the vast amount of digitised data available on the internet. Often they are used as a user-friendly gateway on to the WWW, enabling quite complex questions to be asked and answered, instead of searching the internet using a set of keywords. This is an intelligent user interface, but not intelligent in human terms.

Of course, such systems can replace humans in many tasks which are currently undertaken by well-educated people (including the chattering classes who write newspaper articles and web blogs!). Tasks like summarising data, researching past legal cases, or writing summaries of research in specific areas might become automatable. This continues a process started with the spinning jenny and running through the industrial revolution, where human strength and some artisanal skills ceased to be a qualification for work. While this is certainly a problem, it is not an existential one. The major question here is who has the power: who (and how many) will benefit from these changes? That makes this a political issue.

I am more concerned about volition.

As matters stand this is not an area that is well understood. Can intelligence, volition and awareness be separated? Can volition be inserted into the types of AI system that we can currently build?

I don’t think this is possible with our current level of understanding, but there is a great deal of ongoing research in neuroscience and neurophysiology, and it is certainly not beyond imagination that this will lead to a much more scientifically grounded theory of mind. And volition could well be understood sufficiently to be implemented in AI systems.

Another possibility is that we will discover that volition and awareness are restricted to living systems: but are we trying to build synthetic living systems? Possibly, but this is an area that has huge ethical issues. Electronic systems (including computers) are definitely not alive in any sense, but the idea of interfacing electronics to cultured neural systems is one that has been around for quite some time (though the technical difficulties are considerable).

In conclusion: should we be worried – or how worried should we be?

There are many things that we can be worried about, ranging from nuclear war, to everyday land wars, to ecological catastrophes, to pandemics. Where might AI and its dangers fit among these? From an employment point of view, AI has the capacity to remove certain types of job, but equally to create novel forms of job, dealing with data in a different way. I am not a great believer in the ability of governments, large or small, to influence technology. Generally, governments are well behind in technological sophistication, and have rings run round them by technology companies.

We do need to think hard about giving systems volition. Yet equally we do want to build autonomous systems for particular applications (think of undersea or space exploration), which would provide some reasons for research into a system’s awareness of its environment. This could require research into providing certain types of autonomous drive, particularly where the delay between anyone who might control the system and the system itself is large. But rolling out such systems with volition, without careful safeguards, could indeed be dangerous. And what form might these safeguards take, given that governments are generally behind the times, and that these systems are produced by large independent technology companies primarily motivated by profit? This is a difficult question, and one to which I do not have an answer.

Off to AGI 2012 tomorrow

December 6, 2012

The 2012 Artificial General Intelligence conference beckons. Down to Oxford on the train, to present on Perceptual Time (see July post). At least I’ll be able to concentrate on just one thing at a time, instead of trying to herd cats, perform academic duties, and discuss the budget all at once. Sometimes I think I’d get more research done if I retired, and made the workshop in my garage a bit friendlier. I reckon I could maybe even build a prototype of the hearing aid I nearly patented in 2000. Still, between that and the novel microphone work that’s ongoing… maybe one day… But I digress. Artificial General Intelligence is the subject for this coming weekend, and it looks quite interesting. The last AGI meeting I was at was in Memphis, and had lots about the singularity, but perhaps that’s a little passé now. Looking forward to it, and to travelling by train rather than plane, even if it does take rather a long time to get there. But for now, I’d better go and get packed!