An apology
I’ve been really remiss: I haven’t updated this blog for more than two years. But perhaps today (Storm Gerrit outside, with wind and heavy rain) is a good day to get back to it.
Artificial Intelligence
I recently gave a talk on Artificial Intelligence (AI) to a group of retired academics (Stirling University Retired Staff Association, SURSA). As an academic, I worked on related matters for many years, and remain involved in AI. This was a non-technical talk, to academics from many disciplines. It went down well, and led to lots of discussion.
It also led to me talking to quite a variety of people about AI after the talk, as I felt more able to take a broad perspective on AI than I had as a researcher, where I necessarily concentrated on some tightly defined areas. This set me thinking more about the possible dangers of AI research in the near future.
What are the dangers of AI?
A great deal has been written about the existential dangers of AI: I’m less than convinced. Firstly, because AI, at least as matters stand, is only intelligent in certain senses. It lacks volition (or will) entirely, which to me means that it’s not about to take over or enslave the human population of the Earth. (I note that intelligence with volition, as is to be found in humans, has indeed taken over and enslaved parts of the animal and plant kingdoms).
Secondly, current AI systems are generally deep neural networks (convolutional networks for perception, and transformer-based large language models for generating text), sometimes combined with logical inference engines. They are built from components which, while not new in themselves, can now take advantage of major advances in computing technology, as well as the vast amount of digitised data available on the internet. Often they serve as a user-friendly gateway onto the WWW, enabling quite complex questions to be asked and answered directly, instead of searching the internet using a set of keywords. This is an intelligent user interface, but it is not intelligent in human terms.
Of course, such systems can replace humans in many tasks currently undertaken by well-educated people (including the chattering classes who write newspaper articles and web blogs!). Tasks like summarising data, researching past legal cases, or writing overviews of research in specific areas might become automatable. This continues a process that started with the spinning jenny and ran through the industrial revolution, in which human strength and some artisanal skills ceased to be a qualification for work. While this is certainly a problem, it is not an existential one. The major issue here is who holds the power: who (and how many) will benefit from these changes? That makes it a political issue.
I am more concerned about volition.
As matters stand this is not an area that is well understood. Can intelligence, volition and awareness be separated? Can volition be inserted into the types of AI system that we can currently build?
I don’t think this is possible with our current level of understanding, but there is a great deal of ongoing research in neuroscience and neurophysiology, and it is certainly not beyond imagination that this will lead to a much more scientifically grounded theory of mind. And volition could well come to be understood sufficiently to be implemented in AI systems.
Another possibility is that we will discover that volition and awareness are restricted to living systems: but are we trying to build synthetic living systems? Possibly, but this is an area that has huge ethical issues. Electronic systems (including computers) are definitely not alive in any sense, but the idea of interfacing electronics to cultured neural systems is one that has been around for quite some time (though the technical difficulties are considerable).
In conclusion: should we be worried – or how worried should we be?
There are many things that we can be worried about, ranging from nuclear war, to conventional land wars, to ecological catastrophes, to pandemics. Where might AI fit among these dangers? From an employment point of view, AI has the capacity to remove certain types of job, but equally to create novel ones that deal with data in a different way. I am not a great believer in the ability of governments, large or small, to influence technology. Generally, governments are well behind in technological sophistication, and have rings run round them by technology companies.
We do need to think hard about giving systems volition. Yet equally we do want to build autonomous systems for particular applications (think of undersea or space exploration), which provides a reason for research into making systems aware of their environment. It could also require research into providing certain types of autonomous drive, particularly where the delay between the system and anyone who might control it is large. Rolling out such systems with volition, but without careful safeguards, could indeed be dangerous. But what form might these safeguards take, given that governments are generally behind the times, and that these systems are produced by large independent technology companies primarily motivated by profit? This is a difficult question, and one to which I do not have an answer.
Tags: ai, artificial general intelligence, artificial intelligence, chatgpt, machine-learning, public lecture, technology
December 28, 2023 at 8:31 pm
Interesting. My own view is similar. We will take time to adapt to the new roles for humans as previous white-collar jobs are automated, but we’ve done this before from at least the beginning of the industrial age. For safeguards, I think something like Asimov’s laws of robotics is a starting point, but any general rules are likely to be fraught with unexpected consequences. That is a non-trivial problem.
December 28, 2023 at 11:18 pm
Definitely non-trivial. Asimov’s laws presuppose (i) a real understanding by the robot of its embedding in its world, and (ii) possibly some sort of (free?) will/volition in the robot, insofar as the possibility of disobeying these laws suggests volition. But I entirely agree that before we enable autonomous systems to take (non-trivial) actions, we need internal safeguards.