Posts Tagged ‘technology’

The Jamcorder and MIDI.

March 11, 2025

I just bought a Jamcorder, imported from the States. This little device connects to an electric piano using MIDI (via USB or a 5-pin DIN connector), but can still connect to a digital audio workstation (DAW) simultaneously (given the appropriate cables). It comes complete with the cables needed to connect to the piano. There’s an app that goes with it (also called Jamcorder) which works on iPhones, iPads and M1 to M4 Macintosh computers (and Android too, though I haven’t tested that).

I’m using it with my Roland electric piano, and Reaper, and have been using the software on a large iPad and two different (M1 and M4) Macs. (The screen on my iPhone is just too small for me to like using it!).

It records everything you play using MIDI. The files are small, and it comes with a 16 GByte memory card, which should be enough for a very long time! It’s clever enough to stop recording when you aren’t playing.

I’m still learning how to use the app properly, though it’s easy to use for simply replaying the music using MIDI. It will play it either on the device or on the electric piano. And it will save the MIDI file for later use, whether in a DAW or elsewhere.

What it brought home to me was the difference between a MIDI file (which records exactly what you play) and a music score. You don’t actually want the score to record exactly what you play! You end up with demisemiquavers (32nd notes, in the US) and very short rests, as well as seeing just how inexact the timing of the chords you play is! Some systems that translate MIDI to music score do exactly that (the one in Reaper, for example), while others try to be more musician-friendly. The issue is that the MIDI file doesn’t know (for example) the time signature, where bars begin and end, the key signature, and so on. To some extent these may be inferrable from the MIDI file, but not necessarily easily or correctly, particularly if the time or key signature changes – or if one intends to speed up or slow down in the music.
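To make the quantisation issue concrete, here is a minimal Python sketch (the tick values are invented, and 480 ticks per quarter note is simply a common MIDI resolution): snapping the notes of a humanly played chord to a grid either merges them onto one beat or leaves spurious offsets, depending on how fine the grid is.

```python
# Quantise raw MIDI note-on times (in ticks) to a grid, as a score
# transcriber must. 480 ticks per quarter note is a common MIDI
# resolution; the note times below are made up for illustration.
TICKS_PER_QUARTER = 480

def quantize(tick, grid_ticks):
    """Snap a tick position to the nearest grid line."""
    return round(tick / grid_ticks) * grid_ticks

# A "chord" played by a human: the three notes never land together.
raw_note_ons = [950, 963, 992]

# Quantising to a semiquaver (16th-note) grid merges them onto one beat...
grid = TICKS_PER_QUARTER // 4            # 120 ticks = semiquaver
print([quantize(t, grid) for t in raw_note_ons])       # [960, 960, 960]

# ...but a demisemiquaver (32nd-note) grid keeps a spurious offset.
fine_grid = TICKS_PER_QUARTER // 8       # 60 ticks
print([quantize(t, fine_grid) for t in raw_note_ons])  # [960, 960, 1020]
```

The choice of grid is exactly the judgement a musician-friendly transcriber has to make, and it cannot be made from the tick values alone.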

I need to investigate further what software would be best for transcription: there’s quite a lot of choice, some free, and some relatively expensive.

But the Jamcorder itself? Strongly recommended, and easy to use, at least in the basic way that was intended.

Revisiting Artificial Intelligence.

November 7, 2024

[This piece was inspired by reading about the University of Maryland’s new Artificial Intelligence Interdisciplinary Institute at Maryland.]

I’ve written and spoken quite a lot about AI, about how AI has changed its meaning since the term was introduced in 1956, or about technical issues, or about neuromorphic implementations, or even about the possible dangers of AI. But now, I want to write about the new directions that neurobiologically inspired AI is taking.

AI (as it currently stands, as of November 2024) is the result of the convergence of several technologies.

  • the internet itself (allowing free and easy interconnectivity)
  • lots of people and organisations using the internet, and posting vast amounts of data (text, sounds, graphics, images etc.) on it, making them freely available
  • many years of research into adaptive systems such as neural networks
  • and, of course, the electronic technologies that underlie cheap digital computing and communications, the enabling technology for all of the above.

This technical convergence is having huge social effects well outside of technology.

The industrial revolution changed what was valued in people: being strong was no longer important when machines could provide power. Are we coming to a point where intelligence will no longer be important when machines can provide it? And if so, what will matter? What do we actually want from AI? And who wants it, who will control it, and to what end?

It is, of course, not quite that simple. Strength is quite easy to define, but intelligence is much more difficult. Chess-playing machines don’t replace chess players (even if they could) because the interest is in the game itself: the existence of a perfect chess-playing machine would not stop people playing chess. And the nature of intelligence in chess-playing machines is not directly applicable to other problems. We currently have machines that learn statistical patterns and regularities from enormous volumes of data, and we use these in conjunction with generative systems which produce text without any understanding of it. These systems are trained, and this training uses many thousands of computers over a long period of time. This is so expensive that very few companies can undertake it. Accessing or using these trained machines is much less expensive, but relies on the trained system being made available.

Fortunately (or unfortunately) progress in AI is rapid. I am not referring to the use of AI, but to the underlying ideas. And these new ideas will revolutionise the way AI can be used. There is another convergence of technologies at work here: the convergence of neurophysiology and microelectronics. For many years researchers have worked to understand how neurons (brain cells) work and communicate, and this research is starting to produce results that could underpin a much better understanding of real intelligence – and hence allow better approximations of artificial intelligence. Current systems use a very basic concept of the neuron (in neural networks), but new ideas – specifically two-point neurons, modelled on pyramidal neocortical neurons – are arriving. These are much more powerful for deep mathematical/statistical reasons, and one result of this more powerful algorithm is that less computing power should be necessary to make them work. This could enable the democratisation of training AI systems.

Perhaps more importantly, some researchers suggest that the way in which these neurons co-operate (rather than compete) may be critical in making the whole system work. Bill Phillips’ book “The co-operative neuron” analyses how this may work in real brains, but it is only a matter of time before the concepts are implemented electronically. This has huge implications because for the first time we begin to understand the way in which the brain produces the mind. And our electronic technology may be able to recreate this. Such synthetic intelligence could be very different from the relatively unsophisticated systems that we currently call intelligent.
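For the curious, here is a toy Python sketch of the two-point idea: a unit with a basal (feedforward) input and an apical (contextual) input, where congruent context amplifies the response but cannot drive the unit on its own. The transfer function is purely illustrative – loosely in the spirit of the cooperative-context units Phillips and colleagues discuss, not any published model.

```python
import math

# Toy "two-point" neuron: basal (feedforward) drive r and apical
# (contextual) input c. Context modulates the response multiplicatively
# but contributes nothing when r is zero. Illustrative only.
def two_point(r, c):
    return 0.5 * r * (1.0 + math.exp(2.0 * r * c))

feedforward = 1.0
print(two_point(feedforward, 0.0))   # context absent: baseline response
print(two_point(feedforward, 1.0))   # congruent context: amplified
print(two_point(0.0, 1.0))           # context alone cannot fire the unit
```

The key property, and the contrast with the usual summing neuron, is that the two inputs play qualitatively different roles: one drives, the other modulates.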

This makes the development of interdisciplinary institutes like the one recently set up at the University of Maryland timely and critical. We urgently need the humanities here. These developments are too important to be left to the technologists.

Interfacing electronics and neural cultures: brain organoid (artificial?) intelligence.

October 15, 2024

There have been major advances in culturing neurons (and associated brain cells), and integrating them within electronic circuits. There’s an excellent review on Frontiers in Science. There are two possible aims for this work: understanding neural circuitry better (with many clinical applications), and integrating neural circuitry into artificial intelligence systems (because real brains are much better at certain types of everyday tasks).

Back in the early 2000s I worked (with Nhamo Mtetwa) in this area on an EPSRC project including Glasgow University (Prof Adam Curtis, Prof Chris Wilkinson, Brad Dworak) and Edinburgh University (Prof Alan Murray, Dr Mike Hunter, Dr Nikki Macleod). At that time Stirling was involved in data analysis, intended for data from multi-electrode array (MEA) based cultures of early rat neurons. But culturing them proved almost impossible for us, and at Stirling we worked on data from the Potter/Wagenaar lab at Georgia Tech. I gave a talk about the area at Georgia Tech in April 2003. But a great deal has changed since then: we were too early to the game (or perhaps just not inventive enough!).

Possibly the biggest change is in the source of the neural culture. The use of induced pluripotent stem cells (iPSC) from human skin samples, and the building of 3-D brain organoids (small cultures of neural and associated cells), means firstly that one need not sacrifice newborn rats, and secondly that the cultures are (at least in a sense) made from human-like neural cells. This, and improvements in the size and flexibility of electrodes (and faster processing of their signals), means that such cultures can much more reliably be built and instrumented. But how should this work be continued? There is a long discussion in the Frontiers paper, itself part of a larger discussion on Frontiers, including a very good discussion of ethical issues. Should we be seeking better understanding of the brain, improving our ability to deal with clinical issues (mental health and physical damage to neural tissue), and/or incorporating these organoids into AI systems?

It is clear that a better understanding of brains (and more generally nerve tissue) has clinical applications. It may raise philosophical issues as well: once we understand the connection between consciousness, awareness, cognition and neural tissue, we may need to re-jig our ideas of what makes an addictive personality, or of what makes people criminal, quite apart from the possibility of re-creating these in synthesised neural cultures. But that is (probably) a little further down the road.

Incorporating organoids into AI systems has attractions. While real brains run at much lower speeds, they are highly parallel and very energy efficient, more than compensating for this. However, neural tissue needs to be kept at a constant temperature and perfused with nutrients and water. While microfluidics have advanced a great deal, I reckon that these disadvantages may make putting such systems into consumer equipment unattractive. In addition, such systems have a limited lifespan compared to the (essentially infinite) lifespan of semiconductor based electronics. Further, there are still issues related to the longevity of the electronics/tissue connection. On the other hand, in the 1980’s I never expected to see hand-held devices with 64 bit processors and many gigabytes of memory, so one can never be sure!

Lastly, I want to return to the attempts we made more than 20 years ago. It seems to me that I have been trying to make scientific/technological advancements too early. Without iPSCs, without good microfluidics, creating and keeping neural cultures alive was very difficult, making instrumenting them just too difficult (at least for us). And before that, I was working on a binaural hearing aid that attempted to find the sound sources, and allow the user to select the one of interest using an iPad-like interface – but in the year 2000. Too early. Right now many researchers are refining Transformer-like AI systems, jumping on a fast-rolling bandwagon. Too late. Getting the timing just right is the really difficult trick!

Neuromorphic Systems: revisited

March 29, 2024

I’ve been interested in Neuromorphic Systems for a long time: I helped hold the first and the second European Workshops on Neuromorphic Systems at Stirling University (Scotland, UK) way back in the 1990s. I kept working in this area for some time, but then became Head of Department, and became more interested in Neuroinformatics (and was, for a time, the UK representative at the International Neuroinformatics Co-ordinating Forum). But the Medical Research Council pulled the plug on official UK membership, and much water has flowed under many bridges since then… Now I’m an Emeritus Professor (with the freedom that brings), and more importantly, there’s been a huge increase in interest in this overall area.

What is meant by Neuromorphic Systems has moved on. At the very beginning (following Carver Mead’s book) it often meant systems based on MOS transistors in the subthreshold domain, because of their exponential transfer function. But this specific meaning was widened to include more conventional analogue circuitry, and spiking systems as well. These days, the meaning has moved on some more. Here, I attempt to redefine neuromorphic systems, primarily to avoid the term becoming associated with all the different types of neural networks, and thus becoming more or less meaningless!

What is meant by (or rather, what do I mean by) Neuromorphic systems?

I consider that there are two main branches of Neuromorphic systems:

1: Hardware (or hardware and software) that models neural systems. Examples are

  • modelling ion channels,
  • modelling patches of active membrane,
  • modelling single neurons or neural microcircuits, and
  • modelling larger-scale aspects of a brain.

2: Neurobiologically inspired hardware (or hardware/software) for solving real problems, particularly sensory or cognitive problems. Examples are

  • auditory, visual, tactile (etc.) sensors designed for interpretation (rather than reproduction),
  • systems for processing sensory data (whether from neuromorphic sensors or other sources),
  • brain/computer interface systems processing real neural data.

One important aspect of these systems (whether implemented in analogue, mixed signal or digital domains) is real-time operation.

I have been trying to avoid neuromorphic systems becoming snowed under by all the other large-scale applications of neurally inspired systems, such as neural networks, reinforcement learning systems, and all the systems that process huge volumes of data off-line to build recognition and generative systems.

Why do this now?

There is renewed interest in neuromorphic systems for at least four reasons.

Firstly, although the large language models (GPTs) work extremely well, they only do so after being trained on extremely large volumes of data, and this training takes a very long time on a large number of processors. This means that training this type of AI system is only possible for those with large numbers of processors (Google and Microsoft, for example). Further, these systems are “in the cloud”, so that information has to be sent to them, and the results received. There is real interest in building stand-alone systems that sit “at the edge”, rather than “in the cloud”, and neuromorphic systems are one possible way of achieving this.

Secondly, there are advances in hardware, as well as in hardware design. Novel devices, specifically memristors, are being developed by many different groups, and are being integrated into existing digital designs. This is still difficult but is becoming commercially viable. Such devices make adaptable memory possible in analogue, mixed signal, and digital systems without either (relatively) large capacitors or complex digital circuitry.
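As an illustration of why such devices give adaptable memory, here is a toy simulation of a memristive element whose resistance depends on the charge that has passed through it (loosely after the HP linear-drift idea; all numbers here are invented, not a real device).

```python
# Toy memristor: resistance depends on an internal state w in [0, 1]
# that drifts in proportion to the charge passed through the device.
# Loosely after the HP linear-drift model; values are illustrative.
R_ON, R_OFF = 100.0, 16000.0

def resistance(state):
    """Interpolate between the low- and high-resistance extremes."""
    return state * R_ON + (1 - state) * R_OFF

def apply_current(state, current, dt=1e-3, mobility=10.0):
    """Drift the internal state in proportion to charge (I * dt)."""
    return min(1.0, max(0.0, state + mobility * current * dt))

w = 0.1
print(round(resistance(w)))        # high resistance initially: 14410
for _ in range(50):                # pass current; the device "learns"
    w = apply_current(w, 1.0)
print(round(resistance(w)))        # resistance (i.e. memory) changed: 6460
```

The stored state persists without power, which is exactly the property that removes the need for large capacitors or refresh circuitry.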

Thirdly, there is increasing interest in incorporating neuromorphics into robotic systems. This needs not only the stand-alone operation described above, but also effective real-time sensory systems that can enable the robotic system to co-exist with humans in real environments. There has been interest in neuromorphic cameras right from the start (there’s a chapter in Mead’s 1989 book on this), but newer systems, like those from Inivation, are now commercially available. There are ideas for neuromorphic microphones and olfactory sensors too, though real neuromorphic microphones are still difficult. The primary aim is sensors that work for interpretation, rather than reproduction.

Fourthly, there have been major advances in neuroscience and neurophysiology, leading to new ideas about how neurons and neural circuitry work. There are many different types of neuron, and our understanding of their operation (both singly and in local microcircuits) has moved beyond the earlier leaky integrate-and-fire neuron. It is still early days for implementing these relatively new ideas in electronics.
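For reference, the earlier leaky integrate-and-fire model mentioned above can be written in a few lines. This sketch uses simple Euler integration, and the parameter values are arbitrary round numbers chosen for illustration.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: membrane potential v
# leaks toward rest, integrates input current, and fires (then resets)
# on crossing a threshold. Euler integration; illustrative parameters.
def lif_spikes(input_current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    v = 0.0
    spikes = []
    for step, i_in in enumerate(input_current):
        v += dt * (-v / tau + i_in)      # leak toward 0, integrate input
        if v >= v_thresh:
            spikes.append(step)          # record the spike time...
            v = v_reset                  # ...and reset the membrane
    return spikes

# A constant supra-threshold drive produces a regular spike train.
print(lif_spikes([0.12] * 50))
```

Everything interesting in the newer neuron models – dendritic compartments, contextual inputs, local microcircuits – is precisely what this classic model leaves out.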

As a result, there are more researchers working in the neuromorphic area than ever before. At the same time, there is a much larger community working on neural networks, large language models, big data, and so on, and one aim of this blog article is to identify the Neuromorphic systems community.

We need to be able to meet up and share ideas (as well as taking part in large conferences that include aspects of neuromorphic systems, such as neural net conferences like NIPS and ICANN, and ISSCC and other chip design conferences). There are excellent workshops in the area (Telluride and Capocaccia), but I’d like to start a discussion on how we might meet up and share ideas on a less formal basis.

On redefining intelligence, and adding volition.

February 20, 2024

Every time we get near a machine that displays intelligence we redefine intelligence. By and large we really don’t like the idea that machines might become as intelligent as we humans are. Or think we are, at any rate!

So here, I’ll try and define a few tasks, and discuss what sorts of intelligence they might need.

Gaming

  • The ability to play noughts and crosses? (simple games)
  • The ability to play chess/go/etc. (perfect information games)
  • The ability to play poker/backgammon/cards (games of skill and chance) [advantage: purely cognitive, as we are not considering moving chess pieces (easier) or picking up cards off a green baize table cover (harder)].

Robot-based definitions:

  • The ability to find and pick up an object?
  • The ability to find and pick up an object, and then place it in the appropriate place
  • The ability to find and pick up an object, and then place it in the appropriate place in a disordered environment.
  • Given a problem, to be able to find objects, manipulate them, work on them, place them, in order to solve the problem.
  • Given a problem, to decide to solve the problem using available objects & tools, to be able to find objects, manipulate them, work on them, place them, in order to solve the problem.

More robotics …

  • The ability of a robot to follow a line on the ground
  • The ability to find the way to the door;
  • The ability to manoeuvre from one place to the door in a cluttered environment?
  • The ability to find the way to the door (in a cluttered environment), and then open it, and exit the room.
  • The ability to find the way to the door (in a cluttered environment), and then open it, and exit the room for a good reason.
  • The ability to find the way to the door (in a cluttered environment), and then open it, and exit the room of its own volition (!)
  • The ability to cross a considerable geographic distance (with a power source).
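As a sketch of the simplest item on the list above, line following reduces to a feedback controller. This toy proportional controller (the gain is chosen arbitrarily) steers against the measured lateral offset of the line, as reported by a hypothetical sensor.

```python
# Toy proportional line-follower: steer against the line's measured
# lateral offset. The sensor and gain are hypothetical, for illustration.
def steer(sensor_offset, gain=0.5):
    """Return a steering correction opposing the measured offset."""
    return -gain * sensor_offset

# Simulate a robot that has drifted off the line and corrects itself.
offset = 1.0
for _ in range(10):
    offset += steer(offset)      # each step halves the remaining error
print(round(offset, 4))          # → 0.001 (converging toward the line)
```

The contrast with the later items on the list is stark: this controller is a one-liner, while opening a door in a cluttered room still defeats most robots.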

And more down-to-Earth robotics…

  • The ability to be useful in a kitchen environment (like a sous chef)
  • The ability to be really useful in a kitchen environment (like a cook or a chef)
  • The ability to take a passive part in caring for a person.
  • The ability to take an active part in caring for a person.

Science/engineering problems

  • The ability to answer textual questions sensibly.
  • The ability to answer technical questions correctly (with reference to available information)
  • The ability to invent/create new solutions to technical problems. (hard to define, as novelty is often in the combination of the existing answers)

What do these graded problems tell us?

The problems range from what we now see as simple issues, to problems that robots (particularly narrow-AI systems/robots) can do, to much more difficult activities. Particularly in robotics, where the system interacts with the everyday world, the difficulties are much greater than in purely cognitive areas. But that situation is new, and really reflects the availability of huge amounts of computing power.

It suggests that the next big push is in the manipulation side, the part we humans tend to take for granted because these are less specifically human. Mobility, manipulating natural objects, navigating the world, finding food and shelter. We seem much nearer to solving the problems of cognition, solving abstract (or rather, abstracted) problems, rather than in problem solving in the practical sense. We need to think about the cerebellum as well as the cortex and neocortex.

Some of the problems go much further and require volition. This is different: a stronger version of intelligence that goes beyond the usual machine definition (though not beyond the human definition). AI (and AGI) is not currently able to manage this successfully. Yet there are goal-oriented planning systems (and have been for some time), at the currently less fashionable end of AI research. Once you mix these with capable systems (capable in the sense of acting within a real environment), you run the danger of a goal-oriented system performing acts pursuant to that goal that are dangerous in a very real sense.
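To be concrete about what “goal-oriented planning” means here, a minimal sketch: classical planners search for a sequence of actions that transforms the current state into a goal state. The world, actions and goal below are invented purely for illustration.

```python
from collections import deque

# Minimal goal-oriented planner of the classical kind: breadth-first
# search over states for the shortest action sequence reaching the goal.
def plan(start, goal, actions):
    """Return the shortest list of action names from start to goal."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, path = queue.popleft()
        if state == goal:
            return path
        for name, step in actions.items():
            nxt = step(state)
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [name]))
    return None

# A toy world: the robot's position on a line; the goal is position 3.
actions = {"left": lambda s: s - 1, "right": lambda s: s + 1}
print(plan(0, 3, actions))   # → ['right', 'right', 'right']
```

The danger described above arises when the abstract states and actions stop being toys and start being readings from, and commands to, a physically capable machine.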

It is one thing to envision an active caring robot cooking some vegetables for its client, and interacting with knives or kitchen equipment in a complex and self-directed way, and quite another to imagine a robotic soldier seeking out and eliminating the “enemy”. Yet if we were capable of doing the one, we would likely be capable of doing the other.

Artificial Intelligence and its Dangers.

December 27, 2023

An apology

I’ve been really remiss: I haven’t updated this blog for more than two years. But perhaps today (Storm Gerrit outside, with wind and heavy rain) is a good day to get back to it.

Artificial Intelligence

I recently gave a talk on Artificial Intelligence (AI) to a group of retired academics (Stirling University Retired Staff Association, SURSA). As an academic, I worked on related matters for many years, and remain involved in AI. This was a non-technical talk, to academics from many disciplines. It went down well, and led to lots of discussion.

It also led to me talking to quite a variety of people about AI after the talk, as I felt more able to take a broader perspective on AI than I had as a researcher, where I necessarily concentrated on some tightly defined areas. This led me to think more about the possible dangers of AI research in the near future.

What are the dangers of AI?

A great deal has been written about the existential dangers of AI: I’m less than convinced. Firstly, because AI, at least as matters stand, is only intelligent in certain senses. It lacks volition (or will) entirely, which to me means that it’s not about to take over or enslave the human population of the Earth. (I note that intelligence with volition, as is to be found in humans, has indeed taken over and enslaved parts of the animal and plant kingdoms).

Secondly, current AI systems are generally deep convolutional neural networks (DCNNs) mixed with systems for generating text, and sometimes logical inference engines. These are made up of a number of components which, while not new, can take advantage of major advances in computing technology, as well as the vast amount of digitised data available on the internet. Often they are used as a user-friendly gateway on to the WWW, enabling quite complex questions to be asked and answered, instead of searching the internet using a set of keywords. This is an intelligent user interface, but not intelligent in human terms.

Of course, such systems can replace humans in many tasks which currently are undertaken by well-educated people (including the chattering classes who write newspaper articles and web blogs!). Tasks like summarising data, researching past legal cases, or writing summaries of research in specific areas might become automatable. This continues a process started with the spinning jenny, and running through the industrial revolution where human strength and some artisanal skills ceased to be a qualification for work. While this is certainly a problem, it is not an existential one. The major issue here is who has the power: who (and how many) will benefit from these changes, which makes this a political issue.

I am more concerned about volition.

As matters stand this is not an area that is well understood. Can intelligence, volition and awareness be separated? Can volition be inserted into the types of AI system that we can currently build?

I don’t think this is possible with our current level of understanding, but there is a great deal of research ongoing into neuroscience and neurophysiology, and it is certainly not beyond imagination that this will lead to a much more scientifically grounded theory of mind. And volition could well be understood sufficiently to be implemented in AI systems.

Another possibility is that we will discover that volition and awareness are restricted to living systems: but are we trying to build synthetic living systems? Possibly, but this is an area that has huge ethical issues. Electronic systems (including computers) are definitely not alive in any sense, but the idea of interfacing electronics to cultured neural systems is one that has been around for quite some time (though the technical difficulties are considerable).

In conclusion: should we be worried – or how worried should we be?

There are many things that we can worry about, ranging from nuclear war, to everyday land wars, to ecological catastrophes, to pandemics. Where might AI and its dangers fit within these? From an employment point of view, AI has the capacity to remove certain types of job, but equally to create novel forms of job, dealing with data in a different way. I am not a great believer in the capability of governments, large or small, to influence technology. Generally, governments are well behind in technological sophistication, and have rings run round them by technology companies.

We do need to think hard about giving systems volition. Yet equally we do want to build autonomous systems for particular applications (think of undersea or space exploration), which would provide some reasons for research into awareness of the system’s environment. This could require research into providing certain types of autonomous drive, particularly where the delay between anyone who might control the system and the system itself is large. But rolling out such systems with volition, without careful safeguards, could indeed be dangerous. And what form might these safeguards take, given that governments are generally behind the times, and that these systems are produced by large independent technology companies primarily motivated by profit? This is a difficult question, and one to which I do not have an answer.