Group leader: Bob Kirk
When: Fortnightly, 11:00am-1:00pm on Wednesdays: April 15 and 29, May 13 and 27, June 10 and 24.
Venue: Admiral Rodney
Can machines think? Can they be intelligent? Might they even be conscious? Prompted by the enormous impact of AI, many people with no previous tendency to philosophise are puzzling over such questions. This course introduces the philosophy of mind, which provides a framework for thinking about them. Roughly, philosophy consists of some bright ideas and theories, some not-so-bright ones, and arguments pro and con, all flowing from attempts to deal with questions we cannot help raising. More and more answers are provided by the sciences, but some questions resist a scientific approach yet still seem important. They count as philosophical. For this course you will be expected to start with no knowledge of philosophy at all.
Topics for the sessions
1. The idea of minds, souls and spirits as things distinct from the body.
2. A contrasting idea: being conscious, having a mind, feelings and other mental attributes is nothing more than behaving and being disposed to behave in certain ways (a philosophical version of behaviourism).
3. Another idea: the mind is the brain (a version of materialism/physicalism). Yet another: mental states are ones which perform certain kinds of functions (functionalism).
4. Objections to physicalism and functionalism.
5. Can machines think?
6. Might some machines be conscious?
The aim is to give you some idea of the problems, and the point of the theories, and plenty of practice criticising arguments. We need to know what philosophers say, but it is more important to understand what problems they are dealing with and what can be said for and against their ideas. The sessions will run in a relaxed atmosphere where things are made as clear as possible and everyone has a chance to speak – if they wish. The sessions will not be lectures. Typically an introduction of fifteen minutes will be followed by discussion, then further exposition and further discussion, and so on. The sessions will build on what has gone before, so if you care about 5 and 6 you will benefit most if you take in 1-4 as well.
Possible reading (not compulsory!)
Ryle, Gilbert (1949), The Concept of Mind, Hutchinson (a readable and influential classic).
Warburton, Nigel (1992 and later), Philosophy: The Basics, Routledge (brisk, clear, reliable).
The Stanford Encyclopedia of Philosophy (plato.stanford.edu. The first paragraph of each entry is introductory and accessible).
The Internet Encyclopedia of Philosophy (https://iep.utm.edu).

René Descartes, 1596-1650
INTRODUCTION TO THE PHILOSOPHY OF MIND: SESSION I
The idea of minds, souls and spirits as things distinct from the body
Descartes was a dualist: he argued that minds and bodies are fundamentally different kinds of thing. For him, the mark of a mind is that it thinks; the mark of a body is that it is extended in space. He was also an interactionist: minds affect bodies and vice versa. Minds are immaterial and can exist apart from bodies.

He had two different arguments for his dualism. The first arises from his project of doubting everything that could be doubted, with a view to discovering foundations for certain knowledge. His first result was the ‘cogito’: I think therefore I am (cogito ergo sum). He concluded that he could not possibly doubt that he existed. On the other hand, he reasoned, he could consistently doubt the evidence of his senses, for he knew they sometimes deceived him. He could even doubt whether he had a body – there might be a malicious demon controlling his experiences. So:
(D1) I cannot consistently doubt that I exist.
(D2) I cannot consistently doubt that I am a thinking thing.
(D3) I can consistently doubt that my body exists.
(D4) If I can ‘conceive clearly and distinctly one thing without another’, then I am ‘certain that the one is distinct or different from the other’.
(D5) Therefore my body is not the same thing as myself.
(D6) Therefore I am a non-physical thinking thing.
Descartes’ second line of argument presupposes that matter cannot do what minds can: that machines cannot use language or behave appropriately in indefinitely many situations. But he assumed that the only way for a machine to produce a given pattern of behaviour was for that precise pattern to have been anticipated by its constructors and a special mechanism devised to produce it in response to a particular stimulus: a ‘reflex’. His reasoning is understandable, given that there were no computers in the 17th century.
Reading: Discourse on Method and the Meditations (Penguin etc.)

Elizabeth of Bohemia, 1618-80
If souls, minds or spirits are separate from the body, how do they manage without brains? If brains are not needed for thinking and feeling, why do we have them?
The belief that souls survive death has been around for thousands of years, but there has long been opposition to it. Notably, some Greek philosophers of the 5th century BC maintained that nothing exists but atoms and empty space.
INTRODUCTION TO THE PHILOSOPHY OF MIND: SESSION II
Mental states as behaviour
Gilbert Ryle in his classic The Concept of Mind ridiculed Cartesian dualism as the ‘myth of the ghost in the machine’. Consider how we know what other people are thinking or feeling. It’s because we can see and hear what they do: we know on the basis of their behaviour. Behaviourists of the philosophical kind urge that instead of following Descartes and conceiving of the mind as if it were some kind of thing, we should think of expressions such as ‘the mind’, ‘intelligence’, ‘understanding’, ‘emotion’, as ways of talking about people’s behaviour, and how they would behave in various situations.
Of course we normally suppose there is a big difference between behaviour on the one hand, and thoughts and feelings on the other. And obviously mental states are not just a matter of actual behaviour. But the notion of dispositions does useful work here. If a sugar-lump is in water, it dissolves. But some sugar-lumps never encounter a liquid: they are burnt, pulverized or otherwise destroyed; yet they would dissolve if they were to be put in water; they retain that disposition. Beliefs and desires offer plausible illustrations of behaviourism. They have the interesting feature of ‘intentionality’: they have content, they are about things, and some may be true or false. Behaviourism offers explanations of that feature. What is it to believe there are whales in the Atlantic? Well, if you ask me where there are whales, I am disposed to reply ‘in the Atlantic’, and so on. Or if I want a cup of coffee, then if I am near a coffee shop I might be disposed to go in. Similarly for intentions: if I intend to catch a bus I will go to the bus stop.
The dispositions don’t have to be simple. Consider a complex computer program like one for a rail itinerary. It doesn’t necessarily give the same output for the same input (unlike the systems Descartes assumed were the only kind a machine could instantiate). Typically, complex programs produce different outputs depending on the system’s internal state, which changes over time.
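The point can be sketched in a few lines of Python (a toy illustration, invented for this handout; the class and route names are made up). The same question gets different answers once the program’s internal state has changed – there is no fixed stimulus-response ‘reflex’ of the kind Descartes envisaged:

```python
# A toy "journey planner" whose reply to the same question depends on
# its internal state, which changes over time.
class Planner:
    def __init__(self):
        self.delays_reported = False  # internal state

    def report_delay(self):
        # Updating the internal state changes future outputs.
        self.delays_reported = True

    def ask(self, route):
        # Same input, different output, depending on the state.
        if self.delays_reported:
            return f"{route}: expect delays, allow extra time"
        return f"{route}: running on time"

p = Planner()
first = p.ask("London-York")   # before any delay is reported
p.report_delay()               # internal state changes
second = p.ask("London-York")  # same input, different output
```

The same identical input produces two different outputs, because what the system does depends not just on the stimulus but on what has happened to it before.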
Behaviourism may seem plausible for some kinds of mental state. But what about experiences? Take the case of pain. Damage to bodily tissues disposes one to wince, groan, or even scream, depending on how bad the damage is, and we are disposed to lose those dispositions when the pain stops.
The ‘perfect actor’ objection: one can simulate pain behaviour without being in pain. Reply: the simulator has different dispositions from someone genuinely in pain.
But behaviourism claims there is no more to pain than that: to have a toothache is simply to be disposed to say ‘Ow, that hurts!’ if someone touches the affected jaw; to avoid using that side of the mouth; to take aspirins; to visit the dentist, and so on. But we think of the pains as causing the behaviour – which would be nonsense if the pains were just those dispositions.

Gilbert Ryle, 1900-1976
INTRODUCTION TO THE PHILOSOPHY OF MIND: SESSION III
Is the mind the brain? Or is it whatever does the work of a brain?
The identity theory. A behaviouristic approach, then, has trouble with sensations. A different suggestion is the so-called identity theory: sensations and other experiences are brain processes. We know that brain damage can impair mental functioning and that the mental effects of drugs are put down to their effects on the transmission of electrochemical impulses among brain cells. Does that mean the mind is the brain?
Saying the mind is the same thing as the brain doesn’t explain what it is about the brain that matters, any more than saying that chess is a board game explains what chess is. And even if having a brain were sufficient for having a mind, it doesn’t appear to be necessary as well. Why should having a brain be the only way to have a mind? Why shouldn’t organisms different from us and all terrestrial creatures, and with different nervous systems, be minded? (The nervous systems of octopuses are unlike ours, yet these creatures are intelligent.) And we are by now well used to the idea that suitably programmed artificial systems without nerves may be minded. If those are genuine possibilities, we can’t just take for granted that minds are the same things as brains. In any case, it seems clear that if we think they are, that’s because we assume that what matters about brains is what they do, what their functions are.
Functionalism comes in several varieties, but the central thought is that to have a mind is to be a natural or artificial system in which certain functions, such as those involved in memory, perception and problem-solving, are performed. On this account, when we explain behaviour in terms of emotions, desires, beliefs, and so on we are alluding to states that perform different functions in the individual’s life. One key implication is that, like behaviourism, functionalism is indifferent to the question of what sort of stuff actually performs the relevant functions. As Hilary Putnam, a pioneering functionalist, put it, ‘we could be made of Swiss cheese and it wouldn’t matter’ – so long as the relevant functions are performed. (It’s a stretch to claim that they might be performed by cheese – though who can be sure, given modern technology?)

Hilary Putnam, 1926-2016
Behaviourism, the identity theory and functionalism encourage the view that the whole universe is purely physical (materialism or physicalism). The Greek thinker Democritus said, ‘By convention there is colour, by convention there is sweetness, by convention there is bitterness; but in truth there are atoms and empty space’.

Democritus c.460-370 BC
INTRODUCTION TO THE PHILOSOPHY OF MIND: SESSION IV
Difficulties for physicalism and functionalism
We have to admit that perception ... cannot be explained mechanically… If we imagine a machine whose construction ensures that it has thoughts, feelings, and perceptions, we can conceive it to be so enlarged, while keeping the same proportions, that we could enter it like a mill. On that supposition, when visiting it we shall find inside only components pushing one another, and never anything that could explain a perception. — Leibniz (1646–1716), Monadologie, §17.

(Functionalists will reply that you need to know what functions are involved in having ‘thoughts, feelings and perceptions’. Just staring at the machinery won’t tell you.)
Epiphenomenalism. Developments in the 19th century encouraged the idea that physics could explain all explicable physical events: that the physical world is ‘closed under causation’. But what about consciousness? Experiences seem unlike anything physical. Epiphenomenalists concluded that consciousness was non-physical and has no effects on the physical world: we are ‘conscious automata’.
Nobody has the slightest idea how anything material could be conscious. Nobody even knows what it would be like to have the slightest idea about how anything material could be conscious. So much for the philosophy of consciousness. – J A Fodor, ‘The Big Idea: Can there be a Science of Mind?’, Times Literary Supplement, 3 July 1992.
Zombies. Epiphenomenalism hints at a strange possibility: creatures exactly like us in all physical details and behaving exactly like us, yet not conscious – ‘philosophical’ (not folkloric) zombies. Of course natural laws rule that out. But if it were no more than ‘logically’ possible, physicalism would be false. There are strong reasons to think zombies are not possible, mainly to do with the causal links between consciousness and behaviour.
In ‘What is it Like to Be a Bat?’ Thomas Nagel maintains that the ‘subjective character of experience…is not analysable in terms of any explanatory system of functional states…’. We can’t tell what it is like to be so alien a creature as a bat. He thinks such knowledge lies beyond the reach of physicalism and functionalism. But distinguish between knowing what it is for there to be something it is like, and knowing what experiences are like.

Jackson’s Mary. Mary is a super-scientist with normal vision, kept in a colourless environment from birth. She knows all the physical facts about colour vision, but when she emerges from her grey world and sees colours for the first time, it seems she immediately acquires new information: she learns what it is like to see the blue sky, red tomatoes, and so on. In ‘Epiphenomenal Qualia’ Frank Jackson concludes that there is a kind of information beyond what the physical facts can yield – so that physicalism is false. Is that right?

INTRODUCTION TO THE PHILOSOPHY OF MIND: SESSION V
Can machines think?
We can start from some old ideas, still much debated.
The Turing Test. The first is from the great mathematician, logician, and pioneer of computer science, Alan Turing. Instead of tackling that question directly, he suggested the ‘imitation game’: discover by questioning within a time limit whether the tested system can make the tester think it is human. He said, ‘in about fifty years’ (from 1950) ‘it will be possible to program computers ... to play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning’. Some programs surely pass that test.

The Chinese Room argument. But is the test sound? Notice it only takes account of behaviour, ignoring how the behaviour is produced. John Searle’s Chinese Room argument aimed to prove that what goes on inside the system matters. Suppose a certain program enables a computer to understand Chinese, and that there are rule-books in English enabling Searle to do what the computer does. If he has these rules in a room, then when written Chinese is put through his letter-box, they should enable him (in time!) to put out acceptable Chinese responses – and to understand Chinese. But he insists he never gets to understand it; the characters remain meaningless to him. He concludes that because the computer does what he does, it doesn’t understand Chinese either. The two main responses to his argument are the Systems Reply and the Robot Reply; he has replied (inadequately I think) to both.

Brute force v. intelligence. Contrast two types of programs for playing board games. One, ‘brute force’, either anticipates every possible position and puts out a fixed response, or else follows an algorithm (e.g. the one by which you never lose at noughts and crosses). Such a program would surely not produce intelligence. The other type may be said to enable the system to weigh up the pros and cons of possible moves – seeming to give it a degree of intelligence.
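The contrast can be sketched in a few lines of Python (a toy illustration written for this handout, not any published program). The second style does not retrieve a stored response; it weighs up each candidate move with its own evaluation – here, crudely, whether the move wins at once:

```python
# Toy noughts-and-crosses helpers. A board is a 9-character string,
# squares 0-8, each 'X', 'O' or ' '.
LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    # Return 'X' or 'O' if a line is complete, else None.
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def score_move(board, move, player):
    # Weigh up one candidate move: 1 if it wins at once, 0 otherwise.
    # (A serious evaluator would also weigh threats, blocks, forks, etc.)
    trial = board[:move] + player + board[move+1:]
    return 1 if winner(trial) == player else 0

def choose_move(board, player):
    # Pick the best-scoring empty square - assessing pros and cons,
    # rather than looking the position up in a pre-stored table.
    empties = [i for i, sq in enumerate(board) if sq == ' ']
    return max(empties, key=lambda m: score_move(board, m, player))
```

On the board "XX OO    " (X on squares 0 and 1), `choose_move` picks square 2, completing X’s top line – not because anyone anticipated that position, but because the system’s own evaluation favoured it.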
Block’s machines. Ned Block describes a theoretically (not practically) possible brute-force system that produces intelligent-seeming responses to questions for, say, Turing’s five minutes. It has only ‘the intelligence of a toaster’, and he concludes that the right behaviour is not enough.

What matters? Suggestion: intelligence requires the system to work out its own behaviour using its own assessment of its situation. (Do LLMs do that?)
Block, N. (1981), ‘Psychologism and Behaviourism’;
Searle, J R (1980), ‘Minds, Brains, and Programs’;
Turing, A M (1950), ‘Computing Machinery and Intelligence’.
INTRODUCTION TO THE PHILOSOPHY OF MIND: SESSION VI
Might some machines be conscious?
Descartes thought animals are insentient machines: they don’t e.g. feel pain or pleasure. In L’homme Machine (1747) La Mettrie argued that we too are machines – in which case the answer to our question is yes. So let’s say a machine is any system controlled by a computer. Some people treat their chatbots – controlled by large language models (LLMs) – as conscious. And there are reasons to think LLMs can’t make machines conscious (see below). But does the same go for all possible machines? Some salient points:

LLMs. The behaviour of LLMs results from processes designed to produce outputs with the highest probability of being acceptable given the (likely very large) input. The probabilities are estimated from vast quantities of text, much of it scraped from the internet. But is the right behaviour enough? Doesn’t the nature of the internal processing matter?
What is that nature? ‘Conscious’ is a word from ‘folk’ psychology, by which we characterize and explain our behaviour and thoughts in terms of beliefs, desires, sensations, emotions, and so on. Folk psychology requires conscious subjects to be able to collect information via their sense organs: to perceive. And this information must be usable by the subject – so the perceiver must have its own goals in the light of which it monitors its behaviour. Do LLMs fit this requirement? Doubtful.
Does biology matter? Some people think computers are not made of the right sort of stuff. But does it matter what they are made of, provided they do the right things? It doesn’t matter for computers! Perhaps functionalism offers the best approach.
An unbridgeable gap. Experiences seem to be utterly different in kind from physical processes. And Leibniz was right that if there were a conscious machine, just staring at it would not by itself explain its consciousness (handout IV). But functionalism holds there is no more to consciousness than for the right functions to be performed – and clearly you would not necessarily learn the functions of a process by looking at it. The idea, then, is that if the right functions are performed, there is something it is like regardless of what performs them. There is indeed an unbridgeable gap between knowing what functions are being performed and knowing what it is like for the subject, but none (according to functionalism) between performance of the right functions and consciousness.
But what are the right functions? Rough suggestion: a system is conscious if sensory information is forced into it so as to act on its processes of interpretation, assessment, and decision making, regardless of the relevance of that information to the system’s own goals. (Of course vastly more could be said.)
What do you think, then, so far? Might a machine be conscious?