Hello everyone, and thank you for the really interesting discussions we had during my week of the course on minds, brains, and computers. There were some great discussions on the forum, and I'm not able to reply to them all, but there were two points that I thought were worth following up on, because people might want to read more about them.

The first concerns animal minds, which came up on the thread about intentionality. Just to clarify: yes, I am using "aboutness" to refer to what most philosophers would call intentionality. The reason I didn't use the word "intentionality" is that it comes with a lot of philosophical and conceptual baggage that I didn't necessarily want to bring in, such as the question of whether all mental states have intentionality. That's why I used the term "aboutness" in these talks. Anyway, in these discussions there were some questions about whether animals have minds and, if they do, whether we can know that they do. This is actually quite a vexed question in philosophy, and it's one of those nice questions on the border between philosophy, psychology, and cognitive ethology.

There is a school of philosophy that says: look, in order to have thoughts, you have to have language, and as far as we can tell animals aren't language users. Yes, there are chimpanzees who can connect some words with objects in the world, but they don't appear to have anything like a complex grammar. On this view, until we find animals that can genuinely use language, we shouldn't say that animals can think.

Another school of philosophers, particularly more cognitive-science-oriented philosophers like Peter Carruthers, says this doesn't seem to be a very fair argument. Just because we, from our naturally anthropocentric viewpoint, can't fully express what's going on in an animal's mind, that doesn't mean these animals aren't thinking about things. Say you have a dog barking up a tree: it has seen a squirrel run up the tree, and it's standing at the bottom barking. In human terms, it seems to make sense to say that the dog thinks there's a squirrel up the tree. But of course this only roughly captures what's going on in the dog's mind. For example, the dog probably doesn't know that squirrels are mammals, or that squirrels typically have bushy tails; it doesn't have the rich concept of a squirrel that we humans have, and therefore the content of its thought will be very different from ours. Still, we can infer from its behavior, for instance the way its behavior changes if the squirrel moves from the tree it's barking up to a different tree, that the dog has a mental state which roughly approximates what we would describe as the dog thinking that the squirrel is up the tree, even if that state doesn't map neatly onto the human version of that thought.

For those of you who are interested in animal minds, I can highly recommend the website of the philosopher Peter Carruthers. All of his papers are freely available there, and there is a whole section on animal minds; some of those papers are introductory, so they might be nice to have a look at.

The second issue I wanted to bring up concerned the Turing Test. Somebody pointed out that the Turing Test doesn't really test whether a computer can think like a human; rather, it tests at what stage we as humans are willing to attribute conscious states to other things. And this is a really interesting point.
This is something that I'm quite interested in, particularly from a developmental perspective. At what point do infants start thinking about other people as loci of conscious experience? At what point do they think: that person has a point of view on the world, a perspective? And, truth be told, we don't know very much about this.

So you're right, there are really two questions here. First, there's the empirical question: what behavioral cues do humans pick up on in order to attribute conscious awareness to another thing? Second, there's the more metaphysical question, if you like: are those cues really good cues to be relying on, or are we being misled? In other words, does this thing have consciousness regardless of our human intuitions about it? And then there's a methodological question: how should we measure consciousness, if not by our human intuitions about whether something is conscious? That's a big methodological question which requires further thought.

Anyway, I hope you enjoyed the lectures on philosophy of mind. I've certainly enjoyed reading the discussion forums, and keep on philosophizing. Thank you very much.