Hello everyone, and thank you for the really interesting discussions we had during my week of the course on minds, brains, and computers. There were some great discussions on the forum, and I wasn't able to reply to them all, but there were two points that I thought were worth following up, because people might want to read more about them.

The first concerns animal minds, which came up on the thread about intentionality. Just to clarify: yes, I am using aboutness to refer to what most philosophers would call intentionality. The reason I didn't use the word intentionality is that it comes with a lot of philosophical and conceptual baggage that I didn't necessarily want to bring in, such as the question of whether all mental states have intentionality. That's why I used the term aboutness for these talks.

Anyway, in these discussions there were some questions about whether animals have minds, and, if they do have minds, whether we can know. This is actually quite a vexed question in philosophy, and it's one of those nice questions that sits on the border between philosophy, psychology, and cognitive ethology. There is a school of philosophy that says, look, in order to have thoughts you have to have language, and animals, as far as we can tell, aren't language users. Yes, there are chimpanzees who can connect some words with objects in the world, but they don't appear to have anything like a complex grammar. So, on this view, until we have animals that can use language, those animals can't think.

Another school of philosophers, particularly more cognitive-science-based philosophers like Peter Carruthers, says this doesn't seem to be a very fair argument. Just because we can't express what's going on in an animal's mind, since we naturally have an anthropocentric, human viewpoint, that doesn't mean these animals aren't thinking about things. Say you have a dog barking up a tree: it has seen a squirrel run up the tree and is standing at the bottom barking. In our human terms, it seems to make sense to say that the dog thinks there's a squirrel up in the tree. But of course this only roughly captures what's going on in the dog's mind. For example, the dog presumably doesn't know that squirrels are mammals, or that squirrels typically have bushy tails. It doesn't have the rich concept of a squirrel that we humans have, and therefore the content of the dog's thought will be very different from ours. But we can infer from its behavior, for instance the way it changes its behavior if the squirrel moves from the tree it's barking up to a different tree, that the dog has a mental state which we can roughly approximate by saying that the dog thinks the squirrel is up the tree, even if that state need not map exactly onto the human elements of the thought.

For those of you who are interested in animal minds, I can highly recommend the website of the philosopher Peter Carruthers. He makes all his papers freely available, and he has a whole section on animal minds; some of those papers are introductory, so they might be nice to have a look at.

The second issue I wanted to bring up concerned the Turing test. Somebody said, well, really, the Turing test doesn't test whether a computer can think like a human; rather, it tests at what stage we as humans are willing to attribute conscious states to other things. And this is a really interesting question.
This is something I'm quite interested in, particularly from a developmental perspective. At what point do infants start thinking about other people as loci of conscious experience? At what point do they start to think that another person has a point of view on the world, a perspective? Truth be told, we don't know very much about this. So you are right, there are two questions here. First, what behavioral cues do humans pick up on in order to attribute conscious awareness to another thing? And second, are those cues really good cues to be picking up on in order to attribute consciousness to another thing, or are we just being misled? In other words, there is the more metaphysical question, if you like: does this thing have consciousness, regardless of our human intuitions about it? And there is a separate, empirical question: what are the behavioral cues that humans pick up on in the world such that they attribute conscious awareness to another thing? And then there's a methodological question: how should we measure consciousness, if not by our human intuitions as to whether something has consciousness or not? That's a big methodological question which requires further thought.

Anyway, I hope you enjoyed the lectures on philosophy of mind. I've certainly enjoyed reading the discussion forums. Keep on philosophizing, and thank you very much.